Jiepeng Fang, Yixin Lan, Jie Xiao · arXiv:2307.16131v3 · 2023-07-30 · http://arxiv.org/abs/2307.16131v3
# Lusztig sheaves and integrable highest weight modules
###### Abstract.
We consider the localization \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\) of Lusztig's sheaves for framed quivers, and define functors \(E_{i}^{(n)},F_{i}^{(n)},K_{i}^{\pm},n\in\mathbb{N},i\in I\) by the localizations. With these functors, the Grothendieck group of localizations realizes the irreducible integrable highest weight modules \(L(\Lambda)\) of quantum groups. Moreover, the nonzero simple perverse sheaves in localizations form the canonical bases of \(L(\Lambda)\). We also compare our realization (at \(v\to 1\)) with Nakajima's realization via quiver varieties and prove that the transition matrix between canonical bases and fundamental classes is upper triangular with diagonal entries all equal to \(\pm 1\).
Key words and phrases: perverse sheaves, quantum groups, integrable highest weight modules, Nakajima quiver varieties. 2000 Mathematics Subject Classification: 16G20, 17B37.
###### Contents
* 1 Introduction
* 2 Lusztig sheaves and quantized enveloping algebras
* 3 Realization of the integrable highest weight modules
* 4 Compare with Nakajima's realization
## 1. Introduction
Given a symmetric Cartan datum \((I,(-,-))\), one can associate to it an acyclic quiver \(Q=(I,H,\Omega)\) and define its Kac-Moody Lie algebra \(\mathfrak{g}\) and the quantized enveloping algebra (or quantum group) \(\mathbf{U}=\mathbf{U}_{v}(\mathfrak{g})\). In [12], [13] and [14], G. Lusztig considered the moduli space \(\mathbf{E}_{\mathbf{V},\Omega}\) of quiver representations and introduced his category \(\mathcal{Q}_{\mathbf{V}}\) of semisimple perverse sheaves (complexes). Perverse sheaves in \(\mathcal{Q}_{\mathbf{V}}\) are called Lusztig sheaves. Using Grothendieck's six operations, he defined the induction functor \(\mathbf{Ind}_{\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}}^{\mathbf{V}}\) and the restriction functor \(\mathbf{Res}_{\mathbf{T},\mathbf{W}}^{\mathbf{V}}\). Equipped with the induction and restriction functors, the Grothendieck group \(\mathcal{K}\) of the category \(\coprod_{\mathbf{V}}\mathcal{Q}_{\mathbf{V}}\) becomes a bialgebra, which is canonically isomorphic to the integral form \({}_{\mathcal{A}}\mathbf{U}^{+}\) (or \({}_{\mathcal{A}}\mathbf{U}^{-}\)) of the positive (or negative) part of the quantum group. (Here \(\mathcal{A}=\mathbb{Z}[v,v^{-1}]\).) Moreover, the set \(\mathcal{P}\) of simple Lusztig sheaves forms a basis of \({}_{\mathcal{A}}\mathbf{U}^{+}\), which Lusztig called the canonical basis. The canonical basis has many remarkable properties, such as integrality and positivity.
Given a dominant weight \(\Lambda\), one can define the irreducible highest weight module \(L(\Lambda)\). Even though Lusztig did not provide a categorification of \(L(\Lambda)\), he did construct its canonical basis. Indeed, if we identify \(\mathcal{K}\) with \({}_{\mathcal{A}}\mathbf{U}^{-}\) and consider the canonical map
\[\pi:\mathbf{U}^{-}\to\mathbf{U}^{-}/\sum_{i\in I}\mathbf{U}^{-}f_{i}^{\langle\Lambda,\alpha_{i}^{\vee}\rangle+1}\cong L(\Lambda),\]
then \(\{\pi([L])\neq 0|L\in\mathcal{P}\}\) forms a basis of \(L(\Lambda)\). However, a categorical realization of \(L(\Lambda)\) and its canonical basis is still expected.
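As a simple illustration (a standard rank one example, not taken from this paper), let \(\mathfrak{g}=\mathfrak{sl}_{2}\) and \(d=\langle\Lambda,\alpha_{i}^{\vee}\rangle\) for the unique vertex \(i\). The canonical basis of \(\mathbf{U}^{-}\) consists of the divided powers \(F^{(n)}=F^{n}/[n]!\), and
\[\pi(F^{(n)})=F^{(n)}v_{\Lambda}\neq 0\quad\text{if and only if}\quad 0\leqslant n\leqslant d,\]
so the nonzero images \(\{F^{(n)}v_{\Lambda}\,|\,0\leqslant n\leqslant d\}\) form the canonical basis of the \((d+1)\)-dimensional module \(L(\Lambda)\).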
H. Zheng made a breakthrough in his work [27]. He categorified the irreducible integrable highest weight modules and their tensor products by using classes of micro-local perverse sheaves \(\mathfrak{D}_{\overrightarrow{\omega}}\) on moduli stacks of framed quivers. Later, Y. Li pointed out in [11] that there is a vector bundle \(\pi_{\mathbf{W}}\) which relates Zheng's construction to Lusztig's canonical basis.
Compared with Lusztig's theory, another approach to categorifying the quantum group is to use projective representations of quiver Hecke algebras [9], [10], [22]. S.-J. Kang and M. Kashiwara considered the cyclotomic quiver Hecke algebra \(R^{\Lambda}\) in [7], which is a quotient of the quiver Hecke algebra. They defined functors \(F_{i}^{\Lambda},E_{i}^{\Lambda},i\in I\) on the category of modules over the cyclotomic quiver Hecke algebra, which satisfy the following relations
\[q_{i}^{-2}F_{i}^{\Lambda}E_{i}^{\Lambda}\oplus\bigoplus_{k=0}^{\langle h_{i},\Lambda\rangle-1}q_{i}^{2k}Id\cong E_{i}^{\Lambda}F_{i}^{\Lambda},\qquad\langle h_{i},\Lambda\rangle\geqslant 0,\]
\[q_{i}^{-2}F_{i}^{\Lambda}E_{i}^{\Lambda}\cong E_{i}^{\Lambda}F_{i}^{\Lambda}\oplus\bigoplus_{k=0}^{-\langle h_{i},\Lambda\rangle-1}q_{i}^{2k}Id,\qquad\langle h_{i},\Lambda\rangle\leqslant 0.\]
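At the level of Grothendieck groups (a remark we add for orientation, not a statement from [7]), the finite direct sums of grading shifts above evaluate to quantum integers: for \(m=\langle h_{i},\Lambda\rangle\geqslant 0\),
\[\sum_{k=0}^{m-1}q_{i}^{2k}=\frac{q_{i}^{2m}-1}{q_{i}^{2}-1}=q_{i}^{m-1}[m]_{q_{i}},\qquad[m]_{q_{i}}=\frac{q_{i}^{m}-q_{i}^{-m}}{q_{i}-q_{i}^{-1}},\]
which is, up to the overall shift \(q_{i}^{m-1}\), the eigenvalue of \(E_{i}F_{i}\) on a vector of weight \(\Lambda\) annihilated by \(E_{i}\).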
Then the Grothendieck group of projective modules becomes a \(\mathbf{U}\)-module and \([Proj(R^{\Lambda})]\cong{}_{\mathcal{A}}L(\Lambda)\). The key ingredient of their construction is the following exact sequence
\[0\rightarrow\bar{F}_{i}M\to F_{i}M\to F_{i}^{\Lambda}M\to 0,\]
see [7, Theorem 4.7], which categorifies the following equation
\[[e_{i},P]=\frac{K_{i}^{-1}e_{i}^{\prime}(P)-K_{i}e_{i}^{\prime\prime}(P)}{q_{ i}^{-1}-q_{i}}.\]
This provides a successful model to categorify \(L(\Lambda)\).
Inspired by H. Zheng's work [27], and motivated by some puzzles about his proof, we go back to Lusztig's theory and give a self-contained categorical realization of \(L(\Lambda)\) in the present paper. We define a certain localization \(\mathcal{L}_{\mathbf{V}}(\Lambda)=\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\) of Lusztig's category on the framed quiver. Here, \(\mathcal{N}_{\mathbf{V}}\) is a thick subcategory such that objects in \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\cap\mathcal{N}_{\mathbf{V}}\) correspond to elements in the left ideal \(\sum\limits_{i\in I}\mathbf{U}^{-}f_{i}^{\langle\Lambda,\alpha_{i}^{\vee}\rangle+1}\), hence the Grothendieck group of \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\) corresponds to the weight space of \(L(\Lambda)\) via Lusztig's categorification of \(\mathbf{U}^{-}\). We also define functors \(E_{i}^{(n)}\) and \(F_{i}^{(n)}\), which realize the operators \(E_{i}^{(n)}\) and \(F_{i}^{(n)}\) on \(L(\Lambda)\) respectively. In order to prove that the functor \(E_{i}^{(n)}\) is well defined between the localizations \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\), we introduce a new functor \(\mathcal{F}_{j}^{\vee}\) and study the commutation relations between \(\mathcal{F}_{j}^{\vee}\), \(F_{i}^{(n)}\) and \(E_{i}^{(n)}\); this is totally different from H. Zheng's methods in [27]. Our first main theorem is the following:
**Theorem 1.1**.: _With the action of linear operators induced by the functors \(E_{i}^{(n)},F_{i}^{(n)},K_{i}^{\pm},n\in\mathbb{N},i\in I\), the Grothendieck group \(\mathcal{K}_{0}(\Lambda)\) of \(\coprod_{\mathbf{V}}\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\) becomes an \({}_{\mathcal{A}}\mathbf{U}\)-module, and there exists a canonical isomorphism of \({}_{\mathcal{A}}\mathbf{U}\)-modules_
\[\varsigma^{\Lambda}:\mathcal{K}_{0}(\Lambda)\rightarrow{}_{\mathcal{A}}L( \Lambda).\]
_The morphism \(\varsigma^{\Lambda}\) sends the constant sheaf \([\overline{\mathbb{Q}}_{l}]=[L_{0}]\) on \(\mathbf{E}_{0,\mathbf{W},\hat{\Omega}}\) to the highest weight vector \(v_{\Lambda}\) in \({}_{\mathcal{A}}L(\Lambda)\). The Euler form induces a contravariant bilinear form on \(\mathcal{K}_{0}(\Lambda)\). Moreover, the set \(\{\varsigma^{\Lambda}([L])|L\) is a simple perverse sheaf in \(\mathcal{L}_{\mathbf{V}}(\Lambda)\}\) forms a bar-invariant and almost orthogonal \(\mathcal{A}\)-basis of \({}_{\mathcal{A}}L_{\mathbf{V}}(\Lambda)\), which is exactly the canonical basis of \(L(\Lambda)\)._
In particular, since the functor \(F_{i}^{(n)}\) is defined by Lusztig's induction functor, the \(\mathbf{U}^{-}\)-module structure of \(\mathcal{K}_{0}(\Lambda)\) is compatible with Lusztig's algebraic construction of \(L(\Lambda)\). Moreover, the vector bundle \(\pi_{\mathbf{W}}\) allows us to compare our construction with Lusztig's construction of the canonical basis directly, without relying on the uniqueness of the canonical basis.
A further advance is the comparison of our functor \(E_{i}\) with Lusztig's restriction functors. More precisely, we define a category \(\hat{\mathcal{Q}}_{\mathbf{V},\mathbf{W}}\) and functors \(\hat{\mathcal{R}}_{i}^{\Lambda}\) and \({}_{i}\hat{\mathcal{R}}^{\Lambda}\), which categorify the linear operators \(\frac{1}{v-v^{-1}}\bar{r}_{i}\) and \(\frac{v^{(i,\mathbf{V}^{\prime}\oplus\mathbf{W})}}{v-v^{-1}}{}_{i}\bar{r}:{}_{\mathcal{A}}\mathbf{U}^{-}\to\mathbf{U}^{-}\to L(\Lambda)\) respectively; we then obtain the following split exact sequence.
**Theorem 1.2**.: _For any object \(L\) of \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\), we have a split exact sequence in \(\hat{\mathcal{Q}}_{\mathbf{V}^{\prime},\mathbf{W}}/\mathcal{N}_{\mathbf{V}^{ \prime}}\)_
\[0\to{}_{i}\hat{\mathcal{R}}^{\Lambda}(L)\to\hat{\mathcal{R}}_{i}^{\Lambda}(L) \to E_{i}(L)\to 0.\]
Indeed, the theorem above is analogous to S.-J.Kang and M.Kashiwara's exact sequence in [7, Theorem 4.7] and categorifies the equation
\[E_{i}(x\cdot v_{\Lambda})=(v^{(i,|x|-i)-\langle\Lambda,\alpha_{i}^{\vee} \rangle}{}_{i}\bar{r}(x)\cdot v_{\Lambda}-v^{\langle\Lambda,\alpha_{i}^{\vee} \rangle}\bar{r}_{i}(x)\cdot v_{\Lambda})/(v^{-1}-v).\]
Recall that H. Nakajima provided a construction of the irreducible highest weight \(\mathfrak{g}\)-modules (denoted by \(L_{0}(\Lambda)\)) via Borel-Moore homology groups of the quiver varieties \(\mathfrak{M}(\nu,\omega)\) and \(\mathfrak{L}(\nu,\omega)\) in [16] and [17]. He defined operators \(E_{i},F_{i},i\in I\) by using Hecke correspondences and proved that, with these operators, \(\bigoplus\limits_{\nu}H_{top}(\mathfrak{L}(\nu,\omega))\) becomes a \(\mathbf{U}(\mathfrak{g})\)-module which is isomorphic to the irreducible integrable highest weight \(\mathfrak{g}\)-module \(L(\Lambda)\). (We denote this isomorphism by \(\varkappa^{\Lambda}\).) Moreover, the fundamental classes of the irreducible components of \(\mathfrak{L}(\nu,\omega)\) form a basis of \(L_{0}(\Lambda)\). Our last main result is the comparison between our sheaf realization (taking \(v\to 1\)) and Nakajima's construction via quiver varieties. We provide a correspondence \(\Phi^{\Lambda}\) between the set of simple objects in the localizations and the set of irreducible components of \(\mathfrak{L}(\nu,\omega)\). Indeed, Lusztig's key lemma induces a left graph of the canonical basis of \(L(\Lambda)\), which is isomorphic to the left graph of Nakajima's quiver varieties. As a corollary, these two different constructions share the same monomial basis. After defining orders \(\preceq\) and \(\preceq^{\prime}\) on these bases, we can state our last main theorem:
**Theorem 1.3**.: _The transition matrix from the canonical basis to the fundamental classes is upper triangular with diagonal entries all equal to \(\pm 1\). More precisely, if \(X\) is an irreducible component of \(\mathfrak{L}(\nu,\omega)\) and \([L]=\Phi^{\Lambda}(X)\), then we have_
\[\varsigma^{\Lambda}([L])=\varkappa^{\Lambda}(sgn(X)[X])+\sum_{X\preceq X^{ \prime}}c_{X^{\prime}}\varkappa^{\Lambda}(sgn(X^{\prime})[X^{\prime}])\]
_where \(c_{X^{\prime}}\in\mathbb{Q}\) are constants._
In section 2, we recall the construction of Lusztig's sheaves and the categorification of the positive part of the quantum group.
In section 3, we define the localization \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\) and the functors \(E_{i}^{(n)},F_{i}^{(n)},K_{i}^{\pm}\) to categorify \(L(\Lambda)\). We also compare the functor \(E_{i}^{(n)}\) for \(n=1\) with Lusztig's restriction functors, which categorify the derivative operators, and prove the split exact sequence in Theorem 1.2.
In section 4, we compare our construction at \(v\to 1\) with Nakajima's construction and prove the last main theorem.
As in [18], [19] and [27], a realization of tensor products of integrable highest weight modules via Lusztig sheaves is also expected. We have carried out this work in the preprint [4].
## 2. Lusztig sheaves and quantized enveloping algebras
In this section, we recall Lusztig's theory of semisimple perverse sheaves and refer to [13] for details.
### Induction functor and restriction functor
Given a symmetric Cartan datum \((I,(-,-))\), let \(\Gamma\) be the finite graph without loops associated to \((I,(-,-))\), where \(I\) is the set of vertices and \(H\) is the set of pairs consisting of an edge together with an orientation. More precisely, giving an edge \(h\) with an orientation is equivalent to giving \(h^{\prime},h^{\prime\prime}\in I\), and we adopt the notation \(h^{\prime}\xrightarrow{h}h^{\prime\prime}\). Let \(-:h\mapsto\bar{h}\) be the involution of \(H\) such that \(\bar{h}^{\prime}=h^{\prime\prime},\bar{h}^{\prime\prime}=h^{\prime}\) and \(\bar{h}\neq h\). An orientation of the graph \(\Gamma\) is a subset \(\Omega\subset H\) such that \(\Omega\cap\bar{\Omega}=\emptyset\) and \(\Omega\cup\bar{\Omega}=H\).
Let \(k=\overline{\mathbb{F}}_{q}\) be the algebraic closure of the finite field \(\mathbb{F}_{q}\). Given \(\nu\in\mathbb{N}[I]\), a subset \(\tilde{H}\subseteq H\) and an \(I\)-graded \(k\)-vector space \(\mathbf{V}\) of dimension vector \(|\mathbf{V}|=\nu\), we take
\[\mathbf{E}_{\mathbf{V}}=\bigoplus_{h\in H}\mathbf{Hom}(\mathbf{V}_{h^{\prime }},\mathbf{V}_{h^{\prime\prime}}),\]
\[\mathbf{E}_{\mathbf{V},\tilde{H}}=\bigoplus_{h\in\tilde{H}}\mathbf{Hom}( \mathbf{V}_{h^{\prime}},\mathbf{V}_{h^{\prime\prime}}).\]
In particular, for an orientation \(\Omega\), we have
\[\mathbf{E}_{\mathbf{V},\Omega}=\bigoplus_{h\in\Omega}\mathbf{Hom}(\mathbf{V}_ {h^{\prime}},\mathbf{V}_{h^{\prime\prime}}).\]
The algebraic group \(G_{\mathbf{V}}=\prod\limits_{i\in I}\mathbf{GL}(\mathbf{V}_{i})\) acts on \(\mathbf{E}_{\mathbf{V}},\mathbf{E}_{\mathbf{V},\tilde{H}}\) and \(\mathbf{E}_{\mathbf{V},\Omega}\) by \((g\cdot x)_{h}=g_{h^{\prime\prime}}x_{h}g_{h^{\prime}}^{-1}\). Let \(\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})\) be the \(G_{\mathbf{V}}\)-equivariant derived category of mixed sheaves on \(\mathbf{E}_{\mathbf{V},\Omega}\). For any \(n\in\mathbb{Z}\), we denote by \(\mathbf{\Sigma}^{n}\) the shift functor, \((\frac{n}{2})\) the Tate twist if \(n\) is even or the square root of the Tate twist if \(n\) is odd, and \([n]\) the composition \((\mathbf{\Sigma})^{n}(\frac{n}{2})\). The shift functors \(\mathbf{\Sigma}^{n}\) always appear together with \((\frac{n}{2})\). We say complexes \(A\) and \(B\) are isomorphic up to shifts, if \(A\) and \(B[n]\) are isomorphic for some \(n\in\mathbb{Z}\).
Given \(\nu^{\prime}+\nu^{\prime\prime}=\nu\in\mathbb{N}[I]\) and graded vector spaces \(\mathbf{V},\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}\) of dimension vectors \(\nu,\nu^{\prime},\nu^{\prime\prime}\) respectively, let \(\mathbf{E}^{\prime}_{\Omega}\) be the variety consisting of \((x,\tilde{\mathbf{W}},\rho_{1},\rho_{2})\), where \(x\in\mathbf{E}_{\mathbf{V},\Omega}\),\(\tilde{\mathbf{W}}\) is an \(I\)-graded \(x\)-stable subspace of \(\mathbf{V}\) of dimension vector \(\nu^{\prime\prime}\) and \(\rho_{1}:\mathbf{V}/\tilde{\mathbf{W}}\simeq\mathbf{V}^{\prime},\rho_{2}: \tilde{\mathbf{W}}\simeq\mathbf{V}^{\prime\prime}\) are linear isomorphisms. Here we say \(\tilde{\mathbf{W}}\) is \(x\)-stable if and only if \(x_{h}(\tilde{\mathbf{W}}_{h^{\prime}})\subset\tilde{\mathbf{W}}_{h^{\prime \prime}}\) for any \(h\in\Omega\). Let \(\mathbf{E}^{\prime\prime}_{\Omega}\) be the variety consisting of \((x,\tilde{\mathbf{W}})\) as above. Consider the following diagram
\[\mathbf{E}_{\mathbf{V}^{\prime},\Omega}\times\mathbf{E}_{\mathbf{V}^{\prime \prime},\Omega}\stackrel{{ p_{1}}}{{\leftarrow}}\mathbf{E}^{\prime}_{ \Omega}\stackrel{{ p_{2}}}{{\rightarrow}}\mathbf{E}^{\prime\prime}_{ \Omega}\stackrel{{ p_{3}}}{{\rightarrow}}\mathbf{E}_{\mathbf{V},\Omega}\]
where \(p_{1}(x,\tilde{\mathbf{W}},\rho_{1},\rho_{2})=(\rho_{1,*}(\bar{x}|_{\mathbf{V}/\tilde{\mathbf{W}}}),\rho_{2,*}(x|_{\tilde{\mathbf{W}}}))\), \(p_{2}(x,\tilde{\mathbf{W}},\rho_{1},\rho_{2})=(x,\tilde{\mathbf{W}})\) and \(p_{3}(x,\tilde{\mathbf{W}})=x\). Here \(\bar{x}|_{\mathbf{V}/\tilde{\mathbf{W}}}\) is the linear map induced by \(x\) on the quotient space \(\mathbf{V}/\tilde{\mathbf{W}}\), \(x|_{\tilde{\mathbf{W}}}\) is the restriction of \(x\) to the subspace \(\tilde{\mathbf{W}}\), and \(\rho_{1,*}(\bar{x}|_{\mathbf{V}/\tilde{\mathbf{W}}})=\rho_{1}(\bar{x}|_{\mathbf{V}/\tilde{\mathbf{W}}})\rho_{1}^{-1}\in\mathbf{E}_{\mathbf{V}^{\prime}}\), \(\rho_{2,*}(x|_{\tilde{\mathbf{W}}})=\rho_{2}(x|_{\tilde{\mathbf{W}}})\rho_{2}^{-1}\in\mathbf{E}_{\mathbf{V}^{\prime\prime}}\). Notice that \(p_{1}\) is smooth with connected fibers, \(p_{2}\) is a principal \(G_{\mathbf{V}^{\prime}}\times G_{\mathbf{V}^{\prime\prime}}\)-bundle and \(p_{3}\) is proper. Let \(d_{1}\) be the dimension of the fibers of \(p_{1}\) and \(d_{2}\) the dimension of the fibers of \(p_{2}\). Lusztig's induction functor is defined by
\[\mathbf{Ind}^{\mathbf{V}}_{\mathbf{V}^{\prime},\mathbf{V}^{\prime \prime}}:\mathcal{D}^{b}_{G_{\mathbf{V}^{\prime}}}(\mathbf{E}_{\mathbf{V}^{ \prime},\Omega})\times\mathcal{D}^{b}_{G_{\mathbf{V}^{\prime\prime}}}(\mathbf{E}_ {\mathbf{V}^{\prime\prime},\Omega})\rightarrow\mathcal{D}^{b}_{G_{\mathbf{V}}}( \mathbf{E}_{\mathbf{V},\Omega})\] \[\mathbf{Ind}^{\mathbf{V}}_{\mathbf{V}^{\prime},\mathbf{V}^{ \prime\prime}}(A\boxtimes B)=(p_{3})_{!}(p_{2})_{\flat}(p_{1})^{*}(A \boxtimes B)[d_{1}-d_{2}].\]
Here, for a principal \(G\)-bundle \(f:Y\to X\), the functor \(f_{\flat}\) is defined to be the inverse of \(f^{*}\), which gives an equivalence \(\mathcal{D}^{b}(X)\cong\mathcal{D}^{b}_{G}(Y)\). (See [1, Theorem 6.5.9] or [21, Section 1.3].)
We fix a decomposition \(\mathbf{T}\oplus\mathbf{W}=\mathbf{V}\) of graded vector space such that \(|\mathbf{T}|=\nu^{\prime}\) and \(|\mathbf{W}|=\nu^{\prime\prime}\), let \(F_{\Omega}\) be the closed subvariety of \(\mathbf{E}_{\mathbf{V},\Omega}\) consisting of \(x\) such that \(\mathbf{W}\) is \(x\)-stable. Consider the following diagram
\[\mathbf{E}_{\mathbf{T},\Omega}\times\mathbf{E}_{\mathbf{W},\Omega}\stackrel{{\kappa_{\Omega}}}{{\leftarrow}}F_{\Omega}\stackrel{{\iota_{\Omega}}}{{\rightarrow}}\mathbf{E}_{\mathbf{V},\Omega}\]
where \(\iota_{\Omega}\) is the natural embedding and \(\kappa_{\Omega}(x)=(\overline{x}|_{\mathbf{T}},x|_{\mathbf{W}})\in\mathbf{E}_{ \mathbf{T},\Omega}\times\mathbf{E}_{\mathbf{W},\Omega}\) for any \(x\in F_{\Omega}\). Notice that \(\kappa_{\Omega}\) is a vector bundle. Lusztig's restriction functor is defined by
\[\mathbf{Res}^{\mathbf{V}}_{\mathbf{T},\mathbf{W}}:\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})\rightarrow\mathcal{D}^{b}_{G_{\mathbf{T}}\times G_{\mathbf{W}}}(\mathbf{E}_{\mathbf{T},\Omega}\times\mathbf{E}_{\mathbf{W},\Omega})\] \[\mathbf{Res}^{\mathbf{V}}_{\mathbf{T},\mathbf{W}}(C)=(\kappa_{\Omega})_{!}(\iota_{\Omega})^{*}(C)[-\langle\nu^{\prime},\nu^{\prime\prime}\rangle],\]
where \(\langle\nu^{\prime},\nu^{\prime\prime}\rangle=\sum_{i\in I}\nu^{\prime}_{i}\nu^{ \prime\prime}_{i}-\sum_{h\in\Omega}\nu^{\prime}_{h^{\prime}}\nu^{\prime\prime}_ {h^{\prime\prime}}\) is the Euler form associated to \(Q\).
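For example, for the \(A_{2}\) quiver \(1\xrightarrow{h}2\) with \(\nu^{\prime}=(1,0)\) and \(\nu^{\prime\prime}=(0,1)\), the formula gives
\[\langle\nu^{\prime},\nu^{\prime\prime}\rangle=\sum_{i\in I}\nu^{\prime}_{i}\nu^{\prime\prime}_{i}-\sum_{h\in\Omega}\nu^{\prime}_{h^{\prime}}\nu^{\prime\prime}_{h^{\prime\prime}}=0-1=-1,\qquad\langle\nu^{\prime\prime},\nu^{\prime}\rangle=0-0=0,\]
so the shift appearing in \(\mathbf{Res}^{\mathbf{V}}_{\mathbf{T},\mathbf{W}}\) depends on the order of the two dimension vectors.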
We denote by \(\mathcal{S}_{|\mathbf{V}|}\) the set of sequences \(\underline{\nu}=(\nu^{1},\nu^{2},\cdots,\nu^{m})\) such that each \(\nu^{l}\) is of the form \((i_{l})^{a_{l}}\in\mathbb{N}[I]\) for some \(i_{l}\in I,a_{l}\in\mathbb{N}\) and \(\sum_{l=1}^{m}\nu^{l}=|\mathbf{V}|\). For any \(\underline{\nu}=(\nu^{1},\nu^{2},\cdots,\nu^{m})\in\mathcal{S}_{|\mathbf{V}|}\), the flag variety \(\mathcal{F}_{\underline{\nu},\Omega}\) is the variety consisting of pairs \((x,f)\), where \(x\in\mathbf{E}_{\mathbf{V},\Omega}\) and \(f=(0=\mathbf{V}^{m}\subseteq\mathbf{V}^{m-1}\subseteq\cdots\subseteq\mathbf{V}^{0}=\mathbf{V})\) is a flag of \(\mathbf{V}\) such that \(x(\mathbf{V}^{k})\subseteq\mathbf{V}^{k}\) and \(|\mathbf{V}^{k-1}/\mathbf{V}^{k}|=\nu^{k}\) for every \(1\leqslant k\leqslant m\).
The flag variety \(\mathcal{F}_{\underline{\nu},\Omega}\) is smooth and the natural projection \(\pi_{\underline{\nu},\Omega}:\mathcal{F}_{\underline{\nu},\Omega}\to\mathbf{E}_{\mathbf{V},\Omega}\) is proper. Then by the decomposition theorem in [2], the complex \(L_{\underline{\nu}}=(\pi_{\underline{\nu},\Omega})_{!}\overline{\mathbb{Q}}_{l}[\dim\mathcal{F}_{\underline{\nu},\Omega}]\) is a semisimple complex on \(\mathbf{E}_{\mathbf{V},\Omega}\), where \(\overline{\mathbb{Q}}_{l}\) is the constant sheaf on \(\mathcal{F}_{\underline{\nu},\Omega}\).
Let \(\mathcal{P}_{\mathbf{V},\Omega}\) be the full subcategory of \(\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})\) consisting of simple perverse sheaves appearing as direct summands of some \(L_{\underline{\nu}},\underline{\nu}\in\mathcal{S}_{|\mathbf{V}|}\) up to \([n]\) shifts, and let \(\mathcal{Q}_{\mathbf{V},\Omega}\) be the full subcategory of \(\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})\) consisting of direct sums of shifts of objects in \(\mathcal{P}_{\mathbf{V},\Omega}\). Objects in \(\mathcal{Q}_{\mathbf{V},\Omega}\) are called Lusztig sheaves in [21].
**Proposition 2.1**.: _[_12_, Lemma 3.2, Proposition 4.2]_ _For any \(\nu^{\prime}+\nu^{\prime\prime}=\nu\) and \(\underline{\nu}^{\prime}\in\mathcal{S}_{|\mathbf{V}^{\prime}|},\underline{\nu }^{\prime\prime}\in\mathcal{S}_{|\mathbf{V}^{\prime\prime}|}\),_
\[\mathbf{Ind}_{\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}}^{\mathbf{V}}(L_{\underline{\nu}^{\prime}}\boxtimes L_{\underline{\nu}^{\prime\prime}})=L_{\underline{\nu}^{\prime}\underline{\nu}^{\prime\prime}},\]
_where \(\underline{\nu}^{\prime}\underline{\nu}^{\prime\prime}=((i^{\prime}_{1})^{a^{\prime}_{1}},\cdots,(i^{\prime}_{m})^{a^{\prime}_{m}},(i^{\prime\prime}_{1})^{a^{\prime\prime}_{1}},\cdots,(i^{\prime\prime}_{n})^{a^{\prime\prime}_{n}})\) is the flag type obtained by concatenating \(\underline{\nu}^{\prime}=((i^{\prime}_{1})^{a^{\prime}_{1}},\cdots,(i^{\prime}_{m})^{a^{\prime}_{m}})\) and \(\underline{\nu}^{\prime\prime}=((i^{\prime\prime}_{1})^{a^{\prime\prime}_{1}},\cdots,(i^{\prime\prime}_{n})^{a^{\prime\prime}_{n}})\)._
_For \(\underline{\nu}\in\mathcal{S}_{|\mathbf{V}|}\),_
\[\mathbf{Res}_{\mathbf{T},\mathbf{W}}^{\mathbf{V}}(L_{\underline{\nu}})=\bigoplus_{\underline{\tau}+\underline{\omega}=\underline{\nu}}L_{\underline{\tau}}\boxtimes L_{\underline{\omega}}[M(\underline{\tau},\underline{\omega})],\]
_here the notation \(\underline{\tau}+\underline{\omega}=\underline{\nu}\) means that the flag types \(\underline{\tau}=((i_{1})^{b_{1}},(i_{2})^{b_{2}},\cdots,(i_{m})^{b_{m}})\), \(\underline{\omega}=((i_{1})^{c_{1}},(i_{2})^{c_{2}},\cdots,(i_{m})^{c_{m}})\) and \(\underline{\nu}=((i_{1})^{a_{1}},(i_{2})^{a_{2}},\cdots,(i_{m})^{a_{m}})\) satisfy \(b_{k}+c_{k}=a_{k}\) for every \(1\leqslant k\leqslant m\), and the integer \(M(\underline{\tau},\underline{\omega})\) is given by the following formula_
\[M(\underline{\tau},\underline{\omega})=-\sum_{h\in H,l^{\prime}<l}(\tau^{l^{\prime}}_{h^{\prime}}\omega^{l}_{h^{\prime\prime}}+\tau^{l^{\prime}}_{h^{\prime\prime}}\omega^{l}_{h^{\prime}})+\sum_{h\in H}(\dim\mathbf{T}_{h^{\prime}}\dim\mathbf{W}_{h^{\prime\prime}}+\dim\mathbf{T}_{h^{\prime\prime}}\dim\mathbf{W}_{h^{\prime}})-\sum_{i\in I,l<l^{\prime}}\tau^{l^{\prime}}_{i}\omega^{l}_{i}+\sum_{i\in I,l>l^{\prime}}\tau^{l^{\prime}}_{i}\omega^{l}_{i}-\sum_{i\in I}\dim\mathbf{T}_{i}\dim\mathbf{W}_{i}.\]
As a corollary, the induction and restriction functors can be restricted to
\[\mathbf{Ind}_{\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}}^{ \mathbf{V}}:\mathcal{Q}_{\mathbf{V}^{\prime}}\times\mathcal{Q}_{\mathbf{V}^{ \prime\prime}}\to\mathcal{Q}_{\mathbf{V}},\] \[\mathbf{Res}_{\mathbf{T},\mathbf{W}}^{\mathbf{V}}:\mathcal{Q}_{ \mathbf{V}}\to\mathcal{Q}_{\mathbf{T}}\boxtimes\mathcal{Q}_{\mathbf{W}}.\]
**Remark 2.2**.: _Let \(\mathcal{D}^{b,ss}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})\) be the full subcategory of \(\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})\) consisting of semisimple complexes, then the induction functor and restriction functor also restrict to_
\[\mathbf{Ind}_{\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}}^{ \mathbf{V}}:\mathcal{D}^{b,ss}_{G_{\mathbf{V}^{\prime}}}(\mathbf{E}_{\mathbf{V}^{ \prime},\Omega})\times\mathcal{D}^{b,ss}_{G_{\mathbf{V}^{\prime\prime}}}(\mathbf{E}_{ \mathbf{V}^{\prime\prime},\Omega})\to\mathcal{D}^{b,ss}_{G_{\mathbf{V}}}(\mathbf{E}_{ \mathbf{V},\Omega}),\] \[\mathbf{Res}_{\mathbf{T},\mathbf{W}}^{\mathbf{V}}:\mathcal{D}^{b,ss }_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})\to\mathcal{D}^{b,ss}_{G_{ \mathbf{T}}\times G_{\mathbf{W}}}(\mathbf{E}_{\mathbf{T},\Omega}\times\mathbf{E}_{ \mathbf{W},\Omega}).\]
Let \(\mathcal{K}_{\mathbf{V},\Omega}\) be the Grothendieck group of \(\mathcal{Q}_{\mathbf{V},\Omega}\) and \(\mathcal{K}_{\Omega}=\bigoplus\limits_{\mathbf{V}}\mathcal{K}_{\mathbf{V},\Omega}\), which has an \(\mathcal{A}\)-module structure given by
\[v[L]=[L[1]].\]
**Theorem 2.3**.: _[_12_, Theorem 10.17]_ _With the induction and restriction functors, the Grothendieck group \(\mathcal{K}\) becomes a bialgebra, and is canonically isomorphic to the (integral form of) positive part of the quantized enveloping algebra \({}_{\mathcal{A}}\mathbf{U}^{+}\):_
\[\varsigma:[L_{i^{(p)}}]\mapsto E^{(p)}_{i},\]
_where \(L_{i^{(p)}}\) is the constant sheaf on \(\mathbf{E}_{\mathbf{V},\Omega}\) with \(|\mathbf{V}|=pi\). Moreover, the images of the simple perverse sheaves in \(\mathcal{P}_{\mathbf{V}}\) form an \(\mathcal{A}\)-basis of \({}_{\mathcal{A}}\mathbf{U}^{+}_{\mathbf{V}}\), which is called the canonical basis. The canonical basis is bar-invariant and has positivity._
### Fourier-Deligne transformation
We fix a nontrivial character \(\mathbb{F}_{q}\to\bar{\mathbb{Q}}_{l}^{*}\). This character defines an Artin-Schreier local system of rank \(1\) on \(k\).
For two orientations \(\Omega,\Omega^{\prime}\), we define \(T:\mathbf{E}_{\mathbf{V},\Omega\cup\Omega^{\prime}}\to k\) by \(T(x)=\sum\limits_{h\in\Omega\setminus\Omega^{\prime}}tr(x_{h}x_{\bar{h}})\). Then the inverse image of the Artin-Schreier local system under \(T\) is a well-defined \(G_{\mathbf{V}}\)-equivariant local system of rank \(1\) on \(\mathbf{E}_{\mathbf{V},\Omega\cup\Omega^{\prime}}\), denoted by \(\mathcal{L}_{T}\).
Consider the following diagram
\[\mathbf{E}_{\mathbf{V},\Omega}\stackrel{{\delta}}{{\leftarrow}} \mathbf{E}_{\mathbf{V},\Omega\cup\Omega^{\prime}}\stackrel{{\delta ^{\prime}}}{{\rightarrow}}\mathbf{E}_{\mathbf{V},\Omega^{\prime}}\]
where \(\delta,\delta^{\prime}\) are the forgetting maps defined by
\[\begin{array}{c}\delta((x_{h})_{h\in\Omega\cup\Omega^{\prime}})=((x_{h})_{ h\in\Omega}),\\ \delta^{\prime}((x_{h})_{h\in\Omega\cup\Omega^{\prime}})=((x_{h})_{h\in\Omega ^{\prime}}).\end{array}\]
Lusztig defined the Fourier-Deligne transformation for quivers to be the functor
\[\begin{array}{c}\mathcal{F}_{\Omega,\Omega^{\prime}}:\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\Omega})\to\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\Omega^{\prime}})\\ \mathcal{F}_{\Omega,\Omega^{\prime}}(L)=\delta_{!}^{\prime}(\delta^{*}(L)\otimes\mathcal{L}_{T})[D],\end{array}\]
here \(D=\sum\limits_{h\in\Omega\setminus\Omega^{\prime}}\dim\mathbf{V}_{h^{\prime} }\dim\mathbf{V}_{h^{\prime\prime}}\).
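In the simplest case (an illustration under the standard properties of the Fourier-Deligne transform, not a statement from [12]), take the \(A_{2}\) quiver with \(\Omega=\{1\to 2\}\), \(\Omega^{\prime}=\{2\to 1\}\) and \(|\mathbf{V}|=(1,1)\). Then
\[\mathbf{E}_{\mathbf{V},\Omega}\cong\mathbf{E}_{\mathbf{V},\Omega^{\prime}}\cong k,\qquad\mathbf{E}_{\mathbf{V},\Omega\cup\Omega^{\prime}}\cong k^{2},\qquad T(x)=x_{h}x_{\bar{h}},\qquad D=1,\]
and \(\mathcal{F}_{\Omega,\Omega^{\prime}}\) is the classical one-dimensional Fourier-Deligne transform, which interchanges (up to shift and twist) the constant sheaf on \(k\) and the skyscraper sheaf at the origin.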
**Proposition 2.4**.: _[_12_, Theorem 5.4]_ _With the notations above, for any semisimple perverse sheaves \(L_{1}\) and \(L_{2}\),_
\[\mathcal{F}_{\Omega,\Omega^{\prime}}(\mathbf{Ind}_{\mathbf{V}^{\prime}, \mathbf{V}^{\prime\prime}}^{\mathbf{V}}(L_{1}\boxtimes L_{2}))\cong\mathbf{ Ind}_{\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}}^{\mathbf{V}}(\mathcal{F}_{ \Omega,\Omega^{\prime}}(L_{1})\boxtimes\mathcal{F}_{\Omega,\Omega^{\prime}}( L_{2})).\]
**Corollary 2.5**.: _[_12_, Proposition 10.14]_ _The functor \(\mathcal{F}_{\Omega,\Omega^{\prime}}\) induces an algebra isomorphism between \(\mathcal{K}_{\Omega}\) and \(\mathcal{K}_{\Omega^{\prime}}\)._
**Corollary 2.6**.: _[_12_, Corollary 5.6]_ _The functor \(\mathcal{F}_{\Omega,\Omega^{\prime}}\) induces an equivalence of categories \(\mathcal{Q}_{\mathbf{V},\Omega}\cong\mathcal{Q}_{\mathbf{V},\Omega^{\prime}}\) and a bijection \(\eta_{\Omega,\Omega^{\prime}}:\mathcal{P}_{\Omega}\to\mathcal{P}_{\Omega^{ \prime}}\). Moreover, for orientations \(\Omega,\Omega^{\prime},\Omega^{\prime\prime}\), we have \(\eta_{\Omega^{\prime},\Omega^{\prime\prime}}\eta_{\Omega,\Omega^{\prime}}= \eta_{\Omega,\Omega^{\prime\prime}}\)._
**Remark 2.7**.: _Indeed, the Fourier-Deligne transformation gives an equivalence between the bounded derived categories \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\Omega})\) and \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\Omega^{\prime}})\). (See the proof of [21, Theorem 3.13].)_
With two corollaries above, we denote \(\mathcal{K}_{\Omega}\) by \(\mathcal{K}\) and \(\mathcal{P}_{\mathbf{V},\Omega}\) by \(\mathcal{P}_{\mathbf{V}}\) respectively, if there is no ambiguity.
### Analysis at sink
Fix \(i\in I\) and an orientation \(\Omega\) such that \(i\) is a sink. For any \(p\in\mathbb{N}\), we define \(\mathbf{E}_{\mathbf{V},i,p}\) to be the locally closed subset of \(\mathbf{E}_{\mathbf{V},\Omega}\) consisting of those \(x\) such that \(\operatorname{codim}_{\mathbf{V}_{i}}(\operatorname{Im}\sum\limits_{h\in\Omega,h^{\prime\prime}=i}x_{h})=p\). Then \(\mathbf{E}_{\mathbf{V},\Omega}\) has a partition \(\mathbf{E}_{\mathbf{V},\Omega}=\bigcup\limits_{p}\mathbf{E}_{\mathbf{V},i,p}\), and the union \(\mathbf{E}_{\mathbf{V},i,\geqslant p}=\bigcup\limits_{p^{\prime}\geqslant p}\mathbf{E}_{\mathbf{V},i,p^{\prime}}\) is a closed subset of \(\mathbf{E}_{\mathbf{V},\Omega}\).
Given a simple perverse sheaf \(L\), there exists a unique integer \(t\) such that \(\operatorname{supp}(L)\subseteq\mathbf{E}_{\mathbf{V},i,\geqslant t}\) but \(\operatorname{supp}(L)\nsubseteq\mathbf{E}_{\mathbf{V},i,\geqslant t+1}\) and we set \(t_{i}(L)=t\). Notice that \(t_{i}(L)\leqslant\nu_{i}\).
The following lemma is the key lemma [12, Lemma 6.4] in Lusztig's categorification theory. Indeed, Lusztig's proof works for all simple perverse sheaves, not only for the simple objects in \(\mathcal{P}_{\mathbf{V}}\).
**Lemma 2.8**.: _With the notation above, fix \(0\leqslant t\leqslant\nu_{i}\) and assume \(|\mathbf{T}|=|\mathbf{V}^{\prime}|=ti\)._
_(1) Let \(L\in\mathcal{D}_{G_{\mathbf{V}}}^{b,ss}(\mathbf{E}_{\mathbf{V},\Omega})\) be a simple perverse sheaf such that \(t_{i}(L)=t\), then \(\mathbf{Res}_{\mathbf{T},\mathbf{W}}^{\mathbf{V}}(L)\) is a direct sum of finitely many summands of the form \(K^{\prime}[f^{\prime}]\) for various simple perverse sheaves \(K^{\prime}\in\mathcal{D}^{b,ss}_{G_{\mathbf{W}}}(\mathbf{E}_{\mathbf{W},\Omega})\) and \(f^{\prime}\in\mathbb{Z}\). Moreover, exactly one of these summands, denoted by \(K[f]\), satisfies \(t_{i}(K)=0\) and \(f=0\), and the others satisfy \(t_{i}(K^{\prime})>0\). If \(L\in\mathcal{P}_{\mathbf{V}}\), then those \(K^{\prime}\) are also Lusztig's sheaves._
_(2) Let \(K\in\mathcal{D}^{b,ss}_{G_{\mathbf{V}^{\prime\prime}}}(\mathbf{E}_{\mathbf{V}^{\prime\prime},\Omega})\) be a simple perverse sheaf such that \(t_{i}(K)=0\), then \(\mathbf{Ind}^{\mathbf{V}}_{\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}}(\bar{\mathbb{Q}}_{l}\boxtimes K)\) is a direct sum of finitely many summands of the form \(L^{\prime}[g^{\prime}]\) for various simple perverse sheaves \(L^{\prime}\in\mathcal{D}^{b,ss}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})\) and \(g^{\prime}\in\mathbb{Z}\). Moreover, exactly one of these summands, denoted by \(L[g]\), satisfies \(t_{i}(L)=t\) and \(g=0\) and the others satisfy \(t_{i}(L^{\prime})>t\). If \(K\in\mathcal{P}_{\mathbf{V}^{\prime\prime}}\), then those \(L^{\prime}\) are also Lusztig's sheaves._
_(3) There is a bijection \(\pi_{i,t}\) between the set \(\{K\) is a simple perverse sheaf in \(\mathcal{D}^{b,ss}_{G_{\mathbf{V}^{\prime\prime}}}(\mathbf{E}_{\mathbf{V}^{ \prime\prime},\Omega})|t_{i}(K)=0\}\) and the set \(\{L\) is a simple perverse sheaf in \(\mathcal{D}^{b,ss}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})|t_{i}(L)=t\}\), which is induced by the decompositions of the direct sums above. Moreover, the bijection above also restricts to a bijection between the set \(\{K\in\mathcal{P}_{\mathbf{V}^{\prime\prime}}|t_{i}(K)=0\}\) and the set \(\{L\in\mathcal{P}_{\mathbf{V}}|t_{i}(L)=t\}\)._
If \(|\mathbf{V}^{\prime}|=ri,|\mathbf{V}^{\prime\prime}|=|\mathbf{V}|-ri\), we denote \(\mathbf{V}^{\prime}\) by \(\mathbf{V}^{\prime}_{ri}\) and \(\mathbf{V}^{\prime\prime}\) by \(\mathbf{V}^{\prime\prime}_{\nu-ri}\). For an orientation \(\Omega^{\prime}\) and a simple perverse sheaf \(L\), we define \(s_{i}(L)\) to be the largest integer \(r\) such that there exists a semisimple complex \(L^{\prime}\) with \(L\) isomorphic to a shift of a direct summand of \(\mathbf{Ind}^{\mathbf{V}}_{\mathbf{V}^{\prime}_{ri},\mathbf{V}^{\prime\prime}_{\nu-ri}}(\bar{\mathbb{Q}}_{l}\boxtimes L^{\prime})\). Notice that the definition of \(s_{i}(L)\) does not depend on the choice of \(\Omega^{\prime}\) by Proposition 2.4. Lusztig's proof of Proposition 6.6 in [12] can also be generalized to \(\mathcal{D}^{b,ss}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega^{\prime}})\) as follows.
**Proposition 2.9**.: _Let \(L\in\mathcal{D}^{b,ss}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega^{\prime}})\) be a simple perverse sheaf and \(s_{i}(L)=r\)._
_(1) There exist semisimple complexes \(L^{\prime}_{r^{\prime}}\in\mathcal{D}^{b,ss}_{G_{\mathbf{V}^{\prime\prime}_{\nu-r^{\prime}i}}}(\mathbf{E}_{\mathbf{V}^{\prime\prime}_{\nu-r^{\prime}i},\Omega^{\prime}})\) for \(r^{\prime}>s_{i}(L)\) and semisimple complexes \(L^{\prime\prime}_{r^{\prime}}\in\mathcal{D}^{b,ss}_{G_{\mathbf{V}^{\prime\prime}_{\nu-r^{\prime}i}}}(\mathbf{E}_{\mathbf{V}^{\prime\prime}_{\nu-r^{\prime}i},\Omega^{\prime}})\) for \(r^{\prime}\geqslant s_{i}(L)\) such that_
\[L\oplus\bigoplus_{r^{\prime}>s_{i}(L)}\mathbf{Ind}^{\mathbf{V}}_{\mathbf{V}^{ \prime}_{r^{\prime}i},\mathbf{V}^{\prime\prime}_{\nu-r^{\prime}i}}(\bar{ \mathbb{Q}}_{l}\boxtimes L^{\prime}_{r^{\prime}})\cong\bigoplus_{r^{\prime} \geqslant s_{i}(L)}\mathbf{Ind}^{\mathbf{V}}_{\mathbf{V}^{\prime}_{r^{\prime}i},\mathbf{V}^{\prime\prime}_{\nu-r^{\prime}i}}(\bar{\mathbb{Q}}_{l}\boxtimes L ^{\prime\prime}_{r^{\prime}}).\]
_Moreover, if \(L\in\mathcal{P}_{\mathbf{V},\Omega^{\prime}}\), then those \(L^{\prime}_{r^{\prime}}\) and \(L^{\prime\prime}_{r^{\prime}}\) can be chosen in \(\mathcal{Q}_{\mathbf{V}^{\prime\prime}_{\nu-r^{\prime}i},\Omega^{\prime}}\)._
_(2) \(s_{i}(L)=t_{i}(L)\), if \(i\) is a sink in \(\Omega^{\prime}\)._
### Analysis at source
Fix \(i\in I\) and an orientation \(\Omega\) such that \(i\) is a source. We define \(\mathbf{E}^{p}_{\mathbf{V},i}\) to be the subset of \(\mathbf{E}_{\mathbf{V},\Omega}\) consisting of \(x\) such that \(\dim(\operatorname{Ker}\bigoplus\limits_{h\in\Omega,h^{\prime}=i}x_{h})=p\). Then \(\mathbf{E}_{\mathbf{V},\Omega}\) has a partition \(\mathbf{E}_{\mathbf{V},\Omega}=\bigcup\limits_{p}\mathbf{E}^{p}_{\mathbf{V},i}\), and the union \(\mathbf{E}^{\geqslant p}_{\mathbf{V},i}=\bigcup\limits_{p^{\prime}\geqslant p}\mathbf{E}^{p^{\prime}}_{\mathbf{V},i}\) is a closed subset.
Given a simple perverse sheaf \(L\), there exists a unique integer \(t\) such that \(\text{supp}(L)\subseteq\mathbf{E}^{\geqslant t}_{\mathbf{V},i}\) but \(\text{supp}(L)\nsubseteq\mathbf{E}^{\geqslant t+1}_{\mathbf{V},i}\) and we write \(t^{*}_{i}(L)=t\). Notice that \(t^{*}_{i}(L)\leqslant\nu_{i}\).
The following lemma is dual to Lemma 2.8.
**Lemma 2.10**.: _With the notation above, fix \(0\leqslant t\leqslant\nu_{i}\) and assume \(|\mathbf{W}|=|\mathbf{V}^{\prime\prime}|=ti\)._
_(1) Let \(L\in\mathcal{D}^{b,ss}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})\) be a simple perverse sheaf such that \(t^{*}_{i}(L)=t\), then \(\mathbf{Res}^{\mathbf{V}}_{\mathbf{T},\mathbf{W}}(L)\in\mathcal{D}^{b,ss}_{G_{ \mathbf{T}}}(\mathbf{E}_{\mathbf{T},\Omega})\) is a direct sum of finitely many summands of the form \(K^{\prime}[f^{\prime}]\) for various simple perverse sheaves \(K^{\prime}\) and \(f^{\prime}\in\mathbb{Z}\). Moreover, exactly one of these summands, denoted by \(K[f]\), satisfies \(t^{*}_{i}(K)=0\) and \(f=0\) and the others satisfy \(t^{*}_{i}(K^{\prime})>0\). If \(L\in\mathcal{P}_{\mathbf{V}}\), then those \(K^{\prime}\) are also Lusztig's sheaves._
_(2) Let \(K\in\mathcal{D}^{b,ss}_{G_{\mathbf{V}^{\prime}}}(\mathbf{E}_{\mathbf{V}^{ \prime},\Omega})\) be a simple perverse sheaf such that \(t^{*}_{i}(K)=0\), then \(\mathbf{Ind}^{\mathbf{V}}_{\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}}(K \boxtimes\bar{\mathbb{Q}}_{l})\) is a direct sum of finitely many summands of the form \(L^{\prime}[g^{\prime}]\) for various simple perverse sheaves \(L^{\prime}\in\mathcal{D}^{b,ss}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})\) and \(g^{\prime}\in\mathbb{Z}\). Moreover, exactly one of these summands, denoted by \(L[g]\), satisfies \(t^{*}_{i}(L)=t\) and \(g=0\) and the others satisfy \(t^{*}_{i}(L^{\prime})>t\). If \(K\in\mathcal{P}_{\mathbf{V}^{\prime}}\), then those \(L^{\prime}\) are also Lusztig's sheaves._
_(3) There is a bijection \(\pi_{i,t}^{*}\) between the set \(\{K\) is a simple perverse sheaf in \(\mathcal{D}^{b,ss}_{G_{\mathbf{V}^{\prime}}}(\mathbf{E}_{\mathbf{V}^{\prime}, \Omega})|t^{*}_{i}(K)=0\}\) and the set \(\{L\) is a simple perverse sheaf in \(\mathcal{D}^{b,ss}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega})|t^{*}_{i}(L )=t\}\), which is induced by the decompositions of the direct sums above. Moreover, the bijection above also restricts to a bijection between the set \(\{K\in\mathcal{P}_{\mathbf{V}^{\prime}}|t^{*}_{i}(K)=0\}\) and the set \(\{L\in\mathcal{P}_{\mathbf{V}}|t^{*}_{i}(L)=t\}\)._
In this subsection, if \(|\mathbf{V}^{\prime}|=|\mathbf{V}|-ri\) and \(|\mathbf{V}^{\prime\prime}|=ri\), we denote \(\mathbf{V}^{\prime}\) by \(\mathbf{V}^{\prime}_{\nu-ri}\) and \(\mathbf{V}^{\prime\prime}\) by \(\mathbf{V}^{\prime\prime}_{ri}\). For an orientation \(\Omega^{\prime}\) and a simple perverse sheaf \(L\), we define \(s^{*}_{i}(L)\) to be the largest integer \(r\) such that there exists a semisimple complex \(L^{\prime}\) with \(L\) isomorphic to a shift of a direct summand of \(\mathbf{Ind}^{\mathbf{V}}_{\mathbf{V}^{\prime}_{\nu-ri},\mathbf{V}^{\prime\prime}_{ri}}(L^{\prime}\boxtimes\bar{\mathbb{Q}}_{l})\). The following proposition is dual to Proposition 2.9.
**Proposition 2.11**.: _Let \(L\in\mathcal{D}^{b,ss}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\Omega^{\prime}})\) be a simple perverse sheaf with \(s^{*}_{i}(L)=r\)._
_(1) There exist semisimple complexes \(L^{\prime}_{r^{\prime}}\in\mathcal{D}^{b,ss}_{G_{\mathbf{V}^{\prime}_{\nu-r^{\prime}i}}}(\mathbf{E}_{\mathbf{V}^{\prime}_{\nu-r^{\prime}i},\Omega^{\prime}})\) for \(r^{\prime}>s^{*}_{i}(L)\) and semisimple complexes \(L^{\prime\prime}_{r^{\prime}}\in\mathcal{D}^{b,ss}_{G_{\mathbf{V}^{\prime}_{\nu-r^{\prime}i}}}(\mathbf{E}_{\mathbf{V}^{\prime}_{\nu-r^{\prime}i},\Omega^{\prime}})\) for \(r^{\prime}\geqslant s^{*}_{i}(L)\) such that_
\[L\oplus\bigoplus_{r^{\prime}>s^{*}_{i}(L)}\mathbf{Ind}^{\mathbf{V}}_{\mathbf{V}^{\prime}_{\nu-r^{\prime}i},\mathbf{V}^{\prime\prime}_{r^{\prime}i}}(L^{\prime}_{r^{\prime}}\boxtimes\bar{\mathbb{Q}}_{l})\cong\bigoplus_{r^{\prime}\geqslant s^{*}_{i}(L)}\mathbf{Ind}^{\mathbf{V}}_{\mathbf{V}^{\prime}_{\nu-r^{\prime}i},\mathbf{V}^{\prime\prime}_{r^{\prime}i}}(L^{\prime\prime}_{r^{\prime}}\boxtimes\bar{\mathbb{Q}}_{l}).\]
_Moreover, if \(L\in\mathcal{P}_{\mathbf{V},\Omega^{\prime}}\), then \(L^{\prime}_{r^{\prime}}\) and \(L^{\prime\prime}_{r^{\prime}}\) can be chosen in \(\mathcal{Q}_{\mathbf{V}^{\prime}_{\nu-r^{\prime}i},\Omega^{\prime}}\)._
_(2) \(s^{*}_{i}(L)=t^{*}_{i}(L)\), if \(i\) is a source in \(\Omega^{\prime}\)._
## 3. Realization of the integrable highest weight modules
Given a symmetric Cartan datum \((I,(-,-))\), we denote by \(\alpha^{\vee}_{i}\) the simple coroot for \(i\in I\). In this section, we fix a dominant weight \(\Lambda\) and set \(d_{i}=\langle\Lambda,\alpha^{\vee}_{i}\rangle\in\mathbb{N}\) for \(i\in I\).
Consider the framed quiver \(\hat{Q}=(I\cup\hat{I},\hat{H},\hat{\Omega})\), where \(\hat{I}=\{\hat{i}|i\in I\}\), \(\hat{H}=H\cup\{i\to\hat{i},\hat{i}\to i|i\in I\}\) and \(\hat{\Omega}=\Omega\cup\{i\to\hat{i}|i\in I\}\). Take an \(\hat{I}\)-graded space \(\mathbf{W}\) such that \(\dim\mathbf{W}_{\hat{i}}=d_{i}\) for any \(i\in I\) and set
\[\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}}=\mathbf{E}_{\mathbf{V},\Omega }\oplus\bigoplus_{i\in I}\mathbf{Hom}(\mathbf{V}_{i},\mathbf{W}_{\hat{i}}).\]
The algebraic group \(G_{\mathbf{V}}\) acts naturally on \(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}}\) and there is a natural projection
\[\pi_{\mathbf{W}}:\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}}\to\mathbf{E }_{\mathbf{V},\Omega},\]
which is a \(G_{\mathbf{V}}\)-equivariant trivial vector bundle.
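As a minimal example (our own illustration), let \(Q\) be the quiver with a single vertex \(i\) and no arrows, and take a dominant weight \(\Lambda\) with \(d_{i}=\langle\Lambda,\alpha_{i}^{\vee}\rangle=d\). Then the framed quiver \(\hat{Q}\) is simply \(i\to\hat{i}\) with \(\dim\mathbf{W}_{\hat{i}}=d\), and
\[\mathbf{E}_{\mathbf{V},\Omega}=\{0\},\qquad\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}}=\mathbf{Hom}(\mathbf{V}_{i},\mathbf{W}_{\hat{i}}),\]
with \(G_{\mathbf{V}}=\mathbf{GL}(\mathbf{V}_{i})\) acting by precomposition and \(\pi_{\mathbf{W}}\) the projection to a point. We refer back to this example below.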
Let \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\) be the full subcategory of \(\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})\) consisting of objects of the form \((\pi_{\mathbf{W}})^{*}(L)\) for some object \(L\) in \(\mathcal{Q}_{\mathbf{V}}\). In particular, the set of simple objects in \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\) is in natural bijection with \(\mathcal{P}_{\mathbf{V}}\) via \((\pi_{\mathbf{W}})^{*}[\operatorname{rank}(\pi_{\mathbf{W}})]\), since \((\pi_{\mathbf{W}})^{*}\) is fully faithful. The following proposition follows from an observation of Y. Li in [11].
**Proposition 3.1**.: _Fix any order \(i_{1},i_{2},\cdots,i_{n}\) of \(I\), and let \(\underline{d}\) be the flag type of \(\mathbf{W}\) given by \(\underline{d}=((\hat{i}_{1})^{d_{i_{1}}},(\hat{i}_{2})^{d_{i_{2}}},\cdots,(\hat{i}_{n})^{d_{i_{n}}})\). Then for any flag type \(\underline{\nu}\) of \(\mathbf{V}\), we have \(L_{\underline{\nu}\underline{d}}\cong(\pi_{\mathbf{W}})^{*}(L_{\underline{\nu}})[\sum\limits_{i\in I}\nu_{i}d_{i}]\). In particular, \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\) is the full subcategory consisting of direct sums of shifted summands of \(L_{\underline{\nu}\underline{d}}\) for all flag types \(\underline{\nu}\)._
Proof.: By definition of the induction functor, one can easily check that
\[(\pi_{\mathbf{W}})^{*}[\sum\limits_{i\in I}\nu_{i}d_{i}]\cong\mathbf{Ind}^{ \mathbf{V}\oplus\mathbf{W}}_{\mathbf{V},\mathbf{W}}(-\boxtimes L_{\underline{d}}).\]
By Proposition 2.1, we get the proof.
### Localizations of \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\)
For any \(i\in I\), we choose an orientation \(\Omega^{i}\) of \(Q\) such that \(i\) is a source in \(\Omega^{i}\); then \(i\) is also a source in the orientation \(\hat{\Omega}^{i}\) of the framed quiver \(\hat{Q}\). Regarding \(\hat{Q}\) as a new quiver, by Section 2.4 there is a partition \(\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{\geq 1}\cup\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{0}=\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}^{i}}\), where \(\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{0}\) is the open subset
\[\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{0}=\{x\in\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}^{i}}\,|\,\dim\operatorname{Ker}(\bigoplus_{h\in\hat{\Omega}^{i},h^{\prime}=i}x_{h}:\mathbf{V}_{i}\to\mathbf{W}_{\hat{i}}\oplus\bigoplus_{h\in\Omega^{i},h^{\prime}=i}\mathbf{V}_{h^{\prime\prime}})=0\},\]
and \(\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{\geq 1}\) is its complement. Denote the open embedding of \(\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{0}\) by \(j_{\mathbf{V},i}\).
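In the single vertex example above (continuing our illustration), \(\mathbf{E}^{0}_{\mathbf{V},\mathbf{W},i}\) is the open subset of injective maps, namely
\[\mathbf{E}^{0}_{\mathbf{V},\mathbf{W},i}=\{f\in\mathbf{Hom}(\mathbf{V}_{i},\mathbf{W}_{\hat{i}})\,|\,f\ \text{injective}\}=\emptyset\quad\text{whenever}\ \nu_{i}>d_{i},\]
so in this example the quotient by \(\mathcal{N}_{\mathbf{V},i}\) kills exactly the dimension vectors whose weight spaces vanish in \(L(\Lambda)\), in line with the left ideal \(\sum_{i\in I}\mathbf{U}^{-}f_{i}^{\langle\Lambda,\alpha_{i}^{\vee}\rangle+1}\) from the introduction.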
Following [27], let \(\mathcal{N}_{\mathbf{V},i}\) be the full subcategory of \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}^{i}})\) consisting of objects whose supports are contained in \(\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{\geq 1}\), then \(\mathcal{N}_{\mathbf{V},i}\) is a thick subcategory. We can see that the Verdier quotient \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}^{i}})/\mathcal{N}_{\mathbf{V},i}\) is a triangulated category, with a natural perverse \(t\)-structure induced from the perverse \(t\)-structure of \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}^{i}})\).
Similarly, given an orientation \(\Omega\) of \(Q\), let \(\hat{\Omega}\) be the associated orientation of \(\hat{Q}\). Let \(\mathcal{N}_{\mathbf{V}}\) be the thick subcategory of \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}})\) generated by objects in \(\mathcal{F}_{\hat{\Omega}^{i},\hat{\Omega}}(\mathcal{N}_{\mathbf{V},i}),i\in I\), then we can also define the Verdier quotient \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}})/\mathcal{N}_{\mathbf{V}}\). It is also a triangulated category, with a natural perverse \(t\)-structure. If there is no ambiguity, we omit \(\mathcal{F}_{\hat{\Omega},\hat{\Omega}^{i}}\) and denote \(\mathcal{F}_{\hat{\Omega}^{i},\hat{\Omega}}(\mathcal{N}_{\mathbf{V},i})\) by \(\mathcal{N}_{\mathbf{V},i}\).
**Definition 3.2**.: _(a) For any \(i\in I\) and an orientation \(\Omega^{i}\) such that \(i\) is a source in \(\Omega^{i}\), define the localizations of \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\) and \(\mathcal{D}_{G_{\mathbf{V}}}^{b,ss}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}^{i}})\) at \(i\) to be the full subcategories of \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}^{i}})/\mathcal{N}_{\mathbf{V},i}\) consisting of objects which are isomorphic to objects of \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\) and \(\mathcal{D}_{G_{\mathbf{V}}}^{b,ss}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}^{i}})\) respectively. Denote them respectively by \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V},i}\) and \(\mathcal{D}_{G_{\mathbf{V}}}^{b,ss}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}^{i}})/\mathcal{N}_{\mathbf{V},i}\). (b)For an orientation \(\Omega\) of \(Q\), define the global localizations of \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\) and \(\mathcal{D}_{G_{\mathbf{V}}}^{b,ss}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}})\) to be the full subcategories of \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}})/\mathcal{N}_{\mathbf{V}}\) consisting of objects which are isomorphic to objects of \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\) and \(\mathcal{D}_{G_{\mathbf{V}}}^{b,ss}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}})\) respectively. Denote them by \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\) and \(\mathcal{D}_{G_{\mathbf{V}}}^{b,ss}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}})/\mathcal{N}_{\mathbf{V}}\)._
**Remark 3.3**.: _Notice that if \(\tilde{\Omega}^{i}\) is another orientation such that \(i\) is a source in \(\tilde{\Omega}^{i}\), then by definition \(\tilde{\Omega}^{i}\) and \(\Omega^{i}\) determine the same \(\mathcal{N}_{\mathbf{V},i}\) up to Fourier-Deligne transformations. In particular, the localizations defined above are independent of the choices of those \(\tilde{\Omega}^{i}\)._
For any open embedding \(j:U\to X\), the middle extension functor
\[j_{!*}:Perv(U)\to Perv(X)\]
can be naturally extended to objects of the semisimple category. (But it is not a functor, since it cannot be defined on morphisms.) More precisely, for any direct sum of shifts of simple perverse sheaves \(L=\bigoplus K[n]\), we set
\[j_{!*}(L)=\bigoplus j_{!*}(K)[n].\]
**Lemma 3.4**.: _For any semisimple complex \(L\) on \(\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{0}\), we have \((j_{\mathbf{V},i})_{!*}(L)\cong(j_{\mathbf{V},i})_{!}(L)\) in \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}^{i}})/\mathcal{N}_{\mathbf{V},i}\). In particular, the functors \((j_{\mathbf{V},i})_{!}\) and \((j_{\mathbf{V},i})^{*}\) restrict to mutually quasi-inverse equivalences of categories_
\[(j_{\mathbf{V},i})^{*}(\mathcal{Q}_{\mathbf{V},\mathbf{W}})\simeq\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V},i},\qquad(j_{\mathbf{V},i})^{*}(\mathcal{D}_{G_{\mathbf{V}}}^{b,ss}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}^{i}}))\simeq\mathcal{D}_{G_{\mathbf{V}}}^{b,ss}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}^{i}})/\mathcal{N}_{\mathbf{V},i}.\]
Proof.: Let \(K=(j_{\mathbf{V},i})_{!*}(L)\); then \((j_{\mathbf{V},i})^{*}(K)\cong L\) and there is a distinguished triangle
\[(j_{\mathbf{V},i})_{!}(j_{\mathbf{V},i})^{*}(K)\to K\to i_{*}i^{*}(K)\to,\]
where \(i:\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{\geqslant 1}\to\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}^{i}}\) is the closed embedding. Notice that \(i_{*}i^{*}(K)\) has support contained in \(\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{\geqslant 1}\). Hence \((j_{\mathbf{V},i})_{!}(j_{\mathbf{V},i})^{*}(K)\) is isomorphic to \(K\) in the localization \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}^{i}})/\mathcal{N}_{\mathbf{V},i}\), and the first statement follows from \((j_{\mathbf{V},i})^{*}(K)\cong L\).
Notice that \((j_{\mathbf{V},i})^{*}\) and \((j_{\mathbf{V},i})_{!}\) are quasi-inverse equivalences between the category \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{0})\) and the localization \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}^{i}})/\mathcal{N}_{\mathbf{V},i}\), the above argument implies that these equivalences restrict to \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\) and \(\mathcal{D}_{G_{\mathbf{V}}}^{b,ss}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{ \Omega}^{i}})\) and we get the proof.
### The functors \(\mathcal{E}_{i}^{(n)},\mathcal{F}_{j}^{(n)}\) on localizations at \(i\)
In this subsection, we fix an orientation \(\Omega=\Omega^{i}\) of \(Q\) such that \(i\) is a source.
For any \(n\in\mathbb{N}\), take graded spaces \(\mathbf{V},\mathbf{V}^{\prime}\) of dimension vectors \(\nu,\nu^{\prime}\) respectively, such that \(\nu^{\prime}+ni=\nu\). We will define the varieties and morphisms appearing in the following construction and then define a functor \(\mathcal{E}_{i}^{(n)}\).
Define an affine subspace
\[\dot{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}=\bigoplus\limits_{h\in\Omega,h^{ \prime}\neq i}\mathbf{Hom}(\mathbf{V}_{h^{\prime}},\mathbf{V}_{h^{\prime\prime }})\oplus\bigoplus\limits_{i^{\prime}\neq i}\mathbf{Hom}(\mathbf{V}_{i^{\prime} },\mathbf{W}_{\hat{i}^{\prime}}).\]
For any \(x\in\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}}\), we denote by \(\dot{x}=(x_{h})_{h\in\hat{\Omega},h^{\prime}\neq i}\), then there is a morphism
\[\phi_{\mathbf{V},i}:\mathbf{E}_{\mathbf{V},\mathbf{W},i}^{0} \to\dot{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\times\mathbf{ Grass}(\nu_{i},\tilde{\nu}_{i})\] \[x \mapsto(\dot{x},\mathrm{Im}(\bigoplus\limits_{h\in\hat{\Omega},h^ {\prime}=i}x_{h})),\]
where \(\nu_{i}=\dim\mathbf{V}_{i}\), \(\tilde{\nu}_{i}=\sum\limits_{h\in\Omega,h^{\prime}=i}\dim\mathbf{V}_{h^{\prime\prime}}+d_{i}\), and \(\mathbf{Grass}(\nu_{i},\tilde{\nu}_{i})\) is the Grassmannian consisting of \(\nu_{i}\)-dimensional subspaces of the \(\tilde{\nu}_{i}\)-dimensional space \((\bigoplus\limits_{h^{\prime}=i}\mathbf{V}_{h^{\prime\prime}})\oplus\mathbf{W}_{\hat{i}}\). We can check by definition that \(\phi_{\mathbf{V},i}\) is a principal \(\mathbf{GL}(\mathbf{V}_{i})\)-bundle. Let
\[\mathbf{Flag}(\nu_{i}-n,\nu_{i},\tilde{\nu}_{i})=\{\mathbf{U}_{1}\subset\mathbf{U}_{2}\subset(\bigoplus\limits_{h^{\prime}=i}\mathbf{V}_{h^{\prime\prime}})\oplus\mathbf{W}_{\hat{i}}\,|\,\dim\mathbf{U}_{1}=\nu_{i}-n,\dim\mathbf{U}_{2}=\nu_{i}\}\]
be the flag variety, and let \(q_{1},q_{2}\) be the natural projections
\[q_{1}(\dot{x},\mathbf{U}_{1},\mathbf{U}_{2})=(\dot{x},\mathbf{U }_{2});\] \[q_{2}(\dot{x},\mathbf{U}_{1},\mathbf{U}_{2})=(\dot{x},\mathbf{U }_{1}).\]
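In the single vertex example (our running illustration), \(\dot{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\) is a point and \(\tilde{\nu}_{i}=d_{i}\), so \(\phi_{\mathbf{V},i}\) identifies \(\mathbf{E}^{0}_{\mathbf{V},\mathbf{W},i}/\mathbf{GL}(\mathbf{V}_{i})\) with \(\mathbf{Grass}(\nu_{i},d_{i})\), and \(\mathcal{E}_{i}^{(n)}\) becomes (up to shift) the pull-push along the classical correspondence
\[\mathbf{Grass}(\nu_{i}-n,d_{i})\xleftarrow{\ q_{2}\ }\mathbf{Flag}(\nu_{i}-n,\nu_{i},d_{i})\xrightarrow{\ q_{1}\ }\mathbf{Grass}(\nu_{i},d_{i}),\]
familiar from the geometric construction of \(\mathfrak{sl}_{2}\)-actions on Grassmannians.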
**Definition 3.5**.: _Define the functor \(\mathcal{E}_{i}^{(n)}:\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V}, \mathbf{W},\hat{\Omega}})\to\mathcal{D}_{G_{\mathbf{V}^{\prime}}}^{b}(\mathbf{E }_{\mathbf{V}^{\prime},\mathbf{W},\hat{\Omega}})\) via_
\[\mathcal{E}_{i}^{(n)}=(j_{\mathbf{V}^{\prime},i})_{!}(\phi_{\mathbf{V}^{\prime},i})^{*}(q_{2})_{!}(q_{1})^{*}(\phi_{\mathbf{V},i})_{!}(j_{\mathbf{V},i})^{*}[ -n\nu_{i}]\]
_Notice that \(\mathcal{E}_{i}^{(n)}(\mathcal{N}_{\mathbf{V},i})=0\), thus \(\mathcal{E}_{i}^{(n)}\) descends to a functor between localizations, still denoted by_
\[\mathcal{E}_{i}^{(n)}:\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V}, \mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V},i}\to\mathcal{D}_{G_{ \mathbf{V}^{\prime}}}^{b}(\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\hat{ \Omega}})/\mathcal{N}_{\mathbf{V}^{\prime},i}.\]
_In particular, we denote by \(\mathcal{E}_{i}=\mathcal{E}_{i}^{(1)}\)._
**Lemma 3.6**.: _If \(\tilde{\Omega}\) is another orientation such that \(i\) is a source in \(\tilde{\Omega}\) and we define \(\tilde{\mathcal{E}}_{i}^{(n)}\) in the same way as in Definition 3.5, then \(\tilde{\mathcal{E}}_{i}^{(n)}\mathcal{F}_{\hat{\Omega},\hat{\tilde{\Omega}}}\cong\mathcal{F}_{\hat{\Omega},\hat{\tilde{\Omega}}}\mathcal{E}_{i}^{(n)}\). In particular, \(\mathcal{E}_{i}^{(n)}\) is independent of the choice of the orientation \(\Omega\) in which \(i\) is a source._
Proof.: Since \(i\) is a source in both \(\tilde{\Omega}\) and \(\Omega\), the transformation \(\mathcal{F}_{\hat{\Omega},\hat{\tilde{\Omega}}}\) only involves edges which are not incident to \(i\). Let \(\dot{Q}\) be the quiver obtained by removing \(i\) from \(Q\); then \(\mathcal{F}_{\hat{\Omega},\hat{\tilde{\Omega}}}\) indeed acts on \(\dot{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\) as the Fourier-Deligne transformation of \(\dot{Q}\). By definition and base change, it is easy to check that \((q_{2})_{!}(q_{1})^{*}\) commutes with the Fourier-Deligne transformations of \(\dot{Q}\), and we get the proof.
For \(j\in I\), we take graded spaces \(\mathbf{V},\mathbf{V}^{\prime}\) and \(\mathbf{V}^{\prime\prime}\) such that \(|\mathbf{V}^{\prime\prime}|-nj=|\mathbf{V}|,|\mathbf{V}^{\prime}|=nj\) and define the functor \(\mathcal{F}_{j}^{(n)}:\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V}, \mathbf{W},\hat{\Omega}})\to\mathcal{D}_{G_{\mathbf{V}^{\prime\prime}}}^{b}( \mathbf{E}_{\mathbf{V}^{\prime\prime},\mathbf{W},\hat{\Omega}})\) (or \(\mathcal{F}_{j}^{(n)}:\mathcal{Q}_{\mathbf{V},\mathbf{W}}\to\mathcal{Q}_{ \mathbf{V}^{\prime\prime},\mathbf{W}}\) ) via
\[\mathcal{F}_{j}^{(n)}=\mathbf{Ind}_{\mathbf{V}^{\prime},\mathbf{V}\oplus\mathbf{W}}^{\mathbf{V}^{\prime\prime}\oplus\mathbf{W}}(\overline{\mathbb{Q}}_{l}\boxtimes-)\]
where \(\overline{\mathbb{Q}}_{l}\) is the constant sheaf on \(\mathbf{E}_{\mathbf{V}^{\prime},0,\hat{\Omega}}\cong\mathbf{E}_{\mathbf{V}^{\prime},\Omega}\).
**Lemma 3.7**.: _The functor \(\mathcal{F}_{j}^{(n)}\) sends objects of \(\mathcal{N}_{\mathbf{V},k}\) to objects of \(\mathcal{N}_{\mathbf{V}^{\prime\prime},k}\)._
Proof.: Since the induction functor for \(\mathcal{D}_{G_{\mathbf{V}}}^{b,ss}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})\) commutes with the Fourier-Deligne transformation, we may assume that \(k\) is a source. In this case, the simple representation \(S_{k}\) of \(Q\) (and of \(\hat{Q}\)) at \(k\) is injective. In particular, any exact sequence of representations of \(\hat{Q}\) of the form
\[0\to S_{k}^{\oplus m}\to Y\to Z\to 0\]
must be split, then by definition of \(\mathcal{F}_{j}^{(n)}\), we can see that \(\operatorname{supp}(\mathcal{F}_{j}^{(n)}(L))\subseteq\mathbf{E}_{\mathbf{V} ^{\prime\prime},\mathbf{W},k}^{\geqslant 1}\) if \(\operatorname{supp}(L)\subseteq\mathbf{E}_{\mathbf{V},\mathbf{W},k}^{\geqslant 1}\). We get the proof.
As a corollary, the functors \(\mathcal{F}_{j}^{(n)}\) are well defined between the localizations.
**Definition 3.8**.: _For \(j\in I\), we take graded spaces \(\mathbf{V},\mathbf{V}^{\prime}\) and \(\mathbf{V}^{\prime\prime}\) such that \(|\mathbf{V}^{\prime\prime}|-nj=|\mathbf{V}|,|\mathbf{V}^{\prime}|=nj\) and define the functor \(\mathcal{F}_{j}^{(n)}:\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{ \mathbf{V},i}\to\mathcal{Q}_{\mathbf{V}^{\prime\prime},\mathbf{W}}/\mathcal{N }_{\mathbf{V}^{\prime\prime},i}\) (or \(\mathcal{F}_{j}^{(n)}:\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{ \mathbf{V}}\to\mathcal{Q}_{\mathbf{V}^{\prime\prime},\mathbf{W}}/\mathcal{N }_{\mathbf{V}^{\prime\prime}}\) ) via_
\[\mathcal{F}_{j}^{(n)}=\mathbf{Ind}_{\mathbf{V}^{\prime},\mathbf{V}\oplus \mathbf{W}}^{\mathbf{V}^{\prime\prime}\oplus\mathbf{W}}(\overline{\mathbb{Q}}_ {l}\boxtimes-)\]
_where \(\overline{\mathbb{Q}}_{l}\) is the constant sheaf of \(\mathbf{E}_{\mathbf{V}^{\prime},0,\hat{\Omega}}\). In particular, we set \(\mathcal{F}_{j}=\mathcal{F}_{j}^{(1)}\)._
The functor \(\mathcal{F}_{j}\) is defined by Lusztig's induction functor. It can be described by functors induced by morphisms appearing in the definition of \(\mathcal{E}_{i}^{(n)}\) as follows.
Take graded spaces \(\mathbf{V}\) and \(\mathbf{V}^{\prime\prime}\) such that \(|\mathbf{V}|+j=|\mathbf{V}^{\prime\prime}|\). We set \(\mathbf{E}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}^{{}^{\prime\prime},0}=(p_{3 })^{-1}(\mathbf{E}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}^{{}^{\prime},0})\) and \(\mathbf{E}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}^{{}^{\prime},0}=(p_{2})^{- 1}(\mathbf{E}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}^{{}^{\prime\prime},0})\), then we have the following commutative diagram
where \(j_{i},i=1,2,3,4\) are open embeddings and \(\tilde{p}_{1},\tilde{p}_{2},\tilde{p}_{3}\) are the obvious morphisms induced from \(p_{1},p_{2},p_{3}\), respectively. Since the right square is Cartesian, we have
\[(j_{4})^{*}(p_{3}^{\prime})_{!}(p_{2}^{\prime})_{\flat}(p_{1}^{\prime})^{*}=(\tilde{p}_{3})_{!}(\tilde{p}_{2})_{\flat}(\tilde{p}_{1})^{*}(j_{1})^{*}. \tag{1}\]
**Case (I)**\(i=j\). Consider the following commutative diagram
where
\[q_{1}((\dot{x},\mathbf{U}_{1}\subset\mathbf{U}_{2}))=(\dot{x},\mathbf{U}_{1}),\]
\[q^{\prime}_{2}((\dot{x},\mathbf{U}_{1}\subset\mathbf{U}_{2}))=(\dot{x},\mathbf{ U}_{2}),\]
\[\phi_{1}=\phi_{\mathbf{V},i},\ \phi_{2}=\tilde{\phi}\circ\tilde{p}_{2},\ \phi_{3}= \tilde{\phi},\ \phi_{4}=\phi_{\mathbf{V}^{\prime\prime},i}\]
and \(\tilde{\phi}:\mathbf{E}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}^{{}^{\prime \prime},0}\rightarrow\dot{\mathbf{E}}_{\mathbf{V}^{\prime\prime},\mathbf{W},i} \times\mathbf{Flag}(\nu_{i},\nu^{\prime\prime}_{i},\tilde{\nu}_{i})\) is defined by
\[\tilde{\phi}((x,\tilde{\mathbf{W}}))=(\dot{x},\,\mathrm{Im}(\bigoplus_{h\in\Omega,h^{\prime}=i}x_{h})|_{\tilde{\mathbf{W}}}\subset\mathrm{Im}(\bigoplus_{h\in\Omega,h^{\prime}=i}x_{h})\subset(\bigoplus_{h^{\prime}=i}\mathbf{V}_{h^{\prime\prime}})\oplus\mathbf{W}_{\tilde{i}}).\]
Notice that the right square is Cartesian, and we have
\[(q_{2})_{!}(q_{1})^{*}(\phi_{1})_{!}=(\phi_{4})_{!}(\tilde{p}_{3})_{!}(\tilde{ p}_{2})_{!}(\tilde{p}_{1})^{*}. \tag{2}\]
**Case (II)**\(i\neq j\) and \(i,j\) are connected by some edge. Let \(Z_{1}\) be the variety consisting of quadruples \((\dot{x},\dot{\mathbf{U}}\subset\dot{\mathbf{V}}^{\prime\prime},\dot{\rho}, \tilde{\mathbf{V}}^{\prime\prime}\subset\bigoplus_{h\in\Omega,h^{\prime}=i} \mathbf{U}_{h^{\prime\prime}}\oplus\mathbf{W}_{\tilde{i}})\) such that \(\dot{\mathbf{U}}\) is a \(\dot{x}\)-stable subspace of \(\dot{\mathbf{V}}^{\prime\prime}\), \(\dot{\rho}:\dot{\mathbf{U}}\oplus\dot{\mathbf{W}}\rightarrow\dot{\mathbf{V}} \oplus\dot{\mathbf{W}}\) is a linear isomorphism and \(\tilde{\mathbf{V}}^{\prime\prime}\) is a subspace of dimension \(\nu_{i}\). And let \(Z_{2}\) be the variety consisting of triples \((\dot{x},\dot{\mathbf{U}}\subset\dot{\mathbf{V}}^{\prime\prime},\tilde{ \mathbf{V}}^{\prime\prime}\subset\bigoplus_{h\in\Omega,h^{\prime}=i}\mathbf{ U}_{h^{\prime\prime}}\oplus\mathbf{W}_{\tilde{i}})\) such that \(\dot{\mathbf{U}}\) is a \(\dot{x}\)-invariant subspace of \(\dot{\mathbf{V}}^{\prime\prime}\) and \(\tilde{\mathbf{V}}^{\prime\prime}\) is subspace of dimension \(\nu_{i}\). Consider the following commutative diagram
where \(a_{i,j}\) is the number of edges connecting \(i\) and \(j\) and
\[q_{1}(\dot{x},\dot{\mathbf{U}},\dot{\rho},\tilde{\mathbf{V}}^{ \prime\prime})=((\dot{\rho})_{*}(\dot{x}|_{\dot{\mathbf{U}}\oplus\dot{\mathbf{W }}}),\dot{\rho}(\tilde{\mathbf{V}}^{\prime\prime})),\] \[q_{2}(\dot{x},\dot{\mathbf{U}},\dot{\rho},\tilde{\mathbf{V}}^{ \prime\prime})=(\dot{x},\dot{\mathbf{U}},\tilde{\mathbf{V}}^{\prime\prime}),\] \[q_{3}((\dot{x},\dot{\mathbf{U}},\tilde{\mathbf{V}}^{\prime\prime} ))=(\dot{x},\tilde{\mathbf{V}}^{\prime\prime})\] \[\phi_{1}=\phi_{\mathbf{V},i},\] \[\phi_{2}((x,\mathbf{U},\rho))=(\dot{x},\dot{\mathbf{U}},\dot{ \rho},\mathrm{Im}(\bigoplus_{h\in\Omega,h^{\prime}=i}x_{h})),\] \[\phi_{3}((x,\mathbf{U}))=(\dot{x},\dot{\mathbf{U}},\mathrm{Im}( \bigoplus_{h\in\Omega,h^{\prime}=i}x_{h})),\] \[\phi_{4}=\phi_{\mathbf{V}^{\prime\prime},i}.\]
Since the right square is Cartesian, we have
\[(q_{3})_{!}(q_{2})_{!}(q_{1})^{*}(\phi_{1})_{!}=(\phi_{4})_{!}(\tilde{p}_{3})_ {!}(\tilde{p}_{2})_{!}(\tilde{p}_{1})^{*}. \tag{3}\]
**Case (III)**\(i\neq j\) and there is no edge connecting \(i,j\). Let \(Z_{1}\) be the variety consisting of quadruples \((\dot{x},\dot{\mathbf{U}}\subset\dot{\mathbf{V}}^{\prime\prime},\dot{\rho},\ddot{ \mathbf{V}}^{\prime\prime}\subset\bigoplus\limits_{h\in\Omega,h^{\prime}=i} \mathbf{U}_{h^{\prime\prime}}\oplus\mathbf{W}_{\dot{\mathbf{i}}})\) such that \(\dot{\mathbf{U}}\) is a \(\dot{x}\)-stable subspace of \(\dot{\mathbf{V}}^{\prime\prime}\), \(\dot{\rho}:\dot{\mathbf{U}}\oplus\dot{\mathbf{W}}\rightarrow\dot{\mathbf{V}} \oplus\dot{\mathbf{W}}\) is a linear isomorphism and \(\tilde{\mathbf{V}}^{\prime\prime}\) is a subspace of dimension \(\nu_{i}\). And let \(Z_{2}\) be the variety which consists of \((\dot{x},\dot{\mathbf{U}}\subset\dot{\mathbf{V}}^{\prime\prime},\tilde{ \mathbf{V}}^{\prime\prime}\subset\bigoplus\limits_{h\in\Omega,h^{\prime}=i} \mathbf{U}_{h^{\prime\prime}}\oplus\mathbf{W}_{\dot{\mathbf{i}}})\) such that \(\dot{\mathbf{U}}\) is a \(\dot{x}\)-invariant subspace of \(\dot{\mathbf{V}}^{\prime\prime}\) and \(\tilde{\mathbf{V}}^{\prime\prime}\) is a subspace of dimension \(\nu_{i}\). Consider the following commutative diagram
where \(q_{1},q_{2},q_{3},\phi_{1},\phi_{2},\phi_{3},\phi_{4}\) are given by the same formulas as in **Case (II)** above. Since the right square is Cartesian, we have
\[(q_{3})_{!}(q_{2})_{!}(q_{1})^{*}(\phi_{1})_{!}=(\phi_{4})_{!}(\tilde{p}_{3})_{!}(\tilde{p}_{2})_{!}(\tilde{p}_{1})^{*}. \tag{4}\]
In conclusion, the functor \(\mathcal{F}_{j}:\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V},i}\rightarrow\mathcal{D}^{b}_{G_{\mathbf{V}^{\prime\prime}}}(\mathbf{E}_{\mathbf{V}^{\prime\prime},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}^{\prime\prime},i}\) can be described by
\[\mathcal{F}_{j}\cong\begin{cases}(j_{\mathbf{V}^{\prime\prime},i})_{!}(\phi_{\mathbf{V}^{\prime\prime},i})^{*}(q_{2}^{\prime})_{!}(q_{1}^{\prime})^{*}(\phi_{\mathbf{V},i})_{!}(j_{\mathbf{V},i})^{*}[\tilde{\nu}_{i}+\nu_{i}],\ i=j,\\ (j_{\mathbf{V}^{\prime\prime},i})_{!}(\phi_{\mathbf{V}^{\prime\prime},i})^{*}(q_{3}^{\prime})_{!}(q_{2}^{\prime})_{!}(q_{1}^{\prime})^{*}(\phi_{\mathbf{V},i})_{!}(j_{\mathbf{V},i})^{*}[\tilde{\nu}_{i}+\nu_{i}],\ i\neq j.\end{cases} \tag{5}\]
We also introduce a functor \(\mathcal{F}_{j}^{\vee}\) which will be used in the next section.
**Definition 3.9**.: _For \(j\in I\) and \(j\neq i\), we take graded spaces \(\mathbf{V},\mathbf{V}^{\prime}\) and \(\mathbf{V}^{\prime\prime}\) such that \(|\mathbf{V}^{\prime}|-j=|\mathbf{V}|,|\mathbf{V}^{\prime\prime}|=j\) and define the functor \(\mathcal{F}_{j}^{\vee}:\mathcal{Q}_{\mathbf{V},\mathbf{W}}\rightarrow\mathcal{ Q}_{\mathbf{V}^{\prime},\mathbf{W}}/\mathcal{N}_{\mathbf{V}^{\prime},i}\) (or \(\mathcal{F}_{j}^{\vee}:\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V}, \mathbf{W},\tilde{\Omega}})\rightarrow\mathcal{D}^{b}_{G_{\mathbf{V}^{\prime}}} (\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\tilde{\Omega}})/\mathcal{N}_{ \mathbf{V}^{\prime},i}\) ) via_
\[\mathcal{F}_{j}^{\vee}=\mathbf{Ind}_{\mathbf{V}\oplus\mathbf{W},\mathbf{V}^{\prime\prime}}^{\mathbf{V}^{\prime}\oplus\mathbf{W}}(-\boxtimes\overline{\mathbb{Q}}_{l})\]
_where \(\overline{\mathbb{Q}}_{l}\) is the constant sheaf of \(\mathbf{E}_{\mathbf{V}^{\prime\prime},0,\tilde{\Omega}}\)._
Since \((j_{\mathbf{V}^{\prime},i})_{!}(j_{\mathbf{V}^{\prime},i})^{*}\) is isomorphic to the identity functor of the localization \(\mathcal{D}^{b}_{G_{\mathbf{V}^{\prime}}}(\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}^{\prime},i}\), there is an isomorphism
\[\mathcal{F}_{j}^{\vee}\cong(j_{\mathbf{V}^{\prime},i})_{!}(j_{\mathbf{V}^{ \prime},i})^{*}\mathcal{F}_{j}^{\vee}\cong(j_{\mathbf{V}^{\prime},i})_{!}( \tilde{p}_{3})_{!}(\tilde{p}_{2})_{!}(\tilde{p}_{1})^{*}(j_{\mathbf{V},i})^{*} [d_{1}-d_{2}],\]
where the morphisms are given by the following commutative diagram:
**Case (I)** The vertices \(i,j\) are connected by some edge. Let \(Z_{1}\) be the variety consisting of quadruples \((\dot{x},\dot{\mathbf{U}}\subset\dot{\mathbf{V}}^{\prime},\dot{\rho},\tilde{\mathbf{V}}^{\prime}\subset\bigoplus\limits_{h\in\Omega,h^{\prime}=i}\mathbf{V}_{h^{\prime\prime}}^{\prime}\oplus\mathbf{W}_{\dot{\mathbf{i}}})\) such that \(\dot{\mathbf{U}}\) is a \(\dot{x}\)-stable subspace of \(\dot{\mathbf{V}}^{\prime}\), \(\dot{\rho}:\dot{\mathbf{V}}^{\prime}\oplus\dot{\mathbf{W}}/\dot{\mathbf{U}}\rightarrow\dot{\mathbf{V}}\oplus\dot{\mathbf{W}}\) is a linear isomorphism and \(\tilde{\mathbf{V}}^{\prime}\) is a subspace of dimension \(\nu_{i}\). And let \(Z_{2}\) be the variety consisting of triples \((\dot{x},\dot{\mathbf{U}}\subset\dot{\mathbf{V}}^{\prime},\tilde{\mathbf{V}}^{\prime}\subset\bigoplus\limits_{h\in\Omega,h^{\prime}=i}\mathbf{V}_{h^{\prime\prime}}^{\prime}\oplus\mathbf{W}_{\dot{\mathbf{i}}})\) such that \(\dot{\mathbf{U}}\) is a
\(\dot{x}\)-invariant subspace of \(\dot{\mathbf{V}}^{\prime}\) and \(\tilde{\mathbf{V}}^{\prime}\) is a subspace of dimension \(\nu_{i}\). Consider the following commutative diagram
where
\[q_{1}(\dot{x},\dot{\mathbf{U}},\dot{\rho},\tilde{\mathbf{V}}^{\prime}) =((\dot{\rho})_{*}(\dot{x}|_{\tilde{\mathbf{V}}^{\prime}\oplus\tilde{ \mathbf{W}}/\dot{\mathbf{U}}}),\dot{\rho}(\tilde{\mathbf{V}}^{\prime}/\dot{ \mathbf{U}})),\] \[q_{2}(\dot{x},\dot{\mathbf{U}},\dot{\rho},\tilde{\mathbf{V}}^{ \prime}) =(\dot{x},\dot{\mathbf{U}},\tilde{\mathbf{V}}^{\prime}),\] \[q_{3}((\dot{x},\dot{\mathbf{U}},\tilde{\mathbf{V}}^{\prime}))=( \dot{x},\tilde{\mathbf{V}}^{\prime})\] \[\phi_{1}=\phi_{\mathbf{V},i},\] \[\phi_{2}((x,\mathbf{U},\rho))=(\dot{x},\dot{\mathbf{U}},\dot{ \rho},\mathrm{Im}(\bigoplus_{h\in\hat{\Omega},h^{\prime}=i}x_{h})),\] \[\phi_{3}((x,\mathbf{U}))=(\dot{x},\dot{\mathbf{U}},\mathrm{Im}( \bigoplus_{h\in\hat{\Omega},h^{\prime}=i}x_{h})),\] \[\phi_{4}=\phi_{\mathbf{V}^{\prime},i}.\]
Since the right square is Cartesian, we have
\[(q_{3})_{!}(q_{2})_{\flat}(q_{1})^{*}(\phi_{1})_{\flat}=(\phi_{4})_{\flat}( \tilde{p}_{3})_{!}(\tilde{p}_{2})_{\flat}(\tilde{p}_{1})^{*}. \tag{6}\]
**Case (II)** There is no edge connecting \(i,j\). Let \(Z_{1}\) and \(Z_{2}\) be the varieties as before. Consider the following commutative diagram
where \(q_{2},q_{3},\phi_{1},\phi_{2},\phi_{3},\phi_{4}\) are given by the same formulas as in **Case (I)** above and
\[q_{1}(\dot{x},\dot{\mathbf{U}},\dot{\rho},\tilde{\mathbf{V}}^{\prime})=((\dot{ \rho})_{*}(\dot{x}|_{\tilde{\mathbf{V}}^{\prime}\oplus\tilde{\mathbf{W}}/ \dot{\mathbf{U}}}),\dot{\rho}(\tilde{\mathbf{V}}^{\prime})).\]
Since the right square is Cartesian, we have
\[(q_{3})_{!}(q_{2})_{\flat}(q_{1})^{*}(\phi_{1})_{\flat}=(\phi_{4})_{\flat}( \tilde{p}_{3})_{!}(\tilde{p}_{2})_{\flat}(\tilde{p}_{1})^{*}. \tag{7}\]
### Commutation relations in localizations at \(i\)
We begin this subsection by recalling two lemmas from algebraic geometry.
**Lemma 3.10**.: _For any morphism \(f:X\to Y\) and any complex \(A\) on \(Y\), we have \(f_{!}f^{*}A\cong f_{!}\overline{\mathbb{Q}}_{l}\otimes A\)._
Proof.: By the projection formula, see [1, Theorem 1.4.9], we have
\[f_{!}\overline{\mathbb{Q}}_{l}\otimes A\cong f_{!}(\overline{\mathbb{Q}}_{l} \otimes f^{*}A)\cong f_{!}f^{*}(A),\]
as desired.
**Lemma 3.11**.: _[_13_, Section 8.1.6]_ _Let \(f:X\to Y\) be a morphism. If \(X=\coprod_{n}X_{n}\) is a partition such that for any \(n\), \(X^{\leqslant n}=\coprod_{m\leqslant n}X_{m}\) is closed and the restriction \(f_{n}\) of \(f\) to \(X_{n}\) can be decomposed as_
\[X_{n}\xrightarrow{g_{n}}Z_{n}\xrightarrow{h_{n}}Y\]
_where \(Z_{n}\) is smooth, \(g_{n}\) is a vector bundle of rank \(d_{n}\) and \(h_{n}\) is proper, then_
\[f_{!}(\overline{\mathbb{Q}}_{l}|_{X})\cong\bigoplus_{n}(f_{n})_{!}(\overline{ \mathbb{Q}}_{l}|_{X_{n}})\cong\bigoplus_{n}(h_{n})_{!}(\overline{\mathbb{Q}}_{l }|_{Z_{n}})[-2d_{n}].\]
**Lemma 3.12**.: _For any graded space \(\mathbf{V},\mathbf{V}^{\prime}\) such that \(|\mathbf{V}^{\prime}|+(n-1)i=|\mathbf{V}|\), there is an isomorphism as functors between localizations \(\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\mathbf{W},\tilde{ \Omega}})/\mathcal{N}_{\mathbf{V},i}\to\mathcal{D}^{b}_{G_{\mathbf{V}^{\prime }}}(\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\tilde{\Omega}})/\mathcal{N}_{ \mathbf{V}^{\prime},i}\),_
\[\mathcal{E}^{(n)}_{i}\mathcal{F}_{i}\oplus\bigoplus_{0\leqslant m\leqslant N-1}\mathcal{E}^{(n-1)}_{i}[N-1-2m]\cong\mathcal{F}_{i}\mathcal{E}^{(n)}_{i}\oplus\bigoplus_{0\leqslant m\leqslant-N-1}\mathcal{E}^{(n-1)}_{i}[-2m-N-1],\]
_where \(N=2\nu_{i}-\tilde{\nu}_{i}-n+1\). More precisely,_
\[\mathcal{E}^{(n)}_{i}\mathcal{F}_{i}\cong\mathcal{F}_{i}\mathcal{E}^{(n)}_{i}, \text{if }N=0;\] \[\mathcal{E}^{(n)}_{i}\mathcal{F}_{i}\oplus\bigoplus_{0\leqslant m\leqslant N-1}\mathcal{E}^{(n-1)}_{i}[N-1-2m]\cong\mathcal{F}_{i}\mathcal{E}^{(n)}_{i}, \text{if }N\geqslant 1;\] \[\mathcal{E}^{(n)}_{i}\mathcal{F}_{i}\cong\mathcal{F}_{i}\mathcal{E}^{(n)}_{i}\oplus\bigoplus_{0\leqslant m\leqslant-N-1}\mathcal{E}^{(n-1)}_{i}[-2m-N-1], \text{if }N\leqslant-1.\]
Proof.: On the one hand, take graded space \(\mathbf{V}^{\prime\prime}\) such that \(|\mathbf{V}|=|\mathbf{V}^{\prime\prime}|-i\) and consider the diagrams
By base change, we have
\[\mathcal{E}^{(n)}_{i}\mathcal{F}_{i}\cong (j_{\mathbf{V}^{\prime},i})_{!}(\phi_{\mathbf{V}^{\prime},i})^{*} (q_{2})_{!}(q_{1})^{*}(\phi_{\mathbf{V}^{\prime\prime},i})_{!}(j_{\mathbf{V}^ {\prime\prime},i})^{*}(j_{4})_{!}(\phi_{4})_{!}(q^{\prime}_{2})_{!}(q^{\prime }_{1})^{*}(\phi_{1})_{!}(j_{1})^{*}[\tilde{\nu}_{i}+\nu_{i}-n\nu_{i}-n]\] \[\cong (j_{\mathbf{V}^{\prime},i})_{!}(\phi_{\mathbf{V}^{\prime},i})^{* }(q_{2})_{!}(q_{1})^{*}(q^{\prime}_{2})_{!}(q^{\prime}_{1})^{*}(\phi_{1})_{!} (j_{1})^{*}[\tilde{\nu}_{i}+\nu_{i}-n\nu_{i}-n]\] \[\cong (j_{\mathbf{V}^{\prime},i})_{!}(\phi_{\mathbf{V}^{\prime},i})^{* }(q_{2})_{!}(q_{1})^{*}(q^{\prime}_{2})_{!}(q^{\prime}_{1})^{*}(\phi_{\mathbf{ V},i})_{!}(j_{\mathbf{V},i})^{*}[\tilde{\nu}_{i}+\nu_{i}-n\nu_{i}-n],\]
where the second isomorphism follows from \(j_{4}=j_{\mathbf{V}^{\prime\prime},i},\phi_{4}=\phi_{\mathbf{V}^{\prime\prime},i}\).
On the other hand, take a graded space \(\mathbf{V}^{\prime\prime\prime}\) such that \(|\mathbf{V}|=|\mathbf{V}^{\prime\prime\prime}|+ni\) and consider the diagrams
Similarly, we have
\[\mathcal{F}_{i}\mathcal{E}_{i}^{(n)}\cong (j_{4})_{!}(\phi_{4})^{*}(\tilde{q}_{2}^{\prime})_{!}(\tilde{q}_{1}^{\prime})^{*}(\phi_{1})_{\flat}(j_{1})^{*}(j_{\mathbf{V}^{\prime\prime\prime},i})_{!}(\phi_{\mathbf{V}^{\prime\prime\prime},i})^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1})^{*}(\phi_{\mathbf{V},i})_{\flat}(j_{\mathbf{V},i})^{*}[\tilde{\nu}_{i}+\nu_{i}-n\nu_{i}-n]\] \[\cong (j_{4})_{!}(\phi_{4})^{*}(\tilde{q}_{2}^{\prime})_{!}(\tilde{q}_{1}^{\prime})^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1})^{*}(\phi_{\mathbf{V},i})_{\flat}(j_{\mathbf{V},i})^{*}[\tilde{\nu}_{i}+\nu_{i}-n\nu_{i}-n]\] \[\cong (j_{\mathbf{V}^{\prime},i})_{!}(\phi_{\mathbf{V}^{\prime},i})^{*}(\tilde{q}_{2}^{\prime})_{!}(\tilde{q}_{1}^{\prime})^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1})^{*}(\phi_{\mathbf{V},i})_{\flat}(j_{\mathbf{V},i})^{*}[\tilde{\nu}_{i}+\nu_{i}-n\nu_{i}-n].\]
Since \(\phi_{\flat}\) and \(\phi^{*}\) are quasi-inverse to each other, we only need to calculate the difference between \((q_{2})_{!}(q_{1})^{*}(q^{\prime}_{2})_{!}(q^{\prime}_{1})^{*}\) and \((\tilde{q}_{2}^{\prime})_{!}(\tilde{q}_{1}^{\prime})^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1})^{*}\). Notice that we have \(\nu_{i}=\nu_{i}^{\prime}+n-1=\nu_{i}^{\prime\prime}-1=\nu_{i}^{\prime\prime\prime}+n\), \(\tilde{\nu}_{i}=\tilde{\nu}_{i}^{\prime}=\tilde{\nu}_{i}^{\prime\prime\prime}\) and \(\hat{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}=\hat{\mathbf{E}}_{\mathbf{V}^{\prime},\mathbf{W},i}=\hat{\mathbf{E}}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}=\hat{\mathbf{E}}_{\mathbf{V}^{\prime\prime\prime},\mathbf{W},i}\). Consider the following commutative diagram
where the varieties
\[Y_{0}=\hat{\bf{E}}_{{\bf{V}},{\bf{W}},i}\times{\bf{Grass}}(\nu_{i},\tilde{\nu }_{i})\times{\bf{Grass}}(\nu_{i}+1-n,\tilde{\nu}_{i}),\]
and
\[Y_{1}= \{(\dot{x},\mathbf{V}^{1}\subseteq\mathbf{V}^{3}\subseteq(\bigoplus_{h^{\prime}=i}\mathbf{V}_{h^{\prime\prime}})\oplus\mathbf{W}_{\hat{\imath}},\ \mathbf{V}^{2}\subseteq\mathbf{V}^{3}\subseteq(\bigoplus_{h^{\prime}=i}\mathbf{V}_{h^{\prime\prime}})\oplus\mathbf{W}_{\hat{\imath}})\] \[\mid\dot{x}\in\hat{\mathbf{E}}_{\mathbf{V},\mathbf{W},i},\ \dim\mathbf{V}^{1}=\nu_{i},\ \dim\mathbf{V}^{2}=\nu_{i}-n+1,\ \dim\mathbf{V}^{3}=\nu_{i}+1\}\]
is the fiber product of \(q_{2}^{\prime},q_{1}\), the morphisms \(\pi_{1},\pi_{1}^{\prime},\tilde{\pi},\tilde{\pi}^{\prime}\) are natural projections and
\[r_{1}((\dot{x},\mathbf{V}^{1},\mathbf{V}^{2},\mathbf{V}^{3}))=(\dot{x},\mathbf{ V}^{1},\mathbf{V}^{2}).\]
By base change, we have
\[(q_{2})_{!}(q_{1})^{*}(q_{2}^{\prime})_{!}(q_{1}^{\prime})^{*}\cong (q_{2})_{!}(\pi_{1})_{!}(\pi_{1}^{\prime})^{*}(q_{1}^{\prime})^{*}= (\tilde{\pi})_{!}(r_{1})_{!}(r_{1})^{*}(\tilde{\pi}^{\prime})^{*}.\]
Similarly, consider the commutative diagram
where the variety
\[Y_{2}= \{(\dot{x},\mathbf{V}^{0}\subseteq\mathbf{V}^{1}\subseteq( \bigoplus_{h^{\prime}=i}\mathbf{V}_{h^{\prime\prime}})\oplus\mathbf{W}_{\dot{ i}},\mathbf{V}^{0}\subseteq\mathbf{V}^{2}\subseteq(\bigoplus_{h^{\prime}=i} \mathbf{V}_{h^{\prime\prime}})\oplus\mathbf{W}_{\dot{i}})\] \[|\ \dot{x}\in\dot{\mathbf{E}}_{\mathbf{V},\mathbf{W},i},\dim \mathbf{V}^{1}=\nu_{i},\dim\mathbf{V}^{2}=\nu_{i}-n+1,\dim\mathbf{V}^{0}=\nu_ {i}-n.\}\]
is the fiber product of \(\tilde{q}_{1}^{\prime},\tilde{q}_{2}\), the morphisms \(\pi_{2},\pi_{2}^{\prime},\tilde{\pi},\tilde{\pi}^{\prime}\) are natural projections and
\[r_{2}((\dot{x},\mathbf{V}^{0},\mathbf{V}^{1},\mathbf{V}^{2}))=(\dot{x}, \mathbf{V}^{1},\mathbf{V}^{2}).\]
By base change, we have
\[(\tilde{q}_{2}^{\prime})_{!}(\tilde{q}_{1}^{\prime})^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1})^{*}\cong(\tilde{\pi})_{!}(r_{2})_{!}(r_{2})^{*}(\tilde{\pi}^{\prime})^{*}.\]
It remains to calculate the difference between \((\tilde{\pi})!(r_{1})!(r_{1})^{*}(\tilde{\pi}^{\prime})^{*}\) and \((\tilde{\pi})!(r_{2})!(r_{2})^{*}(\tilde{\pi}^{\prime})^{*}\). We divide \(Y_{0}\) into the disjoint union \(Y_{0}=Y_{0}^{0}\cup Y_{0}^{1}\), where
\[Y_{0}^{0}=\{(\dot{x},\mathbf{V}^{1},\mathbf{V}^{2})\in Y_{0}|\mathbf{V}^{2} \subseteq\mathbf{V}^{1}\},\]
\[Y_{0}^{1}=\{(\dot{x},\mathbf{V}^{1},\mathbf{V}^{2})\in Y_{0}|\mathbf{V}^{2} \nsubseteq\mathbf{V}^{1}\}.\]
For \(a=1,2\) and \(b=0,1\), let \(Y_{a}^{0}=(r_{a})^{-1}(Y_{0}^{0})\), \(Y_{a}^{1}=(r_{a})^{-1}(Y_{0}^{1})\) and \(\iota^{b}:Y_{0}^{b}\to Y_{0}\) be the natural embedding. Let \(\tilde{r}_{a}^{b}:Y_{a}^{b}\to Y_{0}^{b}\) be the restriction of \(r_{a}\) to \(Y_{a}^{b}\), and \(r_{a}^{b}=\iota^{b}\tilde{r}_{a}^{b}:Y_{a}^{b}\to Y_{0}\).
Notice that when \(\mathbf{V}^{2}\nsubseteq\mathbf{V}^{1}\), we have \(\mathbf{V}^{0}=\mathbf{V}^{1}\cap\mathbf{V}^{2}\) and \(\mathbf{V}^{3}=\mathbf{V}^{1}+\mathbf{V}^{2}\), hence \(\tilde{r}_{a}^{1}:Y_{a}^{1}\to Y_{0}^{1}\) is an isomorphism. By Lemmas 3.10 and 3.11, we have
\[(\tilde{\pi})_{!}(r_{1})_{!}(r_{1})^{*}(\tilde{\pi}^{\prime})^{*}(-)\cong (\tilde{\pi})_{!}((r_{1}^{0})_{!}\overline{\mathbb{Q}}_{l}\otimes(\tilde{\pi}^{\prime})^{*}(-))\oplus(\tilde{\pi})_{!}((r_{1}^{1})_{!}\overline{\mathbb{Q}}_{l}\otimes(\tilde{\pi}^{\prime})^{*}(-))\] \[\cong (\tilde{\pi})_{!}((r_{1}^{0})_{!}\overline{\mathbb{Q}}_{l}\otimes(\tilde{\pi}^{\prime})^{*}(-))\oplus(\tilde{\pi})_{!}((\iota^{1})_{!}\overline{\mathbb{Q}}_{l}\otimes(\tilde{\pi}^{\prime})^{*}(-))\] \[\cong (\tilde{\pi})_{!}((\iota^{0})_{!}(\tilde{r}_{1}^{0})_{!}\overline{\mathbb{Q}}_{l}\otimes(\tilde{\pi}^{\prime})^{*}(-))\oplus(\tilde{\pi})_{!}((\iota^{1})_{!}\overline{\mathbb{Q}}_{l}\otimes(\tilde{\pi}^{\prime})^{*}(-)).\]
Similarly, we have
\[(\tilde{\pi})_{!}(r_{2})_{!}(r_{2})^{*}(\tilde{\pi}^{\prime})^{*}(-)\cong(\tilde{\pi})_{!}((\iota^{0})_{!}(\tilde{r}_{2}^{0})_{!}\overline{\mathbb{Q}}_{l}\otimes(\tilde{\pi}^{\prime})^{*}(-))\oplus(\tilde{\pi})_{!}((\iota^{1})_{!}\overline{\mathbb{Q}}_{l}\otimes(\tilde{\pi}^{\prime})^{*}(-)).\]
Since \(\tilde{r}_{1}^{0}\) is a fiber bundle with each fiber isomorphic to \(\mathbf{Grass}(1,\tilde{\nu}_{i}-\nu_{i})\) and \(\tilde{r}_{2}^{0}\) is a fiber bundle with each fiber isomorphic to \(\mathbf{Grass}(\nu_{i}-n,\nu_{i}-n+1)\), we have
\[(\tilde{r}_{1}^{0})_{!}(\overline{\mathbb{Q}}_{l})\cong\bigoplus_{0\leqslant m\leqslant\tilde{\nu}_{i}-\nu_{i}-1}\overline{\mathbb{Q}}_{l}[-2m],\]
\[(\tilde{r}_{2}^{0})_{!}(\overline{\mathbb{Q}}_{l})\cong\bigoplus_{0\leqslant m \leqslant\nu_{i}-n}\overline{\mathbb{Q}}_{l}[-2m].\]
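(In the Grothendieck group, where a shift \([1]\) acts as multiplication by \(v\), these two pushforwards contribute the geometric sums \(\sum_{0\leqslant m\leqslant\tilde{\nu}_{i}-\nu_{i}-1}v^{-2m}\) and \(\sum_{0\leqslant m\leqslant\nu_{i}-n}v^{-2m}\); their difference consists of \(|N|\) consecutive even powers of \(v\), which, after the normalizing shifts in the definitions of \(\mathcal{E}_{i}^{(n)}\) and \(\mathcal{F}_{i}\), is exactly the quantum integer
\[[N]=\frac{v^{N}-v^{-N}}{v-v^{-1}},\qquad N=2\nu_{i}-\tilde{\nu}_{i}-n+1,\]
appearing in the statement. The notation \([N]\) is a shorthand we use only in these remarks.)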
Note that \(Y_{0}^{0}\) is isomorphic to \(\dot{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\times\mathbf{Flag}(\nu_{i}-n+1,\nu_{ i},\tilde{\nu}_{i})\) by definition, and morphisms \(\tilde{\pi}^{\prime}\iota^{0}:Y_{0}^{0}\to\dot{\mathbf{E}}_{\mathbf{V}, \mathbf{W},i}\times\mathbf{Grass}(\nu_{i},\tilde{\nu}_{i})\) and \(\tilde{\pi}\iota^{0}:Y_{0}^{0}\to\dot{\mathbf{E}}_{\mathbf{V},\mathbf{W},i} \times\mathbf{Grass}(\nu_{i}-n+1,\tilde{\nu}_{i})\) can be respectively identified with the morphisms \(\check{q}_{1}\) and \(\check{q}_{2}\) in the definition of \(\mathcal{E}_{i}^{(n-1)}\) as follows
Applying Lemma 3.10 for morphisms \(\iota^{0}\) and \(\tilde{\pi}^{\prime}\iota^{0},\tilde{\pi}\iota^{0}\) respectively, we obtain
\[(\tilde{\pi})_{!}((\iota^{0})_{!}\overline{\mathbb{Q}}_{l}\otimes(\tilde{\pi }^{\prime})^{*}(-))\cong(\tilde{\pi})_{!}(\iota^{0})_{!}(\iota^{0})^{*}( \tilde{\pi}^{\prime})^{*}(-)\cong(\check{q}_{2})_{!}(\check{q}_{1})^{*}(-).\]
Therefore, the difference between \((\tilde{\pi})_{!}(r_{1})_{!}(r_{1})^{*}(\tilde{\pi}^{\prime})^{*}\) and \((\tilde{\pi})_{!}(r_{2})_{!}(r_{2})^{*}(\tilde{\pi}^{\prime})^{*}\) only involves direct sums of shifts of \((\check{q}_{2})_{!}(\check{q}_{1})^{*}\), and so the difference between \(\mathcal{E}_{i}^{(n)}\mathcal{F}_{i}\) and \(\mathcal{F}_{i}\mathcal{E}_{i}^{(n)}\) only involves direct sums of shifts of \(\mathcal{E}_{i}^{(n-1)}\). By direct calculation, we have
\[\mathcal{E}_{i}^{(n)}\mathcal{F}_{i}\cong\mathcal{F}_{i}\mathcal{E}_{i}^{(n)}, \text{if }N=0;\] \[\mathcal{E}_{i}^{(n)}\mathcal{F}_{i}\oplus\bigoplus_{0\leqslant m\leqslant N-1}\mathcal{E}_{i}^{(n-1)}[N-1-2m]\cong\mathcal{F}_{i}\mathcal{E}_{i}^{(n)}, \text{if }N\geqslant 1;\] \[\mathcal{E}_{i}^{(n)}\mathcal{F}_{i}\cong\mathcal{F}_{i}\mathcal{E}_{i}^{(n)}\oplus\bigoplus_{0\leqslant m\leqslant-N-1}\mathcal{E}_{i}^{(n-1)}[-2m-N-1], \text{if }N\leqslant-1,\]
as desired.
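At the level of the split Grothendieck groups of these localizations (where a shift \([1]\) acts as multiplication by \(v\), cf. Definition 3.25 below), and writing \(e_{i}^{(n)}\) and \(f_{i}\) for the operators induced by \(\mathcal{E}_{i}^{(n)}\) and \(\mathcal{F}_{i}\) (a notation used only in this remark), the three cases of Lemma 3.12 combine into the single identity
\[f_{i}e_{i}^{(n)}-e_{i}^{(n)}f_{i}=[N]\,e_{i}^{(n-1)},\qquad N=2\nu_{i}-\tilde{\nu}_{i}-n+1,\]
since \(\sum_{0\leqslant m\leqslant N-1}v^{N-1-2m}=[N]\) for \(N\geqslant 1\), the corresponding sum is empty for \(N\leqslant 0\), and \([-N]=-[N]\).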
**Lemma 3.13**.: _For any graded space \(\mathbf{V},\mathbf{V}^{\prime}\) such that \(|\mathbf{V}^{\prime}|+ni=|\mathbf{V}|+j\) and \(i\neq j\), there is an isomorphism in the localization \(\mathcal{D}^{b}_{G_{\mathbf{V}^{\prime}}}(\mathbf{E}_{\mathbf{V}^{\prime}, \mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}^{\prime},i}\),_
\[\mathcal{E}_{i}^{(n)}\mathcal{F}_{j}\cong\mathcal{F}_{j}\mathcal{E}_{i}^{(n)}.\]
Proof.: We assume that \(i,j\) are connected by some edges, the other case can be proved by a similar argument. On the one hand, take graded spaces \(\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}\) such that \(|\mathbf{V}|+j=|\mathbf{V}^{\prime\prime}|=|\mathbf{V}^{\prime}|+ni\) and consider the diagrams
\[\hat{\mathbf{E}}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}\times\mathbf{Grass}(\nu_{i}^{\prime\prime},\tilde{\nu}_{i}+a_{i,j})\longleftarrow\hat{\mathbf{E}}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}\times\mathbf{Flag}(\nu_{i}^{\prime\prime},\nu_{i}^{\prime},\tilde{\nu}_{i}+a_{i,j})\longrightarrow\hat{\mathbf{E}}_{\mathbf{V}^{\prime},\mathbf{W},i}\times\mathbf{Grass}(\nu_{i}^{\prime},\tilde{\nu}_{i}+a_{i,j}).\] By base change, up to shifts we have
\[\mathcal{E}_{i}^{(n)}\mathcal{F}_{j}\cong (j_{\mathbf{V}^{\prime},i})_{!}(\phi_{\mathbf{V}^{\prime},i})^{* }(q_{2}^{\prime})_{!}(q_{1}^{\prime})^{*}(\phi_{\mathbf{V}^{\prime\prime},i})_ {b}(j_{\mathbf{V}^{\prime\prime},i})^{*}(j_{4})_{!}(\phi_{4})^{*}(q_{3})_{!}(q _{2})_{b}(q_{1})^{*}(\phi_{1})_{b}(j_{1})^{*}\] \[\cong (j_{\mathbf{V}^{\prime},i})_{!}(\phi_{\mathbf{V}^{\prime},i})^{* }(q_{2}^{\prime})_{!}(q_{1}^{\prime})^{*}(q_{3})_{!}(q_{2})_{b}(q_{1})^{*}( \phi_{1})_{b}(j_{1})^{*}\] \[\cong (j_{\mathbf{V}^{\prime},i})_{!}(\phi_{\mathbf{V}^{\prime},i})^{* }(q_{2}^{\prime})_{!}(q_{1}^{\prime})^{*}(q_{3})_{!}(q_{2})_{b}(q_{1})^{*}( \phi_{\mathbf{V},i})_{b}(j_{\mathbf{V},i})^{*}.\]
On the other hand, take a graded space \(\mathbf{V}^{\prime\prime\prime}\) such that \(|\mathbf{V}|=|\mathbf{V}^{\prime\prime\prime}|+ni\) and consider the diagrams
\[\hat{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\times\mathbf{Grass}(\nu_{i},\tilde{\nu}_{i})\longleftarrow\hat{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\times\mathbf{Flag}(\nu_{i}-n,\nu_{i},\tilde{\nu}_{i})\longrightarrow\hat{\mathbf{E}}_{\mathbf{V}^{\prime\prime\prime},\mathbf{W},i}\times\mathbf{Grass}(\nu_{i}-n,\tilde{\nu}_{i}),\]
Similarly, up to shifts we have
\[\mathcal{F}_{j}\mathcal{E}_{i}^{(n)}\cong (j_{4})_{!}(\phi_{4})^{*}(\tilde{q}_{3}^{\prime})_{!}(\tilde{q}_{ 2}^{\prime})_{b}(\tilde{q}_{1}^{\prime})^{*}(\phi_{1})_{b}(j_{1})^{*}(j_{ \mathbf{V}^{\prime\prime\prime},i})_{!*}(\phi_{\mathbf{V}^{\prime\prime\prime },i})^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1})^{*}(\phi_{\mathbf{V},i})_{b}(j_{ \mathbf{V},i})^{*}\] \[\cong (j_{4})_{!}(\phi_{4})^{*}(\tilde{q}_{3}^{\prime})_{!}(\tilde{q}_{ 2}^{\prime})_{b}(\tilde{q}_{1}^{\prime})^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1} )^{*}(\phi_{\mathbf{V},i})_{b}(j_{\mathbf{V},i})^{*}\] \[\cong (j_{\mathbf{V}^{\prime},i})_{!}(\phi_{\mathbf{V}^{\prime},i})^{* }(\tilde{q}_{3}^{\prime})_{!}(\tilde{q}_{2}^{\prime})_{!}(\tilde{q}_{1}^{ \prime})^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1})^{*}(\phi_{\mathbf{V},i})_{b}(j_ {\mathbf{V},i})^{*}.\]
We only need to calculate the difference between \((q_{2}^{\prime})_{!}(q_{1}^{\prime})^{*}(q_{3})_{!}(q_{2})_{\flat}(q_{1})^{*}\) and \((\tilde{q}_{3}^{\prime})_{!}(\tilde{q}_{2}^{\prime})_{!}(\tilde{q}_{1}^{\prime})^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1})^{*}\). Notice that we have \(\nu_{i}^{\prime}+n=\nu_{i}=\nu_{i}^{\prime\prime}=\nu_{i}^{\prime\prime\prime}+n\), \(\tilde{\nu}_{i}^{\prime\prime}=\tilde{\nu}_{i}+1=\tilde{\nu}_{i}^{\prime}\) and \(\hat{\mathbf{E}}_{\mathbf{V}^{\prime},\mathbf{W},i}=\hat{\mathbf{E}}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}\), \(\hat{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}=\hat{\mathbf{E}}_{\mathbf{V}^{\prime\prime\prime},\mathbf{W},i}\).
\(\dot{\mathbf{E}}_{\mathbf{V}^{\prime\prime\prime},\mathbf{W},i}\times\mathbf{ Grass}(\nu_{i}-n,\tilde{\nu}_{i}+a_{i,j})\)\(\dot{\mathbf{E}}_{\mathbf{V}^{\prime},\mathbf{W},i}\times\mathbf{Flags}(\nu_{i}-n,\nu_{i},\tilde{\nu}_{i}+a_{i,j})\)\(\dot{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\times\mathbf{Flags}(\nu_{i}-n,\nu_{i},\tilde{\nu}_{i}+a_{i,j})\)\(\dot{\mathbf{E}}_{\mathbf{V}^{\prime\prime\prime},\mathbf{W},i}\times\mathbf{Flags}(\nu_{i}-n,\tilde{\nu}_{i})\)\(\dot{\tilde{\pi}}_{1}\)\(\dot{\tilde{\pi}}_{2}\)\(\dot{\tilde{\pi}}_{2}\)\(\dot{\tilde{\pi}}_{2}\)\(\dot{\tilde{\pi}}_{1}\
and \(\tilde{\mathbf{V}}^{\prime\prime\prime}\) is a subspace of dimension \(\nu_{i}-n\) and \(\mathbf{V}^{1}\subseteq\mathbf{V}^{2}\subseteq(\bigoplus\limits_{h^{\prime}=i} \mathbf{U}_{h^{\prime\prime}})\oplus\mathbf{W}_{\hat{i}}\) is a flag such that \(\dim\mathbf{V}^{1}=\nu_{i}-n,\dim\mathbf{V}^{2}=\nu_{i}\}\).
Let \(Y_{2}^{\prime\prime}\) be the variety consisting of quadruples \((\dot{x}^{\prime},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2})\) such that \((\dot{x}^{\prime},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2})\) satisfies the same conditions as in \(Y_{2}^{\prime}\).
The morphisms \(r_{2},r_{3},r_{4}\) are projections and
\[r_{1}((\dot{x}^{\prime},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2},\dot{ \rho}))=((\dot{\rho})_{*}(\dot{x}^{\prime}|_{\dot{\mathbf{U}}\oplus\dot{ \mathbf{W}}}),\dot{\rho}(\mathbf{V}^{1}),\dot{\rho}(\mathbf{V}^{2})),\]
\[\tilde{\pi}_{1}((\dot{x}^{\prime},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^ {2},\dot{\rho}))=((\dot{\rho})_{*}(\dot{x}^{\prime}|_{\dot{\mathbf{U}}\oplus \dot{\mathbf{W}}}),\dot{\rho}(\mathbf{V}^{2})),\]
\[\tilde{\pi}_{2}((\dot{x}^{\prime},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^ {2}))=(\dot{x}^{\prime},\mathbf{V}^{1}),\]
then the middle square is Cartesian. By base change, we have
\[(\tilde{q}_{3}^{\prime})_{!}(\tilde{q}_{2}^{\prime})_{!}(\tilde{q}_{1}^{\prime })^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1})^{*}\cong(\tilde{\pi}_{2})_{!}(r_{2})_ {!}(\tilde{\pi}_{1}^{*}).\]
Notice that \(\dot{\mathbf{E}}_{\mathbf{V}^{\prime},\mathbf{W},i}=\dot{\mathbf{E}}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}\) and \(\dot{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}=\dot{\mathbf{E}}_{\mathbf{V}^{\prime\prime\prime},\mathbf{W},i}\), so there are natural isomorphisms \(Y_{1}^{\prime}\cong Y_{2}^{\prime}\), \(Y_{1}^{\prime\prime}\cong Y_{2}^{\prime\prime}\). Under these isomorphisms, we have \(\tilde{\pi}_{1}=\pi_{1}\) and \(\tilde{\pi}_{2}=\pi_{2}\), and so
\[(q_{2}^{\prime})_{!}(q_{1}^{\prime})^{*}(q_{3})_{!}(q_{2})_{\flat}(q_{1})^{*}\cong(\tilde{q}_{3}^{\prime})_{!}(\tilde{q}_{2}^{\prime})_{!}(\tilde{q}_{1}^{\prime})^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1})^{*},\]
as desired.
**Lemma 3.14**.: _The functors \(\mathcal{E}_{i}^{(n)}\) for \(n\geqslant 1\) satisfy the following relation_
\[\bigoplus\limits_{0\leqslant m<n}\mathcal{E}_{i}^{(n)}[n-1-2m]\cong\mathcal{E }_{i}^{(n-1)}\mathcal{E}_{i},\ n\geqslant 2,\]
_as endofunctors of the localization \(\coprod\limits_{\mathbf{V}}\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V},i}\)._
Proof.: Take graded spaces \(\mathbf{V},\mathbf{V}^{\prime}\) and \(\mathbf{V}^{\prime\prime}\) such that \(|\mathbf{V}|-|\mathbf{V}^{\prime}|=i\) and \(|\mathbf{V}|-|\mathbf{V}^{\prime\prime}|=ni\), consider the following commutative diagram
where the morphisms are obvious forgetting maps. By base change, we have
\[(q_{2})_{!}(q_{1})^{*}(q_{2}^{\prime})_{!}(q_{1}^{\prime})^{*}\cong (q_{2}\pi)_{!}(q_{1}^{\prime}\pi^{\prime})^{*}\] \[\cong (q_{2}^{\prime\prime})_{!}r_{!}r^{*}(q_{1}^{\prime\prime})^{*}\] \[\cong \bigoplus\limits_{0\leqslant m<n}(q_{2}^{\prime\prime})_{!}(q_{1} ^{\prime\prime})^{*}[-2m],\]
where the last isomorphism holds by the projection formula, since \(r\) is a trivial fiber bundle with fiber isomorphic to \(\mathbb{P}^{n-1}\). Composing with the quasi-inverse equivalences \((\phi_{\mathbf{V},i})^{*},(\phi_{\mathbf{V},i})_{\flat}\) and \((j_{\mathbf{V},i})^{*},(j_{\mathbf{V},i})_{!}\), we get the proof.
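In the Grothendieck group the relation of Lemma 3.14 reads \([n]\,e_{i}^{(n)}=e_{i}^{(n-1)}e_{i}\) (with the notation of the remark following Lemma 3.12), so by induction
\[e_{i}^{(n)}=\frac{e_{i}^{n}}{[n][n-1]\cdots[1]}=\frac{e_{i}^{n}}{[n]!},\]
that is, \(\mathcal{E}_{i}^{(n)}\) plays the role of the \(n\)-th divided power.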
**Lemma 3.15**.: _For \(i\neq j\) in \(I\), \(1\leqslant n\in\mathbb{N}\) and graded spaces \(\mathbf{V},\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}\) and \(\mathbf{V}^{\prime\prime\prime}\) such that \(|\mathbf{V}|+j=|\mathbf{V}^{\prime\prime}|=|\mathbf{V}^{\prime}|+ni=|\mathbf{V}^{\prime\prime\prime}|+ni+j\), the composition of functors_
\[\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})\xrightarrow{\mathcal{E}_{i}^{(n)}}\mathcal{D}_{G_{\mathbf{V}^{\prime\prime\prime}}}^{b}(\mathbf{E}_{\mathbf{V}^{\prime\prime\prime},\mathbf{W},\hat{\Omega}})\xrightarrow{\mathcal{F}_{j}^{\vee}}\mathcal{D}_{G_{\mathbf{V}^{\prime}}}^{b}(\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}^{\prime},i}\]
_and the composition of functors_
\[\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}}) \stackrel{{\mathcal{F}^{\vee}_{j}}}{{\longrightarrow}}\mathcal{D}^{b }_{G_{\mathbf{V}^{\prime\prime}}}(\mathbf{E}_{\mathbf{V}^{\prime\prime}, \mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}^{\prime\prime},i}\stackrel{{ \mathcal{E}^{(n)}_{i}}}{{\longrightarrow}}\mathcal{D}^{b}_{G_{\mathbf{V}^{ \prime}}}(\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\hat{\Omega}})/\mathcal{N }_{\mathbf{V}^{\prime},i}\]
_are isomorphic to each other_
\[\mathcal{E}^{(n)}_{i}\mathcal{F}^{\vee}_{j}\cong\mathcal{F}^{\vee}_{j}\mathcal{ E}^{(n)}_{i}.\]
Proof.: We only prove the lemma when \(i,j\) are connected by some edge; the other case can be proved similarly. By an argument similar to the proof of Lemma 3.13, the functor \(\mathcal{E}^{(n)}_{i}\mathcal{F}^{\vee}_{j}\) is reduced to the following commutative diagram
where \(Y^{\prime\prime}_{1}\) is the fiber product of \(q_{3}\) and \(q^{\prime}_{1}\), and \(Y^{\prime}_{1}\) is the fiber product of \(q_{2}\) and \(r_{1}\).
More precisely, the variety \(Y^{\prime\prime}_{1}\) consists of quadruples \((\dot{x},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2})\), where \(\dot{x}\in\dot{\mathbf{E}}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}\), \(\dot{\mathbf{U}}\subseteq\dot{\mathbf{V}}^{\prime\prime}\) is a \(\dot{x}\)-stable graded subspace and \(\mathbf{V}^{1}\subseteq\mathbf{V}^{2}\subseteq(\bigoplus\limits_{h^{\prime}= i}\mathbf{V}^{\prime\prime}_{h^{\prime\prime}})\oplus\mathbf{W}_{\hat{i}})\) is a flag such that \(|\dot{\mathbf{U}}|=j\) and \(\dim\mathbf{V}^{1}=\nu_{i}-n,\dim\mathbf{V}^{2}=\nu_{i}\).
The variety \(Y^{\prime}_{1}\) consists of quadruples \((\dot{x},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2},\dot{\rho})\) such that \(\dot{x}\in\dot{\mathbf{E}}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}\), \(\dot{\mathbf{U}}\subseteq\dot{\mathbf{V}}^{\prime\prime}\) is \(\dot{x}\)-stable graded subspace, \(\mathbf{V}^{1}\subseteq\mathbf{V}^{2}\subseteq(\bigoplus\limits_{h^{\prime}= i}\mathbf{V}_{h^{\prime\prime}})\oplus\mathbf{W}_{\hat{i}})\) is a flag and \(\dot{\rho}:\dot{\mathbf{V}}^{\prime\prime}\oplus\dot{\mathbf{W}}/\dot{ \mathbf{U}}\rightarrow\dot{\mathbf{V}}\oplus\dot{\mathbf{W}}\) is a linear isomorphism such that \(|\dot{\mathbf{U}}|=j\) and \(\dim\mathbf{V}^{1}=\nu_{i}-n,\dim\mathbf{V}^{2}=\nu_{i}\).
The morphisms \(\pi^{\prime},r_{1},r_{2},r_{3}\) are natural projections and
\[\pi_{1}((\dot{x},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2},\dot{\rho}))= (\dot{\rho}_{*}(\dot{x}|_{\dot{\mathbf{V}}^{\prime\prime}\oplus\dot{\mathbf{W}} /\dot{\mathbf{U}}}),\dot{\rho}(\mathbf{V}^{2}/\dot{\mathbf{U}})),\]
\[\pi_{2}((\dot{x},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2}))=(\dot{x}, \mathbf{V}^{1}).\]
By base change, we have
\[(q^{\prime}_{2})_{!}(q^{\prime}_{1})^{*}(q_{3})_{!}(q_{2})_{!}(q_{1})^{*}= (q^{\prime}_{2})_{!}(r_{3})_{!}(r_{1})^{*}(q_{2})_{!}(q_{1})^{*}\] \[= (q^{\prime}_{2})_{!}(r_{3})_{!}(r_{2})_{!}(\pi^{\prime})^{*}(q_{1})^{*}\] \[= (\pi_{2})_{!}(r_{2})_{!}(\pi_{1})^{*}.\]
Similarly, the functor \(\mathcal{F}^{\vee}_{j}\mathcal{E}^{(n)}_{i}\) is reduced to a commutative diagram which involves \(\dot{\mathbf{E}}_{\mathbf{V}^{\prime},\mathbf{W},i}\times\mathbf{Grass}(\nu_{i}-n,\tilde{\nu}_{i}+a_{i,j})\) and the morphism \(\tilde{\pi}_{2}\), and whose bottom row is
\[\dot{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\times\mathbf{Grass}(\nu_{i},\tilde{\nu}_{i})\xleftarrow{\ \tilde{q}_{1}\ }\dot{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\times\mathbf{Flag}(\nu_{i}-n,\nu_{i},\tilde{\nu}_{i})\xrightarrow{\ \tilde{q}_{2}\ }\dot{\mathbf{E}}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}\times\mathbf{Grass}(\nu_{i}-n,\tilde{\nu}_{i}),\]
where the variety \(Y_{2}^{\prime}\) is the fiber product of \(\tilde{q}_{1}^{\prime}\) and \(\tilde{q}_{2}\).
More precisely, recall that the variety \(\tilde{Z}_{1}\) consists of quadruples \((\dot{x}^{\prime},\dot{\mathbf{U}}\subseteq\dot{\mathbf{V}}^{\prime},\dot{\rho},\tilde{\mathbf{V}}^{\prime\prime\prime}\subseteq\bigoplus\limits_{h^{\prime}=i}\mathbf{V}^{\prime}_{h^{\prime\prime}}\oplus\mathbf{W}_{\dot{i}})\), where \(\dot{x}^{\prime}\in\dot{\mathbf{E}}_{\mathbf{V}^{\prime},\mathbf{W},i}\), \(\dot{\mathbf{U}}\) is a \(\dot{x}^{\prime}\)-stable subspace of \(\dot{\mathbf{V}}^{\prime}\), \(\dot{\rho}:\dot{\mathbf{V}}^{\prime}\oplus\dot{\mathbf{W}}/\dot{\mathbf{U}}\rightarrow\dot{\mathbf{V}}^{\prime\prime\prime}\oplus\dot{\mathbf{W}}\) is a linear isomorphism and \(\tilde{\mathbf{V}}^{\prime\prime\prime}\) is a subspace of dimension \(\nu_{i}-n\), then \(Y_{2}^{\prime}\) consists of quintuples \((\dot{x}^{\prime},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2},\dot{\rho})\) such that \(\dot{x}^{\prime}\in\dot{\mathbf{E}}_{\mathbf{V}^{\prime},\mathbf{W},i}\), \(\dot{\mathbf{U}}\) is a \(\dot{x}^{\prime}\)-stable subspace of \(\dot{\mathbf{V}}^{\prime}\), \(\dot{\rho}:\dot{\mathbf{V}}^{\prime}\oplus\dot{\mathbf{W}}/\dot{\mathbf{U}}\rightarrow\dot{\mathbf{V}}^{\prime\prime\prime}\oplus\dot{\mathbf{W}}\) is a linear isomorphism and \(\tilde{\mathbf{V}}^{\prime\prime\prime}\) is a subspace of dimension \(\nu_{i}-n\) and \(\mathbf{V}^{1}\subseteq\mathbf{V}^{2}\subseteq(\bigoplus\limits_{h^{\prime}=i}\mathbf{V}^{\prime}_{h^{\prime\prime}})\oplus\mathbf{W}_{\dot{i}}\) is a flag such that \(\dim\mathbf{V}^{1}=\nu_{i}-n,\dim\mathbf{V}^{2}=\nu_{i}\).
Let \(Y_{2}^{\prime\prime}\) be the variety consisting of quadruples \((\dot{x}^{\prime},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2})\) such that \((\dot{x}^{\prime},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2})\) satisfies the same conditions as in \(Y_{2}^{\prime}\).
The morphisms \(r_{2},r_{3},r_{4}\) are projections and
\[r_{1}((\dot{x}^{\prime},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2},\dot{\rho}))=((\dot{\rho})_{*}(\dot{x}^{\prime}|_{\dot{\mathbf{V}}^{\prime}\oplus\dot{\mathbf{W}}/\dot{\mathbf{U}}}),\dot{\rho}(\mathbf{V}^{1}/\dot{\mathbf{U}}),\dot{\rho}(\mathbf{V}^{2}/\dot{\mathbf{U}})),\]
\[\tilde{\pi}_{1}((\dot{x}^{\prime},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V}^{2},\dot{\rho}))=((\dot{\rho})_{*}(\dot{x}^{\prime}|_{\dot{\mathbf{V}}^{\prime}\oplus\dot{\mathbf{W}}/\dot{\mathbf{U}}}),\dot{\rho}(\mathbf{V}^{2}/\dot{\mathbf{U}})),\]
\[\tilde{\pi}_{2}((\dot{x}^{\prime},\dot{\mathbf{U}},\mathbf{V}^{1},\mathbf{V} ^{2}))=(\dot{x}^{\prime},\mathbf{V}^{1}),\]
then the middle square is Cartesian. By base change, we have
\[(\tilde{q}_{3}^{\prime})_{!}(\tilde{q}_{2}^{\prime})_{!}(\tilde{q}_{1}^{\prime})^{*}(\tilde{q}_{2})_{!}(\tilde{q}_{1})^{*}\cong(\tilde{\pi}_{2})_{!}(r_{2})_{\flat}(\tilde{\pi}_{1})^{*}.\]
Notice that \(\dot{\mathbf{E}}_{\mathbf{V}^{\prime\prime},\mathbf{W},i}\cong\dot{\mathbf{E}}_{\mathbf{V}^{\prime},\mathbf{W},i}\), hence \(Y_{2}^{\prime}\cong Y_{1}^{\prime}\) and \(Y_{2}^{\prime\prime}\cong Y_{1}^{\prime\prime}\). Moreover, \(\tilde{\pi}_{1},\tilde{\pi}_{2}\) are equal to \(\pi_{1},\pi_{2}\) respectively via these isomorphisms. We get the proof.
**Corollary 3.16**.: _For \(i\neq j\) in \(I\), \(1\leqslant n\in\mathbb{N}\) and graded spaces \(\mathbf{V},\mathbf{V}^{\prime}\) such that \(|\mathbf{V}|-ni=|\mathbf{V}^{\prime}|\), the functor \(\mathcal{E}_{i}^{(n)}:\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V},i}\rightarrow\mathcal{D}_{G_{\mathbf{V}^{\prime}}}^{b}(\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}^{\prime},i}\) sends objects of \(\mathcal{N}_{\mathbf{V},j}\) to objects of \(\mathcal{N}_{\mathbf{V}^{\prime},j}\). In particular, \(\mathcal{E}_{i}^{(n)}\) induces a functor between global localizations \(\mathcal{E}_{i}^{(n)}:\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}}\rightarrow\mathcal{D}_{G_{\mathbf{V}^{\prime}}}^{b}(\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}^{\prime}}\)._
Proof.: Since \(\mathcal{N}_{\mathbf{V},j}\) is a thick subcategory, it suffices to verify the claim for simple objects. Let \(A\) be a simple perverse sheaf in \(\mathcal{N}_{\mathbf{V},j}\), then by Proposition 2.11, there exists an object \(B\in\mathcal{D}_{G_{\mathbf{V}^{\prime\prime}}}^{b,ss}(\mathbf{E}_{\mathbf{V}^{\prime\prime},\mathbf{W},\hat{\Omega}})\) such that \(|\mathbf{V}^{\prime\prime}|+j=|\mathbf{V}|\) and \(A\) is a direct summand of \(\mathbf{Ind}_{\mathbf{V}^{\prime\prime}\oplus\mathbf{W},\mathbf{V}^{\prime\prime\prime}}^{\mathbf{V}\oplus\mathbf{W}}(B\boxtimes\overline{\mathbb{Q}}_{l})\cong\mathcal{F}_{j}^{\vee}(B)\) in \(\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V},i}\), where \(|\mathbf{V}^{\prime\prime\prime}|=j\). Hence \(\mathcal{E}_{i}^{(n)}(A)\) is a direct summand of \(\mathcal{E}_{i}^{(n)}\mathcal{F}_{j}^{\vee}(B)\cong\mathcal{F}_{j}^{\vee}\mathcal{E}_{i}^{(n)}(B)\), which is contained in \(\mathcal{N}_{\mathbf{V}^{\prime},j}\) by Proposition 2.11.
**Corollary 3.17**.: _The functor \(\mathcal{E}_{i}^{(n)}:\mathcal{D}_{G_{\mathbf{V}}}^{b}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}}\rightarrow\mathcal{D}_{G_{\mathbf{V}^{\prime}}}^{b}(\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}^{\prime}}\) restricts to a functor \(\mathcal{E}_{i}^{(n)}:\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\rightarrow\mathcal{Q}_{\mathbf{V}^{\prime},\mathbf{W}}/\mathcal{N}_{\mathbf{V}^{\prime}}\)._
Proof.: It suffices to prove that \(\mathcal{E}_{i}^{(n)}(L_{\underline{\nu}\underline{d}})\) is isomorphic to a direct sum of some \(L_{\underline{\nu}^{\prime}\underline{d}}\). We argue by induction on the length of \(\underline{\nu}\) and \(n\).
Without loss of generality, we can replace the flag type \(\underline{\nu}=(i_{1}^{a_{1}},i_{2}^{a_{2}},\cdots,i_{k}^{a_{k}})\) by
\[(i_{1},\cdots,i_{1},i_{2},\cdots,i_{2},\cdots,i_{k},\cdots,i_{k})\]
such that each \(i_{l}\) appears repeatedly for \(a_{l}\) times for \(1\leqslant l\leqslant k\), then \(L_{\underline{\nu}\underline{d}}=\mathcal{F}_{i_{1}}L_{\underline{\nu}^{ \prime}\underline{d}}\) for \(\underline{\nu}^{\prime}=(i_{1},i_{1},\cdots,i_{k})\) such that \(i_{1}\) appears for \(a_{1}-1\) times and the other \(i_{l}\) appear repeatedly for \(a_{l}\) times for \(1<l\leqslant k\). Assume \(|\mathbf{V}^
summand of \(\mathcal{F}_{i_{1}}\mathcal{E}_{i}^{(n)}L_{\nu^{\prime}\underline{d}}\). By inductive hypothesis and the fact that \(\mathcal{F}_{i_{1}}\) sends an object of \(\mathcal{Q}_{\mathbf{V}^{\prime\prime\prime},\mathbf{W}}\) to an object of \(\mathcal{Q}_{\mathbf{V}^{\prime},\mathbf{W}}\), we finish the proof.
**Corollary 3.18**.: _For \(i\neq j\) and graded spaces such that \(|\mathbf{V}^{\prime}|+ni=|\mathbf{V}|+j\), there is an isomorphism of functors between global localizations \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\to\mathcal{Q}_{ \mathbf{V}^{\prime},\mathbf{W}}/\mathcal{N}_{\mathbf{V}^{\prime}}\)_
\[\mathcal{E}_{i}^{(n)}\mathcal{F}_{j}\cong\mathcal{F}_{j}\mathcal{E}_{i}^{(n)}.\]
_For \(i=j\) and graded spaces such that \(|\mathbf{V}^{\prime}|+(n-1)i=|\mathbf{V}|\), let \(N=2\nu_{i}-\tilde{\nu}_{i}-n+1\), then there is an isomorphism of functors between global localizations \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\to\mathcal{Q}_{ \mathbf{V}^{\prime},\mathbf{W}}/\mathcal{N}_{\mathbf{V}^{\prime}}\)_
\[\mathcal{E}_{i}^{(n)}\mathcal{F}_{i}\oplus\bigoplus_{0\leqslant m\leqslant N -1}\mathcal{E}_{i}^{(n-1)}[N-1-2m]\cong\mathcal{F}_{i}\mathcal{E}_{i}^{(n)} \oplus\bigoplus_{0\leqslant m\leqslant-N-1}\mathcal{E}_{i}^{(n-1)}[-2m-N-1].\]
_More precisely, in this case we have_
\[\mathcal{E}_{i}^{(n)}\mathcal{F}_{i}\cong\mathcal{F}_{i}\mathcal{ E}_{i}^{(n)}, \text{if }N=0;\] \[\mathcal{E}_{i}^{(n)}\mathcal{F}_{i}\oplus\bigoplus_{0\leqslant m \leqslant N-1}\mathcal{E}_{i}^{(n-1)}[N-1-2m]\cong\mathcal{F}_{i}\mathcal{E}_ {i}^{(n)}, \text{if }N\geqslant 1;\] \[\mathcal{E}_{i}^{(n)}\mathcal{F}_{i}\cong\mathcal{F}_{i}\mathcal{ E}_{i}^{(n)}\oplus\bigoplus_{0\leqslant m\leqslant-N-1}\mathcal{E}_{i}^{(n-1)}[-2m-N-1], \text{if }N\leqslant-1.\]
### The integrable highest weight modules
In this subsection, we fix an orientation \(\Omega\).
**Definition 3.19**.: _For \(n\in\mathbb{N}\) and \(i\in I\), we define functors between \(\mathcal{L}_{\mathbf{V}}(\Lambda)=\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{ N}_{\mathbf{V}}\) as follows._
_(1) For graded spaces \(\mathbf{V}\) and \(\mathbf{V}^{\prime}\) such that \(|\mathbf{V}|=|\mathbf{V}^{\prime}|+ni\), define \(E_{i}^{(n)}:\mathcal{L}_{\mathbf{V}}(\Lambda)\to\mathcal{L}_{\mathbf{V}^{ \prime}}(\Lambda)\) via_
\[E_{i}^{(n)}=\mathcal{F}_{\hat{\Omega}^{i},\hat{\Omega}}\mathcal{E}_{i}^{(n)} \mathcal{F}_{\hat{\Omega},\hat{\Omega}^{i}}.\]
_(2) For graded spaces \(\mathbf{V}\) and \(\mathbf{V}^{\prime\prime}\) such that \(|\mathbf{V}|+ni=|\mathbf{V}^{\prime\prime}|\), define \(F_{i}^{(n)}:\mathcal{L}_{\mathbf{V}}(\Lambda)\to\mathcal{L}_{\mathbf{V}^{ \prime\prime}}(\Lambda)\) via_
\[F_{i}^{(n)}=\mathcal{F}_{i}^{(n)}.\]
_(3) Define \(K_{i}:\mathcal{L}_{\mathbf{V}}(\Lambda)\to\mathcal{L}_{\mathbf{V}}(\Lambda)\) via_
\[K_{i}=\text{Id}\;[2\nu_{i}-\tilde{\nu}_{i}].\]
_In particular, we denote by \(E_{i}=E_{i}^{(1)}\) and \(F_{i}=F_{i}^{(1)}\). Note that \(K_{i}\) is invertible, and we define \(K_{i}^{-}=\text{Id}\;[\tilde{\nu}_{i}-2\nu_{i}]\) to be its inverse._
**Remark 3.20**.: _Even though we need to choose an orientation \(\Omega^{i}\) to define \(\mathcal{E}_{i}^{(n)}\) for each \(i\), it is easy to see that the definition of the functor \(E_{i}^{(n)},i\in I\) does not depend on the choices of \(\hat{\Omega}^{i}\) and \(\hat{\Omega}\)._
By definitions and Corollary 3.18, we obtain the following proposition.
**Proposition 3.21**.: _The functors \(E_{i}\), \(F_{i}\) and \(K_{i},i\in I\) satisfy the following relations_
\[K_{i}K_{j}=K_{j}K_{i},\]
\[E_{i}K_{j}=K_{j}E_{i}[-a_{j,i}],\]
\[F_{i}K_{j}=K_{j}F_{i}[a_{i,j}],\]
\[E_{i}F_{j}=F_{j}E_{i}\text{ for }i\neq j,\]
\[E_{i}F_{i}\oplus\bigoplus_{0\leqslant m\leqslant N-1}\text{Id}[N-1-2m] \cong F_{i}E_{i}\oplus\bigoplus_{0\leqslant m\leqslant-N-1}\text{Id}[-2m-N-1] :\mathcal{L}_{\mathbf{V}}(\Lambda)\to\mathcal{L}_{\mathbf{V}}(\Lambda),\]
_as endofunctors of \(\mathcal{L}(\Lambda)=\coprod\mathcal{L}_{\mathbf{V}}(\Lambda)\), where \(N=\tilde{\nu}_{i}-2\nu_{i}\)._
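Passing to the Grothendieck group \(\mathcal{K}_{0}(\Lambda)\) of Definition 3.25 below, where a shift \([1]\) becomes multiplication by \(v\), the functor \(K_{i}\) acts on the summand coming from \(\mathcal{L}_{\mathbf{V}}(\Lambda)\) by the scalar \(v^{2\nu_{i}-\tilde{\nu}_{i}}=v^{-N}\). Writing \([N]=\frac{v^{N}-v^{-N}}{v-v^{-1}}\) (a shorthand used only in this remark), the last relation of Proposition 3.21 then specializes to
\[E_{i}F_{i}-F_{i}E_{i}=[-N]=\frac{v^{-N}-v^{N}}{v-v^{-1}}=\frac{K_{i}-K_{i}^{-1}}{v-v^{-1}},\]
while the second and third relations become \(K_{j}E_{i}K_{j}^{-1}=v^{a_{j,i}}E_{i}\) and \(K_{j}F_{i}K_{j}^{-1}=v^{-a_{i,j}}F_{i}\); these are exactly the relations of \(\mathbf{U}\) used in Theorem 3.26.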
By Theorem 2.3, we also have the following proposition.
**Proposition 3.22**.: _The functors \(F_{i}\) for \(i\in I\) satisfy the following relations_
\[\bigoplus_{0\leqslant m\leqslant 1-a_{i,j},m\text{ is odd}}F_{i}^{(m)}F_{j}F_{i}^{(1-a_{i,j}-m)} \cong\bigoplus_{0\leqslant m\leqslant 1-a_{i,j},m\text{ is even}}F_{i}^{(m)}F_{j}F_{i}^{(1-a_{i,j}-m)},\]
\[\bigoplus_{0\leqslant m<n}F_{i}^{(n)}[n-1-2m]\cong F_{i}^{(n-1)}F_{i}\text{ for }n\geqslant 2,\]
_as endofunctors of \(\mathcal{L}(\Lambda)=\coprod\mathcal{L}_{\mathbf{V}}(\Lambda)\)._
Proof.: Notice that for \(\underline{\nu}=(i_{1}^{a_{1}},\cdots,i_{k}^{a_{k}})\), by definition we have \(F_{i_{1}}^{(a_{1})}F_{i_{2}}^{(a_{2})}\cdots F_{i_{k}}^{(a_{k})}=\mathbf{Ind}_{\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}\oplus\mathbf{W}}^{\mathbf{V}\oplus\mathbf{W}}(L_{\underline{\nu}}\boxtimes-)\). Then the proposition follows from Theorem 2.3.
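In \(\mathcal{K}_{0}(\Lambda)\), the two isomorphisms of Proposition 3.22 become the quantum Serre relation and the divided power relation for the operators \(F_{i}\), namely
\[\sum_{0\leqslant m\leqslant 1-a_{i,j}}(-1)^{m}F_{i}^{(m)}F_{j}F_{i}^{(1-a_{i,j}-m)}=0\qquad\text{and}\qquad[n]\,F_{i}^{(n)}=F_{i}^{(n-1)}F_{i}\ (n\geqslant 2),\]
so that \(F_{i}^{(n)}\) acts as \(F_{i}^{n}/[n]!\); the analogous identities for the operators \(E_{i}\) follow in the same way from Proposition 3.23 below.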
**Proposition 3.23**.: _The functors \(E_{i}\) for \(i\in I\) satisfy the following relations_
\[\bigoplus_{0\leqslant m\leqslant 1-a_{i,j},m\text{ odd}}E_{i}^{(m)}E_{j}E_{i} ^{(1-a_{i,j}-m)}(L)\cong\bigoplus_{0\leqslant m\leqslant 1-a_{i,j},m\text{ even}}E_{i}^{(m)}E_{j}E_{i} ^{(1-a_{i,j}-m)}(L)\]
_for any object \(L\) and_
\[\bigoplus_{0\leqslant m<n}E_{i}^{(n)}[n-1-2m]\cong E_{i}^{(n-1)}E_{i},n \geqslant 2,\]
_as endofunctors of \(\mathcal{L}(\Lambda)=\coprod\mathcal{L}_{\mathbf{V}}(\Lambda)\)._
Proof.: The second isomorphism follows from Lemma 3.15. We only need to prove the first one. For any \(L\in\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\), let \(N=2\nu_{i}-\tilde{\nu}_{i}\). By Corollary 3.18, if \(m\geqslant 1+N\), we have
\[E_{i}^{(1-a_{i,j}-m)}E_{j}E_{i}^{(m)}F_{i}(L) \cong F_{i}E_{i}^{(1-a_{i,j}-m)}E_{j}E_{i}^{(m)}(L)\] \[\oplus \bigoplus_{0\leqslant l\leqslant-N-2+m}E_{i}^{(1-a_{i,j}-m)}E_{j} E_{i}^{(m-1)}(L)[-N-2+m-2l]\] \[\oplus \bigoplus_{0\leqslant l\leqslant-N+m-1}E_{i}^{(1-a_{i,j}-m-1)}E_{j }E_{i}^{(m)}(L)[-N+m-1-2l]. \tag{8}\]
Otherwise, if \(m<1+N\), or equivalently \(m\leqslant N\), we have
\[F_{i}E_{i}^{(1-a_{i,j}-m)}E_{j}E_{i}^{(m)}(L) \cong E_{i}^{(1-a_{i,j}-m)}E_{j}E_{i}^{(m)}F_{i}(L)\] \[\oplus \bigoplus_{0\leqslant l\leqslant N-2-m}E_{i}^{(1-a_{i,j}-m)}E_{j} E_{i}^{(m-1)}(L)[N-2-m-2l]\] \[\oplus \bigoplus_{0\leqslant l\leqslant N-m-1}E_{i}^{(1-a_{i,j}-m-1)}E_{ j}E_{i}^{(m)}(L)[N-m-1-2l]. \tag{9}\]
Let \(M=2\nu_{j}-\tilde{\nu}_{j}-ma_{i,j}\). By Corollary 3.18, if \(M\leqslant 0\), we have
\[E_{i}^{(1-a_{i,j}-m)}E_{j}E_{i}^{(m)}F_{j}(L) \cong F_{j}E_{i}^{(1-a_{i,j}-m)}E_{j}E_{i}^{(m)}(L)\] \[\oplus \bigoplus_{0\leqslant l\leqslant-M-1}E_{i}^{(1-a_{i,j}-m)}E_{i}^{ (m)}(L)[-M-1-2l]. \tag{10}\]
If \(M>0\), we have
\[F_{j}E_{i}^{(1-a_{i,j}-m)}E_{j}E_{i}^{(m)}(L) \cong E_{i}^{(1-a_{i,j}-m)}E_{j}E_{i}^{(m)}F_{j}(L)\] \[\oplus \bigoplus_{0\leqslant l\leqslant M-1}E_{i}^{(1-a_{i,j}-m)}E_{i}^{ (m)}(L)[M-1-2l]. \tag{11}\]
For \(k\neq i,j\), we have
\[F_{k}E_{i}^{(1-a_{i,j}-m)}E_{j}E_{i}^{(m)}(L)\cong E_{i}^{(1-a_{i,j}-m)}E_{j}E_{ i}^{(m)}F_{k}(L). \tag{12}\]
We claim that
\[\bigoplus_{0\leqslant m\leqslant 1-a_{i,j},m\ odd}E_{i}^{(m)}E_{j}E_{i}^{(1-a_{i,j }-m)}(L_{\underline{\nu}\underline{d}})\cong\bigoplus_{0\leqslant m\leqslant 1 -a_{i,j},m\ even}E_{i}^{(m)}E_{j}E_{i}^{(1-a_{i,j}-m)}(L_{\underline{\nu} \underline{d}})\]
for any flag type \(\underline{\nu}\). We replace \(\underline{\nu}=(i_{1}^{a_{1}},i_{2}^{a_{2}},\cdots,i_{k}^{a_{k}})\) by \(\underline{\nu}^{\prime}=(i_{1},\cdots,i_{1},i_{2},\cdots,i_{2},\cdots,i_{k}, \cdots,i_{k})\). Then using the commutation relations above, we can argue by induction on length of \(\underline{\nu}^{\prime}\) to show that
\[\bigoplus_{0\leqslant m\leqslant 1-a_{i,j},m\ odd}E_{i}^{(m)}E_{j}E_{i}^{(1-a _{i,j}-m)}(L_{\underline{\nu}^{\prime}\underline{d}})\cong\bigoplus_{0 \leqslant m\leqslant 1-a_{i,j},m\ even}E_{i}^{(m)}E_{j}E_{i}^{(1-a_{i,j}-m)}(L_{ \underline{\nu}^{\prime}\underline{d}}).\]
Since \(L_{\underline{\nu}^{\prime}\underline{d}}\) is isomorphic to a direct sum of shifts of \(L_{\underline{\nu}\underline{d}}\), the claim is proved.
By Proposition 2.9, for any simple perverse sheaf \(L\), we can find families of flag types \(\underline{\tau}\) and \(\underline{\omega}\) such that
\[L\oplus\bigoplus_{\tau,n}L_{\underline{\tau}\underline{d}}^{\oplus N(\tau,n)}[ n]\cong\bigoplus_{\omega,n}L_{\underline{\omega}\underline{d}}^{\oplus N(\omega,n)}[ n],\]
where \(N(\tau,n)\) and \(N(\omega,n)\) are multiplicities. Using the claim, we can see that
\[\bigoplus_{0\leqslant m\leqslant 1-a_{i,j},m\ odd}E_{i}^{(m)}E_{j}E_{i}^{(1-a _{i,j}-m)}(L)\cong\bigoplus_{0\leqslant m\leqslant 1-a_{i,j},m\ even}E_{i}^{(m)}E_{j}E_ {i}^{(1-a_{i,j}-m)}(L)\]
for any perverse sheaf \(L\).
**Proposition 3.24**.: _The functors \(F_{i}^{(n)},E_{i}^{(n)},K_{i}\) for \(n\in\mathbb{N},i\in I\) and Verdier duality functor \(\mathbf{D}\) satisfy the following relations_
\[F_{i}^{(n)}\mathbf{D} \cong\mathbf{D}F_{i}^{(n)},\] \[E_{i}^{(n)}\mathbf{D} \cong\mathbf{D}E_{i}^{(n)},\] \[K_{i}\mathbf{D} \cong\mathbf{D}(K_{i})^{-1}.\]
Proof.: The first relation holds since the induction functors commute with the Verdier duality functor. The last relation can be easily checked by definition. We only prove the second relation for \(n=1\); the other cases can be proved by a similar argument. Notice that \(E_{i}\) can be written as
\[E_{i}=(j_{\mathbf{V}^{\prime},i})_{!}((\phi_{\mathbf{V}^{\prime},i})^{*}[(\nu^{\prime}_{i})^{2}])(q_{2})_{!}((q_{1})^{*}[\tilde{\nu}_{i}-1])((\phi_{\mathbf{V},i})_{\flat}[-\nu^{2}_{i}])(j_{\mathbf{V},i})^{*}.\]
Since \(q_{2}\) is proper, \((q_{2})_{!}\) commutes with \(\mathbf{D}\). Since \((j_{\mathbf{V}^{\prime},i})_{!}\cong(j_{\mathbf{V}^{\prime},i})_{*}:\mathcal{D}_{G_{\mathbf{V}^{\prime}}}^{b}(\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},i}^{0})\to\mathcal{D}_{G_{\mathbf{V}^{\prime}}}^{b}(\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}^{\prime},i}\), we see that \((j_{\mathbf{V}^{\prime},i})_{!}\) also commutes with \(\mathbf{D}\). Since each functor in the expression commutes with the Verdier duality functor, so does \(E_{i}\).
**Definition 3.25**.: _Define \(\mathcal{K}_{0}(\Lambda)=\mathcal{K}_{0}(\mathcal{L}(\Lambda))\) to be the Grothendieck group of \(\mathcal{L}(\Lambda)\), which can be endowed with a \(\mathcal{A}\)-module structure. More precisely, \(\mathcal{K}_{0}(\Lambda)\) is the \(\mathcal{A}\)-module spanned by objects \([L]\) in \(\mathcal{L}(\Lambda)\) subject to relations_
\[[X\oplus Y]=[X]+[Y],\]
\[[X[1]]=v[X].\]
_Similarly, we denote the Grothendieck group of \(\mathcal{L}_{\mathbf{V}}(\Lambda)\) by \(\mathcal{K}_{0,|\mathbf{V}|}(\Lambda)\)._
_The functors \(E_{i}^{(n)},F_{i}^{(n)},K_{i}^{\pm}\) for \(n\in\mathbb{N},i\in I\) induce \(\mathcal{A}\)-linear operators on \(\mathcal{K}_{0}(\Lambda)\), and we still denote these operators by \(E_{i}^{(n)},F_{i}^{(n)},K_{i}^{\pm}\) for \(n\in\mathbb{N},i\in I\), respectively._
**Theorem 3.26**.: _The linear operators induced by the functors \(E_{i}^{(n)},F_{i}^{(n)},K_{i}^{\pm}\) for \(n\in\mathbb{N},i\in I\) define a \({}_{\mathcal{A}}\mathbf{U}\)-module structure on \(\mathcal{K}_{0}(\Lambda)\) which is isomorphic to the integrable highest weight \({}_{\mathcal{A}}\mathbf{U}\)-module \({}_{\mathcal{A}}L(\Lambda)\) via the canonical isomorphism_
\[\varsigma^{\Lambda}:\mathcal{K}_{0}(\Lambda)\to{}_{\mathcal{A}}L(\Lambda),\]
_such that \(\varsigma^{\Lambda}\) sends the constant sheaf \([L_{0}]=[\overline{\mathbb{Q}}_{l}]\) on \(\mathbf{E}_{0,\mathbf{W},\hat{\Omega}}\) to the highest weight vector \(v_{\Lambda}\in{}_{\mathcal{A}}L(\Lambda)\). Moreover, the set \(\bigcup\limits_{\mathbf{V}}\{\varsigma^{\Lambda}([L])\mid L\) is a nonzero simple perverse sheaf in \(\mathcal{L}_{\mathbf{V}}(\Lambda)\}\) forms a bar-invariant \(\mathcal{A}\)-basis of \({}_{\mathcal{A}}L(\Lambda)\), which is exactly the canonical basis of \({}_{\mathcal{A}}L(\Lambda)\) constructed by Lusztig._
Proof.: By Propositions 3.21, 3.22 and 3.23, \(\mathcal{K}_{0}(\Lambda)\) is a \(\mathbf{U}\)-module. By Lemma 2.8 and Theorem 2.3, for any simple perverse sheaf \(L\), its image \([L]\in\mathcal{K}_{0}(\Lambda)\) can be written as an \(\mathcal{A}\)-linear combination of some \([L_{\underline{\nu}\underline{d}}]=[F_{i_{1}}^{(a_{1})}F_{i_{2}}^{(a_{2})}\cdots F_{i_{k}}^{(a_{k})}L_{0}]\), hence \(\mathcal{K}_{0}(\Lambda)\) is a highest weight module, where \([L_{0}]\) is the highest weight vector.
It remains to prove that \(\mathcal{K}_{0}(\Lambda)\) is integrable. Let \(L\) be a simple perverse sheaf in \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\). On the one hand, for \(N>\nu_{i}\), we have \(E_{i}^{(N)}([L])=0\). On the other hand, note that if \(\nu_{i}-\sum\limits_{h\in\Omega,h^{\prime}=i}\nu_{h^{\prime\prime}}-d_{i}>0\), then \(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}^{i}}^{\geqslant 1}=\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}^{i}}\), so any object of \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\) belongs to \(\mathcal{N}_{\mathbf{V},i}\) and \(\mathcal{L}_{\mathbf{V}}(\Lambda)=0\). For large enough \(N\), we have \(F_{i}^{(N)}([L])\in\mathcal{L}_{\mathbf{V}^{\prime}}(\Lambda)\), where \(\mathbf{V}^{\prime}\) satisfies \(\nu_{i}^{\prime}-\sum\limits_{h^{\prime}=i}\nu_{h^{\prime\prime}}^{\prime}-d_{i}>0\), and so \(F_{i}^{(N)}([L])=0\).
It is clear that nonzero simple perverse sheaves form a bar-invariant \({\mathcal{A}}\)-basis of \({}_{\mathcal{A}}L(\Lambda)\). We only need to compare this basis with the canonical basis.
Indeed, recall that via the identification \(\mathcal{K}=\bigoplus\limits_{\mathbf{V}}K_{0}(\mathcal{Q}_{\mathbf{V}})\cong{}_{\mathcal{A}}\mathbf{U}^{-}\), the left ideal \({}_{\mathcal{A}}\mathbf{U}^{-}f_{i}^{(\langle\Lambda,\alpha_{i}^{\vee}\rangle+1)}\) is exactly the \(\mathcal{A}\)-module spanned by Lusztig's sheaves supported in \(\mathbf{E}_{\mathbf{V},\Omega^{i}}^{\geqslant d_{i}+1}\) by Proposition 2.11. Since \((\pi_{\mathbf{W}})^{-1}(\mathbf{E}_{\mathbf{V},\Omega^{i}}^{\geqslant d_{i}+1})\subseteq\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}^{i}}^{\geqslant 1}\), \(\operatorname{supp}(A)\subseteq\mathbf{E}_{\mathbf{V},\Omega^{i}}^{\geqslant d_{i}+1}\) implies that \((\pi_{\mathbf{W}})^{*}(A)\) is contained in \(\mathcal{N}_{\mathbf{V},i}\) for any \(A\). Otherwise, if \(\operatorname{supp}(A)\cap\mathbf{E}_{\mathbf{V},\Omega^{i}}^{\leqslant d_{i}}\neq\emptyset\), then \(\operatorname{supp}((\pi_{\mathbf{W}})^{*}(A))\cap\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}^{i}}^{0}\neq\emptyset\). In conclusion, a simple Lusztig sheaf \(L\in\mathcal{P}_{\mathbf{V}}\) induces a zero object \((\pi_{\mathbf{W}})^{*}(L)\) in \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\) if and only if its image is contained in the left ideal \({}_{\mathcal{A}}\mathbf{U}^{-}f_{i}^{(\langle\Lambda,\alpha_{i}^{\vee}\rangle+1)}\) for some \(i\). Let \(\mathcal{I}\) be the left ideal \(\sum\limits_{i\in I}{}_{\mathcal{A}}\mathbf{U}^{-}f_{i}^{(\langle\Lambda,\alpha_{i}^{\vee}\rangle+1)}\) and \(\tilde{\pi}\) be the canonical projection \({}_{\mathcal{A}}\mathbf{U}^{-}\to{}_{\mathcal{A}}\mathbf{U}^{-}/\mathcal{I}\cong{}_{\mathcal{A}}L(\Lambda)\), then we have the following commutative diagram and recover Lusztig's construction.
where \(\pi:{\mathcal{K}}\to{\mathcal{K}}_{0}(\Lambda)\) is the \({\mathcal{A}}\)-linear map induced by the composition of functors
\[{\mathcal{Q}}_{\mathbf{V}}\xrightarrow{(\pi_{\mathbf{W}})^{*}[\sum\limits_{i\in I}\nu_{i}d_{i}]}{\mathcal{Q}}_{\mathbf{V},\mathbf{W}}\xrightarrow{\text{natural functor}}{\mathcal{Q}}_{\mathbf{V},\mathbf{W}}/{\mathcal{N}}_{\mathbf{V}}.\]
In particular, the set \(\bigcup\limits_{\mathbf{V}}\{\varsigma^{\Lambda}([L])|L\) is a nonzero simple perverse sheaf in \({\mathcal{L}}_{\mathbf{V}}(\Lambda)\}\) is exactly identified with the subset of \({\mathcal{P}}_{\mathbf{V}}\) consisting of \([L]\) such that \(\tilde{\pi}(\varsigma([L]))\neq 0\), which is exactly the canonical basis of \({}_{\mathcal{A}}L(\Lambda)\) constructed by Lusztig.
**Remark 3.27**.: _Notice that the nonzero simple objects in \({\mathcal{Q}}_{\mathbf{V},\mathbf{W}}/{\mathcal{N}}_{\mathbf{V}}\) are exactly the simple perverse sheaves in \({\mathcal{Q}}_{\mathbf{V},\mathbf{W}}\) but not in \({\mathcal{N}}_{\mathbf{V}}\). We denote the set of these simple perverse sheaves by \({\mathcal{P}}_{\mathbf{V}}\backslash{\mathcal{N}}_{\mathbf{V}}\), then one can see that these simple objects form a basis of \({}_{\mathcal{A}}L(\Lambda)\) and_
\[|{\mathcal{P}}_{\mathbf{V}}\backslash{\mathcal{N}}_{\mathbf{V}}|=\dim_{{\mathbb{Q}}(v)}L_{|\mathbf{V}|}(\Lambda).\]
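For example, if \(Q\) consists of a single vertex \(i\) with no arrows and \(\langle\Lambda,\alpha_{i}^{\vee}\rangle=d_{i}=d\), then every weight space of \(L(\Lambda)=L(d\Lambda_{i})\) is at most one-dimensional, so for each \(|\mathbf{V}|=ni\) with \(0\leqslant n\leqslant d\) there is exactly one nonzero simple object in \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\), and there is none for \(n>d\).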
Following [27], the Euler form of the localization defines a bilinear form on \({\mathcal{K}}_{0}(\Lambda)\) as follows.
**Definition 3.28**.: _Define a bilinear form \((-,-)^{\Lambda}\) on \({\mathcal{K}}_{0}(\Lambda)\) by:_
\[([A],[B])^{\Lambda}=\sum\limits_{n\geqslant 0}\dim\mathrm{Ext}^{n}_{{\mathcal{D}}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})/{\mathcal{N}}_{\mathbf{V}}}(A,\mathbf{D}B)v^{-n},\]
_for any objects \(A,B\) of \(\mathcal{L}_{\mathbf{V}}(\Lambda)\). Otherwise, if \(A\) is an object of \(\mathcal{L}_{\mathbf{V}}(\Lambda)\) and \(B\) is an object of \(\mathcal{L}_{\mathbf{V}^{\prime}}(\Lambda)\) such that \(|\mathbf{V}|\neq|\mathbf{V}^{\prime}|\), define_
\[([A],[B])^{\Lambda}=0.\]
**Proposition 3.29**.: _For any \(i\in I\), the bilinear form \((-,-)^{\Lambda}\) is contravariant with respect to \(F_{i}\) and \(E_{i}\). More precisely, for any objects \(A,B\),_
\[([F_{i}A],[B])^{\Lambda}=([A],v^{-1}[K_{i}E_{i}B])^{\Lambda}.\]
_Moreover, for simple perverse sheaves \(L\) and \(L^{\prime}\), (1) If \([L]\neq[L^{\prime}]\), then \(([L],[L^{\prime}])^{\Lambda}\in v^{-1}\mathbb{Z}[[v^{-1}]]\cap\mathbb{Q}(v)\); (2) If \(L\) is a nonzero simple perverse sheaf in \(\mathcal{L}_{\mathbf{V}}(\Lambda)\), then \(([L],[L])^{\Lambda}\in 1+v^{-1}\mathbb{Z}[[v^{-1}]]\cap\mathbb{Q}(v)\)._
Proof.: The proof is similar to that of [27, Theorem 4.13]. Since simple perverse sheaves are self-dual, the almost orthogonality follows from the perverse \(t\)-structure of \(\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}}\).
Recall that for any objects \(A\) in \(\mathcal{D}^{b}_{G_{\mathbf{V}}}(\hat{\mathbf{E}}_{\mathbf{V},\mathbf{W},i} \times\mathbf{Grass}(\nu_{i},\tilde{\nu}_{i}))\) and \(B\) in \(\mathcal{D}^{b}_{G_{\mathbf{V}}}(\hat{\mathbf{E}}_{\mathbf{V},\mathbf{W},i} \times\mathbf{Grass}(\nu_{i},\tilde{\nu}_{i}))\),
\[\mathrm{Hom}_{\mathcal{D}^{b}_{G_{\mathbf{V}}}(\hat{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\times\mathbf{Grass}(\nu_{i},\tilde{\nu}_{i}))}((q_{2})_{!}(q_{1})^{*}[\bar{\nu}_{i}-1]A,B)\] \[=\mathrm{Hom}_{\mathcal{D}^{b}_{G_{\mathbf{V}}}(\hat{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\times\mathbf{Grass}(\nu_{i},\tilde{\nu}_{i}))}(A,(q_{1})_{*}(q_{2})^{!}[-\bar{\nu}_{i}+1]B)\] \[=\mathrm{Hom}_{\mathcal{D}^{b}_{G_{\mathbf{V}}}(\hat{\mathbf{E}}_{\mathbf{V},\mathbf{W},i}\times\mathbf{Grass}(\nu_{i},\tilde{\nu}_{i}))}(A,(q_{1})_{!}(q_{2})^{*}[\nu_{i}][\nu_{i}-\bar{\nu}_{i}+1]B),\]
where \(q_{1}\) and \(q_{2}\) are the morphisms defining \(E_{i}\). By formula (5), this implies that \(K_{i}^{-1}F_{i}[-1]\) is left adjoint to \(E_{i}\). Hence for any objects \(A,B\),
\[\dim\mathrm{Ext}^{j}_{\mathcal{D}^{b}_{G_{\mathbf{V}}}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}}}(F_{i}A,\mathbf{D}B)=\dim\mathrm{Ext}^{j}_{\mathcal{D}^{b}_{G_{\mathbf{V}^{\prime}}}(\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}^{\prime}}}(A,\mathbf{D}K_{i}E_{i}[-1]B).\]
Then by definition of \((-,-)^{\Lambda}\), we get the proof.
### Compare the functor \(E_{i}\) with derivation functors
In this section, we construct a split exact sequence which is analogous to the exact sequence in [7, Theorem 4.7]. We fix a vertex \(i\in I\) and an orientation \(\Omega\) such that \(i\) is a source.
**Definition 3.30**.: _(1) Define \(\hat{\mathcal{Q}}_{\mathbf{V},\mathbf{W}}\) to be the full subcategory of \(\mathcal{D}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})\) whose objects are direct sums of Lusztig's sheaves in \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\) such that the multiplicity of each simple summand \(A[n]\) is finite for every simple object \(A\) and every \(n\in\mathbb{Z}\)._
_(2) Define the localization \(\hat{\mathcal{Q}}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V},i}\) to be the full subcategory of \(\mathcal{D}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V},i}\) consisting of objects in \(\hat{\mathcal{Q}}_{\mathbf{V},\mathbf{W}}\), and define the global localization \(\hat{\mathcal{Q}}_{\mathbf{V},\mathbf{W}}/\mathcal{N}_{\mathbf{V}}\) to be the full subcategory of \(\mathcal{D}(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}})/\mathcal{N}_{\mathbf{V}}\) consisting of objects in \(\hat{\mathcal{Q}}_{\mathbf{V},\mathbf{W}}\)._
Let \(\hat{\mathcal{K}}_{\mathbf{V}}\) be the Grothendieck group of \(\hat{\mathcal{Q}}_{\mathbf{V},\mathbf{W}}\) and \(\hat{\mathcal{K}}=\bigoplus\limits_{\mathbf{V}}\hat{\mathcal{K}}_{\mathbf{V}}\). They have \(\mathbb{Z}[[v,v^{-1}]]\)-module structures and the set of simple objects forms a bar-invariant \(\mathbb{Z}[[v,v^{-1}]]\)-basis of \(\hat{\mathcal{K}}\).
Let \(\mathrm{H}^{*}(\mathbb{P}^{\infty})\) be the cohomology of the infinite projective space; then, following the proof of [13, Lemma 12.3.6],
\[\mathrm{H}^{*}(\mathbb{P}^{\infty})\cong\bigoplus\limits_{m\geqslant 0}\overline{ \mathbb{Q}}_{l}[-2m].\]
**Definition 3.31**.: _Let \(\mathbf{V},\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}\) be graded spaces such that \(|\mathbf{V}|=|\mathbf{V}^{\prime}|+i\), \(|\mathbf{V}^{\prime\prime}|=i\) and \(\mathbf{V}=\mathbf{V}^{\prime}\oplus\mathbf{V}^{\prime\prime}\)._
_(1) Define the functor \(\hat{\mathcal{R}}^{\Lambda}_{i}:\mathcal{Q}_{\mathbf{V},\mathbf{W}}\to\hat{\mathcal{Q}}_{\mathbf{V}^{\prime},\mathbf{W}}/\mathcal{N}_{\mathbf{V}^{\prime}}\) by_
\[\hat{\mathcal{R}}^{\Lambda}_{i}(-)=\mathbf{Res}^{\mathbf{V}\oplus\mathbf{W}}_{\mathbf{V}^{\prime}\oplus\mathbf{W},\mathbf{V}^{\prime\prime}}(-)\otimes(\mathrm{H}^{*}(\mathbb{P}^{\infty}))[-1].\]
_(2) Define the functor \({}_{i}\hat{\mathcal{R}}^{\Lambda}:\mathcal{Q}_{\mathbf{V},\mathbf{W}}\to\hat{ \mathcal{Q}}_{\mathbf{V}^{\prime},\mathbf{W}}/\mathcal{N}_{\mathbf{V}^{\prime}}\) by_
\[{}_{i}\hat{\mathcal{R}}^{\Lambda}(-)=\mathbf{Res}_{\mathbf{V}^{\prime\prime},\mathbf{V}^{\prime}\oplus\mathbf{W}}^{\mathbf{V}\oplus\mathbf{W}}(-)\otimes(\mathrm{H}^{*}(\mathbb{P}^{\infty}))[(i,|\mathbf{V}^{\prime}\oplus\mathbf{W}|)-1].\]
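On the level of Grothendieck groups (with the usual convention that the shift \([1]\) corresponds to multiplication by \(v\)), tensoring with \(\mathrm{H}^{*}(\mathbb{P}^{\infty})[-1]=\bigoplus\limits_{m\geqslant 0}\overline{\mathbb{Q}}_{l}[-1-2m]\) multiplies a class by the formal series \(\sum\limits_{m\geqslant 0}v^{-1-2m}\), which is precisely the expansion of \(\frac{1}{v-v^{-1}}\) appearing in Remark 3.34 below.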
**Lemma 3.32**.: _For a graded space \(\mathbf{V}\) and \(\underline{\nu}\in\mathcal{S}_{|\mathbf{V}|}\), \({}_{i}\hat{\mathcal{R}}^{\Lambda}(L_{\underline{\nu}\underline{d}})\) is a direct summand of \(\hat{\mathcal{R}}^{\Lambda}_{i}(L_{\underline{\nu}\underline{d}})\) and its complement \(C(L_{\underline{\nu}\underline{d}})\) is a finite direct sum of shifts of some \(L_{\underline{\omega}\underline{d}}\), where \(\underline{\omega}\in\mathcal{S}_{\mathbf{V}^{\prime}}\) runs over flag types which can be obtained from \(\underline{\nu}\) by reducing \(i\) from some \(\nu^{k}\). In particular, \(C(L_{\underline{\nu}\underline{d}})\) belongs to \(\mathcal{Q}_{\mathbf{V}^{\prime},\mathbf{W}}/\mathcal{N}_{\mathbf{V}^{\prime}}\)._
Proof.: Consider the following morphisms in the definition of \(\mathbf{Res}_{\mathbf{V}^{\prime}\oplus\mathbf{W},\mathbf{V}^{\prime\prime}}^ {\mathbf{V}\oplus\mathbf{W}}\) and \(\mathbf{Res}_{\mathbf{V}^{\prime\prime},\mathbf{V}^{\prime\oplus\mathbf{W}}}^ {\mathbf{V}\oplus\mathbf{W}}\) respectively
\[\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\hat{\Omega}}\xrightarrow{\kappa_{ \Omega}^{\prime}}F^{\prime}\xrightarrow{\iota_{\Omega}^{\prime}}\mathbf{E}_{ \mathbf{V},\mathbf{W},\hat{\Omega}};\]
\[\mathbf{E}_{\mathbf{V}^{\prime},\mathbf{W},\hat{\Omega}}\xrightarrow{\kappa _{\Omega}}F\xrightarrow{\iota_{\Omega}}\mathbf{E}_{\mathbf{V},\mathbf{W},\hat {\Omega}}.\]
It is well known that the restriction functor is a hyperbolic localization functor, that is, there is a \(k^{*}\)-action on \(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}}\) such that the following diagrams commute
see [3] and [6, Proposition 2.10] for details. By formula (1) and Theorem 1 in [3], we have
\[\begin{split}\mathbf{Res}_{\mathbf{V}^{\prime}\oplus\mathbf{W}, \mathbf{V}^{\prime\prime}}^{\mathbf{V}\oplus\mathbf{W}}\\ \cong&(\kappa_{\Omega}^{\prime})!(\iota_{\Omega})^{ *}[-\langle|\mathbf{V}^{\prime}\oplus\mathbf{W}|,i\rangle]\\ \cong&(\pi^{-})!(g^{-})^{*}[-\langle|\mathbf{V}^{ \prime}\oplus\mathbf{W}|,i\rangle]\\ \cong&(\pi^{+})_{*}(g^{+})^{!}[-\langle|\mathbf{V}^{ \prime}\oplus\mathbf{W}|,i\rangle]\\ \cong&(\kappa_{\Omega})_{*}(\iota_{\Omega})^{!}[- \langle|\mathbf{V}^{\prime}\oplus\mathbf{W}|,i\rangle].\end{split}\]
Since \(i\) is a source, \(\iota_{\Omega}\) is the identity, and so
\[\hat{\mathcal{R}}^{\Lambda}_{i}\cong\bigoplus_{n\geqslant 0}(\kappa_{\Omega})_{* }[-\langle|\mathbf{V}^{\prime}\oplus\mathbf{W}|,i\rangle-1-2n].\]
By definition,
\[{}_{i}\hat{\mathcal{R}}^{\Lambda}\cong \bigoplus_{n\geqslant 0}(\kappa_{\Omega})_{!}[(i,|\mathbf{V}^{ \prime}\oplus\mathbf{W}|)-\langle i,|\mathbf{V}^{\prime}\oplus\mathbf{W}| \rangle-1-2n]\] \[\cong \bigoplus_{n\geqslant 0}(\kappa_{\Omega})_{!}[\langle|\mathbf{V}^{ \prime}\oplus\mathbf{W}|,i\rangle-1-2n].\]
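Here we have used the identity \((i,\alpha)=\langle i,\alpha\rangle+\langle\alpha,i\rangle\) for the symmetrization of the Euler form, applied to \(\alpha=|\mathbf{V}^{\prime}\oplus\mathbf{W}|\).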
By Proposition 2.1, \(\mathbf{Res}_{\mathbf{V}^{\prime}\oplus\mathbf{W},\mathbf{V}^{\prime\prime}}^{\mathbf{V}\oplus\mathbf{W}}(L_{\underline{\nu}\underline{d}})\) is a direct sum of some \(L_{\underline{\omega}\underline{d}}\) such that \(\underline{\omega}\in\mathcal{S}_{\mathbf{V}^{\prime}}\) is a flag type which can be obtained from \(\underline{\nu}\) by reducing \(i\) from \(\nu^{k}\) for some \(k\). More precisely, \(\underline{\omega}=(\nu^{1},\nu^{2},\cdots,\nu^{k-1},i^{a_{k}-1},\nu^{k+1},\cdots,\nu^{m})\) for some \(k\) such that \(\nu^{k}=i^{a_{k}},a_{k}>0\). Consider the following commutative diagram in the proof of [12, Proposition 4.2],
where \(\mathcal{F}_{\underline{\nu}\underline{d},\hat{\Omega}}(\underline{\omega})\) is the locally closed subset of the flag variety \(\mathcal{F}_{\underline{\nu}\underline{d},\hat{\Omega}}\) consisting of \((x,f)\) such that \(x\in\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}}\) and \(f\) is a flag of \(\mathbf{V}\oplus\mathbf{W}\) which induces a flag of \(\mathbf{V}^{\prime}\oplus\mathbf{W}\) whose type is \(\underline{\omega}\underline{d}\). By [12, Lemma 4.4], \(\alpha_{\underline{\omega}}\) is a vector bundle; we denote its rank by \(f_{\underline{\omega}}\). Then we have
\[(\kappa_{\Omega})_{!}(L_{\underline{\nu}\underline{d}})=\bigoplus_{ \underline{\omega}}L_{\underline{\omega}\underline{d}}[-\dim\mathcal{F}_{ \underline{\nu}\underline{d},\hat{\Omega}}+\dim\mathcal{F}_{\underline{ \omega}\underline{d},\hat{\Omega}}-2f_{\underline{\omega}}],\]
\[(\kappa_{\Omega})_{*}(L_{\underline{\nu}\underline{d}})=\bigoplus_{ \underline{\omega}}L_{\underline{\omega}\underline{d}}[-\dim\mathcal{F}_{ \underline{\nu}\underline{d},\hat{\Omega}}+\dim\mathcal{F}_{\underline{ \omega}\underline{d},\hat{\Omega}}],\]
where the direct sum follows from similar arguments as in the proof of [12, Lemma 4.7] or [6, Corollary 3.7]. Hence
\[\hat{\mathcal{R}}^{\Lambda}_{i}(L_{\underline{\nu}\underline{d}})=\bigoplus_{ \underline{\omega}}\bigoplus_{n\geqslant 0}L_{\underline{\omega}\underline{d}}[- \dim\mathcal{F}_{\underline{\nu}\underline{d},\hat{\Omega}}+\dim\mathcal{F}_ {\underline{\omega}\underline{d},\hat{\Omega}}-\langle|\mathbf{V}^{\prime} \oplus\mathbf{W}|,i\rangle-1-2n],\]
and
\[{}_{i}\hat{\mathcal{R}}^{\Lambda}(L_{\underline{\nu}\underline{d}})=\bigoplus_ {\underline{\omega}}\bigoplus_{n\geqslant 0}L_{\underline{\omega}\underline{d}}[- \dim\mathcal{F}_{\underline{\nu}\underline{d},\hat{\Omega}}+\dim\mathcal{F}_ {\underline{\omega}\underline{d},\hat{\Omega}}-2f_{\underline{\omega}}+ \langle|\mathbf{V}^{\prime}\oplus\mathbf{W}|,i\rangle-1-2n].\]
If \(\langle|\mathbf{V}^{\prime}\oplus\mathbf{W}|,i\rangle\leqslant f_{\underline {\omega}}\), then \(-2f_{\underline{\omega}}+\langle|\mathbf{V}^{\prime}\oplus\mathbf{W}|,i\rangle \leqslant-\langle|\mathbf{V}^{\prime}\oplus\mathbf{W}|,i\rangle\). For such \(\underline{\omega}\), it is easy to see that
\[\bigoplus_{n\geqslant 0}L_{\underline{\omega}\underline{d}}[-\dim\mathcal{F}_{ \underline{\nu}\underline{d},\hat{\Omega}}+\dim\mathcal{F}_{\underline{ \omega}\underline{d},\hat{\Omega}}-2f_{\underline{\omega}}+\langle|\mathbf{V} ^{\prime}\oplus\mathbf{W}|,i\rangle-1-2n]\]
is a direct summand of
\[\bigoplus_{n\geqslant 0}L_{\underline{\omega}\underline{d}}[-\dim\mathcal{F}_ {\underline{\nu}\underline{d},\hat{\Omega}}+\dim\mathcal{F}_{\underline{ \omega}\underline{d},\hat{\Omega}}-\langle|\mathbf{V}^{\prime}\oplus\mathbf{W }|,i\rangle-1-2n]\]
and its complement is a finite direct sum of some \([L_{\underline{\omega}\underline{d}}]\).
Otherwise, if \(\langle|\mathbf{V}^{\prime}\oplus\mathbf{W}|,i\rangle>f_{\underline{\omega}}\), we claim that \(L_{\underline{\omega}\underline{d}}\) is contained in \(\mathcal{N}_{\mathbf{V}^{\prime},i}\); granting this, the lemma is proved. Indeed, with the notations above, we assume that \(\underline{\omega}=(\nu^{1},\nu^{2},\cdots,\nu^{k-1},i^{a_{k}-1},\nu^{k+1},\cdots,\nu^{m})\) and take \(\underline{\omega}^{\prime}=(i^{a_{k}-1},\nu^{k+1},\cdots,\nu^{m})\). Then \(L_{\underline{\omega}^{\prime}\underline{d}}\) is contained in \(\mathcal{Q}_{\mathbf{V}^{\prime\prime},\mathbf{W}}\) for some \(\mathbf{V}^{\prime\prime}\). Since \(i\) is a source in \(\hat{\Omega}\), \(\langle|\mathbf{V}^{\prime}\oplus\mathbf{W}|,i\rangle=\nu^{\prime}_{i}\). By Proposition 4.2(b) in [12], we can see that \(f_{\underline{\omega}}=\sum\limits_{l<k}\nu^{l}_{i}+\sum\limits_{l>k,h\in\hat{\Omega},h^{\prime}=i}\nu^{l}_{h^{\prime\prime}}\), hence \(0<\langle|\mathbf{V}^{\prime}\oplus\mathbf{W}|,i\rangle-f_{\underline{\omega}}=\nu^{\prime\prime}_{i}-\sum\limits_{h\in\hat{\Omega},h^{\prime}=i}\nu^{\prime\prime}_{h^{\prime\prime}}.\) In particular, \(\mathbf{E}_{\mathbf{V}^{\prime\prime},\mathbf{W},\hat{\Omega}^{i}}^{\geqslant 1}=\mathbf{E}_{\mathbf{V}^{\prime\prime},\mathbf{W},\hat{\Omega}}\) and \(L_{\underline{\omega}^{\prime}\underline{d}}\) belongs to \(\mathcal{N}_{\mathbf{V}^{\prime\prime},i}\). Since \(L_{\underline{\omega}\underline{d}}\cong F_{i_{1}}^{(a_{1})}\cdots F_{i_{k-1}}^{(a_{k-1})}L_{\underline{\omega}^{\prime}\underline{d}}\), it is also contained in \(\mathcal{N}_{\mathbf{V}^{\prime},i}\) and we get the proof.
**Theorem 3.33**.: _For any object \(L\) of \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\), we have a split exact sequence in \(\hat{\mathcal{Q}}_{\mathbf{V}^{\prime},\mathbf{W}}/\mathcal{N}_{\mathbf{V}^{ \prime}}\)_
\[0\to{}_{i}\hat{\mathcal{R}}^{\Lambda}(L)\to\hat{\mathcal{R}}^{\Lambda}_{i}(L)\to E _{i}(L)\to 0.\]
Proof.: By Lemma 3.32, we can define an \(\mathcal{A}\)-linear operator \(\tilde{E}_{i}\) by sending \([L_{\underline{\nu}\underline{d}}]\) to \([C(L_{\underline{\nu}\underline{d}})]\) for any flag type \(\underline{\nu}\). By definition, we have \(\tilde{E}_{i}([L_{\underline{\nu}\underline{d}}])=[\hat{\mathcal{R}}^{\Lambda}_{i}(L_{\underline{\nu}\underline{d}})]-[{}_{i}\hat{\mathcal{R}}^{\Lambda}(L_{\underline{\nu}\underline{d}})].\) From [12] and Corollaries 3.1 and 3.3 in [25], we can see that
\[\hat{\mathcal{R}}^{\Lambda}_{i}F_{j}\cong(F_{j}\hat{\mathcal{R}}^{\Lambda}_{i}) \oplus(\delta_{i,j}\mathrm{Id}\otimes\mathrm{H}^{*}(\mathbb{P}^{\infty})[-1]),\]
\[{}_{i}\hat{\mathcal{R}}^{\Lambda}F_{j}\cong(F_{j}\,{}_{i}\hat{\mathcal{R}}^{\Lambda})\oplus(\delta_{i,j}\mathrm{Id}\otimes\mathrm{H}^{*}(\mathbb{P}^{\infty})[(i,|\mathbf{V}^{\prime}\oplus\mathbf{W}|)-1]).\]
Hence, we can check that the linear operator \(\tilde{E}_{i}=[\hat{\mathcal{R}}^{\Lambda}_{i}]-[{}_{i}\hat{\mathcal{R}}^{ \Lambda}]:\mathcal{K}\to\mathcal{K}_{0}(\Lambda)\) satisfies the relations
\[\tilde{E}_{i}F_{j}=F_{j}\tilde{E}_{i},i\neq j;\]
\[\tilde{E}_{i}F_{i}+\sum\limits_{0\leqslant m\leqslant N-1}v^{N-1-2m}\mathrm{Id}=F_{i}\tilde{E}_{i}+\sum\limits_{0\leqslant m\leqslant-N-1}v^{-2m-N-1}\mathrm{Id}.\]
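Note that the first sum equals the quantum integer \([N]\) when \(N>0\) (and is zero otherwise), while the second sum equals \([-N]\) when \(N<0\) (and is zero otherwise); for instance, for \(N=2\) the relation reads \(\tilde{E}_{i}F_{i}+(v+v^{-1})\mathrm{Id}=F_{i}\tilde{E}_{i}\).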
Since the operator \(E_{i}\) also satisfies the same commutation relations with \(F_{j}\), we can argue by induction to show that
\[[\tilde{E}_{i}(L_{\underline{\nu^{\prime}}\underline{d}})]=[E_{i}(L_{\underline{ \nu^{\prime}}\underline{d}})]\]
for any flag type \(\underline{\nu^{\prime}}=(\nu^{\prime 1},\nu^{\prime 2},\cdots,\nu^{\prime k})\) with each \(\nu^{\prime l}\in I\). (That is, \(\underline{\nu^{\prime}}\) is the flag type of a complete flag.) Note that each \(L_{\underline{\nu}\underline{d}}\) can be written as a finite direct sum of some \(L_{\underline{\nu^{\prime}}\underline{d}}\), so \([\tilde{E}_{i}(L_{\underline{\nu}\underline{d}})]=[E_{i}(L_{\underline{\nu}\underline{d}})]\) for any flag type \(\underline{\nu}\). In particular, \(E_{i}(L_{\underline{\nu}\underline{d}})\cong C(L_{\underline{\nu}\underline{d}})\), since both of them are semisimple complexes (up to isomorphisms). Then by definition, \(E_{i}(L_{\underline{\nu}\underline{d}})\oplus{}_{i}\hat{\mathcal{R}}^{\Lambda}(L_{\underline{\nu}\underline{d}})\cong\hat{\mathcal{R}}^{\Lambda}_{i}(L_{\underline{\nu}\underline{d}})\) for any \(\underline{\nu}\).
Notice that from Lemma 2.8, we can deduce the following fact: for any simple object \(L\) in \(\mathcal{Q}_{\mathbf{V},\mathbf{W}}\), there exist families of flag types \(\underline{\omega},\underline{\tau}\) and integers \(N(\underline{\omega},n)\) and \(N(\underline{\tau},n)\) such that
\[L\oplus\bigoplus_{\underline{\omega}}L_{\underline{\omega}\underline{d}}^{ \oplus N(\underline{\omega},n)}[n]\cong\bigoplus_{\underline{\tau}}L_{ \underline{\tau}\underline{d}}^{\oplus N(\underline{\tau},n)}[n],\]
hence for any \(L\), \(E_{i}(L)\oplus{}_{i}\hat{\mathcal{R}}^{\Lambda}(L)\cong\hat{\mathcal{R}}^{ \Lambda}_{i}(L)\) and we get the proof.
**Remark 3.34**.: _Recall that the derivation operators \(\bar{r}_{i}\) and \({}_{i}\bar{r}\) can be realized by the restriction functor, see [13] or [25]. If we formally write \(\frac{1}{v-v^{-1}}=\sum\limits_{n\geqslant 0}v^{-1-2n}\), then \(\hat{\mathcal{R}}^{\Lambda}_{i}\) and \({}_{i}\hat{\mathcal{R}}^{\Lambda}\) indeed realize the linear operators \(\frac{1}{v-v^{-1}}\bar{r}_{i},\frac{v^{(i,|\mathbf{V}^{\prime}\oplus\mathbf{W}|)}}{v-v^{-1}}{}_{i}\bar{r}:{}_{\mathcal{A}}\mathbf{U}^{-}\to\mathbf{U}^{-}\) respectively. Therefore, Theorem 3.33 indeed categorifies the equation_
\[E_{i}(x\cdot v_{\Lambda})=(v^{(i,|x|-i)-\langle\Lambda,\alpha_{i}^{\vee}\rangle }{}_{i}\bar{r}(x)\cdot v_{\Lambda}-v^{\langle\Lambda,\alpha_{i}^{\vee}\rangle }\bar{r}_{i}(x)\cdot v_{\Lambda})/(v^{-1}-v).\]
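For example, taking \(x=f_{i}\) (so that \(|x|=i\) and \(\bar{r}_{i}(f_{i})={}_{i}\bar{r}(f_{i})=1\)), the right hand side equals \((v^{-\langle\Lambda,\alpha_{i}^{\vee}\rangle}-v^{\langle\Lambda,\alpha_{i}^{\vee}\rangle})v_{\Lambda}/(v^{-1}-v)=[\langle\Lambda,\alpha_{i}^{\vee}\rangle]v_{\Lambda}\), recovering the familiar identity \(E_{i}F_{i}v_{\Lambda}=[\langle\Lambda,\alpha_{i}^{\vee}\rangle]v_{\Lambda}\).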
## 4. Compare with Nakajima's realization
In this section, we recall Nakajima's realization of integrable highest weight modules via his quiver varieties in [16] and [17] and compare our construction (at \(v\to 1\)) with the realization of Nakajima. We omit the \(\mathbb{G}_{m}\)-action defined in [17, Section 3.iv] in order to simplify notation.
### Nakajima quiver variety
For a given quiver \(Q=(I,H,\Omega)\), Nakajima considered the double of the framed quiver \(\hat{Q}\): its set of vertices is \(I\cup\hat{I}\) and its set of arrows is \(\hat{H}=H\cup\{i\rightarrow\hat{i},\hat{i}\rightarrow i|i\in I\}\).
For dimension vectors \(\nu,\omega\), \(I\)-graded space \(\mathbf{V}\) and \(\hat{I}\)-graded space \(\mathbf{W}\) such that \(|\mathbf{V}|=\nu\) and \(|\mathbf{W}|=\omega\), we consider the moduli space
\[\mathbf{E}_{\mathbf{V},\mathbf{W}}=\bigoplus_{h\in\hat{H}}\mathbf{Hom}( \mathbf{V}_{h^{\prime}},\mathbf{V}_{h^{\prime\prime}})=\bigoplus_{h\in H} \mathbf{Hom}(\mathbf{V}_{h^{\prime}},\mathbf{V}_{h^{\prime\prime}})\oplus \bigoplus_{i\in I}\mathbf{Hom}(\mathbf{V}_{i},\mathbf{W}_{\hat{i}})\oplus \bigoplus_{i\in I}\mathbf{Hom}(\mathbf{W}_{\hat{i}},\mathbf{V}_{i})\]
We denote an element of \(\mathbf{E}_{\mathbf{V},\mathbf{W}}\) by \((B,i,j)\), where \(B\) is an element of \(\mathbf{E}_{\mathbf{V}}\), and \(i,j\) are elements of \(\bigoplus\limits_{i\in I}\mathbf{Hom}(\mathbf{V}_{i},\mathbf{W}_{\hat{i}})\) and \(\bigoplus\limits_{i\in I}\mathbf{Hom}(\mathbf{W}_{\hat{i}},\mathbf{V}_{i})\) respectively. The moment map of \(\mathbf{E}_{\mathbf{V},\mathbf{W}}\) is defined by
\[(\mu((B,i,j)))_{k\in I}=\sum_{h\in H,h^{\prime\prime}=k}\epsilon(h)B_{h}B_{ \bar{h}}+i_{k}j_{k}.\]
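For example, for the quiver with a single vertex and no arrows, the \(B\)-terms vanish and \(\mu^{-1}(0)\) consists of those pairs of framing maps whose composite endomorphism of \(\mathbf{V}\) is zero.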
The algebraic group
\[G=G_{\mathbf{V}}=\prod_{i\in I}\mathbf{GL}(\mathbf{V}_{i})\]
acts on \(\mathbf{E}_{\mathbf{V},\mathbf{W}}\) by
\[g\cdot(B,i,j)=(gBg^{-1},gi,jg^{-1}).\]
**Definition 4.1**.: _The Nakajima quiver varieties \(\mathfrak{m}_{0}(\nu,\omega)\) and \(\mathfrak{m}(\nu,\omega)\) are defined as the affine and projective geometric quotients of \(\mu^{-1}(0)\), respectively. More precisely, let \(A(\mu^{-1}(0))\) be the coordinate ring of the affine variety \(\mu^{-1}(0)\); the affine quiver variety \(\mathfrak{m}_{0}(\nu,\omega)=\mathfrak{m}_{0}\) is defined by_
\[\mathfrak{m}_{0}(\nu,\omega)=\mu^{-1}(0)//G=\operatorname{Spec}\,A(\mu^{-1}(0 ))^{G}.\]
_Take the character \(\chi_{G}:G\to\mathbb{C}^{*};g\mapsto\prod\limits_{k\in I}\det g_{k}^{-1}\) and set_
\[A(\mu^{-1}(0))^{G,\chi_{G}^{n}}=\{f\in A(\mu^{-1}(0))|f(g(B,i,j))=\chi_{G}(g)^{ n}f((B,i,j))\}\]
_then the projective quiver variety \(\mathfrak{m}(\nu,\omega)=\mathfrak{m}\) is defined by_
\[\mathfrak{m}(\nu,\omega)=\operatorname{Proj}(\bigoplus\limits_{n\geqslant 0 }A(\mu^{-1}(0))^{G,\chi_{G}^{n}}).\]
In order to describe \(\mathfrak{m}(\nu,\omega)\), Nakajima introduced the following stability condition.
**Lemma 4.2**.: _Assume that \((B,i,j)\in\mu^{-1}(0)\). Then \((B,i,j)\) is stable if and only if the following statement holds:_
_(S) For any \(B\)-stable subspace \(S\subset\mathbf{V}\) such that \(S\subset\operatorname{Ker}i\), we have \(S=0\)._
Let \(\mu^{-1}(0)^{s}\) be the subset of stable points. Then Nakajima proved the following lemma and corollary in [17].
**Lemma 4.3**.: _If \((B,i,j)\in\mu^{-1}(0)^{s}\), then: (1) \(\operatorname{Stab}_{G}((B,i,j))=\{1\}\); (2) The tangent map \(d\mu\) is surjective at \((B,i,j)\). In particular, \(\mu^{-1}(0)^{s}\) is nonsingular and of dimension \(\sum\limits_{k\in I}(2\nu_{k}\omega_{k}+\nu_{k}^{2})-(\nu,\nu)\)._
**Corollary 4.4**.: _The Nakajima quiver variety \(\mathfrak{m}(\nu,\omega)\) is the geometric quotient of \(\mu^{-1}(0)^{s}\). In particular, \(\mathfrak{m}(\nu,\omega)\) is nonsingular and of dimension \(\sum\limits_{k\in I}2\nu_{k}\omega_{k}-(\nu,\nu)\). Moreover, the closed points of \(\mathfrak{m}(\nu,\omega)\) are bijective to the \(G\)-orbits in \(\mu^{-1}(0)^{s}\)._
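For example, for the quiver with a single vertex, \(\nu=n\), \(\omega=d\) and \(n\leqslant d\), the stability condition (S) forces the framing map \(i\) to be injective, and one recovers the classical identification \(\mathfrak{m}(n,d)\cong T^{*}\mathbf{Grass}(n,d)\), which is nonsingular of dimension \(2n(d-n)\), in agreement with the formula above since \((\nu,\nu)=2n^{2}\).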
We denote the geometric point corresponding to the orbit of \((B,i,j)\in\mu^{-1}(0)^{s}\) by \([B,i,j]\). If the orbit of \((B,i,j)\in\mu^{-1}(0)\) is closed, then \([B,i,j]\) is also a geometric point of \(\mathfrak{m}_{0}(\nu,\omega)\). Geometric invariant theory provides a morphism \(\pi:\mathfrak{m}(\nu,\omega)\to\mathfrak{m}_{0}(\nu,\omega)\):
\[\pi([B,i,j])=[B^{0},i^{0},j^{0}]\]
where \(G(B^{0},i^{0},j^{0})\) is the unique closed orbit contained in the closure of \(G(B,i,j)\).
**Proposition 4.5**.: _The algebraic variety \(\pi^{-1}(0)\subset\mathfrak{m}(\nu,\omega)\) is Lagrangian and homotopy equivalent to \(\mathfrak{m}(\nu,\omega)\). We denote \(\pi^{-1}(0)\) by \(\mathfrak{L}(\nu,\omega)\)._
For dimension vectors \(\nu^{1},\nu^{2}\) and graded spaces \(\mathbf{V}^{1},\mathbf{V}^{2}\) such that \(|\mathbf{V}^{i}|=\nu^{i},i=1,2\), we define the morphisms \(\pi_{i}:\mathfrak{m}(\nu^{i},\omega)\to\mathfrak{m}_{0}(\nu^{1}+\nu^{2},\omega)\) to be the composition of projections \(\pi:\mathfrak{m}(\nu^{i},\omega)\to\mathfrak{m}_{0}(\nu^{i},\omega)\) and embeddings \(\mathfrak{m}_{0}(\nu^{i},\omega)\to\mathfrak{m}_{0}(\nu^{1}+\nu^{2},\omega)\).
**Definition 4.6**.: _Let \(Z(\nu^{1},\nu^{2},\omega)\) be the subvariety of \(\mathfrak{m}(\nu^{1},\omega)\times\mathfrak{m}(\nu^{2},\omega)\) defined by:_
\[Z(\nu^{1},\nu^{2},\omega)=\{(x_{1},x_{2})\in\mathfrak{m}(\nu^{1},\omega) \times\mathfrak{m}(\nu^{2},\omega)|\pi_{1}(x_{1})=\pi_{2}(x_{2})\}.\]
Let \(\mathfrak{m}_{0}(\nu,\omega)^{reg}\) be the subset of \(\mathfrak{m}_{0}(\nu,\omega)\) consisting of \([B,i,j]\) with trivial stabilizer \(\operatorname{Stab}_{G}(B,i,j)\). Then we have the following result by Nakajima:
**Lemma 4.7**.: _If \([B,i,j]\in\mathfrak{m}_{0}(\nu,\omega)^{reg}\), then \((B,i,j)\in\mu^{-1}(0)^{s}\). Moreover, \(\pi\) induces an isomorphism between \(\mathfrak{m}_{0}(\nu,\omega)^{reg}\) and \(\pi^{-1}(\mathfrak{m}_{0}(\nu,\omega)^{reg})\)._
**Definition 4.8**.: _Let \(Z^{reg}(\nu^{1},\nu^{2},\omega)\) be the complement in \(Z(\nu^{1},\nu^{2},\omega)\) of the closure of the set_
\[\{(x_{1},x_{2})\,|\,\pi_{1}(x_{1})=\pi_{2}(x_{2})\notin\mathfrak{m}_{0}(\nu,\omega)^{reg}\subset\mathfrak{m}_{0}(\nu^{1}+\nu^{2},\omega)\}.\]
Nakajima proved that \(Z^{reg}(\nu^{1},\nu^{2},\omega)\) is Lagrangian, that its irreducible components are of dimension \(\frac{1}{2}(\dim\mathfrak{m}(\nu^{1},\omega)+\dim\mathfrak{m}(\nu^{2},\omega))\), and that \(Z^{reg}(\nu^{1},\nu^{2},\omega)\) is open in \(Z(\nu^{1},\nu^{2},\omega)\).
For dimension vectors \(\nu^{i},i=1,2,3\), we define the projections
\[p_{i,j}:\mathfrak{m}(\nu^{1},\omega)\times\mathfrak{m}(\nu^{2},\omega)\times \mathfrak{m}(\nu^{3},\omega)\to\mathfrak{m}(\nu^{i},\omega)\times\mathfrak{m}( \nu^{j},\omega),\]
then we can define the convolution product on the Borel-Moore homology groups
\[H_{*}(Z(\nu^{1},\nu^{2},\omega))\otimes H_{*}(Z(\nu^{2},\nu^{3},\omega))\to H _{*}(Z(\nu^{1},\nu^{3},\omega))\]
\[(c_{1},c_{2})\mapsto(p_{1,3})_{*}(p_{1,2}^{*}c_{1}\cap p_{2,3}^{*}c_{2})\]
where \(H_{*}(Z(\nu^{1},\nu^{2},\omega))\) is the Borel-Moore homology group of \(Z(\nu^{1},\nu^{2},\omega)\) and \(H_{top}(Z(\nu^{1},\nu^{2},\omega))\) is the homology group of top degree. The fundamental classes of irreducible components of \(Z(\nu^{1},\nu^{2},\omega)\) form a basis of \(H_{top}(Z(\nu^{1},\nu^{2},\omega))\). Under the convolution product, \(\bigoplus\limits_{\nu^{1},\nu^{2}}H_{*}(Z(\nu^{1},\nu^{2},\omega))\) becomes a \(\mathbb{Q}\)-algebra and \(\bigoplus\limits_{\nu^{1},\nu^{2}}H_{top}(Z(\nu^{1},\nu^{2},\omega))\) is a subalgebra.
For given \(\nu^{0}\) and \(x\in\mathfrak{m}_{0}(\nu^{0},\omega)^{reg}\subset\mathfrak{m}_{0}(\nu,\omega)\), let \(\mathfrak{m}(\nu,\omega)_{x}\) be the fiber at \(x\) of the morphism \(\pi:\mathfrak{m}(\nu,\omega)\to\mathfrak{m}_{0}(\nu,\omega)\). In particular, for \(x=0\), we have
\[\mathfrak{m}(\nu,\omega)_{0}=\mathfrak{L}(\nu,\omega)\]
Under the action defined by convolution, \(\bigoplus\limits_{\nu}H_{*}(\mathfrak{m}(\nu,\omega)_{x})\) becomes a \(\bigoplus\limits_{\nu^{1},\nu^{2}}H_{*}(Z(\nu^{1},\nu^{2},\omega))\)-module. Moreover, \(\bigoplus\limits_{\nu}H_{top}(\mathfrak{m}(\nu,\omega)_{x})\) becomes a \(\bigoplus\limits_{\nu^{1},\nu^{2}}H_{top}(Z(\nu^{1},\nu^{2},\omega))\)-module.
Fix \(k\in I\) and consider graded spaces \(\mathbf{V}^{2}\) such that \(|\mathbf{V}^{2}|=\nu\) and \(\mathbf{V}^{1}\) such that \(|\mathbf{V}^{1}|=\nu-k\). We set \(\nu^{2}=\nu,\nu^{1}=\nu-k\). Nakajima introduced the Hecke correspondence \(\beta_{k}(\nu,\omega)\), which is a nonsingular Lagrangian subvariety of \(\mathfrak{m}(\nu^{1},\omega)\times\mathfrak{m}(\nu^{2},\omega)\) (see [17, Section 5] for details). He also defined \(E_{k}\in\prod\limits_{\nu^{1}}H_{top}(Z(\nu^{1},\nu^{1}+k,\omega))\) to be the following formal sum
\[E_{k}=\sum\limits_{\nu}[\beta_{k}(\nu,\omega)]\]
and defined \(F_{k}\in\prod\limits_{\nu^{1}}H_{top}(Z(\nu^{1},\nu^{1}-k,\omega))\) to be the following formal sum
\[F_{k}=\sum\limits_{\nu}(-1)^{r(\nu,\omega)}[\tilde{\omega}(\beta_{k}(\nu, \omega))]\]
where \(\tilde{\omega}:\mathfrak{m}(\nu^{1},\omega)\times\mathfrak{m}(\nu^{2},\omega) \to\mathfrak{m}(\nu^{2},\omega)\times\mathfrak{m}(\nu^{1},\omega)\) is the morphism exchanging two factors and
\[r(\nu,\omega)=\frac{1}{2}(\dim\mathfrak{m}(\nu-k,\omega)-\dim\mathfrak{m}(\nu,\omega)).\]
Nakajima proved the following main theorem.
**Theorem 4.9**.: _[_17_, Theorem 10.2]_ _With the action by \(E_{k},F_{k},k\in I\), the group \(\bigoplus\limits_{\nu}H_{top}(\mathfrak{L}(\nu,\omega))\) becomes a (left) \(\mathbf{U}(\mathfrak{g})\)-module. Moreover, it is isomorphic to the highest weight module \(L_{0}(\Lambda_{\omega})\) with highest weight \(\Lambda_{\omega}\) and highest weight vector \([\mathfrak{L}(0,\omega)]\), and we have a \(\mathbf{U}(\mathfrak{g})\)-linear isomorphism satisfying_
\[\varkappa^{\Lambda_{\omega}}:\bigoplus\limits_{\nu}H_{top}(\mathfrak{L}(\nu, \omega))\to L_{0}(\Lambda_{\omega})\]
\[[\mathfrak{L}(0,\omega)]\mapsto v_{\Lambda_{\omega}}\]
_where \(\Lambda_{\omega}\) is the dominant weight such that \(\langle\Lambda_{\omega},\alpha_{i}^{\vee}\rangle=\omega_{i},i\in I\)._
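In the single-vertex example above (\(\omega=d\)), \(\mathfrak{L}(n,d)\) is the zero section \(\mathbf{Grass}(n,d)\subset T^{*}\mathbf{Grass}(n,d)\) for \(0\leqslant n\leqslant d\), so each \(H_{top}(\mathfrak{L}(n,d))\) is one-dimensional and \(\bigoplus\limits_{n}H_{top}(\mathfrak{L}(n,d))\) has dimension \(d+1=\dim L_{0}(d\Lambda_{i})\), as the theorem predicts.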
The quiver variety \(\mathfrak{m}(\nu,\omega)\) admits a partition \(\bigcup\limits_{r\geqslant 0}\mathfrak{m}_{k,r}(\nu,\omega)=\mathfrak{m}(\nu,\omega)\), where
\[\mathfrak{m}_{k,r}(\nu,\omega)=\{[B,i,j]\in\mathfrak{m}(\nu,\omega)|\operatorname{codim}_{\mathbf{V}_{k}}(\sum\limits_{h\in H,h^{\prime\prime}=k}\operatorname{Im}B_{h}+\operatorname{Im}j_{k})=r\}.\]
We also set \(\mathfrak{m}_{k,\leqslant r}(\nu,\omega)=\bigcup\limits_{s\leqslant r} \mathfrak{m}_{k,s}(\nu,\omega)\). There is a natural map defined by
\[p:\mathfrak{m}_{k,\leqslant r}(\nu,\omega)\rightarrow\mathfrak{m}_{k, \leqslant 0}(\nu-rk,\omega);\]
\[[B,i,j]\mapsto[B,i,j]|_{(\sum\limits_{h\in H,h^{\prime\prime}=k}\operatorname{Im}B_{h}+\operatorname{Im}j_{k})\oplus\bigoplus\limits_{l\neq k}\mathbf{V}_{l}}.\]
By definition, one can check that each fiber of \(p\) is isomorphic to a Grassmannian, hence \(p\) is smooth with connected fibers.
For any irreducible component \(X\subset\mathfrak{m}(\nu,\omega)_{x}\) and \(k\in I\), there exists a unique integer \(r\) such that \(X\cap\mathfrak{m}_{k,r}(\nu,\omega)_{x}\) is dense in \(X\) and we define \(r=t_{k}(X)\).
Since \(p:\mathfrak{m}_{k,r}(\nu,\omega)\rightarrow\mathfrak{m}_{k,0}(\nu-rk,\omega)\) is smooth with connected fibers, \(p\) induces a bijection between the set of irreducible components of \(\mathfrak{m}_{k,r}(\nu,\omega)_{x}\) and the set of irreducible components of \(\mathfrak{m}_{k,0}(\nu-rk,\omega)_{x}\). It also induces a bijection \(\varrho_{k,r}\) between the following sets of irreducible components:
\[\varrho_{k,r}:\{X\subset\mathfrak{m}(\nu,\omega)_{x}|t_{k}(X)=r\}\rightarrow\{ X^{\prime}\subset\mathfrak{m}(\nu-rk,\omega)_{x}|t_{k}(X^{\prime})=0\}\]
\[X\mapsto\overline{p(X\cap\mathfrak{m}_{k,r}(\nu,\omega))}\]
where \(\overline{X}\) is the closure of the set \(X\).
**Lemma 4.10**.: _[_17_, Lemma 10.1]_ _If \(t_{k}(X)=r>0\) and \(\varrho_{k,r}(X)=\varrho_{k,r-1}(X^{\prime\prime})\), then we have_
\[F_{k}[X^{\prime\prime}]=\pm r[X]+\sum\limits_{t_{k}(X^{\prime})>r}c_{X^{\prime }}[X^{\prime}].\]
As a corollary, we have the following lemma, which is parallel to Lemma 2.8.
**Lemma 4.11**.: _For irreducible components \(X\subset\mathfrak{L}(\nu,\omega)\) such that \(t_{k}(X)=r>0\) and \(X^{\prime\prime}=\varrho_{k,r}(X)\), we have the following equation in \(\bigoplus\limits_{\nu}H_{top}(\mathfrak{L}(\nu,\omega))\):_
\[F_{k}^{(r)}([X^{\prime\prime}])=\pm[X]+\sum\limits_{t_{k}(X^{\prime})>r}c_{X^{ \prime}}[X^{\prime}]\]
_where \(F_{k}^{(r)}=\frac{F_{k}^{r}}{r!}\) and \(c_{X^{\prime}}\) are constants._
Proof.: We prove by induction on \(r\). If \(r=1\), the lemma trivially holds. For general \(r\), we have
\[F_{k}^{(r)}([X^{\prime\prime}])= \frac{1}{r}F_{k}F_{k}^{(r-1)}([X^{\prime\prime}])\] \[= \frac{1}{r}F_{k}(\pm[\tilde{X}]+\sum\limits_{t_{k}(X^{\prime})>r- 1}c_{X^{\prime}}[X^{\prime}])\] \[= \pm\left[X\right]+\sum\limits_{t_{k}(X^{\prime})>r}c_{X^{\prime}} [X^{\prime}]\]
where \(\tilde{X}\) is the irreducible component such that \(X^{\prime\prime}=\varrho_{k,r-1}(\tilde{X})\). The second equation holds by the inductive assumption and the third equation holds by applying the above lemma to \(\tilde{X}\) and \(X^{\prime}\) respectively.
### The left graphs and the isomorphism \(\Phi^{\Lambda}\)
We recall the left graph of the canonical basis defined by Lusztig. In this and the next section, we usually omit the notation of Fourier-Deligne transformations if there is no ambiguity. (For example, we denote \(t_{i}(\mathcal{F}_{\hat{\Omega},\hat{\Omega}_{i}}(L))\) by \(t_{i}(L)\) and denote \(\mathcal{F}_{\hat{\Omega}_{i},\hat{\Omega}}\pi_{i,p}\mathcal{F}_{\hat{\Omega}, \hat{\Omega}_{i}}\) by \(\pi_{i,p}\).)
**Definition 4.12**.: _We define the left graph \(\mathcal{G}_{1}=(\mathcal{V}_{1},\mathcal{E}_{1})\) to be the \(I\times\mathbb{N}_{>0}\)-colored graph consisting of the set of vertices \(\mathcal{V}_{1}\) and the set of arrows \(\mathcal{E}_{1}\) as follows,_
\[\mathcal{V}_{1}=\{[L]\in\mathcal{K}|L\in\mathcal{P}_{\mathbf{V}},|\mathbf{V}| \in\mathbb{N}[I]\},\]
\[\mathcal{E}_{1}=\{[L]\xrightarrow{(k,r)}[L^{\prime}]|k\in I,0<r\in\mathbb{N},\pi_{k,r}(L^{\prime})=L\text{ for some }|\mathbf{V}|\in\mathbb{N}[I]\}.\]
Now we can introduce the left graph of \(\mathcal{L}(\Lambda)\), which is isomorphic to a subgraph of \(\mathcal{G}_{1}\).
**Definition 4.13**.: _We define the left graph of \(\mathcal{L}(\Lambda)\) to be the \(I\times\mathbb{N}_{>0}\)-colored graph \(\mathcal{G}_{1}(\Lambda)=(\mathcal{V}_{1}(\Lambda),\mathcal{E}_{1}(\Lambda))\), which consists of the following set of vertices \(\mathcal{V}_{1}(\Lambda)\) and the set of arrows \(\mathcal{E}_{1}(\Lambda)\)_
\[\mathcal{V}_{1}(\Lambda)=\{[L]\in\mathcal{K}_{0}(\Lambda)|L\in\mathcal{P}_{ \mathbf{V}},L\not\cong 0\text{ in }\mathcal{L}_{\mathbf{V}}(\Lambda),|\mathbf{V}|\in\mathbb{N}[I]\},\]
\[\mathcal{E}_{1}(\Lambda)=\{[L]\xrightarrow{(k,r)}[L^{\prime}]|[L],[L^{\prime} ]\in\mathcal{V}_{1}(\Lambda),k\in I,0<r\in\mathbb{N},\pi_{k,r}(L^{\prime})=L \text{ for some }|\mathbf{V}|\in\mathbb{N}[I]\}.\]
We say a sequence \(\underline{s}=((i_{1},n_{1}),(i_{2},n_{2}),\cdots,(i_{l},n_{l}))\) of \(I\times\mathbb{N}_{>0}\) is a left admissible path of \(L\in\mathcal{P}_{\mathbf{V}}\), if \(\pi_{i_{1},n_{1}}\pi_{i_{2},n_{2}}\cdots\pi_{i_{l},n_{l}}(L_{0})=L\), where \(L_{0}\) is the unique simple perverse sheaf in \(\mathcal{P}_{\mathbf{V}}\) with \(|\mathbf{V}|=0\). By Lemma 7.2 in [12], for any \(L\in\mathcal{P}_{\mathbf{V}}\), there exists a left admissible path of \(L\).
Consider the affine variety \(\Lambda_{\mathbf{V},\mathbf{W}}\subset\mathbf{E}_{\mathbf{V},\mathbf{W}}\)
\[\Lambda_{\mathbf{V},\mathbf{W}}=\{(B,i,j)\in\mu^{-1}(0)|j=0,B\text{ is nilpotent }\}\]
In particular, for \(\mathbf{W}=0\), we denote \(\Lambda_{\mathbf{V},0}\) by \(\Lambda_{\mathbf{V}}\).
Let \(\Lambda_{\mathbf{V},i,p}\) be the locally closed subset of \(\Lambda_{\mathbf{V}}\) defined by
\[\Lambda_{\mathbf{V},i,p}=\{x\in\Lambda_{\mathbf{V}}|\text{codim}_{\mathbf{V}_ {i}}(\text{Im}\sum_{h\in H,h^{\prime\prime}=i}x_{h})=p\},\]
then for any irreducible component \(Z\), there exists a unique \(p\) such that \(Z\cap\Lambda_{\mathbf{V},i,p}\) is dense in \(Z\) and we define \(t_{i}(Z)=p\). Similarly, we can consider the locally closed subsets
\[\Lambda_{\mathbf{V},i}^{p}=\{x\in\Lambda_{\mathbf{V}}|\dim(\operatorname{Ker}\bigoplus_{h\in H,h^{\prime}=i}x_{h})=p\}\]
and define \(t_{i}^{*}(Z)=p\) if \(p\) is the unique integer such that \(Z\cap\Lambda_{\mathbf{V},i}^{p}\) is dense in \(Z\).
Given \(\nu^{\prime}+\nu^{\prime\prime}=\nu\in\mathbb{N}[I]\) and graded vector spaces \(\mathbf{V},\mathbf{V}^{\prime},\mathbf{V}^{\prime\prime}\) with dimension vectors \(\nu,\nu^{\prime},\nu^{\prime\prime}\) respectively, let \(\Lambda^{\prime}\) be the set which consists of \((x,\tilde{\mathbf{W}},\rho_{1},\rho_{2})\) where \(x\in\Lambda_{\mathbf{V}}\), \(\tilde{\mathbf{W}}\) is an \(x\)-stable subspace of \(\mathbf{V}\) with dimension vector \(\nu^{\prime\prime}\) and \(\rho_{1}:\mathbf{V}/\tilde{\mathbf{W}}\simeq\mathbf{V}^{\prime},\rho_{2}:\tilde{\mathbf{W}}\simeq\mathbf{V}^{\prime\prime}\) are linear isomorphisms, and let \(\Lambda^{\prime\prime}\) be the set which consists of \((x,\tilde{\mathbf{W}})\) as above. Consider the morphisms
\[\Lambda_{\mathbf{V}^{\prime}}\times\Lambda_{\mathbf{V}^{\prime\prime}}\overset {p}{\leftarrow}\Lambda^{\prime}\overset{r}{\rightarrow}\Lambda^{\prime \prime}\overset{q}{\rightarrow}\Lambda_{\mathbf{V}}\]
where \(p(x,\tilde{\mathbf{W}},\rho_{1},\rho_{2})=(\rho_{1,*}(\bar{x}|_{\mathbf{V}/\tilde{\mathbf{W}}}),\rho_{2,*}(x|_{\tilde{\mathbf{W}}}))\), \(r(x,\tilde{\mathbf{W}},\rho_{1},\rho_{2})=(x,\tilde{\mathbf{W}})\) and \(q(x,\tilde{\mathbf{W}})=x\). Note that \(r\) is a principal \(G_{\mathbf{V}^{\prime}}\times G_{\mathbf{V}^{\prime\prime}}\)-bundle and \(q\) is proper.
We assume that \(|\mathbf{V}^{\prime\prime}|+pi=|\mathbf{V}|\) and denote \(qr:\Lambda^{\prime}\rightarrow\Lambda_{\mathbf{V}}\) by \(q^{\prime}\). Then \(p^{-1}(\Lambda_{\mathbf{V}^{\prime\prime},i,0})=(q^{\prime-1})(\Lambda_{\mathbf{V },i,p})\). We denote \(p^{-1}(\Lambda_{\mathbf{V}^{\prime\prime},i,0})\) by \(\Lambda^{\prime}_{i,p}\) and denote \((q^{-1})(\Lambda_{\mathbf{V},i,p})\) by \(\Lambda^{\prime\prime}_{i,p}\). Then we have the following commutative diagram
here \(i_{1},i_{2},i_{3}\) and \(i_{4}\) are the natural embeddings.
**Lemma 4.14**.: _[_21_, 4.17]_ _(1) \(q^{\prime}:\Lambda^{\prime}_{i,p}\rightarrow\Lambda_{\mathbf{V},i,p}\) is a principal \(G_{\mathbf{V}^{\prime\prime}}\times G_{\mathbf{V}^{\prime}}\)-bundle. (2) \(p:\Lambda^{\prime}_{i,p}\rightarrow\Lambda_{\mathbf{V}^{\prime\prime},i,0}\) is a smooth map whose fibers are connected of dimension \((\sum\limits_{i\in I}\nu_{i}^{2})-p(\nu^{\prime\prime},i)\)._
**Corollary 4.15**.: \(q^{\prime}p^{-1}\) _induces a bijection \(\eta_{i,p}:Irr\Lambda_{\mathbf{V}^{\prime\prime},i,0}\to Irr\Lambda_{\mathbf{V},i,p}\) from the set of irreducible components of \(\Lambda_{\mathbf{V}^{\prime\prime},i,0}\) to the set of irreducible components of \(\Lambda_{\mathbf{V},i,p}\). The map \(\eta_{i,p}\) also induces a bijection (still denoted by \(\eta_{i,p}\)) from \(\{Z\in Irr\Lambda_{\mathbf{V}^{\prime\prime}}|t_{i}(Z)=0\}\) to \(\{Z\in Irr\Lambda_{\mathbf{V}}|t_{i}(Z)=p\}\) defined by:_
\[\eta_{i,p}(\bar{Z})=\overline{\eta_{i,p}(Z)}\]
_for \(Z\in Irr\Lambda_{\mathbf{V}^{\prime\prime},i,0}\). Here \(\bar{Z}\) and \(\overline{\eta_{i,p}(Z)}\) are the closure of \(Z\) and \(\eta_{i,p}(Z)\) respectively._
Now we can define the left graph \(\mathcal{G}_{2}\) for \(Irr\Lambda_{\mathbf{V}}\).
**Definition 4.16**.: _We define the left graph \(\mathcal{G}_{2}=(\mathcal{V}_{2},\mathcal{E}_{2})\) to be the \(I\times\mathbb{N}_{>0}\)-colored graph consisting of the following set of vertices \(\mathcal{V}_{2}\) and set of arrows \(\mathcal{E}_{2}\):_
\[\mathcal{V}_{2}=\{Z|Z\in Irr\Lambda_{\mathbf{V}},|\mathbf{V}|\in\mathbb{N}[I ]\},\]
\[\mathcal{E}_{2}=\{Z\xrightarrow{(k,r)}Z^{\prime}|k\in I,0<r\in\mathbb{N}, \eta_{k,r}(Z^{\prime})=Z\text{ for some }|\mathbf{V}|\in\mathbb{N}[I]\}.\]
Similarly, we say a sequence \(\underline{s}=((i_{1},n_{1}),(i_{2},n_{2}),\cdots,(i_{l},n_{l}))\) of \(I\times\mathbb{N}_{>0}\) is a left admissible path of \(Z\in Irr\Lambda_{\mathbf{V}}\), if \(\eta_{i_{1},n_{1}}\eta_{i_{2},n_{2}}\cdots\eta_{i_{l},n_{l}}(Z_{0})=Z\), where \(Z_{0}\) is the unique irreducible component of \(\Lambda_{0}\). By Corollary 1.6 in [15], for any \(Z\in Irr\Lambda_{\mathbf{V}}\), there exists a left admissible path of \(Z\). Then by [8] or Theorem 7.1 in [5], we have the following result:
**Proposition 4.17**.: _There is a bijection \(\Phi:\mathcal{V}_{1}\rightarrow\mathcal{V}_{2}\) such that \(\Phi\) commutes with the arrows in \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) and \(\Phi\) preserves the value \(t_{i}\) and \(t_{i}^{*}\)._
\[\Phi([L])\xrightarrow{(k,r)}\Phi([L^{\prime}])\iff L\xrightarrow{(k,r)}L^{ \prime},k\in I,0<r\in\mathbb{N};\]
\[t_{i}(L)=t_{i}(\Phi([L])),t_{i}^{*}(L)=t_{i}^{*}(\Phi([L])),i\in I.\]
_Moreover, if \(\underline{s}=((i_{1},n_{1}),(i_{2},n_{2}),\cdots,(i_{l},n_{l}))\) is a left admissible path of \(L\), then \(\Phi([L])=\eta_{i_{1},n_{1}}\eta_{i_{2},n_{2}}\cdots\eta_{i_{l},n_{l}}(Z_{0})\)._
We can also define the left graph \(\mathcal{G}_{0}(\Lambda_{\omega})\) for Nakajima's quiver variety, which is an analogue of \(\mathcal{G}_{1}(\Lambda)\).
**Definition 4.18**.: _We define the left graph \(\mathcal{G}_{0}(\Lambda_{\omega})=(\mathcal{V}_{0},\mathcal{E}_{0})\) of Nakajima's quiver variety (associated with \(\omega\)) to be the \(I\times\mathbb{N}_{>0}\)-colored graph, which consists of the following set of vertices \(\mathcal{V}_{0}\) and the set of arrows \(\mathcal{E}_{0}\)_
\[\mathcal{V}_{0}=\{X\in Irr\mathfrak{L}(\nu,\omega)|\nu\in\mathbb{N}[I]\},\]
\[\mathcal{E}_{0}=\{X^{\prime}\xrightarrow{(k,r)}X|k\in I,0<r\in\mathbb{N}, \varrho_{k,r}(X)=X^{\prime}\text{ for some }\nu\in\mathbb{N}[I]\}.\]
**Lemma 4.19**.: _[_16_, Lemma 5.8]_ _(1) \(\mathfrak{L}(\nu,\omega)\) is isomorphic to the geometric quotient of \(\Lambda_{\mathbf{V},\mathbf{W}}\cap\mu^{-1}(0)^{s}\). In particular, the set \(Irr\mathfrak{L}(\nu,\omega)\) is bijective to the set of \(G_{\mathbf{V}}\)-invariant Lagrangian irreducible components of \(\Lambda_{\mathbf{V},\mathbf{W}}\cap\mu^{-1}(0)^{s}\). (2) The projection \(\pi_{\mathbf{W}}:\Lambda_{\mathbf{V},\mathbf{W}}\rightarrow\Lambda_{\mathbf{V} };(B,i,0)\mapsto B\) is a vector bundle. In particular, the set \(Irr\Lambda_{\mathbf{V},\mathbf{W}}\) is bijective to the set \(Irr\Lambda_{\mathbf{V}}\)._
Notice that the set of \(G_{\mathbf{V}}\)-invariant Lagrangian irreducible components of \(\Lambda_{\mathbf{V},\mathbf{W}}\cap\mu^{-1}(0)^{s}\) can be naturally regarded as a subset of \(Irr\Lambda_{\mathbf{V},\mathbf{W}}\). With the lemma above, consider the injective map \(\Psi:Irr\mathfrak{L}(\nu,\omega)\to Irr\Lambda_{\mathbf{V}}\) defined by composing the two maps above. If \(\Lambda=\Lambda_{\omega}\), then for any \(\nu\in\mathbb{N}[I]\) and \(X\in Irr\mathfrak{L}(\nu,\omega)\), we define \(\Phi^{\Lambda}(X)=(\Phi)^{-1}(\Psi(X))\). By definition, \(\Phi^{\Lambda}:\mathcal{V}_{0}\to\mathcal{V}_{1}\) is injective.
**Theorem 4.20**.: _The image of \(\Phi^{\Lambda}\) is contained in \(\mathcal{V}_{1}(\Lambda)\). Moreover, \(\Phi^{\Lambda}:\mathcal{V}_{0}\to\mathcal{V}_{1}(\Lambda)\) is exactly an isomorphism of (\(I\times\mathbb{N}_{>0}\)-colored) left graphs,_
\[X^{\prime}\xrightarrow{(k,r)}X\iff\Phi^{\Lambda}(X^{\prime})\xrightarrow{(k,r) }\Phi^{\Lambda}(X),k\in I,0<r\in\mathbb{N}.\]
Proof.: We first prove that \(\operatorname{Im}(\Phi^{\Lambda})\subset\mathcal{V}_{1}(\Lambda)\). Assume that \(\Phi^{\Lambda}(X)=[L]\) and \(L\in\mathcal{N}_{\mathbf{V}}\) for some \(X\); then there exists some \(i\in I\) such that \(L\) lies in \(\mathcal{N}_{\mathbf{V},i}\). Then by Remark 3.25(2), viewed as an object in \(\mathcal{Q}_{\mathbf{V}}\) via the inverse of \((\pi_{\mathbf{W}})^{*}\), \(L\) satisfies \(t_{i}^{*}(L)=r>d_{i}\), hence the irreducible component \(Z=\Phi(L)\subset\Lambda_{\mathbf{V}}\) satisfies \(t_{i}^{*}(Z)=r>d_{i}\). Let \(Z^{\prime}=\pi_{\mathbf{W}}^{-1}(Z)\) be the irreducible component of \(\Lambda_{\mathbf{V},\mathbf{W}}\) corresponding to \(Z\). By definition, \(\pi_{\mathbf{W}}^{-1}(\Lambda_{\mathbf{V},i}^{r}\cap Z)\) is dense in \(Z^{\prime}\). We claim that \(\pi_{\mathbf{W}}^{-1}(\Lambda_{\mathbf{V},i}^{r}\cap Z)\cap\mu^{-1}(0)^{s}\) is empty. Indeed, for any \((B,i,0)\in\pi_{\mathbf{W}}^{-1}(\Lambda_{\mathbf{V},i}^{r}\cap Z)\), we consider \(S=\operatorname{Ker}\bigoplus\limits_{h\in H,h^{\prime}=i}B_{h}\). Then \(S\) is a \(B\)-stable subspace and \(S\cap\operatorname{Ker}i\neq 0\). (Otherwise, \(i|_{S}\) would be an injective linear map from the \(r\)-dimensional space \(S\) to the \(d_{i}\)-dimensional space \(\mathbf{W}_{\hat{i}}\).) Hence \(S\cap\operatorname{Ker}i\) is a nonzero \(B\)-stable subspace contained in \(\operatorname{Ker}i\), which violates the stability condition (S). However, \(Z^{\prime}\cap\mu^{-1}(0)^{s}\) is dense in \(Z^{\prime}\). We get a contradiction and have proved that \(\operatorname{Im}(\Phi^{\Lambda})\subset\mathcal{V}_{1}(\Lambda)\).
Notice that by Theorem 3.26, we have
\[|\mathcal{P}_{\mathbf{V}}\backslash\mathcal{N}_{\mathbf{V}}|=\dim_{\mathbb{Q} (v)}L_{|\mathbf{V}|}(\Lambda)\]
where \(\mathcal{P}_{\mathbf{V}}\backslash\mathcal{N}_{\mathbf{V}}\) is the set of nonzero simple perverse sheaves in \(\mathcal{L}_{\mathbf{V}}(\Lambda)\). And by Theorem 4.9, we have
\[\dim_{\mathbb{Q}(v)}L_{|\mathbf{V}|}(\Lambda)=\dim_{\mathbb{Q}}L_{0,|\mathbf{ V}|}(\Lambda)=|Irr\mathfrak{L}(\nu,\omega)|\]
hence \(\Phi^{\Lambda}:\mathcal{V}_{0}\to\mathcal{V}_{1}(\Lambda)\) is a bijection. Notice that \(\Psi\) commutes with \(\eta_{k,r}^{-1}\) and \(\varrho_{k,r}\) by definition. Indeed, both \(\eta_{k,r}^{-1}\) and \(\varrho_{k,r}\) can be obtained by restricting \(B\) to \((\sum\limits_{h\in H,h^{\prime\prime}=k}\operatorname{Im}B_{h})\oplus\bigoplus\limits_{l\neq k}\mathbf{V}_{l}\) for generic \(B\). Hence \(\Phi^{\Lambda}\) commutes with the arrows. More precisely, we have
\[X^{\prime}\xrightarrow{(k,r)}X\iff\Phi^{\Lambda}(X^{\prime})\xrightarrow{(k, r)}\Phi^{\Lambda}(X),k\in I,0<r\in\mathbb{N},\]
as desired.
### Monomial bases from left graphs
Let \(\mathcal{S}\) be the set of sequences of \(I\times\mathbb{N}_{>0}\).
**Definition 4.21**.: _For a given order \((i_{1}\prec i_{2}\prec\cdots\prec i_{n})\) of \(I\), we inductively define a map \(\underline{s}^{\prec}=\underline{s}:\mathcal{V}_{1}(\Lambda)\to\mathcal{S}\) as follows. (1) If \(L\) is the unique simple perverse sheaf on \(\mathbf{E}_{\mathbf{V},\mathbf{W},\hat{\Omega}},|\mathbf{V}|=pi,p\leqslant d_{i}\), we define \(\underline{s}([L])=((i,p))\). In particular \(\underline{s}([L_{0}])=\emptyset\). (2) For the other \(L\in\mathcal{V}_{1}(\Lambda)\), there exists a unique \(r\in\mathbb{N}\) such that \(t_{i_{r}}(L)=n>0\) but \(t_{i_{s}}(L)=0\) holds for any \(s<r\); take \(K\) such that \(L\xrightarrow{(i_{r},n)}K\) and define \(\underline{s}([L])=((i_{r},n),\underline{s}([K]))\)._
**Lemma 4.22**.: _The map \(\underline{s}^{\prec}=\underline{s}:\mathcal{V}_{1}(\Lambda)\to\mathcal{S}\) is well-defined and injective._
Proof.: It suffices to show that the inductive definition can be continued. Suppose that \(t_{i_{r}}(L)=n>0\) for some \(L\in\mathcal{V}_{1}(\Lambda)\), but the object \(K\) with \(L\xrightarrow{(i_{r},n)}K\) belongs to \(\mathcal{N}_{\mathbf{V}^{\prime}}\). Then there exists \(j\) such that \(t_{j}^{*}(K)>d_{j}\) and \(K\) is a direct summand of some \(L_{(\underline{u},j^{d})}\) with \(d>d_{j}\). By definition of \(\pi_{i_{r},n}\), we know that \(L\) is a direct summand of \(L_{(i_{r}^{n},\underline{u},j^{d})}\) and belongs to \(\mathcal{N}_{\mathbf{V},j}\), a contradiction.
Similarly, we can inductively define \(\underline{s}^{\prime}:\mathcal{V}_{0}\to\mathcal{S}\). By definition, \(\underline{s}([L])\) is a left admissible path of \(L\) for any \(L\in\mathcal{P}_{\mathbf{V}}\backslash\mathcal{N}_{\mathbf{V}}\) and \(\underline{s}^{\prime}(X)\) is a left admissible path of \(X\) for any \(X\in Irr\mathfrak{L}(\nu,\omega)\).
**Definition 4.23**.: _Given two sequences_
\[\underline{s}_{1}=((i_{n_{1}},m_{1}),(i_{n_{2}},m_{2}),\cdots,(i_{n_{k}},m_{k})),\ \underline{s}_{2}=((i_{n_{1}^{\prime}},m_{1}^{\prime}),(i_{n_{2}^{\prime}},m_{2}^{\prime}),\cdots,(i_{n_{l}^{\prime}},m_{l}^{\prime}))\in\mathcal{S}\]
_such that \(\sum\limits_{1\leqslant s\leqslant k}m_{s}i_{n_{s}}=\sum\limits_{1\leqslant s \leqslant l}m^{\prime}_{s}i_{n^{\prime}_{s}}\), we say \(\underline{s}_{2}\prec\underline{s}_{1}\) if there exists \(r\in\mathbb{N}\) such that \((i_{n_{t}},m_{t})=(i_{n^{\prime}_{t}},m^{\prime}_{t})\) for \(1\leqslant t<r\) and \((i_{n^{\prime}_{r}},m^{\prime}_{r})\prec(i_{n_{r}},m_{r})\)._
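For instance, if \(I=\{1,2\}\) with \(1\prec 2\) and pairs in \(I\times\mathbb{N}_{>0}\) are compared by their first component (extending the chosen order on \(I\)), then \(((1,1),(2,1))\prec((2,1),(1,1))\): both sequences have \(\sum\limits_{s}m_{s}i_{n_{s}}=1+2\in\mathbb{N}[I]\), they first differ at \(t=1\), and \((1,1)\prec(2,1)\).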
Recall that \(\mathcal{S}_{|\mathbf{V}|}\) is naturally bijective to the subset of \(\mathcal{S}\) which consists of the sequences \(\underline{s}=((i_{n_{1}},m_{1}),(i_{n_{2}},m_{2}),\cdots,(i_{n_{k}},m_{k}))\) such that \(\sum\limits_{1\leqslant s\leqslant k}m_{s}i_{n_{s}}=|\mathbf{V}|\), so \((\mathcal{S}_{|\mathbf{V}|},\prec)\) becomes a partially ordered set. Regarded as subsets of \(\mathcal{S}\) via \(\underline{s}\) and \(\underline{s}^{\prime}\), \(\mathcal{V}_{1}(\Lambda)\) and \(\mathcal{V}_{0}\) also become partially ordered sets. We still denote their partial orders by \(\prec\).
For any sequence
\[\underline{s}=((i_{n_{1}},m_{1}),(i_{n_{2}},m_{2}),\cdots,(i_{n_{k}},m_{k})) \in\mathcal{S},\]
we define
\[m^{\Lambda}_{\underline{s}}=F^{(m_{1})}_{i_{n_{1}}}*F^{(m_{2})}_{i_{n_{2}}}* \cdots*F^{(m_{k})}_{i_{n_{k}}}[\mathfrak{L}(0,\omega)]\in\bigoplus\limits_{ \nu}H_{top}(\mathfrak{L}(\nu,\omega))\]
and
\[[M^{\Lambda}_{\underline{s}}]=F^{(m_{1})}_{i_{n_{1}}}F^{(m_{2})}_{i_{n_{2}}} \cdots F^{(m_{k})}_{i_{n_{k}}}[L_{0}]\in\mathcal{K}_{0}(\Lambda).\]
**Proposition 4.24**.: _For fixed \(|\mathbf{V}|=\nu\), we set_
\[\mathbf{M}^{\Lambda}_{\mathbf{V}}=\{[M^{\Lambda}_{\underline{s}([L])}]|[L]\in \mathcal{V}_{1}(\Lambda),L\in\mathcal{P}_{\mathbf{V}}\},\]
\[\mathbf{M^{\prime}}^{\Lambda}_{\mathbf{V}}=\{m^{\Lambda}_{\underline{s}^{\prime}(X)}|X\in Irr\mathfrak{L}(\nu,\omega)\}\]
_then we have (1) The set \(\mathbf{M}^{\Lambda}_{\mathbf{V}}\) is an \(\mathcal{A}\)-basis of \(\mathcal{K}_{0,|\mathbf{V}|}(\Lambda)\). (2) The set \(\mathbf{M^{\prime}}^{\Lambda}_{\mathbf{V}}\) is a \(\mathbb{Q}\)-basis of \(H_{top}(\mathfrak{L}(\nu,\omega))\). (3) The transition matrix from \(\mathbf{B}^{\Lambda}_{1,\mathbf{V}}=\{[L]|L\in\mathcal{V}_{1}(\Lambda),L\in\mathcal{P}_{\mathbf{V}}\}\) to \(\mathbf{M}^{\Lambda}_{\mathbf{V}}\) is upper triangular (with respect to \(\prec\)) and with diagonal entries all equal to 1. (4) The transition matrix from \(\mathbf{B}^{\Lambda}_{2,\mathbf{V}}=\{[X]|X\in Irr\mathfrak{L}(\nu,\omega)\}\) to \(\mathbf{M^{\prime}}^{\Lambda}_{\mathbf{V}}\) is upper triangular (with respect to \(\prec\)) and with diagonal entries all equal to \(\pm 1\)._
Proof.: We claim that for any \([L]\in\mathcal{V}_{1}(\Lambda)\),
\[M^{\Lambda}_{\underline{s}([L])}=[L]+\sum\limits_{\underline{s}([L^{\prime}])\succ\underline{s}([L])}c_{L,L^{\prime}}[L^{\prime}]\]
where \(c_{L,L^{\prime}}\) are constants in \(\mathcal{A}\). We argue by induction on the length \(k\) of
\[\underline{s}([L])=((i_{n_{1}},m_{1}),(i_{n_{2}},m_{2}),\cdots,(i_{n_{k}},m_{k })).\]
If \(k=1\) and \(\underline{s}([L])=((i_{n_{1}},m_{1}))\), then \(\nu=m_{1}i_{n_{1}}\) and
\[[L_{i_{n_{1}}^{m_{1}}}]=F^{(m_{1})}_{i_{n_{1}}}[L_{0}]=M^{\Lambda}_{\underline{ s}([L])}\]
holds trivially. If \(k>1\), by Lemma 2.8, we can take \(L^{\prime}\in\mathcal{P}_{\mathbf{V}^{\prime}}\) with \(|\mathbf{V}|=|\mathbf{V}^{\prime}|+m_{1}i_{n_{1}}\) such that \(t_{i_{n_{1}}}(L^{\prime})=0,\pi_{i_{n_{1}},m_{1}}(L^{\prime})=L\), and then
\[F^{(m_{1})}_{i_{n_{1}}}[L^{\prime}]=[L]+\sum\limits_{t_{i_{n_{1}}}(L^{\prime \prime})>m_{1}}c_{L^{\prime},L^{\prime\prime}}[L^{\prime\prime}]\]
By the inductive assumption,
\[M^{\Lambda}_{\underline{s}([L^{\prime}])}=[L^{\prime}]+\sum\limits_{\underline{s}([L^{\prime\prime}])\succ\underline{s}([L^{\prime}])}c_{L^{\prime},L^{\prime\prime}}[L^{\prime\prime}].\]
Notice that \(\underline{s}([L])=((i_{n_{1}},m_{1}),\underline{s}([L^{\prime}]))\), we have
\[M^{\Lambda}_{\underline{s}([L])}= F^{(m_{1})}_{i_{n_{1}}}M^{\Lambda}_{\underline{s}([L^{\prime}])}\] \[= F^{(m_{1})}_{i_{n_{1}}}([L^{\prime}]+\sum_{\underline{s}([L^{\prime\prime}])\succ\underline{s}([L^{\prime}])}c_{L^{\prime},L^{\prime\prime}}[L^{\prime\prime}])\] \[= [L]+\sum_{t_{i_{n_{1}}}(L^{\prime\prime})>m_{1}}c_{L^{\prime},L^{\prime\prime}}[L^{\prime\prime}]+\sum_{\underline{s}([L^{\prime\prime}])\succ\underline{s}([L^{\prime}])}c_{L^{\prime},L^{\prime\prime}}F^{(m_{1})}_{i_{n_{1}}}[L^{\prime\prime}].\]
Notice that if \(t_{i_{n_{1}}}([L^{\prime\prime}])=m>m_{1}\), then \(\underline{s}([L^{\prime\prime}])\) either starts with \((i_{n_{1}},m)\) or starts with \((i_{r},m^{\prime})\) for some \(r<n_{1}\), hence
\[(\sharp)\ \ \underline{s}([L^{\prime\prime}])\succ\underline{s}([L])\text{ holds for }L^{\prime\prime}\text{ which satisfies }t_{i_{n_{1}}}(L^{\prime\prime})=m>m_{1}.\]
We only need to show
\[F^{(m_{1})}_{i_{n_{1}}}[L^{\prime\prime}]=\sum_{\underline{s}([L^{\prime\prime\prime}])\succ\underline{s}([L])}d_{L^{\prime\prime\prime}}[L^{\prime\prime\prime}]\]
for those \(L^{\prime\prime}\) which satisfy \(\underline{s}([L^{\prime\prime}])\succ\underline{s}([L^{\prime}])\). For those \(L^{\prime\prime}\) such that \(t_{i_{n_{1}}}([L^{\prime\prime}])>0\), by Proposition 2.9 the class \([L^{\prime\prime}]\) belongs to the \(\mathcal{A}\)-submodule \(F_{i_{n_{1}}}\mathcal{K}_{0}(\Lambda)\), hence \(F^{(m_{1})}_{i_{n_{1}}}[L^{\prime\prime}]\) belongs to \(F^{(m_{1}+1)}_{i_{n_{1}}}\mathcal{K}_{0}(\Lambda)\). By Proposition 2.9, \(\{[L]|L\in\mathcal{P},t_{i_{n_{1}}}(L)\geqslant m_{1}+1\}\) forms a basis of \([L_{i_{n_{1}}^{(m_{1}+1)}}]*\mathcal{K}\) in \(\mathcal{K}\). Projecting this basis to \(\mathcal{K}_{0}(\Lambda)\), we see that \(\{[L]|L\in\mathcal{P},L\notin\mathcal{N},t_{i_{n_{1}}}(L)\geqslant m_{1}+1\}\) forms a basis of \(F^{(m_{1}+1)}_{i_{n_{1}}}\mathcal{K}_{0}(\Lambda)\). Hence
\[F^{(m_{1})}_{i_{n_{1}}}[L^{\prime\prime}]=\sum_{t_{i_{n_{1}}}([L^{\prime\prime \prime}])>m_{1}}e_{L^{\prime\prime\prime}}[L^{\prime\prime\prime}].\]
By \((\sharp)\), those \(L^{\prime\prime\prime}\) satisfy \(t_{i_{n_{1}}}([L^{\prime\prime\prime}])=m>m_{1}\), hence \(\underline{s}([L^{\prime\prime\prime}])\succ\underline{s}([L])\). For those \(L^{\prime\prime}\) such that \(t_{i_{n_{1}}}([L^{\prime\prime}])=0\), we have
\[F^{(m_{1})}_{i_{n_{1}}}[L^{\prime\prime}]=[\tilde{L}]+\sum_{t_{i_{n_{1}}}([L^{\prime\prime\prime}])>m_{1}}e_{L^{\prime\prime\prime}}[L^{\prime\prime\prime}]\]
where \(\tilde{L}\) is a simple perverse sheaf such that \(t_{i_{n_{1}}}([\tilde{L}])=m_{1}\). By \((\sharp)\), we have \(\underline{s}([L^{\prime\prime\prime}])\succ\underline{s}([L])\). So we only need to show \(\underline{s}([\tilde{L}])\succ\underline{s}([L])\). Indeed, if \(t_{i_{r}}([\tilde{L}])>0\) holds for some \(r<n_{1}\), then \(\underline{s}([\tilde{L}])\) starts with \((i_{r},m^{\prime}),r<n_{1}\) and \(\underline{s}([\tilde{L}])\succ\underline{s}([L])\) holds by definition. Otherwise, we have \(\underline{s}([\tilde{L}])=((i_{n_{1}},m_{1}),\underline{s}([L^{\prime\prime}]))\). Since \(\underline{s}([L^{\prime\prime}])\succ\underline{s}([L^{\prime}])\), we have
\[\underline{s}([\tilde{L}])=((i_{n_{1}},m_{1}),\underline{s}([L^{\prime\prime}]))\succ((i_{n_{1}},m_{1}),\underline{s}([L^{\prime}]))=\underline{s}([L]).\]
In conclusion, we have proved our claim. Then \((1)\) and \((3)\) follow from the claim by basic linear algebra. Similarly, we can prove
\[m^{\Lambda}_{\underline{s}^{\prime}(X)}=\pm[X]+\sum_{\underline{s}^{\prime}(X^ {\prime})\succ\underline{s}^{\prime}(X)}c_{X,X^{\prime}}[X^{\prime}]\]
where \(c_{X,X^{\prime}}\) are constants in \(\mathbb{Q}\). Then \((2)\) and \((4)\) hold.
Recall that we have isomorphisms \(\varsigma^{\Lambda}:\mathcal{K}_{0}(\Lambda)\rightarrow{}_{\mathcal{A}}L(\Lambda)\) and \(\varkappa^{\Lambda}:\bigoplus\limits_{\nu}H_{top}(\mathfrak{L}(\nu,\omega))\to L_{0}(\Lambda)\). We still denote the composition of \(\varsigma^{\Lambda}:\mathcal{K}_{0}(\Lambda)\rightarrow{}_{\mathcal{A}}L(\Lambda)\) and the classical limit by \(\varsigma^{\Lambda}:\mathbb{Z}\otimes_{\mathcal{A}}\mathcal{K}_{0}(\Lambda)\rightarrow{}_{\mathbb{Z}}L(\Lambda)\).
\[\mathbb{Z}\otimes\mathcal{K}_{0}(\Lambda)\stackrel{{\varsigma^{ \Lambda}}}{{\longrightarrow}}{}_{\mathbb{Z}}L_{0}(\Lambda)\]
\[\bigoplus\limits_{\nu}H_{top}(\mathfrak{L}(\nu,\omega))\stackrel{{ \varkappa^{\Lambda}}}{{\longrightarrow}}L_{0}(\Lambda)\]
**Definition 4.25**.: _We define \(sgn=sgn^{\succ}:\bigcup\limits_{\nu}Irr\mathfrak{L}(\nu,\omega)\to\{\pm 1\}\) to be the function which satisfies_
\[m^{\Lambda}_{\underline{s}^{\prime}([X])}=sgn(X)[X]+\sum\limits_{\underline{s} ^{\prime}([X^{\prime}])\succ\underline{s}^{\prime}([X])}c_{X,X^{\prime}}[X^{ \prime}].\]
For the integrable highest weight module \(L_{0}(\Lambda)\) of the universal enveloping algebra, we define
\[\tilde{\mathbf{B}}^{\Lambda}_{1,\mathbf{V}}=\{\varsigma^{\Lambda}([L])|L\in \mathcal{V}_{1}(\Lambda),L\in\mathcal{P}_{\mathbf{V}}\};\tilde{\mathbf{B}}_{1 }=\tilde{\mathbf{B}}^{\Lambda}_{1}=\bigcup\limits_{\mathbf{V}}\tilde{ \mathbf{B}}^{\Lambda}_{1,\mathbf{V}}\]
\[\tilde{\mathbf{B}}^{\Lambda}_{2,\mathbf{V}}=\{\varkappa^{\Lambda}(sgn(X)[X])| X\in Irr\mathfrak{L}(\nu,\omega)\};\tilde{\mathbf{B}}_{2}=\tilde{\mathbf{B}}^{ \Lambda}_{2}=\bigcup\limits_{\mathbf{V}}\tilde{\mathbf{B}}^{\Lambda}_{2, \mathbf{V}}\]
**Theorem 4.26**.: _The transition matrix from \(\tilde{\mathbf{B}}_{1}\) to \(\tilde{\mathbf{B}}_{2}\) is upper triangular (with respect to \(\prec\)) and with diagonal entries all equal to 1. More precisely, if \(X\in\mathcal{V}_{0}\) and \([L]=\Phi^{\Lambda}(X)\), then we have_
\[\varsigma^{\Lambda}([L])=\varkappa^{\Lambda}(sgn(X)[X])+\sum\limits_{ \underline{s}^{\prime}([X^{\prime}])\succ\underline{s}^{\prime}([X])}c_{X^{ \prime}}\varkappa^{\Lambda}(sgn(X^{\prime})[X^{\prime}])\]
_where \(c_{X^{\prime}}\in\mathbb{Q}\) are constants._
Proof.: Noticing that \(\underline{s}(\Phi^{\Lambda}(X))=\underline{s}^{\prime}(X)\), we have
\[\varkappa^{\Lambda}(m^{\Lambda}_{\underline{s}^{\prime}(X)})=F_{i_{n_{1}}}^{(m_{1})}F_{i_{n_{2}}}^{(m_{2})}\cdots F_{i_{n_{k}}}^{(m_{k})}v_{\Lambda}=\varsigma^{\Lambda}([M^{\Lambda}_{\underline{s}([L])}]),\]
hence \(\varkappa^{\Lambda}(\mathbf{M}^{\prime\Lambda})=\varsigma^{\Lambda}(\mathbf{ M}^{\Lambda})\) is a basis of \(L_{0}(\Lambda)\), denoted by \(\tilde{\mathbf{M}}^{\Lambda}\). Let \(P_{i},i=1,2\) be the transition matrices from \(\tilde{\mathbf{M}}^{\Lambda}\) to \(\tilde{\mathbf{B}}_{i},i=1,2\) respectively, then each \(P_{i}\) is upper triangular (with respect to \(\prec\)) and with diagonal entries all equal to 1. Since \(\Phi^{\Lambda}\) preserves the order \(\prec\), the transition matrix \(P_{2}P_{1}^{-1}\) from \(\tilde{\mathbf{B}}_{1}\) to \(\tilde{\mathbf{B}}_{2}\) is also upper triangular and with diagonal entries all equal to 1.
**Remark 4.27**.: _We can see that the theorem does not depend on the choice of the order of \(I\). More precisely, if we choose another order \(\widetilde{\prec}\) of \(I\), then \(\widetilde{\prec}\) induces an order of \(\mathcal{S}\). We can also define \(\underline{s}^{\widetilde{\prec}}:\mathcal{V}_{1}\to\mathcal{S}\) and \(\underline{s}^{\prime}{}^{\widetilde{\prec}}:\mathcal{V}_{2}\to\mathcal{S}\) in a similar way. Then with the notation above, we still have \(\varsigma([L])=\varkappa(f_{Z})+\sum\limits_{\underline{s}^{\prime}{}^{ \widetilde{\prec}}(f_{Z^{\prime}})\succ\underline{s}^{\prime}{}^{\widetilde {\prec}}(f_{Z})}c_{Z^{\prime}}\varkappa(f_{Z^{\prime}})\). Moreover, the transition matrix \(P_{2}P_{1}^{-1}\) does not depend on the choice of \(\prec\) (up to a permutation). In particular, the function \(sgn\) does not depend on the choice of \(\prec\), either._
**Definition 4.28**.: _For \(X,X^{\prime}\in\mathcal{V}_{0}\), we define \([X]\preceq^{\prime}[X^{\prime}]\) if and only if for any order \(\prec\) of \(I\) and the induced map \(\underline{s}^{\prime}{}^{\prec}:\mathcal{V}_{0}\to\mathcal{S}\), we always have \(\underline{s}^{\prime\prec}([X])\prec\underline{s}^{\prime}{}^{\prec}([X^{ \prime}])\)._
_Similarly, given \(L,K\in\mathcal{V}_{1}(\Lambda)\), we say \(L\preceq K\) if and only if for any order \(\prec\) of \(I\) and the induced map \(\underline{s}^{\prec}:\mathcal{V}_{1}\to\mathcal{S}\), we have \(\underline{s}^{\prec}([L])\prec\underline{s}^{\prec}([K])\)._
One can check that \(X\preceq^{\prime}X^{\prime}\) if and only if \(\Phi^{\Lambda}([X])\preceq\Phi^{\Lambda}([X^{\prime}])\).
**Corollary 4.29**.: _The transition matrix from \(\tilde{\mathbf{B}}_{1}\) to \(\tilde{\mathbf{B}}_{2}\) is upper triangular (with respect to \(\preceq\) and \(\preceq^{\prime}\)) and with diagonal entries all equal to 1. More precisely, if \(X\in\mathcal{V}_{0}\) and \([L]=\Phi^{\Lambda}(X)\), then we have_
\[\varsigma^{\Lambda}([L])=\varkappa^{\Lambda}(sgn(X)[X])+\sum\limits_{X\preceq X ^{\prime}}c_{X^{\prime}}\varkappa^{\Lambda}(sgn(X^{\prime})[X^{\prime}])\]
_where \(c_{X^{\prime}}\in\mathbb{Q}\) are constants._
**Acknowledgements.** J. Fang is partially supported by National Key R&D Program of China [Grant No. 2020YFE0204200] and Tsinghua University Initiative Scientific Research Program [Grant No. 2019Z07L01006]. Y. Lan is partially supported by the National Natural Science Foundation of China [Grant No. 1288201] and Tsinghua University Initiative Scientific Research Program [Grant No. 2019Z07L01006]. J. Xiao is partially supported by Natural Science Foundation of China [Grant No. 12031007].
|
2304.00203
|
Effect of the resonant ac-drive on the spin-dependent recombination of
polaron pairs: Relation to organic magnetoresistance
|
The origin of magnetoresistance in bipolar organic materials is the influence
of magnetic field on the dynamics of recombination within localized
electron-hole pairs. Recombination from the $S$ spin-state of the pair is
preceded by the beatings between the states $S$ and $T_0$. Period of the
beating is set by the random hyperfine field. For the case when
recombination time from $S$ is shorter than the period, we demonstrate that a
{\em weak} resonant ac drive, which couples $T_0$ to $T_+$ and $T_{-}$, affects
the recombination dynamics dramatically and, thus, the current. A distinctive
characteristic of the effect is that the current versus the drive amplitude
exhibits a {\em maximum}.
|
M. E. Raikh
|
2023-04-01T02:22:52Z
|
http://arxiv.org/abs/2304.00203v1
|
Effect of the resonant ac-drive on the spin-dependent recombination of polaron pairs: Relation to organic magnetoresistance
###### Abstract
The origin of magnetoresistance in bipolar organic materials is the influence of magnetic field on the dynamics of recombination within localized electron-hole pairs. Recombination from the \(S\) spin-state of the pair is preceded by the beatings between the states \(S\) and \(T_{0}\). Period of the beating is set by the random hyperfine field. For the case when recombination time from \(S\) is shorter than the period, we demonstrate that a _weak_ resonant ac drive, which couples \(T_{0}\) to \(T_{+}\) and \(T_{-}\), affects the recombination dynamics dramatically and, thus, the current. A distinctive characteristic of the effect is that the current versus the drive amplitude exhibits a _maximum_.
## I Introduction
In the pioneering paper Ref. [1] it was first proposed that sensitivity of the resistance of certain organic materials to a weak magnetic field is due to spin-dependent recombination within polaron pairs. Roughly speaking, while formation of a pair in each of the four possible spin states \(S\), \(T_{0}\), \(T_{+}\), and \(T_{-}\), occurs with equal probabilities, they recombine predominantly from \(S\). As a result, the \(S-T\) dynamics (\(S-T\) beatings) affects the net recombination rate, which, in turn, determines the current. On the other hand, the \(S-T\) dynamics (see the classical paper Ref. [2]) is governed by the ratio of the applied magnetic field to the hyperfine field. If both pair-partners have the same charge, recombination should be replaced by the bipolaron formation.[3] In the later studies (see e.g. Refs. [4; 5; 6; 7; 8; 9]) the scenario of Ref. [1] was confirmed experimentally. Conclusive experimental evidence that hyperfine magnetic fields play a crucial role in the magnetic field response of organic semiconductors was reported in Ref. [10] on the basis of a comparison of the data on hydrogenated (with higher hyperfine fields) and deuterated (with lower hyperfine fields) samples.
It is clear that subjecting a pair to a resonant ac drive can flip the spin of one of the pair-partners and, thus, affect the \(S-T\) beatings. This idea was proposed in Refs. [11; 12] and confirmed experimentally in Refs. [13; 14; 15; 14]. A peculiar feature of the analytical theory of Ref. [12] is that it predicted a non-monotonic dependence of magnetoresistance on the drive amplitude, \(B_{1}\). This behavior reflects a delicate balance between two processes. Firstly, without drive, the only triplet state coupled to \(S\) is \(T_{0}\). In other words, the pair cannot recombine from \(T_{+}\) or \(T_{-}\), which we call the trapping configurations. Weak drive couples \(T_{+}\) and \(T_{-}\) to \(T_{0}\), thus, effectively coupling them to \(S\) and allowing recombination from these states. This leads to the enhancement of current. On the other hand, strong drive leads to the formation of a new mode of spin dynamics[12], namely, \(T_{+}-T_{-}\). This mode is orthogonal to \(T_{0}\), and, thus, it is decoupled from \(S\). The possibility of being trapped in \(T_{+}-T_{-}\) leads to the slowing of recombination and, correspondingly, to the decrease of current upon increasing the drive.
The theory of Ref. [12] was based on the assumption that the \(S-T\) asymmetry of the pair is strong. Typical value of this asymmetry is \(\gamma b_{0}\), where \(\gamma\) is the gyromagnetic ratio, while \(b_{0}\) is the magnitude of the hyperfine field. The criterion of a strong asymmetry reads \(\gamma b_{0}\tau\gg 1\), where \(\tau\) is the recombination time from \(S\). Physically, this criterion implies that the pair undergoes many, \(\sim\gamma b_{0}\tau,\quad S-T_{0}\) beatings before it recombines. In the opposite limit, \(\gamma b_{0}\tau\ll 1\), there are no \(S-T_{0}\) beatings. Instead, the pair recombines almost instantly after it finds itself in \(S\). In this limit it is not clear whether the ac drive can affect the current. It is this limit that is studied in the present paper. We employ the same maximally simplified transport model as in Refs. [8; 9].
Figure 1: (Color online) (a) Illustration of the transport model. Current passage through a bipolar device involves recombination of the localized electron (red) and localized hole (blue), which are assembled on the neighboring sites; (b) Spin dynamics of a resonantly driven pair preceding recombination from \(S\). The dynamics involves \(S-T_{0}\) beatings and the Rabi oscillations between \(T_{0}\) and two other triplet components.
|
2306.09338
|
Understanding Optimization of Deep Learning via Jacobian Matrix and
Lipschitz Constant
|
This article provides a comprehensive understanding of optimization in deep
learning, with a primary focus on the challenges of gradient vanishing and
gradient exploding, which normally lead to diminished model representational
ability and training instability, respectively. We analyze these two challenges
through several strategic measures, including the improvement of gradient flow
and the imposition of constraints on a network's Lipschitz constant. To help
understand the current optimization methodologies, we categorize them into two
classes: explicit optimization and implicit optimization. Explicit optimization
methods involve direct manipulation of optimizer parameters, including weight,
gradient, learning rate, and weight decay. Implicit optimization methods, by
contrast, focus on improving the overall landscape of a network by enhancing
its modules, such as residual shortcuts, normalization methods, attention
mechanisms, and activations. In this article, we provide an in-depth analysis
of these two optimization classes and undertake a thorough examination of the
Jacobian matrices and the Lipschitz constants of many widely used deep learning
modules, highlighting existing issues as well as potential improvements.
Moreover, we also conduct a series of analytical experiments to substantiate
our theoretical discussions. This article does not aim to propose a new
optimizer or network. Rather, our intention is to present a comprehensive
understanding of optimization in deep learning. We hope that this article will
assist readers in gaining a deeper insight in this field and encourages the
development of more robust, efficient, and high-performing models.
|
Xianbiao Qi, Jianan Wang, Lei Zhang
|
2023-06-15T17:59:27Z
|
http://arxiv.org/abs/2306.09338v3
|
# Understanding Optimization of Deep Learning
###### Abstract
This article provides a comprehensive understanding of optimization in deep learning, with a primary focus on the challenges of gradient vanishing and gradient exploding, which normally lead to diminished model representational ability and training instability, respectively. We analyze these two challenges through several strategic measures, including the improvement of gradient flow and the imposition of constraints on a network's Lipschitz constant. To help understand the current optimization methodologies, we categorize them into two classes: explicit optimization and implicit optimization. Explicit optimization methods involve direct manipulation of optimizer parameters, including weight, gradient, learning rate, and weight decay. Implicit optimization methods, by contrast, focus on improving the overall landscape of a network by enhancing its modules, such as residual shortcuts, normalization methods, attention mechanisms, and activations. In this article, we provide an in-depth analysis of these two optimization classes and undertake a thorough examination of the Jacobian matrices and the Lipschitz constants of many widely used deep learning modules, highlighting existing issues as well as potential improvements. Moreover, we also conduct a series of analytical experiments to substantiate our theoretical discussions. This article does not aim to propose a new optimizer or network. Rather, our intention is to present a comprehensive understanding of optimization in deep learning. We hope that this article will assist readers in gaining a deeper insight in this field and encourages the development of more robust, efficient, and high-performing models.
###### Contents
* 1 Introduction
* 1.1 Outline
* 1.2 Notation
* 2 Foundations on Optimization
* 2.1 Lipschitz Continuity
* 2.2 Lipschitz Gradient Continuity
* 2.3 Lipschitz Hessian Continuity
* 3 Foundations on Deep Learning
* 3.1 Basic Modules in Deep Learning
* 3.1.1 Linear Layer
* 3.1.2 Convolution
* 3.1.3 Normalization
* 3.1.4 Self-attention
* 3.1.5 Residual Shortcut
* 8.4 Sensitivity Analysis of Output with Respect to Different Parameters
* 8.5 Sensitivity Analysis of Different Norms
* 9 Discussion
* 9.1 The Difficulty of Training Large Models
* 9.2 Open Questions
* 10 Conclusion
* A Appendix
* A.1 List of Notations
* A.2 Lipschitz Constants of Some Modules
## 1 Introduction
Deep learning has revolutionized a myriad of industries and disciplines, extending the boundaries of machine learning capabilities. The sectors transformed by this technology include computer vision (CV) (He et al., 2016; Dosovitskiy et al., 2020; Carion et al., 2020; Liu et al., 2021, 2022), natural language processing (NLP) (Radford et al., 2018, 2019; Brown et al., 2020; Chowdhery et al., 2022; Zhang et al., 2022; Hoffmann et al., 2022; Wei et al., 2022; Touvron et al., 2023), multi-modal understanding and generation (Radford et al., 2021; Ramesh et al., 2021, 2022; Saharia et al., 2022; Rombach et al., 2022), and others (Silver et al., 2016; Jumper et al., 2020). Despite these remarkable achievements, the mastery of deep learning (Heinonen, 2005; Bengio et al., 2021) still presents a series of unique challenges. This article aims to shed light on one such critical and intricate aspect: the optimization of deep learning models.
Optimization in deep learning (Bottou et al., 2018; Goodfellow et al., 2016; Sun, 2019) is a multifaceted endeavor. It involves tuning the parameters of a model through back-propagation (Rumelhart et al., 1986; LeCun et al., 1989, 1998) in an effort to minimize the discrepancy between the model's predictions and the actual data. However, the process of optimization is not straightforward. It constitutes a journey through a high-dimensional and often non-convex landscape, filled with numerous local minima and saddle points. Navigating this landscape introduces its own set of challenges. The two most notable challenges are gradient vanishing and gradient exploding.
Figure 1: An optimization overview.
The gradient vanishing problem (Glorot and Bengio, 2010; He et al., 2016, 2016) refers to a phenomenon where gradients shrink exponentially as they are propagated backwards through the layers of the network during training. This issue leads to the early layers of the network being updated slowly, resulting in a network with diminished representational ability as the early layers are unable to learn complex, meaningful representations of the input data.
On the other hand, the gradient exploding problem (Pascanu et al., 2013; Liu et al., 2020; Wang et al., 2022) is characterized by the exponential growth of gradients during back-propagation. This issue often leads to unstable training as the model's parameters undergo large, volatile updates. Such instability can prompt a variety of issues, ranging from wildly oscillating loss values to, in extreme cases, the model's failure to converge.
Despite these challenges, numerous strategies and techniques exist to tackle both issues. To alleviate the gradient vanishing problem, the emphasis is typically on promoting improved gradient flow through the network. This can be achieved through various means, such as the implementation of skip or residual connections, a careful initialization of weights, or the use of non-saturating activation functions. On the other hand, to counteract the gradient exploding problem, a common approach is to constrain the Lipschitz constant of the network. The Lipschitz constant serves as a measure of the network's sensitivity to changes in its inputs. By controlling this constant, we can constrain the growth of the gradients, thereby stabilizing the training process.
However, there remains a significant gap in the theoretical understanding of these methods. This article aims to bridge this gap. We categorize existing optimization methods into two primary facets: explicit and implicit optimization. Explicit optimization methods directly act upon optimizer parameters, which include weight, gradient, learning rate, and weight decay. Implicit optimization methods, on the other hand, focus on refining network modules to enhance the network's optimization landscape. These methods encompass techniques such as residual shortcuts, normalization methods, activations and attention mechanisms. In this article, we provide an in-depth analysis of these two classes of optimization. Specifically, we conduct a detailed examination of the gradient or Jacobian and the Lipschitz constant of the widely-used deep learning modules, pinpoint potential issues, and identify existing and prospective improvements. Figure 1 illustrates a general overview of our understanding of optimization in deep learning. We would like to highlight that the problems of gradient vanishing and gradient exploding both can be attributed to the Jacobian matrix of each module. One conclusion from Figure 1 is that _Jacobian matrices determine the back-propagation process, and Lipschitz constant, that can be calculated according to the Jacobian matrices, affects representation ability and training stability of a network. Therefore, to understand the optimization of deep learning in depth, we need to analyze Jacobian matrix and Lipschitz constant of each module in detail._ In this paper, we will also provide theoretical analysis of some existing skills. In addition to the theoretical analysis, we perform analytical experiments to verify our theoretical assertions.
Below, we briefly summarize some of our analyses and observations:
* A convolutional network comprises homogeneous blocks 1, such as Convolutions and Linear Layers. In contrast, the Transformer network includes heterogeneous blocks like Multi-head self-attention and Feed-forward Networks (FFN). These heterogeneous blocks have distinct Jacobian matrices and differing Lipschitz constants, adding complexity to the optimization of Transformer models. Footnote 1: Homogeneous operators are modules whose Jacobian matrices take a similar form, such as the linear layer and convolution, which are both first-order linear operators. Heterogeneous operators are modules with very different Jacobian properties, such as the linear layer and self-attention: the former is a linear operator, whereas the latter is a high-order nonlinear operator.
* The Adam optimizer demonstrates robustness to variations of Lipschitz constants during the training process, as it employs a normalized update value (i.e., the element-wise division between the first-order momentum and the square root of the second-order momentum). Conversely, the Stochastic Gradient Descent (SGD) optimizer is highly sensitive to changes in the Lipschitz constant of the network. The AdamW optimizer rectifies incorrect weight decay, thereby improving its performance.
* The initialization of a network should be mindful of Lipschitz constant, particularly for larger models. To achieve this, we recommend the use of Lipschitz-aware initialization.
* Residual shortcut, despite its advantage in mitigating the gradient vanishing problem in the backward process, smooths the landscape of the network.
* Normalization is a useful method to ensure that a network adheres to the forward optimization principle, also contributing to a smoother network landscape.
* Weight decay and DropPath can reduce the Lipschitz constant of the network, thereby decreasing the likelihood of unstable training. Essentially, they function as contraction mappings.
* Dot-Product attention and normalization techniques, despite their strong representation capabilities, exhibit large Lipschitz constants. Consequently, these methods are more likely to trigger unstable training during the backward process compared to convolution, fully connected (FC) layers, and activation functions.
* Instances of unstable training often coincide with rapid increases in Lipschitz constant of a network. This phenomenon is typically indicated by a swift increase in the top eigenvalues of the weight matrices.
Even though the research community has gained a deeper understanding of optimization in deep learning, numerous open questions still remain, such as:
* What are the properties of weight updates in optimizers? Is the update function a contraction mapping or an expansion mapping? If it is an expansion mapping, what is the expansion factor?
* Is it possible to discover an automatic setup and adjustment strategy for the learning rate and weight decay according to a simulated Lipschitz constant of the network?
* What is the value and necessity of warmup? Why is it so important, especially in large models?
* What are the implications of constrained optimization methods in deep learning?
* What is the relationship between representation ability and training stability?
There are many additional open problems that warrant in-depth exploration, including:
* Is a second-order optimization method necessary and more powerful?
* How is Lipschitz smoothness considered in deep learning? Smoothness is usually a fundamental assumption in numerical optimization.
* What is the comparison of generalization ability between non-smooth and smooth functions?
We will not delve into these open questions in this article due to limited space and our still-incomplete understanding of them, but they certainly deserve serious consideration in future studies. This article does not aim to provide a survey of optimization methods. Instead, our objective is to develop a simple, thorough, and comprehensive understanding of optimization in deep learning. For a survey of optimization methods, readers are referred to Sun (2019); Sun et al. (2019); Li et al. (2020).
### Outline
The structure of this article is as follows: In Section 2, we introduce fundamental optimization concepts, including Lipschitz continuity, contraction mapping, Lipschitz gradient and Hessian continuity. In Section 3, we review the essential modules in deep learning, which include linear layer, convolutions, normalization, residual shortcut, self-attention, activation, and feed-forward network. Section 4 provides an overview of deep learning optimization, covering both forward and backward optimization perspectives and introducing our general optimization principles for deep learning. Section 5 discusses methods for implicitly optimizing the network, while Section 6 addresses practical considerations from an optimizer's perspective. Here, we also provide both theoretical and practical remarks about each factor. In Section 7, based on our analysis and discussions from previous sections, we compile guidelines for deep learning optimization. We then conduct experiments in Section 8 to validate our theoretical analysis. In Section 9, we discuss the difficulties of optimizing large models and other existing problems in deep learning optimization. Finally, in Section 10, we draw a conclusion for this article.
### Notation
Before we delve into specific algorithms, let us provide a brief introduction to the notation system utilized throughout this article. We primarily follow the notation system of the renowned deep learning book 2(Goodfellow et al., 2016).
We use \(\mathbf{x}\) to denote a column vector with \(\mathbf{x}\in\mathbb{R}^{D}\), and \(\mathbf{X}\) to represent a set with \(N\) points with \(\mathbf{X}\in\mathbb{R}^{D\times N}\). \(\mathbf{W}\) is a weight matrix with \(\mathbf{W}\in\mathbb{R}^{D_{1}\times D}\), and \(\mathbf{y}=\mathbf{W}\mathbf{x}\) with \(\mathbf{y}\) being a column vector in \(\mathbb{R}^{D_{1}}\). It should be noted that our notation of self-attention in this article differs from that in (Qi et al., 2023), for which we apologize for any inconsistency. As an example, here, \(\mathbf{Y}=(\mathbf{W}_{1}\mathbf{X})^{\top}(\mathbf{W}_{2}\mathbf{X})\), where both \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\in\mathbb{R}^{D_{1}\times D}\), \(\mathbf{X}\in\mathbb{R}^{D\times N}\), and \(\mathbf{Y}\) is a tensor in \(\mathbb{R}^{N\times N}\).
In terms of matrix calculus, we use the denominator layout 3. Therefore, the Jacobian matrix of \(\mathbf{y}\) with respect to \(\mathbf{x}\) is represented as \(\mathbf{J}_{\mathbf{y}}(\mathbf{x})=\mathbf{W}^{\top}\). Consequently, we have the following equations:
Footnote 3: [https://en.wikipedia.org/wiki/Matrix_calculus](https://en.wikipedia.org/wiki/Matrix_calculus)
\[\frac{\partial\mathbf{w}^{\top}\mathbf{x}}{\partial\mathbf{x}}=\mathbf{w},\ \ \frac{\partial\mathbf{W}\mathbf{x}}{ \partial\mathbf{x}}=\mathbf{W}^{\top},\frac{\partial\mathbf{x}^{\top}\mathbf{W}\mathbf{x}}{ \partial\mathbf{x}}=\left(\mathbf{W}+\mathbf{W}^{\top}\right)\mathbf{x}. \tag{1}\]
Suppose we have a chain function \(\mathbf{o}=f(g(h(\mathbf{x})))\), where \(\mathbf{y}=h(\mathbf{x})\), \(\mathbf{z}=g(\mathbf{y})\), and \(\mathbf{o}=f(\mathbf{z})\). Then, utilizing the denominator layout, the Jacobian matrix of \(\mathbf{o}\) with respect to \(\mathbf{x}\) according to the chain rule is:
\[\mathbf{J}_{\mathbf{o}}(\mathbf{x})=\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\frac{\partial \mathbf{z}}{\partial\mathbf{y}}\frac{\partial\mathbf{o}}{\partial\mathbf{z}}. \tag{2}\]
More knowledge about calculus can be found in Petersen et al. (2008). A checklist of notations can be found in the Appendix A.1.
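To make the denominator-layout convention concrete, the following sketch (our own illustration; the dimensions and variable names are arbitrary) verifies with finite differences that, for \(\mathbf{y}=\mathbf{W}\mathbf{x}\), the Jacobian \(\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\) written in denominator layout is exactly \(\mathbf{W}^{\top}\), i.e., a \(D\times D_{1}\) matrix with entries \(\partial y_{j}/\partial x_{i}\).

```python
import numpy as np

rng = np.random.default_rng(0)
D, D1 = 4, 3
W = rng.standard_normal((D1, D))
x = rng.standard_normal(D)

def f(x):
    return W @ x

# Denominator layout: J[i, j] = d y_j / d x_i, so J has shape (D, D1).
eps = 1e-6
J = np.zeros((D, D1))
for i in range(D):
    dx = np.zeros(D)
    dx[i] = eps
    J[i, :] = (f(x + dx) - f(x - dx)) / (2 * eps)

print(np.allclose(J, W.T, atol=1e-5))   # True: the Jacobian of Wx is W^T
```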
## 2 Foundations on Optimization
Lipschitz continuity4, in mathematical analysis, represents a strong form of uniform continuity for functions. To probe the characteristics of functions, it is beneficial to understand their Lipschitz properties, along with those of their derivatives. Considering that a neural network is a specific function composed of multiple layers of simple functions, it's critical to comprehend the basic concepts of Lipschitz continuity to grasp deep neural networks (also referred as deep learning).
Footnote 4: [https://en.wikipedia.org/wiki/Lipschitz_continuity](https://en.wikipedia.org/wiki/Lipschitz_continuity)
### Lipschitz Continuity
**Definition 1** (Lipschitz Continuity).: _A function \(f(\mathbf{x}):\mathbb{R}^{D}\to\mathbb{R}^{D_{1}}\) is said to be Lipschitz continuous (or \(K_{0}\)-Lipschitz) under a chosen \(p\)-norm \(\|\cdot\|_{p}\) in the variable \(\mathbf{x}\) if there exists a constant \(K_{0}\) such that for all \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) in the domain of \(f\), the following inequality is always satisfied,_
\[\|f(\mathbf{x}_{1})-f(\mathbf{x}_{2})\|\leq K_{0}\|\mathbf{x}_{1}-\mathbf{x}_{2}\|.\]
For ease of understanding, we can default to considering the norm \(\|\cdot\|\) as the Euclidean norm. We will specify if we use different norms.
Lipschitz continuity provides a bound on the rate at which a function can change and ensures that the function does not exhibit any extreme variations in value.
**Definition 2** (Local Lipschitz Continuity).: _Given a point \(\mathbf{x}\), and a function \(f(\mathbf{x})\) : \(\mathbb{R}^{D}\to\mathbb{R}^{D_{1}}\). \(f(\mathbf{x})\) is said to be local Lipschitz continuous at point \(\mathbf{x}\) if there exists a constant \(K_{0}\) such that for all points \(\mathbf{x}+\mathbf{\epsilon}\), the following inequality is always satisfied,_
\[\|f(\mathbf{x}+\mathbf{\epsilon})-f(\mathbf{x})\|\leq K_{0}\|\mathbf{\epsilon}\|,\]
where \(\|\mathbf{\epsilon}\|\leq\tau\).
Lipschitz constant at a point \(\mathbf{x}\) characterizes the curvature of the network at current point \(\mathbf{x}\). Lipschitz constant of the whole network depicts the optimization landscape of the network.
**Lemma 1** (First-order Condition for Lipschitz Continuity).: _A continuous and differentiable function \(f\) is \(K_{0}\)-Lipschitz continuous if and only if the norm of its gradient is bounded by \(K_{0}\),_
\[\|\nabla f(\mathbf{x})\|\leq K_{0}. \tag{3}\]
It should be noted that the categories of "continuously differentiable" and "Lipschitz continuous" have the following relationship:
\[\text{Continuously differentiable}\subset\text{Lipschitz continuous}. \tag{4}\]
This relationship indicates that every continuously differentiable function is also Lipschitz continuous, but the reverse is not necessarily true. In other words, the set of continuously differentiable functions is a subset of Lipschitz continuous functions. For example, the ReLU function is not continuously differentiable but is Lipschitz continuous. The condition of being continuously differentiable is stricter than being Lipschitz continuous, as it requires the function to have a limited gradient or Jacobian.
**Example 1**.: _Consider that \(f(\mathbf{x})=c\), where \(c\) is a constant, the Lipschitz constant of \(f(\mathbf{x})\) is 0. If \(f(\mathbf{x})=\mathbf{a}^{\top}\mathbf{x}+1\), its Lipschitz constant can be computed as \(\|\mathbf{a}\|\). Now, if \(f(\mathbf{x})=\mathbf{x}^{\top}\mathbf{x}+2\), where \(\mathbf{x}\) is a column vector and \(\mathbf{x}\in\mathbb{R}^{D}\), the Lipschitz constant of the function \(f(\mathbf{x})\) becomes \(\infty\) and thus, \(f(\mathbf{x})\) is not Lipschitz continuous._
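To make Example 1 concrete, the following sketch (an illustrative check of ours, not taken from the text) samples the ratio \(|f(\mathbf{x}_{1})-f(\mathbf{x}_{2})|/\|\mathbf{x}_{1}-\mathbf{x}_{2}\|\) over random pairs of points: for the affine map the ratio stays below \(\|\mathbf{a}\|\), whereas for \(f(\mathbf{x})=\mathbf{x}^{\top}\mathbf{x}+2\) it keeps growing as the sampling scale increases, consistent with the absence of a finite Lipschitz constant.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 5
a = rng.standard_normal(D)

f_affine = lambda x: a @ x + 1.0   # Lipschitz constant ||a||
f_quad = lambda x: x @ x + 2.0     # not globally Lipschitz

def max_ratio(f, scale, n=5000):
    """Largest observed |f(x1) - f(x2)| / ||x1 - x2|| over random pairs of the given scale."""
    best = 0.0
    for _ in range(n):
        x1 = scale * rng.standard_normal(D)
        x2 = scale * rng.standard_normal(D)
        best = max(best, abs(f(x1) - f(x2)) / np.linalg.norm(x1 - x2))
    return best

print("||a||            :", np.linalg.norm(a))
print("affine max ratio :", max_ratio(f_affine, 1.0))   # bounded by ||a||
for s in (1.0, 10.0, 100.0):
    print(f"quadratic ratio at scale {s:5.1f}:", max_ratio(f_quad, s))  # grows with the scale
```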
**Definition 3** (Contraction Mapping).: _Let \((\mathbf{X},f)\) be a metric space. A mapping \(\mathcal{M}:\mathbf{X}\to\mathbf{X}\) is called a contraction mapping if there exists a constant \(K_{0}\), with \(0\leq K_{0}<1\), such that_
\[f(\mathcal{M}(\mathbf{x}),\mathcal{M}(\mathbf{y}))\leq K_{0}f(\mathbf{x},\mathbf{y}), \tag{5}\]
_for all \(\mathbf{x},\mathbf{y}\in\mathbf{X}\)._
### Lipschitz Gradient Continuity
**Definition 4** (Lipschitz Gradient Continuity).: _A function \(f(\mathbf{x})\) : \(\mathbb{R}^{D}\to\mathbb{R}^{D_{1}}\) is said to have a Lipschitz continuous gradient (or \(K_{1}\)-Lipschitz) under a choice of \(p\)-norm \(\|\cdot\|_{p}\) in the variable \(\mathbf{x}\) if there exists a constant \(K_{1}\) such that for all \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) in the domain of \(f\), the following inequality is always satisfied,_
\[\|\nabla f(\mathbf{x}_{1})-\nabla f(\mathbf{x}_{2})\|\leq K_{1}\|\mathbf{x}_{1}-\mathbf{x}_{2}\|. \tag{6}\]
Lipschitz gradient continuity provides a bound on the rate at which the gradient of the function can change, ensuring that the function's slope does not change too abruptly.
For Lipschitz gradient continuity, we have the following lemma,
**Lemma 2** (Smoothness Lemma).: _A continuous and twice differentiable function \(f\) is \(K_{1}\)-smooth if and only if_
\[\|\nabla^{2}f(\mathbf{x})\|<K_{1}. \tag{7}\]
**Remark 2.1: Lipschitz Continuity and Lipschitz Constant**
1. Lipschitz continuity is more general than continuously differentiable.
2. Local Lipschitz continuity and its Lipschitz constant at a point \(\mathbf{x}\) characterize the curvature of the network at the current point. The Lipschitz constant of the whole network depicts the optimization landscape of the network.
3. Analyzing Lipschitz constant of each module and even the whole network is an important and effective way to understand the properties of the network.
### Lipschitz Hessian Continuity
Furthermore, we can define Lipschitz Hessian continuity as:
**Definition 5** (Lipschitz Hessian Continuity).: _A function \(f(\mathbf{x})\) : \(\mathbb{R}^{D}\to\mathbb{R}^{D_{1}}\) is said to have Lipschitz Hessian continuity (or \(K_{2}\)-Lipschitz) under a chosen \(p\)-norm \(\|\cdot\|_{p}\) in the variable \(\mathbf{x}\) if there exists a constant \(K_{2}\) such that for all \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) in the domain of \(f\), the following inequality is always satisfied:_
\[\|\nabla^{2}f(\mathbf{x}_{1})-\nabla^{2}f(\mathbf{x}_{2})\|\leq K_{2}\|\mathbf{x}_{1}-\mathbf{ x}_{2}\|. \tag{8}\]
Lipschitz Hessian continuity provides a bound on the rate at which the curvature of the function can change, ensuring that a function's second-order derivatives do not change too abruptly.
In summary, Lipschitz continuity, Lipschitz gradient continuity, and Lipschitz Hessian continuity provide bounds on the rates of change of a function, its first-order derivatives, and its second-order derivatives, respectively. These properties help understand the behavior of a function and are useful in optimization and numerical analysis problems. In Remark 2.1, we have built several remarks about Lipschitz continuity and Lipschitz constant.
Interested readers can refer to (Nesterov, 2003; Bottou et al., 2018; Bubeck et al., 2015; Wright & Ma, 2022) for a more detailed introduction. For a deeper understanding, the monograph on Lipschitz functions (Heinonen, 2005) is recommended.
## 3 Foundations on Deep Learning
In this section, we will briefly introduce the mathematical definitions of some popular modules (also called layers) in deep learning. We will cover more discussions about certain improvements and their underlying mathematical principles in Section 5.
### Basic Modules in Deep Learning
#### 3.1.1 Linear Layer
Linear projection (also called a linear layer in deep learning) is the most fundamental module in deep learning. Its definition is as follows:
\[\boldsymbol{y}=f(\boldsymbol{x};\boldsymbol{W},\boldsymbol{b})=\boldsymbol{W} \boldsymbol{x}+\boldsymbol{b}. \tag{9}\]
The nature of linear projection is a linear feature transformation, which mathematically corresponds to a coordinate system transformation.
The Jacobian matrix 5 of \(\boldsymbol{y}\) with respect to \(\boldsymbol{x}\) can be calculated as:
Footnote 5: [https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant](https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant)
\[\frac{\partial\boldsymbol{y}}{\partial\boldsymbol{x}}=\boldsymbol{W}^{\top}. \tag{10}\]
For an affine transformation \(f\left(\boldsymbol{x};\boldsymbol{W},\boldsymbol{b}\right)=\boldsymbol{W} \boldsymbol{x}+\boldsymbol{b}\), its Lipschitz constant is,
\[\mathrm{Lip}_{p}(f(\boldsymbol{x};\boldsymbol{W},\boldsymbol{b}))=\sup_{\|\boldsymbol{x}\|_{p}=1}\|\boldsymbol{W}\boldsymbol{x}\|_{p}=\left\{\begin{array}{ll}\sigma_{\max}(\boldsymbol{W}),&\text{if }p=2\\ \max_{i}\sum_{j}|W_{ij}|,&\text{if }p=\infty\end{array}\right. \tag{11}\]
where \(\sigma_{\max}(\boldsymbol{W})\) is the largest singular value of \(\boldsymbol{W}\); the bias \(\boldsymbol{b}\) does not affect the Lipschitz constant.
Let \(\|\boldsymbol{x}\|_{2}=1\). If \(\boldsymbol{x}\) lies in the direction of the right singular vector of \(\boldsymbol{W}\) associated with its largest singular value, then \(\|\boldsymbol{W}\boldsymbol{x}\|=\sigma_{\max}(\boldsymbol{W})\). On the other hand, if \(\boldsymbol{x}\) lies in the direction of the right singular vector associated with the smallest singular value, then \(\|\boldsymbol{W}\boldsymbol{x}\|=\sigma_{\min}(\boldsymbol{W})\).
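This relation between the 2-norm Lipschitz constant and the largest singular value can be checked numerically. The sketch below (our illustration; dimensions and names are arbitrary) compares \(\sigma_{\max}(\boldsymbol{W})\) obtained from the SVD with sampled ratios \(\|f(\boldsymbol{x}_{1})-f(\boldsymbol{x}_{2})\|/\|\boldsymbol{x}_{1}-\boldsymbol{x}_{2}\|\), and shows that the bound is attained along the top right singular vector.

```python
import numpy as np

rng = np.random.default_rng(0)
D, D1 = 6, 4
W = rng.standard_normal((D1, D))
b = rng.standard_normal(D1)
f = lambda x: W @ x + b

# The 2-norm Lipschitz constant of an affine map is the spectral norm of W.
sigma_max = np.linalg.svd(W, compute_uv=False)[0]

ratios = []
for _ in range(5000):
    x1, x2 = rng.standard_normal(D), rng.standard_normal(D)
    ratios.append(np.linalg.norm(f(x1) - f(x2)) / np.linalg.norm(x1 - x2))
print("sigma_max:", sigma_max, " largest sampled ratio:", max(ratios))

# The bound is reached along the top right singular vector v_top.
x0 = rng.standard_normal(D)
v_top = np.linalg.svd(W)[2][0]
print("ratio along v_top:", np.linalg.norm(f(x0 + v_top) - f(x0)))   # equals sigma_max
```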
The forward process of a typical neural network propagates computation as \(\boldsymbol{y}^{l+1}=\boldsymbol{W}^{l+1}\boldsymbol{x}^{l}+\boldsymbol{b}^ {l+1}\), where \(\boldsymbol{x}^{l}\) and \(\boldsymbol{W}^{l+1}\) are the input and the weight matrix of layer \(l+1\). To back-propagate the network loss \(\mathcal{L}\), we have
\[\frac{\partial\mathcal{L}}{\partial\boldsymbol{x}^{l}}=\left(\boldsymbol{W}^{ l+1}\right)^{\top}\frac{\partial\mathcal{L}}{\partial\boldsymbol{y}^{l+1}}, \quad\frac{\partial\mathcal{L}}{\partial\boldsymbol{W}^{l+1}}=\frac{\partial \mathcal{L}}{\partial\boldsymbol{y}^{l+1}}\left(\boldsymbol{x}^{l}\right)^{ \top},\quad\frac{\partial\mathcal{L}}{\partial\boldsymbol{b}^{l+1}}=\frac{ \partial\mathcal{L}}{\partial\boldsymbol{y}^{l+1}}.\]
Since deep learning is optimized using a stochastic optimization mechanism, the updated value of \(\boldsymbol{W}\) will affect the back-propagation process of \(\boldsymbol{x}\) in the next training step. Similarly, the value of \(\boldsymbol{x}\) will influence the update of \(\boldsymbol{W}\).
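As a sanity check of these back-propagation formulas, the short sketch below (our own example; the squared-error loss and all names are chosen purely for illustration) confirms with finite differences that \(\frac{\partial\mathcal{L}}{\partial\boldsymbol{W}}=\frac{\partial\mathcal{L}}{\partial\boldsymbol{y}}\left(\boldsymbol{x}\right)^{\top}\).

```python
import numpy as np

rng = np.random.default_rng(0)
D, D1 = 5, 3
W, b = rng.standard_normal((D1, D)), rng.standard_normal(D1)
x, t = rng.standard_normal(D), rng.standard_normal(D1)   # input and an arbitrary target

loss = lambda W, b, x: 0.5 * np.sum((W @ x + b - t) ** 2)

# Analytic gradients from the formulas above.
dL_dy = W @ x + b - t
dL_dx = W.T @ dL_dy          # W^T dL/dy
dL_dW = np.outer(dL_dy, x)   # (dL/dy) x^T
dL_db = dL_dy

# Finite-difference check of dL/dW.
h = 1e-6
dL_dW_fd = np.zeros_like(W)
for i in range(D1):
    for j in range(D):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += h
        Wm[i, j] -= h
        dL_dW_fd[i, j] = (loss(Wp, b, x) - loss(Wm, b, x)) / (2 * h)
print(np.allclose(dL_dW, dL_dW_fd, atol=1e-5))   # True
```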
#### 3.1.2 Convolution
Convolution (LeCun et al., 1998; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016) is a widely used and effective method in computer vision, with the concept of local receptive fields (LRF) being central to its effectiveness.
In convolutional neural networks (CNN), a LRF refers to a region in the input data (such as a small region in an image) that is connected to a neuron in a convolutional layer. This approach allows the network to focus on local features of the input data, reducing computational complexity and making the network more robust to variations in the input.
One advantage of using LRF is that it significantly reduces the number of parameters in the model. Instead of connecting each neuron to every pixel in the input image, each neuron is only connected to a small region of an image, resulting in a more manageable number of weights to learn. Another advantage of LRF is its ability to learn features in a hierarchical manner. When applied to image data, convolutional layers with local receptive fields can learn to recognize local features like edges and corners in early layers, which can then be combined in later layers to recognize higher-level features such as shapes and objects.
Suppose we have an input tensor \(\mathbf{X}\in\mathbb{R}^{W\times\bar{H}\times C}\), with width \(\bar{W}\), height \(\bar{H}\), and channel \(C\), and a kernel size of \(K\times K\). The 2D convolution operation with stride 1 is defined as:
\[Y_{i,j,o}=\sum_{m=0}^{K-1}\sum_{n=0}^{K-1}\sum_{c=0}^{C-1}X_{i+m,j+n,c}\cdot W_{m,n,c,o}, \tag{12}\]
where \(Y\) is the output tensor, and \(i\), \(j\), and \(o\) are the row, column, and output channel indices of the output tensor \(Y\). \(\mathbf{W}\in\mathbb{R}^{K\times K\times C\times O}\) is a 4d tensor, where \(O\) is the number of output channels. This operation is carried out for \(i=0,1,2,...,\bar{W}-K\) and \(j=0,1,2,...,\bar{H}-K\), so that the kernel \(\mathbf{W}\) always fits inside the input tensor \(\mathbf{X}\); the inner sum over \(c=0,1,2,...,C-1\) runs over the input channels.
To simplify the representation, we can use the Einstein notation 6. Using this notation, we can rewrite the equation as:
Footnote 6: [https://en.wikipedia.org/wiki/Einstein_notation](https://en.wikipedia.org/wiki/Einstein_notation)
\[\mathbf{Y}^{O}=\mathbf{W}^{O}_{K,K,C}\mathbf{X}^{K,K,C}. \tag{13}\]
In essence, for each location in the convolution, it corresponds to a linear projection where the parameter weights are shared among all locations.
Let us discuss the gradients of \(\frac{\partial\mathcal{L}}{\partial\mathbf{W}}\) and \(\frac{\partial\mathcal{L}}{\partial\mathbf{X}}\) respectively. Here, we use \(\mathbf{Y}^{\prime}\) to represent \(\frac{\partial\mathcal{L}}{\partial\mathbf{Y}}\). During the back-propagation process, given \(\mathbf{Y}^{\prime}\), we can calculate the gradients \(\mathbf{W}^{\prime}\) and \(\mathbf{X}^{\prime}\) as follows:
\[\begin{split}\frac{\partial\mathcal{L}}{\partial\mathbf{W}}& =\mathbf{Y}^{\prime}(\mathbf{X}^{K,K,C})^{\top},\\ \frac{\partial\mathcal{L}}{\partial\mathbf{X}}&=(\mathbf{W} ^{O}_{K,K,C})^{\top}\mathbf{Y}^{\prime}.\end{split} \tag{14}\]
Convolution, in essence, is a linear operator that can be applied to multi-dimensional tensors. Hence, we can regard Convolution as an operator homogeneous to the linear layer; here, homogeneous means that Convolution and the linear layer are both first-order linear operators.
In fact, Conv1D can be seen as an equivalence of a linear layer. Additionally, a Conv2D operator can be converted to a matrix multiplication using the im2col 7 operator. In conclusion, Convolution and Linear Layer (also known as Fully-Connected or FC) are homogeneous operators.
Footnote 7: [https://caffe.berkeleyvision.org/tutorial/layers/im2col.html](https://caffe.berkeleyvision.org/tutorial/layers/im2col.html)
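To illustrate the im2col view of convolution mentioned above, the following sketch (our own, restricted to stride 1 and valid padding for brevity) computes the convolution of Eq. (12) once with explicit loops and once as a single matrix multiplication after an im2col rearrangement; the two results coincide.

```python
import numpy as np

def conv2d_direct(X, W):
    """Valid 2D convolution (stride 1) of X with shape (H, W_in, C) and W with shape (K, K, C, O)."""
    H, Win, C = X.shape
    K, _, _, O = W.shape
    Y = np.zeros((H - K + 1, Win - K + 1, O))
    for i in range(H - K + 1):
        for j in range(Win - K + 1):
            for o in range(O):
                Y[i, j, o] = np.sum(X[i:i + K, j:j + K, :] * W[:, :, :, o])
    return Y

def conv2d_im2col(X, W):
    """The same convolution written as one matrix multiplication via im2col."""
    H, Win, C = X.shape
    K, _, _, O = W.shape
    Ho, Wo = H - K + 1, Win - K + 1
    cols = np.stack([X[i:i + K, j:j + K, :].ravel()
                     for i in range(Ho) for j in range(Wo)])   # (Ho * Wo, K*K*C)
    return (cols @ W.reshape(K * K * C, O)).reshape(Ho, Wo, O)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8, 3))
W = rng.standard_normal((3, 3, 3, 4))
print(np.allclose(conv2d_direct(X, W), conv2d_im2col(X, W)))   # True
```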
#### 3.1.3 Normalization
Batch Normalization (Ioffe & Szegedy, 2015) and Layer Normalization (Ba et al., 2016) are widely used techniques in deep learning to improve the training of neural networks.
**Batch Normalization** (BN) (Ioffe & Szegedy, 2015) is primarily employed in CNNs (Ioffe & Szegedy, 2015; He et al., 2016).
Let us consider a mini-batch \(\mathbf{X}\) with a shape of \(D\times N\), where \(D\) represents the feature dimension and \(N\) is the batch size. The definition of BN is as follows:
\[\mathbf{\mu} =\frac{1}{N}\sum_{i=1}^{N}\mathbf{X}_{:,i}\] \[\mathbf{\sigma}^{2} =\frac{1}{N}\sum_{i=1}^{N}\left(\mathbf{X}_{:,i}-\mathbf{\mu}\right) \odot\left(\mathbf{X}_{:,i}-\mathbf{\mu}\right) \tag{15}\] \[\widehat{\mathbf{X}}_{:,i} =\left(\mathbf{X}_{:,i}-\mathbf{\mu}\right)\oslash\sqrt{\mathbf{\sigma}^{2}+\epsilon}\] \[\mathrm{BN}\left(\mathbf{X}_{:,i}\right) =\mathbf{\gamma}\odot\widehat{\mathbf{X}}_{:,i}+\mathbf{\beta},\]
where \(\odot\) and \(\oslash\) represent element-wise multiplication and division respectively, \(\mathbf{\mu}\in\mathbb{R}^{D}\) and \(\mathbf{\sigma}\in\mathbb{R}^{D}\), \(\epsilon\) is a smoothing factor. It's worth noting that we discuss a two-dimensional matrix here, but this can easily be extended to a four-dimensional tensor by reshaping the 4D tensor into a 2D tensor. In the training process, \(\mathbf{\mu}\) and \(\mathbf{\sigma}\) are updated by a moving average. \(\mathbf{\gamma}\) and \(\mathbf{\beta}\) are optimized by a stochastic gradient descent.
To derive the Jacobian matrix, let us consider \(\frac{\partial\widehat{X}_{j,i}}{\partial X_{k,l}}\). When \(k\neq j\), its value is 0.
When \(j=k\), let us consider \(\frac{\partial\widehat{X}_{j,i}}{\partial X_{j,i}}\),
\[\frac{\partial\widehat{X}_{j,i}}{\partial X_{j,i}}=\frac{(1-\frac{1}{N})\sqrt{ \sigma_{j}^{2}+\epsilon}-\frac{(X_{j,i}-\mu_{j})^{2}}{N\sqrt{\sigma_{j}^{2}+ \epsilon}}}{\sigma_{j}^{2}+\epsilon} \tag{16}\]
where \(\sigma_{j}\) is the \(j\)-th dimension of \(\mathbf{\sigma}\).
Further, when \(j=k\) and \(l\neq i\), let us consider \(\frac{\partial\widehat{X}_{j,i}}{\partial X_{j,l}}\),
\[\frac{\partial\widehat{X}_{j,i}}{\partial X_{j,l}}=\frac{(0-\frac{1}{N})\sqrt{\sigma_{j}^{2}+\epsilon}-\frac{(X_{j,i}-\mu_{j})(X_{j,l}-\mu_{j})}{N\sqrt{\sigma_{j}^{2}+\epsilon}}}{\sigma_{j}^{2}+\epsilon} \tag{17}\]
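The entries in Eqs. (16) and (17) can be verified numerically. The sketch below (an illustration of ours; the affine parameters \(\mathbf{\gamma}\) and \(\mathbf{\beta}\) are omitted since they do not enter these derivatives) compares the closed-form expressions with central finite differences on a random mini-batch.

```python
import numpy as np

def bn_hat(X, eps=1e-5):
    """Normalized activations X_hat of a (D, N) mini-batch (gamma and beta omitted)."""
    mu = X.mean(axis=1, keepdims=True)
    var = X.var(axis=1, keepdims=True)
    return (X - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
D, N, eps = 3, 8, 1e-5
X = rng.standard_normal((D, N))
j, i, l = 1, 2, 5                       # feature j, two different samples i and l

mu, var = X[j].mean(), X[j].var()
s = np.sqrt(var + eps)
# Closed-form Jacobian entries, Eq. (16) and Eq. (17).
same = ((1 - 1 / N) * s - (X[j, i] - mu) ** 2 / (N * s)) / (var + eps)
cross = ((0 - 1 / N) * s - (X[j, i] - mu) * (X[j, l] - mu) / (N * s)) / (var + eps)

def fd(k, m, h=1e-6):
    """Finite-difference d X_hat[j, i] / d X[k, m]."""
    Xp, Xm = X.copy(), X.copy()
    Xp[k, m] += h
    Xm[k, m] -= h
    return (bn_hat(Xp, eps)[j, i] - bn_hat(Xm, eps)[j, i]) / (2 * h)

print(np.isclose(same, fd(j, i)), np.isclose(cross, fd(j, l)))   # True True
```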
**Layer Normalization** (LN) (Ba et al., 2016) has broader applications compared to BN. It is widely utilized in various domains such as Transformers (including Vision Transformers (Wu et al., 2021; Liu et al., 2021b; Dosovitskiy et al., 2020)) and Language Models (Touvron et al., 2023; Zhang et al., 2022; Chowdhery et al., 2022; Radford et al., 2018, 2019; Brown et al., 2020)), as well as ConvNeXt (Liu et al., 2022).
To simplify notation, here, let us define \(\mathbf{x}=\mathbf{X}_{:,i}\), where \(\mathbf{x}\) is a column vector. The forward process of LN with a smoothing factor \(\epsilon\) is defined as follows:
\[\mathrm{LN}(\mathbf{x})=\mathbf{\gamma}\odot\mathbf{z}+\mathbf{\beta},\text{ where }\mathbf{z}=\frac{\mathbf{y}}{\sqrt{\left\|\mathbf{y}\right\|_{2}^{2}+\epsilon}}\text{ \ and \ }\mathbf{y}=\left(\mathbf{I}-\frac{1}{D}\mathbf{1}\mathbf{1}^{\top}\right)\mathbf{x}, \tag{18}\]
Where \(D\) is the dimension of the input, \(\mathbf{\gamma}\) and \(\mathbf{\beta}\) are learned parameters, similar to BN, obtained through gradient descent. \(\epsilon\) is a smoothing factor. It is important to note that in LN, there is no need to maintain a moving average of \(\mathbf{\mu}\) and \(\mathbf{\sigma}\).
The Jacobian matrix of the variable \(\mathbf{z}\) with respect to \(\mathbf{x}\) can be calculated as:
\[\mathbf{J_{z}}(\mathbf{x})=\frac{\partial\mathbf{z}}{\partial\mathbf{x}}=\frac{\partial\mathbf{y }}{\partial\mathbf{x}}\frac{\partial\mathbf{z}}{\partial\mathbf{y}}=\frac{1}{\sqrt{\left\| \mathbf{y}\right\|_{2}^{2}+\epsilon}}\left(\mathbf{I}-\frac{1}{D}\mathbf{1}\mathbf{1}^{\top} \right)\left(\mathbf{I}-\frac{\mathbf{y}\mathbf{y}^{\top}}{\left\|\mathbf{y}\right\|_{2}^{2}+ \epsilon}\right). \tag{19}\]
It should be noted that in this article, we consider a version of LayerNorm that incorporates smoothing, whereas LipsFormer (Qi et al., 2023) discusses a non-smoothing LayerNorm. The non-smoothing LN is not Lipschitz continuous, while the smoothing LN is Lipschitz continuous but with a very large Lipschitz constant due to a typically small value of \(\epsilon\).
LN can be applied to a wider range of deep learning problems than BN, regardless of whether they involve variable-length or fixed-length sequences. On the other hand, BN is more suitable for problems with constant sequence lengths or input sizes. For instance, BN cannot be applied to generative language models because the sequence length increases during the generation process, while LN is a viable choice in such scenarios.
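Eq. (19) can be checked in the same spirit. The following sketch (our illustration; it covers only the normalized part \(\mathbf{z}\), i.e., \(\mathbf{\gamma}\) and \(\mathbf{\beta}\) are omitted) compares the closed-form Jacobian with a finite-difference Jacobian in the denominator layout.

```python
import numpy as np

def ln_z(x, eps=1e-5):
    """Normalized part z of LayerNorm (gamma and beta omitted)."""
    y = x - x.mean()                    # (I - 11^T / D) x
    return y / np.sqrt(y @ y + eps)

rng = np.random.default_rng(0)
D, eps = 6, 1e-5
x = rng.standard_normal(D)

# Closed-form Jacobian of Eq. (19), denominator layout.
y = x - x.mean()
v2 = y @ y + eps
P = np.eye(D) - np.ones((D, D)) / D
J_formula = P @ (np.eye(D) - np.outer(y, y) / v2) / np.sqrt(v2)

# Finite-difference Jacobian: J[i, j] = d z_j / d x_i.
h = 1e-6
J_fd = np.zeros((D, D))
for i in range(D):
    dx = np.zeros(D)
    dx[i] = h
    J_fd[i] = (ln_z(x + dx, eps) - ln_z(x - dx, eps)) / (2 * h)

print(np.allclose(J_formula, J_fd, atol=1e-5))   # True
```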
#### 3.1.4 Self-attention
Attention mechanism is firstly introduced in Bahdanau et al. (2014), and then widely used in CV and NLP areas. Dot-product attention (DPA) (Vaswani et al., 2017) is a crucial component in Transformer, enabling the capture of long-range relationships within data. In practice, multi-head attention is employed to effectively capture such relationships in different contexts. The formulation of single-head attention is as follows:
\[\mathrm{Attn\_DP}(\mathbf{X};\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V})=\mathbf{W}^{V}\mathbf{X} \cdot\mathcal{S}\left(\frac{\left(\mathbf{W}^{Q}\mathbf{X}\right)^{\top}\mathbf{W}^{K}\bm {X}}{\sqrt{D}}\right), \tag{20}\]
where \(\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V}\) are the projection matrices used to transform \(\mathbf{X}\) into query, key, and value matrices, respectively. \(\mathcal{S}(\cdot)\) denotes the softmax function. Intuitively, each token aggregates information from all visible tokens by calculating a weighted sum of the values of visible tokens based on the similarity between its query and the key of each visible token. The similarity between the \(i\)-th query \(\mathbf{q}_{i}\) and the \(j\)-th key \(\mathbf{k}_{j}\) is represented as \(\mathbf{P}_{ij}\propto\mathbf{x}_{i}{}^{\top}(\mathbf{W}^{Q})^{\top}\mathbf{W}^{K}\mathbf{x}_{j}\).
Usually, multi-head self-attention is used in practice. For the \(i\)-th attention head, where \(i\in\{1,...,H\}\), we define it as follows:
\[\mathbf{h}_{i}(\mathbf{x};\mathbf{\mathsf{W}}_{i})=\mathrm{Attn\_DP}_{i}(\mathbf{X};\mathbf{W}_{i }^{Q},\mathbf{W}_{i}^{K},\mathbf{W}_{i}^{V}),\]
where \(\mathbf{\mathsf{W}}_{i}\) represents the set of projection weight matrices \((\mathbf{W}_{i}^{Q},\mathbf{W}_{i}^{K},\mathbf{W}_{i}^{V})\).
In multi-head attention, the different attention results are concatenated as follows:
\[\mathbf{h}(\mathbf{x};\mathbf{\mathsf{W}})=[\mathbf{h}_{1}(\mathbf{x};\mathbf{\mathsf{W}}_{1});\;\mathbf{h }_{2}(\mathbf{x};\mathbf{\mathsf{W}}_{2});\;...;\;\mathbf{h}_{H}(\mathbf{x};\mathbf{\mathsf{W}}_{H })]. \tag{21}\]
Compared to linear layers and convolutions, self-attention is a high-order nonlinear function. It exhibits high-order nonlinearity due to the following aspects: 1) It captures complex dependencies between elements in the input sequence. 2) It employs a nonlinear softmax activation function to compute attention scores, resulting in a nonlinear relationship between the input sequence and the output. 3) The query, key, and value projections create a high-dimensional space where interactions between elements become more complex, contributing to the high-order nonlinearity of the function.
This high-order nonlinearity enables self-attention to effectively model intricate relationships and dependencies within the input data. We will discuss more attention mechanisms in Section 5.2.
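As a concrete illustration of Equations 20 and 21, the following minimal NumPy sketch (ours) implements a single dot-product attention head with tokens stored as columns of \(\mathbf{X}\); multi-head attention simply runs several such heads with separate projections and concatenates the results. The toy sizes and helper names are illustrative only.

```python
import numpy as np

def softmax(z, axis=0):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attn_dp(X, Wq, Wk, Wv):
    # Single-head dot-product attention following Eq. (20).
    # X is (D, N): each of the N columns is one token embedding.
    D = X.shape[0]
    Q, K, V = Wq @ X, Wk @ X, Wv @ X          # queries, keys, values, each (D, N)
    scores = (Q.T @ K) / np.sqrt(D)           # (N, N) pairwise similarities
    P = softmax(scores, axis=0)               # normalize so each column sums to 1
    return V @ P                              # (D, N): columns are convex mixes of values

rng = np.random.default_rng(0)
D, N = 8, 5
X = rng.standard_normal((D, N))
Wq, Wk, Wv = (rng.standard_normal((D, D)) for _ in range(3))
print(attn_dp(X, Wq, Wk, Wv).shape)           # (8, 5)
```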
#### 3.1.5 Residual Shortcut
Residual shortcut (He et al., 2016, 2016) is a breakthrough technology that effectively addresses the vanishing gradient problem. Prior to its introduction, deep neural networks faced significant challenges with vanishing gradients (Simonyan and Zisserman, 2014; Szegedy et al., 2015). Back in 2014, training a 16-layer VGG network was difficult. Since 2015, however, ResNet50 has become a standard and fundamental configuration for deep learning researchers, and subsequent neural network architectures have widely adopted the residual shortcut.
The residual shortcut is defined as:
\[\mathbf{y}=\mathbf{x}+f(\mathbf{x};\mathbf{W}). \tag{22}\]
The Jacobian matrix of \(\mathbf{y}\) with respect to \(\mathbf{x}\) is:
\[\mathbf{J}_{\mathbf{y}}(\mathbf{x})=\frac{\partial\mathbf{y}}{\partial\mathbf{x}}=\frac{\partial \mathbf{x}}{\partial\mathbf{x}}+\frac{\partial f(\mathbf{x};\mathbf{W})}{\partial\mathbf{x}}=\mathbf{ I}+\frac{\partial f(\mathbf{x};\mathbf{W})}{\partial\mathbf{x}}. \tag{23}\]
Given \(\frac{\partial\mathcal{L}}{\partial\mathbf{y}}\), even when \(\frac{\partial f(\mathbf{x};\mathbf{W})}{\partial\mathbf{x}}\) is very small, \(\frac{\partial\mathcal{L}}{\partial\mathbf{x}}\approx\frac{\partial\mathcal{L}}{ \partial\mathbf{y}}\), meaning the error information can still be propagated to shallow layers through \(\frac{\partial\mathcal{L}}{\partial\mathbf{x}}\). Without the residual shortcut, the gradient information would be blocked at this layer.
#### 3.1.6 Activation
Activation (Dahl et al., 2013; He et al., 2015; Hendrycks and Gimpel, 2016; Shazeer, 2020) is an effective method for introducing nonlinearity to neural networks. Among various types of activation functions, ReLU is widely used. It is defined as:
\[\mathbf{y}=\max\left(\mathbf{0},\mathbf{x}\right). \tag{24}\]
The Jacobian matrix of ReLU is:
\[\mathbf{J_{y}}(\mathbf{x})=\frac{\partial\mathbf{y}}{\partial\mathbf{x}}=\mathrm{diag}\left( \mathbf{1}\left(\mathbf{x}>\mathbf{0}\right)\right), \tag{25}\]
where \(\mathbf{1}(\cdot)\) is the indicator function.
Compared to the Sigmoid and Tanh activations, ReLU preserves the gradients better and thus mitigates the gradient vanishing problem well. ReLU is non-smooth at zero, and the generalization ability of non-smooth functions requires further investigation. Meanwhile, according to classical numerical optimization, non-smooth functions lead to slower convergence rates. After ReLU, several extensions have been proposed, including PReLU (He et al., 2015), Swish (Ramachandran et al., 2017), GeLU (Hendrycks and Gimpel, 2016), and GLU (Shazeer, 2020).
#### 3.1.7 Feed-Forward Network
In Transformer architecture (Vaswani et al., 2017), in addition to self-attention, Feed-Forward Network (FFN) is another key component. An FFN is defined as:
\[\mathbf{y}=\mathrm{FFN}(\mathbf{x};\mathbf{W}_{1},\mathbf{W}_{2},\mathbf{b}_{1},\mathbf{b}_{2})=\mathbf{W }_{2}\max\left(\mathbf{0},\mathbf{W}_{1}\mathbf{x}+\mathbf{b}_{1}\right)+\mathbf{b}_{2}, \tag{26}\]
where \(\mathbf{W}_{1},\mathbf{W}_{2},\mathbf{b}_{1},\mathbf{b}_{2}\) are learned parameters. FFN is a composition of linear projections and the ReLU activation function.
In FFN, \(\mathbf{W}_{1}\) is typically a large weight matrix that projects \(\mathbf{x}\) (with dimension \(D\)) into a higher dimensional space (usually \(4D\)), and then \(\mathbf{W}_{2}\) projects the high-dimensional feature back to the same dimension as \(\mathbf{x}\).
The Jacobian matrix of an FFN can be computed as:
\[\mathbf{J_{y}}(\mathbf{x})=\frac{\partial\,\mathrm{FFN}(\mathbf{x};\mathbf{W}_{1},\mathbf{W}_{2}, \mathbf{b}_{1},\mathbf{b}_{2})}{\partial\mathbf{x}}=\mathbf{W}_{1}^{\top}\,\mathrm{diag} \left(\mathbf{1}\left(\mathbf{W}_{1}\mathbf{x}+\mathbf{b}_{1}>\mathbf{0}\right)\right)\mathbf{W}_{2} ^{\top}. \tag{27}\]
Similar to self-attention, FFN is typically used in conjunction with a residual shortcut. With the residual shortcut, we have \(\mathbf{y}=\mathbf{x}+\mathrm{FFN}(\mathbf{x};\mathbf{W}_{1},\mathbf{W}_{2},\mathbf{b}_{1},\mathbf{b}_{2})\), and its Jacobian matrix is \(\mathbf{J_{y}}(\mathbf{x})=\mathbf{I}+\mathbf{W}_{1}^{\top}\,\mathrm{diag}\left(\mathbf{1}\left( \mathbf{W}_{1}\mathbf{x}+\mathbf{b}_{1}>\mathbf{0}\right)\right)\mathbf{W}_{2}^{\top}\).
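As a quick sanity check of Equations 26 and 27, the sketch below (ours; toy sizes) evaluates an FFN and compares the analytic Jacobian, written in the transposed convention used throughout this section (\(\partial\mathbf{y}/\partial\mathbf{x}=\mathbf{W}^{\top}\) for \(\mathbf{y}=\mathbf{W}\mathbf{x}\)), against a finite-difference estimate.

```python
import numpy as np

def ffn(x, W1, W2, b1, b2):
    # y = W2 max(0, W1 x + b1) + b2, Eq. (26)
    return W2 @ np.maximum(0.0, W1 @ x + b1) + b2

def ffn_jacobian(x, W1, W2, b1, b2):
    # Eq. (27): W1^T diag(1(W1 x + b1 > 0)) W2^T (transposed convention).
    mask = (W1 @ x + b1 > 0).astype(float)
    return W1.T @ np.diag(mask) @ W2.T

rng = np.random.default_rng(0)
D, Dh = 4, 16                                  # hidden width is usually 4D
x = rng.standard_normal(D)
W1, b1 = rng.standard_normal((Dh, D)), rng.standard_normal(Dh)
W2, b2 = rng.standard_normal((D, Dh)), rng.standard_normal(D)

# Finite-difference estimate of the standard Jacobian dy_i/dx_j,
# which is the transpose of Eq. (27).
eps = 1e-6
J_fd = np.stack([(ffn(x + eps * e, W1, W2, b1, b2) - ffn(x, W1, W2, b1, b2)) / eps
                 for e in np.eye(D)], axis=1)
print(np.allclose(J_fd, ffn_jacobian(x, W1, W2, b1, b2).T, atol=1e-4))  # True away from ReLU kinks
```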
So far, we have briefly introduced some basic modules in deep learning and derived their Jacobian matrices, which will be used in back-propagation.
### ResNet and Transformer
Based on the above introduction, we are now ready to discuss the renowned ResNet (He et al., 2016) and Transformer (Vaswani et al., 2017). ResNet allows the training of extremely deep convolutional networks by addressing the vanishing gradient problem through the use of residual connections. This enables the networks to effectively learn complex representations.
Transformer was initially introduced in natural language processing and has since expanded to other modalities, including images, audio, video, and 3D vision. The Transformer architecture has opened doors for models that can handle diverse data types within a unified framework. Recent advancements, such as CLIP (Radford et al., 2021) and DALL-E (Ramesh et al., 2022), further demonstrate the flexibility and potential of the Transformer architecture in handling multi-modal data. Interested readers can refer to Tay et al. (2022); Lin et al. (2022) for an overview survey of Transformers.
**Remark 3.1: ResNet vs. Transformer**

1. Linear projection and convolution are, by nature, two similar linear first-order operators. They are _homogeneous_. On the other hand, self-attention is a nonlinear high-order operator. It is a _heterogeneous_ operator compared to linear projection and convolution.
2. ResNet is composed of homogeneous operators, while Transformer is composed of heterogeneous operators. ResNet focuses on describing local regions, while Transformer captures longer and larger contextual information.
3. The Jacobian matrices of homogeneous and heterogeneous networks have very different properties, which result in different optimization difficulties. The heterogeneous Transformer is much harder to optimize.
In Remark 3.1, we have provided several remarks to discuss and compare ResNet and Transformer. Heterogeneous operators are modules with very different Jacobian properties. For example, self-attention and the linear layer are heterogeneous operators because the linear layer is a first-order linear operator whereas self-attention is a high-order nonlinear operator.
### Forward and Backward in Neural Network
In this section, we provide a brief definition of the forward and backward processes in neural networks, which are fundamental concepts in deep learning.
Given an input \(\mathbf{x}\), it is passed through a series of hidden layers, with each layer performing a specific transformation. The output \(\mathbf{x}^{l+1}\) of layer \(l+1\) is calculated based on the input \(\mathbf{x}^{l}\) from the previous layer, and \(\mathbf{x}^{l}\) is calculated based on its corresponding input \(\mathbf{x}^{l-1}\). This implies that \(\mathbf{x}^{l}\) is influenced by the weight matrices \(\mathbf{W}^{1}\) to \(\mathbf{W}^{l}\). We can define the forward process to obtain \(\mathbf{x}^{l}\) as follows:
\[\mathbf{x}^{l}=\mathcal{F}\left(\mathbf{x};\mathbf{W}^{1},\dots,\mathbf{W}^{l}\right). \tag{28}\]
During the forward process, \(\mathbf{x}^{l}\) is not influenced by the weight matrices in the subsequent layers, such as \(\mathbf{W}^{l+1}\).
Through back-propagation and the chain rule, we can define the backward view of deep learning as follows:
\[\begin{split}\frac{\partial\mathcal{L}}{\partial\mathbf{W}^{l}}& =\nabla_{\mathbf{W}^{l}}\mathcal{L}\left(\mathbf{x},\mathbf{y};\mathbf{W}^{1}, \dots,\mathbf{W}^{L}\right),\\ \frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l}}&= \nabla_{\mathbf{x}^{l}}\mathcal{L}\left(\mathbf{x},\mathbf{y};\mathbf{W}^{1},\dots,\mathbf{W}^{L} \right),\end{split} \tag{29}\]
These equations will be further discussed in the following section. The forward and backward views form the basic principles of optimization in deep learning.
#### 3.3.1 An Example to Understand Jacobian Matrix
To gain a deeper understanding of the forward and backward processes, let us consider an example. The network structure is illustrated in Figure 2.
In this example, \(\mathbf{x}\) represents the input data. The network consists of \(L\) layers in the stem, where each layer contains three submodules: a fully connected (FC), a rectified linear unit (ReLU), and a layer normalization (LayerNorm). Finally, we have a classification layer, a softmax, and a cross-entropy to compute the final loss \(\mathcal{L}\).
Figure 2: Visualization of a simple neural network with \(L\) layers and a classification layer.
Mathematically, we can define the _forward process_ as follows:
\[\begin{split}\mathbf{x}^{1}&=\mathrm{LN}(\mathrm{ReLU}(\mathbf{W }^{1}\mathbf{x}))\\ &\vdots\\ \mathbf{x}^{l+1}&=\mathrm{LN}(\mathrm{ReLU}(\mathbf{W}^{l+1}\mathbf{x }^{l}))\\ &\vdots\\ \mathbf{x}^{L}&=\mathrm{LN}(\mathrm{ReLU}(\mathbf{W}^{L}\mathbf{x }^{L-1}))\\ \mathbf{o}&=\mathbf{O}\mathbf{x}^{L}\\ \mathbf{p}&=\mathrm{softmax}(\mathbf{o})\\ \mathcal{L}&=\mathrm{cross\_entropy}(\mathbf{p},\mathbf{t}),\end{split} \tag{30}\]
Here, \(\mathbf{O}\) represents the classification weight matrix, and \((\mathbf{x},\mathbf{t})\) denote the input and the label, respectively. We omit bias term for easier understanding.
To facilitate the subsequent derivations, let us define \(\mathbf{h}^{l+1}=\mathbf{W}^{l+1}\mathbf{x}^{l}\) and \(\mathbf{r}^{l+1}=\mathrm{ReLU}(\mathbf{h}^{l+1})\). For convenience, we rewrite LayerNorm equation as:
\[\mathbf{x}^{l+1}=\mathrm{LN}(\mathbf{r}^{l+1})=\mathbf{\gamma}^{l+1}\odot\mathbf{z}^{l+1}+\mathbf{\beta}^{l+1}, \tag{31}\]
where \(\mathbf{z}^{l+1}=\frac{\mathbf{y}^{l+1}}{\sqrt{\|\mathbf{y}^{l+1}\|_{2}^{2}+\epsilon}}\) and \(\mathbf{y}^{l+1}=\left(\mathbf{I}-\frac{1}{D}\mathbf{1}\mathbf{1}^{\top}\right)\mathbf{r}^{l+1}\).
In the _backward process_, we calculate \(\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l}}\) and \(\frac{\partial\mathcal{L}}{\partial\mathbf{W}^{l}}\) in a reverse order, starting from the \(L\)-th layer and moving towards the 1-st layer. We can compute \(\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l}}\) as follows:
\[\begin{split}\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l}}&=\frac{\partial\mathbf{x}^{l+1}}{\partial\mathbf{x}^{l}}\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l+1}}\\ &=\frac{\partial\mathbf{x}^{l+1}}{\partial\mathbf{x}^{l}}\cdots\frac{\partial\mathbf{x}^{L}}{\partial\mathbf{x}^{L-1}}\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{L}}\\ &=\frac{\partial\mathbf{x}^{l+1}}{\partial\mathbf{x}^{l}}\cdots\frac{\partial\mathbf{x}^{L}}{\partial\mathbf{x}^{L-1}}\frac{\partial\mathbf{o}}{\partial\mathbf{x}^{L}}\frac{\partial\mathcal{L}}{\partial\mathbf{o}}\\ &=\left(\prod_{k=l+1}^{L}\frac{\partial\mathbf{x}^{k}}{\partial\mathbf{x}^{k-1}}\right)\frac{\partial\mathbf{o}}{\partial\mathbf{x}^{L}}\frac{\partial\mathcal{L}}{\partial\mathbf{o}}\\ &=\left(\prod_{k=l+1}^{L}\frac{\partial\mathbf{h}^{k}}{\partial\mathbf{x}^{k-1}}\frac{\partial\mathbf{r}^{k}}{\partial\mathbf{h}^{k}}\frac{\partial\mathbf{x}^{k}}{\partial\mathbf{r}^{k}}\right)\frac{\partial\mathbf{o}}{\partial\mathbf{x}^{L}}\frac{\partial\mathcal{L}}{\partial\mathbf{o}}\\ &=\left(\prod_{k=l+1}^{L}\left(\mathbf{W}^{k}\right)^{\top}\mathrm{diag}\left(\mathbf{1}\left(\mathbf{W}^{k}\mathbf{x}^{k-1}>\mathbf{0}\right)\right)\frac{\mathbf{I}-\frac{1}{D}\mathbf{1}\mathbf{1}^{\top}}{\sqrt{\left\|\mathbf{y}^{k}\right\|_{2}^{2}+\epsilon}}\left(\mathbf{I}-\frac{\mathbf{y}^{k}\mathbf{y}^{k\top}}{\left\|\mathbf{y}^{k}\right\|_{2}^{2}+\epsilon}\right)\mathrm{diag}\left(\mathbf{\gamma}^{k}\right)\right)\mathbf{O}^{\top}\left(\mathbf{p}-\mathbf{t}\right).\end{split} \tag{32}\]
\(\frac{\partial\mathcal{L}}{\partial\mathbf{o}}\) represents the gradient of the softmax and cross-entropy loss. Its value is \(\mathbf{p}-\mathbf{t}\), which can be found in deep learning books Goodfellow et al. (2016). We will not provide a detailed derivation here.
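Before moving on, the following tiny NumPy check (ours; the class count and values are arbitrary) confirms numerically that the gradient of the softmax followed by cross-entropy with respect to the logits is indeed \(\mathbf{p}-\mathbf{t}\):

```python
import numpy as np

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

def cross_entropy(p, t):
    # t is a one-hot label vector.
    return -np.sum(t * np.log(p + 1e-12))

rng = np.random.default_rng(0)
C = 5
o = rng.standard_normal(C)
t = np.eye(C)[2]                      # one-hot label for class 2

analytic = softmax(o) - t             # claimed gradient dL/do = p - t

eps = 1e-6
numeric = np.array([(cross_entropy(softmax(o + eps * e), t)
                     - cross_entropy(softmax(o - eps * e), t)) / (2 * eps)
                    for e in np.eye(C)])
print(np.max(np.abs(analytic - numeric)))   # a tiny number, so the two agree
```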
Once we have obtained \(\frac{\partial\mathcal{L}}{\partial\mathbf{h}^{l+1}}\) (which follows from \(\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l+1}}\) through the ReLU and LayerNorm Jacobians), we can further calculate \(\frac{\partial\mathcal{L}}{\partial\mathbf{W}^{l+1}}\) as follows:
\[\frac{\partial\mathcal{L}}{\partial\mathbf{W}^{l+1}}=\frac{\partial\mathcal{L}}{ \partial\mathbf{h}^{l+1}}\mathbf{x}^{l}{}^{\top}. \tag{33}\]
Now, let us delve into Equation 32. The range of \(\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l}}\) depends on five kinds of terms: the FC term \((\mathbf{W}^{k})^{\top}\), the ReLU term \(\mathrm{diag}(\mathbf{1}(\mathbf{W}^{k}\mathbf{x}^{k-1}>\mathbf{0}))\), the LayerNorm term, the classification weight matrix \(\mathbf{O}^{\top}\), and the softmax cross-entropy term \(\mathbf{p}-\mathbf{t}\). All five terms denote the Jacobian matrices of their corresponding modules. The first three terms (FC, ReLU, and LayerNorm) appear once per layer and are therefore multiplied many times; the classification weight matrix appears only once, and the softmax cross-entropy term is bounded within \([-1.0,1.0]\). Within the product over layers, the ReLU term corresponds to a contraction mapping, the range of the LayerNorm term depends on the input data and activations, and the range of the FC term depends on the weight matrix, particularly its eigenvalues.
_In conclusion, back-propagation is a composition of the Jacobian matrices of different modules. This example illustrates part of our analysis in Figure 1: the Jacobian matrix determines the training process._
### Network Initialization
Now that we have a network structure, it is crucial to properly initialize the network parameters. Initialization plays a crucial role in training neural networks. Xavier initialization (Glorot and Bengio, 2010), which emerged as a breakthrough, provides insights into the challenges of training deep networks.
Xavier initialization recommends two types of initialization. The first one is defined as follows:
\[W_{i,j}\sim\mathrm{U}\left(-\sqrt{\frac{6}{n_{in}+n_{out}}},\sqrt{\frac{6}{n_ {in}+n_{out}}}\right), \tag{34}\]
Here, \(\mathrm{U}(\cdot)\) represents a uniform distribution. The second type is defined as:
\[W_{i,j}\sim\mathrm{N}\left(0,\frac{2}{n_{in}+n_{out}}\right), \tag{35}\]
where \(\mathrm{N}(\mu,\sigma^{2})\) denotes a Gaussian distribution with \(\mu\) as the mean and \(\sigma^{2}\) as the variance. Here, \(n_{in}\) represents the dimension of the input, and \(n_{out}\) is the dimension of the output.
Xavier initialization has made training deeper neural networks possible. Furthermore, the introduction of Batch Normalization (BN) has reduced the sensitivity of convolutional networks to weight initialization and enables the training of deeper networks. Additionally, the use of residual shortcuts has further made it feasible to train convolutional neural networks with thousands of layers.
Figure 3 shows three eigenvalue distributions of the weight matrix after Xavier initialization, one for \(n_{in}=1024\) and \(n_{out}=1024\), one for \(n_{in}=1024\) and \(n_{out}=2048\), and one for \(n_{in}=2048\) and \(n_{out}=2048\). We have two observations. First, if \(\mathbf{W}\) is a square matrix, there will be many eigenvalues close to zero. Second, after Xavier initialization, the maximum eigenvalue is always less than 2.0.
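The observation in Figure 3 can be reproduced in a few lines of NumPy (ours). For the non-square cases we report singular values, which is how we read "eigenvalues" here, since they bound how much the matrix can stretch a vector.

```python
import numpy as np

def xavier_uniform(n_in, n_out, rng):
    a = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-a, a, size=(n_out, n_in))

rng = np.random.default_rng(0)
for n_in, n_out in [(1024, 1024), (1024, 2048), (2048, 2048)]:
    W = xavier_uniform(n_in, n_out, rng)
    s = np.linalg.svd(W, compute_uv=False)     # singular values of W
    print(f"{n_in}x{n_out}: max={s.max():.3f}, min={s.min():.4f}")
# Square matrices have many singular values near zero, and the largest one
# stays below roughly 2.0, matching the two observations above.
```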
### Formulation of Optimization Problem in Deep Learning
Generally, a deep neural network can be seen as an unconstrained optimization problem, with the exception of certain special cases like optimization on a sphere. Typically, we formulate the problem as follows:

\[\min_{\mathbf{\mathsf{W}}}\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}\left(\mathbf{y}_{i},F\left(\mathbf{x}_{i};\mathbf{\mathsf{W}}\right)\right)+\sum_{l=1}^{L}\lambda\|\mathbf{W}^{l}\|_{2}^{2}, \tag{36}\]
Figure 3: Distribution of eigenvalues after Xavier initialization. The left figure shows the distribution of \(n_{in}=1024\) and \(n_{out}=1024\). The middle figure shows the distribution of \(n_{in}=1024\) and \(n_{out}=2048\). The right shows the eigenvalue distribution of \(n_{in}=2048\) and \(n_{out}=2048\).
where \(N\) represents the batch size of inputs, \(\mathbf{x}_{i}\) denotes the \(i\)-th input, \(L\) is the number of layers, \(\mathbf{W}^{l}\) represents the weights of the \(l\)-th layer, and \(\mathbf{\mathsf{W}}\) represents the set of all weights \(\{\mathbf{W}^{1},\mathbf{W}^{2},\ldots,\mathbf{W}^{L}\}\), \(\mathcal{L}(\cdot,\cdot)\) is the loss function.
Gradient Descent-based methods such as SGD are the classical optimization methods used for training deep neural networks. The weight update rule for these methods can be expressed as:
\[\mathbf{W}^{l}_{new}=\mathbf{W}^{l}-\alpha\nabla_{\mathbf{W}^{l}}\mathcal{L}-\alpha \lambda\mathbf{W}^{l}, \tag{37}\]
here, \(\alpha\) represents the learning rate, and \(\lambda\) denotes the weight decay factor. They can be set as functions of various factors to control the optimization process effectively.
According to whether the optimization operator is directly applied to Eq 37, we have the following definition:
_Explicit optimization refers to operations directly conducted on \(\mathbf{W}\), \(\nabla_{\mathbf{W}}\mathcal{L}\), \(\alpha\), and \(\lambda\),_
_while the opposite is implicit optimization._
More precisely, we categorize the optimization of deep neural networks into two types: explicit optimization and implicit optimization. The optimization space of explicit optimization is summarized in Table 1. On the other hand, implicit optimization encompasses a wide range of techniques. It includes achieving normalized gradients for weights through activation, normalization, preventing vanishing gradients using residual shortcuts, preserving Lipschitz smoothing through strong data augmentation, and more. In later sections, we will discuss implicit optimization techniques in deep learning, as well as some standard methods in explicit optimization.
```
1:Input: learning rate scheduler \(\alpha_{t}\), weight decay \(\lambda\), and momentum \(\beta\)
2:Output: updated weight \(\mathbf{w}_{T}\)
3:for\(t=1,\ldots,T\)do
4:\(\alpha_{t}\leftarrow\mathrm{SetScheduleMultiplier}(t)\)
5:\(\mathbf{g}_{t}=\nabla_{\mathbf{w}}\mathcal{L}\left(\mathbf{\mathsf{w}}_{t},\mathbf{\theta},t\right)\)
6:\(\mathbf{v}_{t}=\beta\mathbf{v}_{t-1}+(1-\beta)\mathbf{g}_{t}\)
7:if Weight Decay is Yes then
8:\(\mathbf{w}_{t}\ =\ \mathbf{w}_{t-1}-\alpha_{t}\mathbf{v}_{t}-\alpha_{t}\lambda\mathbf{w}_{t-1}\)
9:else
10:\(\mathbf{w}_{t}\ =\ \mathbf{w}_{t-1}-\alpha_{t}\mathbf{v}_{t}\)
11:endif
12:endfor
```
**Algorithm 1** SGD with momentum and decoupled weight decay
### Popular Optimizers in Deep Learning
SGD (Stochastic Gradient Descent) (Robbins & Monro, 1951) is a fundamental optimization algorithm widely used in deep learning. It updates a model's parameters based on the gradient of the loss function with respect to the parameters. While SGD is fast and simple, it can struggle to converge to the global minimum. Momentum SGD (mSGD) (Nesterov, 1983) enhances SGD by incorporating a momentum term in the update rule. This term helps the optimizer move more swiftly through shallow areas of the loss function, preventing it from getting trapped in local minima. It proves useful when the loss function exhibits many plateaus or valleys. We have shown the SGD with momentum in Algorithm 1.
Currently, adaptive learning rate optimization algorithms such as Adagrad (Duchi et al., 2011), RMSProp (Hinton et al., 2012), Adam (Kingma & Ba, 2014), and AdamW (Loshchilov & Hutter, 2017) dominate neural network training, particularly with the widespread use of Transformers across different modalities. Adagrad (Duchi et al., 2011) is the first adaptive learning rate optimization algorithm that adjusts the learning rate for each parameter based on its past
\begin{table}
\begin{tabular}{||c|c|c|c||} \hline Weight & Gradient & Learning rate & Weight decay factor \\ \hline \(\mathbf{W}\) & \(\nabla_{\mathbf{W}}\mathcal{L}\) & \(\alpha\) & \(\lambda\) \\ \hline \end{tabular}
\end{table}
Table 1: Explicit optimization variables.
gradients. This adaptivity proves beneficial for sparse datasets, where some parameters may have large gradients while others have small gradients. RMSProp (Hinton et al., 2012) is another adaptive learning rate algorithm that adjusts the learning rate based on the root mean square of past gradients. It helps prevent the learning rate from becoming excessively large or small. Adam combines the concepts of momentum and adaptive learning rates. It calculates exponentially decaying averages of past gradients and their squares to update the learning rate for each parameter. Adam demonstrates good performance across a wide range of problems and is currently one of the most popular optimization algorithms.
```
1:Input: learning rate scheduler \(\alpha_{t}\), weight decay \(\lambda\), first-order and second-order momentum coefficients \(\beta_{1}\), \(\beta_{2}\), and a small constant \(\epsilon\)
2:Output: updated weight \(\mathbf{w}_{T}\)
3:for\(t=1,2,\ldots,T\)do
4:\(\alpha_{t}\leftarrow\text{SetScheduleMultiplier}(t)\)
5:\(\mathbf{g}_{t}=\nabla_{\mathbf{w}}\mathcal{L}\left(\mathbf{w}_{t},\mathbf{\theta},t\right)\)
6:\(\mathbf{m}_{t}=\beta_{1}\mathbf{m}_{t-1}+\left(1-\beta_{1}\right)\mathbf{g}_{t}\)
7:\(\mathbf{v}_{t}=\beta_{2}\mathbf{v}_{t-1}+\left(1-\beta_{2}\right)\mathbf{g}_{t}^{2}\)\(\triangleright\)\(\mathbf{m}_{t}\) and \(\mathbf{v}_{t}\) will be cached.
8:\(\hat{\mathbf{m}}_{t}=\mathbf{m}_{t}/\left(1-\beta_{1}^{t}\right)\)
9:\(\hat{\mathbf{v}}_{t}=\mathbf{v}_{t}/\left(1-\beta_{2}^{t}\right)\)
10:\(\mathbf{\mu}_{t}=\hat{\mathbf{m}}_{t}/\left(\sqrt{\hat{\mathbf{v}}_{t}}+\epsilon\right)\)\(\triangleright\)\(\mathbf{\mu}_{t}\) is the final gradient used for update.
11:if Weight Decay is Yes then\(\triangleright\) Some variables are weight decay free.
12:\(\mathbf{w}_{t}\)\(=\mathbf{w}_{t-1}-\alpha_{t}\mathbf{\mu}_{t}-\alpha_{t}\lambda_{t}\mathbf{w}_{t-1}\)
13:else
14:\(\mathbf{w}_{t}\)\(=\)\(\mathbf{w}_{t-1}-\alpha_{t}\mathbf{\mu}_{t}\)
15:endif
16:endfor
```
**Algorithm 2** AdamW
AdamW (Loshchilov and Hutter, 2017) is an extension of Adam that rectifies the flawed \(L_{2}\) regularization technique by incorporating weight decay. In the original version, the \(L_{2}\) regularization is applied to the gradient, whereas in the corrected weight decay version, it is directly applied to the weight matrix. This correction further improves the performance of Adam.
In Algorithm 1 and Algorithm 2, we have presented SGD and AdamW with momentum and decoupled weight decay. Now, let us discuss the advantages and disadvantages of these methods. In addition to SGD, there are several improved optimizers that build upon it. Examples include SGDR (Loshchilov and Hutter, 2016), SVRG (Johnson and Zhang, 2013), and signSGD (Bernstein et al., 2018). These optimizers aim to enhance the performance of SGD in various ways. For adaptive learning rate optimization, there are multiple algorithms designed to improve upon it. AdaHessian (Yao et al., 2020), Adabelief (Zhuang et al., 2020), and Adafactor (Shazeer and Stern, 2018) are a few examples. These algorithms aim to refine the adaptive learning rate mechanisms to achieve better optimization results. Furthermore, there are specific optimizers (You et al., 2019, 2017) tailored for addressing large-scale optimization problems. These optimizers are designed to handle the challenges posed by datasets and models of significant size. Each optimizer brings its own set of benefits and considerations, depending on the specific characteristics of the problem at hand. We will discuss the optimizers in Section 6 in detail.
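To make the difference between Adam with \(L_{2}\) regularization and AdamW concrete, here is a minimal NumPy step function (ours). It uses the standard \(\beta^{t}\) bias correction and an \(\epsilon\) inside the square root, which Algorithm 2 abbreviates, and the hyper-parameter values are illustrative only.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, weight_decay=1e-2, decoupled=True):
    """One Adam/AdamW step. With decoupled=False the L2 term is folded into the
    gradient (original Adam + L2); with decoupled=True the decay is applied
    directly to the weights (AdamW)."""
    if not decoupled:
        g = g + weight_decay * w          # L2 regularization enters the moments
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    if decoupled:
        w = w - lr * weight_decay * w     # decoupled weight decay (AdamW)
    return w, m, v

w = np.array([1.0, -2.0])
m = v = np.zeros_like(w)
g = np.array([0.1, 0.3])
w, m, v = adam_step(w, g, m, v, t=1)
print(w)
```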
### Mixed Precision Training
Mixed Precision Training (MPT) (Micikevicius et al., 2017)8 is a widely adopted strategy in deep learning that brings significant speedup and memory savings during training. This technique enables researchers and practitioners to train larger and more complex models effectively.
Footnote 8: [https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html)
In computer memory, a single-precision floating point (also known as FP32 or float32) typically occupies 32 bits. It offers a wide range of numeric values by utilizing a floating radix point. The minimum positive normal value is \(2^{-126}\approx 1.18\times 10^{-38}\), and the largest positive normalized FP32 number is \(2^{128}\left(1-2^{-24}\right)\approx 3.4\times 10^{38}\).
On the other hand, half precision (also referred to as FP16 or float16) is a floating-point format that occupies 16 bits (two bytes) in computer memory. It provides a more compact representation. The minimum positive normal value
is \(2^{-14}\approx 6.10\times 10^{-5}\). The maximum representable value is \(\left(2-2^{-10}\right)\times 2^{15}=65504\). The calculation of FP16 values follows the equation:
\[\left(-1\right)^{\text{Sign}}\times 2^{\text{Exponent}-15}\times\left(1+\frac{ \text{Fraction}}{1024}\right). \tag{38}\]
To visualize the format of half precision, refer to Figure 4.
In MPT, half-precision (FP16) is utilized for both storage and arithmetic operations, with weights, activations, and gradients being stored in FP16. However, a single-precision (FP32) master copy of weights is employed for updates. There are several advantages to using FP16 numerical formats over FP32. Firstly, lower precision formats like FP16 consume less memory, which enables the training and deployment of larger neural networks. The reduced memory footprint is particularly beneficial when dealing with models that have a large number of parameters. Secondly, FP16 formats demand less memory bandwidth, resulting in accelerated data transfer operations. This efficiency in memory usage improves the overall performance and speed of the training process. Thirdly, mathematical operations execute much faster in reduced precision, especially on GPUs equipped with Tensor Core support specifically designed for the given precision. The utilization of FP16 allows for quicker computations, leading to enhanced training speed and efficiency. In practical applications, the adoption of mixed precision training can yield substantial speedups, with reported gains of up to 3x on architectures like Volta and Turing. By leveraging FP16 for storage and arithmetic operations, combined with an FP32 master copy for weight updates, the training process is optimized, resulting in improved overall performance.
Regarding the question of why one needs to make an FP32 copy for weight matrices in MPT, let us consider the following example. Suppose the learning rate is 1e-5 and the weight value is 1e-4. The multiplication of these values is 1e-9. However, in the FP16 representation, 1e-9 is rounded to zero. Therefore, using weight matrices in FP16 alone is insufficient to represent such small variations accurately. To ensure the preservation of fine-grained details, an FP32 copy of the weight matrix is necessary.
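The rounding behaviour in this example can be checked directly with NumPy scalars (a toy illustration, not the full MPT pipeline):

```python
import numpy as np

lr = np.float16(1e-5)
w  = np.float16(1e-4)

update = lr * w                 # exact value would be 1e-9
print(update)                   # 0.0: 1e-9 is below FP16's smallest subnormal (~6e-8)

# With an FP32 master copy of the weight, the tiny step is retained.
w32 = np.float32(1e-4)
print(w32 - np.float32(1e-9))   # slightly less than 1e-4: the update survives in FP32
```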
## 4 Optimization Principles of Deep Learning
In this section, we will define two fundamental principles of optimization in deep learning. Before that, let us further review several definitions and lemmas that will be used in the optimization principles.
### Lipschitz Constant of Deep Neural Network
**Definition 6**.: _Let \(F(\mathbf{x};\{\mathbf{W}_{l},l=1,\ldots,L\}):\mathbb{R}^{D}\rightarrow\mathbb{R}\) be an L-layer neural network defined as a composite function with L transformation functions:_
\[F(\mathbf{x};\{\mathbf{W}^{l};l=1,\ldots,L\})=f^{L}\left(\mathbf{W}^{L}f^{L-1}\left(\mathbf{W }^{L-1}\ldots f^{1}\left(\mathbf{W}^{1}\mathbf{x}\right)\right)\right), \tag{39}\]
Here, \(\{\mathbf{W}_{l},l=1,\ldots,L\}\) represents the parameter set, and \(f^{l}(\cdot)\) denotes the transformation function of the \(l\)-th layer. For simplicity, we have omitted the bias term in the mathematical representation. It is important to note that the expression should be modified if extended to a Transformer model.
To simplify the notation, let us define:
\[\mathbf{y}^{l}=\mathbf{W}^{l}\mathbf{x}^{l-1}, \tag{40}\] \[\mathbf{x}^{l}=f^{l}\left(\mathbf{y}^{l}\right),\]
where \(\mathbf{x}^{l-1}\) represents the input features and \(\mathbf{x}^{l}\) denotes the output activation. \(\mathbf{W}^{l}\) refers to the parameters in the \(l\)-th layer, and \(f(\cdot)\) represents the transformation function (e.g., ReLU). It is worth noting that this notation can be extended according to Equation 39, yielding \(\mathbf{x}^{l}=f^{l}\left(\mathbf{W}^{L}\mathbf{x}^{l-1}\right)=f^{l}\left(\mathbf{W}^{l}f^{l- 1}\left(\mathbf{W}^{l-1}\mathbf{x}^{l-2}\right)\right)\).
Figure 4: Visualization of Half-precision floating-point format (FP16).
Given \(\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l}}\), we can calculate \(\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l-1}}\) and \(\frac{\partial\mathcal{L}}{\partial\mathbf{W}^{l}}\) as follows:
\[\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l-1}} =\mathbf{W}^{l\top}\frac{\partial\mathbf{x}^{l}}{\partial\mathbf{y}^{l}} \frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l}}, \tag{41}\] \[\frac{\partial\mathcal{L}}{\partial\mathbf{W}^{l}} =\frac{\partial\mathbf{x}^{l}}{\partial\mathbf{y}^{l}}\frac{\partial \mathcal{L}}{\partial\mathbf{x}^{l}}\mathbf{x}^{l-1}{}^{\top}.\]
Let us define \(\operatorname{Lip}(f^{l}(\mathbf{W}^{l}\mathbf{x}^{l-1}))\) as the Lipschitz constant of the \(l\)-th layer. We have the following lemma.
**Lemma 3**.: _Given the Lipschitz constant of each transformation function in a network \(F\), the following inequality holds:_
\[\operatorname{Lip}(F(\mathbf{x};\{\mathbf{W}^{l},l=1,\dots,L\}))\leq\prod_{l=1}^{L} \operatorname{Lip}(f^{l}(\mathbf{W}^{l}\mathbf{x}^{l-1})). \tag{42}\]
From Lemma 3, the Lipschitz constant of a network is upper-bounded by the product of each layer's Lipschitz constant. It should be noted that \(\prod_{l=1}^{L}\operatorname{Lip}(f^{l}(\mathbf{W}^{l}\mathbf{x}^{l-1}))\) is the upper bound of the Lipschitz constant of the whole network. Considering that the network cannot always reach the upper bound Lipschitz constant in each layer, in practice, the true Lipschitz constant of the whole network will be smaller than the upper bound.
One method to estimate the Lipschitz constant is to simulate the value according to its definition as:
\[K_{s}=\max_{\mathbf{\epsilon};\mathbf{x}}\frac{\|F(\mathbf{x}+\mathbf{\epsilon};\{\mathbf{W}_{l}, l=1,\dots,L\})-F(\mathbf{x};\{\mathbf{W}_{l},l=1,\dots,L\})\|}{\|\mathbf{x}+\mathbf{\epsilon}- \mathbf{x}\|} \tag{43}\]
In practice, for efficiency, we cannot enumerate all points, so we only sample some points. For instance, we select 100 points \(\mathbf{x}\), and for each point, we randomly select 100 points of different \(\mathbf{\epsilon}\). If we denote the simulated Lipschitz constant as \(K_{s}\), the true Lipschitz constant as \(K_{t}\), and the theoretical upper bound of Lipschitz constant as \(K_{u}\), we have the following relationship among these three values:
\[K_{s}\leq K_{t}\leq K_{u}. \tag{44}\]
\(K_{u}\) can be obtained by theoretical derivation, and it is usually very large. However, obtaining \(K_{t}\) is challenging because it is difficult to enumerate all possible points. We can approximate \(K_{t}\) by simulating as many points as possible, thus we have \(K_{s}\leq K_{t}\).
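A minimal sketch of the simulation in Equation 43 might look as follows (the function and parameter names are ours); on a linear map the sampled estimate stays below the true Lipschitz constant, which is the largest singular value, in line with Equation 44.

```python
import numpy as np

def lipschitz_lower_bound(f, dim, n_points=100, n_perturb=100, scale=1e-2, seed=0):
    """Simulated K_s of Eq. (43): sample base points x and perturbations eps,
    and keep the largest observed ratio ||f(x+eps) - f(x)|| / ||eps||."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_points):
        x = rng.standard_normal(dim)
        fx = f(x)
        for _ in range(n_perturb):
            eps = scale * rng.standard_normal(dim)
            ratio = np.linalg.norm(f(x + eps) - fx) / np.linalg.norm(eps)
            best = max(best, ratio)
    return best

# Toy check on a linear map, whose true Lipschitz constant is its largest singular value.
rng = np.random.default_rng(1)
W = rng.standard_normal((16, 16))
K_s = lipschitz_lower_bound(lambda x: W @ x, dim=16)
K_t = np.linalg.svd(W, compute_uv=False).max()
print(K_s <= K_t + 1e-8, round(K_s, 3), round(K_t, 3))   # True, K_s below K_t
```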
#### 4.1.1 An Example to Understand Lipschitz Constants of Networks
To gain a deeper understanding of the computation of the Lipschitz constant, let us consider an example. The network structure is illustrated in Figure 5.
Same as in our previous example, \(\mathbf{x}\) represents the input data. The network consists of \(L\) layers in the stem, where each layer contains two submodules: self-attention (SA) and a feed-forward network (FFN), each wrapped in a residual shortcut. Finally, we have a classification layer.
Figure 5: Visualization of a simple Transformer with \(L\) layers and a FC layer.
Mathematically, we can define the forward process as follows:
\[\mathbf{y}^{1} =\mathbf{x}+\mathrm{SA}^{1}(\mathbf{x};\mathbf{\mathbb{W}}^{1}) \tag{45}\] \[\mathbf{x}^{1} =\mathbf{y}^{1}+\mathrm{FFN}^{1}(\mathbf{y}^{1};\mathbf{\mathbb{V}}^{1})\] \[\vdots\] \[\mathbf{y}^{l+1} =\mathbf{x}^{l}+\mathrm{SA}^{l+1}(\mathbf{x}^{l};\mathbf{\mathbb{W}}^{l+1})\] \[\mathbf{x}^{l+1} =\mathbf{y}^{l+1}+\mathrm{FFN}^{l+1}(\mathbf{y}^{l+1};\mathbf{\mathbb{V}}^{l+1})\] \[\vdots\] \[\mathbf{y}^{L} =\mathbf{x}^{L-1}+\mathrm{SA}^{L}(\mathbf{x}^{L-1};\mathbf{\mathbb{W}}^{L})\] \[\mathbf{x}^{L} =\mathbf{y}^{L}+\mathrm{FFN}^{L}(\mathbf{y}^{L};\mathbf{\mathbb{V}}^{L})\] \[\mathbf{o} =\mathbf{O}\mathbf{x}^{L}\]
where \(\mathbf{\mathbb{W}}^{l}\) and \(\mathbf{\mathbb{V}}^{l}\) are the learnable parameters of the self-attention and feed-forward network submodules in the \(l\)-th layer, respectively.
We can compute the Lipschitz constant of the whole network as:
\[\begin{split}\mathrm{Lip}\left(F(\mathbf{x};(\mathbf{\mathbb{W}}^{l},\mathbf{\mathbb{V}}^{l}),l=1,\ldots,L)\right)&\leq\left(\prod_{l=1}^{L}\mathrm{Lip}\left(f^{l}(\mathbf{x}^{l-1};(\mathbf{\mathbb{W}}^{l},\mathbf{\mathbb{V}}^{l}))\right)\right)\mathrm{Lip}(\mathbf{O}\mathbf{x}^{L})\\ &=\left(\prod_{l=1}^{L}\left(1+\mathrm{Lip}\left(\mathrm{SA}^{l}(\mathbf{x}^{l-1};\mathbf{\mathbb{W}}^{l})\right)\right)\left(1+\mathrm{Lip}\left(\mathrm{FFN}^{l}(\mathbf{y}^{l};\mathbf{\mathbb{V}}^{l})\right)\right)\right)\mathrm{Lip}(\mathbf{O}\mathbf{x}^{L})\\ &=\left(\prod_{l=1}^{L}\left(1+\mathrm{Lip}\left(\mathrm{SA}^{l}(\mathbf{x}^{l-1};\mathbf{\mathbb{W}}^{l})\right)\right)\left(1+\mathrm{Lip}\left(\mathrm{FFN}^{l}(\mathbf{y}^{l};\mathbf{\mathbb{V}}^{l})\right)\right)\right)\cdot\sigma_{max}(\mathbf{O}),\end{split} \tag{46}\]
where \(\sigma_{max}(\mathbf{O})\) is the largest singular value of the matrix \(\mathbf{O}\).
From this example, we can see that the Lipschitz constant of the network is highly related to the Lipschitz constants of its submodules, \(\mathrm{Lip}\left(\mathrm{SA}^{l}(\mathbf{x}^{l-1};\mathbf{\mathbb{W}}^{l})\right)\) and \(\mathrm{Lip}\left(\mathrm{FFN}^{l}(\mathbf{y}^{l};\mathbf{\mathbb{V}}^{l})\right)\). If these submodules are unstable (their Lipschitz constants are very large or unbounded), then the whole network will be unstable. _The Lipschitz constant of each module affects the training stability of the network, and it can be calculated according to its Jacobian matrix._ Therefore, we should analyze the Jacobian matrix of each module in detail. This observation further explains our understanding overview in Figure 1.
### Principles of Optimization
In this subsection, we will clarify two simple and fundamental optimization principles.
**Principle 1** (Forward Principle of Optimization).: _To ensure the stability of model training, it is necessary to ensure that the values of activations across all layers in the forward process satisfy the following condition:_
\[\mathbf{x}^{l},\mathbf{y}^{l}<\mathcal{R},\text{ for }l\in[1,L], \tag{47}\]
_where \(\mathcal{R}\) represents the maximum value range of the float precision used, and \(\mathbf{x}^{l},\mathbf{y}^{l}\) are defined as in Equation 40._
When using single-precision (Float32) training, \(\mathcal{R}=3.4\times 10^{38}\). In MPT (Micikevicius et al., 2017), the FP16 range is \([-65504,65504]\), which means \(\mathcal{R}=65504\). If the value of an activation exceeds \(\mathcal{R}\), it will trigger an overflow and result in an Infinity value. Performing an operation on two Infinity values will trigger a NaN (Not a Number) warning or error. As for the underflow problem, the model can still be trained stably, although precision may be slightly affected.
Normalization techniques such as BN and LN are highly effective in ensuring the validity of the forward principle of optimization. Without normalization, in a network where each layer is an expansion mapping, the activation values
may overflow after several layers. However, when a normalization operation is applied after each layer, the feature's norm is consistently normalized to a relatively small value, preventing any overflow issues during the forward process. While normalization plays a crucial role in upholding the forward principle, it should be acknowledged that for certain abnormal inputs, normalization might violate the backward optimization principle.
**Principle 2** (Backward Principle of Optimization).: _To ensure the convergence of model training, we need to ensure that the gradients of the activations across all layers in the backward process satisfy the condition:_
\[\nabla_{\mathbf{x}^{l}}\mathcal{L},\nabla_{\mathbf{y}^{l}}\mathcal{L}<\mathcal{R},\ \ \text{for}\ \ l\in[1,L]. \tag{48}\]
Based on the backward computation shown in Equation 41, we can observe that Principle 2 typically implies:
\[\nabla_{\mathbf{W}^{l}}\mathcal{L}<\mathcal{R},\text{for}\ \ l\in[1,L]. \tag{49}\]
Principles 1 and 2 are two fundamental requirements for stable network training. The forward principle of optimization is usually easy to guarantee via normalization techniques, but the backward principle is harder to satisfy because the training process of deep learning is dynamic: in each training step, the Jacobian matrices and the Lipschitz constants evolve.
As depicted in Figure 1, optimization in deep learning faces two main challenges: gradient vanishing and gradient exploding. Gradient vanishing does not cause the network training to collapse but results in a weak representation. On the other hand, gradient exploding directly leads to failed model training.
Back-propagation involves the chain composition of the Jacobian matrix of each layer. The Lipschitz constant of each layer can be calculated using the Jacobian matrix. Therefore, considering the Jacobian matrix of each transformation function in the network is an effective approach to understanding deep learning optimization.
Table 2 presents the forward definitions of some common layers, their Jacobian matrices or gradients, and their theoretical Lipschitz constants. A large Lipschitz constant indicates that the layer may often result in an expansion mapping for the gradients in the backward process. Similarly, a small Lipschitz constant implies that the gradient norm may not expand significantly. From the table, we observe that if Sigmoid is placed in the stem, it can lead to gradient vanishing. ReLU, GeLU, and Swish all propagate the gradients effectively, with ReLU being non-smooth while GeLU and Swish being smooth functions. The residual shortcut is an effective way to preserve the gradient flow in the stem, even if the branch experiences gradient vanishing. Linear and Convolution are two homogeneous operators, and they have similar forms of Lipschitz constants. For most normalization methods, the values of their Jacobian matrices can be very large when abnormal data points are inputted. This indicates a large Lipschitz constant for these layers. Three attention mechanisms are shown in the table, where dot-product attention is not Lipschitz continuous despite its powerful representation ability. \(L_{2}\) distance attention is Lipschitz continuous when \(\mathbf{W}^{Q}=\mathbf{W}^{K}\). Scale cosine similarity attention is Lipschitz continuous without requiring specific conditions on the weight matrices.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Layer Type & Definition & Gradient or Jacobian & Lipschitz Constant \\ \hline \hline
**Linear** & \(\mathbf{y}=\mathbf{W}\mathbf{x}\) & \(\frac{\partial\mathbf{y}}{\partial\mathbf{x}}=\mathbf{W}^{\top}\) & \(\sigma_{\max}(\mathbf{W})\) \\ \hline
**Convolution** & \(\mathbf{y}^{O}=\mathbf{W}_{K,K,C}^{O}\mathbf{x}^{K,K,C}\) & \(\frac{\partial\mathbf{y}^{O}}{\partial\mathbf{x}^{D}}=\left(\mathbf{W}_{D}^{O}\right)^{ \top}=\mathbf{W}_{O}^{D}\) & \(\sigma_{\max}(\mathbf{W}_{O}^{D})\) \\ \hline \hline
**Sigmoid** & \(y_{i}=\frac{1}{1+\exp(-x_{i})}\) & \(\frac{\partial y_{i}}{\partial x_{i}}=\sigma(x_{i})(1-\sigma(x_{i}))\) & \(\frac{1}{4}\) \\ \hline
**Softmax** & \(y_{i}=\frac{\exp(x_{i})}{\sum_{i=0}^{d}\exp(x_{i})}\) & \(\frac{\partial y_{i}}{\partial x_{i}}=(y_{i})\,(1(i==j)-y_{i})\) & \(\leq 1\) \\ & \(\frac{1}{\sum_{i=0}^{d}\exp(x_{i})}\) & \(\frac{\partial y_{i}}{\partial x_{i}}=(y_{i})\,(1(i==j)-y_{i})\) & Gao \& Pawel (2017) \\ \hline
**ReLU** & \(y_{i}=\max(0,x_{i})\) & \(\frac{\partial y_{i}}{\partial x_{i}}=1(x_{i}>0)\) & \(1.0\) \\ \hline
**GeLU** & \(y_{i}=x_{i}\,\mathrm{P}(x<x_{i})\approx x_{i}\sigma(1.702x_{i})\) & \(\frac{\partial y_{i}}{\partial x_{i}}\approx\sigma(1.702x_{i})+1.702x_{i}\) & \(\approx 1.1\) \\ & \(\frac{\sigma(1.702x_{i})}{\partial x_{i}}\) & \(1-\sigma(1.702x_{i}))\) & \(\approx 1.1\) \\ \hline
**Swish** & \(\mathbf{y}_{i}=x_{i}\sigma(x_{i})\) & \(\frac{\partial y_{i}}{\partial x_{i}}=\sigma(x_{i})+x_{i}\) & \(\approx 1.1\) \\ & \(\sigma(x_{i})(1-\sigma(x_{i}))\) & & \\ \hline \hline
**DP Attention** & \(\mathbf{Y}=\mathbf{W}^{V}\mathbf{X}\cdot\mathcal{S}\left(\frac{\left(\mathbf{W}^{Q}\mathbf{X} \right)^{\top}\left(\mathbf{W}^{K}\mathbf{X}\right)}{\sqrt{D/H}}\right)\) & See Equation 12 in Kim et al. (2021) & \(\infty\) \\ & \(\mathbf{Y}=\mathbf{W}^{V}\mathbf{X}\cdot\) & See Equation 19 and 20 & \(\frac{\sqrt{D/H}}{\sqrt{D/H}}\left(4\phi^{-1}(N-1)+1\right)\) \\ & \(\mathcal{S}\left(-\frac{\mathbf{X}^{\top}\left(\mathbf{W}^{Q}-\mathbf{W}^{K}\right)^{\top }\left(\mathbf{W}^{Q}-\mathbf{W}^{K}\right)\mathbf{x}}{\sqrt{D/H}}\right)\) & in Kim et al. (2021) & \(\left(\sqrt{\|\mathbf{W}^{Q}\|_{2}^{2}\,\|\mathbf{W}^{V}\|_{2}^{2}}\right)\,\left\| \mathbf{W}^{O}\right\|_{2}\) \\ & & when \(\mathbf{W}^{Q}=\mathbf{W}^{K}\) \\ \hline
**SCS Attention** & \(\mathbf{Y}=\nu\mathbf{V}\mathbf{P}\), & See Equation 13 and 14 & \(2N(N-1)\nu\tau e^{-\frac{1}{2}}\|\mathbf{W}^{K}\|_{2}+\) \\ Qi et al. & where \(\mathbf{P}=\mathrm{softmax}\left(\mathbf{\tau}\mathbf{Q}^{\top}\mathbf{K}\right)\) & in Qi et al. (2023) & \(2(N-1)\nu\tau e^{-\frac{1}{2}}\|\mathbf{W}^{Q}\|_{2}+\) \\ \hline \hline
**LayerNorm** & \(\mathbf{y}=\left(\mathbf{I}-\frac{1}{\mathbf{y}}\mathbf{1}\mathbf{1}^{\top}\right)\mathbf{x}\) & \(\frac{\partial\,\mathrm{LN}(\mathbf{x})}{\partial\mathbf{x}}=\frac{1}{\sqrt{\|\mathbf{y}\|_{ 2}^{2}+\epsilon}}\left(\mathbf{I}-\frac{1}{\mathbf{x}}\mathbf{1}^{\top}\right)\) & \(\frac{\max_{D}[\gamma_{D}]}{\epsilon}\) \\ Ba et al. & \(\mathbf{z}=\frac{\mathbf{y}}{\sqrt{\|\mathbf{y}\|_{2}^{2}+\epsilon}}\) & \(\left(\mathbf{I}-\frac{\mathbf{y}\mathbf{W}}{\|\mathbf{y}\|_{2}^{2}+\epsilon}\right)\mathrm{ diag}\left(\mathbf{\gamma}\right)\) & \(\frac{\max_{D}[\gamma_{D}]}{\epsilon}\) \\ & \(\mathrm{LN}(\mathbf{x})=\mathbf{\gamma}\odot\mathbf{z}+\mathbf{\beta}\) & \(\mathbf{\mu}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{X}_{:,i}\) & \\ \hline
**BatchNorm** & \(\mathbf{\sigma}^{2}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{(X}_{:,i}-\mathbf{\mu})\odot(\mathbf{X}_ {:,i}-\mathbf{\mu})\) & see Equation 16 and 17 & \(\approx\max_{D}\frac{|\gamma_{D}|}{\sqrt{\sigma_{D}^{D+\epsilon}}}\) \\ Ioffe \& Szegedy & \(\mathbf{X}_{:,i}=\mathbf{\gamma}\odot\mathbf{X}_{:,i}+\mathbf{\beta}\) & \(\frac{\partial\,\mathrm{NN}(\mathbf{x}_{:,i})}{\partial\mathbf{x}}=\mathbf{W}\), & \\ & \(\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{missing}}}}}}\mathbf{\mathrm{S}}_{\mathrm{im }}\mathbf{s}\) & \(\frac{\partial\,\mathrm{NN}(\mathbf{x})}{\partial\mathbf{x}}=\mathbf{W}\), & \(\sigma_{\max}(\mathbf{W})\leq\sqrt{\sum_{i=1}^{O}\gamma_{i}^{2}}\) \\ \hline
**WeightNorm** & \(\mathbf{W}(i,:)=\gamma_{i}\frac{\mathbf{y}}{\sqrt{\|\mathbf{z}\|_{2}^{2}+\epsilon}}\) & where, \(\mathbf{W}(i,:)=\gamma_{i}\frac{\mathbf{y}}{\sqrt{\|\mathbf{z}\|_{2}^{2}+\epsilon}}\) & \(\sigma_{\max}(\mathbf{W})\leq\sqrt{\sum_{i=1}^{O}\gamma_{i}^{2}}\) \\ Salimans \& Kingma & \(\mathrm{\mathrm{\mathrm{\mathrm{\mathrm{missing}}}}}(\mathbf{x})=\mathbf{W}\mathbf{w}\) & \(\frac{\partial\,\mathrm{NN}(\mathbf{x})}{\partial\mathbf{x}}=\frac{1}{\sqrt{\|\mathbf{z}\|_{2}^ {2}+\epsilon}}\) & \(\sigma_{\max}(\mathbf{W})\leq\sqrt{\sum_{i=1}^{O}\gamma_{i}^{2}}\) \\ \hline
**RMSNorm** & \(\mathrm{\mathrm{\mathrm{missing}}}\mathbf{\mathrm{\mathrm{missing}}}=\mathbf{\gamma}\odot \frac{\mathbf{z}}{\sqrt{\|\mathbf{z}\|_{2}^{2}+\epsilon}}+\mathbf{\beta}\) & \(\frac{\partial\,\mathrm{NN}(\mathbf{z})}{\partial\mathbf{x}}=\frac{1}{\sqrt{\|\mathbf{z}\|_{2}^ {2}+\epsilon}}\) & \(\frac{\max_{D}[\gamma_{D}]}{\epsilon^{\frac{1}{2}}}\) \\ Zhang \& Sennrich & \(\left(\mathbf{I}-\frac{\mathbf{w}\mathbf{\mathrm{T}}}{\mathbf{w}}\right)\mathrm{diag}\left(\mathbf{ \gamma}\right)\) & \(\frac{\partial\,\mathrm{CN}(\mathbf{z})}{\partial\mathbf{x}}=\frac{D}{D-1}\left(\mathbf{I}- \frac{1}{\mathbf{z}}\mathbf{1}\mathbf{1}^{\top}\right)\) & \(\frac{D}{D-1}\max_{D}[\gamma_{D}]\) \\ \hline
**CenterNorm** & \(\mathbf{y}=\left(\mathbf{I}-\frac{1}{D}\mathbf{1}\mathbf{1}^{\top}\right)\mathbf{x}\) & \(\frac{\partial\,\mathrm{CN}(\mathbf{x})}{\partial\mathbf{x}}=\frac{D}{D-1}\left(\mathbf{I}-\frac{1}{D}\mathbf{1}\mathbf{1}^{\top}\right)\mathrm{diag}\left(\mathbf{\gamma}\right)\) & \(\frac{D}{D-1}\max_{D}|\gamma_{D}|\) \\ Qi et al. (2023) & \(\mathrm{CN}(\mathbf{x})=\frac{D}{D-1}\mathbf{\gamma}\odot\mathbf{y}+\mathbf{\beta}\) & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Forward definitions, Jacobian matrices (or gradients), and theoretical Lipschitz constants of common layers.
## 5 Implicit Optimization in Deep Learning
### Normalization
Normalization is an effective re-parameterization technique 9 that can significantly improve the training process and performance of deep neural networks. By re-scaling and centering the input features, normalization helps mitigate issues related to the scale and distribution of the data. In this section, we will discuss different types of normalization techniques and their benefits in deep learning.
Footnote 9: [https://sassafras13.github.io/ReparamTrick/](https://sassafras13.github.io/ReparamTrick/)
**Remark 5.1: Normalization**
1. Normalization is an effective approach to mitigating gradient vanishing.
2. Normalization smoothens the landscape.
3. Generally, BN is more stable than LN, but LN has a broader range of applications than BN.
In Section 3.1.3, we briefly reviewed LayerNorm and BatchNorm. In Table 2, we list the definitions of several other normalizations along with their Jacobians or gradients and their Lipschitz constants. Due to space limitations, we could not include many other normalization methods, such as Group Normalization (Wu & He, 2018) and Instance Normalization (Ulyanov et al., 2016).
From the perspective of coordinate centralization, we consider the following ranking:
\[\text{LayerNorm}>\text{BatchNorm}>\text{CenterNorm}>\text{RMSNorm}>\text{ WeightNorm}. \tag{50}\]
LayerNorm centralizes and re-scales the activations at each spatial or sequence point, providing a more fine-grained normalization. On the other hand, BatchNorm centralizes and re-scales the activations by computing a moving average mean and standard deviation. CenterNorm only centralizes the features without re-scaling them, while RMSNorm scales the features based on their \(L_{2}\) norm. WeightNorm, on the other hand, normalizes the weights instead of the activations.
From the perspective of Lipschitz stability, we consider the following ranking:
\[\text{CenterNorm}>\text{WeightNorm}>\text{BatchNorm}>\text{RMSNorm}\approx \text{LayerNorm}. \tag{51}\]
Their corresponding Lipschitz constants, according to Table 2, are:
\[\frac{D}{D-1}\max_{D}|\gamma_{D}|<\sqrt{\sum_{i=1}^{O}\gamma_{i}^{2}}<\max_{D} \frac{|\gamma_{D}|}{\sqrt{\sigma_{D}^{2}+\epsilon}}<\frac{\max_{D}|\gamma_{D}| }{\epsilon^{\frac{1}{2}}}\approx\frac{\max_{D}|\gamma_{D}|}{\epsilon^{\frac{1 }{2}}} \tag{52}\]
From their Jacobian matrices, we can see that LayerNorm will have a very large Lipschitz constant when the input features are (nearly) equal across all dimensions, and RMSNorm will have a large Lipschitz constant when \(\mathbf{x}\approx 0\). For BatchNorm, since the mean and standard deviation are computed from the entire batch via a moving average, there is a lower probability of centering the features to 0 across all dimensions. CenterNorm and WeightNorm have Lipschitz constants that are close to the norm of \(\mathbf{\gamma}\).
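The blow-up of LayerNorm's Lipschitz constant on near-constant inputs can be illustrated numerically with the Jacobian of Equation 19 (a toy sketch, ours, with \(\mathbf{\gamma}=\mathbf{1}\) and an illustrative \(\epsilon\)):

```python
import numpy as np

def layernorm_jacobian(x, gamma, eps=1e-6):
    # Jacobian of the smoothed LayerNorm (Eq. 19), right-multiplied by diag(gamma).
    D = x.shape[0]
    C = np.eye(D) - np.ones((D, D)) / D        # centering matrix I - (1/D) 1 1^T
    y = C @ x
    n2 = y @ y + eps
    J = (C @ (np.eye(D) - np.outer(y, y) / n2)) / np.sqrt(n2)
    return J @ np.diag(gamma)

D = 8
gamma = np.ones(D)
x_generic  = np.arange(1.0, D + 1.0)           # a typical input
x_constant = np.full(D, 3.0)                   # all dimensions equal, so y is ~0

for x in (x_generic, x_constant):
    J = layernorm_jacobian(x, gamma)
    print(np.linalg.svd(J, compute_uv=False).max())
# The spectral norm is small for the generic input but on the order of
# 1/sqrt(eps) (here ~1000) for the constant input.
```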
We make several remarks about normalization in Remark 5.1. As we have described, the forward process of a typical neural network propagates computation as \(\mathbf{y}^{l+1}=\mathbf{W}^{l+1}\mathbf{x}^{l}\), where \(\mathbf{x}^{l}\) and \(\mathbf{W}^{l+1}\) are the input and weight matrix of Layer \(l+1\). To back-propagate the network loss \(\mathcal{L}\), we have:
\[\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l}}=\mathbf{W}^{l+1}{}^{\top}\frac{ \partial\mathcal{L}}{\partial\mathbf{y}^{l+1}},\ \ \frac{\partial\mathcal{L}}{ \partial\mathbf{W}^{l+1}}=(\frac{\partial\mathcal{L}}{\partial\mathbf{y}^{l+1}})\mathbf{x} ^{l}{}^{\top}.\]
When \(\mathbf{x}^{l}\) is normalized, the gradient \(\frac{\partial\mathcal{L}}{\partial\mathbf{W}^{l+1}}\) will be distributed more evenly across all channels. This alleviates the issue of gradient vanishing. As also pointed out in Santurkar et al. (2018), BN smooths the entire landscape of the network. We will discuss this property further in the experimental section.
As shown in Xiong et al. (2020), the main difference between pre-LN and post-LN is that post-LN is used in the stem, while pre-LN is used in the branch. We have discussed earlier that LayerNorm is important in smoothing the landscape, but we also find out that it has a high probability of creating abnormal gradients for some abnormal input points, which leads to unstable training. Since the abnormal gradients occur in the stem, they affect layers from the current layer to the input, and thus lead to unstable training.
### Self-attention
In Section 3.1.4, we reviewed the basic Dot-product (DP) attention. Here, we will further review some improvements over DP attention.
Kim et al. (2021) prove that the standard dot-product attention is _not_ Lipschitz continuous and introduce an alternative \(L_{2}\) attention which is Lipschitz continuous. Their \(L_{2}\) distance attention (referred to as "L2 attention" below) can be defined as:
\[\mathrm{Attn\_L_{2}}(\mathbf{X};\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V})=\mathbf{W}^{V}\mathbf{X }\cdot\mathcal{S}\Bigg{(}-\frac{\big{(}\mathbf{W}^{Q}\mathbf{X}-\mathbf{W}^{K}\mathbf{X} \big{)}^{\top}\big{(}\mathbf{W}^{Q}\mathbf{X}-\mathbf{W}^{K}\mathbf{X}\big{)}}{\sqrt{D/H}} \Bigg{)}, \tag{53}\]
where \(\mathcal{S}(\cdot)\) denotes the softmax operation, \(D\) is the hidden dimension and \(H\) is the number of heads.
Qi et al. (2023) introduce Scaled Cosine Similarity Attention (referred to as "SCS attention" or "SCSA"), which is defined as:
\[\mathrm{Attn\_SCS}(\mathbf{X};\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V},\nu,\tau)=\nu\mathbf{V} \mathbf{P},\text{where }\mathbf{P}=\mathrm{softmax}\left(\tau\mathbf{Q}^{\top}\mathbf{K}\right). \tag{54}\]
where,
\[\mathbf{Q}=\left[\mathbf{q}_{1},\cdots,\mathbf{q}_{N}\right],\ \ \ \mathbf{K}=\left[\mathbf{k}_{1}, \cdots,\mathbf{k}_{N}\right],\ \ \ \mathbf{V}=\left[\mathbf{v}_{1},\cdots,\mathbf{v}_{N}\right].\]
where \(\nu\) and \(\tau\) are predefined or learnable scalars. \(\mathbf{Q},\mathbf{K},\mathbf{V}\) are \(\ell^{2}\) column-normalized:
\(\mathbf{q}_{i},\mathbf{k}_{i},\mathbf{v}_{i}=\frac{\mathbf{W}^{Q}\mathbf{x}_{i}}{\sqrt{\|\mathbf{W}^{ Q}\mathbf{x}_{i}\|^{2}+\epsilon}}\), \(\frac{\mathbf{W}^{K}\mathbf{x}_{i}}{\sqrt{\|\mathbf{W}^{K}\mathbf{x}_{i}\|^{2}+\epsilon}}\), \(\frac{\mathbf{W}^{V}\mathbf{x}_{i}}{\sqrt{\|\mathbf{W}^{V}\mathbf{x}_{i}\|^{2}+\epsilon}}\). Here, \(\epsilon\) is a smoothing factor that guarantees the validity of cosine similarity computation even when \(\|\mathbf{W}^{Q}\mathbf{x}_{i}\|=0\). For arbitrary pairs of rows of \(\mathbf{Q}\) and \(\mathbf{K}\) denoted as \(\mathbf{q}_{i}\) and \(\mathbf{k}_{j}\), the cosine similarity on their \(\ell^{2}\)-normalized vectors is proportional to their dot product. The upper bound of SCS Attention's Lipschitz constant with respect to \(\|\cdot\|_{2}\) is shown in Table 2.
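A minimal single-head sketch of SCS attention (ours; the \(\nu\), \(\tau\), and \(\epsilon\) values are illustrative, and the softmax is applied column-wise as in the dot-product sketch earlier) might look as follows:

```python
import numpy as np

def l2_normalize(M, eps=1e-6):
    # Column-wise l2 normalization with the smoothing factor epsilon.
    return M / np.sqrt((M * M).sum(axis=0, keepdims=True) + eps)

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attn_scs(X, Wq, Wk, Wv, nu=1.0, tau=10.0, eps=1e-6):
    # Scaled cosine-similarity attention of Eq. (54), single head.
    # X is (D, N) with tokens as columns; nu and tau are scalars.
    Q = l2_normalize(Wq @ X, eps)
    K = l2_normalize(Wk @ X, eps)
    V = l2_normalize(Wv @ X, eps)
    P = softmax(tau * (Q.T @ K), axis=0)
    return nu * V @ P

rng = np.random.default_rng(0)
D, N = 8, 5
X = rng.standard_normal((D, N))
Wq, Wk, Wv = (rng.standard_normal((D, D)) for _ in range(3))
print(attn_scs(X, Wq, Wk, Wv).shape)    # (8, 5)
```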
For easy understanding, in the following, we abbreviate SCS attention as SCSA, \(L_{2}\) attention as \(L_{2}\)A, and dot-product attention as DPA.
**Remark 5.2: Self-Attention**
1. Self-attention is a high-order nonlinear operator with strong representation ability.
2. DP attention is not Lipschitz continuous, which can result in training instability if warmup, weight decay, and learning rate are not properly set.
3. DP attention and LN are two modules that often trigger training instability due to their unbounded or large Lipschitz constants.
4. Considering the Lipschitz constants of different attention mechanisms, SCS attention is a more stable version of attention.
According to the Lipschitz stability of all attention mechanisms, we reckon that:
\[\text{SCSA}>L_{2}\text{A}>\text{DPA}.\]
In Remark 5.2, we have provided several remarks about self-attention. Self-attention is a higher-order nonlinear operator that differs from linear layers and convolutions. From the Jacobian and gradient derivations in Table 2, we can see that self-attention and LN are two modules that can result in large gradients, leading to unstable training.
In Table 2, we have listed the Lipschitz constants for DPA, \(L_{2}\)A, and SCSA. More detailed derivations can be found in Kim et al. (2021); Qi et al. (2023). \(L_{2}\) attention is Lipschitz continuous under the assumption that \(\mathbf{W}^{Q}=\mathbf{W}^{K}\).
**Remark 5.3: Residual Shortcut**
1. Residual shortcut is an effective approach to mitigating the gradient vanishing problem.
2. Residual shortcut helps smooth the landscape of a network.
3. However, residual shortcut may also increase the Lipschitz constant of the network, which can potentially exacerbate the issue of gradient exploding.
### Residual Shortcut
Residual shortcuts, introduced in ResNet architectures (He et al., 2016, 2016), are an effective way to alleviate the vanishing gradient problem that often affects deep neural network training. By incorporating skip connections, residual shortcuts enable gradients to flow more easily through the network, resulting in improved training and performance. Since the introduction of residual shortcuts, several enhancements have been made in this area.
ReZero, introduced by Bachlechner et al., is one such enhancement applied to residual networks. ReZero is defined as:
\[\mathbf{x}^{l+1}=\mathbf{x}^{l}+\mathbf{\nu}_{1}\odot f(\mathbf{x}^{l};\mathbf{W}), \tag{55}\]
where \(\mathbf{\nu}_{1}\) is a learned parameter initially set to \(\mathbf{0}\). ReZero serves as an initialization method, ensuring that the module after ReZero has a Lipschitz constant of 1.0 under the initial condition. This allows network training even without warmup.
In contrast, Qi et al. (2023) introduce a Weighted Residual Shortcut (WRS) block instead of initializing \(\mathbf{\nu}\) to 0. WRS initializes \(\mathbf{\nu}\) to \(\frac{1}{L}\), where \(L\) represents the number of layers. In their study, after WRS initialization, the Lipschitz constant of the network becomes a value related to Euler's number \(e\).
A potential issue is that the value of \(\mathbf{\nu}\) may increase rapidly, leading to an increased Lipschitz constant for the network. A simple solution is to constrain the values such that \(\mathrm{abs}(\mathbf{\nu})<\omega\), where \(\omega\) can be set, for example, to 2.0. This helps maintain the Lipschitz stability of the network.
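A small sketch (ours) of a gated residual block covering both ReZero initialization (\(\nu_{0}=0\)) and a WRS-style \(\nu_{0}=1/L\), together with the clamp \(\mathrm{abs}(\mathbf{\nu})<\omega\) suggested above:

```python
import numpy as np

class GatedResidual:
    """Residual block x + nu * f(x), covering ReZero (nu0 = 0) and
    WRS-style initialization (nu0 = 1/L), with an optional clamp on |nu|."""
    def __init__(self, f, dim, nu0=0.0, omega=2.0):
        self.f = f
        self.nu = np.full(dim, nu0)    # learned gate, one value per channel
        self.omega = omega

    def __call__(self, x):
        nu = np.clip(self.nu, -self.omega, self.omega)   # keep |nu| bounded
        return x + nu * self.f(x)

# With nu0 = 0 (ReZero) the block is exactly the identity at initialization.
block = GatedResidual(f=np.tanh, dim=4, nu0=0.0)
x = np.array([0.5, -1.0, 2.0, 0.0])
print(np.allclose(block(x), x))    # True
```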
We have made several remarks about the residual shortcut in Remark 5.3. For \(\mathbf{x}^{l+1}=\mathbf{x}^{l}+f(\mathbf{x}^{l};\mathbf{W}^{l+1})\), the Jacobian matrix is \(\frac{\partial\mathbf{x}^{l+1}}{\partial\mathbf{x}^{l}}=\mathbf{I}+\frac{\partial f(\mathbf{x}^{l};\mathbf{W}^{l+1})}{\partial\mathbf{x}^{l}}\); even when \(\frac{\partial f(\mathbf{x}^{l};\mathbf{W}^{l+1})}{\partial\mathbf{x}^{l}}\approx\mathbf{0}\), the gradient can still be propagated to lower layers, because \(\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l}}=\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l+1}}\) when \(\frac{\partial f(\mathbf{x}^{l};\mathbf{W}^{l+1})}{\partial\mathbf{x}^{l}}=\mathbf{0}\).
**Remark 5.4: Activation**
1. Activation functions introduce non-linearity into the network.
2. The sigmoid function is prone to gradient vanishing, while the ReLU function blocks gradient back-propagation for negative inputs. On the other hand, GELU and Swish functions do not suffer from these issues.
3. ReLU is a non-smooth function. From the perspective of classical numerical optimization, non-smooth functions tend to have slower convergence rates during training and may exhibit generalization problems.
### Activation
Activation functions play a crucial role in deep neural networks by introducing non-linearity, allowing the network to learn complex, non-linear relationships between input and output. Without activation functions, neural networks (classical multi-layer perceptron (MLP) and convolutional neural network (CNN) without normalization) would be limited to learning only linear transformations, greatly reducing their capacity to model real-world problems. In this section, we discuss the role of activation functions in deep learning and their impact on network performance.
In Table 2, we provide the definitions of several activation functions along with their gradients and Lipschitz constants. All the mentioned activation methods do not have Lipschitz stability issues. However, Sigmoid is prone to gradient saturation, which can hinder the flow of gradients. As a result, they are not suitable for the stem of the network but can be used in the branch part. We have provided further remarks in Remark 5.4.
In recent large language models (LLM) (Chowdhery et al., 2022; Touvron et al., 2023), Gated Linear Units (GLU) (Shazeer, 2020) have been utilized. GLU naturally induces more non-linearity into the network.
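As an illustration, a gated FFN in the spirit of GLU could be sketched as follows; the hidden size, the SiLU gate activation, and the module name are assumptions made for this example rather than the exact configurations used in the cited LLMs.

```python
import torch
import torch.nn as nn

class GatedFFN(nn.Module):
    """FFN variant with a Gated Linear Unit: out = W_o (act(W_g x) * (W_v x))."""
    def __init__(self, dim: int = 512, hidden: int = 2048):
        super().__init__()
        self.gate = nn.Linear(dim, hidden)    # produces the gate stream
        self.value = nn.Linear(dim, hidden)   # produces the value stream
        self.proj = nn.Linear(hidden, dim)    # projects back to the model dimension
        self.act = nn.SiLU()                  # Swish-style gate activation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.act(self.gate(x)) * self.value(x))

y = GatedFFN()(torch.randn(2, 16, 512))
```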
### Initialization
Weight initialization plays a critical role in the training process of deep neural networks. Proper initialization can lead to faster convergence and improved model performance.

**Remark 5.5: Initialization**
1. For large models, the number of layers \(L\) should be taken into consideration during initialization because traditional Xavier initialization does not consider \(L\), leading to a very large Lipschitz constant. A large Lipschitz constant can trigger training instability.
2. Generally, deeper networks should use a smaller initialization variance.
3. Many previous works, including Admin (Liu et al., 2020), Fixup (Zhang et al., 2019), DS-Init (Zhang et al., 2019), Deepnet (Wang et al., 2022), ReZero (Bachlechner et al., 2021), and more, focus on constraining the Lipschitz constant of the network in the initial stage, although they may not explicitly mention it.
In Table 3, we have listed several initialization methods. Here, we would like to suggest a general initialization method as follows:
\[\mathbf{x}^{l+1}=\mathbf{x}^{l}+f(\mathbf{x}^{l};\nu_{2}\odot\mathbf{W}). \tag{56}\]
\begin{table}
\begin{tabular}{c c} \hline Method Name & Method \\ \hline
**Xavier Initialization** Glorot \& Bengio (2010) & \(W_{i,j}\sim\mathrm{U}\left(-\sqrt{\frac{6}{n_{in}+n_{out}}},\sqrt{\frac{6}{n_{in}+n_{out}}}\right)\) or \(W_{i,j}\sim\mathrm{N}\left(0,\frac{2}{n_{in}+n_{out}}\right)\) \\ \hline
**Kaiming Initialization** He et al. (2015) & \(W_{i,j}\sim\mathrm{N}\left(0,\frac{2}{(1+a^{2})\times n_{in}}\right)\) \\ \hline
**Orthogonal Initialization** & Initialize \(\mathbf{W}_{1}\) with Xavier initialization, \(\mathbf{U},\mathbf{S},\mathbf{V}=\mathrm{SVD}(\mathbf{W}_{1})\), \(\mathbf{W}=\mathbf{U}\) \\ \hline
**Spectral Initialization** & Initialize \(\mathbf{W}_{1}\) with Xavier initialization, \(\mathbf{U},\mathbf{S},\mathbf{V}=\mathrm{SVD}(\mathbf{W}_{1})\), \(\mathbf{W}=\frac{\mathbf{W}_{1}}{\sigma_{\max}(\mathbf{W}_{1})}\) \\ \hline
**Depth-aware Initialization** & Initialize \(\mathbf{W}_{1}\) with Xavier initialization, \(\mathbf{W}=f(\mathbf{W}_{1},L)\), e.g., \(\mathbf{W}=\frac{\mathbf{W}_{1}}{\sqrt{L}}\) \\ \hline \end{tabular}
\end{table}
Table 3: Initialization methods. In Kaiming initialization, \(a\) is the slope of the non-linearity function.
In this method, \(\mathbf{W}\) is initialized using Xavier initialization, and \(\nu_{2}\) is a fixed parameter that is pre-set and used only once, during the initialization stage of the network. Two suggested choices for \(\nu_{2}\) are \(\frac{1}{\sqrt{L}}\) or \(\frac{1}{L}\). These choices correspond to different Lipschitz constants. When \(\nu_{2}=\frac{1}{\sqrt{L}}\), the depth-aware initialization (Zhang et al., 2019a) follows the distribution:
\[W_{i,j}\sim \operatorname{U}\left(-\sqrt{\frac{6}{n_{in}+n_{out}}}\frac{1}{ \sqrt{L}},\sqrt{\frac{6}{n_{in}+n_{out}}}\frac{1}{\sqrt{L}}\right)\quad\text{or},\] \[W_{i,j}\sim \operatorname{N}\left(0,\frac{2}{n_{in}+n_{out}}\frac{1}{L}\right)\]
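A minimal sketch of such a depth-aware (Lipschitz-aware) initialization is given below. It assumes Xavier normal initialization followed by rescaling the weights of every linear layer; the function name and the decision to rescale all linear layers (rather than only residual-branch layers) are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

def depth_aware_init_(module: nn.Module, num_layers: int, nu2: float = None) -> None:
    """Xavier-initialize every Linear weight, then rescale it by nu2.

    nu2 defaults to 1/sqrt(L); using 1/L instead gives a smaller initial
    Lipschitz constant at the cost of weaker initial branch outputs.
    """
    if nu2 is None:
        nu2 = 1.0 / math.sqrt(num_layers)
    for m in module.modules():
        if isinstance(m, nn.Linear):
            nn.init.xavier_normal_(m.weight)
            with torch.no_grad():
                m.weight.mul_(nu2)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

# usage: depth_aware_init_(model, num_layers=24) before training starts
```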
For smaller models, network training is not very sensitive to the weight initialization. However, for larger models (e.g., 175 billion parameters or more), weight initialization becomes much more important.
In Remark 5.5, we have presented several remarks about initialization.
Here, we will not discuss the Lipschitz constants of Fixup (Zhang et al., 2019b), DS-Init (Zhang et al., 2019a), Admin (Liu et al., 2020), Deepnet (Wang et al., 2022), and ReZero (Bachlechner et al., 2021) operators. However, it is worth noting that these works on network initialization can be reconsidered from the perspective of constraining the Lipschitz constant. Interested readers can calculate the corresponding values in their initializations.
**Remark 5.6: DropPath**
1. DropPath is an effective method to mitigate overfitting.
2. In the training stage, DropPath is also an effective way to stabilize training by constraining the Lipschitz constant of a network.
### DropPath
DropPath (Huang et al., 2017) is another effective technique for training deep transformers. It can be defined as follows:
\[\mathbf{y}=\left\{\begin{array}{ll}\mathbf{x},&\text{if the residual path is dropped}\\ \mathbf{x}+\rho\cdot f(\mathbf{x}),&\text{otherwise}\end{array}\right. \tag{57}\]
When using DropPath with a drop probability \(p\) within each residual block, the upper bound on the Lipschitz constant of the network is refined as:
\[\operatorname{Lip}(F)\leq\prod_{s=1}^{S}\prod_{m=1}^{M_{s}}(1+\operatorname{ DropPath}(\rho_{s,m}\operatorname{Lip}(f_{s,m}),p)),\]
where
\[\operatorname{DropPath}(\alpha_{s,m},p)=\begin{cases}0,&\text{with probability }p\\ \alpha_{s,m},&\text{with probability }1-p,\end{cases}\]
with \(\alpha_{s,m}=\rho_{s,m}\operatorname{Lip}(f_{s,m})\).
DropPath effectively decreases the upper bound of a network's Lipschitz constant by randomly dropping the contributions of residual paths.
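A per-sample DropPath (stochastic depth) layer could be sketched as below; rescaling the kept paths by \(1/(1-p)\) so that the expectation is preserved is a common convention that we assume here.

```python
import torch
import torch.nn as nn

class DropPath(nn.Module):
    """Randomly drops the whole residual branch for each sample during training."""
    def __init__(self, drop_prob: float = 0.1):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, branch_out: torch.Tensor) -> torch.Tensor:
        if not self.training or self.drop_prob == 0.0:
            return branch_out
        keep_prob = 1.0 - self.drop_prob
        # one Bernoulli draw per sample, broadcast over the remaining dimensions
        shape = (branch_out.shape[0],) + (1,) * (branch_out.dim() - 1)
        mask = (torch.rand(shape, device=branch_out.device) < keep_prob).to(branch_out.dtype)
        return branch_out * mask / keep_prob  # rescale so the expectation is unchanged

# usage inside a residual block: y = x + drop_path(f(x))
drop_path = DropPath(drop_prob=0.1)
```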
While DropPath is widely used in Vision Transformers (ViT), it is not often used in language Transformers. One possible reason is that for vision problems like ImageNet, overfitting is more common, whereas for large language models, overfitting is not a concern due to the availability of rich training data (Gao et al., 2020; Laurencon et al., 2022). Analyzing the variation of the Lipschitz constant after applying Dropout (Srivastava et al., 2014) is not an easy task and requires further research.
## 6 Explicit Optimization in Deep Learning
According to the definition in Table 1, we define explicit optimization as including operations on weight \(\mathbf{W}\), gradient \(\nabla_{\mathbf{W}}\mathcal{L}\), learning rate \(\alpha\), and weight decay factor \(\lambda\). In this section, we discuss each factor and its impact on optimization. Additionally, we provide several remarks about each factor.
### On Choice of Optimizer
Before delving into each factor, let us discuss general guidelines for selecting a suitable optimizer. We have analyzed that ResNet is a homogeneous network, while Transformer is a heterogeneous network. In ResNet, since each sub-module is homogeneous, there is no large difference in the Lipschitz constants of the sub-modules, allowing us to choose a near-optimal learning rate for the whole network. However, in Transformer, the Lipschitz constant of each sub-module varies significantly. As a result, we can only select a minimal learning rate to ensure stable training, but this may degrade the network's performance.
Classical optimization primarily focuses on SGD and its variants, such as mSGD (Nesterov, 1983) and SVRG (Johnson and Zhang, 2013). However, SGD methods have disadvantages in deep learning, especially in heterogeneous networks. We expect that more researchers will focus on adaptive learning rate methods. Overall, AdamW is one of the best-performing methods in almost all types of networks. There have also been several analyses (Molybog et al., 2023; Reddi et al., 2019; Chen et al., 2018) on the convergence rate of Adam.
**Remark 6.1: On Choice of Optimizer**
1. Adaptive learning rate methods (e.g., Adam, RMSProp, AdaGrad) perform much better than SGD on heterogeneous networks (e.g., Transformer and RNN), making them a better choice for Transformer and RNN over SGD.
2. The learning rate in SGD is sensitive to the Lipschitz constant of the network, while Adam is more robust to the Lipschitz constant due to its use of a normalized update value (the element-wise division between the first-order momentum and the square root of the second-order momentum).
3. Both SGD and Adam are suitable for convolutional networks (homogeneous networks), especially for shallow networks. In some shallow convolutional networks, SGD may outperform Adam. However, as the network depth increases, Adam becomes more competitive and outperforms SGD.
4. The learning rate in SGD is sensitive to the Lipschitz constants of all layers, which is closely related to the Jacobian matrix. On the other hand, Adam leverages a normalized operator, making its learning rate less sensitive to the Jacobian matrix compared to SGD.
5. Weight decay is an effective way to stabilize network training by constraining the Lipschitz constant of the network. It acts as a contraction mapping and consistently improves performance.
6. AdamW improves Adam by rectifying the weight decay term. The original Adam uses a wrong weight decay scheduler.
7. The default parameters (\(\beta_{1}=0.9\), \(\beta_{2}=0.999\)) are not optimal. When the input data is noisy, the loss may not be stable. A more suitable choice is (\(\beta_{1}=0.9\), \(\beta_{2}=0.98\)) or (\(\beta_{1}=0.9\), \(\beta_{2}=0.95\)).
In Remark 6.1, we have provided several remarks on the choice of optimizer. In the experiments, we will observe that the Jacobian matrix of a heterogeneous network varies significantly across all layers, indicating that the gradients in different layers vary significantly. This necessitates the selection of a very small learning rate to prevent exploding gradients. However, this approach compromises the network's representation ability. Adaptive learning rate methods can effectively mitigate this issue. For instance, Adam normalizes the gradient by dividing the first-order momentum by the square root of the second-order momentum.
### On Weight
For the optimizer in deep learning, most works focus on the gradient, such as first-order and second-order momentum, and variance reduction in multiple steps of gradients. There are few works that focus on the weight operator.
Initialization methods are one example of focusing on the weight, but they are only applied once at the beginning of training.
**Remark 6.2: On Weight**
1. Initialization of the weight matrix is important as it significantly affects the training stability and the final representation ability of the network.
2. The eigenvalues of the weight matrix determine the Lipschitz constant of each sub-module. Unstable training often occurs when the eigenvalues of the weight matrix increase rapidly. A possible choice is to clamp the maximum absolute eigenvalue during training to constrain the Lipschitz constant of the network, as in BigGAN (Brock et al., 2018).
3. Re-parameterization is an effective approach to mitigating the negative effects of fast-growing Lipschitz constants, which can cause instability in network training. Examples of re-parameterization include weight normalization and scaled cosine similarity attention.
4. Exponential Moving Average (EMA) is a useful technique for improving the generalization ability of the model.
We have presented several remarks about the weight operator in Remark 6.2. The eigenvalues of the weight matrix (e.g., FC or Convolution) or the norm of the vector (\(\gamma\) in LN or BN) reflect the properties of the network. In the experiments, we will investigate how the weight matrix varies along with the training process.
Re-parameterization, which involves changing the parameters or variables of a model, is an effective way to facilitate learning, improve numerical stability, or enable certain types of computation. It is widely used in deep learning, such as in BN (Ioffe and Szegedy, 2015), WeightNorm (Salimans and Kingma, 2016), and Scaled cosine similarity attention (Qi et al., 2023).
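As an illustration of constraining the largest singular value during training, one could periodically rescale a weight matrix whenever its spectral norm exceeds a threshold. The following sketch conveys the idea only; it is not the exact procedure used in BigGAN or in the cited re-parameterization methods, and the threshold of 2.0 is an assumed value.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def clamp_spectral_norm_(weight: torch.Tensor, max_sigma: float = 2.0) -> None:
    """Rescale a 2-D weight matrix so its largest singular value is at most max_sigma."""
    sigma_max = torch.linalg.matrix_norm(weight, ord=2)   # largest singular value
    if sigma_max > max_sigma:
        weight.mul_(max_sigma / sigma_max)

# usage after an optimizer step, applied to every Linear layer of a model
def clamp_model_(model: nn.Module, max_sigma: float = 2.0) -> None:
    for m in model.modules():
        if isinstance(m, nn.Linear):
            clamp_spectral_norm_(m.weight, max_sigma)
```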
Exponential Moving Average (EMA) is a technique used to improve the generalization ability of a model. It is commonly used in small and medium-sized models, but it requires storing a copy of the weights in memory. It should be noted that EMA is sensitive to FP16 precision.
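A minimal EMA of the model weights could look like the following sketch; the decay value of 0.999 and the class name are assumptions made for illustration.

```python
import copy
import torch
import torch.nn as nn

class EmaModel:
    """Keeps an exponential moving average copy of a model's parameters."""
    def __init__(self, model: nn.Module, decay: float = 0.999):
        self.decay = decay
        self.ema = copy.deepcopy(model).eval()
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: nn.Module) -> None:
        # ema <- decay * ema + (1 - decay) * current weights
        for ema_p, p in zip(self.ema.parameters(), model.parameters()):
            ema_p.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
```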
### On Gradient
As shown in Algorithm 1, in SGD, the update value is \(\mathbf{v}_{t}=\beta_{t}\mathbf{v}_{t-1}+(1-\beta_{t})\mathbf{g}_{t}\). For the \(l\)-th layer, its gradient is \(\mathbf{g}^{l}=\frac{\partial\mathcal{L}}{\partial\mathbf{W}^{l}}=\frac{\partial \mathcal{L}}{\partial\mathbf{x}^{l+1}}\mathbf{x}^{l\top}\). If \(\frac{\partial\mathcal{L}}{\partial\mathbf{x}^{l+1}}\) is unbounded, the gradient values \(\mathbf{g}^{l}\) are unbounded as well. Deeper models tend to have larger ranges of gradient values with high probability. Additionally, the ranges of gradient values across different layers can vary significantly. Therefore, using a single learning rate for all layers may not be suitable. However, for simplicity, most SGD-based methods employ this strategy.
**Remark 6.3: On Gradient**
1. In Adam, the absolute value of the update value \(|\mathbf{\mu}_{t}|\) is bounded by \(\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}\). In SGD, the update value \(\mathbf{v}_{t}\) is not bounded and is influenced by the Jacobian matrix and the input activation.
2. NAN and INF values are often encountered in LayerNorm and Self-Attention due to their unbounded or very large Lipschitz constants.
3. The variance of Lipschitz constants in Transformer is larger than that in ResNet. As a result, the gradients in different layers exhibit larger variations in Transformer compared to ResNet.
As shown in Algorithm 2, in Adam, the updated value is \(\mathbf{\mu}_{t}=\mathbf{m}_{t}\oslash\sqrt{\mathbf{v}_{t}}=(\beta_{1}\mathbf{m}_{t-1}+\left(1-\beta_{1}\right)\mathbf{g}_{t})\oslash\sqrt{\beta_{2}\mathbf{v}_{t-1}+\left(1-\beta_{2}\right)\mathbf{g}_{t}^{2}}\). When \(\mathrm{abs}(\mathbf{g}_{t})\gg\mathrm{abs}(\mathbf{m}_{t-1})\), the absolute value of \(\mathbf{\mu}_{t}\) will approximately be \(\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}\). For example, when using the default parameters \((\beta_{1}=0.9,\beta_{2}=0.999)\) in Adam, \(\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}=\sqrt{10}\). Thus, the range of the updated value \(\mathbf{\mu}_{t}\) is approximately \(\left[-\sqrt{10},\sqrt{10}\right]\). If we use \((\beta_{1}=0.9,\beta_{2}=0.99)\), then the range of the updated value becomes \([-1.0,1.0]\). When \((\beta_{1}=0.0,\beta_{2}=0.0)\), Adam is equivalent to signSGD (Bernstein et al., 2018).
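The bound \(\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}\) on the magnitude of Adam's update direction can be checked numerically with a few lines of Python, as in the sketch below.

```python
import math

# bound on |update| when abs(g_t) dominates the accumulated momentum
for beta1, beta2 in [(0.9, 0.999), (0.9, 0.99), (0.9, 0.98), (0.9, 0.95)]:
    bound = (1.0 - beta1) / math.sqrt(1.0 - beta2)
    print(f"beta1={beta1}, beta2={beta2}: |update| is roughly bounded by {bound:.3f}")
# (0.9, 0.999) -> 3.162 (= sqrt(10)); (0.9, 0.99) -> 1.000; (0.9, 0.95) -> 0.447
```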
Compared to SGD, Adam provides a bounded update value to the weights, making it more stable during network training. This partly explains why a learning rate of 5e-4 is often effective for small and shallow networks. However, even for shallow networks, tuning the learning rate multiple times may still be necessary. In contrast, Adam allows each layer to actively learn since the ranges of values in different layers are comparable. In SGD, due to issues like vanishing or exploding gradients, only certain layers (usually higher or shallower layers) receive significant updates while others are not strongly updated. This can lead to weaker representation ability compared to Adam.
Gradient clipping is a common technique used in deep learning. It is important to note that gradient clipping is typically applied after the entire back-propagation process is completed. Therefore, gradient clipping cannot solve the NAN or INF problems that may occur within the current batch, but it can influence the weights in the next batch. One suitable approach is to apply gradient clipping on-the-fly during training.
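The two clipping strategies could be wired up as in the following PyTorch sketch. The post-backward call uses the standard `torch.nn.utils.clip_grad_norm_` utility; the on-the-fly variant based on per-parameter gradient hooks is an illustrative assumption of how such clipping might be implemented, and the clipping thresholds are assumed values.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 10))
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.1)

x, target = torch.randn(32, 64), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), target)
loss.backward()

# standard usage: clip the global gradient norm after the whole backward pass
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()

# on-the-fly variant: clamp each parameter's gradient as soon as it is produced
def clamp_grad(grad, max_abs=1.0):
    return grad.clamp(-max_abs, max_abs)

for p in model.parameters():
    p.register_hook(clamp_grad)
```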
### On Learning Rate
In classical numerical optimization literature (Nesterov, 2003; Nocedal & Wright, 1999; Wright & Ma, 2022), the optimal learning rate is typically chosen as \(\frac{1}{K_{1}}\), assuming that the function is \(K_{1}\)-smooth. If the learning rate exceeds \(\frac{2}{K_{1}}\), the training process is likely to result in an explosion.
**Remark 6.4: On Learning Rate**
1. The choice of learning rate in SGD is strongly influenced by the network structure, including its depth and width (see Remark 1 on Gradient).
2. Larger models require smaller learning rates because their Lipschitz constant \(K_{0}\) tends to be larger than that of smaller models.
3. Warmup duration is closely related to the Lipschitz constant \(K_{0}\). Generally, larger models require longer warmup periods.
4. The learning rate should decrease during the training process because the Lipschitz constant of the network usually increases as training progresses.
In general, the choice of learning rate should take into account the constants \(K_{0}\) and \(K_{1}\). However, in practice, estimating the Lipschitz constant and Lipschitz gradient constant for each layer, let alone the entire network, is challenging. This makes it difficult to determine the optimal learning rate. Using an optimal learning rate would ensure faster convergence.
Given that the Lipschitz constant and Lipschitz gradient constant of each module in a Transformer vary more significantly compared to those in a ResNet, classical SGD is not well-suited for Transformer models. Adaptive learning rate methods like Adam are more suitable for Transformers.
### On Weight Decay
In mathematics, the weight decay operator is represented as follows:
\[\mathbf{W}_{new}=(1-\alpha\lambda)\mathbf{W}, \tag{58}\]
where \(\lambda\) is a preset weight decay parameter and \(\alpha\) is the learning rate. For example, we can set \(\lambda=0.1\) and \(\alpha=5e-4\).
Applying the weight decay operator to the parameters \(\mathbf{W}\) will result in a decrease in the Lipschitz constant of the module. Let us consider a Feed-Forward Network (FFN) as defined in Equation 26 as an example. The original Lipschitz constant of an FFN module is given by \(\sigma_{max}(\mathbf{W}_{1})\cdot\sigma_{max}(\mathbf{W}_{2})\), and after applying weight decay, the Lipschitz constant of the FFN becomes:
\[\mathrm{Lip(FFN)}=(1-\alpha\lambda)^{2}\cdot\sigma_{max}(\mathbf{W}_{1})\cdot \sigma_{max}(\mathbf{W}_{2}). \tag{59}\]
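The contraction effect of decoupled weight decay on this bound can be illustrated numerically, as in the sketch below; the layer sizes and the values of \(\alpha\) and \(\lambda\) are assumptions taken from the example above.

```python
import torch

alpha, lam = 5e-4, 0.1                       # learning rate and weight decay factor
W1, W2 = torch.randn(2048, 512), torch.randn(512, 2048)

lip_before = torch.linalg.matrix_norm(W1, ord=2) * torch.linalg.matrix_norm(W2, ord=2)

# decoupled weight decay step: W <- (1 - alpha * lam) * W, applied to both layers
W1 = (1.0 - alpha * lam) * W1
W2 = (1.0 - alpha * lam) * W2

lip_after = torch.linalg.matrix_norm(W1, ord=2) * torch.linalg.matrix_norm(W2, ord=2)
print((lip_after / lip_before).item())       # equals (1 - alpha*lam)**2 < 1: a contraction
```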
**Remark 6.5: on Weight Decay**
1. The assumption for weight decay is that \(W_{ij}\) follows a \(\mathcal{N}(\mu=0,\,\sigma^{2})\) distribution. Under this prior assumption, \(\gamma\) in BatchNorm, LayerNorm, and the scale factor \(\nu\) in ReZero (Bachlechner et al., 2021) and WRS (Qi et al., 2023) should not use weight decay.
2. Weight decay can accelerate training convergence. The choice of weight decay depends on the training epochs, with longer training requiring smaller decay values.
3. Weight decay can decrease the Lipschitz constant of a network, acting as a contraction mapping.
From the above equation, we observe that each weight decay operator reduces the Lipschitz constant of the network. This reduction is particularly important for large models because, after gradient updates, the Lipschitz constant of the network tends to increase. If we do not counteract this trend with weight decay, the network training can become more unstable.
We have made several remarks in Remark 6.5. The underlying assumption of weight decay is that the weight values follow a Gaussian distribution \(\mathcal{N}(\mu=0,\,\sigma^{2})\). If the prior value of a parameter does not follow such a zero-mean Gaussian distribution, weight decay should not be applied to it, or it will degrade the performance. For instance, \(\gamma\) in LN and BN has a prior value of 1.0, so weight decay should not be applied to the \(\gamma\) term. A general strategy is to use a larger weight decay for bigger models. Weight decay is a contraction mapping.
## 7 A Guideline for Deep Learning Optimization
In this section, we will compile some guidelines for deep learning optimization based on our previous analysis and discussion.
### Guideline for Exploding Gradient
For the problem of exploding gradients, we have compiled ten guidelines that need to be carefully considered:
**1. Optimizer choice**: The choice of optimizer is crucial for training a neural network. As discussed in Section 3.2, ResNet has homogeneous blocks with comparable Lipschitz constants in each block, while Transformer has heterogeneous blocks (Self-attention and FFN) with extremely diverse Lipschitz constants. SGD is sensitive to the Lipschitz constant of the network, while Adam is more robust to variations in the Lipschitz constant due to its element-wise division between first-order momentum and the square root of second-order momentum in the weight update term. In conclusion, Adam (along with other adaptive learning rate optimizers) is a better option for Transformer than SGD. AdamW further improves upon Adam.
After choosing an optimizer like AdamW, we need to consider the learning rate, weight decay, and the updated weight. These can be understood from the following equation:
\[\mathbf{w}_{t}=\mathbf{w}_{t-1}-\alpha_{t}\mathbf{\mu}_{t}-\alpha_{t}\lambda_{t}\mathbf{w}_{t- 1}, \tag{60}\]
which is shown in Algorithm 2. Let us consider each factor one-by-one.
**2. Learning rate**: The learning rate is the parameter that is most frequently adjusted. It is typically inversely related to the Lipschitz constant of the network. Larger networks generally have larger Lipschitz constants. A general guideline for setting the learning rate is to use a smaller learning rate for larger networks (deeper and wider). In the initial state,
Transformer has a very sharp landscape, so warmup is always necessary to smooth the landscape. Deeper networks will require a longer warmup period.
**3. Weight decay**. As we discussed in Section 6.5, weight decay is an effective way to counteract the growing trend of the Lipschitz constant of the network during training. Weight decay acts as a contraction mapping in nature. Generally, larger networks should use a larger weight decay. It should be noted that the parameters \(w\) to which weight decay is applied should have the prior \(w\sim\mathcal{N}(\mu=0,\,\sigma^{2})\). Some terms, such as \(\gamma\) in LayerNorm and BatchNorm, have a prior value of 1.0; these terms should not use weight decay, or it will decrease the performance.
**4. Adam parameters**. We have presented the AdamW algorithm in Algorithm 2 and discussed the influence of parameters \(\beta_{1}\) and \(\beta_{2}\). As mentioned earlier, the range of the updated value is determined by these parameters. In PyTorch (Paszke et al., 2017) 10, the default values are \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). With these default parameters, \(\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}=\sqrt{10}\). Consequently, the range of the updated value \(\boldsymbol{\mu}_{t}\) becomes \(\left[-\sqrt{10},\sqrt{10}\right]\). Such a large range can lead to unstable training, especially when data have noise or incorrect labels, causing \(\mathrm{abs}(\boldsymbol{g}_{t})\gg\mathrm{abs}(\boldsymbol{m}_{t-1})\) (see Algorithm 2). By using \((\beta_{1}=0.9,\beta_{2}=0.99)\), the range of the updated value becomes \([-1.0,1.0]\). Similarly, when using \((\beta_{1}=0.9,\beta_{2}=0.95)\), the range of the updated value is \([-0.447,0.447]\).
Footnote 10: [https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html)
**5. Initialization**. Initialization is crucial for training a neural network, especially when the network is deep. Some classical initialization methods were proposed without considering very deep networks. In deep networks, the Lipschitz constant of the network can become very large. If we continue to use classical initialization methods, the training process becomes prone to instability. One possible solution is to employ Lipschitz-aware initialization, which is equivalent to depth-aware initialization (Zhang et al., 2019a) in specific implementations. Lipschitz-aware initialization allows us to theoretically constrain the Lipschitz constant of the network to a known value in its initial state.
**6. Normalization**. Normalization is an extremely effective module for smoothing the landscape of the network. This will be further demonstrated in the experimental section. As discussed in Section 5, normalization enforces network activations to satisfy the forward principles. However, from a backward perspective, it is important to note that LN (LayerNorm) has potential problems. Non-smoothing LN is not Lipschitz continuous, while smoothing LN is Lipschitz continuous but with a very large Lipschitz constant. As shown in Table 2, the Lipschitz constant of LN is very large for very small \(\epsilon\), such as \(\epsilon=10^{-8}\). In Equation 51, we have shown the stability of different normalizations.

Figure 6: A general guideline for solving the exploding gradient problem.
**7. Self-attention mechanism**. Self-attention is a high-order nonlinear operator with powerful representation abilities. However, we have observed that the Lipschitz constant of dot-product attention (DPA), as shown in Table 2, is unbounded, which can lead to overflow problems. As alternatives, we can consider \(L_{2}\) attention (\(L_{2}\)A) and scaled cosine similarity attention (SCSA). Based on the calculated Lipschitz constants of these three attention mechanisms, we can expect that SCSA and \(L_{2}\)A will exhibit more stable properties than DPA. Experiments will further validate that SCSA will smooth the landscape of the Transformer better than DPA.
**8. Floating-point precision**. In most current training setups, mixed precision training is used, where the forward and backward computations are performed in FP16, and the weight updates are done in FP32. However, this introduces a problem: with mixed precision training we are more prone to overflow issues than when using FP32 throughout. According to Table 2, normalization and self-attention modules are particularly prone to precision problems (overflow) due to their higher Lipschitz constants compared to other modules such as Convolution and FFN. One possible alternative to FP16 is to use BF16 or even FP32, which have a larger dynamic range. Higher floating-point precision can be applied only to the unstable modules instead of the whole network. While this can partially address the problem, there is still a significant possibility of encountering overflow. A better strategy is to use powerful and stable normalization and self-attention modules.
**9. DropPath**. DropPath is an effective method for mitigating overfitting in network training. In computer vision, it is common to train models on the training data for hundreds of epochs. Without using DropPath, it is easy to overfit the training data. In large language models (LLMs) (Touvron et al., 2023; Chowdhery et al., 2022; Hoffmann et al., 2022; Zhang et al., 2022; OpenAI, 2023), overfitting is generally less of a concern due to the abundance of data (e.g., trillions of tokens) (Gao et al., 2020; Laurencon et al., 2022). Another benefit of DropPath is that it can reduce the Lipschitz constant of a network at runtime by dropping several layers. In conclusion, in ViT models, DropPath is effective in mitigating overfitting and reducing the Lipschitz constant of the network during runtime.
**10. Gradient clipping**. Gradient clipping is a widely used technique to prevent the exploding gradient problem. Typically, it is applied as a post-processing step after back-propagation is completed. Alternatively, an on-the-fly approach can be used, where gradient clipping is performed after each layer during back-propagation. Generally, larger models require smaller clipping thresholds.
### Guideline for Vanishing Gradient
As discussed previously, the vanishing gradient problem does not disable the training process but rather leads to a weaker representation in the network. With the introduction of residual shortcuts (He et al., 2016, 2016), non-gradient-saturating activation functions (Dahl et al., 2013; Ramachandran et al., 2017; Hendrycks and Gimpel, 2016), and various effective normalizations (Ioffe and Szegedy, 2015; Ba et al., 2016; Salimans and Kingma, 2016; Zhang and Sennrich, 2019; Qi et al., 2023), the vanishing gradient problem is no longer a major obstacle to successful network training. In this subsection, we present three guidelines to debug vanishing gradient problems.
**1. Residual shortcut**. Residual shortcut, as discussed in Section 3.1.5, is a breakthrough technique that effectively addresses the vanishing gradient problem. The residual shortcut is defined as follows:
\[\mathbf{y}=\mathbf{x}+f(\mathbf{x};\mathbf{W}).\]
Figure 7: A general guideline for solving vanishing gradient problem.
The Jacobian matrix of \(\mathbf{y}\) with respect to \(\mathbf{x}\) is:
\[\mathbf{J}_{\mathbf{y}}(\mathbf{x})=\frac{\partial\mathbf{y}}{\partial\mathbf{x}}=\frac{\partial\bm {x}}{\partial\mathbf{x}}+\frac{\partial f(\mathbf{x};\mathbf{W})}{\partial\mathbf{x}}=\mathbf{I}+ \frac{\partial f(\mathbf{x};\mathbf{W})}{\partial\mathbf{x}}.\]
Even if we have gradients close to zero in the branch \(\frac{\partial f(\mathbf{x},\mathbf{W})}{\partial\mathbf{x}}\), the gradient can still be propagated to lower layers. If a vanishing gradient problem is encountered, it is important to check whether the residual shortcut is properly utilized in the stem.
**2. Normalization**. Normalization typically rescales the activations to a comparable level with a target mean and standard deviation. Let us see how it mitigates the vanishing gradient problem. Suppose \(\mathbf{y}=\mathrm{Normalization}(\mathbf{x})\) and \(\mathbf{z}=\mathbf{W}\mathbf{y}\). In the backward process, we have \(\frac{\partial\mathcal{L}}{\partial\mathbf{z}}\). Since \(\mathbf{y}\) is a normalized value, the gradients contributed by each point to \(\mathbf{W}\) are evenly distributed. This prevents the situation where the gradient from a point \(\mathbf{x}\) to \(\mathbf{W}\) is small merely because the activation values in \(\mathbf{x}\) are small. Since \(\frac{\partial\mathcal{L}}{\partial\mathbf{y}}=\mathbf{W}^{\top}\frac{\partial\mathcal{L}}{\partial\mathbf{z}}\), \(\mathbf{W}\) obtains correct and valid gradients and \(\frac{\partial\mathcal{L}}{\partial\mathbf{y}}\) remains valid as well.
**3. Gradient blocking in stem**. An often observed phenomenon in vanishing gradient problems is continuous oscillation of the loss around a relatively large value. For example, when training a model on ImageNet, the loss decreases from approximately 10.0 to 7.0 and then oscillates around 7.0. In such cases, it is important to check whether gradients are blocked in one layer in the stem.
## 8 Experimental Analysis
In this section, we will analyze the properties of different networks and explore the underlying reasons through experiments. Let us first describe our experimental settings.
Our analysis is based on the simulated Lipschitz constant of various networks. The computational equation is as follows:
\[K=\max_{\mathbf{x},\epsilon,\mathbf{z}}\frac{\left\|f(\mathbf{x}+\epsilon\mathbf{z};\mathbf{W})-f( \mathbf{x};\mathbf{W})\right\|_{p}}{\left\|\mathbf{x}+\epsilon\mathbf{z}-\mathbf{x}\right\|_{p}} \tag{61}\]
Here, \(f(\cdot)\) represents a network, \(\mathbf{x}\) denotes a randomly selected point, \(\mathbf{z}\) is a random Gaussian noise, and \(\epsilon\) is a small scaling factor. \(\|\cdot\|_{p}\) denotes the \(L_{p}\) norm, and by default, we will use the \(L_{2}\) norm. We will also compare different norms in the experiments. The default settings include \(\epsilon=1e7\), and \(\mathbf{z}\) follows a Gaussian distribution \(\mathcal{N}(\mu=0,\sigma^{2}=1.0)\). Since it is impractical to enumerate all possible values for \(\epsilon\) and \(\mathbf{z}\), we will use 10 input points \(\mathbf{x}\), and for each \(\mathbf{x}\), we will select 10 points of \(\mathbf{z}\) to obtain the simulated value of \(K\). In practice, we find that the variance between several random seeds is small.
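The simulated Lipschitz constant described above could be estimated with a sketch such as the following; the function name is ours, and the perturbation scale passed in the usage line is an illustrative value rather than the exact setting of our experiments.

```python
import torch

@torch.no_grad()
def simulated_lipschitz(f, input_shape, eps, num_x=10, num_z=10, p=2):
    """Estimate K = max_{x,z} ||f(x + eps*z) - f(x)||_p / ||eps*z||_p by random probing."""
    best = 0.0
    for _ in range(num_x):
        x = torch.randn(*input_shape)
        fx = f(x)
        for _ in range(num_z):
            z = torch.randn_like(x)                          # Gaussian perturbation direction
            num = (f(x + eps * z) - fx).flatten().norm(p=p)  # output difference
            den = (eps * z).flatten().norm(p=p)              # input difference
            best = max(best, (num / den).item())
    return best

# usage with a sequence model taking (batch, length, hidden) inputs:
# K = simulated_lipschitz(transformer, input_shape=(1, 1024, 1024), eps=1.0)
```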
**What does the value \(K\) mean?** The value of \(K\) reflects the landscape of a network. If \(K\) equals 0, it means that the output does not change with respect to any variation in \(\mathbf{x}\). On the other hand, if \(K\) is very large, it indicates that the gradient changes rapidly and the curvature is substantial around the point \(\mathbf{x}\). In conclusion, the value of \(K\) describes the landscape of the network.
Let us discuss the settings for the network \(f(\cdot)\). By default, both ResNet and Transformer have 12 layers, with a hidden dimension of 1024. For Transformer, we use 8 heads for queries, keys, and values, and the expansion scale in the feed-forward network (FFN) is 4. The input \(\mathbf{x}\) is a randomly created data with a shape of Width \(\times\) Height \(\times\) Hidden_dimension by default. We set the default values for Width and Height as 32. For ResNet, we input the tensor directly, while for Transformer, we reshape it into a sequence tensor with a shape of Length \(\times\) Hidden_dimension, where Length \(=\) Width \(\times\) Height. The final output will have the same shape as the input.
In ResNet, each layer can be calculated as follows:
\[\mathbf{x}^{l+1}=f(\mathbf{x}^{l};\mathbf{W}^{l+1})=\mathbf{x}^{l}+\mathrm{BN}\left(\mathrm{Conv} \left(\mathrm{ReLU}\left(\mathrm{BN}\left(\mathrm{Conv}\left(\mathbf{x}^{l}\right) \right)\right)\right)\right). \tag{62}\]
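The residual block written in the equation above could be implemented as in the following sketch; the channel count of 1024 and the \(3\times 3\) kernel size are assumptions that match the default experimental setting described earlier.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One layer of the evaluated ResNet: x_{l+1} = x_l + BN(Conv(ReLU(BN(Conv(x_l)))))."""
    def __init__(self, channels: int = 1024, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)

# a 32x32 input with hidden dimension 1024, matching the default experimental setting
y = ResidualBlock()(torch.randn(1, 1024, 32, 32))
```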
For Transformer, we will evaluate two types of attention mechanisms: dot-product attention and scaled cosine similarity attention. We will refer to them as DPA Transformer and SCSA Transformer, respectively. We do not include the \(L_{2}\) distance attention (Kim et al., 2021) because its implementation is not available to us; differences between our re-implementation and the authors' code could lead to an unfair assessment.
When evaluating a network without residual shortcuts, we remove all residual shortcuts. For example, in Transformer, we remove the residual shortcuts in both the attention and feed-forward network (FFN) modules. Similarly, when
evaluating a network without normalization, we remove all normalization in the network. We use BatchNorm for ResNet and LayerNorm for Transformer. When using LayerNorm, we use post-norm.
By default, we use Xavier initialization 11 to initialize the network. To simulate the training process, where the eigenvalues of the weight matrix consistently grow, we will use a gain of 2.0 in Xavier initialization by default. This means that after initialization, each weight matrix \(\mathbf{W}\) will be multiplied by 2.0.
Footnote 11: [https://pytorch.org/docs/stable/modules/torch/nn/init.html#xavier_normal_](https://pytorch.org/docs/stable/modules/torch/nn/init.html#xavier_normal_)
We can express the operation as follows:
\[\text{First}\:W_{i,j}\sim\mathrm{N}\left(0,\frac{2}{n_{in}+n_{out}}\right),\: \text{then}\:\:W_{i,j}=2.0\times W_{i,j}.\]
It is important to note that in this article, our aim is not to achieve state-of-the-art (SOTA) performance. Instead, our goal is to analyze the properties of a network. We set the gain to 2.0 to match the observation that the eigenvalues increase significantly after some training steps. This allows us to uncover potential problems that some modules may have. It should be emphasized again that our goal is not to achieve SOTA or introduce a new method.
In the following experiments, we will vary different parameters to observe how \(K\) changes. For example, we will vary the number of layers \(L\) across ten different settings: [1, 2, 4, 8, 12, 16, 24, 32, 48, 64], and analyze how \(K\) varies accordingly.
### Why is Transformer Harder to Optimize than ResNet?
To compare Transformer with ResNet, we vary the number of layers and observe how the \(K\) values change across different layers. We also compare them with and without residual shortcuts (shown as solid and dashed lines, respectively).
We simulate the Lipschitz constants of ResNet (blue color), DPA Transformer (orange color), SCSA Transformer (green color) with or without residual shortcuts for different numbers of layers. The results are shown in Figure 8.
From Figure 8, we make the following three observations:
* **DPA Transformer becomes harder to optimize as the network becomes deeper.** With an increase in the number of layers, the simulated \(K\) value for DPA Transformer increases quickly. For example, when \(L=64\), the \(K\) value exceeds the maximum range of FP16, leading to an "INF" value during the training process.
Figure 8: Simulated Lipschitz constants of ResNet, DPA Transformer, SCSA Transformer with or without residual shortcuts for different numbers of layers. The horizontal axis is scaled by \(\log_{2}\), and the vertical axis is scaled by \(\log_{10}\) for better visualization.
* **Residual shortcuts effectively smooth the landscape.** ResNet with residual shortcuts (solid blue line) exhibits a much slower increase in the \(K\) value compared to the very fast increase of ResNet without residual shortcuts (dashed blue line). A smaller \(K\) value indicates a smoother landscape. For instance, a 64-layer ResNet with residual shortcuts has a simulated \(K\) value of about 1e3, which is still smaller than 65504. This means that the learning process will not explode under the current conditions. In our Transformer implementation, we use post-LayerNorm; with post-norm, the residual shortcut does not smooth the landscape as effectively as it does in ResNet, which follows a pre-norm arrangement.
* **SCSA attention mechanism exhibits a smoother landscape compared to DPA attention.** We observe that SCSA Transformer with residual shortcuts shows a very slow increase in the \(K\) value. This indicates that SCSA attention effectively smooths the landscape. The introduced re-parametrization technique in SCSA works well.
These observations highlight the challenges faced in optimizing Transformer models compared to ResNet models. Meanwhile, these observations also verify that 1) residual shortcut can smooth the landscape of a network effectively; 2) Transformer is harder to optimize than ResNet.
### On Normalization
We further compare ResNet, DPA Transformer, and SCSA Transformer with or without normalization across different numbers of layers. The result is shown in Figure 9.
From Figure 9, we observe the following:
* **Normalization is extremely effective in smoothing the landscape.** Transformer without LN quickly experiences an explosion once the layer number exceeds 16. Adding LN noticeably smooths the landscape for Transformer. ResNets with or without BN both have a smooth landscape; as mentioned before, BN mainly mitigates the gradient vanishing problem.
These observations highlight that normalization can smooth the landscape of a network effectively. It is an extremely useful technique in deep learning.
Figure 9: Simulated Lipschitz constants of ResNet, DPA Transformer, SCSA Transformer with or without normalization (BN for ResNet and LN for Transformer) across different numbers of layers. The horizontal axis is scaled by \(\log_{2}\), and the vertical axis is scaled by \(\log_{10}\) for better visualization. \(K\) values of DPA Transformer without normalization larger than 16 layers become INF and thus have not been plotted in the figure. The same issue applies to SCSA Transformer with 64 layers.
### Simulated Lipschitz Constants across Different Layers
Here, given a point \(\mathbf{x}\), we denote \(f^{l}(\mathbf{x},\mathbf{W})\) as the output of the \(l\)-th layer. We define:
\[K_{l0}=\max_{\epsilon,\mathbf{z}}\frac{\left\|f^{l}(\mathbf{x}+\epsilon\mathbf{z};\mathbf{W})-f^ {l}(\mathbf{x};\mathbf{W})\right\|_{p}}{\left\|\mathbf{x}+\epsilon\mathbf{z}-\mathbf{x}\right\|_{p }},\ \ \text{for $l$ in [0, L]}.\]
And we define:
\[K_{Ll}=\max_{\epsilon,\mathbf{z}}\frac{\left\|f^{L}(\mathbf{x}+\epsilon\mathbf{z};\mathbf{W})- f^{L}(\mathbf{x};\mathbf{W})\right\|_{p}}{\left\|f^{l}(\mathbf{x}+\epsilon\mathbf{z};\mathbf{W})- f^{l}(\mathbf{x};\mathbf{W})\right\|_{p}},\ \ \text{for $l$ in [0, L]}.\]
In this subsection, we will evaluate the \(K_{l0}\) and \(K_{Ll}\) values. The results are shown in Figure 10. We can observe that the \(K_{l0}\) value for the DPA Transformer increases rapidly as the number of layers increases. ResNet exhibits a slower increase, while SCSA Transformer shows the slowest growth rate. The trend on the right side of Figure 10 aligns with our expectations.
### Sensitivity Analysis of Output with Respect to Different Parameters
In this subsection, we further evaluate four different parameters: \(\epsilon\), hidden dimension, the scale factor of the gain in the weight matrix, and the input size. \(\epsilon\) represents the perturbation distance from \(\mathbf{x}\) to \(\mathbf{x}+\epsilon\mathbf{z}\). We select ten settings for \(\epsilon\): [0.25, 0.5, 1.0, 4.0, 16.0, 64.0, 128.0, 256.0, 512.0, 1024.0]. For the hidden dimension, we choose ten settings: [128, 256, 512, 768, 1024, 2048, 3072, 4096, 6144, 8192]. The weight scale factor (also known as the gain in PyTorch) is varied in ten settings: [0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0]. The input size is selected from the following list: [32, 64, 128, 256, 384, 512, 768, 1024, 1532, 2048]. When evaluating hidden dimension and input size, we use networks with only 4 layers instead of 12 layers due to memory limitation. The results are shown in Figure 11.
We have the following observations from Figure 11:
* The gain of the weight matrix significantly affects the landscape of the DPA Transformer, while it does not have a significant impact on ResNet.
* The \(K\) value of the DPA Transformer is greatly affected by the choice of \(\epsilon\). When \(\epsilon=4.0\), the DPA Transformer has the highest \(K\) value. In contrast, the \(K\) value decreases with increasing \(\epsilon\) for ResNet and SCSA Transformer.
* The \(K\) values of all three networks increase along with the hidden dimension. This partly explains why larger models (with wider hidden dimensions) are harder to optimize.
* As the image size or sequence length increases, the \(K\) value does not change too much for all three networks.
Figure 10: Simulated Lipschitz constants \(K_{l0}\) and \(K_{Ll}\) across different layers
### Sensitivity Analysis of Different Norms
We have calculated the \(K\) value under different norms, including \(L_{1}\), \(L_{2}\) and \(L_{\infty}\). The result is shown in Figure 12. It shows that the results under different norm metrics are consistent.
## 9 Discussion
### The Difficulty of Training Large Models
For the training of large models, including ViT (Dosovitskiy et al., 2020; Liu et al., 2021b, a; Wu et al., 2021) or Large Language Models (Radford et al., 2018, 2019; Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023; Zhang et al., 2022; Touvron et al., 2023), we encounter two types of problems: system-level optimization problems and numerical optimization problems. Regarding system-level optimization, very large models can be trained using 3D parallelism (data, model, and pipeline parallelism) if sufficient hardware resources are available. While this article primarily focuses on numerical optimization, it is worth acknowledging that many current successes of large language models owe credit to system-level optimization techniques, including Zero (Rajbhandari et al., 2020), DeepSpeed (Rasley et al., 2020), and Megatron (Smith et al., 2022).
Figure 11: Evaluation of influence of different parameters, including the scale factor of the gain in the weight matrix, \(\epsilon\), hidden dimension and the input size, to the simulated Lipschitz constant value \(K\) using ResNet, DPA Transformer and SCSA Transformer.
When referring to large models, we typically mean networks with a larger depth \(L\) and a wider hidden dimension \(D\) in each \(\mathbf{W}\). The Lipschitz constant of a network can be bounded by the following equation:
\[\mathrm{Lip}(F_{\mathbf{x}}(\{\mathbf{W}^{l},l=1,\ldots,L\}))\leq\prod_{l=1}^{L} \mathrm{Lip}(f^{l}(\mathbf{x}^{l-1};\mathbf{W}^{l})).\]
From the above equation, we can see that the Lipschitz constant of the network is a multiplier of the Lipschitz constant of each layer. Deeper networks entail more terms in the multiplication equation. Typically, the Lipschitz constant in each layer is greater than 1.0, meaning that deeper networks tend to have larger Lipschitz constants. Consequently, deeper networks, with their often large Lipschitz constants, are more likely to violate the principles of forward and backward optimization.
Let us consider the influence of network width \(D\). In each training step, the update equation is given by
\[\mathbf{W}^{l}_{new}=\mathbf{W}^{l}-\alpha\nabla_{\mathbf{W}^{l}}\mathcal{L}-\alpha \lambda\mathbf{W}^{l}.\]
It is not fully understood how the dimension \(D\) affects the training process, but we observe that a larger \(D\) usually increases the probability of the weight matrix having a large absolute eigenvalue. This enlarges the Lipschitz constant of the network and makes training harder.
For the dynamics of the training process, it is challenging to determine whether \(\mathbf{W}^{l}_{new}=\mathbf{W}^{l}-\alpha\nabla_{\mathbf{W}^{l}}\mathcal{L}\) is a contraction mapping or an expansion mapping. If it is an expansion mapping, what is the expansion factor after this operation? Based on experimental observations, we know that in an unstable training process, the weight update corresponds to an expansion mapping, causing the eigenvalues of the weight matrix to increase rapidly. Currently, our understanding of the properties of weight updates remains limited. We believe that Random Matrix Theory (RMT) (Edelman & Rao, 2005) might shed some light on this problem. But currently, it remains unclear to us.
### Open Questions
**The properties of weight update in optimizers.** Is the weight update function shown in Section 9.1 a contraction mapping or an expansion mapping? If it is an expansion mapping, what is the expansion factor? This understanding is crucial for deep learning optimization. Currently, there is limited research on this topic, and we hope to see future works addressing this problem.
**Automatic setup and adjustment of learning rate and weight decay.** Given the weight update equation mentioned earlier, how can algorithms select optimal values for \(\alpha\) and \(\lambda\) to ensure stable weight updates? Currently, the choices of \(\alpha\) and \(\lambda\) are mainly based on researchers' empirical experience.
**The relationship between representation ability and training stability.** As discussed in the paper, we can constrain the Lipschitz constant of the network during initialization or training. For example, we can use a Lipschitz-aware
Figure 12: Simulated Lipschitz constant value \(K\) of ResNet, DPA Transformer and SCSA Transformer under the metric of \(L_{1}\), \(L_{2}\) and \(L_{\infty}\).
initialization as follows:
\[\mathbf{x}^{l+1}=\mathbf{x}^{l}+f(\mathbf{x}^{l};\nu_{2}\odot\mathbf{W}),\]
Here, \(\nu_{2}\) can be set to \(\frac{1}{L}\) or \(\frac{1}{\sqrt{L}}\) in the initialization stage, where \(L\) is the number of layers. Initializing it to \(\frac{1}{L}\) leads to more stable training compared to \(\frac{1}{\sqrt{L}}\), but how does it affect the learning representation ability? This topic requires further in-depth study.
**The value and necessity of warmup should be investigated.** The theoretical and practical necessity of warmup in Transformers is still not fully understood, despite some existing studies. We observe that even Transformers with 12 layers have a large Lipschitz constant. Training with a large learning rate in the initial stage usually leads to instability. In contrast, ResNet50 can often be trained successfully without warmup. Exploring the inner workings and effects of warmup would be interesting.
**More attention to the backward process.** Back-propagation, which calculates chain gradients, is crucial in deep learning. In widely used deep learning tools, gradients are returned by auto-differentiation, providing average gradients instead of gradients for each input. However, exploring gradient flow or even obtaining gradients for each sample may inspire new optimization methods. Better consideration of the backward process can lead to novel insights.
There are many other open problems that warrant further exploration, such as better second-order optimization techniques, incorporating Lipschitz smoothness (widely considered in classical numerical optimization) into deep learning, and comparing the generalization ability between smooth and non-smooth functions.
**Second-order optimization is a promising direction**, although the widely used methods such as SGD momentum and Adam are first-order optimization methods. Obtaining the second-order Hessian matrix is not straightforward in PyTorch and TensorFlow due to cumbersome operations. As a result, many second-order methods like Adahessian rely on approximation methods to estimate the Hessian matrix, which may not be well bounded. We need better tools to calculate higher-order information. We have observed that the JAX toolbox allows for easier acquisition of higher-order information. Future methods may consider utilizing it.
**Optimization under constrained conditions**, such as on Riemannian manifolds, is an interesting topic. Most deep learning approaches in computer vision and natural language processing are formulated as unconstrained optimization problems. Many constrained optimization problems can be transformed into unconstrained problems by adding regularization terms. However, it would be interesting to directly study constrained optimization and observe its performance.
## 10 Conclusion
This article has provided a comprehensive analysis of optimization methods in deep learning, with a particular focus on addressing the challenges of gradient vanishing and gradient exploding. We delved into various strategies to tackle these issues, including improving gradient flow and constraining the Lipschitz constant of a network. Our analysis covers both explicit optimization methods, which directly act on optimizer parameters, and implicit optimization methods, which aim to improve the landscape of a network by enhancing its modules. Throughout the article, we have provided an in-depth analysis of these optimization classes and examined the gradients or Jacobians of widely used deep learning modules. We identified existing issues and discussed current and potential enhancements. Moreover, we empirically verified our theoretical analysis. Our goal is to provide readers with a deeper understanding of deep learning optimization and thus inspire the development of more robust, efficient, and high-performing models. The field of deep learning optimization is continuously evolving, presenting both challenges and opportunities. We anticipate exciting developments on the horizon.
|
2305.10339
|
Vulnerability Propagation in Package Managers Used in iOS Development
|
Although using third-party libraries is common practice when writing
software, vulnerabilities may be found even in well-known libraries. Detected
vulnerabilities are often fixed quickly in the library code. The easiest way to
include these fixes in a dependent software application, is to update the used
library version. Package managers provide automated solutions for updating
library dependencies. However, library dependencies can have dependencies to
other libraries resulting in a dependency network with several levels of
indirections. Assessing vulnerability risks induced by dependency networks is a
non-trivial task for software developers. The library dependency network in the
Swift ecosystem encompasses libraries from CocoaPods, Carthage and Swift
Package Manager. We analysed how vulnerabilities propagate in the library
dependency network of the Swift ecosystem, how vulnerable dependencies could be
fixed via dependency upgrades, and if third party vulnerability analysis could
be made more precise given public information on these vulnerabilities. We
found that only 5.9% of connected libraries had a direct or transitive
dependency to a vulnerable library. Although we found that most libraries with
publicly reported vulnerabilities are written in C, the highest impact of
publicly reported vulnerabilities originated from libraries written in native
iOS languages. We found that around 30% of vulnerable dependencies could have
been fixed via upgrading the library dependency. In case of critical
vulnerabilities and latest library versions, over 70% of vulnerable
dependencies would have been fixed via a dependency upgrade. Lastly, we checked
whether the analysis of vulnerable dependency use could be refined using
publicly available information on the code location (method or class) of a
reported vulnerability. We found that such information is not available most of
the time.
|
Kristiina Rahkema, Dietmar Pfahl
|
2023-05-17T16:22:38Z
|
http://arxiv.org/abs/2305.10339v1
|
# Vulnerability Propagation in Package Managers Used in iOS Development
###### Abstract
Although using third-party libraries is common practice when writing software, vulnerabilities may be found even in well-known libraries. Detected vulnerabilities are often fixed quickly in the library code. The easiest way to include these fixes in a dependent software application, is to update the used library version. Package managers provide automated solutions for updating library dependencies, which make this process relatively easy. However, library dependencies can have dependencies to other libraries resulting in a dependency network with several levels of indirections. Assessing vulnerability risks induced by dependency networks is a non-trivial task for software developers.
The library dependency network in the Swift ecosystem encompasses libraries from CocoaPods, Carthage and Swift Package Manager. These three package managers are used while developing, for example, iOS or Mac OS applications in Swift or Objective-C. We analysed how vulnerabilities propagate in the library dependency network of the Swift ecosystem, how vulnerable dependencies could be fixed via dependency upgrades, and if third party vulnerability analysis could be made more precise given public information on these vulnerabilities.
We found that only 5.9% of connected libraries had a direct or transitive dependency to a vulnerable library. Although we found that most libraries with publicly reported vulnerabilities are written in C, the highest impact of publicly reported vulnerabilities originated from libraries written in native iOS languages, i.e., Objective-C and Swift. We found that around 30% of vulnerable dependencies could have been fixed via upgrading the library dependency. In case of critical vulnerabilities and latest library versions, over 70% of vulnerable dependencies would have been fixed via a dependency upgrade. Lastly, we checked whether the analysis of vulnerable dependency use could be refined using publicly available information on the code location (method or class) of a reported vulnerability. We found that such information is not available most of the time.
iOS, Swift, Vulnerabilities, Library Dependency Networks, Publicly Reported Vulnerabilities
## I Introduction
Using third-party libraries is common practice in software development. Third-party libraries make it possible to reuse existing solutions to common problems. This can make the development process faster and easier. These third-party solutions are often better vetted than custom solutions. The Open Web Application Security Project (OWASP), for example, strongly recommends against the use of custom encryption algorithms [1].
Nevertheless, vulnerabilities can be found in even very popular and well-tested libraries. For example, in December 2021, a security vulnerability was discovered in the widely used Log4J Java logging library. This vulnerability affected 4% of all the Java applications [2] and made them vulnerable to remote code execution attacks.
Many of these vulnerabilities are fixed relatively quickly [3]. After a fix is made available, dependents of the vulnerable library can include the fix via upgrading the library dependency version. It can, however, be tedious to update multiple dependencies manually. Automated solutions make this process easier. For this purpose, package managers have been created where the developer simply states the library name and exact version or a version requirement. The package manager takes care of downloading and installing the suitable library version.
Using a package manager, it is easy to declare as many dependencies as needed. These library dependencies themselves can, again, have dependencies to other libraries, creating a network of library dependencies. The collection of all libraries that are available through a package manager and their library dependencies creates a library dependency network for each package manager. When the number of direct and transitive dependencies (i.e. indirect dependencies of any level of indirection) grows, it also increases the risks of a library depending on vulnerable library versions. As seen in the example of Log4J, even a vulnerability in a seemingly harmless logging library can have an effect on a significant part of an ecosystem.
The spread of vulnerabilities in package manager library dependency networks has been studied for some package managers. Zerouali et al. [3] studied how long it takes for vulnerabilities in npm and RubyGems to be fixed and how these vulnerabilities spread through the library dependency network. They found that around 40% of libraries have a direct or transitive dependency to a vulnerable library version. Dusing et al. [4] analyzed how vulnerabilities in transitive dependencies affect the NuGet, npm and Maven package manager library dependency networks. They also studied how fast developers update their library dependencies when a vulnerability is publicly disclosed. They found that there is a significant difference in how many libraries are affected by vulnerable dependencies depending on the package manager. They also found that developers probably rely on automated dependency updates, which are triggered when a vulnerability is disclosed.
Although there are many studies that analyze library dependency networks, especially for npm and Maven, there are no studies analyzing the library dependency networks of CocoaPods, Carthage and Swift Package Manager (Swift PM). These three package managers are used when developing applications in Swift, such as iOS, Mac OS or Watch OS applications. In the following, we refer to the combined ecosystems of CocoaPods, Carthage and Swift PM as the Swift ecosystem. It is important to note that this ecosystem also contains libraries written in other languages (such as Objective-C, C, C++). Additionally, CocoaPods and Carthage can also be used in applications written in Objective-C. In the following, when referring to library dependency networks we mean the library dependency networks of the Swift ecosystem unless specified differently.
Rahkema et al. [5] published a tool for finding publicly reported vulnerabilities in Swift application dependencies. The evaluation of the tool showed that there are indeed open source apps written in Swift that depend on library versions with publicly reported vulnerabilities. To better understand the magnitude of the use of vulnerable library versions, it would be necessary to analyse the spread of vulnerabilities in the entire library dependency network. No such studies have been conducted so far for the Swift ecosystem. In the following, we analyze how publicly reported vulnerabilities spread through the library dependency networks of the Swift ecosystem.
In our study, we investigate three research questions, focusing on (1) how the Swift ecosystem is affected by publicly reported vulnerabilities, (2) how risks from these vulnerabilities could be mitigated via dependency upgrades, and (3) if publicly available vulnerability information would allow more precise analysis of vulnerable dependencies.
The rest of the article is structured as follows. In Section II, we summarize related work. In Section III, we explain some background on the studied package managers, vulnerabilities and the dataset used. In Section IV, we describe the research questions and how we analyzed the dataset for each research question. In Section V, we present answers to the three research questions and Section VI discusses these answers. In Section VII, we describe the threats to validity. In Section VIII, we conclude the paper.
## II Related Work
### _Vulnerabilities in Library Dependency Networks_
Zerouali et al. [3] studied how long it takes for vulnerabilities in libraries from npm and RubyGems to be fixed, how vulnerabilities spread through the library dependency network and if vulnerable libraries are updated. They matched vulnerability data from Snyk to npm and RubyGems libraries and found that more than 15% of latest library versions are directly dependent on vulnerable libraries. Additionally, dependencies to vulnerable libraries affected 42.1% of npm and 39% of RubyGems libraries. They found that one third of vulnerable dependencies could be fixed by updating the vulnerable dependency version.
Dusing et al. [4] matched vulnerabilities from Snyk to libraries from the Maven, NuGet, and npm library dependency networks. They then analyzed how vulnerabilities in direct and transitive dependencies affect different library dependency networks. They found that only 1% of libraries in NuGet and 8% of libraries in npm are affected by vulnerable dependencies, whereas 29% of libraries served through Maven have dependencies to vulnerable library versions. They also studied how long it takes for libraries to update their vulnerable dependencies after vulnerability disclosure and found that at least some libraries are probably using automated tools that follow vulnerability databases and update all vulnerable dependencies automatically.
Li et al. [6] analyzed library dependency networks of Java projects from Maven and GitHub. They matched vulnerability data from the National Vulnerability Database (NVD) to these Java projects and found 503 vulnerabilities matching 174 Maven projects and 3326 vulnerabilities matching 840 GitHub projects. They observed libraries with vulnerable dependencies from 2019 to 2020 and found that only 5% of vulnerable dependencies were fixed during this time frame. Prana et al. [7] analysed vulnerabilities in library dependencies for Java, Python and Ruby projects. They found that most vulnerabilities persisted through their one year long observation period.
Zimmermann et al. [8] studied security risks in the npm library dependency network. They found, that when installing an average npm library the user implicitly trusts 80 dependent libraries. When analyzing publicly reported vulnerabilities from Snyk, they found, that up to 40% of all libraries have (direct or transitive) vulnerable dependencies. Alfadel et al. [9] analyzed the use of vulnerable npm dependencies in Node.js applications. They found, that although 67.9% of examined applications depended directly on vulnerable libraries, 94.9% of these vulnerabilities were not known at the time.
Our study is the first study to analyse vulnerability propagation in the Swift ecosystem.
### _Vulnerability Reachability Analysis_
Tools have been implemented for multiple languages that allow more detailed analysis of vulnerable dependencies. Ponta et al. [10] implemented Eclipse Steady, which analyses whether vulnerable code in a dependency is actually called. The analysis relies on fix commits, which provided more accurate results than tools that rely only on vulnerability metadata such as OWASP Dependency Check.
Bhandari et al. [11] built a dataset containing vulnerable files and methods. Their analysis relied on the fixing commit being available on NVD.
Hommersom et al. [12] analysed how vulnerability fix commits can be found given public vulnerability information. They built a commit ranking system based on metrics such as time distance, commit messages and vulnerability description and others. They showed that detailed public vulnerability information can be successfully used to determine fixing commits, especially if the vulnerability description includes affected files, the patching commit or the commit message of the fixing commit refers to the CVE.
In our study we analyse how often detailed information on the vulnerability location can be extracted from NVD, which would allow for a more detailed analysis of the vulnerable dependency.
## III Background
In this section we describe the three package managers used in iOS development, how dependencies are declared, the available vulnerability information and the dataset used.
### _Package Managers_
There are three package managers that are used in iOS development: CocoaPods, Carthage and Swift Package Manager (Swift PM).
**CocoaPods** is the first of the three package managers; it was released in 2011. CocoaPods has a centralized repository of libraries that anyone can add their libraries to. CocoaPods is easy to use, but somewhat heavyweight, as it forces its users to use a project file generated by the package manager.
Footnote 1: [https://cocoapods.org](https://cocoapods.org)
**Carthage** is the second oldest package manager in the Swift ecosystem and was released in 2014. Carthage has no centralized list of libraries and is very lightweight. Developers using Carthage use the package manager to fetch and compile libraries, but the libraries need to be added to the projects manually.
Footnote 2: [https://github.com/Carthage/Carthage](https://github.com/Carthage/Carthage)
**Swift Package Manager** is the latest and official package manager for the Swift language, released in 2017. Swift PM also works as a project configuration file. Similarly to Carthage, Swift PM does not have a centralized list of libraries. When Swift PM was first released it only worked with command line applications. Since 2019 support for Xcode and iOS has been added, making it the go-to package manager in iOS development.
Footnote 3: [https://www.swift.org/package-manager/](https://www.swift.org/package-manager/)
### _Dependency Declarations_
For each of the package managers, dependencies are declared in a package manager manifest file. Once the package manager version resolution is run, a resolution file is created that stores the actual library versions installed. The three package managers use different ways of declaring library dependencies, but work similarly in principle [13].
For example, for CocoaPods dependencies are declared in a Podfile. If Library C wanted to declare a dependency to Library B the Podfile would look as follows:
```
pod 'LibraryB'
```
If Library B itself had a dependency to Library A, the resolution file Podfile.lock would include the following after CocoaPods installed the declared dependency:
```
PODS:
  - LibraryB (version1):
    - LibraryA
  - LibraryA (version1)
```
This package manager resolution file indicates that version 1 of Library B was installed and since Library B depends on Library A, version 1 of Library A was also installed.
There are now two dependency chains at time T3, as illustrated in Figure 1:
* ABC1: A:v1 \(\leftarrow\) B:v1 \(\leftarrow\) C:v1
* AB1: A:v1 \(\leftarrow\) B:v1
In Figure 1, version 1 of Library A has a publicly reported vulnerability. This means that dependency chains ABC1 and AB1 both represent dependencies to vulnerable library versions. At time T4, Library A releases a fix for the vulnerability: version 2 of Library A. At time T5, Library B releases a new version 2 that upgrades its library dependency to the not vulnerable version of Library A. At time T6, Library C releases a new version but does not upgrade its library dependency. At time T7, Library C releases another version that does update its library dependency to Library B version 2. At time T7, the following dependency chains exist in our dataset:
* ABC1: A:v1 \(\leftarrow\) B:v1 \(\leftarrow\) C:v1
* AB1: A:v1 \(\leftarrow\) B:v1
* AB2: A:v2 \(\leftarrow\) B:v2
* ABC2: A:v1 \(\leftarrow\) B:v1 \(\leftarrow\) C:v2
* ABC3: A:v2 \(\leftarrow\) B:v2 \(\leftarrow\) C:v3
Dependency chains ABC1, AB1 and ABC2 represent vulnerable dependencies.
Fig. 1: Illustration of dependency chains in a library dependency network with three libraries A, B and C.
### _Vulnerabilities_
If a vulnerability is discovered in a software system, it can either be fixed silently or the vulnerability can be reported in public vulnerability databases. Such databases make it possible for users to monitor software systems they are using. One such vulnerability database is the National Vulnerability Database (NVD).
Footnote 4: [https://nvd.nist.gov](https://nvd.nist.gov)
NVD provides public information on vulnerabilities, such as a vulnerability description, severity levels, affected software, and others. Useful links are provided for some vulnerabilities, for example, to the security advisory or to the fixing commit. The vulnerability database is accessible through an online interface, through an API, and through downloadable database snapshots.
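As a concrete illustration of programmatic access, the sketch below queries the public NVD REST API (version 2) for a single CVE record; the endpoint, request parameter and JSON keys follow the publicly documented API and are our assumption of how such data could be fetched, not a description of the tooling used in this study.
```
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id):
    """Fetch a single CVE record from the public NVD REST API (v2).

    Note: the endpoint layout and JSON keys reflect the publicly documented
    NVD API and may differ from the interface used in the original study.
    """
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    # The v2 schema wraps each record in a "vulnerabilities" list.
    return resp.json().get("vulnerabilities", [])

for record in fetch_cve("CVE-2021-44228"):   # the Log4Shell example from Section I
    cve = record.get("cve", {})
    descriptions = [d.get("value", "") for d in cve.get("descriptions", [])]
    print(cve.get("id"), descriptions[:1])
```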
### _Dataset Used_
In our analysis we used the Swift library dependency network dataset compiled by Rahkema et al. [14, 15] containing information on libraries from the three package managers used in iOS development. The dataset consists of nodes and relationships. In our analysis, we are interested in the following types of nodes:
* App (analysed library version)
* Library (library version)
* LibraryDependency (library dependency declaration from manifest file)
* Vulnerability (publicly reported vulnerability)
and the following relationships:
* [IS] \(\rightarrow\) (Library)
* [CHANGED_TO] \(\rightarrow\) (App)
* [LIBRARY_DEPENDS_ON] \(\rightarrow\) (Library)
* [DEPENDS_ON] \(\rightarrow\) (LibraryDefinition)
* [HAS_VULNERABILITY] \(\rightarrow\) (Vulnerability)
This dataset is a graph database, allowing for easy querying of node chains, which we use to find dependency chains between library versions. We give a more detailed description of the dataset in our technical report [15]. The dataset contains data on 60533 libraries, 572131 library versions and 23419 dependencies between libraries.
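To illustrate how dependency chains can be extracted from such a graph database, the sketch below runs a Cypher query through the official Neo4j Python driver. The node labels and the HAS_VULNERABILITY relationship follow the list above, but the property names (e.g. `name`), the direct DEPENDS_ON traversal between Library nodes (in the actual dataset the dependency passes through LibraryDependency nodes) and the connection details are simplifying assumptions.
```
from neo4j import GraphDatabase

# Connection details are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Variable-length dependency chains (here up to 6 hops) that end in a library
# version with a publicly reported vulnerability.
QUERY = """
MATCH (lv:Library)-[deps:DEPENDS_ON*1..6]->(vuln:Library)-[:HAS_VULNERABILITY]->(:Vulnerability)
RETURN lv.name AS dependent, vuln.name AS vulnerable, size(deps) AS chain_length
"""

with driver.session() as session:
    for record in session.run(QUERY):
        print(record["dependent"], "->", record["vulnerable"],
              "chain length:", record["chain_length"])
driver.close()
```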
## IV Method
In this section, we first present the research questions that guided our study. Then we describe what analyses we conducted to answer each of the research questions. The jupyter notebook containing the scripts used in our analyses can be found on GitHub5.
Footnote 5: [https://github.com/kristiinara/VulnerabilityPropagationAnalysis](https://github.com/kristiinara/VulnerabilityPropagationAnalysis)
### _Research Questions_
Our goal is a) to understand the scope of the library dependency network affected by vulnerabilities, b) to determine whether vulnerable dependencies could be effectively fixed via upgrading, and c) to assess whether there is enough public information available about these vulnerabilities such that the functionality of existing tools could be complemented with more detailed yet lightweight vulnerability analyses. Guided by this goal, we formulate three research questions.
* RQ1: How frequently do libraries have vulnerable dependencies?
* RQ2: How many vulnerabilities could be fixed via upgrading?
* RQ3: How precise could the vulnerability analysis be made given public information?
In the following, we explain the underlying rationale of each of the three research questions.
**RQ1: Libraries with Vulnerable Dependencies**
To better understand the risks imposed by vulnerabilities in the library dependency network, we ask how libraries in Carthage, CocoaPods and Swift PM are affected by vulnerabilities. To get started, it is necessary to investigate which libraries have publicly reported vulnerabilities. We are aware that the actual number of vulnerabilities will be higher, as not every vulnerability is publicly reported or even detected. Nevertheless, it is reasonable to look at publicly reported vulnerabilities instead of running a vulnerability scanner, to avoid the multitude of false positive results that these tools usually produce. Looking at publicly reported vulnerabilities we can be reasonably certain that these vulnerabilities are true positives and no manual double-checking is required. We expect to find publicly reported vulnerabilities in a certain amount of third-party libraries of the Swift ecosystem.
Yet, this is not sufficient. Since we expect vulnerabilities to spread through dependency chains, we analyse the library dependency network, i.e., the occurrences and lengths of dependency chains along which vulnerabilities might propagate.
In addition, we refine our analysis by including information about the predominantly used project language of the vulnerable library and the severity level of the vulnerability. Libraries in the Swift ecosystem can be written in different languages. The most common languages are Swift, Objective-C, C and C++ [16], with Swift and Objective-C covering the vast majority of the libraries. We expect the vulnerable libraries to have a similar distribution of languages as the rest of the ecosystem.
**RQ2: Vulnerable Dependencies Fixed via Upgrading**
The simplest way to fix a dependency to a vulnerable library version is to upgrade to a library version where the vulnerability is fixed, if such a fix exists. Given that developers are wary of upgrading their library dependencies [3, 6, 8] our hypothesis is that, as in other programming language ecosystems, many dependencies to vulnerable libraries remain unchanged although an easy fix is possible via upgrading the library dependency version. To check our hypothesis, we analyse how often vulnerable dependencies could have been fixed by upgrading the library dependency version.
**RQ3: Precision of Public Vulnerability Information**
Tools exist that can find dependencies to vulnerable libraries when using CocoaPods, Carthage or Swift Package Manager [14]. There are, however, no tools for Swift and Objective-C that could perform more detailed analyses and determine if a vulnerability from a library dependency really affects the program. For such analyses it would either be necessary to have data on where the vulnerability is located in the library or an extensive analysis of the vulnerable library would be needed. Our goal is to check if information about the location of a vulnerability in the code is publicly available for the reported vulnerabilities in the Swift ecosystem.
### _Data Analysis_
In this section we describe what we do to answer the three research questions.
#### IV-B1 RQ1: Libraries with Vulnerable Dependencies
To understand how vulnerable library versions may impact other libraries, we first find all library versions that are connected to vulnerable library versions through DEPENDS_ON chains. A dependency chain of length zero implies that the library version itself is vulnerable. A dependency chain of length one implies that the library version has a direct dependency to a vulnerable library version. Dependency chains longer than one imply that the library version has a transitive dependency to a vulnerable library version.
For each library version that depends on a vulnerable library, we find the shortest path to a vulnerable library version. We do this because we assume that the risk of using vulnerable code is higher when the dependency chain is shorter. We then report the number of libraries for each dependency chain length by filtering out duplicate library names. The resulting numbers indicate how many libraries have publicly reported vulnerabilities and how many libraries depend on vulnerable libraries (either through direct or transitive dependencies).
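A minimal sketch of this counting step is shown below, assuming the dependency edges have already been exported as a mapping from each library version to the set of versions it directly depends on; the data layout and names are illustrative and not taken from the actual analysis scripts.
```
from collections import deque

def shortest_level_to_vulnerable(depends_on, vulnerable):
    """Breadth-first search on reversed dependency edges.

    depends_on: dict mapping a library version to the set of versions it
                directly depends on.
    vulnerable: set of library versions with a publicly reported vulnerability.
    Returns a dict: library version -> length of the shortest dependency chain
    to a vulnerable version (0 means the version itself is vulnerable).
    """
    # Reverse the edges so we can walk from vulnerable versions to dependents.
    dependents = {}
    for src, targets in depends_on.items():
        for tgt in targets:
            dependents.setdefault(tgt, set()).add(src)

    level = {v: 0 for v in vulnerable}
    queue = deque(vulnerable)
    while queue:
        current = queue.popleft()
        for dep in dependents.get(current, ()):
            if dep not in level:          # keep only the shortest chain
                level[dep] = level[current] + 1
                queue.append(dep)
    return level

# toy example: C -> B -> A, where A is vulnerable
levels = shortest_level_to_vulnerable(
    {"C:v1": {"B:v1"}, "B:v1": {"A:v1"}}, {"A:v1"}
)
print(levels)   # {'A:v1': 0, 'B:v1': 1, 'C:v1': 2}
```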
Additionally, we analyse how the language of the vulnerable library and the severity level of the vulnerability is associated with how far the vulnerabilities spread in the library dependency network. We gather library dependency chains for libraries that depend on vulnerable library versions and plot the number of affected libraries for each dependency level. For libraries with multiple dependencies to vulnerable libraries, we count the library on each dependency level where it depends on a vulnerable library version. We first plot the dependency level graph distinguished by the programming language and then by the severity level of the vulnerability.
The language of the library is determined by querying the main project language from GitHub.
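This lookup can be done with the public GitHub REST API (`GET /repos/{owner}/{repo}`), whose response contains a `language` field with the predominant project language; the repository used in the example below is only illustrative.
```
import requests

def main_language(owner, repo, token=None):
    """Return the predominant language GitHub reports for a repository."""
    headers = {"Authorization": f"token {token}"} if token else {}
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}",
                        headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json().get("language")   # e.g. "Swift", "Objective-C", "C"

print(main_language("Alamofire", "Alamofire"))   # a well-known Swift library
```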
#### IV-B2 RQ2: Vulnerable Dependencies Fixed via Upgrading
Figure 1 in Section III showed five dependency chains of which three corresponded to vulnerable dependencies: ABC1, AB1 and ABC2. For dependency chain ABC2, Library C could have resolved the vulnerable dependency by upgrading the dependency to Library B from version 1 to version 2. For RQ2, we will study how often such chains to vulnerable dependencies could have been fixed via upgrading the dependency version.
For this analysis we first filter out library dependencies where the package manager resolution file was missing. These dependency versions were calculated based on the manifest file and are therefore not suitable for upgradeability analysis.
To analyse how vulnerable dependencies could be fixed via upgrading, we first identify all dependency chains to vulnerable library versions. For each of these chains we then check if a newer version of the direct dependency (like B:v2 for dependency chain ABC2 in Figure 1) exists that is not dependent on the vulnerable library version. The process of finding the newer version of the direct dependency takes into account release times for each of the library versions such that the release time of the dependency has to always be before the release time of the dependent. This means that in Figure 1 it would have been possible to upgrade the dependency chain ABC2, but not the dependency chain ABC1 because B:v2 was released after C:v1.
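The core of this upgradability check can be sketched as follows; the data structures (per-library version lists with release dates and a flag marking whether a version still reaches the vulnerable library) are simplified stand-ins for the graph queries actually used, and the function and field names are ours.
```
def fixable_by_upgrade(direct_dep, dep_release_date, versions):
    """Check whether a vulnerable dependency chain could be fixed by upgrading
    its first (direct) dependency.

    direct_dep:       name of the directly depended-on library (e.g. "LibraryB")
    dep_release_date: release date of the dependent library version; only
                      dependency versions released before this date count
    versions:         dict mapping library name to a list of dicts with keys
                      "version", "released", "reaches_vulnerable"
    """
    candidates = versions.get(direct_dep, [])
    return any(
        v["released"] < dep_release_date and not v["reaches_vulnerable"]
        for v in candidates
    )

# toy data mirroring Figure 1: C:v2 was released after B:v2 (fixable),
# while C:v1 was released before B:v2 existed (not fixable).
library_b = [
    {"version": "v1", "released": 1, "reaches_vulnerable": True},
    {"version": "v2", "released": 5, "reaches_vulnerable": False},
]
print(fixable_by_upgrade("LibraryB", 6, {"LibraryB": library_b}))  # True  (C:v2)
print(fixable_by_upgrade("LibraryB", 3, {"LibraryB": library_b}))  # False (C:v1)
```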
For each dependency level we plot the number of dependency chains that could have been fixed via an upgrade and the number that could not have been fixed via an upgrade. Additionally we count how many library dependencies could have been fixed for each vulnerability severity level and vulnerable library programming language.
The above analysis shows the upgradeability of vulnerable dependencies over the whole time frame of the dataset. To understand the potential impact of upgrades to the most recent state of the library dependency network, we also analyse for how many of the latest versions of libraries vulnerable dependencies could have been fixed via upgrading.
#### IV-B3 RQ3: Precision of Public Vulnerability Information
For each vulnerability, we check the public vulnerability description on NVD and record if it contains information about the class or method that contains the vulnerability. Additionally we check, if available, the patch link to see if the patch of the vulnerability reveals where the vulnerability was fixed in the code.
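A lightweight way to automate part of this check is to scan vulnerability descriptions for tokens that look like method or class names, as sketched below; the regular expressions are illustrative heuristics, not the manual procedure applied in the study.
```
import re

# Heuristic patterns: "someMethod()" style calls and CamelCase type names.
METHOD_PATTERN = re.compile(r"\b[a-zA-Z_]\w*\s*\(\)")
CLASS_PATTERN = re.compile(r"\b[A-Z][a-z]+(?:[A-Z][a-z]+)+\b")

def code_location_hints(description):
    """Return candidate method and class names mentioned in a CVE description."""
    methods = set(METHOD_PATTERN.findall(description))
    classes = set(CLASS_PATTERN.findall(description))
    return methods, classes

desc = ("A buffer overflow in ImageDecoder when calling decodeFrame() "
        "allows remote attackers to crash the application.")
print(code_location_hints(desc))
```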
## V Results
In the following, we present the answers to our research questions one by one.
### _RQ1: Libraries with Vulnerable Dependencies_
We found a total of 149 vulnerabilities in 61222 libraries. This corresponds to 24.3 vulnerabilities per 10000 libraries. We found that only 5.9% of connected libraries had dependencies to vulnerable library versions. For 3% of connected libraries, even the latest version of the library had a dependency to a vulnerable library version.
In the following, we present in more detail our results on how vulnerabilities propagate through the Swift library dependency network. Furthermore, we show how library project language and vulnerability severity are associated with vulnerability propagation.
Figure 2 shows how publicly reported vulnerabilities propagate through the Swift library dependency network. There are 41 libraries with publicly reported vulnerabilities (dependency tree level 0). Of those libraries only 12 have dependents. There are 202 libraries without a publicly reported vulnerability that have a direct dependency to at least one vulnerable library version (dependency tree level 1). A considerable number of libraries are added on level two (83) and level three (126). Libraries with dependencies to multiple vulnerable library versions are counted at the lowest dependency tree level where a dependency to a vulnerable library version exists. In total, 415 libraries have dependencies to vulnerable library versions, and 456 libraries are affected by publicly reported vulnerabilities in total, if we include libraries that are vulnerable themselves. Moreover, we can say that in case a library has at all a (possibly indirect) dependency to a vulnerable library version, then the longest chain to the first vulnerable library version has at most six levels in the dependency tree.
Table I shows the results of the analysis that explored whether the programming language in which a library is written has an influence on how vulnerabilities spread through the dependency network. We determined the programming language of each vulnerable library based on data from GitHub on the main project language of the library. Table I indicates that most vulnerabilities originate in libraries written in C (88) and C++ (24). Libraries written in Swift and Objective-C contribute only 19 and 3 vulnerabilities, respectively.
However, the highest impact on the Swift ecosystem comes from vulnerabilities in libraries written in Swift and Objective-C. Table I shows that vulnerable libraries written in Swift and Objective-C have significantly more dependents (98 for Swift and 313 for Objective-C) than projects written in other programming languages (56 and 14 for C and C++). Figure 3 shows how far vulnerabilities from libraries written in the different languages spread across the dependency network. Although there are some libraries that have dependencies to libraries written in C and C++, there are significantly longer dependency chains to libraries written in Swift and, especially, to libraries written in Objective-C. In the case of Objective-C dependency chains to vulnerable library versions can have up to 14 levels of indirection. Differently to Figure 2, libraries in Figure 3 are counted on each level of indirection they occur.
In the following, we present our results on how vulnerabilities of different severity propagate throughout the library dependency network. Vulnerabilities have four levels of severity: CRITICAL, HIGH, MEDIUM and LOW. Table II provides information on the distribution of severity levels of the vulnerabilities found in the Swift ecosystem, as well as dependent libraries affected by these vulnerabilities. Most vulnerabilities (80) are of level HIGH, 31 and 37 vulnerabilities are CRITICAL and MEDIUM, respectively. Only one vulnerability has the level LOW. Most libraries (353) are affected by MEDIUM level vulnerabilities through dependencies.
Figure 4 shows how vulnerabilities with different severity levels propagate through the library dependency network. Vulnerabilities with severity level MEDIUM propagate the furthest through the dependency network. However, vulnerabilities with severity level CRITICAL and HIGH can both be observed with levels of indirection in the dependency tree up to Level 5.
Fig. 3: Number of libraries affected by vulnerabilities for each dependency level classified by main project language.
Fig. 2: Cumulative number of libraries affected by vulnerabilities for each dependency level classified by the shortest dependency level to a vulnerable library version for each library.
### _RQ2: Vulnerable Dependencies Fixed via Upgrading_
To answer RQ2, we analyse how many vulnerable dependencies could have been fixed via a dependency upgrade at the time a library version was released. For the upgradability analysis we require that the version data for direct dependencies originates from package manager resolution files and that the direct dependency is to a library included in the set of libraries available for our analysis. After filtering out dependencies that did not meet our criteria 341 out of 415 libraries with vulnerable dependencies remain.
First, we analyse how updating direct dependencies would have fixed a vulnerable dependency for different levels of indirection. Figure 5 shows that 27% (498 of 1833 in total) of vulnerable direct dependencies could have been fixed via an upgrade. Furthermore upgrading direct dependencies would also have fixed 16% (244 of 1555 in total) of second level vulnerable dependencies and 64% (694 of 1082 in total) of third level vulnerable dependencies. Note that the levels of dependency tree in Figure 5 are greater than 0 and less than 10. They must be greater than 0 because there are no dependencies at level 0. There is no data beyond level 9 as the data on those library chains happened to not be compatible with the upgradability analysis.
Next, we analyse how many vulnerable dependencies could have been fixed via upgrading depending on the severity of the vulnerability. Table III shows that over all dependency chains the probability of fixing the vulnerability via a dependency upgrade is around 30%. However, if we look at the latest version of each library, the percentage of fixing dependencies to critical vulnerabilities via upgrading is 71%, fixing dependencies to vulnerabilities of level HIGH is 52% and fixing dependencies to vulnerabilities of level MEDIUM is 39%.
Finally, we explore whether there are differences in the percentages of fixing vulnerabilities via upgrading between the project languages of the vulnerable libraries. Table IV shows percentages of vulnerable dependencies being fixed via upgrading for each of the four most prominent languages. Over all library versions, the probability of fixing a vulnerable dependency is around 30%, with the exception of C++ where the probability is considerably smaller. For the latest versions of each library the probability of fixing a vulnerable dependency via an upgrade is highest for C (67%) and Swift (60%), and lowest for Objective-C (38%) and C++ (33%).
Looking at the success rates of fixing a vulnerable dependency via upgrading from the point of view of the different vulnerabilities, we see that for 25% of vulnerabilities the success of upgrading is over 89% and for another 25% of vulnerabilities the failure of fixing the dependency via an upgrade is over 94%. These numbers indicate that fixing a vulnerable dependency via upgrading is very successful for some of the vulnerabilities and not possible for others.
Fig. 4: Number of libraries affected by vulnerabilities for each dependency level classified by severity level of the vulnerability.
Fig. 5: Number of dependency chains to vulnerable library versions that could be fixed (green) and not fixed (red) by an upgrade of the first dependency in the dependency chain. The numbers are shown for each dependency level.
### _RQ3: Precision of Public Vulnerability Information_
Our answer to RQ3 is based on checking whether the public descriptions of vulnerabilities in the Swift ecosystem include information on the class or method that contains the described vulnerability. This kind of detailed information could be used to fine tune the analysis and detection of vulnerable dependencies by identifying the piece of code that contains the vulnerability. In situations where a library is dependent on a vulnerable library version it might be good to know whether the code of the vulnerable library is used by the dependent library. Analysing the description of each vulnerability and including data from patch links showed that most vulnerability descriptions do not include detailed enough information to determine the vulnerable class or method. Table V shows that not a single vulnerability in a project written in Swift specified both the vulnerable method and the vulnerable class. Similarly, very little information is available about vulnerabilities in projects written in Objective-C. There is more information on vulnerabilities in projects written in C and C++ but these vulnerabilities also affect significantly less libraries in the Swift ecosystem.
## VI Discussion
### _RQ1: Libraries with Vulnerable Dependencies_
The total amount of 149 vulnerabilities in 61222 libraries in the Swift ecosystem corresponds to 24.3 vulnerabilities per 10000 libraries. This ratio is much higher than that for npm where Zimmermann et al. found the ratio to be around 8 in 2018 [8]. The difference between the two ecosystems could be due to the high number of very small libraries in npm. In contrast, Li et al. found the ratio to be 113.5 for Java projects [6]. The Java ecosystem is older and might have larger libraries, but we do not have a definite reason for the big difference.
Our results show that only 5.9% of connected libraries have dependencies to vulnerable library versions. For 3% of connected libraries, its latest version is still dependent on a vulnerable library version. In contrast, Dusing et al [4] found that 9% of libraries in npm had direct dependencies to vulnerable libraries. Zimmermann et al. [8] found that 40% of npm projects they studied depended (directly or transitively) on vulnerable libraries. Alfadel et al. [9] found that 67% of all npm applications had at least one vulnerable direct dependency. Zerouali et al. [3] found that for more than 15% of npm and RubyGems libraries the latest version of the library is directly dependent on a vulnerable library version. Additionally, they found that for 42.1% of all npm libraries and for 30% of RubyGems libraries the latest version of the library had a transitive dependency on a vulnerable library version. Therefore, in comparison to other ecosystems, the Swift ecosystem is considerably less affected by vulnerable library dependencies. A possible reason could be that libraries in the Swift ecosystem have less dependencies on average than libraries in other ecosystems, such as npm. Another possibility is that there are less vulnerabilities reported for the libraries in the Swift ecosystem but our analysis shows that this is not true, at least in comparison to npm.
Looking at the severity of the vulnerabilities, the vulnerabilities with severity level MEDIUM spread the most in the library dependency network. A possible explanation is that vulnerabilities with a MEDIUM severity level are not taken as seriously as vulnerabilities with higher severity levels and, therefore, are able to exist longer and spread further in the library dependency network.
Most vulnerabilities in the Swift ecosystem originate from libraries written in C and C++. When looking at the impact on the whole library dependency network, however, vulnerabilities in libraries written in Swift and Objective-C spread considerably farther. Libraries written in Swift and Objective-C have more dependents and therefore a higher impact on the overall library dependency network. Dominguez-Alvarez et al. [16] found that most libraries available through the CocoaPods package manager are written in Swift and Objective-C. It might be that libraries written in C and C++ are very specialized and therefore not used by many other libraries.
### _RQ2: Vulnerable Dependencies Fixed via Upgrading_
Overall, around 30% of vulnerable dependencies could have been fixed via an update of direct dependencies. Surprisingly, there is not much difference between vulnerability severity levels when looking at upgradability over all library versions. When looking at the latest version of each library, however, our results show that vulnerabilities with severity level CRITICAL could have been fixed in 70% of the cases. This is a strong indication for developers to keep up with library dependency upgrades as a means to avoid dependence on vulnerable libraries. If upgrading to each new version is not possible, developers should at least check if their dependencies have publicly reported vulnerabilities, for example using automated tooling such as [5].
### _RQ3: Precision of Public Vulnerability Information_
Currently, tools exist that can be used to check for vulnerable dependencies when using CocoaPods, Carthage or Swift Package Manager [5]. There are, however, no tools for the Swift ecosystem that could check if a vulnerability in a dependency really affects the developed application. Existing tools could be extended if detailed information about the exact code location of a vulnerability was available. Our results suggest that NVD does not include enough detailed information on vulnerabilities in libraries written in Swift and Objective-C. Therefore, the best solution for developers is to upgrade to a version of the library where the vulnerability has been fixed - if such a version is available.
## VII Threats to Validity
In this section we discuss the potential limitations of our analyses, focusing on construct, internal, and external validity threats.
### _Construct Validity_
In our analysis, we assume that every vulnerable dependency implies that the dependent library is (indirectly) vulnerable as well. However, the presence of vulnerable dependencies does not necessarily imply that the library is actually vulnerable. In a preliminary study [17], Zapata et al. analysed dependencies of 60 projects using the npm package manager and showed that most projects with vulnerable dependencies do not actually use the vulnerable code. Hejderup et al. [18] analysed libraries written in Rust and showed that not all resolved dependencies are really called, which means that dependencies to vulnerable libraries might not necessarily affect the library itself. Given that our results show that relatively few libraries depend on vulnerable libraries in the library dependency networks of the Swift ecosystem, a more detailed analysis would not affect this conclusion. A detailed analysis of call graphs might reduce the percentage of libraries with dependencies to vulnerable libraries even further. However, as we show in our answer to RQ3, the data needed for such an analysis is often not available.
Our analyses using information about the programming languages in which the vulnerable libraries are written depend on the information provided by GitHub about the main programming language. A library could, however, be written in several programming languages and the vulnerability itself could be located in code that was written in a programming language different from the main programming language. To understand the level of correctness of the information provided by GitHub in the context of our analysis, for those vulnerabilities where class/file level data was available, we also checked the language of the class/file and compared it to the main programming language indicated by GitHub. In only two of the 92 cases where such information is available, the vulnerability is located in a file written in a different language from the main programming language of the library. Neither of these two cases occurs in libraries classified as written in Objective-C or Swift.
### _Internal Validity_
The analysed dependency data relies on package manager resolution files. Not every library that uses a package manager includes the corresponding resolution files in the repository. For such repositories the package manager manifest files are used to resolve the dependencies following Rahkema et al. [14].
Building the dependency graph by only declaring the exact version of a dependency means that transitive dependencies could in practice be resolved differently. When a transitive dependency is resolved at a later date then it is possible that the actual version of the transitive dependency would not match the version in the dataset. The data on the version ranges does, however, exist in the dataset and could be checked as future work.
For the vulnerability data we rely on data from NVD. This means that we need to trust that the data is correct. It is possible that there are incorrect entries if they have not been checked by third parties. We do, however, believe the data to be reliable as it is an official and public database which is continuously reviewed and maintained.
### _External Validity_
We claim that our results hold for all open source libraries in the library dependency networks of the Swift ecosystem, i.e. all open source libraries that are available through CocoaPods, Carthage and Swift PM. CocoaPods, Carthage and Swift PM are the only package managers used in the Swift ecosystem.
The dataset we analyzed represents the Swift ecosystem in the period from September 2011 to December 2021. We make no claims regarding how the vulnerability propagation might evolve in the future.
The vulnerability data in the dataset is based on public data from the NVD. When using other vulnerability databases, for example Snyk, the results might be different. Vulnerabilities in NVD are publicly reported, which adds to the trustworthiness of the data.
## VIII Conclusion
We studied the vulnerability propagation in the library dependency network controlled through the package managers used in iOS development, and analysed how developers could protect against vulnerabilities from third party libraries.
We found that the Swift ecosystem is affected less by public vulnerabilities than other ecosystems such as npm. The spread of vulnerabilities is smaller, probably due to less dependencies on average. We showed that upgrading direct dependencies is an effective way to fix vulnerable dependencies and, as long as no tools exist that correctly analyse whether the vulnerable code is actually called, the best option for developers is to regularly upgrade library dependency versions. Currently, no tools exist for Swift and Objective-C that are able to perform such a detailed analysis, and we showed that there is not enough public data available to implement such tools easily.
|
2308.07861
|
Profound optical flares from the relativistic jets of active galactic
nuclei
|
Intense outbursts in blazars are among the most extreme phenomena seen in
extragalactic objects. Studying these events can offer important information
about the energetic physical processes taking place within the innermost
regions of blazars, which are beyond the resolution of current instruments.
This work presents some of the largest and most rapid flares detected in the
optical band from the sources 3C 279, OJ 49, S4 0954+658, Ton 599, and PG
1553+113, which are mostly TeV blazars. The source flux increased by nearly ten
times within a few weeks, indicating the violent nature of these events. Such
energetic events might originate from magnetohydrodynamical instabilities near
the base of the jets, triggered by processes modulated by the magnetic field of
the accretion disc. We explain the emergence of flares owing to the injection
of high-energy particles by the shock wave passing along the relativistic jets.
Alternatively, the flares may have also arisen due to geometrical effects
related to the jets. We discuss both source-intrinsic and source-extrinsic
scenarios as possible explanations for the observed large amplitude flux
changes.
|
Gopal Bhatta, Staszek Zola, M. Drozdz, Daniel Reichart, Joshua Haislip, Vladimir Kouprianov, Katsura Matsumoto, Eda Sonbas, D. Caton, Urszula Pajdosz-Śmierciak, A. Simon, J. Provencal, Dariusz Góra, Grzegorz Stachowski
|
2023-08-15T16:27:20Z
|
http://arxiv.org/abs/2308.07861v1
|
# Profound optical flares from the relativistic jets of active galactic nuclei
###### Abstract:
Intense outbursts in blazars are among the most extreme phenomena seen in extragalactic objects. Studying these events can offer important information about the energetic physical processes taking place within the innermost regions of blazars, which are beyond the resolution of current instruments. This work presents some of the largest and most rapid flares detected in the optical band from the sources 3C 279, OJ 49, S4 0954+658, Ton 599, and PG 1553+113, which are mostly TeV blazars. The source flux increased by nearly ten times within a few weeks, indicating the violent nature of these events. Such energetic events might originate from magnetohydrodynamical instabilities near the base of the jets, triggered by processes modulated by the magnetic field of the accretion disc. We explain the emergence of flares owing to the injection of high-energy particles by the shock wave passing along the relativistic jets. Alternatively, the flares may have also arisen due to geometrical effects related to the jets. We discuss both source-intrinsic and source-extrinsic scenarios as possible explanations for the observed large amplitude flux changes.
Introduction
Blazars, a sub-class of radio-loud active galactic nuclei (AGN), exhibit intense outbursts and extreme phenomena in extragalactic objects. They feature relativistic jets closely aligned with the line of sight, which serve as sources of highly variable non-thermal continuum emission that is Doppler boosted [1]. Blazars can be divided into two sub-classes: flat-spectrum radio quasars (FSRQ), which exhibit broad emission lines, and BL Lacertae (BL Lac) objects, which show weak or no emission lines. Despite their differences, both types are visible in the TeV energy range and are the predominant discrete gamma-ray sources in the sky. The broadband non-thermal spectrum of blazars extends from radio to TeV gamma-rays and consists of low- and high-energy components. The low-energy component arises from synchrotron emission by relativistic plasma in the magnetized jets. Various models, including leptonic and hadronic scenarios, have been proposed to explain the origin of the high-energy emission. Leptonic models involve ultra-relativistic electrons up-scattering low-energy photons via the inverse Compton mechanism. In the synchrotron self-Compton model (SSC; e.g. [2, 3]), synchrotron photons produced by electrons are inverse-Compton scattered by co-spatial leptons. External Compton (EC) models suggest contributions from AGN components for inverse-Compton scattering. On the other hand, hadronic models involve high-energy protons producing the high-energy spectral component through proton-synchrotron and/or photon-initiated cascades [see 4, 5, 6]. While both models can explain the emission, hadronic models require a stronger magnetic field to cool more massive protons [7, 8, 9]. Additionally, proton-synchrotron models require the acceleration of ultra-high-energy cosmic rays (UHECRs) in blazar jets, resulting in gamma-ray emission and neutrino production.
Multiple wavelength (MWL) observations reveal that blazars emit flux that is variable across a wide range of timescales, e.g. from decades to a few minutes [see 10, 11, 12, 13, 14]. While the general statistical nature of variability can be approximated by a single power-law spectral density (see optical and \(\gamma\)-ray [11, 15]), blazars often display complex variability patterns, such as red-noise-like variability combined with quasi-periodic oscillations (QPO) and occasional sharp flux rises. These distinct flaring events, lasting from weeks to months, indicate extreme physical conditions in the central engine and jets, driving efficient particle acceleration and cooling processes. Gamma-ray flares in blazars are frequently accompanied by rotation of the optical polarization angle and ejection of superluminal radio knots [16]. Flaring events are associated with disturbances propagating along the jet, which lead to particle energization through shock waves in weakly magnetized jets, or through magnetic reconnection and shock wave re-collimation in highly magnetized jets [17, 18, 19, 20].
In this proceeding, we present a modeling approach for analyzing the flaring observations of five blazars, which were obtained through long-term optical monitoring of AGN conducted by our research group. In Section 2, we provide a detailed account of the observations of the source sample and the relevant data processing procedures for the optical data. Then, in Section 3, we discuss the details of the models used to explain the flares observed in the light curves. The outcomes of the results and potential implications are summarized in Section 4.
## 2 Observations and data processing
To constrain the variability properties of AGN, several telescopes have been employed to monitor a sample of quasars over an extended duration through the Skynet Robotic Telescope Network [21]. Optical band observations of the sources 3C 279, OJ 49, S4 0954+658, Ton 599, and PG 1553+113 were acquired through the network. Furthermore, additional data were collected from telescopes in Turkey, Japan, and Poland. The study primarily used the wide band R filter (Bessell prescription), with longer runs conducted at the Krakow and Mt. Suhora sites.
The data from the Skynet consisted of scientific images taken each night, which were then processed by the network pipeline to remove bias, dark, and flatfield effects. Raw images from other sites were accompanied by calibration frames. The standard procedure of calibration, including bias, dark, and flatfield correction using the IRAF package, was applied to the raw images. Magnitude extraction was performed using aperture photometry with the CMunipack program, which employs the DAOPHOT algorithm. Differential magnitudes were derived by comparing the objects with selected comparison stars visible in the field of view of all telescopes. Moreover, the constancy of the comparison stars was verified using check stars chosen in a similar manner.
Table 1 presents the sample of sources with their classes, positions, and redshifts, along with the blazar source classification based on the synchrotron peak frequency. Table 2 lists the total observation duration, number of observations, and mean magnitude for each source.
## 3 Analysis and Modeling
The light curves presented in Figure 1 clearly demonstrate significant flux variability in the sources. This variability, quantified using fractional variability [22], is listed in the 6th column of Table 2. Similarly, a measure of the variability timescale, represented by the minimum variability
\begin{table}
\begin{tabular}{l|l|l|l|l|l} \hline Source name & Source class & R.A. (J2000) & Dec. (J2000) & Redshift (z) \\ \hline OJ 49 & BL Lac, LSP & \(08^{h}31^{m}48.88^{s}\) & \(+04^{d}29^{m}39.086^{s}\) & 0.17386 \\ S4 0954+658 & BL Lac, LSP & \(09^{h}58^{m}47.2^{s}\) & \(+65^{d}33^{m}55^{s}\) & 0.368 \\ TXS 1156+295 & FSRQ, LSP & \(11^{h}59^{m}032.07^{s}\) & \(+29^{d}14^{m}42.0^{s}\) & 0.729 \\
3C 279 & FSRQ, LSP & \(12^{h}56^{m}11.1665^{s}\) & \(-05^{d}47^{m}21.523^{s}\) & 0.536 \\ PG 1553+113 & BL Lac, HSP & \(15^{h}55^{m}43.044^{s}\) & \(+11^{d}1^{m}24.365^{s}\) & 0.36 \\ \hline \end{tabular}
\end{table}
Table 1: General properties of the sample of blazar targets
\begin{table}
\begin{tabular}{l|r|l|l|l|l|l} \hline Source name & Duration (d) & Npt. & mean mag & VA (mag) & Fvar (\%) & t\({}_{\rm var}\) (min.) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline OJ 49 & 164.28 & 620 & 16.28 & 2.75 & 66.23\(\pm\)0.24 & 38.24\(\pm\)11.80 \\ TXS 1156+295 & 79.65 & 650 & 17.45 & 3.30 & 62.78\(\pm\)0.13 & 19.06\(\pm\)13.73 \\ PG 1553+113 & 97.43 & 1071 & 15.57 & 0.91 & 23.61\(\pm\)0.21 & 64.76\(\pm\)11.18 \\
3C 279 & 461.58 & 1831 & 17.59 & 2.78 & 46.16\(\pm\)0.50 & 11.73\(\pm\)7.80 \\ S4 0954+658 & 242.70 & 2988 & 14.80 & 2.61 & 47.60\(\pm\)0.17 & 17.10\(\pm\)6.18 \\ \hline \end{tabular}
\end{table}
Table 2: Optical observations of a sample of blazars and their variability properties
Figure 1: R-band optical observations of the sample of blazars. The bottom right plot shows the emergence of flares in blazars via shock-in-jet processes.
time scale [12], is listed in the 7th column of the table. Furthermore, the modeling of the flares using both intrinsic and extrinsic source scenarios is presented in the following sections.
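For reference, the fractional variability of a light curve can be computed as sketched below, following the standard excess-variance definition widely used in blazar studies (the exact convention of [22] may differ in detail); the light curve in the example is synthetic.
```
import numpy as np

def fractional_variability(flux, flux_err):
    """Fractional variability amplitude F_var of a light curve.

    flux, flux_err: 1-D arrays of flux measurements and their errors.
    Returns F_var as a fraction (multiply by 100 for percent).
    """
    mean_flux = np.mean(flux)
    sample_var = np.var(flux, ddof=1)      # S^2, total observed variance
    mean_err2 = np.mean(flux_err ** 2)     # average measurement noise
    excess_var = sample_var - mean_err2    # intrinsic (excess) variance
    if excess_var <= 0:
        return 0.0                         # no detectable intrinsic variability
    return np.sqrt(excess_var) / mean_flux

# toy example with a synthetic light curve
rng = np.random.default_rng(0)
flux = 10 + 3 * np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.2, 200)
err = np.full(200, 0.2)
print(f"F_var = {100 * fractional_variability(flux, err):.1f} %")
```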
### Source intrinsic scenario
Blazar flares can be explained within the scenario of internal shocks moving along their relativistic jets. To demonstrate this, we attempt to create a flare lasting about a week using the approach described by Kirk et al. [23] (for details see [24]). The method involves a time-dependent analysis of a homogeneous single zone leptonic model, where variable emission results in significant flares due to gradual particle acceleration at the shock front and subsequent radiative cooling at the emission region. The model assumes cylindrical jets aligned near the line of sight, with particles being injected at a constant rate at the shock wavefront and primarily losing energy through synchrotron emission.
The evolution of particles in the acceleration zone can be described by the following form of the diffusion equation
\[\frac{\partial N}{\partial t}+\frac{\partial}{\partial\gamma}\left[\left(\frac{\gamma}{t_{acc}}-\beta_{s}\gamma^{2}\right)N\right]+\frac{N}{t_{acc}}=Q\delta(\gamma-\gamma_{0}), \tag{1}\]
where \(\beta_{s}\gamma^{2}\) represents the energy loss by synchrotron radiation, with \(\beta_{s}\) given by
\[\beta_{s}=\frac{4}{3}\frac{\sigma_{T}}{m_{e}c^{2}}\left(\frac{B^{2}}{2\mu_{0} }\right). \tag{2}\]
Here \(\sigma_{T}\) represents the Thomson scattering cross-section, \(B\) denotes the magnetic field, and \(\mu_{0}\) stands for the permeability of free space. Furthermore, the particle injection into the acceleration region is enhanced according to \(Q(t)=Q_{0}\) for \(t<0\) and \(t>t_{f}\), and \(Q(t)=\left(1+\eta_{f}\right)Q_{0}\) for \(0<t<t_{f}\). Then, the corresponding rise in the intensity of the source flux can be given by
\[I\left(\nu,t\right)=I_{1}\left(\nu,t\right)+\eta_{f}\left[I_{1}\left(\nu,t \right)-I_{1}\left(\nu,\left(1-u_{s}/c\right)t_{f}\right)\right]. \tag{3}\]
For small angles, where \(\cos\theta\sim 1\), and with \(\delta\sim 2\Gamma\), the relations \(I(\nu,t)=\delta^{3}I(\nu^{\prime},t^{\prime})\sim 8\Gamma^{3}I(\nu^{\prime},t^{\prime})\) and \(\nu=\Gamma\left(1+\beta\right)\nu^{\prime}\approx 2\Gamma\nu^{\prime}\) are utilized to convert the source rest-frame quantities (primed) to the observer's frame (unprimed). In this context, we consider \(\nu=4.55\times 10^{14}\) Hz, which corresponds to the mean effective wavelength of the R-band (658 nm), and the shocks travel down the blazar jet at relativistic speeds (\(\beta_{s}=0.1\)). The resultant normalized intensity profile, which imitates the flaring behavior observed in the blazars, is displayed in the bottom right panel of Figure 1.
### Source extrinsic scenario
Large flares in blazars can also be attributed to the Doppler-boosted emission resulting from emission regions moving along curved trajectories within the jets. In such scenarios, the flux in the rest frame (\(F^{\prime}_{\nu^{\prime}}\)) is connected to the observed flux (\(F_{\nu}\)) using the following equations:
\[\frac{F_{\nu}(\nu)}{F^{\prime}_{\nu^{\prime}}(\nu)}=\delta^{3+\alpha}\quad\text{and}\quad\delta(t)=\frac{1}{\Gamma\left(1-\beta\cos\theta\right)} \tag{4}\]
In general, the optical spectral slope tends to be greater than 1 in Low-Synchrotron-Peaked (LSP) blazars, such as OJ 49, S4 0954+658, TON 599, and 3C 279, while it is less than 1 in High-Synchrotron-Peaked (HSP) blazars; in our sample, PG 1553+113 serves as an example of the latter. For the purpose of illustration, we assume a spectral index of approximately \(\alpha\sim 1\).
Assuming that the apparent flux rise occurs purely due to changes in the Doppler factor (\(\delta\)), this can be related to alterations in the angle with the line of sight and/or changes in the bulk Lorentz factor (\(\Gamma\)). Here, for illustrative purposes, we consider two cases: one involving changes in \(\theta\) and another involving changes in \(\Gamma\).
* Change in the angle of the line of sight: To simulate a gradual decrease in the angle between the emission region and the line of sight, we use the approximation \(\theta=\theta_{0}-A\sin^{2}\omega t\), where \(\theta_{0}\) is set to 5.7 and \(A\) is set to 3.7, as depicted in the top left panel of Figure 2. The resulting flux rise profiles for three values of the bulk Lorentz factor, namely \(\Gamma\)=10, 15, and 20, are shown as blue, green, and red curves, respectively, in the lower left panel of Figure 2.
* Change in \(\Gamma\): In this scenario, the bulk Lorentz factor of the dominant emission region gradually increases following the approximation \(\Gamma=\Gamma_{0}+A\sin^{2}\omega t\), with an initial value of \(\Gamma_{0}\)=10 and \(A\)=20. This allows the plasma blob traveling at a speed of \(\Gamma=10\) to accelerate and reach \(\Gamma=30\), then decelerate back to its original speed. The evolution of \(\Gamma\) over time is presented in the top right panel of Figure 2. The resulting flux rise profiles for three values of the angles of the line of sight, i.e. \(\theta\)=1.5, 3.5, and 6.5\({}^{\circ}\), are depicted as blue, green, and red curves, respectively, in the lower right panel of Figure 2. A numerical sketch of this beaming effect is given below.
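The following minimal sketch evaluates the relative flux amplification \((\delta(t)/\delta_{0})^{3+\alpha}\) for the first case (a swing in the viewing angle) with the parameters quoted above, \(\theta_{0}=5.7\), \(A=3.7\) (assumed to be in degrees) and \(\alpha\sim 1\); the sampling of \(\omega t\) and the plotting details are arbitrary choices.
```
import numpy as np
import matplotlib.pyplot as plt

def doppler_factor(gamma, theta_deg):
    """Relativistic Doppler factor delta = 1 / (Gamma * (1 - beta * cos(theta)))."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(np.radians(theta_deg))))

alpha = 1.0                            # assumed optical spectral index
phase = np.linspace(0.0, np.pi, 500)   # omega * t over one swing

theta0, A = 5.7, 3.7                   # viewing-angle parameters, assumed in degrees
for gamma in (10, 15, 20):
    theta_t = theta0 - A * np.sin(phase) ** 2
    boost = (doppler_factor(gamma, theta_t) / doppler_factor(gamma, theta0)) ** (3 + alpha)
    plt.plot(phase, boost, label=f"Gamma = {gamma}")

plt.xlabel("omega * t")
plt.ylabel("relative flux F(t) / F(0)")
plt.legend()
plt.show()
```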
Figure 2: _Left:_ As the angle between the emission region and the line of sight decreases (top panel), the flux appears to flare as a result of relativistic beaming (bottom panel). The three curves correspond to the three different values of the bulk Lorentz factors. _Right:_ Similarly, flaring of flux (bottom panel) owing to the increase in the bulk Lorentz factor (top panel). The three curves correspond to the three different values of the angles of sight.
## 4 Discussion & Conclusion
Blazars exhibit violent variability across a wide range of timescales, from minutes to decades. Their light curves often display well-defined MWL flaring events, characterized by a sudden increase in blazar flux followed by energetic dissipation, which usually lasts for a few weeks to months. Numerous blazar monitoring studies focus on gamma-ray flares frequently linked to the ejection of radio knots visible in VLBA images [25, 26]. Emerging evidence suggests that gamma-ray flares may be associated with superluminally-moving features passing through stationary features along the jet; many flares coincide with the passage of superluminal knots through the millimeter-wave core [see 27, and the references therein]. Particle acceleration mechanisms, including diffusive shock acceleration and turbulent jet models [28, 29, 30, 31], play a crucial role in accelerating particles to Lorentz factors of order \(10^{6}\). In the turbulent jet scenario, shock waves in the relativistic plasma's turbulent flow can heat and compress particles, causing individual turbulent cells to appear as flares. Similarly, high magnetic fields in the jet can trigger intermittent magnetic instabilities, leading to magnetic reconnection events and particle acceleration ([32, 33]). These events can produce rapid flares with distinctive envelopes, possibly showing slight asymmetry due to the disturbance passing through the emission region or geometric effects such as the light-crossing time.
## Acknowledgments
We acknowledge the support of the Polish National Science Centre through the grants 2018/29/B/ST9/01793 (SZ) and 2020/39/B/ST9/01398 (DG). KM acknowledges JSPS KAKENHI grant number 19K03930.
|
2303.12878
|
Robust Consensus in Ranking Data Analysis: Definitions, Properties and
Computational Issues
|
As the issue of robustness in AI systems becomes vital, statistical learning
techniques that are reliable even in presence of partly contaminated data have
to be developed. Preference data, in the form of (complete) rankings in the
simplest situations, are no exception and the demand for appropriate concepts
and tools is all the more pressing given that technologies fed by or producing
this type of data (e.g. search engines, recommending systems) are now massively
deployed. However, the lack of vector space structure for the set of rankings
(i.e. the symmetric group $\mathfrak{S}_n$) and the complex nature of
statistics considered in ranking data analysis make the formulation of
robustness objectives in this domain challenging. In this paper, we introduce
notions of robustness, together with dedicated statistical methods, for
Consensus Ranking the flagship problem in ranking data analysis, aiming at
summarizing a probability distribution on $\mathfrak{S}_n$ by a median ranking.
Precisely, we propose specific extensions of the popular concept of breakdown
point, tailored to consensus ranking, and address the related computational
issues. Beyond the theoretical contributions, the relevance of the approach
proposed is supported by an experimental study.
|
Morgane Goibert, Clément Calauzènes, Ekhine Irurozki, Stéphan Clémençon
|
2023-03-22T19:36:56Z
|
http://arxiv.org/abs/2303.12878v1
|
# Robust Consensus in Ranking Data Analysis: Definitions, Properties and Computational Issues
###### Abstract
As the issue of robustness in AI systems becomes vital, statistical learning techniques that are reliable even in presence of partly contaminated data have to be developed. Preference data, in the form of (complete) rankings in the simplest situations, are no exception and the demand for appropriate concepts and tools is all the more pressing given that technologies fed by or producing this type of data (_e.g._ search engines, recommending systems) are now massively deployed. However, the lack of vector space structure for the set of rankings (_i.e._ the symmetric group \(\mathfrak{S}_{n}\)) and the complex nature of statistics considered in ranking data analysis make the formulation of robustness objectives in this domain challenging. In this paper, we introduce notions of robustness, together with dedicated statistical methods, for _Consensus Ranking_, the flagship problem in ranking data analysis, aiming at summarizing a probability distribution on \(\mathfrak{S}_{n}\) by a _median_ ranking. Precisely, we propose specific extensions of the popular concept of _breakdown point_, tailored to consensus ranking, and address the related computational issues. Beyond the theoretical contributions, the relevance of the approach proposed is supported by an experimental study.
Machine Learning, Robustness
In the present article, we complement these works on the issue of robustness to vote manipulation by investigating how the seminal concept of _breakdown point_, a popular measure of robustness of estimators in multivariate statistical analysis, may apply to consensus ranking. Basically, it can be defined as the proportion of outliers or (possibly deliberately) corrupted observations that can contaminate the data sample without jeopardizing the statistic. As will be shown, one of the main difficulties faced in the context considered here lies in the fact that consensus rankings are often obtained by solving an optimization problem and no closed analytical form for the solutions is available in general. Consequently, the computation of breakdown points of ranking statistics is generally a computational challenge. Our main proposal here consists in relaxing the constraint stipulating that the summary of a ranking distribution should be necessarily represented by a single ranking (_i.e._ a strict order on the set of items indexed by \(i\in\{1,\ \ldots,\ n\}\)), or equivalently by a point mass on \(\mathfrak{S}_{n}\). Instead, we suggest summarizing a ranking distribution by a _bucket ranking_ (_i.e._ a weak order on the set \(\{1,\ \ldots,\ n\}\)), the possibility of observing ties in the orderings considered being shown to have crucial advantages regarding robustness.
The paper is organized as follows. In Section 2, basics in ranking aggregation and the notion of breakdown function are introduced, as well as the contributions of our paper. Section 3 focuses on robustness, detailing our theoretical results on the breakdown functions for the classical median, extending this concept to bucket rankings, and providing an optimization algorithm to estimate it in practice. Section 4 is dedicated to the definition of our robust statistic, called the Downward Merge statistic. Finally, experiments are presented in Section 5 to highlight the usefulness of our Downward Merge statistic for solving Robust Consensus Ranking tasks.
## 2 Framework and Problem Statement
We start with a reminder of key concepts in ranking data analysis and _Robust Statistics_. The interested reader can refer to Alvo and Yu (2014); Huber and Ronchetti (2009) for more details. Here and throughout, a ranking over a set of \(n\geq 1\) items is represented as a permutation \(\sigma\in\mathfrak{S}_{n}\), where \(\mathfrak{S}_{n}\) is the symmetric group. By convention, the rank \(r\) of an item \(i\in[n]\) is \(r=\sigma(i)\). For any measurable space \(\mathcal{X}\), \(\mathcal{M}^{1}_{+}(\mathcal{X})\) is the set of probability measures on \(\mathcal{X}\), and \(\mathrm{TV}(p,q)\) denotes the total variation distance between \(p\) and \(q\) in \(\mathcal{M}^{1}_{+}(\mathcal{X})\).
### Ranking Data and Summary Statistics
The descriptive analysis of probability distributions, or datasets for their empirical counterparts, is a fundamental problem in statistics. For distributions on Euclidean spaces such as \(\mathbb{R}^{d}\), this problem has been widely studied and covered by the literature, with the study of statistics ranging from the simplistic sample mean to more sophisticated data functionals, such as \(U/L/R/M\)-statistics or depth functions for instance (van der Vaart, 1998).
Defining similar notions for probability distributions on \(\mathfrak{S}_{n}\), the space of rankings, is challenging due to the absence of vector space structure. However, fueled by the recent surge of applications using preference data, such as _e.g._ recommender systems, the statistical analysis of ranking data has recently regained attention and certain classic problems have been revisited, as for instance those related to consensus rankings and their generalization ability (see _e.g._Korba et al. (2017) and the references therein) or to the extension of depth functions to ranking data (Goibert et al., 2022).
Central tendency or location.Statistics measuring centrality, such as the mean (or the median for univariate distributions), can be seen as barycenters of the sampling observations w.r.t. a certain distance. Consensus Ranking / Ranking Aggregation extends this idea to probability distributions on \(\mathfrak{S}_{n}\) (Deza and Deza, 2009). Given a (pseudo-)metric \(d\) defined on \(\mathfrak{S}_{n}\) and a distribution \(p\in\mathcal{M}^{1}_{+}(\mathfrak{S}_{n})\), a _ranking median_ \(\sigma^{\mathrm{med}}_{d}(p)\in\mathfrak{S}_{n}\) can be defined as
\[\sigma^{\mathrm{med}}_{d}(p):=\operatorname*{argmin}_{\sigma\in\mathfrak{S}_{n} }\mathbb{E}_{\Sigma\sim p}(d(\sigma,\Sigma)). \tag{1}\]
A well-studied instance of ranking median is the _Kemeny consensus_, which corresponds to the situation where \(d\) is the _Kendall Tau_ distance: for all \(\sigma,\ \nu\) in \(\mathfrak{S}_{n}\),
\[d_{\tau}(\sigma,\nu)=\frac{2}{n(n-1)}\sum_{i<j}\mathds{1}_{[\sigma(i)<\sigma( j)]}\mathds{1}_{[\nu(i)>\nu(j)]} \tag{2}\]
Another common choice is the _Borda count_, obtained when \(d\) is the _Spearman Rho_; see Appendix A for more details. Moreover, when \(d\) is the Kendall tau, Borda is an \(O(n\log n)\), 5-approximation of the Kemeny ranking (Caragiannis et al., 2013; Jiao et al., 2016; Coppersmith et al., 2010), which is NP-hard to compute (Dwork et al., 2001).
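For concreteness, the objects just introduced can be computed directly for small \(n\). The sketch below is illustrative (not the authors' code): it implements the normalized Kendall tau of Equation (2), a brute-force Kemeny median in the sense of Equation (1), and the Borda count via expected ranks, with rankings encoded as tuples of ranks so that \(\sigma(i)\) is the rank of item \(i\).

```python
import itertools
import numpy as np

def kendall_tau(sigma, nu):
    """Normalized Kendall tau distance of Eq. (2); sigma[i] is the rank of item i."""
    n = len(sigma)
    discordant = sum(1 for i, j in itertools.combinations(range(n), 2)
                     if (sigma[i] < sigma[j]) != (nu[i] < nu[j]))
    return 2.0 * discordant / (n * (n - 1))

def kemeny_median(p):
    """Brute-force ranking median of Eq. (1) under the Kendall tau.
    p is a dict {ranking (tuple of ranks): probability}; cost is O(n! * |supp(p)|)."""
    n = len(next(iter(p)))
    return min(itertools.permutations(range(n)),
               key=lambda s: sum(w * kendall_tau(s, sig) for sig, w in p.items()))

def borda(p):
    """Borda count: rank items by their expected rank under p (a cheap proxy)."""
    n = len(next(iter(p)))
    expected = np.array([sum(w * sig[i] for sig, w in p.items()) for i in range(n)])
    ranks = np.empty(n, dtype=int)
    ranks[np.argsort(expected)] = np.arange(n)
    return tuple(ranks)

# toy distribution on S_3: item 0 is usually preferred to 1, which is preferred to 2
p = {(0, 1, 2): 0.6, (1, 0, 2): 0.3, (0, 2, 1): 0.1}
print(kemeny_median(p), borda(p))   # both give (0, 1, 2) here
```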
More complex statistics based on ranking data.Often, the information carried by a location statistic must be complemented. For instance, a notion of _dispersion_ or _shape_ is generally key to assessing convergence results or building confidence regions. To this end, the notion of _statistical depth function_ has been developed for multivariate data (in Euclidean spaces) (see (Zuo and Serfling, 2000) and the references therein) and recently adapted to ranking, refer to (Goibert et al., 2022). However, as more complex statistics are more likely to exhibit robustness issues, we focus on simple statistics estimating location for ranking distribution.
### Robust Statistics
To evaluate the robustness of a statistic, the notion of _breakdown function_ has been introduced in the seminal work of
Huber (1964). Informally, the breakdown function for a statistic \(T\) on a distribution \(p\) measures the minimal attack budget required for an adversarial distribution to change the outcome of the statistic \(T\) by an amount of at least \(\delta>0\).
**Definition 2.1**.: (Breakdown Function) Let \(\mathcal{X}\) and \(\mathcal{Y}\) be measurable spaces, \(p\in\mathcal{M}^{1}_{+}(\mathcal{X})\), \(T:\mathcal{M}^{1}_{+}(\mathcal{X})\rightarrow\mathcal{Y}\) a measurable function and \(d\) a metric on \(\mathcal{Y}\). For any level \(\delta\geq 0\), the breakdown function of the functional \(T\) at \(p\) is
\[\varepsilon^{\star}_{d,p,T}(\delta)=\inf\left\{\varepsilon>0\,\middle|\sup_{q :\mathrm{TV}(p,q)\leq\varepsilon}d(T(p),T(q))\geq\delta\right\}.\]
In the traditional case \(\mathcal{X}=\mathcal{Y}=\mathbb{R}\), the level \(\delta\) is generally set to \(+\infty\) and the budget required is referred to as _breakdown point_. In the extreme case, when \(T\) is the identity and \(\delta=0^{+}\), \(\varepsilon^{\star}\) quantifies the budget of attack under which _identifiability_ of the distribution is possible (which requires the additional knowledge that \(p\) belongs to some family).
Application to Ranking Data.In Agarwal et al. (2020) such a study on identifiability is provided for the Bradley-Terry-Luce (Bradley and Terry, 1952; Luce, 1959) model under a budget constraint on pairwise marginals rather than the Total Variation, and Jin et al. (2018) on the Heterogeneous Thurstone Models (Thurstone, 1927). However, summary statistics, such as a central tendency, are generally harder to break than the full distribution itself, so the breakdown function provides a finer quantification of robustness than the identifiability of the distribution. Since the distances on \(\mathfrak{S}_{n}\) are bounded, in general, the full breakdown function needs to be considered and one cannot focus only on a particular level such as \(\delta=0^{+}\) or \(\delta=+\infty\). From here and throughout, the distance \(d\) and the attack amplitude \(\delta\) are normalized to lie between \(0\) and \(1\).
The robustness of the median statistic when an adversary is allowed to attack with any strategy a pairwise model has also been studied (Datar et al., 2022). They characterize the robustness of two statistics in terms of the L2 distance on distributions. We propose in Definition 2.1 a more general and natural measure for robustness as a function of the distance between the true and a corrupted statistic.
Bucket Rankings as a robustness candidate.In rankings, adversarial attacks often target pairs of items that are "close" in some sense (Agarwal et al., 2020): consecutive ranks, a pairwise marginal probability close to \(\frac{1}{2}\), etc. Thus, a simple and efficient way to robustify a ranking median is to accept _ties_, rather than being restricted to a strict order.
### Challenges and Contributions
There is a large number of studies of median statistics, motivated by the lack of an analytical expression and by the computational and statistical challenges that arise in the estimation process. However, robustness results for ranking statistics are rare and not rigorous enough for comparing different estimators.
**Contribution 1**.: _Using Definition 2.1 with the Kendall tau distance provides a straightforward measure of robustness for ranking medians. In Section 3.1 we provide a lower-bound on the breakdown function for a ranking median (Theorem 3.2) and a tight upper-bound for the Kemeny consensus (Theorem 3.1)._
Moreover, slight perturbations in the pairwise relations of items that are similar to each other can imply breaking a median estimator, showing a lack of robustness. It is natural to propose more robust estimators by allowing pairs of items to be "equally ranked", i.e., by considering bucket ranking statistics. However, generalizations of the breakdown function for bucket rankings require the use of Kendall tau for buckets, which is computationally impractical.
**Contribution 2**.: _In Section 3.2 we propose an extension of the breakdown function for bucket rankings which is built upon a Hausdorff generalization of the Kendall tau distance. We also develop an optimization algorithm to approximate this breakdown function that overcomes the computational issue of having a piece-wise constant objective function._
We illustrate and show empirically that bucket rankings are more robust median estimators than rankings. However, finding the optimal bucket order statistic requires exhaustively searching the space of bucket rankings \(\Pi_{n}\), which is even larger than the space of permutations (itself of factorial cardinality), and is therefore totally infeasible.
**Contribution 3**.: _In Section 4 we propose a general method for robustifying medians: given a ranking median, our algorithm successively merges "similar" items together into the same bucket. We evaluate this statistic in Section 5, showing an improvement of robustness w.r.t. Kemeny's median without sacrificing its precision._
## 3 Robustness - Breakdown Function for Ranking and Bucket Rankings
This section first details how to apply the notion of _breakdown function_\(\varepsilon^{\star}_{d,p,T}\). This allows providing insights into the robustness of classical location statistics such as the Kemeny consensus. These results advocate for the introduction of a more robust type of statistics based on bucket orders that are also developed in this section.
### Breakdown Function for the Kemeny Consensus
We explore the robustness of ranking medians \(\sigma^{\mathrm{med}}_{d}(p)\), as defined in Equation (1) for different metrics \(d\) over \(\mathfrak{S}_{n}\), as measured by the breakdown function \(\varepsilon^{\star}_{d_{\tau},p,T}\). In particular, it is possible to tightly sandwich the breakdown function for
the Kemeny median.
**Theorem 3.1**.: _For \(p\in\mathcal{M}^{1}_{+}(\mathfrak{S}_{n})\), \(\sigma^{*}_{p}=\sigma^{\mathrm{med}}_{d}(p)\) (Kemeny median) and \(\delta\geq 0\), if \(\varepsilon^{+}(\delta)\leq 2p(\sigma^{*}_{p})\) then \(\varepsilon^{*}_{d_{\tau},p,\sigma^{*}_{p}}(\delta)\leq\varepsilon^{+}(\delta)\) with_
\[\varepsilon^{+}(\delta)=\min_{\begin{subarray}{c}\sigma\in\mathfrak{S}_{n}\\ d_{\tau}(\sigma,\sigma^{*}_{p})\geq\delta\end{subarray}}\max_{\begin{subarray}{c}\nu\in\mathfrak{S}_{n}\\ d_{\tau}(\nu,\sigma^{*}_{p})<\delta\end{subarray}}\frac{\mathbb{E}_{\Sigma\sim p}\left[d_{\tau}(\Sigma,\sigma)-d_{\tau}(\Sigma,\nu)\right]}{d_{\tau}(\sigma^{*}_{p},\sigma)-d_{\tau}(\sigma^{*}_{p},\nu)}\.\]
Proof Sketch.: A detailed proof can be found in Appendix C.1. The proof relies on showing that, for \(\varepsilon>0\), the _attack_ distribution \(\bar{q}_{\varepsilon}=p-\frac{\varepsilon}{2}\mathds{1}_{\{\sigma^{*}_{p}\}}+\frac{\varepsilon}{2}\mathds{1}_{\{\sigma^{*,\mathrm{rev}}_{p}\}}\), where \(\sigma^{*,\mathrm{rev}}_{p}\) is the reverse of \(\sigma^{*}_{p}\), is in the feasible set of the optimization problem \(\sup_{q:\mathrm{TV}(p,q)\leq\varepsilon}d_{\tau}(\sigma^{*}_{p},\sigma^{*}_{q})\) (see Definition 2.1).
Using \(\bar{q}_{\varepsilon}\) provides a way to link \(\varepsilon\) and \(\delta\). The condition \(\varepsilon^{+}(\delta)\leq 2p(\sigma^{*}_{p})\) ensures \(\bar{q}_{\varepsilon}\) is well-defined.
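The attack used in this proof sketch is easy to simulate numerically. The snippet below is a toy check, not the authors' implementation; it reuses the hypothetical `kemeny_median` and `kendall_tau` helpers from the earlier sketch and parametrizes the attack directly by the total-variation budget \(m\) (moving mass \(m\) from \(\sigma^{*}_{p}\) to its reverse gives \(\mathrm{TV}(p,q)=m\)).

```python
import numpy as np

def reverse_attack(p, m):
    """Move mass m from the Kemeny median of p onto its reverse ranking."""
    sigma_star = kemeny_median(p)
    reverse = tuple(len(sigma_star) - 1 - r for r in sigma_star)
    q = dict(p)
    q[sigma_star] = q.get(sigma_star, 0.0) - m
    q[reverse] = q.get(reverse, 0.0) + m
    return q

def empirical_upper_bound(p, delta, grid=np.linspace(0.0, 0.5, 201)):
    """Smallest budget on the grid for which this attack moves the Kemeny median
    by at least delta (an empirical upper bound on the breakdown function at p)."""
    sigma_star = kemeny_median(p)
    for m in grid:
        q = reverse_attack(p, m)
        if min(q.values()) < -1e-12:   # attack exceeds the mass available at sigma*
            return None
        if kendall_tau(sigma_star, kemeny_median(q)) >= delta:
            return float(m)
    return None
```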
It is also possible to provide a lower bound on the breakdown function for any generic ranking median.
**Theorem 3.2**.: _For \(p\in\mathcal{M}^{1}_{+}(\mathfrak{S}_{n})\), \(m\) and \(d\) being two metrics on \(\mathfrak{S}_{n}\), \(\sigma^{*}_{p}=\sigma^{\mathrm{med}}_{d}(p)\) and \(\delta\geq 0\), we have \(\varepsilon^{*}_{m,p,\sigma^{*}_{p}}(\delta)\geq\varepsilon^{-}(\delta)\) with_
\[\varepsilon^{-}(\delta)=\min_{\begin{subarray}{c}\sigma\in\mathfrak{S}_{n}\\ m(\sigma,\sigma^{*}_{p})\geq\delta\end{subarray}}\max_{\begin{subarray}{c}\nu\in\mathfrak{S}_{n}\\ \nu\neq\sigma\end{subarray}}\frac{\mathbb{E}_{\Sigma\sim p}\left[d(\Sigma,\sigma)-d(\Sigma,\nu)\right]}{\max_{\sigma^{\prime}\in\mathfrak{S}_{n}}\left[d(\sigma^{\prime},\sigma)-d(\sigma^{\prime},\nu)\right]}\]
Proof.: Detailed proof can be found in Appendix C.2.
Figure 1 shows that no choice of \(d\) makes the median uniformly more robust than another. Then, unfortunately, it also illustrates the fragility of median statistics against corruption of the distribution. In this example, impacting the distribution \(p\) by less than \(5\%\) allows changing the Kemeny median by flipping more than half item pairs (\(\delta\geq 0.5\)).
Sensitivity to similar items.To further illustrate the fragility of Kemeny's median, Figure 2 shows its breakdown function on specific distributions. As could be expected, if all items are almost indifferent (uniform distribution - purple curve), then a ranking median is very fragile: a small nudge on \(p\) is enough to change the Kemeny median from one ranking to its reverse. On the contrary, when \(p\) is a point mass at a given ranking (blue curve), it requires a large attack on \(p\) to impact the median.
The green curve shows a weakness in the median: despite \(p\) being concentrated on two neighbouring rankings (identical up to a pair of adjacent items), the robustness is very low for \(\delta\leq 0.2\). This highlights a mechanism underlying adversarial attacks in real-world recommender systems (ex: fake reviews...): at a small cost, it is possible to be systematically ranked on top of close alternatives. This calls for using the natural alternative to (strict) rankings, which incorporates indifference between items: _bucket rankings_.
### Bucket Ranking - Extended Ranking Consensus
Intuitively, bucket rankings are rankings with ties allowed. Formally, they can equivalently be defined as a total preorder - _i.e._ a homogeneous binary relation that satisfies transitivity and reflexivity (preorder) in which any two elements are comparable (total) - or as a strict weak ordering - _i.e._ a strict total order over equivalence classes of items (buckets).
**Definition 3.3**.: (Bucket ranking) A bucket order \(\pi\) is a strict weak order defined by an ordered partition of \([n]\), _i.e._ a sequence \((\pi^{(1)},\ldots,\pi^{(k)})\) of \(k\geq 1\) pairwise disjoint non empty subsets (buckets) of \([n]\) such that:
1. \(i\prec_{\pi}j\) \(\Leftrightarrow\) \(\exists l<l^{\prime}\in[k],(i,j)\in\pi^{(l)}\times\pi^{(l^{\prime})}\),
2. \(i\sim_{\pi}j\) \(\Leftrightarrow\) \(\exists l\in[k],(i,j)\in\pi^{(l)}\times\pi^{(l)}\),
We denote \(\Pi_{n}\) the set of bucket rankings, which is of size
Figure 1: An illustration of \(\varepsilon^{+}(\delta)\) and \(\varepsilon^{-}(\delta)\) (from Theorem 3.1 and Theorem 3.2) for a distribution on permutations of 4 items. For Borda and the median associated with Spearman footrule, only the lower bound is displayed.
Figure 2: Breakdown function for Kemeny’s median for different distributions \(p\). ”Uniform” denotes an almost uniform distribution; ”Point mass” an almost point mass distribution, and ”Bucket” an almost point mass distribution on two neighboring rankings.
\(\sum_{k=1}^{n}k!S(n,k)\)1 (vs \(n!\) for \(\mathfrak{S}_{n}\)).
Footnote 1: \(S(n,k)\) are Stirling numbers of the second kind.
The indifference between items that bucket rankings can incorporate is an interesting feature to gain robustness, because the statistic can output alternatives between several strict orders, making it harder to attack.
As sets of permutations.A bucket ranking \(\pi\in\Pi_{n}\) can be equivalently mapped to a subset of permutations, generated through the different ways to break ties. We say that a permutation \(\sigma\in\mathfrak{S}_{n}\) is _compatible_ with a bucket ranking \(\pi\in\Pi_{n}\) - denoted \(\sigma\in\pi\) - if for any \(i,j\in[n]\), \(\sigma(i)<\sigma(j)\ \Rightarrow\ i\prec_{\pi}j\) or \(i\sim_{\pi}j\). For two bucket orders \(\pi_{1},\pi_{2}\), we say that \(\pi_{1}\) is _stricter_ than \(\pi_{2}\), denoted \(\pi_{1}\subseteq\pi_{2}\), iff for any \(\sigma\in\mathfrak{S}_{n},\ \ \sigma\in\pi_{1}\Rightarrow\sigma\in\pi_{2}\).
As a distribution.Being a set of permutations, a bucket order \(\pi\in\Pi_{n}\) can also be seen as a uniform distribution with restricted support. This point of view is particularly intuitive from a robustness perspective: a randomized output is generally harder to attack for an adversary.
Distances between bucket rankings.A key to applying the breakdown function from Definition 2.1 to bucket order statistics is to have a metric on \(\Pi_{n}\) that extends those defined on \(\mathfrak{S}_{n}\). To this end, we use the previous remark that weak orders are sets of rankings, as well as a classical Hausdorff extension of metrics to sets. More precisely, we define:
**Definition 3.4**.: (Non-symmetric Hausdorff) Let \(d\) be a metric on \(\mathfrak{S}_{n}\). The non-symmetric Hausdorff pseudoquasimetric between two bucket rankings \(\pi_{1},\pi_{2}\in\Pi_{n}\) is
\[H_{d}^{\mathrm{NS}}(\pi_{1},\pi_{2})=\max_{\sigma_{2}\in\pi_{2}}\min_{\sigma_{ 1}\in\pi_{1}}d(\sigma_{1},\sigma_{2})\,.\]
Even though it is not a metric, \(H_{d}^{\mathrm{NS}}\) is well-suited to ranking with ties. Intuitively, its lack of symmetry allows differentiating adversarial attacks whose effect is on the strict part of the bucket order (e.g. swapping two items that are strictly ordered) from those whose effect is "only" to disambiguate a tie. More precisely, if \(\pi_{2}\subseteq\pi_{1}\), then \(H_{d}^{\mathrm{NS}}(\pi_{1},\pi_{2})=0\). Depending on the application, one may want to focus on the first type of attacks, in which case \(H_{d}^{\mathrm{NS}}\) is a suitable choice to define the breakdown function as \(\varepsilon_{H_{d}^{\mathrm{NS}},p,T}\). Otherwise, it is possible (and usual) to symmetrize the Hausdorff metric.
**Definition 3.5**.: (\(1/2\)-symmetric Hausdorff) Let \(d\) be a metric on \(\mathfrak{S}_{n}\). The \(1/2\)-symmetric Hausdorff metric between two bucket rankings \(\pi_{1},\pi_{2}\in\Pi_{n}\) is defined by
\[H_{d}^{(1/2)}(\pi_{1},\pi_{2})=\frac{1}{2}\Big{(}H_{d}^{\mathrm{NS}}(\pi_{1}, \pi_{2})+H_{d}^{\mathrm{NS}}(\pi_{2},\pi_{1})\Big{)}\,.\]
Usual symmetrization of the Hausdorff metric uses a maximum rather than an average (Fagin et al., 2006). However, under the Kendall-tau distance, the average version is computationally simpler (see Appendix D for more details).
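Since \(n\) is small in the experiments, the Hausdorff extensions can be evaluated by brute force directly from Definitions 3.4 and 3.5, by enumerating the permutations compatible with each bucket ranking. The sketch below is illustrative only (exponential in the bucket sizes) and reuses the hypothetical `kendall_tau` helper from the earlier sketch.

```python
import itertools

def compatible_permutations(pi):
    """All rankings (tuples of ranks) compatible with a bucket ranking pi,
    given as an ordered list of buckets of item indices, e.g. [[0], [1, 2], [3]]."""
    n = sum(len(b) for b in pi)
    perms = []
    for orders in itertools.product(*[itertools.permutations(b) for b in pi]):
        ranks, r = [0] * n, 0
        for bucket in orders:
            for item in bucket:
                ranks[item] = r
                r += 1
        perms.append(tuple(ranks))
    return perms

def hausdorff_ns(pi1, pi2, d):
    """Non-symmetric Hausdorff extension of a ranking metric d (Definition 3.4)."""
    s1, s2 = compatible_permutations(pi1), compatible_permutations(pi2)
    return max(min(d(a, b) for a in s1) for b in s2)

def hausdorff_half(pi1, pi2, d):
    """1/2-symmetric Hausdorff metric of Definition 3.5."""
    return 0.5 * (hausdorff_ns(pi1, pi2, d) + hausdorff_ns(pi2, pi1, d))

# if pi2 is stricter than pi1, the non-symmetric version vanishes:
# hausdorff_ns([[0, 1], [2]], [[0], [1], [2]], kendall_tau) == 0.0
```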
### The Breakdown Function in Ranking Data Analysis - Definition and Estimation
Definition.Putting all the pieces together, from now on, the statistic \(T:\mathcal{M}_{+}^{1}(\mathfrak{S}_{n})\to\Pi_{n}\) summarizes a distribution over \(\mathfrak{S}_{n}\) by a bucket ranking in \(\Pi_{n}\). Then, we use either \(H_{d_{\tau}}^{(NS)}(\pi_{1},\pi_{2})\) (see Definition 3.4) or \(H_{d_{\tau}}^{(1/2)}(\pi_{1},\pi_{2})\) on \(\Pi_{n}\) where \(d_{\tau}\) is the Kendall tau (see Equation (2)). Finally, the breakdown function \(\varepsilon_{H_{d_{\tau}}^{(NS)},p,T}^{\star}\) is the result of the following optimization problem
\[\inf\left\{\varepsilon>0\,\Bigg{|}\sup_{q:\mathrm{TV}(p,q)\leq\varepsilon}H_{d _{\tau}}^{(NS)}(T(p),T(q))\geq\delta\right\} \tag{3}\]
The Empirical Breakdown Function.Computing a closed-form expression for the breakdown point for any statistic \(T\) and distribution \(p\) is challenging in general. However, it can be estimated empirically: the extended expression of the breakdown function in Equation (3) can be simplified so that it is the solution to the following Lagrangian-relaxed optimization problem.
\[\inf_{q\in\Delta^{\mathfrak{S}_{n}}}\sup_{\lambda\geq 0}1/2\|p-q\|_{1}+\lambda( \delta-H_{d_{\tau}}^{(NS)}(T(p),T(q))) \tag{4}\]
Smoothing.As \(H_{d_{\tau}}^{(NS)}(T(p),T(q))\) is piece-wise constant as a function of \(q\) (with a combinatorial number of pieces), Problem (4) cannot directly be solved using standard optimization techniques. To address this issue, we use a smoothing procedure, convolving this function with a smoothing kernel \(k_{\gamma}\) with scale \(\gamma\). Thus, after the relaxation, the optimization problem (4) becomes:
\[\inf_{q\in\Delta^{\mathfrak{S}_{n}}}\sup_{\lambda\geq 0}1/2\|p-q\|_{1}+\lambda( \delta-\rho_{T}(p,q)), \tag{5}\]
with
\[\begin{split}\rho_{T}(p,q)&=H_{d_{\tau}}^{(NS)}(T(p), T(q))\star k_{\gamma}(q)\\ &=\int_{u}H_{d_{\tau}}^{(NS)}(T(p),T(u))\times k_{\gamma}(q-u) \text{d}u,\end{split} \tag{6}\]
On a practical note, a simple way to build a convolution kernel \(k_{\gamma}\) on a simplex like \(\mathcal{M}_{+}^{1}(\mathfrak{S}_{n})\) is to use a convolution kernel \(\kappa_{\gamma}\) on the whole Euclidean space - for instance an isotropic Gaussian density \(\kappa_{\gamma}(x)=\frac{1}{(2\pi\gamma^{2})^{n/2}}\exp\left(-\frac{x^{\top}x}{2\gamma^{2}}\right)\) - and set \(k_{\gamma}\) to be the density of the push-forward through a _softmax_ function. We denote \(\varepsilon_{p,T}^{\gamma}(\delta)\) the limiting value of \(\|p-q\|_{1}/2\) at the solution of (5). Note that the bias induced by such a definition of \(k_{\gamma}\) fades away when \(\gamma\) goes to \(0\), in the same way as the bias induced by the
convolution. This smoothing ensures \(\rho_{T}\) is a continuous, differentiable function with respect to \(q\). Moreover, it can easily be estimated by Monte-Carlo sampling, using the following remark: \(\rho_{T}(p,q)=\mathbb{E}_{u\sim k_{(q,\gamma)}}\big{(}H_{d_{\tau}}^{(NS)}(T(p),T(u))\big{)}\), where \(k_{(q,\gamma)}\) denotes the kernel \(k_{\gamma}\) centred at \(q\).
Optimization.When using Monte-Carlo estimation for \(\rho_{T}\), Equation (5) is a stochastic saddle-point problem. To solve such problems, stochastic gradient descent/ascent has a rate of convergence of \(\mathcal{O}(t^{-1/2})\) for its ergodic average (\(t\) being the number of steps) (Nemirovski & Rubinstein, 2002). Our empirical optimization algorithm for computing the breakdown functions relies on stochastic gradient descent and is able to provide good approximations, as illustrated in Figure 4. We denote \(\tilde{\varepsilon}_{p,T}^{\gamma}(\delta)=\|p-\bar{q}_{t}\|_{1}\), where \(\bar{q}_{t}\) is the ergodic average of the iterates \((q_{s})_{s\leq t}\) obtained during the optimization.
Let us make a couple of remarks on the empirical breakdown function \(\tilde{\varepsilon}_{p,T}^{\gamma}\). First, it is a noisy estimate of \(\varepsilon_{p,T}^{\gamma}\), as \(\rho_{T}\) and its gradients are estimated via Monte-Carlo. Thus, the choice of \(\gamma\) and \(t\) should trade off the variance of \(\tilde{\varepsilon}_{p,T}^{\gamma}\) and the bias \(|\varepsilon_{p,T}^{\gamma}-\varepsilon_{d_{\tau},p,T}^{\ast}|\). Second, as the term \(\|p-q\|_{1}\) is minimized in (5), \(\tilde{\varepsilon}_{p,T}^{\gamma}\) is expected to over-estimate \(\varepsilon_{p,T}^{\gamma}\).
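A rough sketch of the Monte-Carlo estimation of \(\rho_{T}\) is given below. It is not the authors' implementation: the softmax push-forward kernel is centred here on the logits of \(q\) (one plausible reading of the construction above; the exact centring and scaling may differ), and `T` and `H` are assumed helpers, e.g. a bucket-ranking statistic and the Hausdorff-Kendall distance from the earlier sketches, with \(p\) and \(q\) stored as probability vectors over a fixed enumeration of \(\mathfrak{S}_{n}\).

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def rho_hat(p, q, T, H, gamma=0.1, n_samples=500, seed=0):
    """Monte-Carlo estimate of the smoothed divergence rho_T(p, q) of Eq. (6):
    draw perturbed distributions u near q from the softmax push-forward of an
    isotropic Gaussian with scale gamma, and average H(T(p), T(u))."""
    rng = np.random.default_rng(seed)
    logits = np.log(np.clip(q, 1e-12, None))
    vals = [H(T(p), T(softmax(logits + gamma * rng.standard_normal(q.shape))))
            for _ in range(n_samples)]
    return float(np.mean(vals))
```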
## 4 Robust Consensus Ranking Statistics
As proved by Theorem 3.1, the classical median statistics as defined by (1) can be easily broken, which motivates defining more robust statistics, based on bucket rankings. As illustrated by Figure 2, the weakness of median statistics comes from being "forced" to rank all items, even those which are (almost) indistinguishable. Bucket rankings seem to be a natural solution to this problem, but _what is a good way to build a bucket order statistic?_
As \(H_{d_{\tau}}^{(NS)}\) defines a (pseudoquasi-)distance on \(\Pi_{n}\), we could adapt the idea of a median as in (1) to bucket rankings. However, contrary to Borda medians, which can be computed in a scalable way (Caragiannis et al., 2013), Hausdorff-based medians would require optimizing over \(\Pi_{n}\). As its cardinality is larger than that of \(\mathfrak{S}_{n}\), this problem can be even more computationally challenging than Kemeny's median.
A more scalable approach is to start from a ranking median such as the Kemeny or Borda consensus and to robustify it using a plug-in method based on merging items that are close into buckets. Figure 3 illustrates this idea. The left graph describes pairwise marginal probabilities for which the Kemeny consensus is \(A\prec B\prec C\prec D\). Intuitively, merging either \(C\) and \(D\) (as \(\mathbb{P}(C\prec D)=0.51\)) or \(B\) and \(C\) (as \(\mathbb{P}(B\prec C)=0.52\)) leads to bucket rankings (i) and (ii), which will be harder to attack. However, this example also highlights that there is no unique way of merging items. For instance, if the constraint is to only merge items whose pairwise preference probability is in \([0.4,0.6]\), it is possible to merge \(B,C\) or \(C,D\), but not \(B,C,D\) as \(\mathbb{P}(B\prec D)=0.7\): _pairwise indistinguishability is not transitive_.
### Naive Merge Statistic
In order to formalize the latter intuition and to derive a first (naive) plug-in rule, we define the pairwise preference probability between two items, which provides a relevant notion of closeness between items.
**Definition 4.1**.: (Pairwise probabilities). For \(p\in\mathcal{M}_{+}^{1}(\mathfrak{S}_{n})\), the pairwise preference probability between items \(i\) and \(j\), denoted \(P_{i,j}\), is defined for \(i\neq j\) by: \(P_{i,j}=\mathbb{P}_{\Sigma\sim p}(\Sigma(i)<\Sigma(j))\). By convention, \(P_{ii}=0.5\). We define the pairwise matrix of \(p\) as \(P:=[P_{i,j}]_{1\leq i,j\leq n}\).
Then, given a bucket ranking \(\pi\in\Pi_{n}\), we formalize the notion that two buckets can be merged, with the constraint of not changing the strict order between buckets. To this end, we define \(\bar{P}_{i}(\pi)\), the _strongest deviation from indifference_ between any two items within the \(i^{\text{th}}\) bucket \(\pi^{(i)}\).
\[\bar{P}_{i}(\pi)=\max\left\{|P_{l,l^{\prime}}-0.5|:(l,l^{\prime})\in\pi^{(i)}\times\pi^{(i)}\right\} \tag{7}\]
Then, one needs to quantify the value of \(\bar{P}_{i}(\pi)\) that would result from merging bucket \(i\) to bucket \(j\),
\[\bar{P}_{ij}(\pi)=\max\left\{\left|P_{l,l^{\prime}}-\frac{1}{2}\right|:(l,l^{\prime})\in\Big{(}\bigcup_{i\leq m\leq j}\pi^{(m)}\Big{)}^{2}\right\} \tag{8}\]
Finally, given a threshold \(\theta\in[0,0.5]\) on the acceptable deviation from indifference, we define the set of pairs of buckets that can be merged while keeping \(\bar{P}\) below \(\theta\),
\[\mathcal{G}(\pi,\theta)=\left\{(i,j)\in[n]^{2}:\bar{P}_{ij}(\pi)\leq\theta\right\} \tag{9}\]
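The quantities of Definition 4.1 and Equations (7)-(9) are straightforward to compute for small \(n\). The helpers below are illustrative only (not the paper's code); the distribution is again a dictionary mapping rankings, as tuples of ranks, to probabilities, and they are reused in the Downward Merge sketch later in this section.

```python
import numpy as np

def pairwise_matrix(p, n):
    """Pairwise preference matrix P of Definition 4.1; P[i, j] = Pr(i ranked before j)."""
    P = np.zeros((n, n))
    np.fill_diagonal(P, 0.5)
    for sigma, w in p.items():
        for i in range(n):
            for j in range(n):
                if i != j and sigma[i] < sigma[j]:
                    P[i, j] += w
    return P

def deviation(P, items):
    """Strongest deviation from indifference, Eqs. (7)-(8), over a set of items."""
    items = list(items)
    return max(abs(P[l, lp] - 0.5) for l in items for lp in items)

def mergeable_pairs(P, pi, theta):
    """G(pi, theta) of Eq. (9): bucket index pairs (i, j) whose merge (together with
    every bucket in between) keeps the deviation from indifference at most theta."""
    out = {}
    for i in range(len(pi)):
        for j in range(i + 1, len(pi)):
            dev = deviation(P, [x for b in pi[i:j + 1] for x in b])
            if dev <= theta:
                out[(i, j)] = dev
    return out
```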
The first intuition is to merge buckets iteratively, starting with the most indifferent ones, as described in Algorithm 1.
```
Input : Pairwise matrix P, ranking median sigma, threshold theta in [0, 0.5].
pi <- sigma                                      // sigma as a bucket ranking
while G(pi, theta) is not empty do
    (i*, j*) = argmin over (i, j) in G(pi, theta) of Pbar_ij(pi)
    update pi by merging all buckets between i* and j*:
        pi^(i)            <- pi^(i)                          for i < i*
        pi^(i*)           <- union of pi^(l), i* <= l <= j*
        pi^(i - j* + i*)  <- pi^(i)                          for i > j*
Output : pi
```
**Algorithm 1** Naive Merge
Termination of Algorithm 1 is guaranteed by the fact that the number of buckets in \(\pi\) strictly decreases at each iteration.
Then, by definition of \(\mathcal{G}(\pi,\theta)\), the resulting bucket ranking \(\pi\) is such that any of its bucket \(i\) satisfies \(\bar{P}_{i}(\pi)\leq\theta\) - _i.e._ no two items with higher deviation than \(\theta\) have been merged.
Despite being very natural, this algorithm suffers from an important limitation: when changing the threshold \(\theta\), its output only spans a limited subset of valid bucket rankings. In the example provided by Figure 3, the naive merge method plugged in on the Kemeny consensus can only output (i) and (iii). Whatever the value of \(\theta\), it can never output (ii) or (iv). This limitation is induced by its outputs being a monotonic (w.r.t. inclusion) function of \(\theta\) - _i.e._ for \(\theta_{1}\leq\theta_{2}\), the resulting bucket rankings satisfy \(\pi_{\theta_{1}}\subseteq\pi_{\theta_{2}}\).
### Downward Merge Statistic
Overcoming this limitation only requires a small change in the algorithm, which results in our main plug-in method, named _Downward Merge_, shown in Algorithm 2. The Downward Merge algorithm selects the two buckets \((i^{*},j^{*})\) whose deviation from indifference \(\bar{P}_{ij}(\pi)\) is maximal among those with \(\bar{P}_{ij}(\pi)\leq\theta\).2 Then, all the buckets \(l\) such that \(i^{*}\leq l\leq j^{*}\) are merged. This process is repeated while there exist pairs of buckets whose deviation from indifference satisfies \(\bar{P}_{ij}(\pi)\leq\theta\), and thus termination is guaranteed.
Footnote 2: Instead of taking the most similar buckets, as in the previous statistic, we take the most different pair among those that are “similar enough”.
**Input :** Pairwise matrix \(P\), Ranking median \(\sigma\),
threshold \(\theta\in[0,0.5]\).
\(\pi\leftarrow\sigma\) // \(\sigma\) as a bucket ranking
**while \(\mathcal{G}(\pi,\theta)\neq\emptyset\) do**
\((i^{*},j^{*})=\operatorname*{argmax}_{(i,j)\in\mathcal{G}(\pi,\theta)}\bar{P}_{ij}(\pi)\)
update \(\pi\) by merging all buckets between \(i^{*}\) and \(j^{*}\)
\(\begin{cases}\pi^{(i)}&\leftarrow\pi^{(i)}\quad\text{for }i<i^{*}\\ \pi^{(i^{*})}&\leftarrow\bigcup_{l\in[n],i^{*}\leq l\leq j^{*}}\pi^{(l)}\\ \pi^{(i-j^{*}+i^{*})}&\leftarrow\pi^{(i)}\quad\text{for }i>j^{*}\end{cases}\)
**Output :\(\pi\)**
**Algorithm 2** Downward Merge
The Downward Merge method is thus able to span a larger set of bucket orders when varying \(\theta\). In the example from Figure 3, the Downward Merge method plugged-in on the Kemeny consensus can generate all four bucket rankings (i-iv) for \(\theta\in\{0.51,0.52,0.69,0.7\}\).
The next experimental section illustrates the robustness improvement brought by this plug-in method over a ranking median.
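A minimal Python sketch of the Downward Merge plug-in, reusing the hypothetical `pairwise_matrix`, `deviation`, and `mergeable_pairs` helpers above (again illustrative rather than the authors' released code), is the following; the only difference from the naive variant is the `argmax` in place of the `argmin` when choosing which admissible pair of buckets to merge.

```python
def downward_merge(P, sigma, theta):
    """Algorithm 2: start from a ranking median sigma (tuple of ranks) and repeatedly
    merge the widest admissible pair of buckets, i.e. the one with the largest
    deviation from indifference among those still below the threshold theta."""
    n = len(sigma)
    pi = [[i] for i in sorted(range(n), key=lambda i: sigma[i])]  # sigma as buckets
    while True:
        candidates = mergeable_pairs(P, pi, theta)
        if not candidates:
            return pi
        (i, j), _ = max(candidates.items(), key=lambda kv: kv[1])
        pi = pi[:i] + [[x for b in pi[i:j + 1] for x in b]] + pi[j + 1:]

# usage: downward_merge(pairwise_matrix(p, n), kemeny_median(p), theta=0.1)
```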
## 5 Numerical Experiments
In this section, we illustrate the relevance of the statistic outputted by our Downward Merge plug-in on Kemeny's median (called our _Downward Merge statistic_ for short) by running several illustrative experiments for various settings and comparing with the baseline provided by the usual Kemeny's median. The code is available here.
Figure 4: Breakdown function \(\bar{\varepsilon}^{\gamma}_{p,T}(\delta)\) as a function of attack amplitude \(\delta\) for a bucket distribution \(p\) (almost a point mass on two neighboring rankings) with \(n=4\). The plain blue line denotes the theoretical value for Kemeny’s median \(\varepsilon^{*}_{p}(\delta)\), blue crosses (resp. red dots) the empirical approximation \(\bar{\varepsilon}^{\gamma}_{p,T}\) for Kemeny’s median (resp. Down. Merge statistic for different thresholds \(\theta\)).
Figure 3: Left: Directed Graph that summarizes a pairwise marginal probability matrix. (i-iv) Graph representations of bucket orders that are compatible with merging items which pairwise preference probability is below 0.52 (i, ii) and below 0.7 (iii,iv).
### Empirical Robustness
Our Downward Merge plug-in aims at providing a robustified statistic. To illustrate its usefulness, we ran experiments computing the approximate breakdown functions \(\tilde{\varepsilon}_{p,T}^{\gamma}(\delta)\) for the Kemeny's median as a baseline and our statistic when varying \(\delta\). Figure 4 shows the robustness as a function of attack amplitude \(\delta\) and for a hand-picked distribution \(p\) that is almost a point mass on a bucket ranking.
When the threshold is set to a sensible value (here \(\theta=0.05\)), the Downward Merge algorithm outputs a bucket order as a statistic: thus, the robustness increases very strongly to reach nearly optimal values even for very small values of \(\delta\), which illustrates its efficiency. When \(\theta=0.5\), the statistic is the bucket order regrouping all items. In this case, the statistic cannot be broken, and provides optimal values for the breakdown function. However, such a statistic does not provide any information about the distribution under analysis: its accuracy of location is very poor. Formally, the accuracy of location of a statistic \(T\) is defined by its closeness (under the same metric \(d\) used in its definition) to the whole ranking distribution: \(AL_{d,p}(T):=\|d\|_{\infty}-\mathbb{E}_{p}(d(T(p),\Sigma))\), which is the opposite of the _loss_, as simply defined by \(Loss_{d,p}(T)=\mathbb{E}_{p}(d(T(p),\Sigma))\). By definition, under metric \(d=d_{\tau}\), Kemeny's median has the highest accuracy of location, _i.e._ the smallest loss. On the other hand, the Downward Merge statistic when \(\theta=0.5\) has a very high loss, which makes it irrelevant in most cases. These observations justify the analysis of the loss/robustness tradeoff of our Downward Merge statistic compared to Kemeny's median.
### Tradeoffs between Loss and Robustness
We ran experiments for various distributions \(p\) and computed the loss and the breakdown function of Kemeny's median and our Downward Merge algorithm to show the loss/robustness tradeoff for each statistic. Figure 5 shows the results for different choices of distribution \(p\) when the number of items \(n=4\), and for \(\delta=1/6\) (normalized value of \(\delta\) that requires at least a switch between two items to break the statistic).
The point mass (resp. the uniform) distribution represents the extreme case for which Kemeny's median is very robust (resp. not robust at all) and for which we expect no improvement from using the Downward Merge statistic. This intuition is verified in both cases, and we can see that the Downward Merge statistic yields the same results (in loss and in robustness) as Kemeny's median.
The bucket distributions (for which the gap between the probabilities for two rankings in the bucket order is respectively \(0.1\) and \(0.01\)) represent the settings to which our Downward Merge is best suited. As expected, the improvement in robustness when using our Downward Merge statistic is high, and the increase in loss is negligible.
Finally, the Plackett Luce distributions (for which the parameters were generated randomly) represent a random setting. The results are interestingly very similar to those for the bucket distributions: the gain in robustness is high and the increase in loss is negligible. This random setting illustrates the usefulness of our Downward Merge statistic in general cases and shows that, overall, it yields a much better compromise than Kemeny's median.
## 6 Conclusion
In this paper, we developed a framework to study robustness in ranks: we defined breakdown functions for rankings, extended it to bucket rankings, and created an optimization algorithm to approximate its value in practice. We developed our Downward Merge statistic as a plug-in to the classical Kemeny's median to provide, as confirmed by our experiments, not only an improved robustness but also a better compromise between centrality and robustness. We ensured our Downward Merge algorithm is scalable to practical settings, but the evaluation of the breakdown function remains challenging because of the use of the Total-Variation distance as a metric for the budget constraint. The definition and study of further scalable approximations of the breakdown function are left for future work.
Figure 5: Loss/Robustness tradeoffs for different \(p\) with \(\delta=1\). Pairs of points linked by a black line denote results for Kemeny’s median and Down. Merge statistics on the same distribution \(p\) with \(n=4\). “Buckets” are hand-picked distributions generated to be almost a point mass on a bucket order, “Uniform” (resp. “Point mass”) "is an almost uniform (resp. point mass) hand-picked distribution, and “PL distribs.” are random Plackett-Luce distributions.
|
2304.10010
|
Separability, Contextuality, and the Quantum Frame Problem
|
We study the relationship between assumptions of state separability and both
preparation and measurement contextuality, and the relationship of both of
these to the frame problem, the problem of predicting what does not change in
consequence of an action. We state a quantum analog of the latter and prove its
undecidability. We show how contextuality is generically induced in state
preparation and measurement by basis choice, thermodynamic exchange, and the
imposition of a priori causal models, and how fine-tuning assumptions appear
ubiquitously in settings characterized as non-contextual.
|
Chris Fields, James F. Glazebrook
|
2023-04-19T23:32:19Z
|
http://arxiv.org/abs/2304.10010v1
|
# Separability, Contextuality, and the Quantum Frame Problem
###### Abstract
We study the relationship between assumptions of state separability and both preparation and measurement contextuality, and the relationship of both of these to the frame problem, the problem of predicting what does not change in consequence of an action. We state a quantum analog of the latter and prove its undecidability. We show how contextuality is generically induced in state preparation and measurement by basis choice, thermodynamic exchange, and the imposition of _a priori_ causal models, and how fine-tuning assumptions appear ubiquitously in settings characterized as non-contextual.
**Keywords**: Cone-Cocone Diagram, Gödel's theorem, Markov blanket, Measurement, Quantum Reference Frame, Task environment, Undecidability
## 1 Introduction
A theory exhibits contextuality, in the sense of Bell [1] or Kochen and Specker [2], if the outcome of some operation allowed by the theory can depend on what other operations are performed simultaneously [3, 4, 5, 6].1 Over the past decade, and in parallel with increasing recognition of its importance as a resource for quantum computing [7, 8, 9, 10, 11, 12], a number of formal representations of contextuality have been developed and their relevance to various operational settings investigated [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]; see the recent scholarly review by Adlam [27] addressing several interpretative issues along with the various nuances raised by these and other related works. Here we contribute to this progression of ideas by reformulating contextuality in terms of a decision problem that can be proved to be undecidable. This decision problem is a quantum analog of the frame problem (FP) [28, 29, 30, 31, 32, 33], a well-known problem in Artificial Intelligence (AI) that was first characterized, ironically, only a year after the publication of the Kochen-Specker theorem. Informally, the FP is the problem of efficiently specifying what does not change in consequence of an action [28]. It is thus a problem about characterizing an action's side-effects, which can be expected to depend on the action's context. Hence the FP can also be seen as the problem of efficiently specifying, or circumscribing, the context or, in AI terms, task environment that is in fact relevant to an action.
Footnote 1: This is sometimes called _quantum_ or _intrinsic_ contextuality to distinguish it from classical, causal, context dependence.
As in [1, 2], the operations or actions of interest here are measurement and, dually [34], preparation of a quantum state. We represent these operations as physical interactions between a finite quantum system regarded as an "agent" (\(A\) or Alice) and some other, disjoint finite quantum system (\(B\) or Bob) that make use of some finite collection of finite quantum reference frames (QRFs [35, 36]). Following the development in [37, 38, 39, 40, 41], we assume that the joint state \(|AB\rangle\) is separable over the time interval of interest, allowing us to write the interaction as:
\[H_{AB}=\beta_{k}k_{B}\,T_{k}\sum_{i}^{N}\alpha_{i}^{k}M_{i}^{k}, \tag{1.1}\]
where \(k=\ A\) or \(B\), \(k_{B}\) is Boltzmann's constant, \(T_{k}\) is temperature, the \(\alpha_{i}^{k}\in[0,1]\) are such that \(\sum_{i}^{N}\alpha_{i}^{k}=1\), the \(M_{i}^{k}\) are \(N\) Hermitian operators with eigenvalues in \(\{-1,1\}\), and \(\beta_{k}\geq\ln 2\) is an inverse measure of \(k\)'s thermodynamic efficiency that depends on the internal dynamics \(H_{k}\). We prove in [26] that in any interaction given by Eq. (1.1), the QRFs deployed by either system \(k\) can be represented, without loss of generality, by finite category-theoretic structures, cone-cocone diagrams (CCCDs) [42] of Barwise-Seligman
classifiers (or classifications) [43], that specify quantum computations implemented by \(H_{k}\). This representation effectively bundles the QRF employed to perform an operation (e.g. a length standard) together with the components of \(H_{AB}\) that are employed (i.e. some subset of the \(M_{i}^{k}\)); we henceforth use the term 'QRF' to refer to this combined representation. We have previously shown, using this representation, that noncommutativity of QRFs induces contextuality [25]. This result effectively reformulates previous criteria for representations of contextuality: noncommutativity of QRFs acting on sets of variables \(\{x_{i}\}\) and \(\{y_{i}\}\) corresponds both to the non-existence of a well-defined, well-behaved joint probability distribution over the \(\{x_{i}\}\) and \(\{y_{i}\}\) and to the non-existence of a well-defined section of a sheaf constructed jointly over the \(\{x_{i}\}\) and \(\{y_{i}\}\).
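As a purely numerical illustration of the structure of Eq. (1.1) (this is not part of the paper's formalism: the \(M_{i}^{k}\) there are the specific operators deployed by the QRFs, whereas here they are simply generated at random), one can assemble such an interaction as a convex combination of Hermitian operators with \(\pm 1\) eigenvalues, scaled by \(\beta_{k}k_{B}T_{k}\):

```python
import numpy as np

def random_pm1_hermitian(dim, rng):
    """A Hermitian operator with eigenvalues in {-1, +1}: conjugate a random
    diagonal sign matrix by a random unitary (QR of a complex Gaussian matrix)."""
    z = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    q, _ = np.linalg.qr(z)
    signs = rng.choice([-1.0, 1.0], size=dim)
    return q @ np.diag(signs) @ q.conj().T

def interaction_hamiltonian(dim, n_ops, beta=np.log(2), k_B=1.0, T=1.0, seed=0):
    """Toy instance of Eq. (1.1): H = beta * k_B * T * sum_i alpha_i M_i,
    with alpha_i >= 0 summing to one and M_i Hermitian with +/-1 eigenvalues."""
    rng = np.random.default_rng(seed)
    alphas = rng.dirichlet(np.ones(n_ops))
    return beta * k_B * T * sum(a * random_pm1_hermitian(dim, rng) for a in alphas)

H = interaction_hamiltonian(dim=4, n_ops=3)
print(np.allclose(H, H.conj().T))   # True: the interaction is Hermitian
```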
Here we first extend these results, in §2, by showing that contextuality is induced whenever Alice comprises mutually-separable components, i.e. components that maintain a separable joint state, that implement distinct QRFs [44]. We then state in §3 a decision problem, the _Quantum Frame Problem_ (QFP [33]), and prove that it is algorithmically undecidable, i.e. not decidable in finite time by a Turing machine with arbitrarily-large memory. The QFP is, informally, the problem of determining whether an action alters the entanglement structure of the environment. The undecidability of the QFP suggests that contextuality is ubiquitous, and that "noncontextual" settings depend for their definition on implicit fine-tuning assumptions. We show, using a canonical Bell/EPR experiment as an example, that such hidden fine-tuning assumptions can mask contextuality. We then review, in §4, our previous results and those of others showing that contextuality can be induced by choice of basis and hence of QRF (§4.1), by classical thermodynamic exchange (§4.2), and by the imposition of classical causal models (§4.3). We remark briefly in §5 on the relationship between contextuality and Gödel's [50] celebrated 1st incompleteness theorem, and conclude that contextuality is in no way paradoxical, but is rather to be expected whenever finite observers measure (or prepare) multiple distinct degrees of freedom in any sufficiently-large environment. 2
Footnote 2: Contextuality in quantum mechanics is by now accepted as a state of affairs to be lived with. Curiously, noncontextuality seems to be an equally perplexing issue. Hofer-Szabo in [51] distinguishes between _simultaneous_ and _measurement noncontextuality_ as distinct, logically independent models. In the first case, an ontological (hidden variable) model is noncontextual if every ontic (hidden) state determines the probability of the outcomes of every measurement independently of whatever other measurements are simultaneously performed (otherwise the model is contextual). In the second case, an ontological model is noncontextual if any two measurements represented by the same self-adjoint operator, or equivalently, which have the same probability distribution of outcomes in every quantum state, also have the same probability distribution of outcomes in every ontic state (ditto). This distinction reflects upon the schism between operational and ontological models when dealing with contextuality, as exemplified by the fact that contextuality-by-default [21, 22] and causal contextuality [52, 53, 54], respectively, are different theories [55].
## 2 Separable QRFs induce contextuality
We begin by supposing that Alice comprises (at least) two distinct components \(A_{1}\) and \(A_{2}\) that implement distinct QRFs \(Q_{1}\) and \(Q_{2}\), respectively. The QRFs \(Q_{1}\) and \(Q_{2}\) can be
regarded as (quantum) computations, physically implemented by the internal interactions \(H_{A_{1}}\) and \(H_{A_{2}}\), respectively, that measure and dually, prepare states \(|S_{1}\rangle\) and \(|S_{2}\rangle\) (over time, densities \(\rho_{S_{1}}\) and \(\rho_{S_{2}}\)) of distinct sectors \(S_{1}\) and \(S_{2}\), respectively, of the boundary \(\mathscr{B}\) separating \(A\) from its environment \(B\); see [26, 39, 41] for details. Then we can state:
**Lemma 2.1**.: _If systems \(A_{1}\) and \(A_{2}\) implement QRFs \(Q_{1}\) and \(Q_{2}\), respectively, and the joint state \(|A_{1}A_{2}\rangle\)\((\)or density \(\rho_{A_{1}A_{2}}\)\()\) is separable as \(|A_{1}A_{2}\rangle=|A_{1}\rangle|A_{2}\rangle\)\((\rho_{A_{1}A_{2}}=\rho_{A_{1}}\rho_{A_{2}})\), then the commutator \([Q_{1},Q_{2}]\neq 0\)._
Proof.: We recall from [26], Thm. 1 that every QRF can be specified by a CCCD, i.e. a commutative diagram of the form:
[Diagram (2.1), a cone-cocone diagram of classifiers with core \(\mathbf{C}^{\prime}\), is not reproduced here]
in which the \(\mathcal{A}_{i}\) are Barwise-Seligman classifiers, the maps \(f_{i}\), \(h_{i}\), and \(g_{ij}\) are Barwise-Seligman infomorphisms (i.e. morphisms of classifiers), and the "core" \(\mathbf{C}^{\prime}\) is a Barwise-Seligman classifier that is simultaneously the category-theoretic colimit of the "incoming" maps \(f_{i}\) and the category-theoretic limit of the "outgoing" maps \(h_{i}\). Details of this construction, following the aforesaid concepts as established in [43], are provided in [42], and reviewed with further developments in [26, 39, 41]. If the QRFs \(Q_{1}\) and \(Q_{2}\) commute, their corresponding CCCDs commute; in this case, the cores \(\mathbf{C}^{\prime}_{\mathbf{1}}\) and \(\mathbf{C}^{\prime}_{\mathbf{2}}\) can be mapped to a common core \(\mathbf{C}^{\prime}_{\mathbf{1},\mathbf{2}}\) that is the colimit of all incoming maps, and limit of all outgoing maps, thus defining a single CCCD over the combined sets \(\{\mathcal{A}_{1,i}\}\) and \(\{\mathcal{A}_{2,i}\}\) of classifiers of \(Q_{1}\) and \(Q_{2}\). This CCCD specifies a single QRF \(Q_{1}Q_{2}=Q_{2}Q_{1}\).
We now recall from [26], Thm. 2 that the action of any QRF \(Q\) induces a topological quantum field theory (TQFT) \(\mathscr{T}_{Q}\) representable as a cobordism of linear maps with copies of the Hilbert space \(\mathcal{H}_{S}\) of the sector \(S\) on which \(Q\) acts as boundaries. As the data generated by \(Q\) at discrete times \(t_{i},t_{j}\) are just \(|S(t_{i})\rangle\) and \(|S(t_{j})\rangle\), respectively, and as the TQFT \(\mathscr{T}_{Q}|_{t_{i}\to t_{j}}:|S(t_{i})\rangle\mapsto|S(t_{j})\rangle\) by definition, we can simply identify \(Q\) with \(\mathscr{T}_{Q}\)[44].
Provided \(Q_{1}\) and \(Q_{2}\) commute, therefore, there is a TQFT \(\mathscr{T}_{Q_{1}Q_{2}}=\mathscr{T}_{Q_{2}Q_{1}}\) that can be identified with the combined QRF \(Q_{1}Q_{2}=Q_{2}Q_{1}\); this TQFT can be represented by:
[Diagram (2.2), a cobordism between copies of the boundary Hilbert space \(\mathcal{H}_{S}\), is not reproduced here]
where Alice and Bob are represented as the components of the bulk to the left and right of \(\mathscr{B}\), respectively. In this case Alice implements, and the components \(A_{1}\) and \(A_{2}\) jointly implement, a single unitary process \(Q_{1}Q_{2}\). The joint state \(|A_{1}A_{2}\rangle\) of the components \(A_{1}\) and \(A_{2}\) is, therefore, entangled over the entire time period of interest.
Separability of \(|A_{1}A_{2}\rangle\) can, therefore, only be achieved if \(Q_{1}\) and \(Q_{2}\) do not commute, i.e. \([Q_{1},Q_{2}]\neq 0\).
We note that the converse of Lemma 2.1 is true trivially: if \(Q_{1}\) and \(Q_{2}\) do not commute, the joint state \(|A_{1}A_{2}\rangle\) cannot be entangled. We show in [44] that the value of the commutator \([Q_{1},Q_{2}]\) can be related to the entanglement of formation [56] of mixed states of \(B\) prepared by the joint action of \(Q_{1}\) and \(Q_{2}\).
If \(Q_{1}\) and \(Q_{2}\) commute, and hence the joint operation \(Q_{1}Q_{2}\) is well-defined as a QRF, then \(Q_{1}\) and \(Q_{2}\) are, in the language of [25], _co-deployable_ and the joint probability distribution over the states \(|S_{1}\rangle\) and \(|S_{2}\rangle\) of their respective sectors \(S_{1}\) and \(S_{2}\) is well-defined. If \(Q_{1}\) and \(Q_{2}\) do not commute, and hence the joint operation \(Q_{1}Q_{2}\) is not well-defined as a QRF, then \(Q_{1}\) and \(Q_{2}\) are _non-codeployable_ and no joint probability distribution over the states \(|S_{1}\rangle\) and \(|S_{2}\rangle\) of their respective sectors \(S_{1}\) and \(S_{2}\) is well-defined. This latter condition corresponds to contextuality, as shown explicitly in [25] using methods developed in [22, 23]. Hence we have:
**Theorem 2.1**.: _If systems \(A_{1}\) and \(A_{2}\) implement QRFs \(Q_{1}\) and \(Q_{2}\), respectively, and the joint state \(|A_{1}A_{2}\rangle\)\((\)or density \(\rho_{A_{1}A_{2}})\) is separable as \(|A_{1}A_{2}\rangle=|A_{1}\rangle|A_{2}\rangle\)\((\rho_{A_{1}A_{2}}=\rho_{A_{1}}\rho_{A_{2}})\), then observational outcomes obtained with, and state preparations implemented with, the QRFs \(Q_{1}\) and \(Q_{2}\) will exhibit contextuality._
Proof.: We recall from [25], Thm. 7.1 that non-commutativity, and hence non-codeployability of QRFs induces contextuality. Whenever the conditions of Lemma 2.1 are met, the relevant QRFs are non-codeployable.
As shown in [26], the set of all finite CCCDs that specify QRFs form a category **CCCD**; the morphisms of this category compose CCCDs specifying QRFs to form larger CCCDs, and decompose CCCDs into components that are themselves CCCDs. We can therefore consider the notion of a quiver representation for CCCDs, referring to [57] for relevant definitions. We have from [26], Thm. 9:
**Theorem 2.2**.: _If a diagram of the form of Diagram (2.1), when viewed as a quiver, is non-commutative (and hence is not a CCCD), then at least one section of its quiver representation by vector spaces is not definable; in particular, at least one section of its representation by vector spaces of measurable functions is not definable._
We note that Theorems 2.1 and 2.2 describe conditions that, in the spirit of [2], induce break-downs of logical consistency; in particular, break-downs of logical consistency between the classifiers composing the CCCD representations of the relevant QRFs. They also bear obvious similarity to the main result of [13] proving that in some 'measurement scenario', the non-existence of a global section of a sheaf of measurable distributions implies Kochen-Specker contextuality.3
Footnote 3: It is worth pointing out that a distributed system of information flow of [43], of which CCCDs (when considered as systems of logical gates) are an example, already embodies notions of context and causation, and is especially suited for modeling ontologies. These characteristics have been professed in [46, 45], in which the former uses the logical formalism of [43] to argue that causation itself may be viewed as a form of computation resulting from the regular relations in a distributed system (such as a CCCD), and the latter shows that for any pair of classifiers \(\mathcal{A},\mathcal{B}\) occurring in such a system, there exists some (logic) infomorphism between them such that \(\mathcal{A}\) directly _causes_ \(\mathcal{B}\). In a companion theory of integrative information involving abstract logical structures known as _Institutions_ (see e.g. [47, 48, 49]), the element of context is further emphasized. As briefly recalled in the proof of Lemma 2.1, the basic mechanism of a QRF in conjunction with binary-valued classifiers provides a model for a generic topological quantum field theory [26] with no need for further logical abstractions. The ‘Institutional’ approach will, however, be addressed relative to the present development of ideas at a later date.
From a physical perspective, the mechanism enforcing contextuality is clear from Diagram (2.2): the QRFs \(Q_{1}\) or \(Q_{2}\) act, via their respective sectors \(S_{1}\) and \(S_{2}\) of \(\mathscr{B}\), on the quantum system \(B\). Any such actions affect the state \(|B\rangle\), of which the sector states \(|S_{1}\rangle\) and \(|S_{2}\rangle\) are effectively (via the actions of QRFs implemented by \(B\)) samples. Hence if \(Q_{1}\) and \(Q_{2}\) do not commute, \(A\)'s actions with either render the outcomes of measurements made with the other unpredictable. The unpredictability of side-effects of actions is the core of the FP [28]. Indeed, we have previously proved that a distributed information-flow system (e.g. a CCCD) can be informationally unencapsulated (i.e. embody an "open environment" solution to the FP) only in the absence of intrinsic contextuality [25, Cor 8.1]. We can, therefore, expect that contextuality generates a quantum analog of this well-known, provably undecidable [33] problem.
## 3 Contextuality renders the QFP undecidable
### Statement of the QFP
The FP was originally formulated as the problem of efficiently characterizing the components of the overall state of the environment (in AI language, the states of and relations between objects in the environment) that do not need to be updated in consequence of an action [28]. An efficient means of predicting all side-effects of any action taken in any environment would solve the FP. It is generally recognized that all practical solutions of the FP are heuristic, and rely on local, classical, causal models of the task environment [30, 31]. Local causal models being insufficient to predict the side-effects of an action naturally suggests the influence of hidden variables. In a quantum setting, such hidden variables may be nonlocal in the sense of acting via (unobserved components of) the entanglement structure of the environment. The simplest measure of entanglement structure of an environment \(B\) is the _entanglement entropy_:
\[\mathcal{S}(B)=_{def}\max_{B_{1},B_{2}|B_{1}B_{2}=B}\,\mathcal{S}(|B_{1}B_{2}\rangle) \tag{3.1}\]
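For a small environment \(B\) modeled as a register of qubits in a pure state, Eq. (3.1) can be evaluated directly: compute the von Neumann entropy of the reduced density matrix for every bipartition \(B_{1}B_{2}=B\) and take the maximum. The sketch below is a minimal illustration under that pure-state assumption, not a general-purpose implementation:

```python
import itertools
import numpy as np

def reduced_density_matrix(psi: np.ndarray, keep: tuple, n_qubits: int) -> np.ndarray:
    """Partial trace of the pure state |psi><psi| onto the qubits listed in `keep`."""
    psi = psi.reshape([2] * n_qubits)
    # Move the kept qubit axes to the front, flatten the rest, and contract.
    m = np.moveaxis(psi, list(keep), list(range(len(keep)))).reshape(2 ** len(keep), -1)
    return m @ m.conj().T

def von_neumann_entropy(rho: np.ndarray) -> float:
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def entanglement_entropy_S(psi: np.ndarray, n_qubits: int) -> float:
    """Maximum over bipartitions B = B1 B2 of the entropy across the cut, as in Eq. (3.1)."""
    best = 0.0
    for size in range(1, n_qubits):
        for keep in itertools.combinations(range(n_qubits), size):
            best = max(best, von_neumann_entropy(reduced_density_matrix(psi, keep, n_qubits)))
    return best

# A two-qubit Bell state carries one bit of entanglement across its only cut.
bell = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
print(entanglement_entropy_S(bell, n_qubits=2))  # ~1.0
```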
Hence we can state, for any agent \(A\) implementing an action \(a\) on its environment \(B\):
**QFP:** Does an action \(a\) on an environment \(B\) change the entanglement entropy \({\cal S}(B)\)?
If the QFP can be answered with 'no', local causal models may be sufficient, in practice, to solve the FP; hence the latter can be thought of, at least in practice, as the problem of efficiently modeling local causal relations. If the QFP is answered 'yes', all such local causal models are insufficient to solve the FP.
### The QFP is undecidable
The FP has been shown [33] to be equivalent to the Halting Problem (HP), the problem of deciding whether an arbitrary algorithm halts on an arbitrary input [58, 59]. As the HP is known to be algorithmically undecidable, the interesting direction of proof is HP \(\rightarrow\) FP. This is straightforward: if whether an arbitrary algorithm halts on an arbitrary input is undecidable, whether the FP solution of an arbitrary agent \(A\) halts given an arbitrary action \(a\) is undecidable.
That the QFP is algorithmically undecidable is also straightforward [33]. Decidability in finite time is equivalent to decidability given a finite number of finite inputs, i.e. finite-resolution observational outcomes. The idea of "measurement" as a process that yields observational outcomes is physically meaningful only if the interacting systems \(A\) and \(B\) are separable, i.e. only if \({\cal S}(|AB\rangle)=0\); otherwise Eq. (1.1) fails to describe the \(A\)-\(B\) interaction, and hence \(A\) cannot be regarded as deploying a well-defined QRF or as recording an observational outcome as classical data. Separability can, however, only be achieved
if the \(A\)-\(B\) interaction \(H_{AB}\) is weak, which in turn requires that \(\dim(\mathcal{H}_{A})\), \(\dim(\mathcal{H}_{B})\gg\dim(H_{AB})\). This is effectively a "large environment" assumption that allows the existence of nonlocal hidden variables in \(B\). Finite classical data obtainable by \(A\) cannot, in this case, determine \(\mathcal{S}(B)\) or even \(\dim(B)\); hence it cannot determine whether \(\mathcal{S}(B)\) is altered by an action \(a\)[33, 39]. Here we provide an alternative proof of this result that does not rely explicitly on a large environment, but rather highlights the role of contextuality.
**Theorem 3.1**.: _The QFP is algorithmically undecidable._
Proof.: The entanglement entropy \(\mathcal{S}(B)\) of Eq. (3.1) clearly cannot be measured using just one QRF; hence given the results of §2 above, \(\mathcal{S}(B)\) cannot be measured using any combination of mutually-commuting QRFs. Hence the only case of interest is that in which \(A\) comprises separable components \(A_{i}\) that, as required by Lemma 2.1, implement QRFs (or sets of mutually-commuting QRFs) that do not mutually commute. In general, any QRF \(Q_{i}\) deployed by a component \(A_{i}\) acts on some component of \(A_{i}\)'s environment, which is the joint system \(B\prod_{j}A_{j}\) where \(j\neq i\). For simplicity, we consider only two components \(A_{1}\) and \(A_{2}\) that are _minimal_ in that each implements only a single QRF (or set of mutually-commuting QRFs), \(Q_{1}\) and \(Q_{2}\) respectively, where \(Q_{1}\) and \(Q_{2}\) do not commute. We also assume \(H_{A_{1}A_{2}}=0\), so each of these QRFs acts only on \(B\). We can, in this case, consider the situation from the perspective of either of the minimal, separable components \(A_{1}\) or \(A_{2}\). As each of these components has only a single (effective) QRF, neither component alone can measure \(\mathcal{S}(B)\) or decide the QFP. This same argument applies, however, also to any further minimal, separable components of \(A\) that interact with either \(B\) or any of the other \(A_{i}\) (equivalently, to any further modular quantum computations implemented by \(A\) that accept classical inputs from either \(B\) or the \(A_{i}\)). The entanglement entropy \(\mathcal{S}(B)\) cannot, therefore, be measured by \(A\), so the QFP cannot be decided by \(A\). As \(A\) is an arbitrary finite system, the QFP is algorithmically undecidable.
From a physical perspective, Lemma 2.1 restricts any minimal, separable component of a system \(A\) to the use of its own computational resources - which must all mutually commute - to measure the state of its own environment, which may comprise other components of \(A\). Effectively restricting each separable component to a single QRF renders the environment of each such component "large enough" to prevent measurement of its entanglement entropy.
Theorem 3.1 has two important corollaries:
**Corollary 3.1**.: _No finite system \(A\) can measure the entanglement entropy \(\mathcal{S}(|AB\rangle)\) across the boundary \(\mathscr{B}\) that separates it from its environment \(B\)._
Proof.: Any finite system \(A\) comprises some set of minimal, separable components, so it is enough to assume that \(A\) itself is a minimal, separable system. Lemma 2.1 restricts any minimal, separable system \(A\) to a single QRF acting on the boundary \(\mathscr{B}\) that separates it from its environment \(B\). The maximum information obtainable is the state \(|S\rangle\) of the sector \(S\subseteq\mathscr{B}\) acted upon by this QRF [26, 39, 41].
Corollary 3.1 in fact follows directly from Eq. (1.1); the maximum information obtainable by \(A\) at time \(t\) is the \(N\)-bit encoding of an eigenvalue of \(H_{AB}\). We state it as a corollary of Theorem 3.1 to emphasize its relation to the QFP: no finite system can determine that it is separable from, and hence only in classical causal interaction with, its environment. The eigenvalues of \(H_{AB}\) are also, clearly, insufficient to specify the entanglement entropy \(\mathcal{S}(A)\); hence \(A\) cannot determine whether it has separable components.
**Corollary 3.2**.: _No finite system \(A\) can determine that any decomposition of its environment \(B=B_{1}B_{2}\) isolates all causal consequences of its actions in a single component \(B_{1}\)._
Proof.: Isolating all causal consequences of its actions in \(B_{1}\) requires determining that \(\mathcal{S}(|B_{1}B_{2}\rangle)=0\), which Theorem 3.1 forbids.
Corollary 3.2 shows that no system can isolate its own task environment. The undecidability of the FP immediately follows. More significantly for our purposes, no system \(A\) can determine the context of any of its actions \(a\). Griffiths [60], for example, shows that preparations and measurements become noncontextual when described in a consistent, i.e. commuting framework. A framework is a context, extended over time. Corollary 3.2 shows that no finite system can specify a consistent framework for its own actions.
### Example: Bell/EPR experiments
The most obvious example of measuring an entanglement entropy is a Bell/EPR experiment. Such experiments have obviously been performed, and have obviously yielded results, some of the most important in contemporary physics [61, 62, 63, 64, 65, 66]. Theorem 3.1 implies, however, that such experiments can only be regarded as measurements of entanglement entropies subject to further assumptions that place _a priori_ limits on contextuality. It is instructive to examine these assumptions explicitly.
We can represent the set-up of a Bell/EPR experiment by the following state-space decomposition:
[Diagram (3.2): state-space decomposition into the observers \(A_{1}\) and \(A_{2}\), the entangled pair together with the detectors \(B\), and the remaining environment \(C\).]
We interpret \(A_{1}\) and \(A_{2}\) as the observers, \(B\) as the entangled pair together with the detectors, and \(C\) as everything else, including whatever prepares the entangled pair, any 3rd parties,
and the general environment. We assume \(H_{BC}\sim 0\); in particular, that it is insufficient to induce decoherence. The pointer states of the detectors are, in this case, decohered only by interaction with the observers \(A_{1}\) and \(A_{2}\), consistent with the model of QRF-driven decoherence in [39, 41].
Each of \(A_{1}\) and \(A_{2}\) interacts with only one detector, so as is well-known, neither can observe entanglement in isolation. The joint systems \(A=A_{1}A_{2}\) and \(AC\) observe both detectors, so it is these systems that, in standard interpretations of the Bell/EPR set-up, observe entanglement; "Charlie" (\(C\)) becomes the locus of entanglement observation if \(A_{1}\) and \(A_{2}\) separately report their observations. Hence the interactions \(H_{A_{1}A_{2}}\), \(H_{A_{1}C}\), and \(H_{A_{2}C}\) are critical for observing entanglement.
Assuming that \(A_{1}\) and \(A_{2}\) are separable, their interaction \(H_{A_{1}A_{2}}\) can be represented as classical communication [41]; indeed they are standardly represented as exchanging classically-encoded data once their observations are completed. We can assume that this communication is free of classical noise. Assuming that it is free of _quantum_ noise, however, requires assuming that \(A_{1}\) and \(A_{2}\) share a reference frame, e.g. a \(z\) axis if \(s_{z}\) measurements are employed to communicate outcomes encoded as bit strings. This reference frame must be physically implemented, so is a QRF. Jointly implementing a QRF, however, induces entanglement as illustrated in Diagram (2.2); see [39, 41, 44] for proofs and further discussion. Equivalently, separability of \(A_{1}\) and \(A_{2}\) requires each to have free choice of basis for every measurement, including free choice of \(z\) axis (i.e. "language") when communicating their results [41]. Hence the common assumption that post-observation classical communication is conceptually unproblematic involves an assumption of superdeterminism: \(A_{1}\) and \(A_{2}\) must be assumed to choose the same basis when they prepare and measure classical encodings. That "classical communication" always involves quantum measurement in this way has been noted previously [67]; one can make similar remarks, clearly, about requirements for quantum measurements and prior agreement about choice of basis when multiple observers calibrate their detectors using a common standard.
While superdeterminism of choice of basis for post-observation classical communication is required to measure \(\mathcal{S}(B)\), superdeterminism of choice of basis for the observations themselves (i.e. choice of "settings") prevents measurement of \(\mathcal{S}(B)\) by rendering \(A_{1}\) and \(A_{2}\) entangled [5]. We can, therefore, see superdeterminism of choice of basis for post-observation classical communication as a specific, _a priori, ceteris paribus_ or "nothing else changes" [30] assumption that is required to render \(A_{1}\) and \(A_{2}\) able to perform a joint measurement [68, 69]. Assumptions that circumscribe the task environment by ruling out external influences are the basis of all heuristic solutions to the FP [29, 30].4
Footnote 4: The approach to causality (and hence to the FP) in [30] is via the theory of situations (i.e. contexts), the latter being cast in [45] within the category of information flow of classifications [43].
Such _ceteris paribus_ assumptions are clearly assumptions of _noncontextuality_: the larger context in which the experiment is performed is assumed to be constrained in a way that specifically superdetermines a particular classical, causal process - classical information exchange between \(A_{1}\) and \(A_{2}\) - while not superdetermining anything else. This _a priori_ constraint is a clear instance of fine-tuning. Indeed Cavalcanti [52] has shown that
any classical causal model that violates a Bell inequality or demonstrates Kochen-Specker contextuality will involve fine-tuning, and hence will violate an essential principle of the classical causal modeling framework, namely, that such models should not be fine-tuned. A counterpoise, as pointed out in [70], is the result of Wood and Spekkens [71] asserting that any classical causal model reproducing the observed statistics of the Bell correlations must be fine-tuned (equivalently 'unfaithful'); see [27] for review and further discussion.
## 4 Induced contextuality
The results of §§2 and 3 above suggest that _all_ multi-QRF measurements, and in particular all measurements made by observers comprising multiple, separable components, are contextual. In this case, contextuality becomes not just a default assumption as in [22], but rather a principled requirement. If this is the case, all measurement settings considered "non-contextual" involve implicit assumptions, e.g. assumptions of fine-tuning.
As discussed in [25], the ubiquity of contextuality is suggested by the very existence of the FP: if task environments could be circumscribed non-heuristically, or even reliably circumscribed heuristically, the FP would not arise. Practical "solutions" of the FP are, however, based on heuristics of unknown reliability. This is because task environments cannot, in practice, be fully isolated or fully characterized in advance. Unexpected side-effects cannot, therefore, be ruled out.
While the FP is often attributed to the environment being "large" or "open," we have seen in the above discussion that it in fact follows from the physics of measurement, and thus characterizes all environments. Here we consider three specific contextuality-inducing mechanisms in more detail.
### Basis choice induces contextuality
It is well-known that the observability of entanglement depends on choice of basis [72, 73, 74, 75, 76]; a Bell state is separable if measured in the Bell basis \((|10\rangle\pm|01\rangle)/\sqrt{2}\). If \(A_{1}\) and \(A_{2}\) deploy a Bell basis, however, they are _ipso facto_ entangled; choosing the Bell basis merely transfers the entanglement from the measured state to the observers. Theorem 2.1 generalizes this to arbitrary QRFs: if \(A_{1}\) and \(A_{2}\) deploy non-commuting QRFs (and hence are separable), they will observe contextuality; if they deploy commuting QRFs (and hence are entangled), they will not.
The classical limit of non-contextual observations by separable observers - e.g. spatially-separated observers - generalizes the assumption of fine-tuning of classical communication found in the case of Bell/EPR experiments. Observations made by distinct, spatially-separated observers are guaranteed to commute only if 1) the observers deploy the same QRFs, and 2) the environment being measured is causally uniform, i.e. homogeneous and
isotropic. The existence of equivalent observers and the causal uniformity of the environment are central, if largely implicit, postulates of the classical worldview. As they require precise values of relevant constants and rapid dissipation of early inhomogeneities, they are effectively fine-tuning assumptions, as is broadly acknowledged [77, 78].
### Thermodynamic exchange induces contextuality
Irreversibly recording classical information requires free energy [79]: hence all information processing that results in persistently-recorded data requires thermodynamic exchange with a free-energy source that we can assume, without loss of generality, to be external to any system \(A\) that acts as an observer. This can be made explicit with the state-space decomposition:
[State-space decomposition diagram.]
description (a "map") of a system is more informative about its (classical) causal capacity than a microscale description.
Any finite causal model can be represented as a finite directed acyclic graph (DAG) via the causal Markov condition (CMC, [85, 86]). Any such finite DAG is noncontextual; it specifies a task environment that is free, by definition, of unspecified side effects. Hence as shown by [52], imposing a finite causal model on any physical situation amounts to a fine-tuning assumption: all influences of the larger environment on the modeled environment are assumed to cancel out. Here entanglement is ruled out by requiring _no disturbance_[52, Def 2], i.e. that \(P(A|XY)=P(A|X)\) and \(P(B|XY)=P(B|Y)\), for all values of the variables \(A,B,X,Y\) for which these conditionals are defined.5 This suggestion of _causally-induced contextuality_ is affirmed in [53, 54], where ambient causal constraints themselves become context dependent. The approach taken in [53, 54] is analogous, in part, to that of [14], in adapting constraint functions and input histories to create a presheaf of causal distributions, but departs from [14] in so far that under certain conditions, the constructed presheaf does not form a sheaf.
Footnote 5: This sense of ‘no disturbance’ has also been called ‘no-signaling’, where, in accordance with relativity, such pairs of measurements \(X,Y\) are made in spacelike-separated regions (see also e.g. [14, 87]).
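The no-disturbance condition just quoted, \(P(A|XY)=P(A|X)\) and \(P(B|XY)=P(B|Y)\), is mechanical to verify for any finite behaviour table. As an illustration (our choice of example, not from the cited works), the sketch below checks it for the Popescu-Rohrlich box, which satisfies no-disturbance while being maximally nonlocal:

```python
def pr_box(a: int, b: int, x: int, y: int) -> float:
    """P(a, b | x, y) for the Popescu-Rohrlich box: outcomes satisfy a XOR b = x AND y."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def satisfies_no_disturbance(p, tol: float = 1e-12) -> bool:
    """Check that P(A | X, Y) does not depend on Y and P(B | X, Y) does not depend on X."""
    for x in (0, 1):
        for a in (0, 1):
            marginals = [sum(p(a, b, x, y) for b in (0, 1)) for y in (0, 1)]
            if abs(marginals[0] - marginals[1]) > tol:
                return False
    for y in (0, 1):
        for b in (0, 1):
            marginals = [sum(p(a, b, x, y) for a in (0, 1)) for x in (0, 1)]
            if abs(marginals[0] - marginals[1]) > tol:
                return False
    return True

print(satisfies_no_disturbance(pr_box))  # True: no-disturbance holds despite maximal nonlocality
```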
Within the framework of Deep Learning (DL), any trained network implements a causal model. A precondition to prove algorithmic effectiveness is the causal faithfulness condition (CFC); any causal Bayesian network satisfying the CMC and CFC admits a unique Markov blanket (MB, [88, 89, 90]). In a Bayesian network, the MB of any class variable provides all of the information required to predict its value. This allows an optimal solution of the Feature Selection (FS) problem which, if solvable, offers advantages including data compression and effectiveness in prediction. From an algorithmic aspect, introducing causality into FS may enhance robustness and mechanistic activity, upon which causal analysis may determine whether or not the FS solution obtained is causally informative, or if some part of it is spurious [88, 89, 90, 91]. More generally, there are several classical error correction programs analogous to quantum error correction, or possibly derivable from the latter in the classical limit, where effectively the presence of a holographic screen implements an MB (see discussions in [41, 44, 92]); in particular, those correction/error-awareness algorithms suitable for FS that identify the MB of the class variable (e.g. [91, 93, 94]).
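For concreteness, the Markov blanket of a class variable in a Bayesian network satisfying the CMC and CFC consists of its parents, its children, and the other parents of its children. The sketch below is a generic construction over a DAG given as an adjacency list; it is not tied to any particular feature-selection algorithm cited above:

```python
def markov_blanket(dag: dict, node: str) -> set:
    """Markov blanket of `node` in a DAG given as {parent: [children, ...]}:
    parents + children + co-parents (other parents of the node's children)."""
    children = set(dag.get(node, []))
    parents = {p for p, kids in dag.items() if node in kids}
    co_parents = {p for p, kids in dag.items()
                  if p != node and any(c in kids for c in children)}
    return parents | children | co_parents

# Toy network: Season -> Rain -> WetGrass <- Sprinkler, Rain -> Traffic
dag = {
    "Season": ["Rain"],
    "Rain": ["WetGrass", "Traffic"],
    "Sprinkler": ["WetGrass"],
}
print(markov_blanket(dag, "Rain"))   # {'Season', 'WetGrass', 'Traffic', 'Sprinkler'}
```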
## 5 Conclusion
We have seen here how the phenomenon of contextuality in physical preparations or measurements follows from the separability - non-entanglement - of observers or their components, and how the undecidability of the QFP similarly hinges on separability. Practical measurements of entanglement by separable observers - e.g. in Bell/EPR experiments - avoid these consequences via a fine-tuning assumption. Such assumptions can invoke "preferred" or _a priori_ selected QRFs, assumptions that thermodynamic exchange is decoupled from measurement in the observer, the environment, or both, or more general impositions
of _a priori_ causal models that circumscribe the task environment. The results of §3.2, in particular, show that no observer can determine that a causal model captures all the potential side-effects of any action.
Abramsky [16] has previously considered the relationship between contextuality and "liar paradox" sentences, e.g. "this sentence is false." Such sentences play a key role in proofs of Godel's 1st incompleteness theorem, where such a sentence is proven to be true if and only if it is not provable within a particular axiomatic system [50]. As a principled restriction on the applicability of classical causal models, which like proofs comprise sequences of sentences connected by law-like relations, contextuality speaks, like Godel's theorem, to the limitations of logical or lawful deduction. Corollary 3.2 is particularly Godelean, as it shows that some side effects (true sentences) cannot be predicted by any given causal model (proof). The insufficiency of "laws" as a basis for physics in the quantum era has of course been noted previously [95].
While contextuality is often (e.g. in [16]) regarded as paradoxical, when the conditions for contextuality are restated in terms of the FP or the QFP, they appear somewhat inevitable. A solution to either the FP or the QFP would, indeed, seem to require an infinite or omniscient observer [29, 33]. The fine-tuning assumptions that crop up ubiquitously in situations described as noncontextual reinforce this, suggesting that it is the assumption of noncontextuality - and the Laplacian omniscience it seems to require - that is paradoxical. It is a paradox which carries with it an irreconcilable difference between operational and ontological models. Perhaps that is one reason why quantum theory - when viewed as a _mechanical_ theory of moving objects - remains up to the present day a theory that falls short of being universally comprehensible, as R. Feynman has famously asserted.
## Conflict of interest
The authors report no conflicts of interest involved in this work.
## Acknowledgement
We thank Eric Dietrich and Antonino Marciano for prior discussions on the frame problem and contextuality, respectively.
## Funding
This work received no external funding.
|
2305.09668
|
Mean Estimation Under Heterogeneous Privacy: Some Privacy Can Be Free
|
Differential Privacy (DP) is a well-established framework to quantify privacy
loss incurred by any algorithm. Traditional DP formulations impose a uniform
privacy requirement for all users, which is often inconsistent with real-world
scenarios in which users dictate their privacy preferences individually. This
work considers the problem of mean estimation under heterogeneous DP
constraints, where each user can impose their own distinct privacy level. The
algorithm we propose is shown to be minimax optimal when there are two groups
of users with distinct privacy levels. Our results elicit an interesting
saturation phenomenon that occurs as one group's privacy level is relaxed,
while the other group's privacy level remains constant. Namely, after a certain
point, further relaxing the privacy requirement of the former group does not
improve the performance of the minimax optimal mean estimator. Thus, the
central server can offer a certain degree of privacy without any sacrifice in
performance.
|
Syomantak Chaudhuri, Thomas A. Courtade
|
2023-04-27T05:23:06Z
|
http://arxiv.org/abs/2305.09668v1
|
# Mean Estimation Under Heterogeneous Privacy: Some Privacy Can Be Free
###### Abstract
Differential Privacy (DP) is a well-established framework to quantify privacy loss incurred by any algorithm. Traditional DP formulations impose a uniform privacy requirement for all users, which is often inconsistent with real-world scenarios in which users dictate their privacy preferences individually. This work considers the problem of mean estimation under heterogeneous DP constraints, where each user can impose their own distinct privacy level. The algorithm we propose is shown to be minimax optimal when there are two groups of users with distinct privacy levels. Our results elicit an interesting saturation phenomenon that occurs as one group's privacy level is relaxed, while the other group's privacy level remains constant. Namely, after a certain point, further relaxing the privacy requirement of the former group does not improve the performance of the minimax optimal mean estimator. Thus, the central server can offer a certain degree of privacy without any sacrifice in performance.
## I Introduction
Privacy-preserving techniques in data mining and statistical analysis have a long history [1, 2, 3], and are increasingly mandated by laws such as the GDPR in Europe [4] and the California Consumer Privacy Act (CCPA) [5]. The current de facto standard for privacy - Differential Privacy (DP) - was proposed by [6, 7]. Recent extensions of DP include Renyi-DP [8], Concentrated-DP [9], and Zero-Concentrated-DP [10].
Statistical problems like mean estimation under privacy constraints are important in real-world applications, and there is a need to understand the trade-off between accuracy and privacy. Most existing works consider a uniform privacy level for all users (see, e.g., [11]) and do not capture the heterogeneity in privacy requirements encountered in the real-world. Such heterogeneity frequently emerges as users balance their individual privacy options against the utility they desire from a service. Thus, a natural question arises: how should one deal with heterogeneous privacy for statistical tasks, such as optimal mean estimation? The effect of heterogeneity of privacy levels on accuracy is not well-understood; here, we make an effort to further the understanding of this trade-off by focusing on the mean estimation problem as a step in this direction.
We remind the readers that in the classical estimation problem without privacy constraints, the mean squared error decays as \(1/n\), where \(n\) is the sample size. While the same decay is also observed under homogeneous DP constraints, the cost of DP is present in the second-order term, generally of the form \(1/(\epsilon n)^{2}\), where \(\epsilon\) is the privacy level [12].
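To make these orders of magnitude concrete for bounded data in \([0,1]\): the standard Laplace mechanism adds \(\mathrm{Lap}(1/(n\epsilon))\) noise to the empirical mean (the mean's sensitivity to one record is \(1/n\)), giving a mean squared error of roughly \(\mathrm{Var}(X)/n+2/(n\epsilon)^{2}\). The sketch below is an illustrative homogeneous-DP baseline, not the estimator studied in this paper:

```python
import numpy as np

def laplace_mean(x: np.ndarray, eps: float, rng) -> float:
    """eps-DP release of the mean of data in [0, 1] via the Laplace mechanism.
    Sensitivity of the mean under one-record change is 1/n."""
    n = len(x)
    return float(np.mean(x) + rng.laplace(scale=1.0 / (n * eps)))

rng = np.random.default_rng(0)
n, eps, true_mean = 1000, 0.5, 0.3
errors = []
for _ in range(5000):
    x = rng.binomial(1, true_mean, size=n).astype(float)     # bounded data in [0, 1]
    errors.append((laplace_mean(x, eps, rng) - true_mean) ** 2)

print(np.mean(errors))                                        # empirical MSE
print(true_mean * (1 - true_mean) / n + 2 / (n * eps) ** 2)   # ~ Var(X)/n + 2/(n eps)^2
```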
### _Our Contribution_
We consider the problem of univariate mean estimation of bounded random variables under the Central-DP model with Heterogeneous Differential Privacy (HDP). While the proposed scheme for mean estimation can handle arbitrary heterogeneity in the privacy levels, we prove the minimax optimality of the algorithm for the case of two groups of users with a distinct privacy constraint for each group. This setting is particularly relevant for social media platforms, where studies have found two broad groups of users - one with high privacy sensitivity and another that does not care [13, Example 2]. This two-level setting is also a good first-order approximation to scenarios where users have some minimum privacy protections (e.g., ensured by legislation), but may also opt-in to greater privacy protections (e.g., the 'do not sell my information' option mandated by the CCPA). The setting also includes when one group's data is public, corresponding to some already known information. For the general case of every user having a distinct privacy level, experiments confirm the superior performance of our proposed algorithm over other methods. We view this work as a step in understanding the trade-off in the heterogeneity of privacy and accuracy; directions for further investigation are outlined in Section V.
Out of a total of \(n\) users, a fraction \(f\) are in the first group, and the rest are in the second group. Every user in the first group has a privacy level of \(\epsilon_{1}\) and the second group has a privacy level of \(\epsilon_{2}\) (\(\epsilon_{2}\geq\epsilon_{1}\)). As in homogeneous DP, one might expect better accuracy in mean estimation as \(\epsilon_{2}\) is increased keeping \(n,f,\epsilon_{1}\) fixed1. However, we show that after a certain critical value, increasing \(\epsilon_{2}\) provides no further improvement in the accuracy of our estimator. By matching upper and lower bounds, we show that this phenomenon is fundamental to the problem and not an artifact of our algorithm. As a corollary of this saturation phenomenon, having a public dataset (\(\epsilon_{2}\rightarrow\infty\)) has no particular benefit for mean estimation. Thus, the central-server can advertise and offer extra privacy up to the critical value of \(\epsilon_{2}\) to the second group while not sacrificing the estimation performance.
Footnote 1: In the DP framework, higher \(\epsilon\) corresponds to lower privacy.
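The algorithm analysed in this paper is not reproduced in this excerpt. As a purely illustrative baseline for the two-group setting just described, one could privatize each group's mean with its own Laplace noise and combine the two releases with inverse-variance weights; the sketch below is our own toy construction and makes no claim to the minimax optimality established here:

```python
import numpy as np

def two_group_mean(x1, x2, eps1, eps2, rng):
    """Toy heterogeneous-DP mean estimate for data in [0, 1]:
    each group's mean is privatized at its own level, then the two noisy
    releases are combined with inverse-variance weights."""
    n1, n2 = len(x1), len(x2)
    m1 = np.mean(x1) + rng.laplace(scale=1.0 / (n1 * eps1))
    m2 = np.mean(x2) + rng.laplace(scale=1.0 / (n2 * eps2))
    v1 = 1.0 / (4 * n1) + 2.0 / (n1 * eps1) ** 2   # crude variance proxies (Var(X) <= 1/4)
    v2 = 1.0 / (4 * n2) + 2.0 / (n2 * eps2) ** 2
    w1 = (1 / v1) / (1 / v1 + 1 / v2)
    return w1 * m1 + (1 - w1) * m2

rng = np.random.default_rng(1)
x1 = rng.uniform(0, 1, size=800)   # privacy-sensitive group, eps1 = 0.1
x2 = rng.uniform(0, 1, size=200)   # relaxed group, eps2 = 5.0
print(two_group_mean(x1, x2, eps1=0.1, eps2=5.0, rng=rng))
```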
We stress that our results do not assume the fraction \(f\) to be constant. For example, for a fixed \(n\), one could take \(f=0\) or \(f=1\) to recover results known for the homogeneous DP setting. One could also consider \(f\) to depend on \(n\); e.g., consider \(1-f=c/n\) and \(\epsilon_{2}\rightarrow\infty\) to denote a constant number
|
2306.10669
|
Repulsion, Chaos and Equilibrium in Mixture Models
|
Mixture models are commonly used in applications with heterogeneity and
overdispersion in the population, as they allow the identification of
subpopulations. In the Bayesian framework, this entails the specification of
suitable prior distributions for the weights and location parameters of the
mixture. Widely used are Bayesian semi-parametric models based on mixtures with
infinite or random number of components, such as Dirichlet process mixtures or
mixtures with random number of components. Key in this context is the choice of
the kernel for cluster identification. Despite their popularity, the
flexibility of these models and prior distributions often does not translate
into interpretability of the identified clusters. To overcome this issue,
clustering methods based on repulsive mixtures have been recently proposed. The
basic idea is to include a repulsive term in the prior distribution of the
atoms of the mixture, which favours mixture locations far apart. This approach
is increasingly popular and allows one to produce well-separated clusters, thus
facilitating the interpretation of the results. However, the resulting models
are usually not easy to handle due to the introduction of unknown normalising
constants. Exploiting results from statistical mechanics, we propose in this
work a novel class of repulsive prior distributions based on Gibbs measures.
Specifically, we use Gibbs measures associated to joint distributions of
eigenvalues of random matrices, which naturally possess a repulsive property.
The proposed framework greatly simplifies the computations needed for the use
of repulsive mixtures due to the availability of the normalising constant in
closed form. We investigate theoretical properties of such class of prior
distributions, and illustrate the novel class of priors and their properties,
as well as their clustering performance, on benchmark datasets.
|
Andrea Cremaschi, Timothy M. Wertz, Maria De Iorio
|
2023-06-19T02:00:03Z
|
http://arxiv.org/abs/2306.10669v1
|
# Repulsion, Chaos and Equilibrium in Mixture Models
###### Abstract
Mixture models are commonly used in applications with heterogeneity and overdispersion in the population, as they allow the identification of subpopulations. In the Bayesian framework, this entails the specification of suitable prior distributions for the weights and location parameters of the mixture. Widely used are Bayesian semi-parametric models based on mixtures with infinite or random number of components, such as Dirichlet process mixtures (Lo, 1984) or mixtures with random number of components (Miller and Harrison, 2018). Key in this context is the choice of the kernel for cluster identification. Despite their popularity, the flexibility of these models and prior distributions often does not translate into interpretability of the identified clusters. To overcome this issue, clustering methods based on repulsive mixtures have been recently proposed (Quinlan et al., 2021). The basic idea is to include a repulsive term in the prior distribution of the atoms of the mixture, which favours mixture locations far apart. This approach is increasingly popular and allows one to produce well-separated clusters, thus facilitating the interpretation of the results. However, the resulting models are usually not easy to handle due to the introduction of unknown normalising constants. Exploiting results from statistical mechanics, we propose in this work a novel class of repulsive prior distributions based on Gibbs measures. Specifically, we use Gibbs measures associated to joint distributions of eigenvalues of random matrices,
which naturally possess a repulsive property. The proposed framework greatly simplifies the computations needed for the use of repulsive mixtures due to the availability of the normalising constant in closed form. We investigate theoretical properties of such class of prior distributions, and illustrate the novel class of priors and their properties, as well as their clustering performance, on benchmark datasets.
## 1 Mixture models with repulsive component
Mixture models are a very powerful and natural statistical tool to model data from heterogeneous populations. In a mixture model, observations are assumed to have arisen from one of \(M\) (finite or infinite) groups, each group being suitably modelled by a density, typically from a parametric family. The density of each group is referred to as a component of the mixture, and is weighed by the relative frequency (weight) of the group in the population. This model offers a conceptually simple way of relaxing distributional assumptions and a convenient and flexible method to approximate distributions that cannot be modelled satisfactorily by a standard parametric family. Moreover, it provides a framework by which observations may be clustered together into groups for discrimination or classification. For a comprehensive review of mixture models and their applications see McLachlan et al. (2000); Fruhwirth-Schnatter (2006) and Fruhwirth-Schnatter et al. (2019). A mixture model for a vector of \(d\)-dimensional observations \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\) is usually defined as:
\[\mathbf{y}_{i}\mid\mathbf{w},\mathbf{\theta},M\sim\sum_{j=1}^{M}w_{j}f(\mathbf{y}_{i}\mid\theta _{j})\quad i=1,\ldots,n \tag{1}\]
where the function \(f(\mathbf{y}\mid\mathbf{\theta})\), referred to as the _kernel_, represents the chosen sampling model for the observations (often a parametric distribution such as the Gaussian distribution), \(\mathbf{w}=(w_{1},\ldots,w_{M})\) is a vector of normalised weights and \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{M})\) is an array of kernel-specific parameters. The number of components (sub-populations) in the mixture is equal to \(M\), which can be either fixed or random. In this work, we consider the latter case.
An important feature of mixture models is their ability to identify sub-populations by allowing for clustering of the subjects. Conditionally on cluster allocation and model parameters, observations are independent and identically distributed within the groups and independent between groups. Indeed, model (1) can be re-written introducing a vector of latent allocation variables \(\mathbf{c}=(c_{1},\ldots,c_{n})\) indicating the allocation of observations to a mixture component:
\[\mathbf{y}_{i}\mid c_{i},\mathbf{\theta},M\sim f(\mathbf{y}_{i}\mid\theta_{c_ {i}})\quad i=1,\ldots,n\] \[c_{1},\ldots,c_{n}\mid\mathbf{w},M\stackrel{{\mathrm{iid} }}{{\sim}}\text{Multinomial}(1,\mathbf{w}) \tag{2}\]
where Multinomial\((1,\mathbf{w})\) denotes the multinomial distribution of size 1 and probability vector \(\mathbf{w}\). The model is completed by specifying prior distributions on the remaining
parameters:
\[\begin{split}&\theta_{1},\ldots,\theta_{M}\mid M\sim P_{0}(\mathbf{ \theta})\\ &\mathbf{w}\mid M\sim\pi_{\mathbf{w}}\\ & M\sim\pi_{M}\end{split} \tag{3}\]
Thus, observations are partitioned into clusters such that \(\theta_{c_{i}}=\theta_{c_{j}}\) iff \(i\) and \(j\) belong to the same component. The number \(K\) of unique values in the vector \(\theta_{c_{1}},\ldots,\theta_{c_{n}}\) represents the number of clusters. It is important to highlight the distinction between \(M\) and \(K\): \(M\) refers to the data-generation process and denotes the number of components in a mixture, i.e. of possible clusters/sub-populations, while the number of clusters, \(K\), represents number of allocated components, i.e. components to which at least one observation has been assigned (see Argiento and De Iorio, 2022). In general, both \(M\) and \(K\) are unknown and object of posterior inference in the study of finite mixtures (i.e., when \(M<+\infty\)). Still, even when \(M\) is fixed in a finite mixture model, i.e. the number of components in the population is fixed, we need to estimate \(K\), the actual number of clusters in the sample (allocated components) (see Rousseau and Mengersen, 2011). On the other hand, in different settings such as Bayesian nonparametrics, \(M=+\infty\) and the object of interest is only \(K\). Clustering is of importance in many applications where a more parsimonious representation of the data is desired, or where the identification of subpopulations is relevant (e.g., patients risk groups). The specific choice of the kernel, as well as the prior distribution on \(M\), the parameters \(\mathbf{\theta}\) and the weights \(\mathbf{w}\) plays a crucial role in defining the clustering output. Various features of mixture models have been carefully investigated in the literature, together with associated computational schemes (McLachlan et al., 2000; Fruhwirth-Schnatter et al., 2019; Fruhwirth-Schnatter and Malsiner-Walli, 2019; Argiento and De Iorio, 2022).
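For intuition on the distinction between \(M\) and \(K\), the following sketch forward-simulates Eqs. (1)-(3) with illustrative choices (a shifted-Poisson prior on \(M\), symmetric Dirichlet weights, a Gaussian kernel and Gaussian \(P_{0}\)) and reports both the number of components and the number of allocated components:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Illustrative priors: M ~ 1 + Poisson(3), w | M ~ Dirichlet(1,...,1),
# theta_j | M ~iid N(0, 5^2); kernel f(y | theta) = N(theta, 1).
M = 1 + rng.poisson(3)
w = rng.dirichlet(np.ones(M))
theta = rng.normal(0.0, 5.0, size=M)

c = rng.choice(M, size=n, p=w)      # latent allocations, Eq. (2)
y = rng.normal(theta[c], 1.0)       # observations, Eq. (1)

K = len(np.unique(c))               # allocated components = clusters in the sample
print(f"M (components) = {M}, K (clusters in the sample) = {K}")
```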
In this paper, we focus on the specification of a prior distribution \(P_{0}\) for the location parameters \(\mathbf{\theta}\). Typically, the location parameters \(\theta_{j}\) are assumed i.i.d. from \(P_{0}\). However, such assumption can be too restrictive in several applications where it is desirable to introduce dependence among the locations to improve interpretability of the results. A popular example of this approach is the specification of _repulsive mixtures_, which have recently attracted increasing interest in the literature on model-based clustering (see, for example, Petralia et al., 2012; Bianchini et al., 2020; Xie and Xu, 2020; Quinlan et al., 2021). The rationale behind this approach is purely empirical and based on the notion of distance between clusters, reflecting the requirement of more separated clusters to improve interpretability, a property referred to as _repulsion_. As pointed out in Hennig and Liao (2013), the properties of the clustering method used should be a reflection of the definition of clusters given by the user, rather than based on the assumption that an underlying true partition of the data exists. Following this idea, the specification of repulsive mixtures reflects the need to improve the interpretation of the resulting partition by enhancing separation between observations in different clusters. Indeed, Quinlan et al. (2017) argue that repulsion promotes the reduction of redundant mixture components (or singletons) without substantially sacrificing goodness-of-fit, favouring a-priori subjects to be allocated to a few well-separated clusters. This strategy offers a compromise between the desire to remove redundant (or singleton)
clusters that often appear when modelling location parameters of mixture components independently, and the forced parsimony induced by hard types of repulsion. An alternative definition of repulsive mixture is provided by Malsiner-Walli et al. (2017), whose approach encourages nearby components to merge into groups at a first hierarchical level and then to enforce between-group separation at the second level. A similar idea has been employed in Natarajan et al. (2021) in the context of distance-based clustering, where the repulsive term appears at the likelihood level. Finally, we note that Fouquene et al. (2019) propose the use on Non-Local priors (NLP) to select the number of components, characterised by improved parsimony obtained through the inclusion of a penalty term, and leading to well-separated components with non-negligible weight, interpretable as distinct subpopulations. Despite similarities between NLPs and repulsive over-fitted mixtures, the former approach requires not only a repulsive force between the locations, but also penalising low weight components, which leads to better model performance (Fouquene et al., 2019). Still, they fit their model for different numbers of components and compare them through model choice criteria based on estimates of the marginal likelihood, without performing full posterior inference on \(M\).
Two popular approaches to introduce repulsion among the elements of \(\mathbf{\theta}\) are determinantal point processes (DPP) and the inclusion of a repulsive term in the specification of \(P_{0}\), borrowing ideas from Gibbs point processes (GPP). There is an interesting connection between GPPs and DPPs (Georgii and Yoo, 2005; Lavancier et al., 2015) as DPPs can be considered as a subclass of GPPs, at least when they are defined on a bounded region. Since this link is of limited interested in what follows, we will not discuss it further.
DPPs were firstly introduced in Macchi (1975) as _fermion processes_, since they are used to describe the behaviour of systems of fermions, subatomic particles exhibiting an "antibunching" effect, i.e. their configuration tends to be well separated. A DPP is defined as a point process where the joint distribution of the points (i.e., the particle configuration) is expressed in terms the determinant of a positive semidefinite matrix. The repulsion property of the DPPs derives from the characterisation of the determinant as the hyper-volume of the parallelepiped spanned by the columns of the corresponding matrix. As such, the probability of a configuration grows as the columns are farther apart from each other in \(\mathbb{R}^{d}\) equipped with the Euclidean norm. See Macchi (1975), Borodin and Rains (2005), Hough et al. (2009), Kulesza et al. (2012), Lavancier et al. (2015) and references therein for theoretical and computational details on DPPs. To the best of our knowledge, Kwok and Adams (2012) are the first to employ a DPP as a prior distribution for the locations in mixture models, highlighting the repulsive property of the DPP. In their work, inference is performed via a maximum a-posteriori estimation. Affandi et al. (2013) adopt a DPP in a mixture model with a fixed number of components (the \(K\)-DPP), and propose two sampling schemes for posterior inference. More recently, Xu et al. (2016), Bianchini et al. (2020) and Beraha et al. (2022) propose the use of DPPs in Bayesian mixture models with a random number of components \(M\). Posterior inference in Xu et al. (2016) and Bianchini et al. (2020) is performed through the labour-intensive reversible jump algorithm (Green, 1995), while Beraha
et al. (2022) propose an algorithm which exploits the construction by Argiento and De Iorio (2022), and implement a sampling scheme based on the Metropolis-Hastings birth-and-death algorithm by Geyer and Moller (1994). In general, these methods do not scale well with the dimension of the parameters of interest due to the inherent double-intractability of the posterior (Murray et al., 2006), and require the implementation of tailored algorithms due to the presence of a prior on the number of components \(M\).
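The determinantal form of repulsion can be seen directly in the finite (L-ensemble) formulation, where the probability of a configuration is proportional to the determinant of the kernel matrix evaluated at its points: the determinant shrinks as points approach one another. The sketch below uses a Gaussian kernel chosen purely for illustration:

```python
import numpy as np

def dpp_weight(points: np.ndarray, lengthscale: float = 1.0) -> float:
    """Unnormalised DPP weight det(L) with a Gaussian (squared-exponential) kernel."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    L = np.exp(-d2 / (2 * lengthscale ** 2))
    return float(np.linalg.det(L))

separated = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
clumped   = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
print(dpp_weight(separated))  # close to 1: nearly orthogonal kernel columns
print(dpp_weight(clumped))    # close to 0: repulsion penalises nearby locations
```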
Another common strategy for repulsive mixtures is to directly specify a repulsion term in the prior density function corresponding to \(P_{0}\). The main idea behind this approach is to start with the usual independent prior on the locations, e.g. a product of Gaussian distributions, and include, in a fairly _ad-hoc_ manner, a multiplicative factor which represents a _penalty_ term often defined on the basis of the pairwise distances between the parameters of interest (Petralia et al., 2012; Xie and Xu, 2020; Quinlan et al., 2021). Borrowing ideas from statistical mechanics, in analogy with the behaviour of gas particles interacting with each other, the penalty term describes the repulsion among the location parameters of the mixture. This approach, arguably the most common in practice, presents computational and theoretical drawbacks. It is discussed in detail in Section 3 as it is one of the main motivations of this work. In Figure 1 we show two-dimensional realisations from an independent Gaussian prior, a DPP and the repulsive prior of Quinlan et al. (2021).
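Schematically, the penalty-based approach places a prior of the form \(p(\boldsymbol{\theta})\propto\prod_{j}\phi(\theta_{j})\prod_{i<j}\rho(\|\theta_{i}-\theta_{j}\|)\) on the locations, with \(\rho\) vanishing as two locations coincide. The sketch below evaluates such an unnormalised log-density with a generic penalty \(\rho(d)=1-e^{-d^{2}/\tau}\); the specific penalties used in the works cited above may differ:

```python
import numpy as np

def log_repulsive_prior(theta: np.ndarray, tau: float = 0.5, sigma0: float = 3.0) -> float:
    """Unnormalised log-density: independent Gaussian base measure on each location,
    times a pairwise penalty rho(d) = 1 - exp(-d^2 / tau) vanishing as locations coincide."""
    base = -0.5 * np.sum(theta ** 2) / sigma0 ** 2           # Gaussian base, up to a constant
    penalty = 0.0
    M = theta.shape[0]
    for i in range(M):
        for j in range(i + 1, M):
            dist2 = np.sum((theta[i] - theta[j]) ** 2)
            penalty += np.log1p(-np.exp(-dist2 / tau))       # log(1 - exp(-d^2 / tau))
    return float(base + penalty)

spread = np.array([[0.0, 0.0], [2.0, 2.0], [-2.0, 1.0]])
close  = np.array([[0.0, 0.0], [0.05, 0.0], [-2.0, 1.0]])
print(log_repulsive_prior(spread))  # larger log-density: well-separated locations
print(log_repulsive_prior(close))   # heavily penalised: two locations nearly coincide
```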
In this work, we take a different approach and specify tractable joint distributions, still within the class of GPPs, presenting connections with statistical mechanics and the mathematical theory of gases. We specify a novel class of repulsive distributions for the location parameters of mixture models with random number of components, based on GPPs and specifically on the joint distribution of the eigenvalues of random matrices, once again providing an interpretation of the concept of repulsion in terms of interacting particles. These distributions are linked to the joint Gibbs canonical distributions used to model _Coulomb gases_, also called log-gases (Dyson, 1962; Forrester, 2010).
The paper is structured as follows. In Section 2 we recall basic concepts of statistical mechanics, while in Section 3 we describe the link between Gibbs Point Processes and repulsive mixtures. In Section 4 we introduce the novel class of repulsive prior distributions, based on the joint law of eigenvalues of random matrices, whose theoretical properties are investigated in Section 5. In Section 6 we prove that the proposed class of prior distributions satisfies the large deviation principle. In Section 7 we specify the full mixture model with random number of components and a repulsive prior on the locations. We demonstrate the approach in simulations and on a real data application in Section 7.1. Finally, we conclude the paper in Section 8.
## 2 General concepts of statistical mechanics
In this Section, we introduce basic concepts from statistical mechanics necessary for the following development.
An individual configuration of the particles is referred to as _microstate_, while the macroscopic observables of interest (e.g., energy of the system) are called _macrostates_. Each macrostate can be associated to several microstates, since different particle configurations can lead to the same macroscopic quantity of interest. An _ensemble_ is a set of microstates of a system that are consistent with a given macrostate. In this framework, statistical mechanics examines an ensemble of microstates corresponding to a given macrostate by providing their probability distribution. Therefore, the macroscopic properties of a system are derived from probability distributions describing the interactions among the particles. Note that these probability distributions are _conditional_ on a particular macrostate. The systems studied in statistical mechanics often involve particles interacting with an external environment, called a _reservoir_. In particular, the microstates are influenced by the macroscopic features of the surrounding environment, thus characterising their probability distribution. The three main ensembles, corresponding to different assumptions on the system conditions, are (Landau and Lifshitz, 1968; Mandl, 1991):
* the _microcanonical_ ensemble: assumes an isolated system in which the energy is pre-specified. There are no exchanges, either of energy or particles, with the surrounding environment.
* the _canonical_ ensemble: assumes a system interacting with an external reservoir. The system is kept at a fixed temperature, while the energy is allowed to vary with the particle configuration. The total energy of the combined system (reservoir and particles) is fixed. The probability of each configuration is given by the Boltzmann distribution.
* the _grand canonical_ ensemble: assumes a system where both energy and particles are allowed to fluctuate between the system and the reservoir. The probability distribution describing the particle configuration presents an additional term involving the number of particles in the system. Both the total energy and number of particles of the combined systems are fixed. The distribution of the particle configuration is known as the Gibbs distribution.

Figure 1: Samples of size \(N=100\) points from three different distributions in \(\mathbb{R}^{2}\): (a) independent Gaussian; (b) Determinantal Point Process; (c) Repulsive mixture as in Quinlan et al. (2021).
Statistical mechanics often focuses on a system in _equilibrium_, i.e. for which the probability distribution over the ensemble does not have an explicit time dependence [Tuckerman, 2010]. We assume that this is the case throughout this work. From a mathematical point of view, this implies that the probability distribution of the particles configuration in a particular ensemble is a function of the total energy as expressed by the _Hamiltonian_, an operator containing kinetic and potential energy terms describing the system [Tuckerman, 2010].
Let \(\boldsymbol{\theta}=(\theta_{1},\ldots,\theta_{M})\) be a set of \(M\) random variables describing the position of the particles on the \(d\)-dimensional lattice \(\mathbb{Z}^{d}\) and let \(\Omega\) be the set of possible configurations of such particles. The internal energy of the system \(\overline{E}\) can be computed via the Hamiltonian as a function of the microstate \(\omega\in\Omega\) describing the pairwise interactions between the particles and the relationship with the reservoir. A general expression for the Hamiltonian of a system is:
\[\mathcal{H}(\boldsymbol{\theta}\mid\zeta,h)=h\sum_{i=1}^{M}\psi_{1}(\theta_{ i})+\zeta\sum_{1\leq i<j\leq M}\psi_{2}(\theta_{i},\theta_{j}) \tag{4}\]
where \(h\), \(\zeta\in\mathbb{R}\), while \(\psi_{1}\) and \(\psi_{2}\) represent the _self energy_ and the _interaction function_, respectively [Domb, 2000]. The latter specifies how the particles interact with each other, often through the use of pairwise terms. On the other hand, \(\psi_{1}\) captures how the particles are affected by the presence of fields external to the system. An example of Hamiltonian is the one corresponding to the popular Ising model, used to study the behaviour of a system of ferromagnetic particles immersed in a magnetic field. In this case, the Hamiltonian is:
\[\mathcal{H}(\boldsymbol{\theta}\mid\zeta,h)=h\sum_{i=1}^{M}\theta_{i}+\zeta \sum_{i,j:|i-j|=1}\theta_{i}\theta_{j}\]
where \(\theta_{i}\in\{-1,1\}\), indicating the positive or negative spin of the particles, and \(\zeta>0\) is the _inverse temperature_.
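As a worked instance of the Hamiltonian in Eq. (4), the sketch below evaluates the one-dimensional Ising energy for a given spin configuration, summing each neighbouring pair once and using free boundaries (a conventional choice made here for illustration):

```python
import numpy as np

def ising_energy(spins: np.ndarray, h: float, zeta: float) -> float:
    """H(theta | zeta, h) = h * sum_i theta_i + zeta * sum over nearest-neighbour pairs
    of theta_i theta_j, for a 1-D chain with free boundaries; spins take values in {-1, +1}."""
    field_term = h * np.sum(spins)
    interaction_term = zeta * np.sum(spins[:-1] * spins[1:])
    return float(field_term + interaction_term)

aligned   = np.array([+1, +1, +1, +1])
alternate = np.array([+1, -1, +1, -1])
print(ising_energy(aligned,   h=0.0, zeta=1.0))   #  3.0
print(ising_energy(alternate, h=0.0, zeta=1.0))   # -3.0
```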
A fundamental concept in physics is that of Entropy, which plays a crucial role in determining the probability distribution of the configuration of particles, given different types of ensemble [Maxwell, 1860]. The Entropy of a system is given by [Boltzmann, 1866]:
\[S=\kappa_{B}\ln W \tag{5}\]
where \(\kappa_{B}\approx 1.38\times 10^{-23}\)J/K is the Boltzmann constant, and \(W\) is the number of microstates associated with a given macrostate \(\overline{E}\) (e.g., energy of the system). Note that \(W\) depends on the type of ensemble assumed to describe the system of particles and on the assumption of equilibrium. Given \(W\), the probability distributions of each of the three ensembles can be derived (Landau and Lifshitz, 1968, 1980; Bowley and Sanchez, 1999):
* Microcanonical ensemble. Boltzmann's _postulate of equal a-priori probabilities_ assumes that each configuration of the particles is equally probable. Therefore, since in this case this is an isolated system with fixed energy and number of particles, the probability of observing a microstate \(\omega\) is given by the inverse of the number of microstates \(W\), i.e. \(p_{\omega}=1/W\).
* Canonical ensemble. The Boltzmann distribution is given by: \[p_{\omega}=\frac{e^{-\mathcal{H}(\boldsymbol{\theta}|\zeta,h)/(\kappa_{B}T)}}{Z_{M}^{c}} \tag{6}\] where the normalising constant \(Z_{M}^{c}=\sum_{\omega\in\Omega}e^{-\mathcal{H}(\boldsymbol{\theta}|\zeta,h)/\kappa_{B}T}\) is the _partition function_. Here \(T\) is the temperature and \(\kappa_{B}\) acts as a unit conversion factor in the description of thermodynamic systems; the combination \(1/(\kappa_{B}T)\) is usually called the _inverse temperature_. A numerical sketch of this distribution for a small system is given after this list.
* Grand canonical ensemble. The additional assumption of exchange of particles between the system and the reservoir affects the expression of the probability distribution, by including an additional term reflecting the migration of particles between the two sub-systems: \[p_{\omega}=\frac{e^{(\mu N_{\omega}-\mathcal{H}(\boldsymbol{\theta}|\zeta,h))/(\kappa_{B}T)}}{Z_{M}^{gc}} \tag{7}\] where \(\mu\) is the _chemical potential_, \(N_{\omega}\) is the number of particles in the microstate \(\omega\) and \(Z_{M}^{gc}=\sum_{\omega\in\Omega}e^{(\mu N_{\omega}-\mathcal{H}(\boldsymbol{\theta}|\zeta,h))/\kappa_{B}T}\) is the partition function.
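For a small system, the Boltzmann probabilities of Eq. (6) can be computed exactly by enumerating all microstates. The sketch below does this for a three-spin Ising chain with \(\kappa_{B}T\) set to \(1\) in arbitrary units; the energy function from the previous sketch is redefined here so the block is self-contained:

```python
import itertools
import numpy as np

def ising_energy(spins, h=0.0, zeta=1.0):
    spins = np.asarray(spins, dtype=float)
    return h * spins.sum() + zeta * np.sum(spins[:-1] * spins[1:])

kT = 1.0                                                  # kappa_B * T in arbitrary units
states = list(itertools.product([-1, +1], repeat=3))      # all 2^3 microstates
weights = np.array([np.exp(-ising_energy(s) / kT) for s in states])
Z = weights.sum()                                         # partition function Z_M^c
probs = weights / Z                                       # Boltzmann distribution, Eq. (6)

for s, p in zip(states, probs):
    print(s, round(float(p), 4))
print("sum of probabilities:", probs.sum())               # 1.0
```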
From an information-theoretic perspective, the above distributions have the property of maximising the Shannon entropy, a measure of the amount of information or uncertainty about the possible outcomes of a random variable (Shannon, 1948). In statistical mechanics, this is referred to as the Gibbs entropy. Given the postulate of a-priori probabilities, the Boltzmann entropy is a special case of the Shannon entropy, where all the probabilities are equal. For more details on this topic see Jaynes (1965). The higher the value of the entropy for the distribution of microstates over an ensemble, the higher the uncertainty around the distribution of the particle configurations. In practice, choosing the distribution maximising the entropy of a system, i.e. the Boltzmann distribution, corresponds to choosing the flattest possible distribution over the microstates compatible with the available information, i.e. the macrostate \(\overline{E}\). In a Bayesian setting, this concept is analogous to that of Gibbs posterior (see, for instance, Jiang and Tanner, 2008; Rigon et al., 2020, and references therein), where a
likelihood-free approach is devised for the estimation of a set of parameters of interest, directly specifying their posterior distribution via a standard prior and a term depending on a loss function with desired properties, yielding an expression similar to Eq. (6). Bissiri et al. (2016) show that this approach has good theoretical properties relating to a maximum-entropy principle. Finally, this approach is reminiscent of the "product of approximate conditionals" (PAC) likelihood (Cornuet and Beaumont, 2007; Li and Stephens, 2003).
This work focuses on the description and discussion of Bayesian mixing distributions characterised by a repulsive term. The latter presents an interesting analogy with the distributions arising in statistical mechanics under different ensemble assumptions. Indeed, there is a parallelism between particles in a thermodynamic system and location parameters in a mixture model with repulsion terms. The locations and the number of components of the mixture can be associated to the position and number of particles of a physical system, while the repulsion term in their prior distribution can be related to their interaction in a given configuration (i.e. the function \(\psi_{2}\) in Eq. (4)). The three ensembles are recovered by making different assumptions on the prior for the locations. For instance, the microcanonical ensemble is recovered by assuming a mixture model with fixed number of components and no repulsion term with the locations i.i.d. draws from \(P_{0}\), the canonical ensemble corresponds to a mixture model with fixed number of components and non-zero repulsion (introducing dependence among the locations), while the grand canonical ensemble additionally allows for a prior distribution on the number of components.
## 3 Gibbs Point Processes and Repulsive Mixtures
Gibbs Point processes (Daley et al., 2003, page 127) are a fundamental class of point processes arising in statistical physics to describe forces acting on and between particles. Moreover, point processes can be seen as limits of ensembles (Holcomb, 2015). Gibbs processes are generated by interaction potentials, as described in the previous section. The total potential energy corresponding to a given configuration of particles is assumed to be decomposable into terms representing the interactions between the particles taken in pairs, triples, and so on; first-order terms representing the potential energies of the individual particles due to the action of an external force field may also be included. Given a _potential_\(\mathcal{H}\), a Gibbs process, i.e. homogeneous spatial point pattern (Daley et al., 2003), is directly specified by using the _Boltzmann distribution_ for the configuration of a set of particles (also referred to as Gibbs canonical distribution), with Janossy density:
\[p_{M}\left(\boldsymbol{\theta}\mid\zeta\right)=Z_{M}^{-1}(\zeta)\exp\left\{- \zeta\mathcal{H}(\boldsymbol{\theta})\right\} \tag{8}\]
where \(\boldsymbol{\theta}=(\theta_{1},\ldots,\theta_{M})\).
In Eq. (8) \(\zeta\) is a parameter controlling the amount of repulsion and \(Z_{M}(\zeta)\) is a normalisation constant called _partition function_, which plays an important role in the study
of particle systems, as well as in the study of repulsive mixture models, as discussed later. Point processes, involved in the description of particle configurations, are usually characterised by interaction between the points. The interaction can be attractive or repulsive, depending on geometrical features, whereas the null interaction is associated to the well-known Poisson point process. Frequently, it is supposed that only the first- and second-order terms need to be included, so that the process is determined by the point pair potentials. In this case we have _repulsive interactions with pair potential_:
\[\mathcal{H}\left(\boldsymbol{\theta}\right)=\sum_{i=1}^{M}\psi_{1}(\theta_{i} )+\sum_{1\leq i<j\leq M}\psi_{2}(\theta_{i},\theta_{j}) \tag{9}\]
for appropriate choice of \(\psi_{1}\) and \(\psi_{2}\). Commonly, three types of potentials are used to model the pairwise interactions (Daley et al., 2003):
\[\begin{split}\phi_{1}(r)&=-\log\left(1-e^{-(r/\zeta)^{2}}\right)\\ \phi_{2}(r)&=\left(\zeta/r\right)^{m_{1}}-\left(\zeta/r\right)^{m_{2}},\quad m_{1}>m_{2}\geq 0\\ \phi_{3}(r)&=\infty\,\mathbb{I}_{[0,\zeta]}(r)\end{split} \tag{10}\]
where \(r\) is the distance between two particles and \(\zeta>0\) is a tuning parameter. These three pairwise potentials are all functions of the pairwise interactions, captured by the distance between points \(r\), and can be used to specify \(\psi_{2}\).
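As an illustration, the three pair potentials in Eq. (10) are straightforward to evaluate numerically. The following minimal Python sketch (the value of \(\zeta\) and the classical 12-6 exponents used as defaults for \(\phi_{2}\) are illustrative choices) computes the total pairwise energy of a small configuration when \(\phi_{1}\) is used as \(\psi_{2}\).

```python
import numpy as np

def phi_1(r, zeta):
    # Soft-core potential: -log(1 - exp(-(r/zeta)^2)); diverges as r -> 0.
    return -np.log(1.0 - np.exp(-(r / zeta) ** 2))

def phi_2(r, zeta, m1=12, m2=6):
    # Lennard-Jones-type potential (m1 > m2 >= 0); the 12-6 exponents are illustrative defaults.
    return (zeta / r) ** m1 - (zeta / r) ** m2

def phi_3(r, zeta):
    # Strauss hard-core potential: infinite when two points are closer than zeta.
    return np.where(r <= zeta, np.inf, 0.0)

# Total pairwise repulsion of a configuration theta, using phi_1 as psi_2:
theta = np.array([0.2, 1.5, 3.1])
r = np.abs(theta[:, None] - theta[None, :])   # pairwise distances
iu = np.triu_indices(len(theta), k=1)         # pairs with i < j
print(phi_1(r[iu], zeta=1.0).sum())
```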
The works of Petralia et al. (2012), Xie and Xu (2020) and Quinlan et al. (2021) are based on the above pairwise potentials and propose repulsive prior distributions of the form:
\[\begin{split}p_{M}\left(\boldsymbol{\theta}\mid\zeta\right)&\propto\left(\prod_{m=1}^{M}g(\theta_{m})\right)\left(\prod_{1\leq i<j\leq M}h_{s}\left(\left\|\theta_{i}-\theta_{j}\right\|\right)\right)\\ &\propto\exp\left\{\sum_{m=1}^{M}\log g(\theta_{m})-\sum_{1\leq i<j\leq M}\phi_{s}\left(\left\|\theta_{i}-\theta_{j}\right\|\right)\right\}\end{split}\] with \(h_{s}=e^{-\phi_{s}}\), where \(s=1\) in Quinlan et al. (2021), \(s=2\) in Petralia et al. (2012) and \(s=3\) in Beraha et al. (2022).
The repulsive prior is specified by setting \(\psi_{1}=-\log p_{\boldsymbol{\mu}}\) as in Quinlan et al. (2021), where \(p_{\boldsymbol{\mu}}\) is typically chosen to be a Normal distribution with mean zero. This yields \(\psi_{1}(\theta_{i})\propto\theta_{i}^{2}\), while the repulsion term \(\psi_{2}\) is a function of \((\theta_{i}-\theta_{j})^{2}\) in Quinlan et al. (2021) and of the Euclidean distance between pairs of locations in Petralia et al. (2012); both are special cases of repulsive interactions with pair potential. Both papers consider extensions. These approaches present serious drawbacks that will be discussed later.
On the other hand, Xie and Xu (2020) use the same form for \(\psi_{1}\), but the repulsive part of the prior, still belonging to the class of GPPs, does not simplify to a function
of pairwise interactions. The fact that \(\psi_{2}\) does not simplify as before forces them to make stronger assumptions on \(\mathcal{H}\) to ensure the existence of the partition function \(Z_{M}(\zeta)\). In particular, the authors show that the partition function \(Z_{M}(\zeta)\) is bounded as a function of the number of components \(M\) when the repulsion term is smaller than 1 and when square integrability of a function of the norm \(\|\theta_{i}-\theta_{j}\|_{2}\) is assumed. In more detail, the latter assumption states that \(\int_{\mathbb{R}}\int_{\mathbb{R}}\left(\log\left(\hat{g}\left(\|\theta_{i}-\theta_{j}\|_{2}\right)\right)\right)^{2}d\theta_{i}d\theta_{j}<+\infty\), for \(\hat{g}:\mathbb{R}^{+}\rightarrow[0,1]\) a strictly monotonically increasing function s.t. \(\hat{g}(0)=0\). We point out that the first assumption is not necessary for the prior distributions presented in this work. Moreover, the authors use the second condition to prove a theorem similar to the large deviation principle discussed later.
When \(\psi_{2}\) is chosen to be one of the functions in Eq. (10), the Janossy density of the resulting GPP factorises. These potentials have an interpretation in statistical mechanics. The first type of potential arises in the study of the behaviour of gases (Ruelle, 1970; Preston, 1976), and is reminiscent of earlier work involving the Morse potential, used to describe interatomic interactions (Morse, 1929). The second type of potential is the Lennard-Jones potential (Jones, 1924), which has been investigated in relation to the study of gases (such as argon) whose repulsive and attractive forces follow an inverse power of the distance between the particles. Potential \(\phi_{3}\) is called the Strauss potential (Strauss, 1975), and it was first introduced to test the hypothesis that a collection of points in space is distributed uniformly. It is constructed by replacing the distance between two points \(r\) by an indicator variable describing the proximity of the points in terms of a given radius. This potential is an example of a "hard-core" potential, due to the abrupt change in repulsive force imposed on the particles, describing the situation in which the particles are hard spheres of radius \(\zeta\), with their centres corresponding to the points in the configuration.
In conclusion, in Bayesian mixture models, the specification of a repulsive joint prior distribution often reduces to multiplying a standard density (usually corresponding to the independence assumption) by a repulsive term, usually a function of the pairwise distances between the locations (see previous section). In principle, this approach allows the specification of a wide range of joint distributions for the parameters of interest, governed by some tuning parameters. Indeed, Gibbs processes are appealing in terms of flexibility and interpretability, but are typically intractable due to the normalising constant \(Z_{M}(\zeta)\), i.e. the partition function. To deal with this issue, Ogata and Tanemura (1981, 1984) provide an approximation of the joint probability distribution for maximum likelihood estimation, while, under some conditions, bounds on the partition function can be derived (Dobrushin and Shlosman, 1985, 1986). In the context of finite mixture models (i.e., when \(M\) is fixed), as in Petralia et al. (2012) and Quinlan et al. (2021), the knowledge of \(Z_{M}(\zeta)\) is not needed to perform posterior inference, which is usually based on Metropolis-Hastings algorithms as the number of particles is fixed. When a prior distribution is assumed on the number of components of the mixture \(M\), knowledge of the normalising constant \(Z_{M}(\zeta)\) enables simpler computations, as opposed to the approach of (Beraha et al., 2022) which requires non-trivial adaptation of birth-and-death Metropolis-Hastings algorithms and the exchange algorithm of Murray et al. (2006). Alternatively, Xie and Xu (2020) construct an _ad-hoc_ prior
for \(M\) which contains the normalising constant \(Z_{M}(\zeta)\) in its specification to remove such issues, since \(Z_{M}(\zeta)\) cancels out when evaluating Metropolis-Hastings acceptance probabilities. Although theoretical results on the relationship between the number of components and \(Z_{M}(\zeta)\) are shown by Xie and Xu (2020), this choice of prior distribution for \(M\) does not allow the inclusion of relevant a-priori information into the model, making interpretation and tuning of the hyper-parameters difficult. As already pointed out by Murray (2007) and Murray and Ghahramani (2012), the prior choice of Xie and Xu (2020) tends to dominate posterior inference. Furthermore, the intractability of \(Z_{M}(\zeta)\) does not allow inference to be performed on parameters such as the strength of the repulsion. Another drawback of these previous approaches is the fact, already pointed out by Quinlan et al. (2021), that the joint distribution for the location parameters specified in this way is not _sample-size consistent_, i.e. the \((M-1)\)-th dimensional distribution cannot be derived from appropriate marginalisation of the \(M\)-dimensional distribution, leading to theoretical and computational issues. For instance, it is not possible to specify a mixture model with a random number of components in a standard way, and consequently posterior inference across dimensions is unfeasible.
## 4 Repulsive priors obtained from random matrices
In this work, we take a different approach and specify tractable joint distributions, still within the class of GPPs, exploiting results from random matrix theory. Such distributions still present interesting connections with the mathematical theory of gases. Specifically, we consider the joint Gibbs canonical distributions used to model _Coulomb gases_, also called log-gases (Dyson, 1962; Forrester, 2010; Mehta and Dyson, 1963; De Monvel et al., 1995b). These distributions are obtained starting from a potential \(\mathcal{H}\) with logarithmic pairwise interactions, following the same approach used to define the Boltzmann distribution in (8). Coulomb gases (Dyson, 1962) provide an example of analytically tractable systems, with particles described as infinitely long parallel charged lines. In the case of a _one-component_ Coulomb system, all \(M\) particles have the same charge, \(\sqrt{\zeta}\), say. For such gases, the interaction between the particles is described by a logarithmic function in the expression of the Hamiltonian:
\[\phi_{\zeta}(r_{ij})=-\zeta\log{(r_{ij})} \tag{11}\]
where \(r_{ij}\) is the distance between particles \(i\) and \(j\). The parameter \(\zeta>0\) characterises the strength of the interaction between particles. In this case, a general expression for the potential \(\mathcal{H}\) for \(M\) particles and the resulting Janossy density are:
\[\mathcal{H}(\theta_{1},\ldots,\theta_{M}\mid\zeta)=\sum_{i=1}^{M}\psi_{1}(\theta_{i})-\zeta\sum_{i<j}\log(\|\theta_{i}-\theta_{j}\|) \tag{12}\] \[p_{M}(\theta_{1},\ldots,\theta_{M}\mid\zeta)=Z_{M}^{-1}(\zeta)\exp{\{-\mathcal{H}(\theta_{1},\ldots,\theta_{M}\mid\zeta)\}}\]
Different expressions of the function \(\psi_{1}\) lead to different joint distributions for the location parameters \(\boldsymbol{\theta}\). Under specific choices (discussed later), these present tractable
normalising constants (i.e., partition functions), obtained by normalising (12), making them an appealing class of distributions in statistical inference. In particular, for suitable choices of \(\psi_{1}\), the joint distributions obtained by normalising (12) coincide with those of the eigenvalues of Gaussian, Wishart and Beta random matrices (Mehta, 2004; Forrester, 2010). The link between Coulomb gases and random matrix theory, referred to as the Coulomb gas analogy, is widely recognised in the field of statistical mechanics.
The Gibbs canonical distributions of the Coulomb gases also present a link with DPPs. In particular, for different choices of \(\zeta\), \(p_{M}\) in (12) can be expressed as the determinant of specific random matrices (Mehta, 2004). These results are, in general, very technical. A tractable example is obtained for \(\zeta=2\), where \(p_{M}\) can be written as the determinant of a symmetric positive definite kernel matrix constructed from a basis of orthogonal polynomials. Suitable choices of the orthogonal polynomial basis yield the eigenvalue distributions for the Gaussian (Hermite polynomials), Wishart (Laguerre polynomials) or Beta (Jacobi polynomials) random matrices (Forrester, 2010), introduced in the following sections. The three families of distributions generated in this way inherit the names of Hermite, Laguerre and Jacobi ensembles, respectively. Moreover, in one and two dimensions, at inverse temperature \(\zeta=2\), the Coulomb system with logarithmic interactions (a.k.a. Dyson log gas in 1D) is known to be a determinantal point process, meaning that its correlation functions are given by certain determinants (Ghosh and Nishny, 2018). We refer the reader to the work by Forrester (2010) for an extensive discussion on the topic.
Here, we introduce eigenvalue distributions of random matrices whose probability density function is proportional to:
\[\prod_{l=1}^{M}\eta\left(\theta_{l}\mid\zeta\right)\prod_{1\leq i<j\leq M}\mid \theta_{i}-\theta_{j}\mid^{\zeta},\quad\zeta>0 \tag{13}\]
where \(\eta\left(\theta\mid\zeta\right)\) is a weight function characterising the resulting probability law. Some common weight functions used in random matrix theory are:
\[\eta(\theta\mid\zeta)=\left\{\begin{array}{ll}e^{-\frac{\zeta}{2}\theta^{2}}&\theta\in\mathbb{R}\quad\text{(Hermite)}\\ \theta^{\alpha\frac{\zeta}{2}}e^{-\frac{\zeta}{2}\theta}&\theta>0\quad\text{(Laguerre)}\\ \theta^{\alpha-1}(1-\theta)^{\beta-1}&\theta\in(0,1)\quad\text{(Jacobi)}\end{array}\right. \tag{14}\]
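The unnormalised log-density in Eq. (13) is easy to evaluate for any of the three weight functions. The sketch below is a minimal Python implementation, with illustrative hyperparameter values, using Hermite, Laguerre and Jacobi weights consistent with Eqs. (16), (20) and (25) below.

```python
import numpy as np

def log_weight(theta, ensemble, zeta, alpha=1.0, beta=1.0):
    # log eta(theta | zeta) for the three weight functions.
    if ensemble == "hermite":      # theta in R
        return -0.5 * zeta * theta ** 2
    if ensemble == "laguerre":     # theta > 0
        return 0.5 * alpha * zeta * np.log(theta) - 0.5 * zeta * theta
    if ensemble == "jacobi":       # theta in (0, 1)
        return (alpha - 1) * np.log(theta) + (beta - 1) * np.log(1 - theta)
    raise ValueError(ensemble)

def log_density_unnorm(theta, ensemble, zeta, **kw):
    # Unnormalised log of Eq. (13): sum of log-weights plus zeta * sum_{i<j} log|theta_i - theta_j|.
    theta = np.asarray(theta, dtype=float)
    iu = np.triu_indices(len(theta), k=1)
    diffs = np.abs(theta[:, None] - theta[None, :])[iu]
    return log_weight(theta, ensemble, zeta, **kw).sum() + zeta * np.log(diffs).sum()

print(log_density_unnorm([-1.0, 0.3, 2.2], "hermite", zeta=2.0))
```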
### Eigenvalue distributions derived from Gaussian ensembles
In this Section, we describe the eigenvalue distribution for the set of random matrices belonging to the Gaussian ensemble. Let \(M>0\) and \(\mathbf{Z}\) be a \(M\times M\) matrix with i.i.d. entries \(Z_{ij}\sim\mathcal{N}(0,1)\). Define the \(M\times M\) matrix \(\mathbf{X}=\left(\mathbf{Z}+\mathbf{Z}^{\prime}\right)/\sqrt{2}\). The resulting symmetric matrix \(\mathbf{X}\) is called a real Wigner matrix and the joint law of its eigenvalues, denoted by \(\mathbf{\theta}=\left(\theta_{1},\ldots,\theta_{M}\right)\), is given by:
\[p(\mathbf{\theta}\mid M)=\mathcal{G}_{M}^{-1}\prod_{l=1}^{M}e^{-\frac{\theta_{l}^ {2}}{2}}\prod_{i<j}\mid\theta_{i}-\theta_{j}\mid \tag{15}\]
where \(\mathcal{G}_{M}\) is the normalising constant. The family of eigenvalue distributions originated by this random matrix construction is known as the Gaussian orthogonal ensemble. When the matrix \(\mathbf{Z}\) has complex or quaternion Gaussian entries, the distribution of the eigenvalues of \(\mathbf{X}\) changes. In general, the joint eigenvalue distribution for the Gaussian ensemble has the following expression:
\[p(\mathbf{\theta}\mid M,\zeta)=\mathcal{G}_{M,\zeta}^{-1}\prod_{l=1}^{M}e^{-\frac{ \zeta}{2}\theta_{l}^{2}}\prod_{i<j}\mid\theta_{i}-\theta_{j}\mid^{\zeta} \tag{16}\]
where, when \(\zeta=1\), we recover the Gaussian orthogonal ensemble. Values of \(\zeta\) equal to 2 and 4 correspond to \(\mathbf{Z}\) with complex (Gaussian unitary ensemble) and quaternion (Gaussian symplectic ensemble) entries, respectively. In general, for \(\zeta>0\), Eq. (16) gives the eigenvalue distribution of \(\mathbf{X}\) in the case of tri-diagonal random matrices with standard normal diagonal entries and \(\chi\)-distributed off-diagonal entries with degrees of freedom equal to \((M-1)\zeta,(M-2)\zeta,\ldots,\zeta\) (Forrester, 2010). Such a distribution can be used to specify a joint prior distribution for the location parameters of a mixture model. The normalising constant has a closed form expression (Mehta and Dyson, 1963; Mehta, 2004):
\[\mathcal{G}_{M,\zeta}=\zeta^{-\frac{M}{2}-\zeta M(M-1)/4}\left(2\pi\right)^{\frac{M}{2}}\prod_{j=0}^{M-1}\frac{\Gamma\left(1+(j+1)\,\frac{\zeta}{2}\right)}{\Gamma\left(1+\frac{\zeta}{2}\right)}\]
which allows for more efficient computations. Note that, when \(M=1\), we recover the univariate Gaussian distribution with precision parameter \(\zeta\).
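A minimal sketch of the exact log-density of Eq. (16) is given below, using the closed-form normalising constant above (the `scipy` log-gamma function is used for numerical stability); for \(M=1\) it reduces to the \(\mathrm{N}(0,1/\zeta)\) log-density, as just noted.

```python
import numpy as np
from scipy.special import gammaln

def log_norm_const_hermite(M, zeta):
    # log of the normalising constant G_{M,zeta} of the Gaussian (Hermite) ensemble.
    j = np.arange(1, M + 1)
    return (-(M / 2 + zeta * M * (M - 1) / 4) * np.log(zeta)
            + (M / 2) * np.log(2 * np.pi)
            + np.sum(gammaln(1 + j * zeta / 2) - gammaln(1 + zeta / 2)))

def log_density_hermite(theta, zeta):
    # Exact log joint eigenvalue density of Eq. (16).
    theta = np.asarray(theta, dtype=float)
    M = len(theta)
    iu = np.triu_indices(M, k=1)
    logvdm = np.log(np.abs(theta[:, None] - theta[None, :])[iu]).sum()
    return -0.5 * zeta * np.sum(theta ** 2) + zeta * logvdm - log_norm_const_hermite(M, zeta)

# Sanity check: for M = 1 this is the N(0, 1/zeta) log-density.
print(log_density_hermite([0.7], zeta=2.0))
```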
_Remark:_ in Eq. (16), when \(\zeta=2\), the repulsion is a function of the Euclidean distance as proposed by Quinlan et al. (2021). The difference is that the distance in Quinlan et al. (2021) appears in the exponential function, with their repulsion term penalising close locations more heavily than our penalty term does. Still, their normalising constant is not available analytically, adding complexity to computations.
### Eigenvalue distributions derived from Laguerre ensembles
Throughout, we assume \(N>M\). Let \(\mathbf{Z}\) be a \(N\times M\) matrix with i.i.d. entries \(Z_{ij}\sim\mathcal{N}(0,1)\). Let \(\mathbf{\Sigma}\) be a deterministic positive-definite \(M\times M\) matrix and let \(\sqrt{\mathbf{\Sigma}}\) denote its unique positive-definite square root, such that \(\mathbf{\Sigma}=\sqrt{\mathbf{\Sigma}}^{{}^{\prime}}\sqrt{\mathbf{\Sigma}}\). Define the \(N\times M\) matrix \(\mathbf{X}=\mathbf{Z}\sqrt{\mathbf{\Sigma}}^{{}^{\prime}}\). Then, the \(M\times M\) matrix \(\mathbf{S}=\mathbf{X}^{\prime}\mathbf{X}\) has a Wishart distribution with \(N\) degrees of freedom and scale matrix \(\mathbf{\Sigma}\) and we write \(\mathbf{S}\sim W_{M}\left(N,\mathbf{\Sigma}\right)\), with the following p.d.f.:
\[\begin{split} p\left(\mathbf{S}\mid N,\mathbf{\Sigma}\right)=\\ \frac{\mid\mathbf{\Sigma}\mid^{-\frac{N}{2}}}{2^{M\frac{N}{2}}\Gamma_{M}\left(\frac{N}{2}\right)}\mid\mathbf{S}\mid^{\frac{N-M-1}{2}}\exp\left\{-\frac{tr\left(\mathbf{\Sigma}^{-1}\mathbf{S}\right)}{2}\right\}\mathbb{1}_{\Omega_{M}^{\mathbf{S}}}(\mathbf{S})\end{split} \tag{17}\]
where \(\Gamma_{M}(x)\) is the multidimensional gamma function of dimension \(M\) and argument \(x>0\) (Gupta and Nagar, 2018), \(tr(\cdot)\) and \(\mid\cdot\mid\) indicate the trace and the determinant of a square matrix, respectively, while \(\Omega_{M}^{\mathbf{S}}\) represents the cone of positive definite square matrices of dimension \(M\). The joint law of the eigenvalues of \(\mathbf{S}\), denoted by \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{M})\), is given by:
\[\begin{split} p(\mathbf{\theta}\mid N,M,\mathbf{\Sigma})=\\ \mathcal{W}_{N,M}^{-1}|\mathbf{\Sigma}|^{-\frac{N}{2}}{}_{0}F_{0}\left(-\frac{1}{2}\mathbf{\Sigma}^{-1},\mathbf{S}\right)\prod_{l=1}^{M}\theta_{l}^{\frac{N-M-1}{2}}\prod_{i<j}\mid\theta_{i}-\theta_{j}\mid\end{split} \tag{18}\]
where \({}_{0}F_{0}(\cdot,\cdot)\) is the hypergeometric function of two matrix arguments (James, 1964) and \(\mathcal{W}_{N,M}\) is the normalising constant. Note that \(\theta_{j}>0\) since they are eigenvalues of a positive definite matrix. The set of random matrices for which Eq. (18) is the joint eigenvalue distribution is known as the Laguerre orthogonal ensemble.
When \(\mathbf{\Sigma}=\mathbb{I}_{M}\), i.e. the identity matrix of dimension \(M\), the eigenvalue distribution reduces to:
\[p(\mathbf{\theta}\mid N,M)=\mathcal{W}_{N,M}^{-1}e^{-\frac{1}{2}\sum_{l=1}^{M}\theta_{l}}\prod_{l=1}^{M}\theta_{l}^{\frac{N-M-1}{2}}\prod_{1\leq i<j\leq M}\mid\theta_{i}-\theta_{j}\mid \tag{19}\]
In this case, the normalising constant can be computed exactly (Fisher, 1939; Hsu, 1939; Roy, 1939). When the matrix \(\mathbf{X}\) has complex or quaternion entries, we can introduce the parameters \(\zeta>0\), such that \(\mathbf{\Sigma}=\zeta\mathbb{I}_{M}\), and \(\alpha=N-M+1-2/\zeta\) and define the family of distributions referred to as the Laguerre \(\zeta\)-ensemble (Forrester, 2010):
\[p(\mathbf{\theta}\mid N,M,\alpha,\zeta)=\mathcal{W}_{\alpha,\zeta,M}^{-1}e^{- \frac{\zeta}{2}\sum_{l=1}^{M}\theta_{l}}\prod_{l=1}^{M}\theta_{l}^{\alpha\frac {\zeta}{2}}\prod_{i<j}\mid\theta_{i}-\theta_{j}\mid^{\zeta} \tag{20}\]
For \(\zeta=1\), this is exactly the joint law in Eq. (19), also called the Laguerre orthogonal ensemble. The cases \(\zeta=2\) and \(\zeta=4\) correspond to the analogous distributions
when the matrix \(\mathbf{X}\) has complex (Laguerre unitary ensemble) and quaternion (Laguerre symplectic ensemble) entries, respectively (Edelman and Rao, 2005). For arbitrary \(\zeta\not\in\{1,2,4\}\), the distribution (20) may be viewed as the case when \(\mathbf{Z}\) contains \(\chi\)-squared distributed entries (Forrester, 2010). Notice, when \(M=1\), we obtain the Gamma \((\alpha\zeta/2+1,\zeta/2)\) distribution with mean \(\alpha+2/\zeta\) and variance \((\alpha+2/\zeta)\,2/\zeta\).
Note that the above law is normalisable for any pair \(\zeta>0\) and \(\alpha>-2/\zeta\), with normalising constant known in closed form (Forrester, 2010):
\[\mathcal{W}_{\alpha,\zeta,M}= \tag{21}\] \[\left(\zeta/2\right)^{-M\left(1+\frac{\zeta}{2}(\alpha+M-1) \right)}\prod_{j=0}^{M-1}\frac{\Gamma\left(1+\frac{\zeta}{2}\left(j+1\right) \right)\Gamma\left(1+\alpha\frac{\zeta}{2}+j\frac{\zeta}{2}\right)}{\Gamma \left(1+\frac{\zeta}{2}\right)}\]
The joint distribution is otherwise not well-defined.
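Analogously to the Gaussian case, Eq. (21) makes the log-density of the Laguerre \(\zeta\)-ensemble in Eq. (20) computable exactly. A minimal sketch with illustrative parameter values:

```python
import numpy as np
from scipy.special import gammaln

def log_norm_const_laguerre(M, alpha, zeta):
    # log of the normalising constant in Eq. (21).
    j = np.arange(M)
    return (-M * (1 + 0.5 * zeta * (alpha + M - 1)) * np.log(0.5 * zeta)
            + np.sum(gammaln(1 + 0.5 * zeta * (j + 1))
                     + gammaln(1 + 0.5 * alpha * zeta + 0.5 * j * zeta)
                     - gammaln(1 + 0.5 * zeta)))

def log_density_laguerre(theta, alpha, zeta):
    # Exact log joint eigenvalue density of Eq. (20); requires theta_l > 0.
    theta = np.asarray(theta, dtype=float)
    M = len(theta)
    iu = np.triu_indices(M, k=1)
    logvdm = np.log(np.abs(theta[:, None] - theta[None, :])[iu]).sum()
    return (-0.5 * zeta * theta.sum() + 0.5 * alpha * zeta * np.log(theta).sum()
            + zeta * logvdm - log_norm_const_laguerre(M, alpha, zeta))

# Sanity check: for M = 1 this is the Gamma(alpha*zeta/2 + 1, zeta/2) log-density.
print(log_density_laguerre([1.3], alpha=2.0, zeta=1.0))
```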
### Eigenvalue distributions derived from Jacobi ensembles
The Jacobi ensemble is related to the eigenvalues of the Beta random matrix (Forrester, 2010) and has applications in physics (Livan and Vivo, 2011; Vivo and Vivo, 2008). Following Mitra (1970), let \(\mathbf{S}_{1}\sim W_{M}\left(N_{1},\mathbf{\Sigma}\right),\mathbf{S}_{2}\sim W_{M}\left( N_{2},\mathbf{\Sigma}\right)\), with \(\mathbf{S}_{1}\) and \(\mathbf{S}_{2}\) independent. Let us define \(\mathbf{S}_{12}=\mathbf{S}_{1}+\mathbf{S}_{2}\), so that \(\mathbf{S}_{12}\sim W_{M}\left(N_{1}+N_{2},\mathbf{\Sigma}\right)\). We take the Cholesky factorisation \(\mathbf{S}_{12}=\mathbf{C}^{\prime}\mathbf{C}\), where \(\mathbf{C}^{\prime}\) is lower triangular with non-negative diagonal elements. Note that \(\mathbf{S}_{12}\), and hence \(\mathbf{C}^{\prime}\), is invertible with probability 1. Letting \(\mathbf{L}^{\prime}=\left(\mathbf{C}^{\prime}\right)^{-1}\), we define:
\[\mathbf{U}=\mathbf{L}^{\prime}\mathbf{S}_{1}\mathbf{L}, \tag{22}\]
and we say that \(\mathbf{U}\) follows the matrix-variate Beta distribution with parameters \(\frac{N_{1}}{2},\frac{N_{2}}{2}\), with \(N_{1},N_{2}>(M-1)/2\). The latter condition is derived from the existence of the Wishart matrix \(\mathbf{S}_{12}\). The p.d.f. of the matrix-variate Beta distribution is the following:
\[p\left(\mathbf{U}\mid N_{1},N_{2}\right)= \tag{23}\] \[\frac{\Gamma_{M}\left(\frac{N_{1}+N_{2}}{2}\right)}{\Gamma_{M} \left(\frac{N_{1}}{2}\right)\Gamma_{M}\left(\frac{N_{2}}{2}\right)}\mid\mathbf{U} \mid^{\frac{N_{1}-M-1}{2}}\mid\mathbf{I}_{M}-\mathbf{U}\mid^{\frac{N_{2}-M-1}{2}} \mathbb{1}_{\Omega_{M}^{\mathbf{U}}}(\mathbf{U})\]
where \(\Omega_{M}^{\mathbf{U}}\) is the space of \(M\times M\) symmetric matrices \(U\) with the property that \(\mathbf{U}\) and \(\mathbf{I}_{M}-\mathbf{U}\) are both positive definite. Notice that the p.d.f. of \(\mathbf{U}\) does not depend on the scale matrix \(\mathbf{\Sigma}\). Consider the joint distribution of the eigenvalues of \(\mathbf{U}\), \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{M})\), given by:
\[p(\mathbf{\theta}|\alpha,\beta,M)=\mathcal{B}_{\alpha,\beta,M}^{-1}\prod_{l=1}^{M }\left(\theta_{l}\right)^{\alpha-1}(1-\theta_{l})^{\beta-1}\prod_{i<j}|\theta_ {i}-\theta_{j}| \tag{24}\]
where \(\alpha=(N_{1}-M+1)/2\) and \(\beta=(N_{2}-M+1)/2\), highlighting the fact that the hyperparameters of this distribution also depend on the number of components
\(M\). As before, this can be viewed as a specific instance of a more general family of distributions indexed by the parameter \(\zeta>0\), leading to the \(\zeta\)-Jacobi ensemble:
\[p(\boldsymbol{\theta}\mid\alpha,\beta,\zeta,M)=\mathcal{B}_{\alpha,\beta,\zeta, M}^{-1}\prod_{l=1}^{M}\left(\theta_{l}\right)^{\alpha-1}\left(1-\theta_{l} \right)^{\beta-1}\prod_{i<j}|\theta_{i}-\theta_{j}|^{2\zeta} \tag{25}\]
The normalising constant \(\mathcal{B}_{\alpha,\beta,\zeta,M}\) is also known as the Selberg integral (Selberg, 1944):
\[\mathcal{B}_{\alpha,\beta,\zeta,M}=\prod_{j=0}^{M-1}\frac{\Gamma(\alpha+j\zeta )\Gamma(\beta+j\zeta)\Gamma(1+(j+1)\zeta)}{\Gamma(\alpha+\beta+(M+j-1)\zeta) \Gamma(1+\zeta)} \tag{26}\]
The conditions for the existence of the distribution are \(\alpha,\beta>-1\) and \(\zeta\geq 0\).
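The Selberg integral in Eq. (26) likewise makes the Jacobi log-density in Eq. (25) computable exactly. A minimal sketch with illustrative parameter values (for \(M=1\) the normalising constant reduces to the usual Beta function):

```python
import numpy as np
from scipy.special import gammaln

def log_selberg(M, alpha, beta, zeta):
    # log of the Selberg integral in Eq. (26), the normalising constant of Eq. (25).
    j = np.arange(M)
    return np.sum(gammaln(alpha + j * zeta) + gammaln(beta + j * zeta)
                  + gammaln(1 + (j + 1) * zeta)
                  - gammaln(alpha + beta + (M + j - 1) * zeta)
                  - gammaln(1 + zeta))

def log_density_jacobi(theta, alpha, beta, zeta):
    # Exact log joint eigenvalue density of Eq. (25); requires theta_l in (0, 1).
    theta = np.asarray(theta, dtype=float)
    M = len(theta)
    iu = np.triu_indices(M, k=1)
    logvdm = np.log(np.abs(theta[:, None] - theta[None, :])[iu]).sum()
    return ((alpha - 1) * np.log(theta).sum() + (beta - 1) * np.log(1 - theta).sum()
            + 2 * zeta * logvdm - log_selberg(M, alpha, beta, zeta))

print(log_density_jacobi([0.2, 0.5, 0.8], alpha=2.0, beta=2.0, zeta=1.0))
```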
Pham-Gia (2009) discusses several properties of the above distribution, alternatively called the multivariate Selberg Beta distribution, and presents an example of marginal laws obtained in the bivariate case. An interesting result is that, for fixed values of \(\alpha\), \(\beta\), \(\zeta\) and \(M\), thanks to the symmetry of expression (25), the univariate marginal distributions are all of the same type, although difficult to compute analytically. The author also shows that a generalisation of the Dirichlet distribution can be achieved, referred to as the Selberg Dirichlet distribution, by normalising a vector of random variables whose joint law is given by Eq. (25). This joint distribution presents the same tractable features as the multivariate Selberg distributions (e.g., the joint distribution of the eigenvalues of the Beta random matrix), i.e. its normalising constant is known in closed form and it presents the desired repulsive property. Furthermore, when \(M=1\), the distribution coincides with the Beta distribution with parameters \(\alpha\) and \(\beta\).
As we can observe from Eqs. (16), (20) and (25), the joint distributions of the eigenvalues of random matrices belonging to the Gaussian, Laguerre or Jacobi ensembles include a repulsive term depending on the pairwise differences between the eigenvalues, and on the choice of the parameters \((\alpha,\beta,\zeta)\). When these distributions are used as the centring measure \(P_{0}\) for the location parameters in a mixture model, the choice of the parameters \((\alpha,\beta,\zeta)\) is crucial in defining the amount of repulsion induced, and influences the resulting clustering structure. As the repulsion term appears in the same functional form, these prior distributions will lead to similar posterior inference and \(\zeta\) plays the role of a calibration parameter.
Contour plots of the joint distribution of the eigenvalues of two-dimensional matrices in the Gaussian ensemble are reported in Figure 2, for increasing values of \(\zeta\). This
\begin{table}
\begin{tabular}{c|c c c} Matrix & Gaussian & Wishart & Beta \\ Ensemble & Hermite & Laguerre & Jacobi \\ \hline \(\mathbb{E}\left(\theta_{l}\right)\) & \(-\) & \(1+(\alpha+M-1)\zeta/2\) & \(\frac{\alpha+(M-1)\zeta}{\alpha+\beta+2(M-1)\zeta}\) \\ \(\mathbb{E}\left(\theta_{l}^{2}\right)\) & \(\left(1+\zeta/2(M-1)\right)/\zeta\) & \(2+(\alpha+2M-1)\zeta/2\) & \(\frac{(\alpha+1+2(M-1)\zeta\mathbb{E}(\theta_{l})-(M-1)\zeta\mathbb{E}(\theta _{l})}{\alpha+\beta+1+2(M-1)\zeta}\) \\ \(\mathbb{E}\left(\prod\limits_{l=1}^{k}\theta_{l}\right)\) & \(\left(-\frac{1}{2}\right)^{\frac{1}{2}}\frac{k!}{2^{\frac{1}{2}}(\frac{k}{2})!}\), \(k\) even & \(\prod\limits_{l=1}^{k}\left(1+(\alpha+M-l)\zeta/2\right)\) & \(\prod\limits_{l=1}^{k}\frac{\alpha+(M-l)\zeta}{\alpha+\beta+(2M-l-1)\zeta}\) \\ \end{tabular}
\end{table}
Table 1: Moments of eigenvalues of random matrices.
parameter influences the repulsive structure of the joint distribution, as well as its variability. The joint distribution of the eigenvalues of the two-dimensional Wishart matrix are reported in Figure 3, for different choices of the parameters \((\alpha,\zeta)\). As expected, increasing the value of \(\zeta\) increases the amount of repulsion imposed on the eigenvalues, producing two diverging modes. On the other hand, higher values of the parameter \(\alpha\) increase the peakiness of the modes, reducing the variability. Similar plots in the case of the matrix-variate Beta distribution are shown in Figure 4. The panels show the different roles of the parameters of this distribution. The parameter \(\zeta\) controls the amount of repulsion, while the parameters \(\alpha\) and \(\beta\) act as shape parameters, similarly to a classical Beta or Dirichlet distribution, allowing for asymmetry. In Table 1 we report analytical moments of the eigenvalues for the three different ensembles. Results about the moments of the eigenvalue distributions can be found in Ullah (1986), Mehta (2004), Iguri (2009), Pham-Gia (2009).
We conclude this section by noting that such eigenvalue distributions share a common form with the repulsive priors proposed previously: they are given by the product of independent kernels (Gaussian, Gamma and Beta, respectively, in the three cases) and a repulsion term of the form \(\exp\left(\zeta\log\left(r_{ij}\right)\right)=r_{ij}^{\zeta}\). The repulsion we propose grows more slowly than the one utilised, for instance, in Quinlan et al. (2021) and allows for analytical tractability. On the other hand, their repulsion is very strong near the origin. In the Wishart case, when the scale matrix is not the identity, the normalising constant is no longer available in closed form.
## 5 Gibbs Measures and Chaos
Allowing the number of components in our model to be arbitrarily large introduces some additional theoretical difficulties. Namely, we require the existence of an infinite-dimensional object from which our finite-dimensional prior distributions can be obtained. The most familiar method of obtaining such an infinite-dimensional object is Kolmogorov's extension theorem. In this theorem, the existence and uniqueness of the infinite-dimensional distribution of interest are guaranteed under mild conditions on a set of finite-dimensional distributions. These finite-dimensional distributions are thus obtained from the infinite-dimensional one via marginalisation. One of these conditions, usually referred to as Kolmogorov consistency, is not satisfied by the finite-dimensional distributions presented in this work. Instead of Kolmogorov consistency, we consider the notion of DLR consistency, named after its theorists Dobrushin, Lanford and Ruelle (Dobrushin, 1968; Lanford III and Ruelle, 1969). The DLR approach is based on the definition of _specifications_, i.e. families of probability kernels constructed using a given Hamiltonian (i.e., the _interaction principle_) that are consistent under composition of kernels. Note that the notion of consistency here is different from the Kolmogorov one, which requires consistency via marginalisation. Under some conditions, it is possible to find a set of infinite-dimensional probability measures which is compatible with a given specification, i.e. each kernel in the specification yields a regular conditional distribution. The resulting set is a set of Gibbs
measures. It can be shown that this set, when it exists, is a simplex, and as such every element in it is a linear combination of its extreme points (also called infinite-volume Gibbs states in statistical mechanics, in relation to the states of a physical system undergoing a phase transition). We refer to Friedli and Velenik (2017), ch. 6, for a thorough exposition on this topic. Unlike in the Kolmogorov case, DLR consistency guarantees neither existence nor uniqueness. Nevertheless, it is the natural notion for specifying an infinite-dimensional object in terms of conditional, rather than marginal, distributions. It is at the heart of the theory of Gibbs measures. To obtain additional useful properties, e.g., existence and uniqueness, several tools are available. Here, we make use of a large deviation principle.
Once again we refer to work in statistical physics, where the need for an alternative definition (i.e. different from the Kolmogorov one) of a measure on an infinite-dimensional space derives from the necessity to describe complex systems of particles interacting with each other (Georgii, 2011). Indeed, in this setting, Gibbs measures can be used to describe the equilibrium states of the particles with respect to the given interaction at a fixed temperature conditionally on the external system. Gibbs measures are defined in terms of conditional distributions rather than marginal distributions.
Consider a system of particles with configuration \(\mathbf{\theta}\) and internal energy equal to \(\mathcal{H}\left(\mathbf{\theta}\mid\zeta,h\right)\) in Eq. (4), described by the Boltzmann Entropy law in Eq. (5). Depending on the expressions chosen for the functions \(\psi_{1}\) and \(\psi_{2}\) in the Hamiltonian \(\mathcal{H}\left(\mathbf{\theta}\mid\zeta,h\right)\), a Gibbs measure corresponding to this specification of the conditional distributions need not exist, but, even more interestingly and in contrast to the Kolmogorov extension, its uniqueness is also not guaranteed. A popular example of such models is the Ising model for \(d\geq 2\) (Ising, 1924). Interestingly, for certain values of the parameters \(h\) and \(\zeta\), the Ising model has a unique associated Gibbs measure, but its marginal distributions are never consistent (in the Kolmogorov sense) (Friedli and Velenik, 2017). Thus, we see that the marginal distributions associated with a Gibbs measure do not generally satisfy the Kolmogorov consistency conditions, even when the associated Gibbs measure is unique. See Section 7.1 for a thorough discussion.
In general, there is no way to express the marginal distributions associated to an infinite-lattice version of the measure (6) without making explicit reference to the Hamiltonian (Friedli and Velenik, 2017). Moreover, the physical systems with distributions of the form (6) are meant to model systems that undergo so-called phase transitions, abrupt changes in the physical properties of a system, such as the familiar liquid-to-gas transition of water boiling into steam. The association of phase transitions to non-uniqueness of Gibbs measure is due to Dobrushin (1968), but it does not coincide with the physical notion in all cases (Rassoul-Agha and Seppalainen, 2015).
As Gibbs measures, we will use distributions derived from random matrices, so we will be able to establish the existence and uniqueness of the limiting Gibbs measures by using the tools of random matrix theory.
**Chaos.** Beyond existence and uniqueness, we will show that the distributions (16), (20) and (25) satisfy another useful property, which in the physics literature is sometimes referred to as molecular chaos and can be seen as a sort of asymptotic independence. In the context of thermodynamic studies, molecular chaos is closely related to the second law of thermodynamics, which states that the total entropy of an isolated system never decreases. In particular, the statistical foundations of the second law of thermodynamics are based on the _molecular chaos hypothesis_, which assumes that the velocities of colliding particles are uncorrelated and independent of their individual positions (Boltzmann, 2003). More formally, let \(p_{M}(\theta_{1},\ldots,\theta_{M})\) be the joint law of some collection of random variables and let
\[p_{M,k}=\int\limits_{\mathbb{R}^{M-k}}p_{M}(\theta_{1},\ldots,\theta_{M})d \theta_{k+1}\cdots d\theta_{M} \tag{27}\]
be the marginal distribution of the first \(k\) variables. The functions \(p_{M,k}\) are referred to as correlation functions in some sources (De Monvel et al., 1995a). Let \(f\) be a distribution on \(\mathbb{R}\). Then, the sequence \(\{p_{M}\}_{M=1}^{\infty}\) is called \(f\)-chaotic if \(p_{M,k}\) converges weakly to the product \(\prod_{i=1}^{k}f(x_{i})\). This notion is equivalent to another that is more familiar in random matrix theory. First, we need another definition.
Given the random variables \(\theta_{1},\ldots,\theta_{M}\), we can define the empirical measure
\[\mu_{M}:=\frac{1}{M}\sum_{i=1}^{M}\delta_{\theta_{i}}\]
where \(\delta_{x}\) is the Dirac measure at \(x\). It is well known in the literature (Jabin and Wang, 2017) that weak convergence of the empirical measure to a measure \(f\) is equivalent to the sequence \(p_{M,k}\) being \(f\)-chaotic. When the random variables are the eigenvalues of a symmetric matrix, the empirical measure is referred to as the empirical spectral distribution. For the Gaussian (Wigner, 1958), Laguerre (Marcenko and Pastur, 1967) and Jacobi (Bai et al., 2015) ensembles, the empirical spectral distributions converge under an appropriate limit. Convergence of the empirical measures for the generalised distributions discussed previously, as well as the existence and uniqueness of the associated Gibbs measure, follows from a so-called large deviation principle (LDP, Anderson et al., 2009). We end this section by formalising the idea of an LDP and showing its connection to Gibbs measures.
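Before doing so, the convergence of the empirical spectral measure can be checked numerically. The sketch below samples real Wigner matrices as in the Gaussian orthogonal ensemble construction above, rescales the eigenvalues by \(1/\sqrt{M}\), and compares the empirical mass of \([-1,1]\) with the corresponding mass under Wigner's semicircle law; the matrix sizes and the interval are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def wigner_eigenvalues(M):
    # Real Wigner matrix X = (Z + Z') / sqrt(2) with Z having i.i.d. N(0,1) entries.
    Z = rng.standard_normal((M, M))
    X = (Z + Z.T) / np.sqrt(2)
    return np.linalg.eigvalsh(X) / np.sqrt(M)   # classical 1/sqrt(M) scaling

# Fraction of scaled eigenvalues falling in [-1, 1] for increasing M:
for M in (50, 200, 800):
    lam = wigner_eigenvalues(M)
    print(M, np.mean(np.abs(lam) <= 1.0))

# Limiting (semicircle) mass of [-1, 1], computed by a simple Riemann sum:
x = np.linspace(-1, 1, 200001)
semicircle = np.sqrt(4 - x ** 2) / (2 * np.pi)
print("semicircle mass of [-1, 1]:", semicircle.mean() * 2)
```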
**Large Deviations.** The most classical methods for showing the uniqueness of a Gibbs measure for a particular system are conditions due to Dobrushin (Dobrushin, 1970) and Dobrushin and Shlosman (Dobrushin and Shlosman, 1985). These are often referred to as the Dobrushin Uniqueness Criterion and the Dobrushin-Shlosman Uniqueness Criterion, respectively. Here, we take a different approach that is more convenient in the context of random matrices, so we will not say more about these criteria, other than to note that they give sufficient, but not necessary, conditions for uniqueness of a Gibbs measure.
Let \(\mu_{n}\) be a sequence of probability measures on a topological space \(T\). Let \(I:T\rightarrow[0,\infty]\) be a lower-semicontinuous function and \(r_{n}\) a sequence of positive real constants. Then the sequence \(\{\mu_{n}\}\) is said to satisfy a LDP with rate function \(I\) and scale \(r_{n}\) if
1. for all open \(G\subset T\), we have \[\liminf_{n}\frac{1}{r_{n}}\log\mu_{n}(G)\geq-\inf_{x\in G}I(x)\]
2. for all closed \(F\subset T\), we have \[\limsup_{n}\frac{1}{r_{n}}\log\mu_{n}(F)\leq-\inf_{x\in F}I(x)\]
When the sets \(\{I\leq c\}\) are compact, the rate function is often called good or tight.
LDPs can be very helpful when working with Gibbs measures because every Gibbs measure corresponds to a zero of the rate function (see below), and we will be able to show that the rate function associated to the empirical measure has a unique zero. The rate function \(I\) can be defined via a certain limit, and is guaranteed to exist by Theorem 8.3 in Rassoul-Agha and Seppalainen (2015).
**Theorem 1** (DLR Variational Principle, Rassoul-Agha and Seppalainen (2015)).: _Fix a shift-invariant absolutely summable continuous interaction principle \(\Phi\). Let \(\mathcal{M}_{\theta}(\Omega)\) denote the space of invariant probability measures and let \(\gamma\in\mathcal{M}_{\theta}(\Omega)\). Then \(\gamma\) is a Gibbs measure for the specification determined by \(\Phi\) if and only if \(\gamma\) is a zero of the rate function._
To help build intuition about the usefulness of large deviations theory, we first present a classical result. We will use the notation \(\mathcal{M}_{1}(\mathcal{S})\) to denote the collection of probability measures on the space \(\mathcal{S}\).
**Definition 1**.: Let \(\nu,\lambda\in\mathcal{M}_{1}(\mathcal{S})\). Then, the entropy of \(\nu\) relative to \(\lambda\) is defined to be
\[H(\nu|\lambda)=\left\{\begin{array}{ll}\int f\log fd\lambda&\quad\nu\ll \lambda\text{ and }f=\frac{d\nu}{d\lambda}\\ \infty&\quad\text{otherwise.}\end{array}\right.\]
**Theorem 2** (Sanov's Theorem. [Sanov, 1961]).: _Let \(\mathcal{S}\) be a Polish space and let \(\lambda\in\mathcal{M}_{1}(\mathcal{S})\) be a probability measure on \(\mathcal{S}\). Let \(\{X_{n}\}\) be a sequence of i.i.d. \(\mathcal{S}\)-valued \(\lambda\)-distributed random variables. Let \(L_{n}\) be the associated empirical measure and define \(\rho_{n}(A)=\mathbb{P}(L_{n}\in A)\) for Borel subsets \(A\subset\mathcal{M}_{1}(\mathcal{S})\). Then the family \(\rho_{n}\) satisfies an LDP with rate function \(I(\nu)=H(\nu|\lambda)\)._
In the case when \(\mathcal{S}=\mathbb{R}\) and \(\lambda\) has density \(f\), we can interpret Sanov's theorem as saying that, under the assumption that our data is i.i.d. with distribution \(f\), the probability that a histogram looks like the density of a different measure \(\nu\) decays like \(e^{-H(\nu|\lambda)n}\)(Rassoul-Agha and Seppalainen, 2015).
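This decay can be checked numerically in the Bernoulli case: for i.i.d. Bernoulli(1/2) variables and the event that the empirical mean is at least 0.7, the rate is the relative entropy of Bernoulli(0.7) with respect to Bernoulli(1/2). The sketch below computes the exact tail probabilities and compares \(-\frac{1}{n}\log\) of them with this rate; the threshold 0.7 is an illustrative choice.

```python
import numpy as np
from scipy.stats import binom

p, q = 0.5, 0.7
# Relative entropy H(Bern(q) | Bern(p)), the rate governing P(empirical mean >= q).
rate = q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

for n in (50, 200, 800, 3200):
    # Exact tail probability P(sum of n Bernoulli(p) variables >= ceil(n q)).
    tail = binom.sf(int(np.ceil(n * q)) - 1, n, p)
    print(n, -np.log(tail) / n, "vs rate", rate)
```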
**Connections.** Weak convergence of the sequence of empirical measures to some deterministic measure \(\mu\) is equivalent to the sequence \(p_{M}\) being \(\mu\)-chaotic (Jabin and Wang, 2017). Moreover, if the sequence \(p_{M}\) satisfies a LDP, then it must converge weakly, so we also have that a LDP implies chaos. The Dobrushin Uniqueness Criterion is strictly stronger than the existence of a unique Gibbs measure.
The connections between these various concepts are visualised in the diagram in Figure 5. In brief:
* The DLR Variational Principle tells us that uniqueness of the Gibbs measure is equivalent to having a LDP whose rate function has a unique zero
* A LDP also implies the convergence of the empirical measure, but the converse is not true in general
* Convergence of the empirical measure is equivalent to molecular chaos
## 6 Large Deviations
As discussed previously, measures of the type (6) and (7) are often used in statistical physics to model interacting systems of particles. In this context, one is interested in the conditional distribution conditioned on events of extremely low probability. Bounds on the probability of rare events (e.g., the weak law of large numbers) are an elementary topic in probability. In the context of statistical mechanics, the usual bounds are generally insufficient, so we turn to LDPs. As we have just seen, establishing a LDP will have far-reaching consequences. LDPs have already been shown for the Gaussian and the Wishart type distributions introduced earlier (Guionnet, 2004; Hiai and Petz, 1998). Here, we present an analogous result for the matrix variate Beta distribution discussed above. Our approach is inspired by Hiai (2000).
In order to establish a LDP for the Beta matrix, we need some control over the growth of the parameters. Namely, we require that \(M\) grows with \(N_{1}\) and \(N_{2}\) and that the following relationships are satisfied:
\[\lim_{N_{1}\rightarrow\infty}\frac{M}{N_{1}}=a,\quad\lim_{N_{2} \rightarrow\infty}\frac{M}{N_{2}}=b\] \[\lim_{N_{1}\rightarrow\infty}\lim_{N_{2}\rightarrow\infty}\frac {M}{N_{1}+N_{2}}=\frac{ab}{a+b}=:c\in(0,1)\]
These assumptions are required in order for the empirical spectral measure to converge (Bai et al., 2015). We will also require that the distance between the parameters \(N_{1}\) and \(N_{2}\) does not grow too quickly. For notational simplicity, we set \(\gamma(x)=x-(M+1)/2\) and let \(A=a(1-a/2)\). Then we require that the parameters \(N_{1}\) and \(N_{2}\) satisfy
\[\lim_{N_{1}\rightarrow\infty}\frac{\gamma\left(N_{1}\right)}{N_{1}}=\lim_{N_{ 1}\rightarrow\infty}\frac{\gamma\left(N_{2}\right)}{N_{1}}=1-\frac{a}{2}\]
For example, requiring that \(\mid N_{1}-N_{2}\mid\leq k\) ensures this condition is met. More generally, sublinear growth of the distance between the parameters is sufficient. Using the notation above, we write the joint distribution as
\[p_{M}\left(\boldsymbol{\theta}\mid\gamma\right)=\mathcal{B}_{N_{1},N_{2},\zeta, M}^{-1}\prod_{l=1}^{M}\left(\theta_{l}\right)^{\gamma(N_{1})}\left(1-\theta_{l} \right)^{\gamma(N_{2})}\prod_{i<j}|\theta_{i}-\theta_{j}|^{2\zeta} \tag{28}\]
We state our main theorem below. We use the notation \(\mathcal{M}([0,1])\) to denote the collection of probability measures on the interval \([0,1]\). It is important to note that \([0,1]\) is compact and \(\mathcal{M}([0,1])\) equipped with the weak-\(*\) topology is compact and metrisable.
**Theorem 3**.: _Suppose \(\theta_{1},\ldots,\theta_{M}\) are distributed according to (28), and let \(\mu_{\boldsymbol{U}_{M}}\) be the associated empirical measure. Suppose that the conditions on \(N_{1},N_{2},M\) described above hold, and let \(N=N_{1}\). Then the limit_
\[\lim_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2},\zeta,M}:=B<\infty\]
_exists and \(\mu_{\boldsymbol{U}_{M}}\) satisfies the large deviation principle in the scale \(N^{-2}\) with rate function_
\[I(\mu) =-2a^{2}\zeta\int\limits_{0}^{1}\int\limits_{0}^{1}\log|x-y|d\mu(x )d\mu(y)\] \[-A\int\limits_{0}^{1}\log x+\log(1-x)d\mu(x)+B\]
_for \(\mu\in\mathcal{M}([0,1])\). Moreover, there exists a unique \(\mu_{0}\in\mathcal{M}([0,1])\) such that \(I(\mu_{0})=0\)._
In this case, if \(\mathcal{A}\) is a base for the topology, then the large deviation principle is equivalent to the conditions (Dembo and Zeitouni, 2009)
\[-I(x) =\inf\left\{\limsup_{n\to\infty}\frac{1}{r_{n}}\log\mu_{n}(G):G \in\mathcal{A},x\in G\right\} \tag{29}\] \[=\inf\left\{\liminf_{n\to\infty}\frac{1}{r_{n}}\log\mu_{n}(G):G \in\mathcal{A},x\in G\right\} \tag{30}\]
Intuitively, we can think of these as conditions on the tails of the distribution. Now, to obtain our theorem, we must show that
\[-I(\mu)\geq\inf\left\{\limsup_{n\to\infty}\frac{1}{r_{n}}\log\mu_{n}(G):G\right\} \tag{31}\]
and
\[-I(\mu)\leq\inf\left\{\liminf_{n\to\infty}\frac{1}{r_{n}}\log\mu_{n}(G):G\right\} \tag{32}\]
where \(G\) runs over neighbourhoods of \(\mu\).
In general, the conditions above are equivalent only to a weak LDP, in which case another property of the sequence of measures, exponential tightness, is required to establish the full LDP (Rassoul-Agha and Seppalainen, 2015). Due to compactness of the underlying space, we do not need this additional property for the full LDP.
The proof of our theorem will be accomplished via several lemmas, for which proofs are given in Appendix A. In those lemmas, we will make frequent use of the following functions, defined for \((x,y)\in(0,1)^{2}\), the interior of the unit square.
\[F(x,y):= -a^{2}\zeta\log|x-y|\] \[-\frac{A}{2}\left[\log x+\log y+\log(1-x)+\log(1-y)\right] \tag{33}\]
and
\[F_{N}(x,y)= -2\frac{M^{2}}{N^{2}}\zeta\log|x-y|\] \[-\frac{M\gamma(N_{1})}{2N^{2}}(\log x+\log y) \tag{34}\] \[-\frac{M\gamma(N_{2})}{2N^{2}}(\log(1-x)+\log(1-y))\]
For \(R>0\), define
\[F_{R}(x,y):=\min(F(x,y),R) \tag{35}\] \[F_{R,N}(x,y):=\min(F_{N}(x,y),R) \tag{36}\]
Note that \(F_{R,N}\to F_{R}\) uniformly as \(N\to\infty\). Moreover, for each \(R\), \(F_{R}(x,y)\) is bounded and continuous on \([0,1]^{2}\). Then, since \(F_{R}(x,y)\to F(x,y)\) monotonically as \(R\to\infty\), the functional \(J:\mathcal{M}([0,1])\to\mathbb{R}\) defined by
\[J(\mu):=\int\limits_{0}^{1}\int\limits_{0}^{1}F(x,y)d\mu(x)d\mu(y)\]
is lower-semicontinuous. Similarly, \(F_{N}(x,y)\) also defines a lower-semicontinuous functional \(J_{N}\). Thus, there exist unique (Totik, 1994)\(\mu_{0},\mu_{N}\in\mathcal{M}([0,1])\) having compact support such that
\[\int\limits_{0}^{1}\int\limits_{0}^{1}F(x,y)d\mu_{0}(x)d\mu_{0}(y)=\inf_{\mu \in\mathcal{M}([0,1])}J(\mu) \tag{37}\]
and
\[\int\limits_{0}^{1}\int\limits_{0}^{1}F_{N}(x,y)d\mu_{N}(x)d\mu_{N}(y)=\inf_{ \mu\in\mathcal{M}([0,1])}J_{N}(\mu) \tag{38}\]
Finally, note that \(I(\mu)=J(\mu)+B\).
**Lemma 1**.: _The sequence \(\{\mu_{N}\}\) is tight and_
\[\int\limits_{0}^{1}\int\limits_{0}^{1}F(x,y)d\mu_{0}(x)d\mu_{0}(y)\leq\] \[\liminf_{N\to\infty}\int\limits_{0}^{1}\int\limits_{0}^{1}F_{N}(x, y)d\mu_{N}(x)d\mu_{N}(y)\]
_Proof_: see Appendix A.1\(\qed\)
**Lemma 2**.: _For the normalising constant \(\mathcal{B}_{N_{1},N_{2},\zeta,M}\), we have_
\[\limsup_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2},\zeta,M}\leq- \int\limits_{0}^{1}\int\limits_{0}^{1}F(x,y)d\mu_{0}(x)d\mu_{0}(y)\]
_Proof_: see Appendix A.2\(\qed\)
**Lemma 3**.: _For every \(\mu\in\mathcal{M}([0,1])\) we have_
\[\inf_{G}\left(\limsup_{N\to\infty}\frac{1}{N^{2}}\log(\mu_{\mathbf{U}_{ M}}(G))\right)\leq\] \[-\int\limits_{0}^{1}\int\limits_{0}^{1}F(x,y)d\mu(x)d\mu(y)- \liminf_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2},\zeta,M}\]
_where \(G\) runs over neighbourhoods of \(\mu\) in the weak topology._
_Proof_: see Appendix A.3\(\qed\)
**Lemma 4**.: \[\liminf_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2},\zeta,M}\geq- \iint F(x,y)d\mu_{0}(x)d\mu_{0}(y)\]
_and for every \(\mu\in\mathcal{M}([0,1])\)_
\[\inf_{G}\left(\liminf_{N\to\infty}\frac{1}{N^{2}}\log\mu_{\mathbf{U}_{ M}}(G)\right)\geq\] \[-\iint F(x,y)d\mu(x)d\mu(y)-\limsup_{N\to\infty}\frac{1}{N^{2}} \log\mathcal{B}_{N_{1},N_{2},\zeta,M}\]
_where \(G\) runs over neighbourhoods of \(\mu\) in the weak topology._
_Proof_: see Appendix A.4\(\qed\)
Note that this result, along with a previous lemma, implies that the limit
\[\lim_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2},\zeta,M}:=B<\infty\]
exists.
Noting again that \(I(\mu)=J(\mu)+B\), the preceding Lemmas 1-4 imply that (31) and (32) hold. Moreover, we see that \(I(\mu_{0})=0\) and \(I\) is strictly convex, so that \(\mu_{0}\) is the only measure with this property.
In summary, we have shown that the empirical spectral measure associated to the distribution (25) satisfies a large deviation principle with a unique minimiser. Then, by Rassoul-Agha and Seppalainen (2015), we see that there is a unique Gibbs measure whose conditional distributions are given by (25).
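As a simple numerical illustration, the functional \(J(\mu)\) (i.e., \(I(\mu)\) up to the constant \(B\)) can be approximated by Monte Carlo for a candidate measure \(\mu\). In the sketch below the values of \(a\), \(\zeta\) and the candidate Beta measure are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
a, zeta = 0.5, 2.0
A = a * (1 - a / 2)                # A = a(1 - a/2), here 0.375

def F(x, y):
    # Kernel from Eq. (33).
    return (-a ** 2 * zeta * np.log(np.abs(x - y))
            - 0.5 * A * (np.log(x) + np.log(y) + np.log(1 - x) + np.log(1 - y)))

def J_mc(sampler, n=200_000):
    # Monte Carlo estimate of J(mu) = double integral of F(x, y) against mu x mu.
    x, y = sampler(n), sampler(n)
    return F(x, y).mean()

# Candidate measure mu = Beta(2, 2) on [0, 1]:
print(J_mc(lambda n: rng.beta(2, 2, size=n)))
```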
## 7 Hierarchical Mixture Model
In this Section we show how to use the Coulomb priors in the context of mixture models. In Section 4, we introduce a repulsive effect on the latent parameters \(\mathbf{\theta}\) by specifying \(P_{0}(\mathbf{\theta})\) from the joint distribution of the eigenvalues of random matrices. As shown earlier, such distributions induce sparsity in the mixture by repelling the cluster centres from each other, thus allowing flexible configurations of the parameters. Due to the fact that the support of each eigenvalue is not the whole real line (\(\mathbb{R}^{+}\) in the Wishart case and \((0,1)\) in the Beta one), each \(\mathbf{\theta}\) is modelled on an appropriate scale depending on the application at hand. For instance, in a setting where the latent parameters represent the locations of real-valued observations, these can be transformed via suitable link functions, e.g. \(\log\) or logit. Notice that these laws preserve the repulsion term in their distribution, as it is easily seen by applying a change of variable transformation.
To specify a prior distribution on the weights \(\mathbf{w}\) and the number of components \(M\) of the mixture, we follow the approach of Argiento and De Iorio (2022). We assume the weights are derived by normalising a finite point process, which requires the unnormalised weights \(\mathbf{S}=(S_{1},\ldots,S_{M})\) to be infinitely divisible. Argiento and De Iorio (2022) show that a finite mixture model is simply a realisation of a stochastic process whose dimension is random and has an infinite dimensional support. This leads to flexible distributions for the weights of the mixture, which also allows for efficient posterior computations. They refer to their construction as a _normalised independent finite point process_. Here, we assume that \(S_{1},\ldots,S_{M}\) are i.i.d. from a Gamma distribution. This construction is the finite-dimensional version of the class of normalised random measures with independent increments (Regazzini et al., 2003; James et al., 2009). The
resulting model is the following:
\[\begin{split}&\mathbf{y}_{i}\mid z_{i},\mathbf{\theta},M\sim f\left(\mathbf{y}_{ i}\mid\mathbf{\theta}_{z_{i}}\right)\quad i=1,\ldots,n\\ &\mathbf{\theta}_{1},\ldots,\mathbf{\theta}_{M}\mid M\sim P_{0}(\mathbf{ \theta})\\ &\mathbb{P}\left(z_{i}=h\mid\mathbf{S},M\right)\propto S_{h}\quad h=1, \ldots,M\\ & S_{1},\ldots,S_{M}\mid M\stackrel{{\text{iid}}}{{ \sim}}\text{Gamma}\left(\gamma_{S},1\right)\\ & M\sim\text{Poi}_{1}\left(\Lambda\right)\end{split} \tag{39}\]
where \(\text{Gamma}(a,b)\) represents the Gamma distribution with mean \(a/b\) and variance \(a/b^{2}\), while \(\text{Poi}_{1}(\Lambda)\) is the Poisson distribution shifted by one unit, i.e. the random variable \(M-1\) is Poisson with mean \(\Lambda\). Note that upon marginalisation of the allocation variables \(z_{i}\), for \(i=1,\ldots,n\), we obtain the normalised weights \(w_{m}=S_{m}/\sum_{j}S_{j}\). Here we highlight the distinction between the number, \(M\), of components in a mixture, i.e. of possible clusters/sub-populations, and the number, \(K_{n}\), of clusters, i.e. the number of allocated components (components to which at least one observation has been assigned). The latter quantity \(K_{n}\leq M\) can only be estimated a-posteriori.
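A minimal sketch of forward simulation from (39) is given below; it assumes the Gaussian-ensemble law with \(\zeta=1\) (i.e. the Wigner construction of the previous sections) as the repulsive prior \(P_{0}\) for the locations, a Gaussian kernel \(f\) with fixed scale, and illustrative hyperparameter values.

```python
import numpy as np

rng = np.random.default_rng(2)
n, Lambda, gamma_S, sigma = 100, 3.0, 1.0, 0.5   # illustrative hyperparameters

# Number of components: M - 1 ~ Poisson(Lambda).
M = 1 + rng.poisson(Lambda)

# Repulsive locations: eigenvalues of an M x M Wigner matrix (Gaussian ensemble, zeta = 1).
Z = rng.standard_normal((M, M))
theta = np.linalg.eigvalsh((Z + Z.T) / np.sqrt(2))

# Unnormalised weights and allocations.
S = rng.gamma(gamma_S, 1.0, size=M)
w = S / S.sum()
z = rng.choice(M, size=n, p=w)

# Observations from a Gaussian kernel f(y | theta_z) with fixed scale sigma.
y = rng.normal(loc=theta[z], scale=sigma)
print(M, np.bincount(z, minlength=M), y[:5])
```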
In the original setting proposed by Argiento and De Iorio (2022), the updates of the location parameters \(\mathbf{\theta}\) for the allocated and non-allocated components are performed separately, thanks to the independence assumption between these two components of the process. In our scenario, the parameters \(\mathbf{\theta}\) are not i.i.d., and therefore their update requires a different technique. Just as in Argiento and De Iorio (2022), we use the information provided by the vector of allocation variables \(\mathbf{z}\) to identify which components have been allocated and which ones have not. Thus, we can split the vector of parameters as \(\mathbf{\theta}=\left(\mathbf{\theta}^{(a)},\mathbf{\theta}^{(na)}\right)\) to highlight their allocation status. The update of the allocated components of the mixture is performed via a Metropolis-Hastings step. On the other hand, the update of the non-allocated components is tackled with a Metropolis-Hastings birth-and-death move, as proposed by Geyer and Moller (1994) and implemented in the context of repulsive mixtures based on DPPs in Beraha et al. (2022). One of the advantages of the proposed class of prior distributions for the parameters \(\mathbf{\theta}\) is their tractability. Indeed, all distributions described in Section 4 are known in closed form and have tractable normalising constants. This allows posterior inference to be performed on the hyperparameters driving these distributions, a feature that can be very appealing in clustering analysis. In the Supplementary Material, we detail the MCMC algorithm.
### Examples
In this Section, we first present the popular Ising model used in the thermodynamic literature to describe the behaviour of polarised particles on the 2-dimensional lattice \(\mathbb{Z}^{2}\), highlighting an interesting behaviour related to the uniqueness of the associated Gibbs measures. Then, we demonstrate the proposed approach on simulated and benchmark data. We explore the partitions induced a-posteriori by the Coulomb priors introduced in Section 4, and compare them with alternative repulsive mixture models proposed
in the literature, as well as the mixture model with a random number of components proposed by Argiento and De Iorio (2022), which can be recovered from our model when no repulsion is assumed. The latter can be easily implemented using the R package AntMAN(Ong et al., 2021). We assess the clustering results by comparing the estimates of the partitions obtained by minimising the Binder loss function (Binder, 1978), while density estimation is assessed by computing the predictive densities.
**The 2-dimensional Ising model.** Consider \(M\) particles on a 2-dimensional lattice. Let \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{M})\) be the vector of spins of the \(M\) particles under study, so that \(\theta_{i}\in\{-1,+1\}\), where \(-1\) and \(+1\) indicate negative and positive spins, respectively. The Hamiltonian of the Ising model is given by:
\[\mathcal{H}(\mathbf{\theta}\mid\zeta,h)=h\sum_{i=1}^{M}\theta_{i}+\zeta\sum_{i \sim j}\theta_{i}\theta_{j} \tag{40}\]
where \(\zeta>0\) is the inverse temperature and \(h\in\mathbb{R}\) quantifies the influence of a magnetic field acting on the particles and in particular on the values of the spins \(\mathbf{\theta}\). Notice how the expression of the Hamiltonian of the Ising model is of the same form as the Gibbs potential in Eq. (4), where \(\psi_{1}(\theta)=\theta\) and \(\psi_{2}(\theta_{i},\theta_{j})=\theta_{i}\theta_{j}\). In the standard Ising model, the particles are assumed to interact only with their immediate neighbours, as indicated by the notation \(i\sim j\) in Eq. (40). From Eq. (40) we obtain the probability of observing a specific configuration of the particle spins \(\mathbf{\theta}\):
\[p_{M}\left(\mathbf{\theta}\mid\zeta,h\right)=\frac{e^{-\mathcal{H}(\mathbf{\theta}|\zeta,h)}}{Z_{M}^{Ising}\left(\zeta,h\right)} \tag{41}\]
with \(Z_{M}^{Ising}\left(\zeta,h\right)=\sum_{\mathbf{\theta}\in\{-1,+1\}^{M}}e^{-\mathcal{H}(\mathbf{\theta}|\zeta,h)}\) representing the normalising constant (i.e., the partition function). Computations under the Ising model are challenging due to the intractability of \(Z_{M}^{Ising}\left(\zeta,h\right)\), with the exception of the case \(d=1\), which has a closed-form solution.
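Although the partition function is intractable, simulation is straightforward because Metropolis-type updates only require Hamiltonian differences, in which the normalising constant cancels. The sketch below evaluates Eq. (40) on a small square lattice with free boundary conditions and runs naive single-spin Metropolis updates; the lattice size, boundary choice and parameter values are illustrative, and the sign conventions follow Eqs. (40)-(41) exactly as written.

```python
import numpy as np

rng = np.random.default_rng(2)

def hamiltonian(theta, zeta, h):
    """Eq. (40) on an n x n lattice; i ~ j are horizontal/vertical neighbours (free boundary)."""
    pair = (theta[:, :-1] * theta[:, 1:]).sum() + (theta[:-1, :] * theta[1:, :]).sum()
    return h * theta.sum() + zeta * pair

def metropolis_sweep(theta, zeta, h):
    """One sweep of single-spin Metropolis moves targeting Eq. (41).

    The full Hamiltonian is recomputed at every proposal, which is wasteful
    but keeps the sketch short; the constant Z cancels in the acceptance ratio."""
    n = theta.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        prop = theta.copy()
        prop[i, j] *= -1
        log_acc = -(hamiltonian(prop, zeta, h) - hamiltonian(theta, zeta, h))
        if np.log(rng.uniform()) < log_acc:
            theta = prop
    return theta

theta = rng.choice([-1, 1], size=(10, 10))
for _ in range(50):
    theta = metropolis_sweep(theta, zeta=0.3, h=0.0)
```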
The behaviour of the Ising model in the thermodynamic limit, i.e. for \(M\rightarrow+\infty\), is of particular interest as it depends on the dimension of the lattice \(d\) and on the parameters \(\zeta\) and \(h\). Theorem 3.25 by Friedli and Velenik (2017) gives an overview of the limiting distributions that arise under different Ising model specifications. When \(d\geq 2\), a first order phase transition may occur, allowing the set of limiting Gibbs measures to contain more than one element. In particular, when the external magnetic field is absent (\(h=0\)) there exists a _critical value_ \(\zeta_{c}=\zeta_{c}(d)\) of the inverse temperature above which two possible states are obtained for \(M\rightarrow+\infty\). On the other hand, when \(h\neq 0\), the behaviour of the particles is dominated by the external magnetic field.
We illustrate this result numerically for the 2-dimensional Ising model. We specify a univariate Gaussian mixture model with two components, indexed by an underlying
Ising model, as follows:
\[y_{i}\sim\text{N}\left(y_{i}\mid\theta_{i},1\right),\quad i=1, \ldots,M\] \[\theta_{1},\ldots,\theta_{M}\mid\zeta,h\sim\text{Ising}_{M}^{2} \left(\boldsymbol{\theta}\mid\zeta,h\right) \tag{42}\]
where \(\text{Ising}_{M}^{2}\left(\boldsymbol{\theta}\mid\zeta,h\right)\) indicates that the \(M\) particle spins \(\boldsymbol{\theta}\) are distributed according to the 2-dimensional Ising model specified above. We simulate \(M=n\times n\) observations from model (42), on a square lattice of size \(n\in\{5,10,15,20\}\), with equal proportion \(\pi_{\theta}\) of positive and negative spins. We fit the above model for varying values of the parameters \(\zeta\in\{0,0.1,0.25,\zeta_{c},0.5,0.75,1,10,100\}\), with \(\zeta_{c}=\log\left(1+\sqrt{2}\right)/2\approx 0.44\), and \(h\in[-5,5]\). We run the MCMC algorithm for 1500 iterations, of which the first 1000 are discarded as burn-in. We start the MCMC chains from different spin configurations \((\theta_{1},\ldots,\theta_{M})\) sampled from independent Bernoulli distributions over \(\{-1,1\}\) with varying success probability (i.e., expected proportion of spins equal to 1), \(\pi_{\theta}\in\{0,0.25,0.5,0.75,1\}\).
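Posterior simulation for model (42) only requires single-site full conditionals, since each spin interacts with one data point and its lattice neighbours. A minimal systematic-scan Gibbs sweep, written under the sign conventions of Eqs. (40)-(41) and with free boundary conditions, might look as follows; the sampler actually used in the paper may differ in its details, and the lattice size and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def gibbs_sweep(theta, y, zeta, h):
    """Redraw every spin from its full conditional under model (42)."""
    n = theta.shape[0]
    for i in range(n):
        for j in range(n):
            nb = 0.0                                   # sum of neighbouring spins
            if i > 0:      nb += theta[i - 1, j]
            if i < n - 1:  nb += theta[i + 1, j]
            if j > 0:      nb += theta[i, j - 1]
            if j < n - 1:  nb += theta[i, j + 1]
            # log full conditional, up to a constant, for s in {-1, +1}
            logp = {s: -0.5 * (y[i, j] - s) ** 2 - h * s - zeta * s * nb for s in (-1, 1)}
            p_plus = 1.0 / (1.0 + np.exp(logp[-1] - logp[1]))
            theta[i, j] = 1 if rng.uniform() < p_plus else -1
    return theta

n = 10
theta = rng.choice([-1, 1], size=(n, n))
y = rng.normal(theta, 1.0)                             # synthetic data from model (42)
for _ in range(500):
    theta = gibbs_sweep(theta, y, zeta=0.5, h=0.0)
```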
As we show in Figure 6, when \(h=0\), the initial configuration is of particular importance as it determines the limiting distribution, when non-uniqueness is encountered. Specifically, for values of the inverse temperature \(\zeta\) greater than the critical \(\zeta_{c}\), often referred to as the _low temperatures_ case, the distribution of the spins is the same as the one used to sample the initial configuration.
Another important quantity monitored in the context of the Ising model is the _magnetisation_ of the particle system, defined as the average spin value among the \(M\) particles, i.e. \(\mathcal{M}_{M}=\frac{1}{M}\sum_{i=1}^{M}\theta_{i}\). As can be observed from Figure 7, the posterior mean of the magnetisation strongly depends on the value of the external field \(h\). This is in agreement with the results of Theorem 3.25 by Friedli and Velenik (2017).
These results are a consequence of the non-uniqueness of the Gibbs measure. We conclude by noting that the approach of Petralia et al. (2012) and Quinlan et al. (2021) does not have a corresponding infinite-dimensional measure, as their prior process is not Kolmogorov consistent, and it has not been shown that there is an infinite-dimensional object with their distributions as conditionals.
**Mixture of binomial distributions.** We illustrate the performance of the proposed model on data simulated from a mixture of binomial distributions. Specifically, \(n=150\) observations are sampled from the following mixture with five components:
\[f\left(y_{i}\mid\boldsymbol{\theta}\right)=\sum_{m=1}^{5}w_{m} \text{Bin}\left(y_{i}\mid 15,\pi_{m}\right)\quad i=1,\ldots,n\] \[\boldsymbol{w}\sim\text{Dir}_{M}\left(\boldsymbol{w}\mid 1, \ldots,1\right)\] \[\boldsymbol{\pi}=\text{logit}^{-1}\left(-5,-2.5,0,2.5,5\right)\]
where \(\text{Bin}\left(y\mid R,\pi\right)\) indicates the Binomial distribution with \(R\) trials and success probability \(\pi\), while \(\text{Dir}_{M}\left(\boldsymbol{w}\mid a,\ldots,a\right)\) represents the \(M\)-dimensional Dirichlet distribution with shape parameter vector \((a,\ldots,a)\) and \(a>0\).
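For concreteness, synthetic data of this form can be generated along the following lines; the seed, and hence the realised weights and labels, are of course arbitrary, and only the kernel, the number of trials and the logit-scale component locations are taken from the specification above.

```python
import numpy as np

rng = np.random.default_rng(4)

logits = np.array([-5.0, -2.5, 0.0, 2.5, 5.0])
pi = 1.0 / (1.0 + np.exp(-logits))        # inverse-logit success probabilities
w = rng.dirichlet(np.ones(5))             # mixture weights ~ Dir_5(1, ..., 1)
z = rng.choice(5, size=150, p=w)          # latent component labels
y = rng.binomial(15, pi[z])               # y_i ~ Bin(15, pi_{z_i})
```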
To investigate the results obtained by fitting misspecified mixture models to the simulated data, we compare several repulsive mixture models in a finite mixture setting, i.e. when \(M\) is fixed. In particular, we implement the models proposed by Quinlan et al. (2021); Petralia et al. (2012) and Xu et al. (2016), as well as the one proposed in this paper, with suitable prior distributions specified for the parameter vector of interest \(\mathbf{\pi}\). We opt for a Beta kernel density for the methods proposed by Quinlan et al. (2021) and Petralia et al. (2012), while we use Gaussian kernels for the specification of the DPP in Xu et al. (2016), and impose an inverse-logit link on the parameters of interest \(\pi_{m}\), for \(m=1,\ldots,M\). To fix the hyperparameters of the competitor models, we follow the authors' guidelines presented in the corresponding works. Finally, the Jacobi-Coulomb prior distribution is used when fitting our model.
First, we assess posterior inference on the partition estimated by the different models. We show in Figure 8 the posterior distribution of the number of clusters \(K_{n}\) under different model specifications. We observe right skewness in most scenarios, and more markedly for the competitor models. In particular, the proposed model induces partitions characterised by a lower number of clusters when compared to the model of Petralia et al. (2012), and comparable results to the DPP model of Xu et al. (2016) and the one of Quinlan et al. (2021). Such a difference is less evident when a large number of mixture components \(M\) is specified. This behaviour is confirmed by computing the partition minimising the Binder loss function, for which we report the corresponding number of clusters in Table 2. The proposed method yields partitions more robust to increasing values of \(M\) and composed of fewer clusters.
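For reference, the Binder point estimate used throughout this Section is typically obtained by computing the posterior similarity matrix and restricting the search to the partitions visited by the chain; a minimal sketch with equal misclassification costs (an assumption, and the usual default) is given below.

```python
import numpy as np

def binder_estimate(Z):
    """Z is a (num_samples, n) array of allocation vectors from the MCMC.

    Returns the visited partition minimising the (equal-cost) Binder loss,
    together with the posterior similarity matrix."""
    S, n = Z.shape
    psm = np.zeros((n, n))
    for z in Z:
        psm += (z[:, None] == z[None, :])
    psm /= S
    losses = []
    for z in Z:
        co = (z[:, None] == z[None, :]).astype(float)    # co-clustering matrix of z
        losses.append(np.abs(co - psm).sum())             # Binder loss, up to a constant factor
    best = int(np.argmin(losses))
    return Z[best], psm
```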
**Air quality data.** We illustrate the proposed model on the bivariate _Air Quality_ dataset (Chambers et al., 1983), containing \(n=111\) air quality measurements, indicating the levels of New York's Ozone and Solar radiation in 1973. We fit to the Air Quality data a bivariate mixture of Gaussian distributions, with a random number of components \(M\) and a Gaussian Coulomb prior for the location parameters.
| \(M\) | 2 | 3 | 5 | 7 | 10 | 25 |
| --- | --- | --- | --- | --- | --- | --- |
| Beta-Coulomb | 2 | 3 | 3 | 3 | 3 | 5 |
| DPP | 2 | 3 | 3 | 4 | 3 | 5 |
| QQP | 2 | 3 | 3 | 3 | 4 | 9 |
| Petralia | 2 | 3 | 4 | 5 | 4 | 6 |

Table 2: Binomial mixture data. Number of clusters for the partition estimates obtained by minimising the Binder loss function, for increasing values of \(M\) and different repulsive models.
Note that each dimension of the mean vector is a-priori independent. The model is specified as follows:
\[\mathbf{y}_{i}\mid z_{i},\mathbf{\theta},M\sim\text{N}_{2}\left(\mathbf{y}_{i} \mid\mathbf{\theta}_{z_{i}},\mathbf{\Sigma}_{z_{i}}\right)\quad i=1,\ldots,n\] \[\mathbf{\theta}_{1},\ldots,\mathbf{\theta}_{M}\mid M,\zeta\sim p_{M} \left(\mathbf{\theta}\right)\] \[\zeta\sim\text{Gamma}\left(1,1\right)\] \[\mathbf{\Sigma}_{1},\ldots,\mathbf{\Sigma}_{M}\mid M\stackrel{{ \text{iid}}}{{\sim}}\text{inv-Wishart}\left(\nu,\mathbf{\Psi}\right)\] \[\mathbb{P}\left(z_{i}=h\mid\mathbf{S},M\right)\propto S_{h},\quad h= 1,\ldots,M\] \[S_{1},\ldots,S_{M}\mid M\stackrel{{\text{iid}}}{{ \sim}}\text{Gamma}\left(\gamma_{S},1\right)\] \[M\sim\text{Poi}_{1}\left(\Lambda\right)\]
Note that the covariance matrices \(\mathbf{\Sigma}_{m}\), for \(m=1,\ldots,M\), are component-specific but modelled independently from the location parameters \(\mathbf{\theta}\), as an i.i.d. sample from an inverse Wishart distribution with \(\nu=6\) degrees of freedom and \(\mathbf{\Psi}\) set equal to the 2-dimensional identity matrix. We also impose a prior distribution on the repulsion parameter \(\zeta\). We compare our results with those obtained using the repulsive finite mixture model by Quinlan et al. (2021) with \(M=5\) components, and with the model implemented in the R package AntMAN. The latter corresponds to the proposed model when the repulsive term is omitted, and we fix the hyperparameters \(\gamma_{S}=\Lambda=1\) in both models. After an initial burn-in of 100 iterations used to initialise the adaptive steps of the MCMC, all the simulations are run for a total of 7500 iterations. These include 2500 iterations discarded as burn-in and 5000 thinned to obtain a final sample of 2500 iterations, used for posterior inference.
Figures 9(a-b) show the posterior distribution of the number of clusters \(K_{n}\) and components \(M\) for the three models. The proposed approach is able to identify coarser partitions a-posteriori than the model of Quinlan et al. (2021) and the one without repulsion implemented in the AntMAN package. The posterior distribution of the repulsive parameter \(\zeta\) is shown in Figure 9(c). The partitions estimated by minimising the Binder loss function (Binder, 1978) are displayed, together with the contour plots of the predictive densities, in Figure 10. These figures show how the proposed method is able to gather in the same cluster points spread across a wider region, avoiding redundancy.
## 6 Conclusion

A key property of Coulomb priors is a soft penalisation of components that are close together, which leads to sparsity in the number of estimated components. A key advantage of our approach is the availability of the normalising constant in closed form, as well as the existence of a unique infinite-dimensional distribution for \(M\to\infty\). These properties allow the specification of a prior on the number of components in the mixture (differently from previous approaches), the design of efficient computational schemes, and posterior inference on the repulsion parameters, leading to substantially improved clustering performance in general applications. We show that, compared to the independent prior on the component centres and to competitor repulsive approaches, the Coulomb priors induce an additional shrinkage effect on the tail probability of the posterior number of components, reducing model complexity.
There are many extensions to our work. Promising avenues of future research are the extension of dependent priors in the context of nested clustering, and the specification of a prior on the weights of the mixture which favours large components (as in Fiquene et al. (2019)), while maintaining a Coulomb prior on the locations.
## Appendix A Appendix A: Proofs
### A.1: Proof of Lemma 1
**Lemma 1**.: _The sequence \(\{\mu_{N}\}\) is tight and_
\[\int\limits_{0}^{1}\int\limits_{0}^{1}F(x,y)d\mu_{0}(x)d\mu_{0}(y)\leq\] \[\liminf_{N\to\infty}\int\limits_{0}^{1}\int\limits_{0}^{1}F_{N}(x,y)d\mu_{N}(x)d\mu_{N}(y)\]
Proof.: Note that since the unit square is compact, we can immediately conclude that the sequence is tight. Then, there exists a subsequence \(\{\mu_{N_{i}}\}\) that converges weakly to some \(\hat{\mu}\in\mathcal{M}([0,1])\), and
\[\lim_{i\to\infty}\int\limits_{0}^{1}\int\limits_{0}^{1}F_{N_{i}}( x,y)d\mu_{N_{i}}(x)d\mu_{N_{i}}(y)=\] \[\liminf_{N\to\infty}\int\limits_{0}^{1}\int\limits_{0}^{1}F_{N}( x,y)d\mu_{N}(x)d\mu_{N}(y)\]
Now,
\[\int\limits_{0}^{1}\int\limits_{0}^{1}F(x,y)d\mu_{0}(x)d\mu_{0}(y) \leq\int\limits_{0}^{1}\int\limits_{0}^{1}F(x,y)d\hat{\mu}(x)d\hat{\mu}(y)\] \[=\sup\limits_{R>0}\int\limits_{0}^{1}\int\limits_{0}^{1}F_{R}(x,y)d\hat{\mu}(x)d\hat{\mu}(y)\] \[=\sup\limits_{R>0}\lim\limits_{i\rightarrow\infty}\int\limits_{0}^{1}\int\limits_{0}^{1}F_{R,N_{i}}(x,y)d\mu_{N_{i}}(x)d\mu_{N_{i}}(y)\] \[\leq\lim\limits_{i\rightarrow\infty}\int\limits_{0}^{1}\int\limits_{0}^{1}F_{N_{i}}(x,y)d\mu_{N_{i}}(x)d\mu_{N_{i}}(y)\]
which yields the desired inequality.
### A.2: Proof of Lemma 2
**Lemma 2**.: _For the normalising constant \(\mathcal{B}_{N_{1},N_{2},\zeta,M}\), we have_
\[\limsup\limits_{N\rightarrow\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2 },\zeta,M}\leq-\int\limits_{0}^{1}\int\limits_{0}^{1}F(x,y)d\mu_{0}(x)d\mu_{0 }(y)\]
Proof.: We begin by rewriting the density (28) as
\[\begin{split}\exp\left\{-2\frac{N^{2}}{M^{2}}\sum\limits_{i<j}F_ {N}(\theta_{i},\theta_{j})\right\}\times\\ \exp\left\{\sum\limits_{i=1}^{M}\frac{\gamma(N_{1})}{M}\log( \theta_{i})+\frac{\gamma(N_{2})}{M}\log(1-\theta_{i})\right\}\end{split} \tag{43}\]
Now, note that
\[\sum\limits_{i<j}F_{N}(\theta_{i},\theta_{j})=\iint\limits_{x\neq y}M^{2}F_{N }(x,y)d\mu_{\mathbf{U}_{M}}(x)d\mu_{\mathbf{U}_{M}}(y)\]
Then,
\[\mathcal{B}_{N_{1},N_{2},\zeta,M} \leq\left[\int\limits_{0}^{1}\exp\left(\frac{\gamma(N_{1})}{M}\log( x)+\frac{\gamma(N_{2})}{M}\log(1-x)\right)dx\right]^{M}\times\] \[\exp\left(-N^{2}\iint\limits_{x\neq y}F_{N}(x,y)d\mu_{\mathbf{U}_{M}} (x)d\mu_{\mathbf{U}_{M}}(y)\right)\]
Since
\[\sup\limits_{N\geq 1}\int\limits_{0}^{1}\exp\left(\frac{\gamma(N_{1})}{M}\log( x)+\frac{\gamma(N_{2})}{M}\log(1-x)\right)dx<\infty\]
we have
\[\limsup\limits_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2},\zeta,M} \leq-\liminf\limits_{N\to\infty}\iint\limits_{x\neq y}F_{N}(x,y)d\mu_{\mathbf{U}_{M}}(x)d\mu_{\mathbf{U}_{M}}(y)\] \[\leq-\int\limits_{0}^{1}\int\limits_{0}^{1}F(x,y)d\mu_{0}(x)d\mu_{0}(y)\]
### A.3: Proof of Lemma 3
**Lemma 3**.: _For every \(\mu\in\mathcal{M}([0,1])\) we have_
\[\inf\limits_{G}\left(\limsup\limits_{N\to\infty}\frac{1}{N^{2}} \log(\mu_{\mathbf{U}_{M}}(G))\right)\leq\] \[-\int\limits_{0}^{1}\int\limits_{0}^{1}F(x,y)d\mu(x)d\mu(y)-\liminf \limits_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2},\zeta,M}\]
_where \(G\) runs over neighbourhoods of \(\mu\) in the weak topology._
Proof.: Let \(\mu\in\mathcal{M}([0,1])\) and let \(G\) be a neighbourhood of \(\mu\). For \(t\in[0,1]^{M}\), define \(D_{t,M}\) to be the \(M\times M\) diagonal matrix whose entries are given by \(t\). Define \(\tilde{G}:=\{t\in[0,1]^{M}:\mu_{D_{t,M}}\in G\}\). Then, letting \(\nu\) denote the measure corresponding to the density \(p(\mathbf{\theta}|\gamma)\), we have
\[\mu_{\mathbf{U}_{M}}(G)=\nu(\tilde{G})\]
Now, note that \(\mu_{\mathbf{U}_{M}}\otimes\mu_{\mathbf{U}_{M}}(\{x=y\})=\frac{1}{M}\), from which we see
\[\iint F_{R,N}(x,y)d\mu_{\mathbf{U}_{M}}(x)d\mu_{\mathbf{U}_{M}}(y)=\] \[\iint_{x\neq y}F_{R,N}(x,y)d\mu_{\mathbf{U}_{M}}(x)d\mu_{\mathbf{U}_{M}}(y)+\frac{R}{M}\]
Then, rewriting the density as before, we obtain
\[\mu_{\mathbf{U}_{M}}(G)=\nu(\tilde{G})\] \[\leq\mathcal{B}_{N_{1},N_{2},\zeta,M}^{-1}\left[\int\limits_{0}^{1}\exp\left(\frac{\gamma(N_{1})}{M}\log(x)+\frac{\gamma(N_{2})}{M}\log(1-x)\right)dx\right]^{M}\] \[\times\exp\left(-N^{2}\inf_{\sigma\in G}\iint_{x\neq y}F_{R,N}(x,y)d\sigma(x)d\sigma(y)+NR\right)\]
for any \(R>0\). Moreover, we know that
\[\lim_{N\to\infty}\left(\inf_{\sigma\in G}\iint F_{R,N}(x,y)d \sigma(x)d\sigma(y)\right)=\] \[\inf_{\sigma\in G}\iint F_{R}(x,y)d\sigma(x)d\sigma(y)\]
Hence
\[\limsup_{N\to\infty}\frac{1}{N^{2}}\log\mu_{\mathbf{U}_{M}}(G)\leq\] \[-\inf_{\sigma\in G}\iint F_{R}(x,y)d\sigma(x)d\sigma(y)-\liminf_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2},\zeta,M}\]
Since \(F_{R}(x,y)\) is bounded and continuous, it defines a continuous functional so that
\[\inf_{G}\left(\limsup_{N\to\infty}\frac{1}{N^{2}}\log(\mu_{\mathbf{U} _{M}}(G))\right)\leq-\] \[\iint F_{R}(x,y)d\mu(x)d\mu(y)-\liminf_{N\to\infty}\frac{1}{N^{2}} \log\mathcal{B}_{N_{1},N_{2},\zeta,M}\]
Taking the limit as \(R\to\infty\) and applying the monotone convergence theorem yields the desired result.
### A.4: Proof of Lemma 4
**Lemma 4**.: \[\liminf_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2},\zeta,M}\geq- \iint F(x,y)d\mu_{0}(x)d\mu_{0}(y)\]
_and for every \(\mu\in\mathcal{M}([0,1])\)_
\[\inf_{G}\left(\liminf_{N\to\infty}\frac{1}{N^{2}}\log\mu_{\mathbf{U}_{M}}(G)\right)\geq\]
\[-\iint F(x,y)d\mu(x)d\mu(y)-\limsup_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_ {N_{1},N_{2},\zeta,M}\]
_where \(G\) runs over neighbourhoods of \(\mu\) in the weak topology._
Proof.: Without loss of generality, we may assume that \(\mu\) has a continuous density \(f\) on \([0,1]\). Then, there exists an \(\epsilon>0\) such that \(\epsilon\leq f(x)\leq\frac{1}{\epsilon}\) for \(x\in[0,1]\). Next, for each \(N\), define constants
\[0=s_{0,N}<r_{1,N}<s_{1,N}<\dots<r_{M,N}<s_{M,N}=1\]
such that
\[\int\limits_{0}^{r_{i,N}}f(x)dx=\frac{i-1/2}{M}\quad\text{ and }\quad\int \limits_{0}^{s_{i,N}}f(x)dx=\frac{i}{M}\]
Then we have
\[\frac{\epsilon}{2M}\leq s_{i,N}-r_{i,N}\leq\frac{1}{2M\epsilon}\]
Now, define
\[\Delta_{N}=\left\{(t_{1},\dots,t_{M})\in\mathbb{R}^{M}:r_{i,N}\leq t_{i}\leq s _{i,N}\right\}\]
For any neighbourhood \(G\) of \(\mu\) we can choose \(N\) large enough so that \(\Delta_{N}\subset\tilde{G}\). Thus,
\[\mu_{\mathbf{U}_{M}}(G)=\nu(\tilde{G})\geq\nu(\Delta_{N})\] \[=\mathcal{B}_{N_{1},N_{2},\zeta,M}^{-1}\int\dots\int\limits_{\Delta_{N}}\prod_{i=1}^{M}t_{i}^{\gamma}(1-t_{i})^{\gamma}\prod_{i<j}(t_{i}-t_{j})dt_{1}\dots dt_{M}\] \[\geq\mathcal{B}_{N_{1},N_{2},\zeta,M}^{-1}\left(\frac{\epsilon}{2M}\right)^{M}\prod_{i=1}^{M}r_{i,N}^{\gamma}\prod_{i<j}d_{i,j,N}\]
where \(d_{i,j,N}=\min\{|x-y|:r_{i,N}\leq x\leq s_{i,N},r_{j,N}\leq y\leq s_{j,N}\}\). Now, we observe that
\[\lim_{N\to\infty}\sum_{i=1}^{M}\frac{\gamma(N_{1})}{N^{2}}\log r_{i,N}+\frac{\gamma(N_{2})}{N^{2}}\log(1-r_{i,N})=\]
\[A\int\log x+\log(1-x)d\mu(x)\]
and
\[\lim_{N\to\infty}\frac{1}{N^{2}}\sum_{i<j}\log(r_{j,N}-s_{i,N})=a^{2}\zeta \int\log|x-y|d\mu(x)d\mu(y)\]
Thus,
\[\limsup_{N\to\infty}\frac{1}{N^{2}}\log\mu_{\boldsymbol{U}_{M}}(G)\geq\]
\[-\iint F(x,y)d\mu(x)d\mu(y)-\liminf_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_ {N_{1},N_{2},\zeta,M}\]
After taking the infimum over \(\mu\), this implies
\[\liminf_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2},\zeta,M}\geq- \iint F(x,y)d\mu_{0}(x)d\mu_{0}(y)\]
Moreover, we have
\[\liminf_{N\to\infty}\frac{1}{N^{2}}\log\mu_{\boldsymbol{U}_{M}}(G)\geq\]
\[-\iint F(x,y)d\mu(x)d\mu(y)-\limsup_{N\to\infty}\frac{1}{N^{2}}\log\mathcal{B}_{N_{1},N_{2},\zeta,M}\]
Figure 2: Gaussian ensemble. Joint distribution of the eigenvalues of a 2-dimensional Gaussian matrix for different choices of the parameter \(\zeta\).
Figure 3: Laguerre ensemble. Joint distribution of the eigenvalues of a 2-dimensional Wishart matrix for different choices of the parameters \((\alpha,\zeta)\).
Figure 4: Jacobi ensemble. Joint distribution of the eigenvalues of a 2-dimensional Beta matrix for different choices of the parameters \((\alpha,\beta,\zeta)\).
Figure 5: Connections between properties of Gibbs Measures.
Figure 6: Ising model (\(d=2\)). Posterior mean of the proportion of positive spins in the sample, for different values of \(\zeta\) and \(h=0\). Each line corresponds to a different initialisation setting, i.e. \(\pi_{\theta}\).
Figure 7: Ising model (\(d=2\), \(n=20\)). Posterior mean of the magnetisation of the system \(\mathcal{M}_{M}\), for different values of \(\zeta\) and \(h\). Each panel corresponds to a different initialisation setting, i.e. \(\pi_{\theta}\).
Figure 8: Binomial mixture data. Posterior distribution of the number of clusters \(K_{n}\) for different model specifications. Each panel shows the results for a different value of \(M\).
Figure 10: Air Quality data. Contour plots of the predictive distributions obtained from a mixture model with (a) the Gaussian Coulomb prior; (b) the repulsive finite mixture proposed by Quinlan et al. (2021); (c) non-repulsive prior as implemented in the R package AntMAN. Different colours refer to cluster assignment as estimated by minimising the Binder loss function.
Figure 9: Air Quality data. Posterior distribution of (a) the number of clusters \(K_{n}\) and of (b) the number of components \(M\) obtained under different modelling specifications. Panel (c) shows the posterior distribution of the repulsion parameter \(\zeta\) of the Gaussian Coulomb prior.
|
2306.08026
|
Mapping and Probing Froggatt-Nielsen Solutions to the Quark Flavor
Puzzle
|
The Froggatt-Nielsen (FN) mechanism is an elegant solution to the flavor
problem. In its minimal application to the quark sector, the different quark
types and generations have different charges under a $U(1)_X$ flavor symmetry.
The SM Yukawa couplings are generated below the flavor breaking scale with
hierarchies dictated by the quark charge assignments. Only a handful of charge
assignments are generally considered in the literature. We analyze the complete
space of possible charge assignments with $|X_{q_i}| \leq 4$ and perform both a
set of Bayesian-inspired numerical scans and an analytical spurion analysis to
identify those charge assignments that reliably generate SM-like quark mass and
mixing hierarchies. The resulting set of top-20 flavor charge assignments
significantly enlarges the viable space of FN models but is still compact
enough to enable focused phenomenological study. We demonstrate that these
distinct charge assignments result in the generation of flavor-violating
four-quark operators characterized by significantly varied strengths,
potentially differing substantially from the possibilities previously explored
in the literature. Future precision measurement of quark flavor violating
observables may therefore enable us to distinguish among otherwise equally
plausible FN charges, thus shedding light on the UV structure of the flavor
sector.
|
Claudia Cornella, David Curtin, Ethan T. Neil, Jedidiah O. Thompson
|
2023-06-13T18:00:00Z
|
http://arxiv.org/abs/2306.08026v2
|
# Mapping and Probing Froggatt-Nielsen Solutions to the Quark Flavor Puzzle
###### Abstract
The Froggatt-Nielsen (FN) mechanism is an elegant solution to the flavor problem. In its minimal application to the quark sector, the different quark types and generations have different charges under a \(U(1)_{X}\) flavor symmetry. The SM Yukawa couplings are generated below the flavor breaking scale with hierarchies dictated by the quark charge assignments. Only a handful of charge assignments are generally considered in the literature. We analyze the complete space of possible charge assignments with \(|X_{q_{i}}|\leq 4\) and perform both a set of Bayesian-inspired numerical scans and an analytical spurion analysis to identify those charge assignments that reliably generate SM-like quark mass and mixing hierarchies. The resulting set of top-20 flavor charge assignments significantly enlarges the viable space of FN models but is still compact enough to enable focused phenomenological study. We demonstrate that these distinct charge assignments result in the generation of flavor-violating four-quark operators characterized by significantly varied strengths, potentially differing substantially from the possibilities previously explored in the literature. Future precision measurement of quark flavor violating observables may therefore enable us to distinguish among otherwise equally plausible FN charges, thus shedding light on the UV structure of the flavor sector.
+
Footnote †: preprint: MITP-23-026
## I Introduction
The possible origin of the wide and varying hierarchies amongst the quark and lepton masses and mixing angles has long invited speculation. While such disparate Lagrangian parameters are technically natural for fermions, the fact that the three matter generations appear identical in all other respects calls for a dynamical explanation of this so-called flavor puzzle.
A common strategy is to describe these patterns in terms of an approximate flavor symmetry, a subgroup of \(U(3)^{5}\), whose breaking yields the observed masses and mixing angles. Historically, the first attempt in this direction was made by Froggatt and Nielsen in 1978 Froggatt and Nielsen (1978). This strategy, expanded in Froggatt and Nielsen (1978); Nielsen (1978), became known as the Froggatt-Nielsen (FN) mechanism (see e.g. Ref. Froggatt and Nielsen (1978) for a modern review). In its simplest form, this mechanism relies on the introduction of a \(U(1)_{X}\) symmetry under which fermions of different generations have different charges. This symmetry is broken at some high "flavor scale" \(\Lambda_{F}\) by the vacuum expectation value (vev) of a SM-singlet scalar \(\phi\), often referred to as the flavon. The SM Yukawa couplings are then generated as effective operators suppressed by factors \((\langle\phi\rangle/\Lambda_{F})^{n}=\epsilon^{n}\), where \(\epsilon\lesssim 0.1\) and \(n\) is determined by the \(U(1)_{X}\) charges of the Higgs and the respective fermions. Assuming that these operators are generated in the UV completion of the model through heavy particle exchange with presumably \(\mathcal{O}(1)\) coefficients (see e.g. Refs. Froggatt and Nielsen (1978); Froggatt (1978); Froggatt (1978)), the hierarchies of the Yukawa couplings in the flavor basis are generated by the \(U(1)_{X}\) charge assignments of the SM fields and the flavon vev relative to the UV scale, \(\epsilon\).
FN models have been extremely well studied over the last decades (see e.g. Refs. Froggatt and Nielsen (1978
insights can be gained regarding FN models in the foreseeable future, despite their characteristic high energy scale.
In this paper, we perform the first step in such a model-exhaustive program by identifying the most general "theory space" of natural FN models with a single Higgs field that can address the SM flavor problem. As pointed out in Ref. [19], flavor charge assignments beyond the few canonical choices can generate close matches to the SM, seemingly with natural \(\mathcal{O}(1)\) choices for the various coefficients. Inspired by this observation, our analysis considers all possible flavor charge assignments up to \(|X_{q}|\leq 4\) in a FN setup restricted to the quark sector, then performs numerical scans over natural \(\mathcal{O}(1)\) choices of all the Yukawa coefficients, to determine in a maximally agnostic and general fashion which charge assignments and flavon vevs can generate "SM-like" quark masses and mixings. We find that the space of fully natural FN models is much larger than previously understood in the literature, including at least several dozen models depending on the precise interpretation of our results.
Unlike previous analyses, we select the "SM-like" choices in a Bayesian-inspired fashion without explicitly fitting to the SM. We believe this more accurately reflects the intended spirit of the FN solution, that the SM hierarchies should emerge "naturally." Reassuringly, we do find that the SM-like FN setups easily yield many stable exact fits to the SM, and that our numerical results can be understood in the context of an analytical spurion analysis that assumes small rotations between the flavor- and mass-bases.
With this natural theory space of viable FN models now defined, we then perform a toy-demonstration of their phenomenological significance by estimating the size of various flavor-violating dimension-6 SMEFT operators in each model and comparing their size to current constraints. This not only demonstrates that distinct charge assignments or "textures" (these terms are used interchangeably hereafter) yield varying predictions for flavor-violating observables, but it also provides insights into the specific observables that are most effective in discerning between different FN solutions to the flavor puzzle in the future. It is our hope that this kind of analysis can be helpful for future studies of flavor physics, including SMEFT global fits, by providing a "theoretical lamp post" that can direct attention on the most well-motivated parts of the vast landscape of FN models and flavor observables.
This paper is structured as follows. In Section II we briefly review the Froggatt-Nielsen mechanism. The numerical analysis of all possible FN models with \(|X_{q}|\leq 4\) is presented in Section III. The results of this analysis are corroborated by an analytical spurion analysis in Section IV. Some phenomenological implications are sketched out in Section V, and we conclude in Section VI. Details on SM fits, a custom tuning measure appropriate for FN models, and a variety of necessary statistical checks are collected in the Appendices.
## II Review of the Froggatt-Nielsen Mechanism
In this Section we briefly review the Froggatt-Nielsen mechanism in its minimal form. Our treatment and notation closely parallel those of Ref. [19]. As stated in the introduction, the FN mechanism consists of adding a new \(U(1)_{X}\) global flavor symmetry to the SM, along with a scalar field \(\phi\) whose vev spontaneously breaks this symmetry. Without loss of generality we take \(X_{\phi}=1,X_{H}=0\) and the vev of \(\phi\) to be in the positive real direction, \(\left\langle\phi\right\rangle=\left\langle\phi\right\rangle^{\dagger}=\epsilon\Lambda_{F}\), where \(\Lambda_{F}\) is the characteristic scale of spontaneous symmetry breaking. SM matter fields are then assigned \(U(1)_{X}\) charges, forcing the scalar to appear in the SM Yukawa couplings in order to preserve the symmetry. We denote the left-handed quark doublets as \(Q_{i}\) and the right-handed up- and down-quark singlets \(u_{i}\) and \(d_{i}\), where \(i=1,2,3\) is an index running over the fermion families. The quark mass terms in the low-energy SM effective field theory then take the form:
\[\mathcal{L}_{\rm Y}\supset-c_{ij}^{u}\bar{Q}_{i}\widetilde{H}u_{j} \epsilon^{|X_{Q_{i}}-X_{u_{j}}|}-c_{ij}^{d}\bar{Q}_{i}Hd_{j}\epsilon^{|X_{Q_{i }}-X_{d_{j}}|}+\text{h.c.}\,, \tag{1}\]
where \(H\) is the SM Higgs doublet, and \(c^{u,d}\) are arbitrary \(3\times 3\) complex matrices with (presumably) \(\mathcal{O}(1)\) entries. Comparing Eq. 1 to the usual SM Yukawa Lagrangian
\[\mathcal{L}_{\rm SM}\supset-Y_{ij}^{u}\bar{Q}_{i}\widetilde{H}u_{j}-Y_{ij}^{d} \bar{Q}_{i}Hd_{j}+\text{h.c.}\,, \tag{2}\]
we can read off the Yukawa matrices in terms of the FN parameters:
\[Y_{ij}^{u}=c_{ij}^{u}\epsilon^{n_{ij}^{u}}\,,\qquad\qquad Y_{ij}^{d}=c_{ij}^{ d}\epsilon^{n_{ij}^{d}}\,, \tag{3}\]
where
\[n_{ij}^{u}=|X_{Q_{i}}-X_{u_{j}}|\,,\qquad\quad n_{ij}^{d}=|X_{Q_{i}}-X_{d_{j}}|\,. \tag{4}\]
From this expression it is clear that, if \(\epsilon\ll 1\), the SM Yukawa couplings can exhibit large hierarchies even if all entries of the coefficient matrices \(c^{u,d}\) are \(\mathcal{O}(1)\). These can be related to observable quantities - the quark masses and mixing angles - by performing unitary flavor rotations of the quark fields. Upon performing such rotations, the Yukawa Lagrangian can be written in the mass eigenstate basis, or in the so-called "up-basis":
\[\mathcal{L}_{\rm SM}\supset-\hat{Y}_{ij}^{u}\bar{Q}_{i}\widetilde{H}u_{j}+(V_{ \rm CKM}\hat{Y}^{d})_{ij}\bar{Q}_{i}Hd_{j}+\text{h.c.}\, \tag{5}\]
(analogously for the "down-basis") where \(V_{\rm CKM}\) is the CKM matrix and \(\hat{Y}^{u/d}\) are real, diagonal matrices, whose eigenvalues are related to the quark masses via \(y_{q}\equiv\sqrt{2}m_{q}/v_{H}\), with \(m_{q}\) and \(v_{H}\) being the mass of the quark \(q\) and the Higgs vev, respectively.
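To make the construction above concrete, the short numerical sketch below builds the Yukawa matrices of Eqs. (3)-(4) from a charge assignment and random \(\mathcal{O}(1)\) coefficients, and extracts the Yukawa eigenvalues and the CKM matrix via singular value decompositions, as in Eq. (5). The charge values, the prior on the coefficients and the value of \(\epsilon\) used in the example are illustrative assumptions and are not taken from the analysis of the following sections.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_coeffs():
    """A random O(1) complex coefficient matrix (log-normal magnitude, uniform phase)."""
    mag = np.exp(rng.normal(0.0, 0.5, size=(3, 3)))
    return mag * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(3, 3)))

def yukawas(XQ, Xu, Xd, eps, c_u, c_d):
    """Flavor-basis Yukawa matrices, Eqs. (3)-(4): Y_ij = c_ij * eps^{|X_Qi - X_{u/d,j}|}."""
    n_u = np.abs(np.subtract.outer(XQ, Xu))
    n_d = np.abs(np.subtract.outer(XQ, Xd))
    return c_u * eps ** n_u, c_d * eps ** n_d

def masses_and_ckm(Yu, Yd):
    """Yukawa eigenvalues (descending) and the CKM matrix from Y = U diag(y) W^dagger."""
    Uu, yu, _ = np.linalg.svd(Yu)
    Ud, yd, _ = np.linalg.svd(Yd)
    return yu, yd, Uu.conj().T @ Ud

# Illustrative charges only, with X_{Q3} = X_{u3} = 0 by convention:
XQ = np.array([3, 2, 0]); Xu = np.array([4, 2, 0]); Xd = np.array([4, 3, 3])
Yu, Yd = yukawas(XQ, Xu, Xd, 0.2, random_coeffs(), random_coeffs())
yu, yd, V = masses_and_ckm(Yu, Yd)
```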
## III Systematic Exploration of Froggatt-Nielsen Models
We consider all possible charge assignments with \(|X|\leq 4\) for all SM quarks. Since \(y_{t}\approx 1\), baryon number is conserved, and permutations of the quark fields do not constitute a physical difference between FN models, we can set \(X_{Q_{3}}=X_{u_{3}}=0\) and adopt the ordering convention that \(|X_{Q_{i}}|\geq|X_{Q_{j}}|\) for \(i<j\) when all \(X_{Q}\) have the same sign, otherwise \(X_{Q_{i}}\geq X_{Q_{j}}\). (Similarly for \(X_{u}\), or \(X_{d}\).) In addition, following [19] we remove "mirror" charges which are related by multiplying all of the charges \(X\) by \(-1\) (and then enforcing the above ordering convention). This gives a total of 167,125 charge assignments that are physically inequivalent in the IR.
We now wish to assess these textures based on how "typical" it is for them to reproduce the flavor hierarchies of the SM. Given a certain charge assignment \(\mathbf{X}=\{X_{Q},X_{u},X_{d}\}\), we randomly draw some large number of \(\mathcal{O}(1)\) complex coefficient matrices \(c^{u,d}\) (details discussed below) and calculate the SM parameters for each instance of \(c^{u,d}\) as a function of \(\epsilon\). For a given choice of \(\epsilon\), we can then calculate the observable with the maximum fractional deviation from its SM value:
\[\Xi_{\text{SM}}=\max_{i}\exp\left|\ln\left(\frac{\mu_{i}^{\text{ guess}}}{\mu_{i}^{\text{SM}}}\right)\right|\,, \tag{6}\]
where \(\mu_{i}\) ranges over the six quark masses, the three independent CKM entries \(|V_{12}|,|V_{13}|\), and \(|V_{23}|\), and the absolute value of the Jarlskog invariant \(|J|\). The precise numerical values we use can be found in Table 2 in Appendix B. Since \(\epsilon\) plays the central role of generating the overall hierarchy, we minimize \(\Xi_{\text{SM}}\) with respect to \(\epsilon\) for each instance of the \(c^{u,d}\) coefficient matrices to match as closely as possible the SM values.
After repeating this process for each charge assignment \(\mathbf{X}\), we define \(\mathcal{F}_{2}\) (\(\mathcal{F}_{5}\)) as the fraction of random coefficient choices that yields \(\Xi_{\text{SM}}\leq 2\) (5) for each model. This allows us to define "global naturalness criteria" (i.e. independent of a particular _fit_ to the SM) for a given texture to be a good candidate for reproducing the SM hierarchies, namely requiring that \(\mathcal{F}_{2}\) or \(\mathcal{F}_{5}\) be above some lower bound.
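A minimal Monte Carlo version of this scan is sketched below; it reuses the `yukawas` and `random_coeffs` helpers from the sketch at the end of Section II. The reference vector `mu_sm` of SM target values (the paper's Table 2 inputs, in the same ordering as `observable_vector`, with Yukawa eigenvalues in descending order) must be supplied by the user; the \(\epsilon\) grid, the number of draws and the coefficient prior are illustrative choices, so the resulting fractions will only approximate \(\mathcal{F}_{2}\) and \(\mathcal{F}_{5}\).

```python
import numpy as np

def observable_vector(Yu, Yd):
    """The ten observables entering Eq. (6): six Yukawa eigenvalues, |V_12|, |V_13|, |V_23|, |J|."""
    Uu, yu, _ = np.linalg.svd(Yu)
    Ud, yd, _ = np.linalg.svd(Yd)
    V = Uu.conj().T @ Ud
    J = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))   # Jarlskog invariant
    return np.concatenate([yu, yd, [abs(V[0, 1]), abs(V[0, 2]), abs(V[1, 2]), abs(J)]])

def xi_sm(mu_guess, mu_sm):
    """Worst multiplicative deviation from the targets, Eq. (6)."""
    return np.exp(np.max(np.abs(np.log(mu_guess / mu_sm))))

def frac_within(XQ, Xu, Xd, mu_sm, threshold=2.0, draws=1000,
                eps_grid=np.linspace(0.05, 0.5, 46)):
    """Fraction of random coefficient draws whose eps-minimised Xi_SM is below `threshold`."""
    hits = 0
    for _ in range(draws):
        c_u, c_d = random_coeffs(), random_coeffs()
        best = min(xi_sm(observable_vector(*yukawas(XQ, Xu, Xd, e, c_u, c_d)), mu_sm)
                   for e in eps_grid)
        hits += best < threshold
    return hits / draws
```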
Out of the total 167,125 possible charge assignments, we only find about 10 for which \(\mathcal{F}_{2}\gtrsim 1\%\). We adopt the latter criterion as the most stringent measure of naturalness for a texture to solve the SM quark flavor problem, and show the top-20 possible charge assignments ranked by \(\mathcal{F}_{2}\) in Table 1. We also show the average value of \(\epsilon\) for each texture. The distribution of preferred \(\epsilon\) values is very narrow, indicating that each texture has a uniquely suited value of \(\langle\phi\rangle/\Lambda_{F}\) to reproduce the SM. Some of these good textures correspond to those identified already in the literature, but, to the best of our knowledge, most of them have not been considered before.1
Footnote 1: Texture 3 first appeared in the seminal papers [2; 3].
This is quite remarkable, given how similarly they all naturally generate SM-like mass and mixing hierarchies. Textures where all quark masses and CKM entries lie within a factor of 5 of their measured SM values are even more common, with about 430 textures satisfying \(\mathcal{F}_{5}\gtrsim 10\%\). This constitutes a large collection of textures that are well-motivated in this model-independent way. The complete ranking of these textures is included in the file charges.csv provided in the auxiliary material.2
Footnote 2: Note that because the auxiliary file is rank ordered by \(\mathcal{F}_{5}\), it does not match the order given in Table 1, and the precise ordering of textures with very similar values of \(\mathcal{F}_{5}\) (or \(\mathcal{F}_{2}\)) is subject to small numerical fluctuations.
As mentioned above, to understand how close a texture generically is to the SM, we may look at the distribution of \(\Xi_{\text{SM}}\) values for a given texture over many different random choices of the \(c^{u,d}\) matrices. In Fig. 1, we show such a distribution for the top texture from Table 1. As we can see, the distribution is clustered around \(\Xi_{\text{SM}}\sim 4\) with broad tails on either side, indicating that this texture generically results in physical parameters with the proper SM-like hierarchies regardless of the \(\mathcal{O}(1)\) coefficients \(c^{u,d}\). Of course getting the precise SM values requires a precise choice of these coefficients, but the hierarchies themselves are robust.
We may also look at such a distribution for a charge assignment that does not perform well by this metric. This category includes many textures previously mentioned in the literature, including for example most of the textures listed in Tables 1 and 2 of Ref. [19]. The third texture in Table 2 of [19] results in the distribution
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c} Num. & \(X_{Q_{1}}\) & \(X_{Q_{2}}\) & \(X_{u_{1}}\) & \(X_{u_{2}}\) & \(X_{d_{1}}\) & \(X_{d_{2}}\) & \(X_{d_{3}}\) & \(\mathcal{F}_{2}\) (\%) & \(\mathcal{F}_{5}\) (\%) & \(\epsilon\) \\ \hline
[MISSING_PAGE_POST]
\end{tabular}
\end{table}
Table 1: The top 20 textures ranked by the fraction of the time that a random draw of \(c^{u,d}\) coefficient matrices results in all quark masses, CKM matrix elements, and the Jarlskog invariant within a factor of 2 of the SM (allowing \(\epsilon\) to vary to minimize deviation from the SM). For each texture, \(\mathcal{F}_{2}\) (\(\mathcal{F}_{5}\)) is the fraction within 2 (5), and \(\epsilon\) is the average value of \(\epsilon\) for those coefficients that end up within a factor of 2. For all of these textures, \(X_{Q_{3}}=X_{u_{3}}=0\).
for \(\Xi_{\rm SM}\) shown in Fig. 2. It is quite reasonable to ask why these textures show up as good in that analysis but not ours, and the answer is simple: Ref. [19] searches for textures for which there is at least one technically natural choice of \(c^{u,d}\) coefficients that approximately reproduces the SM, whereas we look for textures which generically produce SM-like hierarchies. With random \(\mathcal{O}(1)\) coefficient matrices, a texture like that of Fig. 2 results in a typical deviation from SM parameters of more than an order of magnitude, but there does exist a very special choice of \(c^{u,d}\sim\mathcal{O}(1)\) that results in the SM. This choice of coefficients may be technically natural but it is still exceedingly rare, as Fig. 2 shows. The difference thus stems from our "global" choice of metric for naturalness, which prioritizes textures that can lead to SM-like parameters without need for additional dynamics or precise restrictions on the coefficients.
To verify that our global naturalness criterion also produces viable and "locally natural" solutions to the SM flavor problem, we also have to check that each of the textures in Table 1 yields exact and untuned numerical fits to the SM, including uncertainty on the SM parameters. This necessitates the definition of a custom tuning measure that is more appropriate for FN models than standard choices like the Barbieri-Giudice measure [28; 29], taking into account that any change in the UV theory would likely perturb all coefficients in Eq. 1 at once, rather than one at a time. The details are in Appendix A, but the upshot is that each of the textures in Table 1 readily yields many good SM fits that are completely untuned with respect to simultaneous random uncorrelated perturbations of all coupling coefficients. This confirms that our global naturalness criterion guarantees particular technically natural solutions to the SM flavor problem as well.
We perform several statistical checks to make sure our conclusions are robust. The results obtained here depend on the details of the statistical distributions from which we draw the coefficients in the \(c^{u,d}\) matrices. Taking a Bayesian-inspired point of view, these distributions can be thought of as prior probability distributions for the \(c^{u,d}\) coefficients. For the analysis shown here, we use a "log-normal" prior in which the logarithm of the magnitude of each \(c^{u,d}\) entry is drawn from a Gaussian distribution. We have repeated the same analysis for a wider log-normal distribution, and a third "uniform" prior, in which the real and imaginary parts of each \(c^{u,d}\) are drawn uniformly from the range \([-3,3]\). Up to modest reorderings in our ranked list of top textures, our results are robust with respect to changing the prior. We also confirm that the SM-like hierarchies for our good textures are robustly determined by the flavor charges and the small \(\epsilon\) rather than anomalous hierarchies amongst the randomly drawn \(c^{u,d}\) coefficients, and that individual SM observables are roughly uncorrelated and drawn from a distribution roughly centered on the correct SM value for each good texture. For details, see Appendix B.
## IV Analytical Spurion Analysis
Our numerical study identified many new plausible FN textures beyond those that have been studied in the literature. It would be useful to understand analytically why the textures in Table 1 reliably yield SM-like hierarchies. If viable FN models involve no large rotations between the flavor- and the mass-basis, then it is possible to estimate masses and mixings via an analytical spurion analysis. We can easily test this assumption.
Figure 1: A probability distribution function for \(\Xi_{\rm SM}\) for the top-ranked texture in Tab. 1. To produce this plot we randomly generate many different choices of \(c^{u/d}\) matrices and compute \(\Xi_{\rm SM}\) for each one (allowing \(\epsilon\) to vary to minimize deviation from the SM). Also shown are the fractions of coefficient matrices that result in \(\Xi_{\rm SM}<2\), \(5\), and \(10\), respectively. For this texture, typical random \(\mathcal{O}(1)\) matrices of coefficients will result in all quark masses and CKM parameters within a factor of \(\approx 4\) of their measured SM values.
Figure 2: Similar to Fig. 1 for a texture that does not perform well by our metric. This texture does have a technically natural fit to the SM (as found in Ref. [19]), but typical \(\mathcal{O}(1)\) coefficient matrices for this texture result in deviations of more than an order of magnitude from the measured SM parameters.
In general, the flavor-basis Yukawa matrix \(Y^{u}\) (identically for \(Y^{d}\)) can be related to the diagonal mass-basis Yukawa \(\hat{Y}^{u}\) via the usual singular value decomposition \(Y^{u}=U_{u}\hat{Y}^{u}W^{\dagger}_{u}\). Under the near-aligned assumption, and adopting the strict ordering of charges defined at the beginning of Section III, the mass-basis Yukawa couplings are simply given by
\[\hat{Y}^{u}_{ij}\sim\epsilon^{n^{u}_{ii}}\delta_{ij} \tag{7}\]
where the exponent matrices \(n^{u,d}\) are defined in terms of the \(X_{Q,u,d}\) charges in Eq. 4. Under the same strict assumptions,3 the magnitudes of the elements of the rotation matrices can be written as
Footnote 3: Eq. 8 does not apply for flavor charge assignments where the heaviest up and down quarks are not in the same generation, but those are irrelevant for our analysis.
\[(U_{u})_{ij} \sim \left\{\begin{array}{ll}1&i=j\\ \epsilon^{n^{u}_{ij}-n^{u}_{jj}}&i<j\\ \epsilon^{n^{u}_{ji}-n^{u}_{ii}}&i>j\end{array}\right. \tag{8}\] \[(W_{u})_{ij} \sim \left\{\begin{array}{ll}1&i=j\\ \epsilon^{n^{u}_{ji}-n^{u}_{jj}}&i<j\\ \epsilon^{n^{u}_{ij}-n^{u}_{ii}}&i>j\end{array}\right..\]
This makes it straightforward to obtain magnitude estimates for the elements of \(V_{\rm CKM}=U_{u}^{\dagger}U_{d}\).4
Footnote 4: For about half of the textures in Table 1, this yields \(V_{\rm CKM}\) estimates that follow the Wolfenstein parameterization (with \(\lambda\to\epsilon\)), while the other half show slight deviations from this pattern.
For a given FN charge assignment and choice of \(\epsilon\), we can now compute the spurion estimates \(\mu_{i}^{\rm spurion}\), where \(i\) runs over the six quark masses and \(|V_{\rm CKM}|_{12,23,13}\). To assess how SM-like a given FN charge assignment is, we consider the logs of the ratios by which these estimates deviate from the SM, added in quadrature,
\[d=\left[\sum_{i}\log_{10}^{2}\left(\frac{\mu_{i}^{\rm spurion}}{\mu_{i}^{\rm SM }}\right)\right]^{1/2}\, \tag{9}\]
with \(\epsilon\) chosen to minimize \(d\). This measure always takes all SM observables into account and is more appropriate for the approximate nature of the spurion analysis than Eq. 6. We would expect that FN textures that naturally give a good fit to the SM would satisfy \(d\lesssim\mathcal{O}(1)\).
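The spurion estimates are simple enough to transcribe directly. The sketch below implements Eqs. (7)-(9); as in the previous sketches, the \(\epsilon\) grid is an illustrative choice, and the nine target values in `mu_sm` (the six Yukawa eigenvalues, ordered first to third generation, followed by \(|V_{12}|,|V_{13}|,|V_{23}|\)) must be supplied by the user.

```python
import numpy as np

def spurion_estimates(XQ, Xu, Xd, eps):
    """Order-of-magnitude estimates of Eqs. (7)-(8), valid under near-alignment."""
    def rot(n):
        U = np.ones((3, 3))
        for i in range(3):
            for j in range(3):
                if i < j:
                    U[i, j] = eps ** (n[i, j] - n[j, j])
                elif i > j:
                    U[i, j] = eps ** (n[j, i] - n[i, i])
        return U
    n_u = np.abs(np.subtract.outer(XQ, Xu))
    n_d = np.abs(np.subtract.outer(XQ, Xd))
    y_u = eps ** np.diag(n_u).astype(float)       # diagonal Yukawas, Eq. (7)
    y_d = eps ** np.diag(n_d).astype(float)
    V = rot(n_u).T @ rot(n_d)                     # magnitude estimate of |V_CKM| = |U_u^dag U_d|
    return y_u, y_d, V

def d_measure(XQ, Xu, Xd, mu_sm, eps_grid=np.linspace(0.05, 0.5, 46)):
    """Eq. (9), minimised over eps."""
    best = np.inf
    for eps in eps_grid:
        y_u, y_d, V = spurion_estimates(XQ, Xu, Xd, eps)
        mu = np.concatenate([y_u, y_d, [V[0, 1], V[0, 2], V[1, 2]]])
        best = min(best, float(np.sqrt(np.sum(np.log10(mu / mu_sm) ** 2))))
    return best
```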
Indeed, we find that all the textures in Table 1 satisfy \(d<2.4\). Of all 167,125 possible charge assignments, there are only 36 additional possibilities that also satisfy this criterion, and all of them are in the top 160 of our ranked list of textures obtained in the numerical analysis of the previous section. This analytical spurion estimate is therefore fully consistent with our full numerical study, which gives us additional confidence that the global naturalness criterion derived in our numerical study is theoretically plausible. Furthermore, this convergent result confirms the expectation that natural FN models always feature a high degree of alignment between the flavor and the mass bases.5
Footnote 5: This was confirmed in the numerical analysis, where coefficient choices with small values of \(\Xi_{\rm SM}\) for textures in Table 1 always feature small mixing angles in \(U_{u,d},W_{u,d}\).
Obviously, the specific \(d<2.4\) criterion was derived _a posteriori_ from the results of the numerical study, but if one were to guess at a reasonable upper bound for \(d\) that natural SM-like FN models must satisfy, a number like \(d<3\) might plausibly come to mind, which yields a still very modest total of 149 textures (including the top 20). As with any such definition, the cutoff for what constitutes a natural model is somewhat arbitrary, and if one wants to make quantitative statements the fully numerical approach of the previous section is necessary. However, it is interesting to note that the numerical study was not necessary to make the much more important qualitative observation that there are many different FN charge assignments, of the order of several dozen at least, that very naturally give SM-like hierarchies. The crucial ingredient is merely to formulate a well-defined and general criterion and check it across all possible flavor charges.
## V Phenomenological Implications
Flavor-violating effects are the most relevant experimental signatures of FN models. At low energies these can be effectively parameterized within the framework of Standard Model Effective Field Theory (SMEFT). Our focus here lies on examining 4-fermion operators of the form
\[\mathcal{O}=\frac{1}{\Lambda_{\rm eff}^{2}}(\bar{q}_{i}q_{j})(\bar{q}_{k}q_{l}) \tag{10}\]
where \(i,j,k,l\) denote different quark flavors, and \(q\) can represent either \(Q,d\) or \(u\) type quarks. While semi-leptonic and fully leptonic 4-fermion operators can also be generated, 4-quark operators are most important for our discussion, given that we assign FN charges only to the quarks. (For the sake of simplicity, Dirac structures as well as Lorentz and group indices are left implicit.)
We can estimate the size of these operators using a spurion analysis analogous to the estimates for SM quantities in the last section. In the "FN flavor basis" of Eq. 3, where flavor charges are well-defined for each quark generation, the leading contribution to the coefficient of this operator will take the form
\[\frac{1}{\Lambda_{\rm eff}^{2}}=c_{\mathcal{O}}\frac{\epsilon^{|-X_{q_{i}}+X_{q _{j}}-X_{q_{k}}+X_{q_{l}}|}}{\Lambda_{F}^{2}}. \tag{11}\]
where \(\Lambda_{F}\) is the UV scale of the model (with \(\epsilon=\langle\phi\rangle/\Lambda_{F}\)), and \(c_{\mathcal{O}}\) is an unknown dimensionless complex coefficient.
For FN models that satisfy our global naturalness criterion, it is plausible to expect the different coefficients \(c_{\mathcal{O}}\) to be relatively uncorrelated and have modest \(\mathcal{O}(1)\) sizes.6
Footnote 6: Support for this assumption can be derived from the near-uncorrelated near-log-normal distributions of the individual SM observables in our numerical scans, see Fig. 6 in Appendix B. It is plausible to expect other observables to behave similarly.
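The estimate of Eq. (11) is straightforward to transcribe. The sketch below returns the effective scale \(\Lambda_{\rm eff}\) implied by a charge assignment for a given flavor combination; the charges, flavor indices and numerical values in the example are purely illustrative, and confronting the result with data would still require the global likelihood described below.

```python
import numpy as np

def lambda_eff(X, flavors, eps, Lambda_F, c_O=1.0):
    """Effective scale of a 4-quark operator (qbar_i q_j)(qbar_k q_l), Eq. (11):
    1/Lambda_eff^2 = c_O * eps^{|-X_i + X_j - X_k + X_l|} / Lambda_F^2."""
    i, j, k, l = flavors
    n = abs(-X[i] + X[j] - X[k] + X[l])
    return Lambda_F * eps ** (-n / 2.0) / np.sqrt(c_O)

# Illustrative FN charges only (not a texture singled out in the text):
X = {("Q", 1): 3, ("Q", 2): 2, ("Q", 3): 0,
     ("u", 1): 4, ("u", 2): 2, ("u", 3): 0,
     ("d", 1): 4, ("d", 2): 3, ("d", 3): 3}
# e.g. a (Qbar_1 Q_2)(dbar_1 d_2)-type coefficient at eps = 0.2, Lambda_F = 100 PeV:
print(lambda_eff(X, (("Q", 1), ("Q", 2), ("d", 1), ("d", 2)), 0.2, 100.0), "PeV")
```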
The 'SM-like' FN textures we find and list in Table 1 are equivalently suitable for generating the SM quark Yukawa coupling matrices. However, they predict quite disparate flavor-violating signatures, and future measurements hold the potential to not only detect these non-standard flavor violating-effects but also distinguish between possible natural flavor textures. Such investigations could provide valuable insights into unraveling the true structure of the flavor sector.
To demonstrate these parametric differences quantitatively, we work in the Warsaw basis [30] and obtain bounds on the 4-quark-operator Wilson coefficients using the global likelihood implemented in the Python package smelli[31; 32; 33]. For simplicity, we only turn on one operator at a time (since we merely want to demonstrate the significant differences between different FN charge assignments), and take the bound on \(\Lambda_{\text{eff}}\) for each operator to correspond to the most constraining bound obtained by turning the operator on with purely real or imaginary positive or negative coefficient. Since \(c_{\mathcal{O}}\) is likely to have a "random phase" in the FN model, this is the most conservative assumption for our demonstration.
Some care must be taken to obtain physically consistent results from this spurion estimate for 4-quark operators. smelli supplies bounds in the down-aligned (or up-aligned) interaction basis, where the right-handed \(u,d\) fields are rotated using the \(W_{u,d}\) matrices estimated in Eq. 8, while \(Q\) is rotated using \(U_{d}\) (\(U_{u}\)). As a result, \(u_{R},d_{R}\) and \(d_{L}\) (\(u_{L}\)) are mass eigenstates, while the \(u_{L}\) (\(d_{L}\)) fields are rotated by \(V_{\text{CKM}}\) with respect to their mass eigenstate basis. This obviously results in bounds on operator coefficients being different in the up- and down-aligned basis, simply because the make-up of each 4-Fermi operator in terms of quark-mass-basis operators is different. In an exact numerical analysis of FN models -- where for a given FN charge assignment one might choose random \(c_{ij}^{u,d}\) restricted to give good fits to SM masses and mixings; then consider random numerical trial values for the \(c_{\mathcal{O}}\)'s in the FN flavor basis; and finally perform exact rotations into the up- or down-aligned basis -- the resulting bounds on \(c_{\mathcal{O}}/\Lambda_{F}^{2}\) in the FN flavor basis would be exactly equivalent in the two bases. However, in our parametric spurion estimates, while it seems one could use the transformation matrix estimates of Eq. 8 to obtain 4-Fermi operator estimates in the up- or down-aligned bases, in reality this fails to reliably capture differences between the operator definitions at sub-leading order in \(\epsilon\). Differences between the two bases enter exactly at this order, but this still results in large differences in the numerical bounds on operator coefficients, since in some cases the leading bound on an up- or down-basis operator comes from a mass-basis contribution to that operator that is sub-leading in CKM mixing angles, but can nonetheless supply the dominant bound due to the severity of the corresponding physical constraint.
Fortunately, the important effects that are CKM-suppressed in the up-aligned basis are leading-order in the down-aligned basis, and vice versa. Furthermore, the close alignment between the FN flavor basis, up-aligned basis, down-aligned basis and mass-basis (evident from the near-diagonal nature of the \(U_{u,d},W_{u,d}\) in Eq. 8) means that the FN flavor basis predictions of Eq.11 apply, to leading order in \(\epsilon\), in the up- and down-aligned bases as well. Taking these two observations together, we find that we can obtain physically consistent parametric estimates of the bounds on \(c_{\mathcal{O}}/\Lambda_{F}^{2}\) in the FN flavor basis by comparing the predictions of Eq.11 to the constraints supplied by smelli in both the up- and down-aligned basis, and simply adopting the stronger of the two bounds for that operator.
We find that all textures in Table 1 are constrained to have a flavor scale \(\Lambda_{F}\gtrsim 40\) - 100 PeV.7 Setting \(\Lambda_{F}=100\) PeV for all textures, we show in Fig. 3 the relative size of some operators compared to their current bound, for a subset of operators with predicted size not too far from current constraints. The operators which dominate the bound on \(\Lambda_{\text{F}}\) are qd8_2212, qd8_1112, qd1_2212, qd1_1112, which also show the least variation between textures. However, there is a large degree of variation amongst the other operators with predictions that are a factor of \(\gtrsim 10\) beyond current bounds. Four textures have been highlighted to showcase this variation in the prediction of various flavor-violating quark operators: the top texture #1 in red, the "original" FN texture [2; 3] #3 in green, and two others, #7 and #18, in blue and orange. Note that each prediction has a "theoretical uncertainty" of \(\mathcal{O}(1)\) in this plot, due to the unknown size of the \(c_{\mathcal{O}}\) coefficients.
Footnote 7: Specifically, the bound is 35-45 PeV for textures # 2, 3, 5, 7, 12, 14, 19 in Table 1, and 95 PeV for the rest.
To further emphasize the differences in the _relative_ predictions for flavor-violating quark operators, rather than different overall suppression of flavor-violating effects, Figure 4 shows the predictions for a subset of operators where the flavor scale has been set to its current bound separately for each texture. We do not show the four operators qd(1,8)_(2212,1112), which dominate the \(\Lambda_{F}\) bound and have similar predictions for all considered textures, but we now show some additional operators that display large variation between textures under these new assumptions. This explicitly demonstrates that if a given non-SM flavor-violating effect were found (setting the absolute size of some flavor-violating
operators), the resulting predictions for the other flavor-violating operators would differ widely amongst equally natural FN models.
Unsurprisingly, the observables that drive the bound on these most "distinguishing" coefficients are those related to quark flavor mixing in the 1-2 sector, in particular to the imaginary part of the \(D-\bar{D}\) and \(K-\bar{K}\) mixing amplitudes, encoded in \(x_{12}\sin\phi_{12}\) and \(\epsilon_{K}\), respectively8. For kaon mixing, the current bottleneck preventing tighter constraints is the uncertainty in the SM prediction for \(\epsilon_{K}\). Future improvements may be possible, but are hard to predict. On the other hand, a significant improvement is expected in the experimental determination of the parameters describing CP violation in \(D-\bar{D}\) oscillations. In particular, the uncertainty on the imaginary part of the amplitude is expected to shrink by at least a factor of 5 by the end of Upgrade II of the LHCb experiment [37]. There has also been significant recent work on global SMEFT fits under certain flavor violating hypotheses (see e.g. [38; 39; 40; 41]). Our menu of natural FN models is a natural candidate for future analyses of this kind, which may yet yield unexpected constraints on particular models considered in a global fit.
Footnote 8: For the experimental value of \(\epsilon_{K}\) we use the PDG [34]. For CP violation in the charm system we use HFLAV results [35; 36].
## VI Conclusions
The flavor puzzle is a vexing mystery of the Standard Model, with the pattern of the fermion mass matrices hinting at the presence of a deeper structure. Directly
Figure 3: Relative size of a demonstrative sub-set of flavor-violating 4-quark operators in the Warsaw operator basis and FN flavor quark basis for each of the 20 “most natural” FN models in Table 1, for a flavor scale of \(\Lambda_{F}=100\) PeV, compared to their current bound assuming \(c_{\mathcal{O}}=1\). Models 1, 3, 7 and 18 are emphasized to demonstrate the qualitatively different configurations of flavor-violating operators that can be generated by natural FN models. The number in brackets after the operator label on the horizontal axis is the lower bound on \(\Lambda_{\text{eff}}\) in PeV.
Figure 4: Same as Figure 3, but for each texture the flavor scale \(\Lambda_{F}\) is separately chosen to saturate its current bounds. The operators qd(1,8)_(2212,1112) that dominate the \(\Lambda_{F}\) bound and have similar predictions for all considered textures are omitted, and instead some additional operators are shown.
probing the mechanisms that generate this structure is difficult, as tight constraints on flavor violation usually push the relevant scale of flavor physics to very high values far beyond our ability to probe directly in the intermediate future. Even so, the advent of future experiments makes direct or indirect detection and diagnosis of this underlying flavor structure an enticing prospect.
Our work provides a theoretical roadmap to help navigate the potentially vast space of Froggatt-Nielsen solutions to the quark flavor problem. It dramatically enlarges the space of viable charge assignments compared to what has been studied in the literature, but our global criterion of naturally generating the SM mass and mixing hierarchies across the model's whole parameter space is still constraining enough to net a manageable number of FN models, see Table 1.
We showed that this well-defined subset of FN benchmark models generates a variety of different flavor-violating operators within the SMEFT framework, with widely varying magnitudes depending on the flavor charge assignments. _In principle_ this makes diagnosing the solution to the quark flavor problem plausible once flavor-violating signals beyond the SM expectation are unambiguously observed. Global SMEFT fits that assume a particular FN model can be even more constraining [38; 39; 40; 41], and it would be interesting to conduct such fits for each of the natural FN models we identify in Table 1 for well-defined priors on the unknown \(\mathcal{O}(1)\) coefficients of the model.
Our method of finding natural FN models naturally generalizes to FN models of the lepton sector, or non-minimal setups with multiple or non-abelian flavor symmetries [42; 43; 44; 45; 46]. This may suggest further flavor-violating observables that have the most promise of detecting and diagnosing the physics of the flavor puzzle. It may also be interesting to study the impact of our work on dark sectors that are related to the SM via a discrete symmetry [47]. We leave such investigations for future work.
###### Acknowledgements.
It is a pleasure to thank Savas Dimopoulos, Marco Fedele, Christophe Grojean, Anson Hook, Seyda Ipek, Junwu Huang, Yael Shadmi, Peter Stangl, Ben Stefanek, Patrick Owen, Alan Schwartz and Gudrun Hiller for valuable discussions. CC, DC and JOT would like to thank Perimeter Institute for hospitality during the completion of this work. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research, Innovation and Science. The research of CC was supported by the Cluster of Excellence _Precision Physics, Fundamental Interactions, and Structure of Matter_ (PRISMA\({}^{+}\), EXC 2118/1) within the German Excellence Strategy (Project-ID 39083149). The research of DC was supported in part by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada, the Canada Research Chair program, the Alfred P. Sloan Foundation, the Ontario Early Researcher Award, and the University of Toronto McLean Award. The research of ETN was supported by the U. S. Department of Energy (DOE), Office of Science, Office of High Energy Physics, under Award Number DE-SC0010005.
## Appendix A Standard Model Fits and Tuning
In this appendix we present details on finding solutions for the coefficients \(c_{ij}^{u,d}\) in Eq. 3 that represent realistic SM fits within experimental uncertainties for the quark masses and mixings, see Table 2, for each of the FN charge assignments in Table 1.
In order to obtain SM fits, we used a simplex minimization algorithm to adjust the coefficients \(c_{ij}^{u,d}\) and the parameter \(\epsilon\) in order to optimize the standard \(\chi^{2}\) score,
\[\chi^{2}_{\text{SM}}=\sum_{i}\left(\frac{\mu_{i}-\mu_{i}^{\text{SM}}}{\sigma _{i}^{\text{SM}}}\right)^{2} \tag{10}\]
where as in Sec. III the index \(i\) runs over all of the SM quark masses, three CKM mixing angles, and \(|J|\). For all of the textures in Table 1, we are able to find fits with average deviation of less than \(2\sigma\) over all 10 parameters that we fit to; we were further able to find fits with average deviation less than \(1\sigma\) for more than half of the textures in the table, including all of the top 5. We have further verified that for these fits, the values of the coefficients \(c_{ij}^{u,d}\) do not have a significantly different distribution than the prior used to generate the same coefficients for our numerical scans, indicating that the fits do not require any substantial drift away from our \(\mathcal{O}(1)\) assumption.
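A minimal sketch of such a fit is shown below; `predict_observables` stands in for the (texture-dependent) map from the real and imaginary parts of the \(c_{ij}^{u,d}\) plus \(\epsilon\) to the ten observables, and the starting point and optimizer options are placeholders rather than the settings actually used.

```python
import numpy as np
from scipy.optimize import minimize

def chi2_sm(params, predict_observables, sm_values, sm_errors):
    # Standard chi^2 of Eq. (10) over the ten fitted SM observables.
    pred = predict_observables(params)
    return np.sum(((pred - sm_values) / sm_errors) ** 2)

def fit_texture(predict_observables, sm_values, sm_errors, x0):
    # Nelder-Mead is the "simplex" minimization algorithm referred to in the text.
    res = minimize(chi2_sm, x0,
                   args=(predict_observables, sm_values, sm_errors),
                   method="Nelder-Mead",
                   options={"maxiter": 50_000, "fatol": 1e-8})
    avg_dev = np.sqrt(res.fun / len(sm_values))  # mean deviation in units of sigma
    return res.x, avg_dev
```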
\begin{table}
\begin{tabular}{c|c}
SM parameter & Value \\ \hline
\(m_{u}\) & 0.00117(35) \\
\(m_{c}\) & 0.543(72) \\
\(m_{t}\) & 148.1(1.3) \\
\(m_{d}\) & 0.0024(42) \\
\(m_{s}\) & 0.049(15) \\
\(m_{b}\) & 2.41(14) \\
\(|V_{12}|\) & 0.22450(67) \\
\(|V_{13}|\) & 0.00382(11) \\
\(|V_{23}|\) & 0.04100(85) \\
\(|J|\) & 3.08(14) \(\times 10^{-5}\) \\
\end{tabular}
\end{table}
Table 2: Standard Model parameters and uncertainties used to compute goodness-of-fit parameters such as \(\Xi_{\text{SM}}\) (Eq. 6). Quark masses (given in GeV) are taken from [48], evaluated at \(\mu=1\) TeV. Running these parameters to the \(\mathcal{O}(\text{PeV})\) flavor scale would not significantly change our results. CKM parameters are taken from the PDG review [34], evaluated at low scales; we explicitly checked that including the running of the CKM parameters does not have any noticeable effect on our analysis.
Our final check is to verify that these fits are not locally tuned.
A standard tuning test is the Barbieri-Giudice measure [28; 29], which for a single SM observable \(\mathcal{O}_{K}\) can be defined as:
\[\Delta_{\rm BG}^{K}\equiv\max_{k}|\delta_{K,k}|\ \,\ \delta_{K,k}\equiv\frac{ \delta\log\mathcal{O}_{K}}{\delta\log c_{k}}\, \tag{10}\]
where \(c_{k}\) runs separately over the real and imaginary parts of each of the \(c_{ij}^{u,d}\) coefficients defined in Eq. 3. A second maximization then gives the overall tuning taking into account all fitted SM observables:
\[\Delta_{\rm BG}\equiv\max_{K}\Delta_{\rm BG}^{K}. \tag{11}\]
This basically represents the maximum sensitivity of all the 10 SM observables \(\mathcal{O}_{K}\) with respect to perturbing any single one of the 18 complex coefficients \(c_{ij}^{u,d}\). However, we argue that for Froggatt-Nielsen models and their many coefficients, which arise from some UV theory, this tuning measure is of limited utility, since a small change in the UV theory would slightly change all the coefficients at once, not just one at a time. Indeed, if random small perturbations of some characteristic size are applied to _all_ \(c_{ij}^{u,d}\) coefficients of a given SM fit, we numerically find that \(\Delta_{\rm BG}^{K}\) generally significantly _underestimates_ the variance of a given SM observable \(\mathcal{O}_{K}\). In other words, for FN models, the Barbieri-Giudice measure underestimates tuning.
Some way of assessing this total sensitivity is required. We therefore define the following tuning measure for each SM observable \(\mathcal{O}_{K}\):
\[\Delta_{\rm tot}^{K}\equiv\left[\sum_{s}(\lambda_{s}^{K})^{2}\right]^{1/2} \tag{12}\]
where the \(\lambda_{s}^{K}\) are the eigenvalues of the matrix \(\Delta_{kl}^{K}\) defined as
\[\Delta_{kl}^{K}\equiv\frac{\delta^{2}\log\mathcal{O}_{K}}{\delta\log c_{k} \delta\log c_{l}} \tag{13}\]
and \(c_{k}\) runs over the real and imaginary parts of all \(c_{ij}^{u,d}\) coefficients as above. It is easy to see that this quantity could give a more complete account of the tuning of a given SM fit, since the sum in quadrature over the principal directions of \(\Delta_{kl}^{K}\) takes into account the total variability of SM observable \(\mathcal{O}_{K}\), regardless of whether the direction of maximum sensitivity is aligned with any one \(c_{ij}^{u,d}\), which is appropriate when perturbing all coefficients at once. Indeed, when varying all coefficients by random perturbations of characteristic relative scale \(\sigma\), we find that \(\Delta_{\rm tot}^{K}\sigma\) gives a very good direct estimate of the resulting relative variance of \(\mathcal{O}_{K}\). This supports the argument that \(\Delta_{\rm tot}^{K}\) is a more faithful measure of sensitivity to changes in the underlying UV theory of a FN model, and an overall tuning measure for the whole SM fit can be obtained by summing in quadrature over all SM observables:
\[\Delta_{\rm tot}=\left[\sum_{K}(\Delta_{\rm tot}^{K})^{2}\right]^{1/2} \tag{14}\]
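Both measures can be estimated numerically by finite differences in \(\log c_{k}\). The sketch below assumes a black-box `observable(c)` that returns one (positive) SM observable \(\mathcal{O}_{K}\) for a flat array of coefficient components; the step size is a placeholder.

```python
import numpy as np

def tuning_measures(observable, c, delta=1e-3):
    """Return (Delta_BG^K, Delta_tot^K) for one observable O_K = observable(c), where c
    is a flat numpy array of the real and imaginary parts of the c^{u,d} coefficients
    and derivatives are taken with respect to log c_k."""
    n = len(c)
    logO = lambda x: np.log(np.abs(observable(x)))

    def shift(i, s):
        x = np.array(c, dtype=float).copy()
        x[i] *= np.exp(s * delta)
        return x

    # First log-derivatives: the Barbieri-Giudice sensitivity.
    grad = np.array([(logO(shift(k, +1)) - logO(shift(k, -1))) / (2 * delta)
                     for k in range(n)])
    delta_bg = np.max(np.abs(grad))

    # Second-derivative matrix; its Frobenius norm equals the quadrature sum of
    # its eigenvalues, i.e. Delta_tot^K.
    f0 = logO(c)
    H = np.zeros((n, n))
    for k in range(n):
        for l in range(k, n):
            if k == l:
                H[k, k] = (logO(shift(k, +1)) - 2 * f0 + logO(shift(k, -1))) / delta**2
            else:
                def two_shift(sk, sl):
                    x = shift(k, sk)
                    x[l] *= np.exp(sl * delta)
                    return logO(x)
                H[k, l] = H[l, k] = (two_shift(+1, +1) - two_shift(+1, -1)
                                     - two_shift(-1, +1) + two_shift(-1, -1)) / (4 * delta**2)
    delta_tot = np.linalg.norm(H, "fro")
    return delta_bg, delta_tot
```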
We find that it is generally easy to find SM fits for the top-20 textures in Table 1 that have \(\Delta_{\rm tot}\lesssim\) a few, proving that these solutions to the SM flavor problem are truly untuned with respect to the underlying details of the UV theory. Informally, we notice that there seems to be a trend that the \(\Delta_{\rm tot}\) of an average SM fit increases modestly for lower-ranked textures in Table 1. While further study would be required to solidify this relationship, it
\begin{table}
\begin{tabular}{c|c|c|c|c} Rank by this prior & Rank in Tab. 1 & \(\mathcal{F}_{2}\) (\%) & \(\mathcal{F}_{5}\) (\%) & \(\epsilon\) \\ \hline
1 & 1 & 4.0 & 71 & 0.15 \\
2 & 2 & 3.2 & 79 & 0.16 \\
3 & [23] & 0.6 & 40 & 0.14 \\
4 & 13 & 0.4 & 48 & 0.22 \\
5 & 5 & 0.4 & 46 & 0.21 \\
6 & 4 & 0.4 & 58 & 0.14 \\
7 & [22] & 0.4 & 78 & 0.17 \\
8 & [21] & 0.3 & 46 & 0.13 \\
9 & 9 & 0.2 & 62 & 0.14 \\
10 & [28] & 0.2 & 39 & 0.24 \\ \end{tabular}
\end{table}
Table 4: Analogue of Table 1 for top-10 textures and their \(\mathcal{F}_{2}\), \(\mathcal{F}_{5}\), and average \(\epsilon\), but using the “uniform flat” prior distribution for the coefficients of the \(c^{u,d}\) matrices. For comparison, we also show their rank in Table 1 in the second column. The textures with rank in [square brackets] do not appear in Table 1, but we show their rank if that table was continued to the top-30. These textures have the following charges: rank 3 (this table), {3, 2, -3, -2, -3, -3, -3}; rank 7, {3, 2, -4, -2, -4, -4, -3}; rank 8, {3, 2, -3, -2, -3, -3, -2}; rank 10, {4, 3, -4, -2, -4, -4}.]
\begin{table}
\begin{tabular}{c|c|c|c|c} Rank by this prior & Rank in Tab. 1 & \(\mathcal{F}_{2}\) (\%) & \(\mathcal{F}_{5}\) (\%) & \(\epsilon\) \\ \hline
1 & 1 & 0.17 & 24 & 0.16 \\
2 & 2 & 0.13 & 22 & 0.17 \\
3 & 3 & 0.12 & 19 & 0.12 \\
4 & 16 & 0.11 & 19 & 0.2 \\
5 & 8 & 0.1 & 17 & 0.1 \\
6 & 15 & 0.1 & 18 & 0.21 \\
7 & 18 & 0.1 & 14 & 0.18 \\
8 & 11 & 0.1 & 16 & 0.06 \\
9 & 12 & 0.09 & 19 & 0.21 \\
10 & 10 & 0.09 & 18 & 0.15 \\ \end{tabular}
\end{table}
Table 3: Analogue of Table 1 for top-10 textures and their \(\mathcal{F}_{2}\), \(\mathcal{F}_{5}\), and average \(\epsilon\), but using the “wider log-normal” prior for the coefficients of the \(c^{u,d}\) matrices. For comparison, we also show their rank in Table 1 in the second column. The overall fraction of \(c^{u,d}\) matrices with this prior that give good fits to the SM is smaller for all textures, which is to be expected because this prior allows for larger random hierarchies in the \(c^{u,d}\) coefficients and thus washes out the effect of the texture itself in generating the SM hierarchies. Textures that perform well in Tab. 1 also perform well with this prior, although the exact ordering is somewhat shuffled.
provides further suggestive evidence that our global naturalness criterion of ranking textures by their \(\mathcal{F}_{2}\) or \(\mathcal{F}_{5}\) fractions is the fundamental measure of how SM-like a texture wants to be, and how natural any SM fits are that do exist.
## Appendix B Dependence on prior distributions and statistical checks
In this appendix we collect the details of the prior distributions and statistical measures we use in our analysis. Because one of the main products of this paper is a collection of "statistically good" textures, i.e. a collection of textures which do a good job of reproducing SM-like hierarchies for somewhat arbitrary \(\mathcal{O}(1)\) Yukawa coefficients \(c^{u,d}\), it is important to understand how much these results depend on our priors for these coefficients. While we do find that the precise numbers claimed here depend on this distribution, our derived list of good textures is robust with respect to the exact choice of prior for the coefficients, up to some minor reordering.
For all results shown in the main body of this paper, our numerical scan generated the entries of \(c^{u,d}\) as follows: for each coefficient independently, a magnitude was drawn from a log normal distribution centered around \(1\) with a standard deviation of \(\ln 10^{0.3}\) (to enforce that all coefficients in the matrix be \(\mathcal{O}(1)\)), and a phase was drawn from a flat distribution between \(0\) and \(2\pi\). This choice yields the results shown in Table 1, namely that there are a few textures for which \(\mathcal{O}(2\%)\) of randomly generated coefficients yield quark masses and CKM elements within a factor of \(2\) of their SM values, and \(\mathcal{O}(50\%)\) yield parameters within a factor of \(5\).
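A sketch of this baseline draw (and of the "uniform flat" alternative used as a cross-check later in this appendix) is given here; the matrix size and the random seed are arbitrary.

```python
import numpy as np

def sample_coefficients(rng, sigma_dex=0.3, prior="lognormal"):
    """Draw one 3x3 complex c^u or c^d matrix.
    'lognormal': |c| log-normal with median 1 and std ln(10^sigma_dex), uniform phase
    (the baseline prior); use sigma_dex=0.6 for the "wider log-normal" scan.
    'uniform': Re and Im each uniform in [-3, 3] (the "uniform flat" scan)."""
    if prior == "lognormal":
        mag = rng.lognormal(mean=0.0, sigma=np.log(10.0**sigma_dex), size=(3, 3))
        phase = rng.uniform(0.0, 2.0 * np.pi, size=(3, 3))
        return mag * np.exp(1j * phase)
    if prior == "uniform":
        return rng.uniform(-3, 3, size=(3, 3)) + 1j * rng.uniform(-3, 3, size=(3, 3))
    raise ValueError(prior)

rng = np.random.default_rng(0)
c_u, c_d = sample_coefficients(rng), sample_coefficients(rng)
```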
In order to check that this list is robust to other choices of distribution, we ran two other scans for all charge assignments:
1. A "wider log-normal" scan where we drew the \(c^{u,d}\) coefficients with a uniform distribution in phase and a log-normal distribution in magnitude centered on \(0\) and with a standard deviation of \(\ln 10^{0.6}\).
2. A "uniform flat" scan where we drew the real and imaginary parts of each entry of \(c^{u,d}\) separately from uniform flat distributions between \(-3\) and \(3\).
The lists of the top \(10\) textures (ranked by the fraction \(\mathcal{F}_{2}\) within a factor of \(2\) of the SM value) for each of these scans are given in Tables 3 and 4. We can see that the rough list and ordering of textures in the top \(5\) is not significantly changed by any of these choices, so we conclude that this data is robust to changes of distribution (provided all entries of \(c^{u,d}\) be \(\mathcal{O}(1)\)).
Another reasonable concern that one may have about these results is whether the observed hierarchies are primarily coming from the texture choice. Our prior distribution for \(c^{u,d}\) is chosen to result in \(\mathcal{O}(1)\) coefficients, but there can still be anomalously hierarchical draws from this distribution, and it is possible that there are textures in which it is precisely these anomalously hierarchical
Figure 5: Correlation between closeness to the SM (\(\Xi_{\rm SM}\)) and how anomalous the random hierarchy between the largest and smallest \(c^{u,d}_{ij}\) coefficients is (anomalous here means a particularly low or high \(\eta\) percentile), for the example of the top texture in Table 1. The lack of clear correlation demonstrates that the SM-like hierarchies do not rely on accidental hierarchies amongst the coefficients in Eq. 3.
Figure 6: Individual parameter distributions for the best texture listed in Table 1. For the quark masses and the three independent CKM elements, each histogram shows the distribution of one particular parameter relative to its true SM value. For this texture, all parameter distributions are peaked within a factor of a few from their SM values. Although not shown here, there is minimal cross-correlation between any pair of these parameters.
draws that give results closer to the SM. To test this, we construct a measure of how hierarchical a particular choice of coefficients \(c^{u,d}\) is with respect to a given prior distribution. Namely, we define
\[\eta\equiv\log_{10}\frac{\max_{i,j}|c_{ij}^{u/d}|}{\min_{i,j}|c_{ij}^{u/d}|}. \tag{30}\]
Given a distribution on the individual coefficients we can construct the distribution of \(\eta\) and find a measure of how anomalously hierarchical a particular choice of \(c^{u,d}\) is. For the textures displayed in Table 1, there is generally very little correlation between how anomalously high or low \(\eta\) is and how good a fit to the SM the coefficients give (quantified by the smallness of \(\Xi_{\rm SM}\)). For example, the cross-correlation of these two parameters for our best texture is shown in Fig. 5. We take this as evidence that the observed hierarchies in the good textures are driven almost entirely by the textures themselves rather than the Yukawa coefficient matrices.
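For reference, \(\eta\) and its percentile under the baseline prior can be computed along the following lines (the sample count is arbitrary):

```python
import numpy as np

def eta(c_u, c_d):
    """log10 of the ratio between the largest and smallest |c_ij|, both sectors combined."""
    mags = np.abs(np.concatenate([np.ravel(c_u), np.ravel(c_d)]))
    return np.log10(mags.max() / mags.min())

def eta_percentile(eta_value, rng, n_samples=10_000, sigma_dex=0.3):
    """Empirical percentile of eta_value under the baseline log-normal prior."""
    def draw():
        mag = rng.lognormal(0.0, np.log(10.0**sigma_dex), size=(3, 3))
        return mag * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(3, 3)))
    reference = np.array([eta(draw(), draw()) for _ in range(n_samples)])
    return 100.0 * np.mean(reference < eta_value)
```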
For a given texture, we can also construct histograms of the individual quark masses and CKM elements to verify that the texture predicts all of them to be approximately their SM values. We do this by generating many random choices of \(c^{u,d}\), varying \(\epsilon\) to minimize \(\Xi_{\rm SM}\) for each choice of \(c^{u,d}\), and then plotting the resulting distributions of the quark masses and CKM elements. As an example, in Fig. 6 we show the distributions for the individual parameters for our best-performing texture in Table 1. We have checked these distributions for all textures listed in Table 1 and we find that all parameter distributions are centered within a factor of a few of the true SM values. We have also checked the cross-correlations between these parameters for the best textures listed in Table 1, and they are relatively mild, meaning that each parameter behaves approximately as if it were drawn independently from a distribution of the type shown in Fig. 6.
|
2302.12589
|
Revisiting Modality Imbalance In Multimodal Pedestrian Detection
|
Multimodal learning, particularly for pedestrian detection, has recently
received emphasis due to its capability to function equally well in several
critical autonomous driving scenarios such as low-light, night-time, and
adverse weather conditions. However, in most cases, the training distribution
largely emphasizes the contribution of one specific input that makes the
network biased towards one modality. Hence, the generalization of such models
becomes a significant problem where the non-dominant input modality during
training could be contributing more to the course of inference. Here, we
introduce a novel training setup with regularizer in the multimodal
architecture to resolve the problem of this disparity between the modalities.
Specifically, our regularizer term helps to make the feature fusion method more
robust by considering both the feature extractors equivalently important during
the training to extract the multimodal distribution which is referred to as
removing the imbalance problem. Furthermore, our decoupling concept of output
stream helps the detection task by sharing the spatial sensitive information
mutually. Extensive experiments of the proposed method on KAIST and UTokyo
datasets shows improvement of the respective state-of-the-art performance.
|
Arindam Das, Sudip Das, Ganesh Sistu, Jonathan Horgan, Ujjwal Bhattacharya, Edward Jones, Martin Glavin, Ciarán Eising
|
2023-02-24T11:56:57Z
|
http://arxiv.org/abs/2302.12589v2
|
# Revisiting Modality Imbalance in Multimodal Pedestrian Detection
###### Abstract
Multimodal learning, particularly for pedestrian detection, has recently received emphasis due to its capability to function equally well in several critical autonomous driving scenarios such as low-light, night-time, and adverse weather conditions. However, in most cases, the training distribution largely emphasizes the contribution of one specific input that makes the network biased towards one modality. Hence, the generalization of such models becomes a significant problem where the non-dominant input modality during training could be contributing more to the course of inference. Here, we introduce a novel training setup with a regularizer in the multimodal architecture to resolve the problem of this disparity between the modalities. Specifically, our regularizer term helps to make the feature fusion method more robust by considering both the feature extractors equivalently important during the training to extract the multimodal distribution which is referred to as removing the imbalance problem. Furthermore, our decoupling concept of output stream helps the detection task by sharing the spatial sensitive information mutually. Extensive experiments of the proposed method on the KAIST and UTokyo datasets show improvements over the respective state-of-the-art performance.
Arindam Das\({}^{1,2}\), Sudip Das\({}^{3}\), Ganesh Sistu\({}^{4}\), Jonathan Horgan\({}^{4}\), Ujjwal Bhattacharya\({}^{3}\),
Edward Jones\({}^{5}\), Martin Glavin\({}^{5}\), and Ciaran Eising\({}^{2,5}\)\({}^{1}\)DVS, Valeo India, \({}^{2}\)University of Limerick, Ireland, \({}^{3}\)Indian Statistical Institute, Kolkata,
\({}^{4}\)Valeo Vision Systems, Ireland, \({}^{5}\)University of Galway, Ireland
[email protected], {arindam.das, ciaran.eising}@ul.ie

_Keywords_: Multimodal Learning, Modality Imbalance, Multimodal Feature Fusion, Pedestrian Detection.
## 1 Introduction
While cameras are one of the primary sensors for autonomous driving perception systems, they have generally failed in certain crucial scenarios. For example, 1) automotive cameras are generally mounted outside of the vehicle body, which leaves the camera lens directly exposed and at high risk of becoming soiled [2, 3] in the presence of sand, mud, dirt, snow, grass, etc.; 2) Sun glare [4] prevents downstream vision-based algorithms from working efficiently on the overexposed area; 3) the presence of dense shadow especially hinders algorithms that operate at pixel level, such as semantic segmentation, instance segmentation, etc.; and 4) the lack of information in camera data during low-light and night-time operation leads to inaccurate pedestrian detection and degrades its extensions, such as the estimation of pedestrian pose [5][6][7]. Thus, in adverse conditions, the perception stack is not reliable enough for vehicle autonomy when considering input only in the visible spectrum. Therefore, it makes sense for autonomous driving systems to consider inputs from different modalities, such as depth [8], thermal [9], LiDAR [10], and others, in order to maintain the high safety requirements of vehicle automation. In this paper, we specifically look at a multimodal learning approach to detection using visual and thermal data.
When considering multimodal learning, scenes in which
one sensor performs significantly better than the other sensor type can cause a bias in training towards one modality. For example, if we train with only scenes from night-time, then there will be a bias towards thermal, limiting the generalisability of the network. The goal, therefore, is to make multimodal training less imbalanced by reducing such biases, so that multimodal feature fusion becomes robust. Zhou _et al._[11] addressed the problem of modality imbalance from two different aspects - illumination and feature. Layer-wise fusion was proposed between two unimodal stream networks through a Differential Modality Aware Fusion (DMAF) module. However, the reported solution relies on a particular architectural design, which makes it difficult to regularize any modality. Likewise, in [9], the fine-tuned fused features are further re-calibrated equally for both modalities.
There is, therefore, potential for further improvement in regularization between modalities without the need for any extra inputs to the network. The main contributions of our work are as follows. 1) We introduce a new concept to balance the information of both modalities by leveraging Logarithmic Sobolev Inequality to equivalently consider the information during the multimodal fusion; 2) We propose a novel end-to-end multi-modal network to achieve improvements over the recent state-of-the-art methods on two publicly available benchmark datasets, KAIST[1] and UTokyo [12]; and, 3) We conduct a systematic ablation study involving different backbones, training strategies, and network components.
## 2 Proposed Approach
The proposed end-to-end architecture, illustrated in Figure 2, combines several components that are discussed below.
### Multimodal Feature Extraction and Fusion
Recently, the Pyramid Vision Transformer (PVT) [13] has shown exemplary performance in dense prediction tasks, especially for smaller objects, which makes PVT a suitable feature extractor in this work. We employed two instances of PVT specific to visible and thermal inputs - \(E_{V}(.)\) and \(E_{T}(.)\). We have added the same feature fusion unit as used in [9], where the modality-agnostic raw feature vector (\(\sigma\)) is passed and each element of the vector is multiplied with each channel before the respective deconvolution layers, as shown in the proposed architecture diagram (Figure 2). We concatenate the calibrated features (\(\varphi\)) of both the streams, passing them through a \(Conv_{1\times 1}\) to reduce the dimension similar to the last layer of the encoder. The resulting latent variables, an inner representation of the encoding model, are used further for the downstream tasks.
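A minimal sketch of this fusion step, as we read it (this is not the authors' released code; the channel count and the shape of \(\sigma\) are assumptions):

```python
import torch
import torch.nn as nn

class FusionUnit(nn.Module):
    """Re-calibrate each stream's channels with the modality-agnostic vector sigma,
    concatenate the calibrated features and reduce with a 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_v, feat_t, sigma):
        w = sigma[..., None, None]             # (B, C) -> (B, C, 1, 1), broadcast spatially
        phi_v, phi_t = feat_v * w, feat_t * w  # calibrated features (varphi)
        return self.reduce(torch.cat([phi_v, phi_t], dim=1))

# usage (E_V, E_T stand in for the two PVT encoders):
# fused = FusionUnit(256)(E_V(x_visible), E_T(x_thermal), sigma)
```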
### Multi-stream Decoupled Detection Branch
In most of the existing methods, such as [9], final predictions are extracted from a single-stage detection decoder. In this work, we combine Score Map and IoU as outputs from one branch and decouple bounding box output to a separate branch. The main reason for creating a multi-stream decoupled detection branch is to group the related tasks. In multitask learning setup, pixel-wise classification helps with the detection task for estimating the bounding box (BBoxes) of the pedestrians accurately in different scales and various occlusion scenarios. To achieve this, our decoupled output streams are designed in such a way that IoU map further helps the task of the score map to improve the performance while the region of the objects in the score map is shifted. Both streams consist of multiple deconvolution layers, where layers are progressively upsampled by a factor of \(2\) and the number of channels is reduced by half. Class balanced cross-entropy loss and IoU loss, as used in [9], are applied for Score Map and IoU regression. Repulsion loss [14], which was proposed exclusively to handle crowded pedestrian scenarios, is applied for the BBoxes regression.
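Under the same caveats, the decoupled head could look roughly as follows; the number of deconvolution layers, channel widths and output dimensionalities are placeholders:

```python
import torch.nn as nn

def upsample_stream(in_ch, out_ch, n_layers=3):
    """Deconvolution stream: each layer upsamples by 2 and halves the channels."""
    layers, ch = [], in_ch
    for _ in range(n_layers):
        layers += [nn.ConvTranspose2d(ch, ch // 2, kernel_size=4, stride=2, padding=1),
                   nn.ReLU(inplace=True)]
        ch //= 2
    layers.append(nn.Conv2d(ch, out_ch, kernel_size=1))
    return nn.Sequential(*layers)

class DecoupledHead(nn.Module):
    """One stream for the score map and IoU map, a second stream for box regression."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.cls_stream = upsample_stream(in_ch, out_ch=2)  # score map + IoU map
        self.box_stream = upsample_stream(in_ch, out_ch=4)  # box parameters per location

    def forward(self, fused):
        cls_out = self.cls_stream(fused)
        score_map, iou_map = cls_out[:, :1], cls_out[:, 1:]
        return score_map, iou_map, self.box_stream(fused)
```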
### Detection Loss Function
We developed a training setup with Logarithmic Sobolev Inequalities [15] to deal with the problem of training a network with multimodal data that allows the extraction of equivalent information from both modalities. Here, we consider the visible and thermal images as multimodal inputs for the training of a multimodal model in the application of pedestrian detec
Figure 2: Our proposed End-to-End learning framework for multimodal pedestrian detection in diverse scenarios.
tion. The training data \(X\) consists of instances \(x^{k}\), each comprising a thermal image \(x_{t}^{k}\) and a visible image \(x_{v}^{k}\) with their corresponding ground truth \(y^{k}\); \(m^{k}\) represents the multimodal features in the probability measure space. The uncertainty of the random variables is measured by \(H(\cdot)\), the softmax entropy between the distributions \(g(\hat{y})\) and \(f_{w}(\hat{y})\):
\[H(g,f_{w})=-\sum_{\hat{y}}g(\hat{y})\log f_{w}(\hat{y}|x_{t}^{k},x_{v}^{k}) \tag{1}\]
Throughout, we consider a transformation function \(f_{w}(\cdot)\), parameterized by \(w\), and \(g(\hat{y})\) is the ground truth. Additionally, we define \(w_{v}\in W\) and \(w_{t}\in W\) to denote the parameters for visible stream and thermal stream respectively.
Through the optimization process of the parameters, it is possible that the network trains more on a single modality due to one specific input scheme being emphasized as a result of different lighting conditions, weather conditions, etc. Hence, an approximation of an imbalance function \(f_{w}(\hat{y})\) towards a single modality leads to poor fusion. To resolve the problem of multimodal inequality, we apply Fisher information that helps to measure the amount of _modality-specific_ information in each input distribution (i.e., \(x_{t}^{k}\) and \(x_{v}^{k}\)) to train the parameters \(w_{t}\) and \(w_{v}\) respectively as described in (3).
We consider the Logarithmic Sobolev method that is encapsulated for extracting the equally important features from multimodal distribution.
\[\begin{split}\int_{R^{n}}\|f(X)\|^{2}\log\|f(X)\|\,du^{X}(X)\\ \leq\int_{R^{n}}\|\nabla f(X)\|^{2}\,du^{X}(X)+\|f(X)\|_{2}^{2}\log\|f(X)\|_{2}\end{split} \tag{2}\]
where \(\,du^{X}(X)\) is the probability density function and \(u\) denotes the Gaussian measure on \(\,R^{n}\). \(\|f(\cdot)\|\) is the norm on the Hilbert space \(\,L^{2}\). We derive the following inequality, where the function \(f(x)\geq 0\),
\[\begin{split}\int_{R^{n}}f(X)\log f(X)\,du^{X}(X)-\\ \int_{R^{n}}f(X)\,du^{X}(X)\log\int_{R^{n}}f(X)du^{X}(X)\\ \leq\frac{1}{2}\int_{R^{n}}f(X)\frac{\|\nabla f(X)\|^{2}}{f(X)}\, du^{X}(x)\end{split} \tag{3}\]
The above inequality shows that the entropy is a non-negative function, since the Fisher information is non-negative. It also bounds the functional entropy in terms of the Fisher information through the log Sobolev inequality. The functional entropy \(E(f(X))\) is as follows,
\[\begin{split}& E(f(X))=\int_{R^{n}}f(X)\log f(X)\,du^{X}(X)-\\ &\int_{R^{n}}f(X)\,du^{X}(X)\log\int_{R^{n}}f(X)du^{X}(X)\end{split} \tag{4}\]
In the problem of maximizing the information in latent space, we denote measures \(\,u^{x_{t}}\) and \(\,u^{x_{v}}\) for the thermal and visible feature distributions, respectively. During optimization, the measures \(\,u^{x_{t}}\) and \(\,u^{x_{v}}\) are Gaussian and denoted as \(\,u^{x_{t}}\sim\mathcal{N}(\mu_{x_{t}},\sigma_{x_{t}}^{2})\) and \(\,u^{x_{v}}\sim\mathcal{N}(\mu_{x_{v}},\,\sigma_{x_{v}}^{2})\). Here, \((\mu_{x_{t}},\,\mu_{x_{v}})\) and \((\sigma_{x_{t}}^{2},\,\sigma_{x_{v}}^{2})\) are the means and variances of the measures of the multimodal features. The product measure over both distributions for (3) is defined as \(\,u^{X}\,=\,u^{x_{t}}\bigotimes\,u^{x_{v}}\). We define a function \(S^{X}(\cdot)\) in (5) to calculate the sensitivity of the softmax function \(p_{w}(\cdot|x_{v},x_{t})\) for our proposed architecture to the Gaussian measures \(u^{x_{v}}\) and \(u^{x_{t}}\).
\[S^{X}(\cdot)=H(p_{w}(\cdot|u^{x_{v}},u^{x_{t}}),p_{w}(\cdot|x_{v},x_{t})) \tag{5}\]
Hence, we replace the sensitivity function in (3) with the following regularization,
\[\lambda_{regu}=\int_{R^{n}}\frac{\|\nabla S^{X}(\cdot)\|^{2}}{S^{X}(\cdot)}\, du^{X}(X) \tag{6}\]
The regularization \(\lambda_{regu}\) is applied with the loss function of score map, denoted as \(\lambda_{bce_{regu}}\).
\[\lambda_{bce_{regu}}=\beta\lambda_{bce}+\delta\lambda_{regu} \tag{7}\]
The minimization of the cost function to estimate \(f_{w}(\cdot)\) is effective since the detector never saturates unless and until it detects the pedestrians precisely with high probability by leveraging the features, importantly, from both modalities.
Our final cost function is made up of four components,
\[\lambda_{loss}=\alpha\lambda_{rep}+\beta\lambda_{bce}+\delta\lambda_{regu}+ \gamma\lambda_{iou} \tag{8}\]
where \(\lambda_{rep}\) is the repulsion loss [14] to minimize the error between the predicted and ground truth boxes and also to accurately fit the predicted bounding boxes for occluded pedestrians in crowded scenarios. We make use of binary cross entropy loss [9] denoted as \(\lambda_{bce}\) to calculate the error of the score map. Additionally, we use \(\lambda_{iou}\) to over-penalize the overlapping error on the detected object and ground truth for precision. Both tasks mutually share the gradient to the early layers of the detection network. \(\alpha\), \(\beta\), \(\gamma\), and \(\delta\) are the hyper-parameters to balance the four different auxiliary losses.
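For illustration, the weighted loss of Eq. (8), together with one possible Monte-Carlo approximation of the sensitivity penalty in Eqs. (5)-(6), could be sketched as below. This is our own construction rather than the authors' exact recipe; the noise scales, sample count and loss weights are placeholders.

```python
import torch
import torch.nn.functional as F

def sensitivity_regularizer(model, x_v, x_t, sigma_v=0.1, sigma_t=0.1, n_samples=4, eps=1e-6):
    """Penalize ||grad S||^2 / S averaged over Gaussian perturbations of both inputs,
    where S is the cross-entropy between predictions on clean and perturbed inputs."""
    with torch.no_grad():
        p_clean = F.softmax(model(x_v, x_t), dim=1)
    reg = 0.0
    for _ in range(n_samples):
        xv = (x_v + sigma_v * torch.randn_like(x_v)).detach().requires_grad_(True)
        xt = (x_t + sigma_t * torch.randn_like(x_t)).detach().requires_grad_(True)
        log_p = F.log_softmax(model(xv, xt), dim=1)
        S = -(p_clean * log_p).sum()
        g_v, g_t = torch.autograd.grad(S, (xv, xt), create_graph=True)
        reg = reg + (g_v.pow(2).sum() + g_t.pow(2).sum()) / (S + eps)
    return reg / n_samples

def total_loss(l_rep, l_bce, l_regu, l_iou, alpha=1.0, beta=1.0, delta=0.1, gamma=1.0):
    # Weighted combination of the four auxiliary losses (weights are placeholders).
    return alpha * l_rep + beta * l_bce + delta * l_regu + gamma * l_iou
```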
## 3 Experimentation Details
### Datasets and Training Details
KAIST [1] is a popular multimodal dataset used for pedestrian detection tasks. It contains approximately \(95,000\) pairs of _color-thermal_ frames with a total of \(1,182\) unique pedestrians and \(103,128\) annotated bounding boxes. Zhang _et al_. [16] fixed the alignment issues of this dataset and Liu _et al_. [17] published the refined annotations for the test set. The latest version of this dataset with these fixes is used in this
work. The UTokyo [12] dataset consists of \(7,512\) frames captured using RGB, far-infrared (FIR), mid-infrared (MIR), and near-infrared (NIR) cameras, and is used in this work for cross-dataset generalization. We replicated the training strategy discussed in [9].
### Ablation Study
We considered KAIST [1] to perform our ablation study as it is quite popular and contains a large number of samples. The results of the ablation study are reported on the test set using the standard log-average Miss Rate (MR) to estimate the error. Due to the recent improvement shown by PVT [13] for smaller objects, we designed a series of experiments where we kept PVT as a baseline. This ablation study includes our proposed regularizer, which is enabled and disabled with all possible combinations of network components to validate our proposal. The number of instances of the feature fusion unit from [9] is progressively increased in the network from \(1\) to \(4\). Curriculum Learning [18] was considered as it has been helpful in achieving better generalization in our previous works. Table 1 summarizes the results of the ablation study and indicates the optimal configuration of the network and training strategy. Further, we performed an ablation study on different backbones using the optimal configuration, reported in Table 2, where PVT outperformed the other encoders by a large margin.
### Evaluation results
To facilitate fair visual comparison and verify our proposed regularizer, we have performed inference on a few samples where the network is trained with and without our proposed modified Logarithmic Sobolev Inequalities. In Figure 1, it can be clearly observed that when multimodal learning is not accompanied by regularizing the input schemes based on the training distribution, the network tends to miss detections when the representation of the same object is severely poor in one of the modalities. Tables 3 and 4 compare the proposed method with other existing state-of-the-art approaches using MR on KAIST and UTokyo respectively. We obtain incremental improvements in all categories for both datasets. All the experiments performed on the KAIST [1] dataset were evaluated as per the _reasonable_ setup [21] protocol.
|
2303.03975
|
GATE: A Challenge Set for Gender-Ambiguous Translation Examples
|
Although recent years have brought significant progress in improving
translation of unambiguously gendered sentences, translation of ambiguously
gendered input remains relatively unexplored. When source gender is ambiguous,
machine translation models typically default to stereotypical gender roles,
perpetuating harmful bias. Recent work has led to the development of "gender
rewriters" that generate alternative gender translations on such ambiguous
inputs, but such systems are plagued by poor linguistic coverage. To encourage
better performance on this task we present and release GATE, a linguistically
diverse corpus of gender-ambiguous source sentences along with multiple
alternative target language translations. We also provide tools for evaluation
and system analysis when using GATE and use them to evaluate our translation
rewriter system.
|
Spencer Rarrick, Ranjita Naik, Varun Mathur, Sundar Poudel, Vishal Chowdhary
|
2023-03-07T15:23:38Z
|
http://arxiv.org/abs/2303.03975v1
|
# GATE: A Challenge Set for Gender-Ambiguous Translation Examples
###### Abstract
Although recent years have brought significant progress in improving translation of unambiguously gendered sentences, translation of ambiguously gendered input remains relatively unexplored. When source gender is ambiguous, machine translation models typically default to stereotypical gender roles, perpetuating harmful bias. Recent work has led to the development of "gender rewriters" that generate alternative gender translations on such ambiguous inputs, but such systems are plagued by poor linguistic coverage. To encourage better performance on this task we present and release GATE, a linguistically diverse corpus of gender-ambiguous source sentences along with multiple alternative target language translations. We also provide tools for evaluation and system analysis when using GATE and use them to evaluate our translation rewriter system.
## 1 Introduction
Gender is expressed differently across different languages. For example, in English the word _lawyer_ could refer to either a male or female individual, but in Spanish, _abogada_ and _abogado_ would be used to refer to a female or a male lawyer respectively. This frequently leads to situations where in order to produce a single translation, a translator or machine translation (MT) model tends to choose an arbitrary gender to assign to an animate entity in translation output where it was not implied by the source. In this paper, we refer to this phenomenon as _arbitrary gender marking_ and to such entities as Arbitrarily Gender-Marked Entities (AGMEs).
Translation with arbitrary gender marking is a significant issue in MT because these arbitrary gender assignments often align with stereotypes, perpetuating harmful societal bias Stanovsky et al. (2019); Ciora et al. (2021). For example, MT models will commonly translate the following (from English to Spanish):
_The surgeon \(\xrightarrow{\mathit{MT}}\) El cirujano (m)_
_The nurse \(\xrightarrow{\mathit{MT}}\) La enfermera (f)_
Progress has been made to remedy this using a "gender rewriter" - a system that transforms a single translation with some set of gender assignments for AGMEs into a complete set of translations that covers all valid sets of gender assignments for a source sentence into the target language Prates et al. (2018). Using a rewriter:
_The surgeon_ \(\xrightarrow{\mathit{MT}}\) _El cirujano_ (m) \(\xrightarrow{\text{rewriter}}\) _La cirujana_ (f) / _El cirujano_ (m)
Although a step in the right direction, these rewriters often have poor linguistic coverage and only work correctly in simpler cases. Google Translate has publicly released such a system for a subset of supported languages, and we observe two error cases1:
Footnote 1: as observed on Mar 6, 2023
1. It does not rewrite when necessary: _The director was astonished by the response of the community_. produces only one translation corresponding to masculine director.
2. It rewrites partially, or incorrectly: _I'd rather be a nurse than a lawyer_ produces two translations but only lawyer is reinflected for gender (nurse is not).
To facilitate improvement in coverage and accuracy of such rewriters and reduce bias in translation, we release GATE2, a test corpus containing gender-ambiguous translation examples from English (en) into three Romance languages ([22]): Spanish (es), French (fr) and Italian (it). Each English source sentence3 is accompanied by one target language translation for each possible combination of masculine and feminine gender assignments of AGMEs 4:
Footnote 2: Data and evaluation code available at [https://github.com/Microsoft/Translator/GATE](https://github.com/Microsoft/Translator/GATE)
Footnote 3: A few non-sentence utterances are also included as well, such as noun-phrases and sentence fragments
Footnote 4: The majority of source sentences contain only one AGME and thus two translations
_I know **a Turk** who lives in Paris._
\(\downarrow\) _it_
_Conosco una turca che vive a Parigi._ (f)
_Conosco un turco che vive a Parigi._ (m)
GATE is constructed to be challenging, morphologically rich and linguistically diverse. It has \(\sim 2000\) translation examples for each target language, and each example is annotated with linguistic properties (coreferent entities, parts of speech, etc.). We additionally propose a set of metrics to use when evaluating gender rewriting modules.
This corpus was developed with the help of bilingual linguists with significant translation experience for each of our target languages (henceforth _linguists_). Each is a native speaker in their respective target language. We spoke in depth with our linguists about the nuances of gender-related phenomena in our focus languages and we share our analysis of the relevant aspects and how they impact our work and the task of gender rewriting.
Along with the corpus, we also provide tools for evaluation and system analysis when using GATE and use them to evaluate our own translation rewriter system.
## 2 GATE Corpus
We present GATE corpus, a collection of bilingual translation examples designed to challenge source-aware gender-rewriters. The linguists were asked to compile roughly 2,000 examples for each target language, with the hope that this would be sufficient for good variety along several dimensions: sentence lengths, sentence structures, vocabulary diversity, and variety of AGME counts.
### Anatomy of an Example
Each example in the data set consists of an English sentence with at least one AGME, and a set of alternative translations into the given target language corresponding to each possible combination of male/female gender choices for each AGME. Variation among the alternative translations is restricted to the minimal changes necessary to naturally and correctly indicate the respective gender-markings.
We also mark several category features on each example, such as what class of animate noun AGMEs belong to (profession, relationship, etc), what grammatical role they play in the sentence (subject, direct object, etc), sentence type (question, imperative, etc) and several other phenomena. These are discussed in more detail in Appendix A, as well as statistics over each language's corpus.
Additionally, each example is accompanied by a list of AGMEs as they appear in the English source, as well as their respective masculine and feminine translations found in the translated sentences. For multi-word phrases, we asked annotators to enclose the head noun in square brackets. For example, if _police officer_ is translated to _polica_ in Spanish, the English field could include _police [officer]_.
The same entity may be referred to multiple times in the same sentence through coreference. We asked annotators to indicate coreferent mentions of AGMEs by joining them with '='. For example, in the following _en-es_ example, the English AGME field would contain "nurse=lawyer".
_I'd rather be a **nurse** than a **lawyer**._
_Prefiero ser enfermera que abogada._ (f)
_Prefiero ser enfermero que abogado._ (m)
Finally, in cases where an AGME is represented by a pronoun that is elided in the translation, it will be represented by the nominative case form and be enclosed in parentheses. For example, in the following example, the Spanish AGME field would contain _(yo)_:
_I am **tired**._
_Estoy cansada._ (f)
_Estoy cansado._ (m)
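For concreteness, the sketch below parses the AGME field format described above; it is not part of any released GATE tooling, and the use of ';' to separate multiple distinct AGMEs is our assumption.

```python
import re

def parse_agme_field(field):
    """'nurse=lawyer' -> one entity with two coreferent mentions,
    'police [officer]' -> multi-word phrase with head noun 'officer',
    '(yo)' -> pronoun elided in the translation (nominative form given)."""
    entities = []
    for entity in field.split(";"):                # assumed separator between distinct AGMEs
        mentions = []
        for mention in entity.strip().split("="):  # '=' joins coreferent mentions
            mention = mention.strip()
            elided = mention.startswith("(") and mention.endswith(")")
            head = re.search(r"\[(.+?)\]", mention)
            text = re.sub(r"[\[\]()]", "", mention)
            mentions.append({"text": text,
                             "head": head.group(1) if head else text,
                             "elided": elided})
        entities.append(mentions)
    return entities

print(parse_agme_field("police [officer]=lawyer"))
```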
### Arbitrarily Gender-Marked Entities
In this paper, we use _animate entity_ (or just _entity_) to refer to an individual or group for which a referential
gender could be implied in either the source or target language5. Usually this will refer to humans, but may also be extended to some animals and mythical or sentient beings. For example, _cat_ is generally translated into Spanish as _gato_, but _gata_ is also frequently used to refer to a female cat. Following Dahl (2000), we use _referential gender_ to refer to an entity's gender as a concept outside of any linguistic considerations.
Footnote 5: For simplicity, we limit our discussion of gender and linguistics to masculine and feminine within the scope of this paper, but we do not intend to imply that gender is limited in this way.
To qualify as an AGME, an entity's referential gender must be ambiguous in the source sentence, but implied by one or more words in the target translation. Compared to Romance languages, there are relatively few ways that gender is denoted through word-choice in English. Most notably, English uses a handful of gendered pronouns and possessive adjectives (_she_, _her_, _hers_, _he_, _him_, _his_), as well as a relatively small number of animate nouns that imply a gender (e.g. _mother_, _father_, _masseuse_, _masseur_, etc). There is also often a correlation between certain proper names and referential gender (e.g. _Sarah_ is traditionally a female name and _Matthew_ is traditionally male), but we do not consider this a reliable enough signal for gender determination unless they are a well known public figure (e.g. _Barrack Obama_ is known to be male). We follow Vanmassenhove and Monti (2021) in this.
Additionally, an AGME must have some gender marking in the translation. In the following English-Italian example,
_I heard the thief insult **his interlocutor**._
\(\downarrow\) _it_
_Io ho sentito il ladro insultare la sua interlocutrice._
_Io ho sentito il ladro insultare il suo interlocutore._
_interlocutor\(\rightarrow\)interlocutrice_ (f) / _interlocutore_ (m) is an AGME, while _thief\(\rightarrow\)ladro_ and _I\(\rightarrow\)Io_ are not. _Thief_ is unambiguously male because of its coreference with _his_ in the source, while \(I\) has ambiguous gender which is not marked in the target.
### Corpus Development Process
The linguists were asked to aim for a distribution of sentences lengths ranging from very short (< 10 words) to complex (> 30) words. Actual example counts are shown in Table 1. Of the 2,000 examples for each language, linguists were asked to include roughly the following breakdown:
* 1,000 single animate noun AGME
* 500 single pronoun AGME
* 500 with two or more AGMEs
Linguists were given details of the various categories and attributes listed in section A and asked to find sentences such that each such category is well represented (depending on the relative ease of finding such sentences). Linguists were also asked to prioritize diversity of animate nouns where possible. They were allowed to pull examples sentences from natural text or construct them from scratch as they saw fit. However, except for a small number of toy examples, we asked that they include only sentences that were natural in both English and their target language, and could reasonably appear in some imaginable context.
We provided samples of web-scraped data that had been filtered with various heuristics to help identify sentences fitting some of the harder-to-satisfy criteria. For example, we used Stanza (Qi et al., 2020) to filter some web-scraped data for those containing an animate noun marked as an indirect object and provided this to the linguists. In some cases these sentences were used directly, and in others they were modified slightly to fit the requirements.
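As a rough illustration of this kind of heuristic, the sketch below uses Stanza to keep sentences whose dependency parse contains an animate noun marked as an indirect object; the tiny animate-noun list is a placeholder for a much larger lexicon.

```python
import stanza

# stanza.download("en")  # required once
nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,lemma,depparse")

ANIMATE_NOUNS = {"doctor", "teacher", "guest", "lawyer", "artist", "nurse"}  # placeholder list

def has_animate_indirect_object(sentence):
    doc = nlp(sentence)
    return any(word.deprel == "iobj" and word.upos == "NOUN" and word.lemma in ANIMATE_NOUNS
               for sent in doc.sentences for word in sent.words)

examples = ["She gave the doctor a book.", "It rained all day yesterday."]
print([s for s in examples if has_animate_indirect_object(s)])
```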
Throughout the process, we prioritized diversity of sentence structure, domain and vocabulary. Rather than produce a representative sample, our intention was to produce a corpus that would challenge
\begin{table}
\begin{tabular}{|l|l l l l|l|} \hline
**Data Set** & **< 10** & **10-19** & **20-29** & **>= 30** & **Total** \\ \hline Spanish 1 AGME & 477 & 722 & 197 & 105 & 1,501 \\ Spanish 2+ AGMEs & 70 & 176 & 56 & 21 & 323 \\ \hline French 1 AGME & 704 & 661 & 171 & 14 & 1,550 \\ French 2+ AGMEs & 177 & 222 & 41 & 4 & 444 \\ \hline Italian 1 AGME & 397 & 867 & 195 & 48 & 1,507 \\ Italian 2+ AGMEs & 93 & 500 & 139 & 30 & 762 \\ \hline \end{tabular}
\end{table}
Table 1: Distribution of lengths (words) of English utterance per target language and AGME count
any tested systems on a wide range of phenomena.
## 3 Evaluation with GATE
### Gender Rewriting
Our goal in developing this corpus is to facilitate the generation of multiple translations covering all valid gender assignments. One strategy for producing such a set of translations is to first use an MT model to produce a default translation and then use a rewriter to generate one or more alternative translations with other gender assignments (Prates et al., 2018).
\[\text{source}\xRightarrow{\text{MT}}\text{translation}\xRightarrow{\text{ rewriter}}\text{ \{all translations}}\]
### Evaluation Methodology
We formalize the task of gender rewriting on a single-AGME sentence as follows: given the source sentence \(src\), target translations corresponding to male and female referent entities, and a rewrite direction (M to F or F to M), produce an output target translation with the alternative gender from the original translation. We will refer to the original input translation as \(tgt_{0}\), the desired/reference translation as \(tgt_{1}\) and the output generated by the rewriter as \(hyp\):
\[rewriter(src,tgt_{0})=hyp\sim tgt_{1}\]
For this task, we consider looking at exact full-sentence matches between \(hyp\) and \(tgt_{1}\) to be the most sensible approach for evaluation. We do not give partial credit for changing the gender markings on only a subset of the words to those found in \(tgt_{1}\). Doing so will generally result in a sentence that is either grammatically incorrect due to newly introduced agreement errors, or for which the semantics has changed in an unacceptable way, such as a changed coreference. Because of this, we find sentence-similarity measures such as BLEU (Papineni et al., 2002) and word error rate not to be reflective of a user's experience.
The rewriter may also produce a null output, meaning that only the default translation will be produced. This is necessary because in real-world scenarios, many sentences will not contain AGMEs. When AGMEs are present, it may still be preferable to produce null output over a low confidence rewrite if accuracy errors are judged to be more costly than coverage errors.
We calculate precision as the proportion of correct alternatives among those attempted, i.e. that were non-null outputs. Because there are no true negatives in GATE, recall can be calculated as the proportion of correct alternatives produced among all sentences, including null outputs. Using these definitions of precision and recall, we also find \(F_{0}.5\) to be a useful overall metric, prioritizing precision while still incorporating coverage.
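A minimal sketch of these metrics, representing a null output as `None` and scoring correctness by exact full-sentence match as argued above:

```python
def rewriter_scores(outputs, references):
    """Precision, recall and F0.5 for single-AGME rewriting."""
    attempted = [(h, r) for h, r in zip(outputs, references) if h is not None]
    correct = sum(h == r for h, r in attempted)
    precision = correct / len(attempted) if attempted else 0.0
    recall = correct / len(references) if references else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    beta2 = 0.5 ** 2
    f05 = (1 + beta2) * precision * recall / (beta2 * precision + recall)
    return precision, recall, f05

print(rewriter_scores(["la cirujana", None, "la jueza"],
                      ["la cirujana", "la abogada", "la jueza"]))
```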
While we have focused our discussion of evaluation on sentences containing a single AGME, which typically should produce exactly two alternative translations, GATE also includes a smaller number of examples with more than one AGME. These have more than two alternative translations and thus more than one correct output for a rewriter. We do not formalize evaluation on this subset here but believe that the data set will be useful in evaluating rewriting systems capable of producing multiple outputs for multiple sets of gender assignments.
### System Overview
We use GATE to evaluate our translation gender rewriter, which follows a pipeline approach, roughly similar to Habash et al. (2019).
The system receives as input the original source sentence (\(src\)) and a default translation (\(tgt_{0}\)) with the specified language pair. The following components are then applied:
**AGME Identifier** - We first attempt to find AGMEs in the sentence pair to determine whether rewriting is appropriate. We leverage an AllenNLP coreference model to detect ambiguously gendered entities in the source sentence (Lee et al., 2018). We use a dependency parse generated by Stanza (Qi et al., 2020) and a gendered vocab list to identify gender-marked animate entities in the target sentence.
**Candidate Generator** - For each word position in \(tgt_{0}\), we use a lookup table to find all possible alternate gender variants for the word in that position. We compose the word-level variant sets to build a set of sentence-level hypotheses, while applying grammatical constraints to prune incoherent hypotheses. This yields a set of candidate rewrites.
**Translation Scorer** - Finally, we use a Marian translation model (Junczys-Dowmunt et al., 2018) to score each rewrite candidate as a translation of the source sentence. If no candidates have scores close to that of \(tgt_{0}\), we return a null output. Otherwise we choose the highest-scoring candidate as the rewrite.
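The pipeline can be summarized by the skeleton below; the three callables stand in for the coreference-based AGME identifier, the lookup-table candidate generator, and the Marian-based scorer, and the score margin is a placeholder rather than the system's actual threshold.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class GenderRewriter:
    find_agmes: Callable[[str, str], list]           # AGME identifier
    generate_candidates: Callable[[str], List[str]]  # candidate generator
    score: Callable[[str, str], float]               # translation scorer
    margin: float = 1.0                              # hypothetical score tolerance

    def rewrite(self, src: str, tgt0: str) -> Optional[str]:
        if not self.find_agmes(src, tgt0):
            return None                              # nothing to rewrite
        candidates = self.generate_candidates(tgt0)
        if not candidates:
            return None
        best = max(candidates, key=lambda c: self.score(src, c))
        # Emit a rewrite only if it scores close to the default translation.
        return best if self.score(src, best) >= self.score(src, tgt0) - self.margin else None
```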
### Experimental Results
We evaluate our system for rewriting quality on GATE in both masculine-to-feminine and feminine-to-masculine directions. To simulate runtime efficiency constraints, we impose a cutoff of 20 maximum source words. Any input sentence longer than this is treated as a null output and therefore a false negative.
From these results we can see that our system performs best for Spanish in both directions, and in the female-to-male direction across all language pairs. Both trends can be explained to an extent by the properties of the translation models. High quality training data for English-Spanish is more plentiful than for the other two languages, leading to a higher quality model in general. As noted earlier, translation models have been shown to skew towards stereotypical gender assignments, which are more heavily weighted towards masculine forms. Therefore, it is not too surprising that when rewriting in the masculine-to-feminine direction, the translation model is more likely to prefer an incorrect rewrite candidate.
### End-to-End Evaluation
In our envisioned scenario, a gender rewriter would operate on the output of an MT system. It is unlikely, however, that direct MT output will consistently match GATE's translations word-for-word. As a result, references cannot be directly utilized, and human annotation is required to assess the output of a rewriter alongside machine translation (MT) or any integrated system that generates a series of gender alternative translations from a single source sentence. One consideration is that a parallel sentence from GATE may no longer contain an AGME when machine translated, as the MT output may be unmarked for gender.
In order to test our combined system end-to-end, we sampled 200 source sentences from GATE and used our production MT models to translate them into Spanish, and then pass that output to our rewriter. We then ask annotators to examine the source sentence and all translation outputs, and to provide the following annotations:
* If two translations are produced, mark true positive if the following are true (otherwise false positive):
* Is the target gender-marked for an ambiguous source entity?
* Were all the words marking gender of AGME changed correctly?
* Were only the words marking gender of AGME changed?
* If only one translation is produced, is the target gender marked for an ambiguous source entity?
* If there are multiple AGMEs:
* If two valid translations are produced mark as a true positive.
* If only one translation is produced mark as a true negative.
* Otherwise mark as a false positive.
We also retrieve translations for these sentences from an online English-Spanish translation system that can produce masculine and feminine alternative translations for this translation direction. We asked annotators to annotate these translations in the same manner.
Finally, we also asked annotators to mark source sentences for which the speaker is reasonably likely to know the referent's gender, and therefore use of a masculine generic should be less likely (see 4.5). We evaluate quality on that subset as well for each system, in rows marked NG (_non-generic_). Results are presented in Table 3 and visualized in Figure 1.
Both systems heavily favor precision over recall, and recall is somewhat higher on the _non-generic_
\begin{table}
\begin{tabular}{|l|c||c|c|c|} \hline Language & Direction & P & R & F0.5 \\ \hline \hline Spanish & F\(\rightarrow\)M & 0.97 & 0.50 & 0.82 \\ \hline Spanish & M\(\rightarrow\)F & 0.95 & 0.40 & 0.74 \\ \hline French & F\(\rightarrow\)M & 0.97 & 0.28 & 0.65 \\ \hline French & M\(\rightarrow\)F & 0.91 & 0.27 & 0.61 \\ \hline Italian & F\(\rightarrow\)M & 0.96 & 0.47 & 0.79 \\ \hline Italian & M\(\rightarrow\)F & 0.91 & 0.32 & 0.67 \\ \hline \end{tabular}
\end{table}
Table 2: Our rewriter’s scores on GATE for each target language and rewrite direction
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline & P & R & F0.5 \\ \hline Our System & 0.97 & 0.41 & 0.76 \\ \hline Our System (NG) & 1.00 & 0.50 & 0.84 \\ \hline \hline Online system & 0.96 & 0.14 & 0.45 \\ \hline Online system (NG) & 1.00 & 0.21 & 0.56 \\ \hline \end{tabular}
\end{table}
Table 3: End-to-end scores for our system and an online translation system. NG rows are calculated only on _non-generic_ sentences
portion of the data. Overall, our system demonstrates significantly better coverage.
A full, end-to-end evaluation should include testing both sentences with and without AGMEs. As each instance in GATE involves at least one AGME, we suggest enhancing GATE with instances from Renduchintala and Williams (2021) and Vanmassenhove and Monti (2021), which feature unequivocally gendered source entities. In future work, we intend to develop a supplemental data set for GATE containing various types of negative examples: unambiguous source entities, entities that are unmarked in both source and target, and inanimate objects whose surface forms are distractors (e.g. depending on context, _player_ and _cleaner_ may refer to either objects or people).
## 4 Linguistic Background
### Gender in Romance Languages
In Spanish, French and Italian, all nouns have a grammatical gender - either masculine or feminine. For inanimate objects, this gender is fixed and often arbitrary; for example, in French, _chaise_ (chair) is feminine, while _canapé_ (couch) is masculine. When a noun or pronoun refers to an animate entity, its grammatical gender will, with some notable exceptions, match the referential gender of that entity (Vincent, 1988).
In these languages, referential gender of entities is frequently marked through morphology of an animate noun (e.g. _en-es_: _lawyer_\(\Rightarrow\)\(abogada\)(f), _abogado_(m)) or through agreement with gendered determiners, adjectives and verb forms.
### Dual Gender and Epicene
Some animate nouns are _dual gender_, meaning that the same surface form is used for both masculine and feminine, such as French _artiste_ (artist) (Corbett 1991 as cited in Hellinger and Bussmann 2002). However, other clues to the artist's gender may exist in a French sentence through gender agreement with other associated words. For example, _The tall artist_ could be translated into French as _La grande artiste_ (f) or _Le grand artiste_ (m). Here, the grammatical gender of the translations of _the_ (_la_ (f) / _le_ (m)) and _tall_ (_grande_ (f), _grand_ (m)) must match the referential gender of the referent noun.
Dual-gender determiners and adjectives exist as well, such as Spanish _mi_ (my) and _importante_ (important). So, for example, Spanish _mi huesped importante_ (My important guest) has no gender marking. Similarly, in French and Italian, some determiners may contract before vowels to lose their gender marking. Feminine and masculine forms of _the_ in French, _le_ and _la_, both contract before vowels (and sometimes _h_) to become _l'_, so _l'artiste_ (the artist) is not marked for gender.
While typically an entity's referential gender will align with its grammatical gender, these languages each contain a handful of _epicene_ nouns. These are nouns whose grammatical gender is fixed, regardless of the referential gender of the referent (Grevisse 2016 as cited in Hellinger and Bussmann 2003). Most notable among these is the direct translation of _person_ into each of the target languages, which is always grammatically feminine: _La persona_ (_es_,_it_) or _La personne_ (_fr_). We also find some language-specific epicene nouns. For example, these Italian words are always grammatically feminine: _la guardia_ (guard), _la vedetta_ (sentry), _la sentinella_ (sentry), _la recluta_ (recruit), _la spia_ (spy). 6
Footnote 6: Color-coding in this paragraph corresponds only to grammatical gender, while referential gender is ambiguous in these expressions.
### Pronouns
Similarly to English, some pronouns in Romance languages are inherently gendered, while others are not. Entities referred to by gender-neutral pronouns, such as Spanish _yo_ (I) and _tu_ (you) commonly become gender-marked through predicative gender-inflecting adjectives. Further complicating these cases, subject pronouns are frequently omitted in Spanish and Italian (but notably not in
Figure 1: End-to-end scores for our system and an online translation system.
French) as the subject can be inferred from verb morphology (Hellinger and Bussmann 2002, pp. 189, 252). This means that in some cases, the AGME in a sentence pair may be a zero-pronoun, such as English _I am **tired**_ being translated to Spanish as _estoy cansada_ (f) or _estoy cansado_ (m). There is no overt subject in these translations corresponding to \(I\), but the subject is implied by the verb form _estoy_.
### Coreference
Another common pattern is that of coreferent mentions of a single entity, which must by definition have the same referential gender, and usually but not always the same grammatical gender. For example, in the following sentence, _friend_ and _nurse_ are the same individual and we would typically expect them to share the same referential gender in a direct translation into any of the target languages.
_My best friend is a nurse_
In cases where one coreferent mention is an epicene noun as described in 4.2, the grammatical genders of those mentions may in fact differ. In the following sentence, the described individual is unambiguously male. The phrase _una buena persona_ (a good person) is grammatically female, while _un mal amigo_ (a bad friend) and _el_ (he) are grammatically male. 7
Footnote 7: In this example, color-coding indicates the grammatical gender of each mention as it appears in the Spanish translation
_He is a good person but a bad friend._
_Él es una buena persona, pero un mal amigo._
### Masculine Generics
Traditionally, many languages, including Spanish, French and Italian, employ a paradigm known as masculine generics. Under this paradigm, feminine forms are considered to be explicitly gender-marked, while masculine forms should be used in situations where referential gender is unclear. Specifically, when referential gender is unknown by the speaker, or a mixed-gender group is known to contain at least one male individual, defaulting to grammatically masculine forms is generally considered correct in the language standard8. In this sense, masculine gender marking does not imply the exclusion of female-identifying individuals, but a feminine gender marking would imply the exclusion of male-identifying individuals. (Hellinger and Bussmann 2002, 2003)
Footnote 8: In recent years there have been some explorations of using novel, gender-neutral forms in these contexts
In most cases where a masculine generic might be used, we nonetheless ask our linguists to provide an alternative translation with feminine gender-marking. Language critics have noted that the use of masculine generics can evoke an association with 'male' (Hellinger and Bussmann 2003, pp. 101), and so we believe that inclusion of a feminine generic variant fits our mission of promoting inclusive language use. Our linguists were asked to annotate such generic mentions with the label INDF (_indefinite gender_), so that users who wish to follow a stricter interpretation can exclude these examples in their evaluations. However, upon analysis of our corpus we noted that this annotation was only consistently applied to the Italian data.
## 5 Related Work
A slew of challenge sets has been proposed for evaluating gender bias in Machine Translation.
**MuST-SHE** (Bentivogli et al., 2020; Savoldi et al., 2022) comprises approximately 1000 triplets consisting of audio, transcript, and reference translations for the en-es, en-fr, and en-it language pairs. Each triplet is classified based on the gender of the speaker or explicit gender markers, such as pronouns, as either masculine or feminine. Furthermore, the dataset contains an alternative incorrect reference translation for every correct reference translation that alters the gender-marked words.
**WinoMT** (Stanovsky et al., 2019) is a challenge set that comprises English sentences containing two animate nouns, one of which is coreferent with a gendered pronoun. Based on the context provided in the sentence, a human can easily identify which animate noun is coreferent and thus deduce the gender of the person described by that noun. By evaluating the frequency with which an MT system generates a translation with the correct gender for that animate noun, one can measure the extent to which the system depends on gender stereotypes rather than relevant context.
**SimpleGEN** (Renduchintala et al., 2021) focuses on the English-Spanish (en-es) and English-German (en-de) language pairs. It includes a test set consisting of short sentences with straightforward syntactic structures. Each source sentence includes an occupation noun and a clear indication of the gender of
the person described by that noun. In other words, the source sentence provides all the necessary information for a model to generate occupation nouns with the correct gender.
**The Translated Wikipedia Biographies9** dataset comprises 138 documents containing human translations of Wikipedia biographies from English to Spanish and German. Each document comprises 8-15 sentences, providing a context for gender disambiguation evaluation across sentences.
Footnote 9: [https://ai.googleblog.com/2021/06/a-dataset-for-studying-gender-bias-in.html](https://ai.googleblog.com/2021/06/a-dataset-for-studying-gender-bias-in.html)
**MT-GenEval** (Currey et al., 2022) is a dataset that includes gender-balanced, counterfactual data in eight language pairs. The dataset ensures that the gender of individuals is unambiguous in the input segment, and it comprises multi-sentence segments that necessitate inter-sentential gender agreement.
Regarding the work on addressing ambiguously gendered inputs, Habash et al. (2019) tackle translation of ambiguous input by treating it as a gender classification and reinflection task when translating English into Arabic. Their approach focuses on the first-person singular cases. Given a gender-ambiguous source sentence and its translation, their system generates an alternative translation using the opposite gender. Additionally, they create a parallel corpus of first-person singular Arabic sentences that are annotated with gender information and reinflected accordingly. Alhafni et al. (2021) expand on the work of Habash et al. (2019) by adding second person targets to the Arabic Parallel Gender Corpus, as well as increasing the total number of sentences.
Google Translate announced10 an effort to address gender bias for ambiguously gendered inputs by showing both feminine and masculine translations. They support this feature for English to Spanish translation, as well as several gender-neutral languages into English.
Footnote 10: [https://ai.googleblog.com/2020/04/a-scalable-approach-to-reducing-gender.html](https://ai.googleblog.com/2020/04/a-scalable-approach-to-reducing-gender.html)
Regarding debiasing in the monolingual context, Zmigrod et al. (2019) propose a generative model capable of converting sentences inflected in masculine form to those inflected in feminine form, and vice versa, in four morphologically rich languages. Their work focuses on animate nouns.
In terms of rewriting text in English, Vanmassenhove et al. (2021) and Sun et al. (2021) propose rule-based and neural rewriting models, respectively, that are capable of generating gender-neutral sentences.
## 6 Conclusion
We have presented GATE, a corpus of hand-curated test cases designed to challenge gender rewriters on a wide range of vocabulary, sentence structures and gender-related phenomena. Additionally, we provide an in-depth analysis of many of the nuances of grammatical gender in Romance languages and how it relates to translation. We also suggest metrics for gender rewriting and provide tools to aid with their calculation. Through this work we aim to improve the quality of MT output in cases of ambiguous source gender, as well as facilitate the development of better and more inclusive natural language processing (NLP) tools in general.
We look forward to future work in improving GATE and related projects. We aim to add additional language pairs to GATE and investigate translation directions into English. We also hope to supplement with additional data, including negative examples. Finally, we plan to explore gender-neutral language use in various languages and how it can be incorporated into NLP applications.
## 7 Bias Statement
In this work, we propose a test set to evaluate translation of ambiguously gendered source sentences by NMT systems. Our work only deals with English as the source and is currently scoped to Romance languages as the target. To construct our test set, we have worked with bilingual linguists for each target language. We plan to increase scope of both source and target languages in future work.
Through this work, we hope to encourage and facilitate more inclusive use of natural language processing technology, particularly in terms of gender representation. In recent years, there has been significant ongoing movement in the way gender manifests in language use. One form that this takes is in new gender-neutral language constructs in Romance languages such as French, Spanish and Italian to accommodate gender underspecificity and non-binary gender identities. We support the development of this more representative and inclusive language, and endeavor to find ways to support it through technology. In this work, however, for the sake of simplicity we restrict our scope to language as used to express gender along more conventionally binary
lines, and we therefore do not consider non-binary language or word forms. We are working with both language experts and non-binary-identifying individuals to expand the scope to include non-binary and gender-underspecified language in future work.
|
2307.00753
|
KMT-2022-BLG-0475Lb and KMT-2022-BLG-1480Lb: Microlensing ice giants
detected via non-caustic-crossing channel
|
We investigate the microlensing data collected in the 2022 season from the
high-cadence microlensing surveys in order to find weak signals produced by
planetary companions to lenses. From these searches, we find that two lensing
events KMT-2022-BLG-0475 and KMT-2022-BLG-1480 exhibit weak short-term
anomalies. From the detailed modeling of the lensing light curves, we identify
that the anomalies are produced by planetary companions with a mass ratio to
the primary of $q\sim 1.8\times 10^{-4}$ for KMT-2022-BLG-0475L and a ratio
$q\sim 4.3\times 10^{-4}$ for KMT-2022-BLG-1480L. It is estimated that the host
and planet masses and the projected planet-host separation are $(M_{\rm
h}/M_\odot, M_{\rm p}/M_{\rm U}, a_\perp/{\rm au}) = (0.43^{+0.35}_{-0.23},
1.73^{+1.42}_{-0.92}, 2.03^{+0.25}_{-0.38})$ for KMT-2022-BLG-0475L, and
$(0.18^{+0.16}_{-0.09}, 1.82^{+1.60}_{-0.92}, 1.22^{+0.15}_{-0.14})$ for
KMT-2022-BLG-1480L, where $M_{\rm U}$ denotes the mass of Uranus. Both
planetary systems share common characteristics that the primaries of the lenses
are early-mid M dwarfs lying in the Galactic bulge and the companions are ice
giants lying beyond the snow lines of the planetary systems.
|
Cheongho Han, Chung-Uk Lee, Ian A. Bond, Weicheng Zang, Sun-Ju Chung, Michael D. Albrow, Andrew Gould, Kyu-Ha Hwang, Youn Kil Jung, Yoon-Hyun Ryu, In-Gu Shin, Yossi Shvartzvald, Hongjing Yang, Jennifer C. Yee, Sang-Mok Cha, Doeon Kim, Dong-Jin Kim, Seung-Lee Kim, Dong-Joo Lee, Yongseok Lee, Byeong-Gon Park, Richard W. Pogge, Shude Mao, Wei Zhu, Fumio Abe, Richard Barry, David P. Bennett, Aparna Bhattacharya, Hirosame Fujii, Akihiko Fukui, Ryusei Hamada, Yuki Hirao, Stela Ishitani Silva, Yoshitaka Itow, Rintaro Kirikawa, Iona Kondo, Naoki Koshimoto, Yutaka Matsubara, Shota Miyazaki, Yasushi Muraki, Greg Olmschenk, Clément Ranc, Nicholas J. Rattenbury, Yuki Satoh, Takahiro Sumi, Daisuke Suzuki, Taiga Toda, Mio Tomoyoshi, Paul J. Tristram, Aikaterini Vandorou, Hibiki Yama, Kansuke Yamashita
|
2023-07-03T04:57:31Z
|
http://arxiv.org/abs/2307.00753v1
|
KMT-2022-BLG-0475Lb and KMT-2022-BLG-1480Lb: Microlensing ice giants detected via non-caustic-crossing channel
###### Abstract
Context:
Aims:We investigate the microlensing data collected in the 2022 season from the high-cadence microlensing surveys in order to find weak signals produced by planetary companions to lenses.
Methods:From these searches, we find that two lensing events KMT-2022-BLG-0475 and KMT-2022-BLG-1480 exhibit weak short-term anomalies. From the detailed modeling of the lensing light curves, we identify that the anomalies are produced by planetary companions with a mass ratio to the primary of \(q\sim 1.8\times 10^{-4}\) for KMT-2022-BLG-0475L and a ratio \(q\sim 4.3\times 10^{-4}\) for KMT-2022-BLG-1480L.
Results:It is estimated that the host and planet masses and the projected planet-host separation are \((M_{\rm h}/M_{\odot},M_{p}/M_{\rm U},a_{\perp}/{\rm au})=(0.43^{+0.35}_{-0.23},1.73^{+1.42}_{-0.67},2.03^{+0.25}_{-0.33})\) for KMT-2022-BLG-0475L, and \((0.18^{+0.16}_{-0.09},1.82^{+1.60}_{-0.92},1.22^{+0.15}_{-0.14})\) for KMT-2022-BLG-1480L, where \(M_{\rm U}\) denotes the mass of Uranus. Both planetary systems share common characteristics that the primaries of the lenses are early-mid M dwarfs lying in the Galactic bulge and the companions are ice giants lying beyond the snow lines of the planetary systems.
Conclusions:
## 1 Introduction
The microlensing signal of a planet usually appears as a short-term anomaly on the smooth and symmetric lensing light curve generated by the host of the planet (Mao & Paczynski 1991; Gould & Loeb 1992). The signal arises when a source approaches the perturbation region formed around the caustic induced by the planet. Caustics represent the positions on the source plane at which the lensing magnification of a point source is infinite, and thus source crossings over the caustic result in strong signals with characteristic spike features.
The region of planetary deviations extends beyond caustics, and planetary signals can be produced without the caustic crossing of a source. Planetary signals produced via the non-caustic-crossing channel are weaker than those generated by caustic crossings, and the strength of the signal diminishes as the separation of the source from the caustic increases. Furthermore, these signals do not exhibit characteristic features such as the spikes produced by caustic crossings. Due to the combination of these weak and featureless characteristics, the planetary signals generated via the non-caustic channel are difficult to be noticed. If such signals are missed despite the fact that they meet the criterion of detection, the statistical studies based on the incomplete planet sample would lead to erroneous results on the demographics of planets. In order to prevent this, the Korea Microlensing Telescope Network (KMTNet: Kim et al. 2016) group has regularly conducted systematic inspection of the data collected by the survey experiments in search of weak planetary signals and has published detected planets in a series of papers (Zang et al. 2021a,b, 2022, 2023; Hwang et al. 2022; Wang et al. 2022; Gould et al. 2022b; Jung et al. 2022, 2023; Han et al. 2022a,b,c,d,e, 2023a; Shin et al. 2023).
In this work, we present the analyses of the two microlensing events KMT-2022-BLG-0475 and KMT-2022-BLG-1480, for which weak short-term anomalies were found from the systematic investigation of the data collected from the high-cadence microlensing surveys conducted in the 2022 season. We investigate the nature of the anomalies by carrying out detailed analyses of the light curves.
The organization of the paper for the presentation of the analyses and results is as follows. In Sect. 2, we describe the observations and data used in the analyses. In Sect. 3, we begin by explaining the parameters used in modeling the lensing light curves, and we then detail the analyses conducted for the individual events in the following subsections: Sect. 3.1 for KMT-2022-BLG-0475 and Sect. 3.2 for KMT-2022-BLG-1480. In Sect. 4, we explain the procedure of constraining the source stars and estimating the angular Einstein radii of the events. In Sect. 5, we explain the procedure of the Bayesian analyses conducted to determine the physical lens parameters and present the estimated parameters. We summarize results and conclude in Sect. 6.
## 2 Observations and data
We inspected the microlensing data of the KMTNet survey collected from the observations conducted in the 2022 season. The total number of KMTNet lensing events detected in the season is 2803. For the individual events, we first fitted light curves with a single-lens single-source (1L1S) model and then visually inspected residuals from the model. From this inspection, we found that the lensing events KMT-2022-BLG-0475 and KMT-2022-BLG-1480 exhibited weak short-term anomalies. We then cross-checked whether there were additional data from the surveys conducted by other microlensing observation groups. We found that both events were additionally observed by the Microlensing Observations in Astrophysics (MOA: Bond et al., 2011) group, who referred to the events as MOA-2022-BLG-185 and MOA-2022-BLG-383, respectively. For KMT-2022-BLG-1480, there were extra data acquired from the survey observations conducted by the Microlensing Astronomy Probe (MAP) collaboration during the period from 2021 August to 2022 September, whose primary purpose was to verify short-term planetary signals found by the KMTNet survey. In the analyses of the events, we used the combined data from the three survey experiments.1
Footnote 1: The Optical Gravitational Lensing Experiment (OGLE: Udalski et al., 1994) is another major microlensing survey, although the two events analyzed in this work were not detected by the survey because the OGLE telescope was not operational in the first half of the 2022 season. Besides these surveys dedicated to the microlensing program, lensing events are detected from other surveys such as the Zwicky Transient Facility (ZTF) survey (Medford et al., 2023) and the Asteroid Terrestrial-impact Last Alert System (ATLAS) survey (Tonry et al., 2018), or observed using space-based instruments such as the Gaia survey (Kruszynska et al., 2022; Luberto et al., 2022) and the Hubble Space Telescope (Sahu et al., 2022).
The observations of the events were carried out using the telescopes that are operated by the individual survey groups. The three identical telescopes used by the KMTNet group have a 1.6 m aperture equipped with a camera yielding 4 deg\({}^{2}\) field of view, and they are distributed in the three continents of the Southern Hemisphere for the continuous coverage of lensing events. The sites of the individual KMTNet telescopes are the Cerro Tololo Interamerican Observatory in Chile (KMTC), the South African Astronomical Observatory in South Africa (KMTS), and the Siding Spring Observatory in Australia (KMTA). The MOA group utilizes the 1.3 m telescope at the Mt. John Observatory in New Zealand, and the camera mounted on the telescope has a 1.2 deg\({}^{2}\) field of view. The MAP collaboration uses the 3.6 m Canada-France-Hawaii Telescope (CFHT) in Hawaii.
Observations by the KMTNet, MOA, MAP groups were done mainly in the \(I\), customized MOA-\(R\), and SDSS-\(i\) bands, respectively. A fraction of images taken by the KMTNet and MOA surveys were acquired in the \(V\) band for the measurement of the source colors of the events. Reduction of data and photometry of source stars were done using the pipelines of the individual survey groups. For the data used in the analyses, we readjusted the error bars estimated from the automated pipelines so that the error bars are consistent with the scatter of data and \(\chi^{2}\) per degree of freedom for each data set becomes unity following the method described in Yee et al. (2012).
## 3 Light curve analyses
The analyses of the lensing events were carried out by searching for lensing solutions specified by the sets of lensing parameters that best describe the observed light curves. The lensing parameters vary depending on the interpretation of an event. It is known that a short-term anomaly can be produced by two channels, in which the first is a binary-lens single-source (2L1S) channel with a low-mass companion to the lens, and the other is a single-lens binary-source (1L2S) channel with a faint companion to the source (Gaudi, 1998).
The basic lensing parameters used in common for the 2L1S and 1L2S models are \((t_{0},u_{0},t_{\rm E},\rho)\). The first two parameters represent the time of the closest source approach to the lens and the lens-source separation (impact parameter) scaled to the angular Einstein radius \(\theta_{\rm E}\) at \(t_{0}\), respectively. The third parameter denotes the event time scale, that is defined as the time for the source to transit \(\theta_{\rm E}\). The last parameter is the normalized source radius, that is defined as the ratio of the angular source radius \(\theta_{\rm s}\) to \(\theta_{\rm E}\). The normalized source radius is needed in modeling to describe the deformation of a lensing light curve caused by finite-source effects (Bennett & Rhie, 1996).
In addition to the basic parameters, the 2L1S and 1L2S models require additional parameters for the description of the extra lens and source components. The extra parameters for the 2L1S model are \((s,q,\alpha)\), where the first two parameters denote the projected separation scaled to \(\theta_{\rm E}\) and the mass ratio between the lens components \(M_{1}\) and \(M_{2}\), and the last parameter denotes the source trajectory angle defined as the angle between the direction of the lens-source relative proper motion \(\mu\) and the \(M_{1}\)-\(M_{2}\) binary axis. The extra parameters for the 1L2S model include \((t_{0,2},u_{0,2},\rho_{2},q_{F})\), which refer to the closest approach time, impact parameter, the normalized radius of the source companion \(S_{2}\), and the flux ratio between the source companion and primary \((S_{1})\), respectively. See Table 2 of Han et al. (2023b) for the summary of lensing parameters that are required to be included under various interpretations of the lens-system configuration.
For the individual events, we check whether higher-order effects improve the fits by conducting additional modeling. The considered higher-order effects are the microlens-parallax effect (Gould, 1992) and the lens-orbital effects (Albrow et al., 2000), which are caused by the orbital motions of Earth and the lens, respectively. For the consideration of the microlens-parallax effects, we included two extra parameters (\(\pi_{\rm E,\it N},\pi_{\rm E,\it E}\)), which denote the north and east components of the microlens-parallax vector
\[\pi_{\rm E}=\left(\frac{\pi_{\rm rel}}{\theta_{\rm E}}\right)\left(\frac{\boldsymbol{\mu}}{\mu}\right), \tag{1}\]
respectively. Here \(\pi_{\rm rel}=\pi_{\rm L}-\pi_{\rm S}=\rm{au}(1/D_{\rm L}-1/D_{\rm S})\) denotes the relative lens-source parallax, while \(D_{\rm L}\) and \(D_{\rm S}\) denote the
distances to the lens and source, respectively. The lens-orbital effects were incorporated into modeling by including two extra parameters (\(ds/dt,d\alpha/dt\)), which represent the annual change rates of the binary-lens separation and the source trajectory angle, respectively.
We searched for the solutions of the lensing parameters as follows. For the 2L1S modeling, we found the binary-lens parameters \(s\) and \(q\) using a grid approach with multiple seed values of \(\alpha\), and the other parameters were found using a downhill approach based on the MCMC logic. For the local solutions identified from the \(\Delta\chi^{2}\) map on the \(s\)-\(q\) parameter plane, we then refined the individual solutions by allowing all parameters to vary. We adopted the grid approach to search for the binary parameters because it was known that the change of the lensing magnification is discontinuous due to the formation of caustics, and this makes it difficult to find a solution using a downhill approach with initial parameters (\(s,q\)) lying away from the solution. In contrast, the magnification of a 1L2S event smoothly changes with the variation of the lensing parameters, and thus we searched for the 1L2S parameters using a downhill approach with initial values set by considering the magnitude and location of the anomaly features. In the following subsections, we describe the detailed procedure of modeling and present results found from the analyses of the individual events.
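As a rough illustration of this two-stage procedure (and not the survey pipeline itself), the sketch below fixes \((s,q)\) on a grid, seeds several values of \(\alpha\), and refines the remaining parameters with a generic downhill optimizer in place of the MCMC-based downhill approach used in the actual analysis; the `chi2_2l1s` function standing in for the 2L1S light-curve model is a placeholder.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def chi2_2l1s(params, s, q, data):
    """Placeholder for the chi^2 of a 2L1S light-curve model.

    params = (t0, u0, tE, alpha, rho); a real implementation would compute
    finite-source binary-lens magnifications and compare them with the data.
    """
    t0, u0, tE, alpha, rho = params
    model = 1.0 + 0.0 * data["t"]            # dummy magnification curve
    return float(np.sum(((data["flux"] - model) / data["err"]) ** 2))

def grid_search(data, n_seed_alpha=8):
    """Stage 1: grid over (log s, log q) with multiple alpha seeds.

    Stage 2 (refining all parameters, including s and q, around each local
    minimum) would follow the same pattern with s and q left free.
    """
    candidates = []
    for log_s, log_q in itertools.product(np.linspace(-0.3, 0.3, 13),
                                          np.linspace(-5.0, -2.0, 13)):
        s, q = 10.0 ** log_s, 10.0 ** log_q
        for alpha in np.linspace(0.0, 2 * np.pi, n_seed_alpha, endpoint=False):
            x0 = np.array([9699.0, 0.05, 20.0, alpha, 0.003])  # (t0,u0,tE,alpha,rho)
            res = minimize(chi2_2l1s, x0, args=(s, q, data), method="Nelder-Mead")
            candidates.append((res.fun, s, q, res.x))
    return sorted(candidates, key=lambda item: item[0])[:10]   # local solutions
```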
### Kmt-2022-Blg-0475
The source of the lensing event KMT-2022-BLG-0475 lies at the equatorial coordinates \(\rm{(RA,DEC)}_{J2000}=\) (18:05:20.56, -27:02:15.61), which correspond to the Galactic coordinates \((l,b)=(3.835^{\circ},-2.804^{\circ})\). The KMTNet group first discovered the event on 2022 April 19, which corresponds to the abridged Heliocentric Julian date HJD\({}^{\prime}\) = HJD - 2450000 = 9688, when the source was brighter than the baseline magnitude \(I_{\rm{base}}=18.78\) by \(\Delta I\sim 0.6\) mag. Five days after the KMTNet discovery, the event was independently found by the MOA group, who designated the event as MOA-2022-BLG-185. Hereafter we use the KMTNet event notation following the convention of using the event ID reference of the first discovery survey. The event was in the overlapping region of the two KMTNet prime fields BLG03 and BLG43, toward which observations were conducted with a 0.5 hr cadence for each field and \(\sim 0.25\) hr in combination. The MOA observations were done with a similar cadence.
Figure 1 shows the light curve of KMT-2022-BLG-0475 constructed from the combination of the KMTNet and MOA data. The anomaly occurred at around \(t_{\rm anom}=9698.4\), which was \(\sim 0.85\) day before the time of the peak. The zoom-in view of the region around the anomaly is shown in the upper panel of Figure 1. The anomaly lasted for about 0.5 day, and the beginning part was covered by the MOA data while the second half of the anomaly was covered by the KMTS data. There is a gap between the MOA and KMTS data during \(9698.30\leq\rm{HJD}^{\prime}\leq 9698.42\), and this gap corresponds to the night time at the KMTA site, which was clouded out except for the very beginning of the evening.
Figure 2 shows the best-fit 2L1S and 1L2S models in the region around the anomaly. From the 2L1S modeling, we identified a pair of 2L1S solutions resulting from the close-wide degeneracy. In Table 1, we present the lensing parameters of the two 2L1S and the 1L2S solutions together with the \(\chi^{2}\) values of the fits and degrees of freedom (dof). It was found that the severity of the degeneracy between the close and wide 2L1S solutions is moderate, with the close solution being preferred over the wide solution by \(\Delta\chi^{2}=8.4\). For the best-fit solution, that is, the close 2L1S solution, we also list the flux values of the source \(f_{s}\) and blend \(f_{b}\), where the flux values are approximately scaled by the relation \(I=18-2.5\log f\).
We find that the anomaly in the lensing light curve of KMT-2022-BLG-0475 is best explained by a planetary 2L1S model. The planet parameters are \((s,q)_{\rm{close}}\sim(0.94,1.76\times 10^{-4})\) for
Figure 1: Light curve of KMT-2022-BLG-0475. The lower panel shows the whole view and the upper panel shows the enlarged view of the region around the anomaly. The arrow in the lower panel indicates the approximate time of the anomaly, \(t_{\rm anom}\). The curve drawn over the data points is the model curve of the 1L1S solution. The KMTC data set is used for the reference to align the other data sets.
Figure 2: Zoom-in view around the anomaly in the lensing light curve of KMT-2022-BLG-0475. The lower three panels show the residuals from the finite- and point-source close 2L1S models and 1L2S model.
the close solution and \((s,q)_{\rm wide}\sim(1.14,1.77\times 10^{-4})\) for the wide solution. The estimated planet-to-host mass ratio is an order of magnitude smaller than the ratio between Jupiter and the Sun, \(q\sim 10^{-3}\), indicating that the planet has a mass that is substantially smaller than a typical gas giant. Although the 1L2S model approximately describes the anomaly, it leaves residuals at the 0.03 mag level in the beginning and ending parts of the anomaly, resulting in a poorer fit than the 2L1S model by \(\Delta\chi^{2}=27.3\). It was found that the microlens-parallax parameters could not be measured because of the short time scale, \(t_{\rm E}\sim 16.8\) days, of the event.
In the upper and lower panels of Figure 3, we present the lens-system configurations of the close and wide 2L1S solutions, respectively. In each panel, the inset shows the whole view of the lens system, and the main panel shows the enlarged view around the central caustic. A planetary companion induces two sets of caustics, with the "central" caustic indicating the one lying close to the primary lens, while the other caustic, lying away from the primary, is referred to as the "planetary" caustic. The configuration shows that the anomaly was produced by the passage of the source through the deviation region formed in front of the protruding cusp of the central caustic. We found that finite-source effects were detected despite the fact that the source did not cross the caustic. In order to show the deformation of the anomaly pattern by finite-source effects, we plot the light curve and residual from the point-source model that has the same lensing parameters as those of the finite-source model except for \(\rho\), in Figure 2.
It is known that the planet separations of the pair of degenerate solutions resulting from a close-wide degeneracy follow the relation \(\sqrt{s_{\rm close}\times s_{\rm wide}}=1.0\) (Griest & Safizadeh 1998). For the close and wide solutions of KMT-2022-BLG-0475, this value is \(\sqrt{s_{\rm close}\times s_{\rm wide}}=1.032\), which deviates from unity with a fractional discrepancy \((\sqrt{s_{\rm close}\times s_{\rm wide}}-1.0)/1.0\sim 3.2\%\). We find that the relation between the two planet separations is better described by the Hwang et al. (2022) relation
\[s^{\dagger}=\sqrt{s_{\rm in}\times s_{\rm out}}=\frac{\sqrt{u_{\rm anom}^{2}+4}\pm u_{\rm anom}}{2}, \tag{2}\]
which was introduced to explain the relation between the planet separations \(s_{\rm in}\) and \(s_{\rm out}\) of the two solutions that are subject to the inner-outer degeneracy (Gaudi & Gould 1997). Here \(u_{\rm anom}^{2}=\tau_{\rm anom}^{2}+u_{0}^{2}\), \(\tau_{\rm anom}=(t_{\rm anom}-t_{0})/t_{\rm E}\), \(t_{\rm anom}\) is the time of the anomaly, and the \(\pm\) sign in Eq. (2) is "\(+\)" for a major-image perturbation and "\(-\)" for a minor-image perturbation. The terms "inner" and "outer" refer to the cases in which the source passes the inner and outer sides of the planetary caustic, respectively. In the case of KMT-2022-BLG-0475 (and major-image perturbations in general), the close and wide solutions correspond to the outer and inner solutions, respectively. From the measured planet separations of \(s_{\rm in}=s_{\rm wide}=1.135\) and \(s_{\rm out}=s_{\rm close}=0.940\), we found that
\[s^{\dagger}=(s_{\rm in}\times s_{\rm out})^{1/2}=1.033. \tag{3}\]
From the lensing parameters \((t_{0},t_{\rm anom},t_{\rm E},u_{0})=\)(9699.25, 9698.40, 16.8, 0.035), we found that \(\tau=(t_{\rm anom}-t_{0})/t_{\rm E}=0.050\), \(u_{\rm anom}=0.061\), and
\[s^{\dagger}=\frac{\sqrt{u_{\rm anom}^{2}+4}+u_{\rm anom}}{2}=1.038. \tag{4}\]
Then, the fractional deviation of the \(s^{\dagger}\) values estimated from Eqs. (3) and (4) is \(\Delta s^{\dagger}/s^{\dagger}=0.5\%\), which is 6.4 times smaller
\begin{table}
\begin{tabular}{l|c c|c} \hline \hline Parameter & \multicolumn{2}{c|}{2L1S} & \multicolumn{1}{c}{1L2S} \\ \cline{3-4} & Close & Wide & \\ \hline \(\chi^{2}\)/dof & 7903.4/7918 & 7911.8/7918 & 7930.7/7918 \\ \(t_{0}\) (HJD\({}^{\prime}\)) & 9699.258 \(\pm\) 0.002 & 9699.259 \(\pm\) 0.002 & 9699.276 \(\pm\) 0.003 \\ \(u_{0}\) & 0.035 \(\pm\) 0.001 & 0.035 \(\pm\) 0.001 & 0.035 \(\pm\) 0.001 \\ \(t_{\rm E}\) (days) & 16.84 \(\pm\) 0.13 & 16.77 \(\pm\) 0.13 & 17.05 \(\pm\) 0.13 \\ \(s\) & 0.940 \(\pm\) 0.011 & 1.135 \(\pm\) 0.012 & – \\ \(q\) (10\({}^{-4}\)) & 1.76 \(\pm\) 0.26 & 1.77 \(\pm\) 0.25 & – \\ \(\alpha\) (rad) & 5.692 \(\pm\) 0.004 & 5.691 \(\pm\) 0.005 & – \\ \(\rho\) (10\({}^{-3}\)) & 4.06 \(\pm\) 0.98 & 3.62 \(\pm\) 1.09 & – \\ \(t_{0,2}\) (HJD\({}^{\prime}\)) & – & – & 9698.425 \(\pm\) 0.012 \\ \(u_{0,2}\) (10\({}^{-2}\)) & – & – & \(-\)0.016 \(\pm\) 0.104 \\ \(\rho_{2}\) (10\({}^{-3}\)) & – & – & 3.51 \(\pm\) 0.69 \\ \(q_{F}\) (10\({}^{-2}\)) & – & – & 0.40 \(\pm\) 0.10 \\ \(f_{s}\) & 0.4054 \(\pm\) 0.0003 & & \\ \(f_{b}\) & \(-\)0.0215 \(\pm\) 0.0010 & & \\ \hline \end{tabular}
\end{table}
Table 1: Model parameters of KMT-2022-BLG-0475
Figure 3: Lens-system configurations of the close (upper panel) and wide (lower panel) 2L1S solutions of KMT-2022-BLG-0475. In each panel, the red cuspy figures are caustics and the line with an arrow represents the source trajectory. The whole view of the lens system is shown in the inset, in which the small filled dots indicate the positions of the lens components and the solid circle represents the Einstein ring. The grey curves surrounding the caustic represent equi-magnification contours.
than the 3.2% fractional discrepancy of the \(\sqrt{s_{\rm close}\times s_{\rm wide}}=1\) relation. Although the \(\sqrt{s_{\rm close}\times s_{\rm wide}}=1.0\) relation is approximately valid in the case of KMT-2022-BLG-0475, the deviation from the relation can be substantial especially when the planetary separation is very close to unity, and thus the relation in Eq. (2) helps to identify correct degenerate solutions.
### Kmt-2022-Blg-1480
The lensing event KMT-2022-BLG-1480 occurred on a source lying at \(({\rm RA,DEC})_{\rm J2000}=\) (17:58:54.96, -29:28:23.99), which correspond to \((l,b)=(1.015^{\circ},-2.771^{\circ})\). The event was first found by the KMTNet group on 2022 July 11 (HJD\({}^{\prime}=9771\)), when the source was brighter than the baseline magnitude, \(I_{\rm base}=18.11\), by \(\Delta I\sim 0.65\) mag. The source was in the KMTNet prime field BLG02, toward which observations were conducted with a 0.5 hr cadence. This field overlaps with the BLG42 field over most of its area, but the source was in the offset region that was not covered by the BLG42 field. The event was also observed by the MOA and MAP survey groups, who observed the event with a 0.2 hr cadence and a 0.5-1.0 day cadence, respectively.
In Figure 4, we present the light curve of KMT-2022-BLG-1480. We found that a weak anomaly occurred about 3 days after the peak, centered at \(t_{\rm anom}\sim 9793.2\). The anomaly is characterized by a negative deviation over most of its duration and a slight positive deviation in the beginning part centered at \({\rm HJD}^{\prime}\sim 9792.2\). The anomaly, which lasted about 2.7 days, was covered by multiple data sets from KMTS, KMTA, and CFHT. The sky at the KMTC site was clouded out for two consecutive nights from July 30 to August 1 (\(9791\leq{\rm HJD}^{\prime}\leq 9793\)), and thus the anomaly was not covered by the KMTC data.
In Table 2, we present the best-fit lensing parameters of the 2L1S solution. We did not conduct 1L2S modeling because a negative deviation cannot be explained with a 1L2S interpretation. We found a pair of 2L1S solutions, in which one solution has binary parameters \((s,q)\sim(0.83,4.9\times 10^{-4})\) and the other solution has parameters \((s,q)\sim(1.03,4.7\times 10^{-4})\). Similar to the case of KMT-2022-BLG-0475L, the estimated mass ratio of order \(10^{-4}\) is much smaller than the Jupiter/sun mass ratio. As we discuss below, the similarity between the model curves of the two 2L1S solutions is caused by the inner-outer degeneracy, and thus we refer to the solutions as "inner" and "outer" solutions, respectively. From the comparison of the inner and outer solutions obtained under the assumption of a rectilinear relative lens-source motion, it was found that the outer model yields a substantially better fit than the fit of the inner model by \(\Delta\chi^{2}=63.9\), indicating that the degeneracy was resolved. In Figure 5, we present the model curves of the two solutions in the region around the anomaly. From the comparison of the models, it is found that the fit of the outer solution is better than the inner solution in the region around \({\rm HJD}^{\prime}\sim 9792\), at which the anomaly exhibits slight positive deviations from the 1L1S model.
Figure 6 shows the lens-system configurations of the inner and outer 2L1S solutions. Although the fit is worse, we present the configuration of the inner solution in order to find the origin of the fit difference between the two solutions. The configuration shows that the outer solution results in a resonant caustic, in which the central and planetary caustics merge and form a single caustic, while the central and planetary caustics are detached in the case of the inner solution. According to the interpretations of both solutions, the source passed the back-end side of the central caustic without caustic crossings. The configuration of the outer solution results in strong cusps lying on the back-end side, and this caustic feature explains the slight positive deviation appearing in the beginning part of the anomaly around \({\rm HJD}^{\prime}\sim 9792.2\). Similar to the case of KMT-2022-BLG-0475, finite-source effects were detected although the source did not cross the caustic. We plot the point-source model in Figure 5 for the comparison with the finite-source model.
We find that the relation in Eq. (2) is also applicable to the two local solutions of KMT-2022-BLG-1480. With \((s_{\rm in},s_{\rm out})\sim\)
(\(0.82,1.03\)), the fractional deviation of the value \(\sqrt{s_{\rm in}\times s_{\rm out}}\) from unity is \((\sqrt{s_{\rm in}\times s_{\rm out}}-1.0)/1.0\sim 8\%\). On the other hand, the fractional difference between \(s^{\dagger}=\sqrt{s_{\rm in}\times s_{\rm out}}=0.919\) and \(s^{\dagger}=[(u_{\rm anom}^{2}+4)^{1/2}-u_{\rm anom}]/2=0.934\) is \(\Delta s^{\dagger}/s^{\dagger}=1.6\%\), which is 5 times smaller than that of the \(\sqrt{s_{\rm close}\times s_{\rm wide}}=1\) relation. This also indicates that the two local solutions result from the inner-outer degeneracy rather than the close-wide degeneracy. We also checked the Hwang et al. (2021) relation between the planet-to-host mass ratio and the lensing parameters for the "dip-type" anomalies,
\[q=\left(\frac{\Delta t_{\rm dip}}{4t_{\rm E}}\right)^{2}\frac{s}{|u_{0}|}|\sin\alpha|^{3}, \tag{5}\]
where \(\Delta t_{\rm dip}\sim 1.9\) day is the duration of the dip in the anomaly. With the lensing parameters \((t_{\rm E},u_{0},s,\alpha)\simeq(26,0.069,1.03,0.52~{\rm rad})\), we found that the mass ratio analytically estimated from Eq. (5) is \(q\sim 5.9\times 10^{-4}\), which is close to the value \(4.6\times 10^{-4}\) found from the modeling.
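As a quick numeric check of Eq. (5), assuming the parameter values quoted above and the trajectory angle of the outer solution in Table 2:

```python
import math

# Parameter values quoted in the text / Table 2 (outer standard solution)
dt_dip = 1.9          # duration of the dip [days]
tE     = 26.0         # event time scale [days]
u0     = 0.069
s      = 1.03
alpha  = 0.515        # source trajectory angle [rad], ~29.5 deg

q = (dt_dip / (4.0 * tE)) ** 2 * (s / abs(u0)) * abs(math.sin(alpha)) ** 3
print(f"q ~ {q:.2e}")   # ~ 5.9e-4, close to the value found from the modeling
```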
We check whether the microlens-parallax vector \(\pi_{\rm E}=(\pi_{\rm E,N},\pi_{\rm E,E})\) can be measured by conducting extra modeling considering higher-order effects. We find that the inclusion of the higher-order effects improves the fit by \(\Delta\chi^{2}=6.9\) with respect to the model obtained under the rectilinear lens-source motion (standard model). In Table 2, we list the lensing parameters of the pair of higher-order models with \(u_{0}>0\) and \(u_{0}<0\), which result from the mirror symmetry of the source trajectory with respect to the binary-lens axis (Skowron et al., 2011). The \(\Delta\chi^{2}\) maps of the models on the \((\pi_{\rm E,E},\pi_{\rm E,N})\) parameter plane obtained from the higher-order modeling are shown in Figure 7. It is found that the maps of the solutions result in a similar pattern of a classical 1-dimensional parallax ellipse, in which the east component \(\pi_{\rm E,E}\) is well constrained and the north component \(\pi_{\rm E,N}\) has a fairly big uncertainty. Gould et al. (1994) pointed out that the constraints of the 1-dimensional parallax on the physical lens parameters are significant, and Han et al. (2016) indicated that the parallax constraint should be incorporated in the Bayesian analysis to estimate the physical lens parameters. We describe the detailed procedure of imposing the parallax constraint in the second paragraph of Sect. 5. While the parallax parameters \((\pi_{\rm E,N},\pi_{\rm E,E})\) are constrained from the overall pattern of the light curve, the orbital parameters \((ds/dt,d\alpha/dt)\) are constrained from the anomaly induced by the lens companion. For KMT-
\begin{table}
\begin{tabular}{l|c|c c c} \hline \hline \multicolumn{1}{c|}{ Parameter} & \multicolumn{1}{c|}{Inner} & \multicolumn{3}{c}{Outer} \\ & Standard & Standard & Higher-order (\(u_{0}>0\)) & Higher-order (\(u_{0}<0\)) \\ \hline \(\chi^{2}/\)dof & 5724.8/5678 & 5660.9/5678 & 5654.0/5674 & 5655.3/5674 \\ \(t_{0}\) (HJD\({}^{\prime}\)) & 9790.133 \(\pm\) 0.004 & 9790.132 \(\pm\) 0.004 & 9790.134 \(\pm\) 0.004 & 9790.138 \(\pm\) 0.004 \\ \(u_{0}\) & \(0.067\pm 0.001\) & \(0.069\pm 0.001\) & \(0.069\pm 0.001\) & \(-0.069\pm 0.001\) \\ \(t_{\rm E}\) (days) & \(26.37\pm 0.15\) & \(26.09\pm 0.15\) & \(26.18\pm 0.16\) & \(26.08\pm 0.16\) \\ \(s\) & \(0.826\pm 0.004\) & \(1.030\pm 0.018\) & \(1.017\pm 0.014\) & \(1.011\pm 0.015\) \\ \(q\) (\(10^{-4}\)) & \(4.87\pm 0.36\) & \(4.68\pm 0.78\) & \(4.30\pm 0.69\) & \(4.18\pm 0.70\) \\ \(\alpha\) (rad) & \(0.516\pm 0.004\) & \(0.515\pm 0.004\) & \(0.525\pm 0.006\) & \(-0.532\pm 0.009\) \\ \(\rho\) (\(10^{-3}\)) & \(<6\) & \(14.18\pm 4.02\) & \(14.68\pm 2.51\) & \(14.83\pm 2.46\) \\ \(\pi_{\rm E,N}\) & – & – & \(0.42\pm 0.29\) & \(0.27\pm 0.31\) \\ \(\pi_{\rm E,E}\) & – & – & \(-0.02\pm 0.05\) & \(0.04\pm 0.05\) \\ \(ds/dt\) (yr\({}^{-1}\)) & – & – & \(0.60\pm 0.40\) & \(1.14\pm 0.44\) \\ \(d\alpha/dt\) (yr\({}^{-1}\)) & – & – & \(-0.57\pm 0.49\) & \(1.59\pm 0.73\) \\ \(f_{s}\) & & \(0.9222\pm 0.0009\) & & \\ \(f_{b}\) & & \(0.0921\pm 0.0016\) & & \\ \hline \end{tabular}
\end{table}
Table 2: Model parameters of KMT-2022-BLG-1480
Figure 6: Lens-system configurations for the inner and outer 2L1S solutions of KMT-2022-BLG-1480. Notations are same as those in Fig. 3.
2022-BLG-1480, the orbital parameters are poorly constrained because the duration of the planet-induced anomaly is very short.
## 4 Source stars and angular Einstein radii
In this section, we constrain the source stars of the events to estimate the angular Einstein radii. Despite the non-caustic-crossing nature of the planetary signals, finite-source effects were detected for both events, and thus the normalized source radii were measured. With the measured value of \(\rho\), we estimated the angular Einstein radius using the relation
\[\theta_{\rm E}=\frac{\theta_{\rm s}}{\rho}, \tag{6}\]
where the angular radius of the source was deduced from the reddening- and extinction-corrected (de-reddened) color and magnitude.
The left and right panels of Figure 8 show the source locations in the instrumental color-magnitude diagrams (CMDs) of stars lying around the source stars of KMT-2022-BLG-0475 and KMT-2022-BLG-1480, respectively. For each event, the instrumental source color and magnitude, \((V-I,I)_{\rm S}\), were determined by estimating the flux values of the source \(f_{\rm s}\) and blend \(f_{b}\) from the linear fit to the relation \(F_{\rm obs}=A(t)f_{\rm s}+f_{b}\), where the lensing magnification is obtained from the model. We are able to constrain the blend for KMT-2022-BLG-1480 and mark its location on the CMD, but the blend flux of KMT-2022-BLG-0475 resulted in a slightly negative value, making it difficult to constrain the blend position. By applying the method of Yoo et al. (2004), we then estimated the de-reddened source color and magnitude, \((V-I,I)_{\rm 0,S}\), using the centroid of the red giant clump (RGC), whose de-reddened color and magnitude \((V-I,I)_{\rm 0,RGC}\) are known (Bensby et al., 2013; Nataf et al., 2013), as a reference, that is,
\[(V-I,I)_{\rm 0,S}=(V-I,I)_{\rm 0,RGC}+[(V-I,I)_{\rm S}-(V-I,I)_{\rm RGC}]. \tag{7}\]
Here \((V-I,I)_{\rm RGC}\) denotes the instrumental color and magnitude of the RGC centroid, and thus the last term in the bracket represents the offset of the source from the RGC centroid in the CMD.
In Table 3, we summarize the values of \((V-I,I)_{\rm S}\), \((V-I,I)_{\rm RGC}\), \((V-I,I)_{\rm 0,RGC}\), and \((V-I,I)_{\rm 0,S}\) for the individual events. From the estimated de-reddened colors and magnitudes, it was found that the source star of KMT-2022-BLG-0475 is an early K-type turnoff star, and that of KMT-2022-BLG-1480 is a late G-type subgiant. We estimated the angular source radius by first converting the measured \(V-I\) color into \(V-K\) color using the Bessell & Brett (1988) color-color relation, and then deducing \(\theta_{\rm s}\) from the Kervella et al. (2004) relation between \((V-K,V)\) and \(\theta_{\rm s}\). With the measured source radius, we then estimated the angular Einstein radius using the relation in Eq. (6) and the relative lens-source proper motion using the relation \(\mu=\theta_{\rm E}/t_{\rm E}\). The estimated \(\theta_{\rm E}\) and \(\mu\) values of the individual events are listed in Table 3. We note that the uncertainties of the source colors and magnitudes presented in Table 3 are the values estimated from the model fitting, and those of \(\theta_{\rm s}\) and \(\theta_{\rm E}\) are estimated by adding an additional 7% error to consider the uncertain de-reddened RGC color of Bensby et al. (2013) and the uncertain position of the RGC centroid (Gould, 2014).
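The conversion from \(\rho\) to \(\theta_{\rm E}\) and \(\mu\) is a one-line computation; as a sketch, the snippet below reproduces the KMT-2022-BLG-0475 values in Table 3 from \(\theta_{\rm s}=1.29~\mu\)as, \(\rho=4.06\times 10^{-3}\) (close 2L1S solution), and \(t_{\rm E}=16.84\) days.

```python
theta_star = 1.29e-3          # angular source radius [mas] (1.29 micro-arcsec)
rho        = 4.06e-3          # normalized source radius (close 2L1S solution)
tE_days    = 16.84            # event time scale [days]

theta_E = theta_star / rho                 # Eq. (6): ~0.32 mas
mu      = theta_E / (tE_days / 365.25)     # relative proper motion: ~6.9 mas/yr
print(f"theta_E = {theta_E:.2f} mas, mu = {mu:.2f} mas/yr")
```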
## 5 Physical lens parameters
We determined the physical parameters of the planetary systems using the lensing observables of the individual events. For KMT-2022-BLG-0475, the measured observables are \(t_{\rm E}\) and \(\theta_{\rm E}\), which are respectively related to the mass and distance to the planetary system by
\[t_{\rm E}=\frac{\theta_{\rm E}}{\mu};\qquad\theta_{\rm E}=(\kappa M\pi_{\rm rel })^{1/2}, \tag{8}\]
where \(\kappa=4G/(c^{2}{\rm au})=8.14\) mas/\(M_{\odot}\). For KMT-2022-BLG-1480, we additionally measured the observable \(\pi_{\rm E}\), with which the physical parameters can be uniquely determined by
\[M=\frac{\theta_{\rm E}}{\kappa\pi_{\rm E}};\qquad D_{\rm L}=\frac{{\rm au}}{\pi _{\rm E}\theta_{\rm E}+\pi_{\rm S}}. \tag{9}\]
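For reference, the numerical value of \(\kappa\) quoted above can be reproduced directly, and Eq. (9) can be wrapped in a small helper; the sketch below is illustrative only and is not applied to the two events, since \(\pi_{\rm E}\) is only partially constrained here.

```python
import math

G, c, au, M_sun = 6.674e-11, 2.998e8, 1.496e11, 1.989e30    # SI units
rad_to_mas = 180.0 / math.pi * 3600.0 * 1000.0

kappa = 4.0 * G * M_sun / (c**2 * au) * rad_to_mas           # ~8.14 mas / M_sun
print(f"kappa = {kappa:.2f} mas/M_sun")

def lens_mass_distance(theta_E_mas, pi_E, pi_S_mas):
    """Eq. (9): lens mass [M_sun] and distance [kpc] when pi_E is well measured."""
    M = theta_E_mas / (kappa * pi_E)
    D_L = 1.0 / (pi_E * theta_E_mas + pi_S_mas)               # 1 mas parallax <-> 1 kpc
    return M, D_L
```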
Figure 8: Source locations of the lensing events KMT-2022-BLG-0475 (left panel) and KMT-2022-BLG-1480 (right panel) with respect to the positions of the centroids of the red giant clump (RGC) in the instrumental color-magnitude diagrams of stars lying around the source stars of the individual events. For KMT-2022-BLG-1480, the position of the blend is additionally marked.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multicolumn{1}{c}{ Quantity} & KMT-2022-BLG-0475 & KMT-2022-BLG-1480 \\ \hline \((V-I,I)_{\rm S}\) & \((2.016\pm 0.005,18.980\pm 0.008)\) & \((2.498\pm 0.005,18.088\pm 0.006)\) \\ \((V-I,I)_{\rm RGC}\) & \((2.108,15.747)\) & \((2.732,16.199)\) \\ \((V-I,I)_{\rm 0,RGC}\) & \((1.060,14.332)\) & \((1.060,14.396)\) \\ \((V-I,I)_{\rm 0,S}\) & \((0.968\pm 0.005,17.566\pm 0.008)\) & \((0.826\pm 0.005,16.284\pm 0.006)\) \\ \(\theta_{\rm s}\) (\(\mu\)as) & \(1.29\pm 0.09\) & \(1.98\pm 0.14\) \\ \(\theta_{\rm E}\) (mas) & \(0.32\pm 0.02\) & \(0.13\pm 0.01\) \\ \(\mu\) (mas/yr) & \(6.92\pm 0.49\) & \(1.88\pm 0.13\) \\ \hline \end{tabular}
\end{table}
Table 3: Source properties, Einstein radii, and lens-source proper motions
We estimated the physical parameters by conducting Bayesian analyses because the observable \(\pi_{\rm E}\) was not measured for KMT-2022-BLG-0475, and the uncertainty of the north component of the parallax vector, \(\pi_{\rm E,N}\), was fairly big although the east component \(\pi_{\rm E,\it E}\) was relatively well constrained.
The Bayesian analysis was done by first generating artificial lensing events from a Monte Carlo simulation, in which a Galactic model was used to assign the locations of the lens and source and their relative proper motion, and a mass-function model was used to assign the lens mass. We adopted the Jung et al. (2021) Galactic model and the Jung et al. (2018) mass function. With the assigned values of \((M,D_{\rm L},D_{\rm S},\mu)\), we computed the lensing observables \((t_{\rm E,\it i},\theta_{\rm E,\it i},\pi_{\rm E,\it i})\) of each simulated event using the relations in Eqs. (1) and (8). Under the assumption that the physical parameters are independently and identically distributed, we then constructed the Bayesian posteriors of \(M\) and \(D_{\rm L}\) by imposing a weight \(w_{\rm i}=\exp(-\chi_{i}^{2}/2)\). Here the \(\chi_{i}^{2}\) value for each event was computed by
\[\chi_{i}^{2}=\left[\frac{t_{{\rm E},i}-t_{\rm E}}{\sigma(t_{\rm E})}\right]^{2}+\left[\frac{\theta_{{\rm E},i}-\theta_{\rm E}}{\sigma(\theta_{\rm E})}\right]^{2}+\sum_{j=1}^{2}\sum_{k=1}^{2}b_{j,k}(\pi_{{\rm E},j,i}-\pi_{{\rm E},j})(\pi_{{\rm E},k,i}-\pi_{{\rm E},k}), \tag{10}\]
where \((t_{\rm E},\theta_{\rm E},\pi_{\rm E})\) represent the observed values of the lensing observables, \([\sigma(t_{\rm E}),\sigma(\theta_{\rm E})]\) denote the measurement uncertainties of \(t_{\rm E}\) and \(\theta_{\rm E}\), respectively, \(b_{j,k}\) denotes the inverse covariance matrix of \(\pi_{\rm E}\), and \((\pi_{{\rm E},1},\pi_{{\rm E},2})_{i}=(\pi_{{\rm E},N},\pi_{{\rm E},E})_{i}\) denote the north and east components of the microlens-parallax vector of each simulated event, respectively. We note that the last term in Eq. (10) was not included for KMT-2022-BLG-0475, for which the microlens-parallax was not measured.
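A schematic version of this weighting step is shown below; the Galactic-model sampler is left out, and the parallax term assumes the inverse covariance matrix `b` of the measured parallax vector is available (Eq. 10).

```python
import numpy as np

def chi2_event(tE_i, thetaE_i, piE_i, tE, sig_tE, thetaE, sig_thetaE,
               piE=None, b=None):
    """chi^2 of one simulated event against the measured observables (Eq. 10).

    piE and b (the measured parallax vector and its inverse covariance
    matrix) are only used when the parallax was measured for the event.
    """
    chi2 = ((tE_i - tE) / sig_tE) ** 2 + ((thetaE_i - thetaE) / sig_thetaE) ** 2
    if piE is not None and b is not None:
        d = np.asarray(piE_i) - np.asarray(piE)     # (pi_E,N, pi_E,E) residual
        chi2 += float(d @ b @ d)
    return chi2

def posterior_weights(simulated, observed):
    """Weights w_i = exp(-chi_i^2 / 2) for events drawn from the Galactic model."""
    return np.array([np.exp(-0.5 * chi2_event(*ev, **observed)) for ev in simulated])
```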
In the case of the event KMT-2022-BLG-1480, for which the blending flux was measured, we additionally imposed the blending constraint that was given by the fact that the lens could not be brighter than the blend. In order to impose the blending constraint, we computed the lens magnitude as
\[I_{\rm L}=M_{I,{\rm L}}+5\log\left(\frac{D_{\rm L}}{\rm pc}\right)-5+A_{I,{\rm L}}, \tag{11}\]
where \(M_{I,{\rm L}}\) represents the absolute \(I\)-band magnitude of a star corresponding to the lens mass, and \(A_{I,{\rm L}}\) represents the \(I\)-band extinction to the lens. For the computation of \(A_{I,{\rm L}}\), we modeled the extinction to the lens as
\[A_{I,{\rm L}}=A_{I,{\rm tot}}\left[1-\exp\left(-\frac{|z|}{h_{z,\rm dust}}\right)\right], \tag{12}\]
where \(A_{I,{\rm tot}}=1.53\) is the total \(I\)-band extinction toward the field, \(h_{z,\rm dust}=100\) pc is the vertical scale height of dust, \(z=D_{\rm L}\sin b+z_{0}\), \(b\) is the Galactic latitude, and \(z_{0}=15\) pc is the vertical position of the Sun above the Galactic plane (Siegert, 2019). It turned out that the blending constraint had little effect on the posteriors because the other constraints, that is, those from \((t_{\rm E},\theta_{\rm E},\pi_{\rm E})\), already predicted that the planet hosts are remote faint stars whose flux contribution to the blended flux is negligible.
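A direct evaluation of Eq. (12) for a bulge lens is shown below; the field parameters are those quoted in the text, while the lens distance is an illustrative value near the Bayesian median for KMT-2022-BLG-1480L.

```python
import math

A_I_tot, h_z_dust, z0 = 1.53, 100.0, 15.0   # mag, pc, pc (values quoted in the text)
b_gal = math.radians(-2.771)                # Galactic latitude of the field
D_L   = 7.8e3                               # illustrative lens distance [pc]

z = D_L * math.sin(b_gal) + z0
A_I_L = A_I_tot * (1.0 - math.exp(-abs(z) / h_z_dust))
print(f"A_I,L ~ {A_I_L:.2f} mag")           # most of the dust column lies in front of the lens
```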
Figures 9 and 10 show the Bayesian posteriors of the lens mass and distances to the lens and source for KMT-2022-BLG-0475 and KMT-2022-BLG-1480, respectively. In Table 4, we summarize the estimated parameters of the host mass, \(M_{\rm h}\), planet mass, \(M_{\rm p}\), distance to the planetary system, \(D_{\rm L}\), projected separation between the planet and host, \(a_{\perp}=s\theta_{\rm E}D_{\rm L}\), and the snow-line distances estimated as \(a_{\rm snow}\sim 2.7\,{\rm au}\,(M/M_{\odot})\) (Kennedy & Kenyon, 2008). Here we estimated the representative values and uncertainties of the individual physical parameters as the median values and the 16% and 84% range of the Bayesian posteriors, respectively.
Figure 10: Bayesian posteriors of KMT-2022-BLG-1480. Notations are same as those in Fig. 9.
Figure 9: Bayesian posteriors of the lens mass and distances to the lens and source for the lens system KMT-2022-BLG-0475L. In each panel, the solid vertical line represents the median value, and the two dotted vertical lines indicate the uncertainty range of the posterior distribution. The curves marked in blue and red represent the contributions by the disk and bulge lens populations, respectively.
We find that the two planetary systems KMT-2022-BLG-0475L and KMT-2022-BLG-1480L are similar to each other in various aspects. According to the estimated physical lens parameters, the masses of KMT-2022-BLG-0475Lb and KMT-2022-BLG-1480Lb are \(\sim 1.7\) and \(\sim 1.8\) times the mass of Uranus in our solar system. The planets are separated in projection from their hosts by \(\sim 2.0\) au and \(\sim 1.2\) au, respectively. The masses of the planet hosts are \(\sim 0.43\)\(M_{\odot}\) and \(\sim 0.18\)\(M_{\odot}\), which correspond to the masses of early and mid-M dwarfs, respectively. Considering that the estimated separations are projected values and the snow-line distances of the planetary systems are \(a_{\rm snow}\sim 1.2\) au for KMT-2022-BLG-0475L and \(\sim 0.5\) au for KMT-2022-BLG-1480L, the planets of both systems are ice giants lying well beyond the snow lines of the systems. The planetary systems lie at distances of \(\sim 6.6\) kpc and \(\sim 7.8\) kpc from the sun. The planetary systems are likely to be in the bulge, with a probability of 70% for KMT-2022-BLG-0475L and 83% for KMT-2022-BLG-1480L.
## 6 Summary and conclusion
We analyzed the light curves of the microlensing events KMT-2022-BLG-0475 and KMT-2022-BLG-1480, for which weak short-term anomalies were found from the systematic investigation of the 2022 season data collected by high-cadence microlensing surveys. We tested various models that could produce the observed anomalies and found that the anomalies were generated by planetary companions to the lenses with a planet-to-host mass ratio \(q\sim 1.8\times 10^{-4}\) for KMT-2022-BLG-0475L and a ratio \(q\sim 4.3\times 10^{-4}\) for KMT-2022-BLG-1480L. From the physical parameters estimated from the Bayesian analyses using the observables of the events, it was found that the planets KMT-2022-BLG-0475Lb and KMT-2022-BLG-1480Lb have masses \(\sim 1.7\) and \(\sim 1.8\) times the mass of Uranus in our solar system, respectively, and they lie well beyond the snow lines of their early and mid-M dwarf hosts, indicating that the planets are ice giants.
Ice giants around M dwarf stars are difficult to detect with other surveys using the transit and radial-velocity (RV) methods, not only because of the long orbital periods of the planets but also because of the faintness of the host stars. The number of detected low-mass planets increases with the observational cadence of microlensing surveys, as shown in the histogram of detected microlensing planets as a function of the planet-to-host mass ratio presented in Fig. 1 of Han et al. (2022). Being able to complement the transit and RV surveys, high-cadence lensing surveys will play an important role in the construction of a more complete planet sample, and thus in better understanding the demographics of extrasolar planets.
The two events are also similar in that they have \(\rho\) measurements (and therefore also \(\theta_{\rm E}\) measurements) despite the fact that the source does not cross any caustics. Zhu et al. (2014) predicted that about half of KMT planets would not have caustic crossings, and Jung et al. (2023) confirmed this for a statistical sample of 58 planetary events detected during the 2018-2019 period. However, Gould (2022a) showed that about 1/3 of non-caustic-crossing events nevertheless yield \(\theta_{\rm E}\) measurements.
Measurements of \(\theta_{\rm E}\) are important, not only because they improve the Bayesian estimates (see Sect. 5), but also because they allow accurate prediction of when high-resolution adaptive-optics (AO) imaging can resolve the lens separately from the source, which will then yield mass measurements of both the host and the planet (Gould, 2022a). For KMT-2022-BLG-0475, with proper motion \(\mu=6.9\) mas/yr, the separation in 2030 (approximate first AO light on 30 m class telescopes), will be \(\Delta\theta\sim 55\) mas, which should be adequate to resolve the lens and source. Resolving the lens of this event would also be important to confirm the planetary interpretation of the event because it is difficult to completely rule out the 1L2S interpretation. By contrast, for KMT-2022-BLG-1480, with \(\mu=1.9\) mas/yr, the separation will be only \(\Delta\theta\sim 15\) mas, which almost certainly means that AO observations should be delayed for many additional years. In particular, if the Bayesian mass and distance estimates are approximately correct, then the expected contrast ratio between the source and lens is \(\Delta K\sim 7\) mag, which will likely require separations of at least 4 FWHM, that is, 55 mas even on the 39 m European Extremely Large Telescope. Hence, the contrast between the two planets presented in this paper underlines the importance of \(\theta_{\rm E}\) measurements.
###### Acknowledgements.
Work by C.H. was supported by the grants of National Research Foundation of Korea (2019R1A2C085965). This research has made use of the KMTNet system operated by the Korea Astronomy and Space Science Institute (KASI) at three host sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia. Data transfer from the host site to KASI was supported by the Korea Research Environment Open NETwork (KREONET). This research was supported by the Korea Astronomy and Space Science Institute under the RRD program (Project No. 2023-1832-03) supervised by the Ministry of Science and ICT. The MOA project is supported by JSPS KAKENHI Grant Number SPSP324253004, JSPS2624720, JSPS2333004, JSPS151600781, JP1606287, and JP171708271.CJ.C.J.C., and S.J.C. acknowledge support from NSF Grant No. AST-2108414. Y.S. acknowledges support from BSF Grant No. 2020704. This research uses data obtained through the Telescope Access Program (TAP), which has been funded by the TAP member institutes. W.Zang, H.Y., S.M., and W.Zhu acknowledge support by the National Science Foundation of China (Grant No. 12133005). W.Zang acknowledges the support from the Harvard-Smithsonian Center for Astrophysics through the CIA Fellowship. C.R. was supported by the Research fellowship of the Alexander von Humboldt Foundation.
|
2305.16748
|
A Decentralized Spike-based Learning Framework for Sequential Capture in
Discrete Perimeter Defense Problem
|
This paper proposes a novel Decentralized Spike-based Learning (DSL)
framework for the discrete Perimeter Defense Problem (d-PDP). A team of
defenders is operating on the perimeter to protect the circular territory from
radially incoming intruders. At first, the d-PDP is formulated as a
spatio-temporal multi-task assignment problem (STMTA). The problem of STMTA is
then converted into a multi-label learning problem to obtain labels of segments
that defenders have to visit in order to protect the perimeter. The DSL
framework uses a Multi-Label Classifier using Synaptic Efficacy Function
spiking neuRON (MLC-SEFRON) network for deterministic multi-label learning.
Each defender contains a single MLC-SEFRON network. Each MLC-SEFRON network is
trained independently using input from its own perspective for decentralized
operations. The input spikes to the MLC-SEFRON network can be directly obtained
from the spatio-temporal information of defenders and intruders without any
extra pre-processing step. The output of MLC-SEFRON contains the labels of
segments that a defender has to visit in order to protect the perimeter. Based
on the multi-label output from the MLC-SEFRON a trajectory is generated for a
defender using a Consensus-Based Bundle Algorithm (CBBA) in order to capture
the intruders. The target multi-label output for training MLC-SEFRON is
obtained from an expert policy. Also, the MLC-SEFRON trained for a defender can
be directly used for obtaining labels of segments assigned to another defender
without any retraining. The performance of MLC-SEFRON has been evaluated for
full observation and partial observation scenarios of the defender. The overall
performance of the DSL framework is then compared with expert policy along with
other existing learning algorithms. The scalability of the DSL has been
evaluated using an increasing number of defenders.
|
Mohammed Thousif, Shridhar Velhal, Suresh Sundaram, Shirin Dora
|
2023-05-26T08:50:49Z
|
http://arxiv.org/abs/2305.16748v1
|
A Decentralized Spike-based Learning Framework for Sequential Capture in Discrete Perimeter Defense Problem
###### Abstract
This paper proposes a novel Decentralized Spike-based Learning (DSL) framework for the Perimeter Defense Problem (PDP). The PDP in this paper is termed discrete-PDP (d-PDP), as the circular territory is discretized into multiple segments. A team of defenders is operating on the perimeter to protect the circular territory from radially incoming intruders. At first, the d-PDP is formulated as a spatio-temporal multi-task assignment problem (STMTA). The problem of STMTA is then converted into a multi-label learning problem to obtain labels of segments that defenders have to visit in order to protect the perimeter. The DSL framework uses a Multi-Label Classifier using Synaptic Efficacy Function spiking neuRON (MLC-SEFRON) network for deterministic multi-label learning. Each defender contains a single MLC-SEFRON network. Each MLC-SEFRON network is trained independently using input from its own perspective for decentralized operations. The input spikes to the MLC-SEFRON network can be directly obtained from the spatio-temporal information of defenders and intruders without any extra pre-processing step. The output of MLC-SEFRON contains the labels of segments that a defender has to visit in order to protect the perimeter. Based on the multi-label output from the MLC-SEFRON a trajectory is generated for a defender using a Consensus-Based Bundle Algorithm (CBBA) in order to capture the intruders. The target multi-label output for training MLC-SEFRON is obtained from an expert policy. Also, the MLC-SEFRON trained for a defender can be directly used for obtaining labels of segments assigned to another defender without any retraining. The performance of MLC-SEFRON has been evaluated for full observation and partial observation scenarios of the defender. The overall performance of the DSL framework is then compared with expert policy along with other existing learning algorithms. The scalability of the DSL has been evaluated using an increasing number of defenders.
Perimeter Defence Problem (PDP), Spiking Neural Network (SNN), Multi-label learning, spatiotemporal task
## I Introduction
Technological advances in sensors and computer vision have enabled organizations to use autonomous Unmanned Aerial Vehicles (UAVs) for various applications such as logistics [1], search and rescue [2], agriculture [3], security and surveillance [4], fire-fighting and defence [5]. The use of UAVs also poses a threat to the privacy and security of vital infrastructures [6]. UAV-based methods have been evaluated for the protection of critical airspace infrastructures in [7]. These methods employ a team of UAVs, termed defenders, that patrol the boundary of critical infrastructures to prevent intruders from infiltrating them. This problem is termed the Perimeter Defense Problem (PDP) [8, 9]. Generally, the defenders are constrained to operate on the perimeter of the territory. The PDP has been addressed for different boundary shapes like linear territory [10], conical territory [11], circular territory [12], and more generally for any convex territory [8, 9]. A detailed review of existing approaches and challenges for the PDP is presented in [13, 14].
Differential geometric approaches have been used to compute feasible regions for defenders. Then, using these feasibility constraints, one-to-one assignments of the defenders to intruders are computed as in [8, 9]. However, these geometric approaches are limited to one-to-one capture and require full observation. In [15], the decentralized policy for communication and decision-making for defenders is obtained from a centralized solution. However, this decentralized solution allows each defender to capture only one intruder. As a result, the solution obtained does not account for sequential capture, where a defender captures multiple intruders in a sequence. In [16], a two-stage adaptive partitioning approach is proposed for protecting circular territories against multiple radially incoming intruders. The territory is divided into partitions in the first stage of the partitioning approach. In the second stage, intruders in each partition are independently assigned to the defenders. However, the defenders lack cooperation as the defender in each partition solves the assignments independently.
In [17] and [18], PDP has been formulated as a spatio-temporal Multi-Task Assignment Problem (STMTA) for convex territories. In this formulation, each intruder's arrival time and location on the perimeter are predicted by the defenders. To neutralize an intruder, a defender has to reach that arrival location at the arrival time of that intruder, which represents a spatio-temporal task. The PDP is converted into an STMTA problem when there are multiple spatio-temporal tasks for a single defender to handle. The solution to the STMTA problem is necessary to compute the assigned intruders for each defender. The trajectory, following which the assigned intruders will be neutralized, is computed for each defender using the assignment solution. In [19], a dynamic programming approach has been presented to solve the STMTA formulation of PDP. However, due to its exponential computational complexity, it is not implementable in real time.
Analytical solutions for STMTA based on the exact time-constrained multiple Traveling Salesman Problem (mTSP) have been developed in [20, 21] for music-playing robots, in [22] for warehouse automation, and in [17, 18] for PDP. These approaches involve solving the linear sum assignment problem using the Hungarian method for the mTSP, which has a computational complexity of order \(\left(N+M-1\right)^{3}\). The aforementioned works based on analytical, geometric, and numerical approaches are not scalable: their complexity grows with an increase in the number of intruders, and hence they are limited to the protection of small territories. Designing decentralized and scalable strategies under partial observability is one of the major challenges for real-time implementations of perimeter defense systems.
Recently, Graph Neural Networks (GNNs) have been used to develop a learning-based decentralized approach for solving PDP as an assignment problem [23]. Each defender is equipped with a vision camera that uses an ANN to extract features of observed intruders and defenders. These features are used as input to a GNN. The GNN communicates with its neighboring defender (i.e., another GNN) to share features dynamically. The maximum matching algorithm [24] is used as an expert policy for training the GNN. The GNN has a fixed number of output neurons representing the closest intruders to a defender (arranged in a specific order). The choice of the number of input and output channels affects performance, and as these numbers are hard-coded, the setting is not generic. Moreover, the GNN approach is only suitable for one-to-one assignments of defenders to intruders. Thus, there is a need to develop a generic, learning-based, scalable solution for one-to-many assignments of defenders to intruders to protect large territories.
In this paper, a discrete PDP (d-PDP) is considered in which a defender needs to capture an intruder by visiting the segment of that intruder at the arrival of that intruder. To the best of the authors' knowledge, this paper is the first to convert the STMTA problem posed by d-PDP into a deterministic multi-label learning problem using a Spiking Neural Network (SNN). Conventional multi-label learning algorithms provide a probabilistic solution, but the multi-task assignment problem posed by d-PDP requires a deterministic solution for better performance. For predicting the multiple labels deterministically, a novel multi-label classifier using an SNN is developed in this paper. The input to the SNN is obtained from the spatio-temporal information about the defenders and intruders. Each defender is trained using a different SNN, enabling the decentralized operation of the DSL framework. The DSL framework is executed for different sensing ranges of defenders. If a defender is capable of sensing the entire perimeter, then it is termed a full observation scenario; otherwise, it is termed a partial observation scenario.
DSL framework is designed to handle the spatio-temporal problem posed by d-PDP. As the d-PDP has inherent spatio-temporal nature, the DSL framework does not require any preprocessing to generate input spikes. The DSL framework is event-triggered which means that a neuron in the framework emits a spike when an object (either a defender or intruder) is detected otherwise, the neuron remains silent. This spiking nature of DSL makes it energy efficient.
For the deterministic multi-label learning, each label prediction is formulated as a binary classification problem using the Synaptic Efficacy Function spiking neuRON (SEFRON) [25] network. Hence the classifier is termed a Multi-Label Classifier using SEFRON (MLC-SEFRON). A 3-layered MLC-SEFRON architecture is used for this purpose, namely an input layer, a SEFRON layer, and an output layer. Each label prediction is evaluated using the response of two neurons in the SEFRON layer. If one neuron spikes earlier than the other, then the output neuron connected to these two neurons in the SEFRON layer generates the label \(1\), and vice versa. If the response of the output neuron is \(1\), then the defender is assigned to the segment associated with that neuron, and vice versa if it is \(0\). Therefore MLC-SEFRON learning predicts the labels in a deterministic manner. Since the number of intruders exceeds the number of defenders considered in this paper, the MLC-SEFRON is designed to predict multiple labels of segments assigned to a single defender. Each defender has one MLC-SEFRON network, and the DSL framework is designed such that all the defenders use the same trained MLC-SEFRON network. After training the MLC-SEFRON network with a single defender, it can be used for obtaining the assignments of other defenders without retraining. Hence the DSL framework is scalable and can be used for any number of defenders without any extra learning computation. Once defenders obtain their assigned labels, they may have conflicts in their trajectories due to the decentralized approach. These conflicts are resolved, and the trajectory of each defender is generated, using the Consensus-Based Bundle Algorithm (CBBA). The dataset and expert assignment labels for training are obtained by solving the d-PDP using an expert policy derived from a simplified form of [18].
The performance of MLC-SEFRON in training is evaluated using two different observation scenarios: the partial observation scenario and the full observation scenario of the defender around the perimeter. In the partial observation scenario, the defender is assumed to have a sensing range covering 150 degrees of the perimeter, whereas in full observation, the defender has a sensing range of 360 degrees, which covers the entire perimeter. The MLC-SEFRON trained for one defender can be used for other defenders without retraining; hence the learning in this paper is decentralized. The performance of the MLC-SEFRON is evaluated using multi-label metrics such as \(Precision\), \(Recall\), and \(F1-Score\) as in [26]. The DSL framework is trained with five defenders using the MLC-SEFRON algorithm and then tested. The results indicate that the DSL framework's success rate is on par with the expert solution. The performance of the DSL framework is then compared with the state-of-the-art adaptive partitioning approach [16] and the expert policy, and the results indicate that the DSL framework performs favorably in this comparison. Furthermore, the DSL performance is evaluated for different-sized teams of defenders without retraining, to illustrate the scalability of the DSL framework. The DSL framework performs on par with the centralized expert policy for the different team sizes, showcasing its generalization performance.
The main contributions of this paper can be summarized as follows:
1. The formulation of the d-PDP into the spiking multi-label learning problem with the help of MLC-SEFRON architecture.
2. Development of deterministic MLC-SEFRON learning algorithm for predicting multiple labels of segments.
3. A distributed and scalable learning-based solution for sequential capture in d-PDP. To the best of the authors' knowledge, this is the first time in the literature that a learning-based solution has been proposed for fewer defenders protecting a territory against multiple intruders by sequential capture.
The rest of the paper is organized as follows: Section II presents the related works on multi-label learning and SNNs. Section III presents the mathematical formulation of d-PDP for a circular perimeter as a multi-label classification problem using an SNN. Section IV presents the SNN architecture and the MLC-SEFRON learning algorithm. Section V provides the performance evaluation results for the MLC-SEFRON learning algorithm using multi-label metrics. This section also presents results on the performance comparison of the DSL framework with the expert policy and ablation studies exhibiting scalability. Finally, Section VI summarises the conclusions from this paper.
## II Related works
This section presents the related works on key techniques used in this DSL-based solution for d-PDP, namely multi-label learning, and SNNs.
### _Multi-label learning_
In a multi-label learning problem, each sample can have associations with more than one class. The goal of the learning algorithm is to predict all the classes that a given sample is associated with. Some existing multi-class classification algorithms have also been used to handle multi-label learning problems. One of the first approaches for multi-label learning was developed for text categorization [27], where multi-label learning is used to categorize the text in a news article into multiple topics such as politics, society, etc. Then in [28], the prediction of each label is performed using a binary classifier; therefore, multiple binary classifiers are designed for learning multiple labels, and this strategy is termed the binary relevance strategy.
Advances in the field of deep learning have led to the usage of deep neural networks for multi-label learning [29, 30, 31]. In [29] and [30], a backpropagation approach is proposed for multi-label learning. In [31], a clustering algorithm has been used for multi-label learning. Multi-label learning using deep neural networks requires a large amount of data for efficient performance. Moreover, the architecture of deep neural networks consists of many hidden layers, which makes it computationally expensive. Also, each label is predicted in a probabilistic manner, which decreases confidence in the prediction. Therefore, in the proposed DSL framework, deterministic multi-label learning using a binary relevance strategy is performed using an SNN with no hidden layers. The relevant literature on SNNs is presented in Section II-B.
### _Spiking neural networks_
Spikes emitted by spiking neurons efficiently embed the spatio-temporal information present in the input given to them [32]. A detailed discussion of various existing learning algorithms for SNNs is presented in [33]. Bohte et al. [34] developed a gradient-based weight update strategy for SNNs, termed SpikeProp. In SpikeProp, linearities are assumed in the membrane potential of a neuron to evaluate the gradient of a spike. This is done as the gradient of spikes cannot be evaluated directly due to their discontinuous nature.
Fig. 1: Decentralized spike-based Learning (DSL) framework for discrete perimeter defense problem
In [35], a supervised learning algorithm for the weights in an SNN is developed by exploiting spatio-temporal features. Spike Time Dependent Plasticity (STDP) [36] is another learning technique developed for SNNs that is motivated by biological mechanisms. In rank-order learning for SNNs, described in [37] and [38], the weight updates are evaluated based on the spiking rate of the neuron. To further increase performance, evolving layers are used in SNN architectures through neuron addition or deletion strategies, as in [39, 40].
In [41], an SNN architecture with time-varying weights is proposed for classification, termed the Synaptic Efficacy Function-based leaky-integrate-and-fire neuRON (SEFRON). A novel learning algorithm for weight updates is proposed in SEFRON, which distributes the evaluated weight updates over time. The SEFRON results clearly highlight that the proposed synapse model, with its time-varying nature, executes binary classification tasks with high computational power in comparison to other SNN learning algorithms in practice. In [25], SEFRON is extended to multi-class classification problems. The results in [41] and [25] point out that modeling weights with time-varying functions in an SNN increases the classification performance of the network. The aforementioned SNN learning algorithms require additional encoding mechanisms to convert real-valued data into spikes, whereas the STMTA problem posed by d-PDP has an inherent spatio-temporal nature and can therefore be solved directly using SNNs without any extra encoding mechanism. A Multi-Label Classifier using SEFRON (MLC-SEFRON) network is proposed in this paper to handle multi-task assignment problems. This is the first time in the literature that an SNN is used to solve a multi-task assignment problem.
## III Mathematical formulation of d-PDP as Decentralized Spiking multi-label learning problem
The mathematical description of discrete PDP (d-PDP) as a spiking multi-label learning problem is presented initially in this section. Figure 1 shows the proposed Decentralized Spike-based Learning (DSL) framework. The spatio-temporal location information of defenders and intruders is represented as spikes and given as input to the MLC-SEFRON network. The solution of STMTA is used as an expert for the training of the network to learn assignments as the labels. The proposed DSL framework employs the decentralized approach, in which a defender learns its assignments based on the spatio-temporal inputs from its own perspective. The labels of assigned locations to a defender are learned using the MLC-SEFRON learning algorithm. The predicted labels are then post-processed to compute the conflict-free assignments for all the defenders. This section also describes the generation of defender trajectory using the predictions of the MLC-SEFRON network.
### _Discrete Perimeter Defense Problem (d-PDP)_
This paper considers the multi-player perimeter defense problem in which a team of defenders protects a given territory from invading intruders. It is assumed that intruders are moving radially inwards with a constant velocity, as proposed in [16, 19]. Defenders are restricted to operate only on the perimeter [8, 9]. Each defender's trajectory is computed cooperatively such that it can capture the intruders assigned to it in a cost-effective sequence.
Without loss of generality, we assume a circular territory \(\Omega\) of unit radius centered at origin \(O\) whose perimeter \(\partial\Omega\) is given as
\[\Omega =\left\{\boldsymbol{p}\in\Re^{2}\ \Big{|}\ \|\boldsymbol{p}\|_{2} \leq 1\right\}, \tag{1}\] \[\partial\Omega =\left\{\boldsymbol{p}\in\Re^{2}\ \Big{|}\ \|\boldsymbol{p}\|_{2} =1\right\} \tag{2}\]
where \(\boldsymbol{p}\) is a point in the 2-dimensional plane. This paper considers the discrete PDP in which the perimeter is divided into \(N_{s}\) segments \((s_{1},...,s_{N_{s}})\) of equal arc length, as shown in Figure 2.
Let us consider \(N\) defenders \(\{D_{1},\cdots,D_{i},\cdots,D_{N}\}\) operating on the perimeter. The initial position of the defender \(D_{i}\) on the perimeter is given by \(p_{i}^{D}=(r,S_{i}^{D})\), where \(r\) represents the distance from the origin (\(O\)) and is always equal to \(1\) as defenders operate only on the perimeter, and \(S_{i}^{D}\) represents the segment of defender \(D_{i}\). The motions of the defenders are limited by a maximum angular velocity \(v^{D}\).
Consider \(M>N\) intruders \(I_{1},\cdots,I_{j},\cdots,I_{M}\) radially moving towards the perimeter with speed \(v^{I}\). The position \(p_{j}^{I}\) of the \(j^{th}\) intruder is represented by \(p_{j}^{I}=(r_{j}^{I},S_{j}^{I})\), where \(r_{j}^{I}\) represents the distance of the intruder \(I_{j}\) from the center of the territory and \(S_{j}^{I}\) represents the segment of intruder \(I_{j}\). Note that the positions of the intruders are initialized outside the territory (i.e., \(r_{j}^{I}>1\)). The kinematic equation of intruder \(I_{j}\) is given as \(\dot{r}_{j}^{I}=-v^{I}\).
If an intruder crosses the perimeter from any of the segments, then it is considered that the defenders have failed to defend the perimeter. If a defender is present in a given segment when an intruder enters that segment then the defender is considered to have captured the intruder. If \(S_{i}^{D}\) and \(S_{j}^{I}\) represent the segment locations of defender \(D_{i}\) and intruder \(I_{j}\) respectively, then capture of the latter is written as
\[Q(I_{j})=\left\{D_{i}\ \Big{|}\ r_{j}^{I}(t)=1\ \&\ S_{j}^{I}(t)=S_{i}^{D}(t)\right\} \tag{3}\]
Based on the heading and speed of the intruder \(I_{j}\), it is possible to estimate its arrival time \(t_{j}^{a}\) and location \(\boldsymbol{p}_{j}^{T}\) on the perimeter. To defend the perimeter, a defender has to perform the spatio-temporal task of intercepting the intruder \(I_{j}\) at position \(\boldsymbol{p}_{j}^{T}\) and at time \(t_{j}^{a}\). Each defender needs to capture multiple intruders as there are more intruders than defenders. Thus, the team of defenders needs to solve a spatio-temporal multi-task assignment (STMTA) problem [17, 18].
### _Spike-based representation for d-PDP_
The technique developed in this section utilizes the capabilities of SNNs to represent both the spatial and the temporal information relevant for d-PDP. Consider that the perimeter has been divided into \(n\) segments, each of which makes an angle of \((360/n)^{\circ}\) at the center. These segments are denoted as \(\{s_{1},s_{2},\cdots,s_{i},\cdots,s_{n}\}\). Each defender has observability of \(m\) segments, where \(m\leq n\).
A defender in segment \(s_{i}\) can observe \(m\) segments, i.e., \(\left\{s_{i-\left\lceil\frac{m-1}{2}\right\rceil},\cdots,s_{i-1},s_{i},s_{i+1},\cdots,s_{i+\left\lfloor\frac{m-1}{2}\right\rfloor}\right\}\). (All the negative and zero indices of segments are converted to positive indices in a cyclic fashion.) For the decentralized setting, the zones are defined from the perspective of each defender to represent the information. The zones for a defender present in segment \(s_{i}\) (where \(s_{i}\Leftrightarrow z_{\lceil\frac{m+1}{2}\rceil}\)) are \(\left\{z_{m},z_{m-1},\cdots,z_{\lceil\frac{m+1}{2}\rceil},\cdots,z_{2},z_{1}\right\}\).
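A minimal sketch of this cyclic zone indexing is given below; the function and variable names are illustrative.

```python
def observed_segments(defender_segment, n_segments, m_observed):
    """Indices (1-based) of the m segments sensed by a defender in `defender_segment`,
    with the defender in the central zone and indices wrapping cyclically."""
    behind = (m_observed - 1) // 2 + (m_observed - 1) % 2   # ceil((m-1)/2) segments behind
    ahead = (m_observed - 1) // 2                           # floor((m-1)/2) segments ahead
    raw = range(defender_segment - behind, defender_segment + ahead + 1)
    return [((s - 1) % n_segments) + 1 for s in raw]

# Example: a defender in segment s_3 of a 36-segment perimeter sensing 15 segments
# observes s_32, s_33, ..., s_36, s_1, ..., s_10, matching the scenario described below.
```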
Figure 3a shows a scenario with 5 defenders and 10 intruders for 36 segments and an observation range of 15 zones. The perimeter has been divided into 36 segments, each of which makes an angle of \(10^{\circ}\) at the center. These 36 segments are denoted as \(\left\{s_{1},s_{2},\cdots,s_{36}\right\}\). The current positions of intruders and defenders are shown using \(\times\) and \(\bullet\), respectively. The radial trajectories of the intruders towards specific segments on the perimeter are shown using red dashed lines.
For the partial observation scenario considered in this paper, it is assumed that defenders have a limited sensing range of \(150^{\circ}\), which constitutes 15 segments. For example, the yellow region in the figure represents the sensing range of \(D_{1}\), which implies that it can perceive the spatio-temporal information pertaining to both intruders and defenders. The segments \(\left\{s_{10},s_{9},\ldots,s_{33},s_{32}\right\}\) are referred to as the zones of that defender (\(D_{1}\)), represented by \(\left(z_{15},z_{14},\ldots,z_{2},z_{1}\right)\) as shown in Figures 3b and 3c. Similar to segments, the zones are also numbered in the anticlockwise direction from \(z_{1}\) to \(z_{15}\). The defender is always assumed to be present in \(z_{8}\), which is the central zone of its sensed region. The zones contain the decentralized information sensed by each defender; in contrast, segments contain global information. In other words, segments of the territory are designed beforehand and do not depend on the positions of defenders.
The spatio-temporal information pertaining to intruders and defenders present in the zones of a defender \(D_{1}\) is represented using spike patterns, shown in Figures 3b and 3c. These spikes are event-triggered based on the distance of the object (either intruder or defender) from the perimeter. The time of a given spike in Figure 3b is directly proportional to the arrival time of the intruder in the corresponding zone. For instance, in Figure 3b, the time of arrival of intruder \(I_{1}\) in zone \(z_{6}\) at the perimeter is shown using a spike at \(2.15\) s. Similarly, a defender in zone \(z_{13}\) (i.e., segment \(s_{8}\)) is represented using a spike at \(0\) s. The position of the defender whose perspective is being considered is always at the \(8^{th}\) zone (middle zone) in the 15-segment (partial) observation scenario. The time of the spikes that represent the positions of defenders is always set to \(0\) s as defenders are constrained to move along the perimeter. The generated spike patterns are directly presented as input to the MLC-SEFRON via the input layer without any extra pre-processing step.
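The event-triggered encoding just described can be sketched as follows, assuming a unit perimeter radius so that an intruder at radius \(r\) reaches the boundary after \((r-1)/v^{I}\); the zone ordering, the use of NaN for "no spike", and the function name are choices made for illustration.

```python
import numpy as np

def encode_perspective(defender_in_zone, intruder_radius_in_zone, v_intruder, m=15):
    """Build the 2m spike times seen from one defender's perspective.
    Zones without an object emit no spike (encoded as NaN)."""
    defender_spikes = np.full(m, np.nan)
    intruder_spikes = np.full(m, np.nan)
    for z in range(m):
        if defender_in_zone[z]:
            defender_spikes[z] = 0.0                            # defenders sit on the perimeter
        r = intruder_radius_in_zone[z]
        if r is not None:
            intruder_spikes[z] = (r - 1.0) / v_intruder         # arrival time at the perimeter
    return np.concatenate([defender_spikes, intruder_spikes])
```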
### _Multi-task assignments to multi-label learning_
The solution of the spatio-temporal multi-task assignment gives the assignments of defenders to intruders. In the decentralized setting of defender \(D_{i}\), each assigned intruder can be identified by the corresponding zone. Using the assignment solution, one can label the assignment of defenders to the zones. For the labels of defender \(D_{i}\), if it is assigned to a zone \(z_{j}\), then \(z_{j}\) is labeled as \(TRUE\). A defender can have multiple assignments; all the assigned segments are labeled as \(TRUE\). The assignment solution provides the target labels (\(T^{l}\)) for all the observed \(m\) zones (\(T^{l}\in\left\{1,0\right\}^{m}\)). In this way, the assignments of defender \(D_{i}\) are converted into multiple deterministic labels for the zones of \(D_{i}\). One should note that the centralized solution to the STMTA problem provides the assignments of all the defenders. In the DSL framework, each defender's assignments (with input from that defender's perspective) are used to train its individual MLC-SEFRON network. In this way, training is scalable and independent of specific defenders. The spiking network needs to learn these labels; the detailed learning algorithm is discussed in Section IV.
### _Trajectory generation from MLC-SEFRON output_
The suitable trajectory for the team of defenders is generated using zones assigned by the MLC-SEFRON to each defender. Predicting assigned zones for each defender in a decentralized manner can lead to a situation where multiple defenders are assigned to a single zone. Also during SNN prediction, there is a chance of assigning a defender to a zone where no intruder is present. To resolve these two issues, a consensus-based bundle algorithm (CBBA) [42] is used to compute the final task assignments.
Let us consider that \(\hat{l}_{j}\) represents the prediction for the \(j^{th}\) segment's assignment to a defender. To exploit the spatial correlation between neighboring segments, the prediction for the \(j^{th}\) segment is updated using the predictions for the segments in its neighborhood. Further, the final assignment for a segment is set to \(0\) if there is no intruder present in that segment. Based on this, the effective prediction (\(\hat{l}_{j}^{\text{eff}}\)) for a given segment is given by
\[\hat{l}_{j}^{\text{eff}}=\begin{cases}(\hat{l}_{j}+\alpha*\hat{l}_{j+1}+\alpha* \hat{l}_{j-1}),&\text{Intruder present in $s_{j}$}\\ 0,&\text{Intruder absent from $s_{j}$}\end{cases} \tag{4}\]
Figure 2: Discrete Perimeter defense problem. The center of the perimeter is shown with O, and segments on the perimeter are denoted by \(s_{1},s_{2},s_{3},s_{4},s_{5}\), etc. D denotes the defender and \(\psi\) denotes its angular position on the perimeter.
where \(\alpha\in[0,1]\) is the scaling factor that governs the impact of predictions for neighboring segments on a given segment. If \(\alpha=0\), there is no effect of predictions of neighboring segments.
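A sketch of the post-processing of Eq. (4); the cyclic wrapping of the neighbour indices and the default value of \(\alpha\) are assumptions made for illustration.

```python
def effective_labels(raw_labels, intruder_present, alpha=0.5):
    """Effective predictions per zone, following Eq. (4)."""
    n = len(raw_labels)
    eff = []
    for j in range(n):
        if not intruder_present[j]:
            eff.append(0.0)                                   # no intruder: never assign
        else:
            eff.append(raw_labels[j]
                       + alpha * raw_labels[(j + 1) % n]
                       + alpha * raw_labels[(j - 1) % n])     # neighbour smoothing
    return eff
```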
Every defender computes its own trajectory by arranging the intruders present in the assigned zones in ascending order of their arrival times. Every defender computes the trajectory based on their individual effective labels (\(\hat{l}_{j}^{\text{eff}}\)). It is possible that multiple defenders are assigned a given segment because the predictions for each defender are generated independently. To determine final assignments, a distance-based cost function is used. Among these defenders, each one (say \(D_{i}\)) bids for the \(s_{j}\) segment with a cost \(l_{ij}\), where \(l_{ij}\) is the effective distance \(D_{i}\) needs to travel from its current location to \(j^{th}\) segment.
\[l_{ij}=\hat{l}_{j}^{\text{eff}}*arc(S_{i}^{\prime D}),s_{j}) \tag{5}\]
where \(S_{i}^{\prime D}\) is the segment location from which defender \(D_{i}\) has to start in order to capture the intruder in the \(s_{j}\) segment. To fix the issue of multiple defender assignments to a single \(s_{j}\) segment, the defender with the minimum cost is considered the winner of the bidding. Based on this bidding and consensus algorithm, the final trajectory for each defender is generated.
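The winner-selection step of this conflict resolution can be sketched as below; the full CBBA also involves bundle building and consensus rounds, so this is only the minimum-cost bidding rule of Eq. (5) with illustrative data structures.

```python
def resolve_segment_conflicts(bids):
    """`bids` maps a segment index to a list of (defender_index, cost l_ij) pairs
    from Eq. (5); the defender with the minimum cost wins each contested segment."""
    winners = {}
    for segment, offers in bids.items():
        winner, _ = min(offers, key=lambda pair: pair[1])
        winners[segment] = winner
    return winners

# Example: two defenders bidding for segment 7; defender 2 travels the shorter effective arc.
# resolve_segment_conflicts({7: [(1, 0.9), (2, 0.4)]})  ->  {7: 2}
```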
## IV MLC-SEFRON Architecture & Learning Algorithm
In this section, a Multi-Label Classifier using SEFRON (MLC-SEFRON), which predicts the labels of multiple zones assigned to a single defender in a deterministic manner, is presented. The MLC-SEFRON architecture is described first, followed by its learning algorithm.
### _Architecture of MLC-SEFRON_
The architecture of MLC-SEFRON for training a defender with a sensing range of \(m\) zones is shown in Figure 4. The input layer of MLC-SEFRON consists of \(2m\) neurons which are used to present the spike patterns illustrated in Figures 3b and 3c (which represent the spatio-temporal information about defenders and intruders). The first \(m\) neurons are used to represent the defender information, whereas the last \(m\) neurons are used for representing intruder information. Let \(x_{i}=\{t_{i}\}\) be the input spike pattern presented through the \(i^{th}\) input neuron, where \(t_{i}\) denotes the spike time of that neuron. The SEFRON layer in the architecture also consists of \(2m\) spiking neurons (\(f_{1},...,f_{2m}\)). The weight of the synapse between the \(i^{th}\) input neuron and the \(2j^{th}\) neuron in the SEFRON layer is denoted as \(w_{i(2j)}(t)\). Each weight \(w(t)\) connecting the input layer and the SEFRON layer is modeled as a time-varying Gaussian function, as described in SEFRON [25]. The output layer consists of \(m\) neurons associated with the \(m\) zones on the perimeter. The response of the \(j^{th}\) output neuron depends on the spike responses of the \(2j-1^{th}\) and \(2j^{th}\) neurons in the SEFRON layer as shown below
\[y_{j}=\begin{cases}1,&\text{If }\hat{t}_{2j-1}<\hat{t}_{2j}\\ 0,&\text{If }\hat{t}_{2j-1}\geq\hat{t}_{2j}\end{cases} \tag{6}\]
where \(\hat{t}_{2j-1}\) and \(\hat{t}_{2j}\) represents the time of first spikes of \(2j-1^{th}\) and \(2j^{th}\) neurons in the SEFRON layer. \(y_{j}\) represents the deterministic label predicted for the \(j^{th}\) zone in the perimeter. If \(y_{j}\) is \(1\) then a defender \(D\) is assigned to \(j^{th}\) zone and vice versa if \(y_{j}\) is \(0\). In a similar fashion,
Figure 4: Architecture of MLC-SEFRON
Figure 3: Spike-based representation of defenders and intruders. Figure 3a shows the defender locations as \(\bullet\) and intruder locations as \(\times\); the partial observation range of the defender present in segment \(s_{3}\) is shown by the shaded yellow region. Figures 3b and 3c show the spike representation of the intruders and defenders present in the shaded yellow region.
labels are evaluated for all \(m\) neurons in the output layer. If \(\hat{C}\) denotes the predicted multi-label output from the MLC-SEFRON network then it is given as
\[\hat{C}=\{\hat{c}_{1},...\hat{c}_{j},...\hat{c}_{m}\},\text{where }\hat{c}_{j}=y_{j} \tag{7}\]
If \(\hat{t}_{j}\) denotes the spike time of the \(j^{th}\) neuron in the SEFRON layer, then it can be evaluated as
\[\hat{t}_{j}=\{t|v_{j}(t)=\theta_{j}\} \tag{8}\]
where \(\theta_{j}\) and \(v_{j}(t)\) denote the potential threshold and potential of \(j^{th}\) neuron in the SEFRON layer respectively. The potential thresholds for all neurons in the SEFRON layer are initialized as described in [25]. The potential \(v_{j}(t)\) is evaluated as
\[v_{j}(t)=\sum_{i=1}^{2m}w_{ij}(t_{i})*\epsilon(t-t_{i}) \tag{9}\]
where \(\epsilon(t-t_{i})\) is the unweighted membrane potential induced at time \(t\) by input spike at \(t_{i}\). It is modeled using the spike response function [34], given as
\[\epsilon(s)=\frac{s}{\tau}\exp(1-\frac{s}{\tau}) \tag{10}\]
where \(\tau\) is the time constant of the neuron.
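Inference in the MLC-SEFRON can be sketched as below, following Eqs. (6) and (8)-(10); the discrete time grid and the representation of the time-varying weight by its value at the input spike time are simplifications made for illustration, and the function names are not from the original work.

```python
import numpy as np

def spike_response(s, tau):
    """Unweighted potential kernel epsilon(s), Eq. (10); zero before the input spike."""
    s = np.asarray(s, dtype=float)
    kernel = (s / tau) * np.exp(1.0 - s / tau)
    return np.where(s > 0.0, kernel, 0.0)

def first_spike_time(input_spike_times, weights_at_spikes, theta, t_grid, tau):
    """First time the membrane potential of Eq. (9) crosses the threshold theta, Eq. (8).
    `weights_at_spikes[i]` stands for w_ij(t_i), the time-varying weight sampled at t_i."""
    v = np.zeros_like(t_grid, dtype=float)
    for t_i, w in zip(input_spike_times, weights_at_spikes):
        if not np.isnan(t_i):                          # silent input neurons contribute nothing
            v += w * spike_response(t_grid - t_i, tau)
    crossings = np.nonzero(v >= theta)[0]
    return t_grid[crossings[0]] if crossings.size else np.inf

def zone_label(t_odd, t_even):
    """Output label of Eq. (6): 1 if the (2j-1)-th SEFRON neuron fires first."""
    return 1 if t_odd < t_even else 0
```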
The actual assignment prediction \(\hat{c}_{j}\) is then compared with expert assignment \(c_{j}\). The expert assignments for each zone are evaluated using an expert solution (see Section V-A). In the next section, an MLC-SEFRON learning algorithm is described.
### _MLC-SEFRON learning algorithm_
MLC-SEFRON uses three strategies for learning, namely _initialization strategy_, _escaped intruder strategy_ and _incorrect assignment strategy_. In the _initialization strategy_ the potential thresholds of the neurons in the SEFRON layer and weights connected to them are initialized as described in SEFRON [25]. In _escaped intruder strategy_ weights are updated such that the defender is assigned to the zone. In _incorrect assignment strategy_ weights are updated such that the defender is not assigned to the zone. Next, these three different strategies are explained in detail.
#### IV-B1 Initialization strategy
Let us consider the initialization of weights connecting to \(2j^{th}\) neuron in the SEFRON layer and its threshold \(\theta_{2j}\). The first input sample from the dataset which has a desired prediction that a defender should not be assigned to \(j^{th}\) zone is used for this purpose. This initialization is done as shown below
\[\begin{split} w_{i(2j)}(t)&=u_{i(2j)}(T_{d})\exp(- \frac{(t-t_{i})^{2}}{2\sigma^{2}})\\ \theta_{2j}&=\sum_{i=1}^{2m}u_{i(2j)}(T_{d})\epsilon (T_{d}-t_{i})\quad\forall j\in\{1,\cdots,m\}\end{split} \tag{11}\]
where \(T_{d}\) is defined as the ideal firing time. \(T_{d}\) is chosen such that a neuron in the SEFRON layer can utilize the information present in the incoming spike patterns to make a decision. If \(T_{d}\) is chosen close to the start of the simulation, then MLC-SEFRON is not able to utilize the information present in the spike patterns. If \(T_{d}\) is chosen close to the end of the simulation (i.e., \(T\)), then MLC-SEFRON takes a longer time to make a decision, which leads to inaccurate assignment predictions for zone \(z_{j}\). \(T_{d}\) is chosen appropriately as mentioned in [25].
In Equation (11), \(u_{i(2j)}(T_{d})\) is defined as the fractional contribution of \(i^{th}\) input neuron for \(2j^{th}\) neuron in the SEFRON layer to spike at \(T_{d}\).
\[u_{i(2j)}(T_{d})=\frac{\delta w(T_{d}-t_{i})}{\sum_{i=1}^{2m}\delta w(T_{d}-t_ {i})} \tag{12}\]
where \(\delta w(.)\) is defined as the STDP weight update [36]. It is computed as
\[\delta w(s)=\begin{cases}A_{+}\exp(-\frac{s}{\tau_{+}}),\text{if }s\geq 0\\ -A_{-}\exp(\frac{s}{\tau_{-}}),\text{if }s<0\end{cases} \tag{13}\]
where \(A_{+},A_{-}\) are the maximum weight changes allowed and \(\tau_{+},\tau_{-}\) are the time constants for STDP. Similarly for initializing the threshold of \(2j-1^{th}\) neuron and weights connected to it, the first input sample which has a desired prediction that a defender should be assigned to \(j^{th}\) zone is used. In this manner, the weights connected to all neurons in the SEFRON layer and their thresholds are initialized. The values of constants such as \(T,\sigma,\tau_{+}\) and \(\tau_{-}\) are set as in [25].
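The initialization of one SEFRON-layer neuron (Eqs. (11)-(13)) can be sketched as below, assuming every input neuron fires exactly once in the chosen initialization sample and before \(T_{d}\); the helper names and the dictionary of STDP constants are illustrative.

```python
import numpy as np

def stdp_kernel(s, A_plus, A_minus, tau_plus, tau_minus):
    """STDP weight change delta_w(s), Eq. (13)."""
    s = np.asarray(s, dtype=float)
    return np.where(s >= 0.0,
                    A_plus * np.exp(-s / tau_plus),
                    -A_minus * np.exp(s / tau_minus))

def initialize_neuron(input_spike_times, T_d, tau, **stdp_constants):
    """Fractional contributions u_i(T_d), Eq. (12), and initial threshold, Eq. (11).
    The time-varying weight is then w_i(t) = u_i(T_d) * exp(-(t - t_i)^2 / (2 sigma^2))."""
    t_i = np.asarray(input_spike_times, dtype=float)
    dw = stdp_kernel(T_d - t_i, **stdp_constants)
    u = dw / dw.sum()                                              # Eq. (12)
    eps = ((T_d - t_i) / tau) * np.exp(1.0 - (T_d - t_i) / tau)    # kernel of Eq. (10) at T_d
    theta = float(np.sum(u * eps))                                 # Eq. (11)
    return u, theta
```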
#### IV-B2 Escaped intruder strategy
This strategy is used when a defender is not assigned to a zone in which an intruder is present. The condition is given by
**If \(\hat{c}_{j}\neq c_{j}\) and \(\hat{t}_{2j-1}\geq\hat{t}_{2j}\)**
In this case, the misclassification occurs because the \(2j^{th}\) neuron spikes earlier than the \(2j-1^{th}\) neuron in the SEFRON layer. To fix this issue, the weights connected to these neurons are updated such that they spike at desired firing times, denoted by \(t_{2j-1}^{d}\) and \(t_{2j}^{d}\) respectively, which are given by
\[\begin{split} t_{2j-1}^{d}&=\begin{cases}\hat{t}_{2j- 1},&\text{if }\hat{t}_{2j-1}<T_{d}\\ T_{d},&\text{if }\hat{t}_{2j-1}\geq T_{d}\end{cases}\\ t_{2j}^{d}&=\begin{cases}\hat{t}_{2j}^{c},&\text{if }\hat{t}_{2j}^{c}\geq(t_{2j-1}^{d}+T_{m})\\ t_{2j-1}^{d}+T_{m},&\text{if }\hat{t}_{2j}^{c}<(t_{2j-1}^{d}+T_{m})\end{cases}\end{split} \tag{14}\]
where \(T_{m}\) is defined as the margin threshold required between the two neurons in the SEFRON layer for better generalization. If \(T_{m}\) is set to a very small value (i.e., \(0\)), then MLC-SEFRON is not able to classify highly overlapped samples. Also, if \(T_{m}\) is set to a high value, then MLC-SEFRON takes a longer time to converge. \(T_{m}\) is chosen appropriately as described in [25].
Then weight update strategy as described in [25] is used to update the weights connected to \(2j-1^{th}\) and \(2j^{th}\) neurons in the SEFRON layer such that they spike at \(t_{2j-1}^{d}\) and \(t_{2j}^{d}\) respectively.
Without loss of generality the weight update for \(k^{th}\) neuron in the SEFRON layer with a desired firing time \(t_{k}^{d}\) is described further. The error function \(e\) to evaluate the update is given as
\[e=\frac{\theta_{k}}{\hat{V}_{k}(t_{k}^{d})}-\frac{\theta_{k}}{\hat{V}_{k}(\hat{ t}_{k})} \tag{15}\]
The error function \(e\) is designed as above because it can be multiplied directly with the fractional contribution \(u\) to get the required update value. If \(\Delta w_{ik}\) denotes the required weight update then it is given as
\[\Delta w_{ik}=\lambda\;u_{ik}(t_{k}^{d})\;e \tag{16}\]
where \(\lambda\) is the learning rate. In Equation (15), \(\hat{V_{k}}(\hat{t})\) is defined as the potential required for \(k^{th}\) neuron in the SEFRON layer to spike at \(\hat{t}\).
\[\hat{V_{k}}(\hat{t})=\sum_{i=1}^{2m}u_{ik}(\hat{t})\;\epsilon(\hat{t}-t_{i}) \tag{17}\]
The update \(\Delta w_{ik}\) is modulated using a Gaussian function, and then the existing weight is updated as given below
\[w_{ik}(t)\gets w_{ik}(t)+(\Delta w_{ik}\exp(-\frac{(t-t_{i})^{2}}{2\sigma ^{2}})) \tag{18}\]
The weights of \(2j-1^{th}\) and \(2j^{th}\) neurons are updated using Equations 15 to 18 to resolve the misclassification.
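The weight update of Eqs. (15)-(18) can be sketched as below; the sampled representation of the time-varying weights on a time grid is an implementation choice made for illustration.

```python
import numpy as np

def weight_increments(u_desired, eps_desired, u_actual, eps_actual, theta, lam):
    """Per-synapse increments Delta_w_ik, Eqs. (15)-(17), for one SEFRON-layer neuron.
    `u_*` and `eps_*` are the fractional contributions and kernel values evaluated
    at the desired firing time t_k^d and at the actual firing time, respectively."""
    V_desired = np.sum(u_desired * eps_desired)     # Eq. (17) at t_k^d
    V_actual = np.sum(u_actual * eps_actual)        # Eq. (17) at the actual firing time
    e = theta / V_desired - theta / V_actual        # Eq. (15)
    return lam * u_desired * e                      # Eq. (16)

def apply_increments(weight_samples, t_grid, input_spike_times, delta_w, sigma):
    """Gaussian-modulated update of the sampled time-varying weights, Eq. (18).
    `weight_samples[i]` holds the w_ik(t) values on `t_grid` for input neuron i."""
    for i, (t_i, dw) in enumerate(zip(input_spike_times, delta_w)):
        weight_samples[i] += dw * np.exp(-((t_grid - t_i) ** 2) / (2.0 * sigma ** 2))
    return weight_samples
```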
#### IV-B3 Incorrect assignment strategy
This strategy is used when a defender is wrongly assigned to a zone. The condition is given by
**If \(\hat{c}_{j}\neq c_{j}\) and \(\hat{t}_{2j-1}<\hat{t}_{2j}\)**
In this case, the misclassification occurs because \(2j-1^{th}\) neuron spikes earlier than \(2j^{th}\) neuron in the SEFRON layer. To fix this issue the \(t_{2j-1}^{d}\) and \(t_{2j}^{d}\) are evaluated as
\[t_{2j}^{d} =\begin{cases}\hat{t}_{2j},&\text{if }\hat{t}_{2j}^{*}<T_{d}\\ T_{d},&\text{if }\hat{t}_{2j}^{*}\geq T_{d}\end{cases} \tag{19}\] \[t_{2j-1}^{d} =\begin{cases}\hat{t}_{2j-1},&\text{if }\hat{t}_{2j-1}\geq(t_{2j}^{d}+T_{ m})\\ t_{2j}^{d}+T_{m},&\text{if }\hat{t}_{2j-1}<(t_{2j}^{d}+T_{m})\end{cases}\]
To make the actual assignment \(\hat{c}_{j}\) match the desired assignment \(c_{j}\), the weights connected to the \(2j-1^{th}\) and \(2j^{th}\) neurons in the SEFRON layer are updated using Equations 15 to 18.
To summarize the MLC-SEFRON learning algorithm is used to predict the labels of multiple segments assigned to a defender in a deterministic manner. The obtained labels of segments that are assigned to a defender are used for its trajectory generation.
## V Simulation results
This section presents the results of the performance evaluation and comparison of the proposed DSL framework with the existing STMTA formulation from [18]. First, the generation of the dataset for the proposed DSL framework using the expert (STMTA) approach is discussed. Then the performance of the proposed MLC-SEFRON for training a single defender is studied using multi-label metrics such as \(Precision\), \(Recall\), and \(F1-score\) [26]. Finally, the success percentage achieved by the team of defenders in capturing the intruders is evaluated and compared with the expert approach. The simulations are carried out using Matlab 2022a on a Windows 10 system with 16 GB memory, 6 cores, and a 3.2 GHz processor.
### _Expert policy-based data generation_
A circular perimeter is divided into 36 segments. Two different scenarios are considered for generating a dataset: one is a full observation scenario in which a defender senses all 36 segments; the other is a partial observation scenario, in which a defender senses 15 segments of the perimeter. The dataset is generated with a team of 5 defenders, placing them randomly along the perimeter. The intruders are spawned using a Poisson distribution with an arrival rate of 4 over a period of 8 seconds. The intruders are placed randomly using a uniform distribution over all segments. The dataset is generated using random Monte Carlo runs, with at most one intruder spawned in a segment in each run. The d-PDP is formulated as the spatio-temporal multi-task assignment (STMTA) problem as described in [17, 18]. This STMTA problem is solved by an expert policy similar to the approaches presented in [18, 43]. The detailed steps of the expert policy are given below. The expert policy computes the assignments by minimizing the distance traveled by the defenders along the perimeter while maximizing the number of captures. The cost function and the optimization problem used by the expert policy are discussed below.
#### V-A1 Cost function
The defender needs to capture the intruder either as a first task from its own location or as a subsequent task after capturing another intruder. The main difference is the starting point for the execution of the task. For the first task, the defender starts from its own location, and for subsequent tasks, it starts from the location of the previously captured intruder. The cost is defined as the arc distance that needs to be traveled by the defender along the perimeter to capture an intruder. If this traveling distance is infeasible due to the velocity limit, the cost is set to a large value (denoted by \(\kappa\)) to avoid, or at least minimize, such assignments. If it requires traveling in negative time, the cost is set to infinity to avoid the assignment. Here, \(d(\widehat{AB})\) denotes the arc length along the arc AB. Mathematically, the cost function is given as
\[c_{ij}^{f}=\begin{cases}d(\widehat{S_{i}^{D},S_{j}^{I}})&\text{if }\frac{d( \widehat{S_{i}^{D},S_{j}^{I}})}{V_{max}^{D}}\leq t_{j}^{a}\\ \kappa&\text{otherwise}\end{cases} \tag{20}\]
\[c_{kj}^{s}=\begin{cases}d(\widehat{S_{k}^{T},S_{j}^{I}})&\text{if }\frac{d( \widehat{S_{k}^{T},S_{j}^{I}})}{V_{max}^{D}}\leq t_{j}^{a}-t_{k}^{a}\\ \kappa&\text{if }\frac{d(\widehat{S_{k}^{T},S_{j}^{I}})}{V_{max}^{D}}>t_{j}^{a}-t_{k}^{a} \\ \infty&\text{if }t_{j}^{a}<t_{k}^{a}\end{cases} \tag{21}\]
for \(i\in\mathcal{I}=\{1,2,\cdots,N\}\), \(j\in\mathcal{J}=\{1,2,\cdots,M\}\), and \(k\in\mathcal{K}=\{1,2,\cdots,M-1\}\).
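The two cost functions translate directly into code; a small sketch is given below, with arc distances and arrival times assumed to be precomputed and the function names chosen for illustration.

```python
import numpy as np

def first_task_cost(arc_dist, t_arrival, v_max, kappa):
    """c^f_ij of Eq. (20): cost of defender i capturing intruder j as its first task."""
    return arc_dist if arc_dist / v_max <= t_arrival else kappa

def subsequent_task_cost(arc_dist, t_arrival_j, t_arrival_k, v_max, kappa):
    """c^s_kj of Eq. (21): cost of capturing intruder j immediately after intruder k."""
    if t_arrival_j < t_arrival_k:
        return np.inf                                    # would require travelling back in time
    if arc_dist / v_max <= t_arrival_j - t_arrival_k:
        return arc_dist
    return kappa                                         # infeasible at maximum speed: penalize
```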
#### V-A2 Optimization Problem
The formulated optimization problem is a linear sum assignment problem where decision variable \(\delta_{ij}^{f}\) is used to decide whether a defender \(D_{i}\) is assigned to task \(T_{j}\) as a first task or not. Another decision variable \(\delta_{kj}^{s}\) is used to decide whether a defender will be assigned to task \(T_{j}\) just after the task \(T_{k}\) or not. Mathematically, the optimization problem is given as
\[\min_{\delta^{f}_{ij},\ \delta^{s}_{kj}}\ \sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{J}}c^{f}_{ij}\delta^{f}_{ij}+\sum_{k\in\mathcal{K}}\sum_{j\in\mathcal{J}}c^{s}_{kj}\delta^{s}_{kj} \tag{22}\] \[\mathrm{s.~{}t.~{}}\delta^{f}_{ij}\in\{0,1\}\qquad\forall(i,j)\in\mathcal{I}\times\mathcal{J} \tag{22a}\] \[\delta^{s}_{kj}\in\{0,1\}\qquad\forall(k,j)\in\mathcal{K}\times\mathcal{J} \tag{22b}\] \[\sum_{i\in\mathcal{I}}\delta^{f}_{ij}+\sum_{k\in\mathcal{K}}\delta^{s}_{kj}=1\quad\forall j\in\mathcal{J} \tag{22c}\] \[\sum_{j\in\mathcal{J}}\delta^{f}_{ij}\leq 1\quad\forall i\in\mathcal{I} \tag{22d}\] \[\sum_{j\in\mathcal{J}}\delta^{s}_{kj}\leq 1\quad\forall k\in\mathcal{K} \tag{22e}\]
The solution to the optimization problem gives the assignments of the defenders to intruders. This assignment solution is used as the ground truth for the supervised learning problem. As the expert solution is sub-optimal and is obtained using a fixed team of defenders, its success rate cannot be 100%, i.e., some of the intruders will penetrate the perimeter. Therefore, for generating the dataset, intruders who enter the perimeter are sequentially removed to obtain feasible assignments of the remaining intruders. The sequential removal of these infeasible intruders is not the optimal method*. The obtained solution will be sub-optimal, so we call it an expert solution.
Footnote *: One needs to solve the combinatorial optimization problem (to decide which tasks should be removed) for computing the optimal solution. Computing the optimal solution is outside the focus of this paper.
The dataset is generated for each defender in a decentralized fashion. While generating the dataset for a particular defender, the locations of the intruder and the defender are considered from the perspective of that particular defender. These spatio-temporal representations are carried out as described in section III-B. The desired output for a zone is labeled as assigned if the expert solution assigns a defender to that zone; otherwise, it is labeled as unassigned. If there are more intruders than defenders, the expert (STMTA) solution assigns some defenders to multiple zones.
The optimization problem in STMTA formulation of d-PDP minimizes the total distance traveled by a team of defenders while capturing the maximum number of intruders. Therefore in this solution, defenders are more likely assigned to the neighboring zones than the further zones to minimize the distance traveled. As the trainee defender is always located in the central zone, the intruders closer to these zones are more likely to be assigned when compared with intruders in the zones farther away from the central zone. Technically, the dataset has a long-tail problem, where the number of assigned labels is lower for the further away zones than that of the central zones.
A dataset is generated for a team of five defenders by solving 10000 scenarios of d-PDP using the expert solution. A total of 50000 samples are generated in this dataset, out of which only 10000 are used for training the MLC-SEFRON. The dataset has a higher number of samples in which the trainee defender is assigned to the central zone compared with samples assigned to other zones. This is due to the presence of the trainee defender in the central zone. Thus, the dataset generated using the expert policy is unbalanced due to the long-tail problem. To balance this dataset, the minority class samples are over-sampled using the synthetic minority class over-sampling technique [44]. This oversampled dataset is used for training. The remaining 40000 samples of the dataset are used for testing. The results are presented for the testing dataset.
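The paper cites [44] for the oversampling step without specifying an implementation; a possible per-zone (binary-relevance) sketch using the `imbalanced-learn` package is given below, where `X` holds the input spike-time features and `Y` the multi-label targets, both hypothetical names.

```python
from imblearn.over_sampling import SMOTE

def balance_zone(X, Y, zone_index, random_state=0):
    """Oversample the minority class of one zone's binary label (binary-relevance view)."""
    y = Y[:, zone_index]
    X_balanced, y_balanced = SMOTE(random_state=random_state).fit_resample(X, y)
    return X_balanced, y_balanced
```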
### _Performance evaluation of MLC-SEFRON_
In this section, the results of the performance evaluation of MLC-SEFRON in predicting assigned zones for a single defender are presented. The weights obtained after training a single defender are directly used for evaluating the performance of other defenders. Thus the training in the DSL framework is decentralized. Multi-label performance metrics such as \(Precision\), \(Recall\), and \(F1-Score\) are used for this performance evaluation. Figure 5 shows the zone-wise variation of these multi-label metrics for full and partial observational scenarios. Figure 5a displays the \(Precision\), \(Recall\), and \(F1-Score\) plots evaluated using the MLC-SEFRON for all 15 zones in partial observation scenario. It can be illustrated that performance measures are greater for central zones and subsequently decline for zones farther from central zones are considered. Figure 5b displays the multi-label performance metrics of the MLC-SEFRON for full observation
Figure 5: Multi-label performance evaluation. Figure 5a shows the results of the performance evaluation of the MLC-SEFRON algorithm for a single defender in the partial observation scenario, whereas Figure 5b shows the results for the full observation scenario. The purple line shows the \(Recall\), the blue line shows the \(F1-Score\), and the red line shows the \(Precision\) across the zones.
scenario. The high \(Recall\) in the central zones is due to the larger number of assigned samples in the dataset for these zones, whereas the high \(Recall\) in the farther zones is due to the larger number of unassigned samples for those zones.
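As a rough sketch of how these zone-wise metrics can be computed, assuming the expert labels and MLC-SEFRON predictions are available as (samples × 15 zones) binary arrays; the array names and the random data below are placeholders, not outputs of the experiments.

```python
# Zone-wise Precision, Recall and F1 for a 15-zone multi-label output.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(1)
y_true = (rng.random((1000, 15)) < 0.2).astype(int)   # placeholder expert labels
y_pred = (rng.random((1000, 15)) < 0.2).astype(int)   # placeholder network outputs

prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average=None, zero_division=0)    # one score per zone
print(np.round(prec, 2), np.round(rec, 2), np.round(f1, 2))
```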
### _Performance evaluation of DSL framework for d-PDP_
The performance of the team of defenders is evaluated using the success percentage. The success percentage (\(S_{p}\)) is defined as
\[S_{p}=\frac{\text{number of intruders captured}}{\text{total number of intruders}}\times 100 \tag{23}\]
Table I shows the success comparison of the DSL framework and the expert (STMTA) policy. The STMTA solution is used as the training dataset, and hence STMTA is expected to perform better than the DSL framework. The success rate of the STMTA is \(84.95\%\), whereas the SNN gives a success rate of \(74.51\%\) for the full observation scenario, i.e., when each defender observes all 36 segments and acts accordingly. The performance of the DSL framework improves up to \(82.88\%\) by considering the effect of neighbors in post-processing. Similar results are obtained for the case of partial observations, where defenders can sense only 15 segments but the auction of tasks is done in a centralized way. The STMTA (expert) policy has a success rate of \(85.34\%\) and the DSL framework provides a success rate of \(72.56\%\), which improves to \(79.49\%\) by adding the effect of neighbors in post-processing.
With post-processing, the proposed DSL framework thus attains \(97.5\%\) and \(93.14\%\) of the success rate of the STMTA (expert) solution for the full and partial observation scenarios, respectively.
### _Comparison study_
An adaptive partitioning (AP) [16] approach has been proposed for the PDP with sequential capture of radially incoming intruders. The arrival distribution of the intruders is estimated to partition the perimeter into sectors; each defender assigned to one of these partitions operates independently to capture intruders sequentially. Although AP and DSL are designed for different settings, we compare them to show their efficacy. The naive and adaptive partitioning approaches are used for the PDP, and the remaining approaches are used for the discrete PDP. The dataset is generated with a Poisson distribution with \(\lambda=4\) and at most one intruder in each segment.
Table II shows the performance comparison for success percentage for adaptive partitioning, expert policy, and DSL framework. Note that, in [17], a comparison study showed that STMTA performs better than adaptive partitioning in the same setting.
### _Scalability of proposed DSL framework for d-PDP_
In the proposed DSL framework, once a defender is trained using MLC-SEFRON, the same network can be used for teams of defenders of different sizes; re-training is not required when the team size changes. Moreover, the training of a defender is independent of the team size. Hence, the proposed DSL framework is distributed and scalable.
In this paper, the MLC-SEFRON is trained with a dataset generated using the STMTA formulation proposed in [18], considering a team of five defenders. All different-sized teams of defenders in the DSL framework use the same network
Fig. 6: Scalability study of proposed DSL framework with different-sized team of defenders (a)Partial observation scenario (b)Full observation scenario
(which is trained with a team size of five). The expert STMTA solution is computed (based on the actual number of defenders) using centralized computation. Figure 6 shows the performance of the proposed DSL framework with different team sizes. One can observe that the solutions obtained from the STMTA approach and the DSL framework are comparable. Both solutions show similar characteristics: as the defenders' team size increases, the success rate increases. One should note that the expert used for training (i.e., the solution of STMTA) is sub-optimal. The performance of the DSL framework asymptotically approaches the success rate of the expert; as the number of defenders increases, the difference between the success rates reduces. Hence, the proposed DSL framework is scalable.
One can observe that the DSL framework performs better than the expert approach for team sizes of 3 and 4 in the partial observation scenario. This is an interesting result, where the DSL solution outperforms the expert: the DSL has learned to solve the assignment problem and not merely to imitate the given (sub-optimal) expert solution. Investigating the reasons for this observation is outside the scope of this paper. If the sensing range of the defender changes, then the number of neurons in the MLC-SEFRON network of the DSL framework also changes; the proposed DSL framework for the d-PDP is therefore not scalable in terms of the sensing range of the defender.
## VI Conclusion
In this paper, a novel Decentralized Spike-based Learning (DSL) framework for the discrete Perimeter Defense Problem (d-PDP) is proposed. The circular territory is divided into multiple segments, and hence the PDP is termed d-PDP. Initially, the d-PDP is formulated as a Spatio-Temporal Multi-Task Assignment problem (STMTA). This STMTA problem is then converted into a deterministic multi-label learning problem solved with the Multi-Label Classifier using Synaptic Efficacy Function spiking neuRON (MLC-SEFRON) network. For decentralized training, each defender is trained with a separate MLC-SEFRON network. The spatio-temporal information of the defenders and intruders in the territory is converted into spikes without any extra pre-processing step, and these spikes are given as input to the MLC-SEFRON network. The labels of the segments assigned to a defender are obtained from the output of the MLC-SEFRON network. The trained weights of one MLC-SEFRON network are directly used for obtaining the labels of assigned segments from the other MLC-SEFRON networks without any retraining. Using these labels, the trajectories for the defenders are then obtained with the help of the Consensus-Based Bundle Algorithm (CBBA). The performance of MLC-SEFRON is evaluated for full and partial observation scenarios of the defenders. The multi-label performance metrics obtained for MLC-SEFRON are observed to follow the input data, which is obtained from the expert policy. The performance results show that the proposed DSL framework attains \(93.14\%\) and \(97.5\%\) of the expert policy's performance in the partial and full observation scenarios, respectively. The DSL performance evaluated for different numbers of defenders indicates that it is efficiently scalable. Future work will explore improving the performance of the proposed MLC-SEFRON and its application to various multi-label learning as well as STMTA problems. The presented d-PDP dataset will be a good benchmark for testing SNN learning algorithms due to its inherent spatio-temporal nature.
|
2310.17810
|
Multi-line CO Imaging of Two Ultra-luminous Infrared Galaxies
|
The primary goal of this project was to use the given data of four emission
lines from two ultra-luminous infrared galaxies to calculate the molecular gas
mass and dynamical mass of each galaxy. These quantities can provide valuable
information about a galaxy's age, star formation properties, and molecular
make-up. Ultra-luminous infrared galaxies are formed from the merger or
interaction of gas-rich galaxies, and are classified by having a luminosity of
greater than 10^12 L_sun. The presence of molecular gas clouds and dust encourages
formation of young, bright stars in these galaxies, but also makes it difficult
to measure the total gravitational and molecular masses, since molecular
hydrogen is virtually undetectable within gas clouds. Our data instead observed
emissions of carbon monoxide (CO) from the two ULIRGs at transitions of J = 1 -> 0
or J = 2 -> 1. The software Difmap was used to map the interferometer data
obtained from the Plateau de Bure Interferometer (PdBI). Our data was cleaned,
filtered, and subsequently used to generate UV-plots of the emissions, along
with other values needed for calculations. Some of the key results include that
iras 15250 is much younger as it has not used up a majority of its molecular
gas while iras 17208 is older because it has used it up, the gas mass fraction
can be used to estimate the amount of dark matter present in the galaxy, and
that the gas content and the central surface brightness of the disk are
directly correlated.
|
Grishma Adenkar, Viktor Lipovka, Nihar Prabhala, Srikar Vakkalagadda
|
2023-10-26T23:01:16Z
|
http://arxiv.org/abs/2310.17810v1
|
# Multi-line CO Imaging of Two Ultraluminous Infrared Galaxies
###### Abstract
The primary goal of this project was to use the given data of four emission lines from two ultra-luminous infrared galaxies to calculate the molecular gas mass and dynamical mass of each galaxy. These quantities can provide valuable information about a galaxy's age, star formation properties, and molecular make-up. Ultraluminous infrared galaxies are formed from the merger or interaction of gas-rich galaxies, and are classified by having a luminosity of greater than \(10^{12}\) L\({}_{\odot}\). The presence of molecular gas clouds and dust encourages formation of young, bright stars in these galaxies, but also makes it difficult to measure the total gravitational and molecular masses, since molecular hydrogen is virtually undetectable within gas clouds. Our data instead observed emissions of carbon monoxide (CO) from the two ULIRGs at transitions of J= 1\(\rightarrow\)0 orJ= 2\(\rightarrow\)1. The software Difmap was used to map the interferometer data obtained from the Plateau de Bure Interferometer (PdBI). Our data was cleaned, filtered, and subsequently used to generate UV-plots of the emissions, along with other values needed for calculations. Some of the key results include that iras 15250 is much younger as it has not used up a majority of its molecular gas while iras 17208 is older because it has used it up, the gas mass fraction can be used to estimate the amount of dark matter present in the galaxy, and that the gas content and the central surface brightness of the disk are directly correlated.
## 1 Introduction
Understanding galactic structures and molecular gas mass in galaxies is critical to understanding both star formation and galaxy formation. Obtaining this information is difficult in cases where the galaxy resolution is not high enough to distinguish between galactic structures, since the galaxy's molecular makeup cannot be accurately inferred. Ultraluminous Infrared Galaxies, also referred to as ULIRGs, are galaxies with luminosities above \(10^{12}\) L\({}_{\odot}\) in infrared wavelengths. They were first discovered in large numbers by the Infrared Astronomical Satellite in 1983, and were presumed to form from mergers of the gas-rich disks of spiral galaxies. The abundance of molecular gas clouds in these galaxies is what generates high star formation rates, which is why luminous galaxies have very large populations of young, massive stars. The bright luminosity is generally attributed to the active galactic nucleus (AGN) at the center of a ULIRG, along with the multitude of bright, young stars. ULIRGs are rich in gas and dust, which together absorb light from stars in the form of ultraviolet photons and re-emit it in the infrared.
Even though molecular hydrogen (H\({}_{2}\)) is the most abundant molecule in the universe, the presence of molecular gas clouds renders it difficult to measure the amount of it in ULIRGs. Only quadrupole rotational transitions can occur with molecular hydrogen, but these are high energy transitions and therefore only occur at higher temperatures (around 1200 K). The vast majority of molecular hydrogen within a galaxy is at a colder temperature of around 30 K, and since there are no transitions at this temperature there are no emitted photons to detect. Carbon monoxide (CO) on the other hand is the second most abundant molecule in galaxies, and it has a dipolar moment that has an easily observable transition (\(J=1\to 0\)) that occurs at lower temperatures. Since CO and H\({}_{2}\) are formed under similar conditions, astronomers typically assume a standard ratio between the number of H\({}_{2}\) molecules and the number of CO molecules present in a galaxy. This ratio is typically noted as \(10^{4}\) H\({}_{2}\) molecules for every 1 CO molecule, but this value is highly debated within the astronomical community and should not be assumed true for all cases.
Section 2 of this paper introduces our data and the observed galaxies. Section 3 describes our methodology. Results for the individual galaxies are given in Section 4. We discuss our results in Section 5 and conclude in Section 6.
## 2 The Data
The data used in this research consists of four pre-calibrated sets of visibilities from two different Ultraluminous infrared galaxies (ULIRGs): infrared source (iras) 15250 and infrared source 17208. These visibilities were obtained with the Plateau de Bure Interferometer (PdBI), which is a facility located in the French Alps and is operated by the Institut de Radio Astronomie Millimetrique (IRAM). The PdBI is now a part of the Northern Extended Millimeter Array (NOEMA) which is also operated by IRAM. The images themselves are carbon monoxide emission lines of either \(J=1\to 0\) or \(J=2\to 1\), one of each for each galaxy.
Iras 15250 and iras 17208 both have substantial dust luminosities, by definition over 100 times that of the Milky Way. These large dust luminosities were created by the bursts of star formation that were triggered by the merger of two gas-rich progenitor galaxies. This is evident in the optical images of the galaxies, whose irregular shapes are unlike those of common spiral or elliptical galaxies.
Figure 2 is an example of the dirty map of the emission as shown in Difmap, the software used in this research.
Figure 1: Iras 15250 and iras 17208 ULIRGs taken from the Aladin Sky Atlas.
## 3 Methodology
The core software utilized for the analysis of our data was Difmap. Difmap is an editing and mapping program that allows us to view and map interferometer data in both continuum and spectral lines. After loading our data into Difmap and initializing the ma
Figure 2: Dirty map of iras 17208.
The next objective was to find the spectral line fluxes, \(F_{CO(1-0)}\) and \(F_{CO(2-1)}\), and then solve for the total molecular gas mass of the galaxy,
\[\frac{M_{H_{2}}}{M_{\odot}}=1.180\times 10^{4}(\frac{D}{Mpc})^{2}(\frac{X}{3 \times 10^{20}~{}cm^{-2}(K~{}km~{}s^{-1})^{-1}})(\frac{F_{CO(1-0)}}{Jy~{}km~{} s^{-1}})\]
where \(M_{H_{2}}\) is the mass of hydrogen in the galaxy, D is the approximate distance to the galaxy, X is a conversion factor between the number of \(CO\) and \(H_{2}\) molecules, and \(F_{CO(1-0)}\) is the spectral line flux of the J=1-0 line multiplied by the range of velocities over which the emission is present, which was calculated earlier.
\[D\approx\frac{cz}{H_{0}}\]
\[H_{0}\approx 70~{}km~{}s^{-1}~{}Mpc^{-1}\]
\[X\equiv\frac{N_{H_{2}}}{I_{CO(1-0)}}=3\times 10^{20}~{}cm^{-2}(K~{}km~{}s^{-1} )^{-1}\]
Using the spectral line fluxes, the flux ratio of the emission lines for both galaxies can be calculated. Next, the total dynamical mass of each galaxy can be solved using the equation \(M_{dyn}=\frac{Rv^{2}}{G}\), where \(R\) is the radius of the galaxy, \(v\) is the rotational velocity of the galaxy, and \(G\) is the gravitational constant, equal to \(6.67\times 10^{-11}\,m^{3}\,kg^{-1}\,s^{-2}\). The radius \(R\) was found in several steps. The first step is to manually go through the frequency bins in Difmap and use Difmap to find the location of the peak flux in each set of bins. In most cases, the location of the peak flux will be in the same location as it is at the peak frequency, but sometimes it is in a slightly different location, which indicates a slight difference in angular separation, \(\theta\), in arcseconds. Then, using our previously calculated redshift, we can use Ned Wright's Cosmology Calculator to find the angular size distance, \(D_{A}\), of the galaxy. Using both \(\theta\) and \(D_{A}\), \(\Delta R\) is found via the following diagram. Then, \(R\) is finally found by multiplying \(\Delta R\) by 0.5. Likewise, \(v\) is found by multiplying \(\Delta v\), the velocity range, by 0.5.
Finally, the total molecular gas mass fraction was solved using the following equation of \(f_{gas}\equiv\frac{M_{H_{2}}}{M_{dyn}}\).
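The chain of calculations above can be summarised in a short script. This is only an illustrative sketch: the numerical inputs (redshift, CO flux, radius, velocity) are placeholders chosen to show the units (D in Mpc, fluxes in Jy km/s, R in kpc, v in km/s), not the measured values of this project.

```python
# Sketch of the molecular gas mass, dynamical mass and gas fraction formulas.
C_KMS = 2.998e5        # speed of light [km/s]
H0 = 70.0              # Hubble constant [km/s/Mpc]
G_ASTRO = 4.301e-6     # gravitational constant in kpc (km/s)^2 / M_sun

def molecular_gas_mass(d_mpc, f_co10_jykms, x_factor=3e20):
    """M_H2 [M_sun] from the CO(1-0) line flux [Jy km/s]."""
    return 1.180e4 * d_mpc**2 * (x_factor / 3e20) * f_co10_jykms

def dynamical_mass(r_kpc, v_kms):
    """M_dyn = R v^2 / G [M_sun], with R in kpc and v in km/s."""
    return r_kpc * v_kms**2 / G_ASTRO

z = 0.05                                   # placeholder redshift
d = C_KMS * z / H0                         # low-redshift Hubble-law distance [Mpc]
m_h2 = molecular_gas_mass(d, f_co10_jykms=20.0)
m_dyn = dynamical_mass(r_kpc=1.0, v_kms=330.0)
print(d, m_h2, m_dyn, m_h2 / m_dyn)        # distance, masses and gas fraction
```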
Figure 3: Visualization of the frequency bins.
## 4 Results
Our results can be summarized in the following table.
As mentioned in the Methodology section, redshift was calculated using the equation \(1+z=\frac{\nu_{0}}{\nu}\). \(\nu\) was the frequency given for the origin of the emission. For both galaxies, we calculated two separate redshifts based on the two separate spectral emission data. For iras 15250, the average of our calculated redshift was 0.0525. The actual observed value is 0.0552, which gives our calculated redshift an error of about 5%. For iras 17208, the average between the two calculated redshift values is 0.0429; with an actual observed value of 0.042793, our calculated redshift has an error of less than 0.2%. It is encouraging to see that our redshifts are consistent within the same galaxy. Redshift tells us how fast a galaxy is receding away from us. Using the Doppler Redshift calculator, we find that iras 15250 is moving away from us at a velocity of about 5% the speed of light and iras 17208 is moving away from us at a velocity of about 4% the speed of light.
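A short sketch of this redshift and recession-velocity estimate follows. The observed frequency below is a placeholder; only the CO(1-0) rest frequency is a standard value, and the relativistic Doppler formula is used for the velocity.

```python
# Redshift from 1 + z = nu0 / nu and the corresponding recession velocity.
NU0_CO10 = 115.271      # CO(1-0) rest frequency [GHz]
nu_obs = 109.5          # placeholder observed frequency [GHz]

z = NU0_CO10 / nu_obs - 1
beta = ((1 + z)**2 - 1) / ((1 + z)**2 + 1)   # v/c from the relativistic Doppler formula
print(z, beta)
```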
The next important piece of information to note from the table is the differing values of the velocity range. The velocity range was calculated by finding the bounds of the frequency bands over which each emission line shows emission. For iras 15250, for example, the frequency bounds of the \(J=1\to 0\) emission line showed that the gas producing
Figure 5: This table summarizes the results of all the steps outlined in the Methodology section. The most important ones of note are Redshift, Total Molecular Gas Mass, Total Dynamical Mass, and Total Molecular Gas Fraction. It should be noted that the values for \(\nu_{0}\) in the top left box are for \(J=1\to 0\) and \(J=2\to 1\), respectively.
this specific emission has a velocity range of about \(660\,km/s\). To analyze this further, it is important to note exactly what \(J=1\to 0\) and \(J=2\to 1\) mean. Emission spectra are the phenomenon that occurs when an atom or molecule loses energy. For the CO molecule, \(J\) labels its rotational energy levels: a molecule in its ground state, \(J=0\), has the lowest possible energy, while a molecule in an excited rotational state has more energy than in the ground state. When the excited molecule eventually drops back to a lower level, by conservation of energy that energy has to go somewhere: it is released as electromagnetic radiation emitted at specific frequencies, which is what we observe. What the different velocity ranges tell us about the CO molecules and their behavior is the radial velocity of the CO molecules that undergo these specific emissions; this indicates how fast the galaxy is moving and rotating. It is also important to note that this velocity range is measured within the frame of the galaxy itself, not our frame as an observer.
Next, we move to the total molecular gas mass calculation. This is arguably the most important result of our analysis. This equation,
\[\frac{M_{H_{2}}}{M_{\odot}}=1.180\times 10^{4}(\frac{D}{Mpc})^{2}(\frac{X}{3 \times 10^{20}~{}cm^{-2}(K~{}km~{}s^{-1})^{-1}})(\frac{F_{CO(1-0)}}{Jy~{}km~{} s^{-1}})\]
, is the crux of our project. The most important term in this equation is the conversion factor, or \(X\). This term is what allows us to go from the data we have calculated about the emission of CO molecules, specifically the flux of \(CO_{J=1\to 0}\) to a mass for \(H_{2}\) molecules. It is vital to note the history of this conversion factor and assumptions that we have made regarding this term. The conversion factor is an empirically determined term that is generally accepted among the scientific community for calculating the mass of \(H_{2}\) molecules in a system. Of note is that it was calculated for spiral galaxies, so the use of it is valid in our calculations, given that our galaxies are spiral as well. There is some dispute, however, as to whether this conversion factor is totally valid. Both \(H_{2}\) and CO form under very similar, but not the exact same circumstances in space. Given that we do not know exactly the similarities and differences for these circumstances, however, the use of this conversion factor is not 100% accepted. For our purposes, however, we accepted this conversion factor as valid. The crux of our project was to use CO as a tracer molecule for \(H_{2}\), given that \(H_{2}\) is difficult to detect in cold dark clouds of gas among galaxies (it does not particularly exhibit an emission spectrum given that it is a stable molecule without movement of electrons. This is what it means when we say molecular hydrogen does not have a dipole moment; there is no polarization of the molecule due to the movement of the electrons and thus, no emission). Molecular hydrogen is by far the most abundant gas in the universe, so we can learn a lot of about a galaxy by determining the amount of \(H_{2}\) it has.
The next step was determining the total dynamical mass of the galaxy. The dynamical mass is the "gravitational" mass of the galaxy, or the total of all stars, gas, and dark matter. The details of this calculation are described above in the Methodology section. What is important to note for this calculation, however, is that because our galaxies are unresolved, we are applying a generous definition of a rotation curve to resolve them. We are assuming that the rotational velocity of our galaxy can be taken as half of the velocity range that we had calculated and that the radius can be taken as half of the angular separation in the sky. By doing this, we have defined a rotation curve for our galaxies, and as such have "resolved" them to a certain extent. For iras 15250, specifically, we find that the actual dynamical mass is about \(2.19\times 10^{9}M_{sun}\). With our calculated dynamical mass of \(6.37\times 10^{9}\), we find that our calculated dynamical mass has an error by a factor of about 3. It is encouraging that our calculation led us to a result that was not any orders of magnitude off the actual value. The error, then, can be attributed to our manual decision of which frequency bands showed emission. We mistakenly identified certain frequency bands as showing emission when they did not and should have instead been classified as background noise. This in turn would have led us to a narrower value for our velocity ranges and in turn, affected our dynamical mass calculations to be closer to the actual result.
The last step of our project was to determine the gas mass fraction of both galaxies. This was the ratio of molecular hydrogen to the total dynamical mass of the galaxy. Molecular hydrogen is the most abundant molecule in the universe, with it being about \(10^{4}\) times as abundant as the CO molecule (taken from project description). There are several implications of the gas mass ratio. The gas mass ratio can tell us about the evolution of galaxies. Galaxies with high gas mass ratios typically have a lot of star formation left to complete. The abundance of molecular hydrogen gas makes it more likely for the gas clouds to condense, starting the chain of star formation. Conversely, galaxies with low gas mass ratios are less likely to have active star forming regions. With less molecular hydrogen available, there
is far less chance of gas clouds collapsing under their own gravity and creating stars. Our galaxies each exhibit one of these traits. Iras 15250 has a high gas mass ratio of 77%, meaning it is more likely to have active star-forming regions, and more stars will be born as the galaxy evolves. It also means that, as a whole, the galaxy is dimmer (there is a direct correlation between luminosity and the molecular hydrogen gas ratio). On the other hand, iras 17208, with a gas mass ratio of 3.3%, will have many stars already formed and be very luminous, more so than iras 15250. It is likely that iras 17208 is further along in its galactic evolution than iras 15250.
## 5 Discussion
Our biggest challenge with this project was understanding and manipulating our data. We observed images of two galaxies, iras 15250 and iras 17208, each with an angular extent of about 1 arcsec. Usually the image is not centered; fortunately, ours already was, so no re-centering was required. Initially, we obtained an indistinct, noise-like image resembling television static; however, it is not random. Our data consist of multiple polarization and spectral line channels, with each visibility having 112 channels. In order to separate background noise from the image of the galaxy, we found the boundaries of the emission by observing which channels had the greatest changes in average frequency.
In the case of iras 17208, there were a number of anomalies. We found that even if we selected 10 channels for observation, there could still be errors among those channels. Not all of those channels are necessarily noise; one or a few of them could still hold useful information. Due to the nature of our project we did not include those channels; with more experience in Difmap we would have examined such individual channels. For example, suppose the boundary is [1, 10] and the signal there is on average very small, but there is a frequency spike in channel 5; in our case we would still treat it as noise. A possible explanation is that, due to the Earth's rotation, a genuine signal was received and happened to fall within our boundary channels.
Another explanation for the noise could be the presence of a very bright star or star cluster, although this is unlikely because it would imply that the star is hundreds or thousands of light years away from the galaxy. For these reasons, we used sections of channels rather than individual channels. We went through channels 11 to 99 for iras 17208 and found the x and y coordinates where the luminosity peaked for each set of 10 channels, in order to locate the brightest object in those channels. We used the Difmap commands peak(x) and peak(y) to do this, rather than generating plots of each set of channels and using our eyes, in order to eliminate the possibility of human error. Moreover, in the case of iras 17208 the signal was very strong and its actual size could have been smaller than what we observed. When we generated the UV-plot for iras 17208 using Difmap, we found linear tracks in our plot rather than circular ones, which was expected given our results during the tutorial. This can be attributed to multiple signals overlapping each other and causing artificial bright spots in our galaxy. Because of these plots we presumed to have a higher margin of error for iras 17208. Referring back to our Difmap steps in Section 3, we observed where the location of the peak shifted, and found the shift in arcsec. We then used Ned Wright's Cosmology Calculator to find the angular size distance using the redshift. We used a cosmology calculator here since trigonometry alone will not work: we are dealing with redshift and would need to know the curvature of the universe for our values to be accurate, which we do not know with certainty.
On the uv-plot we can see that most of our data consist of straight lines. This is because the target is equatorial; as a result, the uv tracks become linear. This is good and bad news at the same time. It is good because we receive a very strong signal and did not have to deal with the rotation of the Earth as much as we did for iras 15250; however, it causes other problems. Some tracks overlap, which causes significantly stronger/brighter regions on our target. In order to check for possible uncertainties we used vplot, projplot, radplot56, and tplot.
As we can see, the vplot graph (on the left) is very linear. This plot combines all flagged data points from projplot and projplot 56. Typically we would press W and Difmap would combine all flagged data points from projplot 56.
Figure 6: Visualization of the uv coverage. In comparison, iras 15250 looks more circular.
Figure 7: Visualization of the vplot output, giving a better idea of what is going on in Baseline 1 versus time.
It is important to note that there are a total of 15 baselines and 6 stations. Moving on, on the right side we can see the projplot. It does not allow us to clearly identify the radial UV-distance. The only thing we can say is that somewhere around 30 mas the data become predictable, with the same period between lines; this does not allow us to justify our results. Ideally, we would want to see these data points distributed somewhere between 0 and 80, with some obviously denser region. We cannot draw any strong conclusions from it.
Projplot 56 is more evenly distributed than projplot. It was the final step taken to identify any potential errors in the iras 17208 data set. In order to identify errors in the baselines, we looked at the bottom of the selfcal output. As we can see, there are no error messages such as "A total of 2 un-correctable visibilities were flagged" from Difmap. It is also reasonable to check for how long the object was observed; to find this we can use the command spaceplot, which gives us the time frame. In the case of iras 17208, the observation took place between 30/05:34:25 and 30/11:45:02, for a total of 370.586 minutes of observational time.
It would be scientifically interesting to also collect data at other wavelengths, such as the X-ray or gamma-ray range. Events such as supernovae, or stellar objects such as pulsars, can be detected in these ranges through the large amount of energy they give off at these wavelengths.
## 6 Conclusions
Our primary goal in this project was to quantify the total molecular masses of the galaxies, as well as their total gravitational masses. Through our research we gained an understanding of the roles that carbon monoxide (CO) and molecular hydrogen (H\({}_{2}\)) play in observing, measuring, and calculating these quantities. After making these calculations, we were able to draw the following conclusions about galaxy formation and evolution:
1. A high gas mass fraction indicates that the galaxy is young and has consumed only a small fraction of its gas so far. In particular, iras 15250 is a younger galaxy since it has used up less of its molecular gas, whereas iras 17208 has used up more of it, indicating that it is older.
2. Gas mass fraction and total gravitational mass can be used to calculate the amount of dark matter present in a galaxy.
3. Gas content and central surface brightness of the disk are directly correlated.
Figure 8: The result from projplot 56 (on the left) for an extended double source with components along a position angle of 71°. The beating of the two main components is not clearly seen, and it is difficult to determine whether the wavelength is directly related to their spatial separation of 10, 62, or 71. On the right side is our final log, which shows that there are no errors in our cleaned data
## 7 Acknowledgements
Veselov Viktor wrote the Discussion section. Srikar Vakkalagadda wrote the Data and Methodology sections. Nihar Prabhala wrote the Results section. Grishma Adenkar wrote the Abstract, Introduction, and the Conclusion sections, and also edited the Discussion section.
|
2305.02715
|
An Acoustic Simulation Framework to Support Indoor Positioning and Data
Driven Signal Processing Assessments
|
We present an indoor acoustic simulation framework that supports both
ultrasonic and audible signaling. The framework opens the opportunity for fast
indoor acoustic data generation and positioning development. The improved
Pyroomacoustics-based physical model includes both an image-source model (ISM)
and ray tracing method to simulate acoustic signaling in geometric spaces that
extend typical shoe-box rooms. Moreover, it offers the convenience to
facilitate multiple speakers and microphones with different directivity
patterns. In addition to temperature and air absorption, the room reverberation
is taken into account characterized by the RT60 value or the combination of
building materials. Additional noise sources can be added by means of post
processing and/or extra speakers. Indoor positioning methods assessed in
simulation are compared with real measurements in a testbed, called 'Techtile'.
This analysis confirms that the simulation results are close to the
measurements and form a realistic representation of the reality. The simulation
framework is constructed in a modular way, and parts can be replaced or
modified to support different application domains. The code is made available
open source.
|
Daan Delabie, Chesney Buyle, Bert Cox, Liesbet Van der Perre, Lieven De Strycker
|
2023-05-04T10:37:34Z
|
http://arxiv.org/abs/2305.02715v2
|
An Acoustic Simulation Framework to Support Indoor Positioning and Data Driven Signal Processing Assessments
###### Abstract
We present an indoor acoustic simulation framework that supports both ultrasonic and audible signaling. The framework opens the opportunity for fast indoor acoustic data generation and positioning development. The improved Pyroomacoustics-based physical model includes both an image-source model (ISM) and ray tracing method to simulate acoustic signaling in geometric spaces that extend typical shoe-box rooms. Moreover, it offers the convenience to facilitate multiple speakers and microphones with different directivity patterns. In addition to temperature and air absorption, the room reverberation is taken into account characterized by the RT60 value or the combination of building materials. Additional noise sources can be added by means of post processing and/or extra speakers. Indoor positioning methods assessed in simulation are compared with real measurements in a testbed, called 'Techtile'. This analysis confirms that the simulation results are close to the measurements and form a realistic representation of the reality. The simulation framework is constructed in a modular way, and parts can be replaced or modified to support different application domains. The code is made available open source.
Simulation, Acoustics, Ultrasonic applications, Localization, Audio databases, Data collection
## I Introduction
Acoustic signaling can serve a very broad set of applications ranging from localization to sonar, speech recognition, and sound classification to imaging in, for example, the medical world. In addition to audible acoustic signals, ultrasound is gaining attention, more specifically in the domain of positioning. It possesses a low propagation speed compared to RF and light, and lies outside the audible frequency range of the human ear. Moreover, ultrasound and acoustic waves are suitable to provide time-based distance estimations for indoor positioning with low-cost hardware. Previous research presented a hybrid RF-acoustic indoor positioning system (IPS) using ultrasonic chirp signals for ranging estimations between speaker anchors and a mobile microphone [1]. In order to achieve higher accuracy and reliability at all locations in an indoor environment, new positioning algorithms need to be developed and tested. Machine learning (ML) models offer the opportunity to provide significant improvements [2, 3]. The requirement for large data sets to train such models is a major hurdle: acoustic data collection for indoor positioning or audio classification is a time consuming task. Acoustic signal propagation simulators in combination with real-life measurements can provide a solution. For this reason, we developed a simulation framework that supports acoustic and ultrasonic signals combined with an option to extend it with extra building blocks for ML, positioning or classification algorithms. Hence, the flow from data creation to analysis of the operation of specific applications is fully supported within one framework, which fosters smooth development.
Acoustic simulation packages have been widely available since the early 1960s [4]. Popular commercial examples include Odeon Room Acoustics Software, CATT, EASE, Siemens Acoustic Simulation and OpenFOAM. Most are user-friendly though expensive, are incapable of applying ultrasonic signals in the simulated rooms, or cannot be easily extended with the desired post-processing [5]. There are also open-source toolboxes and libraries such as 'Roomsim' [6] for Matlab and 'Pyroomacoustics' [7] for Python. The latter is used as the basis for the proposed indoor acoustic simulation framework. A faster and more accurate shoe-box room simulator than 'Roomsim' was proposed in [8]. This simulator takes into account air absorption, humidity, temperature, scattering, and directivity. Nevertheless, only a shoe-box model simulation is possible. Convolutional neural network (CNN) based sound source localization is performed in [9], where Pyroomacoustics is used for data creation to feed the CNN. As with the other simulation frameworks, no complete extension is provided for ultrasonic signals.
One of the few simulators that supports ultrasonic signals is Field II. Closely related to the research presented in this paper, an ultrasonic chirp was emitted and received, followed by a cross-correlation for range estimation within a small room [10]. Unfortunately, the reflection/reverberation property of the room, which noticeably affects the accuracy and reliability of an IPS, is not taken into account [11].
Since a comprehensive and representative acoustic simulator for both audible audio and ultrasound did not yet exist to the best of our knowledge, we adapted the open-source Pyroomacoustics library. In addition to the audible sound, ultrasonic signals in the range of 20 to 50 kHz can be modelled. This additional range matches the frequency response of commercially available ultrasonic (MEMS) microphones. A chirp-based IPS, closely related to [1], is simulated in our presented framework as a means for validation. Data generation, post-processing, positioning and evaluation take place within the same framework. This framework can readily be used to test and compare
acoustic/ultrasonic indoor positioning algorithms where both accuracy and reliability can be checked. Nevertheless, with some adaptations, the framework can also be used to generate audio data sets for other applications, e.g., speech recognition models or sound classification in general. The post-processing can be altered to output the audio fragment and the positioning engine can be replaced by, e.g., a classification engine. In addition, jammers and noise can be added.
The paper is structured as follows. An overview of the proposed simulation framework is given in Section II. This reflects what is currently implemented, with focus on the analysis of indoor positioning algorithms supporting compatibility with previous research [1]. Afterwards, a real world validation is presented in Section III, followed by a discussion with referenced future work in Section IV. Lastly, a conclusion is presented in Section V.
## II Simulation Framework
The proposed simulation framework1 consists of four main parts, as presented in Fig. 1: the physical simulator, post-processing layer, the positioning engine and a final evaluation layer.
Footnote 1: github.com/DaanDelabie/AcousticSimulationFramework
The top layer of this approach (i) models an indoor space, (ii) processes the physical properties of the space and (iii) calculates the propagation of acoustic signals. The addition of noise and processing of the received signals, e.g., pulse compression, is performed in the post-processing layer. This layer feeds the positioning engine where based on the input the 3D positioning of objects or sources is performed. To compare different algorithms in all three previous layers, a separated evaluation section has been added. All these components are linked through common parameters, bundled in a single configuration file which defines the desired simulation. The fact that the simulation framework is composed of stand-alone building blocks makes adaptations and comparisons on a component-by-component basis straightforward and efficient. For example, when comparing different positioning algorithms, the output of the physical simulation part and the post-processing part can be reused without recalculation, resulting in a great profit of time and computing resources. If a comparison needs to be made across different signal-to-noise ratio (SNR) values, only the simulation part from post-processing onward should be retaken. Moreover, the content and thus inputs and outputs of these blocks can easily be adapted to support other applications or algorithms. To describe what is possible with this simulator, each building block is discussed in more detail. An ultrasonic IPS [1], with speakers that act as anchor nodes and microphones as mobile devices, is used as a tutorial. The principle of the implemented analysis method is illustrated in Fig 2.
### _Physical Simulation_
This building block is based on the Pyroomacoustics library [7], with minor adjustments to support multi-processing and ultrasonic signals. The properties of this component are summarised.
* Multiple rooms can be simulated in parallel on different processor cores, allowing parallel/multi-processed testing of multiple speaker positions that are not allowed to output simultaneously.
* The physical simulation extends conventional shoe-box models and can create any desired flat ceiling 3D room shape. This simulation environment is based on the given ground plane vertices, heights and room materials for the accompanying ceiling, floor and walls.
* The option to define an RT60 value, the time spent for the room impulse response (RIR) to decay by 60 dB, instead of using materials is also offered. Unlike using materials, the RT60 value can only be used when simulating a shoe-box configuration since it uses the inverse Sabine formula [12] to approximate the wall energy absorption.
* Microphones and speakers can be placed freely at designated locations. In the example setup, as mentioned in the introduction and illustrated in Fig 2, the indoor positioning of a microphone is determined based on chirp signals transmitted at anchor nodes (speakers) located on the walls and ceiling. To assess the localization performance throughout the room, an equally spaced grid of microphone locations is generated. To train ML models, a split by a certain percentage of the training, development and test set is typically required. The train-dev-test set ratios are included in the configuration file, which is automatically taken into account. In case ML techniques
Figure 1: Overview of the acoustic positioning simulation framework.
Figure 2: Simulation flow for the analysis of the IPS.
are applied for the indoor positioning algorithms, the equally spaced location set is used as a test set, so that well-organized figures are created and can be compared. For the train and development set, a random set of locations is generated that is located in between the test set grid. These concepts are shown in Fig. 3.
* The directivities and the transmit signal can be set as desired.
* The speed of sound in air is determined by the temperature \(\theta\) specified in \({}^{\circ}\mathrm{C}\): \[v_{sound}=331.3\,\sqrt{\frac{273.15+\theta}{273.15}}=20\,\sqrt{273.15+\theta}\] (1)
* To perform the actual propagation simulation, an ISM and ray tracing option are provided by Pyroomacoustics, each with their (current) capabilities as summarized in Table I.
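As an illustration of how such a room can be set up with the underlying Pyroomacoustics API, a minimal sketch follows. The dimensions, material coefficient, source signal and positions are placeholder choices and do not reflect the framework's actual configuration; note also that the stock library applies air and wall absorption only in the audible octave bands, so the ultrasonic extensions discussed below are not reflected in this snippet.

```python
# Minimal shoe-box simulation sketch with one speaker and one microphone.
# All numeric settings are placeholders, not the framework's configuration.
import numpy as np
import pyroomacoustics as pra

fs = 250_000                                          # sampling rate [Hz]
room = pra.ShoeBox([8.0, 4.0, 2.4], fs=fs,
                   materials=pra.Material(0.45),      # wood-like energy absorption
                   max_order=10, air_absorption=True)

t = np.arange(0, 0.03, 1 / fs)
signal = np.sin(2 * np.pi * 40_000 * t)               # placeholder 40 kHz tone
room.add_source([1.0, 3.9, 2.2], signal=signal)
room.add_microphone([3.0, 2.0, 1.0])

room.simulate()
rx = room.mic_array.signals[0]                        # received trace for post-processing
print(rx.shape)
```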
While the Pyroomacoustics package has been promoted as a tool for signal processing in the audio domain, the simulation environment can be adapted for ultrasonic signal propagation as well. However, two parameters should be adjusted to emulate the propagation of ultrasonic signals in real indoor environments: air absorption and material absorption.
* As acoustic signals propagate through air, the waves get attenuated due to viscosity and relaxation processes [13]. This results in an exponential decrease in the intensity of the acoustic signal. In Pyroomacoustics and thus the adapted simulator, air absorption is modelled as a factor \(\alpha_{\mathrm{air\,decay}}\): \[\alpha_{\mathrm{air\,decay}}=e^{-\alpha_{\mathrm{abs}}d},\] (2) where \(\alpha_{\mathrm{abs}}\) is the acoustic absorption coefficient in Np/m and \(d\) is the distance in m. The acoustic absorption coefficient \(\alpha_{\mathrm{abs}}\) depends to a great extend on the frequency of the acoustic signal, and on environmental parameters such as temperature and relative humidity. Pyroomacoustics supports only frequency-dependent absorption coefficients values for octave bands between \(125\,\mathrm{Hz}\) and \(8\,\mathrm{kHz}\). Specific values for the ultrasonic frequency range are manually added to the newly adapted simulator, according to previous research performed in [14, 15].
* Acoustic waves are also attenuated when reflected off objects and walls. The amount of energy that is absorbed depends, among other things, on the materials. The proposed simulator models materials by means of an energy absorption coefficient \(\alpha_{\mathrm{abs,material}}\), which represents the ratio of the energy absorbed by the material to the total energy of the incident sound wave. Consequently, the amplitude of the reflected wave \(A_{\mathrm{reflected}}\) can be calculated by \[A_{\mathrm{reflected}}=A_{\mathrm{incident}}\sqrt{1-\alpha_{\mathrm{abs, material}}},\] (3) where \(A_{\mathrm{incident}}\) is the amplitude of the incident acoustic wave and \(\alpha_{\mathrm{abs,reflection}}\) is the energy absorption coefficient of a specific material. A material can also be characterized by multiple energy absorption coefficients to model its frequency-dependent behaviour. As in the case of air absorption, frequency-dependent wall absorption is originally supported in Pyroomacoustics, but only for the octave bands between \(125\,\mathrm{Hz}\) and \(8\,\mathrm{kHz}\). Absorption coefficients are added in the simulator extension to support ultrasonic signals. However, to the best of our knowledge, very little research is available on absorption for several materials in the ultrasound frequency range. The usually employed impedance tube to determine the absorption coefficient can unfortunately not be used in the case of ultrasonic signaling, given the necessary strict and physically impossible-to-use dimensioning. To obtain an indication of the ultrasonic absorption coefficient of plywood, we measured the intensities of the direct and reflected wave within a frequency range of 20 to 50 kHz. The used method is based on the ISO 13472-1 standard. First, a free-field calibration is performed between a speaker and microphone to check the intensity of the incident sound at a certain distance for a certain frequency. Afterwards, a wooden panel is placed at the same distance to receive the reflected short pulse with a certain frequency at the position of the speaker. Since unwanted reflections are avoided when sending the short pulse, the reflected pulse can be purely received within a different time window than the one of the transmitted pulse. Within
\begin{table}
\begin{tabular}{l c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{**Shoebox**} & \multicolumn{4}{c}{**Non-Shoebox**} \\ & **Materials** & \multicolumn{2}{c|}{**Directivities**} & \multicolumn{2}{c}{**Materials**} & \multicolumn{2}{c}{**Directivities**} \\ \hline & \(\neq\) & \(=\) & **Mic** & **Speaker** & \(\neq\) & \(=\) & **Mic** & **Speaker** \\ \hline ISM & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ RT & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \hline \hline \end{tabular}
\end{table}
Table I: Possibilities given an ISM or raytracing (RT) configuration applied to a (non)-shoebox shaped room. Materials on the walls, ceiling and floor can be equal (\(=\)) or different (\(\neq\)). The supported cardioid family directivities are frequency-independent.
Figure 3: An example of generated test grid and train/dev cloud microphone positions in a random shaped room with speakers representing anchors.
the specified frequency domain, an average absorption coefficient of 0.43 was obtained with a standard deviation interval of \([0.14,0.72]\).
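The two attenuation factors of Eqs. (2) and (3) can be combined in a few lines, as sketched below. The air absorption coefficient used here is a placeholder (the ultrasonic values added to the simulator are frequency dependent), while 0.43 is the average plywood coefficient measured above.

```python
# Air decay over a distance d plus one wall reflection (Eqs. (2) and (3)).
import numpy as np

def air_decay(alpha_np_per_m, d_m):
    """Amplitude factor exp(-alpha * d) for an absorption coefficient in Np/m."""
    return np.exp(-alpha_np_per_m * d_m)

def reflected_amplitude(a_in, alpha_material):
    """Amplitude after one reflection off a material with energy absorption alpha."""
    return a_in * np.sqrt(1.0 - alpha_material)

a = air_decay(0.05, d_m=5.0)          # placeholder ultrasonic air absorption coefficient
a = reflected_amplitude(a, 0.43)      # one bounce off the measured plywood panel
print(a)
```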
The physical simulation part of the framework outputs the corresponding RIRs and received audio samples. Afterwards, the audio samples are downsampled to meet the desired sampling frequency of the microphone. This data is passed on to the post-processing part.
### _Post-Processing_
The post-processing part can add different types of noise and interference to the signal depending on a pre-specified SNR and signal-to-interference ratio (SIR) values. This allows to determine the influence of noise on, for example, the accuracy of the IPS under test. Interference can also be obtained by adding an extra speaker in the physical model, making it inherently present in the audio signal. The disadvantage of this method is that a modified noise parameter impacts the simulation time since this is performed in the physical part.
This post-processing block is easily extendable with necessary additions such as a Monte-Carlo simulation to assess the influence of noise at different SNR levels. In addition, downsampling, automatic gain control (AGC), pulse compression and filtering can be used. To give an example of what is possible, the previously illustrated IPS as shown in Fig 2 is elaborated, to comply with the procedure of [1]. After receiving the signals from the physical simulation, noise is added, followed by pulse compression based on a transmitted chirp signal and the noisy received part of that chirp signal. Afterwards, a low-pass filter is added to obtain the envelope curve of the pulse compression. The index of the peak of this curve gives an indication of the range between the corresponding speaker and microphone. This envelope curve is downsampled again to create a fixed output size among different simulation setups, for example, to use this as input features of a CNN.
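For illustration, a stripped-down version of this chain (chirp reference, pulse compression, envelope, peak-based range) could look as follows. All signal parameters and the synthetic delay are placeholders, and a Hilbert-transform envelope stands in for the low-pass filtering used in the framework.

```python
# Pulse compression and range estimation sketch with a synthetic received trace.
import numpy as np
from scipy.signal import chirp, correlate, hilbert

fs = 250_000
t = np.arange(0, 0.030, 1 / fs)                       # 30 ms reference chirp
ref = chirp(t, f0=45_000, t1=t[-1], f1=25_000)        # descending linear chirp

delay = 1200                                          # hypothetical propagation delay [samples]
rx = np.concatenate([np.zeros(delay), ref])
rx += 0.05 * np.random.default_rng(0).standard_normal(rx.size)

pc = correlate(rx, ref, mode="valid")                 # matched filter / pulse compression
env = np.abs(hilbert(pc))                             # envelope (in place of an explicit LPF)
tof = np.argmax(env) / fs                             # time of flight [s]
print(tof * 343.0)                                    # range estimate [m] at roughly 20 degrees C
```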
### _Positioning Engine_
The positioning engine is used to perform a location estimate based on the given input data. In the exemplary configuration, using the peak prominence algorithm applied to the envelope of the pulse compression, a time of flight (ToF) and thus ranging estimation is performed between a certain speaker and microphone. The range estimates are then used to estimate a 3D position, for example using a least squares method. As mentioned in the physical simulation section, there is a possibility to already provide a test, train and dev data split. Since ML plays an increasingly important role in indoor positioning, it releases the possibility to apply and learn different models to obtain better accuracy and reliability of the 3D position estimate. This simulation model was developed for future research to investigate ML techniques for ultrasonic signal-based indoor positioning.
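As one concrete possibility for this final step, a nonlinear least-squares position solver can be written in a few lines; the anchor coordinates and the synthetic, noiseless ranges below are placeholders rather than data produced by the framework.

```python
# Least-squares 3D multilateration from speaker-to-microphone range estimates.
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.5, 0.1, 0.2],          # placeholder speaker positions [m]
                    [1.0, 3.9, 2.2],
                    [5.2, 0.1, 0.7],
                    [6.9, 3.9, 1.3]])
true_pos = np.array([3.0, 2.0, 1.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)   # synthetic, noiseless ranges

def residuals(p):
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=anchors.mean(axis=0)).x
print(estimate)                                       # should recover true_pos
```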
### _Evaluation_
A graphical representation of the results can lead to better interpretation and can be important when designing applications. If the simulation framework is used for indoor positioning, the current evaluation content can be used to report cumulative distribution functions (CDFs) and automated 3D plotting independent of the shape of the room to display the accuracy of the position estimate as a function of the location in space. An example is given in Fig. 4. This part is very application dependent and can, as a result, be easily adapted to visualise the answers to the specific research questions.
## III Real world validation
To validate the physical-world representation capabilities of the proposed simulation model, a comparison between a simulation and a measurement set is performed. The measurement sets are gathered in a highly reverberant and thus acoustically challenging environment [16]. Since the main focus of this framework lies on acoustic indoor positioning, the same positioning algorithms are applied to the simulated and measured data sets and the corresponding CDFs are compared. The simulation framework was configured to emulate the real-world measurement environment [16] as closely as possible.
Both the real-life and the simulation setup consist of four speaker anchors targeting a microphone which is placed at 150 different positions inside the 'Techtile' testbed [17]. The speakers are located at positions [0.5, 0.05, 0.146] m, [1.015, 3.950, 2.215] m, [5.183, 0.067, 0.744] m and [6.901, 3.948, 1.322] m. Positioning signals, consisting of a linear, descending chirp of 30 ms between 45 kHz and 25 kHz, are emitted sequentially by the anchor nodes. The testbed is simulated as a wooden room with two open sides, emulated as absorbing surfaces within an ISM. The absorption coefficient of wood is set to 0.45 in the adapted Pyroomacoustics library since ultrasound is addressed; given the aforementioned measurements on a wooden Techtile panel within this frequency range, this value is reasonable. The air absorption for ultrasound is also taken into account.
The microphone samples the received signal during 1 ms at a sampling frequency of 250 kHz as described in [16]. The directivities of both speaker and microphone are also taken into account. An omni-directional pattern and a hypercardioid pattern are chosen for the microphone and speakers respectively. The speakers point towards the center of the room. The
Figure 4: An example of the Euclidean distance error on different locations in a 3D room.
The SNR value is assumed to be 30 dB. For range estimation, the maximum value of the bit-based pulse compression is used, as described in [16]. After this ranging estimate between the microphone at a certain position and the 4 speakers/anchors, a location estimate is determined, in this case using the following well-known algorithms: simple intersections, range Bancroft, Beck, Chueng and Gauss-Newton. Fig. 5 shows the comparison between the simulated and measured CDFs obtained by the positioning algorithms. This demonstrates that the results achieved with the simulation framework closely match reality for this indoor positioning application.
## IV Discussion and future work
The simulation environment makes it possible to quickly test IPSs and generate acoustic datasets. This framework will mainly be used to increase the accuracy and reliability of IPSs within the entire (not necessarily shoe-box) room. To this end, distributed anchor deployments and ML techniques are being studied to extend the already existing hybrid-acoustic positioning system for energy neutral (EN) devices [1]. If the simulation has to provide a lot of sequential speaker outputs at different locations, the simulation time can increase considerably, even in a multi-processed manner. Multiple microphones can receive sound within the same simulation episode, so that for example the simulation performed in Section III only took about 10 minutes on an Intel 8th Gen octa-core i7 processor. A disadvantage of this simulation framework is that no objects can be placed in the room at the moment. On the other hand, this framework offers flexibility and detailed possibilities to mimic a real environment.
## V Conclusion
An ISM and ray tracing simulation framework, based on the Pyroomacoustics library, was proposed making it possible to generate acoustic/ultrasonic indoor data sets and to test IPSs. The flexibility of this framework allows it to be used for all kinds of applications where components, such as the positioning engine, can be easily replaced or modified. Beside shoe-box models, other custom rooms can be created and simulated taking into account the materials, reverberation properties, directivities, sample frequencies, air absorption, temperature and SNR and/or SIR values. It was shown that the simulator can mimic a real-life environment well, making it a reliable data source. Future research on acoustic/ultrasonic indoor positioning and classification can make good use of this simulator to train and test ML models in addition to traditional algorithms.
## Acknowledgment
This work was supported by the Research Foundation-Flanders (FWO) under Grant G0D3819N.
|
2308.04398
|
Character-level NMT and language similarity
|
We explore the effectiveness of character-level neural machine translation
using Transformer architecture for various levels of language similarity and
size of the training dataset on translation between Czech and Croatian, German,
Hungarian, Slovak, and Spanish. We evaluate the models using automatic MT
metrics and show that translation between similar languages benefits from
character-level input segmentation, while for less related languages,
character-level vanilla Transformer-base often lags behind subword-level
segmentation. We confirm previous findings that it is possible to close the gap
by finetuning the already trained subword-level models to character-level.
|
Josef Jon, Ondřej Bojar
|
2023-08-08T17:01:42Z
|
http://arxiv.org/abs/2308.04398v1
|
# Character-level NMT and language similarity
###### Abstract
We explore the effectiveness of character-level neural machine translation using Transformer architecture for various levels of language similarity and size of the training dataset on translation between Czech and Croatian, German, Hungarian, Slovak, and Spanish. We evaluate the models using automatic MT metrics and show that translation between similar languages benefits from character-level input segmentation, while for less related languages, character-level vanilla Transformer-base often lags behind subword-level segmentation. We confirm previous findings that it is possible to close the gap by finetuning the already trained subword-level models to character-level.
## 1 Introduction
Character-level NMT has been studied for a long time, with mixed results compared to subword segmentation. In the MT practitioner's discourse, it has sometimes been assumed that character-level systems are more robust to domain shift and better at translating morphologically rich languages. Recent studies (Libovicky et al., 2022) show that there is no conclusive evidence for these claims.
At the same time, character-level systems have been reliably shown to be robust against source-side noise. In terms of general translation quality, they often either underperform or are on par with their subword-level counterparts (Libovicky et al., 2022). Also, both training and inference speeds are lower and memory requirements are higher due to longer sequence lengths (mostly because of the quadratic complexity of the Transformer attention mechanism with respect to the input length (Vaswani et al., 2017)) unless specialized architectures are used.
In this work, we present experiments on a specific use-case of translation of related languages. We train bilingual Transformer translation models to translate between Czech and Croatian, German, Hungarian, Slovak, or Spanish. We vary the training dataset size, vocabulary size and model depth and study the effects. We show that in the baseline configuration with vanilla Transformer-base, character-level models outperform subword-level models in terms of automated evaluation scores only in closely related Czech-Slovak translation pair. Finally, we confirm that it is possible to obtain a better quality of the char-level translation for less related languages by first training a subword-level model and in the later stage of the training switching to character-level processing.
## 2 Related work
Libovicky et al. (2022) analyze the body of work on character-level NMT and show that in most cases, it still falls behind its subword-level counterpart in many aspects.
Since they provide a comprehensive overview of the field up to today, we will only very briefly list the most influential works in this section, and refer the reader to the detailed analysis in Libovicky et al. (2022).
In one of the earliest works, Chung et al. (2016) use RNN with character segmentation on the decoder side. Lee et al. (2017) use CNN for fully character-level NMT. Costa-jussa et al. (2017) apply a similar approach to byte-level translation. Gupta et al. (2019) and Ngo et al. (2019) explore character-level MT using the Transformer model. Recent work on character-level NMT includes Li et al. (2021); Banar et al. (2021) and Gao et al. (2020).
Libovicky and Fraser (2020) show that problems with slow training and worse final translation quality for character-level NMT models can be largely mitigated by first training with subword segmentation and subsequently finetuning on character-segmented text. However, a problem of lower speed (due to longer sequence length) persists, which can make both the training and inference prohibitively costly and slow, especially for models that make use of a larger context than only one sentence.
Our work specifically targets character-level translation of closely related languages. In WMT 2019 Similar Language translation task (Barrault et al., 2019), Scherrer et al. (2019) show that character-level NMT is effective for translation between closely related Portuguese and Spanish and in Multilingual Low-Resource Translation for Indo-European Languages task at WMT21 (Akhbardeh et al., 2021), Jon et al. (2021) successfully apply character-level NMT to translation between Catalan and Occitan.
## 3 System description
### Data
We evaluate our models on translation from Czech to German, Spanish, Croatian, Hungarian and Slovak and vice-versa. We train on the MultiParaCrawl (Banon et al., 2020)1 corpus. It is based on ParaCrawl, which is English-centric (each language in the original dataset is aligned only to English). MultiParaCrawl aligns the sentences in the other languages that have the same English translation. This introduces mis-alignments into the dataset (it is possible that two sentences with different meanings in other languages have the same English translation), but we nevertheless use it to have a comparable training corpus for all the languages. We sample subsets for each language pair in sizes of 50k, 500k, and 5M sentences (the Croatian corpus only has about 800k sentences in total, so we use only the 50k and 500k sizes). We use FLORES-200 (Team et al., 2022) as validation and test sets (we keep the original splits). We note that this test set is created by translating the same English test set into all the languages and not by translating the two tested languages between each other - this might mean that the effect of language similarity is somewhat subdued in this setting.
Footnote 1: [https://opus.nlpl.eu/MultiParaCrawl.php](https://opus.nlpl.eu/MultiParaCrawl.php)
We segment the text using SentencePiece with the given vocabulary size (32k, 4k, or character-level model), with 99.95% character coverage and UTF-8 byte fallback for unknown characters. The segmentation models are trained on the whole 5M datasets, jointly for each pair.
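As an illustration, a segmentation model with these settings could be trained with the SentencePiece Python API roughly as follows; the corpus path and model names are placeholders, not the ones used in this work.

```python
import sentencepiece as spm

# Joint subword model for one language pair (here the 4k vocabulary), trained on the
# whole 5M-sentence corpus with 99.95% character coverage and byte fallback.
spm.SentencePieceTrainer.train(
    input="multiparacrawl.cs-de.5M.txt",   # placeholder path to the joint cs+de text
    model_prefix="spm.cs-de.4k",
    vocab_size=4000,
    character_coverage=0.9995,
    byte_fallback=True,
)
# A character-level segmentation can be obtained analogously with model_type="char".

sp = spm.SentencePieceProcessor(model_file="spm.cs-de.4k.model")
print(sp.encode("Dobrý den!", out_type=str))
```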
Language similarity. We use the chrF score (Popovic, 2015), traditionally used to compute translation quality, as a language similarity metric. It is a character-level metric and we hypothesize that character-level similarity is an important aspect for our experiments. We compute the chrF score of the Czech FLORES-200 test set relative to all the other languages (Table 1). We also show the lexical similarity score provided by the UKC database2, which is based on the number of cognates between languages in their contemporary vocabularies (Bella et al., 2021).
Footnote 2: [http://ulxc.disi.unitn.it/index.php/lexsim/](http://ulxc.disi.unitn.it/index.php/lexsim/)
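A sketch of this chrF-based similarity computation with SacreBLEU could look as follows; the FLORES-200 file names are placeholders, not the paths used in this work.

```python
# The untranslated Czech test set is scored as if it were a "translation" of the
# other language's test set; the resulting chrF2 score is used as a similarity proxy.
from sacrebleu.metrics import CHRF

chrf = CHRF()  # chrF2 (beta=2), the SacreBLEU default

with open("flores200.devtest.cs") as f:
    czech = [line.strip() for line in f]

for lang in ["sk", "hr", "es", "hu", "de"]:
    with open(f"flores200.devtest.{lang}") as f:
        other = [line.strip() for line in f]
    score = chrf.corpus_score(czech, [other])
    print(lang, round(score.score, 1))
```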
### Model
We trained Transformer (Vaswani et al., 2017) models to translate to Czech from the other languages (Hungarian, Slovak, Croatian, German and Spanish) and vice-versa using MarianNMT (Junczys-Dowmunt et al., 2018).
Our baseline model is Transformer-base (512-dim embeddings, 2048-dim ffn) with 6 encoder and 6 decoder layers. We also train two other versions of Transformer-base: with 16 encoder + 6 decoder layers and with 16 encoder + 16 decoder layers. For other hyper-parameters, we use the default configuration of MarianNMT. We evaluate the models on the validation set each 5000 updates and we stop the training after 20 consecutive validations without improvement in either chrF or cross-entropy. We use Adam optimizer Kingma and Ba (2017) and one shared vocabulary and embeddings for both source and target.
Similarly to Libovicky and Fraser (2020), we compared training char-level models from scratch to starting the training from subword-level models (both with 4k and 32k vocabularies) and switching to character-level processing after subword-level training converged. They obtained better results with a more complex curriculum learning scheme, while we only finetune the pre-trained model.
We performed a length analysis on the character level for all the datasets. Based on this, we set the maximum source sequence length for training and inference to 400 for all the systems. We skip longer training examples. In the worst case (Croatian to Czech), 1.3 % of the examples are skipped. Table 2 shows the average character lengths and the percentage of skipped training examples in all directions. For inference, we normalize the output score by the length of the hypothesis as implemented in Marian. We search for the optimal value of the length normalization constant on the validation set in the range of 0.5 to 4.0.
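A simple sketch of this filtering step is given below; the file names are placeholders, and following the text above only the source side is checked against the 400-character limit.

```python
# Skip training pairs whose source side exceeds 400 characters and report the
# average source length and the skip rate.
MAX_LEN = 400

def filter_corpus(src_path, tgt_path, out_src, out_tgt):
    total, skipped, char_sum = 0, 0, 0
    with open(src_path) as fs, open(tgt_path) as ft, \
         open(out_src, "w") as gs, open(out_tgt, "w") as gt:
        for src, tgt in zip(fs, ft):
            src, tgt = src.rstrip("\n"), tgt.rstrip("\n")
            total += 1
            char_sum += len(src)
            if len(src) > MAX_LEN:
                skipped += 1
                continue
            gs.write(src + "\n")
            gt.write(tgt + "\n")
    print(f"avg source length: {char_sum / total:.1f} chars, "
          f"skipped: {100 * skipped / total:.2f} %")

filter_corpus("train.hr", "train.cs", "train.filt.hr", "train.filt.cs")
```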
### Evaluation
We use SacreBLEU (Post, 2018) to compute BLEU and chrF scores. We set \(\beta=2\) for chrF in all the experiments (i.e. chrF2, the default in SacreBLEU). For COMET (Rei et al., 2020) scores we use the original implementation and the wmt20-comet-da model.
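A minimal COMET scoring sketch with the unbabel-comet package could look as follows; it assumes the version 1.x API (other versions return the scores in a different form) and placeholder file names.

```python
from comet import download_model, load_from_checkpoint

model = load_from_checkpoint(download_model("wmt20-comet-da"))

def read_lines(path):
    with open(path) as f:
        return [line.strip() for line in f]

src = read_lines("flores200.devtest.cs")    # placeholder paths
hyp = read_lines("hypothesis.sk")
ref = read_lines("flores200.devtest.sk")

data = [{"src": s, "mt": m, "ref": r} for s, m, r in zip(src, hyp, ref)]
output = model.predict(data, batch_size=8, gpus=1)
# Depending on the package version, `output` is either a (segment_scores,
# system_score) tuple or an object with a `system_score` attribute.
print(output)
```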
\begin{table}
\begin{tabular}{l r r}
**Language** & **chrF** & **LexSim** \\ \hline
**sk** & 36.7 & 16.5 \\
**hr** & 22.7 & 8.2 \\
**es** & 16.5 & 2.6 \\
**hu** & 16.3 & 2.9 \\
**de** & 15.4 & 3.7 \\ \end{tabular}
\end{table}
Table 1: UKC LexSim and chrF score-based similarities of the testsets, i.e. chrF score of untranslated Czech testset compared to the other languages.
\begin{table}
\begin{tabular}{l r r r}
**Pair** & **Lang** & **\% skip** & **Avg len** \\ \hline cs-de & cs & 0.43 & 88.2 \\ & de & 0.64 & 100.3 \\ cs-es & cs & 0.30 & 84.5 \\ & es & 0.50 & 95.5 \\ cs-hr & cs & 1.21 & 127.1 \\ & hr & 1.30 & 131.7 \\ cs-hu & cs & 0.26 & 76.4 \\ & hu & 0.45 & 83.0 \\ cs-sk & cs & 0.25 & 74.9 \\ & sk & 0.29 & 77.4 \\ \end{tabular}
\end{table}
Table 2: Percentage of examples exceeding the training source length limit (400 characters) and average sentence character lengths for all the training datasets for character-level training.
### Hardware
We ran the experiments on a grid comprising of Quadro RTX 5000, GeForce GTX 1080 Ti, RTX A4000, or GeForce RTX 3090 GPUs. We trained a total of about 170 models with training times ranging from 10 hours to 14 days, depending on the dataset, model, and GPUs used.
## 4 Results
### Subwords vs. characters
We compare BLEU, chrF and COMET scores for Transformer-base trained on different training dataset sizes and with different segmentations in all the language directions in Table 3 and the same results are plotted in Figure 1. First and foremost, the character-level models provide the best results for the most similar language pair, Czech-Slovak (sk), across training data sizes and translation directions. For example, with a 50k dataset, the character-level model achieves a COMET score of \(0.8834\) and \(0.8429\) in Czech-to-Slovak and Slovak-to-Czech translations, respectively. The scores are better compared to those of 4k and 32k vocab models with the same training dataset. This trend continues with larger datasets; the character-level model outperforms in both the 500k and 5M datasets, although for the largest datasets, the results are very similar across vocabulary sizes.
However, for the other language pairs, the results are mixed, and subword-level models often outperform character-level models, particularly with larger training dataset sizes. For instance, in Czech-to-Hungarian (hu) translations with a 5M dataset, the 32k vocab model achieves a COMET score of \(0.6531\) which is better than the \(0.6263\) score of the character-level model. The same pattern is observed in Czech-to-German (de) translations with the 32k vocab model outperforming the character-level model in the 5M dataset with a COMET score of \(0.6275\) against \(0.5955\).
For all the other languages (aside from Slovak), training on the 50k dataset fails to produce a usable translation model at any vocabulary size, even for the second most similar language, Croatian. However, as we show in the next section, we can see the benefits of char-level translation for Czech-Croatian when finetuning a char-level model from a subword-level model.
The results are more favorable for subword-level models with increasing training set sizes, probably due to the sparsity of the longer subwords in smaller datasets which results in worse quality of the embeddings. We also see that generally, character-level models perform better in terms of chrF (char-level metric) than BLEU and COMET. For example, see Czech-to-Spanish, 5M dataset: character model has the best chrF score (although by a small margin), but the worst BLEU and COMET scores.
### Finetuning
We took an alternative approach to training character-level models from scratch by fine-tuning the subword-level models. We only finetuned the models in the direction from Czech to the target language. Starting from the last checkpoint of the subword-level training, we switched the dataset to a character-split one. Since SentencePiece models include all the characters in their vocabularies, there was no need to adjust them. We proceeded with the same hyperparameters, including the optimizer parameters, after resetting the early-stopping counters.
We present the results in Tables 4 and 5 for models finetuned from 4k and 32k subword models, respectively. We see that in cases where training a char-level model from scratch didn't perform well compared to a subword-level one, finetuning from subword-level helps to attain the quality of the subword-level and even surpass it in some cases. For example, Czech-to-Croatian char level model without finetuning obtains COMET score of \(-1.4055\), but after finetuning from 4k model, the score increases to \(-0.2671\), which is also better than the \(-1.0112\) of the 4k model alone.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & & & \multicolumn{4}{c}{**Czech \(\rightarrow\) Lang**} & \multicolumn{4}{c}{**Lang \(\rightarrow\) Czech**} \\ \cline{3-9}
**Lang** & **Dataset** & **Vocab** & **BLEU** & **CHRF** & **COMET** & **BLEU** & **CHRF** & **COMET** \\ \hline \multirow{6}{*}{sk} & \multirow{3}{*}{500k} & char & **23.1** & **53.1** & **0.8834** & **23.4** & **53.1** & **0.8429** \\ & & 4k & 21.1 & 51.7 & 0.6989 & 21.6 & 51.8 & 0.7054 \\ & & 32k & 20.1 & 50.5 & 0.5155 & 20.1 & 50.2 & 0.5226 \\ \cline{2-9} & \multirow{3}{*}{500k} & char & **27.8** & **56.4** & **1.0737** & **27.2** & **56.1** & **1.0165** \\ & & 4k & 27.0 & 55.8 & 1.0574 & 26.7 & 55.8 & 1.0018 \\ & & 32k & 26.8 & 55.6 & 1.0342 & 26.3 & 55.4 & 0.9893 \\ \cline{2-9} & \multirow{3}{*}{5M} & char & **28.7** & **57.0** & **1.1035** & **28.4** & **56.8** & **1.0419** \\ & & 4k & 28.6 & 56.9 & 1.1012 & 28.1 & 56.5 & 1.0333 \\ & & 32k & **28.7** & 56.9 & 1.0973 & 28.2 & 56.6 & 1.0376 \\ \hline \multirow{6}{*}{hu} & \multirow{3}{*}{500k} & char & 0.6 & 21.0 & -1.4054 & 0.3 & 18.1 & -1.4137 \\ & & 4k & 1.9 & 25.4 & -1.3256 & 1.5 & 24.2 & -1.2826 \\ & & 32k & **3.0** & **28.3** & **-1.2141** & **2.1** & **25.5** & **-1.2116** \\ \cline{2-9} & \multirow{3}{*}{500k} & char & **13.3** & **45.8** & **0.1812** & **12.3** & **42.2** & 0.1892 \\ & & 4k & 12.7 & 44.7 & 0.1371 & **12.3** & 41.2 & **0.2414** \\ & & 32k & 12.4 & 43.4 & 0.0852 & 11.8 & 40.6 & 0.1658 \\ \cline{2-9} & \multirow{3}{*}{5M} & char & 17.4 & **50.8** & 0.6263 & 17.7 & 46.9 & 0.6999 \\ & & 4k & 17.7 & 50.3 & 0.6447 & 18.4 & **47.4** & 0.7283 \\ & & 32k & **18.3** & 50.6 & **0.6531** & **18.6** & 47.2 & **0.7325** \\ \hline \multirow{6}{*}{de} & \multirow{3}{*}{50k} & char & 0.4 & 22.5 & -1.5904 & 0.4 & 18.5 & -1.4006 \\ & & 4k & 2.2 & 29.2 & -1.3982 & 2.0 & 25.7 & -1.2548 \\ & & 32k & **4.7** & **33.7** & **-1.2014** & **4.7** & **29.9** & **-1.0102** \\ \cline{2-9} & \multirow{3}{*}{500k} & char & 18.0 & 50.6 & 0.3185 & **18.0** & **47.3** & 0.4657 \\ & & 4k & **19.2** & **50.9** & **0.3568** & **18.0** & **47.3** & **0.5533** \\ & & 32k & **19.2** & 50.3 & 0.3155 & 17.6 & 46.1 & 0.4517 \\ \cline{2-9} & \multirow{3}{*}{5M} & char & 0.4 & 22.5 & -1.5904 & 0.4 & 18.5 & -1.4006 \\ & & 4k & 2.2 & 29.2 & -1.3982 & 2.0 & 25.7 & -1.2548 \\ & & 32k & **4.7** & **33.7** & **-1.2014** & **4.7** & **29.9** & **-1.0102** \\ \cline{2-9} & \multirow{3}{*}{500k} & char & 18.0 & 50.6 & 0.3185 & **18.0** & **47.3** & 0.4657 \\ & & 4k & **19.2** & **50.9** & **0.3568** & **18.0** & **47.3** & **0.5533** \\ & & 32k & **18.3** & 50.6 & **0.6531** & **18.6** & 47.2 & **0.7325** \\ \cline{2-9} & \multirow{3}{*}{500k} & char & 0.4 & 22.5 & -1.5904 & 0.4 & 18.5 & -1.4006 \\ & & 32k & **19.2** & 50.5 & 0.6160 & 19.3 & 47.6 & 0.6170 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test set scores for Transformer-base models (6 encoder and 6 decoder layers) trained from scratch. Bold are the best results within the same training dataset.
Figure 1: Relationship between language similarity scores (chrF of the untranslated test set source) and BLEU, chrF and COMET scores, depending on vocabulary size. First row are the results for 50k sentence train set, second row for 500k train set and third row for 5M train set.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} & \multicolumn{3}{c}{Score} & \multicolumn{3}{c}{\(\Delta(char)\)} & \multicolumn{3}{c}{\(\Delta(4k)\)} \\ \cline{3-11} Lang & Dataset & BLEU & CHRF & COMET & BLEU & CHRF & COMET & BLEU & CHRF & COMET \\ \hline sk & 50k & 21.8 & 52.4 & 0.8750 & \(-1.3\) & \(-0.7\) & \(-0.0084\) & 0.7 & 0.7 & 0.1761 \\ & 500k & 27.6 & 56.3 & 1.0720 & \(-0.2\) & \(-0.1\) & \(-0.0017\) & 0.6 & 0.5 & 0.0146 \\ & 5M & 28.8 & 57.0 & 1.1017 & 0.1 & 0.0 & \(-0.0018\) & 0.2 & 0.1 & 0.0005 \\ \hline hu & 50k & 1.7 & 22.8 & -1.3850 & 1.1 & 1.8 & 0.0204 & \(-0.2\) & \(-2.6\) & \(-0.0594\) \\ & 500k & 13.4 & 46.0 & 0.2555 & 0.1 & 0.2 & 0.0743 & 0.7 & 1.3 & 0.1184 \\ & 5M & 18.2 & 51.2 & 0.6726 & 0.8 & 0.4 & 0.0463 & 0.5 & 0.9 & 0.0279 \\ \hline de & 50k & 2.9 & 30.7 & -1.4227 & 2.5 & 8.2 & 0.1677 & 0.7 & 1.5 & \(-0.0245\) \\ & 500k & 19.3 & 51.3 & 0.3966 & 1.3 & 0.7 & 0.0781 & 0.1 & 0.4 & 0.0398 \\ & 5M & 24.7 & 55.6 & 0.6214 & 0.6 & 0.4 & 0.0259 & 0.4 & 0.4 & 0.0171 \\ \hline es & 50k & 1.8 & 27.5 & -1.4024 & 1.6 & 4.5 & 0.0823 & \(-0.5\) & \(-0.9\) & \(-0.0734\) \\ & 500k & 16.3 & 46.4 & 0.2276 & 0.3 & \(-0.2\) & 0.0419 & 0.7 & 0.7 & 0.0511 \\ & 5M & 19.8 & 49.5 & 0.5038 & 0.5 & 0.0 & 0.0436 & \(-0.2\) & 0.2 & 0.0127 \\ \hline hr & 50k & 10.3 & 42.9 & -0.2671 & 10.1 & 21.7 & 1.1384 & 5.5 & 8.9 & 0.7441 \\ & 500k & 20.6 & 52.4 & 0.7382 & 1.0 & 0.8 & 0.0979 & 0.9 & 1.2 & 0.0460 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of char-level models for translation from Czech finetuned from 4k subword-level models. Numbers under \(\Delta(char)\) show the difference between fine-tuned model scores compared to the char-level model trained from scratch, under \(\Delta(4k)\) difference from the model that served as the initial checkpoint for the finetuning.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} & \multicolumn{3}{c}{Score} & \multicolumn{3}{c}{\(\Delta(char)\)} & \multicolumn{3}{c}{\(\Delta(32k)\)} \\ \cline{3-11} Lang & Dataset & BLEU & CHRF & COMET & BLEU & CHRF & COMET & BLEU & CHRF & COMET \\ \hline & 50k & 21.2 & 52.2 & 0.8697 & \(-1.9\) & \(-0.9\) & \(-0.0137\) & 1.1 & 1.7 & 0.3542 \\ sk & 500k & 27.5 & 56.2 & 1.0723 & \(-0.3\) & \(-0.2\) & \(-0.0014\) & 0.7 & 0.6 & 0.0381 \\ & 5M & 29 & 57.2 & 1.1011 & 0.3 & 0.2 & \(-0.0024\) & 0.3 & 0.3 & 0.0038 \\ \hline & 50k & 2.2 & 24.8 & -1.358 & 1.6 & 3.8 & 0.0474 & \(-0.8\) & \(-3.5\) & \(-0.1439\) \\ hu & 500k & 12.7 & 45.7 & 0.1832 & \(-0.6\) & \(-0.1\) & \(0.0020\) & 0.3 & 2.3 & 0.0980 \\ & 5M & 18 & 51.0 & 0.6589 & 0.6 & 0.2 & \(0.0326\) & \(-0.3\) & 0.4 & 0.0058 \\ \hline & 50k & 4.5 & 33.3 & -1.3335 & 4.1 & 10.8 & 0.2569 & \(-0.2\) & \(-0.4\) & \(-0.1321\) \\ de & 500k & 19.4 & 51.4 & 0.3775 & 1.4 & 0.8 & 0.0590 & 1.4 & 0.8 & 0.0590 \\ & 5M & 24.8 & 55.6 & 0.6274 & 0.7 & 0.4 & 0.0319 & \(-0.4\) & \(-0.1\) & \(-0.0001\) \\ \hline & 50k & 3.3 & 30.9 & -1.3182 & 3.1 & 7.9 & 0.1665 & \(-1.3\) & \(-1.7\) & \(-0.1498\) \\ es & 500k & 15.8 & 46.2 & 0.1854 & \(-0.2\) & \(-0.4\) & \(-0.0003\) & 0.0 & 0.8 & 0.0878 \\ & 5M & 19.6 & 49.4 & 0.4875 & 0.3 & \(-0.1\) & \(0.0273\) & \(-0.8\) & 0.0 & \(-0.0199\) \\ \hline hr & 50k & 8.9 & 41.3 & -0.4144 & 8.7 & 20.1 & 0.9911 & 1.2 & 3.2 & 0.2904 \\ & 500k & 20.5 & 52.0 & 0.7181 & 0.9 & 0.4 & 0.0778 & 1.3 & 1.5 & 0.1021 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of char-level models for translation from Czech finetuned from 32k subword-level models. Numbers under \(\Delta(char)\) show the difference between fine-tuned model scores compared to the char-level model trained from scratch, under \(\Delta(32k)\) difference from the model that served as the initial checkpoint for the finetuning.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} \\ \cline{4-9} \multicolumn{2}{c}{**Lang**} & **Dataset** & **Vocab** & **BLEU** & **CHRF** & **COMET** & **BLEU** & **CHRF** & **COMET** \\ \hline \multirow{9}{*}{sk} & \multirow{3}{*}{500k} & char & **21.9** & **52.4** & **0.8475** & **21.9** & **52.0** & **0.8001** \\ & & 4k & 20.2 & 51.0 & 0.6444 & 19.3 & 50.1 & 0.5262 \\ & & 32k & 19.6 & 50.1 & 0.5308 & 20.1 & 50.4 & 0.5764 \\ \cline{2-9} & \multirow{3}{*}{500k} & char & **27.4** & **56.0** & **1.0621** & **27.4** & **56.1** & **1.0618** \\ & & 4k & 26.5 & 55.6 & 1.0432 & 26.6 & 55.6 & 1.0469 \\ & & 32k & 26.2 & 55.4 & 1.0319 & 26.2 & 55.4 & 1.0194 \\ \cline{2-9} & \multirow{3}{*}{5M} & char & **28.6** & **57.0** & **1.1016** & **28.5** & **56.9** & **1.1013** \\ & & 4k & **28.6** & 56.9 & 1.1015 & 28.3 & 56.7 & 1.0920 \\ & & 32k & 28.2 & 56.7 & 1.0916 & 28.4 & 56.8 & 1.0986 \\ \hline \multirow{9}{*}{hu} & \multirow{3}{*}{500k} & char & 2.8 & 26.2 & -1.3086 & 2.9 & 25.2 & -1.3019 \\ & & 4k & 2.8 & 26.4 & -1.2933 & 2.5 & 26.6 & -1.2995 \\ & & 32k & **3.0** & **28.3** & **-1.2445** & **3.1** & **27.5** & **-1.2623** \\ \cline{2-9} & \multirow{3}{*}{500k} & char & **12.9** & **45.7** & **0.0855** & 11.8 & **43.4** & **-0.0212** \\ & & 4k & 11.1 & 42.0 & -0.1612 & 11.1 & 41.8 & -0.1580 \\ & & 32k & 11.4 & 42.3 & -0.0943 & **12.0** & 42.5 & -0.0934 \\ \cline{2-9} & \multirow{3}{*}{5M} & char & 17.3 & **50.7** & **0.6280** & **17.6** & **50.1** & 0.6102 \\ & & 4k & 17.3 & 49.8 & 0.6140 & 17.4 & 49.8 & 0.6045 \\ & & 32k & **17.7** & 49.9 & **0.6280** & 17.5 & 50.0 & **0.6409** \\ \hline \multirow{9}{*}{de} & \multirow{3}{*}{500k} & char & **5.7** & **35.4** & **-1.2272** & **5.0** & **33.0** & -1.2836 \\ & & 4k & 3.5 & 31.5 & -1.3532 & 3.2 & 31.0 & -1.3571 \\ & & 32k & 4.8 & 34.2 & -1.2328 & 3.8 & 32.9 & **-1.2819** \\ \cline{2-9} & \multirow{3}{*}{500k} & char & **18.9** & **51.1** & **0.3203** & **18.6** & **51.0** & **0.3155** \\ & & 4k & 17.1 & 49.1 & 0.1909 & 16.6 & 48.4 & 0.1292 \\ & & 32k & 17.7 & 48.8 & 0.1595 & 17.5 & 49.0 & 0.1624 \\ \cline{2-9} & \multirow{3}{*}{5M} & char & 24.1 & **55.4** & 0.6146 & 24.1 & **54.9** & 0.6007 \\ & & 4k & 24.6 & 55.3 & 0.6138 & 24.1 & 54.8 & 0.6006 \\ & & 32k & **24.8** & 55.2 & **0.6178** & **24.3** & 54.7 & **0.6055** \\ \hline \multirow{9}{*}{es} & \multirow{3}{*}{500k} & char & **18.9** & **51.1** & **0.3203** & **18.6** & **51.0** & **0.3155** \\ & & 4k & 17.1 & 49.1 & 0.1909 & 16.6 & 48.4 & 0.1292 \\ & & 32k & 17.7 & 48.8 & 0.1595 & 17.5 & 49.0 & 0.1292 \\ & & 32k & 17.7 & 48.8 & 0.1595 & 17.5 & 49.0 & 0.1292 \\ & & 4k & 20.0 & 48.9 & 0.1595 & 17.5 & 49.0 & 0.1292 \\ \hline \multirow{9}{*}{hr} & \multirow{3}{*}{500k} & char & **15.5** & **45.7** & **0.1277** & **14.8** & **45.6** & **0.0684** \\ & & 4k & 15.0 & 44.6 & 0.0258 & 14.3 & 43.8 & -0.0695 \\ & & 32k & 14.6 & 44.1 & -0.0454 & **14.8** & 44.1 & -0.0491 \\ \cline{1-1} \cline{2-9} & \multirow{3}{*}{5M} & char & **20.1** & **49.7** & **0.4917** & **19.8** & **49.1** & 0.4679 \\ & & 4k & 19.3 & 48.8 & 0.4712 & 19.6 & 49.0 & 0.4582 \\ & & 32k & 20.0 & 48.9 & 0.4670 & 19.9 & 49.0 & **0.4708** \\ \hline \multirow{9}{*}{es} & \multirow{3}{*}{500k} & char & **10.3** & **42.3** & **-0.4010** & **9.5** & **40.4** & **-0.4877** \\ & & 4k & 5.7 & 35.5 & -0.9234 & 4.5 & 33.3 & -1.0641 \\ \cline{1-1} & & 32k & 7.8 & 37.9 & -0.7439 & 6.7 & 35.8 & 
-0.8185 \\ \cline{1-1} \cline{2-9} & \multirow{3}{*}{500k} & char & **19.3** & **51.6** & **0.
Similar, although small, increases compared to training from scratch can be seen across all the language pairs, with the exception of Czech-Slovak. For this pair, the translation quality of the character-level model trained from scratch is already much higher on the 50k and 500k datasets. Finetuning from either the 32k or the 4k models hurts the quality in this case, which could be expected.
After the finetuning, the char-level Croatian model clearly outperforms both 4k and 32k subword models on the 50k dataset in all the metrics. As this did not occur with other, less similar languages, we hypothesize that language similarity is again an important factor in favor of character-level translation.
### Model size
Previous work suggests that character-level processing in Transformers requires the use of deeper models to reach the same performance as subword-level processing. We present experiments with increasing depth of the model in Table 6. All the models are trained in the direction Czech to target. The model sizes are described in Section 3.2. We observe improvements in character-level translation compared to subword-level models of the same depth, but not compared to the Transformer-base models (the results are actually often worse than for the base model). For instance, in German (de) target language with the 500k dataset, the character-level model using 16 encoder layers and 6 decoder layers yielded a COMET score of 0.3203. In contrast, the 4k and 32k vocab subword-level models achieved lower scores of 0.1909 and 0.1595, respectively. Similar patterns can be observed for other languages and datasets as well. However, the vanilla Transformer-base with 4k (Table 3) obtained COMET of 0.3568, still outperforming even the deeper character-level model. The baseline models outperform the deeper models with 4k and 32k vocabularies, often by a large margin, while performance at char-level remains similar or only slightly worse (compare corresponding rows in Table 3 and Table 6).
We hypothesize that the absence of improvements is caused by small dataset sizes and non-optimal hyperparameter choices. The results however suggest that deeper models are better suited for character-level translation, even though they mostly fail to outperform the shallower models in our setting.
## 5 Conclusions
We trained standard Transformer models to translate between languages with different levels of similarity both on subword-segmented and character-segmented data. We also varied the model depth and the training set size. We show that character-level models outperform subword-segmented models on the most closely related language pair (Czech-Slovak) as measured by automated MT quality metrics. Finetuning models trained with subword-level segmentation to character-level increases the performance in some cases. After finetuning, character-level models surpass the quality of subword-level models also for Czech-Croatian. Other, less similar language pairs reach similar performances for both subword- and character-level models.
## Acknowledgements
This research was partially supported by grant 19-26934X (NEUREM3) of the Czech Science Foundation and by the Charles University project GAUK No. 244523.
|
2307.03272
|
Recent advances in metallographic polishing for SRF application
|
This paper is an overview of the metallographic polishing R&D program
covering Niobium and Copper substrates treatment for thin film coating as an
alternative fabrication pathway for 1.3 GHz elliptical cavities. The presented
research is the result of a collaborative effort between IJCLab, CEA/Irfu, HZB,
and KEK in order to develop innovative surface processing and cavity
fabrication protocols capable of meeting stringent requirements for SRF
surfaces, including the reduction of safety risks and ecological footprint,
enhancing reliability, improving the surface roughness, and potentially
allowing cost reduction. The research findings will be disclosed.
|
O. Hryhorenko, C. Z. Antoine, T. Proslier, F. Eozenou, T. Dohmae, S. Keckert, O. Kugeler, J. Knobloch, D. Longuevergne
|
2023-07-06T20:18:49Z
|
http://arxiv.org/abs/2307.03272v2
|
# Recent Advances in
###### Abstract
This paper is an overview of the metallographic polishing R&D program covering Niobium and Copper substrates treatment for thin film coating as an alternative fabrication pathway for 1.3 GHz elliptical cavities. The presented research is the result of a collaborative effort between IJCLab, CEA/Irfu, HZB, and KEK in order to develop innovative surface processing and cavity fabrication protocols capable of meeting stringent requirements for SRF surfaces, including the reduction of safety risks and ecological footprint, enhancing reliability, improving the surface roughness, and potentially allowing cost reduction. The research findings will be disclosed.
## 1 Introduction
The SRF cavities made of bulk Niobium are reaching their theoretical performance limits via different surface processing techniques (surface roughness decrease, interstitials engineering,...) [1, 2, 3]. The global strategy for future decades in the field of superconducting radiofrequency (SRF) technology is focused on the fabrication of novel superconducting cavities made of alternative superconductors and multilayer structures [4]. This approach aims to enhance performance beyond the theoretical limits of bulk Nb and improve the accelerator cost-efficiency using thin films with the ultimate goal of operation at 4.2K. A key aspect of this strategy involves optimizing and industrializing the substrate preparation (cavity), specifically focusing on materials such as Nb and Cu. This paper examines the protocols involved in the pursuit of substrate preparation via metallographic polishing (MP) technology for niobium and copper, highlighting the potential benefits of MP in the preparation of flat samples. In the framework of the IFAST project, RF disks and Quadrupole (QPR) resonator samples are used for superconducting RF evaluation of deposited thin films [5].
Samples were provided by STFC (Daresbury, UK) and HZB (Berlin, Germany) labs, were polished at IJCLab, and their surface quality was evaluated. The disclosed findings presented here contribute to the ongoing efforts in investigating an alternative cavity fabrication pathway, see related conference paper[6], in the framework of the FJPPL program. Moreover, in this work we show the first successful results of the RF test achieved on the QPR sample after metallographic polishing.
## 2 Experimental Set-Up and Quality Control
RF disks and QPR samples were polished with a commercial metallographic polishing (MP) machine MASTERLAM 1.0 (LAM PLAN production) at IJCLab. The surfaces were studied and the achieved quality was controlled after each manipulation via visual inspection and roughness measurements using a Keyence VK-X 200 laser confocal microscope. The roughness measurements were performed on 9 different spots with a scan area of 290 \(\upmu\)m x 215 \(\upmu\)m. Two parameters, the average roughness (Sa) and the maximal height deviation (Sz), were monitored and recorded. To complete the characterization of the MP influence on the SRF surface at a high magnetic field, a QPR sample was evaluated under RF at HZB.
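For reference, the two monitored parameters can be computed from a measured height map as in the following sketch, which assumes the standard areal definitions (Sa: arithmetic mean deviation from the mean plane; Sz: maximum peak-to-valley height) rather than the microscope vendor's implementation, and uses synthetic data in place of the nine 290 \(\upmu\)m x 215 \(\upmu\)m scans.

```python
import numpy as np

def sa_sz(height_map_um):
    """Sa: mean absolute deviation from the mean plane; Sz: max peak-to-valley height."""
    z = np.asarray(height_map_um, dtype=float)
    z = z - z.mean()                 # reference to the mean plane (no tilt removal here)
    return np.mean(np.abs(z)), z.max() - z.min()

# Synthetic stand-in for nine measurement spots.
rng = np.random.default_rng(0)
spots = [rng.normal(scale=0.03, size=(290, 215)) for _ in range(9)]
values = np.array([sa_sz(spot) for spot in spots])
print(f"Sa = {values[:, 0].mean():.3f} +/- {values[:, 0].std():.3f} um")
print(f"Sz = {values[:, 1].mean():.3f} +/- {values[:, 1].std():.3f} um")
```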
### RF Disks: Copper and Niobium Processing
Disks of the two types, see Fig. 1, are utilized at STFC for RF measurements with a choke cavity operated at 7.8 GHz, at a temperature of 4 K, and with surface magnetic fields up to 1 mT [7]. Type 1 (diameter of 100 mm), which can be made of copper (Cu) or niobium (Nb), requires indium brazing between the sample and the support holder, so indium removal after characterization is needed. Type 2 (diameter of 110 mm), which is exclusively applied to copper, is based on a slightly more complicated design, as bolts are used to connect the sample with the choke cavity, resulting in the surface being drilled, but the indium removal step is omitted.
The 5-step procedure described in Table 1 was used to MP polish RF disks made of copper. The recipe was applied on 3 disks of Type 1 and on one disk of Type 2. The first three steps are focused to planarize the surface. The last 2 steps are focused on removing the damage from the previous steps and creating a mirror-smooth and scratch-free surface. Considering prior experience with Nb (niobium), it was determined that the 5-step procedure could be replaced with a 2-step recipe [8]; however, this modification would significantly prolong the preparation time.
In addition to procedure steps, proper cleaning is required between steps. At the end of each step, the surface was rinsed with ultra-pure water, cleaned in the ultrasound bath (only de-ionized water), and dried with a Nitrogen gun. In order to improve the surface state, at the last polishing step,
the abrasive supply is replaced by deionized water on the polishing pad, so superior cleaning is achieved. The cleaning time varies depending on the sample size.
As can be seen, in Fig. 2, the initial patterns after machining were removed by the MP technology in less than 90 minutes, and roughness parameters decreased from Sa=1.24 um, Sz=14 um down to Sa=0.02 um, Sz= 2.5 um.
The niobium RF disk with an RRR of 400 underwent an MP procedure, see Fig. 3. The received disk was found to have the following surface roughness parameters: \(\text{Sa}=2.2\pm 0.3\) um, \(\text{Sz}=21\pm 4\) um. The high Sz indicates non-optimal flatness, so the sample underwent pretreatment through a grinding process (removal of a 200 um layer). Subsequently, a standard 2-step procedure [9] was applied.
The recipe is presented in Table 2. The obtained roughness values are Sa = 30 \(\pm\) 9 nm, \(\text{Sz}=2.3\pm 0.2\) um. In cavity fabrication, it is typically common to use material of grade RRR=300. However, in the latter case, the roughness after an identical 2-step polishing procedure is higher (typically Sa = 260 nm after a 2-hour cycle), see the comparison in Fig. 4.
Fig. 5 compares the average roughness for the different materials polished. Based on the visual inspection, the grains revealed for the RRR=400 grade are significantly bigger (>500 um) than for RRR=300 (100-200 um).
\begin{table}
\begin{tabular}{p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt}} \hline \hline
**Step** & **1** & **2** & **3** & **4** & **5** \\ \hline
**Disk/Commercial name** & Resin/Platinum 1 & Resin/Platinum 4 & Resin+Cu/Cameo Gold & Woven fibre/3TL1 & Viscous fibre/4FV3 \\ \hline
**Abrasive/Size, um** & diamond/125 & diamond/15 & diamond/3 & diamond/3 & diamond/1 \\ \hline
**Applied pressure, kPa** & 20 & 25 & 20-25 & 30 & 30 \\ \hline
**Rotation speed, RPM** & 150/150 & 150/150 & 150/150 & 60/150 & 60/150 \\ \hline
**Time, min** & Until plane & 15 & 15 & 40 & 10 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Recipe for Cu Planar Samples
\begin{table}
\begin{tabular}{p{56.9pt} p{56.9pt} p{56.9pt}} \hline \hline
**Step** & **1** & **2** \\ \hline
**Disk** & Resin+Cu/Cameo Gold & Micro-porous polyurethane/4MP2 \\ \hline
**Abrasive/Size** & diamond/3 μm & SiO2/50 nm + H2O2 + NH4OH diluted in water (20\%) \\ \hline
**Applied pressure, kPa** & 10-150/150 & 150/150 \\ \hline
**Rotation speed, RPM** & 150/150 & 150/150 \\ \hline
**Time, min** & To remove damaged layer & 480 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Recipe Developed at IJCLab/CEA-IRFU for Nb Planar Sample [10].
Figure 1: Photographs of the Copper RF disks used at STFC for RF characterization at 7.8 GHz: a) design 1 before MP polishing, b) design 1 after MP polishing, c) design 2 before MP, d) design 2 after MP. Note: samples are used in two different designs. Design 1 (diameter of 100 mm) requires the indium brazing between the sample and the support holder. Design 2 uses bolts (diameter of 110 mm).
Figure 3: Photographs of the Nb RF disks used at STFC for RF characterization at 7.8 GHz: a) "as received", b) after MP.
Figure 2: Visual inspection of Cu disks with a laser confocal microscope: a) after machining, b) after MP polishing.
The RF disks polished in this work will be used for thin-film deposition activities in the framework of the IFAST project at STFC. A Nb RF disk before and after MP polishing was evaluated using a choke cavity, and the results were already published, see the paper by D. Seal _et al_. [7]. However, in order to assess whether the surface properties are close to the defined specifications (lower frequency, higher external magnetic field), an advanced characterization with a QPR system is required.
### QPR Niobium Processing
In order to optimize the superconducting properties of the material, the QPR samples undergo a surface treatment compatible with a cavity preparation. In addition, complete removal of indium from the sample flange is required. Figure 6 shows a typical surface after BCP, EP and MP.
The roughness was evaluated according to the type of polishing performed, see Fig. 7. The MP-treated surface shows a superior finish (Sa=0.5 \(\upmu\)m) compared to the conventional treatment (Sa=1 \(\upmu\)m for EP). However, to validate this technology for substrate preparation, an RF test is required.
RF characterization was performed with a Quadrupole resonator (QPR) at the HZB lab [11]. This technique gives a unique possibility to test flat samples by a calorimetric method at three operation frequencies (415 MHz, 847 MHz and 1289 MHz) and at a high RF field [12]. The design of the system is based on QPR samples with a diameter of 75 mm, and an indium brazing is required as an intermediate step to mount the sample to the test bench.
Figure 4: Visual inspection of Nb disks with a laser confocal microscope: a) "as received", b) after MP polishing (RRR=400), and c) after MP polishing (RRR=300).
Figure 5: Average surface roughness Sa versus MP polished sample.
Figure 6: Visual inspection of the QPR sample: a) after BCP, b) after EP, c) after MP.
Figure 7: Images of the QPR sample taken with a laser confocal microscope: a) after BCP, b) after EP, c) after MP.
The baseline measurement (QPR run #32) history:
* 100 \(\upmu\)m removal by EP.
* Annealing 900 \({}^{\circ}\)C for 3 hours.
* Flash 20 \(\upmu\)m removal by EP.
The MP measurement (QPR run #38) history:
* Step 1: 20 \(\upmu\)m removal by MP (abrasion step).
* Step 2: 5 \(\upmu\)m removal by MP (polishing step).
* Annealing 600 \({}^{\circ}\)C for 10 hours.
Figure 8 depicts the measured surface resistance as a function of the RF field for the sample after the baseline processing and after MP polishing. The residual surface resistance was extracted from the measured surface resistance and was lower than 1 nOhm in the case of the MP-polished surface, and lower than 10 nOhm in the case of the EP-treated surface. The improved surface preparation pushed the RF fields above 80 mT; as a result, less incident power is dissipated.
## 5 Conclusion
We have shown that the preparation of flat samples with the MP technology gives superior smoothness and flatness in comparison with conventional treatments such as BCP or EP. The baseline MP recipe can be used as a robust method for the production of substrates for the subsequent deposition of thin films in the framework of the IFAST project. Additional studies are still required to characterize the cavity performance with this technology, as the flat surface will be replaced by a curved one. These studies will be carried out in the framework of the FJPPL program.
The authors would like to mention that the MP polishing and visual inspection were done at Plateforme Vide & Surface (IJCLab). Part of this work was supported by the European Nuclear Science and Application Research-2 (ENSAR-2) under grant agreement N\({}^{\circ}\) 654002. This project has received funding from the European Union's Horizon 2020 Research and Innovation programme (IFAST) under Grant Agreement No 101004730 to perform an RF test. O. Hryhorenko is currently supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177.
|
2308.02931
|
Cultural heterogeneity constrains diffusion of innovations
|
Rogers' diffusion of innovations theory asserts that cultural similarity
among individuals plays a crucial role in the acceptance of an innovation in a
community. However, most studies on the diffusion of innovations have relied on
epidemic-like models where the individuals have no preference on whom they
interact with. Here, we use an agent-based model to study the diffusion of
innovations in a community of synthetic heterogeneous agents whose interaction
preferences depend on their cultural similarity. The community heterogeneity
and the agents' interaction preferences are described by Axelrod's model,
whereas the diffusion of innovations is described by a variant of the Daley and
Kendall model of rumour propagation. The interplay between the social dynamics
and the spreading of the innovation is controlled by the parameter $p \in
[0,1]$, which yields the probability that the agent engages in social
interaction or attempts to spread the innovation. Our findings support Rogers'
empirical observations that cultural heterogeneity curbs the diffusion of
innovations.
|
Aruane M. Pineda, Sandro M. Reia, Colm Connaughton, José F. Fontanari, Francisco A. Rodrigues
|
2023-08-05T18:14:02Z
|
http://arxiv.org/abs/2308.02931v1
|
# Cultural heterogeneity constrains diffusion of innovations
###### Abstract
Rogers' diffusion of innovations theory asserts that the cultural similarity among individuals plays a crucial role in the acceptance of an innovation in a community. However, most studies on the diffusion of innovations have relied on epidemic-like models where the individuals have no preference on whom they interact with. Here, we use an agent-based model to study the diffusion of innovations in a community of synthetic heterogeneous agents whose interaction preferences depend on their cultural similarity. The community heterogeneity and the agents' interaction preferences are described by Axelrod's model, whereas the diffusion of innovations is described by a variant of the Daley and Kendall model of rumour propagation. The interplay between the social dynamics and the spreading of the innovation is controlled by the parameter \(p\in[0,1]\), which yields the probability that the agent engages in social interaction or attempts to spread the innovation. Our findings support Rogers' empirical observations that cultural heterogeneity curbs the diffusion of innovations.
PACS: 89.75.-k Complex systems; 87.23.Ge Dynamics of social systems; 89.75.Fb Structures and organization in complex systems
## 1 Introduction
Everett Rogers proposed in 1962 a theory for the diffusion of innovations that explained how novel ideas spread and are perceived by individuals in a community [1]. Among the goals of Rogers' ambitious enterprise was the identification of the elements that determine how innovations spread among individuals, communities, societies, and nations [2, 3]. The social fabric, or the relationships and connections that individuals make with each other, is one of these elements: Rogers found that the degree of cultural heterogeneity of a community influences how far information spreads among its members, concluding that cultural similarities facilitate the diffusion of innovations.
Individual heterogeneity refers to the variety of traits and behaviors that characterize the individuals within a community. Individuals that exhibit a wide range of traits and behaviors may also have varying requirements and preferences that can affect both their interests and the rate of adoption of novelties. Individual heterogeneity can result in a more varied and effective information flow, which can speed up the diffusion of innovations. The degree of shared values, beliefs, and norms among members of a community is referred to as cultural similarity [1]. People of similar cultural backgrounds may have similar requirements and preferences. According to Rogers, cultural similarity can speed up the spreading of innovations by lowering uncertainty and fostering interpersonal trust [1, 4, 5].
To facilitate mathematical analysis, individual heterogeneity is usually not taken into account in the models for the propagation of rumours, innovations, ideas, or even infectious diseases [6, 7]. However, individual heterogeneity and cultural similarity can easily be incorporated in those models by borrowing the agents' representation and the interaction rules introduced by Axelrod in his celebrated model of culture dissemination [8]. Axelrod's model offers an explanation for the fact that although social interactions increase the similarity between individuals - a phenomenon known as social influence - there is still cultural diversity in human societies. In a two-dimensional lattice where the agents interact with their nearest neighbors only, the model exhibits two classes of absorbing configurations: ordered (monocultural) configurations in which all agents (or the vast majority of them) exhibit the same culture, and disordered (multicultural) configurations in which multiple cultures coexist in the lattice. The absorbing configuration is the result of the interplay between the disorder of the initial configuration, tuned by the two parameters of the model, viz., \(F\) (number of cultural features) and \(Q\) (number of values each cultural feature takes on), and social influence [9, 10]. Interestingly, the model shows that multicultural configurations are possible outcomes of a dynamics where agents become more alike after each interaction.
The diffusion of new ideas in the academic world has been modeled using SIR-like epidemic models [11, 12], but the standard epidemic model for studying the spreading of rumours is the DK model proposed by Daley and Kendall in the 1960s [13]. In the DK model, the agents can assume three possible states, viz., \(S\) (ignorant), \(I\) (spreader) and \(R\) (stifler). Ignorants are not aware of the rumour, spreaders are aware of the rumour and spread it, whereas stiflers are aware of the rumour but do not spread it.
Here, we combine the Axelrod model and the DK model to study the diffusion of innovations (i.e., rumours) in a more realistic scenario where the agents are culturally heterogeneous at the outset, but their cultural dissimilarities change dynamically as they interact. This approach allows us to further investigate Rogers' empirical observations regarding the relationship between cultural heterogeneity and the success of innovations. The coupling between these models is that the spreader's attempt to pass on the rumour to an ignorant or to a stifler is conditioned on the cultural similarity between those agents. In that sense, our approach differs starkly from previous studies of diffusion of innovations in Axelrod's model in which the innovation was a novel value of a cultural feature [14, 15]. The cultural or Axelrod's dynamics takes place with probability \(p\), whereas the rumour or DK dynamics takes place with probability \(1-p\), so \(p\) can be viewed as a proxy for the openness of agents to social influence. A similar idea was used to verify the impact of rumour propagation on disease spreading [16, 17] and to study interacting diseases [18]. Henceforth we will use the words innovation and rumour interchangeably.
Since we keep the number of cultural features fixed to \(F=3\) throughout this paper, the cultural diversity of the community is controlled solely by the parameter \(Q\): the greater the number of distinct values that each cultural feature can take on, the more cultural diverse the community is. The main focus of this study is the fraction of agents that are aware of the rumour after the dynamics reaches an absorbing configuration, which is given by the ratio between the number of stiflers and the community size. We note that in an absorbing configuration there can be only stiflers and ignorants. Henceforth, we will denote the average of this ratio over many independent runs by \(\langle R\rangle\). Clearly, \(\langle R\rangle\) is a proxy for the success of the rumour: the greater it is, the more agents are aware of the rumour. The cultural diversity \(Q\) and the openness to social influence \(p\) will be the leading independent variables in our study. In particular, we find that \(\langle R\rangle\) decreases with increasing \(Q\) and increases with increasing \(p\). Therefore, heterogeneity makes it difficult for a rumour to spread in a community and so different narratives may emerge without need of topological changes [19, 20, 21].
## 2 Methods
The agent based model we study here is illustrated in Fig. 1. Agents are placed in a square lattice of side \(L\) with periodic boundary conditions and neighborhood given by the four nearest sites. As mentioned before, the Axelrod and the rumour dynamics take place with probability \(p\) and \(1-p\), respectively. The cultural and the rumour dynamics are, in principle, independent. In the following, we present the details of the implementation of both dynamics.
The Axelrod Model. Each agent \(i\) is assigned a cultural vector \(\Phi_{i}=(\phi_{i1},\phi_{i2},...,\phi_{iF})\) of length \(F\). Each component of this vector represents a cultural feature prone to social influence, and each feature can take on \(Q\) values (traits), i.e., \(\phi_{ik}\in\{1,\ldots,Q\}\) [8]. The number of features (\(F\)) and traits (\(Q\)) determines the degree of disorder of the initial configuration since there are \(Q^{F}\) possible cultural states (or cultures) for each agent. The cultural dynamics proceeds as follows:
* An agent \(i\) is selected at random.
* An agent \(j\in V_{i}\), where \(V_{i}\) is the set of neighbors of agent \(i\), is selected at random.
* With probability given by the cultural similarity of agents \(i\) and \(j\), viz.,
\[P_{ij}=\frac{1}{F}\sum_{k=1}^{F}\delta_{\phi_{ik},\phi_{jk}}, \tag{1}\]
a feature \(k\) such that \(\phi_{ik}\neq\phi_{jk}\) is selected at random and \(\phi_{jk}\rightarrow\phi_{ik}\). Here \(\delta_{\phi_{ik},\phi_{jk}}=1\) if \(\phi_{ik}=\phi_{jk}\) and \(0\) otherwise.
Figure 1: **Mechanisms of cultural assimilation and rumour propagation.** Agent \(i\) takes part in the cultural dynamics with probability \(p\) and in the spreading of the rumour with probability \(1-p\). In the cultural dynamics, agent \(i\) selects a random neighbor \(j\) and assimilates a divergent feature with probability \(P_{ij}\). In the rumour dynamics, partially represented here, a spreader agent \(i\) transmits the rumour to an ignorant neighbor \(j\) with probability \(\lambda P_{ij}\).
These steps are repeated until an absorbing configuration is reached. In an absorbing configuration, any pair of neighbouring agents must share no cultural features (i.e., \(P_{ij}=0\)) or have identical cultures (i.e., \(P_{ij}=1\)).
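A minimal sketch (not the authors' code) of this cultural dynamics on an \(L\times L\) periodic lattice could look as follows; the parameter values are only illustrative and match those of the top row of Fig. 2.

```python
import numpy as np

L, F, Q = 10, 3, 4
rng = np.random.default_rng(0)
culture = rng.integers(1, Q + 1, size=(L, L, F))   # phi_ik in {1, ..., Q}

def neighbors(x, y):
    # four nearest neighbours with periodic boundary conditions
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def similarity(a, b):
    # P_ij = (1/F) * sum_k delta(phi_ik, phi_jk), Eq. (1)
    return np.mean(a == b)

def axelrod_step():
    x, y = rng.integers(L, size=2)                  # random agent i
    nx, ny = neighbors(x, y)[rng.integers(4)]       # random neighbour j
    i, j = culture[x, y], culture[nx, ny]
    p = similarity(i, j)
    # interaction with probability P_ij; nothing changes if the cultures are
    # identical (P_ij = 1) or share no feature (P_ij = 0)
    if 0 < p < 1 and rng.random() < p:
        k = rng.choice(np.flatnonzero(i != j))
        j[k] = i[k]                                 # agent j assimilates feature k of i

for _ in range(100_000):
    axelrod_step()
```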
The Rumour Model. Here we use the variant of the DK model introduced by Maki and Thompson [22], which we will refer to as the MT model. Whereas in the DK model both agents can have their states updated in a single interaction, in the MT model only one agent is updated per interaction. Despite this difference, the outcomes of both models are similar and well described by the same mean-field approach, with the DK model exhibiting a higher dispersion of mean values than the MT model [23]. In addition, in both rumour models the agents are homogeneous, in the sense that they have (_i_) the same probability of receiving and re-transmitting the rumour when they become spreaders, and (_ii_) the same probability of becoming stiflers.
The spreading of the rumour is a contact process driven by the following rules:
* An agent \(i\) is selected at random.
* If agent \(i\) is a spreader, then:
* An agent \(j\in V_{i}\) is selected at random.
* If agent \(j\) is ignorant, then agent \(j\) becomes a spreader with probability \(\lambda P_{i,j}\).
* If agent \(j\) is a spreader or a stifler, then agent \(i\) becomes a stifler with probability \(\alpha P_{i,j}\).
These steps are repeated until the dynamics reaches an absorbing configuration where there are no spreaders in the lattice. To ensure convergence, we introduce the recovery probability \(\delta=10^{-6}\) for a spreader to turn into a stifler spontaneously [24].
Figure 2: **Absorbing configurations of the coupled culture-rumour model.** For a fixed number of cultural features \(F\), the diversity of the community is captured by the number of cultural traits \(Q\). In a scenario of low cultural diversity, interactions increase even further the similarity between agents, which in turn facilitates the spreading of the rumour. For highly diverse communities, the steady state is characterized by the presence of many distinct cultural domains, which hamper the spreading of the rumour. The top row shows the initial (a) and final (b) cultural states of the agents for \(F=3\) and \(Q=4\). Each color corresponds to a cultural vector, so we can observe the ordering effect of social influence on the disordered initial configuration. Panel (c) shows that all agents became stiflers (blue) at the final state when there is a single spreader in the initial configuration. The bottom row shows the initial (d) and final (e) cultural states of the agents for \(F=3\) and \(Q=14\). In this case, social influence is not strong enough to homogenize the cultural states of the agents. Panel (f) shows that the rumour spreads across different cultural domains, but some ignorants (light blue) remain in the community. The other parameters are \(L=10\), \(\lambda=1\), \(\alpha=0.01\) and \(p=0.1\).
We note that the propagation of the rumour is modulated by the cultural similarity of the interacting agents. In other words, the propagation of the rumour from agent \(i\) to agent \(j\) is possible if and only if these agents have at least one cultural feature in common to trigger a social interaction (i.e., \(P_{ij}>0\)). The cultural dynamics, however, is not influenced by the rumour dynamics.
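The rumour step and its coupling to the cultural dynamics can be sketched in the same spirit, reusing `culture`, `neighbors`, `similarity` and `axelrod_step` from the previous sketch; applying the spontaneous recovery \(\delta\) at the moment a spreader is selected is our assumption, and the parameter values are again only illustrative.

```python
IGNORANT, SPREADER, STIFLER = 0, 1, 2
lam, alpha, delta, p = 1.0, 0.01, 1e-6, 0.5

state = np.full((L, L), IGNORANT)
state[rng.integers(L), rng.integers(L)] = SPREADER     # a single initial spreader

def rumour_step():
    x, y = rng.integers(L, size=2)
    if state[x, y] != SPREADER:
        return
    if rng.random() < delta:                           # spontaneous stifling
        state[x, y] = STIFLER
        return
    nx, ny = neighbors(x, y)[rng.integers(4)]
    pij = similarity(culture[x, y], culture[nx, ny])   # cultural modulation
    if state[nx, ny] == IGNORANT:
        if rng.random() < lam * pij:
            state[nx, ny] = SPREADER
    elif rng.random() < alpha * pij:                   # neighbour is spreader/stifler
        state[x, y] = STIFLER

# cultural step with probability p, rumour step with probability 1 - p,
# until the rumour dynamics reaches its absorbing configuration (no spreaders)
while (state == SPREADER).any():
    if rng.random() < p:
        axelrod_step()
    else:
        rumour_step()

print("fraction of stiflers R =", (state == STIFLER).mean())
```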
## 4 Results
The mean fraction of stiflers \(\langle R\rangle\) in the absorbing configurations is frequently used as an order parameter in epidemic models [25] and so here we use this measure to characterize the asymptotic regime of the coupled culture-rumour dynamics. As already mentioned, the cultural dynamics is controlled by the parameters \(F\) and \(Q\), whereas the (pure) rumour dynamics is controlled by \(\lambda\) and \(\alpha\). For fixed \(F=3\), increase of \(Q\) leads to a sharp transition from a monocultural to a multicultural regime [9, 10], as illustrated in Fig. 2.
Figure 3 shows that \(\langle R\rangle\) is strongly influenced by the cultural heterogeneity of the community. We observe two asymptotic regimes: a regime where the rumour spreads to a sizeable portion of the lattice (i.e., \(\langle R\rangle\approx 1\)), and a regime where the rumour is restricted to a microscopic portion of the lattice (i.e., \(\langle R\rangle\approx 0\)). The transition point between these regimes dep
Figure 4: **Effect of the basic reproduction number on the spreading of rumours.** Mean fraction of stiflers \(\langle R\rangle\) as function of the basic reproduction number \(\mathcal{R}_{0}=\lambda/\alpha\) for \(Q=1,5\) and \(14\) as indicated. In low diversity communities (\(Q=1\) and \(Q=5\)), the rumour spreads over a sizeable portion of the community provided that \(\mathcal{R}_{0}>1\), but for high diversity communities (\(Q=14\)) it is limited to a tiny region around its creator, regardless of the value of \(\mathcal{R}_{0}\). The other parameters are \(F=3\), \(L=50\), \(p=0.5\) and \(\alpha=0.01\). We keep \(\alpha\) fixed and vary \(\lambda\) to change the ratio \(\lambda/\alpha\).
Figure 3: **Competition of cultural diversity and openness to social influence on the spreading of rumours.** Increase of cultural diversity \(Q\) decreases the mean fraction of stiflers \(\langle R\rangle\), whereas increase of openness to social influence \(p\) decreases \(\langle R\rangle\). Panel A: \(p=0\), i.e., the culture of the agents is frozen in the initial random configuration. Panel B: \(p=0.5\). Panel C: \(p=1\), i.e., the rumour propagates only after the cultural dynamics reaches an absorbing configuration. Panels D, E and F show the variances of the fraction of stiflers for \(p=0,0.5\) and \(1\), respectively. The peak of the variance for large \(L\) yields the approximate location of the transition point between the regimes \(\langle R\rangle\approx 1\) and \(\langle R\rangle\approx 0\). The lattice sizes \(L\) are indicated in Panel F. The other parameters are \(F=3\), \(\lambda=1\) and \(\alpha=0.01\).
epidemiological parameters of the coupled culture-rumour dynamics. However, since our primary interest is the effect of cultural heterogeneity on the propagation of the rumour, we set \(\lambda=1\) and \(\alpha=0.01\), which correspond to the basic reproduction number \(\mathcal{R}_{0}=\lambda/\alpha=100\). This guarantees that the rumour spreads to the entire lattice (\(\langle R\rangle=1\)) in the case of homogeneous agents (i.e., for \(Q=1\)) as can be seen in the top panels of Fig. 3.
These results show that \(\langle R\rangle\) decreases as \(Q\) increases, regardless of the value of the parameter \(p\), which measures the openness of the agents to social influence. Thus, cultural heterogeneity always impairs the spreading of the rumour. As \(p\) increases from \(p=0\) to \(p=1\), the curves of \(\langle R\rangle\) vs. \(Q\) shift to the right (top panels of Fig. 3), revealing that the spreading of the rumour is facilitated by the openness to social influence.
In fact, for \(p=0\) the rumour dynamics takes place in the frozen initial configuration of the agents' cultures, which corresponds to the most heterogeneous (or disordered) configuration for a given \(F\) and \(Q\). Consequently, we observe a very small range of \(Q\) where \(\langle R\rangle\approx 1\), indicating that the spreading of the rumour is heavily impaired by cultural heterogeneity. On the other hand, for \(p=1\) the cultural dynamics proceeds until it reaches an absorbing configuration, and only then does the rumour dynamics set in. In this case, we observe a wider range of \(Q\) where \(\langle R\rangle\approx 1\), which highlights that openness to social influence facilitates the spread of rumours.
A word is in order about the phase transition between the asymptotic regimes characterized by \(\langle R\rangle>0\) and \(\langle R\rangle\to 0\). Since a phase transition occurs only in the thermodynamic limit \(L\to\infty\), we can only infer its onset for finite systems by considering lattices of different sizes [26], as done in Fig. 3. A typical sign of the onset of a phase transition is the increase of the size of the fluctuations, which in our case is captured by the variance of the fraction of stiflers. Thus, the value \(Q=Q_{c}(L)\) at which the variance is maximum offers a rough estimate of the transition point, as illustrated in the bottom panels of Fig. 3.
It is interesting that for \(p=0\) and \(\mathcal{R}_{0}\to\infty\) our problem reduces to the bond percolation in the square lattice [27] where the probability that a bond transmits the fluid is \(1-\left(1-1/Q\right)^{F}\) and so \(Q_{c}=\left(1-2^{-1/F}\right)^{-1}\). For \(F=3\), it yields \(Q_{c}\approx 4.85\) that agrees very well with the results of panel A of Fig. 3. More importantly, since in the bond percolation we have \(0<\langle R\rangle<1\) at the critical point \(Q=Q_{c}\), the transition is discontinuous and only for \(Q=1\) (homogeneous agents) the rumour spreads over the entire lattice. We conjecture that similar results hold for \(p>0\) as well.
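As a quick numerical check of this mapping (an illustrative sketch added here, not part of the original analysis), the percolation estimate can be evaluated directly; the only inputs are \(F\) and the square-lattice bond-percolation threshold \(1/2\).

```python
# Illustrative check of the bond-percolation estimate (not the paper's simulation code).
# A bond transmits the rumour with probability p_bond = 1 - (1 - 1/Q)^F; setting this equal
# to the square-lattice bond-percolation threshold 1/2 gives Q_c = (1 - 2^(-1/F))^(-1).
F = 3
p_bond = lambda Q: 1.0 - (1.0 - 1.0 / Q) ** F

Q_c = 1.0 / (1.0 - 2.0 ** (-1.0 / F))
print(f"Q_c = {Q_c:.2f}")                  # ~ 4.85 for F = 3, cf. panel A of Fig. 3
print(f"p_bond(Q_c) = {p_bond(Q_c):.3f}")  # = 0.5 by construction
```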
For the sake of completeness, Fig. 4 shows the influence of the basic reproduction number \(\mathcal{R}_{0}=\lambda/\alpha\) on \(\langle R\rangle\). As expected, provided that \(\mathcal{R}_{0}>1\) and that \(Q\) is small enough such that the absorbing configurations of the cultural dynamics are monocultural, the rumour spreads to a sizeable portion of the lattice, but never to the entire lattice as pointed out before. The transition at \(\mathcal{R}_{0}=1\) is continuous [13, 22]. However, for large \(Q\) such that the absorbing configurations of the cultural dynamics are multicultural, the rumour reaches only a few agents close to the location of the initial spreader, regardless of the value of \(\mathcal{R}_{0}\).
## 5 Conclusion
In his seminal book, Rogers has offered plentiful empirical evidence that cultural similarity plays a major role in the fate of innovations [1]: innovations are more likely to spread among individuals sharing similar beliefs. The understanding of the role of heterogeneity in the spreading of innovations has allowed companies to reduce risk and maximize the impact of new products. In this paper, we add to Rogers' conjecture by exploring a scenario in which the cultural heterogeneity of the community is described by Axelrod's model of culture dissemination and the diffusion of the innovation is described by a variant of the Daley and Kendall model of rumour propagation. Hence the interchangeability of the words innovation and rumour throughout the paper.
Our results suggest that curbing the spread of rumours does not necessarily require changes in the social fabric, i.e., changes in the topology of the agents' interaction networks [19, 20, 21]. In fact, this outcome appears naturally in communities of heterogeneous agents who are allowed to hold different opinions and beliefs, thus providing theoretical support for Rogers' empirical observations. Specifically, we find that cultural heterogeneity always impairs the propagation of the rumour and that, beyond a critical level of heterogeneity, the rumour is stuck in the neighborhood of its creator.
Rogers' theory of the diffusion of innovations is a sociological theory that builds on many natural experiments (e.g., adoption of hybrid corn, Norplant contraceptive, rap, cellular phones and Nintendo home video-game players) to determine the characteristics of both the innovation and the target community that best correlate with the success of the innovation [1]. The theory contends that the number of adopters of an innovation increases in time following an S-shaped growth curve (see [14] for a study of these growth curves for the pure Axelrod dynamics). Since our entire analysis is based on the characterization of the absorbing configurations, we have access only to the ultimate number of adopters of the innovation, viz., \(\langle R\rangle\). In that sense, a direct comparison with real data is not possible, as those data focus solely on the growth of the number of adopters. Nevertheless, our results support Rogers' conjectures and empirical observations in a qualitative way. For instance, Rogers' discussion of homophily as a barrier to the diffusion of an innovation is in full agreement with the conclusions of our study.
The study of the coupled culture-rumour dynamics adds to the rapidly growing literature on social physics [28]. In fact, the availability of high quality data on social patterns and human activity (see, e.g., [29]), as well as the great
appeal and relevance of social issues, has prompted physicists' contributions to several interdisciplinary areas, including (but not limited to) collective intelligence [30, 31], cultural polarization [32], criminal behavior [33], human mobility [34], urban dynamics [35, 36, 37], traffic flow [38, 39], social networks [40, 41], and moral behavior [42, 43]. These advances have been made possible by the maturing of complex systems as a research field, as attested by the 2021 Nobel Prize in Physics [44]. Perhaps the key characteristic of a complex system is the emergent behavior resulting from its interacting components. In the coupled culture-rumour dynamics, the emergent phenomenon is the discontinuous phase transition that separates the regime where the rumour spreads over the community from the regime where it dies out close to its creator. The existence of this transition could only be revealed through mathematical and computational modeling, hence the importance of assessing Rogers' theory for the diffusion of innovations using the tools of statistical mechanics.
###### Acknowledgements.
JFF was partially supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), grant 2020/03041-3 and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), grant 305620/2021-5. AMP acknowledges the support of São Paulo Research Foundation (FAPESP), grant 2019/22277-0. This research was conducted using the computational resources of the Center for Research in Mathematical Sciences Applied to Industry (CeMEAI) funded by FAPESP, grant 2013/07375-0.
|
2306.13348
|
Gravitational Interaction of Ultralight Dark Matter with Interferometers
|
Ultralight dark matter exhibits an order-one density fluctuation over the
spatial scale of its wavelength. These fluctuations gravitationally interact
with gravitational wave interferometers, leading to distinctive signals in
detectors. We investigate the ultralight dark matter-induced effects in the
gravitational wave interferometers. We perform a systematic computation of the
power spectrum of ultralight dark matter in interferometers. We show that the
ultralight dark matter-induced effect is most relevant for the interferometers
with long baseline and that it is only a sub-leading effect compared to the
estimated noise level in the case of Laser Interferometer Space Antenna or
future interferometers with an arm-length comparable to a few astronomical
units. Gravitational wave interferometers can then place upper limits on the
ultralight dark matter density in the solar system. We find that, under certain
assumptions, future interferometers with AU-scale arm-length might probe the
dark matter density a few hundred times the local dark matter density, which is
measured over a much larger spatial scale.
|
Hyungjin Kim
|
2023-06-23T07:55:50Z
|
http://arxiv.org/abs/2306.13348v2
|
# Gravitational Interaction of Ultralight Dark Matter with Interferometers
###### Abstract
Ultralight dark matter exhibits an order-one density fluctuation over the spatial scale of its wavelength. These fluctuations gravitationally interact with gravitational wave interferometers, leading to an additional noise floor or signals. We investigate the ultralight dark matter-induced effects in the gravitational wave interferometers. We perform a systematic computation of the power spectrum of ultralight dark matter in interferometers. We show that the ultralight dark matter-induced effect is most relevant for the interferometers with long baseline and that it only constitutes a sub-leading noise floor compared to the estimated noise level in the case of Laser Interferometer Space Antenna or future interferometers with an arm-length comparable to a few astronomical units. Gravitational wave interferometers can then place upper limits on the ultralight dark matter density in the solar system. We find that, under certain assumptions, future interferometers with AU-scale arm-length might probe the dark matter density a few hundred times the local dark matter density, which is measured over a much larger spatial scale.
DESY-23-085
###### Contents
* I Introduction
* II Ultralight dark matter
* II.1 Density fluctuation
* II.2 Metric fluctuation
* III Gravitational wave interferometers
* III.1 Interferometry response
* III.1.1 Michelson interferometer
* III.1.2 Time-delay interferometer
* III.2 ULDM signal
* III.2.1 Deterministic signal
* III.2.2 Stochastic signal
* III.3 Application
* III.3.1 LIGO
* III.3.2 Einstein Telescope
* III.3.3 LISA
* III.3.4 \(\mu\)Ares
* III.3.5 Big Bang Observatory
* IV Discussion
* V Conclusion
* A Power spectrum
## I Introduction
Ultralight dark matter (ULDM) remains an attractive dark matter candidate. Its mass lies below the eV scale, and it behaves like a classical wave. The QCD axion is one of the most theoretically interesting ultralight dark matter candidates [1; 2; 3; 4; 5; 6]. Other interesting candidates often arise from recent developments in dynamical solutions to the electroweak hierarchy problem [7; 8; 9; 10; 11; 12; 13; 14]. Being wave-like, such ultralight dark matter candidates lead to interesting phenomenology in the early and late universe.
Recent numerical simulations of ultralight dark matter halos have observed order-one density fluctuations throughout galaxies [15; 16]. The fluctuation is extended over the spatial scale given by the wavelength of dark matter. This characteristic density fluctuation can be intuitively understood in terms of _quasiparticles_, whose size (\(\lambda\)) and mass (\(m_{\rm eff}\)) are given by the de Broglie wavelength and the total mass enclosed within the corresponding volume [17]:
\[\lambda =\frac{1}{m\sigma}\simeq 10^{2}\,{\rm pc}\Big{(}\frac{10^{-22}\,{ \rm eV}}{m}\Big{)}\Big{(}\frac{200\,{\rm km/sec}}{\sigma}\Big{)}, \tag{1}\] \[m_{\rm eff} =\frac{\pi^{3/2}\rho_{0}}{(m\sigma)^{3}}\simeq 5\times 10^{4}M_{ \odot}\Big{(}\frac{\rho_{0}}{\rm GeV/cm^{3}}\Big{)}\Big{(}\frac{10^{-22}\,{ \rm eV}}{m}\Big{)}^{3}\Big{(}\frac{200\,{\rm km/sec}}{\sigma}\Big{)}^{3}. \tag{2}\]
Here \(m\) is the mass of ULDM, \(\rho_{0}\) is the mean dark matter density, and \(\sigma\) is the velocity dispersion. For \(m\ll\,{\rm eV}\), the size and the mass of the quasiparticles could be astronomical.
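For orientation, the two benchmark numbers in Eqs. (1)-(2) can be reproduced with a short script; this is only an order-of-magnitude sketch using standard unit conversions, not part of the paper's analysis.

```python
# Order-of-magnitude evaluation of Eqs. (1)-(2); a sketch with standard unit conversions,
# not the paper's code. Agreement with the quoted benchmarks is at the O(1) level.
import math

hbar_c      = 1.9733e-7   # eV * m
pc          = 3.0857e16   # m
M_sun       = 1.989e30    # kg
GeV_per_cm3 = 1.783e-21   # kg / m^3

def quasiparticle(m_eV, sigma_kms, rho0_GeVcm3=1.0):
    """de Broglie size lambda [pc] and effective mass m_eff [M_sun] of a ULDM quasiparticle."""
    sigma = sigma_kms / 2.998e5               # velocity in units of c
    lam   = hbar_c / (m_eV * sigma)           # lambda = 1/(m sigma), in metres
    m_eff = math.pi**1.5 * rho0_GeVcm3 * GeV_per_cm3 * lam**3   # kg
    return lam / pc, m_eff / M_sun

lam_pc, meff = quasiparticle(1e-22, 200.0)
print(f"lambda ~ {lam_pc:.0f} pc,  m_eff ~ {meff:.1e} M_sun")   # ~1e2 pc and ~1e5 M_sun
```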
These quasiparticles interact continuously with interferometers, designed to detect gravitational waves (GWs) of astrophysical and cosmological origin. Since they are engineered to measure an extremely small disturbance of spacetime due to GWs, any interaction of quasiparticles with such interferometers leaves some distinctive impacts on detectors. Ultralight dark matter effects can be thought of as additional noise or signals in the detectors, depending on the perspective.
We investigate the impacts of ultralight dark matter on gravitational wave interferometers. Specifically, we address two questions in this work: (i) if ULDM effects are considered as additional noise, does ULDM-induced noise interfere with the operation of interferometers for the detection of GWs? (ii) if ULDM effects are considered as additional new physics signals, can GW interferometers detect the ULDM in the solar system? We only consider the gravitational interaction of order-one density fluctuations of ultralight dark matter with interferometers. The main goal is to characterize the ultralight dark matter-induced noise/signal power spectrum from the gravitational interaction and investigate if interferometers can probe or place a constraint on the dark matter in the solar system.
Without detailed computation, we can already estimate the expected effects of ULDM and partially answer one of the above two questions. Consider an interferometer with two test masses. A useful quantity is a differential acceleration, \(\Delta a=a_{1}-a_{2}\), between two test masses induced by ULDM density fluctuations. Suppose that a quasiparticle is located right next to the first test mass. In this case, the differential acceleration is estimated as
\[\Delta a=a_{1}-a_{2}=\frac{Gm_{\rm eff}}{\lambda^{2}}-\frac{Gm_{\rm eff}}{(L +\lambda)^{2}}\simeq\bar{a}\min\Big{(}1,\frac{2L}{\lambda}\Big{)},\]
where \(L\) is a typical size of the interferometer and typical acceleration \(\bar{a}\) is defined as
\[\bar{a}\equiv\frac{Gm_{\rm eff}}{\lambda^{2}}=\pi^{3/2}G\rho_{0}\lambda. \tag{3}\]
As the mass \(m\) decreases (hence the wavelength and effective mass increase), the differential acceleration increases until it saturates to \(\bar{a}=2G\rho_{0}L\) when \(\lambda=2L\), i.e., when the size of a quasiparticle becomes comparable to the size of the interferometer. At this saturation point, we find \(\Delta a\sim 10^{-28}\,{\rm m\,s^{-2}}\) for \(L=4\,{\rm km}\) (LIGO/VIRGO), \(\Delta a\sim 10^{-22}\,{\rm m\,s^{-2}}\) for \(L=2.5{\rm M\,km}\) (LISA), and \(\Delta a\sim 10^{-20}\,{\rm m\,s^{-2}}\) for \(L=400{\rm M\,km}\) (e.g. \(\mu\)Ares strawman mission concept [18] and Fedderke et al [19]). Here we use \(\rho_{0}=0.4\,{\rm GeV/cm^{3}}\).
This can be compared with the sensitivity of detectors. The strain noise power spectrum can be converted into the root-mean-square fluctuation of acceleration by \(\Delta a\sim[S_{n}(2\pi f)^{4}L^{2}\Delta f]^{1/2}\). We find \(\Delta a=10^{-14}\,{\rm m\,s^{-2}}\) for LIGO with \(f\sim\Delta f\sim 50\,{\rm Hz}\), \(\Delta a=10^{-16}\,{\rm m\,s^{-2}}\) for LISA with \(f\sim\Delta f\sim 0.1{\rm mHz}\), and \(\Delta a=10^{-17}\,{\rm m\,s^{-2}}\) for \(\mu\)Ares proposal with \(f\sim\Delta f\sim{\rm a\,few}\times 10^{-7}{\rm Hz}\). For ground-based interferometers, the ULDM-induced noise is irrelevant as it is many orders of magnitude smaller than detector noises. For space-borne interferometers, the ULDM effect is a few orders of magnitude smaller than the mission-required detector noise level. We may conclude from this comparison that the ULDM effect in interferometers through gravitational interaction is unlikely to interfere with the operation of interferometers for the detection of GWs. For a more correct comparison, however, we will need to compute the power spectrum of ultralight dark matter-induced effects since the above estimation on \(\Delta a\) does not carry any spectral information about the ULDM effects in the detector.
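The comparison above can be repeated numerically as follows; the strain-noise amplitudes assumed in the snippet are rough representative values (not the official instrument noise curves), so the output should only be read at the order-of-magnitude level.

```python
# Numerical version of the estimates above (an illustrative sketch; the strain-noise
# amplitudes are rough assumed values, not the official instrument noise curves).
import math

G    = 6.674e-11          # m^3 kg^-1 s^-2
rho0 = 0.4 * 1.783e-21    # 0.4 GeV/cm^3 in kg/m^3

def delta_a_uldm(L):
    """Saturated (lambda = 2L) differential acceleration, Delta a ~ 2 G rho0 L."""
    return 2.0 * G * rho0 * L

def delta_a_noise(sqrt_Sn, f, L):
    """Noise-equivalent acceleration, Delta a ~ [S_n (2 pi f)^4 L^2 Delta f]^(1/2), Delta f ~ f."""
    return math.sqrt(sqrt_Sn**2 * (2 * math.pi * f) ** 4 * L**2 * f)

# (label, arm length L [m], assumed sqrt(S_n) [1/sqrt(Hz)], frequency [Hz])
configs = [("LIGO-like",   4e3,   1e-23, 50.0),
           ("LISA-like",   2.5e9, 1e-17, 1e-4),
           ("muAres-like", 4e11,  1e-14, 3e-7)]

for name, L, sn, f in configs:
    print(f"{name:12s}  ULDM: {delta_a_uldm(L):.1e} m/s^2   "
          f"noise-equivalent: {delta_a_noise(sn, f, L):.1e} m/s^2")
```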
We summarize the main results here. From the detailed computation of the ULDM power spectrum, we find that the spectrum generally consists of two distinctive frequency components: one at \(\omega=2m\) and the other at \(\omega<m\sigma^{2}\). This has interesting phenomenological implications as it offers different ways to search for ultralight dark matter signals. In Figure 1, we show the low-frequency part (\(\omega<m\sigma^{2}\)) of the ULDM signal. The ULDM noise is sub-dominant even for the interferometers with arm-lengths larger than a few million km, such as LISA and other proposals with the arm-length comparable to astronomical units, confirming our order-of-magnitude estimation above. The interferometers then can be used to place an upper limit on the dark matter content in the solar system. Two distinct frequency components offer multiple ways for ULDM searches in interferometers. The \(\omega=2m\) component behaves similarly to a deterministic signal, which can be searched with the matched filter. The low-frequency (\(\omega<m\sigma^{2}\)) component, on the other hand, behaves similarly to the stochastic background, which can be searched by cross-correlating detector outputs if more than two detectors are available. The resulting constraints and projections on \(\rho/\rho_{0}\) are shown in Figure 2, suggesting that the density of ULDM in the solar system, \(\rho\sim\mathcal{O}(10^{2})\rho_{0}\), could be probed with GW interferometers only through gravitational interaction.
The work is organized as follows. In section II, we review the basic statistical properties of ultralight dark matter
Figure 1: The ultralight dark matter-induced noise for (left) LISA and (right) \(\mu\)Ares proposal [18]. The red line is the required noise level for the mission. The other colored solid lines are the ultralight dark matter-induced noise for given masses. In both cases, the shown is the strain noise power spectrum in the Michelson-like TDI-X variable. Note that, for \(\mu\)Ares, the low-frequency noise could be dominated by the gravity gradient noise as discussed in Ref. [20]. See the main text for details.
Figure 2: The constraints and projections on dark matter density in the solar system. Here \(\rho_{0}=0.4\,\mathrm{GeV/cm^{3}}\) is the local dark matter density measured over \(\gtrsim\mathcal{O}(100)\) pc scale (see e.g. [21; 22] for reviews). The solid lines are obtained from the matched filter for the coherently oscillating dark matter signal at \(\omega=2m\), while the dashed lines are obtained by cross-correlating signals in more than two detectors for \(\omega<m\sigma^{2}\). See the main text for details.
and compute the spectrum of density contrast. This will establish a basis for the detailed investigation of the response of GW interferometers to the ULDM density fluctuations. In section III, we discuss how GW interferometers respond to the ULDM density fluctuation, and how the ultralight dark matter signal can be searched with the matched filter and with cross-correlation of detector outputs in cases where there are more than two detectors. In section IV, we discuss some of the assumptions that might affect our results. We conclude in section V. The natural unit (\(c=\hbar=1\)) is used in this work.
## II Ultralight dark matter
We review the basic statistical properties of ultralight dark matter in this section. We consider a scalar field minimally coupled to gravity without self-interaction. The ultralight dark matter field operator is given as
\[\hat{\phi}(x)=\sum_{i}\frac{1}{\sqrt{2mV}}(a_{i}e^{-ik_{i}\cdot x}+a_{i}^{ \dagger}e^{ik_{i}\cdot x}),\]
where \(a_{i}\) and \(a_{i}^{\dagger}\) are annihilation and creation operator of the plane wave mode \(i\), satisfying \([a_{i},a_{j}^{\dagger}]=\delta_{ij}\). The continuum expression can be obtained by \(\sum_{i}\to V\int d^{3}k/(2\pi)^{3}\), \(a_{i}\to a(\vec{k})/\sqrt{V}\), \([a(\vec{k}),a^{\dagger}(\vec{q})]=(2\pi)^{3}\delta^{(3)}(\vec{k}-\vec{q})\). For later convenience, we stick to the above discrete convention. Any deformation of wave function due to the gravitational potential of the Sun is ignored since this effect is at the level of \(\mathcal{O}(1)\%\) for the halo dark matter [23].
The statistical property of ultralight dark matter is defined by the density operator. The density operator is \(\hat{\rho}=\bigotimes_{i}\hat{\rho}_{i}\), where \(\hat{\rho}_{i}\) is the density operator of each mode, given by [23]
\[\hat{\rho}_{i}=\int d^{2}\alpha_{i}\,p(\alpha_{i})|\alpha_{i}\rangle\langle\alpha_{i}|. \tag{4}\]
Here \(|\alpha_{i}\rangle\) is the coherent state, satisfying \(a_{i}|\alpha_{i}\rangle=\alpha_{i}|\alpha_{i}\rangle\) (\(\alpha_{i}\in\mathbb{C}\)), and the quasi-probability distribution \(p(\alpha_{i})\) is given as
\[p(\alpha_{i})=\frac{1}{\pi f_{i}}\exp\left[-\frac{|\alpha_{i}|^{2}}{f_{i}} \right]. \tag{5}\]
The \(f_{i}\) is the mean occupation number for the mode \(i\). This quasi-probability distribution function \(p(\alpha_{i})\) corresponds to the Rayleigh distribution for the amplitude, \(|\alpha_{i}|\), and the uniform distribution for the phase, \(\arg(\alpha_{i})\)[24; 25; 26]. With this description, any \(n\)-point correlation function of an operator \(\hat{\phi}\) can be easily computed by the trace operation, \(\langle\mathcal{O}\rangle=\text{Tr}(\mathcal{O}\hat{\rho})\). In the large occupation number limit, the operators \((a_{i},a_{i}^{\dagger})\) can be replaced by commuting complex random number \((\alpha_{i},\alpha_{i}^{*})\), whose probability distribution is given by (5).
It is useful to note the ensemble averages of the creation and annihilation operators:
\[\langle a_{i}a_{j}^{\dagger}\rangle=\langle a_{i}^{\dagger}a_{j}\rangle=f_{i} \delta_{ij},\quad\text{and}\quad\langle a_{i}a_{j}a_{k}^{\dagger}a_{\ell}^{ \dagger}\rangle=f_{i}f_{j}(\delta_{ik}\delta_{j\ell}+\delta_{i\ell}\delta_{jk }). \tag{6}\]
Here we approximate \(a_{i}\) and \(a_{i}^{\dagger}\) as a commuting random complex number \(\alpha_{i}\) and \(\alpha_{i}^{*}\), whose underlying distribution follows the quasi-probability distribution \(p(\alpha_{i})\). The ultralight dark matter field defined with (5) is a Gaussian random field, and therefore, the ensemble average of any operators with \(N(a)-N(a^{\dagger})\neq 0\) vanishes, where \(N(a)\) and \(N(a^{\dagger})\) are the number of annihilation and creation operator, respectively.
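In this classical-random-field limit, the statistics can be illustrated by direct sampling; the following sketch (with an arbitrary, assumed occupation number) verifies the two- and four-point contractions of Eq. (6). It is not part of the paper's derivation.

```python
# Sampling sketch for the quasi-probability distribution (5): each mode amplitude alpha_i
# is a complex Gaussian (Rayleigh modulus, uniform phase) with <|alpha_i|^2> = f_i.
# Purely illustrative; the occupation number below is an assumed placeholder.
import numpy as np

rng = np.random.default_rng(0)
f_i = 3.0          # assumed mean occupation number of a single mode
N   = 200_000      # number of realizations

alpha = rng.normal(0.0, np.sqrt(f_i / 2), N) + 1j * rng.normal(0.0, np.sqrt(f_i / 2), N)

print(np.mean(np.abs(alpha) ** 2))   # ~ f_i      , cf. <a_i a_i^dagger> = f_i in Eq. (6)
print(np.mean(np.abs(alpha) ** 4))   # ~ 2 f_i^2  , cf. the four-point contraction in Eq. (6)
print(np.abs(np.mean(alpha ** 2)))   # ~ 0        , phases are uniformly distributed
```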
The energy density of the ultralight dark matter is
\[\rho_{\phi} =\frac{1}{2}\left[\dot{\phi}^{2}+\left(\nabla\phi\right)^{2}+m^{2}\phi^{2}\right]\] \[=\frac{1}{2V}\sum_{i,j}\frac{1}{\sqrt{2\omega_{i}}\sqrt{2\omega_{j}}}\Big{[}W_{ij}^{-}\Big{(}a_{i}a_{j}e^{-i(k_{i}+k_{j})\cdot x}+a_{i}^{\dagger}a_{j}^{\dagger}e^{i(k_{i}+k_{j})\cdot x}\Big{)}+W_{ij}^{+}\Big{(}a_{i}a_{j}^{\dagger}e^{-i(k_{i}-k_{j})\cdot x}+a_{i}^{\dagger}a_{j}e^{i(k_{i}-k_{j})\cdot x}\Big{)}\Big{]}. \tag{7}\]
For notational simplicity, we have introduced \(W_{ij}^{\pm}\), defined as
\[W_{ij}^{\pm}=m^{2}\pm(\omega_{k_{i}}\omega_{k_{j}}+\vec{k}_{i}\cdot\vec{k}_{ j})\approx 2m^{2}\times\begin{cases}1&\text{for }W_{ij}^{+}\\ -\frac{1}{4}(\vec{v}_{i}+\vec{v}_{j})^{2}&\text{for }W_{ij}^{-}\end{cases} \tag{8}\]
where the second expression holds in the non-relativistic limit. The mean energy density in the non-relativistic limit is
\[\langle\rho_{\phi}\rangle\equiv\rho_{0}\approx\frac{m}{V}\sum_{i}f_{i}=m\int \frac{d^{3}k}{(2\pi)^{3}}f(\vec{k})=m\int d^{3}vf(\vec{v}), \tag{9}\]
By taking the continuum limit, we reproduce the usual expression for the local dark matter density. In the last line, we change the integration variable to the velocity. The above expression fixes the normalization of the mean occupation number \(f(\vec{v})\) used in this work.
### Density fluctuation
Let us consider the density fluctuation, \(\delta\rho_{\phi}=\rho_{\phi}-\rho_{0}\). The mean value vanishes \(\langle\delta\rho_{\phi}\rangle=0\). The correlator is
\[\langle\delta\rho_{\phi}(x)\delta\rho_{\phi}(y)\rangle=\int\frac{d^{4}k}{(2\pi) ^{4}}e^{-ik\cdot(x-y)}P_{\delta\rho}(k),\]
where the power spectrum is defined as \(\langle\delta\rho_{\phi}(k)\delta\rho_{\phi}^{*}(k^{\prime})\rangle=(2\pi)^{4 }\delta^{(4)}(k-k^{\prime})P_{\delta\rho}(k)\) and given by
\[P_{\delta\rho}(\omega,\vec{k})= \frac{(2\pi)^{4}}{2}\int d\Pi_{1}d\Pi_{2}f(\vec{p}_{1})f(\vec{p}_ {2})\] \[\times \Big{[}(W_{12}^{-})^{2}\big{(}\delta^{(4)}(k-p_{1}-p_{2})+\delta^ {(4)}(k+p_{1}+p_{2})\big{)}+(W_{12}^{+})^{2}\big{(}\delta^{(4)}(k-p_{1}+p_{2}) +\delta^{(4)}(k+p_{1}-p_{2})\big{)}\Big{]}. \tag{10}\]
This can be straightforwardly computed by using (5) and (7). Here \(d\Pi=[d^{3}p/(2\pi)^{3}](2E)^{-1}\) is the Lorentz invariant phase space measure. In the frequency space, the correlator is
\[\langle\widetilde{\delta\rho}_{\phi}(\omega,\vec{x})\widetilde{\delta\rho}_{\phi}^{*}(\omega^{\prime},\vec{y})\rangle=(2\pi)\delta(\omega-\omega^{\prime})S_{\delta\rho}(\omega,\vec{L}), \tag{11}\]
where \(\vec{L}=\vec{x}-\vec{y}\) and the power spectrum \(S_{\delta\rho}(\omega,\vec{L})\) is given by
\[S_{\delta\rho}(\omega,\vec{L})=\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\vec{k}\cdot \vec{L}}P_{\delta\rho}(\omega,\vec{k}). \tag{12}\]
This power spectrum \(S_{\delta\rho}(\omega,\vec{L})\) in the frequency space turns out to be more useful for the study of the response of GW interferometers to the ULDM fluctuations.
To illustrate the statistical properties of density contrast, we explicitly compute \(S_{\delta}(\omega,\vec{0})\). We consider the normal distribution for dark matter velocity distribution,
\[f(\vec{v})=\frac{\rho_{0}/m}{(2\pi\sigma^{2})^{3/2}}\exp\left[-\frac{(\vec{v} -\vec{v}_{0})^{2}}{2\sigma^{2}}\right], \tag{13}\]
where \(\vec{v}_{0}\) is the mean dark matter velocity, and \(\sigma\) is the velocity dispersion. We work in the rest frame of the GW detectors. In general, \(\vec{v}_{0}\neq 0\) in this frame, but for the sake of simplicity, we take the isotropic limit \(\vec{v}_{0}=0\) for now. Normalization of \(f(\vec{v})\) is fixed by (9). Using (10) and (12), we find
\[S_{\delta}(\omega,\vec{0})=\tau\Big{[}\sigma^{4}A_{\delta}(\omega)+B_{\delta} (\omega)\Big{]}, \tag{14}\]
where \(\delta=\delta\rho_{\phi}/\rho_{0}\) is density contrast, and \(\tau=1/m\sigma^{2}=1/\omega_{0}\) is the coherence time. In the isotropic limit (\(v_{0}/\sigma\ll 1\)), the \(A_{\delta}(\omega)\) and \(B_{\delta}(\omega)\) are given by
\[A_{\delta}(\omega) = \frac{5\pi}{32}\bigg{(}\frac{\bar{v}}{\sigma}\bigg{)}^{8}\exp \Big{[}-\frac{\bar{v}^{2}}{\sigma^{2}}\Big{]}\theta(\bar{v}^{2})+(\omega \rightarrow-\omega), \tag{15}\] \[B_{\delta}(\omega) = 2|\bar{\omega}|K_{1}(|\bar{\omega}|), \tag{16}\]
where \(K_{n}(x)\) is the modified Bessel function. Here we introduce, for notational simplicity,
\[\bar{v}^{2}(\omega)=(\omega/m-2),\quad\text{and}\quad\bar{\omega}=\omega/(m \sigma^{2})=\omega/\omega_{0}. \tag{17}\]
The spectrum \(S_{\delta}(\omega,\vec{0})\) for \(\vec{v}_{0}=0\) and \(\sigma=0.1\) is shown in the left panel of Figure 3.
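A direct numerical evaluation of Eqs. (14)-(17) is straightforward; the following sketch (isotropic limit, the same illustrative \(\sigma=0.1\) as in Fig. 3) evaluates the two bracketed components separately. It is added here only for illustration.

```python
# Sketch evaluating the two bracketed components of Eq. (14) in the isotropic limit
# (v0 = 0), using the illustrative sigma = 0.1 of Fig. 3; the overall factor tau is omitted.
import numpy as np
from scipy.special import kv

sigma = 0.1   # velocity dispersion in units of c (deliberately large, as in Fig. 3)
m     = 1.0   # ULDM mass in arbitrary units; only omega/m and omega/(m sigma^2) enter

def A_delta(omega):
    """Narrow component centred at omega = 2m, Eq. (15); enters Eq. (14) with a sigma^4 factor."""
    vbar2 = omega / m - 2.0                      # Eq. (17)
    return np.where(vbar2 > 0,
                    (5 * np.pi / 32) * (vbar2 / sigma**2) ** 4 * np.exp(-vbar2 / sigma**2),
                    0.0)

def B_delta(omega):
    """Broad low-frequency component, Eq. (16)."""
    obar = np.abs(omega) / (m * sigma**2)        # Eq. (17)
    return 2.0 * obar * kv(1, obar)

omega = np.array([0.5 * m * sigma**2, m * sigma**2, 2 * m + 0.5 * m * sigma**2])
print("sigma^4 A_delta:", sigma**4 * A_delta(omega))
print("        B_delta:", B_delta(omega))
```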
This computation reveals an interesting feature in the ULDM density fluctuation. The spectrum contains two distinctive frequency components: a narrow peak at \(\omega=2m\) represented by \(A_{\delta}(\omega)\) and a broad low-frequency spectrum at \(\omega<m\sigma^{2}\) represented by \(B_{\delta}(\omega)\). Both of them have a width \(\Delta\omega\sim m\sigma^{2}\). They can be understood by investigating the energy density of a single mode, \(\phi(t,\vec{x})=\phi_{0}\cos(\omega t-\vec{k}\cdot\vec{x})\),
\[\rho_{\phi}=\frac{1}{2}\Big{[}\dot{\phi}^{2}+(\nabla\phi)^{2}+m^{2}\phi^{2} \Big{]}\approx\rho_{0}\Big{[}1-v^{2}\cos(2mt+\varphi)\Big{]}.\]
It consists of a dc mode, which is what constitutes the low-frequency part (\(\omega<m\sigma^{2}\)) for a realistic multi-mode density contrast, and a coherently oscillating part, which constitutes a spectrum at \(\omega=2m\) with its amplitude suppressed by \(v^{2}\). This \(v^{2}\) suppression is responsible for a relative suppression of \(\sigma^{4}\) in front of \(A_{\delta}(\omega)\) in (14). These two distinctive frequency components in the density contrast remain in the ULDM-induced signal/noise power spectrum for the gravitational detectors, offering multiple ways to search for ultralight dark matter in different mass ranges.
### Metric fluctuation
ULDM fluctuations source metric perturbations. The metric is given by
\[ds^{2}=(1+2\Phi)dt^{2}-(1-2\Psi)dx^{2}. \tag{18}\]
These metric fluctuations perturb the position of test masses in interferometers and induce a time delay of light. Suppose a test mass at the position \(\vec{x}\). The test mass acceleration and the metric fluctuations satisfy
\[\vec{a} = -\nabla\Phi, \tag{19}\] \[\nabla^{2}\Psi = 4\pi G\delta\rho_{\phi}, \tag{20}\] \[6\ddot{\Psi}+2\nabla^{2}(\Phi-\Psi) = 24\pi G\delta P_{\phi}. \tag{21}\]
Here \(\delta P_{\phi}\) is the pressure perturbation of the scalar field \(\phi\). The pressure of the field is \(P_{\phi}=\dot{\phi}^{2}/2-(\nabla\phi)^{2}/6-m^{2}\phi^{2}/2\). Statistical properties of metric fluctuation as well as test mass position can be derived from those of ULDM by solving the above equations.
The relation between test mass acceleration, metric perturbations, and ULDM fluctuations becomes transparent in
Figure 3: (Left) the density contrast spectrum \(S_{\delta}(\omega)=S_{\delta}(\omega,\vec{0})\) in the isotropic limit \(v_{0}=0\). (Right) the spectrum for the gradient of the metric fluctuation, \(S_{\nabla\Psi}(\omega)=\delta^{ij}S^{ij}_{\nabla\Psi}(\omega,\vec{0})\). Both spectra exhibit two distinctive frequency components: one at \(\omega=2m\) and another at \(\omega<m\sigma^{2}\). For demonstration, we choose an unrealistically large velocity dispersion \(\sigma=0.1\). The narrow peak at \(\omega=2m\) is suppressed by \(\sigma^{4}\) compared to the smooth low-frequency spectrum. For \(S_{\nabla\Psi}\), there is a logarithmic divergence at low frequencies due to the long-range nature of gravitational force.
Fourier space:
\[\tilde{a}^{i}(k) = -ik^{i}\tilde{\Phi}(k), \tag{22}\] \[\tilde{\Psi}(k) = -\frac{4\pi G}{k^{2}}\widetilde{\delta\rho}_{\phi}(k), \tag{23}\] \[\tilde{\Phi}(k) = -\frac{4\pi G}{k^{2}}\left[\Big{(}1-\frac{3\omega^{2}}{k^{2}}\Big{)}\widetilde{\delta\rho}_{\phi}(k)+3\widetilde{\delta P}_{\phi}(k)\right]. \tag{24}\]
From above, one can derive the relation between power spectra, e.g. \(P_{\Psi}(k)=(4\pi G/k^{2})^{2}P_{\delta\rho}(k)\) and \(P_{a}^{ij}(k)=k^{i}k^{j}P_{\Phi}(k)\), where the test mass acceleration power spectrum is defined as \(\langle\tilde{a}^{i}(k)\tilde{a}^{j*}(k^{\prime})\rangle=(2\pi)^{4}\delta^{(4 )}(k-k^{\prime})P_{a}^{ij}(k)\).
Among others, the power spectrum of \(\nabla\Psi\) turns out to be particularly useful for the later discussion. In the frequency space, it is given by
\[S_{\nabla\Psi}^{ij}(\omega,\vec{L})=\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\vec{k} \cdot\vec{L}}k^{i}k^{j}P_{\Psi}(k)=\bar{a}^{2}\tau\big{[}\sigma^{4}A^{ij}+B^{ ij}\big{]} \tag{25}\]
where \(A^{ij}\) and \(B^{ij}\) are
\[A^{ij}(\omega,\vec{L}) = 16\Big{(}\frac{\bar{v}}{\sigma}\Big{)}^{6}\exp\Big{[}-\frac{v_{0}^{2}+\bar{v}^{2}}{\sigma^{2}}\Big{]}\theta(\bar{v}^{2})\left[\delta^{ij}\frac{I_{3}(X)}{X^{3}}+X^{i}X^{j}\frac{I_{4}(X)}{X^{4}}\right]+(\omega\to-\omega), \tag{26}\] \[B^{ij}(\omega,\vec{L}) = \frac{2}{\sqrt{\pi}}\int_{-\infty}^{\infty}ds\ \frac{e^{-i\bar{\omega}s}}{\sqrt{1+s^{2}}}\left[\delta^{ij}\frac{\mathrm{erf}(Y)-G(Y)}{Y}+\frac{Y^{i}Y^{j}}{Y^{3}}\big{(}3G(Y)-\mathrm{erf}(Y)\big{)}\right], \tag{27}\]
with \(G(X)=[1/(2X^{2})](\mathrm{erf}(X)-2Xe^{-X^{2}}/\sqrt{\pi})\) and
\[\vec{X}=2\frac{\bar{v}}{\sigma}\Big{(}\frac{\vec{v}_{0}}{\sigma}+i\vec{L}_{\lambda}\Big{)},\qquad\vec{Y}=\frac{1}{\sqrt{1+s^{2}}}\Big{(}\frac{s\vec{v}_{0}}{\sigma}+\vec{L}_{\lambda}\Big{)},\qquad\vec{L}_{\lambda}=m\sigma\vec{L}. \tag{28}\]
The \(\bar{v}\) and \(\bar{\omega}\) are defined in (17). Recall that \(\tau=1/m\sigma^{2}\), \(m_{\mathrm{eff}}=\pi^{3/2}\rho_{0}/(m\sigma)^{3}\), and \(\lambda=(m\sigma)^{-1}\) and the typical acceleration \(\bar{a}=Gm_{\mathrm{eff}}/\lambda^{2}\). We will see shortly that the response of the GW interferometer depends mostly on \(\nabla\Psi\), and the above power spectrum can be used as a basic building block to study the response of interferometers to the ultralight dark matter fluctuations.
The structure of the spectrum is similar to the density contrast spectrum (14) since \(\Psi\) inherits its properties from ULDM fluctuations. Especially, \(A^{ij}(\omega,\vec{L})\) is centered at \(\omega=2m\), while \(B^{ij}(\omega,\vec{L})\) is spread over \(\omega\lesssim m\sigma^{2}\). In the isotropic (\(v_{0}=0\)) and the long-wavelength limit (\(m\sigma L\ll 1\)), the \(A^{ij}\) and \(B^{ij}\) are simplified to
\[A^{ij}_{0}(\omega) \approx \frac{1}{3}\Big{(}\frac{\bar{v}}{\sigma}\Big{)}^{6}\exp\Big{[}- \frac{\bar{v}^{2}}{\sigma^{2}}\Big{]}\theta(\bar{v}^{2})\Big{[}\delta^{ij}- \frac{(m\bar{v}L)^{2}}{4}(\delta^{ij}+2\hat{L}^{i}\hat{L}^{j})\Big{]}+(\omega \to-\omega), \tag{29}\] \[B^{ij}_{0}(\omega) \approx \frac{16}{3\pi}\Big{[}\delta^{ij}K_{0}(|\bar{\omega}|)-\frac{(m \sigma L)^{2}}{5}(\delta^{ij}+2\hat{L}^{i}\hat{L}^{j})|\bar{\omega}|K_{1}(| \bar{\omega}|)\Big{]}. \tag{30}\]
The spectrum \(S_{\nabla\Psi}(\omega)=\delta^{ij}S_{\nabla\Psi}^{ij}(\omega,\vec{0})\) is shown in the right panel of Figure 3. Naively, this can be considered as a test mass acceleration power spectrum. While the overall structure of the spectrum is similar, the acceleration spectrum logarithmically diverges at small \(\omega\). More specifically, \(B^{ij}_{0}\propto\delta^{ij}\ln(1/\bar{\omega})\) for \(\bar{\omega}\ll 1\), and this logarithmic divergence is related to the long-range nature of the gravitational interaction. The derivation of \(B^{ij}(\omega,\vec{L})\) is given in Appendix A.
## III Gravitational wave interferometers
To investigate how each interferometer responds to the ULDM fluctuation, we first consider a simple setup shown in Figure 4 where a phase-coherent laser begins from TM0, reaches TM1, and returns to TM0 again at \(t\). The phase of the laser at time \(t\) is given by
\[e^{-i\omega_{L}t_{0}}=e^{-i\omega_{L}t}e^{2ik_{L}L}e^{i\Delta\phi},\]
since the phase along the null trajectory does not change. Here \((\omega_{L},\vec{k}_{L})\) is the laser frequency and wave vector, and \(t_{0}\) is the time at which the laser begins from TM0. Since \(t_{0}=t-2L\), the laser acquires additional phase \(e^{2ik_{L}L}\) during this one round-trip.
Ultralight dark matter introduces an additional phase in two different ways: it introduces the time delay of laser light and it perturbs the test mass position. The additional phase \(e^{i\Delta\phi}\) is introduced as a result, and it is given by
\[\Delta\phi(t)=2\omega_{L}(\delta t+\delta L) \tag{31}\]
where the test mass perturbation \(\delta L\) and the time delay \(\delta t\) are given as
\[\delta L =+\frac{1}{2}\hat{x}\cdot\Big{[}\big{(}\delta\vec{x}_{1}(t-L)- \delta\vec{x}_{0}(t-2L)\big{)}+\big{(}\delta\vec{x}_{1}(t-L)-\delta\vec{x}_{0} (t)\big{)}\Big{]}, \tag{32}\] \[\delta t =-\frac{1}{2}\left[\int_{t-L}^{t}dt^{\prime}+\int_{t-2L}^{t-L}dt^ {\prime}\right][\Psi(t^{\prime},\vec{x}(t^{\prime}))+\Phi(t^{\prime},\vec{x}(t ^{\prime}))], \tag{33}\]
where \(\delta\vec{x}_{0}\) and \(\delta\vec{x}_{1}\) are the perturbation of the position of TM0 and TM1 around their nominal positions, \(\vec{x}_{0}\) and \(\vec{x}_{1}\). Note that the argument \(\vec{x}(t^{\prime})\) in the metric perturbation must be computed along the null trajectory. The above expression is obtained by solving null geodesic with the metric perturbation; specifically, one finds \(t_{0}=t-2(L+\delta L)-2\delta t\).
The additional phase \(\Delta\phi\) can be expanded in terms of metric perturbations in the Fourier space. In particular, it can be approximated as
\[\widetilde{\Delta\phi}(\omega)\approx\pm\frac{\omega_{L}}{\omega^{2}}e^{i \omega L}\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\vec{k}\cdot\vec{x}_{0}}\big{[}i \vec{k}\cdot\vec{D}(\hat{n},L)\big{]}\hat{\Psi}(k), \tag{34}\]
where \(\hat{n}=(\vec{x}_{1}-\vec{x}_{0})/L\) and the vector \(\vec{D}(\hat{n},L)=2\hat{n}[e^{i\vec{k}\cdot\hat{n}L}-\cos(\omega L)]\) encodes the response of this simple system to the ULDM fluctuations. The detailed derivation of the expression is given in Appendix B.
### Interferometry response
#### iii.1.1 Michelson interferometer
Consider now the Michelson interferometer. The Michelson interferometer consists of two arms. The laser combines at the asymmetric output, and the resulting power is \(P=P_{0}\sin^{2}(k_{L}\Delta L+\Delta\phi_{\rm Mich})\) where \(\Delta\phi_{\rm Mich}=\frac{1}{2}(\Delta\phi_{x}-\Delta\phi_{y})\) with the additional phase \(\Delta\phi_{x,y}\) acquired by the laser that has propagated along \(x-\) and \(y-\)arm, respectively. The expression for \(\Delta\phi_{x,y}\) is given in (34). In the frequency space, the phase is
\[\widetilde{\Delta}\phi_{\rm Mich}(\omega)=\pm\frac{\omega_{L}}{\omega^{2}}e^{ i\omega L}\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\vec{k}\cdot\vec{x}_{0}}\big{[}i \vec{k}\cdot\vec{D}_{\rm Mich}(\hat{x},\hat{y},L)\big{]}\hat{\Psi}(k) \tag{35}\]
where \(\vec{D}_{\rm Mich}(\hat{x},\hat{y};L)=\frac{1}{2}[\vec{D}(\hat{x},L)-\vec{D}( \hat{y},L)]\) is the response vector of the Michelson interferometer, and \(\vec{x}_{0}\) is the beam splitter position. For simplicity, we assume an equal arm-length.
The power spectrum for differential arm-length \(\widetilde{\Delta L}/L=\widetilde{\Delta\phi}_{\rm Mich}/(\omega_{L}L)\) can be directly obtained from the above expression, and it is given by
\[S_{\Delta L/L}^{\rm ULDM}(\omega)=\frac{1}{\omega^{4}L^{2}}\int\frac{d^{3}k}{( 2\pi)^{3}}|\vec{k}\cdot\vec{D}_{\rm Mich}|^{2}P_{\Psi}(k)=\frac{\bar{a}^{2}}{ \omega^{4}L^{2}}\tau\Big{[}\sigma^{4}A_{\rm Mich}+B_{\rm Mich}\Big{]}. \tag{36}\]
Figure 4: The laser begins at \(t_{0}\) at TM0. It propagates to TM1 and returns to TM0 at time \(t\). During one round-trip, the laser acquires a phase \(e^{2ik_{L}L}\) without ultralight dark matter. With ultralight dark matter, additional phase \(e^{i\Delta\phi}=e^{2i\omega_{L}(\delta t+\delta L)}\) is induced because of (i) time-delay \(\delta t\) of the laser light and (ii) the fluctuation of the distance between two test masses \(\delta L\).
The detector response is encoded in the function \(A_{\rm Mich}\) and \(B_{\rm Mich}\). Note again the similar structure of the power spectrum compared to the density contrast and metric fluctuation \(\nabla\Psi\). The \(A_{\rm Mich}\) and \(B_{\rm Mich}\) can be decomposed in terms of \(A^{ij}\) and \(B^{ij}\) in (25) as follows:
\[A_{\rm Mich}(\omega;\hat{x},\hat{y},L)= 2\Big{[}\frac{1+\cos^{2}(\omega L)}{2}\big{(}A_{R}^{11}(\vec{0})+ A_{R}^{22}(\vec{0})\big{)}-\cos(\omega L)\big{(}A_{R}^{11}(\vec{x}_{1})+A_{R}^{22}( \vec{x}_{2})\big{)}\] \[-\big{(}A_{R}^{12}(\vec{x}_{12})+\cos^{2}(\omega L)A_{R}^{12}(\vec {0})-\cos(\omega L)(A_{R}^{12}(\vec{x}_{1})+A_{R}^{12}(\vec{x}_{2}))\big{)} \Big{]}, \tag{37}\] \[B_{\rm Mich}(\omega;\hat{x},\hat{y},L)= 2\Big{[}\frac{1+\cos^{2}(\omega L)}{2}\big{(}B_{R}^{11}(\vec{0}) +B_{R}^{22}(\vec{0})\big{)}-\cos(\omega L)\big{(}B_{R}^{11}(\vec{x}_{1})+B_{R} ^{22}(\vec{x}_{2})\big{)}\] \[-\big{(}B_{R}^{12}(\vec{x}_{12})+\cos^{2}(\omega L)B_{R}^{12}(\vec {0})-\cos(\omega L)(B_{R}^{12}(\vec{x}_{1})+B_{R}^{12}(\vec{x}_{2}))\big{)} \Big{]}, \tag{38}\]
where \(A_{R}^{ab}=\hat{x}_{a}^{i}\hat{x}_{b}^{j}A_{R}^{ij}\), \(B_{R}^{ab}=\hat{x}_{a}^{i}\hat{x}_{b}^{j}B_{R}^{ij}\),
\[A_{R}^{ij}(\vec{x})=\frac{1}{2}\Big{[}A^{ij}(\vec{x})+A^{ij}(-\vec{x})\Big{]}, \quad\text{and}\quad B_{R}^{ij}(\vec{x})=\frac{1}{2}\Big{[}B^{ij}(\vec{x})+B ^{ij}(-\vec{x})\Big{]}.\]
Here \((\hat{x}_{1},\hat{x}_{2})=(\hat{x},\hat{y})\) denote the unit vector to each arm. The dependence of \(A_{R}^{ab}\) on \(\omega\) is suppressed in the above expression for notational simplicity. This power spectrum can be compared with the noise power spectrum in the current and future ground-based interferometers, such as LIGO and Einstein Telescope. While we only discuss the Michelson interferometer here, the response of Fabry-Perot Michelson interferometer is the same up to the overall multiplication factor.
The response functions can be simplified in the isotropic (\(v_{0}/\sigma\ll 1\)) and long-wavelength limit (\(m\sigma L\ll 1\)). In such limits, they become
\[A_{\rm Mich} \approx\frac{8}{3}\Big{(}\frac{\bar{v}}{\sigma}\Big{)}^{6}\theta(\bar{v}^{2})e^{-(\bar{v}/\sigma)^{2}}\Big{[}\sin^{4}\Big{(}\frac{\omega L}{2}\Big{)}+\frac{(m\bar{v}L)^{2}}{4}\Big{(}\cos\omega L-\sin^{2}\Big{(}\frac{\omega L}{2}\Big{)}\Big{)}\Big{]}+(\omega\to-\omega), \tag{39}\] \[B_{\rm Mich} \approx\frac{128}{15\pi}|\bar{\omega}|K_{1}(|\bar{\omega}|)(m\sigma L)^{2}. \tag{40}\]
In this limit, one sees \(B_{\rm Mich}\propto(m\sigma L)^{2}=(L/\lambda)^{2}\), which signals the tidal limit of the differential acceleration as estimated in the introduction. On the other hand, for \(A_{\rm Mich}\), the exponential prefactor strongly peaks the response at \(\omega=2m\). The first term \(\sin^{4}(\omega L/2)\) dominates the second term as long as \(m\sigma L\gtrsim\sigma^{2}\).
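For concreteness, the limiting response functions (39)-(40) can be evaluated as below; the snippet works in natural units (\(c=1\)), and the values of \(m\), \(\sigma\) and \(L\) are arbitrary illustrative choices added here, not detector specifications.

```python
# Numerical sketch of the limiting Michelson response functions, Eqs. (39)-(40), in
# natural units (c = 1). Parameter values are assumed for illustration only.
import numpy as np
from scipy.special import kv

def B_mich(omega, m, sigma, L):
    """Low-frequency (omega < m sigma^2) response, Eq. (40)."""
    obar = abs(omega) / (m * sigma**2)
    return (128.0 / (15.0 * np.pi)) * obar * kv(1, obar) * (m * sigma * L) ** 2

def A_mich(omega, m, sigma, L):
    """Coherent (omega ~ 2m) response, Eq. (39); the omega -> -omega image is dropped."""
    vbar2 = omega / m - 2.0
    if vbar2 <= 0:
        return 0.0
    x = omega * L
    core = np.sin(x / 2) ** 4 + 0.25 * m**2 * vbar2 * L**2 * (np.cos(x) - np.sin(x / 2) ** 2)
    return (8.0 / 3.0) * (vbar2 / sigma**2) ** 3 * np.exp(-vbar2 / sigma**2) * core

m, sigma, L = 1.0, 1e-3, 0.1   # assumed values with m*sigma*L << 1 (long-wavelength limit)
print(A_mich(2 * m + m * sigma**2, m, sigma, L))   # peak of the coherent line
print(B_mich(m * sigma**2, m, sigma, L))           # low-frequency component at omega = m sigma^2
```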
#### iii.1.2 Time-delay interferometer
Let us now consider the time-delay interferometer. We take LISA as a benchmark for the discussion. The constellation consists of three spacecraft (\(\rm SC_{1}\), \(\rm SC_{2}\), \(\rm SC_{3}\)) with three arms, and in each arm, two Doppler shift measurements
Figure 5: A cartoon for the time-delay interferometer. Each spacecraft are loaded with two proof masses. The Michelson-like X variable measures the phase difference of the laser along two paths shown as blue and red dashed lines.
are carried out; in this simplified picture, there are six one-way Doppler shift measurements. For a detailed discussion of the time-delay interferometer, we refer readers to Refs. [27; 28].
A single one-way Doppler shift measurement can be used for the detection of gravitational waves, but the sensitivity of such measurement will be limited by laser frequency noise. Such laser frequency noise can be eliminated in a certain linear combination of the Doppler shift measurements; such linear combinations are called time-delay interferometer (TDI) variables [29; 30].
One such TDI variable is called the Michelson-like TDI variable \(X\). It is defined as
\[X(t)=[\Delta\phi_{3}(t)+\Delta\phi_{2}(t-2L)]-[\Delta\phi_{2}(t)+\Delta\phi_{3 }(t-2L)] \tag{41}\]
where \(\Delta\phi_{2}(t)\) [\(\Delta\phi_{3}(t)\)] is the phase obtained by the laser along its round trip from SC1 to SC2 (SC3) at time \(t\). This TDI-X variable measures the laser phase difference between two paths shown in Figure 5. Assuming an equal arm-length for simplicity, the \(X\) in the Fourier space is\({}^{1}\)
Footnote 1: In each spacecraft, two proof masses are loaded. We denote the position of proof masses loaded in SC\({}_{i}\) as \(\vec{x}_{i}\). We do not distinguish the difference in the position of two proof masses loaded in the same spacecraft. This is equivalent to assuming that the ULDM fluctuation perturbs two proof masses in the same spacecraft in the same way. This can be justified by noting that the wavelength of dark matter is always larger than the separation of proof masses in the same spacecraft.
\[\tilde{X}(\omega)=\pm\frac{\omega_{L}}{\omega^{2}}e^{i\omega L}\int\frac{d^{3 }k}{(2\pi)^{3}}e^{i\vec{k}\cdot\vec{x}_{1}}\big{[}i\vec{k}\cdot\vec{D}_{X}(\hat {n}_{3},\hat{n}_{2};L)\big{]}\tilde{\Psi}. \tag{42}\]
where
\[\vec{D}_{X}(\hat{n}_{3},\hat{n}_{2};L)=(1-e^{2i\omega L})[\vec{D}(\hat{n}_{3}, L)-\vec{D}(\hat{n}_{2},L)]=2(1-e^{2i\omega L})\vec{D}_{\rm Mich}(\hat{n}_{3}, \hat{n}_{2};L). \tag{43}\]
Here \(\hat{n}_{2}\) (\(\hat{n}_{3}\)) is the unit vector from SC\({}_{1}\) and SC\({}_{2}\) (SC\({}_{3}\)). The response vector \(\vec{D}_{X}\) is the same as that of the Michelson interferometer except for an overall factor of \(2(1-e^{2i\omega L})\). In the computations of the power spectrum, this additional factor leads to a common multiplication factor \(16\sin^{2}(\omega L)\), which will be ignored below.
The power spectrum due to ultralight dark matter fluctuation is
\[S_{\Delta L/L}^{\rm ULDM}(\omega)=\frac{\bar{a}^{2}}{\omega^{4}L^{2}}\tau \Big{[}\sigma^{4}A_{\rm Mich}(\omega;\hat{n}_{3},\hat{n}_{2},L)+B_{\rm Mich}( \omega;\hat{n}_{3},\hat{n}_{2},L)\Big{]} \tag{44}\]
which is the same as the simple Michelson interferometer except that the two arms are not orthogonal to each other, \(\hat{n}_{2}\cdot\hat{n}_{3}=1/2\neq 0\). The noise power spectrum is [31]
\[S_{\Delta L/L}(\omega)=\frac{1}{L^{2}}\left[S_{N}(\omega)+2[1+\cos^{2}(\omega L )]\frac{S_{\rm acc}(\omega)}{\omega^{4}}\right]. \tag{45}\]
where \(S_{N}(f)\) is the optical metrology noise power spectrum, and \(S_{\rm acc}(\omega)\) is the acceleration noise power spectrum. In this expression, the optical metrology and acceleration noise in each spacecraft are assumed to be identical, \(S_{N_{ij}}=S_{N}\) and \(S_{{\rm acc},ij}=S_{\rm acc}\).
### ULDM signal

#### iii.2.1 Deterministic signal
The \(\omega=2m\) part of the spectrum represents the coherently oscillating ULDM signals. This is a deterministic signal in the sense that the signal form is known. For such deterministic signals, we can use the matched filter. Consider the detector output \(d(t)=s(t)+n(t)\). By convolving the detector output with the filter, \(\int dt\,d(t)K(t)\), and choosing the filter \(K(t)\) such that it maximally overlaps with the signal \(s(t)\), one can add the signal coherently. The signal-to-noise ratio is
\[\frac{S}{N}=\left[\min(T,\sqrt{T\tau})\int_{-\infty}^{\infty}df\,\frac{S_{s}(f )}{S_{n}(f)}\right]^{1/2} \tag{48}\]
where \(T\) is the total integration time scale and \(\tau=1/m\sigma^{2}\) is the coherence time scale of the deterministic signal. Here \(S_{s}(f)\) is the power spectrum in differential arm length or strain unit from the ULDM fluctuations. The factor \(\min(T,\sqrt{T\tau})\) denotes the gain in the signal-to-noise ratio in the coherent search, \(S/N\sim\sqrt{T}\) when \(T<\tau\) and the gain in the incoherent search, \(S/N\sim T^{1/4}\) when \(T>\tau\). This will be used to place an upper limit on dark matter density in the solar system. Note that the power spectrum in this expression is defined as a two-sided spectrum.
To proceed further, we note that the power spectrum of the coherently oscillating mode is narrowly peaked at \(\omega=2m\) with a width \(\Delta\omega=m\sigma^{2}\). For the Michelson interferometry, the integral can be approximated as
\[\frac{S}{N}\approx\frac{\bar{a}\sigma^{2}}{2\pi^{2}f_{m}^{2}L}\bigg{[}\frac{ \min(T,\sqrt{T\tau})}{S_{\Delta L/L}^{one}(f_{m})}\bigg{]}^{\frac{1}{2}}\bigg{[} \tau\int_{0}^{\infty}df\,A_{\rm Mich}(f)\bigg{]}^{\frac{1}{2}} \tag{49}\]
where \(f_{m}=m/\pi\) and \(S_{\Delta L/L}^{\rm one}(f)=2S_{\Delta L/L}(f)\) is the one-sided noise power spectrum. We have evaluated all the other factors in the integral at \(f=f_{m}\) except for the exponential factor strongly peaked at \(\omega=2m\), assuming that they are smoothly distributed over \(\Delta\omega\simeq m\sigma^{2}\). In the isotropic (\(v_{0}/\sigma\ll 1\)) and long-wavelength limit (\(m\sigma L\ll 1\)), one finds \(\tau\int_{0}^{\infty}df\,\mathcal{A}_{\rm Mich}\simeq(8/\pi)\sin^{4}(f_{m}/2f _{*})\) with \(f_{*}=(2\pi L)^{-1}\). The above expression simplified to
\[\frac{S}{N}=\frac{2^{5/2}}{\sqrt{\pi}}\sin^{2}(f_{m}/2f_{*})\left[\frac{\bar{ a}\sigma^{2}}{(2\pi f_{m})^{2}L}\right]\left[\frac{\min(T,\sqrt{T\tau})}{S_{ \Delta L/L}^{\rm one}(f_{m})}\right]^{\frac{1}{2}}. \tag{50}\]
The quantity in the first squared parenthesis is the typical displacement \(\Delta L/L\) due to ULDM fluctuation during \(\Delta t\sim 1/f_{m}\). Note that \(\bar{a}=Gm_{\rm eff}/\lambda^{2}\) and \(m_{\rm eff}=\pi^{3/2}\rho\lambda^{3}\). The above equation can be solved for \(\rho/\rho_{0}\) with the local dark matter density \(\rho_{0}=0.4\,\)GeV/cm\({}^{3}\). For the interferometers with equilateral geometry, \(\tau\int_{0}^{\infty}A_{\rm Mich}\approx(4/\pi)\sin^{4}(f_{m}/2f_{*})\). The additional factor of \(1/2\) is because the two arms are not orthogonal to each other.
#### iii.2.2 Stochastic signal
The low-frequency part of the ULDM spectrum behaves similarly to the stochastic gravitational wave background. To search for this part of the spectrum, we ideally need more than two detectors, which will allow us to cross-correlate the outputs in different detectors. If the separation of detectors is smaller than the wavelength of dark matter, the output at each detector is expected to be correlated, while the noise is expected to be uncorrelated. In this way, we can single out the ultralight dark matter signal in the output data stream of multiple detectors.
Consider two detector output \(d_{1,2}(t)=s_{1,2}(t)+n_{1,2}(t)\) with the signal \(s_{1,2}\) and the noise \(n_{1,2}\). Then define the cross-correlated output \(Y\) as
\[Y=\int_{-T/2}^{T/2}dt\int_{-T/2}^{T/2}dt^{\prime}\,d_{1}(t)d_{2}(t^{\prime})Q (t-t^{\prime})\simeq\int_{-\infty}^{\infty}df\,\tilde{d}_{1}(f)\tilde{d}_{2}^{ *}(f)\tilde{Q}^{*}(f) \tag{51}\]
where \(Q(t)\) is a real filter function. Suppose that the cross-correlation of signals at two detectors is \(\langle\tilde{s}_{1}(f)\tilde{s}_{2}^{*}(f^{\prime})\rangle=\delta(f-f^{\prime} )S_{12}(f)\). Using the optimal filter \(\tilde{Q}(f)=S_{12}(f)/S_{n}^{2}(f)\) and assuming uncorrelated detector noise, one finds the signal-to-noise ratio as [32]
\[\frac{S}{N}=\left[T\int_{-\infty}^{\infty}df\,\frac{|S_{12}(f)|^{2}}{S_{n}^{2} (f)}\right]^{1/2}, \tag{52}\]
where \(T\) is the total integration time scale. In this way, the low-frequency part of the ultralight dark matter signal can be searched.
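Given the cross-spectrum \(S_{12}(f)\) and a common noise spectrum \(S_{n}(f)\) on a frequency grid, Eq. (52) reduces to a single quadrature; a minimal sketch (with placeholder inputs, added here only for illustration) is:

```python
# Minimal sketch of the cross-correlation statistic, Eq. (52), on a discrete frequency grid.
# S12 and Sn would come from Eqs. (53)-(54) and the detector noise model; here they are
# placeholders, so this only illustrates the quadrature itself.
import numpy as np

def cross_corr_snr(f, S12, Sn, T):
    """S/N = [ T * integral df |S12(f)|^2 / Sn(f)^2 ]^(1/2) over the band covered by f."""
    return np.sqrt(T * np.trapz(np.abs(S12) ** 2 / Sn ** 2, f))
```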
For cross-correlation, we are mostly interested in space-borne interferometers for the following reasons. For the cross-correlation to be relevant, two conditions need to be met: (i) the ultralight dark matter-induced signal must lie within the detector bandwidth, and (ii) the ultralight dark matter signals must be correlated in two detectors. Each interferometer has a range of frequencies to which the detector is sensitive. For the LIGO, the minimum frequency that LIGO is sensitive to is about \(f_{\rm min}\sim 10\,\)Hz. On the other hand, two LIGO detectors, one at Hanford and the other at Livingston, are separated by \(L_{\rm det}\simeq 3000\,\)km. To guarantee that the ultralight dark matter signal is correlated over these two detectors, we require \(L_{\rm det}<(1/\Delta p)=(\sigma/\omega)\). This results in \(\omega<5\times 10^{-2}\,\)Hz, which is more than two orders of magnitude smaller than the minimum detectable frequency in LIGO detectors. In other words, for the frequency range where the ultralight dark matter signals would be correlated over two LIGO detectors, LIGO detectors are not sensitive to ULDM signals due to a large seismic noise.
The situation in the space-borne interferometer is slightly different. For the sake of discussion, let us consider two co-planar LISA-like constellations. The minimum frequency of the LISA detector is \(f_{\rm min}\sim 10^{-5}\,\)Hz. On the other hand, the detector separation is \(L_{\rm det}\sim L=2.5\times 10^{6}\,\)km or less. This translates into the condition \(\omega<\sigma/L_{\rm det}\simeq 10^{-4}\,\)Hz for ULDM signals to be correlated over two detectors. There is a range of frequencies where the signals are correlated over detectors, while the detector maintains its sensitivity to the ultralight dark matter signals. The same arguments hold for other types of proposed space-borne interferometers, such as \(\mu\)Ares [18], a proposal using asteroids as GW detectors [19], and Big Bang Observatory (BBO) [33].
For this reason, we only consider space-borne interferometers. The cross-correlation of TDI-X variables is
\[S_{12}(f)=\frac{\bar{a}^{2}\tau}{(2\pi f)^{4}L^{2}}\mathcal{B}_{\rm cross}. \tag{53}\]
The overlap function is
\[\mathcal{B}_{\rm cross}= \cos^{2}(2\pi fL)\mathcal{B}^{11^{\prime}}(\vec{x}_{11^{\prime}}) +\mathcal{B}^{33^{\prime}}(\vec{x}_{33^{\prime}})+\mathcal{B}^{22^{\prime}}( \vec{x}_{22^{\prime}})-\mathcal{B}^{23^{\prime}}(\vec{x}_{23^{\prime}})- \mathcal{B}^{32^{\prime}}(\vec{x}_{32^{\prime}})\] \[-\cos(2\pi fL)\Big{[}\mathcal{B}^{31^{\prime}}(\vec{x}_{31^{ \prime}})+\mathcal{B}^{13^{\prime}}(\vec{x}_{13^{\prime}})-\mathcal{B}^{21^{ \prime}}(\vec{x}_{21^{\prime}})-\mathcal{B}^{12^{\prime}}(\vec{x}_{12^{\prime }})\Big{]}, \tag{54}\]
where \(\vec{x}_{i}\) is the position of \(i\)-th spacecraft in the first constellation, \(\hat{n}_{2,3}=(\vec{x}_{2,3}-\vec{x}_{1})/L\), \(\hat{n}_{1}=(\vec{x}_{3}-\vec{x}_{2})/L\), \(\vec{x}_{ij^{\prime}}\equiv\vec{x}_{i}-\vec{x}_{j^{\prime}}\), and \(\mathcal{B}^{ab^{\prime}}=\hat{n}_{a}^{i}\hat{n}_{b^{\prime}}^{j}\mathcal{B}^ {ij}\). The primed quantities are for the second constellation. We take \(L=L^{\prime}\) for simplicity. The signal-to-noise ratio is
\[\frac{S}{N}=\frac{\bar{a}^{2}}{(2\pi f_{0})^{4}L^{2}}\left[\frac{T}{(2\pi)^{2 }}\int_{-\infty}^{\infty}df\left(\frac{f_{0}}{f}\right)^{8}\frac{|\mathcal{B} _{\rm cross}(f)|^{2}}{f_{0}^{2}S_{\Delta L/L}^{2}(f)}\right]^{1/2}. \tag{55}\]
where \(f_{0}=\omega_{0}/2\pi=m\sigma^{2}/2\pi\). The prefactor is the squared value of the typical differential length change, \((\Delta L/L)^{2}\), over \(\Delta t\sim 1/f_{0}\). The integral will be numerically solved.
### Application
We apply the discussion in the previous sections to current and proposed gravitational wave interferometers.
#### LIGO
We take the sensitivity curve of LIGO O3 from Ref. [34]. The strain noise spectrum from ultralight dark matter can be found by using \(S_{\Delta L/L}(f)={\rm sinc}^{2}(2\pi fL)S_{h}(f)\). Since \({\rm sinc}(2\pi fL)\simeq 1\) for LIGO at \(f\lesssim 10^{4}\,\)Hz, we ignore the additional \({\rm sinc}^{2}(2\pi fL)\) factor. The arm length is \(L_{\rm LIGO}=4\,\)km. As we have already estimated in the introduction, the ULDM-induced noise is many orders of magnitude smaller than the instrumental noise. Still, it can provide an upper limit on the dark matter density in the solar system. For the search with a matched filter, we use (50) and rewrite it as
\[\frac{\rho}{\rho_{0}}\approx\frac{S}{N}\sqrt{\frac{\pi}{2}}\frac{1}{{\rm sinc }^{2}(f_{m}/2f_{*})}\left[\frac{(2\pi f_{*})^{2}L}{\bar{a}_{0}\sigma^{2}} \right]\left[\frac{S_{\Delta L/L}^{\rm one}(f_{m})}{\min(T,\sqrt{T\tau})} \right]^{\frac{1}{2}} \tag{56}\]
where \(\bar{a}_{0}=Gm_{{\rm eff},0}/\lambda^{2}\), \(m_{{\rm eff},0}=\pi^{3/2}\rho_{0}\lambda^{3}\) with \(\rho_{0}=0.4\,\)GeV/cm\({}^{3}\), and \(f_{*}=(2\pi L)^{-1}=10^{4}\,\)Hz for LIGO. Note that for LIGO, we can also approximate \({\rm sinc}^{2}(f_{m}/2f_{*})\simeq 1\). For \(T=1\,\)yr, \(\sigma=164\,\)km/sec, and \(S/N=1\), the upper limit on the dark matter density \(\rho/\rho_{0}\) is obtained and shown as a solid red line in Figure 2. Note that the above expression is valid in the long-wavelength limit \(m\sigma L\ll 1\), which is satisfied not only in LIGO but also in all other ground-based interferometers.
#### Einstein Telescope
The Einstein Telescope is a proposed future gravitational wave detector based on a dual-recycled Michelson interferometer. The arm length is \(L_{\rm ET}=10\,\)km, and we assume an equilateral geometry. The strain noise power spectrum is taken from [35]. Using the same expression (56) with \(f_{*}=(2\pi L_{\rm ET})^{-1}=5\times 10^{3}\,\)Hz, we show the upper limit on the density, \(\rho/\rho_{0}\), from the Einstein Telescope as a blue solid line in Figure 2. We choose the same parameters: \(T=1\,\)yr, \(\sigma=164\,\)km/sec, and \(S/N=1\).
#### LISA
The noise sources in LISA are conveniently divided into the collective optical metrology noise \(S_{N}(f)\) and the single test-mass acceleration noise \(S_{\rm acc}(f)\). Each of them is given by [36]
\[S_{N}^{1/2}(f) =10^{-11}\frac{\rm m}{\sqrt{\rm Hz}}\bigg{[}1+\left(\frac{2\rm mHz }{f}\right)^{4}\bigg{]}^{1/2}, \tag{57}\] \[S_{\rm acc}^{1/2}(f) =3\times 10^{-15}\,\frac{\rm m\,s^{-2}}{\sqrt{\rm Hz}}\left[1+ \left(\frac{0.4\,\rm mHz}{f}\right)^{2}\right]^{\frac{1}{2}}\left[1+\left( \frac{f}{8\,\rm mHz}\right)^{4}\right]^{\frac{1}{2}}. \tag{58}\]
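For reference, a direct transcription of Eqs. (57)-(58) follows (how these two terms are combined into the TDI-X noise PSD follows the expressions given earlier in the text and is not repeated here):

```python
import numpy as np

def lisa_noise_amplitudes(f):
    """Return (sqrt(S_N), sqrt(S_acc)) of Eqs. (57)-(58) for frequency f in Hz,
    in units of m/sqrt(Hz) and m s^-2/sqrt(Hz), respectively."""
    sqrt_SN = 1e-11 * np.sqrt(1.0 + (2e-3 / f) ** 4)
    sqrt_Sacc = 3e-15 * np.sqrt(
        (1.0 + (0.4e-3 / f) ** 2) * (1.0 + (f / 8e-3) ** 4)
    )
    return sqrt_SN, sqrt_Sacc
```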
LISA Pathfinder has reported a differential acceleration noise of the two test masses loaded in a single spacecraft that is smaller than the LISA proposal requirement by a factor of a few [37; 38]. For the matched filter search, we reuse (56) with an additional factor of \(\sqrt{2}\), arising from the fact that LISA has an equilateral geometry and its two arms are not orthogonal to each other. When recycling (56), we replace \(\rm sinc^{2}\left(f_{m}/2f_{*}\right)\to 1/[1+(f_{m}/2f_{*})^{2}]\) to show the envelope of the projection. The upper limit on the dark matter density \(\rho/\rho_{0}\) with the same parameters is shown as a green solid line in Figure 2.
For LISA, we also compare \(S_{\Delta L/L}^{\rm ULDM}(f)\) with the mission requirement \(S_{\Delta L/L}(f)\). For this comparison, we express both spectra in GW strain units. To convert to the strain noise power spectrum, we divide the power spectrum of the differential arm length by the sky- and polarization-averaged detector response function \(\mathcal{R}(f)\)[39],
\[\mathcal{R}(f)=\frac{3}{10(1+0.6(f/f_{*})^{2})}\]
with \(f_{*}=(2\pi L_{\rm LISA})^{-1}\simeq 19\,\)mHz. In the left panel of Figure 1, we show the mission requirement (red solid line) and \(S_{n}^{\rm ULDM}\) with \(m=10^{-12}\,\)eV (cyan) and \(m=10^{-13}\,\)eV (orange). The ULDM-induced noise is several orders of magnitude smaller than the mission requirement, and therefore, it can safely be ignored for the purpose of gravitational wave detection.
The subdominant low-frequency ULDM signals might be probed if more than one detector is available. Although the current LISA mission concept does not involve a second constellation [36], we assume two LISA-like interferometers and investigate the reach of such a configuration for the ULDM search as an exercise.2 For the cross-correlation, the positions of the spacecraft need to be specified. In heliocentric ecliptic coordinates, they are [41]
Footnote 2: Other space-borne interferometers, such as Taiji and TianQin, would allow the cross-correlation searches [40].
\[x =a\cos\alpha+ae[\sin\alpha\cos\alpha\sin\beta-(1+\sin^{2}\alpha) \cos\beta] \tag{59}\] \[y =a\sin\alpha+ae[\sin\alpha\cos\alpha\cos\beta-(1+\cos^{2}\alpha) \sin\beta]\] (60) \[z =\sqrt{3}ae\cos(\alpha-\beta) \tag{61}\]
where \(a\approx\) AU is the semi-major axis, \(e\) is the eccentricity, \(\alpha=\omega t+\kappa\), and \(\beta=2n\pi/3+\lambda\) with \(n=0,1,2\). Here \(\alpha\) is the orbital phase of the guiding center. Note that the arm length is given by \(L=2\sqrt{3}ea\); for \(a=\) AU and \(L_{\rm LISA}=2.5\times 10^{6}\,\)km, the eccentricity is \(e=4.8\times 10^{-3}\). The position of each spacecraft in the second constellation is given by the same expression but with primed parameters (\(\kappa^{\prime}\), \(\lambda^{\prime}\), and so on).
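A minimal sketch of these orbits is given below; the function name and argument conventions are ours, the spacecraft label follows \(n=0,1,2\) as above, and the default eccentricity corresponds to \(L_{\rm LISA}=2.5\times 10^{6}\,\)km.

```python
import numpy as np

AU = 1.495978707e11   # [m]

def spacecraft_position(t, n, omega_orb, kappa, lam, a=AU, e=4.8e-3):
    """Heliocentric ecliptic position (Eqs. (59)-(61)) of spacecraft n = 0, 1, 2."""
    alpha = omega_orb * t + kappa          # orbital phase of the guiding center
    beta = 2.0 * n * np.pi / 3.0 + lam
    x = a * np.cos(alpha) + a * e * (
        np.sin(alpha) * np.cos(alpha) * np.sin(beta)
        - (1.0 + np.sin(alpha) ** 2) * np.cos(beta))
    y = a * np.sin(alpha) + a * e * (
        np.sin(alpha) * np.cos(alpha) * np.cos(beta)
        - (1.0 + np.cos(alpha) ** 2) * np.sin(beta))
    z = np.sqrt(3.0) * a * e * np.cos(alpha - beta)
    return np.array([x, y, z])
```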
Rewriting (55) in terms of \(\rho/\rho_{0}\), we find
\[\frac{\rho}{\rho_{0}}=\frac{1}{2^{3/4}}\Big{(}\frac{S}{N}\Big{)}^{\frac{1}{2}} \left[\frac{(2\pi f_{0})^{2}L}{\bar{a}_{0}}\right]\left[\frac{T}{(2\pi)^{2}} \int_{0}^{\infty}df\left(\frac{f_{0}}{f}\right)^{8}\frac{|\mathcal{B}_{\rm cross }|^{2}}{f_{0}^{2}|S_{\Delta L/L}^{\rm one}(f)|^{2}}\right]^{-\frac{1}{4}}. \tag{62}\]
Using the halo dark matter parameters, \(\sigma=164\,\rm km/sec\) and \(\rho_{0}=0.4\,\rm GeV/cm^{3}\), and assuming \(\Delta\kappa=\kappa^{\prime}-\kappa=0\) (co-planar configuration), \(\Delta\lambda=\lambda^{\prime}-\lambda=\pi/2\), and \(T=5\,\rm yr\), we find the reach of two LISA-like detectors, shown as a dashed green line in Figure 2. For the frequency range, we use \(f_{\rm min}=10^{-5}\,\rm Hz\) and \(f_{\rm max}=1\,\rm Hz\). We have included the galactic confusion noise, assuming \(T=4\,\rm yr\) of operation [42].
#### \(\mu\)Ares
We also consider the \(\mu\)Ares proposal [18] for the ultralight dark matter search. The proposed concept is similar to LISA in the sense that it consists of three spacecraft and operates as a time-delay interferometer. The constellation is heliocentric, with an arm length of \(L=4\times 10^{8}\,\)km. For the matched filter and cross-correlation analyses, we take the strain noise power spectrum from [18], including the astrophysical foreground. The upper limit on the dark matter density \(\rho/\rho_{0}\) with matched filtering is shown as a purple line in Figure 2.
We also compare the ultralight dark matter-induced noise with the required noise level for the proposed mission concept. The proposal requires \(S_{\rm acc}^{1/2}(f)=10^{-15}\,\rm m\,s^{-2}/\sqrt{\rm Hz}\), and \(S_{N}^{1/2}(f)=50\,\rm pm/\sqrt{\rm Hz}\). The result is shown on the right panel of Figure 1 for \(m=10^{-12}\)-\(10^{-15}\,\rm eV\). Note that the noise curve in this figure is obtained by using (45) with the proposal requirements, but without any astrophysical foregrounds. Note also that, for \(f\lesssim{\rm a~{}few}\times 10^{-7}\,\rm Hz\), the gravity gradient noise from asteroids could dominate the total noise spectrum [20].
For the cross-correlation analysis, we specify the position of each spacecraft in heliocentric ecliptic coordinates as
\[\vec{x}_{n}=R\Big{(}\rm cos[\phi+2\pi(n-1)/3],\,\sin[\phi+2\pi(n-1)/3],\,0 \Big{)} \tag{63}\]
where \(n=1,2,3\) and \(\phi\) is the orbital phase of the interferometer. We assume another such constellation lying in the same plane (co-planar configuration), with a relative orbital phase \(\Delta\phi\) with respect to the first constellation. Using (62), we find the projected sensitivity to \(\rho/\rho_{0}\) of the cross-correlation search in \(\mu\)Ares, shown as a purple dashed line in Figure 2. We use \(f_{\rm min}=4\times 10^{-7}\,\rm Hz\), \(f_{\rm max}=10^{-1}\,\rm Hz\), and \(\Delta\phi=\pi/2\).
#### Big Bang Observatory
The configuration of the Big Bang Observatory is similar to that of LISA, except for a smaller arm length, \(L_{\rm BBO}=5\times 10^{7}\,\rm m\). The instrument parameters are [43]
\[S_{N}^{1/2} = 1.4\times 10^{-17}\,\rm m/\sqrt{\rm Hz}, \tag{64}\] \[S_{\rm acc}^{1/2} = 3\times 10^{-17}\,\rm m\,s^{-2}/\sqrt{\rm Hz}. \tag{65}\]
The cross-correlation search is identical to the one for LISA, except for this arm-length difference and different instrumental parameters. The matched filter and cross-correlation searches (the latter using two co-planar constellations) yield the upper limits on \(\rho/\rho_{0}\) represented by the solid and dashed brown lines, respectively, in Figure 2. For the analysis, we use \(f_{\rm min}=10^{-4}\,\rm Hz\), \(f_{\rm max}=10^{2}\,\rm Hz\), and \(\Delta\lambda=\pi\). We have included the same galactic confusion noise as in the case of LISA, while assuming that the other astrophysical foreground around \(\sim{\cal O}(0.1)\,\rm Hz\) from neutron star binaries can be fully resolved [44].
## IV Discussion
We now discuss some of the effects that were ignored in the main discussion, as well as some further implications of our results.
In the main discussion, we have ignored the mean acceleration on the test masses due to ULDM; we have considered only its variance. The mean acceleration is nothing but the dynamical friction, and it is present as long as \(\vec{v}_{0}\neq 0\) in the rest frame of the detector. One may directly compute the dynamical friction from the expression (19). The dynamical friction for the ultralight dark matter has already been obtained in [45], where it is given by
\[\langle a^{i}\rangle=\hat{v}_{0}^{i}\frac{8\pi G^{2}\rho m_{\rm eff}\ln\Lambda }{\sigma^{2}}\left[G\Big{(}\frac{v_{0}}{\sigma}\Big{)}+\frac{m_{t}}{2m_{\rm eff }}G\Big{(}\frac{v_{0}}{\sqrt{2}\sigma}\Big{)}\right] \tag{66}\]
Here \(G(X)=[{\rm erf}(X)-(2X/\sqrt{\pi})e^{-X^{2}}]/(2X^{2})\), \(m_{t}\) is the mass of the test body, and \(\ln\Lambda\) is the Coulomb logarithm arising from the long-range nature of the gravitational force. For the halo dark matter parameters used above and \(m=10^{-22}\,\)eV, this dynamical friction causes a drift of the constellation of \(\Delta x\sim\mathcal{O}(1)\,\)m over a year, which is much smaller than the expected, unavoidable change of the LISA arm length, \(\Delta x\sim\mathcal{O}(10^{4}\)-\(10^{5})\,\)km, due to the orbital dynamics [36]. In addition, the dynamical friction force acts on each spacecraft in the same way, and therefore we do not expect it to change the arm length itself. For these reasons, we do not expect the dynamical friction to be relevant for the discussion of the ULDM-induced signals in the interferometers.
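For completeness, a small sketch implementing \(G(X)\) and the magnitude of the mean acceleration of Eq. (66) in SI units; the function names are ours, and the value of the Coulomb logarithm is left to the user.

```python
import numpy as np
from scipy.special import erf

G_N = 6.674e-11   # Newton's constant [m^3 kg^-1 s^-2]

def G_fn(X):
    """G(X) = [erf(X) - (2X/sqrt(pi)) exp(-X^2)] / (2 X^2), cf. Eq. (66)."""
    return (erf(X) - 2.0 * X / np.sqrt(np.pi) * np.exp(-X ** 2)) / (2.0 * X ** 2)

def mean_acceleration(rho, m_eff, m_t, sigma, v0, lnLambda):
    """Magnitude of the dynamical-friction acceleration of Eq. (66)."""
    pref = 8.0 * np.pi * G_N ** 2 * rho * m_eff * lnLambda / sigma ** 2
    return pref * (G_fn(v0 / sigma)
                   + m_t / (2.0 * m_eff) * G_fn(v0 / (np.sqrt(2.0) * sigma)))
```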
We have found that some future gravitational wave interferometers could potentially probe \(\rho/\rho_{0}\sim\mathcal{O}(10^{2})\), where \(\rho_{0}=0.4\,\)GeV/cm\({}^{3}\) is the local dark matter density. Since we have projected the sensitivity of the interferometers in units of \(\rho_{0}\), and this local dark matter density is one of the key input parameters for all terrestrial dark matter detectors, it is worthwhile to discuss how the value \(\rho_{0}\sim\mathcal{O}(0.1)\,\)GeV/cm\({}^{3}\) is obtained. The current measurements of the local dark matter density are performed over much larger spatial scales. For instance, even the most local measurements of the dark matter density select stellar tracers within \(\mathcal{O}(10^{2})\) pc around the solar system, and therefore the inferred dark matter density is an average over that spatial volume, \(V\sim[\mathcal{O}(10^{2})\,\)pc\(]^{3}\)[21; 22]. When it comes to measurements within the solar system, the dark matter density is poorly constrained. A measurement using planetary ephemerides places an upper limit on the dark matter density in the solar system of \(\rho/\rho_{0}\lesssim 10^{4}\)[46], while measurements from lunar laser ranging and the LAGEOS geodetic satellite place an upper limit on the dark matter density between the Moon and the satellite's orbital radius of \(\rho/\rho_{0}\lesssim 10^{11}\)[47].
Furthermore, a large density fluctuation might simply arise statistically in the ultralight dark matter halo. The field density \(\rho_{\phi}\) follows an exponential distribution, \(p(\rho_{\phi})d\rho_{\phi}=[\exp(-\rho_{\phi}/\rho_{0})/\rho_{0}]d\rho_{\phi}\) (see Appendix C). Given that the correlation of the density field drops exponentially over \(L\gtrsim 1/m\sigma\) [45], we may treat the density fluctuations \(\rho_{\phi}\) in wavelength-sized patches as statistically independent random variables. The probability of finding a patch with \(\rho_{\phi}>c\rho_{0}\) is then given by
\[P(\rho_{\phi}>c\rho_{0})=\int_{c\rho_{0}}^{\infty}p(\rho_{\phi})\,d\rho_{\phi}=e^{-c}.\]
On the other hand, the number of statistically independent patches in a volume \(V\) is \(N\sim V/\lambda^{3}\). For \(m=10^{-15}\,\)eV, the wavelength is about the AU scale, and the number of patches in a \(V=(10\,\)kpc\()^{3}\) volume is \(N=V/\lambda^{3}=(10\,\)kpc/AU\()^{3}\sim 10^{28}\). For \(c=60\), one finds \(P(\rho_{\phi}>60\rho_{0})=e^{-60}\simeq 10^{-26}\). This would mean that there might be of order a hundred AU-sized patches within a \((10\,\)kpc\()^{3}\) volume with \(\rho\gtrsim 60\rho_{0}\).
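The counting in this paragraph can be reproduced in a few lines (following the text, the wavelength is approximated as 1 AU for \(m=10^{-15}\,\)eV):

```python
import math

c = 60.0                    # overdensity threshold, rho_phi > c * rho_0
kpc_in_AU = 2.063e8         # 1 kpc in astronomical units

N_patches = (10 * kpc_in_AU) ** 3      # wavelength-sized patches in (10 kpc)^3
P_exceed = math.exp(-c)                # exponential distribution, P(rho_phi > c rho_0)
print(f"N ~ {N_patches:.1e}, P = {P_exceed:.1e}, "
      f"expected number of patches ~ {N_patches * P_exceed:.0f}")
```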
We have only considered cross-correlation searches for space-borne interferometers. The reason this is not suitable for ground-based interferometers, especially the current advanced LIGO, is that the detector separation is too large for the ultralight dark matter signals to remain correlated in the relevant frequency range of the detector. However, in the initial LIGO phase, the two detectors at Hanford, with \(4\,\)km and \(2\,\)km arms, were co-located and co-aligned, and therefore it is in principle possible to use the data of the two Hanford detectors from the initial LIGO phase for a cross-correlation search, as demonstrated in [48; 49] for stochastic gravitational waves. Since the detectors are co-located, environmental noises are correlated, and this correlated noise must be carefully accounted for in the analysis.
For the low-frequency ULDM signals, we have considered the cross-correlation, which requires at least two detectors. One may still be able to search for these low-frequency ULDM signals with a single detector with the help of a null data stream. A null data stream is an output of the detector in which the expected signal, either GWs or ULDM, is suppressed, allowing a way to calibrate the instrumental noises. For time-delay interferometers like LISA, the symmetrized Sagnac channel is shown to be insensitive to gravitational wave signals at low frequencies, \(f<f_{*}=(2\pi L)^{-1}\), which could, at least partially, provide a way to distinguish stochastic gravitational wave backgrounds from other noise sources at such low frequencies [50; 51; 52]. Similarly to stochastic gravitational wave backgrounds, the ultralight dark matter signal is strongly suppressed in the symmetrized Sagnac channel, as \(\propto(m\sigma L)^{4}\) for \(m\sigma L<1\), and therefore using such a null data stream might provide a way to search for ULDM at sufficiently small masses, with the help of additional information such as the annual modulation of the ULDM signal.
## V Conclusion
We have investigated how the density fluctuations of ultralight dark matter affect interferometers designed for the detection of gravitational waves. We provide a systematic way to compute the ULDM spectrum in any gravitational wave interferometer. We show that the ultralight dark matter-induced noise is most significant when the arm length is large, as for LISA or other proposals with an astronomical arm length, and that, in all cases we consider, the ultralight dark matter effects are subdominant compared to other noise sources, e.g. interferometric read-out noise and acceleration noise. We then consider whether such interferometers can be used to place an upper bound on the dark matter density in the solar system, and find that, under certain assumptions, \(\rho/\rho_{0}\sim\mathcal{O}(10^{2})\) might be probed with future gravitational wave interferometers with arm lengths close to an astronomical unit.
We note that the current local dark matter density measurement, \(\rho_{0}=0.4\,\mathrm{GeV/cm^{3}}\), is performed on a much larger scale, e.g. \(\gtrsim\mathcal{O}(100)\,\mathrm{pc}\), and that the constraints on the dark matter abundance in the solar system from planetary ephemerides and laser ranging experiments are several orders of magnitude larger than \(\rho_{0}\). Especially given that there has so far been no direct or gravitational detection of dark matter within the solar system, these interferometric approaches provide an interesting way to probe ultralight dark matter through gravitational interactions alone. In addition, it has recently been proposed that the ultralight dark matter density in the solar system might be larger than the local dark matter density through certain capture processes [53; 54; 55]. Our interferometric search for ultralight dark matter in the solar system is expected to provide an interesting probe of such possibilities.
###### Acknowledgements.
We would like to thank Julian Rey and Nicholas Rodd for useful conversations. We especially thank Alessandro Lenoci for the initial collaboration. We also thank Thomas Konstandin, Germano Nardini, and Mauro Pieroni for their useful comments and suggestions on the manuscript. This work is supported by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC 2121 Quantum Universe - 390833306.
## Appendix A Power spectrum
We provide a detailed computation of the low-frequency part of the power spectrum \(S^{ij}_{\nabla\Psi}(\omega,\vec{L})\). To prepare an explicit computation, we derive the integral expression of the power spectrum (25) in the non-relativistic limit:
\[S^{ij}_{\nabla\Psi}(\omega,\vec{L})= 32\pi^{3}G^{2}\int d^{3}v_{c}\int d^{3}v_{d}\,f(\vec{v}_{c}+ \vec{v}_{d})f(\vec{v}_{c}-\vec{v}_{d}) \tag{100}\] \[\times \Big{[}v_{c}^{i}v_{c}^{j}\Big{(}\delta(\omega-\omega_{1}-\omega _{2})e^{2im\vec{v}_{c}\cdot\vec{L}}+\delta(\omega+\omega_{1}+\omega_{2})e^{-2 im\vec{v}_{c}\cdot\vec{L}}\Big{)}\] \[+\frac{v_{d}^{i}v_{d}^{j}}{v_{d}^{4}}\Big{(}\delta(\omega-\omega_ {1}+\omega_{2})e^{2im\vec{v}_{d}\cdot\vec{L}}+\delta(\omega+\omega_{1}-\omega _{2})e^{-2im\vec{v}_{d}\cdot\vec{L}}\Big{)}\Big{]}\]
where we have introduced new integration variables \(\vec{v}_{c}\) and \(\vec{v}_{d}\), defined as
\[\vec{v}_{c}=\frac{1}{2}(\vec{v}_{1}+\vec{v}_{2}),\qquad\vec{v}_{d}=\frac{1}{2 }(\vec{v}_{1}-\vec{v}_{2}).\]
The two terms in the first line are related by the complex conjugate with \(\omega\to-\omega\). The two terms in the last line are the same under the exchange of the integration variable \(\vec{v}_{1}\leftrightarrow\vec{v}_{2}\). The first two terms represent the spectrum at \(\omega=\pm 2m\), denoted by \(A^{ij}(\omega,\vec{L})\) in the main text, and the last two terms represent the low-frequency part, denoted by \(B^{ij}(\omega,\vec{L})\). Since the computation of \(\omega=\pm 2m\) mode is straightforward, only the computation of \(B^{ij}\) will be given below.
The relevant part of the power spectrum is
\[S^{ij}_{\nabla\Psi}(\omega,\vec{L})= \frac{32\pi^{3}G^{2}}{m}\int d^{3}v_{c}\int d^{3}v_{d}\,f(\vec{v} _{c}+\vec{v}_{d})f(\vec{v}_{c}-\vec{v}_{d})\frac{v_{d}^{i}v_{d}^{j}}{v_{d}^{4} }\delta\Big{(}\vec{v}_{c}\cdot\vec{v}_{d}-\frac{\omega}{2m}\Big{)}e^{2im\vec{ v}_{d}\cdot\vec{L}} \tag{101}\]
where we have used the symmetry of the integrand under the exchange of the integration variables \(v_{1}\leftrightarrow v_{2}\) in (100) and taken the non-relativistic limit for the \(\delta\)-function.3 Using the normal distribution
Footnote 3: In the case of \(\vec{L}=0\) and \(\omega\to 0\) the above power spectrum is nothing but the second diffusion coefficient \(D[\Delta v^{i}(T)\Delta v^{j}(T)]=(\Delta v^{i}(T)\Delta v^{j}(T))/T\) of the test mass with the change of velocity \(\Delta v^{i}(T)\) over the time \(T\). Compare the expression with (54) in Bar-Or et al [45].
\[f(\vec{v})=\frac{(\rho_{0}/m)}{(2\pi\sigma^{2})^{3/2}}\exp\left[-\frac{(\vec{v}-\vec{v}_{0})^{2}}{2\sigma^{2}}\right],\]
we find
\[S^{ij}_{\nabla\Psi}(\omega,\vec{L})= \frac{4(G\rho_{0})^{2}}{m^{3}\sigma^{6}}\int d^{3}v_{c}\int d^{3}v_{d}\,\exp\left[-\frac{(\vec{v}_{c}-\vec{v}_{0})^{2}}{\sigma^{2}}-\frac{v_{d}^{2}}{\sigma^{2}}\right]\frac{v_{d}^{i}v_{d}^{j}}{v_{d}^{4}}\delta\Big{(}\vec{v}_{c}\cdot\vec{v}_{d}-\frac{\omega}{2m}\Big{)}e^{2im\vec{v}_{d}\cdot\vec{L}}\] \[= \frac{4(G\rho_{0})^{2}}{m^{3}\sigma^{4}}\int d^{3}v_{c}\int d^{3}v_{d}\,e^{-(\vec{v}_{c}-\vec{v}_{0}/\sigma)^{2}}e^{-v_{d}^{2}}\frac{v_{d}^{i}v_{d}^{j}}{v_{d}^{4}}\delta\Big{(}\vec{v}_{c}\cdot\vec{v}_{d}-\frac{\bar{\omega}}{2}\Big{)}e^{2i\vec{v}_{d}\cdot\vec{L}_{\lambda}}\] \[= -\frac{(G\rho_{0})^{2}}{m^{3}\sigma^{4}}\frac{\partial^{2}}{\partial L_{\lambda}^{i}\partial L_{\lambda}^{j}}\int d^{3}v_{c}\int d^{3}v_{d}\,e^{-(\vec{v}_{c}-\vec{v}_{0}/\sigma)^{2}}e^{-v_{d}^{2}}\frac{1}{v_{d}^{4}}\delta\Big{(}\vec{v}_{c}\cdot\vec{v}_{d}-\frac{\bar{\omega}}{2}\Big{)}e^{2i\vec{v}_{d}\cdot\vec{L}_{\lambda}} \tag{100}\]
where in the second line we have changed the integration variables \(\vec{v}_{c,d}\rightarrow\vec{v}_{c,d}\sigma\), and defined \(\vec{L}_{\lambda}=m\sigma\vec{L}\) and \(\bar{\omega}=\omega/(m\sigma^{2})\). In the last line, the vector \(v_{d}^{i}\) in the integrand is replaced with a derivative with respect to \(\vec{L}_{\lambda}\).
To proceed further, we use the integral representation of the \(\delta\)-function, \(\delta(x)=\int(ds/2\pi)e^{-ixs}\). We then perform the \(\vec{v}_{c}\) integral, followed by the \(\hat{v}_{d}\) integral. After some manipulation, we find
\[S^{ij}_{\nabla\Psi}(\omega,\vec{L})= -4\pi^{3/2}\frac{(G\rho_{0})^{2}}{m^{3}\sigma^{4}}\frac{\partial^{2}}{\partial L_{\lambda}^{i}\partial L_{\lambda}^{j}}\int_{-\infty}^{\infty}ds\,e^{-is\bar{\omega}}\sqrt{1+s^{2}}\int_{0}^{\infty}dy\,\frac{e^{-y^{2}}}{y^{2}}\mathrm{sinc}(2yY) \tag{101}\]
where \(\vec{Y}=(1+s^{2})^{-1/2}(s\vec{v}_{0}/\sigma+\vec{L}_{\lambda})\). The \(y\) integral is divergent. The divergence can be regulated by \(\mathrm{sinc}(2yY)\rightarrow\mathrm{sinc}(2yY)-1\). The additional \(-1\) term would not contribute to the final result as it does not depend on \(L_{\lambda}\). Using this regularization, we obtain
\[S^{ij}_{\nabla\Psi}(\omega,\vec{L})= (\bar{a}^{2}\tau)\left[\frac{2}{\sqrt{\pi}}\int_{-\infty}^{\infty}ds\,\frac{e^{-is\bar{\omega}}}{\sqrt{1+s^{2}}}\left(\delta^{ij}\frac{\mathrm{erf}(Y)-G(Y)}{Y}+Y^{i}Y^{j}\frac{3G(Y)-\mathrm{erf}(Y)}{Y^{3}}\right)\right]\equiv(\bar{a}^{2}\tau)B^{ij}(\omega,\vec{L}). \tag{102}\]
This reproduces the result used in the main text.
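A sketch of how a single component of \(B^{ij}\) may be evaluated numerically from the last expression is given below; the finite cutoff of the slowly converging, oscillatory \(s\)-integral is our own crude regulator and its accuracy should be checked by varying it.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def G_fn(Y):
    return (erf(Y) - 2.0 * Y / np.sqrt(np.pi) * np.exp(-Y ** 2)) / (2.0 * Y ** 2)

def B_ij(i, j, omega_bar, L_lambda, v0_over_sigma, s_max=200.0):
    """Component (i, j) of the overlap tensor B^{ij}; assumes |Y| > 0.

    omega_bar     : dimensionless frequency omega/(m sigma^2)
    L_lambda      : 3-vector m*sigma*L
    v0_over_sigma : 3-vector v_0/sigma
    """
    L_lambda = np.asarray(L_lambda, float)
    v0s = np.asarray(v0_over_sigma, float)

    def tensor(s):
        Yvec = (s * v0s + L_lambda) / np.sqrt(1.0 + s ** 2)
        Y = np.linalg.norm(Yvec)
        return ((i == j) * (erf(Y) - G_fn(Y)) / Y
                + Yvec[i] * Yvec[j] * (3.0 * G_fn(Y) - erf(Y)) / Y ** 3)

    re, _ = quad(lambda s: np.cos(s * omega_bar) * tensor(s) / np.sqrt(1 + s ** 2),
                 -s_max, s_max, limit=500)
    im, _ = quad(lambda s: np.sin(s * omega_bar) * tensor(s) / np.sqrt(1 + s ** 2),
                 -s_max, s_max, limit=500)
    return 2.0 / np.sqrt(np.pi) * (re - 1j * im)
```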
## Appendix B Induced phase
In this Appendix, we discuss the approximation for the phase (34). We begin with
\[\Delta\phi(t)=2\omega_{L}(\delta t+\delta L) \tag{103}\]
where the time delay \(\delta t\) and the arm-length fluctuation \(\delta L\) are
\[\delta L=+\frac{1}{2}\hat{x}\cdot\Big{[}\big{(}\delta\vec{x}_{1}(t-L)-\delta\vec{x}_{0}(t-2L)\big{)}+\big{(}\delta\vec{x}_{1}(t-L)-\delta\vec{x}_{0}(t)\big{)}\Big{]}, \tag{104}\] \[\delta t=-\frac{1}{2}\left[\int_{t-L}^{t}dt^{\prime}+\int_{t-2L}^{t-L}dt^{\prime}\right][\Psi(t^{\prime},\vec{x}(t^{\prime}))+\Phi(t^{\prime},\vec{x}(t^{\prime}))]. \tag{105}\]
In the Fourier space, each of them is
\[\widetilde{\delta L}(\omega) =e^{i\omega L}\hat{x}\cdot\big{[}\delta\vec{x}_{1}(\omega)- \cos(\omega L)\delta\vec{x}_{0}(\omega)\big{]}=\frac{e^{i\omega L}}{\omega^{2} }\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\vec{k}\cdot\vec{x}_{0}}\Big{[}e^{i\vec{k} \cdot\hat{n}L}-\cos(\omega L)\Big{]}(i\vec{k}\cdot\hat{n})\tilde{\Phi} \tag{106}\] \[\widetilde{\delta t}(\omega) =-e^{i\omega L}\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\vec{k}\cdot\vec{ x}_{0}}\left[\frac{(e^{i\vec{k}\cdot\hat{n}L}-\cos\omega L)(i\vec{k}\cdot\hat{n})+ \omega\sin\omega L}{\omega^{2}-(\vec{k}\cdot\hat{n})^{2}}\right](\tilde{\Phi}+ \tilde{\Psi}) \tag{107}\]
where we have used \(\delta\vec{x}_{i}(\omega)=-(1/\omega)^{2}\int[d^{3}k/(2\pi)^{3}]e^{i\vec{k} \cdot\vec{x}_{i}}\tilde{a}(\omega,\vec{k})\) and \(\tilde{a}(k)=-i\vec{k}\tilde{\Phi}(k)\). Here \(\hat{n}=(\vec{x}_{1}-\vec{x}_{0})/L\). The phase in the frequency space is
\[\widetilde{\Delta\phi}(\omega)=2e^{i\omega L}\omega_{L}\int\frac{d^{3}k}{(2\pi)^ {3}}e^{i\vec{k}\cdot\vec{x}_{0}}\Big{[}\frac{(e^{i\vec{k}\cdot\hat{n}L}-\cos \omega L)(i\vec{k}\cdot\hat{n})}{\omega^{2}}\tilde{\Phi}-\frac{(e^{i\vec{k}\cdot \hat{n}L}-\cos\omega L)(i\vec{k}\cdot\hat{n})+\omega\sin\omega L}{\omega^{2}-( \vec{k}\cdot\hat{n})^{2}}(\tilde{\Phi}+\tilde{\Psi})\Big{]}. \tag{108}\]
To simplify this expression, let us consider the power spectra of the metric perturbations in the non-relativistic limit. They are given by
\[P_{\Psi}(k) =\frac{(4\pi Gm)^{2}}{2k^{4}}(2\pi)^{4}\int d^{3}v_{1}d^{3}v_{2}\,f_{1}f_{2}\Big{[}v_{c}^{4}\delta^{(4)}(k-p_{1}-p_{2})+\delta^{(4)}(k-p_{1}+p_{2})\Big{]}+(k\rightarrow-k) \tag{109}\] \[P_{\Phi}(k) =\frac{(4\pi Gm)^{2}}{2k^{4}}(2\pi)^{4}\int d^{3}v_{1}d^{3}v_{2}\,f_{1}f_{2}\Big{[}(v_{c}^{2}+v_{d}^{2}-3(\hat{v}_{c}\cdot\vec{v}_{d})^{2})^{2}\delta^{(4)}(k-k_{1}-k_{2})+\delta^{(4)}(k-k_{1}+k_{2})\Big{]}+(k\rightarrow-k) \tag{110}\]
where \(f_{i}=f(\vec{v}_{i})\) is the velocity distribution. In both cases, the \(\omega=2m\) mode is velocity suppressed, while the \(\omega<m\sigma^{2}\) modes do not have such velocity suppression.
Let us first consider \(\omega<m\sigma^{2}\) modes. In this case, \(\tilde{\Phi}=\tilde{\Psi}\) and \(\omega/k\sim\sigma\ll 1\). Therefore, the time delay contribution is suppressed by \((\omega/k)^{2}\sim\sigma^{2}\) compared to the test mass acceleration. Hence, the phase is approximated as
\[\widetilde{\Delta\phi}(\omega)=e^{i\omega L}\frac{\omega_{L}}{\omega^{2}}\int \frac{d^{3}k}{(2\pi)^{3}}e^{i\vec{k}\cdot\vec{x}_{0}}(i\vec{k}\cdot\vec{D}) \tilde{\Psi} \tag{100}\]
with \(\vec{D}(\hat{n},L)=\hat{n}(e^{i\vec{k}\cdot\hat{n}L}-\cos\omega L)\).
Now consider \(\omega=2m\). In this case, \(\omega/k\sim 1/\sigma\gg 1\). The phase is approximated as
\[\widetilde{\Delta\phi}(\omega) \approx -2e^{i\omega L}\frac{\omega_{L}}{\omega^{2}}\int\frac{d^{3}k}{(2 \pi)^{3}}e^{i\vec{k}\cdot\tilde{x}_{0}}\Big{[}(e^{i\vec{k}\cdot\hat{n}L}-\cos \omega L)(i\vec{k}\cdot\hat{n})\tilde{\Psi}+\omega\sin\omega L\Big{(}1+\frac{( \vec{k}\cdot\hat{n})^{2}}{\omega^{2}}\Big{)}(\tilde{\Phi}+\tilde{\Psi})\Big{]} \tag{101}\] \[= -2e^{i\omega L}\frac{\omega_{L}}{\omega^{2}}\int\frac{d^{3}k}{(2 \pi)^{3}}e^{i\vec{k}\cdot\tilde{x}_{0}}\Big{[}(e^{i\vec{k}\cdot\hat{n}L}-\cos \omega L-i(\vec{k}\cdot\hat{n}L)\sin\omega L)(i\vec{k}\cdot\hat{n})\tilde{\Psi }+\text{sinc}\,\omega L(\vec{k}\cdot\hat{n})^{2}L\tilde{\Phi}\Big{]}\]
In the second line, we neglect the \(\omega\sin(\omega L)(\tilde{\Phi}+\tilde{\Psi})\) term. Interferometers are always sensitive to the phase difference, and since this term does not have any dependence on the direction of light propagation, it cancels out in the final interferometer observables. For \(\omega L\gtrsim 1\), or for \(kL<(\omega L)^{2}<1\), the terms proportional to \((\vec{k}\cdot\hat{n})^{2}\) are negligible, and thus the phase is approximated as
\[\widetilde{\Delta\phi}(\omega)\approx-e^{i\omega L}\frac{\omega_{L}}{\omega^{ 2}}\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\vec{k}\cdot\vec{x}_{0}}(i\vec{k}\cdot \vec{D})\tilde{\Psi} \tag{102}\]
For \((\omega L)^{2}<kL\), the dominant term is the one proportional to \(\tilde{\Phi}\), so a more correct approximation would be obtained by replacing \(\tilde{\Psi}\rightarrow-\tilde{\Phi}\) from (102). Since \(\tilde{\Phi}\sim\tilde{\Psi}\) up to an order one numerical factor, the above expression still provides a reasonable approximation even in this limit. In summary, the phase in both cases can be approximated as
\[\widetilde{\Delta\phi}(\omega)=\pm e^{i\omega L}\frac{\omega_{L}}{\omega^{2}} \int\frac{d^{3}k}{(2\pi)^{3}}e^{i\vec{k}\cdot\vec{x}_{0}}(i\vec{k}\cdot\vec{D })\tilde{\Psi}. \tag{103}\]
## Appendix C Density fluctuation
We compute the statistical properties of the (mass) density fluctuations. In the non-relativistic limit, the density field becomes
\[\hat{\rho}_{\phi}\approx\sum_{i,j}\frac{m}{V}a_{i}^{\dagger}a_{j}e^{i(k_{i}- k_{j})\cdot x}, \tag{104}\]
where we have ignored quantum fluctuations arising from the commutation relation. From the above expression with the quasi-probability distribution (5), moments of the density field can be computed as
\[\langle\rho_{\phi}^{n}\rangle=n!\rho_{0}^{n}. \tag{105}\]
These are identical to the moments of a random variable following the exponential distribution. Therefore, we find
\[p(\rho_{\phi})d\rho_{\phi}=\frac{e^{-\rho_{\phi}/\rho_{0}}}{\rho_{0}}d\rho_{\phi} \tag{106}\]
with \(\rho_{\phi}\in[0,\infty)\). The density fluctuations in an ultralight dark matter halo are larger than in a particle dark matter halo, since the particle dark matter density follows Poisson statistics.
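A quick Monte-Carlo check of this statement:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
rho0 = 1.0
rho = rng.exponential(scale=rho0, size=2_000_000)

# compare sample moments with <rho^n> = n! rho0^n
for n in range(1, 5):
    print(n, np.mean(rho ** n), math.factorial(n) * rho0 ** n)
```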
|
2303.08808
|
Mesh Strikes Back: Fast and Efficient Human Reconstruction from RGB
videos
|
Human reconstruction and synthesis from monocular RGB videos is a challenging
problem due to clothing, occlusion, texture discontinuities and sharpness, and
framespecific pose changes. Many methods employ deferred rendering, NeRFs and
implicit methods to represent clothed humans, on the premise that mesh-based
representations cannot capture complex clothing and textures from RGB,
silhouettes, and keypoints alone. We provide a counter viewpoint to this
fundamental premise by optimizing a SMPL+D mesh and an efficient,
multi-resolution texture representation using only RGB images, binary
silhouettes and sparse 2D keypoints. Experimental results demonstrate that our
approach is more capable of capturing geometric details compared to visual
hull, mesh-based methods. We show competitive novel view synthesis and
improvements in novel pose synthesis compared to NeRF-based methods, which
introduce noticeable, unwanted artifacts. By restricting the solution space to
the SMPL+D model combined with differentiable rendering, we obtain dramatic
speedups in compute, training times (up to 24x) and inference times (up to
192x). Our method therefore can be used as is or as a fast initialization to
NeRF-based methods.
|
Rohit Jena, Pratik Chaudhari, James Gee, Ganesh Iyer, Siddharth Choudhary, Brandon M. Smith
|
2023-03-15T17:57:13Z
|
http://arxiv.org/abs/2303.08808v1
|
# Mesh Strikes Back: Fast and Efficient Human Reconstruction from RGB videos
###### Abstract
Human reconstruction and synthesis from monocular RGB videos is a challenging problem due to clothing, occlusion, texture discontinuities and sharpness, and frame-specific pose changes. Many methods employ deferred rendering, NeRFs and implicit methods to represent clothed humans, on the premise that mesh-based representations cannot capture complex clothing and textures from RGB, silhouettes, and keypoints alone. We provide a counter viewpoint to this fundamental premise by optimizing a SMPL+D mesh and an efficient, multi-resolution texture representation using only RGB images, binary silhouettes and sparse 2D keypoints. Experimental results demonstrate that our approach is more capable of capturing geometric details compared to visual hull, mesh-based methods. We show competitive novel view synthesis and improvements in novel pose synthesis compared to NeRF-based methods, which introduce noticeable, unwanted artifacts. By restricting the solution space to the SMPL+D model combined with differentiable rendering, we obtain dramatic speedups in compute, training times (up to 24x) and inference times (up to 192x). Our method therefore can be used as is or as a fast initialization to NeRF-based methods.
## 1 Introduction
Our goal is to generate detailed, personalized, and animatable 3D human models. This has many downstream applications, including teleconferencing, entertainment, surveillance, and realistic synthetic data generation. 3D body scanners are the gold standard when it comes to 3D reconstructions. Results are accurate and realistic, but scanners tend to be expensive and ungainly. Moreover, they require additional postprocessing, such as registration and rigging, before they can be used. More recently, CV-based systems have demonstrated recovering realistic 3D human geometry and appearance from monocular images or videos. In the case of monocular _images_, a predictor can be learned from a dataset of real or synthetic humans [53, 69]. However, this is ill-conditioned because a single view is insufficient to estimate the entire 3D human geometry or appearance completely or accurately. Extending it to multi-view or 360\({}^{\circ}\) video input would require running inference for all frames and fusing per-frame texture and mesh information. 3D geometry and texture recovery is therefore formulated as an optimization problem. This is an alternative to expensive 3D scanning and motion capture pipelines.
Human-specific NeRFs have recently become popular under this latter, video-based formulation. There are two downsides to typical NeRFs: (1) the highly unconstrained solution space of NeRFs and the use of volume rendering leads to long training and rendering times _e.g_., up to a few days for training [12, 67, 30] and up to minutes for rendering images [22], and (2) dense viewpoints are required. On the contrary, mesh-based reconstruction methods can be faster and less compute-intensive because they do not optimize over a volume. The solution space of the mesh is highly constrained due to flexible yet accurate param
Figure 1: **Performance vs training time tradeoff**: Our method maximizes the performance to training time tradeoff compared to other methods employing meshes, NeRFs and deferred rendering.
eterized human body priors like SMPL [32]. Moreover, mesh-based methods may perform better than NeRFs under sparser viewpoints without expensive pretraining or strong regularization that misses geometric details ([38]), due to the human shape prior.
However, mesh-based methods that rely on a visual hull or silhouette-based approach (_e.g_., [5]) suffer from ambiguous shape recovery, _i.e_., any concave surface is not captured by the visual hull and thus cannot be recovered. We show that silhouette-based optimization is highly ill-posed (Sec. 3.1). To overcome this ambiguity, recent methods augment silhouettes with depths or normals [69, 18]. However, obtaining ground truth depths or normals is expensive, and prediction can be error-prone. RGB images provide additional information that can be used across multiple frames to better recover 3D information. However, directly employing an RGB loss is non-trivial because of differentiability challenges in rasterization. NeRFs use the inherent 'softness' of the occupancy field (in the volume integral) to reason about self-occluded portions of the body, allowing pose refinement [12]. We use a soft differentiable rendering pipeline [31] to emulate this behavior. This allows us to use 'analysis-by-synthesis' as part of our optimization problem. We adopt an approach that uses texture information as part of the optimization similar to NeRFs, but we parameterize the body similar to mesh-based reconstruction methods. Meshes are simpler and more efficient, which affords significant speedup (up to 24x in training time and 192x in inference time) compared to NeRFs.
However, a naive mesh-based optimization to simultaneously minimize RGB and silhouette losses does not work because of _moving targets_. The problem of _moving targets_ occurs when the RGB losses between the image and a partially learnt texture representation hinder the mesh deformation and vice versa. We propose a method to reduce the ill-posedness and mitigate the problem of moving targets in optimization using a two-stage optimization. We demonstrate that our 3D reconstruction results are similar in quality and accuracy to NeRFs, and significantly better than existing mesh-based reconstruction methods. In addition, we show competitive results in novel view and novel pose synthesis compared to NeRF-based methods.
In summary, this paper makes three main contributions: (1) To our knowledge, we present the first method to incorporate photogrammetric losses in the context of generating human avatars from monocular videos using a mesh representation (Sec. 3). This allows us to optimize texture and geometry using _analysis-by-synthesis_ in our optimization without additional auxiliary inputs. (2) An efficient, multi-resolution texture representation using hash encoding capable of capturing fine details is proposed. Unlike texel-based representations, capacity is not wasted on uniform-texture regions. (3) To mitigate the moving targets problem, where partially learnt texture and geometry hinder each other's loss functions, a novel two-stage optimization is proposed. This ensures stability and optimal convergence.
## 2 Related Work
### Human reconstruction via prediction
Recovering 3D human shape and pose estimation using a parametric 3D body model such as SCAPE [44], SMPL [33], SMPL-X [41], STAR [39] or GHUM [70] is an active area of research in the CV community. Most of these approaches directly predict model parameters using a learned model [25, 27, 54, 53, 56, 55, 57, 64, 60, 61]. Recent approaches (_e.g_., [28, 57, 53, 55]) have improved on prior methods by directly regressing body shape. However, these approaches can only represent the shape and pose of a minimally clothed body and fail to model complex topology due to clothing, hair, _etc_. BodyNet [64] and DeepHuman [76] attempt to predict volumetric representations of the human model from a single image. Implicit representations are an interesting alternative for representing high-fidelity 3D geometry without requiring the entire output volume be kept in memory. In contrast to explicit representations, implicit representations define a surface as a level set of a function. Recent methods, such as PiFu, PiFuHD and PHORHUM [48, 49, 7], learn an implicit surface representation estimated based on pixel aligned features and the depth of 3D locations. Some recent methods combine the benefits of both explicit and implicit representations to represent clothed people [16, 8, 75, 11, 21, 69]. These methods require 3D ground-truth supervision, limiting their applicability to a few datasets and their ability to generalize beyond in-distribution poses. Recently, implicit representations have been used to learn a generative model of 3D people in clothing [13, 6, 17, 50, 14]. However, these approaches require ground truth posed 3D meshes or RGB-D video sequence to learn a model [72, 76, 40, 3, 2, 1]. All of the methods share a common limitation, _i.e_., predicting the human shape and texture from a single image is ill-posed, and incorrectly regressed predictions are not iteratively refined using auxiliary signals.
### Human reconstruction via optimization
**Recovering pose**: Some of the earliest approaches fit model parameters via optimization at test time [5, 9, 29]. Bogo _et al_. [9] optimize SMPL parameters by minimizing the joint reprojection loss and prior terms, whereas [29, 28, 73] employ a likelihood term over the pose via a learned network, and perform iterative regression to minimize reprojection or multiview losses. Pavlakos _et al_. [42] reconstruct SMPL parameters of multiple humans from videos of TV shows. However, these methods only recover a pose with generic shapes and do not capture subject-specific de
formations and textures.
**Recovering shape and texture**: Alldieck _et al_. [5] extended this approach to monocular video by fusing the unposed silhouettes from all frames to generate a consensus mesh. Each human silhouette, extracted from a video frame, defines a constraint on the human shape, which can be used to estimate deviations in shape. The main problem with this method is that shape is ambiguous, _i.e_., concave surfaces are not captured. One would need to use auxiliary inputs like depths or normals to disambiguate the problem [69, 4], but depth prediction systems can introduce their own errors.
Mildenhall _et al_. [36, 68] pioneered NeRFs for representing static scenes with a color and density field without requiring any 3D ground truth supervision. Recently this approach has been extended to reconstruct clothed humans as well [43, 58, 12, 67, 24, 66, 30, 65, 23, 19, 62]. These approaches use SMPL as a prior to unpose the human body across multiple frames by transforming the rays from observation space to canonical space which is then rendered using a NeRF. PINA [18] learns a SDF and a learned deformation field to create an animatable avatar from an RGB-D image sequence. Chen _et al_. [15] used a polygon rasterization pipeline to speed up NeRF rendering as a post-process; however, their approach does not reduce NeRF training times. Some recent methods have improved the efficiency of scene-agnostic NeRFs (_e.g_. [71, 51, 37, 20, 46]). These methods demonstrate fast training and rendering times, but adapting them to include an animatable human shape prior requires expensive nearest neighbor operations that erode efficiency gains. However, most of these approaches are computationally expensive. In addition, the resulting representation may have poor generalization to OOD poses.
Our method falls in the category of shape and texture optimization, being most similar to [5, 12, 23, 67], where we aim to contrast NeRFs with their mesh-based equivalent formulation. In contrast to NeRFs, an explicit mesh representation combined with a carefully chosen optimization scheme enables our method to recover accurate geometry, while ensuring photo-realistic rendering, at substantially lower computational and time costs. We show that, contrary to conventional wisdom, our method, when employed with the correct optimization objective, can recover complex geometry (like loose clothing, skirts, long hair, hoodies, _etc_.). Because we constrain the solution space using a mesh, our method is computationally inexpensive, and can be run on a single consumer-grade GPU in about an hour. Our representation allows us to use differentiable rasterization pipelines, drastically reducing its training and inference time. Note that our goal is not to replace NeRF methods, which have asymptotically better performance due to more representational capacity. Rather, _we provide a computationally inexpensive alternative that can be used for real-time rendering and/or on-device applications, or to bootstrap NeRF optimization._ Unlike [43, 30], which have reported failure cases for out-of-distribution poses, our method's accuracy is only limited by the artefacts of the skinning function. A comparison with relevant methods is provided in Fig. 1.
## 3 Method
This paper focuses on jointly recovering accurate geometry and realistic textures from a monocular RGB video of a person using a mesh representation. First, we illustrate the ambiguity in mesh reconstruction from visual hull only, and how multi-view RGB consistency can help disambiguate the problem (Section 3.1). This allows us to avoid using auxiliary inputs, such as depths, landmarks, and normals, which are either expensive to obtain or error-prone if predicted. Next, we discuss the shape and texture representations (Sec 3.2, 3.3) used in our optimization. To enable photogrammetric losses to backpropagate to a parameterized mesh representation, we describe the differentiable rendering pipeline (Section 3.4). Finally, we describe the forward model (Section 3.5), loss functions (Section 3.6) and training pipeline (Section 3.7) to jointly recover the clothed human geometry and texture of the subject.
### A motivating toy problem
We use the following toy problem to illustrate the ambiguity of using visual hulls for mesh reconstruction. Consider a cube and the same cube but with all its faces dented inwards, shown in Fig. 2 (top). Rendering the visual hull of both objects gives us the same set of binary silhouettes for all camera angles, making it ambiguous for any optimization scheme to recover a unique mesh from the set of silhouettes. This ambiguity necessitates the use of auxiliary inputs, _e.g_., depth or normals [18, 69, 47]. Now consider the same scenario, but with the same overlaid texture on both objects. In this case, the two objects have different renders in Fig. 2 (middle & bottom) from the same viewpoints, disambiguating the shape of the underlying object when optimized with a multi-view RGB consistency framework. This
Figure 2: A toy example highlighting that methods based on visual hulls alone cannot recover concavities in the underlying object. Optimization from visual hull is ill-conditioned, which is disambiguated by multi-view RGB consistency. This idea is used in NeRFs, and in this work we show that it is possible to use RGB and visual hull to optimize a mesh representation.
is the idea used in NeRFs [36] to recover the occupancy and radiance volume from images alone. Therefore, one can use multi-view RGB consistency as a surrogate for depth, normals, _etc_. The key to using RGB images for mesh optimization is to assign a unique RGB value to each point on the mesh, such that it can guide the mesh vertices to produce consistent renderings in all views. However, doing so introduces a 'moving target' problem (partially optimized mesh and RGB hinder each other's learning) which is non-trivial to optimize. Empirically, we find that carefully formulating the optimization problem (Sec 3.7) allows us to capture complex geometry, including hoodies, loose shirts, pants, skirts, and voluminous hair, better than prior work that uses visual hulls alone for optimization.
### Geometry model
Optimizing a high-fidelity textured avatar from monocular or multi-camera RGB video requires us to learn geometry and corresponding texture on the learned geometry. Methods using neural rendering learn a canonical or conditional volume from scratch without using any structural priors [36]. Instead of learning a volume from scratch, we use the parametric SMPL human model [33]. We learn per-frame pose and camera parameters \(\{(\mathbf{\theta}_{i},\mathbf{R}_{i},\mathbf{t}_{i})\}_{i\in\{1..n\}}\) and a common shape parameter \(\mathbf{\beta}\). Since the shape parameter is a low-dimensional embedding capturing human shape, we also learn a per-vertex offset matrix \(\mathbf{D}\in\mathbb{R}^{V\times 3}\) where \(V\) is the number of vertices in the mesh. Unlike [4] we show that the base SMPL+D model is enough to capture most geometric details (Fig. 6 and in Appendix) and we do not need to use a subdivided SMPL model.
### Texture representation
Mesh-based representations generally use predefined UV coordinates for each vertex [5, 4], and the RGB value at any point on the face is determined by evaluating the UV coordinate of the point, and using interpolation from the RGB values in the UV map. This is converted into a texel-map [26, 31]. This representation, although simple, is not adaptive to the complexity of texture on the mesh. Meshes may have regions of low or high texture (_e.g_., solid-color clothing versus faces, hair, or patterned clothing) which may require a more flexible texture representation. Bilinear interpolation also contributes to blurry textures, making it unsuitable for high-frequency or discontinuous textures. Moreover, deferred rendering [45, 63] may overfit to the UV distribution of the training frames and may not generalize to UV distributions of OOD poses (Fig. 4).
To alleviate these problems, we take inspiration from Müller _et al_. [37], who proposed a multi-resolution hash encoding of the input to capture high-frequency details. The main idea of this work is to use implicit hash collisions to average the gradients at a particular hash lookup entry, and therefore weigh the representation appropriately. We use the same idea to learn a high-frequency texture representation. Human avatars have varying levels of texture complexity across their surface--regions such as faces, logos on clothing, _etc_. have high levels of detail, whereas regions such as skin and solid-color clothing have low levels of detail, and require little representation capacity to learn. The hash-encoding representation would learn this implicitly by finding the best embedding which leads to the least RGB reconstruction loss, effectively 'splitting' its representation capacity between high and low-frequency details. For a
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**Method** & **RGB loss** & **Mask loss** & **KPS loss** & **Representation** & **Novel pose** & **Training time** & **GPU** \\ \hline VideoAvatar [5] & ✗ & ✓ & ✓ & SMPL+D & ✓ & 16 hours & NA \\ AnimNeRF [12] & ✓ & ✓ & ✗ & NeRF & ✓ & 26 hours & 48GB \\ Neuralbody [43] & ✓ & ✓ & ✗ & NeRF & ✗ & 14 hours & 48GB \\ HumanNeRF [67] & ✓ & ✓ & ✗ & NeRF & ✓ & 72 hours & 48GB \\ NeuralActor [30] & ✓ & ✓ & ✗ & NeRF & ✓ & 48 hours & 256GB \\ SCARF [19] & ✓ & ✓ & ✗ & NeRF+SMPLX-D & ✓ & 40 hours & 32GB \\ \hline Ours & ✓ & ✓ & ✓ & SMPL+D & ✓ & \textless{}**1hour** & **5GB** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of different methods for human reconstruction. Our simple yet clever use of NeRF-like losses with an SMPL+D representation bridges the gap between mesh-based and NeRF-based optimization with dramatic speedups and compute savings.
Figure 3: (**Top**) Traditional texturing methods use fixed per-vertex UV coordinates and linearly interpolate the color from the UV coordinates over the face leading to blurry texture. (**Bottom**) We use learnt per-vertex 3D texture coordinates (in cyan,magenta,yellow) and linearly interpolate the texture coordinates over the face. The interpolated texture coordinate is input into the multi-res hash encoding [37] which allows us to represent sharp textures within the face (illustrated by how the network treats the input space across multiple resolutions).
given frame \(i\) using shape \(\mathbf{\beta}\), body pose \(\mathbf{\theta}_{i}\) and deformation \(\mathbf{D}\) we produce a mesh \(\mathbf{M}_{i}\). This mesh is then projected to the image plane using camera extrinsics \((\mathbf{R}_{i},\mathbf{t}_{i})\) and a fixed intrinsics matrix using the projective transformation to mesh \(\mathbf{m}_{i}\). We perform rasterization on the projected mesh to output an image containing face indices and barycentric coordinates of the face index for each pixel. Let \(f_{j}\) be rendered at pixel \(p\) with barycentric coordinates \(\{w_{j1},w_{j2},w_{j3}\}\) such that \(w_{jk}\geq 0\) and \(\sum_{k=1}^{3}w_{jk}=1\). If the texture coordinates of \(f_{j}\) are given by \(\{t_{j1},t_{j2},t_{j3}\}\), then the rendered point on the face has the texture coordinate \(t=\sum_{k=1}^{3}w_{jk}t_{jk}\). The texture coordinate is a low-dimensional input that is passed into the hash encoder and MLP to output the RGB color at pixel \(p\). Conventionally, hardcoded 2D texture coordinates are used for vertices of each face, which are interpolated and used to look up values in a UV map (typically an image). Due to UV unwrapping of a closed mesh, some mesh vertices have multiple texture coordinates. In contrast, we use a learned 3D texture coordinate for each vertex. This serves two purposes: (1) it sidesteps the need for UV unwrapping by moving the texture coordinates into 3D space, and (2) learnable coordinates allow us to expand or shrink the 3D coordinates of each vertex, which in turn accommodate different levels of detail for each face (See Appendix).
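A minimal PyTorch sketch of this per-pixel lookup is given below; the module names (`hash_encoder`, `mlp`) and tensor layouts are illustrative placeholders rather than the exact implementation.

```python
import torch

def shade_pixels(face_idx, bary, face_tex_coords, hash_encoder, mlp):
    """Per-pixel texture lookup.

    face_idx        : (P,) long tensor, rasterized face index per pixel
    bary            : (P, 3) barycentric weights per pixel
    face_tex_coords : (F, 3, 3) learnable 3D texture coordinates per face vertex
    hash_encoder    : multi-resolution hash encoding module
    mlp             : small MLP mapping encoded features to RGB
    """
    # interpolate the learned 3D texture coordinate over the face
    tex_per_vertex = face_tex_coords[face_idx]              # (P, 3, 3)
    t = (bary.unsqueeze(-1) * tex_per_vertex).sum(dim=1)    # (P, 3)
    # encode and decode to RGB
    features = hash_encoder(t)                               # (P, C)
    return torch.sigmoid(mlp(features))                      # (P, 3)
```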
### Differentiable rendering
A major advantage of using NeRF-like methods is that the volume rendering process is fully differentiable. Moreover, the gradients with respect to the integral (to compute the color) takes occlusions into account. This allows NeRFs to learn a robust occupancy volume and RGB colors to minimize rendering errors. In contrast, rasterization is an inherently non-differentiable operation with respect to the vertices of the mesh [31]. Many solutions have been proposed to approximate gradients of the vertices from gradients in rendered images (_e.g._, [26, 31, 34]). We use SoftRas [31] due to its ability to flow gradients to the occluded and far-range vertices, allowing us to perform pose refinement via analysis-by-synthesis. Complex pose changes such as bringing an occluded limb into view, rotating joints, etc. can now be performed since SoftRas allows us to pass gradients into the occluded parts of the render. Empirically, we observe that SoftRas is helpful in updating body pose when a small amount of joint rotation is required. To learn the texture, we need the exact forward rasterization to map texture coordinates to RGB values. In SoftRas, we can set the softening parameters \(\gamma=\sigma=0\) and use the RGB loss to guide the texture learning. However, this leads to numerical instability and rendering artefacts in SoftRas. To mitigate this issue, we use NMR [26] to propagate texture gradients.
### Forward model
In this section, we describe the forward model for a given frame \(i\). The SMPL+D and camera parameters \((\mathbf{\beta},\mathbf{\theta}_{i},\mathbf{D},\mathbf{R}_{i},\mathbf{t}_{i})\) are used to generate the mesh \(\mathbf{M}_{i}\) which is projected into the image plane as \(\mathbf{m}_{i}\). The projected mesh and texture network parameters are passed into NMR and SoftRas to give us an opaque and translucent RGB image respectively. For texture parameters \(\phi\), we have:
\[\left[\hat{\mathbf{I}}_{i,\text{NMR}};\hat{\mathbf{I}}_{i,\text{SR}}\right]=\text{NMR }(\mathbf{m}_{i};\phi),\text{SoftRas}(\mathbf{m}_{i};\text{sg}(\phi),\sigma,\gamma) \tag{1}\]
where \(\sigma,\gamma\) are the face blur and depth scale parameters of SoftRas, and sg is the stop-grad operator.
### Loss functions
In this section, we describe the losses used to generate our results. Let the masked ground-truth image for frame \(i\) be \(\mathbf{I}_{i}\) and ground-truth binary foreground be \(\mathbf{S}_{i}\).
Image losses. The RGB losses are given by:
\[\mathcal{L}_{i,\text{RGB}}=\|\mathbf{I}_{i}-\hat{\mathbf{I}}_{i,\text{NMR}}\|_{1}+\| \mathbf{I}_{i}-\hat{\mathbf{I}}_{i,\text{SR}}\|_{1} \tag{2}\]
We also obtain a binary silhouette using NMR and projected mesh \(\mathbf{m}_{i}\), which we denote as \(\hat{\mathbf{S}}_{i}\). The silhouette loss is defined as an IOU loss:
\[\mathcal{L}_{i,\text{sil}}=1-\frac{\sum_{p}\hat{\mathbf{S}}_{i}(p)\cdot\mathbf{S}_{i}( p)}{\sum_{p}(\hat{\mathbf{S}}_{i}(p)+\mathbf{S}_{i}(p)-\hat{\mathbf{S}}_{i}(p)\cdot\mathbf{S}_{i}( p))}. \tag{3}\]
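A direct PyTorch transcription of Eq. (3); the small \(\epsilon\) is our own numerical safeguard.

```python
import torch

def silhouette_iou_loss(S_hat, S, eps=1e-6):
    """IoU silhouette loss of Eq. (3).

    S_hat : rendered (soft) silhouette, values in [0, 1]
    S     : ground-truth binary foreground mask
    """
    inter = (S_hat * S).sum()
    union = (S_hat + S - S_hat * S).sum()
    return 1.0 - inter / (union + eps)
```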
Keypoint loss. We add a keypoint loss to help guide the pose of the SMPL model. For simplicity, we use only keypoints corresponding to limbs (elbows, wrists, knees, ankles) and the nose. From the mesh \(\mathbf{M}_{i}\), we project keypoints \(\hat{\mathbf{k}}_{i}\) and encourage their proximity to target 2D keypoints \(\mathbf{k}_{i}\) via the loss:
\[\mathcal{L}_{i,\text{ps}}=\|\hat{\mathbf{k}}_{i}-\mathbf{k}_{i}\|_{2}^{2}. \tag{4}\]
We run HRNet [59] on each frame to produce \(\mathbf{k}_{i}\) for people-snapshot and use the given keypoints for ZJU Mocap.
Mesh regularization losses. We add some regularization to the mesh representation. This is especially important for the free-form per-vertex offsets \(\mathbf{D}\). Unlike previous works (_e.g_., [5]), we observe that imposing a low-deformation loss reduces performance (Sec 4.4). This is because loose clothing need not necessarily correspond to a low deformation from the underlying SMPL model. We only encourage normal consistency of adjacent faces in the mesh. Let \(\mathbf{M}_{D}\) be the mesh generated from the SMPL parameters \((\mathbf{\beta},\mathbf{0},\mathbf{D},\mathbb{I},\mathbf{0})\) and let \(\mathbf{M}_{0}\) be generated from the SMPL parameters \((\mathbf{\beta},\mathbf{0},\mathbf{0},\mathbb{I},\mathbf{0})\). Let \(f_{j}\) be the \(j^{\text{th}}\) face of \(\mathbf{M}_{D}\) and \(f_{j}^{\prime}\) be the \(j^{\text{th}}\) face of \(\mathbf{M}_{0}\). With some abuse of notation,
for two faces \(f_{j}\) and \(f_{k}\), we denote \(|f_{j}\cap f_{k}|\) as the number of vertices that are shared between both faces. The normal consistency loss is then given as:
\[\mathcal{L}_{NC}=\sum_{|f_{j}\cap f_{k}|=2}(1-\hat{n}_{f_{j}}\cdot\hat{n}_{f_{k}}), \tag{5}\]
where \(\hat{n}_{f}\) is the outward normal of face \(f\). We also encourage each face in the mesh to have the same area with and without the deformation. This respects the relative sizes of faces corresponding to different regions in the mesh. If \(A_{f}\) represents the unsigned area of a face \(f\), the face area loss is given as:
\[\mathcal{L}_{FA}=\sum_{j}\biggl{(}\frac{A_{f_{j}}}{A_{f^{\prime}_{j}}}+\frac{A _{f^{\prime}_{j}}}{A_{f_{j}}}\biggr{)}. \tag{6}\]
We prefer this loss instead of the L2 loss \(\|A_{f_{j}}-A_{f^{\prime}_{j}}\|_{2}^{2}\) because the gradients of L2 loss are small when the area \(A_{f_{j}}\) approaches 0. On the other hand, we want to penalize shrinkage or expansion equally. The loss we propose is of the form \(x+\frac{1}{x}\) which achieves its minima at \(x=1\) for positive \(x\). The total loss is:
\[\mathcal{L}_{f}=\sum_{i}\left(\lambda_{\text{RGB}}\mathcal{L}_{i,\text{RGB}}+\lambda_{\text{sil}}\mathcal{L}_{i,\text{sil}}+\lambda_{\text{kps}}\mathcal{L}_{i,\text{kps}}\right)\\ +\lambda_{NC}\mathcal{L}_{NC}+\lambda_{FA}\mathcal{L}_{FA} \tag{7}\]
Unlike [12], we do not add any other regularization terms on \(\mathbf{\beta}\) or temporal pose consistency or deviation terms.
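A sketch of the two mesh regularization terms above, assuming a triangle mesh given as a vertex tensor and a face-index tensor (the adjacency list of face pairs sharing an edge is assumed to be precomputed; all names are ours):

```python
import torch

def face_normals_and_areas(verts, faces):
    # verts: (V, 3) float tensor, faces: (F, 3) long tensor of vertex indices.
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    cross = torch.cross(v1 - v0, v2 - v0, dim=1)
    areas = 0.5 * cross.norm(dim=1)                       # unsigned face areas
    normals = cross / (cross.norm(dim=1, keepdim=True) + 1e-12)
    return normals, areas

def normal_consistency_loss(verts, faces, adj_pairs):
    # Eq. (5): penalize disagreement between normals of faces sharing an edge.
    # adj_pairs: (P, 2) long tensor of indices of faces with two shared vertices.
    normals, _ = face_normals_and_areas(verts, faces)
    n_j, n_k = normals[adj_pairs[:, 0]], normals[adj_pairs[:, 1]]
    return (1.0 - (n_j * n_k).sum(dim=1)).sum()

def face_area_loss(verts_deformed, verts_rest, faces, eps=1e-12):
    # Eq. (6): symmetric x + 1/x penalty on the deformed/rest face-area ratio.
    _, a_d = face_normals_and_areas(verts_deformed, faces)
    _, a_0 = face_normals_and_areas(verts_rest, faces)
    ratio = (a_d + eps) / (a_0 + eps)
    return (ratio + 1.0 / ratio).sum()
```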
### Two-stage training
Given a forward model to render RGB and silhouettes from mesh parameters \(\mathbf{\beta},\mathbf{\theta}_{i},\mathbf{D},\mathbf{R}_{i},\mathbf{t}_{i}\) and texture parameters \(\phi\), an intuitive way to learn all the parameters is to jointly optimize them. Let \(\Theta=\{\phi^{*},\mathbf{\beta}^{*},\mathbf{D}^{*},\{\mathbf{\theta}_{i}^{*},\mathbf{R}_{ i}^{*},\mathbf{t}_{i}^{*}\}_{i\in\{1...n\}}\}\) be the set of all optimizable parameters. The optimization is of the form: \(\Theta^{*}=\arg\min\mathcal{L}_{f}\). This optimization is still highly underconstrained. Meshes obtained using this procedure are jagged with blurry textures because the texture and deformation parameters each optimize locally (examples in Appendix), leading to the _moving target_ problem. To alleviate this problem, we propose a two-stage training procedure. In the first stage, we use a _per-face_ RGB value as a 'base' texture color instead of the full texture network. This allows for a coarse alignment of the mesh vertices for each frame. Constraining each face to a single color forces the optimization to place and scale the mesh as well as possible to minimize the RGB and silhouette losses, thus ensuring photogrammetric consistency in the optimization. In the second stage, the deformations, shape, and per-face RGB values are fixed, and the texture network is trained with per-frame pose refinement. This allows for fine-grained alignment of the poses for each frame. Implementation details are provided in the Appendix.
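The skeleton of this two-stage schedule could look as follows; this is only a schematic sketch, not our implementation: `render_fn` and `loss_fn` stand in for the forward model of Eq. (1) and the total loss of Eq. (7), and the parameter grouping is illustrative.

```python
import torch

def two_stage_fit(frames, geom_params, pose_params, face_rgb, texture_net,
                  render_fn, loss_fn, n_coarse=2000, n_fine=2000, lr=1e-3):
    # Stage 1: coarse geometry alignment with a single RGB value per face.
    opt = torch.optim.Adam(list(geom_params) + list(pose_params) + [face_rgb], lr=lr)
    for _ in range(n_coarse):
        opt.zero_grad()
        loss = sum(loss_fn(render_fn(f, geom_params, pose_params, face_rgb), f)
                   for f in frames)
        loss.backward()
        opt.step()

    # Stage 2: freeze shape/deformation, learn the texture network with
    # per-frame pose refinement only.
    for p in geom_params:
        p.requires_grad_(False)
    opt = torch.optim.Adam(list(texture_net.parameters()) + list(pose_params), lr=lr)
    for _ in range(n_fine):
        opt.zero_grad()
        loss = sum(loss_fn(render_fn(f, geom_params, pose_params, texture_net), f)
                   for f in frames)
        loss.backward()
        opt.step()
```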
## 4 Results
To evaluate our method, we look at the following aspects of geometry and texture recovery: (1) novel-view and pose
\begin{table}
\begin{tabular}{l c c c c c|c c c c|c c c c c} \multicolumn{11}{c}{**Novel view (people-snapshot)**} \\ \hline \multicolumn{11}{c}{**Subject ID**} & \multicolumn{4}{c}{**PSNR\(\uparrow\)**} & \multicolumn{4}{c|}{**SSIM(x100) \(\uparrow\)**} & \multicolumn{4}{c}{**LPIPS(x100) \(\downarrow\)**} \\ \multicolumn{11}{c}{**VA**} & **SP** & **NB** & **AN** & **Ours** & **VA** & **SP** & **NB** & **AN** & **Ours** & **VA** & **SP** & **NB** & **AN** & **Ours** \\ \hline
**m3c** & 22.91 & 22.94 & 23.98 & **29.43** & 29.40 & 93.16 & 92.56 & 96.12 & **97.11** & 96.24 & 4.87 & 6.89 & 7.24 & **1.85** & 2.65 \\
**m4c** & 22.63 & 21.43 & 22.84 & **27.50** & 26.31 & 93.22 & 92.66 & 94.81 & **95.87** & 94.27 & 6.00 & 8.04 & 10.93 & **3.77** & 5.30 \\
**f3c** & 22.10 & 21.80 & 23.19 & 22.96 & **27.25** & 94.35 & 93.95 & 95.83 & 94.56 & **96.21** & 5.43 & 5.61 & 10.22 & 4.61 & **3.47** \\
**f4c** & 23.49 & 22.64 & 22.18 & 29.03 & **29.61** & 93.99 & 93.27 & 95.63 & **96.88** & 96.61 & 4.12 & 5.92 & 8.52 & **2.10** & 2.42 \\ \hline \multicolumn{11}{c}{**Novel view (ZJU Mocap)**} \\ \hline \multicolumn{11}{c}{**VA**} & **SP** & **NB** & **HN** & **Ours** & **VA** & **SP** & **NB** & **HN** & **Ours** & **VA** & **SP** & **NB** & **HN** & **Ours** \\ \hline
**C377** & 24.48 & 27.28 & 24.81 & **30.86** & 30.58 & 93.12 & 94.64 & 97.17 & **97.45** & 96.51 & 9.01 & 5.82 & 5.71 & **2.58** & 4.48 \\
**C386** & 27.67 & 29.22 & 25.08 & **33.36** & 33.28 & 93.72 & 95.80 & 97.27 & **97.29** & 96.22 & 7.98 & 10.65 & 4.86 & **3.39** & 4.18 \\
**C387** & 23.30 & 24.27 & 23.60 & **28.58** & 28.07 & 92.58 & 95.08 & 96.08 & **96.10** & 94.73 & 9.18 & 12.08 & 6.88 & **3.96** & 6.40 \\
**C392** & 25.70 & 28.66 & 24.35 & **31.42** & 31.35 & 92.89 & 95.64 & **96.86** & 96.83 & 96.05 & 9.52 & 7.54 & 6.37 & **3.76** & 5.37 \\
**C393** & 23.45 & 24.83 & 24.17 & **28.89** & 28.35 & 92.30 & 93.60 & **96.35** & 95.83 & 93.88 & 10.99 & 12.77 & 6.66 & **4.22** & 6.43 \\
**C394** & 24.46 & 27.34 & 23.97 & 30.73 & **31.21** & 91.67 & 95.58 & **96.43** & 96.16 & 95.57 & 11.28 & 9.07 & 6.71 & **3.75** & 4.91 \\ \hline \multicolumn{11}{c}{**Novel pose (ZJU Mocap)**} \\ \hline
**C377** & 24.36 & 27.00 & 23.84 & **30.50** & 30.48 & 93.25 & 96.51 & 96.78 & **97.41** & 96.54 & 8.29 & 5.81 & 5.59 & **2.69** & 4.27 \\
**C386** & 28.34 & 30.38 & 23.26 & 33.55 & **34.03** & 93.84 & 96.60 & 96.46 & **97.20** & 96.16 & 7.43 & 9.79 & 5.50 & **3.41** & 4.00 \\
**C387** & 23.02 & 23.80 & 23.15 & **29.02** & 28.43 & 92.83 & 95.38 & **95.58** & **96.44** & 95.23 & 8.46 & 11.47 & 6.77 & **3.25** & 5.71 \\
**C392** & 25.83 & 29.12 & 22.46 & 31.43 & **32.22** & 92.98 & 96.44 & 95.97 & **96.89** & 96.37 & 9.40 & 7.26 & 7.03 & **3.70** & 5.01 \\
**C393** & 23.50 & 24.79 & 22.41 & **29.32** & 28.62 & 92.49 & 95.07 & 95.45 & **96.09** & 94.07 & 10.58 & 12.65 & 7.13 & **3.86** & 6.47 \\
\hline \end{tabular}
\end{table}
Table 2: Novel view and novel pose synthesis results (PSNR, SSIM, LPIPS) on people-snapshot and ZJU Mocap.
synthesis, (2) training/inference time and compute requirements, (3) geometry reconstruction. For experiments (1-2), we use People Snapshot [5] and ZJU Mocap [43] datasets. For (3), we use the Self-Recon synthetic dataset [23]. For people-snapshot, we follow the same subjects and experiment setup as [12], and for ZJU Mocap we use the same set of subjects as [67]. For ZJU Mocap, we use frames 0-450 in cameras 1,7,13,19 for training, and the rest of the frames for novel pose reconstruction. We use frames 0-450 from cameras 5,10,15,20 for novel view reconstruction. We choose baselines across a spectrum of representation choices: SMPLPix [45] (**SP**) which uses deferred rendering, VideoAvatar [5] (**VA**) which performs SMPL+D optimization, NeuralBody [43] (**NB**), HumanNeRF [67] (**HN**) and AnimNeRF [12] (**AN**) which are SOTA NeRF methods.
### Novel view synthesis
We evaluate novel view synthesis by holding back a certain set of test frames of the subject to check the quality of reconstruction for those frames (similar to [12]). For ZJU Mocap, we evaluate frames 0-450 from cameras 5,10,15,20. Results are shown in Table 2. Our method performs very competitively with AnimNeRF and HumanNeRF in terms of all three metrics (PSNR, SSIM, LPIPS), and outperforms all other baselines by a substantial margin. On ZJU Mocap, our method competes with HumanNeRF consistently on the PSNR and LPIPS metrics, showing that our method can faithfully reconstruct accurate and realistic renderings. NeuralBody and SMPLpix are trained on the training poses only, and fail to generalize to novel views, even if the views deviate only slightly from the training frames. VideoAvatar performs a multi-stage optimization, where the geometry is optimized from silhouettes only, and then a texture optimization step is performed. Errors from the mesh optimization propagate to the texture, leading to a low-quality texture, and subsequently a low-quality render. Our method recovers intricate details like loose clothing, loose pants, hoodies, hair, and skirts (Fig. 6). More qualitative results are in the Appendix.
### Novel pose synthesis
We use a set of held-back frames for novel pose synthesis in the ZJU dataset. Results are in Tab. 2. SMPLpix and Neural Body do not generalize well because novel poses that are not seen during training result in a distribution shift in the inputs of the frameworks. VideoAvatar has an uncanny valley effect in the faces of its rendered outputs. HumanNeRF achieves highly realistic results capturing nuances in body geometry and texture.
**OOD pose rendering**: However, we notice that the pose distribution is not very different from that of the training frames. Moreover, the people-snapshot dataset doesn't contain frames with poses other than the A-pose. Therefore, we also compare the realism of textures and faces on a set of OOD poses. We curate a set of poses from the AMASS dataset [35]. We compare the realism of the models by evaluating the Frechet Inception Distance (FID score) [52] of the input frames with novel pose renders, and
Figure 4: Novel pose synthesis (on **m3c** and **f3c**) from left to right: SMPLpix, VideoAvatar, NeuralBody, AnimNeRF, Ours. NeuralBody does not generalize to novel poses, AnimNeRF introduces cloud and tearing artifacts. Our method preserves detailed texture and doesn’t produce artifacts. More results in Appendix.
Figure 5: **Novel view on ZJU dataset, Left to right**: VideoAvatar, SMPLPix, NeuralBody, HumanNeRF, Ours. Our method performs dramatically better than VA/SMPLPix and is comparable to NeRF methods at significant training and inference speedups.
comparing the face texture of the rendered images with that of the input frames. Since we use the same novel poses for all methods, the differences in FID must come from texture quality. To evaluate texture quality of faces, we use face identification as a proxy task. We use MTCNN [74] to detect faces from images and VGGFace2 [10] to generate a template feature vector for each method. We use the face similarity metric between the template of the method and that of the input data, as proposed in [10]. Results in Tab. 3 and Fig. 4 shows that our method preserves the subject identity significantly more than VideoAvatar, showing that our method can recover accurate texture with a mesh. AnimNeRF reconstructs the texture well in the parts on the surface, but introduces cloud and tearing artifacts, especially when regions around the unseen areas (armpits and thighs) are stretched too much. Our method doesn't have such an issue, since our representation is based on a mesh. Moreover, the texture distortion for novel poses is virtually non-existent. More qualitative results are shown in Appendix.
### Training/inference time and compute
We compare the training and inference time and compute required by our method against NeRF methods in Tab. 4. NeRFs have achieved huge successes in representing scenes faithfully with very accurate rendering. However, they take a prohibitively long time to train on a single scene. Although several improvements have been proposed for static scenes, the main bottleneck for AnimNeRF is the KNN step for each sample along the ray, and unposing the transformation to the canonical space. This is a computationally expensive step since it has to be done for each point from each ray independently. Moreover, volume rendering leads to significantly higher inference time and compute requirements [30]. In contrast, unposing is trivial using our method because the rasterization consists of the face index with barycentric coordinates.
### Geometry reconstruction
We quantitatively compare the effectiveness of our method to recover the underlying geometry from the set of images using the Self-Recon dataset [23], which consists of renderings of 5 human subjects with their ground truth meshes. We compute the average Chamfer distance and Point-to-Surface (P2S) measures. Our comparison with VideoAvatar [5], the other mesh-based optimization method is shown in Tab. 5. Note that our lower distances show that even a low dimensional mesh can capture complex details like loose clothing, hair, etc. with the right optimization and training scheme. Qualitative results on all three datasets are in Fig. 6 and Appendix.
## 5 Conclusion
There are myriad applications for detailed, personalized, and animatable 3D human models, and they become increasingly practical as generation times, data acquisition, and hardware requirements decrease. One promising direction is to bootstrap the training process of human-specific NeRFs [67, 12] with the geometry and texture learnt from our model. Since our model directly provides 3D coordinates and RGBs in the canonical space, volume rendering is not needed during this pretraining step. While NeRFs have demonstrated great versatility, _i.e_., they are scene-agnostic, they are prohibitive for reconstructing well-defined objects with strong priors (_e.g_. faces, human avatars) given their steep training requirements. For these applications, we argue that our mesh-based system is competitive and, in some cases, favorable. We have demonstrated that our approach is capable of generating results with very competitive performance to SOTA human-specific NeRFs [12, 67], but in a tiny fraction of the training time and compute.
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**Method** & **Training** & **Inference** & **GPU usage** \\ & **time (min)** & **time (sec)** & **(GB-hrs)** \\ \hline HN [67] & 1013.35 & 3.51 & 399.09 \\ AN [12] & 769.26 & 11.54 & 591.38 \\ NB [43] & 1020.27 & 0.77 & 62.22 \\ Ours & **43.31** & **0.06** & **4.11** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Averaged total training time (in minutes), per-image inference time (in seconds), and total GPU usage (GB-hr). Our model trains up to 24x faster than HumanNeRF on the same data while using up to 4x less compute. Results for individual subjects are in the Appendix.
Figure 6: Geometry reconstruction quality of our method (top) and VideoAvatar (bottom) on ZJU Mocap, people-snapshot, and Self-Recon datasets. Our method captures loose fitting pants (1,3) and clothes (2,4,6), hoodie (2), hair (1,2,3), skirts (6).
|
2307.13061
|
Feature Gradient Flow for Interpreting Deep Neural Networks in Head and
Neck Cancer Prediction
|
This paper introduces feature gradient flow, a new technique for interpreting
deep learning models in terms of features that are understandable to humans.
The gradient flow of a model locally defines nonlinear coordinates in the input
data space representing the information the model is using to make its
decisions. Our idea is to measure the agreement of interpretable features with
the gradient flow of a model. To then evaluate the importance of a particular
feature to the model, we compare that feature's gradient flow measure versus
that of a baseline noise feature. We then develop a technique for training
neural networks to be more interpretable by adding a regularization term to the
loss function that encourages the model gradients to align with those of chosen
interpretable features. We test our method in a convolutional neural network
prediction of distant metastasis of head and neck cancer from a computed
tomography dataset from the Cancer Imaging Archive.
|
Yinzhu Jin, Jonathan C. Garneau, P. Thomas Fletcher
|
2023-07-24T18:25:59Z
|
http://arxiv.org/abs/2307.13061v1
|
# Feature Gradient Flow for Interpreting Deep Neural Networks in Head and Neck Cancer Prediction
###### Abstract
This paper introduces feature gradient flow, a new technique for interpreting deep learning models in terms of features that are understandable to humans. The gradient flow of a model locally defines nonlinear coordinates in the input data space representing the information the model is using to make its decisions. Our idea is to measure the agreement of interpretable features with the gradient flow of a model. To then evaluate the importance of a particular feature to the model, we compare that feature's gradient flow measure versus that of a baseline noise feature. We then develop a technique for training neural networks to be more interpretable by adding a regularization term to the loss function that encourages the model gradients to align with those of chosen interpretable features. We test our method in a convolutional neural network prediction of distant metastasis of head and neck cancer from a computed tomography dataset from the Cancer Imaging Archive.
Yinzhu Jin\({}^{\star}\), Jonathan C. Garneau\({}^{\dagger}\), P. Thomas Fletcher\({}^{\ddagger,\star}\)
\({}^{\star}\) Department of Computer Science, University of Virginia, Charlottesville, VA, USA
\({}^{\dagger}\)Department of Otolaryngology, University of Virginia, Charlottesville, VA, USA
\({}^{\ddagger}\) Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, USA
Footnote †: Published in: 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI)
## 1 Introduction
Deep neural networks (DNNs) have great promise for predicting disease outcomes from medical imaging. For example, researchers have demonstrated that DNNs can predict outcomes in head and neck cancer, such as whether the cancer will metastasize, from computed tomography (CT) images with high accuracy [1, 2]. However, full adoption of such models is held back by the fact that DNNs are typically "black boxes", i.e., the image features that they learn and use to make predictions are not known or not interpretable by humans. This in turn leads to a lack of trust in DNNs by potential clinical users.
Several methods have been proposed for interpretable or explainable deep learning [3, 4] and applied in medical image analysis. The most commonly used methods fall into the class of saliency maps, e.g., Grad-CAM [5]. In these methods, information derived from the classifier gradient is displayed on an input image, highlighting where in the image the classifier is using information to make its decision. While these methods explain _where_ a classifier is focusing, they do not explain _what_ information it is using to make a decision. One other way is to build a decision tree that approximates the deep learning model [6], which makes use of the naturally interpretable architecture of decision trees. Similar to saliency mapping methods, it highlights object parts that are important for decision making without showing the exact features the model relies on. Another class of methods, such as LIME [7], fit a linear approximation to a DNN in a local region of the input space. The idea is that the approximating linear classifiers can be interpreted more easily because they are given by a single parameter vector that provides a weighting of the importance for each input feature. Testing with concept activation vectors (TCAV) [8] extended the linear approximation approach by fitting linear classifiers of binary concepts to the intermediate layer activations of a DNN. This direction was subsequently extended to linear regression of continuous concepts and applied in medical imaging tasks by Graziani et al. [9]. A limitation of these concept attribution approaches is that they rely on a _linear_ approximation to the activations of a DNN, whereas the flexibility of deep models comes from the fact that they are highly nonlinear.
In this paper, we seek to understand the nonlinear features that are being utilized in the decision process
of a DNN. We do this by first developing the _fibered manifold geometry_ that a classifier induces on the input space, decomposing it into nonlinear dimensions that are either relevant or irrelevant to the classifier's decision (Section 2). Second, we develop a method for measuring the alignment of a given set of interpretable features along the gradient flow of this classifier geometry (Section 3). In this way we query to what extent the classifier is using information that is human interpretable. Furthermore, we develop a regularization term for training a classifier to prefer using interpretable feature dimensions. Finally, we demonstrate the effectiveness of our approach in a prediction of distant metastasis from CT images of head and neck tumors (Section 4). We show that our training method leads to a classifier that uses a higher percentage of interpretable image features.
## 2 The Fibered Manifold Geometry of a Classifier
In this section we describe how a classifier locally decomposes the input data space into nonlinear dimensions along which the classifier decision changes, called **sections**, and complementary dimensions along which the classifier decision remains constant, i.e., the level sets of the classifier function, called **fibers**. This structure is known as a **fibered manifold** (see Fig. 1). The key insight to our approach is that to interpret a classifier one must understand the section dimensions because these are the dimensions along which the classifier is differentiating different classes of data. A classifier is "ignoring" the features along its fiber dimensions.
Let's consider a classifier taking inputs \(x\in\mathbb{R}^{d}\) and predicting an output class \(y\in\{1,\ldots,K\}\). Furthermore, assume this classifier is a \(C^{1}\) mapping, \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}\). For example, the outputs could be conditional probabilities, \(p(y\mid x)\), or normalized logits, \(z=\ln(p(y\mid x))\). We will denote the Jacobian matrix of \(f\) at a point \(x\in\mathbb{R}^{d}\) as \(Df(x)\). The **rank** of \(f\) at \(x\) is defined as \(\mathrm{rank}(Df(x))\).
Assuming that \(K<d\), the maximal rank of \(f\) at any point is \(K-1\), due to the constraint that \(\sum_{y}p(y\mid x)=1\). A **regular point** of \(f\) is a point \(x\in\mathbb{R}^{d}\) such that \(Df(x)\) has maximal rank, that is, \(\mathrm{rank}(Df(x))=K-1\). The set of regular points of \(f\) is open in \(\mathbb{R}^{d}\). This implies that there is a neighborhood about any regular point of \(f\) that is a **fibered manifold**, i.e., there is a (possibly nonlinear) coordinate system that decomposes into \(d-K+1\) fiber coordinates, where \(f\) remains constant, and \(K-1\) section coordinates, where \(f\) changes its output.
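As a concrete illustration (our own toy example, not from the paper's experiments), the rank condition can be checked numerically from the Jacobian of the network output:

```python
import torch
from torch.autograd.functional import jacobian

def is_regular_point(f, x, K):
    # x is a regular point of f: R^d -> R^K if Df(x) attains the maximal rank K - 1.
    J = jacobian(f, x)                      # Jacobian matrix, shape (K, d)
    return int(torch.linalg.matrix_rank(J)) == K - 1

# Toy 3-class softmax classifier on R^4.
d, K = 4, 3
W, b = torch.randn(K, d), torch.randn(K)
f = lambda x: torch.softmax(W @ x + b, dim=0)
print(is_regular_point(f, torch.randn(d), K))
```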
## 3 Interpretable Feature Alignment
In addition to our classifier function, \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{K}\), assume that we can also compute a set of \(m\)_interpretable_ features through a mapping \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\). The general idea of our method is to measure how well the classifier \(f\) is using these interpretable features by looking at the alignment of their section subspaces, e.g., if the dimensions in which they vary are similar.
Consider first the simple case where \(K=2\) (a binary classifier) and \(m=1\) (a single interpretable feature). Then the sections defined by \(f\) and \(g\) are one-dimensional and tangential to their respective gradients. Thus we can measure the agreement of the features by the alignment between the gradients of \(f\) and \(g\), e.g., the angle between them. We can estimate the expectation of this alignment over the data distribution by summing this value at each point in the test data. At a single data point \(x\in\mathbb{R}^{d}\), this **pointwise alignment** is given by
\[S(x)=\left(\frac{\langle\nabla f(x),\nabla g(x)\rangle}{\|\nabla f(x)\|\cdot\| \nabla g(x)\|}\right)^{2}. \tag{1}\]
We square the dot product between the normalized gradients because we are only concerned about how well these dimensions align. We do not care about the magnitude of the units or the polarity of the gradients, that is, we consider the alignment of the bidirectional lines defined by the two gradients.
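A minimal autograd sketch of this pointwise measure for a single feature (our own code; `f_scalar` is assumed to be a scalar classifier output, e.g. the positive-class logit, and `g_scalar` a scalar interpretable feature, both differentiable functions of a flattened input):

```python
import torch

def input_gradient(fn, x):
    # Gradient of a scalar-valued function of the input at x.
    x = x.detach().requires_grad_(True)
    out = fn(x)
    grad, = torch.autograd.grad(out, x)
    return grad

def pointwise_alignment(f_scalar, g_scalar, x, eps=1e-12):
    # Eq. (1): squared cosine between the classifier and feature gradients at x.
    gf = input_gradient(f_scalar, x)
    gg = input_gradient(g_scalar, x)
    cos = (gf * gg).sum() / (gf.norm() * gg.norm() + eps)
    return cos ** 2
```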
### Decomposition into Feature Hyperplane
Now consider we want to evaluate the classifier's dependency on multiple features. We may derive this as a search for a classifier, \(\phi:\mathbb{R}^{m}\rightarrow\mathbb{R}^{K}\) from the interpretable features, \(g(x)\), that approximates the prediction output by \(f\), i.e.,
\[f(x)\approx\phi\circ g(x). \tag{2}\]
If the task is difficult enough, the classification produced by \(f\) cannot be computed exactly from the interpretable features \(g(x)\) alone. Therefore, the relationship in (2) is not an exact equality. This is what we expect in the case of a DNN, that is to say, we don't expect
Figure 1: Fibered manifold geometry of a classifier.
that the decision of a DNN can be explained perfectly by interpretable features alone. Rather than optimize for \(\phi\) directly, we consider minimizing the difference in the gradients of both sides of (2). In other words, we want to minimize \(\|\nabla f-Dg^{T}\nabla\phi\|\). This gives us
\[\nabla\phi=(DgDg^{T})^{-1}Dg\nabla f,\]
which can be viewed as a projection of \(\nabla f\) to the hyperplane given by the rows of \(Dg\), i.e., the gradients of the interpretable features. Then we can decompose \(\nabla f\) into a parallel part and a vertical part:
\[\nabla f_{\parallel}=Dg^{T}(DgDg^{T})^{-1}Dg\nabla f, \tag{3}\] \[\nabla f_{\perp}=\nabla f-\nabla f_{\parallel}. \tag{4}\]
These can be understood as the interpretable component (3) and un-interpretable component (4) of the classifier gradient using the selected features.
Note that \(\|\nabla f\|^{2}=\|\nabla f_{\parallel}\|^{2}+\|\nabla f_{\perp}\|^{2}\), so we can naturally define the alignment measure as the fraction of the squared gradient norm contained in tangent space to the section of the interpretable features, i.e.,
\[S(x)=\frac{\|\nabla f_{\parallel}(x)\|^{2}}{\|\nabla f(x)\|^{2}}.\]
Note that in the case with only one feature, this definition of \(S\) corresponds to the previous one in (1).
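A sketch of this projection for multiple features (our own code; `grad_f` is the classifier gradient at a point and `Dg` stacks the feature gradients as rows):

```python
import torch

def projected_alignment(grad_f, Dg, eps=1e-12):
    # Eqs. (3)-(4): split grad_f into its component in the span of the feature
    # gradients (rows of Dg) and the orthogonal remainder, then return the
    # interpretable fraction S(x) = ||grad_f_par||^2 / ||grad_f||^2.
    a = torch.linalg.solve(Dg @ Dg.T, Dg @ grad_f)   # (Dg Dg^T)^{-1} Dg grad_f
    grad_f_par = Dg.T @ a
    return (grad_f_par.norm() ** 2) / (grad_f.norm() ** 2 + eps)
```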
### Gradient Flow to the Decision Boundary
Although observing the alignment at a point can give some indication whether the classifier is making use of the given features, it is only a local property and does not take into account more global geometry of the classifier. To address this, we next develop a measurement of how the gradients of interpretable features align with those of the classifier along a path from the data point to the classifier's decision boundary.
In practice, we can start from the data point and follow the gradient of the classifier and stop when it hits the decision boundary. Then we can integrate the normalized dot product of the gradients of the classifier and the gradients of the feature mapping along this path. To do this, first define the _gradient flow_ from a data point \(x\) as a curve in the data space \(\gamma:[0,T]\rightarrow\mathbb{R}^{d}\) that begins at \(\gamma(0)=x\) and follows the gradient of the classifier, i.e.,
\[\frac{d\gamma(t)}{dt}=\nabla f(\gamma(t)). \tag{5}\]
With a similar decomposition as in the pointwise case, we measure the total fraction of alignment of the classifier and the interpretable features along the gradient flow:
\[F(x)=\frac{\int_{0}^{T}\left\|\nabla f_{\parallel}\left(\gamma(t)\right)\right\| ^{2}dt}{\int_{0}^{T}\left\|\nabla f\left(\gamma(t)\right)\right\|^{2}dt}.\]
We note that both the pointwise score \(S\) and the gradient flow score \(F\) range from 0 to 1.
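A discretized sketch of \(F(x)\) (our own code, using a simple forward-Euler integration of Eq. (5); the fixed step count stands in for a proper stopping rule at the decision boundary, and `Dg_fn` is assumed to return the \(m\times d\) Jacobian of the interpretable features at a point):

```python
import torch

def gradient_flow_alignment(f_scalar, Dg_fn, x, step=0.1, n_steps=200, eps=1e-12):
    num, den = 0.0, 0.0
    y = x.clone()
    for _ in range(n_steps):
        y = y.detach().requires_grad_(True)
        gf, = torch.autograd.grad(f_scalar(y), y)     # classifier gradient
        Dg = Dg_fn(y.detach())
        a = torch.linalg.solve(Dg @ Dg.T, Dg @ gf)
        gf_par = Dg.T @ a                              # interpretable component
        num += float(gf_par.norm() ** 2) * step
        den += float(gf.norm() ** 2) * step
        y = y.detach() + step * gf                     # follow d(gamma)/dt = grad f
    return num / (den + eps)
```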
### Enhancing Interpretability During Training
We can further use the above measure of alignment to encourage a model to be more interpretable. This is done by adding an alignment "reward" term to the loss during training. Given interpretable features, \(g_{i}(x),i=1,\dots,m\), for each data point \(x\), the training loss is:
\[\begin{split}\mathcal{L}(x,f,g)=&\sum_{x}L(f(x),y )\\ &-\sum_{x}\sum_{i=1}^{m}\lambda_{i}\left(\frac{\langle\nabla f(x),\nabla g_{i}(x)\rangle}{\|\nabla f(x)\|\cdot\|\nabla g_{i}(x)\|}\right)^{2} \end{split} \tag{6}\]
where \(L\) is the original training loss function, and \(\lambda_{i}\) is a positive scalar parameter that controls the weight of the alignment with the \(i\)th interpretable feature. It rewards the gradients of the model for aligning with the gradients of given features. We can tune \(\lambda_{i}\) to be as large as is possible without hurting the performance of the model.
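A sketch of this regularized objective for one training example (our own code; `model` is assumed to output a single logit, `feature_fns` are differentiable scalar feature functions, and `base_loss` is the original task loss \(L\)):

```python
import torch

def alignment_regularized_loss(model, x, y, feature_fns, lambdas, base_loss, eps=1e-12):
    # Eq. (6): task loss minus a reward for aligning the model's input gradient
    # with the gradients of the interpretable features.
    x = x.detach().requires_grad_(True)
    out = model(x)
    loss = base_loss(out, y)
    gf, = torch.autograd.grad(out, x, create_graph=True)   # keep graph for backprop
    for lam, g in zip(lambdas, feature_fns):
        gg, = torch.autograd.grad(g(x), x)                  # constant w.r.t. the model
        cos = (gf * gg).sum() / (gf.norm() * gg.norm() + eps)
        loss = loss - lam * cos ** 2
    return loss
```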
## 4 Results
### Dataset and Architecture
We used a Head and Neck PET-CT dataset from [10] available on the Cancer Image Archive (TCIA) to evaluate our methods. The dataset includes data from 298
Figure 2: Example CT image with segmented tumor (left) and the masked tumor image used as input data (right).
subjects. The task is to predict distant metastasis of head and neck tumors using CT images and segmentations of gross tumor volumes. It's a highly imbalanced classification task, with only 40 positive cases (13%) in the entire dataset.
The classifier we used is a neural network with 3 convolutional layers (kernel sizes = \(5\times 5\), \(3\times 3\), and \(3\times 3\)) each followed by average pooling and exponential linear units (ELUs). These are followed by 3 fully-connected layers (output dimensions = 256, 128, and 1). We chose average pooling over max pooling and ELU over ReLU because they are differentiable, while providing equivalent classification performance. The input data is \(512\times 512\) gray scale images.
Following the work from Diamant et al. [1], the inputs are 2D CT slices, each chosen where the cross-sectional area of the tumor is the largest. The area outside the tumor was masked as zero. We augmented the data by randomly rotating in the range \(\pm 20\) degrees and translating in the range \(\pm 0.015\) times the image width/height. We used weighted random sampling to balance the training batches evenly between negative and positive samples. The data was split into a training set of 209 samples and a test set of 89 samples. The model was trained for 100 epochs using the Adam optimizer with initial learning rate \(3.0\times 10^{-5}\) and batch size 32. For the model with alignment term, we chose 3 features each with \(\lambda_{i}=3.0\times 10^{-5}\). Our plain classifier achieved 0.681 balanced accuracy, and the classifier with alignment term training achieved 0.688 balanced accuracy.
### Interpretable Features
We chose 3 features that can be calculated from the data, namely, overall brightness (\(g_{1}\)), tumor extent (\(g_{2}\)), and log aspect ratio of tumor (\(g_{3}\)). Let \(I(u)\) denote an image, with pixel grid coordinates \(u=(u_{1},u_{2})\). Then the three features are calculated as follows:
\[g_{1} =\sum_{u}I(u)\] \[\mu =\frac{1}{g_{1}}\sum_{u}uI(u),\ C=\frac{1}{g_{1}}\sum_{u}I(u)(u- \mu)(u-\mu)^{T}\] \[g_{2} =\mathrm{tr}(C)\] \[g_{3} =\log(\sigma_{1})-\log(\sigma_{2})\]
where \(\sigma_{1}^{2}\geq\sigma_{2}^{2}\) are eigenvalues of the covariance, \(C\).
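These three features can be computed directly from a masked grayscale image; a sketch (our own code, assuming `I` is a float tensor of shape \((H,W)\)):

```python
import torch

def interpretable_features(I, eps=1e-12):
    H, W = I.shape
    uu, vv = torch.meshgrid(torch.arange(H, dtype=I.dtype),
                            torch.arange(W, dtype=I.dtype), indexing="ij")
    u = torch.stack([uu, vv], dim=-1).reshape(-1, 2)        # pixel coordinates
    w = I.reshape(-1)
    g1 = w.sum()                                            # overall brightness
    mu = (u * w[:, None]).sum(dim=0) / (g1 + eps)           # intensity centroid
    d = u - mu
    C = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(dim=0) / (g1 + eps)
    g2 = torch.trace(C)                                     # tumor extent
    lam = torch.linalg.eigvalsh(C)                          # eigenvalues sigma_2^2 <= sigma_1^2
    g3 = 0.5 * (torch.log(lam[1] + eps) - torch.log(lam[0] + eps))  # log(sigma_1) - log(sigma_2)
    return g1, g2, g3
```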
### Interpretability Measures
Here we apply our pointwise interpretability measure, \(S\), to our test set. The results are shown in Table 1 for the both the plain model and the model trained with our enhanced interpretability method. While the interpretability scores for the individual features seem quite small, we show that they are actually large relative to the interpretability scores for a randomly generated feature (which provides a baseline interpretability score for a feature that should _not_ be useful to the classifier). We generated one random feature from a standard normal distribution to compare to the single feature case, and three independent random features to compare to the multiple feature case. These two are referred as "single random" and "three random" in the table. To quantify if our interpretable features were statistically significantly better than random to the classifier, we performed a Kolmogorov-Smirnov (KS) test between the distribution of \(S\) values. All three features \(g_{1},g_{2},g_{3}\), and their combination, were statistically significantly better than random at \(p<10^{-3}\). The last column in Table 1 is the KS test \(p\)-value to see if the enhanced training improved the interpretability over the plain model. As we can see, tumor extent and aspect ratio did not become more useful to the classifier, but brightness did. Finally, using all three interpretable features jointly accounts for 16% of the classifier's squared gradient magnitude. From the results for the model with alignment term, we can see that the interpretable fraction of classifier gradients increased while not negatively affecting classifier performance.
We also show in Table 2 the results of the gradient flow alignment measure, \(F\), applied on the both the plain model and the model trained with our interpretability enhancement. The overall behavior is similar to that of the pointwise interpretability scores. Again, the KS test indicates that the gradient flow interpretability measures for all three features \(g_{1},g_{2},g_{3}\), and their combination, were
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{feature} & \multicolumn{2}{c|}{Means} & \multirow{2}{*}{\(p\)-value} \\ \cline{2-3} & plain & enhanced & \\ \hline overall brightness & 8.2e-4 & 3.4e-3 & \(<10^{-6}\) \\ tumor extent & 3.8e-2 & 2.4e-2 & — \\ log aspect ratio & 1.3e-2 & 9.5e-3 & — \\ combined features & 0.12 & 0.16 & \(<10^{-6}\) \\ \hline single random & 3.6e-6 & 4.1e-6 & 0.95 \\ three random & 1.2e-5 & 1.3e-5 & 0.30 \\ \hline \end{tabular}
\end{table}
Table 1: Results of Pointwise Alignment (\(S\))
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{feature} & \multicolumn{2}{c|}{Means} & \multirow{2}{*}{\(p\)-value} \\ \cline{2-3} & plain & enhanced & \\ \hline overall brightness & 8.4e-4 & 3.4e-3 & \(<10^{-6}\) \\ tumor extent & 3.4e-2 & 2.1e-2 & — \\ log aspect ratio & 4.1e-3 & 2.0e-3 & — \\ combined features & 0.14 & 0.19 & \(<10^{-6}\) \\ \hline single random & 4.4e-6 & 3.8e-6 & 0.40 \\ three random & 1.1e-5 & 1.2e-5 & 0.95 \\ \hline \end{tabular}
\end{table}
Table 2: Results of Gradient Flow Alignment (\(F\))
statistically significantly better than random at \(p<10^{-3}\). Interestingly, the gradient flow measures are similar to the pointwise measures for individual features, but somewhat higher for the three features combined.
## 5 Discussion
We introduced a new method to evaluate the importance of given interpretable features by quantifying their gradient alignment along the gradient flow of the model. Although the resulting alignment scores may seem small, we note that they are significantly larger than random and account for a high proportion of the variance relative to the high dimensionality of the input images. A limitation of our method is that it requires the user to input interpretable features of interest. Thus, our method may be less effective in cases where a model mostly makes use of unexpected features.
## 6 Compliance with Ethical Standards
This research study was conducted retrospectively using human subject data made available in open access by Vallieres et al. [10] in the Cancer Image Archive (TCIA). Ethical approval was not required as confirmed by the license attached with the open access data.
|
2306.12056
|
Secondary ionization of pyrimidine nucleobases and their microhydrated
derivatives in helium nanodroplets
|
Radiation damage in biological systems by ionizing radiation is predominantly
caused by secondary processes such as charge and energy transfer leading to the
breaking of bonds in DNA. Here, we study the fragmentation of cytosine (Cyt)
and thymine (Thy) molecules, clusters and microhydrated derivatives induced by
direct and indirect ionization initiated by extreme-ultraviolet (XUV)
irradiation. Photofragmentation mass spectra and photoelectron spectra of free
Cyt and Thy molecules are compared with mass and electron spectra of Cyt/Thy
clusters and microhydrated Cyt/Thy molecules formed by aggregation in
superfluid helium (He) nanodroplets. Penning ionization after resonant
excitation of the He droplets is generally found to cause less fragmentation
compared to direct photoionization and charge-transfer ionization after
photoionization of the He droplets. When Cyt/Thy molecules and oligomers are
complexed with water molecules, their fragmentation is efficiently suppressed.
However, a similar suppression of fragmentation is observed when homogeneous
Cyt/Thy clusters are formed in He nanodroplets, indicating a general trend.
Penning ionization electron spectra (PIES) of Cyt/Thy are broad and nearly
featureless but PIES of their microhydrated derivatives point at a sequential
ionization process ending in unfragmented microsolvated Cyt/Thy cations.
|
Jakob D. Asmussen, Abdul R. Abid, Akgash Sundaralingam, Björn Bastian, Keshav Sishodia, Subhendu De, Ltaief Ben Ltaief, Sivarama R. Krishnan, Henrik B. Pedersen, Marcel Mudrich
|
2023-06-21T06:58:40Z
|
http://arxiv.org/abs/2306.12056v1
|
Secondary ionization of pyrimidine nucleobases and their microhydrated derivatives in helium nanodroplets
###### Abstract
Radiation damage in biological systems by ionizing radiation is predominantly caused by secondary processes such as charge and energy transfer leading to the breaking of bonds in DNA. Here, we study the fragmentation of cytosine (Cyt) and thymine (Thy) molecules, clusters and microhydrated derivatives induced by direct and indirect ionization initiated by extreme-ultraviolet (XUV) irradiation. Photofragmentation mass spectra and photoelectron spectra of free Cyt and Thy molecules are compared with mass and electron spectra of Cyt/Thy clusters and microhydrated Cyt/Thy molecules formed by aggregation in superfluid helium (He) nanodroplets. Penning ionization after resonant excitation of the He droplets is generally found to cause less fragmentation compared to direct photoionization and charge-transfer ionization after photoionization of the He droplets. When Cyt/Thy molecules and oligomers are complexed with water molecules, their fragmentation is efficiently suppressed. However, a similar suppression of fragmentation is observed when homogeneous Cyt/Thy clusters are formed in He nanodroplets, indicating a general trend. Penning ionization electron spectra (PIES) of Cyt/Thy are broad and nearly featureless but PIES of their microhydrated derivatives point at a sequential ionization process ending in unfragmented microsolvated Cyt/Thy cations.
## 1 Introduction
Ionization of deoxyribonucleic acid (DNA) bases is a key step in radiation damage leading to mutation.[1] Radiation damage is not only caused by direct impact of high-energy photons on the nucleobases, but to a large extent by secondary particles (electrons, ions and radicals) formed from reactions induced by the ionizing radiation.[2] The fragmentation of DNA bases upon ionization by collisions with electrons[3] and ions[4] has extensively been studied to unravel the key processes leading to radiation damage. An important element in understanding the reaction paths to damage of DNA is the interaction of the nucleobases with the aqueous medium which affects the ionization potential[5] and can alter the fragmentation pathways due to energy[6] or proton[7, 8, 9] transfer.
Fragmentation of molecular ions is readily studied in the gas phase using molecular beam techniques. Formation of microhydrated clusters (nucleobases weakly bound to a few water molecules) is possible in molecular beams, but controlling the cluster size is difficult.[10] In this study, we investigate the fragmentation of the two pyrimidine nucleobases cytosine (Cyt) and thymine (Thy) found in DNA (see Fig. 1) induced by direct and indirect ionization after XUV irradiation. Free Cyt and Thy molecules in an effusive beam are directly photoionized and Cyt/Thy clusters and microhydrated complexes are formed by means of aggregation in helium (He) nanodroplets (HNDs). HNDs are cold (0.4 K), superfluid clusters of weakly-bound He.[11] Due to their capability to efficiently pick up foreign species ("dopants"), the high mobility of dopants inside HNDs, and their chemical inertness, various clusters can be formed and studied in HNDs. The size of the dopants' clusters is controlled by the partial pressure of the dopant vapor in the pick-up cell and can often be determined from a Poisson distribution.[12] In this way we can achieve a relatively high degree of control of the composition of homogeneous and of heterogeneous clusters, such as microhydrated biomolecules.
Fig. 1: Molecular structure of cytosine and thymine.
Dopants in HNDs can efficiently be Penning ionized through resonant excitation of the droplet, or through direct ionization of the droplet leading to radiative charge-transfer (RCT) ionization of the dopant.[13, 14] Resonant photoexcitation is most efficient for the HND state excited at a photon energy \(h\nu=21.6\) eV which correlates to the 1s2p \({}^{1}\)P atomic He state.[15] This state relaxes to the metastable 1s2s \({}^{1}\)S He atomic state within about 1 ps.[16, 17] The metastable He atom further Penning ionizes the dopant by decaying to the ground state thereby releasing its energy to the dopant which in turn is ionized.[18] Penning ionization and RCT ionization are instances of indirect ionization processes in heterogeneous systems induced by energetic radiation, which have analogues in aqueous systems such as biological tissue.[19, 20]
In this study, we aim at characterizing the fragmentation of Cyt and Thy by comparing direct photoionization of the free molecules in the gas-phase with indirect ionization of the molecules embedded in HNDs (see Fig. 2). By adjusting the doping level and by co-doping with water molecules, we measure fragment distributions from ionization of Thy and Cyt clusters and of the microhydrated Thy, Cyt molecules. As a general trend, fragmentation is efficiently suppressed when the molecules are complexed in clusters. From Penning ionization electron spectra (PIES) recorded in coincidence with various Thy, Cyt fragments and their complexes with water, we obtain some insight into the ionization process. Based on these results, we assess the benefit of HNDs as nano-matrices for studies of photoionization and fragmentation processes with relevance to radiation biology.
## 2 Methods
Ion mass spectra and electron velocity map images (VMIs) were measured using the XENIA (XUV electron-ion spectrometer for nanodroplets in Aarhus) endstation[21] located at the AMOLine of the ASTRID2 synchrotron at Aarhus University, Denmark.[22] Using photoelectron-photoion coincidence (PEPICO) detection, VMIs were recorded in coincidence with specific fragment ions of Thy, Cyt molecules and clusters. Ion and electron yields were background-subtracted using a rotating chopper which periodically blocks and unblocks the HND beam. At the photon energy \(h\nu=21.6\) eV (resonant excitation of HNDs), a tin (Sn) filter was used to block higher harmonics of the undulator radiation; at \(h\nu=26.0\) eV (photoionization of HNDs), an aluminum (Al) filter was used. The photon flux is estimated from the yield of electron-H\({}_{2}\)O\({}^{+}\) coincidences detected from the background gas.[23] Electron spectra were inferred from the VMI by Abel inversion using the MEVELER reconstruction method[24].
HNDs were formed by continuous expansion of He at high pressure (30 bar) into vacuum through a cryogenically cooled (14 K) nozzle of diameter 5 \(\mu\)m. The average droplet size is determined from titration measurements to be \(\langle N\rangle=1.9\times 10^{4}\).[25] The droplets were first doped with Cyt or Thy by passing them through a 1 cm long vapor cell. The cell was heated to 140-165 \({}^{\circ}\)C and 92-108 \({}^{\circ}\)C for doping with Cyt and Thy, respectively. The doping level for the two dopants is mainly determined from the monomer-to-dimer ratio detected at \(h\nu=21.6\) eV for an oven temperature of 140 \({}^{\circ}\)C and 90 \({}^{\circ}\)C, respectively. Changes of the doping level from this value are then determined based on the change in vapor pressure as a function of varying oven temperature.[26] Subsequently, the HNDs were doped with H\({}_{2}\)O or D\({}_{2}\)O by leaking water vapor into a gas doping cell of length 1.8 cm further downstream. The doping level was determined using the formula derived by Kuma _et al.[27]_ An effusive molecular beam of Cyt or Thy was realized by heating an effusive cell with a nozzle opening of 1 mm diameter. The cell was heated to 220 \({}^{\circ}\)C to create an effusive beam of Cyt and to 170 \({}^{\circ}\)C to create an effusive beam of Thy.
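For orientation, the pick-up statistics referred to above are commonly modeled as a Poisson distribution in the mean number of captured molecules; a small illustrative sketch (the doping level in the example is arbitrary, not a measured value):

```python
import math

def pickup_probabilities(mean_dopants, k_max=6):
    # Poisson pick-up statistics: probability that a droplet captures k dopant molecules.
    return [math.exp(-mean_dopants) * mean_dopants**k / math.factorial(k)
            for k in range(k_max + 1)]

# P(1)/P(2) = 2/mean; at a mean of 0.5 molecules per droplet this ratio is 4.
p = pickup_probabilities(0.5)
print(f"P(1)/P(2) = {p[1] / p[2]:.1f}")
```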
## 3 Results and discussion
### Ion mass spectra
In Fig. 3, we present the ion mass spectra recorded by either photoionizing free Cyt and Thy molecules in an effusive beam or by indirectly ionizing Cyt and Thy embedded in HNDs at two photon energies \(h\nu=21.6\) eV (Penning ionization) and \(h\nu=26\) eV (RCT ionization). Additionally, mass spectra of microhydrated Cyt and Thy formed by co-doping the HNDs with Cyt/Thy and with D\({}_{2}\)O molecules are shown (blue lines). In the case of photoionization
Fig. 2: Illustration of the three different ionization schemes for thymine and microhydrated thymine: a) Direct photoionization of the molecule, b) Penning ionization in a helium nanodroplet following resonant excitation (\(h\nu=21.6\) eV) of the droplet and c) charge-transfer ionization in a helium nanodroplet after direct ionization of the droplet (\(h\nu=26\) eV).
of the effusive beam, the photon energy is tuned to 20.6 eV and 24.6 eV; these values match the internal energy of the metastable He atom and the He ion which is released to the Cyt/Thy dopants by Penning ionization, respectively RCT ionization processes.
The various fragmentation channels of Cyt [28, 29, 30] and Thy [31, 32, 33] have been identified by means of high-resolution mass spectrometry and quantum chemical calculations. From photoionization of effusive Cyt and Thy, we identify two main fragments in agreement with mass spectra reported in the literature. The main fragmentation pathway of the parent cation involves the loss of isocyanic acid (OCNH) through a retro-Diels-Alder (rDA) reaction giving rise to the \(m/z=68\) and \(m/z=83\) for Cyt and Thy, respectively. The rDA reaction happens through breaking of the bonds N1-C2 and N3-C4 (see Fig. 1). [29, 34] Both fragments are subjected to further fragmentation. The product of the rDA reaction of Cyt further fragments through the elimination of HCN (or NHC) as a result of break-up of the C5-C6 bond (or C4-C5). [29] This leads to the HNC\({}_{2}\)H\({}_{2}^{+}\) fragment (\(m/z=41\)). The Thy rDA product fragments through elimination of either CO or HCN (breaking of C4-C5 or C5-C6 bonds) resulting in fragments of masses \(m/z=55\) and \(m/z=56\). [34] Due to our limited mass resolution, we cannot fully separate the two ionic fragments in the effusive beam mass spectrum, but based on the center of joint mass peak, we identify CO elimination as the main pathway in consistence with previous findings. [31] Accordingly, the peak is identified as HNC\({}_{3}\)H\({}_{4}^{+}\) (\(m/z=55\)). In general, minor fragmentation paths leading to the aforementioned fragments with one or two missing or added hydrogen atoms are expected to be present but cannot be resolved with our spectrometer.
Indirect ionization of Cyt/Thy embedded in HNDs leads to similar fragments as direct photoionization of free Cyt/Thy. Additionally, we detect Cyt\({}_{2}^{+}\), Thy\({}_{2}^{+}\) clusters. We do not detect any fragments with masses between the monomer and dimer parent ions indicating that fragmentation of clusters in HNDs is limited to loss of intact parent moieties. In the case of RCT ionization (\(hv=26\) eV), the relative yield of the fragments to the parent ion only slightly differs from that detected by photoionization of the effusive beam, whereas in the case of Penning ionization (\(hv=21.6\) eV) the yield of the fragments ions are significantly suppressed. This shows that Penning ionization is a
Fig. 3: Mass spectra recorded for Cyt [a) & b)] and Thy [c) & d)] in an effusive molecular beam, doped into HNDs or co-doped with D\({}_{2}\)O into HNDs. a) & c) show the mass spectra for Penning ionization of the dopant (\(hv=21.6\) eV), and b) & d) show mass spectra for charge-transfer ionization (\(hv=26\) eV). In case of photoionization of the effusive beam, the photon energy is matched to the internal energy of the excited He atom inducing Penning ionization and to the He ion inducing charge-transfer ionization.
"softer" ionization channel where the excess energy is dissipated in the droplet thereby suppressing fragmentation. Previous studies of electron-impact ionization of doped HNDs have reported reduced fragmentation of dopants in the droplet.[35, 36, 37] One study found that in particular fragmentation involving the break-up of C-C bonds was suppressed in HNDs[36] which matches well the observed reduction of the yields of fragments formed by HCN, NHC and CO elimination (relying on break-up of the C-C bonds) and increased yields of fragments from rDA (relying on break-up of N-C bonds).
In the case of microhydration of Cyt/Thy by co-doping with D\({}_{2}\)O, the fragmentation of the pyrimidines is further suppressed. For Penning ionization, a series of D\({}_{2}\)O cluster ions and Cyt/Thy ions with attached D\({}_{2}\)O molecules can be seen in the mass spectra (see Fig. 4 for Penning ionization of microhydrated cytosine), whereas in the case of RCT ionization these cluster ions are less prominent. This further confirms the fact that Penning ionization is a softer ionization channel compared to RCT ionization. For the water cluster ions, the deuterated cluster, (D\({}_{2}\)O)\({}_{n}\)D\({}^{+}\), dominates over the undeuterated cluster, (D\({}_{2}\)O)\({}_{n}^{+}\). Previous reports show that the unprotonated cluster is not observed for ionization of bare water clusters.[38, 39] However, in HNDs the ejection of the OH, OD radical following proton-, respectively deuteron transfer can be inhibited.[40, 41] For the microhydrated Cyt/Thy ion complex, the undeuterated cluster dominates over the deuterated cluster. Due to the high proton-affinity of the nucleobases,[42] one would expect the deuteron-transfer to be efficient towards Cyt/Thy. The fact that the undeuterated ion complexes have higher yields suggests that the ejection of the OH (OD) radical is less efficient for the mixed cluster. Denifl _et al._ have reported similar findings for ionization of HNDs co-doped with fullerene and water.[41]
Fig. 5 (red symbols, right axis) shows the average size, \(\langle n\rangle\), of the Cyt(H\({}_{2}\)O)\({}_{n}^{+}\), Thy(D\({}_{2}\)O)\({}_{n}^{+}\) ions detected as a function of water doping level. Note that in the case of Cyt, we have replaced D\({}_{2}\)O as co-dopant by H\({}_{2}\)O as there was no particular advantage of using deuterated water. However, use of D\({}_{2}\)O avoids the overlap in mass between the Thy parent ion and the (H\({}_{2}\)O)\({}_{7}^{+}\) cluster. The average number of water molecules bound to the ionic complex rises with increasing doping level, but does not reflect the actual number of water molecules surrounding the Cyt/Thy molecule in the droplet since the average number of water molecules in the ionic complex is smaller than the average number of doped water molecules. We note that larger clusters (\(n\geq 6\)) may be present in mass spectra but are not considered here due to the limited resolution for larger masses. Thus, \(\langle n\rangle\) shown in Fig. 5 is most likely underestimated; however it is unlikely that this explains the large difference (nearly a factor of 10) between the detected ion cluster size and the actual dopant cluster size. Thus, the microhydrated Cyt/Thy clusters fragment into smaller clusters (elimination of [H\({}_{2}\)O]\({}_{n}\)/[D\({}_{2}\)O]\({}_{n}\)) upon Penning and RCT ionization, where fragmentation is enhanced for RCT ionization as compared to Penning ionization. The rise of \(\langle n\rangle\) seems to level out for high doping strength, possibly indicating the presence of a hydration shell around the Cyt/Thy ion. A study of microhydrated Thy (and adenine) has shown a hydration shell of \(n=4\) around the nucleobase ion.[43] It would require higher doping levels to investigate whether convergence at \(n=4\) for the dopant ion is the case. However, we note that the \(n=4\) hydrated ion does not show an enhanced abundance compared to \(n=3\) and \(n=5\) in the mass spectra [Fig. 3 a) and c)].
A clear trend is that the fragmentation of the Cyt/Thy molecules is suppressed by increasing the level of microhydration. Fig. 5 (blue symbols, left axis) shows the ratio of fragment ion [Cyt-OCNH]\({}^{+}\) (NHC\({}_{3}\)H\({}_{4}^{+}\)) to unfragmented ion Cyt(H\({}_{2}\)O)\({}_{0-5}^{+}\) (Thy[D\({}_{2}\)O]\({}_{0-5}^{+}\)) yields for increasing doping level of water. We refer to the rDA product [Cyt-OCNH]\({}^{+}\) for Cyt and the product following CO elimination, NHC\({}_{3}\)H\({}_{4}^{+}\), for Thy since the respective other main ion fragments of the two molecules overlap with a water cluster ion. Both for Penning ionization and RCT ionization, the ratio of fragment ions to hydrated and non-hydrated parent ions drops when increasing the number of added water molecules. In the case of Cyt, convergence of the ratio at 0.1 for both Penning and RCT ionization reflects the detection limit due to noise in the spectra.
Fig. 4: Penning ionization mass spectrum (\(h\nu=21.6\) eV) of microhydrated Cyt in He nanodroplets.
Reduced fragmentation of glycine and tryptophan from RCT ionization in HNDs has been observed for an increased level of co-doping with water [44, 45]. While microhydration clearly suppresses fragmentation of Cyt and Thy, naturally the question arises to what extent this "buffer effect" is a unique property of water. To answer this question, we performed comparative measurements where homogeneous Cyt/Thy clusters were formed in the HNDs. We measured the relative yield of the fragments to the parent ion and parent cluster ion for increasing pure Cyt/Thy dopant cluster sizes. Fig. 6 shows the ratio of the yields of the two main fragments to the total yield of the parent ions and parent ion clusters (\(\text{Cyt}^{+}_{1-4}\), \(\text{Thy}^{+}_{1-4}\)) as a function of average dopant cluster size. The relative reduction in the dopant fragment ratio is 60-70 % when increasing the average number of doped molecules from 0.5 to 2 in both cases of Penning and RCT ionization. A similar reduction of fragment yields with respect to monomers was observed for the case of impact ionization by keV ions [46]. New ion fragments were found for increasing Thy cluster sizes in a molecular beam which were facilitated by the intermolecular hydrogen-bonds in the cluster. These cluster-specific fragmentation channels are not present in HNDs most likely because conformations corresponding to local energy minima tend to be frozen out in HNDs at the expense of the equilibrium structures [47, 48].
The fact that the relative fragment yield decreases so rapidly for increasing pyrimidine cluster size without co-doping water indicates that the reduced fragmentation is a general trend for increasing cluster size largely independent of the molecular composition. In the case of glycine and tryptophan in HNDs [44, 45], the degree to which fragmentation was buffered by water was different indicating dependencies on the intermolecular bonding in the dopant cluster. Thus, the buffering effect of fragmentation may be different for the purine bases (adenine and guanine) in HNDs and could possibly out-compete the buffering effect from formation of homogeneous purine clusters in the droplet. A systematic study of different combinations of co-dopants combined with quantum chemical calculations should be carried out to unravel the specific
Fig. 5: Relative yields of \([\text{Cyt-OCNH}]^{+}\) to \(\text{Cyt(H_{2}O)}^{+}_{0-5}\) (a) and of \(\text{NHC_{3}H_{4}^{+}}\) to \(\text{Thy(D_{2}O)}^{+}_{0-5}\) (b) as a function of \(\text{H_{2}O/D_{2}O}\) doping level (blue symbols, left axis). The red symbols and right axes show the average number of water molecules bound to the Cyt/Thy parent ions. For both dopants, the relative fragment yield and average number of attached water molecules are shown for Penning ionization (\(h\nu=21.6\) eV) and charge-transfer ionization (\(h\nu=26\) eV).
Fig. 6: Ratios of the yield of the two main fragments of Cyt (a) and Thy (b) to the parent ion monomer and cluster (\(n=1\)-\(4\)) for increasing doping level of Cyt/Thy. These data were obtained for Penning ionization (\(h\nu=21.6\) eV) and charge-transfer ionization (\(h\nu=26\) eV).
intermolecular effect of clusters on the suppression of fragmentation upon ionization.
### Electron spectra
The PIES of dopants in HNDs are notoriously broad and structureless.[49, 50, 51] An exception to this are alkali metals, which reside on the droplet surface.[52, 13, 18] Fig. 7 shows PIES recorded at \(h\nu=21.6\) eV and the corresponding PES recorded for an effusive beam at \(h\nu=20.6\) eV in coincidence with the two main fragments and the parent ion for Cyt and Thy. The electron spectra recorded for effusive Cyt and Thy are consistent with those previously reported, given the limited resolution of our VMI spectrometer.[53, 34] The energy gap between the falling edges of the electron spectra for different coincidences towards larger kinetic energy matches the appearance energy of the different fragments reported for Thy.[34] In contrast, PIES of Cyt/Thy in HNDs are broad and structureless, resembling previously reported PIES of polyatomic molecules embedded in HNDs. The nature of the PIES in HNDs is still not well understood. We have previously modelled the PIES of acene molecules in HNDs by relaxation through a series of elastic electron-He binary collisions.[49] However, the droplet size had to be massively overestimated to reach an energy loss similar to that observed in the experiment. We have recently demonstrated that the electron-He binary collision model can indeed reproduce the energy loss and change in angular distribution in the case of direct photoemission from HNDs.[54] These two results imply that elastic scattering alone cannot account for the observed PIES. The sharp drop of signal at \(<1\) eV was explained by electrons being trapped in the HNDs due to the barrier potential to the conduction band in liquid He, which the electrons have to overcome to escape from the droplet.[55, 56, 57, 58] The falling edge of the PIES of acene molecules in HNDs could be modeled by a Maxwell-Boltzmann distribution corresponding to a thermal electron distribution of \(>10^{4}\) K. However, this temperature is incompatible with the temperature of HNDs (0.4 K).[11] We note that such hot thermal emission of electrons has been described as "hot electron ionization" by Hansen _et al._[59] for single photon ionization[60, 61], multi-photon ionization[62, 63] and Penning ionization[64] of fullerenes. If hot electron ionization applied to Penning ionization of Cyt/Thy in HNDs, the question arises why it does not apply to direct ionization of the molecules in an effusive beam. Possibly, the presence of Cyt/Thy clusters in the droplet could promote a different ionization mechanism; however, at the doping conditions of the experiment, mostly monomer doping is expected. Thus, there is little evidence available at the moment to determine whether the PIES of Cyt/Thy is caused by hot electron ionization. PIES of rare gas atoms[58], small organic molecules[50] and water (see Fig. 8) in HNDs show additional features that can be associated with direct ionization from interatomic decay with an excited He atom. Additional experimental and theoretical work is needed to unravel the dopant-specific structure of the Penning ionization process in HNDs.
Fig. 8 shows the PIES recorded for microsolvated Cyt and Thy. The PIES recorded in coincidence with Cyt\({}^{+}\) or Thy\({}^{+}\) is identical in the case of doping only with the nucleobases and co
Fig. 7: VMIs of electrons in coincidence with Cyt\({}^{+}\) recorded for (a) direct photoionization of effusive Cyt (\(h\nu=20.6\) eV) and (b) Penning ionization of Cyt in HNDs (\(h\nu=21.6\) eV). Panels (c) and (d) show PIES (\(h\nu=21.6\) eV) of HND-doped Cyt and Thy, respectively, showing the three main electron-ion coincidence channels (solid lines). The dotted lines show the corresponding electron spectra of effusive Cyt and Thy (\(h\nu=20.6\) eV).
doping with water whereas the PIES detected in coincidence with Cyt(H\({}_{2}\)O)\({}_{1-2}^{+}\) and Thy(D\({}_{2}\)O)\({}_{1-2}^{+}\) extends to higher kinetic energy. This suggests that fragmentation of the microsolvated complex to the bare nucleobase ion is weak. Thus, the yields of Cyt\({}^{+}\) and Thy\({}^{+}\) reflect Penning ionization in the case where no water molecules were captured by the droplet.
We note that the Penning ionization electron distribution extends to higher kinetic energy for the ion fragment associated with the smallest appearance energy. The difference in highest detected electron energy for the different fragments matches the difference in appearance energy, which indicates that the Penning spectra can be associated with the appearance energy for a specific fragment of the dopant in HNDs. With this information, we can assess the PIES of microsolvated Cyt and Thy in Fig. 8. The PIES detected in coincidence with Cyt(H\({}_{2}\)O)\({}_{1-2}^{+}\) and Thy(D\({}_{2}\)O)\({}_{1-2}^{+}\) extend to higher kinetic energy than the PIES of Cyt\({}^{+}\), Thy\({}^{+}\) or H\({}_{3}\)O\({}^{+}\), D\({}_{3}\)O\({}^{+}\). Photoionization of hydrated nucleobases in a molecular beam expansion has revealed a red-shift of \(\sim\)0.3 eV in the appearance energy for M(H\({}_{2}\)O)\({}_{1-2}^{+}\)[65], which matches well with the observation of a more extended PIES in HNDs. The PIES of Cyt(H\({}_{2}\)O)\({}_{1-2}^{+}\)/Thy(D\({}_{2}\)O)\({}_{1-2}^{+}\) show a maximum at \(\sim\)7 eV, which is close to the maximum found in the PIES of H\({}_{3}\)O\({}^{+}\), D\({}_{3}\)O\({}^{+}\) (\(\sim\) 6 eV). This indicates that the PIES of microhydrated dopants can be correlated to the PIES of water. This is expected, as the water molecules doped into the HNDs after the Cyt/Thy molecules form a hydration shell and Penning ionization is particularly surface-sensitive. Thus, despite their broad structure, the PIES give some insight into the secondary ionization of hydrated systems. However, a more detailed understanding of the nature of the PIES of dopants is needed for any further analysis of the system.
## 4 Conclusion
In summary, we have presented mass spectra of Cyt and Thy and their microhydrated derivatives following Penning ionization and charge-transfer ionization in HNDs upon EUV irradiation. Fragmentation of the pyrimidine nucleobases is strongly suppressed upon ionization in helium nanodroplets compared to direct photoionization of the molecules in an effusive beam. Generally, the probability for the parent ion to fragment is smaller for Penning ionization, making it a softer ionization process compared to charge-transfer ionization. By increasing either the cluster size of the pure pyrimidine bases or the level of hydration of the pyrimidine molecules, fragmentation of the parent ion is also reduced for both ionization channels. The mass spectra of ionized Cyt and Thy clusters and microhydrated Cyt/Thy clusters in helium droplets deviate from those of clusters formed in a molecular beam expansion. The difference can most likely be assigned to the cold (0.4 K) environment of the helium droplets, which facilitates stabilization of local minimum-energy structures.
Penning ionization electron spectra of Cyt/Thy and their microhydrated derivatives are broad and dissimilar to photoelectron spectra of effusive Cyt/Thy. Nevertheless, the highest kinetic energy of the Penning ionization electron spectrum recorded in coincidence with a given ion fragment reflects the appearance energy of that fragment for direct photoionization.
These results demonstrate how a model system for radiation damage, _i. e._ microsolvated pyrimidine nucleobases, can be formed and studied in HNDs. Due to the efficient cooling of embedded species, helium droplets can give access to conformations not achievable under conventional molecular beam conditions. To obtain a more detailed understanding of the local minimum-energy conformations formed in HNDs and the quenching of fragmentation, further experiments and quantum chemistry calculations should be carried out.
### Author Contributions
J.D.A., A.R.A., A.S., B.B., K.S., S.D., L.B.L. and M.M. performed the experiments with support from H.B.P. S.K. aided remotely in the interpretation of the experimental results. J.D.A. and M.M. wrote the manuscript with input from all the co-authors.
### Conflicts of interest
There are no conflicts to declare.
Fig. 8: PIES (\(h\nu=21.6\) eV) of microsolvated Cyt (a) and Thy (b) in coincidence with the main water cluster ion fragment (H\({}_{3}\)O\({}^{+}\)/D\({}_{3}\)O\({}^{+}\)), the nucleobase parent ion and the ionic cluster of the parent ion bound to one or two water molecules.
## Acknowledgements
J.D.A. and M.M. acknowledge financial support by the Carlsberg Foundation. A.R.A. gratefully acknowledges the support from the Marie Sklodowska-Curie Postdoctoral Fellowship project Photochem-RS-RP (Grant Agreement No. 101068805) provided by the European Union's Horizon 2020 Research and Innovation Programme. S.R.K. thanks the Dept. of Science and Technology, Govt. of India, for support through the DST-DAAD scheme and the Science and Eng. Research Board. S.R.K., K.S. and S.D. acknowledge the support of the Scheme for Promotion of Academic Research Collaboration, Min. of Edu., Govt. of India, and the Institute of Excellence programme at IIT-Madras via the Quantum Center for Diamond and Emergent Materials. S.R.K. gratefully acknowledges support of the Max Planck Society's Partner group programme. S.R.K. acknowledges support for this research through the Indo-French Center for Promotion of Advanced Research (CEFIPRA). M.M. and S.R.K. gratefully acknowledge funding from the SPARC Programme, MHRD, India. L.B.L. and M.M. acknowledge financial support by the Danish Council for Independent Research Fund (DFF) via Grant No. 1026-00299B. The research leading to this result has been supported by the COST Action CA21101 "Confined Molecular Systems: From a New Generation of Materials to the Stars (COSY)".
|
2308.15295
|
Benefits of Deterministic and Stochastic Tendency Adjustments in a
Climate Model
|
We develop and compare model-error representation schemes derived from data
assimilation increments and nudging tendencies in multi-decadal simulations of
the community atmosphere model, version 6. Each scheme applies a bias
correction during simulation run-time to the zonal and meridional winds. We
quantify to which extent such online adjustment schemes improve the model
climatology and variability on daily to seasonal timescales. Generally, we
observe a ca. 30% improvement to annual upper-level zonal winds, with largest
improvements in boreal spring (ca. 35%) and winter (ca. 47%). Despite only
adjusting the wind fields, we additionally observe a ca. 20% improvement to
annual precipitation over land, with the largest improvements in boreal fall
(ca. 36%) and winter (ca. 25%), and a ca. 50% improvement to annual sea level
pressure, globally. With mean state adjustments alone, the dominant pattern of
boreal low-frequency variability over the Atlantic (the North Atlantic
Oscillation) is significantly improved. Additional stochasticity further
increases the modal explained variances, which brings it closer to the observed
value. A streamfunction tendency decomposition reveals that the improvement is
due to an adjustment to the high- and low-frequency eddy-eddy interaction
terms. In the Pacific, the mean state adjustment alone led to an erroneous
deepening of the Aleutian low, but this was remedied with the addition of
stochastically selected tendencies. Finally, from a practical standpoint, we
discuss the performance of using data assimilation increments versus nudging
tendencies for an online model-error representation.
|
William E. Chapman, Judith Berner
|
2023-08-29T13:32:06Z
|
http://arxiv.org/abs/2308.15295v1
|
# Benefits of Deterministic and Stochastic Tendency Adjustments in a Climate Model
###### Abstract
We develop and compare model-error representation schemes derived from data assimilation increments and nudging tendencies in multi-decadal simulations of the community atmosphere model, version 6. Each scheme applies a bias correction during simulation run-time to the zonal and meridional winds. We quantify to which extent such online adjustment schemes improve the model climatology and variability on daily to seasonal timescales. Generally, we observe a ca. 30% improvement to annual upper-level zonal winds, with largest improvements in boreal spring (ca. 35%) and winter (ca. 47%). Despite only adjusting the wind fields, we additionally observe a ca. 20% improvement to annual precipitation over land, with the largest improvements in boreal fall (ca. 36%) and winter (ca. 25%), and a ca. 50% improvement to annual sea level pressure, globally. With mean state adjustments alone, the dominant pattern of boreal low-frequency variability over the Atlantic (the North Atlantic Oscillation) is significantly improved. Additional stochasticity further increases the modal explained variances, which brings it closer to the observed value. A streamfunction tendency decomposition reveals that the improvement is due to an adjustment to the high- and low-frequency eddy-eddy interaction terms. In the Pacific, the mean state adjustment alone led to an erroneous deepening of the Aleutian low, but this was remedied with the addition of stochastically selected tendencies. Finally, from a practical standpoint, we discuss the performance of using data assimilation increments versus nudging tendencies for an online model-error representation.
_Corresponding author_ : William E. Chapman, [email protected]
## 1 Introduction
In recent decades, significant strides have been made within the scientific community to enhance the numerical modeling of Earth's climate using General Circulation Models (GCMs). These models can emulate atmospheric and oceanic dynamics on various temporal and spatial scales and have broadened our knowledge of the past, present, and future climate system. This progress has led to more precise simulations of weather, extreme events, and climate processes. However, accurately representing all the dynamics that affect the Earth's climate remains a challenge despite these advancements. Many atmospheric processes occur at scales too small for the current resolution of GCMs, and it is unlikely that we will achieve the computational power to represent them soon. Hence, the parameterization of these phenomena continues to be a persistent necessity.
Approximations of the equations of fluid motion through physical parameterization schemes and truncation
errors in the numerical discretization give rise to inherent biases within GCMs (e.g., Berner et al., 2012; Flato et al., 2014). Consequently, GCM data may not accurately reflect real-world observations and may misrepresent climate and weather risks. A portion of this bias is the result of so-called "fast-physics" errors, which manifest in the first few days of a model run as a result of parameterization error (Hurrell et al., 2009; Ma et al., 2014; Palmer and Weisheimer, 2011; Williamson and Olson, 2007). These errors may initially be local but can influence model bias at climate timescales by cascading upscale (Jung et al., 2008; Klinker and Sardeshmukh, 1992; Rodwell and Palmer, 2007). Addressing model biases through model physics/parameterization improvement can be challenging, and compensation errors due to model tuning are likely to arise. Thus, some studies have proposed empirically determined corrections to the model's tendency equations. One of these approaches entails utilizing a Data Assimilation (DA) system to learn systematic model bias.
The goal of the current study is to compare two data assimilation (DA) based analysis adjustment techniques for assessing and addressing model error via examining and then re-inserting adjustment increments during model runtime: 1) Relaxation (or nudging) towards observed data and 2) an Ensemble Adjustment Kalman Filter (EAKF, Anderson, 2001).
### Nudging
Nudging techniques were developed in the late 1980s to study potential remote origins of forecast error by gradually relaxing the model towards analysis (or reanalysis) data during the integration process (e.g., Ferranti et al., 1990; Haseler, 1982; Klinker, 1990). Due to the availability of ever-improving reanalysis data, these techniques have been used to force numerical simulations toward specific observed trajectories.
Researchers have employed nudging to gain insight into the sources of forecast errors and to improve the accuracy of climate models (Ferranti et al., 1990; Jung et al., 2008, 2010; Jung, 2011; Batte and Deque, 2016). Nudging is easy to implement and adds computationally negligible expense to the model simulations. However, these advantages come with some significant disadvantages: 1) the choice of nudging parameters is generally not dynamically driven and is largely subjective; 2) relaxation terms can provide a significant contribution to the tendencies and might have adverse effects on model dynamics; and 3) model dynamics are excluded during the relaxation implementation.
### Data Assimilation
An alternative approach is to estimate model bias utilizing a DA system, where the large-scale circulation is initially constrained to match observational data. This step helps eliminate errors that may result from slow atmospheric adjustments to nonlocal processes (e.g., Karspeck et al., 2018; Raeder et al., 2012, 2021). By analyzing the adjustments made to the constrained tendencies (the difference between the model prior and posterior state, referred to henceforth as model increments), we can effectively identify systematic fast-physics errors. Further, these systematic increments can form the basis function of an empirically driven online tendency adjustment which acts to constrain the fast-physics error, and improve mean/variability climate biases (e.g., Palmer and Weisheimer, 2011). DA increments have long been leveraged to reveal physical processes in a model which are behaving non-physically (e.g., Klinker and Sardeshmukh, 1992; Mapes and Bacmeister, 2012; Rodwell and Palmer, 2007; Simpson et al., 2018). The increments, when sufficiently averaged, reveal systematic fast-physics tendencies which indicate erroneous model behavior. Jung (2011) argued, through comparing analysis increments and nudging tendencies in a weather forecasting application, that some of the deficiencies of the nudging technique are ameliorated via the sophistication of a state-of-the-art DA system as there are fewer subjective choices of parameters to be made and spurious model tendencies are reduced. However, the algorithmic sophistication comes with some computational cost, and thus the ability to nimbly rerun, adjust, and evaluate parameter choices in the DA system is rendered
difficult.
### Online Bias Correction
Whether based on nudging or DA, inserting tendency correction during runtime has an advantage over post-processing methods, which correct archived simulations based on reforecast data (e.g., Chapman et al., 2022; Glahn and Lowry, 1972). Because they correct the atmospheric state only a posteriori, they cannot trigger atmospheric behavior that would have occurred given a correct base state. For example, if sea surface temperatures (SST) are systematically too low near the maritime continent, they cannot give rise to local convection leading to downstream teleconnections which then influence weather in a given hemisphere. Offline bias correction is incapable of remedying this problem, whereas online bias correction can adjust the actual model attractor and allow the model to access observed modes of variability it would have otherwise bypassed. Averaging and re-inserting the increments from nudging or DA during run-time have been previously implemented and can dramatically improve the background model state for weather and climate applications (e.g., Batte and Deque, 2016; Chang et al., 2019; Crawford et al., 2020; Lu et al., 2020). These studies have shown that online corrections result in significant improvements in the skill of weather forecasts which out-compete results obtained by posteriori corrections.
A separate question is if tendency corrections should be inserted deterministically or if random model error needs to be accounted for. Berner et al. (2017) make the argument that in nonlinear systems, even Gaussian stochastic noise will have an impact on systematic model error. The absence of a fluctuating subgrid-scale has been suggested as one reason for persistent biases across different centers and model versions (Berner et al., 2012; Palmer, 2001) and the omission of this effect might result in compensating model errors. Recently, Crawford et al. (2020) showed that for some surface variables including the random component of tendency corrections from DA was more beneficial than including the deterministic one.
The rise of machine learning in atmospheric science has led to recently renewed interest in tendency adjustments learned from DA systems, as it could lead to state-dependent error parameterizations (DelSole et al., 2008). Online state-dependent bias correction has been previously proposed (DelSole and Hou, 1999; Leith, 1978; Saha, 1992), and implemented in prototype (Brajard et al., 2021; Danforth et al., 2007) and in full GCMs (Chen et al., 2022; Yu et al., 2014, 2014; Watt-Meyer et al., 2021; Bonavita and Laloyaux, 2020) which showed success in improving the modeled climate state.
To our knowledge, this is the first study to compare the two DA techniques for use in climate simulations and test the difference in the application of these tendencies in a long-running, state-of-the-art climate model. We set out to answer three main questions with this study:
1. To what degree do model-error estimates from nudging tendencies agree with those from EAKF analysis increments?
2. To what extent does re-inserting DA increments and nudging increments during model runtime reduce climatological model bias of the free-running model?
3. Will representing subgrid-scale uncertainty in online increment corrections via stochasticity help to improve low-frequency modes of variability without degrading mean state climatological bias?
In the following _Section 2_, we detail the methods and model configurations leveraged for training and implementing the tendency adjustment. Results are presented in _Section 3_. _Section 4_ provides a discussion. _Section 5_ is a summary and conclusion.
## 2 Methods
### Analysis Increments
Data-assimilation increments are leveraged from the Finite Volume Community Atmosphere Model version 6 (CAM6-FV) as a CAM-DART reanalysis was recently produced (Raeder et al., 2021) for the period, 2011-2019, referred to henceforth as the training period. Raeder et al. (2021) leveraged an Ensemble Adjustment Kalman Filter (EAKF, Anderson, 2001), which is a state-of-the-art DA system that has been applied to CAM6-FV via the Data Assimilation Research Testbed (DART) (Karspeck et al., 2018; Raeder et al., 2012). The EAKF combines an ensemble of short-term forecasts with observations to generate an improved state estimate of the atmosphere at a specific time. By comparing the model simulations to observed data, the EAKF quantifies the error and adjusts the ensemble of model simulations accordingly. This adjustment process reduces uncertainty in the model predictions and brings the ensemble closer to the observed reality.
The reanalysis is the result of an 80-member ensemble of the global atmosphere using CAM6-FV from CESM version 2.1. The archived data represent the actual states of the atmosphere in the training period at horizontal resolutions of \(\sim\)0.9\({}^{\circ}\) latitude and 1.25\({}^{\circ}\) longitude at 6-hourly temporal frequency. CAM6-FV has 32 hybrid \(\sigma\)/pressure levels which are terrain following near the surface and constant pressure near the model top. When producing this reanalysis dataset, the ensemble mean prior and posterior model states were archived. The difference between these states provides the model-tendency adjustments in the 6-hour assimilation window. We refer to the difference of the prior and posterior as the DART model increment and divide by the temporal length of the assimilation window (6 hours) to determine the model-correction tendency (per model time-step). We run so-called CESM "F" compset with prescribed ocean and sea-ice coverage data provided by the AVHRR dataset at a spatial resolution of 0.25\({}^{\circ}\). The temporal resolution is nominally daily but represents an ocean state which is a weighted average of data from several consecutive days (Casey et al., 2010). CAM-FV was run with a standard half-hour time step. Additional external forcing of CAM6-FV includes specified aerosols, greenhouse gases, and volcanic forcing from CESM datasets. These are historical datasets through the end of 2014 and use CMIP6 scenarios from 2015 onwards. Observational data is never assimilated in the top two layers of the model (\(\sim\)5 hPa - \(\sim\)2 hPa) as this is known to cause model stability issues. The exact model configuration used in this study for every climate run is permanently archived on GitHub ([https://github.com/kdraeder/cesm](https://github.com/kdraeder/cesm)), and the data are stored in the National Center for Atmospheric Research's (NCAR) Research Data Archive.
### Nudging & Model Tag
As a point of comparison to the DART increments, we used a dynamical core-independent nudging scheme that applies a relaxation to observations as a physics tendency controlled via the "nudging" namelist (see Sect. 9.6 of the CAM6 user's guide). The nudging tendency is applied as a linear relaxation of the form:
\[\frac{\mathrm{d}x}{\mathrm{d}t}=-W\,\frac{x-x_{\mathrm{ref}}}{\tau}\]
Where \(x_{\mathrm{ref}}\) represents the reference meteorology value at the subsequent reference meteorology update step, and \(\tau\) is the relaxation timescale. \(W\) signifies a scalar window function that restricts the spatial extent within which the nudging tendency is implemented. In this case, \(W\) is set to 1 in the full atmosphere and set to 0 in the top 5 layers of the model (approximately 30 hPa - 2 hPa) as nudging near the model top is known to cause stability issues. Additionally, the scheme calculates \(x_{\mathrm{ref}}\) as a linear interpolation between the two nearest reference meteorology values. In computing the update tendencies, the number of dynamical steps taken each day was fixed at 48 (30-minute timesteps), and the tendencies were applied at every timestep using the linear interpolation as described above. The model is nudged towards the ERA-interim reanalysis dataset (Dee
et al., 2011), which has a temporal frequency of 6 hours. The reanalysis data was interpolated to the model grid using the bilinear interpolation in the xESMF package. The reanalysis data is interpolated linearly in log pressure coordinates and adjustments are made to account for topographical differences between CAM and the reanalysis product (see Sect. 9.6 of the CAM6 user's guide).
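As a rough illustration of how this relaxation enters the time stepping, the sketch below evaluates the nudging tendency for a single prognostic field; the array shapes, the stand-in for the model's own tendency \(f(x)\), and the window choice are assumptions made for the example and do not represent the actual CAM6 implementation.

```python
import numpy as np

def nudging_tendency(x, x_ref, tau=6.0 * 3600.0, window=None):
    """Linear relaxation tendency dx/dt = -W * (x - x_ref) / tau.

    x      : model state (e.g. zonal wind) on the model grid
    x_ref  : reference meteorology interpolated to the current model time
    tau    : relaxation timescale in seconds (6 h, matching the 6-hourly analyses)
    window : scalar window W (1 in the free atmosphere, 0 near the model top)
    """
    if window is None:
        window = np.ones_like(x)
    return -window * (x - x_ref) / tau

# One 30-minute dynamics step with the relaxation added to the model tendency f(x)
dt = 1800.0                                  # 48 steps per day
u = np.zeros((32, 192, 288))                 # hypothetical (lev, lat, lon) state
u_ref = np.ones((32, 192, 288))              # reanalysis field interpolated in time/space
w = np.ones_like(u)
w[:5] = 0.0                                  # no nudging in the top model layers
f_of_u = np.zeros_like(u)                    # stand-in for the model's own tendency
u_next = u + dt * (f_of_u + nudging_tendency(u, u_ref, window=w))
```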
The nudging can be applied to the model prognostic variables (Zonal [U] and Meridional [V] winds, Temperature [T], specific humidity [Q], and surface pressure [PS]). Through testing multiple nudging configurations, we found that the most effective treatment of model bias is accomplished by adjusting model \(U\) and \(V\) alone when generating the nudging increments (validation was performed over the testing period, not shown). Additionally, multiple relaxation timescales were tested (from 2 hours to 10 hours at 1-hour intervals), and 6 hours was found to be optimal by quantitative examination of the climatological biases (not shown) when generating the nudging increments for the subset of model years 2011-2019. This is convenient as it is analogous to DART's 6 hour assimilation cycle.
To ensure a fair comparison with DART, the same model tag, lower boundary constraints, and external forcing datasets/configurations were used to generate the nudging increments. The CAM-FV model was again run from January of 2011 through August of 2019 while archiving the instantaneous nudging increments and model state with a temporal frequency of 6 hours. When re-inserting the archived tendencies, the model was run from 1982-2010. This is analogous to splitting data into a testing (1982-2010) and training (2011-2019) dataset, so that the increments learned from the model runs in the training period cannot artificially inflate the model skill or bias the background variability towards observations, once reinserted into the model.
### Reinserting Adjustment Tendencies
Fig. 1 shows an example of the tendency adjustment (online bias correction) at a single model grid point (15\({}^{\circ}\)N, 140\({}^{\circ}\)W) over the first 7 days of a model run. The total tendency adjustment added to U and V at every time step is:
Figure 1: Example for the tendency adjustment at a latitude of 15°N and longitude of 140°W for model days Jan 1-14, 1982 (30 minute timestep) in the Nu.S1.M.5 experiment (see Tab. 1). The full model tendency (purple), mean adjustment (dashed gray), and stochastic (dashed red) adjustment are shown. The black ticked line represents the period of system memory for the stochastic increment (see text).
\[\frac{dx_{t}}{\text{dt}}=f\left(x_{t}\right)+\delta\bar{x}_{t}^{d}+\delta x_{t}^{ \prime d}, \tag{1}\]
where \(f(x_{t})\) is the tendency term of the prognostic model equation and \(\delta\bar{x}_{t}^{d}+\delta x_{t}^{\prime d}\), is the online bias correction divided into climatological (Fig. 1; grey dash) and stochastic (Fig. 1; red dash) terms, respectively.
Climatological increment tendencies were archived from both the DART and nudging systems. These are any nonzero long-term averages of the nudging or DART increments, which indicates that these "tendency biases" force the model to drift away from the observational attractor to its own biased climatology. The term bias, in this text refers to the time mean differences between the model forecasts and observations (reanalysis product). The climatological increment (from either DART or nudging) is calculated by taking a centered, 31-day climatology of the system increments while respecting the model time of day (at 6-hour increments). The functional form of the mean increment adjustment is thus, \(\delta\bar{x}_{t}^{d}=\frac{\alpha}{{N_{s}}^{d}}~{}\sum_{k=1}^{{N_{s}}^{d}} \delta{x_{k}}^{d}\), where the mean increment (\(\delta\bar{x}_{t}^{d}\)) is conditioned on the model day \(t\) and time of day (\(d\), at 6-hour interval \(\in[0,6,12,18]\) hrs). \({N_{s}}^{d}\) is the total number of increments in a 31-day window spanning the 10-year training period at each model time of day (310 new samples to select from each time-step). This accounts for system increment evolution in both the seasonal and diurnal cycle (which were found to be significant).
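A minimal sketch of how such a climatological (MITA) increment could be assembled from an archive of 6-hourly increments is given below; the file name, variable name, and use of xarray are assumptions for illustration and do not reflect the actual CAM/DART archive layout.

```python
import numpy as np
import xarray as xr

# Hypothetical archive of 6-hourly u-wind increments over the training period
# 2011-2019, with dims (time, lev, lat, lon); names are assumptions.
inc = xr.open_dataset("increments_2011-2019.nc")["du_increment"]

clim = []
for hour in (0, 6, 12, 18):                              # respect the model time of day
    idx = np.where(inc.time.dt.hour.values == hour)[0]
    sub = inc.isel(time=idx)
    daily = sub.groupby("time.dayofyear").mean("time")   # average over the training years
    # Centered 31-day window: roughly 310 increments contribute to each calendar
    # day and time of day (year-boundary wrap-around is ignored in this sketch).
    smoothed = daily.rolling(dayofyear=31, center=True, min_periods=1).mean()
    clim.append(smoothed.expand_dims({"hour": [hour]}))

# MITA term: climatological 6-h increment converted to a tendency per second
mita = xr.concat(clim, dim="hour") / (6.0 * 3600.0)
```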
Following Batte and Deque (2016) and Crawford et al. (2020), a library of anomaly increments (\(\delta x_{t}^{\prime d}\)) was archived and provided as additional tendency terms to the model through stochastic selection (Fig. 1, red dash). The anomaly tendencies were randomly selected from the same centered 31-day period, while again respecting the model time of day. The total tendency inserted follows the form of equation 2. The first term is the model mean increment, and the second term is a stochastically selected increment anomaly. The anomaly term is read forward in time for \(p\) timesteps, where \(p\) is incremented from 0 to 120, and then \({\delta x_{r_{i+p}}}^{d}\) is randomly sampled again. The anomaly increment library integrates forward 120 timesteps (30 hours) to retain some temporal autocorrelation of the stochastic increments. \(\alpha\) and \(b\) are scaling terms that control the magnitude of the mean and stochastic portions of the tendency, respectively. Here, we set \(\alpha\) = 1 and \(b\) = 0.5 (after multiple rounds of validation [by minimizing the climatological bias of surface temperature and precipitation] on the training period [2010-2019] for optimal results; not shown).
\[\delta x_{t}=\delta\bar{x}_{t}^{d}+\delta x_{t}^{\prime d}=\frac{\alpha}{{N_{s }}^{d}}~{}\sum_{k=1}^{{N_{s}}^{d}}\delta{x_{k}}^{d}+b\left(\left[\delta x_{r _{i}+p}^{d}\right]-\left[\frac{\alpha}{{N_{s}}^{d}}~{}\sum_{k=1}^{{N_{s}}^{d}} \delta{x_{k}}^{d}\right]\right) \tag{2}\]
We refer to the first term of equation 2 as a mean increment tendency adjustment (MITA, \(\delta\bar{x}_{t}^{d}\)) and the second half as a stochastic increment tendency adjustment (SITA, \(\delta x_{t}^{\prime d}\)), and name our experiments following the convention: Adjustment System_MITA*_SITA*\(b\). For example, a tendency adjustment derived from the DART system which applies a mean adjustment scaled to 1 and a stochastic adjustment scaled with 0.5 would be named: DArt_M1_S.5 (nudging would be Nu_M1_S.5). The experiments analyzed in this study are shown in Table 1.
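The following sketch illustrates how equation 2 could be evaluated at model runtime, combining the MITA term with a stochastically selected SITA anomaly that is read forward in time before a new random segment is drawn; the array shapes, the mapping of 30-minute model steps onto 6-hourly archive entries, and the helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def total_adjustment(mean_inc, anom_series, step, state,
                     alpha=1.0, b=0.5, memory=120, steps_per_entry=12):
    """Schematic of Eq. (2): adjustment = alpha * mean increment + b * stochastic anomaly.

    mean_inc        : climatological (MITA) increment for this calendar day / time of day
    anom_series     : (n_entries, ...) anomaly-increment library from the same 31-day window
    step            : current model time step (30-minute steps)
    state           : dict carrying the random start index r_i between calls
    memory          : model steps before a new anomaly segment is drawn (p runs 0..memory)
    steps_per_entry : model steps per archived 6-hourly anomaly entry (an assumption)
    """
    p = step % memory
    if p == 0:
        # draw a new random starting point in the anomaly library
        max_start = anom_series.shape[0] - memory // steps_per_entry - 1
        state["r"] = int(rng.integers(max_start))
    # read the selected segment forward in time to retain temporal autocorrelation
    anom = anom_series[state["r"] + p // steps_per_entry]
    return alpha * mean_inc + b * anom

# Toy usage on a coarse grid: add the adjustment to the wind tendency every time step
state = {}
mean_inc = np.zeros((8, 16, 32))
anom_library = np.zeros((310, 8, 16, 32))    # hypothetical anomaly library
for step in range(240):
    delta_u = total_adjustment(mean_inc, anom_library, step, state)
```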
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Expt. No.** & **Expt. Name** & **Description** & **Model** & **Lower Boundary** \\ \hline
1 & CNTRL & 29 yr Control run spanning 1982-2010 & AGCM & Prescribed SST \& Sea-Ice \\
2 & DArt.M1.S0 & 29 yr MITA run spanning 1982-2010 & AGCM with the addition of MITA (\(\alpha=1\)) with tendencies derived from DART & Prescribed SST \& Sea-Ice \\
3 & Nu.M1.S0 & 29 yr MITA run spanning 1982-2010 & AGCM with the addition of MITA (\(\alpha=1\)) with tendencies derived from nudging & Prescribed SST \& Sea-Ice \\
4 & DArt.M1.S.5 & 29 yr MITA/SITA run spanning 1982-2010 & AGCM with the addition of MITA (\(\alpha=1\)) and SITA (\(b=0.5\)) with tendencies derived from DART & Prescribed SST \& Sea-Ice \\
5 & Nu.M1.S.5 & 29 yr MITA/SITA run spanning 1982-2010 & AGCM with the addition of MITA (\(\alpha=1\)) and SITA (\(b=0.5\)) with tendencies derived from nudging & Prescribed SST \& Sea-Ice \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Overview of the model experiments analyzed in this study.

Fig. 2 provides a comprehensive visualization of this study's workflow, which commences with two simultaneous processes for generating adjustment increments during the training years of 2011-2019: (1) nudging to ERAi (shown by the green and yellow boxes) and (2) the DART DA system (indicated by the red box). Following their generation, these increments are averaged to form a climatology of corrective increment forcing. This climatology is subsequently employed to generate a library of anomaly increments, which are used for stochastic selection. The resulting terms are multiplied by specific scalars (\(\alpha\), \(b\)) before being integrated into the online model's runtime during the testing years of 1982-2010 (depicted by the blue box). The workflow concludes with an evaluation of model bias within the online runs, thereby shedding light on the effectiveness of the implemented adjustments.
### Streamfunction Tendency
To clarify the relative roles of various dynamical processes in influencing the evolution of the tropospheric low-frequency circulation, a decomposed streamfunction tendency equation (Cai and Van Den Dool, 1994) is diagnosed as in (Michel and Riviere, 2011; Riviere and Drouard, 2015; Tan et al., 2017):
\[\frac{\partial\psi^{L}}{\partial t}=\ \sum_{i=1}^{5}\xi_{i}\ +\ R \tag{3}\]
where:
\[\xi_{0}=\nabla^{-2}\left(\frac{\partial\zeta^{L}}{\partial t}\right)=\frac{\partial\psi^{L}}{\partial t}\]
\[\xi_{1}=-\nabla^{-2}\{\overline{\mathbf{V}}\cdot\nabla\zeta^{L}\ +\ \ \zeta^{L}\nabla\cdot\overline{\mathbf{V}}\}^{L}\]
\[\xi_{2}=-\nabla^{-2}\{\mathbf{V}^{L}\cdot\nabla(\overline{f+\zeta})+\overline {(f+\zeta)}\nabla\cdot\mathbf{V}^{L}\}^{L}\]
\[\xi_{3}=\nabla^{-2}{(-\mathbf{V}_{r}^{H}\cdot\nabla\zeta^{H})}^{L}+\nabla^{-2} \{-\nabla\cdot(-\mathbf{V}_{d}^{H}\zeta^{H})\}^{L}\]
\[\xi_{4}=\nabla^{-2}{(-\mathbf{V}_{r}^{L}\cdot\nabla\zeta^{L})}^{L}+\nabla^{-2} \{-\nabla\cdot(-\mathbf{V}_{d}^{L}\zeta^{L})\}^{L}\]
\[\xi_{5}=\nabla^{-2}{(-\mathbf{V}_{r}^{L}\cdot\nabla\zeta^{H})}^{L}+\nabla^{-2} \{-\nabla\cdot(-\mathbf{V}_{d}^{L}\zeta^{H})\}^{L}+\nabla^{-2}{(-\mathbf{V}_ {r}^{H}\cdot\nabla\zeta^{L})}^{L}+\nabla^{-2}\{-\nabla\cdot(-\mathbf{V}_{d}^{H }\zeta^{L})\}^{L}\]
Figure 2: MITA/SITA workflow diagram showing the generation (red, yellow, and green box) and online implementation (blue boxes) of nudging (1) and DART (2) DA increments. The green box (1.1) shows the generation of the ERA-interim reanalysis data set (Dee et al., 2011) [model years 2011-2019]. The yellow box (1.2) shows the creation of the nudging increments via 1 run of CAM6-FV [model years 2011-2019]. The red box (2.1) shows the generation of the CAM6-FV+DART reanalysis [80 ensemble members, model years 2011-2019]. The blue box (1.2/2.2) shows the online implementation in CAM6 [model years 1982-2010].
Here, \(\psi^{L}\) is the low-frequency streamfunction, \(\zeta\) is the relative vorticity, \(\mathbf{V}\) is the horizontal wind vector, and \(f\) is the Coriolis parameter. \(\nabla^{-2}\) represents an inverse Laplacian operator. The term \(R\) indicates the residual term induced by processes such as dissipation, external forcing, and two neglected terms [vertical advection and twisting terms (see Feldstein, 1998)]. The subscripts \(r\) and \(d\) represent the rotational and divergent components of the wind. The superscripts \(H\) and \(L\) indicate the high- and low-frequency components that are divided at a period of 10 days (we then remove the climatology to form the low-frequency anomalous wind). All frequency filtering is done using a fast Fourier transform, and the climatological wind is the centered 90-day average of the entire model simulation (29-year run). The low-frequency tendency of the streamfunction (\(\xi_{0}\)) is calculated with centered differences, except for the first (last) time step, where a forward (backward) finite-difference scheme is used; this term is calculated as a point of comparison for the full decomposition. The calculations are performed on daily files, and thus a timestep of 24 hrs is used, and the data are interpolated to a 1.5\({}^{\circ}\) regular grid before the tendency decomposition is performed.
As discussed in Cai and Van Den Dool (1994), there is some arbitrariness in the tendency decomposition, which comes fully or partly from the separation of the high- and low-frequency transients and the separation of rotational wind from irrotational wind. A full step-by-step method of this calculation is provided in the supplemental material, and a Jupyter notebook to perform the calculation is provided in the paper's public GitHub repository, which performs this decomposition on observations from daily CAM6-FV model output. The first two terms in equation 3 (\(\xi_{1}\) & \(\xi_{2}\)), are designated as linear terms, and the subsequent three terms (\(\xi_{3}-\xi_{5}\)) are designated as nonlinear terms. All gradients and the inverse Laplacian calculations are done with spherical harmonics in the windspharm package (Dawson, 2016).
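As a schematic of the temporal filtering that underlies the decomposition, the sketch below separates a daily field into its low- and high-frequency parts about a 10-day period with an FFT filter and forms a centered 90-day running climatology; the rotational/divergent wind separation and the inverse Laplacian (handled with windspharm in the actual calculation) are omitted here, and the array shapes are placeholders.

```python
import numpy as np

def split_frequencies(x, cutoff_days=10.0, dt_days=1.0):
    """Split a daily field (time on axis 0) into low- and high-frequency parts
    about a 10-day period using an FFT filter."""
    n = x.shape[0]
    freqs = np.fft.rfftfreq(n, d=dt_days)             # cycles per day
    spec = np.fft.rfft(x, axis=0)
    spec[freqs > 1.0 / cutoff_days] = 0.0              # keep periods longer than the cutoff
    x_low = np.fft.irfft(spec, n=n, axis=0)
    return x_low, x - x_low                            # (low-, high-frequency) components

def running_climatology(x, window=90):
    """Centered ~90-day running mean used as the climatological background state."""
    kernel = np.ones(window) / window
    pad = (window // 2, window - 1 - window // 2)      # total padding = window - 1
    xp = np.pad(x, [pad] + [(0, 0)] * (x.ndim - 1), mode="edge")
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="valid"), 0, xp)

# Example: anomalous low-frequency wind = low-pass wind minus its running climatology
u = np.zeros((365, 121, 240))                          # hypothetical daily (time, lat, lon) field
u_low, u_high = split_frequencies(u)
u_low_anom = u_low - running_climatology(u)
```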
The physical processes that are responsible for the growth and decay of any persistent low-frequency pattern (e.g., the Pacific North American (PNA) pattern or the North Atlantic Oscillation (NAO)) can be clarified explicitly by examining each term on the RHS of eq. 3. This can be summarized by projecting each term onto the composite anomalous streamfunction pattern on a specified day (M). The projection for each \(\xi_{i}\) can be written as (Feldstein, 2003; Xu et al., 2020; Shen et al., 2023):
\[P_{i}=\frac{\sum_{\lambda,\theta}\xi_{i}(\lambda,\theta)\,\psi_{M}(\lambda,\theta)\cos\theta}{\sum_{\lambda,\theta}\psi_{M}^{2}(\lambda,\theta)\cos\theta}\tag{4}\]
Here, \(\psi_{M}\) is the streamfunction pattern on day M, and \(\lambda\) and \(\theta\) are the longitude and latitude, respectively. The projection is then performed over a region, which is specified in each figure, and shows the time rate of change of \(\psi^{L}\) towards day M.
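A minimal sketch of the projection in equation 4, assuming \(\xi_{i}\) and \(\psi_{M}\) are given on a regular latitude-longitude grid and that the regional restriction is applied by subsetting the arrays beforehand:

```python
import numpy as np

def project_onto_pattern(xi, psi_m, lat):
    """Cosine-latitude-weighted projection of a tendency term xi(lat, lon) onto the
    composite low-frequency streamfunction pattern psi_m(lat, lon) on day M."""
    w = np.cos(np.deg2rad(lat))[:, None]               # weights broadcast over longitude
    return np.sum(xi * psi_m * w) / np.sum(psi_m ** 2 * w)
```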
### Significance
Significance is assessed via bootstrapping. Bootstrapping is a statistical technique used to assess significance by estimating the sampling distribution of a statistic without making assumptions about the data distribution. In this approach, a sample is drawn from the original data with replacement to create a bootstrap sample. The statistic of interest (e.g., climatological/seasonal mean bias) is calculated on this sample, and the process is repeated N times (usually \(\sim\) 500-10000) to create a distribution of the statistic. This distribution is then used to estimate pseudo confidence intervals.
We used a statistically strict criterion to assess significant differences. When comparing two simulation or observation statistics, as in Deser et al. (2017), two distribution samples are created, and their overlap at the 5th/95th percentile is examined to determine statistical significance. If there is an overlap, we conclude that there is no statistical difference between the two samples.
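A small sketch of this bootstrap procedure and of the strict overlap criterion is given below, assuming the input is a one-dimensional array of, e.g., yearly seasonal-mean biases; the function names and default sample sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def bootstrap_percentiles(sample, stat=np.mean, n_boot=1000, q=(5, 95)):
    """Bootstrap distribution of a statistic from a 1-D sample; returns the
    requested percentiles of the resampled statistic."""
    boot = np.array([stat(rng.choice(sample, size=sample.size, replace=True))
                     for _ in range(n_boot)])
    return np.percentile(boot, q)

def significantly_different(sample_a, sample_b, **kwargs):
    """Strict criterion used in the text: two quantities differ significantly only
    if their bootstrap 5th-95th percentile intervals do not overlap."""
    lo_a, hi_a = bootstrap_percentiles(sample_a, **kwargs)
    lo_b, hi_b = bootstrap_percentiles(sample_b, **kwargs)
    return hi_a < lo_b or hi_b < lo_a
```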
Various methods of model validation are performed in the results section, and we refer the reader to the appendix for a detailed explanation of the calculated empirical orthogonal functions (EOF), blocking index calculation, and global bias calculation used in this study.
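For orientation, a generic cosine-latitude-weighted EOF computation via SVD is sketched below; the exact EOF procedure used in this study is described in the appendix, so the weighting and normalization choices here are assumptions.

```python
import numpy as np

def leading_eofs(anom, lat, n_modes=2):
    """Leading EOFs of an anomaly field (time, lat, lon) with sqrt(cos(lat)) weighting,
    computed via an SVD of the time-by-space matrix."""
    w = np.sqrt(np.cos(np.deg2rad(lat)))[None, :, None]
    x = (anom * w).reshape(anom.shape[0], -1)           # time x space matrix
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)                 # fraction of variance per mode
    eofs = vt[:n_modes].reshape((n_modes,) + anom.shape[1:]) / w[0]
    pcs = u[:, :n_modes] * s[:n_modes]                  # principal component time series
    return eofs, pcs, explained[:n_modes]
```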
## 3 Results
Figure 3 shows a vertical cross section of the zonal-mean U wind increment derived from the DART (column 1) and nudging (column 2) data in DJF (a, d) and JJA (b, e). Systematic increments occur at every vertical level of the model. In the lower-level tropics, a clear seasonal cycle is present. Consistent with Simpson et al. (2018), surface increments act to reduce the easterly flow at low latitudes where the climatological (Fig. 3; contours) low-level easterly flow is the strongest across the trade-wind region. The increment magnitude scales with lower-level wind strength, leading to a larger increment present in the winter-time hemisphere in both the DA and nudging data.
The sign of the increments differs over the low-level Southern Ocean \(\sim\) [40\({}^{\circ}\)S, 60\({}^{\circ}\)S, 975 hPa], where the nudging increments act to dampen the anomalously strong westerly flow. In this region, the nudging tendencies also exhibit a strong seasonal cycle, which again scales with the strength of the climatological westerly
Figure 3: The zonal-mean \(u\) DA increments (\(m\,s^{-1}\,d^{-1}\)) in DJF (a, c) and JJA (c, d) for the DART (Column I) and nudging system (Column II). Contours show the \(u\) wind climatology [2 \(m\,s^{-1}\) intervals, negative is dashed]. All fields are averaged over the period 1982-2010.
flow. This signal is not present in the DA system, but this is potentially a consequence of the sparsity of observations assimilated in this region (Raeder et al., 2021). At the upper levels (\(\sim 200\) hPa), both datasets show a systematic strengthening of the Northern Hemisphere jet. In the Southern Hemisphere mid-troposphere, oppositely signed increments are present, most clearly seen in DJF. Supplemental Fig. S1 shows the same figure but for the meridional winds. In both seasons, the increments act to strengthen the upper-tropospheric southerlies and the lower-tropospheric northerlies, which generally indicates that the Hadley cell overturning in the model is too weak.
Figure 4: The zonal-mean \(u\) bias (\(m\,s^{-1}\)) in JJA (column I) and DJF (column 2) for each experiment listed in Table 1. Contours show the \(u\) wind climatology [\(2\,\,m\,s^{-1}\) intervals, negative is dashed]. All fields are averaged over the period 1982-2010.
The impact of the tendency bias correction on the zonal-mean climatological biases is explored in Figure 4 for JJA (column I) and DJF (column II). Here, the bias is shown as the difference between the seasonal model runs and the ERA-interim seasonal mean (Experiment - ERA, therefore positive (negative) values show a positive (negative) bias). The zonal-wind biases in Cntrl are characteristic of a poleward shift of the midlatitude jets in the summer hemisphere (see the dipole at \(\sim\)200 hPa in Fig. 4a). Marked positive impacts of the online bias correction scheme can be seen in the full atmosphere for the DArt.M1.S0, Nu.M1.S0, and Nu.M1.S.5 runs. At upper levels, the jet bias is improved in both seasons and hemispheres. At lower levels, the nudging MITA (Nu.M1.S0 and Nu.M1.S.5) particularly improves the U wind biases, entirely eradicating them in some locations. An area that appears to be degraded is the mid-to-upper troposphere in the nudging MITA/SITA runs (Fig. 4d & 4e, 20\({}^{\circ}\)S-20\({}^{\circ}\)N); however, this is an artifact of the zonal mean. The CNTRL, DArt.M1.S0, and DArt.M1.S.5 simulations have compensating biases in the tropics which are out of phase and thus cancel and present as a small bias in the zonal average (refer to Table S2 and Fig. 5 to see the bias in the tropics and a planar view of the bias), when in fact the CNTRL, DArt.M1.S0, and DArt.M1.S.5 biases are larger than the biases in the Nu.M1.S0 and Nu.M1.S.5 systems.
### Seasonal Global Mean Bias
We examine the percent improvement over the control run in global mean bias (Fig. 5) for surface variables (precipitation [both P-GPCP and P-ERAi], surface temperature [T2m], and sea-level pressure [PSL]; Fig. 5, column I) and variable bias at multiple vertical levels for zonal wind, meridional wind, specific humidity, and temperature (Fig. 5, column II). Two observational products were used to estimate precipitation bias - the NOAA global precipitation climatology project monthly precipitation analysis (P-GPCP) and the ERA-interim derived precipitation (P-ERAi) - since they have different precipitation climatologies, especially over
Figure 5: Global percent improvement over the Cntrl run in all seasons, for variables Precipitation (a; ERAi-observations, b; NOAA GPCP observations), T2m (c), PSL (d), U200 hPa (e), U850 hPa (f), V200 hPa (g), Q975 hPa (h), T850 hPa (i). Error bars show the 5th and 95th percentiles from the synthetic bootstrap distribution. All fields are averaged over the period 1982-2010. Each model is shown in a different color; positive results are in green, negative results are in red (see figure key).
the ocean (Nogueira, 2020).
In general, the online bias correction demonstrates a positive impact on global biases across all examined variables, resulting in significant improvements in most seasons. Instances of model degradation are rare. In the following text, we highlight specific findings to underscore key results. For a comprehensive assessment, we provide a detailed analysis of model bias (as defined in the appendix) for multiple variables, including prognostic, diagnostic, and transient quantities, in supplemental Tables S1-S3. These tables also offer a breakdown of bias information based on regional divisions, specifically the tropics [25\({}^{\circ}\)S-25\({}^{\circ}\)N] and extratropics [90\({}^{\circ}\)S-25\({}^{\circ}\)S; 25\({}^{\circ}\)N-90\({}^{\circ}\)N], as well as distinctions between land and ocean regions.
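A simple sketch of an area-weighted global bias metric and of the percent improvement over the control run (as plotted in Fig. 5) is given below; the precise bias definition used in this study is given in the appendix, so this version is illustrative only.

```python
import numpy as np

def global_mean_abs_bias(field, obs, lat):
    """Area-weighted (cos-latitude) global mean absolute bias of a (lat, lon) field."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(field)
    return np.sum(np.abs(field - obs) * w) / np.sum(w)

def percent_improvement(expt, cntrl, obs, lat):
    """Percent improvement of an experiment's global bias relative to the control run."""
    b_expt = global_mean_abs_bias(expt, obs, lat)
    b_cntrl = global_mean_abs_bias(cntrl, obs, lat)
    return 100.0 * (b_cntrl - b_expt) / b_cntrl
```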
Through a comprehensive examination of the Nu.M1.S0 and Nu.M1.S.5 (lighter greens and reds, Fig. 5) experiments, it is evident that the online bias correction yields significant improvements across all seasons for both the deterministic and stochastic approaches. Particularly noteworthy are the substantial enhancements observed during the cool season in the Northern Hemisphere. The prognostic variables show a notable reduction in bias, ranging from 10% to 40%. Moreover, diagnostic variables show improvements of approximately 10% to 20%. It is important to note that adjustments were made solely to zonal and meridional winds, while the enhancements in non-dynamic variables result from dynamic interactions with the corrected wind field.
Figure 6 illustrates the spatial maps of P-ERAi, PSL, and zonal surface wind stress (TAUx, over ocean) biases in DJF; JJA is shown in supplemental Figure S3. The best case shows substantial improvement in
Figure 6: DJF bias in Precipitation (mm/day, left), PSL (Pa, middle), and surface wind stress (N/m\({}^{2}\), right). Bias is oriented as field – observations (positive (negative) numbers are biased high (low)), for the five model configurations (a-d). All fields are averaged over the period 1982-2010.
precipitation biases, with a 21% improvement over land and a 22.8% improvement over the total extratropics when compared with the CNTRL simulation. Notably, precipitation in the midlatitude Pacific storm track exhibits considerable enhancement, likely indicating improved atmospheric river transport (Chapman et al., 2019; Zhu and Newell, 1998). The adjustments made to sea level pressure (PSL) yield significant improvements, particularly with a 45.7% improvement globally in DJF, a 25.7% improvement in JJA, and an annual improvement of 50% (see Table S1). The biases in the storm-track regions are significantly reduced, and the bias off the coast of Antarctica shows considerable improvement. Zonal surface wind stress (TAUx) also improves, especially in the Southern Ocean. The nudging increments lead to a 52.9% improvement in DJF and a 28.9% improvement in JJA in the best experiments.
Figure 7 illustrates the significant improvement in winds and dynamic variables resulting from the inclusion of DA and nudging increments. Specifically, W850 (vertical velocity of wind at 850 hPa), U200, and U850 in DJF are shown, with additional information provided in the supplemental material for JJA (Fig. S4). Notably, the nudging increments have a pronounced impact on vertical velocity in the South Pacific convergence zone (SPCZ), influencing tropical precipitation. In JJA (Fig. S4), the nudging scheme results in a reduction
Figure 7: DJF Bias in vertical velocity (Pa s\({}^{-1}\), left), Zonal 200 hPa wind (m s\({}^{-1}\), middle), and Zonal 850 hPa wind (m s\({}^{-1}\); right). Bias is oriented as field – observations (positive (negative) numbers are biased high (low)), for the five model configurations (a-d). Contours (middle, right) show the DJF climatological wind field [m s\({}^{-1}\); 10 m s\({}^{-1}\) contour interval (excluding 0), negative indicated by dashed line]. All fields are averaged over the period 1982-2010.
in bias and a more accurate precipitation climatology over the maritime continent. The representation of the summer hemisphere jet exhibits improvements in various model configurations, except for DArt.M1.S.5. Lower-level winds (U850) also show substantial enhancements, particularly over the Southern Ocean and North Atlantic. Overall, the inclusion of DA and nudging tendencies results in considerable improvements in global climatology for both DJF and JJA, with the dominant improvements occurring in the summertime jet region and the Southern Ocean.
### Impact on Transients
Next, we turn our attention to the bias in the transients in DJF (Fig. 8); JJA is shown in the supplemental material (Fig. S5). These are based on daily data (with the monthly means removed), and we examine the time-mean 250 hPa meridional wind variability (\(\overline{v^{\prime 2}}\)), a measure of storm track activity (Chang et al., 2002), and the 200 hPa transient kinetic energy (\(EKE,\overline{v^{\prime 2}+u^{\prime 2}}\)), a measure of mesoscale transient wave activity. No results for DArt.M1.S.5 are shown, because they did not show improvements over the Cntrl run (though little-to-no significant model degradation was observed). The climatological biases in the three
Figure 8: DJF Bias in 200 hPa Square of transient component of the meridional wind (\(m^{2}s^{-2}\); left), 200 hPa transient kinetic energy (\(m^{2}s^{-2}\), middle), and 850 hPa moisture flux by transients (\(gkg^{-1}ms^{-1}\); right). Bias is oriented as field – observations, for the four model configurations (a-d). All fields are averaged over the period 1982-2010.
shown quantities are apparent, and there are substantial corrections, especially in the summer hemisphere. There are negative biases in \(\overline{v^{\prime 2}}\), indicating a storm track that is too weak; the corrections largely coincide with the correction to the mean state of the zonal wind (Fig. 4), in which the shifted jet location was corrected. For \(\overline{v^{\prime 2}}\) in DJF the global improvement is 26.5%, 33.3%, and 31.8% for the DArt.M1.S0, Nu.M1.S0, and Nu.M1.S.5 simulations, respectively. JJA shows similarly significant improvements to \(\overline{v^{\prime 2}}\) (Table S3). The transient kinetic energy is shown to be too weak across the extratropical region. The MITA/SITA corrections act to improve the EKE across the extratropical regions; however, they are a detriment to the tropics. Globally, there is still a significant improvement in both DJF and JJA, despite the compensating bias arising across the tropical region.
### Low Frequency Variance Modes
In the previous section, we showed that the first-order statistics of the climatological state are improved across all examined global metrics by adding MITA/SITA tendency bias corrections. However, any assessment of the health of a climate model must also consider the accurate representation of natural climate variability and low-frequency climate modes (e.g., Phillips et al., 2014). The dynamical mechanisms associated with intraseasonal variability are complex and act across myriad timescales, leading to societally influential downstream weather (e.g., Branstator, 1992; Simmons et al., 1983; Wallace and Gutzler, 1981). The mechanisms for the growth and maintenance of these patterns can be compared in the models' space with observations to assess model health and real-world representation (e.g., Feldstein, 1998). Additionally, the low-frequency variability is highly attenuated by the climatological background state of the model, with many mechanisms having been proposed to maintain and grow this variability. These climatological/transient interplay mechanisms include: growth of the low-frequency anomaly due to instabilities associated with the zonally asymmetric midlatitude jet (e.g., Branstator, 1990, 1992; Frederiksen, 1983; Simmons et al., 1983), changes in quasi-stationary eddies associated with fluctuations in the zonal-mean flow (e.g., Branstator, 1984; Kang, 1990), forcing from tropical heating or orography (e.g., Hoskins and Karoly, 1981; Sardeshmukh and Hoskins, 1988), and the vorticity flux from high-frequency eddies (e.g., Branstator, 1992; Egger and Schilling, 1983; Lau, 1988; Ting and Lau, 1993).
Here, we explore the PNA pattern (Wallace and Gutzler, 1981) and the NAO (Barnston and Livezey, 1987), two of the Northern Hemisphere's most impactful and dominant modes. Since these patterns are most pronounced and influential in DJF, we narrow the discussion in this manuscript to the Northern Hemisphere winter, as this was the time with the highest biases present in all experiments and the wintertime surface climate variability is more strongly governed by the variability in the atmospheric circulation compared to summer variability (Walker, 1924).
The PNA is the leading mode of low-frequency variability in the Northern Hemisphere Pacific region (Franzke and Feldstein, 2005; Franzke et al., 2011). It is shown for DJF in Figure 9, where stippled and hatched regions denote a low and high variance bias, respectively. In its positive phase, the PNA is characterized by a deepened Aleutian low, an increased Canadian High, and a deepened Florida Low pattern which extends into the Atlantic (Wallace and Gutzler, 1981). The Aleutian low limb of the PNA, in particular, is responsible for downstream effects of precipitation and temperature anomalies across much of North America (e.g., Carrera et al., 2004; Coleman and Rogers, 2003; Liu et al., 2017; Notaro et al., 2006; Chapman et al., 2021). The Cntrl run shows a good representation of the pattern across the North Pacific, with an Aleutian low pattern without bias. However, the explained variance is much higher than in ERAi (Fig. 9f). When adding MITA increments (DArt.M1.S0 and Nu.M1.S0), the Aleutian low becomes too deep. Despite having a background state that is relatively unbiased compared to the control run, the leading mode of variability is deepened. Additionally, in DArt.M1.S0, Nu.M1.S0, and Cntrl, the second leading mode of variability in this region (the North Pacific Oscillation) is too weak (not shown), as the PNA dominates the North Pacific variance space. However, the addition of stochastic increments (DArt.M1.S.5 and Nu.M1.S.5) acts to remove variance in the Aleutian low. The two stochastic simulations show Aleutian low patterns that are unbiased and explained variances that are not significantly different from the variance seen in observations.
Next, we investigate the impacts on the North Atlantic Oscillation, which is the dominant pattern of zonal-mean flow variability across the Atlantic region (DeWeaver and Nigam, 2000; Hurrell et al., 2003). The
Figure 9: DJF 500 mb geopotential height leading mode of variability over the region [PNA, 20-85°N, 120°E-120°W], for every experiment configuration (a-e). Hatching (stippling) shows where the variance in the observations is higher (lower) than the model, using the 5th and 95th percentile of the bootstrapped synthetic observations as a bias threshold, where observational spread is determined by bootstrap (see methods). Also shown are the explained variance in each model experiment (vertical line) and the bootstrapped spread of explained variance in the observations (histogram) (f); grey vertical dashed lines show the 5th, 10th, 90th, and 95th percentile of bootstrapped observational variance explained.
NAO is a North/South dipole characterized by the high latitude Subpolar Low (SL) and the midlatitude Azores High (Fig. 10). As many studies (Feldstein, 2003; Franzke and Feldstein, 2005) show that the NAO is initially driven by upper-level transient eddy vorticity fluxes (which we have shown to improve with MITA/SITA, Fig. 8), we base the NAO pattern on geopotential heights at 500 hPa. Focusing first on the Azores High, we see that the Cntrl simulation underestimates the variance of this pressure pattern across the North Atlantic. This leads to a stunted pattern with two weakened nodes centered on the eastern United States and over France (as indicated by the stippling). It also has too little variance over the Atlantic sector, with an explained variance below the 5th percentile of explained variance in the bootstrapped distribution of observational variances (Fig. 10f). In experiment DArt.M1.S.5 the pattern is unchanged compared to the Cntrl, but the explained variance is significantly increased. Every other simulation results in a significant improvement to the variability over the Atlantic domain. Nu.M1.S.5, DArt.M1.S0, and DArt.M1.S.5 all have explained variances that are not significantly different from ERAi (Fig. 10f), while Nu.M1.S0 does not increase the total explained variance over the Cntrl simulation. Additionally, experiments DArt.M1.S0 and Nu.M1.S.5 tend to insert bias over the high latitude SL portion of the NAO.
In summary, our results indicate that by improving the background state and the high-frequency transients, the MITA/SITA increments improve the NAO variability relative to the Cntrl experiment.
### Streamfunction tendency
Next, we turn to the evaluation of the intraseasonal growth and decay of the NAO and PNA patterns to assess their realistic representation within the models. First, we define the time-series of each pattern as the monthly (DJF) 500 hPa streamfunction anomaly EOF, projected onto the daily averaged cosine-latitude weighted streamfunction, in the respective NAO and PNA region. The projected time-series is then
Figure 10: As in Fig.9 but for the NAO over the region [20-80°N, 90°W-40°E]
normalized by its standard deviation. Days with an amplitude of more than 1.5 are defined as PNA/NAO events. As in previous studies, the peak event day, referred to as day 0, is defined as the day on which the magnitude of the index is larger than both the preceding and following days (Franzke et al., 2011; Shen et al., 2023; Xu et al., 2020). For our definition, the peak day must fall into DJF, though the lagged days prior and after the event can occur in the neighboring months.
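To make the event definition concrete, the sketch below identifies peak event days from a daily, standard-deviation-normalized pattern index; the 1.5 threshold and the DJF peak-day requirement follow the description above, while the function and variable names are illustrative assumptions rather than the code actually used.

```python
import numpy as np

def find_peak_event_days(index, months, threshold=1.5, season=(12, 1, 2)):
    """Return indices of peak event days (day 0) for a normalized daily index.

    index  : 1-D array of the daily PNA/NAO index, already normalized by its std. dev.
    months : 1-D integer array giving the calendar month of each day.
    A peak day exceeds `threshold` in magnitude, is larger in magnitude than both
    the preceding and following days, and falls within DJF.
    """
    index = np.asarray(index, dtype=float)
    amp = np.abs(index)
    peaks = []
    for t in range(1, len(index) - 1):
        if amp[t] > threshold and amp[t] > amp[t - 1] and amp[t] > amp[t + 1]:
            if months[t] in season:
                peaks.append(t)
    return np.array(peaks, dtype=int)

# lagged composites are then built from index[p - 25 : p + 26] around each peak p
```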
The lagged composite of the principal component for the positive phase of the NAO and PNA events is shown in Fig. 11. The full cycle of the NAO and PNA takes about 3-4 weeks to complete, with an e-folding time of \(\sim\)5-9 days, which is consistent with previous studies (e.g., Cash and Lee, 2001; Dole and Black, 1990; Franzke et al., 2011; Johnson and Feldstein, 2010). The NAO growth and decay (Fig. 11a) of all experiments is within the uncertainty of ERAi.
This, however, is not the case for the PNA. The addition of the MITA tendency adjustments (DArt.M1.S0 and Nu.M1.S0) alone acts to increase the growth and decay persistence with significant separation from ERAi, Cntrl, and SITA experiments, which form a separate group. This indicates that the bias observed in Fig 9 arises from not only the magnitude of the pattern but also the pattern persistence. The addition of the stochastic increments (Nu.M1.S.5) acts to dampen the growth phase somewhat. While it is not significant at the 80th percentile, the mean shows separation in this early growth phase (lag day -20 to -10). The decay
Figure 11: (a,c) time evolution of the composited normalized PC time series for the positive NAO (a) and PNA (c) event, the shading indicates the synthetic 80\({}^{\rm th}\) and 20\({}^{\rm th}\) percentile of uncertainty via bootstrapping with resampling. Projections for the positive NAO (b) and PNA (d) pattern on the rhs of eq. 3. onto the composited 500-hPa streamfunction anomalies at day M = 0, for \(\sum_{i=1}^{5}\xi_{i}\) (blue line) and \(\frac{\partial\psi^{L}}{\partial t}\) (black line). The ordinate is non-dimensional and has been multiplied by 10\({}^{7}\).
phase of Nu.M1.S.5 is similar to those of Cntrl and ERAi.
By comparing the projection of the R.H.S. of (3) onto \(\psi_{M}\left(\lambda,\theta\right)\) (blue) to the projection of \(\frac{\partial\psi^{L}}{\partial t}\) (black) onto \(\psi_{M}\left(\lambda,\theta\right)\), we can evaluate the size of the residual, or the error, in the streamfunction decomposition (Fig. 11b, 11d). We show this to validate the accuracy of the streamfunction decomposition used in the following figures. The projections are formed in the regions [20-85°N, 120°E-120°W] and [20-80°N, 90°W-40°E] for the PNA and NAO, respectively. These errors likely arise from the interpolation from sigma-pressure hybrid coordinates to pressure coordinates and from the use of daily data for the time-differencing rather than data at every reanalysis/model time-step. Here, we selected growth stages with minimal error, namely, the PNA/NAO anomaly growth for the lags denoted by the blue shaded regions in Fig. 11b, d.
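For reference, a minimal sketch of the area-weighted regional projection used to form these curves is given below; it assumes regular latitude/longitude arrays in degrees with longitudes in the [0, 360) convention, and all names are illustrative.

```python
import numpy as np

def project_onto_pattern(field, pattern, lat, lon, lat_box, lon_box):
    """Project a 2-D forcing term onto a fixed pattern psi_M over a lat/lon box,
    with cosine-latitude area weighting (longitudes assumed in [0, 360))."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones((lat.size, lon.size))
    in_box = ((lat[:, None] >= lat_box[0]) & (lat[:, None] <= lat_box[1]) &
              (lon[None, :] >= lon_box[0]) & (lon[None, :] <= lon_box[1]))
    w = np.where(in_box, w, 0.0)
    return np.nansum(w * field * pattern) / np.nansum(w * pattern ** 2)

# e.g. the PNA box is lat_box=(20, 85), lon_box=(120, 240);
# the NAO box (90W-40E) wraps the prime meridian and would need two sub-boxes.
```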
### NAO Anomaly Growth
Fig. 12 shows the composite streamfunction tendency terms at lags -7 to -4 for the NAO low-frequency tendencies: the high-frequency transients (\(\xi_{3}\)), the low-frequency transients (\(\xi_{4}\)), the linear terms (\(\xi_{1}+\xi_{2}\)), and the total streamfunction estimate (\(\xi_{1}+\xi_{2}+\xi_{3}+\xi_{4}+\xi_{5}\)) in the four examined experiments (Fig. 12a-12d) and ERAi (Fig. 12e). The composite low-frequency streamfunction pattern for the same period is shown in the green contours. Note that we exclude DArt.M1.S.5, as its streamfunction tendency is not significantly different from the Cntrl. For ERAi, the anomalies that comprise the NAO pattern arise more than 7 days before the event
Figure 12: Composite lagged projections of the forcing term against the NAO [day M=0] at lag days -7 to -4 for each experiment (a-e, shown in label), column I: \(\xi_{3}\) (high-frequency transients), column II: \(\xi_{4}\) (low-frequency transients), column III: \(\xi_{1}\)+\(\xi_{2}\) (linear terms) and column IV: \(\xi_{1}\)+\(\xi_{2}\)+\(\xi_{3}\)+\(\xi_{4}\)+\(\xi_{5}\) (total estimated streamfunction tendency). The contour (green) shows the -7 to -4 day composite of the low-frequency streamfunction for each experiment.
peak (Fig. 11b), and the high- and low-frequency transient tendencies project most strongly onto the NAO pattern and are dominantly responsible for its growth (Fig. 12). These findings agree with many previous studies (e.g., Feldstein, 2003; Franzke and Feldstein, 2005). The low-frequency transients (\(\xi_{4}\)) are almost entirely responsible for the growth of the SL (Fig. 12e), whereas it is the combined effect of the low- and high-frequency transient terms that builds the Azores High.
In the Cntrl simulation, it is clear that the linear component (\(\xi_{1}+\xi_{2}\)) is dominantly responsible for the growth of the two nodal points of the NAO; additionally, the linear terms act to dampen the Greenland node of the NAO pattern, which is shown to be biased (Fig. 10e). The linear components of the Cntrl simulation are in stark contrast with ERAi, in which they do not project strongly onto the NAO, especially over Europe. In summary, the lack of variance observed in the Cntrl Azores High (Fig. 10e) is a result of the lack of high- and low-frequency transient forcing over the Atlantic.
The MITA/SITA simulations show a more physically realistic NAO development, likely meaning that adjusting the background state enables a more realistic propagation of the transients, which helps to develop the growth of the NAO pattern. We note that every experiment shows an improvement to the upper-level meridional transient transport (Fig. 8) and to the climatological zonal wind bias (Fig. 4), which lends credence to the improved dynamics based on the growth mechanism of the NAO. The subpolar low is biased in the nudging simulations (Fig. 10), and the tendency decomposition shows that this is manifested through an increase to the low-frequency transient forcing in the model. Our results show that the addition of the MITA/SITA increments can improve not only the background bias but also the physical representation of the low-frequency NAO streamfunction; dynamically, this occurs through a minimization of bias in the transient forcing.
### PNA Anomaly Growth
For studying the growth and maintenance of the PNA, the high-frequency (\(\xi_{3}\)) and low-frequency transients (\(\xi_{4}\)) are of less importance. Previous work (Franzke et al., 2011; Feldstein, 2002; Franzke and Feldstein, 2005) has established that the dominant growth is associated with the stationary eddy advection (eqn. 4), which can be derived from a further decomposition of the linear streamfunction tendency terms (\(\xi_{1}+\xi_{2}\)) in equation (3) (i.e., \(\xi_{1}+\xi_{2}=\xi_{6}+T\), where \(T\) represents the remaining forcing terms in the linear portion of the streamfunction decomposition).
The functional form of the stationary eddy advection \(\xi_{6}\) is:
\[\xi_{6}=\nabla^{-2}\left(-\overline{\mathbf{V}^{*}}\cdot\nabla\zeta^{L}-\mathbf{V}^{L}\cdot\nabla\overline{\zeta^{*}}\right) \tag{4}\]
where the asterisk denotes deviations from the zonal average (removing the zonal mean from the climatology). This term represents the induced tendency due to the interaction of the low-frequency transients with the zonally asymmetric time-mean flow. This interaction with the asymmetric background flow has been studied extensively and shown to be the primary driver of PNA anomalies, anchoring the Aleutian low to the jet exit region.
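A schematic of how the advective forcing inside the parentheses of Eq. (4) can be evaluated on a regular latitude/longitude grid is sketched below; the inverse Laplacian \(\nabla^{-2}\) is not implemented here and would in practice be applied with a spherical-harmonic solver, and all variable names are illustrative assumptions.

```python
import numpy as np

A_EARTH = 6.371e6  # Earth radius [m]

def advective_forcing(ubar_s, vbar_s, zeta_L, u_L, v_L, zetabar_s, lat, lon):
    """Forcing inside the parentheses of Eq. (4):
    F = -(Vbar* . grad zeta^L) - (V^L . grad zetabar*),
    on a regular lat/lon grid in degrees (latitudes ascending).
    xi_6 would then be obtained as the spherical inverse Laplacian of F."""
    phi = np.deg2rad(lat)[:, None]

    def sph_grad(f):
        # zonal and meridional gradients on the sphere
        dfdphi = np.gradient(f, np.deg2rad(lat), axis=0)
        dfdlam = np.gradient(f, np.deg2rad(lon), axis=1)
        return dfdlam / (A_EARTH * np.cos(phi)), dfdphi / A_EARTH

    zx_L, zy_L = sph_grad(zeta_L)
    zx_s, zy_s = sph_grad(zetabar_s)
    return -(ubar_s * zx_L + vbar_s * zy_L) - (u_L * zx_s + v_L * zy_s)
```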
From Fig. 13 we see that Cntrl captures this term quite well when compared to ERAi, while the MITA experiments (DArt.M1.S0 and Nu.M1.S0) over-energize the stationary eddy advection, leading to a deepening of the Aleutian low. By comparing Figs. 9 and 13 we see that this term indeed controls the bias shown in the Aleutian low region. The addition of the stochastic tendencies (Nu.M1.S.5) reduces the stationary eddy advection, helping to decrease the bias in the growth of this pattern.
A further decomposition of \(\xi_{6}\) indicates that it is a change to the low-frequency transients and not an adjustment of the climatology that most dominantly affects the tendency growth (not shown). Previous work (Franzke et al., 2011) also showed the importance of the high-frequency transients (\(\xi_{3}\)) in growing and maintaining low-frequency PNA patterns. We find some effects of \(\xi_{3}\) during PNA growth phases (not shown); however, it is much less in magnitude than the tendencies shown in \(\xi_{6}\).
### Blocking
Considering the substantial enhancements in addressing climatological biases and capturing low-frequency atmospheric variability during DJF, we evaluate the model adjustment concerning blocking frequency by
Figure 13: Composite lagged projections of the stationary eddy advection forcing (\(\xi_{6}\)), i.e., the interaction of the transients with the asymmetric climatological flow, against the PNA [day M=0] at lag days -20 to -5 for each experiment (a-e, shown in label).
comparing it to the Cntrl simulation. We focus on a single model experiment, Nu.M1.S.5, as it represents the simulation with the most significant improvements to the NAO. Kleiner et al. (2021) showed that improving a model's climatological biases can improve the representation of atmospheric blocking events, and it is well documented that many climate models underestimate the observed frequency of blocking events (e.g., d'Andrea et al., 1998). Fig. 14 shows the percent of time in DJF during which a Z500 block is present at any latitude between 35°N and 65°N in the Cntrl, Nu.M1.S.5, and ERAi from 1982 to 2010. The shaded region shows the 5th and 95th percentile of the bootstrap-estimated uncertainty distribution. Both the Cntrl and Nu.M1.S.5 capture the bimodal nature of the blocking index; however, both models drastically underestimate the frequency of blocks across the western Pacific Ocean (120°E-180°), which is a region of frequent blocking near the Bering Strait (see Kleiner et al., 2021). The Cntrl experiment shows a high blocking bias in the Atlantic sector that is centered on 30°W, which is immediately upstream of the large node in the Azores High in this model's representation of the NAO over Europe. Additionally, the Cntrl underestimates blocking occurrence over the western portion of the Atlantic Ocean, which is where the model underestimates NAO variability (Fig. 10e). Adding a stochastic parameterization improves the representation of Northern Hemispheric blocking over the Pacific, as has been shown in previous studies (e.g., Berner et al., 2012; Jung et al., 2005). Future analysis will focus on the connection between the NAO and blocking, as well as the physical mechanisms by which the Pacific blocking is improved.
## 4 Discussion: Nudging vs. Data Assimilation
In this study, we conducted a comparison of an online-bias correction via mean and stochastic increment tendency adjustments (MITA/SITA, respectively) to the zonal and meridional wind. The increments were derived from the ensemble adjustment Kalman filter (EAKF) system DART (Raeder et al., 2021) and from a linear relaxation (nudging) to the ERA-interim reanalysis. Implicitly, nudging also relies on a DA product, namely the reanalysis product ERA-interim (Fig. 2; green box), the creation of which requires substantial computational resources (Fig. 2; red box). However, a difference is that a reanalysis dataset only needs to be created once and can then be used for various nudging experiments (Fig. 2; yellow box). The computational ease of this method enables agile rerunning, adjusting, and evaluating of parameter choices for nudging experiments, thereby facilitating the optimization of the bias correction algorithm.
The EAKF is a powerful and elegant algorithm, but it comes with great computational cost, which is dominated by two components: the production of an ensemble of model integrations (for a full-dimensional
Figure 14: Frequency of DJF Northern Hemisphere blocking events for the ERA-interim reanalysis product (grey), and model experiments Cntrl (teal) and Nu.M1.S.5 (orange) for the period 1982-2010. The shading indicates the 95% confidence intervals of the blocking index from each run.
non-linear estimate of the background error), and computation of the filter products. Integrating the ensemble multiplies the cost of the single-model integration used in some simple data assimilation schemes by a factor of N ensemble members (where N = 80 here) (Anderson, 2001; Raeder et al., 2021), and involves the systematic start and restart of model runs. It is not feasible to directly compare and quantify the disparity in computational costs between generating the EAKF reanalysis product (DART increments) and the nudging increments, because the increments were generated on separate machines. Nevertheless, it is worth noting that the nudging runs utilized approximately 2,500 core-hours per simulation year. It is reasonable to conclude that the EAKF process is significantly more computationally intensive, with its model runtime alone being at least O(100) times higher than that of the nudging approach. We stress that these numbers are for the CAM-DART reanalysis and are caused in part by the specifics of CAM, which is developed as a climate model and thus does not optimize initialization and restart capabilities. While the computational cost of DA relative to nudging runs will differ from model to model and center to center, the fact that nudging runs are cheaper is generalizable.
Here, we treat the ERAi reanalysis product as ground truth, when in fact there are significant biases within this product. When verifying against other reanalysis products, our results did not change (not shown), suggesting that the differences between reanalysis products play a small role in our context. A limitation of the nudging approach is that the model improvements will be limited by the quality of the reanalysis it is nudged towards. A good example of this is the discrepancy in the model bias of precipitation in the South Pacific convergence zone (SPCZ) in DJF when ERA-interim precipitation or NOAA GPCP precipitation is used as the ground truth. Precipitation observations over the ocean are notoriously difficult to constrain, and we show that by comparing to the ERAi product, the precipitation is vastly improved in the SPCZ. However, using the GPCP, the model appears to overestimate precipitation in the SPCZ. Dynamically, the lower-level winds and the vertical velocity are almost without bias in this region after the MITA/SITA adjustments (Fig. 7), and these are primary drivers of tropical precipitation. This adjustment is due to the increments encouraging increased convergence across this region. Until there is agreement on the observations, and the link between the dynamics and the parameterized precipitation scheme is solidified, precipitation correction and uncertainty will remain tenuous in any nudging scheme.
## 5 Summary and Conclusion
This study was designed to address three primary questions. Now, we will summarize and conclude our findings by directly addressing those questions.
1. To what degree do model-error estimates from nudging tendencies agree with those from EAKF analysis increments?
To our knowledge, there is only one study comparing nudging and DA increments (Jung, 2011), and this was done in the context of weather forecasting and simply for error diagnosis. We conducted a comprehensive comparison between nudging and Ensemble Adjustment Kalman Filter (EAKF) increments, ensuring consistency in model tags, boundary conditions, and forcing, while using the same observational period. We find that, overall, nudging tendencies and DA increments pick up the same general features of systematic model bias, particularly at lower model levels (Figure 1 and Figure S1). A more detailed analysis reveals differences in the variances of the nudging and the DART tendencies (Figures S2a and S2b). In particular, the observational network is imprinted in the DART tendencies, which exhibit larger variance centered around the locations of the high-density observation networks, such as radiosonde launching sites and flight paths. We hypothesize that the inhomogeneity in the variance associated with the observations has a negative effect when the random component of the tendencies is reinserted in the stochastic experiments (explaining the results of experiment DArt.M1.S.5, Fig. 5).
2. To what extent does re-inserting DA increments and nudging increments during model runtime reduce climatological model bias of the free-running model?
Overall, we find a positive impact on the climatological bias of an online model-error representation based on re-inserting DA or nudging increments (Figs. 5-8). Our analysis revealed significant improvements in the background climatology for key surface variables, upper- and lower-level prognostic variables, and transient variables when the model-error scheme was based on either DART or nudging increments. Notably, a 20% enhancement was observed in the DJF precipitation, compared against GPCP, once the zonal and meridional winds were adjusted. Furthermore, the interaction of non-dynamic variables with the corrected wind field led to additional improvements through dynamic and often non-linear processes. Regional divisions and further variables are presented in the supplemental material, reaffirming the overall finding that this online-bias correction method significantly enhances the background climatology.
3. Will representing subgrid-scale uncertainty in online increment corrections via stochasticity help to improve low-frequency modes of variability without degrading climatological bias?
This study places particular emphasis on improving the representation of not only mean climate but also climate variability, which is essential for the practical applicability of any climate model. Upon examination of the low-frequency variability, it is clear that a mean-state adjustment to the tendencies alone (experiments Nu.M1.S0 and DArt.M1.S0) leads to significant biases in variance across the North Pacific in the PNA pattern. The addition of a stochastic tendency (experiments Nu.M1.S.5 and DArt.M1.S.5) corrected this bias and created an accurate representation of the Aleutian low when compared to observations (Fig. 9). We hypothesize that the stochastic perturbations break up a deepened Aleutian low pattern and correct the mean state climatology while maintaining accurate variability. We explain this result via the stochastics influencing the interaction of the transients with the zonally asymmetric climatological background flow in the North Pacific (Fig. 13). The correction necessitates a proper balance between the adjustment of the background flow and the sub-grid variability (provided by the stochastics); if only one is adjusted, we introduce a model bias. The Cntrl run had a relatively good Aleutian low representation; however, we hypothesize that this is yet another manifestation of compensating biases: by neglecting the subgrid-scale variability, the model mean state was tuned toward a shallower Aleutian low than is consistent with the mean dynamics. If the mean flow is adjusted, the bias becomes evident, but it can be remedied by adding a stochastic component.
Via a streamfunction tendency decomposition, it is shown that correcting the background state of the model can dramatically improve the representation of high- and low-frequency eddies. This forcing leads to a more accurate representation of the North Atlantic Oscillation, as this pattern is dominantly grown and maintained through the feedback of transient eddy fluxes (e.g., Feldstein, 2003; Franzke and Feldstein, 2005; Lorenz and Hartmann, 2001). Furthermore, we show the improvement to the high- and low-frequency eddies intermittently breaks up the zonality of the Northern Hemispheric circulation and leads to a better representation of blocking across the North Atlantic (Fig. 14).
A limitation of this study is that while we accounted for the daily and seasonal cycle, we did not account for state-dependence, as done in the context of machine learning by Chen et al. (2022); Watt-Meyer et al. (2021) and Bonavita and Laloyaux (2020). This will be the focus of future work together with an assessment of online bias corrections in coupled climate simulations with an evolving ocean. An encouraging result is the improvement of the zonal wind stress, since it has been linked to biases in ocean sea-surface temperatures (Neelin et al., 1994).
In conclusion, we find that the nudging increment adjustment outperforms the correction provided by the DART increments. A disadvantage of the DA increments is that they depend on observations which are
spatially inhomogeneous and can be sparse. In data-limited regions especially, the analysis increment is unlikely to represent model error. On the other hand, nudging increments will benefit from the balance and conservation properties inherent in a reanalysis, as well as from the spatial homogeneity of a gridded product. A noted disadvantage of the nudging tendencies is that the model will adopt the same biases present in the reanalysis (see our discussion of the SPCZ). While the exact computational cost is system-specific, all data assimilation systems are resource-intensive. Indeed, they are often not readily available, which makes an online-bias correction based on nudging tendencies a viable and flexible approach to study the impact of model bias in climate models.
## Acknowledgements
This research was made possible by Schmidt Futures, a philanthropic initiative founded by Eric and Wendy Schmidt, as part of its Virtual Earth System Research Institute (VESRI). The CESM project is supported primarily by the National Science Foundation (NSF). This work is supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation under Cooperative Agreement (1852977). We thank Dr. Peiqiang Xu for his useful discussions regarding the stream-function tendency budget. We thank Dr. Kevin Raeder, Dr. Jefferey Anderson, Dr. Julio Bacmeister, and Dr. Patrick Callaghan for useful discussions regarding data assimilation and nudging.
## Data Availability Statement
The datasets have been archived in long-term campaign storage at the National Center for Atmospheric Research (NCAR) at data path (/glade/campaign/cisl/aiml/wchapman/CAM_runs/). It is freely available, but an NCAR registration is required. All code to produce the figures and tables, from the archived record can be found at the authors' public GitHub repository ([https://github.com/WillyChap/MITA_SITA_CAM6](https://github.com/WillyChap/MITA_SITA_CAM6)). Additionally, source modifications, model build scripts, and namelists to run the described CAM6 versions can be found in the same repository.
## Supplemental
Supplemental material is hosted at the authors' public GitHub repository.
## 6 Appendix
### EOFs
To extract the leading patterns of variability we perform empirical orthogonal function (EOF) decomposition on the monthly anomaly fields. The climatology is defined as the monthly mean from the full 29-year run or reanalysis product. All EOF patterns are area weighted by the square root of the cosine of latitude prior to decomposition. We express the orthogonal spatial field as the pointwise regression of each time series with a one-standard-deviation change of the temporal principal component. The Pacific-North American pattern (PNA) and North Atlantic Oscillation (NAO) were examined in detail. These patterns are defined as in Phillips et al. (2014) (NCAR's Climate Variability Diagnostic Package) as the leading mode of atmospheric variability in the regions [20-85°N, 120°E-120°W] and [20-80°N, 90°W-40°E], respectively.
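As a sketch of this procedure (assuming a numpy array of monthly anomalies and illustrative variable names), the leading EOF, its normalized principal component, and the explained-variance fraction can be obtained from an SVD of the \(\sqrt{\cos(\text{lat})}\)-weighted anomalies:

```python
import numpy as np

def leading_eof(anom, lat):
    """Leading EOF of monthly anomalies via SVD.

    anom : array (ntime, nlat, nlon) of monthly anomalies (climatology removed).
    lat  : latitudes in degrees, used for the sqrt(cos(lat)) area weighting.
    Returns the pattern regressed onto a one-std-dev change of the PC,
    the normalized PC time series, and the fraction of variance explained.
    """
    nt, ny, nx = anom.shape
    w = np.sqrt(np.cos(np.deg2rad(lat)))[None, :, None]
    X = (anom * w).reshape(nt, ny * nx)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pc = U[:, 0] * s[0]                      # leading principal component
    pc_std = pc / pc.std()
    # pointwise regression of the (unweighted) anomalies on the normalized PC
    pattern = (anom.reshape(nt, ny * nx).T @ pc_std / nt).reshape(ny, nx)
    var_frac = s[0] ** 2 / np.sum(s ** 2)
    return pattern, pc_std, var_frac
```

The regional PNA/NAO patterns are obtained by restricting `anom` to the corresponding lat/lon box before the decomposition.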
### Blocking
The Scherrer et al. (2006) index, a two-dimensional extension of the Tibaldi and Molteni (1990) index, was employed to identify atmospheric blocking events across a range of central latitudes. The index employs
the 500-hPa geopotential height field to detect blocks within the latitudinal band from 35°N to 65°N. The index algorithm applies two criteria to each grid point, namely, (i) a southward displacement of the 500-hPa geopotential height field (Z500) by at least 15° of latitude, indicative of a jet stream reversal, and (ii) a northward excursion of the 500-hPa geopotential height field by at least 150 m at a latitude 15° north of the block, indicative of a meandering jet stream around the blocked area. The criteria must be satisfied for a minimum of 5 consecutive days for an event to qualify as a blocking event.
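A minimal sketch of the instantaneous (pre-persistence) test is given below, assuming an ascending regular latitude grid that spans at least 20°N-80°N. The 15° offsets and the 150 m over 15° (i.e., 10 m per degree latitude) northern-gradient threshold follow the description above; the 5-day persistence requirement would be applied afterwards along the time axis, and all names are illustrative.

```python
import numpy as np

def blocking_flags(z500, lat, delta=15.0, ghgn_thresh=-10.0):
    """Instantaneous 2-D blocking test in the spirit of the index described above.

    z500 : array (ntime, nlat, nlon) of 500-hPa geopotential height [m] on a
           regular, ascending latitude grid covering the +/- 15 deg offsets.
    Returns a boolean array (ntime, n_central_lat, nlon) of candidate blocks.
    """
    dlat = abs(lat[1] - lat[0])
    off = int(round(delta / dlat))                        # grid offset for 15 deg
    central = np.where((lat >= 35.0) & (lat <= 65.0))[0]
    flags = []
    for j in central:
        z0, zs, zn = z500[:, j, :], z500[:, j - off, :], z500[:, j + off, :]
        ghgs = (z0 - zs) / delta                          # southern gradient [m / deg lat]
        ghgn = (zn - z0) / delta                          # northern gradient [m / deg lat]
        flags.append((ghgs > 0.0) & (ghgn < ghgn_thresh)) # reversal to the south, ridge to the north
    return np.stack(flags, axis=1)
```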
### Global Bias Calculation
As in the NCAR Atmospheric Modeling Working Group Diagnostic Package (AMWG, 2022), bias is calculated as the cosine-latitude-weighted root-mean-square error (RMSE) of the spatial field after a seasonal, monthly, or daily mean has been computed. RMSE was used so that opposite-signed local biases do not cancel and erroneously inflate skill.
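A minimal sketch of this metric, together with the percent-improvement convention we read the quoted numbers as using (an assumption rather than the exact AMWG code), is:

```python
import numpy as np

def weighted_rmse(model_clim, obs_clim, lat):
    """Cosine-latitude weighted RMSE between model and observed climatological
    fields (nlat, nlon), computed after the time mean has been taken so that
    opposite-signed local biases cannot cancel."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(model_clim)
    err2 = (model_clim - obs_clim) ** 2
    return np.sqrt(np.nansum(w * err2) / np.nansum(w))

# percent improvement of an experiment relative to the control (assumed convention):
# 100 * (weighted_rmse(cntrl, obs, lat) - weighted_rmse(exp, obs, lat)) / weighted_rmse(cntrl, obs, lat)
```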
|
2310.20064
|
A Scalable Training Strategy for Blind Multi-Distribution Noise Removal
|
Despite recent advances, developing general-purpose universal denoising and
artifact-removal networks remains largely an open problem: Given fixed network
weights, one inherently trades-off specialization at one task (e.g., removing
Poisson noise) for performance at another (e.g., removing speckle noise). In
addition, training such a network is challenging due to the curse of
dimensionality: As one increases the dimensions of the specification-space
(i.e., the number of parameters needed to describe the noise distribution) the
number of unique specifications one needs to train for grows exponentially.
Uniformly sampling this space will result in a network that does well at very
challenging problem specifications but poorly at easy problem specifications,
where even large errors will have a small effect on the overall mean squared
error.
In this work we propose training denoising networks using an
adaptive-sampling/active-learning strategy. Our work improves upon a recently
proposed universal denoiser training strategy by extending these results to
higher dimensions and by incorporating a polynomial approximation of the true
specification-loss landscape. This approximation allows us to reduce training
times by almost two orders of magnitude. We test our method on simulated joint
Poisson-Gaussian-Speckle noise and demonstrate that with our proposed training
strategy, a single blind, generalist denoiser network can achieve peak
signal-to-noise ratios within a uniform bound of specialized denoiser networks
across a large range of operating conditions. We also capture a small dataset
of images with varying amounts of joint Poisson-Gaussian-Speckle noise and
demonstrate that a universal denoiser trained using our adaptive-sampling
strategy outperforms uniformly trained baselines.
|
Kevin Zhang, Sakshum Kulshrestha, Christopher Metzler
|
2023-10-30T22:29:07Z
|
http://arxiv.org/abs/2310.20064v1
|
# A Scalable Training Strategy for Blind Multi-Distribution Noise Removal
###### Abstract
Despite recent advances, developing general-purpose universal denoising and artifact-removal networks remains largely an open problem: Given fixed network weights, one inherently trades-off specialization at one task (e.g., removing Poisson noise) for performance at another (e.g., removing speckle noise). In addition, training such a network is challenging due to the curse of dimensionality: As one increases the dimensions of the specification-space (i.e., the number of parameters needed to describe the noise distribution) the number of unique specifications one needs to train for grows exponentially. Uniformly sampling this space will result in a network that does well at very challenging problem specifications but poorly at easy problem specifications, where even large errors will have a small effect on the overall mean squared error.
In this work we propose training denoising networks using an adaptive-sampling/active-learning strategy. Our work improves upon a recently proposed universal denoiser training strategy by extending these results to higher dimensions and by incorporating a polynomial approximation of the true specification-loss landscape. This approximation allows us to reduce training times by almost two orders of magnitude. We test our method on simulated joint Poisson-Gaussian-Speckle noise and demonstrate that with our proposed training strategy, a single blind, generalist denoiser network can achieve peak signal-to-noise ratios within a uniform bound of specialized denoiser networks across a large range of operating conditions. We also capture a small dataset of images with varying amounts of joint Poisson-Gaussian-Speckle noise and demonstrate that a universal denoiser trained using our adaptive-sampling strategy outperforms uniformly trained baselines.
Denoising, Active Sampling, Deep Learning
## I Introduction
Neural networks have become the gold standard for solving a host of imaging inverse problems [1]. From denoising and deblurring to compressive sensing and phase retrieval, modern deep neural networks significantly outperform classical techniques like BM3D [2] and KSVD [3].
The most straightforward and common approach to apply deep learning to inverse problems is to train a neural network to learn a mapping from the space of corrupted images/measurements to the space of clean images. In this framework, one first captures or creates a training set consisting of clean images \(x_{1},x_{2},\dots\) and corrupted images \(y_{1},y_{2},\dots\) according to some known forward model \(p(y_{i}|x_{i},\theta)\), where \(\theta\in\Theta\) denotes the latent variable(s) specifying the forward model. For example, when training a network to remove additive white Gaussian noise
\[p(y_{i}|x_{i},\theta)=\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{\|y_{i}-x_{i}\|^{2}}{2\sigma^{2}}\right), \tag{1}\]
and the latent variable \(\theta\) is the standard deviation \(\sigma\). With a training set of \(L\) pairs \(\{x_{i},y_{i}\}_{i=1}^{L}\) in hand, one can then train a network to learn a mapping from \(y\) to \(x\).
Typically, we are not interested in recovering signals from a single corruption distribution (e.g., a single fixed noise standard deviation \(\sigma\)) but rather a range of distributions. For example, we might want to remove additive white Gaussian noise with standard deviations anywhere in the range \([0,50]\) (\(\Theta=\{\sigma|\sigma\in[0,50]\}\)). The size of this range determines how much the network needs to generalize and there is inherently a trade-off between specialization and generalization. By and large, a network trained to reconstruct images over a large range of corruptions (a larger set \(\Theta\)) will under-perform a network trained and specialized over a narrow range [4].
This problem becomes significantly more challenging when dealing with mixed, multi-distribution noise. As one increases the number of parameters (e.g., Gaussian standard deviation, Poisson rate, number of speckle realizations,...) the space of corrupted signals one needs to reconstruct grows exponentially: The specification space becomes the Cartesian product (e.g., \(\Theta=\Theta_{Gaussian}\times\Theta_{Poisson}\times\Theta_{speckle}\)) of the spaces of each of the individual noise distributions.
This expansion does not directly prevent someone (with enough compute resources) from training a "universal" denoising algorithm. One can sample from \(\Theta\), generate a training batch, optimize the network to minimize some reconstruction loss, and repeat. However, this process depends heavily on the policy/probability-density-function \(\pi\) used to sample from \(\Theta\). As noted in [5] and corroborated in Section V, uniformly sampling from \(\Theta\) will produce networks that do well on hard examples but poorly (relative to how well a specialized network performs) on easy examples.
### _Our Contribution_
In this work, we develop an adaptive-sampling/active-learning strategy that allows us to train a single "universal" network to remove mixed Poisson-Gaussian-Speckle noise such that the network consistently performs within a uniform bound of specialized bias-free DnCNN baselines [4, 6]. Our key contribution is a novel, polynomial approximation of the specification-loss landscape. This approximation allows us to tractably apply (using over \(50\times\) fewer training examples than it would otherwise require) the adaptive-sampling strategy
developed in [5], wherein training a denoiser is framed as a constrained optimization problem. We validate our technique with both simulated and experimentally captured data.
## II Related Work
Overcoming the specialization-generalization trade-off has been the focus of intense research efforts over the last 5 years.
### _Adaptive Denoising_
One approach to improve generalization is to provide the network information about the current problem specifications \(\theta\) at test time. For example, [7] demonstrated one could provide a constant standard-deviation map as an extra channel to a denoising network so that it could adapt to i.i.d. Gaussian noise. [8] extends this idea by adding a general standard-deviation map as an extra channel, to deal with spatially-varying Gaussian noise. This idea was recently extended to deal with correlated Gaussian noise [9]. The same framework can be extended to more complex tasks like compressive sensing, deblurring, and descattering as well [10, 11]. These techniques are all non-blind and require an accurate estimate of the specification parameters \(\theta\) to be effective.
### _Universal Denoising_
Somewhat surprisingly, the aforementioned machinery may be unnecessary if the goal is to simply remove additive white Gaussian noise over a range of different standard deviations: [6] recently demonstrated one can achieve significant invariance to the noise level by simply removing biases from the network architecture. [12] also achieves similar invariance to noise level by scaling the input images to the denoiser to match the distribution it was trained on. Alternatively, at a potentially large computational cost, one can apply iterative "plug and play" or diffusion models that allow one to denoise a signal contaminated with noise with parameters \(\theta^{\prime}\) using a denoiser/diffusion model trained for minimum mean squared error additive white Gaussian noise removal [13, 14, 15]. These plug and play methods are non-blind and require knowledge of the likelihood \(p(y|x,\theta^{\prime})\) at test time.
### _Training Strategies_
Generalization can also be improved by modifying the training set [16]. In the context of image restoration problems like denoising, [17] propose updating the training data sampling distribution each epoch so as to preferentially sample the data that the neural network performed worst on during the prior epoch, in an ad-hoc way. In [5] the authors developed a principled adaptive training strategy by framing the training of a denoiser across many problem specifications as a minimax optimization problem. This strategy will be described in detail in Section IV.
### _Relationship to Existing Works_
We go beyond [5] by incorporating a polynomial approximation of the specification-loss landscape. This approximation is the key to scaling the adaptive training methodology to high-dimensional latent parameter spaces. It allows us to efficiently train a blind image denoiser that can operate effectively across a large range of noise conditions.
## III Problem Formulation
### _Noise Model_
This paper focuses on removing joint Poisson-Gaussian-Speckle noise using a single blind image denoising network. Such noise occurs whenever imaging scenes illuminated by a coherent (e.g., laser) source. In this context, photon/shot noise introduces Poisson noise, read noise introduces Gaussian noise, and the constructive and destructive interference caused by the coherent fields scattered off optically rough surfaces causes speckle noise [18].
The overall forward model can be described by
\[y_{i}=\frac{1}{\alpha}\text{Poisson}(\alpha(r\circ w_{i}))+n_{i}, \tag{2}\]
where the additive noise \(n_{i}\) follows a Gaussian distribution \(\mathcal{N}(0,\sigma^{2}\mathbf{I})\); the multiplicative noise \(w_{i}\) follows a Gamma distribution with concentration parameter \(B/\beta\) and rate parameter \(B/\beta\), where \(B\) is the upper bound on \(\beta\); and \(\alpha\) is a scaling parameter that controls the amount of Poisson noise. The forward model is thus specified by the set of latent variables \(\theta=\{\sigma,\alpha,\beta\}\).
A few example images generated according to this forward model are illustrated in Figure 1. Variations in the problem specifications result in drastically different forms of noise.
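For illustration, the forward model in (2) can be sampled as follows; here `x` stands for the clean image (denoted \(r\) in Eq. (2)) scaled to \([0,1]\), and the Gamma parameterization (shape and rate both equal to \(B/\beta\), so the speckle field has unit mean) follows the description above. The function name and defaults are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_pgs_noise(x, sigma, alpha, beta, B=1024):
    """Sample the forward model of Eq. (2) for a clean image x in [0, 1]:
    multiplicative speckle (Gamma) -> scaled Poisson shot noise -> additive Gaussian."""
    k = B / beta                                           # shape == rate, so E[w] = 1
    w = rng.gamma(shape=k, scale=1.0 / k, size=x.shape)    # speckle field
    y = rng.poisson(alpha * (x * w)) / alpha               # shot noise, scaled back
    y = y + rng.normal(0.0, sigma, size=x.shape)           # read noise
    return y

# e.g. noisy = add_pgs_noise(clean, sigma=0.1, alpha=10.0, beta=64.0)
```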
### _Specification-Loss Landscape_
A _specification_ is a set of \(n\) parameters that define a task. In our setting, the specifications are the distribution parameters describing the noise in an image. Each of these parameters is bounded in an interval \([l_{i},r_{i}]\), for \(1\leq i\leq n\). The _specification space_\(\Theta\) is the Cartesian product of these intervals: \(\Theta=[l_{1},r_{1}]\times\cdots\times[l_{n},r_{n}]\). Suppose we have a function \(f\) that can solve a task (e.g., denoising) at any specification in \(\Theta\), albeit with some error. Then the _specification-loss landscape_, for a given \(f\) over \(\Theta\), is the function \(\mathcal{L}_{f}\) which maps points \(\theta\) from \(\Theta\) to the corresponding error/loss (e.g., mean squared error) that \(f\) achieves at that specification.
Now suppose that all functions \(f\) under consideration come from some family of functions \(\mathcal{F}\). Let the ideal function from \(\mathcal{F}\) that solves a task at a particular specification \(\theta\) be \(f^{\theta}_{\text{ideal}}=\arg\min_{f\in\mathcal{F}}\mathcal{L}_{f}(\theta)\). With this in mind, we define the _ideal specification-loss landscape_ as the function that maps points \(\theta\) in \(\Theta\) to the loss that \(f^{\theta}_{\text{ideal}}\) achieves on the task with specification \(\theta\), and denote it \(\mathcal{L}_{\text{ideal}}\).
### _The Uniform Gap Problem_
Our goal is to find a single function \(f^{*}\in\mathcal{F}\) that achieves consistent performance across the specification space \(\Theta\), compared to the ideal function at each point \(\theta\in\Theta\), \(f^{\theta}_{\text{ideal}}\).
More precisely, we want to minimize the maximum gap in performance between \(f^{*}\) and \(f^{\theta}_{\text{ideal}}\) across all of \(\Theta\). E.g., we want a universal denoiser \(f^{*}\) that works _almost_ as well as specialized denoisers \(f^{\theta}_{\text{ideal}}\), at all noise levels \(\Theta\). Following [5], we can frame this objective as the following optimization problem
\[f^{*}=\operatorname*{argmin}_{f\in\mathcal{F}}\sup_{\theta\in\Theta}\left\{ \mathcal{L}_{f}(\theta)-\mathcal{L}_{\text{ideal}}(\theta)\right\}, \tag{3}\]
which we call the _uniform gap problem_.
## IV Proposed Method
### _Adaptive Training_
To solve the optimization problem given in (3), [5] proposes rewriting it in its Lagrangian dual formulation and then applying dual ascent, which yields the following iterations:
\[f^{t+1}=\operatorname*{argmin}_{f\in\mathcal{F}}\left\{\int_{\theta\in\Theta}\mathcal{L}_{f}(\theta)\lambda^{t}(\theta)d\theta\right\} \tag{4}\]
\[\lambda^{t+\frac{1}{2}}=\lambda^{t}+\gamma^{t}\left(\frac{\mathcal{L}_{f^{t+1}}}{\mathcal{L}_{\text{ideal}}}-1\right) \tag{5}\]
\[\lambda^{t+1}=\lambda^{t+\frac{1}{2}}/\int_{\theta\in\Theta}\lambda^{t+\frac{1}{2}}(\theta)d\theta, \tag{6}\]
where \(\lambda(\theta)\) represents a dual variable at specification \(\theta\in\Theta\), and \(\gamma\) is the dual ascent step size.
We can interpret (4) as fitting a model \(f\) to the training data, where \(\lambda(\theta)\) is the probability of sampling a task at specification \(\theta\) to draw training data from. Next, (5) updates the sampling distribution \(\lambda(\theta)\) based on the difference between the current model \(f^{t+1}\)'s performance across \(\theta\in\Theta\) and the ideal models' performances. Lastly (6) ensures that \(\lambda(\theta)\) is a properly normalized probability distribution. We provide the derivation of the dual ascent iterations from [5] in the supplement.
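A schematic of one such iteration over a discretized specification set is sketched below; `train_fn` and `eval_fn` stand in for fitting the denoiser with specifications drawn from \(\lambda\) (Eq. 4) and for measuring its per-specification loss, and the clipping step is a practical safeguard that is an assumption rather than part of (5)-(6).

```python
import numpy as np

def dual_ascent_step(model, lam, thetas, loss_ideal, step, train_fn, eval_fn):
    """One iteration of (4)-(6) on a discretized specification set `thetas`.

    lam        : current sampling distribution over thetas (nonnegative, sums to 1).
    loss_ideal : per-specification losses of the specialized baselines.
    train_fn(model, thetas, lam) : fits the model with specs drawn ~ lam   (Eq. 4).
    eval_fn(model, thetas)       : per-specification losses of the fitted model.
    """
    model = train_fn(model, thetas, lam)                  # (4): weighted fit
    loss_f = eval_fn(model, thetas)
    lam = lam + step * (loss_f / loss_ideal - 1.0)        # (5): dual update
    lam = np.clip(lam, 0.0, None)                         # keep lam a valid density (assumption)
    lam = lam / lam.sum()                                 # (6): renormalize
    return model, lam
```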
While \(\Theta\) has been discussed thus far as a continuum, in practice we sample \(\Theta\) at discrete locations and compare the model being fit to the ideal model performance at these discrete locations only, so that the cardinality of \(\Theta\), \(|\Theta|\), is finite. Computing \(\mathcal{L}_{\text{ideal}}(\theta)\) for each \(\theta\in\Theta\) is extremely computationally demanding if \(|\Theta|\) is large; if \(f\) is a neural network, it becomes necessary to train \(|\Theta|\) neural networks. Furthermore, while \(\mathcal{L}_{\text{ideal}}(\theta)\) can be computed offline independent of the dual ascent iterations, during the dual ascent iterations, each update of \(\lambda\) requires the evaluation of \(\mathcal{L}_{f^{t+1}}\) for each \(\theta\in\Theta\), which is also time intensive if \(|\Theta|\) is large.
The key insight underlying our work is that one can approximate \(\mathcal{L}_{\text{ideal}}\) and \(\mathcal{L}_{f}\) in order to drastically accelerate the training process.
### _Specification-loss Landscape Approximations_
Let \(\mathcal{Q}\) be a class of functions (e.g., quadratics) which we will use to approximate the specification-loss landscape. Instead of computing \(\mathcal{L}_{\text{ideal}}(\theta)\) for each \(\theta\in\Theta\), we propose instead computing \(\mathcal{L}_{\text{ideal}}(\theta)\) at a set of locations \(\theta\in\Theta_{\text{sparse}}\), where \(|\Theta_{\text{sparse}}|\ll|\Theta|\), and then using these values to form an approximation \(\tilde{\mathcal{L}}_{\text{ideal}}\) of \(\mathcal{L}_{\text{ideal}}(\theta)\), as
\[\tilde{\mathcal{L}}_{\text{ideal}}=\operatorname*{argmin}_{\mathcal{L}\in\mathcal{Q}}\sum_{\theta\in\Theta_{\text{sparse}}}\|\mathcal{L}(\theta)-\mathcal{L}_{\text{ideal}}(\theta)\|_{2}^{2}. \tag{7}\]
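A minimal sketch of such a fit, with \(\mathcal{Q}\) taken to be quadratics and ordinary least squares applied to sparse (specification, PSNR) samples, is given below; the function names are illustrative, and the fitted PSNR surface is converted to a loss via \(\mathcal{L}(\theta)=10^{-\text{PSNR}(\theta)/10}\) as described later in this section.

```python
import numpy as np
from itertools import combinations_with_replacement

def quad_features(thetas):
    """Feature map for a full quadratic in n variables: 1, theta_i, theta_i*theta_j."""
    thetas = np.atleast_2d(thetas)
    n = thetas.shape[1]
    cols = [np.ones(len(thetas))]
    cols += [thetas[:, i] for i in range(n)]
    cols += [thetas[:, i] * thetas[:, j]
             for i, j in combinations_with_replacement(range(n), 2)]
    return np.stack(cols, axis=1)          # (n+1)(n+2)/2 columns

def fit_quadratic_landscape(theta_sparse, psnr_sparse):
    """Least-squares quadratic fit (Eq. 7) to sparse PSNR samples; returns a
    callable surrogate PSNR(theta)."""
    A = quad_features(theta_sparse)
    coeffs, *_ = np.linalg.lstsq(A, psnr_sparse, rcond=None)
    return lambda theta: quad_features(theta) @ coeffs
```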
We can similarly approximate \(\mathcal{L}_{f^{t+1}}\) with a polynomial \(\tilde{\mathcal{L}}_{f^{t+1}}\). Then we can solve (3) using dual ascent as before, replacing
Fig. 1: **Varying the noise specifications.** The first row shows images corrupted by Gaussian noise, the second row shows images corrupted by Poisson noise, and the last row shows images corrupted by speckle noise. In each of the rows, the other noise parameters are held fixed at \(0\), \(0.01\), and \(1.00\), respectively.
\(\mathcal{L}_{\text{ideal}}\) and \(\mathcal{L}_{f^{t+1}}\) with \(\tilde{\mathcal{L}}_{\text{ideal}}\) and \(\tilde{\mathcal{L}}_{f^{t+1}}\) where appropriate, resulting in a modification to (5):
\[\lambda^{t+\frac{1}{2}}=\lambda^{t}+\gamma^{t}\left(\frac{\tilde{\mathcal{L}}_{ f^{t+1}}}{\tilde{\mathcal{L}}_{\text{ideal}}}-1\right). \tag{8}\]
To justify our use of this approximation, we first consider a linear subspace projection "denoiser" and show that its specification-loss landscape is linear with respect to its specifications, and is thus easy to approximate.
**Example 1**.: _Let \(y=\alpha\text{Poisson}(\frac{1}{\alpha}x_{o})+n\) with \(n\sim N(0,\sigma^{2}\mathbf{I})\), let \(C\) denote a \(k\)-dimensional subspace of \(\mathbb{R}^{n}\) (\(k<n\)), and let the denoiser be the projection of \(y\) onto subspace \(C\) denoted by \(P_{C}(y)=\mathbf{P}y\). Then, assuming \(\frac{1}{\alpha}x_{o}\) is large, for every \(x_{o}\in C\)_
\[\mathbb{E}\|P_{C}(y)-x_{o}\|_{2}^{2}\approx k\sigma^{2}+\alpha tr(\mathbf{P} \text{diag}(x)\mathbf{P}^{t}),\]
_where \(tr(\cdot)\) denotes the trace. The loss landscape, \(\mathcal{L}(\sigma^{2},\alpha)=\mathbb{E}\|P_{C}(y)-x_{o}\|_{2}^{2}\), is linear with respect to \(\sigma^{2}\) and \(\alpha\)._
Proof.: First note that if \(\frac{1}{\alpha}x_{o}\) is large the distribution of \(\alpha\text{Poisson}(x/\alpha)\) can be approximated with \(N(x,\alpha\text{diag}(x))\). Accordingly, \(y\approx x+\nu\) where \(\nu\sim N(0,\sigma^{2}\mathbf{I}+\alpha\text{diag}(x))\). Since the projection onto a subspace is a linear operator and since \(P_{C}(x_{o})=x_{o}\) we have
\[\mathbb{E}\|P_{C}(y)-x_{o}\|_{2}^{2}\approx\mathbb{E}\|x_{o}+P_{C}(\nu)-x_{o} \|_{2}^{2}=\mathbb{E}\|P_{C}(\nu)\|_{2}^{2}.\]
Let \(r=\mathbf{P}\nu\). Note that \(r\sim N(0,\mathbf{\Sigma})\) with \(\mathbf{\Sigma}=\sigma^{2}\mathbf{P}\mathbf{P}^{t}+\alpha\mathbf{P}\text{diag }(x)\mathbf{P}^{t}\). Accordingly,
\[\mathbb{E}\|P_{C}(\nu)\|_{2}^{2} =\mathbb{E}\|r\|^{2}=tr(\mathbf{\Sigma}),\] \[=\sigma^{2}tr(\mathbf{P}\mathbf{P}^{t})+\alpha tr(\mathbf{P} \text{diag}(x)\mathbf{P}^{t}),\] \[=k\sigma^{2}+\alpha tr(\mathbf{P}\text{diag}(x)\mathbf{P}^{t}),\]
where the last equality follows from the fact that \(\mathbf{P}\mathbf{P}^{t}=\mathbf{P}\) and the trace of a \(k\)-dimensional projection matrix is \(k\).
The specification-loss landscapes of high-performance image denoisers are highly regular as well. We show the achievable PSNR (peak signal-to-noise ratio) versus noise parameter plots for denoising Gaussian-speckle noise with the DnCNN denoiser [4] in Figure 2. Analogous figures for Poisson-Gaussian and Poisson-speckle noise can be found in the supplement. These landscapes are highly regular and can be accurately approximated with quadratic polynomials. (Details on how we validated these quadratic approximations using cross-validation can be found in the supplement.)
Because we are more interested in ensuring a uniform PSNR gap than a uniform MSE gap, we approximate the ideal PSNRs rather than the ideal mean squared errors. Then, following [5], we convert the ideal PSNRs to mean squared errors with the mapping \(\mathcal{L}(\theta)=10^{-\text{PSNR}(\theta)/10}\) for use in the dual ascent iterations.
### _Exponential Savings in Training-Time_
The key distinction between our adaptive training procedure and the adaptive procedure from [5] is that [5] relies upon a dense sampling of the specification-loss landscape whereas our method requires only a sparse sampling of the specification-loss landscape. Because "sampling" points on the specification-loss landscape requires training a CNN, sampling fewer points can result in substantial savings in time and cost.
As one increases the number of specifications, \(n\), needed to describe this landscape (\(n=1\) for Gaussian noise, \(n=2\) for Poisson-Gaussian noise, \(n=3\) for Poisson-Gaussian-Speckle noise,...), the number of points needed to densely sample the landscape grows exponentially. Fortunately, the number of samples needed to fit a quadratic to this landscape only grows quadratically with \(n\): The number of possible nonzero coefficients, i.e., unknowns, of a quadratic of \(n\) variables is \(\binom{n+2}{2}=\frac{(n+1)(n+2)}{2}\) and thus one can uniquely specify this function from \(\frac{(n+1)(n+2)}{2}+1\) non-degenerate samples. Accordingly, our method has the potential to offer training-time savings that are exponential with respect to the problem specification dimensions. In the next section, we demonstrate our method reduces training time by over \(50\times\) for \(n=3\).
## V Synthetic Results
In this section we compare the performance of universal denoising algorithms that are trained (1) by uniformly sampling the noise specifications during training ("uniform"); (2) using the adaptive training strategy from [5] ("dense"), which adaptively trains based on a densely sampled estimate of the loss-specification landscape, and (3) using our proposed adaptive training strategy ("sparse"), which adaptively trains based on a sparsely sampled approximation of the loss-specification landscape.
### _Setup_
Implementation Details. We use a 20-layer DnCNN architecture [4] for all our denoisers. Following [6], we remove all biases from the network layers. We train all of our networks for 50 epochs, with 3000 mini-batches per epoch and 128
Fig. 2: **Loss Landscape Visualizations.** PSNR, which we use as our metric for error, versus denoising task specifications. The specification-loss landscapes (which represent the PSNRs a specialized denoiser can achieve at each specification) are smooth and amenable to approximation.
image patches per batch, for a total of 384,000 image patches total. We use the Adam optimizer [19] to optimize the weights with a learning rate of \(1\times 10^{-4}\), with an L2 loss. In practice, rather than training a model to convergence, to save training time we approximately solve (4) of the dual ascent iterations by training the model for 10 epochs.
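A minimal PyTorch sketch of the training step described above is given below; the model and data loader are placeholders, and only the optimizer, learning rate, loss, and epoch count follow the settings listed here.

```python
import torch
import torch.nn as nn

def train_denoiser(model, loader, epochs=10, lr=1e-4, device="cuda"):
    """Approximately solve one training subproblem: a few epochs of Adam with an L2 loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    model.to(device).train()
    for _ in range(epochs):
        for noisy, clean in loader:               # loader yields (noisy, clean) patch batches
            noisy, clean = noisy.to(device), clean.to(device)
            optimizer.zero_grad()
            loss = criterion(model(noisy), clean)
            loss.backward()
            optimizer.step()
    return model
```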
Data. To train our denoising models, we curate a high-quality image dataset that combines multiple high-resolution image datasets: the Berkeley Segmentation Dataset [20], the Waterloo Exploration Database [21], the DIV2K dataset [22], and the Flickr2K dataset [23]. To test our denoising models, we use the validation dataset from the DIV2K dataset. We use a patch size of 40 pixels by 40 pixels, and patches are randomly cropped from the training images with flipping and rotation augmentations, to generate a total of 384,000 patches. All images are grayscale and scaled to the range \([0,1]\). We use the BSD68 dataset [24] as our testing dataset.
Noise Parameters. We consider four types of mixed noise distributions: Poisson-Gaussian, Speckle-Poisson, Speckle-Gaussian, and Speckle-Poisson-Gaussian noise. In each mixed noise type, \(\sigma\in[0.02,0.66]\), \(\alpha\in[0.1,41]\), and \(\beta\in[1,1024]\), and we discretize each range into 10 bins. Note that \(B=1024\) for speckle noise, following the parameterization in Section III-A. These ranges were chosen as they correspond to input PSNRs of roughly 5 to 30 dB.
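For concreteness, one common way to simulate such mixed noise at a specification \((\sigma,\alpha,\beta)\) is sketched below: mean-one gamma-distributed speckle with \(\beta\) looks, Poisson noise at photon level \(\alpha\), and additive Gaussian noise with standard deviation \(\sigma\). This is only an illustrative forward model and may differ in details from the parameterization of Section III-A.

```python
import numpy as np

def add_mixed_noise(img, sigma, alpha, beta, rng=None):
    """Illustrative Poisson-Gaussian-speckle forward model for a clean image in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    speckle = rng.gamma(shape=beta, scale=1.0 / beta, size=img.shape)  # mean-1 multiplicative speckle
    shot = rng.poisson(alpha * img * speckle) / alpha                  # Poisson noise at photon level alpha
    return shot + rng.normal(0.0, sigma, size=img.shape)               # additive Gaussian (read) noise
```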
Sampling the Specification-Loss Landscape. Both our adaptive universal denoiser training strategy (“sparse”) and Gnanasambandam and Chan’s adaptive universal denoiser training strategy (“dense”) require sampling the loss-specification landscapes, \(\mathcal{L}_{\text{ideal}}(\theta)\), before training. That is, each requires training a collection of denoisers specialized for specific noise parameters \(\theta\in\Theta\).
For the loss-specification landscapes with two dimensions (Poisson-Gaussian, Speckle-Poisson, and Speckle-Gaussian noise), we densely sample the landscape (i.e., train networks) on a \(10\times 10\) grid for the “dense” training. For the “sparse” training (our method), we sample the landscape at 10 random specifications as well as at the endpoints of the specification support, for a total of 14 samples for each of Poisson-Gaussian, Speckle-Poisson, and Speckle-Gaussian noise.
For the loss-specification landscape with three dimensions (Speckle-Poisson-Gaussian), sampling densely would take nearly a year of GPU hours, so we restrict ourselves to sparsely sampling at only 18 specifications: 10 random locations plus the 8 corners of the specification cube.
Training Setup. For each of the Poisson-Gaussian, Speckle-Poisson, and Speckle-Gaussian noise types separately, we (1) use the approximations \(\widehat{\mathcal{L}}_{\text{ideal}}\) and \(\widehat{\mathcal{L}}_{f}\) to adaptively train a denoiser \(f^{*}_{\text{sparse}}\), (2) use the densely sampled landscapes \(\mathcal{L}_{\text{ideal}}\) and \(\mathcal{L}_{f}\) to adaptively train a denoiser \(f^{*}_{\text{dense}}\), and (3) sample specifications uniformly to train a denoiser \(f^{*}_{\text{uniform}}\). For Speckle-Poisson-Gaussian noise, the set of possible specifications is too large to train an ideal denoiser for each specification (i.e., compute \(\mathcal{L}_{\text{ideal}}\)), so we only provide results for \(f^{*}_{\text{sparse}}\) and \(f^{*}_{\text{uniform}}\).
### _Quantitative Results_
Tables I, II, III, and IV compare the performance of networks trained with our adaptive training method, which uses sparse samples to approximate the specification-loss landscape; “ideal” non-blind baselines, which are trained for specific noise parameters; networks trained using the adaptive training procedure from [5], which requires training specialized ideal baselines at all noise specifications (densely) beforehand; and networks trained by uniformly sampling the specification space. Both adaptive strategies approach the performance of the specialized networks and dramatically outperform the uniformly trained networks at certain problem specifications.
Fig. 3: **Adaptive vs Uniform Training, 2D Specification Space.** Adaptive training with sparse sampling and the polynomial approximation works effectively in the 2D problem space and produces a network whose performance is consistently close to the ideal. By contrast, a network trained by uniformly sampling from the space performs far worse than the specialized networks in certain contexts. The error bars represent one standard deviation. Lower is better.
Fig. 4: **Adaptive vs Uniform Training, 3D Specification Space.** Adaptive sampling with the polynomial approximation works effectively in the 3D problem space and produces a network whose performance is consistently close to the ideal. By contrast, a network trained by uniformly sampling from the space performs far worse than the specialized networks in certain contexts. The error bars represent one standard deviation. Lower is better.
These results are further illustrated in Figures 3 and 4, which report how much the various universal denoisers underperform specialized denoisers across various noise parameters. As hoped, both adaptive training strategies ("dense" and "sparse") produce denoisers which consistently perform nearly as well as the specialized denoisers.1 In contrast, the uniformly trained denoisers drastically underperform the specialized denoisers in some contexts. Additional quantitative results can be found in the Supplement.
Footnote 1: To form these figures we train specialized networks at an additional 100 specifications. These networks were not used for training the universal denoisers.
### _Qualitative Results_
Figure 5 illustrates the denoisers trained with the different strategies (ideal, uniform, adaptive) on an example corrupted with a low amount of noise and an example corrupted with a high amount of noise. Notice that in the low-noise regime the uniform-trained denoiser oversmooths the image and so achieves worse performance than the adaptive-trained denoiser, whereas in the high-noise regime the uniform-trained denoiser outperforms the adaptive-trained denoiser. Additional qualitative results can be found in the supplement.
### _Time and Cost Savings_
Recall our adaptive training strategy requires pretraining significantly fewer specialized denoisers than the method developed in [5]: 14 vs 100 networks with 2D noise specifications and 18 vs 1000 networks with 3D specifications. We trained the DnCNN denoisers on Nvidia GTX 1080Ti GPUs, which took 6-8 hours to train each network. Overall training times associated with each of the universal denoising algorithms are presented in Table V. In the 3D case, our technique saves over 9 months in GPU compute hours.
## VI Experimental Validation
To validate the performance of our method, we experimentally captured a small dataset consisting of 5 scenes with varying amounts of Poisson, Gaussian, and speckled-speckle noise.
### _Optical Setup_
Our optical setup consists of a bench-top setup and a DSLR camera mounted on a tripod, pictured in Figure 6. A red laser is shined through a rotating diffuser to illuminate an image printed on paper. The diffuser is controlled by an external motor controller, which allows us to capture a distinct speckle realization for each image. The resulting signal is captured by the DSLR camera. The pattern on the rotating diffuser causes a speckle noise pattern in the signal. Note, however, that because the paper being imaged is rough with respect to the wavelength of red light, an additional speckle pattern is introduced to the signal; that is, we are capturing speckled-speckle. This optical setup allows us to capture images with varying levels of Gaussian, Poisson, and (speckled) speckle noise. The Gaussian noise can be varied by simply increasing or decreasing the ISO on the camera. Similarly, decreasing exposure time will produce more Poisson noise. Lastly, speckle noise can be varied by changing the number of recorded realizations that are averaged together; as the number of realizations increases, the corruption caused by speckle noise decreases. For each noisy capture, we keep the aperture size fixed at F/11.
Fig. 5: **Qualitative comparisons, simulated data.** Comparison between the performance of the ideal, uniform-trained, and adaptive sparse-sampling-trained denoisers on a sample image corrupted with a low amount of noise and on one corrupted with a high amount of noise. Our adaptive, sparse-approximation-based blind training strategy performs only marginally worse than an ideal, non-blind baseline and significantly better than the uniform baseline on “easy” problem specifications, while being only marginally worse than both the ideal and uniform baselines on “hard” problem specifications.
### _Data Collection_
Using the previously described optical setup, we collected noisy and ground truth images for five different targets. To compute our ground truth image, we capture 16 images with a shutter speed of 0.4 seconds, ISO 100, and F/4 and average them together. To gather the noisy images, for each of the 5 scenes we consider, we capture 16 images at 4 different shutter speed/ISO pairs, at a fixed aperture of F/11: 1/15 seconds at ISO 160, 1/60 seconds at ISO 640, 1/250 seconds at ISO 2500, and 1/1000 seconds at ISO 10000, for a total of 320 pictures captured. We choose these pairs such that the product of the two settings is approximately the same across settings and thus the dynamic range is approximately the same across samples. Additionally, we capture dark frames for each exposure setting and subtract them from the corresponding images gathered. The noisy images were scaled to match the exposure settings of the ground truth image.
### _Performance_
Quantitative Results. To compare our adaptive sparse training method to the uniform baseline on the real experimental data, we compare the SSIM of the noisy input to the networks with the SSIM of the output, both with respect to the computed ground truth [25]. In this comparison, we use the networks that were trained on images corrupted with Speckle-Poisson-Gaussian noise. We choose to focus on SSIM in this case because it has better perceptual qualities than PSNR, but we also include a similar plot using PSNR in the supplemental material. We plot the input SSIM minus the output SSIM versus the input SSIM in Figure 7 (lower is better). Even though our adaptive distribution trained networks perform worse than the uniform trained networks in the higher noise regime, the adaptive distribution networks perform better in the low noise regime. Note that for all samples the adaptive trained network improves the image quality, whereas for some lower noise images the uniform trained network actually lowers the image quality. This is because the uniform trained network oversmooths the images, since the higher noise images have a large impact on its training loss.
Fig. 6: **Optical System.** We capture images of paper with images printed on them, illuminated by a laser, using a DSLR camera. The laser beam passes through a diffuser, which results in a speckle noise pattern on the image. We rotate the diffuser with a rotation mount connected to a motor controller to gather different independent realizations of speckle.
Qualitative Results. A qualitative evaluation on real-world data collected in our lab suggests that our method extends beyond simulation. Close inspection of the samples in Figure 8 shows that our method is more consistent at preserving information after denoising at low noise levels. Notably, the uniform denoiser smooths the image, losing high-frequency details. Even on highly corrupted samples, our sparse sampling approach is able to achieve performance close to that of the uniformly trained denoiser. Again, our method retains details that the uniform denoiser smooths out of the image. The drop in performance for our method at high noise levels compared to the uniform sampling strategy is further evidence that the uniform strategy is liable to over-correct for high noise samples.
Additional qualitative results on the experimental data can be found in the supplement.
## VII Conclusions
In this work, we demonstrate that we can leverage a polynomial approximation of the specification-loss landscape to train a denoiser to achieve performance which is uniformly bounded away from the ideal across a variety of problem specifications. Our results extend to high-dimensional problem specifications (e.g., Poisson-Gaussian-Speckle noise), and in this regime our approach offers over \(50\times\) reductions in training costs compared to alternative adaptive sampling strategies. We experimentally demonstrate that our method extends to real-world noise as well. More broadly, the polynomial approximations of the specification-loss landscape developed in this work may provide a useful tool for efficiently training networks to perform a range of imaging and computer vision tasks.
|
2305.04364
|
A Generalized Framework for Predictive Clustering and Optimization
|
Clustering is a powerful and extensively used data science tool. While
clustering is generally thought of as an unsupervised learning technique, there
are also supervised variations such as Spath's clusterwise regression that
attempt to find clusters of data that yield low regression error on a
supervised target. We believe that clusterwise regression is just a single
vertex of a largely unexplored design space of supervised clustering models. In
this article, we define a generalized optimization framework for predictive
clustering that admits different cluster definitions (arbitrary point
assignment, closest center, and bounding box) and both regression and
classification objectives. We then present a joint optimization strategy that
exploits mixed-integer linear programming (MILP) for global optimization in
this generalized framework. To alleviate scalability concerns for large
datasets, we also provide highly scalable greedy algorithms inspired by the
Majorization-Minimization (MM) framework. Finally, we demonstrate the ability
of our models to uncover different interpretable discrete cluster structures in
data by experimenting with four real-world datasets.
|
Aravinth Chembu, Scott Sanner
|
2023-05-07T19:56:51Z
|
http://arxiv.org/abs/2305.04364v1
|
# A Generalized Framework for Predictive Clustering and Optimization
###### Abstract
Clustering is a powerful and extensively used data science tool. While clustering is generally thought of as an unsupervised learning technique, there are also supervised variations such as Spath's clusterwise regression that attempt to find clusters of data that yield low regression error on a supervised target. We believe that clusterwise regression is just a single vertex of a largely unexplored design space of supervised clustering models. In this article, we define a generalized optimization framework for predictive clustering that admits different cluster definitions (arbitrary point assignment, closest center, and bounding box) and both regression and classification objectives. We then present a joint optimization strategy that exploits mixed-integer linear programming (MILP) for global optimization in this generalized framework. To alleviate scalability concerns for large datasets, we also provide highly scalable greedy algorithms inspired by the Majorization-Minimization (MM) framework. Finally, we demonstrate the ability of our models to uncover different interpretable discrete cluster structures in data by experimenting with four real-world datasets.
**Keywords: Supervised clustering, mixed-integer linear programming, Majorization-Minimization, Data science**
## 1 Introduction
The availability of massive volumes of data coupled with the need to understand, analyze and explore patterns in them as a means to find solutions and drive decision making has made clustering a popular tool in data science. Cluster analysis is widely used in problems with unlabeled data and has become synonymous with unsupervised learning. It has been found helpful in a varied range of machine learning and data mining tasks, including pattern recognition, document clustering and retrieval, image segmentation, and medical and social sciences [34, 41, 55, 59, 60]. This reflects its broad applicability, and usefulness as an exploratory data analysis tool, especially for large datasets. However, relatively little focus has been directed towards using clustering for predictive tasks with labeled data.
In many cases, it is natural to assume that real data is generated from complex processes that might be mixtures of discrete modes of a predictive target response. Some of these modes could just be from different processes that generate the data [14, 22, 32, 48, 54], while others may be due to implicit or explicit confounders that lead to a significant change in the response variable being predicted. Naturally, a single predictive
model cannot capture such multiple relationships between the dependent and explanatory variables.
For an example use case, consider the housing price regression predictive task for a city. Crime rates influence the housing market, and in most cases, property values drop with an increase in crime [10, 20]. But in contrast to this trend, housing values in inner-city or downtown areas are high regardless of the high crime rates. This positive relationship could be because of increased reporting and higher property crimes in affluent high-income neighborhoods [10, 19, 39]. Clearly, one regression model cannot capture these two different trends in prices with respect to the crime rate. This kind of multiple regression modeling has been found suitable for analyzing data from various domains, including housing price prediction [15], marketing analysis [24], demographic neighborhood analysis [42], and weather prediction [6].
To this end, historically, several methods have gone beyond standard unsupervised clustering to supervised or predictive versions. Most of these models from the literature fall under the cluster-wise regression (CLR) category [9, 40, 49, 56]. These models primarily aim to identify disjoint subsets or explicit subclasses of the data that lead to different predictive (regression in this case) models in each cluster. However, existing methods for predictive clustering are largely bespoke for specific problems, supervised objectives, or cluster definitions and have largely gone unused as a general tool for data science. This motivated us to take a broader perspective towards clustering and build a framework to explore the predictive clustering design space.
Consider, for example, the samples of points shown in Figure 1. We generated this data consisting of points from three different regression planes such that the points in these three disjoint groups are reasonably well separated in the feature space. Predictive clustering aims to identify these distinct modes present in the data. The plots show multiple perspectives of clustering with the supervised regression objective used to solve this problem. We can either (1) assign data to clusters without any restriction on the search space (as is the case with traditional CLR [9, 49]), (2) define clusters as bounding boxes in the feature space, or (3) define clusters as the regions nearest to exemplar data centers [40, 56]. The plots with the projection of points in the feature plane show how these clustering methods differ and identify the three groups. We remark that to date, clustering methods have been defined for (1) and (3), but only limited approximate options are available when it comes to (2) [7, 12].
In this article, we seek to comprehensively explore the design space of supervised objectives and cluster definitions that allow us to identify several important gaps in this space. To this end, we formalize a general mathematical framework for predictive clustering that subsumes existing methods and introduces new ones. We also propose global optimization methods that can directly exploit our unifying formalization as well as general greedy optimization methods that are highly scalable for large-scale datasets, near-optimal on cases where we can compare to global methods, and which reduce to existing methodologies in some special cases. Finally, we demonstrate the power of this unified perspective through a variety of applications that exhibit how different supervised objective and cluster definitions allow us to detect and learn important discrete structures and behaviors in the dataset.
We summarize our main contributions in the article as follows:
* We present a general framework for predictive clustering that combines clustering with a supervised objective. Specifically, we focus on three clustering methods as shown in Figure 1, and we call them arbitrary, closest center, and bounding box clustering. Furthermore, we explore two supervised loss functions in our design space for regression and classification tasks. We identify that clusterwise classification adds novelty in the field of general linear classification.
* We provide two ways for optimizing the loss functions in our models: (1) mixed-integer linear programming (MILP) for global optimization, and (2) greedy methods inspired by the Majorization-Minimization (MM) prescription of algorithms to tackle the scalability issues of using MILP, and at the same time provide comparable but sub-optimal (locally optimal) solutions.
* We demonstrate the applicability of the different models in our framework with case studies
on four real-world datasets and evaluate its performance with baseline linear models. These models provide highly interpretable results that we believe will help in decision and policy making when applied to data science problems.
## 2 Related work
There is a substantial body of research related to clustering and its applications in unsupervised learning tasks. However, our proposed contributions focus more on clustering as a predictive tool. Therefore, we briefly survey the literature on available clustering techniques, followed by relevant research focused on using clustering for supervised learning tasks.
**K-means and alternative cluster definitions:** Clustering is an extensively researched topic, both in the context of advancements in clustering techniques (engineering highly scalable and fast algorithms) and in its applications to problems in data science. Since surveying this sheer mass of literature is beyond the scope of this article (some comprehensive clustering surveys are [34, 59, 60]), we focus only on several clustering techniques relevant to our work.
The most commonly used method for cluster analysis, especially in the context of hard-partition clustering, is the popular K-means algorithm [37]. It is a fast heuristic algorithm designed to solve the minimum sum-of-squares clustering problem (MSSC), where the task is to choose clusters such that the points within clusters have small sum-squared errors. Several attempts have been made to solve the MSSC problem optimally using column generation and integer linear programming [25, 2, 5, 13]; however, none of these could scale like K-means.
Similarly, several other definitions for clusters exist in the literature. Among them, density-based clustering [3, 26] has gained huge popularity, primarily because of its ability to produce arbitrarily shaped clusters, in contrast to K-means which can only deal with spherical clusters. Yet another approach is defining clusters based on grids, as first described in the CLIQUE [1] algorithm for clustering high-dimensional data. Here, the central idea was to first discretize the entire space into a mesh with a predefined grid size, followed by identifying grids with a dense collection of points in subspaces. Although our bounding boxes clustering method (refer to Figure 1(a)) resembles the grid-based definitions for a cluster, they differ in how these clusters are identified. CLIQUE uses a bottom-up approach, taking unions of dense cells from lower subspaces to define clusters in higher dimensions. In contrast, our model directly identifies the bounding boxes based on a supervised optimization objective.
Figure 1: Illustrative example with multiple generative modes that were uncovered with predictive clustering. (a) Different perspectives on clustering: (1) arbitrary clustering with points assigned without any restriction (left), (2) clusters defined as bounding boxes w.r.t. the features (center), and (3) clusters defined as regions closest to data centers, demonstrated here using a Voronoi plot with the three cluster centers shown as black points (right). (b) Synthetic data with three distinct regression planes (dependent variable Y as a linear function of two independent variables X1, X2), where the projection of points onto the X1-X2 feature space gives well-separated clusters shown by the blue, orange, and green points (right).
**Predictive Clustering:** Numerous methods have been mentioned in the literature that have moved away from discussing clustering in the traditional sense and have focused on using it for predictive purposes. However, most of these models were designed for a specific supervised learning objective or application. Therefore, we survey the supervised clustering literature in two directions: clusterwise methods for regression and classification.
**Clusterwise Regression (CLR) greedy models:** The central idea in clusterwise regression (CLR) is to split the data into several disjoint sets to identify the various regression modes present in them. In the pioneering work of Spath [49] in CLR, he proposed an exchange method to jointly optimize the overall regression error by unifying regression and clustering phases. In this approach, two observations from different clusters would be exchanged if it reduces the overall error. In follow-up work, Spath [50] proposed a faster exchange algorithm where a single observation is shifted between clusters if it reduces the overall cost. More recently, Manwani and Sastry [40] proposed the K-plane regression algorithm, which is similar, in spirit, to the K-means [37] algorithm. This approach repeatedly involves (1) identifying the best regression weights in each cluster and (2) reassigning each observation to have the least error when assigned to that cluster. The above heuristic approaches provide acceptable solutions in many cases; however, as with K-means, they are sensitive to initializations and converge to sub-optimal solutions.
**Optimal CLR methods:** Several researchers have tried to provide globally optimal solutions for the CLR problem. Lau et al. [36] proposed a nonlinear programming formulation for a variant of the CLR problem, but they do not provide any guarantees for the optimal solution. A more common approach seen in the literature starts from the CLR problem's quadratic programming (QP) reformulation. As a first, Carbonneau et al. [16] proposed a mixed-logical quadratic programming formulation to feasibly solve the CLR problem to global optimality. They further improved upon this approach in their later works [17, 18], where they used linear integer programming tricks such as column generation and repetitive branch and bound [13] methods coupled with heuristic algorithms.
As an alternative approach, Bertsimas and Shioda [9] proposed the CRIO model where they used the more robust absolute error metric (similar to Spath's model in [51]) as the regression loss and used MILP to solve the problem optimally. In more recent research, Zhu et al. [62] also adopt the same approach for CLR. Obviously, this approach is more elegant and computationally less intensive than the QP counterparts. Here, we remark that the CLR method, specifically CRIO [9], is captured in our framework under arbitrary clustering (refer to Table 1). This approach to clustering works reasonably well for some instances, especially when the regression lines from two different clusters intersect. However, as shown in Figure 1(a), arbitrary assignment fails to identify the three well-separated clusters. Thus, homogeneity among points in clusters w.r.t. the feature variables can be a desirable trait, as argued by several researchers in their works on CLR [14, 40, 48, 54, 56].
To address this drawback and obtain homogeneous clusters, Manwani and Sastry [40] expanded on their work to present a modified K-plane regression algorithm. In this approach, the authors added the MSSC loss (w.r.t. the independent variables) to the squared error regression loss (with a regularization parameter). The same approach was used by Silva et al. [48]. In more recent research, the authors in [56] presented the Optimal Predictive Clustering (OPC) method where they used a variant of the cost function used in the above approaches. They included the dependent variable (along with the features) while computing the MSSC error per cluster. Furthermore, they provided a greedy algorithm based on K-means++ [4] to warm-start the mixed-integer quadratic programming method to obtain near-optimal results. These approaches are arguably similar to our regression with the closest center clustering method. However, in contrast to OPC [56], we use the absolute error cost function for regression and hence obtain a computationally more tractable MILP formulation for global optimization. Moreover, we provide a different methodology for greedy optimization when
compared with the modified K-plane regression algorithm [40].
We also present a novel bounding box clustering methodology to solve the CLR problem. With this approach, not only do we retain the critical advantage of the closest center method of having coherent clusters, but we also identify a set of decision rules to define a cluster. This adds to the interpretability of our results. This may be in a similar vein to model trees [45, 57], where a greedy approach is used to build decision trees with regression models at the leaves. Also, to solve model trees optimally, Bertsimas and Dunn [8] presented the Optimal Regression trees with linear predictions (ORT-L) model. Fundamentally, these approaches build a decision tree in search of good regression fits at the leaves (with just binary splits on a single feature at each node); hence, we believe that this approximates our approach. In contrast, our model performs a more holistic search of the feature space to identify the best set of bounding boxes.
**Clusterwise Classification:** Much of the research at the intersection of clustering and classification has been along the lines of either cluster-and-classify or clustering-based classification. In the former approach, clustering precedes the classification task. One such approach clustered large datasets to a relatively smaller number of clusters and used the cluster centroids to complete the classification task [27]. Other methods were more application-specific, where the data was first clustered, and then a classification model was run on each cluster. Fahad et al. [28] used this approach for activity recognition in smart homes; Tammenah et al. [53] used hierarchical clustering with Neural networks to classify road traffic accidents.
In contrast to the cluster-and-classify approaches, cluster-based classifiers perform the classification task assisted by clustering. Bertsimas and Shioda [9], in their CRIO, assigned one class of points to clusters such that no points of the other class belong in these clusters. The drawback of this approach is that it can only address a binary classification problem. Furthermore, clustering-assisted information retrieval and text classification are also common [29, 61]. In more recent research, clustering was used for information retrieval to find multiple clusters that hold highly relevant retrieved information [11]. All these approaches identify clusters such that all observations in them belong to the class of interest.
In this work, we focused on a per cluster classification model approach, similar in spirit to the CLR approach. Unlike the cluster-and-classify approaches, our clusterwise classification (CLC) models jointly optimize the overall error of the clustering and classification tasks. Moreover, we use the closest center and bounding box clustering for our CLC tasks. The bounding box approach for CLC can be seen as similar to classification trees [12] and their optimal versions called optimal classification trees [7]. However, in contrast to classification tree methods which partition the feature space to propose one class per leaf, our approaches have one classification model per partitioned space.
In summary, with our framework, we were able to identify and address critical gaps in the supervised clustering literature while simultaneously capturing some existing models like CLR and CRIO (refer to Table 1). Overall, in our work: (1) we directly capture the MILP-based CRIO approach and the K-plane regression greedy algorithm; (2) we provide a different problem formulation and loss function (MAE, which is more robust) for the OPC model; (3) we propose a greedy optimization strategy that is different from the modified K-plane regression algorithm; (4) we describe an alternative approach to CLR with our bounding boxes clustering model; and (5) we present a novel clusterwise classification approach.
\begin{table}
\begin{tabular}{l l l l} \hline \hline & \multicolumn{3}{c}{Cluster Assignment} \\ \cline{2-4} & Arbitrary & Closest Center & Bounding Boxes \\ \hline MSSC & - & K-means [37] & - \\ Regression & CLR [49], CRIO [9] & OPC [56], K-plane [40] & Model Trees [45], ORT [8] \\ Classification & - & - & Classification Trees [12], OCT [7] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of relevant works from the literature that are either directly captured in our framework or are approximate solutions to our models
## 3 Methodology
In this section, we formally present the predictive clustering framework and the mathematical notations we use. We then describe our two optimization procedures: (1) Mixed Integer optimization (MIO) and (2) greedy algorithms.
### Problem definition
As the name suggests, the framework for predictive clustering consists of two main "ingredients":
1. **Prediction:** involves optimizing a supervised objective function to predict the label or the dependent variable. Typical loss functions we include are mean squared error (MSE) and mean absolute error (MAE) for linear regression tasks, and hinge loss for soft margin support vector machines (SVM) for both binary and multi-class classification tasks.
2. **Cluster assignment:** which involves assigning every observation in the data to a cluster based on an assignment choice. As previously mentioned, the different options available are arbitrary (Arbit), closest center (CC), and bounding boxes (BB) clustering (refer to Figure 1(a)).
In the following subsections, we describe the above-mentioned clustering methods and loss functions in detail.
### Notation
We assume that we have \(N\) observations of the form \(D=\{(\mathbf{x_{i}},y_{i})\mid i\in\overline{N}=\{1,...,N\}\}\) in the data, where \(\mathbf{x_{i}}\) is the feature variable vector and \(y_{i}\) is the label to be predicted. Moreover, we assume that the features \(\mathbf{x_{i}}\in\mathbb{R}^{d}\) are \(d\)-dimensional (\(\mathbf{x_{i}}=(x_{i1},...,x_{id})\)). We note that clustering without any prediction is the trivial case when the labels \(y_{i}\) corresponding to all observations are null. The goal in hard-partitioning clustering is to assign each of the \(N\) observations to one of the \(K\) clusters \(\{C_{1},...,C_{K}\}\), where \(K\leq N\). We also have binary indicator variables \(c_{ik}\) to identify cluster assignments for all observations in the data. If a point \(i\) is associated with cluster \(C_{k}\), then we have \(c_{ik}=1\); \(c_{ik}=0\), otherwise. With the cluster definitions as above, we desire the following properties:
* No overlap between clusters: \(C_{k}\cap C_{j}=\phi,\ \forall\,k,j\in\overline{K}\ \text{and}\ \,k\neq j\)
* All observations are assigned to clusters: \(\bigcup_{i=1}^{K}C_{i}=D\)
We now define notations to capture various cluster definitions and supervised loss functions. We use the following notation throughout the rest of the article:
* Variables \(\mathbf{\theta_{k}}\) to denote the cluster-specific parameters of our model. It can be the weights of the regression planes or weights defining the hyperplanes separating the classes in a classification task.
* Per-datum error represented by \(l(\mathbf{x_{i}}\),\(y_{i},\mathbf{\theta_{k}})\) to typically indicate, for instance, the hinge loss in the case of SVM or squared error for regression associated with each observation.
* Overall error \(L(\mathbf{\theta,c})\) for any combination of clustering and supervised objective function given by: \[L(\mathbf{\theta,c})=\sum_{k=1}^{K}\sum_{i=1}^{N}l(\mathbf{x_{i}},y_{i},\mathbf{\theta_{k }})\ c_{ik}\] (1) The per-datum error \(l(\mathbf{x_{i}},y_{i},\mathbf{\theta_{k}})\) is multiplied by the indicator variable \(c_{ik}\) to ensure that for each observation we only account for the error associated with the cluster it is assigned to.
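As a small illustration of this bookkeeping, the sketch below evaluates \(L(\mathbf{\theta,c})\) for a given per-datum loss, assignment matrix \(c\), and list of cluster parameters; the per-datum losses shown anticipate the objectives of the next subsection.

```python
import numpy as np

def overall_loss(per_datum_loss, X, y, thetas, c):
    """Overall error L(theta, c) = sum_k sum_i l(x_i, y_i, theta_k) * c_ik  (Equation 1)."""
    N, K = c.shape
    total = 0.0
    for k in range(K):
        losses = np.array([per_datum_loss(X[i], y[i], thetas[k]) for i in range(N)])
        total += losses @ c[:, k]          # only points with c_ik = 1 contribute to cluster k
    return total

mae = lambda x, y, theta: abs(y - theta @ x)         # per-datum regression loss (MAE)
mse = lambda x, y, theta: (y - theta @ x) ** 2       # per-datum regression loss (MSE)
```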
### Supervised learning objective
In this subsection, we discuss the supervised error functions we used in our framework.
1. **Regression loss:** The central idea in CLR is to cluster the data while simultaneously learning cluster-specific regression models through a joint optimization methodology. We can use either mean squared error (MSE) or mean absolute error (MAE) to optimize our regression. With MAE, the total error would be given by:
\[\min_{c,\mathbf{\theta}} \sum_{k=1}^{K}\sum_{i=1}^{N}\lvert y_{i}-\mathbf{\theta^{\prime}_{k}} \mathbf{x_{i}}\rvert c_{ik} \tag{2}\]
Here, \(\mathbf{\theta_{k}}\) stands for the regression weights in the \(k\)-th cluster and the per-datum loss for this case is \(l(\mathbf{x_{i}},y_{i},\mathbf{\theta_{k}})=\lvert y_{i}-\mathbf{\theta^{\prime}_{k}}\mathbf{x _{i}}\rvert\). Similarly, for the MSE loss function, we have \(l(\mathbf{x_{i}},y_{i},\mathbf{\theta_{k}})=(y_{i}-\mathbf{\theta^{\prime}_{k}}\mathbf{x_{i}} )^{2}\). While both error measures are quite similar, MAE penalizes the outliers less substantially and hence, is more robust than MSE. Moreover, the overall CLR problem with the MAE loss reduces to a MILP formulation which can be more tractable and computationally less expensive than a Quadratic Programming formulation with MSE loss.
2. **Classification loss :** Similar to CLR, the purpose of CLC is to group points and run per-cluster classification models to drive the overall classification error to a minimum. This article only focuses on multi-class classification with SVM, wherein we find the hyperplanes that best separate the multiple classes found in each cluster.
The classical approach to solving the multi-class problem with SVM is to employ a collection of binary classifiers with the one-vs-all classification trick [46]. However, for our MILP formulations, we utilize the first "single machine" approach for the multi-class case, the Weston and Watkins SVM (WW-SVM) [58]. This approach provides a single error value per datum, which we could then elegantly plug into our framework. For an \(M\)-class classification task, the overall cost function with this approach, along with an L1 regularization [23] for the coefficients, is given by:
\[\min_{c,\mathbf{\theta}} \sum_{k=1}^{K}\sum_{m=1}^{M}\lVert\mathbf{\theta_{k,m}}\rVert_{1}+C \sum_{k=1}^{K}\sum_{i=1}^{N}\sum_{m\neq y_{i}}\xi_{i}^{m}c_{ik} \tag{3}\] \[\mathrm{s.t.}\quad \mathbf{\theta^{\prime}_{k,y_{i}}}\mathbf{x_{i}}\geq\mathbf{\theta^{\prime}_{k,m}}\mathbf{x_{i}}+2-\xi_{i}^{m},\quad m\in\{1,...,M\}\setminus\{y_{i}\}\]
Here, the per-datum loss would be given by \(l(\mathbf{x_{i}},y_{i},\mathbf{\theta_{k}})=\sum_{m\neq y_{i}}\xi_{i}^{m}\).
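In code, this per-datum loss can be evaluated by setting each slack to its smallest feasible value (assuming, as usual for WW-SVM, that the slacks are nonnegative); the sketch below assumes \(\mathbf{\theta_{k}}\) is stored with one weight row per class.

```python
import numpy as np

def ww_svm_per_datum_loss(x, y, theta_k):
    """Per-datum Weston-Watkins loss: sum_{m != y} max(0, theta_{k,m}'x + 2 - theta_{k,y}'x)."""
    scores = theta_k @ x                  # theta_k: (num_classes, d) weight matrix for cluster k
    margins = scores - scores[y] + 2.0    # violation of each constraint in Equation 3
    margins[y] = 0.0                      # the true class contributes no slack
    return float(np.sum(np.maximum(margins, 0.0)))
```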
### Cluster assignment
In this subsection, we briefly introduce the three unique clustering methods currently included in our framework. We provide the mathematical formulations necessary to achieve these cluster definitions in Section 3.5 along with a mixed-integer optimization procedure to solve the overall model.
1. **Arbitrary clustering (Arbit):** The assignment of a point to a cluster is independent of any constraints, and optimizing the supervised learning objective drives these assignments. For example, when we combine regression and arbitrary clustering, we obtain the traditional CLR model [9, 49]. The main advantage of arbitrary clustering is its ability to find overlapping clusters, specifically, intersecting regression lines in the case of CLR. However, as noted previously with the synthetic data example in Figure 1, this method fails to identify the three well-separated clusters but instead provides overlapping clusters as its solution. Hence, we seek other clustering methods that provide within-cluster homogeneity.
2. **Closest center (CC) clustering:** In this clustering method, points are assigned to their closest cluster center to give spherical-shaped coherent clusters in the feature space. The central idea is to determine the best \(K\) cluster centers such that assigning observations in the data to them minimizes the overall loss function. The cluster centers help interpret and analyze the profile of points that belong to a cluster.
3. **Bounding boxes (BB) clustering:** The fundamental idea is to define clusters as axis-parallel hypercuboids (rectangles in the two-feature case, as shown in Figure 1(a)), and observations that fall within the boundaries of a cluster belong to that cluster. A key benefit is that clusters can now be characterized with a set of decision rules like a DNF expression, making the models highly interpretable.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{Clustering Type} \\ \cline{2-4} & Arbitrary & Closest Center & Bounding Box \\ \hline MAE/MSE regression loss & ✓ & ✓ & ✓ \\ Hinge loss for SVM & & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of the predictive clustering design space. We exclude the combination of Arbitrary cluster assignment for Hinge loss (SVM) since having overlapping SVMs increases complexity and hence reduces the interpretability of the model.
### Mixed Integer Optimization
Having presented the two components of our framework, clustering and the prediction objective, in the previous subsections, we now show how to marry them together to get the desired model. The available model choices in our design space are shown in Table 2. We can mix and match the three cluster definitions and the two loss functions to give an array of models tailored to specific problems.
Our general strategy for optimization is to define a set of constraints for each of the three clustering methods and combine it with the previously described supervised objective functions. We then employ MIO to obtain globally optimal results for our models. The general form of the objective function is given in Equation 1. The objective function in this form is non-linear. Therefore, we reformulated the cost function using the "big-M" method as follows:
\[\begin{split}\min_{c,\boldsymbol{\theta}}&\sum_{k=1 }^{K}\sum_{i=1}^{N}e_{ik}\\ \text{s.t.}& l(\boldsymbol{x_{i}},y_{i},\boldsymbol {\theta_{k}})-e_{ik}\leq M*(1-c_{ik}),\\ i&\in\overline{N},\ \ k\in\overline{K}\\ & e_{ik}\geq 0,\quad i\in\overline{N},\ \ k\in\overline{K}\\ & c_{ik}\in\{0,1\},\quad i\in\overline{N},\ \ k\in\overline{K}\end{split} \tag{4}\]
With this reformulation, we forced the new variable \(e_{ik}\) to take the value of \(l(\boldsymbol{x_{i}},y_{i},\boldsymbol{\theta_{k}})\) when \(c_{ik}=1\). Additionally, when \(c_{ik}=0\), minimization of the objective function along with the constraints ensured that \(e_{ik}=0\), i.e., when an observation does not belong to a cluster \(k\), it does not incur prediction error w.r.t. that cluster. We remark that the choice of the big-M is critical in ensuring the reformulation works as expected.
In clustering, the main objective is to associate each point to one cluster and further add clustering type-specific restrictions. This is achieved by appropriately placing constraints on the indicator variables \(c_{ik}\). We describe these constraints in Table 3.
In the case of (1) arbitrary clustering: we used constraints to make sure that a point is assigned to only one cluster; (2) closest center clustering: we introduced variables \(d_{i}\) to capture the distance between a point and the cluster center \(\boldsymbol{\beta_{k}}\). We added this variable to the objective function to ensure that points are assigned to their closest cluster center. A hyperparameter \(\lambda\) was also used to trade off between the supervised error and the point-to-cluster-center distances. In such a formulation, the indicator constraints along with the minimization criteria ensured that when \(c_{ik}=1\), then \(d_{i}=\|\boldsymbol{x_{i}}-\boldsymbol{\beta_{k}}\|_{1}\). We choose the L1 norm distance metric to compute distances between points and cluster centers so as to have a computationally more feasible linear programming formulation; (3) bounding box clustering: we employed additional variables \(x_{kj}^{max}\) and \(x_{kj}^{min}\) to define the edges of the bounding box, and indicator variables \(I_{ikj}\) to force points that fall within these boundaries to belong to that cluster.
With the appropriate choice of the loss function, which was MAE for regression and L1 regularized SVM loss for classification, we had reformulated our overall problem as a MILP. However, such a MILP-based approach is NP-hard and a very difficult problem to solve [36]. This methodology is only practical with small datasets with a few hundred observations. Therefore, we describe greedy approaches to optimize our models in the following subsection. We used our MILP based solutions to benchmark these greedy methods with synthetic datasets.
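As an illustration of how these pieces fit together, the sketch below builds the arbitrary-clustering CLR MILP (MAE loss, big-M linearization of Equation 4, and the one-cluster-per-point constraint) with the PuLP modeling library; the closest center and bounding box constraints are omitted, and the big-M value is a placeholder.

```python
import pulp

def clr_arbitrary_milp(X, y, K, big_M=1e3):
    """Clusterwise regression MILP with arbitrary assignment, MAE loss, and big-M linearization.
    X is a list of feature vectors (append a constant feature if an intercept is desired)."""
    N, d = len(y), len(X[0])
    prob = pulp.LpProblem("CLR_arbitrary", pulp.LpMinimize)
    theta = pulp.LpVariable.dicts("theta", (range(K), range(d)))           # regression weights
    c = pulp.LpVariable.dicts("c", (range(N), range(K)), cat="Binary")     # cluster assignments
    e = pulp.LpVariable.dicts("e", (range(N), range(K)), lowBound=0)       # per-datum errors

    prob += pulp.lpSum(e[i][k] for i in range(N) for k in range(K))        # objective of Equation 4
    for i in range(N):
        prob += pulp.lpSum(c[i][k] for k in range(K)) == 1                 # one cluster per point
        for k in range(K):
            resid = y[i] - pulp.lpSum(theta[k][j] * X[i][j] for j in range(d))
            prob += resid - e[i][k] <= big_M * (1 - c[i][k])               # e_ik >= |resid| when c_ik = 1
            prob += -resid - e[i][k] <= big_M * (1 - c[i][k])
    prob.solve()
    return prob
```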
### Greedy Optimization
We were inspired by the Majorization-Minimization (MM) [33, 35, 43] algorithm
framework to build our greedy methods. Fundamentally, the MM prescription for constructing algorithms is based on the principle of identifying a suitable, "easy to optimize" surrogate function to assist in the optimization of a non-convex objective. The algorithms iteratively optimize a sequence of these surrogate functions to drive the optimization of the original objective.
Formally, in a minimization task for an objective function \(f(\theta)\) w.r.t. a parameter \(\theta\), we have a surrogate majorizing function \(g_{t}(\theta)\) at the \(t\)-th iteration satisfying the following: (1) the touching condition \(f(\theta^{(t)})=g_{t}(\theta^{(t)})\), which ensures that both functions have the same value at \(\theta^{(t)}\); and (2) the condition that \(g_{t}(\theta)\) majorizes \(f(\theta)\), i.e., \(f(\theta)\leq g_{t}(\theta)\). At each time step, we minimize the majorizing function to obtain the value of the parameters for the next time step, given by \(\theta^{(t+1)}\). This process is repeated to drive the original objective to a minimum, but without assured convergence to a global optimum. The commonly seen expectation-maximization (EM) approach is a special case of the MM algorithm.
Table 3: MILP formulations of the cluster-assignment constraints for arbitrary, closest center, and bounding box clustering (for the arbitrary case: \(\min_{\mathbf{c},\boldsymbol{\theta}}\ \sum_{k=1}^{K}\sum_{i=1}^{N}l(\boldsymbol{x_{i}},y_{i},\boldsymbol{\theta_{k}})\,c_{ik}\) subject to \(\sum_{k=1}^{K}c_{ik}=1\) and \(c_{ik}\in\{0,1\}\) for all \(i,k\)).
We briefly describe our algorithm and the elements of the MM framework that we adopted in our greedy search for clusters with the joint optimization of the supervised loss. We used the 'Predictive-clustering' algorithm to wrap the overall procedure and an 'assignment' subroutine to assign points to clusters.
1. **Predictive-clustering algorithm:** This algorithm, as shown in Algorithm 1, runs an iterative procedure with two steps until the loss function converges up to a threshold. First, the cluster assignment variables are randomly initialized. This is followed by an iterative procedure that involves: (1) optimizing the error function after fixing the point-cluster assignments to learn a new set of parameters \(\theta^{(t)}\) (line 4 in Algorithm 1); and (2) reassigning points to clusters based on the new parameters and clustering criteria (line 5 in Algorithm 1). We utilized the more traditional MSE loss function for regression and L2-regularized SVM for classification tasks in our greedy methods. As a result, when the cluster assignments were fixed, the overall objective reduced to a per-cluster supervised learning (regression or classification) problem with smooth convex loss functions that were easy to solve. Consider the regression case when the cluster assignments are fixed, \[g_{t}(\theta)=\sum_{k=1}^{K}\sum_{i=1}^{N}(y_{i}-\mathbf{\theta_{k}^{\prime}x_{i}})^{2}\,c_{ik}^{(t)},\qquad c_{ik}^{(t)}\ \text{fixed},\ i\in\overline{N},\ k\in\overline{K}\] (5) Under the MM framework definitions, the function \(g_{t}(\theta)\) in Equation 5 is our easy-to-solve convex surrogate function. Optimizing this function gives us the best regression weights for the next iteration \(\theta_{k}^{(t)}\). This step is followed by the reassignment step where the 'Assignment' function is called to return the indicator variables \(c_{ik}^{(t+1)}\) for the next iteration.
2. **Assignment function:** The reassignment step is more complicated since it needs to address the different cluster assignment criteria. Here, the cluster-specific parameters \(\theta_{k}^{(t)}\) are fixed. This subroutine, as shown in Algorithm 2, reassigns a point to a different cluster if it has a lower prediction error when assigned to that cluster. Continuing the regression example, the new assignments are as follows: \[c_{ik}^{(t+1)}=\mathbb{1}_{\{k=k_{i}^{*}\}},\qquad k_{i}^{*}=\arg\min_{k}l(\mathbf{x_{i}},y_{i},\mathbf{\theta_{k}^{(t)}})\] (6) When the function stops at this step (line 1 in Algorithm 2) and returns the new assignment variables \(c_{ik}^{(t+1)}\), we arrive at the result for the arbitrary clustering case. These new assignment variables can now be used to define the surrogate function for the next iteration by plugging them into Equation 5. Precisely, this assignment step as described in Equation 6
ensures the "touching condition" for the next time step under the MM framework definition. A similar procedure is described in a recent work by Manwani and Sastry [40] (called the K-plane regression algorithm) to solve the traditional CLR problem, but they do not make this connection between their algorithm and the MM approach.
Furthermore, we extend this assignment subroutine to address the other clustering types by slightly deviating from the MM framework. The function achieves the other clustering methods by either: (1) computing the new cluster centroids (with variables \(c_{ik}^{(t+1)}\)), and reassigning all points to the closest cluster center using the Euclidean distance metric to get the closest center clustering (line 4 in Algorithm 2); (2) computing the new centroids as above but now assigning points to the closest cluster centers using the L1 norm distance metric to obtain approximate bounding box clustering (line 6 in Algorithm 2). This is because when Voronoi diagrams are plotted with the L1 norm distance, the polygon edges are axis-parallel, giving an approximate bounding box shape.
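The following sketch outlines the regression version of this greedy procedure as described above; the centroid-based reassignment for the closest center ('cc') and approximate bounding box ('bb') variants uses the L2 and L1 distances respectively, and details such as the empty-cluster handling are our own simplifications.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def greedy_predictive_clustering(X, y, K, clustering="arbitrary", iters=50, seed=0):
    """Greedy CLR in the spirit of Algorithms 1-2: alternate per-cluster fits and reassignment.
    X is an (N, d) array and y an (N,) array."""
    rng = np.random.default_rng(seed)
    assign = rng.integers(K, size=len(y))                      # random initialization
    for _ in range(iters):
        models = []
        for k in range(K):
            mask = assign == k
            if not mask.any():                                  # crude fix-up for empty clusters
                mask = rng.random(len(y)) < 1.0 / K
            models.append(LinearRegression().fit(X[mask], y[mask]))
        errors = np.stack([(y - m.predict(X)) ** 2 for m in models], axis=1)
        new_assign = errors.argmin(axis=1)                      # per-datum reassignment (Equation 6)
        if clustering in ("cc", "bb"):                          # recompute centroids, assign by distance
            centers = np.stack([X[new_assign == k].mean(axis=0) if (new_assign == k).any()
                                else X.mean(axis=0) for k in range(K)])
            ord_ = 2 if clustering == "cc" else 1               # L1 distance -> axis-parallel cells
            new_assign = np.linalg.norm(X[:, None, :] - centers[None, :, :],
                                        ord=ord_, axis=2).argmin(axis=1)
        if np.array_equal(new_assign, assign):
            break
        assign = new_assign
    return assign, models
```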
## 4 Results
In this section, we experimentally investigate the ability of our models to converge to the ground truth using synthetic datasets. We compare our greedy methods with the MILP-based approach, and report the results from our experiments. We then report results for four real-world datasets to motivate and demonstrate the applicability of our models.
### Performance Evaluation
In this subsection, we benchmark the performance of our greedy methods against the MILP-based approaches and empirically show that these methods perform well using synthetic datasets. We obtain globally optimal solutions with MILP-based approaches, but they are not scalable. In contrast, greedy methods may not guarantee global optimality, but they are more practical for real-world data. Therefore, we designed our experiments intending to understand (1) how well these greedy methods learn the underlying true generative model for several synthetic datasets; and (2) how time-efficient they are compared to MILP methods.
To implement our MILP-based approach, we employed the commercially available Gurobi solver [30], which is free for academic use. We evaluate all our models on a desktop computer with an 8-core CPU at 3.2 GHz and 8 GB memory. For our MILP models, we fixed the exit optimality-gap threshold at 5%. We also prescribed an upper limit of 1 hour of running time per experiment. On the other hand, each evaluation for our greedy methods was carried out by averaging the results over ten independent runs of these models on the synthetic data.
Since we aim to understand the ability of our models to learn the underlying ground truth, we chose different generative models to construct the synthetic datasets. First, we took two feature variables and generated reasonably well-separated clusters of points in this feature space, with the number of clusters \(K\in\{2,3\}\). Then, we used cluster-specific regression weights chosen randomly to give two datasets for the CLR task (Gaussian noise was also added). Similarly, two datasets for the CLC task were generated with a binary-classification objective per cluster (the hyperplanes separating the classes are different for different clusters), with some noise (class labels assigned randomly). Finally, we ran our experiments by varying the size of the data \(N\) from 20 to \(10^{4}\).
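A sketch of the kind of generator we have in mind for the regression datasets is shown below (the cluster separations, weight ranges, and noise level are illustrative values, not the exact ones used in our experiments).

```python
import numpy as np

def make_clr_synthetic(N=500, K=3, noise=0.1, seed=0):
    """K well-separated 2-D clusters, each with its own regression plane plus Gaussian noise."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-5, 5, size=(K, 2))                  # cluster locations in feature space
    weights = rng.uniform(-3, 3, size=(K, 3))                  # [w1, w2, intercept] per cluster
    labels = rng.integers(K, size=N)
    X = centers[labels] + rng.normal(0, 0.5, size=(N, 2))
    y = (X * weights[labels, :2]).sum(axis=1) + weights[labels, 2] + rng.normal(0, noise, size=N)
    return X, y, labels
```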
**Regression case:** We report the results in Figure 2(a) for the two datasets, with the experimental setup described above. Here, we compared the MILP-based and greedy methods for each of the three clustering types. The evaluation metric we utilized was the overall \(R^{2}\) score, to measure the goodness of the regression fit across the different clusters.
We observed that MILP for the closest center and arbitrary clustering methods were not feasible for more than \(N=250\) observations. In fact, the optimality gap was over 30% at the 1-hour exit condition for the \(N=250\) case, and hence, we do not report results for larger \(N\) values. Interestingly, we found that MILP for the bounding box method is much more scalable. Moreover, it is evident that the greedy methods perform well in most cases and are comparable to the MILP methods. We also note that the performance of greedy CLR
with arbitrary assignment is good for the synthetic data 2 but does poorly for the other data. This is because overlapping regression planes, although not representative of the underlying trends in the data, can sometimes lead to a lower error due to the added noise.
**Classification case:** Similarly, we report the classification results in Figure 2(b) for our four models: closest centroid and bounding box clustering with the greedy and MILP approaches. Here, we used the accuracy metric to evaluate our models. It is evident that the greedy methods perform as well as the MILP on most occasions. While it may be concerning that the greedy methods performed poorly when \(N\leq 50\), we remark that our greedy methods can sometimes reach a solution with all points in the same cluster when the number of observations is very small, resulting in a poor local minimum.
In conclusion, these evaluations show that the greedy algorithms provide scalable solutions that are a good approximation to the MILP-based methods. Furthermore, with the bounding boxes MILP method succeeding in attaining the 5% optimality-gap threshold even in cases with more than 1000 observations in the data, we found that it is significantly more scalable than the MILP for the other clustering methods. We believe that defining clusters as bounding boxes introduces much stronger constraints (or cuts on the feasible space), resulting in a much-reduced search space for the MILP solvers.
### Case Study
In this section, we illustrate the relevance of the array of models available in our design space by analyzing four different real-world datasets picked from a diverse set of domains. Each of these application problems asks a very different question, and we show how we can mix and match tools available in our framework to address them. Through these case studies, we aim at exploring our models' ability to (1) scale for large datasets, (2) perform better than the baseline linear models, and (3) provide highly interpretable results that help in uncovering the different underlying modes of behaviors in data. We focus on benchmarking model performance with the Boston housing dataset, interpretability of results with San Francisco crime rate and FAA Wildlife-strike dataset, and the model's ability to scale with the MovieLens 100k dataset.
As a general preprocessing step, we partitioned the data into the train (65%), validation (15%), and test (20%) sets to tune our hyperparameters (with a focus on finding the best \(K\) clusters) and report the 5-fold cross-validation results. We used \(\text{R}^{2}\) score and accuracy metric to evaluate
Figure 2: Performance of the MILP-based and greedy algorithms with (a) regression and (b) classification tasks for the different clustering types with 4 synthetic datasets (two per supervised objective). The size of the dataset is varied from \(N=20\) to \(10^{4}\), and the time taken (shown in minutes for MILP and in seconds for the greedy approach) to run the models and the resulting \(\text{R}^{2}\) score/accuracy obtained are reported.
our regression and classification models, respectively. Furthermore, we compared the results from our greedy algorithms with baseline Lasso regression and SVM one-vs-all models from the sklearn package in python [44].
#### 4.2.1 Boston Housing data
We use the popular and well-studied Boston housing dataset to perform a clusterwise regression analysis and benchmark our model's performance. As mentioned previously, we expect property values to have multiple trends in different parts of the city. This makes Boston housing1 an interesting and relevant dataset to analyze using CLR and understand if our models can identify various trends.
Footnote 1: Stat-CMU StatLib Datasets Archive. Boston house-price data. Retrieved from [http://lib.stat.cmu.edu/datasets/boston](http://lib.stat.cmu.edu/datasets/boston)
The dataset is small and has \(N=506\) observations with 13 features. The variable for prediction is the median value of houses per census tract. The list of features, along with the prediction variable, is shown in Table 4. A description of these features is standardly found across articles [56]. We used our greedy methodology for the CLR-CC model to train this dataset. Our choice was based on the idea that the cluster centroids can help understand the average socio-economic and structural feature values in a cluster to explain the cluster-specific regression trends.
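A minimal sketch of such a greedy CLR-CC fit is given below; it alternates closest-centroid assignment with per-cluster regression refits. The random initialization, stopping rule, and empty-cluster guard are illustrative simplifications rather than the exact procedure used in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def greedy_clr_cc(X, y, k=6, n_iter=100, seed=0):
    """Alternate between (1) recomputing centroids and per-cluster regressions
    and (2) reassigning each point to its closest centroid."""
    rng = np.random.default_rng(seed)
    assign = rng.integers(0, k, size=len(X))                  # random initial assignment
    for _ in range(n_iter):
        centroids, models = [], []
        for j in range(k):
            idx = assign == j
            if idx.sum() < 2:                                 # crude guard against empty clusters
                idx = np.ones(len(X), dtype=bool)
            centroids.append(X[idx].mean(axis=0))
            models.append(LinearRegression().fit(X[idx], y[idx]))
        centroids = np.asarray(centroids)
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        new_assign = dists.argmin(axis=1)                     # closest-centroid step
        if np.array_equal(new_assign, assign):
            break
        assign = new_assign
    return assign, centroids, models
```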
We trained our model with \(K\in\{2,...,7\}\) clusters. The best results were found with 6 clusters, with an out-of-sample \(\text{R}^{2}\) score of 0.8622. This is very close to the average test \(\text{R}^{2}\) score of 0.863 reported by the authors in [56] with their optimal OPC model for the same dataset with 6 clusters.
We report the feature averages and regression weights for each cluster in Table 4. It is evident from cluster 5 that the property values are lowest for old houses in areas with a very high crime rate. We generally observe that prices increase as crime decreases (cluster 5, for example). Yet, in cluster 3, both property values and crime are high, and the variables are positively related (weight of 5.78), which is against the usual trend. This could potentially be a cluster of housing properties in Downtown where crime is generally high, and houses tend to be very expensive besides being old and small (also observed in Table 4). Another pattern worth noting is in cluster 1, which has the highest average housing price. Here, the costs increase steeply when the number of rooms increases and the educational environment improves (indicated by the PTRATIO variable). Overall, our model is able to pick up on the different modes in the data and discover unique patterns.
#### 4.2.2 FAA Wildlife-Strike data
The FAA Wildlife-strike database contains2 the records of all reported aircraft-wildlife strikes (mostly bird strikes) in the US in the last three decades. A general upward trend in the number of bird strikes has been observed over the years, as shown in Figure 4. This could be caused by many factors like increased flights and/or birds, or increased reporting every year. We were motivated to explore this dataset with predictive clustering to answer some of these questions.
Footnote 2: Federal Aviation Administration. FAA Wildlife Strike Database. Retrieved from [https://wildlife.faa.gov/home](https://wildlife.faa.gov/home)
We were mainly interested in two sets of feature variables: the 'level of damage' caused to the aircraft due to the bird strike and the 'region' in the US where it took place. In the database, we found six levels of damage: minor, substantial, uncertain level, destroyed, unknown (damage not reported) and none (no damage); and five regions in the US: Midwest, Northeast, South, West, and unknown (when the region is not known). These indicator levels were encoded as binary variables to generate our features. For example, variable 'South' \(=1\) when a bird strike took place in the South region of the US, and 0 otherwise. Finally, after the preprocessing and transformation steps, we grouped the individual bird strike records per year of the strike, damage levels, and the US regions. The resulting count of records, i.e., the number of strikes aggregated with respect to these features, was used as the prediction variable.
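The preprocessing described above could be sketched as follows; the file name and column labels are hypothetical placeholders for the actual FAA export fields.

```python
import pandas as pd

raw = pd.read_csv("faa_wildlife_strikes.csv")              # hypothetical export: one row per strike
raw["damage"] = raw["damage"].fillna("unknown")            # six damage levels incl. unknown/none
raw["region"] = raw["region"].fillna("unknown")            # five US regions incl. unknown

# Encode the indicator levels as binary features.
dummies = pd.get_dummies(raw[["damage", "region"]])
df = pd.concat([raw[["year"]], dummies], axis=1)

# Group by year, damage level and region; the count of records is the prediction variable.
grouped = df.groupby(list(df.columns)).size().reset_index(name="n_strikes")
X, y = grouped.drop(columns="n_strikes"), grouped["n_strikes"]
```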
Our strategy was to use our greedy CLR-BB model to identify highly interpretable clusters and capture different regression lines describing the prediction variable. We have \(N=803\) observations with 12 features in our generated dataset.
We trained our model with \(K\in\{2,3,4,5\}\) clusters and found that the \(K=4\) case gave the best results in terms of interpretability and performance. The average out-of-sample R\({}^{2}\) score for our model was 0.929, much higher than the R\({}^{2}\) score of 0.613 for the baseline lasso regression model. We report the regression weights and the min-max ('range') of the prediction variable in Table 5. We also leverage our bounding box clustering model's ability to define clusters as a set of decision rules to build a tree-shaped architecture, as shown in Figure 3.
It is evident from Table 5 that cluster 4 has a substantially lower number of bird-strikes reported, and the slope w.r.t. to year feature is very small. From the tree in Figure 3, we realize
\begin{table}
\begin{tabular}{l r r r r r r r r r r r} \hline \hline & \multicolumn{2}{c}{Cluster 1} & \multicolumn{2}{c}{Cluster 2} & \multicolumn{2}{c}{Cluster 3} & \multicolumn{2}{c}{Cluster 4} & \multicolumn{2}{c}{Cluster 5} & \multicolumn{2}{c}{Cluster 6} \\ \cline{2-13} Feature & ft & wt & ft & wt & ft & wt & ft & wt & ft & wt & ft & wt \\ \hline CRIM & 0.5 & -1.56 & 0.6 & 2.17 & 3.2 & 5.78 & 0.2 & 2.24 & 10.5 & -0.69 & 0.1 & 1.61 \\ ZN & 6.6 & -2.72 & 0.3 & 4.64 & 0.0 & 0.00 & 0.8 & -3.68 & 0.0 & 0.00 & 37.9 & 0.49 \\ INDUS & 7.1 & 4.53 & 12.4 & 1.71 & 19.9 & -0.88 & 8.6 & -1.97 & 18.5 & -2.50 & 4.7 & -1.52 \\ CHAS & 0.2 & 0.11 & 0.1 & 0.59 & 0.2 & 1.81 & 0.1 & 0.49 & 0.0 & -0.40 & 0.0 & 0.68 \\ NOX & 0.5 & -3.75 & 0.5 & -5.88 & 0.6 & -4.23 & 0.5 & -1.43 & 0.7 & -3.34 & 0.4 & -1.98 \\ RM & 7.1 & 5.98 & 5.9 & 0.91 & 6.3 & -4.37 & 6.1 & 4.42 & 5.9 & -1.51 & 6.6 & 7.13 \\ AGE & 76.8 & -2.99 & 90.4 & -1.89 & 83.5 & 0.68 & 63.1 & -2.06 & 92.0 & 1.64 & 32.2 & -1.25 \\ DIS & 3.0 & -5.64 & 3.2 & -1.28 & 2.5 & -11.48 & 3.7 & -3.33 & 1.9 & -1.31 & 6.3 & -1.36 \\ RAD & 4.7 & 6.64 & 4.4 & 0.85 & 16.6 & 2.17 & 4.5 & 3.02 & 21.0 & 1.99 & 4.2 & 1.72 \\ TAX & 289.1 & -11.61 & 330.9 & 0.42 & 600.2 & 0.06 & 311.9 & -2.16 & 631.8 & -2.26 & 295.4 & -2.59 \\ PTRATIO & 16.3 & -4.66 & 19.2 & -1.24 & 20.2 & 5.49 & 18.7 & -1.30 & 19.5 & -2.78 & 17.4 & -0.46 \\ B & 383.9 & -0.43 & 367.4 & 0.42 & 386.5 & -2.26 & 391.6 & -1.36 & 273.7 & 0.42 & 389.5 & 5.44 \\ LSTAT & 7.0 & -7.89 & 17.3 & -2.44 & 12.6 & -8.85 & 11.7 & 1.07 & 20.6 & -4.43 & 7.1 & 0.24 \\ \hline MEDV (y) & 33.1 & & 18.3 & & 22.2 & & 21.4 & & 15.0 & & 27.3 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Cluster-specific feature averages (ft) and regression weights (wt) for the Boston housing data analysis
\begin{table}
\begin{tabular}{l r r r r} \hline \hline Features & Cluster 1 & Cluster 2 & Cluster 3 & Cluster 4 \\ \hline Year & 354.3 & 746.6 & 256.4 & 2.3 \\ Destroyed & 0.0 & 0.0 & 0.0 & -8.7 \\ Minor damage & 0.0 & 0.0 & 0.0 & 9.2 \\ Damage none & 66.8 & 151.2 & 32.2 & 0.0 \\ Substantial damage & 0.0 & 0.0 & 0.0 & -2.6 \\ Damage uncertain level & 0.0 & 0.0 & 0.0 & -0.9 \\ Damage unknown & -67.3 & -152.2 & -32.4 & 0.0 \\ Midwest & 17.3 & 0.0 & 0.0 & -1.9 \\ Northeast & 0.0 & 0.0 & 0.0 & -5.0 \\ South & 0.0 & 0.0 & 0.0 & 6.9 \\ Region unknown & -77.9 & 0.0 & 0.0 & -0.8 \\ West & 60.4 & 0.0 & 0.0 & 0.7 \\ \hline \hline \# of bird-strikes (min-max) & 1.0 - 2354.0 & 45.0 - 3183.0 & 34.0 - 1039.0 & 1.0 - 235.0 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Cluster-specific regression weights for the FAA wildlife-strike data analysis
that this cluster corresponds to the cases when the flight had minor to substantial damage, i.e., when the 'Damage None' and 'Damage unknown' variables are both 0. For the "no" damage case, the model partitioned the data based on the regions to give 3 clusters. Clusters 2 and 3 correspond to bird strike reporting from the South and Northeast regions. Clearly, the highest reporting was from the South region, along with a steeper slope w.r.t. the year variable. This is reflected in Figure 4. The plot for the South region in Figure 4 shows two trends: one which increases steeply, corresponding to the "no" damage case (cluster 2), while the other remains flat, corresponding to the "some" damage case (cluster 4). Similarly, the plot for the Northeast region represents clusters 3 and 4.
Overall, our model was able to obtain very clear partitions along the features to identify regions of high and low bird strike activity, along with giving clarity on the damage levels in these regions. Since only the curve for the "no" damage case is increasing, there is reason to believe that increased awareness among pilots has resulted in higher reporting.
#### 4.2.3 SF Crime dataset
Through this case study, our goal was to categorize crime rates in the census tracts in San Francisco into three classes - high, medium, or low; and understand the relationship between crime in neighborhoods and census features. Moreover, we were motivated to leverage our models to conduct demographic analysis with a geospatial dataset and seek readily interpretable results. We used our greedy classification-bounding box clustering model to conduct this study. Since crime patterns primarily depend on location and intrinsic socio-economic features, we used the bounding boxes method to obtain spatially coherent clusters. To facilitate this study, we used the Longitudinal Tract Database (LTDB) [38] to get socio-economic and demographic features for the census tracts (2010) in San Francisco (SF). We then obtained the crime incidents reports from the police department database from 2003 to 2018 from DataSF3 (an open data portal for SF). We performed several preprocessing steps to prepare the data. First, we computed the per tract count of property and violent crime incidents reported, and then assigned class labels low, medium, and high (corresponding to classes 1,2, and 3) based on it. Next, we used mutual information [21] to pick the following census features [38]: housing units in multi-unit structures (multi), persons in poverty (npov), median house value (mhmval), people with at least a four-year college degree (col), professional employees (prof), per-capita income (incpc), latitude, and longitude (corresponding to the central point in a tract).
Footnote 3: City and County of San Francisco. Police Incident Reports. Retrieved [https://data.sfgov.org/Public-Safety/Police-Department-Incident-Reports-Historical-2003/tunnf-yvryr](https://data.sfgov.org/Public-Safety/Police-Department-Incident-Reports-Historical-2003/tunnf-yvryr)
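A sketch of this label construction and mutual-information-based feature selection is shown below; the input file, the tertile-based labeling, and the column names are illustrative assumptions rather than the exact pipeline used.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

tracts = pd.read_csv("sf_tracts_with_crime_counts.csv")     # hypothetical: LTDB features + crime counts

# Low/medium/high crime classes (1, 2, 3) from the per-tract incident counts.
tracts["crime_class"] = pd.qcut(tracts["crime_count"], q=3, labels=[1, 2, 3]).astype(int)

# Rank candidate census features by mutual information with the crime class.
candidates = [c for c in tracts.columns if c not in ("tract_id", "crime_count", "crime_class")]
mi = mutual_info_classif(tracts[candidates], tracts["crime_class"], random_state=0)
selected = [c for _, c in sorted(zip(mi, candidates), reverse=True)[:8]]   # top-8 features
```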
The generated data is relatively small with \(N=195\) observations with 8 features. We trained the data with our greedy CLC-BB model with \(K\in\{2,3,4\}\) clusters and found that the best results were obtained with \(K=3\) clusters. The average out-of-sample accuracy was 0.692, an improvement over the baseline linear classification result of 0.558. We report the feature averages and their boundaries for the three clusters in Figure 5c. The prediction label average was 1.86, 2.67, and 1.89 for the three clusters. Clearly, a larger class label average represents a high crime cluster. From Figure 5, it is evident that we have two low crime rate clusters (1 and 3). Moreover, we note that clusters 2 and 3 are similar in being high-income, well-educated, and high property value clusters. However, these clusters have two contrasting modes concerning crime rates.
Figure 3: Decision rules based tree architecture representing the 4-clusters obtained in the FAA wildlife-strike dataset analysis
Any traditional unsupervised clustering model like K-means would have put both these groups in the same cluster. Furthermore, a single linear classification model may not efficiently capture such intricate details and multiple modes present in the data.
With Figure 5, we show how we leveraged our bounding box clustering approach to understand the results further. For instance, by connecting the census tracts in the map containing the clusters from Figure 5(b) with the crime labels per tract shown in Figure 5(a), we recognize that cluster 2 corresponds to the high crime regions in downtown SF. Inner-city areas are expected to have higher crime reporting and a more significant proportion of the educated, high-income working-class population.
Overall, our clustering method was able to identify spatially coherent clusters while simultaneously recognizing the different modes of crimes observed in these regions. Also, it gave better performance than the baseline, along with interpretable and visually appealing results.
#### 4.2.4 MovieLens data
Our objective was to use the MovieLens-100K dataset [31] to perform a recommendation task using user and item features - content-based filtering. We transformed the data to a classification problem and applied our classification-closest centroid greedy model. Although we understand that state-of-the-art recommendation systems use collaborative filtering or hybrid methods [47, 52], we use our methodology as a proof of concept to demonstrate that predictive clustering models can be used as a first step in exploratory data analysis. Moreover, by using a large dataset for this case study, we could also complement the other three analyses that used relatively smaller datasets.
The MovieLens dataset contains information on 943 users, each rating a fraction of the 1682 movies available. To prepare our data, we went through several pre-processing steps. First, we identified the top 10 genres from a list of 19 genres that cover more than 85% of the movies in the list. We used these genre indicator variables as our movie features. Additionally, we obtained movie information like a popularity indicator, number of votes, vote average, and revenue-budget ratio from the IMDB database to supplement our movie features. Second, we used the user's gender and age (after binning the age) features for the user description. Finally, we merged the two datasets to obtain our user-item rating dataset. Since the ratings are from 1-5, we thresholded the ratings at 4, i.e., a rating greater than or equal to 4 was assigned to class 1 (recommend a movie), and class 0 otherwise. This generated dataset has more than 85K observations with 21 features.
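The merging and binarization step can be summarized as below; the file names and column labels are placeholders for the preprocessed MovieLens/IMDB tables described above.

```python
import pandas as pd

ratings = pd.read_csv("ratings.csv")    # user_id, movie_id, rating (1-5)
users = pd.read_csv("users.csv")        # user_id, gender, binned age
movies = pd.read_csv("movies.csv")      # movie_id, top-10 genre indicators, IMDB-derived features

data = ratings.merge(users, on="user_id").merge(movies, on="movie_id")
data["label"] = (data["rating"] >= 4).astype(int)   # class 1: recommend, class 0: do not
X = data.drop(columns=["rating", "label", "user_id", "movie_id"])
y = data["label"]
```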
Figure 4: General linear trend observed (left) versus multiple trends identified using our regression-BB clustering model in the South (center) and Northeast (right) regions of the US. For the South region (center), points denoted in orange (cluster 2) have a slope of 746.6 w.r.t. the year feature (increasing trend in reporting when the damage level is none or unknown), whereas the ones denoted in green (cluster 4) have a slope of 2.3 (i.e., the curve is flat where damage levels are destroyed, substantial, minor, or uncertain).
We used our greedy CLC-CC model for this dataset. We tuned the number of clusters \(K\in\{2,3,4,5,6,7\}\) and found the best results with \(K=3\). We observed an average test accuracy of 0.652 and a test root mean square error (rmse) of 0.589. This is a small improvement compared to the baseline classification, which gave an average accuracy of 0.637 and an rmse of 0.602. In Table 6, we report a summarized version of what each cluster represents based on the feature averages found in them. The label's average indicates how users react to movies that fall in a cluster. For instance, cluster 2 indicates that males between 20-30 and 45+ prefer drama, action, and thriller genres. In addition to this, we found the following variables to be significant based on the weights of the SVM hyperplanes found in each cluster: vote average,
\begin{table}
\begin{tabular}{l|l l l} \hline \hline Features & Cluster 1 & Cluster 2 & Cluster 3 \\ \hline Gender & Mostly women & Male & Both male and female \\ Age (years) & All age groups & 20-30 and 45+ & Less than 20, 30-45 \\ Genre & Drama, romance & Drama, thriller, action & Comedy, horror \\ \hline y (label) & 0.57 & 0.63 & 0.41 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Summary of the clusters based on the feature averages
Figure 5: The distribution of crime in SF census tracts and the representation of the three spatially coherent clusters obtained for the SF crime dataset analysis
vote count and age categories in cluster 1; vote count, action and crime genre features in cluster 2; and revenue-budget ratio, romance and thriller genres in cluster 3. Overall, our model was able to provide interpretable results that helped identify these interesting populations sections or target groups, as evident from Table 6.
## 5 Conclusion
In this section, we briefly summarize our key contributions and results, and discuss important limitations of our work along with possible future research directions to address them.
### Summary
In this article, we began with the observation that clustering for data science has often been viewed through the lens of K-means and the application of clustering for supervised tasks has largely been overlooked and unexplored. We took a broader outlook towards clustering and introduced a novel generalized framework for predictive clustering to address this deficiency.
In our framework, we presented different perspectives to define clusters and a general approach to combine clustering with various supervised objectives. As a result, an array of models falls out of this framework. Some of these have been previously explored in the literature; however, they were restrictive in their approach and largely application-driven. Furthermore, we presented two methodologies to optimize all models in our framework. Using MILP-based formulations, we ensured global optimization for our models and provided reproducible results. Our highly scalable and relatively efficient greedy algorithms inspired by the Majorization-minimization framework give a good approximation of the benchmark optimal MILP-based solution in instances where comparison was possible.
We also demonstrated the relevance of a predictive clustering framework by analyzing and obtaining results for four unique datasets from a diverse set of domains. Through our case studies, we were able to show how these models were able to detect the different generative modes present in the data and how we can interpret these results. Consequently, we obtained significantly better results compared to baseline regression and classification models.
More fundamentally, we focused on defining a framework that can break the notion of clustering as an unsupervised learning tool, uncover multiple "behavioral" modes present in the data, and discover "hidden" patterns in the supervised sense. As a step in this direction, we developed a small toolkit of supervised clustering methods, which can potentially be expanded to include many more cluster definitions and supervised objectives. This framework can not only be used as a novel conceptual approach to solve many problems in data science but can also outperform traditional linear models to give better results. Moreover, we believe that data scientists and policymakers could efficiently leverage this toolkit to obtain workable solutions that are highly interpretable and can help design policy interventions.
### Future work
Overall, we believe that this work brings an alternative broader outlook towards clustering and thereby can act as a catalyst to inspire a range of fascinating extensions and future applications. We list some exciting areas of future work below:
1. Expansion of the design space along both dimensions: By defining a generalized framework for optimization for predictive clustering, we enable the scope for expanding the design space along both the clustering type and supervised objective dimensions. For example, unsupervised clustering methods like DBSCAN and spectral clustering can be added to the array of the cluster definitions already a part of the framework. Density-based clustering like DBSCAN could add the flavor of having arbitrary-shaped clusters, unlike the hyper-cuboids of the bounding box and spherical clusters of closest center clustering. Furthermore, several other loss functions can be incorporated into the framework, including 0-1 loss and Huber loss for classification with MILP-based and greedy optimization and cross-entropy loss function with greedy optimization. This would provide a broader toolkit of methods to choose from to tackle the constantly evolving needs in the data science field. More importantly, such a framework will then partially enjoy the
non-linearity advantage of random forests and neural networks while still retaining its quality of being highly interpretable.
2. Scalable optimization: As seen previously, MILP-based methods for predictive clustering were not practical to solve in real-time for large datasets but nonetheless provided global optimization. Although we observed some improvement with the bounding box clustering in this aspect, there is undoubtedly a need to address scalability for these models. We believe that further research can utilize decomposition methods, tighter and symmetry breaking constraints, constraint and column generation techniques to strengthen the optimization. This would enable us to exploit the global optimization advantage of mixed integer optimization while being able to scale for large datasets that are generally encountered in the real world.
|
2304.13531
|
Integrated Architecture for Neural Networks and Security Primitives
using RRAM Crossbar
|
This paper proposes an architecture that integrates neural networks (NNs) and
hardware security modules using a single resistive random access memory (RRAM)
crossbar. The proposed architecture enables using a single crossbar to
implement NN, true random number generator (TRNG), and physical unclonable
function (PUF) applications while exploiting the multi-state storage
characteristic of the RRAM crossbar for the vector-matrix multiplication
operation required for the implementation of NN. The TRNG is implemented by
utilizing the crossbar's variation in device switching thresholds to generate
random bits. The PUF is implemented using the same crossbar initialized as an
entropy source for the TRNG. Additionally, the weights locking concept is
introduced to enhance the security of NNs by preventing unauthorized access to
the NN weights. The proposed architecture provides flexibility to configure the
RRAM device in multiple modes to suit different applications. It shows promise
in achieving a more efficient and compact design for the hardware
implementation of NNs and security primitives.
|
Simranjeet Singh, Furqan Zahoor, Gokulnath Rajendran, Vikas Rana, Sachin Patkar, Anupam Chattopadhyay, Farhad Merchant
|
2023-04-26T13:07:15Z
|
http://arxiv.org/abs/2304.13531v2
|
# Integrated Architecture for Neural Networks and Security Primitives using RRAM Crossbar
###### Abstract
This paper proposes an architecture that integrates neural networks (NNs) and hardware security modules using a single resistive random access memory (RRAM) crossbar. The proposed architecture enables using a single crossbar to implement NN, true random number generator (TRNG), and physical unclonable function (PUF) applications while exploiting the multi-state storage characteristic of the RRAM crossbar for the vector-matrix multiplication operation required for the implementation of NN. The TRNG is implemented by utilizing the crossbar's variation in device switching thresholds to generate random bits. The PUF is implemented using the same crossbar initialized as an entropy source for the TRNG. Additionally, the weights locking concept is introduced to enhance the security of NNs by preventing unauthorized access to the NN weights. The proposed architecture provides flexibility to configure the RRAM device in multiple modes to suit different applications. It shows promise in achieving a more efficient and compact design for the hardware implementation of NNs and security primitives.
TRNG, PUF, NN, RRAM, Memristors, Hardware Security
## I Introduction
The Internet of things (IoT) era has led to a significant increase in data exchange between processors and memory, which results in high power consumption. It ultimately degrades the system's performance [1]. The von Neumann architecture, which separates processing and memory units, often suffers from data access latency due to the large volume of data movement [2]. To overcome these limitations, various novel computing paradigms are being investigated. In-memory computing, which performs calculations entirely within the computer memory, has gained significant traction as a potential solution [3, 4, 5].
Additionally, it is necessary to authenticate IoT devices on the network to ensure data security and protection by maintaining their integrity, confidentiality, and availability, thus preventing any malicious attacks or unauthorized access. Physical unclonable function (PUF) circuits are becoming popular in IoT due to their ability to generate unique and unpredictable responses to challenges. This makes them highly useful for hardware security, such as device authentication and key generation, and for implementing security protocols ranging from device attestation to data encryption. Several circuits have been proposed for realizing in-memory computing architectures using resistive random access memory (RRAM) devices to implement various techniques, including PUF [6, 7, 8], neuromorphic neurons [9, 10] and digital gates [11, 12, 13]. RRAM is being considered as a potential candidate to address various drawbacks of conventional complementary metal oxide semiconductor (CMOS)-based architectures [14]. The relatively small size of RRAM devices also makes it highly feasible to integrate computing circuits and memory, thus realizing efficient architectures for learning algorithms, hardware security modules, and neural network (NN) applications [15].
RRAM has been extensively studied for main memory and in-memory computing architectures, but their stochastic nature and intrinsic variation in switching parameters have hindered their widespread adoption as next-generation memories [16]. However, this uncertain behavior is desirable for designing hardware security primitives [17]. Another commonly investigated hardware encryption module is true random number generators (TRNGs), which generate a stream of random numbers by exploiting randomness in physical processes [18]. While CMOS-based TRNG designs have been proposed, they only provide limited security-specific properties, paving the way for TRNGs based on emerging technologies. Among these designs, RRAM-based TRNGs demonstrate desirable properties, primarily due to their low power operation, high
Fig. 1: The NN locking using the embedded hardware security primitives by exploiting the variations of RRAM cells.
density, and stochastic filament formation [19].
The protection of trained neural network (NN) models has become crucial to prevent unauthorized access, which can lead to the cloning of the model by adversaries. This study proposes a novel architecture to implement NN and hardware security modules on a single RRAM crossbar, allowing only authorized users with the correct device to use the locked NN model [20]. The proposed architecture focuses on protecting the intellectual property (IP) rights of deep NN models. Fig. 1 illustrates the framework for locking the NN using the embedded security module. The major contributions of this work are as follows:
* Integrating the NN, PUF, and TRNG on the RRAM crossbar array.
* Discussion on how the proposed crossbar architecture can be used for realizing NN weights locking.
* Lastly, the methodology for implementing the NN, PUF, and TRNG on the same crossbar is validated.
The remainder of the paper is organized as follows: Section II details the architecture for implementing NN, TRNG, and PUF designs based on RRAM. Section III shows the NN weights-locking algorithm using the proposed architecture. Section IV discusses the results of integrating NN and hardware security primitives realized using the crossbar RRAM array. Section V concludes the paper.
## II Proposed Architecture
This section explains the architecture to integrate NN and hardware security modules using the RRAM crossbar. The proposed architecture is shown in Fig. 2. The RRAM cells connected in a passive crossbar configuration are at the core of the proposed architecture. The same crossbar has been used to implement NN, PUF, and TRNG.
### _RRAM crossbar_
The RRAM device employed in this investigation is characterized by its ability to store multiple bits. Specifically, the device can be programmed into a low resistive state (LRS) and a high resistive state (HRS), but it can also store multiple resistive states between LRS and HRS, which is referred to as multi-state storage. By applying voltage pulses of varying amplitude and duration, the device can be configured as a two-state or multi-state device. For the purpose of the vector-matrix multiplication (VMM) implementation in this study, the device is configured as a multi-state device, but it can also function in the two-state mode. The proposed architecture offers the flexibility to configure the device in multiple modes to suit applications such as VMM, TRNG, and PUF implementations. Next, we will discuss using RRAM as a core device to implement these applications on a single crossbar.
### _VMM implementation_
The RRAM crossbar performs the VMM operation, a critical function in implementing NNs. The weight matrix required for VMM is stored on the crossbar, with each weight corresponding to a device's resistance or conductance state. These devices are set up to store multi-bit weights, and their rows are linked to digital-to-analog converters (DACs). The input vector is then fed into the DACs, which convert the input into analog output voltage levels, as depicted in Fig. 2. Following Ohm's law, the applied input voltages produce a current through each device depending on the device's resistance or conductance value (weight). The currents flowing through the devices connected in a column are combined according to Kirchhoff's current law. Ultimately, the column currents realize a multiply-and-accumulate operation between the weights and the input vector, which is precisely a VMM operation in memory.
\[c_{j}=\sum_{i}v_{i}\,w_{ij},\qquad j=0,\ldots,3,\]
where \(v_{i}\) are the row input voltages and \(w_{ij}\) the stored weights. For the \(4\times 4\) crossbar of 2-bit weights considered in this example, the accumulated column outputs \(c_{0},\ldots,c_{3}\) can take values requiring more
than two bits, which need to be converted back to the digital domain. The analog-to-digital converter (ADC) resolution is determined by \(\lceil\log_{2}(w\times m)\rceil\), where \(w\) represents the number of weight bits (2 in this example) and \(m\) denotes the number of devices in a single column (4 in this example).
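An idealized numerical sketch of this in-memory VMM and of the ADC sizing rule is given below; it ignores wire resistance and sneak-path effects, and the numbers are illustrative only.

```python
import numpy as np

def crossbar_vmm(v_in, weights, weight_bits=2):
    """Column current = Kirchhoff sum of per-device currents I = G*V (Ohm's law);
    the ADC needs ceil(log2(w * m)) bits to digitize one column."""
    i_col = weights.T @ v_in                        # accumulate over the m devices in each column
    m = weights.shape[0]
    adc_bits = int(np.ceil(np.log2(weight_bits * m)))
    return i_col, adc_bits

G = np.random.randint(0, 4, size=(4, 4)).astype(float)   # 4x4 crossbar of 2-bit weights
v = np.array([0.1, 0.2, 0.0, 0.3])                        # DAC output voltages on the rows
columns, bits = crossbar_vmm(v, G)                        # bits == 3, matching ceil(log2(2*4))
```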
### _Trng_
To implement the TRNG on the RRAM crossbar, we have used the technique presented in [21], where the device-to-device (D2D) and cycle-to-cycle (C2C) variations on the crossbar have been used to generate the random switching in the crossbar. The switching threshold of each device on the RRAM crossbar is used to generate random bits. By applying a 50% switching probability pulse to the crossbar, random devices switch their state to LRS, and the others remain in the HRS. Due to the crossbar variation, each device's switching threshold is different, resulting in random switching of the devices. In order to implement the TRNG in the proposed architecture, another terminal of the device must connect to GND, which the 1x4 DeMUX controls.
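A toy software model of this TRNG mechanism is sketched below; on real hardware the entropy comes from physical D2D/C2C variation, whereas here a pseudo-random generator merely stands in for the spread of switching thresholds, and the threshold statistics are assumed values.

```python
import numpy as np

def trng_bits(n_devices=64, pulse_voltage=1.0, seed=None):
    """Each device has its own switching threshold (D2D variation); a pulse chosen
    for ~50% switching probability flips a random subset of devices to LRS."""
    rng = np.random.default_rng(seed)
    thresholds = rng.normal(loc=1.0, scale=0.05, size=n_devices)   # stand-in for device variation
    return (pulse_voltage > thresholds).astype(int)                # 1 = switched to LRS, 0 = stays HRS

random_word = trng_bits()
```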
### _Puf_
A single RRAM crossbar is utilized in this proposed architecture to implement both a TRNG and a PUF. The crossbar is first initialized as an entropy source using the TRNG procedure, which provides the randomness needed to generate the response bits. Challenges are then applied to the crossbar's rows, and the responses of the PUF are collected. The challenges are mapped to read voltage pulses, which vary based on the input challenge. However, during PUF implementation, the crossbar is configured for two-state rather than multi-state switching.
To collect the responses of the PUF, Kirchhoff's current law is applied to the crossbar, which sums the currents flowing through the devices at the column lines. This current is influenced by the input challenge and the device variations. The sneak paths in the crossbar also contribute to the current at each column in a completely random manner. The analog current values are converted to boolean response bits at the output using a current sense amplifier (CSA). As the response bit can only be 0 or 1, the ADC in the path is bypassed using the 1x4 DeMUX. The collected responses can further be used by the digital interface to lock the weight matrix on the crossbar. The randomness and unpredictability of the PUF response make it suitable for use in secure authentication and key generation applications, and the incorporation of the TRNG adds a layer of security.
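The readout flow can be mimicked with the following toy model; the challenge-to-voltage mapping, the LRS/HRS values (taken from the device characterization reported later in Section IV), and the median-based sense reference are our own simplifying assumptions.

```python
import numpy as np

def puf_response(challenge, conductance):
    """Apply challenge-dependent read voltages to the rows, sum the column
    currents (Kirchhoff's law) and compare against a reference, mimicking a CSA."""
    v_read = 0.2 * np.asarray(challenge, dtype=float)   # map challenge bits to read voltages
    i_col = conductance.T @ v_read
    return (i_col > np.median(i_col)).astype(int)       # one response bit per column

rng = np.random.default_rng(7)
G = np.where(rng.random((8, 8)) < 0.5, 1 / 1.6e3, 1 / 80e3)   # entropy source: ~50% LRS/HRS pattern
response = puf_response([1, 0, 1, 1, 0, 0, 1, 0], G)
```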
## III Weights locking
Weights locking is a method used to safeguard the intellectual property of NN models, particularly in scenarios where the model has been trained on sensitive data or where the model's performance is essential to business success. The proposed architecture integrates a PUF as a hardware security module in the RRAM crossbar. During training, the weights are encrypted using a unique key generated by the PUF. The architecture is configured to implement the PUF and generate the key to encrypt the weights. The encrypted weights and the challenge are then provided to the user.
During the inference process, the architecture is reconfigured to implement the PUF, and the challenge provided with the encrypted weights is applied to generate the key. Next, the encrypted weights are loaded into the NN, and the key generated by the PUF is utilized to decrypt the weights. The decrypted weights are then used to make predictions.
### _Locking_
The suggested design enables the NN to be secured within an RRAM crossbar. With the hardware security module situated on the same crossbar, a key can be generated via configuration in both the TRNG and PUF setups. The PUF-generated key can then be used to encrypt the weights to be stored in the crossbar. Algorithm 1 outlines the use of the proposed architecture for NN weights locking.
```
1:Choose TRNG implementation
2:Apply 50% probability switching pulse
3:Choose PUF implementation
4:Apply challenge and collect the responses (key)
5:Store the CRPs on the server side and use a key to encrypt the weights.
6:Send the encrypted weights to sharing platform with the challenge
```
**Algorithm 1** An algorithm for weights locking
### _Unlocking_
Once the weights have been encrypted, they can be transmitted to the user via any sharing platform. An identical challenge is employed on the device's end to generate the required key. Algorithm 2 specifies the decryption procedure. The device's inherent randomness is expressed via CRPs that are unique to each device, which helps prevent attackers from using the same weights on a different hardware device.
```
1:Receive the encrypted weights
2:Choose TRNG implementation
3:Apply 50% probability switching pulse
4:Choose PUF implementation
5:Apply challenge and generate the key
6:Choose the key to decrypt the weights
7:Store the decrypted weights on the same crossbar (overwrite the TRNG entropy)
```
**Algorithm 2** An algorithm for unlocking the weights
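To make the flow of Algorithms 1 and 2 concrete, the sketch below locks quantized weights with a keystream derived from the PUF response. The text does not specify the cipher used; a simple XOR is chosen here purely for illustration.

```python
import numpy as np

def xor_lock(weights_2bit, key_bits):
    """Illustrative locking: XOR 2-bit weights with a PUF-derived keystream.
    Applying the same function with the same key unlocks the weights."""
    key = np.resize(np.asarray(key_bits, dtype=np.uint8), weights_2bit.size)
    keystream = (key * 3).reshape(weights_2bit.shape)   # stretch each key bit to the 2-bit range
    return np.bitwise_xor(weights_2bit, keystream)

W = np.random.randint(0, 4, size=(4, 4), dtype=np.uint8)   # trained 2-bit weights (Algorithm 1, step 5)
key = [1, 0, 1, 1, 0, 1, 0, 0]                             # PUF response bits (Algorithm 1, step 4)
W_locked = xor_lock(W, key)                                 # shared with the user (Algorithm 1, step 6)
assert np.array_equal(xor_lock(W_locked, key), W)           # Algorithm 2 recovers the weights
```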
In summary, this paper describes an architecture that integrates NNs and hardware security modules using passive RRAM cells connected in a crossbar structure. The RRAM device can store multiple resistive states between low and high resistive states, allowing it to function as a two-state or multi-state device. The proposed architecture offers flexibility to configure the device in multiple modes to suit different applications, such as VMM, TRNG, and PUF.
## IV Experimental Results
For this study, an RRAM cell comprises a P/Ti/TiO\({}_{x}\)/HfO\({}_{2}\)/Pt material stack, demonstrating improved stability in terms of both electroforming voltage and thermal behavior [22]. This device can be operated in different configurations, such as binary and multi-state switching. The device is programmed to a resistance state by applying a voltage pulse with a specific duration and amplitude. The device exhibits a resistance between \(60-100K\Omega\) in HRS and \(1.5-1.6K\Omega\) in LRS. However, in a multi-state configuration, there can be multiple states between HRS and LRS. The devices in the crossbar are utilized without any selector (passive) in series. Passive crossbars typically face the issue of sneak-path currents, which can be exploited to design TRNGs and PUFs.
### _Switching and Variations_
In order to switch the device between binary states, a 150 ns pulse of 2.0 V with 10 ns rise and fall times is applied to switch the device to LRS (programming to 1), while a negative pulse of 2.0 V is applied to switch the device to HRS (programming to 0). However, a gradual RESET method has been employed to achieve multi-state behavior. In this method, the device is initialized to LRS, and then incremental pulses are applied to switch it to multiple states. The I-V curves of the multi-state switching are shown in Fig. 3.
The switching of the devices can be affected by variations in D2D and C2C parameters. These variations are influenced by manufacturing variations in the device radius, device length, and oxygen ion concentration in the dielectric. As a result of these variations, the HRS resistance varies from 31\(K\Omega\) to 155\(K\Omega\) with an average of 65.56\(K\Omega\), while the LRS resistance varies from 1.55\(K\Omega\) to 1.67\(K\Omega\) with an average of 1.64\(K\Omega\). Although the HRS has a wide distribution, all HRS values remain distinguishable from the LRS. By carefully selecting the pulse used to RESET the devices, they can be switched randomly from LRS to HRS or vice versa, which can be exploited to design TRNGs.
### _PUF properties_
We conducted extensive experiments to evaluate the reliability and performance of PUF on the proposed architecture. Our results demonstrate a reliability of 100%, indicating that the CRPs generated by the proposed PUF are consistent and repeatable across multiple trials. Additionally, the uniqueness of the CRPs was found to be 47.78%, meaning that the probability of generating the same CRP for two different devices is low. The uniformity of the CRPs was measured to be 49.79%, indicating that the distribution of the CRPs is approximately uniform. Finally, the bit-aliasing was found to be 48.57%, indicating that the probability of generating the same CRP for two different challenges is also low.
Importantly, our results show that the proposed PUF achieves these performance metrics without needing post-processing techniques, making it a practical and efficient hardware security primitive. Overall, our findings demonstrate the potential for using RRAM-based PUF designs in hardware security applications with high reliability and security characteristics.
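For reference, the quality metrics quoted above are typically computed from CRP data roughly as follows; the array layout and the use of a fixed reference response for reliability are assumptions of this sketch rather than the exact evaluation procedure.

```python
import numpy as np

def puf_metrics(responses, repeats):
    """responses: (n_devices, n_bits) response of each device to one challenge.
    repeats:   (n_trials, n_bits) repeated responses of one device (n_trials >= 2)."""
    n_dev, n_bits = responses.shape
    uniformity = 100 * responses.mean()                            # fraction of 1s, ideal ~50%
    bit_aliasing = 100 * responses.mean(axis=0)                    # per-bit bias across devices, ideal ~50%
    inter_hd = [np.count_nonzero(responses[i] != responses[j]) / n_bits
                for i in range(n_dev) for j in range(i + 1, n_dev)]
    uniqueness = 100 * np.mean(inter_hd)                           # ideal ~50%
    intra_hd = [np.count_nonzero(repeats[0] != r) / n_bits for r in repeats[1:]]
    reliability = 100 * (1 - np.mean(intra_hd))                    # ideal 100%
    return uniformity, uniqueness, bit_aliasing, reliability
```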
### _Nn_
The crossbar can be utilized to map NN weights, similar to VMM applications. Fig. 3(a) shows the mapping of 2-bit weights to the crossbar. To enable the multi-state behavior of a device, a gradual RESET method is employed, as illustrated in Fig. 3(b). As a proof of concept, we implemented VMM on a 16x16 crossbar, which is a critical operation for any NN implementation. The device variations result in error accumulation at the input of the ADC. Nonetheless, this issue can be resolved through onboard fault-aware training for the required application.
## V Conclusions
In conclusion, this paper described a proposed architecture integrating NNs and hardware security modules using the RRAM crossbar as the core device. The proposed architecture enables a single crossbar to implement NN, TRNG, and PUF applications. The RRAM crossbar's multi-state storage characteristic is exploited to perform the vector-matrix multiplication operations required for NN implementation. The TRNG uses the crossbar's variation in device switching thresholds to generate random bits. The PUF is implemented using the same crossbar initialized as an entropy source for the TRNG. The proposed architecture provides flexibility to configure the RRAM device in multiple modes to suit different applications. This paper also presents the algorithms for NN weight locking. Overall, the proposed architecture shows promise in achieving a more efficient and compact design for the hardware implementation of NNs and security primitives.
## Acknowledgments
This work was supported in part by the Federal Ministry of Education and Research (BMBF, Germany) in the project NEUROTEC II under Project 16ME0398K, Project 16ME0399 and through Dr. Suhas Pai Donation Fund at IIT Bombay.
Fig. 3: The mapping of 2-bit weights to the crossbar is demonstrated in (a), while the multi-state behavior of devices for analog weight storage is illustrated in (b).
|
2308.05086
|
Aspherical Lagrangian submanifolds, Audin's conjecture and cyclic
dilations
|
Given a closed, oriented Lagrangian submanifold $L$ in a Liouville domain
$\overline{M}$, one can define a Maurer-Cartan element with respect to a
certain $L_\infty$-structure on the string homology
$\widehat{H}_\ast^{S^1}(\mathcal{L}L;\mathbb{R})$, completed with respect to
the action filtration. When the first Gutt-Hutchings capacity of $\overline{M}$
is finite, and $L$ is a $K(\pi,1)$ space, it leads to interesting geometric
implications. In particular, we show that $L$ bounds a non-constant
pseudoholomorphic disc of Maslov index 2. This confirms a general form of
Audin's conjecture and generalizes the works of Fukaya and Irie in the case of
$\mathbb{C}^n$ to a wide class of Liouville manifolds. In particular, when
$\dim_\mathbb{R}(\overline{M})=6$, every closed, orientable, prime Lagrangian
3-manifold $L\subset\overline{M}$ is diffeomorphic to a spherical space form,
or $S^1\times\Sigma_g$, where $\Sigma_g$ is a closed oriented surface.
|
Yin Li
|
2023-08-09T17:28:54Z
|
http://arxiv.org/abs/2308.05086v3
|
# Aspherical Lagrangian submanifolds, Audin's conjecture and cyclic dilations
###### Abstract
Given a closed, oriented Lagrangian submanifold \(L\) in a Liouville domain \(\overline{M}\), one can define a Maurer-Cartan element with respect to a certain \(L_{\infty}\)-structure on the string homology \(\widehat{H}_{\bullet}^{S^{1}}(\mathcal{L}L;\mathbb{R})\), completed with respect to the action filtration. When the first Gutt-Hutchings capacity [31] of \(\overline{M}\) is finite, and \(L\) is a \(K(\pi,1)\) space, it leads to interesting geometric implications. In particular, we show that \(L\) bounds a non-constant pseudoholomorphic disc of Maslov index \(2\). This confirms a general form of Audin's conjecture [5] and generalizes the works of Fukaya [18] and Irie [33] in the case of \(\mathbb{C}^{n}\) to a wide class of Liouville manifolds. In particular, when \(\dim_{\mathbb{R}}(\overline{M})=6\), every closed, orientable, prime Lagrangian \(3\)-manifold \(L\subset\overline{M}\) is diffeomorphic either to a spherical space form, or to \(S^{1}\times\Sigma_{g}\), where \(\Sigma_{g}\) is a closed oriented surface.
## 1 Introduction
### Closed Lagrangian submanifolds in \(\mathbb{C}^{n}\)
The study of Lagrangian submanifolds plays a central role in symplectic geometry, whose importance is reflected by Weinstein's famous creed "everything is a Lagrangian submanifold". As one of the first non-trivial consequences of Gromov's ground breaking work [30] on pseudoholomorphic curves, we know that \(H^{1}(L;\mathbb{Q})\neq 0\) for any closed Lagrangian submanifold \(L\subset\mathbb{C}^{n}\).
Gromov's argument for the non-vanishing of \(H^{1}(L;\mathbb{Q})\) can be summarized as follows. Take a compactly supported Hamiltonian function \(H_{t}:[0,1]\times\mathbb{C}^{n}\to\mathbb{R}\) which displaces \(L\), denote by \(X_{H_{t}}\) its associated vector field. Choose a cut-off function \(\chi:\mathbb{R}\to[0,1]\) such that \(\chi=0\) on \(\mathbb{R}_{\leq 0}\) and \(\chi=1\) on \(\mathbb{R}_{\geq 1}\). For each \(r\geq 0\), consider the function \(\chi_{r}(s):=\chi(r+s)\chi(r-s)\). Let \(D\subset\mathbb{C}\) be the closed unit disc, we identify \(D\backslash\{-1,1\}\) with the strip \(\mathbb{R}\times[0,1]\), and denote by \(s\) and \(t\) the coordinates on \(\mathbb{R}\) and \([0,1]\), respectively. Consider the moduli space \(\mathcal{N}^{r}(L,\beta)\) of maps \(u:(D,\partial D)\to(\mathbb{C}^{n},L)\) in the relative homotopy class \(\beta\in\pi_{2}(\mathbb{C}^{n},L)\), which satisfy the perturbed Cauchy-Riemann equation
\[\big(du-X_{\chi_{r}(s)H_{t}}(u)\otimes dt\big)^{0,1}=0, \tag{1.1}\]
where the \((0,1)\)-part is taken with respect to the standard complex structure on \(\mathbb{C}^{n}\). Note that when \(r=0\), \(\chi_{0}=0\), so (1.1) reduces to the ordinary Cauchy-Riemann equation. When \(\beta=0\), one can show that \(\mathcal{N}^{1}(L,0)=\emptyset\), so \(\mathcal{N}^{\geq 0}(L,0):=\bigcup_{r\in[0,1]}\mathcal{N}^{r}(L,0)\) has a single boundary component \(\mathcal{N}^{0}(L,0)\), which consists of constant solutions to (1.1). Since \(\big{[}\mathcal{N}^{0}(L,0)\big{]}\neq 0\) in \(H_{n}(\mathcal{L}L;\mathbb{R})\), \(\mathcal{N}^{\geq 0}(L,0)\) cannot be compact, and there is a divergent sequence of maps \((u_{k})_{k\geq 1}\) in \(\mathcal{N}^{\geq 0}(L,0)\). It follows from elliptic regularity and the removable singularity theorem that \(L\) bounds a non-constant holomorphic disc, which implies that \(H^{1}(L;\mathbb{Q})\neq 0\).
Fukaya's new observation, which was sketched in [18] and carried out in detail by Irie in [33], is that the new boundary component arising from compactifying the moduli space \(\mathcal{N}(L,\beta)\) can be expressed in terms of string topology operations after pushing forward
its virtual fundamental chain to the free loop space \(\mathcal{L}L\). More precisely, for a suitable \(L_{\infty}\)-structure \((\ell_{k})_{k\geqslant 1}\) on the free loop space homology \(H_{\bullet}(\mathcal{L}L;\mathbb{R})\), there holds
\[\sum_{k=2}^{\infty}\frac{1}{(k-1)!}\ell_{k}(y,x,\cdots,x)=(-1)^{n+1}[L], \tag{1.2}\]
where \([L]\in H_{n}(\mathcal{L}L;\mathbb{R})\) is the cycle of constant loops, and \(x\in\widehat{H}_{\bullet}(\mathcal{L}L;\mathbb{R})\) is a Maurer-Cartan element. Both \(x\) and \(y\) lie in the _completed_ free loop space homology. Here, the completion is taken with respect to the energy filtration
\[F^{\Xi}H_{n-\bullet}(\mathcal{L}L;\mathbb{R}):=\bigoplus_{\omega_{std}(\bar{a})\geqslant\Xi}H_{n-\bullet}(\mathcal{L}(a)L;\mathbb{R}), \tag{1.3}\]
where \(\omega_{std}\) denotes the standard symplectic form on \(\mathbb{C}^{n}\), \(\bar{a}\in H_{2}(\mathbb{C}^{n},L)\) satisfies \(\partial\bar{a}=a\in H_{1}(L;\mathbb{Z})\), and \(\mathcal{L}(a)L\subset\mathcal{L}L\) consists of loops in the class \(a\). When \(L\) is a \(K(\pi,1)\) space (for technical reasons, one also assumes that \(L\) is orientable and relatively _Spin_), the identity (1.2) implies that \(L\) bounds a non-constant holomorphic disc of Maslov index \(2\). This provides a refinement of Gromov's result and in particular confirms the famous conjecture of Audin [5]. More interestingly, when \(n=3\), it leads to the following classification of prime Lagrangian submanifolds.
**Theorem 1** (Fukaya [18], Irie [33]).: _A closed, connected, orientable and prime 3-manifold \(L\) admits a Lagrangian embedding into \(\mathbb{C}^{3}\) if and only if it is diffeomorphic to \(S^{1}\times\Sigma_{g}\), where \(\Sigma_{g}\) is a closed, orientable surface._
Under the additional assumption that \(L\) is monotone, one can prove the same result without assuming that \(L\) is prime, see [17].
In this paper, we extend the ideas of Fukaya and Irie to study closed Lagrangian submanifolds in a more general class of Liouville domains, namely those with finite first Gutt-Hutchings capacity (see Section 1.2 for details). We prove a general version of Audin's conjecture (cf. Corollary 8) and a generalization of Theorem 1 (cf. Corollary 11), which in particular enables us to classify the prime Lagrangian 3-folds in many Milnor fibers. See Remark 7.
### Cyclic dilations and Gutt-Hutchings capacities
In order to generalize the work of Fukaya and Irie to more interesting Liouville manifolds, we will use a notion introduced by the author [41] (also independently by Zhou [55]). From now on, \(M\) will be a \(2n\)-dimensional Liouville manifold with \(c_{1}(M)=0\), and we fix a trivialization of its canonical bundle \(K_{M}\), so that its symplectic cohomology \(\mathit{SH}^{\bullet}(M)\) and (positive) \(S^{1}\)-equivariant symplectic cohomology \(\mathit{SH}^{\bullet}_{S^{1}}(M)\) (see [8] or [44] for its definition) admit well-defined \(\mathbb{Z}\)-gradings. As is the case of singular homologies, the \(S^{1}\)-equivariant symplectic cohomology is related to the ordinary symplectic cohomology by the (Gysin) long exact sequence
\[\cdots\to\mathit{SH}^{\bullet-1}(M)\overset{I}{\to}\mathit{SH}^{\bullet-1}_{S ^{1}}(M)\overset{S}{\to}\mathit{SH}^{\bullet+1}_{S^{1}}(M)\overset{B}{\to} \mathit{SH}^{\bullet}(M)\to\cdots, \tag{1.4}\]
where the erasing map \(I\) is induced by the obvious inclusion of cochain complexes, and the marking map \(B\) is defined in terms of the higher BV structures on the cochain complex \(\mathit{SC}^{\bullet}(M)\) underlying \(\mathit{SH}^{\bullet}(M)\).
**Definition 2**.: _A cohomology class \(\tilde{b}\in\mathit{SH}^{1}_{S^{1}}(M)\) is called a cyclic dilation if its image under the marking map \(B:\mathit{SH}^{\bullet}_{S^{1}}(M)\to\mathit{SH}^{\bullet-1}(M)\) gives the identity, i.e. \(B(\tilde{b})=1\)._
**Remark 3**.: _Note that the terminology used in this article differs from that of [41], where \(\tilde{b}\in\mathit{SH}^{1}_{S^{1}}(M)\) is called a cyclic dilation as long as its image under the marking map
hits an invertible element \(h\in\mathit{SH}^{0}(M)^{\times}\). Here we reserve the terminology for the more restrictive situation when \(h=1\) is the identity, and refer to the general case when \(h\) is allowed to be an arbitrary invertible element as a cyclic quasi-dilation, see Definition 60. Requiring \(h=1\) is equivalent to the existence of a \(k\)-dilation for some \(k\in\mathbb{N}\) under the terminology of Zhou [55]._
The notion of a cyclic dilation generalizes that of a _dilation_ introduced earlier by Seidel-Solomon [47]. Recall that a dilation is a class \(b\in\mathit{SH}^{1}(M)\) such that its image under the BV (Batalin-Vilkovisky) operator \(\Delta:\mathit{SH}^{*}(M)\to\mathit{SH}^{*-1}(M)\) hits the identity, i.e. \(\Delta(b)=1\). Every dilation gives rise to a cyclic dilation via the erasing map \(I:\mathit{SH}^{*}(M)\to\mathit{SH}^{*}_{S^{1}}(M)\). See [41], Section 4.2 for details. However, the converse is not true. A typical example is the Milnor fiber of a \(3\)-fold triple point
\[\{z_{1}^{3}+z_{2}^{3}+z_{3}^{3}+z_{4}^{3}=0\}\subset\mathbb{C}^{4}, \tag{1.5}\]
in which case there is no dilation in \(\mathit{SH}^{1}(M)\) (cf. [45], Example 2.7), but there is a cyclic dilation in \(\mathit{SH}^{1}_{S^{1}}(M)\) (cf. [41], Section 6.1 and [55], Section 5.2). The study of cyclic dilations enables us to extend some of Seidel-Solomon's results to a wider class of Liouville manifolds, among which is the following obstruction to exact Lagrangian embedding.
**Proposition 4** ([41], Corollary 5.1).: _Let \(M\) be a Liouville manifold which admits a cyclic dilation \(\tilde{b}\in\mathit{SH}^{1}_{S^{1}}(M)\). Then \(M\) does not contain a closed exact Lagrangian submanifold which is a \(K(\pi,1)\) space._
The existence of a cyclic dilation \(\tilde{b}\in\mathit{SH}^{1}_{S^{1}}(M)\) can also be interpreted in terms of certain symplectic capacities of the Liouville domain \(\overline{M}\subset M\), where
\[M=\overline{M}\cup_{\partial\overline{M}}[1,\infty)\times\partial\overline{M} \tag{1.6}\]
is obtained by attaching a cylindrical end to \(\overline{M}\). Recall that the cochain complex computing \(\mathit{SH}^{*}_{S^{1}}(M)\) is defined as
\[\left(\mathit{SC}^{*}_{S^{1}}(M):=\mathit{SC}^{*}(M)\otimes_{\mathbb{K}} \mathbb{K}((u))/u\mathbb{K}[[u]],\partial^{S^{1}}:=\partial+u\delta_{1}+u^{2} \delta_{2}+\cdots\right), \tag{1.7}\]
where \(\partial\) is the ordinary Floer differential, \(\delta_{1}\) is the cochain level BV operator, and \(u\) is a formal variable of degree \(2\). The action filtration on \(\mathit{SC}^{*}(M)\) induces a filtration \(F^{\bullet}\) on \(\mathit{SC}^{*}_{S^{1}}(M)\), and the \(k\)_-th Gutt-Hutchings capacity_ of \(\overline{M}\), introduced in [31], is defined to be
\[c_{k}^{\mathit{GH}}(M):=\inf\left\{a\left|\partial^{S^{1}}(x)=u^{-k+1}e_{M} \text{ for some }x\in F^{\leqslant a}\mathit{SC}^{*}_{S^{1}}(M)\right.\right\}, \tag{1.8}\]
where \(e_{M}\in\mathit{SC}^{0}(M)\) is the cochain level representative of the identity \(1\in\mathit{SH}^{0}(M)\). It follows from the definition that (cf. [41], Section 4.2)
**Proposition 5**.: _A Liouville manifold \(M\) admits a cyclic dilation if and only if the Liouville domain \(\overline{M}\) has finite first Gutt-Hutchings capacity, i.e. \(c_{1}^{\mathit{GH}}(M)<\infty\)._
In other words, \(M\) admits a cyclic dilation if and only if the identity \(1\in\mathit{SH}^{0}(M)\) lies in the kernel of the erasing map \(I:\mathit{SH}^{0}(M)\to\mathit{SH}^{0}_{S^{1}}(M)\).
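Here is a sketch of how the two formulations match up (see [41], Section 4.2 for the precise argument). If \(c_{1}^{\mathit{GH}}(M)<\infty\), then by (1.8) there is a cochain \(x\in\mathit{SC}^{*}_{S^{1}}(M)\) with \(\partial^{S^{1}}(x)=u^{0}e_{M}=e_{M}\), so \(I(1)=[e_{M}\otimes 1]=0\) in \(\mathit{SH}^{0}_{S^{1}}(M)\); by exactness of (1.4) at \(\mathit{SH}^{0}(M)\), the identity then lies in the image of the marking map, i.e. \(B(\tilde{b})=1\) for some \(\tilde{b}\in\mathit{SH}^{1}_{S^{1}}(M)\). Conversely, the existence of a cyclic dilation forces \(I(1)=0\), and unwinding the definitions one finds a primitive of \(e_{M}\) of finite action level, which shows \(c_{1}^{\mathit{GH}}(M)<\infty\).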
In this paper, we will generalize the results of Fukaya and Irie described in Section 1.1 to Liouville manifolds with cyclic dilations. To describe the motivation of considering this particular class of Liouville manifolds, we appeal to a more conceptual and largely speculative interpretation of Fukaya and Irie's results, with a focus on the key identity (1.2), which the author learned from [40], Section 6. Let \(\overline{N}\) be another Liouville domain, and \(N\) its completion as a Liouville manifold. For a Liouville embedding \(\iota_{L}:\overline{N}\hookrightarrow M\), Viterbo functoriality [50] gives rise to a map \(\mathit{SH}^{*}(M)\to\mathit{SH}^{*}(N)\), which is, among other things, a morphism between \(\mathbb{Z}\)-graded \(\mathbb{K}\)-algebras. Gromov's theorem on Lagrangian embeddings
in \(\mathbb{C}^{n}\) then follows from the existence of such a map and the fact that \(\mathit{SH}^{*}(\mathbb{C}^{n})=0\). In general, when we have a symplectic embedding \(\iota_{S}:\overline{N}\hookrightarrow M\), the original version of Viterbo functoriality no longer holds. However, from the symplectic embedding \(\iota_{S}\) one can construct a Maurer-Cartan element \(\mathit{s}\in\mathit{SC}^{*}(N)\) by counting, roughly speaking, holomorphic thimbles in the symplectic cap \(\overline{M\backslash\overline{N}}\) (the closure of \(M\backslash\overline{N}\)) which are asymptotic to the Reeb orbits in \(\partial\overline{N}\), see [7] and [29], Section 4.4. One can then deform the Floer differential \(\partial\) using the Maurer-Cartan element \(s\) to obtain a twisted cochain complex
\[\left(\mathit{SC}^{*}(N),\partial+\sum_{k=2}^{\infty}\frac{1}{(k-1)!}\ell_{k} (\cdot,s\cdots,s)\right), \tag{1.9}\]
whose cohomology we denote by \(\mathit{SH}^{*}(N,s)\). Here,
\[\ell_{k}:\mathit{SC}^{*}(N)^{\otimes k}\to\mathit{SC}^{*+3-2k}(N) \tag{1.10}\]
is the \(L_{\infty}\)-structure on the (undeformed) symplectic cochain complex \(\mathit{SC}^{*}(N)\) (cf. [49], Section 4.2). This enables us to define a \(\mathbb{K}\)-algebra map
\[\mathit{SH}^{*}(M)\to\mathit{SH}^{*}(N,s), \tag{1.11}\]
which should be viewed as the Viterbo functoriality for the _non-exact_ symplectic embedding \(\iota_{S}\). Note that when \(\overline{N}\) is the Weinstein neighborhood of a closed Lagrangian submanifold \(L\subset M\), the Maurer-Cartan cochain \(s\) can be equivalently interpreted as a chain \(s_{L}\in C_{n-*}(\mathcal{L}L;\mathbb{R})\) on the free loop space, defined by evaluating the fundamental chain of the moduli space of holomorphic discs in \(M\) with boundary on \(L\), via a domain-stretching argument similar to that of [49]. In the special case of a Lagrangian embedding \(L\hookrightarrow\mathbb{C}^{n}\), it follows from (1.11) that
\[\mathit{SH}^{*}(T^{*}L,s)\cong H_{n-*}(\mathcal{L}L,s_{L})=0, \tag{1.12}\]
where \(H_{n-*}(\mathcal{L}L,s_{L})\) denotes the free loop space homology deformed by \(s_{L}\), using the chain level loop bracket. (1.2) then follows from (1.12) since the inclusion of constant loops \(L\hookrightarrow\mathcal{L}L\) defines a coboundary in \(C_{n-*}(\mathcal{L}L,s_{L})\).
It is reasonable to expect that similar considerations as above would work in the \(S^{1}\)-equivariant setting. In terms of symplectic field theory, this is known as the _Cieliebak-Latschev formalism_. Namely, one should be able to construct a Maurer-Cartan element \(\mathit{\tilde{s}}\in\mathit{SC}^{*}_{S^{1}}(N)\) associated to the symplectic embedding \(\iota_{S}:\overline{N}\hookrightarrow M\), so that the \(S^{1}\)-equivariant analogue of Viterbo functoriality (also known as the _Cieliebak-Latschev map_, cf. [10, 13]) exists after deformation, which gives rise to a morphism
\[\mathit{SH}^{*}_{S^{1}}(M)\to\mathit{SH}^{*}_{S^{1}}(N,\mathit{\tilde{s}}), \tag{1.13}\]
where the right-hand side is defined using the \(L_{\infty}\)-structure
\[\tilde{\ell}_{k}:\mathit{SC}^{*}_{S^{1}}(N)^{\otimes k}\to\mathit{SC}^{*+4-3k} _{S^{1}}(N). \tag{1.14}\]
The analogue of (1.2) in the \(S^{1}\)-equivariant case translates to the fact that the lifting \(\tilde{e}_{N}\in\mathit{SC}^{0}_{S^{1}}(N)\) of the identity \(e_{N}\in\mathit{SC}^{0}(N)\) defines a coboundary in the deformed cochain complex
\[\left(\mathit{SC}^{*}_{S^{1}}(N),\partial^{S^{1}}+\sum_{k=2}^{\infty}\frac{1}{(k-1)!}\tilde{\ell}_{k}(\cdot,\tilde{s},\cdots,\tilde{s})\right). \tag{1.15}\]
In order to obtain an analogue of (1.2) for a more general class of Liouville manifolds, it is therefore natural to impose the condition that the image of the identity \(e_{M}\in\mathit{SC}^{*}(M)\) defines a coboundary under the obvious inclusion \(\mathit{SC}^{*}(M)\hookrightarrow\mathit{SC}^{*}_{S^{1}}(M)\). As we have seen, this means that \(M\) admits a cyclic dilation.
### New results
We turn to the actual substance of this paper. Let \(M\) be a Liouville manifold. As part of the general setup, we fix a background class \([\alpha]\in H^{2}(M;\mathbb{Z}_{2})\) and a \(\mathbb{Z}_{2}\)-gerbe \(\alpha\) representing this class. From now on, whenever we write \(\mathit{SH}^{*}(M)\) and \(\mathit{SH}^{*}_{S^{1}}(M)\), they should be understood as symplectic cohomologies twisted by \(\alpha\).
Let \(L\subset M\) be a closed Lagrangian submanifold. Recall that for \(a\in H_{1}(L;\mathbb{Z})\), \(\mathcal{L}(a)L\subset\mathcal{L}L\) is the subspace consisting of loops in \(a\). We introduce the notation
\[\mathbb{H}^{S^{1}}_{\bullet}(a):=H^{S^{1}}_{\bullet+n+\mu(a)-1}(\mathcal{L}(a)L;\mathbb{R}), \tag{1.16}\]
where \(\mu:H_{1}(L;\mathbb{Z})\to\mathbb{Z}\) is the Maslov index, and the right-hand side is the \(S^{1}\)-equivariant homology of \(\mathcal{L}(a)L\), with the \(S^{1}\)-action given by reparametrization of loops. Consider the direct sum
\[\mathbb{H}^{S^{1}}_{\bullet}:=\bigoplus_{a\in H_{1}(L;\mathbb{Z})}\mathbb{H}^ {S^{1}}_{\bullet}(a), \tag{1.17}\]
which carries the aforementioned energy filtration
\[F^{\Xi}\mathbb{H}^{S^{1}}_{\bullet}:=\bigoplus_{\theta_{M}(a)>\Xi}\mathbb{H}^ {S^{1}}_{\bullet}(a), \tag{1.18}\]
where \(\theta_{M}\) is the Liouville 1-form on \(M\). This enables us to define the completion
\[\widehat{\mathbb{H}}^{S^{1}}_{\bullet}:=\varprojlim_{\Xi\to\infty}\mathbb{H}^{S^{1}}_{\bullet}/F^{\Xi}\mathbb{H}^{S^{1}}_{\bullet}. \tag{1.19}\]
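Concretely, an element of \(\widehat{\mathbb{H}}^{S^{1}}_{\bullet}\) is a formal sum \(\sum_{a\in H_{1}(L;\mathbb{Z})}x(a)\) with \(x(a)\in\mathbb{H}^{S^{1}}_{\bullet}(a)\), subject to the condition that for every \(\Xi\) only finitely many of the nonzero terms satisfy \(\theta_{M}(a)\leqslant\Xi\); this is how the infinite sums appearing in Theorem 6(iv) below are to be interpreted.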
**Theorem 6**.: _Let \(M\) be a \(2n\)-dimensional Liouville manifold with \(c_{1}(M)=0\), and assume that \(M\) admits a cyclic dilation. Let \(L\subset M\) be a closed Lagrangian submanifold which is oriented and relatively Spin with respect to \(\alpha\). Then there exists an \(L_{\infty}\)-structure \((\tilde{\ell}_{k})_{k\geqslant 1}\) on \(\mathbb{H}^{S^{1}}_{\bullet}\), together with homology classes \(\tilde{x}\in\widehat{\mathbb{H}}^{S^{1}}_{-2}\), \(\tilde{y}\in\widehat{\mathbb{H}}^{S^{1}}_{2}\), such that_
1. \(\tilde{\ell}_{1}=0\)_._
2. _The_ \(L_{\infty}\)_-structure_ \((\tilde{\ell}_{k})_{k\geqslant 1}\) _respects the decomposition of_ \(\mathbb{H}^{S^{1}}_{\bullet}\) _according to_ \(a\in H_{1}(L;\mathbb{Z})\)_. In particular, it extends to the completion_ \(\widehat{\mathbb{H}}^{S^{1}}_{\bullet}\)_, and we will use the same notation to denote its extension._
3. _There exists a constant_ \(c>0\) _such that_ \(\tilde{x}\in F^{c}\widehat{\mathbb{H}}^{S^{1}}_{-2}\)_._
4. \(\tilde{x}\) _and_ \(\tilde{y}\) _satisfy the following relations:_ \[\sum_{k=2}^{\infty}\frac{1}{k!}\tilde{\ell}_{k}(\tilde{x},\cdots,\tilde{x})=0,\] (1.20) \[\left(\sum_{k=2}^{\infty}\frac{1}{(k-1)!}\tilde{\ell}_{k}(\tilde{y},\tilde{x},\cdots,\tilde{x})\right)_{a=0}=(-1)^{n+1}\llbracket L\rrbracket,\] (1.21) _where the infinite sums on the left-hand side make sense because of (iii), the subscript_ \(a=0\) _means throwing away the high energy part, and_ \(\llbracket L\rrbracket\) _denotes the image of the fundamental class of_ \(L\) _under the composition_ \[H_{*}(L;\mathbb{R})\to H_{\bullet}(\mathcal{L}(0)L;\mathbb{R})\to H^{S^{1}}_{ \bullet}(\mathcal{L}(0)L;\mathbb{R}),\] (1.22) _where the first map is induced by the inclusion of constant loops, and the second map is the erasing map._
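The content of (iv) can be rephrased as follows (this is only a reformulation; the notation \(\tilde{\ell}_{1}^{\tilde{x}}\) is introduced here solely for this purpose). The identity (1.20) is the Maurer-Cartan equation for \(\tilde{x}\), so that
\[\tilde{\ell}_{1}^{\tilde{x}}(z):=\sum_{k=2}^{\infty}\frac{1}{(k-1)!}\tilde{\ell}_{k}(z,\tilde{x},\cdots,\tilde{x})\]
defines, at least formally, a twisted differential on \(\widehat{\mathbb{H}}^{S^{1}}_{\bullet}\), and (1.21) says that \(\left(\tilde{\ell}_{1}^{\tilde{x}}(\tilde{y})\right)_{a=0}=(-1)^{n+1}\llbracket L\rrbracket\), i.e. up to sign and up to terms of nonzero class \(a\), the class \(\llbracket L\rrbracket\) becomes exact for this twisted differential.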
Note that when \(L\subset M\) is _Spin_, we can pick \(\alpha=0\) in the above theorem. This shall be our default choice when dealing with Lagrangian submanifolds with _Spin_ structures.
**Remark 7**.: _It follows from [55], Section 5.2 that Theorem 6 applies to any Milnor fiber of the "low degree" Brieskorn singularities in \(\mathbb{C}^{n+1}\) defined by the polynomials_
\[z_{1}^{k}+\cdots+z_{n+1}^{k}=0,\ 2\leqslant k\leqslant n. \tag{1.23}\]
_On the other hand, a more elementary argument by Seidel-Solomon (cf. [47], Section 7) implies that the assumptions of Theorem 6 are satisfied for any Milnor fiber of an isolated singularity which is triply stabilized._
_Other examples of Liouville manifolds with cyclic dilations include cotangent bundles of rationally inessential manifolds. See [41], Section 4.2._
In order to prove Theorem 6, we will follow the general strategies of Fukaya [18] and Irie [33], and extend them to the \(S^{1}\)-equivariant case. This will involve some new constructions in chain level string topology (cf. Section 3.2) and some new (parametrized) moduli spaces defined using holomorphic maps with Hamiltonian asymptotics and Lagrangian boundary conditions, which we will introduce in Sections 4.1 and 4.2. Analyzing these moduli spaces constitutes the main technical part of this paper. The heuristic argument presented in Section 1.2 has many conceptual advantages, but can be hard to realize for technical reasons. To name a few, the geometric \(L_{\infty}\)-structure on \(\mathit{SC}^{*}(M)\) hasn't been rigorously constructed, and the general form of the non-exact Viterbo functoriality (1.11) hasn't been established. In the \(S^{1}\)-equivariant case, significant progress has been made by Cieliebak-Latschev [38] to construct the morphism (1.13) from the perspective of symplectic field theory, which doesn't fully fit our purposes, as the linearized contact homology doesn't take the unit \(e_{M}\otimes 1\in\mathit{SC}^{*}_{S^{1}}(M)\) into account.
As a Corollary to Theorem 6, we prove the following general version of Audin's conjecture for Lagrangian \(K(\pi,1)\) spaces in Liouville manifolds with finite first Gutt-Hutchings capacity \(c_{1}^{\mathit{GH}}(M)\).
**Corollary 8**.: _Let \(M\) be a Liouville manifold admitting a cyclic dilation, and \(L\subset M\) a closed, orientable Lagrangian submanifold which is a \(K(\pi,1)\) space, and Spin relative to \(\alpha\). Then there exists a finite covering \(\widetilde{L}\) of \(L\) which is homotopy equivalent to a product \(S^{1}\times K\) for some closed \((n-1)\)-dimensional manifold \(K\). Moreover, \(\pi_{1}(\widetilde{L})\subset\pi_{1}(L)\) is the centralizer of some element \(\gamma\in\pi_{1}(L)\) which has Maslov class equal to 2 and positive symplectic area._
**Remark 9**.: _The original conjecture of Audin [5] states that any Lagrangian torus \(L\subset\mathbb{C}^{n}\) has Maslov number \(2\). It was first rigorously proved by Cieliebak-Mohnke [12] using punctured holomorphic curves. An alternative approach was outlined earlier by Fukaya [18] and realized later by Irie [33], which shows that the same conclusion holds for Lagrangian submanifolds \(L\subset\mathbb{C}^{n}\) which are \(K(\pi,1)\) spaces and Spin. When \(M\) is a subcritical Weinstein manifold, and \(L\subset M\) is a monotone Lagrangian \(K(\pi,1)\) space (without the Spin assumption), this is proved by Damian [15]. The holomorphic curves used in our argument are actually more closely related to those appearing in Cieliebak-Mohnke [12], although the general idea of the proof is to a larger extent inspired by Fukaya [18]._
_Our generalization above has the feature that \(L\subset M\) doesn't need to be displaceable. For example, it is shown by Lekili-Maydanskiy [39] that non-displaceable monotone Lagrangian tori exist in 4-dimensional \(A_{k}\) Milnor fibers for any \(k\geqslant 1\). In [2], a \(1\)-parameter family of monotone Lagrangian tori in \(T^{*}S^{3}\) is shown to split-generate the monotone Fukaya category. In all these examples, the Lagrangian tori have minimal Maslov number \(2\)._
**Remark 10**.: _In another direction, Keating constructed infinitely many monotone Lagrangian \(K(\pi,1)\) spaces (not necessarily orientable) in certain affine hypersurfaces of complex dimensions 2 and 3 with any possible minimal Maslov index. See [36], Theorems 1.1 and 1.2. The affine hypersurfaces appeared in her construction are of the form_
\[\left\{z_{1}^{2}+z_{2}^{4}+z_{3}^{k}=1\right\}\subset\mathbb{C}^{3},\ \text{where}\ k\geqslant 1 \tag{1.24}\]
_in complex dimension 2, and_
\[\left\{z_{1}^{2}+z_{2}^{4}+z_{3}^{k_{3}}+z_{4}^{k_{4}}=1\right\}\subset\mathbb{C}^ {4},\text{ where }k_{3}\gg 1,k_{4}\gg 1 \tag{1.25}\]
_in complex dimension 3. Note that for \(k\geq 4\) (resp. \(\frac{1}{k_{3}}+\frac{1}{k_{4}}\leq\frac{1}{4}\)), these affine hypersurfaces do not admit cyclic dilations. In fact, it follows from [55], Theorem 3.27 and [42], Theorem 2.5 that if a smooth affine variety admits a cyclic dilation, then it must be \(\mathbb{A}^{1}\)-uniruled. By [42], Lemma 7.1, this forces its log Kodaira dimension to be \(-\infty\). However, the condition \(k\geq 4\) (resp. \(\frac{1}{k_{3}}+\frac{1}{k_{4}}\leq\frac{1}{4}\)) would imply that the affine hypersurfaces under consideration have non-negative log Kodaira dimensions. Compare with the examples given in Remark 7._
In fact, we will prove something slightly stronger than the statement of Corollary 8, namely that the class \(\bar{\gamma}\in\pi_{2}(M,L)\) with \(\partial\bar{\gamma}=\gamma\) and positive symplectic area is represented by a \(J_{M}\)-holomorphic disc for any convex almost complex structure \(J_{M}\) on \(M\). Note that aside from technical assumptions, this unveils the deep reason behind the validity of Proposition 4:
_Liouville manifolds with cyclic dilations cannot contain exact Lagrangian \(K(\pi,1)\) spaces because any aspherical Lagrangian submanifold \(L\subset M\) necessarily bounds a non-constant pseudoholomorphic disc._
When \(n=3\), we have the following generalization to Theorem 1. Recall that a \(3\)-dimensional _spherical space form_ is a quotient \(S^{3}/\Gamma\), where \(\Gamma\subset\text{SO}(4)\) is a finite subgroup acting freely on \(S^{3}\cong\text{SO}(4)/\text{SO}(3)\).
**Corollary 11**.: _Let \(M\) be a 6-dimensional Liouville manifold which admits a cyclic dilation, and let \(L\subset M\) be a closed, orientable, prime Lagrangian submanifold, then \(L\) is diffeomorphic to a spherical space form, or a product \(S^{1}\times\Sigma_{g}\), where \(\Sigma_{g}\) is an orientable closed surface._
Our result is sharp in the sense that there exist Liouville 6-manifolds with cyclic dilations containing Lagrangian submanifolds of each of the topological types allowed by Corollary 11. For \(S^{1}\times\Sigma_{g}\), the fact that they can be embedded as closed Lagrangian submanifolds in \(\mathbb{C}^{3}\) follows from [18], Theorem 2.6. For a spherical space form \(L=S^{3}/\Gamma\), it follows from [47], Example 6.4 that \(T^{*}L\) admits a dilation, therefore also a cyclic dilation.
Corollaries 8 and 11 will be proved in Section 2.
The paper is organized as follows. In Section 2, we explain how to deduce the applications, Corollaries 8 and 11, from our main result, Theorem 6. In Section 3, we recall a de Rham chain model of the free loop space homology due to Irie [32], and modify his construction to produce two de Rham models \(C_{\bullet}^{S^{1}}\) and \(C_{\bullet}^{\lambda}\) of \(S^{1}\)-equivariant chains on the free loop space. These two de Rham models are related by the natural projection \(C_{\bullet}^{S^{1}}\to C_{\bullet}^{\lambda}\), which can be shown to be a quasi-isomorphism (cf. Section 3.2). We then introduce a chain level refinement of Chas-Sullivan's string bracket [9] on \(C_{\bullet}^{\lambda}\), which enables us to provide a chain level statement of Theorem 6 (cf. Theorem 26). Section 4 contains the main technical input of this paper. Inspired by the work of Cohen-Ganatra [13], we introduce the relevant moduli spaces whose virtual fundamental chains define the finite energy de Rham chains approximating the chains satisfying Theorem 26, and analyze the boundary strata of their compactifications. The proof of Theorem 6 is completed in Section 4.5. One interesting aspect of our argument is that the geometric de Rham chains produced using the moduli spaces of holomorphic curves are first defined on the larger chain complex \(C_{\bullet}^{S^{1}}\), on which the chain level string bracket \(\{\cdot,\cdot\}\) isn't well-defined as a Lie bracket, but the \(S^{1}\)-equivariant differential takes a more convenient form. They will then be projected to \(C_{\bullet}^{\lambda}\) to give the chains required by Theorem 26. The final section, Section 5, is devoted to a discussion of some potential implications and variations of our results. In Appendix A, we record some key facts in virtual perturbation theory that are used in
the main contents of this paper. The orientation conventions of the moduli spaces and the sign computations will be dealt with in Appendix B.
## Acknowledgements
During a discussion in November 2019, Paul Seidel asked me whether the notion of a cyclic dilation introduced in [41] has any cool applications. Hopefully this paper serves as an interesting application. I would like to thank Yi Wang for helpful discussions and many useful suggestions, especially for pointing out the issue discussed in Remark 19. I also thank Kai Cieliebak for useful comments and Kei Irie for sending me an erratum to his paper [33], which enables me to make corresponding corrections for various minor issues in earlier versions of this paper.
The work is partially funded by Simons grant #385571.
## 2 Proof of Corollaries
In this section, we prove Corollaries 8 and 11 by assuming Theorem 6. The argument here largely follows the exposition of [40] in the special case when \(M=\mathbb{C}^{n}\), with only slight modifications.
For the proof of Corollary 8, we will be using the following two lemmas. Recall that \(L\) is a closed manifold of dimension \(n\). For a loop \(\gamma:S^{1}\to L\), denote by \(Z_{\gamma}\subset\pi_{1}(L)\) its centralizer in the fundamental group.
**Lemma 12** ([40], Lemma 5.1).: _Let \(\pi:\widetilde{L}\to L\) be a connected covering of \(L\) associated to the subgroup \(Z_{\gamma}\), and let \(\tilde{\gamma}\in\pi_{1}(\widetilde{L})\) be a lift of \(\gamma\). Then \(\pi\) induces a homeomorphism \(\mathcal{L}(\tilde{\gamma})\widetilde{L}\to\mathcal{L}(\gamma)L\) between the corresponding components of the free loop spaces._
**Lemma 13** ([40], Lemma 5.2).: _In the situation of the previous lemma, assume in addition that \(L\) is a \(K(\pi,1)\) space. Then the evaluation \(\mathcal{L}(\tilde{\gamma})\widetilde{L}\to\widetilde{L}\) at the base point is a homotopy equivalence._
The following is a simple consequence of Lemmas 12 and 13.
**Corollary 14**.: _If \(L\) is a \(K(\pi,1)\) space, then every component of \(\mathcal{L}L\) has the homotopy type of a CW complex of dimension at most \(n\)._
We can now prove Corollary 8, which in particular shows that if \(M\) admits a cyclic dilation, and the closed Lagrangian submanifold \(L\subset M\) is a \(K(\pi,1)\) space and relatively _Spin_, then \(L\) has minimal Maslov index \(2\).
Proof of Corollary 8.: We can rewrite the identity (1.21) as
\[\sum_{k=2}^{\infty}\frac{1}{(k-1)!}\sum_{a=a_{1}+\cdots+a_{k-1}}\tilde{\ell}_{k}\left(\tilde{y}(-a),\tilde{x}(a_{1}),\cdots,\tilde{x}(a_{k-1})\right)=(-1)^{n+1}\llbracket L\rrbracket, \tag{2.1}\]
where \(a,a_{1},\cdots,a_{k-1}\in H_{1}(L;\mathbb{Z})\), and \(\tilde{x}(a_{i})\) denotes the component of \(\tilde{x}\) in the summand \(\mathbb{H}^{S^{1}}_{\bullet}(a_{i})=H^{S^{1}}_{\bullet+n+\mu(a_{i})-1}(\mathcal{L}(a_{i})L;\mathbb{R})\) with respect to the decomposition (1.17). We now use the assumption that \(L\) is a \(K(\pi,1)\) space: because of the topological splitting of the free loop space, we have \(\llbracket L\rrbracket\neq 0\).1 It follows that there must be some integer \(k\geq 2\) and homology classes \(a,a_{1},\cdots,a_{k-1}\) such that
Footnote 1: Note that this is not true for general \(L\). For example, \(\llbracket L\rrbracket=0\) in \(H^{S^{1}}_{\bullet}(\mathcal{L}L;\mathbb{R})\) if \(L\) is simply-connected, cf. [54], Corollary 1.1.6.
\[\tilde{\ell}_{k}\left(\tilde{y}(-a),\tilde{x}(a_{1}),\cdots,\tilde{x}(a_{k-1})\right)\neq 0. \tag{2.2}\]
The gradings of these inputs are given by
\[|\tilde{y}(-a)|=n+1-\mu(a)\text{ and }|\tilde{x}(a_{i})|=n-3+\mu(a_{i}). \tag{2.3}\]
Since \(L\) is a \(K(\pi,1)\), it follows from Corollary 14 that the vector space \(\mathbb{H}_{\bullet}^{S^{1}}\) is supported in degrees \(0\leq*\leq n\). In order for (2.2) to be true, we must have
\[2\leq\mu(a)\leq n+1\text{ and }3-n\leq\mu(a_{i})\leq 3. \tag{2.4}\]
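To spell this out: by (2.3) and the support constraint, \(0\leqslant n+1-\mu(a)\leqslant n\) gives \(1\leqslant\mu(a)\leqslant n+1\), and since \(L\) is orientable the Maslov index \(\mu(a)\) is even, whence \(\mu(a)\geqslant 2\); similarly \(0\leqslant n-3+\mu(a_{i})\leqslant n\) gives \(3-n\leqslant\mu(a_{i})\leqslant 3\).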
By our assumption, \(\sum_{i=1}^{k-1}\mu(a_{i})=\mu(a)\geq 2\), so there must be some \(i\) for which \(\mu(a_{i})>0\). Recall that for an orientable Lagrangian submanifold \(L\), \(\mu(a_{i})\) is even, so the constraint \(0<\mu(a_{i})\leq 3\) actually implies that \(\mu(a_{i})=2\). Since \(\tilde{x}(a_{i})\neq 0\) and it is defined in terms of counting pseudoholomorphic discs in the relative homotopy class \(\bar{a}_{i}\in\pi_{2}(M,L)\) with \(\partial\bar{a}_{i}=a_{i}\) (see Section 4.1 for the moduli spaces defining the chain underlying \(\tilde{x}(a_{i})\), and refer to (3.114) for the precise relation between this chain and the homology class \(\tilde{x}(a_{i})\)), we see that \(L\) bounds a non-constant pseudoholomorphic disc of Maslov index \(2\), which in particular has positive symplectic area.
Set \(\gamma:=a_{i}\), and let \(Z_{\gamma}\subset\pi_{1}(L)\) be its centralizer. We have a short exact sequence
\[0\to\ker(\mu|_{Z_{\gamma}})\to Z_{\gamma}\xrightarrow{\frac{1}{2}\mu}\mathbb{ Z}\to 0. \tag{2.5}\]
Since \(\mu(\gamma)=2\), the element \(\gamma\) is sent to a generator of \(\mathbb{Z}\) by \(\frac{1}{2}\mu\), so the sequence (2.5) splits; moreover, \(\gamma\) is central in \(Z_{\gamma}\), so the map \(\rho:\mathbb{Z}\times\ker(\mu|_{Z_{\gamma}})\to Z_{\gamma}\) defined by \(\rho(k,g)=\gamma^{k}\cdot g\) is a group isomorphism. Since \(L\) is a \(K(\pi,1)\) space, the covering space \(\widetilde{L}\) of \(L\) associated to \(Z_{\gamma}\) is also an Eilenberg-MacLane space \(K\left(\mathbb{Z}\times\ker(\mu|_{Z_{\gamma}}),1\right)\). In particular, it is homotopy equivalent to \(S^{1}\times K\) for some \((n-1)\)-dimensional manifold \(K\) which is a \(K\left(\ker(\mu|_{Z_{\gamma}}),1\right)\) space.
It remains to show that \(\widetilde{L}\) is a finite cover of \(L\). Note that since \(\mu(\gamma)=2\) and \(|\tilde{x}(\gamma)|=n\), we have \(H_{n}(\mathcal{L}(\gamma)L;\mathbb{R})\neq 0\). By Lemmas 12 and 13, \(\mathcal{L}(\gamma)L\) is homotopy equivalent to the \(n\)-manifold \(\widetilde{L}\). Since \(H_{n}(\widetilde{L};\mathbb{R})\neq 0\), it follows that \(\widetilde{L}\) is compact, which forces \(\widetilde{L}\to L\) to be a finite covering.
**Remark 15**.: _As an interesting comparison, Fukaya's argument [18] in the case of \(\mathbb{C}^{n}\) gives the bound \(0<\mu(a_{i})\leq 2\) for some \(1\leq i\leq k\), which corresponds to the fact that a non-orientable Lagrangian submanifold (e.g. a Klein bottle in \(\mathbb{C}^{2}\)) can have minimal Maslov index \(1\). This is better than the bound \(0<\mu(a_{i})\leq 3\) obtained in our case, which suggests the existence of non-orientable Lagrangian submanifolds with minimal Maslov index \(3\) in some Liouville manifolds with cyclic dilations._
Proof of Corollary 11.: Let \(L\subset M\) be a compact, orientable, prime \(3\)-manifold. It is known that \(L\) is either diffeomorphic to \(S^{1}\times S^{2}\), or \(L\) is irreducible, meaning that every \(S^{2}\subset L\) bounds a ball.
Let \(L\) be an irreducible \(3\)-manifold. When \(H_{1}(L;\mathbb{Q})\neq 0\), it follows that \(\pi_{1}(L)\) is infinite, and the universal cover of \(L\) is non-compact. In this case the argument is identical to the proof of [40], Corollary 1.2 given in Section 5 of that paper, which implies that \(L\) is diffeomorphic to \(S^{1}\times\Sigma_{g}\) for some \(g\geq 1\). When \(\pi_{1}(L)\) is finite, applying Perelman's proof of the Geometrization Conjecture we see that \(L\) is a spherical space form.
## 3 de Rham complex and string bracket
To prove Theorem 6, we first recall in this section a chain model of the free loop space homology due to Irie [32], and then modify his construction to produce a chain model for the string homology \(\mathbb{H}_{\bullet}^{S^{1}}\) (cf. (1.17)), on which an odd Lie bracket (which is supposed to play the role of the chain level string bracket) can be defined. This enables us to reformulate in Theorem 26 the statement of Theorem 6 at chain level in Section 3.3. In order to produce the chains in the completed de Rham complex as required by Theorem 26, we will follow the strategy of Irie [33] and approximate them using chains up to finite energy level. This is done in Section 3.4.
### Irie's chain model
Let \(L\) be a closed, orientable manifold of dimension \(n\). We will be working with the space of Moore loops with marked points in \(L\). For every \(k\in\mathbb{Z}_{\geq 0}\), define \(\mathcal{L}_{k+1}L\) to be the space of the \((k+2)\)-tuples \((T,\gamma,t_{1},\cdots,t_{k})\), where
* \(T>0\) and \(\gamma\in C^{\infty}(\mathbb{R}/T\mathbb{Z},L)\);
* \(0<t_{1}<\cdots<t_{k}<T\), for convenience, we also set \(t_{0}=0=T\in\mathbb{R}/T\mathbb{Z}\);
* \(\partial_{t}^{m}\gamma(t_{j})=0\) for every \(m\in\mathbb{N}\) and \(0\leq j\leq k\).
From now on, we will omit \(L\) from the notations, and simply write \(\mathcal{L}_{k+1}\) for the space of Moore loops with \(k\) marked points in \(L\). These spaces are equipped with the evaluation maps \(ev_{j}^{\mathcal{L}}:\mathcal{L}_{k+1}\to L\) at the marked points \(t_{j}\), and concatenation maps
\[con_{j}:\mathcal{L}_{k+1}\ _{ev_{j}^{\mathcal{L}}}\times_{ev_{0}^{\mathcal{L}} }\mathcal{L}_{k^{\prime}+1}\to\mathcal{L}_{k+k^{\prime}} \tag{3.1}\]
defined in the obvious way. It is easy to see that the concatenation maps are compatible with the decomposition \(\mathcal{L}_{k+1}=\bigsqcup_{a\in H_{1}(L;\mathbb{Z})}\mathcal{L}_{k+1}(a)\) of \(\mathcal{L}_{k+1}\) into different homotopy classes, where \(\mathcal{L}_{k+1}(a)\subset\mathcal{L}_{k+1}\) is the subspace consisting of tuples \((T,\gamma,t_{1},\cdots,t_{k})\) with \([\gamma]=a\). It follows that we have a map
\[con_{j}:\mathcal{L}_{k+1}(a)\ _{ev_{j}^{\mathcal{L}}}\times_{ev_{0}^{\mathcal{L}} }\mathcal{L}_{k^{\prime}+1}(a^{\prime})\to\mathcal{L}_{k+k^{\prime}}(a+a^{ \prime}) \tag{3.2}\]
for \(a,a^{\prime}\in H_{1}(L;\mathbb{Z})\).
**Definition 16** ([33], Definition 4.1).: _Let \(U\) be a smooth manifold and consider the map \(\varphi:U\to\mathcal{L}_{k+1}\), which can be written as_
\[\varphi(u)=\left(T(u),\gamma(u),t_{1}(u),\cdots,t_{k}(u)\right). \tag{3.3}\]
_We say that \(\varphi\) is a \(C^{\infty}\) map if both the map_
\[U\to\mathbb{R}^{k+1};u\mapsto\left(T(u),t_{1}(u),\cdots,t_{k}(u)\right) \tag{3.4}\]
_and the map_
\[\{(u,t)|u\in U,0\leq t\leq T(u)\}\to L;(u,t)\mapsto\gamma(u)(t) \tag{3.5}\]
_are \(C^{\infty}\). For the second map, it means that the map extends to a \(C^{\infty}\)-map from an open neighborhood of the left-hand side in \(U\times\mathbb{R}\) to \(L\)._
_We say that \(\varphi\) is smooth if \(\varphi\) is \(C^{\infty}\) and \(ev_{0}^{\mathcal{L}}\circ\varphi:U\to L\) is a submersion._
For \(N\in\mathbb{N}\), let \(\mathfrak{U}_{N}\) be the collection of oriented submanifolds in \(\mathbb{R}^{N}\), and define \(\mathfrak{U}:=\bigsqcup_{N\geq 1}\mathfrak{U}_{N}\). Let \(\mathcal{P}(\mathcal{L}_{k+1}(a))\) denote the set of pairs \((U,\varphi)\), where \(U\in\mathfrak{U}\) and \(\varphi:U\to\mathcal{L}_{k+1}(a)\) is a smooth map in the sense of Definition 16. In the terminology of [32], the pair \((U,\varphi)\) is called a _plot_ of the _differentiable space_\(\mathcal{L}_{k+1}\).
For each \(N\), consider the vector space
\[\bigoplus_{(U,\varphi)\in\mathcal{P}(\mathcal{L}_{k+1}(a))}A_{c}^{\dim(U)-N}(U), \tag{3.6}\]
where \(A_{c}^{\ast}(U)\) denotes the space of compactly supported differential forms on \(U\). Denote by \(Z_{N}\) the subspace of (3.6) defined by
\[\left\{(U,\varphi,\pi_{!}\omega)-(U^{\prime},\varphi\circ\pi,\omega)\ \middle|\ (U,\varphi)\in\mathcal{P}(\mathcal{L}_{k+1}(a)),\ U^{\prime}\in\mathfrak{U},\ \omega\in A_{c}^{\dim(U^{\prime})-N}(U^{\prime}),\ \pi:U^{\prime}\to U\text{ is a submersion}\right\}. \tag{3.7}\]
As a graded vector space, the \(N\)th degree _de Rham chain complex_ of \(\mathcal{L}_{k+1}(a)\) is the quotient
\[C_{N}^{\text{dR}}(\mathcal{L}_{k+1}(a)):=\left(\bigoplus_{(U,\varphi)\in \mathcal{P}(\mathcal{L}_{k+1}(a))}A_{c}^{\dim(U)-N}(U)\right)/Z_{N}. \tag{3.8}\]
By abuse of notations, we will write the chains in \(C^{dR}_{*}(\mathcal{L}_{k+1}(a))\) as \((U,\varphi,\omega)\) instead of their equivalence classes. The boundary operator \(\partial:C^{dR}_{*}(\mathcal{L}_{k+1}(a))\to C^{dR}_{*-1}(\mathcal{L}_{k+1}(a))\) is defined by taking the de Rham differential
\[\partial(U,\varphi,\omega):=(-1)^{|\omega|+1}(U,\varphi,d\omega). \tag{3.9}\]
One can check that \(\partial\) is well-defined, and \(\partial^{2}=0\). The homology of \(\left(C^{dR}_{*}(\mathcal{L}_{k+1}(a)),\partial\right)\) will be denoted by \(H^{dR}_{*}(\mathcal{L}_{k+1}(a))\).
It is proved in [32] that the homology group \(H^{dR}_{*}(\mathcal{L}_{k+1}(a))\) is independent of \(k\in\mathbb{Z}_{\geqslant 0}\), and is in fact isomorphic to the singular homology \(H^{sing}_{*}(\mathcal{L}(a);\mathbb{R})\) defined using the \(C^{\infty}\)-topology on \(\mathcal{L}(a)\).
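As a basic example, take \(U=L^{\prime}\in\mathfrak{U}\) to be a copy of \(L\) embedded in some \(\mathbb{R}^{N}\), let \(\varphi=i_{0}\circ\phi\), where \(\phi:L^{\prime}\to L\) is a diffeomorphism and \(i_{0}:L\to\mathcal{L}_{1}(0)\) is the inclusion of constant loops, and let \(\omega=1\). Since \(ev_{0}^{\mathcal{L}}\circ\varphi=\phi\) is a submersion, the triple \((L^{\prime},i_{0}\circ\phi,1)\) is a de Rham chain of degree \(\dim(L^{\prime})-|\omega|=n\); it is a cycle representing the image of the fundamental class of \(L\) under the inclusion of constant loops, and will reappear as the unit \(e_{L}\) in (3.39) below.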
One nice property of de Rham chains is that one can take their fiber products. For \(k\in\mathbb{N}\), \(k^{\prime}\in\mathbb{Z}_{\geqslant 0}\), and \(1\leqslant j\leqslant k\), define the map
\[\circ_{j}:C^{dR}_{n+d}(\mathcal{L}_{k+1}(a))\otimes C^{dR}_{n+d^{\prime}} \left(\mathcal{L}_{k^{\prime}+1}(a^{\prime})\right)\to C^{dR}_{n+d+d^{\prime} }\left(\mathcal{L}_{k+k^{\prime}}(a+a^{\prime})\right) \tag{3.10}\]
by
\[x\circ_{j}y:=(-1)^{(\dim(U)-|\omega|-n)|\omega^{\prime}|}\left(U_{\varphi_{j}}\times_{\varphi_{0}^{\prime}}U^{\prime},con_{j}\circ(\varphi\times\varphi^{\prime}),\omega\times\omega^{\prime}|_{U_{\varphi_{j}}\times_{\varphi_{0}^{\prime}}U^{\prime}}\right), \tag{3.11}\]
where \(\varphi_{j}=ev_{j}^{\mathcal{L}}\circ\varphi\) and \(\varphi_{0}^{\prime}=ev_{0}^{\mathcal{L}}\circ\varphi^{\prime}\). One can check that this is a chain map, so it descends to a map
\[H_{n+d}(\mathcal{L}(a);\mathbb{R})\otimes H_{n+d^{\prime}}\left(\mathcal{L}(a ^{\prime});\mathbb{R}\right)\to H_{n+d+d^{\prime}}\left(\mathcal{L}(a+a^{ \prime});\mathbb{R}\right), \tag{3.12}\]
which corresponds to the Chas-Sullivan loop product defined in [9] under the isomorphism \(H^{dR}_{*}(\mathcal{L}_{k+1}(a))\cong H^{sing}_{*}(\mathcal{L}(a);\mathbb{R})\) mentioned above.
There is a relative version of the above construction, whose definition makes use of de Rham chains on \([-1,1]\times\mathcal{L}_{k+1}(a)\) relative to \(\{-1,1\}\times\mathcal{L}_{k+1}(a)\). Let \(\overline{\mathcal{P}}(\mathcal{L}_{k+1}(a))\) denote the set of tuples \((U,\varphi,\tau_{+},\tau_{-})\), where
* \(U\in\mathfrak{U}\) and \(\varphi:U\to\mathbb{R}\times\mathcal{L}_{k+1}(a)\). Write \(\varphi\) as \((\varphi_{\mathbb{R}},\varphi_{\mathcal{L}})\), and for every interval \(I\subset\mathbb{R}\), define \(U_{I}:=(\varphi_{\mathbb{R}})^{-1}(I)\).
* \(\varphi_{\mathbb{R}}\) and \(\varphi_{\mathcal{L}}\) are \(C^{\infty}\) maps in the sense of Definition 16. Moreover, the map \(U\to\mathbb{R}\times L\) defined by \(u\mapsto(\varphi_{\mathbb{R}}(u),ev_{0}\circ\varphi_{\mathcal{L}}(u))\) is a submersion.
* \(\tau_{+}:U_{\geqslant 1}\to\mathbb{R}_{\geqslant 1}\times U_{1}\) is a diffeomorphism such that \[\varphi|_{U_{\geqslant 1}}=(i_{\geqslant 1}\times\varphi_{\mathcal{L}}|_{U_{1}}) \circ\tau_{+},\] (3.13) where \(i_{\geqslant 1}:\mathbb{R}_{\geqslant 1}\hookrightarrow\mathbb{R}\) is the obvious inclusion.
* \(\tau_{-}:U_{\leqslant-1}\to\mathbb{R}_{\leqslant-1}\times U_{-1}\) is a diffeomorphism such that \[\varphi|_{U_{\leqslant-1}}=(i_{\leqslant-1}\times\varphi_{\mathcal{L}}|_{U_{-1}})\circ\tau_{-}, \tag{3.14}\] where \(i_{\leqslant-1}:\mathbb{R}_{\leqslant-1}\hookrightarrow\mathbb{R}\) is the obvious inclusion.
Note that the sets \(U_{\geqslant 1}\) and \(U_{\leqslant-1}\) can be empty.
For any \((U,\varphi,\tau_{+},\tau_{-})\in\overline{\mathcal{P}}(\mathcal{L}_{k+1}(a))\) and \(N\in\mathbb{Z}\), let \(A^{N}(U,\varphi,\tau_{+},\tau_{-})\) be the vector space of differential \(N\)-forms \(\omega\in A^{N}(U)\) on \(U\) such that
* \(\omega|_{U_{[-1,1]}}\) is compactly supported,
* \(\omega|_{U_{\geqslant 1}}=(\tau_{+})^{*}(1\times\omega|_{U_{1}})\),
* \(\omega|_{U_{\leqslant-1}}=(\tau_{-})^{*}(1\times\omega|_{U_{-1}})\).
Define the space of \(N\)th degree _relative de Rham chains_ to be
\[\overline{C}^{dR}_{N}(\mathcal{L}_{k+1}(a)):=\left(\bigoplus_{(U,\varphi,\tau_{+},\tau_{-})\in\overline{\mathcal{P}}(\mathcal{L}_{k+1}(a))}A^{\dim(U)-N-1}(U,\varphi,\tau_{+},\tau_{-})\right)\!\!/\overline{Z}_{N}, \tag{3.15}\]
where \(\overline{Z}_{N}\) is the subspace of the direct sum above generated by the differences
\[(U,\varphi,\tau_{+},\tau_{-},\omega)-(U^{\prime},\varphi^{\prime},\tau^{ \prime}_{+},\tau^{\prime}_{-},\omega^{\prime}), \tag{3.16}\]
if there exists a submersion \(\pi:U^{\prime}\to U\) satisfying \(\varphi^{\prime}=\varphi\circ\pi\), \(\omega=\pi_{!}\omega^{\prime}\), and
\[\tau_{+}\circ\pi|_{U^{\prime}_{\geqslant 1}}=(\mathit{id}_{\mathbb{R}_{\geqslant 1}}\times\pi|_{U^{\prime}_{1}})\circ\tau^{\prime}_{+}, \tag{3.17}\]
\[\tau_{-}\circ\pi|_{U^{\prime}_{\leq-1}}=(\mathit{id}_{\mathbb{R}_{\leq-1}} \times\pi|_{U^{\prime}_{-1}})\circ\tau^{\prime}_{-}, \tag{3.18}\]
where \(\mathit{id}_{\mathbb{R}_{I}}\) is the identity map on \(\mathbb{R}_{I}\). The differential \(\overline{\partial}:\overline{C}^{dR}_{*}(\mathcal{L}_{k+1}(a))\to\overline{ C}^{dR}_{*-1}(\mathcal{L}_{k+1}(a))\) is defined to be
\[\overline{\partial}(U,\varphi,\tau_{+},\tau_{-},\omega):=(-1)^{|\omega|+1}(U, \varphi,\tau_{+},\tau_{-},d\omega). \tag{3.19}\]
Again, one can check that \(\overline{\partial}\) is well-defined and \(\overline{\partial}^{2}=0\), which gives rise to a relative version of de Rham homology \(\overline{H}^{dR}_{*}(\mathcal{L}_{k+1}(a))\).
The fiber product (3.10) also exists on the relative de Rham complex. For \(k\in\mathbb{N}\), \(k^{\prime}\in\mathbb{Z}_{\geq 0}\), \(1\leq j\leq k\), \(a,a^{\prime}\in H_{1}(L;\mathbb{Z})\), and \(x=(U,\varphi,\tau_{+},\tau_{-},\omega)\), \(y=(U^{\prime},\varphi^{\prime},\tau^{\prime}_{+},\tau^{\prime}_{-},\omega^{ \prime})\) two relative de Rham chains, define
\[\circ_{j}:\overline{C}^{dR}_{n+d}(\mathcal{L}_{k+1}(a))\otimes\overline{C}^{ dR}_{n+d^{\prime}}(\mathcal{L}_{k^{\prime}+1}(a^{\prime}))\to\overline{C}^{dR}_{n+d +d^{\prime}}(\mathcal{L}_{k+k^{\prime}}(a+a^{\prime})) \tag{3.20}\]
by
\[x\circ_{j}y=(-1)^{(\dim(U)-|\omega|-n-1)|\omega^{\prime}|+n}\left(U_{\varphi_{ j}}\times_{\varphi^{\prime}_{0}}U^{\prime},\varphi^{\prime\prime},\tau^{ \prime\prime}_{+},\tau^{\prime\prime}_{-},\omega\times\omega^{\prime}\right), \tag{3.21}\]
where
\[\varphi_{j}:=(\varphi_{\mathbb{R}},ev^{\mathcal{L}}_{j}\circ\varphi_{\mathcal{ L}}),\ \varphi^{\prime}_{0}:=(\varphi^{\prime}_{\mathbb{R}},ev^{\mathcal{L}}_{0}\circ \varphi^{\prime}_{\mathcal{L}}), \tag{3.22}\]
and
\[\varphi^{\prime\prime}(u,u^{\prime}):=\left(\varphi_{\mathbb{R}}(u),con_{j}( \varphi_{\mathcal{L}}(u),\varphi^{\prime}_{\mathcal{L}}(u^{\prime}))\right), \tag{3.23}\]
\[\tau^{\prime\prime}_{+}(u,u^{\prime}):=\left(\rho_{+}(u,u^{\prime}),(pr_{U_{1} }\circ\tau_{+}(u),pr_{U^{\prime}_{1}}\circ\tau^{\prime}_{+}(u^{\prime})) \right), \tag{3.24}\]
\[\tau^{\prime\prime}_{-}(u,u^{\prime}):=\left(\rho_{-}(u,u^{\prime}),(pr_{U_{-1} }\circ\tau_{-}(u),pr_{U^{\prime}_{-1}}\circ\tau^{\prime}_{-}(u^{\prime})) \right), \tag{3.25}\]
with the functions \(\rho_{\pm}\) given by
\[\rho_{+}(u,u^{\prime}):=\mathit{pr}_{\mathbb{R}_{\geq 1}}\circ\tau_{+}(u)= \mathit{pr}_{\mathbb{R}_{\geq 1}}\circ\tau^{\prime}_{+}(u^{\prime}), \tag{3.26}\]
\[\rho_{-}(u,u^{\prime}):=\mathit{pr}_{\mathbb{R}_{\leq-1}}\circ\tau_{-}(u)= \mathit{pr}_{\mathbb{R}_{\leq-1}}\circ\tau^{\prime}_{-}(u^{\prime}), \tag{3.27}\]
where \(\mathit{pr}_{\mathbb{R}_{I}}\) denotes the trivial projection to the \(\mathbb{R}_{I}\) factor.
The de Rham complex \(C^{dR}_{*}(\mathcal{L}_{k+1}(a))\) and its relative version \(\overline{C}^{dR}_{*}(\mathcal{L}_{k+1}(a))\) are related as follows. It is natural to consider the inclusion map \(i:C^{dR}_{*}(\mathcal{L}_{k+1}(a))\to\overline{C}^{dR}_{*}(\mathcal{L}_{k+1}(a))\) defined by
\[i(U,\varphi,\omega):=(-1)^{\dim(U)}(\mathbb{R}\times U,\mathit{id}_{\mathbb{R} }\times\varphi,\tau_{+},\tau_{-},1\times\omega), \tag{3.28}\]
where the diffeomorphisms \(\tau_{\pm}\) are defined in the obvious way, and the projection maps \(e_{\pm}:\overline{C}^{dR}_{*}(\mathcal{L}_{k+1}(a))\to C^{dR}_{*}(\mathcal{L}_{k+ 1}(a))\) are given by
\[e_{+}(U,\varphi,\tau_{+},\tau_{-},\omega):=(-1)^{\dim(U)-1}(U_{1},\varphi|_{U_{1} },\omega|_{U_{1}}), \tag{3.29}\]
\[e_{-}(U,\varphi,\tau_{+},\tau_{-},\omega):=(-1)^{\dim(U)-1}(U_{-1},\varphi|_{U_{-1 }},\omega|_{U_{-1}}), \tag{3.30}\]
where \(U_{1}\) (resp. \(U_{-1}\)) is oriented so that \(\tau_{+}\) (resp. \(\tau_{-}\)) is orientation preserving (\(\mathbb{R}_{\geqslant 1}\) and \(\mathbb{R}_{\leqslant-1}\) are oriented so that \(\frac{\partial}{\partial t}\) is the positive direction). It is easy to see that \(e_{\pm}\) are surjective, and \(i,e_{+}\) and \(e_{-}\) are well-defined chain maps such that \(e_{+}\circ i=e_{-}\circ i=id_{C}\), where \(id_{C}\) denotes the identity of the chain complex \(C^{dR}_{*}(\mathcal{L}_{k+1}(a))\). Conversely, one can show that \(i\circ e_{+}\) and \(i\circ e_{-}\) are chain homotopic to \(id_{\overline{C}}\), the identity of the relative chain complex \(\overline{\mathcal{C}}^{dR}_{*}(\mathcal{L}_{k+1}(a))\). In particular, the projections \(e_{\pm}\) are quasi-isomorphisms, and we have
\[\overline{H}^{dR}_{*}(\mathcal{L}_{k+1}(a))\cong H^{dR}_{*}(\mathcal{L}_{k+1} (a))\cong H^{sing}_{*}(\mathcal{L}(a);\mathbb{R}). \tag{3.31}\]
We end this subsection by remarking that there is actually a simplification of Irie's chain model, due to Y. Wang, constructed using the fundamental groupoid of \(L\). See [52], Chapter 2.
### Chain level string bracket
The de Rham chain model introduced in Section 3.1 is good enough for Irie's realization of Fukaya's ideas outlined in [18]. However, in order to write down the Maurer-Cartan equations for \(S^{1}\)-equivariant chains on the free loop space, we need a de Rham chain model on which the chain level homotopy \(S^{1}\)-action induced by loop rotations is strict (in the sense of [27], Definition 1), and equip such an \(S^{1}\)-complex with a chain level Lie bracket, so that it becomes a dg Lie algebra (with degree shift). Unlike the loop bracket, whose definition naturally lifts to the chain level, the original construction of the string bracket by Chas-Sullivan [9] does not directly apply to the \(S^{1}\)-equivariant complex of de Rham chains (cf. Remark 23 below). To resolve these issues, we will pass to a double quotient of Irie's chain model \(C^{dR}_{*}(\mathcal{L}_{k+1})\), on which the \(S^{1}\)-action becomes strict, and a Lie bracket exists on the chain level.
From now on, we shall further abbreviate the notations by setting
\[C_{*}(a,k)=C^{dR}_{*+n+\mu(a)+k-1}(\mathcal{L}_{k+1}(a)), \tag{3.32}\] \[C_{*}(k):=\bigoplus_{a\in H_{1}(L;\mathbb{Z})}C_{*}(a,k)=\bigoplus_{a\in H_{1}(L;\mathbb{Z})}C^{dR}_{*+n+\mu(a)+k-1}(\mathcal{L}_{k+1}(a)). \tag{3.33}\]
Consider the total complex
\[\left(C_{*}:=\bigoplus_{a\in H_{1}(L;\mathbb{Z})}\prod_{k=0}^{\infty}C_{*}(a, k),\tilde{\partial}\right), \tag{3.34}\]
where the differential \(\tilde{\partial}\) can be expressed in terms of the de Rham differential \(\partial\) and the cosimplicial structure maps \(\delta_{k,i}\) defined below; see [32], Section 2.5.2. It is equipped with the action filtration
\[F^{\Xi}C_{*}:=\bigoplus_{\theta_{M}(a)>\Xi}\prod_{k=0}^{\infty}C_{*}(a,k) \tag{3.35}\]
mentioned in the introduction, with respect to which one can take the completion
\[\widehat{C}_{*}:=\varprojlim_{\Xi\to\infty}C_{*}/F^{\Xi}C_{*}. \tag{3.36}\]
Recall from [32] that the chain complexes \(\left\{C_{*}(k)\right\}_{k\geqslant 0}\) form a non-symmetric dg operad
\[\mathcal{O}_{L}=\left\{\mathcal{O}_{L}(k)\right\}_{k\geqslant 0}, \tag{3.37}\]
with a multiplication
\[\mu_{L}:=(L^{\prime},i_{2}\circ\phi,1)\in C_{-1}(0,2), \tag{3.38}\]
where \(i_{2}:L\to\mathcal{L}_{3}(0)\) is defined by taking three copies of the inclusion of constant loops \(i_{0}:L\to\mathcal{L}_{1}(0)\), \(L^{\prime}\in\mathfrak{U}\), and \(\phi:L^{\prime}\to L\) is an orientation-preserving diffeomorphism, and a unit
\[e_{L}:=(L^{\prime},i_{0}\circ\phi,1)\in C_{1}(0,0) \tag{3.39}\]
of \(\mu_{L}\). We have
\[\mu_{L}\circ_{1}e_{L}=\mu_{L}\circ_{2}e_{L}=id_{\mathcal{O}_{L}}, \tag{3.40}\]
where
\[id_{\mathcal{O}_{L}}:=(L^{\prime},i_{1}\circ\phi,1)\in C_{0}(0,1) \tag{3.41}\]
is the identity, with \(i_{1}:L\to\mathcal{L}_{2}(0)\) by taking two copies of \(i_{0}\). The total complex \(C_{\bullet}\) has the structure of an associative dg algebra, with the product \(\bullet:C_{i}\otimes C_{j}\to C_{i+j-1}\) defined by2
Footnote 2: Note the sign difference from [32], which is due to the fact that we are considering here the total complex \(\prod_{k=0}^{\infty}\mathcal{O}_{L}(k)_{\bullet+k-1}\) with degree shifted up by \(1\), instead of the total complex \(\widehat{\mathcal{O}}_{\bullet}:=\prod_{k=0}^{\infty}\mathcal{O}_{L}(k)_{\bullet+k}\) considered in [32]. This sign difference will be inherited by many of the formulas later on.
\[(x\bullet y)(a,k):=\sum_{\begin{subarray}{c}k_{1}+k_{2}=k\\ a_{1}+a_{2}=a\end{subarray}}(-1)^{k_{1}(|y|+1)}(\mu_{L}\circ_{1}x(a_{1},k_{1}) )\circ_{k_{1}+1}y(a_{2},k_{2}). \tag{3.42}\]
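As a consistency check on the grading conventions (3.32) and (3.33): the underlying form of \(\mu_{L}\) is \(1\in A^{0}_{c}(L^{\prime})\), so \(\mu_{L}\) has de Rham degree \(\dim(L^{\prime})=n\), and solving \(n=\ast+n+\mu(0)+2-1\) gives \(\ast=-1\), i.e. \(\mu_{L}\in C_{-1}(0,2)\) as claimed; the same computation places \(e_{L}\) in \(C_{1}(0,0)\) and \(id_{\mathcal{O}_{L}}\) in \(C_{0}(0,1)\).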
More interestingly, the dg operad \(\mathcal{O}_{L}\) carries a cyclic structure ([32], Definition 2.9). To define it, we identify an element \((T,\gamma,t_{1},\cdots,t_{k})\in\mathcal{L}_{k+1}\) with the \((k+1)\)-tuple \((\Gamma_{0},\cdots,\Gamma_{k})\), where each \(\Gamma_{i}=(T_{i},\gamma_{i})\) is a Moore path, with \(T_{i}>0\) and \(\gamma_{i}\in C^{\infty}([0,T_{i}],L)\). Cyclic permutation of the labeling of the marked points \(t_{0},\cdots,t_{k}\) defines a map
\[R_{k}:\mathcal{L}_{k+1}\to\mathcal{L}_{k+1};\ (\Gamma_{0},\cdots,\Gamma_{k}) \mapsto(\Gamma_{1},\cdots,\Gamma_{k},\Gamma_{0}), \tag{3.43}\]
whose induced map
\[(R_{k})_{\bullet}:C_{\bullet}(k)\to C_{\bullet}(k) \tag{3.44}\]
on the de Rham chain complex gives the cyclic structure of \(\mathcal{O}_{L}\).
Using the cyclic structure maps \((R_{k})_{\bullet}\) on the dg operad \(\mathcal{O}_{L}\), one can define the chain level BV operator
\[\begin{split}\delta_{cyc}&:C_{\bullet}(a,k+1)\to C _{\bullet+1}(a,k),\\ (\delta_{cyc}x)(a,k)&:=\sum_{j=1}^{k+1}(-1)^{|x|+k( j-1)}(R_{k+1})^{j}_{\bullet}x(a,k+1)\circ_{k+2-j}e_{L}.\end{split} \tag{3.45}\]
This is an anti-chain map. Under the isomorphism \(H_{\bullet}(C_{\bullet})\cong H_{\bullet}(\mathcal{L}L;\mathbb{R})\), it is proved in [32], Section 8.5 that the operation induced by \(\delta_{cyc}\) on homology coincides with the BV operator \(\Delta:H_{\bullet}(\mathcal{L}L;\mathbb{R})\to H_{\bullet+1}(\mathcal{L}L;\mathbb{R})\) defined by loop rotations.
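The degree shift in (3.45) is forced by (3.32): composing with \(e_{L}\) leaves the underlying de Rham degree unchanged (the form carried by \(e_{L}\) is \(1\), and the fiber product over \(L\) with a copy of \(L\) does not change dimensions), while the number of marked points drops from \(k+1\) to \(k\), so an element of \(C_{\ast}(a,k+1)\), of de Rham degree \(\ast+n+\mu(a)+k\), is indeed sent to an element of \(C_{\ast+1}(a,k)\).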
As before, the definition of \(\delta_{cyc}\) is compatible with the decomposition (3.34), therefore extends to an operator on the completion \(\widehat{C}_{\bullet}\).
Recall that a _cosimplicial chain complex_ consists of a sequence of complexes \(\left\{C_{\bullet}(k)\right\}_{k\geqslant 0}\), together with two families of chain maps
\[\delta_{k,i}:C_{\bullet}(k-1)\to C_{\bullet}(k),\ \sigma_{k,i}:C_{\bullet}(k+1) \to C_{\bullet}(k) \tag{3.46}\]
for each \(0\leqslant i\leqslant k\) such that
\[\delta_{k+1,j}\circ\delta_{k,i}=\delta_{k+1,i}\circ\delta_{k,j-1}\ \text{for}\ i<j, \tag{3.47}\]
\[\sigma_{k-1,j}\circ\sigma_{k,i}=\sigma_{k-1,i}\circ\sigma_{k,j+1}\ \text{for}\ i\leqslant j, \tag{3.48}\]
\[\sigma_{k,j}\circ\delta_{k+1,i}=\left\{\begin{array}{ll}\delta_{k,i}\circ \sigma_{k-1,j-1}&i<j,\\ id_{C}&i=j,j+1,\\ \delta_{k,i-1}\circ\sigma_{k-1,j}&i>j+1.\end{array}\right. \tag{3.49}\]
In our case, the cosimplicial structure on the dg operad \(\mathcal{O}_{L}\) is given by
\[\delta_{k,i}(x):=\left\{\begin{array}{ll}\mu_{L}\circ_{2}x&i=0,\\ x\circ_{i}\mu_{L}&1\leqslant i\leqslant k-1,\\ \mu_{L}\circ_{1}x&i=k,\end{array}\right. \tag{3.50}\]
\[\sigma_{k,i}(x):=x\circ_{i+1}e_{L}. \tag{3.51}\]
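As an illustration of how the relations (3.47)-(3.49) follow from (3.40) and the associativity of the compositions \(\circ_{i}\) (a routine verification): for \(1\leqslant i\leqslant k\) one has
\[\sigma_{k,i}\circ\delta_{k+1,i}(x)=(x\circ_{i}\mu_{L})\circ_{i+1}e_{L}=x\circ_{i}(\mu_{L}\circ_{2}e_{L})=x\circ_{i}id_{\mathcal{O}_{L}}=x,\]
since \(e_{L}\) is plugged into the second input of \(\mu_{L}\); this is the case \(i=j\) of (3.49). Note that no signs appear, because the forms carried by \(\mu_{L}\) and \(e_{L}\) have degree \(0\).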
A chain \(x\in C_{\bullet}(k)\) is called _normalized_ if \(\sigma_{k-1,i}(x)=0\) for every \(0\leqslant i\leqslant k-1\). It is easy to see that the normalized chains in the total complex \(C_{\bullet}\) form a subcomplex, which we will denote by \(C_{\bullet}^{nm}\). The inclusion \(C_{\bullet}^{nm}\hookrightarrow C_{\bullet}\) is a quasi-isomorphism, see [32], Lemma 2.5 for a proof.
There is an alternative, yet equivalent realization of the subcomplex \(C_{\bullet}^{nm}\) of normalized chains, which is standard in simplicial homotopy theory. We spell out the details for the reader's convenience. A chain \(x\in C_{\bullet}(k)\) is called _degenerate_ if there exists an \(i\) with \(0\leqslant i\leqslant k-1\) and a \(y\in C_{\bullet}(k-1)\) such that \(x=\delta_{k,i}(y)\). It is clear that the degenerate chains in \(C_{\bullet}\) form a subcomplex \(D_{\bullet}\), and we call the quotient complex \(C_{\bullet}^{nd}:=C_{\bullet}/D_{\bullet}\) the complex of _non-degenerate_ de Rham chains. We include an elementary proof of the following fact, which is a consequence of the Dold-Kan correspondence.
**Lemma 17**.: _The composition_
\[C_{\bullet}^{nm}\hookrightarrow C_{\bullet}\to C_{\bullet}^{nd} \tag{3.52}\]
_is an isomorphism of chain complexes. In particular, the projection map \(C_{\bullet}\to C_{\bullet}^{nd}\) is also a quasi-isomorphism._
Proof.: It suffices to show that the natural map \(C_{\bullet}^{nm}(k)\to C_{\bullet}^{nd}(k)\) is an isomorphism for each \(k\in\mathbb{Z}_{\geqslant 0}\). Define
\[\mathscr{F}^{j}C_{\bullet}^{nm}(k):=\bigcap_{0\leqslant i\leqslant j}\ker(\sigma_{k-1,i}),\ \mathscr{F}^{j}D_{\bullet}(k):=\operatorname{span}_{\mathbb{R}}\langle\operatorname{im}(\delta_{k,0}),\cdots,\operatorname{im}(\delta_{k,j})\rangle. \tag{3.53}\]
We will argue by induction on \(j\). When \(j=0\), since \(\sigma_{k-1,0}\circ\delta_{k,0}=\mathit{id}_{C}\), the short exact sequence
\[0\to\mathscr{F}^{0}D_{\bullet}(k)\to C_{\bullet}\xrightarrow{p_{0}}\mathscr{F}^{0}C_{\bullet}^{nm}(k)\to 0 \tag{3.54}\]
splits, where the map \(p_{0}\) is given by \(x\mapsto x-\delta_{k,0}\circ\sigma_{k-1,0}(x)\).
Assume that we already have an isomorphism \(\mathscr{F}^{i}C_{\bullet}^{nm}(r)\cong C_{\bullet}(r)/\mathscr{F}^{i}D_{\bullet}(r)\) for \(0\leqslant i\leqslant j-1\) and \(0\leqslant r\leqslant k-1\). One then checks that the following diagram commutes.
(3.55)
where the map \(p_{j}\) is given by \(x\mapsto x-\delta_{k,j}\circ\sigma_{k-1,j}(x)\). Since the horizontal lines are short exact sequences, and the first two vertical arrows are isomorphisms, so is the third vertical arrow.
For later purposes, it would be more convenient for us to work with a quotient complex of \(C_{\bullet}\) rather than a subcomplex, so we shall use \(C_{\bullet}^{nd}\) instead of \(C_{\bullet}^{nm}\) to perform our constructions below. It follows from [32], Theorem 2.10 that the chain level BV operator \(\delta_{cyc}\) descends to the quotient complex \(C_{\bullet}^{nd}\). By abuse of notations, we shall use \(\tilde{\partial}\) and \(\delta_{cyc}\) to denote the induced differential and the BV operator on \(C_{\bullet}^{nd}\), respectively.
As has been mentioned before, the dg operad \(\mathcal{O}_{L}\) carries an additional piece of structure--a _cocyclic chain complex_, which is a cosimplicial chain complex together with an additional family of chain maps
\[\tau_{k}:C_{\bullet}(k)\to C_{\bullet}(k) \tag{3.56}\]
such that \(\tau_{k}^{k+1}=\mathit{id}_{C}\) and
\[\tau_{k}\circ\delta_{k,i}=\left\{\begin{array}{ll}\delta_{k,k}&i=0,\\ \delta_{k,i-1}\circ\tau_{k-1}&1\leqslant i\leqslant k,\end{array}\right. \tag{3.57}\]
\[\tau_{k}\circ\sigma_{k,i}=\left\{\begin{array}{ll}\sigma_{k,k}\circ\tau_{k+1}^{2} &i=0,\\ \sigma_{k,i-1}\circ\tau_{k+1}&1\leqslant i\leqslant k.\end{array}\right. \tag{3.58}\]
In our case, the maps \(\tau_{k}\) are given by \((R_{k})_{\ast}\). See [32], Remark 7.6.
**Lemma 18**.: _We have \(\delta_{cyc}^{2}=0\) in the complex \(C_{\bullet}^{nd}\) of non-degenerate de Rham chains. In other words, \((C_{\bullet}^{nd},\tilde{\partial},\delta_{cyc})\) is a strict \(S^{1}\)-complex._
Proof.: Starting from a cocyclic chain complex \(\left\{C_{\bullet}(k)\right\}_{k\geqslant 0}\), one can equip the associated total complex \(C_{\bullet}=\prod_{k\geqslant 0}C_{\bullet}(k)\) with the structure of a strict \(S^{1}\)-complex \((C_{\bullet},\tilde{\partial},\mathbb{B})\), where \(\mathbb{B}:=N\circ s\circ(1-\lambda)\) is Connes' operator, with \(N\) the norm operator associated to the cyclic operator \(\lambda\) and \(s\) the degeneracy operator; see [40]. More precisely, on \(C_{\bullet}(k+1)\) it is given by
\[\mathbb{B}_{k}=N_{k}\circ s_{k}\circ(1-\lambda_{k+1}):C_{\bullet}(k+1)\to C _{\bullet+1}(k), \tag{3.59}\]
where
\[\lambda_{k}(x)=(-1)^{k}\tau_{k}(x),\ N_{k}(x)=(1+\lambda+\cdots+\lambda^{k})(x) \tag{3.60}\]
for \(x\in C_{\bullet}(k)\), and
\[s_{i}(x)=(-1)^{|x|}\sigma_{k,i-1}\circ\tau_{k+1}(x) \tag{3.61}\]
for \(x\in C_{\bullet}(k+1)\) and \(1\leqslant i\leqslant k+1\). It is straightforward to verify that \(\mathbb{B}^{2}=0\): the composition \(\mathbb{B}_{k-1}\circ\mathbb{B}_{k}\) contains the factor \((1-\lambda_{k})\circ N_{k}=1-\lambda_{k}^{k+1}=0\); see also [51], Example 2.6. The operator \(\mathbb{B}\) clearly descends to one on \(C_{\bullet}^{nd}\). By abuse of notations, we will still denote it by \(\mathbb{B}:C_{\bullet}^{nd}\to C_{\bullet+1}^{nd}\). To see that \(\delta_{cyc}^{2}=0\) on \(C_{\bullet}^{nd}\), it remains to show that \(\delta_{cyc}=\mathbb{B}\) after passing to the quotient \(C_{\bullet}/D_{\bullet}\). For any \(x\in C_{\bullet}(k+1)\), we have
\[s_{k}\circ\lambda_{k+1}(x)=(-1)^{|x|+1}\sigma_{k,k}\circ\tau_{k+1}^{2}(x)=(-1 )^{k}\tau_{k}\circ\sigma_{k,0}(x), \tag{3.62}\]
where the second equality follows from (3.58). Now let \([x]\in C_{\bullet}^{nd}(k+1)\) be the image of \(x\) under the quotient map \(C_{\bullet}(k+1)\twoheadrightarrow C_{\bullet}^{nd}(k+1)\). It follows that
\[\sigma_{k,0}([x])=\sigma_{k,0}\left([x-\delta_{k+1,0}\circ\sigma_{k,0}(x)] \right)=0, \tag{3.63}\]
where the vanishing on the right-hand side follows from (3.49). Thus the operator \(\mathbb{B}_{k}\), after passing to the quotient complex \(C_{\bullet}^{nd}(k+1)\), can be simplified to \(N_{k}\circ s_{0}\), with \(s_{0}:=\lambda_{k}\circ s_{1}\circ\lambda_{k+1}^{-1}\). On the other hand, it is explained in [32], Section 2.5.4 that \(N_{k}\circ s_{0}\) coincides with the BV operator \(\delta_{cyc}\) defined by (3.45).
By Lemma 18, various versions of \(S^{1}\)-equivariant homology theories can be constructed in terms of the strict \(S^{1}\)-complex \((C_{\bullet}^{nd},\tilde{\partial},\delta_{cyc})\). What is relevant for us here is the (positive) \(S^{1}\)-equivariant free loop space homology \(H_{\bullet}^{S^{1}}(\mathcal{L}L;\mathbb{R})\), also known as the string homology of \(L\), which is the homology of the chain complex
\[C_{\bullet}^{S^{1}}:=\left(C_{\bullet}^{nd}\otimes_{\mathbb{R}}\mathbb{R}((u ))/u\mathbb{R}[[u]],\partial^{S^{1}}:=\tilde{\partial}+u\delta_{cyc}\right), \tag{3.64}\]
where \(u\) is a formal variable of degree \(-2\) (note that this is different from the case of \(S^{1}\)-equivariant symplectic cohomology, where \(u\) has degree \(2\), because of the homological grading convention that we imposed here). Similarly, one can define the completed version \(\widehat{C}_{\bullet}^{S^{1}}\) of the \(S^{1}\)-equivariant chain complex with respect to the action filtration, and its homology \(\widehat{H}_{\bullet}^{S^{1}}(\mathcal{L}L;\mathbb{R})\)3, which is just the \(\widehat{\mathbb{H}}_{\bullet}^{S^{1}}\) defined by (1.19). It is clear that \(C_{\bullet}^{S^{1}}\) decomposes according to \((a,k)\in H_{1}(L;\mathbb{Z})\times\mathbb{Z}_{\geqslant 0}\), and we will use the notations \(C_{\bullet}^{S^{1}}(a,k)\) and \(C_{\bullet}^{S^{1}}(k)\) for their obvious meanings.
Footnote 3: To avoid confusions, we remark that this is not the _periodic \(S^{1}\)-equivariant homology_ of the free loop space, which is denoted using the same notation in [54].
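Note that \(\partial^{S^{1}}\) indeed squares to zero: since \(\tilde{\partial}^{2}=0\), \(\delta_{cyc}\) anti-commutes with \(\tilde{\partial}\), and \(\delta_{cyc}^{2}=0\) on \(C_{\bullet}^{nd}\) by Lemma 18, we have
\[(\partial^{S^{1}})^{2}=\tilde{\partial}^{2}+u\left(\tilde{\partial}\delta_{cyc}+\delta_{cyc}\tilde{\partial}\right)+u^{2}\delta_{cyc}^{2}=0,\]
which is exactly where the strictness of the \(S^{1}\)-complex \((C_{\bullet}^{nd},\tilde{\partial},\delta_{cyc})\) is used.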
**Remark 19**.: _It is proved in [51] that there is an isomorphism between \(H_{\bullet}^{S^{1}}(\mathcal{L}L;\mathbb{R})\) and the homology of the complex \(C_{\bullet}^{nd}\otimes_{\mathbb{R}}\mathbb{R}[[u]]\), equipped with the equivariant differential \(\partial^{S^{1}}\) as above. This brings no contradiction since there is a quasi-isomorphism_
\[\left(C_{\bullet}\otimes_{\mathbb{R}}\mathbb{R}((u))/u\mathbb{R}[[u]],\partial^ {S^{1}}\right)\cong\left(C_{\bullet}\otimes_{\mathbb{R}}\mathbb{R}[[u]], \partial^{S^{1}}\right) \tag{3.65}\]
_for any cocyclic chain complex \(C_{\bullet}\). In particular, the \(S^{1}\)-equivariant chain complex (3.64) does not define the (positive) cyclic homology \(HC_{\bullet}(C_{\bullet}^{\text{nd}})\). Instead, it computes the negative cyclic homology \(HC_{\bullet}^{-}(C_{\bullet}^{\text{nd}})\). The chain complex underlying the cyclic homology \(HC_{\bullet}(C_{\bullet}^{\text{nd}})\) is given by \(\left(C_{\bullet}^{\text{nd}}\otimes_{\mathbb{R}}\mathbb{R}[[u^{-1}]],\partial^{S^{1}}\right)\), which allows infinitely many negative powers of \(u\). For cyclic chain complexes, the corresponding facts are summarized in [50], Corollary 2.14. The proofs in the cocyclic case are parallel to those in the cyclic case._
For the string homology \(H_{\bullet}^{S^{1}}(\mathcal{L}L;\mathbb{R})\), Chas-Sullivan [9] defined a Lie bracket, known as _string bracket_, by applying the erasing map to the loop product pre-composed with the marking maps (cf. (3.88)). This equips \(H_{\bullet}^{S^{1}}(\mathcal{L}L;\mathbb{R})\) with the structure of a graded Lie algebra of degree \(2-n\) (under the usual grading convention). Inspired by the construction of the Lie bracket for cyclic operads in [35] and [53], we introduce here a Lie bracket on a (quasi-isomorphic) quotient complex of \(C_{\bullet}^{S^{1}}\), which is supposed to play the role of a chain level refinement of the string bracket. We shall define it as a bilinear operation
\[\{\cdot,\cdot\}:C_{i}^{S^{1}}\otimes C_{j}^{S^{1}}\to C_{i+j+1}^{S^{1}} \tag{3.66}\]
on the \(S^{1}\)-equivariant de Rham complex \(C_{\bullet}^{S^{1}}\) as follows. For \(\tilde{x}=\sum_{i=0}^{\infty}x_{i}\otimes u^{-i}\in C_{\bullet}^{S^{1}}\) and \(\tilde{y}=\sum_{i=0}^{\infty}y_{i}\otimes u^{-i}\in C_{\bullet}^{S^{1}}\), we set
\[\begin{split}\{\tilde{x},\tilde{y}\}\,(a,k)&:=\sum \limits_{\begin{subarray}{c}a_{1}+a_{2}=a\\ k_{1}+k_{2}=k+1\end{subarray}}\sum\limits_{i=1}^{k_{1}}\sum\limits_{j=1}^{k_ {2}+1}(-1)^{\mathfrak{K}_{ij}}x_{0}(a_{1},k_{1})\circ_{i}\left((R_{k_{2}+1})^{ j}_{\bullet}y_{0}(a_{2},k_{2}+1)\circ_{k_{2}+2-j}e_{L}\right)\otimes 1\\ &-\sum\limits_{\begin{subarray}{c}a_{1}+a_{2}=a\\ k_{1}+k_{2}=k+1\end{subarray}}\sum\limits_{i=1}^{k_{1}}\sum\limits_{j=1}^{k_ {1}+1}(-1)^{\mathfrak{K}_{ij}+(|x_{0}|+1)(|y_{0}|+1)}\left((R_{k_{1}+1})^{j}_{ \bullet}y_{0}(a_{1},k_{1}+1)\circ_{k_{1}+2-j}e_{L}\right)\\ &\circ_{i}x_{0}(a_{2},k_{2})\otimes 1,\end{split} \tag{3.67}\]
where
\[\mathfrak{K}_{ij}=(i-1)(k_{2}-1)+(k_{1}-1)(|y_{0}|+k_{2})+|y_{0}|+k_{2}(j-1). \tag{3.68}\]
It is not hard to see that \(\{\cdot,\cdot\}\) does not define a Lie bracket on \(C_{\bullet}^{S^{1}}\). However, we will see that it becomes a Lie bracket after passing to a quotient complex \((C_{\bullet}^{\lambda},\tilde{\partial})\) of \((C_{\bullet}^{S^{1}},\partial^{S^{1}})\), where
\[C_{\bullet}^{\lambda}:=C_{\bullet}^{\text{nd}}/\text{im}(1-\lambda) \tag{3.69}\]
is known as the _Connes' complex_ [14], with \(\lambda:C_{\bullet}^{\text{nd}}\to C_{\bullet}^{\text{nd}}\) being the cyclic operator defined in the proof of Lemma 18. We first check that the differential descends to this quotient.
**Lemma 20**.: _The differential \(\tilde{\partial}\) on the total complex \(C_{\bullet}\) descends to one on its quotient \(C_{\bullet}^{\lambda}\)._
Proof.: In the classical case of an endomorphism operad of an associative algebra, this is a standard fact. See for example [40], Lemma 2.1.1. In our case, define \(\tilde{\partial}^{\prime}:C_{\bullet}\to C_{\bullet-1}\) by
\[(\tilde{\partial}^{\prime}x)(a,k):=(\tilde{\partial}x)(a,k)-(-1)^{|x|+1} \delta_{k,k}(x(a,k-1)). \tag{3.70}\]
It is straightforward to check that \((1-\lambda)\tilde{\partial}^{\prime}=\tilde{\partial}(1-\lambda)\). In particular, \(\tilde{\partial}\big((1-\lambda)x\big)=(1-\lambda)\tilde{\partial}^{\prime}x\in\mathrm{im}(1-\lambda)\) for every \(x\in C_{\bullet}\), so \(\tilde{\partial}\) preserves \(\mathrm{im}(1-\lambda)\) and descends to the quotient \(C_{\bullet}^{\lambda}\).
In fact, the quotient complex \((C_{\bullet}^{\lambda},\tilde{\partial})\) provides an alternative chain model for the string homology of \(L\).
**Lemma 21**.: _The natural projection_
\[\begin{split}\pi_{\lambda}:\left(C_{\bullet}^{\text{nd}}\otimes_ {\mathbb{R}}\mathbb{R}((u))/u\mathbb{R}[[u]],\tilde{\partial}+u\delta_{\text{ cyc}}\right)&\to(C_{\bullet}^{\lambda},\tilde{\partial})\\ \tilde{x}&\mapsto\underline{x}\end{split} \tag{3.71}\]
_is a quasi-isomorphism._
Proof.: Since we are working over the field \(\mathbb{R}\supset\mathbb{Q}\), it follows from [40], Theorem 2.1.5 that the natural inclusion
\[\left(\ker(1-\lambda),\tilde{\partial}\right)\hookrightarrow\left(C_{\bullet}^{nd}\otimes_{\mathbb{R}}\mathbb{R}[[u]],\partial^{S^{1}}\right) \tag{3.72}\]
is a quasi-isomorphism. More precisely, [40], Theorem 2.1.5 deals with cyclic chain complexes, and (3.72) follows by passing to the cyclic dual. See also [51], Example 2.6. Consider the composition
\[\left(\ker(1-\lambda),\tilde{\partial}\right)\hookrightarrow\left(C_{\bullet}^{nd}\otimes_{\mathbb{R}}\mathbb{R}[[u]],\partial^{S^{1}}\right)\xrightarrow{\cong}\left(C_{\bullet}^{nd}\otimes_{\mathbb{R}}\mathbb{R}((u))/u\mathbb{R}[[u]],\partial^{S^{1}}\right)\xrightarrow{\pi_{\lambda}}\left(C_{\bullet}^{\lambda},\tilde{\partial}\right), \tag{3.73}\]
whose composition is the obvious isomorphism of chain complexes \(\ker(1-\lambda)\cong C_{\bullet}^{\lambda}\). Since the first two maps are quasi-isomorphisms, so is the third one.
As we have remarked above, the advantage of using the quotient chain model \(C_{\bullet}^{\lambda}\) is that the binary operation \(\{\cdot,\cdot\}\) now gives rise to a well-defined Lie bracket. By abuse of notations, we will still use \(\{\cdot,\cdot\}\) to denote the induced operation on the quotient complex \(C_{\bullet}^{\lambda}\). More precisely, for any \(\underline{x},\underline{y}\in C_{\bullet}^{\lambda}\), we have
\[\left\{\underline{x},\underline{y}\right\}=\underline{\{\tilde{x},\tilde{y} \}}. \tag{3.74}\]
Note that this definition depends on non-canonical choices of the lifts \(\tilde{x}\) and \(\tilde{y}\) of \(\underline{x}\) and \(\underline{y}\), respectively, but different choices only differ by a cyclic permutation, hence the induced operation on \(C_{\bullet}^{\lambda}\) is well-defined.
We use \(C_{\bullet}^{\lambda}(a,k)\) to denote \((a,k)\)-component in the decomposition
\[C_{\bullet}^{\lambda}=\bigoplus_{a\in H_{1}(L;\mathbb{Z})}\prod_{k=0}^{ \infty}C_{\bullet}^{\lambda}(a,k). \tag{3.75}\]
In this paper, we will use both of the chain complexes \(C_{\bullet}^{S^{1}}\) and \(C_{\bullet}^{\lambda}\) as de Rham models of \(S^{1}\)-equivariant chains on the free loop space. The former complex has the advantage that its differential \(\partial^{S^{1}}\) has a more convenient form, while the latter complex is useful for the algebraic arguments in Sections 3.3 and 3.4 since it carries the structure of an odd dg Lie algebra.
**Lemma 22**.: \(\left(C_{\bullet}^{\lambda},\tilde{\partial},\{\cdot,\cdot\}\right)\) _is a dg Lie algebra of degree \(1\). In particular, for \(\underline{x},\underline{y},\underline{z}\in C_{\bullet}^{\lambda}\), the bilinear operation \(\{\cdot,\cdot\}\) satisfies the Jacobi identity_
\[\left\{\underline{x},\{\underline{y},\underline{z}\}\right\}=\left\{\{ \underline{x},\underline{y}\},\underline{z}\right\}+(-1)^{(|\underline{x}|+1) (|\underline{y}|+1)}\left\{\underline{y},\{\underline{x},\underline{z}\} \right\}, \tag{3.76}\]
_and is graded anti-symmetric in the sense that_
\[\left\{\underline{x},\underline{y}\right\}=-(-1)^{(|\underline{x}|+1)(| \underline{y}|+1)}\{\underline{y},\underline{x}\}. \tag{3.77}\]
Proof.: It is obvious from the definition that the bracket \(\{\cdot,\cdot\}\) satisfies the graded Leibniz rule. In order to verify the anti-symmetric property and the Jacobi identity, it is convenient to use the "edge-grafting" operations
\[{}_{i}\circ_{j}:C_{\bullet}(k_{1})\otimes C_{\bullet}(k_{2})\to C_{\bullet}(k _{1}+k_{2}-1), \tag{3.78}\]
where \(0\leqslant i\leqslant k_{1}\) and \(0\leqslant j\leqslant k_{2}\), introduced in [35] and given explicitly by
\[x(a_{1},k_{1})\ {}_{i}\circ_{j}\ y(a_{2},k_{2}):=\left\{\begin{array}{ll}x(a_{1},k_{1} )\circ_{i}\lambda_{k_{2}}^{j}y(a_{2},k_{2})&i\geqslant 1,\\ \lambda_{k_{1}}x(a_{1},k_{1})\circ_{k_{1}}\lambda_{k_{2}}^{j}y(a_{2},k_{2})&i =0,\end{array}\right. \tag{3.79}\]
where we have used the cyclic operator \(\lambda_{k}:C_{\bullet}(k)\to C_{\bullet}(k)\) instead of the rotation operator \((R_{k})_{\bullet}\) since it simplifies the signs. These operations are graded (anti-)commutative up to cyclic permutations, in the sense that (cf. [53], Lemma B.13)
\[x(a_{1},k_{1})\ {}_{i}\circ_{j}\ y(a_{2},k_{2})=(-1)^{(|x|+1)(|y|+1)}\lambda_{k_{1}+k_{2} -1}^{k_{2}-i+j}\left(y(a_{2},k_{2})\ {}_{j}\circ_{i}\ x(a_{1},k_{1})\right). \tag{3.80}\]
It is clear that the operations \({}_{i}\circ_{j}\) descend to operations on the quotient complex \(C_{\bullet}^{nd}\). By abuse of notations, we shall use the same notations to denote the induced operations on \(C_{\bullet}^{nd}\). With the identity (3.80), we can write the bracket \(\{\tilde{x},\tilde{y}\}\) in a more compact way:
\[\begin{split}\{\tilde{x},\tilde{y}\}\,(a,k)=\sum_{\begin{subarray}{c}a_{1}+a_{2}=a\\ k_{1}+k_{2}=k+1\end{subarray}}\sum_{i=0}^{k_{1}}\sum_{j=0}^{k_{2}}&(-1)^{\mathfrak{K}_{i}+(|x_{0}|+1)(|y_{0}|+1)+1}\lambda_{k_{1}+k_{2}-1}^{k_{2}-i+j}\\ &\left(\left(y_{0}(a_{2},k_{2}+1)\,\,{}_{0}\circ_{0}e_{L}\right)\,\,{}_{j}\circ_{i}\,x_{0}(a_{1},k_{1})\right)\otimes 1,\end{split} \tag{3.81}\]
where
\[\mathfrak{K}_{i}=(i-1)(k_{2}-1)+(k_{1}-1)(|y_{0}|+k_{2}). \tag{3.82}\]
See [32], Section 2.5.4 for the sign conventions, especially the appearance of \(\mathfrak{K}_{i}\) in the expression above. It follows that
\[\{\underline{x},\underline{y}\}\,(a,k)=\sum_{\begin{subarray}{c}a_{1}+a_{2}=a\\ k_{1}+k_{2}=k+1\end{subarray}}\sum_{i=0}^{k_{1}}\sum_{j=0}^{k_{2}}(-1)^{\mathfrak{K}_{i}+(|x_{0}|+1)(|y_{0}|+1)+1}\,\underline{\left(y_{0}(a_{2},k_{2}+1)\,\,{}_{0}\circ_{0}e_{L}\right)\,\,{}_{j}\circ_{i}\,x_{0}(a_{1},k_{1})}. \tag{3.83}\]
On the quotient complex \(C_{\bullet}^{\lambda}\) the cyclic operators \(\lambda_{\bullet}\) act trivially, and from this expression the graded anti-symmetry (3.77) and the Jacobi identity (3.76) can be verified by direct manipulations of the right-hand side using (3.80), as in the case of cyclic operads ([35], [53]).
Since the Lie bracket \(\{\cdot,\cdot\}\) is compatible with the decomposition of \(C^{\lambda}_{\bullet}\) according to \((a,k)\in H_{1}(L;\mathbb{Z})\times\mathbb{Z}_{\geqslant 0}\), it naturally extends to the completion \(\widehat{C}^{\lambda}_{\bullet}\), and equips it with the structure of a dg Lie algebra of degree \(1\) as well.
As in the non-equivariant case, we can introduce the relative version of the \(S^{1}\)-equivariant de Rham chain complex \(C^{S^{1}}_{\bullet}\) and its quotient \(C^{\lambda}_{\bullet}\). To do this, first notice that the cyclic permutation map (3.43) also induces a chain map
\[(\overline{R}_{k})_{\bullet}:\overline{C}_{\bullet}(k)\to\overline{C}_{\bullet }(k) \tag{3.89}\]
on the relative de Rham complex, using which we can define a BV operator
\[\begin{split}\bar{\delta}_{cyc}&:\overline{C}_{\bullet}(a,k+1)\to\overline{C}_{\bullet+1}(a,k)\\ (\bar{\delta}_{cyc}\bar{x})(a,k)&:=\sum_{j=1}^{k+1}(-1)^{|\bar{x}|+k(j-1)}(\overline{R}_{k+1})^{j}_{\bullet}\bar{x}(a,k+1)\circ_{k+2-j}\bar{e}_{L},\end{split} \tag{3.90}\]
where
\[\bar{e}_{L}:=\left(\mathbb{R}\times L^{\prime},\,\mathit{id}_{\mathbb{R}}\times(i_{0}\circ\phi),\,\mathit{id}_{\mathbb{R}},\,\mathit{id}_{\mathbb{R}},\,1_{\mathbb{R}}\times 1_{L^{\prime}}\right) \tag{3.91}\]
is the unit in \(\overline{C}_{1}(0,0)\), with the diffeomorphism \(\phi:L^{\prime}\to L\) and the inclusion of constant loops \(i_{0}:L\to\mathcal{L}_{1}(0)\) defined as above, and where \(1_{\mathbb{R}}\) and \(1_{L^{\prime}}\) denote the constant functions with value \(1\) on \(\mathbb{R}\) and \(L^{\prime}\), respectively. Since the relative de Rham complexes \(\left\{\overline{C}_{\bullet}(k)\right\}_{k\geqslant 0}\) also carry a cocyclic structure, the analogue of Lemma 18 holds in the relative case, and \(\bar{\delta}_{cyc}\) is an anti-chain map with \(\bar{\delta}_{cyc}^{2}=0\) after passing to the quotient complex \(\overline{C}^{nd}_{\bullet}\) of non-degenerate relative chains. Define the \(S^{1}\)-equivariant relative de Rham chain complex as
\[\overline{C}^{S^{1}}_{\bullet}:=\left(\overline{C}^{nd}_{\bullet}\otimes_{\mathbb{R}}\mathbb{R}((u))/u\mathbb{R}[[u]],\,\bar{\partial}^{S^{1}}:=\tilde{\bar{\partial}}+u\bar{\delta}_{cyc}\right), \tag{3.92}\]
with \(|u|=-2\). The chain level Lie bracket is defined in the same way as above. By abuse of notations, we shall still denote it by \(\{\cdot,\cdot\}\). As before, it is first defined as an operation on \(\overline{C}^{S^{1}}_{\bullet}\), and then descends to a Lie bracket of degree \(1\) on the quotient complex
\[\overline{C}^{\lambda}_{\bullet}:=\overline{C}^{nd}_{\bullet}/\mathrm{im}(1- \bar{\lambda}), \tag{3.93}\]
where \(\bar{\lambda}\) denotes the cyclic operator on the relative de Rham complex \(\overline{C}^{nd}_{\bullet}\) of non-degenerate chains. Moreover, \(\{\cdot,\cdot\}\) extends to the completion \(\widehat{\overline{C}}^{\lambda}_{\bullet}\) of \(\overline{C}^{\lambda}_{\bullet}\) with respect to the action filtration.
To relate the chain complexes \(C_{\bullet}\) and \(\overline{C}_{\bullet}\), we have the inclusion map \(i\) defined by (3.28), and the projection maps \(e_{\pm}\) to both ends given by (3.29) and (3.30). Since these maps are compatible with the cyclic permutations of the marked points \(t_{0},\cdots,t_{k}\), and the fiber products with the units \(e_{L}\) and \(\bar{e}_{L}\), they induce morphisms between the strict \(S^{1}\)-complexes \(C^{nd}_{\bullet}\) and \(\overline{C}^{nd}_{\bullet}\). We denote the induced chain maps on the \(S^{1}\)-equivariant de Rham complexes by
\[\tilde{i}:C^{S^{1}}_{\bullet}\to\overline{C}^{S^{1}}_{\bullet}\ \text{and}\ \tilde{e}_{\pm}: \overline{C}^{S^{1}}_{\bullet}\to C^{S^{1}}_{\bullet}, \tag{3.94}\]
respectively. One can check that \(\tilde{e}_{+}\circ\tilde{i}=\tilde{e}_{-}\circ\tilde{i}=\mathit{id}_{C}\). Note that we have abused the notations here and used \(\mathit{id}_{C}\) for the identity map of \(C^{S^{1}}_{\bullet}\) as well. Similarly, the identity endomorphism of \(\overline{C}^{S^{1}}_{\bullet}\) will still be denoted by \(\mathit{id}_{\overline{C}}\). The maps \(i\) and \(e_{\pm}\) also descend to the quotient complexes \(C^{\lambda}_{\bullet}\) and \(\overline{C}^{\lambda}_{\bullet}\). We denote these chain maps by
\[\underline{i}:C^{\lambda}_{\bullet}\to\overline{C}^{\lambda}_{\bullet}\ \text{and}\ \underline{e}_{\pm}:\overline{C}^{\lambda}_{\bullet}\to C^{\lambda}_{\bullet}. \tag{3.95}\]
It is also clear that \(\underline{e}_{+}\circ\underline{i}=\underline{e}_{-}\circ\underline{i}=id_{C}\), where the \(\mathit{id}_{C}\) here is the identity map on \(C^{\lambda}_{\bullet}\).
**Lemma 24**.: _The map \((\tilde{e}_{+},\tilde{e}_{-}):\overline{C}^{S^{1}}_{\bullet}\to C^{S^{1}}_{ \bullet}\oplus C^{S^{1}}_{\bullet}\) is surjective. Moreover, \(\tilde{i}\circ\tilde{e}_{+}\) and \(\tilde{i}\circ\tilde{e}_{-}\) are chain homotopic to \(\mathit{id}_{\overline{C}}\)._
Proof.: The surjectivity of \((\tilde{e}_{+},\tilde{e}_{-})\) follows from [33], Lemma 4.7.
In [33], Lemma 4.8, a chain homotopy \(K:\overline{C}_{*}\to\overline{C}_{*+1}\) from \(i\circ e_{+}\) to \(\mathit{id}_{\overline{C}}\) has been defined. It is clear that the map \(K\) induces a map on the \(S^{1}\)-equivariant chain complex \(\overline{C}_{*}^{\mathbb{S}^{1}}\). By abuse of notations, we will still denote it by \(K:\overline{C}_{*}^{\mathbb{S}^{1}}\to\overline{C}_{*+1}^{\mathbb{S}^{1}}\). Here we need to check that \(K\circ\bar{\delta}_{cyc}+\bar{\delta}_{cyc}\circ K=0\), so the map \(K\) satisfies
\[K(\bar{\partial}+u\bar{\delta}_{cyc})+(\bar{\partial}+u\bar{\delta}_{cyc})K= \mathit{id}_{\overline{C}}-\tilde{i}\circ\tilde{e}_{+}, \tag{3.96}\]
which gives a chain homotopy between \(\tilde{i}\circ\tilde{e}_{+}\) and \(\mathit{id}_{\overline{C}}\). To see this, recall that
\[K(U,\varphi,\tau_{+},\tau_{-},\omega)=(-1)^{|\omega|+1}(\mathbb{R}\times U, \bar{\varphi},\bar{\tau}_{+},\bar{\tau}_{-},\bar{\omega}), \tag{3.97}\]
where \(\bar{\varphi}:\mathbb{R}\times U\to\mathbb{R}\times\mathcal{L}_{k+1}\) is given by \(\bar{\varphi}=(\alpha\left(r,\varphi_{\mathbb{R}}(u)\right),\varphi_{\mathcal{ L}}(u))\), with \(\alpha:\mathbb{R}^{2}\to\mathbb{R}\) being some fixed choice of smooth function (cf. Step 1 in the proof of [33], Lemma 4.8). It is clear from the definition that the map \(K\) commutes with cyclic permutations of the marked points \(t_{0},\cdots,t_{k}\) and the fiber product with \(\bar{e}_{L}\), which implies that \(K\circ\bar{\delta}_{cyc}=(-1)^{\varepsilon}\bar{\delta}_{cyc}\circ K\). To determine the sign \((-1)^{\varepsilon}\), recall that the definition of the BV operator \(\bar{\delta}_{cyc}\) involves a sign \((-1)^{|\tilde{x}|+k(j-1)}\) before each term (cf. (3.90)). Since the map \(K\) has degree \(1\), it follows that \(\varepsilon=1\). The argument for the \(\tilde{i}\circ\tilde{e}_{-}\) case is similar.
Passing to the quotient complexes \(C_{*}^{\lambda}\) and \(\overline{C}_{*}^{\lambda}\), we obtain the following.
**Corollary 25**.: _The map \((\underline{e}_{+},\underline{e}_{-}):\overline{C}_{*}^{\lambda}\to C_{*}^{ \lambda}\oplus C_{*}^{\lambda}\) is surjective. Moreover, \(\underline{i}\circ\underline{e}_{+}\) and \(\underline{i}\circ\underline{e}_{-}\) are chain homotopic to \(\mathit{id}_{\overline{C}}\)._
### Chain level statement of Theorem 6
We provide a chain level statement of Theorem 6 using the de Rham models \((\widehat{C}_{*}^{\lambda},\tilde{\partial})\) and \((\widehat{C}_{*}^{S^{1}},\partial^{S^{1}})\) of the \(S^{1}\)-equivariant free loop space homology defined in Section 3.2. Recall that \(\widehat{C}_{*}^{\lambda}\) carries the structure of a dg Lie algebra of degree \(1\), with the Lie bracket given by \(\{\cdot,\cdot\}\) defined in (3.67).
**Theorem 26**.: _There exist \(S^{1}\)-equivariant de Rham chains \(\underline{x}\in\widehat{C}_{-2}^{\lambda}\), \(\underline{y}\in\widehat{C}_{2}^{\lambda}\), \(\underline{z}\in\widehat{C}_{1}^{\lambda}\), and a real number \(\varepsilon\in\mathbb{R}_{>0}\) such that_
1. \(\tilde{\partial}(\underline{x})-\frac{1}{2}\left\{\underline{x},\underline{x} \right\}=0\)_. In other words,_ \(\underline{x}\) _is a Maurer-Cartan element._
2. \(\tilde{\partial}(\underline{y})-\left\{\underline{x},\underline{y}\right\}= \underline{z}\)_. In other words,_ \(\underline{z}\) _has a primitive_ \(\underline{y}\) _with respect to the_ \(\underline{x}\)_-deformed differential on_ \(\widehat{C}_{*}^{\lambda}\)_._
3. \(\underline{x}(a,k)\neq 0\) _only if_ \(\theta_{M}(a)\geq 2\varepsilon\) _or_ \(a=0\)_,_ \(k\geq 3\)_. Moreover, the chain_ \(\underline{x}(0,3)\) _admits a lift_ \(\tilde{x}(0,3)\in C_{-2}^{S^{1}}(0,3)\)_, whose image under the marking map_ \(\mathbf{B}_{c}\) _defined by (_3.87_) gives a cycle_ \(x(0,2)\in C_{n}^{dR}(\mathcal{L}_{3}(0))\)_. Under the isomorphism between de Rham and singular homologies,_ \([x(0,2)]\in H_{n}^{dR}(\mathcal{L}_{3}(0))\) _goes to_ \((-1)^{n+1}[L]\in H_{n}(\mathcal{L}(0);\mathbb{R})\)_._
4. \(\underline{z}(a,k)\neq 0\) _only if_ \(\theta_{M}(a)\geq 2\varepsilon\) _or_ \(a=0\)_. Moreover,_ \(\underline{z}(0,0)\) _lifts along the natural projection_ \(\widehat{C}_{*}\to\widehat{C}_{*}^{\lambda}\) _to a cycle_ \(z(0,0)\in C_{n}^{dR}(\mathcal{L}_{1}(0))\)_, whose homology class_ \([z(0,0)]\in H_{n}^{dR}(\mathcal{L}_{1}(0))\) _corresponds to_ \((-1)^{n+1}[L]\in H_{n}(\mathcal{L}(0);\mathbb{R})\) _under the isomorphism between de Rham and singular homologies._
**Remark 27**.: _In order to produce the chains \(\underline{x}\), \(\underline{y}\) and \(\underline{z}\) in Theorem 26, it is enough to find chains \(\tilde{x}\in\widehat{C}_{-2}^{S^{1}}\), \(\tilde{y}\in\widehat{C}_{2}^{S^{1}}\) and \(\tilde{z}\in\widehat{C}_{1}^{S^{1}}\) such that the equations_
\[\partial^{S^{1}}(\tilde{x})-\frac{1}{2}\left\{\tilde{x},\tilde{x}\right\}=0\text { and }\partial^{S^{1}}(\tilde{y})-\left\{\tilde{x},\tilde{y}\right\}=\tilde{z} \tag{3.98}\]
_hold, and the corresponding conditions in (iii) and (iv) of Theorem 26 are satisfied. In particular, \(\mathbf{B}_{c}\left(\tilde{x}(0,3)\right)=(-1)^{n+1}\mu_{L}\) and \(\tilde{z}(0,0)=e_{L}\otimes 1\). The operation \(\{\cdot,\cdot\}\) in the
above is not a Lie bracket, but the equations in (3.98) still make sense. In fact, for our geometric argument in Section 4, we will construct chains in the \(S^{1}\)-equivariant complex \(C^{S^{1}}_{\bullet}\) instead of Connes' complex \(C^{\lambda}_{\bullet}\), by pushing forward virtual fundamental chains of moduli spaces of holomorphic curves._
We then show that Theorem 6 is a corollary of its chain level refinement--Theorem 26. First note that (i) and (iii) above imply that the element
\[\underline{x}(0):=\sum_{k=3}^{\infty}\underline{x}(0,k)\in\widehat{C}^{ \lambda}_{-2} \tag{3.99}\]
is also a Maurer-Cartan element for \(\left(\widehat{C}^{\lambda}_{\bullet},\tilde{\partial},\{\cdot,\cdot\}\right)\), so we can use it to deform the differential \(\tilde{\partial}\). Introduce the notation
\[C^{\lambda}_{\bullet}(a):=\prod_{k=0}^{\infty}C^{\lambda}_{\bullet}(a,k), \tag{3.100}\]
and denote by \(\tilde{\partial}_{\underline{x}(0)}:C^{\lambda}_{\bullet}(a)\to C^{ \lambda}_{\bullet-1}(a)\) the deformed differential. By definition,
\[\tilde{\partial}_{\underline{x}(0)}(\underline{w}):=\tilde{\partial}(\underline{w})-\{\underline{x}(0),\underline{w}\} \tag{3.101}\]
for \(\underline{w}\in C^{\lambda}_{\bullet}(a)\). We show that the deformation induced by the "low energy" Maurer-Cartan element \(\underline{x}(0)\) is actually trivial.
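Note that \(\tilde{\partial}_{\underline{x}(0)}\) indeed squares to zero. With the graded Leibniz rule for the degree \(1\) bracket written as \(\tilde{\partial}\{\underline{a},\underline{b}\}=\{\tilde{\partial}\underline{a},\underline{b}\}+(-1)^{|\underline{a}|+1}\{\underline{a},\tilde{\partial}\underline{b}\}\) (which appears to be the convention implicit in Lemma 22), and using that \(|\underline{x}(0)|=-2\), one computes
\[\tilde{\partial}_{\underline{x}(0)}^{2}(\underline{w})=-\{\tilde{\partial}\underline{x}(0),\underline{w}\}+\{\underline{x}(0),\{\underline{x}(0),\underline{w}\}\}=-\left\{\tilde{\partial}\underline{x}(0)-\frac{1}{2}\{\underline{x}(0),\underline{x}(0)\},\underline{w}\right\}=0,\]
where the middle equality uses the Jacobi identity (3.76) with \(\underline{x}=\underline{y}=\underline{x}(0)\), and the vanishing on the right is the Maurer-Cartan equation for \(\underline{x}(0)\).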
**Lemma 28**.: _There is an isomorphism_
\[H_{\bullet}\left(C^{\lambda}_{\bullet}(a),\tilde{\partial}_{\underline{x}(0 )}\right)\cong H^{S^{1}}_{\bullet+n+\mu(a)-1}(\mathcal{L}(a);\mathbb{R}). \tag{3.102}\]
Proof.: Consider the short exact sequence
\[0\rightarrow\prod_{k=1}^{\infty}C^{\lambda}_{\bullet}(a,k)\to C^{ \lambda}_{\bullet}(a)\to C^{\lambda}_{\bullet}(a,0)\to 0. \tag{3.103}\]
Since \(\underline{x}(0,0)=0\) by Theorem 26 (iii), and we have the isomorphism
\[H_{\bullet}\left(C^{\lambda}_{\bullet}(a,0),\partial\right)\cong H_{\bullet} \left(C^{S^{1}}_{\bullet}(a,0),\partial^{S^{1}}\right)\cong H^{S^{1}}_{ \bullet+n+\mu(a)-1}(\mathcal{L}(a);\mathbb{R}), \tag{3.104}\]
it suffices to show that the subcomplex \(\prod_{k=1}^{\infty}C^{\lambda}_{\bullet}(a,k)\subset C^{\lambda}_{\bullet}(a)\) is acyclic.
For every \(N\in\mathbb{N}\), observe that the differential \(\tilde{\partial}_{\underline{x}(0)}\) preserves the subcomplex
\[\prod_{k>2N}C^{\lambda}_{\bullet}(a,k)\subset\prod_{k=1}^{\infty}C^{\lambda} _{\bullet}(a,k), \tag{3.105}\]
so it descends to the quotient \(\prod_{1\leqslant k\leqslant 2N}C^{\lambda}_{\bullet}(a,k)\), and it is compatible with the filtration
\[F^{i}\left(\prod_{1\leqslant k\leqslant 2N}C^{\lambda}_{\bullet}(a,k)\right):= \prod_{i\leqslant k\leqslant 2N}C^{\lambda}_{\bullet}(a,k). \tag{3.106}\]
Consider the spectral sequence associated to this filtration, whose \(E_{1}\)-term is
\[H_{\bullet}\left(C^{\lambda}_{\bullet}(a,k),\partial\right)\cong H^{S^{1}}_{ \bullet+n+\mu(a)+k-1}(\mathcal{L}(a);\mathbb{R}). \tag{3.107}\]
Fix lifts \(\tilde{x}(a,k)=\sum_{i=0}^{\infty}x_{i}(a,k)\otimes u^{-i}\) and \(\tilde{w}(a,k)=\sum_{i=0}^{\infty}w_{i}(a,k)\otimes u^{-i}\) for \(\underline{x}(a,k)\) and \(\underline{w}(a,k)\), respectively. The differential \(d_{1}:H_{\bullet}\left(C^{\lambda}_{\bullet}(a,k)\right)\to H_{\bullet-1} \left(C^{\lambda}_{\bullet}(a,k+1)\right)\) is given
by
\[\begin{split}\{\underline{x}(0),\underline{w}\}\,(a,k+1)&=(-1)^{|w_{0}|}\sum_{i=1}^{k}\sum_{j=1}^{3}(-1)^{\mathfrak{K}_{ij}}\underline{w_{0}(a,k)}\circ_{i}\left((R_{3})^{j}_{\bullet}x_{0}(0,3)\circ_{4-j}e_{L}\right)\\ &-\sum_{i=1}^{2}\sum_{j=1}^{3}(-1)^{\mathfrak{K}_{ij}}\underline{\left((R_{3})^{j}_{\bullet}x_{0}(0,3)\circ_{4-j}e_{L}\right)\circ_{i}w_{0}(a,k)}\\ &=(-1)^{|w_{0}|+n+1}\left(\underline{\mu_{L}\circ_{1}w_{0}(a,k)}+\sum_{1\leqslant i\leqslant k}\underline{w_{0}(a,k)\circ_{i}\mu_{L}}\right.\\ &\left.+(-1)^{k+1}\underline{\mu_{L}\circ_{2}w_{0}(a,k)}\right),\end{split} \tag{3.108}\]
where the first equality follows from the anti-symmetric property (3.80) of the Lie bracket, and the second identity follows from Theorem 26, (iii). We conclude that \(d_{1}=\pm\sum_{i=0}^{k+1}(-1)^{i}\delta_{k+1,i}\), the alternating sum of the coface maps. Thus all \(E_{2}\)-terms vanish and the complex \(\left(\prod_{k=1}^{\infty}C_{\bullet}^{\lambda}(a,k),\tilde{\partial}_{\underline{x}(0)}\right)\) is acyclic.
Straightforward computations imply the following.
**Lemma 29**.: _Define \(\underline{x}^{+}:=\underline{x}-\underline{x}(0)\), then it satisfies_
\[\tilde{\partial}_{\underline{x}(0)}(\underline{x}^{+})-\frac{1}{2}\left\{ \underline{x}^{+},\underline{x}^{+}\right\}=0\text{ and }\tilde{\partial}_{ \underline{x}(0)}(\underline{y})-\{\underline{x}^{+},\underline{y}\}= \underline{z}. \tag{3.109}\]
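Indeed, unwinding the definitions, the first identity in (3.109) follows from the bilinearity of the bracket and the graded anti-symmetry (3.77), which for the degree \(-2\) chains \(\underline{x}\) and \(\underline{x}(0)\) gives \(\{\underline{x},\underline{x}(0)\}=\{\underline{x}(0),\underline{x}\}\):
\[\tilde{\partial}_{\underline{x}(0)}(\underline{x}^{+})-\frac{1}{2}\left\{\underline{x}^{+},\underline{x}^{+}\right\}=\left(\tilde{\partial}(\underline{x})-\frac{1}{2}\{\underline{x},\underline{x}\}\right)-\left(\tilde{\partial}(\underline{x}(0))-\frac{1}{2}\{\underline{x}(0),\underline{x}(0)\}\right)=0,\]
where both parentheses vanish by the Maurer-Cartan equations for \(\underline{x}\) and \(\underline{x}(0)\). Similarly,
\[\tilde{\partial}_{\underline{x}(0)}(\underline{y})-\{\underline{x}^{+},\underline{y}\}=\tilde{\partial}(\underline{y})-\{\underline{x}(0),\underline{y}\}-\{\underline{x}-\underline{x}(0),\underline{y}\}=\tilde{\partial}(\underline{y})-\{\underline{x},\underline{y}\}=\underline{z}\]
by Theorem 26, (ii).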
The following is now a consequence of the two lemmas above and the homotopy transfer lemma for \(L_{\infty}\)-structures.
**Proposition 30**.: _Theorem 26 implies Theorem 6._
Proof.: It follows from our construction that the homology of the total complex \(C_{\bullet}^{\lambda}\) is \(\mathbb{H}_{\bullet}^{S^{1}}\) defined by (1.16). By Lemma 28, there exist linear maps
\[\iota:\mathbb{H}_{\bullet}^{S^{1}}\to C_{\bullet}^{\lambda},\ \pi:C_{\bullet}^{ \lambda}\to\mathbb{H}_{\bullet}^{S^{1}},\ \kappa:C_{\bullet}^{\lambda}\to C_{\bullet+1}^{\lambda}, \tag{3.110}\]
such that
\[\tilde{\partial}_{\underline{x}(0)}\circ\iota=0,\ \pi\circ\tilde{\partial}_{ \underline{x}(0)}=0,\ \pi\circ\iota=\mathit{id}_{\mathbb{H}}, \tag{3.111}\]
where \(\mathit{id}_{\mathbb{H}}\) denotes the identity of \(\mathbb{H}_{\bullet}^{S^{1}}\), and
\[\kappa\circ\tilde{\partial}_{\underline{x}(0)}+\tilde{\partial}_{\underline{x }(0)}\circ\kappa=\mathit{id}_{C}-\iota\circ\pi, \tag{3.112}\]
where \(\mathit{id}_{C}\) is the identity of \(C_{\bullet}^{\lambda}\). These maps can be chosen to be compatible with the decompositions of \(\mathbb{H}_{\bullet}^{S^{1}}\) and \(C_{\bullet}^{\lambda}\) over \(H_{1}(L;\mathbb{Z})\), therefore extend to the completions \(\widehat{\mathbb{H}}_{\bullet}^{S^{1}}\) and \(\widehat{C}_{\bullet}^{\lambda}\). Moreover, it follows from Theorem 26, (iv) that we can take \(\pi\) so that the chain \(\sum_{k=0}^{\infty}\underline{z}(0,k)\in\widehat{C}_{1}^{\lambda}\) gets mapped to \((-1)^{n+1}[\![\underline{L}]\!]\in H_{n}^{S^{1}}(\mathcal{L}(0);\mathbb{R})\).
By [40], Proposition 4.9, there exist an \(L_{\infty}\)-structure \((\tilde{\ell}_{k})_{k\geqslant 1}\) on \(\mathbb{H}_{\bullet}^{S^{1}}\) and an \(L_{\infty}\)-homomorphism
\[p=(p_{k})_{k\geqslant 1}:\left(C_{\bullet}^{\lambda},\tilde{\partial}_{\underline{x}(0)},\{\cdot,\cdot\}\right)\to\left(\mathbb{H}_{\bullet}^{S^{1}},(\tilde{\ell}_{k})_{k\geqslant 1}\right) \tag{3.113}\]
such that \(\tilde{\ell}_{1}=0\) and \(p_{1}=\pi\). The \(L_{\infty}\)-structure \((\tilde{\ell}_{k})_{k\geqslant 1}\) and the \(L_{\infty}\)-homomorphism \(p\) can be taken so that they respect the decompositions over \(H_{1}(L;\mathbb{Z})\), therefore also extend to the relevant completions. Moreover, by Lemma 29, the elements
\[\underline{X}:=\sum_{k=1}^{\infty}\frac{1}{k!}p_{k}(\underline{x}^{+},\cdots, \underline{x}^{+})\in\widehat{\mathbb{H}}_{-2}^{S^{1}}, \tag{3.114}\]
\[\underline{Y}:=\sum_{k=1}^{\infty}\frac{1}{(k-1)!}p_{k}(\underline{y}, \underline{x}^{+},\cdots,\underline{x}^{+})\in\widehat{\mathbb{H}}_{2}^{S^{1}}, \tag{3.115}\]
\[\underline{Z}:=\sum_{k=1}^{\infty}\frac{1}{(k-1)!}p_{k}(\underline{z},\underline{ x}^{+},\cdots,\underline{x}^{+})\in\widehat{\mathbb{H}}_{1}^{S^{1}} \tag{3.116}\]
satisfy
\[\sum_{k=2}^{\infty}\frac{1}{k!}\tilde{\ell}_{k}(\underline{X},\cdots, \underline{X})=0, \tag{3.117}\]
\[\sum_{k=2}^{\infty}\frac{1}{(k-1)!}\tilde{\ell}_{k}(\underline{Y},\underline{ X},\cdots,\underline{X})=\underline{Z}. \tag{3.118}\]
Note that the infinite sums in the definitions of \(\underline{X}\), \(\underline{Y}\), and \(\underline{Z}\) make sense since \(\underline{x}^{+}(a)\neq 0\) only when \(\theta_{M}(a)\geq 2\varepsilon\). Since \(\underline{X}(a)\neq 0\) only if \(\theta_{M}(a)\geq 2\varepsilon\), (iii) of Theorem 6 holds with \(c=2\varepsilon\). To complete the proof, it remains to show that \(\underline{Z}(0)=(-1)^{n+1}[\![L]\!]\). Since \(p\) respects the decompositions over \(H_{1}(L;\mathbb{Z})\), and \(\underline{z}(a,k)\neq 0\) only if \(\theta_{M}(a)\geq 2\varepsilon\) or \(a=0\), we obtain
\[\underline{Z}(0)=\pi\left(\sum_{k=0}^{\infty}\underline{z}(0,k)\right)=(-1)^{ n+1}[\![L]\!]. \tag{3.119}\]
### Approximating the solutions
To prove Theorem 26, we need to find chains \(\underline{x},\underline{y},\underline{z}\in\widehat{C}_{\ast}^{\lambda}\) satisfying the equations
\[\tilde{\partial}(\underline{x})-\frac{1}{2}\{\underline{x},\underline{x}\}=0,\ \tilde{\partial}(\underline{y})-\{\underline{x},\underline{y}\}=\underline{z}. \tag{3.120}\]
In principle, these chains should be defined by pushing forward the virtual fundamental chains of the moduli spaces introduced in Section 4.2. However, as Irie noticed in his situation [33], it is difficult to get such chains all at once, as this would involve simultaneous perturbations of the Kuranishi maps of infinitely many moduli spaces. To overcome this difficulty, one can instead define a sequence of triples of chains \((\underline{x}_{i},\underline{y}_{i},\underline{z}_{i})\in C_{-2}^{\lambda}\times C_{2}^{\lambda}\times C_{1}^{\lambda}\) such that
1. the triple \((\underline{x}_{i},\underline{y}_{i},\underline{z}_{i})\) satisfies (3.120) up to certain energy level, which goes to infinity as \(i\to\infty\);
2. the triples \((\underline{x}_{i},\underline{y}_{i},\underline{z}_{i})\) and \((\underline{x}_{i+1},\underline{y}_{i+1},\underline{z}_{i+1})\) are gauge equivalent up to certain energy level, which also goes to infinity as \(i\to\infty\).
(ii) implies that the limits \(\underline{x}\), \(\underline{y}\) and \(\underline{z}\) of the sequences of chains \((\underline{x}_{i})\), \((\underline{y}_{i})\) and \((\underline{z}_{i})\) exist, and (i) implies that they satisfy (3.120). We show in this section that having such a sequence is enough for the validity of Theorem 26.
We start by describing how to specify the value of \(\varepsilon\) in the statement of Theorem 26. Let \(M\) be a Liouville manifold which admits a cyclic dilation. We say that an almost complex structure \(J_{M}\) on \(M\) is of _contact type_ if it is compatible with \(d\theta_{M}\) and \(dr\circ J_{M}=-\theta_{M}\). From now on, fix an almost complex structure \(J_{M}\) on \(M\) which is of contact type. We take \(\varepsilon>0\) so that \(2\varepsilon\) is less than the minimal symplectic area of \(J_{M}\)-holomorphic discs with boundary on \(L\).
For the purpose of approximating \(\underline{x},\underline{y},\underline{z}\) using finite energy de Rham chains, it would be convenient for us to consider an alternative completion of the chain complex \(C_{\ast}^{\lambda}\). For each \(m\in\mathbb{Z}\), define the filtration
\[F^{m}C_{\ast}^{S^{1}}:=\bigoplus_{\begin{subarray}{c}a\in H_{1}(L;\mathbb{Z})\\ k\in\mathbb{Z}_{\geqslant 0}\\ \theta_{M}(a)\geqslant\varepsilon(m+1-k)\end{subarray}}C_{\ast}^{S^{1}}(a,k), \tag{3.121}\]
which induces a similar filtration on its quotient complex \(C_{\ast}^{\lambda}\). By abuse of notations, the filtration on \(C_{\ast}^{\lambda}\) will still be denoted by \(F^{m}\). It follows from the definition that
\(\partial^{S^{1}}(F^{m})\subset F^{m}\) and \(\left\{F^{m},F^{m^{\prime}}\right\}\subset F^{m+m^{\prime}}\). In the same manner, one can define a filtration on the relative chain complexes \(\overline{C}^{S^{1}}_{\ast}\) and \(\overline{C}^{\lambda}_{\ast}\), which we denote by \(\overline{F}^{m}\). The chains \(\underline{x}\), \(\underline{y}\) and \(\underline{z}\) satisfying (3.120) will then be defined as limits in the completion of \(C^{\lambda}_{\ast}\) with respect to the filtration \(F^{m}\).
Note that a priori, the completions of \(C^{S^{1}}_{\ast}\) (resp. \(C^{\lambda}_{\ast}\)) with respect to the filtration \(F^{m}\) defined above can be different from \(\widehat{C}^{S^{1}}_{\ast}\) (resp. \(\widehat{C}^{\lambda}_{\ast}\)) considered in Section 3.2 defined using the action filtration \(F^{\Xi}\). In order to show that the chains \(\underline{x}\), \(\underline{y}\) and \(\underline{z}\) actually lie in \(\widehat{C}^{\lambda}_{\ast}\), we follow [34] and introduce the following subsets of \(H_{1}(L;\mathbb{Z})\).
\[\underline{\underline{A}}_{x}:=\left\{a\in H_{1}(L;\mathbb{Z})|\exists(i,k)\text{ such that }\underline{\bar{x}}_{i}(a,k)\neq 0\right\}, \tag{3.122}\]
\[\underline{\underline{A}}_{x}^{+}:=\left\{a_{1}+\cdots+a_{m}|m\geqslant 1,a_{ 1},\cdots,a_{m}\in\underline{\underline{A}}_{x}\right\}, \tag{3.123}\]
\[\underline{\underline{A}}_{y,z}:=\left\{a\in H_{1}(L;\mathbb{Z})|\exists(i,k) \text{ such that }\left(\underline{\bar{y}}_{i}(a,k),\underline{\bar{z}}_{i}(a,k)\right) \neq(0,0)\right\}, \tag{3.124}\]
\[\underline{\underline{A}}_{y,z}^{+}:=\left\{a_{1}+\cdots+a_{m}|m\geqslant 1,a_{ 1}\in\underline{\underline{A}}_{y,z},a_{2}\cdots,a_{m}\in\underline{\underline {A}}_{x}\right\}. \tag{3.125}\]
Parallel to the above definitions, one can also introduce the corresponding subsets of \(H_{1}(L;\mathbb{Z})\) for chains in the \(S^{1}\)-equivariant complex \(\overline{C}^{S^{1}}_{\ast}\), and we will denote these subsets by \(A_{x}\), \(A_{x}^{+}\), \(A_{x,y}\) and \(A_{x,y}^{+}\), respectively. For example,
\[A_{x}:=\left\{a\in H_{1}(L;\mathbb{Z})|\exists(i,k)\text{ such that }\tilde{\bar{x}}_{i}(a,k)\neq 0\right\}. \tag{3.126}\]
We show in this subsection that in order to prove Theorem 26, it suffices to prove the following.
**Theorem 31**.: _There exist integers \(I,U\geqslant 3\) and a sequence \((\underline{x}_{i},\underline{y}_{i},\underline{z}_{i},\underline{\bar{x}}_{i},\underline{\bar{y}}_{i},\underline{\bar{z}}_{i})_{i\geqslant 3}\) of chains with \(\underline{x}_{i},\underline{y}_{i},\underline{z}_{i}\in C^{\lambda}_{\ast}\) and \(\underline{\bar{x}}_{i},\underline{\bar{y}}_{i},\underline{\bar{z}}_{i}\in\overline{C}^{\lambda}_{\ast}\), such that the following conditions hold._
1. \(\underline{x}_{i}\in F^{1}C^{\lambda}_{-2}\)_,_ \(\underline{\bar{x}}_{i}\in\overline{F}^{1}\overline{C}^{\lambda}_{-2}\)_,_ \(\underline{y}_{i}\in F^{-U}C^{\lambda}_{2}\)_,_ \(\underline{\bar{y}}_{i}\in\overline{F}^{-U}\overline{C}^{\lambda}_{2}\)_,_ \(\underline{z}_{i}\in F^{-1}C^{\lambda}_{1}\)_,_ \(\underline{\bar{z}}_{i}\in\overline{F}^{-1}\overline{C}^{\lambda}_{1}\)_._
2. \(\underline{x}_{i}=\underline{e}_{-}(\underline{\bar{x}}_{i})\)_,_ \(\underline{y}_{i}=\underline{e}_{-}(\underline{\bar{y}}_{i})\)_,_ \(\underline{z}_{i}=\underline{e}_{-}(\underline{\bar{z}}_{i})\)_._
3. \(\tilde{\bar{\partial}}(\underline{\bar{x}}_{i})-\frac{1}{2}\left\{\underline{\bar{x}}_{i},\underline{\bar{x}}_{i}\right\}\in\overline{F}^{i}\overline{C}^{\lambda}_{-3}\)_,_ \(\tilde{\bar{\partial}}(\underline{\bar{y}}_{i})-\left\{\underline{\bar{x}}_{i},\underline{\bar{y}}_{i}\right\}-\underline{\bar{z}}_{i}\in\overline{F}^{i-U-1}\overline{C}^{\lambda}_{1}\)_,_ \(\tilde{\bar{\partial}}(\underline{\bar{z}}_{i})-\left\{\underline{\bar{x}}_{i},\underline{\bar{z}}_{i}\right\}\in\overline{F}^{i-2}\overline{C}^{\lambda}_{0}\)_._
4. \(\underline{x}_{i+1}-\underline{e}_{+}(\underline{\bar{x}}_{i})\in F^{i}C^{\lambda}_{-2}\)_,_ \(\underline{y}_{i+1}-\underline{e}_{+}(\underline{\bar{y}}_{i})\in F^{i-U-1}C^{\lambda}_{2}\)_,_ \(\underline{z}_{i+1}-\underline{e}_{+}(\underline{\bar{z}}_{i})\in F^{i-2}C^{\lambda}_{1}\)_._
5. \(\underline{x}_{i}(a,k)\neq 0\) _only if_ \(\theta_{M}(a)\geq 2\varepsilon\) _or_ \(a=0\)_,_ \(k\geq 3\)_. Moreover,_ \(\underline{x}_{i}(0,3)\) _admits a lift_ \(\tilde{x}_{i}(0,3)\in C^{S^{1}}_{-2}(0,3)\) _such that_ \(\mathbf{B}_{c}\left(\tilde{x}_{i}(0,3)\right)=x_{i}(0,2)\in C^{nd}_{-1}(0,2)\) _is a cycle, whose homology class coincides with_ \((-1)^{n+1}[L]\) _under the isomorphism between de Rham and singular homologies of the free loop space._
6. \(\underline{z}_{i}(a,k)\neq 0\) _only if_ \(\theta_{M}(a)\geqslant 2\varepsilon\) _or_ \(a=0\)_. Moreover,_ \(\underline{z}_{i}(0,0)\) _lifts to a cycle_ \(z_{i}(0,0)\in C_{1}(0,0)\)_, whose homology class_ \([z_{i}(0,0)]\) _corresponds to_ \((-1)^{n+1}[L]\) _under the isomorphism between de Rham and singular homologies of the free loop space._
7. _For any_ \(\Xi>0\)_,_ \[\underline{\underline{A}}_{x}^{+}(\Xi):=\left\{a\in\underline{\underline{A}}_{x}^{+}| \theta_{M}(a)<\Xi\right\}\text{ and }\underline{\underline{A}}_{y,z}^{+}(\Xi):=\left\{a\in \underline{\underline{A}}_{y,z}^{+}|\theta_{M}(a)<\Xi\right\}\] (3.127) _are finite sets._
**Remark 32**.: _As we have noticed in Remark 27, in order to find the chains in \(C^{\lambda}_{\ast}\) and \(\overline{C}^{\lambda}_{\ast}\) satisfying the conditions of Theorem 31, it is enough to have the chains \((\tilde{x}_{i},\tilde{y}_{i},\tilde{z}_{i},\tilde{\bar{x}}_{i},\tilde{\bar{y}} _{i},\tilde{\bar{z}}_{i})_{i\geqslant 3}\), where \(\tilde{x}_{i},\tilde{y}_{i},\tilde{z}_{i}\in C^{S^{1}}_{\ast}\), and \(\tilde{\bar{x}}_{i},\tilde{\bar{y}}_{i},\tilde{\bar{z}}_{i}\in\overline{C}^{S^{1 }}_{\ast}\), satisfying the following conditions corresponding to (i)--(vii) above._
1. \(\tilde{x}_{i}\in F^{1}C_{-2}^{S^{1}}\)_,_ \(\bar{\tilde{x}}_{i}\in\overline{F}^{1}\overline{C}_{-2}^{S^{1}}\)_,_ \(\tilde{y}_{i}\in F^{-U}C_{2}^{S^{1}}\)_,_ \(\bar{\tilde{y}}_{i}\in\overline{F}^{-U}\overline{C}_{2}^{S^{1}}\)_,_ \(\tilde{z}_{i}\in F^{-1}C_{1}^{S^{1}}\)_,_ \(\bar{\tilde{z}}_{i}\in\overline{F}^{-1}\overline{C}_{1}^{S^{1}}\)_._
2. \(\tilde{x}_{i}=\tilde{e}_{-}(\bar{\tilde{x}}_{i})\)_,_ \(\tilde{y}_{i}=\tilde{e}_{-}(\bar{\tilde{y}}_{i})\)_,_ \(\tilde{z}_{i}=\tilde{e}_{-}(\bar{\tilde{z}}_{i})\)_._
3. \(\tilde{\bar{\phi}}^{S^{1}}(\bar{\tilde{x}}_{i}){-}\frac{1}{2}\left\{\bar{\tilde {x}}_{i},\bar{\tilde{x}}_{i}\right\}\in\overline{F}^{i}\overline{C}_{-3}^{S^{1 }}\)_,_ \(\tilde{\bar{\phi}}^{S^{1}}(\bar{\tilde{y}}_{i}){-}\{\bar{\tilde{x}}_{i},\bar{ \tilde{y}}_{i}\}{-}\bar{\tilde{z}}_{i}\in\overline{F}^{i-U-1}\overline{C}_{1}^ {S^{1}}\)_,_ \(\tilde{\bar{\phi}}^{S^{1}}(\bar{\tilde{z}}_{i}){-}\{\bar{\tilde{x}}_{i},\bar{ \tilde{z}}_{i}\}\in\overline{F}^{i-2}\overline{C}_{0}^{S^{1}}\)_._
4. \(\tilde{x}_{i+1}-\tilde{\bar{e}}_{+}(\bar{\tilde{x}}_{i})\in F^{i}C_{-2}^{S^{1} }\)_,_ \(\tilde{y}_{i+1}-\tilde{e}_{+}(\bar{\tilde{y}}_{i})\in F^{i-U-1}C_{2}^{S^{1}}\)_,_ \(\tilde{z}_{i+1}-\tilde{e}_{+}(\bar{\tilde{z}}_{i})\in F^{i-2}C_{1}^{S^{1}}\)_._
5. \(\tilde{x}_{i}(a,k)\neq 0\) _only if_ \(\theta_{M}(a)\geq 2\varepsilon\) _or_ \(a=0\)_,_ \(k\geq 3\)_. Moreover,_ \(\mathbf{B}_{c}\left(\tilde{x}_{i}(0,3)\right)=x_{i}(0,2)\)_._
6. \(\tilde{z}_{i}(a,k)\neq 0\) _only if_ \(\theta_{M}(a)\geq 2\varepsilon\) _or_ \(a=0\)_. Moreover,_ \(\tilde{z}_{i}(0,0)\in C_{1}^{S^{1}}\) _is a cycle, whose homology class_ \(\left[\tilde{z}_{i}(0,0)\right]=(-1)^{n+1}\llbracket L\rrbracket\)_._
7. _For any_ \(\Xi>0\)_,_ \[A_{x}^{+}(\Xi):=\left\{a\in A_{x}^{+}|\theta_{M}(a)<\Xi\right\}\text{ and }A_{y,z}^{+}(\Xi):=\left\{a\in A_{y,z}^{+}|\theta_{M}(a)<\Xi\right\}\] (3.128) _are finite sets._
_We remark that due to the cyclic invariance for chains in \(\overline{C}_{\bullet}^{\lambda}\), the requirement \(\bar{\partial}^{S^{1}}(\bar{\tilde{x}}_{i})-\frac{1}{2}\left\{\bar{\tilde{x}}_{i},\bar{\tilde{x}}_{i}\right\}\in\overline{F}^{i}\overline{C}_{-3}^{S^{1}}\) in (iii') can be weakened. See our proof of Theorem 31 in Section 4.5._
The following is an \(S^{1}\)-equivariant analogue of [33], Lemma 6.4 (except for the item (ix), see [34]). Its proof is almost identical to the non-equivariant case, except for some changes of notations, gradings and signs. We record the details here for the readers' convenience.
**Lemma 33**.: _Let \(I,U\in\mathbb{Z}_{\geqslant 3}\) and \(\underline{x}_{i},\underline{y}_{i},\underline{z}_{i},\underline{\bar{x}}_{i}, \underline{\bar{y}}_{i},\underline{\bar{z}}_{i}\) be as in Theorem 31. Then there exists a sequence_
\[(\underline{x}_{i,j},\underline{y}_{i,j},\underline{z}_{i,j},\underline{\bar{x }}_{i,j},\underline{\bar{y}}_{i,j},\underline{\bar{z}}_{i,j})_{i\geqslant I,j \geqslant 0} \tag{3.129}\]
_of (relative) de Rham chains satisfying the following conditions:_
1. \(\underline{x}_{i,0}=\underline{x}_{i}\)_,_ \(\underline{y}_{i,0}=\underline{y}_{i}\)_,_ \(\underline{z}_{i,0}=\underline{z}_{i}\)_,_ \(\underline{\bar{x}}_{i,0}=\underline{\bar{x}}_{i}\)_,_ \(\underline{\bar{y}}_{i,0}=\underline{\bar{y}}_{i}\)_,_ \(\underline{\bar{z}}_{i,0}=\underline{\bar{z}}_{i}\)_._
2. \(\underline{x}_{i,j}\in F^{1}C_{-2}^{\lambda}\)_,_ \(\underline{\bar{x}}_{i,j}\in\overline{F}^{1}\overline{C}_{-2}^{\lambda}\)_,_ \(\underline{y}_{i,j}\in F^{-U}C_{2}^{\lambda}\)_,_ \(\underline{\bar{y}}_{i,j}\in\overline{F}^{-U}\overline{C}_{2}^{\lambda}\)_,_ \(\underline{z}_{i,j}\in F^{-1}C_{1}^{\lambda}\)_,_ \(\underline{\bar{z}}_{i,j}\in\overline{F}^{-1}\overline{C}_{1}^{\lambda}\)_._
3. \(\underline{x}_{i,j}=\underline{e}_{-}(\underline{\bar{x}}_{i,j})\)_,_ \(\underline{y}_{i,j}=\underline{e}_{-}(\underline{\bar{y}}_{i,j})\)_,_ \(\underline{z}_{i,j}=\underline{e}_{-}(\underline{\bar{z}}_{i,j})\)_._
4. \(\tilde{\bar{\partial}}(\underline{\bar{x}}_{i,j})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j},\underline{\bar{x}}_{i,j}\right\}\in\overline{F}^{i+j}\overline{C}_{-3}^{\lambda}\)_,_ \(\tilde{\bar{\partial}}(\underline{\bar{y}}_{i,j})-\{\underline{\bar{x}}_{i,j},\underline{\bar{y}}_{i,j}\}-\underline{\bar{z}}_{i,j}\in\overline{F}^{i+j-U-1}\overline{C}_{1}^{\lambda}\)_,_ \(\tilde{\bar{\partial}}(\underline{\bar{z}}_{i,j})-\{\underline{\bar{x}}_{i,j},\underline{\bar{z}}_{i,j}\}\in\overline{F}^{i+j-2}\overline{C}_{0}^{\lambda}\)_._
5. \(\underline{x}_{i+1,j}-\underline{e}_{+}(\underline{\bar{x}}_{i,j})\in F^{i+j}C_{-2}^{\lambda}\)_,_ \(\underline{y}_{i+1,j}-\underline{e}_{+}(\underline{\bar{y}}_{i,j})\in F^{i+j-U-1}C_{2}^{\lambda}\)_,_ \(\underline{z}_{i+1,j}-\underline{e}_{+}(\underline{\bar{z}}_{i,j})\in F^{i+j-2}C_{1}^{\lambda}\)_._
6. \(\underline{\bar{x}}_{i,j+1}-\underline{\bar{x}}_{i,j}\in\overline{F}^{i+j}\overline{C}_{-2}^{\lambda}\)_,_ \(\underline{\bar{y}}_{i,j+1}-\underline{\bar{y}}_{i,j}\in\overline{F}^{i+j-U-1}\overline{C}_{2}^{\lambda}\)_,_ \(\underline{\bar{z}}_{i,j+1}-\underline{\bar{z}}_{i,j}\in\overline{F}^{i+j-2}\overline{C}_{1}^{\lambda}\)_._
7. \(\underline{\bar{x}}_{i,j}(a,k)\neq 0\) _only if_ \(\theta_{M}(a)\geq 2\varepsilon\) _or_ \(a=0\)_,_ \(k\geq 3\)_. Moreover,_ \(\underline{\bar{x}}_{i,j}(0,3)\) _lifts to a chain_ \(\tilde{\bar{x}}_{i,j}(0,3)\in\overline{C}_{-2}^{S^{1}}(0,3)\) _satisfying_ \(\overline{\mathbf{B}}_{c}\left(\tilde{\bar{x}}_{i,j}(0,3)\right)=\bar{x}_{i,j}(0,2)\)_, where_ \(\overline{\mathbf{B}}_{c}:\overline{C}_{\bullet}^{S^{1}}(a,k+1)\to\overline{C}_{\bullet+1}(a,k)\) _denotes the relative analogue of the marking map_ \(\mathbf{B}_{c}\)_._
_(viii)_ \(\underline{\bar{z}}_{i,j}(a,k)\neq 0\) _only if_ \(\theta_{M}(a)\geqslant 2\varepsilon\) _or_ \(a=0\)_. Moreover,_ \(\underline{\bar{z}}_{i,j}(0,0)\in\overline{C}_{1}^{\lambda}\) _lifts to a cycle_ \(\bar{z}_{i,j}(0,0)\in\overline{C}_{1}(0,0)\) _whose homology class_ \([\bar{z}_{i,j}(0,0)]\) _corresponds to_ \((-1)^{n+1}[L]\) _under the isomorphism between relative de Rham and singular homologies of the free loop space._
* _(ix) If there exists_ \((i,j,k)\in\mathbb{Z}_{i\geqslant I}\times\mathbb{Z}_{j\geqslant 0}\times\mathbb{Z}_{k\geqslant 0}\) _such that_
* \(\bar{\underline{x}}_{i,j}(a,k)\neq 0\)_, then_ \(a\in\underline{A}_{x}^{+}\)_._
* \(\left(\bar{\underline{y}}_{i,j}(a,k),\bar{\underline{z}}_{i,j}(a,k)\right) \neq(0,0)\)_, then_ \(a\in\underline{A}_{y,z}^{+}\)_._
Proof.: We prove the lemma by induction on \(j\). Define the chains \((\underline{x}_{i,0},\underline{y}_{i,0},\underline{z}_{i,0},\underline{\bar{x}}_{i,0},\underline{\bar{y}}_{i,0},\underline{\bar{z}}_{i,0})\) as in (i). Assuming that we have defined a sequence of chains \((\underline{x}_{i,j},\underline{y}_{i,j},\underline{z}_{i,j},\underline{\bar{x}}_{i,j},\underline{\bar{y}}_{i,j},\underline{\bar{z}}_{i,j})_{i\geqslant I}\) which satisfies the conditions (i)-(ix) above, we need to define the sequence
\[(\underline{x}_{i,j+1},\underline{y}_{i,j+1},\bar{\underline{z}}_{i,j+1}, \bar{\underline{x}}_{i,j+1},\bar{\underline{y}}_{i,j+1},\bar{\underline{z}}_ {i,j+1})_{i\geqslant I}. \tag{3.130}\]
Set
\[\Delta_{x}^{i}:=\underline{x}_{i+1,j}-\underline{e}_{+}(\underline{\bar{x}}_{i,j})\in F^{i+j}C_{-2}^{\lambda}, \tag{3.131}\]
\[\Delta_{y}^{i}:=\underline{y}_{i+1,j}-\underline{e}_{+}(\underline{\bar{y}}_{i,j})\in F^{i+j-U-1}C_{2}^{\lambda}, \tag{3.132}\]
\[\Delta_{z}^{i}:=\underline{z}_{i+1,j}-\underline{e}_{+}(\underline{\bar{z}}_{i,j})\in F^{i+j-2}C_{1}^{\lambda}. \tag{3.133}\]
Since \(\underline{e}_{-}\) preserves the differential \(\tilde{\bar{\partial}}\), the Lie bracket \(\{\cdot,\cdot\}\), and the filtration \(\overline{F}^{m}\), applying it to the expressions in (iv) (with \(i\) replaced by \(i+1\)) gives
\[\tilde{\partial}(\underline{x}_{i+1,j})-\frac{1}{2}\left\{\underline{x}_{i+1, j},\underline{x}_{i+1,j}\right\}\in F^{i+j+1}C_{-3}^{\lambda}, \tag{3.134}\]
\[\tilde{\partial}(\underline{y}_{i+1,j})-\left\{\underline{x}_{i+1,j},\underline {y}_{i+1,j}\right\}-\underline{z}_{i+1,j}\in F^{i+j-U}C_{1}^{\lambda}, \tag{3.135}\]
\[\tilde{\partial}(\underline{z}_{i+1,j})-\left\{\underline{x}_{i+1,j},\underline {z}_{i+1,j}\right\}\in F^{i+j-1}C_{0}^{\lambda}. \tag{3.136}\]
Combining with the definitions of \(\Delta_{x}^{i}\), \(\Delta_{y}^{i}\) and \(\Delta_{z}^{i}\) and using (v), we obtain
\[\tilde{\partial}(\Delta_{x}^{i})+\underline{e}_{+}\left(\tilde{\bar{\partial}}(\underline{\bar{x}}_{i,j})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j},\underline{\bar{x}}_{i,j}\right\}\right)\in F^{i+j+1}C_{-3}^{\lambda}, \tag{3.137}\]
\[\tilde{\partial}(\Delta_{y}^{i})+\underline{e}_{+}\left(\tilde{\bar{\partial}}(\underline{\bar{y}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{y}}_{i,j}\right\}-\underline{\bar{z}}_{i,j}\right)\in F^{i+j-U}C_{1}^{\lambda}, \tag{3.138}\]
\[\tilde{\partial}(\Delta_{z}^{i})+\underline{e}_{+}\left(\tilde{\bar{\partial}}(\underline{\bar{z}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{z}}_{i,j}\right\}\right)\in F^{i+j-1}C_{0}^{\lambda}. \tag{3.139}\]
Applying the Leibniz rule and the Jacobi identity we get
\[\begin{split}\tilde{\bar{\partial}}\left(\tilde{\bar{\partial}}(\underline{\bar{x}}_{i,j})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j},\underline{\bar{x}}_{i,j}\right\}\right)=&-\frac{1}{2}\left(\left\{\tilde{\bar{\partial}}(\underline{\bar{x}}_{i,j})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j},\underline{\bar{x}}_{i,j}\right\},\underline{\bar{x}}_{i,j}\right\}\right.\\ &\left.-\left\{\underline{\bar{x}}_{i,j},\tilde{\bar{\partial}}(\underline{\bar{x}}_{i,j})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j},\underline{\bar{x}}_{i,j}\right\}\right\}\right)\in\overline{F}^{i+j+1}\overline{C}_{-4}^{\lambda},\end{split} \tag{3.140}\]
\[\begin{split}\tilde{\bar{\partial}}\left(\tilde{\bar{\partial}}(\underline{\bar{y}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{y}}_{i,j}\right\}-\underline{\bar{z}}_{i,j}\right)=&-\left\{\tilde{\bar{\partial}}(\underline{\bar{x}}_{i,j})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j},\underline{\bar{x}}_{i,j}\right\},\underline{\bar{y}}_{i,j}\right\}\\ &+\left\{\underline{\bar{x}}_{i,j},\tilde{\bar{\partial}}(\underline{\bar{y}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{y}}_{i,j}\right\}-\underline{\bar{z}}_{i,j}\right\}\\ &-\left(\tilde{\bar{\partial}}(\underline{\bar{z}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{z}}_{i,j}\right\}\right)\in\overline{F}^{i+j-U}\overline{C}_{0}^{\lambda},\end{split} \tag{3.141}\]
\[\begin{split}\tilde{\bar{\partial}}\left(\tilde{\bar{\partial}}(\underline{\bar{z}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{z}}_{i,j}\right\}\right)=&-\left\{\tilde{\bar{\partial}}(\underline{\bar{x}}_{i,j})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j},\underline{\bar{x}}_{i,j}\right\},\underline{\bar{z}}_{i,j}\right\}\\ &+\left\{\underline{\bar{x}}_{i,j},\tilde{\bar{\partial}}(\underline{\bar{z}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{z}}_{i,j}\right\}\right\}\in\overline{F}^{i+j-1}\overline{C}_{-1}^{\lambda},\end{split} \tag{3.142}\]
where the levels of the filtration above are determined using the fact that \(\left\{\overline{F}^{m},\overline{F}^{m^{\prime}}\right\}\subset\overline{F}^{m+m^{ \prime}}\).
It follows from Corollary 25 that we can apply [33], Lemma 6.3 to the surjective quasi-isomorphism
\[\underline{e}_{+}:\overline{F}^{D}\overline{C}^{\lambda}_{*}/\overline{F}^{D+1}\overline{C}^{\lambda}_{*}\to F^{D}C^{\lambda}_{*}/F^{D+1}C^{\lambda}_{*}, \tag{3.143}\]
where \(D\) stands for the integer \(i+j\), \(i+j-U-1\), or \(i+j-2\). As a consequence, there exist relative chains
\[\overline{\Delta}^{i}_{x}\in\overline{F}^{i+j}\overline{C}^{\lambda}_{-2},\ \overline{\Delta}^{i}_{y}\in\overline{F}^{i+j-U-1}\overline{C}^{\lambda}_{2}, \ \overline{\Delta}^{i}_{z}\in\overline{F}^{i+j-2}\overline{C}^{\lambda}_{1} \tag{3.144}\]
such that
\[\underline{e}_{+}(\overline{\Delta}^{i}_{x})-\Delta^{i}_{x}\in F^{i+j+1}C^{\lambda}_{-2},\ \underline{e}_{+}(\overline{\Delta}^{i}_{y})-\Delta^{i}_{y}\in F^{i+j-U}C^{\lambda}_{2},\ \underline{e}_{+}(\overline{\Delta}^{i}_{z})-\Delta^{i}_{z}\in F^{i+j-1}C^{\lambda}_{1} \tag{3.145}\]
and
\[\tilde{\bar{\partial}}(\overline{\Delta}^{i}_{x})+\left(\tilde{\bar{\partial}}(\underline{\bar{x}}_{i,j})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j},\underline{\bar{x}}_{i,j}\right\}\right)\in\overline{F}^{i+j+1}\overline{C}^{\lambda}_{-3}, \tag{3.146}\]
\[\tilde{\bar{\partial}}(\overline{\Delta}^{i}_{y})+\left(\tilde{\bar{\partial}}(\underline{\bar{y}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{y}}_{i,j}\right\}-\underline{\bar{z}}_{i,j}\right)\in\overline{F}^{i+j-U}\overline{C}^{\lambda}_{1}, \tag{3.147}\]
\[\tilde{\bar{\partial}}(\overline{\Delta}^{i}_{z})+\left(\tilde{\bar{\partial}}(\underline{\bar{z}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{z}}_{i,j}\right\}\right)\in\overline{F}^{i+j-1}\overline{C}^{\lambda}_{0}. \tag{3.148}\]
We now define the chains in (3.130) as
\[\underline{\bar{x}}_{i,j+1}:=\underline{\bar{x}}_{i,j}+\overline{\Delta}^{i}_ {x},\ \underline{\bar{y}}_{i,j+1}:=\underline{\bar{y}}_{i,j}+\overline{\Delta}^{i}_{y},\ \underline{\bar{z}}_{i,j+1}:=\underline{\bar{z}}_{i,j}+\overline{\Delta}^{i}_{z}, \tag{3.149}\]
and
\[\underline{x}_{i,j+1}:=\underline{e}_{-}(\underline{\bar{x}}_{i,j+1}),\ \underline{y}_{i,j+1}:=\underline{e}_{-}(\underline{\bar{y}}_{i,j+1}),\ \underline{z}_{i,j+1}:=\underline{e}_{-}(\underline{\bar{z}}_{i,j+1}). \tag{3.150}\]
To complete the induction step, one needs to check the following properties for every \(i\in\mathbb{Z}_{\geqslant I}\):
\[\tilde{\bar{\partial}}(\underline{\bar{x}}_{i,j+1})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j+1},\underline{\bar{x}}_{i,j+1}\right\}\in\overline{F}^{i+j+1}\overline{C}^{\lambda}_{-3} \tag{3.151}\]
\[\tilde{\bar{\partial}}(\underline{\bar{y}}_{i,j+1})-\left\{\underline{\bar{x}}_{i,j+1},\underline{\bar{y}}_{i,j+1}\right\}-\underline{\bar{z}}_{i,j+1}\in\overline{F}^{i+j-U}\overline{C}^{\lambda}_{1}, \tag{3.152}\]
\[\tilde{\bar{\partial}}(\underline{\bar{z}}_{i,j+1})-\left\{\underline{\bar{x}}_{i,j+1},\underline{\bar{z}}_{i,j+1}\right\}\in\overline{F}^{i+j-1}\overline{C}^{\lambda}_{0}, \tag{3.153}\]
\[\underline{x}_{i+1,j+1}-\underline{e}_{+}(\underline{\bar{x}}_{i,j+1})\in F^{i+j+1}C^{\lambda}_{-2}, \tag{3.154}\]
\[\underline{y}_{i+1,j+1}-\underline{e}_{+}(\underline{\bar{y}}_{i,j+1})\in F^{i+j-U}C^{\lambda}_{2}, \tag{3.155}\]
\[\underline{z}_{i+1,j+1}-\underline{e}_{+}(\underline{\bar{z}}_{i,j+1})\in F^{i+j-1}C^{\lambda}_{1}. \tag{3.156}\]
To prove (3.151), we use the definition of \(\underline{\bar{x}}_{i,j+1}\) to compute
\[\begin{split}\tilde{\bar{\partial}}(\underline{\bar{x}}_{i,j+1})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j+1},\underline{\bar{x}}_{i,j+1}\right\}&=\left(\tilde{\bar{\partial}}(\underline{\bar{x}}_{i,j})+\tilde{\bar{\partial}}(\overline{\Delta}^{i}_{x})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j},\underline{\bar{x}}_{i,j}\right\}\right)\\ &-\frac{1}{2}\left\{\overline{\Delta}^{i}_{x},\overline{\Delta}^{i}_{x}\right\}-\left\{\underline{\bar{x}}_{i,j},\overline{\Delta}^{i}_{x}\right\}.\end{split} \tag{3.157}\]
Now (3.151) follows since all three terms on the right-hand side lie in \(\overline{F}^{i+j+1}\overline{C}^{\lambda}_{-3}\).
For (3.152), using the definitions of \(\underline{\bar{y}}_{i,j+1}\) and \(\underline{\bar{z}}_{i,j+1}\) we have
\[\tilde{\bar{\phi}}(\underline{\bar{y}}_{i,j+1})-\left\{\underline{\bar{x}}_{ i,j+1},\underline{\bar{y}}_{i,j+1}\right\}-\underline{\bar{z}}_{i,j+1} =\left(\tilde{\bar{\phi}}(\overline{\Delta}^{i}_{y})+\tilde{\bar{\phi}}( \underline{\bar{y}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{y}}_{ i,j}\right\}-\bar{\bar{z}}_{i,j}\right)\] \[-\left\{\overline{\Delta}^{i}_{x},\underline{\bar{y}}_{i,j}\right\}- \left\{\underline{\bar{x}}_{i,j},\overline{\Delta}^{i}_{y}\right\}-\left\{ \overline{\Delta}^{i}_{x},\overline{\Delta}^{i}_{y}\right\}-\overline{\Delta}^{i}_{z}. \tag{3.158}\]
Since all the terms on the right-hand side lie in \(\overline{F}^{i+j-U}\overline{C}_{1}^{\lambda}\), (3.152) follows.
By definitions of \(\bar{\underline{x}}_{i,j+1}\) and \(\bar{\underline{z}}_{i,j+1}\), one can compute
\[\begin{split}\tilde{\bar{\partial}}(\underline{\bar{z}}_{i,j+1})-\left\{\underline{\bar{x}}_{i,j+1},\underline{\bar{z}}_{i,j+1}\right\}&=\left(\tilde{\bar{\partial}}(\underline{\bar{z}}_{i,j})+\tilde{\bar{\partial}}(\overline{\Delta}_{z}^{i})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{z}}_{i,j}\right\}\right)\\ &-\left\{\overline{\Delta}_{x}^{i},\underline{\bar{z}}_{i,j}\right\}-\left\{\underline{\bar{x}}_{i,j},\overline{\Delta}_{z}^{i}\right\}-\left\{\overline{\Delta}_{x}^{i},\overline{\Delta}_{z}^{i}\right\}.\end{split} \tag{3.159}\]
(3.153) follows since all the four terms on the right-hand side lie in \(\overline{F}^{i+j-1}\overline{C}_{0}^{\lambda}\).
(3.154) follows from the computation
\[\begin{split}\underline{x}_{i+1,j+1}-\underline{e}_{+}(\underline {\bar{x}}_{i,j+1})&=(\underline{x}_{i+1,j+1}-\underline{x}_{i+ 1,j})+\left(\underline{x}_{i+1,j}-\underline{e}_{+}(\underline{\bar{x}}_{i,j} )\right)+e_{+}(\underline{\bar{x}}_{i,j}-\underline{\bar{x}}_{i,j+1})\\ &=\underline{e}_{-}(\overline{\Delta}_{x}^{i+1})+\left(\Delta_{x }^{i}-\underline{e}_{+}(\overline{\Delta}_{x}^{i})\right),\end{split} \tag{3.160}\]
where the definitions of \(\Delta_{x}^{i}\), \(\overline{\Delta}_{x}^{i}\) are used, and the first term of the second line is obtained by applying \(\underline{e}_{-}\) to \(\bar{\underline{x}}_{i+1,j+1}:=\bar{\underline{x}}_{i+1,j}+\overline{\Delta}_ {x}^{i+1}\). Note that by (3.144) and (3.145), the right-hand side of (3.160) lies in \(F^{i+j+1}C_{-2}^{\lambda}\).
Similarly, (3.155) and (3.156) follow from the computations
\[\begin{split}\underline{y}_{i+1,j+1}-\underline{e}_{+}(\underline {\bar{y}}_{i,j+1})&=(\underline{y}_{i+1,j+1}-\underline{y}_{i+1,j })+\left(\underline{y}_{i+1,j}-\underline{e}_{+}(\underline{\bar{y}}_{i,j}) \right)+\underline{e}_{+}(\underline{\bar{y}}_{i,j}-\underline{\bar{y}}_{i,j+ 1})\\ &=\underline{e}_{-}(\overline{\Delta}_{y}^{i+1})+\left(\Delta_{y} ^{i}-\underline{e}_{+}(\overline{\Delta}_{y}^{i})\right),\end{split} \tag{3.161}\]
\[\begin{split}\underline{z}_{i+1,j+1}-\underline{e}_{+}(\bar{ \underline{z}}_{i,j+1})&=(\underline{z}_{i+1,j+1}-\underline{z}_{i +1,j})+\left(\underline{z}_{i+1,j}-\underline{e}_{+}(\bar{\underline{z}}_{i,j })\right)+\underline{e}_{+}(\bar{\underline{z}}_{i,j}-\underline{\bar{z}}_{i,j +1})\\ &=\underline{e}_{-}(\overline{\Delta}_{z}^{i+1})+\left(\Delta_{z }^{i}-\underline{e}_{+}(\overline{\Delta}_{z}^{i})\right).\end{split} \tag{3.162}\]
By the induction hypothesis, \(\underline{\bar{x}}_{i,j}\) and \(\underline{\bar{z}}_{i,j}\) satisfy the conditions (vii)--(xi) in the statement of the lemma. Since the \((a,k)\)-components of \(\Delta_{x}^{i}\) and \(\tilde{\bar{\phi}}(\underline{\bar{x}}_{i,j})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j},\underline{\bar{x}}_{i,j}\right\}\) are non-zero only when \(\theta_{M}(a)\geqslant 2\varepsilon\) or \(a=0\), we can take \(\overline{\Delta}_{x}^{i}\) so that \(\overline{\Delta}_{x}^{i}(a,k)\neq 0\) only when \(\theta_{M}(a)\geqslant 2\varepsilon\) or \(a=0\). Similarly, we can take \(\overline{\Delta}_{z}^{i}\) so that \(\overline{\Delta}_{z}^{i}(a,k)\neq 0\) only if \(\theta_{M}(a)\geqslant 2\varepsilon\) or \(a=0\). By the definitions of \(\underline{\bar{x}}_{i,j+1}\) and \(\underline{\bar{z}}_{i,j+1}\), it follows that \(\underline{\bar{x}}_{i,j+1}(a,k)\) and \(\underline{\bar{z}}_{i,j+1}(a,k)\) are non-zero only if \(\theta_{M}(a)\geqslant 2\varepsilon\) or \(a=0\). Moreover, since \(\overline{\Delta}_{x}^{i}\in\overline{F}^{2}\overline{C}_{-2}^{\lambda}\) and \(\overline{\Delta}_{z}^{i}\in\overline{F}^{0}\overline{C}_{1}^{\lambda}\), it follows from the definition of the filtration (3.121) that \(\overline{\Delta}_{x}^{i}(0,k)=0\) if \(k=0,1,2,3\) and \(\overline{\Delta}_{z}^{i}(0,k)=0\) if \(k=0\). This shows that the corresponding statement holds for \(\underline{\bar{x}}_{i,j+1}\) and \(\underline{\bar{z}}_{i,j+1}\) as well. Finally, if \(a\notin\underline{A}_{x}^{+}\), then for any \(k\in\mathbb{Z}_{\geqslant 0}\), the \((a,k)\)-components of \(\Delta_{x}^{i}\) and \(\tilde{\bar{\phi}}(\underline{\bar{x}}_{i,j})-\frac{1}{2}\left\{\underline{\bar{x}}_{i,j},\underline{\bar{x}}_{i,j}\right\}\) are zero by the induction hypothesis. Thus one can take \(\overline{\Delta}_{x}^{i}\) so that \(\overline{\Delta}_{x}^{i}(a,k)=0\) for any \(k\in\mathbb{Z}_{\geqslant 0}\). Similarly, if \(a\notin\underline{A}_{y,z}^{+}\), then for any \(k\in\mathbb{Z}_{\geqslant 0}\), the \((a,k)\)-components of \(\Delta_{y}^{i}\), \(\Delta_{z}^{i}\), \(\tilde{\bar{\phi}}(\underline{\bar{y}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{y}}_{i,j}\right\}-\underline{\bar{z}}_{i,j}\) and \(\tilde{\bar{\phi}}(\underline{\bar{z}}_{i,j})-\left\{\underline{\bar{x}}_{i,j},\underline{\bar{z}}_{i,j}\right\}\) are zero. It follows that one can take \(\overline{\Delta}_{y}^{i}\) and \(\overline{\Delta}_{z}^{i}\) so that \(\overline{\Delta}_{y}^{i}(a,k)=\overline{\Delta}_{z}^{i}(a,k)=0\) for any \(k\in\mathbb{Z}_{\geqslant 0}\). This verifies (ix) for the chains \(\underline{\bar{x}}_{i,j+1}\), \(\underline{\bar{y}}_{i,j+1}\) and \(\underline{\bar{z}}_{i,j+1}\).
**Proposition 34**.: _Theorem 31 implies Theorem 26._
Proof.: Fix an integer \(i\geq I\). For every \(j\in\mathbb{Z}_{\geqslant 0}\), applying \(\underline{e}_{-}\) to the equations in Lemma 33 (vi), we obtain
\[\underline{x}_{i,j+1}-\underline{x}_{i,j}\in F^{i+j}C_{-2}^{\lambda},\ \underline{y}_{i,j+1}-\underline{y}_{i,j}\in F^{i+j-U-1}C_{2}^{\lambda},\ \underline{z}_{i,j+1}-\underline{z}_{i,j}\in F^{i+j-2}C_{1}^{\lambda}. \tag{3.163}\]
Thus the limits
\[\underline{x}:=\lim_{j\to\infty}\underline{x}_{i,j},\ \underline{y}:=\lim_{j\to\infty} \underline{y}_{i,j},\ \underline{z}:=\lim_{j\to\infty}\underline{z}_{i,j} \tag{3.164}\]
exist in the completion of \(C^{\lambda}_{\bullet}\) with respect to the filtration \(F^{m}\), and they satisfy the equations
\[\tilde{\partial}(\underline{x})-\frac{1}{2}\left\{\underline{x}, \underline{x}\right\}=0,\;\tilde{\partial}(\underline{y})-\left\{\underline{x}, \underline{y}\right\}=\underline{z}. \tag{3.165}\]
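For completeness, here is the standard convergence argument behind (3.164), stated under the assumption (implicit in the construction of \(F^{m}\)) that the filtration is decreasing and that the completion is the \(F\)-adic one: by (3.163), for every fixed \(m\) the image of \(\underline{x}_{i,j}\) in \(C^{\lambda}_{-2}/F^{m}C^{\lambda}_{-2}\) is independent of \(j\) once \(i+j\geqslant m\), so the sequence \((\underline{x}_{i,j})_{j\geqslant 0}\) defines an element of
\[\varprojlim_{m}\,C^{\lambda}_{-2}/F^{m}C^{\lambda}_{-2},\]
and similarly for \(\underline{y}_{i,j}\) and \(\underline{z}_{i,j}\); the equations (3.165) then follow in the limit from the corresponding approximate equations satisfied by the chains \(\underline{x}_{i,j}\), \(\underline{y}_{i,j}\) and \(\underline{z}_{i,j}\).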
To show that we actually have \(\underline{x}\in\widehat{C}^{\lambda}_{-2}\), \(\underline{y}\in\widehat{C}^{\lambda}_{2}\) and \(\underline{z}\in\widehat{C}^{\lambda}_{1}\), we need to verify that for any \(\Xi>0\), there are only finitely many classes \(a\in H_{1}(L;\mathbb{Z})\) with \(\theta_{M}(a)<\Xi\) and \((\underline{x}(a,k),\underline{y}(a,k),\underline{z}(a,k))\neq 0\) for some \(k\in\mathbb{Z}_{\geqslant 0}\). This follows from the observation that by Lemma 33 (ix), such a class \(a\) must satisfy \(a\in\underline{A}^{+}_{x}\cup\underline{A}^{+}_{y,z}\). On the other hand, Theorem 31 (vii) implies that \(\underline{A}^{+}_{x}(\Xi)\cup\underline{A}^{+}_{y,z}(\Xi)\) is a finite set. Finally, (iii) and (iv) in the statement of Theorem 26 follow from the last two conditions of Lemma 33.
## 4 Moduli spaces and Kuranishi structures
This section contains the main contents of this paper, where we prove Theorem 6. In the previous section, we have reduced it to the proof of Theorem 31, which requires us to produce sequences of cyclic de Rham chains in the complex \(C^{\lambda}_{\bullet}\) (and its relative version \(\overline{C}^{\lambda}_{\bullet}\)) satisfying certain conditions. As mentioned in Remarks 27 and 32, these chains will be obtained as natural projections of \(S^{1}\)-equivariant de Rham chains in \(C^{S^{1}}_{\bullet}\) (and its relative version \(\overline{C}^{S^{1}}_{\bullet}\)). Unlike the case of \(\mathbb{C}^{n}\), for general Liouville manifolds with cyclic dilations, the definitions of these chains will involve new sequences of moduli spaces4, which we will introduce below in Sections 4.1 and 4.2. The proof generally follows the strategy of Irie [33], which is based on the following underlying principle: given a K-space \((X,\widehat{\mathbb{U}})\) with a CF (continuous family) perturbation \(\widehat{\mathbb{S}}=(\widehat{\mathbb{S}}^{\varepsilon})_{0<\varepsilon\leqslant 1}\) and a strongly smooth map \(\hat{f}:(X,\widehat{\mathbb{U}})\to\mathcal{L}_{k+1}\), one can define a de Rham chain \(\hat{f}_{\bullet}(X,\widehat{\mathbb{U}},\widehat{\mathbb{S}}^{\varepsilon})\in C_{\bullet}\) for sufficiently small \(\varepsilon\) by integration along fibers. We will actually need slight variations of this principle for admissible K-spaces and relative de Rham chains. These facts are briefly recalled in Appendix A.3. To apply this principle, we equip the relevant moduli spaces with Kuranishi structures in Section 4.3, and define strongly smooth maps from these spaces to \(\mathcal{L}_{k+1}\), so that they are compatible with the boundary strata. To achieve the smoothness of these maps, we first define strongly continuous maps from these moduli spaces to spaces of continuous loops with marked points in Section 4.4, and then approximate these continuous maps by smooth ones (cf. Appendix A.2).
Footnote 4: With these moduli spaces, one can actually give a new proof of Fukaya-Irie’s result in the \(\mathbb{C}^{n}\) case using our “\(S^{1}\)-equivariant argument”. See Remark 51.
In this section, \(M\) will be a Liouville manifold with \(c_{1}(M)=0\), and \(L\subset M\) is a closed Lagrangian submanifold, which we assume to be oriented and relatively _Spin_ with respect to the \(\mathbb{Z}_{2}\)-gerbe \(\alpha\) fixed at the beginning of Section 1.3.
### Moduli spaces of holomorphic discs
We start by recalling the definition of a family of moduli spaces which appears frequently in the study of mirror symmetry [20], while here they serve as the sources of the chains approximating the (non-equivariant) Maurer-Cartan element \(x\in\widehat{C}_{-1}\) mentioned in the introduction. Along the way, we also fix some notations.
Let \(\mathcal{R}_{k+1}\) be the moduli space of closed unit discs \(D\) with marked points \(z_{0},\cdots,z_{k}\in\partial D\) aligned in counterclockwise order, where \(k\in\mathbb{Z}_{\geqslant 0}\), modulo the automorphism group \(Aut(D)\cong\mathit{PSL}(2,\mathbb{R})\). For any homotopy class \(\beta\in\pi_{2}(M,L)\) with \(\beta\neq 0\), or \(\beta=0\) and \(k\geqslant 2\), define5
Footnote 5: This is an unusual notation, but it will be convenient when these moduli spaces appear in the boundary strata of the Cohen-Ganatra moduli spaces defined below.
\[\mathcal{R}_{k+1}(L,\beta) \tag{4.1}\]
to be the space of pairs
\[((D,z_{0},\cdots,z_{k}),u)\,, \tag{4.2}\]
where the map \(u:(D,\partial D)\to(M,L)\) satisfies \(\bar{\partial}u=0\) and \([u]=\beta\). Here, the Cauchy-Riemann operator \(\bar{\partial}\) is taken with respect to the almost complex structure \(J_{M}\) fixed at the beginning of Section 3.4. As a convention, we have \(\mathcal{R}_{1}(L,0)=\mathcal{R}_{2}(L,0)=\emptyset\).
For a general closed Lagrangian submanifold \(L\), the transversality of the moduli spaces \(\mathcal{R}_{k+1}(L,\beta)\) cannot be achieved with standard perturbations of the almost complex structure. To study it we need the language of Kuranishi structures introduced by Fukaya-Oh-Ohta-Ono [20, 21, 22, 23], which we will briefly recall in Appendix A.1. \(\mathcal{R}_{k+1}(L,\beta)\) admits a compactification \(\overline{\mathcal{R}}_{k+1}(L,\beta)\), which is an admissible K-space and is modeled on decorated rooted ribbon trees. See [33], Section 7.2.2 for details.
To describe the compactification \(\overline{\mathcal{R}}_{k+1}(L,\beta)\), we recall the following notion.
**Definition 35**.: _A decorated rooted ribbon tree is a pair \((T,B)\) satisfying the following requirements._
1. \(T\) _is a connected tree, with the set of vertices_ \(C_{0}(T)\) _and the set of edges_ \(C_{1}(T)\)_._
2. _For each_ \(v\in C_{0}(T)\)_, a cyclic order of the set of edges is fixed._
3. _A decomposition_ \(C_{0}(T)=C_{0,\text{int}}(T)\sqcup C_{0,\text{ext}}(T)\) _into the set of interior and exterior vertices. For every_ \(v\in C_{0,\text{int}}(T)\)_, define_ \(k_{v}\) _to be the valency of_ \(v\) _minus 1._
4. _A distinguished element in_ \(C_{0,\text{ext}}(T)\)_, which plays the role of the root._
5. _The valency of every exterior vertex is 1._
6. _There is a map_ \(B:C_{0,\text{int}}(T)\to\pi_{2}(M,L)\)_. For every_ \(v\in C_{0,\text{int}}(T)\)_, either_ \(d\theta_{M}(B(v))>0\) _or_ \(B(v)=0\)_._
7. _Every_ \(v\in C_{0,\text{int}}(T)\) _with_ \(B(v)=0\) _has valency at least 3._
Denote also by \(C_{1,\text{int}}(T)\) the set of interior edges, and by \(C_{1,\text{ext}}(T)\) the set of exterior edges. An edge is called exterior if it contains an exterior vertex; otherwise it is called interior. For every \(k\in\mathbb{Z}_{\geqslant 0}\) and \(\beta\in\pi_{2}(M,L)\), let \(\mathcal{G}(k+1,\beta)\) be the set of decorated rooted ribbon trees with \(\#C_{0,\text{ext}}(T)=k+1\) and \(\sum_{v\in C_{0,\text{int}}(T)}B(v)=\beta\). For every \((T,B)\in\mathcal{G}(k+1,\beta)\), one can define an "interior" evaluation map
\[ev_{\text{int}}:\prod_{v\in C_{0,\text{int}}(T)}\mathcal{R}_{k_{v}+1}(L,B(v)) \to\prod_{e\in C_{1,\text{int}}(T)}L^{2} \tag{4.3}\]
by, roughly speaking, evaluating at the two endpoints of each edge \(e\in C_{1,\text{int}}(T)\). For details, see [23], Section 21.1. We also have the "exterior" evaluation map
\[ev_{\text{ext}}:\prod_{v\in C_{0,\text{int}}(T)}\mathcal{R}_{k_{v}+1}(L,B(v))\to\prod_{e\in C_{1,\text{ext}}(T)}L\cong L^{k+1}, \tag{4.4}\]
defined in the obvious way by evaluating at the exterior edges. Define the evaluation map
\[ev^{\mathcal{R}}=(ev_{0}^{\mathcal{R}},\cdots,ev_{k}^{\mathcal{R}}):\overline {\mathcal{R}}_{k+1}(L,\beta)\to L^{k+1} \tag{4.5}\]
on the compactified moduli space by restricting \(ev_{\text{ext}}\) to
\[\overline{\mathcal{R}}_{k+1}(L,\beta):=\bigsqcup_{(T,B)\in\mathcal{G}(k+1, \beta)}\left(\prod_{e\in C_{1,\text{int}}(T)}L\right)\,_{\Delta}\times_{ev_{ \text{int}}}\,\left(\prod_{v\in C_{0,\text{int}}(T)}\mathcal{R}_{k_{v}+1}(L, B(v))\right)\!, \tag{4.6}\]
where \(\Delta:\prod_{e\in C_{1,\text{int}}(T)}L\to\prod_{e\in C_{1,\text{int}}(T)}L^{2}\) is the diagonal map.
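The combinatorics underlying Definition 35 and the index set \(\mathcal{G}(k+1,\beta)\) can be conveniently organized on a computer. The following is a minimal sketch in Python, assuming classes in \(\pi_{2}(M,L)\) are modelled by opaque labels with a user-supplied area function; all names (`RibbonTree`, `is_valid`, ...) are hypothetical and only illustrate the bookkeeping in conditions (i)--(vii), not any construction carried out in this paper.

```python
# Hypothetical bookkeeping for Definition 35; connectivity/acyclicity of the
# underlying graph is assumed and not re-checked here.
from dataclasses import dataclass
from typing import Dict, FrozenSet, Hashable, List, Set

Vertex = Hashable
Beta = Hashable   # stand-in for a class in pi_2(M, L); 0 denotes the trivial class
Edge = FrozenSet  # an edge is a 2-element frozenset of vertices


@dataclass
class RibbonTree:
    edges_at: Dict[Vertex, List[Edge]]  # edges at v, in the fixed cyclic order (ii)
    interior: Set[Vertex]               # C_{0,int}(T)
    exterior: Set[Vertex]               # C_{0,ext}(T)
    root: Vertex                        # distinguished exterior vertex (iv)
    B: Dict[Vertex, Beta]               # decoration on interior vertices (vi)

    def valency(self, v: Vertex) -> int:
        return len(self.edges_at[v])

    def k(self, v: Vertex) -> int:      # k_v = valency(v) - 1 for interior v
        return self.valency(v) - 1


def is_valid(T: RibbonTree, area) -> bool:
    """Check conditions (iii)-(vii); area(b) is the symplectic area of b, area(0) == 0."""
    if T.interior & T.exterior or T.root not in T.exterior:
        return False
    if any(T.valency(v) != 1 for v in T.exterior):          # (v)
        return False
    for v in T.interior:
        b = T.B.get(v, 0)
        if b != 0 and not area(b) > 0:                       # (vi)
            return False
        if b == 0 and T.valency(v) < 3:                      # (vii)
            return False
    return True
```

With such a structure, membership of \((T,B)\) in \(\mathcal{G}(k+1,\beta)\) amounts to the two additional checks that `len(T.exterior) == k + 1` and that the decorations sum to \(\beta\).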
To construct the chain level Maurer-Cartan element \(\tilde{x}\in\widehat{C}_{-2}^{S^{1}}\) in the equivariant case, we need a slight variation of the moduli spaces \(\mathcal{R}_{k+1}(L,\beta)\) defined above.
Let \(\mathcal{R}_{k+1,\vartheta}\) be the moduli space of domains \((D,z_{0},\cdots,z_{k})\) modulo \(Aut(D)\) as above, but now with the position of \(z_{k}\) fixed. More precisely, regard \(z_{0}\) as the base point, with
respect to which every other marked point \(z_{i}\), where \(1\leqslant i\leqslant k\), has an argument \(\vartheta_{i}\). We require that
\[\vartheta_{k}=\text{const.} \tag{4.7}\]
For \(1\leqslant i\leqslant k\), there is a map
\[\pi_{\vartheta,i}:\mathcal{R}_{k+1,\vartheta}\to\mathcal{R}_{k} \tag{4.8}\]
defined by applying the cyclic permutation to the boundary marked points \(z_{0},\cdots,z_{k}\), so that \(z_{i}\) becomes \(z_{0}\), and then forgetting the point labeled \(z_{k+1-i}\) under the permutation. Since the position of \(z_{k+1-i}\) after the permutation is uniquely determined by \(z_{k-i}\), \((-1)^{ki}\pi_{\vartheta,i}\) is an orientation-preserving embedding (the sign \((-1)^{ki}\) follows from our orientation convention in Appendix B), which identifies \(\mathcal{R}_{k+1,\vartheta}\) as an open sector of \(\mathcal{R}_{k}\). More precisely, we have the following:
**Lemma 36**.: _The disjoint union \(\bigsqcup_{1\leqslant i\leqslant k}(-1)^{ki}\pi_{\vartheta,i}(\mathcal{R}_{ k+1,\vartheta})\) covers all but codimension \(1\) strata of \(\mathcal{R}_{k}\)._
Proof.: This follows directly from the definitions. The simplest case when \(k=2\) is illustrated in Figure 1, where \(\pi_{\vartheta,1}(\mathcal{R}_{3,\vartheta})\subset\mathcal{R}_{2}\) is the sector with \(0<\arg(z_{1})<\vartheta_{2}\), and \(\pi_{\vartheta,2}(\mathcal{R}_{3,\vartheta})\subset\mathcal{R}_{2}\) gives the sector with \(\vartheta_{2}<\arg(z_{1})<2\pi\). The general case is similar. After permuting the boundary marked points \(i\) times, the point \(z_{0}\) before permutation will become a point lying in the open arc \((z_{k-i},z_{k-i+1\bmod k+1})\) after the permutation, therefore performing cyclic permutations \(k\) times exhausts the whole circle \(\partial D\) (up to isolated points).
Consider the compactification \(\overline{\mathcal{R}}_{k+1,\vartheta}\), which consists of nodal discs with a total number of \(k+1\) marked points (excluding the nodes) on the boundaries of the components. For an element \(S\) of \(\overline{\mathcal{R}}_{k+1,\vartheta}\), we call the component \(S_{0}\subset S\) containing the marked point \(z_{0}\) the _main component_. Let \(z_{i_{1}},\cdots,z_{i_{r}}\in\partial S_{0}\) be the marked points on the boundary of the main component, and let \(\zeta_{1},\cdots,\zeta_{s}\in\partial S_{0}\) be the nodes. Denote by \(\varphi_{i}\) the argument of \(\zeta_{i}\) taken with respect to the base point \(z_{0}\). If \(i_{r}=k\), then \(S\) is required to satisfy \(\vartheta_{i_{r}}=\text{const}\), otherwise we impose the condition \(\varphi_{i}=\text{const}\) for the disc bubble emanating from \(\zeta_{i}\) containing the marked point \(z_{k}\). For a concrete example of a nodal disc \(S\in\overline{\mathcal{R}}_{8,\vartheta}\), see Figure 2. Note that the codimension \(1\) boundary of \(\overline{\mathcal{R}}_{k+1,\vartheta}\) is covered by the natural
inclusions of
\[\bigsqcup_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ 1\leqslant k_{1}\leqslant k-1\\ 1\leqslant i\leqslant k_{1}\end{subarray}}\overline{\mathcal{R}}_{k_{1}+1,\vartheta}\ {}_{i}\times_{0}\ \overline{\mathcal{R}}_{k_{2}+1}, \tag{4.9}\]
where the notation \({}_{i}\times_{0}\) means the disc breaking happens at \(z_{i}\), with the nodal point playing the role of \(z_{0}\) on the disc bubble in \(\overline{\mathcal{R}}_{k_{2}+1}\). The map \(\pi_{\vartheta,i}\) extends to a map \(\overline{\mathcal{R}}_{k+1,\vartheta}\to\overline{\mathcal{R}}_{k}\) defined on the compactifications; an example is given in Figure 2.
When \(\beta=0\), we set \(\mathcal{R}_{2,\vartheta}(L,0)=\mathcal{R}_{3,\vartheta}(L,0)=\emptyset\). When \(\beta\neq 0\) or \(k\geqslant 3\), define
\[\mathcal{R}_{k+1,\vartheta}(L,\beta) \tag{4.10}\]
to be the space of \(\{(u,z_{0},\cdots,z_{k})\}/Aut(D)\), with \(u:(D,\partial D)\to(M,L)\) as above and the position of \(z_{k}\) fixed as in (4.7). It is clear that \(\mathcal{R}_{k+1,\vartheta}(L,\beta)\) is a K-space. As in the case of \(\overline{\mathcal{R}}_{k+1}(L,\beta)\), the compactification \(\overline{\mathcal{R}}_{k+1,\vartheta}(L,\beta)\) is also modeled on decorated rooted ribbon trees. In a similar fashion as above, we can define the interior evaluation map
\[ev_{int}:\prod_{v\in C_{0,\text{int}}(T)\setminus\{v_{0}\}}\mathcal{R}_{k_{v} +1}(L,B(v))\times\mathcal{R}_{k_{v_{0}}+1,\vartheta}(L,B(v_{0}))\to\prod_{e\in C _{1,\text{int}}(T)}L^{2} \tag{4.11}\]
and the exterior evaluation map
\[ev_{ext}:\prod_{v\in C_{0,\text{int}}(T)\setminus\{v_{0}\}}\mathcal{R}_{k_{v} +1}(L,B(v))\times\mathcal{R}_{k_{v_{0}}+1,\vartheta}(L,B(v_{0}))\to L^{k+1}. \tag{4.12}\]
It follows that the compactified moduli space is given by
\[\begin{split}\overline{\mathcal{R}}_{k+1,\vartheta}(L,\beta)&:=\bigsqcup_{\begin{subarray}{c}(T,B)\in\mathcal{G}(k+1,\beta)\\ v_{0}\in C_{0,\text{int}}(T)\end{subarray}}\left(\prod_{e\in C_{1,\text{int}}(T)}L\right)\,_{\Delta}\times_{ev_{\text{int}}}\\ &\left(\prod_{v\in C_{0,\text{int}}(T)\setminus\{v_{0}\}}\mathcal{R}_{k_{v}+1}(L,B(v))\times\mathcal{R}_{k_{v_{0}}+1,\vartheta}(L,B(v_{0}))\right).\end{split} \tag{4.13}\]
Define the evaluation map
\[ev_{\vartheta}^{\mathcal{R}}=(ev_{\vartheta,0}^{\mathcal{R}},\cdots,ev_{ \vartheta,k}^{\mathcal{R}}):\overline{\mathcal{R}}_{k+1,\vartheta}(L,\beta) \to L^{k+1} \tag{4.14}\]
as the restriction of \(ev_{\text{\tiny{ext}}}\) to \(\overline{\mathcal{R}}_{k+1,\vartheta}(L,\beta)\).
**Remark 37**.: _In [18], Fukaya proposed to use the moduli space of holomorphic discs (without marked points on \(\partial D\)) modulo \(Aut(D,1)\), the automorphism group of \(D\) fixing \(1\in\partial D\), to define the Maurer-Cartan element in the non-equivariant case, and the same space of maps modulo \(Aut(D)\) to define the Maurer-Cartan element in the \(S^{1}\)-equivariant case. This is compatible with our constructions in the presence of boundary marked points._
### Cohen-Ganatra moduli spaces
We consider a sequence of moduli spaces which are variants of the moduli spaces studied in a different context by Cohen-Ganatra [13]. They will be used to construct the \(S^{1}\)-equivariant chains approximating the primitive \(\tilde{y}\in\widehat{C}_{2}^{S^{1}}\) of the identity (cf. (1.21)).
To start with, we briefly recall the definition of a simpler moduli space introduced by Ganatra (cf. [27], Section 4.3). An _\(l\)-point angle-decorated cylinder_ is a cylinder \(C=\mathbb{R}\times S^{1}\), equipped with a collection of auxiliary marked points \(p_{1},\cdots,p_{l}\in C\), such that their \(s\in\mathbb{R}\) coordinates \((p_{i})_{s}\), \(1\leqslant i\leqslant l\), satisfy
\[(p_{1})_{s}\leqslant\cdots\leqslant(p_{l})_{s}. \tag{4.15}\]
Let \({}_{l}\mathcal{M}\) be the moduli space of \(l\)-point angle-decorated cylinders \((C,p_{1},\cdots,p_{l})\), modulo translations in the \(s\)-direction. It admits a compactification \({}_{l}\overline{\mathcal{M}}\) as a smooth manifold with corners, by adding broken cylinders with marked point decorations.
To incorporate symplectic geometry, one needs to study smooth maps from the domains \((C,p_{1},\cdots,p_{l})\) to \(M\) satisfying Floer's equation with certain asymptotic conditions. For this purpose we need to introduce Floer data for elements in \({}_{l}\mathcal{M}\). We say that a time-dependent Hamiltonian function \(H_{t}:S^{1}\times M\to\mathbb{R}\) is _admissible_ if \(H_{t}=H+F_{t}\) is the sum of an autonomous Hamiltonian function \(H:M\to\mathbb{R}\) which is equal to \(r^{2}\) on the cylindrical end \([r_{0},\infty)\times\partial\overline{M}\) for some \(r_{0}\gg 1\), and a time-dependent perturbation \(F_{t}:S^{1}\times M\to\mathbb{R}\). Furthermore, we require that for any \(r_{1}\gg r_{0}\), there exists an \(r>r_{1}\) such that \(F_{t}=0\) in a neighborhood of the hypersurface \(\{r\}\times\partial\overline{M}\subset M\). Denote by \(\mathcal{H}(M)\) the set of admissible Hamiltonians \(H_{t}\) so that all the \(1\)-periodic orbits of the Hamiltonian vector field \(X_{H_{t}}\) are non-degenerate.
Let \(J_{t}:S^{1}\times TM\to TM\) be a time-dependent almost complex structure. It is called _weak contact type_ if there exists a sequence \(\{r_{i}\}_{i\in\mathbb{N}}\) of positive real numbers with \(\lim_{i\to\infty}r_{i}=\infty\) such that near each hypersurface \(\{r_{i}\}\times\partial\overline{M}\) it satisfies \(dr\circ J_{t}=-\theta_{M}\). Denote by \(\mathcal{J}(M)\) the set of \(d\theta_{M}\)-compatible almost complex structures on \(M\) which are of weak contact type.
A Floer datum for an element of \({}_{l}\mathcal{M}\) consists of the following:
* Choices of positive and negative cylindrical ends \[\varepsilon^{+}:[0,\infty)\times S^{1}\to C\text{ and }\varepsilon^{-}:(-\infty,0]\times S^{1}\to C\] (4.16)
such that \[\varepsilon^{+}(s,t)=(s+(p_{l})_{s}+\eta,t),\ \varepsilon^{-}(s,t)=(s-(p_{1})_{s}+ \eta,t+(p_{1})_{t}),\] (4.17) where \((p_{1})_{t}\) is the \(t\in S^{1}\)-coordinate of \(p_{1}\).
* A domain-dependent Hamiltonian function \(H_{C}:C\times M\to\mathbb{R}\) satisfying \[(\varepsilon^{\pm})^{*}H_{C}=H_{t}\] (4.18) for some \(H_{t}\in\mathcal{H}(M)\).
* A domain-dependent almost complex structure \(J_{C}:C\times\text{{TM}}\to\text{{TM}}\) such that \[(\varepsilon^{\pm})^{*}J_{C}=J_{t}\] (4.19) for some \(J_{t}\in\mathcal{J}(M)\).
A _Floer datum for the \(S^{1}\)-action_ on the cochain complex \(\text{{SC}}^{*}(M)\) is an inductive sequence of choices of Floer data for the compactified moduli spaces \(\left\{{}_{l}\overline{\mathcal{M}}\right\}_{l\in\mathbb{N}}\), i.e. for each \(l\geqslant 1\) and each element of \({}_{l}\overline{\mathcal{M}}\), which is compatible with the boundary strata in \(\partial\left({}_{l}\overline{\mathcal{M}}\right)\), and varies smoothly with respect to gluing. The existence of such a Floer datum is guaranteed by an induction argument.
From now on, fix a choice of Floer datum for the \(S^{1}\)-action on \(\text{{SC}}^{*}(M)\). For two \(1\)-periodic orbits \(x\) and \(y\) of \(X_{H_{t}}\), define
\[{}_{l}\mathcal{M}(x,y) \tag{4.20}\]
to be the space of pairs \(((C,p_{1},\cdots,p_{l}),u)\), where \((C,p_{1},\cdots,p_{l})\in{}_{l}\mathcal{M}\) is an \(l\)-point angle-decorated cylinder, and \(u:C\to M\) is a map satisfying
\[\left\{\begin{array}{l}(du-X_{H_{C}}\otimes dt)^{0,1}=0,\\ \lim_{s\to\infty}(\varepsilon^{+})^{*}u(s,\cdot)=x,\\ \lim_{s\to-\infty}(\varepsilon^{-})^{*}u(s,\cdot)=y,\end{array}\right. \tag{4.21}\]
where the \((0,1)\)-part in the Floer equation is taken with respect to \(J_{C}\). With generic choices of Floer data, transversality for \({}_{l}\mathcal{M}(x,y)\) can be achieved by a standard argument, and the virtual dimension \(\leqslant 1\) components of the Gromov compactified moduli space \({}_{l}\overline{\mathcal{M}}(x,y)\) are compact manifolds with boundary. A signed count of rigid elements in \({}_{l}\overline{\mathcal{M}}(x,y)\) defines the cochain level operations
\[\delta_{l}:SC^{*+2l-1}(M)\to\text{{SC}}^{*}(M) \tag{4.22}\]
appearing in (1.7). Note that if we allow \(l=0\), then the moduli space \({}_{0}\mathcal{M}(x,y)\), which we will abbreviate by \(\mathcal{M}(x,y)\), is the moduli space defining the Floer differential on the cochain complex \(\text{{SC}}^{*}(M)\).
The Cohen-Ganatra moduli spaces that we introduce below can be thought of as certain interpolations of the moduli spaces \({}_{l}\mathcal{M}(x,y)\) considered above and the moduli space \(\mathcal{R}_{k+1}(L,\beta)\) recalled in Section 4.1. Since this is a parametrized moduli space, we first introduce the moduli space \({}_{l}\mathcal{R}^{1}_{k+1}\) of domains
\[(S;z_{0},\cdots,z_{k},p_{1},\cdots,p_{l};\ell) \tag{4.23}\]
modulo automorphisms, where \(S=D\backslash\{\zeta\}\) is a closed unit disc with an interior puncture \(\zeta\), which will serve as an input. At \(\zeta\) there is an asymptotic marker, which is a half-line \(\ell\in T_{\zeta}D\). As in the case of \(\mathcal{R}_{k+1}(L,\beta)\), there are \(k+1\) marked points \(z_{0},\cdots,z_{k}\in\partial D\), labeled in counterclockwise order. Moreover, there is a set of auxiliary marked points \(p_{1},\cdots,p_{l}\in S\) lying in the interior of \(D\). For a representative of an element of \({}_{l}\mathcal{R}^{1}_{k+1}\) with
\(\zeta\) fixed at the origin, and \(z_{0}\) fixed at \(1\), these points are required to be strictly radially ordered with norms in \((0,\frac{1}{2})\), i.e.
\[0<|p_{l}|<\cdots<|p_{1}|<\frac{1}{2}. \tag{4.24}\]
Finally, we require the asymptotic marker \(\ell\) at \(\zeta\) to point toward \(p_{l}\). See Figure 3 for a depiction of a representative of \({}_{3}\mathcal{R}^{1}_{4}\).
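As a quick illustration of the normalisation just described, the following hypothetical snippet (purely illustrative, not part of any construction in this paper) checks the strict radial ordering (4.24) for a configuration of auxiliary points, recorded as complex numbers for a representative with \(\zeta=0\) and \(z_{0}=1\), and reads off the direction of the asymptotic marker.

```python
import cmath

def radially_ordered(ps):
    """ps = [p_1, ..., p_l] as complex numbers; check 0 < |p_l| < ... < |p_1| < 1/2."""
    norms = [abs(p) for p in ps]
    strict = all(norms[i + 1] < norms[i] for i in range(len(ps) - 1))
    return strict and (not ps or (norms[-1] > 0 and norms[0] < 0.5))

def marker_direction(ps):
    """The asymptotic marker at zeta = 0 is required to point toward p_l."""
    return cmath.phase(ps[-1])

# example: three radially ordered auxiliary points (l = 3)
ps = [0.4 * cmath.exp(0.3j), 0.25 * cmath.exp(2j), 0.1 * cmath.exp(-1j)]
assert radially_ordered(ps)
```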
The compactification \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}\) of the moduli space \({}_{l}\mathcal{R}^{1}_{k+1}\) is a real blow-up of the usual Deligne-Mumford compactification. To describe its boundary strata, we introduce two auxiliary moduli spaces: \({}_{l}^{j,j+1}\mathcal{R}^{1}_{k+1}\) and \({}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\).
\({}_{l}^{j,j+1}\mathcal{R}^{1}_{k+1}\) is the moduli space of the domains (4.23), except that the strict radial ordering condition (4.24) for the auxiliary marked points \(p_{1},\cdots,p_{l}\) is now replaced with
\[|p_{l}|<\cdots<|p_{j+1}|=|p_{j}|<\cdots<|p_{1}|<\frac{1}{2}, \tag{4.25}\]
for some \(1\leqslant j\leqslant l-1\).
\({}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\) is the moduli space of the same domains, with \(p_{1},\cdots,p_{l}\) satisfying the strict radial ordering condition (4.24), but with \(|p_{1}|=\frac{1}{2}\), i.e.
\[|p_{l}|<\cdots<|p_{1}|=\frac{1}{2}. \tag{4.26}\]
For each \(1\leqslant j\leqslant l-1\), there exists a map
\[\pi_{j}:{}_{l}^{j,j+1}\mathcal{R}^{1}_{k+1}\rightarrow{}_{l-1}\mathcal{R}^{1}_{k+1}, \tag{4.27}\]
which forgets the marked point \(p_{j}\). Since this amounts to forgetting the argument of \(p_{j}\), \(\pi_{j}\) has \(1\)-dimensional fibers. The map \(\pi_{j}\) extends to a map defined on the compactification \({}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}\), which we will still denote by \(\pi_{j}\) by abuse of notation. On the moduli space \({}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\), there is another map
\[\pi_{S^{1}}:{}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\rightarrow{}_{l-1}\mathcal{R}^{1}_{k+1}, \tag{4.28}\]
forgetting the marked point \(p_{1}\). It also extends to a map on the compactification \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\).
**Proposition 38**.: _With the notations introduced above, the codimension \(1\) boundary components of \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}\) are covered by the images of the natural inclusions of the following strata_
\[{}_{j}\overline{\mathcal{M}}\times{}_{l-j}\overline{\mathcal{R}}^{1}_{k+1},1 \leqslant j\leqslant l, \tag{4.29}\]
\[{}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1},\ 1\leqslant j\leqslant l-1, \tag{4.30}\]
\[{}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}, \tag{4.31}\]
\[{}_{l}\overline{\mathcal{R}}^{1}_{k_{1}+1}\ {}_{i}\times_{0}\ \overline{\mathcal{R}}_{k_{2}+1},\ k_{1}\geqslant 0,\ k_{2}\geqslant 2,\ k_{1}+k_{2}=k+1,\ 1\leqslant i\leqslant k_{1}. \tag{4.32}\]
Figure 3: An element in the moduli space \({}_{3}\mathcal{R}^{1}_{4}\)
Proof.: When there are no marked points on \(\partial S\), the boundary strata of the moduli spaces \(\overline{\mathcal{R}}^{1}_{k+1}\) have been analysed by Cohen-Ganatra [13], Section 4.2. See also [54], Section 4.4. In particular, we have the boundary strata (4.29), (4.30) and (4.31). Note that the strata in (4.29) are loci created by real blow-ups; see Figure 4 for an illustration. The only difference in our case is that there are now \(k+1\) additional marked points \(z_{0},\cdots,z_{k}\) on the boundary \(\partial S\). When two boundary marked points \(z_{i}\) and \(z_{j}\) come together, where \(i<j\), a disc will break off from the domain \((S;z_{0},\cdots,z_{k},p_{1},\cdots,p_{l};\ell)\), carrying the marked points \(z_{i},\cdots,z_{j}\), together with the nodal point on its boundary. Such disc bubbles give rise to the strata in (4.32).
The moduli space \({}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\) introduced above will play a crucial role for our later purposes, so we need to analyze it a bit further. Via the forgetful map (4.28), we have an abstract identification
\[{}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\cong{}_{l-1}\mathcal{R}^{1}_{k+1}\times S^{ 1}, \tag{4.33}\]
where the \(S^{1}\) factor is determined by the argument \(\theta_{1}:=\arg(p_{1})\). Under this identification, the compactification \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\) is abstractly modeled by \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1}\times S^{1}\). In particular, the codimension 1 boundary stratum \({}_{l-1}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}\subset{}_{l-1}\overline{ \mathcal{R}}^{1}_{k+1}\) for some \(2\leqslant j\leqslant l-1\) corresponds to a stratum \({}_{l-1}^{j,j+1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\) in the codimension 1 boundary of \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\), where the \(S^{1}\) factor describes the situation that \(|p_{1}|=|p_{2}|=\frac{1}{2}\). Denote by
\[\pi_{j}^{S^{1}}:{}_{l-1}^{j,j+1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\to{}_{l- 2}\overline{\mathcal{R}}^{S^{1}}_{k+1} \tag{4.34}\]
the map which forgets \(p_{j}\).
For an element \(S\) of \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\), we say that \(p_{1}\) _points at a boundary point_ \(z_{i}\), for some \(0\leqslant i\leqslant k\), if for a representative of \(S\) with \(\zeta\) fixed at the origin, the ray from \(\zeta\) to \(p_{1}\) points at \(z_{i}\). Denote by \({}_{l-1}\overline{\mathcal{R}}^{S^{1}_{i}}_{k+1}\subset{}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\) the codimension 1 locus where \(p_{1}\) points at \(z_{i}\). There is a bijection
\[\tau_{i}:{}_{l-1}\overline{\mathcal{R}}^{S^{1}_{i}}_{k+1}\to{}_{l-1}\overline{\mathcal{R}}^{1}_{k+1}, \tag{4.35}\]
which is defined as follows. When \(l\geqslant 2\), \(\tau_{i}\) forgets the point \(p_{1}\) on the circle \(|z|=\frac{1}{2}\), and relabels the remaining auxiliary marked points \(p_{2},\cdots,p_{l}\) as \(p_{1},\cdots,p_{l-1}\). When \(l=1\), \(\tau_{i}\) is defined by cyclically permuting the boundary marked points, so that the original \(z_{i}\) is
now labeled \(z_{k}\), and then forgetting \(p_{1}\). Similarly, we say that \(p_{1}\) _points between_ \(z_{i}\) and \(z_{i+1\bmod k+1}\) if for such a representative, the ray from \(\zeta\) to \(p_{1}\) intersects the arc in \(\partial S\) from \(z_{i}\) to \(z_{i+1\bmod k+1}\). The locus in \({}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\) where \(p_{1}\) points between \(z_{i}\) and \(z_{i+1\bmod k+1}\) is denoted by \({}_{l-1}\mathcal{R}^{S^{1}_{i,i+1}}_{k+1}\).
As in the case of \(\overline{\mathcal{R}}_{k}\) (cf. Lemma 36), we can decompose the moduli space \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\) into its sectors \({}_{l-1}\overline{\mathcal{R}}^{S^{1}_{i,i+1}}_{k+1}\), and identify each sector with a moduli space of the form \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}\). \({}_{l-1}\mathcal{R}^{1}_{k+1,\tau_{i}}\) is the abstract moduli space of discs with \(k+2\) boundary marked points \(z_{0},\cdots,z_{i-1},z_{f},z_{i},\cdots,z_{k}\) arranged in counterclockwise order, with the point \(z_{f}\) marked as auxiliary, one interior puncture \(\zeta\), marked as an input, equipped with an asymptotic marker \(\ell\), and \(l\) auxiliary marked points \(p_{1},\cdots,p_{l}\) in the interior of the disc which are strictly radially ordered with norms in \((0,\frac{1}{2})\) in the sense of (4.24), for a representative of an element of \({}_{l-1}\mathcal{R}^{1}_{k+1,\tau_{i}}\) which fixes \(z_{0}\) at \(1\) and \(\zeta\) at \(0\). Note that the compactification \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}\) is abstractly isomorphic to \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+2}\), except that \(z_{f}\) is marked as auxiliary, which means that it is forgotten when we consider the boundary \(\partial S\) as a Moore loop with marked points. We remark that this is an important point which will play a crucial role in our argument in Section 4.5. More precisely, at any stratum of \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}\):
* we treat the main component (the one containing the interior puncture \(\zeta\)) as belonging to \({}_{l-1}\overline{\mathcal{R}}^{1}_{k^{\prime}+1,\tau_{j}}\) for some \(0\leqslant k^{\prime}\leqslant k\) and \(0\leqslant j\leqslant k^{\prime}\) if it contains \(z_{f}\) as a boundary marked point, and to \({}_{l-1}\overline{\mathcal{R}}^{1}_{k^{\prime}+2}\) if it does not;
* if a non-main disc component (the one without the puncture \(\zeta\)) contains the boundary marked point \(z_{f}\), we view it as an element of \(\mathcal{R}_{k^{\prime},f_{i}}\), the space of discs with \(k^{\prime}+1\) boundary marked points, where the \(i\)th point is marked as forgotten. See [27], Appendix A.2 for the detailed construction of the moduli spaces \(\mathcal{R}_{k^{\prime},f_{i}}\) when the boundary marked points are treated as punctures.
Moreover, the asymptotic marker \(\ell\) at \(\zeta\) points in the direction of \(\theta_{1}\) (or of \(z_{f}\) if \(l=0\)). In order to relate the moduli space \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}\) to a sector \({}_{l-1}\overline{\mathcal{R}}^{S^{1}_{i,i+1}}_{k+1}\subset{}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\), we define the _auxiliary-rescaling map_
\[\pi_{f}^{i}:{}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}\to{}_{l-1}\overline{\mathcal{R}}^{S^{1}_{i,i+1}}_{k+1}, \tag{4.36}\]
which, for a representative of an element of \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}\) with \(\zeta=0\), adds a point \(p_{0}\) on the line segment connecting \(\zeta\) and \(z_{f}\) with \(|p_{0}|=\frac{1}{2}\) and deletes \(z_{f}\). Finally, we relabel the marked points \(p_{0},\cdots,p_{l}\) as \(p_{1},\cdots,p_{l+1}\). For an illustrative example of the definition of \(\pi_{f}^{i}\), see Figure 4.2. By our orientation conventions (cf. Appendix B), the map \(\pi_{f}^{i}\) is an oriented diffeomorphism.
Finally, there is a free \(\mathbb{Z}_{k+1}\)-action on the moduli space \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\) generated by the map
\[\kappa:{}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\to{}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}, \tag{4.37}\]
which cyclically permutes the labels of the boundary marked points. Concretely, \(\kappa\) changes the label \(z_{i}\) to \(z_{i+1}\) if \(0\leqslant i\leqslant k-1\), and \(z_{k}\) to \(z_{0}\). It can be shown that this \(\mathbb{Z}_{k+1}\)-action is properly discontinuous. For a similar (in fact, almost identical) situation, see [27], Lemma 12.
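On the level of labels, the action of \(\kappa\) is just a cyclic rotation; the following toy snippet (hypothetical, purely illustrative) records this relabelling and the fact that applying it \(k+1\) times returns the original labels.

```python
def kappa(labels):
    """Send the label z_i to z_{i+1} for 0 <= i <= k-1 and z_k to z_0."""
    k = len(labels) - 1
    return [(i + 1) % (k + 1) for i in labels]

labels = list(range(4))   # k = 3: labels z_0, ..., z_3
state = labels
for _ in range(4):        # kappa^{k+1} = identity
    state = kappa(state)
assert state == labels
```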
**Remark 39**.: _Auxiliary moduli spaces similar to \({}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\), \({}_{l-1}\mathcal{R}^{S^{1}_{i}}_{k+1}\), \({}_{l-1}\mathcal{R}^{S^{1}_{i,i+1}}_{k+1}\) and \({}_{l-1}\mathcal{R}^{1}_{k+1,\tau_{i}}\) considered above also play important roles in Ganatra's construction of cyclic open-closed maps [27]. The main difference between our set-up and his is that the marked points \(z_{0},\cdots,z_{k}\in\partial D\) there are treated as punctures, and are therefore equipped with strip-like ends (and Floer data) for the purpose of constructing open-closed maps. Here, as we have mentioned, they are regarded as marked points on the Moore loops, so one can study the corresponding forgetful maps. These forgetful maps will be important for the construction of cyclic invariant Kuranishi structures in Section 4.3._
In order to study the Floer theory associated to the domains in \({}_{l}\mathcal{R}^{1}_{k+1}\), we need the corresponding notion of Floer data. Note that by identifying an element of \({}_{l}\mathcal{R}^{1}_{k+1}\) with a half-cylinder \(\mathbb{R}_{\geqslant 0}\times S^{1}\), we can assign to each point \(p_{i}\) an \(s\)-coordinate \((p_{i})_{s}\in\mathbb{R}_{\geqslant 0}\) and a \(t\)-coordinate \((p_{i})_{t}\in S^{1}\).
**Definition 40**.: _A Floer datum for an element of \({}_{l}\mathcal{R}^{1}_{k+1}\) consists of the following:_
* _A positive cylindrical end which is compatible with the asymptotic marker, namely an embedding_ \[\varepsilon^{+}:[0,\infty)\times S^{1}\to S,\ (s,t)\mapsto(s+(p_{l})_{s}+\eta,t)\] (4.38) _for some_ \(\eta>0\)_._
* _A sub-closed 1-form_ \(\nu_{S}\in\Omega^{1}(S)\) _such that_ \(\nu_{S}\equiv 0\) _near_ \(\partial S\) _and_ \((\varepsilon^{+})^{*}\nu_{S}=dt\)_._
* _A domain-dependent Hamiltonian_ \(H_{S}:S\times M\to\mathbb{R}\) _satisfying_ \[(\varepsilon^{+})^{*}H_{S}=H_{t}\] (4.39) _for some_ \(H_{t}\in\mathcal{H}(M)\)_, and_ \[H_{S}\equiv 0\text{ near }\partial S.\] (4.40)
* _A domain-dependent almost complex structure_ \(J_{S}:S\times\text{TM }\to\text{TM }\) _such that_ \[(\varepsilon^{+})^{*}J_{S}=J_{t}\] (4.41) _for some_ \(J_{t}\in\mathcal{J}(M)\)_, and_ \[J_{S}\equiv J_{M}\text{ near }\partial S,\] (4.42) _where_ \(J_{M}\) _is the almost complex structure fixed in Section_ 3.4_._
In a similar fashion, we can define Floer data on the auxiliary moduli spaces \({}_{l}^{j,j+1}\mathcal{R}^{1}_{k+1}\) and \({}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\). We will inductively choose the Floer data on the compactified moduli spaces \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}\) so that they satisfy certain consistency conditions. In order to do so, we first choose a Floer datum on the auxiliary moduli space \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\) so that
* The Floer datum on \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\) is \(\mathbb{Z}_{k+1}\)-equivariant under the cyclic permutation \(\kappa\).
* On the boundary stratum \({}_{l-1}^{j,j+1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\subset\partial_{l-1} \overline{\mathcal{R}}^{S^{1}}_{k+1}\), the Floer datum is conformally equivalent to the one pulled back from \({}_{l-2}\overline{\mathcal{R}}^{S^{1}}_{k+1}\) via the forgetful map \(\pi^{S^{1}}_{j}\).
**Definition 41**.: _A Cohen-Ganatra Floer datum is an inductive sequence of choices, for every \(k\in\mathbb{Z}_{\geqslant 0}\) and \(l\in\mathbb{N}\), of Floer data for every representative of \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}\) in the sense of Definition 40, which vary smoothly over the moduli spaces, and are required to satisfy the following:_
1. _The choice of Floer datum on any boundary stratum should agree with the inductively chosen datum along the boundary stratum for which we have already picked the data._
2. _Near the boundary strata in (4.30), the Floer data are conformally equivalent to the ones obtained by pulling back from \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1}\) via the forgetful maps \(\pi_{j}\)._
3. _On the codimension 1 loci \({}_{l-1}\overline{\mathcal{R}}^{S^{1}_{i}}_{k+1}\subset{}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}\), where \(p_{1}\) points at \(z_{i}\), the Floer datum should agree with the pullback by \(\tau_{i}\) of the existing Floer datum on \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1}\)._
**Proposition 42**.: _A Cohen-Ganatra Floer datum exists._
Proof.: This follows from the fact that the space of choices of Floer data at each stage is contractible, and that for a suitable inductive order, the conditions imposed on the Floer data at the various strata do not contradict each other. For a similar situation, see [27], Proposition 10.
Fix a Cohen-Ganatra Floer datum, and a 1-periodic orbit \(x\) of \(X_{H_{t}}\), which is a generator of the symplectic cochain complex \(SC^{*}(M)\). Let \(\pi_{2}(M,x,L)\) be the set of homotopy classes of maps \(u:S\to M\) with boundary on \(L\) and asymptotic to \(x\) at the puncture \(\zeta\). There is a natural map \(\partial:\pi_{2}(M,x,L)\to H_{1}(L;\mathbb{Z})\) defined by sending \([u]\in\pi_{2}(M,x,L)\) to the homology class \([u(\partial S)]\in H_{1}(L;\mathbb{Z})\). For any \(\hat{\beta}\in\pi_{2}(M,x,L)\), define the _Cohen-Ganatra moduli space_
\[{}_{l}\mathcal{R}^{1}_{k+1}(x,L,\hat{\beta}) \tag{4.43}\]
to be the space of pairs
\[\left((S;z_{0},\cdots,z_{k},p_{1},\cdots,p_{l};\ell),u\right), \tag{4.44}\]
where \((S;z_{0},\cdots,z_{k},p_{1},\cdots,p_{l};\ell)\in{}_{l}\mathcal{R}^{1}_{k+1}\), and the map \(u:S\to M\) satisfies
\[\left\{\begin{array}{l}(du-X_{H_{S}}\otimes\nu_{S})^{0,1}=0,\\ u(\partial S)\subset L,\\ \lim_{s\to\infty}(\varepsilon^{+})^{*}u(s,\cdot)=x,\\ [u]=\hat{\beta},\end{array}\right. \tag{4.45}\]
where the \((0,1)\)-part in the Floer equation is taken with respect to \(J_{S}\).
Among the Cohen-Ganatra moduli spaces \({}_{l}\mathcal{R}^{1}_{k+1}(x,L,\hat{\beta})\), there is one special case that will be particularly important for us, namely the moduli space
\[\mathcal{R}^{1}_{k+1}(e_{M},L,\beta), \tag{4.46}\]
which is defined using domains without the auxiliary interior marked points \(p_{1},\cdots,p_{l}\), and the asymptotic condition at the positive puncture is given by the minimum of a \(C^{2}\)-small Morse function \(f:M^{in}\to\mathbb{R}\) defined in the interior \(M^{in}\) of the Liouville domain \(\overline{M}\). In this case, the homotopy class \(\hat{\beta}\) can be identified with a class \(\beta\in\pi_{2}(M,L)\) under the isomorphism \(\pi_{2}(M,e_{M},L)\cong\pi_{2}(M,L)\), which explains the notation.
Given \(x\in SC^{*}(M)\) and \(\hat{\beta}\in\pi_{2}(M,x,L)\), in a similar fashion as above, we can define the moduli spaces
\[{}_{l-1}\mathcal{R}^{S^{1}}_{k+1}(x,L,\hat{\beta}), \tag{4.47}\]
\[{}_{l-1}\mathcal{R}^{1}_{k+1,\tau_{i}}(x,L,\hat{\beta}),\ 0\leqslant i \leqslant k, \tag{4.48}\]
\[{}_{l}^{j,j+1}\mathcal{R}^{1}_{k+1}(x,L,\hat{\beta}),\ 1\leqslant j\leqslant l-1, \tag{4.49}\]
which parametrize the same maps \(u:S\to M\) as in (4.45), but with the domains in the moduli spaces \({}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\), \({}_{l-1}\mathcal{R}^{1}_{k+1,\tau_{i}}\) and \({}_{l}^{j,j+1}\mathcal{R}^{1}_{k+1}\), respectively.
When defining (4.48) (and its compactification), we need to choose Floer data on the moduli spaces \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}\). In this case, there is an oriented diffeomorphism
\[\tau_{i}:{}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}\to{}_{l-1} \overline{\mathcal{R}}^{1}_{k+1,\tau_{0}} \tag{4.50}\]
induced by cyclic permutations of the boundary labels. The Floer datum for \({}_{l-1}{\mathcal{R}}^{1}_{k+1,\tau_{i}}\) shall be chosen so that it coincides with the pullback of the Floer datum on \({}_{l-1}{\mathcal{R}}^{1}_{k+1,\tau_{0}}\) by (4.50).
The Gromov compactification \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta})\) of the Cohen-Ganatra moduli space is an admissible K-space. Its detailed description will be postponed to the next subsection.
### Kuranishi structures on compactified moduli spaces
The purpose of this subsection is to state Theorem 46, where we describe the compactifications of the moduli spaces \({\mathcal{R}}_{k+1,\vartheta}(L,\beta)\) and \({}_{l}{\mathcal{R}}^{1}_{k+1}(x,L,\mathring{\beta})\) introduced in Sections 4.1 and 4.2 as admissible K-spaces. We start with the following lemma which summarizes the basic properties of these moduli spaces.
**Lemma 43**.: _The following properties of the moduli spaces hold._
1. _When_ \(\beta=0\)_, the moduli space_ \({\mathcal{R}}_{k+1,\vartheta}(L,0)\) _consists of constant maps for every_ \(k\geqslant 3\)_, and_ \({}_{l}{\mathcal{R}}^{1}_{k+1}(e_{M},L,0)\) _consists of constant maps for every_ \(k\geqslant 0\)_._
2. _If_ \(\theta_{M}(\partial\mathring{\beta})+|A_{H_{t}}(x)|<0\)_, then_ \({}_{l}{\mathcal{R}}_{k+1}(x,L,\mathring{\beta})=\emptyset\)_, where_ \(H_{t}\in{\mathcal{H}}(M)\) _is a fixed choice of some admissible Hamiltonian, which serves as part of our Floer datum, and_ \[A_{H_{t}}(x)=-\int_{S^{1}}x^{\bullet}\theta_{M}+\int_{S^{1}}H_{t}(x(t))dt\] (4.51) _is the action of the orbit_ \(x\) _of_ \(X_{H_{t}}\)_._
3. _For every_ \(k\in\mathbb{N}\) _we have_ \[\mathcal{R}_{k+1,\vartheta}(L,\beta)=\emptyset\Leftrightarrow\mathcal{R}_{2,\vartheta}(L,\beta)=\emptyset.\] (4.52) _For every_ \(k,l\in\mathbb{Z}_{\geqslant 0}\)_, we have_ \[{}_{l}\mathcal{R}^{1}_{k+1}(x,L,\mathring{\beta})=\emptyset\Leftrightarrow{}_{l}\mathcal{R}^{1}_{1}(x,L,\mathring{\beta})=\emptyset.\] (4.53) _Moreover, for every_ \(c>0\) _and fixed_ \(l\in\mathbb{Z}_{\geqslant 0}\)_, the sets_ \[\left\{\beta\in\pi_{2}(M,L)\,|\,\mathcal{R}_{2,\vartheta}(L,\beta)\neq\emptyset,\ \theta_{M}(\partial\beta)<c\right\}, \] (4.54) \[\left\{\mathring{\beta}\in\pi_{2}(M,x,L)\,\left|\,{}_{l}\mathcal{R}^{1}_{1}(x,L,\mathring{\beta})\neq\emptyset,\ \theta_{M}(\partial\mathring{\beta})<c\right.\right\}\] (4.55) _are both finite._
Proof.: (i) is obvious for \(\mathcal{R}_{k+1,\vartheta}(L,0)\). For any map \(u:(S,\partial S)\to(M,L)\) parametrized by \(\mathcal{R}^{1}_{k+1}(e_{M},L,\beta)\), the removable singularity theorem for pseudoholomorphic maps implies that there exists a \(J_{M}\)-holomorphic map \(\bar{u}:(D,\partial D)\to(M,L)\), with \([\bar{u}]=\beta\) and \(\bar{u}(0)\) mapped to the minimum of the \(C^{2}\)-small Morse function \(f:M^{in}\to\mathbb{R}\). When \(\beta=0\), the map \(\bar{u}\) has zero energy and is therefore constant, and hence so is \(u\).
(ii) is an energy estimate. For any element (represented by a pair (4.44)) of \({}_{l}\mathcal{R}^{1}_{k+1}(x,L,\mathring{\beta})\), we have
\[0\leqslant E^{geom}(u)=\int_{S}\frac{1}{2}||du-X_{H_{S}}\otimes\nu_{S}||^{2}\leqslant E^{top}(u)=\theta_{M}(\partial\mathring{\beta})+A_{H_{t}}(x), \tag{4.56}\]
where \(E^{geom}(u)\) and \(E^{top}(u)\) are the geometric and the topological energies of \(u\), respectively (cf. [3], Section 7.2). Note that in the above computation, we have used the fact that \(H_{S}\equiv 0\) near \(\partial S\). In particular, if \(\theta_{M}(\partial\mathring{\beta})+|A_{H_{t}}(x)|<0\), then \(\theta_{M}(\partial\mathring{\beta})+A_{H_{t}}(x)<0\) as well, which contradicts (4.56); hence the moduli space must be empty.
(iii) is a consequence of Gromov compactness.
To describe the compactifications of the moduli spaces \({}_{l}\mathcal{R}^{1}_{k+1}(x,L,\mathring{\beta})\), \({}_{l}^{j,j+1}\mathcal{R}^{1}_{k+1}(x,L,\mathring{\beta})\), \({}_{l-1}\mathcal{R}^{S^{1}}_{k+1}(x,L,\mathring{\beta})\) and \({}_{l-1}\mathcal{R}^{1}_{k+1,\tau_{i}}(x,L,\mathring{\beta})\) introduced in Section 4.2, we need a slight variation of the notion of decorated rooted ribbon trees.
**Definition 44**.: _A decorated rooted ribbon tree with a single puncture is a triple \((T,\mathring{B},v_{0})\) such that the tree \(T\) satisfies (i)-(v) in the definition of a decorated rooted ribbon tree, but now there is a distinguished interior vertex \(v_{0}\in C_{0,\text{int}}(T)\) which is called a puncture. The conditions imposed on the map \(B:C_{0,\text{int}}(T)\to\pi_{2}(M,L)\) in Definition 35, (vi) and (vii) are replaced with the following:_
1. _There exists a map_ \(\mathring{B}:C_{0,\text{int}}(T)\to\pi_{2}(M,x,L)\)_, such that its restrictions to_ \(C_{0,\text{int}}(T)\backslash\{v_{0}\}\) _and_ \(\{v_{0}\}\) _give rise to maps_ \(C_{0,\text{int}}(T)\backslash\{v_{0}\}\to\pi_{2}(M,L)\) _and_ \(\{v_{0}\}\to\pi_{2}(M,x,L)\)_, respectively, where_ \(x\) _is a 1-periodic orbit of_ \(X_{H_{t}}\)_. For every_ \(v\in C_{0,\text{int}}(T)\)_, either_ \(d\theta_{M}\left(\mathring{B}(v)\right)>0\) _or_ \(\mathring{B}(v)=0\)_. Note that for_ \(v=v_{0}\)_, this means that_ \(x\) _is a constant orbit, so_ \(\hat{B}(v_{0})\) _can be regarded as a class in_ \(\pi_{2}(M,L)\)_._
2. _Every_ \(v\in C_{0,\text{int}}(T)\backslash\{v_{0}\}\) _with_ \(\hat{B}(v)=0\) _has valency at least_ \(3\)_. When_ \(x\) _is a constant orbit, we also require that_ \(v_{0}\) _has valency at least_ \(3\) _if_ \(\hat{B}(v_{0})=0\) _in_ \(\pi_{2}(M,L)\)_._
Note that when \(x\) is a constant orbit, this reduces to the notion of a decorated rooted ribbon tree. For \(k\in\mathbb{Z}_{\geqslant 0}\) and \(\mathring{\beta}\in\pi_{2}(M,x,L)\), denote by \(\mathcal{G}(k+1,\mathring{\beta})\) the set of decorated rooted ribbon trees with a single puncture \((T,\mathring{B},v_{0})\), such that \(\#C_{0,\text{ext}}(T)=k+1\) and \(\sum_{v\in C_{0,\text{int}}(T)}\hat{B}(v)=\mathring{\beta}\).
We also have the following extension of the notion of a reduction of decorated rooted ribbon trees (cf. [33], Definition 7.19).
**Definition 45**.: _Let \((T,\hat{B},v_{0})\in\mathcal{G}(k+1,\mathring{\beta})\) and \(e\in C_{1,\text{int}}(T)\), with \(v_{0},v_{1}\) being vertices of \(e\). By collapsing \(e\) to a new vertex \(v_{01}\), we get another \((T^{\prime},\mathring{B}^{\prime},v_{0})\in\mathcal{G}(k+1,\mathring{\beta})\) such that_
\[C_{0}(T^{\prime})=(C_{0}(T)\backslash\{v_{0},v_{1}\})\cup\{v_{01}\}, \tag{4.57}\]
\[C_{1}(T^{\prime})=C_{1}(T)\backslash\{e\}, \tag{4.58}\]
\[\mathring{B}^{\prime}(v)=\left\{\begin{array}{ll}\hat{B}(v)&v\neq v_{01},\\ \hat{B}(v_{0})+\hat{B}(v_{1})&v=v_{01}.\end{array}\right. \tag{4.59}\]
_An element of \(\mathcal{G}(k+1,\mathring{\beta})\) which can be obtained from \((T,\hat{B},v_{0})\in\mathcal{G}(k+1,\mathring{\beta})\) by repeating the above procedure is called a reduction of \((T,\mathring{B},v_{0})\)._
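The reduction of Definition 45 is a purely combinatorial operation on the data \((T,\mathring{B},v_{0})\). The sketch below (hypothetical names, with classes modelled by integers so that their sum makes sense) implements the collapse of a single interior edge following (4.57)--(4.59).

```python
def collapse_edge(vertices, int_edges, B, e):
    """Collapse the interior edge e = frozenset({v0, v1}); cf. (4.57)-(4.59)."""
    v0, v1 = sorted(e)
    v01 = ("merged", v0, v1)                          # the new vertex v_{01}
    new_vertices = (vertices - {v0, v1}) | {v01}      # (4.57)
    new_edges = set()
    for f in int_edges - {e}:                         # (4.58), redirecting endpoints
        new_edges.add(frozenset(v01 if w in (v0, v1) else w for w in f))
    new_B = {v: b for v, b in B.items() if v not in (v0, v1)}
    new_B[v01] = B.get(v0, 0) + B.get(v1, 0)          # (4.59)
    return new_vertices, new_edges, new_B

# example: collapse the edge between two interior vertices "a" and "b"
V = {"a", "b", "c"}
E = {frozenset({"a", "b"}), frozenset({"b", "c"})}
B = {"a": 2, "b": 3, "c": 0}
V2, E2, B2 = collapse_edge(V, E, B, frozenset({"a", "b"}))
assert B2[("merged", "a", "b")] == 5 and len(E2) == 1
```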
Let \(X\) be an admissible K-space. For \(r\in\mathbb{N}\), denote by \(\widehat{S}_{r}X\) the normalized codimension \(r\) corner of \(X\). When \(r=1\), we shall also write \(\partial X\) for the normalized boundary.
From now on, let \(\varepsilon>0\) be chosen as in the beginning of Section 3.4, so that every non-constant \(J_{M}\)-holomorphic disc bounded by \(L\) has area at least \(2\varepsilon\). Take \(U\in\mathbb{N}\) so that \(\varepsilon(U-1)\geqslant|A_{H_{t}}(x)|\).
**Theorem 46**.: _For every \(k,m,l\in\mathbb{Z}_{\geqslant 0}\), and \(P=\{m\}\) or \([m,m+1]\), there exist the following data._
1. _(Moduli spaces) Compact, oriented, admissible K-spaces (when_ \(l=0\)_, the moduli spaces (_4.63_), (_4.64_) and (_4.65_) are empty)_ \[\overline{\mathcal{R}}_{k+1}(L,\beta;P),\text{ where }\beta\in\pi_{2}(M,L)\text{ and }\theta_{M}(\partial\beta)<(m-k+1)\varepsilon,\] (4.60) \[\overline{\mathcal{R}}_{k+2,\vartheta}(L,\beta;P),\text{ where }\beta\in\pi_{2}(M,L) \text{ and }\theta_{M}(\partial\beta)<(m-k+1)\varepsilon,\] (4.61)
\[{}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P),\text{ where }\hat{\beta}\in\pi_{2}(M,x,L)\text{ and }\theta_{M}(\partial\hat{\beta})<(m-k-U)\varepsilon, \tag{4.62}\]
\[{}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P), \tag{4.63}\]
\[{}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}(x,L,\hat{\beta};P), \tag{4.64}\]
\[{}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(x,L,\hat{\beta};P),\text{ with }\hat{\beta}\in\pi_{2}(M,x,L)\text{ as in (4.62)}. \tag{4.65}\]
2. _(Evaluation maps) Strongly smooth evaluation maps \(e^{\bullet,P}\) from the moduli spaces in (i) to \(P\times L^{k+1}\), extending the evaluation maps of Sections 4.1 and 4.2; among them are the maps (4.69), (4.70) and (4.71) referred to in (iii) below._
3. _(Compatibility at boundaries) Orientation-preserving isomorphisms of admissible K-spaces:_
\[\partial\overline{\mathcal{R}}_{k+2,\vartheta}(L,\beta;\{m\})\cong\bigsqcup_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ 1\leqslant i\leqslant k_{1}+1\\ \beta_{1}+\beta_{2}=\beta\end{subarray}}(-1)^{\varepsilon_{1}}\overline{\mathcal{R}}_{k_{1}+2,\vartheta}(L,\beta_{1};\{m\})\ {}_{i}\times_{0}\ \overline{\mathcal{R}}_{k_{2}+1}(L,\beta_{2};\{m\}), \tag{4.82}\]
\[\begin{split}\partial\left({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};\{m\})\right)&\cong\bigsqcup_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ 1\leqslant i\leqslant k_{1}\\ \hat{\beta}_{1}+\beta_{2}=\hat{\beta}\end{subarray}}(-1)^{\varepsilon_{2}}\,{}_{l}\overline{\mathcal{R}}^{1}_{k_{1}+1}(x,L,\hat{\beta}_{1};\{m\})\ {}_{i}\times_{0}\ \overline{\mathcal{R}}_{k_{2}+1}(L,\beta_{2};\{m\})\\ &\sqcup\bigsqcup_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ 1\leqslant i\leqslant k_{1}\\ \beta_{1}+\hat{\beta}_{2}=\hat{\beta}\end{subarray}}(-1)^{\varepsilon_{3}}\overline{\mathcal{R}}_{k_{1}+1}(L,\beta_{1};\{m\})\ {}_{i}\times_{0}\ {}_{l}\overline{\mathcal{R}}^{1}_{k_{2}+1}(x,L,\hat{\beta}_{2};\{m\})\\ &\sqcup\bigsqcup_{0\leqslant j\leqslant l}(-1)^{\varepsilon_{4,j}}\,{}_{j}\overline{\mathcal{M}}(x,y_{j};\{m\})\times{}_{l-j}\overline{\mathcal{R}}^{1}_{k+1}(y_{j},L,\hat{\beta};\{m\})\\ &\sqcup\bigsqcup_{1\leqslant j\leqslant l-1}(-1)^{\varepsilon_{5}}\,{}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};\{m\})\sqcup(-1)^{\varepsilon_{6}}\,{}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}(x,L,\hat{\beta};\{m\}),\end{split} \tag{4.83}\]
\[\begin{split}\partial\overline{\mathcal{R}}_{k+2,\vartheta}(L,\beta;[m,m+1])&\cong(-1)^{\varepsilon_{7}}\overline{\mathcal{R}}_{k+2,\vartheta}(L,\beta;\{m\})\sqcup(-1)^{\varepsilon_{8}}\overline{\mathcal{R}}_{k+2,\vartheta}(L,\beta;\{m+1\})\\ &\sqcup\bigsqcup_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ 1\leqslant i\leqslant k_{1}+1\\ \beta_{1}+\beta_{2}=\beta\end{subarray}}(-1)^{\varepsilon_{9}}\overline{\mathcal{R}}_{k_{1}+2,\vartheta}(L,\beta_{1};[m,m+1])\ {}_{i}\times_{0}\ \overline{\mathcal{R}}_{k_{2}+1}(L,\beta_{2};[m,m+1]),\end{split} \tag{4.84}\]
\[\begin{split}\partial\left({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};[m,m+1])\right)&\cong(-1)^{\varepsilon_{10}}\,{}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};\{m\})\sqcup(-1)^{\varepsilon_{11}}\,{}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};\{m+1\})\\ &\sqcup\bigsqcup_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ 1\leqslant i\leqslant k_{1}\\ \hat{\beta}_{1}+\beta_{2}=\hat{\beta}\end{subarray}}(-1)^{\varepsilon_{12}}\,{}_{l}\overline{\mathcal{R}}^{1}_{k_{1}+1}(x,L,\hat{\beta}_{1};[m,m+1])\ {}_{i}\times_{0}\ \overline{\mathcal{R}}_{k_{2}+1}(L,\beta_{2};[m,m+1])\\ &\sqcup\bigsqcup_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ 1\leqslant i\leqslant k_{1}\\ \beta_{1}+\hat{\beta}_{2}=\hat{\beta}\end{subarray}}(-1)^{\varepsilon_{13}}\overline{\mathcal{R}}_{k_{1}+1}(L,\beta_{1};[m,m+1])\ {}_{i}\times_{0}\ {}_{l}\overline{\mathcal{R}}^{1}_{k_{2}+1}(x,L,\hat{\beta}_{2};[m,m+1])\\ &\sqcup\bigsqcup_{0\leqslant j\leqslant l}(-1)^{\varepsilon_{14,j}}\,{}_{j}\overline{\mathcal{M}}(x,y_{j};[m,m+1])\times{}_{l-j}\overline{\mathcal{R}}^{1}_{k+1}(y_{j},L,\hat{\beta};[m,m+1])\\ &\sqcup\bigsqcup_{1\leqslant j\leqslant l-1}(-1)^{\varepsilon_{15}}\,{}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};[m,m+1])\sqcup(-1)^{\varepsilon_{16}}\,{}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}(x,L,\hat{\beta};[m,m+1]),\end{split} \tag{4.85}\]
_where the notation \({}_{i}\times_{0}\) in the above is an abbreviation for the fiber product \({}_{ev_{i}}\times_{ev_{0}}\), with \(ev_{i}\) given by the composition \((id_{P}\times pr_{i})\circ e^{\bullet,P}\), where \(e^{\bullet,P}\) stands for one of the evaluation maps (4.69), (4.70) and (4.71) in (ii), and \(pr_{i}:L^{k+1}\to L\) is the projection to the \((i+1)\)th factor. The signs \(\varepsilon_{1},\cdots,\varepsilon_{16}\) above are given as follows:_
\[\varepsilon_{1}=(k_{1}-i)(k_{2}-1)+n+k_{1}-1,\] \[\varepsilon_{2}=(k_{1}-i)(k_{2}-1)+n+k,\ \varepsilon_{3}=(k_{1}-i)(k_{2}-1)+n+1,\] \[\varepsilon_{4,j}=n+|y_{j}|,\ 0\leqslant j\leqslant l,\ \varepsilon_{5}=0,\ \varepsilon_{6}=0,\] \[\varepsilon_{7}=1,\ \varepsilon_{8}=0,\ \varepsilon_{9}=(k_{1}-i)(k_{2}-1)+n+k_{1}, \tag{4.86}\] \[\varepsilon_{10}=1,\ \varepsilon_{11}=0,\] \[\varepsilon_{12}=(k_{1}-i)(k_{2}-1)+n+k+1,\ \varepsilon_{13}=(k_{1}-i)(k_{2}-1)+n,\] \[\varepsilon_{14,j}=n+|y_{j}|+1,\ 0\leqslant j\leqslant l,\ \varepsilon_{15}=1,\ \varepsilon_{16}=1,\]
_The compatibility of the Kuranishi structures at the boundaries of_ \({}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\)_,_ \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}(x,L,\hat{\beta};P)\) _and_ \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(x,L,\hat{\beta};P)\) _is similar to that of_ \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\) _described above; we do not write down the details here since they will not be needed for our main argument in Section_ 4.5_._
* _(Compatibility at corners, I) Isomorphisms of admissible K-spaces (for simplicity, we do not consider orientations of the moduli spaces for these isomorphisms)_ \[\widehat{S}_{r}\overline{\overline{\mathcal{R}}}_{k+2,\theta}(L, \beta;P)\cong\underset{\begin{subarray}{c}(T,B(v);\widehat{S}_{d}P)\\ \#C_{1,\text{int}}(T)+d\nu\\ v_{0}\in C_{0,\text{int}}(T)\end{subarray}}{\bigcup}\left(\prod_{e\in C_{1, \text{int}}(T)}\widehat{S}_{d}P\times L\right)\,_{\Delta}\times ev_{int}\] \[\left(\prod_{v\in C_{0,\text{int}}(T)\backslash\{v_{0}\}} \overline{\overline{\mathcal{R}}}_{k_{v}+1}\left(L,B(v);\widehat{S}_{d}P \right)\times\overline{\overline{\mathcal{R}}}_{k_{v_{0}}+1,\vartheta}\left(L, B(v_{0});\widehat{S}_{d}P\right)\right),\] _where the fiber product on the right-hand side is taken over_ \(\prod_{eC_{1,\text{int}}(T)}\left(\widehat{S}_{d}P\times L\right)^{2}\)_._ \[\widehat{S}_{r}\left(i\overline{\mathcal{R}}^{1}_{k+1}(x,L, \hat{\beta};P)\right)\cong\underset{\begin{subarray}{c}(T,\hat{B},v_{0})\in \mathfrak{G}(k+1,\hat{\beta})\\ \#C_{1,\text{int}}(T)+d\tau_{1}+\tau_{2}=r\\ v_{0}\in C_{0,\text{int}}(T)\end{subarray}}{\bigcup}\underset{\begin{subarray} {c}j_{1}+\cdots+j_{2}=l_{2}\\ l_{1}+l_{2}=l\end{subarray}}{\bigcup}\left(\prod_{e\in C_{1,\text{int}}(T)} \widehat{S}_{d}P\times L\right)\] \[\,_{\Delta}\times ev_{int}\,\,\left(\prod_{v\in C_{0,\text{int}}( T)\backslash\{v_{0}\}}\overline{\mathcal{R}}_{k_{v}+1}\left(L,\hat{B}(v); \widehat{S}_{d}P\right)\right.\] \[\,\times\left(\prod_{i=1}^{r_{2}}j_{i}\overline{\mathcal{M}}(y_{j_{ i}},x;\widehat{S}_{d}P)\times\widehat{S}_{r_{1}}(l_{1},\overline{\mathcal{R}}^{1}_ {k_{v_{0}}+1})\left(y_{j_{i}},L,\hat{B}(v_{0});\widehat{S}_{d}P\right)\right) \right),\] (4.88) _where_ \(\widehat{S}_{r_{1}}(l_{i}\overline{\mathcal{R}}^{1}_{k_{v_{0}}+1})\) _denotes the codimension_ \(r_{1}<r\) _corner of the moduli space of domains_ \(l_{i}\overline{\mathcal{R}}^{1}_{k_{v_{0}}+1}\)_, and_ \(y_{j_{i}}\) _are 1-periodic orbits of_ \(X_{H_{i}}\)_. The identifications of the codimension_ \(r\) _corners of the moduli spaces_ \(\int_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\)_,_ \(\int_{l-1}\overline{\mathcal{R}}^{\otimes^{1}}_{k+1}(x,L,\hat{\beta};P)\) _and_ \(\int_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(x,L,\hat{\beta};P)\) _are similar to that of_ \(i\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\) _above, and are therefore omitted._
* _(Compatibility at corners, II) Let_ \(X\) _be either_ \(\overline{\mathcal{R}}_{k+1}(L,\beta;P)\)_,_ \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\)_,_ \(\overline{\mathcal{R}}_{k+2,\vartheta}(L,\beta;P)\)_,_ \({}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\)_,_ \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}(x,L,\hat{\beta};P)\) _or_ \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(x,L,\hat{\beta};P)\)_. Then for every_ \(l,l^{\prime}\in\mathbb{N}\)_, the canonical covering map_ \(\pi_{l,l^{\prime}}:\widehat{S}_{l^{\prime}}(\widehat{S}_{l}X)\to\widehat{S}_{l+l^{\prime}}(X)\) _coincides with the map defined from the fiber product presentation in (v)._
* _(Cyclic invariance) Let_ \(X\) _be one of the moduli spaces_ \(\overline{\mathcal{R}}_{k+1}(L,\beta;P)\)_,_ \(\overline{\mathcal{R}}_{k+2,\vartheta}(L,\beta;P)\)_,_ \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\)_,_ \({}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\) _and_ \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}(x,L,\hat{\beta};P)\)_. Then the Kuranishi structure on_ \(X\) _is invariant under the_ \(\mathbb{Z}_{k+1}\)_-action induced by the cyclic permutations of boundary marked points_ \(z_{0},\cdots,z_{k}\)_._
Proof.: We shall explain here some important points in the statement of the theorem which are not self-evident.
The constructions of the Kuranishi structures on the moduli spaces \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\), \({}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\), \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}(x,L,\hat{\beta};P)\), \(\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\) and \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(x,L,\hat{\beta};P)\) are combinations of the constructions of the Kuranishi structures for the moduli spaces of pseudoholomorphic discs (tree-like K-systems, cf. [24, 25]) and those for the moduli spaces of solutions to Floer's equation associated to a time-dependent Hamiltonian function (linear K-systems, cf. [23, 26]). In fact, since in our case the transversality of the moduli spaces \({}_{j}\overline{\mathcal{M}}(x,y)\) of Floer cylinders can be achieved by carefully choosing Floer data, the constructions of Kuranishi structures on these moduli spaces essentially reduce to those for \(J_{M}\)-holomorphic discs. However, as we will discuss shortly, in order to achieve the invariance property (vii), we need to be a little bit careful when constructing these Kuranishi structures. The dimension computations are straightforward. The explicit description of a Kuranishi chart of \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\) can be found in Lemma 48.
We explain how to define the evaluation map
\[{}_{l}ev^{\mathcal{R}}=({}_{l}ev^{\mathcal{R}}_{0},\cdots,{}_{l}ev^{\mathcal{ R}}_{k}):{}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta})\to L^{k+1}. \tag{4.89}\]
For each \((T,\hat{B},v_{0})\in\mathcal{G}(k+1,\hat{\beta})\), each \(0\leqslant l_{1}\leqslant l\), and each decomposition \(l-l_{1}=\sum_{i=1}^{r_{2}}j_{i}\), with \(j_{i}\in\mathbb{Z}_{\geqslant 0}\) for each \(1\leqslant i\leqslant r_{2}\), we can define the exterior evaluation map
\[ev_{ext}:\prod_{v\in C_{0,\text{int}}(T)\backslash\{v_{0}\}}\mathcal{R}_{k_{v }+1}\left(L,\hat{B}(v)\right)\times\prod_{i=1}^{r_{2}}j_{i}\mathcal{M}(y_{j_{ i}},x)\times{}_{l_{1}}\mathcal{R}^{1}_{k_{v_{0}}+1}\left(y_{j_{i}},L,\hat{B}(v_{0}) \right)\to L^{k+1} \tag{4.90}\]
in the same manner as the maps (4.4) and (4.12) in Section 4.1. The map \({}_{l}ev^{\mathcal{R}}\) is given by restricting \(ev_{ext}\) to the compactified moduli space
\[{}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta})\cong \underset{\begin{subarray}{c}(T,\hat{B},v_{0})\in\mathcal{G}(k+1,\hat{\beta} )\\ v_{0}\in C_{0,\text{int}}(T)\end{subarray}}{\bigsqcup}\underset{\begin{subarray} {c}j_{1}+\cdots+j_{r_{2}}=l_{2}\\ l_{1}+l_{2}=l\end{subarray}}{\bigsqcup}\left(\prod_{e\in C_{1,\text{int}}(T)}L \right)\,_{\Delta}\times ev_{int} \tag{4.91}\] \[\left(\prod_{v\in C_{0,\text{int}}(T)\backslash\{v_{0}\}} \mathcal{R}_{k_{v}+1}\left(L,\hat{B}(v)\right)\times\prod_{i=1}^{r_{2}}j_{i} \mathcal{M}(y_{j_{i}},x)\times{}_{l_{1}}\mathcal{R}^{1}_{k_{v_{0}}+1}\left(y_ {j_{i}},L,\hat{B}(v_{0})\right)\right).\]
This definition directly extends to the moduli spaces \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\) with \(P\)-coefficients and gives rise to the evaluation map \({}_{l}ev^{\mathcal{R},P}\) in (ii). The constructions of the evaluation maps (4.72), (4.73) and (4.74) are similar, and we omit the details.
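For orientation, consider the deepest open stratum of (4.91), where the tree \(T\) consists of the single interior vertex \(v_{0}\), there are no interior edges and no cylinders break off (so \(r_{2}=0\) and \(l_{1}=l\)); there the fiber product reduces to the single factor \({}_{l}\mathcal{R}^{1}_{k+1}(x,L,\hat{\beta})\), and \({}_{l}ev^{\mathcal{R}}\) is simply the evaluation of the map \(u\) at the boundary marked points \(z_{0},\cdots,z_{k}\).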
The isomorphisms (4.82) and (4.84) follow from (4.9). Note that in (4.82) and (4.84), there are no boundary strata of the form
\[\overline{\mathcal{R}}_{k_{1}+1}(L,\beta_{1};P)\,_{i}\times_{0}\overline{ \mathcal{R}}_{k_{2}+2,\vartheta}(L,\beta_{2};P). \tag{4.92}\]
This is because, according to the construction, the constraint imposed on the boundary marked point with "the largest label" only makes sense on the disc component which carries \(z_{0}\).
For the isomorphisms (4.83) and (4.85), note that the codimension 1 boundary strata of the compactified moduli space \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta})\) are covered by the natural inclusions of
\[{}_{j}\overline{\mathcal{M}}(x,y_{j})\times{}_{l-j}\overline{\mathcal{R}}^{1}_ {k+1}(y_{j},L,\hat{\beta}),\text{ where }1\leqslant j\leqslant l, \tag{4.93}\]
\[{}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta}),\text{ where }1 \leqslant j\leqslant l-1, \tag{4.94}\]
\[{}_{l-1}\overline{\mathcal{R}}^{\mathcal{S}^{1}}_{k+1}(x,L,\hat{\beta}), \tag{4.95}\]
which come from the degenerations of domains,
\[\overline{\mathcal{M}}(x,y)\times{}_{l}\overline{\mathcal{R}}^{1}_{k+1}(y,L, \hat{\beta}), \tag{4.96}\]
which comes from semi-stable breaking, and
\[{}_{l}\overline{\mathcal{R}}^{1}_{k_{1}+1}(x,L,\hat{\beta}_{1})\,_{i}\times_{0} \overline{\mathcal{R}}_{k_{2}+1}(L,\beta_{2}),\text{ where }k_{1}+k_{2}=k+1,1\leqslant i \leqslant k_{1}\text{ and }\hat{\beta}_{1}+\beta_{2}=\hat{\beta}, \tag{4.97}\]
\[\overline{\mathcal{R}}_{k_{1}+1}(L,\beta_{1})\ {}_{i}\times_{0}\ {}_{l} \overline{\mathcal{R}}_{k_{2}+1}^{1}(x,L,\hat{\beta}_{2}),\ \text{where}\ k_{1}+k_{2}=k+1,1\leqslant i\leqslant k_{1}\ \text{and}\ \beta_{1}+\hat{\beta}_{2}=\hat{\beta}, \tag{4.98}\]
which are disc bubbles.
In the above, the strata (4.93)-(4.95) correspond respectively to the strata (4.29)-(4.32) in Proposition 38. Due to the possible non-exactness of \(L\) in our case, there can be disc bubbles. By our choice of Floer data as specified in Definition 40, their contributions give the boundary strata (4.97) and (4.98).
The sign computations in (iv) will be carried out in Appendix B. The compatibility conditions in (iv) and (v) will be explained at the end of this subsection.
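As a quick illustration of the sign conventions in (4.86), obtained by direct substitution: in the lowest case \(k=1\), so that \(k_{1}=k_{2}=1\) and \(i=1\), one finds
\[\varepsilon_{1}=n,\qquad\varepsilon_{2}=\varepsilon_{3}=n+1,\]
so the corresponding disc-bubble strata in (4.82) and (4.83) contribute with the signs \((-1)^{n}\) and \((-1)^{n+1}\), respectively.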
For the moduli spaces \(\overline{\mathcal{R}}_{k+1}(L,\beta;P)\) and \(\overline{\mathcal{R}}_{k+2,\vartheta}(L,\beta;P)\), the invariance property (vii) of the Kuranishi structures follows from the construction of Fukaya (cf. [19], Corollary 3.1). The arguments for the moduli spaces \({}_{l}\overline{\mathcal{R}}_{k+1}^{1}(x,L,\hat{\beta};P)\), \({}_{l}^{j,j+1}\overline{\mathcal{R}}_{k+1}^{1}(x,L,\hat{\beta};P)\) and \({}_{l}\overline{\mathcal{R}}_{k+1}^{S^{1}}(x,L,\hat{\beta};P)\) are similar. For clarity, we briefly explain here Fukaya's construction. The key point is to observe that [19], Lemma 3.1 holds not only when the domain is a disc, but also when it is a punctured disc. For simplicity, we consider here only the case of \({}_{l}\overline{\mathcal{R}}_{k+1}^{1}(x,L,\hat{\beta};P)\). Let \({}_{l}\overline{\mathcal{R}}^{1}(x,L,\hat{\beta};P)\) be the Cohen-Ganatra moduli space constructed as in Section 4.2, but using the domains \((S;p_{1},\cdots,p_{l};\ell)\in{}_{l}\mathcal{R}^{1}\) without boundary marked points. This enables us to construct Kuranishi structures on \({}_{l}\overline{\mathcal{R}}_{k+1}^{1}(x,L,\hat{\beta};P)\) for various \(k\in\mathbb{Z}_{\geqslant 0}\) by pulling back the Kuranishi structure on the moduli space \({}_{l}\overline{\mathcal{R}}^{1}(x,L,\hat{\beta};P)\), via the map
\[f_{k+1}:_{l}\overline{\mathcal{R}}_{k+1}^{1}(x,L,\hat{\beta};P)\to{}_{l} \overline{\mathcal{R}}^{1}(x,L,\hat{\beta};P) \tag{4.99}\]
which forgets all the boundary marked points \(z_{0},\cdots,z_{k}\). Here, what is slightly different from the cases of the moduli spaces \(\overline{\mathcal{R}}_{k+1}(L,\beta;P)\) and \(\overline{\mathcal{R}}_{k+2,\vartheta}(L,\beta;P)\) is that additional care must be taken when \(\partial\hat{\beta}=0\), in which case there is an extra boundary stratum in \(\partial\,{}_{l}\overline{\mathcal{R}}^{1}(x,L,\hat{\beta};P)\) corresponding to a configuration of Floer cylinders (with additional marked points) in \({}_{l}\overline{\mathcal{M}}(x,pt)\), where \(pt\in L\) is regarded as a constant orbit of \(X_{H_{t}}\), attached to a constant disc at \(pt\).
\[0<|p_{1}|<\cdots<|p_{l}|, \tag{4.100}\]
and \(u:\mathbb{C}\to M\) is a solution to the Floer equation with the asymptotic condition at the positive cylindrical end given by \(x\). Evaluation at the origin \(0\in\mathbb{C}\) defines a map \(ev_{0}^{\mathcal{M}}:{}_{l}\overline{\mathcal{M}}(x)\to M\), with which the extra boundary stratum can be written as the fiber product
\[{}_{l}\overline{\mathcal{M}}(x)\ {}_{ev_{0}^{\mathcal{M}}}\times_{M}L. \tag{4.101}\]
In this case, we need a Kuranishi structure on \({}_{l}\overline{\mathcal{M}}(x)\) which is compatible with the gluings of Floer trajectories, such that \(ev_{0}^{\mathcal{M}}\) is strongly continuous and weakly submersive. Since transversality of (4.101) can be achieved by appropriately choosing Floer data, this is obviously true with trivial obstruction bundle. Taking a manifold (with corners) chart \(U_{p}\) of \({}_{l}\overline{\mathcal{M}}(x)\), where \(p\in{}_{l}\overline{\mathcal{M}}(x)\), the restriction of \(ev_{0}^{\mathcal{M}}\) defines a map \(U_{p}\to M\). Define \(V_{p}:=(ev_{0}^{\mathcal{M}}|_{U_{p}})^{-1}(L)\); then its product with the compactification of the configuration space
\[\text{\it Conf}_{k+1}:=\left\{(z_{0},\cdots,z_{k})\in(\partial S)^{k+1}|\text{ the points}\ z_{i}\text{'s respect cyclic order}\right\}/S^{1} \tag{4.102}\]
of \(k+1\) cyclically ordered points on the circle can be regarded as a stratum of a Kuranishi neighborhood of the pullback of \((p,q)\in{}_{l}\overline{\mathcal{M}}(x)\ {}_{ev_{0}^{\mathcal{M}}}\times_{M}L\subset\partial\,{}_{l}\overline{\mathcal{R}}^{1}(x,L,\hat{\beta};P)\) in \({}_{l}\overline{\mathcal{R}}_{k+1}^{1}(x,L,\hat{\beta};P)\). We can then extend it to a Kuranishi neighborhood of \(f_{k+1}^{-1}(p,q)\) in \({}_{l}\overline{\mathcal{R}}_{k+1}^{1}(x,L,\hat{\beta};P)\).
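Note that \(\mathit{Conf}_{k+1}\) is a \(k\)-dimensional space: the \(k+1\) cyclically ordered points on the circle contribute \(k+1\) parameters, and dividing by the rotation action of \(S^{1}\) in (4.102) removes one of them.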
**Remark 47**.: _Theorem 46 above is actually parallel to [33], Theorem 7.20. A remarkable difference is that the invariance property (vii) of the Kuranishi structures for certain
moduli spaces under cyclic permutations of boundary marked points is not required there. However, this is crucial for our purposes since we are working in the \(S^{1}\)-equivariant case, where both the \(S^{1}\)-equivariant differential and the chain level string bracket involve cyclic permutations of the marked points on the Moore loops. Note also that the moduli space \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(x,L,\hat{\beta};P)\) does not admit a Kuranishi structure which is invariant under the cyclic permutations of the boundary marked points. Instead, by our choices of Floer data, the group \(\mathbb{Z}_{k+1}\) of cyclic permutations acts transitively on the set of admissible K-spaces \(\left\{{}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(x,L,\hat{\beta};P)\right\}^{k}_{i=0}\)._
From now on, we shall fix Kuranishi structures on the moduli spaces (4.60)--(4.65) satisfying (ii)-(vii) of Theorem 46.
We describe explicitly the Kuranishi charts of \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\). Let
\[{}_{l}\overline{\mathcal{X}}^{1}_{k+1}(x,L,\hat{\beta};P) \tag{4.103}\]
be the space of pairs (4.44), where \((S;z_{0},\cdots,z_{k},p_{1},\cdots,p_{l};\ell)\in{}_{l}\overline{\mathcal{R}}^{1}_{k+1}\), but now \(u:S\to M\) is a \(C^{\infty}\)-map satisfying \(u(\partial S)\subset L\), \([u]=\hat{\beta}\in\pi_{2}(M,x,L)\), \(\bar{\partial}u=0\) (with respect to \(J_{M}\)) in a neighborhood of \(\partial S\), and
\[\partial_{s}u+J_{t}(\partial_{t}u-X_{H_{t}})=0 \tag{4.104}\]
on the (positive) cylindrical end, if no cylinders break off at \(\zeta\). When cylinders bubble off at \(\zeta\), \(u\) is required to satisfy (4.104) on the cylindrical end of the main component, and to solve the corresponding Floer equations of the form (4.21) on the cylinder components (which may carry some of the auxiliary marked points \(p_{1},\cdots,p_{l}\)).
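(Recall that (4.104) is the Floer equation for the Hamiltonian \(H_{t}\); in the special case \(H_{t}\equiv 0\) and \(J_{t}=J_{M}\) it reduces to the \(J_{M}\)-holomorphic curve equation \(\partial_{s}u+J_{M}\,\partial_{t}u=0\).)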
\[\overline{\mathcal{X}}_{k+1}(L,\beta;P) \tag{4.105}\]
introduced in [33], which parametrizes the pairs (4.2), but now \(u:(D,\partial D)\to(M,L)\) is a \(C^{\infty}\)-map satisfying \(\bar{\partial}u=0\) on a small neighborhood of \(\partial D\) and \([u]=\beta\in\pi_{2}(M,L)\). Since we can take the obstruction bundles \(\mathcal{E}\) in both cases so that every section is supported away from \(\partial S\) (resp. \(\partial D\)) and the cylindrical end near \(\zeta\) (in fact, the punctured disc of radius \(\frac{1}{2}\)), the following lemma is straightforward.
**Lemma 48**.: _Let \(p\in{}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta};P)\) and \(\mathcal{U}_{p}=(U_{p},\mathcal{E}_{p},s_{p},\psi_{p})\) be a Kuranishi chart at \(p\). Let \((T,\hat{B},v_{0})\in\mathcal{G}(k+1,\hat{\beta})\), \(0\leqslant l_{1}\leqslant l\), \(l-l_{1}=\sum_{i=1}^{r_{2}}j_{i}\), where \(j_{i}\in\mathbb{Z}_{\geq 0}\) for each \(1\leqslant i\leqslant r_{2}\), such that_
\[\begin{split} p\in P&\times\left(\prod_{e\in C_{1, \text{\rm alt}}(T)}L\right)\,\triangle\,\times\,ev_{int}\,\,\left(\prod_{v\in C _{0,\text{\rm alt}}(T)\setminus\{v_{0}\}}\overline{\mathcal{X}}_{k_{v}+1} \left(L,\hat{B}(v)\right)\right.\\ &\left.\times\prod_{i=1}^{r_{2}}j_{i}\mathcal{M}(y_{j_{i}},x) \times_{l_{1}}\mathcal{R}^{1}_{k_{v_{0}}+1}\left(y_{j_{i}},L,\hat{B}(v_{0}) \right)\right)\end{split} \tag{4.106}\]
_Then set theoretically \(U_{p}\) can be embedded into_
\[\begin{split}\bigsqcup_{(T^{\prime},B^{\prime},v_{0})}P& \times\left(\prod_{e\in C_{1,\text{\rm alt}}(T^{\prime})}L\right)\, \triangle\,\times_{ev_{int}}\,\,\left(\prod_{v\in C_{0,\text{\rm alt}}(T^{ \prime})\setminus\{v_{0}\}}\overline{\mathcal{X}}_{k_{v}+1}\left(L,B^{\prime}( v)\right)\right.\\ &\left.\times i\overline{\mathcal{X}}^{1}_{k_{v_{0}}+1}\left(x,L, \hat{B}^{\prime}(v_{0})\right)\right),\end{split} \tag{4.107}\]
_where \((T^{\prime},\hat{B}^{\prime},v_{0})\) runs over all reductions of \((T,\hat{B},v_{0})\)._
Let \(\mathcal{C}\) be the collection of all the moduli spaces appearing in Theorem 46, except for those of the form \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(x,L,\hat{\bar{a}};P)\). To achieve the compatibility conditions (iv)-(vi) in Theorem 46, we need a total order on \(\mathcal{C}\) with the following property: if a moduli space \(X\in\mathcal{C}\) is part of the normalized boundary \(\partial Y\) of another moduli space \(Y\in\mathcal{C}\), then \(X<Y\). Concretely, we choose our total order on \(\mathcal{C}\) as follows:
* \(X<Y\) if the \(P\)-part of \(X\) is \(\{m\}\) and the \(P\)-part of \(Y\) is \([m^{\prime},m^{\prime}+1]\), for any \(m,m^{\prime}\in\mathbb{Z}_{\geq 0}\).
* \(X<Y\) if the \(P\)-part of \(X\) is \(\{m\}\) and the \(P\)-part of \(Y\) is \(\{m+1\}\).
* \(X<Y\) if the \(P\)-part of \(X\) is \([m,m+1]\) and the \(P\)-part of \(Y\) is \([m+1,m+2]\).
* For any \(k_{1},k_{2},k_{3},k_{4},k_{5},l_{1},l_{2},l_{3},l_{4}\in\mathbb{Z}_{\geq 0}\), \(a_{1},a_{2},a_{3},a_{4},a_{5}\in H_{1}(L;\mathbb{Z})\), \(1\leq j\leq l_{2}-1\) and \(1\)-periodic orbits \(x_{1},x_{2},x_{3},x_{4},x_{5}\) of \(X_{H_{t}}\), \[\begin{split} l_{1}\overline{\mathcal{M}}(x_{1},x_{2};\{m\})& <\overline{\mathcal{R}}_{k_{1}+1}(L,\bar{a}_{1};\{m\})<\overline{ \mathcal{R}}_{k_{2}+2,\vartheta}(L,\bar{a}_{2};\{m\})\\ &<l_{2}^{\,j+1}\overline{\mathcal{R}}_{k_{3}+1}^{k_{1}}(x_{3},L, \tfrac{\hat{a}}{a_{3}};\{m\})<l_{3}\overline{\mathcal{R}}_{k_{4}+1}^{\mathcal{ S}^{1}}(x_{4},L,\tfrac{\hat{a}}{a_{4}};\{m\})\\ &<l_{4}\overline{\mathcal{R}}_{k_{3}+1}^{k_{1}}(x_{5},L,\tfrac{ \hat{a}}{a_{5}};\{m\})\end{split}\] (4.108)
* If \(k_{1},k_{2},l_{1},l_{2}\in\mathbb{Z}_{\geq 0}\), \(1\leq j_{1}\leq l_{1}-1\), \(1\leq j_{2}\leq l_{2}-1\) and \(a_{1},a_{2}\in H_{1}(L;\mathbb{Z})\) satisfy \(\theta_{M}(a_{1})+(k_{1}-1)\varepsilon<\theta_{M}(a_{2})+(k_{2}-1)\varepsilon\), then \[\overline{\mathcal{R}}_{k_{1}+1}(L,\bar{a}_{1};\{m\})<\overline{\mathcal{R}}_ {k_{2}+1}(L,\bar{a}_{2};\{m\}),\] (4.109) \[\overline{\mathcal{R}}_{k_{1}+2,\vartheta}(L,\bar{a}_{1};\{m\})<\overline{ \mathcal{R}}_{k_{2}+2,\vartheta}(L,\bar{a}_{2};\{m\}),\] (4.110) \[\begin{split} l_{1}^{\,j_{1},j_{1}+1}\overline{\mathcal{R}}_{k_{1} +1}^{1}(x,L,\tfrac{\hat{a}}{a_{1}};\{m\})<l_{2}^{\,i_{2},j_{2}+1}\overline{ \mathcal{R}}_{k_{2}+1}^{1}(x,L,\tfrac{\hat{a}}{a_{2}};\{m\}),\end{split}\] (4.111) \[\begin{split} l_{1}\overline{\mathcal{R}}_{k_{1}+1}^{ \mathcal{S}^{1}}(x,L,\tfrac{\hat{a}}{a_{1}};\{m\})<l_{2}\overline{\mathcal{R} }_{k_{2}+1}^{\mathcal{S}^{1}}(x,L,\tfrac{\hat{a}}{a_{2}};\{m\}),\end{split}\] (4.112) \[\begin{split} l_{1}\overline{\mathcal{R}}_{k_{1}+1}^{1}(x,L, \tfrac{\hat{a}}{a_{1}};\{m\})<l_{2}\overline{\mathcal{R}}_{k_{2}+1}^{1}(x,L, \tfrac{\hat{a}}{a_{2}};\{m\}).\end{split}\] (4.113)
* Let \(k,l_{1},l_{2}\in\mathbb{Z}_{\geq 0}\), \(1\leq j_{1}\leq l_{1}-1\), \(1\leq j_{2}\leq l_{2}-1\), \(a\in H_{1}(L;\mathbb{Z})\) and \(x_{1}\), \(x_{2}\), \(x_{3}\), \(x_{4}\) be \(1\)-periodic orbits of \(X_{H_{t}}\). If \(A_{H_{t}}(x_{1})<A_{H_{t}}(x_{2})\), then \[\begin{split} l_{1}^{\,j_{1},j_{1}+1}\overline{\mathcal{R}}_{k+1}^ {1}(x_{1},L,\tfrac{\hat{a}}{a};\{m\})<l_{2}^{\,j_{2},j_{2}+1}\overline{ \mathcal{R}}_{k+1}^{1}(x_{2},L,\tfrac{\hat{a}}{a};\{m\}),\end{split}\] (4.114) \[\begin{split} l_{1}\overline{\mathcal{R}}_{k+1}^{ \mathcal{S}^{1}}(x_{1},L,\tfrac{\hat{a}}{a};\{m\})<l_{2}\overline{\mathcal{R} }_{k+1}^{\mathcal{S}^{1}}(x_{2},L,\tfrac{\hat{a}}{a};\{m\}),\end{split}\] (4.115) \[\begin{split} l_{1}\overline{\mathcal{R}}_{k+1}^{ \mathcal{S}}(x_{1},L,\tfrac{\hat{a}}{a};\{m\})<l_{2}\overline{\mathcal{R}}_{k+1 }^{1}(x_{2},L,\tfrac{\hat{a}}{a};\{m\}).\end{split}\] (4.116) If \(A_{H_{t}}(x_{1})-A_{H_{t}}(x_{2})<A_{H_{t}}(x_{3})-A_{H_{t}}(x_{4})\), then \[\begin{split} l_{1}\overline{\mathcal{M}}(x_{1},x_{2})<l_{2} \overline{\mathcal{M}}(x_{3},x_{4}).\end{split}\] (4.117)
* For \(c\in\mathbb{R}\), we choose arbitrary total orders on sets \[\left\{\overline{\mathcal{M}}(x_{1},x_{2};\{m\})|A_{H_{t}}(x_{1})-A_{H_{t}}(x _{2})=c\right\},\] (4.118) \[\left\{\overline{\mathcal{R}}_{k+1}(L,\bar{a};\{m\})|\theta_{M}(a)+(k-1) \varepsilon=c\right\},\] (4.119) \[\left\{\overline{\mathcal{R}}_{k+2,\vartheta}(L,\bar{a};\{m\})|\theta_{M}(a) +(k-1)\varepsilon=c\right\},\] (4.120) \[\left\{\begin{split} l^{\,j+1}\overline{\mathcal{R}}_{k+1}^ {1}(x,L,\tfrac{\hat{a}}{a};\{m\})|\theta_{M}(a)+(k-1)\varepsilon=c\right\}, \end{split}\] (4.121) \[\left\{\begin{split} l^{\,-1}\overline{\mathcal{R}}_{k+1}^ {\mathcal{S}^{1}}(x,L,\tfrac{\hat{a}}{a};\{m\})|\theta_{M}(a)+(k-1) \varepsilon=c\right\},\] (4.122) \[\left\{\begin{split} l^{\,1}\overline{\mathcal{R}}_{k+1}^ {1}(x,L,\tfrac{\hat{a}}{a};\{m\})|\theta_{M}(a)+(k-1)\varepsilon=c\right\}. \end{split}\] (4.123)
* For each \(m\in\mathbb{Z}_{\geq 0}\), we take a total order on the moduli spaces with \(P=[m,m+1]\) in a similar manner as (iv)--(vii) above.
We haven't dealt with the moduli spaces \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(x,L,\hat{\bar{a}};P)\) when defining the total ordering above. In this case, we have a (codimension \(0\)) embedding of admissible K-spaces

\[{}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(x,L,\hat{\bar{a}};P)\hookrightarrow{}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}(x,L,\hat{\bar{a}};P) \tag{4.124}\]

induced from the auxiliary-rescaling map (4.36). See the proof of Lemma 55 for details. Thus the compatibility at corners for \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(x,L,\hat{\bar{a}};P)\) follows essentially from the compatibility at corners for \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}(x,L,\hat{\bar{a}};P)\), which is guaranteed by the existence of the total order on \(\mathcal{C}\) specified above.
**Remark 49**.: _If we consider the subcollection \(\mathcal{C}^{\prime}\subset\mathcal{C}\) of the moduli spaces \(\overline{\mathcal{R}}_{k_{1}+1}(L,\bar{a}_{1};P)\), \(\overline{\mathcal{R}}_{k_{2}+2,\vartheta}(L,\bar{a}_{2};P)\), \({}_{l_{1}}^{j,j+1}\overline{\mathcal{R}}^{1}_{k_{3}+1}(x,L,\hat{\bar{a}}_{3} ;P)\), \({}_{l_{2}}\overline{\mathcal{R}}^{{}^{{}^{{}^{{}^{{}^{{}^{{}^{{}^{{ }^{{}^{{}^{{}^{{}^{{}^{}^{{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{}^{ }^{{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{ }}^{{}^{{}^{{}}^{{}^{{}^{}^{}^{{}^{}^{{}^{}^{{}}^{{}^{}^{{}^{}^{{}^{}^{}^{ }}^{{{}^{}^{{}^{}^{{}^{}^{{}}^{{}^{}^{{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{ }}^{{}^{{}^{}^{{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{ }^{{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{}^{}^{{}^{ }^{{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{ }^{{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{ }^{{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{ }^{{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{ }^{{}^{}^{{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{ }^{{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{ }^{{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{{}^{ }^{{}^{}^{{}^{}^{{}^{}^{{}^{}^{}^{{}}^{{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{ }^{{}^{{}^{{}}^{}^{{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{{}^{}^{}^{{}^{}^{ }^{{{}^{{}}^{}^{{}^{{}^{{}}^{}^{{}^{}^{{}^{{}}^{}^{{}^{{}^{{}}^{}^{{}}^{{}^{ }^{{}^{{}^{}^{{}}^{}^{{}^{{}^{}^{{}^{{}}^{{}^{{}}^{}^{{}^{{}}^{{}^{ }^{{{}}^{{}^{{}}^{{}^{}^{{}}^{{}^{{}^{}^{{}}^{}^{{}^{{}}^{{}^{{}}^{{}^{ }^{{{}}^{{}^{{}}^{{}^{{}}^{{}^{{}}^{{}^{{}}^{{}^{{}}^{{}}^{{{}}^{{}^{{} }^{{{}}^{{}^{{}}^{{}^{{}}^{{}^{{}}^{{}^{{}}^{{}^{{}}^{{}}^{{}^{{}}^{{{}} }^{{{}^{{}^{{}}^{{}^{{}}^{{}^{{}}^{{}^{{}}^{{}^{{}}^{{}}^{{}^{{}}^{{{}}^{{}} }^{{{}^{{}^{{}}^{{}^{{{}}^{{}}^{{}^{{}}^{{}}^{{{}}^{{{}}^{{}}^{{}^{{}}^{{{}}} }^{{{}^{{}^{{}^{{}}^{{}^{{}}^{{}}^{{{}^{{}}^{{}}^{{{}}^{{}}^{{{}}^{{}}^{{}}^{{ }^{{{}}^{{{}}^{{}^{{}}^{{}}^{{{}}^{{}^{{}}^{{{}}^{{}}^{{{}}^{{}}^{{{}}^{{ }}^{{{}^{{{}}^{{{}}}^{{{}}^{{{}}^{}}^{{{{}}^{{{{}}}^{{}}^{{{}}}^{{ }^{{{{}}^{{{}}^{{{}}^{{}}^{{{}}^{{{}}}^{{{}}^{{{}}^{{{}}^{{{{}}}^{{}}^{{{ }}^{{{{}}}^{{{{}}}^{{}^{{{}}^{{{{}}}^{{{}}}^{{}}^{{{}}^{{ }^{{{{}}}^{{{{{}}}^{{{}}^{{{}}}^{{{{}}}^{{{}}^{{{{}}}^{{{{}}}^{{{{}}}^{ }^{{{{{}}}^{{{{}}}^{{{{}}}^{{{}}^{{}}^{{{}}^{{{{}}^{{{}}}^{{{{}}}^{{{{}}}^{{ }^{{{{{}}}}^{{{}}^{{{}}^{{{}}^{{{{}}}^{{}}^{{{{}}}^{{{}}^{{{{}}}}^{{ }^{{{{{}}^{{{{}}}^{{{{}}}^{{{{}}}^{{{{{}}}^{{{}}^{{{{}}}^{{{{}}}^{{{}}}^{{{{}}}^{{{ }}^{{{{}}^{{{}}^{{{{}}}^{{{{{}}}}^{{{}}^{{{}}}^{{{}}^{{{{}}}^{{ }^{{{}^{{{{}}}^{{{{{}}}^{{}}^{{{{}}}^{{{}}^{{{}}^{{{}}}^{{{}}^{{{}}}^{{{}}^{{}^{{{ }}^{{{}}^{{{}}^{{{}}^{{{}}^{{}^{{}}^{{{}}^{{}}^{{{}}^{{}}^{{{}^{{}}^{{ }^{{}^{{}}^{{}^{{}}^{{}^{{{}}^{{}^{{{}}^{{}}^{{{}}^{{{}}^{{{}}^{{{}}}^{{{}}^{{}^{{}} }^{{{}^{{}^{{}^{}^{{{}^{{}}^{{}^{{}}^{{{}^{{}}^{{{}^{{{}}}^{{{}}^{{}}^{{}^{{ }^{{}^{{}^{{}}^{{}^{{}^{{}}^{{}^{{{}}^{{}^{{{}}^{{}^{{}}}^{{{}}^{{}^{{}}^{{ }^{{}^{{}^{{}^{{}^{{}}^{{{}^{{}^{}}^{{{}^{{}}^{{{}^{}^{{}}^{{{}}^{{}}^{{{}}^{{}^{ }^{{{}^{{}^{{}^
* \(T\in\mathbb{R}_{\geqslant 0}\) and \(\gamma\in C^{0}([0,T],L)\) with \(\gamma(0)=\gamma(T)\),
* \(0=t_{0}\leqslant t_{1}\leqslant\cdots\leqslant t_{k}\leqslant T\).
Clearly, we can identify \(\mathcal{L}_{k+1}^{con}\) with
\[\left\{(\Gamma_{0},\cdots,\Gamma_{k})\in(\Pi^{con})^{k+1}\,|\,ev_{1}(\Gamma_{i})=ev_{0}(\Gamma_{i+1})\text{ for }0\leqslant i\leqslant k-1\text{ and }ev_{1}(\Gamma_{k})=ev_{0}(\Gamma_{0})\right\}. \tag{4.129}\]
This identification enables us to equip \(\mathcal{L}_{k+1}^{con}\) with a metric
\[d_{\mathcal{L}_{k+1}}\left((\Gamma_{0},\cdots,\Gamma_{k}),(\Gamma_{0}^{\prime },\cdots,\Gamma_{k}^{\prime})\right):=\max_{0\leqslant i\leqslant k}d_{\Pi}( \Gamma_{i},\Gamma_{i}^{\prime}). \tag{4.130}\]
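In the simplest case \(k=0\), the identification (4.129) exhibits \(\mathcal{L}_{1}^{con}\) as the space of those \(\Gamma_{0}\in\Pi^{con}\) whose two endpoints coincide, and the metric (4.130) is then just the restriction of \(d_{\Pi}\).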
The evaluation map \(ev_{j}^{\mathcal{L}}:\mathcal{L}_{k+1}^{con}\to L\), \(0\leqslant j\leqslant k+1\), and the concatenation map \(con_{j}:\mathcal{L}_{k+1}^{con}\ {}_{ev_{j}^{\mathcal{L}}}\times_{ev_{0}^{\mathcal{L}}}\ \mathcal{L}_{k+1}^{con}\to\mathcal{L}_{k+k}^{con}\) are defined in the same way as in the case of smooth loops. \(ev_{j}^{\mathcal{L}}\) is obviously continuous (with respect to the metric \(d_{\mathcal{L}_{k+1}}\)), and the continuity of \(con_{j}\) follows from the continuity of (4.127).
For \(\beta\in\pi_{2}(M,L)\) and \((T,B)\in\mathfrak{G}(k+1,\beta)\), there is a continuous map
\[ev_{int}^{P}:\prod_{v\in C_{0,\mathrm{int}}(T)}P\times\mathcal{L}_{k_{v}+1}^{con}(\partial B(v))\to\prod_{e\in C_{1,\mathrm{int}}(T)}(P\times L)^{2} \tag{4.131}\]
defined in the same way as (4.3). By concatenation of loops, we get a continuous map
\[\left(\prod_{e\in C_{1,\mathrm{int}}(T)}P\times L\right)\ {}_{\Delta}\times_{ev_{int}^{P}}\ \left(\prod_{v\in C_{0,\mathrm{int}}(T)}P\times\mathcal{L}_{k_{v}+1}^{con}(\partial B(v))\right)\to P\times\mathcal{L}_{k+1}^{con}(\partial\beta). \tag{4.132}\]
**Proposition 50**.: _For \(k,m\in\mathbb{Z}_{\geqslant 0}\), and \(P=\{m\}\) or \([m,m+1]\), one can define strongly continuous maps_
\[\mathit{Ev}^{\mathcal{R}}:\overline{\mathcal{R}}_{k+1}(L,\beta;P)\to P \times\mathcal{L}_{k+1}^{con}(\partial\beta),\text{ where }\theta_{M}(a)<(m+1-k)\varepsilon, \tag{4.133}\]
\[\mathit{Ev}_{\vartheta}^{\mathcal{R}}:\overline{\mathcal{R}}_{k+2,\vartheta}( L,\beta;P)\to P\times\mathcal{L}_{k+1}^{con}(\partial\beta),\text{ where }\theta_{M}(a)<(m+1-k)\varepsilon, \tag{4.134}\]
\[{}_{l}\mathit{Ev}^{\mathcal{R}}:{}_{l}\overline{\mathcal{R}}_{k+1}^{1}(x,L,\hat{\beta};P)\to P\times\mathcal{L}_{k+1}^{con}(\partial\hat{\beta}),\text{ where }\theta_{M}(a)<(m-k-U)\varepsilon, \tag{4.135}\]

\[{}_{l-1}\mathit{Ev}^{S^{1}}:{}_{l-1}\overline{\mathcal{R}}_{k+1}^{S^{1}}(x,L,\hat{\beta};P)\to P\times\mathcal{L}_{k+1}^{con}(\partial\hat{\beta}),\text{ where }\theta_{M}(a)<(m-k-U)\varepsilon, \tag{4.136}\]

\[{}_{l}^{j,j+1}\mathit{Ev}^{\mathcal{R}}:{}_{l}^{j,j+1}\overline{\mathcal{R}}_{k+1}^{1}(x,L,\hat{\beta};P)\to P\times\mathcal{L}_{k+1}^{con}(\partial\hat{\beta}),\text{ where }\theta_{M}(a)<(m-k-U)\varepsilon, \tag{4.137}\]

\[{}_{l-1}\mathit{Ev}_{i}:{}_{l-1}\overline{\mathcal{R}}_{k+1,\tau_{i}}^{1}(x,L,\hat{\beta};P)\to P\times\mathcal{L}_{k+1}^{con}(\partial\hat{\beta}),\text{ where }\theta_{M}(a)<(m-k-U)\varepsilon, \tag{4.138}\]
_such that the following diagram commutes for every \((T,B)\in\mathfrak{G}(k+2,\beta)\):_
\[\left(\prod_{e}P\times L)_{\Delta}\times \underset{ev_{int}^{P}}{\sum}\left(\prod_{v\neq v_{0}}\mathcal{R}_{k_{v}+1}( B(v);P)\times\mathcal{R}_{k_{v_{0}}+1,\vartheta}(B(v_{0});P)\right)\
commutes for every \((T,\hat{B},v_{0})\in\mathcal{G}(k+1,\hat{\beta})\), where the first horizontal map is defined from (4.88) by setting \(d=0\), and the vertical maps are given by \({}_{l}Ev^{\mathcal{R}}\)._
_In the above, we have abbreviated the notations of the moduli spaces so that the boundary conditions specified by the Lagrangian submanifold \(L\) and the asymptotic conditions specified by a Hamiltonian orbit \(x\) are omitted. In the commutative diagram (4.140) above, we can also include cylinder bubbles in \(\prod_{i=1\,j_{\mathcal{R}}}^{r_{2}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Note that there are only finitely many \(\beta_{l}\) that are non-vanishing. By definition of a cyclic dilation, we have
\[B_{c}(\tilde{\beta})=e_{M}-\partial\beta_{-1} \tag{4.143}\]
for some \(\beta_{-1}\in\mathit{SC}^{-1}(M)\), where \(B_{c}\) is the cochain level marking map
\[B_{c}:SC^{\bullet}_{S^{1}}(M)\to\mathit{SC}^{\bullet-1}(M), \tag{4.144}\]
and \(e_{M}\in\mathit{SC}^{0}(M)\) is the cochain level representative of the identity. We want to fix the cochain \(x\) in the definition of Cohen-Ganatra moduli spaces so that it comes from the cochains \(\{\beta_{l}\}_{l\in\mathbb{Z}_{\geqslant 0}}\) in the expression of \(\tilde{\beta}\), together with \(\beta_{-1}\). For a closed Lagrangian submanifold \(L\subset M\) which is oriented and _Spin_ relative to \(\alpha\), fix classes \(a_{l-1}\in H_{1}(L;\mathbb{Z})\) for each \(l\in\mathbb{Z}_{\geqslant 0}\) such that \(a_{l-1}=0\) if \(\beta_{l-1}=0\), and \(\sum_{l=0}^{\infty}a_{l-1}=a\). For \(\mathring{\mathring{\mathring{\mathring{\mathring{\mathring{\mathring{ \mathring{\mathring{\mathring{\mathring{\mathring{\mathring{\mathring{\mathring{ \mathring{\mathring{\mathring{ \mathring{ }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}\)\)\) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \ \\
\[\bar{x}_{m}(k):=\sum_{\theta_{M}(a)<(m+1-k)\varepsilon}(-1)^{k+1}\overline{Ev}_{ \bullet}\left(\overline{\overline{\mathbb{R}}}_{k+1}(L,\bar{a};[m,m+1])\right) \in\overline{C}_{-1}, \tag{4.161}\]
\[x_{m,0}(k+1):=\sum_{\theta_{M}(a)<(m+1-k)\varepsilon}(-1)^{n+1}\overline{Ev}_{ \bullet}\left(\overline{\overline{\mathbb{R}}}_{k+2,\vartheta}(L,\bar{a};\{m \})\right)\in C_{-2}, \tag{4.162}\]
\[\bar{x}_{m,0}(k+1):=\sum_{\theta_{M}(a)<(m+1-k)\varepsilon}(-1)^{k}\overline{Ev}_{\bullet}\left(\overline{\mathcal{R}}_{k+2,\vartheta}(L,\bar{a};[m,m+1])\right)\in\overline{C}_{-2}, \tag{4.163}\]
\[y_{m,0}(k):=\sum_{\theta_{M}(a)<(m-U-k)\varepsilon}\sum_{l=0}^{\infty}(-1)^{n+k+1} \overline{Ev}_{\bullet}\left(\{\overline{\mathbb{R}}^{1}_{k+1}(\beta_{l-1},L, \mathring{\bar{a}}_{l-1};[m,m+1])\right)\in C_{2}, \tag{4.164}\]
\[y_{m,1}(k+1):=\sum_{\theta_{M}(a)<(m-U-k-1)\varepsilon}\sum_{l=1}^{\infty}(-1)^ {n+k+1}\overline{Ev}_{\bullet}\left({}_{l-1}\overline{\overline{\mathbb{R}}^{ 1}_{k+2}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};\{m\})}\right)\in C_{0}, \tag{4.165}\]
\[\bar{y}_{m,0}(k):=\sum_{\theta_{M}(a)<(m-U-k)\varepsilon}\sum_{l=0}^{\infty} \overline{Ev}_{\bullet}\left(\{\overline{\mathbb{R}}^{1}_{k+1}(\beta_{l-1},L, \mathring{\bar{a}}_{l-1};[m,m+1])\right)\in\overline{C}_{2}, \tag{4.166}\]
\[\bar{y}_{m,1}(k+1):=\sum_{\theta_{M}(a)<(m-U-k-1)\varepsilon}\sum_{l=1}^{ \infty}\overline{Ev}_{\bullet}\left({}_{l-1}\overline{\overline{\mathbb{R}}^{ 1}_{k+2}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])}\right)\in\overline{ C}_{0}, \tag{4.167}\]
\[z_{m}(k):=\sum_{\theta_{M}(a)<(m-1-k)\varepsilon}(-1)^{n+k+1}\overline{Ev}_{ \bullet}\left(\overline{\overline{\mathbb{R}}^{1}_{k+1}(e_{M},L,\bar{a};\{m\}) }\right)\in C_{1}, \tag{4.168}\]
\[\bar{z}_{m}(k):=-\sum_{\theta_{M}(a)<(m-1-k)\varepsilon}\overline{Ev}_{ \bullet}\left(\overline{\overline{\mathbb{R}}^{1}_{k+1}(e_{M},L,\bar{a};[m,m+1 ])}\right)\in\overline{C}_{1}, \tag{4.169}\]
where the chains (4.160) and (4.161) were defined in [33], Section 7.6. In the above, the sums are taken over \(a\in H_{1}(L;\mathbb{Z})\), which give the components of the chains (resp. relative chains) in \(C_{\bullet}(k)\) (resp. \(\overline{C}_{\bullet}(k)\)). One can further take sums over all \(k\in\mathbb{Z}_{\geqslant 0}\) to obtain the full chains \(x_{m},\cdots,\bar{z}_{m}\), or restrict to a particular \(a\in H_{1}(L;\mathbb{Z})\) to obtain the (relative) chains \(x_{m}(a,k),\cdots,\bar{z}_{m}(a,k)\). Note that these chains give rise to chains in the quotient complexes \(C_{\bullet}^{nd}\) and \(\overline{C}_{\bullet}^{nd}\) under the natural projections \(C_{\bullet}\to C_{\bullet}^{nd}\) and \(\overline{C}_{\bullet}\to\overline{C}_{\bullet}^{nd}\). By abuse of notation, we shall use the same notation to denote the corresponding chains in \(C_{\bullet}^{nd}\) and \(\overline{C}_{\bullet}^{nd}\).
Using the chains (4.162)--(4.169) (realized as chains in \(C_{\bullet}^{nd}\) and \(\overline{C}_{\bullet}^{nd}\)), we can form the \(S^{1}\)-equivariant (relative) de Rham chains
\[\tilde{x}_{m}(k):=x_{m,0}(k)\otimes 1\in C_{-2}^{S^{1}}, \tag{4.170}\]
\[\bar{\tilde{x}}_{m}(k):=\bar{x}_{m,0}(k)\otimes 1\in\overline{C}_{-2}^{S^{1}}, \tag{4.171}\]
\[\tilde{y}_{m}(k,k+1):=y_{m,0}(k)\otimes 1+y_{m,1}(k+1)\otimes u^{-1}\in C_{2}^{S^ {1}}, \tag{4.172}\]
\[\bar{\tilde{y}}_{m}(k,k+1):=\bar{y}_{m,0}(k)\otimes 1+\bar{y}_{m,1}(k+1)\otimes u ^{-1}\in\overline{C}_{2}^{S^{1}} \tag{4.173}\]
\[\tilde{z}_{m}(k):=z_{m}(k)\otimes 1\in C_{1}^{S^{1}}, \tag{4.174}\]
\[\bar{\tilde{z}}_{m}(k):=\bar{z}_{m}(k)\otimes 1\in\overline{C}_{1}^{S^{1}}. \tag{4.175}\]
In the above definitions, we have specified the \(k\)-components of the \(x\) and \(z\)-type chains, but this is not the case for the \(y\)-type chains, where we have combined the chain \(y_{m,0}\) (resp. \(\bar{y}_{m,0}\)) over the \(k\)-component and the chain \(y_{m,1}\) (resp. \(\bar{y}_{m,1}\)) over the \((k+1)\)-component. This is because we will be dealing with the \(S^{1}\)-equivariant differentials of these chains, so \(\partial\left(y_{m,0}(k)\right)\) (resp. \(\partial\left(\bar{y}_{m,0}(k)\right)\)) and \(\delta_{cyc}\left(y_{m,1}(k+1)\right)\) (resp. \(\delta_{cyc}\left(\bar{y}_{m,1}(k+1)\right)\)) both lie in the \((a,k)\)-component of \(C_{\bullet}^{nd}\) (resp. \(\overline{C}_{\bullet}^{nd}\)).
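In particular, comparing degrees in (4.172): since \(y_{m,0}(k)\in C_{2}\) and \(y_{m,1}(k+1)\in C_{0}\), the formal variable \(u^{-1}\) carries (homological) degree \(2\), so that both summands of \(\tilde{y}_{m}(k,k+1)\) have total degree \(2\), consistent with \(\tilde{y}_{m}(k,k+1)\in C_{2}^{S^{1}}\).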
For later purposes, we also introduce the following auxiliary chains (as before, they can be regarded as chains in \(\overline{C}_{1}^{nd}\)):
\[\bar{y}_{m}^{j,j+1}(k):=\sum_{\theta_{M}(a)<(m-k-U)\varepsilon}\sum_{l=2}^{ \infty}\overline{Ev}_{\bullet}\left({}_{l}\overset{j,j+1}{|\overline{ \mathbb{R}}^{1}_{k+1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])}{}\right) \in\overline{C}_{1}, \tag{4.176}\]
\[\bar{y}_{m}^{S^{1}}(k):=\sum_{\theta_{M}(a)<(m-k-U)\varepsilon}\sum_{l=1}^{ \infty}\overline{Ev}_{\bullet}\left({}_{l-1}\overline{\overline{\mathbb{R}}^{ 1}_{k+1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])}{}\right)\in \overline{C}_{1}. \tag{4.177}\]
\[\bar{y}_{m,1}^{i}(k):=\sum_{\theta_{M}(a)<(m-k-U)\varepsilon}\sum_{l=1}^{\infty}\overline{Ev}_{\bullet}\left({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{i}}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\right)\in\overline{C}_{1}. \tag{4.178}\]
with the fiber product \(\overline{\overline{\mathcal{R}}}_{k+2,\partial}(L,\bar{a};[m,m+1])\)\(\times_{0}\)\(\overline{\mathcal{R}}_{1}^{1}(e_{M},L,0;[m,m+1])\), which gives the sign \(|\bar{x}_{m,0}|+1\), where \(\overline{\overline{\mathcal{R}}}_{1}^{1}(e_{M},L,0;[m,m+1])\) is the moduli space defining the unit \(\bar{e}_{L}\). Now the total sign \(|\bar{x}_{m,0}|+k(i-1)\) is the sum of the sign \(|\bar{x}_{m,0}|+1\) with a sign \(k+1\) coming from the comparison between \(\lambda_{k+1}^{-1}\) and \(\tau_{k+1}^{-1}\), a sign \(k\) from \(\lambda_{k}\), and a sign \(ki\) from \(\lambda_{k}^{i}\).
We then analyze the codimension 1 boundary strata
\[\bigsqcup_{0\leqslant j\leqslant l}{}_{j}\overline{\mathcal{M}}(\beta_{l-1},y_{j,l};[m,m+1])\times{}_{l-j}\overline{\mathcal{R}}_{k+1}^{1}(y_{j,l},L,\mathring{\bar{a}}_{l-1};[m,m+1]), \tag{4.182}\]
where the \(y_{j,l}\)'s are 1-periodic orbits of \(X_{H_{t}}\),
\[\bigsqcup_{1\leqslant j\leqslant l-1}{}_{l}^{j,j+1}\overline{\mathcal{R}}_{k+1}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1]), \tag{4.183}\]
and
\[{}_{l-1}\overline{\mathcal{R}}_{k+1}^{S^{1}}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1]) \tag{4.184}\]
of the admissible K-space \({}_{l}\overline{\mathcal{R}}_{k+1}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\) (cf. (4.85)), and identify their contributions in terms of chain level string topology operations.
**Lemma 55**.: _The following identities hold in \(\overline{C}_{*}^{nd}\) for any \(k,l\in\mathbb{Z}_{\geqslant 0}\):_
\[\sum_{l=0}^{\infty}\sum_{j=0}^{l}y_{j,l}=\sum_{l=1}^{\infty}\sum_{j=0}^{l} \delta_{j}(\beta_{l-1})=e_{M}, \tag{4.185}\]
\[\bar{y}_{m}^{j,j+1}(k)=0, \tag{4.186}\]
\[\bar{y}_{m}^{S^{1}}(k)=\delta_{cyc}\left(\bar{y}_{m,1}(k+1)\right), \tag{4.187}\]
_where the \(\delta_{j}\)'s are structure maps on the \(S^{1}\)-complex \(SC^{*}(M)\), with \(\delta_{0}=\partial\), and \(\delta_{cyc}\) is the BV operator on the strict \(S^{1}\)-complex \(\overline{C}_{*}^{nd}\)._
Proof.: First, observe that the moduli spaces \({}_{j}\overline{\mathcal{M}}(\beta_{l-1},y_{j,l})\) in (4.182) are rigid due to dimension reasons. Since counting rigid elements of \({}_{j}\overline{\mathcal{M}}(\beta_{l-1},y_{j,l})\) gives the image of \(\beta_{l-1}\) under the operator \(\delta_{j}\), it follows that \(\sum_{l=0}^{\infty}\sum_{j=0}^{l}y_{j,l}=\sum_{l=0}^{\infty}\sum_{j=0}^{l} \delta_{j}(\beta_{l-1})\). The \(S^{1}\)-equivariant cocycle condition satisfied by \(\mathring{\beta}=\sum_{l=0}^{\infty}\beta_{l}\otimes u^{-l}\) implies that
\[\sum_{l=1}^{\infty}\sum_{j=1}^{l-1}\delta_{j}(\beta_{l-1})=0, \tag{4.188}\]
so
\[\sum_{l=0}^{\infty}\sum_{j=0}^{l}y_{j,l}=\sum_{l=0}^{\infty}y_{l,l}=\sum_{l=0 }^{\infty}\delta_{l}(\beta_{l-1}). \tag{4.189}\]
Note that the right-hand side gives \(B_{c}(\mathring{\beta})+\partial\beta_{-1}\), which is equal to the identity \(e_{M}\) by (4.143). This proves (4.185).
For the second identity, observe that the forgetful map (4.27) induces an isomorphism
\[{}_{l}^{j,j+1}\overline{\mathcal{R}}_{k+1}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\cong{}_{l-1}\overline{\mathcal{R}}_{k+1}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\times S^{1} \tag{4.190}\]
as admissible K-spaces. Since the admissible map \({}_{l}^{j,j+1}\overline{Ev}^{\mathcal{R}}\) factors as the projection onto the first factor followed by \({}_{l-1}\overline{Ev}^{\mathcal{R}}\), and the virtual fundamental chain of the moduli space \({}_{l}^{j,j+1}\overline{\mathcal{R}}_{k+1}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\) is product-like, with non-trivial degree in the \(S^{1}\)-factor, we conclude that
\[\overline{Ev}_{\bullet}\left({}_{l}^{j,j+1}\overline{\mathcal{R}}_{k+1}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\right)=0 \tag{4.191}\]
in \(\overline{C}_{1}^{nd}\). This proves (4.186).
The proof of the third identity is inspired by that of [27], Proposition 6. We use the decomposition (up to some codimension 1 strata) of the moduli space \({}_{l-1}\mathcal{R}_{k+1}^{S^{1}}\) into sectors \({}_{l-1}\mathcal{R}_{k+1,\tau_{i}}^{1}\) considered in Section 4.2, which induces a decomposition of the moduli space \({}_{l-1}\overline{\mathcal{R}}_{k+1}^{S^{1}}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\) into the corresponding sectors \({}_{l-1}\overline{\mathcal{R}}_{k+1,\tau_{i}}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\). More precisely, there is an embedding of abstract moduli spaces
\[\bigsqcup_{i=0}^{k}{}_{l-1}\mathcal{R}_{k+1,\tau_{i}}^{1}\xrightarrow{\ \sim\ }\bigsqcup_{i=0}^{k}{}_{l-1}\mathcal{R}_{k+1}^{S_{i,i+1}^{1}}\hookrightarrow{}_{l-1}\mathcal{R}_{k+1}^{S^{1}}, \tag{4.192}\]
which is compatible with our choices of Floer data, and covers all but a codimension 1 locus in the target (compare with Lemma 36). Since codimension 1 strata do not affect the de Rham chain defined by the pushforward \(\overline{Ev}_{*}\) in the quotient complex \(\overline{C}_{*}^{nd}\), the chain defined by \({}_{l-1}\overline{\mathcal{R}}_{k+1}^{S^{1}}(\beta_{l-1},L,\frac{\mathring{a} }{a_{l-1}};[m,m+1])\) is a sum of the chains defined by the sectors
\[\bigsqcup_{i=0}^{k}{}_{l-1}\overline{\mathcal{R}}_{k+1,\tau_{i}}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\xrightarrow{\ \sim\ }\bigsqcup_{i=0}^{k}{}_{l-1}\overline{\mathcal{R}}_{k+1}^{S_{i,i+1}^{1}}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1]), \tag{4.193}\]
where the identification is induced by the auxiliary-rescaling maps \(\left\{\pi_{f}^{i}\right\}_{i=0}^{k}\). It follows that
\[\begin{split}\bar{y}_{m}^{S^{1}}(k)&=\sum_{\theta_{M}(a)<(m-k-U)\varepsilon}\sum_{l=1}^{\infty}\overline{Ev}_{*}\left({}_{l-1}\overline{\mathcal{R}}_{k+1}^{S^{1}}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\right)\\ &=\sum_{i=0}^{k}\sum_{\theta_{M}(a)<(m-k-U)\varepsilon}\sum_{l=1}^{\infty}\overline{Ev}_{*}\left({}_{l-1}\overline{\mathcal{R}}_{k+1,\tau_{i}}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\right)\\ &=\sum_{i=0}^{k}\bar{y}_{m,1}^{i}(k).\end{split} \tag{4.194}\]
In order to deduce (4.187), it remains to show that
\[\bar{y}_{m,1}^{i}(k)=(-1)^{k(k-i)}(\overline{R}_{k+1})_{*}^{k+1-i}\bar{y}_{m, 1}(k+1)\circ_{i+1}\bar{e}_{L} \tag{4.195}\]
in \(\overline{C}_{*}^{nd}\) for any \(0\leqslant i\leqslant k\). By cyclically permuting the labels of the boundary marked points \(z_{0},\cdots,z_{k}\) for an element of the moduli space \({}_{l-1}\overline{\mathcal{R}}_{k+1,\tau_{i}}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\), we can achieve that the auxiliary marked point \(z_{f}\in\partial S\) lies between \(z_{0}\) and \(z_{1}\). By our choices of Floer data (cf. Section 4.2) and CF-perturbations (cf. (i) at the beginning of this section), the chain \(\bar{y}_{m,1}^{i}(k)\) is obtained by applying the algebraic operator \(\lambda^{-i}\) to the de Rham chain \(\bar{y}_{m,1}^{0}(k)\) defined by the moduli space \({}_{l-1}\overline{\mathcal{R}}_{k+1,\tau_{0}}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\); see our arguments for Lemma 54 and Theorem 83. Thus it is enough to prove (4.195) for \(i=0\). It is a principle explained in the proof of [27], Lemma 11 that marking \(z_{f}\) as auxiliary is equivalent to putting it back in as an ordinary marked point and then erasing it by taking fiber product with the unit \(\bar{e}_{L}\). Since treating \(z_{f}\) as an ordinary marked point gives the moduli space \({}_{l-1}\overline{\mathcal{R}}_{k+2}^{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1};[m,m+1])\), rotated so that \(z_{f}\) now plays the role of \(z_{1}\) under the new labeling, it follows that
\[\bar{y}_{m,1}^{0}(k)=(s_{1}\circ\lambda_{k+1}^{-1})\left(\bar{y}_{m,1}(k+1) \right), \tag{4.196}\]
which is exactly the identity (4.195) when \(i=0\). The justification for signs is the same as in the proof of Lemma 54.
**Remark 56**.: _We describe here an alternative approach to prove the identity (4.195). To do this, we equip the Cohen-Ganatra moduli space \({}_{l}\mathcal{R}_{k+1}^{1}(x,L,\frac{\mathring{a}}{a})\) with the Floer data so that
a disc bubble develops (at the boundary point determined by the line segment connecting the origin and \(p_{1}\)) when \(|p_{1}|\to\frac{1}{2}\). See the right-hand side of Figure 56. This gives an alternative compactification \(\widehat{\mathcal{R}}^{1}_{k+1}(x,L,\frac{\mathring{\tilde{a}}}{a})\) of the moduli space \({}_{i}\mathcal{R}^{1}_{k+1}(x,L,\frac{\mathring{\tilde{a}}}{a})\), which is also an admissible K-space. When \(l=1\) and \(L\subset M\) is exact, such a compactification is used in [47] to deduce the identity (2.14) there. When \(l\geq 2\), a little bit more care must be taken when choosing the Floer data if we want to keep the splitting (4.190) induced by the forgetful map. Here, we choose the Floer data so that when \(|p_{1}|\) is sufficiently close to \(\frac{1}{2}\), it creates a "neck region" with gluing parameter_
\[r=\log\left(\frac{1-2|p_{2}|}{1-2|p_{1}|}\right). \tag{4.197}\]
_In other words, when \(|p_{1}|\to\frac{1}{2}\), \(r\to\infty\), a disc bubble appears, but when moving towards the codimension 2 stratum corresponding to \(|p_{1}|=|p_{2}|=\frac{1}{2}\), the disc bubble shrinks back to a point._
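_Concretely, (4.197) gives \(r=0\) whenever \(|p_{1}|=|p_{2}|\), while for a fixed \(|p_{2}|<\frac{1}{2}\) one has \(r\to\infty\) as \(|p_{1}|\to\frac{1}{2}\); this matches the behaviour just described._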
_There is a cobordism relating the two compactifications \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\frac{\mathring{\tilde{a}}}{a})\) and \({}_{l}\widehat{\mathcal{R}}^{1}_{k+1}(x,L,\frac{\mathring{\tilde{a}}}{a})\), under which the boundary stratum \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+1,\tau_{0}}(x,L,\frac{\mathring{\tilde{a }}}{a})\subset{}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\frac{\mathring{ \tilde{a}}}{a})\) goes to_
\[{}_{l-1}\widehat{\mathcal{R}}^{1}_{k+1,\tau_{0}}(x,L,\frac{\mathring{\tilde{a }}}{a}):=\bigsqcup_{\hat{\tilde{a}}_{1}+\tilde{a}_{2}=\hat{\tilde{a}}}\kappa^ {-1}\left({}_{l-1}\overline{\mathcal{R}}^{1}_{k+2}(x,L,\frac{\mathring{\tilde {a}}}{a})\right)\ {}_{1}\times_{0}\ \overline{\mathcal{R}}_{1,f_{1}}(e_{M},L,0), \tag{4.198}\]
_where \(\overline{\mathcal{R}}^{1}_{1,f_{1}}\) is abstractly the moduli space \(\overline{\mathcal{R}}^{1}_{2}\) defined in Section 4.1, but with \(z_{1}\in\partial D\) marked as forgotten, and \(\kappa\) is the \(\mathbb{Z}_{k+2}\)-action on \({}_{l-1}\overline{\mathcal{R}}^{1}_{k+2}(x,L,\frac{\mathring{\tilde{a}}}{a})\) induced by cyclic permutations of the boundary marked points. Since pushing forward the virtual fundamental chain of \(\overline{\mathcal{R}}^{1}_{1,f_{1}}(e_{M},L,0)\) defines the identity \(e_{L}\in C_{1}(0,0)\), up to sign this gives the right-hand side of (4.195) when \(i=1\)._
With the preparations above, we are now in a position to complete the proof of our main result.
Proof of Theorem 31.: Following Remark 32, we shall construct chains in the \(S^{1}\)-equivariant de Rham complex \(C^{S^{1}}_{\bullet}\), and then project them to chains in Connes' complex \(C^{\lambda}_{\bullet}\) which satisfy the conditions (i)-(vii) of Theorem 31. We have defined the chains \(\tilde{x}_{m}\), \(\bar{\tilde{x}}_{m}\), \(\tilde{y}_{m}\), \(\bar{\tilde{y}}_{m}\), \(\tilde{z}_{m}\) and \(\bar{\tilde{z}}_{m}\) in (4.170)--(4.175).
Let us first check that the requirements (i')--(vi') in Remark 32 are satisfied. The relations \(\tilde{x}_{m}=\tilde{e}_{-}(\bar{\tilde{x}}_{m})\), \(\tilde{y}_{m}=\tilde{e}_{-}(\bar{\tilde{y}}_{m})\), and \(\tilde{z}_{m}=\tilde{e}_{-}(\bar{\tilde{z}}_{m})\) follow from the definitions above and (A.17). Moreover, the implication
\[\left(\tilde{x}_{m+1}-\tilde{e}_{+}(\bar{\tilde{x}}_{m})\right)(a,k)\neq 0\Rightarrow\theta_{M}(a)\geq(m-k)\varepsilon \tag{4.199}\]
shows that \(\tilde{x}_{m+1}-\tilde{e}_{+}(\bar{\tilde{x}}_{m})\in F^{m}\). Similarly, one can show that \(\tilde{y}_{m+1}-\tilde{e}_{+}(\bar{\tilde{y}}_{m})\in F^{m-U-1}\) and \(\tilde{z}_{m+1}-\tilde{e}_{+}(\bar{\tilde{z}}_{m})\in F^{m-2}\).
By Theorem 79 (and its variations described in Remark 82), the isomorphism (4.159) implies that
\[\begin{split}&\tilde{\partial}\bar{x}_{m,0}(a,k+1)\\ =&\sum_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ a_{1}+a_{2}=a\\ 1\leqslant i\leqslant k_{1}\end{subarray}}(-1)^{(k_{1}-i)(k_{2}-1)+k_{1}}\bar{ x}_{m,0}(a_{1},k_{1}+1)\circ_{i}\bar{x}_{m}(a_{2},k_{2})\\ =&\sum_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ a_{1}+a_{2}=a\\ 1\leqslant i\leqslant k_{1}\end{subarray}}\sum_{j=1}^{k_{2}+1}(-1)^{\mathbbm{K} ^{1}_{ij}}\bar{x}_{m,0}(a_{1},k_{1}+1)\circ_{i}\left((\overline{R}_{k_{2}+1}) ^{j}_{\bullet}\bar{x}_{m,0}(a_{2},k_{2}+1)\circ_{k_{2}+2-j}e_{L}\right),\end{split} \tag{4.200}\]
where
\[\begin{split}\mathbbm{K}^{1}_{ij}=(i-1)(k_{2}-1)+(k_{1}-1)k_{2}+ k_{2}(j-1),\end{split} \tag{4.201}\]
for every \((a,k)\) with \(\theta_{M}(a)<(m+k-1)\varepsilon\). In the above, the second equality follows from Lemma 54. Note that the right-hand side of (4.200) isn't equal to \(\frac{1}{2}\left\{\bar{\tilde{x}}_{m},\bar{\tilde{x}}_{m}\right\}(a,k+1)\) after projecting to \(\overline{C}_{\bullet}^{\,nd}\), so the condition (iii') in Remark 32 isn't satisfied. However, by definition of \(\left\{\cdot,\cdot\right\}\) it is clear that after passing to the quotient complex \(\overline{C}_{\bullet}^{\,\lambda}\), the right-hand side of (4.200) becomes \(\frac{1}{2}\left\{\bar{\tilde{x}}_{m},\bar{\tilde{x}}_{m}\right\}(a,k+1)\). This weaker statement is therefore strong enough to replace the requirement \(\bar{\partial}^{S^{1}}(\bar{\tilde{x}}_{m})-\frac{1}{2}\left\{\bar{\tilde{x}}_{m},\bar{\tilde{x}}_{m}\right\}\in\overline{F}^{m}\) of Remark 32, (iii') for the purpose of proving Theorem 31. We shall refer to this weaker condition as (iii''). Note that here the equivariant differential \(\bar{\partial}^{S^{1}}\) is actually given by the ordinary de Rham differential \(\bar{\partial}\) since the \(S^{1}\)-equivariant chain \(\bar{\tilde{x}}_{m}\) only has \(u^{0}\)-part. A similar argument shows that \(\bar{\partial}^{S^{1}}(\bar{\tilde{z}}_{m})-\left\{\bar{\tilde{x}}_{m},\bar{\tilde{z}}_{m}\right\}\in\overline{F}^{m-2}\).
In order to confirm that \(\bar{\partial}^{S^{1}}(\bar{\bar{y}}_{m})-\left\{\bar{\bar{x}}_{m},\bar{\bar{ y}}_{m}\right\}-\bar{\bar{z}}_{m}\in\overline{F}^{m-U-1}\), we first note that it follows from the isomorphism (4.85) (with every moduli space \(X\) replaced with its outer collaring \(\overline{X}\)) that
\[\begin{split}\bar{\partial}\bar{y}_{m,0}(a,k)&= \sum_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ a_{1}+a_{2}=a\\ 1\leqslant i\leqslant k_{1}\end{subarray}}(-1)^{(k_{1}-i)(k_{2}-1)+k_{1}-1} \bar{y}_{m,0}(a_{1},k_{1})\circ_{i}\bar{x}_{m}(a_{2},k_{2})\\ &+\sum_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ a_{1}+a_{2}=a\\ 1\leqslant i\leqslant k_{1}\end{subarray}}(-1)^{(k_{1}-i)(k_{2}-1)+k_{1}}\bar{ x}_{m}(a_{1},k_{1})\circ_{i}\bar{y}_{m,0}(a_{2},k_{2})\\ &-\sum_{l=1}^{\infty}\sum_{j=0}^{l}\overline{E}\overline{v}_{ \bullet}\left(\iota_{-l}\overline{\bar{\jmath}}\overline{\bar{k}}_{k+1}^{1} \left(\delta_{j}(\beta_{l-1}),L,\overset{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
When combined with \(\tilde{\partial}\tilde{y}_{m,0}(a,k)\), the term \(\tilde{\delta}_{cyc}\left(\tilde{y}_{m,1}(a,k+1)\right)\) gives the \((a,k)\)-part of the \(S^{1}\)-equivariant differential \(\tilde{\partial}^{S^{1}}(\tilde{\tilde{y}}_{m})(a,k)\). Note that \(\tilde{\partial}^{S^{1}}\left(\tilde{\tilde{y}}_{m}(a,k,k+1)\right)\) contains an additional term \(\tilde{\partial}\tilde{y}_{m,1}(a,k+1)\otimes u^{-1}\), which belongs to the \((a,k+1)\)-part of \(\overline{C}_{*}^{S^{1}}\), so it does not appear in (4.204). The identification of the first two sums on the right-hand side of (4.204) with the \((a,k)\)-part of the Lie bracket \(\{\tilde{\bar{x}}_{m},\tilde{\bar{y}}_{m}\}\) is the same as above, which relies on Lemma 54. Altogether, we have proved \(\tilde{\partial}^{S^{1}}(\tilde{y}_{m})-\{\tilde{\bar{x}}_{m},\tilde{\bar{y}}_{m}\}-\tilde{\bar{z}}_{m}\in\overline{F}^{m-U-1}\).
For (vi'), \(\bar{x}_{m,0}(a,k+1)\neq 0\) implies that \(\mathcal{R}_{k+2,\partial}(L,\tilde{a};\{m\})\neq\emptyset\), so \(\theta_{M}(a)\geqslant 2\varepsilon\), or \(a=0\) and \(k\geqslant 2\). Moreover, it follows from Lemma 54 that
\[\mathbf{B}_{c}\left(\tilde{x}_{m,0}(0,3)\right)=\sum_{j=1}^{k+1}(-1)^{j-1}(R_{ 3})^{j}_{*}x_{m,0}(0,3)\circ_{4-j}e_{L}=x_{m}(0,2), \tag{4.205}\]
which is by definition \((-1)^{n+1}\mu_{L}\in C^{nd}_{-1}(0,2)\). Similarly, \(\bar{z}_{m}(a,k)\neq 0\) implies that \(\mathcal{R}^{1}_{k+1}(e_{M},L,\tilde{a};\{m\})\neq\emptyset\), thus \(\theta_{M}(a)\geqslant 2\varepsilon\) or \(a=0\) (cf. the proof of Lemma 43, (i)), and \([\bar{z}_{m}(0,0)]=(-1)^{n+1}\left[\overline{\mathcal{R}}^{1}_{1}(e_{M},L,0;\{m\})\right]\otimes 1=(-1)^{n+1}[\![L]\!]\).
Finally, we show that Remark 32, (vii') is satisfied. Note that \(\left(\bar{\bar{y}}_{m}(a,k),\bar{\bar{z}}_{m}(a,k)\right)\neq(0,0)\) implies that there exists an \(l\in\mathbb{Z}_{\geqslant 0}\) such that one of the following two moduli spaces
\[{}_{l}\overline{\mathcal{R}}^{1}_{k+1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1}) \text{ and }_{l-1}\overline{\mathcal{R}}^{1}_{k+2}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1}) \tag{4.206}\]
is non-empty. (When \(l=0\), this means \(\overline{\mathcal{R}}^{1}_{k+1}(\beta_{-1},L,\mathring{\bar{a}}_{-1})\neq\emptyset\).) By Lemma 43, (iii), this implies that for such an \(l\), we have
\[\left({}_{l}\overline{\mathcal{R}}^{1}_{1}(\beta_{l-1},L,\mathring{\bar{a}}_{ l-1}),{}_{l-1}\overline{\mathcal{R}}^{1}_{1}(\beta_{l-1},L,\mathring{\bar{a}}_{l-1}) \right)\neq(\emptyset,\emptyset). \tag{4.207}\]
Hence, \(a\in A_{x}\) implies that \(\overline{\mathcal{R}}_{2,\partial}(L,\tilde{a};\{m\})\neq\emptyset\), while \(a\in A_{y,z}\) implies there is an \(l\) such that (4.207) holds. From now on, fix the choice of such an \(l\). We claim that the set
\[\left\{a\in H_{1}(L;\mathbb{Z})\left|\theta_{M}(a)<\Xi,\left(\overline{ \mathcal{R}}_{2,\partial}(L,\tilde{a};\{m\}),{}_{l}\overline{\mathcal{R}}^{1}_ {1}(\beta_{l-1},L,\mathring{\bar{a}}),{}_{l-1}\overline{\mathcal{R}}^{1}_{1}( \beta_{l-1},L,\mathring{\bar{a}})\right)\neq(\emptyset,\emptyset,\emptyset)\right.\right\} \tag{4.208}\]
is finite for any \(\Xi>0\). In fact, assume that this is not the case; then there exists a sequence \((a_{\ell})_{\ell\in\mathbb{N}}\) of distinct elements \(a_{\ell}\in H_{1}(L;\mathbb{Z})\) such that \(\theta_{M}(a_{\ell})<\Xi+|A_{H_{t}}(\beta_{l})|\) for every \(\ell\in\mathbb{N}\), and at least one of the following conditions holds:
* \(\overline{\mathcal{R}}_{2,\partial}(L,\bar{a}_{\ell};\{m\})\neq\emptyset\) for every \(\ell\in\mathbb{N}\),
* \({}_{l}\overline{\mathcal{R}}^{1}_{1}(\beta_{l-1},L,\mathring{\bar{a}}_{\ell})\neq\emptyset\) for every \(\ell\in\mathbb{N}\),
* \({}_{l-1}\overline{\mathcal{R}}^{1}_{1}(\beta_{l-1},L,\mathring{\bar{a}}_{\ell})\neq\emptyset\) for every \(\ell\in\mathbb{N}\).
Consider the first case. Pick a \(u_{\ell}\in\overline{\mathcal{R}}_{2,\partial}(L,\tilde{a}_{\ell};\{m\})\) for each \(\ell\). After possibly passing to a subsequence, we may assume that \((u_{\ell})_{\ell\in\mathbb{N}}\) is Gromov convergent, so \((a_{\ell})_{\ell\in\mathbb{N}}\) is constant for \(\ell\gg 0\), which contradicts the assumption that the \(a_{\ell}\) are distinct. Similar arguments in the second and the third case lead to the same contradiction. Thus we conclude that (4.208) is a finite set, which shows that both \(A_{x}^{+}(\Xi)\) and \(A_{y,z}^{+}(\Xi)\) are finite for any \(\Xi>0\). This completes the verification of the conditions (i')--(vii') in Remark 32 (with (iii') replaced by the weaker condition (iii'') described above). Therefore, after projecting the chains \(\bar{x}_{m}\), \(\bar{\bar{x}}_{m}\), \(\bar{y}_{m}\), \(\bar{\bar{y}}_{m}\), \(\bar{z}_{m}\) and \(\bar{\bar{z}}_{m}\) to \(C_{*}^{\lambda}\) and \(\overline{C}_{*}^{\lambda}\), we obtain chains \(\underline{x}_{m}\), \(\underline{y}_{m}\), \(\underline{z}_{m}\), \(\bar{\underline{x}}_{m}\), \(\bar{\underline{y}}_{m}\) and \(\bar{\underline{z}}_{m}\) satisfying Theorem 31 (with the index \(m\) replaced with \(i\)).
## 5 Miscellaneous
We discuss some implications and variations of the main results of this paper. In this section, \(\mathbb{K}\) will be a field with \(\text{char}(\mathbb{K})=0\).
### Cyclic coordinate functions
Recall that the symplectic cohomology \(\mathit{SH}^{*}(M)\) of a Liouville manifold carries the structure of a Gerstenhaber algebra, with the Gerstenhaber bracket
\[[\cdot,\cdot]:\mathit{SH}^{i}(M)\otimes\mathit{SH}^{j}(M)\to\mathit{SH}^{i+j-1}(M) \tag{5.1}\]
defined by counting rigid points in the moduli space of solutions to the perturbed Cauchy-Riemann equation, with domains given by 1-parameter families of pair-of-pants, see [45], Section (4a) for details. When \(M=T^{*}Q\) is the cotangent bundle of a compact smooth manifold \(Q\), the Gerstenhaber bracket coincides with the loop bracket in string topology [9].
Inspired by the work of Fukaya [18], Ganatra and Pomerleano introduced the following notion (cf. [28], Remark 5.5). From now on, let \(M\) be a Liouville manifold with \(c_{1}(M)=0\).
**Definition 57**.: _A pair \((c,\phi)\in\mathit{SH}^{0}(M)\times\mathit{SH}^{1}(M)\) is called a coordinate function if_
\[[c,\phi]=1. \tag{5.2}\]
It is straightforward to check that \(T^{*}S^{1}\) admits a coordinate function. By the Kunneth formula for symplectic cohomology [43], a coordinate function exists for \(T^{*}(S^{1}\times K)\), where \(K\) is any closed manifold.
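To sketch the \(T^{*}S^{1}\) case (using the identification \(\mathit{SH}^{*}(T^{*}S^{1})\cong\mathbb{K}[w,w^{-1},\partial_{w}]\) recalled in the proof of Proposition 62 below, and assuming, up to sign conventions, that the Gerstenhaber bracket is computed by the Schouten bracket of Laurent polyvector fields): one has
\[[w,\partial_{w}]=\pm 1,\]
so \((c,\phi)=(w,\pm\partial_{w})\in\mathit{SH}^{0}(T^{*}S^{1})\times\mathit{SH}^{1}(T^{*}S^{1})\) is a coordinate function.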
The relation between coordinate functions and Fukaya-Irie's result is as follows. One expects that the \(\ell_{2}\) in the identity (1.2) should coincide with the loop bracket \([\cdot,\cdot]\) (up to sign), which means that the \(L_{\infty}\)-structure \((\ell_{k})_{k\geqslant 1}\) on \(H_{n-*}(\mathcal{L}L;\mathbb{R})\) is a deformation of the formal \(L_{\infty}\)-structure given by \([\cdot,\cdot]\). If this deformation is trivial, and \(y(a)\neq 0\) for some \(a\in H_{1}(L;\mathbb{Z})\) with \(\mu(a)=2\), then (1.2) reduces to (5.2), and we conclude that \(\mathit{SH}^{*}(T^{*}L)\) admits a coordinate function.
On the other hand, when restricting ourselves to dimension 3, Theorem 1 provides evidence to the following conjecture made in [28].
**Conjecture 58** (Ganatra-Pomerleano).: _If the cotangent bundle \(T^{*}Q\) of a closed, oriented 3-manifold \(Q\) admits a coordinate function, then \(Q\) is diffeomorphic to \(S^{1}\times\Sigma_{g}\) for some \(g\geqslant 0\)._
More generally, the discussions at the end of Section 1.2 suggest that coordinate functions exist for many Liouville domains which admit symplectic embeddings into \(\mathbb{C}^{n}\), or more generally, flexible Weinstein manifolds. In general, it seems quite difficult to construct interesting examples of Liouville manifolds with coordinate functions.
Inspired by Theorem 6, we introduce the following \(S^{1}\)-equivariant analogue of a coordinate function.
**Definition 59**.: _We call a pair \((\tilde{c},\tilde{\phi})\in\mathit{SH}^{1}_{S^{1}}(M)\times\mathit{SH}^{1}_{S^ {1}}(M)\) a cyclic coordinate function if they satisfy_
\[\{\tilde{c},\tilde{\phi}\}=1. \tag{5.3}\]
Here \(\{\cdot,\cdot\}\) is the degree \(-2\) Lie bracket on \(\mathit{SH}^{*}_{S^{1}}(M)\), which serves as part of the gravity algebra structure on \(\mathit{SH}^{*}_{S^{1}}(M)\) (cf. [44], Section (8b)). One can obtain non-trivial examples of Liouville manifolds with cyclic coordinate functions by relating it with the following notion introduced in [41].
**Definition 60**.: _We say that a Liouville manifold \(M\) admits a cyclic quasi-dilation if there is a pair \((\tilde{b},h)\in\mathit{SH}^{1}_{S^{1}}(M)\times\mathit{SH}^{0}(M)^{\times}\) such that \(B(\tilde{b})=h\)._
It follows from the definition of the Lie bracket \(\{\cdot,\cdot\}\) that if \(M\) admits a cyclic coordinate function, then
\[B(\tilde{c})\cdot B(\tilde{\phi})=1. \tag{5.4}\]
In particular, \(M\) admits a cyclic quasi-dilation. On the other hand, if \(M\) admits a cyclic dilation \(\tilde{b}\in\mathit{SH}^{1}_{S^{1}}(M)\), then the pair \((\tilde{b},\tilde{b})\) gives a cyclic coordinate function. Parallel to Conjecture 58, it seems reasonable to expect the following:
**Conjecture 61**.: _If the cotangent bundle \(T^{*}Q\) of a closed, oriented 3-manifold \(Q\) admits a cyclic coordinate function, then \(Q\) is diffeomorphic to \(S^{1}\times\Sigma_{g}\), where \(g\geqslant 0\), or a spherical space form._
One of the motivations of introducing the notion of a cyclic coordinate function is that we can prove its existence for an important class of Liouville manifolds--punctured \(A_{m}\) Milnor fibers, which are the hypersurfaces \(M\subset\mathbb{C}^{n}\times\mathbb{C}^{*}\) given by
\[\left\{(x_{1},\cdots,x_{n},z)\in\mathbb{C}^{n}\times\mathbb{C}^{*}\left|x_{1}^ {2}+\cdots+x_{n}^{2}=z^{m+1}-1\right.\right\}, \tag{5.5}\]
where \(m\geqslant 0\). Note that here we do not exclude the possibility \(m=0\), in which case \(M\) isn't a punctured Milnor fiber, but a self-plumbing of \(T^{*}S^{n}\).
From the point of view of Theorem 6, there are good reasons to believe that cyclic coordinate functions should exist for these punctured Milnor fibers, since adding a copy of \(T^{*}S^{n-1}\) at the origin to the total space of \(\pi:M\to\mathbb{C}^{*}\) defines a symplectic embedding from \(M\) to the \(A_{m}\) Milnor fiber, and the latter space admits a cyclic dilation (when \(n\geqslant 3\), a dilation). On the other hand, it is clear that these punctured Milnor fibers cannot be symplectically embedded into \(\mathbb{C}^{n}\) (except when \(m=0\)), or more generally, any Liouville manifolds with vanishing symplectic cohomology, so one doesn't expect the existence of coordinate functions.
**Proposition 62**.: _Let \(M\) be a punctured \(A_{m}\) Milnor fiber of dimension \(n\geqslant 2\). Then \(M\) admits a cyclic coordinate function._
Proof.: \(M\) is the total space of a Lefschetz fibration \(\pi:M\to\mathbb{C}^{*}\), whose smooth fibers are symplectomorphic to \(T^{*}S^{n-1}\), and the monodromy at the origin is trivial. There is a long exact sequence
\[\cdots\to\mathbb{K}^{\mathit{Crit}(\pi)}[-n]\to\mathit{SH}^{*}_{\mathit{ vert}}(M)\to\mathit{SH}^{*}(F)\otimes H^{*}(S^{1};\mathbb{K})\to\cdots \tag{5.6}\]
relating the symplectic cohomology of the fibers \(F\) and that of the total space, where \(\mathit{SH}^{*}_{\mathit{vert}}(M)\) is the "vertical" symplectic cohomology defined as the direct limit of a system of Hamiltonians which are small in the base, but linear with growing slopes in the fiber direction.
When \(n\geqslant 3\), it follows from the argument of [47], Section 7 (in particular, Example 7.5) that \(M\) admits a dilation, so in particular, a cyclic coordinate function.
When \(n=2\), a smooth fiber \(F\) of \(\pi\) is symplectomorphic to \(T^{*}S^{1}\), in which case we have \(\mathit{SH}^{*}(F)\cong\mathbb{K}[w,w^{-1},\partial_{w}]\), where \(|w|=0\) and \(|\partial_{w}|=1\). Both of the generators \(w\) and \(w^{-1}\) lie in the image of the BV operator, more precisely,
\[\Delta(-\partial_{w})=w^{-1},\Delta(w^{2}\partial_{w})=w, \tag{5.7}\]
and they can be lifted to elements of \(\mathit{SH}^{0}(M)^{\times}\). In fact, consider the composition
\[\mathit{SH}^{0}(F)\xrightarrow{\cong}\mathit{SH}^{0}_{\mathit{ vert}}(M)\to\mathit{SH}^{0}(M), \tag{5.8}\]
where the isomorphism follows from the fact that the contributions of the critical points of \(\pi\) are supported in degree 2, and the second map is the continuation map. Denote by \(h^{\pm 1}\in\mathit{SH}^{0}(M)\) the images of \(w^{\pm 1}\) under (5.8).
It is proved in [46], Lemma 19.5 that a class in \(\mathit{SH}^{1}(F)\) can be lifted to \(\mathit{SH}^{1}_{\mathit{vert}}(M)\) if and only if its image in \(H^{1}(V;\mathbb{K})\) vanishes under the closed-open map, where \(V\subset F\) is the vanishing cycle. Although this lifting criterion does not apply directly to the classes \(-\partial_{w}\) and \(w^{2}\partial_{w}\) in (5.7), we can always add copies of \(w\partial_{w}\) (which does not affect the result of applying the BV operator) to a class so that it becomes a class which lifts. In particular, the vector fields \((w-1)\partial_{w}\) and \((w^{2}+w)\partial_{w}\) can be lifted to classes \(b_{-},b_{+}\in\mathit{SH}^{1}(M)\). Since both the map \(\mathit{SH}^{*}_{\mathit{vert}}(M)\to\mathit{SH}^{*}(F)\) in the long exact sequence and the continuation map \(\mathit{SH}^{*}_{\mathit{vert}}(M)\to\mathit{SH}^{*}(M)\) are homomorphisms of BV algebras, we have \(\Delta(b_{-})=h^{-1}\)
and \(\Delta(b_{+})=h\). Denote by \(\tilde{b}_{-},\tilde{b}_{+}\in\mathit{SH}^{1}_{S^{1}}(M)\) the images of \(b_{-}\) and \(b_{+}\) under the erasing map \(I\); it follows that \(B(\tilde{b}_{-})\cdot B(\tilde{b}_{+})=1\), so \((\tilde{b}_{-},\tilde{b}_{+})\) gives a cyclic coordinate function for the 4-dimensional punctured Milnor fiber.
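As a consistency check on the computations above, note that under the normalization \(\Delta(w^{k}\partial_{w})=(k-1)w^{k-1}\) (an assumed convention, chosen so as to reproduce (5.7)), one has \(\Delta(w\partial_{w})=0\), which explains why adding multiples of \(w\partial_{w}\) does not change the image under the BV operator, and
\[\Delta\big((w-1)\partial_{w}\big)=\Delta(w\partial_{w})-\Delta(\partial_{w})=w^{-1},\qquad\Delta\big((w^{2}+w)\partial_{w}\big)=\Delta(w^{2}\partial_{w})+\Delta(w\partial_{w})=w,\]
in agreement with \(\Delta(b_{-})=h^{-1}\) and \(\Delta(b_{+})=h\).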
We mention here one potential application of the notions of coordinate functions and cyclic coordinate functions, namely proving the non-formality of the \(L_{\infty}\)-structures on the cochain complexes \(\mathit{SC}^{*}(M)\) and \(\mathit{SC}^{*}_{S^{1}}(M)\).
**Definition 63**.: _An \(L_{\infty}\)-algebra \(\mathcal{A}\) is called formal if it has a minimal model \(H^{*}(\mathcal{A})\) with \(\ell_{k}=0\) for \(k\geqslant 3\). It is called intrinsically formal if for any \(L_{\infty}\)-structure on \(H^{*}(\mathcal{A})\), we necessarily have \(\ell_{k}=0\) for \(k\geqslant 3\)._
Let \(M_{d}\) be the complement of \(d\geqslant n+2\) generic hyperplanes in \(\mathbb{CP}^{n}\); it admits a symplectic embedding into \(\mathbb{C}^{n}\). A generalization of our argument should lead to the identity (1.21), with the string homology \(\mathbb{H}^{S^{1}}_{*}\) replaced with the \(S^{1}\)-equivariant symplectic cohomology \(\mathit{SH}^{\bullet}_{S^{1}}(M_{d})\). On the other hand, it follows from [41], Theorem 2.3 that \(M_{d}\) does not admit a cyclic quasi-dilation, so in particular it does not admit a cyclic coordinate function. It follows from (1.21) that there must be some \(k\geqslant 3\) such that \(\tilde{\ell}_{k}\neq 0\) on \(\mathit{SH}^{\bullet}_{S^{1}}(M_{d})\). In particular, this shows that the \(L_{\infty}\)-algebra \(\mathit{SH}^{\bullet}_{S^{1}}(M_{d})\) cannot be intrinsically formal. Since it is reasonable to expect that the \(L_{\infty}\)-structure \((\tilde{\ell}_{k})_{k\geqslant 1}\) is homotopy equivalent to the geometric \(L_{\infty}\)-structure on \(\mathit{SC}^{\bullet}_{S^{1}}(M_{d})\) constructed by counting holomorphic curves (cf. [48]), we conjecture that \(\mathit{SC}^{\bullet}_{S^{1}}(M_{d})\) isn't formal as an \(L_{\infty}\)-algebra. Since \(\mathbb{C}^{n}\) has vanishing symplectic cohomology, a similar argument can be carried out in the non-equivariant case to show the non-intrinsic formality (and conjecturally non-formality) of the \(L_{\infty}\)-structure \((\ell_{k})_{k\geqslant 1}\) on \(\mathit{SC}^{*}(M_{d})\). The non-existence of a coordinate function follows from an argument similar to the proof of [41], Theorem 2.3, since \(M_{d}\) cannot be uniruled by cylinders. Note that these non-formality results fit with the geometric intuition that the affine variety \(M_{d}\) is uniruled by \(d\)-punctured spheres.
### Compact symplectic manifolds
We briefly discuss in this subsection how to combine the original idea of Fukaya [18] and the new ingredients in this paper to obtain restrictions on Lagrangian embedding in certain compact symplectic manifolds.
In [18], Fukaya applied similar ideas as described in the introduction to the compact symplectic manifold \(\mathbb{CP}^{n}\) and proved the following:
**Theorem 64** ([18], Theorem 14.1).: _Let \(L\subset\mathbb{CP}^{n}\) be a closed Lagrangian submanifold, which is a \(K(\pi,1)\) space and Spin. Then there exists \(\bar{a}\in\pi_{2}(\mathbb{CP}^{n},L)\) such that \(L\) bounds a holomorphic disc in the class \(\bar{a}\) with Maslov index 2. Moreover, \(a=\partial\bar{a}\in\pi_{1}(L)\) is non-zero, and its centralizer \(Z_{a}\subset\pi_{1}(L)\) has finite index._
**Remark 65**.: _When \(X=Y\times Z\), where \(Y\) is a closed Kahler manifold with a subcritical polarization in the sense of Biran-Cieliebak [6], and \(Z\) is chosen so that \(X\) is monotone, Theorem 64 is proved by Damian [15] for monotone Lagrangian submanifolds \(L\subset X\) which are \(K(\pi,1)\) spaces._
Theorem 64 is parallel to Corollary 8 in this paper. In particular, it implies that a Lagrangian \(L\subset\mathbb{CP}^{n}\) cannot be a hyperbolic manifold. This is a special case of the _Viterbo-Eliashberg theorem_[16], which states that if \(L\subset X\) is a closed Lagrangian submanifold in a _uniruled_ smooth algebraic variety \(X\), then \(L\) cannot be hyperbolic.
It is therefore desirable to have generalizations of Theorem 64 for closed Lagrangian submanifolds \(L\) in a more general class of uniruled smooth algebraic varieties \(X\). Inspired by the work of Biran-Cieliebak [6], we introduce the following notion.
**Definition 66**.: _Let \(X\) be a closed Kahler manifold. If there is a complex hypersurface \(\Sigma\subset X\) such that the complement \(X\backslash\Sigma\) is a Weinstein manifold which admits a cyclic dilation, then we call the pair \((X,\Sigma)\) a polarization with finite first Gutt-Hutchings capacity._
In view of Remark 65, it is natural to expect the following.
**Conjecture 67**.: _Let \(X\) be a Kahler manifold which admits a polarization \((X,\Sigma)\) with finite first Gutt-Hutchings capacity. Then the conclusion of Theorem 64 holds for any closed Lagrangian submanifold \(L\subset X\) which is a \(K(\pi,1)\) space and Spin._
It is not clear how to prove Conjecture 67 in general. Here let us take a look at a special case, which is inspired by the works of Abouzaid-Smith [4] and Ganatra-Pomerleano [28]. Let \(X\) be a smooth projective variety, and let \(\Sigma\subset X\) be a normal crossing divisor with smooth components \(\Sigma_{1},\cdots,\Sigma_{m}\). Assume that
\[K_{X}^{-1}\cong\mathcal{O}(\sum_{i=1}^{m}a_{i}\Sigma_{i}). \tag{5.9}\]
For a subset \(I\subset\{1,\cdots,m\}\), we introduce the notations \(\Sigma_{I}:=\bigcap_{i\in I}\Sigma_{i}\) and \(\Sigma_{I}^{\circ}:=\Sigma_{I}\backslash\bigcup_{j\notin I}\Sigma_{j}\). Locally near \(\Sigma_{I}\subset X\), there is an \(|I|\)-fold intersection \(U_{I}\) of the tubular neighborhoods \(U_{i}\) of each \(\Sigma_{i}\), and the iterated projection \(U_{I}\to\Sigma_{I}\) is a symplectic fibration with structure group \(U(1)^{|I|}\), with fibers a product of standard symplectic 2-discs. Denote by \(S_{I}\to\Sigma_{I}\) the associated \(T^{|I|}\)-bundle over \(\Sigma_{I}\), and by \(S_{I}^{\circ}\) its restriction to \(\Sigma_{I}^{\circ}\).
Let \(J_{X}\) be an almost complex structure on \(X\) which is compatible with the symplectic form \(\omega_{X}\), and let \(L\subset X\) be a closed Lagrangian submanifold which is disjoint from \(\Sigma\). For a \(J_{X}\)-holomorphic map \(u:(D,\partial D)\to(X,L)\) with \(u(0)\in\Sigma_{I}^{\circ}\), one can choose a local complex coordinate \(z\) near the origin so that the normal component \(u_{i}\) of \(u\) has an expansion
\[u_{i}(z)=a_{i}z^{v_{i}}+O(|z|^{v_{i}+1}), \tag{5.10}\]
where \(v_{i}\in\mathbb{N}\) and the coefficient \(a_{i}\neq 0\) is the \(v_{i}\)-jet normal to \(\Sigma_{i}\) modulo higher order terms, which takes value in \((T_{0}^{*}D)^{\otimes v_{i}}\otimes N_{u(0)}\Sigma_{i}\), where \(N\Sigma_{i}\) is the normal bundle of \(\Sigma_{i}\subset X\). Now fix an asymptotic marker \(\ell\) at \(0\in D\); it specifies via \(a_{i}\) a non-zero vector in \(N_{u(0)}\Sigma_{i}\backslash\{0\}\). Denote its image in the real projectivization \(\mathbb{P}N_{u(0)}\Sigma_{i}\) by \([a_{i}]\). Taking the direct sum over \(i\in I\) gives a point \([\bigoplus_{i\in I}a_{i}]\) in the fiber \(S_{I,u(0)}\) of the torus bundle \(S_{I}\to\Sigma_{I}\). Let \(\mathbf{v}=(v_{1},\cdots,v_{m})\in\mathbb{Z}_{\geqslant 0}^{m}\), where \(v_{i}=0\) for \(i\notin I\). The _enhanced evaluation map_ of \(u\) at \(0\) is defined as
\[\mathit{Ev}_{0}^{\mathbf{v}}(u):=\left(u(0),\left[\bigoplus_{i\in I}a_{i} \right]\right), \tag{5.11}\]
which keeps track of both the incidence condition and the tangency condition of the holomorphic curve \(u\) at \(0\) with the divisors \(\Sigma_{i}\), \(i\in I\).
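For instance, in the simplest case where \(I=\{i\}\) and \(v_{i}=1\), so that \(u\) meets \(\Sigma_{i}\) at \(u(0)\) with multiplicity one, \(\mathit{Ev}_{0}^{\mathbf{v}}(u)\) records the point \(u(0)\in\Sigma_{i}^{\circ}\) together with the direction \([a_{i}]\in\mathbb{P}N_{u(0)}\Sigma_{i}\) in which \(u\) approaches \(\Sigma_{i}\), read off from the asymptotic marker \(\ell\).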
For \(\bar{a}\in H_{2}(X,L)\) and \(\mathbf{v}_{l}=(v_{l,1},\cdots,v_{l,m})\in\mathbb{Z}_{\geqslant 0}^{m}\) supported on \(I_{l}\subset\{1,\cdots,m\}\), let
\[{}_{l}\mathcal{R}_{k+1}(L,\mathbf{v}_{l},\bar{a}) \tag{5.12}\]
be the moduli space of pairs
\[\left((D;z_{0},\cdots,z_{k},p_{1},\cdots,p_{l};\ell),u\right), \tag{5.13}\]
where \(z_{0},\cdots,z_{k}\in\partial D\), the auxiliary marked points \(p_{1},\cdots,p_{l}\) satisfy (4.24), and the asymptotic marker \(\ell\) at \(0\in D\) points towards \(p_{l}\). \(u:(D,\partial D)\to(X,L)\) is a \(J_{X}\)-holomorphic map such that \(u(0)\cdot\Sigma_{i}=v_{l,i}\) for \(i\in I_{l}\), and \([u]=\bar{a}\). The enhanced evaluation map gives rise to a map
\[\mathit{Ev}_{0}^{\mathbf{v}_{l}}:{}_{l}\mathcal{R}_{k+1}(L,\mathbf{v}_{l},\bar{a})\to S_{I_{l}}^{\circ}. \tag{5.14}\]
For any locally finite chain \(\alpha_{l}\in C^{BM}_{\ast}(S^{\circ}_{I_{l}};\mathbb{K})\), we form the fiber product
\[{}_{l}\mathcal{R}_{k+1}(L,\mathbf{v}_{l},\alpha_{l},\bar{a}):={}_{l}\mathcal{R}_{k+1}(L,\mathbf{v}_{l},\bar{a})\times_{\mathit{Ev}_{0}^{\mathbf{v}_{l}}}\alpha_{l}. \tag{5.15}\]
The compactification \({}_{l}\overline{\mathcal{R}}_{k+1}(L,\mathbf{v}_{l},\alpha_{l},\bar{a})\) is an admissible K-space, with its codimension \(1\) boundary covered by the following strata:
\[\bigsqcup_{\begin{subarray}{c}k_{1}+k_{2}=k+1\\ \bar{a}_{1}+\bar{a}_{2}=\bar{a}\end{subarray}}\left({}_{l}\overline{\mathcal{R}}_{k_{1}+1}(L,\mathbf{v}_{l},\alpha_{l},\bar{a}_{1})\ {}_{i}\times_{0}\overline{\mathcal{R}}_{k_{2}+1}(L,\bar{a}_{2})\sqcup\overline{\mathcal{R}}_{k_{1}+1}(L,\bar{a}_{1})\ {}_{i}\times_{0}\ {}_{l}\overline{\mathcal{R}}_{k_{2}+1}(L,\mathbf{v}_{l},\alpha_{l},\bar{a}_{2})\right), \tag{5.16}\]
\[{}_{j}\overline{\mathcal{P}}(\mathbf{v}_{l},\alpha_{l},x_{j})\times{}_{l-j}\overline{\mathcal{R}}_{k+1}^{1}(x_{j},L,\bar{a}),0\leqslant j\leqslant l, \tag{5.17}\]
\[{}_{l-1}\overline{\mathcal{R}}_{k+1}^{S^{1}}(L,\mathbf{v}_{l},\alpha_{l},\bar{a}), \tag{5.18}\]
\[{}_{l}^{j,j+1}\overline{\mathcal{R}}_{k+1}^{S^{1}}(L,\mathbf{v}_{l},\alpha_{l},\bar{a}),1\leqslant j\leqslant l-1, \tag{5.19}\]
where \({}_{j}\mathcal{P}(\mathbf{v}_{l},\alpha_{l},x_{j})\) are moduli spaces of (parametrized) logarithmic PSS (Piunikhin-Salamon-Schwarz) maps studied in [28], with \(x_{j}\) being an orbit of the vector field \(X_{H_{t}}\) of some admissible Hamiltonian function (defined using the radial coordinate on the collar neighborhood of the divisor complement \(X\backslash\Sigma\)). When \(j=0\), this is defined as the fiber product
\[\mathcal{P}(\mathbf{v}_{l},\alpha_{l},x_{j}):=\mathcal{P}(\mathbf{v}_{l},x_{j})\times_{\mathit{Ev}_{0}^{\mathbf{v}_{l}}}\alpha_{l}, \tag{5.20}\]
where \(\mathcal{P}(\mathbf{v}_{l},x_{j})\) is the moduli space of maps \(u:\mathbb{C}\to X\) satisfying the Floer equation \((du-X_{H_{t}}\otimes dt)^{0,1}=0\), with the obvious asymptotic and tangency conditions. In general, \({}_{j}\mathcal{P}(\mathbf{v}_{l},\alpha_{l},x_{j})\) is defined as the space of the same maps \(u:\mathbb{C}\to X\), but from a varying family of domains obtained by equipping \(\mathbb{C}\) with the auxiliary marked points \(p_{l-j+1},\cdots,p_{l}\), which are strictly radially ordered in the sense that
\[0<|p_{l}|<\cdots<|p_{l-j+1}|. \tag{5.21}\]
As before, the boundary strata (5.16) come from disc bubbles along the Lagrangian boundary \(L\), (5.17) arise from breakings of planes at the origin \(0\in D\) when \(|p_{l-j+1}|\to 0\), (5.18) corresponds to the locus where \(|p_{1}|=\frac{1}{2}\), and (5.19) are the loci defined by \(|p_{j}|=|p_{j+1}|\).
A count of rigid elements in the moduli spaces \({}_{j}\overline{\mathcal{P}}(\mathbf{v},\alpha,x)\) for varying \(\alpha\in C^{BM}_{\ast}(S^{\circ}_{I};\mathbb{K})\) and \(\mathbf{v}\in\mathbb{Z}^{m}_{>0}\) supported on \(I\subset\{1,\cdots,m\}\) defines a map
\[PSS^{log}_{j}:C^{\ast+2j}_{log}(X,\Sigma)\to SC^{\ast}(M) \tag{5.22}\]
on the logarithmic cochain complex
\[C^{\ast}_{log}(X,\Sigma):=\bigoplus_{I\subset\{1,\cdots,m\}}\bigoplus_{\mathbf{v}}\mathbf{t}^{\mathbf{v}}\,C^{BM}_{2n-|I|-\ast}(S^{\circ}_{I};\mathbb{K}), \tag{5.23}\]
where the inner direct sum is over \(\mathbf{v}\in\mathbb{Z}^{m}_{\geqslant 0}\) supported on \(I\).
When \(j=0\), the positive energy part of this map is a chain map on _admissible_ cochains \(C^{\ast}_{log}(X,\Sigma)_{ad}\subset C^{\ast}_{log}(X,\Sigma)\) (cf. [28], Definition 3.20), which induces on the cohomology level the logarithmic PSS map \(H^{\ast}_{log}(X,\Sigma)_{ad}\to SH^{\ast}_{+}(M)\) studied in [28].
Now suppose that there exists a family of locally finite chains \((\alpha_{l})_{l\geqslant 0}\) and vectors \((\mathbf{v}_{l})_{l\geqslant 0}\) such that \(\sum_{l=0}^{\infty}PSS^{log}_{l}(\alpha_{l}\mathbf{t}^{\mathbf{v}_{l}})\) hits the identity \(e_{M}\in SC^{0}(M)\), and \(PSS^{log}_{j}(\alpha_{l}\mathbf{t}^{\mathbf{v}_{l}})=0\) for \(j<l\). Then, by pushing forward the virtual fundamental chains of the moduli spaces in (5.16)-(5.19), arguments similar to those in Section 4.5 would imply the identity (1.21), and therefore confirm Conjecture 67 for Lagrangian submanifolds \(L\subset X\) which are disjoint from \(\Sigma\).
When \(X\backslash\Sigma\) admits a dilation \(b\in\mathit{SH}^{1}(X\backslash\Sigma)\), and the dilation comes from an admissible cochain \(\alpha\mathbf{t}^{\mathbf{v}}\), i.e. \(\mathit{PSS}^{log}(\alpha\mathbf{t}^{\mathbf{v}},\mathfrak{b})=b\), where \(\mathfrak{b}\in C^{\ast}(X\backslash\Sigma;\mathbb{K})\) is a bounding cochain for \(\alpha\mathbf{t}^{\mathbf{v}}\), it follows from [28], Lemmas 4.29 and 4.30 that there exists an \(\tilde{\alpha}\in C^{BM}_{\ast}(S^{\circ}_{I};\mathbb{K})\) such that \(\mathit{PSS}^{log}(\tilde{\alpha}\mathbf{t}^{\mathbf{v}})\) hits the identity \(e_{M}\in SC^{0}(M)\). In particular, this confirms Conjecture 67 for the pairs \((X,\Sigma)\) studied in [28], Sections 6.1 and 6.4, where a dilation can be constructed via the logarithmic PSS map.
## Appendix A Kuranishi structures and virtual fundamental chains
For the reader's convenience, we collect in this appendix some basic notions and useful facts in the theory of Kuranishi structures. Our references here are [20, 21, 22, 23] and [33]. Let \(X\) be a separable, metrizable topological space.
### Basic notions
Instead of writing down all the definitions, we will only recall here the most basic notions and provide references for their variations and generalizations, so the reader may use this section as a dictionary for the notions in Kuranishi theory used in the main content of our paper. Following [33], we shall define Kuranishi charts using manifolds instead of orbifolds. This is sufficient for our purposes because sphere bubbles do not appear for Lagrangian submanifolds in Liouville manifolds.
**Definition 68**.: _A Kuranishi chart of \(X\) is a quadruple \(\mathcal{U}=(U,\mathcal{E},s,\psi)\) such that_
1. \(U\) _is a smooth manifold,_
2. \(\mathcal{E}\) _is a smooth vector bundle on_ \(U\)_,_
3. \(s\) _is a smooth section of_ \(\mathcal{E}\)_,_
4. \(\psi:s^{-1}(0)\to X\) _is a homeomorphism onto an open subset of_ \(X\)_._
_We call \(U\) a Kuranishi neighborhood, \(\mathcal{E}\) an obstruction bundle, \(s\) a Kuranishi map, and \(\psi\) a parametrization. \(\dim\mathcal{U}:=\dim U-\mathrm{rank}\ \mathcal{E}\) is called the dimension of \(\mathcal{U}\). An orientation of \(\mathcal{U}\) is a pair of orientations of \(U\) and \(\mathcal{E}\)._
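As a toy illustration of Definition 68 (not taken from the references above): let \(X=\{pt\}\), and take \(U=\mathbb{R}\), \(\mathcal{E}=\mathbb{R}\times\mathbb{R}\) the trivial line bundle, \(s(x)=x^{2}\), and \(\psi\) the map sending \(s^{-1}(0)=\{0\}\) to the point. Then \(\mathcal{U}=(U,\mathcal{E},s,\psi)\) is a Kuranishi chart of \(X\) of dimension \(\dim U-\mathrm{rank}\ \mathcal{E}=1-1=0\), oriented by the standard orientations of \(U\) and \(\mathcal{E}\), even though the Kuranishi map \(s\) is not transverse to the zero section.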
To globalize the notion of a Kuranishi chart, we need the following two definitions.
**Definition 69**.: _Let \(\mathcal{U}_{i}=(U_{i},\mathcal{E}_{i},s_{i},\psi_{i})\), \(i=1,2\) be Kuranishi charts of \(X\). An embedding of Kuranishi charts \(\Phi:\mathcal{U}_{1}\to\mathcal{U}_{2}\) is a pair \(\Phi=(\phi,\hat{\phi})\) such that_
1. \(\phi:U_{1}\to U_{2}\) _is an embedding of smooth manifolds,_
2. \(\hat{\phi}:\mathcal{E}_{1}\to\mathcal{E}_{2}\) _is an embedding of smooth vector bundles over_ \(\phi\)_,_
3. \(\hat{\phi}\circ s_{1}=s_{2}\circ\phi\)_,_
4. \(\psi_{2}\circ\phi=\psi_{1}\) _on_ \(s_{1}^{-1}(0)\)_,_
5. _for any_ \(x\in s_{1}^{-1}(0)\)_, the covariant derivative_ \[D_{\phi(x)}s_{2}:\frac{T_{\phi(x)}U_{2}}{(D_{x}\phi)(T_{x}U_{1})}\to\frac{( \mathcal{E}_{2})_{\phi(x)}}{\hat{\phi}((\mathcal{E}_{1})_{x})}\] (A.1) _is an isomorphism._
**Definition 70**.: _Let \(\mathcal{U}_{i}=(U_{i},\mathcal{E}_{i},s_{i},\psi_{i})\), \(i=1,2\) be Kuranishi charts of \(X\). A coordinate change in weak sense (resp. strong sense) from \(\mathcal{U}_{1}\) to \(\mathcal{U}_{2}\) is a triple \(\Phi_{21}=(U_{21},\phi_{21},\hat{\phi}_{21})\) satisfying the first two (resp. all three) of the requirements below:_
1. \(U_{21}\) _is an open subset of_ \(U_{1}\)_;_
2. \((\phi_{21},\hat{\phi}_{21})\) _is an embedding of Kuranishi charts_ \(\mathcal{U}_{1}|_{U_{21}}\to\mathcal{U}_{2}\)_;_
3. \(\psi_{1}\left(s_{1}^{-1}(0)\cap U_{21}\right)=\mathrm{im}\psi_{1}\cap\mathrm{ im}\psi_{2}\)_._
**Definition 71**.: _A Kuranishi structure \(\widehat{\mathcal{U}}\) of \(X\) of dimension \(d\) consists of_
* _a Kuranishi chart_ \(\mathcal{U}_{p}=(U_{p},\mathcal{E}_{p},s_{p},\psi_{p})\) _of dimension_ \(d\) _at_ \(p\) _for every_ \(p\in X\)_,_
* _a coordinate change in the weak sense_ \(\Phi_{pq}=(U_{pq},\phi_{pq},\hat{\phi}_{pq}):\mathcal{U}_{q}\to\mathcal{U}_{p}\) _for every_ \(p\in X\) _and_ \(q\in\mathrm{im}\psi_{p}\) _so that_ \(\Phi_{pp}=(U_{p},\mathit{id},\mathit{id})\)_,_
_such that_
* \(o_{q}\in U_{pq}\) _for every_ \(q\in\mathrm{im}\psi_{p}\)_,_
* _for every_ \(p\in X\)_,_ \(q\in\mathrm{im}\psi_{p}\) _and_ \(r\in\psi_{q}\left(s_{q}^{-1}(0)\cap U_{pq}\right)\)_, there holds_ \(\Phi_{pr}|_{U_{pqr}}=\Phi_{pq}\circ\Phi_{qr}|_{U_{pqr}}\)_, where_ \(U_{pqr}:=\phi_{qr}^{-1}(U_{pq})\cap U_{pr}\)_._
_The pair \((X,\widehat{\mathcal{U}})\) is called a K-space of dimension \(d\). We say the Kuranishi structure \(\widehat{\mathcal{U}}\) is oriented if each Kuranishi chart \(\mathcal{U}_{p}\) is oriented and \(\phi_{pq}\) preserves orientations for every \(p\in X\) and \(q\in\mathrm{im}\psi_{p}\)._
The analogy of the notion of a manifold with corners in the theory of Kuranishi structures is called an _admissible K-space_, see [23], Chapters 17 and 25 for details.
Just as one can attach collar neighborhoods to manifolds with boundary, the same can be done for admissible K-spaces, see [23], Chapter 17 for the theory of collared Kuranishi structures. In particular, for \(\tau>0\), the notion of a \(\tau\)-outer collaring of an admissible K-space \(X\) used in Section 4.5 is defined in [23], Definition 17.29.
**Definition 72**.: _Let \(\widehat{\mathcal{U}}\) be a Kuranishi structure of \(X\), and \(Y\) a topological space._
* _A strongly continuous map_ \(\hat{f}:(X,\widehat{\mathcal{U}})\to Y\) _assigns a continuous map_ \(f_{p}:U_{p}\to Y\) _for each_ \(p\in X\) _such that_ \(f_{p}\circ\phi_{pq}=f_{q}\) _holds on_ \(U_{pq}\)_._
* _When_ \(Y\) _is a smooth manifold, we say that_ \(\hat{f}\) _is strongly smooth if each_ \(f_{p}\) _is smooth._
The notion of a strongly smooth map generalizes to that of an _admissible map_ for admissible K-spaces.
In order to define the pushforward of virtual fundamental chains of K-spaces via strongly continuous and submersive maps, we need to equip the K-spaces with _CF-perturbations_. Roughly speaking, these are triples \(\mathcal{S}_{x}=(W_{x},\omega_{x},\{\mathfrak{s}_{x}^{\varepsilon}\})\) for every point \(x\in V_{x}\subset U_{p}\) satisfying certain conditions, where \(\mathfrak{s}_{x}^{\varepsilon}\) is a family of sections of \(\mathcal{E}_{p}\) over the manifold chart \(V_{x}\) of \(U_{p}\), parametrized by an open neighborhood \(W_{x}\) of \(0\) in some finite-dimensional vector space, and \(\omega_{x}\) is a top degree differential form on \(W_{x}\). See [23], Definition 7.4 for details.
**Definition 73**.: _Let \((X,\widehat{\mathcal{U}})\) be a K-space and \(N\) a manifold with corners._
* _A strongly continuous map_ \(\hat{f}:(X,\widehat{\mathcal{U}})\to N\) _is said to be a corner stratified smooth map if_ \(f_{p}:U_{p}\to N\) _is a corner stratified smooth map (__[_23_]__, Definition 26.1) for any_ \(p\in X\)_._
* _A corner stratified smooth map_ \(\hat{f}:(X,\widehat{\mathcal{U}})\to N\) _is a corner stratified weak submersion if_ \(f_{p}:U_{p}\to N\) _is a corner stratified submersion for any_ \(p\in X\)_._
* _Let_ \(\widehat{\mathcal{S}}\) _be a CF-perturbation of_ \(X\)_. We say that a corner stratified smooth map_ \(\hat{f}:(X,\widehat{\mathcal{U}})\to N\) _is a corner stratified strong submersion with respect to_ \(\widehat{\mathcal{S}}\) _if the following holds. Let_ \(p\in X\) _and_ \((U_{p},\mathcal{E}_{p},s_{p},\psi_{p})\) _is a Kuranishi chart at_ \(p\)_. Let_ \((V_{x},\phi_{x})\) _be a manifold chart of_ \(U_{p}\) _and_ \((W_{x},\omega_{x},\{\mathfrak{s}_{x}^{\varepsilon}\})\) _be a representative of_ \(\widehat{\mathcal{S}}\) _in the chart_ \(V_{x}\)_, then_ \[f\circ\psi_{p}\circ\phi_{x}\circ pr_{V}:(\mathfrak{s}_{x}^{\varepsilon})^{-1}(0)\to N\] (A.2) _is a corner stratified submersion, where_ \(pr_{V}\) _is the projection_ \(V_{x}\times W_{x}\to V_{x}\)_._
To achieve the compatibility of CF-perturbations, we need the notion of a _thickening_ of Kuranishi structures, see [23], Definition 5.3.
### \(C^{0}\)-approximation
We recall Irie's \(C^{0}\)-approximation lemma ([33], Sections 7.5 and 9, see also [34], Section 4) and its consequences.
**Definition 74**.: _Let \((X,\widehat{\mathbb{U}})\) be a K-space, \((Y,d_{Y})\) be a metric space, and \(\hat{f},\hat{g}:(X,\widehat{\mathbb{U}})\to Y\) be strongly smooth maps. For any \(\varepsilon>0\), we say that \(\hat{f}\) and \(\hat{g}\) are \(\varepsilon\)-close, if_
\[d_{Y}\left(f_{p}(x),g_{p}(x)\right)<\varepsilon\] (A.3)
_for every \(p\in X\) and \(x\in U_{p}\)._
**Theorem 75** (\(C^{0}\)-approximation lemma).: _Let \((X,\widehat{\mathbb{U}})\) be a K-space and \(\hat{f}:(X,\widehat{\mathbb{U}})\to\mathcal{L}_{k+1}^{\,con}\) be a strongly continuous map such that \(ev_{j}^{\mathcal{L}}\circ\hat{f}:(X,\widehat{\mathbb{U}})\to L\) is strongly smooth for every \(0\leqslant j\leqslant k\). Let \(Z\subset X\) be a closed subset and let \(\hat{g}:(Z,\widehat{\mathbb{U}}|_{Z})\to\mathcal{L}_{k+1}\) be a strongly smooth map such that_
* \(ev_{j}^{\mathcal{L}}\circ\hat{g}=ev_{j}^{\mathcal{L}}\circ\hat{f}|_{Z}\) _for every_ \(0\leqslant j\leqslant k\)_,_
* \(\hat{g}\) _is_ \(\varepsilon\)_-close to_ \(\hat{f}|_{Z}\) _with respect to_ \(d_{\mathcal{L}_{k+1}}\)_._
_If \(\varepsilon<\rho_{L}\), where \(\rho_{L}\) is the constant fixed in Section 4.4, there exists an open substructure \(\widehat{\mathbb{U}}_{0}\) of \(\widehat{\mathbb{U}}\), and a strongly smooth map \(\hat{g}^{\prime}:(X,\widehat{\mathbb{U}}_{0})\to\mathcal{L}_{k+1}\) such that_
* \(\hat{g}^{\prime}\) _is_ \(\varepsilon\)_-close to_ \(\hat{f}|_{\widehat{\mathbb{U}}_{0}}\)_,_
* \(ev_{j}^{\mathcal{L}}\circ\hat{g}^{\prime}=ev_{j}^{\mathcal{L}}\circ\hat{f}|_{ \widehat{\mathbb{U}}_{0}}\) _for every_ \(0\leqslant j\leqslant k\)_,_
* \(\hat{g}^{\prime}=\hat{g}\) _on_ \(\widehat{\mathbb{U}}_{0}|_{Z}\)_._
The \(C^{0}\)-approximation lemma, combined with [23], Proposition 17.81, yields the following.
**Theorem 76** ([33], Theorem 7.33).: _Suppose we are given the following data_
* \(k\in\mathbb{Z}_{\geq 0}\)_,_ \(\tau\in(0,1)\) _and_ \(\varepsilon\in(0,\rho_{L})\)_;_
* \(\tau\)_-collared K-space_ \((X,\widehat{\mathbb{U}})\)_;_
* \(\tau\)_-collared strongly continuous map_ \(\hat{f}:(X,\widehat{\mathbb{U}})\to\mathcal{L}_{k+1}^{\,con}\) _such that_ \(ev_{j}^{\mathcal{L}}\circ\hat{f}:(X,\widehat{\mathbb{U}})\to L\) _is a_ \(\tau\)_-collared admissible map for every_ \(0\leqslant j\leqslant k\)_, and_ \(ev_{0}^{\mathcal{L}}\circ\hat{f}\) _is a stratified weak submersion;_
* _for every_ \(l\in\mathbb{N}\)_, a_ \(\tau\)_-collared Kuranishi structure_ \(\widehat{\mathbb{U}}_{l}^{+}\) _on_ \(\widehat{S}_{l}(X)\) _which is a thickening of_ \(\widehat{\mathbb{U}}|_{\widehat{S}_{l}(X)}\)_;_
* _for_ \(l_{1},l_{2}\in\mathbb{N}\)_, an_ \(\frac{(l_{1}+l_{2})!}{l_{1}!\,l_{2}!}\)_-fold covering of_ \(\tau\)_-collared K-spaces_ \[\widehat{S}_{l_{1}}\left(\widehat{S}_{l_{2}}(X),\widehat{\mathbb{U}}_{l_{2}}^{+}\right)\to\left(\widehat{S}_{l_{1}+l_{2}}(X),\widehat{\mathbb{U}}_{l_{1}+l_{2}}^{+}\right)\] (A.4) _such that the following diagrams commute for_ \(l_{1},l_{2},l_{3}\in\mathbb{N}\)_:_ \[\begin{CD}\widehat{S}_{l_{1}}\left(\widehat{S}_{l_{2}}\left(\widehat{S}_{l_{3}}(X),\widehat{\mathbb{U}}_{l_{3}}^{+}\right)\right)@>{}>{}>\widehat{S}_{l_{1}+l_{2}}\left(\widehat{S}_{l_{3}}(X),\widehat{\mathbb{U}}_{l_{3}}^{+}\right)\\ @V{}V{}V@V{}V{}V\\ \widehat{S}_{l_{1}}\left(\widehat{S}_{l_{2}+l_{3}}(X),\widehat{\mathbb{U}}_{l_{2}+l_{3}}^{+}\right)@>{}>{}>\left(\widehat{S}_{l_{1}+l_{2}+l_{3}}(X),\widehat{\mathbb{U}}_{l_{1}+l_{2}+l_{3}}^{+}\right)\end{CD}\] (A.5) \[\begin{CD}\widehat{S}_{l_{1}}\left(\widehat{S}_{l_{2}}(X,\widehat{\mathbb{U}})\right)@>{}>{}>\widehat{S}_{l_{1}}\left(\widehat{S}_{l_{2}}(X),\widehat{\mathbb{U}}_{l_{2}}^{+}\right)\\ @V{}V{}V@V{}V{}V\\ \widehat{S}_{l_{1}+l_{2}}(X,\widehat{\mathbb{U}})@>{}>{}>\left(\widehat{S}_{l_{1}+l_{2}}(X),\widehat{\mathbb{U}}_{l_{1}+l_{2}}^{+}\right)\end{CD}\] (A.6)
* \(a\) _\(\tau\)-collared CF-perturbation_ \(\widehat{\mathbb{S}}_{l}^{+}\) _of_ \(\left(\widehat{S}_{l}(X),\widehat{\mathbb{U}}_{l}^{+}\right)\) _for every_ \(l\in\mathbb{N}\)_, such that the pullback of_ \(\widehat{\mathbb{S}}_{l_{1}+l_{2}}^{+}\) _by (A.4) coincides with the restriction of_ \(\widehat{\mathbb{S}}_{l_{2}}^{+}\) _for every_ \(l_{1},l_{2}\in\mathbb{N}\)_;_
* \(a\) _\(\tau\)-collared admissible map_ \(\hat{f}_{l}^{+}:\left(\widehat{S}_{l}(X),\widehat{\mathbb{U}}_{l}^{+}\right) \rightarrow\mathcal{L}_{k+1}\) _for every_ \(l\in\mathbb{N}\) _such that_
* _the pullback of_ \(\hat{f}_{l_{1}+l_{2}}^{+}\) _by (A.4) coincides with the restriction of_ \(\hat{f}_{l_{2}}^{+}\) _for every_ \(l_{1},l_{2}\in\mathbb{N}\)_,_
* _for every_ \(0\leqslant j\leqslant k\)_,_ \(ev_{j}^{\mathcal{L}}\circ\hat{f}_{l}^{+}:\left(\widehat{S}_{l}(X),\widehat{\mathbb{U}}_{l}^{+}\right)\to L\) _coincides with the restriction of_ \(ev_{j}^{\mathcal{L}}\circ\hat{f}\) _to_ \(\widehat{S}_{l}(X)\) _via the KK-embedding_ \(\widehat{\mathbb{U}}|_{\widehat{S}_{l}(X)}\rightarrow\widehat{\mathbb{U}}_{l}^{+}\)_,_
* \(ev_{0}^{\mathcal{L}}\circ\hat{f}_{l}^{+}:\left(\widehat{S}_{l}(X),\widehat{\mathbb{U}}_{l}^{+}\right)\to L\) _is a stratified strong submersion with respect to_ \(\widehat{\mathbb{S}}_{l}^{+}\)_,_
* \(\hat{f}_{l}^{+}\) _is_ \(\varepsilon\)_-close to_ \(\hat{f}|_{\widehat{S}_{l}(X)}\)_._
_Then, for any \(\tau^{\prime}\in(0,\tau)\), there exist the following data._
* \(A\) \(\tau^{\prime}\)_-collared Kuranishi structure_ \(\widehat{\mathbb{U}}^{+}\) _on_ \(X\)_, which is a thickening of_ \(\widehat{\mathbb{U}}\)_._
* _An isomorphism of_ \(\tau^{\prime}\)_-collared Kuranishi structures_ \(\widehat{\mathbb{U}}^{+}|_{\widehat{S}_{l}(X)}\cong\widehat{\mathbb{U}}_{l}^{+}\) _for every_ \(l\in\mathbb{N}\)_._
* \(A\) \(\tau^{\prime}\)_-collared CF-perturbation_ \(\widehat{\mathbb{S}}^{+}\) _of_ \((X,\widehat{\mathbb{U}}^{+})\) _such that_ \(\widehat{\mathbb{S}}^{+}|_{\widehat{S}_{l}(X)}\) _coincides with_ \(\widehat{\mathbb{S}}_{l}^{+}\) _via the isomorphism of K-spaces_ \(\widehat{\mathbb{U}}^{+}|_{\widehat{S}_{l}(X)}\cong\widehat{\mathbb{U}}_{l}^{+}\)_._
* \(A\) \(\tau^{\prime}\)_-collared admissible map_ \(\hat{f}^{+}:(X,\widehat{\mathbb{U}}^{+})\rightarrow\mathcal{L}_{k+1}\) _such that_
* \(\hat{f}^{+}\) _is_ \(\varepsilon\)_-close to_ \(\hat{f}\)_;_
* _for every_ \(0\leqslant j\leqslant k\)_,_ \(ev_{j}^{\mathcal{L}}\circ\hat{f}^{+}\) _coincides with_ \(ev_{j}\circ\hat{f}\) _with respect to the KK-embedding_ \(\widehat{\mathbb{U}}\rightarrow\widehat{\mathbb{U}}^{+}\)_;_
* \(ev_{0}\circ\hat{f}^{+}:(X,\widehat{\mathbb{U}}^{+})\to L\) _is a stratified strong submersion with respect to_ \(\widehat{\mathbb{S}}^{+}\)_._
**Remark 77**.: _There is a version of Theorem 76 for \(\tau\)-collared K-spaces \((X,\widehat{\mathbb{U}})\), and a \(\tau\)-collared strong continuous map_
\[\hat{f}:(X,\widehat{\mathbb{U}})\rightarrow[a,b]^{\boxplus\tau}\times\mathcal{L}_{k+1}^{con},\] (A.7)
_where \(a<b\) are real numbers and \([a,b]^{\boxplus\tau}:=[a-\tau,b+\tau]\)._
**Proposition 78**.: _Let \(X\) be the \(\tau\)-outer collaring of one of the moduli spaces \(\overline{\mathcal{R}}_{k+1}(L,\bar{a};P)\), \(\overline{\mathcal{R}}_{k+2,\partial}(L,\bar{a};P)\), \({}_{l}\overline{\mathcal{R}}_{k+1}^{1}(x,L,\hat{\bar{a}};P)\), \({}_{l-1}\overline{\mathcal{R}}_{k+1}^{S^{1}}(x,L,\hat{\bar{a}};P)\) or \({}_{l}^{j,j+1}\overline{\mathcal{R}}_{k+1}^{1}(x,L,\hat{\bar{a}};P)\), where the number \(\tau\) is chosen as in Remark 49, then the thickening \(\widehat{\mathbb{U}}^{+}\) of the Kuranishi structure and the CF-perturbation \(\widehat{\mathbb{S}}^{+}\) that we obtained on \(X\) in Theorem 76 can be taken to be cyclically invariant._
Proof.: According to Fukaya [19], Sections 3 and 5, for these moduli spaces we can start in Theorem 76 with the Kuranishi structures \(\widehat{\mathbb{U}}_{l}^{+}\) and CF-perturbations \(\widehat{\mathbb{S}}_{l}^{+}\) that are invariant under the cyclic permutations of the boundary marked points \(z_{0},\cdots,z_{k}\). In fact, as we have seen in the proof of Theorem 46, we can first apply Theorem 76 to the boundaries and corners of the moduli spaces defined using domains without boundary marked points, which gives us a thickening \(\widehat{\mathbb{U}}^{+}\) of the original Kuranishi structure \(\widehat{\mathbb{U}}\). The pullback of \(\widehat{\mathbb{U}}^{+}\) via the forgetful map \(f_{k+1}\) (cf. (4.99)) which erases all the boundary marked points then equips \(X\) with a cyclic invariant \(\frac{1}{2}\)-collared Kuranishi structure, extending the cyclic invariant Kuranishi structures on its boundaries and corners, which are obtained by pullbacks as well. Similar considerations apply to CF-perturbations.
### Integration along the fiber
We recall the integration along the fiber associated to a strongly smooth map, and the compatibility properties satisfied by the de Rham chains defined in this way. The main references here are [23], Chapter 7 and [33], Section 7.1.
**Theorem 79** ([33], Theorem 7.2).: _Let \((X,\widehat{\mathbb{U}})\) be a compact, oriented K-space of dimension \(d\) without boundary, equipped with a strongly smooth map \(\hat{f}:(X,\widehat{\mathbb{U}})\to\mathcal{L}_{k+1}\), a differential form \(\hat{\omega}\), and a CF-perturbation \(\widehat{\mathbb{S}}=(\widehat{\mathbb{S}}^{\varepsilon})_{0<\varepsilon \leqslant 1}\). We assume that \(\widehat{\mathbb{S}}\) is transversal to \(0\), and \(ev_{0}\circ\hat{f}:(X,\widehat{\mathbb{U}})\to L\) is strongly submersive with respect to \(\widehat{\mathbb{S}}\). Then one can define a de Rham chain_
\[\hat{f}_{\ast}(X,\widehat{\mathbb{U}},\hat{\omega},\widehat{\mathbb{S}}^{ \varepsilon})\in C^{dR}_{d-|\hat{\omega}|}(\mathcal{L}_{k+1})\] (A.8)
_for sufficiently small \(\varepsilon\), so that Stokes' formula and fiber product formula hold._
The Stokes' formula and the fiber product formula in Theorem 79 are stated as follows.
**Theorem 80** (Stokes' formula).: _For sufficiently small \(\varepsilon>0\), there holds_
\[\partial\left(\hat{f}_{\ast}(X,\widehat{\mathbb{U}},\hat{\omega},\widehat{ \mathbb{S}}^{\varepsilon})\right)=(-1)^{|\hat{\omega}|+1}\hat{f}_{\ast}(X, \widehat{\mathbb{U}},d\hat{\omega},\widehat{\mathbb{S}}^{\varepsilon}).\] (A.9)
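In particular, if \(\hat{\omega}\) is closed, then (A.9) shows that the de Rham chain \(\hat{f}_{\ast}(X,\widehat{\mathbb{U}},\hat{\omega},\widehat{\mathbb{S}}^{\varepsilon})\) is a cycle in \(C^{dR}_{d-|\hat{\omega}|}(\mathcal{L}_{k+1})\).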
Suppose for \(i=1,2\), we are given the following data:
* a compact oriented K-space \((X_{i},\widehat{\mathbb{U}}_{i})\) of dimension \(d_{i}\);
* a strongly smooth map \(\hat{f}_{i}:(X_{i},\widehat{\mathbb{U}}_{i})\to\mathcal{L}_{k_{i}+1}\);
* a differential form \(\hat{\omega}_{i}\) on \((X_{i},\widehat{\mathbb{U}}_{i})\);
* a CF-perturbation \(\widehat{\mathbb{S}}_{i}\) on \((X_{i},\widehat{\mathbb{U}}_{i})\) such that \(ev_{0}\circ\hat{f}_{i}:(X_{i},\widehat{\mathbb{U}}_{i})\to L\) is strongly submersive with respect to \(\widehat{\mathbb{S}}_{i}\).
For each \(1\leqslant j\leqslant k_{1}\), we can take fiber product of K-spaces and define
\[(X_{12},\widehat{\mathbb{U}}_{12}):=(X_{1},\widehat{\mathbb{U}}_{1})\ _{ev_{j}\circ\hat{f}_{1}}\times_{ev_{0}\circ\hat{f}_{2}}(X_{2},\widehat{ \mathbb{U}}_{2}).\] (A.10)
One can also define the fiber product of CF-perturbations \(\widehat{\mathbb{S}}_{12}:=\widehat{\mathbb{S}}_{1}\times\widehat{\mathbb{S}}_{2}\) on \((X_{12},\widehat{\mathbb{U}}_{12})\) ([23], Section 10.2). Finally, define a differential form \(\hat{\omega}_{12}\) on \((X_{12},\widehat{\mathbb{U}}_{12})\) by
\[\hat{\omega}_{12}:=(-1)^{(d_{1}-|\hat{\omega}_{1}|-n)|\hat{\omega}_{2}|}\hat{\omega}_{1}\times\hat{\omega}_{2},\] (A.11)
and a strongly smooth map \(\hat{f}_{12}:(X_{12},\widehat{\mathbb{U}}_{12})\to\mathcal{L}_{k_{1}+k_{2}}\) by
\[(f_{12})_{p_{1},p_{2}}(x_{1},x_{2}):=\text{\it con}_{j}\left((f_{1})_{p_{1}}( x_{1}),(f_{2})_{p_{2}}(x_{2})\right),\] (A.12)
where \(x_{1}\in U_{p_{1}}\), \(x_{2}\in U_{p_{2}}\), and \(\text{\it ev}_{j}\circ f_{p_{1}}(x_{1})=\text{\it ev}_{0}\circ f_{p_{2}}(x_{2})\).
**Theorem 81** (Fiber product formula).: _We have_
\[(\hat{f}_{12})_{\ast}(X_{12},\widehat{\mathbb{U}}_{12},(-1)^{|\hat{\omega}_{ 12}|+n}\hat{\omega}_{12},\widehat{\mathbb{S}}_{12}^{\varepsilon})=(\hat{f}_{ 1})_{\ast}(X_{1},\widehat{\mathbb{U}}_{1},\hat{\omega}_{1},\widehat{\mathbb{S} }_{1}^{\varepsilon})\circ_{j}(\hat{f}_{2})_{\ast}(X_{2},\widehat{\mathbb{U}}_{ 2},\hat{\omega}_{2},\widehat{\mathbb{S}}_{2}^{\varepsilon})\] (A.13)
_for sufficiently small \(\varepsilon>0\)._
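As a quick degree check (under the convention, implicit above, that the operation \(\circ_{j}\) on de Rham chains has degree \(-n\)): since \(\dim(X_{12},\widehat{\mathbb{U}}_{12})=d_{1}+d_{2}-n\) and \(|\hat{\omega}_{12}|=|\hat{\omega}_{1}|+|\hat{\omega}_{2}|\), both sides of (A.13) live in \(C^{dR}_{d_{1}+d_{2}-n-|\hat{\omega}_{1}|-|\hat{\omega}_{2}|}(\mathcal{L}_{k_{1}+k_{2}})\).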
**Remark 82**.: _There are versions of Theorem 79 for admissible K-spaces \((X,\widehat{\mathbb{U}})\), in which case the strongly smooth map \(\hat{f}\) is replaced with an admissible map, and the strong submersion \(ev_{0}\circ\hat{f}\) is replaced with a stratified strong submersion, and the Stokes' formula now takes the form_
\[\partial\left(\hat{f}_{\ast}(X,\widehat{\mathbb{U}},\hat{\omega},\widehat{\mathbb{S}}^{\varepsilon})\right)=(-1)^{|\hat{\omega}|}(\hat{f}|_{\partial X})_{\ast}(\partial X,\widehat{\mathbb{U}}|_{\partial X},\hat{\omega}|_{\partial X},\widehat{\mathbb{S}}^{\varepsilon}|_{\partial X})+(-1)^{|\hat{\omega}|+1}\hat{f}_{\ast}(X,\widehat{\mathbb{U}},d\hat{\omega},\widehat{\mathbb{S}}^{\varepsilon}),\] (A.14)
_where \(\partial X\) is the normalized boundary of \(X\), which is itself a K-space._
_As another variation, we can consider admissible maps \(\hat{f}:(X,\widehat{\mathbb{U}})\to[a,b]\times\mathcal{L}_{k+1}\), in which case \(\partial X=\partial_{h}X\sqcup\partial_{v}X\) decomposes into horizontal and vertical boundaries, where \(\partial_{h}X=f^{-1}\left(\{a,b\}\times\mathcal{L}_{k+1}\right)\). Let \(\partial_{-}X=f^{-1}(\{a\}\times\mathcal{L}_{k+1})\) and \(\partial_{+}X=f^{-1}(\{b\}\times\mathcal{L}_{k+1})\). In this case, we require_
\[\left(\mathit{pr}_{[a,b]}\circ\hat{f},ev_{0}\circ\mathit{pr}_{\mathcal{L}_{k+ 1}}\circ\hat{f}\right):(X,\widehat{\mathbb{U}})\to[a,b]\times L\] (A.15)
_to be a corner-stratified strong submersion, and the pushforward defines a relative de Rham chain_
\[\hat{f}_{*}(X,\widehat{\mathbb{U}},\hat{\omega},\widehat{\mathcal{S}}^{ \varepsilon})\in\overline{C}^{dR}_{d-|\hat{\omega}|-1}(\mathcal{L}_{k+1})\] (A.16)
_for sufficiently small \(\varepsilon>0\), which satisfies_
\[e_{\pm}\left(\hat{f}_{*}(X,\widehat{\mathbb{U}},\hat{\omega},\widehat{ \mathcal{S}}^{\varepsilon})\right)=(-1)^{d-1}(\hat{f}|_{\partial_{\pm}X})_{*} \left(\partial_{\pm}X,\widehat{\mathbb{U}}|_{\partial_{\pm}X},\hat{\omega}|_{ \partial_{\pm}X},\widehat{\mathcal{S}}^{\varepsilon}|_{\partial_{\pm}X} \right)\in C^{dR}_{d-|\hat{\omega}|-1}(\mathcal{L}_{k+1}).\] (A.17)
_The Stokes' formula in this case is_
\[\begin{split}\partial\left(\hat{f}_{*}(X,\widehat{\mathbb{U}},\hat{\omega},\widehat{\mathcal{S}}^{\varepsilon})\right)&=(-1)^{|\hat{\omega}|}(\hat{f}|_{\partial_{h}X})_{*}\left(\partial_{h}X,\widehat{\mathbb{U}}|_{\partial_{h}X},\hat{\omega}|_{\partial_{h}X},\widehat{\mathcal{S}}^{\varepsilon}|_{\partial_{h}X}\right)\\ &+(-1)^{|\hat{\omega}|+1}\hat{f}_{*}(X,\widehat{\mathbb{U}},d\hat{\omega},\widehat{\mathcal{S}}^{\varepsilon}).\end{split}\] (A.18)
For the purposes of this paper (cf. Remark 47), we need an additional property of the de Rham chains defined by integration along the fiber. For any \(a\in H_{1}(L;\mathbb{Z})\), and \(x\) a \(1\)-periodic orbit of the Hamiltonian vector field \(X_{H_{t}}\), let \((X,\widehat{\mathbb{U}})\) be one of the \(\frac{1}{2}\)-collared K-spaces \(\overline{\overline{\mathcal{R}}}_{k+1}(L,\bar{a};P)\), \(\overline{\overline{\mathcal{R}}}_{k+2,\partial}(L,\bar{a};P)\), \({}_{l}\overline{\overline{\mathcal{R}}}{}^{1}_{k+1}(x,L,\hat{\bar{a}};P)\), \({}_{l-1}\overline{\overline{\mathcal{R}}}{}^{S^{1}}_{k+1}(x,L,\hat{\bar{a}};P)\) or \({}_{l}^{j,j+1}\overline{\overline{\mathcal{R}}}{}^{1}_{k+1}(x,L,\hat{\bar{a}};P)\) considered in Section 4.5. Cyclic permutations of the labels of the boundary marked points \(z_{0},\cdots,z_{k}\) define a \(\mathbb{Z}_{k+1}\)-action on \(X\), whose generator is a map \(\kappa:X\to X\). It follows from Theorem 46, (vii) and Proposition 78 that \(X\) can be equipped with a Kuranishi structure \(\widehat{\mathbb{U}}\) and a CF-perturbation \(\widehat{\mathcal{S}}^{\varepsilon}\) which are cyclically invariant; therefore the map \(\kappa\) extends to a map
\[\kappa:(X,\widehat{\mathbb{U}},\hat{\omega},\widehat{\mathcal{S}}^{\varepsilon })\to(X,\widehat{\mathbb{U}},\hat{\omega},\widehat{\mathcal{S}}^{\varepsilon }),\] (A.19)
if the differential form \(\hat{\omega}\) is cyclically invariant. The following theorem is a straightforward consequence of the cyclic invariance of these data. For simplicity, we assume below that \(P=\{m\}\); the statement for the \(P=[m,m+1]\) case is similar, with \(R_{k}\) replaced by \(\overline{R}_{k}\).
**Theorem 83** (Cyclic permutation formula).: _Let the quadruple \((X,\widehat{\mathbb{U}},\widehat{\omega},\widehat{\mathcal{S}}^{\varepsilon})\) be as above, and assume that \(\varepsilon>0\) is sufficiently small. Then for \(1\leqslant i\leqslant k\),_
\[\hat{f}_{*}\left(\kappa^{i}(X,\widehat{\mathbb{U}},\hat{\omega},\widehat{ \mathcal{S}}^{\varepsilon})\right)=(-1)^{ki}(R_{k})_{*}^{i}\left(\hat{f}_{*}(X, \widehat{\mathbb{U}},\hat{\omega},\widehat{\mathcal{S}}^{\varepsilon})\right).\] (A.20)
_Here, the admissible map \(\hat{f}:(X,\widehat{\mathbb{U}})\to P\times\mathcal{L}_{k+1}\) is given by one of the maps (4.153), (4.154), (4.155), (4.156) or (4.158)._
Proof.: We only need to justify the signs. Note that cyclic permutation of the boundary marked points from \(z_{0},z_{1},\cdots,z_{k}\) to \(z_{k},z_{0},\cdots,z_{k-1}\) changes the orientations of the abstract moduli spaces from \(-dz_{1}\wedge\cdots\wedge dz_{k}\) to \(dz_{2}\wedge\cdots\wedge dz_{k}\wedge dz_{1}\) (with \(z_{0}\) fixed at \(1\), and neglecting the choices of orientations for the interior marked points \(p_{1},\cdots,p_{l}\), which all agree; cf. [27], Remark 46). Thus a sign \((-1)^{k}\) needs to be introduced for each such permutation.
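To spell out the elementary sign count: \(dz_{2}\wedge\cdots\wedge dz_{k}\wedge dz_{1}=(-1)^{k-1}\,dz_{1}\wedge\cdots\wedge dz_{k}=(-1)^{k}\left(-dz_{1}\wedge\cdots\wedge dz_{k}\right)\), so the two volume forms differ by exactly the factor \((-1)^{k}\).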
## Appendix B Orientations of moduli spaces
In this appendix, we discuss the orientations of the moduli spaces considered in this paper and compute the signs \(\varepsilon_{1},\cdots,\varepsilon_{22}\) in Theorem 46, (iv). As before, we assume
that \(M\) is a Liouville manifold with \(c_{1}(M)=0\) and that \(L\subset M\) is a Lagrangian submanifold which is equipped with some fixed choice of a _Spin_ structure relative to the \(\mathbb{Z}_{2}\)-gerbe \(\alpha\).
The orientation of the moduli space \(\overline{\mathcal{R}}_{k+1}(L,\beta)\) has been fixed in [33], Section 7.2.3. We shall orient \(\overline{\mathcal{R}}_{k+1,\partial}(L,\beta)\) in the same way as \(\overline{\mathcal{R}}_{k+1}(L,\beta)\). More precisely, let \(\mathcal{R}(L,\beta)\) be the moduli space of pseudoholomorphic discs without boundary marked points, which carries its canonical orientation defined in [20], Section 8.1. \(\overline{\mathcal{R}}_{k+1,\partial}(L,\beta)\) is oriented so that the natural isomorphism
\[T\left(\overline{\mathcal{R}}_{k+1,\partial}(L,\beta)\right)\oplus T\left(Aut(D)\right)\cong T\left(\overline{\mathcal{R}}(L,\beta)\right)\oplus T(\partial D)^{\oplus k}\] (B.1)
is orientation-preserving, where \(\partial D\) is equipped with the anticlockwise orientation, and \(\mathit{Aut}(D)\) is oriented via the diffeomorphism \(\mathit{Aut}(D)\cong(\partial D)^{3}\). It follows from the sign computation in the case of \(\overline{\mathcal{R}}_{k}(L,\beta)\) that
\[\varepsilon_{1}=(k_{1}-i)(k_{2}-1)+n+k_{1}-1\] (B.2)
in (4.82).
To compute the signs \(\varepsilon_{2},\varepsilon_{3},\varepsilon_{4,0},\cdots,\varepsilon_{4,l},\varepsilon_{5},\varepsilon_{6}\) in (4.83), we first pick orientations for the abstract moduli spaces. On a representative of the moduli spaces \({}_{l}\mathcal{R}^{1}_{k+1}\) and \({}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\) which fixes \(z_{0}=1\) and \(\zeta=0\), we pick the volume forms
\[-r_{1}\cdots r_{l}dz_{1}\wedge\cdots\wedge dz_{k}\wedge dr_{l}\wedge d\theta_ {l}\wedge\cdots\wedge dr_{1}\wedge d\theta_{1}\] (B.3)
and
\[r_{1}\cdots r_{l}dz_{1}\wedge\cdots\wedge dz_{k}\wedge d\theta_{1}\wedge dr_{ l}\wedge d\theta_{l}\wedge\cdots\wedge dr_{2}\wedge d\theta_{2}\] (B.4)
for \({}_{l}\mathcal{R}^{1}_{k+1}\) and \({}_{l-1}\mathcal{R}^{S^{1}}_{k+1}\), respectively, where \((r_{j},\theta_{j})\) are the polar coordinates for the auxiliary marked point \(p_{j}\). For the moduli spaces \({}_{l}\mathcal{R}^{1}_{k+1,\tau_{l}}\), we shall equip them with orientations which are compatible with the auxiliary-rescaling map (4.36). More precisely, for a representative with \(z_{0}=1\) and \(\zeta=0\), we pick the volume form
\[r_{1}\cdots r_{l}dz_{1}\wedge\cdots\wedge dz_{k}\wedge dz_{f}\wedge dr_{l} \wedge d\theta_{l}\wedge\cdots\wedge dr_{1}\wedge d\theta_{1}.\] (B.5)
The space of \(l\)-point angle decorated cylinders admits a canonical complex orientation, and we trivialize the \(\mathbb{R}\)-action on it by choosing \(\partial_{s}\) as the vector field inducing the action. The quotient gives the moduli space \({}_{l}\mathcal{M}\). Using this convention, we have an isomorphism
\[\langle\partial_{s}\rangle\otimes\lambda^{top}\left(T_{l}\overline{\mathcal{ M}}(x,y)\right)\cong o_{x}\otimes o_{y}^{-1},\] (B.6)
where \(o_{x}\) and \(o_{y}\) are orientation lines associated to the Hamiltonian orbits \(x\) and \(y\), respectively, and \(\lambda^{top}\) stands for the top degree exterior power. As in the definition of the Floer differential, when counting rigid elements of \({}_{l}\overline{\mathcal{M}}(x,y)\) in the definition of the operations \(\delta_{l}\), we twist by the sign \((-1)^{|y|}\).
The moduli space \({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta})\) is oriented by combining the ideas of [5], Appendix A and [20], Chapter 8. More precisely, let \(D_{u}\) be the linearization of the perturbed Cauchy-Riemann operator, then we have an isomorphism of line bundles
\[\det(D_{u})\otimes o_{x}\otimes\kappa^{\alpha}_{x}\cong\lambda^{top}(TL),\] (B.7)
where \(\kappa^{\alpha}_{x}\) is the _background line_ determined by our choice of the relative _Spin_ structure on \(L\), the orbit \(x\), and the background class \([\alpha]\in H^{2}(M;\mathbb{Z}_{2})\) (cf. [5], Definition 3.1). Combining (B.6) and (B.7), it follows that on a boundary stratum of the form \({}_{j}\overline{\mathcal{M}}(x,y_{j})\times{}_{l-j}\overline{\mathcal{R}}(y_{ j},L,\hat{\beta})\), we have a natural isomorphism
\[\lambda^{top}\left(T_{l-j}\overline{\mathcal{R}}(y_{j},L,\hat{\beta})\right) \otimes\langle\partial_{s}\rangle\otimes\lambda^{top}\left(T_{j}\overline{ \mathcal{M}}(x,y_{j})\right)\cong\lambda^{top}(TL)\otimes(o_{x}\otimes \kappa^{\alpha}_{x})^{-1}.\] (B.8)
Comparing the orientation of \({}_{j}\overline{\mathcal{M}}(x,y_{j})\times{}_{l-j}\overline{\mathcal{R}}(y_{j},L,\hat{\beta})\subset\partial\left({}_{l}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta})\right)\) with (B.8), we obtain
\[\varepsilon_{4,j}=n+|y_{j}|.\] (B.9)
For the boundary strata \({}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta})\) and \({}_{l-1}\overline{\mathcal{R}}^{S^{1}}_{k+1}(x,L,\hat{\beta})\), we may orient them so that the natural maps
\[{}_{l}^{j,j+1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta})\to S^{1}\times{}_{l-1}\overline{\mathcal{R}}^{1}_{k+1}(x,L,\hat{\beta})\] (B.10)
and
\[\smash{\mathop{l}\limits_{-1}}\overline{\mathbb{R}}^{1}_{k+1}(x,L,\hat{\beta}) \to S^{1}\times\smash{\mathop{l}\limits_{-1}}\overline{\mathbb{R}}^{1}_{k+1}( x,L,\hat{\beta})\] (B.11)
induced from the forgetful maps (4.27) and (4.28) are oriented diffeomorphisms. This provides us with an inductive way to orient the moduli spaces so that
\[\varepsilon_{5}=\varepsilon_{6}=0.\] (B.12)
It remains to determine the signs \(\varepsilon_{2}\) and \(\varepsilon_{3}\). For this purpose, we introduce the moduli spaces \({}^{j}_{i}\widetilde{\mathcal{R}}^{1}(x,L,\hat{\beta})\) of pairs \(((S,p_{1},\cdots,p_{l};\ell),u)\), where \(u:S\to M\) satisfies (4.45) and \([u]=\hat{\beta}\), but now we do not quotient by \(Aut(S)\cong S^{1}\). Similarly, define \(\widetilde{\mathcal{R}}(L,\beta)\) to be the space of pseudoholomorphic maps \(u:(D,\partial D)\to(M,L)\) with \([u]=\beta\), but without modding out \(Aut(D)\cong PSL(2,\mathbb{R})\). Consider the gluing map
\[\textit{gl}:{}^{j}_{i}\widetilde{\mathcal{R}}^{1}(x,L,\hat{\beta}_{1})\ {}_{1}\times_{-1}\ \widetilde{\mathcal{R}}(L,\beta_{2})\to{}^{j}_{i}\widetilde{\mathcal{R}}^{1}(x,L,\hat{\beta}),\quad\hat{\beta}_{1}+\beta_{2}=\hat{\beta},\] (B.13)
where the notation \(\smash{\mathop{1}\limits_{\times-1}}\) means the fiber product on the left-hand side is taken with respect to the evaluation maps at \(1\in\partial S\) on the first component, and at \(-1\in\partial D\) on the second component. The argument of [20], Lemma 8.3.10 applies to our case and implies the following:
**Lemma 84**.: _The map \(\textit{gl}\) is orientation-preserving._
Using this lemma, the signs \(\varepsilon_{2}\) and \(\varepsilon_{3}\) can be computed in the same way as [20], Section 8.3. We have
\[\varepsilon_{2}=(k_{1}-i)(k_{2}-1)+n+k\ \text{and}\ \varepsilon_{3}=(k_{1}-i)(k_{ 2}-1)+n+1.\] (B.14)
The remaining signs \(\varepsilon_{7},\cdots,\varepsilon_{13},\varepsilon_{14,0},\cdots,\varepsilon_ {14,l},\varepsilon_{15},\varepsilon_{16}\) are determined from the signs \(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3},\varepsilon_{4,0},\cdots, \varepsilon_{4,l},\varepsilon_{5},\varepsilon_{6}\) using the formulae
\[\partial([m,m+1]\times X)=\{m+1\}\times X\sqcup(-1)\{m\}\times X\sqcup(-1)[m, m+1]\times\partial X,\] (B.15)
\[[m,m+1]\times(X\times_{L}Y)=([m,m+1]\times X)\times_{[m,m+1]\times L}([m,m+1] \times Y),\] (B.16)
where \(X\) and \(Y\) are admissible K-spaces.
|
2307.15410
|
Towards a Fully Unsupervised Framework for Intent Induction in Customer
Support Dialogues
|
State of the art models in intent induction require annotated datasets.
However, annotating dialogues is time-consuming, laborious and expensive. In
this work, we propose a completely unsupervised framework for intent induction
within a dialogue. In addition, we show how pre-processing the dialogue corpora
can improve results. Finally, we show how to extract the dialogue flows of
intentions by investigating the most common sequences. Although we test our
work in the MultiWOZ dataset, the fact that this framework requires no prior
knowledge makes it applicable to any possible use case, making it very relevant
to real world customer support applications across industry.
|
Rita Costa, Bruno Martins, Sérgio Viana, Luisa Coheur
|
2023-07-28T09:03:14Z
|
http://arxiv.org/abs/2307.15410v1
|
# Towards a Fully Unsupervised Framework for Intent Induction in Customer Support Dialogues
###### Abstract
State of the art models in intent induction require annotated datasets. However, annotating dialogues is time-consuming, laborious and expensive. In this work, we propose a completely unsupervised framework for intent induction within a dialogue. In addition, we show how pre-processing the dialogue corpora can improve results. Finally, we show how to extract the dialogue flows of intentions by investigating the most common sequences. Although we test our work in the MultiWOZ dataset, the fact that this framework requires no prior knowledge makes it applicable to any possible use case, making it very relevant to real-world customer support applications across industry.
## 1 Introduction
The evolution of technology has allowed the automation of several processes across diversified engineering industry fields, such as customer support services, which have drastically evolved with the advances in Natural Language Processing and Machine Learning. One of the major challenges of these systems is to identify users' intentions, a complex Natural Language Understanding task, since intentions vary across domains. With the evolution of Deep Learning architectures, recent works have focused on modelling intentions and creating a taxonomy of intents, so they can be fed to powerful supervised clustering algorithms (Haponchyk et al., 2020; Chatterjee and Sengupta, 2021).
However, these systems have the bottleneck of requiring labelled data to be trained and deployed, and thus they cannot be easily transferred to real-world customer support services, where the available data for a commercial chatbot usually consists of no more than a dataset of interactions between clients and operators. As labeling hundreds of utterances with intent labels can be time-consuming, laborious and expensive, and sometimes even requires domain expertise, it is not straightforward to apply current state-of-the-art supervised models to new domains (Chatterjee and Sengupta, 2020).
In this work, we propose a totally unsupervised intent induction framework and apply it to the MultiWOZ dataset (Budzianowski et al., 2018). Previous unsupervised intent induction works have used methods which perform clustering of user query utterances in human-human conversations (Perkins and Yang, 2019; Haponchyk et al., 2020; Chatterjee and Sengupta, 2020). Popular clustering algorithms for practical applications include centroid-based algorithms, such as the K-Means algorithm (Lloyd, 1982), and density based algorithms, namely DBSCAN (Daszykowski and Walczak, 2009) and HDBSCAN (McInnes and Healy, 2017). An advantage of the density-based algorithms is not requiring to define the number of clusters a priori (Ghaemi and Farnaghi, 2019), being more efficient for detecting clusters with arbitrary shapes from a noisy dataset, particularly for a case where the number of dialogue intentions is not known a priori. By using HDBSCAN, we also do not require the prior definition of the density threshold used to create the clusters (contrary to DBSCAN), which is more suitable for this application. Moreover, we show that text pre-processing techniques, such as performing named entity recognition, can improve the clustering process of dialogue utterances. Finally, we complement this experiment with an analysis of the most common dialogue flows, based on the detected intents.
In summary, the main contributions of this work are:
* the application of a fully unsupervised method for extracting intents within a dialogue, requiring no prior information about its content, and hence avoiding the time-consuming task of manually analysing user questions and identifying the intents (both intra- and inter-domain studies are conducted);
* an exploratory analysis of the dataset, motivating the usage of general text processing techniques to optimize the intent extraction process, that can be applied to any corpora;
* an informal analysis of the most common flows of discovered intentions.
As there is no required prior knowledge of any dataset specificities or deployment details, our proposal is applicable to any type of data and use case, making it relevant for a huge variety of applications, such as customer support applications.
This paper is organized as follows: in Section 2 we present related work, in Section 3 we present a data analysis, and in Section 4 the experimental results. Then, in Section 5 we present the main conclusions and some future work.
## 2 Related Work
This section gives an overview of the tools used in the development of this work. In Section 2.1, we present the MultiWOZ dataset, a task-oriented collection of dialogues whose utterances are used for the experiments in Section 4. Before feeding these sentences into an algorithm, it is required to transform them in a space representation, for which an overview is given in Section 2.2. In Section 2.3, we present HDBSCAN and motivate the choice of this clustering algorithm. Finally, the method for analysis of dialogue flows is presented in Section 2.4.
### MultiWOZ Dataset
The MultiWOZ dataset (Budzianowski et al., 2018) is a labelled human-human collection of goal-oriented dialogues, simulating natural conversations between a tourist and an assistant from an information center in a touristic city. The corpus has conversations spanning over 7 domains -- _attraction, hospital, police, hotel, restaurant, taxi, train_ -- with diverse complexity of tasks, going from a simple information query about an attraction, to booking a night at a hotel, a restaurant reservation and a taxi to connect both places. The dataset is composed of 10438 dialogues, which can be either single domain or multi-domain. The average number of turns per dialogue is 8.93 and 15.39, for single and multi-domain, respectively.
One particularity of this dataset is its rich annotations at two levels for each utterance: domain and intent. This information will allow us to conduct the experiments with a ground-truth reference, helping to validate the approach used. In Figure 1, it is possible to see an example of part of a dialogue, with the corresponding domains and intents for each utterance. Besides the possible conversation domains, an utterance can also belong to two broader domains: the _booking_ domain -- if it refers to the act of booking an entity -- or the _general_ domain -- if it is a greeting, an acknowledgement, etc. In addition to the dialogues and their annotations, the dataset is also composed of 7 database files, one for each possible domain of conversation.
A further exploratory analysis of this dataset can be found in Section 3.
Figure 1: A dialogue example with domains and intents.
### Text Representation
An important part of Natural Language Processing is how to represent sentences so that it is possible to build algorithms on top of them. Initially, the focus was on representing words independently. The most basic approach was to represent text through a one-hot vector, with value 1 assigned to words that are present and 0 to those that are not. The inability of such vectors to convey similarity between words gave rise to what are now called word embeddings, which represent a word in a low-dimensional vector space where similar words occupy similar regions of the modelling space. Popular word embedding techniques include Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). The need to resolve ambiguities in word meanings and to represent words with respect to the sentence in which they occur led to the evolution of contextual word embeddings, such as ELMo (Peters et al., 2018). These representations move beyond word-level semantics, in that each word has a representation which is a function of the entire input sequence, being able to capture both syntax and semantics. The evolution of text representation techniques opened the door to more complex language models, with transformer architectures that use attention to learn embeddings, such as GPT (Radford and Salimans, 2018) and BERT (Devlin et al., 2019). In tasks such as clustering and semantic search, a common approach is to map each sentence such that semantically similar sentences are close, as described in Sections 2.2.1 and 2.2.2.
#### 2.2.1 Sentence-BERT
BERT-related models achieve state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity. However, computing the similarity between two sentences requires that both are fed into the model, which makes it too expensive for pair regression tasks, such as semantic similarity search and clustering, due to the number of possible combinations. To make this task more efficient, **Sentence-BERT** (SBERT) (Reimers and Gurevych, 2020) uses siamese and triplet network structures to derive semantically meaningful sentence embeddings. These techniques represent entire sentences and their semantic information as vectors, making semantically similar sentences close in the vector space. This helps capture the context, intention, and other nuances of the entire text. Then, using a similarity measure like cosine similarity or Euclidean distance, it is possible to find semantically similar sentences. SBERT is available in the Sentence-Transformers framework1, with pre-trained sentence embedding models tuned for various tasks, in more than 100 languages.
Footnote 1: [https://www.sbert.net/](https://www.sbert.net/)
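As a minimal illustration of how such sentence embeddings can be obtained with this framework (the checkpoint name below is an assumption; the paper does not state which pre-trained model it uses):

```python
# Hedged sketch: encoding dialogue utterances with the sentence-transformers framework.
from sentence_transformers import SentenceTransformer

utterances = [
    "I am looking for a cheap hotel in the north.",
    "Sure, do you have a price range in mind?",
]

model = SentenceTransformer("all-mpnet-base-v2")  # assumed checkpoint, 768-dimensional output
embeddings = model.encode(utterances)             # numpy array of shape (2, 768)
print(embeddings.shape)
```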
#### 2.2.2 Dimensionality Reduction
After using SBERT for utterance representation, we obtain embeddings of dimension 768. Since high-dimensional embeddings lead to a loss of robustness in clustering algorithms, we trade some loss of information for a more robust clustering by reducing the dimensionality of the embeddings before feeding them to the clustering algorithm.
There are a few alternative methods for dimensionality reduction, such as t-Distributed Stochastic Neighbor Embedding (**t-SNE**) (van der Maaten and Hinton, 2008) and Uniform Manifold Approximation and Projection (**UMAP**) (McInnes et al., 2018). Both were designed to predominantly preserve local structure by grouping neighbouring data points together, which provides a very informative visualization of the heterogeneity present in the data. UMAP is more suitable for this context, since t-SNE produces unstable embeddings, making the experiments non-reproducible.
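A minimal sketch of this reduction step with UMAP (the target dimension and the random data are placeholders; in the actual pipeline the inputs are the SBERT embeddings):

```python
# Illustrative sketch: reduce high-dimensional sentence embeddings before clustering.
import numpy as np
import umap

embeddings = np.random.rand(500, 768)                  # stand-in for the SBERT embeddings
reducer = umap.UMAP(n_components=5, random_state=42)   # assumed target dimension
reduced = reducer.fit_transform(embeddings)            # shape (500, 5), fed to the clustering step
viz_2d = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)  # for plots
```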
### HDBSCAN for unsupervised clustering
Clustering is an unsupervised Machine Learning technique that consists of grouping data points such that those with similar features are assigned to the same group, while data points belonging to different groups have more dissimilar properties. Depending on the notion of what defines a cluster, there is a variety of clustering algorithms: some are centroid-based, such as K-Means (Lloyd, 1982), where clustering is based on randomly initialized points and the minimum distance from a point to the others; others are density-based, such as DBSCAN (Daszykowski and Walczak, 2009), where points are clustered based on their density in a particular region. Density-based clustering is particularly relevant for problems where little is known about the dataset, since it does not require defining the number of clusters a priori.
In most density-based clustering algorithms, such as DBSCAN, it is necessary to define a density threshold to make a cluster. This parameter is specially difficult to adjust for higher dimensional data, posing a problem for obtaining clusters with varying densities. To solve this problem, the Hierarchical Density-Based Spatial Clustering of Applications with Noise (**HDBSCAN**) (Campello et al., 2015) was developed, not requiring the prior definition of this density threshold. The algorithm first builds a hierarchy to figure out which peaks end up merging together and in what order. Then, for each cluster, it evaluates if it is more beneficial to keep that cluster or split it into subclusters, considering the
volume of each peak. HDBSCAN uses soft clustering: unlike most clustering algorithms, data points are not assigned hard cluster labels, but rather a vector of membership probabilities over the clusters, identified by \(c\in\{0,\ldots,n_{clusters}-1\}\), allowing each point to potentially be a mix of clusters. It is also noise-aware, meaning that it has a notion of data samples that are not assigned to any cluster, to which it assigns the label -1.
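The sketch below shows the corresponding usage of the hdbscan package, including the soft (probabilistic) cluster assignments and the noise label -1; the data and parameter values are illustrative, not those of the experiments in Section 4.

```python
# Illustrative sketch of HDBSCAN with soft clustering.
import hdbscan
from sklearn.datasets import make_blobs

points, _ = make_blobs(n_samples=1000, n_features=5, centers=8, random_state=0)

clusterer = hdbscan.HDBSCAN(
    min_cluster_size=25,
    min_samples=10,
    gen_min_span_tree=True,   # needed to compute the relative validity index
    prediction_data=True,     # needed for soft cluster membership vectors
)
labels = clusterer.fit_predict(points)                  # -1 marks noise points

membership = hdbscan.all_points_membership_vectors(clusterer)
soft_labels = membership.argmax(axis=1)                 # pick the most probable cluster per point
print(clusterer.relative_validity_, labels.max() + 1)   # validity index, number of clusters
```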
### Sequential Pattern Mining for the analysis of dialogue flows
In the context of dialogue interactions, besides identifying utterances intentions, it is relevant to evaluate the most common interactions, allowing to discover the flow of the dialogue. To do so, it is possible to use the sequential pattern mining algorithm **Prefix**-projected **S**equential **p**attern (**PrefixSpan**) (Pei et al., 2001), which discovers frequent subsequences as patterns in a sequence database.
The PrefixSpan implementation2 used here outputs traditional single-item sequential patterns. This library also includes the frequent closed sequential pattern mining algorithm BIDE (Wang and Han, 2004) and the frequent generator sequential pattern mining algorithm FEAT (Gao et al., 2008). To use the algorithm via its API, we refer to the PrefixSpan class in prefixspan/api.py. In this implementation, two types of sequences can be obtained: .frequent(n) returns the sequences that appear at least n times, and .topk(k) gives the k most frequent sequences in the dataset. These methods also support further options, which can be found in the library documentation.
Footnote 2: [https://pypi.org/project/prefixspan/](https://pypi.org/project/prefixspan/)
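A minimal usage sketch of this library on a toy sequence database (the sequences below are illustrative, not drawn from the dataset):

```python
# Illustrative sketch of the prefixspan API described above.
from prefixspan import PrefixSpan

db = [
    [0, 1, 2, 3, 4],
    [1, 1, 1, 3, 4],
    [2, 1, 2, 2, 0],
    [1, 1, 1, 2, 2],
]

ps = PrefixSpan(db)
print(ps.frequent(2))   # (support, pattern) pairs with support >= 2
print(ps.topk(5))       # the 5 most frequent patterns
```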
## 3 Data Analysis
To have a better understanding of the task we have at hand, it is relevant to perform an analysis of the dialogue utterances. In Section 3.1, the similarities between embeddings of different utterances are investigated, motivating the use of an open-source tool for entity identification. In Section 3.2, we provide an overview of the distribution of the dataset over domain and intent.
### Embeddings representation
As proposed in Section 2.2, the dataset utterances are represented using a Sentence-BERT model, with embeddings obtained through the sentence-transformers framework.
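As a concrete illustration of the entity pre-processing this section motivates, the sketch below replaces entity mentions in an utterance by their entity labels before embedding, so that utterances differing only in names, dates or numbers end up close in the vector space. The use of spaCy and of the en_core_web_sm model is an assumption on our part; the paper only refers to an open-source tool for entity identification.

```python
# Hedged sketch (spaCy is an assumption; the paper does not name the NER tool).
import spacy

nlp = spacy.load("en_core_web_sm")

def mask_entities(text: str) -> str:
    """Replace every detected entity span by its lowercased entity label."""
    doc = nlp(text)
    out = text
    for ent in reversed(doc.ents):  # go right-to-left so character offsets stay valid
        out = out[:ent.start_char] + ent.label_.lower() + out[ent.end_char:]
    return out

print(mask_entities("I need a room for 3 people on Tuesday."))
# e.g. "I need a room for cardinal people on date."
```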
### Domain and intent annotations
As seen in Figure 1, one utterance from the MultiWOZ dataset can belong to more than one domain. In Figure 3, we present the frequency of each possible combination of domains (combinations appearing fewer than 10 times were left out of this plot for the sake of visibility). The highest frequencies correspond to single-domain utterances. The highest value is for the _general_ domain, followed by _train_, _restaurant_, _hotel_, _attraction_, _taxi_ and _booking_. The _police_ and _hospital_ domains have fewer utterances assigned, as they also have fewer dialogues.
For most domains, the possible intent classifications are _inform_, _request_, _recommend_, _no-offer_ and _select_. The _booking_ domain has additional possibilities regarding the booking outcome, _book_ or _no-book_. The _general_ domain has its own particular intents: _greet_, _welcome_, _reqmore_, _thank_ and _bye_. Naturally, it is possible for an utterance to hold more than one intent. As there are many more possible combinations of intents than of domains, we will not present plots for all domains, but rather exemplify with the representations of utterances belonging to the hotel domain. In Figure 4, it is possible to see the 2-D representations of utterances belonging to this domain, obtained with the UMAP algorithm with dimension 2. Although these are just 2-D representations of much higher-dimensional embeddings, it is still possible to identify some groups of sentences belonging to the same domain or intent. This suggests that density-based clustering of these data points is feasible.
Figure 2: Similarity.
## 4 Experimental Results
This section includes an analysis of the experimental results. An introduction to the evaluation methods is given in Section 4.1. In Section 4.2, an inter-domain clustering experiment is conducted. In Section 4.3, we present and analyse the results of an intra-domain clustering experiment for the hotel domain.
Figure 4: 2-D representations of utterances embeddings per intent in the hotel domain.
Figure 3: The possible combinations of domains.
### Evaluation Metrics
To evaluate the results of a clustering experiment, one can use intrinsic methods (based on properties of the algorithm itself), such as the relative validity index. This metric measures how close elements from one cluster are to each other, and how distant they are from elements in other clusters. It is important to note that the topic of clustering validation is considered one of the most challenging topics in the clustering literature: since these are unsupervised algorithms, it is required to resort to internal validity criteria, calculated solely based on information intrinsic to the data.
In these particular experiments, since we have annotation references from the dataset, it is also possible to resort to extrinsic methods that compare the clusters with a pre-existing structure -- a ground-truth solution. In this context, BCubed precision and BCubed recall (Bagga and Baldwin, 1998) are found to be the only metrics that satisfy all the proposed properties/constraints for clustering evaluation (Amigo et al., 2009). The BCubed precision of an item is the proportion of items in its cluster (including itself) that share the item's category. The BCubed recall is analogous, but computed with respect to the number of items in the item's category. The overall BCubed precision and recall are the precision and recall averaged over all items.
Naturally, extrinsic methods are not usable when there are no ground-truth references, leaving intrinsic methods as the most relevant for clustering experiments, since they are the only ones applicable in real-world scenarios.
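For concreteness, the following is a minimal sketch of BCubed precision and recall written directly from the definition above, for the single-label case (the multi-label annotations of MultiWOZ would require an extended variant); it is illustrative and not the evaluation code used in the experiments.

```python
# Minimal BCubed precision/recall sketch for single-label items.
def bcubed(pred, gold):
    n = len(pred)
    precision = recall = 0.0
    for i in range(n):
        same_cluster = [j for j in range(n) if pred[j] == pred[i]]
        same_category = [j for j in range(n) if gold[j] == gold[i]]
        correct = sum(1 for j in same_cluster if gold[j] == gold[i])
        precision += correct / len(same_cluster)
        recall += correct / len(same_category)
    p, r = precision / n, recall / n
    m = 2 * p * r / (p + r) if p + r else 0.0   # harmonic mean, "M" in the tables
    return p, r, m

print(bcubed([0, 0, 1, 1, 1], ["a", "a", "a", "b", "b"]))
```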
### Inter-domain Clustering
Firstly, we present the clustering results for an experiment with all the utterances from the MultiWOZ dataset. In this inter-domain clustering experiment, we have two types of possible labels: domain and intent. To simplify, we present the possible domains for the utterances, whose 2-D representations are plotted in Figure 5. As evident in Figure 5, there are many possible combinations of domain labels for the data points. Hence, we refrain from plotting the possible combinations of intents, as there are even more possibilities than those in Figure 5, and their analysis would be too exhaustive.
For these experiments, we opted to remove the utterances from the general domain. As seen in the plot of Figure 3, these are the most frequent in the dataset. The fact that these utterances are very repetitive, with very low variability, makes the dataset very imbalanced. As the exact same utterance from the general domain occurs very
Figure 5: 2-D representations of utterances embeddings per domain.
often, this can damage the clustering by generating clusters containing only identical utterances, which is not aligned with the goals of this task.
When running the HDBSCAN algorithm, there are two important parameters to set: min_cluster_size, which defines the smallest grouping that should be considered a cluster (the bigger its value, the fewer clusters are obtained); and min_samples, which provides a measure of how conservative the clustering should be (the larger its value, the more conservative the clustering and the more points are considered noise, with clusters progressively restricted to denser areas). By default, min_samples is set to the value of min_cluster_size. We fine-tune these values by varying min_samples from 10 to 100 with a step size of 10 and min_cluster_size from 25 to 300 with a step size of 25, measuring the relative validity index, as depicted in Table 1. It is not possible to see a direct relationship between this index and either of the two variables. The best result occurs for min_samples\(=100\) and min_cluster_size\(=300\).
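A minimal sketch of this kind of grid search, scoring each (min_samples, min_cluster_size) pair by HDBSCAN's relative validity index; the data, ranges and values below are illustrative rather than those of the actual experiment.

```python
# Illustrative grid search over HDBSCAN parameters using the relative validity index.
import hdbscan
from sklearn.datasets import make_blobs

def grid_search(points, samples_range, cluster_size_range):
    """Return the (min_samples, min_cluster_size) pair with the best validity index."""
    best_params, best_score = None, -float("inf")
    for ms in samples_range:
        for mcs in cluster_size_range:
            c = hdbscan.HDBSCAN(min_samples=ms, min_cluster_size=mcs,
                                gen_min_span_tree=True).fit(points)
            if c.relative_validity_ > best_score:
                best_params, best_score = (ms, mcs), c.relative_validity_
    return best_params, best_score

points, _ = make_blobs(n_samples=500, n_features=5, centers=6, random_state=0)
print(grid_search(points, range(10, 51, 10), range(25, 101, 25)))
```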
For a deeper analysis of the possible clustering results, we present the values of BCubed precision (P), BCubed recall (R) and their harmonic mean (M), for each value of min_samples and the corresponding optimal value of min_cluster_size. The values of the BCubed metrics using domains or intents as labels are presented in Tables 2 and 3, respectively.
Table 1: Grid search of the relative validity index over min_cluster_size (25 to 300, step 25) and min_samples (10 to 100, step 10) for all utterances of the MultiWOZ dataset.
| min_samples | min_cluster_size | validity | P (clusters) | R (clusters) | M (clusters) | P (soft) | R (soft) | M (soft) | n clusters |
|---|---|---|---|---|---|---|---|---|---|
| **10** | **50** | \(4.31\times 10^{-2}\) | 0.2559 | 0.6323 | **0.3643** | 0.5167 | 0.0915 | 0.1555 | **105** |
| 20 | 25 | \(4.01\times 10^{-2}\) | 0.2302 | 0.7171 | 0.3486 | 0.5236 | 0.0905 | 0.1544 | 159 |
| 30 | 250 | \(4.38\times 10^{-2}\) | 0.2301 | 0.6409 | 0.3386 | 0.3884 | 0.3509 | 0.3687 | 16 |
| 40 | 225 | \(4.23\times 10^{-2}\) | 0.2300 | 0.6583 | 0.3409 | 0.3877 | 0.3837 | 0.3857 | 15 |
| 50 | 50 | \(2.48\times 10^{-2}\) | 0.2096 | 0.7328 | 0.3260 | 0.4733 | – | – | – |

Table 2: Optimal clustering results for each value of min_samples, using the domain annotations as labels.
Besides, we can evaluate both the clusters and the soft clusters, where the latter are obtained by choosing the cluster with the maximum membership probability. The number of obtained clusters for each experiment is also presented, giving an idea of how granular the clusters are.
We can draw a few ideas from the results. Firstly, that an increase in the value of P is usually combined with a decrease in the value of R, supporting the need for analysing their harmonic mean (M). We can also confirm that we need to increase both min_samples and min_cluster_size for the clustering to become more conservative: for the same value of min_cluster_size, an increase in min_samples leads to a lower number of obtained clusters (which happens for min_cluster_size\({}=50\), for example).
The BCubed metric results are generally better when using the domain annotations as labels. In Figure 6, we present the results for the inter-domain experiment with the optimal relative validity index, where the quality of the clusters can be grasped. In Table 4, we present details about each cluster: its length, its persistence in the spanning tree, and the dataset reference label, which corresponds to the dataset label with the most data points in that cluster. For a better analysis of each clustering experiment, we can also extract the most frequent words in each cluster of utterances. In this experiment, we use the TF-IDF algorithm, treating each cluster of utterances as a single document4.
Footnote 4: Following [https://towardsdatascience.com/topic-modeling-with-bert-779f7db187e6](https://towardsdatascience.com/topic-modeling-with-bert-779f7db187e6)
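A minimal sketch of this per-cluster keyword extraction (illustrative; scikit-learn's TfidfVectorizer is our choice here, the paper does not specify an implementation): concatenate each cluster's utterances into one document and rank words by their TF-IDF weight.

```python
# Illustrative sketch: top TF-IDF words per cluster, treating each cluster as one document.
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer

def top_words_per_cluster(utterances, labels, k=5):
    docs = defaultdict(list)
    for text, label in zip(utterances, labels):
        if label != -1:                      # skip HDBSCAN noise points
            docs[label].append(text)
    clusters = sorted(docs)
    corpus = [" ".join(docs[c]) for c in clusters]
    tfidf = TfidfVectorizer(stop_words="english")
    matrix = tfidf.fit_transform(corpus)
    vocab = tfidf.get_feature_names_out()
    return {c: [vocab[i] for i in matrix[row].toarray().ravel().argsort()[::-1][:k]]
            for row, c in enumerate(clusters)}

print(top_words_per_cluster(
    ["i need free parking", "does it have free wifi", "book a taxi to the restaurant"],
    [0, 0, 1]))
```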
6), we understand that these types of utterances must have a great presence in the dataset, and possibly appearing in different types of domain dialogues. We should underline that, as the amount and variability of dialogue utterances increase, it is more likely that similar utterances belonging to different domains appear, leading to utterances with different labels being clustered together.
### Intra-domain Clustering
For this experiment, we consider utterances from the MultiWOZ dataset belonging to the hotel domain. Intra-domain data is the most likely to be found in a real-world scenario, where the dialogues that are jointly analyzed belong to the same broader domain.
In Table 5, the values of the relative validity index are presented when varying min_samples from 5 to 50 with a step size of 5, and min_cluster_size from 10 to 100 with a step size of 10 -- as we are dealing with a smaller amount of data, the ranges of the variables have also been decreased. The best relative validity index is obtained for the combination of min_samples\(=50\) and min_cluster_size\(=80\).
Similarly to before, we present the values of P, R, and M in Table 6, for each value of min_samples and the corresponding optimal value of min_cluster_size. Regarding the best performance in the BCubed metrics, there is a mismatch between the results for clusters and soft clusters: the former occurs for min_samples\(=25\) and min_cluster_size\(=20\), and the latter for min_samples\(=45\) and min_cluster_size\(=20\). These results are also not in accordance with the optimal relative validity index, which occurs for min_samples\(=50\) and min_cluster_size\(=80\). Among these possible combinations of values, 28 is the number of obtained clusters most in accordance with the original labels from the dataset.
In Figure 7, we present the results of these three clustering experiments in a 2-D representation, ordered from fewer to more obtained clusters. The color gradient on the right side of each graph indicates the number of clusters present in the plot, where the top indicates the maximum number of clusters +1. For experiments where fewer clusters are obtained, there is generally a broad cluster to which most of the data points belong, together with a few more specific ones. Although this can be explained by the nature of the dialogues, where many utterances are related to searching for a hotel, such results are not that useful once we want to analyse the flow of intentions in a dialogue. This fact advocates the importance of adapting the hyperparameters to the experiment and the results we are looking for, regardless of any
| min_samples | min_cluster_size | validity | P (clusters) | R (clusters) | M (clusters) | P (soft) | R (soft) | M (soft) | n clusters |
|---|---|---|---|---|---|---|---|---|---|
| 5 | 10 | \(5.91\times 10^{-2}\) | 0.5509 | 0.6357 | 0.5903 | 0.6527 | 0.0381 | 0.0721 | 161 |
| 10 | 20 | \(5.36\times 10^{-2}\) | 0.5282 | 0.7183 | 0.6088 | 0.6164 | 0.0703 | 0.1262 | 59 |
| 15 | 10 | \(3.14\times 10^{-2}\) | 0.5200 | 0.7790 | 0.6237 | 0.6344 | 0.0525 | 0.0970 | 96 |
| 20 | 10 | \(2.69\times 10^{-2}\) | 0.5155 | 0.7694 | 0.6173 | 0.6038 | 0.0870 | 0.1520 | 50 |
| **25** | **20** | \(2.99\times 10^{-2}\) | 0.5158 | 0.8127 | **0.6311** | 0.5725 | 0.1055 | 0.1781 | **28** |
| 30 | 50 | \(1.35\times 10^{-2}\) | 0.4994 | 0.5578 | 0.5270 | 0.5129 | 0.7407 | 0.6061 | 6 |
| 35 | 30 | \(6.47\times 10^{-3}\) | 0.4971 | 0.5491 | 0.5218 | 0.5117 | 0.7548 | 0.6100 | 6 |
| 40 | 30 | \(5.67\times 10^{-3}\) | 0.4956 | 0.5368 | 0.5154 | 0.5125 | 0.7593 | 0.6120 | 6 |
| **45** | **20** | \(7.52\times 10^{-3}\) | 0.4964 | 0.5272 | 0.5113 | 0.5128 | 0.7743 | **0.6170** | **6** |
| **50** | **80** | \(3.11\times 10^{-1}\) | 0.4885 | 0.5328 | 0.5097 | 0.4921 | 0.8209 | 0.6153 | **3** |

Table 6: Optimal clustering results for each value of min_samples.
Table 5: Grid search of the relative validity index over min_cluster_size (10 to 100, step 10) and min_samples (5 to 50, step 5) for utterances of the hotel domain.
Figure 7: Results of clustering intents in the hotel domain.
computed metric. In Tables 7, 8 and 9, we present details about each cluster, for each clustering experiment of Figure 7.
For the experiment with only 3 obtained clusters (Table 7), it is easy to see that the two specific clusters are related to the hotel price range: cluster 1 (yellow) is probably mostly composed of utterances from the user, due to the high presence of restrictive words ('moderate' and 'cheap'); cluster 2 (purple) should be mostly composed of utterances from the assistant in which a 'preference' is recurrently being asked. The rest of the utterances belong to cluster 0 (magenta), whose most frequent words are most likely obtained directly from the most frequent utterances in the dataset.
In the next experiment (Table 8), there are other, more specific clusters, regarding booking (cluster 0 - magenta), hotel details such as postcode, phone and address (cluster 1 - orange), and requesting a taxi from the hotel to the restaurant (cluster 2 - dark yellow).
The last experiment results in a higher number of clusters, spanning more varied types of intents: a confirmation (cluster 0), a suggestion of another time or date (cluster 1), an acknowledgement that no hotel matches the given criteria (cluster 10), an inquiry about the wifi (cluster 14), etc. The fact that the clusters are more granular also means that the algorithm can split some clusters that could be broader, such as clusters 11 and 12, which both seem to be about a hotel room booking request. One possible explanation is that one cluster contains more utterances from user inquiries, and the other more assistant replies.
In the three clustering experiments, most of the clusters are labelled with either 'hotel-inform' or 'hotel-request', which are the most frequent labels of utterances in the hotel domain, as seen in Figure 4. We can understand that, despite being able to obtain reasonable clusters, it will be difficult for the algorithm to match the level of granularity of the dataset annotations, which explains the low results for the BCubed metrics.
### Analysis of the dialogue flow
For this part of the experiment, we feed the results of the intra-domain clustering of the hotel domain to the sequence analysis tool. In Table 10, the most frequent flows between these 28 clusters are presented, which can be informally analysed by resorting to the most relevant utterances in each cluster.
Clusters 26 and 27, which are composed of utterances where the user is asking for a hotel with some specific restrictions, appear frequently: the former with the intent for a particular star rating, and the latter with parking and/or wifi restrictions. Afterwards, the most common clusters are 10 and 19: cluster 10 identifies the lack of domain entities obeying the given specifications, and cluster 19 suggests a hotel or guesthouse. Cluster 12 is also frequent, usually assigned to utterances where the user is starting the booking process.
Although it is possible to make this correspondence, some cases do not follow these labels, such as the transition \(10\to 19\), which apparently matches two consecutive assistant utterances. As the utterances from the user and the assistant are all clustered at the same time, semantically similar utterances from both parties can be assigned the same cluster. However, this experiment was not focused on separating user and system utterances, as this separation also does not exist in the dataset reference labels: as an example, there are many consecutive 'hotel-inform' utterances.
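A minimal sketch of this flow analysis (the dialogue flows and cluster ids below are illustrative, not MultiWOZ data): each dialogue becomes the sequence of cluster ids assigned to its utterances, and PrefixSpan returns the most frequent subsequences.

```python
# Illustrative sketch: mining frequent flows of cluster ids with PrefixSpan.
from prefixspan import PrefixSpan

dialogue_flows = [
    [26, 19, 10, 10, 26, 15, 2],   # one dialogue as its sequence of cluster ids
    [26, 19, 12, 7],
    [27, 10, 19, 12, 7],
]

ps = PrefixSpan(dialogue_flows)
print(ps.topk(5))        # the 5 most frequent subsequences of clusters
print(ps.frequent(2))    # subsequences supported by at least two dialogues
```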
| cluster | length | persistence | top words | label |
|---|---|---|---|---|
| 0 | 471 | 0.0526 | hotel, guesthouse, date, cardinal, free | hotel-inform |
| 1 | 80 | 0.0213 | range, price, moderate, cheap, don | hotel-inform |
| 2 | 82 | 0.0112 | price, range, options, cardinal, preference | hotel-request |

Table 7: Details of the 3 clusters obtained for the hotel domain.
| cluster | length | persistence | top words by TF-IDF | label |
|---|---|---|---|---|
| 0 | 20 | 0.0674 | reference, number, yes, need, book | hotel-request |
| 1 | 20 | 0.0181 | postcode, phone, address, number, code | hotel-request |
| 2 | 20 | 0.0848 | restaurant, taxi, hotel, time, need | hotel-inform |
| 3 | 309 | 0.0397 | cardinal, guesthouse, hotel, date, free | hotel-inform |
| 4 | 20 | 0.0421 | range, price, moderate, cheap, priced | hotel-inform |
| 5 | 24 | 0.0465 | price, range, options, mind, area | hotel-request |

Table 8: Details of the 6 clusters obtained for the hotel domain.
As an example, we provide a dialogue with its assigned clusters in Figure 8. The dialogue starts with the transition \(26\to 19\), which is the most common transition in the dataset. Afterwards, two consecutive utterances are assigned to cluster 10, which can be justified by their semantic closeness (both are negative sentences). The user then returns to providing hotel restrictions, which is aligned with what we have seen about cluster 26. The following suggestion from the assistant (the \(6^{th}\) utterance) is also assigned to cluster 26, which is not aligned with what we discovered about the clusters -- it should probably be assigned to cluster 19. One justification for these errors is that, since we force the algorithm to assign one cluster to each utterance (we used the soft-clustering results), very weak assignments are also considered. Besides, the most frequent clusters tend to be the least specific ones, which the algorithm has more difficulty classifying. When it comes to the booking itself, the algorithm assigns two different clusters for asking and providing the requirements, 15 and 2, which is in accordance with the main topics extracted from the clusters: the former confirms that the hotel has free parking, and the latter provides the required hotel details.
## 5 Conclusion and Future Work
In this work, we successfully built a framework that is able to identify dialogue intentions, in an unsupervised manner. To do so, we developed a clustering tool for dialogue utterances, which groups them according to their similarity and intention. As seen in the experiments, we were able to obtain reasonable clusters with different levels of granularity,
| cluster | length | persistence | top words by TF-IDF | label |
|---|---|---|---|---|
| 0 | 20 | \(2.00\times 10^{-9}\) | yes, does, fine, sounds, matter | hotel-inform |
| 1 | 20 | \(1.06\times 10^{-7}\) | date, time, try, starting, instead | hotel-inform |
| 2 | 21 | \(3.39\times 10^{-8}\) | phone, number, postcode, date, help | hotel-inform |
| 3 | 20 | \(1.01\times 10^{-7}\) | postcode, phone, number, just, address | hotel-request |
| 4 | 20 | \(3.28\times 10^{-8}\) | address, road, phone, number, town | hotel-request |
| 5 | 21 | \(3.81\times 10^{-7}\) | restaurant, taxi, hotel, time, need | hotel-inform |
| 6 | 21 | \(1.48\times 10^{-7}\) | book, reference, number, yes, sounds | hotel-request |
| 7 | 21 | \(2.39\times 10^{-7}\) | reference, number, yes, need, thank | hotel-request |
| 8 | 20 | \(1.57\times 10^{-7}\) | range, price, moderate, cheap, priced | hotel-inform |
| 9 | 20 | \(1.70\times 10^{-7}\) | price, range, options, mind, area | hotel-request |
| 10 | 20 | \(2.47\times 10^{-8}\) | hotels, hotel, sorry, area, criteria | hotel-nooffer hotel-request |
| 11 | 20 | \(2.85\times 10^{-8}\) | date, people, starting, room, cardinal | hotel-inform |
| 12 | 46 | \(2.28\times 10^{-7}\) | date, people, starting, book, cardinal | hotel-inform |
| 13 | 20 | \(1.97\times 10^{-7}\) | date, people, starting, cardinal, yes | hotel-inform |
| 14 | 26 | \(3.99\times 10^{-1}\) | wifi, does, free, internet, include | hotel-inform |
| 15 | 22 | \(8.29\times 10^{-7}\) | parking, free, does, offer, yes | hotel-inform |
| 16 | 21 | \(6.88\times 10^{-7}\) | area, stay, town, like, prefer | hotel-request |
| 17 | 20 | \(8.89\times 10^{-8}\) | hotel, prefer, preference, guesthouse, hotels | hotel-inform hotel-request |
| 18 | 22 | \(5.30\times 10^{-7}\) | place, stay, looking, need, north | hotel-inform |
| 19 | 20 | \(8.83\times 10^{-8}\) | guesthouse, cardinal, star, like, stars | hotel-inform |
| 20 | 33 | \(7.69\times 10^{-7}\) | guesthouse, lovely, does, tell, house | hotel-recommend |
| 21 | 22 | \(6.40\times 10^{-7}\) | called, hotel, looking, guesthouse, information | hotel-inform |
| 22 | 20 | \(2.87\times 10^{-7}\) | guesthouse, suggest, recommend, prefer, like | hotel-recommend |
| 23 | 20 | \(5.23\times 10^{-8}\) | guesthouse, book, like, room, recommend | hotel-recommend |
| 24 | 21 | \(3.90\times 10^{-9}\) | parking, place, stay, free, cheap | hotel-inform |
| 25 | 20 | \(3.30\times 10^{-7}\) | parking, guesthouse, free, looking, cheap | hotel-inform |
| 26 | 21 | \(1.60\times 10^{-7}\) | star, cardinal, hotel, free, rating | hotel-inform |
| 27 | 40 | \(1.07\times 10^{-7}\) | wifi, free, parking, need, hotel | hotel-inform |

Table 9: Details of the 28 clusters obtained for the hotel domain.
Figure 8: A dialogue example with the assigned clusters.
Table 10: The most frequent sequences of the identified 28 clusters for the hotel domain (columns: n, sequence, frequency).
supporting the idea that the algorithm parameters should be adapted to each use case and nature of the data, regardless of how general the algorithm should be.
Besides, the sequence analysis tool proved able to find relevant flows of intentions in a dialogue, which can be helpful for dialogue management applications. In future work, it would make sense to perform two separate clustering experiments, one for user and one for assistant utterances, to ensure they are not mixed in the same clusters. Depending on the application, this information could even be available, and an analysis of the sequence of user requests without the assistant (and vice versa) could be valuable.
Furthermore, the problem of identifying dialogue flows can be further investigated by modifying the sequence analysis tool to return sequences obeying different specifications, such as a longer length or sequences that do not include a certain cluster. Regardless of that, these results already show that it is possible to identify relevant clusters in a dialogue application and to analyse their most common flows in an unsupervised scenario. Other opportunities for future work are the creation of a taxonomy of intents and its comparison with the one provided in the dataset.
## Acknowledgements
This work was conducted within the IAG (Intelligent Agents Generator) project with the universal code LISBOA-01-0247-FEDER-045385, co-funded by Lisboa 2020, Portugal 2020, and the European Union, through the European Regional Development Fund.
|
2304.03722
|
Beyond Privacy: Navigating the Opportunities and Challenges of Synthetic
Data
|
Generating synthetic data through generative models is gaining interest in
the ML community and beyond. In the past, synthetic data was often regarded as
a means to private data release, but a surge of recent papers explore how its
potential reaches much further than this -- from creating more fair data to
data augmentation, and from simulation to text generated by ChatGPT. In this
perspective we explore whether, and how, synthetic data may become a dominant
force in the machine learning world, promising a future where datasets can be
tailored to individual needs. Just as importantly, we discuss which fundamental
challenges the community needs to overcome for wider relevance and application
of synthetic data -- the most important of which is quantifying how much we can
trust any finding or prediction drawn from synthetic data.
|
Boris van Breugel, Mihaela van der Schaar
|
2023-04-07T16:38:40Z
|
http://arxiv.org/abs/2304.03722v1
|
# Beyond Privacy: Navigating the Opportunities and Challenges of Synthetic Data
###### Abstract
Generating synthetic data through generative models is gaining interest in the ML community and beyond. In the past, synthetic data was often regarded as a means to private data release, but a surge of recent papers explores how its potential reaches much further than this--from creating more fair data to data augmentation, and from simulation to text generated by ChatGPT. In this perspective we explore whether, and how, synthetic data may become a dominant force in the machine learning world, promising a future where datasets can be tailored to individual needs. Just as importantly, we discuss which fundamental challenges the community needs to overcome for wider relevance and application of synthetic data--the most important of which is quantifying how much we can trust any finding or prediction drawn from synthetic data.
## 1 Introduction
**Motivation.** Data is the foundation of most science, but real data can be severely limiting. It may be privacy-sensitive, unfair, unbalanced, unrepresentative, or it may simply not exist. In turn, these limitations continuously constrain how the ML community operates--from training to testing, from model development to deployment. Do we need to accept this as reality, or is there another way? The answer might lie in synthetic data.
Advances in deep generative modelling have seen a steep rise in methods that aim to replace real data with synthetic data. The first uses of synthetic data were mostly privacy-focused, aiming to create realistic synthetic data that mimics the real data but does not disclose sensitive information [35, 39, 91]. More recently, however, there has been an increasing interest in extending synthetic data to use cases where it _improves_ upon real data (see Fig. 1), for example providing better fairness [82, 88, 89], augmenting the dataset size [4, 9, 18, 21], and creating or simulating data for different domains [86, 90]. The latest and most widely acclaimed development is user-prompted data (e.g. OpenAI's DALL-E and ChatGPT), of which the use cases and influences on a wider audience are hard to overstate.
Figure 1: Overview of synthetic data use cases.
In this perspective, we reflect on these advances, and discuss opportunities and challenges that lie ahead. We hope to shed light on whether the ML community requires a radical paradigm shift: away from the uncountable, inherent limitations of real data, towards an ML world where realistic, customizable, synthetic data becomes the lead actor. We will find that though this future looks bright, there are fundamental challenges of synthetic data that need to be urgently overcome before synthetic data can be trusted and used more widely.
**Defining synthetic data.** We focus on data-driven synthetic data, which we define as follows:
**Definition 1**: Given a stochastic mathematical model that outputs data and is fitted on real data with the purpose of describing and mimicking (some part of) the real data's distribution. The data generated by such a model we call **data-driven synthetic data**.
We use the _data-driven_ constraint to focus ourselves on recent ML deep generative models, and **not** traditional, _hand-crafted_ synthetic data--datasets generated by user-defined mathematical equations or physics simulations that are often over-simplified. The "(some part of)" also implies we are often not only interested in mimicking the real data--additional input enables the generation of synthetic data with very different characteristics than the real data, allowing the creation of purpose-built datasets. For example, by modelling the data's distribution conditional on age, we can generate data for an older or younger population than the original. Henceforth, we will drop "data-driven" and simply refer to this as synthetic data.
**Focus.** In this perspective we do not discuss specific generative model architectures (e.g. VAEs [46, 68], GANs [27], score-based/diffusion models [34, 74, 75]), for which good reviews exist (e.g. see [12]). Similarly, although many synthetic data works focus primarily on computer vision, we will not make a distinction between different data modalities--we will see that many of the key applications or issues are modality-independent.
The perspective is split into two parts. In Section 2 we discuss the main use cases of synthetic data with specific opportunities and challenges for each. Subsequently, in Section 3 we highlight general challenges and opportunities.
## 2 Use cases
This section is structured by the main use cases of synthetic data, see Figure 1 and Table 1. In Section 2.1 we discuss the most traditional use case: privacy. Continuing to a more recent strand of research, we discuss three uses of synthetic for settings where real data is insufficient or non-existent. We start at _augmentation_ (Section 2.2), where we have some data for the target domain of interest \(T\), but would like to create more data for this domain in a generative manner. Subsequently, we consider _domain adaptation_ (Section 2.3), where we again assume we have some data for domain \(T\), but we also have data from some related domain \(S\) that we can leverage. Continuing, we introduce _data-driven simulation_ (Section 2.4), where we assume we have data for some domain \(S\), and we use prior knowledge or assumptions on possible distributional shifts to generate data for a different domain, \(T\).
Furthermore, we explore synthetic data for _fairness_ (Section 2.5), where the aim is to debias data. Lastly, in Section 2.6 we discuss user-prompted synthetic data (e.g. ChatGPT).
### Privacy
**Motivation.** Many real-world datasets contain private information: personal, sensitive data about individuals. Sharing this data may not be possible due to the risk of violating data privacy, which in turn impedes scientific research, reproducibility, and ML development itself. Synthetic data is a potential solution. It aims to generate data that has the same distribution as the original data, but that does not disclose information about individuals. This signifies a stark contrast with traditional anonymization methods, which often risk either not being fully private, e.g. allowing linkage attacks [69], or losing too much utility (e.g. when features are coarsened or made noisy).
Synthetic data is not private by default; a generative model could overfit to its training data--learning to memorise samples or disclose sensitive statistics. As a result, a large body of work has focused on how one can adapt generative models to ensure privacy. Overall, the idea is usually to
limit how much any sample can influence the final dataset (e.g. [23, 35]), or to avoid that the synthetic data is highly overfitted to the real data (e.g. [91]). It is beyond the scope of this article to review these works, instead we refer to Appendix B for a high-level overview and [76] for a more detailed discussion.
**Challenges.** Many challenges remain for private synthetic data. First, there is no perfect metric or definition for measuring or guaranteeing privacy. Typical anonymization metrics (e.g. k-anonymity) rely on linkage,1 but linkage is not an issue for generative models since data is created from scratch. Similarly, though differential privacy [23] is a popular definition for privacy, it has serious disadvantages; it is non-intuitive, cannot be measured directly and it is unclear whether it presents a good trade-off of privacy and utility. Attacker models can be used in a white-hat-hacker-fashion for quantifying privacy vulnerability, but since these methods inherently rely on some other ML attacker model (including its parameterisation and training process), more research is required into these attackers' reliability, robustness, and benchmarking capabilities--essential properties for a good metric. We discuss metrics further in Section 3.
Footnote 1: Linkage attacks focus on the ability to identify someone in an anonymized dataset by using some additional knowledge. For example, assume we have access to anonymized hospital records that contain some person we are interested in and we know this person is 1m65 tall, weighs 70kg, and lives in some postcode; we can go through the dataset and might be able to find this person’s record and acquire sensitive medical information.
Secondly, there is a privacy-utility trade-off: generating synthetic data with privacy constraints often leads to slightly noisier distributions, leading to a loss of data utility. Finding the right balance is hard [76], not least because both privacy and utility are hard to quantify. Developing better and more interpretable metrics will be an essential first step for navigating this trade-off.
Thirdly, it is hard to guarantee privacy in a future-proof fashion; something that seems private now may not remain so as attacker models improve and attackers gain access to more data (e.g. as showcased in [83]).
### Augmentation

**Motivation.** Real-world datasets are often small, and models trained on them are prone to
overfitting. Synthetic data can be used to mitigate this, by artificially increasing the size of real datasets.
Augmentation is a popular technique in ML that often leads to better downstream models. In the image and text domain, one usually uses prior knowledge about invariances to augment training data; e.g. rotate images under the assumption that a rotated object is still the same object. Unfortunately, such prior knowledge is harder to acquire in other domains, for example for tabular data. Instead, one can use a generative model for augmentation. The advantage over traditional tabular data augmentation methods (e.g. SMOTE [14]) is that generative models could theoretically describe the true data distribution perfectly. Recent works show for different domains that synthetic data through deep generative modelling can improve downstream models [18, 21, 38, 63], especially for making predictions on small subgroups [5, 10]. The latter is important, because small subgroups may correspond to underrepresented minorities.
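To make this concrete, below is a minimal sketch of generative augmentation for tabular data, using a Gaussian mixture as a stand-in for whichever (deep) generative model one prefers; the dataset, class-conditional modelling choice, and sample sizes are purely illustrative, not those of the cited works.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# toy "real" dataset: 200 samples, 5 features, binary label
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# fit one generative model per class so synthetic samples come with labels
X_parts, y_parts = [X], [y]
for label in (0, 1):
    gm = GaussianMixture(n_components=3, random_state=0).fit(X[y == label])
    X_new, _ = gm.sample(300)                  # 300 synthetic samples per class
    X_parts.append(X_new)
    y_parts.append(np.full(300, label))

X_train = np.vstack(X_parts)
y_train = np.concatenate(y_parts)

# downstream model trained on real + synthetic data
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```

In practice one would typically oversample only the underrepresented groups and validate on held-out real data, since synthetic samples inherit whatever the generative model got wrong.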
**Challenges.** The challenge of generative data augmentation is that the synthetic data may not be perfect. There is thus a trade-off when using synthetic data for augmentation, between data quantity (i.e. increasing dataset size) versus data quality (i.e. creating potentially unrealistic data). There is some idea of why generative data augmentation helps downstream tasks--see Appendix C--but it would be beneficial to gain deeper understanding of _when_ synthetic data augmentation aids models and which choice of generative model (hyperparameters) is most suitable. Ideally, this will result in guidelines that are granular and dataset-dependent, e.g. augment these minority classes, but not the larger class for which there is already enough data. Navigating the quality-quantity trade-off is closely tied in with measuring synthetic data quality, which itself is non-trivial (see Section 3).
### Domain adaptation
Generating synthetic data using domain adaptation makes it possible to create data for settings for which we have little data of our own, which can lead to more accurate and cost-efficient ML.
### Data-driven simulations
**Motivation.** The previous two sections assume access to (some) data from the target distribution and aim to generate more data, but in some cases we may have no target data at all. Why are we interested in settings for which we have no data? One of the main applications is model testing. The world is constantly evolving, hence any trained model may operate in a setting with a different data distribution from the training set--typically leading to overestimated real-world performance [60, 62, 65, 67]. Ideally, when we have trained an ML model (e.g. a classifier), we want to understand, test, and document how it will behave in unseen scenarios--e.g. in future scenarios or on different populations [26].2
Footnote 2: This is also reminiscent of more mature industries; for instance, car manufacturers make use of wind tunnels and crash tests to meet regulation standards, whilst electronic component data sheets outline conditions where reliable operation is guaranteed. However, we cannot always go into the real world to find these datasets.
Generative models can help to simulate these unseen scenarios based on (e.g. historic) data. Specifically, we can use real data to learn some characteristics in the data, e.g. relationships between selected features, but use a priori knowledge, assumptions, and constraints to modify the overall distribution. For example, we may decide to model the source data using a causal generative model and use do-operations to model interventions [61]. In the image domain, another option is to use non-data-driven synthetic data; for example, using CGI-generated images of traffic accidents and adding realism through style transfer [86]. In contrast to manually searching for other datasets in the "wild" [31, 47], simulating data through a deep generative process may provide _realistic_ data that is relatively _low-effort_, _cost-efficient_, and highly _customizable_. Long term, we believe synthetic data may provide practitioners with a standardized process to train, test, and select ML models under a variety of operating conditions--aiding ML performance, reliability, and public acceptance.
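As a minimal illustration of the do-operation idea, the sketch below hand-specifies a tiny linear structural causal model and simulates data under an intervention; the variables, coefficients, and the "treat everyone" scenario are assumptions for illustration only, not a recipe from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, do_treatment=None):
    """Tiny linear SCM: age -> treatment -> outcome, and age -> outcome.

    Passing do_treatment overrides the natural treatment assignment,
    mimicking a do-intervention to simulate an unseen policy.
    """
    age = rng.normal(50, 10, n)
    if do_treatment is None:
        treatment = (age + rng.normal(0, 5, n) > 55).astype(float)
    else:
        treatment = np.full(n, float(do_treatment))
    outcome = 0.05 * age - 1.5 * treatment + rng.normal(0, 1, n)
    return np.column_stack([age, treatment, outcome])

observational = simulate(10_000)                    # mirrors the source data
treat_everyone = simulate(10_000, do_treatment=1)   # simulated shifted scenario
print(observational[:, 2].mean(), treat_everyone[:, 2].mean())
```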
**Challenges.** There are two main challenges for simulated data. The first is the reliability of the synthetic data itself. Model testing or training on synthetic data is only as accurate and reliable as the synthetic data we have generated, and understanding and quantifying synthetic data quality is hard--we will discuss this in Section 3. Nonetheless, synthetic data can still indicate when a model _might_ fail, which could help model developers decide which real (expensive) data to acquire to test this hypothesis.
The second challenge is defining meaningful shifts and constraints for developing standardized testing procedures. This requires understanding distributional shifts better on a case-by-case basis. For example, if we want to predict how a trained model will do in the future, we need to understand how the population and feature relationships may change over time. Ideally, the error in these assumptions is propagated to the data, and in turn to the final analyses.
### Fairness

**Motivation.** Real data often contains biases, and models trained on it may reproduce them.
Data users creating models based on publicly available data may be unaware they are inadvertently including bias in their model, or insufficiently knowledgeable to remove it. Generative models have the potential to play an important role in this area, through generating synthetic data based on unfair real data that does not reflect the same biases.3 By generating one (or multiple4) fair dataset(s), a data publisher can guarantee _any_ downstream model satisfies the desired fairness requirements.
Footnote 3: In its core, this is a form of simulation (Section 2.4), where we simulate a fair world based on a particular definition of fairness. We discuss it separately, because defining fairness is an important and much-studied topic by itself and comes with a number of unique caveats.
Footnote 4: A data publisher could publish different synthetic datasets with different fairness constraints.
**Defining fairness.** Fairness itself is a broad topic. One face of fairness is representation--some groups may not be represented properly in the data. This is a form of imbalance, for which augmentation (Section 2.2) of underrepresented groups can help. Here we focus on a different form of fairness: algorithmic fairness. In its most basic form this considers how some algorithm \(A\)'s outcome (e.g. whether someone should get parole) depends on some attribute \(S\) (e.g. ethnicity). Algorithmic fairness can be further subdivided into different notions of fairness, e.g. direct discrimination and indirect discrimination. 5 In [82, 88, 89, 92], authors show that by carefully constraining a generative model--with constraints given by the fairness requirement--it is possible to generate fair synthetic data based on unfair real data.
Footnote 5: We refer interested readers to existing surveys for an overview (e.g. [57]).
**Challenges.** Some challenges remain. First, a synthetic dataset that seems fair does not necessarily guarantee that a model trained on this data gives fair downstream predictions--we give an example and explanation in Appendix E. This can be solved [82], but requires knowledge of the downstream model's deployment setting, which the data publisher does not always have.
Second, debiasing itself can induce significant data utility loss. This is to be expected: a good but unfair predictor chooses to intentionally use information that it should not, so not allowing this can decrease its performance. As a result, preventing bias entirely may not be desirable. Instead one may be more inclined to use a threshold. For example, the US Supreme Court's 80% rule [2] essentially states that a prediction has disparate impact if for disadvantaged group \(A=1\) and positive outcome \(\hat{Y}=1\), \(\frac{P(\hat{Y}=1|A=1)}{P(\hat{Y}=1|A=0)}<0.8\)[24]. A non-binary vision on fairness also allows balancing multiple fairness notions. This is often necessary, because usually different fairness notions cannot be achieved at the same time, and human subjects do not always find the same notion the most fair [71].
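For concreteness, here is a small sketch of checking the 80% rule on a set of (downstream) predictions; the attribute coding and the toy prediction rates are hypothetical.

```python
import numpy as np

def disparate_impact_ratio(y_pred, sensitive):
    """P(Y_hat=1 | A=1) / P(Y_hat=1 | A=0); a ratio below 0.8 flags disparate impact."""
    p_disadvantaged = y_pred[sensitive == 1].mean()
    p_advantaged = y_pred[sensitive == 0].mean()
    return p_disadvantaged / p_advantaged

# toy predictions on an evaluation set (real or synthetic)
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=1_000)                   # A = 1 is the disadvantaged group
y_pred = rng.binomial(1, np.where(sensitive == 1, 0.30, 0.45))

ratio = disparate_impact_ratio(y_pred, sensitive)
print(f"disparate impact ratio: {ratio:.2f}",
      "-> potential violation" if ratio < 0.8 else "-> within the 80% rule")
```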
A third challenge is defining bias when the sensitive attribute is not explicitly included. For example, in computer vision, there is not a single pixel that reflects "asian". In these domains, human annotations and/or secondary ML systems can be developed to quantify and in turn mitigate bias.6
Footnote 6: Ideally, however, fairness is already taken into account at the acquisition stage—acquiring sensitive attributes (e.g. ethnicity and gender) directly allows more accurate understanding and mitigation of bias.
### User-prompted synthetic data

**Motivation.** Large generative models such as ChatGPT produce data on demand in response to user prompts. On one hand, these tools may not look like the data-driven synthetic data discussed so far: the user typically wants a single useful response rather than a faithful sample from some real data distribution, and issuing the same prompt
multiple times would give very different responses, while diversity in synthetic datasets is often desirable [1].7 On the other hand, user-prompted synthetic data approaches generate data conditionally on (usually text-based) prompts, using deep generative models trained on large quantities of data--thereby satisfying our definition of data-driven synthetic data. More importantly however, we choose to include these methods in this perspective, because they are plagued by some of the same challenges, e.g. the trustworthiness of generated content.
Footnote 7: From a more mathematical perspective: whereas other synthetic data use cases require approximating a distribution closely, user-prompted synthetic data may be satisfactory if it only achieves a point estimate of the conditional (e.g. the mode of \(p(output|prompt)\)).
**Challenges.** The influence of user-prompted synthetic data has already reached far beyond academic circles. It poses unique challenges--concerning copyrights [85], academic writing [80], and the future of education [77]--but also exemplifies more general AI questions--e.g. transparency, trustworthiness, and accountability [84]. Many of these issues can be traced back to the core difficulties of synthetic data generation: quantifying the quality (including factual correctness) and authenticity. These methods are already being deployed, which warrants urgency in solving some of the key synthetic data issues. As an ML community we cannot be complacent and hope someone will fix problems when they arise; one big-scale failure can have dire consequences, not least for the public perception of AI--especially since models like ChatGPT are the first AIs people actively engage with. This is also the silver lining: widespread interest and high anticipated commercial value provide resources to tackle these problems. By contextualising methods like ChatGPT as data-driven synthetic data methods, efforts can be centralized into solving some of these issues.
## 3 General challenges and opportunities
Let us now discuss the general challenges for widespread synthetic data use. In Figure 2 we include 14 major open questions, and we refer to these in the text by their number in the figure, e.g. \(\boxed{1}\).
**Extending applicability.** \(\boxed{1}\) Some settings are significantly underexplored in the generative modelling literature, including high-dimensional tabular data, multimodal data (e.g. medical records consisting of text and tabular fields), and relational databases, as well as combining use cases (e.g. fairness _and_ privacy). Extending applicability increases the potential of synthetic data for both scientific and industrial use.
**Metrics.** Trust in synthetic data requires measurability of desired properties--quality, privacy, etc. Not only is it hard to measure some of these properties, even defining them is a challenge. Current metrics for quality and privacy are insufficient, see Table 2 in Appendix A for an overview. We envision five properties for better synthetic data metrics. First, metrics should provide insight into different aspects of synthetic data (e.g. fidelity, diversity, and authenticity as in [1]). Secondly, they should be interpretable to a downstream user. Interpretability allows them to decide whether to trust the downstream analysis, and also helps choose a good trade-off between competing metrics (e.g. privacy-utility or fairness-utility). Third, the ideal metric should allow for granular evaluation; instead of just knowing how "good" a synthetic dataset is overall, we would like to know whether some groups are misrepresented, more vulnerable to privacy attacks, etc. Fourth, it should be reliable and robust, enabling trustworthy benchmarking and providing consistent results across studies and implementations. Fifth, it should be data-efficient and robust to varying amounts of data available. For example, many metrics (e.g. [91]) require that real and synthetic datasets are the same size, but real data is usually limited whereas synthetic data can be generated freely. As long as no set of metrics exists that satisfies all these desiderata, the use of synthetic data will be severely limited.
**Which generative model to use?** A plethora of generative models are being developed, but there is little insight into which model should be used when. Evidently, having better metrics would aid this process by facilitating generative model selection--potentially trading off different metrics depending on the downstream use case. This can be aided further through model standardization and more unified generative modelling libraries (e.g. see [64]), including more benchmarking datasets for different generative modelling settings.
**Outliers and underrepresented regions.** Low-density regions are more difficult to model, because by definition there are fewer samples to learn from. This is especially true when privacy constraints are used, since these methods limit the influence of any single point. This poses a problem for synthetic data, since low-density regions are more likely to correspond to outliers--which can be of primary interest to scientists--and minority groups--incorrect modelling of which could lead to discrimination. On the positive side, Section 2.2 explored how generative models have been shown to in fact _improve_ model performance in low-density regions, compared to using the real data only. \(\boxed{6}\) Understanding the limitations of generative modelling in data-scarce regions remains an underexplored challenge.
Figure 2: Selection of open questions in the synthetic data realm. Numbering does not indicate perceived importance.

**Understanding influence of synthetic data on downstream task.** Going further than metrics, \(\boxed{7}\) we want to understand _how_ the synthetic data step affects our results. For example, errors and uncertainty in the generative process should be propagated to downstream uncertainty quantification and significance tests.
**Data clearing house for verification protocols.** Until the above-mentioned reliability challenges are resolved, acceptance of results derived from synthetic data will be low and will hence require some form of verification. Verifying results on real data is not always possible or desirable, however, because data users may not have access to real data, and data publishers do not have the capacity to verify any end-user's result. \(\boxed{8}\) One possibility is to create a protected data entity, which _does_ have access to real data and can automatically run tests. We envision this as an API that can answer queries, yet is carefully restricted from leaking too much information about the real data--e.g. users need to be verified, cannot test too many hypotheses, and the API's output is partly hidden or noisy.
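To make this more tangible, a minimal sketch of such a restricted verification API is given below; the verification scheme, per-user query budget, and noise level are all assumptions chosen for illustration rather than a proposed standard.

```python
import numpy as np

class VerificationAPI:
    """Toy data clearing house: verified users may run a limited number of
    queries against the real data and only ever receive noisy answers."""

    def __init__(self, real_data, authorized_users, max_queries=10, noise_scale=0.05):
        self._data = real_data                        # dict: column name -> numpy array
        self._budget = {u: max_queries for u in authorized_users}
        self._noise = noise_scale
        self._rng = np.random.default_rng(0)

    def query_mean(self, user, column):
        if user not in self._budget:
            raise PermissionError("user not verified")
        if self._budget[user] <= 0:
            raise RuntimeError("query budget exhausted")
        self._budget[user] -= 1
        true_value = float(np.mean(self._data[column]))
        return true_value + self._rng.normal(0, self._noise)   # noisy answer only

# hypothetical usage
api = VerificationAPI({"age": np.array([34, 51, 47, 29])}, authorized_users=["alice"])
print(api.query_mean("alice", "age"))
```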
**Publishing and access.** Lastly, \(\boxed{13}\) there is the question of how synthetic data should be published. There are three options: (i) publishing the generative model, (ii) publishing only the data, and (iii) API access to generate based on some user input (e.g. ChatGPT). Each has its advantages and limitations (see Appendix F), and data publishers may choose different forms of access depending on the context.
## 4 Conclusion
Synthetic data offers a broad range of advantages over real data: from privacy to user-prompted generation aids, and from better (more accurate, robust, and fair) downstream models to cost-efficient and more versatile use of real data. We believe that, long term, standardized procedures for generating synthetic data will become key components of model training, testing, and benchmarking. For synthetic data to really change the ML world, however, key challenges remain. We urge the community to shift resources: focus less on creating stronger generative models (e.g. that create more realistic pictures), and more on understanding the quality of synthetic data and its effect on the validity of downstream results or predictions. With the world's eye caught by generative models, fundamental understanding of the trustworthiness of synthetic data is urgently required.
|
2305.14177
|
ChemGymRL: An Interactive Framework for Reinforcement Learning for
Digital Chemistry
|
This paper provides a simulated laboratory for making use of Reinforcement
Learning (RL) for chemical discovery. Since RL is fairly data intensive,
training agents `on-the-fly' by taking actions in the real world is infeasible
and possibly dangerous. Moreover, chemical processing and discovery involves
challenges which are not commonly found in RL benchmarks and therefore offer a
rich space to work in. We introduce a set of highly customizable and
open-source RL environments, ChemGymRL, based on the standard Open AI Gym
template. ChemGymRL supports a series of interconnected virtual chemical
benches where RL agents can operate and train. The paper introduces and details
each of these benches using well-known chemical reactions as illustrative
examples, and trains a set of standard RL algorithms in each of these benches.
Finally, discussion and comparison of the performances of several standard RL
methods are provided in addition to a list of directions for future work as a
vision for the further development and usage of ChemGymRL.
|
Chris Beeler, Sriram Ganapathi Subramanian, Kyle Sprague, Nouha Chatti, Colin Bellinger, Mitchell Shahen, Nicholas Paquin, Mark Baula, Amanuel Dawit, Zihan Yang, Xinkai Li, Mark Crowley, Isaac Tamblyn
|
2023-05-23T15:56:17Z
|
http://arxiv.org/abs/2305.14177v1
|
# ChemGymRL: An Interactive Framework for Reinforcement Learning for Digital Chemistry
###### Abstract
This paper provides a simulated laboratory for making use of Reinforcement Learning (RL) for chemical discovery. Since RL is fairly data intensive, training agents 'on-the-fly' by taking actions in the real world is infeasible and possibly dangerous. Moreover, chemical processing and discovery involves challenges which are not commonly found in RL benchmarks and therefore offer a rich space to work in. We introduce a set of highly customizable and open-source RL environments, **ChemGymRL**, based on the standard Open AI Gym template. ChemGymRL supports a series of interconnected virtual chemical _benches_ where RL agents can operate and train. The paper introduces and details each of these benches using well-known chemical reactions as illustrative examples, and trains a set of standard RL algorithms in each of these benches. Finally, discussion and comparison of the performances of several standard RL methods are provided in addition to a list of directions for future work as a vision for the further development and usage of ChemGymRL.
Machine Learning, Reinforcement Learning, Digital Chemistry
## 1 Introduction
In Material Design, the goal is to determine a pathway of chemical and physical manipulation that can be performed on some starting materials or substances in order to transform them into a desired target material. The aim of this research is to improve aspects of this process using reinforcement learning (RL). Here, we introduce a collection of interconnected RL environments called **ChemGymRL** using the OpenAI-Gym standard which are designed for experimentation and exploration with RL within the context of discovery and optimization of chemical synthesis. These environments are each a virtual variant of a chemistry "bench", an experiment or process that would otherwise be performed in real-world chemistry labs.
Recently, there has been a growth in research using automated robotics for chemical laboratories (Jiang et al., 2022; She et al., 2022; Caramelli et al., 2021; Fakhruldeen et al., 2022; Rooney et al., 2022; Hickman et al., 2023; Bennett and Abolhasani, 2022; Porwol et al., 2020) and research into improving/studying these automated systems (Manzano et al., 2022; MacLeod et al., 2022; MaelCoel et al., 2022; Seifrid et al., 2022; Flores-Leonar et al., 2020). Additionally, there has been a lot of progress on developing/using digital chemistry as a way to accelerate research for material/drug discovery (M. Mehr et al., 2023; Bubliauskas et al., 2022; Pizzuto et al., 2022; Pyzer-Knapp et al., 2021; Li et al., 2021; Fievez et al., 2022; Yoshikawa et al., 2023; Choubisa et al., 2023; Roch et al., 2020), some even using RL (Volk et al., 2023; Zhou et al., 2017; Gottipati et al., 2020). Even with these advancements, the biggest roadblock for automated laboratories is the access to data to train them on (Editorial, 2023).
The goal of ChemGymRL is to simulate enough of complexity of real-world chemistry experiments to allow meaningful exploration of algorithms for learning policies to control bench-specific agents, while keeping it simple enough that episodes can be rapidly generated during the RL algorithm development process. The environment supports the training of RL agents by associating positive and negative rewards based on the procedure and outcomes of actions taken by the agents. The aim is for ChemGymRL to help bridge the gap between autonomous laboratories and digital chemistry.
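Because each bench follows the standard Gym template, interacting with a bench looks like an ordinary Gym episode loop, as in the sketch below. The environment id is a placeholder and the random policy merely stands in for a trained RL agent; the actual registered names and wrappers should be taken from the ChemGymRL documentation.

```python
import gym  # ChemGymRL benches follow the standard OpenAI Gym template

# "ReactionBench-v0" is a placeholder id; consult the ChemGymRL docs for the
# actual registered environment names.
env = gym.make("ReactionBench-v0")

obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()         # random policy as a stand-in for an RL agent
    obs, reward, done, info = env.step(action)
    episode_return += reward                   # reward reflects yield/purity of target materials
print("episode return:", episode_return)
```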
The ChemGymRL open-source library enables the use of RL algorithms to train agents that search for an optimal path towards a suitable outcome, by means of a comprehensive reward system and numerous avenues for achieving that outcome.
Our goal is to create an environment that allows scientists and researchers to simulate chemical laboratories for the development of RL agents. For detailed information about this software package, documentation and tutorials including code and videos can be found at [https://www.chemgymrl.com/](https://www.chemgymrl.com/).
We would like to specifically highlight ChemGymRL as a unique testbed for RL research. Since ChemGymRL is open source and highly customizable, it provides a training environment to accelerate RL research in several directions (especially pertaining to different RL sub-areas like hierarchical RL (Sutton et al., 1999), model-based RL (Moerland et al., 2023), distributional RL (Bellemare et al., 2017), curriculum learning (Narvekar et al., 2020), constrained RL (Liu et al., 2021), inverse RL (Ng & Russell, 2000) etc.), in addition to providing a useful training environment with real-world applications (more discussions in Section 11.1). While the majority of prior RL environments have focused on computer game domains with specific challenges for the RL community1, ChemGymRL pertains to a comparatively large space of RL challenges since it is associated with a real-world problem having open world difficulties. From the perspective of an RL researcher, there is a critical need for new test beds based on real-world applications that can test the limits of modern RL algorithms, which will accelerate the development of RL. We feel that ChemGymRL will be highly beneficial to the RL community in this context.
Footnote 1: For example Sokoban-RL (Shoham & Elidan, 2021) pertains to model-based RL, MineRL (Guss et al., 2019) corresponds to curriculum learning etc., however ChemGymRL is a general testbed that can support a wide-variety of RL paradigms and algorithms.
## 2 ChemGymRL
### The Laboratory
ChemGymRL is designed in a modular fashion so that new benches can be added or modified with minimal difficulty or changes to the source code. The environment can be thought of as a virtual chemistry laboratory consisting of different stations or benches where a variety of tasks can be completed, represented in Fig. 1(a). The laboratory is comprised of 3 basic elements: vessels, shelves, and benches. _Vessels_ contain materials, in pure or mixed form, with each vessel tracking the hidden internal state of its contents, as shown in Fig. 1(b). Whether an agent can determine this state, through measurement or reasoning, is up to the design of each bench and the user's goals. A _shelf_ can hold any _vessels_ not currently in use, as well as the results (or output vessels) of previous experiments, and is used to allocate vessels to benches and bench agents.
Finally, each _chemistry bench_ recreates a simplified version of one task in a material design pipeline. A bench must be able to receive a set of initial experimental supplies, possibly including vessels, and return the results of the intended experiment, also including modified vessels. The details and methods of how the benches interact with the vessels between these two points are completely up to the user, including the goal of the bench. In this initial version of ChemGymRL we have defined our own core benches, which we describe in the following sections and which allow us to demonstrate an example workflow.
Figure 1: (a) The ChemGymRL environment. Individual agents operate at each bench, working towards their own goals. The benches pictured are extraction (Ext), distillation (DiT), and reaction (RxN). The user determines which materials the bench agents have access to and what vessels they start with. Vessels can move between benches; the output of one bench becomes an input of another, just as in a real chemistry lab. Successfully making a material requires the skilled operation of each individual bench as well as using them as a collective. (b) Materials within a laboratory environment are stored and transported between benches within a vessel. Benches can act on these vessels by combining them, adding materials to them, allowing for a reaction to occur, observing them (thereby producing a measurement), etc.
### Reaction Bench (RxN)
The sole purpose of the **reaction bench (RxN)** is to allow the agent to transform available reactants into various products via a chemical reaction. The agent has the ability to control temperature and pressure of the vessel as well as the amounts of reactants added. The mechanics of this bench are quite simple in comparison to real-life which enables low computational cost for RL training. Reactions are modelled by solving a system of differential equations which define the rates of changes in concentration (See Appendix).
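As a rough sketch of this style of model (not ChemGymRL's actual reaction system), the concentrations in a single toy reaction A + B → C can be stepped forward with a simple rate law and forward-Euler integration; the rate constant and time step below are arbitrary illustrative values.

```python
def simulate_reaction(conc, k, dt=0.01, steps=1000):
    """Forward-Euler integration of a toy rate law for A + B -> C.

    conc: dict of concentrations (mol/L); k: rate constant.
    d[C]/dt = k [A][B]; A and B are consumed at the same rate C is produced.
    """
    c = dict(conc)
    for _ in range(steps):
        rate = k * c["A"] * c["B"]
        c["A"] -= rate * dt
        c["B"] -= rate * dt
        c["C"] += rate * dt
    return c

print(simulate_reaction({"A": 1.0, "B": 1.0, "C": 0.0}, k=0.5))
```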
The goal of the agent operating on this bench is to modify the reaction parameters, in order to increase and/or decrease the yield of certain desired/undesired materials. The key to the agent's success in this bench is learning how best to allow certain reactions to occur such that the yield of the desired material is maximized and the yield of the undesired material is minimized.
_Observation Space:_ In this bench, the agent is able to observe a UV-Vis absorption spectrum of the materials present in the vessel as shown in Fig. 2(a), as well as the normalized temperature, volume, pressure, and available materials for the system.
_Action Space:_ The agent can increase or decrease the temperature and volume of the vessel, as well as add any fraction of the remaining reactants available to it. In this bench, the actions returned by an agent form a continuous-valued vector of size \(n+2\), where \(n\) is the number of reactants. These actions are also shown in Fig. 2(b).
A main feature of ChemGymRL is its modularity. If one wanted to make the results of RxN more accurate and generalizable, they could replace the current system of differential equations with a molecular dynamics simulation without needing to change how the agent interacts with the bench or how the bench interacts with the rest of ChemGymRL.
### Extraction Bench (ExT)
Chemical reactions commonly result in a mixture of desired and undesired products. Extraction is a method to separate them. The extraction bench (ExT) aims to isolate and extract certain dissolved materials from an input vessel containing multiple materials through the use of various insoluble solvents. This is done by means of transferring materials between a number of vessels and utilizing specifically selected solvents to separate materials from each other.
A simple extraction experiment example is extracting salt from an oil solvent using water. Suppose we have a vessel containing sodium chloride dissolved in hexane. Water is added to the vessel and the vessel is shaken to mix the two solvents. When the contents of the vessel settle, the water and hexane will have separated into two different layers. Sodium chloride is an ionic compound, therefore there is a distinct separation of charges when dissolved. Due to hexane being a non-polar solvent and water being a polar solvent, a large portion of the dissolved sodium chloride is pulled from the hexane into the water. Since water has a higher density than hexane, it is found at the bottom of the vessel and can be easily drained away, bringing the dissolved sodium chloride with it.
_Observation Space:_ For a visual representation of the solvent layers in the vessel for the agent, as seen in Fig. 3(a)-(c), we sample the solvent corresponding to each pixel using the relative heights of these distributions as probabilities. This representation makes this bench a partially observable Markov decision process (POMDP). The true state is not observed because the observations show neither the amount of dissolved solutes present nor their distribution throughout the solvents. In this setup, very light solvents will quickly settle at the top of the vessel, while very dense solvents will quickly settle at the bottom. The more similar two solvents' densities are, the longer they will take to fully separate.
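A rough sketch of producing such a pixel observation is given below; the layer model (Gaussian layer positions whose widths shrink as the vessel settles) and all numeric values are assumptions for illustration, not the bench's exact equations.

```python
import numpy as np

def render_layers(amounts, layer_centres, settle, n_pixels=100, seed=0):
    """Sample a 1-D column of pixels, one solvent index per pixel.

    amounts: relative volume of each solvent; layer_centres: equilibrium height
    of each layer in [0, 1]; settle in (0, 1]: larger = more settled = sharper layers.
    """
    rng = np.random.default_rng(seed)
    heights = np.linspace(0, 1, n_pixels)
    width = 0.5 * (1.0 - settle) + 0.02        # fully mixed -> wide, settled -> narrow
    # density of each solvent at each pixel height, weighted by its amount
    dens = np.array([a * np.exp(-0.5 * ((heights - c) / width) ** 2)
                     for a, c in zip(amounts, layer_centres)])
    probs = dens / dens.sum(axis=0)
    return np.array([rng.choice(len(amounts), p=probs[:, i]) for i in range(n_pixels)])

# e.g. water (bottom), hexane (middle), air (top), almost fully settled
pixels = render_layers([1.0, 1.0, 0.5], [0.15, 0.5, 0.85], settle=0.95)
```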
Figure 2: A visualization of the reaction bench (RxN) observation and action space. (a) An example of a UV-Vis spectrum that would be seen in an observation and (b) The icons representing each action in RxN.

_Action Space:_ The agent here has the ability to mix the vessel or let it settle, add various solvents to the vessel, drain the contents of the vessel into an auxiliary vessel bottom first, pour the contents of the vessel into a secondary auxiliary vessel, and pour the contents of either auxiliary vessel into each other or back into the original vessel. Unlike in RxN, only one of these actions can be performed at once, therefore the actions returned by the agent conceptually consist of two discrete values. The first value determines _which_ of the processes is performed and the second value determines the magnitude of that process. If the drain action is selected by the first value, then the second value determines how much is drained from the vessel. Including the ability to end the experiment, the agent has access to 8 actions with 5 action multiplier values each. These actions are depicted in Fig. 3(d). Practically however, the actions returned by the agent consist of a single discrete value to reduce redundancy in the action space.
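The flattening can be pictured as below. This is a simplified sketch: the real bench also prunes combinations that are redundant (for example, ending the experiment needs no multiplier), so the true action count can be smaller than the full product.

```python
# Hypothetical flattening of a (process, multiplier) pair into one discrete action.
N_PROCESSES, N_MULTIPLIERS = 8, 5

def flatten(process, multiplier):
    return process * N_MULTIPLIERS + multiplier

def unflatten(action):
    return divmod(action, N_MULTIPLIERS)   # -> (process, multiplier)

assert unflatten(flatten(3, 2)) == (3, 2)
```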
The goal of the agent in this bench is to use these processes in order to maximize the purity of a desired _solute_ relative to other _solutes_ in the vessel. This means the agent must isolate the desired solute in one vessel, while separating any other solutes into the other vessels. Note that the solute's relative purity _is not_ affected by the presence of solvents, only the presence of other solutes.
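A sketch of this relative (solute) purity metric is shown below; the material names are illustrative and the real bench reads the amounts from its vessel objects.

```python
def solute_purity(solutes, target):
    """Molar fraction of `target` among the solutes only; solvents are ignored."""
    total = sum(solutes.values())
    return solutes.get(target, 0.0) / total if total > 0 else 0.0

# A vessel holding 0.7 mol dodecane and 0.2 mol each of Na+ and Cl- (all dissolved):
print(solute_purity({"dodecane": 0.7, "Na+": 0.2, "Cl-": 0.2}, "dodecane"))  # 7/11
```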
Similar to RxN, if one wanted to make the results of ExT more realistic, they could replace the separation equations with a fluid dynamics simulation without needing to change how the agent interacts with the bench or how the bench interacts with the rest of ChemGymRL.
### Distillation Bench (DiT)
Similar to the ExT, the distillation bench (DiT) aims to isolate certain materials from an inputted vessel containing multiple materials (albeit with a different process). This is done by means of transferring materials between a number of vessels and heating/cooling the vessel to separate materials from each other.
A simple distillation example is extracting a solute dissolved in a single solvent. Suppose we have a vessel containing sodium chloride dissolved in water. If we heat the vessel to \(100^{\circ}\)C, the water will begin to boil. With any added heat, more water will evaporate and be collected in an auxiliary vessel, leaving the dissolved sodium chloride behind to precipitate out as solid sodium chloride in the original vessel.
_Observation Space:_ For a visual representation for the agent, we use the same approach described for ExT. For the precipitation of any solutes, we define a precipitation reaction and use the same approach described for RxN.
_Action Space:_ The agent has the ability to heat the vessel or let it cool down and pour the contents of any of the vessels (original and auxiliaries) into one another. When the agent heats/cools the vessel, the temperature of the vessel and its materials are altered by
\[\Delta T=\frac{Q}{C}, \tag{1}\]
where \(Q\) is the amount of heat added and \(C\) is the total heat capacity of the contents of the vessel. However, if the temperature of the vessel is at the boiling point of one of its materials, the temperature no longer increases. Instead, any heat added is used to vaporize that material according to
\[\Delta n_{l}=\frac{Q}{\Delta H_{v}}, \tag{2}\]
where \(n_{l}\) is the number of mols of the material in the liquid phase and \(\Delta H_{v}\) is the enthalpy of vaporization for that material. Similar to the ExT, only one of these processes can be done at a time, therefore the actions returned by the agent consist of two discrete values again. Including the ability to end the experiment, the agent has access to 4 actions with 10 action multiplier values each. These actions are depicted in Fig. 4. Again, the actions returned by the agent consist of a single discrete value to reduce redundancy in the action space.
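A minimal sketch of one heating step following Eqs. (1) and (2) is given below, tracking only a single boiling material. The numerical property values are rough placeholders, and corner cases handled by the real bench (several materials, all of the liquid vaporizing) are ignored.

```python
def add_heat(Q, T, heat_capacity, boiling_point, n_liquid, dH_vap):
    """Return the new temperature and remaining liquid mols after adding heat Q."""
    # Heat needed to bring the vessel up to the boiling point.
    Q_to_boil = max(0.0, (boiling_point - T) * heat_capacity)
    if Q <= Q_to_boil:
        return T + Q / heat_capacity, n_liquid                 # Eq. (1)
    # At the boiling point, the remaining heat vaporizes the material.
    n_vaporized = min(n_liquid, (Q - Q_to_boil) / dH_vap)      # Eq. (2)
    return boiling_point, n_liquid - n_vaporized

# Rough placeholder values loosely inspired by diethyl ether (units: J, K, mol).
print(add_heat(Q=5.0e3, T=298.0, heat_capacity=300.0,
               boiling_point=307.8, n_liquid=4.0, dH_vap=26.5e3))
```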
Figure 3: Typical observations seen in extraction bench (ExT) for a vessel containing air, hexane, and water. (a) The vessel in a fully mixed state. Each material is uniformly distributed throughout the vessel with little to no distinct layers formed. (b) The vessel in a partially mixed state. The air has formed a single layer at the top of the vessel and some distinct water and hexane layers have formed, however they are still mixed with each other. (c) The vessel in a fully settled state. Three distinct layers have formed, ordered by density: water at the bottom, then hexane, with air on top. (d) The icons representing each action and their multiplier values available in ExT. The extraction vessel (EV) is the primary vessel used, B1/B2 are the auxiliary vessels used in the experiment, and S1/S2 are the solvents available.
The goal of the agent in this bench is to use these processes to maximize the absolute purity of a desired material in the vessel. This means the agent must isolate the desired _material_ in one vessel, while separating any other _materials_ into other vessels. Note that unlike ExT, the material's absolute purity is affected by the presence of all materials.
### Characterization Bench
In general, it is impossible to determine the exact contents of a vessel just by looking at it. Techniques exist to help characterize the contents of a vessel, however each comes with a cost. The primary cost is the monetary cost to acquire/maintain/run the instrument used. In some cases, the sample of the vessel contents being measured is destroyed during the measurement, thus incurring a different type of cost.
The characterization bench is the primary method to obtain insight as to what the vessel contains. The purpose of the characterization bench is not to manipulate the input vessel, but to subject it to analysis techniques that observe the state of the vessel, possibly including the materials inside it and their relative quantities. This does not mean that the contents of the input vessel cannot be modified by the characterization bench. This allows an agent or user to observe vessels, determine their contents, and allocate the vessel to the necessary bench for further experimentation.
The characterization bench is the only bench that is not "operated". A vessel is inputted to the bench along with a characterization method and the results of said method on that vessel are returned. Currently, the characterization bench consists of a UV-Vis spectrometer that returns the UV-Vis absorption spectrum of the inputted vessel. Each material in ChemGymRL has a set of UV-Vis absorption peaks defined and the UV-Vis spectrum for a vessel is the combination of the peaks for all materials present, weighted proportionally by their concentrations. In future versions of ChemGymRL we will expand the characterization bench to include other forms of partial observation.
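The spectrum construction can be sketched as a concentration-weighted sum of per-material peaks. The peak parameters and material names below are hypothetical; the actual bench uses its own stored peak tables and wavelength grid.

```python
import numpy as np

wavelengths = np.linspace(200.0, 800.0, 200)   # nm, illustrative grid

def uv_vis_spectrum(concentrations, peaks):
    """Sum Gaussian absorption peaks of each material, weighted by concentration."""
    total = np.zeros_like(wavelengths)
    for material, conc in concentrations.items():
        for centre, width, height in peaks.get(material, []):
            total += conc * height * np.exp(-0.5 * ((wavelengths - centre) / width) ** 2)
    return total

peaks = {"dodecane": [(210.0, 15.0, 1.0)], "NaCl": [(220.0, 10.0, 0.4)]}
absorbance = uv_vis_spectrum({"dodecane": 0.5, "NaCl": 0.1}, peaks)
```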
## 3 Reinforcement Learning
Reinforcement Learning (RL) (Sutton and Barto, 2018) is a class of methods for solving problems formalized as Markov Decision Processes (MDPs). MDPs are represented as a tuple \(\langle S,A,R,T,\gamma\rangle\) where \(s\in S\subseteq\mathds{R}^{n}\) denotes the state space, \(a\in A\subseteq\mathds{R}^{m}\) denotes the action space, \(r\in R\subseteq\mathds{R}\) denotes the reward function and \(T=P(s_{t+1}|s_{t},a_{t})\) denotes the transition dynamics that provide the probability of state \(s_{t+1}\) at the next time step given that the agent is in state \(s_{t}\) and performs action \(a_{t}\). The objective for an RL agent is to learn a policy \(\pi(a|s)\) that maximizes the discounted sum of expected rewards given by \(J_{\pi}(s)=\mathbb{E}_{\pi}[\sum_{t=0}^{\infty}\gamma^{t}r_{t}|s_{0}=s]\), where \(\gamma\in[0,1)\) is the discount factor.
In RL there are two broad classes of algorithms that exist in the literature. The first is \(Q\)-learning, where a \(Q\)-function is learned iteratively using the Bellman optimality operator \(\mathcal{B}^{*}Q(s,a)=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim T(s^{\prime}|s, a)}[\max_{a^{\prime}}Q(s^{\prime},a^{\prime})]\). Here \(s\) and \(s^{\prime}\) denote the current and next state respectively, and \(a\) and \(a^{\prime}\) denote the current and next action respectively. Finally, an exact or approximate scheme of maximization is used to extract the greedy policy from the \(Q\)-function. As seen from the max operator being used in the Bellman equation, this family of methods only applies to environments with a discrete set of actions.
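A tabular sketch of one sampled application of this operator is shown below. The agents in this paper use the deep, function-approximation variant (DQN) rather than a table, so this is only meant to make the update concrete.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One sampled Bellman optimality backup on a tabular Q (shape: |S| x |A|)."""
    td_target = r + gamma * np.max(Q[s_next])      # r + gamma * max_a' Q(s', a')
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((5, 3))                               # toy state/action sizes
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
```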
The second class of RL algorithms is actor-critic, where the algorithm alternates between computing a value function \(Q^{\pi}\) by a (partial) policy evaluation routine using the Bellman operator on the stochastic policy \(\pi\), and then improving the current policy \(\pi\) by updating it towards selecting actions that maximize the estimate maintained by the \(Q\)-values. This family of methods applies to both discrete and continuous action space environments. Because of this, actor-critic methods may be used on any bench in the ChemGymRL environment.
## 4 Case Study
As a simple example, we outline how a particular chemical production process uses each of the benches.
### Wurtz Reaction
Wurtz reactions are commonly used for the formation of certain hydrocarbons. These reactions are of the form:
\[2\text{R-Cl}+2\text{Na}\xrightarrow{\text{diethyl ether}}\text{R-R}+2\text{ NaCl}. \tag{3}\]
Figure 4: The icons representing each action and their multiplier values available in DiT. The distillation vessel (DV) is the primary vessel and B1/B2 are the auxiliary vessels in the experiment.

Here we consider the case of hexane (C\({}_{6}\)H\({}_{14}\)) for R, where one hydrogen atom is replaced with chlorine, giving us 1-, 2-, and 3-chlorohexane as reactants with sodium. Note that we may have 2R-Cl and R-R be replaced with R\({}_{1}\)-Cl, R\({}_{2}\)-Cl, and R\({}_{1}\)-R\({}_{2}\) in this reaction format. Tab. 1 shows the possible outcomes of this reaction. Note that it is impossible to produce just 5-methylundecane, 4-ethyldecane, or 4-ethyl-5-methylnonane. If the desired reaction is
\[\text{R}_{1}\text{-Cl}+\text{R}_{2}\text{-Cl}+2\text{Na}\xrightarrow{\text{ diethyl ether}}\text{R}_{1}\text{-R}_{2}+2\text{NaCl}, \tag{4}\]
then we will unavoidably also have
\[\text{2R}_{1}\text{-Cl}+2\text{Na}\xrightarrow{\text{diethyl ether}}\text{R}_{1}\text{-R}_{1}+2\text{NaCl} \tag{5}\] \[\text{2R}_{2}\text{-Cl}+2\text{Na}\xrightarrow{\text{diethyl ether}}\text{R}_{2}\text{-R}_{2}+2\text{NaCl},\]
occurring simultaneously.
The Wurtz reaction is interesting and challenging because the yield varies greatly between each product, making it difficult to train an agent which can optimally make each of them.
### Workflow
Suppose that we have the previously listed chlorohexanes, sodium, diethyl ether, and water available to us with the goal to produce dodecane. Using RxN we can add diethyl ether, 1-chlorohexane, and sodium to a vessel. With time, this will produce a vessel containing dodecane and sodium chloride dissolved in diethyl ether. The UV-Vis spectrometer in the RxN can be used to measure the progression of the reaction.
The vessel can then be brought to the ExT to separate dodecane from sodium chloride. Dodecane is non-polar, so if we add water to the vessel and mix, most of the sodium chloride will be extracted into the water while most of the dodecane will be left in the diethyl ether. We can then drain the water out of the vessel while keeping the diethyl ether. While it's impossible to get all of the sodium chloride out with this method, we can repeat this process to increase the purity of dodecane.
The vessel can then be brought to the DiT to separate the dodecane from the diethyl ether. Diethyl ether has a much lower boiling point than dodecane so it will boil first. Heating the vessel enough will cause all of the diethyl ether to vaporize, leaving the dodecane in the vessel with trace amounts of sodium chloride.
Alternatively, because dodecane has a much lower boiling point than sodium chloride, we can skip the ExT and bring the vessel to DiT right after RxN. As before, heating the vessel enough will cause all of the diethyl ether to vaporize, condensing into an auxiliary vessel. We can then pour the collected diethyl ether elsewhere so that the auxiliary vessel that collected the vaporized materials is empty again. If the vessel is heated up even further now, the dodecane will be vaporized and collected into the empty auxiliary vessel, leaving the sodium chloride behind. Our auxiliary vessel now contains pure dodecane, concluding the experiment.
While this example workflow uses the benches in a specific order, more complicated experiments may use them in a completely different order or even use each bench multiple times. Given specific goals, below we will outline how RL can be used to emulate this behavior for various cases.
## 5 RL Details
In our discrete benches, Proximal Policy Optimization (PPO) (Schulman et al., 2017), Advantageous Actor-Critic (A2C) (Mnih et al., 2016) and Deep Q-Network (DQN) (Mnih et al., 2015) were used. In our continuous benches, Soft Actor-Critic (SAC) (Haarnoja et al., 2018) and Twin Delayed Deep Deterministic Policy Gradient (TD3) (Fujimoto et al., 2018) were used instead of DQN. Note that we can choose between SAC and TD3 fairly arbitrarily since they are both off-policy algorithms that use Q-learning.
Unless otherwise specified, all RL agents were trained for 100K time steps across 10 environments in parallel (for a total of 1M time steps). Training was done by repeatedly gathering 256 time steps of experience (in each environment), then updating our policy and/or Q-function with this new experience. Since PPO and A2C are on-policy, their policies were only updated with the 2560 steps of new experiences. In contrast, a replay buffer of size 1M was maintained and sampled when training with DQN, SAC, and TD3. For the first 30K steps of DQN training, a linear exploration schedule beginning at 1.0 and ending at 0.01
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Reaction & R\({}_{1}\)-Cl & R\({}_{2}\)-Cl & R\({}_{1}\)-R\({}_{2}\) \\ \hline
1 & 1-chlorohexane & 1-chlorohexane & dodecane \\ \hline
2 & 1-chlorohexane & 2-chlorohexane & 5-methylundecane \\ \hline
3 & 1-chlorohexane & 3-chlorohexane & 4-ethyldecane \\ \hline
4 & 2-chlorohexane & 2-chlorohexane & 5,6-dimethyldecane \\ \hline
5 & 2-chlorohexane & 3-chlorohexane & 4-ethyl-5-methylnonane \\ \hline
6 & 3-chlorohexane & 3-chlorohexane & 4,5-diethyloctane \\ \hline \end{tabular}
\end{table}
Table 1: All possible Wurtz reactions involving chlorohexanes. Symmetrically equivalent entries have been removed from the table as R\({}_{1}\)-R\({}_{2}\) = R\({}_{2}\)-R\({}_{1}\) and 6, 5, 4-chlorohexane is equivalent to 1, 2, 3-chlorohexane, respectively.
was used. Exploration remained at 0.01 afterwards. All of these RL algorithms were performed using the Stable Baselines 3 (Raffin et al., 2021) implementations.
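A minimal sketch of this training setup with Stable Baselines 3 is given below. The environment id is a placeholder for whichever ChemGymRL bench is registered with gym in a given installation; the hyperparameters mirror the text (10 parallel environments, 256-step rollouts, 1M total environment steps).

```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

env = make_vec_env("WurtzReact-v1", n_envs=10)     # placeholder ChemGymRL env id
model = PPO("MlpPolicy", env, n_steps=256, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("ppo_wurtz_rxn")

# The discrete benches can be trained analogously with, e.g.,
# DQN("MlpPolicy", env, buffer_size=1_000_000,
#     exploration_initial_eps=1.0, exploration_final_eps=0.01).
```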
## 6 Laboratory Setup
### Reaction Bench Methodology
For the reaction bench (RxN), we consider two chemical processes. In both processes, each episode begins with a vessel containing 4 mols of diethyl ether, and operates for 20 steps. In the first process, the agent has access to 1.0 mol each of 1, 2, 3-chlorohexane, and sodium, where the system dynamics are defined by the Wurtz reaction outlined above. Each episode, a target material is specified to the agent via length 7 one-hot vector where the first 6 indices represent the 6 Wurtz reaction products in Tab. 1 and the last represents NaCl. After the 20 steps have elapsed, the agent receives a reward equal to the molar amount of the target material produced.
In the second experiment, we introduce a new set of reaction dynamics given by
\[\begin{split} A+B+C&\to E\\ A+D&\to F\\ B+D&\to G\\ C+D&\to H\\ F+G+H&\to I\end{split} \tag{6}\]
where the agent has access to 1.0 mol of \(A\), \(B\), \(C\) and 3.0 mol of \(D\). Each episode, a target material is specified to the agent via length 5 one-hot vector with indices representing \(E\), \(F\), \(G\), \(H\), and \(I\). If the target is \(E\), the agent receives a reward equal to the molar amount of \(E\) produced after the 20 steps have elapsed. Otherwise, the agent receives a reward equal to the difference in molar amounts between the target material and \(E\) after the 20 steps have elapsed. Here, \(E\) is an undesired material. The reaction \(A+B+C\to E\) occurs quickly relative to the others, adding difficulty to the reaction when \(E\) is not the target.
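The reward for this experiment can be sketched as follows; the dictionary of molar amounts stands in for whatever the bench reports at the end of the episode.

```python
def fictitious_reward(amounts, target):
    """Reward after the final step: n_E if E is the target, else n_target - n_E."""
    if target == "E":
        return amounts.get("E", 0.0)
    return amounts.get(target, 0.0) - amounts.get("E", 0.0)

print(fictitious_reward({"I": 0.6, "E": 0.1}, "I"))   # 0.5
```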
### Extraction Bench Methodology
For the extraction bench (ExT), we consider a layer-separation process where the agent operates for up to 50 steps. Similar to the Wurtz reaction, the target material is specified via length 7 one-hot vector. Each episode begins with a vessel containing 4 mols of diethyl ether, 1 mol of dissolved sodium chloride, and 1 mol of one of the 6 Wurtz reaction products in Tab. 1. The Wurtz reaction product contained in the vessel is the same as the target material, unless the target material is sodium chloride, in which case dodecane is added since sodium chloride is already present. After the episode has ended, the agent receives a reward equal to the change in solute purity of the target material weighted by the molar amount of that target material, where the change in solute purity is relative to the start of the experiment. If the target material is present in multiple vessels, a weighted average of the solute purity across each vessel is used.
As an example, consider when the target material is dodecane. In this experiment, the 1 mol of dissolved sodium chloride becomes 1 mol each of Na\({}^{+}\) and Cl\({}^{-}\), so the initial solute purity of dodecane is 1/3. Suppose we end the experiment with 0.7 mols of dodecane with 0.2 mols each of Na\({}^{+}\) and Cl\({}^{-}\) in one vessel, and the remaining molar amounts in a second vessel. Dodecane has a solute purity of 7/11 and 3/19 in each vessel respectively. The final solute purity of dodecane would be \(0.7\times 7/11+0.3\times 3/19\approx 0.493\). Thus the agent would receive a reward of \(0.159\).
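The arithmetic of this example can be reproduced with the short sketch below (vessel contents written as plain dictionaries of solute amounts):

```python
vessels = [{"dodecane": 0.7, "Na+": 0.2, "Cl-": 0.2},   # first vessel
           {"dodecane": 0.3, "Na+": 0.8, "Cl-": 0.8}]   # second vessel
target, initial_purity = "dodecane", 1.0 / 3.0

total_target = sum(v[target] for v in vessels)
final_purity = sum((v[target] / sum(v.values())) * (v[target] / total_target)
                   for v in vessels)
reward = (final_purity - initial_purity) * total_target
print(round(reward, 3))                                  # ~0.159
```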
### Distillation Bench Methodology
For the distillation bench (DiT), we consider a similar experimental set-up to the ExT one. Each episode begins with a vessel containing 4 mols of diethyl ether, 1 mol of the dissolved target material, and possibly 1 mol of another material. If the target material is sodium chloride, the additional material is dodecane, otherwise the additional material is sodium chloride. After the episode has ended, the agent receives a reward calculated similarly to the ExT, except using absolute purity rather than solute purity.
## 7 RL Results
### Reaction Bench
Since the reaction bench (RxN) has a continuous action space, we trained SAC and TD3 in addition to A2C and PPO. For the first experiment, we are looking at the Wurtz reaction dynamics. Given that we know the system dynamics in this case, we have also devised a heuristic agent for the experiment, which we expect to be optimal. This agent increases the temperature, and adds only the required reactants for the desired product immediately. This heuristic agent achieves an average return of approximately 0.62. Using the performance of this heuristic as a reference, the best and mean relative performances of the agents trained with each algorithm are shown in Fig. 5. Each algorithm can consistently give rise to agents that produce sodium chloride when requested. Since this is a by-product of all reactions in our set-up, it is the easiest product to create. While the other products are not hard to produce either, they require specific reactants, and in order to maximize the yield, they require the absence of other reactants. The PPO agents are able to match the heuristic agent for all targets, while some SAC and TD3 agents are able to come close on a few targets. A2C only comes close to the heuristic on producing sodium
chloride.
The average return as a function of training steps for each algorithm is shown in Fig. 6. On average, the agents trained with each algorithm are able to achieve a return of at least 0.4. This is expected as even an agent taking random actions can achieve an average return of approximately 0.44. The agents trained with A2C, SAC, and TD3 do not perform much better than a random agent in most cases, however the ones trained with PPO significantly outperform it. While on average, A2C, SAC, and TD3 have similar performance, we saw in Fig. 5 that the best performing SAC and TD3 agents outperformed the best A2C agents.
The second RxN experiment uses reaction dynamics more complicated than the Wurtz reaction. In the Wurtz reaction, the agent need only add the required reactants for the desired product all together. In this new reaction, this is still true for some desired products, however not all of them. Similarly to the previous experiment, we also devised a heuristic agent for this experiment, which achieves an average return of approximately 0.83. Using the performance of the heuristic agent as reference again, the best and mean relative performances of the agents trained with each algorithm are shown in Fig. 7. Once again, PPO consistently produces agents that can match the performance of the heuristic agent. The best performing policies produced by A2C and SAC are able to nearly match the heuristic agent for all desired products excluding I. This is not unexpected as producing I requires producing intermediate products at different times during the reaction. The best performing policy produced by TD3 however, nearly matches the heuristic agent for all desired products excluding E. This is also not unexpected, given that producing E is penalized for all other desired products.
Unlike PPO, the other algorithms used are less reliable at producing these best performing agents. This is likely due to PPO learning these policies much faster than the other algorithms, as seen in Fig. 8. Since PPO converges to optimal behavior so quickly, there's very little room for variation in the policy. The other algorithms however are slowly converging to non-optimal behaviors, leaving much more room for variation in the policies (and returns) that they converge to.
For the best performing agents produced by each algorithm, the average action values for each target are shown in Fig. 9. Looking at the heuristic policy, a constant action can be used for each target product, excluding I. When the target is I, the
Figure 5: Radar graphs detailing the average return of each policy with respect to each target material in Wurtz RxN. Panel (a) uses the best policy produced from 10 runs, whereas panel (b) averages across the 10 runs (still using the best policy of each run). Returns of each RL algorithm are relative to the heuristic policy and clipped into the range \([0,\infty)\). Here, the PPO agents consistently outperform the A2C, SAC, and TD3 agents for all 7 target materials. Target materials with high returns across each algorithm (such as sodium chloride) appear to be easier tasks to learn, where target materials with less consistent return across each algorithm (such as 5,6-dimethyldecane) appear to be more difficult tasks to learn.
Figure 6: Wurtz RxN, average return with \(\sigma\)/5 shaded, 10 runs for each algorithm with 10 environments in parallel per run, 1M (100K sequential steps x 10 environments) total steps per run, averages are over 3200 returns. The performance of each algorithm converges before 300K total steps, with only PPO converging on an optimal policy. Despite training for an additional 700K total steps, A2C, SAC, and TD3 were not able to escape the local maxima they converged to.
desired action must change after several steps have passed, meaning the agent cannot just rely on what the specified target is. Note that if all of a material has been added by step \(t\), then it does not matter what value is specified for adding that material at step \(t+1\).
The best performing agent for A2C, PPO, and SAC were all able to produce E when requested and Fig. 9 shows that they each have learned to add A, B, C, and not D. TD3 however adds D in addition to A, B, and C, thus leading to the decreased return in that case. It can also be seen that all four algorithms learned to add two of A, B, or C in addition to D, then add the third one several steps later when I is the target product, mimicking the behavior of the heuristic policy. Note that even though the heuristic waits to add C, waiting to add A or B instead would be equally optimal. While each algorithm does this, PPO and TD3 do so better than the others. PPO is also the only one that succeeds in both of these cases, showing that an RL agent can learn the required behavior in this system.
### Extraction Bench
With the results seen in the RxN tests, we now move on to the extraction bench (ExT) experiment. Regardless of the target material in our Wurtz extraction experiment, the optimal behavior is quite similar, so we will not focus on the different cases as before. Since the ExT uses discrete actions, we replace SAC and TD3 with DQN. We also use what we call PPO-XL which is PPO trained with more environments in parallel. We have devised a heuristic policy for this experiment based on what an undergraduate chemist would learn. However, as the dynamics are more complex we do not necessarily expect it to be optimal.
As seen in Fig. 10, the agents trained with A2C do not achieve a return above zero, while the agents trained with DQN ended up achieving a negative return. Not only do both PPO and PPO-XL produce agents that achieve significantly more reward than the other algorithms, they are able to outperform the heuristic policy as well. On average, the best performing agent trained with PPO-XL manages to achieve a return of approximately 0.1 higher than the heuristic (see Fig. 10), resulting in roughly a 10% higher solute purity. While there is a large variance in the final performance of the agents trained with PPO and PPO-XL, they consistently
Figure 8: Fictitious RxN, average return with \(\sigma\)/5 shaded, 10 runs for each algorithm with 10 environments in parallel per run, 1M (100K sequential steps x 10 environments) total steps per run, averages are over 3200 returns. PPO quickly converges to an optimal policy, like in Wurtz RxN. Unlike in Wurtz RxN, the other algorithms take much longer to converge. While they still converge to sub-optimal performances, the gap between optimal performance is less severe.
Figure 7: Radar Graph detailing the average return of each policy with respect to each target material in Fictitious RxN. Panel a) uses the best policy produced from 10 runs, whereas panel b) averages across the 10 runs (still using the best policy of each run). Returns of each RL algorithm are relative to the heuristic policy and clipped into the range \([0,\infty)\). Again, the PPO agents consistently outperform the A2C, SAC, and TD3 agents for all 5 target materials, however it is not as significant of a gap as in Wurtz RxN. Target materials with high returns across each algorithm (such as F, G, and H) appear to be easier tasks to learn, where target materials with less consistent return across each algorithm (such as E and I) appear to be more difficult tasks to learn.
outperform the agents trained with the other algorithms.
As shown in Fig. 11, the action sequences of the policies learned from A2C, DQN, and PPO are quite different. The action sequences of the policies learned by PPO and PPO-XL are much more similar, as expected. The first half of these sequences are comparable to the heuristic, however the agents in both cases have learned a second component to the trajectory to achieve that extra return. Interestingly, both PPO and PPO-XL agents have learned to end the experiment when they achieve the desired results, whereas the A2C and DQN agents do not. PPO once again shows that an RL agent can learn the required behavior in this system.
### Distillation Bench
Lastly, we now move on to the final experiment, the distillation bench (DiT). Similar to ExT, the desired target material in the Wurtz distillation experiment does not have much effect on the optimal behavior so we will not focus on the different target cases. Instead we will focus on the different cases of when salt is and is not present with another material in the initial distillation vessel. Note that a single agent operates on both of these cases, not two agents trained independently on each case. As before, we have devised a heuristic policy and as with the RxN experiments, we expect it to be optimal
Figure 10: Wurtz ExT, average return with \(\sigma\) shaded, 30 runs for each algorithm with 1M total steps per run (2M for PPO-XL). For each run, returns are averaged over 1000 steps (only using terminated episodes). The mean and standard deviation are then calculated across the 30 runs (\(\sigma\) is calculated from 30 points). The PPO and PPO-XL agents consistently acquire positive returns, even approaching the theoretical maximum in some cases. The A2C agents learn policies which perform equivalently to ending the experiment immediately and are unable to escape those local maxima. The DQN agents acquire negative return, which is a worse performance than not running the experiment.
Figure 9: Fictitious RxN, average value of each action at every step for the best performing policies for each algorithm. The five curves in each box represent the sequence of actions for the five different target materials. Comparing the same curve across a single column outlines how a single policy acts for a single target material. Comparing different curves within a single box outlines how a single policy acts differently between different target materials. Comparing the same curve across a single row outlines how different policies act for the same target material. For actions corresponding to adding material, the curves represent how quickly those materials are added. The well performing policies are the ones that add only the required reactants (such as A2C and SAC), while the best performing policies are the ones that add them according to the right schedule (such as PPO).
once again. In Fig. 12 we can see that on average, the algorithms converge faster than in the other experiments, however there is much more variation in the solutions.
For the case when no salt is present in the distillation vessel, the best performing agents trained with each algorithm learn a very similar policy to the heuristic one, as seen in Fig. 13. They heat the vessel until the solvent has boiled away, then end the experiment. The agent trained with DQN however, was the only one not to manually end the experiment but rather repeat an action to the point it is equivalent to waiting until the maximum number of steps has elapsed. For the case when salt and an additional material are present, the agent trained with A2C learned to only slightly modify its actions compared to the previous case, where the one trained with DQN makes no modifications. The best performing agents trained with PPO and PPO-XL modify their actions similarly to the heuristic policy, achieving the optimal return in both cases. This shows that the expected behavior in our final bench can also be learned by an RL agent.
## 8 Limitations
Currently ChemGymRL has many limitations; any reaction or material that one wishes to model must be predefined with all properties specified by the user. Additionally, the solvent dynamics are modeled using simple approximations and while they suffice for these introductory tests, they would not for real-world chemistry.
As previously mentioned, the ChemGymRL framework was designed in a modular fashion for the ease of improvement. The differential equations used to model the reactions could be replaced with a molecular dynamics simulation. This
Figure 11: Wurtz ExT, the sequence of actions with the highest return when dodecane is the target material seen during the rollout of the best performing policy learned by each algorithm. Each picture represents an action and average value described by Fig. 3(d). The number beneath the image represents how many times that action was repeated. While it is more difficult to interpret these policies than with RxN, similarities can be seen between the PPO, PPO-XL, and heuristic policies, explaining their high performances. The A2C policy uses a similar action set, however in a different order, outlining the precision required by the agent. The DQN policy uses many actions that either undo previous actions or do nothing in that specific state.
Figure 12: Wurtz DiT, average return with \(\sigma\) shaded, 30 runs for each algorithm with 1M total steps per run (2M for PPO-XL). For each run, returns are averaged over 1000 steps (only using terminated episodes). The mean and standard deviation are then calculated across the 30 runs (\(\sigma\) is calculated from 30 points). The DQN, PPO, and PPO-XL agents consistently acquire positive returns whereas the A2C agents only get positive returns on average. While DQN and PPO acquire similar returns on average, the variance with PPO is much higher, meaning the best performing PPO policy outperforms the best DQN policy. The PPO-XL policies outperform the other algorithms both on average and in the best case scenarios.
Figure 13: Wurtz DiT, the sequences of actions with the highest return produced by the best performing policy learned with each algorithm for two cases: when salt is and is not initially present in the distillation vessel with another material. Each picture represents an action and average value described by Fig. 4. The number beneath the image represents how many times that action was repeated. The PPO-XL, and heuristic policies are nearly identical in both cases, with only minor differences. When no salt is present, the A2C and DQN policies are similar to the others, however when salt is present they continue to behave as if it is not.
would allow RxN to operate on a more generalizable rule set. Without having to manually define the possible reactions, the RxN could be used to discover new, more efficient reaction pathways by an RL agent. Currently, the reward metric used in RxN is the molar amount of desired material produced by the agent. If this metric was changed to reflect a certain desired property for the produced material, then the RxN could be used for drug discovery. Making similar improvements to ExT and DiT, the RL agent could then learn to purify these new discoveries.
## 9 Future Work
For future work on ChemGymRL, the lab manager environment will be formatted in a way that allows an RL agent to operate in it. Using pre-trained agents for the individual benches, the lab manager agent would decide which vessel to give to which bench while also specifying the desired goal to each bench, in order to achieve the lab manager's own goal. The lab manager agent would make proper use of the agentless characterization bench introduced here as well. In addition to this, the implementation of new benches will be explored, allowing more complicated experiments to be conducted.
## 10 Conclusions
We have introduced and outlined the ChemGymRL interactive framework for RL in chemistry. We have included three benches that RL agents can operate and learn in. We also include a characterization bench, for making observations, which agents do not operate in, and present directions for improvement. To show these benches are operational, we have successfully, and reproducibly, trained at least one RL agent on each of them. Included in this framework is a vessel state format compatible with each bench, therefore allowing the outputs of one bench to be the input to another.
In the Wurtz RxN experiment, A2C, SAC, and TD3 were not able to show better performances than an agent taking random actions, whereas PPO was able to achieve optimal returns on all targets. In the second RxN experiment, A2C, SAC, and TD3 were able to show performances that achieve optimal returns for one of the two difficult tasks, whereas PPO was able to achieve optimal returns on both.
In the Wurtz ExT experiment, A2C and DQN were not able to produce agents that perform better than doing nothing, whereas PPO was able to achieve higher returns than the devised heuristics. In the Wurtz DiT experiment each algorithm was able to produce an agent that performs better than doing nothing and much better than an agent taking random actions.
Finally, we included elaborate discussions on other RL algorithms that can be tried in ChemGymRL and how this environment will be extremely valuable to the broader RL research community. We hope to see a wide adoption of ChemGymRL within the set of testbeds commonly used by researchers in the RL community.
## Acknowledgements
C. Beeler and C. Bellinger performed work at the National Research Council of Canada under the AI4D program. I.T. acknowledges NSERC.
|
2307.16119
|
C^2-Morse Functions on the Deligne-Mumford compactification of the
moduli space M_{g,n} bar
|
We construct a series of C^2-Morse functions on the Deligne-Mumford
compactification M_{g,n} bar of the moduli space of genus g Riemann/hyperbolic
surfaces with n punctures. This series of functions converges to the systole
function, which is topologically Morse. We will show that the critical points
of our functions approach those of systole sublinearly, stratum-wise, and with
the same indices. These functions might be the first explicit examples of
C^2-Morse functions on M_{g,n} bar, and the Morse handle decomposition may give
rise to the first example of a natural cell decomposition of M_{g,n} bar, that
works no matter if n is positive or equal to 0.
|
Changjie Chen
|
2023-07-30T03:53:39Z
|
http://arxiv.org/abs/2307.16119v3
|
# \(C^{2}\)-Morse functions on \(\overline{\mathcal{M}}_{g,n}\)
###### Abstract.
We present a series of \(C^{2}\)-Morse functions on the Deligne-Mumford compactification \(\overline{\mathcal{M}}_{g,n}\) of the moduli space of genus \(g\) Riemann/hyperbolic surfaces with \(n\) punctures. This series of functions converges to the systole function, which is topologically Morse. We will show that the critical points of our functions approach those of systole sublinearly, stratum-wise, and with the same indices.
## 1. Introduction
Morse functions are important because they allow one to study the homology of the base space through the handle decomposition; see [16] for more about Morse theory. The moduli space \(\mathcal{M}_{g,n}\) of \((g,n)\)-hyperbolic or Riemann surfaces, as well as its Deligne-Mumford compactification \(\overline{\mathcal{M}}_{g,n}\), is of great interest, while its topology is still mysterious for general \((g,n)\). Morse theory could be a powerful tool if we understand a Morse function well on either space. We will construct and study a series of Morse functions using Teichmuller theory.
Note that any geodesic-length function \(l_{\gamma}\) is a Morse function on the Teichmuller space. However, it does not descend to the moduli space. It is also not possible to take the arithmetic average over the mapping class group orbit of \(\gamma\) because of the size of the mapping class group.
While not smooth, topological Morse property has been shown for the _systole_ function, i.e., the minimum of all geodesic-length functions. The systole function depends only on the hyperbolic structure of the base surface and therefore is mapping class group invariant and descends to
the moduli space. However, as a minimum function, it is only continuous. Explicit examples with Fenchel-Nielsen coordinates can show it is not even \(C^{1}\).
P. Schmutz Schaller showed in [10] that when \(n>0\), the systole function is topologically Morse on \(\mathcal{M}_{g,n}\), which is an analogue of (smooth) Morse functions in the topological setting, see [12]. The case of \((g,n)=(1,1)\) has been well studied, for instance, see [1]. Schmutz Schaller also calculated all the critical points and the index at each of them for \((g,n)=(0,5),(0,6),(1,2),(2,0)\) in addition to \((0,4)\) and \((1,1)\).
The general case was proved by H. Akrout in [1] a few years later that the systole function is topologically Morse on the moduli space of a hyperbolic surface with or without punctures. Akrout defined a _minimal gradient_ at a surface \(X\in\mathcal{T}\) to be the gradient vector of the geodesic-length function for a shortest geodesic on \(X\) and he developed a theory on so-called _(semi-)eutacticity_ differentiating surfaces by the relative position of all minimal gradients for his proof.
Aside from constructions originating from geodesic-length functions, another candidate, \(-\log\det^{*}\Delta_{m}\) on the space of hyperbolic structures \(\{m\}\) on a given surface, was conjectured by Sarnak to be a Morse function, where \(\det^{*}\Delta_{m}\) is the zeta regularized product of all non-zero eigenvalues of \(\Delta_{m}\), see [20]. It seems that very little study followed and this conjecture remains wide open.
In the most popular version, a Morse function is required to be smooth, but to define critical points and nondegeneracy, and to construct the handle decomposition and perform Morse theory, it is allowable to weaken the smoothness condition to \(C^{2}\). We may call them \(C^{2}\)-Morse functions. In this paper, we introduce a series of functions named \(\operatorname{sys_{T}}\)
that converges to the systole function and will have (at least) \(C^{2}\)-Morse property and more:
**Definition**.: \[\operatorname{sys_{T}}(X)=-T\log\sum_{\gamma\text{ s.c.g. on }X}e^{-\frac{1}{T}l_{ \gamma}(X)},\]
where s.c.g. stands for simple closed geodesic. We will show the main theorem as follows across a few sections:
**Main Theorem**.: _The series of functions \(\operatorname{sys_{T}}\) is decreasing in \(T\) and converges to \(\operatorname{sys}\) as \(T\to 0^{+}\). Moreover, there exists \(T_{0}>0\) such that when \(T<T_{0}\), \(\operatorname{sys_{T}}\) has the following properties: (1) Every \(\operatorname{sys_{T}}\) is a \(C^{2}\)-Morse function on the Deligne-Mumford compactification \(\overline{\mathcal{M}}_{g,n}\) (with altered differential structure). (2) There is a natural stratum-wise correspondence:_
\[\operatorname{Crit}(\operatorname{sys_{T}})\leftrightarrow\operatorname{Crit }(\operatorname{sys}).\]
_More precisely, let \(\mathcal{S}\subset\mathcal{M}_{g,n}\) be a stratum that is isomorphic to \(\mathcal{M}_{g^{\prime},n^{\prime}}\), then there is a bijection_
\[\operatorname{Crit}(\operatorname{sys_{T}}|_{\mathcal{S}}) \leftrightarrow\operatorname{Crit}(\operatorname{sys}|_{ \mathcal{M}_{g^{\prime},n^{\prime}}})\] \[p_{T} \leftrightarrow p\]
_with the properties_
\[d_{\text{WP}}(p,p_{T})<CT,\]
_which implies \(p_{T}\to p\) and consequently \(\operatorname{Crit}(\operatorname{sys_{T}}|_{\mathcal{S}})\to\operatorname{ Crit}(\operatorname{sys}|_{\mathcal{M}_{g^{\prime},n^{\prime}}})\), and_
\[\operatorname{ind}_{\operatorname{sys_{T}}}(p_{T})=\operatorname{ind}_{ \operatorname{sys}}(p).\]
The author believes that the \(\operatorname{sys_{T}}\) functions are smooth. To the best of the author's knowledge, they are the first examples in literature, of \(C^{2}\)-Morse functions on the moduli space as well as the Deligne-Mumford compactification. With the Morse property, they can be used to study the topology of \(\mathcal{M}_{g,n}\) or \(\overline{\mathcal{M}}_{g,n}\). They can also be used to study the systole function itself with the index-preserving critical point attracting property.
As we have established the Morse property of \(\mathrm{sys}_{\mathrm{T}}\) functions, there are many mysteries to be discovered and questions to be answered. In **[paper to be uploaded]**, the author proves a theorem of nonexistence of low index critical points for \(\mathrm{sys}_{\mathrm{T}}\) (and \(\mathrm{sys}\)) on large \(\mathcal{M}_{g,n}\), namely for large \(\mathcal{M}_{g,n}\), all low index critical points for \(\mathrm{sys}_{\mathrm{T}}\) live in the Deligne-Mumford boundary. Very roughly speaking, this result can be interpreted as saying that low order homology elements of \(\mathcal{M}_{g,n}\) come from the boundary \(\partial\mathcal{M}_{g,n}\).
Here is the organization of this paper. In Section 2, we review some basics in hyperbolic geometry and Teichmuller theory. In section 3, we talk about Akrout's theorem about the systole function and his theory of (semi-)eutacticity. We list some powerful theorems in section 4 that will be used in our later proofs. In section 5 we study \(\mathrm{sys}_{\mathrm{T}}\) at the first step and develop general knowledge including the _fan decomposition_ to study the critical points. Section 6 is focused on the behavior of \(\mathrm{sys}_{\mathrm{T}}\) on the _major subspace_ while section 7 reveals its full local behavior at eutactic points. Section 8 establishes nondegeneracy of \(\mathrm{sys}_{\mathrm{T}}\) near eutactic points. The final pieces of the puzzle to show that \(\mathrm{sys}_{\mathrm{T}}\) is a Morse function on \(\mathcal{M}_{g,n}\) come in section 9 with a study of its behavior near semi-eutactic and non-semi-eutactic points, and section 10 for near the Deligne-Mumford boundary. The extension of \(\mathrm{sys}_{\mathrm{T}}\) to the boundary is defined and studied in section 11 to complete the proof of the main theorem. We show the Weil-Petersson gradient flow of \(\mathrm{sys}_{\mathrm{T}}\) on \(\mathcal{M}_{1,1}\) in section 12.
A table of notations that will be used is attached at the end of this paper.
## 2. Preliminaries
### Teichmuller space and moduli space
Let \(X\) be a smooth complete real surface with negative Euler characteristic and \(\mathcal{M}_{-1}(X)\)
the set of all hyperbolic metrics on \(X\). Let \(\mathit{Diff}_{+}(X)\) be the set of all orientation-preserving diffeomorphisms and \(\mathit{Diff}_{0}(X)\subset\mathit{Diff}_{+}(X)\) the subset consisting of those homotopic to the identity map, and they act on \(\mathcal{M}_{-1}(X)\) by pullback. The quotient \(\mathit{Diff}_{+}(X)/\mathit{Diff}_{0}(X)\) is known as the _mapping class group \(\mathit{Mod}(X)\)_. The _Teichmuller space_ is
\[\mathcal{T}(X):=\mathcal{M}_{-1}(X)/\mathit{Diff}_{0}(X)\]
and the _moduli space_ is
\[\mathcal{M}(X):=\mathcal{M}_{-1}(X)/\mathit{Diff}_{+}(X)=\mathcal{T}(X)/ \mathit{Mod}(X).\]
We will frequently use a hyperbolic surface \(X\) instead of a hyperbolic metric \(m\) on the surface \(X\), by abuse of notation, to refer to a point in the Teichmuller space or moduli space. When \(X\) is of genus \(g\) and with \(n\) punctures, we also use \(\mathcal{T}_{g,n}\) and \(\mathcal{M}_{g,n}\) for \(\mathcal{T}(X)\) and \(\mathcal{M}(X)\) respectively.
### Weil-Petersson metric
Let \((X,m)\in\mathcal{T}\). A _Beltrami differential_ on \((X,m)\) is \(f\frac{d\overline{z}}{dz}\) on a local chart and a _harmonic Beltrami differential_ if it can be expressed as \(ds_{m}^{-2}\overline{\varphi}\) for a holomorphic quadratic differential \(\varphi\), whose total space is denoted by \(\mathit{HB}(X)\). By Bers' embedding, points in Teichmuller space can be represented by Beltrami differentials on \(X\) and we in fact have an isomorphism \(\mathit{HB}(X)\cong T_{m}\mathcal{T}(X)\).
The _Weil-Petersson metric_ on \(\mathcal{T}(X)\) is
\[\langle\mu_{1},\mu_{2}\rangle_{\mathit{WP}}=\int_{X}\mu_{1}\overline{\mu}_{2} ds_{m}^{2},\]
for \(\mu_{1},\mu_{2}\in\mathit{HB}(X)\cong T_{m}\mathcal{T}(X)\).
The Weil-Petersson metric descends to the moduli space since it is mapping class group invariant. See [12] for more details on Teichmuller theory.
### The systole function and topological Morse functions
For any homotopy class \(\gamma\) on \(X\), there exists a unique geodesic representative with respect to a hyperbolic metric \(m\). We define \(l_{\gamma}\) at \((X,m)\) to
be the length of the geodesic representative.
The systole function \(sys\) assigns a surface the length of a shortest (nontrivial) closed geodesic, i.e.,
\[\operatorname{sys}(X)=\min_{\gamma\text{ closed geodesic on }X}l_{\gamma}(X).\]
As the function respects isometries, it descends to \(sys\): \(\mathcal{T}\to\mathbb{R}_{+}\), as well as \(sys\): \(\mathcal{M}\to\mathbb{R}_{+}\).
**Definition 2.1**.: With the same assumptions on \(X\) above, let \(S(X)\) be the set of all shortest closed geodesics on it.
For any \(\gamma\in S(X)\), note that \(\gamma\) has to be simple (otherwise it is possible to take a nontrivial shortcut). Therefore, it is true that
\[\operatorname{sys}(X)=\min_{\gamma\text{ simple closed geodesic on }X}l_{\gamma}(X).\]
It is not hard to see that \(\operatorname{sys}\) is continuous but not \(C^{1}\). Examples can be constructed with Fenchel-Nielsen coordinates for instance.
To show Akrout's result on \(\operatorname{sys}\), we review some definitions.
**Definition 2.2** (Topological Morse function).: Let \(f:M^{n}\to\mathbb{R}\) be a continuous function. \(x\in M\) is called a _\((C^{0}\)-)ordinary point_ if \(f\) can be taken as one of the coordinate functions of a local \(C^{0}\)-chart near \(x\); otherwise it is called \((C^{0}\)-)_critical_. A _critical point_ \(x\) is nondegenerate if there is a local \(C^{0}\)-chart \((x^{i})\) centered at \(x\) such that \(f-f(x)=(x^{1})^{2}+\cdots+(x^{r})^{2}-(x^{r+1})^{2}-\cdots-(x^{n})^{2}\). In this case the _index_ of \(f\) at \(x\) is defined to be \(n-r\). A continuous function is called _topologically Morse_ if all the critical points are nondegenerate. For more, see [10].
## 3. Akrout's Theorem and Eutacticity
We introduce Akrout's (semi-)eutactic condition and his results on the systole function.
**Definition 3.1**.: Let \(\{v_{i}\}\subset\mathbb{R}^{n}\) be a finite set of nonzero vectors. It is called _eutactic_ if the origin is contained in the interior of the convex
hull of \(\{v_{i}\}\), or _semi-eutactic_ if the origin is contained in the boundary of the convex hull of \(\{v_{i}\}\). It is called _non-semi-eutactic_ if it is neither eutactic nor semi-eutactic.
With Euclidean metric, we have the following equivalent definitions:
**Lemma 3.2**.: _Let \(\{v_{i}\}\subset\mathbb{R}^{n}\) be a finite subset, then (1) \(\{v_{i}\}\) is eutactic if \(\max_{\tau\in S^{n-1}}\min_{i}\{\langle v_{i},\tau\rangle\}<0\). (2) \(\{v_{i}\}\) is semi-eutactic if \(\max_{\tau\in S^{n-1}}\min_{i}\{\langle v_{i},\tau\rangle\}=0\). (3) \(\{v_{i}\}\) is non-semi-eutactic if \(\max_{\tau\in S^{n-1}}\min_{i}\{\langle v_{i},\tau\rangle\}>0\)._
**Definition 3.3**.: Let \(S(X)=\{\gamma_{1},\cdots,\gamma_{r}\}\) be the set of all shortest geodesics on \(X\), then the gradient vectors \(\nabla l_{1},\cdots,\nabla l_{r}\in T_{X}\mathcal{T}\) are called the _minimal gradients_ for \(X\).
**Definition 3.4**.: A hyperbolic surface \(X\in\mathcal{T}\) is called _eutactic (semi-eutactic, non-semi-eutactic)_ if the set of minimal gradients \(\{\nabla l_{i}\}_{\gamma_{i}\in S(X)}\) is eutactic (semi-eutactic, non-semi-eutactic) in the tangent space \(T_{X}\mathcal{T}\) (equivalently \(\{dl_{i}\}_{\gamma_{i}\in S(X)}\) is eutactic (semi-eutactic, non-semi-eutactic) in the cotangent space \(T_{X}^{*}\mathcal{T}\) by duality).
In [1], Akrout proved topological Morse property for sys through eutacticity:
**Theorem 3.5**.: _The systole function is topologically Morse on \(\mathcal{M}_{g,n}\). \(X\) is a critical point if and only if \(X\) is eutactic, and the index is equal to \(\operatorname{rank}\{\nabla l_{i}:\gamma_{i}\in S(X)\}\)._
This paper is related to but will not use Akrout's theorem on topological Morse property to prove any result.
## 4. A few theorems in Teichmuller theory
Below we cite a few results from literature that we will use.
**Theorem 4.1** (Wolpert [14]).: _Let \(\gamma\) be a closed geodesic and \(l_{\gamma}\) the associated geodesic-length function, then \(l_{\gamma}\) is real analytic on \(\mathcal{T}\) and strictly convex along Weil-Petersson geodesics._
We use Birman and Series' version of control of the growth rate of simple closed geodesics by length on hyperbolic surfaces. Let \(s_{X}(L)\) be the number of simple closed geodesics with length at most \(L\) on a hyperbolic surface \(X\).
**Theorem 4.2** (Birman-Series [1]).: _There exists a universal polynomial \(P\) such that \(s_{X}(L)\leq P(L)\)._
To make it convenient to use, we suppose \(P(x)\leq ce^{x}\) for some fixed constant \(c\).
The following results are also due to Wolpert. See also the explicit Weil-Petersson geodesic length-length formula given by Riera in [10], which may help with understanding.
**Theorem 4.3** (Wolpert [12] 3.11, 3.16).: _There exists a universal constant \(c>0\), such that_
\[\langle\nabla l_{\gamma},\nabla l_{\gamma}\rangle_{\mathit{WP}}\leq c(l_{ \gamma}+l_{\gamma}^{2}e^{\frac{l_{\gamma}}{2}})\]
_and_
\[\nabla^{2}l_{\gamma}(\cdot,\cdot)<c(1+l_{\gamma}e^{\frac{l_{\gamma}}{2}}) \langle\cdot,\cdot\rangle_{\mathit{WP}},\]
_for any simple closed geodesic \(\gamma\)._
**Theorem 4.4** (Wolpert [12] 3.12).: _Let \(\gamma_{i},\gamma_{j}\) be disjoint or identical geodesics, then_
\[0<\langle\nabla l_{i},\nabla l_{j}\rangle_{\mathit{WP}}=\frac{2}{\pi}l_{i} \delta_{ij}-O(l_{i}^{2}l_{j}^{2}),\]
_where the right hand side is uniform for \(l_{i},l_{j}\leq c_{0}\)._
Given mutually disjoint short geodesics \(\beta_{i}\), let the \(\gamma_{i}\) be geodesics disjoint from \(\cup\beta_{i}\) such that \(\{\lambda_{i}:=\nabla l_{\beta_{i}}^{\frac{1}{2}},J\lambda_{i},\nabla l_{i}\}\) is a local frame. With these notations, we have:
**Theorem 4.5** (Wolpert [25]).: _Let \(D\) be the Weil-Petersson connection, then as \(l_{\beta_{i}}\to 0\),_
\[D_{J\lambda_{i}}\lambda_{i}=\frac{3}{2\pi l_{\beta_{i}}^{\frac{1}{ 2}}}J\lambda_{i}+O(l_{\beta_{i}}^{\frac{3}{2}}),\] \[D_{\lambda_{j}}\lambda_{i}=O(l_{\beta_{i}}^{\frac{3}{2}}),\] \[D_{J\lambda_{j}}\lambda_{i}=O(l_{\beta_{i}}(l_{\beta_{j}}^{\frac {3}{2}}+l_{\beta_{i}}^{\frac{1}{2}})),\text{when $i\neq j$},\] \[D_{\nabla l_{j}}\lambda_{i}=O(l_{\beta_{i}}),\] \[D_{\lambda_{j}}\nabla l_{i}=O(l_{\beta_{j}}^{\frac{1}{2}}),\] \[D_{J\lambda_{j}}\nabla l_{i}=O(l_{\beta_{j}}^{\frac{1}{2}}),\] \[D_{\nabla l_{j}}\nabla l_{i}\text{ is continuous}.\]
A classic exercise in topology will also be used:
**Lemma 4.6**.: _Let \(f,g:S^{n}\to S^{n}\) be two self maps with \(f(x)+g(x)\neq 0\) for all \(x\), then \(\deg f=\deg g\). In particular, if \(g=\text{id}\), then \(\deg f=1\)._
Proof.: \(H(x,t)=\frac{(1-t)f(x)+tg(x)}{||(1-t)f(x)+tg(x)||}\) is a homotopy that sends \(f\) to \(g\).
## 5. \(\operatorname{sys_{T}}\) Functions and Fan Decomposition
In this section, we study the convergence of the \(\operatorname{sys_{T}}\) functions and develop a method that the author calls _fan decomposition_ that will be used to study the behavior of \(\operatorname{sys_{T}}\) in following sections.
**Definition 5.1**.: We take a family of averages of the length functions associated to all simple closed geodesics on the base surface \(X\), indexed by \(T>0\):
\[\operatorname{sys_{T}}(X)=-T\log\sum_{\text{$\gamma$ simple closed geodesic on $X$}}e^{-\frac{1}{T}l_{\gamma}(X)}.\]
_Remark 5.2_.: As it respects the mapping class group action, \(\operatorname{sys_{T}}\) descends to \(\mathcal{T}\) as well as \(\mathcal{M}\). We make no distinction between them by abuse of notation.
**Definition 5.3**.: Let \(S(X)\) be the set of all shortest geodesics on \(X\).
**Definition 5.4**.: Let \(\text{sec.sys}(X)\) be the _second systole function_ on a hyperbolic surface \(X\), taking the value of the length of second shortest geodesics (counted without multiplicity).
Note that the second systole function is upper semi-continuous and thus bounded from above on the thick part of the moduli space.
**Theorem 5.5**.: _The function \(\operatorname{sys_{T}}\) is well defined for \(T\) sufficiently small, and it uniformly converges to \(\operatorname{sys}\) as \(T\to 0^{+}\). More specifically,_
\[0<\operatorname{sys}(X)-\operatorname{sys_{T}}(X)<c^{\prime}T,\]
_where \(c^{\prime}\) is a constant depending only on \((g,n)\). Moreover, \(\operatorname{sys_{T}}:\mathcal{M}\to\mathbb{R}\) is at least \(C^{2}\)._
Proof.: Write
\[\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}(X)}=\sum_{n=0}\sum_{n\leq l_{\gamma}<n+ 1}e^{-\frac{1}{T}l_{\gamma}(X)},\]
and by bounding the growth rate
\[\sum_{n\leq l_{\gamma}<n+1}e^{-\frac{1}{T}l_{\gamma}(X)} \leq s_{X}(n+1)e^{-\frac{1}{T}n}\] \[\leq ce^{n+1}e^{-\frac{1}{T}n}=ce^{(1-\frac{1}{T})n+1}.\]
Therefore, \(\operatorname{sys_{T}}\) is well defined for \(T<1\), and
\[\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}(X)} =\#S(X)e^{-\frac{1}{T}\operatorname{sys}(X)}+\sum_{n=\operatorname {sec.sys}(X)}\sum_{n\leq l_{\gamma}<n+1}e^{-\frac{1}{T}l_{\gamma}(X)}\] \[\leq\#S(X)e^{-\frac{1}{T}\operatorname{sys}(X)}+\sum_{n= \operatorname{sec.sys}(X)}ce^{(1-\frac{1}{T})n+1}\] \[=\#S(X)e^{-\frac{1}{T}\operatorname{sys}(X)}+c\frac{e^{1+ \operatorname{sec.sys}(X)}}{1-e^{1-\frac{1}{T}}}e^{-\frac{1}{T}\operatorname{ sec.sys}(X)}.\]
Note that \(\#S(X)\geq 1\) is bounded by Birman-Series' theorem as \(\operatorname{sys}(g,n):=\max_{X\in\mathcal{M}_{g,n}}\{\operatorname{sys}(X)\}<\infty\) and \(\operatorname{sec.sys}\) is also bounded from above, then
\[|\operatorname{sys_{T}}(X)-\operatorname{sys}(X)|<c^{\prime}T.\]
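To make the last step explicit (a sketch under the bounds above), write \(N=\#S(X)\geq 1\) and let \(C\) denote the constant \(c\frac{e^{1+\operatorname{sec.sys}(X)}}{1-e^{1-\frac{1}{T}}}\); since \(\operatorname{sec.sys}(X)\geq\operatorname{sys}(X)\), the two estimates combine to \[Ne^{-\frac{1}{T}\operatorname{sys}(X)}\leq\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}(X)}\leq(N+C)e^{-\frac{1}{T}\operatorname{sys}(X)},\] and applying \(-T\log\) reverses the inequalities: \[\operatorname{sys}(X)-T\log(N+C)\leq\operatorname{sys_{T}}(X)\leq\operatorname{sys}(X)-T\log N\leq\operatorname{sys}(X).\] Both \(N\) and \(C\) are bounded in terms of \((g,n)\) alone (say for \(T\leq\frac{1}{2}\)), which gives the constant \(c^{\prime}\).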
The first and second derivatives will be studied in Lemma 5.9 and Lemma 8.1, completing the proof that \(\operatorname{sys_{T}}\) is at least \(C^{2}\)
**Lemma 5.6**.: \(\operatorname{sys_{T}}\) _is decreasing in \(T\)._
Proof.: Note that
\[\frac{d}{dT}\operatorname{sys_{T}} =-\log\left(\sum e^{-\frac{1}{T}l}\right)-\frac{1}{T}\frac{\sum le^{-\frac{1}{T}l}}{\sum e^{-\frac{1}{T}l}}\] \[=-\frac{1}{\sum e^{-\frac{1}{T}l}}\left(\left(\sum e^{-\frac{1}{T}l}\right)\log\left(\sum e^{-\frac{1}{T}l}\right)-\sum\left(-\frac{1}{T}l\right)e^{-\frac{1}{T}l}\right)<0.\]
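For completeness, here is a sketch of why the expression in the bracket is positive. Write \(x_{\gamma}=e^{-\frac{1}{T}l_{\gamma}}\in(0,1)\) and \(S=\sum_{\gamma}x_{\gamma}\), so that \(\sum\left(-\frac{1}{T}l\right)e^{-\frac{1}{T}l}=\sum_{\gamma}x_{\gamma}\log x_{\gamma}\). Since \(0<x_{\gamma}\leq S\) for every \(\gamma\), \[\sum_{\gamma}x_{\gamma}\log x_{\gamma}\leq\sum_{\gamma}x_{\gamma}\log S=S\log S,\] with strict inequality as soon as more than one simple closed geodesic contributes, hence \(\frac{d}{dT}\operatorname{sys_{T}}=-\frac{1}{S}\left(S\log S-\sum_{\gamma}x_{\gamma}\log x_{\gamma}\right)<0\).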
Given \(T\), the first derivatives can be calculated directly:
\[\nabla\operatorname{sys_{T}}(X)=\frac{\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}(X)}\nabla l_{\gamma}(X)}{\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}(X)}}.\]
Therefore, to study the critical points of \(\operatorname{sys_{T}}\), it suffices to study
\[\sum_{\gamma}e^{-\frac{1}{T}(l_{\gamma}(X)-\operatorname{sys}(X_{0}))}\nabla\operatorname{sys_{T}}(X)=\sum_{\gamma}e^{-\frac{1}{T}(l_{\gamma}(X)-\operatorname{sys}(X_{0}))}\nabla l_{\gamma}(X)\]
instead for a fixed \(X_{0}\).
**Definition 5.7**.: Let \(p\) be a critical point of the systole function, and let
\[\tilde{\Omega}_{T}(X)=\sum_{\gamma}e^{-\frac{1}{T}(l_{\gamma}(X)-\operatorname {sys}(p))}\nabla l_{\gamma}(X),\]
which is a rescaling of \(\nabla\operatorname{sys_{T}}(X)\). We leave out \(p\) in the definition of \(\tilde{\Omega}\) for simplicity as there are only finitely many such critical points and the context will be clear when \(\tilde{\Omega}\) is used.
Let \(u:(-a,a)\to\mathcal{T}\) be a unit speed geodesic with \(u(0)=p\) and tangent vector \(\tau:=u^{\prime}(0)\). We let
\[\tilde{\Omega}_{T}(v)=\tilde{\Omega}_{T}(t,\tau)=\tilde{\Omega}_{T}(u(t))\]
by abuse of notation, where \(X=u(t)\) and \(v=t\tau\), omitting the exponential map \(\exp_{p}\) at \(p\). Let \(\Omega_{T}\) be the 'main part' of \(\tilde{\Omega}_{T}\), where the sum is taken over \(S(p)\). In the rest of the paper, we treat other maps like \(\Phi_{T}\) and \(\Psi_{T}\) in the same manner. We may also use a lowercase superscript \(i\) for the \(i\)-th component of a sum, for example,
\[\Omega^{i}(X)=e^{-\frac{1}{T}(l_{i}(X)-\operatorname{sys}(p))}\nabla l_{i}(X),\]
and an uppercase superscript \(J\) for the partial sum over \(J\), for example,
\[\Omega^{J}(X)=\sum_{j\in J}e^{-\frac{1}{T}(l_{j}(X)-\operatorname{sys}(p))}\nabla l _{j}(X).\]
Let \(\pi:\mathbb{R}^{n}\setminus\{O\}\to S^{n-1}\) be the standard projection onto the unit sphere.
_Remark 5.8_.: Note that if we choose an orthonormal basis for \(T_{p}\mathcal{T}\), then by composing with the exponential map \(\exp_{p}\), we get a normal coordinate system locally near \(p\). Therefore, we can view \(\tilde{\Omega}_{T}\) as a vector field on \(\mathcal{T}\) near \(p\), or on \(T_{p}\mathcal{T}\) near \(0\). We will return to this point later, and will focus on the rescaled gradient vector field \(\tilde{\Omega}_{T}\).
**Lemma 5.9**.: _When \(T<1\), \(||\tilde{\Omega}_{T}(X)-\Omega_{T}(X)||<c^{\prime\prime}e^{-\frac{1}{T} \operatorname{sec.sys}(X)}\), where \(c^{\prime\prime}\) is a constant depending on \((g,n)\)._
Proof.: We have the growth rate of the first derivative
\[\frac{1}{e^{-\frac{1}{T}\operatorname{sys}_{\mathrm{T}}}}||\sum_{n\leq l_{\gamma}<n+1}e^{-\frac{1}{T}l_{\gamma}(X)}\nabla l_{\gamma}|| \leq\frac{1}{e^{-\frac{1}{T}(\operatorname{sys}+c^{\prime}T)}}s_{X}(n+1)e^{-\frac{1}{T}n}||\nabla l_{\gamma}||\] \[\leq Cce^{n+1}e^{-\frac{1}{T}n}\left(c\left(n+1+(n+1)^{2}e^{\frac{n+1}{2}}\right)\right)^{\frac{1}{2}}\] \[\leq 2Cc^{\frac{3}{2}}e^{(\frac{5}{4}-\frac{1}{T})n+\frac{5}{4}}.\]
Therefore, \(\operatorname{sys}_{\mathrm{T}}\) is at least \(C^{1}\). For the approximation with \(\Omega_{T}\), with the help of Theorem 4.3, we have
\[||\tilde{\Omega}_{T}(X)-\Omega_{T}(X)|| \leq\sum_{\gamma\not\in S(X)}e^{-\frac{1}{T}l_{\gamma}(X)}||\nabla l_{\gamma}(X)||\] \[=\sum_{n=\operatorname{sec.sys}(X)}\sum_{n\leq l_{\gamma}<n+1}e^{-\frac{1}{T}l_{\gamma}(X)}||\nabla l_{\gamma}(X)||\] \[\leq\sum_{n=\operatorname{sec.sys}(X)}2c\sqrt{c_{1}}e^{n+1}e^{-\frac{1}{T}n}e^{1+\frac{n+1}{4}}\] \[\leq 2c\sqrt{c_{1}}\frac{e^{\frac{9}{4}+\frac{5}{4}\operatorname{sec.sys}(X)}}{1-e^{\frac{5}{4}-\frac{1}{T}}}e^{-\frac{1}{T}\operatorname{sec.sys}(X)}\] \[\leq c^{\prime\prime}e^{-\frac{1}{T}\operatorname{sec.sys}(X)}.\]
The lemma follows.
**Definition 5.10**.: Let \(X\) be a point in the Teichmüller space, and \(S(X)=\{\gamma_{1},\cdots,\gamma_{r}\}\) the set of all shortest geodesics on it. Define subspaces of the tangent space \(T_{X}\mathcal{T}\):
\[\text{the major subspace}:T_{X}^{\text{sys}}\mathcal{T}=\text{Span}\{ \nabla l_{1},\cdots,\nabla l_{r}\},\] \[\text{and the minor subspace}:T_{X}^{\text{sys}\bot}\mathcal{T}=(T _{X}^{\text{sys}}\mathcal{T})^{\bot}.\]
The notions are used mainly when \(X\) is eutactic.
**Definition 5.11** (Fan decomposition of the major subspace \(T_{p}^{\text{sys}}\mathcal{T}\)).:
Let \(p\in\mathcal{M}\) be a critical point of the systole function, and \(S(p)=\{\gamma_{i}\}_{i\in I}\) with index set \(I=\{1,\cdots,r\}\); the minimal gradient set \(\{\nabla l_{i}\}_{i\in I}\) then satisfies the eutactic condition at \(p\), say \(\sum_{i=1}^{r}a_{i}\nabla l_{i}(p)=0\) with all \(a_{i}>0\). For a subset \(J\) of \(I\), we define \(F_{J}\subset T_{p}^{\text{sys}}\mathcal{T}\) to be the region consisting of vectors \(v\) such that \(\text{argmax}_{j}\{\langle\nabla l_{j},v\rangle\}=J\). Conversely, every \(\tau\) determines an index set \(J=J(\tau)\) via the inclusion \(\tau\in F_{J}\). By abuse of notation, we extend \(J(\cdot)\) to \((T_{p}^{\text{sys}\bot}\mathcal{T})^{c}\) by \(J=J\circ\text{proj}_{T_{p}^{\text{sys}}\mathcal{T}}\). Let \(v_{J}=\pi(\sum_{j\in J}\nabla l_{j})\) be the unitized average.
The fan decomposition concerns the variation of the lengths of the shortest geodesics at \(p\) when the base point is perturbed. If we fix the direction \(\tau\) of a small perturbation, the geodesics indexed by \(J(\tau)\) become longer than the others. If we instead perturb the base point in the opposite direction \(-\tau\), then the geodesics indexed by \(J\) become shorter than the others and are the only candidates for the shortest geodesics on \(u(-t\tau)\), i.e. \(S(u(-t\tau))\subset\{\gamma_{j}\}_{J}\).
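As a toy illustration (with artificial vectors, not coming from length functions on an actual surface), suppose \(T_{p}^{\text{sys}}\mathcal{T}\) is a plane and the minimal gradients are the three unit vectors
\[\nabla l_{1}=(0,1),\qquad\nabla l_{2}=\left(-\tfrac{\sqrt{3}}{2},-\tfrac{1}{2}\right),\qquad\nabla l_{3}=\left(\tfrac{\sqrt{3}}{2},-\tfrac{1}{2}\right),\]
which satisfy the eutactic condition \(\nabla l_{1}+\nabla l_{2}+\nabla l_{3}=0\). Then \(F_{\{1\}},F_{\{2\}},F_{\{3\}}\) are the three open sectors of angle \(\frac{2\pi}{3}\) centered at the respective vectors, and \(F_{\{1,3\}},F_{\{1,2\}},F_{\{2,3\}}\) are the rays bisecting adjacent sectors (for instance \(F_{\{1,3\}}\) is the ray through \(v_{\{1,3\}}=\pi(\nabla l_{1}+\nabla l_{3})\)). One checks the properties listed in Lemma 5.12 below directly in this example; in particular, (4) holds with any \(D<\cos\frac{\pi}{3}=\frac{1}{2}\).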
**Lemma 5.12**.: _The following are true: (1) \(\bigcup_{J}F_{J}=T_{p}^{\text{sys}}\mathcal{T}\). (2) Any \(F_{J}\) is a polygonal cone properly contained in some half space of the major subspace and \(\nabla l_{j}\in\mathbb{H}(\tau)\) for any \(\tau\) and all \(j\in J(\tau)\), and therefore \(v_{J}\) is well defined. (3) \(F_{J}\subset\mathbb{H}(v_{J})\), equivalently, \(\langle v_{J},\tau\rangle>0\) if \(\tau\in F_{J}\). (4) There exists \(D>0\) such that \(|\langle\nabla l_{j},\tau\rangle|>D\) for any unit \(\tau\) and all \(j\in J(\tau)\), i.e.,_
\[\max_{j\in J(\tau)}\langle\nabla l_{j},\tau\rangle>D\quad\text{and}\quad\min_{i\in I}\langle\nabla l_{i},\tau\rangle<-D.\]
_(5) The inner products \(\langle\nabla l_{j},\tau\rangle\) cannot all be equal and \(\#\{i:\langle\nabla l_{i},\tau\rangle\geq 0\}>0\), \(\#\{i:\langle\nabla l_{i},\tau\rangle<0\}>0\)._
Proof.: (1) is tautological. For any \(0\neq v\in T_{p}^{\text{sys}}\mathcal{T}\), by the eutactic condition 3.4, there exists at least one \(i\) such that \(\langle\nabla l_{i},v\rangle>0\), since otherwise \(v\) would lie in the minor subspace \(T_{p}^{\text{sys}\perp}\mathcal{T}\); (2) and (5) follow. (3) follows directly from (2). (4) follows from the finiteness of \(\{F_{J}\}\), the compactness of the unit sphere and the first two parts.
_Remark 5.13_.: If \(q\in\mathcal{T}\) is semi-eutactic, we may perform a fan decomposition on the subspace spanned by the maximal eutactic subset of all minimal gradients in a similar way, or a degenerate fan decomposition on the subspace spanned by all minimal gradients. In the latter case, we can similarly establish an inequality resembling (4) above:
There exists \(D>0\) such that
\[\max\langle\nabla l_{j},\tau\rangle>D\text{ or }\min\langle\nabla l_{j},\tau \rangle<-D\]
for any \(\tau\in T^{\text{sys}}\mathcal{T}\) unit.
_Remark 5.14_.: Property (4) in Lemma 5.12 and the remark above still hold true (with possibly smaller \(D\)), if we allow \(\tau\) to be any vector in \(T_{*}\mathcal{T}\) with \(\angle(\tau,T_{*}^{\text{sys}\perp}\mathcal{T})>\theta\) for fixed \(\theta>0\).
## 6. Behavior of \(\text{sys}_{\text{T}}\) in the Major Subspace
This section provides some insight into the behavior of \(\text{sys}_{\text{T}}\) (though not a complete picture) in a simplified setting, to help the reader build intuition; very little from this section will be used directly later.
We take the following approximation of \(\Omega_{T}\):
\[\Phi_{T}(t,\tau)=\sum_{i}e^{-\frac{t}{T}\langle\nabla l_{i}(p),\tau\rangle} \nabla l_{i}(p).\]
Note that, rigorously, we are considering the vector field \(\exp_{p}^{*}\Omega_{T}\) on the tangent bundle \(T(T_{p}\mathcal{T})\cong T_{p}\mathcal{T}\), as \(\exp_{p}\) is locally an isomorphism; we will continue to write \(\Omega_{T}\) for simplicity by abuse of notation (we will do the same thing a few times in this paper). The approximation \(\Phi_{T}\) is defined in the same space with the help of normal coordinates.
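To see where this approximation comes from (a sketch of the substitution, using only the first-order Taylor expansions of \(l_{i}\) and \(\nabla l_{i}\) along \(u\), which are written out in Section 7): since \(l_{i}(p)=\operatorname{sys}(p)\) for every \(i\), we have \(l_{i}(u(t\tau))=\operatorname{sys}(p)+t\langle\nabla l_{i}(p),\tau\rangle+O(t^{2})\) and \(\nabla l_{i}(u(t\tau))=\nabla l_{i}(p)+O(t)\), so
\[\Omega_{T}(t,\tau)=\sum_{i}e^{-\frac{1}{T}(l_{i}(u(t\tau))-\operatorname{sys}(p))}\nabla l_{i}(u(t\tau))=\sum_{i}e^{-\frac{t}{T}\langle\nabla l_{i}(p),\tau\rangle}\nabla l_{i}(p)+\text{(error terms)}=\Phi_{T}(t,\tau)+\text{(error terms)},\]
which is why \(\Phi_{T}\) captures the leading behavior of \(\Omega_{T}\) on scales \(t\sim\rho T\); the error terms are treated carefully in Section 7.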
**Lemma 6.1**.: _Under the notations in Definition 5.7 and Definition 5.10, \(\Phi_{T}(\cdot,\cdot)\) restricted as a map \(T_{p}^{\rm sys}\mathcal{T}\to T_{p}^{\rm sys}\mathcal{T}\) has a zero._
Proof.: Consider \(\Phi_{T}(t,-\tau)\) restricted onto the \(\rho T\)-sphere \(S_{\rho T}^{d-1}:=\{t\tau:t=\rho T,\tau\ {\rm unit}\}\subset T_{p}^{\rm sys} \mathcal{T}\), where \(d=\dim T_{p}^{\rm sys}\mathcal{T}\) is the dimension of the major subspace and \(\rho>0\) is a constant depending only on \(p\) and it will be determined later.
For any \(\tau\in T_{p}^{\rm sys}\mathcal{T}\), fix a \(j\in J(\tau)\), we then have
\[\Phi_{T}(t,-\tau) =\sum_{i}e^{-\frac{t}{T}\langle\nabla l_{i}(p),\tau\rangle} \nabla l_{i}(p)\] \[=e^{\frac{t}{T}\langle\nabla l_{j}(p),\tau\rangle}\sum_{i}e^{ \frac{t}{T}(\langle\nabla l_{i}(p),\tau\rangle-\langle\nabla l_{j}(p),\tau \rangle)}\nabla l_{i}(p).\]
Therefore, by (2) in Lemma 5.12,
\[\Phi_{T}(t,-\tau)\sim e^{\rho\langle\nabla l_{j}(p),\tau\rangle}\sum_{i\in J} \nabla l_{i}(p)\neq 0,\]
when \(\rho=\frac{t}{T}\) is sufficiently large, and hence it induces a self map on \(S^{d-1}\subset T_{p}^{\rm sys}\mathcal{T}\) by post-composing with the projection \(\pi\). It can be formulated as follows:
\[\pi\circ\Phi_{T}(\rho,-\tau)|_{\tau\in S^{d-1}} =\pi\circ\Phi_{T}(t,-\tau)|_{t\tau\in S_{\rho T}^{d-1}}\] \[=\pi\left(e^{\rho(\nabla l_{j}(p),\tau)}\sum_{i}e^{\rho(\langle \nabla l_{i}(p),\tau\rangle-\langle\nabla l_{j}(p),\tau\rangle)}\nabla l_{i}(p)\right)\] \[\to v_{J}\ {\rm as}\ \rho\to\infty.\]
Therefore, there exists \(\rho_{J}>0\) such that
\[\angle(\Phi_{T}(t,-\tau),\tau)<\frac{\pi}{2}\]
when \(t>\rho_{J}T\), by (3) in Lemma 5.12. By finiteness of \(\{J\}\), take \(\rho>\max_{J}\{\rho_{J}\}\) and the angle condition is then satisfied for all \(\tau\in S^{d-1}\subset T_{p}^{\rm sys}\mathcal{T}\). By Lemma 4.6, \(\Phi_{T}(\cdot,-\cdot)|_{S^{d-1}_{\rho T}}\) has degree \(1\), and equivalently, if we don't negate \(\tau\),
\[\deg(\pi\circ\Phi_{T}|_{S^{d-1}_{\rho T}})=(-1)^{d}.\]
Therefore, \(\Phi_{T}\) has a zero in the interior of the \(\rho T\)-sphere.
The following lemma establishes uniqueness, which still holds true for \(\tilde{\Omega}_{T}\) as we will see later, while the linearity below is only for \(\Phi_{T}\).
**Lemma 6.2**.: _The zero in the lemma above is unique and it linearly approaches \(p\) in \(T\)._
Proof.: Uniqueness: Suppose \(v_{1}\neq v_{2}\) are two zeros for \(\Phi_{T}(v)\). Consider the function \(f(s):=\langle\Phi_{T}((1-s)v_{1}+sv_{2}),v_{1}-v_{2}\rangle\) which vanishes at \(0\) and \(1\). Note that \(f\) is strictly increasing since each summand of
\[f(s)=\sum_{i}e^{-\frac{1}{T}\langle\nabla l_{i},v_{1}\rangle}e^{\frac{s}{T} \langle\nabla l_{i},v_{1}-v_{2}\rangle}\langle\nabla l_{i},v_{1}-v_{2}\rangle\]
is non-decreasing and at least one of them is strictly increasing, leading to a contradiction.
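In more detail (an elaboration of the monotonicity claim, with \(\Phi_{T}\) restricted to the major subspace as in Lemma 6.1):
\[f^{\prime}(s)=\frac{1}{T}\sum_{i}e^{-\frac{1}{T}\langle\nabla l_{i},v_{1}\rangle}e^{\frac{s}{T}\langle\nabla l_{i},v_{1}-v_{2}\rangle}\langle\nabla l_{i},v_{1}-v_{2}\rangle^{2}\geq 0,\]
and the sum is strictly positive: if \(\langle\nabla l_{i},v_{1}-v_{2}\rangle=0\) for every \(i\), then \(v_{1}-v_{2}\) would lie in \(T_{p}^{\rm sys\perp}\mathcal{T}\cap T_{p}^{\rm sys}\mathcal{T}=\{0\}\), contradicting \(v_{1}\neq v_{2}\).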
Linearity: Suppose \((t_{0},\tau_{0})\) is the zero for \(\Phi_{T_{0}}\), then \(\Phi_{T}(\frac{t_{0}}{T_{0}}T,\tau_{0})=\Phi_{T_{0}}(t_{0},\tau_{0})=0\), i.e., \((\frac{t_{0}}{T_{0}}T,\tau_{0})\) is the zero for \(\Phi_{T}\).
Putting Lemma 6.1 and Lemma 6.2 together, we have:
**Theorem 6.3**.: \(\Phi_{T}\) _restricted on \(T_{p}^{\rm sys}\mathcal{T}\) has a unique zero, and it linearly approaches \(p\) in \(T\)._
## 7. Existence of Critical Points
Recall that we suppose \(p\) is a critical point for \(\operatorname{sys}\), i.e., \(p\) is eutactic. While the linear approximation above yields a zero in the major subspace \(T_{p}^{\rm sys}\mathcal{T}\), it is not sufficient for detecting a zero of \(\nabla\,{\rm sys}_{\rm T}\) once the orthogonal complement \(T_{p}^{\rm sys\perp}\mathcal{T}\) is involved.
Note that the \(t\) component in the coordinates \((t,\tau)\) that we are using
for a small neighborhood of \(p\) is the radial coordinate. As a reminder, we omit the exponential map '\(\exp_{p}^{*}\)' in \(\exp_{p}^{*}l_{i}\), \(\exp_{p}^{*}\nabla l_{i}\), and \(\exp_{p}^{*}\Omega_{T}\), etc., by abuse of notation, while we use the \((t,\tau)\) coordinates. Since \(l_{i}\) is real analytic, we can write
\[l_{i}(t,\tau)=l_{i}(p)+t\langle\nabla l_{i}(p),\tau\rangle+\frac{1}{2}t^{2} \nabla^{2}l_{i}(p)(\tau,\tau)+O_{i}(t^{3})\]
and
\[\nabla l_{i}(t,\tau)=\nabla l_{i}(p)+t\nabla^{2}l_{i}(p)(\tau,\cdot)+O_{i}(t^ {2}).\]
We consider the following when \(t=\rho T\), i.e., when \(t\tau\in S^{n-1}_{\rho T}\),
\[e^{-\frac{1}{T}(l_{i}(t,\tau)-\text{sys}(p))}\nabla l_{i}(t,\tau)\] \[= e^{-\frac{1}{T}(l_{i}(t,\tau)-\text{sys}(p))}(\nabla l_{i}(p)+\rho T\nabla^{2}l_{i}(p)(\tau,\cdot)+O(T^{2}))\] \[= e^{-\rho\langle\nabla l_{i}(p),\tau\rangle-\frac{1}{2}\rho^{2}T\nabla^{2}l_{i}(p)(\tau,\tau)+O(T^{2})}\nabla l_{i}(p)+\rho Te^{-\rho\langle\nabla l_{i}(p),\tau\rangle+O(T)}\nabla^{2}l_{i}(p)(\tau,\cdot)\] \[+e^{-\frac{1}{T}(l_{i}(t,\tau)-\text{sys}(p))}O(T^{2})\] \[= e^{-\rho\langle\nabla l_{i}(p),\tau\rangle-\frac{1}{2}\rho^{2}T\nabla^{2}l_{i}(p)(\tau,\tau)}\nabla l_{i}(p)+\rho Te^{-\rho\langle\nabla l_{i}(p),\tau\rangle}\nabla^{2}l_{i}(p)(\tau,\cdot)+O(T^{2})\]
Therefore, if we write
\[\Omega_{T}(t,\tau) =\sum_{i}e^{-\frac{1}{T}(t\langle\nabla l_{i}(p),\tau\rangle+\frac{1}{2}t^{2}\nabla^{2}l_{i}(\tau,\tau))}\nabla l_{i}(p)\] \[+t\sum_{i}e^{-\frac{1}{T}t\langle\nabla l_{i}(p),\tau\rangle}\nabla^{2}l_{i}(p)(\tau,\cdot)+\epsilon_{2}(T,t,\tau),\]
then \(\epsilon_{2}(T,t,\tau)=O(T^{2})\) when \(t=\rho T\).
**Definition 7.1**.: With the notations above, we denote the 'main part' by
\[\Psi_{T}:=\Omega_{T}-\epsilon_{2},\]
where \(\epsilon_{2}(T,t,\tau)|_{t=\rho T}=O(T^{2})\).
We now fix a constant \(\theta_{0}\) that will be used almost everywhere throughout the rest of the paper:
**Definition 7.2**.: Let \(0<\theta_{0}<\frac{\pi}{6}\) be a constant (that depends on the base critical point \(p\)) that satisfies
\[\angle(\nabla^{2}l_{i}(\tau,\cdot),\tau)<\frac{\pi}{2}-2\theta_{0}\]
for all \(i\). The existence of \(\theta_{0}\) is due to \(\nabla^{2}l_{i}(\tau,\tau)>0\) (convexity, Theorem 4.1), the compactness of the unit sphere and the finiteness of the number of shortest geodesics at \(p\).
**Theorem 7.3**.: _There exists \(\rho_{0}>0\) and \(\epsilon_{0}=\epsilon_{0}(\rho)>0\) such that \(\pi\circ\Psi_{T}(\cdot,\cdot)\) restricted on \(S^{n-1}_{\rho T}\subset T_{p}\mathcal{T}\) has degree \((-1)^{d}\) for \(\rho>\rho_{0}\) and \(T<\epsilon_{0}\), where \(d:=\dim T_{p}^{\rm sys}\mathcal{T}\) as a reminder._
Proof.: Let \(i\) be the automorphism given by
\[i(v_{1}+v_{2})=v_{1}-v_{2},\]
where \(v_{1}\in T_{X}^{\rm sys}\mathcal{T}\) and \(v_{2}\in T_{X}^{\rm sys\perp}\mathcal{T}\). For example, \(i|_{T_{X}^{\rm sys}\mathcal{T}}=id|_{T_{X}^{\rm sys}\mathcal{T}}\). To show the conclusion on \(\pi\circ\Psi_{T}(\cdot,\cdot)\), we consider \(i\circ\pi\circ\Psi_{T}(\cdot,-\cdot)\).
When \(t=\rho T\),
\[\Psi_{T}(t,-\tau)|_{S^{n-1}_{\rho T}}=\sum_{i}e^{\rho(\nabla l_{i}(p),\tau)} \left(e^{-\frac{1}{2}\rho^{2}T\nabla^{2}l_{i}(p)(\tau,\tau)}\nabla l_{i}(p)- \rho T\nabla^{2}l_{i}(p)(\tau,\cdot)\right).\]
Let
\[\Psi_{1}(t,-\tau)=\sum_{i}e^{\rho(\nabla l_{i}(p),\tau)}e^{-\frac{1}{2}\rho^{2 }T\nabla^{2}l_{i}(p)(\tau,\tau)}\nabla l_{i}(p)\]
and
\[\Psi_{2}(t,-\tau)=-\sum_{i}e^{\rho(\nabla l_{i}(p),\tau)}\rho T\nabla^{2}l_{i} (p)(\tau,\cdot).\]
We consider two cases based on the position of \(\tau\):
Case 1: When \(\angle(\tau,T_{p}^{\rm sys\perp}\mathcal{T})\leq\theta_{0}\) (note that this situation does not occur if \(T_{p}^{\rm sys}\mathcal{T}=T_{p}\mathcal{T}\)), let \(\tau^{\prime}\) be the projection of \(\tau\) onto \(T_{p}^{\rm sys\perp}\mathcal{T}\); then \(\angle(\tau^{\prime},\tau)\leq\theta_{0}\). Recall that \(\angle(\nabla^{2}l_{i}(p)(\tau,\cdot),\tau)<\frac{\pi}{2}-2\theta_{0}\) by the choice of \(\theta_{0}\), so it follows that \(\angle(-\Psi_{2}(t,-\tau),\tau)<\frac{\pi}{2}-2\theta_{0}\). Using the triangle inequality, we have
\[\angle(-\Psi_{2}(t,-\tau),\tau^{\prime})<\angle(-\Psi_{2}(t,-\tau),\tau)+ \angle(\tau,\tau^{\prime})<\frac{\pi}{2}-2\theta_{0}+\theta_{0}=\frac{\pi}{2} -\theta_{0},\]
which in other words implies \(i(\Psi_{2}(t,-\tau))\in\mathbb{H}(\tau^{\prime})\) by definition of \(i\). Note that \(\Psi_{1}\in T_{p}^{\rm sys}\mathcal{T}\subset\partial\mathbb{H}(\tau^{\prime})\), thus
\[i(\Psi_{T}(t,-\tau))=\Psi_{1}(t,-\tau)+i(\Psi_{2}(t,-\tau))\in\mathbb{H}(\tau^{ \prime}).\]
Therefore,
\[\angle(i(\Psi_{T}(t,-\tau)),\tau)<\angle(\Psi_{1}+i(\Psi_{2}),\tau^{\prime})+ \angle(\tau^{\prime},\tau)<\frac{\pi}{2}+\theta_{0}.\]
Case 2: When \(\angle(\tau,T_{p}^{\rm sys\perp}\mathcal{T})\geq\theta_{0}\), similar to what we have in (4) in Lemma 5.12, by compactness of the region of such \(\tau\)'s in this case, there exists \(D>0\) such that \(\langle\nabla l_{j}(p),\tau\rangle>D\) for any \(\tau\) and all \(j\in J(\tau)\). Let \(M,m>0\) be the maximum and minimum of \(\{||\nabla l_{i}(p)||,||\nabla^{2}l_{i}(p)(\tau,\cdot)||\}_{i,\tau}\) respectively, where positivity is guaranteed by Theorem 4.1 again.
Let \(K=\{k:\langle\nabla l_{k},\tau\rangle\leq 0\}\) that is disjoint from \(J(\tau)\). For any \(j\in J(\tau)\) and \(k\in K\), we have the following estimates:
\[||\Psi_{1}^{j}(t,-\tau)|| >\langle\Psi_{1}^{j}(t,-\tau),\tau\rangle\] \[=e^{\rho(\nabla l_{j}(p),\tau)}e^{-\frac{1}{2}\rho^{2}T\nabla^{2} l_{j}(p)(\tau,\tau)}\langle\nabla l_{j}(p),\tau\rangle\] \[>e^{\rho D}e^{-\frac{1}{2}\rho^{2}TM}D,\]
\[||\Psi_{1}^{k}(t,-\tau)|| =e^{\rho(\nabla l_{k}(p),\tau)}e^{-\frac{1}{2}\rho^{2}T\nabla^{2 }l_{k}(p)(\tau,\tau)}||\nabla l_{k}(p)||\] \[\leq 2e^{\frac{1}{2}\rho^{2}TM}M,\]
and
\[||\Psi_{2}(t,-\tau)|| \leq\sum_{i}e^{\rho(\nabla l_{i}(p),\tau)}\rho T||\nabla^{2}l_{i }(p)(\tau,\cdot)||\] \[\leq re^{\rho M}\rho TM,\]
where \(r=\#S(p)\).
Note that \(\langle\Psi_{1}^{i}(t,-\tau),\tau\rangle\geq 0\) for all \(i\in(J(\tau)\cup K)^{C}\); putting all terms together, we have
\[\langle\Psi_{T}(t,-\tau)|_{t=\rho T},\tau\rangle =\langle\Psi_{1}^{J}+\Psi_{1}^{K}+\Psi_{1}^{(J\cup K)^{C}}+\Psi_{2},\tau\rangle\] \[\geq\langle\Psi_{1}^{J},\tau\rangle-||\Psi_{1}^{K}||-||\Psi_{2}||\] \[\geq e^{\rho D}e^{-\frac{1}{2}\rho^{2}TM}D-2e^{\frac{1}{2}\rho^{2 }TM}M-re^{\rho M}\rho TM,\]
where \(J=J(\tau)\) for short. If we set \(T<e^{-\rho M}\), the expression above will go to \(+\infty\) as \(\rho\to\infty\). Therefore, we fix \(\rho\) such that
\[\langle\Psi_{T}(t,-\tau)|_{t=\rho T},\tau\rangle>0\]
for any \(T<e^{-\rho M}\).
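To spell out the limit (a routine check of the three terms, under the constraint \(T<e^{-\rho M}\)): as \(\rho\to\infty\),
\[e^{\rho D}e^{-\frac{1}{2}\rho^{2}TM}D\geq De^{\rho D}e^{-\frac{1}{2}\rho^{2}Me^{-\rho M}}\longrightarrow\infty,\qquad 2e^{\frac{1}{2}\rho^{2}TM}M\leq 2Me^{\frac{1}{2}\rho^{2}Me^{-\rho M}}\longrightarrow 2M,\qquad re^{\rho M}\rho TM\leq r\rho M,\]
so the exponentially growing first term eventually dominates the bounded second term and the at most linearly growing third term.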
With the choice of \(\rho\) and the two cases considered above,
\[\deg i\circ\pi\circ\Psi_{T}(\cdot,-\cdot)=1.\]
Since \(i\) is the reflection through \(T_{p}^{\rm sys}\mathcal{T}\), of degree \((-1)^{n-d}\), and the antipodal map \(\tau\mapsto-\tau\) has degree \((-1)^{n}\), we get \(\deg(\pi\circ\Psi_{T}|_{S^{n-1}_{\rho T}})=(-1)^{n-d}\cdot 1\cdot(-1)^{n}=(-1)^{d}\), and the theorem follows.
**Lemma 7.4**.: _With the same notations above, we have the following estimates on the norms when \(t=\rho T\): (1) When \(\angle(\tau,T_{p}^{\rm sys\perp}\mathcal{T})\leq\theta_{0}\),_
\[||\Psi_{T}(t,-\tau)||>\rho Te^{-\rho M\sin\theta_{0}}m\sin\theta_{0}\cos \theta_{0},\]
_(2) When \(\angle(\tau,T_{p}^{\rm sys\perp}\mathcal{T})\geq\theta_{0}\),_
\[||\Psi_{T}(t,-\tau)||>\frac{1}{2}e^{\rho D}e^{-\frac{1}{2}\rho^{2}TM}D.\]
_Together we may claim that \(||\Psi_{T}|_{t=\rho T}||>C(p)T.\)_
Proof.: In the first case, let \(\tau^{\prime}\) be the projection of \(\tau\) onto the minor subspace and consider the projection of \(\Psi_{T}\) onto \(\tau^{\prime}\).
Note that
\[\langle\nabla l_{i}(p),\tau\rangle=||\nabla l_{i}(p)||\cos\angle(\nabla l_{i},\tau)>-M\sin\theta_{0}\]
and as a reminder, \(\theta_{0}\) is chosen such that \(\angle(\nabla^{2}l_{i}(\tau,\cdot),\tau)<\frac{\pi}{2}-2\theta_{0}\), so
\[\angle(\nabla^{2}l_{i}(p)(\tau,\cdot),\tau^{\prime})<\angle(\nabla^{2}l_{i}( p)(\tau,\cdot),\tau)+\angle(\tau,\tau^{\prime})<\frac{\pi}{2}-\theta_{0}\]
and
\[\langle\nabla^{2}l_{i}(p)(\tau,\cdot),\tau^{\prime}\rangle =||\nabla^{2}l_{i}(p)(\tau,\cdot)||\cdot||\tau^{\prime}||\cos\angle (\nabla^{2}l_{i}(p)(\tau,\cdot),\tau^{\prime})\] \[>m\sin\theta_{0}\cos\theta_{0}.\]
Therefore,
\[\langle\Psi_{T},-\tau^{\prime}\rangle =\sum_{i}e^{\rho\langle\nabla l_{i}(p),\tau\rangle}\rho T\langle \nabla^{2}l_{i}(p)(\tau,\cdot),\tau^{\prime}\rangle\] \[>\rho Te^{-\rho M\sin\theta_{0}}m\sin\theta_{0}\cos\theta_{0}\]
The first claim follows. The second follows directly from the definition of \(\rho\).
What Theorem 7.3 tells us is that \(\Psi_{T}\) has a zero in the interior of \(S^{n-1}_{\rho T}\) by Lemma 4.6. Now we are ready to prove the same conclusion for \(\tilde{\Omega}_{T}\):
**Theorem 7.5** (Existence).: _With the same \(\rho\) above, \(\operatorname{sys}_{\mathrm{T}}\) has a critical point in the interior of the \(\rho T\)-sphere \(S^{n-1}_{\rho T}\subset T_{p}\mathcal{T}\) when \(T\) is sufficiently small._
Proof.: To recall the idea, we take a two-step approximation
\[\tilde{\Omega}_{T}\approx\Omega_{T}\approx\Psi_{T}.\]
Now let us consider the errors.
When \(T\) is sufficiently small, any point \(X\) in the geodesic ball \(\exp_{p}(B_{\rho T})\) centered at \(p\) satisfies that \(\operatorname{sec.\,sys}(X)>\operatorname{sec.\,sys}(p)\), and thus by Lemma 5.9,
\[||\tilde{\Omega}_{T}(X)-\Omega_{T}(X)||<c^{\prime\prime}e^{-\frac{1}{T} \operatorname{sec.\,sys}(X)}<c^{\prime\prime}e^{-\frac{1}{T}\operatorname{ sec.\,sys}(p)}.\]
For the second approximation, since \(\rho\) is already fixed as in Theorem 7.3, by Definition 7.1,
\[||\Omega_{T}(X)-\Psi_{T}(X)||=O(T^{2}).\]
Therefore, the total error can be given by
\[||\tilde{\Omega}_{T}(X)-\Psi_{T}(X)||=O(T^{2}).\]
Since the angle between \(\tau\) and \(i\circ\Psi_{T}(t,-\tau)|_{S^{n-1}_{\rho T}}\) can be bounded by \(\frac{\pi}{2}+\theta_{0}\) (which is not optimal), and \(||i\circ\Psi_{T}(t,-\tau)|_{S^{n-1}_{\rho T}}||>C(p)T\), by the lemma above, with the error taken into effect, Lemma 4.6 is satisfied, and
\[\deg(\tilde{\Omega}_{T})=\deg(\Psi_{T})=(-1)^{d}.\]
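To expand on this last step (a spelled-out version of the perturbation argument, using only the bounds obtained above): on \(S^{n-1}_{\rho T}\) we have \(||\Psi_{T}||>C(p)T\) by Lemma 7.4, while the total error satisfies \(||\tilde{\Omega}_{T}-\Psi_{T}||=O(T^{2})\). Hence
\[\angle\big(\tilde{\Omega}_{T}(t,-\tau),\Psi_{T}(t,-\tau)\big)\leq\arcsin\frac{O(T^{2})}{C(p)T}=O(T),\]
so for \(T\) small the two vector fields are nowhere zero and nowhere antipodal on \(S^{n-1}_{\rho T}\); the homotopy argument of Lemma 4.6 then shows that the induced self maps of the sphere have the same degree.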
The existence follows.
**Corollary 7.6**.: _Suppose uniqueness (which will be proved in Theorem 8.5) holds for the critical point obtained above. Let \(p_{T}\) denote the unique critical point for \(\operatorname{sys}_{\mathrm{T}}\) near \(p\); then \(d(p,p_{T})\) is at most linear in \(T\), i.e., \(d(p,p_{T})<CT\) for some constant \(C\)._
We also prove the nonexistence of critical points outside of the \(\rho T\)-ball centered at \(p\):
**Theorem 7.7** (Nonexistence outside the \(\rho T\)-ball).: _There exist \(\rho_{1}>0\), \(\epsilon_{1}=\epsilon_{1}(\rho)>0\) and \(r_{p}>0\) for each \(p\), such that there are no critical points of \(\operatorname{sys}_{\mathrm{T}}\) with distance from \(p\) in \([\rho T,r_{p}]\) for \(\rho>\rho_{1}\) and \(T<\min\{\epsilon_{1},\frac{r_{p}}{\rho}\}\)._
Proof.: We recall that \(0<\theta_{0}<\frac{\pi}{6}\) is chosen such that for all \(i\),
\[\angle(\nabla^{2}l_{i}(\tau,\cdot),\tau)<\frac{\pi}{2}-2\theta_{0}.\]
For \(\tau\not\in T^{\operatorname{sys}\perp}_{p}\mathcal{T}\), let \(C^{\operatorname{sys}}(\tau,\theta)\) be the cone in the major subspace centered at \(\operatorname{proj}_{T^{\operatorname{sys}}_{p}\mathcal{T}}\tau\) with angle \(\theta\). Note that \(v\in C^{\operatorname{sys}}(\tau,\theta)\times T^{\operatorname{sys}\perp}_{p }\mathcal{T}\) if and only if
\[\angle(\operatorname{proj}_{T^{\operatorname{sys}}_{p}\mathcal{T}}v, \operatorname{proj}_{T^{\operatorname{sys}}_{p}\mathcal{T}}\tau)<\theta.\]
For example, if \(\tau\in T^{\operatorname{sys}}_{p}\mathcal{T}\), then \(C^{\operatorname{sys}}(\tau,\frac{1}{2}\pi)\times T^{\operatorname{sys}\perp }_{p}\mathcal{T}=\mathbb{H}(\tau)\).
Let \(U_{p}=B(p,r_{p})\) be a geodesic ball centered at \(p\) and \(C>0\) a constant with \(Cr_{p}<\frac{1}{2}\), such that for any \(q\in\overline{U}_{p}\), if we let \(t=d(p,q)\) and \(\tau=\frac{1}{t}\exp_{p}^{-1}(q)\), the following conditions are satisfied:
(1) By continuity,
\[|l_{i}(q)-l_{i}(p)-t\langle\nabla l_{i}(p),\tau\rangle|<Ct^{2},\]
for all \(i\) and
\[l_{j}(q)-l_{j}(p)<0,\]
for \(j\in J(-\tau)\),
(2) By continuity,
\[||\nabla l_{i}(q)-\nabla l_{i}(p)||<Ct,\]
for all \(i\) and consequently
\[\angle(\nabla l_{i}(q),\nabla l_{i}(p))<\frac{1}{2}\min\{\theta_{0},\frac{1}{ 2}\pi-\theta_{1}\},\]
where \(\theta_{1}\) is chosen in Case 2 below, and
\[\frac{1}{2}<\frac{||\nabla l_{i}(q)||}{||\nabla l_{i}(p)||}<\frac{3}{2},\]
(3) By continuity,
\[||\nabla l_{i}(q)-\nabla l_{i}(p)-t\nabla^{2}l_{i}(\tau,\cdot)||<Ct^{2},\]
and by taking projection onto the minor subspace,
\[t||\operatorname{proj}_{T^{\operatorname{sys\perp}}_{p}\mathcal{T}}\nabla^{2 }l_{i}(\tau,\cdot)||-Ct^{2}>0,\]
(4) By continuity,
\[\max\{l_{i}(q)\}<\min_{\gamma\not\in S(p)}l_{\gamma}(q).\]
\[\operatorname{sys}(p)+(r_{p}||\nabla l_{i}||\sin\theta_{0}+2Cr_{p}^{2})<\min _{\gamma\not\in S(p)}l_{\gamma}(q)\]
for all \(i\).
Condition (4) guarantees that we only need to consider the sum over \(\gamma_{i}\in S(p)\) since the 'tail' will be dominantly smaller. Given these constants and conditions, we again consider the following two cases:
Case 1: if \(\angle(\tau,T^{\operatorname{sys\perp}}_{p}\mathcal{T})\leq\theta_{0}\), then
\[\angle(\tau,\nabla l_{i}(p))\geq\frac{\pi}{2}-\theta_{0},\]
for all \(i\in I\), and therefore,
\[e^{-\frac{1}{T}l_{i}(q)} >e^{-\frac{1}{T}\operatorname{sys}(p)}e^{-\frac{1}{T}(t\langle \nabla l_{i}(p),\tau\rangle+Ct^{2})}\] \[>e^{-\frac{1}{T}\operatorname{sys}(p)}e^{-\frac{1}{T}(t||\nabla l _{i}(p)||\sin\theta_{0}+Ct^{2})}\] \[>e^{-\frac{1}{T}\operatorname{sys}(p)}e^{-\frac{1}{T}(r_{p}|| \nabla l_{i}(p)||\sin\theta_{0}+Cr_{p}^{2})}\] \[>e^{-\frac{1}{T}(\min_{\gamma\not\in S(p)}l_{\gamma}(q)-Cr_{p}^{ 2})}\]
Take the projection of \(\sum_{i}e^{-\frac{1}{T}l_{i}(q)}\nabla l_{i}(q)\) onto the minor subspace, then we have, for a single \(\nabla l_{i}(q)\),
\[||\operatorname{proj}_{T_{p}^{\operatorname{sys}\perp}\mathcal{ T}}\nabla l_{i}(q)|| >t||\operatorname{proj}_{T_{p}^{\operatorname{sys}\perp}\mathcal{ T}}\nabla^{2}l_{i}(\tau,\cdot)||-Ct^{2}\] \[\geq\rho T||\operatorname{proj}_{T_{p}^{\operatorname{sys}\perp }\mathcal{T}}\nabla^{2}l_{i}(\tau,\cdot)||-Cr_{p}\rho T=:C_{1}(p)T,\]
and for the main part,
\[||\sum_{i}e^{-\frac{1}{T}l_{i}(q)}\nabla l_{i}(q)||\] \[\geq||\operatorname{proj}_{T_{p}^{\operatorname{sys}\perp} \mathcal{T}}\sum_{i}e^{-\frac{1}{T}l_{i}(q)}\nabla l_{i}(q)||\] \[\geq\sum_{i}e^{-\frac{1}{T}l_{i}(q)}||\operatorname{proj}_{T_{p} ^{\operatorname{sys}\perp}\mathcal{T}}(\nabla l_{i}(q))||\] \[>\sum_{i}e^{-\frac{1}{T}\operatorname{sys}(p)}e^{-\frac{1}{T}(r _{p}||\nabla l_{i}(p)||\sin\theta_{0}+Cr_{p}^{2})}\left(t||\operatorname{proj} _{T_{p}^{\operatorname{sys}\perp}\mathcal{T}}\nabla^{2}l_{i}(\tau,\cdot)||-Ct ^{2}\right)\] \[>e^{-\frac{1}{T}(\min_{\gamma\not\in S(p)}l_{\gamma}(q)-Cr_{p}^{ 2})}C_{1}(p)T.\]
We thus have the positivity of the norm of the gradient vector, as the tail is dominantly smaller:
\[||\sum_{\gamma}e^{-\frac{1}{T}l_{i}(q)}\nabla l_{i}(q)|| \geq||\operatorname{proj}_{T_{p}^{\operatorname{sys}\perp} \mathcal{T}}\sum_{i}e^{-\frac{1}{T}l_{i}(q)}\nabla l_{i}(q)||-||\sum_{\gamma \not\in S(p)}e^{-\frac{1}{T}l_{i}(q)}\nabla l_{i}(q)||\] \[>C_{1}(p)Te^{-\frac{1}{T}(\min_{\gamma\not\in S(p)}l_{\gamma}(q)- Cr_{p}^{2})}-C_{2}(p)e^{-\frac{1}{T}\min_{\gamma\not\in S(p)}l_{\gamma}(q)}>0.\]
Case 2: if \(\angle(\tau,T_{p}^{\operatorname{sys}\perp}\mathcal{T})\geq\theta_{0}\). By eutacticity, there exists \(0<\theta_{1}<\frac{1}{2}\pi\) such that for any \(\tau\not\in T_{p}^{\operatorname{sys}\perp}\mathcal{T}\),
\[C^{\operatorname{sys}}(\tau,\frac{1}{2}\pi-\theta_{1})\cap\{\nabla l_{i}\}\neq\emptyset.\]
Let
\[J:=\{j:\nabla l_{j}(p)\in C^{\rm sys}(-\tau,\frac{1}{2}\pi-\theta_{1})\}\]
(note that \(J\) is not short for \(J(-\tau)\) in this case) and
\[K:=\{k:\nabla l_{k}(p)\in C^{\rm sys}(\tau,\frac{\pi}{2}+\frac{1}{2}\theta_{1})\}.\]
By continuity, there exist \(D_{1}<0\) and \(D_{2}\) constant, such that when we can make \(r_{p}\) small enough, for any \(q\in U_{p}=B(p,r_{p})\):
(1) For \(j\in J\),
\[\langle\nabla l_{j}(q),\tau\rangle<D_{1}.\]
(2) For \(k\in K\),
\[\langle\nabla l_{k}(q),\tau\rangle>D_{2}.\]
(3) For \(i\in(J\cup K)^{C}\),
\[\langle\nabla l_{i}(q),\tau\rangle<0.\]
(4) \(D_{1}<D_{2}\).
For any \(j\in J\) and \(k\in K\), we have
\[\frac{||e^{-\frac{1}{T}l_{k}(q)}\nabla l_{k}(q)||}{||e^{-\frac{1}{T}l_{j}(q)} \nabla l_{j}(q)||} <\frac{\frac{3}{2}e^{-\frac{1}{T}l_{k}(q)}||\nabla l_{k}(p)||}{ \frac{1}{2}e^{-\frac{1}{T}l_{j}(q)}||\nabla l_{j}(p)||}\] \[=3e^{-\frac{1}{T}(l_{k}(q)-l_{j}(q))}\frac{||\nabla l_{k}(p)||}{ ||\nabla l_{j}(p)||}\] \[<3e^{\frac{1}{T}(-t(\langle\nabla l_{k}(p),\tau\rangle-\langle \nabla l_{j}(p),\tau\rangle)+2Ct^{2})}\frac{||\nabla l_{k}(p)||}{||\nabla l_{j} (p)||}\] \[<3e^{\frac{1}{T}(t(D_{1}-D_{2})+2Ct^{2})}\frac{||\nabla l_{k}(p) ||}{||\nabla l_{j}(p)||}\] \[<3e^{\rho(D_{1}-D_{2})+2Cr_{p}}\frac{||\nabla l_{k}(p)||}{|| \nabla l_{j}(p)||}<\epsilon,\]
where \(\epsilon<\frac{1}{r}\), if \(\rho\) is sufficiently large.
We consider the projection of \(\sum_{i}e^{-\frac{1}{T}l_{i}(q)}\nabla l_{i}(q)\) onto the major subspace. For any \(\nabla l_{j}(p)\in C^{\operatorname{sys}}(-\tau,\frac{1}{2}\pi-\theta_{1})\),
\[-\langle e^{-\frac{1}{T}l_{j}(q)}\nabla l_{j}(q),\tau\rangle >-e^{-\frac{1}{T}\operatorname{sys}(p)}e^{-\frac{1}{T}(l_{j}(q)- l_{j}(p))}\langle\nabla l_{j}(q),\tau\rangle\] \[>-D_{1}e^{-\frac{1}{T}\operatorname{sys}(p)}e^{\frac{1}{T}(-t \langle\nabla l_{j},\tau\rangle-Ct^{2})}\] \[>-D_{1}e^{-\frac{1}{T}\operatorname{sys}(p)}e^{\frac{1}{T}(-D_{1 }\rho T-Cr_{p}\rho T)}\] \[=-D_{1}e^{-\frac{1}{T}\operatorname{sys}(p)}e^{\rho(-D_{1}-Cr_{p })}\]
Note that, for any \(i\in(J\cup K)^{C}\), \(\nabla l_{i}(q)\) has a negative projection onto \(\tau\). Therefore,
\[||\sum_{I}e^{-\frac{1}{T}l_{i}(q)}\nabla l_{i}(q)|| >-\langle\sum_{K^{C}}e^{-\frac{1}{T}l_{i}(q)}\nabla l_{i}(q), \tau\rangle-||\sum_{K}e^{-\frac{1}{T}l_{k}(q)}\nabla l_{k}(q)||\] \[\geq-\langle\sum_{J}e^{-\frac{1}{T}l_{j}(q)}\nabla l_{j}(q),\tau \rangle-||\sum_{K}e^{-\frac{1}{T}l_{k}(q)}\nabla l_{k}(q)||\] \[\geq-(1-r\epsilon)\langle e^{-\frac{1}{T}l_{j}(q)}\nabla l_{j}(q ),\tau\rangle\]
Put the tail in,
\[||\sum_{\gamma}e^{-\frac{1}{T}l_{i}(q)}\nabla l_{i}(q)|| >-(1-r\epsilon)\langle e^{-\frac{1}{T}l_{j}(q)}\nabla l_{j}(q), \tau\rangle-||\sum_{\gamma\not\in S(p)}e^{-\frac{1}{T}l_{i}(q)}\nabla l_{i}(q )||\] \[>-(1-r\epsilon)D_{1}e^{-\frac{1}{T}\operatorname{sys}(p)}e^{\rho (-D_{1}-Cr_{p})}-C_{2}(p)e^{-\frac{1}{T}\min_{\gamma\not\in S(p)}l_{\gamma}(q)}\] \[>C_{3}(p)e^{-\frac{1}{T}\operatorname{sys}(p)}-C_{2}(p)e^{-\frac {1}{T}\min_{\gamma\not\in S(p)}l_{\gamma}(q)}>0\]
The theorem follows.
**Corollary 7.8**.: _Let \(p_{1},p_{2}\) be two critical points, then_
\[d(p_{1},p_{2})>\max\{r_{p_{1}},r_{p_{2}}\}.\]
## 8. Nondegeneracy and Local Uniqueness of Critical Points
As the actual systole function is topologically Morse, we naturally hope the \(C^{2}\) functions \(\operatorname{sys}_{\operatorname{T}}\) in the approximation family are \(C^{2}\)-Morse or even smooth Morse, when \(T\) is sufficiently close to \(0\). To proceed, with the same notations above, we directly calculate the Hessian \(\tilde{H}_{\operatorname{sys}_{\operatorname{T}}}\) of \(\operatorname{sys}_{\operatorname{T}}\)
on the tangent vector field \(\tau=u^{\prime}\) for any geodesic \(u\):
\[\tilde{H}_{T}(\tau) :=T\left(\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\right)^{2}\tilde{H}_{\operatorname{sys_{T}}}(\tau,\tau)\] \[=\left(\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\langle\nabla l_{\gamma},\tau\rangle\right)^{2}-\left(\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\langle\nabla l_{\gamma},\tau\rangle^{2}\right)\left(\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\right)\] \[+T\left(\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\right)\left(\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\nabla^{2}l_{\gamma}(\tau,\tau)\right)\]
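One way to organize this computation (a sketch, with the shorthand \(A:=\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\) and \(B:=\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\langle\nabla l_{\gamma},\tau\rangle\) introduced only for this remark): along the geodesic \(u\) one has \(\langle\nabla\operatorname{sys_{T}},\tau\rangle=B/A\), with
\[A^{\prime}=-\tfrac{1}{T}B,\qquad B^{\prime}=\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\Big(\nabla^{2}l_{\gamma}(\tau,\tau)-\tfrac{1}{T}\langle\nabla l_{\gamma},\tau\rangle^{2}\Big),\]
so the quotient rule gives \(\tilde{H}_{\operatorname{sys_{T}}}(\tau,\tau)=\frac{AB^{\prime}-A^{\prime}B}{A^{2}}\), and multiplying by \(TA^{2}\) yields exactly the expression above.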
Let \(H_{T}\) be the 'main part' of \(\tilde{H}_{T}\), i.e., the expression in which every sum above is taken over \(S(p)\) instead of over all simple closed geodesics. By Theorem 4.3 and a calculation similar to that in Lemma 5.9 and Theorem 7.5:
**Lemma 8.1**.: _We have_
\[|(\tilde{H}_{T})_{p}-(H_{T})_{p}|<c^{\prime\prime\prime}e^{-\frac{1}{T}( \operatorname{sys}(p)+\operatorname{sec.sys}(p))}.\]
Proof.: Note that
\[\sum_{\gamma\not\in S(p)}e^{-\frac{1}{T}l_{\gamma}}\nabla^{2}l_{\gamma}(\tau,\tau)\leq c\sum_{\gamma\not\in S(p)}e^{-\frac{1}{T}l_{\gamma}}(1+l_{\gamma}e^{\frac{l_{\gamma}}{2}})\leq c(p)e^{-\frac{1}{T}\operatorname{sec.sys}(p)},\]
then the lemma follows from Theorem 5.5 and Lemma 5.9.
Our hope is that \(\tilde{H}_{T}\) is nondegenerate at critical points of \(\operatorname{sys_{T}}\), but first, as an observation, we have the following:
**Lemma 8.2**.: _When \(T\) is sufficiently small, \((\tilde{H}_{T})_{p}:T_{p}\mathcal{T}\to\mathbb{R}\) is nondegenerate._
Proof.: Since all \(\gamma_{i}\)'s attain the same value at \(p\), the above can be simplified as
\[(H_{T})_{p}(\tau)=-e^{-\frac{2}{T}\operatorname{sys}(p)}\left(r\sum_{i}\langle\nabla l_{i},\tau\rangle^{2}-\left(\sum_{i}\langle\nabla l_{i},\tau\rangle\right)^{2}-Tr\sum_{i}\nabla^{2}l_{i}(p)(\tau,\tau)\right),\]
where \(r:=\#S(p)\) as a reminder.
If \(\tau\in T_{p}^{\rm sys\perp}\mathcal{T}\), then
\[(\tilde{H}_{T})_{p}(\tau)>Tre^{-\frac{2}{T}\,{\rm sys}(p)}\sum_{i}\nabla^{2}l_{i}( p)(\tau,\tau)-C_{p}e^{-\frac{1}{T}({\rm sys}(p)+{\rm sec.sys}(p))}>0.\]
If \(\tau\in T_{p}^{\rm sys}\mathcal{T}\), by part (5) of Lemma 5.12 and Cauchy's inequality,
\[r\sum_{i}\langle\nabla l_{i},\tau\rangle^{2}-\left(\sum_{i}\langle\nabla l_{i},\tau\rangle\right)^{2}>0.\]
Similar to part (4) in Lemma 5.12, there exists \(D>0\) (we may assume the same \(D\) for both Lemma 5.12 and here for simplicity), such that
\[\min_{i}\{\langle\nabla l_{i}(p),\tau\rangle\}<-D.\]
If we let
\[r_{1}:=\#\{i:\langle\nabla l_{i},\tau\rangle\geq 0\}\]
and
\[r_{2}:=\#\{i:\langle\nabla l_{i},\tau\rangle<0\},\]
then we have
\[r\sum_{i}\langle\nabla l_{i},\tau\rangle^{2}-\left(\sum_{i}\langle\nabla l_{i},\tau\rangle\right)^{2}\] \[=r\left(\sum_{\langle\nabla l_{i},\tau\rangle\geq 0}\langle\nabla l_{i},\tau\rangle^{2}+\sum_{\langle\nabla l_{i},\tau\rangle<0}\langle\nabla l_{i},\tau\rangle^{2}\right)\] \[-\left(\sum_{\langle\nabla l_{i},\tau\rangle\geq 0}\langle\nabla l_{i},\tau\rangle+\sum_{\langle\nabla l_{i},\tau\rangle<0}\langle\nabla l_{i},\tau\rangle\right)^{2}\] \[=\left(r_{1}\sum_{\langle\nabla l_{i},\tau\rangle\geq 0}\langle\nabla l_{i},\tau\rangle^{2}-\left(\sum_{\langle\nabla l_{i},\tau\rangle\geq 0}\langle\nabla l_{i},\tau\rangle\right)^{2}\right)\] \[+\left(r_{2}\sum_{\langle\nabla l_{i},\tau\rangle<0}\langle\nabla l_{i},\tau\rangle^{2}-\left(\sum_{\langle\nabla l_{i},\tau\rangle<0}\langle\nabla l_{i},\tau\rangle\right)^{2}\right)\] \[+r_{2}\sum_{\langle\nabla l_{i},\tau\rangle\geq 0}\langle\nabla l_{i},\tau\rangle^{2}+r_{1}\sum_{\langle\nabla l_{i},\tau\rangle<0}\langle\nabla l_{i},\tau\rangle^{2}\] \[-2\left(\sum_{\langle\nabla l_{i},\tau\rangle\geq 0}\langle\nabla l_{i},\tau\rangle\right)\left(\sum_{\langle\nabla l_{i},\tau\rangle<0}\langle\nabla l_{i},\tau\rangle\right)\] \[\geq r_{2}\sum_{\langle\nabla l_{i},\tau\rangle\geq 0}\langle\nabla l_{i},\tau\rangle^{2}+r_{1}\sum_{\langle\nabla l_{i},\tau\rangle<0}\langle\nabla l_{i},\tau\rangle^{2}\] \[-2\left(\sum_{\langle\nabla l_{i},\tau\rangle\geq 0}\langle\nabla l_{i},\tau\rangle\right)\left(\sum_{\langle\nabla l_{i},\tau\rangle<0}\langle\nabla l_{i},\tau\rangle\right)\] \[\geq r_{2}D^{2}+r_{1}D^{2}+2D^{2}\geq 4D^{2}.\]
Going back to \((H_{T})_{p}\), we have
\[(H_{T})_{p}(\tau)\leq-e^{-\frac{2}{T}\operatorname{sys}(p)}\left(4D^{2}-Tr^{2 }M\right),\]
and
\[(\tilde{H}_{T})_{p}(\tau) <(H_{T})_{p}(\tau)+C_{p}e^{-\frac{1}{T}(\operatorname{sys}(p)+ \operatorname{sec.sys}(p))}\] \[\leq-e^{-\frac{2}{T}\operatorname{sys}(p)}\left(4D^{2}-Tr^{2}M \right)+C_{p}e^{-\frac{1}{T}(\operatorname{sys}(p)+\operatorname{sec.sys}(p))}\]
which is negative when \(T\) is sufficiently small as \(\sec.\,\mathrm{sys}>\mathrm{sys}\).
Therefore, when \(T\) is small enough, \((\tilde{H}_{T})_{p}\) is negative definite on the major subspace and positive definite on the minor subspace, and hence nondegenerate (a quadratic form that is positive definite on one subspace and negative definite on a complementary subspace has trivial kernel).
Given a critical point \(p_{T}\) of \(\mathrm{sys_{T}}\), the Hessian can be simplified as
\[(\tilde{H}_{T})_{p_{T}}(\tau)=\left(\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}} \right)\left(\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\left(-\langle\nabla l_{ \gamma},\tau\rangle^{2}+T\nabla^{2}l_{\gamma}(\tau,\tau)\right)\right).\]
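This simplification is immediate from the previous formula (a one-line check): at a critical point \(p_{T}\) of \(\operatorname{sys_{T}}\) we have \(\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\nabla l_{\gamma}=0\), hence
\[\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\langle\nabla l_{\gamma},\tau\rangle=0\quad\text{for every }\tau,\]
so the first, squared term in the expression for \(\tilde{H}_{T}\) vanishes and the remaining two terms factor as displayed.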
**Theorem 8.3**.: _Let \(p_{T}\) be a critical point (no uniqueness established yet) for \(\mathrm{sys_{T}}\) as in Theorem 7.5 when \(T\) is sufficiently small, then \((\tilde{H}_{T})_{p_{T}}:T_{p_{T}}\mathcal{T}\to\mathbb{R}\) is nondegenerate._
Proof.: Let \(v_{T}=\exp_{p}^{-1}(p_{T})\in T_{p}\mathcal{T}\), then \(d(\exp_{p})_{v_{T}}:T_{v_{T}}T_{p}\mathcal{T}\cong T_{p}\mathcal{T}\to T_{p_{T }}\mathcal{T}\) is an isomorphism when \(T\) is small. By smoothness, let \(C=C(p)>0\) satisfy
\[|d(\exp_{p})_{v_{T}}(\nabla l_{i}(p))-\nabla l_{i}(p_{T})|<Cd(p,p_{T})<C\rho T.\]
We also let
\[\mathrm{tail}=\sum_{\gamma\not\in S(p)}e^{-\frac{1}{T}l_{\gamma}}\left(- \langle\nabla l_{\gamma},\tau\rangle^{2}+T\nabla^{2}l_{\gamma}(\tau,\tau) \right),\]
and then \(|\mathrm{tail}|<C_{p}e^{-\frac{1}{T}\operatorname{sec.sys}(p)}\), as above.
Instead of the tangent space \(T_{p_{T}}\mathcal{T}\) at \(p_{T}\), we consider the push forward of the tangent subspaces \(T_{p}^{\mathrm{sys}}\mathcal{T}\) and \(T_{p}^{\mathrm{sys}\perp}\mathcal{T}\) at \(p\). By continuity, results similar to part (4) of Lemma 5.12 and the bounds of \(\{||\nabla l_{i}(p_{T})||,||\nabla^{2}l_{i}(p_{T})(\tau,\cdot)||\}_{i,\tau;T \leq\epsilon}\) can be established. For simplicity, by abuse of notation, we still denote the bounds by \(D,M,m\). We also assume \(l_{i}(p_{T})<\frac{1}{2}\left(\mathrm{sys}(p)+\sec.\,\mathrm{sys}(p)\right)< \sec.\,\mathrm{sys}(p)\) for all \(\gamma_{i}\in S(p)\) by making \(T\) small enough.
When \(\tau\in d(\exp_{p})_{v_{T}}(T_{p}^{\rm sys}\mathcal{T})\),
\[\left(\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\right)^{-1}\cdot( \tilde{H}_{T})_{p_{T}}(\tau)\] \[\leq-e^{-\frac{1}{T}\frac{1}{2}(\operatorname{sys}(p)+\operatorname {sec.sys}(p))}(D^{2}-rTM)+C_{p}e^{-\frac{1}{T}\operatorname{sec.sys}(p)}<0,\]
when \(T\) is sufficiently small.
When \(\tau\in d(\exp_{p})_{v_{T}}(T_{p}^{\rm sys\bot}\mathcal{T})\),
\[\left(\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\right)^{-1}\cdot( \tilde{H}_{T})_{p_{T}}(\tau)\] \[\geq re^{-\frac{1}{T}\frac{1}{2}(\operatorname{sys}(p)+ \operatorname{sec.sys}(p))}(-(2C\rho T)^{2}+Tm)-C_{p}e^{-\frac{1}{T} \operatorname{sec.sys}(p)}>0,\]
when \(T\) is sufficiently small.
The theorem follows since \(d(\exp_{p})_{v_{T}}\) is an isomorphism.
If we say the _index_ of a (nondegenerate) quadratic form is the number of negative eigenvalues, then we have shown in the proof above:
**Corollary 8.4**.: _Let \(p\) be a critical point for \(\operatorname{sys}\), then_
\[\operatorname{ind}\tilde{H}_{\operatorname{sys}_{\mathrm{T}}}(p)=\operatorname{ind}\tilde{H}_{\operatorname{sys}_{\mathrm{T}}}(p_{T})=d=\operatorname{ind}_{\operatorname{sys}}(p).\]
**Theorem 8.5** (Uniqueness within \(\rho T\)-ball).: _When \(T\) is sufficiently small, there exists a unique critical point \(p_{T}\) within the \(\rho T\)-ball around \(p\)._
Proof.: Let \(p_{i},i\in I\) be all the critical points in the interior of the \(\rho T\)-sphere described as above. Consider the gradient vector field \(\nabla\operatorname{sys}_{\mathrm{T}}\) on the interior of \(S_{\rho T}\) that is nonzero on the boundary. Note that
\[\operatorname{ind}_{p_{i}}(\nabla\operatorname{sys}_{\mathrm{T}})=(-1)^{\operatorname{ind}\tilde{H}_{\operatorname{sys}_{\mathrm{T}}}(p_{i})}=(-1)^{d}.\]
Therefore, when \(T\) is sufficiently small, by Theorem 7.3, we have
\[(-1)^{d}=\deg(\nabla\operatorname{sys_{T}}|_{S_{\rho T}})=\sum_{I}\operatorname{ind}_{p_{i}}(\nabla\operatorname{sys_{T}})=\#I\cdot(-1)^{d},\]
which implies \(\#I=1\), i.e., the critical point near \(p\) is unique.
The theorem eventually validates the definition we have used above:
**Definition 8.6**.: Suppose \(T\) is sufficiently small. Let \(p_{T}\) be the unique critical point near \(p\).
## 9. Ordinary Points in the Interior
By Akrout's theorem, ordinary points for the systole function are exactly non-eutactic points, namely, non-semi-eutactic and semi-eutactic points. Similar results are true for \(\operatorname{sys_{T}}\).
**Theorem 9.1** (Non-semi-eutactic points).: _For any non-semi-eutactic point \(q\in\mathcal{M}\), there exist a neighborhood \(V=V(q)\) of \(q\) and \(T_{0}=T_{0}(q)>0\) such that any \(q^{\prime}\in V\) is ordinary for \(\operatorname{sys_{T}}\) for \(T<T_{0}\)._
Proof.: Let \(I\) be the index set for \(S(q)\). Since \(q\) does not satisfy the semi-eutactic condition, there exists \(\tau\in T_{q}^{\operatorname{sys}}\mathcal{T}\) such that \(\langle\nabla l_{i}(q),\tau\rangle>0\) for all \(i\in I\), equivalently,
\[\nabla l_{i}(q)\in C(\tau,\frac{1}{2}\pi-\theta_{2})\]
for some \(0<\theta_{2}(q)<\frac{1}{2}\pi\), where \(C(\tau,\theta)\) is the cone in the tangent space \(T_{q}\mathcal{T}\) centered at \(\tau\) with angle \(\theta\). Let \(V\) be a neighborhood of \(q\) such that for any \(q^{\prime}\in V\):
(1) \(\nabla l_{i}(q^{\prime})\in C(\tau,\frac{1}{2}\pi-\frac{1}{2}\theta_{2})\),
(2) \(\max\{l_{i}(q^{\prime})\}<\min_{\gamma\not\in S(q)}\{l_{\gamma}(q^{\prime})\}\).
Therefore,
\[\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}(q^{\prime})}||\nabla\operatorname {sys_{T}}(q^{\prime})||=||\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}(q^{\prime})} \nabla l_{\gamma}(q^{\prime})||\] \[\geq||\sum_{i}e^{-\frac{1}{T}l_{i}(q^{\prime})}\nabla l_{i}(q^{ \prime})||-||\sum_{\gamma\not\in S(q)}e^{-\frac{1}{T}l_{\gamma}(q^{\prime})} \nabla l_{\gamma}(q^{\prime})||\] \[\geq||\langle\sum_{i}e^{-\frac{1}{T}l_{i}(q^{\prime})}\nabla l_{i }(q^{\prime}),\tau\rangle||-||\sum_{\gamma\not\in S(q)}e^{-\frac{1}{T}l_{ \gamma}(q^{\prime})}\nabla l_{\gamma}(q^{\prime})||\] \[>\sin(\frac{1}{2}\theta_{2})\sum_{i}||\nabla l_{i}(q^{\prime})|| e^{-\frac{1}{T}\max\{l_{i}(q^{\prime})\}}-C(q)e^{-\frac{1}{T}\min_{\gamma \not\in S(q)}\{l_{\gamma}(q^{\prime})\}}>0,\]
for \(T<T_{0}\) for appropriate \(T_{0}>0\).
To show the same property for semi-eutactic points, a few lemmas are needed and we present them after the theorem.
**Theorem 9.2** (Semi-eutactic points).: _For any semi-eutactic point \(q\in\mathcal{M}\), there exist a neighborhood \(V=V(q)\) of \(q\) and \(T_{0}=T_{0}(q)>0\) such that any \(q^{\prime}\in V\) is ordinary for \(\operatorname{sys_{T}}\) for \(T<T_{0}\)._
Proof.: By Lemma 9.3 below, we split the index set \(I\) of \(S(q)\) into a maximal eutactic index subset \(I_{e}\) and the non-semi-eutactic index complement \(I_{nse}\). Let \(V(q)\) be the geodesic ball \(B(q,t_{0})\) for \(t_{0}>0\) to be determined. Recall that \(T_{q}^{\operatorname{sys}}\mathcal{T}=\operatorname{Span}_{I}\{\nabla l_{i}\}\) (we use \(T^{\operatorname{sys}}\) for short). We let \(T^{e}=\operatorname{Span}_{I_{e}}\{\nabla l_{i}\}\) and \(T^{e\perp}=(T^{e})^{\perp}\). Also recall that \(\theta_{0}\) is chosen such that \(\angle(\nabla^{2}l_{i}(\tau,\cdot),\tau)<\frac{\pi}{2}-2\theta_{0}\) for all \(i\).
By continuity, choose \(\theta_{3},\theta_{4}\) and \(d\) small enough such that
(1) \(\theta_{3}<\theta_{0}\) and \(C_{2}\max\{\theta_{3},C_{1}\theta_{4}\}<\theta_{0}\), where \(C_{1},C_{2}\) are fixed constants introduced in case 2(b),
(2) \(\max_{i\in I_{e},\angle(\tau,T^{e\perp})\leq\theta_{3}}|\langle\nabla l_{i}, \tau\rangle|<d\),
(3) If \(\langle\nabla l_{k},\tau\rangle\geq-2d\) then \(\angle(\nabla l_{k},\tau)\leq\frac{\pi}{2}+\theta_{4}\) for \(k\in I_{nse}\),
and further conditions presented in the proof.
Case 1 : When \(\angle(\tau,T^{e\perp})\geq\theta_{3}\) (i.e., \(\tau\) not perpendicular to \(T^{e}\) with angle bound).
By the semi-eutactic condition, since \(\tau\not\in T^{e\perp}\), we have that \(\mathbb{H}(\tau)\cap\{\nabla l_{i}(q)\}\neq\emptyset\) and \(\mathbb{H}(-\tau)\cap\{\nabla l_{i}(q)\}\neq\emptyset\). In fact, over the compact region of candidate \(\tau\)'s in this case, there exists \(D>0\) such that
\[\min_{i}\langle\nabla l_{i}(q),\tau\rangle<-D.\]
Let
\[J=\{j:\langle\nabla l_{j}(q),\tau\rangle=\min_{i}\langle\nabla l_{i}(q),\tau \rangle\}\]
and
\[K=\{k:\nabla l_{k}(q)\in\overline{\mathbb{H}(\tau)}\}.\]
For any \(q^{\prime}\in V(q)\), \(j\in J\) and \(k\in K\), we have
\[\frac{||e^{-\frac{1}{T}l_{k}(q^{\prime})}\nabla l_{k}(q^{\prime}) ||}{||e^{-\frac{1}{T}l_{j}(q^{\prime})}\nabla l_{j}(q^{\prime})||} =e^{-\frac{1}{T}(l_{k}(q^{\prime})-l_{j}(q^{\prime}))}\frac{|| \nabla l_{k}(q^{\prime})||}{||\nabla l_{j}(q^{\prime})||}\] \[=e^{\frac{1}{T}(t(\nabla l_{j}(q),\tau)-t\langle\nabla l_{k}(q), \tau\rangle+O(t^{2}))}\frac{||\nabla l_{k}(q^{\prime})||}{||\nabla l_{j}(q^{ \prime})||}\] \[<e^{-\frac{1}{2}\frac{1}{T}t_{0}D}\frac{||\nabla l_{k}(q^{\prime })||}{||\nabla l_{j}(q^{\prime})||}<\epsilon,\]
for \(t_{0}\) small enough.
Pick a \(j\in J\), and consider the sum over \(K^{C}\),
\[||\sum_{K^{C}}e^{-\frac{1}{T}l_{j}(q^{\prime})}\nabla l_{j}(q^{ \prime})|| \geq-\langle\sum_{K^{C}}e^{-\frac{1}{T}l_{j}(q^{\prime})}\nabla l _{j}(q^{\prime}),\tau\rangle\] \[\geq-\langle\sum_{J}e^{-\frac{1}{T}l_{j}(q^{\prime})}\nabla l_{j }(q^{\prime}),\tau\rangle\] \[\geq De^{-\frac{1}{T}l_{j}(q^{\prime})}\]
Put all terms together,
\[||\sum_{S(q)}e^{-\frac{1}{T}l_{i}(q^{\prime})}\nabla l_{i}(q^{ \prime})|| \geq||\sum_{K^{C}}e^{-\frac{1}{T}l_{i}(q^{\prime})}\nabla l_{i}(q^ {\prime})||\] \[-||\sum_{K}e^{-\frac{1}{T}l_{i}(q^{\prime})}\nabla l_{i}(q^{ \prime})||\] \[\geq D\sum_{J}e^{-\frac{1}{T}l_{j}(q^{\prime})}-r\epsilon e^{- \frac{1}{T}l_{j}(q^{\prime})}||\nabla l_{j}(q^{\prime})||\] \[>(D-rM\epsilon)e^{-\frac{1}{T}l_{j}(q^{\prime})},\]
where \(M=\max_{i;q^{\prime}}||\nabla l_{i}(q^{\prime})||\). Note that by setting \(t_{0}\) small, the sum above is \(O(e^{-\frac{1}{T}(\operatorname{sys}(q)+\epsilon)})\), and the 'tail', the sum over \(S(q)^{C}\), is bounded by \(e^{-\frac{1}{T}(\operatorname{sec.sys}(q)-\epsilon)}=o(e^{-\frac{1}{T}(\operatorname{sys}(q)+\epsilon)})\). Therefore, \(\nabla\operatorname{sys_{T}}(q^{\prime})\neq 0\).
Case 2: When \(\angle(\tau,T^{e\perp})\leq\theta_{3}\) (i.e., when \(\tau\) almost perpendicular to \(T^{e}\)).
Let \(\mathbf{C}=\cap_{k\in I_{nse}}\mathbb{H}_{T^{\operatorname{sys}}}(\nabla l_{k })=\{\tau:\langle\nabla l_{k},\tau\rangle>0\text{ for all }k\}\) be the polygonal cone in \(T^{\operatorname{sys}}\) consisting of vectors with positive inner products with \(\nabla l_{k}\) for all \(k\in I_{nse}\), which is nonempty and convex by non-semi-eutactic condition. Note that \(\mathbf{C}\times T^{\operatorname{sys}\perp}\) is the set of such vectors in the total tangent space. We fix a \(v\in\mathbf{C}\) and make \(t\) sufficiently small so that \(\langle\nabla l_{k}(q^{\prime}),v\rangle\) is positively bounded from below by continuity.
- Case 2(a): When furthermore, \(\angle(\nabla l_{k},\tau)\leq\frac{\pi}{2}+\theta_{4}\) for all \(k\in I_{nse}\).
Let \(J=\{j\in I_{nse}:\frac{\pi}{2}\leq\angle(\nabla l_{j},\tau)\leq\frac{\pi}{2}+ \theta_{4}\}\subset I_{nse}\) and \(K:=I_{nse}\setminus J\) the complement.
Let \(T^{e\perp\operatorname{sys}}\) be the orthogonal complement of \(T^{e}\) in \(T^{\operatorname{sys}}\), then
\[(\partial\mathbf{C}\times T^{\operatorname{sys}\perp})\cap T^{e\perp}=( \partial\mathbf{C}\cap T^{e\perp\operatorname{sys}})\times T^{\operatorname{ sys}\perp},\]
which is trivial if and only if \(\partial\mathbf{C}\cap T^{e\perp\operatorname{sys}}=0\) and \(T^{\operatorname{sys}\perp}=0\) (and thus \(T^{e\perp\operatorname{sys}}=T^{e\perp}\)). In this case \(J=\emptyset\) if \(\theta_{3}\) and \(\theta_{4}\) are small enough. Otherwise, \(\angle(\partial\mathbf{C},T^{e\perp\operatorname{sys}})\leq\angle(\tau, \partial\mathbf{C})+\angle(\tau,T^{e\perp})\leq C_{1}\theta_{4}+\theta_{3}\) by Lemma 9.6. Contradiction!
If \(J=\emptyset\), then \(\tau\in\mathbf{C}\times T^{\operatorname{sys}\perp}\). Note that \(\mathbf{C}\cap T^{e\perp}\) is nonempty. Let \(\tau^{\prime}\) be the projection of \(\tau\) onto \(T^{e\perp}\), then \(\tau^{\prime}\in(\mathbf{C}\times T^{\operatorname{sys}\perp})\cap T^{e\perp}\) by convexity of \(\mathbf{C}\times T^{\operatorname{sys}\perp}\) and \(\angle(\tau,\tau^{\prime})<\theta_{3}<\theta_{0}\).
If \(J\neq\emptyset\), let \(\tau^{\prime}\) be the projection of \(\tau\) onto \((\mathbf{C}\times T^{\operatorname{sys}\perp})\cap T^{e\perp}\) which is nontrivial. By Lemma 9.6, \(\angle(\tau,\partial\mathbf{C}\times T^{\operatorname{sys}\perp})\leq\angle( \tau,\partial\mathbf{C})<C_{1}\theta_{4}\) and since \(\angle(\tau,T^{e\perp})\leq\theta_{3}\), by Lemma 9.5, \(\angle(\tau,\tau^{\prime})<C_{2}\max\{\theta_{3},C_{1}\theta_{4}\}<\theta_{0}\)
Therefore, for \(i\in I_{e}\cup J\), since \(\nabla l_{i}(q)\perp\tau^{\prime}\),
\[\langle\nabla l_{i}(q^{\prime}),\tau^{\prime}\rangle=\langle\nabla l_{i}(q^{ \prime})-\nabla l_{i}(q),\tau^{\prime}\rangle=\langle t\nabla^{2}l_{i}(\tau, \cdot)+O_{i}(t^{2}),\tau^{\prime}\rangle.\]
Consider the angle, when \(t\) is sufficiently small,
\[\angle(t\nabla^{2}l_{i}(\tau,\cdot)+O_{i}(t^{2}),\tau^{\prime}) \leq\angle(t\nabla^{2}l_{i}(\tau,\cdot)+O_{i}(t^{2}),\tau)+\angle (\tau,\tau^{\prime})\] \[\leq\frac{\pi}{2}-\frac{3}{2}\theta_{0}+\theta_{0}=\frac{\pi}{2}- \frac{1}{2}\theta_{0},\]
Thus, \(\langle\nabla l_{i}(q^{\prime}),\tau^{\prime}\rangle>0\).
For \(k\in K\), note that \(\langle\nabla l_{k}(q),\tau^{\prime}\rangle>0\) since \(\tau^{\prime}\in\operatorname{int}(\mathbf{C}\times T^{\operatorname{sys} \perp})\) if \(J=\emptyset\) and \(\tau^{\prime}\in\partial\mathbf{C}\times T^{\operatorname{sys}\perp}\) if \(J\neq\emptyset\), then
\[\langle\nabla l_{k}(q^{\prime}),\tau^{\prime}\rangle=\langle\nabla l_{k}(q), \tau^{\prime}\rangle+\langle t\nabla^{2}l_{k}(q)(\tau,\cdot)+O_{k}(t^{2}), \tau^{\prime}\rangle>0\]
where the second term is positive for the same reason as above.
Each term has a positive projection onto \(\tau^{\prime}\) and thus the sum is nonzero.
- Case 2(b): When furthermore, \(\min_{k\in I_{nse}}\langle\nabla l_{k},\tau\rangle\leq-2d\).
Let \(J\subset I_{nse}\) be the index set of the vectors that realize the minimum above.
For \(i\in I_{e}\), note that \(\langle\nabla l_{j},\tau\rangle-\langle\nabla l_{i},\tau\rangle\leq-2d+d=-d\), then
\[\frac{||\operatorname{proj}_{v}(e^{-\frac{1}{T}l_{i}(q^{\prime})} \nabla l_{i}(q^{\prime}))||}{||\operatorname{proj}_{v}(e^{-\frac{1}{T}l_{j}(q^ {\prime})}\nabla l_{j}(q^{\prime}))||}\] \[=\frac{e^{-\frac{1}{T}(t\langle\nabla l_{i},\tau\rangle+O_{i}(t^ {2}))}}{e^{-\frac{1}{T}(t\langle\nabla l_{j},\tau\rangle+O_{j}(t^{2}))}}\frac{ ||\operatorname{proj}_{v}(t\nabla^{2}l_{i}(q)(\tau,\cdot)+O_{i}(t^{2}))||}{|| \operatorname{proj}_{v}(\nabla l_{j}(q^{\prime}))||}\] \[=te^{\frac{1}{T}(t\langle\nabla l_{j},\tau\rangle-t\langle\nabla l _{i},\tau\rangle+O_{ij}(t^{2}))}\frac{||\operatorname{proj}_{v}(\nabla^{2}l_ {i}(q)(\tau,\cdot)+O_{i}(t))||}{||\operatorname{proj}_{v}(\nabla l_{j}(q^{ \prime}))||}\] \[\leq te^{-\frac{1}{2}\frac{1}{T}td}\frac{||\operatorname{proj}_{v }(\nabla^{2}l_{i}(q)(\tau,\cdot)+O_{ij}(t))||}{||\operatorname{proj}_{v}( \nabla l_{j}(q^{\prime}))||}<\epsilon,\]
for \(t\) sufficiently small.
By the choice of \(v\), \(\langle\nabla l_{k}(q^{\prime}),v\rangle\) is bounded from below by a positive
constant for all \(k\in I_{nse}\). Therefore,
\[||\sum_{I}e^{-\frac{1}{T}l_{i}(q^{\prime})}\nabla l_{i}(q^{\prime})|| \geq||\operatorname{proj}_{v}\sum_{I_{e}\cup I_{nse}}e^{-\frac{1}{T }l_{i}(q^{\prime})}\nabla l_{i}(q^{\prime})||\] \[\geq||\operatorname{proj}_{v}\sum_{I_{nse}}e^{-\frac{1}{T}l_{j}(q ^{\prime})}\nabla l_{j}(q^{\prime})||\] \[-||\operatorname{proj}_{v}\sum_{I_{e}}e^{-\frac{1}{T}l_{i}(q^{ \prime})}\nabla l_{i}(q^{\prime})||\] \[\geq||\operatorname{proj}_{v}\sum_{J}e^{-\frac{1}{T}l_{j}(q^{ \prime})}\nabla l_{j}(q^{\prime})||\] \[-||\operatorname{proj}_{v}\sum_{I_{e}}e^{-\frac{1}{T}l_{i}(q^{ \prime})}\nabla l_{i}(q^{\prime})||\] \[>(1-r\epsilon)||e^{-\frac{1}{T}l_{j}(q^{\prime})}\operatorname{ proj}_{v}(\nabla l_{j}(q^{\prime}))||>0\]
Each case corresponds to a compact region of the unit sphere, and the process in each case can be carried out continuously; hence the theorem follows.
**Lemma 9.3**.: _Let \(\{v_{i}\}_{i\in I}\) be a semi-eutactic set, then the index set \(I\) can be uniquely split into a maximal eutactic index subset \(I_{e}\) and the non-semi-eutactic index complement \(I_{nse}\)._
Proof.: Consider the convex hull of \(\{v_{i}\}\), then the origin is on the boundary by definition. Let \(P\) be the maximal face of the convex hull that the origin lives on. Let \(I_{e}\) be the index set of the maximal subset whose convex hull is \(P\), then any hyperplane perpendicular to \(\operatorname{Span}\{v_{i}\}\) passing through \(\{v_{i}\}_{I_{e}}\) separates all other vectors on one side.
**Lemma 9.4**.: _Given a finite non-semi-eutactic set of vectors \(\{v_{k}\}\subset\mathbb{R}^{n}\), then for any \(\theta\) small, there exists \(d>0\) such that for any unit vector \(\tau\) at least one of the following is true: (1) \(\angle(v_{k},\tau)\leq\frac{\pi}{2}+\theta\) for all \(k\), (2) \(\min_{k}\langle v_{k},\tau\rangle\leq-2d.\)_
Proof.: This is by continuity.
**Lemma 9.5**.: _Let \(V_{1},V_{2}\subset\mathbb{R}^{n}\) be two linear subspaces with nontrivial intersection \(V_{0}:=V_{1}\cap V_{2}\). Then there exists a constant \(C\) depending on
\(\angle(V_{1},V_{2})\) such that for \(\theta\) small enough, for any \(v\in\mathbb{R}^{n}\) with \(\angle(v,V_{i})<\theta\) for \(i=1,2\), we have \(\angle(v,V_{0})<C\theta\), i.e.,_
\[C(V_{1},\theta)\cap C(V_{2},\theta)\subset C(V_{0},C\theta).\]
Proof.: It is obvious if one is a subspace of the other. Assume \(\angle(V_{1},V_{2})>0\). Let \(v_{i}\) be the projection of \(v\) onto \(V_{i}\), \(i=0,1,2\) and \(v^{\prime}_{i}=v_{i}-v_{0}\), \(i=1,2\). Now by the setup \(\angle(V_{1},V_{2})=\min\{\angle(v^{\prime}_{1},v^{\prime}_{2}),\pi-\angle(v^{ \prime}_{1},v^{\prime}_{2})\}>0\) and \(\angle(v,V_{i})=\angle(v,v_{i})\) for \(i=0,1,2\), thus it suffices to prove the lemma in \(\mathbb{R}^{3}\). Let \(C_{i}\) be the great circle on \(S^{2}\) spanned by \(v_{i}\) and \(v_{0}\), \(i=1,2\), then \(C_{1}\cap C_{2}=\pm v_{0}\), and \(\min\{\angle(v^{\prime}_{1},v^{\prime}_{2}),\pi-\angle(v^{\prime}_{1},v^{ \prime}_{2})\}=\angle(C_{1},C_{2})\). Therefore, there exists \(C>0\) depending on \(\angle(C_{1},C_{2})\) such that \(\angle(v,v_{0})=d_{S^{2}}(v,v_{0})<C\max_{i=1,2}d_{S^{2}}(v,C_{i})=C\max_{i=1,2} \angle(v,v_{i})<C\theta\) if \(v\) is close enough to \(C_{1}\) and \(C_{2}\).
**Lemma 9.6**.: _Let \(\{v_{i}\}_{i\in I}\subset\mathbb{R}^{n}\) be a finite non-semi-eutactic set of vectors, and \(\mathbf{C}=\cap_{i}\mathbb{H}(v_{i})\), then there exists \(\theta\) small enough, such that for any vector \(\tau\not\in\mathbf{C}\) with \(\angle(v_{i},\tau)\leq\frac{\pi}{2}+\theta\) for all \(i\), there exists a unique unit vector \(u\) such that \(\angle(u,\tau)=\min_{v\in\partial\mathbf{C}}\angle(v,\tau)\). Furthermore, \(\angle(u,\tau)<C\theta\) for constant \(C\)._
Proof.: It is clear that \(v\in\mathbf{C}\) if and only if \(\langle v_{i},v\rangle\geq 0\) for all \(i\), then for \(v_{1},v_{2}\in\mathbf{C}\), for \(0\leq\lambda\leq 1\), \(\langle v_{i},\lambda v_{1}+(1-\lambda)v_{2}\rangle\geq 0\), which shows that \(\mathbf{C}\) is radial and convex. Note that \(\mathbf{C}\) is contained in some half space. Now consider the angle between a vector and \(\tau\) as a function on \(\mathbf{C}\cap S^{n-1}\). If \(\tau\not\in\mathbf{C}\cap S^{n-1}\), \(\angle(\cdot,\tau)=d_{S^{n-1}}(\cdot,\tau)\) is minimized over \(\mathbf{C}\cap S^{n-1}\) by a unique vector \(v\in\partial\mathbf{C}\cap S^{n-1}\) by convexity if \(\tau\) is close enough to \(\partial\mathbf{C}\).
The second part is due to Lemma 9.5 and finiteness of faces.
**Definition 9.7**.: With the same settings in the lemma above, define
\[\angle(\tau,\partial\mathbf{C})=\angle(u,\tau)\text{ and }\operatorname{proj}_{ \mathbf{C}}\tau=\operatorname{proj}_{u}\tau.\]
_Remark 9.8_.: We can reduce the condition on \(\mathbf{C}\) to any radial and convex subset for Lemma 9.6 and Definition 9.7.
**Lemma 9.9**.: _Let \(\mathbf{C}=\cap\mathbb{H}_{i}\neq\emptyset\) be the intersection of finitely many half spaces. Suppose \(V\) is a linear subspace that has nontrivial intersection
_with \(\mathbf{C}\), then for any \(v\in\mathbb{R}^{n}\) and for \(\theta\) small, if \(\angle(v,\partial\mathbf{C})<\theta\) and \(\angle(v,V)<\theta\), then \(\angle(v,\mathbf{C}\cap V)\leq\angle(v,\partial\mathbf{C}\cap V)<C\theta\)._
Proof.: Let \(v^{\prime}\) be the projection of \(v\) onto \(\partial\mathbf{C}\) and \(f\) be a face of \(\partial\mathbf{C}\) such that \(v^{\prime}\in f\); we claim \(f\cap V\neq\emptyset\) if \(\theta\) is chosen small enough. Otherwise, \(\angle(f,V)\leq\angle(v,f)+\angle(v,V)<2\theta\), which is a contradiction once \(2\theta<\min_{f\in F}\angle(f,V)\), where \(F=\{f\colon f\text{ is a face of }\partial\mathbf{C}\text{ and }f\cap V=\emptyset\}\). Since \(f\cap V\) is radial and convex, \(\angle(v,\partial\mathbf{C}\cap V)\leq\angle(v,f\cap V)\leq\angle(v,\operatorname{proj}_{f\cap V}v^{\prime})<C\theta\) by Lemma 9.5.
## 10. Approaching the Boundary
**Definition 10.1**.: The \(\epsilon\)-thin part and \(\epsilon\)-thick part of the moduli space are defined as
\[\mathcal{M}^{\leq\epsilon} =\{X\in\mathcal{M}:\operatorname{sys}(X)\leq\epsilon\},\] \[\mathcal{M}^{\geq\epsilon} =\{X\in\mathcal{M}:\operatorname{sys}(X)\geq\epsilon\}.\]
**Theorem 10.2** (Points near the boundary).: _There exist \(\epsilon>0\) and \(T_{0}>0\) such that any \(q\in\mathcal{M}^{\leq\epsilon}\) is an ordinary point for \(\operatorname{sys_{T}}\) for all \(T<T_{0}\). In fact, for any \(\beta\) of length \(\leq\epsilon\), \(\langle\nabla l_{\beta},\nabla\operatorname{sys_{T}}\rangle>0\)._
Proof.: Let \(q\in\mathcal{M}^{\leq\epsilon}\) and \(S(q)=\{\gamma_{1},\cdots,\gamma_{r}\}\). By the collar lemma (or Corollary 4.1.2 in [1]), the \(\gamma_{i}\)'s are disjoint. Let \(\gamma_{I}=\cup\gamma_{i}\). We estimate the Weil-Petersson pairing of \(\nabla l_{i}\) with \(\nabla l_{\gamma}\), where \(\gamma\) is classified into the following three types:
Type 1: \(\gamma=\gamma_{j}\).
\[\langle\nabla l_{\gamma},\nabla l_{i}\rangle>||\nabla l_{i}||\delta_{ij}.\]
Type 2: \(\gamma\cap\gamma_{I}=\emptyset\).
\[\langle\nabla l_{\gamma},\nabla l_{i}\rangle>0.\]
Type 3: \(\gamma\not\subset\gamma_{I}\), \(\gamma\cap\gamma_{I}\neq\emptyset\).
By the collar lemma, \(l_{\gamma}>x(\epsilon)\), where \(x(\epsilon)=2\operatorname{arcsinh}(\frac{1}{\sinh\frac{\epsilon}{2}})>-2\log\epsilon\).
Therefore, choose any \(i\),
\[||\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}\nabla l_{\gamma}|| \geq||\sum_{\gamma\cap\gamma_{I}=\emptyset\text{ or }\gamma\subset \gamma_{I}}e^{-\frac{1}{T}l_{\gamma}}\nabla l_{\gamma}||-||\sum_{\gamma\cap \gamma_{I}\neq\emptyset}e^{-\frac{1}{T}l_{\gamma}}\nabla l_{\gamma}||\] \[\geq e^{-\frac{1}{T}l_{i}}||\nabla l_{i}||-Ce^{-\frac{1}{T}x( \epsilon)}\] \[\geq\frac{1}{2}\sqrt{\epsilon}e^{-\frac{1}{T}\epsilon}-Ce^{\frac {2}{T}\log\epsilon}\]
In order for the lower bound above to be positive, it suffices to make \(T\) satisfy
\[T<\frac{-2\log\epsilon-\epsilon}{\log(2C)-\frac{1}{2}\log\epsilon}\]
Note that
\[\frac{-2\log\epsilon-\epsilon}{\log(2C)-\frac{1}{2}\log\epsilon}\to 4,\]
as \(\epsilon\to 0^{+}\). Therefore, when \(\epsilon_{0}\) is small enough, a uniform upper bound \(T_{0}\) for \(T\) can be chosen such that for any \(q\in\mathcal{M}^{\leq\epsilon_{0}}\), \(\nabla\operatorname{sys}_{\mathrm{T}}(q)\neq 0\) for all \(T<T_{0}\).
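A quick numerical check of this limit (a sketch; the constant \(C\) below is an arbitrary placeholder, and the convergence is only logarithmic in \(\epsilon\)):

```python
# The upper bound for T tends to 4 (slowly) as epsilon -> 0+.
import math

C = 10.0  # placeholder constant; the limit does not depend on it
for eps in [1e-2, 1e-4, 1e-8, 1e-16, 1e-32]:
    bound = (-2 * math.log(eps) - eps) / (math.log(2 * C) - 0.5 * math.log(eps))
    print(eps, bound)
```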
_Remark 10.3_.: By the critical-point-attracting property, all critical points of \(\operatorname{sys}_{\mathrm{T}}\) live in the thick part of the moduli space.
_Remark 10.4_.: The positive pairing in the theorem implies that when \(T<T_{0}\), \(\nabla\operatorname{sys}_{\mathrm{T}}\) points transversely inward to the sys-level sets \(\mathcal{M}^{=\epsilon^{\prime}}\) in \(\mathcal{M}^{\leq\epsilon}\). Therefore,
**Corollary 10.5**.: _For \(\epsilon\) small enough, \(\mathcal{M}^{\geq\epsilon}\) is a deformation retract of \(\mathcal{M}\)._
## 11. Extension onto the Boundary
Recall that the Deligne-Mumford boundary \(\partial_{\textit{DM}}\mathcal{M}\) of the moduli space \(\mathcal{M}(X)\) is the space of all nodal hyperbolic surfaces modeled on \(X\). We may use \(\partial\mathcal{M}\) for simplicity. A stratum is a maximal connected subset in which points are homeomorphic to each other. The Deligne-Mumford compactification is the union \(\overline{\mathcal{M}}=\mathcal{M}\cup\partial\mathcal{M}\).
Let \(X\in\partial\mathcal{M}\) with the pinched curve set \(S\) and call a neighborhood _stratifically closed_ if the pinched curve set of any point is a subset of \(S\). Expand \(S\) to a pants decomposition \(\bar{S}=\{\alpha_{i}\}\), then the Fenchel-Nielsen coordinates are given by the associated length and twist parameters \(\{l_{i},\tau_{i}\}\). For a small neighborhood \(U\) of \(X\) that is stratifically closed, let \((U,\{k_{i}e^{i\tau_{i}}\})\) be a chart, with \(k_{i}e^{i\tau_{i}}=l_{i}^{\chi}e^{i\tau_{i}}\) being the transition maps on the overlap, where \(\chi\) equals \(\frac{1}{2}\) if \(\alpha_{i}\in S\) and \(1\) otherwise. We call such a chart a _nodal chart_ at \(X\). Different extensions are \(C^{\infty}\)-compatible by analytical compatibility of Fenchel-Nielsen coordinates given by different pants decompositions.
Let \(S=\{\beta_{1},\cdots,\beta_{s}\}\) be a set of disjoint simple closed geodesics (not necessarily shortest) on \(X\). As all \(l_{\beta_{i}}\to 0\), by definition \(X\to X_{0}\in\partial\mathcal{M}\) ending in the respective stratum. The limit nodal surface \(X_{0}\) has a different topology and may not be connected away from the nodes. The tangent space of the compactification at \(X_{0}\in\partial\mathcal{M}\) can be decomposed as \(T(X_{0})=T^{\mathit{Str}}(X_{0})\oplus T^{\mathit{Nod}}(X_{0})\), where \(T^{\mathit{Str}}(X_{0})\) is given by the length and twist parameters for \(\bar{S}\setminus S\) and \(T^{\mathit{Nod}}(X_{0})\) is given by those for \(S\).
Under a nodal chart at some \(X\in\partial\mathcal{M}\) with pinched curve set \(S\) as above, the geodesic-length functions \(l_{\beta_{i}}=k_{\beta_{i}}^{2}\) are \(C^{\infty}\).
To extend \(\mathrm{sys_{T}}\) to \(\partial\mathcal{M}\), note that for fixed \(T>0\),
\[e^{-\frac{1}{T}l_{\beta_{i}}}\to 1\text{ as }l_{\beta_{i}}\to 0.\]
For any \(\gamma\) that crosses some \(\beta_{i}\), since \(\sinh\frac{l_{\beta_{i}}}{2}\sinh\frac{l_{\gamma}}{2}>2\arcsinh 1\),
\[l_{\gamma}\to\infty\text{ and }e^{-\frac{1}{T}l_{\gamma}}\to 0\text{ as }l_{\beta_{i}}\to 0.\]
**Definition 11.1**.: For \(X\in\partial\mathcal{M}\), let
\[\mathrm{sys_{T}}(X)= -T\log\left(s+\sum_{\gamma\text{ s.c.g. on }X}e^{-\frac{1}{T}l_{\gamma}(X)}\right)\] \[= -T\log\left(s+\sum_{i}e^{-\frac{1}{T}\,\mathrm{sys_{T}}(X_{i})} \right),\]
where \(s=\#S\).
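For intuition (a sketch, not part of the text), \(\operatorname{sys}_{\mathrm{T}}\) is a soft minimum of the length spectrum; the finite list of lengths below is a hypothetical truncation of the sum over closed geodesics, and `num_pinched` plays the role of \(s\) above.

```python
# sys_T as a soft minimum of geodesic lengths (finite truncation, hypothetical data).
import math

def sys_T(lengths, T, num_pinched=0):
    """-T * log( s + sum_gamma exp(-l_gamma / T) ), with s pinched curves contributing e^0 = 1 each."""
    return -T * math.log(num_pinched + sum(math.exp(-l / T) for l in lengths))

lengths = [0.7, 0.9, 1.3, 2.0]              # hypothetical geodesic lengths
for T in [1.0, 0.1, 0.01]:
    print(T, sys_T(lengths, T))             # approaches min(lengths) = 0.7 as T -> 0

# On the boundary: s = 2 pinched curves plus the lengths surviving on the nodal pieces.
print(sys_T([1.1, 1.4], T=0.1, num_pinched=2))
```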
It is known from previous discussion that \(\mathrm{sys_{T}}\) is at least \(C^{2}\) on each stratum. Note that an extension of the Weil-Petersson metric from \(\mathcal{M}\) to the compactification \(\overline{\mathcal{M}}\), that equals the Weil-Petersson metric when restricted on each stratum, does not exist. But given continuity of \(\mathrm{sys_{T}}\left|{}_{\overline{\mathcal{M}}}\right.\), following the stratum-wise gradient flow, we know that it attains its global minimum on maximally pinched surfaces, and therefore is positively bounded on \(\partial M\). We write this as a lemma:
**Lemma 11.2**.: _The function \(\mathrm{sys_{T}}\) is bounded on \(\overline{\mathcal{M}}\) by positive numbers._
**Lemma 11.3**.: _For \(T\) sufficiently small, \(\mathrm{sys_{T}}:\overline{\mathcal{M}}\to\mathbb{R}\) is \(C^{2}\), under the differential structure given by nodal charts._
Proof.: It is already shown that \(\mathrm{sys_{T}}\) is \(C^{2}\) on the interior and continuous globally by its definition. Let \(X\to X_{0}\in\partial\mathcal{M}\) with pinched curve set \(S=\{\beta_{1},\cdots,\beta_{s}\}\). Recall that the first derivative is given by
\[\mathrm{sys_{T}}^{\prime}=\frac{\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}l_{ \gamma}^{\prime}}{\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}},\]
and the second derivative
\[\mathrm{sys_{T}}^{\prime\prime}=\frac{1}{T}\left(\frac{\sum_{\gamma}e^{- \frac{1}{T}l_{\gamma}}l_{\gamma}^{\prime}}{\sum_{\gamma}e^{-\frac{1}{T}l_{ \gamma}}}\right)^{2}-\frac{1}{T}\frac{\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}( l_{\gamma}^{\prime})^{2}}{\sum_{\gamma}e^{-\frac{1}{T}l_{\gamma}}}+\frac{\sum_{ \gamma}e^{-\frac{1}{T}l_{\gamma}}l_{\gamma}^{\prime\prime}}{\sum_{\gamma}e^{- \frac{1}{T}l_{\gamma}}}.\]
Thus it suffices to show that \(e^{-\frac{1}{T}l_{\gamma}}l_{\gamma}^{\prime}\) and \(e^{-\frac{1}{T}l_{\gamma}}l_{\gamma}^{\prime\prime}\) converge when the base point approaches the boundary. We show this according to the following three types of geodesics:
Type 1: \(\gamma=\beta_{i}\).
It is clear by definition that \(l=k_{\beta_{i}}^{2}\) is \(C^{\infty}\).
Type 2: \(\gamma\cap\beta_{i}=\emptyset\).
We can express \(l_{\gamma}=\mathrm{tr}(\Pi A_{i})\) by a tracking algorithm by unfolding the surface, where each \(A_{i}\) is a rotation or translation matrix smooth in
Fenchel-Nielsen coordinates and not involving \(\tau_{\beta_{i}}\)'s.
Type 3: \(\gamma\not\subset\cup\beta_{i}\), \(\gamma\cap(\cup\beta_{i})\neq\emptyset\).
We have \(l_{\gamma}\geq-2\log l_{\beta_{i}}\) for \(\beta_{i}\cap\gamma\neq\emptyset\). Let \(\gamma_{i}\)'s be a few geodesics disjoint from \(\cup\beta_{i}\), such that \(\{\lambda_{i}=\nabla l_{\beta_{i}}^{\frac{1}{2}},J\lambda_{i},\nabla l_{i}\}\) is a local frame.
For the first directional derivatives, note that
\[e^{-\frac{1}{T}l_{\gamma}}|\langle\nabla l_{\gamma},\lambda_{i} \rangle|,e^{-\frac{1}{T}l_{\gamma}}|\langle\nabla l_{\gamma},J\lambda_{i} \rangle|\leq e^{-\frac{1}{T}l}||\nabla l_{\gamma}||\cdot||\lambda_{i}||\] \[\leq e^{-\frac{1}{T}l_{\gamma}}(c(l_{\gamma}+l_{\gamma}^{2}e^{ \frac{l_{\gamma}}{2}}))^{\frac{1}{2}}\cdot c\to 0,\]
and
\[e^{-\frac{1}{T}l_{\gamma}}|\langle\nabla l_{\gamma},\nabla l_{i} \rangle| \leq e^{-\frac{1}{T}l}||\nabla l_{\gamma}||\cdot||\nabla l_{i}||\] \[\leq e^{-\frac{1}{T}l_{\gamma}}(c(l_{\gamma}+l_{\gamma}^{2}e^{ \frac{l_{\gamma}}{2}}))^{\frac{1}{2}}\cdot(c(l_{i}+l_{i}^{2}e^{\frac{l_{i}}{2} }))^{\frac{1}{2}}\to 0.\]
For the second directional derivatives, note that
\[||e^{-\frac{1}{T}l_{\gamma}}XYl_{\gamma}|| =||e^{-\frac{1}{T}l_{\gamma}}(H(l_{\gamma})(X,Y)+\langle D_{X}Y,\nabla l_{\gamma}\rangle)||\] \[\leq e^{-\frac{1}{T}l_{\gamma}}(||H(l_{\gamma})(X,Y)||+||D_{X}Y||\cdot||\nabla l_{\gamma}||)\] \[\leq e^{-\frac{1}{T}l_{\gamma}}\big((4c(1+l_{\gamma}e^{\frac{l_{\gamma}}{2}})+O(l_{\gamma}^{3}))||X||\cdot||Y||+\frac{c}{l_{\beta_{i}}^{\frac{1}{2}}}\cdot||\nabla l_{\gamma}||\big)\to 0,\]
where \(X,Y\) are two vector fields near \(X_{0}\).
Therefore, \(\sum_{\gamma^{\prime}\in D\!t(\gamma)}e^{-\frac{1}{T}l_{\gamma^{\prime}}}\) is \(C^{2}\), where \(D\!t(\gamma)\) is the set of \(\gamma^{\prime}\)'s that are Dehn twist equivalent to \(\gamma\) along \(\cup\beta_{i}\).
The critical set \(\operatorname{Crit}(\operatorname{sys}_{\mathrm{T}}|_{\overline{\mathcal{M}}})\cap\mathcal{M}\) is relatively clear. Critical points on the boundary \(\partial\mathcal{M}\) correspond exactly to the critical points of \(\operatorname{sys}_{\mathrm{T}}|_{\mathcal{M}(\mathcal{S})}\): the \(T^{\mathit{Str}}\)-directional derivatives are \(0\) since \(\nabla\operatorname{sys}_{\mathrm{T}}|_{\mathcal{S}}\) is parallel to \(\nabla\operatorname{sys}_{\mathrm{T}}|_{\mathcal{M}(\mathcal{S})}\) under the identification \(\mathcal{S}\leftrightarrow\mathcal{M}(\mathcal{S})\), and the \(T^{\mathit{Nod}}\)-directional derivatives are \(0\) as well, following Remark 10.4.
On a nodal chart \((U,\{k_{i}e^{i\tau_{i}}\})\), the second derivative matrix is
\[\begin{pmatrix}2I&0\\ 0&*\end{pmatrix},\]
where \(*\) is the second derivative matrix of \(\operatorname{sys}_{\mathrm{T}}\) restricted to \(T^{\mathit{Str}}\), which is nondegenerate. This shows that critical points on \(\partial\mathcal{M}\) are nondegenerate.
**Theorem 11.4**.: \(\operatorname{sys}_{\mathrm{T}}:\overline{\mathcal{M}}\to\mathbb{R}\) _is \(C^{2}\)-Morse with the given differential structure._
Part (2) of the main theorem holds true for the extended \(\operatorname{sys}_{\mathrm{T}}\) for similar reasons as in Sections 7-10. This completes the proof of the main theorem.
## 12. Weil-Petersson Gradient Flow on \(\overline{\mathcal{M}}_{1,1}\)
As a \(C^{0}\) function, the systole does not give a Weil-Petersson gradient vector field. Although we can consider the gradient 'direction' instead, it turns out to be a discontinuous distribution in the tangent space. The \(\operatorname{sys}_{\mathrm{T}}\) functions are \(C^{2}\) and thus it is possible to observe the structure of the base space through the gradient flow. Here we present an example.
**Theorem 12.1**.: _The Weil-Petersson gradient flow of \(\operatorname{sys}_{T}\) on \(\mathcal{M}_{1,1}\) when \(T\) is sufficiently small:_
The 3-punctured sphere is a 6-sheeted covering space of \(\mathcal{M}_{1,1}\) that resolves the singularities. The red point is a triple point in the moduli space where \(\operatorname{sys}_{\mathrm{T}}\) attains its maximum, and the blue is a double point, see [13] for more on these surfaces. The punctures are 3-punctured spheres that form the Deligne-Mumford boundary.
\begin{tabular}{c l} \hline \hline \(J\) & multiindex, Weil-Petersson almost complex structure \\ \(K\) & multiindex \\ \(F_{J}\) & fan associated to \(J\) \\ \(v_{J}\) & projected average vector \\ \(c,C,\theta\) & constant \\ \(D\) & constant, Weil-Petersson connection \\ \(\gamma,\gamma_{i}\) & simple closed geodesic \\ \(\beta_{i}\) & pinched curve \\ \(l_{\gamma},l_{i}\) & length function associated to \(\gamma\) or \(\gamma_{i}\) \\ \(m,M\) & minimum, maximum of \(\{||\nabla l_{i}||,||\nabla^{2}l_{i}(\tau,\cdot)\}_{i;\tau}\) \\ sys & systole function \\ sec. sys & second systole function \\ \(\operatorname{syst}_{T}\) & \(:=-T\log\sum_{\gamma\text{ simple closed geodesic on }X}e^{-\frac{1}{T}L_{\gamma}(X)}\) \\ \(S\) & pinched curve set \\ \(S(X)\) & set of shortest geodesics on a surface \\ \(s_{X}(L)\) & number of simple closed geodesics of length \(\leq L\) on \(X\) \\ \(r\) & \(:=\#S(X)\), number of shortest geodesics \\ \(r_{1},r_{2}\) & \# of shortest geodesics with pairing with given vector \(\geq 0\), \(<0\) \\ \(I\) & index set for \(S(X)\) \\ \(I_{e}\) & subset of \(I\) indexing the maximal eutactic subset \\ \(I_{nse}\) & complement of \(I_{e}\) in \(I\) when \(X\) is semi-eutactic \\ \(\mathbf{C}\) & intersection of cones \\ \(\exp_{p}\) & exponential map at \(p\) \\ \(i\) & an automorphism of \(T_{p}\mathcal{T}\) \\ \(\pi\) & standard projection onto the unit sphere \\ \(\mathcal{T},\mathcal{M}\) & Teichmuller space, moduli space \\ \(\overline{\mathcal{M}},\partial\mathcal{M},\mathcal{S}\) & Deligne-Mumford compactification, D-M boundary, stratum \\ \(T^{\operatorname{sys}_{1}},T^{\operatorname{sys}_{2}}_{p}\mathcal{T}\) & major subspace \\ \(T^{\operatorname{sys}_{\perp}},T^{\operatorname{sys}_{\perp}}_{p}\mathcal{T}\) & minor subspace \\ \(T^{e}\) & subspace spanned by \(\{\nabla l_{i}\}_{I_{e}}\) \\ \(T^{e\perp_{\operatorname{sys}}}\) & orthogonal complement of \(T^{e}\) in \(T^{\operatorname{sys}}\) \\ \(\tilde{\Omega}_{T},\Omega_{T}\) & see Definition 5.7 \\ \(\tilde{\Phi}_{T},\Phi_{T}\) & see Section 6 \\ \(\tilde{\Psi}_{T},\Psi_{T}\) & see Definition 7.1 \\ \(\Psi_{1},\Psi_{2}\) & see Theorem 7.3 \\ \(\tilde{H}_{T},H_{T}\) & see Section 8 \\ \hline \hline \end{tabular}
|
2305.00102
|
Minimal relations for the balanced algebra
|
Motivated by a problem in graph theory, this article introduces an algebra
called the balanced algebra. This algebra is defined by generators and
relations, and the main goal is to find a minimal set of relations for it.
|
Erika Pirnes
|
2023-04-28T21:44:30Z
|
http://arxiv.org/abs/2305.00102v1
|
# Minimal relations for the balanced algebra
Erika Pirnes
**Abstract:** Motivated by a problem in graph theory, this article introduces an algebra called the balanced algebra. This algebra is defined by generators and relations, and the main goal is to find a minimal set of relations for it.
## 1 Introduction
This article is about an algebra \(\mathcal{B}\) called the _balanced algebra_. The algebra is related to a problem that comes up in algebraic graph theory. The algebra \(\mathcal{B}\) is defined via generators and relations, and the main goal of the article is to find a minimal set of relations for \(\mathcal{B}\).
Informal explanation:First think about all possible "words" in two letters \(L\) and \(R\). For example, \(L\), \(LR\), and \(LLRLR\) are words. A word is called balanced if it contains equal numbers of both letters. Among the words above, only \(LR\) is balanced. The elements of the balanced algebra \(\mathcal{B}\) are linear combinations of words, for example, \(LR\), or \(2RR+5LLR\). Moreover, any time two balanced words appear next to each other inside a word, they may be swapped and the resulting word is considered to be the same element in \(\mathcal{B}\) as the original word. For example, the words \(LRRL\) and \(RLLR\) correspond to the same element in \(\mathcal{B}\), because they can be obtained from each other by swapping the two balanced words \(LR\) and \(RL\).
As another example, the words \(LRLLRR=(LR)(LLRR)\) and \(LLRRLR=(LLRR)(LR)\) correspond to the same element in \(\mathcal{B}\); they can be obtained from each other by swapping \(LR\) and \(LLRR\). By writing the words as \(LRLLRR=L(RL)(LR)R\) and \(LLRRLR=L(LR)(RL)R\), it can be seen that the words can also be obtained from each other by swapping \(LR\) and \(RL\). As a generalization of this example, any time \(LR\) and \(LLRR\) appear next to each other inside a word, swapping \(LR\) and \(RL\) (as in the example) yields the same result as swapping \(LR\) and \(LLRR\). Thus it can be said that swapping \(LR\) and \(LLRR\) follows from swapping \(LR\) and \(RL\). Informally, the goal of this article is to find a "minimal" subset of swaps so that any swap follows from the swaps in the subset.
Formal explanation:The algebra \(\mathcal{B}\) is defined using generators and relations as follows. The generators are the letters \(L\) and \(R\). A _word_ is a concatenation of letters, and a _balanced_ word consists of equal numbers of both letters. The defining relations of \(\mathcal{B}\) are that any two balanced words commute. It turns out that many of these relations are redundant. The main goal of this article is to find a minimal set of relations for \(\mathcal{B}\); more precisely, a minimal subset of the original set of relations that can be used as the defining relations of \(\mathcal{B}\). It will be seen that this minimal subset is not unique; the main result gives a family of minimal subsets. One of the subsets in the family is then chosen to be studied in detail.
Motivation:The balanced algebra \(\mathcal{B}\) comes up in algebraic graph theory in the following way. Start with a graph \(\Gamma\), and choose a vertex \(\alpha\) as a base vertex. The vertex set of \(\Gamma\) is partitioned into sets called _subconstituents_; the \(i^{\text{th}}\) subconstituent consists of the vertices at distance \(i\) from \(\alpha\). The vertices of \(\Gamma\) form a basis of a vector space called the _standard module_. The _raising_ matrix \(R\) and the _lowering_ matrix \(L\) act on this basis by sending a vertex in the \(i^{\text{th}}\) subconstituent to the sum of its neighbors in the \((i+1)^{\text{st}}\) or \((i-1)^{\text{st}}\) subconstituent, respectively.
Under some assumptions (\(\Gamma\) is distance-regular and bipartite), the matrices \(L\) and \(R\), together with certain projection matrices, generate an algebra called the _subconstituent algebra_\(T\) of \(\Gamma\) with respect to \(\alpha\). Certain well-behaved irreducible \(T\)-modules are called _thin_ modules. Under the assumptions mentioned above, it is known that the balanced words in \(L\) and \(R\) commute if and only if every irreducible \(T\)-module is thin [1]. In this case the graph \(\Gamma\) is called thin with respect to \(\alpha\). Studying the balanced algebra may help to better understand thin graphs.
Organization of the article:In Section 2, it is explained how the condition "balanced words commute" comes up in algebraic graph theory. The balanced algebra \(\mathcal{B}\) is defined in Section 3. The concept of swaps, used in most proofs, is also explained in that section. Section 4 is for introducing some useful tools. A family of minimal sets of relations for \(\mathcal{B}\) is found in Section 5. In Section 6, one member of the family is studied in detail.
## 2 Motivation
This section gives a bit more detail about how the balanced algebra comes up in algebraic graph theory, in the study of distance-regular graphs. The familiar cube is an example of a distance-regular graph, and it is used as a running example to illustrate the concepts discussed. Reading this section is not necessary for understanding the rest of the article.
Assumptions regarding graphs:Throughout this section, \(\Gamma\) denotes a finite, undirected, and connected graph without any loops or repeated edges. The vertex set of \(\Gamma\) is denoted by \(\mathcal{X}\), and the number of vertices by \(n\).
Definition (distance and diameter):For a nonnegative integer \(k\), a _path_ of length \(k\) in \(\Gamma\) is a sequence \(x_{0},x_{1},\ldots,x_{k}\) of distinct vertices such that for \(1\leq i\leq k\), the vertices \(x_{i-1}\) and \(x_{i}\) are adjacent. This path is said to be _from \(x_{0}\) to \(x_{k}\)_. The _path-length distance function_\(\partial\) is defined as follows: for vertices \(x\) and \(y\) of \(\Gamma\), their distance \(\partial(x,y)\) is the minimal length of a path from \(x\) to \(y\). The _diameter_\(d=d(\Gamma)\) is defined to be the maximal distance between two vertices of \(\Gamma\).
Definition (distance-regular graph and intersection numbers):The graph \(\Gamma\) is called _distance-regular_ if, for \(0\leq i,j\leq d\), the size of the set \(\{z\colon\partial(x,z)=i,\partial(z,y)=j\}\) does not depend on the vertices \(x\) and \(y\), but only on their distance \(h=\partial(x,y)\). The size of the above set is denoted by \(p_{ij}^{h}\). The numbers \(p_{ij}^{h}\) (\(0\leq h,i,j\leq d\)) are called the _intersection numbers_ of \(\Gamma\).
Definition (bipartite graph):The graph \(\Gamma\) is _bipartite_ if its vertex set \(\mathcal{X}\) can be partitioned into two subsets with the property that two vertices belonging to the same subset are never adjacent.
More assumptions regarding graphs:For the rest of the section, it is assumed that \(\Gamma\) is distance-regular and bipartite. A vertex \(\alpha\) of \(\Gamma\) is chosen as a base vertex.
General notes about intersection numbers:Firstly, the intersection numbers are symmetric in the sense that \(p_{ij}^{h}=p_{ji}^{h}\). Secondly, the distance function \(\partial\) satisfies the triangle inequality, which means that \(p_{ij}^{h}=0\) if the sum of two of the numbers \(h,i,j\) is less than the third one. Thirdly, \(p_{ij}^{h}=0\) if \(h+i+j\) is odd. (This is because bipartite graphs do not have odd cycles.)
Example:The cube graph \(Q_{3}\) has vertex set \(\{0,1\}^{3}\), and two vertices are adjacent if they differ in exactly one coordinate. Note that the distance between two vertices is equal to the number of coordinates at which they differ. As there are three coordinates, the diameter of \(Q_{3}\) is 3.
The graph \(Q_{3}\) is distance-regular. The table below shows the intersection numbers \(p_{ij}^{h}\) for the triples \((h,i,j)\) which satisfy the triangle inequality and for which \(h+i+j\) is even; symmetry of intersection numbers allows to save space by listing only values with \(i\geq j\).
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \(h\) & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 3 & 3 & 3 \\ \hline \(i\) & 1 & 2 & 3 & 0 & 1 & 2 & 3 & 1 & 2 & 2 & 3 & 3 & 2 & 3 & 3 \\ \hline \(j\) & 1 & 2 & 3 & 0 & 0 & 1 & 2 & 1 & 0 & 2 & 1 & 3 & 1 & 0 & 2 \\ \hline \(p_{ij}^{h}\) & 3 & 3 & 1 & 1 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 0 & 3 & 1 & 0 \\ \hline \end{tabular}
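The table can be reproduced by brute force from the definition of the intersection numbers. The sketch below (not from the paper) does this for \(Q_{3}\), using that graph distance in the cube equals the number of differing coordinates.

```python
# Brute-force computation of intersection numbers of the cube graph Q_3.
from itertools import product

vertices = list(product([0, 1], repeat=3))
dist = lambda x, y: sum(a != b for a, b in zip(x, y))   # Hamming distance = graph distance

def p(h, i, j):
    # Distance-regularity: the count is the same for every pair (x, y) at distance h.
    for x in vertices:
        for y in vertices:
            if dist(x, y) == h:
                return sum(1 for z in vertices if dist(x, z) == i and dist(z, y) == j)
    return None

print(p(0, 1, 1), p(1, 2, 1), p(2, 2, 2), p(3, 2, 1))   # 3 2 2 3, matching the table
```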
Definition (standard module and subconstituents):The _standard module_\(V\) of \(\Gamma\) is a \(\mathbb{C}\)-vector space with basis \(\{v\colon v\in\mathcal{X}\}\). For \(0\leq i\leq d\), the set \(\Gamma_{i}(\alpha)=\{z\in\mathcal{X}\colon\partial(x,z)=i\}\) is called the \(i^{\text{th}}\)_subconstituent_ of \(\Gamma\) (with respect to \(\alpha\)). Let \(\operatorname{Mat}_{\mathcal{X}}(\mathbb{C})\) denote the algebra consisting of square matrices over \(\mathbb{C}\) with rows and columns indexed by \(\mathcal{X}\). The algebra \(\operatorname{Mat}_{\mathcal{X}}(\mathbb{C})\) acts on \(V\) by left multiplication.
Example:For the graph \(Q_{3}\), the vertex \(\alpha=(0,0,0)\) is chosen as the base vertex. The standard module has dimension 8. See Figure 2 for an illustration of \(Q_{3}\) and its subconstituents.
Figure 1: The cube graph \(Q_{3}\).
Figure 2: The graph \(Q_{3}\) with base vertex \(\alpha\). The dashed lines are used to separate the subconstituents.
Definition (raising and lowering matrices):The raising matrix \(R\) and the lowering matrix \(L\) are matrices in \(\operatorname{Mat}_{\mathcal{X}}(\mathbb{C})\) which act on \(V\) as follows. Let \(0\leq i\leq d\), and let \(v\) be a vertex in the \(i^{\text{th}}\) subconstituent. Then \(Rv\) is the sum of the neighbors of \(v\) in the \((i+1)^{\text{st}}\) subconstituent (or \(0\) if \(i=d\)). Similarly, \(Lv\) is the sum of the neighbors of \(v\) in the \((i-1)^{\text{st}}\) subconstituent (or \(0\) if \(i=0\)).
Example:The table below shows how the raising and lowering matrices act on the vertices of \(Q_{3}\). The vertex labeling is from Figure 2.
Note:When explicitly writing down matrices associated with \(Q_{3}\), the rows and columns will be indexed in the order given by the leftmost column in the table above.
Definition (adjacency matrix):The adjacency matrix \(A\) of \(\Gamma\) is a matrix in \(\operatorname{Mat}_{\mathcal{X}}(\mathbb{C})\) which acts on \(V\) by sending a vertex to the sum of its neighbors.
A matrix equation:Keeping in mind that \(\Gamma\) is bipartite (which implies that two vertices inside the same subconstituent cannot be adjacent), the neighbors of a vertex in the \(i^{\text{th}}\) subconstituent can only be in the \((i+1)^{\text{st}}\) or \((i-1)^{\text{st}}\) subconstituent. This implies that the raising, lowering, and adjacency matrices are related via the equation \(A=R+L\).
Example:For \(Q_{3}\), the matrix equation \(A=R+L\) looks as follows.
\[\begin{bmatrix}0&1&1&1&0&0&0&0\\ 1&0&0&0&1&1&0&0\\ 1&0&0&0&1&0&1&0\\ 1&0&0&0&0&1&1&0\\ 0&1&1&0&0&0&0&1\\ 0&1&0&1&0&0&0&1\\ 0&0&1&1&0&0&0&1\\ 0&0&0&0&1&1&1&0\end{bmatrix}=\begin{bmatrix}0&0&0&0&0&0&0&0\\ 1&0&0&0&0&0&0&0\\ 1&0&0&0&0&0&0&0\\ 1&0&0&0&0&0&0&0\\ 0&1&1&0&0&0&0&0\\ 0&1&0&1&0&0&0&0\\ 0&0&1&1&0&0&0&0\\ 0&0&0&0&1&1&1&0\end{bmatrix}+\begin{bmatrix}0&1&1&1&0&0&0&0\\ 0&0&0&0&1&1&0&0\\ 0&0&0&0&1&0&1&0\\ 0&0&0&0&0&1&1&0\\ 0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&0&0\end{bmatrix}.\]
Projections:For \(0\leq i\leq d\), let \(E_{i}^{\star}\) denote the matrix in \(\operatorname{Mat}_{\mathcal{X}}(\mathbb{C})\) which acts on \(V\) as follows: A vertex in the \(i^{\text{th}}\) subconstituent is sent to itself, and vertices in all other subconstituents are sent to zero. Note that \(I=E_{0}^{\star}+E_{1}^{\star}+\cdots+E_{d}^{\star}\).
Example:The diameter of \(Q_{3}\) is 3 (as seen earlier), and the equation \(I=E_{0}^{\star}+E_{1}^{\star}+E_{2}^{\star}+E_{3}^{\star}\) looks as follows.
\[I=\operatorname{diag}(1,1,1,1,1,1,1,1)=\operatorname{diag}(1,0,0,0,0,0,0,0)+\operatorname{diag}(0,1,1,1,0,0,0,0)+\operatorname{diag}(0,0,0,0,1,1,1,0)+\operatorname{diag}(0,0,0,0,0,0,0,1),\]
where the four summands on the right are \(E_{0}^{\star}\), \(E_{1}^{\star}\), \(E_{2}^{\star}\), and \(E_{3}^{\star}\), respectively.
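The identities \(A=R+L\) and \(I=E_{0}^{\star}+E_{1}^{\star}+E_{2}^{\star}+E_{3}^{\star}\) can also be verified mechanically. The sketch below (not from the paper) orders the vertices of \(Q_{3}\) by subconstituent; the ordering within each subconstituent is an arbitrary choice, so individual matrix entries may be permuted relative to the displays above, but the two identities hold regardless.

```python
# Adjacency, raising, lowering, and projection matrices for Q_3 with base vertex (0,0,0).
import numpy as np
from itertools import product

base = (0, 0, 0)
dist = lambda x, y: sum(a != b for a, b in zip(x, y))
vertices = sorted(product([0, 1], repeat=3), key=lambda v: dist(v, base))  # ordered by subconstituent
n = len(vertices)
level = [dist(v, base) for v in vertices]

A = np.array([[1 if dist(u, v) == 1 else 0 for v in vertices] for u in vertices])
R = np.array([[A[u, v] if level[u] == level[v] + 1 else 0 for v in range(n)] for u in range(n)])
L = np.array([[A[u, v] if level[u] == level[v] - 1 else 0 for v in range(n)] for u in range(n)])
E = [np.diag([1 if level[v] == i else 0 for v in range(n)]) for i in range(4)]

print(np.array_equal(A, R + L))                        # True
print(np.array_equal(sum(E), np.eye(n, dtype=int)))    # True
```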
## 3 Ideals and swapping
This section describes how the balanced algebra is obtained as the quotient by an ideal \({\cal J}\) of the free algebra with two generators \(L\) and \(R\). (This is exactly the generators and relations approach from Section 1, done in detail.) The main goal of this article is to find a minimal generating set for \({\cal J}\). A key result of this section gives a powerful perspective ("swaps") for looking at ideal membership.
Definition (words and free algebra):Let \({\cal A}\) be the free \({\mathbb{C}}\)-algebra with the generators \(L\) and \(R\). The two generators are called _letters_, and a _word of length \(n\)_ is a product (concatenation) \(a_{1}a_{2}\ldots a_{n}\), where \(a_{i}\) is a letter for \(1\leq i\leq n\). A _subword_ of \(a_{1}a_{2}\ldots a_{n}\) is a word \(a_{k}a_{k+1}\ldots a_{l}\) where \(1\leq k\leq l\leq n\). The length of a word \(W\) is denoted by \(l(W)\). The empty word has length \(0\), and it is the multiplicative identity of \({\cal A}\). All other words have positive length and are called nonempty. As a complex vector space, \({\cal A}\) has a basis consisting of all possible words in the two letters \(L\) and \(R\).
Example:\(LLL\) and \(RL\) are words, and \(2LLL+5RL\) is an element of \({\cal A}\). Some subwords of \(RRLRLL\) are \(RRLR\) and \(LRL\).
Important note:In this article, a "word" always refers to a word in \({\cal A}\), and "an ideal of \({\cal A}\)" is used to mean a two-sided ideal of \({\cal A}\).
Definition (balanced words and balanced algebra):A word is called _balanced_ if the letters \(L\) and \(R\) appear equally many times in it. Define the set
\[S=\{FG-GF\colon\ F\ \mbox{and}\ G\ \mbox{are nonempty balanced words}\},\]
and let \({\cal J}\) be the ideal of \({\cal A}\) generated by \(S\). The quotient algebra \({\cal B}={\cal A}/{\cal J}\) is called the _balanced algebra_.
Example:The words \(LR\) and \(RRLRLL\) are both nonempty balanced words, which implies that \((LR)(RRLRLL)-(RRLRLL)(LR)\in S\). See Figure 3 for an illustration of the word \(RRLRLL\).
Goal:The main goal of this article is to find a minimal subset of \(S\) that generates \({\cal J}\). Note that there are many possible such subsets; in fact, the main result of this article (Theorem 5.8) gives an infinite family of them. On the way, some choices are made, so there might well exist some "nice" minimal subset of \(S\) that generates \({\cal J}\) but which does not belong in this family.
Figure 3: The word \(RRLRLL\). An ascending line segment represents the letter \(R\) and a descending line segment represents the letter \(L\). (Words are drawn from left to right.) As the word is balanced, the word begins and ends at the same vertical level; in this picture, a dashed line is drawn at that level.
Definition (equivalence of words):Define the binary relation \(\sim\) on the set of words as follows: if \(X\) and \(Y\) are words, then \(X\sim Y\) whenever \(X-Y\in\mathcal{J}\). Note that \(\sim\) is an equivalence relation. Whenever equivalence classes of words are mentioned, they refer to equivalence classes with respect to the relation \(\sim\).
Example:\(LRRRLL-RRLLLR=(LR)(RRLL)-(RRLL)(LR)\in S\subset\mathcal{J}\), and this implies that \(LRRRLL\sim RRLLLR\).
Note:The equivalence relation defined above is the same one that would normally be used for determining whether two elements of \(\mathcal{A}\) correspond to the same element in the quotient \(\mathcal{B}\); the only difference is that this equivalence relation is only used for words (and not linear combinations of words). Going forward, elements of \(\mathcal{B}\) are not discussed, and instead everything is done in \(\mathcal{A}\).
Definition (swaps):Consider two nonempty balanced words \(F\) and \(G\), and assume that they appear next to each other inside a word \(W\). Then there exist words \(W_{1}\) and \(W_{2}\) so that \(W=W_{1}FGW_{2}\), or \(W=W_{1}GFW_{2}\). Switching the places of \(F\) and \(G\) is called a _swap_ of type \((F,G)\). The two words \(W_{1}FGW_{2}\) and \(W_{1}GFW_{2}\) are said to be _related by a swap_, or more precisely, related by a swap of type \((F,G)\). Note that a swap of type \((F,G)\) is the same thing as a swap of type \((G,F)\).
Example:The words \(RLLR=(RL)(LR)\) and \(LRRL=(LR)(RL)\) are related by a swap of type \((RL,LR)\).
Note:Sometimes two words can be related by a swap in multiple ways, as seen in the following example.
Example:The words \(RRLLRL=R(RL)(LR)L\) and \(RLRRLL=R(LR)(RL)L\) are related by a swap of type \((RL,LR)\). By rearranging the parentheses, the words can be written as \((RRLL)(RL)\) and \((RL)(RRLL)\), so the words are also related by a swap of type \((RRLL,RL)\). Figure 4 illustrates the two words.
Figure 4: The words \(RRLLRL\) and \(RLRRLL\) can be obtained from each other by switching the places of the “peak” \(RL\) and “valley” \(LR\). The higher dashed line indicates the level where the product of these words starts and ends. Alternatively, they can be obtained from each other by switching the places of the “high peak” \(RRLL\) and the “low peak” \(RL\). The lower dashed line indicates the level where the product of these words starts and ends.
Definition (sequence of swaps):Let \(X\) and \(Y\) be words. Assume that \(Z_{1},Z_{2},\ldots,Z_{k}\) are words with \(Z_{1}=X\) and \(Z_{k}=Y\). If \(Z_{i}\) and \(Z_{i+1}\) are related by a swap for \(1\leq i\leq k-1\), then the words \(X\) and \(Y\) are said to have a _sequence of swaps_ between them. (Note that the situation is symmetric in the sense that if \(X\) and \(Y\) have a sequence of swaps between them, then so do \(Y\) and \(X\).)
**Lemma 3.1**.: Let \(I\) be any set, and let \(F_{i}\) and \(G_{i}\) be balanced words for all \(i\in I\). Let \(X\) and \(Y\) be any two words. The following are equivalent:
* \(X-Y\) is in the ideal \(\mathcal{K}\) generated by \(\{F_{i}G_{i}-G_{i}F_{i}\}_{i\in I}\);
* there is a sequence of swaps between the words \(X\) and \(Y\), where every swap is of type \((F_{i},G_{i})\) for some \(i\in I\).
Proof.: (i) \(\implies\) (ii): The assumption \(X-Y\in\mathcal{K}\) implies that \(X-Y=\sum_{j=1}^{m}\alpha_{j}W_{j,1}(F_{j}G_{j}-G_{j}F_{j})W_{j,2}\), where \(W_{j,1}\) and \(W_{j,2}\) are words and \(\alpha_{j}\in\mathbb{C}\) are nonzero scalars; for each \(j=1,\ldots,m\) there exists \(i\in I\) so that \((F_{j},G_{j})=(F_{i},G_{i})\). The term \(W_{j,1}(F_{j}G_{j}-G_{j}F_{j})W_{j,2}\) involves the words \(X_{j}=W_{j,1}F_{j}G_{j}W_{j,2}\) and \(Y_{j}=W_{j,1}G_{j}F_{j}W_{j,2}\), which are related by a swap of type \((F_{j},G_{j})\). With the simplified notation,
\[X-Y=\sum_{j=1}^{m}\alpha_{j}(X_{j}-Y_{j}). \tag{1}\]
Using the equation (1), define a graph \(\Gamma\) as follows: the vertices of \(\Gamma\) are all the words appearing in the terms on the right hand side, that is, the vertex set is \(\{W\colon W=X_{j}\text{ or }W=Y_{j}\text{ for some }j\}\). Two vertices are adjacent if there is an index \(j\) so that one of the words is equal to \(X_{j}\) and the other one is \(Y_{j}\). Note that the equation (1), together with the fact that distinct words in \(\mathcal{A}\) are linearly independent, implies that both \(X\) and \(Y\) are vertices.
If it can be shown that \(X\) and \(Y\) are in the same connected component, then there is a sequence of swaps between \(X\) and \(Y\). This is because each edge comes from a pair of words related by a swap, as mentioned above. For the argument, it is convenient to make \(\Gamma\) into a weighted graph. Impose a weighting on the vertices of \(\Gamma\) as follows: for a vertex \(W\), the weight of \(W\) is the coefficient of \(W\) on the right hand side of (1), when the sum is distributed. Because of that same equation, the vertex \(X\) has weight \(1\), the vertex \(Y\) has weight \(-1\), and all other vertices have weight \(0\).
For each \(j\), the vertices \(X_{j}\) and \(Y_{j}\) are in the same connected component, by the definition of \(\Gamma\). Therefore the sum on the right hand side of (1) can be separated into sums over each connected component, and this implies that the sum of weights over any connected component is zero. Now, if \(Y\) is not in the connected component of \(X\), then the sum of weights of this component is \(1\), which is a contradiction. Therefore \(X\) and \(Y\) are in the same component and so there is a sequence of swaps between \(X\) and \(Y\).
(ii) \(\implies\) (i): It needs to be shown that \(X-Y\in\mathcal{K}\), and this will be done by induction on the number of swaps in the sequence of swaps between \(X\) and \(Y\). First assume that the sequence consists of a single swap of type \((F,G)\), where \(F=F_{i}\) and \(G=G_{i}\) for some \(i\). This means that there are (possibly empty) words \(W_{1}\) and \(W_{2}\) so that \(X=W_{1}FGW_{2}\) and \(Y=W_{1}GFW_{2}\). Then \(X-Y=W_{1}(FG-GF)W_{2}\in\mathcal{K}\).
Now assume that there is a sequence of \(n\) swaps between \(X\) and \(Y\), with \(n\geq 2\). Then there exists a word \(W\) so that \(X\) and \(W\) have a sequence of \(n-1\) swaps between them and \(W\) and \(Y\) are related by a single swap. By induction, both \(X-W\) and \(W-Y\) are in \(\mathcal{K}\), and therefore so is \(X-Y=(X-W)+(W-Y)\).
Note:The following proposition provides a powerful characterization for the equivalence of words, and it will be used in proofs throughout the article.
**Proposition 3.2**.: Let \(X\) and \(Y\) be two words. Then \(X\sim Y\) if and only if there is a sequence of swaps between \(X\) and \(Y\).
Proof.: This is a special case of Lemma 3.1, where the generating set is \(S\).
**Corollary 3.3**.: Let \(\mathcal{E}\) denote an equivalence class of words. Then
1. \(\mathcal{E}\) consists of words of equal length, and
2. \(\mathcal{E}\) is finite.
Proof.: Follows from Proposition 3.2.
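Proposition 3.2 and Corollary 3.3 suggest a simple computational check (a sketch, not part of the paper): the equivalence class of a word can be enumerated by breadth-first search over all swaps of adjacent nonempty balanced subwords.

```python
# Enumerate the equivalence class of a word under swaps of adjacent balanced subwords.
from collections import deque

def is_balanced(w):
    return len(w) > 0 and w.count("L") == w.count("R")

def neighbors(w):
    """All words obtained from w by a single swap of some type (F, G)."""
    out, n = set(), len(w)
    for a in range(n):
        for b in range(a + 2, n + 1):            # F = w[a:b]
            if not is_balanced(w[a:b]):
                continue
            for c in range(b + 2, n + 1):        # G = w[b:c]
                if is_balanced(w[b:c]):
                    out.add(w[:a] + w[b:c] + w[a:b] + w[c:])
    return out

def equivalence_class(w):
    seen, queue = {w}, deque([w])
    while queue:
        x = queue.popleft()
        for y in neighbors(x):
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return seen

cls = equivalence_class("RRLLRL")
print(sorted(cls))           # a finite class of words of the same length
print("RLRRLL" in cls)       # True, as in the example with Figure 4
```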
## 4 Prime words and elevation
This section introduces the concepts of prime words and elevation. One of the results in this section states that if the balanced words in the definition of \(S\) are replaced with prime words, then the resulting set \(S^{\prime}\) generates \(\mathcal{J}\). The set \(S^{\prime}\) is used as an intermediate step in finding a minimal generating set for \(\mathcal{J}\).
Definition (prime words):A word is called _prime_ if it is nonempty, balanced, and cannot be written as the product of two nonempty balanced words.
**Lemma 4.1**.: A nonempty balanced word \(a_{1}a_{2}\ldots a_{n}\) is prime if and only if the word \(a_{1}a_{2}\ldots a_{k}\) is not balanced for \(k=1,\ldots n-1\).
Proof.: Assume first that \(a_{1}a_{2}\ldots a_{n}\) is prime. If \(a_{1}a_{2}\ldots a_{k}\) is balanced for some \(k\) with \(1\leq k\leq n-1\), then the word \(a_{k+1}\ldots a_{n}\) is also balanced. This implies that \(a_{1}a_{2}\ldots a_{n}\) can be written as a product of two nonempty balanced words, which is a contradiction. For the converse direction, assume that the word \(a_{1}a_{2}\ldots a_{k}\) is not balanced for \(k=1,\ldots n-1\). Then \(a_{1}a_{2}\ldots a_{n}\) cannot be written as the product of two nonempty balanced words, so it is prime.
Example:The words \(RL\), \(LLRR\), and \(RRLRLRLL\) are prime, which can be checked using Lemma 4.1. On the other hand, the word \(RRLLRLRLLLRLRR\) is not prime, as it can be written as the product of \(RRLL\), \(RL\), \(RL\), and \(LLRLRR\), which are all primes. See Figure 5 for an illustration.
**Lemma 4.2**.: A nonempty balanced word \(W\) can be written uniquely as \(W=P_{1}P_{2}\cdots P_{k}\), where \(k\) is a positive integer and \(P_{i}\) is a prime word for \(1\leq i\leq k\).
Proof.: Clear.
Definition (prime factors):Referring to Lemma 4.2, the prime words \(P_{i}\) are called _prime factors_ of \(W\), and the _number of prime factors of_\(W\) refers to the number \(k\).
Figure 5: Illustration of the word \(RRLLRLRLLLRLRR=(RRLL)(RL)(RL)(LLRLRR)\). Each prime in the product starts and ends at the dotted line.
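Lemma 4.1 and Lemma 4.2 translate directly into short code. The sketch below (not from the paper) tests primality via the prefix criterion and factors a balanced word by cutting it whenever the running counts of \(R\) and \(L\) coincide.

```python
# Primality test (Lemma 4.1) and prime factorization (Lemma 4.2) of balanced words.
def is_balanced(w):
    return len(w) > 0 and w.count("L") == w.count("R")

def is_prime(w):
    """A nonempty balanced word is prime iff no proper nonempty prefix is balanced."""
    return is_balanced(w) and all(not is_balanced(w[:k]) for k in range(1, len(w)))

def prime_factors(w):
    """Cut a balanced word at every point where the counts of R and L coincide."""
    factors, start, diff = [], 0, 0
    for k, a in enumerate(w, start=1):
        diff += 1 if a == "R" else -1
        if diff == 0:
            factors.append(w[start:k])
            start = k
    return factors

print(is_prime("RL"), is_prime("LLRR"), is_prime("RRLRLRLL"))   # True True True
print(prime_factors("RRLLRLRLLLRLRR"))    # ['RRLL', 'RL', 'RL', 'LLRLRR'], as in Figure 5
```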
A new set:Define the set
\[S^{\prime}=\{PQ-QP\colon\ P\text{ and }Q\text{ are prime words}\}.\]
Note that \(S^{\prime}\subset S\).
**Lemma 4.3**.: The set \(S^{\prime}\) generates the ideal \(\mathcal{J}\).
Proof.: Let \(\mathcal{J}^{\prime}\) denote the ideal generated by \(S^{\prime}\); the goal is to prove that \(\mathcal{J}=\mathcal{J}^{\prime}\). The inclusion \(\mathcal{J}^{\prime}\subset\mathcal{J}\) follows from the fact that \(S^{\prime}\subset S\). For the reverse inclusion, it suffices to show that if \(F\) and \(G\) are two nonempty balanced words, then \(FG-GF\in\mathcal{J}^{\prime}\). Let \(p\) denote the number of prime factors in \(FG\). The proof is by induction on \(p\). As both \(F\) and \(G\) are nonempty, they both have at least one prime factor, which means that \(p\geq 2\). If \(p=2\), then \(F\) and \(G\) are both prime words and thus \(FG-GF\in S^{\prime}\subset\mathcal{J}^{\prime}\).
Assume that \(p>2\). Then either \(F\) or \(G\) is not prime; without loss of generality, it may be assumed that \(F\) is not prime. Thus \(F\) can be written as \(F_{1}F_{2}\) where both \(F_{1}\) and \(F_{2}\) are nonempty balanced words. Now
\[FG-GF =F_{1}F_{2}G-GF_{1}F_{2}\] \[=F_{1}F_{2}G-F_{1}GF_{2}+F_{1}GF_{2}-GF_{1}F_{2}\] \[=F_{1}(F_{2}G-GF_{2})+(F_{1}G-GF_{1})F_{2}.\]
Each of \(F_{1}\) and \(F_{2}\) has fewer prime factors than \(F\), so the number of prime factors in both \(F_{1}G\) and \(F_{2}G\) is less than \(p\). By induction, \(F_{i}G-GF_{i}\in\mathcal{J}^{\prime}\) for \(i=1,2\). Using this together with the calculation from above, it can be concluded that \(FG-GF\in\mathcal{J}^{\prime}\). This concludes the proof.
Definition (elevation):For a letter \(a\), assign a weight \(\overline{a}\) as follows: \(\overline{a}=1\) if \(a=R\) and \(\overline{a}=-1\) if \(a=L\). Let \(W=a_{1}a_{2}\ldots a_{n}\) be a balanced word, and let \(0\leq k\leq n\). The \(k^{\text{th}}\)_elevation of_\(W\) is denoted by \(e_{k}(W)\) and given by \(e_{k}(W)=\sum_{i=1}^{k}\overline{a_{i}}\). The values \(e_{k}(W)\) form the _elevation sequence_\(Q(W)=\{e_{k}(W)\}_{k=0}^{n}\). The underlying multiset of \(Q(W)\) is called the _elevation multiset of_\(W\), and it is denoted by \(E(W)\).
Note:The elevation sequence of any balanced word begins and ends with a zero.
First example:The balanced word \(W=RRLL\) has elevation sequence \(Q(W)=\{0,1,2,1,0\}\), and elevation multiset \(E(W)=\{0^{2},1^{2},2\}\). The exponent indicates how many times a number appears in the multiset.
Second example:The balanced word \(W=RRRLLRLLLLRRRL\) has elevation sequence
\[Q(W)=\{0,1,2,3,2,1,2,1,0,-1,-2,-1,0,1,0\}\]
and elevation multiset \(E(W)=\{-2,(-1)^{2},0^{4},1^{4},2^{3},3\}\). See Figure 6 for an illustration.
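The example can be reproduced in a few lines (a sketch, not part of the paper):

```python
# Elevation sequence and elevation multiset of a balanced word.
from collections import Counter
from itertools import accumulate

def elevation_sequence(w):
    return [0] + list(accumulate(1 if a == "R" else -1 for a in w))

W = "RRRLLRLLLLRRRL"
Q = elevation_sequence(W)
print(Q)           # [0, 1, 2, 3, 2, 1, 2, 1, 0, -1, -2, -1, 0, 1, 0]
print(Counter(Q))  # the multiset E(W): {-2: 1, -1: 2, 0: 4, 1: 4, 2: 3, 3: 1}
```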
**Lemma 4.4**.: If \(X\) and \(Y\) are balanced words with \(X\sim Y\), then \(E(X)=E(Y)\).
Proof.: By Proposition 3.2, there is a sequence of swaps between \(X\) and \(Y\). A swap does not change the elevation multiset but merely rearranges the entries in the elevation sequence. Therefore \(X\) and \(Y\) have the same elevation multiset.
Example:The words \(RRRLLRLL\) and \(RRLRRLLL\) are related by a swap of type \((RL,LR)\), so \(RRRLLRLL\sim RRLRRLLL\). The table below shows the elevation sequences for both words. The values that switch places in the swap are underlined. The words share the same elevation multiset. See Figure 7 for illustration of the two words.

\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(RRRLLRLL\) & 0 & 1 & 2 & \(\underline{3}\) & 2 & \(\underline{1}\) & 2 & 1 & 0 \\ \hline \(RRLRRLLL\) & 0 & 1 & 2 & \(\underline{1}\) & 2 & \(\underline{3}\) & 2 & 1 & 0 \\ \hline \end{tabular}
**Proposition 4.5**.: Let \(W\) be a balanced word with \(l(W)\geq 2\). The following are equivalent:
1. \(W\) is prime;
2. \(e_{k}(W)\neq 0\) for \(1\leq k\leq l(W)-1\).
Proof.: This is a reformulation of Lemma 4.1.
Figure 6: Illustration of the word \(W=RRRLLRLLLLRRRL\). The dashed lines indicate the different elevations.
Figure 7: The two words are related by a swap of type \((RL,LR)\). The dashed line is at elevation 2, where the subword \((RL)(LR)\) (in the first word) and \((LR)(RL)\) (in the second word) begins and ends.
**Corollary 4.6**.: Let \(P\) be a prime word. Then one of the following holds:
1. \(e_{k}(P)>0\) for \(1\leq k\leq l(P)-1\);
2. \(e_{k}(P)<0\) for \(1\leq k\leq l(P)-1\).
Proof.: Elevations are integers, and adjacent elevations \(e_{k}(P)\) and \(e_{k+1}(P)\) always differ by \(1\). If \(e_{k}(P)<0\) and \(e_{m}(P)>0\) for some \(k,m\), then \(e_{l}(P)=0\) for some \(l\) between \(k\) and \(m\). This contradicts Proposition 4.5.
Definition (upper and lower primes):Let \(P\) be a prime word. Referring to Corollary 4.6, if \(P\) satisfies (i), then \(P\) is called an _upper_ prime. Similarly, if \(P\) satisfies (ii), then \(P\) is called a _lower_ prime. See Figure 8 for an illustrated example of upper and lower primes.
Note:An upper prime \(P\) starts with the letter \(R\) and ends with the letter \(L\). Moreover, \(P\) can be written as \(RZL\) where \(Z\) is a (possibly empty) product of upper primes. See Figure 9.
Figure 8: Illustration of the upper prime \(RRLRRRLLLL\), and the lower prime \(LLLRLRLRLRRR\).
Figure 9: Visualization of the upper prime \(P=RRLRRLLL\), and how it can be written as \(RZL\), where \(Z=(RL)(RRLL)\). The words \(RL\) and \(RRLL\) are upper primes.
## 5 A minimal generating set
This section focuses on finding a minimal subset of \(S\) that generates \(\mathcal{J}\). By Lemma 4.3, the subset \(S^{\prime}\subset S\) generates \(\mathcal{J}\). In this section, two more subsets, \(S^{\prime\prime}\) and \(S^{\prime\prime\prime}\), are defined, with \(S^{\prime\prime\prime}\subset S^{\prime\prime}\subset S^{\prime}\subset S\). The main result states that \(S^{\prime\prime\prime}\) is a minimal subset of \(S\) that generates \(\mathcal{J}\). Because some choices are made when defining \(S^{\prime\prime\prime}\), this result actually gives a whole family of minimal subsets of \(S\) that generate \(\mathcal{J}\). Showing that \(S^{\prime\prime}\) generates \(\mathcal{J}\) serves as an intermediate result towards the main goal.
**Lemma 5.1**.: Let \(P\) and \(Q\) be upper primes. Then \(PQ-QP\) is in the ideal generated by
\[\{U(LR)-(LR)U\colon\ U\text{ is an upper prime}\}.\]
Proof.: By Lemma 3.1, it is enough to show that there is a sequence of swaps between \(PQ\) and \(QP\) where every swap is of type \((U,LR)\) for some upper prime \(U\). In this proof, these kinds of swaps are called "upper swaps". The proof is an induction on \(l(PQ)\). As \(P\) and \(Q\) are prime words, both are nonempty balanced words so both have length \(\geq 2\). Therefore the smallest possible case is \(l(PQ)=4\), with \(P=Q=RL\). In this case \(PQ=QP\) so the claim is true because no swaps are needed.
Next, assume that \(l(PQ)\geq 6\). There exist nonnegative integers \(p,q\) and upper primes \(P_{1},\ldots,P_{p}\) and \(Q_{1},\cdots Q_{q}\) so that \(P=RP_{1}\cdots P_{p}L\) and \(Q=RQ_{1}\ldots Q_{q}L\). The diagram below describes how the sequence of swaps between \(PQ\) and \(QP\) can be found. Each arrow in the diagram represents a sequence of swaps. The top arrow involves swaps of type \((Q_{i},LR)\), and the bottom arrow swaps of type \((P_{i},LR)\). Both of these are upper swaps.
The middle arrow involves swaps of type \((P_{i},Q_{j})\); notice that \(l(P_{i}Q_{j})\leq n-4\) for any \(i\in\{1,\ldots,p\}\) and \(j\in\{1,\ldots,q\}\). Therefore, by induction, there exists a sequence of swaps between \(P_{i}Q_{j}\) and \(Q_{j}P_{i}\) where every swap is an upper swap. This means that each swap of type \((P_{i},Q_{j})\) can be replaced by a sequence of upper swaps.
\[PQ=RP_{1}\ldots P_{p}(LR)Q_{1}\ldots Q_{q}L\] \[\downarrow\] \[RP_{1}\ldots P_{p}Q_{1}\ldots Q_{q}(LR)L\] \[\downarrow\] \[RQ_{1}\ldots Q_{q}P_{1}\ldots P_{p}(LR)L\] \[\downarrow\] \[RQ_{1}\ldots Q_{q}(LR)P_{1}\ldots P_{p}L=QP\]
**Lemma 5.2**.: Let \(P\) and \(Q\) be lower primes. Then \(PQ-QP\) is in the ideal generated by
\[\{(RL)D-D(RL)\colon\ D\text{ is a lower prime}\}.\]
Proof.: Similar to Lemma 5.1.
Yet another generating set:Define
\[S^{\prime\prime}=\{UD-DU\colon\ U\text{ is an upper prime, }D\text{ is a lower prime}\}.\]
Note that \(S^{\prime\prime}\subset S^{\prime}\subset S\).
**Proposition 5.3**.: The set \(S^{\prime\prime}\) generates \(\mathcal{J}\).
Proof.: Let \(\mathcal{J}^{\prime\prime}\) denote the ideal generated by \(S^{\prime\prime}\). The goal is to show that \(\mathcal{J}=\mathcal{J}^{\prime\prime}\). The inclusion \(\mathcal{J}^{\prime\prime}\subset\mathcal{J}\) follows from the fact that \(S^{\prime\prime}\subset S\). By Lemma 4.3, the set \(S^{\prime}\) generates \(\mathcal{J}\), so for the reverse inclusion, it is enough to show that \(S^{\prime}\subset\mathcal{J}^{\prime\prime}\). This translates to the following claim: if \(P\) and \(Q\) are prime words, then \(PQ-QP\in\mathcal{J}^{\prime\prime}\). There are four cases, depending on whether \(P\) and \(Q\) are upper or lower primes. The table below shows why \(PQ-QP\in\mathcal{J}^{\prime\prime}\) in each of the cases.
\begin{tabular}{|l|l|l|} \hline & \(Q\) upper prime & \(Q\) lower prime \\ \hline \(P\) upper prime & By Lemma 5.1, \(PQ-QP\) is in the ideal generated by elements of the type \(U(LR)-(LR)U\) where \(U\) is an upper prime, and these are all elements of \(S^{\prime\prime}\). & \(PQ-QP\in S^{\prime\prime}\subset\mathcal{J}^{\prime\prime}\). \\ \hline \(P\) lower prime & Combining \(PQ-QP=-(QP-PQ)\) and the fact that \(QP-PQ\in S^{\prime\prime}\) gives \(PQ-QP\in\mathcal{J}^{\prime\prime}\). & By Lemma 5.2, \(PQ-QP\) is in the ideal generated by elements of the type \((RL)D-D(RL)\) where \(D\) is a lower prime, and these are all elements of \(S^{\prime\prime}\). \\ \hline \end{tabular}
**Corollary 5.4**.: Let \(X\) and \(Y\) be any words. The following are equivalent:
1. \(X\sim Y\).
2. There is a sequence of swaps between \(X\) and \(Y\), where every swap is of type \((U,D)\) for some upper prime \(U\) and a lower prime \(D\).
Proof.: Use Lemma 3.1 and Proposition 3.2, together with the fact that \(S^{\prime\prime}\) generates \(\mathcal{J}\) from Proposition 5.3.
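Corollary 5.4 can be verified by brute force on short words (a sketch, not part of the paper): restricting to swaps that exchange an adjacent upper prime and lower prime still connects equivalent words, while inequivalent words stay disconnected.

```python
# Connectivity under swaps of type (upper prime, lower prime) only.
from collections import deque

def is_balanced(w):
    return len(w) > 0 and w.count("L") == w.count("R")

def is_prime(w):
    return is_balanced(w) and all(not is_balanced(w[:k]) for k in range(1, len(w)))

def is_upper(w):
    # An upper prime starts with R; a lower prime starts with L.
    return is_prime(w) and w[0] == "R"

def mixed_swap_neighbors(w):
    """Swaps exchanging adjacent primes of opposite type (one upper, one lower)."""
    out, n = set(), len(w)
    for a in range(n):
        for b in range(a + 2, n + 1):
            F = w[a:b]
            if not is_prime(F):
                continue
            for c in range(b + 2, n + 1):
                G = w[b:c]
                if is_prime(G) and is_upper(F) != is_upper(G):
                    out.add(w[:a] + G + F + w[c:])
    return out

def connected(x, y):
    seen, queue = {x}, deque([x])
    while queue:
        z = queue.popleft()
        if z == y:
            return True
        for t in mixed_swap_neighbors(z):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

P, Q = "RRLL", "RL"                   # two upper primes
print(connected(P + Q, Q + P))        # True: mixed swaps alone suffice, as Proposition 5.3 predicts
print(connected("RRLL", "LLRR"))      # False: these words are not equivalent
```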
**Lemma 5.5**.: Let \(U\) be an upper prime, \(D\) a lower prime, and \(W\) a balanced word. If \(UD\) and \(W\) are related by a swap, then exactly one of the following holds:
1. \(W=DU\);
2. \(W=U^{\prime}D\) where \(U\) and \(U^{\prime}\) are related by a swap;
3. \(W=UD^{\prime}\) where \(D\) and \(D^{\prime}\) are related by a swap.
Proof.: Because \(UD\) and \(W\) are related by a swap, there are nonempty balanced words \(F,G\) and words \(W_{1},W_{2}\) so that \(UD=W_{1}FGW_{2}\) and \(W=W_{1}GFW_{2}\). Note that the product \(FG\) starts and ends at the same elevation of \(UD\). (In other words, if \(a=l(W_{1})\) and \(b=l(FG)\), then \(e_{a}(UD)=e_{a+b}(UD)\).) The table below shows how the elevation \(e_{k}(UD)\) behaves for \(0\leq k\leq l(UD)\).
\begin{tabular}{|c|c|c|c|c|c|} \hline \(k\) & \(0\) & \(1,\ldots,l(U)-1\) & \(l(U)\) & \(l(U)+1,\ldots,l(U)+l(D)-1\) & \(l(U)+l(D)=l(UD)\) \\ \hline \(e_{k}(UD)\) & \(0\) & \(>0\) & \(0\) & \(<0\) & \(0\) \\ \hline \end{tabular}
If \(FG\) starts at elevation zero, then both \(F\) and \(G\) start and end at elevation zero. In this case, the only possibility is \(U=F\) and \(D=G\), because elevation zero appears exactly 3 times in \(UD\). This is case (i). If \(FG\) starts at positive elevation, then \(FG\) is a subword of \(U\), which gives case (ii). If \(FG\) starts at negative elevation, then \(FG\) is a subword of \(D\), which gives case (iii).
Note:The picture below illustrates the table in the proof of Lemma 5.5.
Figure 11: Illustration of \(UD=(RRRLLRLL)(LLRLRLRR)\). The dashed line represents elevation zero. Elevation is positive inside \(U\), negative inside \(D\), and zero for exactly three indices; the zero elevation locations are marked with a black dot.
**Lemma 5.6**.: The equivalence class of any upper prime consists entirely of upper primes of the same length, and the equivalence class of any lower prime consists entirely of lower primes of the same length.
Proof.: Let \(U\) be an upper prime, let \(n=l(U)\), and let \(W\) be a word such that \(U\sim W\). The goal is to show that \(W\) is an upper prime of length \(n\). By Proposition 3.2, there is a sequence of swaps between \(U\) and \(W\), so \(W\) is a balanced word of length \(n\) and thus \(e_{0}(W)=e_{n}(W)=0\). Zero appears in \(E(U)\) exactly twice, and \(E(U)=E(W)\) by Lemma 4.4. Therefore \(e_{k}(W)>0\) for all \(k\) with \(1\leq k\leq n-1\), which means that \(W\) is also an upper prime. This proves the statement about upper primes, and the statement about lower primes can be proven similarly.
Note:The following proposition gives an equivalent condition for a subset of \(S\) to generate \(\mathcal{J}\). This will be a key ingredient in the proof of the main result.
**Proposition 5.7**.: Let \(S^{\star}\subset S\). Then \(S^{\star}\) generates \(\mathcal{J}\) if and only if for any upper prime \(U\) and lower prime \(D\), there exist words \(U^{\prime}\sim U\) and \(D^{\prime}\sim D\) such that either \(U^{\prime}D^{\prime}-D^{\prime}U^{\prime}\in S^{\star}\) or \(D^{\prime}U^{\prime}-U^{\prime}D^{\prime}\in S^{\star}\).
Proof.: As \(S^{\star}\subset S\), there is a set \(I\) and balanced words \(F_{i},G_{i}\) so that \(S^{\star}=\{F_{i}G_{i}-G_{i}F_{i}\}_{i\in I}\). For the purpose of this proof, a swap of type \((F_{i},G_{i})\) for some \(i\) is called an \(S^{\star}\)-swap.
Assume first that \(S^{\star}\) generates \(\mathcal{J}\). If the second condition in the statement does not hold, then there exist an upper prime \(U\) and a lower prime \(D\) so that for any \(U^{\prime}\sim U\) and \(D^{\prime}\sim D\), the elements \(U^{\prime}D^{\prime}-D^{\prime}U^{\prime}\) and \(D^{\prime}U^{\prime}-U^{\prime}D^{\prime}\) are not in \(S^{\star}\). In other words, swaps of type \((U^{\prime},D^{\prime})\) are not \(S^{\star}\)-swaps. Because \(UD-DU\in\mathcal{J}\), there is a sequence of \(S^{\star}\)-swaps between \(UD\) and \(DU\), by Lemma 3.1.
Let \(\{U_{j}\}_{j=1}^{m}\) and \(\{D_{k}\}_{k=1}^{n}\) be the equivalence classes of \(U\) and \(D\), respectively. Let \(X\) be a word such that there is a sequence of \(S^{\star}\)-swaps between \(UD\) and \(X\). By repeated application of Lemma 5.5, \(X\) must be of the form \(U_{j}D_{k}\) for some \(j\) and \(k\); since swaps of type \((U_{j},D_{k})\) are not \(S^{\star}\)-swaps, \(X\) can never be of the form \(D_{k}U_{j}\). In particular, \(X\) cannot be \(DU\), which is a contradiction.
Now assume that for any upper prime \(U\) and lower prime \(D\), there exist words \(U^{\prime}\sim U\) and \(D^{\prime}\sim D\) such that either \(U^{\prime}D^{\prime}-D^{\prime}U^{\prime}\in S^{\star}\) or \(D^{\prime}U^{\prime}-U^{\prime}D^{\prime}\in S^{\star}\), or in other words, the swap of the type \((U^{\prime},D^{\prime})\) is an \(S^{\star}\)-swap. By Proposition 5.3, \(S^{\prime\prime}\) generates \(\mathcal{J}\), so for showing that \(S^{\star}\) generates \(\mathcal{J}\) it is enough to show that every element of \(S^{\prime\prime}\) is in the ideal generated by \(S^{\star}\). Using Lemma 3.1, this translates to showing that for any upper prime \(U\) and lower prime \(D\), there is a sequence of \(S^{\star}\)-swaps between \(UD\) and \(DU\).
The proof is by induction on \(l(UD)\). The smallest possible case is \(l(UD)=4\); this happens only for \(U=RL\) and \(D=LR\). Both of these words are the only elements of their equivalence classes, so the assumption implies that the swap of type \((U,D)\) is an \(S^{\star}\)-swap and by itself forms the desired sequence of \(S^{\star}\)-swaps between \(UD\) and \(DU\).
Assume now that \(l(UD)>4\). By the assumption, there exist words \(U^{\prime}\sim U\) and \(D^{\prime}\sim D\) such that the swap of type \((U^{\prime},D^{\prime})\) is an \(S^{\star}\)-swap. By Corollary 5.4, there is a sequence of swaps between \(U\) and \(U^{\prime}\) where each swap is of type \((U^{\prime\prime},D^{\prime\prime})\) for some upper prime \(U^{\prime\prime}\) and lower prime \(D^{\prime\prime}\). Now \(l(U^{\prime\prime}D^{\prime\prime})\leq l(U)\leq l(UD)-2\), so by induction, there is a sequence of \(S^{\star}\)-swaps between \(U^{\prime\prime}D^{\prime\prime}\) and \(D^{\prime\prime}U^{\prime\prime}\). Combining the above observations gives a sequence of \(S^{\star}\)-swaps between \(U\) and \(U^{\prime}\). Similarly, there is a sequence of \(S^{\star}\)-swaps between \(D\) and \(D^{\prime}\).
Now there is a sequence of \(S^{\star}\)-swaps between \(UD\) and \(DU\) as follows, with each arrow representing a sequence of \(S^{\star}\)-swaps: \(UD\leftrightarrow U^{\prime}D^{\prime}\leftrightarrow D^{\prime}U^{\prime} \leftrightarrow DU\). This concludes the proof.
Representatives:Let \(\Upsilon\) denote the set of equivalence classes of upper primes, and \(\Lambda\) the set of equivalence classes of lower primes. In order to state the main result, a representative will be chosen for each equivalence class. For each \(\upsilon\in\Upsilon\), denote the chosen representative by \(U_{\upsilon}\), and for each \(\lambda\in\Lambda\), denote the chosen representative by \(D_{\lambda}\).
Minimal generating set:Define the set
\[S^{\prime\prime\prime}=\big{\{}U_{\upsilon}D_{\lambda}-D_{\lambda}U_{\upsilon }\colon\,\upsilon\in\Upsilon,\lambda\in\Lambda\big{\}}.\]
Note that \(S^{\prime\prime\prime}\) depends on the chosen representatives.
**Theorem 5.8**.: \(S^{\prime\prime\prime}\) is a minimal subset of \(S\) that generates \(\mathcal{J}\).
Proof.: First, clearly \(S^{\prime\prime\prime}\subset S\). The way that \(S^{\prime\prime\prime}\) is defined guarantees that the condition in Proposition 5.7 is satisfied; by this proposition, \(S^{\prime\prime\prime}\) generates \(\mathcal{J}\). By the same proposition, no relation \(U_{\upsilon}D_{\lambda}-D_{\lambda}U_{\upsilon}\) can be removed from \(S^{\prime\prime\prime}\), as \(S^{\prime\prime\prime}\) would then no longer generate \(\mathcal{J}\). This concludes the proof.
Choosing representatives
As was shown in Theorem 5.8, the set \(S^{\prime\prime\prime}\) is a minimal set of generators for \(\mathcal{J}\). Note that \(S^{\prime\prime\prime}\) is not unique, because it depends on the choice of representatives for the equivalence classes of prime words; therefore the result actually gives a family of minimal sets of generators. This section focuses on one possible choice of representatives: the minimal word with respect to alphabetical order. The concept of a reduced word is introduced, and it will be shown that every equivalence class of balanced words contains a unique reduced word which coincides with the minimal word. The reduced word can be easily found using an algorithm, and this gives a convenient way of finding the minimal word of the equivalence class of a given balanced word, without having to list all the words in the equivalence class.
Definition (alphabetical order):The set of words is linearly ordered by alphabetical order. Let \(X\) and \(Y\) be any words. The notation \(X<Y\) means that \(X\) comes before \(Y\) in alphabetical order; for example, \(LLRR<RRLL\). Some other examples are \(LL<LLR\) and \(LLL<LR\).
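Note:Since the letter \(L\) precedes \(R\), the built-in lexicographic comparison of strings in a language such as Python agrees with this alphabetical order (an observation added here for later experiments, not from the original text); for instance:

```python
# String comparison realizes the alphabetical order on words over {L, R}.
assert "LLRR" < "RRLL"
assert "LL" < "LLR" and "LLL" < "LR"
```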
Minimal representatives:By Corollary 3.3, each equivalence class of words is finite. Therefore, each equivalence class has a minimal word with respect to alphabetical order. Moreover, because alphabetical order is a linear order, this minimal word is unique and it will thus be referred to as _the minimal word_ of the equivalence class. In this section, the representatives \(U_{\upsilon}\) and \(D_{\lambda}\) needed to define \(S^{\prime\prime\prime}\) are chosen to be the minimal words of their equivalence classes. (See definition of \(S^{\prime\prime\prime}\) right before Theorem 5.8.)
Definition (reduced words):A word \(W\) is called _reduced_, if it does not contain a subword of the type \(UD\) where \(U\) is an upper prime and \(D\) is a lower prime.
**Lemma 6.1**.: Let \(U\) be an upper prime and \(D\) a lower prime. Let \(W_{1}\) and \(W_{2}\) be any words. Then \(W_{1}DUW_{2}\sim W_{1}UDW_{2}\) and \(W_{1}DUW_{2}<W_{1}UDW_{2}\).
Proof.: The words are related by a swap of type \((U,D)\), which implies equivalence by Proposition 3.2. The word \(U\) starts with the letter \(R\) and \(D\) with the letter \(L\), so \(DU<UD\) and therefore also \(W_{1}DUW_{2}<W_{1}UDW_{2}\).
**Lemma 6.2**.: The minimal word of an equivalence class is reduced.
Proof.: If the minimal word is not reduced, then it has a subword \(UD\) where \(U\) is an upper prime and \(D\) is a lower prime. Then a swap of type \((U,D)\) produces a smaller word in the same equivalence class, by Lemma 6.1. This is a contradiction, so the minimal word is reduced.
Unique reduced word:The next results are preparation for Proposition 6.6, which states that every equivalence class of balanced words contains a unique reduced word.
**Lemma 6.3**.: Let \(W\) be a balanced word, and assume that \(e_{i}(W)\geq e_{j}(W)\) for some \(i<j\). Then there exists an integer \(e\in[e_{j}(W),e_{i}(W)]\), and integers \(i^{\prime}\in[0,i]\) and \(j^{\prime}\in[j,l(W)]\) with \(e_{i^{\prime}}(W)=e_{j^{\prime}}(W)=e\). (The bracket notation refers to a closed interval.)
Proof.: There are three cases:
1. \(e_{i}(W)\geq e_{j}(W)\geq 0\): In this case, choose \(e=e_{j}(W)\) and \(j^{\prime}=j\). As \(e_{0}(W)=0\leq e\leq e_{i}(W)\), there exists \(i^{\prime}\in[0,i]\) so that \(e_{i^{\prime}}(W)=e=e_{j^{\prime}}(W)\).
2. \(e_{i}(W)>0>e_{j}(W)\): In this case, choose \(e=0\), \(i^{\prime}=0\), and \(j^{\prime}=l(W)\).
3. \(0\geq e_{i}(W)\geq e_{j}(W)\): Similar to (i), with \(e=e_{i}(W)\) and \(i^{\prime}=i\).
Example:As an example of Lemma 6.3, consider the balanced word \(W=RRRLLRLL\), and let \(i=3\) and \(j=5\). For this word, \(e_{3}(W)=3\geq 1=e_{5}(W)\), so the lemma implies that some elevation between \(1\) and \(3\) is attained both at an index \(i^{\prime}\leq i\) and at an index \(j^{\prime}\geq j\). For this example, this elevation is either \(e=2\) (with \(i^{\prime}=2\) and \(j^{\prime}=6\)), or \(e=1\) (with \(i^{\prime}=1\) and \(j^{\prime}\in\{5,7\}\)). See illustration in Figure 12.
**Proposition 6.4**.: Let \(W\) be a balanced word. The following are equivalent:
1. \(W\) is reduced;
2. \(W\) does not contain a subword of the type \(RL^{n}R\) where \(n\geq 2\);
3. \(W=L^{a}(RL)^{k_{1}}R(RL)^{k_{2}}R\cdots(RL)^{k_{m}}RL^{b}\), where \(a,b,m\geq 0\), with \(a+b=m\), and \(k_{i}\geq 0\) for \(1\leq i\leq m\).
Proof.: (i) \(\implies\) (ii): Assume, on the contrary, that \(W\) contains a subword \(RL^{n}R\) for some \(n\geq 2\); this means that \(W=W_{1}RL^{n}RW_{2}\) for some words \(W_{1},W_{2}\). Let \(i=l(W_{1})\), and \(j=i+n+2\) (indices \(i\) and \(j\) are the starting and ending locations of the subword \(RL^{n}R\)). Now \(e_{i}(W)=e_{j}(W)+n-2\geq e_{j}(W)\), because \(n\geq 2\). By Lemma 6.3, there exists an integer \(e\) with \(e_{j}(W)\leq e\leq e_{i}(W)\), together with \(i^{\prime}\leq i\) and \(j^{\prime}\geq j\) so that \(e_{i^{\prime}}(W)=e\) and \(e_{j^{\prime}}(W)=e\). If there are multiple choices for \(i^{\prime}\) and \(j^{\prime}\), then the largest possible \(i^{\prime}\) and smallest possible \(j^{\prime}\) are chosen. But now the subword of \(W\) starting at index \(i^{\prime}\) and ending at \(j^{\prime}\) is a product \(UD\) where \(U\) is an upper prime and \(D\) is a lower prime, which contradicts (i). (See example picture in Figure 13.)
Figure 12: An example of Lemma 6.3. The black circles indicate the locations \(i=3\) and \(j=5\), and the horizontal lines the elevations \(e_{3}(W)=3\) and \(e_{5}(W)=1\). Graphically, the statement of the lemma says that it is possible to draw a horizontal line between the two given horizontal lines which intersects the graph both left from \(i\) and right from \(j\); here the line can be drawn at elevation \(1\) or \(2\).
(ii) \(\implies\) (iii): Write \(W=L^{a}W^{\prime}L^{b}\) where \(a,b\geq 0\) and there are no letters \(L\) at the beginning or end of \(W^{\prime}\). If \(W^{\prime}\) is empty, then \(W\) is empty as well, because if there are no letters \(R\) then there are no letters \(L\) either, as \(W\) is balanced. In this case \(W\) is of the form (iii) with \(a=b=m=0\). Assume \(W^{\prime}\) is not empty; then \(W^{\prime}\) starts and ends with a letter \(R\). (The special case \(W^{\prime}=R\) can be dealt with in the same way as the general case with the two \(R\)'s different.)
In the word \(W^{\prime}\), any \(L\) has an \(R\) left to it, because two or more successive letters \(L\) would produce a subword of type \(RL^{n}R\) with \(n\geq 2\). This means that \(W^{\prime}\) is a product of factors each of which is either \(RL\) or \(R\). By inserting the empty subword in the form \((RL)^{0}\) between successive letters \(R\) (and on the left side of the leftmost \(R\) if needed), it is possible to write \(W^{\prime}=(RL)^{k_{1}}R\cdots(RL)^{k_{m}}R\), for some \(m\geq 1\), with \(k_{i}\geq 0\) for \(1\leq i\leq m\). It has been shown that \(W=L^{a}(RL)^{k_{1}}R(RL)^{k_{2}}R\cdots(RL)^{k_{m}}RL^{b}\), with \(a,b,m\geq 0\), and \(k_{i}\geq 0\) for \(1\leq i\leq m\). Finally, each factor \((RL)^{k_{i}}R\) has one more \(R\) than \(L\), and therefore \(a+b=m\), because \(W\) is balanced.
(iii) \(\implies\) (i): If \(W\) is not reduced, it contains a subword \(UD\) where \(U\) is an upper prime and \(D\) is a lower prime. By definition, the words \(U\) and \(D\) both have length at least 2, so \(U\) ends with an \(L\) and \(D\) starts with an \(L\), and both \(U\) and \(D\) contain at least one \(R\). This is a contradiction, as words of form (iii) do not contain adjacent letters \(L\) in the middle.
Example:The table below contains four reduced words, together with the corresponding parameters \(a\), \(b\), \(m\), and \(k_{1},\ldots,k_{m}\) from Proposition 6.4 (iii).
\begin{tabular}{|c|c|c|c|c|} \hline Reduced word & \(a\) & \(b\) & \(m\) & \((k_{1},\ldots,k_{m})\) \\ \hline \(LRRL\) & 1 & 1 & 2 & \((0,0)\) \\ \hline \(RLRRLL\) & 0 & 2 & 2 & \((1,0)\) \\ \hline \(LLRLRRRLRRLL\) & 2 & 2 & 4 & \((1,0,1,0)\) \\ \hline \(LRRRLRLRLL\) & 1 & 2 & 3 & \((0,0,2)\) \\ \hline \end{tabular}
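Note:To make the parametrization of Proposition 6.4 (iii) concrete, the small Python sketch below (an illustration, not from the original text) assembles the reduced word from the parameters \(a\), \(b\) and \((k_{1},\ldots,k_{m})\); the assertions reproduce the rows of the table above.

```python
# Build L^a (RL)^{k_1} R (RL)^{k_2} R ... (RL)^{k_m} R L^b from its parameters.
def reduced_word(a, b, ks):
    assert a + b == len(ks), "Proposition 6.4 (iii) requires a + b = m"
    middle = "".join("RL" * k + "R" for k in ks)
    return "L" * a + middle + "L" * b

assert reduced_word(1, 1, (0, 0)) == "LRRL"
assert reduced_word(0, 2, (1, 0)) == "RLRRLL"
assert reduced_word(2, 2, (1, 0, 1, 0)) == "LLRLRRRLRRLL"
assert reduced_word(1, 2, (0, 0, 2)) == "LRRRLRLRLL"
```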
Figure 13: Illustration of Proposition 6.4 (i) \(\implies\) (ii), with the example word \(RRRRRLLLLRRLLL\), which has a subword \(RL^{4}R\). In this example \(i=4\) and \(j=10\), and the subword \(RL^{4}R\) is drawn in bold. The black circles indicate the locations \(i^{\prime}=3\) and \(j^{\prime}=11\), which are the starting and ending points for \(UD=(RRLL)(LLRR)\). (Note that the choices \(e=2\), \(i^{\prime}=2\), and \(j^{\prime}=10\) would be valid as well.)
**Lemma 6.5**.: Let \(W\) be a balanced word as in Proposition 6.4 (iii). Then the elevation multiset of \(W\) is
\[E(W)=\{0,-1,\ldots,-a\}\cup\{0,1,\ldots,b\}\cup\{(i-a)^{\mu_{i}}\}_{i=0}^{m},\]
where the multiplicity \(\mu_{i}\) of the value \(i-a\) is given by
\[\mu_{i}=\begin{cases}k_{1},&i=0\\ k_{i}+1+k_{i+1},&1\leq i\leq m-1\\ k_{m},&i=m.\end{cases}\]
Proof.: The parts \(L^{a}\) and \(L^{b}\) of \(W\) contribute the multisets \(\{0,-1,\ldots,-a\}\) and \(\{0,1,\ldots,b\}\), respectively. It remains to show that the middle part \((RL)^{k_{1}}R(RL)^{k_{2}}R\cdots(RL)^{k_{m}}R\), excluding its endpoints, contributes the remaining multiset in the statement.
First, the lowest elevation \(-a\,(=0-a)\) appears after the first occurrence (which was included in \(\{0,-1,\ldots,-a\}\)) precisely \(k_{1}\) times. Similarly, the highest elevation \(b\,(=m-a)\) appears before the last occurrence (included in \(\{0,1,\ldots,b\}\)) exactly \(k_{m}\) times. For \(1\leq i\leq m-1\), the elevation \(i-a\) appears \(k_{i}\) times "at the peaks" of \((RL)^{k_{i}}\), then once more, and finally \(k_{i+1}\) times "after the peaks" of \((RL)^{k_{i+1}}\), giving multiplicity \(k_{i}+1+k_{i+1}\). See Figure 14 for an example.
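Example:As a quick sanity check of Lemma 6.5 (a worked example added here, not from the original text), take \(W=LRRL\), so \(a=b=1\), \(m=2\) and \(k_{1}=k_{2}=0\). The elevation sequence of \(W\) is \((0,-1,0,1,0)\), and the formula gives

\[E(W)=\{0,-1\}\cup\{0,1\}\cup\{(-1)^{0},\,0^{1},\,1^{0}\}=\{0,-1\}\cup\{0,1\}\cup\{0\},\]

where the exponents denote multiplicities as in the lemma; this is the multiset \(\{-1,0,0,0,1\}\), in agreement with the elevation sequence.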
**Proposition 6.6**.: Every equivalence class of balanced words contains a unique reduced word, which is the minimal word of the equivalence class.
Proof.: Existence: The minimal word of an equivalence class is reduced, by Lemma 6.2.
Uniqueness: Assume that \(X\) and \(Y\) are reduced words with \(X\sim Y\). It will be shown that \(X=Y\). By Proposition 6.4, both \(X\) and \(Y\) are of the form (iii). By Lemma 6.5, the smallest and largest elevation values of a word of this form are \(-a\) and \(b\), respectively. The assumption \(X\sim Y\) implies \(E(X)=E(Y)\) by Lemma 4.4, which means that \(X\) and \(Y\) share the parameters \(a\) and \(b\). Thus \(X=L^{a}X^{\prime}L^{b}\) and \(Y=L^{a}Y^{\prime}L^{b}\), where \(X^{\prime}=(RL)^{k_{1}}R\cdots(RL)^{k_{m}}R\) and \(Y^{\prime}=(RL)^{l_{1}}R\cdots(RL)^{l_{m}}R\) for some \(k_{i}\geq 0\) and \(l_{i}\geq 0\), where \(1\leq i\leq m\). (Note that the number of factors \((RL)^{k}R\) is \(m=a+b\) for both \(X^{\prime}\) and \(Y^{\prime}\).) It suffices to show that \(X^{\prime}=Y^{\prime}\).
Figure 14: Illustration on how the elevation multiset of the word \(W=LL(RL)^{3}R(RL)^{2}R(RL)RRLL\) can be written as in Lemma 6.5. For this word, \(a=b=2\), \(m=4\), and \((k_{1},k_{2},k_{3},k_{4})=(3,2,1,0)\). The three different parts of \(E(W)\) are separated with vertical dashed lines. The horizontal dashed lines illustrate how the elevations \(i-2\) for \(1\leq i\leq 3\) appear \(k_{i}+1+k_{i+1}\) times: first \(k_{i}\) times “at the peaks” of \((RL)^{k_{i}}\) (black dots), then once more and finally \(k_{i+1}\) times “after the peaks” of \((RL)^{k_{i+1}}\) (white circles). Note that there are no white circles along elevation \(1\) (topmost horizontal line), because \(k_{4}=0\).
Using the expression for \(E(X)=E(Y)\) from Lemma 6.5, and especially the fact that the multiplicities of the values in the third part agree, leads to the system of equations
\[\begin{cases}k_{1}=l_{1}\\ k_{i}+1+k_{i+1}=l_{i}+1+l_{i+1},\ \ \ 1\leq i\leq m-1\\ k_{m}=l_{m}.\end{cases}\]
It is easy to see by back-substitution that \(k_{i}=l_{i}\) for all \(1\leq i\leq m\). Thus \(X^{\prime}=Y^{\prime}\) and therefore \(X=Y\). This shows the uniqueness.
Finding the minimal word:The reduction algorithm (presented below) takes in a balanced word \(X\) and produces a reduced word in the equivalence class of \(X\); by Proposition 6.6, this reduced word is the minimal word of the equivalence class of \(X\). The key idea of the algorithm is to look for certain kinds of subwords and perform swaps until no such subwords exist.
Reduction algorithm:Let \(X\) be a balanced word. The reduction algorithm is defined using a recursive sequence \((X_{0},X_{1},\dots)\) of words, starting with \(X_{0}=X\). For \(i\geq 0\), proceed as follows. If \(X_{i}\) is reduced, the algorithm terminates with output \(X_{i}\). If \(X_{i}\) is not reduced, find the leftmost occurrence of a subword of type \(UD\) where \(U\) is an upper prime and \(D\) is a lower prime; then \(X_{i}=W_{1}UDW_{2}\) for some words \(W_{1},W_{2}\). Now let \(X_{i+1}=W_{1}DUW_{2}\).
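Note:A straightforward implementation of the reduction algorithm is sketched below in Python (an illustration rather than the authors' code; it reuses `is_upper_prime` and `is_lower_prime` from the earlier sketch). The sketch does not insist on the leftmost factor \(UD\): every step performs a valid swap, and since by Proposition 6.6 each equivalence class contains a unique reduced word, any choice of factor terminates at the same output.

```python
# Sketch of the reduction algorithm: swap factors UD -> DU until the word is reduced.
def find_UD(w):
    """Return indices (p, q, r) of a factor w[p:q]w[q:r] with w[p:q] an upper prime
    and w[q:r] a lower prime, or None if the word is reduced."""
    n = len(w)
    for p in range(n):
        for q in range(p + 2, n - 1):
            if is_upper_prime(w[p:q]):
                for r in range(q + 2, n + 1):
                    if is_lower_prime(w[q:r]):
                        return p, q, r
    return None

def reduce_word(word):
    w = word
    while True:
        found = find_UD(w)
        if found is None:
            return w                               # w is reduced
        p, q, r = found
        w = w[:p] + w[q:r] + w[p:q] + w[r:]        # swap of type (U, D)

assert reduce_word("RRLLRL") == "RLRRLL"           # a small length-6 example
```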
**Proposition 6.7**.: The reduction algorithm terminates.
Proof.: For each \(i\), the words \(X_{i}\) and \(X_{i+1}\) are related by a swap, which implies that every \(X_{i}\) belongs to the equivalence class of \(X\). By Lemma 6.1, \(X_{i+1}<X_{i}\) for each \(i\), so all \(X_{i}\) are distinct. However, the equivalence class of any word is finite by Corollary 3.3. These observations together imply that the algorithm must terminate.
Note:Equivalence classes of prime words can be found by listing all the prime words of given length, and then applying the reduction algorithm to each word. An equivalence class consists of prime words that give the same output.
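Note:The procedure just described translates directly into code. The sketch below (again an illustration, reusing `is_upper_prime` and `reduce_word` from the previous sketches) enumerates the upper primes of a given length and groups them by their reduced word, so that each group is one equivalence class, as in the tables of Figures 15 and 16.

```python
from itertools import product
from collections import defaultdict

def upper_primes(length):
    for letters in product("RL", repeat=length):
        word = "".join(letters)
        if is_upper_prime(word):
            yield word

def upper_prime_classes(length):
    classes = defaultdict(list)
    for word in upper_primes(length):
        classes[reduce_word(word)].append(word)   # same reduced word <=> same class
    return [sorted(c) for c in classes.values()]
```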
Example:The following two tables list all the words in the equivalence classes of upper and lower primes of length at most 10. Each row corresponds to an equivalence class. The words are listed in alphabetical order, so the minimal word always comes first. The parentheses are added to highlight the differences between words in the equivalence class.
Figure 16: Equivalence classes of lower primes of length \(\leq 10\).
Figure 15: Equivalence classes of upper primes of length \(\leq 10\).
Example:Let \(U=RL\) and \(D=LR\). One equivalence class of upper primes of length 12 consists of six words that are of the type \(RRZLL\) where \(Z\) is a product of four prime words: two copies of \(U\) and two copies of \(D\). Note that there are exactly \(\binom{4}{2}=6\) ways to arrange these four prime words. The words are as follows:
* \(W_{DDUU}=RR(LR)(LR)(RL)(RL)LL\)
* \(W_{DUDU}=RR(LR)(RL)(LR)(RL)LL\)
* \(W_{DUUD}=RR(LR)(RL)(RL)(LR)LL\)
* \(W_{UDDU}=RR(RL)(LR)(LR)(RL)LL\)
* \(W_{UDUD}=RR(RL)(LR)(RL)(LR)LL\)
* \(W_{UUDD}=RR(RL)(RL)(LR)(LR)LL\).
Figure 17 illustrates how the words are related by swaps: each edge corresponds to a swap, and the directed edges are the special swaps from the reduction algorithm. The graph is almost complete, with only two edges missing. One of the missing edges is between \(W_{UUDD}\) and \(W_{DUDU}\), and the other one between \(W_{DDUU}\) and \(W_{UDUD}\). It can be seen that neither of these two pairs is related by a swap, by checking all the possible subwords \(FG\) where \(F\) and \(G\) are balanced words.
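Note:With the sketches above, this example can be checked mechanically: assuming, as stated, that the six words form one equivalence class, each of them reduces to the same minimal word \(W_{DDUU}\).

```python
words = [
    "RRLRLRRLRLLL",  # W_DDUU
    "RRLRRLLRRLLL",  # W_DUDU
    "RRLRRLRLLRLL",  # W_DUUD
    "RRRLLRLRRLLL",  # W_UDDU
    "RRRLLRRLLRLL",  # W_UDUD
    "RRRLRLLRLRLL",  # W_UUDD
]
assert all(is_upper_prime(w) for w in words)
assert all(reduce_word(w) == words[0] for w in words)  # W_DDUU is the minimal word
```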
|
2304.06882
|
On the dimension of c-nilpotent multiplier of n-Lie algebras
|
Let L be a finite-dimensional n-Lie algebra with free presentation F/R. Then
the concept of c-nilpotent multiplier of L, denoted by M(c)(L), is defined as
follows: M(c)(L) =(gamma c+1(F) R)/gamma c+1(R, F, . . . , F). In this paper,
we obtain some inequalities and certain bounds for the dimension of M(c)(L) by
using the basic commutators. Also, we discuss the relationship between the
dimension of the c-nilpotent multiplier of L and the c-nilpotent multiplier of
some factor of L. We further obtain an inequality between dimensions of
c-nilpotent multiplier of n-Lie algebra and non-abelian tensor (exterior)
product of a central ideal by its abelianized factor n-Lie algebra. Finally, we
also determine the dimension and structure of c-nilpotent multipliers
Heisenberg n-Lie algebras, which can be a useful tool for determining the
dimension of the multiplier of nilpotent n-Lie algebras of class 2.
|
Seyedeh Nafiseh Akbarossadat
|
2023-04-14T01:17:01Z
|
http://arxiv.org/abs/2304.06882v1
|
# On the dimension of \(c\)-nilpotent multiplier of \(n\)-Lie algebras
Seyedeh Nafiseh Akbarossadat
**Abstract.** Let \(L\) be a finite-dimensional \(n\)-Lie algebra with free presentation \(F/R\). Then the concept of \(c\)-nilpotent multiplier of \(L\), denoted by \(\mathcal{M}^{(c)}(L)\), is defined as follows:
\[\mathcal{M}^{(c)}(L)=\frac{\gamma_{c+1}(F)\cap R}{\gamma_{c+1}(R,F,\ldots,F)}.\]
In this paper, we obtain some inequalities and certain bounds for the dimension of \(\mathcal{M}^{(c)}(L)\) by using the basic commutators. Also, we discuss the relationship between the dimension of the \(c\)-nilpotent multiplier of \(L\) and the \(c\)-nilpotent multiplier of some factor of \(L\). We further obtain an inequality between the dimensions of the \(c\)-nilpotent multiplier of an \(n\)-Lie algebra and the non-abelian tensor (exterior) product of a central ideal by its abelianized factor \(n\)-Lie algebra. Finally, we determine the dimension and structure of the \(c\)-nilpotent multipliers of Heisenberg \(n\)-Lie algebras, which can be a useful tool for determining the dimension of the multiplier of nilpotent \(n\)-Lie algebras of class \(2\).
**Key words:**\(n\)-Lie algebra, Non-abelian tensor product, \(c\)-Nilpotent multiplier, \(c\)-Capable.
**MSC:** 17B05, 17B30, 17B60.
## 1. Introduction and Preparatory
Ellis [8] developed an analogous theory of non-abelian tensor products for Lie algebras (see also [10]). Using tensor (exterior) products of Lie algebras, Ellis described the universal central extension of Lie algebras.
In 1986, Filippov [11] introduced the notion of \(n\)-Lie algebras. An \(n\)_-Lie algebra_ over a field \(\mathbb{F}\) is a vector space \(L\) over \(\mathbb{F}\) along with an anti-symmetric \(n\)-linear form \([x_{1},\ldots,x_{n}]\) satisfying the Jacobi identity:
\[[[x_{1},\ldots,x_{n}],y_{2},\ldots,y_{n}]=\sum_{i=1}^{n}[x_{1},\ldots,x_{i-1}, [x_{i},y_{2},\ldots,y_{n}],x_{i+1},\ldots,x_{n}].\]
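For orientation (a remark added for illustration, not part of the original), when \(n=2\) this identity is the usual Jacobi identity written in its derivation (Leibniz) form:

\[[[x_{1},x_{2}],y_{2}]=[[x_{1},y_{2}],x_{2}]+[x_{1},[x_{2},y_{2}]].\]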
Clearly, \(n\)-Lie algebras are nothing but ordinary Lie algebras when \(n=2\). Studying \(n\)-Lie algebras is important because of their applications in physics and geometry. So far, several papers have been published about their classification. The author and Saeedi defined [3, 5, 4] the concepts of non-abelian tensor (exterior) product of \(n\)-Lie algebras
and proved some results about them. Also, we introduced some bounds for the dimension of the non-abelian tensor square and Schur multiplier of \(n\)-Lie algebras. So far, a considerable amount of research has been done on multipliers (see [2, 6, 7, 14, 15, 16, 17]).
First, we will review some of the concepts required in this paper.
An \(n\)-Lie algebra \(L\) is called nilpotent of class \(s\) (for some positive integer \(s\)) if \(L^{s+1}=\gamma_{s+1}(L)=0\) and \(L^{s}\neq 0\). This is equivalent to \(Z_{s-1}(L)\subsetneq Z_{s}(L)=L\). Then \(s\) is called the nilpotency class of \(L\), and we write \(cl(L)=s\). Note that nilpotency of \(n\)-Lie algebras is inherited by subalgebras, ideals, and homomorphic images, but it is not closed under extensions.
**Definition 1.1**.: _Let \(L\) and \(P\) be two \(n\)-Lie algebras over an arbitrary field \(\mathbb{F}\) such that \(L\) and \(P\) act on each other by the families of \(n\)-linear maps \(\{f_{i}\}_{1\leq i\leq n-1}\) and \(\{g_{i}\}_{1\leq i\leq n-1}\), respectively, and act on themselves by their brackets. Then \(L\wedge P\) is generated by all symbols_
\[l_{1}\wedge\cdots\wedge l_{i}\wedge p_{i+1}\wedge\cdots\wedge p_{n},\]
_for all \(l_{i}\in L\), \(p_{i}\in P\), and \(\alpha\in\mathbb{F}\), for which the following properties hold:_
\[l_{1}\wedge\cdots\wedge\alpha l_{i}+l_{i}^{\prime}\wedge\cdots \wedge l_{j}\wedge p_{j+1}\wedge\cdots\wedge p_{n}\] \[= \alpha(l_{1}\wedge\cdots\wedge l_{i}\wedge\cdots\wedge l_{j} \wedge p_{j+1}\wedge\cdots\wedge p_{n})\] \[+(l_{1}\wedge\cdots\wedge l_{i}^{\prime}\wedge\cdots\wedge l_{j} \wedge p_{j+1}\wedge\cdots\wedge p_{n}),\]
\[l_{1}\wedge\cdots\wedge l_{j}\wedge p_{j+1}\wedge\cdots\wedge \alpha p_{j+i}+p_{j+i}^{\prime}\wedge\cdots\wedge p_{n}\] \[= \alpha(l_{1}\wedge\cdots\wedge l_{j}\wedge p_{j+1}\wedge\cdots \wedge p_{j+i}\wedge\cdots\wedge p_{n})\] \[+(l_{1}\wedge\cdots\wedge l_{j}\wedge p_{j+1}\wedge\cdots\wedge p _{j+i}^{\prime}\wedge\cdots\wedge p_{n}),\]
\[[l_{1},\ldots,l_{n}]\wedge l_{2}^{\prime}\wedge\cdots\wedge l_{i}^{ \prime}\wedge p_{1}\wedge\cdots\wedge p_{n-i} \tag{1.1}\] \[= \sum_{j=1}^{n}(-1)^{n-j}l_{1}\wedge\cdots\wedge l_{j-1}\wedge l_{ j+1}\wedge\cdots\wedge l_{n}\wedge f_{i}(l_{j},l_{2}^{\prime},\ldots,l_{i}^{ \prime},p_{1},\ldots,p_{n-i}),\]
\[l_{1}\wedge\cdots\wedge l_{i}\wedge p_{1}\wedge\cdots\wedge p_{ n-i-1}\wedge[p_{1}^{\prime},\ldots,p_{n}^{\prime}]\] \[= \sum_{j=1}^{n}(-1)^{n-j-2}g_{i}(p_{j}^{\prime},l_{1},\ldots,l_{i},p_{1},\ldots,p_{n-i-1})\wedge p_{1}^{\prime}\wedge\cdots\wedge p_{j-1}^{ \prime}\wedge p_{j+1}^{\prime}\wedge\cdots\wedge p_{n}^{\prime},\]
\[l_{1}\wedge\cdots\wedge l_{j-1}\wedge l^{*}\wedge\cdots\wedge l _{k-1}\wedge l^{*}\wedge\cdots\wedge l_{i}\wedge p_{1}\wedge\cdots\wedge p_{n-i }=0,\] \[l_{1}\wedge\cdots\wedge l_{i}\wedge p_{1}\wedge\cdots\wedge p_{j- 1}\wedge p^{*}\wedge\cdots\wedge p_{k-1}\wedge p^{*}\wedge\cdots\wedge p_{n-i }=0.\]
\[\Big{[}(l_{1}^{1}\wedge\cdots\wedge l_{i_{1}}^{1}\wedge p_{1}^{1} \wedge\cdots\wedge p_{n-i_{1}}^{1}),(l_{1}^{2}\wedge\cdots\wedge l_{i_{2}}^{2} \wedge p_{1}^{2}\wedge\cdots\wedge p_{n-i_{2}}^{2}),\ldots,\] \[(l_{1}^{n-1}\wedge\cdots\wedge l_{i_{n-1}}^{n-1}\wedge p_{1}^{n-1 }\wedge\cdots\wedge p_{n-i_{n-1}}^{n-1}),(l_{1}^{n}\wedge\cdots\wedge l_{i_{n}} ^{n}\wedge p_{1}^{n}\wedge\cdots\wedge p_{n-i_{n}}^{n})\Big{]}\] \[=\frac{1}{2^{n-1}}\Big{(}(-1)^{\sum_{s=1}^{n-1}i_{s}(n-i_{s})} \big{(}g_{n-i_{1}}(p_{1}^{1},\ldots,p_{n-i_{1}}^{1},l_{1}^{1},\ldots,l_{i_{1}} ^{1})\] \[\qquad\qquad\wedge f_{i_{2}}(l_{1}^{2},\ldots,l_{i_{2}}^{2},p_{1 }^{2},\ldots,p_{n-i_{2}}^{2})\] \[\qquad\qquad\wedge\cdots\wedge f_{i_{n-1}}(l_{1}^{n-1},\ldots,l_ {i_{n-1}}^{n-1},p_{1}^{n-1},\ldots,p_{n-i_{n-1}}^{n-1})\] \[\qquad\qquad\wedge f_{i_{n}}(l_{1}^{n},\ldots,l_{i_{n}}^{n},p_{1 }^{n},\ldots,p_{n-i_{n}}^{n})\big{)}\] \[+\sum_{r=2}^{n-1}(-1)^{\sum_{s=1}^{n-1}i_{s}(n-i_{s})}_{s\neq r} \big{(}g_{n-i_{1}}(p_{1}^{1},\ldots,p_{n-i_{1}}^{1},l_{1}^{1},\ldots,l_{i_{1}} ^{1})\] \[\qquad\qquad\wedge g_{n-i_{r}}(p_{1}^{r},\ldots,p_{n-i_{1}}^{r},l _{1}^{r},\ldots,l_{i_{1}}^{r})\wedge f_{i_{2}}(l_{1}^{2},\ldots,l_{i_{2}}^{2},p_{1}^{2},\ldots,p_{n-i_{2}}^{2})\] \[\qquad\qquad\wedge\cdots\wedge f_{i_{r-1}}(l_{1}^{r-1},\ldots,l_ {i_{2}}^{r-1},p_{1}^{r-1},\ldots,p_{n-i_{2}}^{r-1})\] \[\qquad\qquad\wedge f_{i_{r+1}}(l_{1}^{r+1},\ldots,l_{i_{2}}^{r+1},p_{1}^{r+1},\ldots,p_{n-i_{2}}^{r+1})\] \[\qquad\qquad\wedge\cdots\wedge f_{i_{n-1}}(l_{1}^{n-1},\ldots,l_ {i_{2}}^{n-1},p_{1}^{n-1},\ldots,p_{n-i_{2}}^{n-1})\] \[\qquad\qquad\wedge f_{i_{n}}(l_{1}^{n},\ldots,l_{i_{2}}^{n},p_{1 }^{n},\ldots,p_{n-i_{2}}^{n})\big{)}\] \[+\cdots+(-1)^{i_{1}(n-i_{1})}\big{(}g_{n-i_{1}}(p_{1}^{1},\ldots,p _{n-i_{1}}^{1},l_{1}^{1},\ldots,l_{i_{1}}^{1})\] \[\qquad\qquad\wedge g_{n-i_{2}}(p_{1}^{2},\ldots,p_{n-i_{2}}^{2}, l_{1}^{2},\ldots,l_{i_{2}}^{2})\] \[\qquad\qquad\wedge\cdots\wedge g_{n-i_{n-1}}(p_{1}^{n-1},\ldots,p _{n-i_{n-1}}^{n-1},l_{1}^{n-1},\ldots,l_{i_{n-1}}^{n-1})\] \[\qquad\qquad\wedge g_{n-i_{n}}(p_{1}^{n},\ldots,p_{n-i_{n}}^{n},l _{1}^{n},\ldots,l_{i_{n}}^{n})\big{)}\Big{)},\]
In what follows, we recall the definition of crossed modules of \(n\)-Lie algebras.
**Definition 1.2** (Crossed module).: _A crossed module is a homomorphism of \(n\)-Lie algebras \(\mu:L\longrightarrow P\) together with an action \(\{g_{i}\}_{1\leq i\leq n-1}\) of \(P\) on \(L\) satisfying the following conditions:_
1. \(\mu\) _is compatible with the action of_ \(P\) _on_ \(L\)_, that is,_ \[\mu\,(g_{i}(p_{1},\ldots,p_{i},l_{1},\ldots,l_{n-i}))=[p_{1},\ldots,p_{n-i}, \mu(l_{1}),\ldots,\mu(l_{i})],\] _for all_ \(1\leq i\leq n-1\)_._
2. _For all_ \(1\leq i\leq n-1\)_,_ \[g_{i}(\mu(l_{1}),\ldots,\mu(l_{i}),l_{i+1},\ldots,l_{n})=[l_{1},\ldots,l_{i},l_ {i+1},\ldots,l_{n}].\]
3. _For all_ \(1\leq i\leq n-1\) _and_ \(i+1\leq j\leq n\)_,_ \[g_{i}(p_{1},\ldots,p_{i},l_{i+1},\ldots,l_{n})=(-1)^{j-i-1}g_{i+1}(p_{1}, \ldots,p_{i},\mu(l_{j}),l_{i+1},\ldots,l_{j-1},l_{j+1},\ldots,l_{n}).\]
It is easy to check that the kernel of \(\mu\) is in the center of \(L\) and that its image is an ideal in \(P\). Furthermore, the \(n\)-Lie algebra \(\operatorname{Im}(\mu)\) acts trivially on the center \(Z(L)\), and so
trivially on \(\ker\mu\). Hence \(\ker\mu\) inherits an action of \(P/\mathrm{Im}(\mu)\) making \(\ker\mu\) a representation of the \(n\)-Lie algebra \(P/\mathrm{Im}(\mu)\).
The author and Saeedi [4] first introduced the concept of free \(n\)-Lie algebras and then, in [1], defined the concept of basic commutators of weight \(w\) in \(d\)-dimensional \(n\)-Lie algebras. There we also proved some of their properties and a formula for their number, which we review in what follows.
**Theorem 1.3** ([1]).: _Let the set \(X=\{x_{i}|x_{i+1}>x_{i};\ i=1,2,\ldots,d\}\) be an ordered set and a basis for the free \(n\)-Lie algebra \(F\) and let \(w\) be a positive integer number. Then the number of basic commutators of weight \(w\) is_
\[l_{d}^{n}(w)=\sum\limits_{j=1}^{\alpha_{0}}\beta_{j^{*}}\left(\sum\limits_{i= 2}^{w-1}\alpha_{i}\binom{d}{n-1}\right), \tag{1.2}\]
_where \(\alpha_{0}=\binom{d-1}{n-1}\), \(\alpha_{i}\) (\(2\leq i\leq w-1\)) is the coefficient of the \((i-2)\)th term in the binomial expansion of \((a+b)^{w-3}\) (i.e. \(\alpha_{i}=\binom{w-3}{i-2}\)), and if \(\binom{k-1}{n-1}+1\leq j\leq\binom{k}{n-1}\) (for \(k=n-1,n,n+1,n+2,\ldots,d-1\)), then \(j^{*}=\binom{k-1}{n-1}+1\) and \(\beta_{j^{*}}=(d-n-j^{*}+2)\)._
The following theorem expresses one of the most important and main applications of basic commutators of \(n\)-Lie algebras.
**Theorem 1.4** ([1]).: _Let \(F\) be a free \(n\)-Lie algebra and let \(F^{i}\) be the \(i\)th term of the lower central series of \(F\), for each \(i\in\mathbb{N}\). Then \(\dfrac{F^{i}}{F^{i+c}}\) is abelian of dimension \(\sum\limits_{j=0}^{c-1}l_{d}^{n}(i+j)\), where \(c=1,2,\ldots\)._
The next proposition determines the dimension of \(c\)-nilpotent multiplier of all abelian \(n\)-Lie algebras.
**Proposition 1.5** ([2]).: _Let \(L\) be an abelian \(n\)-Lie algebra of finite dimension \(d\). Then \(\dim\mathcal{M}^{(c)}(L)=l_{d}^{n}(c+1)\). In particular, \(\dim\mathcal{M}(L)=l_{d}^{n}(2)=\dfrac{1}{2}d(d-1)\)._
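For instance (an illustration added here, not in the original), for an abelian Lie algebra (\(n=2\)) of dimension \(d=4\) the proposition gives

\[\dim\mathcal{M}(L)=l_{4}^{2}(2)=\tfrac{1}{2}\cdot 4\cdot 3=6,\]

which is the dimension of the exterior square \(\Lambda^{2}L\).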
## 2. Preliminary results
In this section, we try to present some important and practical tools in order to prove the main results of this paper.
**Lemma 2.1**.: _Let \(0\longrightarrow R\longrightarrow F\longrightarrow L\longrightarrow 0\) be a free presentation of the \(n\)-Lie algebra \(L\) and let \(M\) be an ideal of \(L\) with free presentation \(S/R\), for some ideal \(S\) of \(F\). Then_
\[\gamma_{c+1}(L)\cap M\cong\dfrac{(\gamma_{c+1}(F)\cap S)/\gamma_{c+1}(R,F, \ldots,F)}{\mathcal{M}^{(c)}(L)}.\]
Proof.: By the definition of the lower central series, we know that
\[\gamma_{c+1}(L)\cap M=\gamma_{c+1}(F/R)\cap(S/R) =\frac{\gamma_{c+1}(F)+R}{R}\cap\frac{S}{R}\] \[=\frac{(\gamma_{c+1}(F)+R)\cap S}{R}\] \[=\frac{(\gamma_{c+1}(F)\cap S)+(R\cap S)}{R}\] \[=\frac{(\gamma_{c+1}(F)\cap S)+R}{R}\] \[\cong\frac{(\gamma_{c+1}(F)\cap S)}{(\gamma_{c+1}(F)\cap S)\cap R}\] \[=\frac{(\gamma_{c+1}(F)\cap S)}{\gamma_{c+1}(F)\cap R}\] \[\cong\frac{(\gamma_{c+1}(F)\cap S)/\gamma_{c+1}(R,F,\ldots,F)}{( \gamma_{c+1}(F)\cap R)/\gamma_{c+1}(R,F,\ldots,F)}\] \[=\frac{(\gamma_{c+1}(F)\cap S)/\gamma_{c+1}(R,F,\ldots,F)}{ \mathcal{M}^{(c)}(L)}.\]
**Lemma 2.2**.: _Let \(L\) be an \(n\)-Lie algebra with a free presentation \(L\cong F/R\), and let \(M\) be its central ideal and \(M\cong S/R\), for some ideals \(S\) of the free \(n\)-Lie algebra \(F\). Then \(\gamma_{c+1}(S,F,\ldots,F)/(\gamma_{c+1}(R,F,\ldots,F)+\gamma_{c+1}(S))\) is a homomorphic image of_
\[(L/M)^{ab}\ ^{c}\otimes M=\underbrace{(L/M)^{ab}\otimes\cdots\otimes(L/M)^{ab}}_{c -times}\otimes M,\]
_where \((L/M)^{ab}=\frac{L/M}{(L/M)^{2}}\) and the tensor product is in sense of tensor product of \(n\)-Lie algebras._
Proof.: Since \(F/R\) is a free presentation of \(L\), so there exists an epimorphism \(\pi:F\longrightarrow L\) such that \(\ker\pi=R\). Consider the following map:
\[\theta:\begin{cases}\underbrace{(L/M)^{ab}\times\cdots\times(L/M)^{ab}}_{(c-1)(n-1)\ \text{times}}\times M\longrightarrow\gamma_{c+1}(S,F,\ldots,F)/(\gamma_{c+1}(R,F,\ldots,F)+\gamma_{c+1}(S))\\ (\bar{l}_{1},\ldots,\bar{l}_{n-1},\bar{l}_{n},\ldots,\bar{l}_{2n-2},\ldots,\bar{l}_{(c-1)(n-1)},m)\longmapsto\\ \overline{[\ldots[[\bar{f}_{1},\ldots,\bar{f}_{n-1},m],\bar{f}_{n},\bar{f}_{n+1},\ldots,\bar{f}_{2n-2}],\ldots,\bar{f}_{(c-2)(n-1)+1},\ldots,\bar{f}_{(c-1)(n-1)}]},\end{cases}\]
where \(\pi^{-1}(l_{i})=\bar{f_{i}}\), for all \(i\). It is easy to check that \(\theta\) is linear, and hence it induces the following epimorphism:
\[(L/M)^{ab}\,{}^{c}\otimes M=\underbrace{(L/M)^{ab}\otimes...\otimes(L/M)^{ab}}_{c -times}\otimes M\to\gamma_{c+1}(S,F,...,F)/(\gamma_{c+1}(R,F,...,F)+\gamma_{c+1 }(S)).\]
The author and Saeedi [3] proved a result similar to the following lemma for non-abelian tensor product of \(n\)-Lie algebras. By a similar method and reasoning, the following lemma can be proved for the non-abelian exterior (wedge) product of \(n\)-Lie algebras as well.
**Lemma 2.3**.: _Let \(L\), \(P\), \(K\), and \(Q\) be \(n\)-Lie algebras such that \(L\) and \(K\) act compatibly on each other by \(\{f_{i}\}_{1\leq i\leq n-1}\) and \(\{g_{i}\}_{1\leq i\leq n-1}\), and so they act \(P\) and \(Q\) by \(\{h_{i}\}_{1\leq i\leq n-1}\) and \(\{s_{i}\}_{1\leq i\leq n-1}\), respectively. Moreover, suppose that \(\sigma_{1}:L\longrightarrow P\) and \(\sigma_{2}:K\longrightarrow Q\) are \(n\)-Lie homomorphisms preserving the actions in the sense that_
\[\sigma_{1}(g_{i}(a_{1},\ldots,a_{n}))=s_{i}(\sigma_{r}(a_{1}), \ldots,\sigma_{r}(a_{n})),\] \[\sigma_{2}(f_{i}(a_{1},\ldots,a_{n}))=h_{i}(\sigma_{r}(a_{1}), \ldots,\sigma_{r}(a_{n})),\]
_for all \(a_{1},\ldots,a_{n}\in L\cup K\), where \(r=1\) if \(a_{i}\in L\), and \(r=2\) if \(a_{i}\in K\). Then there is a unique homomorphism (up to sign)_
\[\sigma_{1}\wedge\sigma_{2}:L\wedge K \longrightarrow P\wedge Q\] \[\sum\limits_{i=1}^{n-1}l_{1}^{i}\wedge\cdots\wedge l_{i}^{i} \wedge k_{i+1}^{i}\wedge\cdots\wedge k_{n}^{i} \longmapsto \sum\limits_{i=1}^{n-1}\sigma_{1}(l_{1}^{i})\wedge\cdots\wedge \sigma_{1}(l_{i}^{i})\wedge\sigma_{2}(k_{i+1}^{i})\wedge\cdots\wedge\sigma_{2 }(k_{n}^{i}).\]
Proof.: The proof is similar to [3, Proposition 4.5].
**Lemma 2.4**.: _Let \(F\) be a free \(n\)-Lie algebra and let \(S\) and \(I\) be two ideals of \(F\). Then_
\[[I,\gamma_{j}(S,F,\ldots,F),F,\ldots,F]\subseteq\gamma_{j}(I,F,\ldots,F),\]
_for all \(j\geq 1\)._
Proof.: We prove this by induction on \(j\). The result is trivially true for \(j=1\). Assume that the result holds for \(j\geq 1\). Then for any ideal \(I\) in \(F\), by the definition of \(\gamma_{j+1}\) and
the Jacobi identity, we have
\[\begin{split}[I,\gamma_{j+1}(S,F,\ldots,F),F,\ldots,F]&=[\gamma_{j+1}(S,F,\ldots,F),I,F,\ldots,F]\\ &=[[\gamma_{j}(S,F,\ldots,F),\underbrace{F,\ldots,F}_{n-1}],I,\underbrace{F,\ldots,F}_{n-2}]\\ &=[[\gamma_{j}(S,F,\ldots,F),I,\underbrace{F,\ldots,F}_{n-2}],\underbrace{F,\ldots,F}_{n-1}]\\ &\quad+[\gamma_{j}(S,F,\ldots,F),[F,I,\underbrace{F,\ldots,F}_{n-2}],\underbrace{F,\ldots,F}_{n-2}]\\ &\quad+\cdots\\ &\quad+[\gamma_{j}(S,F,\ldots,F),\underbrace{F,\ldots,F}_{n-2},[F,I,\underbrace{F,\ldots,F}_{n-2}]]\\ &\subseteq[\gamma_{j}(I,F,\ldots,F),F,\ldots,F]+[\gamma_{j}(S,F,\ldots,F),I,F,\ldots,F]\\ &\subseteq\gamma_{j}([I,F,\ldots,F],F,\ldots,F)+\gamma_{j+1}(I,F,\ldots,F)\\ &=\gamma_{j+1}(I,F,\ldots,F).\end{split}\]
Let \(L\) be an \(n\)-Lie algebra with a free presentation \(F/R\), for some free \(n\)-Lie algebra \(F\) and an ideal \(R\) of \(F\) with the \(n\)-Lie algebra epimorphism \(\pi\), and let \(M\) be an ideal of \(L\) with a free presentation \(S/R\), for some ideal \(S\) of \(F\). Since \(R\subseteq S\subseteq F\), so \(\gamma_{j}(R,F,\ldots,F)\subseteq\gamma_{j}(S,F,\ldots,F)\), and hence we can put \(T_{j}=\dfrac{\gamma_{j}(S,F,\ldots,F)}{\gamma_{j}(R,F,\ldots,F)}\), for all \(j\geq 1\). The following lemma shows that \(T_{j}\) and \(L\) act on each other compatibly.
**Lemma 2.5**.: _By the above assumptions and notations, \(T_{j}\) and \(L\) act compatibly on each other._
Proof.: Define the two families of maps \(\{g_{i}\}\) and \(\{h_{i}\}\) as follows:
\[\begin{cases}g_{i}:L^{\times i}\times T_{j}^{\times n-i}\longrightarrow T_{j}\\ g_{i}\left(l_{1},\ldots,l_{i},\bar{t}_{i+1},\ldots,\bar{t}_{n}\right)=[f_{1},\ldots,f_{i},t_{i+1},\ldots,t_{n}]+\gamma_{j}(R,F,\ldots,F)\end{cases}\]
and
\[\begin{cases}h_{i}:T_{j}^{\times i}\times L^{\times n-i}\longrightarrow L\\ h_{i}\left(\bar{t_{1}},\ldots,\bar{t_{i}},l_{i+1},\ldots,l_{n}\right)=\pi([t_ {1},\ldots,t_{i},l_{i+1},\ldots,l_{n}])\end{cases}\]
where \(l_{p}\in L\), \(t_{q}\in\gamma_{j}(S,F,\ldots,F)\), and \(f_{p}\in\pi^{-1}(l_{p})\), for all \(p\) and \(q\). By Lemma 2.4, it is easy to check that \(T_{j}\) and \(L\) act compatibly on each other.
The following proposition is useful in some of our results and future investigation.
**Proposition 2.6**.: _Let \(L\) be an \(n\)-Lie algebra and let \(M\) be its ideal. Then_
1. \(T_{j+1}\) _is an isomorphic image of_ \(M\wedge^{j}L\)_._
2. _if_ \(M\) _is an_ \(r\)_-central ideal of_ \(L\)_, then_ \(T_{j+1}\) _is an isomorphic image of_ \(M\wedge^{j}K\)_, where_ \(K=\dfrac{L}{\gamma_{r+1}(L)}\)_._
Proof.:
1. Consider the map \(\lambda:T_{j}\longrightarrow L\) given by \(\lambda(\bar{t})=\pi(t)\). Since \(\pi\) is a well-defined \(n\)-Lie homomorphism, so is \(\lambda\). The \(n\)-Lie homomorphism \(\lambda\), together with the action of \(L\) on \(T_{j}\) defined in Lemma 2.5, and also the identity map \(id_{L}:L\longrightarrow L\), are crossed modules of \(n\)-Lie algebras. The family of maps \(\phi_{j}^{i}:T_{j}^{\times i}\times L^{\times n-i}\xrightarrow{epi}T_{j+1}\) given by \(\phi_{j}^{i}(\bar{x}_{1},\ldots,\bar{x}_{i},l_{i+1},\ldots,l_{n})=[x_{1},\ldots,x_{i},f_{i+1},\ldots,f_{n}]+\gamma_{j+1}(R,F,\ldots,F)\) is an \(n\)-multiplying. Consequently, by Theorem 3.5 of [3], this family induces an epimorphism \(\Phi_{j}:T_{j}\wedge L\longrightarrow T_{j+1}\). Thus, by using Lemma 2.3, we obtain an epimorphism \(\Phi_{j}\wedge id_{L}:(T_{j}\wedge L)\wedge L\longrightarrow T_{j+1}\wedge L\). Now, we continue by induction on \(j\).
2. By the assumption, \(\gamma_{r+1}(S,F,\ldots,F)\subseteq R\) and then \[[\gamma_{j}(S,F,\ldots,F),\gamma_{r+1}(F),F,\ldots,F]\subseteq\gamma_{j+r+1}(S,F,\ldots,F)\subseteq\gamma_{j+1}(R,F,\ldots,F).\] So, the action of \(\gamma_{r+1}(F)\) on \(T_{j}\) is trivial and the \(n\)-Lie algebras \(L/\gamma_{r+1}(L)\) and \(T_{j}\) act compatibly on each other by the induced actions \(\bar{g}_{i}\)'s and \(\bar{h}_{i}\)'s of the \(g_{i}\)'s and \(h_{i}\)'s defined in Lemma 2.5; that is, \[\begin{cases}\bar{g}_{i}:(L/\gamma_{r+1}(L))^{\times i}\times T_{j}^{\times n-i}\longrightarrow T_{j}\\ \bar{g}_{i}\left(\bar{l}_{1},\ldots,\bar{l}_{i},\bar{t}_{i+1},\ldots,\bar{t}_{n}\right)=g_{i}\left(l_{1},\ldots,l_{i},\bar{t}_{i+1},\ldots,\bar{t}_{n}\right)\\ =[f_{1},\ldots,f_{i},t_{i+1},\ldots,t_{n}]+\gamma_{j}(R,F,\ldots,F)\end{cases}\] and \[\begin{cases}\bar{h}_{i}:T_{j}^{\times i}\times(L/\gamma_{r+1}(L))^{\times n-i}\longrightarrow L/\gamma_{r+1}(L)\\ \bar{h}_{i}\left(\bar{t}_{1},\ldots,\bar{t}_{i},\bar{l}_{i+1},\ldots,\bar{l}_{n}\right)=\pi([t_{1},\ldots,t_{i},l_{i+1},\ldots,l_{n}])\end{cases}\] where \(l_{p}\in L\), \(t_{q}\in\gamma_{j}(S,F,\ldots,F)\), and \(f_{p}\in\pi^{-1}(l_{p})\), for all \(p\) and \(q\). Now, as in part (1), there exist epimorphisms \(T_{j}\wedge K\longrightarrow T_{j+1}\) and \(M\wedge^{j}K\longrightarrow T_{j+1}\), and hence the proof is completed.
**Corollary 2.7**.: _By the assumptions and notations in Proposition 2.6, there exists an epimorphism \(\kappa\) from \(\ker\mu_{M}^{j}\) onto \(\dfrac{R\cap\gamma_{j+1}(S,F,\ldots,F)}{\gamma_{j+1}(R,F,\ldots,F)}\)._
Proof.: The proof is similar to [16, Corollary 1.5].
## 3. On the \(c\)-nilpotent multiplier of \(n\)-Lie algebras
In this section, first we introduce some inequalities for the dimension of the \(c\)-nilpotent multiplier of \(n\)-Lie algebras and their factors. Then, we show that every \(c\)-perfect \(n\)-Lie algebra has at least one cover. Also, we give some upper bounds for the dimension of the \(c\)-nilpotent multiplier of \(n\)-Lie algebras using the basic commutators.
The following lemma is one of the important tools for obtaining the required inequalities, which is similar to the work of Araskhan and Rismanchian [7] for the Lie algebras and the work of Jones [12] for the group case.
**Lemma 3.1**.: _Let \(L\) be a finite-dimensional \(n\)-Lie algebra, and let \(M\) be an ideal of \(L\). Then there exists an \(n\)-Lie algebra \(K\) with an ideal \(S\) such that_
1. \(\gamma_{c+1}(L)\cap M=K/S\)_;_
2. \(S\cong\mathcal{M}^{(c)}(L)\)_;_
3. \(\mathcal{M}^{(c)}(L/M)\) _is a homomorphic image of_ \(K\)_._
4. _If, in addition,_ \(M\) _is a_ \(c\)_-central ideal of_ \(L\)_, then_ \(\gamma_{c+1}(L)\cap M\) _is a homomorphic image of_ \(\mathcal{M}^{(c)}(L/M)\)_._
Proof.:
1. Since \(\gamma_{c+1}(F)\cap R\unlhd\gamma_{c+1}(F)\cap S\), by Lemma 2.1 it is enough to put \(K/S=\dfrac{(\gamma_{c+1}(F)\cap S)/\gamma_{c+1}(R,F,\ldots,F)}{(\gamma_{c+1}(F)\cap R)/\gamma_{c+1}(R,F,\ldots,F)}\).
2. By the definition of the \(c\)-nilpotent multiplier of \(L\) and Lemma 2.1, we obtain \(S=\mathcal{M}^{(c)}(L)\).
3. Consider the free presentation \[L/M=\dfrac{F/R}{S/R}\cong F/S,\] and hence \[\mathcal{M}^{(c)}(L/M)=\dfrac{\gamma_{c+1}(F)\cap S}{\gamma_{c+1}(S,F,\ldots, F)}.\] Now, consider the map \(\alpha:(\gamma_{c+1}(F)\cap S)/\gamma_{c+1}(R,F,\ldots,F)\longrightarrow\dfrac{\gamma_{c+1} (F)\cap S}{\gamma_{c+1}(S,F,\ldots,F)}\). It is easy to check that \(\alpha\) is an epimorphism of \(n\)-Lie algebras with the kernel ideal \(\gamma_{c+1}(S,F,\ldots,F)/\gamma_{c+1}(R,F,\ldots,F)\).
4. Since \(M\) is a \(c\)-central ideal, we have \(\gamma_{c+1}(M,L,\ldots,L)=0\), that is, \(\dfrac{\gamma_{c+1}(S,F,\ldots,F)+R}{R}=0\), and thus \(\gamma_{c+1}(S,F,\ldots,F)\subseteq R\). Therefore, \(\gamma_{c+1}(S,F,\ldots,F)\subseteq\gamma_{c+1}(F)\cap R\) and \(\gamma_{c+1}(S,F,\ldots,F)\subseteq\gamma_{c+1}(F)\cap S\). Also, by the proof of Lemma 2.1, we know that \[\gamma_{c+1}(L)\cap M\cong\dfrac{\gamma_{c+1}(F)\cap S}{\gamma_{c+1}(F)\cap R}\cong\dfrac{\dfrac{\gamma_{c+1}(F)\cap S}{\gamma_{c+1}(S,F,\ldots,F)}}{\dfrac{\gamma_{c+1}(F)\cap R}{\gamma_{c+1}(S,F,\ldots,F)}}=\dfrac{\mathcal{M}^{(c)}(L/M)}{\dfrac{\gamma_{c+1}(F)\cap R}{\gamma_{c+1}(S,F,\ldots,F)}}.\] Thus the proof is completed by considering the epimorphism \(\alpha:\mathcal{M}^{(c)}(L/M)\longrightarrow\gamma_{c+1}(L)\cap M=\dfrac{\mathcal{M}^{(c)}(L/M)}{\dfrac{\gamma_{c+1}(F)\cap R}{\gamma_{c+1}(S,F,\ldots,F)}}\).
The following corollary follows immediately from the previous lemma.
**Corollary 3.2**.: _Let \(L\) be a finite-dimensional \(n\)-Lie algebra, and let \(M\) be an ideal of \(L\). Then we have the following inequality:_
\[\dim\mathcal{M}^{(c)}(L/M)\leq\dim\mathcal{M}^{(c)}(L)+\dim(\gamma_{c+1}(L)\cap M).\]
Proof.: From Lemma 3.1, we have \(\gamma_{c+1}(L)\cap M=K/S\) and \(S\cong\mathcal{M}^{(c)}(L)\). Thus
\[\dim(\gamma_{c+1}(L)\cap M)=\dim K-\dim S,\qquad\dim S=\dim\mathcal{M}^{(c)}(L).\]
Hence,
\[\dim(\gamma_{c+1}(L)\cap M)=\dim K-\dim\mathcal{M}^{(c)}(L).\]
On the other hand, since \(\mathcal{M}^{(c)}(L/M)\) is a homomorphic image of \(K\), so \(\dim\mathcal{M}^{(c)}(L/M)\leq\dim K\). Therefore,
\[\dim\mathcal{M}^{(c)}(L/M)\leq\dim\mathcal{M}^{(c)}(L)+\dim(\gamma_{c+1}(L) \cap M).\]
**Theorem 3.3**.: _Let \(L\) be a finite-dimensional \(n\)-Lie algebra with free presentation \(F/R\), and let \(M\) be a central ideal of \(L\) with free presentation \(S/R\), for some ideal \(S\) of \(F\). Then_
\[\dim\mathcal{M}^{(c)}(L)+\dim(\gamma_{c+1}(L)\cap M)\leq\dim\mathcal{M}^{(c)} (L/M)+\dim\mathcal{M}^{(c)}(M)+\dim((L/M)^{ab}\;{}^{c}\otimes M).\]
Proof.: Since \(M\) is central, so \(\gamma_{2}(S,F,\ldots,F)\subseteq R\). By Corollary 3.2 and the proof of part (3) of Lemma 3.1, we know that \(\mathcal{M}^{(c)}(L/M)\) is an isomorphic image of \((\gamma_{c+1}(F)\cap S)/\gamma_{c+1}(R,F,\ldots,F)\) with the kernel ideal \(\gamma_{c+1}(S,F,\ldots,F)/\gamma_{c+1}(R,F,\ldots,F)\). Also, we have
\[\dim\mathcal{M}^{(c)}(L/M)+\dim\frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}=\dim\mathcal{M}^{(c)}(L)+\dim(\gamma_{c+1}(L)\cap M).\]
On the other hand, we know that
\[\frac{\gamma_{c+1}(S,F,\ldots,F)/\gamma_{c+1}(R,F,\ldots,F)}{(\gamma_{c+1}(R,F,\ldots,F)+\gamma_{c+1}(S))/\gamma_{c+1}(R,F,\ldots,F)}\cong\frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)+\gamma_{c+1}(S)}.\]
Hence
\[\dim\frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}-\dim\frac{ \gamma_{c+1}(R,F,\ldots,F)+\gamma_{c+1}(S)}{\gamma_{c+1}(R,F,\ldots,F)}=\dim \frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)+\gamma_{c+1}(S)}.\]
Also, by the fact that \(M\) is a central ideal, we have

\[\gamma_{c+1}(S)\subseteq\gamma_{2}(S,F,\ldots,F)\subseteq R, \tag{3.3}\] \[\gamma_{2}(R,F,\ldots,F)\subseteq R. \tag{3.4}\]
Therefore, it follows from equations (3.3) and (3.4) that
\[\gamma_{2}(R,F,\dots,F)+\gamma_{c+1}(S)\subseteq R. \tag{3.5}\]
On the other hand, since \(R\subseteq S\), so
\[\gamma_{2}(R,F,\dots,F)\subseteq\gamma_{c+1}(S), \tag{3.6}\]
and hence equations (3.5) and (3.6) imply that
\[\gamma_{2}(R,F,\dots,F)+\gamma_{c+1}(S)\subseteq\gamma_{c+1}(S)\cap R. \tag{3.7}\]
Thus
\[\dim\mathcal{M}^{(c)}(L)+\dim(\gamma_{c+1}(L)\cap M) =\dim\mathcal{M}^{(c)}(L/M)+\dim\frac{\gamma_{c+1}(S,F,\dots,F)}{\gamma_{c+1}(R,F,\dots,F)}\] \[=\dim\mathcal{M}^{(c)}(L/M)+\dim\frac{\gamma_{c+1}(S,F,\dots,F)}{\gamma_{c+1}(R,F,\dots,F)+\gamma_{c+1}(S)}+\dim\frac{\gamma_{c+1}(R,F,\dots,F)+\gamma_{c+1}(S)}{\gamma_{c+1}(R,F,\dots,F)}\] (by Lemma 2.2) \[\leq\dim\mathcal{M}^{(c)}(L/M)+\dim\left((L/M)^{ab}\ {}^{c}\otimes M\right)+\dim\frac{\gamma_{c+1}(R,F,\dots,F)+\gamma_{c+1}(S)}{\gamma_{c+1}(R,F,\dots,F)}\] (by (3.7)) \[\leq\dim\mathcal{M}^{(c)}(L/M)+\dim\left((L/M)^{ab}\ {}^{c}\otimes M\right)+\dim\frac{\gamma_{c+1}(S)\cap R}{\gamma_{c+1}(R,F,\dots,F)}\] \[\leq\dim\mathcal{M}^{(c)}(L/M)+\dim\mathcal{M}^{(c)}(M)+\dim\left((L/M)^{ab}\ {}^{c}\otimes M\right).\]
The following proposition plays a fundamental and important role in obtaining some main results of this section.
**Proposition 3.4**.: _Let \(L\) be an \(n\)-Lie algebra and let \(M\) be its ideal. Then we have the following exact sequences:_
\[\ker(\mu_{M}^{c})\longrightarrow\mathcal{M}^{(c)}(L)\longrightarrow\mathcal{M}^{(c)}(L/M)\longrightarrow\frac{M\cap\gamma_{c+1}(L)}{\gamma_{c+1}(M,L,\dots,L)}\longrightarrow 0; \tag{3.8}\] \[M\wedge^{c}\frac{L}{\gamma_{r+1}(L)}\longrightarrow\mathcal{M}^{(c)}(L)\longrightarrow\mathcal{M}^{(c)}(L/M)\longrightarrow M\cap\gamma_{c+1}(L)\longrightarrow 0, \tag{3.9}\]
_with the condition that \(M\) is \(r\)-central and \(r\leq c\)._
Proof.: Let \(0\longrightarrow R\longrightarrow F\longrightarrow L\longrightarrow 0\) be a free presentation of \(L\) and let \(M\cong S/R\), for some ideal \(S\) of \(F\). Then we have the following exact sequence:
\[0\longrightarrow\frac{R\cap\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}\longrightarrow\frac{R\cap\gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\ldots,F)}\longrightarrow\frac{S\cap\gamma_{c+1}(F)}{\gamma_{c+1}(S,F,\ldots,F)}\longrightarrow\frac{(S\cap\gamma_{c+1}(F))+R}{\gamma_{c+1}(S,F,\ldots,F)+R}\longrightarrow 0. \tag{3.10}\]
Since \(r\leq c\) and \(M\) is \(r\)-central, we have
\[\gamma_{c+1}(S,F,\ldots,F)\subseteq\gamma_{r+1}(S,F,\ldots,F)\subseteq R=0_{S /R}. \tag{3.11}\]
We know that \(L\cong F/R\) and \(M\cong S/R\), and hence \(F/S\) is a free presentation of \(L/M\). Thus according to the definition of \(c\)-nilpotent multiplier of \(L/M\), we have
\[\mathcal{M}^{(c)}(L)=\frac{R\cap\gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\ldots,F)}, \qquad\mathcal{M}^{(c)}(L/M)=\frac{S\cap\gamma_{c+1}(F)}{\gamma_{c+1}(S,F, \ldots,F)}. \tag{3.12}\]
Also,
\[\frac{M\cap\gamma_{c+1}(L)}{\gamma_{c+1}(M,L,\ldots,L)}=\frac{(S/R)\cap\gamma_{c+1}(F/R)}{\gamma_{c+1}(S/R,F/R,\ldots,F/R)}=\frac{\frac{S}{R}\cap\frac{\gamma_{c+1}(F)+R}{R}}{\frac{\gamma_{c+1}(S,F,\ldots,F)+R}{R}}=\frac{(S\cap\gamma_{c+1}(F))+R}{\gamma_{c+1}(S,F,\ldots,F)+R}. \tag{3.13}\]
By Corollary 2.7, there exists an epimorphism \(\psi:\ker(\mu_{M}^{c})\longrightarrow\frac{R\cap\gamma_{c+1}(S,F,\ldots,F)}{ \gamma_{c+1}(R,F,\ldots,F)}\), and so
\[\frac{R\cap\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}\cong\frac{ \ker(\mu_{M}^{c})}{\ker\psi}. \tag{3.14}\]
Therefore, by replacing the terms of sequence (3.10) by (3.11), (3.12), (3.13), and (3.14), the sequence (3.8) is obtained.
On the other hand, from (3.11), we have
\[\frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)} \xrightarrow{\theta}\frac{R\cap\gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\ldots,F)}. \tag{3.15}\]
Moreover, since \(M\) is \(r\)-central and \(r\leq c\), by part (2) of Proposition 2.6 there is an epimorphism \(\lambda:M\wedge^{c}\frac{L}{\gamma_{r+1}(L)}\longrightarrow T_{c+1}=\frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}\). Hence

\[\frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}\cong\frac{M\wedge^{c}\frac{L}{\gamma_{r+1}(L)}}{\ker\lambda}. \tag{3.16}\]
Now, by putting (3.12), (3.13), (3.14), (3.15), and (3.16) in the sequence (3.10), we obtain the sequence (3.9).
The following corollary is an immediate consequence of Proposition 3.4 and helps us to prove the useful Proposition 3.7.
**Corollary 3.5**.: _Let \(L\) be a finite-dimensional \(n\)-Lie algebra and let \(M\) be its ideal. Then the following properties hold:_
1. _The_ \(n\)_-Lie algebra_ \(\mathcal{M}^{(c)}(L)\) _has dimension finite._
2. \(\dim\mathcal{M}^{(c)}(L/M)\leq\dim\mathcal{M}^{(c)}(L)+\dim\left(\frac{M\cap\gamma_{c+1}(L)}{\gamma_{c+1}(M,L,\ldots,L)}\right)\)_._
3. \(\dim\mathcal{M}^{(c)}(L)+\dim(M\cap\gamma_{c+1}(L))=\dim\mathcal{M}^{(c)}(L/ M)+\dim\frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}\)_, where_ \(F/R\) _and_ \(S/R\) _are free presentations of_ \(L\) _and_ \(M\)_, respectively. In particular,_ \[\dim\mathcal{M}^{(c)}(L)+\dim\gamma_{c+1}(L)=\dim\left(\gamma_{c+1}\left( \frac{F}{\gamma_{c+1}(R,F,\ldots,F)}\right)\right).\]
4. _If_ \(\mathcal{M}^{(c)}(L)=0\)_, then_ \(\mathcal{M}^{(c)}(L/M)\cong\frac{M\cap\gamma_{c+1}(L)}{\gamma_{c+1}(M,L, \ldots,L)}\)_._
5. _If_ \(M\) _is an_ \(r\)_-central ideal, that is,_ \(M\subseteq Z_{r}(L)\) _and_ \(r\leq c\)_, then_ \[\dim\mathcal{M}^{(c)}(L)+\dim(M\cap\gamma_{c+1}(L))\leq\dim\mathcal{M}^{(c)} (L/M)+\dim\left(M\wedge^{c}\frac{L}{\gamma_{r+1}(L)}\right).\]
Proof.:
1. It is clear according to the short exact sequence (3.10) in the proof of Proposition 3.4.
2. According to part (1) of Proposition 3.4, this is clear.
3. Since \(\frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}\) is an isomorphic image of \(M\wedge^{c}\frac{L}{\gamma_{r+1}(L)}\), so by replacing \(M\wedge^{c}\frac{L}{\gamma_{r+1}(L)}\) with \(\frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}\) in equation (3.9), we have the following sequence: \[0\longrightarrow\frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F) }\longrightarrow\mathcal{M}^{(c)}(L)\longrightarrow\mathcal{M}^{(c)}(L/M) \longrightarrow M\cap\gamma_{c+1}(L)\longrightarrow 0.\]
Hence
\[\dim\mathcal{M}^{(c)}(L)+\dim(M\cap\gamma_{c+1}(L))=\dim\mathcal{M}^{(c)}(L/M)+ \dim\frac{\gamma_{c+1}(S,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}.\]
If \(M=L\), then \(\dim\mathcal{M}^{(c)}(L/M)=0\) and
\[\gamma_{c+1}\left(\frac{F}{\gamma_{c+1}(R,F,\ldots,F)}\right)=\frac{\gamma_{c +1}(F,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}.\]
Therefore
\[\dim\mathcal{M}^{(c)}(L)+\dim\gamma_{c+1}(L)=\dim\left(\gamma_{c+1}\left( \frac{F}{\gamma_{c+1}(R,F,\ldots,F)}\right)\right).\]
(4). If \(\mathcal{M}^{(c)}(L)=0\), then by part (1) of Proposition 3.4, the proof is completed.
(5). According to part (2) of Proposition 3.4, the result is obtained immediately.
The short exact sequence \(0\longrightarrow M\longrightarrow L^{*}\longrightarrow L\longrightarrow 0\) of \(n\)-Lie algebras is called a \(c\)-stem cover if \(M\subseteq Z_{c}(L^{*})\cap\gamma_{c+1}(L^{*})\) and \(M\cong\mathcal{M}^{(c)}(L)\). Then \(L^{*}\) is called a \(c\)-cover of \(L\). In particular, for \(c=1\) this is the usual definition of a cover.
Moreover, an \(n\)-Lie algebra \(L\) is called \(c\)-perfect, if \(\gamma_{c+1}(L)=L\).
**Corollary 3.6**.: _Let \(L\) be a \(c\)-perfect finite-dimensional \(n\)-Lie algebra with \(c\)-cover \(L^{*}\). Then \(\mathcal{M}^{(c)}(L^{*})\) is trivial._
Proof.: By the definition of \(c\)-cover, there exists an ideal \(M\) of \(L\) such that
\[L^{*}/M\cong L,\qquad M\subseteq Z_{c}(L^{*})\cap\gamma_{c+1}(L^{*}),\qquad \mathcal{M}^{(c)}(L)\cong M.\]
Since \(M\subseteq\gamma_{c+1}(L^{*})\) and \(L\cong L^{*}/M\) is \(c\)-perfect, that is, \(L=\gamma_{c+1}(L)\), we get \(L^{*}=\gamma_{c+1}(L^{*})+M=\gamma_{c+1}(L^{*})\), so the wedge term below vanishes. Since \(M\) is also \(c\)-central, part (5) of Corollary 3.5 gives
\[\dim\mathcal{M}^{(c)}(L^{*})+\dim(M\cap\gamma_{c+1}(L^{*}))\leq\dim\mathcal{M }^{(c)}(L^{*}/M)+\underbrace{\dim(M\wedge^{c}\frac{L^{*}}{\gamma_{c+1}(L^{*}) })}_{0}.\]
Since \(M\subseteq\gamma_{c+1}(L^{*})\) and \(M\cong\mathcal{M}^{(c)}(L)\), we have \(\dim(M\cap\gamma_{c+1}(L^{*}))=\dim M=\dim\mathcal{M}^{(c)}(L)=\dim\mathcal{M}^{(c)}(L^{*}/M)\). Thus

\[\dim\mathcal{M}^{(c)}(L^{*})+\dim\mathcal{M}^{(c)}(L)\leq\dim\mathcal{M}^{(c)}(L^{*}/M)=\dim\mathcal{M}^{(c)}(L).\]

Hence \(\dim\mathcal{M}^{(c)}(L^{*})\leq 0\), which implies that \(\dim\mathcal{M}^{(c)}(L^{*})=0\).
The author in [2, Proposition 1.6] proved that if \(L\) is an \(n\)-Lie algebra with the free presentation \(F/R\) and the map \(\pi:F/\gamma_{c+1}(R,F,\ldots,F)\longrightarrow F/R\) is an epimorphism, then \(Z_{c}^{*}(L)=\pi\left(Z_{c}(F/\gamma_{c+1}(R,F,\ldots,F))\right)\). Using this fact, we prove the following practical theorem.
**Proposition 3.7**.: _Let \(L\) be a finite-dimensional \(n\)-Lie algebra and let \(M\) be an ideal of \(L\) with \(M\subseteq Z_{c}(L)\). Then the following statements are equivalent:_
1. \(M\subseteq Z_{c}^{*}(L)\)_;_
2. _The natural map_ \(\alpha:\mathcal{M}^{(c)}(L)\longrightarrow\mathcal{M}^{(c)}(L/M)\) _is a monomorphism;_
3. \(\dim\mathcal{M}^{(c)}(L/M)=\dim\mathcal{M}^{(c)}(L)+\dim(M\cap\gamma_{c+1}(L))\)_._
Proof.: Similar to [2, Lemma 1.7], it can be proved that \(M\subseteq Z_{c}^{*}(L)\) if and only if \(\alpha:\mathcal{M}^{(c)}(L)\longrightarrow\mathcal{M}^{(c)}(L/M)\) is a monomorphism.
If \(M\subseteq Z_{c}^{*}(L)\), then by part (4) of Lemma 3.1 and part (2), we have the following sequence:
\[0\longrightarrow\mathcal{M}^{(c)}(L)\longrightarrow\mathcal{M}^{(c)}(L/M) \longrightarrow M\cap\gamma_{c+1}(L)\longrightarrow 0.\]
Hence
\[\dim\mathcal{M}^{(c)}(L/M)=\dim\mathcal{M}^{(c)}(L)+\dim(M\cap\gamma_{c+1}(L)).\]
Conversely, if part (3) holds, then by comparing part (3) of Corollary 3.5 with part (2), we get \(\gamma_{c+1}(S,F,\ldots,F)\subseteq\gamma_{c+1}(R,F,\ldots,F)\). On the other hand, since \(R\subseteq S\), we have \(\gamma_{c+1}(R,F,\ldots,F)\subseteq\gamma_{c+1}(S,F,\ldots,F)\). Therefore \(\gamma_{c+1}(S,F,\ldots,F)=\gamma_{c+1}(R,F,\ldots,F)\), and this fact is equivalent to \(S/\gamma_{c+1}(R,F,\ldots,F)\subseteq Z_{c}(F/\gamma_{c+1}(R,F,\ldots,F))\). From [2, Proposition 1.6] we know that \(Z_{c}^{*}(L)=\pi\left(Z_{c}(F/\gamma_{c+1}(R,F,\ldots,F))\right)\), and hence \(\pi(S/\gamma_{c+1}(R,F,\ldots,F))\subseteq Z_{c}^{*}(L)\). Therefore, \(M\subseteq Z_{c}^{*}(L)\), which completes the proof.
The following proposition states that every \(c\)-perfect \(n\)-Lie algebra has at least one \(c\)-cover.

**Proposition 3.8**.: _Let \(L\) be a \(c\)-perfect \(n\)-Lie algebra. Then \(L\) has at least one \(c\)-cover._
Proof.: Let \(\pi:F\longrightarrow L\) be an epimorphism with \(\ker\pi=R\), so that \(F/R\) is a free presentation of \(L\). Then
\[0\longrightarrow\frac{R}{\gamma_{c+1}(R,F,\ldots,F)}\longrightarrow\frac{F} {\gamma_{c+1}(R,F,\ldots,F)}\stackrel{{\bar{\pi}}}{{\longrightarrow }}L\longrightarrow 0,\]
is the free \(c\)-central extension of \(L\). Since \(\bar{\pi}(\gamma_{c+1}(F)/\gamma_{c+1}(R,F,\ldots,F))=L\), restricting \(\bar{\pi}\) to \(\gamma_{c+1}(F)/\gamma_{c+1}(R,F,\ldots,F)\), we have
\[\ker\bar{\pi}=\ker\pi\bigcap\frac{\gamma_{c+1}(F)}{\gamma_{c+1}( R,F,\ldots,F)} =\frac{R}{\gamma_{c+1}(R,F,\ldots,F)}\bigcap\frac{\gamma_{c+1}(F)}{ \gamma_{c+1}(R,F,\ldots,F)}\] \[=\frac{R\cap\gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\ldots,F)}= \mathcal{M}^{(c)}(L).\]
Thus we get the following \(c\)-central extension:
\[0\longrightarrow\mathcal{M}^{(c)}(L)\longrightarrow\frac{\gamma_{c+1}(F)}{ \gamma_{c+1}(R,F,\ldots,F)}\longrightarrow L\longrightarrow 0. \tag{3.17}\]
On the other hand, \(L\cong F/R\) is \(c\)-perfect, so
\[\frac{F}{R}=\gamma_{c+1}\left(\frac{F}{R}\right)=\frac{\gamma_{c+1}(F)+R}{R},\qquad\text{that is,}\qquad F=\gamma_{c+1}(F)+R.\]
Thus
\[\gamma_{c+1}\left(\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\ldots,F)}\right) =\frac{\gamma_{c+1}(\gamma_{c+1}(F),F,\ldots,F)+\gamma_{c+1}(R,F, \ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}\] \[=\frac{\gamma_{c+1}(\gamma_{c+1}(F)+R,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}\] \[=\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\ldots,F)}.\]
Therefore, \(\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\ldots,F)}\) is \(c\)-perfect and so (3.17) is a stem \(c\)-central extension of \(L\).
In the following theorem, we give upper and lower bounds for the dimension of \(\mathcal{M}^{(c)}(L)\).
**Theorem 3.9**.: _Let \(L\) be a finite-dimensional nilpotent \(n\)-Lie algebra of class \(m\), let \(d=d(L)\), and let \(\dim\gamma_{i+1}(L)=a_{i}\), for all \(i\geq 1\). Then the following inequalities hold:_
\[l_{d}^{n}(c+1)\leq\dim\mathcal{M}^{(c)}(L)+\dim\gamma_{c+1}(L)\leq l_{d}^{n}( c+1)+\sum_{i=1}^{c}a_{i}d^{c(n-1)-i+1}.\]
_If \(L\) is abelian, then the lower and upper bounds are attained._
Proof.: Since \(L\) is nilpotent, \(\Phi(L)=L^{2}\), and hence \(L/L^{2}\) is an abelian \(n\)-Lie algebra of dimension \(d\). If we put \(M=L^{2}\), then by Proposition 1.5 we have
\[\dim\mathcal{M}^{(c)}(L/M)=\dim\mathcal{M}^{(c)}(L/L^{2})=l_{d}^{n}(c+1). \tag{3.18}\]
Thus by putting \(M=L^{2}\) in part (2) of Corollary 3.5, we obtain
\[\dim\mathcal{M}^{(c)}(L/L^{2})\leq\dim\mathcal{M}^{(c)}(L)+\dim\frac{L^{2} \cap\gamma_{c+1}(L)}{\gamma_{c+1}(L^{2},L,\ldots,L)} =\dim\mathcal{M}^{(c)}(L)+\dim\frac{\gamma_{c+1}(L)}{\gamma_{c+2 }(L)} \tag{3.19}\]
Therefore, by (3.18) and (3.19), we have
\[l_{d}^{n}(c+1)\leq\dim\mathcal{M}^{(c)}(L)+\dim\gamma_{c+1}(L). \tag{3.20}\]
Now let \(0\longrightarrow R\longrightarrow F\longrightarrow L\longrightarrow 0\) be a free presentation of \(L\), and put \(M=L^{2}\) in part (1) of Proposition 2.6. Then, since \(L=F/R\),
\[M=L^{2}=\left(\frac{F}{R}\right)^{2}=\frac{F^{2}+R}{R},\]
and hence \(T_{c+1}=\frac{\gamma_{c+1}(F^{2}+R,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}\), which is an isomorphic image of \(L^{2}\wedge^{c}L\) by part (1) of Proposition 2.6. That is, there is an epimorphism \(\psi:L^{2}\wedge^{c}L\longrightarrow T_{c+1}\) such that
\[\frac{L^{2}\wedge^{c}L}{\ker\psi}\cong T_{c+1}=\frac{\gamma_{c+1}(F^{2}+R,F, \ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}. \tag{3.21}\]
Now, if \(M=L=F/R\), then \(T^{\prime}_{c+1}=\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\ldots,F)}\). Thus
\[\frac{T^{\prime}_{c+1}}{T_{c+1}}=\frac{\frac{\gamma_{c+1}(F)}{ \gamma_{c+1}(R,F,\ldots,F)}}{\frac{\gamma_{c+1}(F^{2}+R,F, \ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}}\cong\frac{\gamma_{c+1}(F)}{\gamma_{c+1 }(F^{2}+R,F,\ldots,F)}. \tag{3.22}\]
On the other hand, by (3.22) we know that there exists the following epimorphism:
\[\theta:T^{\prime}_{c+1}=\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\ldots,F)} \stackrel{{ epi.}}{{\longrightarrow}}\frac{T^{\prime}_{c+1}}{T _{c+1}}=\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(F^{2}+R,F,\ldots,F)}.\]
Then \(\ker\theta=\frac{\gamma_{c+1}(F^{2}+R,F,\ldots,F)}{\gamma_{c+1}(R,F,\ldots,F)}\), and so we obtain the following short exact sequence:
\[0\longrightarrow\ker\theta\stackrel{{ inc.}}{{ \longrightarrow}}T^{\prime}_{c+1}\stackrel{{\theta}}{{ \longrightarrow}}\frac{T^{\prime}_{c+1}}{T_{c+1}}\longrightarrow 0.\]
That is,
\[0\longrightarrow\frac{\gamma_{c+1}(F^{2}+R,F,\ldots,F)}{\gamma_{c+1}(R,F, \ldots,F)}\stackrel{{ inc.}}{{\longrightarrow}}\frac{\gamma_{c+1}(F)}{ \gamma_{c+1}(R,F,\ldots,F)}\stackrel{{\theta}}{{\longrightarrow }}\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(F^{2}+R,F,\ldots,F)}\longrightarrow 0. \tag{3.23}\]
Thus by (3.21) and (3.23), we have
\[L^{2}\wedge^{c}L\longrightarrow\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(R,F, \ldots,F)}\stackrel{{\theta}}{{\longrightarrow}}\frac{\gamma_{c +1}(F)}{\gamma_{c+1}(F^{2}+R,F,\ldots,F)}\longrightarrow 0. \tag{3.24}\]
Now, by part (3) of Corollary 3.5, and then using (3.24) and Theorem 1.4, we have
\[\dim\mathcal{M}^{(c)}(L)+\dim\gamma_{c+1}(L) =\dim\left(\gamma_{c+1}\left(\frac{F}{\gamma_{c+1}(R,F,\dots,F)} \right)\right)\] \[=\dim\left(\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\dots,F)}\right)\] \[\leq\dim(L^{2}\wedge^{c}L)+\dim\left(\frac{\gamma_{c+1}(F)}{ \gamma_{c+1}(F^{2},F,\dots,F)}\right) \tag{3.25}\] \[=\dim(L^{2}\wedge^{c}L)+\dim\left(\frac{\gamma_{c+1}(F)}{\gamma_{ c+2}(F)}\right).\]
We know that \(L^{2}\wedge^{c}L=(\dots((L^{2}\wedge L)\wedge L)\dots)\wedge L\) and also that the generators of \(L^{2}\wedge L\) are of the form \(a_{1}\wedge l_{2}\wedge\dots\wedge l_{n}\), \(a_{1}\wedge a_{2}\wedge l_{3}\wedge\dots\wedge l_{n},\dots\), and \(a_{1}\wedge a_{2}\wedge\dots\wedge a_{n-1}\wedge l_{n}\), where \(a_{i}\in L^{2}\), and \(l_{j}\in L\), for all \(1\leq i\leq n-1\) and \(2\leq j\leq n\). Indeed, by using equation (1.1), one can easily check that \(a_{1}\wedge a_{2}\wedge l_{3}\wedge\dots\wedge l_{n}\in L^{3}\wedge L\), and similarly, \(a_{1}\wedge\dots\wedge a_{i}\wedge l_{i+1}\wedge\dots\wedge l_{n}\in L^{i+1}\wedge L\). Thus
\[\dim(L^{2}\wedge^{c}L)\leq\sum_{i=1}^{c}a_{i}d^{c(n-1)-i+1}. \tag{3.26}\]
Therefore, by (3.25) and (3.26), we obtain
\[\dim\mathcal{M}^{(c)}(L)+\dim\gamma_{c+1}(L)\leq l_{d}^{n}(c+1)+\sum_{i=1}^{c }a_{i}d^{c(n-1)-i+1}.\]
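Note that if \(L\) is abelian, then \(\gamma_{c+1}(L)=0\) and \(a_{i}=0\) for all \(i\geq 1\), so both bounds in Theorem 3.9 collapse to \(\dim\mathcal{M}^{(c)}(L)=l_{d}^{n}(c+1)\), in agreement with Proposition 1.5; this is the sense in which the bounds are attained in the abelian case.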
The next theorem states an upper bound for the dimension of \(c\)-nilpotent multiplier of finite-dimensional nilpotent \(n\)-Lie algebras of class \(m\).
**Theorem 3.10**.: _Let \(L\) be a \(d\)-dimensional nilpotent \(n\)-Lie algebra of class \(m\). Then_
\[\dim\mathcal{M}^{(c)}(L)\leq\sum_{k=0}^{c}l_{d}^{n}(m+k), \tag{3.27}\]
_when \(m\leq c\), and we have_
\[\dim\mathcal{M}^{(c)}(L)\leq\sum_{k=1}^{m}l_{d}^{n}(c+k), \tag{3.28}\]
_for all \(c\) in which \(m\geq c+1\), and if \(L\) is abelian, then the equality is attained._
Proof.: Let \(L\cong\frac{F}{R}\). First assume that \(m\leq c\). Since \(\gamma_{m+1}(L)=0\), we have
\[0=\gamma_{m+1}(L)=\gamma_{m+1}\left(\frac{F}{R}\right)=\frac{\gamma_{m+1}(F)+R}{R}\cong\frac{\gamma_{m+1}(F)}{\gamma_{m+1}(F)\cap R},\]
which implies that
\[\gamma_{m+1}(F)=\gamma_{m+1}(F)\cap R,\]
that is,
\[\gamma_{m+1}(F)\subseteq R.\]
Also, we know that \(\gamma_{m+1}(F)\subseteq\gamma_{m}(F)\). Thus \(\gamma_{m+1}(F)\subseteq\gamma_{m}(F)\cap R\). Therefore,
\[\gamma_{c+m+1}(F)\subseteq\gamma_{c+m}(F)\subseteq\cdots\subseteq\gamma_{m+ 2}(F)\subseteq\gamma_{m+1}(F)\subseteq R\cap\gamma_{m}(F).\]
Hence
\[\dim\frac{R\cap\gamma_{m}(F)}{\gamma_{c+m+1}(F)}\leq\dim\frac{\gamma_{m}(F)}{ \gamma_{c+m+1}(F)}=\sum_{k=0}^{c}l_{d}^{n}(m+k). \tag{3.29}\]
On the other hand, we know that
\[\dim\mathcal{M}^{(c)}(L)=\dim\frac{R\cap\gamma_{c+1}(F)}{\gamma_{c+1}(R,F,\ldots,F)}\leq\dim\frac{R\cap\gamma_{m}(F)}{\gamma_{c+1}(R,F,\ldots,F)}\leq\dim\frac{R\cap\gamma_{m}(F)}{\gamma_{c+m+1}(F)}, \tag{3.30}\]
since \(\gamma_{c+m+1}(F)=\gamma_{c+1}(\gamma_{m+1}(F),F,\ldots,F)\subseteq\gamma_{c+1}(R,F,\ldots,F)\).
Thus by (3.29) and (3.30), we obtain that
\[\dim\mathcal{M}^{(c)}(L)\leq\sum_{k=0}^{c}l_{d}^{n}(m+k).\]
Now, suppose that \(c+1\leq m\). Then
\[\dim\mathcal{M}^{(c)}(L)=\dim\frac{R\cap\gamma_{c+1}(F)}{\gamma_{c+1 }(R,F,\dots,F)} =\dim\frac{\Big{(}R\cap\gamma_{c+1}(F)\Big{)}/\gamma_{m+1}(F)}{ \gamma_{c+1}(R,F,\dots,F)/\gamma_{m+1}(F)}\] \[\leq\dim\frac{\gamma_{c+1}(F)/\gamma_{m+1}(F)}{\gamma_{c+1}(R,F, \dots,F)/\gamma_{m+1}(F)}\] \[\leq\dim\frac{\gamma_{c+1}(F)}{\gamma_{m+1}(F)}-\dim\frac{\gamma_ {c+1}(R,F,\dots,F)}{\gamma_{m+1}(F)}\] \[\leq\dim\frac{\gamma_{c+1}(F)}{\gamma_{m+1}(F)}\] \[\leq\dim\frac{\gamma_{c+1}(F)}{\gamma_{c+m+1}(F)}=\sum_{k=1}^{m} l_{d}^{n}(c+k).\]
**Corollary 3.11**.: _Let \(K\) be an \(n\)-Lie algebra over a field \(\mathbb{F}\) of characteristic zero, and suppose that the algebra \(L=\frac{K}{Z_{c}(K)}\) satisfies the conditions stated in Theorem 3.9. Then_
\[\dim\gamma_{c+1}(K)\leq l_{d}^{n}(c+1)+\sum_{i=1}^{c}a_{i}d^{c(n-1)-i+1}.\]
Proof.: Let \(F/R\) be a free presentation of \(K\) and let \(S/R\) correspond to \(Z_{c}(K)\) under this presentation, where \(S\) is an ideal of \(F\). Then we have
\[\dim\gamma_{c+1}(L)=\dim\gamma_{c+1}(K/Z_{c}(K))=\dim\gamma_{c+1}(F/S)=\dim\frac{\gamma_{c+1}(F)+S}{S}=\dim\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(F)\cap S}. \tag{3.31}\]
Hence
\[\dim\gamma_{c+1}(L)=\dim\gamma_{c+1}(K/Z_{c}(K)) =\dim\frac{\gamma_{c+1}(K)+Z_{c}(K)}{Z_{c}(K)} \tag{3.32}\] \[=\dim\gamma_{c+1}(K)+\dim Z_{c}(K)-\dim Z_{c}(K)=\dim\gamma_{c+1 }(K).\]
Therefore, by equations (3.31) and (3.32), we have
\[\dim\gamma_{c+1}(K)=\dim\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(F)\cap S} =\dim\left(\frac{\gamma_{c+1}(F)/\gamma_{c+1}(S,F,\ldots,F)}{\gamma _{c+1}(F)\cap S/\gamma_{c+1}(S,F,\ldots,F)}\right)\] \[\leq\dim\left(\frac{\gamma_{c+1}(F)}{\gamma_{c+1}(S,F,\ldots,F)}\right)\] \[=\dim\mathcal{M}^{(c)}(K/Z_{c}(K))+\dim\gamma_{c+1}(K/Z_{c}(K))\] \[=\dim\mathcal{M}^{(c)}(L)+\dim\gamma_{c+1}(L)\] \[\leq l_{d}^{n}(c+1)+\sum_{i=1}^{c}a_{i}d^{c(n-1)-i+1}.\]
In the next proposition, we give another upper bound for the dimension of the terms of the lower central series, which is similar to [13, Lemma 14].
**Proposition 3.12**.: _Let \(L\) be an \(n\)-Lie algebra over a field \(\mathbb{F}\) such that \(\dim(L/Z_{c}(L))=d\). Then \(\dim\gamma_{c+1}(L)\leq l_{d}^{n}(c+1)\)._
Proof.: Let \(X=\{\bar{x}_{1},\bar{x}_{2},\ldots,\bar{x}_{d}\}\) be a basis for \(L/Z_{c}(L)\). Then each element of \(L\) can be written as \(\left(\sum\limits_{i=1}^{d}c_{i}\bar{x}_{i}\right)+z\), where \(c_{i}\in\mathbb{F}\) and \(z\in Z_{c}(L)\), for all \(i=1,2,\ldots,d\). If \(F/R\) is a free presentation of \(L\), then
\[\dim\gamma_{c+1}(L)=\dim\gamma_{c+1}(F/R)=\dim\frac{F^{c+1}+R}{R} =\dim\frac{F^{c+1}}{F^{c+1}\cap R}=\dim\frac{F^{c+1}/F^{c+2}}{(F^{c+1} \cap R)/F^{c+2}}\] \[\leq\dim\frac{F^{c+1}}{F^{c+2}}=l_{d}^{n}(c+1).\]
The following corollary shows that, under some conditions, the converse of Proposition 1.5 holds.
**Corollary 3.13**.: _Let \(L\) be an \(n\)-Lie algebra with dimension \(d\). Then the following properties hold:_
1. \(\dim\mathcal{M}^{(c)}(L)+\dim\gamma_{c+1}(L)\leq l_{d}^{n}(c+1)\)_. In particular,_ \[\dim\mathcal{M}(L)\leq\dim\mathcal{M}(L)+\dim L^{2}\leq l_{d}^{n}(2).\]
2. _If_ \(\dim\mathcal{M}^{(c)}(L)=l_{d}^{n}(c+1)\)_, then_ 1. _There exists an epimorphism_ \(\theta:\mathcal{M}^{(c)}(L)\longrightarrow\mathcal{M}^{(c)}(L/L^{2})\)_, which is obtained by choosing_ \(M=L^{2}\) _in the second part of Proposition_ 3.4_._ 2. _If_ \(\ker\theta=0\)_, then_ \(L\) _is abelian._
Proof.:
1. Put \(K=F/\gamma_{c+1}(R,F,\ldots,F)\), in Proposition 3.12, where \(F/R\) is a free presentation of \(L\). Then \[\dim\frac{K}{Z_{c}(K)} =\dim\left(\frac{F/\gamma_{c+1}(R,F,\ldots,F)}{Z_{c}(F/\gamma_{c+1 }(R,F,\ldots,F))}\right)\] \[=\dim\left(\frac{F/\gamma_{c+1}(R,F,\ldots,F)}{(Z_{c}(F)+\gamma_ {c+1}(R,F,\ldots,F))/\gamma_{c+1}(R,F,\ldots,F)}\right)\] \[=\dim\left(\frac{F}{Z_{c}(F)+\gamma_{c+1}(R,F,\ldots,F)}\right)\] \[\leq\dim\frac{F}{R}=d.\] Therefore, (3.33) \[\dim\gamma_{c+1}(F/\gamma_{c+1}(R,F,\ldots,F))\leq l_{d}^{n}(c+1).\] On the other hand, according to part (3) of Corollary 3.5, we have (3.34) \[\dim\mathcal{M}^{(c)}(L)+\dim\gamma_{c+1}(L)=\dim\gamma_{c+1}\left(\frac{F}{ \gamma_{c+1}(R,F,\ldots,F)}\right).\] Hence by (3.33) and (3.34), we have \[\dim\mathcal{M}^{(c)}(L)+\dim\gamma_{c+1}(L)\leq l_{d}^{n}(c+1).\] In particular, if \(c=1\), then \(\dim\mathcal{M}(L)+\dim L^{2}\leq l_{d}^{n}(c+1)\), and so \(\dim\mathcal{M}(L)\leq l_{d}^{n}(c+1)\).
2. According to Proposition 3.4, \(\theta\) exists. Thus it is enough to show that \(\theta\) is onto. By part (1), we have \[\dim\mathcal{M}^{(c)}(L)+\dim\gamma_{c+1}(L)\leq l_{d}^{n}(c+1).\] On the other hand, we know that \(\dim\mathcal{M}^{(c)}(L)=l_{d}^{n}(c+1)\). So \(\dim\gamma_{c+1}(L)=0\). Now, if we put \(M=L^{2}\), in part (2) of Proposition 3.4, then we obtain \[\dim M\cap\gamma_{c+1}(L)=\dim L^{2}\cap\gamma_{c+1}(L)=0,\] and thus \(\theta\) is an epimorphism. Now, let \(\ker\theta=0\). Since \(\theta\) is onto, so by putting \(M=L^{2}\) in part (2) of Proposition 3.4, we get the following exact sequence: \[0\longrightarrow\mathcal{M}^{(c)}(L)\longrightarrow\mathcal{M}^{(c)}(L/L^{ 2})\longrightarrow 0,\] that is, \(\mathcal{M}^{(c)}(L)=\mathcal{M}^{(c)}(L/L^{2})\). Since \(L/L^{2}\) is abelian, so \(L^{2}=0\) and hence \(L\) is abelian.
The following corollary gives an upper bound for the dimension of the \(c\)-nilpotent multiplier of nilpotent \(n\)-Lie algebras of maximal class. We recall that a \(d\)-dimensional \(n\)-Lie algebra
is nilpotent of maximal class \(c+1\) when \(\dim Z_{c}(L)=\dim\gamma_{2}(L)=d-n\) and also \(\dim(Z_{i}(L)/Z_{i-1}(L))=1\).
**Corollary 3.14**.: _Let \(L\) be an \(n\)-Lie algebra of maximal nilpotency class \(c+1\) with finite dimension \(d\). Then_
\[\dim\mathcal{M}^{(c)}(L)\leq l_{d-1}^{n}(c+1)+n^{c(n-1)}.\]
Proof.: Since \(L\) is an \(n\)-Lie algebra of maximal class \(c+1\), we have \(Z(L)=\gamma_{c+1}(L)\), and hence \(\dim(\gamma_{c+1}(L))=1\). By using Theorem 3.3, if we choose \(M=Z(L)\), then we obtain
\[\dim\mathcal{M}^{(c)}(L)+\dim(\gamma_{c+1}(L)\cap Z(L))\leq \dim\mathcal{M}^{(c)}(L/Z(L))+\dim\mathcal{M}^{(c)}(Z(L))\] \[+\dim((L/Z(L))^{ab\;c}\otimes Z(L)).\]
On the other hand, we know that \(Z_{c+1}(L)=L\) and that \(\dim Z_{c}(L)=\dim\gamma_{2}(L)=d-n=c\). Moreover, for all \(1\leq i\leq c\), we have \(\dim(Z_{i}(L)/Z_{i-1}(L))=1\), and \(\dim Z(L)=1\). Also, by Proposition 1.5, \(\dim\mathcal{M}^{(c)}(Z(L))=l_{1}^{n}(c+1)=0\), and also \(\dim(\gamma_{c+1}(L)\cap Z(L))=1\). Thus \(Z(L)\subseteq\gamma_{2}(L)\). Hence,
\[\dim\left((L/Z(L))^{ab}\right)=\dim\frac{L/Z(L)}{(L/Z(L))^{2}} =\dim\frac{L/Z(L)}{(\gamma_{2}(L)+Z(L))/Z(L)}\] \[=\dim\frac{L/Z(L)}{\gamma_{2}(L)/Z(L)}\] \[=\dim L/Z(L)-\dim\gamma_{2}(L)/Z(L)\] \[=(d-1)-((d-n)-1)=n.\]
Therefore,
\[\dim\mathcal{M}^{(c)}(L)+1 \leq\dim\mathcal{M}^{(c)}(L/Z(L))+0+n^{((c+1)-1)(n-1)}\times 1\] \[\leq l_{d-1}^{n}(c+1)+n^{c(n-1)}.\]
Eshrati, Saeedi, and Darabi [9] proved that every \(d\)-dimensional nilpotent \(n\)-Lie algebra of class \(2\) with \(\dim\gamma_{2}(L)=1\) has the structure \(H(n,m)\oplus A(d-mn-1)\), where \(H(n,m)\) is a Heisenberg \(n\)-Lie algebra of dimension \(mn+1\) and \(A(d-mn-1)\) is an abelian \(n\)-Lie algebra of dimension \(d-mn-1\). Also, they gave the dimension of the Schur multiplier of \(H(n,m)\) as follows:
\[\dim\mathcal{M}(H(n,m))=\begin{cases}n,&m=1,\\ \binom{mn}{n}-1,&m\geq 2.\end{cases}\]
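For instance, for \(n=2\) (ordinary Lie algebras) this gives \(\dim\mathcal{M}(H(2,1))=2\) for the three-dimensional Heisenberg algebra and \(\dim\mathcal{M}(H(2,m))=\binom{2m}{2}-1=2m^{2}-m-1\) for \(m\geq 2\).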
Therefore, according to the above explanation and Proposition 1.5, if we get the dimension of the \(c\)-nilpotent multiplier of \(H(n,m)\), then we will be able to calculate the dimension of the \(c\)-nilpotent multiplier of every finite-dimensional nilpotent \(n\)-Lie algebra of class \(2\) with \(\dim\gamma_{2}(L)=1\). In the following theorem, we get the structure and dimension of the \(c\)-nilpotent multiplier of \(H(n,m)\).
**Theorem 3.15**.: _If \(L\cong H(n,m)\), then for all \(c\geq 2\), we have_
\[\mathcal{M}^{(c)}(H(n,m))\cong\begin{cases}A(l_{n}^{n}(c+1)+l_{n}^{n}(c+2)),&m=1,\\ A(l_{mn}^{n}(c+1)),&m\geq 2.\end{cases}\]
Proof.: Assume first that \(m=1\), and let \(F/R\) be a free presentation of \(H(n,1)\), with \(F\) free on \(n=d(H(n,1))\) generators. Since \(cl(H(n,1))=2\), we have \(\gamma_{3}(H(n,1))=0\). Hence
\[0=\gamma_{3}(H(n,1))=\gamma_{3}(F/R)=\frac{\gamma_{3}(F)+R}{R}\cong\frac{\gamma_{3}(F)}{\gamma_{3}(F)\cap R},\]
so \(\gamma_{3}(F)\subseteq R\); since \(H(n,1)\) is exactly the quotient of the free \(n\)-Lie algebra on \(n\) generators by the third term of its lower central series, we in fact have \(R=\gamma_{3}(F)\). Thus
\[\mathcal{M}^{(c)}(H(n,1))=\frac{\gamma_{c+1}(F)\cap R}{\gamma_{c+1}(R,F,\dots, F)}=\frac{\gamma_{c+1}(F)\cap\gamma_{3}(F)}{\gamma_{c+1}(\gamma_{3}(F),F, \dots,F)}=\frac{\gamma_{c+1}(F)}{\gamma_{c+3}(F)}.\]
Therefore by Theorem 1.4, we obtain
\[\dim\mathcal{M}^{(c)}(H(n,1))=\dim\frac{\gamma_{c+1}(F)}{\gamma_{c+3}(F)}=l_{n}^{n}(c+1)+l_{n}^{n}(c+2).\]
Since the \(c\)-nilpotent multiplier of every \(n\)-Lie algebra is abelian, it follows that \(\mathcal{M}^{(c)}(H(n,1))\cong A(l_{n}^{n}(c+1)+l_{n}^{n}(c+2))\).
Now, let \(m\geq 2\). Since \(m\neq 1\), the algebra \(H(n,m)\) is not capable by [2, Theorem 1.19], and hence by [2, Lemma 1.5] we have \(Z^{*}(H(n,m))\neq 0\). Thus \(Z^{*}(H(n,m))=\gamma_{2}(H(n,m))\), by [2, Proposition 1.10]. Also, it is easy to check that \(Z^{*}(H(n,m))\subseteq Z^{*}_{c}(H(n,m))\), and hence \(\gamma_{2}(H(n,m))\subseteq Z^{*}_{c}(H(n,m))\). Thus, using Proposition 3.7 and bearing in mind that \(\gamma_{c+1}(H(n,m))=0\), we have \(\dim\mathcal{M}^{(c)}(H(n,m))=\dim\mathcal{M}^{(c)}(H(n,m)^{ab})\). Therefore, by Proposition 1.5, \(\mathcal{M}^{(c)}(H(n,m))\cong\mathcal{M}^{(c)}(H(n,m)^{ab})\cong A(l_{mn}^{n}(c+1))\).
|
2304.10038
|
Open-World Continual Learning: Unifying Novelty Detection and Continual
Learning
|
As AI agents are increasingly used in the real open world with unknowns or
novelties, they need the ability to (1) recognize objects that (i) they have
learned and (ii) detect items that they have not seen or learned before, and
(2) learn the new items incrementally to become more and more knowledgeable and
powerful. (1) is called novelty detection or out-of-distribution (OOD)
detection and (2) is called class incremental learning (CIL), which is a
setting of continual learning (CL). In existing research, OOD detection and CIL
are regarded as two completely different problems. This paper theoretically
proves that OOD detection actually is necessary for CIL. We first show that CIL
can be decomposed into two sub-problems: within-task prediction (WP) and
task-id prediction (TP). We then prove that TP is correlated with OOD
detection. The key theoretical result is that regardless of whether WP and OOD
detection (or TP) are defined explicitly or implicitly by a CIL algorithm, good
WP and good OOD detection are necessary and sufficient conditions for good CIL,
which unifies novelty or OOD detection and continual learning (CIL, in
particular). A good CIL algorithm based on our theory can naturally be used in
open world learning, which is able to perform both novelty/OOD detection and
continual learning. Based on the theoretical result, new CIL methods are also
designed, which outperform strong baselines in terms of CIL accuracy and its
continual OOD detection by a large margin.
|
Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Zixuan Ke, Bing Liu
|
2023-04-20T01:32:32Z
|
http://arxiv.org/abs/2304.10038v1
|
# Open-World Continual Learning: Unifying Novelty Detection and Continual Learning
###### Abstract
As AI agents are increasingly used in the real open world with unknowns or novelties, they need the ability to (1) recognize objects that (i) they have learned and (ii) detect items that they have not seen or learned before, and (2) learn the new items incrementally to become more and more knowledgeable and powerful. (1) is called _novelty detection_ or _out-of-distribution_ (OOD) _detection_ and (2) is called _class incremental learning_ (CIL), which is a setting of _continual learning_ (CL). In existing research, OOD detection and CIL are regarded as two completely different problems. This paper theoretically proves that OOD detection actually is necessary for CIL. We first show that CIL can be decomposed into two sub-problems: _within-task prediction_ (WP) and _task-id prediction_ (TP). We then prove that TP is correlated with OOD detection. The _key theoretical result_ is that regardless of whether WP and OOD detection (or TP) are defined explicitly or implicitly by a CIL algorithm, good WP and good OOD detection are _necessary_ and _sufficient_ conditions for good CIL, which unifies novelty or OOD detection and continual learning (CIL, in particular). A good CIL algorithm based on our theory
can naturally be used in open world learning, which is able to perform both novelty/OOD detection and continual learning. Based on the theoretical result, new CIL methods are also designed, which outperform strong baselines in terms of CIL accuracy and its continual OOD detection by a large margin.
keywords: Open world learning, continual learning, OOD detection +
Footnote †: journal: Computer Vision and Pattern Recognition
## 1 Introduction
The current dominant machine learning paradigm (ML) makes the _closed-world assumption_, which means that the classes of objects seen by the system in testing or in deployment must have been seen during training [54; 53; 6; 15; 37], i.e., there is nothing novel occurring during testing or deployment. This assumption is invalid in practice as the real environment is an **open world** that is full of unknowns or novel objects. In order to make an AI agent thrive in the open world, it has to detect novelties and learn them incrementally to make the system more knowledgeable and adaptable over time. This process involves multiple activities, such as _novelty/OOD detection_, _novelty characterization_, _adaption_, _risk assessment_, and _continual learning_ of the detected novel items or objects [38]. Novelty detection, also called _out-of-distribution_ (OOD) _detection_, aims to detect unseen objects that the agent has not learned. On detecting the novel objects or situation, the agent has to respond or adapt its actions. But in order to adapt, it must first _characterize_ the novel object as without it, the agent would not know how to respond or adapt. For example, it may characterize a detected novel object as looking like a dog. Then, the agent may react like what it would react to a dog. In the process, the agent also constantly assesses the risk of its actions. Finally, it also learns to recognize the new object incrementally so that it will not be surprised when it sees the same kind of object in the future. This incremental learning function is called _continual learning_ (CL) or _lifelong learning_[14]. Note that before learning, the agent must obtain labeled training data, which can be collected by the agent through interaction with the environment or with human users. This aspect is out of the scope of this paper. See [38] for details.
This paper focuses only on the key learning aspects of the open world scenario: (1) OOD/novelty detection and (2) continual learning, more specifically _class incremental learning_ (CIL) (see the definition below). In the research community, (1) and (2) are regarded as two completely different
problems, but this paper theoretically unifies them by proving that good OOD detection is, in fact, necessary for CIL. Thus, a good CIL algorithm can naturally be used in the open world to perform both (1) and (2). Below, we first define the concepts of OOD detection and continual learning.
_Out-of-distribution (OOD) detection_: Given the training data \(\mathcal{D}=\{(x^{i},y^{i})_{i=1}^{n}\}\), where \(n\) is the number of data samples, and \(x^{i}\in\mathbf{X}\) is an input sample and \(y^{i}\in\mathbf{Y}\) (the set of all class labels in \(\mathcal{D}\)) is its class label, our goal is to build a classifier \(f:\mathbf{X}\rightarrow\mathbf{Y}\cup\{O\}\) that can detect test instances that do not belong to any classes in \(\mathbf{Y}\) (called _OOD detection_), which are assigned to the class \(O\). \(\mathbf{Y}\) is often called the _in-distribution_ (IND) _classes_.
Note that as we can see from the definition, an OOD detection algorithm can also classify test instances belonging to \(\mathbf{Y}\) to their respective classes, which is called IND classification, although most OOD detection papers do not report the IND classification results.
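To make the definition concrete, here is a minimal sketch (ours, not taken from any particular cited method) of one common realization: a classifier trained on the IND classes \(\mathbf{Y}\) rejects an input as the class \(O\) when its maximum softmax probability falls below an assumed threshold `tau`; any other OOD score could be substituted.

```python
import torch

def predict_or_reject(logits: torch.Tensor, tau: float = 0.5):
    """Map classifier logits over the IND classes Y to a label in Y or the OOD symbol O.

    logits: shape (num_classes,), outputs of a model trained on Y.
    tau:    rejection threshold on the maximum softmax probability (an assumed value).
    """
    probs = torch.softmax(logits, dim=-1)
    conf, label = probs.max(dim=-1)
    if conf.item() < tau:
        return "O"              # novel / out-of-distribution
    return int(label.item())    # IND classification into Y
```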
Continual learning (CL) aims to incrementally learn a sequence of tasks. Each task consists of a set of classes to be learned together (the set may contain only a single class). Once a task is learned, its training data (at least a majority of it) is no longer accessible. Thus, unlike multitask learning, in learning a new task, CL will not be able to use the data of the previous tasks. A major challenge of CL is _catastrophic forgetting_ (CF), which refers to the phenomenon that in learning a new task, the neural network model parameters need to be modified, which may corrupt the knowledge learned for previous tasks in the network and cause performance degradation for the previous tasks [43]. Although a large number of CL techniques have been proposed, they are mainly empirical. Little theoretical work has been done on how to solve CL. This paper performs such a theoretical study about the necessary and sufficient conditions for effective CL. There are two main CL settings that have been extensively studied: _class incremental learning_ (CIL) and _task incremental learning_ (TIL) [60]. In CIL, the learning process builds a single classifier for all tasks/classes learned so far. In testing, a test instance from any class may be presented for the model to classify. No prior task information (e.g., task-id) of the test instance is provided. Formally, CIL is defined as follows.
_Class incremental learning_ (CIL). CIL learns a sequence of tasks, \(1,2,...,T\). Each task \(k\) has a training dataset \(\mathcal{D}_{k}=\{(x_{k}^{i},y_{k}^{i})_{i=1}^{n_{k}}\}\), where \(n_{k}\) is the number of data samples in task \(k\), and \(x_{k}^{i}\in\mathbf{X}\) is an input sample and \(y_{k}^{i}\in\mathbf{Y}_{k}\) (the set of all classes of task \(k\)) is its class label. All \(\mathbf{Y}_{k}\)'s are disjoint (\(\mathbf{Y}_{k}\cap\mathbf{Y}_{k^{\prime}}=\emptyset\), \(\forall k\neq k^{\prime}\)) and \(\bigcup_{k=1}^{T}\mathbf{Y}_{k}=\mathbf{Y}\). The goal of CIL is to construct a
single predictive function or classifier \(f:\mathbf{X}\rightarrow\mathbf{Y}\) that can identify the class label \(y\) of each given test instance \(x\).
In the TIL setup, each task is a separate or independent classification problem. For example, one task could be to classify different breeds of dog and another task could be to classify different types of animal (the tasks may not be disjoint). One model is built for each task in a shared network. In testing, the task-id of each test instance is provided and the system uses only the specific model for the task (dog or animal classification) to classify the test instance. Formally, TIL is defined as follows.
_Task incremental learning_ (TIL). TIL learns a sequence of tasks, \(1,2,...,T\). Each task \(k\) has a training dataset \(\mathcal{D}_{k}=\{((x_{k}^{i},k),y_{k}^{i})_{i=1}^{n_{k}}\}\), where \(n_{k}\) is the number of data samples in task \(k\in\mathbf{T}=\{1,2,...,T\}\), and \(x_{k}^{i}\in\mathbf{X}\) is an input sample and \(y_{k}^{i}\in\mathbf{Y}_{k}\subset\mathbf{Y}\) is its class label. The goal of TIL is to construct a predictor \(f:\mathbf{X}\times\mathbf{T}\rightarrow\mathbf{Y}\) to identify the class label \(y\in\mathbf{Y}_{k}\) for \((x,k)\) (the given test instance \(x\) from task \(k\)).
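As an informal illustration of the difference between the two settings (our sketch; `model`, `shared_backbone`, and `task_heads` are placeholder modules), CIL uses a single classifier over all classes learned so far and receives no task-id, whereas TIL uses the given task-id to select a task-specific head and predicts only within that task's classes.

```python
import torch

def cil_predict(model, x):
    """CIL: a single classifier over all classes learned so far; no task-id at test time."""
    logits = model(x)                    # shape (|Y_1| + ... + |Y_T|,)
    return int(logits.argmax().item())   # a global class label in Y

def til_predict(shared_backbone, task_heads, x, task_id):
    """TIL: the provided task-id selects the task-specific head; prediction is within Y_k."""
    features = shared_backbone(x)
    logits = task_heads[task_id](features)   # shape (|Y_k|,)
    return int(logits.argmax().item())       # a class label local to task k
```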
This paper focuses on CIL as it incrementally learns new (or novel) classes of objects, which is a core function of open world learning. Although the proposed methods also work for TIL, TIL is not a core part of open world learning as it builds a model for each task separately and independently in a shared neural network and the tasks are given by users. TIL is also much simpler. Several existing techniques can effectively overcome CF for TIL (with almost no CF) [55; 61]. However, CIL remains to be highly challenging due to the additional problem of _Inter-task Class Separation_ (ICS), i.e., how to establish decision boundaries between the classes of the new task and the classes of the previous tasks in learning the new tasks without accessing the training data of the previous tasks.
**Problem Statement** (_open-world continual learning_ (OCL)): OCL is defined as CIL with the OOD detection capability. At any time, the resulting CIL model is able to classify test instances belonging to the classes learned so far in CIL to their respective classes and also detect OOD instances that do not belong to any of the learned classes so far.
We note that OOD detection in CIL is slightly different from the traditional OOD detection (which sees the full IND data together) because in CIL, the model does not see all the IND data together. In CIL, the IND data comes in a sequence of tasks incrementally and in learning each task, the model does not see any data (or only a very small sample) of the previous tasks.
The main contribution of this paper is two-fold. First, it makes a theoretical contribution by proving the necessary and sufficient conditions for solving the CIL problem. It shows that a good OOD detection performance is one of the necessary conditions. Thus, the proposed theory is also applicable to open-world learning as novelty (OOD) detection is a key function of open-world learning and open-world learning should also learn continually, i.e., open-world continual learning. Second, based on the theory, several new CIL algorithms are designed, which are also able to detect novel (or OOD) instances for open-world continual learning (OCL).
**Theory.** We conduct a theoretical study of CIL, which is applicable to any CIL classification model. Instead of focusing on the traditional PAC generalization bound [48] or neural tangent kernel (NTK) [25], we focus on how to solve the CIL problem. We first decompose the CIL problem into two sub-problems in a probabilistic framework: _Within-task Prediction_ (WP) and _Task-id Prediction_ (TP). WP means that the prediction for a test instance is only done within the classes of the task to which the test instance belongs, which is basically the TIL problem. TP predicts the task-id. TP is needed because in CIL, task-id is not provided in testing. This paper then proves based on the popular cross-entropy loss that (i) the CIL performance is bounded by WP and TP performances, and (ii) TP and task OOD detection performance bound each other (which connects CIL and OOD detection).
_Key theoretical result_: Regardless of whether WP and TP or OOD detection are defined explicitly or implicitly by a CIL algorithm, good WP and good TP or OOD detection are _necessary_ and _sufficient_ conditions for good CIL performances.3
Footnote 3: This result is applicable to both batch/offline and online/stream CIL and to CIL problems with blurry task boundaries which means that some training data of a task may come later together with a future task.
The intuition of the theory is simple because if a CIL model is perfect at detecting OOD samples for each task, which solves the ICS problem, then CIL is reduced to WP, which is just the traditional single-task supervised learning for each task and as indicated earlier, many OOD algorithms can also perform IND classification, which is basically WP.
**New CIL Algorithms for OCL.** The theoretical result provides a principled guidance for solving the CIL problem. Based on the theory, several new CIL methods are designed, including (1) techniques based on the integration of a TIL method and an OOD detection method, which outperforms
strong baselines in both the CIL and TIL settings by a large margin. This combination is particularly attractive because TIL has achieved no forgetting, and we only need a strong OOD detection technique that can perform both IND prediction and OOD detection to learn each task to achieve strong CIL results. Note that we do not propose a new OOD detection technique as there are numerous such techniques in the literature. We will use two existing ones. (2) Another method is based on a pre-trained model and an OOD replay technique, which performs even better, outperforming existing baselines markedly in both CIL and OOD detection in the OCL setting.
## 2 Related Work
Although a large number of algorithms have been proposed to solve the CIL problem, they are mainly empirical. There are two works that focus on the traditional PAC generalization bound [48] or NTK [25], but they do not tell how to solve the CL problem. This paper focuses on how to solve the CIL problem. To the best of our knowledge, no existing work has proposed a theory on how to solve CIL. Also, none of the existing work has connected CIL and OOD detection. Our work shows that a good CIL algorithm can naturally perform OOD detection in the open world. Below, we first survey four popular families of CL approaches, which are mainly for overcoming catastrophic forgetting (CF). We then discuss related works about open world learning.
**(1). Regularization-based methods** prevent forgetting by restricting the learned parameters for previous tasks from being modified significantly by using a regularization term to penalize such changes [29; 68] or regularize the representation or outputs from the previously learned network [35; 69].
**(2). Replay-based methods**[51; 11; 8; 7; 64] prevent forgetting by saving a small amount of training data from previous tasks in a memory buffer and jointly train the network using the current data and the previous task data saved in the memory. Some methods in this family also study which samples in memory should be used in replay [2] or which samples in the training data should be saved for later reply [51; 40].
**(3). Generative methods** construct a generative network to generate raw training data [56; 44; 4]. The generated data are used with the current task training data to jointly train the classification network. [69] generates features instead of raw data. The generated samples in these methods are
used to prevent forgetting in both the generative network and the discriminative network.
**(4). Parameter-isolation methods**[55; 61] train a set of task-specific parameters to effectively protect the important parameters of each task, which thus has almost no forgetting. A limitation of the approach is that the correct task-id of each test instance must be known to the system to select the corresponding task-specific parameters at inference. These methods are thus mainly used for TIL. Some CIL methods also used these methods [1; 45; 49; 22] and they have separate mechanisms to predict the task-id (more on this below). However, their CIL performances are still far below that of recent replay-based counterparts (see Sections 4.2 and 5.2 for details).
Two of our proposed CIL methods use two parameter-isolation methods (HAT [55] and SupSup [61]) for _task incremental learning_ (TIL) as one of the components. As mentioned above, some recent CIL approaches have already used TIL methods. In such cases, task-id needs to be predicted. CCG [1] constructs an additional network to predict the task-id. iTAML [49] identifies the task-id of the test data in a batch. A serious limitation of this is that it requires the test data in a batch to belong to the same task. Our methods are different as they can predict (implicitly) for a single test instance at a time. HyperNet [45] and PR [22] propose an entropy based task-id prediction method. SupSup [61] predicts task-id by finding the optimal superpositions at inference. However, these methods perform poorly because they do not know OOD detection is the key for accurate task-id prediction, although our methods do not explicitly predict task-id for each test instance.
_Open world learning_ has been studied by many researchers [54; 53; 6; 15; 37; 38]. However, the existing research mainly focused on novelty detection, also called _open set recognition_ or _out-of-distribution_ (OOD) _detection_. Some researchers have also studied learning the novel objects after they are detected and manually labeled [6; 15; 63; 24]. However, none of them perform continual learning, which has additional challenges of catastrophic forgetting (CF) and inter-task class separation (ICS). Excellent surveys of novelty detection or OOD detection and open world learning can be found in [65; 46; 47; 24]. [17] did novelty detection and also continual learning, but its continual learning uses the regularization based method. It is quite weak because it has serious forgetting. A position paper [31] recently presented some nice blue sky ideas about open world learning, but it does not propose or implement any algorithm.
Our proposed algorithms are quite different. In training, based on our
theory, we use two existing OOD detection methods to verify that our theory can guide us to design new and much more effective CIL algorithms. In testing, our OOD detection is in the open-world continual learning (OCL) setting, which has been described in the introduction section.
There are also several papers that study _novel class discovery_[18], which is defined as discovering the hidden classes in the detected novel or OOD instances. Our work does not perform this function. We assume that the training data for each new task is given. Performing automatic class label discovery is still very challenging as in many cases, the class assignments can be subjective and are determined by human users. For example, for a dog, whether it should just be labeled as dog or a specific breed of dog is a subjective decision and is depending on applications.
## 3 A Theoretical Study on Solving CIL
This section presents our theory for solving CIL, which also covers novelty or OOD detection in the open world. It first shows that the CIL performance improves if the within-task prediction (WP) performance and/or the task-id prediction (TP) performance improve, and then shows that TP and OOD detection bound each other, which indicates that CIL performance is controlled by WP and OOD detection. This connects CIL and OOD detection. Finally, we study the necessary conditions for a good CIL model, which includes a good WP, and a good TP or OOD detection.
### CIL Problem Decomposition
This sub-section first presents the assumptions made by CIL based on its definition and then proposes a decomposition of the CIL problem into two sub-problems. A CIL system learns a sequence of tasks \(\{(\mathbf{X}_{k},\mathbf{Y}_{k})\}_{k=1,\ldots,T}\), where \(\mathbf{X}_{k}\) is the domain of task \(k\) and \(\mathbf{Y}_{k}\) is the set of classes of task \(k\), written \(\mathbf{Y}_{k}=\{\mathbf{Y}_{k,j}\}\), where \(j\) indicates the \(j\)th class in task \(k\). Let \(\mathbf{X}_{k,j}\) be the domain of the \(j\)th class of task \(k\), where \(\mathbf{X}_{k}=\bigcup_{j}\mathbf{X}_{k,j}\). For accuracy, we will use \(x\in\mathbf{X}_{k,j}\) instead of \(\mathbf{Y}_{k,j}\) in probabilistic analysis. Based on the definition of class incremental learning (CIL) (Section 1), the following assumptions are implied.
**Assumption 1**.: The domains of classes of the same task are disjoint, i.e., \(\mathbf{X}_{k,j}\cap\mathbf{X}_{k,j^{\prime}}=\emptyset\), \(\forall j\neq j^{\prime}\).
**Assumption 2**.: The domains of tasks are disjoint, i.e., \(\mathbf{X}_{k}\cap\mathbf{X}_{k^{\prime}}=\emptyset\), \(\forall k\neq k^{\prime}\).
For any ground event \(D\), the goal of a CIL problem is to learn \(\mathbf{P}(x\in\mathbf{X}_{k,j}|D)\). This can be decomposed into two probabilities, _within-task IND prediction_ (WP) probability and _task-id prediction_ (TP) probability. WP probability is \(\mathbf{P}(x\in\mathbf{X}_{k,j}|x\in\mathbf{X}_{k},D)\) and TP probability is \(\mathbf{P}(x\in\mathbf{X}_{k}|D)\). We can rewrite the CIL problem using WP and TP based on the two assumptions,
\[\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|D) =\sum_{k=1,\ldots,n}\mathbf{P}(x\in\mathbf{X}_{k,j_{0}}|x\in \mathbf{X}_{k},D)\mathbf{P}(x\in\mathbf{X}_{k}|D) \tag{1}\] \[=\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|x\in\mathbf{X}_{k_{0}},D )\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D) \tag{2}\]
where \(k_{0}\) means a particular task and \(j_{0}\) a particular class in the task.
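For illustration only, the toy numbers below (two tasks with two classes each; all values assumed) show how Eq. (2) combines a within-task prediction distribution and a task-id prediction distribution into the CIL distribution.

```python
import torch

tp = torch.tensor([0.9, 0.1])            # P(x in X_k | D), one entry per task
wp = torch.tensor([[0.7, 0.3],           # P(x in X_{k,j} | x in X_k, D), rows sum to 1
                   [0.5, 0.5]])

cil = wp * tp.unsqueeze(1)               # Eq. (2): P(x in X_{k,j} | D) = WP * TP
print(cil)                               # the whole table sums to 1
k, j = divmod(int(cil.argmax().item()), cil.shape[1])   # CIL prediction: most likely (task, class)
```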
Some remarks are in order about Eq. 2 and our subsequent analysis to set the stage.
_Remark 1_.: Eq. 2 shows that if we can improve either the WP or TP performance, or both, we can improve the CIL performance.
_Remark 2_.: It is important to note that our theory is not concerned with the learning algorithm or training process. But we will propose some concrete CIL algorithms based on the theoretical result in the experiment section.
_Remark 3_.: Note that the CIL definition and the subsequent analysis are applicable to tasks with any number of classes (including only one class per task) and to online CIL where the training data for each task or class comes gradually in a data stream and may also cross task boundaries (blurry tasks [5]) because our analysis is based on an already-built CIL model after training. Regarding blurry task boundaries, suppose dataset 1 has classes {dog, cat, tiger} and dataset 2 has classes {dog, computer, car}. We can define task 1 as {dog, cat, tiger} and task 2 as {computer, car}. The shared class _dog_ in dataset 2 is just additional training data of _dog_ appeared after task 1.
_Remark 4_.: Furthermore, CIL = WP * TP in Eq. 2 means that when we have WP and TP (defined either explicitly or implicitly by implementation), we can find a corresponding CIL model defined by WP * TP. Similarly, when we have a CIL model, we can find the corresponding underlying WP and TP defined by their probabilistic definitions.
In the following sub-sections, we develop this further concretely to derive the sufficient and necessary conditions for solving the CIL problem in the context of cross-entropy loss as it is used in almost all supervised CIL systems.
### CIL Improves as WP and/or TP Improve
As stated in Remark 2 above, the study here is based on a _trained CIL model_ and not concerned with the algorithm used in training the model. We use cross-entropy as the performance measure of a trained model as it is the most popular loss function used in supervised CL. For experimental evaluation, we use _accuracy_ following CL papers. Denote the cross-entropy of two probability distributions \(p\) and \(q\) as
\[H(p,q)\stackrel{{ def}}{{=}}-\mathbb{E}_{p}[\log q]=-\sum_{i}p_{i }\log q_{i}. \tag{3}\]
For any \(x\in\mathbf{X}\), let \(y\) be the CIL ground truth label of \(x\), where \(y_{k_{0},j_{0}}=1\) if \(x\in\mathbf{X}_{k_{0},j_{0}}\) and otherwise \(y_{k,j}=0\), \(\forall(k,j)\neq(k_{0},j_{0})\). Let \(\tilde{y}\) be the WP ground truth label of \(x\), where \(\tilde{y}_{k_{0},j_{0}}=1\) if \(x\in\mathbf{X}_{k_{0},j_{0}}\) and otherwise \(\tilde{y}_{k_{0},j}=0\), \(\forall j\neq j_{0}\). Let \(\bar{y}\) be the TP ground truth label of \(x\), where \(\bar{y}_{k_{0}}=1\) if \(x\in\mathbf{X}_{k_{0}}\) and otherwise \(\bar{y}_{k}=0\), \(\forall k\neq k_{0}\). Denote
\[H_{WP}(x) =H(\tilde{y},\{\mathbf{P}(x\in\mathbf{X}_{k_{0},j}|x\in\mathbf{X} _{k_{0}},D)\}_{j}), \tag{4}\] \[H_{CIL}(x) =H(y,\{\mathbf{P}(x\in\mathbf{X}_{k,j}|D)\}_{k,j}),\] (5) \[H_{TP}(x) =H(\bar{y},\{\mathbf{P}(x\in\mathbf{X}_{k}|D)\}_{k}) \tag{6}\]
where \(H_{WP}\), \(H_{CIL}\) and \(H_{TP}\) are the cross-entropy values of WP, CIL and TP, respectively. We now present our first theorem. The theorem connects CIL to WP and TP, and suggests that by having a good WP or TP, the CIL performance improves as the upper bound for the CIL loss decreases.
**Theorem 1**.: _If \(H_{TP}(x)\leq\delta\) and \(H_{WP}(x)\leq\epsilon\), we have \(H_{CIL}(x)\leq\epsilon+\delta\)._
The detailed proof is given in A.1. This theorem holds regardless of whether WP and TP are trained together or separately. When they are trained separately, if WP is fixed and we let \(\epsilon=H_{WP}(x)\), \(H_{CIL}(x)\leq H_{WP}(x)+\delta\), which means if TP is better, CIL is better. Similarly, if TP is fixed, we have \(H_{CIL}(x)\leq\epsilon+H_{TP}(x)\). When they are trained concurrently, there exists a functional relationship between \(\epsilon\) and \(\delta\) depending on implementation. But no matter what it is, when \(\epsilon+\delta\) decreases, CIL gets better.
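The following toy computation (ours, with assumed numbers) instantiates Eqs. (3)-(6) for a single \(x\) belonging to task 0, class 0 in a two-task, two-classes-per-task problem, and checks the bound of Theorem 1.

```python
import torch

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i * log q_i, Eq. (3)."""
    return float(-(p * q.log()).sum())

tp = torch.tensor([0.9, 0.1])                 # P(x in X_k | D)
wp = torch.tensor([[0.7, 0.3], [0.5, 0.5]])   # P(x in X_{k,j} | x in X_k, D)
cil = (wp * tp.unsqueeze(1)).flatten()        # Eq. (2)

# Ground truth: x belongs to task 0, class 0.
y_tp, y_wp = torch.tensor([1.0, 0.0]), torch.tensor([1.0, 0.0])
y_cil = torch.tensor([1.0, 0.0, 0.0, 0.0])

h_tp = cross_entropy(y_tp, tp)          # delta
h_wp = cross_entropy(y_wp, wp[0])       # epsilon
h_cil = cross_entropy(y_cil, cil)
assert h_cil <= h_wp + h_tp + 1e-6      # Theorem 1: H_CIL(x) <= epsilon + delta
```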
Theorem 1 holds for any \(x\in\mathbf{X}=\bigcup_{k}\mathbf{X}_{k}\) that satisfies \(H_{TP}(x)\leq\delta\) and \(H_{WP}(x)\leq\epsilon\). To measure the overall performance under expectation, we present the following corollary.
**Corollary 1**.: _Let \(U(\mathbf{X})\) represent the uniform distribution on \(\mathbf{X}\). i) If \(\mathbb{E}_{x\sim U(\mathbf{X})}[H_{TP}(x)]\leq\delta\), then \(\mathbb{E}_{x\sim U(\mathbf{X})}[H_{CIL}(x)]\leq\mathbb{E}_{x\sim U(\mathbf{X})}[H_{WP}(x)]+\delta\). Similarly, ii) if \(\mathbb{E}_{x\sim U(\mathbf{X})}[H_{WP}(x)]\leq\epsilon\), then \(\mathbb{E}_{x\sim U(\mathbf{X})}[H_{CIL}(x)]\leq\epsilon+\mathbb{E}_{x\sim U(\mathbf{X})}[H_{TP}(x)]\)._
The proof is given in A.2. The corollary is a direct extension of Theorem 1 in expectation. The implication is that given TP performance, CIL is positively related to WP. The better the WP is, the better the CIL is as the upper bound of the CIL loss decreases. Similarly, given WP performance, a better TP performance results in a better CIL performance. Due to the positive relation, we can improve CIL by improving either WP or TP using their respective methods developed in each area.
### Task Prediction (TP) to OOD Detection
Building on Eq. 2, we have studied the relationship of CIL, WP and TP in Theorem 1. We now connect TP and OOD detection. They are shown to be dominated by each other to a constant factor.
We again use cross-entropy \(H\) to measure the performance of TP and OOD detection of a trained network as in Section 3.2. To build the connection between \(H_{TP}(x)\) and OOD detection of each task, we first define the notations of OOD detection. We use \(\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D)\) to represent the probability distribution predicted by the \(k\)th task's OOD detector. Notice that the task prediction (TP) probability distribution \(\mathbf{P}(x\in\mathbf{X}_{k}|D)\) is a categorical distribution over \(T\) tasks, while the OOD detection probability distribution \(\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D)\) is a Bernoulli distribution. For any \(x\in\mathbf{X}\), define
\[H_{OOD,k}(x)=\begin{cases}H(1,\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D))=- \log\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D),\ x\in\mathbf{X}_{k},\\ H(0,\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D))=-\log\mathbf{P}^{\prime}_{k} (x\notin\mathbf{X}_{k}|D),\ x\notin\mathbf{X}_{k}.\end{cases} \tag{7}\]
In CIL, the OOD detection probability for a task can be defined using the output values corresponding to the classes of the task. Examples of such a function are the sigmoid of the maximum logit value or the maximum softmax probability after re-scaling to \([0,1]\). It is also possible to define the OOD detector directly as a function of tasks instead of a function of the output values of all classes of tasks, e.g., via the Mahalanobis distance. The following theorem shows that TP and OOD detection bound each other.
**Theorem 2**.: _i) If \(H_{TP}(x)\leq\delta\), let \(\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D)=\mathbf{P}(x\in\mathbf{X}_{k}|D)\), then \(H_{OOD,k}(x)\leq\delta,\forall\,k=1,\ldots,T\). ii) If \(H_{OOD,k}(x)\leq\delta_{k},k=1,\ldots,T\), let \(\mathbf{P}(x\in\mathbf{X}_{k}|D)=\frac{\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{ k}|D)}{\sum_{k}\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D)}\), then \(H_{TP}(x)\leq(\sum_{k}\mathbf{1}_{x\in\mathbf{X}_{k}}e^{\delta_{k}})(\sum_{k}1 -e^{-\delta_{k}})\), where \(\mathbf{1}_{x\in\mathbf{X}_{k}}\) is an indicator function._
See A.3 for the proof. As we use cross-entropy, the lower the bound, the better the performance is. The first statement (i) says that the OOD detection performance improves if the TP performance gets better (i.e., lower \(\delta\)). Similarly, the second statement (ii) says that the TP performance improves if the OOD detection performance on each task improves (i.e., lower \(\delta_{k}\)). Besides, since \((\sum_{k}\mathbf{1}_{x\in\mathbf{X}_{k}}e^{\delta_{k}})(\sum_{k}1-e^{-\delta_{k}})\) converges to 0 as \(\delta_{k}\)'s converge to 0 in order of \(O(|\sum_{k}\delta_{k}|)\), we further know that \(H_{TP}\) and \(\sum_{k}H_{OOD,k}\) are equivalent in quantity up to a constant factor.
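The sketch below (ours; the task layout and numbers are assumed) shows one of the choices mentioned above, the sigmoid of the maximum logit of a task's classes as \(\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D)\), together with the normalization used in part (ii) of Theorem 2 to turn per-task OOD probabilities into a task-id distribution.

```python
import torch

def task_ood_probability(all_logits, class_slices, k):
    """P'_k(x in X_k | D) as the sigmoid of the maximum logit among task k's classes."""
    return torch.sigmoid(all_logits[class_slices[k]].max())

def tp_from_ood(ood_probs):
    """Theorem 2 (ii): P(x in X_k | D) = P'_k / sum_k P'_k."""
    return ood_probs / ood_probs.sum()

# Hypothetical setup: 2 tasks with 2 classes each, logits over all 4 classes.
all_logits = torch.tensor([2.1, -0.3, -1.5, 0.2])
slices = [slice(0, 2), slice(2, 4)]
ood = torch.stack([task_ood_probability(all_logits, slices, k) for k in range(2)])
tp = tp_from_ood(ood)   # task-id distribution, usable in Eq. (2)
```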
In Theorem 1, we studied how CIL is related to WP and TP. In Theorem 2, we showed that TP and OOD bound each other. Now we explicitly give the upper bound of CIL in relation to WP and OOD detection of each task. The detailed proof can be found in A.4.
**Theorem 3**.: _If \(H_{OOD,k}(x)\leq\delta_{k},\,k=1,\ldots,T\) and \(H_{WP}(x)\leq\epsilon\), we have_
\[H_{CIL}(x)\leq\epsilon+(\sum_{k}\mathbf{1}_{x\in\mathbf{X}_{k}}e^{\delta_{k}} )(\sum_{k}1-e^{-\delta_{k}}),\]
_where \(\mathbf{1}_{x\in\mathbf{X}_{k}}\) is an indicator function._
### Necessary Conditions for Improving CIL
In Theorem 1, we showed that good performances of WP and TP are sufficient to guarantee a good performance of CIL. In Theorem 3, we showed that good performances of WP and OOD are sufficient to guarantee a good performance of CIL. For completeness, we study the necessary conditions of a well-performed CIL in this sub-section.
**Theorem 4**.: _If \(H_{CIL}(x)\leq\eta\), then there exist i) a WP, s.t. \(H_{WP}(x)\leq\eta\), ii) a TP, s.t. \(H_{TP}(x)\leq\eta\), and iii) an OOD detector for each task, s.t. \(H_{OOD,k}\leq\eta,\,k=1,\ldots,T\)._
The detailed proof is given in A.5. This theorem tells that if a good CIL model is trained, then a good WP, a good TP and a good OOD detector for each task are always implied. More importantly, by transforming Theorem 4 into its contraposition, we have the following statements: If for any WP, \(H_{WP}(x)>\eta\), then \(H_{CIL}(x)>\eta\). If for any TP, \(H_{TP}(x)>\eta\), then \(H_{CIL}(x)>\eta\). If for any OOD detector, \(H_{OOD,k}(x)>\eta\), \(k=1,\ldots,T\), then \(H_{CIL}(x)>\eta\). Regardless of whether WP and TP (or OOD detection) are defined explicitly or implicitly by a CIL algorithm, the existence of a good WP and the existence of a good TP or OOD detection are necessary conditions for a good CIL performance.
_Remark 5_.: It is important to note again that our study in this section is based on a CIL model that has already been built. In other words, our study tells the CIL designers what should be achieved in the final model. Clearly, one would also like to know how to design a strong CIL model based on the theoretical results, which also considers catastrophic forgetting (CF). One effective method is to make use of a strong existing TIL algorithm, which can already achieve no or little forgetting (CF), and combine it with a strong OOD detection algorithm (as mentioned earlier, most OOD detection methods can also perform WP). Thus, any improved method from the OOD detection community can be applied to CIL to produce improved CIL systems (see Sections 4.2.3 and 4.2.4).
Recall in Section 2, we reviewed prior works that have tried to use a TIL method for CIL with a task-id prediction method [45; 3; 49; 1; 22]. However, since they did not know that the key to the success of this approach is a strong OOD detection algorithm, they are quite weak (see Sections 4.2 and 5.2).
_Remark 6_.: Since good OOD detection for each task is necessary for CIL, our theory thus covers novelty (OOD) detection for open world learning.
## 4 Proposed Approach 1: Combining TIL and OOD Detection
Based on the above theoretical result, we designed two approaches (three different methods) to solve CIL that employ OOD detection. One more method will also be discussed in Section 4.2, which is a weaker post-processing OOD detection technique. This section presents the first approach, which combines a task incremental learning (TIL) method and an OOD detection method. The approach does not save any training data from previous tasks. In the next section, we present the second approach, which is based on replay and needs to save some training data from previous tasks.
### Combining a TIL Method and an OOD Detection Method
As mentioned earlier, several existing TIL methods can overcome CF. This proposed approach basically leverages the CF prevention ability in two TIL methods (HAT [55] and SupSup (Sup) [61]) and replaces their task learning methods with an OOD detection technique, called CSI [58], which can perform both within-task or IND prediction (WP) and OOD instance detection. Below, we first introduce the two TIL methods, HAT and SupSup, and the OOD detection method, CSI. The combinations give two new CIL
methods, HAT+CSI and Sup+CSI. None of these methods needs to save any data from previous tasks.
Figure 1 shows the overall training frameworks of HAT+CSI and Sup+CSI. Note that both HAT and Sup are multi-head methods (one head for each task) designed for task incremental learning (TIL).
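A minimal sketch of the test-time rule shown in Figure 1 (ours; `shared_backbone` and `task_heads` are placeholder modules, and in the actual methods the backbone is gated by HAT masks or supermasks): the per-task output vectors are concatenated and the argmax gives the CIL prediction, so no task-id is needed.

```python
import torch

def cil_predict_multihead(shared_backbone, task_heads, x):
    """Concatenate the score vectors of all task-specific heads and take the argmax."""
    features = shared_backbone(x)
    scores = torch.cat([head(features) for head in task_heads], dim=-1)
    return int(scores.argmax().item())   # a global class label over all classes learned so far
```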
#### 4.1.1 HAT: Hard Attention Masks
To prevent forgetting of the trained OOD detection model \(f^{k}\circ h^{k}\) for each task \(k\) in subsequent task learning, the hard attention mask (HAT) [55] for TIL is employed (which prevents forgetting in the feature extractor). Specifically, in learning a task, a set of embeddings are trained to protect the important neurons so that the corresponding parameters are not interfered by subsequent tasks. The importance of a neuron is measured by the 0-1 pseudo-step function, where 0 indicates not important and 1 indicates important (and thus protected).
Figure 1: Overview of prediction and training framework of HAT+CSI and Sup+CSI. **(a)** HAT+CSI: The CIL prediction is made by argmax over the concatenated output from each task. The training of each task uses CSI. That is, the training batch is augmented to give different views of the samples for contrastive training. The training consists of two steps following CSI. The first step learns the feature extractor by using the hard attention algorithm [55], which applies task embeddings to find hard masks at each layer. Then, given the learned feature representations, it fine-tunes the classifier in step 2. **(b)** Sup+CSI: The CIL prediction is also made by taking argmax over the concatenated output values from each task, as in HAT+CSI. The model training for each task is similar to HAT+CSI except that it uses the Edge Popup algorithm of SupSup [50] to find a sparse network for each task; the sparse networks are indicated by edges of different colors in the diagram. The second step fine-tunes the classifier only with the fixed feature extractor.

The hard attention mask is the output of a sigmoid function \(u\) with a hyper-parameter \(s\)
\[a_{l}^{k}=u(se_{l}^{k}), \tag{8}\]
where \(e_{l}^{k}\) is a learnable embedding at layer \(l\) for task \(k\). Since the step function is not differentiable, a sigmoid with a large \(s\), which closely approximates a 0-1 step function, is used instead. The attention is applied element-wise to the output \(h_{l}=\text{ReLU}(W_{l}h_{l-1}+b_{l})\) of layer \(l\),
\[h_{l}^{\prime}=a_{l}^{k}\otimes h_{l} \tag{9}\]
The \(j\)th element \(a_{j,l}^{k}\) in the attention mask blocks (or unblocks) the information flow from neuron \(j\) at layer \(l\) if its value is 0 (or 1). When \(a_{j,l}^{k}\) is 0, the corresponding parameters in \(W_{l}\) and \(b_{l}\) can be freely changed, as the output values \(h_{l}^{\prime}\) are not affected. The neurons with non-zero mask values are necessary to perform the task, and thus need to be protected against catastrophic forgetting.
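A sketch of Eqs. (8)-(9) in PyTorch (ours; the layer sizes, the number of tasks, and the value of \(s\) are assumptions): a learnable per-task embedding is squashed by a scaled sigmoid and gates the layer output element-wise.

```python
import torch
import torch.nn as nn

class HardAttentionLayer(nn.Module):
    """One fully connected layer gated by a task-conditioned hard attention mask."""
    def __init__(self, in_dim, out_dim, n_tasks):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.task_embed = nn.Embedding(n_tasks, out_dim)   # e_l^k, one row per task

    def forward(self, h_prev, task_id, s=400.0):
        e = self.task_embed(torch.as_tensor(task_id))
        a = torch.sigmoid(s * e)                 # Eq. (8): a_l^k = u(s * e_l^k)
        h = torch.relu(self.linear(h_prev))      # h_l = ReLU(W_l h_{l-1} + b_l)
        return a * h, a                          # Eq. (9): h'_l = a_l^k (x) h_l
```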
We modify the gradients of the parameters that are important for the previous tasks \((1,\cdots,k-1)\) during the training of task \(k\) so that they are not interfered with. Denote the accumulated mask by
\[a_{l}^{<k}=\max(a_{l}^{<k-1},a_{l}^{k-1}) \tag{10}\]
where \(\max\) is element-wise maximum and the initial mask \(a_{l}^{0}\) is a zero vector. It is a collection of mask values at layer \(l\) where a neuron has value 1 if it has ever been activated previously. The gradient of parameter \(w_{ij,l}\) is modified as
\[\nabla w_{ij,l}^{\prime}=\left(1-\min\left(a_{i,l}^{<k},a_{j,l-1}^{<k}\right) \right)\nabla w_{ij,l} \tag{11}\]
where \(a_{i,l}^{<k}\) is the \(i\)th unit of \(a_{l}^{<k}\). The gradient flow is blocked if both neuron \(i\) in the current layer and neuron \(j\) in the previous layer have been activated before. We apply the mask to all layers except the last layer. The parameters in the last layer do not need to be protected as they are task-specific.
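For concreteness, the following is a minimal PyTorch-style sketch of Eqs. 8-11 for a single linear layer; the layer sizes, the value of \(s\), and all variable names are illustrative assumptions rather than the implementation used in our experiments.

```python
import torch

def hard_attention_mask(e_l, s=400.0):
    # Eq. 8: a_l^k = sigmoid(s * e_l^k); a large s approximates a 0-1 step function.
    return torch.sigmoid(s * e_l)

def masked_layer_output(h_prev, W_l, b_l, e_l, s=400.0):
    # Eq. 9: apply the attention mask element-wise to the layer output.
    h_l = torch.relu(h_prev @ W_l.t() + b_l)
    return hard_attention_mask(e_l, s) * h_l

def gradient_mask(a_cum_out, a_cum_in):
    # Eq. 11: block gradients of weights whose output neuron (layer l) and input
    # neuron (layer l-1) were both important ("activated") in previous tasks.
    return 1.0 - torch.min(a_cum_out.unsqueeze(1), a_cum_in.unsqueeze(0))

# Toy usage on a single linear layer.
torch.manual_seed(0)
in_dim, out_dim = 8, 4
W = torch.randn(out_dim, in_dim, requires_grad=True)
b = torch.zeros(out_dim, requires_grad=True)
e = torch.zeros(out_dim, requires_grad=True)   # task embedding e_l^k
a_cum_out = torch.zeros(out_dim)               # a_l^{<k} (Eq. 10), initialized to zeros
a_cum_in = torch.zeros(in_dim)                 # a_{l-1}^{<k}

x = torch.randn(2, in_dim)
loss = masked_layer_output(x, W, b, e).sum()
loss.backward()
with torch.no_grad():
    W.grad *= gradient_mask(a_cum_out, a_cum_in)   # Eq. 11 applied to the gradient
```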
A regularization term is introduced to encourage sparsity in \(a_{l}^{k}\) and parameter sharing with \(a_{l}^{<k}\). The capacity of the network is depleted when \(a_{l}^{<k}\) becomes the all-ones vector in every layer. Although a set of new neurons can be added to the network at any point during training to gain more capacity, we utilize resources more efficiently by minimizing the loss
\[\mathcal{L}_{r}=\lambda\frac{\sum_{l}\sum_{i}a_{i,l}^{k}\left(1-a_{i,l}^{<k} \right)}{\sum_{l}\sum_{i}\left(1-a_{i,l}^{<k}\right)} \tag{12}\]
where \(\lambda\) is a hyper-parameter. The final objective to train a comprehensive task network without forgetting is
\[\mathcal{L}=\mathcal{L}_{ce}+\mathcal{L}_{r} \tag{13}\]
where \(\mathcal{L}_{ce}\) is the cross-entropy loss. The overall framework of the algorithm is shown in Figure 1(a).
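A minimal sketch of the regularizer in Eq. 12 and the objective in Eq. 13, assuming the per-layer masks are available as tensors (all names are ours):

```python
import torch

def hat_sparsity_loss(masks, prev_cum_masks, lam=0.75):
    # Eq. 12: penalize activating neurons that previous tasks have not used,
    # normalized by the number of still-unused neurons.
    num, den = 0.0, 0.0
    for a_k, a_prev in zip(masks, prev_cum_masks):   # one (a_l^k, a_l^{<k}) pair per layer
        num = num + (a_k * (1.0 - a_prev)).sum()
        den = den + (1.0 - a_prev).sum()
    return lam * num / den

def hat_objective(ce_loss, masks, prev_cum_masks, lam=0.75):
    # Eq. 13: cross-entropy plus the sparsity regularizer.
    return ce_loss + hat_sparsity_loss(masks, prev_cum_masks, lam)
```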
Note that for TIL, HAT needs the task-id for each test instance in order to choose the right task model for prediction or classification. However, by replacing the original model building method for each task in HAT with the OOD detection method in CSI (more specifically, \(\mathcal{L}_{c}\)) during training, HAT+CSI does not need to know the task-id of each test instance at inference, which makes HAT+CSI suitable for CIL (class incremental learning). We will see the detailed prediction/classification method in Section 4.2.
#### 4.1.2 SupSup: Supermasks in Superposition
SupSup (Sup) [61] is also a highly effective method that can overcome forgetting in the TIL setting. Sup trains supermasks using the Edge Popup algorithm [50]. More precisely, given the initial weights \(\mathbf{W}\), it finds a binary mask \(\mathbf{M}_{k}\) for task \(k\) that minimizes the cross-entropy loss
\[\mathcal{L}=-\frac{1}{|\mathbf{X}_{k}|}\sum\log p(y|x,k), \tag{14}\]
where \(\mathbf{X}_{k}\) is the training data for task \(k\), and
\[p(y|x,k)=f(h(x;\mathbf{W}\otimes\mathbf{M}_{k})), \tag{15}\]
where \(\otimes\) indicates element-wise product. The masks are obtained by selecting the top \(p\%\) of entries in the score matrices \(\mathbf{V}\). The \(p\) value determines the sparsity of the mask \(\mathbf{M}_{k}\). The subnetwork found by the Edge Popup algorithm is indicated by edges of different colors in Figure 1(b).
Like HAT, Sup is also for TIL and needs the task-id \(k\) of each test instance at inference. With \(k\), the system (which is referred to as Sup GG in the original Sup paper) uses the task-specific mask \(\mathbf{M}_{k}\) to obtain the classification output. Like HAT+CSI, by replacing the cross-entropy loss in mask finding with the OOD detection loss in CSI, Sup+CSI also does not require the task-id of each test instance, which makes Sup+CSI applicable to CIL (class incremental learning). We will discuss the detailed prediction/classification method in Section 4.2.
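A hedged sketch of the supermask selection for a single linear layer is given below; the fraction kept and all names are illustrative, and the straight-through training of the scores is omitted.

```python
import torch

def supermask_from_scores(scores, keep_frac=0.5):
    # Keep the top fraction of score entries as the binary mask M_k (Edge Popup selection).
    # `scores` plays the role of the score matrix V in the text.
    k = max(1, int(scores.numel() * keep_frac))
    threshold = torch.topk(scores.flatten(), k).values.min()
    return (scores >= threshold).float()

def masked_forward(x, W, scores, keep_frac=0.5):
    # Eq. 15 for a single linear layer: use W * M_k instead of W.
    M = supermask_from_scores(scores, keep_frac)
    return x @ (W * M).t()
```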
#### 4.1.3 CSI: Contrasting Shifted Instances for OOD Detection
We now explain the OOD detection method CSI. CSI builds on contrastive learning [13; 27] and data and class augmentations, which give it excellent OOD detection performance [58]. Since here we focus on how to learn a single task based on OOD detection, we omit the task-id unless necessary. The OOD training process is similar to that of contrastive learning. It consists of two steps: Step 1 learns the feature representation by the composite \(g\circ h\), where \(h\) is the feature extractor and \(g\) is the projection to the contrastive representation, and Step 2 learns/fine-tunes the linear classifier \(f\), mapping the feature representation of \(h\) to the label space (the classifier is the OOD model). This two-step training process is outlined in Figure 1(b). In the following, we first describe the two-step training process and then explain how an ensemble method is used to make and further improve predictions.
**Step 1 (Contrastive Loss for Feature Learning).** Supervised contrastive learning is used to repel data of different classes and pull data of the same class closer together, which makes the classes easier to separate. A key operation is data augmentation via transformations.
Given a batch of \(N\) samples, each sample \(x\) is first duplicated. Each version then goes through _three initial augmentations_ (horizontal flip, color changes, and Inception crop [57]) to generate two different views \(x^{1}\) and \(x^{2}\) (they keep the same class label as \(x\)). Denote the augmented batch by \(\mathcal{B}\), which now has \(2N\) samples. In [21] and [58], it was shown that using image rotations is effective in learning OOD detection models because such rotations can effectively serve as out-of-distribution (OOD) training data. For each augmented sample \(x\in\mathcal{B}\) with class \(y\) of a task, we rotate \(x\) by \(90^{\circ},180^{\circ},270^{\circ}\) to create three images, which are assigned _three new classes_ \(y_{1},y_{2}\), and \(y_{3}\), respectively. This results in a larger augmented batch \(\tilde{\mathcal{B}}\). Since we generate three new images from each \(x\), the size of \(\tilde{\mathcal{B}}\) is \(8N\). For each original class, we now have 4 classes. For a sample \(x\in\tilde{\mathcal{B}}\), let \(\tilde{\mathcal{B}}(x)=\tilde{\mathcal{B}}\backslash\{x\}\) and let \(P(x)\subset\tilde{\mathcal{B}}\backslash\{x\}\) be the set of samples that share the class of \(x\) but are distinct from \(x\). The contrastive representation of a sample \(x\) is \(z_{x}=g(h(x,k))/\|g(h(x,k))\|\), where \(k\) is the current task. In learning, we minimize the supervised contrastive loss.
\[\mathcal{L}_{c}=\frac{1}{8N}\sum_{x\in\tilde{\mathcal{B}}}\frac{-1}{|P(x)|} \sum_{p\in P(x)}\log\frac{\exp(z_{x}\cdot z_{p}/\tau)}{\sum_{x^{\prime}\in \tilde{\mathcal{B}}(x)}\exp(z_{x}\cdot z_{x^{\prime}}/\tau)}, \tag{16}\]
where \(\tau\) is a scalar temperature and \(\cdot\) denotes the dot product.
The loss is reduced by repelling \(z\) of different classes and aligning \(z\) of the same class more closely. \(\mathcal{L}_{c}\) basically trains a feature extractor with good representations for learning an OOD classifier.
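A minimal sketch of the loss in Eq. 16, assuming the representations \(z\) are already L2-normalized and the labels include the rotation classes (all names are ours):

```python
import torch

def sup_con_loss(z, labels, tau=0.07):
    # Eq. 16: supervised contrastive loss over the augmented batch.
    # z: (B, d) normalized contrastive representations; labels: (B,) class ids
    # (rotation classes count as separate classes).
    B = z.size(0)
    sim = z @ z.t() / tau                                   # z_x . z_x' / tau
    self_mask = torch.eye(B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))         # x itself is excluded from B(x)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask   # the set P(x)
    pos_counts = pos.sum(dim=1).clamp(min=1)
    return -(log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos_counts).mean()
```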
Since the feature extractor is shared across tasks in continual learning, a protection is needed to prevent catastrophic forgetting. HAT and Sup use their respective techniques to protect their feature extractor from forgetting. Therefore, the losses \(\mathcal{L}\) of Eq. 14 and \(\mathcal{L}_{ce}\) of Eq. 13 are replaced by Eq. 16 while the forgetting prevention mechanisms still hold.
**Step 2 (Fine-tuning the Classifier).** Given the feature extractor \(h\) trained with the loss in Eq. 16, we _freeze_\(h\) and only _fine-tune_ the linear classifier \(f\), which is trained to predict the classes of task \(k\)_and_ the augmented rotation classes. \(f\) maps the feature representation to the label space in \(\mathcal{R}^{4|\mathcal{C}^{k}|}\), where 4 is the number of rotation classes including the original data with \(0^{\circ}\) rotation and \(|\mathcal{C}^{k}|\) is the number of original classes in task \(k\). We minimize the cross-entropy loss,
\[\mathcal{L}_{\text{ft}}=-\frac{1}{|\tilde{\mathcal{B}}|}\sum_{(x,y)\in\tilde{ \mathcal{B}}}\log\tilde{p}(y|x,k), \tag{17}\]
where ft indicates fine-tune, and
\[\tilde{p}(y|x,k)=\text{softmax}\left(f(h(x,k))\right) \tag{18}\]
where \(f(h(x,k))\in\mathcal{R}^{4|\mathcal{C}^{k}|}\). The output \(f(h(x,k))\) includes the rotation classes. The linear classifier is trained to predict the original _and_ the rotation classes. Since an individual classifier is trained for each task and the feature extractor is frozen, no forgetting protection is necessary.
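A hedged sketch of the Step-2 fine-tuning loop, assuming `h` and `f` are `torch.nn.Module` instances whose call signatures match the text (all other names are ours):

```python
import torch
import torch.nn.functional as F

def finetune_classifier(f, h, loader, task_id, epochs=10, lr=1e-3):
    # Step 2 (Eqs. 17-18): freeze the feature extractor h and fine-tune only the
    # linear classifier f over the original and rotation classes.
    for p in h.parameters():
        p.requires_grad_(False)
    opt = torch.optim.SGD(f.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:            # y already carries the rotation class labels
            logits = f(h(x, task_id))  # h(x, k) as in the text
            loss = F.cross_entropy(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return f
```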
**Ensemble Class Prediction.** We now discuss the prediction of class label \(y\) for a test sample \(x\). Note that the network \(f\circ h\) in Eq. 18 returns logits for the rotation classes (including the original task classes). Note also that for each original class label \(j_{k}\in\mathcal{C}^{k}\) of task \(k\), we created three additional rotation classes. For class \(j_{k}\), the classifier \(f\) will produce four output values from its four rotation class logits, i.e., \(f_{j_{k},0}(h(x_{0},k))\), \(f_{j_{k},90}(h(x_{90},k))\), \(f_{j_{k},180}(h(x_{180},k))\), and \(f_{j_{k},270}(h(x_{270},k))\), where 0, 90, 180, and 270 represent \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\), and \(270^{\circ}\) rotations respectively and \(x_{0}\) is the original \(x\). We compute an ensemble output \(f(h(x,k))_{j_{k}}\) for each class \(j_{k}\in\mathcal{C}^{k}\) of task \(k\),
\[f(h(x,k))_{j_{k}}=\frac{1}{4}\sum_{\text{deg}}f(h(x_{\text{deg}},k))_{j_{k}, \text{deg}}. \tag{19}\]
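A minimal sketch of the ensemble in Eq. 19, assuming NCHW image tensors and a head whose \(4|\mathcal{C}^{k}|\) outputs are grouped into four rotation blocks (an ordering we assume purely for illustration):

```python
import torch

def rotation_ensemble_logits(f_h, x, task_id, num_classes):
    # Eq. 19: average, over the four rotations, the logit of each original class
    # taken at its matching rotation class.
    logits = 0.0
    for r in range(4):                                   # r * 90 degrees
        x_rot = torch.rot90(x, k=r, dims=(2, 3))
        out = f_h(x_rot, task_id)                        # shape (batch, 4 * num_classes)
        logits = logits + out[:, r * num_classes:(r + 1) * num_classes]
    return logits / 4.0
```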
### Experiments
We now present the experimental results of the combination techniques HAT+CSI and Sup+CSI for _class incremental learning_ (CIL). We will also use another OOD detection method, ODIN [36], to show that a better OOD detection method leads to better CIL results. We do not conduct extensive experiments on ODIN as it is much weaker than CSI in terms of OOD detection. Note that we will not report the OOD detection results for HAT+CSI and Sup+CSI in the open world in the continual learning process as the proposed method MORE in the next section performs better.
#### 4.2.1 Experimental Datasets and Baselines
**Datasets and CIL tasks**: We use three standard image classification benchmark datasets and construct five different CIL experiments.
**(1). CIFAR-10**[30]: This dataset consists of 32x32 color images of 10 classes with 50,000 training and 10,000 testing samples. We construct an experiment (**C10-5T**) of 5 tasks with 2 classes per task.
**(2). CIFAR-100**[30]: This dataset consists of 32x32 color images of 100 classes with 50,000 training and 10,000 testing samples. We construct two experiments of 10 tasks (**C100-10T**) and 20 tasks (**C100-20T**), where each task has 10 classes and 5 classes, respectively.
**(3). Tiny-ImageNet**[32]: This is an image classification dataset with 64x64 color images of 200 classes with 100,000 training and 10,000 validation samples. Since the dataset does not provide labels for testing data, we use the validation data for testing. We construct two experiments of 5 tasks (**T-5T**) and 10 tasks (**T-10T**) with 40 classes per task and 20 classes per task, respectively.
**Baselines**: We use 18 diverse continual learning baselines:
**(1).** One projection method (**OWM**[67]).
**(2).** Two exemplar-free (no replay data is saved) regularization methods (**MUC**[39] and **PASS**[69]).
**(3).** Nine replay-based methods (**LwF**[35], **iCaRL**[51], **A-GEM**[11], **EEIL**[8], **GD**[34], **Mnemonics**[40], **BiC**[62], **DER++**[7], and **HAL**[10]).
**(4).** Three parameter-isolation methods (**HAT**[55], **HyperNet**[45], and **SupSup**[61]).
**(5).** Additionally, we report the accuracies of replay-based method **Co\({}^{2}\)L**[9] and parameter isolation methods **CCG**[1] and **PR-Ent**[22] from their
original papers as CCG has not released the code and we are unable to run Co\({}^{2}\)L and PR-Ent on our machines.
#### 4.2.2 Training Details and Evaluation Metrics
**Training Details.** For the backbone structure, we follow [61; 69; 7] and use ResNet-18 [19]. For CIFAR-100 and Tiny-ImageNet, the number of channels is doubled to fit more classes. For all baselines, the same ResNet-18 backbone architecture is employed except for OWM and HyperNet, for which we use their original architectures. OWM uses AlexNet. It is not obvious how to apply its orthogonal projection technique to the ResNet structure. HyperNet uses ResNet-32 and we are unable to replace it due to model initialization arguments unexplained in the original paper. For the replay methods, we use a memory buffer of 200 for CIFAR-10 and 2000 for CIFAR-100 and Tiny-ImageNet as in [51; 7]. We use the hyper-parameters suggested by the authors. If we could not reproduce a result, we use 10% of the training data as a validation set to grid-search for good hyper-parameters. For our proposed methods, we report the hyper-parameters in F. All the results are averages over 5 runs with random seeds.
**Evaluation Metrics.**
**(1).**_Average classification accuracy_ over all classes after learning the last task. The final class prediction depends on the _prediction methods_ (see below). We also report the _forgetting rate_ in G.
**(2).**_Average AUC_ (Area Under the ROC Curve) over all task models for the evaluation of OOD detection. AUC is the main measure used in OOD detection papers. Using this measure, we show that a better OOD detection method will result in a better CIL performance. Let \(\textit{AUC}_{k}\) be the AUC score of task \(k\). It is computed by using only the model (or classes) of task \(k\) to score the test data of task \(k\) as the in-distribution (IND) data and the test data from other tasks as the out-of-distribution (OOD) data. The average AUC score is: \(\textit{AUC}=\sum_{k}\textit{AUC}_{k}/n\), where \(n\) is the number of tasks.
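For illustration, the average AUC can be computed with scikit-learn as in the hedged sketch below; the container names are our own.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def average_auc(task_scores, task_is_ind):
    # AUC_k: score the test data of task k (IND, label 1) and the test data of all
    # other tasks (OOD, label 0) with the model of task k, then average over tasks.
    # task_scores[k]: OOD scores produced by task k's model for all test samples.
    # task_is_ind[k]: binary labels (1 if the sample belongs to task k).
    aucs = [roc_auc_score(y, s) for s, y in zip(task_scores, task_is_ind)]
    return float(np.mean(aucs))
```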
It is not straightforward to change existing CL algorithms to include a new OOD detection method that needs training, e.g., CSI, except for TIL (task incremental learning) methods like HAT and Sup. For HAT and Sup, we can simply switch their methods for learning each task with CSI (see Section 4.1.1 and Section 4.1.2).
**Prediction Methods.** The theoretical result in Section 3 states that we use Eq. 2 to perform the final prediction. The first probability (WP) in Eq. 2 is easy to get as we can simply use the softmax values of the classes in
each task. However, the second probability (TP) in Eq. 2 is tricky as each task is learned without the data of other tasks. There can be many options. We take the following approaches for prediction (which are a special case of Eq. 2, see below):
**(1).** For those approaches that use a single classification head to include all classes learned so far, we predict as follows (which is also the approach taken by existing papers):
\[\hat{y}=\arg\max f(x) \tag{20}\]
where \(f(x)\) is the logit output of the network.
**(2).** For multi-head methods (e.g., HAT, HyperNet, and Sup), which use one head for each task, we use the concatenated output as
\[\hat{y}=\arg\max\bigoplus_{k}f(x)_{k} \tag{21}\]
where \(\bigoplus\) indicates concatenation and \(f(x)_{k}\) is the output of task \(k\).4
Footnote 4: The Sup paper proposed a one-shot task-id prediction assuming that the test instances come in a batch and all belong to the same task, as in iTAML. We assume a single test instance per batch. Its task-id prediction results in an accuracy of 50.2 on C10-5T, which is much lower than 62.6 by using Eq. 21. The task-id prediction of HyperNet also works poorly. The accuracy by its task-id prediction is 49.34 on C10-5T while it is 53.4 using Eq. 21. PR uses entropy to find the task-id. Among the many variations of PR, we use the variations that perform the best for each dataset with exemplar-free training and a single sample per batch at testing (i.e., no PR-BW).
These methods (in fact, they are the same method used in two different settings) are a special case of Eq. 2 if we define \(OOD_{k}\) as \(\sigma(\max f(x)_{k})\), where \(\sigma\) is the sigmoid. Hence, the theoretical results in Section 3 are still applicable. We present a detailed explanation about this prediction method and some other options in C. These two approaches work quite well.
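A minimal sketch of the concatenated-argmax prediction in Eqs. 20-21 (all names are ours):

```python
import torch

def cil_predict(task_logits):
    # Eqs. 20-21: concatenate the per-task outputs f(x)_k and take the argmax
    # over all classes seen so far (single-head methods reduce to Eq. 20).
    # task_logits: list of tensors, one per task, each of shape (batch, |C^k|).
    return torch.cat(task_logits, dim=1).argmax(dim=1)

def to_task_and_class(global_idx, class_counts):
    # Map a global class index back to (task id, within-task class index).
    offset = 0
    for k, c in enumerate(class_counts):
        if global_idx < offset + c:
            return k, global_idx - offset
        offset += c
    raise ValueError("index out of range")
```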
#### 4.2.3 Better OOD Detection Produces Better CIL Performance
The key theoretical result in Section 3 is that better OOD detection will produce better CIL performance. We compare a weaker OOD method, ODIN, with the strong CSI. ODIN is a post-processing method for OOD detection [36]. Note that it does not always improve the OOD detection performance compared to not using the ODIN post-processing (see below).
**Applying ODIN.** We first train the baseline models using their original algorithms, and then apply temperature scaling and input noise of ODIN at testing for each task (no training data needed). More precisely, the output of class \(j\) in task \(k\) changes by temperature scaling factor \(\tau_{k}\) of task \(k\) as
\[s(x;\tau_{k})_{j}=e^{f(x)_{kj}/\tau_{k}}/\sum_{j^{\prime}}e^{f(x)_{kj^{\prime}}/\tau_{k}} \tag{22}\]
and the input changes by the noise factor \(\epsilon_{k}\) as
\[\tilde{x}=x-\epsilon_{k}\text{sign}(-\nabla_{x}\log s(x;\tau_{k})_{\hat{y}}) \tag{23}\]
where \(\hat{y}\) is the class with the maximum output value in task \(k\). This is a positive adversarial example inspired by [16]. The values \(\tau_{k}\) and \(\epsilon_{k}\) are hyper-parameters and we use the same values for all tasks except for PASS, for which we use a validation set to tune \(\tau_{k}\) (see B).
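The following is a hedged sketch of Eqs. 22-23 for a single task head; `model`, `tau`, and `eps` are illustrative names, and the hyper-parameter values shown are not necessarily the ones used in our experiments.

```python
import torch

def odin_score(model, x, tau=1000.0, eps=0.0014):
    # Eqs. 22-23: temperature-scaled softmax plus a small "positive adversarial"
    # input perturbation that pushes the input toward its predicted class.
    # `model` returns the logits f(x)_k of one task.
    x = x.clone().detach().requires_grad_(True)
    log_probs = torch.log_softmax(model(x) / tau, dim=1)
    log_probs.max(dim=1).values.sum().backward()
    with torch.no_grad():
        x_tilde = x - eps * torch.sign(-x.grad)               # Eq. 23
        scores = torch.softmax(model(x_tilde) / tau, dim=1)   # Eq. 22
    return scores.max(dim=1).values
```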
Table 1 gives the results for C100-10T. The CIL results clearly show that the CIL performance increases if the AUC increases with ODIN. For instance,
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{AUC} & \multicolumn{2}{c}{CIL} \\ Method & Original & ODIN & Original & ODIN \\ \hline OWM & 71.31 & 70.06 & 28.91 & 28.88 \\ \hline MUC & 72.69 & 72.53 & 30.42 & 29.79 \\ PASS & 69.89 & 69.60 & 33.00 & 31.00 \\ \hline LwF & 88.30 & 87.11 & 45.26 & 51.82 \\ A-GEM & 78.01 & 79.00 & 9.29 & 13.48 \\ EEIL & 83.37 & 79.73 & 48.99 & 41.74 \\ GD & 85.37 & 82.98 & 49.67 & 47.28 \\ BiC & 87.89 & 86.73 & 52.92 & 48.65 \\ DER++ & 85.99 & 88.21 & 53.71 & 55.29 \\ HAL & 64.21 & 64.83 & 15.59 & 21.01 \\ \hline HAT & 77.72 & 77.80 & 41.06 & 41.21 \\ HyperNet & 71.82 & 72.32 & 30.23 & 30.83 \\ Sup & 79.16 & 80.58 & 44.58 & 46.74 \\ \hline \hline \end{tabular} Performance comparison based on C100-10T between the original output and the output post-processed with OOD detection technique ODIN. Note that ODIN is not applicable to iCaRL and Mnemonics as they are not based on softmax but some distance functions. As mentioned earlier, for **Co\({}^{2}\)L**, **CCG**, and **PR-Ent**, they either have no code or their codes do not run on our machine. The results for other datasets are in B.
\end{table}
Table 1: Performance Comparison between the Original Output and ODIN
the CIL of DER++ and Sup improves from 53.71 to 55.29 and 44.58 to 46.74, respectively, as the AUC increases from 85.99 to 88.21 and 79.16 to 80.58. It shows that when ODIN is incorporated into each task model of an existing trained CIL network, the CIL performance of the original method improves. We note that ODIN does not always improve the average AUC. For those methods that experienced a decrease in AUC, the CIL performance also decreases, except for LwF. The inconsistency of LwF is due to its severe classification bias towards later tasks as discussed in BiC [62]. The temperature scaling in ODIN has a similar effect as the bias correction in BiC, and the CIL of LwF becomes close to that of BiC after the correction. Regardless of whether ODIN improves AUC or not, the positive correlation between AUC and CIL (except LwF) verifies the efficacy of Theorem 3, indicating that better OOD detection results in better CIL performance.
**Applying CSI.** We now apply the OOD detection method CSI. Due to its sophisticated data augmentation, supervised contrastive learning, and result ensembling, it is hard to apply CSI to the other baselines without fundamentally changing them, except for HAT and Sup (SupSup), as these are parameter-isolation-based TIL methods. We can simply replace their model for training each task with CSI wholesale. As mentioned earlier, both HAT and Sup as TIL methods have almost no forgetting.
Table 2 reports the results of using CSI and ODIN. ODIN is a weaker OOD method than CSI. Both HAT and Sup improve greatly as the systems are equipped with a better OOD detection method CSI. These experiment results empirically demonstrate the efficacy of Theorem 3, i.e., the CIL performance can be improved if a better OOD detection method is used.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline CL & OOD & C10-5T & C100-10T & C100-20T & T-5T & T-10T \\ & & AUC & CIL & AUC & CIL & AUC & CIL & AUC & CIL & AUC & CIL \\ \hline \multirow{2}{*}{HAT} & ODIN & 82.5 & 62.6 & 77.8 & 41.2 & 75.4 & 25.8 & 72.3 & 38.6 & 71.8 & 30.0 \\ & CSI & 91.2 & 87.8 & 84.5 & 63.3 & 86.5 & 54.6 & 76.5 & 45.7 & 78.5 & 47.1 \\ \hline \multirow{2}{*}{Sup} & ODIN & 82.4 & 62.6 & 80.6 & 46.7 & 81.6 & 36.4 & 74.0 & 41.1 & 74.6 & 36.5 \\ & CSI & 91.6 & 86.0 & 86.8 & 65.1 & 88.3 & 60.2 & 77.1 & 48.9 & 79.4 & 45.7 \\ \hline \hline \end{tabular} Average CIL and AUC of HAT and Sup with OOD detection methods ODIN and CSI. ODIN is a traditional OOD detection method while CSI is a recent OOD detection method known to be better than ODIN. As CL methods produce better OOD detection performance by CSI, their CIL performances are better than the ODIN counterparts.
\end{table}
Table 2: Average CIL and AUC after Applying OOD Detection Methods ODIN and CSI
Average accuracy after all tasks are learned. The baselines are grouped into (a), (b), (c), and (d) for projection, regularization, replay, and parameter-isolation methods, respectively. Our proposed methods are grouped into (e). Exemplar-free methods are italicized. \(\dagger\) indicates that in their original papers, PASS and Mnemonics are pre-trained with the first half of the classes. Their results with pre-train are 50.1 and 53.5 on C100-10T, respectively, which are still much lower than the proposed HAT+CSI and Sup+CSI without pre-training. We do not use pre-training in our experiment for fairness. \(*\) indicates that iCaRL and Mnemonics report average incremental accuracy in their original papers. We report average accuracy over all classes after all tasks are learned. The last column Avg. shows the average CIL accuracy of each method over all datasets.
#### 4.2.4 Full Comparison of HAT+CSI and Sup+CSI with Baselines
We now make a full comparison of the two proposed systems HAT+CSI and Sup+CSI designed based on the theoretical results with baselines. Since HAT and Sup are exemplar-free CL methods, HAT+CSI and Sup+CSI do not need to save any previous task data for replaying. Table 3 shows that HAT and Sup equipped with CSI outperform the baselines by large margins. DER++, the best replay method, achieves 66.0 and 53.7 on C10-5T and C100-10T, respectively, while HAT+CSI achieves 87.8 and 63.3 and Sup+CSI achieves 86.0 and 65.1. The large performance gap remains consistent in more challenging problems, T-5T and T-10T.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline & Method & C10-5T & C100-10T & C100-20T & T-5T & T-10T & Avg. \\ \hline \hline (a) & _OWM_ & 51.8\(\pm\)0.05 & 28.9\(\pm\)0.60 & 24.1\(\pm\)0.26 & 10.0\(\pm\)0.55 & 8.6\(\pm\)0.42 & 24.7 \\ \hline (b) & _MUC_ & 52.9\(\pm\)1.03 & 30.4\(\pm\)1.18 & 14.2\(\pm\)0.30 & 33.6\(\pm\)0.19 & 17.4\(\pm\)0.17 & 29.7 \\ & _PASS\({}^{\dagger}\)_ & 47.3\(\pm\)0.98 & 33.0\(\pm\)0.58 & 25.0\(\pm\)0.69 & 28.4\(\pm\)0.51 & 19.1\(\pm\)0.46 & 30.6 \\ \hline \multirow{8}{*}{(c)} & LwF & 54.7\(\pm\)1.18 & 45.3\(\pm\)0.75 & 44.3\(\pm\)0.46 & 32.2\(\pm\)0.50 & 24.3\(\pm\)0.26 & 40.2 \\ & iCaRL\({}^{*}\) & 63.4\(\pm\)1.11 & 51.4\(\pm\)0.99 & 47.8\(\pm\)0.48 & 37.0\(\pm\)0.41 & 28.3\(\pm\)0.18 & 45.6 \\ & A-GEM & 20.0\(\pm\)0.37 & 9.3\(\pm\)0.17 & 4.1\(\pm\)0.89 & 13.5\(\pm\)0.08 & 7.7\(\pm\)0.07 & 10.9 \\ & EEIL & 57.1\(\pm\)0.28 & 49.0\(\pm\)1.27 & 33.5\(\pm\)0.08 & 14.7\(\pm\)0.40 & 9.8\(\pm\)0.19 & 32.8 \\ & GD & 58.7\(\pm\)0.31 & 49.7\(\pm\)0.33 & 38.9\(\pm\)0.02 & 16.4\(\pm\)1.40 & 11.7\(\pm\)0.25 & 35.1 \\ & Mnemonics\({}^{\dagger*}\) & 64.1\(\pm\)1.47 & 51.0\(\pm\)0.34 & 47.6\(\pm\)0.74 & 37.1\(\pm\)0.46 & 28.5\(\pm\)0.72 & 45.7 \\ & BiC & 61.4\(\pm\)1.74 & 52.9\(\pm\)0.64 & 48.9\(\pm\)0.54 & 41.7\(\pm\)0.74 & 33.8\(\pm\)0.40 & 47.7 \\ & DER++ & 66.0\(\pm\)1.20 & 53.7\(\pm\)1.30 & 46.6\(\pm\)1.44 & 35.8\(\pm\)0.77 & 30.5\(\pm\)0.47 & 46.5 \\ & HAL & 32.8\(\pm\)2.17 & 15.6\(\pm\)0.31 & 13.5\(\pm\)1.53 & 3.4\(\pm\)0.35 & 3.4\(\pm\)0.38 & 13.7 \\ & Co\({}^{2}\)L & 65.6 & & & & & \\ \hline \multirow{8}{*}{(d)} & _CCG_ & 70.1 & & & & & \\ \cline{2-6} & _HAT_ & 62.7\(\pm\)1.45 & 41.1\(\pm\)0.93 & 25.6\(\pm\)0.51 & 38.5\(\pm\)1.85 & 29.8\(\pm\)0.65 & 39.5 \\ & _HyperNet_ & 53.4\(\pm\)2.19 & 30.2\(\pm\)1.54 & 18.7\(\pm\)1.10 & 7.9\(\pm\)0.69 & 5.3\(\pm\)0.50 & 23.1 \\ \cline{1-1} & _Sup_ & 62.4\(\pm\)1.45 & 44.6\(\pm\)0.44 & 34.7\(\pm\)0.30 & 41.8\(\pm\)1.50 & 36.5\(\pm\)0.36 & 44.0 \\ \cline{1-1} & _PR-Ent_ & 61.9 & 45.2 & & & & \\ \hline \multirow{8}{*}{(e)} & _HAT+CSI_ & 87.8\(\pm\)0.71 & 63.3\(\pm\)1.00 & 54.6\(\pm\)0.92 & 45.7\(\pm\)0.26 & 47.1\(\pm\)0.18 & 59.7 \\ \cline{1-1} & _Sup+CSI_ & 86.0\(\pm\)0.41 & 65.1\(\pm\)0.39 & 60.2\(\pm\)0.51 & 48.9\(\pm\)0.25 & 45.7\(\pm\)0.76 & 61.2 \\ \cline{1-1} & HAT+CSI+c & 88.0\(\pm\)0.48 & 65.2\(\pm\)0.71 & 58.0\(\pm\)0.45 & 51.7\(\pm\)0.37 & 47.6\(\pm\)0.32 & 62.1 \\ \cline{1-1} & Sup+CSI+c & 87.3\(\pm\)0.37 & 65.2\(\pm\)0.37 & 60.5\(\pm\)0.64 & 49.2\(\pm\)0.28 & 46.2\(\pm\)0.53 & 61.7 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average Accuracy (CIL) of All Methods
Due to the definition of OOD in the prediction method and the fact that each task is trained separately in HAT and Sup, the outputs \(f(x)_{k}\) from different tasks can be on different scales, which will result in incorrect predictions. To deal with the problem, we can calibrate the output as \(\alpha_{k}f(x)_{k}+\beta_{k}\) and use \(OOD_{k}=\sigma(\alpha_{k}f(x)_{k}+\beta_{k})\). The optimal \(\alpha_{k}^{*}\) and \(\beta_{k}^{*}\) for each task \(k\) can be found by optimization with a memory buffer that saves a very small number of training examples from previous tasks, like the replay-based methods. We refer to the calibrated methods as HAT+CSI+c and Sup+CSI+c. They are trained using a memory buffer of the same size as the replay methods (see Section 4.2.2). Table 3 shows that calibration improves over the memory-free versions, i.e., without calibration. We provide the details about how to train the calibration parameters \(\alpha_{k}\) and \(\beta_{k}\) in D.
As shown in Theorem 1, the CIL performance also depends on the TIL (WP) performance. We compare the TIL accuracies of the baselines and our methods in Table 4. Our systems again outperform the baselines by large margins on more challenging datasets (e.g., CIFAR100 and Tiny-ImageNet).
## 5 Proposed Approach 2: Out-of-Distribution Replay
The approach presented above does not save any training data from previous tasks except for the optional step of calibration. The method presented in this section is based on the replay approach to solving CIL, which saves a small number of training data from each previous task. The proposed method is called _M_ulti-head model for continual learning via _O_OD _RE_play (MORE).
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & C10-5T & C100-10T & C100-20T & T-5T & T-10T & Avg. \\ \hline BiC & 95.4\(\pm\)0.35 & 84.6\(\pm\)0.48 & 88.7\(\pm\)0.19 & 61.5\(\pm\)0.60 & 62.2\(\pm\)0.45 & 78.5 \\ HAT & 96.7\(\pm\)0.18 & 84.0\(\pm\)0.23 & 85.0\(\pm\)0.98 & 61.2\(\pm\)0.72 & 63.8\(\pm\)0.41 & 78.1 \\ Sup & 96.6\(\pm\)0.21 & 87.9\(\pm\)0.27 & 91.6\(\pm\)0.15 & 64.3\(\pm\)0.24 & 68.4\(\pm\)0.22 & 81.8 \\ \hline HAT+CSI & 98.7\(\pm\)0.06 & 92.0\(\pm\)0.37 & 94.3\(\pm\)0.06 & 68.4\(\pm\)0.16 & 72.4\(\pm\)0.21 & 85.2 \\ Sup+CSI & 98.7\(\pm\)0.07 & 93.0\(\pm\)0.13 & 95.3\(\pm\)0.20 & 65.9\(\pm\)0.25 & 74.1\(\pm\)0.28 & 85.4 \\ \hline \hline \end{tabular}
* TIL (WP) results of 3 best performing baselines and our methods. The full results are given in E. The calibrated versions (+c) of our methods are omitted as calibration does not affect TIL performances.
\end{table}
Table 4: TIL Accuracy
### The Proposed MORE Technique
Recall a replay-based method for continual learning works by memorizing or saving a small subset of the training samples from each previous task in a memory buffer. The saved data is called the _replay data_. In learning a new task, the new task data and the replay data are trained jointly to update the model. Clearly, using the replay data can partially deal with the inter-class separation (ICS) problem because the model sees some data from all classes learned so far. However, it cannot solve the ICS problem completely because the amount of the replay data is often very small.
Unlike existing replay-based CIL methods, which simply use the replay data to update the decision boundaries between the old class and the new classes (in the new task), the proposed method uses the replay data to build an OOD detection model for each task in continual learning, which gives the name of the proposed method, i.e., _out-of-distribution replay_. Further, unlike existing OOD detection methods, which usually do not use any OOD data in training, the proposed method uses the replay data from previous tasks as the OOD data for the current new task in building its OOD detection model.
Unlike HAT+CSI and Sup+CSI, which do not use a pre-trained network, MORE trains a multi-head network as an adapter [23] to a pre-trained network (see Figure 2(b)). Note that using a pre-trained transformer network and adapter modules is a common practice in existing continual learning methods in the natural language processing community [26]. Here we also leverage this approach for image classification tasks. In continual learning, the pre-trained network is frozen, only the adapters and the norm layers are trainable. Similar to HAT+CSI, hard attention mask (HAT) is again employed to protect each task model or classifier to avoid forgetting. Each head is also an OOD detection model for a task, but, as mentioned above, MORE uses the replay data as the OOD data to build an OOD detection model. Since HAT has been described in Section 4.1.1, we will not discuss it further except to state that we need to use \(\mathcal{L}_{ood}\) in Eq. 24 to replace \(\mathcal{L}_{ce}\) in Eq. 13 after incorporating the trainable embedding \(\mathbf{e}^{k}\). We describe the whole training and prediction process in H.
#### 5.1.1 Training an OOD Detection Model
At task \(k\), the system receives the training data \(\mathcal{D}_{k}=\{(x_{k}^{i},y_{k}^{i})_{i=1}^{n_{k}}\}\), where \(n_{k}\) is the number of samples, and \(x_{k}^{i}\in\mathbf{X}_{k}\) is an input sample and \(y_{k}^{i}\in\mathbf{Y}_{k}\) (the set of all classes of task \(k\)) is its class label. We train the feature extractor \(z=h(x,k;\theta)\) and task specific classifier \(f(z;\phi_{k})\) using \(\mathcal{D}_{k}\) and the samples in
the memory buffer \(\mathcal{M}\). We treat the buffer data as OOD data to encourage the network to learn the current task and also detect OOD samples (the models or classifiers of the previous tasks are not touched). We achieve it by maximizing \(p(y|x,k)=\text{softmax}(f(h(x,k;\theta);\phi_{k}))\) for an IND sample \(x\in\mathbf{X}_{k}\) and maximizing \(p(ood|x,k)\) for an OOD sample \(x\in\mathcal{M}\). The additional label \(ood\) is reserved for previous and possibly future unseen classes. Figure 2(a) shows the overall idea of the proposed approach. We formulate the problem as follows.
Given the training data \(\mathbf{X}_{k}\) of size \(n_{k}\) at task \(k\) and the memory buffer \(\mathcal{M}\) of size \(M\), we minimize the loss
\[\mathcal{L}_{ood}(\theta,\phi_{k})=-\frac{1}{M+n_{k}}\left(\sum_{(x,y)\in\mathcal{M}}\log p(ood|x,k)+\sum_{(x,y)\in\mathcal{D}_{k}}\log p(y|x,k)\right) \tag{24}\]
It is the sum of two cross-entropy losses. The first loss is for learning OOD samples while the second loss is for learning the classes from the current task. We optimize the shared parameter \(\theta\) in the feature extractor. The task specific classification parameters \(\phi_{k}\) are independent of other tasks. The learned representation on the current data should be robust to OOD data. The classifier thus can classify both IND and OOD data.
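A minimal sketch of the loss in Eq. 24 over one batch of current-task data and one batch of buffer data; the function and argument names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def more_task_loss(logits_ind, labels_ind, logits_ood, ood_index):
    # Eq. 24: cross-entropy on the current task's data plus cross-entropy that
    # pushes buffer (replay) samples to the extra "ood" class of this head.
    # logits_* have |Y_k| + 1 columns; ood_index is the index of the ood class.
    loss_ind = F.cross_entropy(logits_ind, labels_ind, reduction='sum')
    ood_targets = torch.full((logits_ood.size(0),), ood_index,
                             dtype=torch.long, device=logits_ood.device)
    loss_ood = F.cross_entropy(logits_ood, ood_targets, reduction='sum')
    return (loss_ind + loss_ood) / (logits_ind.size(0) + logits_ood.size(0))
```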
Figure 2: (a) At task \(k\), we train the feature extractor and the classifier of task \(k\). The output values of the classifier correspond to \(|\mathbf{Y}_{k}|+1\) classes, in which the last class is for OOD (i.e., representing previous and unseen future classes). At inference/testing, the probability values of each task model without the OOD class are concatenated and the system chooses the class with the maximum score. (b) Transformer and adapter module. The masked adapter network consists of 2 fully connected layers and task-specific masks. During training, only the masked adapters and norm layers are updated and the other parts of the transformer layers remain unchanged.

In testing, we perform prediction by comparing the softmax probability output values of all the task classifiers from task 1 to \(k\), excluding the OOD class, as
\[\hat{y}=\arg\max\bigoplus_{1\leq j\leq k}p(\mathbf{Y}_{j}|x,j) \tag{25}\]
where \(\bigoplus\) is the concatenation over the output space. Figure 2(a) shows the prediction rule. We are basically choosing the class with the highest softmax probability over all classes from all learned tasks.
#### 5.1.2 Back-Updating the Previous OOD Models
Each task model works better if more diverse OOD data is provided during training. As in a replay-based approach, MORE saves an equal number of samples per class after each task [12]. The saved samples in the memory are used as OOD samples for each new task. Thus, in the beginning of continual learning when the system is trained on only a small number of tasks, the classes of samples in the memory are less diverse than after more tasks are learned. This makes the performance of OOD detection stronger for later tasks, but weaker in earlier tasks. To prevent this asymmetry, we update the model of each previous task so that it can also identify the samples from subsequent classes (which were unseen during the training of the previous task) as OOD samples.
At task \(k\), we update the previous task models (\(j=1,\cdots,k-1\)) as follows. Denote the samples of task \(j\) in the memory \(\mathcal{M}\) by \(\tilde{\mathcal{D}}_{j}\). We construct a new dataset using the current task dataset and the samples in the memory buffer. We randomly select \(|\mathcal{M}|\) samples from the training data \(\mathcal{D}_{k}\) and pool them with the remaining samples in \(\mathcal{M}\) after removing the IND samples \(\tilde{\mathcal{D}}_{j}\) of task \(j\) from \(\mathcal{M}\). We do not use the entire training data \(\mathcal{D}_{k}\) as we do not want a large sample imbalance between IND and OOD. Denote the new dataset by \(\tilde{\mathcal{M}}\). Using this data, we update only the parameters \(\phi_{j}\) of the classifier for task \(j\), with the feature representations frozen, by minimizing the loss
\[\mathcal{L}(\phi_{j})=-\frac{1}{2M}\left(\sum_{(x,y)\in\tilde{ \mathcal{M}}}\log p(ood|x,j)+\sum_{(x,y)\in\tilde{\mathcal{D}}_{j}}\log p(y|x, j)\right) \tag{26}\]
We reduce the loss by updating the parameters of classifier \(j\) to maximize the probability of the class if the sample belongs to task \(j\) and maximize the OOD probability otherwise.
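Below is a hedged sketch of how the back-updating dataset could be assembled before minimizing Eq. 26; the (x, y, task) memory format and all names are our own assumptions.

```python
import random

def build_back_update_data(memory, current_data, task_j):
    # Keep task j's memory samples as IND data; treat the other memory samples
    # plus |M| random current-task samples as OOD data for Eq. 26.
    ind = [(x, y) for (x, y, t) in memory if t == task_j]
    ood = [(x, y) for (x, y, t) in memory if t != task_j]
    n_extra = min(len(memory), len(current_data))
    ood += list(random.sample(current_data, n_extra))
    return ind, ood
```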
#### 5.1.3 Improving Prediction Performance by a Distance Based Technique
We further improve the prediction in Eq. 25 by introducing a distance based factor used as a coefficient to the softmax probabilities in Eq. 25. It is quite intuitive that if a test instance is close to a class, it is more likely to belong to the class. We thus propose to combine this new distance factor and the softmax probability output of the task \(j\) model to make the final prediction decision. In some sense, this can be considered as an ensemble of the two methods.
We define the distance based coefficient \(s_{j}(x)\) of task \(j\) for the test instance \(x\) as the maximum of the inverse Mahalanobis distances [33] between the feature of \(x\) and the Gaussian distributions of the classes in task \(j\), each parameterized by the mean \(\mu_{j}^{i}\) of class \(i\) in task \(j\) and the shared sample covariance \(S_{j}\). They are estimated from the features of class \(i\)'s training data for each class \(i\) in task \(j\). If a test instance is from the task, its feature should be close to the distribution that the instance belongs to. Conversely, if the instance is OOD to the task, its feature should not be close to any of the distributions of the classes in the task. More precisely, for task \(j\) with classes \(y_{1},\cdots,y_{|\mathbf{Y}_{j}|}\) (where \(|\mathbf{Y}_{j}|\) represents the number of classes in task \(j\)), we define the coefficient \(s_{j}(x)\) as
\[s_{j}(x)=\max\left[1/\mathrm{MD}(x;\mu_{j}^{y_{1}},S_{j}),\cdots,1/\mathrm{MD} (x;\mu_{j}^{y_{|\mathbf{Y}_{j}|}},S_{j})\right] \tag{27}\]
\(\mathrm{MD}(x;\mu_{j}^{i},S_{j})\) is the Mahalanobis distance. The coefficient is large if at least one of the Mahalanobis distances is small but the coefficient is small if all the distances are large (i.e. the feature is far from all the distributions of the task). The parameters \(\mu_{j}^{i}\) and \(S_{j}\) can be computed and saved when each task is learned. The mean \(\mu_{j}^{i}\) is computed using the training samples \(\mathcal{D}_{j}^{i}\) of class \(i\) as follows,
\[\mu_{j}^{i}=\sum_{x\in\mathcal{D}_{j}^{i}}h(x,j)/|\mathcal{D}_{j}^{i}| \tag{28}\]
and the covariance \(S_{j}\) of task \(j\) is the mean of covariances of the classes in task \(j\),
\[S_{j}=\sum_{i\in\mathbf{Y}_{j}}S_{j}^{i}/|\mathbf{Y}_{j}| \tag{29}\]
where \(S_{j}^{i}=\sum_{x\in\mathcal{D}_{j}^{i}}(h(x,j)-\mu_{j}^{i})^{T}(h(x,j)-\mu_{j}^{i})/|\mathcal{D}_{j}^{i}|\) is the sample covariance of class \(i\). By multiplying the coefficient \(s_{j}(x)\) with the original softmax probabilities
\(p(\mathbf{Y}_{j}|x,j)\), the task output \(p(\mathbf{Y}_{j}|x,j)s_{j}(x)\) increases if \(x\) is from task \(j\) and decreases otherwise. The final prediction is made by (which replaces Eq. 25)
\[y=\arg\max\bigoplus_{1\leq j\leq k}p(\mathbf{Y}_{j}|x,j)s_{j}(x), \tag{30}\]
where \(k\) is the last task that we have learned.
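A minimal sketch of Eqs. 27-30, assuming per-task class feature means, a shared per-task covariance, and per-task softmax probabilities are available; all names are illustrative.

```python
import torch

def task_coefficient(feat, means, cov):
    # Eq. 27: maximum inverse Mahalanobis distance of a feature to the class
    # Gaussians of one task. means: (|Y_j|, d); cov: (d, d) shared within the task.
    cov_inv = torch.linalg.inv(cov)
    diffs = feat.unsqueeze(0) - means                        # (|Y_j|, d)
    md = torch.sqrt(torch.einsum('cd,de,ce->c', diffs, cov_inv, diffs))
    return (1.0 / md).max()

def more_predict(feat_per_task, probs_per_task, means_per_task, cov_per_task):
    # Eq. 30: scale each task's softmax probabilities (without the ood class) by
    # its distance coefficient, concatenate, and take the argmax.
    scored = []
    for feat, probs, means, cov in zip(feat_per_task, probs_per_task,
                                       means_per_task, cov_per_task):
        scored.append(probs * task_coefficient(feat, means, cov))
    return int(torch.cat(scored).argmax())
```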
### Experiments
We now report the experimental results of the proposed method MORE. For **experimental datasets**, we use the same three image classification benchmark datasets as in Section 4.2.1. For **baselines**, the same systems are used as well (see Section 4.2.1) except Mnemonics, HyperNet, CCG, Co\({}^{2}\)L, and PR-Ent. Mnemonics requires optimization of training instances and it is not clear how to implement it for images after interpolation to the input size of a pre-trained model (see below). For HyperNet, it is due to the reason explained in Training Details in Section 4.2.2. For CCG, Co\({}^{2}\)L, and PR-Ent, CCG has no code and the codes of Co\({}^{2}\)L and PR-Ent do not run in our environment, so we could not convert their codes to use a pre-trained model. Finally, we are left with 13 baselines. Note that HAT+CSI and Sup+CSI are not included as they are much weaker (up to 15% lower than MORE in accuracy) because CSI's approach of using contrastive learning and data augmentations does not work well with a pre-trained model.
**Evaluation Metrics.** We still use the same evaluation measures as we used in Section 4.2.2. **(1).**_Average classification accuracy_ over all classes after learning the last task. **(2).**_Average AUC_ (Area Under the ROC Curve) for evaluating OOD detection performance of continual learning in the open world. See Section 5.2.4 for more details.
#### 5.2.1 Pre-trained Network
We pre-train a vision transformer [59] using a subset of the ImageNet data [52] and apply the pre-trained network/model to all baselines and our method. To ensure that there is no overlapping of data between ImageNet and our experimental datasets, we manually removed 389 classes from the original 1000 classes in ImageNet that are similar/identical to the classes in CIFAR-10, CIFAR-100, or Tiny-ImageNet. We pre-train the network with the remaining subset of 611 classes of ImageNet.
Using the pre-trained network, both our system and the baselines improve dramatically compared to their versions without using the pre-trained
network. For instance, the two best baselines (DER++ and PASS) in our experiments achieve the average classification accuracy of 66.89 and 68.25 (after the final task) with the pre-trained network over 5 experiments while they achieve only 46.88 and 32.42 without using the pre-train network.
We insert an adapter module at each transformer layer to exploit the pre-trained transformer network in continual learning. During training, the adapter module and the layer norm are trained while the transformer parameters are unchanged to prevent forgetting in the pre-trained network.
#### 5.2.2 Training Details
For all experiments, we use the same backbone architecture DeiT-S/16 [59] with a 2-layer adapter [23] at each transformer layer, and the same class order for both the baselines and our method. The first fully-connected layer in the adapter maps from dimension 384 to the bottleneck dimension. The second fully-connected layer, following a ReLU activation, maps from the bottleneck dimension back to 384. The bottleneck dimension is the same for all adapters in a model. For our method, we use SGD with momentum value 0.9. Whether to apply the back-updating method in Section 5.1.2 is also a hyper-parameter choice. If we apply it, we train each classifier for 10 epochs by SGD with learning rate 0.01, batch size 16, and momentum value 0.9. We choose 500 for \(s\) in Eq. 8 and 0.75 for \(\lambda\) in Eq. 12 as recommended in [55]. We find a good learning rate and number of epochs using a validation set made of 10% of the training data. We follow [12] and save an equal number of random samples per class in the replay memory. Following the experiment settings in [51; 69], we fix the size of the memory buffer and reduce the number of saved samples per class to accommodate new samples after a new task is learned. We use the class order protocol in [51; 7] by generating random class orders for the experiments. The baselines and our method use the same class ordering. We also report the size of memory required for each experiment in A.
For CIFAR-10, we split 10 classes into 5 tasks (2 classes per task). The bottleneck size in each adapter is 64. Following [7], we use the memory size 200, and train for 20 epochs with learning rate 0.005, and apply the back-updating method in Section 5.1.2.
For CIFAR-100, we conduct 10 tasks and 20 tasks experiments, where each task has 10 classes and 5 classes, respectively. We double the bottleneck size of the adapter to learn more classes. We use the memory size 2000 following [51] and train for 40 epochs with learning rate 0.001 and 0.005 for 10 tasks and 20 tasks, respectively, and apply the back-updating method in
Section 5.1.2.
For Tiny-ImageNet, two experiments are conducted. We split 200 classes into 5 and 10 tasks, where each task has 40 classes and 20 classes per task, respectively. We use the bottleneck size 128, and save 2000 samples in memory. We train with learning rate 0.005 for 15 and 10 epochs for 5 tasks and 10 tasks, respectively. There is no need to use the back-updating method as the earlier tasks already have diverse OOD classes.
#### 5.2.3 Accuracy and Forgetting Rate Results and Analysis
**Average Accuracy.** Table 5 shows that our method MORE consistently outperforms the baselines. All the reported results are the averages of 5 runs. The last column gives the average of each row. We compare with the replay-based methods first. The best replay-based method on average over all the datasets is DER++. Our method MORE achieves the accuracy of 71.59, much better than 66.89 of DER++. This demonstrates that the existing replay-based methods utilizing the replay samples to update all learned classes are inferior to our MORE method using samples for OOD learning. The best baseline is the generative method PASS. Its average accuracy over
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline & Method & C10-5T & C100-10T & C100-20T & T-5T & T-10T & Avg. \\ \hline \hline (a) & OWM & 41.69\(\pm\)6.34 & 21.39\(\pm\)3.18 & 16.98\(\pm\)4.44 & 24.55\(\pm\)2.48 & 17.52\(\pm\)3.45 & 24.43 \\ \hline (b) & MUC & 73.95\(\pm\)7.24 & 57.87\(\pm\)1.11 & 43.98\(\pm\)2.68 & 62.47\(\pm\)0.34 & 55.79\(\pm\)0.49 & 58.81 \\ & PASS & 86.21\(\pm\)1.10 & 68.90\(\pm\)0.94 & 66.77\(\pm\)1.18 & 61.03\(\pm\)0.38 & 58.34\(\pm\)0.42 & 68.25 \\ \hline \multirow{7}{*}{(c)} & LwF & 67.59\(\pm\)4.27 & 66.50\(\pm\)1.93 & 67.54\(\pm\)0.97 & 33.51\(\pm\)4.36 & 36.85\(\pm\)4.46 & 54.40 \\ & iCaRL & 87.55\(\pm\)0.99 & 68.90\(\pm\)0.47 & 69.15\(\pm\)0.99 & 53.13\(\pm\)1.04 & 51.88\(\pm\)2.36 & 66.12 \\ & A-GEM & 56.33\(\pm\)7.77 & 25.21\(\pm\)4.00 & 21.99\(\pm\)4.01 & 30.53\(\pm\)3.99 & 21.90\(\pm\)5.52 & 31.20 \\ & EEIL & 82.34\(\pm\)3.13 & 68.08\(\pm\)0.51 & 63.79\(\pm\)0.66 & 53.34\(\pm\)0.54 & 50.38\(\pm\)0.97 & 63.59 \\ & GD & **89.16\(\pm\)**0.53 & 64.36\(\pm\)0.57 & 60.10\(\pm\)0.74 & 53.01\(\pm\)0.97 & 42.48\(\pm\)2.53 & 61.82 \\ & BiC & 67.44\(\pm\)3.93 & 64.47\(\pm\)1.30 & 67.69\(\pm\)1.97 & 38.78\(\pm\)1.26 & 40.98\(\pm\)2.39 & 55.87 \\ & DER++ & 84.63\(\pm\)2.91 & 69.73\(\pm\)0.99 & 70.03\(\pm\)1.46 & 55.84\(\pm\)2.21 & 54.20\(\pm\)3.28 & 66.89 \\ & HAL & 84.38\(\pm\)2.70 & 67.17\(\pm\)1.50 & 67.37\(\pm\)1.45 & 52.80\(\pm\)2.37 & 55.25\(\pm\)3.60 & 65.39 \\ \hline (d) & HAT & 83.30\(\pm\)1.54 & 62.34\(\pm\)0.93 & 56.72\(\pm\)0.44 & 57.91\(\pm\)0.72 & 53.12\(\pm\)0.94 & 62.68 \\ & Sup & 80.91\(\pm\)2.99 & 62.49\(\pm\)0.49 & 57.32\(\pm\)1.11 & 58.43\(\pm\)0.67 & 54.52\(\pm\)0.45 & 62.74 \\ \hline \multicolumn{2}{c}{}{} & MORE & **89.16\(\pm\)**0.96 & **70.23\(\pm\)**2.27 & **70.53\(\pm\)**1.09 & **64.97\(\pm\)**1.28 & **63.06\(\pm\)**1.26 & **71.59** \\ \hline \hline \end{tabular} Average accuracy after the final task. ‘-XT’ means X number of tasks. Our system MORE and all baselines used the pre-trained network. The baselines are grouped into (a), (b), (c), and (d) for projection, regularization, replay, and parameter-isolation methods, respectively. The last column show the average accuracy of each method over all datasets and experiments. We highlight the best results in each column in bold.
\end{table}
Table 5: Average Accuracy after the Final Task
all the datasets is 68.25, which is still poorer than our method's performance of 71.59. The performance of the multi-head method HAT [55] using task-id prediction is only 62.68, which is lower than many other baselines. Its performance is particularly low in experiments where the number of classes per task is small. For instance, its accuracy on C100-20T is 56.72, much lower than our method of 70.53 trained based on OOD detection.
**Accuracy with Smaller Memory Sizes.** For all the datasets, we run additional experiments with half of the original memory size and show that our method is even stronger with a smaller memory. The new memory sizes are 100, 1000, and 1000 for CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively. Table 6 shows that MORE experiences almost no performance drop with the reduced memory size while the memory-based baselines suffer from major performance reductions. The accuracy of the best memory-based baseline (DER++) drops from 66.89 to 62.16, while MORE only drops from 71.59 to 71.44, which shows that a small number of OOD samples is enough to enable the system to produce a robust OOD detection model.
**Average Forgetting Rate.** The average forgetting rate is defined as
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & Method & C10-5T & C100-10T & C100-20T & T-5T & T-10T & Avg. \\ \hline \hline (a) & OWM & 41.69\(\pm\)6.34 & 21.39\(\pm\)3.18 & 16.98\(\pm\)4.44 & 24.55\(\pm\)2.48 & 17.52\(\pm\)3.45 & 24.43 \\ \hline (b) & MUC & 73.95\(\pm\)7.24 & 57.87\(\pm\)1.11 & 43.98\(\pm\)2.68 & 62.47\(\pm\)0.34 & 55.79\(\pm\)0.49 & 58.81 \\ & PASS & 86.21\(\pm\)1.10 & 68.90\(\pm\)0.94 & 66.77\(\pm\)1.18 & 61.03\(\pm\)0.38 & 58.34\(\pm\)0.42 & 68.25 \\ \hline \multirow{7}{*}{(c)} & LwF & 63.01\(\pm\)4.19 & 56.76\(\pm\)3.72 & 63.53\(\pm\)2.86 & 26.79\(\pm\)2.36 & 28.08\(\pm\)4.88 & 47.63 \\ & iCaRL & 86.08\(\pm\)1.19 & 66.96\(\pm\)2.08 & 68.16\(\pm\)0.71 & 47.27\(\pm\)3.22 & 49.51\(\pm\)1.87 & 63.60 \\ & A-GEM & 56.64\(\pm\)4.29 & 23.18\(\pm\)2.54 & 20.76\(\pm\)2.88 & 31.44\(\pm\)3.84 & 23.73\(\pm\)6.27 & 31.15 \\ & EEIL & 77.44\(\pm\)3.04 & 62.95\(\pm\)0.68 & 57.86\(\pm\)0.74 & 48.36\(\pm\)1.38 & 44.59\(\pm\)1.72 & 58.24 \\ & GD & 85.96\(\pm\)1.64 & 57.17\(\pm\)1.06 & 50.30\(\pm\)0.58 & 46.09\(\pm\)1.77 & 32.41\(\pm\)2.75 & 54.39 \\ & BiC & 56.28\(\pm\)3.31 & 58.42\(\pm\)2.48 & 62.19\(\pm\)1.20 & 33.29\(\pm\)2.65 & 28.44\(\pm\)2.41 & 47.72 \\ & DER++ & 80.09\(\pm\)3.00 & 64.89\(\pm\)2.48 & 65.84\(\pm\)1.46 & 50.74\(\pm\)2.41 & 49.24\(\pm\)5.01 & 62.16 \\ & HAL & 79.16\(\pm\)4.56 & 62.65\(\pm\)0.83 & 63.96\(\pm\)1.49 & 48.17\(\pm\)2.94 & 47.11\(\pm\)6.00 & 60.21 \\ \hline \multirow{2}{*}{(d)} & HAT & 83.30\(\pm\)1.54 & 62.34\(\pm\)0.93 & 56.72\(\pm\)0.44 & 57.91\(\pm\)0.72 & 53.12\(\pm\)0.94 & 62.68 \\ & Sup & 80.91\(\pm\)2.99 & 62.49\(\pm\)0.49 & 57.32\(\pm\)1.11 & 58.43\(\pm\)0.67 & 54.52\(\pm\)0.45 & 62.74 \\ \hline \multirow{2}{*}{(e)} & MORE & **88.13\(\pm\)**1.16 & **71.69\(\pm\)**0.11 & **71.29\(\pm\)**0.55 & **64.17\(\pm\)**0.77 & **61.90\(\pm\)**0.90 & **71.44** \\ \hline \hline \end{tabular} Accuracy performances of the baselines and our method MORE with smaller memory sizes. We reduce the size of the memory buffer by half. The new sizes are 100, 1000, 1000 for CIFAR10, CIFAR100, and Tiny-ImageNet. Numbers in bold are the best results in each column.
\end{table}
Table 6: Average Accuracy with Smaller Memory Sizes
follows [40]: \(\mathcal{F}^{t}=\sum_{k=1}^{t-1}(\mathcal{A}_{k}^{\text{init}}-\mathcal{A}_{k}^{t})/(t-1)\), where \(\mathcal{A}_{k}^{\text{init}}\) is the classification accuracy on the samples of task \(k\) right after learning task \(k\), and \(\mathcal{A}_{k}^{t}\) is the accuracy on task \(k\) after learning the last task \(t\). We do not include task \(t\) itself in the sum as it is the last task.
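As a small illustration (0-indexed tasks; names are ours), the forgetting rate can be computed from an accuracy matrix as follows:

```python
def average_forgetting(acc_matrix):
    # acc_matrix[t][k]: accuracy on task k measured after learning task t.
    # A_k^init = acc_matrix[k][k]; A_k^t = acc_matrix[t][k] for the last task t.
    t = len(acc_matrix) - 1
    drops = [acc_matrix[k][k] - acc_matrix[t][k] for k in range(t)]
    return sum(drops) / t
```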
We compare the forgetting rate of our method MORE against the baselines using C10-5T, C100-10T, and C100-20T. Figure 3 shows that the performance drop of our method as more tasks are learned is lower than that of most baselines. MUC, A-GEM, HAT, and Sup achieve lower drops than our method. However, they are not able to adapt to new tasks well, as the accuracy values of the four methods on C100-10T are 57.87, 25.21, 62.34, and 62.49, respectively, while our method MORE achieves 70.23. The performance gap remains consistent on the other datasets. PASS experiences a smaller drop in performance on C100-10T and C100-20T than our method, but its accuracies of 68.90 and 66.77 are significantly lower than the 70.23 and 70.53 of our MORE.
#### 5.2.4 Out-of-Distribution Detection Results
As we explained in the introduction section, since our method MORE is based on OOD detection in building each task model, our method can naturally be used to detect test samples that are out-of-distribution for all the classes or tasks learned thus far. We are not aware of any existing continual learning system that has done such an evaluation. We evaluate the performance of the baselines and our system in this out-of-distribution scenario, which is also called the open set setting.
This OOD detection ability is highly desirable for a continual learning system because in a real-life environment in the open world, the system can
Figure 3: Average forgetting rate (%). The lower the rate, the better the method is.
be exposed to not only seen classes, but also unseen classes. When the test sample is from one of seen classes, the system should be able to predict its class. If the sample does not belong to any of the training classes seen so far (i.e., the sample is out-of-distribution), the system should detect it.
We formulate the performance of OOD detection of a continual learning system as follows. A continual learning system accepts and classifies a test sample after training task \(k\) if the test sample is from one of the classes in tasks \(1,\cdots,k\). If it is from one of the classes of the future tasks \(k+1,\cdots,t\), it should be rejected as OOD (where \(t\) is the last task in each evaluation).
We use maximum softmax probability (MSP) [20] as the OOD score of a test sample for the baselines and use the maximum output with coefficient in Eq. 30 for our method MORE. We employ Area Under the ROC Curve (AUC) to measure the performance of OOD detection as AUC is the standard metric used in OOD detection papers [65]. We report the _average incremental AUC_ (AI-AUC), which is the average AUC result at all tasks except the last one as there is no more OOD data after the last task.
Table 7 shows that our method MORE outperforms all baselines consistently except MUC and GD. For MUC, it performs better than MORE on Tiny-ImageNet, but on average, it is poorer (80.98) than MORE (82.23). For GD, it outperforms MORE on C10-5T, but its overall performance is much lower as it achieves the average of only 77.69 over the 5 experiments.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & Method & C10-5T & C100-10T & C100-20T & T-5T & T-10T & Avg. \\ \hline \hline (a) & OWM & 70.02\(\pm\)3.59 & 63.17\(\pm\)1.06 & 59.42\(\pm\)1.26 & 67.24\(\pm\)0.92 & 62.17\(\pm\)0.35 & 64.41 \\ \hline (b) & MUC & 85.47\(\pm\)3.97 & 79.28\(\pm\)1.15 & 74.82\(\pm\)1.91 & **83.91\(\pm\)**0.54 & **81.42\(\pm\)**0.47 & 80.98 \\ & PASS & 84.57\(\pm\)1.54 & 77.74\(\pm\)1.40 & 77.42\(\pm\)1.44 & 77.07\(\pm\)2.14 & 74.79\(\pm\)2.36 & 78.32 \\ \hline \multirow{7}{*}{(c)} & LwF & 72.18\(\pm\)4.15 & 74.95\(\pm\)0.39 & 75.40\(\pm\)0.64 & 66.44\(\pm\)1.14 & 65.52\(\pm\)0.64 & 70.90 \\ & iCaRL & 82.12\(\pm\)5.38 & 77.42\(\pm\)0.45 & 76.91\(\pm\)1.30 & 71.86\(\pm\)1.57 & 74.24\(\pm\)1.66 & 76.06 \\ & A-GEM & 74.92\(\pm\)5.62 & 64.19\(\pm\)0.86 & 60.23\(\pm\)0.95 & 67.88\(\pm\)1.28 & 63.08\(\pm\)1.12 & 66.06 \\ & EEIL & 87.19\(\pm\)2.31 & 78.89\(\pm\)1.32 & 77.69\(\pm\)1.40 & 74.82\(\pm\)0.79 & 73.45\(\pm\)1.33 & 78.39 \\ & GD & **89.71\(\pm\)**1.85 & 77.31\(\pm\)1.03 & 75.19\(\pm\)0.87 & 75.36\(\pm\)0.78 & 70.90\(\pm\)1.75 & 77.69 \\ & BiC & 71.29\(\pm\)3.57 & 74.49\(\pm\)0.72 & 75.71\(\pm\)0.60 & 67.45\(\pm\)0.89 & 66.63\(\pm\)0.77 & 71.11 \\ & DER++ & 84.61\(\pm\)2.64 & 78.42\(\pm\)0.64 & 78.37\(\pm\)0.42 & 74.80\(\pm\)1.72 & 74.86\(\pm\)1.93 & 78.09 \\ & HAL & 84.09\(\pm\)3.30 & 77.37\(\pm\)0.55 & 77.66\(\pm\)0.31 & 74.52\(\pm\)1.93 & 75.47\(\pm\)2.35 & 77.82 \\ \hline (d) & HAT & 87.83\(\pm\)2.44 & 79.57\(\pm\)0.29 & 77.20\(\pm\)0.74 & 79.78\(\pm\)1.59 & 78.25\(\pm\)1.68 & 80.53 \\ & Sup & 87.06\(\pm\)3.68 & 80.54\(\pm\)0.12 & 77.81\(\pm\)0.66 & 80.01\(\pm\)0.71 & 78.96\(\pm\)0.64 & 80.87 \\ \hline \multicolumn{2}{c}{} & MORE & 88.06\(\pm\)1.84 & **81.67\(\pm\)**1.27 & **80.97\(\pm\)**0.80 & 80.72\(\pm\)3.38 & 79.73\(\pm\)2.97 & **82.23** \\ \hline \hline \end{tabular}
Numbers in bold are the best results in each column.
\end{table}
Table 7: AI-AUC Results
#### 5.2.5 Ablation Study
We conduct an ablation study to measure the performance gain by each proposed technique, **back-updating** of previous models in Section 5.1.2 and the **distance-based coefficient** in Section 5.1.3, using three experiments. The back-updating by Eq. 26 is to improve the earlier task models as they are trained with less diverse OOD data than later models. The distance-based coefficient in Eq. 27 (used in the modified output of Eq. 30) is to improve the classification accuracy by combining the softmax probabilities of the task networks and the inverse Mahalanobis distances.
Table 8 compares accuracy obtained after applying each method. Both distance based coefficient and back-updating show large improvements from the original method without any of the two techniques. Although the performance is already competitive with either technique, the performance improves further after applying them together.
## 6 Conclusion
This paper studied open world continual learning. An open world learning algorithm first needs to detect novel or out-of-distribution (OOD) items and then learn them continually or incrementally. This type of learning is called class incremental learning (CIL), which is a challenging setting of continual learning. Another popular setting is task incremental learning (TIL). Traditionally, novelty/OOD detection and CIL are regarded as two completely different problems. This paper theoretically showed that the two problems can be unified. In particular, we first decomposed the CIL prediction problem into _within-task prediction_ (WP) and _task-id prediction_ (TP). WP is basically TIL. The
\begin{table}
\begin{tabular}{c c c c} \hline \hline & C10-5T & C100-10T & C100-20T \\ \hline Original & 91.01\(\pm\)2.48 & 76.93\(\pm\)1.58 & 75.76\(\pm\)2.35 \\ Coefficient (C) & 93.86\(\pm\)1.12 & 80.31\(\pm\)1.02 & 80.77\(\pm\)1.36 \\ Back (B) & 93.36\(\pm\)0.79 & 80.35\(\pm\)1.08 & 80.32\(\pm\)0.82 \\ \hline C + B & 94.23\(\pm\)0.82 & 81.24\(\pm\)1.24 & 81.59\(\pm\)0.98 \\ \hline \hline \end{tabular} Performance gains with the proposed techniques. The row Original indicates the method without the coefficient and back-updating and the row Back means the back-updating method.
\end{table}
Table 8: Ablation Study
paper further theoretically demonstrated that TP is correlated with _out-of-distribution_ (OOD) detection. It then proved that good performance on the two is both necessary and sufficient for good CIL performance. Since OOD detection is basically novelty detection in open-world learning, our theory connects and unifies open-world novelty detection and continual learning (CIL in particular). Their combination gives us the paradigm of open-world continual learning. The theoretical result also provides principled guidance for designing better continual learning algorithms. Based on this result, several new CIL methods have been designed. They outperform strong baselines in CIL by a large margin and also perform novelty (or OOD) detection well in the open world continual learning setting.
## Author Contributions
Gyuhak Kim conceived the idea, developed the algorithms, conducted experiments, and led the writing of the paper. Changnan Xiao developed the mathematical framework and derived proofs. Tatsuya Konishi and Zixuan Ke contributed in discussions and helped edit the paper. Bing Liu supervised the project and helped write the paper.
## Acknowledgements
The work of Gyuhak Kim, Zixuan Ke and Bing Liu was supported in part by a research contract from KDDI, a research contract from DARPA (HR001120C0023), and three NSF grants (IIS-1910424, IIS-1838770, and CNS-2225427).
## Appendix A Proof of Theorems and Corollaries
### Proof of Theorem 1
Proof.: Since
\[H_{CIL}(x) =H(y,\{\mathbf{P}(x\in\mathbf{X}_{k,j}|D)\}_{k,j})\] \[=-\sum_{k,j}y_{k,j}\log\mathbf{P}(x\in\mathbf{X}_{k,j}|D)\] \[=-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|D),\] \[H_{WP}(x) =H(\tilde{y},\{\mathbf{P}(x\in\mathbf{X}_{k_{0},j}|x\in\mathbf{ X}_{k_{0}},D)\}_{j})\] \[=-\sum_{j}y_{k_{0},j}\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j}|x\in \mathbf{X}_{k_{0}},D)\] \[=-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|x\in\mathbf{X}_{k_ {0}},D),\]
and
\[H_{TP}(x) =H(\bar{y},\{\mathbf{P}(x\in\mathbf{X}_{k}|D)\}_{k})\] \[=-\sum_{k}\bar{y}_{k}\log\mathbf{P}(x\in\mathbf{X}_{k}|D)\] \[=-\log\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D),\]
we have
\[H_{CIL}(x) =-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|D)\] \[=-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|x\in\mathbf{X}_{k_ {0}},D)-\log\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D)\] \[=H_{WP}(x)+H_{TP}(x)\] \[\leq\epsilon+\delta.\]
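As a quick numerical illustration of the decomposition used in this proof, the following sketch (with made-up joint class probabilities, purely for illustration) checks that \(H_{CIL}(x)=H_{WP}(x)+H_{TP}(x)\) holds exactly.

```
import numpy as np

# P(x in X_{k,j} | D): 2 tasks x 3 classes; made-up numbers summing to 1.
P_joint = np.array([[0.50, 0.10, 0.05],
                    [0.20, 0.10, 0.05]])
k0, j0 = 0, 0                      # ground-truth task and within-task class

P_task = P_joint.sum(axis=1)       # P(x in X_k | D)
P_wp = P_joint[k0] / P_task[k0]    # P(x in X_{k0,j} | x in X_{k0}, D)

H_CIL = -np.log(P_joint[k0, j0])
H_WP = -np.log(P_wp[j0])
H_TP = -np.log(P_task[k0])
assert np.isclose(H_CIL, H_WP + H_TP)   # identical up to floating-point error
print(H_CIL, H_WP, H_TP)
```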
### Proof of Corollary 1
Proof.: By proof of Theorem 1, we have
\[H_{CIL}(x)=H_{WP}(x)+H_{TP}(x).\]
Taking expectations on both sides, we have i)
\[\mathbb{E}_{x\sim U(\mathbf{X})}[H_{CIL}(x)] =\mathbb{E}_{x\sim U(\mathbf{X})}[H_{WP}(x)]+\mathbb{E}_{x\sim U (\mathbf{X})}[H_{TP}(x)]\] \[\leq\mathbb{E}_{x\sim U(\mathbf{X})}[H_{WP}(x)]+\delta.\]
and ii)
\[\mathbb{E}_{x\sim U(\mathbf{X})}[H_{CIL}(x)] =\mathbb{E}_{x\sim U(\mathbf{X})}[H_{WP}(x)]+\mathbb{E}_{x\sim U( \mathbf{X})}[H_{TP}(x)]\] \[\leq\epsilon+\mathbb{E}_{x\sim U(\mathbf{X})}[H_{TP}(x)].\]
### Proof of Theorem 2.
Proof.: i) Assume \(x\in\mathbf{X}_{k_{0}}\). For \(k=k_{0}\), we have
\[H_{OOD,k_{0}}(x) =-\log\mathbf{P}^{\prime}_{k_{0}}(x\in\mathbf{X}_{k_{0}}|D)\] \[=-\log\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D)\] \[=H_{TP}(x)\leq\delta.\]
For \(k\neq k_{0}\), we have
\[H_{OOD,k}(x) =-\log\mathbf{P}^{\prime}_{k}(x\notin\mathbf{X}_{k}|D)\] \[=-\log(1-\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D))\] \[=-\log(1-\mathbf{P}(x\in\mathbf{X}_{k}|D))\] \[=-\log\mathbf{P}(x\in\cup_{k^{\prime}\neq k}\mathbf{X}_{k^{ \prime}}|D)\] \[\leq-\log\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D)\] \[=H_{TP}(x)\leq\delta.\]
ii) Assume \(x\in\mathbf{X}_{k_{0}}\). For \(k=k_{0}\), by \(H_{OOD,k_{0}}(x)\leq\delta_{k_{0}}\), we have
\[-\log\mathbf{P}^{\prime}_{k_{0}}(x\in\mathbf{X}_{k_{0}}|D)\leq\delta_{k_{0}},\]
which means
\[\mathbf{P}^{\prime}_{k_{0}}(x\in\mathbf{X}_{k_{0}}|D)\geq e^{-\delta_{k_{0}}}.\]
For \(k\neq k_{0}\), by \(H_{OOD,k}(x)\leq\delta_{k}\), we have
\[-\log\mathbf{P}^{\prime}_{k}(x\notin\mathbf{X}_{k}|D)\leq\delta_{k},\]
which means
\[\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D)\leq 1-e^{-\delta_{k}}.\]
Therefore, we have
\[\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D) =\frac{\mathbf{P}^{\prime}_{k_{0}}(x\in\mathbf{X}_{k_{0}}|D)}{\sum_ {k^{\prime}}\mathbf{P}^{\prime}_{k^{\prime}}(x\in\mathbf{X}_{k^{\prime}}|D)}\] \[\geq\frac{e^{-\delta_{k_{0}}}}{1+\sum_{k\neq k_{0}}1-e^{-\delta_{k }}}\] \[=\frac{e^{-\delta_{k_{0}}}}{e^{-\delta_{k_{0}}}+\sum_{k}1-e^{- \delta_{k}}}\] \[=\frac{1}{1+e^{\delta_{k_{0}}}\sum_{k}1-e^{-\delta_{k}}}.\]
Hence,
\[H_{TP}(x) =-\log\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D)\] \[\leq-\log\frac{1}{1+e^{\delta_{k_{0}}}\sum_{k}1-e^{-\delta_{k}}}\] \[=\log[1+e^{\delta_{k_{0}}}\sum_{k}1-e^{-\delta_{k}}]\] \[\leq e^{\delta_{k_{0}}}(\sum_{k}1-e^{-\delta_{k}})\] \[=(\sum_{k}\mathbf{1}_{x\in\mathbf{X}_{k}}e^{\delta_{k}})(\sum_{k }1-e^{-\delta_{k}}).\]
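To make the bound in part ii) concrete, here is a small numerical check with hypothetical per-task OOD probabilities (the values are illustrative only, not taken from any experiment).

```
import numpy as np

P_ood = np.array([0.9, 0.2, 0.1])       # P'_k(x in X_k | D), hypothetical values
k0 = 0                                  # x is assumed to belong to task k0

delta = np.empty_like(P_ood)
delta[k0] = -np.log(P_ood[k0])                    # H_{OOD,k0}(x)
mask = np.arange(len(P_ood)) != k0
delta[mask] = -np.log(1.0 - P_ood[mask])          # H_{OOD,k}(x), k != k0

H_TP = -np.log(P_ood[k0] / P_ood.sum())           # TP from the normalised OOD scores
bound = np.exp(delta[k0]) * np.sum(1.0 - np.exp(-delta))
print(H_TP, bound)                                # ~0.288 <= ~0.444
assert H_TP <= bound
```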
### Proof of Theorem 3.
Proof.: Using Theorem 1 and 2,
\[H_{CIL}(x) =-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|D)\] \[=-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|x\in\mathbf{X}_{k_{ 0}},D)-\log\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D)\] \[=H_{WP}(x)+H_{TP}(x)\] \[\leq\epsilon+H_{TP}(x)\] \[\leq\epsilon+(\sum_{k}\mathbf{1}_{x\in\mathbf{X}_{k}}e^{\delta_{ k}})(\sum_{k}1-e^{-\delta_{k}})\]
### Proof of Theorem 4
Proof.: i) Assume \(x\in\mathbf{X}_{k_{0},j_{0}}\subset\mathbf{X}_{k_{0}}\). Define \(\mathbf{P}(x\in\mathbf{X}_{k,j}|x\in\mathbf{X}_{k},D)=\mathbf{P}(x\in\mathbf{X} _{k,j}|D)\). According to proof of Theorem 1,
\[H_{WP}(x) =-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|x\in\mathbf{X}_{k_{0 }},D),\] \[H_{CIL}(x) =-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|D).\]
Hence, we have
\[H_{WP}(x) =-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|x\in\mathbf{X}_{k_{0 }},D)\] \[=-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|D)\] \[=H_{CIL}(x)\leq\eta.\]
ii) Assume \(x\in\mathbf{X}_{k_{0},j_{0}}\subset\mathbf{X}_{k_{0}}\). Define \(\mathbf{P}(x\in\mathbf{X}_{k}|D)=\sum_{j}\mathbf{P}(x\in\mathbf{X}_{k,j}|D)\). According to proof of Theorem 1,
\[H_{TP}(x) =-\log\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D),\] \[H_{CIL}(x) =-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|D).\]
Hence, we have
\[H_{TP}(x) =-\log\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D)\] \[=-\log\sum_{j}\mathbf{P}(x\in\mathbf{X}_{k_{0},j}|D)\] \[\leq-\log\mathbf{P}(x\in\mathbf{X}_{k_{0},j_{0}}|D)\] \[=H_{CIL}(x)\leq\eta.\]
iii) Assume \(x\in\mathbf{X}_{k_{0},j_{0}}\subset\mathbf{X}_{k_{0}}\). Define \(\mathbf{P}_{i}^{\prime}(x\in\mathbf{X}_{k}|D)=\mathbf{P}(x\in\mathbf{X}_{k}|D) =\sum_{j}\mathbf{P}(x\in\mathbf{X}_{k,j}|D)\). According to proof of Theorem 4 ii), we have
\[H_{TP}(x)\leq\eta.\]
According to proof of Theorem 2 i), we have
\[H_{OOD,i}(x)\leq H_{TP}(x).\]
Therefore,
\[H_{OOD,i}(x)\leq H_{TP}(x)\leq\eta.\]
## Appendix B Additional Results and Explanation Regarding Table 1 in the Main Paper
In Section 4.2.3, we showed that a better OOD detection improves CIL performance. For the post-processing method ODIN, we only reported the results on C100-10T. Table B.9 shows the results on the other datasets.
A continual learning method with a better AUC shows a better CIL performance than other methods with lower AUC. For instance, original HAT achieves an AUC of 82.47 while HyperNet achieves 78.54 on C10-5T. The CIL for HAT is 62.67 while it is 53.40 for HyperNet. However, there are some exceptions where this comparison does not hold. An example is LwF. Its AUC and CIL are 89.39 and 54.67 on C10-5T. Although its AUC is better than HAT's, its CIL is lower. This is because CIL improves with both WP and TP according to Theorem 1. The contrapositive of Theorem 4 also says that if the cross-entropy of TIL is large, that of CIL is also large. Indeed, the average within-task prediction (WP) accuracy for LwF on C10-5T is 95.2 while the same for HAT is 96.7. Improving WP is also important in achieving good CIL performance.
For PASS, we had to tune \(\tau_{k}\) using a validation set. This is because the softmax in Eq. 22 improves AUC by making the IND (in-distribution) and OOD scores more separable within a task, but deteriorates the final scores across tasks. To be specific, after the softmax the test instances are predicted as one of the classes in the first task, because the relative values between classes in task 1 are larger than those in the other tasks in PASS. Therefore, a larger \(\tau_{1}\) and smaller \(\tau_{k}\), for \(k>1\), are chosen to compensate for the relative values.
## Appendix C Definitions of TP
As noted in the main paper, the class prediction in Eq. 2 varies by definition of WP and TP. The precise definition of WP and TP depends on implementation. Due to this subjectivity, we follow the prediction method in the existing approaches in continual learning, which is the \(\arg\max\) over the output. In this section, we show that the \(\arg\max\) over output is a special case of Eq. 2. We also provide CIL results using different definitions of TP.
We first establish another theorem. This is an extension of Theorem 2 and connects the standard prediction method to our analysis.
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline & \multicolumn{3}{c}{C10-5T} & \multicolumn{2}{c}{C100-20T} & \multicolumn{2}{c}{T-5T} & \multicolumn{2}{c}{T-10T} \\ Method & CIL & AUC & CIL & AUC & CIL & AUC & CIL & AUC & CIL \\ \hline \multirow{2}{*}{OWM} & Original & 81.33 & 51.79 & 71.90 & 24.15 & 58.49 & 10.00 & 59.48 & 8.57 \\ & ODIN & 71.72 & 40.65 & 68.52 & 23.05 & 58.46 & 10.77 & 59.38 & 9.52 \\ \hline \multirow{2}{*}{MUC} & Original & 79.49 & 52.85 & 66.20 & 14.19 & 68.42 & 33.57 & 62.63 & 17.39 \\ & ODIN & 79.54 & 53.22 & 65.72 & 14.11 & 68.32 & 33.45 & 62.17 & 17.27 \\ \hline \multirow{2}{*}{PASS} & Original & 66.51 & 47.34 & 70.26 & 24.99 & 65.18 & 28.40 & 63.27 & 19.07 \\ & ODIN & 63.08 & 35.20 & 69.81 & 21.83 & 65.93 & 29.03 & 62.73 & 17.78 \\ \hline \multirow{2}{*}{LwF} & Original & 89.39 & 54.67 & 89.84 & 44.33 & 78.20 & 32.17 & 79.43 & 24.28 \\ & ODIN & 88.94 & 63.04 & 88.68 & 47.56 & 76.83 & 36.20 & 77.02 & 28.29 \\ \hline \multirow{2}{*}{A-GEM} & Original & 85.93 & 20.03 & 74.48 & 4.14 & 72.33 & 13.52 & 76.42 & 7.66 \\ & ODIN & 86.43 & 34.03 & 75.12 & 6.99 & 72.46 & 14.69 & 76.75 & 8.50 \\ \hline \multirow{2}{*}{EEIL} & Original & 89.72 & 57.09 & 85.96 & 33.46 & 64.82 & 14.67 & 64.87 & 9.79 \\ & ODIN & 89.20 & 59.47 & 85.46 & 35.16 & 57.01 & 11.92 & 55.42 & 6.88 \\ \hline \multirow{2}{*}{GD} & Original & 91.23 & 58.69 & 86.76 & 38.83 & 68.63 & 16.36 & 69.61 & 11.73 \\ & ODIN & 90.39 & 60.53 & 86.64 & 42.33 & 60.75 & 13.43 & 63.92 & 11.83 \\ \hline \multirow{2}{*}{BiC} & Original & 90.89 & 61.41 & 89.46 & 48.92 & 80.17 & 41.75 & 80.37 & 33.77 \\ & ODIN & 91.86 & 64.29 & 87.89 & 47.40 & 74.54 & 37.40 & 76.27 & 29.06 \\ \hline \multirow{2}{*}{DER++} & Original & 90.16 & 66.04 & 85.44 & 46.59 & 71.80 & 35.80 & 72.41 & 30.49 \\ & ODIN & 87.08 & 63.07 & 87.72 & 49.26 & 73.92 & 37.87 & 72.91 & 32.52 \\ \hline \multirow{2}{*}{HAL} & Original & 86.16 & 32.82 & 65.59 & 13.51 & 53.00 & 3.42 & 57.87 & 3.36 \\ & ODIN & 76.27 & 44.75 & 64.46 & 17.40 & 53.26 & 4.80 & 58.13 & 4.74 \\ \hline \multirow{2}{*}{HAT} & Original & 82.47 & 62.67 & 75.35 & 25.64 & 72.28 & 38.46 & 71.82 & 29.78 \\ & ODIN & 82.45 & 62.60 & 75.36 & 25.84 & 72.31 & 38.61 & 71.83 & 30.01 \\ \hline \multirow{2}{*}{HyperNet} & Original & 78.54 & 53.40 & 72.04 & 18.67 & 54.58 & 7.91 & 55.37 & 5.32 \\ & ODIN & 79.39 & 56.72 & 73.89 & 23.8 & 54.60 & 8.64 & 55.53 & 6.91 \\ \hline \multirow{2}{*}{Sup} & Original & 79.16 & 62.37 & 81.14 & 34.70 & 74.13 & 41.82 & 74.59 & 36.46 \\ & ODIN & 82.38 & 62.63 & 81.48 & 36.35 & 73.96 & 41.10 & 74.61 & 36.46 \\ \hline \hline \end{tabular} Performance comparison between the original output and output post-processed with OOD detection technique ODIN. Note that ODIN is not applicable to iCaRL and Mnemonics as they are not based on softmax but some distance functions. The result for C100-10T are reported in the main paper.
\end{table}
Table 9: Performance Comparison between the Original Output and ODIN
**Theorem 5** (Extension of Theorem 2).: _i) If \(H_{TP}(x)\leq\delta\), let \({\bf P}^{\prime}_{k}(x\in{\bf X}_{k}|D)={\bf P}(x\in{\bf X}_{k}|D)^{1/\tau_{k}}\), \(\forall\tau_{k}>0\), then \(H_{OOD,k}(x)\leq\max(\delta/\tau_{k},-\log(1-(1-e^{-\delta})^{1/\tau_{k}}))\), \(\forall\,k=1,\ldots,T\)._
_ii) If \(H_{OOD,k}(x)\leq\delta_{k},k=1,\ldots,T\), let \({\bf P}(x\in{\bf X}_{k}|D)=\frac{{\bf P}^{\prime}_{k}(x\in{\bf X}_{k}|D)^{1/ \tau_{k}}}{\sum_{j}{\bf P}^{\prime}_{j}(x\in{\bf X}_{j}|D)^{1/\tau_{j}}}\), \(\forall\tau_{k}>0\), then \(H_{TP}(x)\leq\sum_{k}\frac{{\bf 1}_{x\in{\bf X}_{k}}\delta_{k}}{\tau_{k}}+\frac{ \sum_{k}(1-e^{-\delta_{k}})^{1/\tau_{k}}}{\sum_{k}{\bf 1}_{x\in{\bf X}_{k}}(1-(1-e^{- \delta_{k}})^{1/\tau_{k}})}\), where \({\bf 1}_{x\in{\bf X}_{k}}\) is an indicator function._
In Theorem 5 (proof appears later), we can observe that \(\delta/\tau_{k}\) decreases as \(\tau_{k}\) increases, while \(-\log(1-(1-e^{-\delta})^{1/\tau_{k}})\) increases. Hence, when TP is given, letting \(\delta=H_{TP}(x)\), we can find the optimal \(\tau_{k}\) to define OOD by solving \(\delta/\tau_{k}=-\log(1-(1-e^{-\delta})^{1/\tau_{k}})\). Similarly, given OOD, letting \(\delta_{k}=H_{OOD,k}(x)\), we can find the optimal \(\tau_{1},\ldots,\tau_{T}\) to define TP by finding the global minimum of \(\sum_{k}\frac{{\bf 1}_{x\in{\bf X}_{k}}\delta_{k}}{\tau_{k}}+\frac{\sum_{k}(1-e^{-\delta_{k}})^{1/\tau_{k}}}{\sum_{k}{\bf 1}_{x\in{\bf X}_{k}}(1-(1-e^{-\delta_{k}})^{1/\tau_{k}})}\). The optimal \(\tau_{k}\) values can be found using a memory buffer that saves a small number of previous samples, as in a replay-based continual learning method.
In Theorem 5 (ii), let \({\bf P}^{\prime}_{k}(x\in{\bf X}_{k}|D)=\sigma(\max f(x)_{k})\), where \(\sigma\) is the sigmoid and \(f(x)_{k}\) is the output of task \(k\) and choose \(\tau_{k}\approx 0\) for each \(k\). Then \({\bf P}(x\in{\bf X}_{k}|D)\) becomes approximately 1 for the task \(k\) where the maximum logit value appears and 0 for the rest tasks. Therefore, Eq. 2 in the paper
\[{\bf P}(x\in{\bf X}_{k,j}|D)={\bf P}(x\in{\bf X}_{k,j}|x\in{\bf X}_{k},D){\bf P }(x\in{\bf X}_{k}|D)\]
is zero for all classes in tasks \(k^{\prime}\neq k\). Since only the probabilities of classes in task \(k\) are non-zero, taking \(\arg\max\) over all class probabilities gives the same class as \(\arg\max\) over output logits.
We have also tried another definition of WP and TP. The considered WP is
\[{\bf P}(x\in{\bf X}_{k,j}|x\in{\bf X}_{k},D)=\frac{e^{f(x)_{kj}/\nu_{k}}}{\sum_ {j}e^{f(x)_{kj}/\nu_{k}}},\] (C.1)
where \(\nu_{k}\) is a temperature scaling parameter for task \(k\), and the TP is
\[{\bf P}(x\in{\bf X}_{k}|D)=\frac{{\bf P}^{\prime}_{k}(x\in{\bf X}_{k}|D)}{\sum_ {k}{\bf P}^{\prime}_{k}(x\in{\bf X}_{k}|D)},\] (C.2)
where \({\bf P}^{\prime}_{k}(x\in{\bf X}_{k}|D)=\max_{j}e^{f(x)_{kj}/\tau_{k}}/\sum_{j }e^{f(x)_{kj}/\tau_{k}}\) and \(\tau_{k}\) is a temperature scaling parameter. This is the maximum softmax of task \(k\). We choose
\(\nu_{k}=0.1\) and \(\tau_{k}=5\) for all \(k\). Good values of \(\tau\) and \(\nu\) can be found by grid search on a validation set. However, one can also find the optimal values by optimization using some past data saved in the memory buffer. The CIL results for the new prediction method are in Table C.10.
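The following is a minimal sketch of this alternative prediction rule, assuming random stand-in logits for the task-network outputs \(f(x)_{k}\) and the temperatures \(\nu_{k}=0.1\), \(\tau_{k}=5\) quoted above; it combines Eq. C.1 and Eq. C.2 through Eq. 2 and takes the \(\arg\max\) over the joint probabilities.

```
import numpy as np

rng = np.random.default_rng(0)
T, C = 5, 2                                    # 5 tasks, 2 classes per task (C10-5T)
logits = rng.normal(size=(T, C))               # stand-in f(x)_k for one test instance
nu, tau = 0.1, 5.0

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

wp = np.stack([softmax(logits[k] / nu) for k in range(T)])          # Eq. C.1
ood = np.array([softmax(logits[k] / tau).max() for k in range(T)])  # max softmax of task k
tp = ood / ood.sum()                                                # Eq. C.2

joint = wp * tp[:, None]               # Eq. 2: P(x in X_{k,j} | D)
task_pred, class_pred = np.unravel_index(joint.argmax(), joint.shape)
print(task_pred, class_pred)
```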
Proof of Theorem 5.: i) Assume \(x\in\mathbf{X}_{k_{0}}\).
For \(k=k_{0}\), we have
\[H_{OOD,k_{0}}(x) =-\log\mathbf{P}^{\prime}_{k_{0}}(x\in\mathbf{X}_{k_{0}}|D)\] \[=-\frac{1}{\tau_{k_{0}}}\log\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D)\] \[=\frac{1}{\tau_{k_{0}}}H_{TP}(x)\leq\frac{\delta}{\tau_{k_{0}}}.\]
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method & C10-5T & C100-10T & C100-20T & T-5T & T-10T \\ \hline _OWM_ & 40.6\(\pm\)0.47 & 28.6\(\pm\)0.82 & 22.9\(\pm\)0.32 & 10.4\(\pm\)0.54 & 9.2\(\pm\)0.35 \\ _MUC_ & 53.2\(\pm\)1.32 & 30.6\(\pm\)1.21 & 14.0\(\pm\)0.12 & 33.1\(\pm\)0.18 & 17.2\(\pm\)0.13 \\ _PASS\({}^{\dagger}\)_ & 33.6\(\pm\)0.71 & 18.5\(\pm\)1.85 & 20.8\(\pm\)0.85 & 21.4\(\pm\)0.44 & 13.0\(\pm\)0.55 \\ LwF & 63.0\(\pm\)0.34 & 51.9\(\pm\)0.88 & 47.5\(\pm\)0.62 & 35.9\(\pm\)0.32 & 27.8\(\pm\)0.29 \\ iCaRL\({}^{*}\) & 65.3\(\pm\)0.83 & 52.9\(\pm\)0.39 & 48.2\(\pm\)0.70 & 34.8\(\pm\)0.34 & 27.3\(\pm\)0.17 \\ A-GEM & 34.0\(\pm\)1.86 & 14.5\(\pm\)0.55 & 7.3\(\pm\)1.78 & 15.4\(\pm\)0.24 & 9.0\(\pm\)0.30 \\ EEIL & 59.5\(\pm\)0.41 & 41.8\(\pm\)0.78 & 37.9\(\pm\)6.11 & 15.1\(\pm\)0.00 & 7.5\(\pm\)0.19 \\ GD & 68.0\(\pm\)0.75 & 47.2\(\pm\)0.33 & 41.8\(\pm\)0.25 & 15.7\(\pm\)2.08 & 12.2\(\pm\)0.14 \\ Mnemonics\({}^{\dagger*}\) & 65.6\(\pm\)1.55 & 50.7\(\pm\)0.72 & 47.9\(\pm\)0.71 & 36.3\(\pm\)0.30 & 27.7\(\pm\)0.78 \\ BiC & 65.5\(\pm\)0.81 & 50.8\(\pm\)0.69 & 47.2\(\pm\)0.71 & 37.0\(\pm\)0.58 & 29.1\(\pm\)0.34 \\ DER++ & 63.1\(\pm\)1.12 & 54.6\(\pm\)1.21 & 48.9\(\pm\)1.18 & 37.4\(\pm\)0.72 & 32.1\(\pm\)0.44 \\ HAL & 43.0\(\pm\)3.10 & 20.0\(\pm\)1.15 & 17.0\(\pm\)0.83 & 4.6\(\pm\)0.58 & 4.8\(\pm\)0.50 \\ _HAT_ & 62.6\(\pm\)1.31 & 41.5\(\pm\)0.80 & 25.9\(\pm\)0.56 & 38.9\(\pm\)1.62 & 30.1\(\pm\)0.52 \\ _HyperNet_ & 56.7\(\pm\)1.23 & 32.4\(\pm\)1.07 & 24.5\(\pm\)1.12 & 8.9\(\pm\)0.58 & 7.0\(\pm\)0.52 \\ _Sup_ & 62.6\(\pm\)1.11 & 46.8\(\pm\)0.34 & 36.0\(\pm\)0.32 & 41.5\(\pm\)1.17 & 35.7\(\pm\)0.40 \\ \hline _HAT+CSI_ & 85.2\(\pm\)0.92 & 62.9\(\pm\)1.07 & 53.6\(\pm\)0.84 & 47.0\(\pm\)0.38 & 46.2\(\pm\)0.30 \\ _Sup+CSI_ & 87.4\(\pm\)0.40 & 66.6\(\pm\)0.23 & 60.5\(\pm\)0.89 & 47.7\(\pm\)0.30 & 46.3\(\pm\)0.30 \\ HAT+CSI+c & 85.2\(\pm\)0.94 & 63.6\(\pm\)0.69 & 55.4\(\pm\)0.79 & 51.4\(\pm\)0.38 & 46.5\(\pm\)0.26 \\ Sup+CSI+c & 86.2\(\pm\)0.79 & 67.0\(\pm\)0.14 & 60.4\(\pm\)1.04 & 48.2\(\pm\)0.35 & 46.1\(\pm\)0.32 \\ \hline \hline \end{tabular} Average classification accuracy. The results are based on class prediction method defined with WP and TP in Eq. C.1 and Eq. C.2, respectively. The results can improve by finding optimal temperature scaling parameters.
\end{table}
Table C.10: Average Accuracy with a Different Prediction Method
For \(k\neq k_{0}\), we have
\[H_{OOD,k}(x) =-\log\mathbf{P}^{\prime}_{k}(x\notin\mathbf{X}_{k}|D)\] \[=-\log(1-\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D))\] \[=-\log(1-\mathbf{P}(x\in\mathbf{X}_{k}|D)^{1/\tau_{k}})\] \[=-\log(1-(1-\mathbf{P}(x\in\cup_{k^{\prime}\neq k}\mathbf{X}_{k^ {\prime}}|D))^{1/\tau_{k}})\] \[\leq-\log(1-(1-\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D))^{1/\tau_{k}})\] \[=-\log(1-(1-e^{-H_{TP}(x)})^{1/\tau_{k}})\] \[\leq-\log(1-(1-e^{-\delta})^{1/\tau_{k}}).\]
ii) Assume \(x\in\mathbf{X}_{k_{0}}\).
For \(k=k_{0}\), by \(H_{OOD,k_{0}}(x)\leq\delta_{k_{0}}\), we have
\[-\log\mathbf{P}^{\prime}_{k_{0}}(x\in\mathbf{X}_{k_{0}}|D)\leq\delta_{k_{0}},\]
which means
\[\mathbf{P}^{\prime}_{k_{0}}(x\in\mathbf{X}_{k_{0}}|D)\geq e^{-\delta_{k_{0}}}.\]
For \(k\neq k_{0}\), by \(H_{OOD,k}(x)\leq\delta_{k}\), we have
\[-\log\mathbf{P}^{\prime}_{k}(x\notin\mathbf{X}_{k}|D)\leq\delta_{k},\]
which means
\[\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D)\leq 1-e^{-\delta_{k}}.\]
Therefore, we have
\[\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D) =\frac{\mathbf{P}^{\prime}_{k_{0}}(x\in\mathbf{X}_{k_{0}}|D)^{1/ \tau_{k_{0}}}}{\sum_{k}\mathbf{P}^{\prime}_{k}(x\in\mathbf{X}_{k}|D)^{1/\tau_{ k}}}\] \[\geq\frac{e^{-\delta_{k_{0}}/\tau_{k_{0}}}}{1+\sum_{k\neq k_{0}}(1 -e^{-\delta_{k}})^{1/\tau_{k}}}\] \[=\frac{e^{-\delta_{k_{0}}/\tau_{k_{0}}}}{1-(1-e^{-\delta_{k_{0}}} )^{1/\tau_{k_{0}}}+\sum_{k}(1-e^{-\delta_{k}})^{1/\tau_{k}}}\] \[=\frac{e^{-\delta_{k_{0}}/\tau_{k_{0}}}}{1-(1-e^{-\delta_{k_{0}}} )^{1/\tau_{k_{0}}}}\cdot\frac{1}{1+\frac{\sum_{k}(1-e^{-\delta_{k}})^{1/\tau_{ k}}}{1-(1-e^{-\delta_{k_{0}}})^{1/\tau_{k_{0}}}}}.\]
Hence,
\[H_{TP}(x) =-\log\mathbf{P}(x\in\mathbf{X}_{k_{0}}|D)\] \[\leq-\log\frac{e^{-\delta_{k_{0}}/\tau_{k_{0}}}}{1-(1-e^{-\delta_{ k_{0}}})^{1/\tau_{k_{0}}}}\cdot\frac{1}{1+\frac{\sum_{k}(1-e^{-\delta_{k}})^{1/ \tau_{k}}}{1-(1-e^{-\delta_{k_{0}}})^{1/\tau_{k_{0}}}}}\] \[=\frac{\delta_{k_{0}}}{\tau_{k_{0}}}+\log[1-(1-e^{-\delta_{k_{0}}} )^{1/\tau_{k_{0}}}]+\log\left[1+\frac{\sum_{k}(1-e^{-\delta_{k}})^{1/\tau_{k}}} {1-(1-e^{-\delta_{k_{0}}})^{1/\tau_{k_{0}}}}\right]\] \[\leq\frac{\delta_{k_{0}}}{\tau_{k_{0}}}+\frac{\sum_{k}(1-e^{- \delta_{k}})^{1/\tau_{k}}}{1-(1-e^{-\delta_{k_{0}}})^{1/\tau_{k_{0}}}}\] \[=\sum_{k}\frac{\mathbf{1}_{x\in\mathbf{X}_{k}}\delta_{k}}{\tau_{k }}+\frac{\sum_{k}(1-e^{-\delta_{k}})^{1/\tau_{k}}}{\sum_{k}\mathbf{1}_{x\in \mathbf{X}_{k}}(1-(1-e^{-\delta_{k}})^{1/\tau_{k}})}.\]
## Appendix D Output Calibration
In this section, we discuss the output calibration technique used in Section 4.2.4 to improve the final prediction accuracy. Even if the OOD detection of each task were perfect (i.e. the model accepts and rejects IND and OOD samples perfectly), the system could still make incorrect class predictions if the magnitudes of the outputs differ across tasks. To ensure that the output values are comparable, we calibrate the outputs with a scaling \(\alpha_{k}\) and a shift \(\beta_{k}\) for each task. The optimal parameters \((\alpha_{k},\beta_{k})\in R\times R\) can be found by solving an optimization problem using the samples in the memory buffer. More precisely, denote the memory buffer by \(\mathcal{M}\) and the calibration parameters by \((\alpha,\beta)\in R^{T}\times R^{T}\), where \(T\) is the number of learned tasks. After training the \(T\)th task, we find the optimal calibration parameters by minimizing the cross-entropy loss,
\[\mathcal{L}=-\frac{1}{|\mathcal{M}|}\sum_{(x,y)\in\mathcal{M}}\log p(y|x)\] (D.1)
where \(p(c|x)\) is computed using the softmax,
\[\text{softmax}\bigoplus_{k}[\alpha_{k}f(x)_{k}+\beta_{k}]\] (D.2)
where \(\bigoplus\) indicates the concatenation and \(f(x)_{k}\) is the output of task \(k\) as Eq. 21. Given the optimal parameters \((\alpha^{*},\beta^{*})\), we make final prediction as
\[\hat{y}=\arg\max\bigoplus_{k}[\alpha_{k}^{*}f(x)_{k}+\beta_{k}^{*}]\] (D.3)
If we use \(OOD_{k}=\sigma(\alpha_{k}^{*}f(x)_{k}+\beta_{k}^{*})\), where \(\sigma\) is the sigmoid, and \(TP_{k}=OOD_{k}/\sum_{k^{\prime}}OOD_{k^{\prime}}\), the theoretical results in Section 3 hold.
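Below is a minimal sketch of the calibration of Eq. D.1-D.3. The buffer outputs and labels are synthetic stand-ins, and a generic scipy optimizer is used for brevity (the actual calibration training described in Appendix F uses SGD).

```
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, C, N = 3, 2, 60                       # tasks, classes per task, buffer size
f = rng.normal(size=(N, T, C))           # stand-in outputs f(x)_k for buffer samples
y = rng.integers(0, T * C, size=N)       # stand-in buffer labels over all T*C classes

def loss(params):                        # cross-entropy of Eq. D.1
    alpha, beta = params[:T], params[T:]
    logits = (alpha[None, :, None] * f + beta[None, :, None]).reshape(N, T * C)
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(N), y].mean()

res = minimize(loss, x0=np.concatenate([np.ones(T), np.zeros(T)]),
               method="Nelder-Mead")
alpha_star, beta_star = res.x[:T], res.x[T:]

# Final prediction, Eq. D.3: argmax over the concatenated calibrated outputs.
cal = (alpha_star[None, :, None] * f + beta_star[None, :, None]).reshape(N, T * C)
y_hat = cal.argmax(axis=1)
```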
## Appendix E TIL (WP) Results
The TIL (WP) results of all the systems are reported in Table E.11. HAT and Sup show strong performances compared to the other baselines as they leverage task-specific parameters. However, as shown in Theorem 1, the CIL depends on TP (or OOD). Without an OOD detection mechanism in HAT or Sup, they perform poorly in CIL as shown in the main paper. The contrastive learning in CSI also improves the IND prediction (i.e., WP), and this along with OOD detection results in the strong CIL performance.
## Appendix F Hyper-parameters
Here we report the hyper-parameters that we did not report in the main paper due to space limitations. We mainly report the hyper-parameters of the proposed methods, HAT+CSI, Sup+CSI, and their calibrated versions. For all the experiments of the proposed methods, we use the values chosen by the original CSI [58]. We use LARS [66] optimization with learning rate 0.1 for training the feature extractor. We linearly increase the learning rate by 0.1 per epoch for the first 10 epochs. After that, we use cosine scheduler [42] without restart as in [58; 13]. After training the feature extractor, we train the linear classifier for 100 epochs with SGD with learning rate 0.1 and reduce the rate by 0.1 at 60, 75, and 90 epochs. For all the experiments except MNIST, we train the feature extractor for 700 epochs with batch size 128.
For the following hyper-parameters, we use 10% of training data for validation to find a good set of values. For the number of epochs and batch size for MNIST, Sup+CSI trains for 1000 epochs with batch size of 32 while HAT+CSI trains for 700 epochs with batch size of 256. The hard attention regularization penalty \(\lambda_{i}\) in HAT is different by experiments and task \(i\). For MNIST, we use \(\lambda_{1}=0.25\), and \(\lambda_{2}=\cdots=\lambda_{5}=0.1\). For
C10-5T, we use \(\lambda_{1}=1.0\), and \(\lambda_{2}=\cdots=\lambda_{5}=0.75\). For C100-10T, \(\lambda_{1}=1.5\), and \(\lambda_{2}=\cdots=\lambda_{10}=1.0\) are used. For C100-20T, \(\lambda_{1}=3.5\), and \(\lambda_{2}=\cdots=\lambda_{20}=2.5\) are used. For T-5T, \(\lambda_{i}=0.75\) for all tasks, and lastly, for T-10T, \(\lambda_{1}=1.0\), and \(\lambda_{2}=\cdots=\lambda_{10}=0.75\) are used. We use larger \(\lambda_{1}\) for the first task than the later tasks as we have found that the larger regularization on the first task results in better accuracy. This is by the definition of regularization in HAT. The earlier task gives lower penalty than later tasks. We manually give larger penalty to the first task. We did not search hyper-parameter \(\lambda_{t}\) for tasks \(t\geq 2\). For sparsity in Sup+CSI, we simply choose the least sparsity value of 32 used in the original Sup paper without parameter search.
Calibration methods (HAT+CSI+c and Sup+CSI+c) are based on its memory free versions (i.e. HAT+CSI and Sup+CSI). Therefore, the model training part uses the same hyper-parameters as their calibration free counterparts. For calibration training, we use SGD with learning rate 0.01, 160 training iterations, and batch size of 15 for HAT+CSI+c for all experiments.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & C10-5T & C100-10T & C100-20T & T-5T & T-10T & Avg. \\ \hline _OWM_ & 85.0\(\pm\)0.07 & 59.6\(\pm\)0.83 & 65.4\(\pm\)0.48 & 22.4\(\pm\)0.87 & 28.1\(\pm\)0.55 & 52.1 \\ _MUC_ & 95.1\(\pm\)0.10 & 77.3\(\pm\)0.83 & 73.4\(\pm\)9.16 & 55.9\(\pm\)0.26 & 47.2\(\pm\)0.22 & 69.8 \\ _PASS\({}^{\dagger}\)_ & 83.8\(\pm\)0.68 & 72.1\(\pm\)0.70 & 76.8\(\pm\)0.32 & 49.9\(\pm\)0.56 & 46.5\(\pm\)0.39 & 65.8 \\ LwF & 95.2\(\pm\)0.30 & 86.2\(\pm\)1.00 & 89.0\(\pm\)0.45 & 56.4\(\pm\)0.48 & 55.3\(\pm\)0.35 & 76.4 \\ iCaRL & 94.9\(\pm\)0.34 & 84.2\(\pm\)1.04 & 85.7\(\pm\)0.68 & 54.5\(\pm\)0.29 & 52.7\(\pm\)0.37 & 74.4 \\ A-GEM & 82.5\(\pm\)4.19 & 58.9\(\pm\)2.14 & 56.4\(\pm\)7.03 & 32.1\(\pm\)0.90 & 30.1\(\pm\)0.29 & 52.0 \\ EEIL & 93.4\(\pm\)0.02 & 83.1\(\pm\)3.13 & 88.4\(\pm\)2.07 & 30.3\(\pm\)0.89 & 25.9\(\pm\)0.04 & 64.2 \\ GD & 94.4\(\pm\)0.09 & 82.2\(\pm\)0.18 & 85.7\(\pm\)0.20 & 30.7\(\pm\)1.79 & 32.2\(\pm\)0.37 & 65.0 \\ Mnemonics\({}^{\dagger*}\) & 94.5\(\pm\)0.46 & 82.3\(\pm\)0.30 & 86.2\(\pm\)0.46 & 54.8\(\pm\)0.16 & 52.9\(\pm\)0.66 & 74.1 \\ BiC & 95.4\(\pm\)0.35 & 84.6\(\pm\)0.48 & 88.7\(\pm\)0.19 & 61.5\(\pm\)0.60 & 62.2\(\pm\)0.45 & 78.5 \\ DER++ & 92.0\(\pm\)0.54 & 84.0\(\pm\)9.43 & 86.6\(\pm\)9.44 & 57.4\(\pm\)1.31 & 60.0\(\pm\)0.74 & 76.0 \\ HAL & 82.8\(\pm\)1.94 & 49.5\(\pm\)1.51 & 61.1\(\pm\)1.43 & 13.2\(\pm\)0.77 & 21.2\(\pm\)0.41 & 26.2 \\ _HAT_ & 96.7\(\pm\)0.18 & 84.0\(\pm\)0.23 & 85.0\(\pm\)0.98 & 61.2\(\pm\)0.72 & 63.8\(\pm\)0.41 & 78.1 \\ _HyperNet_ & 94.6\(\pm\)0.37 & 76.8\(\pm\)1.22 & 83.5\(\pm\)0.98 & 23.9\(\pm\)0.60 & 28.0\(\pm\)0.69 & 61.4 \\ _Sup_ & 96.6\(\pm\)0.21 & 87.9\(\pm\)0.27 & 91.6\(\pm\)0.15 & 64.3\(\pm\)0.24 & 68.4\(\pm\)0.22 & 81.8 \\ \hline _HAT+CSI_ & 98.7\(\pm\)0.06 & 92.0\(\pm\)0.37 & 94.3\(\pm\)0.06 & 68.4\(\pm\)0.16 & 72.4\(\pm\)0.21 & 85.2 \\ _Sup+CSI_ & 98.7\(\pm\)0.07 & 93.0\(\pm\)0.13 & 95.3\(\pm\)0.20 & 65.9\(\pm\)0.25 & 74.1\(\pm\)0.28 & 85.4 \\ \hline \hline \end{tabular}
\end{table}
Table 11: The TIL Results of All the Systems.
For Sup+CSI+c, we use the same values for all the experiments except for MNIST. For MNIST, we use learning rate 0.05, batch size of 8, and run 280 iterations.
For the baselines, we use the hyper-parameters reported in the original papers or in their code. If the hyper-parameters are unknown or the code does not reproduce the result (e.g., the baseline did not implement a particular dataset or the code had undergone significant version change), we search for the hyper-parameters as we did for HAT+CSI and Sup+CSI.
## Appendix G Forgetting Rate
We discuss forgetting rate (i.e., backward transfer) [41], which is defined for task \(t\) as
\[\mathcal{F}^{t}=\frac{1}{t-1}\sum_{k=1}^{t-1}\left(\mathcal{A}_{k}^{\text{init}}-\mathcal{A}_{k}^{t}\right),\] (G.1)
where \(\mathcal{A}_{k}^{\text{init}}\) is the classification accuracy of task \(k\)'s data after learning it for the first time and \(\mathcal{A}_{k}^{t}\) is the accuracy of task \(k\)'s data after learning task \(t\). We report the forgetting rate after learning the last task.
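A small sketch of this computation, with an illustrative accuracy matrix acc[t, k] holding the accuracy on task k after learning task t (the numbers are made up):

```
import numpy as np

acc = np.array([[95.0,  0.0,  0.0],
                [93.0, 94.0,  0.0],
                [92.0, 90.0, 96.0]])    # rows: after task t; cols: task k

t = acc.shape[0] - 1                    # last task (0-based index)
acc_init = np.diag(acc)                 # A_k^init: accuracy right after learning task k
forgetting = np.mean(acc_init[:t] - acc[t, :t])   # Eq. G.1
print(forgetting)                       # (95-92 + 94-90)/2 = 3.5
```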
Figure G.4 shows the forgetting rates of each method. Some methods (e.g., OWM, iCaRL) experience less forgetting than the proposed methods HAT+CSI and Sup+CSI on M-5T. On this dataset, all the systems performed well. For instance, OWM and iCaRL achieve 95.8% and 96.0% accuracy while HAT+CSI and HAT+CSI+c achieve 94.4 and 96.9% accuracy. As we have
Figure G.4: Average forgetting rate (%). The lower the value, the better the method is on forgetting.
noted in the main paper, Sup+CSI and Sup+CSI+c achieve only 80.7 and 81.0 on M-5T although they have improved drastically from 70.1% of the base method Sup.
OWM and HyperNet show lower forgetting rates than HAT+CSI+c and Sup+CSI+c on T-5T and T-10T. However, they are not able to adapt to new classes as OWM and HyperNet achieve the classification accuracy of only 10.0% and 7.9%, respectively, on T-5T and 8.6% and 5.3% on T-10T. HAT+CSI+c and Sup+CSI+c achieves 51.7% and 49.2%, respectively, on T-5T and 47.6% and 46.2% on T-10T.
In fact, the performance reduction (i.e., forgetting) in our proposed methods occurs not because the systems forget the previous task knowledge, but because the systems learn more classes and the classification naturally becomes harder. The continual learning mechanisms (HAT and Sup) used in the proposed methods experience little or no forgetting because they find an independent subset of parameters for each task, and the learned parameters are not interfered with during training. For the forgetting rate results in the TIL setting, refer to our earlier workshop paper [28].
## Appendix H Pseudo-Code
For task \(k\), let \(p(y|\mathbf{x},k)=\text{softmax}\,f(h(\mathbf{x},k;\theta,\mathbf{e}^{k});\phi_{k})\), where \(\theta\) denotes the adapter parameters, \(\mathbf{e}^{k}\) the trainable embedding for hard attentions, and \(\phi_{k}\) the set of parameters of the classification head of task \(k\). Algorithm 1 and Algorithm 2 describe the training and testing processes, respectively. We add comments with the symbol "//".
```
0: Memory \(\mathcal{M}\), learning rate \(\lambda\), a sequence of tasks \(\mathcal{D}=\{\mathcal{D}^{k}\}_{k=1}\), and parameters \(\{\theta,\mathbf{e},\phi\}\), where \(\mathbf{e}\) and \(\phi\) are collections of task embeddings \(\mathbf{e}^{k}\) and task heads \(\phi_{k}\) // CL starts
1:for each task data \(\mathcal{D}^{k}\in\mathcal{D}\)do // Model training
2:for a batch \((\mathbf{X}_{i}^{k},\mathbf{y})\) in \(\mathcal{D}^{k}\), until converge do
3:\(\mathbf{X}_{s}\) = \(sample(\mathcal{M})\)
4: Compute loss (Eq. 24+Eq. 13) and gradients of parameters
5: Modify the model parameters \(\nabla\theta\leftarrow\nabla\theta^{\prime}\) using Eq. 11
6: Update parameters as \(\theta\leftarrow\theta-\lambda\nabla\theta\), \(\mathbf{e}^{k}\leftarrow\mathbf{e}^{k}-\lambda\partial\mathcal{L}\), \(\phi_{k}\leftarrow\phi_{k}-\lambda\partial\mathcal{L}\)
7:endfor // Back-updating in Section 5.1.2
8: Randomly select \(\tilde{\mathcal{D}}\subset\mathcal{D}^{k}\), where \(|\tilde{\mathcal{D}}|=|\mathcal{M}|\)
9:for each task \(j\), until converge do
10: minimize \(\mathcal{L}(\phi_{j})\) of Eq. 26
11:endfor // Obtain statistics in Section 5.1.3
12: Compute \(\mathbf{\mu}_{j}^{k}\) using Eq. 28 and \(\mathbf{S}^{k}\) using Eq. 29
13:endfor
```
**Algorithm 1** Training MORE
```
0: Test instance \(\mathbf{x}\) and parameters \(\{\theta,\mathbf{e},\phi\}\)
1:for each task \(k\)do
2: Obtain \(p(\mathcal{Y}^{k}|\mathbf{x},k)\)
3: Obtain \(s^{k}(\mathbf{x})\) using Eq. 27
4:endfor // Concatenate outputs for final prediction \(y\) and OOD score \(s\)
5:\(y=\arg\max\bigoplus_{1\leq k\leq t}p(\mathcal{Y}^{k}|\mathbf{x},k)s^{k}(\mathbf{x})\) (i.e. Eq. 30)
6:\(s=\max\bigoplus_{1\leq k\leq t}p(\mathcal{Y}^{k}|\mathbf{x},k)s^{k}(\mathbf{x})\)
```
**Algorithm 2** MORE Prediction
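The sketch below is a schematic rendering of the prediction step of Algorithm 2. The exact distance-based coefficient \(s^{k}(\mathbf{x})\) is given by Eq. 27 in the main text, which is not reproduced here; we simply assume an inverse Mahalanobis distance form for illustration (Section 5.1.2 and 5.1.3 describe the combination of softmax probabilities with inverse Mahalanobis distances), and all tensors (heads, class means \(\mathbf{\mu}_{j}^{k}\), covariances \(\mathbf{S}^{k}\), features) are synthetic stand-ins.

```
import numpy as np

rng = np.random.default_rng(2)
T, C, D = 3, 2, 8                          # tasks, classes per task, feature dimension
W = rng.normal(size=(T, C, D))             # stand-in classification heads phi_k
mu = rng.normal(size=(T, C, D))            # per-class feature means (cf. Eq. 28)
S_inv = np.stack([np.eye(D)] * T)          # inverse per-task covariance (cf. Eq. 29)
feat = rng.normal(size=(T, D))             # stand-in task-specific features h(x, k)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

probs = []
for k in range(T):
    p = softmax(W[k] @ feat[k])                                    # p(Y^k | x, k)
    d = min(np.sqrt((feat[k] - m) @ S_inv[k] @ (feat[k] - m)) for m in mu[k])
    s = 1.0 / (d + 1e-8)              # assumed inverse-distance coefficient s^k(x)
    probs.append(p * s)

concat = np.concatenate(probs)        # cf. Eq. 30: argmax over the concatenation
y_hat, ood_score = concat.argmax(), concat.max()
print(y_hat, ood_score)
```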
## Appendix I Size of Memory Required
In this section, we report the memory sizes required by each method in Section 5.2. The sizes include the network size, the replay buffer, and all other parameters or exemplars kept in memory simultaneously for a model to be functional.
We use an 'entry' to refer to a parameter or element in a vector or matrix to calculate the total memory required to train and test. The pre-trained backbone uses 21.6 million (M) entries (parameters). The adapter modules use 1.2M entries for CIFAR10 and 2.4M for other datasets. The baselines and our method use 22.9M and 24.1M entries for the model on CIFAR10 and other datasets, respectively. The unique technique of each method may add additional entries for training and test/inference.
The total memory required for each method without considering the replay memory buffer is reported in Table 12. Our method is competitive in memory consumption. Baselines such as OWM and A-GEM take considerably more memory than our system. iCaRL and DER++ take the least amount of memory, but the differences between our method and theirs are only 0.8M, 1.8M, 3.6M, 1.0M, and 1.8M for C10-5T, C100-10T, C100-20T, T-5T and T-10T.
Many replay based methods (e.g., iCaRL, HAL) need to save the previous network for distillation during training. This requires additional 1.2M or
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method & C10-5T & C100-10T & C100-20T & T-5T & T-10T \\ \hline OWM & 26.6M & 28.1M & 28.1M & 28.2M & 28.2M \\ MUC & 22.9M & 24.1M & 24.1M & 24.1M & 24.1M \\ PASS & 22.9M & 24.2M & 24.2M & 24.3M & 24.4M \\ LwF & 22.9M & 24.1M & 24.1M & 24.1M & 24.1M \\ iCaRL & 22.9M & 24.1M & 24.1M & 24.1M & 24.1M \\ A-GEM & 26.5M & 31.4M & 31.4M & 31.5M & 31.5M \\ EEIL & 22.9M & 24.1M & 24.1M & 24.1M & 24.1M \\ GD & 22.9M & 24.1M & 24.1M & 24.1M & 24.1M \\ BiC & 22.9M & 24.1M & 24.1M & 24.1M & 24.1M \\ DER++ & 22.9M & 24.1M & 24.1M & 24.1M & 24.1M \\ HAL & 22.9M & 24.1M & 24.1M & 24.1M & 24.1M \\ HAT & 23.0M & 24.7M & 25.4M & 24.6M & 25.1M \\ Sup & 24.7M & 33.7M & 45.7M & 27.7M & 33.7M \\ MORE & 23.7M & 25.9M & 27.7M & 25.1M & 25.9M \\ \hline \hline \end{tabular} Total memory (in entries) required for each method without the replay memory buffer.
\end{table}
Table 12: Required Memory
2.4M entries for CIFAR10 or other datasets. Our method does not save the previous model as we do not use distillation.
Note that a large memory consumption usually comes from the memory buffer as the raw data is of size 32*32*3 or 64*64*3 for CIFAR and T-ImageNet. For memory buffer of size 2000, a system needs 6.1M or 24.6M entries for CIFAR or T-ImageNet. Therefore, saving a smaller number of samples is important for reducing the memory consumption. As we demonstrated in Table 5 and Table 6 in the main paper, our method performs better than the baselines even with a smaller memory buffer. In Table 5, we use large memory sizes (e.g., 200 and 2000 for CIFAR10 and other datasets). In Table 6, we reduce the memory size by half. When we compare the accuracy of our method in Table 6 to those of the baselines in Table 5, our method still outperforms them on all datasets. Our method with a smaller memory buffer achieves average classification accuracy of 88.13, 71.69, 71.29, 64.17, 61.90 on C10-5T, C100-10T, C100-20T, T-5T, and T-10T. On the other hand, the best baselines achieve 88.98, 69.73, 70.03, 61.03, 58.34 on the same experiments with a larger memory buffer.
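For reference, the worked arithmetic behind the buffer-size figures quoted above (raw images of 32*32*3 or 64*64*3 entries each):

```
cifar_entries = 2000 * 32 * 32 * 3     # 6,144,000  ~ 6.1M entries
tiny_entries  = 2000 * 64 * 64 * 3     # 24,576,000 ~ 24.6M entries
print(cifar_entries / 1e6, tiny_entries / 1e6)
```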
|
2304.08677
|
Bogoliubov excitations driven by thermal lattice phonons in a quantum
fluid of light
|
The elementary excitations in weakly interacting quantum fluids have a
non-trivial nature which is at the basis of defining quantum phenomena such as
superfluidity. These excitations and the physics they lead to have been
explored in closed quantum systems at thermal equilibrium both theoretically
within the celebrated Bogoliubov framework, and experimentally in quantum
fluids of ultracold atoms. Over the past decade, the relevance of Bogoliubov
excitations has become essential to understand quantum fluids of interacting
photons. Their driven-dissipative character leads to distinct properties with
respect to their equilibrium counterparts. For instance, the condensate
coupling to the photonic vacuum environment leads to a non-zero generation rate
of elementary excitations with many striking implications. In this work,
considering that quantum fluids of light are often hosted in solid-state
systems, we show within a joint theory-experiment analysis that the vibrations
of the crystal constitute another environment that the condensate is
fundamentally coupled to. This coupling leads to a unique heat transfer
mechanism, resulting in a large generation rate of elementary excitations in
typical experimental conditions, and to a fundamental non-zero contribution at
vanishing temperatures. Our work provides a complete framework for
solid-embedded quantum fluids of light, which is invaluable in view of
achieving a regime dominated by photon vacuum fluctuations.
|
Irénée Frérot, Amit Vashisht, Martina Morassi, Aristide Lemaître, Sylvain Ravets, Jacqueline Bloch, Anna Minguzzi, Maxime Richard
|
2023-04-18T01:01:35Z
|
http://arxiv.org/abs/2304.08677v4
|
# Bogoliubov excitations driven by thermal lattice phonons in a quantum fluid of light
###### Abstract
Elementary excitations in weakly interacting quantum fluids have a correlated particle-hole nature that leads to spectacular macroscopic quantum phenomena such as superfluidity. This many-body character was established in the context of cold-atom condensates at thermal equilibrium in the framework of Bogoliubov's celebrated theory of the weakly interacting Bose gas. Bogoliubov excitations were also found to be highly relevant to driven-dissipative quantum fluids of light, with certain resulting phenomena strikingly analogue to their equilibrium counterparts, but also genuine out-of-equilibrium aspects. In this work, we investigate both theoretically and experimentally a regime in which the elementary excitations in a quantum fluid of light result dominantly from their interaction with thermal lattice phonons, namely the elementary vibrations of the crystal. By an accurate comparison with the theoretically predicted spectral function of the driven-dissipative quantum fluid we achieve a quantitative understanding of the particle-hole nature of the elementary excitations, and unveil a remarkable decoupling from thermal excitations which is expected to be relevant in equilibrium quantum fluids as well. Finally, we exploit this quantitative understanding to identify a crossover temperature around \(1\,\mathrm{K}\), below which the lattice phonons are sufficiently quieted down for the quantum fluctuations to take over in the generation of Bogoliubov excitations. This regime is highly desired as it is characterized by strong quantum correlations between Bogoliubov excitations.
## I Introduction
Interactions between the constituent particles of a quantum fluid play a key role in their response to any space-time perturbations, such as thermal fluctuations, or an obstacle disrupting the quantum flow. They provide a many-body nature to the quantum fluid elementary excitations that results in spectacular macroscopic phenomena, such as superconductivity [1], and superfluidity [2; 3]. While capturing in full generality many-body excitations represents a daunting theoretical challenge, the celebrated Bogoliubov theory provides a microscopic theoretical framework to describe them in bosonic quantum fluids in the weakly interacting regime, namely when the two-body scattering length is much shorter than the average inter-particle distance. In this regime, the elementary excitations are transformed from free particles (operator \(\hat{a}_{\mathbf{q}}\)) to correlated particle-hole quasi-particles \(\hat{\beta}_{\mathbf{q}}=u_{\mathbf{q}}\hat{a}_{\mathbf{q}}+v_{-\mathbf{q}} \hat{a}_{-\mathbf{q}}^{\dagger}\), where \((u_{\mathbf{q}},v_{\mathbf{q}})\) are the characteristic Bogoliubov amplitudes, and \(\hbar\mathbf{q}\) is the excitation momentum [4; 5].
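For orientation, the sketch below evaluates the textbook Bogoliubov amplitudes and dispersion of a closed, equilibrium weakly interacting Bose gas [4; 5]; this is not the driven-dissipative expression derived in Sec. II, and the reduced units (\(\hbar=m=1\), interaction energy \(gn=1\)) are assumptions made only for illustration.

```
import numpy as np

gn = 1.0                                       # assumed interaction energy g*n
q = np.linspace(0.01, 3.0, 200)
eps = q**2 / 2.0                               # free-particle dispersion
E = np.sqrt(eps * (eps + 2.0 * gn))            # Bogoliubov dispersion (sonic at low q)
u2 = 0.5 * ((eps + gn) / E + 1.0)              # |u_q|^2
v2 = 0.5 * ((eps + gn) / E - 1.0)              # |v_q|^2
assert np.allclose(u2 - v2, 1.0)               # bosonic normalisation of (u_q, v_q)
print(E[:3], u2[:3], v2[:3])
```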
Interactions have a strong effect on the fluid characteristics. In closed quantum systems, the dispersion relation of the elementary excitations takes a linear gapless shape at low momenta that results in a non-zero critical velocity, and hence to superfluidity [5] as experimentally demonstrated in ultracold atoms [6; 7; 8]. Bogoliubov theory is also fully relevant in driven-dissipative quantum fluids of light [9]. In this case, the system consists of a many-body steady-state of weakly-interacting photons, which is obtained by resonant drive with laser, and in which the dominant coherent component will be referred to as the condensate thereafter. This quantum fluid is obtained in a semiconductor microcavity or in a nonlinear medium, in which the photons are dressed with an electronic transition, which provides them with two-body interactions. A superfluid state of the elementary excitations has been reported in different kinds of quantum fluids of light [10; 11; 12] as a result of the emergence of a sonic dispersion relation, as was confirmed experimentally [13; 14]. The nonequilibrium character of this quantum fluid actually allows for a larger family of elementary excitations dispersion relations, such as gapped states that also lead to superfluidity, or diffusive states that do not [9; 15].
Another striking consequence of the Bogoliubov transformation is that the spectral function of weakly interacting fluids exhibits two distinct excitation branches instead of one in the free-particle case: the first branch, which is usually referred to as the normal branch (dark blue branch in Fig.1.(c,d)), has a more particle-like nature and a positive frequency with respect to the condensate. The second one, usually referred to as the ghost mode (light blue branch in Fig.1.(c,d)), is more hole-like and has a
negative frequency. The presence of this ghost mode is a key signature of a particle-hole correlation in the elementary excitations. It has been first observed in ultracold atom experiments [16], and later reported in quantum fluids of light in spontaneous emission [17] and in pump-probe experiments [13; 14]. This feature is also present in quantum fluids of light in which the condensate coherence results from spontaneous symmetry breaking, and not from the laser coherence [9]. Such condensates have specific fluctuations [18], and their elementary excitations involve a different Bogoliubov-like transformation, characterized by a diffusive Goldstone mode at vanishing frequency [19; 20], as well as a ghost mode as was observed recently [21; 22; 23].
The correlated particle-hole nature of the excitations has also fundamental consequences at the microscopic level, as it leads to the formation of quantum entanglement among pairs of excitations at opposite momenta and frequency [24]. This pairing mechanism has been shown to be highly relevant for the experimental investigation of the dynamical Casimir effect [25], for the experimental realization of analogue Hawking radiation [26; 27; 28] and in other analogue gravity experiments [29; 30]. Importantly, in these proposals, the elementary excitations are assumed to be driven by quantum fluctuation as illustrated in Fig.1.a.
An interesting feature of weakly interacting quantum fluids, which is at the heart of this work, is that the Bogoliubov transformation also modifies profoundly the interaction between the condensate and its environment both in terms of overall magnitude, and of momentum-frequency dependence. This correction plays for instance a role in the condensate interaction with an impurity [31; 32; 33; 34], and it is particularly interesting when the condensate is in permanent interaction with a thermal fluctuation reservoir, as is the case for quantum fluids of light embedded in a solid-state environment.
Indeed, crystalline lattices are subject to thermal vibrations, i.e. lattice phonons, that are unavoidably coupled to the condensate. Owing to the driven-dissipative nature of quantum fluids of light, this coupling results in the steady-state generation of elementary excitations on top of the condensate, as illustrated in Fig.1.b. In the present work, we explore both theoretically and experimentally the key role of thermal phonons on the physics of resonantly driven polariton condensates. We find that the coupling to phonons plays an important role in all the properties listed above. Namely, it constitutes a valuable resource to measure in a reliable way not only the Bogoliubov dispersion relation, but also the Bogoliubov particle and hole coefficients \((u_{\mathbf{q}},v_{-\mathbf{q}})\). It also constitutes a probe of the renormalized condensate-phonon interaction resulting from Bogoliubov transformation, that we exploit to demonstrate the mechanism of decoupling of the condensate from the thermal phonon bath, and hence of protection against its thermal fluctuations. Finally, while the thermal fluctuations dominate the current experiment, we predict that they can be strongly reduced at the benefit of the quantum fluctuations, by tuning the appropriate experimental parameters.
We organize this article as follows; in Section II, we develop a Bogoliubov theory of a resonantly driven exciton-polariton condensate, coupled both to lattice phonons and to free space photons. The observables relevant to the experiment are calculated, such as the spectral function of the elementary excitation emission \(I(\mathbf{q},\omega)\). We report in Section III our measurement of \(I(\mathbf{q},\omega)\) in a mi
Figure 1: Sketch of the two intrinsic mechanisms creating Bogoliubov excitations in a quantum fluid of light embedded in a finite-temperature solid-state micro-resonator. (a) describes the contribution from quantum fluctuations (noted \(|0\rangle\) in the sketch) that results from the Bogoliubov transformed condensate vacuum on one hand, and to coupling of the condensate to the extracavity photonic vacuum on the other hand. (b) describes the condensate coupling to finite temperature solid-state phonons (symbolized as a flame of temperature \(T\)). The experimental configuration considered in this work consists in a steady-state condensate (orange ellipse) involving \(\langle N_{c}\rangle\) photon-like particles (exciton-polaritons) driven at resonance by a laser field of amplitude \(F\), and subject to a loss rate \(\gamma\). The correlated particle-hole nature of Bogoliubov excitations is shown as a bound red and white symbols. The radiative recombination of the latter is shown as escaping photons, in a momentum-frequency intensity pattern described by the spectral function \(I(\mathbf{q},\omega)\). (c) and (d) describe the same mechanism as (a) and (b) respectively, using the dispersion relation in the framework of the Bogoliubov excitation dispersion relation, which consists of a normal (dark blue) and ghost (light blue) mode, both with a shape that differs from the free polaritons dispersion relation (dashed line). As discussed in the main text, the spontaneous emission rate of Bogoliubov excitations is proportional to \(|u_{c}v_{c}|^{2}\) when they result from quantum fluctuations (c), and to \(|u_{c}(u_{x}-v_{x})|^{2}\) (\(|v_{c}(u_{x}-v_{x})|^{2}\)) for the normal (ghost) mode, when they result from coupling to thermal lattice phonons (d), where \((u_{c,x},v_{c,x})\) are the coefficients of the Bogoliubov transformation. The subscript \(c\) (\(x\)) refers to the cavity photon (exciton) component of Bogoliubov excitations.
crocavity between temperatures of \(6.6\,\mathrm{K}\) and \(12\,\mathrm{K}\). By quantitative comparison with the theory, we extract the elementary excitation dispersion relation (Section II.3) and the Bogoliubov transformation amplitudes \((u_{\mathbf{q}},v_{-\mathbf{q}})\) (Section III.4). In the discussion Section IV we estimate the experimentally achieved Bogoliubov-transformation induced decoupling from the phonon bath and derive a crossover temperature below which quantum fluctuations are expected to take over the lattice phonons fluctuations. We discuss how to tune the microcavity parameters to achieve a refined control over both phenomena. Finally, section V offers some concluding remarks.
## II Theory
### Microscopic model and observables
In this work, we investigate a quantum fluid of light consisting of resonantly driven exciton-polaritons [35, 9] (henceforth denoted as 'polaritons'), hybrid quasiparticles obtained when photons confined in a cavity are in the strong coupling regime with an excitonic transition (bound electron-hole pairs) provided by a semiconductor planar quantum well. Their excitonic component provides them with two-body interactions, as well as interactions with the bath of acoustic solid-state lattice phonons, while the photonic fraction mediates the coupling to the resonant laser drive and to the extra-cavity free propagating photons that constitute the measured observable. The Hamiltonian describing all these interactions expressed in the exciton-photon basis thus consists of the following contributions:
\[\hat{\mathcal{H}}=\hat{\mathcal{H}}^{(0)}_{\mathrm{pol}}+\hat{\mathcal{V}}_{ xx}+\hat{\mathcal{H}}^{(0)}_{\mathrm{ph}}+\hat{\mathcal{V}}_{xp}+\hat{\mathcal{V}}_{ \mathrm{out}}, \tag{1}\]
where
\[\hat{\mathcal{H}}^{(0)}_{\mathrm{pol}} =\hbar\sum_{\mathbf{q}}\left[\omega_{x,\mathbf{q}}\hat{b}^{ \dagger}_{\mathbf{q}}\hat{b}_{\mathbf{q}}+\omega_{c,\mathbf{q}}\hat{a}^{ \dagger}_{\mathbf{q}}\hat{a}_{\mathbf{q}}+\frac{\Omega}{2}(\hat{a}^{\dagger}_{ \mathbf{q}}\hat{b}_{\mathbf{q}}+\mathrm{h.c.})\right]\] \[\quad+\left(f_{p}(t)^{*}\hat{a}_{\mathbf{q}_{p}}+\mathrm{h.c.}\right) \tag{2}\]
describes the interaction between cavity photons (operator \(\hat{a}_{\mathbf{q}}\)) and quantum well excitons (\(\hat{b}_{\mathbf{q}}\) ) of in-plane momentum \(\mathbf{q}\), in which the strong coupling regime is described by the third term in the sum, with \(\hbar\Omega\) being the Rabi splitting [36, 37] separating the upper and lower polariton modes when \(\hat{\mathcal{H}}^{(0)}_{\mathrm{pol}}\) is in its diagonal form [38]. The last term in \(\hat{\mathcal{H}}^{(0)}_{\mathrm{pol}}\) describes the coherent laser drive with \(f_{p}(t)=F_{p}e^{i\omega_{\mathrm{sat}}t}\) of amplitude \(F_{p}\) with corresponding in-plane momentum \(\hbar\mathbf{q}_{p}\). Concerning the interaction terms, we take
\[\hat{\mathcal{V}}_{xx}=(\hbar g_{x}/2)\sum_{\mathbf{k},\mathbf{k}^{\prime}, \mathbf{q}}\hat{b}^{\dagger}_{\mathbf{k}+\mathbf{q}}\hat{b}^{\dagger}_{\mathbf{ k}^{\prime}-\mathbf{q}}\hat{b}_{\mathbf{k}^{\prime}}\hat{b}_{\mathbf{k}}. \tag{3}\]
This describes the Coulomb-mediated interactions between excitons, of strength \(g_{x}\), that contributes to two-body interactions between polaritons [39]. Furthermore,
\[\hat{\mathcal{V}}_{\mathrm{sat}}=(-\hbar g_{s}/2)\sum_{\mathbf{k},\mathbf{k}^ {\prime},\mathbf{q}}(\hat{a}^{\dagger}_{\mathbf{k}+\mathbf{q}}\hat{b}^{\dagger }_{\mathbf{k}^{\prime}-\mathbf{q}}\hat{b}_{\mathbf{k}^{\prime}}\hat{b}_{ \mathbf{k}}+\mathrm{h.c.}) \tag{4}\]
describes an additional interaction mechanism between excitons, of strength \(g_{s}\) and often referred to as saturation nonlinearity. It results from the fact that excitons consist of bound fermions, i.e. electrons and holes, so that the creation of an exciton produces a non-zero fermionic phase-space filling, that in turn reduces the photon-creation probability of a second exciton [40]. The term
\[\hat{\mathcal{H}}^{(0)}_{\mathrm{ph}}=\hbar\sum_{\mathbf{q},k_{z}}\omega^{(\mathrm{ph})}_{\mathbf{q},k_{z}}\hat{c}^{\dagger}_{\mathbf{q},k_{z}}\hat{c}_{\mathbf{q},k_{z}} \tag{5}\]
describes the three dimensional continuum of harmonic lattice vibration modes or acoustic phonons, with bosonic operators \(\hat{c}_{\mathbf{q},k_{z}}\). \(\omega^{(\mathrm{ph})}_{\mathbf{q},k_{z}}=v_{s}\sqrt{\mathbf{q}^{2}+k_{z}^{2}}\) is the acoustic phonon dispersion relation, with \(v_{s}\) the sound velocity. \(\mathbf{q}=(q_{x},q_{y})\) is the two-dimensional momentum in the plane of the microcavity spacer and quantum well, and \(k_{z}\) is in the orthogonal direction. The interaction between acoustic phonons and excitons occurs via the elastic deformation potential that reads [41]:
\[\hat{\mathcal{V}}_{xp}=i\hbar\sum_{\mathbf{q},k_{z}}g_{xp}(\mathbf{q},k_{z})(\hat{c}_{\mathbf{q},k_{z}}-\hat{c}^{\dagger}_{-\mathbf{q},k_{z}})\sum_{\mathbf{q}^{\prime}}\hat{b}^{\dagger}_{\mathbf{q}+\mathbf{q}^{\prime}}\hat{b}_{\mathbf{q}^{\prime}}\, \tag{6}\]
where \(g_{xp}(\mathbf{q},k_{z})\) is the momentum-dependent interaction strength. The detailed expression for \(g_{xp}(\mathbf{q},k_{z})\) is given in Appendix A.9. Finally, the term
\[\hat{\mathcal{V}}_{\mathrm{out}}=\hbar\sum_{\mathbf{q},k_{z}}\left\{\omega^{(\alpha)}_{\mathbf{q},k_{z}}\hat{\alpha}^{\dagger}_{\mathbf{q},k_{z}}\hat{\alpha}_{\mathbf{q},k_{z}}+\kappa_{\mathbf{q},k_{z}}[\hat{\alpha}^{\dagger}_{\mathbf{q},k_{z}}\hat{a}_{\mathbf{q}}+\mathrm{h.c.}]\right\} \tag{7}\]
describes the conversion of intracavity photons into extracavity free propagating photons, described by the bosonic operator \(\hat{\alpha}_{\mathbf{q},k_{z}}\). Cavity photons can tunnel at a rate \(\kappa_{\mathbf{q},k_{z}}\) into this continuum across the mirrors, and vice-versa, as a result of the finite reflectivity of the mirrors. The first term in Eq. (7) describes the extra-cavity free propagating photon energy, whose dispersion relation in vacuum is \(\omega^{(\alpha)}_{\mathbf{q},k_{z}}=c\sqrt{\mathbf{q}^{2}+k_{z}^{2}}\) with \(c\) the speed of light. The second term describes the tunnel coupling mechanism.
The experimental observable of focus in the current work is the extracavity photon intensity \(I(\mathbf{q},\omega)\) resolved both in frequency and momentum. Using an input-output formalism detailed in Appendix A.1 we derive a general relation between the intracavity photon field and the extracavity photon intensity that reads
\[I(\mathbf{q},\omega)=\lim_{\Delta t\to\infty}\frac{\gamma_{\mathrm{ cav}}}{\pi\Delta t}\int_{t_{0}}^{t_{0}+\Delta t}dt_{2}\int_{t_{0}}^{t_{0}+\Delta t}dt_{1}\] \[\qquad\qquad\qquad\times e^{-i\omega(t_{2}-t_{1})}\langle\hat{a}^{ \dagger}_{\mathbf{q}}(t_{2})\hat{a}_{\mathbf{q}}(t_{1})\rangle \tag{8}\]
where we have used the fact that the extracavity photons are in a vacuum state (an excellent approximation
considering that our photons are in the \(\sim 1.5\,\)eV energy range). The cavity loss rate is \(\gamma_{\rm cav}=\pi\rho({\bf q},\omega)\kappa_{{\bf q},k_{z}({\bf q},\omega)}^{2}\) with \(\rho({\bf q},\omega)=\sum_{k_{z}}\delta(\omega-\omega_{{\bf q},k_{z}}^{(\alpha)})\) the partial density of states for extra-cavity photon modes, and can be safely taken as constant within the frequency range relevant to the experiment. We thus proceed to derive the two-time correlator \(\langle\hat{a}_{\bf q}^{\dagger}(t_{2})\hat{a}_{\bf q}(t_{1})\rangle\) in presence of both quantum fluctuations and a thermal population of acoustic phonons.
In our experimental configuration, polariton-polariton interactions are relatively weak, i.e. the associated scattering length is much smaller than the interparticle distance, and the driving laser intensity is large enough to induce a macroscopic population of the steady-state excitonic and photonic modes [5]. This justifies a mean-field treatment for the condensate and the Bogoliubov approximation for the description of the excitations on top of it.
### Bogoliubov theory
_Mean-field equation for the condensate.-_ We first derive the mean-field steady-state of the resonantly driven system. Writing the Heisenberg equations of motion for the cavity photons and excitons at the laser wavevector \({\bf q}_{p}\), and setting, as per the mean-field approximation, \(\langle\hat{b}_{{\bf q}_{p}}\rangle=\psi_{x}\) and \(\langle\hat{a}_{{\bf q}_{p}}\rangle=\psi_{c}\), we find the steady-state equations:
\[(\omega_{x}-i\gamma_{x}/2+g_{x}|\psi_{x}|^{2})\psi_{x}+(\Omega/2-g_{s}|\psi_{ x}|^{2})\psi_{c}-g_{s}\psi_{x}^{2}\psi_{c}^{*}=0 \tag{9}\]
and:
\[(\omega_{c}-i\gamma_{\rm cav}/2)\psi_{c}+(\Omega/2-g_{s}|\psi_{x}|^{2}/2) \psi_{x}+F_{p}=0\, \tag{10}\]
where \(n_{x,c}=|\psi_{x,c}|^{2}\) are the excitonic and photonic densities and from here on the excitonic and photonic frequencies are shifted by the laser frequency \(\omega_{\rm las}\). For details on the full solution to Eqs. (9-10) see Appendix A.2.
Note that we have introduced a phenomenological decoherence rate for the excitonic transition of the form \(\gamma_{x}({\bf q})=\gamma_{x,0}+\beta q^{2}\) (\(\beta>0\)) that describes in an effective way the fact that polaritons of higher energy (and hence of higher \(|{\bf q}|\)) interact more strongly with the quantum well imperfections (see e.g. [42; 43; 44] for details). We do not attempt to describe this effect in our model as it would far exceed our scope without clear benefit for the purpose of this work.
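Eqs. (9)-(10) are two coupled nonlinear equations for the complex fields \(\psi_x\) and \(\psi_c\). As a minimal sketch of how they can be solved, the snippet below feeds their real and imaginary parts to a standard root finder; all parameter values are illustrative assumptions (not the experimental ones), and, depending on the drive strength, several steady states may coexist, the converged one depending on the initial guess.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameters in meV units (hbar = 1); NOT the experimental values.
w_x, w_c = 0.5, 0.3            # exciton / cavity frequencies relative to the laser
g_x, g_s = 0.0, 0.01           # exciton-exciton and saturation nonlinearities
gamma_x, gamma_cav = 0.05, 0.05
Omega = 3.28                   # Rabi splitting
F_p = 1.0                      # drive amplitude

def residuals(z):
    """Real and imaginary parts of the steady-state Eqs. (9)-(10)."""
    psi_x = z[0] + 1j * z[1]
    psi_c = z[2] + 1j * z[3]
    nx = abs(psi_x) ** 2
    eq_x = (w_x - 1j * gamma_x / 2 + g_x * nx) * psi_x \
        + (Omega / 2 - g_s * nx) * psi_c - g_s * psi_x ** 2 * np.conj(psi_c)
    eq_c = (w_c - 1j * gamma_cav / 2) * psi_c \
        + (Omega / 2 - g_s * nx / 2) * psi_x + F_p
    return [eq_x.real, eq_x.imag, eq_c.real, eq_c.imag]

sol = fsolve(residuals, x0=[0.1, 0.0, 0.1, 0.0])
psi_x, psi_c = sol[0] + 1j * sol[1], sol[2] + 1j * sol[3]
n_x, n_c = abs(psi_x) ** 2, abs(psi_c) ** 2   # excitonic and photonic densities
```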
_Bogoliubov approximation.-_ We then use the Bogoliubov approximation to reduce the interaction terms to a quadratic form. This approximation amounts to: i) assume that both the cavity photon and the exciton modes at the laser momentum \({\bf q}_{p}\) are macroscopically occupied; and ii) neglect in the Hamiltonian terms involving more than two operators at \({\bf q}\neq{\bf q}_{p}\). We thus derive the resulting quadratic exciton-exciton interaction terms \(\hat{\mathcal{V}}_{xx}\) and \(\hat{\mathcal{V}}_{\rm sat}\) that describe interactions occurring within the condensate, and with final states outside the condensate with momenta \({\bf q}_{p}+{\bf q}\) and \({\bf q}_{p}-{\bf q}\) (see Appendix A.3 for the detailed expression). The phonon-exciton interaction becomes
\[\hat{\mathcal{V}}_{xp} = i\hbar\sqrt{n_{x}}\sum_{{\bf q},k_{z}}g_{xp}({\bf q},k_{z})\times \tag{11}\] \[(\hat{c}_{{\bf q},k_{z}}-\hat{c}_{-{\bf q},k_{z}}^{\dagger})(\hat {b}_{{\bf q}_{p}+{\bf q}}^{\dagger}+\hat{b}_{{\bf q}_{p}-{\bf q}})\.\]
It describes the scattering of a condensate exciton, via emission or absorption of a phonon, into an excited state outside the condensate with momentum \({\bf q}_{p}+{\bf q}\) or \({\bf q}_{p}-{\bf q}\).
Under the Markov approximation for the damping kernels associated with the bath of extra-cavity photons and the bath of solid-state phonons, the excitation operators take the following final form in frequency space (see Appendix A.4, A.5, A.6, A.7 for details):
\[\hat{\mathcal{A}}_{{\bf q}_{p},{\bf q}}(\omega)=i[\omega{\bf 1}-M_{\bf q}]^{-1} \hat{\mathcal{F}}_{{\bf q}_{p},{\bf q}} \tag{12}\]
where we have set for short-hand notation
\[\hat{\mathcal{A}}_{{\bf q}_{p},{\bf q}}(\omega)=[\hat{a}_{{\bf q}_{p}+{\bf q} }(\omega),\hat{b}_{{\bf q}_{p}+{\bf q}}(\omega),\hat{a}_{{\bf q}_{p}-{\bf q}} ^{\dagger}(\omega),\hat{b}_{{\bf q}_{p}-{\bf q}}^{\dagger}(\omega)]^{T}, \tag{13}\]
and introduced the Langevin force vector
\[\hat{\mathcal{F}}_{{\bf q}_{p},{\bf q}}(\omega) = [\hat{F}_{{\bf q}_{p}+{\bf q}}(\omega),\hat{f}_{{\bf q}}(\omega)-\hat{f}_{-{\bf q}}^{\dagger}(\omega), \tag{14}\] \[\hat{F}_{{\bf q}_{p}-{\bf q}}^{\dagger}(\omega),\hat{f}_{-{\bf q}}^{\dagger}(\omega)-\hat{f}_{{\bf q}}(\omega)]^{T}\]
The matrix \(M_{\bf q}\) reads:
\[M_{\bf q}=\begin{pmatrix}\omega_{c,{\bf q}_{p}+{\bf q}}-i\gamma_{\rm cav}&-2 \mu_{s}+\Omega/2&0&-\mu_{s}\\ -2\mu_{s}+\Omega/2&\omega_{x,{\bf q}_{p}+{\bf q}}-i\gamma_{x,{\bf q}_{p}+{\bf q }}+2{\rm Re}(\mu_{sx})&-\mu_{s}&\mu_{sx}\\ 0&\mu_{s}&-\omega_{c,{\bf q}_{p}-{\bf q}}-i\gamma_{\rm cav}&2\mu_{s}-\Omega/2 \\ \mu_{s}&-\mu_{sx}^{*}&2\mu_{s}-\Omega/2&-\omega_{x,{\bf q}_{p}-{\bf q}}-i \gamma_{x,{\bf q}_{p}-{\bf q}}-2{\rm Re}(\mu_{sx})\end{pmatrix}\, \tag{15}\]
with \(\mu_{s}=g_{s}n_{x}/2\) and \(\mu_{sx}=g_{x}n_{x}-g_{s}\sqrt{n_{x}n_{c}}e^{-i\phi}\).
The Langevin force for cavity photons, which results from their coupling to the extracavity photons, reads:
\[\hat{F}_{\bf q}(t)=-i\sum_{k_{z}}\kappa_{{\bf q},k_{z}}\hat{\alpha}_{{\bf q},k_{z}}^{(in)}e^{-i\omega^{(\alpha)}_{{\bf q},k_{z}}t}\, \tag{16}\]
where \(\hat{\alpha}^{(in)}_{\mathbf{q},k_{z}}:=e^{i\omega^{(\alpha)}_{\mathbf{q},k_{z}}t_{0}}\hat{\alpha}_{\mathbf{q},k_{z}}(t_{0})\). As discussed below, this force is the main contributor to quantum fluctuations. Similarly, the Langevin force acting on excitons, as a result of their coupling to the thermal phonon bath, is
\[\hat{f}_{\mathbf{q}}(t)=\sqrt{n_{x}}\sum_{k_{z}}g_{xp}(\mathbf{q},k_{z})\hat{c} ^{(in)}_{\mathbf{q},k_{z}}e^{-i\omega^{(\text{ph})}_{\mathbf{q},k_{z}}t}\, \tag{17}\]
where \(\hat{c}^{(in)}_{\mathbf{q},k_{z}}:=\hat{c}_{\mathbf{q},k_{z}}(t_{0})e^{i\omega ^{(\text{ph})}_{\mathbf{q},k_{z}}t_{0}}\). Similar equations of motion have been derived in the literature [45; 46], but the interaction with lattice phonons has not been included so far to the best of our knowledge.
_Bogoliubov eigenmodes.-_ Equations (12)-(15) describe how the cavity photons and excitons hybridize due to: (i) the exciton-photon Rabi coupling \(\Omega\); and (ii) two-body interactions; as well as their forcing by coupling to lattice phonons, decoherence via the excitonic component, and dissipation into extracavity photons. The \(M_{\mathbf{q}}\) matrix is readily brought into diagonal form as \(P_{\mathbf{q}}M_{\mathbf{q}}P_{\mathbf{q}}^{-1}\), with \(P_{\mathbf{q}}\) containing the Bogoliubov coefficients \(u_{\alpha},v_{\alpha}\):
\[P_{\mathbf{q}}=\begin{pmatrix}u_{lp,c,\mathbf{q}}&u_{lp,x,\mathbf{q}}&v_{lp,c,-\mathbf{q}}&v_{lp,x,-\mathbf{q}}\\ u_{up,c,\mathbf{q}}&u_{up,x,\mathbf{q}}&v_{up,c,-\mathbf{q}}&v_{up,x,-\mathbf{q}}\\ v^{\ast}_{lp,c,\mathbf{q}}&v^{\ast}_{lp,x,\mathbf{q}}&u^{\ast}_{lp,c,-\mathbf{q}}&u^{\ast}_{lp,x,-\mathbf{q}}\\ v^{\ast}_{up,c,\mathbf{q}}&v^{\ast}_{up,x,\mathbf{q}}&u^{\ast}_{up,c,-\mathbf{q}}&u^{\ast}_{up,x,-\mathbf{q}}\end{pmatrix}, \tag{18}\]
where the \(\mathbf{q}_{\mathbf{p}}\) dependence has been dropped for the sake of lighter notations. There are thus four Bogoliubov coefficients per mode in our model instead of the usual two in the Bogoliubov theory. This is because the bosonic quasi-particles that constitute our condensate are exciton-photon hybrids, such that, for instance, \(u_{j,x}\) characterizes both the particle and the excitonic content of the Bogoliubov state in mode \(j\), \(v_{j,c}\) its hole and photonic content, and so on. In order to ensure bosonic commutation relations of the new quasiparticle field operators, the Bogoliubov coefficients are normalized according to \(|u_{c,\mathbf{q}}|^{2}+|u_{x,\mathbf{q}}|^{2}-|v_{c,-\mathbf{q}}|^{2}-|v_{x,-\mathbf{q}}|^{2}=1\) for all modes. For the sake of compactness, we drop the 'lp' subscript for the lower polariton coefficients, which we thus denote \((u_{c,\mathbf{q}},u_{x,\mathbf{q}},v_{c,-\mathbf{q}},v_{x,-\mathbf{q}})\) in the following sections.
### Dispersion relations
The eigenvalues of \(M_{\mathbf{q}}\) provide the four dispersion relations and damping rates of the Bogoliubov excitations, namely (owing to the symmetries of the \(M_{\mathbf{q}}\) matrix):
\[\omega_{lp,N}(\mathbf{q})-i\gamma_{lp,N}(\mathbf{q}) =\omega_{lp,\mathbf{q}_{p}+\mathbf{q}}-i\gamma_{lp,\mathbf{q}_{p} +\mathbf{q}} \tag{19}\] \[\omega_{lp,G}(\mathbf{q})-i\gamma_{lp,G}(\mathbf{q}) =-\omega_{lp,\mathbf{q}_{p}-\mathbf{q}}-i\gamma_{lp,\mathbf{q}_{p }-\mathbf{q}} \tag{20}\]
for the lower polariton normal and ghost modes, and:
\[\omega_{up,N}(\mathbf{q})-i\gamma_{up,N}(\mathbf{q}) =\omega_{up,\mathbf{q}_{p}+\mathbf{q}}-i\gamma_{up,\mathbf{q}_{p}+\mathbf{q}} \tag{21}\] \[\omega_{up,G}(\mathbf{q})-i\gamma_{up,G}(\mathbf{q}) =-\omega_{up,\mathbf{q}_{p}-\mathbf{q}}-i\gamma_{up,\mathbf{q}_{p}-\mathbf{q}} \tag{22}\]
for the upper polariton normal and ghost modes, where the lower ('lp') and upper ('up') polaritons are still good labels as long as we can neglect the up/lp mixing induced by the Bogoliubov transformation, a condition which is well satisfied as long as \(\Omega\gg(\mu_{s},\mu_{sx})\), as is the case in our experiment [45]. Examples of the resulting lower polariton normal (dark solid line) and ghost (light solid line) mode dispersion relations are shown in Fig. 1(c,d), and compared with the free polariton dispersion relation (\(\mu_{s}=\mu_{sx}=0\), dashed line).
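As an illustration of this step, the sketch below builds the matrix of Eq. (15) for \(\mathbf{q}_p=0\), a parabolic cavity dispersion and a flat exciton dispersion, and reads off the dispersion relations and damping rates of Eqs. (19)-(22) from its eigenvalues. All numerical values are illustrative assumptions (in particular \(\mu_{sx}\) is simply set to zero), and the eigenvalues are returned in no particular order, so in practice the four branches are tracked by continuity in \(\mathbf{q}\).

```python
import numpy as np

# Illustrative parameters (meV, hbar = 1), q_p = 0; NOT the experimental values.
curv_c = 1.1                 # cavity curvature hbar^2/(2 m_c) in meV * um^2
w_c0, w_x0 = 1.5, 1.8        # cavity / exciton frequencies relative to the laser at q = 0
Omega = 3.28                 # Rabi splitting
gamma_cav, gamma_x = 0.05, 0.05
mu_s, mu_sx = 0.095, 0.0     # g_s n_x / 2 and g_x n_x - g_s sqrt(n_x n_c) e^{-i phi} (set to 0 here)

def M(q):
    """Bogoliubov matrix of Eq. (15) at q_p = 0, where q_p + q and q_p - q coincide."""
    w_c = w_c0 + curv_c * q ** 2
    w_x = w_x0
    return np.array([
        [w_c - 1j * gamma_cav, -2 * mu_s + Omega / 2, 0, -mu_s],
        [-2 * mu_s + Omega / 2, w_x - 1j * gamma_x + 2 * np.real(mu_sx), -mu_s, mu_sx],
        [0, mu_s, -w_c - 1j * gamma_cav, 2 * mu_s - Omega / 2],
        [mu_s, -np.conj(mu_sx), 2 * mu_s - Omega / 2, -w_x - 1j * gamma_x - 2 * np.real(mu_sx)],
    ], dtype=complex)

qs = np.linspace(0.0, 2.0, 200)                       # in-plane momentum (1/um)
eigs = np.array([np.linalg.eigvals(M(q)) for q in qs])
dispersions = eigs.real    # the four branches of Eqs. (19)-(22)
dampings = -eigs.imag      # the corresponding damping rates
```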
### Emission intensity \(I(\mathbf{q},\omega)\)
In order to determine \(I(\mathbf{q},\omega)\) we need to evaluate the intracavity photon correlator in Eq. (8). This is achieved by using Eq. (12) and Fourier transforming back to the time domain, \(\hat{a}_{\mathbf{q}}(t)=\int_{-\infty}^{\infty}(d\omega/2\pi)e^{-i\omega t}\hat{a}_{\mathbf{q}}(\omega)\). The relevant quantity entering Eq. (8) is the frequency-domain solution for the intracavity photon operator:
\[\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}(\omega) =G_{11}\hat{F}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)+G_{13}\hat{F}^ {\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)\] \[\quad+(G_{12}-G_{14})[\hat{f}_{\mathbf{q}}(\omega)-\hat{f}^{ \dagger}_{-\mathbf{q}}(\omega)]\, \tag{23}\]
that involves both the quantum and lattice phonon fluctuations, and where we have set \(G(\mathbf{q},\omega)=i[\omega\mathbf{1}-M_{\mathbf{q}}]^{-1}\), and the Langevin forces \(\hat{F}\) and \(\hat{f}\) are given respectively by the Fourier transforms of Eqs. (16) and (17). We discuss separately the two different fluctuation contributions.
#### Quantum fluctuation contribution to \(I(\mathbf{q},\omega)\)
We first focus on the contribution from quantum fluctuations of photons and calculate \(\langle\hat{a}^{\dagger}_{\mathbf{q}}(t_{2})\hat{a}_{\mathbf{q}}(t_{1})\rangle\). Given that the initial state for the system is the vacuum for the photon modes, only the \(\hat{F}^{\dagger}\) term contributes to the output signal. By using that \(\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}\,e^{-i\omega t}G_{13}(\mathbf{q},\omega)\hat{F}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)=i\sum_{k_{z}}\left\{\kappa_{\mathbf{q}_{p}-\mathbf{q},k_{z}}e^{-it\omega^{(\alpha)}_{\mathbf{q}_{p}-\mathbf{q},k_{z}}}G_{13}(\mathbf{q},\omega^{(\alpha)}_{\mathbf{q}_{p}-\mathbf{q},k_{z}})[\hat{\alpha}^{(in)}_{\mathbf{q}_{p}-\mathbf{q},k_{z}}]^{\dagger}\right\}\) and \(\langle[\hat{\alpha}^{(in)}_{\mathbf{q}_{p}-\mathbf{q},k_{z}}][\hat{\alpha}^{(in)}_{\mathbf{q}_{p}-\mathbf{q},k_{z}^{\prime}}]^{\dagger}\rangle=\delta_{k_{z},k_{z}^{\prime}}\), we obtain the contribution of quantum fluctuations to the two-point correlations as:
\[\langle\hat{a}^{\dagger}_{\mathbf{q}_{p}+\mathbf{q}}(t_{2})\hat{a}_{ \mathbf{q}_{p}+\mathbf{q}}(t_{1})\rangle_{Q}=\sum_{k_{z}}\kappa^{2}_{\mathbf{q} _{p}-\mathbf{q},k_{z}}\times\] \[\quad\quad e^{i(t_{2}-t_{1})\omega^{(\alpha)}_{\mathbf{q}_{p}- \mathbf{q},k_{z}}}|G_{13}(\mathbf{q},\omega^{(\alpha)}_{\mathbf{q}_{p}- \mathbf{q},k_{z}})|^{2} \tag{24}\]
To obtain the output signal from Eq. (8) we finally assume that \(\Delta t\) is much larger than the inverse typical frequency range (namely, \(\Delta t\gg 10^{-11}\)s for frequencies in the meV range), allowing us to send \(\Delta t\to\infty\) and obtain
\[I_{Q}(\mathbf{q}_{p}+\mathbf{q},\omega)=\gamma^{2}_{\text{cav}}|G_{13}(\mathbf{q},\omega)|^{2}/\pi. \tag{25}\]
\(G_{13}(\mathbf{q},\omega)\) can be made explicit using the Bogoliubov transformation Eq. (18) which leads to
\[I_{Q}(\mathbf{q}_{p}+\mathbf{q},\omega)=\gamma_{\text{cav}}^{2}\times\] \[\left[\frac{|u_{c,\mathbf{q}}v_{c,-\mathbf{q}}|^{2}/\pi}{(\omega- \omega_{\mathbf{q}_{p}+\mathbf{q}})^{2}+\gamma_{\mathbf{q}_{p}+\mathbf{q}}^{2} }+\frac{|u_{c,-\mathbf{q}}v_{c,\mathbf{q}}|^{2}/\pi}{(\omega+\omega_{\mathbf{q }_{p}-\mathbf{q}})^{2}+\gamma_{\mathbf{q}_{p}-\mathbf{q}}^{2}}\right] \tag{26}\]
for the lower polariton resonance, where we have omitted the '\(lp\)' label. In this expression we have assumed that the normal (\(\omega=\omega_{\mathbf{q}_{p}+\mathbf{q}}\)) and ghost (\(\omega=-\omega_{\mathbf{q}_{p}-\mathbf{q}}\)) modes are well split in frequency as compared to \(\gamma_{\mathbf{q}_{p}+\mathbf{q}}\).
This expression constitutes an important result that characterizes the emission in the quantum fluctuation regime: the normal branch at \(\mathbf{q}_{p}+\mathbf{q}\) and the ghost branch at \(\mathbf{q}_{p}-\mathbf{q}\) exhibit an equal brightness in the spectral function. This property is a fundamental consequence of the fact that in this regime, the photons are produced in (quantum-entangled) pairs of \(\mathbf{q}_{p}\pm\mathbf{q}\) momenta via Hamiltonian terms equivalent to \(\hat{a}_{\mathbf{q}_{p}}\hat{a}_{\mathbf{q}_{p}}\hat{a}_{\mathbf{q}_{p}+ \mathbf{q}}^{\dagger}\hat{a}_{\mathbf{q}_{p}-\mathbf{q}}^{\dagger}\), which destroy two condensate photons (at frequency \(\omega_{\text{las}}\)) to produce a pair of correlated photons in the normal and ghost branches at frequencies \(\omega_{\text{las}}\pm\omega\).
#### Thermal phonon fluctuation contribution to \(I(\mathbf{q},\omega)\)
We now focus on the contribution to the emission intensity originating from the coupling to thermal lattice phonons, namely from \(\hat{f}_{\mathbf{q}}(\omega)-\hat{f}_{-\mathbf{q}}^{\dagger}(\omega)\) in Eq. (23). We follow a similar derivation as in the quantum fluctuation case, using the relation \(\hat{f}_{-\mathbf{q}}^{\dagger}(\omega)=[\hat{f}_{-\mathbf{q}}(-\omega)]^{\dagger}\). We also use the fact that, as the acoustic speed of sound \(v_{s}\) is much smaller than the speed of light, the phonons involved in the interaction have a wavevector \(k_{z}\gg|\mathbf{q}|\), so that \(\omega=v_{s}|k_{z}|\) and we can replace \(g_{xp}(\mathbf{q},k_{z})\) by \(g_{xp}(\omega)\). The scattering rate between the condensate and the phonon bath can be expressed as
\[n_{x}\gamma_{xp}(\omega)=\pi n_{x}[g_{xp}(\omega)]^{2}\rho^{\prime}_{\mathbf{ q},\omega} \tag{27}\]
with \(\rho^{\prime}_{\mathbf{q},\omega}=\sum_{k_{z}}\delta(\omega-\omega_{\mathbf{q},k_{z}}^{(\text{ph})})\) the reduced phonon density of states. An explicit expression and numerical evaluation of \(n_{x}\gamma_{xp}(\omega)\) are given in Appendix A.9. Finally, using a thermal Bose-Einstein distribution for the phonon bath, \(n_{T}(\omega)=1/[e^{\hbar\omega/(k_{B}T)}-1]\), yields the spectral function
\[I_{P}(\mathbf{q}_{p}+\mathbf{q},\omega)=\gamma_{\text{cav}} \gamma_{xp}(|\omega|)n_{x}|n_{T}(\omega)|\times\] \[|G_{12}(\mathbf{q},\omega)-G_{14}(\mathbf{q},\omega)|^{2}/\pi \tag{28}\]
As before, we obtain an explicit expression for the matrix elements \(G_{12}(\mathbf{q},\omega)-G_{14}(\mathbf{q},\omega)\) using the Bogoliubov transformation [Eq. (18)]. Within the same assumptions as in the quantum fluctuation case, we thus obtain for the lower polariton:
\[I_{P}(\mathbf{q}_{p}+\mathbf{q},\omega)=\gamma_{\text{cav}} \gamma_{xp}(|\omega|)n_{x}|n_{T}(\omega)||u_{x,\mathbf{q}}-v_{x,-\mathbf{q}}|^ {2}\times\] \[\left[\frac{|u_{c,\mathbf{q}}|^{2}/\pi}{(\omega-\omega_{\mathbf{q }_{p}+\mathbf{q}})^{2}+\gamma_{\mathbf{q}_{p}+\mathbf{q}}^{2}}+\frac{|v_{c, \mathbf{q}}|^{2}/\pi}{(\omega+\omega_{\mathbf{q}_{p}-\mathbf{q}})^{2}+\gamma_ {\mathbf{q}_{p}-\mathbf{q}}^{2}}\right]\,. \tag{29}\]
Note that due to the frequency-dependent factor \(\gamma_{xp}(|\omega|)\) as well as to the thermal factor \(|n_{T}(\omega)|\), this lineshape is not purely Lorentzian. This aspect turns out to be significant in developing a careful analysis of the experimental data as discussed below.
Equation (29) is an important outcome of our analysis. For the normal mode (first Lorentzian), the output photons are produced via the radiative relaxation of an elementary excitation produced itself by the absorption of a thermal phonon at \((\mathbf{q},\omega)\) by the condensate. These events occur at a rate which is proportional to \(\gamma_{xp}(\omega)n_{x}\), to the thermal population of lattice phonons \(n_{T}(\omega)\), and to the density of states of the Bogoliubov excitation in the normal mode at \(\omega_{\text{las}}+\omega\) (\(|u_{c,\mathbf{q}}|^{2}\) factor). For the ghost branch (second Lorentzian), the output photons are produced by radiative relaxation of an elementary excitation produced itself by spontaneous and stimulated emission of a lattice phonon by the condensate (\(|n_{T}(\omega<0)|=1+n_{T}(-\omega)\) term), where the density of Bogoliubov excitation in the ghost branch is given by the \(|v_{c,\mathbf{q}}|^{2}\) factor. Therefore, in this thermal phonons regime, the two Bogoliubov excitation branches have different intensities. This is different from the quantum fluctuations regime, and it will turn out to be a key feature to identify in the experiment the physical origin of Bogoliubov excitations.
As will also be discussed more extensively in section IV.2, the overall emission rate is also modulated by a \(|u_{x,\mathbf{q}}-v_{x,-\mathbf{q}}|^{2}\) factor, which quantifies the density fluctuation fraction of a Bogoliubov excitation (the other fraction consisting of phase fluctuation). It is this vanishing density fluctuation fraction in Bogoliubov excitations for vanishing momenta that leads to decoupling between the condensate and lattice phonons at small \(|\mathbf{q}|\).
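To make the contrast between the two regimes concrete before turning to the experiment, the sketch below evaluates the lower polariton lineshapes of Eqs. (26) and (29) for a single momentum; the Bogoliubov amplitudes, linewidths, frequencies, and the frequency-independent value taken for \(\gamma_{xp}n_x\) are all illustrative assumptions rather than fitted values.

```python
import numpy as np

# Illustrative inputs (meV, hbar = 1); NOT fitted experimental values.
w_N, w_G = 0.50, 0.45                  # omega_{q_p+q} and omega_{q_p-q} of the LP branch
gam = 0.03                             # Bogoliubov linewidth, taken equal for both branches
u, v = 1.02, -0.20                     # LP-basis Bogoliubov amplitudes, |u|^2 - |v|^2 ~ 1
X, C = np.sqrt(0.53), np.sqrt(0.47)    # excitonic / photonic Hopfield coefficients
u_x, v_x, u_c, v_c = X * u, X * v, C * u, C * v
gamma_cav = 0.05
gamma_xp_nx = 0.01                     # gamma_xp * n_x, taken frequency independent here
kB_T = 15.0 * 0.0862                   # k_B T in meV at T = 15 K

w = np.linspace(-1.0, 1.0, 2000)       # frequency grid (avoids w = 0 exactly)

def lorentz(x, x0):
    return (1.0 / np.pi) / ((x - x0) ** 2 + gam ** 2)

abs_nT = np.abs(1.0 / np.expm1(w / kB_T))   # |n_T(w)|; equals 1 + n_T(-w) for w < 0

# Quantum fluctuations, Eq. (26): normal and ghost branches carry the same weight.
I_Q = gamma_cav ** 2 * abs(u_c * v_c) ** 2 * (lorentz(w, w_N) + lorentz(w, -w_G))

# Thermal phonons, Eq. (29): the weights |u_c|^2 (normal) and |v_c|^2 (ghost) differ,
# and both are modulated by |n_T(w)| and the density-fluctuation factor |u_x - v_x|^2.
I_P = gamma_cav * gamma_xp_nx * abs_nT * abs(u_x - v_x) ** 2 \
    * (abs(u_c) ** 2 * lorentz(w, w_N) + abs(v_c) ** 2 * lorentz(w, -w_G))
```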
## III Experiment
To test these theoretical predictions experimentally, we designed a GaAs-based microcavity in the strong coupling regime containing a single quantum well. This configuration suppresses the dark exciton states that arise in multi quantum well structures [47; 48; 49], and that contribute an unwanted electronic reservoir that strongly perturbs the Bogoliubov transformation [50; 51]. Another contribution to this reservoir is suppressed by choosing a quantum well thicker than usual (17 nm), which results in a narrower excitonic inhomogeneous broadening, and hence in a lower density of states of localized excitons close to resonance with the lower polariton mode [52].
In the experiment, we excite the planar microcavity, with continuous wave (CW) laser light quasi-resonant
with the zero-momentum state of the lower polariton (LP) branch. The upper polariton (UP) branch is blue-shifted above the LP by a Rabi splitting \(\hbar\Omega=3.28\,\)meV. By taking advantage of the intentional wedge in the cavity thickness, we choose to address a lower polariton (LP) state that has an excitonic fraction of \(|X|^{2}=0.53\). This choice offers significant two-body interactions while preserving a narrow spectral linewidth of the LP states. The microcavity temperature (that of lattice phonons) can be tuned between \(T_{c}=6.6\) and \(12\,\)K in the vacuum chamber of a Helium flow cryostat. We set the laser frequency at \(\omega_{\text{las}}=\omega_{lp}(\mathbf{q}_{p})+0.20\,\)meV/\(\hbar\) above our target LP state, where \(\omega_{lp}(\mathbf{q})\) is the non-interacting LP dispersion relation and \(\hbar\omega_{\text{las}}=1449.0\,\)meV, and tune its angle close to normal incidence (measured to \(\theta_{p}=0.1\,\)degree), which corresponds to a polariton condensate with in-plane momentum \(|\mathbf{q}_{p}|=0.015\,\mu\)m\({}^{-1}\). For both emission and absorption, the extracavity photon incidence angle is related to the in-plane momentum of the corresponding polaritonic state via \(|\mathbf{q}|=\omega_{\text{las}}\sin(\theta)/c\). Using beam shaping, the laser spot at the microcavity surface (and hence the polariton condensate diameter) amounts to \(30\,\mu\)m, i.e. much larger than the condensate healing length \(\xi=\hbar/\sqrt{2m\Delta_{\text{int}}}\simeq 1.5\,\mu\)m, where \(m=6.6\times 10^{-35}\,\)kg is the LP effective mass, and \(\Delta_{\text{int}}\simeq 0.2\,\)meV is the two-body interaction energy involved in our experiments. A \(20\,\mu\)m diameter circular spatial filter selects the central part of the condensate, where the condensate density profile is practically constant.
The driven polariton state has a narrow spectral linewidth \(\hbar\gamma_{lp}(\theta\sim 0)=\hbar\gamma_{0}\simeq 0.03\,\)meV in the non-interacting regime, as measured after deconvolution of the instrument response, such that \(\omega_{lp}/\gamma_{c}>\sqrt{3}/2\). In this regime, the driven lower polariton condensate intensity \(|\psi_{lp}|^{2}\), and hence the transmitted light intensity \(I_{t}\propto|\psi_{lp}|^{2}\), exhibits a hysteretic response as a function of the laser intensity \(I_{L}\)[53]. The measured \(I_{t}(I_{L})\) is shown in the SI section I [54]. In the high density state of the bistable region of \(I_{t}(I_{L})\), the laser is resonant with the blue shifted LP state and a regime of significant two-body interactions is reached (\(\Delta_{\text{int}}\geq\hbar\gamma_{0}\)). In the low density state of the bistability, the laser impinges the low energy tail of the essentially unshifted LP state and the two-body interaction energy is measured to be 400 times smaller (see SI section I [54]).
### Measured spectral function of elementary excitation \(I(\theta,\omega)\)
_Non-interacting regime.-_ In order to characterize at best the system parameters which do not depend on interactions, we first tune the laser in the regime of vanishing interactions. The emission of the condensate and its elementary excitations, resolved both in angle (\(\theta\)) and frequency (\(\omega/2\pi\)), is collected on the transmission side of the microcavity. The condensate emission intensity \(I_{t}\), which is peaked at \(\omega=\omega_{\text{las}}\) and \(k=\mathbf{q}_{p}\), is several orders of magnitude brighter than the spontaneous emission of
Figure 2: Measured (a and c) and theoretical (b,d,e) spectral functions \(I(\theta,\omega)\) of a resonantly-driven polariton condensate in the regime of vanishing interactions (a-b) and in the interacting regime (c,d,e) characterized by \(g_{s}n_{x}=0.19\,\)meV. The lower polariton spectral area (below the white dashed line) is plotted in log color scale, while the upper area is plotted in a linear color scale with the indicated correction factors. The dashed black line is the theoretical dispersion relation of the Bogoliubov excitations in both situations. The missing blue stripe (a and c) and gray rectangles (b, d and e) show the area rejected from the detection by the spectral filter. In (d) the condensate is coupled only to thermal solid-state phonons (neglecting quantum fluctuations) with temperature \(T_{f}=15\,\)K, as determined from the quantitative analysis of the experiment (see main text). On the contrary, in (e) the elementary excitations are calculated assuming quantum fluctuations only.
Bogoliubov excitations \(I(\theta,\omega)\); we reject this signal by exploiting the spectrally narrow character of \(I_{t}\), using a custom-built image-preserving notch filter of \(\sim 0.2\,\)meV bandwidth. Filtering out the very bright condensate signal allows us to measure \(I(\theta,\omega)\) for both positive and negative frequencies with respect to \(\omega_{\rm las}\).
The result is shown in Fig.2.a in log (linear) scale for the lower (upper) polariton branch. The emission-free stripe situated between \(1448.95\,\)meV and \(1449.2\,\)meV is the result of the above-mentioned filter rejection band. In this non-interacting regime, the elementary excitations have a purely particle (i.e. polaritonic) nature: they correspond to the excitation of a polariton out of the condensate into any other polariton state, including the UP. As expected, we do not observe any emission from the ghost mode (at \(\omega<\omega_{\rm las}\)). Notice that the emission of the UP is much dimmer than that of the LP, as expected for a thermally-assisted excitation mechanism, as the creation of an upper polariton requires the absorption of a phonon of energy \(>3.05\,\)meV, while at \(T_{c}=6\,\)K the phonon thermal distribution falls off exponentially with the decay constant \(k_{B}T_{c}=0.56\,\)meV. We further elaborate below on the role of thermal phonons in the creation of elementary excitations.
_Interacting regime.-_ We then increase the laser intensity in order to reach the high-density state of the bistability, where the interactions are significant. In this regime, the interaction energy amounts to \(\Delta_{\rm int}=0.19\,\)meV. The measured \(I(\theta,\omega)\) is shown in Fig.2.c. As is immediately apparent, with respect to the non-interacting case (panel a of the figure), the interactions modify significantly the elementary excitations dispersion relation. Two different branches can be identified, the one with a positive effective mass and frequency range above \(\omega_{\rm las}\) is the Bogoliubov excitations normal (N) branch \(\hbar\omega_{N}(\theta)\). The dimmer one, characterized by a negative effective mass, and frequency range below \(\omega_{\rm las}\) is the ghost (G) branch \(\hbar\omega_{G}(\theta)\). The shape of \(\hbar\omega_{N,G}(\theta)\) is also clearly modified (i.e. steeper) with respect to the free particle dispersion relation \(\hbar\omega_{lp}(\theta)\). These features fully agree with the expected characteristics of the dispersion relation for LP Bogoliubov excitations. Finally, the UP dispersion relation is also clearly visible, and its shape is left essentially unchanged with respect to the non-interacting regime, as we will see more quantitatively later on, and which is expected in the regime where \(\Omega\gg\Delta_{\rm int}\) (See Appendix section A.8).
_Data analysis.-_ In order to extract the different relevant observables from these measured \(I(\theta,\omega)\), such as in particular the dispersion relation of the different modes \(\omega_{j}(\theta)\), (\(j=\{{\rm lp},{\rm up}\}\) in the non-interacting regime and \(j=\{{\rm N},{\rm G},{\rm up}\ast\}\) in the interacting regime, where \({\rm up}^{\ast}\) labels the normal branch of the upper polariton), as well as the peak-integrated intensity \(A_{j}(\theta)\) of those modes, we first use the measurement in the non-interacting regime carried out at different temperatures (between \(T_{c}=6.6\,\)K and \(T_{c}=12\,\)K). From these data, we can precisely determine most experimental parameters such as e.g. the bare exciton and cavity energies, the Rabi splitting, as well as the cavity photon effective mass. Moving then to the interacting regime, we fit the experimental \(I(\theta,\omega)\) using Eq. (28), keeping fixed these parameters that do not depend on interactions, and introduce three additional \(\theta\)-dependent fit parameters, which allow us to freely capture both the energy and the amplitude of each individual peak in the experiment. As further discussed in the Supplementary Information [54], this method provides us with the experimental values of \(\omega_{j}(\theta)\), and \(A_{j}(\theta)\) that can then be compared with the theory, allowing us to estimate the remaining interaction-dependent parameters of the model.
_Theoretical spectral functions.-_ In the non-interacting regime, \(|v_{c}|^{2}=0\); from Eq. (26) it follows that the quantum fluctuations cannot create any elementary excitation. The corresponding theoretical spectral function \(I(\theta,\omega)\) is shown in Fig.2.b and is in excellent agreement with the measured spectral function (panel a of the figure). In the next section, we will proceed to a quantitative comparison between theory and experiments. We next move to the interacting regime and keep the same model parameters as in the non-interacting case, except for a nonzero value of the nonlinearity \(g_{s}n_{x}=0.19\,\)meV, and a higher temperature \(T=15\,\)K\(>T_{c}\). This higher temperature is likely to result from residual absorption of the high intracavity field by the GaAs alloys. Repeating the same analysis as above, namely assuming only the contribution from thermal lattice phonons, yields the theoretical spectral function shown in Fig.2.d. Also in this case we find an extremely good agreement with the experimental spectral function (panel c of the figure). The hypothesis of neglecting the quantum fluctuations will be justified in detail in Sec. III.3 by the analysis of the spectral intensities.
_Thermal fluctuations versus quantum fluctuations.-_ As discussed in Sec. II, the generation of elementary excitations by thermal phonons or by quantum fluctuations results in very different \(I(\theta,\omega)\) patterns: Fig. 2.(b,d) show the calculated spectral functions including only excitations via thermal phonons, while Fig.2.e shows the corresponding pattern calculated assuming only quantum photon fluctuations. The reason for this difference is that in the quantum regime, excitations are only created in correlated pairs, implying that - for \({\bf q}_{p}=0\) - \(I(\theta,\omega)\) and \(I(-\theta,-\omega)\) have an equal brightness. In the regime dominated by thermal lattice phonons, the mechanism generating elementary excitations is not expected to produce as many pair correlations. We notice however that a finite amount of correlations is still expected in this regime, as the lattice phonon which is emitted for the creation of an excitation at \((-\omega,-\theta)\) has precisely the right energy and momentum for the creation, via its absorption, of a second excitation at \((\omega,\theta)\). A quantitative account for the
correlation properties of the spontaneous emission signal, both on the theory and experimental level, exceeds the scope of the present study, but represents a very clear future research direction. In the next sections, we proceed to a quantitative analysis of the experimental data.
### Dispersion relations \(\omega_{j}(\theta)\)
_Comparison between experiment and theory.-_ From the fitting procedure of the spectral function \(I(\theta,\omega)\) as described in the previous section, we extract the experimental dispersion relations of the elementary excitations. In the non-interacting regime, the result is shown in Fig.3.a and Fig.3.c for both the upper [\(\omega_{up}(\theta)\)] and lower [\(\omega_{lp}(\theta)\)] polariton modes. In the interacting regime, three different modes are extracted: \(\omega_{G}(\theta)\), \(\omega_{N}(\theta)\) (Fig.3.d) for, respectively, the ghost and normal modes of the lower polariton and \(\omega_{up*}(\theta)\) (Fig.3.b) which corresponds to the upper polariton branch that retains its free particle character in the limit \(\hbar\Omega\gg\Delta_{\mathrm{int}}\)[45]. The theoretical dispersion relations obtained from diagonalizing the \(M_{\mathbf{q}}\) matrix Eq. (15) are shown as the solid lines, using the parameters of the 'hotter' regime described above, and by adjusting the two coefficients \(g_{s}\) and \(g_{x}\) that describe the interactions in the polariton gas.
We obtain an excellent quantitative agreement between theory and experiment, which allows us to draw several important conclusions. Firstly, the fact that the theory accurately reproduces both (i) the N and G branches spectral blueshift, and (ii) the slope of \(\omega_{N,G}(\theta)\) at low momenta, implies that no additional reservoir is perturbing the Bogoliubov transformation. Indeed, we have shown both experimentally and theoretically in previous works [51; 55], that a reservoir of particles much heavier than polaritons (such as e.g. electron-hole pairs, or dark excitons) coexisting with the condensate would induce an additional blueshift \(\Delta_{\mathrm{res}}\) to the dispersion relation, but would not modify the speed of sound \(c_{s}\), for which the relation \(c_{s}=\sqrt{\Delta_{\mathrm{int}}/m}\) remains valid. Since the experimentally observed blueshift is \(\Delta_{o}=\Delta_{\mathrm{int}}+\Delta_{\mathrm{res}}\), in presence of a reservoir, one finds that \(c_{s}<\sqrt{\Delta_{o}/m}\). Note that this analysis is also valid for non-sonic states, in which the dispersion relation slope is also determined only by \(\Delta_{\mathrm{int}}\), and thus also leads to a similar inequality when \(\Delta_{o}>\Delta_{\mathrm{int}}\), something which can be verified numerically. This analysis shows that within the experimental uncertainty, the measured elementary excitations involve no such reservoir. Let us also notice that the occurrence of a flat segment in the best fitted dispersion relation at low momentum (around \(\theta=0\)) does not imply that the corresponding condensate is dynamically unstable. In fact, due to the presence of polariton dissipation \(\gamma_{lp}\), the instability threshold \(n_{x,th}\) for the exciton density occurs only when the imaginary part of the dispersion relation becomes positive, and this may occur well below the regime where the dispersion is sonic [9; 45]. Finally, the finite size of the condensate and the weak in-plane disorder of the quantum well also affect \(n_{x,th}\) in a non-trivial way. As it would result in small corrections as compared to our experimental uncertainty, we do not attempt to capture those effects in our theoretical model.
_Excitonic nonlinearity.-_ The best fit obtained above for the dispersion relation in the interacting case involves the ratio \(g_{x}n_{x}/g_{s}n_{x}\) of the nonlinear contributions, where \(n_{x}\) is the excitonic fraction of the condensate density. The fact that all three dispersion relation branches (G,N,up*) are experimentally accessible provides us with a unique means to determine this ratio unambiguously. Indeed, while both \(g_{x}\) and \(g_{s}\) result in a blueshift of the lower polariton N and G modes, \(g_{x}\) would also blueshift the up* branch with respect to its non-interacting counterpart; on the other hand \(g_{s}\) leaves this up* branch spectrally unshifted, and leads to an effectively reduced Rabi splitting. This qualitative difference allows us to clearly separate both \(g_{s}\) and \(g_{x}\) contributions, and yields the ratio \(g_{x}/g_{s}\simeq 0^{+0.3}_{-0}\) where \(0\) is the best fit, and \(g_{s}n_{x}=0.19\,\)meV. This result is of significant interest, and its microscopic interpretation deserves a detailed analysis that exceeds the scope of this work.
Let us mention that the fact that \(g_{s}\) seems to dominate over \(g_{x}\) does not contradict previous works dealing with polaritonic nonlinearities, since as explained above, \(g_{s}\) and \(g_{x}\) contribute to blueshifting the LP mode which is the only focus of nearly all studies on exciton-polaritons, while the upper polariton mode is typically disregarded.
Figure 3: Dispersion relations in the non-interacting (a,c) and interacting (b,d) regime at \(T_{c}=6.6K\). The data points extracted from the analysis of the measured spectral functions \(I(\theta,\omega)\) are shown in black for the upper polariton branch, in dark (light) blue for the normal (ghost) mode of the lower polariton, with error bars determined from the fitting procedure and corresponding to the \(1\sigma\) confidence interval. The solid lines show the theory.
### Emission intensity of elementary excitations
We then extract \(A_{j}(\theta)\) from the measured spectral function \(I(\theta,\omega)\), which is defined as the spectral integration of the emission intensity over the peaks of each mode \(j=\)N,G, as labelled in Eq. (29). \(A_{j}(\theta)\) is the right observable to identify the origin of the fluctuations generating the elementary excitations (namely, thermal lattice phonons or photonic quantum fluctuations). Assuming narrow Lorentzian lineshapes, Eq. (29) yields a simple approximate expression for \(A_{j}(\theta)\) in the regime dominated by thermal lattice phonons, that reads:
\[A_{N}(\theta) \simeq \gamma_{\text{cav}}\gamma_{xp}[\omega_{N}(\theta)]n_{x}n_{T}[ \omega_{N}(\theta)]|u_{c}(\theta)|^{2} \tag{30}\] \[\times |u_{x}(\theta)-v_{x}(\theta)|^{2}/\gamma_{N}(\theta),\]
and, similarly:
\[A_{G}(\theta) \simeq \gamma_{\text{cav}}\gamma_{xp}[-\omega_{G}(\theta)]n_{x}(1+n_{T} [-\omega_{G}(\theta)])|v_{c}(\theta)|^{2} \tag{31}\] \[\times |u_{x}(\theta)-v_{x}(\theta)|^{2}/\gamma_{G}(\theta),\]
where \(-\omega_{G}(\theta)>0\) and we assumed that \(\mathbf{q}_{p}\approx 0\). In the quantum fluctuation regime, Eq.(26) yields:
\[A_{N}(\theta)=A_{G}(\theta)\simeq\gamma_{\text{cav}}^{2}|u_{c}(\theta)v_{c}( \theta)|^{2}/\gamma_{N}(\theta). \tag{32}\]
In our dataset, the N mode is found much brighter than the G mode, which is in agreement with Eqs. (30,31), and not with Eq. (32). This feature rules out a dominant contribution of quantum fluctuations in this experiment.
The quantitative analysis of \(A_{j}(\theta)\) is shown for a phonon temperature \(T_{c}=6.6\,\)K, both in the non-interacting case [Fig. 4.a] and in the interacting case [Fig. 4.b]. In the non-interacting case, we observe a decay of \(A_{lp}(\theta)\) as a function of \(\theta\) which is in very good agreement with the theoretical prediction. The measured intensities also agree quantitatively with the theory in the interacting case, assuming \(T=15\,\)K as explained above. Importantly, the relative intensity between \(A_{G}(\theta)\) and \(A_{N}(\theta)\) is captured strikingly well in this log-scaled plot, except at small angles where \(A_{G}(\theta)\) is found to deviate from the theory. Indeed at smaller angles, the Bogoliubov excitation peaks become increasingly harder to separate from an additional emission contribution that we attribute to spatial inhomogeneities. This agreement shows that the generation of elementary excitations is most likely dominated by thermal phonons in our experimental conditions.
We confirm this important feature by analyzing the ratio \(R_{lp}(\theta)[T_{c}]=A_{lp}(\theta)[T_{c}]/A_{lp}(\theta)[T_{0}]\) in the non-interacting case, where \(T_{0}\) is a reference temperature, and \(A_{lp}(\theta)[T_{c}]\) is the measured lower polariton intensity at another cryostat temperature \(T_{c}\). According to Eq. (30), \(R_{lp}(\theta)\) is free from any temperature-independent parameters such as the non-trivial energy-dependent exciton-phonon interaction \(g_{xp}(\omega)\), that involves the excitonic envelope wavefunction, and it depends only on temperature as \(R_{lp}(\theta)[T_{c}]=n_{T_{c}}(\omega)/n_{T_{0}}(\omega)\), where \(n_{T_{c}}(\omega)\) is the Bose-Einstein distribution of temperature \(T_{c}\) evaluated at \(\omega=\omega_{lp}(\theta)\). In the experiment, \(R_{lp}(\theta)[T_{c}]\) is not as simple, as the excitonic transition energy, and to a lesser extent the cavity mode, both redshift for increasing temperature. This redshift modifies the LP dispersion relation \(\omega_{lp}(\theta)\) and the excitonic fraction. However \(R_{lp}(\theta)\) is still an observable that involves a lower number of complex experimental parameters. We thus perform this analysis varying the cryostat temperature in the range \(T_{c}=[6.6,7.6,8.7,10,11,12]\,\)K, and choose \(T_{c}=12\,\)K as the reference temperature \(T_{0}\). The result is shown in Fig.4.c, and compared with the theoretical prediction for \(R_{lp}(\theta)[T_{c}]\) including the temperature-dependent parameters mentioned above. We analyzed the experimental uncertainty (see the dashed lines in Fig.4.c) and found that the temperature estimate is accurate within \(5\,\)K. Since this accuracy is comparable with the investigated temperature range, a fully quantitative comparison is not possible; however, the observed trend is clearly consistent with a regime dominated by thermal phonons. The two analyses presented in this section thus demonstrate consistently that in our experiment, the Bogoliubov excitations result dominantly from the interaction of the condensate with thermal lattice phonons.
Figure 4: Analysis of integrated emission intensity \(A_{j}(\theta)\). Measured and theoretical \(A_{j}(\theta)\) at \(T_{c}=6.6\,\)K (a) in the non-interacting regime (\(j=\)lp) and (b) in the interacting regime, where the normal (ghost) mode \(j=\)N (\(j=\)G) corresponds to the upper (lower) curve with dark (light) symbols. The theory is shown as solid lines. The gray rectangle indicates the area of inaccessible measurements due to the spectral filter rejection in the experiment. (c) Measured integrated intensity ratio \(R_{lp}(\theta)[T_{c}]=A_{lp}(\theta)[T_{c}]/A_{lp}(\theta)[T_{c}=12K]\) for \(T_{c}=[7.6,8.7,10,11]\,\)K (symbols) in the non-interacting regime. The theory assuming the nominal phonon temperature \(T=T_{c}\) is shown as a solid line, as well as for two additional temperatures, \(5\,\)K lower and \(5\,\)K higher, which are shown as dotted lines.
### Characterization of Bogoliubov amplitudes
_Intensity ratio and Bogoliubov coefficients.-_ In our experiment, the fact that the Bogoliubov excitations are created dominantly by interaction with the thermal bath of lattice phonons opens up a unique opportunity to obtain a quantitative estimate of the Bogoliubov coefficients \((u_{c}(\theta),u_{x}(\theta),v_{c}(\theta),v_{x}(\theta))\), by comparing the emission intensity of the ghost branch and of the normal branch. Indeed, assuming that \(\gamma_{G}(\theta)\simeq\gamma_{N}(\theta)\), and using the fact that \(\mathbf{q}_{p}\approx 0\), the intensity ratio \(R_{GN}(\theta)=A_{G}(\theta)/A_{N}(\theta)\) assumes the strikingly simple expression
\[R_{GN}(\theta)=\frac{|v_{c}(\theta)|^{2}}{|u_{c}(\theta)|^{2}}e^{\hbar\omega_{ N}(\theta)/k_{B}T}\,. \tag{33}\]
Experimentally, \(A_{G}\) and \(A_{N}\) are measured in the exact same experimental conditions, as they are obtained in a single shot of the CCD camera, so that no extra \(\theta\)-dependent factor remains in the ratio \(R_{GN}\).
The comparison between the experiment carried out at \(T_{c}=6.6\,\)K and the corresponding theory is shown in Fig.5.a. Considering the experimental uncertainty quantified by the error bars (notice also that the full range of the plot is only \(R_{GN}=[0,8]\%\)), we obtain a fairly good agreement, regarding both the value of the ratio itself, and its trend as a function of \(\theta\).
In order to illustrate the crucial role of temperature on the brightness of the ghost branch with respect to the normal one, theoretical plots of \(R_{GN}(\theta)\) are shown in Fig.5.a for several other temperatures between \(6.5\) K and \(\infty\) (dashed lines). When the temperature is much higher than the highest frequency of the lower polariton mode (namely when \(k_{B}T\gg k_{B}T_{GN}=\hbar\omega_{N}(\theta_{\text{max}})\), which in our case corresponds to \(T_{GN}\simeq 15\) K), we have \(R_{GN}\simeq|v_{c}/u_{c}|^{2}\) (dashed line labelled as \(T=\infty\) in Fig.5.a). Upon cooling down below \(T_{GN}\), less thermal phonons are available to create excitations into the normal branch (starting with the highest energy states, namely largest angles or momenta), so that the ghost branch, which is populated by both spontaneous and stimulated emission of lattice phonons, increases in intensity with respect to the normal branch.
Remarkably, one predicts that for low enough temperature, and neglecting the quantum fluctuations to get a simple estimate, the ghost branch emission intensity can even overcome that of the normal branch. For a given \(\theta\), this happens when \(R_{GN}(\theta)\geq 1\), namely when \(k_{B}T\leq\hbar\omega_{N}/\log(|u_{c}/v_{c}|^{2})\). For instance, using our experimental parameters and a typical angle \(\theta=10\) degree, we find the temperature for equal brightness of the two branches \(T_{EB}\lesssim 5.2\) K which is about three times colder than the actual phonon temperature estimated for this experiment. This criterion provides a likely explanation regarding the notoriously difficult observation of the ghost mode spontaneous emission in exciton-polariton systems, in addition to the requirement of eliminating any detrimental electronic reservoir.
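This equal-brightness criterion is straightforward to evaluate; the snippet below does so for illustrative values of the normal-branch energy and of the photonic Bogoliubov amplitudes (the value \(T_{EB}\lesssim 5.2\) K quoted above is obtained with the actual experimental parameters instead).

```python
import numpy as np

kB = 0.0862                    # meV / K
hbar_w_N = 0.5                 # normal-branch energy at the chosen angle (illustrative)
u_c, v_c = 0.69, -0.14         # photonic Bogoliubov amplitudes (illustrative)

# R_GN(theta) >= 1  <=>  k_B T <= hbar * omega_N / log(|u_c / v_c|^2), cf. Eq. (33)
T_EB = hbar_w_N / (kB * np.log(abs(u_c / v_c) ** 2))
print(f"equal-brightness temperature with these toy numbers: T_EB = {T_EB:.1f} K")
```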
Finally, it is straightforward to estimate the Bogoliubov coefficient ratio \(|v_{c}/u_{c}|^{2}=R_{GN}\times\exp[-\hbar\omega_{N}(\theta)/k_{B}T]\) from \(R_{GN}(\theta)\), using the temperature \(T=15\) K. The result is shown in Fig.5.b together with the theory (solid line). In full agreement with the theory, we find that the photonic fraction of the "hole" character of Bogoliubov excitations (\(|v_{c}|^{2}\)) amounts to \(\sim 1.5\%\) at high angle/energy, and up to \(\sim 4\%\) for the lowest angles. While this correlation is modest, it is clearly established in our experiment, and the detailed understanding provided by our theoretical model is a guide towards realizing more strongly correlated quantum fluids of light, and in particular generating quantum-correlated pairs of excitations by lowering the temperature to enter a regime dominated by quantum fluctuations (see Section IV.1).
_Extracting all Bogoliubov coefficients.-_ The Bogoliubov transformation is defined by 4 complex coefficients \((u_{c},u_{x},v_{c},v_{x})\), which characterize the exciton and photon particle fractions (first two), and the exciton and photon hole fractions (last two), of a Bogoliubov excitation (see section II.B and Eq. (18)). In our experimental conditions, these four amplitudes can be determined to within
Figure 5: Bogoliubov amplitudes analysis. (a) \(R_{GN}(\theta)\), (b) photonic Bogoliubov squared amplitude ratio \(|v_{c}(\theta)/u_{c}(\theta)|^{2}\). The four squared Bogoliubov amplitudes are plotted in (c) in semilog scale with \(u_{c}\) (\(v_{c}\)) in dark (light) blue, and \(u_{X}\) (\(v_{X}\)) in dark (light) red. (d) Measured and theoretical Bogoliubov correction factor \(F_{B}(\theta)\) to the polariton-phonon interaction rate. The data points in (a,b,d) are shown in green. In all panels, the solid lines show the theory. The dashed lines in (b) show the theoretical \(R_{GN}(\theta)\) assuming the different phononic temperatures indicated, in addition to the best fit \(T=15\) K which is shown as a solid line. The gray rectangle indicates the area of inaccessible measurements due to the spectral filter rejection in the experiment.
a good approximation from the knowledge of the ratio \(|v_{c}/u_{c}|^{2}\) determined above, and from the knowledge of the excitonic \(X\) and photonic \(C\) Hopfield coefficients of the (non-interacting) polariton states, (their \(\mathbf{q}\)-dependence has been dropped like for \((u,v)\)). Indeed as mentioned earlier, the fact that the Rabi splitting is much larger than the interaction energy (namely \(\Omega\gg\Delta_{\text{int}}\)), implies that the lower and upper polaritons do not hybridize under the Bogoliubov transformation, and that the hole fraction of the upper polaritons is vanishing [45]. One can thus define two lower-polariton-only Bogoliubov coefficients \((u,v)\) that are related to the 4 original coefficient via the relations \((u_{x},v_{x})/X=(u_{c},v_{c})/C=(u,v)\), which leads also to \(|v_{c}/u_{c}|^{2}=|v/u|^{2}\) (see Appendix A.8 for a detailed derivation). In this regime, the Bogoliubov coefficients are approximately real, and using the normalization condition \(|u|^{2}-|v|^{2}=1\), one can thus derive both \(u\) and \(v\) from the knowledge of \(|v_{c}/u_{c}|^{2}\), and hence all four coefficients \((u_{c},u_{x},v_{c},v_{x})\). By doing so, one obtains all the Bogoliubov coefficients plotted on Fig.5.d, together with their exact theoretical prediction showing a remarkable agreement that further supports the approximations discussed above.
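In practice this reconstruction amounts to a few lines of algebra; the sketch below assumes real, positive LP-basis amplitudes (i.e. it fixes the overall sign convention of \(v\)), an illustrative value of the measured ratio, and the excitonic fraction quoted earlier.

```python
import numpy as np

# Inputs: measured ratio |v_c/u_c|^2 and the non-interacting Hopfield coefficients.
# The numbers below are placeholders for a single angle, not measured values.
r = 0.03                       # |v_c/u_c|^2 = |v/u|^2
X2 = 0.53                      # excitonic fraction |X|^2; photonic fraction is 1 - X2
X, C = np.sqrt(X2), np.sqrt(1.0 - X2)

# LP-only Bogoliubov amplitudes, assumed real, normalized as |u|^2 - |v|^2 = 1:
u = 1.0 / np.sqrt(1.0 - r)     # from |v|^2 = r |u|^2
v = np.sqrt(r) * u             # sign convention: v taken positive here
# All four coefficients via (u_x, v_x) = X (u, v) and (u_c, v_c) = C (u, v):
u_x, v_x, u_c, v_c = X * u, X * v, C * u, C * v
```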
## IV Discussion
### Crossover temperature into the quantum regime
Owing to the promising quantum resources that the regime dominated by quantum fluctuations has to offer [24; 25; 26; 27; 28; 29; 30], we estimate in this section the crossover temperature below which the quantum fluctuations become the dominant mechanism generating Bogoliubov excitations. As we shall see, this crossover temperature strongly depends on the emission angle \(\theta\): on the one hand, a larger \(\theta\) is desirable for easier experimental separability from the laser drive of both the ghost and normal modes; on the other hand the quantum regime is easier to achieve at small \(\theta\) owing to the decoupling from thermal phonons as a consequence of Bogoliubov transformation that will be discussed in Section IV.2. The crossover temperature can be simply estimated for the normal branch by equating Eqs. (30,32), yielding:
\[T_{N}^{(cr)}(\theta)=\frac{\hbar\omega_{N}(\theta)/k_{B}}{\log\left[1+\frac{\gamma_{xp}(\theta)n_{x}}{\gamma_{\text{cav}}}\frac{|u_{x}(\theta)-v_{x}(\theta)|^{2}}{|v_{c}(\theta)|^{2}}\right]}. \tag{34}\]
Similarly, for the ghost mode we equate Eqs. (31,32) to obtain:
\[T_{G}^{(cr)}(\theta)=-\frac{\hbar\omega_{N}(\theta)/k_{B}}{\log\left[1-\frac{\gamma_{xp}(\theta)n_{x}}{\gamma_{\text{cav}}}\frac{|u_{x}(\theta)-v_{x}(\theta)|^{2}}{|u_{c}(\theta)|^{2}}\right]}. \tag{35}\]
One difficulty in deriving a quantitative estimate of \(T_{N,G}^{(cr)}(\theta)\) is that it requires knowing the excitonic density \(n_{x}\) involved in the condensate, which is not easy to determine in the experiment. However, note that \(T_{N,G}^{(cr)}(\theta)\) depends only logarithmically (and hence weakly) on the product \(\gamma_{xp}n_{x}\). Regarding \(n_{x}\), we use the fact that it is necessarily lower than the excitonic Mott density \(n_{x,M}\), for which the Rabi splitting, and hence the polaritonic state, would be fully collapsed. \(n_{x,M}\) depends only on the excitonic Bohr radius as \(n_{x,M}\sim 1/(\pi a_{B}^{2})\) and is thus easy to estimate; it amounts to \(\simeq 3.2\times 10^{11}\,\text{cm}^{-2}\) in our quantum well. Concerning \(\gamma_{xp}\), a realistic estimate is derived in Appendix A.9.
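The following sketch evaluates Eqs. (34)-(35) while scanning the excitonic density between a few percent of \(n_{x,M}\) and \(n_{x,M}\) itself, mirroring the bound discussed above; the Bogoliubov amplitudes, the normal-branch energy and the value used for \(\gamma_{xp}\) are illustrative placeholders rather than the estimate of Appendix A.9.

```python
import numpy as np

kB = 0.0862                   # meV / K
hbar_w_N = 0.5                # normal-branch energy at the chosen angle (illustrative)
gamma_cav = 0.05              # cavity loss rate (meV), illustrative
gamma_xp = 1.0e-13            # exciton-phonon rate per unit density (meV cm^2), placeholder
u_c, v_c, u_x, v_x = 0.69, -0.14, 0.74, -0.15   # illustrative Bogoliubov amplitudes
n_xM = 3.2e11                 # Mott density (cm^-2), upper bound for n_x

for frac in (0.05, 0.5, 1.0):
    n_x = frac * n_xM
    ratio = (gamma_xp * n_x / gamma_cav) * abs(u_x - v_x) ** 2
    T_N = (hbar_w_N / kB) / np.log(1.0 + ratio / abs(v_c) ** 2)          # Eq. (34)
    arg = 1.0 - ratio / abs(u_c) ** 2
    # Eq. (35); if arg <= 0 the quantum regime is never reached for the ghost branch
    T_G = -(hbar_w_N / kB) / np.log(arg) if arg > 0 else float("nan")
    print(f"n_x = {frac:.2f} n_xM:  T_N^(cr) = {T_N:.2f} K,  T_G^(cr) = {T_G:.2f} K")
```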
The resulting crossover temperature for the normal branch \(T_{N}^{(cr)}(\theta)\) is thus plotted in Fig.6.a, by considering a range of excitonic densities between 5% (lightest gray line) and 100% (red line) of \(n_{x,M}\). The parameters used in the calculation are those derived for our experiment at \(T_{c}=6.6\,K\) above. This analysis shows that the normal branch crossover temperature is situated realistically within the interval \([2.5,4.5]\)K at a typical angle of 10 degree.
The ghost branch crossover temperature \(T_{G}^{(cr)}(\theta)\) is shown in Fig.6.c. As an obvious difference with the normal branch, no crossover temperature can be determined above a critical angle (\(|\theta_{cr}|\sim 10\,\text{degree}\) in our simulation of this experiment), because above \(|\theta_{cr}|\), the quantum
Figure 6: Calculated angle-dependent thermal to quantum regime crossover temperature \(T_{cr,N}\) for the normal (a) and \(T_{cr,G}\) for the ghost modes (c) in semilog scale. The excitonic density is increasing from \(n_{x}=0.1n_{x,M}\) to \(n_{x}=0.9n_{x,M}\) from the lightest to the darkest gray lines. The dashed green and the red line show the result for the smallest and the largest possible realistic excitonic densities respectively, namely \(n_{x}=0.05n_{x,M}\) and \(n_{x}=n_{x,M}\), where the latter constitutes a solid upper bound of \(n_{x}\). The thick horizontal line shows a phonon temperature of \(T=15\,\text{K}\), and the dashed horizontal line shows the liquid-helium temperature \(T_{He}=4.2\,\text{K}\). Panels (a and c) are calculated for the nonlinearity achieved in this work \(g_{s}n_{x}=0.19\,\text{meV}\), while panels (b and d) are calculated for a twice larger nonlinearity \(g_{s}n_{x}=0.38\,\text{meV}\).
fluctuations can never overcome the thermal fluctuations for the generation of Bogoliubov excitations. This feature is clearly of high relevance in trying to reach the quantum regime. Physically, it originates from the fact that even at \(T=0\,\)K, the condensate can generate Bogoliubov excitations into the ghost mode by spontaneous emission of phonons into the phonon vacuum. The corresponding \(T=0\,\)K ghost branch emission intensity \(A_{G,0}\) can be derived from Eq. (31) as:
\[A_{G,0} =A_{G}(\theta,T=0)=\gamma_{\rm cav}\gamma_{xp}(\theta)n_{x}\] \[\times|v_{c}(\theta)|^{2}|u_{x}(\theta)-v_{x}(\theta)|^{2}. \tag{36}\]
When \(A_{G,0}\) is larger than that generated by quantum fluctuations [Eq. (32)], the latter can then never become dominant for the ghost branch. The parameters involved in \(A_{G,0}\) show that \(|\theta_{cr}|\) can be increased via several different parameters, the most convenient of which are most likely the excitonic and photonic fractions (embedded within the Bogoliubov coefficients \((u_{x},v_{x},u_{c},v_{c})\)) that allow modulating to a large extent the relative coupling of polaritons to phonons on one hand (\(\propto|X|^{2}\)) and to free space photons (\(\propto|C|^{2}\)) on the other hand.
For angles smaller than \(|\theta_{cr}|\), we see in Fig.6.c that \(T_{G}^{(cr)}(\theta)\) falls approximately in the interval \([10,50]\)K in this experiment. This interval suggests that the experimental excitonic density is rather on the high side of the plotted density interval. Notice, however, that the proximity of \(|\theta_{cr}|\) enhances the uncertainty of this ghost-branch analysis.
Finally, in order to illustrate the competing role of the interaction energy with respect to the lattice phonon bath temperature, we increased the interaction energy by a factor of two in the calculation. The result is shown in Fig.6.b and Fig.6.d for the normal and ghost mode respectively. The benefit of this increase of interaction energy is twofold. It opens up a gap in the Bogoliubov excitation dispersion relation, of spectral size comparable to the spectral filter in the experiment. As a result, the emission even from the smallest angles, where \(|v|^{2}\) is the largest, becomes experimentally accessible. In practice, reaching this regime experimentally proves challenging as we observe excessive spectral broadening of the elementary excitation spectra at higher laser drive, due to a power broadening mechanism that remains to be clarified. The second benefit of this interaction energy doubling is that it increases the crossover temperature at all angles, as expected.
Overall this analysis clearly indicates that by cooling down somewhat further, by taking care of the residual absorption within the cavity, and by tweaking the polaritonic Hopfield coefficients in order to achieve the best compromise between the coupling to quantum fluctuations (\(\propto|C|^{2}\)) and the coupling to phonons, while keeping a sufficient amount of nonlinearity (\(\propto|X|^{2}\)), the regime in which the quantum fluctuations are significant is very realistically reachable.
### Decoupling of low-momenta Bogoliubov excitations from lattice phonons
One last striking property that we examine in this section is the overall reduction of the coupling strength between polaritons and the thermal bath of lattice phonons that results from the Bogoliubov transformation. This reduction can be seen already at the Hamiltonian level: rewriting the exciton-phonon interaction term [Eq. (11)] in the Bogoliubov basis results in a correction of the interaction energy amplitude \(g_{xp}(\omega)\) by a factor \((u_{x,\mathbf{q}}-v_{x,-\mathbf{q}})\). Assuming a LP-only condensate, which as discussed both in Section III.4 and in Appendix A.8 is an excellent approximation in our experiment, this correction factor takes the simpler expression \(X_{\mathbf{q}}(u_{\mathbf{q}}-v_{-\mathbf{q}})\), with \((u,v)\) the Bogoliubov amplitudes in the LP basis and \(X_{\mathbf{q}}\) the usual (non-interacting) LP excitonic Hopfield coefficient. As a result, the phonon-condensate coupling rate, which is proportional to \(g_{xp}(\omega)^{2}\), ends up corrected by the factor \(F_{B}=|u_{\mathbf{q}}-v_{-\mathbf{q}}|^{2}\leq 1\) under the Bogoliubov transformation of the elementary excitations.
Physically, \(|u_{x,\mathbf{q}}-v_{x,-\mathbf{q}}|^{2}\) actually quantifies the fraction of excitonic density fluctuations in a Bogoliubov excitation, through which polaritons indeed couple to lattice deformations. Remarkably, for small momenta \(|u_{x,\mathbf{q}}-v_{x,-\mathbf{q}}|^{2}\ll 1\) and therefore the corresponding Bogoliubov excitations become effectively decoupled from the bath of thermal lattice phonons. This phenomenon is somewhat reminiscent of a superfluid behavior, in which, as a consequence of inter-particle interactions, the system effectively decouples from slowly-moving defects. In the context of reaching the quantum fluctuation regime, this decoupling favours its emergence in low-momentum states, and thus increases the crossover temperature into the quantum regime as already pointed out in the previous section [cf. Eqs. (34,35)].
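As a rough illustration of this momentum dependence, the sketch below uses the textbook conservative Bogoliubov amplitudes (no dissipation, LP basis only) as a stand-in for the ones obtained from Eq. (18); within that standard model \(|u_{\mathbf{q}}-v_{-\mathbf{q}}|^{2}=\epsilon_{\mathbf{q}}/E_{\mathbf{q}}\), which indeed vanishes at small momenta. The kinetic-energy curvature follows from the LP mass quoted above, and the interaction energy is the quoted \(0.19\,\)meV.

```python
import numpy as np

# Conservative (textbook) Bogoliubov amplitudes used as a stand-in to illustrate F_B(q).
# Dissipation and the photonic component are neglected here, unlike in Eq. (18).
curv = 0.53                        # hbar^2/(2 m_lp) in meV * um^2, from the quoted LP mass
mu = 0.19                          # interaction energy g_s n_x in meV (value quoted in the text)
q = np.linspace(0.01, 2.0, 200)    # in-plane momentum (1/um)

eps = curv * q ** 2                # free LP kinetic energy
E = np.sqrt(eps * (eps + 2 * mu))  # Bogoliubov dispersion
F_B = eps / E                      # |u_q - v_q|^2: density-fluctuation fraction
# F_B ~ sqrt(eps / (2 mu)) -> 0 as q -> 0: low-momentum excitations decouple
# from the lattice phonons.
```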
Interestingly, we can derive a measurement of \(F_{B}(\theta)\) from our experimental determination of \(|v(\theta)/u(\theta)|^{2}\) presented in Section III.4 as \(F_{B}^{2}=(1-|v/u|)/(1+|v/u|)\). The result is plotted in Fig.5.c alongside the theoretical \(F_{B}=|u_{x,\mathbf{q}}-v_{x,-\mathbf{q}}|^{2}/X^{2}\). A Bogoliubov correction factor of \(F_{B}\simeq 80\%\) is found at large angle and decreases to about \(70\%\) at lower angles (for the data points which are the closest to the spectral filter), a trend which is in fair agreement with the theory, with deviations mostly coming from the weaker measured emission intensity of the ghost branch at low angles as compared to the theory.
The dataset presented in Fig.5.c suggests the onset of a dipping behaviour of \(F_{B}(\theta)\) at low angle with respect to higher angles. This dipping is an unambiguous signature of the Bogoliubov-mediated thermal protection mechanism from the phonon bath.
## V Conclusion
An essential and unique feature of the Bogoliubov transformation in a quantum fluid of polaritons is the
fact that it occurs in a solid-state environment. We have shown in this work that once the extrinsic sources of perturbation are removed, the polariton quantum fluid should not be considered as lying in vacuum. It is actually strongly influenced by its interaction with the thermal vibrations of the crystalline lattice.
We first show, both experimentally and theoretically, that this interaction leads to a steady-state generation rate of elementary excitations on top of the condensate, of which we develop a quantitative understanding. This understanding is based on an accurate comparison between experimentally measured spectral functions and the ones obtained from our Bogoliubov theory including the interaction of the condensate both with lattice phonons and with photonic quantum fluctuations. We thus not only provide a full picture of the dispersion relation of the various Bogoliubov excitation modes but also, exploiting the spectral function intensity, obtain a quantitative determination of the Bogoliubov amplitudes characterizing the particle-hole correlations that emerge in interacting Bose fluids.
This understanding is of high importance, for example in the context of harnessing the quantum properties of these excitations [24; 30], as the generation of Bogoliubov excitations by lattice phonons and by quantum fluctuations are in competition, even at \(T=0\,\)K. We thus define a crossover temperature towards the quantum fluctuation regime, and discuss mitigation strategies to limit - or take advantage of - the thermal lattice phonon contribution to the generation of elementary excitations. In our experiment we determine that the crossover temperature for typically interesting excitation states is situated between \(2\,\)K and \(4\,\)K, i.e. a range which is experimentally convenient.
Finally, building on this quantitative understanding, we highlight a striking phenomenon by which the Bogoliubov transformation taking place within the quantum fluid of light actually reduces its interaction with the bath of lattice phonons, and can even suppress it completely for low-momentum excitation states. This thermal protection mechanism is reminiscent of, and yet different from, superfluidity. Owing to its driven-dissipative nature, this effect has a spectacular influence on a quantum fluid of light, but observable corrections are also expected in the dynamics of closed weakly-interacting quantum fluids such as ultracold atoms interacting with impurities [31] or with a second atomic component [56; 57].
###### Acknowledgements.
IF acknowledges support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101031549 (QuoMoDys). This work was partly supported by the Paris Ile de France Region in the framework of DIM SIRTEQ, by the RENATECH network and the General Council of Essonne, by the European Research Council via the project ARQADIA (949730) and the project ANAPOLIS (101054448), and by Centre of Quantum Technologies's Exploratory Initiative programme. AV acknowledges the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 754303, and ERC grant LATIS.
## Appendix A Details on the theoretical model
### Input-output formalism
In the experiment, we measure the emission intensity resolved in both frequency and wavevector, denoted as \(I(\mathbf{q},\omega)\), where \(\mathbf{q}=(q_{x},q_{y})\) is the in-plane momentum. In order to compute it from the microscopic model, we adopt an input-output formalism. We work in the Heisenberg picture and make the standard assumption that the state is factorized at initial time \(t=t_{0}\), namely \(\hat{\rho}_{0}=\hat{\rho}_{p}\otimes|0_{\text{phot}}\rangle\langle 0_{\text{phot }}|\), with \(\hat{\rho}_{p}\) the intracavity polaritonic density matrix, and \(|0_{\text{phot}}\rangle\) the vacuum for the extra-cavity photon modes. The latter is an excellent approximation in our experiment, as the typical frequencies involved (\(\omega\sim 1.5\)eV) in the problem correspond to a temperature \(T>10^{4}\text{K}\gg T_{c}\) with \(T_{c}\sim 10\text{K}\) the cryostat temperature, so that the extracavity modes can be safely considered in a vacuum state. We work in the Markovian approximation in which the bath retains no memory of its interaction with the system.
After a time interval \(\Delta t=t-t_{0}\), the extracavity photon mode at \((\mathbf{q},k_{z})\) contains a number of output photons given by \(n(\mathbf{q},k_{z},\Delta t)=\langle\hat{\alpha}_{\mathbf{q},k_{z}}^{\dagger}(t_{0}+\Delta t)\hat{\alpha}_{\mathbf{q},k_{z}}(t_{0}+\Delta t)\rangle\), where the average is taken over the initial state, namely \(\langle\dots\rangle=\text{Tr}[\hat{\rho}_{0}\dots]\). As per the experimental conditions, the system is in its steady state, so that this quantity does not depend on the initial time \(t_{0}\). Experimentally, we measure the photon flux, per unit momentum and frequency, tunneling outside of the microcavity during a time interval \(\Delta t\), which is macroscopic as compared to the system's microscopic timescales, namely:
\[I(\mathbf{q},\omega)=\rho(\mathbf{q},\omega)\lim_{\Delta t\to\infty}\left[ \frac{n[\mathbf{q},k_{z}(\mathbf{q},\omega),\Delta t]}{\Delta t}\right] \tag{10}\]
where \(\rho(\mathbf{q},\omega)=\sum_{k_{z}}\delta(\omega-\omega_{\mathbf{q},k_{z}}^{( \alpha)})\) is the partial density of states of extracavity photons, and \(k_{z}(\mathbf{q},\omega)=\sqrt{\omega^{2}/c^{2}-\mathbf{q}^{2}}\).
Using the Heisenberg equations of motion (EOM) for the extracavity photons, \(i\hbar\partial_{t}\hat{\alpha}_{\mathbf{q},k_{z}}=[\hat{\alpha}_{\mathbf{q},k_{z}}(t),\hat{\mathcal{V}}_{\text{out}}]\), where \(\hat{\mathcal{V}}_{\text{out}}\) is given in Eq. (7) in the main text, we can relate their dynamical evolution to the one of intracavity photons as:
\[\hat{\alpha}_{\mathbf{q},k_{z}}(t) = e^{-i\omega_{\mathbf{q},k_{z}}^{(\alpha)}(t-t_{0})}\hat{\alpha} _{\mathbf{q},k_{z}}(t_{0}) \tag{11}\] \[-i\kappa_{\mathbf{q},k_{z}}\int_{t_{0}}^{t}dt^{\prime}e^{-i\omega _{\mathbf{q},k_{z}}^{(\alpha)}(t-t^{\prime})}\hat{a}_{\mathbf{q}}(t^{\prime})\.\]
Since the photonic input state is the vacuum, only the second term in Eq. (11) contributes to the output signal, leading to:
\[I(\mathbf{q},\omega)=\lim_{\Delta t\to\infty}\frac{\gamma_{\rm cav }}{\pi\Delta t}\int_{t_{0}}^{t_{0}+\Delta t}dt_{2}\int_{t_{0}}^{t_{0}+\Delta t} dt_{1}\] \[\times e^{-i\omega(t_{2}-t_{1})}\langle\hat{a}_{\mathbf{q}}^{ \dagger}(t_{2})\hat{a}_{\mathbf{q}}(t_{1})\rangle \tag{11}\]
with the cavity loss rate \(\gamma_{\rm cav}=\pi\rho(\mathbf{q},\omega)\kappa^{2}_{\mathbf{q},k_{z}(\mathbf{q},\omega)}\). Equation (11) thus shows that the photon emission intensity into the extracavity medium corresponds to the Fourier transform of the two-time correlations \(\langle\hat{a}_{\mathbf{q}}^{\dagger}(t_{2})\hat{a}_{\mathbf{q}}(t_{1})\rangle\) of the intracavity photons, and hence of the polaritons.
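As an illustration of how this kind of expression is evaluated in practice, the sketch below computes a Wiener-Khinchin-type spectrum from a model stationary correlation function; the damped oscillation standing in for \(\langle\hat{a}^{\dagger}(t+\tau)\hat{a}(t)\rangle\) and all numerical parameters are illustrative assumptions, not fitted values:

```python
import numpy as np

# Model stationary correlation C(tau) = <a^dag(t+tau) a(t)> of the intracavity field.
# Illustrative parameters in arbitrary units (not fitted to the experiment).
gamma_cav = 0.1      # cavity loss rate
omega_0   = 2.0      # oscillation frequency of the emitting mode
n_occ     = 1.0      # steady-state occupation

dtau = 0.05
tau  = np.arange(0.0, 200.0, dtau)
C    = n_occ * np.exp(1j * omega_0 * tau - 0.5 * gamma_cav * tau)

# I(omega) ~ (gamma_cav/pi) * Int dtau e^{-i omega tau} C(tau); the tau < 0 half
# is included through C(-tau) = C(tau)^*, i.e. twice the real part of the one-sided integral.
omega = np.linspace(0.0, 4.0, 401)
one_sided = (np.exp(-1j * np.outer(omega, tau)) * C).sum(axis=1) * dtau
I = (gamma_cav / np.pi) * 2.0 * one_sided.real

print("emission peaks at omega =", omega[np.argmax(I)])   # ~ omega_0, width ~ gamma_cav
```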
### Solution of the mean-field steady-state equations
We provide here the details of the solution of Eqs. (9) and (10). Without loss of generality, we can take \(\psi_{x}=\sqrt{n_{x}}\) and \(\psi_{c}=\sqrt{n_{c}}e^{-i\phi}\). Experimentally, we control the pump intensity \(|F_{p}|^{2}\), and \(n_{x},n_{c},\phi\) spontaneously take their steady-state values; but for the sake of deriving the solutions of the mean-field equations, we take \(n_{x}\) as a parameter, and derive \(n_{c},\phi,|F_{p}|^{2}\) as a function of \(n_{x}\). First, taking the real and imaginary parts of Eq. (9), we derive the relations:
\[(3g_{s}n_{x}/2-\Omega/2)\sqrt{n_{c}}\cos\phi=\sqrt{n_{x}}(\omega_ {x}+g_{x}n_{x}) \tag{12}\] \[(g_{s}n_{x}/2-\Omega/2)\sqrt{n_{c}}\sin\phi=-\sqrt{n_{x}}\gamma_ {x}/2 \tag{13}\]
Taking the square of these relations, and using \(\cos^{2}\phi+\sin^{2}\phi=1\), we obtain the ratio:
\[Q_{n}:=\frac{n_{c}}{n_{x}}=\frac{(\omega_{x}+g_{x}n_{x})^{2}}{(\Omega/2-3g_{s} n_{x}/2)^{2}}+\frac{(\gamma_{x}/2)^{2}}{(\Omega/2-g_{s}n_{x}/2)^{2}} \tag{14}\]
This relation gives \(n_{c}\) as a function of \(n_{x}\). Inserting \(Q_{n}\) into Eqs. (12) and (13), we then find the relative phase \(\phi\) as follows:
\[\cos\phi = \frac{\omega_{x}+g_{x}n_{x}}{\sqrt{Q_{n}}(3g_{s}n_{x}/2-\Omega/2)} \tag{15}\] \[\sin\phi = \frac{\gamma_{x}/2}{\sqrt{Q_{n}}(\Omega/2-g_{s}n_{x}/2)} \tag{16}\]
In our experiment, \(\Omega=3.28\,\)meV is much larger than \(g_{s}n_{x}=0.19\,\)meV, such that \(\cos\phi<0\) and \(\sin\phi>0\). Using also \(g_{x}n_{x}=0\) yields \(\tan\phi\simeq\gamma_{x}/\omega_{x}(1/2-g_{s}n_{x}/\Omega)\), where \(\gamma_{x}\sim 0.15\,\)meV and \(\omega_{x}=1.355\,\)meV/\(\hbar\) (recall that the frequencies are expressed in the frame rotating at the laser frequency). We see that as long as \(g_{s}n_{x}/\Omega\ll 1\), \(\phi\simeq\pi\), where \(\phi=\pi\) is the nominal phase difference in the linear regime for lower polariton states. Once \(\phi\) and \(n_{c}\) are obtained, we finally use Eq. (10) to find \(|F_{p}|^{2}\):
\[|F_{p}|^{2}=|(\omega_{c}-i\gamma_{\rm cav}/2)\sqrt{n_{c}}e^{-i\phi}+(\Omega/2-g _{s}n_{x}/2)\sqrt{n_{x}}|^{2}. \tag{17}\]
This provides the solution to the mean-field steady-state equations of the condensate.
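A minimal numerical sketch of this solution procedure is given below; the cavity detuning, the cavity loss rate and the unit exciton density are assumptions introduced only for illustration, while \(\Omega\), \(\gamma_{x}\), \(\omega_{x}\) and \(g_{s}n_{x}\) follow the values quoted in this appendix:

```python
import numpy as np

# Parameters in meV, in the frame rotating at the laser frequency.
Omega     = 3.28     # Rabi splitting
gamma_x   = 0.15     # exciton linewidth
gamma_cav = 0.10     # cavity loss rate (assumed value)
omega_x   = 1.355    # exciton detuning
omega_c   = 1.0      # cavity detuning (assumed value)
g_x       = 0.0      # Coulomb term neglected, as in the text
g_s_nx    = 0.19     # saturation energy g_s * n_x

def steady_state(n_x, g_s):
    """Given n_x, return (n_c, phi, |F_p|^2) from Eqs. (12)-(17)."""
    Q_n = ((omega_x + g_x*n_x)**2 / (Omega/2 - 1.5*g_s*n_x)**2
           + (gamma_x/2)**2       / (Omega/2 - 0.5*g_s*n_x)**2)          # Eq. (14)
    n_c = Q_n * n_x
    cos_phi = (omega_x + g_x*n_x) / (np.sqrt(Q_n) * (1.5*g_s*n_x - Omega/2))  # Eq. (15)
    sin_phi = (gamma_x/2)         / (np.sqrt(Q_n) * (Omega/2 - 0.5*g_s*n_x))  # Eq. (16)
    phi = np.arctan2(sin_phi, cos_phi)
    Fp2 = abs((omega_c - 1j*gamma_cav/2) * np.sqrt(n_c) * np.exp(-1j*phi)
              + (Omega/2 - g_s*n_x/2) * np.sqrt(n_x))**2                 # Eq. (17)
    return n_c, phi, Fp2

n_x = 1.0                                  # exciton density in arbitrary units (assumed)
n_c, phi, Fp2 = steady_state(n_x, g_s=g_s_nx / n_x)
print(f"n_c/n_x = {n_c/n_x:.3f}, phi = {phi:.3f} rad (~pi), |F_p|^2 = {Fp2:.3f}")
```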
### Polariton interactions in the Bogoliubov approximation
Here, we give the detailed expression for the polariton interaction terms in the Bogoliubov approximation. For the exciton-exciton interaction it results in:
\[\hat{\mathcal{V}}_{xx}=(\hbar g_{x}n_{x}/2)\sum_{\mathbf{q}}[\hat{b}_{\mathbf{ q}_{p}+\mathbf{q}}^{\dagger}\hat{b}_{\mathbf{q}_{p}-\mathbf{q}}^{\dagger}+4\hat{b}_{ \mathbf{q}_{p}+\mathbf{q}}^{\dagger}\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}+\hat{ b}_{\mathbf{q}_{p}+\mathbf{q}}\hat{b}_{\mathbf{q}_{p}-\mathbf{q}}]. \tag{18}\]
and for the saturation term:
\[\hat{\mathcal{V}}_{\rm sat}=-(\hbar g_{s}/2)\sum_{\mathbf{q}}\Big(n_{x}[\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}^{\dagger}\hat{b}_{\mathbf{q}_{p}-\mathbf{q}}^{\dagger}+2\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}^{\dagger}\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}+\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}\hat{b}_{\mathbf{q}_{p}-\mathbf{q}}+2\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}^{\dagger}]+\sqrt{n_{c}n_{x}}[e^{-i\phi}\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}^{\dagger}\hat{b}_{\mathbf{q}_{p}-\mathbf{q}}^{\dagger}+(4\cos\phi)\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}^{\dagger}\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}+e^{i\phi}\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}\hat{b}_{\mathbf{q}_{p}-\mathbf{q}}]\Big)\, \tag{19}\]
where \(\phi\) is the relative phase between the excitonic and photonic condensate fields.
### Equations of motion for cavity photons
The EOM for the cavity photons \(\partial_{t}\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}=(-i/\hbar)[\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}(t),\hat{\mathcal{H}}_{\rm tot}]\) is made of three terms. A first term contains the free part and the interaction between cavity photons and excitons [Eq. (2)]:
\[-\frac{i}{\hbar}[\hat{a}_{\mathbf{q}_{p}+\mathbf{q}},\hat{\mathcal{H}}_{\rm pol }^{(0)}]=-i\omega_{c,\mathbf{q}_{p}+\mathbf{q}}^{(0)}\hat{a}_{\mathbf{q}_{p}+ \mathbf{q}}-i\frac{\Omega}{2}\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}\, \tag{20}\]
a second term comes from the excitonic saturation mechanism (treated in the Bogoliubov approximation, as explained in the main text, Eq. (19)):
\[-\frac{i}{\hbar}[\hat{a}_{\mathbf{q}_{p}+\mathbf{q}},\hat{\mathcal{V}}_{\rm sat}] =i(g_{s}n_{x}/2)(\hat{b}_{\mathbf{q}_{p}-\mathbf{q}}^{\dagger}+2\hat{b}_{ \mathbf{q}_{p}+\mathbf{q}})\, \tag{21}\]
and a third term containing the coupling to extra-cavity photons [Eq. (7)]:
\[-\frac{i}{\hbar}[\hat{a}_{\mathbf{q}_{p}},\hat{\mathcal{V}}_{\rm out}]=-i\sum_{k_{z }}\kappa_{\mathbf{q},k_{z}}\hat{a}_{\mathbf{q},k_{z}}. \tag{22}\]
### Equations of motion for quantum well excitons
The EOM for the quantum well excitons \(\partial_{t}\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}=(-i/\hbar)[\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(t),\hat{\mathcal{H}}_{\rm tot}]\) is made of three terms. A first term contains the free part and the Rabi coupling to cavity photons:
\[-\frac{i}{\hbar}[\hat{b}_{\mathbf{q}_{p}+\mathbf{q}},\hat{\mathcal{H}}_{\rm pol }^{(0)}]=-i\omega_{x,\mathbf{q}_{p}+\mathbf{q}}^{(0)}\hat{b}_{\mathbf{q}_{p}+ \mathbf{q}}-i\frac{\Omega}{2}\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}\, \tag{23}\]
a second term comes from the excitonic saturation and Coulomb interactions (treated in the Bogoliubov approximation, Eqs. (19) and (18)):
\[-\frac{i}{\hbar}[\hat{b}_{\mathbf{q}_{p}+\mathbf{q}},\hat{\mathcal{V}}_{\rm sat}+\hat{\mathcal{V}}_{xx}]=i(g_{s}n_{x}/2)(\hat{a}_{\mathbf{q}_{p}-\mathbf{q}}^{\dagger}+2\hat{a}_{\mathbf{q}_{p}+\mathbf{q}})+ig_{s}\sqrt{n_{x}n_{c}}[e^{-i\phi}\hat{b}_{\mathbf{q}_{p}-\mathbf{q}}^{\dagger}+(2\cos\phi)\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}]-ig_{x}n_{x}(\hat{b}_{\mathbf{q}_{p}-\mathbf{q}}^{\dagger}+2\hat{b}_{\mathbf{q}_{p}+\mathbf{q}})\, \tag{24}\]
and a third term from the coupling to lattice phonons (in the Bogoliubov approximation, Eq. (11)):
\[-\frac{i}{\hbar}[\hat{b}_{\mathbf{q}_{p}+\mathbf{q}},\hat{\mathcal{V}}_{xp}]= \sqrt{n_{x}}\sum_{k_{z}}g_{xp}(\mathbf{q},k_{z})(\hat{\alpha}_{\mathbf{q},k_{z}} -\hat{c}^{\dagger}_{-\mathbf{q},k_{z}}). \tag{111}\]
### General results about the coupling to a bath
Cavity photons are coupled to a 3D continuum of extra-cavity photons, while excitons are coupled to a 3D continuum of lattice phonons. The coupling is respectively described by Eqs. (7) and (11). In both cases, it corresponds to a linear coupling to a bath of harmonic oscillators, namely a coupling which generically takes the form:
\[\hat{\mathcal{H}}_{\text{bath}}/\hbar = \sum_{\mathbf{q}}[\hat{a}^{\dagger}_{\mathbf{q}}\sum_{k_{z}}(x_{ \mathbf{q},k_{z}}\hat{\alpha}_{\mathbf{q},k_{z}}+y_{\mathbf{q},k_{z}}\hat{ \alpha}^{\dagger}_{-\mathbf{q},k_{z}})+\text{h.c.}] \tag{112}\] \[+\sum_{\mathbf{q},k_{z}}\omega_{\mathbf{q},k_{z}}\hat{\alpha}^{ \dagger}_{\mathbf{q},k_{z}}\hat{\alpha}_{\mathbf{q},k_{z}}\]
with \(\hat{a}_{\mathbf{q}}\) either the cavity photon or cavity exciton, and \(\hat{\alpha}_{\mathbf{q},k_{z}}\) either the output photons or the lattice phonons. We now derive the contribution to the EOM of the \(\hat{a}_{\mathbf{q}}\) operators resulting from such a generic coupling. We first write the EOM for the bath operators:
\[\partial_{t}\hat{\alpha}_{\mathbf{q},k_{z}} = (-i/\hbar)[\hat{\alpha}_{\mathbf{q},k_{z}},\hat{\mathcal{H}}_{ \text{bath}}] \tag{113}\] \[= -i[y_{-\mathbf{q},k_{z}}\hat{a}^{\dagger}_{-\mathbf{q}}+x^{*}_{ \mathbf{q},k_{z}}\hat{a}_{\mathbf{q}}+\omega_{\mathbf{q},k_{z}}\hat{\alpha}_{ \mathbf{q},k_{z}}]\.\]
The formal solution reads:
\[\hat{\alpha}_{\mathbf{q},k_{z}}(t)=e^{-i\omega_{\mathbf{q},k_{z}}(t-t_{0})}\hat{\alpha}_{\mathbf{q},k_{z}}(t_{0})-i\int_{t_{0}}^{t}d\tau\,e^{-i\omega_{\mathbf{q},k_{z}}(t-\tau)}[x^{*}_{\mathbf{q},k_{z}}\hat{a}_{\mathbf{q}}(\tau)+y_{-\mathbf{q},k_{z}}\hat{a}^{\dagger}_{-\mathbf{q}}(\tau)]\;. \tag{114}\]
On the other hand, the contribution to the EOM for the system operators \(\hat{a}_{\mathbf{q}}\) stemming from coupling to the bath reads:
\[(-i/\hbar)[\hat{a}_{\mathbf{q}}(t),\hat{\mathcal{H}}_{\text{bath}}]=-i\sum_{ k_{z}}[x_{\mathbf{q},k_{z}}\hat{\alpha}_{\mathbf{q},k_{z}}+y_{\mathbf{q},k_{z}} \hat{\alpha}^{\dagger}_{-\mathbf{q},k_{z}}]. \tag{115}\]
Injecting the formal solution of Eq. (114) for the bath operators into Eq. (115) yields a contribution to the EOM of the form:
\[(-i/\hbar)[\hat{a}_{\mathbf{q}}(t),\hat{\mathcal{H}}_{\text{bath} }]=\hat{F}^{(x)}_{\mathbf{q}}(t)-[\hat{F}^{(y)}_{-\mathbf{q}}(t)]^{\dagger}- \int_{-\infty}^{\infty}d\tau\{\] \[\Gamma_{1}(t-\tau)\hat{a}_{\mathbf{q}}(\tau)+\Gamma_{2}(t-\tau) \hat{a}^{\dagger}_{-\mathbf{q}}(\tau)\}. \tag{116}\]
We have introduced the so-called Langevin forces:
\[\hat{F}^{(x)}_{\mathbf{q}}(t)=-i\sum_{k_{z}}x_{\mathbf{q},k_{z}}\hat{\alpha}_{ \mathbf{q},k_{z}}(t_{0})e^{-i\omega_{\mathbf{q},k_{z}}(t-t_{0})}\, \tag{117}\]
(\(\hat{F}^{(y)}_{\mathbf{q}}\) is defined analogously, with \(x_{\mathbf{q},k_{z}}\) replaced by \(y_{\mathbf{q},k_{z}}\)) and the damping kernel:
\[\Gamma_{2}(t)=\Theta(t)\sum_{k_{z}}(x_{\mathbf{q},k_{z}}y_{-\mathbf{q},k_{z}}e ^{-i\omega_{\mathbf{q},k_{z}}t}+x_{-\mathbf{q},k_{z}}y_{\mathbf{q},k_{z}}e^{i \omega_{-\mathbf{q},k_{z}}t}) \tag{118}\]
(\(\Theta\) is the Heaviside step function). Notice that in the integral over the damping kernels, we have sent the initial time \(t_{0}\) to \(-\infty\), which amounts to assuming that the support of the damping kernel (a few times the memory of the bath) is much smaller than \(t-t_{0}\) (Markov approximation).
The Langevin-force contributions (\(\hat{F}\) terms in Eq. (116)) describe the forcing of the system by the external bath. The contribution on the second line of Eq. (116) (\(\Gamma\) terms) describes the damping induced by the coupling to the bath, which gives a finite linewidth to the Bogoliubov excitations.
_Photonic bath.-_ Specifying these results to our model, we obtain for cavity photons [cf. Eq. (112): \(x_{\mathbf{q},k_{z}}=\kappa_{\mathbf{q},k_{z}}\), \(y_{\mathbf{q},k_{z}}=0\)]:
\[-\frac{i}{\hbar}[\hat{a}_{\mathbf{q}}(t),\hat{\mathcal{V}}_{\text{out}}]=\hat {F}_{\mathbf{q}}(t)-\int_{-\infty}^{\infty}d\tau\gamma_{c,\mathbf{q}}(t-\tau) \hat{a}_{\mathbf{q}}(\tau) \tag{119}\]
with the Langevin force:
\[\hat{F}_{\mathbf{q}}(t)=-i\sum_{k_{z}}\kappa_{\mathbf{q},k_{z}}\hat{\alpha}^{( in)}_{\mathbf{q},k_{z}}e^{-i\omega^{(\alpha)}_{\mathbf{q},k_{z}}t} \tag{120}\]
with the input modes defined according to \(\hat{\alpha}^{(in)}_{\mathbf{q},k_{z}}:=\hat{\alpha}_{\mathbf{q},k_{z}}(t_{0}) e^{i\omega^{(\alpha)}_{\mathbf{q},k_{z}}t_{0}}\), and with the damping kernel:
\[\gamma_{c,\mathbf{q}}(t)=\Theta(t)\sum_{k_{z}}|\kappa_{\mathbf{q},k_{z}}|^{2}e^ {-i\omega^{(\alpha)}_{\mathbf{q},k_{z}}t}. \tag{121}\]
_Phononic bath.-_ For cavity excitons, which are coupled to the bath of thermal lattice phonons [cf. Eq. (111): \(x_{\mathbf{q},k_{z}}=i\sqrt{n_{x}}g_{xp}(\mathbf{q},k_{z})=-y_{\mathbf{q},k_{z}}\)], the same general result yields a Langevin force \(\hat{f}_{\mathbf{q}}(t)\) and a damping kernel \(\gamma_{x,\mathbf{q}}\), which enter the exciton EOM in the combinations \(\hat{f}_{\mathbf{q}}-\hat{f}^{\dagger}_{-\mathbf{q}}\) and \(\gamma_{x,\mathbf{q}}-\gamma^{*}_{x,-\mathbf{q}}\). Collecting all the contributions listed above and moving to frequency space (\(\partial_{t}\rightarrow-i\omega\)), the equations of motion for the intracavity fields read:

\[-i\omega\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)=-i\omega^{(0)}_{c,\mathbf{q}_{p}+\mathbf{q}}\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)-i\frac{\Omega}{2}\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)+i(g_{s}n_{x}/2)[\hat{b}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)+2\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)]+\hat{F}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)-\gamma_{c,\mathbf{q}_{p}+\mathbf{q}}(\omega)\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}(\omega) \tag{109}\]

and

\[-i\omega\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)=-i\omega^{(0)}_{x,\mathbf{q}_{p}+\mathbf{q}}\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)-i\frac{\Omega}{2}\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)+i(g_{s}n_{x}/2)[\hat{a}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)+2\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)]-ig_{x}n_{x}[\hat{b}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)+2\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)]+ig_{s}\sqrt{n_{x}n_{c}}[e^{-i\phi}\hat{b}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)+(2\cos\phi)\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)]+\hat{f}_{\mathbf{q}}(\omega)-\hat{f}^{\dagger}_{-\mathbf{q}}(\omega)-[\gamma_{x,\mathbf{q}}(\omega)-\gamma^{*}_{x,-\mathbf{q}}(\omega)][\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)+\hat{b}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)]\, \tag{110}\]
Equivalently, we take the Hermitian conjugate of these last two equalities, use that \([\hat{O}(\omega)]^{\dagger}=\hat{O}^{\dagger}(-\omega)\), and change \((\omega,q)\) into \((-\omega,-q)\). We obtain:
\[-i\omega\hat{a}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega) = i\omega^{(0)}_{c,\mathbf{q}_{p}-\mathbf{q}}\hat{a}^{\dagger}_{ \mathbf{q}_{p}-\mathbf{q}}(\omega)+i\frac{\Omega}{2}\hat{b}^{\dagger}_{ \mathbf{q}_{p}-\mathbf{q}}(\omega) \tag{101}\] \[-i(g_{s}n_{x}/2)[\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)+2 \hat{b}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)]\] \[+\hat{F}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)-\gamma^{* }_{c,\mathbf{q}_{p}-\mathbf{q}}(\omega)\hat{a}^{\dagger}_{\mathbf{q}_{p}- \mathbf{q}}(\omega)\]
and
\[-i\omega\hat{b}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega) = i\omega^{(0)}_{X,\mathbf{q}_{p}-\mathbf{q}}\hat{b}^{\dagger}_{ \mathbf{q}_{p}-\mathbf{q}}(\omega)+i\frac{\Omega}{2}\hat{a}^{\dagger}_{ \mathbf{q}_{p}-\mathbf{q}}(\omega) \tag{102}\] \[-i(g_{s}n_{x}/2)[\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)+2 \hat{a}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)]\] \[+ig_{x}n_{x}[\hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)+2\hat{b} ^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)]\] \[-ig_{s}\sqrt{n_{x}n_{c}}[e^{i\phi}\hat{b}_{\mathbf{q}_{p}+ \mathbf{q}}(\omega)+(2\cos\phi)\hat{b}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}( \omega)]\] \[+\hat{f}^{\dagger}_{-\mathbf{q}}(\omega)-\hat{f}_{\mathbf{q}}( \omega)-[\gamma^{*}_{x,-\mathbf{q}}(\omega)-\gamma_{x,\mathbf{q}}(\omega)][ \hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)+\hat{b}^{\dagger}_{\mathbf{q}_{p}- \mathbf{q}}(\omega)]\]
We introduce the notations:
\[\tilde{\gamma}_{x,\mathbf{q}}(\omega) = \gamma_{x,\mathbf{q}}(\omega)-\gamma^{*}_{x,-\mathbf{q}}(\omega) \tag{103}\] \[\gamma_{\mathrm{cav}} = \gamma_{c,\mathbf{q}}(\omega)\] (104) \[\mu_{s} = g_{s}n_{x}/2\] (105) \[\mu_{sx} = g_{x}n_{x}-g_{s}\sqrt{n_{x}n_{c}}e^{-i\phi} \tag{106}\]
where the photonic loss rate \(\gamma_{\mathrm{cav}}\) is taken as a real number (namely, we ignore any Lamb shift on the resonance frequencies that would stem from its imaginary part, and that is not observed in the experiment), and approximately frequency- and momentum-independent. The equations of motion can be finally summarized as:
\[\begin{pmatrix}\hat{a}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)\\ \hat{b}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)\\ \hat{a}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)\\ \hat{b}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)\end{pmatrix}=i[\omega\mathbf{1}-\tilde{M}_{\mathbf{q}}]^{-1}\begin{pmatrix}\hat{F}_{\mathbf{q}_{p}+\mathbf{q}}(\omega)\\ \hat{f}_{\mathbf{q}}(\omega)-\hat{f}^{\dagger}_{-\mathbf{q}}(\omega)\\ \hat{F}^{\dagger}_{\mathbf{q}_{p}-\mathbf{q}}(\omega)\\ \hat{f}^{\dagger}_{-\mathbf{q}}(\omega)-\hat{f}_{\mathbf{q}}(\omega)\end{pmatrix} \tag{107}\]
where we introduced the \(\tilde{M}_{\mathbf{q}}\) matrix:
\[\tilde{M}_{\mathbf{q}}=\begin{pmatrix}\omega^{(0)}_{c,\mathbf{q}_{p}+\mathbf{ q}}-i\gamma_{\mathrm{cav}}&\Omega/2-2\mu_{s}&0&-\mu_{s}\\ -2\mu_{s}+\Omega/2&\omega^{(0)}_{x,\mathbf{q}_{p}+\mathbf{q}}+2\mathrm{Re}( \mu_{sx})-i\tilde{\gamma}_{x}&-\mu_{s}&\mu_{sx}-i\tilde{\gamma}_{x}\\ 0&\mu_{s}&-\omega^{(0)}_{c,\mathbf{q}_{p}-\mathbf{q}}-i\gamma_{\mathrm{cav}}& -\Omega/2+2\mu_{s}\\ \mu_{s}&-\mu^{*}_{sx}+i\tilde{\gamma}_{x}&2\mu_{s}-\Omega/2&-\omega^{(0)}_{x, \mathbf{q}_{p}-\mathbf{q}}-2\mathrm{Re}(\mu_{sx})+i\tilde{\gamma}_{x}\end{pmatrix}\;. \tag{108}\]
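To make the structure of \(\tilde{M}_{\mathbf{q}}\) concrete, the sketch below builds it numerically and extracts its complex eigenvalues, whose real parts give the Bogoliubov dispersion branches and whose imaginary parts give their linewidths. All parameter values and the parabolic photon dispersion are illustrative assumptions, and \(\tilde{\gamma}_{x}\) is treated as a momentum- and frequency-independent constant for simplicity:

```python
import numpy as np

# Illustrative parameters in meV; dispersions and rates are assumptions, not fits.
Omega, gamma_cav, gamma_xt = 3.28, 0.10, 0.05
mu_s, mu_sx = 0.10, 0.05 + 0.0j
omega_c0 = lambda q: -1.0 + 0.5 * q**2     # parabolic cavity dispersion (assumed)
omega_x0 = lambda q: 1.355 + 0.0 * q       # nearly flat exciton dispersion (assumed)

def M_tilde(q, qp=0.0):
    """4x4 matrix of the coupled photon/exciton Bogoliubov problem, as given above."""
    wcp, wcm = omega_c0(qp + q), omega_c0(qp - q)
    wxp, wxm = omega_x0(qp + q), omega_x0(qp - q)
    return np.array([
        [wcp - 1j*gamma_cav, Omega/2 - 2*mu_s, 0, -mu_s],
        [-2*mu_s + Omega/2, wxp + 2*np.real(mu_sx) - 1j*gamma_xt, -mu_s, mu_sx - 1j*gamma_xt],
        [0, mu_s, -wcm - 1j*gamma_cav, -Omega/2 + 2*mu_s],
        [mu_s, -np.conj(mu_sx) + 1j*gamma_xt, 2*mu_s - Omega/2, -wxm - 2*np.real(mu_sx) + 1j*gamma_xt],
    ], dtype=complex)

for q in [0.1, 0.5, 1.0]:
    ev = np.linalg.eigvals(M_tilde(q))
    print(f"q = {q}: branch energies (meV) = {np.sort(ev.real).round(3)}")
```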
### Approximate decoupling between the upper and lower polariton
In this section, we derive an approximate model assuming that the lower polariton (LP) and upper polariton (UP) do not hybridize under the Bogoliubov transformation. This allows us to obtain analytical expressions for the frequencies and Bogoliubov coefficients which are in excellent agreement with the initial model. Using the further approximation that the Bogoliubov coefficients are real, which we shall justify, we show that all four Bogoliubov coefficients for the LP can be reconstructed from the experimentally measured ratio of ghost to normal branch signals. This allows us to reconstruct from the experimental data the quantity \(|u_{x,\mathbf{q}}-v_{x,-\mathbf{q}}|^{2}\), characterizing the decoupling from lattice phonons, discussed in section IV.2. Our starting point is the matrix \(M_{\mathbf{q}}\) [Eq. (15) of the main text] expressed as:
\[M_{\mathbf{q}}=\begin{pmatrix}A_{\mathbf{q}}&B\\ -B^{*}&-A_{-\mathbf{q}}\end{pmatrix} \tag{101}\]
with:
\[A_{\mathbf{q}}=A_{\mathbf{q}}^{*}=A_{\mathbf{q}}^{t}=\begin{pmatrix}\omega_{c,\mathbf{q}_{p}+\mathbf{q}}^{(0)}&\Omega/2-2\mu_{s}\\ \Omega/2-2\mu_{s}&\omega_{x,\mathbf{q}_{p}+\mathbf{q}}^{(0)}+2\mathrm{Re}(\mu _{sx})\end{pmatrix} \tag{102}\]
and:
\[B=B^{t}=\begin{pmatrix}0&-\mu_{s}\\ -\mu_{s}&\mu_{sx}\end{pmatrix} \tag{103}\]
For the sake of notation, we omit \(\mathbf{q}_{p}\) in the labels; in all expressions, one should make the substitution \(\pm\mathbf{q}\rightarrow\mathbf{q}_{p}\pm\mathbf{q}\). We first perform a unitary transformation to the (non-interacting) polariton basis. We introduce the \(2\times 2\) unitary matrix \(U_{0}\) that diagonalizes the non-interacting problem (\(\mu_{s}=\mu_{sx}=0\)), namely:
\[U_{0,\mathbf{q}}=\begin{pmatrix}C_{\mathbf{q}}&-X_{\mathbf{q}}\\ X_{\mathbf{q}}&C_{\mathbf{q}}\end{pmatrix} \tag{104}\]
with:
\[U_{0,\mathbf{q}}^{t}\begin{pmatrix}\omega_{c,\mathbf{q}}^{(0)}&\Omega/2\\ \Omega/2&\omega_{x,\mathbf{q}}^{(0)}\end{pmatrix}U_{0,\mathbf{q}}=\begin{pmatrix} \omega_{lp,\mathbf{q}}^{(0)}&0\\ 0&\omega_{up,\mathbf{q}}^{(0)}\end{pmatrix} \tag{105}\]
with the non-interacting resonance frequencies:
\[\omega_{lp/up,\mathbf{q}}^{(0)}=\frac{\omega_{c,\mathbf{q}}^{(0)}+\omega_{x, \mathbf{q}}^{(0)}}{2}\mp\frac{1}{2}\sqrt{[\omega_{c,\mathbf{q}}^{(0)}-\omega_ {x,\mathbf{q}}^{(0)}]^{2}+\Omega^{2}}\;. \tag{106}\]
The so-called Hopfield coefficients \(X_{\mathbf{q}}\) and \(C_{\mathbf{q}}\) are real, and define the lower and upper polariton operators via:
\[\hat{a}_{lp,\mathbf{q}} =C_{\mathbf{q}}\hat{a}_{\mathbf{q}}+X_{\mathbf{q}}\hat{b}_{ \mathbf{q}} \tag{107}\] \[\hat{a}_{up,\mathbf{q}} =-X_{\mathbf{q}}\hat{a}_{\mathbf{q}}+C_{\mathbf{q}}\hat{b}_{ \mathbf{q}} \tag{108}\]
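The non-interacting polariton frequencies and Hopfield coefficients introduced here can be obtained numerically by diagonalizing the \(2\times 2\) matrix, as in the following sketch (the detuning values are illustrative, not the experimental ones):

```python
import numpy as np

def hopfield(omega_c, omega_x, Omega):
    """Non-interacting polariton frequencies and Hopfield coefficients (C, X),
    obtained from the 2x2 exciton-photon matrix defined above."""
    H = np.array([[omega_c, Omega/2], [Omega/2, omega_x]])
    w, U = np.linalg.eigh(H)           # eigenvalues sorted ascending: (LP, UP)
    C, X = U[0, 0], U[1, 0]            # LP eigenvector = (C, X)
    if C < 0:                          # fix the overall sign convention
        C, X = -C, -X
    return w[0], w[1], C, X

# illustrative detunings (meV, rotating frame)
w_lp, w_up, C, X = hopfield(omega_c=1.0, omega_x=1.355, Omega=3.28)
print(f"omega_lp = {w_lp:.3f}, omega_up = {w_up:.3f}, |C|^2 = {C**2:.3f}, |X|^2 = {X**2:.3f}")
```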
We then write the \(M_{\mathbf{q}}\) matrix in this basis, namely:
\[M_{\mathbf{q}}^{\prime} =\begin{pmatrix}U_{0,\mathbf{q}}^{t}&0\\ 0&U_{0,-\mathbf{q}}^{t}\end{pmatrix}M_{\mathbf{q}}\begin{pmatrix}U_{0,\mathbf{ q}}&0\\ 0&U_{0,-\mathbf{q}}\end{pmatrix}\] \[=\begin{pmatrix}A_{\mathbf{q}}^{\prime}&B_{\mathbf{q}}^{\prime} \\ -(B_{-\mathbf{q}}^{\prime})^{*}&-A_{-\mathbf{q}}^{\prime}\end{pmatrix} \tag{109}\]
We have to evaluate the matrices \(A_{\mathbf{q}}^{\prime}=U_{0,\mathbf{q}}^{t}A_{\mathbf{q}}U_{0,\mathbf{q}}\) and \(B_{\mathbf{q}}^{\prime}=U_{0,\mathbf{q}}^{t}BU_{0,-\mathbf{q}}\). The final result is:
\[A_{\mathbf{q}}^{\prime}=\begin{pmatrix}\omega_{lp,\mathbf{q}}^{(0)}+2\mathrm{ Re}(gn)_{\mathrm{LL},\mathbf{q},\mathbf{q}}&2\mathrm{Re}(gn)_{\mathrm{LU}, \mathbf{q},\mathbf{q}}\\ 2\mathrm{Re}(gn)_{\mathrm{LU},\mathbf{q},\mathbf{q}}&\omega_{up,\mathbf{q}}^{ (0)}+2\mathrm{Re}(gn)_{\mathrm{UU},\mathbf{q},\mathbf{q}}\end{pmatrix} \tag{110}\]
\[B_{\mathbf{q}}^{\prime}=\begin{pmatrix}(gn)_{\mathrm{LL},\mathbf{q},-\mathbf{ q}}&(gn)_{\mathrm{LU},\mathbf{q},-\mathbf{q}}\\ (gn)_{\mathrm{LU},\mathbf{q},-\mathbf{q}}&(gn)_{\mathrm{UU},\mathbf{q},- \mathbf{q}}\end{pmatrix}\;, \tag{111}\]
where we define the (\(q\)-dependent) effective interactions as:
\[(gn)_{\mathrm{LL},\mathbf{q},\mathbf{q}^{\prime}} =-(C_{\mathbf{q}}X_{\mathbf{q}^{\prime}}+X_{\mathbf{q}}C_{\mathbf{ q}^{\prime}})\mu_{s}+X_{\mathbf{q}}X_{\mathbf{q}^{\prime}}\mu_{sx}\] \[(gn)_{\mathrm{UU},\mathbf{q},\mathbf{q}^{\prime}} =(C_{\mathbf{q}}X_{\mathbf{q}^{\prime}}+X_{\mathbf{q}}C_{\mathbf{ q}^{\prime}})\mu_{s}+C_{\mathbf{q}}C_{\mathbf{q}^{\prime}}\mu_{sx}\] \[(gn)_{\mathrm{LU},\mathbf{q},\mathbf{q}^{\prime}} =(X_{\mathbf{q}}X_{\mathbf{q}^{\prime}}-C_{\mathbf{q}}C_{\mathbf{ q}^{\prime}})\mu_{s}+X_{\mathbf{q}}C_{\mathbf{q}^{\prime}}\mu_{sx}\]
We now make the simplification that the upper and lower polaritons do not hybridize, namely we set \((gn)_{\mathrm{LU}}=0\). The \(M_{\mathbf{q}}^{\prime}\) matrix then splits into two independent \(2\times 2\) matrices, describing independent Bogoliubov transformations for the upper and lower polaritons. We have verified that, in the conditions of the experiment, the coefficients of the \((2\times 2)\) Bogoliubov transformations found in this approximation are almost identical to the exact \((4\times 4)\) ones. We then have two matrices to diagonalize:
\[M_{lp,\mathbf{q}}^{\prime}=\begin{pmatrix}a_{\mathbf{q}}&b_{\mathbf{q}}\\ -b_{-\mathbf{q}}^{*}&-a_{-\mathbf{q}}\end{pmatrix} \tag{112}\]
with:
\[a_{\mathbf{q}} :=\omega_{lp,\mathbf{q}}^{(0)}+2\mathrm{Re}(gn)_{\mathrm{LL}, \mathbf{q},\mathbf{q}} \tag{113}\] \[b_{-\mathbf{q}} :=(gn)_{\mathrm{LL},\mathbf{q},-\mathbf{q}} \tag{114}\]
for the LP, and similarly for the UP with \(a_{\mathbf{q}}=\omega_{up,\mathbf{q}}^{(0)}+2\mathrm{Re}(gn)_{\mathrm{UU},\mathbf{q},\mathbf{q}}\) and \(b_{\mathbf{q}}=(gn)_{\mathrm{UU},\mathbf{q},-\mathbf{q}}\). The diagonalization is of the form:
\[M_{lp,\mathbf{q}}^{\prime}=\begin{pmatrix}u_{\mathbf{q}}^{*}&-v_{\mathbf{q}} \\ -v_{-\mathbf{q}}^{*}&u_{-\mathbf{q}}\end{pmatrix}\begin{pmatrix}\omega_{lp, \mathbf{q}}&0\\ 0&-\omega_{lp,-\mathbf{q}}\end{pmatrix}\begin{pmatrix}u_{\mathbf{q}}&v_{-\mathbf{ q}}^{*}\\ v_{\mathbf{q}}^{*}&u_{-\mathbf{q}}^{*}\end{pmatrix} \tag{115}\]
where we introduce the \(u\) and \(v\) Bogoliubov coefficients associated to the lower polariton. As we shall only be interested in the lower-polariton \(u,v\) coefficients, we omit their \(lp\) labels. \(\omega_{lp,\mathbf{q}}\) gives the LP dispersion relation in the presence of interactions. Similarly, for the UP the eigenvalues of \(M_{up,\mathbf{q}}^{\prime}\), namely \(\omega_{up,\mathbf{q}}\) and \(-\omega_{up,-\mathbf{q}}\), give the UP dispersion relation in the presence of interactions. Given that \(\mathrm{Re}(gn)_{\mathrm{UU}}\ll\omega_{up,\mathbf{q}}^{(0)}\) for all relevant \(\mathbf{q}\)'s, the matrix \(M_{up,\mathbf{q}}^{\prime}\) is almost diagonal, and therefore the Bogoliubov eigenmodes are almost identical to the non-interacting upper polaritons (the Bogoliubov transformation diagonalizing \(M_{up,\mathbf{q}}^{\prime}\) is close to the identity, something which we verified numerically).
We now explicitly proceed to the diagonalization of the \(M_{lp,\mathbf{q}}^{\prime}\) matrix in Eq. (112). Notice that \(b_{-\mathbf{q}}=b_{\mathbf{q}}\). We then write:
\[M_{lp,\mathbf{q}}^{\prime}=\frac{a_{\mathbf{q}}-a_{-\mathbf{q}}}{2}\mathbf{1}+ \begin{pmatrix}\bar{a}_{\mathbf{q}}&b_{\mathbf{q}}\\ -b_{\mathbf{q}}^{*}&-\bar{a}_{\mathbf{q}}\end{pmatrix} \tag{116}\]
with \(\bar{a}_{\mathbf{q}}=(a_{\mathbf{q}}+a_{-\mathbf{q}})/2\). The eigenvalues of \(M_{lp,\mathbf{q}}^{\prime}\) are then \(\omega_{lp,\mathbf{q}}\) and \(-\omega_{lp,-\mathbf{q}}\) with:
\[\omega_{lp,\mathbf{q}}=\frac{a_{\mathbf{q}}-a_{-\mathbf{q}}}{2}+\sqrt{\bar{a}_{\mathbf{q}}^{2}-|b_{\mathbf{q}}|^{2}}\;. \tag{117}\]
We introduce the notation \(\bar{\omega}_{\bf q}=\sqrt{\bar{a}_{\bf q}^{2}-|b_{\bf q}|^{2}}\). Notice that we focus on the regime where \(\bar{a}_{\bf q}^{2}>|b_{\bf q}|^{2}\), which is always the case in the regime accessible to our measurements. Otherwise, the eigenvalues have an imaginary part, and we find that \(|u_{\bf q}|^{2}=|v_{\bf q}|^{2}\); in this case, which shall not be discussed further in this paper, the eigenmodes do not describe proper bosonic excitations. Solving the eigenvalue equations using the conventions of Eq. (101), the convention that \(u_{\bf q}\) is real, and the normalization condition \(|u_{\bf q}|^{2}-|v_{\bf q}|^{2}=1\), we find the coefficients:
\[u_{\bf q}=u_{-{\bf q}}=\sqrt{\frac{\bar{a}_{\bf q}}{2\bar{\omega }_{\bf q}}+\frac{1}{2}} \tag{102}\] \[v_{\bf q}=v_{-{\bf q}}=\frac{b_{\bf q}}{|b_{\bf q}|}\sqrt{\frac {\bar{a}_{\bf q}}{2\bar{\omega}_{\bf q}}-\frac{1}{2}} \tag{103}\]
Notice in particular that the \(v_{\bf q}\) coefficient is complex, with a phase given by the phase of \(b_{\bf q}=(gn)_{\rm LL,{\bf q},-{\bf q}}\). In practice, though, this phase is negligible, and we take \(v_{\bf q}\) real.
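A short numerical sketch of this diagonalization, with illustrative values of \(a_{\mathbf{q}}\) and \(b_{\mathbf{q}}\) (not fitted to the experiment), is given below; it also checks the normalization \(|u_{\mathbf{q}}|^{2}-|v_{\mathbf{q}}|^{2}=1\):

```python
import numpy as np

def bogoliubov_uv(a_q, a_mq, b_q):
    """Lower-polariton Bogoliubov frequency and (u, v) amplitudes from the
    expressions above; valid in the regime abar^2 > |b_q|^2 with b_q != 0."""
    abar = 0.5 * (a_q + a_mq)
    wbar = np.sqrt(abar**2 - abs(b_q)**2)
    omega_lp = 0.5 * (a_q - a_mq) + wbar
    u = np.sqrt(abar / (2*wbar) + 0.5)
    v = (b_q / abs(b_q)) * np.sqrt(abar / (2*wbar) - 0.5)
    return omega_lp, u, v

# illustrative numbers (meV)
omega_lp, u, v = bogoliubov_uv(a_q=0.40, a_mq=0.40, b_q=0.15)
print(f"omega_lp = {omega_lp:.3f},  |u|^2 - |v|^2 = {abs(u)**2 - abs(v)**2:.3f}")  # -> 1.0
```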
The Bogoliubov eigenmode of the lower polariton is described by the annihilation operator (recall that a non-zero pump momentum \({\bf q}_{\bf p}\) should be re-introduced in these expressions: \(\pm{\bf q}\to{\bf q}_{p}\pm{\bf q}\)):
\[\hat{\beta}_{lp,{\bf q}}=u_{\bf q}\hat{a}_{lp,{\bf q}}+v_{-{\bf q }}\hat{a}_{lp,-{\bf q}}^{\dagger} \tag{104}\] \[=u_{\bf q}(C_{\bf q}\hat{a}_{\bf q}+X_{\bf q}\hat{b}_{\bf q})+v_{ -{\bf q}}(C_{-{\bf q}}\hat{a}_{-{\bf q}}^{\dagger}+X_{-{\bf q}}\hat{b}_{-{\bf q }}^{\dagger}) \tag{105}\]
We therefore have the identification:
\[u_{lp,c,{\bf q}} = u_{\bf q}C_{\bf q}\] \[u_{lp,x,{\bf q}} = u_{\bf q}X_{\bf q}\] \[v_{lp,c,-{\bf q}} = v_{-{\bf q}}C_{-{\bf q}}\] \[v_{lp,x,-{\bf q}} = v_{-{\bf q}}X_{-{\bf q}} \tag{106}\]
Therefore, the experimental measurement of \(v_{lp,c,-q}/u_{lp,c,q}\) gives direct access to the ratio \(v_{-{\bf q}}/u_{\bf q}\) of the lower-polariton Bogoliubov transformation. Assuming that \(v_{\bf q}=v_{-{\bf q}}\) is real (an excellent approximation in the regime of the experiment), and using the normalization \(u_{\bf q}^{2}-v_{\bf q}^{2}=1\), we then reconstruct both \(u_{\bf q}\) and \(v_{\bf q}\). Using the knowledge of the Hopfield coefficients \(X_{\bf q},C_{\bf q}\) of the non-interacting problem, we may then reconstruct all four Bogoliubov coefficients \(u_{c,{\bf q}},u_{x,{\bf q}},v_{c,{\bf q}},v_{x,{\bf q}}\), and in particular the thermal decoupling coefficient \(|u_{x,{\bf q}}-v_{x,-{\bf q}}|^{2}\), which is discussed in section IV.2. The quantitative agreement shown there further confirms both the validity of the model and the approximate decoupling between LP and UP discussed in this section.
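The reconstruction procedure described in this paragraph can be summarized by the following sketch; the measured ratio and the photonic Hopfield coefficient are illustrative placeholders rather than experimental values:

```python
import numpy as np

def reconstruct_bogoliubov(ratio, C):
    """Reconstruct (u_c, u_x, v_c, v_x) for the lower polariton from the measured
    ghost/normal ratio |v/u| and the photonic Hopfield coefficient C, assuming
    real coefficients, u^2 - v^2 = 1 and X_q ~ X_{-q}."""
    X = np.sqrt(1.0 - C**2)
    u = 1.0 / np.sqrt(1.0 - ratio**2)
    v = ratio * u
    coeffs = (u * C, u * X, v * C, v * X)      # u_c, u_x, v_c, v_x
    F_B = (u * X - v * X)**2 / X**2            # thermal decoupling factor of Sec. IV.2
    return coeffs, F_B

coeffs, F_B = reconstruct_bogoliubov(ratio=0.15, C=0.74)   # illustrative inputs
print("u_c, u_x, v_c, v_x =", np.round(coeffs, 3), "  F_B =", round(F_B, 3))
```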
### Estimating the exciton-phonon interaction \(n_{x}\gamma_{xp}(\omega)\)
In this Appendix, we provide further details and quantitative estimates for the exciton-phonon interaction strength \(\gamma_{xp}(\omega)\) introduced in Eq. (27) of the main text. In the limit of strong confinement of excitons in the \(z\)-direction of the quantum well, the exciton-phonon coupling amplitude is given by:
\[\hbar g_{xp}({\bf q},k_{z})=\sqrt{\frac{\hbar\sqrt{{\bf q}^{2}+k_{z}^{2}}}{2 \rho Vv_{s}}}(V_{e}I_{e}^{p}I_{e}^{z}-V_{h}I_{h}^{p}I_{h}^{z}) \tag{107}\]
where \(\rho=5.3\times 10^{3}\,\)kg.m\({}^{-3}\) is the GaAs density, \(v_{s}=4.7\times 10^{3}\,\)m.s\({}^{-1}\) is the longitudinal acoustic speed of sound in GaAs, \(V\) is a quantization volume, and \(V_{e,h}=(-7,2.7)\,\)eV are the deformation potentials in GaAs for band-edge electrons and holes respectively.
\[I_{e}^{z}\simeq I_{h}^{z}\simeq\exp(-k_{z}^{2}/q_{z,\text{cut}}^{2}) \tag{108}\]
is a Gaussian approximation of the \(k_{z}\)-component of the electron and hole envelope wavefunction which is assumed to be mostly determined by the quantum well thickness \(L_{z}\), imposing the cutoff wavevector \(q_{z,\text{cut}}\approx 0.9\times 2\pi/L_{z}\).
\[I_{e(h)}^{p}=[1+(m_{e(h)}/(2M)qa_{B})^{2}]^{-3/2} \tag{109}\]
is the in-plane component of the electron-hole wavefunction resulting from its bound state character of Bohr radius \(a_{B}\simeq 10\,\)nm. \(m_{e,h}\) are the electron and hole effective masses and \(M=m_{e}+m_{h}\) is the excitonic mass. Notice that \(g_{xp}({\bf q},k_{z})=g_{xp}({\bf q},-k_{z})\), which allows us to replace unambiguously \(k_{z}\) by \(\omega\) using the dispersion relation \(\omega=v_{s}\sqrt{{\bf q}^{2}+k_{z}^{2}}\). In the parameter regime of the experiment, \(q\ll k_{z}\) so that \(\omega\approx v_{s}|k_{z}|\). Furthermore, \([m_{e(h)}/(2M)qa_{B}]^{2}\ll 1\). Therefore the coupling simplifies to:
\[\hbar g_{xp}(\omega)=\sqrt{\frac{\hbar\omega}{2\rho Vv_{s}^{2}}}(V_{e}-V_{h}) \exp(-\frac{\omega^{2}}{v_{s}^{2}q_{z,\text{cut}}^{2}}). \tag{110}\]
An explicit expression for \(\gamma_{xp}(\omega)\) in Eq. (27) can now be determined. We define the excitonic density in the quantum well as \(n_{x}=N_{x}/A\), where \(A\) is the condensate area and \(V=AL_{z}\). The density of states \(\rho^{\prime}_{{\bf q},\omega}\) counts the number of phonon wavevectors \(k_{z}\) matching the condition \(\omega=\omega^{\rm(ph)}_{{\bf q},k_{z}}=v_{s}\sqrt{{\bf q}^{2}+k_{z}^{2}}\), namely it is such that:
\[\rho^{\prime}_{{\bf q},\omega}d\omega=(L_{z}/2\pi)dk_{z}. \tag{111}\]
Using the fact that in our experiment \(v_{s}|{\bf q}|\ll\omega\), we get:
\[\rho^{\prime}_{{\bf q},\omega}\approx\Theta(\omega)\frac{L_{z}}{2\pi v_{s}} \tag{112}\]
which is approximately constant (\(\Theta\) is the Heaviside step function). We have further verified numerically the validity of this approximation. So finally,
\[\gamma_{xp}(\omega)n_{x}=\frac{n_{x}L_{z}A}{2v_{s}}\Theta(\omega)g_{xp}(\omega) ^{2}\, \tag{113}\]
that can be more explicitly given as
\[\gamma_{xp}(\omega)n_{x}=\frac{\omega n_{x}}{4v_{s}^{3}\rho\hbar}(V_{e}-V_{h})^{2 }\exp\bigg{(}-\frac{2\omega^{2}}{v_{s}^{2}q_{z,\text{cut}}^{2}}\bigg{)}. \tag{114}\]
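For a rough numerical feel of this rate, the sketch below evaluates it with the GaAs parameters quoted above; the exciton density \(n_{x}\) is an assumption chosen for illustration only, and the effective well thickness is the fitted value discussed below:

```python
import numpy as np

# GaAs material parameters from the text; n_x is an assumed illustrative value.
hbar  = 1.054571817e-34        # J s
e     = 1.602176634e-19        # J per eV
rho   = 5.3e3                  # kg m^-3
v_s   = 4.7e3                  # m s^-1
V_e, V_h = -7.0 * e, 2.7 * e   # deformation potentials (J)
L_z   = 8.5e-9                 # effective quantum-well thickness (fitted value, see text)
q_cut = 0.9 * 2*np.pi / L_z    # cutoff wavevector
n_x   = 1.0e14                 # exciton areal density in m^-2 (assumed, ~100 per um^2)

def gamma_xp_nx(omega):
    """Phonon-induced rate n_x * gamma_xp(omega) in s^-1, from the formula above."""
    return (omega * n_x / (4 * v_s**3 * rho * hbar) * (V_e - V_h)**2
            * np.exp(-2 * omega**2 / (v_s**2 * q_cut**2)))

for E_meV in [0.1, 0.5, 1.0]:                      # Bogoliubov excitation energy
    omega = E_meV * 1e-3 * e / hbar                # rad/s
    print(f"E = {E_meV} meV  ->  n_x*gamma_xp = {gamma_xp_nx(omega):.3e} s^-1")
```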
Using this expression to fit the decay of the integrated intensity versus angle for different temperatures in the vanishing-interaction regime, and relying on the fact that in this regime of very low laser excitation the temperature measured in the cryostat provides a reliable measurement of the phonon temperature in the microcavity (the temperature probe is glued next to the microcavity in the same way), we could determine that \(q_{z,\mathrm{cut}}\) is best fitted with \(L_{z}=8.5\,\mathrm{nm}\) instead of the nominal quantum well thickness \(L_{z}=17\,\mathrm{nm}\). A good reason for this mismatch is that the strong-confinement assumption \(a_{B}>L_{z}\), with \(a_{B}=10\,\mathrm{nm}\), is in fact not well satisfied in our thick quantum well. In the weaker confinement regime where \(a_{B}<L_{z}\), the bound electron-hole wavefunction provides the shortest length scale and hence contributes more to \(q_{z,\mathrm{cut}}^{2}\) than \(L_{z}\) does. The apparent \(L_{z}\) in the strong-confinement description is thus expected to decrease by a factor of the order of \(\sim L_{z}/a_{B}\). Another contribution to this reduced \(L_{z}\) is that the expression \(q_{z,\mathrm{cut}}\sim 0.9\times 2\pi/L_{z}\) is obtained by fitting the exact wavefunction in a finite-height quantum well with a vertical transition edge between the barriers and the quantum well, while in reality the transition edges are typically much smoother than that due to Indium diffusion, which can result in a tighter confinement length along \(z\) for the ground state (see e.g. [58]).
|
2304.07544
|
Explaining Giant Apparent $\mathrm{p}K_\mathrm{A}$ Shifts in Weak
Polyelectrolyte Brushes
|
Recent experiments on weak polyelectrolyte brushes found marked shifts in the
effective p$K_\mathrm{A}$ that are linear in the logarithm of the salt
concentration. Comparing explicit-particle simulations with mean-field
calculations we show that for high grafting densities the salt concentration
effect can be explained using the ideal Donnan theory, but for low grafting
densities the full shift is due to a combination of the Donnan effect and the
polyelectrolyte effect. The latter originates from electrostatic correlations
which are neglected in the Donnan picture and which are only approximately
included in the mean-field theory. Moreover, we demonstrate that the magnitude
of the polyelectrolyte effect is almost invariant with respect to salt
concentration but depends on the grafting density of the brush. This invariance
is due to a complex cancellation of multiple effects. Based on our results, we
show how the experimentally determined p$K_\mathrm{A}$ shifts may be used to
infer the grafting density of brushes, a parameter that is difficult to measure
directly.
|
David Beyer, Peter Košovan, Christian Holm
|
2023-04-15T12:32:03Z
|
http://arxiv.org/abs/2304.07544v2
|
# Explaining Giant Apparent \(\mathrm{p}K_{\mathrm{A}}\) Shifts in Weak Polyelectrolyte Brushes
###### Abstract
Recent experiments on weak polyelectrolyte brushes found marked shifts in the effective \(\mathrm{p}K_{\mathrm{A}}\) that are linear in the logarithm of the salt concentration. Comparing explicit-particle simulations with mean-field calculations we show that for high grafting densities the salt concentration effect can be explained using the ideal Donnan theory, but for low grafting densities the full shift is due to a combination of the Donnan effect and the polyelectrolyte effect. The latter originates from electrostatic correlations which are neglected in the Donnan picture and which are only approximately included in the mean-field theory. Moreover, we demonstrate that the magnitude of the polyelectrolyte effect is almost invariant with respect to salt concentration but depends on the grafting density of the brush. This invariance is due to a complex cancellation of multiple effects. Based on our results, we show how the experimentally determined \(\mathrm{p}K_{\mathrm{A}}\) shifts may be used to infer the grafting density of brushes, a parameter that is difficult to measure directly.
_Introduction._ Recent experiments by Ferrand-Drake del Castillo et al. [1] have demonstrated marked shifts in the effective \(\mathrm{p}K_{\mathrm{A}}\) of weak polyelectrolyte brushes, tunable by varying the salt concentration. Furthermore, they used three different polymers, including both acidic and basic polyelectrolytes (PEs), to show that these shifts are approximately linear in the logarithm of the salt concentration. In a related study [2], they demonstrated that the tunable response of these brushes to variations in pH and salt concentration makes them excellent candidates for high-capacity protein capture and release through pH adjustment.
Similar \(\mathrm{p}K_{\mathrm{A}}\) shifts have been predicted by mean-field models and could qualitatively be explained using the ideal Donnan theory [3; 4; 5; 6; 7; 8]. In the Donnan theory, the polyelectrolyte brush is approximated as a homogeneous phase in equilibrium with the bulk solution, which acts as an infinite reservoir of ions. By assuming that the brush confines all of its counterions, a Donnan potential emerges between the brush and the bulk solution. If the concentration of the polymer-bound charges is much higher than the ionic strength of the solution, \(I\), the Donnan potential can be approximated as
\[\psi^{\mathrm{Don}}\approx\frac{k_{\mathrm{B}}T}{z_{\mathrm{mon}}e}\ln\left( \frac{\alpha c_{\mathrm{mon}}}{I}\right). \tag{1}\]
Here, \(\alpha\) denotes the degree of ionization of the brush, \(c_{\mathrm{mon}}\) is the total concentration of monomeric units inside the brush, and \(z_{\mathrm{mon}}=\pm 1\) stands for the valency in the ionized state. If we define the effective \(\mathrm{p}K_{\mathrm{A}}\) as the pH value at which the polymer is 50% ionized, we can express its shift as
\[\mathrm{p}K_{\mathrm{A}}^{\mathrm{eff}}\equiv\mathrm{pH}\left(\alpha=0.5 \right)=\mathrm{p}K_{\mathrm{A}}+\Delta\stackrel{{\mathrm{ideal}}} {{=}}\mathrm{p}K_{\mathrm{A}}+\Delta^{\mathrm{Don}}. \tag{2}\]
In an ideal system, the shift \(\Delta\) depends only on the Donnan contribution \(\Delta^{\mathrm{Don}}\). However, in general, also electrostatic interactions contribute to the shift. Using Equation 1, the Donnan contribution can be expressed as
\[\begin{split}\Delta^{\mathrm{Don}}&\approx-z_{ \mathrm{mon}}\log_{10}\left(\frac{c_{\mathrm{mon}}}{2I}\right)\\ &=z_{\mathrm{mon}}\biggl{[}\log_{10}\left(\frac{I}{c^{\odot}} \right)-\log_{10}\left(\frac{c_{\mathrm{mon}}}{2c^{\odot}}\right)\biggr{]}, \end{split} \tag{3}\]
where \(c^{\odot}\) is an arbitrary reference concentration, usually chosen as \(c^{\odot}=1\,\mathrm{M}\). If the concentration of ionized monomers at \(\alpha=0.5\) does not significantly change with the ionic strength, then the second term in Equation 3 is constant, and \(\Delta^{\mathrm{Don}}\) scales linearly with the logarithm of the ionic strength, as confirmed by recent experimental measurements of the Donnan potential in polyelectrolyte membranes [9]. Considering these findings, the experiments by Ferrand-Drake del Castillo et al. [1] seem to confirm that the Donnan approximation can be used to describe \(\mathrm{p}K_{\mathrm{A}}\) shifts in polyelectrolyte brushes. However, in this letter, we show otherwise. Our results demonstrate that the Donnan approximation is not necessarily valid and that fully explaining the effect of ionic strength on \(\mathrm{p}K_{\mathrm{A}}\) requires going beyond this approximation.
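To illustrate the magnitude of the ideal Donnan contribution, the following sketch evaluates Equation 3 for a few ionic strengths; the monomer concentration and the acidic valency \(z_{\mathrm{mon}}=-1\) are illustrative choices, not values taken from the experiments:

```python
import numpy as np

def delta_donnan(ionic_strength, c_mon, z_mon=-1):
    """Ideal Donnan contribution to the pKA shift, Equation 3 (concentrations in mol/l)."""
    return z_mon * (np.log10(ionic_strength) - np.log10(c_mon / 2.0))

# illustrative case: a polyacid brush (z_mon = -1) with c_mon = 1 M of monomeric units
for I in [1e-4, 1e-3, 1e-2, 1e-1]:
    print(f"I = {I:.0e} M  ->  Delta_Don = {delta_donnan(I, c_mon=1.0):+.2f}")
```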
Numerical mean-field models have been introduced to alleviate some approximations used in the Donnan theory [3; 4; 6; 7; 10; 11; 12; 13]. Within the mean-field approximation, particle-particle interactions are replaced by interactions with an average field, proportional to the mean density at a specific location. When applied to polyelectrolyte brushes, these models explicitly account for density variations perpendicular to the surface, while averaging over density variations parallel to the surface. These models also account for electrostatic interactions on the mean-field level by solving the Poisson-Boltzmann equation. Lastly, local density fields enable us to calculate local variations in ionization states as well. The density profiles obtained from the mean-field calculations provide a more refined description of the brush-solution interface than the ideal Donnan theory. Nevertheless, the
mean-field approximation works well for polyelectrolyte systems only when inter-chain interactions prevail over intra-chain interactions [14]; otherwise, the applicability of the mean-field theory is limited.
Based on the mean-field picture, we can distinguish brushes in an osmotic or polyelectrolyte regime [6]. In the osmotic regime, most counterions are confined to the brush. Consequently, electrostatic interactions are screened, and the brush properties are controlled by the osmotic pressure of counterions and Donnan partitioning. In the polyelectrolyte regime, most counterions are released into the bulk solution, and the brush properties are controlled by electrostatic interactions within and in-between the chains. Thus, the mean-field theory should accurately describe brushes in the osmotic regime but not so well in the polyelectrolyte regime.
_Simulation model and method._ To go beyond both the Donnan and the mean-field approximations, we constructed a coarse-grained, particle-based simulation model of a polyelectrolyte brush, as shown in Figure 1. Our model consisted of 25 chains, each of which contained 25 monomers. The first monomer of each chain was fixed to an immobile flat surface. Because experimental grafting densities were not available [1], we performed simulations at two different grafting densities within a plausible range, \(\Gamma=0.79/\mathrm{nm}^{2}\) and \(\Gamma=0.079/\mathrm{nm}^{2}\). The polymer chains were modelled using a generic bead-spring model, derived from the Kremer-Grest model [15]. Small ions (Na\({}^{+}\), Cl\({}^{-}\), H\({}^{+}\), OH\({}^{-}\)) were represented as explicit particles, whereas the solvent effects were only included implicitly through the relative dielectric constant. The interactions included short-range steric repulsion between all particles, and the full unscreened Coulomb potential between charged particles. To represent a flat surface, we used 2D-periodic boundary conditions in a slab. All monomers were treated as weak acids with p\(K_{\mathrm{A}}=4.0\) as a typical value for acrylic polymers [16]. The ionization equilibrium of the acidic monomers and the exchange of small ions with the bulk were simulated using the grand-reaction method [17]. Full technical details of the model and the simulation method are provided in the ESI. All simulations were performed using the open-source simulation software ESPResSo [18].
_Results._ Figure 2 confirms that the simulation results qualitatively reproduce the experimentally observed p\(K_{\mathrm{A}}\) shifts as a function of salt concentration. Additionally, the simulation results at high grafting density (\(\Gamma=0.79/\mathrm{nm}^{2}\), Figure 2a) are quantitatively matched by mean-field calculations for the same brush using the Scheutjens-Fleer self-consistent field (SCF) theory (see ESI for technical details). Simulation results for the same system at a low grafting density (\(\Gamma=0.079/\mathrm{nm}^{2}\), Figure 2b) also display qualitatively similar shifts, albeit greater than those obtained from SCF calculations. This discrepancy between mean-field calculations and simulations indicates that an additional effect contributes to the results, and that this effect is neglected by both the Donnan theory and the mean-field approximation.
Figure 3 shows that both datasets, at high and low grafting density, exhibit the same Donnan-like variation of the p\(K_{\mathrm{A}}\) shift. At a high grafting density, simulations and mean-field calculations yield the same p\(K_{\mathrm{A}}\) shift. In addition, they match the p\(K_{\mathrm{A}}\) shift of PMAA brushes determined experimentally in Ref. [1]. At a low grafting density, the slope of the p\(K_{\mathrm{A}}\) shift as a function of ionic strength is the same in mean-field calculations and simulations, but the absolute shift predicted by the mean-field theory is significantly smaller. This comparison demonstrates that the Donnan-like variation of the p\(K_{\mathrm{A}}\) shift does not imply that the Donnan theory fully describes the problem.
For analyzing the p\(K_{\mathrm{A}}\) shift, it is important to recognize that the ionic strength does not necessarily have the same value as the salt concentration. While they differ only negligibly at high salt concentrations, at low salt concentrations and extreme pH-values, the ionic strength is higher than the salt concentration due to the significant contribution to screening of H\({}^{+}\) or OH\({}^{-}\) ions.
Figure 1: (a): Snapshot of the brush model (small ions are not shown for clarity). Black lines bound to the simulation box and additional periodic images in the \(x\)-\(y\)-plane are shown to illustrate the periodic boundary conditions. The snapshot was produced using VMD [19]. (b): Schematic representation of the simulation setup: simulation box containing the brush, coupled to a reservoir at a fixed pH and salt concentration.
Therefore, plotting the p\(K_{\rm A}\) shift as a function of the salt concentration (Figure S3) results in a non-linear dependence on the logarithm of the salt concentration at very low salt concentrations. Nevertheless, this dependence becomes linear again when plotted as a function of ionic strength instead of salt concentration. This subtle difference between the roles of ionic strength and salt concentration was not observed in the experiments of Ref. [1] because they did not use sufficiently low salt concentrations. However, it can clearly be observed in our simulations at \(c_{\rm salt}\lesssim 10^{-5}\,\)M.
To explain the additional contribution to the p\(K_{\rm A}\) shift, beyond effects accounted for by the Donnan and the SCF mean-field approximation, we use concepts previously introduced in our studies on weak polyelectrolyte hydrogels [20; 17; 21]. In those studies, we showed that the p\(K_{\rm A}\) shift in two-phase systems can be decomposed into two contributions: \(\Delta=\Delta^{\rm PE}+\Delta^{\rm Don}\). The first term, \(\Delta^{\rm PE}\), expresses the contribution of electrostatic interactions, termed the _polyelectrolyte effect_. The second term, \(\Delta^{\rm Don}\), expresses the contribution of the unequal partitioning of H\({}^{+}\) ions, termed the _Donnan effect_. The magnitude of these effects depends on the parameters of the system.
To quantify the Donnan effect, we used the density profiles of H\({}^{+}\) ions to determine differences in their concentrations inside and outside the brush. For convenience, we quantified this difference using the 'local pH', defined as
\[\rm local\ pH\equiv-\log_{10}\left(\frac{\langle c_{H+}\rangle_{brush}}{1\,M} \right)\neq pH. \tag{4}\]
The inequality sign in this equation is used to emphasize that the 'local pH' is different from the pH in the bulk. Using this definition, we calculated the local pH inside the brush from the average concentration of H\({}^{+}\) ions up to a distance from the surface of one half of the mean end-to-end distance of the chains at a specific pH and \(c_{\rm salt}\). As evidenced by the monomer density profiles (cf. Figure S5), such a calculation ensures that we average over a range of distances deep inside the brush, where the H\({}^{+}\) concentration is almost constant, thus avoiding artifacts resulting from the brush-solution interface.
Figure 3(a) shows how the difference between the 'local pH' inside and outside the brush varies with the pH in the reservoir. This difference is the quantitative measure of the Donnan effect. It increases as a function of the pH, concomitantly with the increase in the degree of ionization (cf. Figure 2). At pH \(>9\) this difference decreases again although the chains remain fully ionized, because, at a constant salt concentration, an increase in pH necessarily entails an increase in ionic strength. Furthermore, this difference is larger at lower salt concentrations, in line with Equation 3.
Figure 3: The p\(K_{\rm A}\) shift of brushes with different grafting densities as a function of ionic strength. Solid symbols represent data from particle-based simulations, whereas empty symbols represent SCF results.
Figure 2: Titration curve of the weak polyelectrolyte brush with different grafting densities as a function of the pH and for different salt concentrations. Markers correspond to simulation results, while the dashed lines indicate the SCF results. As a reference, the ideal titration curve is shown as described by the Henderson-Hasselbalch equation (HH).
By plotting the degree of ionization of the brush as a function of local pH inside the brush, we effectively subtracted the Donnan effect, so that only the polyelectrolyte effect remains. In Figure 4b, we show that this polyelectrolyte effect is significant at a low grafting density and accounts for a \(\mathrm{p}K_{\mathrm{A}}\) shift of approximately 0.5 units of pH. Figure S7 in the ESI demonstrates that this effect is less significant at a high grafting density. Regardless of the grafting density, the polyelectrolyte effect remains virtually unchanged at different salt concentrations. This invariance stems from a rather complex cancellation of effects, involving concomitant changes in the ionization degree and brush swelling. Nevertheless, the magnitude of the polyelectrolyte effect matches the difference between the \(\mathrm{p}K_{\mathrm{A}}^{\mathrm{eff}}\) calculated from our simulations and from SCF calculations (ca. 0.1 for the high grafting density and 0.5 for the low grafting density, cf. Figure 3). Thus, we have confirmed numerically that the polyelectrolyte effect is the additional contribution to the \(\mathrm{p}K_{\mathrm{A}}\) shift.
The SCF calculations fully account for the Donnan effect but only approximately account for the polyelectrolyte effect. Therefore, they quantitatively match simulations at a high grafting density, when the Donnan effect prevails and the polyelectrolyte effect is negligible. This agreement weakens at a low grafting density as the polyelectrolyte effect becomes stronger. These results corroborate our previous findings, demonstrating that SCF can accurately predict the ionization of polyelectrolytes when inter-chain interactions prevail, and that this prediction successively becomes worse as intra-chain interactions get stronger [14].
Our observations of the role of the polyelectrolyte and the Donnan effect could be exploited to infer the grafting density of polyelectrolyte brushes from measured \(\mathrm{p}K_{\mathrm{A}}\) shifts. In the ESI, we show that, when the Donnan effect prevails, the grafting density can be approximated by
\[\Gamma^{\mathrm{app}}\approx\frac{2Ih_{\mathrm{brush}}N_{\mathrm{A}}}{N}\cdot 1 0^{\Delta^{\mathrm{Don}}} \tag{5}\]
where \(N_{\mathrm{A}}\) is the Avogadro constant. If we can independently determine the height of the brush, \(h_{\mathrm{brush}}\), the number of monomers per chain, \(N\), and the \(\mathrm{p}K_{\mathrm{A}}\) shift, \(\Delta\), we can calculate the apparent grafting density using Equation 5. However, the Donnan and polyelectrolyte effect cannot be extracted separately from the experimentally determined \(\mathrm{p}K_{\mathrm{A}}\) shift \(\Delta\). Therefore, in practice, one has to use \(\Delta\) instead of \(\Delta^{\mathrm{Don}}\) in Equation 5, assuming that the Donnan effect controls the \(\mathrm{p}K_{\mathrm{A}}\) shift. By applying this approach to the densely grafted brush simulated here, we obtain the apparent grafting density \(\Gamma^{\mathrm{app}}\approx 0.96/\mathrm{nm}^{2}\), which is close to the correct value of \(\Gamma=0.79/\mathrm{nm}^{2}\). For the less dense brush, we obtain \(\Gamma^{\mathrm{app}}\approx 0.25/\mathrm{nm}^{2}\) which is approximately three times the correct value of \(\Gamma=0.079/\mathrm{nm}^{2}\). In general, the apparent grafting density determined using this approach is always an upper bound to the correct value because the polyelectrolyte effect is disregarded.
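As an illustration of how Equation 5 is used, the following sketch evaluates \(\Gamma^{\mathrm{app}}\) for a set of hypothetical inputs (ionic strength, brush height, chain length and measured shift); these numbers are placeholders and not the simulation values quoted above:

```python
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def apparent_grafting_density(I, h_brush, N, delta):
    """Apparent grafting density of Equation 5, in chains/nm^2.
    I in mol/l, h_brush in nm; delta is the measured pKA shift (used in place of Delta_Don)."""
    I_SI = I * 1e3                       # mol/m^3
    h_SI = h_brush * 1e-9                # m
    gamma = 2.0 * I_SI * h_SI * N_A / N * 10.0**delta   # chains per m^2
    return gamma * 1e-18                 # chains per nm^2

# hypothetical inputs for illustration only
print(f"Gamma_app = {apparent_grafting_density(I=0.01, h_brush=50.0, N=25, delta=1.5):.2f} /nm^2")
```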
_Conclusion._ As shown by our simulations, \(\mathrm{p}K_{\mathrm{A}}\) shifts of weak polyelectrolyte brushes are linear in the logarithm of the ionic strength of the solution, which can be qualitatively explained by the Donnan theory. However, this logarithmic variation of the \(\mathrm{p}K_{\mathrm{A}}\) shift does not necessarily imply that the Donnan theory fully describes its physical origin. This shift is caused by a combination of the Donnan effect and polyelectrolyte effect. In brushes with a high grafting density, the Donnan effect prevails, and consequently the Donnan theory provides a quantitatively correct description. In brushes with a lower grafting density, the polyelectrolyte effect significantly contributes to the resulting \(\mathrm{p}K_{\mathrm{A}}\) shift. Nevertheless, in both cases, the \(\mathrm{p}K_{\mathrm{A}}\) varies with the logarithm of the ionic strength.
Figure 4: Donnan and polyelectrolyte effect in the brush with a grafting density of \(\Gamma=0.079/\mathrm{nm}^{2}\) as a function of pH at different salt concentrations. (a): Difference between the local pH inside the brush and pH in the bulk; (b): Degree of ionization of the brush in comparison with the ideal Henderson-Hasselbalch equation (HH).
the ionic strength. In short, varying the salt concentration is a general approach to manipulating the effective p\(K_{\text{A}}\) of polyelectrolyte brushes, thereby tuning their responsive behaviour in a specific pH range.
D.B. and C.H. and P.K. acknowledge funding by the German Research Foundation (DFG) under the grant 397384169 - FOR2811. C.H. furthermore thanks the DFG for funding under Project-No 451980436 and 429529433. P.K. acknowledges funding by the Czech Science Foundation under grant 21-31978J. We thank Andreas Dahlin for helpful discussions concerning his experimental system.
|
2307.08327
|
Analyzing the Impact of Adversarial Examples on Explainable Machine
Learning
|
Adversarial attacks are a type of attack on machine learning models where an
attacker deliberately modifies the inputs to cause the model to make incorrect
predictions. Adversarial attacks can have serious consequences, particularly in
applications such as autonomous vehicles, medical diagnosis, and security
systems. Work on the vulnerability of deep learning models to adversarial
attacks has shown that it is very easy to make samples that make a model
predict things that it doesn't want to. In this work, we analyze the impact of
model interpretability due to adversarial attacks on text classification
problems. We develop an ML-based classification model for text data. Then, we
introduce the adversarial perturbations on the text data to understand the
classification performance after the attack. Subsequently, we analyze and
interpret the model's explainability before and after the attack
|
Prathyusha Devabhakthini, Sasmita Parida, Raj Mani Shukla, Suvendu Chandan Nayak
|
2023-07-17T08:50:36Z
|
http://arxiv.org/abs/2307.08327v1
|
# Analyzing the Impact of Adversarial Examples on Explainable Machine Learning
###### Abstract
Adversarial attacks are a type of attack on machine learning models in which an attacker deliberately modifies the inputs to cause the model to make incorrect predictions. Adversarial attacks can have serious consequences, particularly in applications such as autonomous vehicles, medical diagnosis, and security systems. Work on the vulnerability of deep learning models to adversarial attacks has shown that it is very easy to craft samples that cause a model to produce predictions its designers do not intend. In this work, we analyze the impact of adversarial attacks on model interpretability for text classification problems. We develop an ML-based classification model for text data. Then, we introduce adversarial perturbations on the text data to understand the classification performance after the attack. Subsequently, we analyze and interpret the model's explainability before and after the attack.
Adversarial Attacks, Natural Language Processing, Text Classification, Explainability
## I Introduction
Artificial intelligence (AI) and natural language processing (NLP) have come a long way in recent years, but there are still concerns regarding the ramifications of using them to automatically regulate data in many situations. One concern is that automated control of material on social media networks will accelerate the development of AI designed to offset the effects of other AI. Whether models are trained on images, speech, or text, they can be attacked. For example, a small amount of noise added to an image can make a trained model predict a completely wrong label. This can affect many applications, such as self-driving cars, Alexa, chatbots, and more. These models now face a new challenge: adversarial attacks that are designed to be imperceptible to humans while still causing the model to misclassify the input. The discovery of adversarial examples in neural networks has led to exponential growth in research on this topic, as shown in Figure 1 [1].
In recent years, there has been increased interest in designing adversarial attacks for NLP tasks and in producing defenses against them to make models robust. Studying the effect of attacks through explainability is one way to measure and understand them. This study presents adversarial examples for text data, a technique for producing adversarial perturbations against text classification, and their impact on model explainability. The adversarial samples in this text-classification setting are purposely generated sentences with English transformations, such as misspellings or replacement of words with synonyms, which can lead the classifier to produce incorrect predictions [2], as shown in Table I. When such a sentence is changed, the meaning remains the same and a person would still predict the same label. However, after the attack, the trained model classifies the sentence as belonging to a different class with 100% confidence.
A theoretical model of the adversarial example-crafting process is challenging to construct, which makes it hard to defend against. Because there are currently no accepted theoretical frameworks for these complicated issues, building any theoretical guarantee that a defense will rule out a given set of hostile inputs is exceedingly difficult. Defending against adversarial instances is also hard because machine learning models are expected to produce good results for every potential input; the great majority of the time ML models function well, yet they effectively operate on only a relatively tiny subset of all possible inputs. If one can successfully block one kind
Fig. 1: Number of research papers on adversarial examples [1]
\begin{table}
\begin{tabular}{|p{42.7pt}|p{56.9pt}|p{56.9pt}|} \hline
**Original Input** & [**[offers]**] that [**[rare]**] combination of entertainment and education. & [[Positive(100\%)]] \\ \hline
**Adversarial Example** & [**[prescribes]**] that [**[sparse]**] combination of entertainment and education. & [[Negative(100\%)]] \\ \hline \end{tabular}
\end{table} TABLE I: Adversarial example on text data [2]
of attack, another vulnerability is left exposed to an adversary who is aware of the defense being used. Making a defense that is capable of protecting against a strong and adaptive attacker is an important research area [3].
In addition to initiating attacks, we have studied the effects of attacks on model explainability. Recently, there has been a significant increase in the development of explainable machine learning techniques. This is mostly due to increased awareness among regulators and end-users of the impact of machine learning models and the necessity to comprehend their conclusions. The objective is for the user to comprehend the predictions of ML models, which is accomplished through explanations. Explainability here refers to a technique for analyzing any ML model that involves perturbing the input data samples and monitoring how the predictions change [4]. The expected output from this part is a list of explanations describing how each feature contributes to a prediction, highlighting which adjustments to the features have the greatest influence on the outcome [5].
The semantic similarity model, like the model under attack, is itself subject to adversarial examples. It is thus difficult to determine which model committed the mistake when this form of attack discovers a legitimate adversarial example: it could be either the model being attacked or the model employed to impose the constraint. Storing the attacks and comparing the models' predictions can help distinguish attacks from model errors. Model explainability helps the reader understand when an attack has been generated and which features make the model classify incorrectly. In this way, future models can be designed and explained more easily to better understand the attacks. Another limitation of related work is that, although there are research papers on adversarial attacks and on explainable machine learning models, the two concepts have not been combined for NLP [6]. In this paper, we examine the impact of adversarial examples on the explainability of the ML model.
In addition to developing safe deep neural network (DNN) models, analyzing adversarial situations and their effects helps to better understand DNNs and, as a result, improve their reliability and safety [7]. Examples of adversarial perturbations include perturbations that are imperceptible to human eyes yet may mislead a DNN. Adversarial instances thus provide a new domain in which to investigate the properties of neural networks. In this work, we propose a combination of two concepts, adversarial attacks and explainability, to explain the model's performance after the attacks.
Our key contributions are listed below:
* We propose to explain adversarial attacks with explainability to build a robust model against adversarial attacks on text data.
* The purpose of this work is to experiment with adversarial attacks and then compare the results. We have trained a binary text classification model on two different datasets and used pre-trained transformer models for comparison.
* We have generated adversarial examples on the trained and pre-trained models. We explain the original and attacked models with explainable AI techniques to analyze the effect of attacks on text sentences.
* This research builds on state-of-the-art attack methodologies and delves further into the effects of attacks on model explainability. We analyze the effect of attacks on explainability for three state-of-the-art transformer models: BERT, RoBERTa, and XLNet-based models.
Moreover, this work aims to give answers to the following research questions:
* How is the explainability of the model affected by adversarial attacks?
* What are the success rates of attacks on each trained model?
* Which features matter most for the predictions, and which of them are affected by attacks?
The rest of this work is arranged in the following manner: Section II addresses the literature as well as earlier comparable works. Section III provides a brief background on the different concepts used in this research. Subsequently, section IV presents the proposed integrated pipeline impacted by the adversarial attack and incorporating XAI techniques. Section V provides detailed results and a discussion of our findings. Finally, section VI gives a summary of our conclusions as well as suggestions for future research directions.
## II Literature Review
In this section, we present recent work, more specifically on adversarial attacks, adversarial attacks on text data, and explainable machine learning models. Adversarial attacks create distorted copies of the input data that deceive the classifier (i.e., modify its output). Such attacks can be produced on any kind of data; for example, adding noise to an image is an adversarial attack on image data.
### _Adversarial Attacks_
Adversarial attacks on different kinds of data are detailed in a study by Xi et al. [8], where the authors generated adversarial attacks on three distinct data types (images, graphs, and text) and then discussed defenses. Carlini et al. used adversarial attacks on speech data to successfully convert any audio waveform into any target transcription with only a small distortion [9]. Xie et al. discussed adversarial attacks on image recognition models and proposed AdvProp, an enhanced training method based on auxiliary batch normalization to avoid overfitting problems, in which training is applied over mini-batches [10]. Slack et al. presented a scaffolding scheme that uses an adversarial construction to hide the effect of biases in a classifier [11]. Real-world data were considered, and post hoc explanation methods were applied to the adversarial classifier; three datasets, COMPAS, Communities and Crime, and German Credit, were trained and evaluated. The method exploits perturbation-based explanations so that a biased classifier exhibits controlled, seemingly unbiased behavior when explained. Aryal et al. presented a
study on adversarial attacks in malware analysis, which involve modifying a malware sample in such a way that it bypasses detection by antivirus software [12]. This can be done by modifying the binary code of the malware, inserting additional code to confuse the antivirus software, or modifying the metadata of the file to make it appear legitimate. The adversarial perturbation consists of small, carefully crafted modifications to the malware sample that are designed to cause the antivirus software to misclassify it.
Sungtae et al. proposed LAVA adversarial attacks and discussed the performance of attacks on longitudinal data from the Electronic Health Record (EHR) using various models, claiming that LAVA adversarial attacks are more difficult to detect than other attacks [13]. LAVA avoids selecting features that are highly relevant to the prediction goals by default. Open-source packages for generating adversarial attacks on NLP models are described in [14, 15, 16].
Miyato et al. proposed one of the first studies to use adversarial training for natural language processing problems, executing perturbations at the word-embedding level rather than in the actual input space [17]. Attacking word embeddings is a technique for adding perturbations to the embeddings to deceive a classification algorithm. Similarly, at the level of phrases, words, and letters, changing a single letter in a sentence can change the model's predicted outcome; an attack technique can do this by identifying the most impactful letter substitution using gradient information. Xu et al. proposed TextTricker, a loss- and gradient-based white-box adversarial attack on text classification models; such attacks can also distort the model's explanations, emphasizing irrelevant words and leading the user to incorrect assumptions. TextTricker supports both targeted and non-targeted attacks and was evaluated on two datasets with a noticeable success rate [8]. The work by Samanta et al. marks the beginning of creating adversarial sentences that are grammatically correct and retain the syntactic structure of the original phrase [18]. The authors accomplish this by substituting synonyms for original words or by including words that carry different meanings in different contexts. Some of the toolkits for producing attacks on NLP text data are OpenAttack and TextAttack [19, 2].
### _Explainable Machine Learning Models_
The method or algorithm that creates the explanations is an interpretability model. Much research is being conducted to develop new methodologies and tactics to improve interpretability while limiting the reduction in predictive accuracy. Ribeiro et al. introduce the Local Interpretable Model-Agnostic Explanations (LIME) method and explain how it delivers many of the desired properties for interpretability in [20]. They also discuss some of the major difficulties in explaining data and how LIME, a model-agnostic explanation technique, overcomes them. The importance, evaluation, and properties of interpretability, together with models and metrics, are explained in [21]: interpretability methods for both white-box and black-box models are described with their functionality for different types of data, how they have been developed, the explanations they offer, and comparisons, focusing mainly on the literature on explanation methods used in different papers. Linardatos et al. stated that the LIME and Shapley Additive Explanations (SHAP) approaches are the most comprehensive and widely used methods in the literature for visualizing feature interactions and significance [22]. Rosenberg et al. focused on adversarial attacks that use the transferability of explainability: in the first phase, the features of a malware classifier are analyzed using explainability, and then an adversarial attack is trained in a black-box scenario [23]. The authors used explainability to select the features to be modified when crafting adversarial examples for malware classifiers.
### _Adversarial attacks on interpretability_
Watson et al. studied explainability of machine learning models using SHAP on Electronic Health Record (EHR) and Chest X-Ray (CXR) data [24]. They were able to characterize how various attack tactics operate on different datasets and used this knowledge to identify adversarially manipulated samples from their SHAP values. The efficacy of attacks is examined for six models, and the results of adversarial sample detection are evaluated, with the strategies compared against advanced MLLOO explainability-based adversarial detection methods.
From the literature, it is evident that there are tools to generate adversarial attacks on text data. Along with that, there are methods, such as LIME and SHAP, to improve the interpretability of machine learning models. Combining these two concepts in Natural Language Processing is useful for building better complex models and improving the understanding of adversarial attacks on text data. In addition, explanations help users choose better models.
## III Background
This section gives a brief description of the concepts of adversarial attacks and model explanations.
### _Adversarial Attacks_
In the past, researchers have explored adversarial training for NLP models in a variety of ways, with various objective functions, restrictions, search techniques, transformations, and search algorithms being used to produce adversarial instances [25].
#### Iii-A1 White-box Attacks
In this type of attack, the attacker has total information about the model used for classification, including knowledge of the training method, the optimization used to minimize the error rate, and the training data distribution.
#### Iii-A2 Black-box Attacks
Here, it is assumed that the attacker has no information about the model. The adversary crafts inputs and exploits only the corresponding outputs. Black-box attacks are further subdivided into the following types:
#### Iii-A3 Non-adaptive Black-box Attacks
In this model, the attacker can only use the training data distribution and the training algorithm; adversarial inputs are crafted against a local substitute model and then targeted at the victim model's outputs.
#### Iii-A4 Adaptive Black-box Attacks
In this model, an adaptive adversary is assumed, without complete knowledge of the training method. It is broadly analogous to a chosen-plaintext attack in cryptography.
#### Iii-A5 Strict Black-box Attacks
In this adversary model, the training data distribution is not available; the adversary only collects input-output pairs from the model and uses them to produce targeted outputs.
In this work, we chose to use TextAttack to generate the adversarial attacks.
### _Explainability_
In this part, we describe explainability, the importance of explainability, and the outcomes of combining these two concepts.
Explainable machine learning approaches have recently grown in popularity. This is due to regulators and end users being more aware of the influence of machine learning models and the need to understand their findings. Explanations help the user comprehend the ML models' predictions and are produced by interpretability models. Much research is being done to promote interpretability in prediction tasks while limiting the reduction in predictive accuracy. This is accomplished by providing explanations that help the user comprehend the predictions of machine learning models. To do this, we employ an explainability technique, that is, an algorithm that creates candidate explanations.
According to the literature [26], interpretability comprises three aims, all of which are interconnected and often compete with one another:
* Accuracy: Rather than a theoretical link, accuracy refers to the actual connection between the explanation strategy and the ML model's prediction. If this goal is not achieved, the explanation is rendered ineffective since it is not faithful to the prediction that it is intended to explain.
* Understandability: It is connected to the ease with which an explanation is grasped by the observer. This is an important aim since, no matter how precise an explanation is, it is worthless if it is not easily comprehended.
* Efficiency: It refers to the amount of time it takes for a user to comprehend an explanation. If this requirement is not met, it is possible to argue that, given an endless amount of time, practically any model is interpretable. As a result, an explanation should be intelligible in a limited length of time, ideally less than a few minutes. This aim is connected to the preceding one, understandability: in general, the more intelligible an explanation is, the more swiftly it is comprehended.
#### Iii-B1 Model agnostic explanation methods
A model-agnostic explanation strategy is a method of explaining that does not depend on a particular model. Even though model-specific methods can be more useful when tailored to a specific model and case, model-agnostic explanation methods have a distinct advantage in that they are completely separate from the original model class, allowing them to be applied to completely different use cases even when the predictive model stays the same. The goal is to make the user understand the predictions of ML models, which is achieved through explanations. For this, we make use of an explanation method, which is simply an algorithm that generates explanations. One of the most widely used approaches is LIME, which stands for Local Interpretable Model-agnostic Explanations [20].
## IV Proposed framework
Figure 2 presents the schematic diagram of the proposed framework. As illustrated in the figure, in the initial stage of our proposed method, we prepare the model for training. This involves conducting initial data analysis and performing data cleaning to ensure the data is suitable for training. We perform the following steps for data preparation and cleaning.
#### Iv-B1 Noise Removal
This is the initial stage that involves removing all formatting from text, like HTML tags, paragraph or page breaks, as well as punctuation marks.
#### Iv-B2 Tokenization
As defined in [27], tokenization is the act of identifying the fundamental elements in a linguistic expression that do not need to be deconstructed. In NLP, tokens are typically words separated by spaces, and tokenization is the act of breaking sentences down into tokens, which may be words or punctuation marks.
#### Iv-B3 Normalization
Text may include a variety of non-standard token types, including digit sequences, acronyms, letter sequences in all capital letters, abbreviations, roman numerals, mixed-case words, URLs, and email addresses. Normalization is the rewriting of such material in everyday language. In NLP, stemming and lemmatization are generally used for text normalization. Stemming is the process of removing prefixes and suffixes from words, whereas lemmatization reduces a word to its most fundamental form.
#### Iv-B4 Stop Word Removal
Stop words are terms that occur regularly in many sentences and are therefore removed either before or after natural language processing is performed. Although stop words are frequently encountered in NLP tasks such as sentiment analysis and topic modeling, their high frequency may bias the model toward these uninformative words, resulting in a poor model for such tasks. To address this issue, these words are removed before the analysis.
#### Iv-B5 Word Embeddings/Vectorization
The process of converting a text into a vector is called word embedding/vectorization. Some of the methods used in NLP for this are: i) TF-IDF (Term Frequency - Inverse Document Frequency), ii) n-gram-based approaches, and iii) GloVe. A minimal sketch of the preprocessing steps above is given below.
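The following is a rough, illustrative sketch of such a cleaning and vectorization pipeline using NLTK and scikit-learn; the exact steps, libraries, and parameters used in this work are not specified here, so every choice below (regular expressions, lemmatizer, TF-IDF settings) is an assumption.

```python
import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))


def clean_text(text):
    """Noise removal, tokenization, normalization and stop-word removal."""
    text = re.sub(r"<[^>]+>", " ", text)        # strip HTML tags
    text = re.sub(r"[^a-zA-Z\s]", " ", text)    # drop punctuation and digits
    tokens = nltk.word_tokenize(text.lower())   # tokenize
    tokens = [lemmatizer.lemmatize(t) for t in tokens if t not in stop_words]
    return " ".join(tokens)


reviews = ["Offers that <b>rare</b> combination of entertainment and education!"]
cleaned = [clean_text(r) for r in reviews]

# TF-IDF vectorization of the cleaned corpus
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(cleaned)
print(vectorizer.get_feature_names_out())
```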
After data cleaning, the model is then trained using AI algorithms. Our focus in this step is to create a reliable model that can be used as input for generating attacks and explanations. Alongside model training, we also evaluate its performance and generate explanations for a subset of examples from the test dataset. These generated explanation examples will later
be compared to the explanations of the same examples from the attacked model in the final step.
In the second step of our proposed method, we generate adversarial attacks. Here we perturb the text to trick the NLP model into making incorrect predictions while adhering to specific constraints. To generate attacks, we follow a series of steps: selecting the model, wrapping the model using a model wrapper, choosing the dataset, creating an attack, and finally attacking the model. We explore different attack models to compare their performance in generating attacks. For generating adversarial samples, we utilized the TextAttack tool, as it provides various attack recipes, namely TextFooler and TextBugger [2].
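A rough sketch of this attack-generation pipeline is shown below; the model checkpoint name, dataset split, and number of examples are illustrative assumptions, and TextFooler is used as a representative recipe rather than the exact configuration employed in our experiments.

```python
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# 1) select a fine-tuned model and its tokenizer (assumed checkpoint name)
model_name = "textattack/bert-base-uncased-rotten-tomatoes"
model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

# 2) wrap the model so TextAttack can query it
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

# 3) choose the dataset
dataset = HuggingFaceDataset("rotten_tomatoes", split="test")

# 4) build an attack from a recipe and 5) run it on a few samples
attack = TextFoolerJin2019.build(model_wrapper)
attack_args = AttackArgs(num_examples=10, log_to_csv="attack_results.csv")
Attacker(attack, dataset, attack_args).attack_dataset()
```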
In the final step, we generate explanations for the attacked model and analyze the effects of attacks on input sentences. We have used the LIME (Local Interpretable Model-Agnostic Explanations) method for generating explanations [20]. LIME operates on the principle of zooming into the local region of each prediction, allowing us to obtain valid explanations without needing to consider the entire model. By fitting a linear interpretable model, known as a surrogate, in the local area of interest, LIME provides a local approximation of the complex model. LIME solely relies on the inputs and outputs of the model to generate explanations. This approach allows us to gain insights into the interpretability and understanding of the attacked model's predictions. By comparing the explanations of the attacked model with those of the original model, we can identify any changes or vulnerabilities introduced through adversarial attacks.
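A minimal sketch of how such an explanation could be generated with LIME follows; the prediction function, the class names, and the assumption that `model` and `tokenizer` form a fine-tuned HuggingFace classifier pair are illustrative rather than the exact implementation used here.

```python
import torch
from lime.lime_text import LimeTextExplainer

# assumed: `model` and `tokenizer` are a fine-tuned HuggingFace classifier pair
def predict_proba(texts):
    """Return class probabilities for a list of raw texts, as LIME expects."""
    enc = tokenizer(list(texts), return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=["negative", "positive"])
sentence = ("steers turns in a snappy screenplay that curls at the edges ; "
            "its so clever you want to hate it. but he somehow pulls it off.")
explanation = explainer.explain_instance(sentence, predict_proba, num_features=10)
print(explanation.as_list())  # (word, weight) pairs, e.g. the weight of "clever"
```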
Through the usage of explanations, we can analyze how the attacks influence the behavior of the attacked model, thereby assessing its robustness and reliability. This step provides valuable insights into the differences between the original and attacked models, highlighting any modifications in their decision-making processes. By analyzing the effects of the attacks on the model's explanations, we can better understand the implications of adversarial inputs and improve our understanding of the model's behavior in real-world scenarios.
## V Results and Discussions
In this section, we explain the detailed results of the various steps. We begin by providing a description of the data set used in this study.
### _Data set_
For this research, we utilized two datasets: 1) Rotten Tomatoes Movie Reviews (MR) and 2) IMDB dataset. The first dataset, Rotten Tomatoes, was collected by Pang and Lee [28]. It comprises 5,331 movie reviews, with positive and negative sentiments. Each review consists of sentences with an average sentence length of 32 words. The second dataset, IMDB, was collected by Maas et al [29]. It encompasses 50,000 positive and negative reviews, with an average sentence length of 215.63 words.
### _Neural Network Design_
We employ state-of-the-art pre-trained transformer models. Since our approach focuses on adversarial attacks in relation to model explainability, we choose the best-performing pre-trained text classification models from the Hugging Face transformers library. We have used BERT, RoBERTa, and XLNet-based cased models and have compared attack success rates on both datasets.
The pre-trained models from transformers can be loaded along with their tokenizers to generate attacks and explanations. The accuracy of the models is shown in Table II.
Fig. 2: Workflow diagram of the proposed model
### _Generating attacks for pre-trained Models_
We have used the transformer models to compare and evaluate the effect of an attack on a single sentence and to visualize the explanation results. Accordingly, the attack results are displayed for 10 sample sentences from the dataset. The results of each model differ on each sentence because of differences in the components used for producing the attacks. As an example, consider the results for the RoBERTa model. The attacks are performed on 10 samples from the dataset. Out of these 10, the initially trained model has 1 wrong prediction, giving a model accuracy of 90%. After attacking the model, there were 0 failed attacks, leading to an attack success rate of 100%. Among those 9 successful attacks performed by the attack recipe, on average 15.2% of the words are changed to flip the prediction, and 113.89 queries are made to find a successful perturbation. The average number of words in the 10 samples is 19.57 per sentence. The original accuracy of the BERT-based-uncased and XLNet-based-cased models is 100% on those ten samples. For the least-performing model, SVM, the number of words that must be changed to produce the attacks is also smaller.
### _Explanations with LIME_
Explanations with LIME on text data provide class probabilities for each text and are visualized with different colors and a score for each feature in the sentence. The higher the score in a given class's color, the more that word affects the prediction for the sentence. Since this is binary classification of positive or negative reviews, the class names are encoded as 0 and 1. For a sample text, the probability contribution of each word is calculated and then explained by the explainer, which takes the class names as input parameters.
The number of features in the text is set to ten, as the average number of words in the ten samples is 19.5. The sample text explanations are represented as shown in Figures 3 and 4. In Figure 3, the orange highlights are for class 0 and the blue ones for class 1; the darker the highlight, the higher the feature importance for that class. The feature importance values for the whole text are shown in Figure 4. For example, the word "**clever**", highlighted with 36% feature importance, makes the model classify the input sentence as a positive movie review. This shows which words play an important role in predicting the output, and it is the main reason to use explanations when analyzing the attacks as well: a change in a word and its importance can clearly affect the outputs.
### _Analyzing the explanations with different models_
The explanations for each model differ before and after the attack because each model interprets words differently. The main aim of this section is to explain how the different models affect the explanations before and after the attack. We have generated analogous explanations for the different ML models - BERT-based-uncased, RoBERTa, and XLNet-based-cased. To analyze the differences in explanations, we selected the following single sample input sentence:
**Sentence**: _"steers turns in a snappy screenplay that curls at the edges ; its so clever you want to hate it. but he somehow pulls it off."_
**Label**: 1 (Positive Review)
The confidence scores of each model on this sentence are shown in Table III. Although these models predicted
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Model** & **Rotten Tomatoes** & **IMDB** \\ \hline BERT & 87.60\% & 91.90\% \\ \hline RoBERTa & 89.90\% & 94.10\% \\ \hline XLNet & 90.80\% & 95.70\% \\ \hline \end{tabular}
\end{table} TABLE II: Table comparing accuracy of pre-trained models
Fig. 4: Sample feature explanation
Fig. 3: Sample explanation
the review as positive with high probability, there are large differences in word importance across the models. After the attack, all 3 models predicted the same sentence as negative but with different probabilities, as shown in Figure 4. Table V shows the change in the probabilities of each class for each model.
Figures 5-7 show the feature importance given to the sentence by each model before and after the attack. It is observed that **"clever"** and **"pulls"** are given importance by all 3 models for classifying the sentence as positive, while **"hate"** and **"but"** are given more importance for classifying it as negative. It is important to analyze which words are given more priority in these explanations, as the explanations visualized for the original model change when the model is attacked.
After the attack, the explanations and word importance change because, when the model is attacked, the words in the original sentence are replaced with transformations such as synonyms, spelling changes, or removal of suffixes/prefixes.
It is clear from the results that the BERT model is affected more by the perturbations than the other two models, which were affected to nearly the same extent (60%). The example positive review with 100% confidence is converted to a 93% confident negative review after the attack on the BERT-based-uncased model. Similarly, Table IV shows how the confidence values for the example sentence change from the positive class to the negative class.
The most affected model, BERT-based-uncased, completely changes its prediction to the negative class with 93% confidence. The results thus show that the BERT model on the MR dataset predicts the wrong output with 93% confidence, making it the most affected one.
On small datasets, the attacked RoBERTa model predicts the wrong class with lower confidence than the BERT model, and the effect on BERT is larger than on the other two models. When comparing both datasets, the RoBERTa model was the least affected on the sample sentence compared to the other models, as shown in Table VI.
## VI Conclusions
The Natural Language Processing field in AI has a variety of applications. However, one of the problems in AI is adversarial examples on text data that cause inputs to be misclassified in text classification tasks. This paper is an initiative to analyze, through explanations, the changes in word importance after attacks on the input text data. This implementation allows users to understand the
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline
**Model** & **Confidence Values of each class** \\ \hline BERT-based-uncased & [[Positive (100\%)]] to [[Negative (93\%)]] \\ \hline RoBERTa & [[Positive (94\%)]] to [[Negative (66\%)]] \\ \hline XLNet-based-cased & [[Positive (85\%)]] to [[Negative (89\%)]] \\ \hline \end{tabular}
\end{table} TABLE V: Comparison of change in values after attack
Fig. 5: This figure illustrates the word probabilities in sample sentence before and after attack for BERT model.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Attacked Model** & **Predicted probability for class 0** & **Predicted probability for class 1** \\ \hline BERT-based-uncased & 0.96 & 0.04 \\ \hline RoBERTa & 0.61 & 0.39 \\ \hline XLNet-based-cased & 0.68 & 0.32 \\ \hline \end{tabular}
\end{table} TABLE IV: Comparing three models based on single sample text prediction after attacking the sentence
Fig. 6: This figure illustrates the word probabilities in sample sentence before and after attack for RoBERTa model.
concept of adversarial attacks and the usage of explainability algorithms to analyze the effect of adversarial attacks on text data. We used different ML algorithms and transfer learning-based models to analyze the impact of adversarial attacks on model explainability.
|
2305.18995
|
Elastic signatures of a spin-nematic
|
We study the elastic signatures -- renormalisation of sound velocity and
magnetostriction -- of the spin-nematic phase of a spin-$1$ magnet on a
triangular lattice described by the bilinear-biquadratic spin Hamiltonian. We
show that at low temperatures, the scattering of the acoustic phonons from the
Goldstone modes of the nematic phase lead to a powerlaw renormalisation of the
fractional change in the sound velocity, $v$, as a function of temperature,
$T$, i.e. $\Delta v/v\propto T^3$ as opposed to the same in the high
temperature paramagnet where $\Delta v/v\propto T^{-1}$. At the generically
discontinuous nematic transition, there is a jump in magnetostriction as well
as $\Delta v/v$ along with enhanced $h^4$ dependence on the magnetic field,
$h$, near the nematic transition. These signatures can help positively
characterise the spin-nematic in general and in particular the realisation of
such a phase in the candidate material NiGa$_2$S$_4$.
|
Monica Bapna, Subhro Bhattacharjee
|
2023-05-30T12:48:05Z
|
http://arxiv.org/abs/2305.18995v1
|
# Elastic signatures of a spin-nematic
###### Abstract
We study the elastic signatures - renormalisation of sound velocity and magnetostriction - of the spin-nematic phase of a spin-1 magnet on a triangular lattice described by the bilinear-biquadratic spin Hamiltonian. We show that at low temperatures, the scattering of the acoustic phonons from the Goldstone modes of the nematic phase leads to a powerlaw renormalisation of the fractional change in the sound velocity, \(v\), as a function of temperature, \(T\), _i.e._\(\Delta v/v\propto T^{3}\) as opposed to the same in the high temperature paramagnet where \(\Delta v/v\propto T^{-1}\). At the generically discontinuous nematic transition, there is a jump in magnetostriction as well as \(\Delta v/v\) along with enhanced \(h^{4}\) dependence on the magnetic field, \(h\), near the nematic transition. These signatures can help positively characterise the spin-nematic in general and in particular the realisation of such a phase in the candidate material NiGa\({}_{2}\)S\({}_{4}\).
## I Introduction
Interplay of symmetries and competing interactions can stabilise a plethora of magnetic phases in spin systems with inconclusive experimental signatures for conventional probes. This not only includes issues of finding _smoking-gun_ signatures of fractionalised quasi-particles in quantum spin liquids, but also a much broader context pertaining to many other unconventional phases such as higher (than dipole) multi-pole orders in a variety of candidate systems with spin moments \(S>1/2\).[1; 2; 3; 4; 5; 6] A possible resolution to the above conundrum is to invoke a combination of probes to gather complementary insights. An important class of such experimental probes like vibrational Raman and infrared scatterings as well as ultrasonic spectroscopy aim to exploit the ubiquitous magnetoelastic coupling to reveal the properties of the unconventional magnetic ground states and low energy excitations via the phonons.[7; 8; 9; 10]
In this paper, we shall develop the theory of magnetoelastic coupling for the rather elusive spin-nematic phases[11; 12; 13; 14] in spin-1 magnets and apply it to predict its elastic signatures. In a spin-nematic, the ground state has no magnetic dipole moment but has a finite expectation value for the quadrupole moment, which is a bilinear of spins. Here, we shall consider only on-site spin-nematic characterised by the symmetric traceless operator[15; 16] :
\[Q_{i}^{\mu\nu}=\frac{(S_{i}^{\mu}S_{i}^{\nu}+S_{i}^{\nu}S_{i}^{\mu})}{2}-\frac {S(S+1)\delta^{\mu\nu}}{3}, \tag{1}\]
where \(S_{i}^{\alpha}\) (\(\alpha=x,y,z\)) are spin-1 operators at lattice site \(i\). Such order has been proposed for the triangular lattice magnet NiGa\({}_{2}\)S\({}_{4}\)[15; 17; 18; 19; 20] where it is stabilized by a sizeable biquadratic term \(\sim(\mathbf{S}_{i}\cdot\mathbf{S}_{j})^{2}\) that can arise from spin-lattice coupling.[15; 21; 22]
NiGa\({}_{2}\)S\({}_{4}\) is a layered material where Ni\({}^{2+}\) forms an isotropic triangular lattice with \(S=1\) at each site. The system fails to show any conventional long range magnetic order in neutron scattering experiments to the lowest temperature measured (Curie Weiss temperature, \(\theta_{W}=-80K\)).[17; 23] However, the state below \(T\sim 10\) K shows \(\sim T^{2}\) magnetic specific heat and constant magnetic susceptibility indicating the presence of low energy linearly dispersing excitations. It was subsequently proposed that this spin-1 triangular lattice magnet could possibly have spin ferro-nematic [15; 19; 20] or three sublattice nematic [18] ordering (see Fig. 1). This spin-nematic state forms the right starting point[20] to understand the relevance of the third nearest neighbour Heisenberg exchange[24; 25; 26; 27] as well as spin freezing below 10 K [28] - both relevant for the material. However, the most pertinent experiments for our work are the recent Raman measurements probing the phonons in NiGa\({}_{2}\)S\({}_{4}\)[29] which show substantial spin-lattice coupling. We build on the above experimental indication of sizeable magnetoelastic coupling to show that the possible spin-nematic can have a strong signature in the elastic sector - namely the strain and the sound velocity - which can form further experimental probes to such unconventional spin-quadrupolar order. These lattice signatures can be measured very accurately and have recently proved very useful in obtaining information about unconventional magnetic phases and phase transitions.[8; 30]
Being bilinear in spins, the spin-nematic is time-reversal (TR) invariant, but breaks the spin-rotation symmetry and has been dubbed as _moment-free magnetism_.[14] Unlike dipolar ordering, such quadrupolar ordering is hard to detect via neutron scattering.[31] However, the same TR even order parameter is expected to couple strongly to the lattice vibrations and hence provides a possible way to probe it. Further, even in the absence of a static dipole moment, the breaking of spin-rotation symmetry leads to gapless Goldstone modes - spin-nematic waves. The coupling of such gapless modes to the acoustic phonons further gives a way to detect the former in ultrasound experiments.
Here, we report the effect of magnetoelastic coupling in a uniaxial spin-nematic by deriving the microscopics of such coupling and in particular the coupling of the acoustic phonons to the nematic Goldstone modes to obtain the renormalisation of sound speed at low temperatures deep inside the spin-nematic phase. We complement the microscopics approach with a phenomenological
Landau-Ginzburg theory for the long wavelength dipolar, quadrupolar and strain modes to capture the effect of the thermal transition out of the spin-nematic phase on the magnetostriction and sound speed renormalisation. While we focus on NiGa\({}_{2}\)S\({}_{4}\)[17; 23], our results are easily generalised to other cases of spin-nematic order.
The rest of this paper is organised as follows. In section II, we introduce the microscopic setting of magnetoelastic coupling (Eq. 7) in spin-1 magnets on the triangular lattice with bilinear-biquadratic exchanges (Eq. 2) and summarize its different phases - motivated by the low energy physics of NiGa\({}_{2}\)S\({}_{4}\) - and discuss the effect of such coupling on the fractional change in sound speed (Eq. 13). The above formalism is used to calculate the coupling between the nematic Goldstone modes and the acoustic phonons and hence the temperature dependence of fractional change in sound speed deep inside both the ferronematic and the three sublattice nematic phase in Section III. Typically the temperature dependence of the sound speed is given by a powerlaw, \(\Delta v/v\sim T^{a}\) where \(a=3\) (Eq. 24) for a ferronematic and can vary between \(1-3\) (Eq. 29) for the three sublattice nematic depending on the temperature range and the ratio of bilinear and biquadratic coupling \(J/K\) (see Eq. 2). This is unlike the case of the thermal paramagnet where \(\Delta v/v\propto 1/T\) (Eq. 33). Complementary to the microscopics, we study the phenomenological Landau-Ginzburg theory for the long wavelength dipolar, quadrupolar and strain modes in Section IV. We find the symmetry allowed irreducible representations to obtain the mean field free energy which we use in Section V to examine the elastic signatures - fractional change in length (magnetostriction), \(\Delta L/L\), and \(\Delta v/v\) - of the dipolar and quadrupolar ordering via symmetry allowed magnetoelastic couplings. We find that magnetic(nematic) ordering would lead to continuous change(jumps) in \(\Delta L/L\) and \(\Delta v/v\) along with an enhanced \(h^{4}\) dependence on the magnetic field, \(h\), near the nematic transitions. Observing jumps in fractional change in length and fractional change in sound speed in the absence of any magnetization would strongly favour the case for a spin-nematic. Various details are summarised in appendices.
## II Spin-lattice coupling in spin-1 magnet
The starting point is the nearest neighbour minimal spin-1 bilinear-biquadratic model on the triangular lattice [15; 16] given by
\[H=J\sum_{\langle ij\rangle}({\bf S}_{i}\cdot{\bf S}_{j})+K\sum_{\langle ij \rangle}({\bf S}_{i}\cdot{\bf S}_{j})^{2} \tag{2}\]
where in addition to the usual (first) Heisenberg term, we also have the (second) biquadratic spin exchanges. These higher order exchanges can be obtained from an underlying higher energy multi-orbital Hubbard model.[32] Notably, the biquadratic term can be further renormalised by magnetoelastic coupling by integrating out the phonons [15; 33] whose importance is evident in the recent Raman scattering experiments.[29] Both virtual hopping and phonon effects naturally give rise to \(K<0\) while \(J>0\) from the former.
It is useful to re-write the above Hamiltonian, up to a constant, using the spin-1 operator identity [16]: \(({\bf S}_{i}\cdot{\bf S}_{j})^{2}=\frac{1}{3}S^{2}(S+1)^{2}-\frac{1}{2}({\bf S }_{i}\cdot{\bf S}_{j})+\sum_{\mu,\nu}Q_{i}^{\mu\nu}Q_{j}^{\mu\nu}\) as \(H=\left(J-\frac{K}{2}\right)\sum_{\langle ij\rangle}{\bf S}_{i}\cdot{\bf S}_{ j}+K\sum_{\langle ij\rangle;\mu,\nu}Q_{i}^{\mu\nu}Q_{j}^{\mu\nu}\). Eq. 2 is clearly invariant under global spin rotations as well as various symmetries of the triangular lattice and TR. The ground states, however, spontaneously break different symmetries depending on the ratio of \(K/J\).
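As a quick consistency check, this spin-1 operator identity can be verified numerically on the two-site Hilbert space. The short sketch below is an illustration (not part of the original derivation) that builds the standard spin-1 matrices and the quadrupole operators of Eq. 1.

```python
import numpy as np

# spin-1 matrices in the S^z basis {|1>, |0>, |-1>}
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex) / np.sqrt(2)
Sz = np.diag([1, 0, -1]).astype(complex)
S = [Sx, Sy, Sz]
I3 = np.eye(3, dtype=complex)
s = 1.0  # spin length

def Q(mu, nu):
    """On-site quadrupole operator Q^{mu nu} of Eq. 1 (symmetric, traceless)."""
    return (S[mu] @ S[nu] + S[nu] @ S[mu]) / 2 - (mu == nu) * s * (s + 1) / 3 * I3

# two-site operators: site i acts on the first tensor factor, site j on the second
SdotS = sum(np.kron(S[a], S[a]) for a in range(3))
QdotQ = sum(np.kron(Q(a, b), Q(a, b)) for a in range(3) for b in range(3))

lhs = SdotS @ SdotS
rhs = (s**2 * (s + 1)**2 / 3) * np.eye(9) - SdotS / 2 + QdotQ
print(np.allclose(lhs, rhs))  # True: the operator identity holds for spin-1
```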
Most important to us is the large \(K/J\) regime where a uniaxial uniform (ferro) spin-nematic is stabilised [15] for \(K<0\). For spin-1, an on-site uniaxial nematic state is stabilized (this is clear from the discussion on the spin-1 wave functions in Refs. [16; 19], and [34] as summarized in Appendix A) where the director of the nematic is uniform at all the lattice sites as shown in Fig. 1. The order parameter that characterises such a ferronematic order is
\[\langle Q^{\mu\nu}\rangle={\cal Q}_{FN}\bigg{(}n^{\mu}n^{\nu}-\frac{1}{3}\delta ^{\mu\nu}\bigg{)}. \tag{3}\]
where \({\cal Q}_{FN}\) and \(\hat{\bf n}\) are respectively the magnitude and director of the ferronematic. Indeed, for \(J>0\) and \(K<0\), the ferronematic (spiral) phase is stable for \(|K|/J>2(<2)\) within mean-field analysis.[20]
While the microscopic mechanisms, as discussed above, favour \(K<0\), it is useful to consider the case of \(K>0\), whence for large \(K/J\) a three sublattice nematic is stabilised in the triangular lattice where the directors of the spin-nematic are orthogonal to each other on the three sublattices [18] as shown in Fig. 1. The relevant mean-field wave-functions for the three sublattice nematic orders are briefly summarised in Appendix A.
Turning to the limit of \(J/|K|\gg 1\), on a triangular lattice, for \(J>0\), a natural competing (with the above two nematic) phase is the \(120^{\circ}\) coplanar spiral with a non-zero spin expectation given by
\[\langle{\bf S}_{i}\rangle = m\hat{\bf s}_{i}\ \ \mbox{with}\ \ \hat{\bf s}_{i}=\cos({\bf q}\cdot{\bf r}_{i})\hat{\bf e}_{x}+\sin({\bf q}\cdot{\bf r}_{i})\hat{\bf e}_{y} \tag{4}\]
Figure 1: Schematic : The uniaxial ferro (Eq. 3) and the three sublattice (Eq. 13) spin-nematic orders on the triangular lattice. The directors are shown in deep blue.
where \(m\) is the magnitude of magnetization, \({\bf q}=\frac{2\pi}{3}\hat{\bf x}+\frac{2\pi}{\sqrt{3}}\hat{\bf y}\) is the spiral wave vector and \(\hat{\bf e}_{x}\) and \(\hat{\bf e}_{y}\) are orthogonal unit vectors in the plane of spin ordering. Note that the spin ordering induces a parasitic quadrupole moment, \({\cal Q}_{par}\sim m^{2}\), which should be distinguished from the pure spin-nematic ordering discussed above.
### Spin-phonon coupling
The Raman scattering experiments [29] indicate the presence of substantial magnetoelastic coupling in NiGa\({}_{2}\)S\({}_{4}\), possibly arising from the modulation of the spin exchange coupling constants (\(J\) and \(K\)) in the Hamiltonian in Eq. 2 by the phonons. This is obtained via Taylor expansion of the exchange constants in lattice displacements about their equilibrium positions as [8]
\[J_{ij} = J_{0}+\frac{\partial J_{ij}}{\partial{\bf R}_{ij}}\cdot{\bf R}_{ ij}+\frac{1}{2}{\bf R}_{ij}\cdot\frac{\partial^{2}J_{ij}}{\partial{\bf R}_{ij}^ {2}}\cdot{\bf R}_{ij}\] \[K_{ij} = K_{0}+\frac{\partial K_{ij}}{\partial{\bf R}_{ij}}\cdot{\bf R}_ {ij}+\frac{1}{2}{\bf R}_{ij}\cdot\frac{\partial^{2}K_{ij}}{\partial{\bf R}_{ ij}^{2}}\cdot{\bf R}_{ij}\, \tag{5}\]
where \({\bf R}_{ij}={\bf R}_{i}-{\bf R}_{j}\) and \({\bf R}_{i}\) is the displacement of site \(i\) from its equilibrium position \({\bf R}_{i}^{0}\). Eq. 2 then becomes
\[H=H_{sp}+H_{sp-ph} \tag{6}\]
where \(H_{sp}\) is the spin Hamiltonian in Eq. 2 with the exchanges now being given by the equilibrium values (first term in Eq. 5) and \(H_{sp-ph}\) being the spin-phonon coupling Hamiltonian of the form,
\[H_{sp-ph} = H_{1}+H_{2}, \tag{7}\]
where
\[H_{1} = \sum_{\bf k}U_{\bf k}^{(1)}A_{\bf k}+\sum_{\bf k}\widetilde{U}_{ \bf k}^{(1)}A_{\bf k} \tag{8}\] \[H_{2} = \frac{1}{2}\sum_{\bf k,k^{\prime}}U_{\bf k,k^{\prime}}^{(2)}A_{ \bf k}A_{-{\bf k^{\prime}}}+\frac{1}{2}\sum_{\bf k,k^{\prime}}\widetilde{U}_{ \bf k,k^{\prime}}^{(2)}A_{\bf k}A_{-{\bf k^{\prime}}} \tag{9}\]
are respectively linear and quadratic in lattice displacement operator given by \(A_{\bf k}=a_{\bf k}+a_{-\bf k}^{\dagger}\) with \(a_{\bf k}^{\dagger}\) being the bosonic phonon creation operator. \(U_{\bf k}^{(1)}\), \(\widetilde{U}_{\bf k}^{(1)}\), \(U_{\bf k,k^{\prime}}^{(2)}\) and \(\widetilde{U}_{\bf k,k^{\prime}}^{(2)}\) depend on the spin operators and are given by
\[U_{\bf k}^{(1)} = \sum_{\langle ij\rangle}({\bf S}_{i}\cdot{\bf S}_{j})\frac{(e^{i{\bf k}\cdot{\bf R}_{i}^{0}}-e^{i{\bf k}\cdot{\bf R}_{j}^{0}})}{\sqrt{2MN\omega_{0,{\bf k}}^{ph}}}\left(\frac{\partial J_{ij}}{\partial{\bf R}_{ij}}\cdot{\bf e_{k}}\right)\] \[\widetilde{U}_{\bf k}^{(1)} = \sum_{\langle ij\rangle}({\bf S}_{i}\cdot{\bf S}_{j})^{2}\frac{(e^{i{\bf k}\cdot{\bf R}_{i}^{0}}-e^{i{\bf k}\cdot{\bf R}_{j}^{0}})}{\sqrt{2MN\omega_{0,{\bf k}}^{ph}}}\left(\frac{\partial K_{ij}}{\partial{\bf R}_{ij}}\cdot{\bf e_{k}}\right)\]
and
\[U_{\bf k,k^{\prime}}^{(2)} = \sum_{\langle ij\rangle}\frac{({\bf S}_{i}\cdot{\bf S}_{j})}{2MN\sqrt{\omega_{0,{\bf k}}^{ph}\omega_{0,-{\bf k}^{\prime}}^{ph}}}(e^{i{\bf k}\cdot{\bf R}_{i}^{0}}-e^{i{\bf k}\cdot{\bf R}_{j}^{0}})\left({\bf e}_{-{\bf k}^{\prime}}\cdot\frac{\partial^{2}J_{ij}}{\partial{\bf R}_{ij}^{2}}\cdot{\bf e_{k}}\right)(e^{-i{\bf k}^{\prime}\cdot{\bf R}_{i}^{0}}-e^{-i{\bf k}^{\prime}\cdot{\bf R}_{j}^{0}})\] \[\widetilde{U}_{\bf k,k^{\prime}}^{(2)} = \sum_{\langle ij\rangle}\frac{({\bf S}_{i}\cdot{\bf S}_{j})^{2}}{2MN\sqrt{\omega_{0,{\bf k}}^{ph}\omega_{0,-{\bf k}^{\prime}}^{ph}}}(e^{i{\bf k}\cdot{\bf R}_{i}^{0}}-e^{i{\bf k}\cdot{\bf R}_{j}^{0}})\left({\bf e}_{-{\bf k}^{\prime}}\cdot\frac{\partial^{2}K_{ij}}{\partial{\bf R}_{ij}^{2}}\cdot{\bf e_{k}}\right)(e^{-i{\bf k}^{\prime}\cdot{\bf R}_{i}^{0}}-e^{-i{\bf k}^{\prime}\cdot{\bf R}_{j}^{0}})\]
where \(M\) and \(N\) are the mass and number of nickel ions, respectively, \({\bf e_{k}}\) is the phonon polarization vector, \(\omega_{0,\bf k}^{ph}\) is the bare phonon frequency corresponding to the harmonic phonon Hamiltonian,
\[H_{ph}=\sum_{\bf k}\omega_{0,\bf k}^{ph}a_{\bf k}^{\dagger}a_{\bf k} \tag{12}\]
For acoustic phonons at long wavelengths (\({\bf k}\to 0\)) we have \(\omega_{0,\bf k}^{ph}\approx v_{0}|{\bf k}|\) where \(v_{0}\) is the bare sound speed. Above and in the rest of this work, we have set \(\hbar=1\).
### Fractional Change in the sound speed
The effect of the spin-phonon coupling on the fractional change in sound speed is obtained from the real part of the phonon self-energy, \(\Sigma({\bf q},\omega)\), as, [8]
\[\frac{\Delta v}{v}=\lim_{{\bf q}\to 0}\frac{\Delta\omega_{\bf q}^{ph}}{ \omega_{0,\bf q}^{ph}}\approx\lim_{{\bf q}\to 0}\frac{Re\Sigma({\bf q},\omega_{0,\bf q }^{ph})}{\omega_{0,\bf q}^{ph}}. \tag{13}\]
The phonon self-energy due to the interactions with the spins, in turn, can be obtained from the phonon propagator \(G^{ph}({\bf q},\tau-\tau^{\prime})=-\langle T_{\tau}(A_{\bf q}(\tau)A_{-{\bf q }}(\tau^{\prime}))\rangle\) in Matsubara space and is given by the Dyson equation
\[G^{ph}({\bf q},i\Omega_{n})=\frac{1}{\left(G_{0}^{ph}({\bf q},i\Omega_{n}) \right)^{-1}-\Sigma({\bf q},i\Omega_{n})} \tag{14}\]
where \(\Omega_{n}\) are the bosonic Matsubara frequencies and the bare phonon propagator given by
\[G_{0}^{ph}({\bf q},i\Omega_{n})=\frac{2\omega_{0,\bf q}^{ph}}{(i\Omega_{n})^{2} -(\omega_{0,\bf q}^{ph})^{2}}. \tag{15}\]
The phonon self energy due to the magnetoelastic coupling would depend on correlations of the spins and hence fractional change in sound speed can be used to probe the spin physics. In the following sections we use this formalism to calculate the temperature dependence of \(\Delta v/v\) in the ferronematic, three sublattice nematic and the thermal paramagnet.
## III Fractional change in sound speed in spin-nematic phase
Deep inside the spin-nematic phase - both ferro and three sublattice, the renormalisation of the sound speed is brought about by the interaction between the acoustic phonons and the spin-nematic waves [35] via the coupling given by Eq. 7. To this end we use the _spin-nematic wave_ theory for the ferronematic (Matavev _et. al._[13]) and three sublattice nematic (Tsunetsugu _et. al._[18]) to write the spins in terms of the low energy Goldstone bosons of the respective nematic orders - summarised in Appendix C for completeness.
### The Ferronematic phase
To get the temperature dependence of \(\Delta v/v\) (Eq. 13) in the ferronematic, we calculate phonon self energy starting with the following Hamiltonian for the phonon and linear spin-nematic waves
\[\mathcal{H}_{FN}=H_{ph}+H_{sp,FN}+H_{sp-ph,FN}\;, \tag{16}\]
obtained from Eqs. 6 and 12. Here \(H_{ph}\) is the harmonic phonon Hamiltonian (Eq. 12), \(H_{sp,FN}\) and \(H_{sp-ph,FN}\) are respectively the linear spin-nematic wave Hamiltonian for the ferronematic phase and the phonon-spin-nematic wave coupling Hamiltonian respectively whose form we now discuss.
\(H_{sp,FN}\) can be obtained from Eq. 2 by expressing the spins in terms of boson operators that create spin-nematic wave.[13] This is done for the ferronematic state by introducing two bosons at every site \(i\), with creation operators given by \(b_{i1}^{\dagger}\) and \(b_{i2}^{\dagger}\) that capture deviations from the mean field ferronematic ground state. For a mean field state with the director along the \(\hat{\mathbf{z}}\)-direction, the wave function is \(\otimes_{i}\left|0\right\rangle_{i}\) (Appendix A) and \(b_{i1}^{\dagger}\left|0\right\rangle_{i}=\left|1\right\rangle_{i},b_{i1}\left| 1\right\rangle_{i}=\left|0\right\rangle_{i},b_{i2}^{\dagger}\left|0\right\rangle _{i}=\left|\bar{1}\right\rangle_{i}\) and \(b_{i2}\left|\bar{1}\right\rangle_{i}=\left|0\right\rangle_{i}\). The relation between the spin operators and the above bosons are given in Appendix C.1. Expressing the spin operators in terms of the bosons in Eq. 2 and approximating to the harmonic order we get
\[H_{sp,FN}=\sum_{\mathbf{k}}\omega_{\mathbf{k}}^{s}\psi_{\mathbf{k}}^{\dagger} \psi_{\mathbf{k}}\, \tag{17}\]
with \(\psi_{\mathbf{k}}=(d_{\mathbf{k},1},d_{-\mathbf{k},2}^{\dagger})\) where \(d_{\mathbf{k},1}\) and \(d_{\mathbf{k},2}\) are related to \((b_{\mathbf{k},1},b_{\mathbf{k},2})\) via Bogoliubov transformation (Eq. 57). The dispersion is given by
\[\omega_{\mathbf{k}}^{s}=6K_{0}\sqrt{\left(1-\gamma_{\mathbf{k}}\right)\left( 1+\gamma_{\mathbf{k}}-\frac{2J_{0}}{K_{0}}\gamma_{\mathbf{k}}\right)}\, \tag{18}\]
where \(\gamma_{\mathbf{k}}=\frac{1}{6}\sum_{\mathbf{\delta}}e^{i\mathbf{k}\cdot\mathbf{ \delta}}\) and \(\mathbf{\delta}\) is the distance to the six nearest neighbours. In the long wavelength limit, \(\omega_{\mathbf{k}}^{s}\approx\bar{c}_{s}|\mathbf{k}|\) with \(\bar{c}_{s}\) being the ferro spin-nematic wave speed.
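A short numerical sketch of the dispersion in Eq. 18 can make the linear long-wavelength behaviour explicit; the lattice spacing is set to one, the values of \(J_{0}\) and \(K_{0}\) are illustrative, and the prefactor is taken as \(6|K_{0}|\) so that the frequency is positive, which is an assumption about the sign convention.

```python
import numpy as np

# six nearest-neighbour vectors of the triangular lattice (unit lattice spacing)
deltas = [(np.cos(a), np.sin(a)) for a in np.arange(6) * np.pi / 3]

def gamma(kx, ky):
    """gamma_k = (1/6) sum_delta exp(i k.delta); real because of the +/-delta pairs."""
    return np.mean([np.cos(kx * dx + ky * dy) for dx, dy in deltas], axis=0)

def omega_s(kx, ky, J0=1.0, K0=-2.5):
    """Ferronematic spin-nematic wave dispersion of Eq. 18 (illustrative J0, K0)."""
    g = gamma(kx, ky)
    return 6 * abs(K0) * np.sqrt((1 - g) * (1 + g - 2 * J0 / K0 * g))

# at small |k| the dispersion is linear: omega_s ~ c_s |k|
k = np.linspace(1e-3, 0.2, 5)
print(omega_s(k, 0 * k) / k)  # approximately constant = nematic wave speed c_s
```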
The coupling between the phonons and the ferro spin-nematic waves is obtained by using the same bosonic representation of the spin operators in Eq. 7. The resultant Hamiltonian is given by
\[H_{sp-ph,FN}=H_{1,FN}+H_{2,FN} \tag{19}\]
where
\[H_{1,FN}= \sum_{\mathbf{k},\mathbf{q_{2}}}\psi_{\mathbf{k}+\mathbf{q_{2}}}^ {\dagger}\mathcal{M}_{\mathbf{k},\mathbf{q_{2}}}^{(1)}\psi_{\mathbf{q_{2}}}A_ {\mathbf{k}}\] \[H_{2,FN}= \sum_{\mathbf{k},\mathbf{k^{\prime}},\mathbf{q_{2}}}\psi_{ \mathbf{k}-\mathbf{k^{\prime}}+\mathbf{q_{2}}}^{\dagger}\mathcal{M}_{ \mathbf{k},\mathbf{k^{\prime}},\mathbf{q_{2}}}^{(2)}\psi_{\mathbf{q_{2}}}A_ {\mathbf{k}}A_{-\mathbf{k^{\prime}}}\] \[+\sum_{\mathbf{k}}\mathcal{M}_{\mathbf{k}}^{(2),0}A_{\mathbf{k}} A_{-\mathbf{k}} \tag{20}\]
represent respectively the linear (Eq. 8) and quadratic (Eq. 9) couplings with the phonons, as shown in Fig. 2. The detailed expressions of the scattering vertices \(\mathcal{M}_{\mathbf{k},\mathbf{q_{2}}}^{(1)}\), \(\mathcal{M}_{\mathbf{k},\mathbf{k^{\prime}},\mathbf{q_{2}}}^{(2)}\) and \(\mathcal{M}_{\mathbf{k}}^{(2),0}\) are given in Appendix D.1. Notably the second term in \(H_{2,FN}\), given by \(\mathcal{M}_{\mathbf{k}}^{(2),0}\), depends on the displacement operators only via a quadratic form similar to that of the harmonic potential. This corresponds to a renormalisation of the bare phonon frequency, and hence of the sound speed, due to ferronematic ordering.
The contributions from the spin-nematic waves of the ferronematic to \(\Delta v/v\) can be captured to the leading order in magnetoelastic coupling by calculating the phonon free energy due to \(\mathcal{M}_{\mathbf{k},\mathbf{q_{2}}}^{(1)}\) and \(\mathcal{M}_{\mathbf{k},\mathbf{k^{\prime}},\mathbf{q_{2}}}^{(2)}\) terms given by Feynman diagrams in Fig. 3.
The phonon self energy (in Eq. 14) due to the above two contributions is then given by
Figure 3: Diagrams used to calculate contribution to the phonon self energy due to nematic fluctuations (Fig. 2). The left diagram comes from vertices due to \(H_{1,FN}\) and right diagram from \(H_{2,FN}\) (Eq. 20).
Figure 2: Interactions between the phonon and ferronematic spin waves (Eq. 20). The left diagram comes from \(H_{1,FN}\) and right diagram from first term in \(H_{2,FN}\). Solid lines represent phonons and dashed lines represent ferronematic spin waves (\(\psi\) in Eq. 17)
\[\Sigma({\bf q},i\Omega_{n})=\Sigma_{1}({\bf q},i\Omega_{n})+\Sigma_{2}({\bf q},i \Omega_{n}) \tag{21}\]
where
\[\Sigma_{1}({\bf q},i\Omega_{n}) \approx -\frac{1}{\beta}\sum_{\begin{subarray}{c}{\bf q}_{2},\lambda,\\ \rho,\omega_{1}\end{subarray}}{\cal M}^{(1),\lambda\rho}_{-{\bf q},{\bf q}+{ \bf q}_{2}}{\cal M}^{(1),\rho\lambda}_{{\bf q},{\bf q}_{2}}G^{m,\lambda\lambda }_{0}({\bf q}_{2},i\omega_{1}) \tag{22}\] \[\qquad\qquad\times G^{m,\rho\rho}_{0}({\bf q}+{\bf q}_{2},i\Omega _{n}+i\omega_{1})\] \[\Sigma_{2}({\bf q},i\Omega_{n}) \approx 2{\cal M}^{(2),0}_{{\bf q}}+\sum_{{\bf q}_{2},\lambda}2{\cal M} ^{(2),\lambda\lambda}_{{\bf q},{\bf q}_{2}}\langle\psi^{\dagger\lambda}_{{\bf q }_{2}}\psi^{\lambda}_{{\bf q}_{2}}\rangle\,\]
with \(\lambda,\rho=1,2\), and \(G^{m,\lambda\rho}_{0}\) denotes the bare matrix ferronematic spin-wave Green's function, defined as \(G^{m,\lambda\rho}_{0}({\bf q},\tau-\tau^{\prime})=-\langle T_{\tau}(\psi^{\lambda}_{\bf q}(\tau)\psi^{\dagger\rho}_{\bf q}(\tau^{\prime}))\rangle\). Due to the diagonal form of the spin wave Hamiltonian (Eq. 17), the matrix is given by
\[G^{m}_{0}({\bf q},i\Omega_{n})=\begin{pmatrix}\frac{1}{i\Omega_{n}-\omega^{s}_{\bf q}}&0\\ 0&\frac{-1}{i\Omega_{n}+\omega^{s}_{-{\bf q}}}\end{pmatrix}. \tag{23}\]
The sum over the bosonic Matsubara frequencies in Eq. 22 is evaluated using the standard techniques [36] and using Eq. 13, the temperature dependence of the fractional change in sound speed is obtained as
\[\frac{\Delta v}{v}=\bar{c}_{1}+\bar{c}_{2}T^{3}\, \tag{24}\]
The factors \(\bar{c}_{1}\) and \(\bar{c}_{2}\) (see expressions in Appendix D.1) have contributions from first and second derivatives of both the bilinear and the biquadratic couplings and depend on the details of the lattice via the phonon spectrum and polarisation. However, the above temperature dependence is generically valid for other two dimensional lattices. The sound attenuation, \(\propto Im\Sigma/v_{0}\), can similarly be calculated from the imaginary part of the phonon self energy. At this order only \(\Sigma_{1}\) contributes, and the resulting sound attenuation is generically proportional, at low frequencies, to the phonon frequency. [37; 38]
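The \(T^{3}\) law can be anticipated from a schematic phase-space argument (not a substitute for the explicit evaluation in Appendix D.1): if the effective magnetoelastic vertex vanishes linearly with the nematic-wave momentum, the thermal part of the self energy for two-dimensional, linearly dispersing modes scales as
\[\Delta\Sigma\sim\int d^{2}q\,|\mathbf{q}|\,n_{B}(\bar{c}_{s}|\mathbf{q}|)\propto T^{3}\,\]
consistent with Eq. 24.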
### The three sublattice nematic phase
The \(\Delta v/v\) for the three sublattice nematic can be obtained in a similar way. In analogy with the ferronematic case (Eq. 16), the relevant Hamiltonian is given by
\[{\cal H}_{3SN}=H_{ph}+H_{sp,3SN}+H_{sp-ph,3SN}\, \tag{25}\]
where \(H_{ph}\) is the harmonic phonon Hamiltonian (Eq. 12), \(H_{sp,3SN}\) and \(H_{sp-ph,3SN}\) are respectively the linear spin-nematic wave Hamiltonian for the three sublattice nematic phase and the phonon-three sublattice spin-nematic wave coupling Hamiltonian.
Due to the non-uniform structure of this nematic phase, the calculations are somewhat more tedious and the relevant parts are relegated to Appendix D.2. The difference, however, in this case stems from the three sublattice spin-nematic wave spectrum. The spin-nematic wave theory about a three sublattice nematic state, [18] such as \(\prod_{\bar{\bf R}}|x\rangle_{\bar{\bf R},1}\,|y\rangle_{\bar{\bf R},\bar{2}} |z\rangle_{\bar{\bf R},3}\) is obtained as follows. Here \(\bar{\bf R}\) stands for the three sublattice unit cell and 1, 2 and 3 denote the sublattices of each unit cell with their nematic directors being along the \(\hat{\bf x}\), \(\hat{\bf y}\) and \(\hat{\bf z}\) directions respectively as shown in Fig. 1. We introduce two bosons \(\widetilde{\alpha}^{\dagger}\) and \(\widetilde{\beta}^{\dagger}\) at each sublattice to capture the deviations from the ground state, _e.g._, for sublattice-3, \(|z\rangle_{\bar{\bf R},3}\equiv|{\rm vac}\rangle_{\bar{\bf R},3}\) and \(|S_{z}=\pm 1\rangle_{\bar{\bf R},3}=\frac{1}{\sqrt{2}}(\widetilde{\alpha}^{ \dagger}_{\bar{\bf R},3}\pm i\widetilde{\beta}^{\dagger}_{\bar{\bf R},3})\,|{ \rm vac}\rangle_{\bar{\bf R},3}\). The details are summarised in Appendix C.2.
The resultant diagonalised harmonic Hamiltonian (similar to Eq. 17 for the ferronematic) is given by
\[H_{sp,3SN}=\sum_{{\bf k},\widetilde{\lambda}\widetilde{\rho}}\left[\widetilde{\omega}^{s}_{+,{\bf k}}\,\widetilde{d}^{\,\dagger}_{+,{\bf k},\widetilde{\lambda}\widetilde{\rho}}\widetilde{d}_{+,{\bf k},\widetilde{\lambda}\widetilde{\rho}}+\widetilde{\omega}^{s}_{-,{\bf k}}\,\widetilde{d}^{\,\dagger}_{-,{\bf k},\widetilde{\lambda}\widetilde{\rho}}\widetilde{d}_{-,{\bf k},\widetilde{\lambda}\widetilde{\rho}}\right]\, \tag{26}\]
where the Bogoliubov bosons \(\widetilde{d}_{\pm,{\bf k},\widetilde{\lambda}\widetilde{\rho}}\) and the two branches \(\widetilde{\omega}^{s}_{\pm,{\bf k}}\) follow from the sublattice bosons introduced above; the lower branch \(\widetilde{\omega}^{s}_{-,{\bf k}}\) is linear in \(|{\bf k}|\) at the smallest momenta and crosses over to a quadratic dependence beyond a crossover momentum \(k^{*}\). The fractional change in sound speed then follows from the phonon self energy \(\widetilde{\Sigma}\), as in the ferronematic case (details in Appendix D.2), and is given by
\[\frac{\Delta v}{v} \approx \lim_{\mathbf{q}\to 0}\Bigg{[}\int_{0}^{k^{*}}dk\int d\theta_{ \hat{\mathbf{k}}}\frac{Re\widetilde{\Sigma}_{\mathbf{k}}\left(\widetilde{\omega }_{-,\mathbf{k}}^{s}\propto|\mathbf{k}|\right)}{\omega_{0,\mathbf{q}}^{ph}} \tag{28}\] \[+ \int_{k^{*}}^{\infty}dk\int d\theta_{\hat{\mathbf{k}}}\frac{Re \widetilde{\Sigma}_{\mathbf{k}}\left(\widetilde{\omega}_{-,\mathbf{k}}^{s} \propto|\mathbf{k}|^{2}\right)}{\omega_{0,\mathbf{q}}^{ph}}\Bigg{]}\,\]
accounting for the crossover between the linear and quadratic dispersions at \(k^{*}\). This approximately leads to, in the three sublattice nematic,
\[\frac{\Delta v}{v}=\left\{\begin{array}{ll}\widetilde{c}_{1}+\widetilde{c}_ {2}T^{3}&\mbox{for $\beta J_{0}\gg 1$}\\ \widetilde{c}_{3}+\widetilde{c}_{4}T^{2}&\mbox{for $\beta J_{0}\sim 1$}\\ \widetilde{c}_{5}+\widetilde{c}_{6}T&\mbox{for $\beta J_{0}<1$}\end{array}\right. \tag{29}\]
where \(\beta=1/(k_{B}T)\) and the details of the pre-factors are given in Appendix D.2. No such crossover behaviour is observed for the ferronematic case since the spin wave dispersion remains linear in \(k\) even on setting \(J_{0}=0\).
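The exponents in Eq. 29 can be rationalised by the same schematic counting (again only a rough sketch of the full calculation): once the quadratic part of the branch, \(\widetilde{\omega}^{s}_{-,\mathbf{k}}\propto|\mathbf{k}|^{2}\), dominates, a two-dimensional Bose integral of the form \(\int d^{2}k\,|\mathbf{k}|^{2}\,n_{B}(\widetilde{\omega}^{s}_{-,\mathbf{k}})\) scales as \(T^{2}\), while in the regime where the relevant modes are classically occupied, \(n_{B}\approx k_{B}T/\widetilde{\omega}^{s}_{-,\mathbf{k}}\), the same integral becomes linear in \(T\).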
### The thermal paramagnet
In contrast to the low temperature nematic phases discussed above, in the high temperature paramagnetic phase the spin dynamics is faster than the acoustic phonons, and integrating out the spins gives the following effective interaction Hamiltonian for the phonons,[8]
\[H_{\rm eff}=\sum_{\mathbf{k}}\omega_{0,\mathbf{k}}^{ph}a_{\mathbf{k}}^{\dagger }a_{\mathbf{k}}+\frac{1}{2}\sum_{\mathbf{k},\mathbf{k}^{\prime}}V_{\mathbf{k} \mathbf{k}^{\prime}}A_{\mathbf{k}}A_{-\mathbf{k}^{\prime}} \tag{30}\]
where
\[V_{\mathbf{k}\mathbf{k}^{\prime}}=\langle\bar{U}_{\mathbf{k}\mathbf{k}^{\prime }}^{(2)}\rangle-\beta\langle\langle\bar{U}_{\mathbf{k}}^{(1)}\bar{U}_{- \mathbf{k}^{\prime}}^{(1)}\rangle\rangle \tag{31}\]
Here \(\bar{U}_{\mathbf{k}}^{(1)}=U_{\mathbf{k}}^{(1)}+\widetilde{U}_{\mathbf{k}}^{( 1)}\) and \(\bar{U}_{\mathbf{k}\mathbf{k}^{\prime}}^{(2)}=U_{\mathbf{k}\mathbf{k}^{\prime }}^{(2)}+\widetilde{U}_{\mathbf{k}\mathbf{k}^{\prime}}^{(2)}\) (which have been defined in Eq. 10 and Eq. 11) and \(\langle\langle....\rangle\rangle\) denotes the connected correlators for the spins, averaged over a thermal ensemble. The self energy is given by :
\[\Sigma(\mathbf{q},i\Omega_{n})=V_{\mathbf{q}\mathbf{q}} \tag{32}\]
Calculating the leading order temperature dependence of \(\Delta v/v\) therefore involves various spin correlators, which are evaluated using a high temperature series expansion. To leading order in \(1/T\), this gives
\[\frac{\Delta v}{v}=a_{1}+\widetilde{a}_{2}\beta\, \tag{33}\]
where, notably, a constant contribution arises from the biquadratic term such that
\[a_{1}=\sum_{\mathbf{\delta}}\frac{1}{3Mv_{0}^{2}}\big{(}\hat{\mathbf{q}}.\mathbf{ \delta})^{2}\left(\mathbf{e}_{-\mathbf{q}}.\frac{\partial^{2}K}{\partial\mathbf{ \delta}^{2}}.\mathbf{e}_{\mathbf{q}}\right)\, \tag{34}\]
where \(\mathbf{\delta}\) runs over the six nearest neighbours. It is evident that this constant arises only from the biquadratic coupling and would be absent for a pure Heisenberg model.
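As a quick check of this statement, for uncorrelated spin-1 sites (the \(\beta\to 0\) limit) the on-site average is \(\langle S_{i}^{\alpha}S_{i}^{\beta}\rangle=\frac{2}{3}\delta^{\alpha\beta}\), so for \(i\neq j\)
\[\langle\mathbf{S}_{i}\cdot\mathbf{S}_{j}\rangle\to 0\,\qquad\langle(\mathbf{S}_{i}\cdot\mathbf{S}_{j})^{2}\rangle\to\sum_{\alpha\beta}\tfrac{2}{3}\delta^{\alpha\beta}\,\tfrac{2}{3}\delta^{\alpha\beta}=\tfrac{4}{3}\,\]
so the bilinear channel averages to zero at infinite temperature while the biquadratic channel does not, which is what feeds the constant \(a_{1}\).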
## IV The Landau-Ginzburg theory
The above microscopic approach works deep inside the nematic at low temperatures or in the thermal paramagnet at high temperatures. For the general elastic response across the entire phase diagram, a more phenomenological Landau-Ginzburg theory, which accounts for the \(120^{\circ}\) spiral and the spin-nematic (both ferro and three sublattice) orderings as well as the elastic degrees of freedom, is useful to capture the spin-lattice physics of the system.[4] Below, we construct this theory to derive the elastic signatures of a spin-nematic in the context of NiGa\({}_{2}\)S\({}_{4}\). Our calculations are easily generalised to other situations, in particular to spin-orbit coupled multipolar orders.[4; 39]
We start with the symmetry analysis of the three fields (the magnetic, the spin-nematic and the elastic) under the point group symmetry of NiGa\({}_{2}\)S\({}_{4}\), which is D\({}_{3d}\). For our present calculations, we choose the largest unit cell for all the types of orderings discussed above, a single triangle, and systematically isolate the relevant symmetry allowed terms for the three fields including their interactions. We choose an up triangle that consists of three sites of Ni\({}^{2+}\) (the triangular unit and site labels are shown in Fig. 1; their positions are \(\mathbf{r}_{1}=0\), \(\mathbf{r}_{2}=\hat{\mathbf{x}}\) and \(\mathbf{r}_{3}=\frac{1}{2}\hat{\mathbf{x}}+\frac{\sqrt{3}}{2}\hat{\mathbf{y}}\)) and impose inversion symmetry to obtain the normal modes. The non-trivial transformations for the up triangle are \(C_{3}\) and \(\sigma_{h}C_{2}^{\prime}\) (these transformations keep the centre of the triangle fixed; \(\sigma_{h}\) is required along with the 2-fold rotation to bring the crystal field environment back to itself, see Appendix E for details). Combining these with inversion generates all the 6 conjugacy classes of D\({}_{3d}\). We can then decompose the dipole and quadrupole fields and the elastic modes into irreducible representations (irreps) to construct the Landau-Ginzburg free energy.
### The magnetic (dipolar) and nematic (quadrupolar) modes
For the Hamiltonian in Eq. 2, the system has full SU(2) spin rotation symmetry. Hence the spin operators remain un-rotated (in spin space) under the various lattice transformations, while the site indices transform. Using these latter transformations, the non-trivial irreducible representations are constructed.
The irreducible representations for the dipoles and quadrupoles on the up triangle, with site labels as shown in Fig. 1, are given in Table 1. Note that for both fields the irreducible representations consist of a singlet (\(a\)) and a doublet (\(e=e1,e2\)). However, it is useful to note that while the dipolar field is odd under time reversal, the quadrupolar field is even. The details of the symmetry transformations of the irreps are listed in Appendix E. All the relevant orders can be represented as different combinations of the irreducible representations.
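As an illustration of this structure, the following minimal numerical sketch (with a hypothetical site-permutation convention \(1\to 2\to 3\to 1\) for \(C_{3}\)) verifies that \(m_{a}\) is invariant while \((m_{e1},m_{e2})\) transform into each other as a two-dimensional rotation by \(120^{\circ}\):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(3, 3))  # rows: S_1, S_2, S_3, each an arbitrary 3-vector

def irreps(S):
    # Irreducible combinations from Table 1
    m_a  = (S[0] + S[1] + S[2]) / np.sqrt(3)
    m_e1 = (S[0] - S[1]) / np.sqrt(2)
    m_e2 = (S[0] + S[1] - 2 * S[2]) / np.sqrt(6)
    return m_a, m_e1, m_e2

m_a, m_e1, m_e2 = irreps(S)
# C3: cyclic permutation of the three sites of the up triangle (assumed convention)
m_a_r, m_e1_r, m_e2_r = irreps(S[[1, 2, 0]])

c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
print(np.allclose(m_a_r, m_a))                     # singlet: invariant
print(np.allclose(m_e1_r,  c * m_e1 + s * m_e2))   # doublet: rotated by 120 degrees
print(np.allclose(m_e2_r, -s * m_e1 + c * m_e2))
```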
#### iii.1.1 Magnetic orders
The relevant magnetic orders are :
Ferromagnetic order: This is given by \(\langle\mathbf{m}_{a}\rangle\neq 0\), \(\langle\mathbf{m}_{e1}\rangle=0\) and \(\langle\mathbf{m}_{e2}\rangle=0\) (where bold font is used to suppress the spin index). Inverting the relations for the spins in Table 1, we get:
\[\langle\mathbf{S}_{1}\rangle =\frac{1}{6}\left(2\sqrt{3}\langle\mathbf{m}_{a}\rangle+3\sqrt{2} \langle\mathbf{m}_{e1}\rangle+\sqrt{6}\langle\mathbf{m}_{e2}\rangle\right)\] \[\langle\mathbf{S}_{2}\rangle =\frac{1}{6}\left(2\sqrt{3}\langle\mathbf{m}_{a}\rangle-3\sqrt{2} \langle\mathbf{m}_{e1}\rangle+\sqrt{6}\langle\mathbf{m}_{e2}\rangle\right)\] \[\langle\mathbf{S}_{3}\rangle =\frac{1}{3}\left(\sqrt{3}\langle\mathbf{m}_{a}\rangle-\sqrt{6} \langle\mathbf{m}_{e2}\rangle\right) \tag{35}\]
which is evidently a ferromagnet for only \(\langle\mathbf{m}_{a}\rangle\neq 0\). The ferromagnetic order also has a non-vanishing parasitic quadrupolar moment (see Appendix A).
\(120^{\circ}\) spiral order: The spiral order is given by the three simultaneous conditions
\[\langle\mathbf{m}_{a}\rangle=0,\ \ \ \ \ \langle\mathbf{m}_{e1} \rangle.\langle\mathbf{m}_{e2}\rangle=0\] \[\langle\mathbf{m}_{e1}\rangle.\langle\mathbf{m}_{e1}\rangle= \langle\mathbf{m}_{e2}\rangle.\langle\mathbf{m}_{e2}\rangle \tag{36}\]
The first condition (using Eq. 35) translates into the condition of zero magnetisation per triangle, while the second and third conditions result in equal magnitudes of the moments at the different sites, _i.e._, \(\langle\mathbf{S}_{1}\rangle.\langle\mathbf{S}_{1}\rangle=\langle\mathbf{S}_{2}\rangle.\langle\mathbf{S}_{2}\rangle=\langle\mathbf{S}_{3}\rangle.\langle\mathbf{S}_{3}\rangle\), and equal angles between the ordered moments, _i.e._, \(\langle\mathbf{S}_{1}\rangle.\langle\mathbf{S}_{2}\rangle=\langle\mathbf{S}_{2}\rangle.\langle\mathbf{S}_{3}\rangle=\langle\mathbf{S}_{3}\rangle.\langle\mathbf{S}_{1}\rangle\). From this it is fairly easy to show that the angle between any two nearest neighbour moments is \(120^{\circ}\), such that
\[\langle\mathbf{S}_{i}\rangle=\] \[\frac{\sqrt{6}}{3}\bigg{(}\langle\mathbf{m}_{e1}\rangle\text{ cos}\bigg{(}\mathbf{q}.\mathbf{r}_{i}+\frac{\pi}{6}\bigg{)}+\langle \mathbf{m}_{e2}\rangle\text{sin}\bigg{(}\mathbf{q}.\mathbf{r}_{i}+\frac{\pi}{6 }\bigg{)}\bigg{)} \tag{37}\]
with \(i=1,2,3\) and \(\mathbf{q}=\frac{2\pi}{3}\hat{\mathbf{x}}+\frac{2\pi}{\sqrt{3}}\hat{\mathbf{y}}\). The phase factor of \(\pi/6\) is due to our choice of the doublet modes of the spins in Table 1. Similar to the ferromagnet, the spiral order also has non-vanishing parasitic quadrupolar moments.
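As a check of the parametrisation in Eq. 37, \(\mathbf{q}\cdot\mathbf{r}_{1}=0\), \(\mathbf{q}\cdot\mathbf{r}_{2}=2\pi/3\) and \(\mathbf{q}\cdot\mathbf{r}_{3}=4\pi/3\) for the site positions listed above, so that for mutually orthogonal \(\langle\mathbf{m}_{e1}\rangle\) and \(\langle\mathbf{m}_{e2}\rangle\) of equal magnitude the three ordered moments have equal length and are rotated by \(120^{\circ}\) from site to site, as required.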
#### iii.1.2 Nematic orders
For pure spin nematic orders, the expectation value of the magnetic moments vanishes, _i.e._, \(\langle\mathbf{S}_{i}\rangle=0\). The on-site quadrupolar tensor is symmetric and can be diagonalized, and using Eq. 1 one can obtain the expectation values \(\langle(\mathbf{S}.\hat{\mathbf{e}}_{1})^{2}\rangle\), \(\langle(\mathbf{S}.\hat{\mathbf{e}}_{2})^{2}\rangle\) and \(\langle(\mathbf{S}.\hat{\mathbf{e}}_{3})^{2}\rangle\) along the principal axes \(\hat{\mathbf{e}}_{1}\), \(\hat{\mathbf{e}}_{2}\) and \(\hat{\mathbf{e}}_{3}\) of the quadrupole ellipsoid.[40] In terms of the quadrupole ellipsoid, there are three possibilities: (1) no nematic order: \(\langle(\mathbf{S}.\hat{\mathbf{e}}_{1})^{2}\rangle=\langle(\mathbf{S}.\hat{\mathbf{e}}_{2})^{2}\rangle=\langle(\mathbf{S}.\hat{\mathbf{e}}_{3})^{2}\rangle\); (2) biaxial nematic: \(\langle(\mathbf{S}.\hat{\mathbf{e}}_{1})^{2}\rangle\neq\langle(\mathbf{S}.\hat{\mathbf{e}}_{2})^{2}\rangle\neq\langle(\mathbf{S}.\hat{\mathbf{e}}_{3})^{2}\rangle\); and (3) uniaxial nematic: \(\langle(\mathbf{S}.\hat{\mathbf{e}}_{i})^{2}\rangle=\langle(\mathbf{S}.\hat{\mathbf{e}}_{j})^{2}\rangle\neq\langle(\mathbf{S}.\hat{\mathbf{e}}_{k})^{2}\rangle\) for \(i\neq j\neq k\). For spin-1, the first case implies a paramagnet, while the biaxial nematic is not relevant for us. Hence we focus on the uniaxial nematic, where, if the unequal expectation value is greater (lesser) than the equal ones, we have a rod (disc)-like uniaxial nematic. For spin-1, only a uniaxial nematic of the disc type is allowed.[16; 34] Therefore, for the on-site quadrupolar order relevant to our calculations, the order parameter is characterised by
\[\langle Q_{i}^{\alpha\beta}\rangle=\mathcal{Q}_{i,N}\big{(}n_{i}^{\alpha}n_{i} ^{\beta}-\delta^{\alpha\beta}/3\big{)} \tag{38}\]
with \(\mathcal{Q}_{i,N}<0\) and \(\mathbf{n}_{i}\) being the director of the nematic.
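For orientation (a minimal check, assuming Eq. 1 defines the standard traceless quadrupole \(Q^{\alpha\beta}=\frac{1}{2}\{S^{\alpha},S^{\beta}\}-\frac{2}{3}\delta^{\alpha\beta}\)), the spin-1 director state of Appendix A satisfies \(\langle(\mathbf{S}\cdot\mathbf{n}_{i})^{2}\rangle=0\) and \(\langle(\mathbf{S}\cdot\hat{\mathbf{e}}_{\perp})^{2}\rangle=1\) for any \(\hat{\mathbf{e}}_{\perp}\perp\mathbf{n}_{i}\), which corresponds to \(\mathcal{Q}_{i,N}=-1\) in Eq. 38, i.e. the disc-like case.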
All possible three sublattice disk-like uniaxial nematic orders can be obtained from the quadrupolar irreps in Table 1. The on-site expectation values obtained by inverting the relations are given by :
\[\langle Q_{1}^{\alpha\beta}\rangle =\frac{1}{6}\left(2\sqrt{3}\langle Q_{a}^{\alpha\beta}\rangle+3 \sqrt{2}\langle Q_{e1}^{\alpha\beta}\rangle+\sqrt{6}\langle Q_{e2}^{\alpha \beta}\rangle\right)\] \[\langle Q_{2}^{\alpha\beta}\rangle =\frac{1}{6}\left(2\sqrt{3}\langle Q_{a}^{\alpha\beta}\rangle-3 \sqrt{2}\langle Q_{e1}^{\alpha\beta}\rangle+\sqrt{6}\langle Q_{e2}^{\alpha \beta}\rangle\right)\] \[\langle Q_{3}^{\alpha\beta}\rangle =\frac{1}{3}\left(\sqrt{3}\langle Q_{a}^{\alpha\beta}\rangle-\sqrt {6}\langle Q_{e2}^{\alpha\beta}\rangle\right) \tag{39}\]
Ferronematic order: This is given by
\[\langle Q_{a}^{\alpha\beta}\rangle=\sqrt{3}\mathcal{Q}_{FN}\big{(}n^{\alpha}n^{ \beta}-\frac{1}{3}\delta^{\alpha\beta}\big{)},\ \ \ \ \langle\mathbf{Q}_{\mathbf{c}}\rangle=0 \tag{40}\]
The on-site expectation values can be obtained from Eq. 39 and are given by \(\langle Q_{1}^{\alpha\beta}\rangle=\langle Q_{2}^{\alpha\beta}\rangle=\langle Q_{3 }^{\alpha\beta}\rangle=\frac{1}{\sqrt{3}}\ \langle Q_{a}^{\alpha\beta}\rangle\), which is clearly a ferronematic (Eq. 3) that is energetically favoured[15] when \(K<0\) in Eq. 2. For \(J>0\) and \(K<0\), the ferronematic phase is stable for \(|K|/J>2\) within mean-field analysis.[19]
Three sublattice nematic order: This is given by:
\[\langle Q_{a}^{\alpha\beta}\rangle =0 \tag{41}\] \[\langle Q_{e1}^{\alpha\beta}\rangle =\frac{\mathcal{Q}_{3SN}}{\sqrt{2}}(n_{1}^{\alpha}n_{1}^{\beta}-n_{2 }^{\alpha}n_{2}^{\beta})\] (42) \[\langle Q_{e2}^{\alpha\beta}\rangle =\frac{\mathcal{Q}_{3SN}}{\sqrt{6}}(n_{1}^{\alpha}n_{1}^{\beta}+n_{2 }^{\alpha}n_{2}^{\beta}-2n_{3}^{\alpha}n_{3}^{\beta})\, \tag{43}\]
where \(\mathbf{n}_{1},\mathbf{n}_{2},\mathbf{n}_{3}\) are mutually orthogonal unit vectors and \(\mathcal{Q}_{3SN}<0\). Again, using Eq. 39, we can see that the above conditions correspond to the three sublattice
\begin{table}
\begin{tabular}{|c|c|} \hline Irrep & Expression \\ \hline \(m_{a}^{\alpha}\) & \((S_{1}^{\alpha}+S_{2}^{\alpha}+S_{3}^{\alpha})/\sqrt{3}\) \\ \hline \(m_{e1}^{\alpha}\) & \((S_{1}^{\alpha}-S_{2}^{\alpha})/\sqrt{2}\) \\ \(m_{e2}^{\alpha}\) & \((S_{1}^{\alpha}+S_{2}^{\alpha}-2S_{3}^{\alpha})/\sqrt{6}\) \\ \hline \hline \(Q_{a}^{\alpha\beta}\) & \((Q_{1}^{\alpha\beta}+Q_{2}^{\alpha\beta}+Q_{3}^{\alpha\beta})/\sqrt{3}\) \\ \hline \(Q_{e1}^{\alpha\beta}\) & \((Q_{1}^{\alpha\beta}-Q_{2}^{\alpha\beta})/\sqrt{2}\) \\ \(Q_{e2}^{\alpha\beta}\) & \((Q_{1}^{\alpha\beta}+Q_{2}^{\alpha\beta}-2Q_{3}^{\alpha\beta})/\sqrt{6}\) \\ \hline \end{tabular}
\end{table}
Table 1: Irreducible representations for spins and quadrupoles (\(\alpha,\beta=x,y,z\)).
nematic order with the directors on sites 1, 2 and 3 being \(\mathbf{n}_{1}\), \(\mathbf{n}_{2}\) and \(\mathbf{n}_{3}\) respectively. Such three sublattice ordering is expected to be stabilised when the biquadratic term in the spin Hamiltonian (Eq. 2) has \(K>0\). For \(J>0\) and \(K>0\), the three sublattice nematic is obtained for \(K/J>1\).[19]
#### iii.1.3 Coexistence of nematic order and collinear sinusoidal dipolar order
In passing, we point out the possibility of an interesting phase of coexisting nematic and dipole order with a three site unit-cell and hence captured within the above formulation. This is given by \(\langle\mathbf{m}_{a}\rangle=0\), \(\langle\mathbf{m}_{e1}\rangle\parallel\langle\mathbf{m}_{e2}\rangle\). The resultant dipolar order is collinear and sinusoidal [41] with the spin configuration given by \(\langle\mathbf{S}_{i}\rangle=\frac{\sqrt{6}}{3}\langle\mathbf{m}_{e1}\rangle \bigg{(}\sqrt{1+\tilde{\lambda}^{2}}\bigg{)}\mathrm{sin}\bigg{(}\mathbf{q}. \mathbf{r}_{i}+\frac{\pi}{6}+\phi\bigg{)}\), where \(\mathbf{q}=\frac{2\pi}{3}\hat{\mathbf{x}}+\frac{2\pi}{\sqrt{3}}\hat{\mathbf{y}}\). The above form can be obtained from Eq. 37 by using \(\langle\mathbf{m}_{e2}\rangle=\widetilde{\lambda}\langle\mathbf{m}_{e1}\rangle\). Thus the state corresponds to collinear sinusoidal magnetic order. Since the spins are not completely polarised it allows for (non-parasitic) nematic ordering. Using Eq. 6 we find that the sinusoidal dipole ordering is accompanied by bi-axial nematic ordering with the two orthogonal principal directions of the nematic directors being along \(\hat{n}=\langle\mathbf{m}_{e1}\rangle/|\langle\mathbf{m}_{e1}\rangle|\) and perpendicular to it respectively. The quadrupole moment and the magnetic moment do not have the same symmetry implying the coexistence of nematic and collinear sinusoidal dipolar order. In the present Hamiltonian we do not expect this order to be stabilised. However, they may be relevant for more generic models such as the one studied in Ref. [42].
### The Elastic modes
Having discussed the dipole and the quadrupole modes of interest, we now turn to the elastic modes that they can couple to. The normal modes of a single triangle are given in Appendix E.2. This consists of
\[\text{a singlet}:\epsilon_{a}\hskip 56.905512pt\text{a doublet}:( \epsilon_{e1},\epsilon_{e2}) \tag{44}\]
They are linearly related to the Cartesian strain tensors \(\varepsilon_{ij}\) (where \(\varepsilon_{ij}=\frac{1}{2}\left(\frac{\partial u_{i}}{\partial R_{j}}+\frac{\partial u_{j}}{\partial R_{i}}\right)\); \(u_{i}\) is the \(i^{th}\) component of the displacement from the equilibrium position \(R_{i}\)) as (see Appendix E.2)
\[\varepsilon_{a} = \frac{1}{2}(\varepsilon_{xx}+\varepsilon_{yy});\] \[\varepsilon_{e1} = \varepsilon_{xy},\hskip 28.452756pt\varepsilon_{e2}=\frac{1}{2}( \varepsilon_{xx}-\varepsilon_{yy}) \tag{45}\]
We now study their coupling with the dipole and the quadrupole modes to understand the nature of magnetoelastic response of the system.
### The Landau free energy
To write the Landau-Ginzburg free energy, we consider the long wavelength symmetry allowed terms for the dipolar, quadrupolar and the elastic modes. At the mean field level we only consider uniform terms and drop all spatially fluctuating ones.
#### iii.3.1 Dipole-quadrupole free energy
The dipole-quadrupole free energy is
\[\mathcal{F}_{mQ}=\mathcal{F}_{m}+\mathcal{F}_{Q}+\mathcal{F}_{mQ}^{\text{int} }\, \tag{46}\]
where the three terms denote the pure dipolar free energy, the pure quadrupolar free energy, and the interaction between the dipoles and the quadrupoles.
Dipolar terms: The dipolar modes are odd under time reversal, so they appear only in even powers in the free energy. The dipoles have \(a\) and \(e\) irreps, so the dipolar free energy can be written as the sum of the free energies of the individual modes and the interaction terms between them
\[\mathcal{F}_{m}=\mathcal{F}_{m,a}+\mathcal{F}_{m,e}+\mathcal{F}_{m,ae}^{int}\, \tag{47}\]
where, up to quartic order,
\[\mathcal{F}_{m,a}=r_{m_{a}}(\mathbf{m}_{a}.\mathbf{m}_{a})+u_{m_{a}}( \mathbf{m}_{a}.\mathbf{m}_{a})^{2}\, \tag{48}\]
\[\mathcal{F}_{m,e} = r_{m_{e}}(\mathbf{m}_{e}.\mathbf{m}_{e})+u_{m_{e}}(\mathbf{m}_{ e}.\mathbf{m}_{e})^{2} \tag{49}\] \[+ v_{m_{e}}[(\mathbf{m}_{e1}.\mathbf{m}_{e2})^{2}-(\mathbf{m}_{e1}.\mathbf{m}_{e1})(\mathbf{m}_{e2}.\mathbf{m}_{e2})]\,\]
are the free energies of singlet and doublet modes while
\[\mathcal{F}_{m,ae}^{int}= v_{m}(\mathbf{m}_{a}.\mathbf{m}_{a})(\mathbf{m}_{e}. \mathbf{m}_{e})+\tilde{v}_{m}(m_{a}^{\alpha}m_{a}^{\beta})(m_{e}^{\alpha}.m_{ e}^{\beta})\, \tag{50}\]
represents the interaction between them.
As discussed in section IV.1.1, ferromagnetic order corresponds to \(\langle\mathbf{m}_{a}\rangle\neq 0\) with \(\langle\mathbf{m}_{e}\rangle=0\). The mean field theory leads to a continuous transition between paramagnetic (for \(r_{m_{a}}>0\)) and ferromagnetic (for \(r_{m_{a}}<0\)) ordered state which takes place at \(r_{m_{a}}=0\). However, for the microscopic model in Eq. 2, we expect that \(r_{m_{a}}>0\).
For the doublet mode, the continuous transition between the thermal paramagnet (for \(r_{m_{e}}>0\)) and the dipole ordered phase (for \(r_{m_{e}}<0\)) occurs at \(r_{m_{e}}=0\) for Eq. 49. The details of the dipole ordered phase are controlled by \(v_{m_{e}}\).[41] For \(v_{m_{e}}>0\ (<0)\), the \(120^{\circ}\) spiral (collinear sinusoidal) state is stabilised. Note that the dipolar free energy for the doublet mode, Eq. 49, is invariant under:
\[\mathbf{m}_{e1} \rightarrow \mathbf{m}_{e1}^{\prime}=\cos\theta\mathbf{m}_{e1}-\sin\theta \mathbf{m}_{e2}\] \[\mathbf{m}_{e2} \rightarrow \mathbf{m}_{e2}^{\prime}=\pm(\sin\theta\mathbf{m}_{e1}+\cos\theta \mathbf{m}_{e2})\, \tag{51}\]
for \(\theta\in(0,2\pi]\). The origin of this enhanced symmetry is due to the enlargement of the translation symmetry
from \(\mathbb{Z}_{3}\) to \(U(1)\), as can be quickly checked by writing out the spin configurations in Eq. 37. This enhanced symmetry, absent in the microscopic model, is broken at sixth order by the term \((\mathbf{m}_{e1}\cdot\mathbf{m}_{e1}-\mathbf{m}_{e2}\cdot\mathbf{m}_{e2})[(\mathbf{m}_{e1}\cdot\mathbf{m}_{e1}-\mathbf{m}_{e2}\cdot\mathbf{m}_{e2})^{2}-12(\mathbf{m}_{e1}\cdot\mathbf{m}_{e2})^{2}]\).[43]
Finally the interaction term in Eq. 50 leads to effective attraction (for \(v_{m},\tilde{v}_{m}<0\)) or repulsion (for \(v_{m},\tilde{v}_{m}>0\)) between the ferromagnetic and the spiral or the sinusoidal orders. In particular, attraction between the ferromagnetic and collinear sinusoidal order can open up a regime of ferrimagnetic order where both these order parameters are non-zero.
Quadrupolar terms: The quadrupolar modes are even under time reversal and can be compactly written in an SU(2) spin rotation invariant form in terms of traces over spin indices, giving rise to
\[\mathcal{F}_{Q}=\mathcal{F}_{Q,a}+\mathcal{F}_{Q,e}+\mathcal{F}_{Q,ae}^{int}. \tag{52}\]
where up to quartic orders,
\[\mathcal{F}_{Q,a}=r_{Q_{a}}\mathrm{Tr}\mathbf{Q}_{a}^{2}+w_{Q_{a }}\mathrm{Tr}\mathbf{Q}_{a}^{3}+u_{Q_{a}}\mathrm{Tr}\mathbf{Q}_{a}^{4} \tag{53}\]
\[\mathcal{F}_{Q,e}=r_{Q_{e}}\mathrm{Tr}(\mathbf{Q}_{e1}^{2}+\mathbf{Q}_{e2}^{2})+s_{Q_{e}}\mathrm{Tr}(\mathbf{Q}_{e2}^{3}-3\mathbf{Q}_{e1}^{2}\mathbf{Q}_{e2})\] \[+u_{Q_{e,1}}\mathrm{Tr}(\mathbf{Q}_{e1}^{2}+\mathbf{Q}_{e2}^{2})^{2}+u_{Q_{e,2}}[\mathrm{Tr}(\mathbf{Q}_{e1}^{2}+\mathbf{Q}_{e2}^{2})]^{2}\] \[+u_{Q_{e,3}}[\mathrm{Tr}(\mathbf{Q}_{e1}\mathbf{Q}_{e2})\mathrm{Tr}(\mathbf{Q}_{e1}\mathbf{Q}_{e2})-\mathrm{Tr}(\mathbf{Q}_{e1}^{2})\mathrm{Tr}(\mathbf{Q}_{e2}^{2})]. \tag{54}\]
for the singlet and doublet modes and
\[\mathcal{F}_{Q,ae}^{int}= s_{Q_{a}Q_{e}}Q_{a}^{\alpha\beta}\bigg[\bigg(Q_{e1}^{\beta\gamma}Q_{e1}^{\gamma\alpha}+Q_{e2}^{\beta\gamma}Q_{e2}^{\gamma\alpha}\bigg)-\frac{1}{3}\delta^{\alpha\beta}\mathrm{Tr}(\mathbf{Q}_{e1}^{2}+\mathbf{Q}_{e2}^{2})\bigg]\, \tag{55}\]
represents the leading order symmetry allowed interaction between them. On general grounds all associated transitions out of the nematic orders described by Eq. 52, are first order (without fine-tuning) due to the presence of the third order terms. Understanding these different nematic orders is tedious due to the matrix order parameter \(\mathbf{Q}\) and here we restrict ourselves to the simple nematic orders relevant to Eq. 2.
As discussed in section IV.1.2, the ferronematic order corresponds to Eq. 40. This is obtained in the regime \(r_{Q_{a}}<0\) and \(r_{Q_{e}}>0\) in Eq. 52, where the above solution is obtained by considering a general singlet quadrupolar tensor in its eigen-basis (\(\mathbf{Q}_{a}=\mathrm{diag}(l_{1},l_{2},-l_{1}-l_{2})\)) and extremising with respect to the eigenvalues.
For the three sublattice nematic which is an ordering in the doublet mode, we substitute the ansatz of Eq. (42) and (43) in the doublet free energy Eq.(54) to get,
\[\mathcal{F}_{Q,e}= 2r_{Q_{e}}\mathcal{Q}_{3SN}^{2}-\frac{4}{\sqrt{6}}s_{Q_{e}} \mathcal{Q}_{3SN}^{3}\] \[+(4u_{Q_{e,1}}/3+4u_{Q_{e,2}}-u_{Q_{e,3}})\mathcal{Q}_{3SN}^{4}. \tag{56}\]
For a positive quartic term, we get a three sublattice nematic via a first order phase transition.
The interaction term between the singlet and doublet quadrupolar modes (Eq. 55) indicates that ordering of the doublet mode would give rise to a parasitic singlet quadrupole expectation value of
\[Q_{a}^{\alpha\beta}\approx-\frac{s_{Q_{a}Q_{e}}}{2r_{Q_{a}}}[(Q_{e1}^{\beta \gamma}Q_{e1}^{\gamma\alpha}+Q_{e2}^{\beta\gamma}Q_{e2}^{\gamma\alpha})-\frac {Tr(\mathbf{Q}_{e}^{2})\delta^{\alpha\beta}}{3}].\]
However, for the particular three sublattice nematic (Eqs. 42 and 43) such parasitic ferronematic order vanishes.
Dipole-quadrupole interaction: Due to time reversal symmetry, the interaction terms between dipoles and quadrupoles consist of even powers of the dipoles and any power of the quadrupoles consistent with the other symmetries. Therefore the lowest order term has the form \(\sim m^{2}Q\). This implies that pure quadrupolar ordering renormalises the mass for the dipoles, while dipolar ordering gives rise to parasitic quadrupole moments. The nature of the parasitic quadrupole moments is obtained by extremising the leading order dipole-quadrupole interaction:
\[\mathcal{F}_{mQ}^{\mathrm{int}}= s_{m_{a}Q_{a}}Q_{a}^{\alpha\beta}\bigg(m_{a}^{\alpha}m_{a}^{\beta}-\frac{1}{3}(\mathbf{m}_{a}\cdot\mathbf{m}_{a})\delta^{\alpha\beta}\bigg)+s_{m_{e}Q_{a}}Q_{a}^{\alpha\beta}\bigg(m_{e1}^{\alpha}m_{e1}^{\beta}+m_{e2}^{\alpha}m_{e2}^{\beta}-\frac{1}{3}(\mathbf{m}_{e}\cdot\mathbf{m}_{e})\delta^{\alpha\beta}\bigg)\] \[+s_{m_{e}Q_{e}}\bigg[-2Q_{e1}^{\alpha\beta}\bigg(\frac{m_{e1}^{\alpha}m_{e2}^{\beta}+m_{e2}^{\alpha}m_{e1}^{\beta}}{2}-\frac{1}{3}(\mathbf{m}_{e1}\cdot\mathbf{m}_{e2})\delta^{\alpha\beta}\bigg)+Q_{e2}^{\alpha\beta}\bigg(m_{e2}^{\alpha}m_{e2}^{\beta}-m_{e1}^{\alpha}m_{e1}^{\beta}-\frac{1}{3}(\mathbf{m}_{e2}\cdot\mathbf{m}_{e2}-\mathbf{m}_{e1}\cdot\mathbf{m}_{e1})\delta^{\alpha\beta}\bigg)\bigg]\] \[+u_{m_{a}Q_{e}}m_{a}^{\alpha}m_{a}^{\beta}(Q_{e1}^{\alpha\gamma}Q_{e1}^{\gamma\beta}+Q_{e2}^{\alpha\gamma}Q_{e2}^{\gamma\beta})\] \[+s_{m_{a}m_{e}Q_{a}}\bigg[Q_{e1}^{\alpha\beta}\bigg(\frac{m_{a}^{\alpha}m_{e1}^{\beta}+m_{e1}^{\alpha}m_{a}^{\beta}}{2}-\frac{1}{3}(\mathbf{m}_{a}\cdot\mathbf{m}_{e1})\delta^{\alpha\beta}\bigg)+Q_{e2}^{\alpha\beta}\bigg(\frac{m_{a}^{\alpha}m_{e2}^{\beta}+m_{e2}^{\alpha}m_{a}^{\beta}}{2}-\frac{1}{3}(\mathbf{m}_{a}\cdot\mathbf{m}_{e2})\delta^{\alpha\beta}\bigg)\bigg]. \tag{57}\]
For the ferromagnetic ordering, this leads to
\[Q_{a}^{\alpha\beta}\approx-\frac{s_{m_{a}Q_{a}}}{2r_{Q_{a}}}\bigg{(}m_{a}^{ \alpha}m_{a}^{\beta}-\frac{1}{3}(\mathbf{m}_{a}\mathbf{\cdot}\mathbf{m}_{a}) \delta^{\alpha\beta}\bigg{)}\, \tag{58}\]
while for the \(120^{\circ}\) spiral order we get
\[Q_{a}^{\alpha\beta} \approx-\frac{s_{m_{e}Q_{a}}}{2r_{Q_{a}}}\bigg(m_{e1}^{\alpha}m_{e1}^{\beta}+m_{e2}^{\alpha}m_{e2}^{\beta}-\frac{1}{3}(\mathbf{m}_{e1}.\mathbf{m}_{e1}+\mathbf{m}_{e2}.\mathbf{m}_{e2})\delta^{\alpha\beta}\bigg)\] \[Q_{e1}^{\alpha\beta} \approx\frac{s_{m_{e}Q_{e}}}{r_{Q_{e}}}\bigg(\frac{m_{e1}^{\alpha}m_{e2}^{\beta}+m_{e2}^{\alpha}m_{e1}^{\beta}}{2}-\frac{1}{3}(\mathbf{m}_{e1}.\mathbf{m}_{e2})\delta^{\alpha\beta}\bigg)\] \[Q_{e2}^{\alpha\beta} \approx-\frac{s_{m_{e}Q_{e}}}{2r_{Q_{e}}}\bigg(m_{e2}^{\alpha}m_{e2}^{\beta}-m_{e1}^{\alpha}m_{e1}^{\beta}-\frac{1}{3}(\mathbf{m}_{e2}.\mathbf{m}_{e2}-\mathbf{m}_{e1}.\mathbf{m}_{e1})\delta^{\alpha\beta}\bigg) \tag{59}\]
The above parasitic moments for the ferromagnetic and \(120^{\circ}\) spiral orders are in accordance with what we expect from the wave functions listed in Appendix A.
#### iv.2.2 Coupling with the magnetic field
Magnetic field (\(\mathbf{h}=h\hat{\mathbf{h}}\)) couples to the spins via the usual Zeeman term
\[\mathcal{F}_{hm}=g_{m}\mathbf{h}.\mathbf{m}_{a} \tag{60}\]
such that only the \(\mathbf{m}_{a}\) mode couples linearly to the uniform magnetic field. Such linear coupling evidently favours the polarised phase along the magnetic field.
As quadrupoles are even under time reversal, the coupling takes the form:
\[\mathcal{F}_{hQ}=g_{Q}Q_{a}^{\alpha\beta}\bigg{(}h^{\alpha}h^{\beta}-\frac{1} {3}\delta^{\alpha\beta}(\mathbf{h}.\mathbf{h})\bigg{)} \tag{61}\]
Note again, only the \(Q_{a}\) modes couple to bilinears of the uniform magnetic field. This can be seen from the microscopic term of the form \(h^{\alpha}h^{\beta}\sum_{i}Q_{i}^{\alpha\beta}\) (where \(i\) is the site index) and writing it in terms of the irreducible representations in Table 1. The effect (to linear order) of small uniform magnetic field on the ferronematic and the three sublattice nematic order is summarised below.
Ferronematic order: For the ferronematic order (Eq. 40), due to \(\mathcal{F}_{hm}\) (Eq. 60) a uniform magnetic moment proportional to the magnetic field develops, \(m_{a}^{\alpha}\approx-\frac{g_{m}h^{\alpha}}{2r_{m_{a}}}\). Due to this uniform magnetic moment, the coupling between the ferronematic order parameter and the singlet dipolar mode in Eq. 57 becomes,
\[\mathcal{F}_{mQ,FN}^{\text{int}}=\frac{\sqrt{3}s_{m_{a}Q_{a}}\mathcal{Q}_{FN}g _{m}^{2}h^{2}}{4r_{m_{a}}^{2}}\bigg{(}(\hat{\mathbf{h}}.\mathbf{n})^{2}-\frac {1}{3}\bigg{)}. \tag{62}\]
When \(g_{Q}\approx 0\) in Eq. 61, the above term alone decides whether the ferronematic directors turn perpendicular (parallel) to the magnetic field for \(s_{m_{a}Q_{a}}<0\) (\(s_{m_{a}Q_{a}}>0\)), since \(\mathcal{Q}_{FN}<0\) for disk-like ferronematic order.[36; 34] With \(g_{Q}\neq 0\), \(\mathcal{F}_{hQ}\) becomes,
\[\mathcal{F}_{hQ,FN}=\sqrt{3}\mathcal{Q}_{FN}g_{Q}h^{2}\bigg{(}(\hat{\mathbf{ h}}.\mathbf{n})^{2}-\frac{1}{3}\bigg{)}\, \tag{63}\]
which competes with Eq. 62 in deciding whether director is parallel or perpendicular to the magnetic field. Extremising the Gaussian free energy (Eq. 53) along with these terms, we get
\[\mathcal{Q}_{FN}=-\frac{\sqrt{3}(g_{Q}+s_{m_{a}Q_{a}}g_{m}^{2}/4r_{m_{a}}^{2})}{6r_{Q_{a}}}h^{2} \tag{64}\]
which is the magnetic-field-induced ferronematic ordering, where we have assumed for concreteness \(s_{m_{a}Q_{a}},g_{Q}>0\).
Three sublattice nematic: For the three sublattice nematic too, a uniform magnetic moment proportional to and along the magnetic field develops, with \(m_{a}^{\alpha}\approx-\frac{g_{m}h^{\alpha}}{2r_{m_{a}}}\). In contrast to the ferronematic case, however, here the application of the magnetic field also induces the doublet dipolar modes via \(\mathbf{m}_{a}\). This can be seen by extremising the \(s_{m_{a}m_{e}Q_{a}}\) term (Eq. 57):
\[m_{\mathbf{e}}^{\alpha}\approx-\frac{s_{m_{a}m_{e}Q_{a}}Q_{\mathbf{e}}^{\alpha\beta}m_{a}^{\beta}}{2r_{m_{e}}}=g_{m}\frac{s_{m_{a}m_{e}Q_{a}}Q_{\mathbf{e}}^{\alpha\beta}h^{\beta}}{4r_{m_{e}}r_{m_{a}}} \tag{65}\]
for \(\mathbf{e}=(e1,e2)\). It is evident that the doublet modes depend on the orientation of the magnetic field relative to the directors. When the magnetic field is along any one of the orthogonal directors of the three sublattice nematic, we see from Eq. 42, Eq. 43 and Eq. 65 that \(\mathbf{m}_{e1}\) and \(\mathbf{m}_{e2}\) are also in the direction of the magnetic field. This gives rise to a collinear sinusoidal magnetisation in addition to the uniform magnetisation (due to \(\mathbf{m}_{a}\)) along the direction of the magnetic field. Considering the general case where the directors are \(\mathbf{n}_{1}=\hat{\mathbf{x}}\), \(\mathbf{n}_{2}=\hat{\mathbf{y}}\) and \(\mathbf{n}_{3}=\hat{\mathbf{z}}\) and the magnetic field is at a general inclination to the directors, \(\mathbf{h}=h(\sin\theta\cos\phi\hat{\mathbf{x}}+\sin\theta\sin\phi\hat{\mathbf{y}}+\cos\theta\hat{\mathbf{z}})\), using Eq. 35 we see that the resulting doublet and singlet magnetisations correspond to \(\langle\mathbf{S}_{1}\rangle=(a_{m}\sin\theta\cos\phi\hat{\mathbf{x}}+b_{m}\sin\theta\sin\phi\hat{\mathbf{y}}+b_{m}\cos\theta\hat{\mathbf{z}})\), \(\langle\mathbf{S}_{2}\rangle=(b_{m}\sin\theta\cos\phi\hat{\mathbf{x}}+a_{m}\sin\theta\sin\phi\hat{\mathbf{y}}+b_{m}\cos\theta\hat{\mathbf{z}})\) and \(\langle\mathbf{S}_{3}\rangle=(b_{m}\sin\theta\cos\phi\hat{\mathbf{x}}+b_{m}\sin\theta\sin\phi\hat{\mathbf{y}}+a_{m}\cos\theta\hat{\mathbf{z}})\), where \(a_{m}=\frac{g_{m}h}{3}\left(-\frac{\sqrt{3}}{2r_{m_{a}}}+\frac{2s_{m_{a}m_{e}Q_{a}}\mathcal{Q}_{3SN}}{4r_{m_{e}}r_{m_{a}}}\right)\) and \(b_{m}=\frac{g_{m}h}{3}\left(-\frac{\sqrt{3}}{2r_{m_{a}}}-\frac{s_{m_{a}m_{e}Q_{a}}\mathcal{Q}_{3SN}}{4r_{m_{e}}r_{m_{a}}}\right)\). The resultant phase is a combination of a uniform polarisation along the field and a sublattice dependent polarisation along the three orthogonal directions.
#### iv.2.3 Elastic term and coupling of strains to dipoles and quadrupoles
Finally, we turn to the contributions to the free energy due to elastic fields as well as spin-phonon coupling.
Elastic energy: The harmonic elastic energy for the singlet and the doublet strain modes is
\[\mathcal{F}_{\text{elastic}}=\frac{1}{2}(c_{a}\varepsilon_{a}^{2}+c_{e} \varepsilon_{e}^{2})\, \tag{66}\]
where \(c_{a}\) and \(c_{e}\) are two independent elastic constants.
Coupling between dipoles, quadrupoles and strains:
The coupling term between dipoles and strain fields is given by
\[\mathcal{F}_{\varepsilon m}=\mathcal{F}_{\varepsilon ma}+\mathcal{F}_{ \varepsilon me}\, \tag{67}\]
where
\[\mathcal{F}_{\varepsilon ma} = [\mu_{a1}\varepsilon_{a}+\frac{\mu_{a}}{2}\varepsilon_{a}^{2}+ \frac{\mu_{e}}{2}\varepsilon_{e}^{2}]p_{1}(\mathbf{m}_{a}.\mathbf{m}_{a}) \tag{68}\] \[\mathcal{F}_{\varepsilon me} = [\mu_{a1}\varepsilon_{a}+\frac{\mu_{a}}{2}\varepsilon_{a}^{2}+ \frac{\mu_{e}}{2}\varepsilon_{e}^{2}]p_{2}(\mathbf{m}_{e}.\mathbf{m}_{e}) \tag{69}\]
denote the couplings of the strain modes to the singlet and doublet dipolar modes respectively.
Similarly, the coupling term between quadrupoles and strains is given by
\[\mathcal{F}_{\varepsilon Q}=\mathcal{F}_{\varepsilon Qa}+\mathcal{F}_{ \varepsilon Qe}\, \tag{70}\]
where
\[\mathcal{F}_{\varepsilon Qa} = [\tilde{\mu}_{a1}\varepsilon_{a}+\frac{\tilde{\mu}_{a}}{2} \varepsilon_{a}^{2}+\frac{\tilde{\mu}_{e}}{2}\varepsilon_{e}^{2}]p_{3}\mathrm{ Tr}\mathbf{Q}_{a}^{2} \tag{71}\] \[\mathcal{F}_{\varepsilon Qe} = [\tilde{\mu}_{a1}\varepsilon_{a}+\frac{\tilde{\mu}_{a}}{2} \varepsilon_{a}^{2}+\frac{\tilde{\mu}_{e}}{2}\varepsilon_{e}^{2}]p_{4} \mathrm{Tr}(\mathbf{Q}_{e}.\mathbf{Q}_{e}) \tag{72}\]
In both cases, dipoles and quadrupoles, the free energy allows for a linear coupling of the singlet strain field to bilinears of the dipole/quadrupole fields. This leads to lattice distortions upon dipolar or quadrupolar ordering, while the quadratic terms (in the elastic fields) lead to a renormalisation of the elastic constants and hence of the sound speed.
## V Fractional change in sound speed and fractional change in length
### Fractional change in length
The fractional change in length along a direction \(\hat{r}\equiv(\cos\theta,\sin\theta)\) (where \(\theta\) is the angle with respect to the Cartesian \(x\)-axis in Fig. 1) can be obtained from the Cartesian strain fields \(\varepsilon_{ij}\):
\[\left(\frac{\Delta L}{L}\right)_{\hat{r}}=\sum_{ij}\varepsilon_{ ij}\hat{r}_{i}\hat{r}_{j}=\varepsilon_{a}+\varepsilon_{e2}\cos(2\theta)+ \varepsilon_{e1}\sin(2\theta). \tag{73}\]
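Explicitly (a one-line check using Eq. 45),
\[\sum_{ij}\varepsilon_{ij}\hat{r}_{i}\hat{r}_{j}=\varepsilon_{xx}\cos^{2}\theta+\varepsilon_{yy}\sin^{2}\theta+2\varepsilon_{xy}\sin\theta\cos\theta=\frac{\varepsilon_{xx}+\varepsilon_{yy}}{2}+\frac{\varepsilon_{xx}-\varepsilon_{yy}}{2}\cos(2\theta)+\varepsilon_{xy}\sin(2\theta)\,\]
which reproduces the singlet and doublet strain combinations of Eq. 45.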
Due to the linear coupling term between the uniform strain field, \(\varepsilon_{a}\) and the dipole/nematic bilinears (Eqs. 68, 69, 71 and 72), this leads to isotropic magnetostriction of the triangular lattice given by
\[\left(\frac{\Delta L}{L}\right)_{\hat{r}}=\varepsilon_{a} \approx -\frac{1}{c_{a}}\bigg{[}\mu_{a1}\bigg{(}p_{1}(\mathbf{m}_{a}. \mathbf{m}_{a})+p_{2}(\mathbf{m}_{e}.\mathbf{m}_{e})\bigg{)} \tag{74}\] \[+ \tilde{\mu}_{a1}\bigg{(}p_{3}\mathrm{Tr}\mathbf{Q}_{a}^{2}+p_{4} \mathrm{Tr}(\mathbf{Q}_{e}.\mathbf{Q}_{e})\bigg{)}\bigg{]}\]
Focussing on the \(120^{\circ}\) spiral order, Eq. 74 reduces to
\[\left(\frac{\Delta L}{L}\right)_{\hat{r}}=-\frac{\mu_{a1}p_{2}}{c_{a}}( \mathbf{m}_{e}.\mathbf{m}_{e}) \tag{75}\]
such that below the critical point (\(r_{m_{e}}<0\) in Eq. 49) when \(\mathbf{m}_{e}\sim\sqrt{-r_{m_{e}}/u_{m_{e}}}\),
\[\left(\frac{\Delta L}{L}\right)_{\hat{r}}\propto|r_{m_{e}}| \tag{76}\]
as expected from the lowest order symmetry allowed coupling. Thus, for a thermal phase transition where \(r_{m_{e}}\propto(T-T_{c})\), one expects a linear turning on of the magnetostrictive distortion, with measurable consequences for thermal expansion experiments.
A more startling effect occurs across a spin nematic transition. In particular for a ferronematic, the above expression reduces to
\[\left(\frac{\Delta L}{L}\right)_{\hat{r}}=-\frac{\tilde{\mu}_{a1}p_{3}}{c_{a}}\mathrm{Tr}\mathbf{Q}_{a}^{2}=-\frac{2\tilde{\mu}_{a1}p_{3}}{c_{a}}\mathcal{Q}_{FN}^{2} \tag{77}\]
such that across the discontinuous nematic transition, there is a simultaneous jump in the lattice volume.
On turning on the magnetic field inside either the spiral or the ferronematic, there are added contributions due to the Zeeman term (Eq. 60), resulting in non-zero \(m_{a}^{\alpha}=-\frac{g_{m}h^{\alpha}}{2r_{m_{a}}}\) and \(\mathcal{Q}_{FN}\) (Eq. 64), such that we have
\[\left(\frac{\Delta L}{L}\right)_{\hat{r}}= -\frac{\mu_{a1}p_{2}}{c_{a}}(\mathbf{m}_{e}.\mathbf{m}_{e})- \frac{\mu_{a1}p_{1}g_{m}^{2}}{4c_{a}r_{m_{a}}^{2}}h^{2}\] \[-\frac{\tilde{\mu}_{a1}p_{3}(g_{Q}+s_{m_{a}Q_{a}}g_{m}^{2}/4r_{m _{a}}^{2})^{2}}{6r_{Q_{a}}^{2}c_{a}}\ \ h^{4} \tag{78}\]
for the spiral and
\[\left(\frac{\Delta L}{L}\right)_{\hat{r}}= -\frac{2\tilde{\mu}_{a1}p_{3}}{c_{a}}\mathcal{Q}_{FN}^{2}-\frac{\mu_{a1}p_{1}g_{m}^{2}}{4c_{a}r_{m_{a}}^{2}}h^{2}-\frac{\tilde{\mu}_{a1}p_{3}(g_{Q}+s_{m_{a}Q_{a}}g_{m}^{2}/4r_{m_{a}}^{2})^{2}}{6r_{Q_{a}}^{2}c_{a}}\ \ h^{4} \tag{79}\]
for the ferronematic.
While both forms predict a similar mixture of \(h^{2}\) and \(h^{4}\) dependence, one expects \(r_{Q_{a}}\to 0\) near the ferronematic phase transition, such that the \(h^{4}\) dependence becomes more pronounced on approaching it. Similar results hold for the three sublattice nematic.
### Fractional change in sound speed
The fractional change in sound speed, \(v\), is related to the change in the elastic constants, \(c\) (Eq. 66), as:
\[\frac{\Delta v}{v}\propto\frac{\Delta c}{c}\, \tag{80}\]
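For a sound mode with speed \(v=\sqrt{c/\rho}\) (assuming the mass density \(\rho\) is essentially unchanged across the transitions), the proportionality constant is simply \(1/2\), i.e. \(\Delta v/v\approx\Delta c/(2c)\) to leading order in \(\Delta c\).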
The component of the elastic tensor depends on the propagation direction and polarization of the sound and hence is generically a linear combination of \(c_{a}\) and \(c_{e}\).
The renormalisations of these two elastic constants due to the coupling with the dipole and nematic orders are readily obtained from Eqs. 67 and 70 as:
\[\frac{\Delta c_{a}}{c_{a}} = \frac{1}{c_{a}}\bigg{[}\mu_{a}\bigg{(}p_{1}(\mathbf{m}_{a}. \mathbf{m}_{a})+p_{2}(\mathbf{m}_{e}.\mathbf{m}_{e})\bigg{)}\] \[+ \tilde{\mu}_{a}\bigg{(}p_{3}\mathrm{Tr}\mathbf{Q}_{a}^{2}+p_{4} \mathrm{Tr}(\mathbf{Q}_{e}.\mathbf{Q}_{e})\bigg{)}\bigg{]}\] \[\frac{\Delta c_{e}}{c_{e}} = \frac{1}{c_{e}}\bigg{[}\mu_{e}\bigg{(}p_{1}(\mathbf{m}_{a}. \mathbf{m}_{a})+p_{2}(\mathbf{m}_{e}.\mathbf{m}_{e})\bigg{)} \tag{81}\] \[+ \tilde{\mu}_{e}\bigg{(}p_{3}\mathrm{Tr}\mathbf{Q}_{a}^{2}+p_{4} \mathrm{Tr}(\mathbf{Q}_{e}.\mathbf{Q}_{e})\bigg{)}\bigg{]}\]
Therefore, similar to the case of the fractional change in length, the fractional change in sound speed would be continuous across the \(120^{\circ}\) spiral ordering transition and discontinuous across the nematic transitions. The magnetic field dependence below the ordering transitions is similar to that of the fractional change in length.
Observing jumps in fractional change in length and fractional change in sound speed in the absence of any magnetization, along with the enhanced \(h^{4}\) magnetic field dependence would strongly favour the case for spin-nematic orders.
## VI Summary and outlook
In this work, we have studied the possible elastic signatures of a spin-nematic, possibly realised in the triangular lattice compound NiGa\({}_{2}\)S\({}_{4}\). Inspired by recent Raman scattering experiments [29] which indicate substantial spin-phonon coupling in the material, and the general expectation that the time reversal symmetric spin-nematic order parameter can couple linearly to the elastic degrees of freedom, we show that: (1) inside such a spin-nematic phase, the interaction between the nematic Goldstone bosons of the ferronematic phase and the phonons leads to a powerlaw dependence of the fractional change in sound speed, _i.e._, \(\Delta v/v\propto T^{3}\); (2) across the spin-nematic phase transition, there is a finite jump in both the magnetostriction and \(\Delta v/v\), stemming from the discontinuous nature of the nematic transition; and (3) there is a possible enhanced \(h^{4}\) dependence of the magnetostriction and \(\Delta v/v\) on the magnetic field just below the nematic transition. These signatures, along with thermodynamic measurements such as the low temperature powerlaw specific heat [15; 17; 18; 19; 20] and the absence of long range spin correlations, provide important steps in characterising the spin-nematic phase and pave the way towards concrete experimental signatures for higher-moment magnetic orderings, which have proved quite challenging to detect in spite of several candidate materials. It is useful to note that light scattering can also act as a complementary spectroscopic probe for spin-nematics, as shown in Ref. [44].
In the context of NiGa\({}_{2}\)S\({}_{4}\), we have used the minimal nearest neighbour bilinear-biquadratic spin-1 Hamiltonian (Eq. 2) to understand the low temperature elastic signatures. In the actual material, in addition to the nearest neighbour bilinear and biquadratic terms, a third neighbour antiferromagnetic bilinear spin interaction appears to be relevant for understanding the detailed physics. Here, however, we have neglected such terms, since we are interested in the elastic response of the spin-nematic, for which the effect of such terms is expected to be secondary as far as the temperature dependence is concerned. We further note that in addition to spin-rotation invariant terms, the microscopics of NiGa\({}_{2}\)S\({}_{4}\) can also admit small single-ion anisotropies as well as DM interactions. While the former can pin the nematic director, [15] the latter can pin the chirality of the spiral state for a DM vector pointing perpendicular to the plane (see Appendix B). Therefore the former can gap out the nematic Goldstone modes and change the powerlaw dependence of \(\Delta v/v\) to an exponential damping. However, since such effects have not been detected in the low temperature specific heat, [15; 17; 18; 19; 20] we neglect such single-ion anisotropies.
Several studies of ultrasound properties exist for candidates for multipolar order in spin-orbit coupled systems [4] or for in-field spin-nematics [38]. We hope our results will generate interest in probing ultrasound renormalisation in NiGa\({}_{2}\)S\({}_{4}\), which is a rare spin-nematic candidate in a system described, to a very good approximation, by a spin rotation symmetric spin Hamiltonian.
###### Acknowledgements.
The authors thank R. Moessner, S. Nakatsuji, N. Drichko and S. Zherlitsyn for discussion. SB acknowledges adjunct fellow program at SNBNCBS, Kolkata for hospitality. The authors acknowledge funding from Max Planck Partner group Grant at ICTS, Swarna Jayanti fellowship grant of SERB-DST (India) Grant No. SB/SJF/2021-22/12 and the Department of Atomic Energy, Government of India, under Project No. RTI4001.
## Appendix A Wave function for various spin-1 orders
The mean-field product wave functions for spin-1 magnets for the dipole and nematic orders [16; 19; 34] relevant to this work are best understood in the basis:
\[\ket{x}=i\frac{\ket{1}-\ket{\bar{1}}}{\sqrt{2}},\ \ket{y}=\frac{\ket{1}+\ket{\bar{1}}}{ \sqrt{2}},\ \ket{z}=-i\ket{0}.\] (A1)
The on-site spin and the quadrupole operators are
\[S^{\bar{\mu}}=-i\sum_{\bar{\nu}\bar{\lambda}}\epsilon_{\bar{\mu}\bar{\nu}\bar{\lambda}}\ket{\bar{\nu}}\bra{\bar{\lambda}};\ Q^{\bar{\mu}\bar{\nu}}=\frac{1}{3}\delta^{\bar{\mu}\bar{\nu}}-\frac{\ket{\bar{\nu}}\bra{\bar{\mu}}}{2}-\frac{\ket{\bar{\mu}}\bra{\bar{\nu}}}{2}\] (A2)
where \(\epsilon_{\bar{\mu}\bar{\nu}\bar{\lambda}}\) denotes the Levi Civita tensor and \(\bar{\mu}\), \(\bar{\nu}\), \(\bar{\lambda}=x,y,z\). The most general single spin-1 state is
\[\left|\mathbf{w}\right\rangle=\sum_{\bar{\mu}}w_{\bar{\mu}}\left|\bar{\mu} \right\rangle\, \tag{10}\]
where \(w_{\bar{\mu}}\) are the three components of a complex vector
\[\mathbf{w}=\mathbf{u}+i\mathbf{v} \tag{11}\]
with \(\mathbf{u},\mathbf{v}\in\mathcal{R}^{3}\), \(\mathbf{u}\cdot\mathbf{u}+\mathbf{v}\cdot\mathbf{v}=1\), and the overall phase fixed by \(\mathbf{u}\cdot\mathbf{v}=0\). The expectation values of the spin and quadrupole operators are:
\[\left\langle\mathbf{w}\right|\mathbf{S}\left|\mathbf{w}\right\rangle = 2\mathbf{u}\times\mathbf{v} \tag{12}\] \[\left\langle\mathbf{w}\right|Q^{\bar{\mu}\bar{\nu}}\left|\mathbf{ w}\right\rangle = \frac{1}{3}\delta^{\bar{\mu}\bar{\nu}}-u^{\bar{\mu}}u^{\bar{\nu}}-v ^{\bar{\mu}}v^{\bar{\nu}}. \tag{13}\]
The mean field wave function for different magnetic and nematic ordered states are direct products of the onsite wave functions, \(\left|\psi\right\rangle_{MF}=\otimes_{i}\left|\mathbf{w}_{i}\right\rangle\).
_Ferromagnetic order:_ The ferromagnetic order in the \(\hat{\mathbf{z}}\) direction is given by
\[\mathbf{u}_{i}=\frac{1}{\sqrt{2}}(\sin\!\theta\ \hat{\mathbf{x}}+\cos\! \theta\ \hat{\mathbf{y}}),\ \ \mathbf{v}_{i}=\frac{-1}{\sqrt{2}}(\cos\!\theta\ \hat{\mathbf{x}}-\sin\! \theta\ \hat{\mathbf{y}}), \tag{14}\]
such that the spin expectation value (Eq. 12) is given by
\[\left\langle\mathbf{w}\right|\mathbf{S}\left|\mathbf{w}\right\rangle=\ \hat{ \mathbf{z}} \tag{15}\]
which characterises the fully polarised ferromagnetic state. The on-site parasitic quadrupole order is
\[\left\langle\mathbf{w}\right|Q^{\bar{\mu}\bar{\nu}}\left|\mathbf{w}\right\rangle =\begin{pmatrix}-\frac{1}{6}&0&0\\ 0&-\frac{1}{6}&0\\ 0&0&\frac{1}{3}\end{pmatrix}\, \tag{16}\]
in the \(\{\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}}\}\) basis and corresponds to uniaxial parasitic ferronematic order with director along \(\hat{\mathbf{z}}\).
\(120^{\circ}\) _spiral order:_ For a \(120^{\circ}\) spiral order:
\[\mathbf{u}_{1}=\frac{1}{\sqrt{2}}\hat{\mathbf{z}}\,\ \mathbf{v}_{1}=- \frac{1}{\sqrt{2}}\hat{\mathbf{y}}\] \[\mathbf{u}_{2}=\frac{1}{\sqrt{2}}\hat{\mathbf{z}}\,\ \mathbf{v}_{2}=- \frac{1}{\sqrt{2}}\bigg{(}\frac{\sqrt{3}}{2}\hat{\mathbf{x}}-\frac{1}{2}\hat{ \mathbf{y}}\bigg{)}\] \[\mathbf{u}_{3}=\frac{1}{\sqrt{2}}\hat{\mathbf{z}}\,\ \mathbf{v}_{3}=- \frac{1}{\sqrt{2}}\bigg{(}-\frac{\sqrt{3}}{2}\hat{\mathbf{x}}-\frac{1}{2} \hat{\mathbf{y}}\bigg{)} \tag{17}\]
such that the spin expectation value is given by Eq. 4 and corresponds to Fig. 4. Again, each on-site parasitic quadrupolar moment would be uniaxial with the director being in the direction of the respective magnetic moment.
Pure nematic orders require \(\left\langle\mathbf{w}\right|\mathbf{S}\left|\mathbf{w}\right\rangle=2 \mathbf{u}\times\mathbf{v}=0\) in addition to \(\mathbf{u}\cdot\mathbf{v}=0\) which correspond to either \(\mathbf{v}=0\) or \(\mathbf{u}=0\)_i.e._, \(\mathbf{w}\) is real or imaginary. Considering \(\mathbf{v}=0\), the expectation value of the quadrupole operator (Eq. 13) is,
\[\left\langle\mathbf{w}\right|Q^{\bar{\mu}\bar{\nu}}\left|\mathbf{w}\right\rangle =\frac{1}{3}\delta^{\bar{\mu}\bar{\nu}}-u^{\bar{\mu}}u^{\bar{\nu}}\, \tag{18}\]
which is a uniaxial nematic with the director along \(\mathbf{u}\).
_Ferronematic order:_ All sites have
\[\mathbf{u}=\hat{\mathbf{n}},\ \ \ \ \mathbf{v}=0 \tag{19}\]
with \(\hat{\mathbf{n}}\) being the director (Fig.1).
_Three sublattice nematic order:_ The three sites of the triangular unit (see Fig. 1) have:
\[\mathbf{u}_{1}=\hat{\mathbf{n}}_{1}\,\ \mathbf{v}_{1}=0\] \[\mathbf{u}_{2}=\hat{\mathbf{n}}_{2}\,\ \mathbf{v}_{2}=0\] \[\mathbf{u}_{3}=\hat{\mathbf{n}}_{3}\,\ \mathbf{v}_{3}=0 \tag{20}\]
where \(\hat{\mathbf{n}}_{1}\), \(\hat{\mathbf{n}}_{2}\) and \(\hat{\mathbf{n}}_{3}\) are mutually orthogonal.
_Co-existing nematic order and collinear sinusoidal dipolar order:_ The spins for a three sublattice collinear sinusoidal dipolar order (with co-existing nematic order discussed in Section IV.1.2) can be expressed as \(\left\langle\mathbf{S}_{i}\right\rangle=\sin\!\eta_{i}\ \hat{m}\) (with \(\eta_{i}=\mathbf{q}.\mathbf{r}_{i}+\pi/6+\phi\)). Using Eq. 14, the wave functions with the above order are :
\[\mathbf{u}_{i}=\cos(\eta_{i}/2)(\sin\!\theta_{i}\ \hat{\mathbf{m}}_{ \perp_{1}}+\cos\!\theta_{i}\ \hat{\mathbf{m}}_{\perp_{2}})\] \[\mathbf{v}_{i}=-\sin(\eta_{i}/2)(\cos\!\theta_{i}\ \hat{\mathbf{m}}_{ \perp_{1}}-\sin\!\theta_{i}\ \hat{\mathbf{m}}_{\perp_{2}}) \tag{21}\]
where \(\hat{\mathbf{m}}_{\perp_{1}}\times\hat{\mathbf{m}}_{\perp_{2}}=\hat{\mathbf{m}}\), \(i=1,2,3\) and \(\eta_{2}=\eta_{1}+2\pi/3\), \(\eta_{3}=\eta_{1}+4\pi/3\). Such an order would feature sites having partially polarized spins. Unlike quadrupole moments for fully polarized spins (Eq. 16), for the case of partially polarized spins one can see that the expectation values of the on-site quadrupolar moment (using Eq. 13) would depend on the choice of \(\theta_{i}\) for the site and the quadrupole moment would be bi-axial - with the eigenvalues being \(\{1/3,1/3-|\mathbf{u}_{i}|^{2},1/3-|\mathbf{v}_{i}|^{2}\}\) for the eigenvectors being \(\{\hat{\mathbf{m}},\hat{\mathbf{u}}_{i},\hat{\mathbf{v}}_{i}\}\) respectively.
## Appendix B Mean field T=0 phase diagram and DM term
The mean field phase diagram of Eq. 2 was obtained in Ref. [15]. On adding to it a DM term given by \(H_{DM}=\mathbf{D}_{ij}\cdot(\mathbf{S}_{i}\times\mathbf{S}_{j})\), where \(\mathbf{D}_{ij}=-\mathbf{D}_{ji}\) and \(\mathbf{D}_{ij}=D\hat{\mathbf{z}}\) is a vector pointing out of the triangular lattice plane, the mean field phase diagram for \(J>0\) and \(K<0\) is given in Fig. 4. The main role of the DM term is to lift the degeneracy of the clockwise and anti-clockwise spirals. The different order parameters are plotted in Fig. 5.
## Appendix C Linear Spin-nematic wave theory
### The ferronematic
As discussed in Sec. III.1, the Goldstone modes out of a ferronematic with director along the \(\hat{\mathbf{z}}\) direction can be captured by defining two bosons at each site \(i\), \(b_{i1}^{\dagger}\) and \(b_{i2}^{\dagger}\). For a faithful description of the spin-1 Hilbert space, the bosons are subject to the constraints - \(b_{i1}^{\dagger}b_{i1}^{\dagger}=b_{i1}b_{i1}=b_{i2}^{\dagger}b_{i2}^{\dagger}=b_{ i2}b_{i2}=b_{i1}^{\dagger}b_{i2}^{\dagger}=b_{i1}b_{i2}=b_{i1}b_{i2}^{\dagger}=b_{i2}b_{i1}^{ \dagger}=0\) and
\(b_{i2}b_{i2}^{\dagger}=b_{i1}b_{i1}^{\dagger}=1-b_{i1}^{\dagger}b_{i1}-b_{i2}^{ \dagger}b_{i2}\). The spin operators can be written in terms of these bosons as
\[S_{i}^{+}=\sqrt{2}[b_{i1}^{\dagger}+b_{i2}],\ S_{i}^{-}=\sqrt{2}[b_{i1}+b_{i2}^{ \dagger}],\ S_{i}^{z}=n_{i1}^{\prime}-n_{i2}^{\prime}, \tag{105}\]
where, \(n_{i1(2)}^{\prime}=b_{i1(2)}^{\dagger}b_{i1(2)}\). It is useful to note that the above two bosons are related to the three bosons \(a_{x},a_{y},a_{z}\) of the SU(3) flavour wave theory [19] as : \(b_{1}=(-a_{x}+ia_{y})/\sqrt{2}\) and \(b_{2}=(a_{x}+ia_{y})/\sqrt{2}\) while \(a_{z}\) is condensed. Using Eq. 105 in Eq. 2 and using the constraints, we get
\[H_{sp,FN} = 6K_{0}N-6K_{0}\sum_{i,\mu^{\prime}}n_{i\mu^{\prime}}^{\prime}+2J _{0}\sum_{\langle ij\rangle}(b_{i1}^{\dagger}b_{j1}+b_{i2}^{\dagger}b_{j2}) \tag{106}\] \[+2(J_{0}-K_{0})\sum_{\langle ij\rangle}(b_{i1}^{\dagger}b_{j2}^{ \dagger}+b_{i1}b_{j2})\]
where \(\mu^{\prime}=1,2\). The quadratic spin-nematic wave Hamiltonian above can be diagonalized via Fourier transform followed by Bogoliubov transformation,
\[\begin{pmatrix}b_{\mathbf{k},1}\\ b_{-\mathbf{k},2}^{\dagger}\end{pmatrix}\ =\ \mathcal{R}_{\mathbf{k}}\psi_{ \mathbf{k}}\, \tag{107}\]
with
\[\mathcal{R}_{\mathbf{k}}\ =\ \begin{pmatrix}u_{\mathbf{k}}&v_{\mathbf{k}}\\ v_{\mathbf{k}}&u_{\mathbf{k}}\end{pmatrix}\ \text{and}\ \psi_{\mathbf{k}}=\begin{pmatrix}d_{ \mathbf{k},1}\\ d_{-\mathbf{k},2}^{\dagger}\end{pmatrix} \tag{108}\]
such that \(u_{\mathbf{k}}=\cosh\bar{\mathbf{\theta}}_{\mathbf{k}}\), \(v_{\mathbf{k}}=\sinh\bar{\mathbf{\theta}}_{\mathbf{k}}\) and \(\tanh\!2\bar{\mathbf{\theta}}_{\mathbf{k}}=\frac{(K_{0}-J_{0})\gamma_{\mathbf{ k}}}{J_{0}\gamma_{\mathbf{k}}-K_{0}}\). The diagonalized Hamiltonian and dispersion are given in Eq. 17 and Eq. 18 respectively.
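As a quick numerical check of the two-boson representation in Eq. 105 (a minimal sketch of our own; the basis ordering and variable names below are illustrative and not part of the derivation), one can build the operators explicitly on the three-dimensional constrained Hilbert space and verify that they obey the spin-1 algebra:

```python
import numpy as np

# Constrained Hilbert space: |0> = boson vacuum, |1> = b1^dag|0>, |2> = b2^dag|0>.
# The hard-core constraints quoted above remove all multiply occupied states.
dim = 3
b1 = np.zeros((dim, dim)); b1[0, 1] = 1.0   # b1 |1> = |0>, zero elsewhere
b2 = np.zeros((dim, dim)); b2[0, 2] = 1.0   # b2 |2> = |0>, zero elsewhere

Sp = np.sqrt(2) * (b1.T + b2)               # S^+ = sqrt(2) (b1^dag + b2), Eq. 105
Sm = np.sqrt(2) * (b1 + b2.T)               # S^- = sqrt(2) (b1 + b2^dag)
Sz = b1.T @ b1 - b2.T @ b2                  # S^z = n'_1 - n'_2

assert np.allclose(Sp @ Sm - Sm @ Sp, 2 * Sz)                             # [S^+, S^-] = 2 S^z
assert np.allclose(Sz @ Sz + 0.5 * (Sp @ Sm + Sm @ Sp), 2 * np.eye(dim))  # S(S+1) = 2
print("Eq. 105 reproduces the spin-1 algebra on the constrained space")
```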
### The three sublattice nematic
Spin-nematic wave theory for the three sublattice nematic, [18] as mentioned in the main text, requires two bosons at every sublattice, \(\widetilde{\alpha}_{i}\) and \(\widetilde{\beta}_{i}\), that account for the deviations about the respective nematic director. In terms of these bosons, the spin operators at sublattice 3 (say) (see Fig. 1) are given by
\[S_{\bar{\mathbf{R}},3}^{x} = \widetilde{\alpha}_{\bar{\mathbf{R}},3}^{\dagger}+\widetilde{ \alpha}_{\bar{\mathbf{R}},3},\quad S_{\bar{\mathbf{R}},3}^{y}=\widetilde{ \beta}_{\bar{\mathbf{R}},3}^{\dagger}+\widetilde{\beta}_{\bar{\mathbf{R}},3}\] \[S_{\bar{\mathbf{R}},3}^{z} = -i(\widetilde{\alpha}_{\bar{\mathbf{R}},3}^{\dagger}\widetilde{ \beta}_{\bar{\mathbf{R}},3}-\widetilde{\beta}_{\bar{\mathbf{R}},3}^{\dagger} \widetilde{\alpha}_{\bar{\mathbf{R}},3})\, \tag{109}\]
with constraints \((\widetilde{\alpha}_{\bar{\mathbf{R}},3})^{2}=(\widetilde{\beta}_{\bar{\mathbf{R}},3})^{2}=\widetilde{\alpha}_{\bar{\mathbf{R}},3}\widetilde{\beta}_{\bar{\mathbf{R}},3}=\widetilde{\alpha}_{\bar{\mathbf{R}},3}\widetilde{\beta}_{\bar{\mathbf{R}},3}^{\dagger}=\widetilde{\beta}_{\bar{\mathbf{R}},3}\widetilde{\alpha}_{\bar{\mathbf{R}},3}^{\dagger}=0\) and \(\widetilde{\beta}_{\bar{\mathbf{R}},3}\widetilde{\beta}_{\bar{\mathbf{R}},3}^{\dagger}=\widetilde{\alpha}_{\bar{\mathbf{R}},3}\widetilde{\alpha}_{\bar{\mathbf{R}},3}^{\dagger}=1-\widetilde{\alpha}_{\bar{\mathbf{R}},3}^{\dagger}\widetilde{\alpha}_{\bar{\mathbf{R}},3}-\widetilde{\beta}_{\bar{\mathbf{R}},3}^{\dagger}\widetilde{\beta}_{\bar{\mathbf{R}},3}\) for a faithful representation of the Hilbert space. Likewise, the spin operators can be written for the other two sublattices [18]. Eq. 2, up to harmonic order in the bosons, is
\[H_{sp}=3K_{0}N+H(\widetilde{\beta}_{1},\widetilde{\alpha}_{2})+H(\widetilde{ \beta}_{2},\widetilde{\alpha}_{3})+H(\widetilde{\beta}_{3},\widetilde{\alpha}_{ 1})\, \tag{110}\]
with the following form of the Hamiltonian for each pair featuring above:
\[H(\widetilde{\beta}_{\widetilde{\lambda}},\widetilde{\alpha}_{\widetilde{\rho}})=3\sum_{\mathbf{k}}\left[\widetilde{\gamma}_{\mathbf{k}}\left(J_{0}\widetilde{\beta}_{\mathbf{k},\widetilde{\lambda}}^{\dagger}\widetilde{\alpha}_{-\mathbf{k},\widetilde{\rho}}^{\dagger}+(J_{0}-K_{0})\widetilde{\beta}_{\mathbf{k},\widetilde{\lambda}}^{\dagger}\widetilde{\alpha}_{\mathbf{k},\widetilde{\rho}}\right)+\text{h.c.}\right]+3K_{0}\sum_{\mathbf{k}}(\bar{n}_{\widetilde{\beta},\mathbf{k},\widetilde{\lambda}}+\bar{n}_{\widetilde{\alpha},\mathbf{k},\widetilde{\rho}}) \tag{111}\]
Figure 5: Results of the self consistency calculations (for \(J>0\) and \(K<0\)) : a),b): Energy and order parameters for \(D=0\), c),d): Energy and order parameters for \(D/J=0.42\)
where \(\widetilde{\gamma}_{\mathbf{k}}\) is as defined below Eq. 27 and \(\bar{n}_{\widetilde{\beta},\mathbf{k},\widetilde{\lambda}}=\widetilde{\beta}^{\dagger}_{\mathbf{k},\widetilde{\lambda}}\widetilde{\beta}_{\mathbf{k},\widetilde{\lambda}}\), \(\bar{n}_{\widetilde{\alpha},\mathbf{k},\widetilde{\rho}}=\widetilde{\alpha}^{\dagger}_{\mathbf{k},\widetilde{\rho}}\widetilde{\alpha}_{\mathbf{k},\widetilde{\rho}}\). The above Hamiltonian can be diagonalized using a Bogoliubov transformation [45],
\[\left(\begin{array}{c}\widetilde{\alpha}_{\mathbf{k},\widetilde{\rho}}\\ \widetilde{\beta}_{\mathbf{k},\widetilde{\lambda}}\\ \widetilde{\alpha}^{\dagger}_{-\mathbf{k},\widetilde{\rho}}\\ \widetilde{\beta}^{\dagger}_{-\mathbf{k},\widetilde{\lambda}}\end{array}\right)=T_{\mathbf{k}}\left(\begin{array}{c}\widetilde{d}_{+,\mathbf{k},\widetilde{\lambda}\widetilde{\rho}}\\ \widetilde{d}_{-,\mathbf{k},\widetilde{\lambda}\widetilde{\rho}}\\ \widetilde{d}^{\dagger}_{+,-\mathbf{k},\widetilde{\lambda}\widetilde{\rho}}\\ \widetilde{d}^{\dagger}_{-,-\mathbf{k},\widetilde{\lambda}\widetilde{\rho}}\end{array}\right)\text{ with }T_{\mathbf{k}}=\left(\begin{array}{cc}\widetilde{T}_{1,\mathbf{k}}&\widetilde{T}_{2,\mathbf{k}}\\ \widetilde{T}_{2,\mathbf{k}}&\widetilde{T}_{1,\mathbf{k}}\end{array}\right) \tag{100}\]
where
\(\widetilde{T}_{1,\mathbf{k}}\) and \(\widetilde{T}_{2,\mathbf{k}}\) are the \(2\times 2\) blocks of Bogoliubov coefficients fixed by the diagonalization conditions.

## Appendix D Details of the phonon self-energy calculation

### Ferronematic phase

For the ferronematic, the analogous spin-phonon calculation yields the prefactors \(\bar{c}_{1}\) and \(\bar{c}_{2}\) entering the sound-velocity shift,
where \(q_{2}=|\mathbf{q}_{2}|\), \(V\) is the total volume of the lattice, \(\theta_{\mathbf{q}\mathbf{q}_{2}}\) is the angle between \(\hat{\mathbf{q}}\) and \(\hat{\mathbf{q}}_{2}\), the integration limits in \(\bar{c}_{2}\) can be extended up to infinity due to the exponential damping factor, and
\[\bar{\mathcal{A}}_{\pm}(\hat{\mathbf{q}},\hat{\mathbf{q}_{2}})= \sum_{\mathbf{\delta},\mathbf{\delta}^{\prime}}(\hat{\mathbf{q}}\cdot\bm {\delta})(\hat{\mathbf{q}}\cdot\mathbf{\delta}^{\prime})\bigg{[}\left(\frac{ \partial\mathcal{J}_{\mathbf{\delta}}}{\partial\mathbf{\delta}}\cdot\mathbf{e}_{- \mathbf{q}}\right)\frac{3K_{0}}{\bar{c}_{s}}-\left(\frac{\partial K_{\mathbf{ \delta}}}{\partial\mathbf{\delta}}\cdot\mathbf{e}_{-\mathbf{q}}\right)\left(\frac{ 3K_{0}}{\bar{c}_{s}}\pm\frac{(\hat{\mathbf{q}}_{2}\cdot\mathbf{\delta})^{2}}{\bar {c}_{s}}6(K_{0}-J_{0})\right)\bigg{]}\] \[\times\bigg{[}\left(\frac{\partial J_{\mathbf{\delta}^{\prime}}}{ \partial\mathbf{\delta}^{\prime}}\cdot\mathbf{e}_{\mathbf{q}}\right)\frac{3K_{0}} {\bar{c}_{s}}-\left(\frac{\partial K_{\mathbf{\delta}^{\prime}}}{\partial\mathbf{ \delta}^{\prime}}\cdot\mathbf{e}_{\mathbf{q}}\right)\left(\frac{3K_{0}}{\bar{ c}_{s}}\pm\frac{(\hat{\mathbf{q}}_{2}\cdot\mathbf{\delta}^{\prime})^{2}}{\bar{c}_{s}}6(K_{0}-J_{0}) \right)\bigg{]}\] \[\bar{\mathcal{C}}(\hat{\mathbf{q}},\hat{\mathbf{q}_{2}})= \sum_{\mathbf{\delta}}(\hat{\mathbf{q}}\cdot\mathbf{\delta})^{2}\bigg{[}- \left(\mathbf{e}_{-\mathbf{q}}\cdot\frac{\partial^{2}J_{\mathbf{\delta}}}{ \partial\mathbf{\delta}^{2}}\cdot\mathbf{e}_{\mathbf{q}}\right)\frac{3K_{0}}{\bar{ c}_{s}}+\left(\mathbf{e}_{-\mathbf{q}}\cdot\frac{\partial^{2}K_{\mathbf{\delta}}}{ \partial\mathbf{\delta}^{2}}\cdot\mathbf{e}_{\mathbf{q}}\right)\left(\frac{3K_{0}} {\bar{c}_{s}}+\frac{(\hat{\mathbf{q}}_{2}\cdot\mathbf{\delta})^{2}}{\bar{c}_{s}}6 (K_{0}-J_{0})\right)\bigg{]}\]
To obtain the sound attenuation \(\propto\Sigma(\mathbf{q},\omega)/v_{0}\), we note that at the leading order only \(\Sigma_{1}\) has an imaginary part. From Eq. 11, it is fairly straight forward to show that it is proportional to \(\omega_{0,\mathbf{q}}^{ph}\) and hence disappears for the acoustic phonons in the limit of \(\mathbf{q}\to 0\).[37]
### Three sublattice nematic phase
For the three sublattice nematic, the spin-phonon coupling Hamiltonian in Eq. 25 is given by
\[H_{sp-ph,3SN}=H_{1,3SN}+H_{2,3SN} \tag{15}\]
where the two terms, analogous to Eqs. 20, are given by
\[H_{1,3SN}= \sum_{\mathbf{k},\mathbf{q}_{2},\bar{\lambda}\bar{\rho}}\widetilde{ \widetilde{\psi}}^{\dagger}_{\mathbf{k}+\mathbf{q}_{2},\bar{\lambda}\bar{\rho} }\widetilde{\widetilde{\mathcal{M}}}^{(1)}_{\mathbf{k}+\mathbf{q}_{2},\mathbf{ q}_{2}}\widetilde{\psi}_{\mathbf{q}_{2},\bar{\lambda}\bar{\rho}}A_{ \mathbf{k}} \tag{16}\] \[H_{2,3SN}= \sum_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}_{2},\bar{\lambda }\bar{\rho}}\widetilde{\widetilde{\psi}}^{\dagger}_{\mathbf{k}-\mathbf{k}^{ \prime}+\mathbf{q}_{2},\bar{\lambda}\bar{\rho}}\widetilde{\mathcal{M}}^{(2)} _{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}_{2}}\widetilde{\psi}_{\mathbf{q}_ {2},\bar{\lambda}\bar{\rho}}A_{\mathbf{k}}A_{-\mathbf{k}^{\prime}}\] \[+\sum_{\mathbf{k}}\widetilde{\mathcal{M}}^{(2),0}_{\mathbf{k}}A_{ \mathbf{-k}}\, \tag{17}\]
where \(\widetilde{\psi}_{\mathbf{k},\bar{\lambda}\bar{\rho}}=\left(\widetilde{d}_{+, \mathbf{k},\bar{\lambda}\bar{\rho}},\ \widetilde{d}_{-,\mathbf{k},\bar{\lambda}\bar{\rho}},\ \widetilde{d}_{+,-\mathbf{k},\bar{\lambda}\bar{\rho}}^{\dagger}\ \widetilde{d}_{-,-\mathbf{k},\bar{\lambda}\bar{\rho}} \right)^{T}\) and
\[\widetilde{\mathcal{M}}^{(1)}_{\mathbf{k}+\mathbf{q}_{2},\mathbf{ q}_{2}}= \sum_{\tilde{\mathbf{\delta}}}\sqrt{\frac{1}{2MN\omega_{0,\mathbf{k}}^{ph }}}(1-e^{i\mathbf{k}\cdot\tilde{\mathbf{\delta}}})\widetilde{\overline{\mathcal{M }}}^{(1)}_{\mathbf{k}+\mathbf{q}_{2},\mathbf{q}_{2}}(\widetilde{\mathbf{\delta}}) \tag{18}\]
where
\[\overline{\widetilde{\mathcal{M}}}^{(1)}_{\mathbf{k}+\mathbf{q}_{2},\mathbf{ q}_{2}}(\widetilde{\mathbf{\delta}})= T^{\dagger}_{\mathbf{k}+\mathbf{q}_{2}}\Bigg{[}\left(\frac{ \partial J_{\tilde{\mathbf{\delta}}}}{\partial(-\widetilde{\mathbf{\delta}})}\cdot \mathbf{e}_{\mathbf{k}}\right)e^{i\mathbf{q}_{2}\cdot\tilde{\mathbf{\delta}}} \begin{pmatrix}0&0&0&0\\ 1&0&1&0\\ 0&0&0&0\\ 1&0&1&0\end{pmatrix}+\left(\frac{\partial K_{\tilde{\mathbf{\delta}}}}{\partial(- \widetilde{\mathbf{\delta}})}\cdot\mathbf{e}_{\mathbf{k}}\right)\begin{pmatrix}e^{-i \mathbf{k}\cdot\tilde{\mathbf{\delta}}}&-e^{-i(\mathbf{k}+\mathbf{q}_{2})\cdot \tilde{\mathbf{\delta}}}&0&0\\ -e^{i\mathbf{q}_{2}\cdot\tilde{\mathbf{\delta}}}&1&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}\Bigg{]}T_{\mathbf{q}_{2}} \tag{19}\]
and
\[\widetilde{\mathcal{M}}^{(2)}_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}_{2}}= \sum_{\tilde{\mathbf{\delta}}}\frac{1}{4MN\sqrt{\omega_{0,\mathbf{k}}^{ph} \omega_{0,\mathbf{-k}^{\prime}}^{ph}}}(1-e^{i\mathbf{k}\cdot\tilde{\mathbf{\delta}}})( 1-e^{-i\mathbf{k}^{\prime}\cdot\tilde{\mathbf{\delta}}})\widetilde{\mathcal{M}}^{(2)}_ {\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}_{2}}(\widetilde{\mathbf{\delta}}) \tag{20}\]
where
\[\widetilde{\mathcal{M}}^{(2)}_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}_{2}}( \widetilde{\mathbf{\delta}})= T^{\dagger}_{\mathbf{k}-\mathbf{k}^{\prime}+\mathbf{q}_{2}} \Bigg{[}\mathbf{e}_{-\mathbf{k}^{\prime}}\cdot\frac{\partial^{2}J_{\tilde{\mathbf{ \delta}}}}{\partial\tilde{\mathbf{\delta}}^{2}}\cdot\mathbf{e}_{\mathbf{k}}e^{i \mathbf{q}_{2}\cdot\tilde{\mathbf{\delta}}}\begin{bmatrix}0&0&0&0\\ 1&0&1&0\\ 0&0&0&0\\ 1&0&1&0\end{bmatrix}+\mathbf{e}_{-\mathbf{k}^{\prime}}\cdot\frac{\partial^{2}K_ {\tilde{\mathbf{\delta}}}}{\partial\tilde{\mathbf{\delta}}^{2}}\cdot\mathbf{e}_{\mathbf{k}} \begin{bmatrix}e^{-i(\mathbf{k}-\mathbf{k}^{\prime})\cdot\tilde{\mathbf{\delta}}}&-e^{-i( \mathbf{k}-\mathbf{k}^{\prime}+\mathbf{q}_{2})\cdot\tilde{\mathbf{\delta}}}&0&0\\ -e^{i\mathbf{q}_{2}\cdot\tilde{\mathbf{\delta}}}&1&0&0\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix}\Bigg{]}T_{\mathbf{q}_{2}} \tag{21}\]
\[\widetilde{\mathcal{M}}^{(2),0}_{\mathbf{k}}= \sum_{\tilde{\mathbf{\delta}}}\frac{1}{4M\omega_{0,\mathbf{k}}^{ph} }(1-e^{i\mathbf{k}\cdot\tilde{\mathbf{\delta}}})(1-e^{-i\mathbf{k}\cdot\tilde{\mathbf{ \delta}}})\left(\mathbf{e}_{-\mathbf{k}}\cdot\frac{\partial^{2}K_{\tilde{\mathbf{ \delta}}}}{\partial\tilde{\mathbf{\delta}}^{2}}\cdot\mathbf{e}_{\mathbf{k}}\right) \tag{22}\]
Here, the matrix \(T_{\bf k}\) is as defined in Eq. 12 and \(\widetilde{\mathbf{\delta}}\) are the displacements mentioned below Eq. 27.
The phonon self energy is then calculated perturbatively (in spin-phonon interaction) and the leading contributions are given by:
\[\Sigma({\bf q},i\Omega_{n}) =\sum_{\bf k}\widetilde{\Sigma}_{\bf k}({\bf q},i\Omega_{n})\] \[=\sum_{\bf k}[\widetilde{\Sigma}_{1\bf k}({\bf q},i\Omega_{n})+ \widetilde{\Sigma}_{2\bf k}({\bf q},i\Omega_{n})]\, \tag{13}\]
where
\[\widetilde{\Sigma}_{1\bf k}({\bf q},i\Omega_{n}) \approx\frac{3}{2}\sum_{\vec{\mu},\vec{\nu}\in\{+,-\}}\] \[\bigg{[}-\frac{1}{\beta}\mathcal{N}^{\vec{\mu}\vec{\nu}}_{2,{\bf q },{\bf k}}\sum_{\omega_{1}}\widetilde{G}^{m,\vec{\mu}}_{0}({\bf q}+{\bf k},i \Omega_{n}+i\omega_{1})\widetilde{G}^{m,\vec{\nu}}_{0}({\bf k},i\omega_{1})\] \[-\frac{1}{\beta}\mathcal{N}^{\vec{\mu}\vec{\nu}}_{3,{\bf q},{\bf k }}\sum_{\omega_{1}}\widetilde{G}^{m,\vec{\mu}}_{0}(-{\bf q}+{\bf k},-i\Omega _{n}-i\omega_{1})\widetilde{G}^{m,\vec{\nu}}_{0}(-{\bf k},i\omega_{1})\] \[-\frac{1}{\beta}\mathcal{N}^{\vec{\mu}\vec{\nu}}_{1,{\bf q},{\bf k }}\sum_{\omega_{1}}\widetilde{G}^{m,\vec{\mu}}_{0}({\bf q}+{\bf k},i\Omega_{n }-i\omega_{1})\widetilde{G}^{m,\vec{\nu}}_{0}(-{\bf k},i\omega_{1})\bigg{]} \tag{14}\] \[\widetilde{\Sigma}_{2\bf k}({\bf q},i\Omega_{n}) \approx 2\widetilde{\mathcal{M}}^{(2),0}_{\bf q}+6\sum_{\hat{\imath}=\{1 \text{ to }4\}}\widetilde{\mathcal{M}}^{(2),\vec{i}\vec{i}}_{\bf q,q,k}( \widetilde{\psi}^{\hat{\imath}}_{\bf k}\widetilde{\psi}^{\hat{\imath}}_{\bf k }\widetilde{\psi}^{\hat{\imath}}_{\bf k})\, \tag{15}\]
where the \(\widetilde{\lambda}\widetilde{\rho}\) indices are suppressed since each pair gives the same contribution and has been accounted by appropriate multiplicative factors in the expressions above, \(\widetilde{G}^{m,+}_{0}({\bf q},i\Omega_{n})\) and \(\widetilde{G}^{m,-}_{0}({\bf q},i\Omega_{n})\) are the bare propagators for the \(\widetilde{d}_{+,{\bf q}}\) and \(\widetilde{d}_{-,{\bf q}}\) bosons: \(\widetilde{G}^{m,+}_{0}({\bf q},i\Omega_{n})=\frac{1}{i\Omega_{n}-\omega^{ \prime}_{+,{\bf q}}}\) and \(\widetilde{G}^{m,-}_{0}({\bf q},i\Omega_{n})=\frac{1}{i\Omega_{n}-\omega^{ \prime}_{-,{\bf q}}}\) and \(\mathcal{N}\) in \(\Sigma_{1}\) stands for the appropriate matrix elements of \(\mathcal{O}((\widetilde{\mathcal{M}}^{(1)}_{{\bf k},{\bf q}_{2}})^{2})\). The sum over Matsubara frequencies can be performed using the techniques in Ref. [36]. Converting the momentum sums into integrals one obtains Eq. 28 in the main text.
For \(J_{0}/K_{0}\ll 1\) and temperatures much less than the gap scale of \(\widetilde{\omega}^{s}_{+,{\bf q}}\), the temperature dependent contribution to Eq. 14 with both internal lines in Fig. 3 belonging to the \(\widetilde{\omega}^{s}_{+,{\bf q}}\) branch is exponentially damped due to the finite gap. When one branch is \(\widetilde{\omega}^{s}_{+,{\bf q}}\) and the other is \(\widetilde{\omega}^{s}_{-,{\bf q}}\), the temperature dependence obtained is similar to that obtained when both branches are \(\widetilde{\omega}^{s}_{-,{\bf q}}\), albeit with different prefactors. Likewise, the contribution to the second term of Eq. 15 due to the \(\widetilde{\omega}^{s}_{+,{\bf q}}\) branch would be exponentially damped.
We now list the detailed expressions obtained by considering only the \(\widetilde{\omega}^{s}_{-,{\bf q}}\) branch for obtaining the self energy. Now, to scale out the temperature dependence due to the Bose distribution function, we perform the standard change of variables \(x=\beta\widetilde{\omega}^{s}_{-,{\bf k}}\) (where \(\beta=1/(k_{B}T)\)) and obtain the temperature dependence. This changes the upper limit of the first term in Eq. 28 and the lower limit of the second term in Eq. 28 to \(\beta C_{s}k^{*}=\beta\widetilde{c_{s}}(k^{*})^{2}=6\beta J_{0}\) for integration over the variable \(x\). In the limit where \(\beta J_{0}\gg 1\), only the contribution from the first integral in Eq. 28 with the linear dispersion would be important, giving the following expressions for the pre-factors in Eq. 29,
\[\widetilde{c}_{1}= \sum_{\vec{\mathbf{\delta}}}\frac{1}{2Mv_{0}^{2}}(\hat{\bf q}\cdot \widetilde{\mathbf{\delta}})^{2}\left({\bf e}_{-{\bf q}}\cdot\frac{\partial^{2}K_{ \vec{\mathbf{\delta}}}}{\partial\widetilde{\mathbf{\delta}}^{2}}.{\bf e}_{{\bf q}} \right)-\frac{3V}{16\pi^{2}}\lim_{{\bf q}\to 0}\int_{\rm BZ}dkd\theta_{\hat{\bf k}}k\frac{( \mathcal{N}^{--}_{1,{\bf q},{\bf k}}+\mathcal{N}^{--}_{3,{\bf q},{\bf k}})}{ \widetilde{\omega}^{s}_{-,{\bf k}}\omega^{ph}_{0,{\bf q}}}\] \[+\frac{3V}{8\pi^{2}MNv_{0}^{2}}\int_{\rm BZ}dkd\theta_{\hat{\bf k}}k \sum_{\vec{\mathbf{\delta}}}(\hat{\bf q}\cdot\widetilde{\mathbf{\delta}})^{2} \widetilde{\mathcal{M}}^{(2),44}_{0,0,{\bf k}}(\widetilde{\mathbf{\delta}}) \tag{16}\]
\[\widetilde{c}_{2}= \frac{3Vk_{B}^{3}}{4\pi^{2}MNv_{0}^{2}c_{s}^{4}}\int_{0}^{\infty} dx\int_{0}^{2\pi}d\theta_{\hat{\bf k}}\bigg{[}\frac{x^{3}e^{x}c_{s}\cos\theta_{{\bf q} {\bf k}}\mathcal{A}_{+}(\hat{\bf q},\hat{\bf k})K_{0}}{(e^{x}-1)^{2}(v_{0}-c_{s} \cos\theta_{{\bf q}{\bf k}})J_{0}}+\frac{x^{2}}{(e^{x}-1)}\big{(}-\mathcal{A}_{- }(\hat{\bf q},\hat{\bf k})\frac{K_{0}}{J_{0}}+c_{s}\sqrt{\frac{K_{0}}{2J_{0}}} \mathcal{C}(\hat{\bf q},\hat{\bf k})\big{)}\bigg{]} \tag{17}\]
where \(k=|{\bf k}|\), \(\theta_{{\bf q}{\bf k}}\) is the angle between \(\hat{\bf q}\) and \(\hat{\bf k}\), the integration limits in \(\widetilde{c}_{2}\) have been extended to infinity in the \(\beta J_{0}\gg 1\) limit and
\[\mathcal{A}_{\pm}(\hat{\bf q},\hat{\bf k})= \sum_{\widetilde{\mathbf{\delta}},\widetilde{\mathbf{\delta}}^{\prime}}( \hat{\bf q}\cdot\widetilde{\mathbf{\delta}})(\hat{\bf q}\cdot\widetilde{\mathbf{\delta}}^{ \prime})\bigg{[}\frac{1}{4}\frac{\partial J_{\widetilde{\mathbf{\delta}}}}{\partial \widetilde{\mathbf{\delta}}}\cdot{\bf e}_{-{\bf q}}\pm\frac{\partial K_{ \widetilde{\mathbf{\delta}}}}{\partial\widetilde{\mathbf{\delta}}}\cdot{\bf e}_{-{\bf q }}\left(\frac{J_{0}}{2K_{0}}\right)(\hat{\bf k}\cdot\widetilde{\mathbf{\delta}})^{2} \bigg{]}\bigg{[}\frac{1}{4}\frac{\partial J_{\widetilde{\mathbf{\delta}}^{\prime}}}{ \partial\widetilde{\mathbf{\delta}}^{\prime}}\cdot{\bf e}_{{\bf q}}\pm\frac{\partial K_{ \widetilde{\mathbf{\delta}}^{\prime}}}{\partial\widetilde{\mathbf{\delta}}^{\prime}}\cdot{\bf e}_{{ \bf q}}\left(\frac{J_{0}}{2K_{0}}\right)(\hat{\bf k}\cdot\widetilde{\mathbf{\delta}}^{ \prime})^{2}\bigg{]} \tag{18}\]
\[\mathcal{C}(\hat{\bf q},\hat{\bf k})= \sum_{\widetilde{\mathbf{\delta}}}(\hat{\bf q}\cdot\widetilde{\mathbf{ \delta}})^{2}\bigg{[}\frac{1}{4}\left({\bf e}_{-{\bf q}}\cdot\frac{\partial^{2}J_{ \widetilde{\mathbf{\delta}}}}{\partial\widetilde{\mathbf{\delta}}^{2}}\cdot{\bf e}_{{\bf q }}\right)+\left({\bf e}_{-{\bf q}}\cdot\frac{\partial^{2}K_{\widetilde{\mathbf{ \delta}}}}{\partial\widetilde{\mathbf{\delta}}^{2}}\cdot{\bf e}_{{\bf q}}\right)\left( \frac
For \(\beta J_{0}\lesssim 1\), the leading temperature dependence is obtained from the second integral in Eq. 28 with the quadratic dispersion, \(\frac{\Delta v}{v}=\widetilde{c}_{1}+\mathcal{I}_{1}+\mathcal{I}_{2}+\mathcal{I}_{3}\), where \(\widetilde{c}_{1}\) is given in Eq. 26 and
\[\mathcal{I}_{1}= -\frac{3V}{8\pi^{2}MNv_{0}^{2}\widetilde{c}_{s}}\int_{6\beta J_{0 }}^{\infty}\frac{dx}{x(e^{x}-1)}\int d\theta_{\hat{\mathbf{k}}}\widetilde{ \mathcal{A}}_{-}(\hat{\mathbf{q}},\hat{\mathbf{k}})\] \[\mathcal{I}_{2}= \frac{27V}{4\pi^{2}MNv_{0}^{2}}\int_{6\beta J_{0}}^{\infty}\frac {dx\sqrt{2}e^{x}}{\sqrt{\beta}(e^{x}-1)^{2}}\int d\theta_{\hat{\mathbf{k}}} \frac{\widetilde{\mathcal{A}}_{+}(\hat{\mathbf{q}},\hat{\mathbf{k}})\cos\theta _{\mathbf{q}\hat{\mathbf{k}}}}{v_{0}\sqrt{\widetilde{c}_{s}}}\] \[\mathcal{I}_{3}= \frac{9V}{8\sqrt{2}\pi^{2}MNv_{0}^{2}\widetilde{c}_{s}}\int_{6 \beta J_{0}}^{\infty}\frac{dx}{\beta(e^{x}-1)}\int d\theta_{\hat{\mathbf{k}}} \widetilde{\mathcal{C}}(\hat{\mathbf{q}},\hat{\mathbf{k}}) \tag{27}\]
where the upper limit can be taken to infinity due to exponential damping and
\[\widetilde{\mathcal{A}}_{+}(\hat{\mathbf{q}},\hat{\mathbf{k}})= \sum_{\tilde{\boldsymbol{\delta}},\tilde{\boldsymbol{\delta}}^{ \prime}}(\hat{\mathbf{q}}\cdot\widetilde{\boldsymbol{\delta}})(\hat{\mathbf{q }}\cdot\widetilde{\boldsymbol{\delta}}^{\prime})\bigg{[}\frac{1}{6}\frac{ \partial J_{\tilde{\boldsymbol{\delta}}}}{\partial\widetilde{\boldsymbol{ \delta}}}\cdot\mathbf{e}_{-\mathbf{q}}+\frac{\partial K_{\tilde{\boldsymbol{ \delta}}}}{\partial\widetilde{\boldsymbol{\delta}}}\cdot\mathbf{e}_{-\mathbf{ q}}\left(\frac{J_{0}}{K_{0}}\right)(\hat{\mathbf{k}}\cdot\widetilde{\boldsymbol{ \delta}})^{2}\bigg{]}\bigg{[}\frac{1}{6}\frac{\partial J_{\tilde{\boldsymbol{ \delta}}^{\prime}}}{\partial\widetilde{\boldsymbol{\delta}}^{\prime}}\cdot \mathbf{e}_{\mathbf{q}}+\frac{\partial K_{\tilde{\boldsymbol{\delta}}^{ \prime}}}{\partial\widetilde{\boldsymbol{\delta}}^{\prime}}\cdot\mathbf{e}_{ \mathbf{q}}\left(\frac{J_{0}}{K_{0}}\right)(\hat{\mathbf{k}}\cdot\widetilde{ \boldsymbol{\delta}}^{\prime})^{2}\bigg{]} \tag{28}\] \[\widetilde{\mathcal{A}}_{-}(\hat{\mathbf{q}},\hat{\mathbf{k}})= \sum_{\tilde{\boldsymbol{\delta}},\tilde{\boldsymbol{\delta}}^{ \prime}}(\hat{\mathbf{q}}\cdot\widetilde{\boldsymbol{\delta}})(\hat{\mathbf{q }}\cdot\widetilde{\boldsymbol{\delta}}^{\prime})\bigg{[}\frac{1}{2}\frac{ \partial J_{\tilde{\boldsymbol{\delta}}}}{\partial\widetilde{\boldsymbol{ \delta}}}\cdot\mathbf{e}_{-\mathbf{q}}-\frac{\partial K_{\tilde{\boldsymbol{ \delta}}}}{\partial\widetilde{\boldsymbol{\delta}}}\cdot\mathbf{e}_{-\mathbf{q}} \left(\frac{J_{0}}{K_{0}}\right)(\hat{\mathbf{k}}\cdot\widetilde{\boldsymbol{ \delta}})^{2}\bigg{]}\bigg{[}\frac{1}{2}\frac{\partial J_{\tilde{\boldsymbol{ \delta}}^{\prime}}}{\partial\widetilde{\boldsymbol{\delta}}^{\prime}}\cdot \mathbf{e}_{\mathbf{q}}-\frac{\partial K_{\tilde{\boldsymbol{\delta}}^{\prime }}}{\partial\widetilde{\boldsymbol{\delta}}^{\prime}}\cdot\mathbf{e}_{ \mathbf{q}}\left(\frac{J_{0}}{K_{0}}\right)(\hat{\mathbf{k}}\cdot\widetilde{ \boldsymbol{\delta}}^{\prime})^{2}\bigg{]} \tag{29}\]
At exactly \(J_{0}=0\), the lower limit of the integrals in Eq. 30 is zero and the integrals diverge, suggesting an instability at the \(J_{0}=0\) point. At finite but small \(J_{0}\), on performing the numerical integration, we see the approximate temperature dependence mentioned in Eq. 29.
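To illustrate the last point, the dimensionless frequency integral controlling \(\mathcal{I}_{1}\) can be evaluated numerically as a function of the cutoff \(6\beta J_{0}\). The sketch below is schematic: it keeps only the \(1/(x(e^{x}-1))\) kernel and drops all angular factors and prefactors, so only the dependence on \(\beta J_{0}\) through the lower limit is meaningful.

```python
import numpy as np
from scipy.integrate import quad

def kernel_integral(beta_J0):
    """Dimensionless integral of I_1 with lower cutoff 6*beta*J0 (all prefactors dropped)."""
    lower = 6.0 * beta_J0
    # 1/(x(e^x - 1)) rewritten as e^{-x}/(x(1 - e^{-x})) to avoid overflow at large x
    integrand = lambda x: np.exp(-x) / (x * (1.0 - np.exp(-x)))
    val, _ = quad(integrand, lower, np.inf)
    return val

for beta_J0 in (1.0, 0.5, 0.2, 0.1, 0.05, 0.01):
    print(f"beta*J0 = {beta_J0:5.2f}  ->  kernel integral = {kernel_integral(beta_J0):.4f}")
# The value grows rapidly as beta*J0 -> 0, reflecting the divergence at J0 = 0 noted above,
# while at finite J0 it stays finite and sets the temperature dependence of dv/v.
```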
## Appendix E Details of the Landau-Ginzburg Theory
The point group symmetry of NiGa\({}_{2}\)S\({}_{4}\) is D\({}_{3d}\). This, along with translations by lattice vectors (\(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\) in Fig. 6 in the plane of the triangular lattice of magnetic atoms) generate the set of relevant symmetry operations. Notably, the point group D\({}_{3d}\) is D\({}_{3}\otimes C_{i}\) (where \(C_{i}\) consists of identity(\(E\)) and inversion(\(I\)) operations). [46] D\({}_{3}\) consists of a 3-fold rotation axis and a 2-fold rotation axis perpendicular to the 3-fold axis (The 2-fold rotation axes \(a\),\(b\) and \(c\) are shown in Fig.6). Combining with inversion this gives 12 elements divided into 6 classes for D\({}_{3d}\) : (\(E\)), (\(C_{3},C_{3}^{2}\)), (\(C_{2,a}^{\prime},C_{2,b}^{\prime},C_{2,c}^{\prime}\)), (\(I\)), (\(IC_{3},IC_{3}^{2}\)) and (\(IC_{2,a}^{\prime},IC_{2,b}^{\prime},IC_{2,c}^{\prime}\)).
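The counting of the 12 elements of D\({}_{3d}\) can be verified directly by generating the closure of explicit \(3\times 3\) matrices for the generators; the short sketch below (the axis conventions are chosen only for this check and are not tied to the \(a\), \(b\), \(c\) axes of Fig. 6) confirms the group order.

```python
import numpy as np
from itertools import product

c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
C3 = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])      # 3-fold rotation about z
C2 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]])     # 2-fold rotation about an in-plane axis
inv = -np.eye(3)                                       # inversion

def closure(generators):
    """Multiply generators until no new group elements appear."""
    elems, grew = [np.eye(3)], True
    while grew:
        grew = False
        for a, b in product(elems + generators, repeat=2):
            g = np.round(a @ b, 10)
            if not any(np.allclose(g, h) for h in elems):
                elems.append(g); grew = True
    return elems

print(len(closure([C3, C2, inv])))   # -> 12, the order of D3d (6 classes as listed above)
```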
For the 3-sublattice dipolar and quadrupolar orders that we are interested in, we consider a single triangle as the unit cell. The non-trivial transformations for the triangle formed by the points 1,2,3 in Fig. 6 which keep its centre (point x) fixed are \(C_{3}\) and \(\sigma_{h}C_{2}^{\prime}\), where \(\sigma_{h}\) denotes reflection about the horizontal plane (the plane containing the triangular lattice). The axes \(a^{\prime}\), \(b^{\prime}\) and \(c^{\prime}\) for the \(C_{2}^{\prime}\) rotation are shown in Fig. 6. \(\sigma_{h}\) is required together with the 2-fold rotation to bring the crystal field environment back to itself. Combining these with inversion about a lattice point generates the 12 elements of \(D_{3d}\), up to translations by lattice vectors.
Denoting the transformations with an argument indicating the coordinate point that is left unchanged by the transformation (e.g., \(C_{3}(1)\) indicates the operation of \(C_{3}\) rotation keeping point 1 fixed, while \(I(1)C_{3}(\mathrm{x})\) indicates the operation of \(C_{3}\) rotation keeping point x fixed followed by inversion about the site now at location 1), for
Figure 6: Triangular lattice of magnetic Ni\({}^{2+}\). Axes for the different symmetry transformations of \(D_{3d}\) are shown
the elements apart from identity and inversion, we have:
\[T_{-\mathbf{a_{1}}}C_{3}(\mathrm{x})= C_{3}(1)\] \[T_{-\mathbf{a_{2}}}C_{3}^{2}(\mathrm{x})= C_{3}^{2}(1)\] \[T_{-\mathbf{a_{1}}}\sigma_{h}C_{2,a^{\prime}}^{\prime}(\mathrm{x})= I(1)C_{2,a}^{\prime}(1)\] \[\sigma_{h}C_{2,b}^{\prime}(\mathrm{x})= I(1)C_{2,b}^{\prime}(1)\] \[T_{-\mathbf{a_{2}}}\sigma_{h}C_{2,c^{\prime}}^{\prime}(\mathrm{x})= I(1)C_{2,c}^{\prime}(1)\] \[T_{\mathbf{a_{1}}}I(1)C_{3}(\mathrm{x})= I(1)C_{3}(1)\] \[T_{\mathbf{a_{2}}}I(1)C_{3}^{2}(\mathrm{x})= I(1)C_{3}^{2}(1)\] \[T_{\mathbf{a_{1}}}I(1)\sigma_{h}C_{2,a^{\prime}}^{\prime}(\mathrm{x})= C_{2,a}^{\prime}(1)\] \[I(1)\sigma_{h}C_{2,b^{\prime}}(\mathrm{x})= C_{2,b}^{\prime}(1)\] \[T_{\mathbf{a_{2}}}I(1)\sigma_{h}C_{2,c^{\prime}}^{\prime}(\mathrm{x})= C_{2,c}^{\prime}(1)\.\]
### Transformation of dipoles and quadrupoles under point group symmetries
The symmetry transformations for the irreps of the dipoles on the triangle of Fig. 1 listed in Table 1 are:
\[C_{3}: m_{a}^{\alpha}\rightarrow m_{a}^{\alpha}\] \[m_{e1}^{\alpha}\rightarrow -\frac{1}{2}m_{e1}^{\alpha}+\frac{\sqrt{3}}{2}m_{e2}^{\alpha}\] \[m_{e2}^{\alpha}\rightarrow -\frac{\sqrt{3}}{2}m_{e1}^{\alpha}-\frac{1}{2}m_{e2}^{\alpha}\] \[\sigma_{h}C_{2,a^{\prime}}^{\prime}: m_{a}^{\alpha}\rightarrow m_{a}^{\alpha}\] \[m_{e1}^{\alpha}\rightarrow -m_{e1}^{\alpha}\] \[m_{e2}^{\alpha}\rightarrow m_{e2}^{\alpha}\]
Further, inversion about a lattice point of a sublattice interchanges the other two sublattices. This action of interchange of two sublattices is the same as that of \(\sigma_{h}C_{2}^{\prime}\) about the appropriate two fold axis. Under translation \(T_{\mathbf{a_{1}}}\), sublattice \(1\to 2\to 3\to 1\), which is also the action of \(C_{3}\). Under translation \(T_{\mathbf{a_{2}}}\), sublattice \(1\to 3\to 2\to 1\), which is also the action of \(C_{3}^{2}\). Under a global SU(2) spin rotation, \(m_{\widetilde{I}}^{\alpha}\to R^{\alpha\beta}m_{\widetilde{I}}^{\beta}\) for \(\widetilde{I}=a,e1,e2\), where \(R^{\alpha\beta}\) are the SU(2) spin rotation matrices for spin-1. The dipoles are odd under time reversal, so \(m_{\widetilde{I}}^{\alpha}\rightarrow-m_{\widetilde{I}}^{\alpha}\).
The symmetry transformations for the irreps of the quadrupoles on the up triangle of Fig. 1 listed in Table 1 are:
\[C_{3}: Q_{a}^{\alpha\beta} \rightarrow Q_{a}^{\alpha\beta} \tag{14}\] \[Q_{e1}^{\alpha\beta} \rightarrow -\frac{1}{2}Q_{e1}^{\alpha\beta}+\frac{\sqrt{3}}{2}Q_{e2}^{\alpha\beta}\] \[Q_{e2}^{\alpha\beta} \rightarrow -\frac{\sqrt{3}}{2}Q_{e1}^{\alpha\beta}-\frac{1}{2}Q_{e2}^{\alpha\beta}\] \[\sigma_{h}C_{2,a^{\prime}}^{\prime}: Q_{a}^{\alpha\beta} \rightarrow Q_{a}^{\alpha\beta}\] \[Q_{e1}^{\alpha\beta} \rightarrow -Q_{e1}^{\alpha\beta}\] \[Q_{e2}^{\alpha\beta} \rightarrow Q_{e2}^{\alpha\beta}\]
Like the case for dipolar modes, inversion about a lattice point of a sublattice interchanges the other two sublattices. This action of interchange of two sublattices is the same as that of \(\sigma_{h}C_{2}^{\prime}\) about the appropriate two fold axis. Under translation \(T_{\mathbf{a_{1}}}\), sublattice \(1\to 2\to 3\to 1\) which is also the action of \(C_{3}\). Under translation \(T_{\mathbf{a_{2}}}\), sublattice \(1\to 3\to 2\to 1\) which is also the action of \(C_{3}^{2}\). Similarly, under spin rotations and time reversal, we respectively have \(Q_{\widetilde{I}}^{\alpha\beta}\to R^{\alpha\rho}R^{\beta\lambda}Q_{ \widetilde{I}}^{\rho\lambda}\) and \(Q_{\widetilde{I}}^{\alpha\beta}\to Q_{\widetilde{I}}^{\alpha\beta}\) for \(\widetilde{I}=a,e1,e2\).
### Normal elastic modes and their transformations
We consider the up triangle of Fig.1 with \(\mathbf{u_{1}}\),\(\mathbf{u_{2}}\) and \(\mathbf{u_{3}}\) being the displacements from mean position. The three non-zero energy normal modes (in the basis { \(u_{1x}\),\(u_{1y}\),\(u_{2x}\),\(u_{2y}\),\(u_{3x}\),\(u_{3y}\)}) arranged as irreducible representations are given in Table 2. Under the lattice symmetries, the normal modes transform as:
\[C_{3}: \epsilon_{a} \rightarrow \epsilon_{a} \tag{15}\] \[\epsilon_{e1} \rightarrow -\frac{1}{2}\epsilon_{e1}+\frac{\sqrt{3}}{2}\epsilon_{e2}\] \[\epsilon_{e2} \rightarrow -\frac{\sqrt{3}}{2}\epsilon_{e1}-\frac{1}{2}\epsilon_{e2}\] \[\sigma_{h}C_{2,a^{\prime}}^{\prime}: \epsilon_{a} \rightarrow \epsilon_{a}\] \[\epsilon_{e1} \rightarrow -\epsilon_{e1}\] \[\epsilon_{e2} \rightarrow \epsilon_{e2}\]
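The \(2\times 2\) matrices quoted above for the \((e1,e2)\) doublets (they are the same for the dipolar, quadrupolar and strain modes) can be checked against the defining relations of the generators; a minimal sketch of this consistency check, with our own variable names, is:

```python
import numpy as np

C3 = np.array([[-0.5, np.sqrt(3) / 2],
               [-np.sqrt(3) / 2, -0.5]])   # action of C3 on the (e1, e2) doublet
Sg = np.array([[-1.0, 0.0],
               [0.0, 1.0]])                # action of sigma_h C'_2 on the doublet

assert np.allclose(np.linalg.matrix_power(C3, 3), np.eye(2))   # C3^3 = E
assert np.allclose(Sg @ Sg, np.eye(2))                         # (sigma_h C'_2)^2 = E
assert np.allclose(Sg @ C3 @ Sg, C3 @ C3)                      # dihedral relation of D3
print("the doublet matrices form a representation of the generators")
```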
The long wavelength strain modes, \(\varepsilon_{a},\varepsilon_{e1},\varepsilon_{e2}\) (where \(\varepsilon_{a}=\epsilon_{a}/a\), \(\varepsilon_{e1}=\epsilon_{e1}/a\) and \(\varepsilon_{e2}=\epsilon_{e2}/a\); \(a=1\) is the lattice spacing), can be expressed in terms of Cartesian strains as given by Eq. 45. There are two independent elastic constants associated with the two modes, as given in Eq. 66. Using the standard relations for the elastic constants for \(D_{3d}\) symmetry [47] and comparing the harmonic elastic energy written in terms of Cartesian strains with Eq. 66, we obtain
\[c_{a} =2(c_{11}+c_{12})\] \[c_{e} =2(c_{11}-c_{12})\, \tag{16}\]
where the elastic constants on the right hand side are written in Voigt notation.
\begin{table}
\begin{tabular}{|c|c|} \hline Irrep & Eigenvector \\ \hline \(\epsilon_{a}\) & \(\left[\frac{-1}{2},\frac{-1}{2\sqrt{3}},\frac{-1}{2},\frac{-1}{2\sqrt{3}},0, \frac{-1}{\sqrt{3}}\right]\) \\ \hline \(\epsilon_{e1}\) & \(\left[\frac{-1}{2\sqrt{3}},\frac{-1}{2},\frac{-1}{2\sqrt{3}},\frac{1}{2},\frac{ 1}{\sqrt{3}},0\right]\) \\ \(\epsilon_{e2}\) & \(\left[\frac{-1}{2},\frac{1}{2},\frac{1}{2\sqrt{3}},\frac{-1}{2},\frac{1}{2\sqrt{3}},0,\frac{-1}{\sqrt{3}}\right]\) \\ \hline \end{tabular}
\end{table}
Table 2: Normal modes of an up triangle
|
2305.09640
|
Bug or not Bug? Analysing the Reasons Behind Metamorphic Relation
Violations
|
Metamorphic Testing (MT) is a testing technique that can effectively
alleviate the oracle problem. MT uses Metamorphic Relations (MRs) to determine
if a test case passes or fails. MRs specify how the outputs should vary in
response to specific input changes when executing the System Under Test (SUT).
If a particular MR is violated for at least one test input (and its change),
there is a high probability that the SUT has a fault. On the other hand, if a
particular MR is not violated, it does not guarantee that the SUT is fault
free. However, deciding if the MR is being violated due to a bug or because the
MR does not hold/fit for particular conditions generated by specific inputs
remains a manual task and unexplored. In this paper, we develop a method for
refining MRs to offer hints as to whether a violation results from a bug or
arises from the MR not being matched to certain test data under specific
circumstances. In our initial proof-of-concept, we derive the relevant
information from rules using the Association Rule Mining (ARM) technique. In
our initial proof-of-concept, we validate our method on a toy example and
discuss the lessons learned from our experiments. Our proof-of-concept
demonstrates that our method is applicable and that we can provide suggestions
that help strengthen the test suite for regression testing purposes.
|
Alejandra Duque-Torres, Dietmar Pfahl, Claus Klammer, Stefan Fischer
|
2023-05-16T17:42:37Z
|
http://arxiv.org/abs/2305.09640v1
|
# Bug or not Bug? Analysing the Reasons Behind Metamorphic Relation Violations
###### Abstract
Metamorphic Testing (MT) is a testing technique that can effectively alleviate the oracle problem. MT uses Metamorphic Relations (MRs) to determine if a test case passes or fails. MRs specify how the outputs should vary in response to specific input changes when executing the System Under Test (SUT). If a particular MR is violated for at least one test input (and its change), there is a high probability that the SUT has a fault. On the other hand, if a particular MR is not violated, it does not guarantee that the SUT is fault free. However, deciding if the MR is being violated due to a bug or because the MR does not hold/fit for particular conditions generated by specific inputs remains a manual task and unexplored. In this paper, we develop a method for refining MRs to offer hints as to whether a violation results from a bug or arises from the MR not being matched to certain test data under specific circumstances. In our initial proof-of-concept, we derive the relevant information from rules using the Association Rule Mining (ARM) technique. In our initial proof-of-concept, we validate our method on a toy example and discuss the lessons learned from our experiments. Our proof-of-concept demonstrates that our method is applicable and that we can provide suggestions that help strengthen the test suite for regression testing purposes.
Metamorphic testing, metamorphic relation, association rule mining, passive testing.
## I Introduction
Software testing is a crucial stage in the software development life cycle as it ensures the software's proper operation and quality. One of the most significant challenges in software testing is the test oracle problem. A test oracle determines the System Under Test's (SUT) output for a given input. The test oracle problem occurs when the SUT lacks an oracle or when creating one to verify the computed outputs is practically impossible [1]. _Metamorphic Testing_ (MT) is a software testing approach proposed by Chen _et al._[2] to alleviate the test oracle problem.
In contrast to traditional testing techniques, MT examines the relations between input-output pairs of consecutive SUT executions rather than the individual outputs. The relations between SUT inputs and outputs in MT are known as _Metamorphic Relations_ (MRs). MRs specify how the outputs should vary in response to specific input changes [3]. When an MR is violated for at least one test input and its change, there is a strong likelihood that the SUT has a fault. However, the absence of violation does not ensure that the SUT is fault-free. As a result, the suitability of the MRs employed significantly impacts the effectiveness of MT [3].
In current practice, the identification and selection of MRs are made manually, requiring a deep understanding of the SUT and its problem domain. The requirement for domain knowledge makes automatic MR identification challenging [3, 4]. Another critical challenge is the need to distinguish automatically whether a particular MR violation is due to a fault in the SUT or because the MR does not satisfy or fully fit a specific statement/method/function of the SUT for certain test data. In current practice, interpreting an MR violation is an entirely manual effort. It is important to highlight that the cost, in terms of time and resources, of the MT approach is related to the amount of MRs used [5, 6]. Thus, as the number of MRs grows, the number of test cases may grow exponentially. As a result, the execution time and the time needed for manual inspection of MR violations will also increase.
Some approaches indirectly reduce the manual effort required to interpret the meaning of an MR violation. For instance, Cao _et al._[7] provides quantitative suggestions/guidance for developing automated means of selecting/prioritising MR for cost-effective MT. Srinivasan _et al._[5, 6] proposed two MR prioritisation approaches to improve MT's efficiency and effectiveness. These approaches use (i) fault detection information and (ii) statement/branch coverage information to prioritise MRs. Zhang _et al._[8] suggested strategies to clean MRs by deleting duplicate or redundant MRs. These approaches offer indirect help since by prioritising or reducing the set of MRs, the number of test cases will be reduced as well. Thus, the manual effort of inspection through the violated MRs is less.
Motivated by the above, we ask ourselves the following research question: _How can MRs be refined based on test data?_ To answer this question, we developed a method for refining MRs that suggest whether a detected MR violation results from a fault in the SUT or arises from the fact that the MR does not apply to the used test data. Our method assumes that a predefined set of MRs is provided and uses the concepts of fuzz testing, passive testing, and rule mining.
First, our method uses a fuzzer to feed random data to the SUT. Second, it performs the necessary input transformations following the indications of the MRs. Third, similar to passive tests, logs are produced with information related to inputs, outputs, and whether or not MRs are violated. Those logs are used to feed a mining algorithm. Our method employs
association rule mining (ARM). In our context, the purpose of ARM is to extract interesting relationships between the inputs and whether or not the MR is violated. ARM is an unsupervised machine learning (ML) method [9]. ARM algorithms attempt to find relationships or associations between categorical variables in large transactional datasets [10]. We were particularly interested in understanding whether the information provided by the resulting model helps in deciding whether there is a fault or whether the MR does not fully fit the specific method/function/statement when the violation occurs.
In our initial proof-of-concept, we validate our method on a toy example and discuss the lessons learned from our experiments. Our proof-of-concept demonstrates that our method is applicable and can provide suggestions that help strengthen the test suite for regression testing purposes. We published the replication package to facilitate future research.
The rest of the paper is structured as follows. Section II presents the main concepts used in our research. In Section III, we describe the proposed method. In Section IV, we present our results and discuss threats to validity. Section V presents the related work. Finally, we conclude the paper in Section VI.
## II Background
This section presents the key concepts used in our research. Section II-A introduces the MT approach. Section II-B provides a brief description of test data generation techniques, and Section II-C gives a brief introduction to ARM.
### _Metamorphic Testing_
MT is a software testing approach that alleviates the test oracle problem. MT aims to exploit the internal properties of a SUT to either check its expected outputs or generate new test cases. Figure 1 shows the MT basic workflow. Overall, MT works by checking the relation between the inputs and outputs of multiple executions of the SUT. Such relations are called MRs. MRs specify how the outputs should change according to specific variations made to the input. Overall, MT follows five major steps:
1. Create a set of initial tests or source test cases.
2. Identify an appropriate list of MRs that the SUT should satisfy.
3. Create follow-up test cases by applying the input transformations required by the identified MRs in Step 2 to each source test case.
4. Execute the corresponding initial and follow-up test case pairs.
5. Check if the source tests and follow-up tests output change matches the change predicted by the MR.
The final step requires further interpretation of the MT workflow output. When no MR violation is observed, it is not guaranteed that the SUT is implemented correctly. However, if an MR is violated for specific test cases, there must be a fault in the SUT, assuming the MR is defined correctly.
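As a minimal illustration of these steps (a toy example of our own, not taken from the studied SUT), the sketch below tests Python's `math.sin` against the MR sin(x) = sin(π − x): the source input x is transformed into the follow-up input π − x, both test cases are executed, and the outputs are compared without needing an explicit oracle for the value of sin(x).

```python
import math
import random

def mr_sine_holds(x, tol=1e-9):
    source_output = math.sin(x)                 # step 4: execute the source test case
    follow_up_output = math.sin(math.pi - x)    # steps 3-4: transformed input, executed
    return abs(source_output - follow_up_output) <= tol   # step 5: outputs must be equal

# steps 1-5 over 100 random source test cases
violations = [x for x in (random.uniform(-10, 10) for _ in range(100))
              if not mr_sine_holds(x)]
print(f"{len(violations)} MR violations found")   # a violation would point to a fault
```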
### _Test Data Generation_
In software testing, test data corresponds to the input data used during test execution. Test data is used for positive testing, verifying that functions of the SUT generate the anticipated outputs for given inputs, and for negative testing, examining the SUT's capacity to handle atypical, extraordinary, or unexpected inputs [ref]. Inadequately constructed test data may cover only some of the potential test cases, which is detrimental to the software's quality.
Our method does not attempt to add anything new to test data generation techniques. Instead, we take advantage of Fuzz Testing for generating test data. Fuzz Testing, often known as "fuzzing," is a software testing approach involving injecting erroneous or random data, or "FUZZ," into software systems to find coding errors and security flaws. Fuzz testing involves injecting data using automated or semi-automated methods and evaluating the system for various exceptions, such as system failure or malfunction of built-in code, etc. [11].
Overall, fuzzing consists of three components, i.e., _input generator_, _executor_, and _defect monitor_[11]. The input generator provides the executor with several inputs, and the executor runs target programs on the inputs. Then, fuzzing monitors the execution to check if it discovers new execution states or defects (e.g., crashes). Fuzzing can be divided into Generation-based and Mutation-based fuzzing. Mutation-based fuzzing alters existing data samples to create new test data. Generation-Based fuzzing defines new data based on the system's input or the target SUT function. It starts generating input from scratch based on the specification.
### _Association Rule Mining_
ARM is a rule-based unsupervised ML method that allows the discovery of relations between variables or items in large databases. ARM has been used in other fields, such as business analysis, medical diagnosis, and census data, to find previously unknown patterns [10]. The ARM process consists of at least two major steps: finding all the frequent itemsets that satisfy minimum support thresholds and generating strong association rules from the derived frequent itemsets by applying a minimum confidence threshold.
A large variety of ARM algorithms exist [12]. In our experiments, we use the Apriori algorithm from the Python3 Efficient-Apriori library [13]. It is well known that the Apriori
Fig. 1: MT basic workflow
algorithm is exhaustive; it finds all the rules with specified support and confidence. In addition, ARM doesn't require labelled data and is, thus, fully unsupervised. Below we define important terminology regarding ARM:
_Itemset:_ Let \(X_{i}\) be items; then \(I=\{X_{1},\ldots,X_{k}\}\) is an itemset of \(k\) different items, with \(k>1\).
_Association rule:_ Consider a dataset \(D\), having \(m\) different types of items and \(n\) transactions defined by the itemsets constructed from the items. An association rule exposes the relationships between the elements of the itemsets in the set of \(n\) transactions.
_Support:_ The support of an association rule involving itemsets X and Y is the percentage of transactions in dataset D that contain itemsets X and Y. The support of an association rule \(X\to Y\):
\(support(X\to Y)=support(X\cup Y)=P(X\cup Y)\)
_Confidence:_ The confidence is the percentage of transactions in the dataset D with itemset X that also contains the itemset Y. The confidence is calculated using the conditional probability, which is further expressed in terms of itemset support: \(confidence(X\to Y)=P(Y|X)=support(X\cup Y)/support(X)\)
_Lift:_ Lift is used to measure the frequency of the occurrence of \(X\) and \(Y\) together if both are statistically independent of each other. The lift of rule \((X\to Y)\) is defined as \(lift(X\to Y)=confidence(X\to Y)/support(Y)\).
A lift value of 1 indicates that \(X\) and \(Y\) appear together as frequently as would be expected if they were statistically independent of each other.
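To make the definitions concrete, the short sketch below computes support, confidence and lift for one candidate rule on a hand-made set of transactions (the data are illustrative only); these are the same quantities reported for every mined rule by the Apriori implementation of [13].

```python
# Toy transaction dataset D: each transaction is a set of categorical items
D = [{"a<b", "violated"}, {"a<b", "violated"}, {"a<b", "not_violated"},
     {"a=b", "not_violated"}, {"a>b", "not_violated"}]

def support(itemset):
    return sum(itemset <= t for t in D) / len(D)   # fraction of transactions containing itemset

def confidence(X, Y):
    return support(X | Y) / support(X)

def lift(X, Y):
    return confidence(X, Y) / support(Y)

X, Y = {"a<b"}, {"violated"}
print(support(X | Y), confidence(X, Y), lift(X, Y))
# -> 0.4, 0.666..., 1.666...: the rule {a<b} -> {violated} occurs more often together
#    than expected under statistical independence (lift > 1)
```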
## III Method
Figure 2 presents an overview of the method for refining MRs based on rule mining. In general, the proposed method comprises two phases. Phase I is in charge of identifying the degree of applicability of the set of MRs selected. It is important to highlight that our method does not cover the selection of appropriate MRs. We assume that there is already a predefined set of several MRs. Phase II uses the refined MRs to create or improve the test suite for future SUT versions. Thus, the output of this phase could be seen as a regression test suite. Each phase is thoroughly described below:
### **Phase I**
Phase I comprises three modules: the Test Data Generation Module (TDG Module), the Metamorphic Test Module (MT Module) and the Analyser Module. Overall, the TDG Module produces the test data that will feed the SUT and the MT Module. In the MT Module, the test data is transformed based on the indications of each MR; then, such transformed data is executed against the SUT. Both SUT outputs, i.e., the output produced by executing the test data and the output produced by executing the transformed test data, are checked against the MRs in the MR Checker. Then, the test data and the results of the MR Checker are organised and stored in a Log file. The Log file is used in the Analyser Module, where it is processed and refined MRs, in the form of rules, are extracted. These modules, their internal settings and their activities are detailed below:
#### III-A1 **TDG Module**
This module generates the test data, which will feed the SUT. It is important to note that our method does not try to add anything new to the field of test data generation techniques. As we explained in Section II-B, our method uses fuzz testing to generate test data. Overall, the basic fuzzing workflow involves three basic components: _input generator_, _executor_, and _defect monitoring_. There are open-source tools that can be used in this module. However, it is necessary to consider the SUT's application domain. For instance, fuzzers such as SPIKE proxy, Peach Fuzzer, and OWASP WSFuzzer are highly recommended for security purposes in web systems. Also, the programming language must be considered when selecting the fuzzer. For instance, OSS-Fuzz, Google's open-source fuzzing platform, supports the Java and Python languages for finding security vulnerabilities, stability issues, and functional bugs. Regardless of the fuzzer used, the most important requirement for this module is that the generated test data is stored.
#### III-A2 **MT Module**
Overall, this module is responsible for performing the test data transformation based on the changes in the inputs specified by the MRs, executing the transformed
Fig. 2: Overview of the method for refining MRs based on rule mining
test data, and checking if the test data and the transformed test data outputs match the change predicted by the MRs. This module has three main activities, _Set of MRs_, _Test data transformations_, and _MR Checker_.
* _Set of MRs:_ It is important to note that our approach does not involve the initial selection of the MRs. Our approach assumes the prior existence of a predefined set of MRs.
* _Test data transformation:_ This activity is in charge of transforming the test data according to the change specified by each MR. To the best of our knowledge, no tool performs this activity automatically, _i.e.,_ the translation of input change described by the MR and its meaning into code. Thus, this is considered to be a manual task.
* _MR Checker:_ This activity is responsible for checking that both outputs, test data and transformed test data match the change predicted by the corresponding MR.
Once the test data is generated, transformed, and compared by the MR Checker, _i.e.,_ the verdict of the MR Checker is ready, a Log file is produced with the following information: execution ID, test data, function call, and the MR Checker verdict per MR.
#### III-A3 **Analyser Module**
The Analyzer Module is in charge of discovering interesting relations between test data and whether or not a certain MR has been violated. This module has three main activities, _Pre-processing_, _Tester feedback_, and _Rule mining_. Below we describe each activity in detail as well as their internal process:
* _Pre-processing:_ This activity is responsible for ensuring that the data is correct, consistent and usable. Also, it shows the tester an initial summary of the percentage of violations and not violations per each MR and function call. This activity has three main functions: _data quality_, _clean_, and _summary report_. The _data quality_ function is responsible for checking that Log has no missing data, as well as removing the rows that are not needed or inconsistent rows. The _clean_ removes duplicate entries. The _Summary report_ is based on the percentage of violations and no-violation per MRs. Also, it provides the tester with the ability to inspect some random sample values and atypical values. This is done to increase confidence in the suitability of the MRs.
* _Tester feedback:_ This activity is in charge of qualifying the MR Checker's verdicts based on the summary report provided by the _Pre-processing_ activity. Here the tester needs to perform a check depending on whether the MR Checker output reflects incorrect or correct behaviour: 1. _Incorrect behaviour_: The tester evaluates whether there is an obvious fault in the SUT. Phase I should be repeated as long as the fault is present. 2. _Correct behaviour_: In the case of correct behaviour there are two possibilities: (i) _MR not violated_, which represents a positive test; (ii) _MR violated_, which represents a negative test.
If a specific MR is violated 100% of the time, we assume that the MR does not apply to the tested function. On the other hand, if it is not violated 100% of the time, we assume that the MR matches the tested function. In both cases, the tester can directly decide whether to include them in the Rule file. The 100% violations could be used as negative tests and the 100% non-violations as positive tests. We also look at atypical values; for instance, if the MR was violated or not violated only 10% of the time.
* _Rule mining (predefined data type relations):_ This activity is responsible for generating the rule set by discovering interesting relationships between the test data and whether or not a particular MR has been violated. This activity has three internal steps: 1. _Encoding_: This step is in charge of preparing the data according to the requirements of the rule mining algorithm. For example, Apriori [14], which is the algorithm used in this paper, works only with categorical features. Thus, this component categorises and generalises the numerical inputs into string representations. 2. _Rule generation:_ This step is responsible for generating the set of rules using the Apriori ARM algorithm. 3. _Data type relation:_ This step is in charge of generalising the data based on its data type relation. This data type is predefined. For example, the test data for the SUT can be generalised using partial order theory. The partial order defines a notion of comparison between at least two elements, _e.g.,_ input\({}_{a}\) and input\({}_{b}\). The two elements input\({}_{a}\) and input\({}_{b}\) can be in any of four mutually exclusive relationships to each other: input\({}_{a}<\) input\({}_{b}\), input\({}_{a}>\) input\({}_{b}\), input\({}_{a}=\) input\({}_{b}\), or input\({}_{a}\) and input\({}_{b}\) are incomparable.
The latter relationship is not present in our data.
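A minimal sketch of the _Encoding_ and _Rule generation_ steps for the toy example is shown below, assuming the Efficient-Apriori package cited as [13]. Each Log row is turned into a transaction of categorical items (the partial-order relation between the inputs, a zero-input flag, and the MR verdict), and Apriori is then asked for rules that predict the verdict; the column names, flags and thresholds are illustrative choices rather than prescriptions of the method.

```python
from efficient_apriori import apriori

def encode(row):
    """Encoding step: turn one Log entry into a transaction of categorical items."""
    a, b, verdict = row["input_a"], row["input_b"], row["verdict"]
    relation = "a<b" if a < b else ("a>b" if a > b else "a=b")
    zero_flag = "has_zero_input" if 0 in (a, b) else "no_zero_input"
    return (relation, zero_flag, verdict)

# Illustrative Log rows for one (function, MR) pair
log = [{"input_a": 3, "input_b": 5, "verdict": "not_violated"},
       {"input_a": 0, "input_b": 4, "verdict": "violated"},
       {"input_a": 7, "input_b": 0, "verdict": "violated"},
       {"input_a": 2, "input_b": 2, "verdict": "not_violated"}]

transactions = [encode(row) for row in log]
itemsets, rules = apriori(transactions, min_support=0.4, min_confidence=0.9)
for rule in rules:
    if {"violated", "not_violated"} & set(rule.rhs):   # keep rules that predict the verdict
        print(rule)   # e.g. {has_zero_input} -> {violated}
```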
### _Phase II_
Phase II is in charge of generating the test suite for regression testing purposes. Overall, this phase takes the Log, which has the test data generated and takes the Log with the set of rules for generating the final test suite.
## IV Results and Discussion
This section presents and discusses the preliminary results of our approach. Let us consider Algorithm 1 as the SUT. Algorithm 1 is a program that computes three basic arithmetic operations, addition, subtraction, and multiplication, between two integers. The full set of data generated during our experiments, as well as all scripts, can be found in our GitHub repo1.
Footnote 1: [https://github.com/aduguet/VST2023-BugORNOTbug](https://github.com/aduguet/VST2023-BugORNOTbug)
### **Phase I**
#### IV-A1 **Test Data Generation Module**
In this module, we create a fuzzer based on a random number generator. For this,
we use the NumPy random function in Python. A total of 100 random numbers, elements of {0, 1, 2,..., 9}, were generated for input\({}_{a}\) and input\({}_{b}\), following a uniform distribution. Since, for some MRs, a constant is needed, it was generated only once and reused every time it was needed. Figure 3 shows the histogram of the generated test data, _i.e.,_ input\({}_{a}\) and input\({}_{b}\).
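A sketch of this simple generation-based fuzzer is given below; it mirrors the description above (uniform integers from 0 to 9 for both inputs plus a single reusable constant), with the seed being an arbitrary choice made only for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(seed=0)           # seed chosen only for reproducibility
input_a = rng.integers(0, 10, size=100)       # 100 uniform draws from {0, 1, ..., 9}
input_b = rng.integers(0, 10, size=100)
k = int(rng.integers(2, 10))                  # constant k > 1, generated once and reused

test_data = list(zip(input_a.tolist(), input_b.tolist()))
print(test_data[:5], "k =", k)
```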
#### IV-A2 **MT Module**
* _Set of MRs:_ Table I describes the set of MRs for Algorithm 1. For our SUT, we take advantage of the generic rules of arithmetic to build a set of four MRs that may apply to the SUT. MR\({}_{2}\), MR\({}_{3}\), and MR\({}_{4}\) need a constant \(k\). That constant was randomly generated only once and reused each time it was needed.
* _Test data transformation:_ Table II shows how the inputs are transformed following the MR indications.
* _MR Checker:_ Table III shows how the MR Checker checks the outputs following the expected output predicted by the MRs and gets the verdict, _i.e.,_ not violated or violated for our SUT.
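A minimal sketch of the transformation and checking steps for the four MRs of Table I; the toy SUT functions and helper names below are illustrative, not taken from the paper's code:
```python
def add(a, b): return a + b   # toy SUT functions (Algorithm 1)
def sub(a, b): return a - b
def mul(a, b): return a * b

def check_mrs(f, a, b, k):
    """Run source and follow-up test cases; True means the MR was not violated."""
    base = f(a, b)
    return {
        "MR1": f(b, a) == base,           # permute the inputs        -> remain equal
        "MR2": f(k * a, k * b) > base,    # multiply by constant k>1  -> increase
        "MR3": f(a + k, b + k) == base,   # add k to each operand     -> remain equal
        "MR4": f(a - k, b - k) == base,   # subtract k from operands  -> remain equal
    }

# Log one verdict per (function, MR, test datum) combination.
log = [{"f": f.__name__, "a": a, "b": b, **check_mrs(f, a, b, k=3)}
       for f in (add, sub, mul) for a, b in [(2, 5), (0, 0), (7, 1)]]
```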
#### Iv-A3 **Analyser Module**
* _Pre-processing:_ Figure 4 shows an example of the summary report for Algorithm 1, _i.e.,_ not violated or violated, with a controlled input space.
* _Tester feedback:_ Figure 4 shows the percentage of MRs not violated and violated per function call. From the first line, which belongs to the ADD function, we can see that for MR\({}_{3}\) and MR\({}_{4}\) there are 100% violations. In this case, we assume that MR\({}_{3}\) and MR\({}_{4}\) do not apply to the ADD function. However, we have two options: include a test (negative test) for all test data or not include it. On the other hand, MR\({}_{1}\) and MR\({}_{2}\) present 100% and 99% non-violations, respectively. After inspecting random samples, we can conclude that MR\({}_{1}\) and MR\({}_{2}\) completely match the ADD function. This means that MR\({}_{1}\) and 99% of MR\({}_{2}\) will be directly included in the set of rules to build positive tests in Phase II. The 1% violation is a particular case for MR\({}_{2}\) in the ADD function, which occurs whenever input\({}_{a}\) and input\({}_{b}\) are both equal to 0.
In the second line of pie charts in Figure 4, which belongs to the SUB function, one can see the opposite behaviour of MR\({}_{3}\) and MR\({}_{4}\) compared to the ADD function. For these MRs we have 100% non-violations. This indicates that the MRs fully match the SUB function. Like MR\({}_{1}\) and MR\({}_{2}\) in the ADD function, MR\({}_{3}\) and MR\({}_{4}\) can go directly as positive tests for the SUB function in Phase II. We cannot say too much about MR\({}_{1}\) and MR\({}_{2}\) for the SUB function. These MRs, _i.e.,_ MR\({}_{1}\) and MR\({}_{2}\), are the ones which we are interested in analysing with the _rule mining_.
The results for the MUL function are in the third line of the pie charts in Figure 4. In this function, one can see that MR\({}_{1}\) and MR\({}_{3}\) behave the same as in the ADD function, which means that MUL also fully matches MR\({}_{1}\) and can be included directly in the final rules. It similarly
Fig. 4: Summary report for the tester feedback
Fig. 3: Distribution of data generated for input\({}_{a}\) and input\({}_{b}\)
\begin{table}
\begin{tabular}{l|l|l} \hline \hline
**MR** & **Change in the input** & **Expected output** \\ \hline
**MR\({}_{1}\)** & Permute the inputs & Remain equal \\
**MR\({}_{2}\)** & Multiply by a positive constant \(k>1\) & Increase \\
**MR\({}_{3}\)** & Adding a positive constant \(k\) to each operand & Remain equal \\
**MR\({}_{4}\)** & Subtracting a positive constant \(k\) from each operand & Remain equal \\ \hline \hline \end{tabular}
\end{table} TABLE I: Set of MRs for Algorithm 1
happens with MR\({}_{3}\); it has the same behaviour as in the ADD function, meaning we can treat it in the same way as MR\({}_{3}\) for the ADD function. The 11% of violated cases in the MUL function for MR\({}_{2}\) occur whenever input\({}_{a}\) or input\({}_{b}\) is equal to 0. Here one could assume that we have another particular case in the test data. It indicates that we can use this particular case to create a negative MR\({}_{2}\) test for MUL. It cannot be the same as in the ADD function since in the MUL function these violations occur when either input\({}_{a}\) or input\({}_{b}\) is 0, whereas in the ADD function both inputs must be equal to zero. From the information in Figure 4 one can assume the following:
* MR\({}_{1}\) applies to the ADD and MUL functions for all the test data.
* MR\({}_{2}\) applies to the ADD function for almost all the test data, but when input\({}_{a}\) = input\({}_{b}\) = 0, MR\({}_{2}\) must be used as a negative test.
* MR\({}_{3}\) and MR\({}_{4}\) do not apply to the ADD function.
* MR\({}_{3}\) and MR\({}_{4}\) apply to the SUB function for all the test data.
* MR\({}_{3}\) does not apply to the MUL function.
* _Rule mining:_ We apply the Apriori algorithm with a minimum support threshold of 0.2 and a confidence threshold of 1. We created the data type relation in the following way:
[MISSING_PAGE_POST]
fact, it is part of the MT approach to inspect such results, _i.e.,_ whether or not the MR is violated.
The advantage of our approach is that it can provide hard facts on which MRs out of the set of MRs can and cannot be applied. In this regard, our approach reduces the time spent in selecting MRs, which is a manual and very time-consuming task. The downside of our approach is the time required for setup, the runtime of MRs against the test data, and manual inspection. However, the execution of the MRs with the test data can be automated, and the manual inspection is performed only once.
In terms of the effectiveness of our approach in finding defects, we face the same problem as any testing approach. It is well known that there is an open and old discussion about the dependency between test data and test cases. Any approach that uses MRs can only find faults if the test data is chosen wisely. The proper selection of test data is planned to be explored in the future.
### _Threats to validity_
In the context of our proof-of-concept validation, two types of threats to validity are most relevant: threats to internal and external validity.
#### Iv-D1 Internal validity
To achieve internal validity, we used a well-known program as a proof-of-concept and the most common ARM algorithm. It is not fully clear in which situations ARM is the best choice for our method. Future research in the direction of evaluating different ARM algorithms and feature selection needs to be done.
#### Iv-D2 External validity
With regard to external validity, our study is rather limited since we only use one well-understood class in our experiments. Thus, the actual scope of the effectiveness of our proposed method is yet to be determined.
#### Iv-D3 Construct validity
In this paper, we used the NumPy package, in particular its random function, to generate the test data. The usage of these third-party libraries represents potential threats to construct validity. To avoid this, we verified that the results produced by the random function are uniform by manually inspecting the generated distributions.
## V Related Work
Since MT was introduced in 1998 by Chen _et al._[2], it has been demonstrated to be an effective testing technique in a variety of application domains. Several studies have shown MT to be a strong technique for testing "non-testable programs", where an oracle is unavailable or too difficult to implement [15, 16, 17, 18]. MT has also been applied successfully to, _e.g.,_ autonomous driving [19, 20], cloud and networking systems [21, 22], bioinformatic software [23, 24], and scientific software [25]. However, the efficacy of MT heavily relies on the specific MRs employed and on the interpretation of the meaning of MR violations.
As we mentioned before, some approaches indirectly reduce the manual effort required to interpret the meaning of an MR violation through prioritisation. For instance, Cao _et al._[7] provide quantitative suggestions/guidance for developing automated means of selecting/prioritising MRs for cost-effective MT. Srinivasan _et al._[5, 6] proposed two MR prioritisation approaches to improve MT's efficiency and effectiveness. These approaches use (i) fault detection information and (ii) statement/branch coverage information to prioritise MRs. Zhang _et al._[8] suggested strategies to clean MRs by deleting duplicate or redundant MRs. These approaches offer indirect help: by prioritising or reducing the set of MRs, the number of test cases is reduced as well, and thus the manual effort of inspecting the violated MRs is smaller.
## VI Conclusion and future work
We presented a new ARM-based method for refining MRs that suggest whether a detected MR violation results from a fault in the SUT or arises from the fact that the MR does not apply for the used test data. Our method assumes that a predefined set of MRs is provided and uses the concepts of fuzz testing and ARM. Our method consists of two phases. The purpose of Phase I is to evaluate the level of applicability of the chosen set of MRs.
In Phase I, there are three main modules: the TDG Module, the MT Module, and the Analyser Module. The TDG Module generates the test data that is sent to the MT Module and the SUT. According to each MR's instructions, the test data is transformed in the MT Module before being run against the SUT. In the MR Checker, the outputs obtained from running the test data and the transformed test data against the SUT are compared with the changes predicted by the MRs. The test data and the MR Checker Module's results are then organised and saved in a Log file. The Analyser Module uses the Log file to process and refine the MRs according to the relations between the test data and whether or not the MR is violated.
Fig. 5: Example test suite using refined MRs
Phase II is in charge of analysing the final set of rules and creating the new test suite. In our proof-of-concept, we used a toy example, a program that computes three basic arithmetic operations, addition, subtraction, and multiplication between two integers. We show step by step the execution of our method and its expected outputs from each module.
An advantage of our method is that it is not restricted to SUTs with integer inputs and outputs; it can be applied to a class under test whose inputs are of any type, including mixed types of data. Thus, our method can be generalised to inputs of any type, not only integers, which removes some limitations on the type of SUT that can be analysed. The weakness of our method is the need for manual feedback from the tester. However, compared with the manual effort already needed in the MT approach, we consider that the effort needed in our approach is smaller. We must manually translate the rules into assertions (test code) to generate the final test suite. For efficiency reasons, it would be better to have an automatic translation of the generated rules into test code. Unfortunately, there is no simple way to do this.
Given the limitations of our study, more experiments have to be conducted to test our proposed method empirically. We are currently focusing on extending our experiments in three directions. First, we will add more MRs to the initial set of MRs to test the sensitivity of our method with regard to filtering out MRs that have no relation to the SUT. We will systematise this by using the relation between the input type and the MR. Second, we will apply our proposed method to more SUTs. Third, we will do a mutation analysis to evaluate the effectiveness of our approach.
## Acknowledgement
This research was partly funded by the Estonian Center of Excellence in ICT research (EXCITE), the IT Academy Programme for ICT Research Development, the Austrian ministries BMVIT and BMDW, the Province of Upper Austria in the frame of the Software Competence Center Hagenberg (SCCH), and grant PRG1226 of the Estonian Research Council.
|
2310.03865
|
Model Complexity of Program Phases
|
In resource limited computing systems, sequence prediction models must
operate under tight constraints. Various models are available that cater to
prediction under these conditions that in some way focus on reducing the cost
of implementation. These resource constrained sequence prediction models, in
practice, exhibit a fundamental tradeoff between the cost of implementation and
the quality of its predictions. This fundamental tradeoff seems to be largely
unexplored for models for different tasks. Here we formulate the necessary
theory and an associated empirical procedure to explore this tradeoff space for
a particular family of machine learning models such as deep neural networks. We
anticipate that the knowledge of the behavior of this tradeoff may be
beneficial in understanding the theoretical and practical limits of creation
and deployment of models for resource constrained tasks.
|
Arjun Karuvally, J. Eliot B. Moss
|
2023-10-05T19:50:15Z
|
http://arxiv.org/abs/2310.03865v1
|
# Model Complexity of Program Phases
###### Abstract
In resource limited computing systems, sequence prediction models must operate under tight constraints. Various models are available that cater to prediction under these conditions that in some way focus on reducing the cost of implementation. These resource constrained sequence prediction models in practice exhibit a fundamental tradeoff between the cost of implementation and the quality of its predictions. This fundamental tradeoff seems to be largely unexplored for models for different tasks. Here we formulate the necessary theory and an associated empirical procedure to explore this tradeoff space for a particular family of machine learning models such as deep neural networks. We anticipate that the knowledge of the behavior of this tradeoff may be beneficial in understanding the theoretical and practical limits of creation and deployment of models for resource constrained tasks.
## 1 Introduction
Sequence prediction models can have differing resource constraints imposed on them because of properties of the computing systems within which they are implemented. Many of these models can be found in the design of computer systems where they are often used for improvements in performance of microarchitecture (Bhatia et al. (2019), Calder et al. (1997), Dai et al. (2016)), power/performance estimation (Carvalho et al. (2020), Foots et al. (2020), Wu et al. (2015)), thread/instruction scheduling
[Li et al. (2009), Jain and Amarasinghe (2019)], and hardware security [Chiappetta et al. (2016), Khasawneh et al. (2017), Khasawneh et al. (2020)]. There are two general scientific questions of interest in these models. The first one is: What is the nature of the tradeoff that exists between the cost to evaluate the model versus the quality of its predictions? The second question is: What parts of a task require more complex models and how can we quantify and identify them? We approach these questions from an algorithmic information theoretic perspective and explore a notion of information complexity that encompasses the properties of model implementation such as available data, model family, and the cost of evaluation of models.
We organize the paper as follows. In Section 3, we define sequence prediction problems with resource constraints. Section 4 defines our notion of information complexity based on Kolmogorov's algorithmic information theory, where we also compare and contrast related notions that exist in the literature. In Section 5, we propose an empirical procedure to explore the nature of the proposed information complexity measure for resource constrained sequence prediction problems in general. In Sections 6 through 8, we show results from the empirical procedure applied to the problem of predicting future cache miss rates from the sequence of past cache miss rates. Miss rate prediction is useful for different problems in computer systems design like the optimization of energy consumption, cache partitioning, etc.
## 2 Related Work
Kolmogorov's notion of complexity has been used to define and quantify the complexity of objects from different domains such as genomics, virology, etc. [Cilibrasi and Vitanyi (2005)] use Normalized Compression Distance (NCD), a metric based on Kolmogorov Complexity, to cluster identical objects. In NCD, a real world compressor is used to define the Kolmogorov Complexity of individual objects. Recently, available data tends to be large and there is new interest emerging on questions about the relaxed or resource-bounded complexity of the data. The general approach has been to find the complexity of the source generating the data and to quantify that complexity using the entropy of the source. This was done by Rooij and Vitanyi (2012) who used standard lossy data compression techniques to find the trade-off between quality and representational cost. In resource-bounded statistical models, we are really interested in the _model_ complexity of the data. This
deviates from universal notions of complexity such as Shannon entropy. Achille et al. (2021) used Kolmogorov Complexity of the model as a way to measure the difficulty of learning for a data set. They validated this from the perspective of transfer learning by introducing a distance metric in the space of learning tasks. This metric encompasses classical notions of Kolmogorov Complexity, and Shannon and Fisher Information. From these works it can be observed that a general technique to make a computable version of the incomputable Kolmogorov Complexity is to use real compression methods. In our work, we take ideas from the model compression literature to create an empirical procedure to explore the Kolmogorov complexity trade-off curves of cache miss rate behavior of computer programs.
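As a concrete illustration of the idea of substituting a real compressor for the incomputable Kolmogorov Complexity, the Normalized Compression Distance mentioned above can be sketched in a few lines; the choice of zlib as the compressor is ours, not taken from the cited work:
```python
import zlib

def c(s: bytes) -> int:
    # Compressed length as a computable stand-in for Kolmogorov Complexity.
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance of Cilibrasi and Vitanyi.
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```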
Cache miss rate behavior analysis of programs is important for various applications. Joshi et al. (2020) used a machine learning based method to estimate cache miss rates for a cache partitioning algorithm that optimizes cache utilization across multiple processors. Srinivasan et al. (2013), Khoshbakht and Dimopoulos (2017), and Alcorta et al. (2021) all used information about future program phases to improve the energy efficiency of a system. Chiu and Moss (2018) used a run time method based on a decision tree model to create a mechanism to improve the energy delay product of programs running on a computer system with adjustable clock rates. As machine learning based models are being deployed in computer systems, the question of the trade-off between the cost of model evaluation and the quality of its predictions becomes important due to the resource constraints that exist in these systems.
## 3 Resource Constrained Sequence Prediction Models
We now formalize sequence prediction problems and the associated models in which we are interested. There are different ways to create models for sequence prediction problems. We focus on models representing Maximum Likelihood Estimates of a given sequence prediction task \(\mathcal{T}\) under resource constraints. In practice, the task \(\mathcal{T}\) is defined using a finite data set \(\mathcal{D}=\{x_{1},x_{2},x_{3}...x_{N}\}\) that represents the sequence associated with the task. A model learns the task using this data set and the data set is the only connection it can have to the task \(\mathcal{T}\). The maximum likelihood model for the task \(\mathcal{T}\) is represented by a computable probability distribution \(f\) with parameters \(\theta\). That is, \(f\) is a predictor that offers a _distribution_ of values of the next elements in the sequence as opposed to
predicting a single most-likely value. The negative log likelihood (\(\mathcal{L}_{\mathcal{D}}\)) of the model \(f\) with respect to the data set \(\mathcal{D}=\{x_{1},x_{2}...x_{N}\}\) serves as a measure of fit of the model to the task:
\[\mathcal{L}_{\mathcal{D}}(f_{\theta})=\sum_{i=2}^{N}-\log(f_{\theta}(x_{i}|x_{1 },x_{2}...x_{i-1})) \tag{1}\]
The cost of evaluation of the model with parameters \(\theta\) is given by a function \(J(\theta)\geq 0\). The resource constraints that are imposed on the model are given by \(\alpha\), which constrains \(J\) such that \(J(\theta)\leq\alpha\). Using the above definitions, we formalize the resource-constrained sequence prediction problem as that of finding the parameter vector \(\theta^{*}\) that minimizes the loss, subject to the resource constraint:
\[\theta^{*}=\operatorname*{argmin}_{\theta}\mathcal{L}_{\mathcal{D}}(f_{ \theta}),\text{ such that }0\leq J(\theta)\leq\alpha \tag{2}\]
We assume in this work that the model \(f\) is a computable probability distribution in the space of all probability distributions. This assumption seems reasonable since most models built for these problems run in a Turing Machine (any computer system) that can represent only computable functions \(f\).
Using these definitions, the sequence prediction model with which we are concerned in this paper has the general architecture shown in Figure 1. The sequence prediction model forms an auto-regressive feedback loop with an oracle that corrects the predictions of the model and provides the model with information about the quality of its predictions at any point in time \(t\). The resource constraints are imposed on the model part of the system. Although the oracle is not explicitly resource
Figure 1: The general architecture of resource constrained sequence prediction models. The data set \(\mathcal{D}\) is generated using two parts. A model \(f_{\theta}\) that has resource constraints imposed on it and an associated oracle that helps the model encode \(\mathcal{D}\) using \(\mathcal{L}_{\mathcal{D}}(f_{\theta})\), the losses, from which one can derive the errors and thus correct the predictions.
constrained, a tradeoff exists between the size of the oracle and the complexity of the model, which is given by the structure function of the task \(\mathcal{T}\). We will elaborate on this formally in Section 4.
## 4 Algorithmic Information Content
As the cost to evaluate a model \(f_{\theta}\) is related to the description of the model with respect to a finite data set \(\mathcal{D}\), we leverage ideas from algorithmic information theory to define a notion of information content of interest in our context. A principled notion of information content defined for the type of problems formalized in Section 3 should encompass choices such as the model family, finite data set, and the cost function of interest. Algorithmic information theory defines the information content of a string \(x\) as its Kolmogorov Complexity. The Kolmogorov Complexity of a string \(x\) is defined as the shortest possible description in a program written in a universal language \(U\), where the program outputs \(x\). Formally, consider \(p\) to be any program that can be run on a universal machine \(U\) where \(p\) outputs \(x\) and \(l(p)\) is the length of the program \(p\). The Kolmogorov Complexity for a string \(x\) is defined as
\[K_{U}(x)=\min_{p}\bigl{\{}l(p):U(p)=x\bigr{\}} \tag{3}\]
Using Kolmogorov Complexity, the amount of information content in the data set \(\mathcal{D}\) can thus be defined as
\[K_{U}(\mathcal{D})=\min_{p}\bigl{\{}l(p):U(p)=\mathcal{D}\bigr{\}} \tag{4}\]
To define the information content of interest for the resource constrained sequence prediction problems of Section 3, we first consider the following notion of information content associated with the two part description of the data set \(\mathcal{D}\) using the model \(f_{\theta}\). This will be the combined description of
Figure 2: The tradeoff between the cost of implementation for a model to the loss associated with its predictions. The structure function marks the boundary of the set of feasible models. The empirical compression boundary that we explore in this paper is an upper bound to this structure function.
the model along with the oracle in the general architecture.
\[C_{K}(\mathcal{D},f_{\theta})=\min_{\theta}\bigl{\{}\mathcal{L}_{\mathcal{D}}(f_{ \theta})+K_{U}(f_{\theta})\bigr{\}} \tag{5}\]
One part of the description \(C_{K}\) is \(K_{U}(f_{\theta})\), which is the complexity of the description of the model \(f_{\theta}\) in a universal language \(U\). The other part, \(\mathcal{L}_{\mathcal{D}}(f_{\theta})\), is the complexity of encoding the data set \(\mathcal{D}\) using the model \(f_{\theta}\). Since after learning the data set \(\mathcal{D}\), only \(\theta\) is left in the description of the model, the set of all \(\theta\) forms statistics of the model with respect to the data set \(\mathcal{D}\). The setting of \(\theta\) that satisfies \(C_{K}(\mathcal{D},f_{\theta})\) forms both the sufficient and minimal statistics of the model with respect to data set \(\mathcal{D}\)(Vereshchagin and Vitanyi, 2004). This definition of \(C_{K}\) does not incorporate the notion of resource constraints and the tradeoff between \(K_{U}(f_{\theta})\) and \(\mathcal{L}_{\mathcal{D}}(f_{\theta})\). To allow us to reason about the tradeoff, we relax the notion of the minimal sufficient statistic \(\theta\) and consider statistics that are only \(\beta\) sufficient. \(\beta\) sufficient statistics are the statistics \(\theta\) that satisfy the following relaxed notion of information content:
\[C_{\beta}(\mathcal{D})=\min_{\theta}\mathcal{L}_{\mathcal{D}}(f_{\theta})+ \beta K_{U}(f_{\theta}) \tag{6}\]
It was shown by Vereshchagin and Vitanyi (2004) that for \(\beta=1\), the statistic \(\theta\) that satisfies \(C_{\beta}\) matches Kolmogorov's notion of minimal sufficient statistics and relates to the phase change in \(\theta\) that occurs when a model enters the overfitting regime for a given data set \(\mathcal{D}\). This draws a connection between resource constrained Kolmogorov Complexity and the theory of learning. Since our work is mainly concerned with the tradeoff suggested by \(C_{\beta}\), we do not delve into the implications of it in learning theory. The tradeoff between \(J(\theta)\) and \(\mathcal{L}_{\mathcal{D}}(f_{\theta})\) inferred by \(C_{\beta}\) is given by its associated structure function which is defined as
\[S_{\mathcal{D}}(\alpha)=\min_{\theta}\bigl{\{}\mathcal{L}_{\mathcal{D}}(f_{ \theta}):K_{U}(f_{\theta})\leq\alpha\bigr{\}} \tag{7}\]
Thus the value of \(\beta\) in \(C_{\beta}\) serves as a Lagrange multiplier of the structure function in Equation 7 and is related to different resource constraints imposed on the model. Now that we have a notion of resource constraint in \(C_{\beta}\), we focus on incorporating the cost function into \(C_{\beta}\). Since evaluation cost can be well approximated by the description of the model \(f_{\theta}\), we incorporate a notion of cost
into the universal language \(U\). Consider \(K_{J}(f_{\theta})\) to be the cost to encode \(f_{\theta}\) using a cost model \(J\). We fix \(J\) and the relation between \(K_{U}\) such that
\[K_{U}(f_{\theta})+c\leq K_{J}(f_{\theta}) \tag{8}\]
Here \(c\) is the length of a fixed size "shell" program independent of the input that encodes the cost function \(J\). The cost function serves as an upper bound to the algorithmic information content \(K_{U}(f_{\theta})\). The structure function that incorporates our notion of cost can be defined as:
\[S_{J,\mathcal{D}}(k)=\min_{\theta}\bigl{\{}\mathcal{L}_{\mathcal{D}}(f_{ \theta}):K_{J}(f_{\theta})\leq k\bigr{\}} \tag{9}\]
The structure function \(S_{J,\mathcal{D}}(k)\) reformulates the search for a model for a task as a problem of encoding the structure and variability present in a data set \(\mathcal{D}\). Ideally, the structure that should be learned for solving a task should be encoded using the description of the model \(K_{J}\) and the nuisance variability is encoded using \(\mathcal{L}_{\mathcal{D}}\). The tradeoff between the descriptions \(K_{J}\) and \(\mathcal{L}_{\mathcal{D}}\) defines the nature of the fundamental tradeoff between the cost of evaluation of a model and the quality of its predictions for the resource constrained sequence prediction problem defined in Section 3.
## 5 Empirical Compression Boundary
The structure function of a learning task provides a theoretically grounded framework to answer questions about the tradeoff between cost and quality of a model. Unfortunately, quantities based on Kolmogorov Complexity are not computable even in theory and the notion works only in asymptotic regimes. Hence the exact nature of the function is not discernable for all possible model families unless the entire parameter space for each model in the model families can be explored. Exploration of the entire parameter space is intractable for deep neural networks where the number of parameters is very high. We leverage ideas from the model compression literature to develop an empirical procedure to explore an upper bound to this function which we call the _compression boundary_. To simplify the development of an empirical framework, we fix the cost function of interest to be the
number of non-zero parameters of the model.
\[J(\theta)=\sum_{i}|\theta_{i}|\ \ \text{such that}\ \ |x|=\begin{cases}1&x\neq 0\\ 0&x=0\end{cases} \tag{10}\]
This cost function does not consider the full description length of a model. For example, additional cost may be required to encode each of the parameters \(\theta\). For a model that can have at most \(n\) parameters, the exact description can have a complexity given by \(K_{J}(f_{\theta})\leq aJ(\theta)+b\log(n)+c\). This description consists of three components. The first component, \(aJ(\theta)\), is the cost to encode all the parameters. Here, \(a\) is the number of bits to encode each \(\theta\) on average. The second component, \(b\log(n)\), is the cost to encode the _position_ of each \(\theta\), needed for describing the compressed representation. \(b\) is a multiplier that allows for the description of other parameter position related information such as activation functions, layer types, etc. The last description component is a constant \(c\) that is independent of the number of parameters.
For settings of \(\theta\) where \(J(\theta)\gg 1\) (asymptotic regimes), the effect of these additional components can be ignored when comparing different boundaries. This condition seems to be satisfied for the deep learning models with very many parameters with which we are concerned here. We can define an optimization problem to explore the compression boundary using convex optimization procedures. Formally, the boundary is defined by all \(\theta^{*}\) that satisfy
\[\theta^{*}=\arg\min_{\theta}\mathcal{L}_{\mathcal{D}}(\theta)+\beta\sum_{i}| \theta_{i}| \tag{11}\]
Since we use convex optimization procedures with gradient descent to learn the parameters \(\theta\) for a model, it is difficult to back-propagate through the discrete function \(|x|\). This issue can be alleviated by a variety of methods. We have found in our experiments that the exact method used to optimize this function does not have much influence on finding the compression boundary as all methods lead to the same boundary. We choose to solve the optimization problem in Equation 11 by introducing another set of parameters to the model called gates (\(g\)) defined as \(g=\text{sigmoid}(z)\). \(\theta\) is redefined using \(g\) as \(\theta^{\prime}=g\cdot\theta\). The optimization procedure is then reformulated as
\[\theta^{*}=\arg\min_{\theta^{\prime}}\mathcal{L}_{\mathcal{D}}(\theta^{\prime })+\beta\sum_{i}g \tag{12}\]
\(g\) acts as a smooth approximation to the discrete function \(|\theta|\) because \(0<g<1\). Since \(g\) cannot be exactly zero due to the sigmoid function, we introduce a magnitude based threshold, \(g_{\min}\), for each of the gates, and consider \(\theta_{i}\) to be \(0\) if \(g_{i}\leq g_{\min}\). We propose to leverage the Lagrange multiplier \(\beta\) and the magnitude based threshold \(\alpha\) to find the compression boundary. We present the algorithm to explore the tradeoff space in Algorithm 1.
```
Fix range of beta: [\(\beta_{0},\beta_{1}\)]
Fix range of the magnitude-based threshold: [\(g_{\min}^{\prime},g_{\max}^{\prime}\)]
Fix ordered set of \(\beta\) to explore: \(\{\beta_{0}^{\prime},\beta_{1}^{\prime},...,\beta_{i}^{\prime},...\}\) where \(\beta_{0}\leq\beta_{i}^{\prime}<\beta_{i+1}^{\prime}\leq\beta_{1}\)
Fix ordered set of \(g_{\min}\) to explore: \(\{g_{0}^{\prime},g_{1}^{\prime},...,g_{i}^{\prime},...\}\) where \(g_{\min}^{\prime}\leq g_{i}^{\prime}<g_{i+1}^{\prime}\leq g_{\max}^{\prime}\)
foreach \(\beta\in\{\beta_{0}^{\prime},\beta_{1}^{\prime},...,\beta_{i}^{\prime},...\}\) do
    Solve \(\theta=\arg\min_{\theta}\mathcal{L}_{\mathcal{D}}(\theta)+\beta\sum_{i}|\theta_{i}|\)
    foreach \(g_{\min}\in\{g_{0}^{\prime},g_{1}^{\prime},...,g_{i}^{\prime},...\}\) do
        \(\forall i:g_{i}<g_{\min}\): set \(g_{i}\gets 0\)
        Recompute \(\forall i:\theta_{i}:=g_{i}\cdot\theta_{i}\)
        Extract Model \(\mathcal{H}(\beta,\alpha)\)
    end foreach
end foreach
```
**Algorithm 1** Compression Boundary Estimation
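A minimal sketch of the gating reparameterisation of Equation 12 and the extraction step of Algorithm 1, assuming a PyTorch implementation; the module and function names are illustrative and not taken from the paper's code:
```python
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    """Linear layer whose effective weights are gated: theta' = g * theta with g = sigmoid(z)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.theta = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.z = nn.Parameter(torch.zeros(out_features, in_features))  # gate logits
        self.bias = nn.Parameter(torch.zeros(out_features))

    def gates(self):
        return torch.sigmoid(self.z)                                   # g in (0, 1)

    def forward(self, x):
        return x @ (self.gates() * self.theta).t() + self.bias

def penalised_loss(model, nll, beta):
    # L_D(theta') + beta * sum_i g_i, the smooth surrogate of Equation 12.
    gate_sum = sum(m.gates().sum() for m in model.modules() if isinstance(m, GatedLinear))
    return nll + beta * gate_sum

def extract_model(model, g_min):
    # Zero every parameter whose learned gate is at most g_min and report the
    # resulting cost J(theta), i.e. the number of surviving parameters.
    surviving = 0
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, GatedLinear):
                keep = m.gates() > g_min
                m.theta.mul_(keep.float())
                surviving += int(keep.sum())
    return surviving
```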
## 6 The Cache Miss Rate Prediction Problem
In the cache miss rate prediction problem, we wish to predict the miss rate of a cache over time encountered by a program running on a CPU. The sequence prediction model predicts the next miss rate given a sequence of previous miss rates. Most computing systems on which these models are implemented impose space and time constraints on the model if it is to be used in real time. Thus in these models, the procedure that we developed in Section 3 can be used to explore the associated cost-quality tradeoff. It is difficult to build a very general model for cache miss rate prediction as programs running in a computer system can vary significantly in their behavior, which is evident from Figure 3. Here we focus instead on the cache miss rate behavior of individual programs that have different phases [15] during their execution, and we compare the associated tradeoff curves. We chose three different programs to explore, selected based on the number and duration of their phases of execution. For swim-ref-swim, shown in Figure 2(a), the behavior of the cache miss rate is fairly regular throughout the trace, which points to a single long phase. For gcc-ref-cc15, shown in Figure 2(b), there are more distinct phases but each phase is shorter. pmd-small-JikesRVM contrasts with the other two traces because the cache miss rate behavior does not
show as much visual evidence of phase behavior, which could mean that the phases are transient (short). This trace thus depicts the case where a model may have trouble finding a regular pattern across the trace. From the perspective of building a resource constrained model, we expect the cost to encode the dynamic behavior of the above programs to follow the trend: swim-ref-swim < gcc-ref-cc15 < pmd-small-JikesRVM.
## 7 Experimental Setup
**Data Collection:** gcc-ref-cc15 and swim-ref-swim are part of the SPEC CPU 2000 benchmark suite ("ref" size runs) where we collected traces of every memory access made by the programs using valgrind, specifically its Lackey tool. We used the same tool on pmd-small-JikesRVM which is part of the DaCapo Java benchmark suite. We mapped virtual addresses of data accesses to their 64-byte virtual cache line, and applied the least recently used (LRU) stack algorithm to obtain miss rates for various cache sizes of perfect LRU caches. These are calculated over windows of 100,000 instructions.
Figure 4: Compression boundary and parameter phases for gcc-ref-cc15 with LSTM models. Each blue point corresponds to a model \(\mathcal{H}(\beta,\alpha)\) found by the Empirical Compression Boundary algorithm.
Figure 3: \(\log_{10}\) of cache miss rates over time for programs with three different behaviors depending on the length of phases they exhibit.
Figure 5: Compression boundaries and parameter phases of \(\beta\) sufficient parameters of LSTM for the three traces. Models with number of parameters from \(10\) to \(2\times 10^{6}\) are explored.
**Data Preprocessing:** We transform the cache miss rates using \(\log_{10}\) to show better the interesting miss rates close to \(0\). To avoid \(0\) itself, we clip the lower bound of the value of miss rates to have at least some small value \(0<\epsilon<1/100000\). For learning, the sequences are divided into contiguous chunks, with some chunks used for training and others for testing. Training and test chunks are drawn from all parts of the traces so that all different behaviors within a program are captured in both training and testing.
**Probabilistic Modeling:**
We take the time evolution of the preprocessed cache miss rates of a program as the data set \(\mathcal{D}=\{x_{1},x_{2}...x_{N}\}\) to learn. Since each \(x_{i}\) is a real number and not a discrete quantity, it is not ideal for modeling as a sequence. We bin the \(x_{i}\) into 100 equally spaced bins so that we can discretize the space making it suitable to perform sequence modeling [Salman and Kecman (2012)]. Thus, each \(x_{i}\), which was a real number, is converted to \(x_{i}^{\prime}\) which is a discrete integer value in \([0,99]\). The data set for learning is changed to be \(\mathcal{D}=\{x_{1}^{\prime},x_{2}^{\prime}...x_{N}^{\prime}\}\). The objective of the model is now to estimate the distribution
\[\mathbb{P}(x_{t}^{\prime}|x_{1}^{\prime},x_{2}^{\prime}...x_{t-1}^{\prime}) \tag{13}\]
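A small sketch of this preprocessing and binning pipeline, assuming NumPy; the value of \(\epsilon\) and the 100 bins follow the text, while the chunk length and the alternating train/test assignment are illustrative:
```python
import numpy as np

def preprocess(miss_rates, eps=1e-6, n_bins=100, chunk_len=1000):
    # log10 transform with the lower bound clipped to a small epsilon.
    x = np.log10(np.clip(np.asarray(miss_rates, dtype=float), eps, None))
    # Discretise into 100 equally spaced bins over the observed range.
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    binned = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    # Contiguous chunks drawn from all parts of the trace, split into train/test.
    chunks = [binned[i:i + chunk_len] for i in range(0, len(binned), chunk_len)]
    train = [c for i, c in enumerate(chunks) if i % 2 == 0]
    test = [c for i, c in enumerate(chunks) if i % 2 == 1]
    return train, test
```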
Now we can apply the techniques we devised in Section 5 to explore the cost-quality tradeoff curves of the three benchmark traces. We used LSTMs (Hochreiter and Schmidhuber, 1997) as the model family under consideration. LSTMs are a variant of RNNs (Rumelhart et al., 1986) and have been successful in sequence modeling due to their ability to capture short and long term dependencies in sequential data [Sutskever et al. (2014); Wu et al. (2016)]. _Unrolling_ LSTMs beyond a certain time step in the history of a sequence leads to heavy computation and _vanishing gradient_ issues [Hochreiter (1998); Bengio et al. (1994)], so we modified the model to account for this by unrolling only up to a finite number of time steps, \(h\), behind the current history of the sequence. Our model consists of an LSTM layer followed by 4 feed forward layers [Linzen et al. (2016)]. The hidden state of the LSTM is forwarded on to the next prediction of the model. The hidden state, in theory, contains information about the values prior to the current time step.
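A compact sketch of the described architecture (one LSTM layer followed by four feed-forward layers, unrolled over the last \(h\) bins), again assuming PyTorch; the layer widths are illustrative since the paper explores many widths:
```python
import torch
import torch.nn as nn

class MissRateLSTM(nn.Module):
    def __init__(self, n_bins=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_bins, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Sequential(                      # four feed-forward layers
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bins),
        )

    def forward(self, x, state=None):
        # x: (batch, h) integer bin indices of the last h time steps.
        out, state = self.lstm(self.embed(x), state)
        logits = self.head(out[:, -1])                  # distribution over the next bin
        return logits, state                            # NLL = cross-entropy on logits
```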
We run the Empirical Compression Boundary algorithm on the data sets \(\mathcal{D}\) generated by the three programs. The models to be learned are initialized with different seeds and layer widths to get better
estimates of the boundary. We plot the loss \(\mathcal{L}_{\mathcal{D}}\) against the cost \(J(\theta)\) for all \(\mathcal{H}\) obtained by running the compression boundary algorithm.
Figure 8: Local likelihoods and predictions for models of gcc-ref-cc15.
Figure 6: Tradeoff between the average negative log likelihood (\(\hat{\mathcal{L}}_{\mathcal{D}}\)) and the cost (\(J(\theta)\)) for all three traces. It is observed that swim-ref-swim has the lowest cost at the point at which the parameter phase change happens, followed by gcc-ref-cc15 and pmd-small-JikesRVM in that order.
Figure 7: Local likelihoods and predictions for models of swim-ref-swim. Figure 6(a) shows the local likelihoods across time for models with different cost constraints \(J(\theta)\). Figure 6(b) shows the predictions of models for different costs. The blue line represents the actual value of the bin at every point in the trace and red lines correspond to the predictions. The improvement of predictions across the trace can be observed as the cost constraints are relaxed.
## 8 Results
We tackle the first question: What is the nature of the tradeoff that exists between the cost to evaluate the model and the quality of its predictions? We try to answer this question by looking at the general behavior of the compression boundary for each of the traces that we selected. The range of the cost \(J(\theta)\) is fixed between \(10\) and \(2\times 10^{6}\) since the tradeoff region of the three traces occurs in this range. Although we will focus the following discussion on the trace generated by gcc-ref-cc15, the general behavior we analyze is valid for all the other traces. Figure 3(a) shows the tradeoff between the loss \(\mathcal{L}_{\mathcal{D}}\) and cost \(J(\theta)\) obtained for gcc-ref-cc15. The models along the compression boundary can be divided into three different phases based on the slope at each point. To clarify the difference between the phases discussed here and the phases in the behavior of cache miss rate traces mentioned before, we will use _parameter phase_ to mean the phase of the parameters of the model and _program phase_ to mean the phase-like behavior of the program. Three _parameter phases_ can be observed on the log-log scale:
* _Phase 1_: The region of model parameters where we have to invest significantly in the cost of the model to obtain small factor improvements in the quality of its predictions.
* _Phase 2_: The region of model parameters where the tradeoff between \(J(\theta)\) and \(\mathcal{L}_{\mathcal{D}}(\theta)\) is almost linear on a log-log scale. The linear trend also represents a power law in the cost-quality tradeoff.
Figure 9: Local likelihoods and predictions for models of pmd-small-JikesRVM.
* _Phase 3_: The region of model parameters where the tradeoff has similar behavior to Phase 1. Models in this phase require significant investment in \(J(\theta)\) to obtain small improvements in \(\mathcal{L}_{\mathcal{D}}\).
We conjecture that models in Phase 1 encode a shared structure that most points in \(\mathcal{D}\) conform to and is in some way _easy_ for the family of models to describe. Models in Phase 2 perform minute additions to this main structure to obtain better quality by encoding more data points. By Phase 3, all data points in \(\mathcal{D}\) that conform to the shared structure are captured by the model. The rest of the data will be nuisance variability that a model has to memorize, leading to the expensive tradeoff with respect to the cost \(J(\theta)\). The region near the boundary of Phase 2 and 3 is the region where the tradeoff between cost and quality becomes optimal. This _parameter phase_ boundary becomes interesting in applications where resources are not constrained and the best possible model for a task is of interest. In other words, this is the most likely region where the minimal sufficient statistic for a model family with respect to \(\mathcal{D}\) can be found. Figure 5 shows how this three phase behavior is present in all the three traces we consider.
Now that we have established the general behavior of the parameter phases of the three cache miss rate traces, we will look at how we can compare the compression boundary curves of the traces. Since the value of \(\mathcal{L}_{\mathcal{D}}\) is the negative of the log likelihood of the models on the whole trace, this value for each trace cannot be compared directly. We instead normalize \(\mathcal{L}_{\mathcal{D}}\) by the length of the trace to obtain a comparable metric \(\hat{\mathcal{L}}_{\mathcal{D}}\). Formally, the quantity we are interested in comparing is defined as \(\hat{\mathcal{L}}_{\mathcal{D}}=\frac{1}{N}\mathcal{L}_{\mathcal{D}}\) where \(N\) is the number of points in the trace. The comparison between \(\hat{\mathcal{L}}_{\mathcal{D}}\) of the three traces is given in Figure 5. The figure shows that the cost at which the _parameter phase_ changes from 2 to 3 is in increasing order of the complexity of the _program phase_ behavior of each of the programs: swim-ref-swim < gcc-ref-cc15 < pmd-small-JikesRVM. This interesting relation between complexity and resource requirements will be used to identify regions of program behavior that are complex for a model, which can be used to answer the second scientific question.
That second question is: What parts of a task require more complex models and how can we quantify and identify them? To answer this question, we take models along the compression boundary that we found for each trace, in the increasing order of cost. The likelihood achieved by each of the models at each point in the trace is taken to obtain a score of quality at that cost constraint. Formally, we
are interested in the value of \(\mathcal{L}_{\mathcal{D}}^{t}=\log(f_{\theta}(x_{t}|x_{t-1},x_{t-2},...,x_{1}))\) for each time index \(t\) of the trace. Since we have traces that have very large number of points, to visualize the likelihoods, we average these likelihoods in windows to obtain a local average likelihood. This local average across time is plotted as a heat map with the x-axis as the time index, the y-axis as the cost of the model, and the colors indicating the magnitude of \(\mathcal{L}_{\mathcal{D}}^{t}\). The predictions of models at different costs are visualized alongside one another to obtain some idea about the type of predictions performed by models under different cost constraints.
We begin our discussion with swim-ref-swim as it has the least complex behavior among the three traces. The lower plot of Figure 6(a) shows the heat map of local likelihoods and Figure 6(b) shows the predictions of the corresponding models. It can be observed that in low cost regimes (\(10\) to \(2\times 10^{5}\)), models are able to learn only some parts of the trace, and these parts occur periodically within the trace. The predictions that correspond to this regime seem to be constant predictions corresponding to the mode of the bins across the trace. In the next regime (\(2\times 10^{5}\) to \(2.5\times 10^{5}\)), the part of the trace that is less complex to model also seems to be the low amplitude oscillations underlying the dynamic behavior of the trace. As we consider models with even higher costs (\(2.5\times 10^{5}\) to \(3\times 10^{5}\)), the corresponding predictions capture the periodic nature of the trace but fail to capture some of the variabilities. These variabilities are captured as the cost bounds are increased, and the predictions of high cost models show how it is possible to learn the general behavior of the whole trace except for the rare peak that occurs.
We now consider gcc-ref-cc15 which, unlike swim-ref-swim, has multiple _program phases_ and slightly more complex behavior. The lower plot of Figure 7(a) shows the heat map of local average likelihoods and Figure 7(b) shows the predictions of models under different resource constraints. Unlike the swim-ref-swim trace, we observe that the improvement in quality of models obtained from relaxing cost constraints is different across the trace. Although this is the case, quality improvements are not random, and interestingly they seem to track some of the _program phases_ that we observe within the trace. Consider that gcc-ref-cc15 is divided into four arbitrary _program phases_ as in Figure 7(a). It can be observed that the quality given by the local likelihood average follows the trend: Phase 4 > Phase 2 > Phase 3 > Phase 1 for most cost levels. This shows how some _program phases_ may have
more complex dynamical behavior than others. Thus it may be important to keep in mind, while modeling, the _program phases_ in which a resource constrained model will be effective.
Now we turn our attention to the more difficult pmd-small-JikesRVM trace, which does not exhibit long term periodic or phase-like behavior. The local likelihood plot in Figure 9a shows how this complex behavior is reflected in the cost-quality tradeoff. There are no distinct _program phases_ that can be observed in the trace or the local likelihood plots. The predictions seem to improve only slowly as cost bounds are relaxed, but due to highly varying structure the cost required for a similar rate of improvement is much higher compared to the other two traces, even though the number of points in the trace is close to half that of the other two.
## Conclusion and Future Work
We introduced and elaborated a theoretical notion of resource constrained optimization problems and developed an empirical procedure to explore them. We considered the tradeoff between quality and cost for specific resource constrained prediction models from the LSTM model family for the cache miss rate prediction problem. We found that the tradeoff has a fairly regular trend for the traces we examined. We note how the point at which the process of learning a data set changes from learning the structure of the task to memorization of the nuisance variability associated with it corresponds well with traditional notions of the complexity of a data set. In our evaluation, we showed how the procedure we developed can also be used to identify and quantify the complexity of dynamic behavior within a data set, in addition to comparisons among data sets. Although we explored the scientific questions pertaining to resource constrained tasks, the practical applications of the above methods remain an open problem. We anticipate that the procedure we developed, especially the identification and quantification of the complexity of individual data points, can be applied to problems in curriculum learning and self-paced learning, where complexity notions of individual data points within a data set are of interest. Resource bounded Kolmogorov Complexity gives a new perspective on developing models for resource constrained problems based on \(\beta\) sufficient statistics, which opens up another avenue to create practical models for these problems.
|
2305.19335
|
Gröbner geometry for regular nilpotent Hessenberg Schubert cells
|
A regular nilpotent Hessenberg Schubert cell is the intersection of a regular
nilpotent Hessenberg variety with a Schubert cell. In this paper, we describe a
set of minimal generators of the defining ideal of a regular nilpotent
Hessenberg Schubert cell in the type $A$ setting. We show that these minimal
generators are a Gr\"obner basis for an appropriate lexicographic monomial
order. As a consequence, we obtain a new computational-algebraic proof, in type
$A$, of Tymoczko's result that regular nilpotent Hessenberg varieties are paved
by affine spaces. In addition, we prove that these defining ideals are complete
intersections, are geometrically vertex decomposable, and compute their Hilbert
series. We also produce a Frobenius splitting of each Schubert cell that
compatibly splits all of the regular nilpotent Hessenberg Schubert cells
contained in it. This work builds on, and extends, work of the second and third
author on defining ideals of intersections of regular nilpotent Hessenberg
varieties with the (open) Schubert cell associated to the Bruhat-longest
permutation.
|
Mike Cummings, Sergio Da Silva, Megumi Harada, Jenna Rajchgot
|
2023-05-30T18:02:12Z
|
http://arxiv.org/abs/2305.19335v2
|
# Gröbner geometry for regular nilpotent Hessenberg Schubert cells
###### Abstract.
A regular nilpotent Hessenberg Schubert cell is the intersection of a regular nilpotent Hessenberg variety with a Schubert cell. In this paper, we describe a set of minimal generators of the defining ideal of a regular nilpotent Hessenberg Schubert cell in the type \(A\) setting. We show that these minimal generators are a Grobner basis for an appropriate lexicographic monomial order. As a consequence, we obtain a new computational-algebraic proof, in type \(A\), of Tymoczko's result that regular nilpotent Hessenberg varieties are paved by affine spaces. In addition, we prove that these defining ideals are complete intersections, are geometrically vertex decomposable, and compute their Hilbert series. We also produce a Frobenius splitting of each Schubert cell that compatibly splits all of the regular nilpotent Hessenberg Schubert cells contained in it. This work builds on, and extends, work of the second and third author on defining ideals of intersections of regular nilpotent Hessenberg varieties with the (open) Schubert cell associated to the Bruhat-longest permutation.
## 1. Introduction
Hessenberg varieties are subvarieties of the full flag variety \(\operatorname{Flags}(\mathbb{C}^{n})\).1 They were first introduced to algebraic geometers by De Mari, Procesi, and Shayman [7], and their study lies in the intersection of algebraic geometry, representation theory, and combinatorics, among other research areas. For example: these varieties arise in the study of quantum cohomology of flag varieties, they are generalizations of the Springer fibers which arise in geometric representation theory, total spaces of families of suitable Hessenberg varieties support interesting integrable systems [1], and the study of some of their cohomology rings suggests that there is a rich Hessenberg analogue to the theory of Schubert calculus on \(\operatorname{Flags}(\mathbb{C}^{n})\).
Footnote 1: In this manuscript, we restrict to the case of Lie type \(A\), i.e., when the flag variety corresponds to the group \(GL_{n}(\mathbb{C})\) (or \(SL_{n}(\mathbb{C})\)). Much can also be said about other Lie types.
The motivation of the present paper stems mainly from Schubert geometry. Moreover, this manuscript can and should be viewed as a natural companion paper to the recent work [6] of the second and third authors. Let us briefly recall the setting. The main objects of discussion are the **local patches of Hessenberg varieties** -- i.e., intersections of Hessenberg varieties with certain choices of affine Zariski-open subsets of \(\operatorname{Flags}(\mathbb{C}^{n})\). (Throughout this manuscript, we restrict to the so-called regular nilpotent case - to be defined precisely below in Section 2.2.) In particular, we may study the **local Hessenberg patch ideal**, denoted \(I_{w_{0},h}\), for the special case of a **regular nilpotent Hessenberg variety** (to be defined in Section 2.2), intersected with the affine coordinate chart \(w_{0}B^{-}B\)
of \(\operatorname{Flags}(\mathbb{C}^{n})\cong GL_{n}(\mathbb{C})/B\) centered at the permutation flag corresponding to the maximal element \(w_{0}\) in \(S_{n}\); this is the case studied in [6]. (Here \(B\) is the usual Borel subgroup of upper-triangular invertible matrices in \(GL_{n}(\mathbb{C})\) and \(B^{-}\) is the opposite Borel of lower-triangular invertible matrices.) This affine coordinate chart is also a **Schubert cell**, \(B(\operatorname{Id})B/B\) of the identity permutation \(\operatorname{Id}\in S_{n}\).
The main contribution of this manuscript is that we prove analogues of the results of [6] for _all_ possible **regular nilpotent Hessenberg Schubert cells** - that is, for all possible intersections of regular nilpotent Hessenberg varieties, denoted \(\operatorname{Hess}(\mathsf{N},h)\), with Schubert cells \(BwB/B\), for any \(w\in S_{n}\). It turns out that the perspectives used in [6] can be treated in a similar manner to the \(w_{0}\)-chart, as long as we restrict our attention to the Hessenberg Schubert cells (instead of looking at the entire local Hessenberg patch). A rough statement of our main results, which is stated more precisely as Corollary 4.5 and Theorem 4.6, is as follows:
**Theorem**.: _Let \(n\) be a positive integer and let \(h:[n]\to[n]\) be an indecomposable Hessenberg function. Let \(w\in S_{n}\) and suppose that the Hessenberg Schubert cell \(\operatorname{Hess}(\mathsf{N},h)\cap BwB/B\) is non-empty. Let \(J_{w,h}\subseteq\mathbb{C}[BwB/B]\) denote the (radical) defining ideal of \(\operatorname{Hess}(\mathsf{N},h)\cap BwB/B\). Then there exists a lexicographic monomial order \(<\) such that \(\operatorname{in}_{<}(J_{w,h})\) is generated by \(|\lambda_{h}|\)-many indeterminates, where \(\lambda_{h}\) is a partition associated to the Hessenberg function \(h\). Moreover, there is a natural list of generators of \(J_{w,h}\) which form a Grobner basis for \(J_{w,h}\) with respect to \(<\)._
The above results easily imply that each non-empty Hessenberg Schubert cell is isomorphic to an affine space \(\mathbb{A}^{r_{w}-|\lambda_{h}|}\) where \(r_{w}\) is the dimension of the Schubert cell \(BwB/B\), \(w\in S_{n}\). Using this, in Corollary 5.7, we recover the known result that each regular nilpotent Hessenberg variety is paved by affine spaces. This was initially proved by Tymoczko in [15] and generalized to the regular (not necessarily nilpotent) case by Precup in [13]. Tymoczko and Precup's proofs use the language of Lie theory (and apply beyond type \(A\)) and are delicate combinatorial arguments. Thus, one contribution of this paper is a new, quick proof of this paving-by-affines result in the type \(A\) regular nilpotent case, using purely computational-algebraic methods.
As in [6], our main theorem has a number of immediate consequences, including geometric vertex decomposability, Frobenius splitting, and Hilbert series. These applications are discussed in detail, and more precisely, in Section 5.
**Corollary**.: _Let \(n\) be a positive integer and let \(h:[n]\to[n]\) be an indecomposable Hessenberg function. Let \(w\in S_{n}\) and suppose that the Hessenberg Schubert cell \(\operatorname{Hess}(\mathsf{N},h)\cap BwB/B\) is non-empty. Let \(J_{w,h}\subseteq\mathbb{C}[BwB/B]\) denote the (radical) defining ideal of \(\operatorname{Hess}(\mathsf{N},h)\cap BwB/B\). Then,_
1. _each_ \(J_{w,h}\subseteq\mathbb{C}[BwB/B]\) _is a complete intersection ideal;_
2. _the Hilbert series of_ \(\mathbb{C}[BwB/B]/J_{w,h}\)_, with respect to_ \(\mathbb{Z}\)_-grading induced by the natural circle action on the Hessenberg Schubert cell, is given by_ \[H_{R/J_{w,h}}(t)=\frac{\prod_{k>h(\ell)}\left(1-t^{v_{w}(k)-v_{ w}(\ell)-1}\right)}{\prod_{\begin{subarray}{c}i<w(j)\\ j\leq w^{-1}(i)\end{subarray}}\left(1-t^{w(j)-i}\right)};\]
3. _each ideal_ \(J_{w,h}\) _is geometrically vertex decomposable;_
4. _upon replacing_ \(\mathbb{C}\) _by an algebraically closed field_ \(\mathbb{K}\) _of positive characteristic, there is a natural Frobenius splitting of_ \(\mathbb{K}[BwB/B]\) _which compatibly splits all ideals_ \(J_{w,h}\subseteq\mathbb{K}[BwB/B]\)_._
We briefly describe the contents of the manuscript. In Section 2 we recall the key definitions and background. In Section 3 we introduce coordinates on the intersections of Schubert cells with regular nilpotent Hessenberg varieties, and define their local defining ideals via explicit relations. Then in Section 4 we prove our main results, and in Section 5 we give a detailed discussion of the many immediate applications of our main results, as itemized above.
### Remark on the field
For simplicity, as well as consistency with some of the literature that we cite, we work over algebraically closed fields. We note, however, that all of our computational-algebraic arguments work more generally. For example, our Grobner basis arguments work over arbitrary base fields and our Frobenius splitting results hold over perfect fields of positive characteristic.
### Acknowledgements
Cummings was supported in part by an NSERC CGS-M and The Milos Novotny Fellowship. Da Silva was supported in part by an NSERC Postdoctoral Fellowship of Canada. Harada was supported in part by NSERC Discovery Grant 2019-06567 and a Canada Research Chair Tier 2 award. Rajchgot was supported in part by NSERC Discovery Grant 2017-05732.
## 2. Background
In this section we recall some background on flag varieties, regular nilpotent Hessenberg varieties, Hessenberg patch ideals, and a certain circle action on regular nilpotent Hessenberg varieties. We follow [6] as our main reference for notational conventions. For the purposes of this manuscript we restrict to Lie groups of type \(A\), although Hessenberg varieties can be defined in arbitrary Lie type.
### Flag varieties
The **(full) flag variety**\(\operatorname{Flags}(\mathbb{C}^{n})\) is the set of sequences of nested subspaces of \(\mathbb{C}^{n}\)
\[\operatorname{Flags}(\mathbb{C}^{n})\coloneqq\{V_{\bullet}=(0=V_{0}\subsetneq V _{1}\subsetneq V_{2}\subsetneq\cdots\subsetneq V_{n}=\mathbb{C}^{n})\mid \dim_{\mathbb{C}}(V_{i})=i\}.\]
Let \(B\) denote the Borel subgroup of \(GL_{n}(\mathbb{C})\) of upper-triangular invertible matrices and let \(U^{-}\) denote the subgroup of \(GL_{n}(\mathbb{C})\) of lower-triangular matrices with \(1\)'s along the
diagonal. By representing \(V_{\bullet}\in\operatorname{Flags}(\mathbb{C}^{n})\) as an element of \(GL_{n}(\mathbb{C})\) by the matrix whose first \(i\) columns span \(V_{i}\), we can identify \(\operatorname{Flags}(\mathbb{C}^{n})\) with \(GL_{n}(\mathbb{C})/B\). Then \(U^{-}B\), the set of cosets \(uB\) for \(u\in U^{-}\), can be viewed as a coordinate chart on \(GL_{n}(\mathbb{C})/B\cong\operatorname{Flags}(\mathbb{C}^{n})\), which we show precisely below. In fact, \(U^{-}B\) is an open dense subset of \(\operatorname{Flags}(\mathbb{C}^{n})\).
Let \(w\in S_{n}\) be a permutation of the symmetric group on \(n\) letters. By slight abuse of notation, we will often take \(w\) to also mean its corresponding permutation matrix. Here we take the convention that we record the permutation "along columns" to obtain the corresponding permutation matrix; that is, we write
\[w=\begin{bmatrix}|&|&&|\\ e_{w(1)}&e_{w(2)}&\cdots&e_{w(n)}\\ |&|&&|\end{bmatrix}, \tag{2.1}\]
where \(e_{j}\) is the \(j^{\text{th}}\) standard basis vector, written as a column vector, with a \(1\) in the \(j^{\text{th}}\) entry and \(0\)'s elsewhere.
Given the above notation, we define an **open cell** (a coordinate chart) in \(GL_{n}(\mathbb{C})/B\) containing the permutation \(w\in S_{n}\) by the translation
\[\mathcal{N}_{w}\coloneqq wU^{-}B\subseteq GL_{n}(\mathbb{C})/B.\]
To describe \(\mathcal{N}_{w}\) more concretely, let \(M\) denote a generic element of \(U^{-}\),
\[M\coloneqq\begin{bmatrix}1&&&\\ \star&1&&\\ \vdots&\vdots&\ddots&\\ \star&\star&\cdots&1&\\ \star&\star&\cdots&\star&1\end{bmatrix}, \tag{2.2}\]
where the \(\star\)'s are complex numbers. Since the entries below the diagonal are free of constraints, it is clear that \(U^{-}\), the set of all such \(M\)'s, is isomorphic to affine space \(\mathbb{A}^{n(n-1)/2}\). For any \(w\in S_{n}\), the map \(M\mapsto wMB\in GL_{n}(\mathbb{C})/B\) is an embedding \(U^{-}\cong\mathbb{A}^{n(n-1)/2}\xrightarrow{\cong}\mathcal{N}_{w}\), showing that \(\mathcal{N}_{w}\) is isomorphic to affine space as well. Henceforth, we can identify a point in \(\mathcal{N}_{w}\) with a translation by \(w\) of a matrix of the form (2.2). In particular, a point in \(\mathcal{N}_{w}\) is uniquely identified by a matrix \(wM\) whose \((i,j)\)-th entries are given by
\[[wM]_{i,j}=\begin{cases}1&\text{if }i=w(j),\\ 0&\text{if }j>w^{-1}(i),\\ x_{i,j}&\text{otherwise, for some }x_{i,j}\in\mathbb{C}.\end{cases} \tag{2.3}\]
_Example 2.4_.: Denote by \(w_{0}\in S_{n}\) the longest permutation in Bruhat order. If \(n=4\), then \(w_{0}=4321\) in one-line notation and
\[w_{0}M=\begin{bmatrix}x_{1,1}&x_{1,2}&x_{1,3}&1\\ x_{2,1}&x_{2,2}&1&0\\ x_{3,1}&1&0&0\\ 1&0&0&0\end{bmatrix}.\]
It follows from (2.3) that there is an isomorphism between the coordinate ring of \(\mathcal{N}_{w}\) and the polynomial ring in \(n(n-1)/2\) indeterminates, the \(x_{i,j}\) that satisfy \(j<w^{-1}(i)\). We denote this polynomial ring by \(\mathbb{C}[\mathbf{x}_{w}]\) where \(\mathbf{x}_{w}\) denotes the finite set of indeterminates labelled \(x_{i,j}\) for \(j<w^{-1}(i)\).
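The coordinate description (2.3) is straightforward to implement in a computer algebra system. The following minimal Python/SymPy sketch is our own illustration (the helper name `build_wM` and the symbol names `x_i_j` are our choices); it builds the matrix \(wM\) from a permutation in one-line notation and confirms that there are exactly \(n(n-1)/2\) free entries.

```python
import sympy as sp

def build_wM(w):
    """Return (wM, xs) following (2.3); w is a one-line permutation, e.g. (4, 3, 2, 1)."""
    n = len(w)
    w_inv = {w[j]: j + 1 for j in range(n)}           # w_inv[i] = w^{-1}(i), 1-indexed
    xs = []
    def entry(i, j):                                   # 0-indexed row/column
        I, J = i + 1, j + 1
        if I == w[J - 1]:
            return 1                                   # the pivot 1 in position (w(j), j)
        if J > w_inv[I]:
            return 0                                   # entries to the right of the 1's
        x = sp.Symbol(f'x_{I}_{J}')                    # free coordinate, since J < w^{-1}(I)
        xs.append(x)
        return x
    return sp.Matrix(n, n, entry), xs

w0 = (4, 3, 2, 1)                                      # longest element of S_4, as in Example 2.4
wM, xs = build_wM(w0)
print(wM)                                              # reproduces the matrix of Example 2.4
assert len(xs) == len(w0) * (len(w0) - 1) // 2         # n(n-1)/2 free entries
```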
### Hessenberg varieties and Hessenberg patch ideals
We next review the definition of a Hessenberg variety. Let \([n]\coloneqq\{1,2,\ldots,n\}\). A function \(h:[n]\to[n]\) is a **Hessenberg function** if \(h(i)\geq i\) for all \(i\in[n]\) and \(h(i+1)\geq h(i)\) for all \(i\in[n-1]\). An **indecomposable** Hessenberg function additionally satisfies \(h(i)\geq i+1\) for all \(i\in[n-1]\). To a Hessenberg function \(h\), we may associate the **Hessenberg subspace** of \(\mathfrak{gl}_{n}(\mathbb{C})\) defined by
\[H(h)\coloneqq\{(a_{i,j})_{i,j\in[n]}\,\mid\,a_{i,j}=0\,\text{if}\,i>h(j)\}.\]
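These combinatorial conditions are easy to encode. The short Python sketch below is our own illustration (the helper names are hypothetical, chosen only for this example); it tests whether a function \(h\), given in one-line form, is an (indecomposable) Hessenberg function and displays the zero pattern of the Hessenberg subspace \(H(h)\).

```python
import sympy as sp

def is_hessenberg(h):
    """h in one-line form, e.g. (2, 3, 4, 4) means h(1)=2, h(2)=3, h(3)=4, h(4)=4."""
    n = len(h)
    return all(h[i] >= i + 1 for i in range(n)) and all(h[i + 1] >= h[i] for i in range(n - 1))

def is_indecomposable(h):
    return is_hessenberg(h) and all(h[i] >= i + 2 for i in range(len(h) - 1))

def hessenberg_space_pattern(h):
    """0/1 matrix whose support is H(h): the (i, j) entry is 0 exactly when i > h(j)."""
    n = len(h)
    return sp.Matrix(n, n, lambda i, j: 0 if i + 1 > h[j] else 1)

h = (2, 3, 4, 4)
assert is_hessenberg(h) and is_indecomposable(h)
print(hessenberg_space_pattern(h))
```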
Given an indecomposable Hessenberg function \(h\) and a linear operator \(A:\mathbb{C}^{n}\to\mathbb{C}^{n}\), we define the **Hessenberg variety associated to \(A\) and \(h\)** to be the following subvariety of \(\operatorname{Flags}(\mathbb{C}^{n})\),
\[\operatorname{Hess}(A,h)\coloneqq\{(V_{0}\subsetneq\cdots\subsetneq V_{n})\in \operatorname{Flags}(\mathbb{C}^{n})\mid AV_{i}\subseteq V_{h(i)}\text{ for all }i\in[n]\}.\]
Equivalently, viewing a flag as a coset \(MB\) in \(GL_{n}(\mathbb{C})/B\) and representing it by a matrix \(M\) in the coset \(MB\), and viewing \(A\) as an element in \(\mathfrak{gl}_{n}(\mathbb{C})\), we may also describe \(\operatorname{Hess}(A,h)\) as
\[\operatorname{Hess}(A,h)\coloneqq\{MB\in GL_{n}(\mathbb{C})/B\,\mid\,M^{-1}AM \in H(h)\}. \tag{2.5}\]
In this manuscript we focus on the case that \(A\) is a regular nilpotent operator. Specifically, we restrict attention to the Hessenberg varieties \(\operatorname{Hess}(A,h)\) where \(A=\mathsf{N}\) is defined by
\[\mathsf{N}\coloneqq\begin{bmatrix}0&1&0&\cdots&0\\ 0&0&1&\cdots&0\\ \vdots&&\ddots&\vdots\\ 0&0&0&\cdots&1\\ 0&0&0&\cdots&0\end{bmatrix}. \tag{2.6}\]
The remainder of this section will be devoted to reviewing the defining equations for the affine variety \(\mathcal{N}_{w}\cap\operatorname{Hess}(\mathsf{N},h)\) from [6, Section 2.2]. In the next section, we will describe equations which define the intersection of \(\operatorname{Hess}(\mathsf{N},h)\) with a _Schubert cell_.
In the definition below, as well as in the discussion that follows, we view \(M\) as a matrix with entries \(0,1\), or an indeterminate in the set \(\mathbf{x}_{w}=\{x_{i,j}\}\) as in Section 2.1, and view each matrix entry of \((wM)^{-1}\mathsf{N}(wM)\) as an element in \(\mathbb{C}[\mathbf{x}_{w}]\).
**Definition 2.7** ([2, Definition 3.3]).: Let \(h\) be a Hessenberg function. Let \(w\in S_{n}\) and \(k,\ell\in[n]\). Following the notation of (2.3) and (2.6), define polynomials in \(\mathbb{C}[\mathbf{x}_{w}]\) by
\[f_{k,\ell}^{w}\coloneqq[(wM)^{-1}\mathsf{N}(wM)]_{k,\ell}.\]
For an indecomposable Hessenberg function \(h\), we define the **Hessenberg patch ideal corresponding to \(w\) and \(h\)** to be the ideal
\[I_{w,h}\coloneqq\langle f_{k,\ell}^{w}\mid k>h(\ell)\rangle\subseteq\mathbb{ C}[\mathbf{x}_{w}].\]
From (2.5) it follows that the \(f_{k,\ell}^{w}\) must vanish on \(\operatorname{Hess}(\mathsf{N},h)\) and they (set-theoretically) cut out \(\operatorname{Hess}(\mathsf{N},h)\) in \(\mathcal{N}_{w}\). In fact, the following is known.
**Proposition 2.8** ([2, Proposition 3.7]).: _The Hessenberg patch ideals \(I_{w,h}\) defined above are radical and are the defining ideals of the affine variety \(\mathcal{N}_{w}\cap\operatorname{Hess}(\mathsf{N},h)\)._
_Example 2.9_.: Consider \(w_{0}\in S_{4}\) as in Example 2.4. Then,
\[(w_{0}M)^{-1}\mathsf{N}(w_{0}M)=\begin{bmatrix}0&0&0&0\\ 1&0&0&0\\ -x_{2,2}+x_{3,1}&1&0&0\\ -x_{1,2}+x_{1,3}(x_{2,2}-x_{3,1})+x_{2,1}&-x_{1,3}+x_{2,2}&1&0\end{bmatrix}.\]
For example, for the indecomposable Hessenberg function \(h=(2,3,4,4)\), the corresponding Hessenberg patch ideal corresponding to \(w_{0}\) and \(h\) is given by
\[I_{w_{0},h}=\langle-x_{2,2}+x_{3,1},\,-x_{1,2}+x_{1,3}(x_{2,2}-x_{3,1})+x_{2,1 },\,-x_{1,3}+x_{2,2}\rangle.\]
\(\diamond\)
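The matrix displayed in Example 2.9 and the corresponding patch ideal generators can be reproduced mechanically. The SymPy sketch below is our own illustration; it forms \((w_{0}M)^{-1}\mathsf{N}(w_{0}M)\) for \(n=4\) and extracts the generators \(f^{w_{0}}_{k,\ell}\) with \(k>h(\ell)\) for \(h=(2,3,4,4)\).

```python
import sympy as sp

n = 4
# w_0 M as in Example 2.4: antidiagonal 1's, zeros above the antidiagonal,
# and free entries x_{i,j} for i + j <= n
x = {(i, j): sp.Symbol(f'x_{i}_{j}')
     for i in range(1, n + 1) for j in range(1, n + 1) if i + j <= n}
w0M = sp.Matrix(n, n, lambda i, j: 1 if i + j == n - 1
                else (x[(i + 1, j + 1)] if i + j < n - 1 else 0))

# the regular nilpotent matrix N of (2.6)
N = sp.Matrix(n, n, lambda i, j: 1 if j == i + 1 else 0)

F = (w0M.inv() * N * w0M).applyfunc(sp.expand)   # matrix of the polynomials f^{w_0}_{k,l}
h = (2, 3, 4, 4)
gens = [F[k - 1, l - 1] for l in range(1, n + 1) for k in range(1, n + 1) if k > h[l - 1]]
print(gens)   # the three generators of I_{w_0,h} appearing in Example 2.9
```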
In work with Abe, DeDieu, and Galetto, the third author showed that there exists an explicit inductive formula for the polynomials \(f_{k,\ell}^{w_{0}}\)[2, Equation (3.6)]. Building on this work, the second and third authors found an explicit formula for the initial term for the polynomials \(f_{k,\ell}^{w_{0}}\) and recursive equations relating the \(f_{k,\ell}^{w_{0}}\) for varying \(k\) and \(\ell\)[6]. To proceed further, we require a monomial order, which in [6] is defined as follows.
**Definition 2.10** ([6, Definition 4.11]).: We denote by \(<_{n}\) the lexicographical monomial order on \(\mathbb{C}[\mathbf{x}_{w_{0}}]\) defined by \(x_{i,j}>_{n}x_{i^{\prime},j^{\prime}}\) if \(i<i^{\prime}\), or, \(i=i^{\prime}\) and \(j<j^{\prime}\).
Moreover, there exists an ordering of the polynomials \(f_{k,\ell}^{w_{0}}\) such that the initial term of one polynomial does not appear in any of the polynomials later in the order. This sequence can be obtained by reading from the matrix \((w_{0}M)^{-1}\mathsf{N}(w_{0}M)\) left-to-right along the bottom row, then left-to-right along the penultimate row, and so on. Explicitly, this is the sequence
\[f_{n,1}^{w_{0}},\,f_{n,2}^{w_{0}},\,\dots,\,f_{n,n-1}^{w_{0}},\,f_{n-1,1}^{w_ {0}},\,f_{n-1,2}^{w_{0}},\,\dots. \tag{2.11}\]
It is shown in [6] that the initial terms of the polynomials \(f_{k,\ell}^{w_{0}}\) with respect to \(<_{n}\) form a list of distinct indeterminates. We have the following.
**Lemma 2.12** ([6, Lemma 4.13]).: _Let \(k,\ell\in[n]\) satisfy \(k>\ell+1\). Then the following hold._
1. _With respect to the monomial order_ \(<_{n}\) _of Definition_ 2.10_,_ \(\operatorname{in}_{<_{n}}(f_{k,\ell}^{w_{0}})=-x_{n+1-k,\ell+1}\)_. In particular, the initial term is an indeterminate (up to sign)._
2. _The initial term_ \(-x_{n+1-k,\ell+1}\) _of_ \(f_{k,\ell}^{w_{0}}\) _does not appear in any_ \(f_{k^{\prime},\ell^{\prime}}^{w_{0}}\) _which are listed after_ \(f_{k,\ell}^{w_{0}}\) _in the sequence (_2.11_)._
3. _The indeterminate_ \(x_{n+1-k,\ell+1}\) _appears exactly once in_ \(f_{k,\ell}^{w_{0}}\) _and all other indeterminates_ \(x_{i,j}\) _appearing in_ \(f_{k,\ell}^{w_{0}}\) _either satisfy_ \(i>n+1-k\) _or_ \(i=n+1-k\) _and_ \(j>\ell+1\)
Let \(h\) be an indecomposable Hessenberg function. By Definition 2.7, the ideal \(I_{w_{0},h}\) is generated by the polynomials \(f_{k,\ell}^{w_{0}}\) for all \(k,\ell\in[n]\) with \(k>h(\ell)\). By definition of indecomposable Hessenberg functions, if \(k>h(\ell)\) then \(k\geq\ell+1\), so the above lemma applies. We additionally have the following useful result from [6].
**Proposition 2.13** ([6, Corollary 4.15]).: _For any indecomposable Hessenberg function \(h\), the generators \(f_{k,\ell}^{w_{0}}\) form a Grobner basis for the Hessenberg patch ideal \(I_{w_{0},h}\) with respect to \(<_{n}\)._
In this paper, we generalize Proposition 2.13 to all regular nilpotent Hessenberg Schubert cells.
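For instance, Proposition 2.13 can be verified by machine in the setting of Example 2.9. In the SymPy sketch below (our own illustration), the order \(<_{n}\) of Definition 2.10 is realized as the lexicographic order with \(x_{1,1}>x_{1,2}>x_{1,3}>x_{2,1}>x_{2,2}>x_{3,1}\); the initial terms of the generators and of the reduced Grobner basis are the same indeterminates (up to sign), which confirms that the generators themselves already form a Grobner basis.

```python
import sympy as sp

x11, x12, x13, x21, x22, x31 = sp.symbols('x11 x12 x13 x21 x22 x31')
gens_order = (x11, x12, x13, x21, x22, x31)   # realizes <_n of Definition 2.10 as lex

# generators of I_{w_0,h} for n = 4 and h = (2, 3, 4, 4), as in Example 2.9
f31 = -x22 + x31
f41 = -x12 + x13*(x22 - x31) + x21
f42 = -x13 + x22
F = [f31, f41, f42]

# initial terms predicted by Lemma 2.12(1): -x_{2,2}, -x_{1,2}, -x_{1,3}
print([sp.LT(f, *gens_order, order='lex') for f in F])

# leading terms of the reduced Groebner basis: the same indeterminates (up to sign),
# so the f's already generate the initial ideal and hence form a Groebner basis
G = sp.groebner(F, *gens_order, order='lex')
print([sp.LT(g, *gens_order, order='lex') for g in G.exprs])
```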
### \(\mathsf{S}\)-actions on regular nilpotent Hessenberg varieties
In this manuscript we will use a certain circle action on \(\operatorname{Hess}(\mathsf{N},h)\) which allows us to restrict attention to an appropriate subset of Schubert cells for our analysis. It is the same circle action as is used in [6], so we keep the exposition brief and refer the reader to [6, Section 2] for details.
Let \(\mathsf{S}\cong\mathbb{C}^{*}\) denote the circle subgroup of the maximal torus of \(GL_{n}(\mathbb{C})\) given by
\[\mathsf{S}\coloneqq\{\underline{t}\coloneqq\operatorname{diag}(t,t^{2},\ldots,t^{n})\,\mid\,t\in\mathbb{C}^{*}\}.\]
It is straightforward to check that \(\mathsf{S}\) preserves \(\operatorname{Hess}(\mathsf{N},h)\). Moreover, since \(\mathsf{S}\) is a subgroup of the usual maximal torus of \(GL_{n}(\mathbb{C})\) and the maximal torus preserves the coordinate patches \(\mathcal{N}_{w}\), it follows that \(\mathsf{S}\) also preserves the cell \(\mathcal{N}_{w}\), hence also the intersections \(\operatorname{Hess}(\mathsf{N},h)\cap\mathcal{N}_{w}\).
Additionally, we can also concretely compute the action of \(\mathsf{S}\) on any local coordinate patch \(\mathcal{N}_{w}\). Recall that the standard maximal torus action on \(GL_{n}(\mathbb{C})/B\) is given by left multiplication on left cosets; since \(\mathsf{S}\) is a subgroup, it acts in the same way. More precisely, given a matrix \(wM\) representing a flag \([wM]\in GL_{n}(\mathbb{C})/B\), we have
\[\underline{t}\cdot[wM]=[t(wM)].\]
To express \([t(wM)]\) in terms of the coordinate chart \(\mathcal{N}_{w}\), we must now find a matrix \(M^{\prime}\) with entries described by (2.3) such that \(twM=wM^{\prime}\). The simple observation here is that left multiplication by the diagonal matrix \(\underline{t}\) changes the entries in the \((w(j),j)\)-th spots to be \(t^{w(j)}\) instead of a \(1\), so in order to return the matrix to the required form, it is necessary to multiply on the right by the diagonal element \(\operatorname{diag}(t^{-w(1)},t^{-w(2)},\ldots,t^{-w(n)})\), i.e. the diagonal matrix whose \(j\)-th diagonal entry is \(t^{-w(j)}\). From this simple observation it is not hard to see that the matrix \(wM^{\prime}\) satisfies
\[[wM^{\prime}]_{i,j}=\begin{cases}1&\text{if }i=w(j),\\ 0&\text{if }j>w^{-1}(i),\\ t^{i-w(j)}x_{i,j}&\text{otherwise}\end{cases}\]
where the \(x_{i,j}\) are the entries of \(wM\) as in (2.3).
The above-described torus action on the patch \(\mathcal{N}_{w}\) induces an \(\mathsf{S}\)-grading on the coordinate ring \(\mathbb{C}[\mathbf{x}_{w}]\), where the indeterminate \(x_{i,j}\) has degree \(w(j)-i\). These degrees need not be positive for an arbitrary \(w\), but on the \(w_{0}\)-chart we have \(\deg(x_{i,j})=w_{0}(j)-i=n+1-i-j>0\) for all \(x_{i,j}\in\mathbf{x}_{w_{0}}\), so there the induced grading is a positive \(\mathbb{Z}\)-grading. In Section 3, we will restrict this action further to the Schubert cell at \(w\); the cell coordinates \(z_{i,j}\) introduced there satisfy \(i<w(j)\), so the analogous grading on the cell is again positive.
## 3. Coordinates on regular nilpotent Hessenberg Schubert cells
In this section, we will relate coordinates on the regular nilpotent Hessenberg Schubert cells to certain coordinates on the \(w_{0}\)-chart of the same Hessenberg variety. This will allow us, in the next section, to adapt results about the \(w_{0}\)-chart from [6] to yield results for the regular nilpotent Hessenberg Schubert cells, allowing us to draw conclusions about, e.g., the initial terms, recurrence relations, and Grobner bases. Indeed, finding this explicit relation with the \(w_{0}\)-chart allows us to avoid reproving the technical details of [6].
Recall that the Schubert cell of a permutation \(w\) is defined by
\[X_{\circ}^{w}\coloneqq BwB/B\subseteq GL_{n}(\mathbb{C})/B\cong\operatorname{ Flags}(\mathbb{C}^{n}).\]
By the well-known theory of Bruhat decomposition, the Schubert cells are disjoint and their union is \(GL_{n}(\mathbb{C})/B\). Furthermore, each \(X_{\circ}^{w}\) is isomorphic to a complex affine space of dimension \(\ell(w)\), where \(\ell(w)\) denotes the length of the permutation \(w\). In fact, in a manner similar to how we described elements in \(GL_{n}(\mathbb{C})/B\) in the previous section, we can parametrize the Schubert cell by viewing an element of \(X_{\circ}^{w}\) as represented by a matrix of the form \(\Omega_{w}\), where the \((i,j)\)-th entry is given by:
\[[\Omega_{w}]_{i,j}=\begin{cases}1&\text{if $i=w(j)$,}\\ 0&\text{if $i>w(j)$ or $j>w^{-1}(i)$,}\\ z_{i,j}&\text{otherwise (i.e., $i<w(j)$ and $j<w^{-1}(i)$).}\end{cases} \tag{3.1}\]
Note that the condition \(i>w(j)\) corresponds to the matrix entries occurring below the \(1\)'s and the condition \(j>w^{-1}(i)\) corresponds to the entries occurring to the right of the \(1\)'s. This parametrization illustrates that \(X_{\circ}^{w}\) is isomorphic to an affine space, and we can view \(\mathbf{z}_{w}\coloneqq\{z_{i,j}\;\mid\;i<w(j)\text{ and }j<w^{-1}(i)\}\) as indeterminates corresponding to the affine coordinates on \(X_{\circ}^{w}\). In particular, the affine coordinate ring of \(X_{\circ}^{w}\) can be identified with the polynomial ring \(\mathbb{C}[\mathbf{z}_{w}]\). Note that, in the discussion below, we will reserve the coordinates \(x_{i,j}\) for entries of \(wM\) as in (2.3), and the coordinates \(z_{i,j}\) for entries of \(\Omega_{w}\) as in (3.1).
We now wish to relate the coordinates \(\mathbf{z}_{w}\) with the coordinates \(\mathbf{x}_{w_{0}}\). To do this, our first observation is that we can write the \(z_{i,j}\) coordinates of the Schubert cell as a specialization of the \(x_{i,j}\) coordinates of the \(w_{0}\)-chart (by setting certain variables to zero after an appropriate relabelling). We start with a motivational example to demonstrate this behaviour, and describe this process more precisely afterwards.
_Example 3.2_.: Let \(n=4\) and \(w=3421\). As we see below, for an element \(w_{0}M\in\mathcal{N}_{w_{0}}\), there exists a permutation matrix \(v_{w}\) such that if we right-multiply \(w_{0}M\) by \(v_{w}\) and then set some variables equal to \(0\), then we obtain an element of the cell \(X_{\circ}^{w}\). Indeed, in our example, \(v_{w}\) turns out to be the permutation matrix
\[v_{w}=\begin{bmatrix}0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix}\]
which corresponds to the permutation \(v_{w}=2134\). We can compute the result of right-multiplication by \(v_{w}\) and setting \(x_{3,1}\) to \(0\), and obtain
\[\left(\begin{bmatrix}x_{1,1}&x_{1,2}&x_{1,3}&1\\ x_{2,1}&x_{2,2}&1&0\\ x_{3,1}&1&0&0\\ 1&0&0&0\end{bmatrix}\begin{bmatrix}0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix}\right)\right)_{x_{3,1}=0}=\begin{bmatrix}x_{1,2}&x_{1,1}& x_{1,3}&1\\ x_{2,2}&x_{2,1}&1&0\\ 1&0&0&0\\ 0&1&0&0\end{bmatrix}\in X_{\circ}^{w}. \tag{3.3}\]
For an element of the cell \(X_{\circ}^{w}\), we may represent it as a matrix with entries as in (3.1) by
\[\Omega_{w}=\begin{bmatrix}z_{1,1}&z_{1,2}&z_{1,3}&1\\ z_{2,1}&z_{2,2}&1&0\\ 1&0&0&0\\ 0&1&0&0\end{bmatrix}. \tag{3.4}\]
Using the correspondence (3.3), we can identify the coordinates of \(\Omega_{w}\) with those of \(w_{0}M\) by a ring homomorphism \(\psi_{w}\) from \(\mathbb{C}[\mathbf{x}_{w_{0}}]\to\mathbb{C}[\mathbf{z}_{w}]\) defined by comparing the matrix of (3.3) entry-by-entry to \(\Omega_{w}\) in (3.4). For instance, in this example, \(z_{1,1}\) is identified with \(x_{1,2}\), \(z_{1,2}\) with \(x_{1,1}\), and so on. The reader may verify that this is exactly the correspondence given in Definition 3.8 below, which formalizes and generalizes the phenomenon seen in this example. \(\Diamond\)
Following the example, given a permutation \(w\in S_{n}\), define \(v_{w}\coloneqq w_{0}w\). Denote the \((i,j)\)-th entry of the matrix \(w_{0}M\) by \(x_{i,j}\) for \(i,j\in[n]\) with \(i+j\leq n+1\). (Here we set \(x_{i,n-i+1}\coloneqq 1\) for all \(1\leq i\leq n\), i.e., the anti-diagonal entries are set to be equal to \(1\).) Under right-multiplication by \(v_{w}\) (and by our convention (2.1) for permutation matrices), the columns of \(w_{0}M\) are permuted, and in particular, the \((i,j)\)-th entry of the matrix \((w_{0}M)v_{w}\) is the \((i,v_{w}(j))\)-th entry of the matrix \(w_{0}M\). Equivalently, using the standard notation \(A_{i,j}\) for the \((i,j)\)-th entry of a matrix \(A\), we have
\[[(w_{0}M)v_{w}]_{i,j}=[w_{0}M]_{i,v_{w}(j)}.\]
Moreover, since it is precisely the anti-diagonal entries of \(w_{0}M\) which are equal to \(1\), the above equality implies that the \((i,j)\)-th entry of \((w_{0}M)v_{w}\) is equal to \(1\) precisely when \(i=n+1-v_{w}(j)=w_{0}(v_{w}(j))=w(j)\), where the last equality is by our choice of \(v_{w}\). This means that the \(1\)'s in the matrix \((w_{0}M)v_{w}\) are located precisely at the same locations as the \(1\)'s in the permutation matrix corresponding to \(w\). Therefore, if the variables appearing in \((w_{0}M)v_{w}\) below the \(1\)'s and to the right of the \(1\)'s are all equal to \(0\), then the matrix represents an element of the cell \(X_{\circ}^{w}\).
Indeed, from (3.1), it then follows that in order for the matrix \((w_{0}M)v_{w}\) to be in \(X_{\circ}^{w}\), we must have that \([(w_{0}M)v_{w}]_{i,j}=0\) if either
\[w(j)<i\]
or
\[j>w^{-1}(i).\]
We can then ask which indeterminates appearing as entries in \(w_{0}M\), obtained by right-multiplication by \(v_{w}\), have to be equal to \(0\) in order for the corresponding matrix \((w_{0}M)v_{w}\) to be contained in \(X_{\circ}^{w}\). Indeed, let \(x_{a,b}\) denote the \((a,b)\)-th entry of \(w_{0}M\) for \(a+b<n+1\)
(As we already saw, if \(a+b=n+1\) then \(x_{a,b}=1\) so we do not consider this case.) Based on the above paragraphs, it is straightforward now to see that if \((w_{0}M)v_{w}\) is in \(X_{\circ}^{w}\) it is necessary to have \(x_{a,b}=0\) if either
\[w(v_{w}^{-1}(b))<a \tag{3.5}\]
or
\[v_{w}^{-1}(b)>w^{-1}(a). \tag{3.6}\]
Since \(v_{w}=w_{0}w\) by definition, the inequality (3.5) simplifies to \(w_{0}(b)<a\) or equivalently \(n+1-b<a\), which can never happen since we started with \(a+b<n+1\). Therefore, the only non-vacuous condition is (3.6).
The preceding discussion motivates the following definitions. (In the previous paragraph, we used indices \(a\) and \(b\) to distinguish the indices used for \(w_{0}M\) versus \((w_{0}M)v_{w}\). We now revert to using \(i,j\) to denote indices for the matrix \(w_{0}M\).) Let \(D_{w}\) denote the collection of pairs \((i,j)\) specified by (3.6); more precisely, let
\[D_{w}\coloneqq\{(i,j)\in[n]^{2}\mid i+j\leq n\text{ and }v_{w}^{-1}(j)>w^{-1}(i)\}. \tag{3.7}\]
**Definition 3.8**.: Define a homomorphism of rings \(\psi_{w}:\mathbb{C}[\mathbf{x}_{w_{0}}]\to\mathbb{C}[\mathbf{z}_{w}]\) by
\[\psi_{w}(x_{i,j})\coloneqq\begin{cases}0&\text{if }(i,j)\in D_{w},\\ z_{i,v_{w}^{-1}(j)}&\text{if }(i,j)\notin D_{w}.\end{cases}\]
_Remark 3.9_.: Geometrically, the ring homomorphism \(\psi_{w}\) corresponds to the embedding of the Schubert cell \(X_{\circ}^{w}\) into \(\mathcal{N}_{w_{0}}\), given by right multiplication of an element \(\Omega_{w}\in X_{\circ}^{w}\) by \(v_{w}^{-1}\). Then \(\Omega_{w}v_{w}^{-1}\in\mathcal{N}_{w_{0}}\), and certain coordinates are equal to \(0\) by the discussion above, so the corresponding map on coordinate rings sets those coordinates to \(0\).
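Definition 3.8 is easy to implement. The Python sketch below is our own illustration; it computes \(v_{w}\), the index set \(D_{w}\) of (3.7), and the substitution \(\psi_{w}\) for the permutation \(w=3421\) of Example 3.2.

```python
import sympy as sp

n = 4
w = (3, 4, 2, 1)                                   # w = 3421 in one-line notation
vw = tuple(n + 1 - w[j] for j in range(n))         # v_w = w_0 w, i.e. v_w(j) = n + 1 - w(j)
w_inv = {w[j]: j + 1 for j in range(n)}
vw_inv = {vw[j]: j + 1 for j in range(n)}

# D_w = {(i, j) : i + j <= n and v_w^{-1}(j) > w^{-1}(i)}, see (3.7)
Dw = {(i, j) for i in range(1, n + 1) for j in range(1, n + 1)
      if i + j <= n and vw_inv[j] > w_inv[i]}
print(vw, Dw)                                      # (2, 1, 3, 4) and {(3, 1)}

def psi_w(i, j):
    """psi_w(x_{i,j}) as in Definition 3.8."""
    return sp.Integer(0) if (i, j) in Dw else sp.Symbol(f'z_{i}_{vw_inv[j]}')

print(psi_w(2, 2), psi_w(3, 1))                    # z_2_1 and 0, as used in Example 3.15
```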
In what follows, if \(A\) is a matrix whose entries are in \(\mathbb{C}[\mathbf{x}_{w_{0}}]\), we denote by \(\psi_{w}(A)\) the matrix obtained by applying the ring homomorphism \(\psi_{w}\) to each matrix entry of \(A\), i.e., if \(a_{i,j}\) is the \((i,j)\)-th entry of \(A\), then \(\psi_{w}(a_{i,j})\) is the \((i,j)\)-th entry of \(\psi_{w}(A)\). By construction, this results in a matrix with entries in \(\mathbb{C}[\mathbf{z}_{w}]\). With this notation in place, we have the following observations.
_Remark 3.10_.: For any \(w\in S_{n}\), we have that \(\Omega_{w}=\psi_{w}((w_{0}M)v_{w})\) by construction of \(\psi_{w}\).
We also have the following.
**Lemma 3.11**.: _For any \(w\in S_{n}\), \(\left(\psi_{w}((w_{0}M)v_{w})\right)^{-1}=\psi_{w}\left(((w_{0}M)v_{w})^{-1}\right)\)._
Proof.: This follows from the facts that \(\psi_{w}\) is a ring homomorphism applied entry-by-entry and that any entry of the inverse of a matrix can be expressed as a polynomial in terms of the entries of the given matrix. Indeed, the determinant of \(M\) is equal to \(1\) and determinants of permutation matrices are \(\pm 1\). Hence the determinant of \((w_{0}M)v_{w}\) is \(\pm 1\) and so the determinant in the denominator of the usual formula for a matrix inverse does not appear in this computation, and all entries of the matrix inverse are polynomials and not rational functions.
**Definition 3.12**.: Let \(w\in S_{n}\). As in Definition 2.7, define polynomials \(g^{w}_{k,\ell}\) in \(\mathbb{C}[\mathbf{z}_{w}]\) for \(k,\ell\in[n]\) by
\[g^{w}_{k,\ell}\coloneqq[\Omega_{w}^{-1}\mathsf{N}\Omega_{w}]_{k,\ell}.\]
For an indecomposable Hessenberg function \(h\), we define the **regular nilpotent Hessenberg Schubert cell ideal corresponding to \(w\) and \(h\)** to be the ideal
\[J_{w,h}\coloneqq\langle g^{w}_{k,\ell}\mid k>h(\ell)\rangle\subseteq\mathbb{C }[\mathbf{z}_{w}]. \tag{3.13}\]
As in the case of the generators \(f^{w}_{k,\ell}\) in Definition 2.7, it is straightforward to see from (2.5) that the \(g^{w}_{k,\ell}\) set-theoretically cut out the intersection \(\operatorname{Hess}(\mathsf{N},h)\cap X^{w}_{\circ}\) from \(X^{w}_{\circ}\), justifying the terminology in Definition 3.12 above. It will also follow from our algebraic arguments below that \(J_{w,h}\) is radical (see Corollary 4.8), and hence \(J_{w,h}\) also scheme-theoretically defines \(\operatorname{Hess}(\mathsf{N},h)\cap X^{w}_{\circ}\).
_Notation 3.14_.: It will also be useful in what follows to have notation for the number of generators \(g^{w}_{k,\ell}\) of \(J_{w,h}\) as listed in (3.13). Let \(\lambda_{h}\) denote the partition \((n-h(1),n-h(2),\ldots,n-h(n-1),n-h(n)=0)\). Then \(|\lambda_{h}|\), the sum of the parts of \(\lambda_{h}\), is \(|\lambda_{h}|=n^{2}-\sum_{i=1}^{n}h(i)\). This is equal to the number of generators \(g^{w}_{k,\ell}\) as listed in (3.13).
_Example 3.15_.: Continuing Example 3.2, we now note that we can express the generators for the regular nilpotent Hessenberg Schubert cell ideal in terms of the Hessenberg \(w_{0}\)-patch ideal via the map \(\psi_{w}\). First, notice that
\[\Omega_{w}^{-1}\mathsf{N}\Omega_{w}=\begin{bmatrix}0&1&0&0\\ 0&0&0&0\\ 1&-z_{2,1}&0&0\\ -z_{1,3}+z_{2,1}&-z_{1,1}+z_{1,3}z_{2,1}+z_{2,2}&1&0\end{bmatrix}. \tag{3.16}\]
Meanwhile,
\[F\coloneqq(w_{0}M)^{-1}\mathsf{N}(w_{0}M)=\begin{bmatrix}0&0&0&0\\ 1&0&0&0\\ -x_{2,2}+x_{3,1}&1&0&0\\ -x_{1,2}+x_{1,3}(x_{2,2}-x_{3,1})+x_{2,1}&-x_{1,3}+x_{2,2}&1&0\end{bmatrix}\]
and
\[v_{w}^{-1}Fv_{w}=\begin{bmatrix}0&1&0&0\\ 0&0&0&0\\ 1&-x_{2,2}+x_{3,1}&0&0\\ -x_{1,3}+x_{2,2}&-x_{1,2}+x_{1,3}(x_{2,2}-x_{3,1})+x_{2,1}&1&0\end{bmatrix}. \tag{3.17}\]
Since \(\Omega_{w}=\psi_{w}((w_{0}M)v_{w})\) as observed in Remark 3.10, it follows by Lemma 3.11 that \(\Omega_{w}^{-1}=\psi_{w}(v_{w}^{-1}(w_{0}M)^{-1})\). Applying \(\psi_{w}\) entry-by-entry, it follows that \(\psi_{w}(v_{w}^{-1}Fv_{w})=\Omega_{w}^{-1}\mathsf{N}\Omega_{w}\). In particular, the generators satisfy \(g^{w}_{k,\ell}=\psi_{w}([v_{w}^{-1}Fv_{w}]_{k,\ell})\). For example, comparing the \((3,2)\)-th entries of the RHS of (3.16) and (3.17), we can see that
\[\psi_{w}(-x_{2,2}+x_{3,1}) =\psi_{w}(-x_{2,2})+\psi_{w}(x_{3,1})\] \[=-z_{2,1}+0=-z_{2,1}\]
as claimed. We will formalize this computation in Lemma 3.20 below.
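The entry-by-entry comparison carried out above can be checked mechanically for all matrix entries at once. The SymPy sketch below is our own illustration; it rebuilds both sides of \(\psi_{w}(v_{w}^{-1}Fv_{w})=\Omega_{w}^{-1}\mathsf{N}\Omega_{w}\) for \(n=4\) and \(w=3421\) and verifies that they agree.

```python
import sympy as sp

n = 4
w = (3, 4, 2, 1)                                           # w = 3421
vw = tuple(n + 1 - w[j] for j in range(n))                 # v_w = w_0 w
w_inv = {w[j]: j + 1 for j in range(n)}
vw_inv = {vw[j]: j + 1 for j in range(n)}
perm_matrix = lambda p: sp.Matrix(n, n, lambda i, j: 1 if i + 1 == p[j] else 0)

# w_0 M with free entries x_{i,j} for i + j <= n and antidiagonal 1's (Example 2.4)
x = {(i, j): sp.Symbol(f'x_{i}_{j}')
     for i in range(1, n + 1) for j in range(1, n + 1) if i + j <= n}
w0M = sp.Matrix(n, n, lambda i, j: 1 if i + j == n - 1
                else (x[(i + 1, j + 1)] if i + j < n - 1 else 0))

# Omega_w with free entries z_{i,j} for i < w(j) and j < w^{-1}(i), see (3.1)
def omega_entry(i, j):
    I, J = i + 1, j + 1
    if I == w[J - 1]:
        return 1
    if I > w[J - 1] or J > w_inv[I]:
        return 0
    return sp.Symbol(f'z_{I}_{J}')
Omega = sp.Matrix(n, n, omega_entry)

N = sp.Matrix(n, n, lambda i, j: 1 if j == i + 1 else 0)   # regular nilpotent N of (2.6)
F = (w0M.inv() * N * w0M).applyfunc(sp.expand)
V = perm_matrix(vw)                                        # permutation matrix of v_w, as in (2.1)

# psi_w of Definition 3.8, applied entry-by-entry to v_w^{-1} F v_w
Dw = {(i, j) for (i, j) in x if vw_inv[j] > w_inv[i]}
subs = {x[(i, j)]: (0 if (i, j) in Dw else sp.Symbol(f'z_{i}_{vw_inv[j]}')) for (i, j) in x}
lhs = (V.T * F * V).applyfunc(lambda e: sp.expand(e.subs(subs)))
rhs = (Omega.inv() * N * Omega).applyfunc(sp.expand)
assert (lhs - rhs).applyfunc(sp.expand) == sp.zeros(n, n)  # psi_w(v_w^{-1} F v_w) = Omega^{-1} N Omega
print(lhs)                                                 # reproduces the matrix of (3.16)
```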
We have just seen in the above example that the generators \(g_{k,\ell}^{w}\) in this special case have a simple algebraic relation to certain of the \(f_{i,j}^{w_{0}}\) generators from Definition 2.7, since the matrix entries of \(F\) from the above example are precisely these generators. In the discussion that follows, the overarching theme is that we can use this correspondence to effectively translate results on the \(w_{0}\)-patch as obtained in [6] to the regular nilpotent Hessenberg Schubert cells \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\). Before proceeding further into the details of the translation, however, we need first to make precise the permutations \(w\) for which this translation makes sense. As we will show below, it turns out that the only Schubert cells \(X_{\circ}^{w}\) for which the intersection \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\) is non-empty (and hence the question about local defining ideals is non-vacuous) are those indexed by permutations \(w\) lying in the \(\mathsf{S}\)-fixed point set of \(\operatorname{Hess}(\mathsf{N},h)\). The next lemma makes this precise.
**Lemma 3.18**.: _Let \(w\in S_{n}\). Let \(\mathsf{S}\) denote the copy of \(\mathbb{C}^{*}\) acting on \(\operatorname{Hess}(\mathsf{N},h)\) as described in Section 2.3. Then \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\neq\emptyset\) if and only if \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\)._
Proof.: Since \(w\in X_{\circ}^{w}\) for any \(w\in S_{n}\), one direction is clear. Indeed, since \(w\in X_{\circ}^{w}\), if \(w\in\operatorname{Hess}(\mathsf{N},h)\) then \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\neq\emptyset\).
It remains to show the other direction. Suppose there exists a point \(gB\in\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\), so it is non-empty. Following our notational convention in (3.1) for points in the Schubert cell \(X_{\circ}^{w}\), we may represent this point \(gB\) as a matrix \(\Omega_{w}\) whose \((i,j)\)-th entry is \(1\) if \(i=w(j)\), \(0\) if \(i>w(j)\) or \(j>w^{-1}(i)\), and denoted \(z_{i,j}\) otherwise. We already observed in Section 2.3 that \(\operatorname{Hess}(\mathsf{N},h)\) is preserved under the \(\mathsf{S}\)-action for our choice of \(\mathsf{S}\). Schubert cells are also preserved, since they are preserved by the maximal torus action, and \(\mathsf{S}\) is a subgroup of the maximal torus. Thus \(\mathsf{S}\) preserves \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\), and we conclude that the \(\mathsf{S}\)-orbit \(\mathsf{S}\cdot gB\) is also contained in \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\). Using the matrix representation of the point \(gB\) and the definition of \(\mathsf{S}\), it is straightforward to compute, following the outline given in Section 2.3, that for an element \(\underline{t}\coloneqq\operatorname{diag}(t,t^{2},\ldots,t^{n})\) for \(t\in\mathbb{C}^{*}\), the element \(\underline{t}\cdot gB\) can be represented by a matrix in \(GL_{n}(\mathbb{C})\) whose \((i,j)\)-th entry is \(0\) if \(i>w(j)\) or \(j>w^{-1}(i)\), is \(t^{i}\) if \(i=w(j)\), and \(t^{i}z_{i,j}\) otherwise. Such a matrix is not in the form given in (3.1) so we adjust by multiplication on the right by a torus element in \(B\) given by \(\operatorname{diag}(t^{-w(1)},t^{-w(2)},\ldots,t^{-w(n)})\), which then yields a matrix specifying the same flag, the entries of which are \(0\) if \(i>w(j)\) or \(j>w^{-1}(i)\), \(1\) if \(i=w(j)\), and \(t^{i-w(j)}z_{i,j}\) otherwise. Now we observe that by assumption, if the \((i,j)\)-th entry is \(t^{i-w(j)}z_{i,j}\), then \(i<w(j)\) and therefore \(i-w(j)<0\). This implies that as we take the limit as \(t\to\infty\), the limit point of this orbit is precisely the permutation flag \(w\). Since \(\operatorname{Hess}(\mathsf{N},h)\) is closed, the limit point \(w\) is also contained in \(\operatorname{Hess}(\mathsf{N},h)\), and it is evident that \(w\) is \(\mathsf{S}\)-fixed by the above computation. Hence \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\), as desired.
The above lemma shows that we can restrict attention to permutations \(w\) contained in \(\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\). The third author and Abe, Horiguchi, and Masuda gave an explicit characterization of this set, as follows.
**Lemma 3.19** ([3, Lemma 2.3]).: _Let \(h\) be any Hessenberg function. Then,_
\[\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}=\{w\in S_{n}\mid w^{-1}(w(j)-1) \leq h(j)\text{ for all }j\in[n]\},\]
_where we take the convention that \(w(0)=0\) for all \(w\in S_{n}\)._
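For small \(n\), the criterion of Lemma 3.19 can be checked by brute force over \(S_{n}\). The Python sketch below is our own illustration; it lists \(\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\) for a given Hessenberg function, using the convention \(w(0)=0\) (equivalently, \(w^{-1}(0)=0\)).

```python
from itertools import permutations

def s_fixed_points(h):
    """One-line permutations w with w^{-1}(w(j) - 1) <= h(j) for all j, per Lemma 3.19."""
    n = len(h)
    fixed = []
    for w in permutations(range(1, n + 1)):
        w_inv = {w[j]: j + 1 for j in range(n)}
        w_inv[0] = 0                              # convention w(0) = 0, so w^{-1}(0) = 0
        if all(w_inv[w[j] - 1] <= h[j] for j in range(n)):
            fixed.append(w)
    return fixed

h = (2, 3, 4, 4)
print(len(s_fixed_points(h)), s_fixed_points(h))  # the S-fixed points of Hess(N, h) for this h
```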
We can now relate the \(f^{w_{0}}_{i,j}\) polynomials on the \(w_{0}\)-chart to the \(g^{w}_{k,\ell}\) polynomials on the regular nilpotent Hessenberg Schubert cell. The following is a straightforward computation.
**Lemma 3.20**.: _For any \(w\in S_{n}\) and any \(k,\ell\in[n]\), we have \(g^{w}_{k,\ell}=\psi_{w}(f^{w_{0}}_{v_{w}(k),v_{w}(\ell)})\)._
Proof.: By Remark 3.10, Lemma 3.11, and Definition 3.12 of the \(g^{w}_{k,\ell}\), we can compute that
\[g^{w}_{k,\ell} =[\Omega_{w}^{-1}\mathsf{N}\Omega_{w}]_{k,\ell}=[(\psi_{w}((w_{0}M )v_{w}))^{-1}\mathsf{N}(\psi_{w}((w_{0}M)v_{w}))]_{k,\ell}\] \[=[\psi_{w}(((w_{0}M)v_{w})^{-1})\mathsf{N}(\psi_{w}((w_{0}M)v_{w}) )]_{k,\ell}\]
and since \(\psi_{w}\) is a ring homomorphism,
\[=[\psi_{w}((v_{w}^{-1}(w_{0}M)^{-1})\mathsf{N}(w_{0}M)v_{w})]_{k,\ell}\] \[=\psi_{w}([v_{w}^{-1}(w_{0}M)^{-1}\mathsf{N}(w_{0}M)v_{w}]_{k,\ell})\]
where the last equality holds since \(\psi_{w}\) is applied to the matrix entry-by-entry. The claim of the lemma then follows from the facts that multiplication of \((w_{0}M)^{-1}\mathsf{N}(w_{0}M)\) on the left by \(v_{w}^{-1}\) permutes rows and multiplication on the right by \(v_{w}\) permutes columns.
Although the above result holds for any \(k,\ell\in[n]\), in the next sections we will always impose further restrictions on \((k,\ell)\) so that they correspond to the generators \(g^{w}_{k,\ell}\) for the ideal \(J_{w,h}\).
## 4. A Grobner basis for defining ideals of regular nilpotent Hessenberg Schubert cells
In the last section, we provided an explicit relationship between the generators \(g^{w}_{k,\ell}\) of the local defining ideal of regular nilpotent Hessenberg Schubert cells and their corresponding entries \(f^{w_{0}}_{v_{w}(k),v_{w}(\ell)}\) in the \(w_{0}\)-chart. The main purpose of the present section is to show that these generators in fact form a Grobner basis for the ideal \(J_{w,h}\). The main strategy is to concretely describe the relationship between the initial terms of the \(g^{w}_{k,\ell}\) and those of the corresponding \(f^{w_{0}}_{v_{w}(k),v_{w}(\ell)}\) through the ring homomorphism \(\psi_{w}\).
We first note that if a polynomial \(f^{w_{0}}_{a,b}\) is equal to either \(0\) or \(1\), which occurs when \(a\leq b+1\), then \(\psi_{w}\) will map \(f^{w_{0}}_{a,b}\) to itself and the initial terms are vacuously preserved. From Lemma 2.12(1) it follows that the case in which \(f^{w_{0}}_{a,b}\) has a non-constant initial term is when \(a>b+1\). Therefore, in comparing initial terms of \(g^{w}_{k,\ell}\) with the corresponding \(f^{w_{0}}_{v_{w}(k),v_{w}(\ell)}\) via Lemma 3.20, we need only consider the case when \(v_{w}(k)>v_{w}(\ell)+1\).
In order to discuss Grobner bases, we first need to fix a monomial order. Our first step is therefore to adapt the monomial order \(<_{n}\) from Definition 2.10 to a monomial order on \(\mathbb{C}[\mathbf{z}_{w}]\). We do this using the permutation \(v_{w}\).
**Definition 4.1**.: Define a lexicographic monomial order \(<_{n}^{w}\) on \(\mathbb{C}[\mathbf{z}_{w}]\) by \(z_{i,j}>_{n}^{w}z_{i^{\prime},j^{\prime}}\) if \(i<i^{\prime}\) or \(i=i^{\prime}\) and \(v_{w}(j)<v_{w}(j^{\prime})\).
_Remark 4.2_.: In the case that \(w=w_{0}\), the permutation \(v_{w}\) is the identity permutation, so we recover from \(<_{n}^{w}\) exactly the monomial order \(<_{n}\) from Definition 2.10.
**Lemma 4.3**.: _Let \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\). Assume \(k,\ell\in[n]\) satisfy \(k\geq h(\ell)\) and \(v_{w}(k)>v_{w}(\ell)+1\). Then,_
1. _The initial term of_ \(f_{v_{w}(k),v_{w}(\ell)}^{w_{0}}\) _does not get mapped to zero under_ \(\psi_{w}\)_._
2. _Furthermore, if_ \((k^{\prime},\ell^{\prime})\) _also satisfies the hypotheses and_ \((k^{\prime},\ell^{\prime})\neq(k,\ell)\)_, then_ \(\psi_{w}(\operatorname{in}_{<_{n}}(f_{v_{w}(k^{\prime}),v_{w}(\ell^{\prime})}^{w_{0}}))\neq\psi_{w}(\operatorname{in}_{<_{n}}(f_{v_{w}(k),v_{w}(\ell)}^{w_{0}})).\) _That is,_ \(\psi_{w}\) _is injective when restricted to the set of indeterminates_ \(x_{i,j}\) _with_ \((i,j)\notin D_{w}\)_._
Proof.: Suppose \(k,\ell\) satisfy \(k\geq h(\ell)\) and \(v_{w}(k)>v_{w}(\ell)+1\). By Lemma 2.12(1), the initial term is the indeterminate \(\operatorname{in}_{<_{n}}(f_{v_{w}(k),v_{w}(\ell)}^{w_{0}})=-x_{n+1-v_{w}(k),v_{w}(\ell)+1}\). To show that this does not map to \(0\) under \(\psi_{w}\), we will show that \((n+1-v_{w}(k),v_{w}(\ell)+1)\not\in D_{w}\). By (3.7), this is equivalent to showing that
\[n+1-v_{w}(k)+v_{w}(\ell)+1\leq n\quad\text{and}\quad v_{w}^{-1}(v_{w}(\ell)+1 )\leq w^{-1}(n+1-v_{w}(k)).\]
The first inequality is equivalent to \(v_{w}(\ell)+2\leq v_{w}(k)\), and this follows from the assumption on \(k\) and \(\ell\). For the second inequality, observe that
\[n+1-v_{w}(k)=w_{0}(v_{w}(k))\]
and hence
\[w^{-1}(n+1-v_{w}(k))=w^{-1}(w_{0}v_{w}(k))=(w_{0}v_{w})^{-1}(w_{0}v_{w}(k))=k.\]
Hence, to prove the second inequality it suffices to show that
\[v_{w}^{-1}(v_{w}(\ell)+1)\leq k.\]
Since \(v_{w}=w_{0}w\), we may also compute the left-hand side as
\[v_{w}^{-1}(v_{w}(\ell)+1) =w^{-1}w_{0}(w_{0}w(\ell)+1)\] \[=w^{-1}w_{0}(n+1-w(\ell)+1)\] \[=w^{-1}(w(\ell)-1)\]
so we need to prove
\[w^{-1}(w(\ell)-1)\leq k.\]
Recall that \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\). By Lemma 3.19 we know therefore that
\[w^{-1}(w(\ell)-1)\leq h(\ell)\]
and by our assumption \(h(\ell)\leq k\), and hence we obtain the inequality
\[w^{-1}(w(\ell)-1)\leq k\]
as desired. This proves the first claim.
Now if \(\psi_{w}(x_{i,j})=\psi_{w}(x_{i^{\prime},j^{\prime}})\) and these are nonzero, by Definition 3.8 we must have \(i=i^{\prime}\) and \(v_{w}^{-1}(j)=v_{w}^{-1}(j^{\prime})\), which forces \(j=j^{\prime}\), so \(x_{i,j}=x_{i^{\prime},j^{\prime}}\). That is, the restriction of \(\psi_{w}\) to the indeterminates \(x_{i,j}\) with \((i,j)\notin D_{w}\) is injective. This proves the second claim.
**Lemma 4.4**.: _Under the hypotheses of Lemma 4.3, \(\psi_{w}\) respects the monomial orders \(<_{n}^{w}\) and \(<_{n}\). That is, \(\operatorname{in}_{<_{n}^{w}}(\psi_{w}(f_{v_{w}(k),v_{w}(\ell)}^{w_{0}}))=\psi_{w}(\operatorname{in}_{<_{n}}(f_{v_{w}(k),v_{w}(\ell)}^{w_{0}}))\)._
Proof.: By the result of Lemma 2.12(3), the initial term of \(f^{w_{0}}_{v_{w}(k),v_{w}(\ell)}\) is an indeterminate that appears exactly once in \(f^{w_{0}}_{v_{w}(k),v_{w}(\ell)}\) and by Lemma 4.3, this initial term does not get mapped to zero. As a result, it suffices to show that if \(x_{i,j},x_{i^{\prime},j^{\prime}}\notin D_{w}\) with \(x_{i,j}>_{n}x_{i^{\prime},j^{\prime}}\), then \(\psi_{w}(x_{i,j})>_{n}^{w}\psi_{w}(x_{i^{\prime},j^{\prime}})\).
So suppose that \(x_{i,j},x_{i^{\prime},j^{\prime}}\notin D_{w}\) with \(x_{i,j}>_{n}x_{i^{\prime},j^{\prime}}\). By Definition 2.10, we have that either \(i<i^{\prime}\), or, \(i=i^{\prime}\) and \(j<j^{\prime}\). Then by Definition 3.8, we have that \(\psi_{w}(x_{i,j})>_{n}^{w}\psi_{w}(x_{i^{\prime},j^{\prime}})\) if and only if \(z_{i,v_{w}^{-1}(j)}>_{n}^{w}z_{i^{\prime},v_{w}^{-1}(j^{\prime})}\). The case \(i<i^{\prime}\) is immediate, so assume that \(i=i^{\prime}\). Then, the inequality holds if and only if \(v_{w}(v_{w}^{-1}(j))<v_{w}(v_{w}^{-1}(j^{\prime}))\), i.e., \(j<j^{\prime}\).
**Corollary 4.5**.: _Let \(J_{w,h}\) be the regular nilpotent Hessenberg Schubert cell ideal defining \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\) as a subset of \(X_{\circ}^{w}\). Then, with respect to the monomial order \(<_{n}^{w}\) defined above, the initial terms of the generators \(g_{k,\ell}^{w}\) of \(J_{w,h}\) are distinct indeterminates._
Proof.: Let \(\{g_{k_{1},\ell_{1}}^{w},\ldots,g_{k_{r},\ell_{r}}^{w}\}\) be the set of generators of \(J_{w,h}\) from Definition 3.12. By Lemma 3.20, this is equal to the set \(\{\psi_{w}(f^{w_{0}}_{v_{w}(k_{1}),v_{w}(\ell_{1})}),\ldots,\psi_{w}(f^{w_{0}}_ {v_{w}(k_{r}),v_{w}(\ell_{r})})\}\). Then, by Lemma 4.4, \(\operatorname{in}_{<_{n}^{w}}(g_{k_{i},\ell_{i}}^{w})=\psi_{w}(\operatorname{ in}_{<_{n}}(f^{w_{0}}_{v_{w}(k_{i}),v_{w}(\ell_{i})}))\) and by Lemma 4.3, the initial terms of \(g_{k_{i},\ell_{i}}^{w}\) with respect to \(<_{n}^{w}\) are distinct indeterminates, as claimed.
It is now straightforward to see that the generators \(\{g_{k,\ell}^{w}\}\) of \(J_{w,h}\) form a Grobner basis with respect to \(<_{n}^{w}\).
**Theorem 4.6**.: _For any indecomposable Hessenberg function \(h\) and any \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\), the natural generators \(g_{k,\ell}^{w}\) for the regular nilpotent Hessenberg Schubert cell ideal \(J_{w,h}\) form a Grobner basis for \(J_{w,h}\) with respect to \(<_{n}^{w}\). Moreover, the initial ideal \(\operatorname{in}_{<_{n}^{w}}(J_{w,h})\) is an ideal generated by distinct indeterminates._
Proof.: We saw in Corollary 4.5 that the initial terms of the generators are distinct indeterminates. This implies that the \(S\)-polynomial for any pair of generators \(g_{k,\ell}^{w}\) and \(g_{k^{\prime},\ell^{\prime}}^{w}\) is given by \(S(g_{k,\ell}^{w},g_{k^{\prime},\ell^{\prime}}^{w})=\operatorname{in}_{<_{n}^{w}}(g_{k^{\prime},\ell^{\prime}}^{w})g_{k,\ell}^{w}-\operatorname{in}_{<_{n}^{w}}(g_{k,\ell}^{w})g_{k^{\prime},\ell^{\prime}}^{w}\). Since the initial terms of distinct generators are relatively prime, each such \(S\)-polynomial reduces to zero upon division by the generating set, so Buchberger's Criterion [5, Chapter 2.6, Theorem 6] implies that the natural generators form a Grobner basis with respect to \(<_{n}^{w}\). Since we saw in Corollary 4.5 that the initial terms of the \(g_{k,\ell}^{w}\) are distinct indeterminates, and since these form a Grobner basis, the initial ideal is an ideal generated by distinct indeterminates as claimed.
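As a concrete sanity check of Theorem 4.6 (the specific example here is our own choice), take \(n=4\), \(w=3421\), and the indecomposable Hessenberg function \(h=(3,3,4,4)\). One checks via Lemma 3.19 that \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\), and \(J_{w,h}=\langle g^{w}_{4,1},g^{w}_{4,2}\rangle\) with the generators read off from (3.16). In the SymPy sketch below, the order \(<_{n}^{w}\) is realized as the lexicographic order with \(z_{1,2}>z_{1,1}>z_{1,3}>z_{2,2}>z_{2,1}\), following Definition 4.1 with \(v_{w}=2134\).

```python
import sympy as sp

z11, z12, z13, z21, z22 = sp.symbols('z11 z12 z13 z21 z22')

# generators of J_{w,h} for w = 3421 and h = (3, 3, 4, 4), read off from (3.16)
g41 = -z13 + z21
g42 = -z11 + z13*z21 + z22

# the order <_n^w: row 1 before row 2, and within a row by increasing v_w(j);
# for v_w = 2134 this gives z12 > z11 > z13 > z22 > z21
gens_order = (z12, z11, z13, z22, z21)

# initial terms: -z_{1,3} and -z_{1,1} (cf. Lemma 4.4 together with Lemma 2.12(1))
print([sp.LT(g, *gens_order, order='lex') for g in (g41, g42)])

# the reduced Groebner basis has the same indeterminates (up to sign) as leading terms,
# so g41, g42 already form a Groebner basis and in(J_{w,h}) is generated by indeterminates
G = sp.groebner([g41, g42], *gens_order, order='lex')
print([sp.LT(g, *gens_order, order='lex') for g in G.exprs])
```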
_Remark 4.7_.: We could have also shown the above theorem using the fact that the image of a Grobner basis is again a Grobner basis under a specialization map if the initial terms appearing do not map to zero.
Suppose that \(\{g_{k_{1},\ell_{1}}^{w},\ldots,g_{k_{r},\ell_{r}}^{w}\}\) are the generators for \(J_{w,h}\) from Definition 3.12. By Lemma 3.20, each \(g_{k,\ell}^{w}\) is the image of a certain \(f_{a,b}^{w_{0}}\) under the ring homomorphism \(\psi_{w}\). Furthermore by Lemma 4.3, the initial terms of these \(f_{a,b}^{w_{0}}\) with respect to \(<_{n}\) are not mapped to \(0\) by \(\psi_{w}\). Let us define a new monomial order \(\prec_{n}^{w_{0},w}\) on \(\mathbb{C}[\mathbf{x}_{w_{0}}]\) which respects the ordering \(<_{n}\) except for all \(x_{i,j}\in D_{w}\) which receive the least weight. That is, if \(x_{i,j}\in D_{w}\) and \(x_{i^{\prime},j^{\prime}}\notin D_{w}\), then \(x_{i,j}\prec_{n}^{w_{0},w}x_{i^{\prime},j^{\prime}}\), and if \(x_{i,j},x_{i^{\prime},j^{\prime}}\in D_{w}\) or \(x_{i,j},x_{i^{\prime},j^{\prime}}\notin D_{w}\), then \(x_{i,j}\prec_{n}^{w_{0},w}x_{i^{\prime},j^{\prime}}\) if \(x_{i,j}<_{n}x_{i^{\prime},j^{\prime}}\).
Then \(\{f^{w_{0}}_{v_{w}(k_{1}),v_{w}(\ell_{1})},\dots,f^{w_{0}}_{v_{w}(k_{r}),v_{w}(\ell_{r})}\}\) is a Grobner basis for the ideal it generates with respect to \(\prec_{n}^{w_{0},w}\), since the initial terms of the \(f^{w_{0}}_{a,b}\) with respect to \(<_{n}\) are distinct indeterminates not indexed by \(D_{w}\). Then per [9, Definition 4], \(\prec_{n}^{w_{0},w}\) is a \(\psi_{w}\)-admissible order, so by the result of [9, Theorem 1] together with Lemmas 4.3 and 4.4, we can provide an alternate proof of Theorem 4.6. (Technically, \(\psi_{w}\) is a composition of a specialization map with a change of variable names, but this is a small adjustment to the statement of [9, Theorem 1].)
From the above results, it follows immediately that the regular nilpotent Hessenberg Schubert cell ideal \(J_{w,h}\) is radical and hence is the defining ideal of \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\).
**Corollary 4.8**.: _Let \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\). Then the ideal \(J_{w,h}\) is radical._
Proof.: Since \(\operatorname{in}_{<_{n}^{w}}(J_{w,h})\) is an ideal generated by indeterminates, it is a radical ideal. Thus, so too is \(J_{w,h}\) (e.g., see [5, Exercise 4.2.16]).
## 5. Applications of the Main Results
In this section, we discuss some consequences of our main results. The results of the previous sections imply that the regular nilpotent Hessenberg Schubert cells are affine spaces, as we show in Proposition 5.7. This allows us to obtain several immediate applications of our main results which connect to existing literature and illustrate the usefulness of our computational-algebraic perspective.
The content of this section is as follows. First, in Section 5.1 we see that the local defining ideals of regular nilpotent Hessenberg Schubert cells are complete intersections. Second, in Section 5.2 we show - as advertised above - that the regular nilpotent Hessenberg varieties \(\operatorname{Hess}(\mathsf{N},h)\) are paved by affine spaces. Third, in Section 5.3 we compute the Hilbert series of the ideals of \(X_{\circ}^{w}\cap\operatorname{Hess}(\mathsf{N},h)\). The content of Section 5.4 is to show that these cell ideals \(J_{w,h}\) are geometrically vertex decomposable (GVD). Finally, in Section 5.5 we state and prove some results on Frobenius splitting related to regular nilpotent Hessenberg Schubert cells.
### Complete intersections
As an immediate corollary of the results in Section 3, we see that, in analogy with the regular nilpotent Hessenberg patch ideal at the \(w_{0}\)-chart (as seen in [6]), regular nilpotent Hessenberg Schubert cell ideals are also complete intersection ideals. We begin with the following.
**Lemma 5.1**.: _Let \(\{g^{w}_{k_{1},\ell_{1}},\dots,g^{w}_{k_{r},\ell_{r}}\}\) be the set of generators of \(J_{w,h}\) from Definition 3.12, ordered so that \(\operatorname{in}_{<_{n}^{w}}(g^{w}_{k_{1},\ell_{1}})>_{n}^{w}\dots>_{n}^{w}\operatorname{in}_{<_{n}^{w}}(g^{w}_{k_{r},\ell_{r}})\). Then \(\operatorname{in}_{<_{n}^{w}}(g^{w}_{k_{i},\ell_{i}})\) does not divide any term of \(g^{w}_{k_{j},\ell_{j}}\) for any \(j>i\)._
Proof.: By Lemma 3.20, the ordered set of generators \(\{g^{w}_{k_{1},\ell_{1}},\dots,g^{w}_{k_{r},\ell_{r}}\}\) is equal to the ordered set \(\{\psi_{w}(f^{w_{0}}_{v_{w}(k_{1}),v_{w}(\ell_{1})}),\dots,\psi_{w}(f^{w_{0}}_ {v_{w}(k_{r}),v_{w}(\ell_{r})})\}\). Furthermore, we have that
\[\operatorname{in}_{<_{n}}(f^{w_{0}}_{v_{w}(k_{1}),v_{w}(\ell_{1})})>_{n}\dots>_{n}\operatorname{in}_{<_{n}}(f^{w_{0}}_{v_{w}(k_{r}),v_{w}(\ell_{r})})\]
and so \(\operatorname{in}_{<_{n}}(f^{w_{0}}_{v_{w}(k_{i}),v_{w}(\ell_{i})})\) does not divide any term of \(f^{w_{0}}_{v_{w}(k_{j}),v_{w}(\ell_{j})}\) for \(j>i\) by [6, Lemma 4.13]. Applying \(\psi_{w}\), we see that \(\operatorname{in}_{<_{n}^{w}}(g^{w}_{k_{i},\ell_{i}})\) does not divide any term of \(g^{w}_{k_{j},\ell_{j}}\) for any \(j>i\), completing the proof.
For our argument that each regular nilpotent Hessenberg Schubert cell ideal \(J_{w,h}\) is a complete intersection, we require that \(J_{w,h}\) be homogeneous. However, the generators \(g^{w}_{k,\ell}\) are not homogeneous with respect to the standard grading. To address this issue, we again use the \(\psi_{w}\) map to define a _non-standard_ grading on \(\mathbb{C}[\mathbf{z}_{w}]\). It will turn out, as we see below, that with respect to this non-standard grading, the \(g^{w}_{k,\ell}\) are homogeneous.
In Section 2.3 of this manuscript we defined, following [6], a \(\mathbb{Z}\)-grading on \(\mathbb{C}[\mathbf{x}_{w}]\) by \(\deg(x_{i,j}):=w(j)-i\). In the special case \(w=w_{0}\), this becomes the positive \(\mathbb{Z}\)-grading on \(\mathbb{C}[\mathbf{x}_{w_{0}}]\) given by \(\deg(x_{i,j}):=w_{0}(j)-i=n+1-i-j\). We now define a \(\mathbb{Z}\)-grading on \(\mathbb{C}[\mathbf{z}_{w}]\) by using this grading on \(\mathbb{C}[\mathbf{x}_{w_{0}}]\) together with the algebra homomorphism \(\psi_{w}\): namely, we define \(\deg(z_{i,j})\coloneqq\deg(\psi^{-1}_{w}(z_{i,j}))=w_{0}(v_{w}(j))-i\), for all \(z_{i,j}\in\mathbf{z}_{w}\). The fact that this is well-defined follows from Lemma 4.3(2).
_Remark 5.2_.: The grading on \(\mathbb{C}[\mathbf{z}_{w}]\) given above agrees with the grading that arises from the \(\mathsf{S}\)-action on \(\operatorname{Hess}(\mathsf{N},h)\cap X^{w}_{\circ}\), defined in Section 2.3. Indeed, for any \(z_{i,j}\in\mathbf{z}_{w}\), the grading from the \(\mathsf{S}\)-action yields \(\deg(z_{i,j})=w(j)-i\). By the construction of \(v_{w}\), we know that \(w=w_{0}v_{w}\). From this it follows that the two definitions agree.
_Example 5.3_.: We continue Example 3.15 and show that in this case, the two definitions of degree indeed agree, as pointed out in Remark 5.2. The following table uses the definition arising from the \(\mathsf{S}\)-action on \(\operatorname{Hess}(\mathsf{N},h)\cap X^{w}_{\circ}\).
| \(z_{i,j}\) | \(z_{1,1}\) | \(z_{1,2}\) | \(z_{1,3}\) | \(z_{2,1}\) | \(z_{2,2}\) |
|---|---|---|---|---|---|
| \(\deg(z_{i,j})=w(j)-i\) | \(2\) | \(3\) | \(1\) | \(1\) | \(2\) |

Meanwhile the following table uses the definition in terms of \(\psi^{-1}_{w}\). Recall that \(w_{0}(a)=n+1-a\) for all \(a\in[n]\).
| \(z_{i,j}\) | \(z_{1,1}\) | \(z_{1,2}\) | \(z_{1,3}\) | \(z_{2,1}\) | \(z_{2,2}\) |
|---|---|---|---|---|---|
| \(\psi^{-1}_{w}(z_{i,j})\) | \(x_{1,2}\) | \(x_{1,1}\) | \(x_{1,3}\) | \(x_{2,2}\) | \(x_{2,1}\) |
| \(\deg(z_{i,j})=w_{0}(v_{w}(j))-i\) | \(2\) | \(3\) | \(1\) | \(1\) | \(2\) |

Moreover, notice that the generators \(g^{w}_{k,\ell}\) are homogeneous with respect to this grading. Indeed, \(g^{w}_{4,1}\) has degree \(1\) and \(g^{w}_{4,2}\) has degree \(2\).
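The degree computations in the tables above, and the homogeneity of \(g^{w}_{4,1}\) and \(g^{w}_{4,2}\), can be confirmed with a few lines of Python (our own sketch); the grading assigns to \(z_{i,j}\) the weight \(w(j)-i\).

```python
import sympy as sp

w = (3, 4, 2, 1)                                           # w = 3421
zvars = [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2)]           # indices of z_w for this w
deg = {sp.Symbol(f'z_{i}_{j}'): w[j - 1] - i for (i, j) in zvars}
print(deg)                                                 # weights 2, 3, 1, 1, 2 as in the tables

z11, z13, z21, z22 = (sp.Symbol(s) for s in ('z_1_1', 'z_1_3', 'z_2_1', 'z_2_2'))
g41 = -z13 + z21
g42 = -z11 + z13*z21 + z22

def weights(f):
    """Graded degrees of the monomials of f; a single value means f is homogeneous."""
    p = sp.Poly(f, *deg)
    return {sum(deg[v] * e for v, e in zip(p.gens, m)) for m in p.monoms()}

print(weights(g41), weights(g42))                          # {1} and {2}
```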
The following lemma shows that it is no accident that the generators \(g^{w}_{k,\ell}\) in the previous example are homogeneous. Indeed, the generators \(f^{w_{0}}_{k,\ell}\) for the \(w_{0}\)-chart are homogeneous with respect to this grading, so our definition of the grading on \(\mathbb{C}[\mathbf{z}_{w}]\) via the \(\psi_{w}\) map allows us to easily translate the homogeneity to the \(\operatorname{Hess}(\mathsf{N},h)\cap X^{w}_{\circ}\) setting.
**Lemma 5.4**.: _Let \(h\) be an indecomposable Hessenberg function and \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\). Then, the above defines a positive \(\mathbb{Z}\)-grading on \(\mathbb{C}[\mathbf{z}_{w}]\) with respect to which the generators \(g^{w}_{k,\ell}\) for \(J_{w,h}\) are homogeneous._
Proof.: It was shown in [6, Section 2.3] that the grading on \(\mathbb{C}[\mathbf{x}_{w_{0}}]\) is positive. Since \(\psi^{-1}_{w}(\mathbf{z}_{w})=\mathbf{x}_{w_{0}}\setminus D_{w}\) is a subset of \(\mathbf{x}_{w_{0}}\), it follows that the grading on \(\mathbb{C}[\mathbf{z}_{w}]\) is also
positive. Moreover, it was shown in [6, Lemma 2.18] that the generators \(f_{k,\ell}^{w_{0}}\) are homogeneous with respect to the non-standard grading on \(\mathbb{C}[\mathbf{x}_{w_{0}}]\). We showed in Lemma 3.20 that \(g_{k,\ell}^{w}=\psi_{w}(f_{v_{w}(k),v_{w}(\ell)}^{w_{0}})\). Since \(\psi_{w}\) is a homomorphism of rings, each monomial of \(f_{v_{w}(k),v_{w}(\ell)}^{w_{0}}\) is either mapped to zero by \(\psi_{w}\) or retains its degree. In either case, we conclude that \(g_{k,\ell}^{w}\) is homogeneous.
In particular, from Lemma 5.4 we can conclude that the regular nilpotent Hessenberg Schubert cell ideals \(J_{w,h}\) are homogeneous with respect to this non-standard grading. Using this, we now show that for \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\), the regular nilpotent Hessenberg Schubert cell ideal \(J_{w,h}\) is a **triangular complete intersection** in the sense of [6, Definition 3.3].
**Theorem 5.5**.: _For any indecomposable Hessenberg function \(h\) and any \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\), the regular nilpotent Hessenberg Schubert cell ideal \(J_{w,h}\) is a complete intersection of height \(|\lambda_{h}|\) and the \(g_{k,\ell}^{w}\) defining \(J_{w,h}\) form a minimal generating set. Furthermore, \(J_{w,h}\) is a triangular complete intersection._
Proof.: We follow the argument in [6, Remark 3.8]. By Theorem 4.6, we know that the initial ideal \(\operatorname{in}_{<_{n}^{w}}(J_{w,h})\) is generated by distinct indeterminates, each one corresponding to a generator \(g_{k,\ell}^{w}\). Hence it is a complete intersection. By [10, Corollary 19.3.8], \(J_{w,h}\) is also a complete intersection. Since \(J_{w,h}\) is a homogeneous ideal (with respect to the non-standard grading given above), we can also conclude that the height of \(J_{w,h}\) and \(\operatorname{in}_{<_{n}^{w}}(J_{w,h})\) are equal by [8, Section 8.2.3 and Theorem 15.26]. Since we know the indeterminates are in one-to-one correspondence with the generators \(g_{k,\ell}^{w}\) of \(J_{w,h}\), from our definition of the notation \(\lambda_{h}\) in Notation 3.14, we conclude there are precisely \(|\lambda_{h}|\) many indeterminates generating \(\operatorname{in}_{<_{n}^{w}}(J_{w,h})\) and hence its height is \(|\lambda_{h}|\). Thus the height of \(J_{w,h}\) is also \(|\lambda_{h}|\), and we conclude that the \(g_{k,\ell}^{w}\) form a minimal generating set of \(J_{w,h}\). Finally, by Lemma 5.1, the \(g_{k,\ell}^{w}\) defining \(J_{w,h}\) can be ordered so that the initial term of a generator (which is an indeterminate) does not appear in any subsequent generators. This, together with the above argument, implies that \(J_{w,h}\) is a triangular complete intersection in the sense of [6, Definition 3.3].
### Affine pavings
An **affine paving** of an algebraic variety \(X\) is an ordered partition of \(X\) into disjoint \(X_{0},X_{1},X_{2},\dots\), such that
* each \(X_{i}\) is homeomorphic to an affine space; and
* each of the finite unions \(\cup_{i=0}^{r}X_{i}\) is a Zariski closed subset of \(X\).
Tymoczko proved in [15] that regular nilpotent Hessenberg varieties in the classical Lie type are paved by affine spaces, using delicate combinatorial and Lie-theoretic arguments. In this subsection, we provide a quick alternate proof of this result in Lie type \(A\). We begin by pointing out a straightforward general fact.
**Lemma 5.6**.: _Suppose that \(I\subseteq\mathbb{C}[x_{1},\dots,x_{n}]\) is a triangular complete intersection of height \(s\). Then \(\mathbb{V}(I)\cong\mathbb{A}^{n-s}\)._
Proof.: Since \(I\) is a triangular complete intersection of height \(s\), there is a monomial order \(<\) and Grobner basis \(G=\{f_{1},\dots,f_{s}\}\) such that \(\operatorname{in}_{<}(f_{j})=x_{i_{j}}\) and \(x_{i_{j}}\neq x_{i_{k}}\) whenever \(j\neq k\). We may assume that \(G\) is a _reduced_ Grobner basis so that
\(f_{1}=x_{i_{1}}-r_{1},\,f_{2}=x_{i_{2}}-r_{2},\dots,f_{s}=x_{i_{s}}-r_{s}\) and no term of any \(r_{j}\) is divisible by any of the indeterminates in \(\{x_{i_{1}},\dots,x_{i_{s}}\}\). Let \(A:=\{x_{1},\dots,x_{n}\}\setminus\{x_{i_{1}},\dots,x_{i_{s}}\}\) and consider the ring homomorphism
\[F:\mathbb{C}[x_{1},\dots,x_{n}]\to\mathbb{C}[A]\]
where \(F(x_{i_{j}})=r_{j}\) if \(x_{i_{j}}\notin A\) and \(F(x_{k})=x_{k}\) if \(x_{k}\in A\). Clearly, this map is surjective. Thus, \(\ker(F)\) is a prime ideal of height \(s\) which contains \(I\). But \(I\) is prime of height \(s\) and so \(\ker(F)=I\).
Combining Lemma 5.6 with Theorem 5.5 immediately yields the following result.
**Proposition 5.7**.: _Let \(h\) be an indecomposable Hessenberg function and \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\). Let \(r_{w}\) denote the dimension of the Schubert cell \(X_{\circ}^{w}\). Then, the intersection \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\) is isomorphic to the affine space \(\mathbb{A}^{r_{w}-|\lambda_{h}|}\)._
Next recall that the flag variety \(GL_{n}(\mathbb{C})/B\) has an affine paving. The disjoint sets \(X_{0},X_{1},X_{2},\dots\) in the paving are the Schubert cells and they are ordered by any total order which refines Bruhat order. We will now see that this affine paving induces an affine paving on each regular nilpotent Hessenberg variety.
**Theorem 5.8**.: _Let \(h\) be an indecomposable Hessenberg function. Then \(\operatorname{Hess}(\mathsf{N},h)\subseteq GL_{n}(\mathbb{C})/B\) has an affine paving by the set of all \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\), \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\), totally ordered by any order which refines Bruhat order._
Proof.: By Lemma 3.18, \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\) is non-empty if and only if \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\). By Proposition 5.7, each non-empty \(\operatorname{Hess}(\mathsf{N},h)\cap X_{\circ}^{w}\) is isomorphic to affine space. The result now follows by noting that the set of all Schubert cells form an affine paving of \(GL_{n}(\mathbb{C})/B\) and the Hessenberg variety \(\operatorname{Hess}(\mathsf{N},h)\) is a closed subvariety of \(GL_{n}(\mathbb{C})/B\).
### Hilbert series
In this section, we compute the Hilbert series of regular nilpotent Hessenberg Schubert cell ideals. The computation is a straightforward application of Theorem 5.5, together with well-known techniques regarding Hilbert series of complete intersections and initial ideals. We start by quickly reviewing this background on Hilbert series.
Recall that the Hilbert series of a positively \(\mathbb{Z}\)-graded ring \(R=\mathbb{C}\oplus R_{1}\oplus R_{2}\oplus\dots\) is the series
\[H_{R}(t)=\sum_{k=0}^{\infty}\dim_{\mathbb{C}}(R_{k})t^{k}.\]
The following results are well-known but are stated here for completeness.
**Lemma 5.9**.: _[_8_, Theorem 15.26 and Exercise 21.17]_ _Suppose that \(I\) is a homogeneous ideal of \(R=\mathbb{C}[x_{1},\dots,x_{n}]\) with respect to a positive \(\mathbb{Z}\)-grading on \(R\). Let \(<\) be a monomial order on \(R\). Then_
1. \(H_{R/I}(t)=H_{R/\operatorname{in}_{<}(I)}(t)\).
2. _Let_ \(I=\langle f_{1},\ldots,f_{k}\rangle\) _be a complete intersection of height_ \(k\) _where each_ \(f_{i}\) _is homogeneous and_ \(\deg(f_{i})=d_{i}\)_. Then_ \[H_{R/I}(t)=H_{R}(t)\prod_{i=1}^{k}(1-t^{d_{i}}).\]
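As a quick illustration of Lemma 5.9(2) (with degrees chosen only for the example, not coming from any particular \(J_{w,h}\)), the following sympy sketch computes the Hilbert series of a complete intersection cut out by forms of degrees 2 and 3 in a standard-graded polynomial ring in three variables.

```python
from sympy import symbols, prod, series, cancel

t = symbols('t')

var_degrees = [1, 1, 1]   # degrees of the ring variables (standard grading, illustrative)
gen_degrees = [2, 3]      # degrees of the complete-intersection generators (illustrative)

# H_R(t) = prod 1/(1 - t^{deg x_i}); Lemma 5.9(2) multiplies in (1 - t^{d_i}) per generator.
H_R = prod(1 / (1 - t**d) for d in var_degrees)
H_quotient = H_R * prod(1 - t**d for d in gen_degrees)

print(cancel(H_quotient))            # closed form of the rational function
print(series(H_quotient, t, 0, 6))   # 1 + 3*t + 5*t**2 + 6*t**3 + 6*t**4 + 6*t**5 + O(t**6)
```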
Based on the above lemma and the results from Section 5.1, we can explicitly compute the Hilbert series of \(R/J_{w,h}\), as follows.
**Theorem 5.10**.: _Equip \(R=\mathbb{C}[\mathbf{z}_{w}]\) with the positive \(\mathbb{Z}\)-grading defined in Section 5.1. Suppose that \(h\) is an indecomposable Hessenberg function and \(w\in\operatorname{Hess}(\mathsf{N},h)^{\mathsf{S}}\). Then \(\operatorname{in}_{<_{n}^{w}}(g^{w}_{k,\ell})=-z_{n+1-v_{w}(k),v_{w}^{-1}(v_{ w}(\ell)+1)}\) has degree \(v_{w}(k)-v_{w}(\ell)-1\) for all \(k>h(\ell)\). Furthermore, the Hilbert series of \(R/J_{w,h}\) is_
\[H_{R/J_{w,h}}(t)=\frac{\prod_{k>h(\ell)}\bigl{(}1-t^{v_{w}(k)-v_ {w}(\ell)-1}\bigr{)}}{\prod_{\begin{subarray}{c}i<w(j)\\ j\leq w^{-1}(i)\end{subarray}}\bigl{(}1-t^{w(j)-i}\bigr{)}}.\]
Proof.: By Lemmas 3.20 and 4.4, \(\operatorname{in}_{<_{n}^{w}}(g^{w}_{k,\ell})=\psi_{w}(\operatorname{in}_{<_{n}}(f^{w_{0}}_{v_{w}(k),v_{w}(\ell)}))\). This initial term was computed in Lemma 2.12(1) as \(\operatorname{in}_{<_{n}}(f^{w_{0}}_{v_{w}(k),v_{w}(\ell)})=-x_{n+1-v_{w}(k),v_{w}(\ell)+1}\). Then by Definition 3.8 of \(\psi_{w}\),
\[\operatorname{in}_{<_{n}^{w}}(g^{w}_{k,\ell})=\psi_{w}(-x_{n+1-v_{w}(k),v_{w} (\ell)+1})=-z_{n+1-v_{w}(k),v_{w}^{-1}(v_{w}(\ell)+1)}.\]
By definition of the \(\mathbb{Z}\)-grading given in Section 5.1, we have
\[\deg(-z_{n+1-v_{w}(k),v_{w}^{-1}(v_{w}(\ell)+1)})=w_{0}(v_{w}(\ell)+1)-(n+1-v_ {w}(k))=v_{w}(k)-v_{w}(\ell)-1,\]
since \(w_{0}(v_{w}(\ell)+1)=n+1-(v_{w}(\ell)+1)\). We argued in Lemma 5.4 that the \(g^{w}_{k,\ell}\) are homogeneous, so it follows that \(\deg(g^{w}_{k,\ell})=v_{w}(k)-v_{w}(\ell)-1\) for all \(k>h(\ell)\). This proves the first claim.
Next we compute the Hilbert series of \(R/J_{w,h}\). Since \(\{g^{w}_{k,\ell}\mid k>h(\ell)\}\) is a Gröbner basis for the complete intersection ideal \(J_{w,h}\) with respect to \(<_{n}^{w}\), by Lemma 5.9, we conclude
\[H_{R/J_{w,h}}(t)=H_{R}(t)\prod_{k>h(\ell)}\left(1-t^{\deg(g^{w}_{k,\ell})} \right).\]
But \(H_{R}(t)=\prod_{z_{i,j}\in\mathbf{z}_{w}}(1-t^{\mu_{i,j}})^{-1}\), where \(\mu_{i,j}=\deg(z_{i,j})=w(j)-i\), so
\[H_{R/J_{w,h}}(t)=\frac{\prod_{k>h(\ell)}\bigl{(}1-t^{v_{w}(k)-v_ {w}(\ell)-1}\bigr{)}}{\prod_{\begin{subarray}{c}i<w(j)\\ j\leq w^{-1}(i)\end{subarray}}\bigl{(}1-t^{w(j)-i}\bigr{)}},\]
where the product in the denominator is over those indices for which \(z_{i,j}\in\mathbf{z}_{w}\), as given in (3.1).
_Remark 5.11_.: Since \(R/J_{w,h}\) is isomorphic to a polynomial ring by the work in Section 5.2, we could have also computed the Hilbert series using the weights of the \(z_{i,j}\) remaining after taking the quotient of \(R\) by \(J_{w,h}\). In other words, \(H_{R/J_{w,h}}(t)\) is simply the series \(\prod(1-t^{\mu_{i,j}})^{-1}\), where the product is over those indices \((i,j)\) for which \(z_{i,j}\in\mathbf{z}_{w}\) does not appear as the initial term of any \(g^{w}_{k,\ell}\) with \(k>h(\ell)\).
### Geometric vertex decomposability
Knutson, Miller, and Yong related the notion of a vertex decomposition of a simplicial complex to a classical algebraic geometry degeneration technique, and called this a **geometric vertex decomposition** in the case that the degeneration is reduced [12]. Recently, Klein and the fourth author expanded this definition recursively to **geometrically vertex decomposable** ideals, which provides a generalization of vertex decomposable simplicial complexes [11]. The condition that a simplicial complex is vertex decomposable can be strengthened to require that the decomposition respects a fixed ordering of the vertices. This strengthened notion is called compatibly vertex decomposable. Klein and Rajchgot similarly introduced the notion of \(<\)**-compatibly geometrically vertex decomposable**, where the degenerations respect a lexicographic monomial order \(<\). In particular, an ideal is \(<\)-compatibly geometrically vertex decomposable if and only if its initial ideal is the Stanley-Reisner ideal of a compatibly vertex decomposable simplicial complex with respect to a corresponding vertex order [11, Proposition 2.14].
While we refer the reader to [11] for formal definitions, we provide the following sufficient condition for being geometrically vertex decomposable, which is a straightforward generalization of [11, Proposition 2.14].
**Lemma 5.12** ([6, Lemma 3.6]).: _Let \(I\subseteq\mathbb{C}[\mathbf{x}]\) be an ideal in a polynomial ring and \(<\) a lexicographic monomial order on \(\mathbb{C}[\mathbf{x}]\). If \(\mathrm{in}_{<}(I)\) is an ideal of indeterminates, then \(I\) is \(<\)-compatibly geometrically vertex decomposable._
We saw in Theorem 4.6 that the initial ideal of the regular nilpotent Hessenberg Schubert cell ideal \(J_{w,h}\) is an ideal of indeterminates, so the following is immediate from Lemma 5.12 upon choosing the lexicographic monomial order \(<_{n}^{w}\) from Definition 4.1.
**Proposition 5.13**.: _Let \(h\) be an indecomposable Hessenberg function and \(w\in\mathrm{Hess}(\mathsf{N},h)^{\mathsf{S}}\). Then \(J_{w,h}\) is \(<_{n}^{w}\)-compatibly geometrically vertex decomposable._
### Frobenius splitting
Many well-studied subvarieties of the flag variety are known to be _Frobenius split_, including Schubert varieties and Richardson varieties. Furthermore, each Schubert cell in the flag variety has a Frobenius splitting which compatibly splits all opposite Schubert varieties intersected with that Schubert cell.
In this subsection, we observe that the situation is similar for regular nilpotent Hessenberg varieties. Namely, each Schubert cell in the flag variety has a Frobenius splitting which compatibly splits all the regular nilpotent Hessenberg varieties intersected with that Schubert cell. This extends [6, Corollary 5.14].
Let \(\mathbb{K}\) be an algebraically closed field of characteristic \(p>0\). A **Frobenius splitting** of a polynomial ring \(R=\mathbb{K}[y_{1},\ldots,y_{n}]\) is a map \(\phi:R\to R\) which satisfies the following
|
2306.08768
|
Generalizable One-shot Neural Head Avatar
|
We present a method that reconstructs and animates a 3D head avatar from a
single-view portrait image. Existing methods either involve time-consuming
optimization for a specific person with multiple images, or they struggle to
synthesize intricate appearance details beyond the facial region. To address
these limitations, we propose a framework that not only generalizes to unseen
identities based on a single-view image without requiring person-specific
optimization, but also captures characteristic details within and beyond the
face area (e.g. hairstyle, accessories, etc.). At the core of our method are
three branches that produce three tri-planes representing the coarse 3D
geometry, detailed appearance of a source image, as well as the expression of a
target image. By applying volumetric rendering to the combination of the three
tri-planes followed by a super-resolution module, our method yields a high
fidelity image of the desired identity, expression and pose. Once trained, our
model enables efficient 3D head avatar reconstruction and animation via a
single forward pass through a network. Experiments show that the proposed
approach generalizes well to unseen validation datasets, surpassing SOTA
baseline methods by a large margin on head avatar reconstruction and animation.
|
Xueting Li, Shalini De Mello, Sifei Liu, Koki Nagano, Umar Iqbal, Jan Kautz
|
2023-06-14T22:33:09Z
|
http://arxiv.org/abs/2306.08768v1
|
# Generalizable One-shot Neural Head Avatar
###### Abstract
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image. Existing methods either involve time-consuming optimization for a specific person with multiple images, or they struggle to synthesize intricate appearance details beyond the facial region. To address these limitations, we propose a framework that not only generalizes to unseen identities based on a single-view image without requiring person-specific optimization, but also captures characteristic details within and beyond the face area (_e.g._ hairstyle, accessories, _etc._). At the core of our method are three branches that produce three tri-planes representing the coarse 3D geometry, detailed appearance of a source image, as well as the expression of a target image. By applying volumetric rendering to the combination of the three tri-planes followed by a super-resolution module, our method yields a high fidelity image of the desired identity, expression and pose. Once trained, our model enables efficient 3D head avatar reconstruction and animation via a single forward pass through a network. Experiments show that the proposed approach generalizes well to unseen validation datasets, surpassing SOTA baseline methods by a large margin on head avatar reconstruction and animation.
## 1 Introduction
Head avatar animation [63; 32; 42] aims to animate a source portrait image with the motion (i.e., pose and expression) from a target image. It is a long-standing task in computer vision that has been widely applied to video conferencing, computer games, Virtual Reality (VR) and Augmented Reality (AR). In real-world applications, synthesizing a realistic portrait image that matches the given identity and motion raises two major challenges - efficiency and high fidelity. Efficiency requires the model to generalize to arbitrary unseen identities and motion without any further optimization during inference. High fidelity demands the model to not only faithfully preserve intricate details in the input image (_e.g._ hairstyle, glasses, earrings), but also hallucinate plausibly whenever necessary (_e.g._ synthesize the occluded facial region when the input is in profile view or generate teeth when the mouth transitions from closed to open).
Traditional methods [16; 19; 13] based on 3D Morphable Models (3DMMs) learn networks that predict shape, expression, pose and texture of an arbitrary source portrait image efficiently. However, these approaches often fall short in synthesizing realistic details due to limited mesh resolution and a coarse texture model. Additionally, they exclusively focus on the facial region while neglecting other personal characteristics such as hairstyle or glasses. Inspired by the remarkable progress made in Generative Adversarial Networks (GANs) [24; 30; 64], another line of methods [63; 72; 75; 54; 51; 78; 17] represent motion as a warping field that transforms the given source image to match the desired pose and expression. Yet, without explicit 3D understanding of the given portrait image, these methods can only rotate the head within limited angles, before exhibiting warping artifacts, unrealistic distortions and undesired identity changes across different target views. Recently, neural rendering [43] has demonstrated impressive results in facial avatar reconstruction and
animation [21; 46; 60; 61; 81; 2; 47; 22; 23; 26; 4]. Compared to meshes with fixed and pre-defined topology, an implicit volumetric representation is capable of learning photo-realistic details including areas beyond the facial region. However, these models have limited capacity and cannot generalize trivially to unseen identities during inference. As a result, they require time-consuming optimization and extensive training data of a specific person to faithfully reconstruct their 3D neural avatars.
In this paper, we present a framework aiming at a more practical but challenging scenario - given an unseen single-view portrait image, we reconstruct an implicit 3D head avatar that not only captures photo-realistic details within and beyond the face region, but also is readily available for animation without requiring further optimization during inference. To this end, we propose a framework with three branches that disentangle and reconstruct the coarse geometry, detailed appearance and expression of a portrait image, respectively. Specifically, given a source portrait image, our _canonical branch_ reconstructs its coarse 3D geometry by producing a canonicalized tri-plane [9; 10] with a neutral expression and frontal pose. To capture the fine texture and characteristic details of the input image, we introduce an _appearance branch_ that utilizes the depth rendered from the canonical branch to create a second tri-plane by mapping pixel values from the input image onto corresponding positions in the canonicalized 3D space. Finally, we develop an _expression branch_ that takes the frontal rendering of a 3DMM with a target expression and a source identity as input. It then produces a third tri-plane that modifies the expression of the reconstruction as desired. After combining all three tri-planes by summation, we carry out volumetric rendering followed by a super-resolution block and produce a high-fidelity facial image with source identity as well as target pose and expression. Our model is learned with large numbers of portrait images of various identity and motion during training. At inference time, it can be readily applied to an unseen single-view image for 3D reconstruction and animation, eliminating the need for additional test-time optimization.
To summarize, our contributions are:
* We propose a framework for 3D head avatar reconstruction and animation that simultaneously captures intricate details in a portrait image while generalizing to unseen identities without test-time optimization.
* To achieve this, we introduce three novel modules for coarse geometry, detailed appearance as well as expression disentanglement and modeling, respectively.
* Our model can be directly applied to animate an unseen image during inference efficiently, achieving favorable performance against state-of-the-art head avatar animation methods.
## 2 Related Works
### 3D Morphable Models
Reconstructing and animating 3D faces from images has been a fundamental task in computer vision. Following the seminal work by Parke _et al_. [48], numerous methods have been proposed to represent the shape and motion of human faces by 3D Morphable Models (3DMMs) [50; 39; 18; 7; 38]. These methods represent the shape, expression and texture of a given person by linearly combining a set of bases using person-specific parameters. Building upon 3DMMs, many works have been proposed to reconstruct and animate human faces by estimating the person-specific parameters given a single-view portrait image [16; 19; 13; 37]. While 3DMMs provide a strong prior for understanding of human faces, they are inherently limited in two ways. First, they exclusively focus on the facial region and fail to capture other characteristic details such as hairstyle, eye glasses, inner mouth _etc_. Second, the geometry and texture fidelity of the reconstructed 3D faces are limited by mesh resolution, leading to unrealistic appearance in the rendered images. In this work, we present a method that effectively exploits the strong prior in 3DMMs while addressing its geometry and texture fidelity limitation by employing neural radiance fields [43; 5; 45].
### 2D Expression Transfer
The impressive performance of Generative Adversarial Networks (GANs) [24] spurred another line of head avatar animation methods [63; 72; 75; 54; 51; 78; 17]. Instead of reconstructing the underlying 3D shape of human faces, these methods represent motion (_i.e_. expression and pose) as a warping field. Expression transfer is carried out by applying a warping operation onto the source image to
match the motion of the driving image. By leveraging the powerful capacity of generative models, these methods produce high fidelity results with more realistic appearance compared to 3DMM-based methods. However, without an explicit understanding and modeling of the underlying 3D geometry of human faces, these methods usually suffer from warping artifacts, unrealistic distortions and undesired identity change when the target pose and expression are significantly different from the ones in the source image. In contrast, we explicitly reconstruct the underlying 3D geometry and texture of a portrait image, enabling our method to produce more realistic synthesis even in cases of large pose change during animation.
### Neural Head Avatars
Neural Radiance Field (NeRF) [43; 5; 45] debuts remarkable performance for 3D scene reconstruction. Many works [21; 46; 60; 61; 81; 2; 47; 22; 23; 26; 4; 34; 82; 3] attempt to apply NeRF to human portrait reconstruction and animation by extending it from static scenes to dynamic portrait videos. Although these methods demonstrate realistic reconstruction results, they inefficiently learn separate networks for different identities and require thousands of frames from a specific individual for training. Another line of works focus on generating a controllable 3D head avatar from random noise [59; 67; 57; 44; 68; 36; 84; 56]. Intuitively, 3D face reconstruction and animation could be achieved by combining these generative methods with GAN inversion [52; 20; 70; 66]. However, the individual optimization process for each frame during GAN inversion is computationally infeasible for real-time performance in applications such as video conferencing. Meanwhile, several works [62; 74; 6; 15] focus on reconstructing 3D avatars from arbitrary input images, but they cannot animate or reenact these avatars. Closest to our problem setting, few works explore portrait reconstruction and animation in a few-shot [76] or one-shot [27; 32; 42; 85; 40] manner. Specifically, the ROME method [32] combines a learnable neural texture with explicit FLAME meshes [39] to reconstruct a 3D head avatar, encompassing areas beyond the face region. However, using meshes as the 3D shape representation prevents the model from producing high-fidelity geometry and appearance details. Instead of using explicit meshes as 3D representation, the HeadNeRF [27] and MofaNeRF methods learn implicit neural networks that take 3DMM parameters (_i.e._ identity and expression coefficients or albedo and illumination parameters) as inputs to predict the density and color for each queried 3D point. Additionally, the OTAvatar [42] method proposes to disentangle latent style codes from a pre-trained 3D-aware GAN [9] into separate motion and identity codes, enabling facial animation by exchanging the motion codes. Nonetheless, all three models [27; 85; 42] require laborious test-time optimization, and struggle to reconstruct photo-realistic texture details of the given portrait image presumably because they encode the appearance using a compact latent vector. In this paper, we propose the first 3D head neural avatar animation work that not only generalizes to unseen identities without test-time optimization, but also captures intricate details from the given portrait image, surpassing all previous works in quality.
## 3 Method
We present a framework that takes a source image \(I_{s}\) together with a target image \(I_{t}\) as inputs, and synthesizes an image \(I_{o}\) that combines the identity from the source image and the motion (_i.e._, expression and head pose) from the target image. The overview of the proposed method is illustrated in Fig. 1. Given a source image including a human portrait, we begin by reconstructing the coarse geometry and fine-grained person-specific details via a canonical branch and an appearance branch, respectively. To align this reconstructed 3D neural avatar with the expression in the target image, we employ an off-the-shelf 3DMM [16] model 1 to produce a frontal-view rendering that combines the identity from the source image with the expression from the target image. Our expression branch then takes this frontal-view rendering as input and outputs a tri-plane that aligns the reconstructed 3D avatar to the target expression. By performing volumetric rendering from the target camera view and applying a super-resolution block, we synthesize a high-fidelity image with the desired identity and motion. In the following, we describe the details of each branch in our model, focusing on answering three questions: a) how to reconstruct the coarse shape and texture of a portrait image with neutral expression in Sec. 3.1; b) how to capture appearance details in the source image in Sec. 3.2; and c) how to model and transfer expression from the target image onto the source image in Sec. 3.3.
The super-resolution module and the training stages with associated objectives will be discussed in Sec. 3.4 and Sec. 3.5, respectively.
### Coarse Reconstruction via the Canonical Branch
Given a source image \(I_{s}\) depicting a human portrait captured from the camera view \(C_{s}\), the canonical branch predicts its coarse 3D reconstruction represented as a tri-plane [9; 10]\(T_{c}\). To serve as a strong geometric prior for the subsequent detailed appearance and expression modeling, we impose two crucial properties on the coarse reconstruction. First, the coarse reconstruction of face images captured from different camera views should be aligned in the 3D canonical space, allowing the model to generalize to single-view portrait images captured from arbitrary camera views. Second, we enforce the coarse reconstruction to have a _neutral_ expression (_i.e_., opened eyes and closed mouth), which facilitates the expression branch to add the target expression effectively.
Based on these two goals, we design an encoder \(E_{c}\) that takes the source image \(I_{s}\in\mathbb{R}^{3\times 512\times 512}\) as input and predicts a canonicalized tri-plane \(T_{c}\in\mathbb{R}^{3\times 32\times 256\times 256}\). Specifically, we fine-tune a pre-trained SegFormer [65] model as our encoder, whose transformer design enables effective mapping from the 2D input to the canonicalized 3D space. Furthermore, to ensure that \(T_{c}\) has a neutral expression, we employ a 3DMM [16] to render a face with the same identity and camera pose of the source image, but with a neutral expression. We then encourage the rendering of \(T_{c}\) to be close to the 3DMM's rendering within the facial region by computing an L1 loss and a perceptual loss [28; 77] between them:
\[\begin{split} I_{c}&=\mathcal{R}_{\mathcal{V}}(T_{ c},C_{s})\\ I_{neu},M_{neu}&=\mathcal{R}_{\mathcal{M}}( \alpha_{s},\beta_{0},C_{s})\\ \mathcal{L}_{neutral}&=||I_{neu}-I_{c}\times M_{neu }||+||\phi(I_{neu})-\phi(I_{c}\times M_{neu})||,\end{split} \tag{1}\]
where \(\mathcal{R}_{\mathcal{V}}(T,C)\) is the volumetric rendering of a tri-plane \(T\) from the camera view \(C\), and \(\phi\) is a pre-trained VGG-19 network [55]. \(I,M=\mathcal{R}_{\mathcal{M}}(\alpha,\beta,C)\) is the 3DMM [16] that takes identity coefficients \(\alpha\) and expression coefficients \(\beta\) as inputs, and renders an image \(I\) and a mask \(M\) including only the facial region from camera view \(C\). By setting \(\alpha=\alpha_{s}\) (_i.e_. the identity coefficients of \(I_{s}\)) and \(\beta_{0}=\bar{\mathbf{0}}\) in Eq. 1, we ensure that \(I_{neu}\) has the same identity as \(I_{s}\) but with a neutral expression.
As shown in Fig. 2(c), the rendered image \(I_{c}\) from the canonical tri-plane \(T_{c}\) indeed has a neutral expression with opened eyes and closed mouth, but lacks fine-grained appearance. This is because mapping a portrait from the 2D input to the canonicalized 3D space is a challenging and holistic process. As a result, the encoder primarily focuses on aligning inputs from different camera views and neglects individual appearance details. Similar observations have also been noted in [15; 74]. To
Figure 1: **Overview.** The proposed method contains four main modules: a canonical branch that reconstructs the coarse geometry and texture of a portrait with a neutral expression (Sec. 3.1), an appearance branch that captures fine-grained person-specific details (Sec. 3.2), an expression branch that modifies the reconstruction to desired expression, and a super-resolution block that renders high-fidelity synthesis (Sec. 3.4).
resolve this issue, we introduce an appearance branch that spatially transfers details from the input image to the learned coarse reconstruction's surface in the next section.
### Detail Reconstruction via the Appearance Branch
We now introduce the appearance branch that aims to capture and reconstruct intricate facial details in the input image. The core idea is to leverage the depth map rendered from the canonical tri-plane \(T_{c}\) to compute the 3D position of each pixel in the image such that the facial details can be accurately "transferred" from the 2D input image to the 3D reconstruction. Specifically, we first render \(T_{c}\) from the source camera view \(C_{s}\) to obtain a depth image \(D_{s}\in\mathbb{R}^{128\times 128}\). The 3D position (denoted as \(P^{ij}\)) of each pixel \(I_{s}^{ij}\) in the source image \(I_{s}\) can be computed by \(P^{ij}=\textbf{o}+D_{s}^{ij}\textbf{d}\), where **o** and **d** are the ray origin and viewing direction sampled from the camera view \(C_{s}\) of the source image. Based on the 3D locations of all pixels, we construct a neural point cloud [69; 58] by associating the color information from each pixel \(I_{s}^{ij}\) in the 2D image to its corresponding 3D position \(P^{ij}\). Instead of directly using the RGB color of each pixel, we employ an encoder \(E_{p}\) to extract 2D features (denoted as \(F\in\mathbb{R}^{32\times 128\times 128}\)) from \(I_{s}\) and associate the feature at each pixel to its corresponding 3D location. As a result, we establish a neural point cloud composed of all visible pixels in the image and associate each point with a 32-dimensional feature vector. This mapping process from a 2D image to the 3D space is referred to as "Lifting" and demonstrated in Fig. 1(b).
To integrate the neural point cloud into the canonical tri-plane \(T_{c}\), we propose a "Rasterization" process (see Fig. 1(c)) that converts the neural point cloud to another tri-plane denoted as \(T_{p}\) such that it can be directly added to \(T_{c}\). For each location on the planes (_i.e._ the XY-, YZ-, XZ-plane) in \(T_{p}\), we compute its nearest point in the neural point cloud and transfer the feature from the nearest point onto the query location on the plane. A comparison between Fig. 2(d) and Fig. 2(e) reveals the contribution of our appearance tri-plane \(T_{p}\), which effectively transfers the fine-grained details (_e.g._, pattern on the hat) from the image onto the 3D reconstruction.
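To make the lifting and rasterization steps concrete, here is a minimal numpy sketch under simplifying assumptions: toy resolutions, a single shared ray origin, and rasterization onto only the XY-plane by a brute-force nearest-neighbour search on projected coordinates (one simplified reading of the procedure above). The actual model uses the source camera's rays, \(128\times 128\) depth/feature maps, 32-dimensional features, and all three planes of a \(256\times 256\) tri-plane.

```python
import numpy as np

H = W = 8          # toy depth/feature resolution (the paper uses 128 x 128)
C = 4              # toy feature dimension (the paper uses 32)
plane_res = 16     # toy plane resolution (the paper uses 256)

depth = np.random.rand(H, W) + 1.0      # D_s: depth rendered from the canonical tri-plane
feats = np.random.rand(C, H, W)         # F: 2D features extracted from the source image

# Lifting: P_ij = o + D_ij * d, with one origin and a per-pixel ray direction.
origin = np.array([0.0, 0.0, -2.0])
xs, ys = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))
dirs = np.stack([xs, ys, np.ones_like(xs)], axis=-1)
dirs = dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)
points = (origin + depth[..., None] * dirs).reshape(-1, 3)   # (H*W, 3) neural point cloud
point_feats = feats.reshape(C, -1).T                         # (H*W, C) feature per point

# Rasterization onto the XY-plane: each plane cell copies the feature of the point
# whose projected (x, y) coordinates are nearest to the cell centre.
grid = np.linspace(-1, 1, plane_res)
gx, gy = np.meshgrid(grid, grid)
cells = np.stack([gx.ravel(), gy.ravel()], axis=-1)              # (plane_res**2, 2)
d2 = ((cells[:, None, :] - points[None, :, :2]) ** 2).sum(-1)    # brute-force squared distances
nearest = d2.argmin(axis=1)
T_p_xy = point_feats[nearest].T.reshape(C, plane_res, plane_res)
print(T_p_xy.shape)   # (4, 16, 16): one plane of the appearance tri-plane
```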
### Expression Modeling via the Expression Branch
Expression reconstruction and transfer is a challenging task. Naively predicting the expression from an image poses difficulties in disentangling identity, expression, and head rotation. Meanwhile, 3DMMs provide a well-established expression representation that captures common human expressions effectively. However, the compact expression coefficients in 3DMMs are highly correlated with the expression bases and do not include spatially varying deformation details. As a result, conditioning a network solely on these coefficients for expression modeling can be challenging. Instead, we propose a simple expression branch that fully leverages the expression prior in any 3DMM and seamlessly integrates with the other two branches. The core idea is to provide the model with target expression information using a 2D rendering from the 3DMM instead of the expression coefficients. As shown in Fig. 1(a), given the source image \(I_{s}\) and target image \(I_{t}\), we predict their corresponding shape and expression coefficients denoted as \(\alpha_{s}\) and \(\beta_{t}\) respectively using a 3DMM prediction network [16]. By combining \(\alpha_{s}\) and \(\beta_{t}\), we render a _frontal-view_ facial image as \(I_{exp}=\mathcal{R}_{\mathcal{M}}(\alpha_{s},\beta_{t},C_{front})\), where \(C_{front}\) is a pre-defined frontal camera pose. We then use an encoder (denoted as \(E_{e}\)) that takes \(I_{exp}\) as input and produces an expression tri-plane \(T_{e}\in\mathbb{R}^{3\times 32\times 256\times 256}\). We modify the canonical tri-plane \(T_{c}\) to the target expression by directly adding \(T_{e}\) with \(T_{c}\). Note that we always render \(I_{exp}\) in the pre-defined frontal view so that the expression encoder can focus on modeling expression changes only and ignore motion changes caused by head rotation. Moreover, our expression encoder also learns to hallucinate realistic inner mouths (_e.g._, teeth) according to the target expression, as
Figure 2: **Visualization of the contribution of each branch.** (a) Source image. (b) Target image. (c) Rendering of the canonical tri-plane. (d) Rendering of the combination of the canonical and expression tri-planes. (e) Rendering of the combination of all three tri-planes.
the 3DMM rendering \(I_{exp}\) does not model the inner mouth region. Fig. 2(d) visualizes the images rendered by combining the canonical and expression tri-planes, where the target expression from Fig. 2(b) is effectively transferred onto Fig. 2(a) through the expression tri-plane.
### The Super-resolution Module
By adding the canonical and appearance tri-planes from a source image, together with the expression tri-plane from a target image, we reconstruct and modify the portrait in the source image to match the target expression. Through volumetric rendering, we can obtain a portrait image at a desired camera view. However, the high memory and computational cost of volumetric rendering prevents the model from synthesizing a high-resolution output. To overcome this challenge, existing works [9; 36; 56; 57] utilize a super-resolution module that takes a low-resolution rendered image or feature map as input and synthesizes a high-resolution result. In this work, we follow this line of works and fine-tune a pre-trained GFPGAN [64; 57] as our super-resolution module [57]. By pre-training on the task of 2D face restoration, GFPGAN learns a strong prior for high-fidelity facial image super-resolution. Additionally, its layer-wise feature-conditioning design prevents the model from deviating from the low-resolution input, thereby mitigating temporal or multi-view inconsistencies, as observed in [57].
### Model Training
We utilize a two-stage training schedule to promote multi-view consistent reconstructions, as well as to reduce the overall training time. In the first stage, we train our model without the super-resolution module using a reconstruction objective and the neutral expression loss discussed in Sec. 3.1. Specifically, we compare the rendering of a) the canonical tri-plane (_i.e._, \(I_{c}=\mathcal{R}_{\mathcal{V}}(T_{c},C_{t})\)), b) the combination of the canonical and expression tri-planes (_i.e._, \(I_{c+e}=\mathcal{R}_{\mathcal{V}}(T_{c}+T_{e},C_{t})\)), and c) the combination of all three tri-planes (_i.e._, \(I_{c+e+p}=\mathcal{R}_{\mathcal{V}}(T_{c}+T_{e}+T_{p},C_{t})\)) with the target image via the L1 and the perceptual losses similarly to Eq. 1:
\[\begin{split}\mathcal{L}_{1}&=||I_{c}-I_{t}||+||I_{c+e}-I_{t}||+||I_{c+e+p}-I_{t}||,\\ \mathcal{L}_{p}&=||\phi(I_{c})-\phi(I_{t})||+||\phi(I_{c+e})-\phi(I_{t})||+||\phi(I_{c+e+p})-\phi(I_{t})||.\end{split} \tag{2}\]
Intuitively, applying supervision to different tri-plane combinations encourages the model to predict meaningful reconstruction in all three branches. To encourage smooth tri-plane reconstruction, we also adopt the TV loss proposed in [9]. The training objective for the first stage is \(\mathcal{L}^{1}=\lambda_{1}\mathcal{L}_{1}+\lambda_{p}\mathcal{L}_{p}+\lambda _{TV}\mathcal{L}_{TV}+\lambda_{neutral}\mathcal{L}_{neutral}\), where \(\lambda_{x}\) is the weight of the corresponding objective.
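A hedged PyTorch-style sketch of the first-stage reconstruction objective is given below; the feature extractor \(\phi\) is a small stand-in for the pre-trained VGG-19 network, the renderings are random tensors standing in for \(\mathcal{R}_{\mathcal{V}}(T_{c},C_{t})\), \(\mathcal{R}_{\mathcal{V}}(T_{c}+T_{e},C_{t})\) and \(\mathcal{R}_{\mathcal{V}}(T_{c}+T_{e}+T_{p},C_{t})\), the loss weights are illustrative, and the TV and neutral-expression terms are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for the pre-trained VGG-19 feature extractor phi (a real implementation
# would use fixed VGG-19 features taken at several layers).
phi = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 8, 3, padding=1))

def recon_losses(I_c, I_ce, I_cep, I_t):
    """L1 and perceptual terms of Eq. 2 over the three tri-plane combinations."""
    l1 = sum(F.l1_loss(x, I_t) for x in (I_c, I_ce, I_cep))
    lp = sum(F.l1_loss(phi(x), phi(I_t)) for x in (I_c, I_ce, I_cep))
    return l1, lp

# Toy renderings at the target view.
I_t = torch.rand(1, 3, 64, 64)
I_c, I_ce, I_cep = (torch.rand(1, 3, 64, 64) for _ in range(3))

l1, lp = recon_losses(I_c, I_ce, I_cep, I_t)
loss_stage1 = 1.0 * l1 + 0.1 * lp    # illustrative lambda_1 and lambda_p
print(float(loss_stage1))
```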
In the second stage, to encourage multi-view consistency, we only fine-tune the super-resolution module and freeze other parts of the model. We use all the losses in the first stage and a dual-discriminator proposed in [9] that takes the concatenation of the low-resolution rendering and the high-resolution reconstruction as input. Specifically, we use the logistic formulation [24; 30; 64] of the adversarial loss \(\mathcal{L}_{adv}=\mathbb{E}_{(I_{o}^{l},I_{o})}\,\text{softplus}(D(I_{o}^{l}\oplus I_{o}))\), where \(I_{o}^{l}\) is the upsampled version of the low-resolution rendered image and \(\oplus\) represents the concatenation operation. The overall training objective of the second stage is \(\mathcal{L}^{2}=\mathcal{L}^{1}+\lambda_{adv}\mathcal{L}_{adv}\).
## 4 Experiments
### Datasets
Training datasets.We train our model using a single-view image dataset (FFHQ [29]) and two video datasets (CelebV-HQ [83] and RAVDESS [41]). For the single-view images in FFHQ, we carry out the 3D portrait reconstruction task, _i.e._, the source and target images are exactly the same. For the CelebV-HQ and the RAVDESS datasets, we randomly sample two frames from the same video to formulate a pair of images with the same identity but different motion. Furthermore, we observe that parts of videos in the CelebV-HQ and the RAVDESS datasets are fairly static, leading to a pair of source and target images with similar head poses, impeding the model from learning correct 3D shape of portraits. To enhance the learning of 3D reconstruction, we further employ an off-the-shelf 3D-aware GAN model [9] to synthesize 55,857 pairs of images rendered from two randomly sampled camera views, _i.e._, the source and target images have the same identity and expression but different views.
Evaluation datasets.We evaluate our method and the baselines [32; 27; 42; 72] on the CelebA dataset [35] and the testing split of the HDTF dataset [79], following [72; 42]. Note that our method has never seen any image from these two datasets during training while both StyleHeat[72] and OTAvatar [42] are trained using the training split of the HDTF dataset. Nonetheless, our method generalizes well to all validation datasets and achieves competitive performance, as discussed later.
Figure 3: **Cross-identity reenactment on CelebA [35] and HDTF [79]. The first two rows show cross-identity reenactment results on the CelebA dataset, while the last two rows demonstrate motion transfer from videos in the HDTF dataset to images in the CelebA dataset.**
Datasets pre-processing.We compute the camera poses of images in all training and testing datasets using [16] following [9]. As in previous literature [32, 27, 85], background modeling is out of the scope of this work; we further use an off-the-shelf portrait matting method [31] to remove backgrounds in all training and testing images.
### Metrics and Baselines
We evaluate all methods for 3D portrait reconstruction, same-identity and cross-identity reenactment.
3D portrait reconstruction.We use all 29,954 high-fidelity images in CelebA [35]. We measure the performance by computing various metrics between the reconstructed images and the input images, including the L1 distance, perceptual similarity metric (LPIPS), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and Fréchet inception distance (FID).
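For concreteness, a small numpy sketch of two of the listed metrics (the L1 distance and PSNR) on images normalized to \([0,1]\) is shown below; LPIPS, SSIM and FID depend on pretrained networks or dataset-level statistics and are typically computed with standard packages.

```python
import numpy as np

def l1_distance(pred, target):
    return float(np.abs(pred - target).mean())

def psnr(pred, target, max_val=1.0):
    # PSNR = 10 * log10(MAX^2 / MSE), assuming pred differs from target.
    mse = float(((pred - target) ** 2).mean())
    return 10.0 * np.log10(max_val ** 2 / mse)

pred = np.random.rand(256, 256, 3)     # reconstructed image (toy data)
target = np.random.rand(256, 256, 3)   # input image (toy data)
print(l1_distance(pred, target), psnr(pred, target))
```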
### Qualitative and Quantitative Results
3D portrait reconstruction.For each testing portrait image, we reconstruct its 3D head avatar and render it from the observed view using different methods. By comparing the rendering with the input image, we assess the fidelity of 3D reconstruction of each method. Table 1 shows the quantitative results, demonstrating that our model achieves significantly better reconstruction and fidelity scores. These results highlight the ability of our model to faithfully capture details in the input images and reconstruct high-fidelity 3D head avatars.
\begin{table}
\begin{tabular}{c|c c c c c c c c c|c c c c} \hline \hline & \multicolumn{9}{c|}{Same-Identity Reenactment} & \multicolumn{4}{c}{Cross-Identity Reenactment} \\ Methods & PSNR\(\uparrow\) & SSIM\(\uparrow\) & CSIM\(\uparrow\) & AED\(\downarrow\) & APD\(\downarrow\) & AKD\(\downarrow\) & LPIPS\(\downarrow\) & L1\(\downarrow\) & FID\(\downarrow\) & CSIM\(\uparrow\) & AED\(\downarrow\) & APD\(\downarrow\) & FID\(\downarrow\) \\ \hline ROME [32] & 20.75 & 0.838 & 0.746 & **0.123** & 0.012 & 2.938 & 0.173 & 0.047 & 31.55 & 0.629 & 0.247 & 0.020 & **43.38** \\ OTAvatar [42] & 20.12 & 0.806 & 0.619 & 0.162 & 0.017 & 2.933 & 0.198 & 0.053 & 36.63 & 0.514 & 0.282 & 0.028 & 44.86 \\ StyleHeat [72] & 19.18 & 0.805 & 0.654 & 0.141 & 0.021 & 2.843 & 0.194 & 0.056 & 108.03 & 0.537 & **0.246** & 0.025 & 105.1 \\ Next3D-PTI [56] & 19.89 & 0.813 & 0.645 & 0.137 & 0.035 & **1.449** & 0.180 & 0.038 & 41.66 & 0.581 & 0.291 & 0.045 & 101.8 \\ Ours & **22.15** & **0.868** & **0.789** & 0.129 & **0.010** & 2.596 & **0.117** & **0.037** & **21.60** & **0.643** & 0.263 & **0.018** & 47.39 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Comparison on the HDTF dataset [79].**
Figure 4: **Cross-identity reenactment on HDTF [79].**
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Methods & CSIM\(\uparrow\) & AED\(\downarrow\) & APD\(\downarrow\) & FID\(\downarrow\) \\ \hline ROME [32] & 0.521 & 0.270 & 0.022 & 76.03 \\ StyleHeat [72] & 0.461 & 0.270 & 0.038 & 94.28 \\ Next3D-PTI [56] & 0.483 & **0.266** & 0.042 & **56.01** \\ Ours & **0.551** & 0.274 & **0.017** & **59.48** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Cross-identity reenactment between the HDTF dataset [79] and the CelebA dataset [35].
Neutral expression constraint.In our model, we aim to wipe out the expression in the source image by enforcing the coarse reconstruction from the canonical branch to have a neutral expression. This ensures that the expression branch always animates a fully "neutralized" expressionless face from the canonical branch. Without this, the expression branch fails to correctly modify the coarse reconstruction into the target expression, as shown in Fig. 5(e) and Table 4 (_i.e._, worse AED score).
Appearance branch.The appearance branch is the key to reconstructing intricate facial details of the input portrait image. Without this branch, the model struggles to capture photo-realistic details, resulting in considerably lower reconstruction and fidelity metrics, as shown in Table 4 and Fig. 5(f).
Alternative expression branch design.Instead of using the frontal view rendering from the 3DMM to provide target expression information to the expression branch (see Sec. 3.3), an alternative way is to use the 3DMM expression coefficients to linearly combine a set of learnable expression bases. However, this design performs sub-optimally as it overlooks the individual local deformation details caused by expression changes, introducing artifacts (_e.g._, mouth not fully closed) shown in Fig. 5(d) and lower FID in Table 4. We provide more results and ablations in the appendix.
## 5 Conclusions and Broader Impact
Conclusions.In this paper, we propose a framework for one-shot 3D human avatar reconstruction and animation from a single-view image. Our method excels at capturing photo-realistic details in the input portrait image, while simultaneously generalizing to unseen images without the need for test-time optimization. Through comprehensive experiments and evaluations on validation datasets, we demonstrate that the proposed approach achieves favorable performance against state-of-the-art baselines on head avatar reconstruction and animation.
Broader Impact.The proposed framework has the potential to make significant contributions to various fields such as video conferencing, entertainment industries, and virtual reality. It offers a wide range of applications, including but not limited to animating portraits for film/game production, or reducing transmission costs in video conferences through self-reenactment that only requires transmitting a portrait image with compact motion vectors. However, the proposed method may present important ethical considerations. One concerning aspect is the possibility of generating "deepfakes", where manipulated video footage portrays individuals saying things they have never actually said. This misuse could lead to serious privacy infringements and the spread of misinformation. We do not advocate for such activities and instead underscore the need to build guardrails to ensure safe use of talking-head technology, such as [53, 80, 12, 1, 11].
\begin{table}
\begin{tabular}{c|c c c c c|c c c c} \hline \hline & \multicolumn{5}{c|}{3D Portrait Reconstruction} & \multicolumn{4}{c}{Cross-Identity Reenact} \\ Methods & L1\(\downarrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & FID\(\downarrow\) & CSIM\(\uparrow\) & AED\(\downarrow\) & APD\(\downarrow\) & FID\(\downarrow\) \\ \hline w/o appearance & 0.061 & 0.239 & 19.53 & 0.712 & 47.84 & 0.370 & 0.243 & 0.015 & 43.33 \\ w/o neutral constraint & 0.027 & 0.124 & 25.65 & 0.854 & 13.56 & 0.593 & 0.315 & 0.018 & 22.92 \\ linear expression & 0.031 & 0.134 & 25.08 & 0.841 & 14.62 & 0.443 & 0.217 & 0.016 & 26.18 \\ Ours & 0.030 & 0.116 & 24.77 & 0.861 & 10.47 & 0.599 & 0.276 & 0.017 & 17.36 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation studies.** Blue text highlights the inferior performance of the variants. (Sec. 4.5)
Figure 5: **Ablation studies.** Details are explained in Sec. 4.5.
## Appendix
In this document, we first show more ablation studies in Sec. A. We further demonstrate more qualitative results in Sec. B. Additional details of the proposed framework and the evaluation process are described in Sec. C and Sec. D. Finally, we discuss preliminaries of the 3DMMs in Sec. E and the limitations of the proposed method in Sec. F, respectively.
## Appendix A More Ablation Studies
Details of the linear expression branch.We show the architecture of an alternative expression branch design in Fig. 6 (a). As described in Sec.4.5 of the main submission, this design draws inspiration from 3DMMs and learns 64 tri-plane-based expression bases denoted as \(\{E_{1},...,E_{64}\}\). To produce the target expression tri-plane, it linearly combines the learnable bases by \(T_{e}=\theta_{t}^{1}E_{1}+...+\theta_{t}^{64}E_{64}\), where \(\theta_{t}\) are the target expression coefficients extracted from the target image by the 3DMM [16]. As shown in Fig.5 and Table.4 in the main submission, this design produces unrealistic mouth regions during the animation.
Expression branch using coefficients.An intuitive design for the expression branch is to utilize an encoder that directly maps the target 3DMM expression coefficients to an expression tri-plane. We investigate this design by using two kinds of encoders: i) We simply use linear layers followed by transpose convolutional layers to map the expression coefficient vector to an expression tri-plane. We dub this design as the "encoder" design and show its architecture in Fig. 6(b). ii) We use the generator from StyleGAN2 [30] as our encoder. Specifically, we first map the target expression coefficients to a set of style vectors, the encoder takes a constant tensor as input and modulates the features at each layer using the style vectors produced from the expression coefficients. We denote this design as "modulation" and show its structure in Fig. 6(c).
As shown in Table 5, for the cross-identity reenactment evaluation on CelebA [35], both alternative expression branch designs discussed above have lower CSIM score. This is because the expression coefficients are orthogonal to the identity in the source image. By taking the expression coefficients alone as input, the model has less information of the source identity and suffers from identity preservation while animating.
\begin{table}
\begin{tabular}{c|c c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{3D Portrait Reconstruction} & \multicolumn{4}{c}{Cross-Identity Reenact} \\ Methods & L1\(\downarrow\) & LPIPS\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & FID\(\downarrow\) & CSIM\(\uparrow\) & AED\(\downarrow\) & APD\(\downarrow\) & FID\(\downarrow\) \\ \hline modulation & 0.030 & 0.130 & 25.01 & 0.841 & 11.80 & 0.558 & 0.280 & 0.018 & 22.84 \\ encoder & 0.028 & 0.121 & 25.69 & 0.850 & 9.840 & 0.538 & 0.264 & 0.017 & 18.88 \\ fine-tune high-res & 0.028 & 0.114 & 25.87 & 0.870 & 9.177 & 0.597 & 0.278 & 0.017 & 21.51 \\ ours & 0.030 & 0.116 & 24.77 & 0.861 & 10.47 & 0.599 & 0.276 & 0.017 & 17.36 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Ablation studies.** Blue text highlights the inferior performance of the variants.
Figure 6: **Ablation variants.** See Sec. A for details.
Fine-tuning end-to-end in stage II.As discussed in Sec.3.5 in the submission, in order to preserve multi-view consistency, we only fine-tune the super-resolution module while freezing other parts in Stage II. We verify the effectiveness of this choice by conducting an ablation study where we instead fine-tune end-to-end in Stage II.
Table 5 demonstrates the quantitative evaluation of this variant model. Though it has better reconstruction results from the observed view, it has worse FID score for the task of cross-identity reenactment. This indicates this variant synthesizes less realistic animation results at the target pose.
## Appendix B More Qualitative Results
In this section, we show more qualitative results on the CelebA [35] and HDTF [79] datasets.
### Cross-identity Reenactment on CelebA
In Fig. 8, Fig. 9, Fig. 10 and Fig. 11, we present more visualizations of cross-identity reenactment on the CelebA dataset.
## Appendix C More Implementation Details
Detailed network architecture.In Fig. 7, we present the detailed architecture of the proposed method. As discussed in Sec.3 of the main submission, our model includes three branches that capture the coarse geometry, detailed appearance and expression, respectively. Specifically, the encoder \(E_{c}\) in the canonical branch takes a source image of size \(3\times 512\times 512\) as input and outputs a feature map of size \(256\times 128\times 128\). By passing the feature map through four convolution layers and one transpose convolution layer, we obtain a canonical tri-plane of size \(3\times 32\times 256\times 256\). The encoder \(E_{p}\) in the appearance branch takes the source image as input and outputs a feature map of size \(256\times 128\times 128\). Through the "Lifting" and "Rasterization" process introduced in Sec.3.2 of the main submission, we produce an appearance tri-plane of size \(3\times 32\times 256\times 256\). Furthermore, to prevent the expression in the source image from leaking into the final animation, we use an off-the-shelf face parsing network [73]3 to mask out the eye and mouth regions before providing the source image to the encoder \(E_{p}\). Finally, the encoder \(E_{e}\) in the expression branch is designed similarly to \(E_{c}\), except that it takes the frontal-view 3DMM rendering with the target expression as input. All three encoders (_i.e_. \(E_{c},E_{p},E_{e}\)) use a pre-trained SegFormer [65] model, up to the classifier layer. We adopt the tri-plane decoder proposed by [9] to map the interpolated tri-plane feature to color and density for each 3D point. For the super-resolution block, we fine-tune a pre-trained GFPGAN [64] model without modifying its architecture.
Footnote 3: [https://github.com/zllrunning/face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch)
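Since the tri-plane decoder follows [9], the feature look-up for a 3D query point can be sketched as below: each point is projected onto the three axis-aligned planes, each plane is sampled bilinearly, and the three samples are summed before being decoded into color and density. The projection convention and the toy query points are assumptions for illustration; the exact implementation may differ.

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes, points):
    """planes: (3, C, H, W) tri-plane; points: (N, 3) query points in [-1, 1]^3.
    Returns per-point features of shape (N, C) by summing the three plane samples."""
    coords = [points[:, [0, 1]], points[:, [0, 2]], points[:, [1, 2]]]  # XY, XZ, YZ projections
    feats = 0
    for plane, xy in zip(planes, coords):
        grid = xy.reshape(1, 1, -1, 2)                                   # (1, 1, N, 2)
        sampled = F.grid_sample(plane.unsqueeze(0), grid,
                                mode='bilinear', align_corners=True)      # (1, C, 1, N)
        feats = feats + sampled[0, :, 0, :].t()                           # (N, C)
    return feats

planes = torch.rand(3, 32, 256, 256)        # e.g. T_c + T_e + T_p after summation
points = torch.rand(1024, 3) * 2 - 1        # toy query points along camera rays
print(sample_triplane(planes, points).shape)   # torch.Size([1024, 32])
```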
## Appendix D More Evaluation Details
We explain details of how we evaluated the baseline methods and the proposed method in this section.
3D portrait reconstruction.For the ROME method [32], we use the publicly available code and model4, which renders 256\({}^{2}\) images. For fair comparison, we resize our prediction and the ground truth images from 512\({}^{2}\) to 256\({}^{2}\). Since the synthesis from ROME is not pixel-to-pixel aligned to the input image, we rigidly transform the ground truth image and our image such that they align with ROME's predictions. To this end, we apply the Procrustes process [25, 71] to align our prediction/ground truth image to ROME's prediction using facial landmarks detected by [8]. We also replace the white background in ROME's implementation to a black one to match the ground truth images and our predictions. We compare with ROME on all 29,954 high-fidelity images in CelebA [35] for 3D portrait reconstruction.
Footnote 4: [https://github.com/SamsungLabs/rome](https://github.com/SamsungLabs/rome)
Since the synthesis results by HeadNeRF [27]5 and our method are pixel-to-pixel aligned to the input image, we do not carry out further alignment when comparing to HeadNeRF. However, as discussed in Sec.4.2 of the main submission, applying HeadNeRF on all 29,954 images in CelebA is computationally infeasible. Thus we apply our method and HeadNeRF on a randomly sampled subset that includes 3000 images from the CelebA dataset.
Footnote 5: [https://github.com/CrisiHY1995/headnerf](https://github.com/CrisiHY1995/headnerf)
Footnote 6: [https://github.com/FeiiYin/StyleHEAT](https://github.com/FeiiYin/StyleHEAT)
Reenactment.We compare with ROME [32], StyleHeat [72]6, OTAvatar [42] and Next3D [56] combined with PTI [52] for same-identity and cross-identity reenactment. We leave HeadNeRF out on the HDTF dataset since it is impossible to test it on tens of thousands of frames due to the time-consuming optimization for each frame. Moreover, OTAvatar [42] is a concurrent work and up to the date of this paper, only its partial code7 that allows for comparison on the HDTF dataset alone has been released publicly. So we do not compare with it on motion transfer from the HDTF dataset to the CelebA dataset. To animate a given portrait, the Next3D method [56] first uses pivotal tuning [52] to map the portrait image to the latent space of its generator and then animates the portrait using expressions extracted from the target video. We use the publicly available implementation8 of Next3D with pivotal tuning. For fair comparison, we align predictions from all methods to the target image using the Procrustes process discussed above. Note that the synthesized images from all methods have a black background and are readily comparable after the alignment.
Footnote 7: [https://github.com/theEricMa/OTAvatar](https://github.com/theEricMa/OTAvatar)
Footnote 8: [https://github.com/MrTornado24/Next3D](https://github.com/MrTornado24/Next3D)
## Appendix E Preliminaries of 3DMMs
We exploit the geometry prior from a 3DMM [50] that represents the shape and texture of a portrait by:
\[\begin{split} S&=\bar{S}+B_{id}\alpha+B_{exp}\beta\\ T&=\bar{T}+B_{tex}\delta\end{split} \tag{3}\]
where \(\bar{S}\), \(\bar{T}\) are the mean shape and texture of human faces, \(B_{id},B_{exp},B_{tex}\) are the shape, expression and texture bases, and \(\alpha,\beta,\delta\) are coefficients that linearly combine the shape, expression and texture bases, respectively. Since we mainly utilize the shape and expression components in the 3DMM in this work, we ignore its texture and illumination modules and simply denote the rendering operation from a camera view \(C\) as \(I,M=\mathcal{R}_{M}(\alpha,\beta,C)\), where \(I\) is the rendered image, and \(M\) is the rendered mask that only includes the facial region.
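A minimal numpy sketch of the linear shape model in Eq. 3 follows; the mean shape and the identity/expression bases are random toy arrays here, whereas in practice they come from the fitted face model, and the dimensions are purely illustrative.

```python
import numpy as np

n_verts, n_id, n_exp = 1000, 80, 64     # toy sizes; real models use many more vertices

S_bar = np.zeros(3 * n_verts)                        # mean shape (flattened x, y, z per vertex)
B_id = np.random.randn(3 * n_verts, n_id) * 1e-3     # identity basis (toy)
B_exp = np.random.randn(3 * n_verts, n_exp) * 1e-3   # expression basis (toy)

def shape_from_coefficients(alpha, beta):
    """S = S_bar + B_id @ alpha + B_exp @ beta (Eq. 3), reshaped to (n_verts, 3)."""
    return (S_bar + B_id @ alpha + B_exp @ beta).reshape(n_verts, 3)

alpha_s = np.random.randn(n_id)      # identity coefficients of the source image
beta_t = np.random.randn(n_exp)      # expression coefficients of the target image
print(shape_from_coefficients(alpha_s, beta_t).shape)   # (1000, 3)
```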
## Appendix F Limitations
Teeth and pupil reconstruction.3D head avatar reconstruction and animation is a highly challenging task. The proposed method takes the first step to produce high-fidelity results. However, to generalize to any portrait image, one dilemma is that the expression of the source portrait and target image could be arbitrary, which introduces various challenging scenarios. For instance, the source portrait image could have a closed mouth while the target expression has an open mouth (_e.g._ the second row of Fig. 8). In this case, the model should hallucinate correct inner mouth regions. Yet, in other cases, the inner mouth is visible in the source portrait (_e.g._ the sixth row in Fig. 8). To resolve this dilemma, in this work, our model simply always hallucinates the inner mouth of the individual through our expression branch. As a result, the hallucinated mouth could deviate from the one in the source image. In other words, our model cannot accurately reconstruct teeth and lips, as shown in Fig. 22. The same analysis applies to pupils. Since we have no prior knowledge of whether the eyes in the portrait image are open or closed, our model always hallucinates the pupils through the expression branch instead of reconstructing the ones in the source image. We leave this limitation to future works.
Figure 8: **Cross-identity reenactment on CelebA.**
Figure 9: **Cross-identity reenactment on CelebA.**
Figure 10: **Cross-identity reenactment on CelebA.**
Figure 11: **Cross-identity reenactment on CelebA.**
Figure 12: **3D reconstruction on CelebA.**
Figure 13: **3D reconstruction on CelebA.**
Figure 14: **3D reconstruction on CelebA.**
Figure 15: **3D reconstruction on CelebA.**
Figure 16: **Cross-identity reenactment from HDTF to CelebA.**
Figure 17: **Cross-identity reenactment from HDTF to CelebA.**
Figure 18: **Cross-identity reenactment from HDTF to CelebA.**
Figure 19: **Cross-identity reenactment from HDTF to CelebA.**
Figure 20: **Same-identity reenactment on HDTF.**
Figure 21: **Cross-identity reenactment on HDTF.**
Figure 22: **Failure cases.**
|
2302.05963
|
Analyzing the Effectiveness of the Underlying Reasoning Tasks in
Multi-hop Question Answering
|
To explain the predicted answers and evaluate the reasoning abilities of
models, several studies have utilized underlying reasoning (UR) tasks in
multi-hop question answering (QA) datasets. However, it remains an open
question as to how effective UR tasks are for the QA task when training models
on both tasks in an end-to-end manner. In this study, we address this question
by analyzing the effectiveness of UR tasks (including both sentence-level and
entity-level tasks) in three aspects: (1) QA performance, (2) reasoning
shortcuts, and (3) robustness. While the previous models have not been
explicitly trained on an entity-level reasoning prediction task, we build a
multi-task model that performs three tasks together: sentence-level supporting
facts prediction, entity-level reasoning prediction, and answer prediction.
Experimental results on 2WikiMultiHopQA and HotpotQA-small datasets reveal that
(1) UR tasks can improve QA performance. Using four debiased datasets that are
newly created, we demonstrate that (2) UR tasks are helpful in preventing
reasoning shortcuts in the multi-hop QA task. However, we find that (3) UR
tasks do not contribute to improving the robustness of the model on adversarial
questions, such as sub-questions and inverted questions. We encourage future
studies to investigate the effectiveness of entity-level reasoning in the form
of natural language questions (e.g., sub-question forms).
|
Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, Akiko Aizawa
|
2023-02-12T17:32:55Z
|
http://arxiv.org/abs/2302.05963v1
|
# Analyzing the Effectiveness of the Underlying Reasoning Tasks in Multi-hop Question Answering
###### Abstract
To explain the predicted answers and evaluate the reasoning abilities of models, several studies have utilized underlying reasoning (UR) tasks in multi-hop question answering (QA) datasets. However, it remains an open question as to how effective UR tasks are for the QA task when training models on both tasks in an end-to-end manner. In this study, we address this question by analyzing the effectiveness of UR tasks (including both sentence-level and entity-level tasks) in three aspects: (1) QA performance, (2) reasoning shortcuts, and (3) robustness. While the previous models have not been explicitly trained on an entity-level reasoning prediction task, we build a multi-task model that performs three tasks together: sentence-level supporting facts prediction, entity-level reasoning prediction, and answer prediction. Experimental results on 2WikiMultiHopQA and HotpotQA-small datasets reveal that (1) UR tasks can improve QA performance. Using four debiased datasets that are newly created, we demonstrate that (2) UR tasks are helpful in preventing reasoning shortcuts in the multi-hop QA task. However, we find that (3) UR tasks do not contribute to improving the robustness of the model on adversarial questions, such as sub-questions and inverted questions. We encourage future studies to investigate the effectiveness of entity-level reasoning in the form of natural language questions (e.g., sub-question forms).1
Footnote 1: Our data and code are available at [https://github.com/Alab-NII/multi-hop-analysis](https://github.com/Alab-NII/multi-hop-analysis)
## 1 Introduction
The task of multi-hop question answering (QA) requires a model to read and aggregate information from multiple paragraphs to answer a given question (Figure 1(a)). Several multi-hop QA datasets have been proposed, such as QAngaroo (Welbl et al., 2018), HotpotQA (Yang et al., 2018), and MuSiQue (Trivedi et al., 2022). In HotpotQA, the authors provide sentence-level supporting facts (SFs) to test the reasoning ability and explainability of the models. However, owing to the design of the sentence-level SFs task (binary classification) and the redundant information in the sentences, Inoue et al. (2020) and Ho et al. (2020) show that the sentence-level SFs are insufficient to explain and evaluate multi-hop models in detail. To address this issue, the \(\mathrm{R}^{4}\mathrm{C}\) (Inoue et al., 2020) and 2WikiMultiHopQA (2Wiki; Ho et al., 2020) datasets provide an entity-level reasoning prediction task to explain and evaluate the process of answering questions. Entity-level reasoning information is defined as a set of triples that describes the reasoning path from question to answer (Figure 1(b)).
Several previous studies (Chen et al., 2019; Fu et al., 2021) utilize sentence-level SFs and/or entity-level reasoning information to build explainable models by using question decomposition (Min et al., 2019; Perez et al., 2020) or predicting sentence-level SFs. The advantages of these pipeline models are that they can exploit the underlying reasoning (UR) process in QA and their predicted answers are more interpretable. However, the question remains as to how effective training on UR tasks is for the QA task in an end-to-end manner. Although a few end-to-end models have also been introduced (Qiu et al., 2019; Fang et al., 2020), these models are not explicitly trained on entity-level and answer prediction tasks.
In addition to the triple form, the sub-question form is another way to utilize entity-level reasoning information. Specifically, Tang et al. (2021) utilize question decomposition as an additional sub-question evaluation for bridge questions (there are two types of questions: bridge and comparison) in HotpotQA. They only use sub-questions for evaluation and do not fine-tune the models on them. In addition, Ho et al. (2022) use sub-questions for
both evaluation and training. However, they only focus on comparison questions for date information. In contrast, we focus on the triple form of the entity-level information and conduct experiments using two datasets, 2Wiki and HotpotQA-small (obtained by combining HotpotQA and \(\rm R^{4}C\)), which include both types of questions.
In this study, we analyze the effectiveness of UR tasks (including both sentence-level and entity-level) in three aspects: (1) _QA performance_, (2) _reasoning shortcuts_, and (3) _robustness_. First, QA performance is the final objective of the QA task. We aim to answer the following question: **(RQ1)**_Can the UR tasks improve QA performance?_ For the second aspect, previous studies [3, 13, 14] demonstrate that many questions in the multi-hop QA task contain biases and reasoning shortcuts [15], where the models can answer the questions by using heuristics. Therefore, we aim to ask the following: **(RQ2)**_Can the UR tasks prevent reasoning shortcuts?_ For the final aspect, to ensure safe development of NLP models, robustness is one of the important issues and has gained a tremendous amount of research attention [20]. In this study, we aim to test the robustness of the model by asking modified versions of questions, such as sub-questions and inverted questions. Our question is **(RQ3)**_Do the UR tasks make the models more robust?_
There are no existing end-to-end models that can perform three tasks simultaneously (sentence-level SFs prediction, entity-level prediction, and answer prediction); therefore, we first build a multi-hop BigBird-base model [1] to perform these three tasks simultaneously. We then evaluate our model using two multi-hop datasets: 2Wiki and HotpotQA-small. To investigate the effectiveness of the UR tasks, for each dataset, we conduct three additional experiments in which the model is trained on: (1) answer prediction task, (2) answer prediction and sentence-level prediction tasks, and (3) answer prediction and entity-level prediction tasks. We also create four debiased sets (Figure 1(c)) for 2Wiki and HotpotQA-small for **RQ2**. We create and reuse adversarial questions for 2Wiki and HotpotQA-small for **RQ3**.
The experimental results indicate that the UR tasks can improve QA performance from 77.9 to 79.4 F1 for 2Wiki and from 66.4 to 69.4 F1 for HotpotQA-small **(RQ1)**. The results of the models on the four debiased sets reveal that the UR tasks can be used to reduce reasoning shortcuts **(RQ2)**. Specifically, when the model is trained on both answer prediction and UR tasks, the performance drop of the model on the debiased sets is lower than that when the model is trained only on answer prediction (e.g., 8.9% vs. 13.4% EM). The results also suggest that the UR tasks do not make the model more robust on adversarial questions, such as sub-questions and inverted questions **(RQ3)**. Our analysis shows that correct reconstruction of the entity-level reasoning task contributes to finding the correct answer in only 37.5% of cases. This implies that using entity-level reasoning information in the form of triples does not answer adversarial questions, in this case, the sub-questions. We encourage future work to discover the effectiveness of the entity-level reasoning task in the form of sub-questions that have the same form as multi-hop QA questions.
## 2 Background
**Reasoning Tasks in Multi-hop QA.** In this study, we consider UR tasks in multi-hop QA including two levels: _sentence-level_ and _entity-level_. The sentence-level SFs prediction task was first introduced by Yang et al. (2018). This task requires a model to predict a set of sentences that is necessary to answer a question (Figure 1).
Figure 1: Example of (a) a standard multi-hop question, (b) two underlying reasoning tasks in the QA process and three aspects in our analysis, where ‘+’ and ‘-’ indicate that the UR tasks have positive and negative impacts, respectively, and (c) debiased and adversarial examples that are used in our study.
To evaluate the UR process of the models, derivation and evidence information were introduced in \(\mathrm{R}^{4}\mathrm{C}\) and 2Wiki, respectively. Both derivation and evidence are sets of triples that represent the reasoning path from question to answer. The difference is the form; derivation in \(\mathrm{R}^{4}\mathrm{C}\) uses a semi-structured natural language form, whereas evidence in 2Wiki uses a structured form. We conduct experiments with both \(\mathrm{R}^{4}\mathrm{C}\) (HotpotQA-small) and 2Wiki. For consistency, we use the term _entity-level reasoning prediction task_ to denote the derivation task in \(\mathrm{R}^{4}\mathrm{C}\) and the evidence task in 2Wiki.
**Reasoning Shortcuts and Biases.** In this study, we consider reasoning shortcuts and biases to be similar: both are spurious correlations in the dataset that allow a model to answer the question correctly without performing the expected reasoning skills, such as comparison and multi-hop reasoning. Following previous studies Jiang and Bansal (2019); Ko et al. (2020), we use the terms _word overlap shortcut_ and _position bias_.
To check whether the UR tasks can prevent reasoning shortcuts, we first identify the types of shortcuts that exist in HotpotQA-small and 2Wiki. We use heuristics to identify the word overlap shortcut (Appendix A). We find that the word overlap shortcut is common in HotpotQA-small, but not in 2Wiki. The small sample size of HotpotQA-small (Section 4) increases the uncertainty of the obtained results. Therefore, within the scope of this study, we mainly experiment with position bias.
We observe that many examples in 2Wiki contain answers in the first sentence. Therefore, we divide every sentence-level SF in each gold paragraph into two levels: the first sentence (position_0) and the remaining sentences (position_other). Subsequently, we obtain the percentage of each level by dividing the total number of each level (e.g., position_0) by the total number of SFs. Figure 2 illustrates the information on the position of sentence-level SFs in dev. sets of three datasets. We find that all three datasets have a bias toward the first sentence. We also find that 2Wiki has more position biases than HotpotQA and HotpotQA-small.
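A minimal sketch of how this position statistic can be computed, assuming supporting facts are stored as (paragraph title, sentence index) pairs as in the HotpotQA/2Wiki annotation format (the field name and toy data below are illustrative):

```python
from collections import Counter

def position_histogram(examples):
    """Count how many gold supporting facts are the first sentence of their
    paragraph (position_0) versus any later sentence (position_other)."""
    counts = Counter()
    for ex in examples:
        # ex["supporting_facts"]: list of [paragraph_title, sentence_index]
        for _, sent_idx in ex["supporting_facts"]:
            counts["position_0" if sent_idx == 0 else "position_other"] += 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Toy example: 2 of 3 supporting facts sit in the first sentence of a paragraph.
toy = [{"supporting_facts": [["A", 0], ["B", 2], ["C", 0]]}]
print(position_histogram(toy))  # {'position_0': 0.67, 'position_other': 0.33} (approx.)
```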
## 3 Our Multi-task Model
To investigate the usefulness of UR tasks for the QA task, we jointly train the corresponding tasks: sentence-level SFs prediction, entity-level prediction, and answer prediction. Figure 3 illustrates our model. To handle long texts, we use the Big-Bird model Zaheer et al. (2020), which is available in Hugging Face's transformers repository.2 Our model comprises three main steps: (1) paragraph selection, (2) context encoding, and (3) multi-task prediction. We use the named entity recognition (NER) models of Spacy3 and Flair Akbik et al. (2019) to extract all entities in the context and use them for the entity-level prediction task.
Footnote 2: [https://huggingface.co/transformers/model_doc/bigbird.html](https://huggingface.co/transformers/model_doc/bigbird.html)
Footnote 3: [https://spacy.io/](https://spacy.io/)
Figure 3: Our model has three main steps: paragraph selection, context encoding, and multi-task prediction.
Figure 2: Information on the position of sentence-level SFs in the dev. sets of the three datasets.
**Paragraph Selection.** Following previous models Qiu et al. (2019); Fang et al. (2020); Tu et al. (2020), instead of using all the provided paragraphs, we first filter out answer-unrelated paragraphs. We follow the paragraph selection process described in Fang et al. (2020). First, we retrieve first-hop paragraphs by using title matching or entity matching. We then retrieve second-hop paragraphs using the hyperlink information available in Wikipedia. When we retrieve paragraphs, we reuse a paragraph ranker model4 from the hierarchical graph network (HGN) model Fang et al. (2020) to rank input paragraphs using the probability of whether they contain sentence-level SFs.
Footnote 4: [https://github.com/ywfan/HGN](https://github.com/ywfan/HGN)
**Context Encoding.** To obtain vector representations for sentences and entities, we first combine all the selected paragraphs into one long paragraph and then concatenate it with the question to form a context \(C\). Specifically, \(C=[[\mathrm{CLS}],q_{1},...,q_{m},[\mathrm{SEP}],p_{1},...,p_{n},[\mathrm{SEP}]]\), where \(m\) and \(n\) are the lengths of the question \(q\) and the combined paragraph \(p\) (all selected paragraphs), respectively. The context \(C\) is then tokenized into \(l\) sub-words before feeding into BigBird to obtain the contextual representation \(C^{\prime}\) of the sub-words:
\[C^{\prime}=\mathrm{BigBird}(C)\in\mathbb{R}^{l\times h}, \tag{1}\]
where \(h\) is the hidden size of the BigBird model. Next, we obtain the representation \(s_{i}\in\mathbb{R}^{2h}\) of the \(i\)-th sentence and the representation \(e_{j}\in\mathbb{R}^{4h+d_{t}}\) of the \(j\)-th entity, as follows:
\[\begin{split} s_{i}&=C^{\prime}_{S^{i}_{\mathrm{start}}};C^{\prime}_{S^{i}_{\mathrm{end}}}\\ e_{j}&=C^{\prime}_{E^{j}_{\mathrm{start}}};C^{\prime}_{E^{j}_{\mathrm{end}}};t_{j};s_{k},\end{split} \tag{2}\]
where \(;\) denotes the concatenation of vectors, \(C^{\prime}_{S^{i}_{\mathrm{start}}}\) and \(C^{\prime}_{E^{j}_{\mathrm{start}}}\) denote the first sub-word representations of the \(i\)-th sentence and \(j\)-th entity, respectively. \(C^{\prime}_{S^{i}_{\mathrm{end}}}\) and \(C^{\prime}_{E^{j}_{\mathrm{end}}}\) denote the last sub-word representations of the \(i\)-th sentence and \(j\)-th entity, respectively. We enrich the entity embedding \(e_{j}\) by concatenating it with a \(d_{t}\)-dimensional type embedding \(t_{j}\) and a sentence embedding \(s_{k}\), where \(k\) is the index of the sentence containing the \(j\)-th entity.
We also leverage the entity information to improve the contextual representation of sub-words \(C^{\prime}\) as it is mainly used for the answer prediction task, which will be described in the next section. Thus, the enhanced sub-word representation \(C^{\prime\prime}_{i}\) of the \(i\)-th sub-word is calculated as follows:
\[C^{\prime\prime}_{i}=C^{\prime}_{i};e_{k}\in\mathbb{R}^{5h+d_{t}}, \tag{3}\]
where \(e_{k}\) is the embedding of the \(k\)-th entity containing the \(i\)-th sub-word. Otherwise, \(e_{k}\) is a null vector with the same dimension.
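A minimal PyTorch sketch of how the sentence, entity, and enhanced sub-word representations of Eqs. (1)-(3) can be assembled; the argument names and the convention that spans are inclusive (start, end) sub-word indices are assumptions made for illustration:

```python
import torch

def build_representations(C_prime, sent_spans, ent_spans, ent_type_emb, ent_to_sent):
    """C_prime: (l, h) BigBird outputs; sent_spans / ent_spans: (start, end) sub-word
    indices; ent_type_emb: (num_entities, d_t) type embeddings; ent_to_sent: entity
    index -> index of the sentence containing it."""
    h, d_t = C_prime.shape[-1], ent_type_emb.shape[-1]
    # s_i = [C'_start ; C'_end]                               -> (2h,)
    sents = [torch.cat([C_prime[s], C_prime[e]]) for s, e in sent_spans]
    # e_j = [C'_start ; C'_end ; t_j ; s_k]                   -> (4h + d_t,)
    ents = [torch.cat([C_prime[s], C_prime[e], ent_type_emb[j], sents[ent_to_sent[j]]])
            for j, (s, e) in enumerate(ent_spans)]
    # C''_i = [C'_i ; e_k] if sub-word i lies in entity k, else zero padding -> (5h + d_t,)
    null = torch.zeros(4 * h + d_t)
    enhanced = [torch.cat([C_prime[i],
                           next((ents[j] for j, (s, e) in enumerate(ent_spans) if s <= i <= e), null)])
                for i in range(C_prime.shape[0])]
    return torch.stack(sents), torch.stack(ents), torch.stack(enhanced)
```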
**Multi-task Prediction.** After context encoding, we train our model on three main tasks together: (1) sentence-level prediction, (2) entity-level prediction, and (3) answer prediction. We split the answer prediction task into two sub-tasks, similar to previous studies Yang et al. (2018); Fang et al. (2020), including answer type prediction and answer span prediction. We train our model by minimizing the joint loss for all tasks, as follows:
\[\begin{split} L_{\mathrm{joint}}=\lambda_{\mathrm{sent}}L_{ \mathrm{sent}}+\lambda_{\mathrm{ent}}L_{\mathrm{ent}}+\\ \lambda_{\mathrm{ans}}(L_{\mathrm{start}}+L_{\mathrm{end}}+L_{ \mathrm{type}}),\end{split} \tag{4}\]
where \(\lambda_{\mathrm{sent}}\), \(\lambda_{\mathrm{ent}}\), and \(\lambda_{\mathrm{ans}}\) are the hyper-parameters for three tasks: sentence-level prediction, entity-level prediction, and answer prediction (details are given in Appendix B.1).
For the sentence-level prediction task, we use a binary classifier to predict whether a sentence is a supporting fact. For the answer type prediction task, we use a 4-way classifier to predict the probabilities of _yes_, _no_, _span_, and _no answer_. Two linear classifiers are used for the answer span prediction task to independently predict the start and end tokens of the answer span.
Different from existing end-to-end models Qiu et al. (2019); Fang et al. (2020), our model is explicitly trained on the entity-level prediction task. We formalize the entity-level reasoning prediction task as a relation extraction task Zhang and Wang (2015). The input is a pair of entities, and the output is the relationship between two entities. From all named entities obtained by using the NER models, we generate a set of entity pairs; for example, given \(N\) entities, we obtain \(N\times(N-1)\) pairs. For each pair, we predict a relationship in a set of predefined relationships obtained from the training set. We then use cross-entropy as the learning objective.
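A sketch of the joint objective in Eq. (4) and the \(N\times(N-1)\) entity-pair generation for the relation-extraction formulation; the output/gold dictionary keys and the default loss weights are illustrative, since the actual heads and hyper-parameters are described only abstractly above:

```python
import itertools
import torch.nn.functional as F

def joint_loss(out, gold, w_sent=1.0, w_ent=1.0, w_ans=1.0):
    """Weighted sum of the three task losses (Eq. 4); weights are hyper-parameters."""
    l_sent = F.binary_cross_entropy_with_logits(out["sent_logits"], gold["sent_labels"])
    l_ent = F.cross_entropy(out["ent_rel_logits"], gold["ent_rel_labels"])
    l_ans = (F.cross_entropy(out["start_logits"], gold["start"])
             + F.cross_entropy(out["end_logits"], gold["end"])
             + F.cross_entropy(out["type_logits"], gold["type"]))
    return w_sent * l_sent + w_ent * l_ent + w_ans * l_ans

def entity_pairs(entities):
    """All ordered pairs of distinct entities: N entities -> N*(N-1) classifier inputs."""
    return list(itertools.permutations(entities, 2))
```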
## 4 Datasets and Evaluation Metrics
We mainly experiment with 2Wiki and HotpotQA-small. We also train and evaluate our model on the full version of HotpotQA. We reuse and create debiased and adversarial sets for the evaluation. Table 1 presents the statistics for 2Wiki, HotpotQA-small, and additional evaluation sets. The details of HotpotQA and 2Wiki are presented in Appendix B.2. It should be noted that all datasets are in English.
### HotpotQA-small
\(\mathrm{R}^{4}\mathrm{C}\)[11] is created by adding entity-level reasoning information to the samples in HotpotQA. We obtain HotpotQA-small by combining HotpotQA [22] with \(\mathrm{R}^{4}\mathrm{C}\). HotpotQA-small comprises three tasks as in 2Wiki: (1) sentence-level SFs prediction, (2) entity-level prediction, and (3) answer prediction. First, we re-split the ratio between the training and dev. sets; the new sizes are 3,671 and 917 for the training and dev. sets, respectively (the original sizes are 2,379 and 2,209, respectively). In \(\mathrm{R}^{4}\mathrm{C}\), there are three gold annotations for the entity-level prediction task; in 2Wiki, there is only one gold annotation. For consistency in the evaluation and analysis, we randomly choose one annotation from the three annotations for every sample in \(\mathrm{R}^{4}\mathrm{C}\).
The entity-level reasoning in \(\mathrm{R}^{4}\mathrm{C}\) is created by crowdsourcing. We observe that there are many similar relations in the triples in \(\mathrm{R}^{4}\mathrm{C}\), and these relations can be grouped into one. For example, _is in_, _is located in_, _is in the_, and _is located in the_ indicate location relation. We also group the relations by removing the context information in the relations; for example, _is a 2015 book by_ and _is the second book by_ are considered similar to the relation _is a book by_. After grouping, the number of relations in \(\mathrm{R}^{4}\mathrm{C}\) is 2,526 (it is 4,791 before).
### Debiased Dataset
The objective of our debiased dataset is to introduce a small perturbation in each paragraph to mitigate a specific type of bias, in our case, the position bias shown in Figure 2. For both 2Wiki and HotpotQA-small, we use the same method to generate four debiased sets: AddUnrelated, AddRelated, Add2, and Add2Swap. The differences between these four sets are whether the sentence is related or unrelated to the paragraph and whether we add one or two sentences into the paragraph. The details of each set are as follows.
**AddUnrelated**: One sentence unrelated to the paragraph is added. In our experiment, we use a list of sentences in the sentence-level revision dataset [23]. We randomly choose one sentence that has a number of tokens greater than eleven but less than twenty-one.
**AddRelated**: One sentence that does not have an impact on the meaning or flow of the paragraph is added. In our experiment, we write multiple templates for each entity type (e.g., for a film entity, "#Name is a nice film", where _#Name_ is the title of the paragraph), then randomly choose one template, and add it to the paragraph. To detect the type of the paragraph, we use the question type information in 2Wiki and HotpotQA-small, the results of the NER model, and the important keywords in the question (e.g., who, magazine, album, and film).
**Add2**: AddRelated and AddUnrelated are combined in order.
**Add2Swap**: The order of AddRelated and AddUnrelated in Add2 is swapped.
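A sketch of how the four perturbed variants of one gold paragraph can be derived from one related and one unrelated sentence. The related-sentence template is the illustrative one quoted above; the choice to prepend the added sentences (so that the original first sentence is displaced) is an assumption, since the exact insertion position is not stated here:

```python
import random

def make_debiased_variants(paragraph_sents, title, unrelated_pool):
    """paragraph_sents: list of sentences; title: paragraph title;
    unrelated_pool: candidate sentences unrelated to the paragraph."""
    related = f"{title} is a nice film."  # illustrative template, chosen per entity type
    unrelated = random.choice([s for s in unrelated_pool if 11 < len(s.split()) < 21])
    return {
        "AddUnrelated": [unrelated] + paragraph_sents,
        "AddRelated":   [related] + paragraph_sents,
        "Add2":         [related, unrelated] + paragraph_sents,
        "Add2Swap":     [unrelated, related] + paragraph_sents,
    }
```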
### Adversarial Dataset
The objective of our adversarial dataset is to check the robustness of the model by asking modified versions of questions. For HotpotQA-small, we reuse two versions of adversarial examples in Geva et al. (2022). The first one is automatically generated by using the 'Break, Perturb, Build' (BPB) framework in Geva et al. (2022). The BPB framework performs three main steps: (1) breaking a question into multiple reasoning steps, (2) perturbing the reasoning steps by using a list of defined rules, and (3) building new QA samples from the perturbations in step #2. The second version is a subset of the first version and is validated by crowd workers. We only use the examples in these two versions whose original examples appear in HotpotQA-small.
For 2Wiki, no adversarial dataset is available. Based on the idea of the BPB framework in Geva et al. (2022), we apply two main rules from BPB for 2Wiki: (1) replace the comparison operation for comparison questions, and (2) use the prune step for bridge questions. For the first rule, we replace the operation in the comparison questions (e.g., "Who was born first, A or B?" is converted to "Who was born later, A or B?"). For the second
\begin{table}
\begin{tabular}{l r r} \hline \hline
**Split** & **2Wiki** & **HotpotQA-small** \\ \hline Train & 167,454 & 3,671 \\ Dev. & 12,576 & 917 \\ Test & 12,576 & - \\ Debiased & 12,576 (x4) & 917 (x4) \\ Adversarial & 12,576 & 659 \& 134 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics for 2Wiki and HotpotQA-small. There are four debiased sets in 2Wiki and HotpotQA-small. There is one adversarial set in 2Wiki and there are two adversarial sets in HotpotQA-small.
rule, we use a sub-question in the QA process as the main question (e.g., for Figure 1, we ask, "Who is the father of Joan of Valois?").
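A sketch of the two rules used above to build the 2Wiki adversarial set. Only the "first"/"later" operation pair and the father-relation example are given in the text, so the additional swap pairs and the question template are assumptions:

```python
# Rule 1: invert the comparison operation of a comparison question.
OP_SWAPS = {"first": "later", "later": "first"}  # further pairs would be added analogously

def invert_comparison(question):
    out = []
    for w in question.split():
        core = w.strip(",?")
        out.append(w.replace(core, OP_SWAPS[core]) if core in OP_SWAPS else w)
    return " ".join(out)

# Rule 2 (prune step): promote one intermediate triple to a stand-alone sub-question.
def prune_to_subquestion(triple):
    head, relation, _tail = triple
    return f"Who is the {relation} of {head}?"

print(invert_comparison("Who was born first, A or B?"))  # Who was born later, A or B?
print(prune_to_subquestion(("Charles of Valois", "father", "Philip III of France")))
```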
### Evaluation Metrics
Each task in HotpotQA and 2Wiki is evaluated by using two metrics: exact match (EM) and F1 score. Following the evaluation script in HotpotQA and 2Wiki, we use joint EM and joint F1 to evaluate the entire capacity of the model. For HotpotQA, they are the products of the scores of two tasks: sentence-level prediction and answer prediction. For 2Wiki and HotpotQA-small, they are the products of the scores of three tasks: sentence-level prediction, entity-level prediction, and answer prediction.
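A simplified sketch of combining the per-task scores into joint scores as products, as described above; the per-example keys are illustrative, and the official evaluation scripts combine precision/recall before computing the joint F1, so this is an approximation:

```python
from statistics import mean

def prod(values):
    out = 1.0
    for v in values:
        out *= v
    return out

def joint_scores(per_example, tasks=("ans", "sent", "ent")):
    """Joint EM/F1 of one example as the product of its per-task scores,
    averaged over the dataset."""
    joint_em = mean(prod(ex[f"em_{t}"] for t in tasks) for ex in per_example)
    joint_f1 = mean(prod(ex[f"f1_{t}"] for t in tasks) for ex in per_example)
    return joint_em, joint_f1
```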
## 5 Results
Currently, there are no existing end-to-end models that explicitly train all three tasks together; therefore, in this study, we use our proposed model for analysis. We also compare our model with other previous models on the HotpotQA and 2Wiki datasets. In general, the experimental results indicate that our model is comparable to previous models and can be used for further analyses. We focus more on the analysis; therefore, the detailed results of the comparison are presented in Appendix B.3.
### Effectiveness of the UR Tasks
To investigate the effectiveness of the UR tasks, we train the model in four settings: (1) answer prediction only, (2) answer prediction and sentence-level SFs prediction, (3) answer prediction and entity-level prediction, and (4) all three tasks together.
**QA Performance (RQ1).** Our first research question is whether the UR tasks can improve QA performance. To answer this question, we compare the results of different task settings described above. The results are presented in Table 2. For 2Wiki, using sentence-level and entity-level separately (settings #2 and #3), the QA performance does not change significantly. The improvement is significant when we combine both the sentence-level and entity-level (setting #4). Specifically, the scores when the model is trained on the answer prediction task only (setting #1) and on both the answer prediction task and UR tasks (setting #4) are 77.9 and 79.4 F1, respectively. In contrast to 2Wiki, using sentence-level and entity-level separately, there is a larger QA performance improvement in HotpotQA-small. Specifically, the F1 scores of settings #2 and #3 are 69.0 and 69.1, respectively, whereas the F1 score of the first setting is 66.4. Similar to 2Wiki, there is a large gap between the two settings, #1 and #4 (66.4 F1 and 69.4 F1, respectively).
In summary, these results indicate that both sentence-level and entity-level prediction tasks contribute to improving QA performance. These results align with the findings in Yang et al. (2018), which shows that incorporating the sentence-level SFs prediction task can improve QA performance. We also find that when combining both sentence-level and entity-level prediction tasks, the scores of the answer prediction task are the highest.
**Reasoning Shortcuts (RQ2).** To investigate whether explicitly optimizing the model on the UR tasks can prevent reasoning shortcuts, we evaluate the four settings of the model on the four debiased sets of 2Wiki and HotpotQA-small. The generation of the debiased sets includes stochastic steps. To minimize the impact of randomness on our reported results, we generate the debiased sets five times and report the average evaluation scores. The average performance drops are presented in Table 3 (detailed scores are given in Appendix B.4).
Overall, for 2Wiki, when the model is trained on only one task (#1), the drop is the largest (except for AddRelated, which is the second largest). When the model is trained only on the answer prediction task, the drops are always higher than those when the model is trained on three tasks. Specifically, the gaps between the two settings, #1 (only answer task) and #4 (all three tasks), are 4.5%, 0.4%, 3.2%, 4.5% (EM score) for AddUnrelated, AddRelated, Add2, and Add2Swap, respectively. These scores indicate that the two tasks, sentence-level and entity-level, positively affect the answer prediction task when the model is trained on three tasks simultaneously.
For HotpotQA-small, we observe that the effectiveness of the UR tasks is inconsistent. For example, for AddUnrelated, when training the model on the three tasks (setting #4), the reduction is larger than that when training on answer task only (setting #1) (5.1 vs. 3.0 EM). However, for AddRelated, the reduction on setting #4 is smaller than that on setting #1 (1.3 vs. 4.0 EM). One possible reason is that the performance of the entity-level task is not good (6.4 EM), which affects the answer prediction task when the model is
trained on the three tasks together. Another possible reason is that the position bias in HotpotQA-small is not sufficiently large. We present a detailed analysis in Section 5.2 to explain this case.
**Robustness (RQ3).** To test whether the UR tasks can help to improve the robustness of the model, we evaluate the four settings of the model on the adversarial sets. For 2Wiki, the results are presented in Table 4. The scores for all four settings decrease significantly on the adversarial set. The reduction is the smallest when the model is trained on the answer task only. The UR tasks do not make the model more robust on this adversarial set. For HotpotQA-small, we observe the same behavior, that is, when the model is trained on the answer task only, the reduction is the smallest. All results are presented in Table 5. These results indicate that both sentence-level and entity-level prediction tasks do not contribute to improving the robustness of the models on adversarial questions, such as sub-questions and inverted questions. We analyze the results in Section 5.2.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Task** & \multicolumn{2}{c}{**Dev-adver**} & \multicolumn{2}{c}{**Reduction \%**} \\ \cline{2-5}
**Setting** & EM & F1 & EM & F1 \\ \hline Ans & 37.09 & 46.07 & **48.51** & **40.84** \\ Ans + Sent & 34.26 & 43.64 & 52.95 & 44.51 \\ Ans + Ent & 32.67 & 39.43 & 54.83 & 49.58 \\ Ans + Sent + Ent & 34.19 & 42.74 & 53.55 & 46.15 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of our model in the dev-adversarial set of 2Wiki and the performance drop.
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & **Task** & \multicolumn{5}{c}{**Resource (\%) on Four Debiased Sets**} \\ \cline{3-10} & **Setting** & \multicolumn{2}{c}{**AddUnrelated**} & \multicolumn{2}{c}{**AddRelated**} & \multicolumn{2}{c}{**Add2**} & \multicolumn{2}{c}{**Add2Swap**} \\ & & EM & F1 & EM & F1 & EM & F1 & EM & F1 \\ \hline \multirow{4}{*}{2Wiki} & (1) Ans & 13.40 & 12.13 & 3.55 & 3.46 & 12.32 & 11.72 & 18.99 & 17.51 \\ & (2) Ans + Sent & 11.00 & 9.71 & 4.16 & 4.22 & 11.22 & 10.69 & 17.62 & 16.24 \\ & (3) Ans + Ent & **7.73** & **6.94** & **2.80** & **2.77** & **8.38** & **7.76** & **13.12** & **12.21** \\ & (4) Ans + Sent + Ent & 8.86 & 8.11 & 3.16 & 3.13 & 9.09 & 8.58 & 14.53 & 13.77 \\ \hline \multirow{4}{*}{HotpotQA-small} & (1) Ans & 3.01 & 1.53 & 4.04 & 1.50 & 1.65 & 1.01 & 3.96 & 2.47 \\ & (2) Ans + Sent & **1.13** & **1.35** & -0.51 & 0.19 & **0.08** & **0.85** & **1.77** & **1.96** \\ & (3) Ans + Ent & 6.73 & 5.60 & **-0.92** & **0.03** & 4.02 & 3.54 & 6.89 & 5.46 \\ \cline{1-1} & (4) Ans + Sent + Ent & 5.05 & 4.65 & 1.26 & 1.25 & 1.83 & 2.46 & 3.58 & 3.64 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average performance drop from five times running (smaller is better) of the four settings on the four debiased sets of 2Wiki and HotpotQA-small. The best and worst scores are boldfaced and underlined, respectively.
### Analyses
**Details of RQ2.** To investigate the results concerning RQ2 in more depth, we first analyze the position biases of different types of questions in 2Wiki and HotpotQA-small. We find that the comparison questions have more position biases than the bridge questions in both 2Wiki and HotpotQA-small (Appendix B.5). To evaluate the effectiveness of the position bias for each type of question, we evaluate the four settings of the model on the four debiased sets for each type of question in both datasets. All the results are presented in Appendix B.5.
For 2Wiki, we find that most of the answers are in the first sentences in the comparison questions. This large bias is the main reason for the significant reduction in the scores in the comparison questions. 2Wiki has 46.0% of comparison questions. The reduction in comparison questions contributes to the reduction in the entire dataset. In other words, the results of 2Wiki are affected by those of the comparison questions. HotpotQA-small has only 22.0% of comparison questions, and the position bias in the comparison questions was not sufficiently large. Therefore, the position bias does not have a significant impact on the main QA task. In other words, the UR tasks do not have a significant effect.
**Details of RQ3.** The adversarial questions used in RQ3 are the sub-questions in the QA process for bridge questions and the inverted questions for comparison questions. We observe that the triple in the entity-level task is helpful in answering the sub-questions. For example, the triple is: (_Charles of Valois, father, Philip III of France_) and the sub-question is "_Who is the father of Charles of Valois?_". To understand more about the behavior of the model, we analyze the results from 2Wiki in two settings: (3) Ans + Ent and (4) Ans + Sent + Ent. Table 6 presents the detailed results for these two settings. We find that correct reconstruction of the entity-level reasoning task contributes to finding the correct answer only in 32.8% of cases in setting #3 and only in 37.5% of cases in setting #4. Entity-level reasoning in the form of triples has no significant effect on the main QA process. Several examples are presented in Appendix B.5.
We conjecture that there are three possible reasons why the UR tasks cannot contribute to the adversarial dataset. The first one is the difference in the form and design of the tasks. Specifically, the entity-level reasoning task is formulated as a relation extraction task; the input is a pair of entities, and the output is a relation label. Meanwhile, the adversarial dataset is formulated as a QA task; the input is a natural language question, and the output is an answer. The second reason is the incompleteness of the entity-level reasoning information. As discussed in Ho et al. (2022), the entity-level reasoning in the comparison questions does not describe the full path from question to answer, and other reasoning operations are required to obtain the answer. The final reason is the manner in which we utilize the entity-level reasoning information. Our model does not consider the order of the triples in the reasoning chain. For example, we do not consider the order of the two steps in Figure 1(b). We hope that our research will inspire future studies to investigate the effectiveness of the UR tasks in the form of a natural language question, which has the
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Task** & **Correct** & **Correct** & **Correct Both** \\
**Setting** & **Ans** & **Ent** & **Ans \& Ent** \\ \hline (3) Ans + Ent & 4,109 & 6,851 & 2,249 (32.8\%) \\ (4) Ans + Sent + Ent & 4,300 & 6,450 & 2,420 (37.5\%) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Number of correct predicted answers, number of correct predicted entity-level reasoning, and number of examples that have both correct predicted answers and correct predicted entity-level reasoning.
\begin{table}
\begin{tabular}{l c c|c c c|c c c c} \hline \hline
**Task** & \multicolumn{2}{c}{**Dev**} & \multicolumn{2}{c}{**Dev-Adver**} & \multicolumn{2}{c}{**Adver\(\downarrow\) (\%)**} & \multicolumn{2}{c}{**Dev-Adver-val**} & \multicolumn{2}{c}{**Adver-val\(\downarrow\) (\%)**} \\ \cline{2-10}
**Setting** & EM & F1 & EM & F1 & EM & F1 & EM & F1 & EM & F1 \\ \hline (1) Ans & 52.89 & 66.43 & 40.36 & 51.23 & 23.69 & **22.88** & 37.31 & 46.69 & **29.46** & **29.72** \\ (2) Ans + Sent & 54.42 & 69.03 & 41.73 & 52.50 & 23.32 & 23.95 & 34.33 & 43.86 & 36.92 & 36.46 \\ (3) Ans + Ent & 54.74 & 69.08 & 42.79 & 52.16 & **21.83** & 24.49 & 27.61 & 36.86 & 49.56 & 46.64 \\ (4) Ans + Sent + Ent & 54.74 & 69.44 & 40.52 & 51.14 & 25.98 & 26.35 & 31.34 & 38.22 & 42.75 & 44.96 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of our model in the dev. and two dev-adversarial sets of HotpotQA-small. ‘Adver’ denotes adversarial and ‘Adver-val’ denotes the adversarial set that was validated by crowd workers.
same form as a multi-hop QA question.
## 6 Related Work
**Multi-hop Datasets and Analyses.** To test the reasoning abilities of the models, many multi-hop QA datasets Welbl et al. (2018); Talmor and Berant (2018); Yang et al. (2018) have been proposed. Recently, Trivedi et al. (2022) introduced MuSiQue, a multi-hop dataset constructed from a composition of single-hop questions. The reason why we do not conduct experiments on MuSiQue is explained in the limitations section.
In addition to Tang et al. (2021) and Ho et al. (2022), which are most similar to our research and were mentioned in the Introduction, there are other existing studies Chen and Durrett (2019); Jiang and Bansal (2019); Min et al. (2019); Trivedi et al. (2020) on the analysis and investigation of multi-hop datasets and models. However, most of them do not utilize the internal reasoning information when answering questions.
**Multi-hop Models.** Various directions have been proposed for solving multi-hop datasets, including question decomposition Talmor and Berant (2018); Jiang and Bansal (2019); Min et al. (2019); Perez et al. (2020); Wolfson et al. (2020); Fu et al. (2021), iterative retrieval Feldman and El-Yaniv (2019); Asai et al. (2020); Qi et al. (2021), graph neural networks Song et al. (2018); De Cao et al. (2019); Ding et al. (2019); Qiu et al. (2019); Tu et al. (2019); Fang et al. (2020), and other approaches such as single-hop based models Yang et al. (2018); Nishida et al. (2019) or transformer-based models Devlin et al. (2019); Zaheer et al. (2020). Our model is based on the BigBird transformer model.
**Other QA Reasoning Datasets.** In addition to multi-hop reasoning datasets, several other existing datasets also aim to evaluate the reasoning abilities of the models. Some of them are: DROP Dua et al. (2019) for numerical reasoning; CLUTRR Sinha et al. (2019), ReClor Yu et al. (2020), and LogiQA Liu et al. (2020) for logical reasoning; Quoref Dasigi et al. (2019) for coreference reasoning; CommonsenseQA Talmor et al. (2019), MCScript2.0 Ostermann et al. (2019), and CosmosQA Huang et al. (2019) for commonsense reasoning. Many of these datasets consist of only a single paragraph in the input or lack explanation information that describes the reasoning process from question to answer. However, our focus is on multi-hop reasoning datasets that contain multiple paragraphs in the input and provide explanatory information for the QA process.
## 7 Conclusion
We analyze the effectiveness of the underlying reasoning tasks using two multi-hop datasets: 2Wiki and HotpotQA-small. The results reveal that the underlying reasoning tasks can improve QA performance. Using four debiased sets, we demonstrate that the underlying reasoning tasks can reduce the reasoning shortcuts of the QA task. The results also reveal that the underlying reasoning tasks do not make the models more robust on adversarial examples, such as sub-questions and inverted questions. We encourage future studies to investigate the effectiveness of the entity-level reasoning task in the form of sub-questions.
### Limitations
Our study has two main limitations. The first one is the small size of HotpotQA-small. Currently, there are no other multi-hop datasets that contain a large number of examples with the entity-level reasoning prediction task. MuSiQue is the most promising option. The entity-level reasoning information in MuSiQue includes two types of formats: triple format and natural language question format. We do not experiment with MuSiQue because the number of examples that have entity-level reasoning information in the form of a triple is small: 2,253 out of 19,938 in the training set and 212 out of 2,417 in the dev. set.
The second limitation is that our model does not consider the order of the triples in the entity-level reasoning prediction task. As shown in Figure 1(b), the two triples are ordered. However, our model formulates the entity-level prediction task as a relation extraction task. We predict a relation given the two entities detected by the NER models. Therefore, the order of the triples is not considered. We conjecture that this may be one of the reasons why the entity-level reasoning prediction task (e.g., a triple _(Film A, director, D)_) does not support the model when answering sub-questions (e.g., _Who is the director of Film A?_) using the same information.
### Acknowledgments
We would like to thank Viktor Schlegel and the anonymous reviewers for their valuable comments and suggestions. This work was supported by
JSPS KAKENHI Grant Numbers 21H03502 and 22K17954 and JST AIP Trilateral AI Research and PRESTO Grant Number JPMJCR20G9.
|
2304.08111
|
Leveraging Multi-view Data for Improved Detection Performance: An
Industrial Use Case
|
Printed circuit boards (PCBs) are essential components of electronic devices,
and ensuring their quality is crucial in their production. However, the vast
variety of components and PCBs manufactured by different companies makes it
challenging to adapt to production lines with speed demands. To address this
challenge, we present a multi-view object detection framework that offers a
fast and precise solution. We introduce a novel multi-view dataset with
semi-automatic ground-truth data, which results in significant labeling
resource savings. Labeling PCB boards for object detection is a challenging
task due to the high density of components and the small size of the objects,
which makes it difficult to identify and label them accurately. By training an
object detector model with multi-view data, we achieve improved performance
over single-view images. To further enhance the accuracy, we develop a
multi-view inference method that aggregates results from different viewpoints.
Our experiments demonstrate a 15% improvement in mAP for detecting components
that range in size from 0.5 to 27.0 mm.
|
Faranak Shamsafar, Sunil Jaiswal, Benjamin Kelkel, Kireeti Bodduna, Klaus Illgner-Fehns
|
2023-04-17T09:41:37Z
|
http://arxiv.org/abs/2304.08111v1
|
# Leveraging Multi-view Data for Improved Detection Performance: An Industrial Use Case
###### Abstract
Printed circuit boards (PCBs) are essential components of electronic devices, and ensuring their quality is crucial in their production. However, the vast variety of components and PCBs manufactured by different companies makes it challenging to adapt to production lines with speed demands. To address this challenge, we present a multi-view object detection framework that offers a fast and precise solution. We introduce a novel multi-view dataset with semi-automatic ground-truth data, which results in significant labeling resource savings. Labeling PCB boards for object detection is a challenging task due to the high density of components and the small size of the objects, which makes it difficult to identify and label them accurately. By training an object detector model with multi-view data, we achieve improved performance over single-view images. To further enhance the accuracy, we develop a multi-view inference method that aggregates results from different viewpoints. Our experiments demonstrate a 15% improvement in mAP for detecting components that range in size from 0.5 to 27.0 \(mm\).
## 1 Introduction
Every electronic device, including personal computers, televisions, or advanced industrial, transportation equipment, has a printed circuit board (PCB). The PCB is ultimately responsible for the functionality of the whole system, so ensuring PCB quality in a timely and precise manner is essential. In industrial environments, human-operated and manual PCB analysis are inefficient, expensive, and suffer from a high error rate and subjective accuracy. This makes the _automatic_ inspection of PCBs essential, which is accomplished primarily through visual analysis.
Generally, PCB inspection can be carried out by using different imaging modalities, _i.e._ gray-scale/RGB images [15, 24], depth maps [5, 10], or infrared images [6, 11]. Although RGB data is challenging due to the variety of illuminations, reflections, and colors of components among similar types, the images under visible light present rich data for PCB analysis. Nevertheless, PCB design sets vary tremendously based on their functionality, manufacturer, and component variations and hence, more complexity exists in PCB images compared to images in other common computer vision scenarios.
Referencing a PCB is a shallow approach for PCB inspection, in which the test image of the PCB must strictly match the reference image of the same PCB. Alternatively, in earlier works, indirect abstract information was derived from PCB images, such as the number of objects [22], the number of connected holes in the PCB [21], or solder joint locations [16]. On the basis of classical image processing techniques, [3] used the normalized cross correlation template-matching and proposed a method for constraining the genetic algorithm search space to reduce computational calculations. Authors in [23] used color distribution patterns for recognizing the components. Similarly, in [12], authors proposed a color distribution-based segmentation for resistors and integrated circuits (IC). Yet, all these referential techniques lead to PCB-specific solutions.
Typically, PCB inspection based on component placement is the most reliable approach that can be exploited in various PCB-related applications, including: 1) PCB fabrication and assembly in PCB manufacturers, 2) Optimization of PCB component placement, 3) Checking for malicious inputs to the PCBs for security issues, 4) PCB recycling: Recovering the reusable components and precious materials and separating the toxic substances, and 5) Quality control in PCB manufacturers or in electronics producers (before PCBs are assembled into electronic devices), _e.g._ locating the missing or misaligned components.
After computer vision witnessed its renaissance via deep learning, different approaches emerged for automated PCB inspection. A two-stage approach was proposed in [14] that firstly extracted regions and then applied a classification using a convolutional neural network (CNN). [13] integrated a geometric attention-guided mask branch into an
object detector for IC detection. In [15], the capability of classification on the component images of six classes was explored. In [9], a multiple-stage approach is proposed in which a class-agnostic region proposal network is followed by a low-shot similarity prediction classifier and a graph-based neural network to refine features, which showed poor performance for small components. In these deep learning-based solutions providing suitable and adequate images still remains an open issue. Obtaining high-quality ground-truth annotations is another level of challenge for deep learning-based PCB analysis.
In this work, we address both _image modality_ and _image analysis_ for PCB inspection and propose an approach that processes the whole image in an end-to-end framework, targeted to detect and identify the components of a PCB with high accuracy. Evidently, the creation of a dataset for deep learning-based PCB object detection is highly expensive and time-consuming. Therefore, we develop a framework for hardware-augmented multi-view training data generation that includes semi-automatic labeling, making the annotation process much smoother with consistent labels. Using this multi-view dataset, we gain a significant improvement in the model performance. Moreover, a multi-view inference module is formulated to accumulate the results from different viewpoints and reach a consensus for more accurate and reliable performance. Our framework can effectively inspect the PCBs with component dimensions as small as 0.5 \(mm\). In addition, the approach is applicable to inspection of different PCB samples in open-ended and outsourced PCB collections and does not rely on a specific type or a reference sample. An overview of our model is shown in Fig. 1.
## 2 Methodology
### Hardware setup
To collect images, we used the 3rd generation K\(|\)Lens (80 \(mm\) focal length) mounted on a 61 MP full-frame camera (hr455CXGE, SVS-VISTEK) (Fig. 2(a)). K\(|\)Lens [1, 4, 7] via its special technology records multi-view images and these images can be used in its software to compute disparity maps. Multi-view images are formed in a \(3\times 3\) grid of images, named kaleidoscopic image. The technology involves refracting light rays from the main lens onto an intermediate image plane via a mirror system. The rays are then projected onto the sensor in a way that they are split into nine separate images. This is similar to simultaneous shooting by nine cameras placed at very short distances from each other. However, compared to a multi-array camera setup, calibration and rectification are well-developed and stable in K\(|\)Lens. This setup is also highly compact for capturing multi-view images, which is particularly useful for tiny and close-range objects, _i.e_. macro photography.
For a larger magnification, a close-up lens with 5 dioptres (MARUMI) was attached to the lens. The PCBs were illuminated in a bright field setup with a flat top light (DTL 1010 WTS, MBJ) with white LEDs. The light source was further enhanced with a polarization filter to reduce reflections from pins. The working distance was set to around 100 \(mm\).
### K\(|\)Lens PCB dataset
Our goal is not to limit the dataset to a specific manufacturer's PCBs or a particular PCB. This attitude introduces wide variance in PCBs in terms of design, component types, color, _etc_. It may be possible to generate datasets synthetically. However, this approach requires CAD models, is PCB-specific, and cannot represent real-world data distribution. Moreover, the manual labeling effort for these images remains a challenge.
In this context, we create a PCB dataset with the following data types: 1) High-resolution RGB images, 2) High precision depth maps to the accuracy of about 70 \(\mu m\), and 3) Multi-view RGB samples (nine images) obtained in each image shooting. We leverage the merit of having all these data types in one shot to analyze PCBs using deep learning.
Five PCBs with different functionalities are used for image collection. The lateral dimensions of the PCB components vary from \(0.5\times 1.0\) to \(27.0\times 27.0\,mm^{2}\). In each shot, a region of about \(38\times 56\,mm^{2}\) from the PCBs is recorded in a scanning manner with an approximate overlap of \(15\) to \(25\,mm\). Overall, a total of 262 kaleidoscopic images (\(3\times 3\) grid multi-view) are captured (sample shown in Fig. 2(b)).
To extract the individual images, the software provided by the K\(|\)Lens system mirrors, rectifies, and crops the viewpoints (Fig. 2(c)). Note that the individual images record the scene from slightly different perspectives, from which the center viewpoint presents the least distortion and is referred to as single-view hereon. We also record the disparity maps, which show the displacements of image content in a pair of rectified images, namely the displacements between the center-view image and every other viewpoint. A sample disparity map from the center viewpoint (#5) to viewpoint #6 is shown in Fig. 3.
Hence, the recorded kaleidoscopic image set yields the data samples as 262 single images, 2358 multi-view images, and 3144 disparity maps (Tab. 1). The disparity data for the corner images (#1, #3, #7, #9) include the displacements in two directions, _i.e_. in \(x\)- and \(y\)-axis, with respect to the center viewpoint.
#### 2.2.1 Semi-automatic labeling
Our setup takes advantage of the multi-view data recording to capture 9x more data per image shot. This allows us to increase the available images with no extra cost or effort. In
our PCB analysis framework, we define a total of 11 classes that need to be annotated on 2358 images. Obviously, with an increasing number of images and classes, and considering the difficulty of recognizing PCB components, annotation becomes increasingly complex and time-consuming. Thus, we develop a semi-automatic labeling technique for object detection to produce high-quality labeled data and efficiently save time and resources, as follows:
- **Active labeling for single-view (center-view) images:** 1) A subset of center-view images (40%) are manually annotated from scratch. 2) An object detector model is trained on these images. 3) The trained network is used for predicting the initial labels for the remaining center-view images. 4) The initial labels on center-view images are checked and modified.
- **Depth-based label generalization for multi-view images:** With the help of disparity maps of the center viewpoint to the other viewpoints, each annotated bounding box in the center-view is translated in the proper direction along \(x\)- and/or \(y\)-axis. The value of the translation for each box is derived from the center of the bounding box in the disparity map. Therefore, in each kaleidoscopic image, the labels of the center-view are generalized to the other eight viewpoints using the corresponding disparity map. This means that for each image, we obtain the annotations of eight other images in a fully automatic manner. Namely, we define \(B_{I_{v}}\) as the set of bounding boxes in viewpoint image \(I_{v}\), with \(v\in\{1,2,...,9\}\) as the viewpoint index:
\[B_{I_{v}}=\{b_{1}^{I_{v}},b_{2}^{I_{v}},...,b_{n}^{I_{v}}\}. \tag{1}\]
Here, \(n\) is the number of annotations (bounding boxes). The \(i\)-th bounding box in image \(I_{v}\) is formulated as \(b_{i}^{I_{v}}=(c_{i}^{I_{v}},x_{i}^{I_{v}},y_{i}^{I_{v}},w_{i}^{I_{v}},h_{i}^ {I_{v}})\), where \((x_{i}^{I_{v}},y_{i}^{I_{v}})\) and \((w_{i}^{I_{v}},h_{i}^{I_{v}})\) indicate the center coordinates and the width/height of the
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Single-view** & **Multi-view (9x)** & **Disparity maps** \\ \hline
262 & 2358 & 3144 \\ \hline \hline \end{tabular}
\end{table}
Table 1: K\(|\)Lens PCB dataset collected from five PCBs using multi-view imaging.
Figure 1: Block diagram of the proposed component localization and identification using multi-view data. \(I_{i}\) denotes the input images from different viewpoints and \(I_{5}\) indicates the center viewpoint image. \(B_{i}\) is the set of predicted results (bounding boxes) from the \(i\)-th image. An object detector is applied on each viewpoint independently following a warping function on the results using disparity maps. After aligning the results from the center viewpoint, the ensemble of predictions are aggregated together.
Figure 2: (a) Our hardware setup for PCB analysis; (b) Sample kaleidoscopic image (\(3\times 3\) grid multi-view images); (c) Processed multi-view kaleidoscopic image after mirroring and rectifying the individual images and cutting out the distorted regions. Note that the individual images are piled up for the sake of comparison to the original kaleidoscopic image.
bounding box, and \(c_{i}^{I_{v}}\) denotes the ground-truth class category. We define a _Forward Warping_ function from center-view to viewpoint \(v\), _i.e_. \(I_{5}\to I_{v}\), in order to compute the _disparity-displaced_ bounding boxes:
\[\text{FW}_{B_{I_{5}}\to B_{I_{v}}}=\{b_{1}^{I_{5}\to I_{v}},b_{2}^{I_{5}\to I_{v}},...,b_{n}^{I_{5}\to I_{v}}\}, \tag{2}\]
\[b_{i}^{I_{5}\to I_{v}}=(c_{i}^{I_{5}},x_{i}^{I_{5}}-m,y_{i}^{I_{5}}-n,w_{i}^{I _{5}},h_{i}^{I_{5}}), \tag{3}\]
\[\begin{split} m&=max(0,d_{I_{5}\to I_{v}}^{X}(x_{i}^{I _{5}},y_{i}^{I_{5}})),\\ n&=max(0,d_{I_{5}\to I_{v}}^{Y}(x_{i}^{I_{5}},y_{i}^{I_{5}})).\end{split} \tag{4}\]
Here, \(m\) and \(n\) indicate the amount of displacement in \(x\) and \(y\) directions, computed from the corresponding disparity map from center-view image to viewpoint \(v\), _i.e_. \(d_{I_{5}\to I_{v}}^{X}\) and \(d_{I_{5}\to I_{v}}^{Y}\). This process is summarized in Fig. 3. It is important to note that the disparity maps, which link image contents from different perspectives, could make this approach feasible. Finally, the generated labels are checked to see if any components near the image borders are missed or need modification in case they are invisible in the center image. In this framework, by manually labeling only 105 images, we get the labels of 2253 other images automatically.
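A minimal numpy sketch of the forward warp in Eqs. (2)-(4); it assumes disparity maps are indexed as d[row, column] in pixel units and that boxes are stored as (class, x, y, w, h) with center coordinates:

```python
import numpy as np

def forward_warp_boxes(boxes, disp_x, disp_y=None):
    """Translate center-view boxes into viewpoint v using the disparity value at
    each box centre. disp_y is only needed for corner viewpoints, which shift in
    both axes."""
    warped = []
    for cls, x, y, w, h in boxes:
        m = max(0.0, float(disp_x[int(y), int(x)]))
        n = max(0.0, float(disp_y[int(y), int(x)])) if disp_y is not None else 0.0
        warped.append((cls, x - m, y - n, w, h))
    return warped
```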
After analyzing the experimented boards, 11 classes are defined for their components: \(C\) (Capacitor), \(D\) (Diode), _IC_ (Integrated circuit), \(L\) (Inductor), \(R\) (Resistor), _XTAL_ (Crystals oscillator), _LED_ (Light-emitting diode), \(Q\) (Transistor), _AL_C_ (Aluminum capacitor), _RY_ (Relay), and _FI_ (Filter). An example of each component is shown in Fig. 4. Some classes, such as transistors, diodes, and inductors, could vary in appearance, shape, and text. Since aluminum capacitors are completely different from ceramic capacitors, an explicit class is assigned for them to enhance the network's learning capabilities.
Figure 5 summarizes the statistics of various components after the labeling process in both single-view and multi-view images. \(C\) and \(R\) classes present more instances; others, with lower numbers, can be almost 9x greater in multi-view images. In Fig. 6, a sample component (aluminum capacitor) from different perspectives in multi-view images is illustrated.
### Multi-view training
In order to identify components on PCBs, we use the object detection task, which involves 2D localization (defined by bounding boxes) and classification. To this end, we utilize the state-of-the-art model, YOLOv5 [8], which has compound-scaled variants by introducing different numbers of layers and filters into its baseline architecture. We exploit the nano version since, with only 1.9 million parameters, it suits a fast-operational industrial system. Also, two image dimensions, \(640\) and \(1024\), are considered for analysis. As part of YOLOv5, functions such as random HSV color space, left-right flipping, translation, and scaling are applied for online unique mosaic augmentation. When it comes to image augmentation, smaller models, _e.g_. YOLOv5n, yield better results with less augmentation because overfitting can be avoided. However, with our hardware-assisted MV dataset, we can increase the number of samples safely with new real-world images.
Figure 4: Samples of 11 component types for localization and classification. The following abbreviations are used for these classes: C (Capacitor), D (Diode), IC (Integrated circuit), L (Inductor), R (Resistor), XTAL (Crystals oscillator), LED (Light-emitting diode), Q (Transistor), AL_C (Aluminum capacitor), RY (Relay), and FI (Filter).
Figure 5: Number of instances in defined classes for single-view and multi-view images. A logarithmic scale is used in the \(y\)-axis.
Figure 3: Label Generalization: The disparity map from center viewpoint #5 to viewpoint #6 is generated by the system software. Each bounding box in the viewpoint #5 is translated by the disparity map value in the center of the box. The translated labels align well with viewpoint #6.
### Multi-view inference
This part introduces a novel approach for multi-view object detection, beyond training with multi-view data. Specifically, this technique applies in the _inference_ phase and requires _multi-view test_ data with a linkage between viewpoints spaces. The disparity map allows us to establish this connection.
To this end, we apply the trained model to each viewpoint image independently (center-view and other eight viewpoints), obtaining a total of nine object detection outputs. Probability scores and classification correctness can differ depending on the viewpoint. We leverage the ensemble of information from each viewpoint to reach a consensus from multi-view data, as follows:
* _Label warping_: When the model is applied to nine multi-view images, it generates an ensemble of boxes that need to be warped and aligned into a single view. To achieve this, we warp all bounding boxes to the center-view images. The disparity maps allow us to align the bounding boxes with the center-view. This is the reverse of the process explained for semi-automatic labeling in Sec. 2.2.1. Instead of generalizing labels from the center-view to other viewpoints using disparity maps, we warp the predicted labels from other viewpoints to the center-view. Thus, we can define a _Backward Warping_ function similar to (2) that applies to each of the eight viewpoints surrounding the center-view.
* _Bounding boxes fusion_: Following the warping, each object is accompanied by several predictions, which should be merged into a final prediction. A method such as Non-Maximum Suppression (NMS) [18] can accomplish this by keeping only the bounding box with the highest Jaccard similarity score. Due to the strict suppression of other boxes by NMS, it is not suitable for our use-case, as we intend to systematically combine information from different predictions. A more advanced version of NMS is Non-Maximum Weighted (NMW) [19], which is based on a weighted averaging of boxes to predict the average box rather than keeping the highest-scoring one. In [20], the authors proposed a similar strategy to fuse the ensemble of boxes, called Weighted Boxes Fusion (WBF), and showed that WBF outperforms NMW: it uses the confidence scores of the predicted bounding boxes to construct average boxes, and candidates are compared against an average box that is updated at each step of the comparison.
* _Intermixing the center-view coordinates_: Since the center-view coordinates are well-encompassing for components near the image border, we intermix them with the results from bounding boxes fusion. Namely, the class category and confidence score come from bounding boxes fusion, and the spatial coordinates from the center-viewpoint boxes. This way, we define the intermixing function as follows: \[\text{IM}(b_{i}^{bbf},b_{i}^{I_{5}})=(c_{i}^{bbf},x_{i}^{I_{5}},y_{i}^{I_{5} },w_{i}^{I_{5}},h_{i}^{I_{5}},s_{i}^{bbf}),\] (5) where \(b_{i}^{bbf}\), \(b_{i}^{I_{5}}\) represent the \(i\)-th bounding box from bounding boxes fusion and center-view image, respectively.
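Putting the three steps above together, the following sketch illustrates the inference-time fusion. It assumes boxes in the \((c,x,y,w,h,s)\) format of Eq. (5), an \(H\times W\times 2\) disparity/shift map per side viewpoint, and a hypothetical `fuse_boxes` helper standing in for an off-the-shelf WBF routine; it is an illustration, not the exact code used in this work.

```python
def backward_warp(box, disparity):
    """Warp one predicted box (c, x, y, w, h, s) from a side viewpoint back to
    the center view by the disparity sampled at the box center (the reverse of
    the label-generalization step of Sec. 2.2.1)."""
    c, x, y, w, h, s = box
    dx, dy = disparity[int(y + h / 2), int(x + w / 2)]  # H x W x 2 array assumed
    return (c, x - dx, y - dy, w, h, s)

def intermix(box_bbf, box_center):
    """Eq. (5): class and confidence from the fused box, spatial coordinates
    from the matching center-view box."""
    c_f, _, _, _, _, s_f = box_bbf
    _, x_c, y_c, w_c, h_c, _ = box_center
    return (c_f, x_c, y_c, w_c, h_c, s_f)

def multi_view_inference(preds_per_view, disparities, center_preds, fuse_boxes):
    """preds_per_view: detections of the eight side views; disparities: one map
    per side view; fuse_boxes: any boxes-fusion routine (e.g. WBF)."""
    warped = [backward_warp(b, d)
              for boxes, d in zip(preds_per_view, disparities) for b in boxes]
    fused = fuse_boxes(warped + list(center_preds))
    # Matching fused boxes to center-view boxes by index here for brevity;
    # in practice the pairing is done by spatial overlap.
    return [intermix(bf, bc) for bf, bc in zip(fused, center_preds)]
```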
## 3 Experimental results and discussion
For training and validation, we adopt \(k\)-fold cross-validation to robustly estimate the performance measures. More precisely, we perform a 10-fold random split on the center-view (single-view) images, which we call the single-view (SV) dataset. After splitting the samples, the remaining viewpoint sets are added to the corresponding split (either the training set or the test set) to build the multi-view (MV) dataset. As a result, we prevent viewpoint-related bias by ensuring that all viewpoints of a kaleidoscopic shot belong to either the training set or the validation set; a sketch of such a grouped split is given below.
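One way to realize such a viewpoint-grouped split (a sketch, not necessarily the exact pipeline used here) is to key every image by the kaleidoscopic shot it belongs to and let scikit-learn's `GroupKFold` keep whole shots on one side of each fold; `images` and `shot_ids` are hypothetical names.

```python
from sklearn.model_selection import GroupKFold

def split_by_shot(images, shot_ids, n_splits=10):
    """Yield (train, validation) lists of image paths such that the center view
    and its eight surrounding viewpoints always land in the same fold."""
    gkf = GroupKFold(n_splits=n_splits)
    for train_idx, val_idx in gkf.split(images, groups=shot_ids):
        yield ([images[i] for i in train_idx],
               [images[i] for i in val_idx])
```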
Figure 6: An annotated aluminum capacitor from different perspectives in multi-view data. Bounding boxes of the eight surrounding viewpoints are obtained via the proposed depth-based label generalization process.
Evaluation metrics include precision, recall, [email protected], and [email protected]:0.95, averaged over the 10-fold cross-validation. mAP [2] is the most effective metric for evaluating object detectors.
- **Multi-view training:** Table 2 compares single-view (center-view) training with multi-view training (9x more training data) at two image sizes to demonstrate the effect of multi-view training. Single-view (SV) training is analogous to regular imaging without multi-view technology, and we consider it the baseline model. By training the model with 9x more data, it achieves higher values in all metrics. This can be attributed to the enriched information about the objects and the background, which is highly beneficial for object detection. Namely, the increased number of instances for each component class in the MV dataset is crucial for training a detection model, as it provides more varied and diverse examples of the components, enabling the model to learn more discriminative features for accurate detection. The MV model also provides more robust results when images are downscaled; that is, its drop in accuracy due to image size reduction is smaller than that of the SV model, which is beneficial for faster predictions or for resource-limited edge devices.
Figure 7 shows a sample result of the SV and MV models and the activation maps of their predictions. In order to gain a better understanding of the models' feature representations, we used EigenCAM [17] to display activation maps. Activation maps are a visualization technique used in deep learning to identify the most discriminative regions of an image that contribute to a network's decision-making process. Based on the regions highlighted in the activation maps generated from the SV and MV models, the MV model exhibits stronger activation than the SV model. This suggests that the model with higher activation is more sensitive to the relevant image features, which leads to higher accuracy.
- **Multi-view inference:** For each center-view image, the other viewpoints are available for applying the MVI method. More specifically, in MVI we assume multi-view test data is available; a model is trained on multi-view data; the model is tested on all nine images of a test sample; and finally the results are fused. In Tab. 2, we observe that multi-view inference achieves superior performance over SV and MV. In particular, using the advanced WBF, we can gain a
Figure 7: Qualitative performance using SV (_Left_) and MV (_Right_) models. The top row and bottom row respectively display the detection results and the activation maps. MV model exhibits more activation than the SV model, which leads to obtaining more accurate predictions.
further \(3.14\) points of [email protected]:0.95 compared to the vanilla MV model at the 416\(\times\)640 image size.
Table 3 presents the average results for each component individually in terms of [email protected]:0.95. The table confirms the increase in performance by MV and MVI for all component types. Notably, the improvement is especially significant for components that are difficult to detect. For example, for the XTAL and LED components at the 672\(\times\)1024 image size, [email protected]:0.95 increases from 27.82% to 86.01% and from 59.08% to 90.50%, respectively.
## 4 Conclusion
We presented a multi-view framework for analyzing PCB components. Our method works end-to-end, rather than extracting component regions first and then classifying them. The performance improves when multi-view images are used instead of single-view images. Furthermore, ensemble results of multi-view data are aggregated in the inference phase. We also developed a semi-automated labeling process to save the effort of manual labeling and ensure consistency of annotations for the object detection task.
## Acknowledgment
The authors would like to thank POLARIXPARTNER GmbH, [https://www.polarixpartner.com](https://www.polarixpartner.com), for providing the PCBs and their component data.
|
2306.11239
|
Representations of flat virtual braids by automorphisms of free group
|
Representations of braid group $B_n$ on $n \geq 2$ strands by automorphisms
of a free group of rank $n$ go back to Artin (1925). In 1991 Kauffman
introduced a theory of virtual braids and virtual knots and links. The virtual
braid group $VB_n$ on $n \geq 2$ strands is an extension of the classical braid
group $B_n$ by the symmetric group $S_n$. In this paper we consider flat
virtual braid groups $FVB_n$ on $n\geq 2$ strands and construct a family of
representations of $FVB_n$ by automorphisms of free groups of rank $2n$. It has
been established that these representations do not preserve the forbidden
relations between classical and virtual generators. We investigated some
algebraic properties of the constructed representations. In particular, we
established conditions of faithfulness in case $n=2$ and proved that the kernel
contains a free group of rank two for $n\ge3$.
|
Bogdan Chuzhinov, Andrey Vesnin
|
2023-06-20T02:34:24Z
|
http://arxiv.org/abs/2306.11239v1
|
# Representations of flat virtual braids by automorphisms of free group
###### Abstract.
Representations of braid group \(B_{n}\) on \(n\geq 2\) strands by automorphisms of a free group of rank \(n\) go back to Artin (1925). In 1991 Kauffman introduced a theory of virtual braids and virtual knots and links. The virtual braid group \(VB_{n}\) on \(n\geq 2\) strands is an extension of the classical braid group \(B_{n}\) by the symmetric group \(S_{n}\). In this paper we consider flat virtual braid groups \(FVB_{n}\) on \(n\geq 2\) strands and construct a family of representations of \(FVB_{n}\) by automorphisms of free groups of rank \(2n\). It has been established that these representations do not preserve the forbidden relations between classical and virtual generators. We investigated some algebraic properties of the constructed representations. In particular, we established conditions of faithfulness in case \(n=2\) and proved that the kernel contains a free group of rank two for \(n\geq 3\).
Key words and phrases:braid, virtual braid, flat virtual braid group, automorphism of free group 2000 Mathematics Subject Classification: 57K12 A.V.'s work was carried out in the framework of the State Task to the Sobolev Institute of Mathematics (Project FWNF-2022-0004).
There is a very nice relation between braid groups and knot theory based on _Alexander's theorem_ and _Markov's theorem_[2]. Invariants arising from representations of braid groups play an important role in classical knot theory and its generalizations.
Artin discovered a faithful representation \(\varphi_{n}\colon B_{n}\to\operatorname{Aut}(\mathbb{F}_{n})\), where \(\mathbb{F}_{n}=\langle x_{1},\ldots,x_{n}\rangle\) is the free group of rank \(n\). The homomorphism \(\varphi_{n}\) maps the generator \(\sigma_{i}\in B_{n}\) to the following automorphism \(\varphi_{n}(\sigma_{i})\):
\[\varphi_{n}(\sigma_{i}):\begin{cases}x_{i}\mapsto x_{i+1},\\ x_{i+1}\mapsto x_{i+1}^{-1}x_{i}x_{i+1},\\ x_{j}\mapsto x_{j},\qquad j\neq i,i+1.\end{cases}\]
Note that for each \(i\), \(1\leq i\leq n-1\), one has \(\varphi_{n}(\sigma_{i})(x_{1}\cdots x_{n})=x_{1}\cdots x_{n}\). Therefore \(\varphi_{n}(\beta)(x_{1}\cdots x_{n})=x_{1}\cdots x_{n}\) for every \(\beta\in B_{n}\). Moreover, it was shown by Artin [1, 3] that an automorphism \(g\in\operatorname{Aut}(\mathbb{F}_{n})\) is equal to \(\varphi_{n}(\beta)\) for some \(\beta\in B_{n}\) if and only if the following two conditions are satisfied: (i) \(g(x_{i})\) is a conjugate of some \(x_{j}\) for each \(i=1,\ldots,n\); and (ii) \(g(x_{1}\cdots x_{n})=x_{1}\cdots x_{n}\).
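To make the product-preservation property concrete, here is a small self-contained check (an illustration added for this exposition, not part of the original argument): words in \(\mathbb{F}_{n}\) are encoded as lists of signed indices, \(\varphi_{n}(\sigma_{i})\) is applied letter by letter, and the image of \(x_{1}\cdots x_{n}\) is freely reduced.

```python
def reduce(word):
    """Freely reduce a word given as a list of nonzero ints (i = x_i, -i = x_i^{-1})."""
    out = []
    for g in word:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return out

def inv(word):
    return [-g for g in reversed(word)]

def artin_sigma(i, word):
    """phi_n(sigma_i): x_i -> x_{i+1}, x_{i+1} -> x_{i+1}^{-1} x_i x_{i+1}."""
    image = {i: [i + 1], i + 1: [-(i + 1), i, i + 1]}
    out = []
    for g in word:
        target = image.get(abs(g), [abs(g)])
        out += target if g > 0 else inv(target)
    return reduce(out)

n = 4
full = list(range(1, n + 1))               # the word x_1 x_2 ... x_n
for i in range(1, n):
    assert artin_sigma(i, full) == full    # phi_n(sigma_i) fixes x_1 ... x_n
```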
In what follows, if any automorphism acts on a generator identically, we will not write this action. We write the composition of automorphisms in the order of their application from left to right, namely, \(\varphi\psi(f)=\psi(\varphi(f))\).
Virtual braids were introduced by Kauffman in his founding paper [4] together with virtual knots and links, see also [5]. In the same paper he defined the virtual braid group \(VB_{n}\) on \(n\geq 2\) strands, generated by the elements \(\sigma_{1},\ldots,\sigma_{n-1}\), as in the classical braid group, and \(\rho_{1},\ldots,\rho_{n-1}\), which satisfy the braid relations (1) - (2), the symmetric group relations (3) - (5) and the mixed relations (6) - (7):
\[\rho_{i}^{2} =1, 1\leq i\leq n-1, \tag{3}\] \[\rho_{i}\rho_{j} =\rho_{j}\rho_{i}, |i-j|\geq 2, \tag{4}\] \[\rho_{i}\rho_{i+1}\rho_{i} =\rho_{i+1}\rho_{i}\rho_{i+1}, 1\leq i\leq n-2, \tag{5}\] \[\rho_{i}\sigma_{j} =\sigma_{j}\rho_{i}, |i-j|\geq 2, \tag{6}\] \[\rho_{i}\rho_{i+1}\sigma_{i} =\sigma_{i+1}\rho_{i}\rho_{i+1}, 1\leq i\leq n-2. \tag{7}\]
Generator \(\rho_{i}\in VB_{n}\) is presented geometrically in Figure 2.
Geometric braids corresponding to the mixed relation (7) are presented in Figure 3.
S. Kamada [6] established that the following _Alexander's theorem for virtual braids_ holds: If \(K\) is a virtual link, then for some \(n\) there exists a virtual braid \(\beta\in VB_{n}\) such that \(K\) is the closure of \(\beta\).
It is known [7] that the relations
\[\rho_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\rho_{i+1}, 1\leq i\leq n-2, \tag{8}\] \[\rho_{i+1}\sigma_{i}\sigma_{i+1}=\sigma_{i}\sigma_{i+1}\rho_{i}, 1\leq i\leq n-2. \tag{9}\]
do not hold in the \(VB_{n}\) group. These relations (8) - (9) are called _forbidden relations_, see Figures 4 and 5. The group \(WB_{n}\) is obtained from \(VB_{n}\) by adding the relation (8) and is called _welded braid group_[8]. The same group \(WB_{n}\) is obtained by adding the relation (9) to the group \(VB_{n}\). Adding both relations (8) and (9) to \(VB_{n}\) leads to unknotting transformations for virtual knots and links [7, 9, 6]. Other unknotting operations for links, virtual links and welded links are given, for example, in [10, 11, 12]. Note that the representations \(VB_{n}\to\operatorname{Aut}(\operatorname{G_{n}})\) were constructed, for example, for groups \(G_{n}\) of the following form: \(G_{n}=\mathbb{F}_{n}*\mathbb{Z}^{n+1}\) in [13], \(G_{n}=\mathbb{F}_{n}*\mathbb{Z}^{2}\) in [14], \(G_{n}=\mathbb{F}_{n}*\mathbb{Z}^{2n+1}\) and \(G_{n}=\mathbb{F}_{n}*\mathbb{Z}^{n}\) in [15]. For structural properties and other representations of the virtual braid groups see [16, 17].
In [18] the _group of flat virtual braids_\(FVB_{n}\) on \(n\) strands was introduced as a result of adding the relations (10) to the group \(VB_{n}\):
\[\sigma_{i}^{2}=1,\qquad 1\leq i\leq n-1. \tag{10}\]
We summarize the above discussions in the following definition.
**Definition 1**.: _For \(n\geq 2\) a group with generators \(\sigma_{1},\dots,\sigma_{n-1}\), \(\rho_{1},\dots,\rho_{n-1}\) and the following defining relations:_
\[\begin{array}{ccc}\sigma_{i}^{2}=1,&\rho_{i}^{2}=1,&1\leq i\leq n-1,\\ \sigma_{i}\,\sigma_{i+1}\,\sigma_{i}=\sigma_{i+1}\,\sigma_{i}\,\sigma_{i+1},& \rho_{i}\,\rho_{i+1}\,\rho_{i}=\rho_{i+1}\,\rho_{i}\,\rho_{i+1},&1\leq i\leq n -2,\\ \sigma_{i}\,\sigma_{j}=\sigma_{j}\,\sigma_{i},&\rho_{i}\,\rho_{j}=\rho_{j}\, \rho_{i},&|i-j|\geq 2\end{array}\]
_and_
\[\begin{array}{ccc}\rho_{i}\,\rho_{i+1}\,\sigma_{i}=\sigma_{i+1}\,\rho_{i}\, \rho_{i+1},&1\leq i\leq n-2,\\ \rho_{i}\,\sigma_{j}=\sigma_{j}\,\rho_{i},&|i-j|\geq 2.\end{array}\]
_is called the flat virtual braid group \(FVB_{n}\) on \(n\) strands._
Generator \(\sigma_{i}\in FVB_{n}\) is presented geometrically in Figure 6 and generator \(\rho_{i}\in FVB_{n}\) is presented geometrically in Figure 2.
In [19] the following problem was formulated: does there exist a representation of the group \(FVB_{n}\) by automorphisms of some group for which the forbidden relations do not hold? In [20] such a representation \(\eta_{n}\colon FVB_{n}\to\operatorname{Aut}(\mathbb{F}_{2n})\) was constructed, where
\(\mathbb{F}_{2n}=\langle x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\rangle\) is a free group of rank \(2n\). The homomorphism \(\eta_{n}\) maps generators \(\sigma_{i},\rho_{i}\in FVB_{n}\), where \(i=1,\ldots,n-1\), to the following automorphisms:
\[\eta_{n}(\sigma_{i}):\begin{cases}x_{i}\mapsto x_{i+1}y_{i+1},\\ x_{i+1}\mapsto x_{i}y_{i+1}^{-1},\end{cases}\qquad\eta_{n}(\rho_{i}):\begin{cases} x_{i}\mapsto x_{i+1},\\ x_{i+1}\mapsto x_{i},\\ y_{i}\mapsto y_{i+1},\\ y_{i+1}\mapsto y_{i}.\end{cases} \tag{11}\]
It was shown in [20] that the representation \(\eta_{2}:FVB_{2}\to\operatorname{Aut}(\mathbb{F}_{4})\) is faithful.
In this paper we construct a family of representations of the \(FVB_{n}\) group by automorphisms of the free group \(\mathbb{F}_{2n}=\langle x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\rangle\), which generalize the representation (11). Namely, we consider a family of homomorphisms \(\Theta_{n}:FVB_{n}\to\operatorname{Aut}(\mathbb{F}_{2n})\), which are given by mapping generators \(\sigma_{i},\rho_{i}\in FVB_{n}\), where \(i=1,\ldots,n-1\), to the following automorphisms:
\[\Theta_{n}(\sigma_{i}):\begin{cases}x_{i}\mapsto x_{i+1}\,a_{i}(y_{i},y_{i+1}),\\ x_{i+1}\mapsto x_{i}\,b_{i}(y_{i},y_{i+1}),\end{cases}\qquad\Theta_{n}(\rho_{i} ):\begin{cases}x_{i}\mapsto x_{i+1}\,c_{i}(y_{i},y_{i+1}),\\ x_{i+1}\mapsto x_{i}\,d_{i}(y_{i},y_{i+1}),\\ y_{i}\mapsto y_{i+1},\\ y_{i+1}\mapsto y_{i},\end{cases} \tag{12}\]
where the elements \(a_{i}(y_{i},y_{i+1})\), \(b_{i}(y_{i},y_{i+1})\), \(c_{i}(y_{i},y_{i+1})\) and \(d_{i}(y_{i},y_{i+1})\) are words in a free group of rank two with generators \(\{y_{i},y_{i+1}\}\) for each \(i=1,\ldots,n-1\). Thus, the homomorphisms \(\Theta_{n}\) depend only on the choice of the words \(a_{i},b_{i},c_{i},d_{i}\), which define the locally nontrivial action of the automorphism corresponding to the generator of the group \(FVB_{n}\), and in this sense the homomorphisms \(\Theta_{n}\) are _local homomorphisms_.
The article has the following structure. In Theorem 1 we establish for which \(a_{i}\), \(b_{i}\), \(c_{i}\) and \(d_{i}\) there exists a local homomorphism \(\Theta_{n}\) of the group \(FVB_{n}\) into the automorphism group of the free group \(\mathbb{F}_{2n}\). In Section 3 we obtain results about the structure of the kernel of the homomorphism \(\Theta_{n}\); in particular, in Theorem 3 we describe the kernel of this homomorphism for \(n=2\). In Theorem 4 it will be established that for \(n\geq 3\) the kernel of the homomorphism \(\Theta_{n}\) contains a free group of rank \(2\). We note that it was shown earlier in [20] that for \(n\geq 3\) the kernel of the homomorphism \(\eta_{n}\), which is a special case of \(\Theta_{n}\), contains an infinite cyclic group.
## 2. Existence of local representations
Let \(\mathbb{F}_{2n}\) be a free group of rank \(2n\) with free generators \(x_{1},\ldots,x_{n}\), \(y_{1},\ldots,y_{n}\).
**Theorem 1**.: _Let \(n\geq 2\) and let \(a_{i}(y_{i},y_{i+1})\), \(b_{i}(y_{i},y_{i+1})\), \(c_{i}(y_{i},y_{i+1})\), \(d_{i}(y_{i},y_{i+1})\) be words in a free group of rank two with generators \(\{y_{i},y_{i+1}\}\), where \(1\leq i\leq n-1\). Define the map \(\Theta_{n}:FVB_{n}\to\operatorname{Aut}(\mathbb{F}_{2n})\) by mapping \(\sigma_{i}\) and \(\rho_{i}\) to the automorphisms:_
\[\Theta_{n}(\sigma_{i}):\begin{cases}x_{i}\mapsto x_{i+1}\,a_{i}(y_{i},y_{i+1}),\\ x_{i+1}\mapsto x_{i}\,b_{i}(y_{i},y_{i+1}),\end{cases}\qquad\Theta_{n}(\rho_{i} ):\begin{cases}x_{i}\mapsto x_{i+1}\,c_{i}(y_{i},y_{i+1}),\\ x_{i+1}\mapsto x_{i}\,d_{i}(y_{i},y_{i+1}),\\ y_{i}\mapsto y_{i+1},\\ y_{i+1}\mapsto y_{i}.\end{cases} \tag{13}\]
_Then the following properties hold._
1. _The map_ \(\Theta_{n}\) _is homomorphism iff_ \[b_{i}(y_{i},y_{i+1})=a_{i}^{-1}(y_{i},y_{i+1}),\qquad c_{i}(y_{i},y_{i+1})=y_{i+ 1}^{m_{i}},\qquad d_{i}(y_{i},y_{i+1})=y_{i}^{-m_{i}},\] _where_ \(m_{i}\in\mathbb{Z}\) _for_ \(1\leq i\leq n-1\)_, and_ \[a_{j}(y_{j},y_{j+1})=y_{j+1}^{m_{j}}\,a_{j-1}(y_{j},y_{j+1})\,y_{j}^{-m_{j-1}}, \quad 2\leq j\leq n-1,\] _with_ \(n\geq 3\)_, where_ \(a_{1}=w(y_{1},y_{2})\) _for some word_ \(w(A,B)\in\mathbb{F}_{2}=\langle A,B\rangle\)_._
2. _The map_ \(\Theta_{n}\) _does not preserve the forbidden relations._
Proof.: (1) Let us verify that the relations (1) - (7) and the relation (10) are preserved under the map \(\Theta_{n}\). Denote \(s_{i}=\Theta_{n}(\sigma_{i})\in\operatorname{Aut}(\mathbb{F}_{2n})\) and \(r_{i}=\Theta_{n}(\rho_{i})\in\operatorname{Aut}(\mathbb{F}_{2n})\).
The relations (1), (4) and (6) are preserved because \(s_{i}\) acts non-trivially only on \(x_{i}\) and \(x_{i+1}\), while \(r_{i}\) acts non-trivially only on \(x_{i}\), \(x_{i+1}\), \(y_{i}\) and \(y_{i+1}\).
Since
\[s_{i}^{2}:\begin{cases}x_{i}\mapsto x_{i+1}\,a_{i}(y_{i},y_{i+1})\mapsto x_{i} \,b_{i}(y_{i},y_{i+1})\,a_{i}(y_{i},y_{i+1}),\\ x_{i+1}\mapsto x_{i}\,b_{i}(y_{i},y_{i+1})\mapsto x_{i+1}\,a_{i}(y_{i},y_{i+1}) \,b_{i}(y_{i},y_{i+1}),\end{cases}\]
the relation (10) is preserved if and only if \(b_{i}(y_{i},y_{i+1})=a_{i}^{-1}(y_{i},y_{i+1})\) for all \(1\leq i\leq n-1\). Further,
\[r_{i}^{2}:\begin{cases}x_{i}\mapsto x_{i+1}\,c_{i}(y_{i},y_{i+1})\mapsto x_{i} \,d_{i}(y_{i},y_{i+1})\,c_{i}(y_{i+1},y_{i}),\\ x_{i+1}\mapsto x_{i}\,d_{i}(y_{i},y_{i+1})\mapsto x_{i+1}\,c_{i}(y_{i},y_{i+1}) \,d_{i}(y_{i+1},y_{i}),\end{cases}\]
and relation (3) is preserved iff
\[d_{i}(y_{i},y_{i+1})=c_{i}^{-1}(y_{i+1},y_{i}),\qquad 1\leq i\leq n-1. \tag{14}\]
Consider the actions of automorphisms \(r_{i}r_{i+1}s_{i}\) and \(s_{i+1}r_{i}r_{i+1}\). We have
\[r_{i}r_{i+1}s_{i}:\begin{cases}x_{i}\mapsto x_{i+1}c_{i}(y_{i},y_{i+1})\mapsto x _{i+2}c_{i+1}(y_{i+1},y_{i+2})c_{i}(y_{i},y_{i+2}),\\ x_{i+1}\mapsto x_{i}c_{i}^{-1}(y_{i+1},y_{i})\mapsto x_{i}c_{i}^{-1}(y_{i+2},y _{i})\mapsto x_{i+1}a_{i}(y_{i},y_{i+1})c_{i}^{-1}(y_{i+2},y_{i}),\\ x_{i+2}\mapsto x_{i+2}\mapsto x_{i+1}c_{i+1}^{-1}(y_{i+2},y_{i+1})\mapsto x_{i} a_{i}^{-1}(y_{i},y_{i+1})c_{i+1}^{-1}(y_{i+2},y_{i+1}),\end{cases}\]
and
\[s_{i+1}r_{i}r_{i+1}:\begin{cases}x_{i}\mapsto x_{i+1}c_{i}(y_{i},y_{i+1})\mapsto x _{i+2}c_{i+1}(y_{i+1},y_{i+2})c_{i}(y_{i},y_{i+2}),\\ x_{i+1}\mapsto x_{i+2}a_{i+1}(y_{i+1},y_{i+2})\mapsto x_{i+2}a_{i+1}(y_{i},y_{ i+2})\\ \qquad\qquad\mapsto x_{i+1}c_{i+1}^{-1}(y_{i+2},y_{i+1})a_{i+1}(y_{i},y_{i+1}), \\ x_{i+2}\mapsto x_{i+1}a_{i+1}^{-1}(y_{i+1},y_{i+2})\mapsto x_{i}c_{i}^{-1}(y_{i+ 1},y_{i})a_{i+1}^{-1}(y_{i},y_{i+2})\\ \qquad\qquad\qquad\mapsto x_{i}c_{i}^{-1}(y_{i+2},y_{i})a_{i+1}^{-1}(y_{i},y_{ i+1}).\end{cases}\]
Thus, to fulfill the relation (7), it is necessary and sufficient that
\[a_{i+1}(y_{i},y_{i+1})c_{i}(y_{i+2},y_{i})=c_{i+1}(y_{i+2},y_{i+1})a_{i}(y_{i},y _{i+1}), \tag{15}\]
holds for all \(1\leq i\leq n-2\).
A similar consideration of the relations (5) leads to the equalities
\[c_{i+1}(y_{i},y_{i+2})c_{i}(y_{i+1},y_{i+2})=c_{i+1}(y_{i+1},y_{i+2})c_{i}(y_{i},y _{i+2}), \tag{16}\]
for all \(1\leq i\leq n-2\). This is only possible if \(c_{i}(y_{i},y_{i+1})=y_{i+1}^{m_{i}}\) for some \(m_{i}\in\mathbb{Z}\) with \(1\leq i\leq n-1\). Using (15) and (16) we get
\[a_{i+1}(y_{i},y_{i+1})=y_{i+1}^{m_{i+1}}a_{i}(y_{i},y_{i+1})y_{i}^{-m_{i}}, \qquad 1\leq i\leq n-2.\]
The fulfillment of the relations (2) is checked directly.
(2) Let us now show that the forbidden relations do not hold under the map \(\Theta_{n}\). Indeed, we have:
\[y_{i}\stackrel{{ r_{i}}}{{\longmapsto}}y_{i+1}\stackrel{{ s_{i+1}}}{{\longmapsto}}y_{i+1}\stackrel{{ s_{i}}}{{\longmapsto}}y_{i+1},\]
\[y_{i}\stackrel{{ s_{i+1}}}{{\longmapsto}}y_{i}\stackrel{{ s_{i}}}{{\longmapsto}}y_{i}\stackrel{{ r_{i+1}}}{{\longmapsto}}y_{i},\]
therefore, \(r_{i}s_{i+1}s_{i}\neq s_{i+1}s_{i}r_{i+1}\). Similarly
\[y_{i}\stackrel{{ s_{i}}}{{\longmapsto}}y_{i}\stackrel{{ s_{i+1}}}{{\longmapsto}}y_{i}\stackrel{{ r_{i}}}{{\longmapsto}}y_{i+1},\]
\[y_{i}\stackrel{{ r_{i+1}}}{{\longmapsto}}y_{i}\stackrel{{ s_{i}}}{{\longmapsto}}y_{i}\stackrel{{ s_{i+1}}}{{\longmapsto}}y_{i},\]
therefore, \(s_{i}s_{i+1}r_{i}\neq r_{i+1}s_{i}s_{i+1}\).
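Since every \(s_{j}\) fixes all the \(y\)-generators and \(r_{j}\) merely swaps \(y_{j}\) and \(y_{j+1}\), the computation above can be replayed on permutations alone. The following self-contained check (an illustration independent of the choice of \(w\) and \(m\)) confirms, for \(n=3\) and \(i=1\), that both forbidden relations already fail on the \(y\)-part.

```python
def y_action(word, n=3):
    """Images of (y_1, ..., y_n) under a word in the automorphisms s_i and r_i,
    composed left to right as in the paper: s_i fixes every y_j, while r_i
    exchanges the letters y_i and y_{i+1}."""
    ys = list(range(1, n + 1))   # ys[j-1] = index of the current image of y_j
    for kind, i in word:
        if kind == "r":
            ys = [i + 1 if v == i else i if v == i + 1 else v for v in ys]
    return ys

i = 1
# relation (8): rho_i sigma_{i+1} sigma_i  vs  sigma_{i+1} sigma_i rho_{i+1}
assert y_action([("r", i), ("s", i + 1), ("s", i)]) != \
       y_action([("s", i + 1), ("s", i), ("r", i + 1)])
# relation (9): rho_{i+1} sigma_i sigma_{i+1}  vs  sigma_i sigma_{i+1} rho_i
assert y_action([("r", i + 1), ("s", i), ("s", i + 1)]) != \
       y_action([("s", i), ("s", i + 1), ("r", i)])
```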
Thus the representation \(\Theta_{n}\) given by the formula (13) depends on the word \(a_{1}(A,B)=w(A,B)\in\mathbb{F}_{2}=\langle A,B\rangle\), into which we substitute \(y_{i}\) and \(y_{i+1}\) instead of \(A\) and \(B\) respectively, and a set of integers \(m=(m_{1},\dots,m_{n-1})\). To emphasize this dependence of the representation on \(w\) and \(m\), we denote it \(\Theta_{n}^{w,m}\):
\[\Theta_{n}^{w,m}(\sigma_{i}):\begin{cases}x_{i}\mapsto x_{i+1}\,\prod_{k=i}^{ 2}y_{i+1}^{m_{k}}\,w(y_{i},y_{i+1})\,\prod_{k=1}^{i-1}y_{i}^{-m_{k}},\\ x_{i+1}\mapsto x_{i}\,\prod_{k=i-1}^{1}y_{i}^{m_{k}}\,(w(y_{i},y_{i+1}))^{-1}\, \prod_{k=2}^{i}y_{i+1}^{-m_{k}},\end{cases} \tag{17}\]
and
\[\Theta_{n}^{w,m}(\rho_{i}):\begin{cases}x_{i}\mapsto x_{i+1}y_{i+1}^{m_{i}}, \\ x_{i+1}\mapsto x_{i}y_{i}^{-m_{i}},\\ y_{i}\mapsto y_{i+1},\\ y_{i+1}\mapsto y_{i},\end{cases} \tag{18}\]
where in the products \(\prod_{k=i}^{2}\) and \(\prod_{k=i-1}^{1}\) it is assumed that \(i\geq 2\) and the indices are decreasing, and in the products \(\prod_{k=1}^{i-1}\) and \(\prod_{k=2}^{i}\) it is assumed \(i\geq 2\) and the indices are increasing.
The word \(w\) is called the _defining word_ for the homomorphism \(\Theta_{n}^{w,m}\). In the particular case when \(m_{i}=0\) for all \(i=1,\dots,n-1\), we will write \(\Theta_{n}^{w}:FVB_{n}\to\operatorname{Aut}(\mathbb{F}_{2n})\) assuming
that
\[\Theta_{n}^{w}(\sigma_{i}):\begin{cases}x_{i}\mapsto x_{i+1}w(y_{i},y_{i+1}),\\ x_{i+1}\mapsto x_{i}(w(y_{i},y_{i+1}))^{-1},\end{cases}\qquad\Theta_{n}^{w}( \rho_{i}):\begin{cases}x_{i}\mapsto x_{i+1},\\ x_{i+1}\mapsto x_{i},\\ y_{i}\mapsto y_{i+1},\\ y_{i+1}\mapsto y_{i}.\end{cases} \tag{19}\]
Note that the homomorphism (11) constructed in [20] can be represented as \(\eta_{n}=\Theta_{n}^{w}\) for \(w(A,B)=B\).
Let \(\beta\in FVB_{n}\) and \(x\in\mathbb{F}_{2n}\). Further, to simplify the notation, by \(\beta(x)\) we mean \(\Theta_{n}^{w,m}(\beta)(x)\), where the word \(w\) and the set \(m\) are assumed to be clear from the context.
## 3. The kernel of the homomorphism \(\Theta_{n}^{w,m}\) and the group \(FVK_{n}\)
In this section we show that the kernel of the homomorphism \(\Theta_{n}^{w,m}\) lies in the intersection of the flat virtual pure braid group and the flat virtual kure braid group \(FVK_{n}\) defined below.
Consider the subgroup \(\mathrm{S}_{n}=\langle\sigma_{1}\ldots\sigma_{n-1}\rangle\) of \(FVB_{n}\), which is isomorphic to the permutation group of an \(n\)-element set. The map \(\pi_{n}:FVB_{n}\to\mathrm{S}_{n}\) defined on the generators \(\sigma_{i},\rho_{i}\) according to the rule:
\[\pi_{n}(\sigma_{i}) =\sigma_{i}, 1\leq i\leq n-1,\] \[\pi_{n}(\rho_{i}) =\sigma_{i}, 1\leq i\leq n-1,\]
is obviously a homomorphism.
**Definition 2**.: _Denote \(FVP_{n}=\mathrm{Ker}(\pi_{n})\) and call it flat virtual pure braid group on \(n\) strands._
Similarly, the subgroup \(S_{n}^{\prime}=\langle\rho_{1}\ldots\rho_{n-1}\rangle\) of \(FVB_{n}\) is isomorphic to the permutation group of an \(n\)-element set, and the map \(\nu_{n}:FVB_{n}\to\mathrm{S}_{n}^{\prime}\) defined on generators \(\sigma_{i}\), \(\rho_{i}\) as follows:
\[\nu_{n}(\sigma_{i}) =1, 1\leq i\leq n-1,\] \[\nu_{n}(\rho_{i}) =\rho_{i}, 1\leq i\leq n-1,\]
is also a homomorphism.
**Definition 3**.: _Denote \(FVK_{n}=\mathrm{Ker}(\nu_{n})\) and call it flat virtual kure braid group on \(n\) strands._
Here we use the term _flat virtual kure braid_ since the term _kure virtual braid group_ was used in [21] for the kernel of the map \(\pi_{K}:VB_{n}\to S_{n}\), which is defined analogously to \(\nu_{n}:FVB_{n}\to S_{n}^{\prime}\). The group \(FVK_{n}=\mathrm{Ker}(\nu_{n})\) was also denoted by \(FH_{n}\) in [20], since it is the flat analogue of Rabenda's group \(H_{n}\) from [22, Prop. 17].
**Lemma 1**.: _[_20_, Prop. 4]_ _The group \(FVK_{n}\) admits a presentation with generators \(x_{i,j}\), \(1\leq i\neq j\leq n\) and defining relations_
\[x_{i,j}^{2}=1,\qquad x_{i,j}\,x_{k,l}=x_{k,l}\,x_{i,j},\qquad x_{i,k}\,x_{k,j} \,x_{i,k}=x_{k,j}\,x_{i,k}\,x_{k,j}, \tag{20}\]
_where different letters stand for different indices._
**Corollary 1**.: _The group \(FVK_{n}\) is a Coxeter group with defining relations_
\[x_{i,j}^{2}=1,\qquad(x_{i,j}\,x_{k,l})^{2}=1,\qquad(x_{i,k}\,x_{k,j})^{3}=1. \tag{21}\]
The following property is a generalization of the property established in [20, Prop. 9] for the word \(w(A,B)=B\).
**Lemma 2**.: _Let \(n\geq 2\). For any word \(w\in\mathbb{F}_{2}\) and any set of integers \(m=(m_{1},\ldots,m_{n-1})\) we have \(\operatorname{Ker}(\Theta_{n}^{w,m})\leq FVP_{n}\cap FVK_{n}\)._
Proof.: Let \(g\in\operatorname{Ker}(\Theta_{n}^{w,m})\). Then \(y_{i}=g(y_{i})=\nu_{n}(g)(y_{i})\), since all \(\sigma_{i}\) act identically on \(y_{i}\). But then \(\nu_{n}(g)\) is the identity permutation of the set \(\{y_{1},\ldots,y_{n}\}\), which by definition means \(g\in FVK_{n}\).
Next, we show that \(g\in FVP_{n}\). Denote by \(G\) the normal closure of the subgroup \(\langle y_{1},\ldots,y_{n}\rangle\) in \(\mathbb{F}_{2n}\). It is clear that \(G\) is a \(\Theta_{n}^{w,m}(FVB_{n})\)-invariant subgroup. Then \(\Theta_{n}^{w,m}\) induces a homomorphism \(\Psi_{n}^{w,m}\colon FVB_{n}\to\operatorname{Aut}(\mathbb{F}_{2n}/G)= \operatorname{Aut}(\mathbb{F}_{n})\), where \(\mathbb{F}_{n}=\langle x_{1},\ldots,x_{n}\rangle\). From the formulas (13) we can write out the action of \(\Psi_{n}^{w,m}\) on the generators of the group \(FVB_{n}\):
\[\Psi_{n}^{w,m}(\sigma_{i})\colon\begin{cases}x_{i}\mapsto x_{i+1},\\ x_{i+1}\mapsto x_{i},\end{cases}\qquad\Psi_{n}^{w,m}(\rho_{i})\colon\begin{cases}x _{i}\mapsto x_{i+1},\\ x_{i+1}\mapsto x_{i},\end{cases}\qquad 1\leq i\leq n-1.\]
Now it is easy to see that the image of \(FVB_{n}\) under the map \(\Psi_{n}^{w,m}\) is a permutation of the set \(\{x_{1},\ldots,x_{n}\}\). It remains to note that if \(g\in\operatorname{Ker}(\Theta_{n}^{w,m})\), then \(g\in\operatorname{Ker}(\Psi_{n}^{w,m})=FVP_{n}\).
Since \(S_{n}^{\prime}\leq FVB_{n}\), the decomposition \(FVB_{n}=FVK_{n}\rtimes S_{n}^{\prime}\) follows directly from the definition of \(FVK_{n}\). Considering the restriction of the homomorphism \(\pi_{n}\) to \(FVK_{n}\), we obtain the homomorphism \(\xi_{n}\colon FVK_{n}\to S_{n}\). Note that its kernel is \(X_{n}=\operatorname{Ker}(\xi_{n})=FVP_{n}\cap FVK_{n}\). Further, since \(S_{n}\leq FVK_{n}\), we obtain the decomposition \(FVK_{n}=X_{n}\rtimes S_{n}\). Thus, we have the following decomposition of the group of flat virtual braids:
\[FVB_{n}=(X_{n}\rtimes S_{n})\rtimes S_{n}^{\prime}.\]
Following [22], we denote
\[\lambda_{i,i+1}=\rho_{i}\sigma_{i}, 1\leq i\leq n-1,\] \[\lambda_{i,j}=\rho_{j-1}\rho_{j-2}\ldots\rho_{i+1}\lambda_{i,i+1 }\rho_{i+1}\ldots\rho_{j-2}\rho_{j-1}, j-i\geq 2.\]
Element \(\lambda_{i,j}\) is presented geometrically in Figure 7.
**Lemma 3**.: _[_22_]_ _The group \(FVP_{n}\) is generated by the elements \(\lambda_{i,j},\ 1\leq i<j\leq n\) and the defining relations are:_
\[\lambda_{i,j}\lambda_{k,l} =\lambda_{k,l}\lambda_{i,j}, \tag{22}\] \[\lambda_{k,i}\lambda_{k,j}\lambda_{i,j} =\lambda_{i,j}\lambda_{k,j}\lambda_{k,i}, \tag{23}\]
_where \(i,j,k,l\) correspond to different indices._
Let us consider the case \(n=3\) in more detail. As shown in [22],
\[\mathrm{FVP}_{3}=\langle a,b,c\mid[a,c]=1\rangle, \tag{24}\]
where \(a=\lambda_{2,3}\lambda_{1,3}\), \(b=\lambda_{2,3}\) and \(c=\lambda_{2,3}\lambda_{1,2}^{-1}\). These elements are presented geometrically in Figure 8.
**Theorem 2**.: _We have the following decomposition_
\[X_{3}=\mathbb{Z}^{2}*\mathbb{F}_{3}*\Gamma,\]
_where \(\mathbb{F}_{3}\) is a free group of rank \(3\) and \(\Gamma=\langle x,y,u,v,p,q\mid xy=uv,vu=pq,qp=yx\rangle\)._
Proof.: Consider the restriction of the homomorphism \(\nu_{3}\colon FVB_{3}\to S^{\prime}_{3}\) to \(FVP_{3}\). Let us denote it by \(\varphi\colon FVP_{3}\to S^{\prime}_{3}\). Then \(X_{3}=\operatorname{Ker}(\varphi)\).
To find the generators and relations of the group \(X_{3}\), we use the Reidemeister-Schreier rewriting process, see for example [23]. Let us write out the system of Schreier representatives for \(\operatorname{Ker}(\varphi)\) using the generators indicated in the presentation (24): \(T=\{1,a,ab,c,cb,b\}\). For an element \(g\), we denote its representative in \(T\) by \(\overline{g}\). Then the kernel \(\operatorname{Ker}(\varphi)\) is generated by the following elements:
\[a\cdot a\cdot(\overline{a^{2}})^{-1}=a^{2}c^{-1} =t, a\cdot c\cdot(\overline{ac})^{-1}=ac=m,\] \[ab\cdot a\cdot(\overline{aba})^{-1}=abab^{-1} =v, ab\cdot b\cdot(\overline{ab^{2}})^{-1}=ab^{2}a^{-1}=w,\] \[ab\cdot c\cdot(\overline{abc})^{-1}=abcb^{-1}c^{-1} =p, c\cdot a\cdot(\overline{ca})^{-1}=ca=r,\] \[c\cdot c\cdot(\overline{c^{2}})^{-1}=c^{2}a^{-1} =g, cb\cdot a\cdot(\overline{cba})^{-1}=cbab^{-1}a^{-1} =q,\] \[cb\cdot b\cdot(\overline{cb^{2}})^{-1}=cb^{2}c^{-1} =h, cb\cdot c\cdot(\overline{cbc})^{-1}=cbcb^{-1} =y,\] \[b\cdot a\cdot(\overline{ba})^{-1}=bab^{-1}c^{-1} =x, b\cdot b\cdot(\overline{b^{2}})^{-1}=b^{2} =f,\] \[b\cdot c\cdot(\overline{bc})^{-1}=bcb^{-1}a^{-1} =u.\]
Further, the relations \(taca^{-1}c^{-1}t^{-1}\) for \(t\in T\) must be rewritten in new generators. For example, for \(t=ab\) we get:
\[ab(aca^{-1}c^{-1})b^{-1}a^{-1}\,=\,vbca^{-1}c^{-1}b^{-1}a^{-1}\,= \,vuaba^{-1}c^{-1}b^{-1}a^{-1}\] \[=\,vuq^{-1}cbc^{-1}b^{-1}a^{-1}\,=\,vuq^{-1}p^{-1}.\]
The rest of the relations are found similarly. As a result, we get:
\[m=r, m=tg, r=gt,\] \[xy=uv, vu=pq, qp=yx.\]
It is now clear that the elements \(g,t\) generate \(\mathbb{Z}^{2}\), the elements \(w,h,f\) generate \(\mathbb{F}_{3}\), and the group generated by the elements \(x\), \(y\), \(u\), \(v\), \(p\) and \(q\) is denoted by \(\Gamma\).
**Lemma 4**.: _[_20_]_ _Let \(G_{n}\) be a subgroup of \(FVP_{n}\) generated by the elements:_
\[t_{i,j}= \lambda_{i,j}^{2}, 1\leq i<j\leq n, \tag{25}\] \[d_{i,j,k}= \lambda_{j,k}^{-1}\lambda_{i,j}^{-1}\lambda_{j,k}\lambda_{i,k}, 1\leq i<j<k\leq n, \tag{26}\] \[e_{i,j,k}= \lambda_{j,k}^{-1}\lambda_{i,j}^{-1}\lambda_{i,k}\lambda_{i,j}, 1\leq i<j<k\leq n. \tag{27}\]
_Then the normal closure of \(G_{n}\) in \(FVP_{n}\) coincides with \(X_{n}\)._
Let us describe the action of \(\Theta_{n}^{w}\) on the generators indicated in Lemma 4.
**Lemma 5**.: _The homomorphism \(\Theta_{n}^{w}:FVB_{n}\to\operatorname{Aut}(\mathbb{F}_{2n})\), defined by the word \(w\in\mathbb{F}_{2}\), maps the generators of the group \(G_{n}\) to the following automorphisms:_
\[\Theta_{n}^{w}(t_{i,j}) :\begin{cases}x_{i}\mapsto x_{i}\,w_{i,j}^{-1}\,w_{j,i}^{-1},\\ x_{j}\mapsto x_{j}\,w_{i,j}\,w_{j,i},\end{cases} 1\leq i<j\leq n, \tag{28}\] \[\Theta_{n}^{w}(d_{i,j,k}) :\begin{cases}x_{j}\mapsto x_{j}\,w_{j,i}^{-1}\,w_{i,k}^{-1}\,w_{j,k},\\ x_{k}\mapsto x_{k}\,w_{i,k}\,w_{j,i}\,w_{j,k}^{-1},\end{cases} 1\leq i<j<k\leq n, \tag{29}\] \[\Theta_{n}^{w}(e_{i,j,k}) :\begin{cases}x_{i}\mapsto x_{i}\,w_{i,j}^{-1}\,w_{j,k}^{-1}\,w_{i,k},\\ x_{j}\mapsto x_{j}\,w_{i,j}\,w_{i,k}^{-1}\,w_{j,k},\end{cases} 1\leq i<j<k\leq n. \tag{30}\]
_where \(w_{i,j}=w(y_{i},y_{j})\) for all \(i,j\)._
Proof.: Let, as before, \(s_{i}=\Theta_{n}^{w}(\sigma_{i})\) and \(r_{i}=\Theta_{n}^{w}(\rho_{i})\). First of all, let us establish some auxiliary formulas. Let \(1\leq i<j-1\leq n-1\), then
\[b_{i,j}:=\Theta_{n}^{w}(\rho_{j-1}\ldots\rho_{i+1}):\begin{cases}x_{i+1} \stackrel{{ r_{i+1}}}{{\longmapsto}}x_{i+2},\\ \vdots\\ x_{j-1}\stackrel{{ r_{j-1}}}{{\longmapsto}}x_{j},\\ x_{j}\stackrel{{ r_{j-1}}}{{\longmapsto}}x_{j-1}\stackrel{{ r_{j-2}}}{{\longmapsto}}x_{j-2}\ldots\stackrel{{ r_{i+1}}}{{\longmapsto}}x_{i+1},\\ y_{i+1}\stackrel{{ r_{i+1}}}{{\longmapsto}}y_{i+2},\\ \vdots\\ y_{j-1}\stackrel{{ r_{j-1}}}{{\longmapsto}}y_{j},\\ y_{j}\stackrel{{ r_{j-1}}}{{\longmapsto}}y_{j-1}\stackrel{{ r_{j-2}}}{{\longmapsto}}y_{j-2}\ldots\stackrel{{ r_{i+1}}}{{\longmapsto}}y_{i+1}.\end{cases}\]
Further,
\[a_{i,i+1}:=\Theta_{n}^{w}(\lambda_{i,i+1})=\Theta_{n}^{w}(\rho_{i}\sigma_{i}): \begin{cases}x_{i}\stackrel{{ r_{i}}}{{\longmapsto}}x_{i+1} \stackrel{{ s_{i}}}{{\longmapsto}}x_{i}\,w_{i,i+1}^{-1},\\ x_{i+1}\stackrel{{ r_{i}}}{{\longmapsto}}x_{i}\stackrel{{ s_{i}}}{{\longmapsto}}x_{i+1}\,w_{i,i+1},\\ y_{i}\stackrel{{ r_{i}}}{{\longmapsto}}y_{i+1}\stackrel{{ s_{i}}}{{\longmapsto}}y_{i+1},\\ y_{i+1}\stackrel{{ r_{i}}}{{\longmapsto}}y_{i}\stackrel{{ s_{i}}}{{\longmapsto}}y_{i}.\end{cases}\]
Let us show that for \(1\leq i<j\leq n\) the formulas
\[\Theta_{n}^{w}(\lambda_{i,j}):\begin{cases}x_{i}\mapsto x_{i}\,w_{i,j}^{-1}, \\ x_{j}\mapsto x_{j}\,w_{i,j},\\ y_{i}\mapsto y_{j},\\ y_{j}\mapsto y_{i}\end{cases} \Theta_{n}^{w}(\lambda_{i,j}^{-1}):\begin{cases}x_{i}\mapsto x_{i}\,w_ {j,i},\\ x_{j}\mapsto x_{j}\,w_{j,i}^{-1},\\ y_{i}\mapsto y_{j},\\ y_{j}\mapsto y_{i}\end{cases}\]
hold. Indeed, we have
\[\Theta_{n}^{w}(\lambda_{i,j}):\begin{cases}x_{i}\stackrel{{ b_{i,j}}}{{ \longmapsto}}x_{i}\stackrel{{ a_{i,i+1}}}{{\longmapsto}}x_{i}\,w_{i,i+1}^{-1} \stackrel{{ b_{i,j}^{-1}}}{{\longmapsto}}x_{i}\,w_{i,j}^{-1},\\ x_{i+1}\stackrel{{ b_{i,j}}}{{\longmapsto}}x_{i+2}\stackrel{{ a_{i,i+1}}}{{\longmapsto}}x_{i+2}\stackrel{{ b_{i,j}^{-1}}}{{\longmapsto}}x_{i+1},\\ \vdots\\ x_{j-1}\stackrel{{ b_{i,j}}}{{\longmapsto}}x_{j}\stackrel{{ a_{i,i+1}}}{{\longmapsto}}x_{j}\stackrel{{ b_{i,j}^{-1}}}{{\longmapsto}}x_{j-1},\\ x_{j}\stackrel{{ b_{i,j}}}{{\longmapsto}}x_{i+1}\stackrel{{ a_{i,i+1}}}{{\longmapsto}}x_{i+1}\,w_{i,i+1}\stackrel{{ b_{i,j}^{-1}}}{{\longmapsto}}x_{j}\,w_{i,j},\\ y_{i}\stackrel{{ b_{i,j}}}{{\longmapsto}}y_{i}\stackrel{{ a_{i,i+1}}}{{\longmapsto}}y_{i+1}\stackrel{{ b_{i,j}^{-1}}}{{\longmapsto}}y_{j},\\ y_{i+1}\stackrel{{ b_{i,j}}}{{\longmapsto}}y_{i+2}\stackrel{{ a_{i,i+1}}}{{\longmapsto}}y_{i+2}\stackrel{{ b_{i,j}^{-1}}}{{\longmapsto}}y_{i+1},\\ \vdots\\ y_{j-1}\stackrel{{ b_{i,j}}}{{\longmapsto}}y_{j}\stackrel{{ a_{i,i+1}}}{{\longmapsto}}y_{j}\stackrel{{ b_{i,j}^{-1}}}{{\longmapsto}}y_{j-1},\\ y_{j}\stackrel{{ b_{i,j}}}{{\longmapsto}}y_{i+1}\stackrel{{ a_{i,i+1}}}{{\longmapsto}}y_{i}\stackrel{{ b_{i,j}^{-1}}}{{\longmapsto}}y_{i}.\end{cases}\]
We are now ready to prove the formulas (28) - (30). For example, let us establish (28):
\[\Theta_{n}^{w}(t_{i,j})=\Theta_{n}^{w}(\lambda_{i,j}^{2}):\begin{cases}x_{i} \stackrel{{\lambda_{i,j}}}{{\longmapsto}}x_{i}\,w_{i,j}^{-1} \stackrel{{\lambda_{i,j}}}{{\longmapsto}}x_{i}\,w_{i,j}^{-1}\,w_{j,i}^{-1},\\ x_{j}\stackrel{{\lambda_{i,j}}}{{\longmapsto}}x_{j}\,w_{i,j} \stackrel{{\lambda_{i,j}}}{{\longmapsto}}x_{j}\,w_{i,j}\,w_{j,i},\\ y_{i}\stackrel{{\lambda_{i,j}}}{{\longmapsto}}y_{j}\stackrel{{ \lambda_{i,j}}}{{\longmapsto}}y_{i},\\ y_{j}\stackrel{{\lambda_{i,j}}}{{\longmapsto}}y_{i}\stackrel{{ \lambda_{i,j}}}{{\longmapsto}}y_{j}.\end{cases}\]
The formulas (29) and (30) are proved in the same way.
The following statement answers the question about the faithfulness of the representations \(\Theta_{n}^{w,m}\) in the case \(n=2\).
**Theorem 3**.: _The nontrivial representation \(\Theta_{2}^{w,m}:FVB_{2}\to\operatorname{Aut}(\mathbb{F}_{4})\) is not faithful if the defining word \(w\) is_
\[w(A,B)=A^{k_{1}}B^{k_{2}}\ldots A^{k_{m}}B^{-k_{m}}\ldots A^{-k_{2}}B^{-k_{1}}A ^{m_{1}},\]
_where all \(k_{i}\) are nonzero integers except possibly \(k_{1}\) and \(k_{m}\). In this case, \(\operatorname{Ker}(\Theta_{2}^{w,m})=X_{2}\simeq\mathbb{Z}\). The representation \(\Theta_{2}^{w,m}\) is faithful for all other \(w\)._
Proof.: Lemma 2 implies that \(\operatorname{Ker}(\Theta_{2}^{w,m})\leq X_{2}\). It is easy to show that \(X_{2}\) is generated by the element \(t_{1,2}\).
In the case of \(n=2\), the set \(m\) consists of a single integer \(m=\{m_{1}\}\). For any \(k\in\mathbb{Z}\)
\[\Theta_{n}^{w,m}(t_{1,2}^{k})\colon\begin{cases}x_{1}\mapsto x_{1}\,(w_{1,2}^ {-1}\,y_{2}^{m_{1}}\,w_{2,1}^{-1}\,y_{1}^{m_{1}})^{k},\\ x_{2}\mapsto x_{2}\,(w_{1,2}\,y_{1}^{-m_{1}}\,w_{2,1}\,y_{2}^{-m_{1}})^{k}.\end{cases}\]
Thus \(\Theta_{2}^{w,m}(t_{1,2}^{k})=\operatorname{id}\) if and only if either \(k=0\) or
\[w_{1,2}^{-1}\,y_{2}^{m_{1}}\,w_{2,1}^{-1}\,y_{1}^{m_{1}}=1,\]
i.e. \(f(y_{1},y_{2})=f^{-1}(y_{2},y_{1})\) for the word \(f(y_{1},y_{2})=w(y_{1},y_{2})y_{1}^{-m_{1}}\).
Let \(f(A,B)=A^{k_{1}}B^{k_{2}}\dots B^{k_{s}}A^{k_{s+1}}\). Then
\[A^{k_{1}}B^{k_{2}}\dots B^{k_{s}}A^{k_{s+1}}=B^{-k_{s+1}}A^{-k_{s}}\dots A^{-k_ {2}}B^{-k_{1}},\]
therefore \(f(A,B)=A^{k_{1}}B^{k_{2}}\dots A^{k_{m}}B^{-k_{m}}\dots A^{-k_{2}}B^{-k_{1}}\), where all \(k_{i}\) are nonzero integers except possibly \(k_{1}\) and \(k_{m}\). But then
\[w(A,B)=A^{k_{1}}B^{k_{2}}\dots A^{k_{m}}B^{-k_{m}}\dots A^{-k_{2}}B^{-k_{1}}A^ {m_{1}}.\]
This completes the proof.
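The criterion of Theorem 3 is easy to test mechanically. The sketch below (added for illustration; it assumes \(m_{1}\geq 0\)) encodes words in \(\langle y_{1},y_{2}\rangle\) as lists of signed letters and checks whether \(f(y_{1},y_{2})\,f(y_{2},y_{1})\) freely reduces to the empty word for \(f=w(y_{1},y_{2})\,y_{1}^{-m_{1}}\): for \(w=AB^{-1}\), \(m_{1}=0\) the kernel is nontrivial, while for \(w=B\), \(m_{1}=0\) (the representation \(\eta_{2}\)) it is trivial.

```python
def reduce(word):
    out = []
    for g in word:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return out

A, B = 1, 2                # the generators y_1, y_2; -A, -B are their inverses

def swap(word):
    """Pass from f(y_1, y_2) to f(y_2, y_1) by exchanging A and B."""
    return [(B if abs(g) == A else A) * (1 if g > 0 else -1) for g in word]

def kernel_is_nontrivial(w, m1=0):
    """Theorem 3 criterion (m1 >= 0 assumed): Ker(Theta_2^{w,m}) is nontrivial
    iff f(y_1,y_2) f(y_2,y_1) reduces to 1, where f = w(y_1,y_2) y_1^{-m1}."""
    f = w + [-A] * m1
    return reduce(f + swap(f)) == []

assert kernel_is_nontrivial([A, -B])     # w = A B^{-1}: Theta_2 is not faithful
assert not kernel_is_nontrivial([B])     # w = B (the map eta_2): faithful
```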
## 4. Nontriviality of the kernel \(\operatorname{Ker}(\Theta_{n}^{w,m})\) for \(n\geq 3\)
Consider the following subgroups of the \(FVB_{n}\) group:
\[Q_{n}^{i} =\langle t_{i,i+1},\ e_{i,i+1,i+2}\rangle, 1\leq i\leq n-2, \tag{31}\] \[M_{n}^{i+1} =\langle t_{i+1,i+2},\ d_{i,i+1,i+2}\rangle, 1\leq i\leq n-2, \tag{32}\] \[P_{n}^{i+2} =\langle t_{i,i+2}\rangle, 1\leq i\leq n-2. \tag{33}\]
**Lemma 6**.: _Let \(n\geq 3\) and \(w(A,B)\in\mathbb{F}_{2}(A,B)\). Then, for all \(i\), \(1\leq i\leq n-2\), we obtain the inclusion_
\[\left[Q_{n}^{i},\left[M_{n}^{i+1},P_{n}^{i+2}\right]\right]\leq\operatorname{ Ker}(\Theta_{n}^{w}). \tag{34}\]
Proof.: Note that \(Q_{n}^{i}\) acts non-trivially only on generators \(x_{i}\) and \(x_{i+1}\), \(M_{n}^{i+1}\) acts non-trivially only on \(x_{i+1}\) and \(x_{i+2}\), while \(P_{n}^{i+2}\) acts non-trivially only on \(x_{i}\) and \(x_{i+2}\). Consider the element
\[h=q(mpm^{-1}p^{-1})q^{-1}(mpm^{-1}p^{-1})^{-1},\]
where \(q\in Q_{n}^{i}\), \(m\in M_{n}^{i+1}\) and \(p\in P_{n}^{i+2}\). This element acts non-trivially only on the generators \(x_{i}\), \(x_{i+1}\) and \(x_{i+2}\). Write out its action:
\[h:x_{i}\stackrel{{ q}}{{\longmapsto}}qx_{i} \stackrel{{ m}}{{\longmapsto}}qx_{i}\stackrel{{ p}}{{\longmapsto}}pqx_{i}\stackrel{{ m^{-1}}}{{\longmapsto}}pqx_{i}\stackrel{{ p^{-1}}}{{\longmapsto}}qx_{i}\stackrel{{ q^{-1}}}{{\longmapsto}}x_{i}\stackrel{{ p}}{{\longmapsto}}px_{i}\stackrel{{ m}}{{\longmapsto}}px_{i}\stackrel{{ p^{-1}}}{{\longmapsto}}x_{i}\stackrel{{ m^{-1}}}{{\longmapsto}}x_{i};\] \[h:x_{i+1}\stackrel{{ q}}{{\longmapsto}}qx_{i+1} \stackrel{{ m}}{{\longmapsto}}mqx_{i+1} \stackrel{{ p}}{{\longmapsto}}mqx_{i+1}\stackrel{{ m^{-1}}}{{\longmapsto}}qx_{i+1}\stackrel{{ p^{-1}}}{{\longmapsto}}qx_{i+1}\stackrel{{ q^{-1}}}{{\longmapsto}}x_{i+1}\stackrel{{ p}}{{\longmapsto}}x_{i+1}\] \[\stackrel{{ m}}{{\longmapsto}}mx_{i+1} \stackrel{{ p^{-1}}}{{\longmapsto}}mx_{i+1} \stackrel{{ m^{-1}}}{{\longmapsto}}x_{i+1};\] \[h:x_{i+2}\stackrel{{ q}}{{\longmapsto}}x_{i+2} \stackrel{{ m}}{{\longmapsto}}mx_{i+2} \stackrel{{ p}}{{\longmapsto}}pmx_{i+2}\stackrel{{ m^{-1}}}{{\longmapsto}}m^{-1}pmx_{i+2}\stackrel{{ p^{-1}}}{{\longmapsto}}p^{-1}m^{-1}pmx_{i+2}\stackrel{{ q^{-1}}}{{\longmapsto}}\] \[p^{-1}m^{-1}pmx_{i+2} \stackrel{{ pmp^{-1}m^{-1}}}{{\longmapsto}}x_{i+2}\]
Thus, \(h\in\operatorname{Ker}(\Theta_{n}^{w})\) and the inclusion (34) is proved.
**Theorem 4**.: _Let \(n\geq 3\). For any defining word \(w(A,B)\in\mathbb{F}_{2}(A,B)\) the kernel \(\operatorname{Ker}(\Theta_{n}^{w})\) contains a subgroup isomorphic to a free group of rank 2._
Proof.: Denote a free group of rank 2 by \(\mathbb{F}_{2}\). By Lemma 6, it suffices to show that \(\mathbb{F}_{2}\leq\left[Q_{3}^{1},\left[M_{3}^{2},P_{3}^{3}\right]\right]\leq FVP_{3}\).
Consider the elements
\[h_{0}=\left[t_{1,2},[t_{2,3},t_{1,3}]\right]\qquad\text{and}\qquad h_{1}=\left[t _{1,2},[d_{1,2,3},t_{1,3}]\right].\]
We write them down in terms of the generators of \(FVP_{3}\) group, see (24):
\[h_{0}=\left[(c^{-1}b)^{2},[b^{2},(b^{-1}a)^{2}]\right]\qquad\text{and}\qquad h _{1}=\left[(c^{-1}b)^{2},[b^{-2}ca,(b^{-1}a)^{2}]\right]. \tag{35}\]
Let us prove that \(h_{0}\) and \(h_{1}\) generate \(\mathbb{F}_{2}\). To do this, it suffices to show that there are no relations between them. Let \(\psi:FVP_{3}\to\langle a,b\rangle\) be the homomorphism given by the mapping \(\psi(a)=a\), \(\psi(b)=b\) and \(\psi(c)=1\). Denote \(\bar{h}_{0}=\psi(h_{0})\) and \(\bar{h}_{1}=\psi(h_{1})\). Then
\[\bar{h}_{0} =b^{3}ab^{-1}ab^{-2}a^{-1}ba^{-1}b^{-2}ab^{-1}ab^{2}a^{-1}ba^{-1}b ^{-1},\] \[\bar{h}_{1} =ab^{-1}aba^{-1}ba^{-1}b^{-2}ab^{-1}ab^{-1}a^{-1}ba^{-1}b^{2}.\]
The elements \(\bar{h}_{0}\) and \(\bar{h}_{1}\) lie in the free group \(\langle a,b\rangle\). Hence the group \(\langle\bar{h}_{0},\bar{h}_{1}\rangle\) is either isomorphic to \(\mathbb{Z}\) or isomorphic to \(\mathbb{F}_{2}\). The first case means that \(\bar{h}_{0}\) and \(\bar{h}_{1}\) must be powers of the same element, i.e. \(\bar{h}_{0}=g(a,b)^{k}\) and \(\bar{h}_{1}=g(a,b)^{s}\) for some word \(g(a,b)\in\langle a,b\rangle\) and nonzero \(k,s\in\mathbb{Z}\). Let \(g(a,b)=f\cdot w\cdot f^{-1}\), where \(w(a,b)\) is the cyclic reduced word in \(\langle a,b\rangle\). Then \(g^{s}=fw^{s}f^{-1}=\bar{h}_{1}\) and since \(\bar{h}_{1}\) is itself cyclically reduced, we get \(f=1\). But then \(g^{k}=fw^{k}f^{-1}=w^{k}=\bar{h}_{0}\) must be cyclically reduced, which is not the case. Thus \(\langle\bar{h}_{0},\bar{h}_{1}\rangle\cong\mathbb{F}_{2}\) and hence \(\langle h_{0},h_{1}\rangle\cong\mathbb{F}_{2}\).
**Corollary 2**.: _Let \(n\geq 3\), then \(\operatorname{Ker}(\Theta_{n}^{w,m})\) contains a subgroup isomorphic to a free group of rank 2 for any integer tuple \(m=(m_{1},\dots,m_{n-1})\) and arbitrary defining word \(w(A,B)\in\mathbb{F}_{2}(A,B)\)._
Proof.: There are three points which are sufficient to prove the corollary:
* \(t_{1,2}\) acts non-trivially only on generators \(x_{1}\) and \(x_{2}\),
* \(t_{2,3}\) and \(d_{1,2,3}\) act non-trivially only on elements \(x_{2}\) and \(x_{3}\),
* \(t_{1,3}\) acts non-trivially only on \(x_{1}\) and \(x_{3}\).
Recall that we have \(t_{i,i+1}=\lambda_{i,i+1}^{2}=(\rho_{i}\sigma_{i})^{2}\) by the formula (25). Therefore, \(t_{1,2}\) and \(t_{2,3}\) satisfy the property above.
Let's check that \(t_{1,3}=(\rho_{2}\rho_{1}\sigma_{1}\rho_{2})^{2}\) leaves \(x_{2},\ y_{1},\ y_{2}\) and \(y_{3}\) in place. The element \(t_{1,3}\) acts trivially on the generators \(y_{i}\) for \(1\leq i\leq 3\), because using formulas (17, 18) we get \(t_{1,3}\cdot y_{i}=(\rho_{2}\rho_{1}\rho_{2})^{2}\cdot y_{i}=1\cdot y_{i}=y_{i}\). The trivial action on \(x_{2}\) follows from the fact that \(\rho_{2}\rho_{1}\sigma_{1}\rho_{2}\) leaves this element in place. Indeed, according to (17, 18), we obtain
\[x_{2}\stackrel{{\rho_{2}}}{{\longmapsto}}x_{3}y_{3}^{m_{2}} \stackrel{{\rho_{1}}}{{\longmapsto}}x_{3}y_{3}^{m_{2}} \stackrel{{\sigma_{1}}}{{\longmapsto}}x_{3}y_{3}^{m_{2}} \stackrel{{\rho_{2}}}{{\longmapsto}}x_{2}y_{2}^{-m_{2}}y_{2}^{m_{2} }=x_{2}.\]
Similarly, we check the action for
\[x_{1}\stackrel{{\sigma_{2}\rho_{2}}}{{\longmapsto}}x_{1} \stackrel{{\sigma_{1}}}{{\longmapsto}}x_{2}w(y_{1},y_{2}) \stackrel{{\rho_{1}}}{{\longmapsto}}x_{1}y_{1}^{-m_{1}}w(y_{2},y_{1} )\stackrel{{\rho_{2}}}{{\longmapsto}}x_{1}y_{1}^{-m_{1}}w(y_{3},y_{1} )\stackrel{{\sigma_{2}}}{{\longmapsto}}\] \[\stackrel{{\sigma_{2}}}{{\longmapsto}}x_{1}y_{1}^{-m_{1}}w(y_{ 3},y_{1})\stackrel{{\rho_{2}}}{{\longmapsto}}x_{1}y_{1}^{-m_{1}}w(y_{ 2},y_{1})\stackrel{{\rho_{1}}}{{\longmapsto}}\] \[\stackrel{{\rho_{1}}}{{\longmapsto}}x_{2}y_{2}^{m_{1}}y_{2 }^{-m_{1}}w(y_{1},y_{2})\stackrel{{\sigma_{1}}}{{\longmapsto}}x_{1}w ^{-1}(y_{1},y_{2})w(y_{1},y_{2})\stackrel{{\rho_{2}}}{{ \longmapsto}}x_{1},\] \[d_{1,2,3}\cdot y_{i}=\rho_{1}\rho_{2}\rho_{1}\rho_{1}\rho_{2}\rho_ {1}\cdot y_{i}=y_{i},\quad 1\leq i\leq 3.\]
This completes the proof.
|
2304.07222
|
CAPP Axion Search Experiments with Quantum Noise Limited Amplifiers
|
The axion is expected to solve the strong CP problem of quantum
chromodynamics and is one of the leading candidates for dark matter. CAPP in
South Korea has several axion search experiments based on cavity haloscopes in
the frequency range of 1-6 GHz. The main effort focuses on operation of the
experiments with the highest possible sensitivity. It requires maintenance of
the haloscopes at the lowest physical temperature in the range of mK and usage
of low noise components to amplify the weak axion signal. We report development
and operation of low noise amplifiers for 5 haloscope experiments targeting at
different frequency ranges. The amplifiers show noise temperatures approaching
the quantum limit.
|
Sergey V. Uchaikin, Boris I. Ivanov, Jinmyeong Kim, Çağlar Kutlu, Arjan F. Van Loo, Yasunobu Nakamura, Seonjeong OH, Violeta Gkika, Andrei Matlashov, Woohyun Chung, Yannis K. Semertzidis
|
2023-04-13T07:11:01Z
|
http://arxiv.org/abs/2304.07222v2
|
# CAPP Axion Search Experiments with Quantum Noise Limited Amplifiers
###### Abstract
The axion is expected to solve the strong CP problem of quantum chromodynamics and is one of the leading candidates for dark matter. CAPP in South Korea has several axion search experiments based on cavity haloscopes in the frequency range of 1-6 GHz. The main effort focuses on operation of the experiments with the highest possible sensitivity. It requires maintenance of the haloscopes at the lowest physical temperature in the range of mK and usage of low noise components to amplify the weak axion signal. We report development and operation of low noise amplifiers for 5 haloscope experiments targeting at different frequency ranges. The amplifiers show noise temperatures approaching the quantum limit.
Josephson Parametric Amplifier, flux-driven JPA, axion search, haloscope July 29, 2022
## 1 Introduction
Axions are among the leading candidates for dark matter. They are hypothetical elementary particles postulated by the Peccei-Quinn theory in 1977 to solve the strong CP problem in quantum chromodynamics [1, 2]. Slow-moving axions produced in the early universe are also prospective dark matter candidates [3, 4, 5]. Axions interacting with a magnetic field can decay into two photons and thus be detected using an extremely sensitive receiver, the so-called axion haloscope. A haloscope consists of a high quality factor microwave cavity placed in a strong static magnetic field imposed by a superconducting magnet [6, 7]. The signal power of the resulting microwave photons is very low and given by [8]
\[P_{a\rightarrow\gamma\gamma}\approx g^{2}_{a\gamma\gamma}\frac{\rho_{a}}{m_{a}}B^{2}_{0}VC\cdot\min(Q_{L},Q_{a}){\sim}10^{-22}\,W, \tag{1}\]
where \(g_{a\gamma\gamma}\) is a model-dependent coupling constant, \(\rho_{a}\) and \(m_{a}\) are the axion density and mass, \(B_{0}\) is the static magnetic field, \(V\) is the effective haloscope volume, \(C\) is the cavity form factor, and \(Q_{L},Q_{a}\) are the cavity and axion quality factors.
Because the axion mass is unknown, the resonance frequency of the cavity should be tunable to allow scanning all frequencies corresponding to all possible axion masses. The scanning speed is given by [9]
\[\frac{df}{dt}=\left(\frac{1}{SNR}\right)^{2}\left(\frac{P_{a\rightarrow\gamma\gamma}(f)}{k_{B}T_{sys}}\right)^{2}\frac{Q_{a}}{Q_{L}}\!\sim\!\frac{B_{0}^{4}V^{2}CQ_{L}}{T_{sys}^{2}}, \tag{2}\]
where \(SNR\) and \(T_{sys}\) are the read-out signal-to-noise ratio and the system noise temperature, respectively. There have been numerous cavity experiments in search of axions (see, for example, [10] and [11]). To increase the scanning speed we need to increase the axion signal power, which enters the numerator of Eq. (2), and to reduce the system noise temperature, which enters the denominator. A significant fraction of \(T_{sys}\) is the noise temperature of the amplifier, \(T_{n}\). At the early stage, only HEMT amplifiers were used at CAPP [12, 13, 14]. Theoretically, the lowest noise of any amplifier is set by the laws of quantum mechanics. Such an amplifier is said to have quantum-limited noise.
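For orientation, the quantum-limited added noise \(hf/(2k_{B})\) and the \(1/T_{sys}^{2}\) scaling of Eq. (2) can be evaluated directly; the snippet below is purely illustrative, and the 2 K versus 0.3 K comparison uses round numbers rather than measured system temperatures of the CAPP experiments.

```python
h = 6.626e-34    # Planck constant, J*s
kB = 1.381e-23   # Boltzmann constant, J/K

# Quantum limit on amplifier-added noise at representative haloscope frequencies.
for f_GHz in (1.1, 2.3, 5.9):
    Tq_mK = h * f_GHz * 1e9 / (2 * kB) * 1e3
    print(f"{f_GHz:.1f} GHz: hf/(2 kB) = {Tq_mK:.0f} mK")   # ~26, 55, 142 mK

# Eq. (2): df/dt ~ 1/T_sys^2, so lowering T_sys from 2 K to 0.3 K (round numbers)
# speeds up the scan by roughly (2/0.3)^2 ~ 44x.
print(f"scan-rate gain: {(2.0 / 0.3) ** 2:.0f}x")
```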
## 2 Flux Driven Josephson Parametric Amplifier
In CAPP we use flux-driven Josephson parametric amplifiers (FDJPA) with close to quantum-limited noise [15, 16]. The FDJPA consists of a superconducting \(\lambda\)/4 resonator shorted to ground via a SQUID (Fig. 1). The central frequency of the resonator \(f_{c}\) can be adjusted in a limited range by applying a dc flux using a superconducting coil. If a signal is applied on the FDJPA input, one can get amplification of the reflected signal. The energy for that process is provided by the pump tone applied to the SQUID. For our experiments, the parametric amplifiers are used in the non-degenerate three-wave mixing mode, and the frequency of the pump signal is close to twice the FDJPA resonance frequency.
## 3 Operation in the proximity of a strong magnet
Axion haloscope experiments use strong magnets to obtain the highest possible output signal. Devices based on Josephson junctions are very sensitive to external magnetic fields and must be protected from them. Each of the CAPP magnets has a compensation coil that provides a volume with reduced magnetic field; this volume is limited in size and densely occupied by RF components. To protect the FDJPA from the field of the 8 T magnet, which is about 0.1 T in the vicinity of the FDJPA, we developed a three-layer magnetic shield (Fig. 2). This shield was calculated to reduce the residual magnetic field around the FDJPA to less than 100 nT. Figure 3 shows the dependence of the resonance frequency \(f_{c}\) on the applied dc flux bias when the magnet is turned on and off. As one can see from Figure 3, the effect of the magnetic field on the resonance frequency is small.
Figure 1: Schematic of a flux-driven JPA.
## 4 FDJPA readout
A block diagram of the RF chain used for the axion experiment with a FDJPA as the first amplifier is shown in Fig. 4. The chain allows measuring different parameters such as tuning frequency range, gain, and instantaneous bandwidth of the FDJPA. The PUMP input provides the pump signal of the FDJPA. The output signal of the FDJPA is further amplified with low-noise cold HEMT amplifiers. The setup also has a noise source which allows for measuring of noise temperature of the FDJPA and HEMT. The cold RF switch is used to connect the FDJPA to the noise source or the cavity. To measure the cavity and the RF chain properties the WEAK, BYPASS and CAVR inputs are used. The WEAK input is connected with a special cavity antenna weakly coupled to the cavity volume. The BYPASS input allows us to measure the RF chain characteristics by bypassing the cavity. CAVR is used to obtain the cavity reflection (for more details, see [16, 17]).
## 5 Implementation of FDJPs into axion experiments
In the CAPP flagship "12TB" and planned "18T" experiments we use wet dilution refrigerators, which consume a lot of liquid helium during the cooldown of our equipment. To
Figure 3: Dependence of the resonance frequency _fc_ on the applied dc flux bias when the 8 T magnet is turned ON and OFF. The residual field of the magnet at the position of the FDJPA without the shield is about 0.1 T. By implementing this shield, the field strength at the FDJPA is reduced to below 100nT.
Figure 2: Three-layer FDJPA shield. 1 – NbTi layer; 2 – mu-metal layer; 3 – Al layer.
save budget and minimize the number of cooldowns we made a special design of the mixing chamber RF assembly (Fig. 5). The assembly combines all of the cold RF components: FDJPA, circulators, couplers, switch, noise source, etc. The assembly can be mounted in BlueFors dry fridges for preliminary testing without using liquid helium.
We have 25 FDJPAs for the frequency range 0.98-6.0 GHz. Some of them are shown in Table 1 and Figure 6. As can be seen from Figure 6, the noise temperature varies between amplifiers and with operating frequency. We assume that this is the influence of the RF circuits and are working to improve them. Figure 7 shows the frequency coverage of the running 12TB 1.1 GHz experiment with the available FDJPAs. The FDJPAs have been used in completed [17] and ongoing CAPP experiments: 8T PACE 2.3 GHz, 8T PACE 5.6 GHz, 8TB 5.9 GHz, 12TB 1.1 GHz and 8T PACE 2.3 GHz with a superconducting cavity. Results of these experiments will be reported soon.
## 6 Conclusion
We developed a set of FDJPAs for a frequency range of 0.98 to 6 GHz as well as methods and designs to implement them in CAPP axion search experiments.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline JPA No. & 2010- & 2010- & 2010- & 2010- & 2010- & 2010- & 1904- & 1910- & 1910- & 1910- \\ & 1.05G & 1.15G & 1.1G & 1.2G & 1.25G & 2.25G & 2.4G & 2.5G & 6.09G & 7G \\ \hline Operation & 0.98– & 1.006– & 1.07– & 1.13– & 1.17– & 2.06– & 2.27– & 2.3– & 5.1– & 5.5– \\ Frequency & 1.018 & 1.071 & 1.12 & 1.165 & 1.21 & 2.157 & 2.31 & 2.45 & 5.4 & 6.0 \\ Low-High & & & & & & & & & & & \\ (GHz) & & & & & & & & & & & \\ Tunable range & 20 & 65 & 50 & 35 & 40 & 97 & 40 & 15 & 300 & 500 \\ (MHz) & & & & & & & & & & \\ \(T_{n}\) (mK) & 100 & 90 & 100 & 95 & 100 & 150 & 120 & 130 & 145 & 150 \\ \(hf/(2k_{B})\) & 24 & 25 & 26 & 28 & 29 & 51 & 55 & 57 & 126 & 138 \\ \(T_{n}/(hf/(2k_{B}))\) & 4.2 & 3.6 & 3.8 & 3.4 & 3.4 & 3 & 2.2 & 2.28 & 1.15 & 1.09 \\ Bandwidth (kHz) & \(>\)150 & \(>\)150 & \(>\)150 & \(>\)150 & \(>\)200 & \(>\)200 & \(\sim\)100 & \(\sim\)100 & \(\sim\)1000 & \(\sim\)1000 \\ \hline \end{tabular}
\end{table}
Table 1: Selected FDJPAs for CAPP axion search experiments. \(T_{n}\), \(hf/(2k_{B})\) and \(T_{n}/(hf/(2k_{B}))\) are the noise temperature of the FDJPA, the quantum-limited noise for amplifier added noise and the ratio between them for the middle of the FDJPA operating frequency range.
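The quantum-limit row of Table 1 can be reproduced directly from the operating frequency. The short Python sketch below (purely illustrative; the 2010-1.1G values are read off the table) evaluates \(hf/(2k_{B})\) in millikelvin and the corresponding ratio.

```python
from scipy.constants import h, k  # Planck and Boltzmann constants

def half_photon_noise_mK(f_ghz):
    """Quantum-limited added noise hf/(2 kB) of a phase-insensitive amplifier, in mK."""
    return h * f_ghz * 1e9 / (2 * k) * 1e3

# Example: the 2010-1.1G FDJPA, mid-band of its 1.07-1.12 GHz range, Tn ~ 100 mK (Table 1)
f_mid = (1.07 + 1.12) / 2
sql = half_photon_noise_mK(f_mid)          # ~26 mK, as listed in the table
print(f"hf/(2kB) = {sql:.0f} mK, Tn/(hf/(2kB)) = {100 / sql:.1f}")
```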
Figure 6: Results of noise measurements of 4 FDJPAs. The red line corresponds to the standard quantum limit \(hf/k_{B}\).
temperatures of all amplifiers approach the quantum noise limit and their gain (\(>\)20 dB) allows minimizing the influence of subsequent amplifying stages on the system noise temperature. The FDJPAs have so far been used in 5 completed and ongoing CAPP experiments.
## Acknowledgment
This work is supported by IBS-R017-D1-2020-a00 and JST ERATO (Grant No. JPMJER1601). Arjan F. van Loo was supported by the JSPS postdoctoral fellowship.
|
2305.17318
|
Radar Enlighten the Dark: Enhancing Low-Visibility Perception for
Automated Vehicles with Camera-Radar Fusion
|
Sensor fusion is a crucial augmentation technique for improving the accuracy
and reliability of perception systems for automated vehicles under diverse
driving conditions. However, adverse weather and low-light conditions remain
challenging, where sensor performance degrades significantly, exposing vehicle
safety to potential risks. Advanced sensors such as LiDARs can help mitigate
the issue but with extremely high marginal costs. In this paper, we propose a
novel transformer-based 3D object detection model "REDFormer" to tackle low
visibility conditions, exploiting the power of a more practical and
cost-effective solution by leveraging bird's-eye-view camera-radar fusion.
Using the nuScenes dataset with multi-radar point clouds, weather information,
and time-of-day data, our model outperforms state-of-the-art (SOTA) models on
classification and detection accuracy. Finally, we provide extensive ablation
studies of each model component on their contributions to address the
above-mentioned challenges. Particularly, it is shown in the experiments that
our model achieves a significant performance improvement over the baseline
model in low-visibility scenarios, specifically exhibiting a 31.31% increase in
rainy scenes and a 46.99% enhancement in nighttime scenes. The source code of
this study is publicly available.
|
Can Cui, Yunsheng Ma, Juanwu Lu, Ziran Wang
|
2023-05-27T00:47:39Z
|
http://arxiv.org/abs/2305.17318v1
|
Radar Enlighten the Dark: Enhancing Low-Visibility Perception for Automated Vehicles with Camera-Radar Fusion
###### Abstract
Sensor fusion is a crucial augmentation technique for improving the accuracy and reliability of perception systems for automated vehicles under diverse driving conditions. However, adverse weather and low-light conditions remain challenging, where sensor performance degrades significantly, exposing vehicle safety to potential risks. Advanced sensors such as LiDARs can help mitigate the issue but with extremely high marginal costs. In this paper, we propose a novel transformer-based 3D object detection model "REDFormer" to tackle low visibility conditions, exploiting the power of a more practical and cost-effective solution by leveraging bird's-eye-view camera-radar fusion. Using the nuScenes dataset with multi-radar point clouds, weather information, and time-of-day data, our model outperforms state-of-the-art (SOTA) models on classification and detection accuracy. Finally, we provide extensive ablation studies of each model component on their contributions to address the above-mentioned challenges. Particularly, it is shown in the experiments that our model achieves a significant performance improvement over the baseline model in low-visibility scenarios, specifically exhibiting a 31.31% increase in rainy scenes and a 46.99% enhancement in nighttime scenes. The source code of this study is publicly available1.
Footnote 1: [https://github.com/PurdueDigitalTwin/REDFormer](https://github.com/PurdueDigitalTwin/REDFormer)
## I Introduction
Sensor fusion, also known as sensor integration, refers to the process of combining data from various sensors installed in an automated vehicle to obtain a comprehensive and accurate understanding of the surrounding environment [1]; it is also a crucial component in digital twin technology [2]. This approach allows the vehicle to obtain a more comprehensive view of the environment, critical for making informed decisions and navigating safely. The sensors commonly used in automated vehicle sensor integration include LiDAR, radar, camera, and global navigation satellite system (GNSS). Integrating information from multiple sensors allows automated vehicles to operate more effectively and safely in various driving conditions.
However, one major challenge in sensor fusion applications is the underperformance of sensors in low-visibility environments. Common causes of visibility reduction include adverse weather, such as rain and snow, and low-light conditions (such as driving at night). As a result, automated vehicles that rely on these inaccurate detection results pose substantial risks, which can lead to serious traffic accidents or other devastating consequences. Therefore, there is a need to develop a sensor fusion system that is reliable in low visibility and can work effectively in diverse practical conditions.
Much existing research prefers fusing LiDAR and camera sensors due to their complementary nature. However, such a solution comes with potential limitations [3]:
* **Costliness:** LiDAR can be very expensive compared to other sensors on automated vehicles.
* **Environmental Sensitivity:** Environmental factors such as rain, snow, or fog can reduce LiDAR's accuracy.
* **High Computations:** LiDAR generates large amounts of data, which can be computationally intensive to process and analyze.
In contrast, radar sensors can effectively detect objects under a broader range of environmental conditions and are generally less expensive than LiDARs. Therefore, it is more reasonable to investigate sensor fusion of radar and camera data to achieve the 3D object detection task in a practical context. Compared to relying solely on radar sensors, camera-radar fusion offers numerous benefits, including improved object detection by combining high-resolution camera images with radar's insensitivity to environmental conditions, enabling detection even in low-light or adverse weather conditions. Particularly, advanced machine learning algorithms such as Transformers empower the perception capability of automated vehicles through their computational efficiency and scalability [4], and allow multiple tasks to be trained in one joint model [5]. Such systems are more robust and provide a more complete awareness of the environment, contributing to avoiding potential hazards and accidents. Moreover, camera-radar fusion systems are more accessible and cost-effective for all vehicles, promoting the overall penetration rate.
This paper proposes a 3D object detection model applying sensor fusion on multi-camera images and multi-radar point clouds in a bird's-eye-view (BEV) perspective. Inspired by achievements in natural language processing, we learn the positional representation of multi-radar point clouds using an embedding mechanism and gated linear filters. We achieve spatial-temporal fusion using the attention mechanism with image embedding, radar point cloud embedding, and BEV positional queries. Additionally, we design a multi-task objective to integrate visibility conditions into the end-to-end training procedure, aiming to enable our model to understand different low-visibility conditions and their corresponding weather and time-of-day (TOD) properties comprehensively.
The main contributions of this paper are summarized as follows:
* A novel radar embedding backbone is proposed for camera-radar fusion that significantly improves 3D object detection accuracy compared to using images solely.
* A multi-task learning (MTL) paradigm is developed to incorporate weather and TOD information as additional contextual clues during training.
* Extensive experiments on a 3D object detection benchmark are conducted. The results demonstrate that our model outperforms the state-of-the-art approaches, especially in adverse weather conditions and low-light environments.
## II Related Works
### _Object Detection_
Object detection is a computer vision task that aims to identify and localize objects within an image. During the past decade, developments in machine learning technologies have led to growing interest in research on 2D object detection, with a multitude of works in this field [6, 7]. Nevertheless, 2D object detection has limitations in representing the depth of the scene. As a result, more recent research focuses on designing models for 3D object detection. FCOS3D [8] enhanced the 2D detector FCOS [9] to directly generate 3D bounding boxes. Inspired by DETR [10], DETR3D [11] extracts 2D features from multiple camera images and associates them with 3D positions using sparse 3D object queries and camera transformation matrices. Using multi-camera image inputs, M\({}^{2}\)M [12] achieves 3D object detection and map segmentation in the BEV space. Other methods [13] directly use multi-layer perceptrons to learn how to project image inputs from the camera to the BEV plane. BEVFormer [14] uses a deformable transformer to compute spatial and temporal features in BEV grid regions of interest across camera views and previous BEV information. Besides, existing methods have also investigated how to incorporate radar information to enhance image-based object detection. For example, CRFNet [15] converts unstructured radar pins into a pseudo-image format and uses a downstream model to process radar embeddings alongside the camera image.
However, many of these studies do not address the performance degradation under low-visibility conditions, which has been attracting attention recently. Mirza et al. analyze the performance degradation of different architectures under adverse weather conditions [16]. Tomy et al. propose a robust sensor fusion model that combines event-based and frame-based cameras for robust object detection in adverse conditions, utilizing a voxel grid representation for the event input and employing a two-parallel feature extractor network for both frames and events [17]. Bijelic et al. propose a deep fusion architecture that enables robust fusion in foggy and snowy conditions [18]. Sakaridis et al. present a benchmark dataset comprising authentic images captured in diverse adverse weather conditions [19]. Meanwhile, Kenk provides a benchmark radar dataset for evaluating object detection model performance in bad weather [20]. Therefore, our work takes one step further and investigates how to exploit multi-radar point clouds to enhance object detection algorithms under low-visibility conditions.
### _Camera-Radar Sensor Fusion_
A previous study by Waldschmidt et al. shows that radar is not affected by adverse lighting and severe weather conditions, enabling direct measurement of distance, radial velocity, and, with the aid of a suitable antenna system, determination of the angle of distant objects [21]. Hence, the concept of camera-radar sensor fusion aims to exploit the merits of both types of sensors and provide a more comprehensive understanding of the surroundings. Kadow et al. [22] used radar data to limit the search scope for video input and return the distance features and then utilized a simple neural network for vehicle identification. Bertozzi et al. used radar data to refine detection boundaries and applied camera data to detect and classify road obstacles [23]. Other works used similar methods to achieve camera-radar sensor fusion [24, 25]. With the rise of deep learning techniques, more efficient sensor fusion models have emerged in recent years. Nobis et al. presented a network model whose layers incorporated both image data and projected sparse radar data and enabled the model to learn features from both sensors [15]. [26] and [27] involved independent object detection by each sensor (camera and radar), followed by the integration of their results to arrive at a final decision. Lekic et al. projected the radar data onto the 2D images generated by the camera and then used Conditional Multi-Generator Generative Adversarial Networks (CMGANs) to implement object detection [28]. Bijelic et al. developed a deep fusion model using camera and radar inputs and validated it on a dataset with adverse weather conditions [18]. A middle-fusion method, CenterFusion, proposed in [29] utilized a center point detection network to detect objects based on their center points in the image. Distinguished from the abovementioned studies, our work draws inspiration from recent achievements in natural language processing and uses embedding along with attention mechanisms [30] to achieve camera-radar sensor fusion.
## III Methodology
This section presents our camera-radar bird's-eye-view fusion transformer (REDFormer) model for 3D object detection in detail. The main design idea is to fuse multi-radar point cloud and multi-camera image features in a bird's-eye-view (BEV) plane while addressing the performance drop under low-visibility conditions. We will start with our problem statement and introduce the key components in our model, including learnable BEV queries, the radar backbone (RB) module, and the multi-task learning (MTL) module that address the objectives in the problem statement.
### _Problem Statement and Model Architecture_
Our model aims to achieve 3D object detection using camera-radar sensor fusion. Suppose we have \(N_{r}\) radar sensors and \(N_{c}\) cameras. At each timestep \(t\)
we observe a collection of multi-camera images \(\mathcal{F}_{t}=\left\{f_{i}^{(t)}\in\mathbb{R}^{3\times H\times W}\mid i=1,\ldots,N_{c}\right\}\), where \(H\) and \(W\) are the height and width of each image. Besides, we also observe multi-radar point clouds \(\mathcal{P}_{t}=\left\{p_{ij}^{(t)}\in\mathbb{R}^{D_{r}}\mid i=1,\ldots,N_{r}\right\}\), where \(j\) indexes the radar points of radar sensor \(i\), and \(D_{r}\) is the attribute dimension of each radar point. Suppose we have \(N_{k}\) context objects. The bounding box of a context object \(k\) is defined by a vector of \(D_{b}\) parameters \(b_{k}^{(t)}\in\mathbb{R}^{D_{b}}\), and the type of the object is \(c_{k}^{(t)}\in\mathbb{R}\). The objective of our problem is to search for the optimal model inside the model space \(\mathcal{M}=\left\{m\mid m:\mathbb{R}^{D_{r}}\times\mathbb{R}^{3\times H\times W}\rightarrow\mathbb{R}^{D_{b}}\times\mathbb{R}\right\}\) that minimizes the prediction and classification errors
\[m^{*}=\underset{m\in\mathcal{M}}{\text{argmin}}\frac{1}{N_{k}}\sum_{k=1}^{N_{ k}}\text{err}(b_{k}^{(t)},\hat{b}_{k}^{(t)})+\text{err}(c_{k}^{(t)},\hat{c}_{k}^ {(t)}),\]
\[\text{where }(\hat{b}_{k}^{(t)},\hat{c}_{k}^{(t)})=m(\mathcal{F}_{t},\mathcal{P}_{ t}\mid\theta). \tag{1}\]
Given the problem statement above, there are three essential problems:
* **Sensor Fusion** How to resolve the differences in feature spaces between multi-radar point clouds and multi-camera images?
* **Temporal Dependencies** How to learn the temporal dependencies between consecutive observations?
* **Vulnerability to Low-visibility Conditions** How to incorporate low-visibility information into the end-to-end learning pipeline?
As illustrated in Fig. 1, we approach sensor fusion and temporal dependencies by proposing a novel embedding-based RB and a temporal self-attention layer to unify multi-radar point clouds and multi-camera images in a BEV perspective. The RB handles extracting the positional saliency signal of each region of interest, while the attention mechanism learns temporal correlations between current radar signals and previously observed BEV features.
Nevertheless, the original objective function [14] failed to consider vulnerability to low-visibility conditions. To compensate, we train our model by optimizing a multi-task objective. Specifically, instead of only minimizing the prediction and classification error, our model minimizes the binary classification error of current weather conditions \(\hat{w}^{(t)}\) and the time-of-day label (i.e., night or daytime) \(T^{(t)}\) simultaneously. The overall architecture of the proposed model is inspired by the existing state-of-the-art model BEVFormer [14]. However, we distinguish ourselves with the three innovative components explicitly designed to address the three critical problems.
### _Radar Backbone_
It is critical to resolve the difference in feature spaces between multi-radar point clouds and multi-camera image inputs for sensor fusion. To that end, we design the RB module, which projects and helps unify the features under the BEV scope.
Initially, since each radar sensor observes points from its perspective, we need to unify the overall point clouds in a shared local coordinate system centered at the current position of the ego vehicle. For each radar \(i\), we project its observed point cloud using an affine transformation matrix \(p^{\prime\,(t)}_{\,ij}=\mathcal{T}_{i}\cdot p_{ij}^{(t)}\). Then, we aggregate these projected points in a BEV plane by region of interest (RoI) and create a saliency
Fig. 1: Illustration of the proposed REDFormer. First, the radar backbone generates radar BEV embedding from the multi-radar point cloud. The radar BEV combines with BEV positional embeddings, and the combination and the learnable BEV features from the previous layer is the input for a temporal self-attention layer. Then, the image backbone extracts multi-view image features. BEV queries from the upstream selectively search within the regions of interest associated with image feature maps in the consecutive spatial cross-attention layer. Finally, a downstream model uses the BEV features from the attention layer to derive predictions that simultaneously minimize multi-task prediction loss.
matrix. Suppose we denote the BEV regional saliency matrix by \(S\in\mathbb{R}^{X\times Y}\), where \(X\) and \(Y\) are the numbers of RoI on each row and column, respectively. The value of each cell \(s_{ij}\in\mathbb{R}\) is the number of projected multi-radar points presented within the corresponding region of interest.
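As a rough sketch of this aggregation step (our own illustration with hypothetical array shapes and grid parameters, not the released REDFormer code), the radar points of all sensors can be transformed into the ego frame and binned into the \(X\times Y\) RoI grid to produce the saliency matrix \(S\):

```python
import numpy as np

def radar_bev_saliency(point_clouds, transforms, grid=(128, 128), extent=51.2):
    """Count projected radar points per BEV region of interest.

    point_clouds: list of (N_i, 3) arrays of homogeneous radar coordinates (x, y, 1)
    transforms:   list of 3x3 affine matrices mapping each radar frame to the ego frame
    grid:         (X, Y) numbers of RoIs; extent: half-size of the BEV plane in meters
    """
    X, Y = grid
    S = np.zeros((X, Y), dtype=np.int64)
    cell_x, cell_y = 2 * extent / X, 2 * extent / Y
    for pts, T in zip(point_clouds, transforms):
        ego = (T @ pts.T).T[:, :2]                       # p'_ij = T_i . p_ij
        ix = np.floor((ego[:, 0] + extent) / cell_x).astype(int)
        iy = np.floor((ego[:, 1] + extent) / cell_y).astype(int)
        keep = (ix >= 0) & (ix < X) & (iy >= 0) & (iy < Y)
        np.add.at(S, (ix[keep], iy[keep]), 1)            # saliency = point count per RoI
    return S
```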
Drawing our concepts from natural language processing, we propose a novel approach incorporating an embedding layer in conjunction with a gated unit layer for processing the structured radar point data. The embedding layer \(E(s_{ij}):\mathbb{R}\rightarrow\mathbb{R}^{C}\) treats each cell salience \(s_{ij}\) as a token and creates a look-up table mapping from the cell salience to a \(C\)-dimensional learnable vector. We use a gated unit layer alongside the embedding layer to reduce the noise in the raw radar saliency and squash the value space. The gated unit consists of a sigmoid-linear function and a tanh-linear function. Denoting the feature in an output radar BEV cell by \(e_{ij}\), we can express the overall operation as follows:
\[e_{ij}=\sigma(W_{1}\cdot E(s_{ij})+b_{1})\odot\tanh(W_{2}\cdot E(s_{ij})+b_{2}), \tag{2}\]
where \((W_{1},W_{2})\) and \((b_{1},b_{2})\) are the weights and biases for sigmoid-linear and tanh-linear functions, and \(\odot\) denotes the element-wise product of two matrices.
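A PyTorch-style sketch of Eq. (2) is given below. It is a minimal re-implementation under assumed dimensions (the capacity \(K\) and channel width \(C\) defaults are hypothetical), not the published code; the saliency counts are clamped to the dictionary capacity \(K\) discussed later in the ablation study before the embedding lookup.

```python
import torch
import torch.nn as nn

class RadarBEVEmbedding(nn.Module):
    """Embed per-RoI saliency counts and filter them with a gated unit, as in Eq. (2)."""

    def __init__(self, capacity_k=10, channels=256):
        super().__init__()
        self.capacity_k = capacity_k
        self.embed = nn.Embedding(capacity_k + 1, channels)   # look-up table E(s_ij)
        self.lin_sigmoid = nn.Linear(channels, channels)      # W1, b1
        self.lin_tanh = nn.Linear(channels, channels)         # W2, b2

    def forward(self, saliency):
        # saliency: (B, X, Y) integer point counts; clamp to the dictionary capacity K
        tokens = saliency.clamp(max=self.capacity_k).long()
        e = self.embed(tokens)                                 # (B, X, Y, C)
        gate = torch.sigmoid(self.lin_sigmoid(e))
        value = torch.tanh(self.lin_tanh(e))
        return gate * value                                    # e_ij, element-wise product
```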
### _Temporal Self-Attention and BEV Queries_
Object detection models can take advantage of sequential input consisting of previous observations in practical cases. To address the temporal correlations, we implement the temporal self-attention layer with the help of BEV queries. As shown in Fig. 2, the BEV queries matrix \(Q\in\mathbb{R}^{X\times Y\times C}\) is derived by adding the radar BEV from the RB to a learnable BEV positional embedding. We then feed the BEV queries into the temporal self-attention layer to learn correlations with the BEV representations from preceding timesteps.
Recall that the upstream RB module ensures that the center of the BEV features corresponds to the position of the ego vehicle. Using the intrinsic matrices of cameras, we can determine the corresponding points on the multi-view images for each query \(Q_{p}\in\mathbb{R}^{C}\). As a result, we build the connection between points on the multi-view images and points on the BEV queries. Such a connection is vital for fusing radar features with camera features in the consecutive spatial cross-attention layer [14].
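The sketch below illustrates this step only schematically: it forms the BEV queries by adding the radar BEV features to a learnable positional embedding and attends to the previous-frame BEV features. Standard multi-head attention is used here for brevity, whereas the actual model employs deformable attention; all shapes are assumptions.

```python
import torch
import torch.nn as nn

class TemporalBEVAttention(nn.Module):
    """Schematic temporal self-attention over BEV queries (simplified, non-deformable)."""

    def __init__(self, num_cells=128 * 128, channels=256, heads=8):
        super().__init__()
        self.pos_embed = nn.Parameter(torch.zeros(1, num_cells, channels))  # BEV positions
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, radar_bev, prev_bev):
        # radar_bev, prev_bev: (B, X*Y, C) flattened BEV feature maps
        queries = radar_bev + self.pos_embed             # BEV queries Q
        out, _ = self.attn(queries, prev_bev, prev_bev)  # attend to previous BEV features
        return out
```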
### _Multi-Task Learning_
Multi-task learning (MTL) is an inductive transfer mechanism that aims to enhance generalization performance by leveraging shared information across multiple related tasks [31]. In contrast to traditional machine learning methods, which often focus on learning a single task, MTL capitalizes on the potential richness of information embedded within training signals of related tasks originating from the same domain.
In our proposed REDFormer model, we employ an MTL strategy to bolster its adaptability to diverse environmental circumstances, such as different weather and night scenarios. To achieve this, we apply two additional recognition heads on the unified BEV representation, allowing the model to simultaneously learn and account for weather and TOD conditions alongside the primary 3D object detection task. These recognition heads are designed as separate linear output branches, each responsible for predicting a specific task-related output. The inclusion of these additional learning goals allows the model to capture nuanced relationships and shared features across the tasks. This fosters better generalization and enhanced performance under diverse real-world conditions, resulting in a more robust and versatile 3D object detection system.
Following [10, 11], we employ the set-to-set loss function to quantify the disparity between the predicted and
Fig. 2: Illustration of the radar backbone in the REDFormer. The backbone projects multi-radar point clouds onto the BEV plane and then aggregates local point clouds regarding each region of interest (RoI). An embedding layer learns the representation of the saliency signal within each RoI. A gated unit filters the signal and generates the Radar BEV embeddings.
ground truth values for the 3D object detection task. For environmental variables, we utilize the standard cross-entropy loss function to perform binary classification, identifying whether it is raining or not, and whether it is nighttime or not. The joint loss function is defined as \(\mathcal{L}_{\text{joint}}=\mathcal{L}_{\text{det}}+\mathcal{L}_{\text{rain}}+\mathcal{L}_{\text{tod}}\).
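A minimal sketch of this multi-task objective is shown below (our own illustration; the detection term is assumed to come from the set-to-set detection head, and the two auxiliary heads are assumed to output single logits for rain and time-of-day):

```python
import torch.nn.functional as F

def joint_loss(det_loss, rain_logit, rain_label, tod_logit, tod_label):
    """L_joint = L_det + L_rain + L_tod (binary cross-entropy for the auxiliary tasks)."""
    rain_loss = F.binary_cross_entropy_with_logits(rain_logit, rain_label.float())
    tod_loss = F.binary_cross_entropy_with_logits(tod_logit, tod_label.float())
    return det_loss + rain_loss + tod_loss
```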
## IV Experiments
### _Dataset_
Our model is implemented and evaluated on the nuScenes dataset, a large-scale public dataset designed by Motional for autonomous driving research [35]. The nuScenes dataset consists of approximately 1,000 scenes, each with a duration of approximately 20 seconds. For every sample, six high-definition cameras are positioned to capture images at a resolution of 1600 \(\times\) 900 pixels, covering a 360-degree field of view, and five radar sensors provide 360-degree coverage around the vehicle, operating at a frequency of 77 GHz with a range of up to 250 meters.
The nuScenes dataset uses a comprehensive set of evaluation metrics to assess the performance of models and algorithms for the object detection task. The nuScenes dataset uses mean average precision (mAP) as its primary evaluation metric, which computes the 2D center distance on the ground plane for object matching rather than 3D intersection-over-union affinities. The nuScenes dataset also includes a set of true positive metrics, such as average translation error (ATE), average scale error (ASE), average orientation error (AOE), average velocity error (AVE), and average attribute error (AAE), which are used to evaluate the accuracy of object detection in terms of translation, scale, orientation, velocity, and attribute errors. In addition, the nuScenes dataset introduced a nuScenes detection score (NDS) to consolidate all the above evaluation metrics. The NDS is calculated as follows:
\[\text{NDS}=0.5\times\text{mAP}+\sum 0.1\times\max((1-\text{mTP}),0) \tag{3}\]
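For reference, Eq. (3) can be evaluated as in the following short sketch, where the five true-positive errors are the mean ATE, ASE, AOE, AVE and AAE values (the numbers below are placeholders, not results from this paper):

```python
def nuscenes_nds(mAP, mTP_errors):
    """nuScenes detection score, Eq. (3): 0.5 * mAP + sum of 0.1 * max(1 - mTP, 0)."""
    return 0.5 * mAP + sum(0.1 * max(1.0 - err, 0.0) for err in mTP_errors.values())

print(nuscenes_nds(0.40, {"mATE": 0.70, "mASE": 0.28, "mAOE": 0.50,
                          "mAVE": 0.45, "mAAE": 0.19}))
```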
### _Baseline_
We select several state-of-the-art 3D object detection models as baselines to compare the performance of our approach. The baseline models include DETR3D [11], FCOS3D [32], BEVFusion [33], BEVFormer (small variant) [14] and BEVDet (tiny variant) [34], which rely solely on cameras. Additionally, we compare our approach to CenterFusion[29] and CRF-Net [15], which incorporate both radar and camera data.
### _Main Results_
We conduct extensive evaluations of our proposed REDFormer model on the nuScenes object detection task, and compare its performance with several state-of-the-art object detection approaches that use either camera, radar, or camera-radar fusion models. Our results, summarized in Tab. I, demonstrate that our model achieves a significant improvement over the baselines. Specifically, we obtained a 6.7% improvement in NDS and a 16.1% improvement in mAP compared to CenterFusion (camera-radar fusion) [29], and a 1.5% improvement in NDS and a 4.1% improvement in mAP compared to BEVFormer (camera only) [14]. These findings indicate the effectiveness of our novel RB and MTL modules, and their ability to capture the nuances of multi-sensor data and enhance object detection performance in challenging scenarios.
To assess the robustness of our model under adverse weather conditions, we conduct experiments in both low-light nighttime and rainy scenarios. Specifically, we evaluate our model's performance in these adverse conditions to verify its ability to maintain high accuracy and reliability even in low-visibility environments. To do this, we create two sub-datasets from the nuScenes dataset, one containing only nighttime scenarios and the other containing only rainy scenes. We then perform experiments on these sub-datasets to evaluate our model's performance in these challenging conditions. We use BEVFormer [14] and BEVDet [34] as our baseline models. The object detection performance in low-visibility scenarios is generally not as good as in normal conditions, as indicated in Tab. II. However, our REDFormer demonstrates significant improvements in both rainy and nighttime conditions when compared to BEVFormer [14]. Specifically, we observe a 31.31% improvement in NDS on rainy scenes and a 46.99% improvement in NDS on night scenes, highlighting the effectiveness of our approach in adverse weather conditions. Our model also showcases significant improvements over BEVFormer [14] in mAP across challenging conditions. Particularly, we achieve a notable 14.5% improvement in mAP for rainy scenes and an impressive 11.5% improvement for night scenes. Furthermore, compared to the baseline model BEVDet [34], our model exhibits a substantial 19.8% improvement in mAP for rainy scenes and a remarkable 50% improvement for night scenes. Hence, the notable enhancement in low-visibility scenarios demonstrates the radar's ability to provide precise object localization. This signifies the effectiveness of our RB in guiding the model towards object localization, especially in low-visibility conditions, while the MTL heads effectively incorporate light conditions into the predictions. In rainy or nighttime conditions, the model appropriately assigns higher importance to radar inputs compared to sunny or other high-visibility conditions.
### _Ablation Study_
_Module analysis._ We conduct ablation experiments on the nuScenes validation set to analyze the impact of different modules, specifically the MTL and the RB. Each component is removed individually while the other is kept fixed. Our ablation study includes both the full nuScenes dataset and low-visibility subsets to comprehensively evaluate the effectiveness of our approach. Tab. III demonstrates that both the RB and MTL modules significantly improve model performance. Additionally, when compared to the baseline model, Tab. IV highlights the improvements made by each module in night and rainy scenes. Notably, the RB module improves NDS by 31.21% and 43.33% in rainy and night subsets, respectively, while the MTL module
improves NDS by 30.51% and 43.44% in rainy and night subsets, respectively. Individually, both of these components contribute to enhancing the object detection performance of the model, and their combined usage yields the highest performance, indicating the synergistic benefits of integrating both components in the model. These findings highlight the effectiveness of our proposed approach and underscore the value of the consequential modules in enhancing the performance of object detection systems in challenging scenarios.
_Influence of the capacity limit of the embedding dictionary._ We discuss the outcomes of selecting the capacity limit \(K\) for the embedding dictionary based on the experiment presented in Tab. V. We restrict our focus to \(K\geq 10\), as this value represents the maximum number of radar points within a single grid throughout the entire nuScenes dataset. The optimal performance in terms of mAP is achieved when \(K=10\), while the NDS values show near-identical best performances for \(K=10\) and \(K=20\). These results suggest that \(K\) should be selected as close as possible to the radar point capacity per grid, contingent on the specific radar setup of the vehicle.
### _Visualization_
In order to provide comprehensive insight into the performance of the REDFormer model in 3D object detection tasks under low-visibility conditions, we present detailed visualizations showcasing the outstanding performance and the significant improvement of our model compared to the baseline (BEVFormer) model in such scenarios. The comparisons of prediction results between REDFormer, the baseline (BEVFormer), and the ground truth in the rainy subset and the night subset are visualized in Fig. 3 and Fig. 4, respectively.
The visualization results clearly demonstrate that our model, REDFormer, outperforms the baseline model under challenging visibility conditions, as evidenced by the more precise 3D bounding boxes it generates. Notably, our model avoids entirely non-existent predictions, whereas the baseline model occasionally produces erroneous predictions. These results validate the effectiveness of our MTL approach, which enables our model to account for visibility conditions
and achieve superior performance in such scenes.
Furthermore, the incorporation of multi-radar input proves to be advantageous, as it remains unaffected by environmental conditions and provides reliable cues in adverse environments. Additionally, the inclusion of radar points imparts additional depth information, resulting in more accurate and realistic 3D bounding boxes.
## V Conclusions
In this paper, our proposed approach to 3D object detection represents a significant improvement over existing state-of-the-art (SOTA) methods, leveraging a middle-fusion technique built around a transformer-based bird's-eye-view (BEV) encoder. We introduced an innovative radar backbone (RB) to extract features from multi-radar points and employed multi-task learning (MTL) to enable the model to consider the impact of weather and time-of-day (TOD) on object detection. The transformer-based BEV approach enables us to effectively utilize comprehensive environmental information, leading to high performance in object detection. Our approach enhances the accuracy and robustness of the system in diverse environments, including those with reduced visibility due to adverse weather conditions or low light. By combining the benefits of MTL, the novel RB, and the transformer-based middle-fusion approach, our method demonstrates significant improvements in performance. Our experiments reveal that our model outperforms the SOTA baseline model (BEVFormer), achieving a 31.31% higher NDS in rainy scenes and a 46.99% higher NDS in low-visibility night scenes. Overall, our approach represents a valuable contribution to the field of 3D object detection, with potential applications in a wide range of industries and
Fig. 4: Visualization results of REDFormer and BEVFormer on nuScenes night validation subset, exclusively including figures when objects are present. Vehicles are marked by orange bounding boxes, motorcycles by red, pedestrians by blue, and barriers by gray.
Fig. 3: Visualization results of REDFormer and BEVFormer on nuScenes rainy validation subset, exclusively including figures when objects are present. Vehicles are marked by orange bounding boxes, motorcycles by red, pedestrians by blue, and barriers by gray.
use cases. One limitation of our work is that the current frames per second (FPS) of our model is relatively low. As a result, our future research will focus on optimizing the FPS and enhancing the model's deployability for real-time predictions in real automated vehicles.
|
2308.11795
|
Vector Field Dynamics: Field Equations and Energy Tensor
|
Relativistic field theory for a vector field on a curved space-time is
considered assuming that the Lagrangian field density is quadratic and contains
field derivatives of first order at most. By applying standard variational
calculus, the general Euler-Lagrange equations for the field are derived and
the existence of a conserved current is achieved. The field equations are also
analyzed from an eikonal-like point of view. The Hilbert energy-momentum tensor
of the field is also derived and the influence of each one of the irreducible
pieces appearing in the Lagrangian is studied. Particular values of the free
parameters allow to retrieve known results.
|
Roberto Dale, Alicia Herrero, Juan Antonio Morales-Lladosa
|
2023-08-22T21:28:26Z
|
http://arxiv.org/abs/2308.11795v2
|
# Vector field dynamics: field equations and energy tensor
###### Abstract
Relativistic field theory for a vector field on a curved space-time is considered assuming that the Lagrangian field density is quadratic and contains field derivatives of first order at most. By applying standard variational calculus, the general Euler-Lagrange equations for the field are derived and the existence of a conserved current is achieved. The field equations are also analyzed from an eikonal-like point of view. The Hilbert energy-momentum tensor of the field is also derived and the influence of each one of the irreducible pieces appearing in the Lagrangian is studied. Particular values of the free parameters allow to retrieve known results.
Lagrangian density Relativistic field theory Test vector field Curved background Hilbert energy-momentum tensor
## 1 Introduction
Space-time vector fields are essential tools in Physics. In the standard model of Particle Physics, the fundamental interactions between elementary fermions (quarks and leptons) are mediated by twelve vector fields (eight gluons and four electroweak spin-1 bosons) [1, 2]. Lie algebra generators are used to implement diverse physical symmetries, both in Relativistic Quantum Mechanics and in General Relativity [3]. A vector field is the architect of a tensor-vector theory of gravitation like, for example, the one introduced in [4, 5] or its extensions (see, for instance, [6, 7, 8] and [9] for a review). The search for predicted dynamical and gravitational imprints that could be attributed to otherwise unobserved vector fields is a current issue in Astrophysics and observational Cosmology [10, 11, 12, 13].
This work is focused on the dynamics of a "test vector field" \(\xi\). This task involves (i) the analysis of the field equations for \(\xi\) and (ii) the description of its energy-momentum tensor. The terminology "test field" underlines the fact that no specific metric theory of gravitation is considered in this work. Hence, any characteristic property of a test field holds
in any curved space-time background and constitutes the starting point for studying self-gravitating vector fields. This last scenario involves, in addition, the Einstein field equations sourced by the energy content defined by \(\xi\), and the coupling of the dynamical evolution of the field itself.
In this paper, the obtained results will be presented as generally as possible in order to open the possibility of applications in extended areas of Physics, including General Relativity. For instance, the field energy tensor could be studied from an algebraic point of view and characterized according to the Churchill-Plebanski classification [14, 15] of a spacetime symmetric 2-tensor and the restrictions derived from the usual energy conditions introduced by Plebanski (see Refs. [15, 16]). The \(3+1\) (space plus time) decomposition of the field equation and the energy tensor of \(\xi\), with respect to a space-time observer \(u\) (unit, future oriented time-like vector field), and their physical interpretation are other possible applications of our work. In particular, when \(\xi\) is time-like and \(u\) is the observer associated to \(\xi\), the decomposition of its covariant derivative, \(\nabla u\), relative to \(u\) provides the kinematic properties of \(\xi\) (acceleration, shear, expansion and vorticity), which are involved in the field equations of \(\xi\) and help to interpret its energy tensor. Moreover, the general approach to self-gravitating vector fields involving the \(3+1\) decomposition of the coupled system of the Einstein equations and the vector field equations, the former sourced by the energetic content of the field \(\xi\) itself, could also be investigated.
The terminology time-like, light-like (null) or space-like for a vector \(\xi\) will be used to express the fact that \(\xi^{2}\equiv g(\xi,\xi)\) is negative, zero or positive, respectively, when the signature convention \((-,+,+,+)\) is taken for the space-time metric \(g\). However, no particular election for the metric signature has been used along this work.
The paper is structured according to this plan. Notation, main definitions and sign conventions are established in Sec. 2. The Lagrangian density \(L\) for a vector field theory and the corresponding Euler-Lagrange equations are introduced in Sec. 3, pointing out the fundamental terms in \(L\). Next, Sec. 4 deals with the eikonal-like decomposition of the dynamics of a vector field. Following the Hilbert variational prescription, in Sec. 5, the energetic content carried out by a vector field is obtained piece to piece, according to the Lagrangian decomposition in invariant summands. Finally, the results of this paper are discussed and summarized in Sec. 6. To gain conciseness, the main results are presented as statements under the form of brief propositions and lemmas.
## 2 Notation and conventions
First of all, some remarks about the notation and some useful identities used further in this paper are given:
(i) The space-time metric determinant, \(\mathrm{g}\equiv\det g<0\), fulfills the relations:
\[\mathrm{g}=g_{\mu\nu}m^{\mu\nu}\qquad\mathrm{and}\qquad\frac{\partial\mathrm{ g}}{\partial g_{\mu\nu}}=m^{\mu\nu}=\mathrm{g}\,g^{\mu\nu}\,, \tag{1}\]
where \(m^{\mu\nu}\) is the minor (with its sign) of the matrix metric element \(g_{\mu\nu}\), in a given basis, and \(g^{\mu\nu}\) is the inverse of the metric, used to raise and lower indexes, respectively. Other related expressions are:
\[g^{\mu\nu}=\frac{\partial\mathrm{ln}\,(-\mathrm{g})}{\partial g_{\mu\nu}} \qquad\mathrm{and}\qquad\Gamma^{\alpha}_{\alpha\rho}=\partial_{\rho}\ln\sqrt{ -\mathrm{g}}\,, \tag{2}\]
with \(\Gamma^{\alpha}_{\alpha\rho}=g^{\alpha\mu}\Gamma_{\alpha\rho.\mu}\) (summation over contracted repeated indexes is understood) and
\[\Gamma_{\alpha\rho.\mu}=\frac{1}{2}(\partial_{\alpha}g_{\rho\mu}+\partial_{ \rho}g_{\alpha\mu}-\partial_{\mu}g_{\alpha\rho})\]
as the Christoffel symbols, defined from the first derivatives of the metric field components, \(g_{\mu\nu}(x^{\alpha})\), in a coordinate system \(\{x^{\alpha}\}\), which are denoted by
\[g_{\mu\nu,\alpha}\equiv\partial_{\alpha}g_{\mu\nu}=\frac{\partial g_{\mu\nu} }{\partial x^{\alpha}}.\]
(ii) Let \(\xi\) be a smooth vector field, defined on a space-time domain \(D\) with metric \(g\) and Levi-Civita connection \(\nabla\). From the covariant derivative of \(\xi\), \(\nabla\xi\), and its transpose, \({}^{\mathrm{t}}\nabla\xi\equiv(\nabla\xi)^{\mathrm{t}}\), we can always build two 2-tensors, \(S\) and \(F\) defined by:
\[S \equiv \nabla\xi+{}^{\mathrm{t}}\nabla\xi=\mathcal{L}_{\xi}g\,, \tag{3}\] \[F \equiv \nabla\xi-{}^{\mathrm{t}}\nabla\xi=\mathrm{d}\xi\,. \tag{4}\]
where \(\mathcal{L}_{\xi}\) stands for the Lie derivative along \(\xi\) and \(\mathrm{d}\) for the exterior derivative, here acting on the 1-form metrically equivalent to \(\xi\), which is also denoted by \(\xi\), that is \(g(\xi,\cdot)\equiv\xi\). By using index notation the 2-tensors \(S\) and \(F\) are:
\[S_{\mu\nu} = \nabla_{\mu}\xi_{\nu}+\nabla_{\nu}\xi_{\mu}=(\mathcal{L}_{\xi}g)_{ \mu\nu}\,, \tag{5}\] \[F_{\mu\nu} = \nabla_{\mu}\xi_{\nu}-\nabla_{\nu}\xi_{\mu}=(\mathrm{d}\xi)_{\mu\nu} \tag{6}\]
respectively, where \(\nabla_{\mu}\xi_{\nu}\equiv(\nabla\xi)_{\mu\nu}=\partial_{\mu}\xi_{\nu}-\Gamma^{ \rho}_{\mu\nu}\xi_{\rho}\).
(iii) Given \(x\) and \(y\), both being vector or covector fields, the symbol \(\widetilde{\otimes}\) stands for its symmetrized tensorial product, that is, \(x\widetilde{\otimes}y\equiv x\otimes y+y\otimes x\).
(iv) Given \(P\) and \(Q\) second order tensors, the tensor \(P\times Q\) denotes its matrix product, or contraction of adjacent indices, that is
\[(P\times Q)_{\mu}{}^{\nu}=P_{\mu\rho}\,Q^{\rho\nu}.\]
In particular, \(P^{2}=P\times P\). The trace of \(P\) is \(\mathop{\rm tr}\nolimits P=g^{\mu\nu}P_{\mu\nu}=P^{\mu}_{\mu}\). Thus, for the 2-tensors \(S\) and \(F\) in (3) and (4),
\[\mathop{\rm tr}\nolimits S=2\,\nabla_{\mu}\xi^{\mu}\equiv 2\,\nabla\cdot\xi, \tag{7}\]
where
\[\nabla\cdot\xi\equiv\nabla_{\mu}\xi^{\mu}=\frac{1}{\sqrt{-{\rm g}}}\partial_{ \mu}(\sqrt{-{\rm g}}\;\xi^{\mu}) \tag{8}\]
is the covariant divergence of \(\xi\). Moreover,
\[\mathop{\rm tr}\nolimits S^{2} = \mathop{\rm tr}\nolimits(S\times S)=S_{\mu\nu}S^{\mu\nu}\,, \tag{9}\] \[\mathop{\rm tr}\nolimits F^{2} = \mathop{\rm tr}\nolimits(F\times F)=-F_{\mu\nu}F^{\mu\nu}\,, \tag{10}\]
because \(S\) is symmetric and \(F\) is antisymmetric, and then \(\mathop{\rm tr}\nolimits(S\times F)=\mathop{\rm tr}\nolimits(F\times S)=0\).
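As a concrete check of the identities (2) and (8), the following SymPy sketch verifies the contracted Christoffel relation and the covariant divergence formula for an explicit diagonal metric (the Schwarzschild metric is used merely as an example; any metric would serve).

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
M = sp.symbols('M', positive=True)
x = [t, r, th, ph]

# Schwarzschild metric in signature (-,+,+,+), used only as a sample diagonal metric
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()
detg = g.det()

def Gamma(a, b, c):
    """Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection."""
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
        for d in range(4))

# Contracted identity Gamma^alpha_{alpha rho} = d_rho ln sqrt(-g), Eq. (2)
for rho in range(4):
    lhs = sum(Gamma(a, a, rho) for a in range(4))
    rhs = sp.diff(sp.log(sp.sqrt(-detg)), x[rho])
    assert sp.simplify(lhs - rhs) == 0

# Covariant divergence formula, Eq. (8), for an arbitrary vector field xi^mu
xi = [sp.Function(f'xi{m}')(*x) for m in range(4)]
div_cov = sum(sp.diff(xi[m], x[m]) for m in range(4)) \
        + sum(Gamma(a, a, m) * xi[m] for a in range(4) for m in range(4))
div_formula = sum(sp.diff(sp.sqrt(-detg) * xi[m], x[m]) for m in range(4)) / sp.sqrt(-detg)
assert sp.simplify(sp.expand(div_cov - div_formula)) == 0
print("Eqs. (2) and (8) verified for the sample metric")
```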
(v) The divergence of a 2-tensor \(T\) is denoted by \(\nabla\cdot T\) (in components, \((\nabla\cdot T)^{\nu}=\nabla_{\mu}T^{\mu\nu}\)). The interior or contracted product by \(\xi\) is denoted by \(i(\xi)\). For instance, if \(T\) is a covariant \(2\)-tensor, then \((i(\xi)\nabla T)_{\mu\nu}=\xi^{\alpha}\nabla_{\alpha}T_{\mu\nu}\) is the covariant derivative of \(T\) along \(\xi\), that is \(i(\xi)\nabla T=\nabla_{\xi}T\). On the other hand, \([i(\xi)T]_{\nu}=\xi^{\mu}T_{\mu\nu}\) is a covector but if \(T\) is a mixed 2-tensor, \([i(\xi)T]^{\nu}=\xi^{\mu}T_{\mu}^{\ \nu}\) is a vector. The double contraction of \(T\) with \(\xi\) is denoted by \(i^{2}(\xi)T\equiv i(\xi)i(\xi)T=T(\xi,\xi)=T_{\mu\nu}\xi^{\mu}\xi^{\nu}\).
(vi) By convention, the sign of the Riemann curvature tensor is taken according with the Ricci identities, which are written as
\[\nabla_{\mu}\nabla_{\nu}\xi_{\beta}-\nabla_{\nu}\nabla_{\mu}\xi_{\beta}=\xi^ {\alpha}R_{\alpha\beta\nu\mu}\,. \tag{11}\]
The Ricci tensor of \(g\), \(Ric\equiv Ric(g)\), is defined as the contraction of the first and third index of the curvature (in components \(R_{\alpha\beta}=R^{\mu}_{\ \alpha\mu\beta}\)), and the scalar curvature is \(R\equiv\mathop{\rm tr}\nolimits Ric\). Moreover, the Ricci identities for 2-tensors have the expression:
\[\nabla_{\mu}\nabla_{\nu}T_{\alpha\beta}-\nabla_{\nu}\nabla_{\mu}T_{\alpha \beta}=R^{\rho}_{\ \alpha\nu\mu}T_{\rho\beta}+R^{\rho}_{\ \beta\nu\mu}T_{\alpha\rho}\,. \tag{12}\]
## 3 Vector field dynamics
This section is mainly devoted to obtaining the differential equation that a vector field obeys, following the usual approach of relativistic field theory in a curved space-time. Moreover, a detailed comparison with previous studies carried out by other authors is also provided.
### Euler-Lagrange equation for a vector field \(\xi\)
For a vector field \(\xi\) in a curved geometry, we are faced with the dynamic field equation that \(\xi\) obeys. Variational calculus from a Lagrangian density leads to the corresponding Euler-Lagrange equations for \(\xi\), whose relevant differential terms depend on the hypotheses made in the construction of \(L\).
Let us consider the functional:
\[I=\int_{D}L\,\sqrt{-{\rm g}}\;d^{4}x,\qquad{\rm g}\equiv\det g \tag{13}\]
where \(L\) is the quadratic Lagrangian density on \(D\), defined from the vector field \(\xi\) and its first derivatives as
\[L=a\mathop{\rm tr}\nolimits S^{2}+b\mathop{\rm tr}\nolimits F^{2}+c\left( \mathop{\rm tr}\nolimits S\right)^{2}+\left(\mu+\nu R\right)\xi^{2}\,, \tag{14}\]
with \(a,b,c,\mu,\nu\in\mathbb{R}\) and \(R\) the scalar curvature of \(g\).
Note that our Lagrangian density is defined using the scalars \(\mathop{\rm tr}\nolimits S^{2}\), \(\mathop{\rm tr}\nolimits F^{2}\) and \(\mathop{\rm tr}\nolimits S\) without considering terms in the Ricci tensor of the form \(i^{2}(\xi)Ric\). Later on, it will be seen that these Ricci terms can be rewritten in terms of the scalars used in our definition of the Lagrangian density.
By requiring \(L\) to be stationary under variations of \(\xi\) and its first derivatives (vanishing on \(\partial D\), the boundary of \(D\)), the Euler-Lagrange equations [17] are written as:
\[\nabla_{\alpha}\frac{\partial L}{\partial\nabla_{\alpha}\xi_{\beta}}-\frac{ \partial L}{\partial\xi_{\beta}}=0. \tag{15}\]
Notice that considering an extra term proportional to \(\operatorname{tr}S\) in expression (14) of \(L\) is irrelevant, since
\[\frac{\partial\operatorname{tr}S}{\partial\nabla_{\alpha}\xi_{\beta}}=2g^{ \alpha\beta}\,, \tag{16}\]
and \(\nabla g=0\). Then, such a term does not contribute to the field equation (15), in accordance with the application of the Gauss theorem for the case \(\operatorname{tr}S=2\nabla\cdot\xi\) and the assumed boundary conditions. As a matter of consistency, the energy tensor associated with this extra term has to be zero (see Proposition 5 in Subsection 5.2).
Now, from the definition of \(S\) and \(F\) in (5) and (6), respectively, the following relations for the derivatives of the different summands in expression (14) of \(L\) are deduced:
\[\frac{\partial\operatorname{tr}S^{2}}{\partial\nabla_{\alpha}\xi _{\beta}} = 4\,S^{\alpha\beta}, \tag{17}\] \[\frac{\partial\operatorname{tr}F^{2}}{\partial\nabla_{\alpha} \xi_{\beta}} = -4\,F^{\alpha\beta},\] (18) \[\frac{\partial(\operatorname{tr}S)^{2}}{\partial\nabla_{\alpha} \xi_{\beta}} = 4\,\frac{\partial(\nabla\cdot\xi)^{2}}{\partial\nabla_{\alpha} \xi_{\beta}}=8\,(\nabla\cdot\xi)\,g^{\alpha\beta},\] (19) \[\frac{\partial\xi^{2}}{\partial\xi_{\beta}} = 2\,\xi^{\beta}. \tag{20}\]
Substituting these expression in Eq. (15), it becomes:
\[a\,\nabla\cdot S-b\,\nabla\cdot F+2\,c\,\mathrm{d}\,(\nabla\cdot\xi)-\frac{1} {2}(\mu+\nu R)\xi=0\,, \tag{21}\]
which is the field equation that the vector field \(\xi\) satisfies.
Now, taking into account the Ricci identities (11) for \(\xi\), the divergence summands, \(\nabla\cdot S\) and \(\nabla\cdot F\), might be conveniently expressed in terms of the field and its derivatives as follows:
\[\nabla\cdot S = \Delta\xi+\mathrm{d}(\nabla\cdot\xi)+i(\xi)Ric\,, \tag{22}\] \[\nabla\cdot F = \Delta\xi-\mathrm{d}(\nabla\cdot\xi)-i(\xi)Ric\,, \tag{23}\]
where \(\Delta\) stands for the Laplace-Beltrami operator, \((\Delta\xi)_{\alpha}\equiv g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\xi_{\alpha}\). Then, redefining the coefficients in Eq. (21) as
\[\alpha\equiv a-b,\quad\beta\equiv a+b,\quad\gamma\equiv a+b+2c\quad\mathrm{and }\quad\rho\equiv-\frac{1}{2}(\mu+\nu R), \tag{24}\]
we have the following result.
**Proposition 1**: _For a Lagrangian of the form (14) associated to the field \(\xi\), the Euler-Lagrange equation is_
\[\alpha\,\Delta\xi+\beta\,i(\xi)Ric+\gamma\,\mathrm{d}\,(\nabla\cdot\xi)+\rho\, \xi=0, \tag{25}\]
_where \(\alpha\), \(\beta\), \(\gamma\) and \(\rho\) are given in (24)._
Related to the hyperbolicity (strong and weak) analysis and stability of this field equation (25) for \(\mu=\nu=0\), in the space \(\mathbb{R}^{3}\) of real parameters \(\alpha,\beta,\gamma\), we can closely follow the procedure presented in Ref. [18]. In the generic case \(\alpha\neq 0\), we get that Eq. (25) is strongly hyperbolic only for \(\gamma=0\) and weakly hyperbolic if \(\gamma\neq-\alpha\).
Then, Eq. (25) is the field equation for the vector field \(\xi\) in any metric \(g\). This equation will be called here the _Proca-Taubes-Will equation_ since it reduces to the field equations considered by these three authors when particular values of the parameters are chosen, as it will be seen in the next subsection.
### Discussion about the field equation.
Firstly, the previous field equation (25) will be compared with the Proca field equation [20]. To begin with, let us consider the Proca Lagrangian density,
\[L_{\mathcal{P}}=-\operatorname{tr}F^{2}+\mu\,\xi^{2},\]
which clearly follows from (14) taking \(a=c=\nu=0,b=-1\). From these values of the parameters, Eq. (21) becomes
\[\nabla\cdot F-\frac{\mu}{2}\,\xi=0, \tag{26}\]
which is the Proca field equation. Since \(F\) is antisymmetric, this equation implies that the Proca field is divergence free, \(\nabla\cdot\xi=0\), which is not a gauge condition. The Proca field itself defines a conserved current when \(\xi\) is time-like. Then Eq. (25) reduces to
\[\Delta\xi-i(\xi)Ric-\frac{\mu}{2}\,\xi=0\,, \tag{27}\]
with the constant \(\mu\) positive and related to the field mass. Regarding (25), notice that the parameter \(\gamma=b=-1\) does not appear since the field is divergence free.
Let us continue with the Almost-Killing Equation (AKE):
\[\Delta\xi+i(\xi)Ric+(1-\lambda)\mathrm{d}\left(\nabla\cdot\xi\right)=0\,, \tag{28}\]
introduced by Taubes [21] to extend the concept of exact space-time isometry. Clearly, this expression is recovered from Eq. (25) taking \(\alpha=1\), \(\beta=1\), \(\gamma=1-\lambda\) and \(\rho=0\) (that is, \(a=1\), \(b=0\), \(2c=-\lambda\) and \(\mu=\nu=0\)). Related to this equation, in Ref. [22], the authors vindicate the AKE as a useful gauge condition to be implemented in the 3+1 formulation of the Einstein field equations. Moreover, Ref. [18] contains a general analysis of the AKE; in particular, the AKE equation is strongly hyperbolic iff \(\lambda=1\), that corresponds to the differential equation, \(\Delta\xi+i(\xi)Ric=0\), for a generator \(\xi\) of an harmonic transformation, under which the Levi-Civita connection satisfies \(g^{\mu\nu}\,L_{\xi}\Gamma^{\alpha}_{\mu\nu}=0\) (see Ref. [23]). The definition of the Lie derivative for a linear connection (\(\mathcal{L}_{\xi}\Gamma^{\alpha}_{\mu\nu}\)) can be found in [24].
Next, the Will equation coming from the Will Lagrangian density,
\[L_{\mathcal{W}}=\epsilon\,\operatorname{tr}F^{2}+\eta\,i^{2}(\xi)\,Ric+\tau \operatorname{tr}(\nabla\xi\times{}^{\mathrm{t}}\nabla\xi)+\omega\,R\,\xi^{2}\,, \tag{29}\]
with \(\epsilon,\eta,\tau\) and \(\omega\), real parameters, is analyzed. Originally, \(L_{\mathcal{W}}\) was introduced to build vector-tensor theories of gravitation, by extending the Hilbert variational formulation of the Einstein' General Relativity (see Refs. [4, 5]). In this context, the full Lagrangian density is written as \(L=R+L_{\mathcal{W}}+L_{m}\), where \(L_{m}\) stands for the matter term. From this Lagrangian, the field \(\xi\) obeys the Will equation:
\[\epsilon\,\nabla\cdot F+\frac{\eta}{2}\,i(\xi)\,Ric-\frac{\tau}{2}\,\Delta\xi +\frac{\omega}{2}\,R\,\xi=0. \tag{30}\]
Taking into account the general expression (23) for \(\nabla\cdot F\), this equation (30) is written as
\[\left(-\epsilon+\frac{\tau}{2}\right)\,\Delta\xi+\left(\epsilon-\frac{\eta}{2 }\right)\,i(\xi)Ric+\epsilon\,\mathrm{d}\left(\nabla\cdot\xi\right)-\frac{ \omega}{2}\,R\,\xi=0, \tag{31}\]
which can be obtained from Eq. (25) making
\[\alpha=-\epsilon+\frac{\tau}{2},\ \beta=\epsilon-\frac{\eta}{2},\ \gamma=\epsilon,\ \rho=-\frac{\omega}{2}R \tag{32}\]
that is, taking form Eq. (14)
\[a=\frac{1}{4}(\tau-\eta),\ b=\epsilon-\frac{1}{4}(\tau+\eta),\ c=\frac{\eta}{ 4},\ \mu=0,\ \nu=\omega. \tag{33}\]
Notice that, up to the addition of a Proca-like term, \(\mu\,\xi^{2}\), the Lagrangian densities (14) and (29) only differ by a full divergence term, and then, can be considered equivalent with respect to obtaining the field equation from a variational principle with the assumed boundary conditions. To be more specific,
\[L_{\mathcal{W}}=\frac{1}{4}(\tau-\eta)\,\operatorname{tr}S^{2}+\left[\epsilon- \frac{1}{4}(\tau+\eta)\right]\,\operatorname{tr}F^{2}+\eta\,(\nabla\cdot\xi)^ {2}+\,\omega\,R\,\xi^{2}+\eta\,\nabla\cdot\mathcal{B}\,, \tag{34}\]
where \(\mathcal{B}\equiv\nabla_{\xi}\xi-(\nabla\cdot\xi)\,\xi\). Now, since the following contracted Ricci identity
\[i^{2}(\xi)Ric=\nabla\cdot\mathcal{B}+(\nabla\cdot\xi)^{2}-\operatorname{tr}( \nabla\xi\times\nabla\xi) \tag{35}\]
is obtained by contraction of Eq. (11) and, taking into account the expressions
\[\mathrm{tr}(\nabla\xi\times\nabla\xi) = \frac{1}{4}(\mathrm{tr}\,S^{2}+\mathrm{tr}\,F^{2})\,, \tag{36}\] \[\mathrm{tr}(\nabla\xi\times{}^{\mathrm{t}}\nabla\xi) = \frac{1}{4}(\mathrm{tr}\,S^{2}-\mathrm{tr}\,F^{2}), \tag{37}\]
Eq. (34) is derived from Eq. (29). The term \(i^{2}(\xi)Ric\) in (29) is usually interpreted as a minimal coupling gravitational term. It is worthwhile to remark that the explicit introduction of this term in (14) is a matter of convenience, since it could be incorporated by renaming the corresponding coefficients in (14) using (36) and (35).
Finally, other cases can be obtained from expression (25). For \(\alpha=1\), \(\beta=-1\), \(\gamma=\frac{-2\epsilon}{2\epsilon-\eta}\neq 0\) and \(\rho=0\) (that is, \(a=0\), \(b=-1\), \(2c=\frac{-\eta}{2\epsilon-\eta}\) and \(\mu=\nu=0\) if we look at Eq. (21)), this equation reduces to the _Dale-Saez equation_[25]:
\[\nabla\cdot F-\frac{\eta}{2\epsilon-\eta}\,\mathrm{d}\,(\nabla\cdot\xi)=0\]
or equivalently,
\[\Delta\xi-i(\xi)Ric-\frac{2\epsilon}{2\epsilon-\eta}\,\mathrm{d}\,(\nabla \cdot\xi)=0, \tag{38}\]
which can also be obtained from the Will equation (31) making \(\tau=\eta\neq 2\epsilon\) and \(\omega=0\). Different aspects of a vector-tensor theory founded in the coupling of (38) with the Einstein field equations has been analyzed in Refs. [25, 26, 27, 28].
The case \(\tau=0,\epsilon=-1\) in the Will Lagrangian density in (34) with the addition of a Proca-like term \(\mu\,\xi^{2}\), has also been treated in the literature. In such a case, we obtain
\[-\nabla\cdot F+\frac{\eta}{2}\,i(\xi)\,Ric+\frac{1}{2}(\mu+\omega\,R)\,\xi=0\,, \tag{39}\]
in consonance with Eq. (30). Notice that the preceding equation (39) and Eq. (4) in Ref. [29] differ only slightly; namely, the numerical coefficients of the term \(\nabla\cdot F\) in the two equations differ by a factor of 2. However, this minor discrepancy should be taken into account in numerical or analytical expressions derived in [29]. Furthermore, according to Eq. (31), in this case we obtain that \(\xi\) satisfies the equation:
\[\Delta\xi-\left(1+\frac{\eta}{2}\right)\,i(\xi)Ric-\mathrm{d}\,(\nabla\cdot \xi)-\frac{1}{2}(\mu+\omega\,R)\xi=0 \tag{40}\]
which corresponds to the case \(\alpha=1\), \(\beta=-1-\frac{\eta}{2}\), \(\gamma=-1\) and \(\rho=-\frac{1}{2}(\mu+\omega\,R)\) (that is, \(a=-\frac{\eta}{4}\), \(b=-1-\frac{\eta}{4}\), \(c=\frac{\eta}{4}\), and \(\mu=\mu\) and \(\nu=\omega\)).
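The coefficient maps just discussed are easy to check mechanically. The following SymPy sketch (an illustration, not part of the original derivation) applies Eq. (24) to the Proca, AKE and Will parameter choices and confirms the values quoted above.

```python
import sympy as sp

eps, eta, tau, lam = sp.symbols('epsilon eta tau lambda')

def map_abc(a, b, c):
    """(a, b, c) -> (alpha, beta, gamma), following Eq. (24)."""
    return a - b, a + b, a + b + 2 * c

# Proca: a = c = 0, b = -1  ->  (alpha, beta, gamma) = (1, -1, -1)
assert map_abc(0, -1, 0) == (1, -1, -1)

# Almost-Killing equation: a = 1, b = 0, 2c = -lambda  ->  (1, 1, 1 - lambda)
alpha, beta, gamma = map_abc(1, 0, -lam / 2)
assert (alpha, beta, sp.simplify(gamma - (1 - lam))) == (1, 1, 0)

# Will Lagrangian, Eq. (33) -> Eq. (32)
alpha, beta, gamma = map_abc((tau - eta) / 4, eps - (tau + eta) / 4, eta / 4)
assert sp.simplify(alpha - (-eps + tau / 2)) == 0
assert sp.simplify(beta - (eps - eta / 2)) == 0
assert sp.simplify(gamma - eps) == 0
print("Eq. (24) reproduces the Proca, AKE and Will coefficients")
```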
In Table 1 the previous discussion is summarized indicating the values of the parameters in the field equation (25) that recovers some of the known equations from the literature.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \(\alpha\) & \(\beta\) & \(\gamma\) & \(\rho\) \\ \hline Proca & 1 & \(-1\) & \(-1\) & \(-\frac{\mu}{2}\) \\ \hline AKE & 1 & 1 & \(1-\lambda\) & \(\mu=\nu=0\rightarrow\rho=0\) \\ \hline Will & \(-\epsilon+\frac{\gamma}{2}\) & \(\epsilon-\frac{\eta}{2}\) & \(\epsilon\) & \(\mu=0\), \(\nu=\omega\rightarrow\rho=-\frac{\omega}{2}R\) \\ \hline D-S & 1 & \(-1\) & \(\frac{2\epsilon}{2\epsilon-\eta}\) & \(\mu=\nu=0\rightarrow\rho=0\) \\ \hline B-H & 1 & \(-1-\frac{\eta}{2}\) & \(-1\) & \(-\frac{1}{2}(\mu+\nu R)\) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the values of the parameters in equation (25) to recover different field equations: Proca, Almost Killing Equation (AKE), Will, Dale-Sáez (D-S) and Böhmer-Harko (B-H).
### Further considerations from the field equations
The term \(\nabla\cdot F\) in the field equation (21) allows one to obtain a conserved current \(J\) by taking the divergence of this equation. Indeed, since \(F\) is antisymmetric, the double contraction of the Ricci identity (12) for \(F\) leads to \(\nabla\cdot(\nabla\cdot F)=0\). Then, taking the divergence of the field equation (21) we have:
\[\nabla\cdot J=0\quad{\rm with}\quad J=b\,\nabla\cdot F. \tag{41}\]
So, when \(J\) is time-like, it could be interpreted as a conserved current.
The above expression for \(J\) can be further simplified taking into account the field equation and the expression (23) for \(\nabla\cdot F\):
\[J=b\,\left(\Delta\xi-{\rm d}(\nabla\cdot\xi)-i(\xi)Ric\right). \tag{42}\]
Substituting the field equation (25), we get the following result:
**Proposition 2**: _A conserved current \(J\) derived from the field equation (21) when \(\alpha\neq 0\) is given by:_
\[J=\left[1-\frac{1}{2}\left(1+\frac{\beta}{\alpha}\right)\right]\rho\,\xi+ \frac{1}{2}(\alpha+\beta)\left(1-\frac{\beta}{\alpha}\right)i(\xi)Ric+\left[ \frac{\gamma}{2}\left(1-\frac{\beta}{\alpha}\right)+\frac{\alpha-\beta}{2} \right]{\rm d}(\nabla\cdot\xi). \tag{43}\]
Notice that, when \(\alpha\neq 0\), three contributions appear in (43) due to the terms \(\xi\), \(i(\xi)Ric\) and \({\rm d}(\nabla\cdot\xi)\). Moreover, if \(\beta=\alpha\neq 0\), the resulting current identically vanishes since \(b=0\). And when \(\beta=-\alpha\neq 0\), a simplified situation follows,
\[J=\rho\,\xi+(\gamma-\beta){\rm d}(\nabla\cdot\xi), \tag{44}\]
which has been analyzed in Ref. [4, 5, 28] taking \(\beta=-1\), \(\gamma=-\frac{2e}{2\epsilon-\eta}\), \(\rho=0\) and corresponds to the current,
\[J=-\frac{\eta}{2\epsilon-\eta}{\rm d}(\nabla\cdot\xi). \tag{45}\]
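As a quick consistency check of Proposition 2, the coefficient reductions leading to Eqs. (44) and (45) can be verified symbolically. The following sketch, written in Python with SymPy and using variable names of our own choosing, treats the coefficients of \(\xi\), \(i(\xi)Ric\) and \({\rm d}(\nabla\cdot\xi)\) in Eq. (43) as scalar expressions:

```python
import sympy as sp

alpha, beta, gamma_, rho, eps, eta = sp.symbols('alpha beta gamma rho epsilon eta')

# Coefficients of xi, i(xi)Ric and d(div xi) in the current J of Eq. (43)
c_xi   = (1 - sp.Rational(1, 2) * (1 + beta / alpha)) * rho
c_ric  = sp.Rational(1, 2) * (alpha + beta) * (1 - beta / alpha)
c_ddiv = gamma_ / 2 * (1 - beta / alpha) + (alpha - beta) / 2

# For beta = -alpha the current must reduce to Eq. (44): J = rho*xi + (gamma - beta) d(div xi)
case = {beta: -alpha}
print(sp.simplify(c_xi.subs(case) - rho))                    # 0
print(sp.simplify(c_ric.subs(case)))                         # 0
print(sp.simplify(c_ddiv.subs(case) - (gamma_ + alpha)))     # 0, since gamma - beta = gamma + alpha here

# Dale-Saez choice alpha = 1, beta = -1, gamma = -2*eps/(2*eps - eta), rho = 0 gives Eq. (45)
ds = {alpha: 1, beta: -1, gamma_: -2 * eps / (2 * eps - eta), rho: 0}
print(sp.simplify(c_ddiv.subs(ds) + eta / (2 * eps - eta)))  # 0
```

All four expressions simplify to zero, in agreement with the reductions quoted above.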
On the other hand, in an arbitrary space-time, the expression
\[\Pi^{\mu}\equiv\frac{\partial L}{\partial\nabla_{0}\xi_{\mu}} \tag{46}\]
provides a natural generalization for the definition of conjugate moment of the vector field \(\xi\). In a flat space-time field theory, it is usually obtained [19] in terms of the ordinary partial derivative of \(L\) with respect to the inertial time derivative \(\partial_{0}\xi_{\mu}\). Then, from expression (14) and taking into account (17), (18) and (19), we obtain this general expression
\[\Pi^{\mu}=4\,\Big{(}aS^{0\mu}-bF^{0\mu}+2\;c\;(\nabla\cdot\xi)g^{0\mu}\Big{)}. \tag{47}\]
For a geodesic and vorticity free unit observer, whose metrically equivalent 1-form is written as \(u=-{\rm d}t=(-1,0,0,0)\), the conjugate moment of a vector field \(\xi\) relative to \(u\) could be defined as
\[\Pi_{u}(\xi)=-4\,\Big{(}a\;i(u)S-b\;i(u)F+2\;c\;(\nabla\cdot\xi)\;u\Big{)}. \tag{48}\]
Note that, if \(\mu=\nu=0\) (\(\rho=0\)), then \(L\) is independent of the components of \(\xi\). In such a case, the time-independent character of a volume integral quantity derived from \(\Pi^{\mu}\) for a given observer could be examined as a generalization of the treatment given in [19].
## 4 Eikonal-like decomposition of the field equation
Without loss of generality, at each space-time event, the field \(\xi\) may be factorized into a vector field, \(A\), and a scalar function \(f\) of \(\varphi\), both depending on the event coordinates, \(\{x^{\mu}\}\), that is:
\[\xi(x^{\mu})=A(x^{\mu})f(\varphi(x^{\mu}))\,. \tag{49}\]
In the eikonal ansatz, the vector \(A\) represents an amplitude vector and the function \(\varphi\) is taken as a phase. Here we will consider this factorization in full generality, without \(A\) and \(\varphi\) necessarily being an amplitude vector and a phase.
From this factorization of the field, \(\xi=fA\), we get that \(\nabla\xi=df\otimes A+f\;\nabla A\), which implies
\[\nabla\cdot\xi=i(A)df+f\;\nabla\cdot A=f^{\prime}(k\cdot A)+f\;\nabla\cdot A \tag{50}\]
by contraction, with \(df=f^{\prime}k\) being \(f^{\prime}=df/d\varphi\), \(k\equiv{\rm d}\varphi\), and \(k\cdot A\equiv g(k,A)\). Moreover,
\[\nabla\nabla\xi=\nabla df\otimes A+2\;df\otimes\nabla A+f\;\nabla\nabla A,\]
where \(\nabla df=f^{\prime\prime}k\otimes k+f^{\prime}\nabla k\) and \(f^{\prime\prime}=df^{\prime}/d\varphi\). By contraction, the above equation gives
\[\Delta\xi=(\Delta f)A+2\;i(df)\nabla A+f\;\Delta A \tag{51}\]
for \(\Delta f=g^{\mu\nu}\nabla_{\mu}\partial_{\nu}f=f^{\prime\prime}k^{2}+f^{ \prime}\nabla\cdot k\). Then, we can rewrite (51) as
\[\Delta\xi=f\Delta A+f^{\prime}\Big{(}2\nabla_{k}A+(\nabla\cdot k)A\Big{)}+f^{ \prime\prime}k^{2}\,A, \tag{52}\]
with \(k^{2}\equiv g(k,k)\). And, from expression (50),
\[d(\nabla\cdot\xi)=f^{\prime\prime}(k\cdot A)k+f^{\prime}\Big{(}d(k\cdot A)+( \nabla\cdot A)k\Big{)}+f\;d(\nabla\cdot A). \tag{53}\]
Now, replacing these expressions (52) and (53) in Eq. (25), we have the following results.
**Proposition 3**: _Given the field \(\xi\) factorized as \(\xi=fA\) with \(f\) a scalar function of \(\varphi\) and \(A\) a vector field, the field equation (25) is written as_
\[f\mathcal{X}+f^{\prime}\mathcal{Y}+f^{\prime\prime}\mathcal{Z}=0, \tag{54}\]
_with \(\mathcal{X},\mathcal{Y},\mathcal{Z}\) defined by the expressions:_
\[\mathcal{X} \equiv \alpha\Delta A+\beta i(A)Ric+\gamma d(\nabla\cdot A)+\rho A, \tag{55}\] \[\mathcal{Y} \equiv \alpha[2\nabla_{k}A+(\nabla\cdot k)A]+\gamma[{\rm d}(k\cdot A)+ (\nabla\cdot A)\,k],\] (56) \[\mathcal{Z} \equiv \alpha k^{2}\,A+\gamma(k\cdot A)\,k, \tag{57}\]
_where \(k=d\varphi\) and \(f^{\prime}\) (\(f^{\prime\prime}\)) is the first (second) derivative of \(f\) with respect to \(\varphi\)._
Moreover, the following statement has also been established.
**Proposition 4**: _For any given space-time metric \(g\), a sufficient condition for a field \(\xi=fA\) to satisfy Eq. (25) is that the vector \(A\) and the differential of \(\varphi\), \(k=d\varphi\), obey the equations:_
\[\mathcal{X}=0,\qquad\mathcal{Y}=0,\qquad\mathcal{Z}=0\,. \tag{58}\]
_with \(\mathcal{X},\mathcal{Y}\) and \(\mathcal{Z}\) defined by (55), (56) and (57), respectively._
Now, we analyse some interesting implications derived from Eqs. (58). From the condition \(\mathcal{Y}=0\) we have that contraction \(i(A)\mathcal{Y}=0\) holds, that is,
\[i(A)\mathcal{Y} = \alpha[2i(A)\nabla_{k}A+(\nabla\cdot k)A^{2}]+\gamma[i(A){\rm d}( k\cdot A)+(\nabla\cdot A)\,(k\cdot A)]=\] \[= \nabla\cdot(\alpha A^{2}k+\gamma(k\cdot A)A)=0.\]
So, as a consequence of \(\mathcal{Y}=0\) we can define the divergence free vector, \(\mathcal{J}\), given by
\[\mathcal{J}\equiv\alpha A^{2}k+\gamma(k\cdot A)A, \tag{60}\]
that is, \(\nabla\cdot\mathcal{J}=0\).
On the other hand, the contractions of (57) with \(A\) and \(k\) give:
\[i(A)\mathcal{Z} = \alpha k^{2}A^{2}+\gamma(k\cdot A)^{2}, \tag{61}\] \[i(k)\mathcal{Z} = (\alpha+\gamma)(k\cdot A)k^{2}, \tag{62}\]
which are equal to zero when considering \(\mathcal{Z}=0\). Analyzing these last equations (\(i(A)\mathcal{Z}=0\) and \(i(k)\mathcal{Z}=0\)) we can distinguish the following cases:
Case 1: \(\alpha\gamma\neq 0\), \(\alpha+\gamma\neq 0\).
In this case, setting expressions (61) and (62) equal to 0 we can conclude that \(k\cdot A=0\), and consequently \(k\) and \(A\) are orthogonal. Indeed, if \(k\cdot A\neq 0\), then \(i(k)\mathcal{Z}=0\) gives \(k^{2}=0\), so that \(i(A)\mathcal{Z}=\gamma(k\cdot A)^{2}=0\), which contradicts the assumption \(k\cdot A\neq 0\).
Consequently, from \(k\cdot A=0\) we get the following results:
1. From Eq. (61) we have \(i(A){\cal Z}=\alpha k^{2}A^{2}=0\) and then \(k\) and \(A\) are both null and collinear, or one of them is null and the other one is space-like.
2. From Eq. (60), \({\cal J}=\alpha A^{2}k\) is divergence free. Then, since \(k\) is a gradient (\(k={\rm d}\varphi\)), the hessian \(\nabla k\) is symmetric and if \(k\) is null, we have that \[\nabla_{k}k=i(k)\nabla k=i(k)^{t}\nabla k=\frac{1}{2}\nabla(k^{2})=0\] and the integral curves of \(k\) define an affine parameterized congruence of null geodesics. If, in addition, the amplitude vector \(A\) is space-like, then \({\cal J}\) is a conserved null four-current, \(\nabla\cdot(A^{2}k)=0\).
3. By using (56), the equation \({\cal Y}=0\) simplifies to \[\alpha[2\nabla_{k}A+(\nabla\cdot k)A]+\gamma(\nabla\cdot A)\,k=0\,.\] (63)
Case 2: \(\alpha\gamma\neq 0\), \(\alpha+\gamma=0\).
In this case, from (61) equal to 0 we have that:
\[i(A){\cal Z}=k^{2}A^{2}-(k\cdot A)^{2}=0. \tag{64}\]
Then, we get the following options: \(A\) and \(k\) are both null and collinear; or they generate a null 2-plane, that is, one of them is null and the other one is space-like or both vectors are space-like and non orthogonal, that is, \(k^{2}A^{2}=(k\cdot A)^{2}\neq 0\).
Notice that, in both analyzed cases, neither \(k\) nor \(A\) can be time-like. As a consequence, we have the following statement:
**Proposition 5**: _In the generic case, i.e. when \(\alpha\gamma\neq 0\), the eikonal-like decomposition, \(\xi=f\,A\), satisfying (58), is not possible for a time-like vector field \(\xi\)._
Case 3: \(\alpha\gamma=0\).
In this case, if \(\alpha=0\) and \(\gamma\neq 0\), from \({\cal Z}=0\) we get that \(k\) and \(A\) are orthogonal (\(k\cdot A=0\)). Moreover, the equation \({\cal Y}=0\) reduces to \(\nabla\cdot A=0\) and \({\cal X}=0\) implies that \(A\) is an eigenvector of the Ricci tensor.
If \(\gamma=0\) and \(\alpha\neq 0\) then, from \({\cal Z}=0\), the vector \(k\) is null. Moreover, the contraction \(i(A){\cal Y}=0\) implies that \({\cal J}=\alpha A^{2}k\), with \(\nabla\cdot{\cal J}=0\).
The implications derived from a sufficient condition for an eikonal-like factorization of the field \(\xi\), \(\xi=f\,A\), have been exhaustively examined. Notice that the Proca field equation (27) belongs to case 2 since \(\alpha=1\) and \(\gamma=-1\). However, the AKE equation (28), the Will one (40) and the Dale-Saez equation (38) may fall into any of the cases depending on the values of the coefficients.
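A simple numerical illustration of Proposition 5 can be given in flat space-time. The sketch below is our own construction: it assumes the signature \((-,+,+,+)\) for the Minkowski metric, exhibits a Case 1 geometry with \(k\) null and \(A\) space-like satisfying \(\mathcal{Z}=0\), and shows that a time-like amplitude vector is incompatible with Eq. (61):

```python
import numpy as np

# Minkowski metric; the signature (-,+,+,+) is an assumption of this sketch
g = np.diag([-1.0, 1.0, 1.0, 1.0])
dot = lambda u, v: u @ g @ v

alpha, gamma_ = 1.0, -0.5            # generic Case 1: alpha*gamma != 0, alpha + gamma != 0

# Case 1 geometry: k null, A space-like and orthogonal to k
k = np.array([1.0, 1.0, 0.0, 0.0])   # k.k = 0
A = np.array([0.0, 0.0, 1.0, 0.0])   # A.A = 1, k.A = 0

Z = alpha * dot(k, k) * A + gamma_ * dot(k, A) * k     # Eq. (57)
print(np.allclose(Z, 0.0))           # True: the condition Z = 0 is satisfied

# A time-like amplitude is excluded: any non-zero k orthogonal to a time-like A is
# space-like, so the term alpha*k^2*A^2 in Eq. (61) cannot vanish (Proposition 5).
A_t = np.array([1.0, 0.0, 0.0, 0.0])
k_s = np.array([0.0, 0.0, 1.0, 0.0])
print(alpha * dot(k_s, k_s) * dot(A_t, A_t))           # -1.0, i.e. i(A)Z != 0
```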
## 5 Vector field energy-momentum tensor.
### The Hilbert energy tensor: generalities.
The Hilbert energy tensor, \(T\), is obtained from the variation of the functional (13) with respect to the metric \(g\) (see, for instance, Refs. [17, 30]). As a starting point to obtain \(T\), let us consider the contravariant expression:
\[T^{\mu\nu}=\frac{2}{\sqrt{-{\rm g}}}\frac{\delta{\cal L}}{\delta g_{\mu\nu}}, \qquad{\cal L}\equiv L\sqrt{-{\rm g}}, \tag{65}\]
with \(L\) the considered quadratic Lagrangian density (14). Here, the function \({\cal L}\) depends at most on the metric field and its first derivatives, \({\cal L}(g_{\mu\nu},g_{\mu\nu,\rho})\), so that the variational derivative takes the expression:
\[\frac{\delta{\cal L}}{\delta g_{\mu\nu}}=\frac{\partial{\cal L}}{\partial g_{ \mu\nu}}-\frac{\partial}{\partial x^{\rho}}\frac{\partial{\cal L}}{\partial g _{\mu\nu,\rho}} \tag{66}\]
when the variations \(\delta g_{\mu\nu}\) and \(\delta g_{\mu\nu,\rho}\) vanish on the integration boundary.
Taking into account (2), we can develop (65) to obtain the expression
\[T^{\mu\nu}=2\frac{\delta L}{\delta g_{\mu\nu}}+Lg^{\mu\nu}-2\Gamma^{\alpha}_{ \alpha\rho}\frac{\partial L}{\partial g_{\mu\nu,\rho}} \tag{67}\]
where the first term is the variational derivative of the Lagrangian density \(L\), that is
\[\frac{\delta L}{\delta g_{\mu\nu}}=\frac{\partial L}{\partial g_{\mu\nu}}-\frac{ \partial}{\partial x^{\rho}}\frac{\partial L}{\partial g_{\mu\nu,\rho}}\,. \tag{68}\]
Different expressions of \(L\) will give their corresponding energy tensors \(T\). In our case, we will consider \(L\) given by (14) and analyze it term by term separately to obtain the corresponding energy tensor \(T\) by addition of each one of the studied parts.
### Hilbert energy tensor associated to the Lagrangian density of \(\xi\).
In this subsection, we will focus on the Lagrangian density (14) in order to derive the Hilbert energy tensor using Eq. (67). As we have mentioned before, we will analyze term by term the factors appearing in \(L\).
To begin with, let us consider a Lagrangian density defined by a twice differentiable function of \(\nabla\cdot\xi\), say \(L\equiv h(\nabla\cdot\xi)\). To obtain the Hilbert Energy tensor corresponding to this \(L\), we take into account that the divergence of a vector field \(\xi\) can be conveniently expressed as:
\[\nabla\cdot\xi = g^{\alpha\beta}\nabla_{\alpha}\xi_{\beta}=g^{\alpha\beta}( \partial_{\alpha}\xi_{\beta}-\Gamma^{\lambda}_{\alpha\beta}\xi_{\lambda}) \tag{69}\] \[= g^{\alpha\beta}\partial_{\alpha}\xi_{\beta}-g^{\alpha\beta}g^{ \lambda\sigma}\Gamma_{\alpha\beta,\sigma}\xi_{\lambda}\,,\]
and, by derivation, it leads to
\[2\,\frac{\partial(\nabla\cdot\xi)}{\partial g_{\mu\nu}} = g^{\alpha\beta}(\Gamma^{\mu}_{\alpha\beta}\xi^{\nu}+\Gamma^{ \nu}_{\alpha\beta}\xi^{\mu})-(\mathcal{L}_{\xi}g)^{\mu\nu}\,, \tag{70}\] \[2\,\frac{\partial(\nabla\cdot\xi)}{\partial g_{\mu\nu,\rho}} = g^{\mu\nu}\xi^{\rho}-g^{\rho\mu}\xi^{\nu}-g^{\rho\nu}\xi^{\mu}\,. \tag{71}\]
Then, using expression (69), its partial derivative and the definition of variational derivative given in (68), we have the following result:
**Lemma 1**: _The variational derivative of \(\nabla\cdot\xi\) with respect to metric variations is given by:_
\[\frac{\delta(\nabla\cdot\xi)}{\delta g_{\mu\nu}}=\frac{\partial(\nabla\cdot \xi)}{\partial g_{\mu\nu}}-\frac{\partial}{\partial x^{\rho}}\frac{\partial( \nabla\cdot\xi)}{\partial g_{\mu\nu,\rho}}=-\frac{1}{2}\big{[}g^{\mu\nu} \partial_{\rho}\xi^{\rho}+(g^{\mu\beta}\xi^{\nu}+g^{\nu\beta}\xi^{\mu})\Gamma ^{\rho}_{\beta\rho}\big{]}. \tag{72}\]
Now, let us obtain the Hilbert energy tensor associated with \(L\equiv h(\nabla\cdot\xi)\). Eq. (67) applied to this case gives:
\[T^{\mu\nu}=2h^{\prime}\frac{\delta(\nabla\cdot\xi)}{\delta g_{\mu\nu}}+hg^{\mu\nu}-2\big{[}h^{\prime\prime}\partial_{\rho}(\nabla\cdot\xi)+h^{\prime}\Gamma^{\alpha}_{\alpha\rho}\big{]}\frac{\partial(\nabla\cdot\xi)}{\partial g_{\mu\nu,\rho}} \tag{73}\]
where the prime denotes derivative with respect to \(\nabla\cdot\xi\).
Finally, by direct substitution of (70) and (71) in Eq. (73), we get:
**Lemma 2**: _Given a vector field \(\xi\), the Hilbert tensor associated with the Lagrangian density \(L=h(\nabla\cdot\xi)\) is_
\[T=\big{[}h-(\nabla\cdot\xi)h^{\prime}-h^{\prime\prime}i(\xi)\mathrm{d}( \nabla\cdot\xi)\big{]}\,g+h^{\prime\prime}\xi\tilde{\otimes}\mathrm{d}( \nabla\cdot\xi), \tag{74}\]
_where \(h^{\prime}\) and \(h^{\prime\prime}\) stand for the first and second derivative of \(h\) with respect \(\nabla\cdot\xi\), respectively._
By applying this result to the interesting particular cases \(L=\nabla\cdot\xi\) and \(L=(\nabla\cdot\xi)^{2}\), we can conclude the following statements.
**Proposition 6**: _If \(L=\nabla\cdot\xi\), then \(T=0\)._
**Proposition 7**: _If \(L=(\nabla\cdot\xi)^{2}\), then_
\[T=-[(\nabla\cdot\xi)^{2}+2\,i(\xi)\mathrm{d}(\nabla\cdot\xi)]\,g+2\,\xi\tilde {\otimes}\mathrm{d}(\nabla\cdot\xi). \tag{75}\]
Notice that Proposition 6 agrees with the fact that terms proportional to \(2\nabla\cdot\xi=\operatorname{tr}S\) contribute neither to the energy tensor nor to the field equations.
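Since Eq. (74) is linear in the metric \(g\) and in the tensor \(\xi\tilde{\otimes}\mathrm{d}(\nabla\cdot\xi)\), Propositions 6 and 7 can be recovered by formal substitution. The following sketch, written in Python with SymPy and using symbols of our own choosing to stand for the scalar and tensor building blocks, performs this check:

```python
import sympy as sp

# x stands for div(xi), y for i(xi) d(div xi); g and B are formal symbols for the
# metric and for xi (tilde-otimes) d(div xi), which is enough because Eq. (74) is
# linear in these building blocks.
x, y, g, B = sp.symbols('x y g B')

def hilbert_T(h):
    hp, hpp = sp.diff(h, x), sp.diff(h, x, 2)
    return (h - x * hp - hpp * y) * g + hpp * B      # Eq. (74)

print(sp.simplify(hilbert_T(x)))                               # 0 (Proposition 6)
print(sp.expand(hilbert_T(x**2) + (x**2 + 2*y) * g - 2 * B))   # 0 (Proposition 7)
```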
Next, let us consider the case \(L=\operatorname{tr}F^{2}\) with \(F=d\xi\). From (10), \(\operatorname{tr}F^{2}=-F_{\mu\nu}F^{\mu\nu}=-g^{\mu\alpha}g^{\nu\beta}F_{\mu\nu} F_{\alpha\beta}\) with
\[F_{\alpha\beta}=\partial_{\alpha}\xi_{\beta}-\partial_{\beta}\xi_{\alpha}. \tag{76}\]
Since no metric derivative occurs in \(L\), we have:
\[\frac{\partial\operatorname{tr}F^{2}}{\partial g_{\mu\nu,\rho}}=0, \tag{77}\]
and, from (68), its variational derivative with respect to the metric reduces to
\[\frac{\delta\operatorname{tr}F^{2}}{\delta g_{\mu\nu}}=\frac{\partial \operatorname{tr}F^{2}}{\partial g_{\mu\nu}}=-2F^{\mu}_{\ \alpha}F^{\alpha\nu}. \tag{78}\]
In this case, the calculation of the Hilbert energy tensor from Eq. (67) gives
\[T^{\mu\nu}=-4(F^{2})^{\mu\nu}+\left(\operatorname{tr}F^{2}\right)g^{\mu\nu}, \tag{79}\]
and we can conclude the following statement:
**Proposition 8**: _If \(L=\frac{1}{4}\operatorname{tr}F^{2}\), then_
\[T=-F^{2}+\frac{1}{4}(\operatorname{tr}F^{2})\,g, \tag{80}\]
_where the 1/4 factor has been included for convenience._
Note that, in this last statement, \(F\) can represent the electromagnetic field in a curved space-time, recovering the known result in flat space-time and extending it to an arbitrary space-time without considering the equivalence principle.
Now, we consider the Lagrangian density \(L=\operatorname{tr}S^{2}\) with \(S=\mathcal{L}_{\xi}\,g\). In components, this Lagrangian density has the expression (9) with
\[S_{\alpha\beta}=\partial_{\alpha}\xi_{\beta}+\partial_{\beta}\xi_{\alpha}-g^{ \rho\sigma}(g_{\alpha\sigma,\beta}+g_{\beta\sigma,\alpha}-g_{\alpha\beta, \sigma})\xi_{\rho}. \tag{81}\]
In this case calculations are a bit more involved than in the previous case since, from Eqs. (67) and (68), the Hilbert tensor is written as
\[T^{\mu\nu}=(\operatorname{tr}S^{2})g^{\mu\nu}+2\,\frac{\partial\operatorname{ tr}S^{2}}{\partial g_{\mu\nu}}-2\,\partial_{\rho}\frac{\partial\operatorname{ tr}S^{2}}{\partial g_{\mu\nu,\rho}}-2\,\Gamma^{\alpha}_{\alpha\rho}\frac{\partial \operatorname{tr}S^{2}}{\partial g_{\mu\nu,\rho}}\,. \tag{82}\]
and the following expressions
\[\frac{\partial\operatorname{tr}S^{2}}{\partial g_{\mu\nu}} = -2(S^{2})^{\mu\nu}+2S^{\alpha\beta}(\xi^{\mu}\Gamma^{\nu}_{ \alpha\beta}+\xi^{\nu}\Gamma^{\mu}_{\alpha\beta}), \tag{83}\] \[\frac{\partial\operatorname{tr}S^{2}}{\partial g_{\mu\nu,\rho}} = -2(\xi^{\mu}S^{\nu\rho}+\xi^{\nu}S^{\mu\rho}-\xi^{\rho}S^{\mu\nu}). \tag{84}\]
have to be taken into account. After substitution of (83) and (84) in (82), and after some rearrangements of terms and simplifications, we obtain the following expression for the energy tensor:
\[\begin{array}{rcl}T^{\mu\nu}&=&\operatorname{tr}S^{2}g^{\mu\nu}-4(S^{2})^{ \mu\nu}-4(\partial_{\rho}\xi^{\rho}+\Gamma^{\alpha}_{\alpha\rho})\,\xi^{\rho} S^{\mu\nu}+4\,\xi^{\mu}(\partial_{\rho}S^{\rho\nu}+\Gamma^{\alpha}_{\alpha\rho}S^{ \rho\nu}+\Gamma^{\nu}_{\alpha\beta}S^{\alpha\beta})\\ &&+4\,\xi^{\nu}(\partial_{\rho}S^{\rho\mu}+\Gamma^{\alpha}_{\alpha\rho}S^{\rho \mu}+\Gamma^{\mu}_{\alpha\beta}S^{\alpha\beta})-4\,(\xi^{\rho}\partial_{\rho }S^{\mu\nu}-S^{\mu\rho}\partial_{\rho}\xi^{\nu}-S^{\rho\nu}\partial_{\rho}\xi^ {\mu})\,.\end{array} \tag{85}\]
This expression involves the divergence of \(\xi\) at the third term, the divergence of \(S\) at the fourth and fifth terms and the Lie derivative of the contravariant tensor \(S^{\mu\nu}\) along \(\xi\) at the last term. Then, we can conclude the following statement:
**Proposition 9**: _If \(L=\frac{1}{4}\operatorname{tr}S^{2}\), then_
\[T=-S^{2}+\frac{1}{4}(\operatorname{tr}S^{2})\,g-(\nabla\cdot\xi)S+\xi\tilde{ \otimes}(\nabla\cdot S)-\mathcal{L}_{\xi}S^{*}, \tag{86}\]
_where \(S^{*}\) represents the contravariant tensor metrically equivalent to \(S=\mathcal{L}_{\xi}\,g\). That is, \(S^{*}=(\mathcal{L}_{\xi}g)^{*}=-\mathcal{L}_{\xi}g^{*}\), where \(g^{*}\) is the contravariant metric tensor._
As usual, where expressions in components are used, the star in \(S^{*}\) and \(g^{*}\) will be tacitly understood, that is, \((S^{*})^{\mu\nu}=g^{\mu\alpha}g^{\nu\beta}S_{\alpha\beta}\equiv S^{\mu\nu}\). Thus, the expression for the last term in (86), \({\cal L}_{\xi}S^{*}\), is written as
\[({\cal L}_{\xi}S^{*})^{\mu\nu}=\xi^{\rho}\partial_{\rho}S^{\mu\nu}-S^{\mu\rho}\partial_{\rho}\xi^{\nu}-S^{\rho\nu}\partial_{\rho}\xi^{\mu}=\xi^{\rho}\nabla_{\rho}S^{\mu\nu}-S^{\mu\rho}\nabla_{\rho}\xi^{\nu}-S^{\rho\nu}\nabla_{\rho}\xi^{\mu} \tag{87}\]
where the replacement of partial derivatives by covariant ones follows from the symmetry property of the Christoffel symbols. Again, the 1/4 factor in \(L\) has been included for convenience.
Incidentally, notice that the sum of the first and last terms in Eq. (86) might be conveniently rewritten using the following identity:
\[S^{2}+{\cal L}_{\xi}S^{*}=\nabla_{\xi}S+\nabla\xi\times{}^{\rm t}\nabla\xi-{}^ {\rm t}\nabla\xi\times\nabla\xi\,, \tag{88}\]
where each summand is a symmetric tensor. Once this relation between symmetric contravariant tensors has been established, the expression remains valid for the corresponding covariant, or mixed, metrically equivalent tensors, due to its tensorial character. A similar situation occurs in Eqs. (75) and (80) of the Hilbert tensor in the corresponding statements.
## 6 Summary and discussion
In this work we have examined the dynamics of a relativistic vector field \(\xi\) in a curved space-time. We have derived, from variational principles, the field equations and the energy content in any metric \(g\) coupled to the field, starting from a quadratic Lagrangian field density and keeping at most first-order field derivatives. The field equation (25) reduces to the one considered by other authors when the general expression is particularized by a suitable choice of the parameters. The associated energy tensor has been obtained by analyzing the Lagrangian terms separately and in a general way, which can be useful to characterize it algebraically. Moreover, this presentation will also allow a \(3+1\) decomposition of both the field equation and the energy tensor, to interpret them from an observer's point of view.
Furthermore, the eikonal-like ansatz has also been studied, concluding that a separability hypothesis does not apply when \(\alpha\gamma\neq 0\) and \(\xi\) is time-like.
It is worthwhile to remark that the addition of a term proportional to \({\rm tr}\,F^{3}\) is irrelevant since
\[\frac{\partial\,{\rm tr}\,F^{3}}{\partial\nabla_{\alpha}\xi_{\beta}}=0.\]
However, a term proportional to \({\rm tr}\,S^{3}\) leads to second order derivatives of the field in the field equations since
\[\frac{\partial\,{\rm tr}\,S^{3}}{\partial\nabla_{\alpha}\xi_{\beta}}=6(S^{2}) ^{\alpha\beta},\]
and \((\nabla\cdot S^{2})_{\mu}=(\nabla_{\alpha}S^{\alpha\beta})S_{\beta\mu}+S^{ \alpha\beta}(\nabla_{\alpha}S_{\beta\mu}),\) which can be developed taking into account the definition (3) of \(S\).
This study has been developed using the Levi-Civita connexion. However, it admits an extension by considering a general linear connexion (with torsion and non-metricity tensor contributions), following the approach presented in [9] but using the methodology presented here. Nevertheless, this task is beyond the scope of the present work.
On the other hand, in Electrodynamics, the Lorenz gauge condition on the vector potential \(\xi\), \(\nabla\cdot\xi=0\), not only simplifies the study of Maxwell equations in Minkowski space-time, but also in a curved background with metric \(g\). Nevertheless, in the latter situation, the vector potential \(\xi\) couples to the Ricci tensor of the metric \(g\), \(Ric(g)\), via an algebraic term of the form \(i(\xi)Ric(g)\). This term could have a relevant contribution for non-vacuum geometries. By contrast, if \(\xi\) represents the Proca vector potential, the equation \(\nabla\cdot\xi=0\) follows from the Proca field equation, and plays the role of a constraint equation (not of a gauge condition). Now, a linear term proportional to the potential \(\xi\) itself has to be considered, in addition to the four-dimensional Laplace-Beltrami curved contribution, say \(\Delta\xi\), and the gravitational coupling term \(i(\xi)Ric(g)\).
The natural generalization from Electrodynamics is to assume that \(L\) is a scalar density made from quadratic terms in the field and its derivatives of first order at most. In this case, the vanishing of the divergence of \(\xi\) cannot be taken for granted, so that its differential, \(d(\nabla\cdot\xi)\), enters the \(\xi\) field equations. The general stress-energy tensor \(T\) of the \(\xi\) field has been obtained by taking the variational derivative of \(L\) with respect to the space-time metric \(g\). This is a topic that should be complemented with the algebraic classification of \(T\) and its physical interpretation.
The above considerations could be taken into account to go further with the study presented here. At least three scenarios could be studied: (i) \(\xi\) is a dynamic test field in a particular (in general, curved) known space-time geometry; (ii) \(\xi\) is a self-gravitating field dynamically coupled to the Einstein field equations; and (iii) \(\xi\) is, jointly with the space-time geometry, a dynamical vector field whose field equation couples to the extended Einstein equations for a vector-tensor gravity theory, and whose energy tensor has to be added to the Einstein tensor and provided with a (purely) geometric meaning.
Nonetheless, whether two solutions of the field equations for the same \(\xi\) in (ii) and (iii) are equivalent (that is, a simple matter of interpretation) or whether they are necessarily not physically equivalent, is an issue that requires clarification in depth. Meanwhile, applications of vector-tensor theories of gravitation (such as the search for exact solutions or the description of gravitational scenarios in the presence of a vector field) are an interesting avenue to make progress on this issue.
## Acknowledgments
This work has been supported by the Spanish Ministerio de Ciencia, Innovacion y Universidades, Projects:
PID2019-109753GB-C21/AEI/10.13039/501100011033 and
PID2019-109753GB-C22/AEI/10.13039/501100011033.
|
2301.12797
|
Investigation of Ultrafast Demagnetization and Gilbert Damping and their
Correlation in Different Ferromagnetic Thin Films Grown Under Identical
Conditions
|
Following the demonstration of laser-induced ultrafast demagnetization in
ferromagnetic nickel, several theoretical and phenomenological propositions
have sought to uncover its underlying physics. In this work we revisit the
three temperature model (3TM) and the microscopic three temperature model
(M3TM) to perform a comparative analysis of ultrafast demagnetization in
20-nm-thick cobalt, nickel and permalloy thin films measured using an
all-optical pump-probe technique. In addition to the ultrafast dynamics at the
femtosecond timescales, the nanosecond magnetization precession and damping are
recorded at various pump excitation fluences revealing a fluence-dependent
enhancement in both the demagnetization times and the damping factors. We
confirm that the Curie temperature to magnetic moment ratio of a given system
acts as a figure of merit for the demagnetization time, while the
demagnetization times and damping factors show an apparent sensitivity to the
density of states at the Fermi level for a given system. Further, from
numerical simulations of the ultrafast demagnetization based on both the 3TM
and the M3TM, we extract the reservoir coupling parameters that best reproduce
the experimental data and estimate the value of the spin flip scattering
probability for each system. We discuss how the fluence-dependence of
inter-reservoir coupling parameters so extracted may reflect a role played by
nonthermal electrons in the magnetization dynamics at low laser fluences.
|
Suchetana Mukhopadhyay, Sudip Majumder, Surya Narayan Panda, Anjan Barman
|
2023-01-30T11:46:24Z
|
http://arxiv.org/abs/2301.12797v1
|
**Investigation of Ultrafast Demagnetization and Gilbert Damping and their Correlation in Different Ferromagnetic Thin Films Grown Under Identical Conditions**
###### Abstract
Following the demonstration of laser-induced ultrafast demagnetization in ferromagnetic nickel, several theoretical and phenomenological propositions have sought to uncover its underlying physics. In this work we revisit the three temperature model (3TM) and the microscopic three temperature model (M3TM) to perform a comparative analysis of ultrafast demagnetization in 20-nm-thick cobalt, nickel and permalloy thin films measured using an all-optical pump-probe technique. In addition to the ultrafast dynamics at the femtosecond timescales, the nanosecond magnetization precession and damping are recorded at various pump excitation fluences revealing a fluence-dependent enhancement in both the demagnetization times and the damping factors. We confirm that the Curie temperature to magnetic moment ratio of a given system acts as a figure of merit for the demagnetization time, while the demagnetization times and damping factors show an apparent sensitivity to the density of states at the Fermi level for a given system. Further, from numerical simulations of the ultrafast demagnetization based on both the 3TM and the M3TM, we extract the reservoir coupling parameters that best reproduce the experimental data and estimate the value of the spin flip scattering probability for each system. We discuss how the fluence-dependence of inter-reservoir coupling parameters so extracted may reflect a role played by nonthermal electrons in the magnetization dynamics at low laser fluences.
## 1 Introduction
Optical excitation of a magnetic material with a short and intense laser pulse sets in motion a chain of microscopic events that result in a macroscopic, measurable quenching of the net magnetization. Since the processing speed of magnetic storage devices is limited by the maximum speeds at which the magnetization can be manipulated, the possibility of controlling magnetization at sub-picosecond timescales by employing ultrashort laser pulses holds tremendous potential for applications in spin-based memory and storage devices with ultrafast processing speeds including laser-induced opto-magnetism and all-optical switching [1, 2, 3, 4] as well as THz spintronic devices [5, 6, 7]. Meanwhile, the relaxation timescales of the laser-induced magnetization precession in ferromagnetic thin films associated with the magnetic damping set fundamental limits on magnetization switching and data transfer rates in spintronic devices. Picosecond laser-induced magnetization quenching in ferromagnetic gadolinium was reported by Vaterlaus et al. in 1991, who also reported for the first time a characteristic timescale of 100\(\pm\)80 ps [8] associated with the magnetization loss. In this early work, the timescale was identified with the timescale of the spin-lattice interactions mediating the disruption of magnetic ordering due to laser heating of the lattice, though later works contradicted this interpretation [9, 10]. In fact, by the mid-1990s, it had begun to be recognized that finer time resolution was necessary to probe this phenomenon to uncover a fuller picture of the associated relaxation processes occurring at sub-picosecond timescales. In 1996, Beaurepaire et al. demonstrated in a seminal work that faster demagnetization occurring at sub-picosecond timescales could be triggered by femtosecond laser pulses in a nickel thin film [11]. The ultrafast demagnetization phenomenon went on to be demonstrated in a wide array of ferromagnetic systems [12, 13, 14, 15, 16, 17, 18, 19], triggering a flurry of research that continues till date. A few years after the pioneering experiments, Koopmans et al. characterized for the first time the full magneto-optical response to femtosecond laser pulses in a ferromagnet by time-resolved measurements of Kerr ellipticity and rotation [13]. However, the observation of nonmagnetic contributions to the Kerr rotation signal naturally led some to question whether the ultrafast quenching in response to the laser excitation was indeed a magnetic phenomenon [13, 14]. In 2003, Rhie et al. used time-resolved photoemission
spectroscopy to probe the collapse of the 3d exchange splitting in nickel as an unambiguous signature of a photoinduced demagnetization occurring over 300\(\pm\)70 fs [15]. This was soon followed by the first reports of an estimate of the characteristic timescale of femtosecond laser-induced demagnetization derived from quantum-mechanical principles [16].
One of the major reasons behind this sustained interest is the need to achieve a complete understanding of the microscopic mechanisms that underlie the ultrafast demagnetization phenomenon that have so far remained elusive. Much inquiry has been focused on how the laser-induced loss of magnetic order is compensated for via the transfer of angular momentum of the spin system at the associated timescales. Over the years, several mechanisms have been proposed for explaining the angular momentum conservation associated with the ultrafast demagnetization, which may be broadly categorized in two distinct classes. One of these argues in favor of a dominant contribution to the demagnetization arising from nonlocal transport processes driven by the laser pulse, such as superdiffusive transport of spin-polarized hot electrons [20; 21] or heat currents [22]. The other narrative relies on local spin-flip scattering processes occurring by the collision of excited electrons with impurities, phonons, and magnons [17; 23; 24]. In this category, electron-lattice and electron-spin relaxation mechanisms are accepted as the major demagnetization channels in the picoseconds following the laser excitation [25], with several works attributing a pivotal role to the material-specific spin-orbit interaction [23; 25; 26; 27]. On the other hand, it is possible to model the demagnetization by adopting a thermal description, considering the laser excitation as a "heat" source that excites electrons close to the Fermi level instantaneously to very high energies, whose eventual relaxation processes drive the magnetization loss. The laser excitation energy controlled by the applied laser pump fluence has a direct influence on the maximum electron temperatures reached and thus plays a pivotal role in the ultrafast magnetization dynamics that follow as a result.
In this work, employing an all-optical time-resolved magneto-optical Kerr effect (TR-MOKE) technique, we investigate the ultrafast spin dynamics along with the nanosecond magnetization precession and damping at different laser pump fluences in three ferromagnetic thin films: 20-nm-thick cobalt, nickel and permalloy grown under uniform deposition conditions. Cobalt and nickel are elementary 3d ferromagnetic transition metals well studied in the literature in both their elementary forms and as constituting elements in multilayered structures where their strong spin-orbit coupling mediates interfacial effects such as spin pumping. Permalloy, an alloy comprised of approximately 80% nickel and 20% iron, is a prototype material for spintronic applications due to several desirable qualities such as low coercivity, large permeability and in particular, low Gilbert damping, which aids in minimizing the maximum power consumption of devices and allows spin wave propagation on length scales of the order of device size. The simultaneous investigation of the fluence-dependent modulation of ultrafast demagnetization and damping in all the above ferromagnetic thin films is motivated by the primary objectives of exploring the dominant microscopic processes underlying the demagnetization in these systems, correlating the observed magnetization dynamics to the material properties of each system, and in the process exploring their tunability with laser pump fluence. We set out to achieve this in two ways: (a) by modelling the ultrafast demagnetization at various pump fluences using two theoretical models sharing a common link and (b) by correlating the laser-induced changes in ultrafast demagnetization to the changes in the Gilbert damping factor which characterizes the magnetization precession. TR-MOKE is a well established local and non-invasive all-optical measurement technique tunable to an extremely fine time resolution limited only by the laser pulse width and is thus well suited to our purpose.
For the detailed analysis of ultrafast demagnetization in our samples, we extract the values of demagnetization time and fast relaxation time for each sample using a phenomenological fitting function and record its systematic variation with the pump fluence. We subsequently model the demagnetization data using two well known theoretical models: the phenomenological three-temperature model (3TM) and the microscopic three temperature model (M3TM), and thereby extract values for the microscopic parameters relevant for the demagnetization process and calculate the temporal evolution of the simulated temperatures of the electron, spin and lattice systems within the first few picoseconds of the laser excitation. Both these models assume a thermal picture to explain the initial electron temperature rise and subsequent relaxation but differ significantly in their approach with regards to the treatment of the magnetization. A systematic study implementing both models to analyze TR-MOKE data recorded under uniform experimental conditions on identically prepared samples is absent in the literature and will prove instructive. Further, a thorough investigation and characterization of the ultrafast demagnetization, magnetic damping and their intercorrelation in three different systems deposited under identical deposition conditions has not been carried out before. For the purposes of a comparative analysis, it is vital to study samples grown under the same deposition conditions. The conductivity of the substrate can also directly influence demagnetization time by promoting or hindering ultrafast spin transport processes [20; 28] while the deposition technique determines the structural properties of the samples that may indirectly affect the
demagnetization time. Moreover, since the experimentally obtained demagnetization timescale has been shown to be quite sensitive to extrinsic factors such as the laser pulse duration and the spectral bandwidth [29], it is useful to measure the demagnetization times for various thin films from experiments performed in the same experimental setup under near-identical experimental conditions.
## 2 Materials and Methods
### Sample fabrication
20 nm-thick cobalt, nickel and permalloy thin films were deposited by electron beam evaporation under an average base pressure of 1\(\times\)10\({}^{-7}\) Torr and at a very low deposition rate of 0.2 A/s chosen to achieve uniform deposition. Each film was deposited on an insulating 8 mm \(\times\) 8 mm silicon substrate coated with 285-nm-thick SiO\({}_{2}\). Subsequently, the films were capped _in-situ_ with a 5-nm-thick protective gold layer (base pressure \(\sim\)1 \(\times\) 10\({}^{-7}\) Torr, deposition rate 0.1 A/s) to prevent surface oxidation of the ferromagnetic layer and protect it from possible degradation due to high power laser exposure during the optical pump-probe measurements [30; 31]. The thickness of the capping layer was kept more than three times smaller than the optical penetration depth of 400 nm light in gold. After the deposition of the capping layer, the surface topography of the samples was investigated by atomic force microscopy (AFM) using a Tap190Al-G tapping mode AFM probe as shown in Figure 1 (_a_). The average surface roughness \(R_{a}\) of the cobalt, nickel and permalloy films were obtained as 0.91 nm, 0.36 nm and 0.41 nm respectively. Thus, reasonably low surface roughness in the sub-nm range was obtained which is comparable for all the samples. In addition, the static magneto-optical Kerr effect was used to study the magnetic hysteresis of the deposited samples to confirm their ferromagnetic nature (Figure 1(_b_)).
### All-optical measurement of ultrafast spin dynamics
Measurement of laser-induced magnetization dynamics is carried out using a TR-MOKE technique in a two-color optical pump-probe arrangement using a 400 nm pump beam and 800 nm probe beam having a pump-probe cross-correlation width of \(\sim\)100 fs [30] shown schematically in Figure 1(_c_). A typical TR-MOKE trace is shown in Figure 1(_d_), comprising ultrafast demagnetization (Regime I), fast remagnetization (Regime II) and damped magnetization precession (Regime III). This technique enables the direct observation of the spin dynamics in the femto- and picosecond time domain. Femtosecond pulses are generated by an amplified laser system (Libra, Coherent Inc.) employing a chirped-pulse regenerative amplifier and a Ti:sapphire laser oscillator (Coherent Inc.) pumped by a neodymium-doped yttrium lithium fluoride (Nd-YLF) laser. The second harmonic of the amplified laser output (wavelength = 400 nm, repetition rate = 1 kHz, pulse width \(>\)40 fs), generated through a lithium triborate (LBO) nonlinear crystal, is used for laser excitation of the ferromagnetic thin films. The time-delayed fundamental beam (wavelength = 800 nm, repetition rate = 1 kHz, pulse width \(\sim\)40 fs) is used to probe the ensuing magnetization dynamics. In our setup, different wavelengths are employed for the pump and the probe pulse to eliminate the possibility of state blocking effects arising from the use of identical wavelengths for pumping and probing [32]. A computer-controlled variable delay generator offers precise control of the delay time between pump and probe. Before commencing measurements on any sample, the zero delay was carefully estimated by maximizing the transient reflectivity signal of a bare silicon substrate placed adjacent to the sample on the same sample holder. TR-MOKE experiments are performed with a non-collinear pump-probe geometry. The pump beam, focused to a spot size of \(\sim\)300 \(\mu\)m, is incident obliquely on the sample while the probe beam, with a spot size of \(\sim\)100 \(\mu\)m, is incident normal to the sample surface and aligned to the center of the pump spot. The pump-probe spatial overlap on the sample was carefully maintained. The choice of a relatively smaller spot size of the probe beam as compared to the pump beam facilitates optical alignment and ensures that the probe beam detects the local magnetization changes from a part of the sample uniformly irradiated by the pump. Before reflection on the samples, the probe beam is polarized orthogonally to the linearly polarized pump beam. After reflection, the Kerr-rotated probe beam is split and one part is fed directly into a Si-photodiode to measure the time-resolved reflectivity signal. The other part is fed into an identical photodiode after passing through a Glan-Thompson polarizer adjusted to a small angle from extinction to minimize optical artifacts in the Kerr rotation signal. In this way, simultaneous measurement of the time-resolved total reflectivity and Kerr rotation signals is possible. An optical chopper operating at 373 Hz placed in the path of the pump beam provides a reference signal for the digital signal processing lock-in amplifiers (Stanford Research Systems, SR830) which record the modulated signal in a phase sensitive manner. All experiments were carried out under ambient conditions of temperature and pressure.
## 3 Results and Discussion
### Theoretical models for ultrafast demagnetization
The phenomenon of optically induced ultrafast demagnetization starts with the irradiation of the magnetic sample with a brief and intense optical laser pulse, exciting electrons momentarily a few electron volts above the Fermi level. Though the exact sequence of events following the initial excitation is difficult to trace due to the highly nonequilibrium conditions created by it, a qualitative overview of the complete demagnetization process is fairly well established. The laser excitation generates a nonthermal pool of excited electrons which thermalize rapidly within several femtoseconds via electron-electron interactions. Spin-dependent scattering events taking place during this transient regime lead to a sharp drop in magnetization observable around a few hundred femtoseconds in the experimental Kerr rotation signal. Subsequently, the thermalized electrons may release their excess energy via a variety of relaxation channels, such as by excitation of phonons or magnons. This results in a partial recovery of the magnetization beyond which heat dissipation into the environment promotes further recovery on a longer timescale. The 3TM posits that the thermodynamics of the demagnetization phenomenon can be described simply by considering energy exchange between three thermal reservoirs [33], each of which is assigned a temperature: the electrons at temperature \(T_{e}\), the lattice at temperature \(T_{l}\) and the electronic spin reservoir at temperature \(T_{s}\). Since the reservoirs are in thermal contact and the overall process is adiabatic, equilibration of the excited electrons with the spin and lattice reservoirs via energy transfer may be described by coupled rate equations in the following manner:
\[C_{e}(T_{e})\frac{dT_{e}}{dt}= -G_{el}(T_{e}-T_{l})-G_{es}(T_{e}-T_{s})+\nabla_{z}(\kappa\nabla_{z}T_{e})+P(t)\] \[C_{l}(T_{l})\frac{dT_{l}}{dt}= -G_{el}(T_{l}-T_{e})-G_{sl}(T_{l}-T_{s}) \tag{1}\] \[C_{s}(T_{s})\frac{dT_{s}}{dt}= -G_{es}(T_{s}-T_{e})-G_{sl}(T_{s}-T_{l})\]
where \(C_{e}\), \(C_{l}\) and \(C_{s}\) denote the specific heats of the electron, lattice and spin reservoirs respectively, while \(G_{el}\), \(G_{es}\), and \(G_{sl}\) denote the inter-reservoir coupling parameters. The term \(P(t)\) describes the action of the laser pulse as a source term driving the excitation of the electron reservoir to high temperatures. The thermal diffusion term \(\nabla_{z}(\kappa\nabla_{z}T_{e})\) describes heat dissipation occurring via thermal conduction along the sample thickness. Under this description, the observed demagnetization is attributed to a rise in the spin temperature \(T_{s}\) occurring shortly after the electron temperature rise. The coupling strengths between the electron-spin and electron-lattice subsystems qualitatively determine the efficiency of energy transfer between them and hence influence the timescales associated with the demagnetization and fast relaxation. However, the 3TM is purely phenomenological and does not explicitly consider any microscopic mechanisms underlying the phenomenon it describes. On the other hand, the M3TM proposed by Koopmans et al. [23] provides a "spin-projected" perspective [27] to explain the ultrafast transfer of angular momentum highlighting the role of Elliot-Yafet type ultrafast spin-flip scattering in the demagnetization process. The initial excitation by the laser pulse disturbs the electronic subsystem from equilibrium which leads to an imbalance in spin-up and spin-down scattering rates, resulting in the observed loss of magnetic order. The process is mediated by spin-orbit interactions leading to the formation of hot spots in the band structure where spin-up and spin-down channels are intermixed. An electron scattered into these hot spots via a phonon- or impurity-mediated scattering will flip its spin with a finite probability. The individual scattering events are characterized by a parameter \(a_{sf}\) across the sample, identified with the probability of spin-flip due to electron-phonon scattering. The magnitude of this parameter will directly depend on the extent of spin-orbit coupling and hence is expected to be comparable in materials with similar spin-orbit coupling strengths. The M3TM retains the coupled rate equations for the electron and lattice temperatures, similar to the thermal description provided by the 3TM. However, the fundamental difference from the 3TM is that in the framework of the M3TM, the spin bath is formed by a collection of two-level systems obeying Boltzmann statistics. Instead of assigning a temperature to the spin bath, the normalized magnetization is directly calculated from the associated exchange splitting. The rate of change of the magnetization, derived analytically considering an Elliot-Yafet scattering-driven demagnetization, is parametrized by \(a_{sf}\) and coupled to the electron and the lattice subsystem temperatures. The assignment of a characteristic temperature to the spin subsystem is replaced in the M3TM by an evolution equation for the magnetization:
\[\frac{dm}{dt}=Rm\frac{T_{l}}{T_{c}}\left(1-m\coth\left(\frac{mT_{c}}{T_{e}}\right)\right) \tag{2}\]
The quantity \(R\) is a material-specific factor which influences the demagnetization rate and is proportional to \(a_{sf}T_{c}^{2}/\mu_{at}\) where \(T_{c}\) is the material Curie temperature and \(\mu_{at}\) is the atomic magnetic moment. Though the two models differ in their approaches, one can immediately discern certain similarities in their domains of validity. Both models are appropriate only when the nonlocal mechanisms driving ultrafast demagnetization such as superdiffusive spin transport can be neglected. We note here that it has been reported that spin transport is not a major contributor to the ultrafast demagnetization in transition metals [34]. We nevertheless use an insulating SiO\({}_{2}\)-coated Si substrate for our samples to minimize spin transport effects such that analysis with the local models described above should suffice in our case. Any additional contributions arising from the gold capping layer would be uniform across all samples investigated and therefore unlikely to impact the main results of our comparative study. Moreover, since the thickness of the capping layer is much smaller than the penetration depth of both 400 nm and 800 nm light in gold [35], the pump excitation fully penetrates down to the magnetic layer ensuring that the effect of direct laser excitation of the ferromagnet is probed in our case. Thus, we set up numerical calculations based on the models described above in order to extract microscopic information from the experimentally obtained demagnetization traces.
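As an illustration of how Eqs. (1) and (2) are integrated in practice, the following sketch solves the 3TM rate equations for a uniformly heated film with SciPy. The coupling constants and the peak power density are placeholders chosen by us for illustration rather than the fitted values reported below, and in an M3TM run the spin equation would simply be replaced by Eq. (2) for \(m(t)\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Material constants for Ni from Table 1; couplings and peak power are placeholders.
gamma_e = 1065.0            # electronic specific heat coefficient (J m^-3 K^-2)
C_l     = 3.5e6             # lattice specific heat (J m^-3 K^-1), held constant here
C_s     = 3.07e5            # spin specific heat (J m^-3 K^-1)
G_el, G_es = 8.0e17, 6.0e17 # inter-reservoir couplings (W m^-3 K^-1), illustrative
tau_p   = 100e-15           # pump pulse width (s)
P0      = 2.0e21            # peak absorbed power density (W m^-3), illustrative

def rhs(t, y):
    Te, Tl, Ts = y
    pump = P0 * np.exp(-(t / tau_p) ** 2)            # Gaussian source term P(t)
    dTe = (-G_el * (Te - Tl) - G_es * (Te - Ts) + pump) / (gamma_e * Te)  # C_e = gamma*T_e
    dTl = -G_el * (Tl - Te) / C_l                    # G_sl neglected, as in the text
    dTs = -G_es * (Ts - Te) / C_s
    return [dTe, dTl, dTs]

sol = solve_ivp(rhs, (-0.5e-12, 4.0e-12), [300.0, 300.0, 300.0],
                max_step=5e-15, rtol=1e-8)
print("peak T_e = %.0f K, T_s at 4 ps = %.0f K" % (sol.y[0].max(), sol.y[2, -1]))
```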
### Ultrafast demagnetization in cobalt, nickel, and permalloy thin films
We proceed by performing time-resolved measurements of the polar magneto-optical Kerr effect in the cobalt, nickel and permalloy thin film samples as a function of the laser fluence. Measurements are carried out under a strong enough external magnetic field kept constant at around 2 kOe tilted at a small angle from the sample plane to saturate the magnetization of the samples. The pump fluence is varied between 0.8-8.7 mJ/cm\({}^{2}\) by varying the power of the pump pulse. The results are presented in Figure 2. To ascertain that the measured Kerr signal reflects the true magnetization dynamics without any spurious contribution from optical effects triggered in the initial stages of laser excitation, we also examine the transient reflectivity signal for each sample. The fluence dependent variation in the reflectivity can be found in Figure S1 of the Supplementary Materials, demonstrating that at any given fluence the amplitude of the reflectivity signal is negligibly small compared to that of the Kerr rotation. We nevertheless restrict ourselves to the low fluence regime to avoid nonlinear effects and sample damage. For all our experiments, the probe fluence is kept constant at a value about half that of the lowest pump fluence used to prevent additional contribution to the spin dynamics by probe excitation. As seen in Figure 2, the ultrafast demagnetization completes within 1 ps for all three samples considered which is followed by a fast recovery of the magnetization, all observed within the experimental time window of 4 ps. These experimental traces clearly exhibit the "Type-I" or "one-step" demagnetization expected for transition metal thin films at room temperature and under low-to-moderate pump fluence [23]. The amplitude of the maximum quenching of the Kerr rotation signal increases with the laser fluence, allowing us to rule out nonlinear effects [36]. Closer inspection of the traces also reveals an increase of the time taken to demagnetize the samples with increasing fluence for all three samples. To quantify this increase, we fit our demagnetization traces to a phenomenological expression based on the 3TM and valid in the low laser fluence regime [37]:
\[-\frac{\Delta M_{Z}}{M_{Z}}\ =\ \left[\left\{\frac{A_{1}}{(t/\tau_{0}+1)^{1/2}}-\frac{(A_{2}\tau_{E}-A_{1}\tau_{M})e^{-t/\tau_{M}}}{\tau_{E}-\tau_{M}}-\frac{\tau_{E}(A_{1}-A_{2})e^{-t/\tau_{E}}}{\tau_{E}-\tau_{M}}\right\}\Theta(t)+A_{3}\delta(t)\right]\bigotimes\Gamma(t) \tag{3}\]
where \(\Theta(t)\) is the Heaviside step function, \(\delta(t)\) is the Dirac delta function and \(\Gamma(t)\) is the Gaussian laser pulse. The constant \(A_{1}\) represents the value of the normalized magnetization after remagnetization has completed and equilibrium between the electron, spin and lattice reservoirs has been re-established. \(A_{2}\) is proportional to the initial rise in electron temperature and hence to the maximum magnetization quenching. \(A_{3}\) represents the magnitude of state-filling effects present during the onset of the demagnetization response, which is negligible in our case. \(\tau_{M}\) and \(\tau_{E}\) are the demagnetization time and fast relaxation time, respectively. Prior to the fitting, all the experimental traces were normalized by hysteresis measurements of the Kerr rotation signal under the saturating magnetic field in the absence of laser excitation. We find that within the range of fluence values considered, permalloy exhibits the largest magnetization quenching of 54.6%, followed by a 23.7% quenching achieved in nickel, while the magnetization of cobalt is quenched the least, only about 8% for the largest applied fluence. The demagnetization occurs at a characteristic timescale of 230-280 fs for cobalt, 160-
210 fs for nickel, and 220-250 fs for permalloy, increasing with the laser fluence. This effect can be attributed to enhanced spin-fluctuations at elevated spin temperatures for higher fluences [38]. At a fluence of 4.8 mJ/cm\({}^{2}\), the extracted demagnetization times are \(276.6\pm 3.41\) fs for cobalt, \(187.3\pm 2.89\) fs for nickel and \(236.8\pm 2.45\) fs for permalloy. The timescale for the magnetization recovery \(\tau_{E}\) also increases with increasing pump fluence. The variation of these characteristic timescales with laser fluence is shown in Figure 3. These fluence-dependent trends in \(\tau_{M}\) and \(\tau_{E}\) hint at a spin-flip process-dominated ultrafast demagnetization in our studied systems [23; 39; 40]. The values of \(\tau_{M}\) extracted from our experiments lie within the typical range of 100-300 fs consistent with previous reports of the ultrafast demagnetization times in these metals [17; 23], and are too large to represent a superdiffusive transport-driven demagnetization [41].
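For completeness, a minimal implementation of the fitting function of Eq. (3) is sketched below. The function and parameter names are our own, the parameter values are illustrative only, and in practice the synthetic trace would be compared with \(-\Delta M_{Z}/M_{Z}\) and passed to a least-squares routine such as scipy.optimize.curve_fit to extract \(\tau_{M}\) and \(\tau_{E}\):

```python
import numpy as np

def demag_model(t, A1, A2, A3, tau_M, tau_E, tau_0, sigma):
    """Right-hand side of Eq. (3): step response convolved with a Gaussian pulse Gamma(t)."""
    dt = t[1] - t[0]
    step = (t >= 0).astype(float)                                  # Theta(t)
    core = (A1 / np.sqrt(t / tau_0 + 1.0)
            - (A2 * tau_E - A1 * tau_M) / (tau_E - tau_M) * np.exp(-t / tau_M)
            - tau_E * (A1 - A2) / (tau_E - tau_M) * np.exp(-t / tau_E)) * step
    core = core + A3 * (np.abs(t) < dt / 2) / dt                   # discretized A3*delta(t)
    gauss = np.exp(-((t - t.mean()) / sigma) ** 2)                 # Gaussian pulse Gamma(t)
    gauss /= gauss.sum()
    return np.convolve(core, gauss, mode="same")

t = np.linspace(-1.0e-12, 4.0e-12, 2001)
trace = demag_model(t, A1=0.15, A2=0.25, A3=0.0,
                    tau_M=190e-15, tau_E=800e-15, tau_0=50e-12, sigma=60e-15)
```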
For the 3TM and M3TM simulations, we choose a laser pump term given by \(P(t)=P_{0}Fe^{-\left(t/\tau_{p}\right)^{2}}\) proportional to the pump fluence \(F\) and following a Gaussian temporal profile. The maximum rise of the electron temperature and thus also the extent of demagnetization depends sensitively on this term which is hence adjusted to reproduce the maximum quenching observed experimentally. We use a pulse width \(\tau_{p}=100\) fs determined by the pump-probe cross-correlation in all calculations. Intrinsic to both the models we consider is the assumption that electron thermalization occurs extremely fast. The thermal diffusion term can be neglected in our case since the thicknesses of the films we study are kept slightly greater than the optical penetration depth of the 400 nm pump beam in those films. This ensures uniform heating of the films in the vertical direction while also avoiding laser penetration into the substrate in which case heat dissipation into the substrate would have to be taken into account. Besides, the timescales associated with heat dissipation are generally tens to hundreds of picoseconds, much longer than the demagnetization and fast relaxation times, and hence unlikely to significantly influence our observations at these timescales. Since both models we consider are thermal in their approach, choosing correct values for the reservoir specific heats is vital for a proper simulation of the demagnetization. For the electronic specific heat \(C_{e}\), we assume a linear dependence on the electronic temperature \(C_{e}(T_{e})=\gamma T_{e}\) derived from the Sommerfeld free-electron approximation where \(\gamma\) is determined by the electronic density of states at the Fermi level [42]. The value of \(\gamma\) for permalloy is approximated as a weighted average of the individual \(\gamma\) values of nickel and iron in permalloy. The lattice specific heat \(C_{l}\) is calculated at each value of the lattice temperature according to the following relation derived from Debye theory:
\[C_{l}(T_{l})=9N_{A}k_{B}\left(\frac{T_{l}}{\theta_{D}}\right)^{3}\int_{0}^{\theta_{D}/T_{l}}\frac{x^{4}e^{x}}{(e^{x}-1)^{2}}dx \tag{4}\]
where \(N_{A}\) is Avogadro's number, \(k_{B}\) is Boltzmann's constant and \(\theta_{D}\) is the Debye temperature. Finally, we fix the spin specific heat \(C_{s}\) to its value at room temperature for the 3TM calculations, obtained by subtracting the electronic and lattice contributions from the experimental values of the total specific heat found in the literature [43]. Considering a spin-temperature-dependent form of \(C_{s}\) was not found to significantly affect our conclusions as described in Section IV of the Supplementary Materials. The fixed parameter set used in our calculations has been listed in Table **1**. To relate the experimental demagnetization to the temperature of the spin subsystem under the 3TM framework, the spin temperature \(T_{s}\) is mapped to the magnetization of the system via the Weiss' mean field theory [44], which is then fitted with the experimental magnetization traces to obtain the empirical inter-reservoir coupling parameters \(G_{el}\) and \(G_{es}\) consistent with the observed dynamics. We neglect the spin-lattice coupling parameter \(G_{sl}\) for the 3TM simulations since in ferromagnetic transition metals the energy transfer between electrons and lattice is far greater than that between lattice and spins [45].
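The temperature-dependent lattice specific heat of Eq. (4) and the mean-field mapping from \(T_{s}\) to the magnetization can be evaluated with a few lines of code. In the sketch below the \(S=1/2\) form \(m=\tanh(mT_{c}/T_{s})\) is assumed for the Weiss mean-field equation, since the text only specifies that Weiss' theory is used; the function names are ours:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

k_B, N_A = 1.380649e-23, 6.02214076e23

def debye_C_l(T_l, theta_D):
    """Molar lattice specific heat from Eq. (4)."""
    integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
    integral, _ = quad(integrand, 0.0, theta_D / T_l)
    return 9.0 * N_A * k_B * (T_l / theta_D)**3 * integral

def weiss_m(T_s, T_c):
    """Normalized magnetization from the assumed S = 1/2 mean-field equation m = tanh(m*T_c/T_s)."""
    if T_s >= T_c:
        return 0.0
    return brentq(lambda m: m - np.tanh(m * T_c / T_s), 1e-9, 1.0)

print(debye_C_l(300.0, 450.0))   # about 22 J mol^-1 K^-1 for the theta_D of Ni at room temperature
print(weiss_m(450.0, 627.0))     # m(T_s) used to map the 3TM spin temperature to -DeltaM/M
```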
For the 3TM simulations, we proceeded to extract \(G_{el}\) and \(G_{es}\) by first fitting the demagnetization data at the lowest fluence to the model. However, fitting the higher fluence data using identical values of the coupling parameters as extracted at the lowest fluence did not result in a good match to our experimental results. The coupling parameters extracted from the low fluence data led to an overestimation of the demagnetization time at the higher fluences. It was seen that a 5-10% increase in
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Sample & \(T_{c}\) (K) & \(\theta_{D}\) (K) & \(\gamma\) (J m\({}^{-3}\) K\({}^{-2}\)) & \(C_{s}\) (J m\({}^{-3}\) K\({}^{-1}\)) & \(\mu_{\text{at}}\) (\(\mu_{B}\)) \\ \hline Ni & 627 & 450 & 1065 & \(3.07\times 10^{5}\) & 0.62 \\ \hline Co & 1388 & 445 & 714 & \(1.59\times 10^{5}\) & 1.72 \\ \hline Py & 860 & 454 & 992 & \(2.67\times 10^{5}\) & 1.00 \\ \hline \end{tabular}
\end{table}
Table 1: Fixed parameter set used in the calculations. Literature values have been used for all parameters listed [30; 31; 34; 35].
\(G_{es}\) from its value at the adjacent lower fluence rectifies the overestimation of the demagnetization time. On the other hand, the remagnetization dynamics is most sensitive to the \(G_{el}\) parameter, so the overall dynamics is best reproduced only by adjusting both \(G_{el}\) and \(G_{es}\). As shown in Figure 4, the resulting fit shows excellent agreement with the experimental data. This exercise reveals the crucial role played by the electron-spin relaxation channels in determining the timescale associated with the initial demagnetization, while the magnetization recovery is primarily mediated by the electron-lattice interaction. We also find that the mismatch between model and experiment can be resolved, within the studied fluence range, by allowing \(G_{el}\) and \(G_{es}\) to increase with pump fluence, reflecting a faster demagnetization process for the same percentage quenching than the model would otherwise predict. The values of the microscopic parameters extracted from the least-squares fits with their corresponding error bounds can be found in Supplementary Tables S1-S3. Since the exact values of the coupling parameters extracted from the fits naturally depend on the values chosen for the fixed parameters, the interpretation of the results from these fits is best limited to a comparative one.
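For reference, the coupled 3TM rate equations can be integrated numerically as sketched below; the functional form is the standard three-temperature model described above, while the specific coupling, specific-heat and pump values shown are illustrative placeholders rather than our fitted results.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative, nickel-like constants (placeholders, not the fitted values).
GAMMA = 1065.0      # electronic specific heat coefficient (J m^-3 K^-2)
C_L   = 2.2e6       # lattice specific heat, taken constant here (J m^-3 K^-1)
C_S   = 3.07e5      # spin specific heat at room temperature (J m^-3 K^-1)
G_EL  = 8.0e17      # electron-lattice coupling (W m^-3 K^-1), assumed value
G_ES  = 6.0e17      # electron-spin coupling (W m^-3 K^-1), assumed value
TAU_P = 100e-15     # pump pulse width (s)
P0F   = 3.0e21      # peak absorbed power density (W m^-3), assumed value

def pump(t):
    """Gaussian laser source term P(t) centred at t = 0."""
    return P0F * np.exp(-(t / TAU_P) ** 2)

def three_temperature_model(t, y):
    T_e, T_l, T_s = y
    dT_e = (-G_EL * (T_e - T_l) - G_ES * (T_e - T_s) + pump(t)) / (GAMMA * T_e)
    dT_l = G_EL * (T_e - T_l) / C_L      # spin-lattice coupling neglected, as in the text
    dT_s = G_ES * (T_e - T_s) / C_S
    return [dT_e, dT_l, dT_s]

sol = solve_ivp(three_temperature_model, (-0.5e-12, 10e-12),
                y0=[300.0, 300.0, 300.0], max_step=5e-15)
# sol.y[0], sol.y[1], sol.y[2] trace T_e, T_l and T_s versus pump-probe delay.
```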
For the M3TM simulations, the demagnetization traces are fitted directly to Equation 2, yielding \(G_{el}\) and \(a_{sf}\) as fit parameters. In this case, \(a_{sf}\) plays a role in determining the maximum extent and the associated timescale of the demagnetization via the scaling factor \(R\), while \(G_{el}\) continues to influence mainly the magnetization recovery process. However, the demagnetization time is less sensitive to changes in \(a_{sf}\) than it is to \(G_{es}\) in the 3TM case. This results in somewhat higher values of \(G_{el}\), and a sharper rise with pump fluence, than those extracted from the 3TM simulations, in order to compensate for the overestimation of the demagnetization time that results from the model if \(G_{el}\) at the lowest fluence is used for all the fits. We have obtained an \(a_{sf}\) of \(\sim\)0.02 for cobalt, \(\sim\)0.05-0.06 for nickel, and \(\sim\)0.03-0.06 for permalloy. The value of \(a_{sf}\) we have extracted for nickel is an order of magnitude lower than the value \(a_{sf}=0.185\) first reported by Koopmans et al. [23] but quite close to the value of 0.08 reported by Roth et al. [39]. This discrepancy is expected, as the artificially high value of 0.185 arose from an overestimation of the electronic specific heat in the original work, avoided here by considering experimentally determined \(\gamma\) values reported in the literature. The observation that \(a_{sf}\)[Co] \(<a_{sf}\)[Ni] is consistent with previous reports [23; 40].
With regard to thermal treatments of ultrafast demagnetization like the 3TM and M3TM, there is some contention centered on the role of the nonthermal electrons generated within the first 50-100 femtoseconds after laser excitation [9; 48; 49]. Within this time frame, the excited hot electrons have not yet thermalized and their energies cannot be described by a Fermi-Dirac distribution. However, both the 3TM and the M3TM completely neglect the finite electron thermalization time and consider only the energy exchange between the thermalized electron population and the lattice and/or the spin subsystem, as the initial nonequilibrium distribution is assumed to last for a much shorter duration (\(\lesssim\) 10 fs) than the timescale over which demagnetization typically occurs (\(\gtrsim\) 100 fs). Strictly speaking, however, this assumption is valid only when the thermalization proceeds very rapidly, such as can be ensured with high fluence excitations [50]. In the low fluence regime, electron thermalization is slower, hence the influence of nonthermal electrons on the measured dynamics may become more important with decreasing pump fluence. The increase of \(G_{el}\) with fluence deduced from our calculations may be an indication of a decrease in the efficiency of electron-lattice interactions at low laser fluences when nonthermal electrons prevail. However, the specific values of \(G_{el}\) and \(G_{es}\) we have obtained for the different systems remain comparable to values previously reported for these metals [11; 23; 37].
After completion of the fitting routine, we set up simulations using the optimized values of the free parameters to obtain the temporal evolution of \(T_{e}\), \(T_{l}\) and \(T_{s}\) as a function of the pump-probe delay. The calculated profiles of \(T_{e}\), \(T_{l}\) and \(T_{s}\) for four different values of the pump fluence are presented in Figure 5 for nickel, showing good agreement between the results predicted by the 3TM and the M3TM. Similar profiles for cobalt and permalloy are given in Figs. S2 and S3 of the Supplementary Materials. As the pump fluence is increased, laser excitation deposits a greater amount of energy in the electronic reservoir which leads to a proportionate increase in the initial electron temperature rise. However, the maximum electron temperature reached for the highest applied pump fluence remains well below the Fermi temperature for the respective material, ensuring the assumption of the linear temperature dependence of the electronic specific heat remains valid [42]. Given the fundamentally different origins of the 3TM and M3TM, the close agreement between the temperature profiles calculated using the two models and among the extracted values of the electron-lattice coupling parameter \(G_{el}\) is remarkable. This observation reflects that, given the strength of the laser excitation determining the initial temperature excursion of the electron system, the \(G_{el}\) parameter common to both the 3TM and the M3TM framework plays a pivotal role in determining the profile of the ensuing magnetization changes.
This also explains why \(G_{el}\) is seen to follow a similar fluence-dependent trend for a given system in both models. While the electron temperature is coupled to the spin temperature via the \(G_{es}\) parameter in the 3TM, it enters into the evolution equation for the magnetization via Equation 2 in the M3TM case. From Figure 5, a qualitative difference in the temporal profiles of the electron, spin and lattice temperatures is also apparent. Referring to the panels corresponding to the 3TM, while \(T_{e}\) exhibits a sharp peak within the first few hundred femtoseconds, \(T_{s}\) exhibits a lower and broader peak at slightly longer time delays. The delayed peak in \(T_{s}\) occurs when excited electrons transfer energy to the spin degrees of freedom [51] through scattering events mediated by the coupling parameter \(G_{es}\), which qualitatively determines the maximum spin temperature value [11]. These scattering events can result in the generation of magnons, Stoner excitations and spin fluctuations which may play a part in the observed demagnetization [52]. Lastly, the lattice temperature gradually increases over the first few picoseconds after excitation, reaching an equilibrium value at which it stays nearly constant over several picoseconds.
In Figure 6 we present a comparison between the experimental demagnetization curves and calculated electron temperature profiles for each of our three samples. Under the same applied pump fluence, different amplitudes of magnetization quenching and electron temperature rise are observed. It is clearly seen that nickel quenches more strongly than cobalt, an observation which can be explained by band structure effects and a more efficient laser-induced electronic excitation in nickel owing to its much lower Curie temperature [34; 40; 53], reflected in the M3TM as a higher extracted value of the spin-flip probability \(a_{sf}\) in nickel. On the other hand, a lower electronic specific heat capacity in cobalt as compared to nickel [42] predictably leads to a much greater rise in its electron temperature for the same pump fluence. In the case of permalloy, alloying with iron can reduce the electronic density of states at the Fermi level as compared to nickel, decreasing \(\gamma\). Because of the larger electron temperatures reached as a result, a value of \(a_{sf}\) comparable to that of nickel is sufficient to drive a much larger reduction in the magnetization. Permalloy also shows a somewhat larger demagnetization time at a given pump fluence as compared to nickel. Since a standard TR-MOKE optical pump-probe measurement cannot be used to probe the demagnetization response in alloys in an element-specific manner, interpreting this finding is difficult. It has been reported previously using element-specific measurements in a nickel-iron alloy that the demagnetization of nickel proceeds faster than that of iron [54]. Our observation of a slower demagnetization in permalloy may be reflective of this result. Finally, the demagnetization times for the three systems at any given pump fluence are seen to depend sensitively on the ratio \(T_{C}/\mu_{\text{at}}\), confirming its role as a 'figure of merit' (FOM) for the demagnetization time [23]. As shown in Figure 8 (\(a\)), the demagnetization proceeds slower for a lower value of the FOM. Thus at a given incident laser pump fluence, the demagnetization times are found to correlate with the values of this quantifying parameter, rather than with the material Curie temperature directly.
_3.3 Precessional dynamics and correlation of Gilbert damping parameter with ultrafast demagnetization time_
To gain further insight into the microscopic mechanism underlying the ultrafast demagnetization, we also study the precessional magnetization dynamics in our thin film systems. Typical time-resolved Kerr rotation data showing the different temporal regimes of laser-induced magnetization dynamics are given in Figure 1(\(d\)). Regimes I and II represent the ultrafast demagnetization and magnetization recovery processes respectively, while the oscillatory signal of regime III represents the magnetization precession with the Gilbert damping characteristic of the ferromagnetic material. To extract the Gilbert damping factor \(\alpha\), the background-subtracted oscillatory Kerr signal is first fitted with a damped sinusoidal function given by:
\[M(t)=M(0)\,e^{-t/\tau}\sin\left(2\pi ft+\varphi\right) \tag{5}\]
where \(f\) is the precessional frequency, \(\tau\) is the relaxation time, and \(\varphi\) is the initial phase of oscillation. The dependence of \(f\) on applied bias magnetic field \(H\) is fitted to the Kittel formula relevant for ferromagnetic systems:
\[f=\frac{\gamma_{0}}{2\pi}\ \left(H(H+4\pi M_{eff})\right)^{1/2} \tag{6}\]
where \(\gamma_{0}=g\mu_{B}/\hbar\) is the gyromagnetic ratio and \(M_{eff}\) is the effective saturation magnetization of the probed volume. From the Kittel fit, the values of \(g\) and \(M_{eff}\) can be extracted as fit parameters. While the value of \(g\) is found to be \(2\pm 0.1\) for all the samples, the extracted values
of \(M_{\mathit{eff}}\) are \(1280.2\pm 1.69\) emu/cm\({}^{3}\) for cobalt, \(400.5\pm 11.46\) emu/cm\({}^{3}\) for nickel and \(737.1\pm 5.32\) emu/cm\({}^{3}\) for permalloy (Section V of the Supplementary Materials).
Subsequently, the effective damping factor \(\alpha\) is calculated from the extracted values of \(\tau\) and \(M_{\mathit{eff}}\) as:
\[\alpha=\frac{1}{\gamma_{0}\tau(H+2\pi M_{\mathit{eff}})} \tag{7}\]
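The extraction chain of Equations 5-7 amounts to two least-squares fits followed by a closed-form evaluation; the sketch below assumes CGS-Gaussian units (field in Oe, magnetization in emu/cm\({}^{3}\)), matching the \(4\pi M_{eff}\) form of Equation 6, and all numerical inputs are placeholders rather than our measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

MU_B = 9.274e-21    # Bohr magneton (erg/G)
HBAR = 1.0546e-27   # reduced Planck constant (erg s)

def damped_sine(t, A, tau, f, phi):
    """Equation 5: background-subtracted precessional Kerr signal."""
    return A * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi)

def kittel(H, g, M_eff):
    """Equation 6 in CGS units; returns the precession frequency in Hz."""
    gamma0 = g * MU_B / HBAR                      # rad s^-1 Oe^-1
    return gamma0 / (2 * np.pi) * np.sqrt(H * (H + 4 * np.pi * M_eff))

def gilbert_damping(tau, H, M_eff, g=2.0):
    """Equation 7: effective damping from the precessional relaxation time."""
    gamma0 = g * MU_B / HBAR
    return 1.0 / (gamma0 * tau * (H + 2 * np.pi * M_eff))

# Usage sketch on hypothetical field-dependent frequency data (Oe, Hz):
H = np.array([1.0e3, 1.5e3, 2.0e3, 2.5e3])
f = kittel(H, 2.0, 400.0) * (1 + 0.01 * np.random.randn(H.size))
(g_fit, Meff_fit), _ = curve_fit(kittel, H, f, p0=[2.0, 500.0])
print(g_fit, Meff_fit,
      gilbert_damping(tau=0.5e-9, H=2.0e3, M_eff=Meff_fit, g=g_fit))
```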
Although the timescales of ultrafast demagnetization and of precessional dynamics and damping in ferromagnetic thin films differ by many orders of magnitude, similar underlying phenomena may often determine the characteristics of either process. Through several independent investigations relating these two distinct processes, the correlation between the demagnetization time \(\tau_{\mathit{M}}\) and the Gilbert damping factor \(\alpha\) emerged as a critical indicator of the dominant microscopic contribution to the ultrafast demagnetization. Koopmans et al. in 2005 analytically derived an inversely proportional relationship between the Gilbert damping factor and the ultrafast demagnetization time based on quantum mechanical considerations [16]. However, the validity of the model could not be confirmed experimentally when \(\tau_{\mathit{M}}\) and \(\alpha\) were tuned by transition metal and rare earth doping [18]. Later it was proposed that the Gilbert damping parameter and the ultrafast demagnetization time can be either directly or inversely proportional to each other depending on the predominant microscopic mechanism contributing to the magnetic damping [55]. A proportional dependence would suggest a dominating conductivity-like or "breathing Fermi surface" contribution to the damping due to the scattering of intraband electrons and holes, while an inverse dependence may arise in materials with a dominant resistivity-like damping arising from interband scattering processes. Zhang et al. later suggested that an inverse correlation between the Gilbert damping factor and the ultrafast demagnetization time may arise in the presence of spin transport, which provides an additional relaxation channel resulting in an enhancement of the magnetic damping along with an acceleration of the demagnetization process at the femtosecond timescale [56]. In cases where local spin-flip scattering processes are expected to dominate the demagnetization process, the breathing Fermi surface model is valid, predicting a proportional relationship between \(\tau_{\mathit{M}}\) and \(\alpha\).
To study the correlation between ultrafast demagnetization and damping in a system, both \(\tau_{\mathit{M}}\) and \(\alpha\) have to be systematically varied. For a given material, the exciting pump fluence can be used to modulate the demagnetization time. As discussed in the previous section, we observe an increment of the demagnetization time with the pump fluence for the systems under investigation. An enhancement in \(\alpha\) is also observed as the pump fluence is increased from 0.8 mJ/cm\({}^{2}\) to 8.7 mJ/cm\({}^{2}\) for each sample, associated with a corresponding decrease in the precessional relaxation time. Within the experimental fluence range, \(\alpha\) is found to be enhanced from 0.0089 to 0.0099 for cobalt, from 0.0346 to 0.0379 for nickel and from 0.0106 to 0.0119 for permalloy. This enhancement of the damping can be attributed to the transient rise in the system temperature as the laser pump fluence is increased [57]. As a laser-induced increase in electron temperature also drives the demagnetization process at the femtosecond timescale, one can qualitatively correlate the demagnetization time with the magnetic damping at various excitation energies even for a single sample. Clearly, a direct correlation between demagnetization time and the Gilbert damping factor is obtained in our samples by measuring the magnetization dynamics at various pump fluences, as shown in Figure 7 for the nickel thin film. Thus, we infer that similar relaxation processes likely drive the laser induced transient magnetization dynamics at the two different timescales. At a low fluence value of 2.1 mJ/cm\({}^{2}\), the extracted value of \(\alpha\) is the highest for the nickel film, and relatively low for cobalt and permalloy. The observed trend in Figure 8 (\(b\)) can be directly correlated with the values of the Sommerfeld coefficient \(\gamma\) given in Table 1. This observation is in accordance with Kambersky's spin flip scattering theory [58], wherein \(\alpha\) is directly proportional to the density of states at the Fermi level, which in turn governs the value of \(\gamma\).
## 4 Summary
To summarize, we have investigated fluence-dependent ultrafast demagnetization in identically-grown cobalt, nickel and permalloy thin films using an all-optical time-resolved magneto-optical Kerr effect technique. We study both the femtosecond laser-induced ultrafast demagnetization and the magnetization precession and damping in each of our samples for various pump excitation fluences. Using a phenomenological expression, we extract the values of demagnetization time and fast relaxation time
for each of our samples to show that they exhibit an increasing trend with applied laser fluence, consistent with a spin-flip scattering-dominated demagnetization in our samples. Two distinct ultrafast demagnetization models can each reproduce our experimental demagnetization traces remarkably well. By directly fitting our experimental data to numerical solutions of these models, we have extracted values for the electron-lattice and electron-spin coupling parameters and an empirical estimate of the spin-flip scattering probability for each of our samples, and demonstrated the fluence-dependent variation in the calculated electron, spin, and lattice temperatures as a function of the pump-probe delay. We conjecture that the increasing trend of the extracted electron-lattice coupling parameter with fluence may indicate a decrease in the efficiency of electron-lattice interactions at low laser fluences, where nonthermal electrons may play a role in influencing the magnetization dynamics. We have compared the experimental and theoretical results obtained from each of the systems under investigation and confirmed that the ratio of the Curie temperature to the atomic magnetic moment acts as a figure of merit sensitive to the demagnetization time for these systems. Our study demonstrates the mutual agreement between the phenomenological three temperature model and the microscopic three temperature model for the systems we study and highlights the possibility of a fluence-dependent enhancement in the inter-reservoir interactions in the low laser fluence regime. Finally, we have reported from our experiments a direct correlation between the ultrafast demagnetization time and the enhancement of the magnetic damping that transiently appears as a result of the laser excitation, reflecting that both phenomena are sensitive to similar underlying microscopic processes.
## 5 Acknowledgement
AB gratefully acknowledges the financial support from the S. N. Bose National Centre for Basic Sciences (SNBNCBS) under Project No. SNB/AB/18-19/211 and Department of Science and Technology (DST), Govt. of India under Grant No. DST/NM/TUE/QM-3/2019-1C-SNB. SM acknowledges DST, India for financial support from INSPIRE fellowship, while SM and SNP acknowledge SNBNCBS for senior research fellowship.
|
2305.14611
|
NiCro: Purely Vision-based, Non-intrusive Cross-Device and
Cross-Platform GUI Testing
|
To ensure app compatibility and smoothness of user experience across diverse
devices and platforms, developers have to perform cross-device, cross-platform
testing of their apps, which is laborious. There comes a recently increasing
trend of using a record and replay approach to facilitate the testing process.
However, the graphic user interface (GUI) of an app running on different
devices and platforms differs dramatically. This complicates the record and
replay process as the presence, appearance and layout of the GUI widgets in the
recording phase and replaying phase can be inconsistent. Existing techniques
resort to instrumenting into the underlying system to obtain the app metadata
for widget identification and matching between various devices. But such
intrusive practices are limited by the accessibility and accuracy of the
metadata on different platforms. On the other hand, several recent works
attempt to derive the GUI information by analyzing the GUI image. Nevertheless,
their performance is curbed by the applied preliminary visual approaches and
the failure to consider the divergence of the same GUI displayed on different
devices. To address the challenge, we propose a non-intrusive cross-device and
cross-platform system NiCro. NiCro utilizes the state-of-the-art GUI widget
detector to detect widgets from GUI images and then analyses a set of
comprehensive information to match the widgets across diverse devices. At the
system level, NiCro can interact with a virtual device farm and a robotic arm
system to perform cross-device, cross-platform testing non-intrusively. We
first evaluated NiCro by comparing its multi-modal widget and GUI matching
approach with 4 commonly used matching techniques. Then, we further examined
its overall performance on 8 various devices, using it to record and replay 107
test cases of 28 popular apps and the home page to show its effectiveness.
|
Mulong Xie, Jiaming Ye, Zhenchang Xing, Lei Ma
|
2023-05-24T01:19:05Z
|
http://arxiv.org/abs/2305.14611v1
|
# NiCro: Purely Vision-based, Non-intrusive Cross-Device and Cross-Platform GUI Testing
###### Abstract
The diversity of devices and platforms brings significant challenges for mobile application (app) testing. To ensure app compatibility and smoothness of user experience across diverse devices and platforms, developers have to perform cross-device, cross-platform testing of their apps, which is laborious. There comes a recently increasing trend of using a record and replay approach to facilitate the testing process. However, the graphic user interface (GUI) of an app running on different devices and platforms differs dramatically. This complicates the record and replay process as the presence, appearance and layout of the GUI widgets in the recording phase and replaying phase can be inconsistent. Existing techniques resort to instrumenting into the underlying system to obtain the app metadata for widget identification and matching between various devices. But such intrusive practices are limited by the accessibility and accuracy of the metadata on different platforms. On the other hand, several recent works attempt to derive the GUI information by analysing the GUI image. Nevertheless, their performance is curbed by the applied preliminary visual approaches and the failure to consider the divergence of the same GUI displayed on different devices. To address the challenge, we propose a non-intrusive cross-device and cross-platform system NiCro. NiCro utilizes the state-of-the-art GUI widget detector to detect widgets from GUI images and then analyses a set of comprehensive information to match the widgets across diverse devices. At the system level, NiCro can interact with a virtual device farm and a robotic arm system to perform cross-device, cross-platform testing non-intrusively. We first evaluated NiCro by comparing its multi-modal widget and GUI matching approach with 4 commonly used matching techniques. Then, we further examined its overall performance upon 8 various devices, using it to record and replay 107 test cases of 28 popular apps and the home page to show its effectiveness.
UI Testing, Cross-platform and cross-device
## I Introduction
Compatibility issues are a common critical concern of modern mobile app development, where the same mobile app could exhibit different behaviours or be rendered differently in the GUI due to differences in the target mobile device environment (e.g., screen size, operating system) [1, 2]. Fig. 1 shows some examples of an app running on different devices and environments. The early common practice of app compatibility testing in the industry was hiring testers to execute testing procedures on diverse devices manually. However, this involves a plethora of repetitive, laborious and time-consuming actions, and pushes the industry to pursue an efficient way of performing cross-device and cross-platform testing.
In recent years, record-and-replay testing has drawn much industrial interest, as it expands automated testing to more devices and reduces repetitive human work [3]. A wide range of testing and debugging techniques (e.g., regression testing [4, 5], failure reproduction [6] and test transfer [7, 8]) are based on record and replay approaches. Most of these existing techniques focus on recording and replaying on the same platform, while transferring tests (recorded actions) of an app to replay on different platforms and devices has been less explored [2]. TestMig [9] makes early attempts to perform test migration from an iOS app to an Android counterpart through source code analysis. More recent work MAPIT [2] proposes bi-directional UI test transfer for cross-platform testing of iOS and Android apps. Nevertheless, all these approaches require intrusion into the app to acquire metadata, such as source code and run-time view hierarchy, to perform analysis. The intrusive techniques are inapplicable or unsuitable when the underlying metadata is unavailable or not analyzable (e.g., the WebView in an app). Moreover, they would fail to handle test cases that involve certain system operations (e.g., granting system permissions) and multiple apps (e.g., sharing to other apps).
In contrast, non-intrusive techniques based on visual intelligence are more general, which imitate the human's way of _viewing_ and _interacting_ with GUIs. They only need the
Fig. 1: An app displayed on various devices. “P” and “V” indicate if the device is “Physical” or “Virtual”; “A” and “I” represent “Android” and “iOS” platforms; the last number shows the screen size in inches.
screenshot of the GUI (i.e., pixel image) to extract GUI information, which avoids the complexity of app intrusion involving diverse software stacks and the errors caused by the inconsistency between underlying code and the rendered GUI [10]. LIRAT [11] is the first image-driven framework proposed to solve the record and replay problem. It obtains the GUI widget's information solely through pixel GUI images and matches widgets between different platforms based on the widget's screenshot and relative position. However, it fails to consider the variance of GUI widget layout and placement due to the diverse screen sizes of devices (see Fig. 1), and it does not support widget-independent operations, such as scroll. Some works utilize a robot system for external interaction with the physical device [12, 13, 14]. RoScript [14] is the state-of-the-art non-intrusive robotic framework for record-and-replay, which recognizes touch actions as well as the target GUI widgets and drives the robot to replay. However, RoScript only considers the tests of the same app on the same device.
Although beneficial in many aspects owing to its simplicity of not handling the underlying software stack, building a non-intrusive cross-device and cross-platform record and replay system faces multiple challenges. First, different devices' hardware environments (e.g., screen size) may responsively change the GUI in terms of the layout and appearance of widgets, which relocates some widgets out of the screen and makes them invisible on some small devices. Second, an app on different platforms may experience differences in GUI design and implementation, which causes inconsistency in GUI layout and styling, further complicating widget matching because of the changes in widget position and size.
To address these challenges and tackle record-and-replay GUI testing, we propose a purely non-intrusive cross-device and cross-platform approach NiCro. NiCro is based on computer vision techniques to analyse GUI information from pixel GUI images without any additional need for app metadata. Fig. 3 summarizes the workflow of NiCro at a high level. It records the actions, including type and coordinates, and collects GUI images from different devices. Next, NiCro leverages the state-of-the-art image-based GUI element detector UIED [15, 16] to detect widgets, and then extracts multi-modal attributes of each widget to represent it. For a widget-dependent action (e.g., _click_, _long press_ and _text input_), NiCro utilizes the extracted multi-modal information to match the widget on target devices. For widget-independent actions (e.g., _swipe horizontally_ and _scroll vertically_) that are not associated with any particular widget, NiCro matches the recording device's GUI with the target device's GUI to identify the ending position of the action. At the system level, NiCro comprises three key components: (1) a _Device Farm_, (2) a _Robotic System_ and (3) a _Host Computer_ (see Fig. 2). The _Device Farm_ and the _Robotic System_ are provided to support various virtual and physical devices for record-and-replay testing. The _Host Computer_ connects and communicates with the other two components and runs NiCro's core techniques.
We performed a comprehensive evaluation from two aspects: 1) the accuracy of NiCro's multi-modal widget and GUI matching approach compared to 4 commonly used matching methods; 2) the performance of NiCro's action and test case replay with a state-of-the-art non-intrusive approach as the baseline. To this end, we conducted experiments upon the home page and 28 popular mobile apps spanning 14 common categories, which are installed on 8 devices, including 5 Android devices (4 emulators and 1 physical phone), 2 physical iPhones and an iPad tablet. We randomly selected 50 widgets and 5 GUIs from each app, on which NiCro achieved 0.91 and 0.94 accuracy for widget matching and GUI matching. We then manually created a total of 107 test cases containing 639 GUI actions that cover the general functionalities of each app. For each test case, we randomly select a device as the recording device (source device) and replay the actions on the other 7 devices (target devices). Overall, NiCro achieved 0.86 and 0.85 accuracy in widget-dependent and widget-independent action replay, accurately completing 63% of the entire test cases with no correction and 89% of the test cases with one-step correction, significantly outperforming its baseline.
To summarize, this paper makes the following contributions:
* A ready-to-use non-intrusive cross-device and cross-platform system1 that can non-intrusively interact with a virtual device farm and a physical robotic system to record and replay GUI actions. Footnote 1: [https://github.com/thorxv/NiCro.git](https://github.com/thorxv/NiCro.git)
* A novel computer vision-based approach to extract the multi-modal widget information and accurately match GUI widgets in the face of GUI layout and visual differences over diverse devices and platforms.
Fig. 2: The architecture and high-level workflow of NiCro.
* A systematic evaluation of the proposed technique and a thorough analysis of the system performance to demonstrate the effectiveness of NiCro.
## II Background
### _Cross-Device and Cross-Platform GUI Testing_
Due to the massive number and enormous diversity of modern mobile devices, a mobile app's running environments, including software and hardware, can vary dramatically and unpredictably. This raises a strong demand for testing apps over multiple devices in the development process to tackle device fragmentation [1]. Although numerous works were proposed to address the compatibility issues due to device fragmentation [17, 18, 19], most of them focus on a single platform (e.g., Android) and address functionality rather than the GUI. Compared to back-end functionality, the GUI is more subject to display changes on different and heterogeneous devices, for example due to the screen size and resolution, because the widget's layout, appearance and even visibility may change responsively.
As shown in Fig. 1, the GUIs on diverse devices are rendered differently even for the same page of the same app. The difference exists even for devices that have the same operating platform (i.e., _Device 1_ to _Device 5_, and _Device 6_ to _Device 8_). For example, many widgets shown on _Device 1_ cannot fit on the screen of a smaller device, such as _Device 6_. And the GUI layout may change responsively to fit the screen size. For instance, the widgets at the bottom are arranged in rows of four on _Device 5_, while they are squeezed into rows of three on _Device 1_ and rows of two on other smaller devices. These variances raise more challenges in locating and matching widgets across devices. Moreover, the app running on different platforms usually has to adjust its implementation, which causes some differences in the GUIs, hence complicating cross-platform GUI testing. For instance, although _Device 2_ and _Device 8_ share a similar screen size, their GUIs are not identical (e.g., the icons at the bottom bar). This discrepancy usually lies in the different GUI layouts and nuances of widget appearance, though the related functionalities are usually consistent. All of these features complicate cross-device and cross-platform GUI testing.
### _Non-Intrusive GUI Testing_
A non-intrusive technique, contrary to an intrusive one, does not hook into the underlying system or app to acquire data or perform actions [14]. It does not require connection to the target system through certain APIs and hence is not as sensitive to variations of the program's running environment as the intrusive approaches. This feature is desirable for testing apps running on a volatile platform, such as Android, which has dozens of major versions and distributions, where the API is frequently updated or changed and requires the intrusive approach to be revised accordingly [20]. In addition, intrusive techniques cannot correctly acquire information from apps in many situations, such as a closed system where the app metadata is inaccessible or unanalyzable [12]. Even for open source systems, some app metadata, e.g., the runtime view hierarchy, still cannot be obtained or is inaccurate.
In contrast, the non-intrusive approach is more generic and less demanding. It obtains information and interacts with the app purely through an external interface, which is analogous to human-app interaction in that we mainly use eyes to view the GUI and fingers to operate the device physically. The non-intrusive approaches resort to the computer vision process to "see" and "understand" GUIs [11, 16, 21], and some works apply robot arms to conduct the external interactions with devices, similar to human fingers [14, 22, 23]. For the non-intrusive approach, the capacity of visual intelligence to detect and analyze the widgets on the GUI image decisively affects the overall performance. However, most of the existing non-intrusive testing approaches rely on preliminary computer vision techniques that cannot accurately locate and recognize GUI widgets from the GUI image [15], let alone the much more complicated cross-device and cross-platform testing.
## III NiCro System Overview
### _System Architecture_
Fig. 2 summarizes NiCro's overall architecture. It comprises three modules at the system level: (1) a virtual _Device Farm_; (2) a _Robotic System_, and (3) a _Host Computer_. The _first two modules_ provide various devices that NiCro interacts with in a black-box manner, and the _third module_ runs NiCro's core techniques to collect and process GUI information.
#### Iii-A1 Device Farm
The device farm usually refers to a testing environment where a variety of devices can be accessed and controlled to test apps [24]. A virtual device farm replaces the physical devices with app emulators. These emulators simulate real devices of certain models and provide the identical environment as their physical version in terms of screen size and operating system. The device farm's scalability and flexibility enable the addition or removal of emulators easily without much additional cost, which offers a convenient platform to test apps on diverse devices. In this work, we set up our _Device Farm_ through Android Studio [25] that supports a convenient device manager to create and manage diverse device emulators. NiCro non-intrusively interacts with the device manager to take a screenshot of the app and send mouse events to execute GUI actions on virtual device emulators.
#### Iii-A2 Robotic System
NiCro supports the _Robotic System_ for direct and external interaction with physical devices. Fig. 4 presents the system setting, which contains three parts: a high-resolution camera, a robot arm with a stylus, and a plain surface to place the device. The surface is simply a black pad on a desk, and the camera is set to focus on the surface to take screenshots of the devices. NiCro is equipped with the screen recognition technique that can extract the screen region from the picture, which is then cropped out and used as the GUI image. The robot arm emulates a human's finger to execute the given actions on the physical device. The robot arm and camera are highly customizable in that the hardware can be easily replaced by similar alternatives (e.g., the XY plotter used in RoScript [14]). The robotic system can interact
with most forms of common mobile devices by simply placing the device within the surface area, which provides a convenient means for cross-device and cross-platform testing.
#### Iii-A3 Host Computer
This is the central processing unit that connects the virtual devices and the robotic systems. It collects GUI images and recorded actions from the other two components. Specifically, it takes two kinds of input: (1) the GUI images (i.e., screenshots and screen photos) and (2) the action, including its type and coordinates recorded on a source device. Then, it performs a series of visual analyses that consist of widget detection, multi-modal representation and matching on the GUIs of all target devices. Finally, it outputs the replay action, including the type and coordinate, to the _Device Farm_ or the _Robotic System_ to replay on each target device respectively. The input from and the output to the devices only rely on the devices' external interfaces (screen).
### _Visual Analytic Workflow_
The visual analytic workflow is demonstrated in Fig. 2's Host Computer component and illustrated in Fig. 3.
#### Iii-B1 Action Record and GUI Image Collection
To start with, NiCro records the GUI action and collects the GUI images. An action can be performed on any one of the connected devices. We name the device used for recording the action the _Source Device_ and those on which the action is replayed the _Target Devices_. The vision-based design of NiCro greatly simplifies the recording process, which only needs to record the _Source Device_'s GUI image (e.g., screenshot or photo) along with the action, requiring only the action type and screen coordinates. At the same time, the current GUI images on all target devices are collected for further analysis.
#### Iii-B2 GUI Widget Detection
At the beginning of the visual analysis, NiCro leverages its UIED-based GUI widget detector to detect all the widgets on each GUI image. The detection result comprises the location and class (i.e., text or non-text) for each widget on the GUI.
#### Iii-B3 Widget Information Extraction
Next, NiCro analyses multi-modal attributes, in addition to the simple visual properties, from the detection result to represent the widgets for matching. These attributes not only include the spatial and visual information, such as _Location_, _Image Clip_, _Text Content_ and _Shape_, but also contain some contextual knowledge, such as its _Neighboring Widgets_ and _Parent Widget_ if any.
#### Iii-B4 Widget Matching for Widget-dependent Action
The prerequisite to replaying a widget-dependent action on the _Target Devices_ is to find, on each of them, the widget that matches the target widget on the _Source Device_. NiCro accomplishes the matching based on the extracted multi-modal information. Depending on the widget type (text or non-text), NiCro applies specific rules that consider the widget attributes with different priorities and finally singles out the matched widget on the _Target Devices_.
#### Iii-B5 Expanding GUI Scope for Unsuccessful Widget Matching
As stated in the above sections, the target widget may not appear on a _Target Device_'s current GUI because of responsive adjustment or a different implementation. However, the widget is still supposed to exist to guarantee the functional consistency of a specific page in an app. We observe that the most common reason a widget cannot be found is that the widget is laid out beyond the current screen region, and it will appear when that part of the page is brought into the screen region through scrolling. Therefore, NiCro applies a screen-height scroll to the app page if no widget is matched in the current GUI and tries to match again until the target widget is identified or the margin of the page is reached.
#### Iii-B6 GUI Matching for Widget-independent Actions
As for the widget-independent action, such as swiping and scrolling, NiCro replays it by matching the entire GUIs of different devices. The swiping and scrolling start from a position on the page and move a certain distance to reach an ending position. Due to the different display sizes of various devices, the length of the content filled on the screen would differ, and the moving distance to a certain point on the page would be inconstant and hard to transform directly. Therefore, NiCro turns to match
Fig. 3: The summarized workflow of NiCro.
the ending GUIs in different devices using its _GUI Matcher_ introduced in Section IV-D.
#### Iii-A7 Action Execution
As a purely non-intrusive system, NiCro executes the actions on devices through external interactions without any intrusive control. Specifically, it sends mouse events (e.g., click, swipe) to virtual devices in the _Device Farm_ and controls the robot arm to tap the physical device's screen in the _Robotic System_, which is analogous to human-device interaction. This simple practice covers the most common operations for mobile apps, including inputting text, for which NiCro directly taps on the keyboard like a human finger. This design greatly simplifies the action execution, where the _Host Computer_ only needs to send the action type and screen coordinates to replay the action on a _Target Device_.
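The paper does not name a concrete command channel for the emulators; purely for illustration, the sketch below dispatches recorded actions with the standard `adb shell input` commands, which is one plausible way to realize the non-intrusive execution described above.

```python
import subprocess

def replay_on_emulator(serial: str, action: dict) -> None:
    """Send a recorded action (type + screen coordinates) to an Android emulator.

    The use of adb here is an assumption for illustration; NiCro itself only
    requires that the Host Computer can inject taps and swipes. Text is entered
    by tapping the on-screen keyboard key by key, i.e. as a sequence of clicks.
    """
    if action["type"] == "click":
        x, y = action["coords"]
        cmd = ["adb", "-s", serial, "shell", "input", "tap", str(x), str(y)]
    elif action["type"] in ("swipe", "scroll"):
        (x1, y1), (x2, y2) = action["coords"]
        cmd = ["adb", "-s", serial, "shell", "input", "swipe",
               str(x1), str(y1), str(x2), str(y2)]
    else:
        raise ValueError(f"unsupported action type: {action['type']}")
    subprocess.run(cmd, check=True)

# Example: replay a tap recorded at (540, 1200) on the emulator "emulator-5554".
replay_on_emulator("emulator-5554", {"type": "click", "coords": (540, 1200)})
```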
## IV NiCro's Visual Analytic Approach
NiCro's visual analytic approach contains four major components to analyse and match GUIs.
### _Widget Detector_
NiCro utilizes UIED [16], a state-of-the-art GUI widget detector, with some revisions to detect widgets on the GUI image. UIED is designed with consideration of the GUI's special visual characteristics. This enables it to achieve higher accuracy in GUI widget detection than deep learning models that are built for natural objects [15] (e.g., Faster RCNN [26] and YOLO [27]), or the methods relying on manually-crafted visual features [28, 29]. UIED contains three steps:
_GUI Text Detection_: The original UIED uses EAST [30], a scene text detector, to extract the text widgets. However, EAST is only able to locate the text area but cannot recognize the text content, while NiCro requires the text content (if any) to represent and match widgets. Thus we replace EAST with the widely used Google OCR tool [31], which is trained to recognize a wide range of text images, to detect the location and content of text widgets.
_Non-text Widgets Detection_: UIED applies a pipeline of image processing algorithms, including gradient-based binarization, flood-filling connected component labelling [32] and shape recognition. The GUI-specific approach is proven to be effective [15], but it suffers from the problem of losing nested widgets (e.g., widgets in a container). NiCro tackles this problem by recognizing the container where all the nested components are kept. In particular, NiCro identifies a widget as a container if: (1) it is rectangular and (2) none of its inside objects are connected with the widget border. Besides, NiCro only categorizes the widgets as "Text" or "Non-text", as the detailed widget category is not involved in the later process.
_Merge of Text and Non-text Widgets_: This step not only integrates the detection results, but also cross-checks non-text widgets and filters out false-positive non-text widgets. UIED counts on the OCR's text results and discards non-text widgets whose bounding box intersects with any text region.
Moreover, NiCro recognizes the screen region from the GUI photo taken by the _Robotic System_ using heuristics: if (1) a widget's height is larger than half of the photo's height, (2) the widget contains multiple widgets and (3) the widget is not contained by any other widget, it is a screen region.
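A minimal sketch of this screen-region heuristic is given below, assuming each detected widget is represented by a (top, left, bottom, right) bounding box; the helper names are our own.

```python
def contains(outer, inner):
    """True if bounding box `outer` fully contains `inner`; boxes are (top, left, bottom, right)."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def find_screen_region(widgets, photo_height):
    """Return the bounding box judged to be the device screen in a robot-arm photo.

    A widget is taken as the screen region if (1) its height exceeds half the photo
    height, (2) it contains multiple other widgets, and (3) no other widget contains it.
    """
    for w in widgets:
        if (w[2] - w[0]) <= photo_height / 2:
            continue
        children = [o for o in widgets if o is not w and contains(w, o)]
        parents = [o for o in widgets if o is not w and contains(o, w)]
        if len(children) > 1 and not parents:
            return w
    return None
```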
### _Widget Information Extractor_
The widget detection produces basic widget spatial and class information (i.e., size, location and text/non-text class). However, as stated previously, the GUI and the widgets on it may change in terms of placement and appearance on different devices and platforms. Consequently, it is necessary to extract more comprehensive multi-modal information to represent and match widgets. Figure 3 shows some examples of the widget information extracted by NiCro. Overall, the extractor obtains five attributes to represent a widget.
_Spatial Location_: The _Spatial Location_ is acquired from the widget detection directly and includes _(Top, Left)_ and _(Bottom, Right)_ to indicate the coordinates of the top left and bottom right corners of a widget's bounding box.
_Shape_: This attribute captures the widget's geometric properties. It uses a 4-tuple _(Width, Height, Area, Aspect Ratio)_ to measure the widget's bounding box, which is calculated from the _Spatial Location_ (i.e., (_(Top, Left)_, _(Bottom, Right)_)).
_Image Clip_: The extractor clips the widget's image from the GUI image as its visual information using the _Spatial Location_. Not only does the non-text widget need the image clip to show its visual content, but the text widget also uses the image clip to present its font properties.
_Text Content_: The _Text Content_ is obtained from the widget detector's OCR results directly, and can either be a single word or a line of text. NiCro does not combine multiple lines into a paragraph to avoid incorrect merging and tricky parameter adjustment. So, a chunk of text that contains several lines would be identified as separate text widgets. Note that some non-text widgets also have text content if there is a piece of text within them (e.g., the text button).
_Surrounding Widgets_: NiCro also examines the widget from a contextual aspect. This information is conducive to identifying the widget on the _Target Device_, in particular when there are multiple widgets with the same appearance on the GUI. A widget's parent and neighbours usually remain constant even when the GUI changes responsively to fit different devices. Thus, the extractor includes the _parent widget_ (e.g., container) and the closest _adjacent widgets_ in four directions (i.e., up, down, left and right) for a widget. Note that the _parent
Fig. 4: NiCro’s Robotic System
could be _None_, and the number of the _neighboring widgets_ is 0 to 4 depending on the specific widget.
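The sketch below shows one way to assemble these five attributes from a detection result; the data layout (a NumPy image array and (top, left, bottom, right) boxes) and the simplified nearest-neighbour search are our assumptions for illustration.

```python
import numpy as np

def extract_widget_info(box, text, gui_img, all_boxes):
    """Build the multi-modal representation of one widget.

    box       : (top, left, bottom, right) from the widget detector
    text      : OCR text content, or '' for a purely graphical widget
    gui_img   : the GUI screenshot as an H x W x 3 array
    all_boxes : bounding boxes of every widget on the same GUI
    """
    top, left, bottom, right = box
    width, height = right - left, bottom - top
    shape = (width, height, width * height, width / max(height, 1))  # Shape 4-tuple
    clip = gui_img[top:bottom, left:right]                           # Image Clip

    # Parent widget: the smallest other box that fully contains this one (may be None).
    parents = [b for b in all_boxes
               if b != box and b[0] <= top and b[1] <= left
               and b[2] >= bottom and b[3] >= right]
    parent = min(parents, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]), default=None)

    # Neighbouring widgets: the closest box in each of the four directions.
    def closest(cands, distance):
        return min(cands, key=distance, default=None)
    neighbours = {
        "up":    closest([b for b in all_boxes if b[2] <= top],    lambda b: top - b[2]),
        "down":  closest([b for b in all_boxes if b[0] >= bottom], lambda b: b[0] - bottom),
        "left":  closest([b for b in all_boxes if b[3] <= left],   lambda b: left - b[3]),
        "right": closest([b for b in all_boxes if b[1] >= right],  lambda b: b[1] - right),
    }
    return {"location": box, "shape": shape, "clip": clip,
            "text": text, "parent": parent, "neighbours": neighbours}
```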
### _Widget Matcher_
NiCro utilizes the widget's attributes extracted in the previous step to match its counterparts in other devices. These attributes capture the comprehensive information of the widget and impose different degrees of influence on the matching result. Therefore, different types (i.e., text and non-text) of widgets put distinct priorities upon these attributes while comparing. For the sake of presentation, we denote the _Source Device_ GUI where NiCro records the action as \(G_{s}\), the _Target Device_ GUI where the action is replayed as \(G_{t}\). We also denote the widgets in \(G_{s}\) as \(W_{s}\), and the ones in \(G_{t}\) as \(W_{t}\).
#### Iv-C1 Text Widget Matching
The _Text Content_ is the most decisive factor for matching two text widgets. However, two matched text widgets may not contain identical content. Occasionally, a line of text in a large device's GUI would be broken down into several lines on a small device to fit the screen. As stated in Section IV-B, NiCro's widget information extractor keeps the line-level texts but does not merge multiple lines into a paragraph, so that the text line in the large GUI would be divided into several text widgets in the small GUI. Therefore, the _Text Content_ matching follows the rule: if \(W_{s}.TextContent\subseteq W_{t}.TextContent\) or \(W_{t}.TextContent\subseteq W_{s}.TextContent\), then \(W_{t}\) is a candidate match for \(W_{s}\). If no widget in \(G_{t}\) has matching _Text Content_ with \(W_{s}\), the approach will not apply any other attributes but determines that there is no matching text widget in this GUI.
After matching by this rule, there can be multiple widgets \(W_{t}^{i}(i\in\{1..n\})\) in \(G_{t}\) that match with \(W_{s}\). If none of the \(n\) widgets \(W_{t}^{i}\) has the same content as another, which indicates they are just separate parts of \(W_{s}\), the _Widget Matcher_ selects the one with the longest content as the matched target. Otherwise, if \(W_{s}\) is matched with several widgets that share identical text content, the approach resorts to other attributes to narrow down the candidates. The attributes are checked from simple to complex in order, until only one widget is left. (1) It compares the _Shape_ between \(W_{s}\) and all candidates \(W_{t}^{i}\). The difference between any value in the 4-tuple (_Width, Height, Area, Aspect Ratio_) for the two widgets should be less than a specified threshold. (2) For the remaining candidates, it encodes their _Image Clips_ through a ResNet50 model [33] and calculates the cosine similarity [34] between the encodings of \(W_{t}^{i}\) and \(W_{s}\). Candidates with less than 80% similarity are discarded. (3) If there is still more than one candidate left, the _Widget Matcher_ matches the _Surrounding Widgets_ of \(W_{s}\) and \(W_{t}^{i}\) to single out the final target.
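A condensed sketch of this text-widget matching path is shown below; the dictionary-based widget representation follows the extraction sketch in Section IV-B, the 80% similarity threshold is the one quoted above, and the 0.3 shape tolerance plus all helper names are our own assumptions.

```python
def text_candidates(w_s, targets):
    """Candidate rule: a target widget matches if either Text Content contains the other."""
    return [w_t for w_t in targets
            if w_s["text"] and w_t["text"]
            and (w_s["text"] in w_t["text"] or w_t["text"] in w_s["text"])]

def match_text_widget(w_s, targets, clip_similarity=None, shape_tol=0.3):
    """Pick the matched text widget on a target GUI, following the rules above.

    `clip_similarity(clip_a, clip_b)` is an optional image-similarity callback
    (e.g. the ResNet50 cosine similarity sketched in the next subsection).
    """
    cands = text_candidates(w_s, targets)
    if not cands:
        return None                                      # no matching text widget on this GUI
    texts = [c["text"] for c in cands]
    if len(set(texts)) == len(texts):
        # Candidates are distinct fragments of w_s: keep the one with the longest content.
        return max(cands, key=lambda c: len(c["text"]))
    # Identical-content candidates: filter by Shape, then by Image Clip similarity.
    filtered = [c for c in cands
                if all(abs(a - b) / max(b, 1e-6) <= shape_tol
                       for a, b in zip(c["shape"], w_s["shape"]))]
    cands = filtered or cands
    if clip_similarity is not None and len(cands) > 1:
        close = [c for c in cands if clip_similarity(c["clip"], w_s["clip"]) >= 0.8]
        cands = close or cands
    # A full implementation would finally compare Surrounding Widgets; here we
    # simply return the first remaining candidate.
    return cands[0]
```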
#### Iv-C2 Non-Text Widget Matching
The non-text widgets usually contain more complicated visual features than text widgets do. For example, image and icon widgets convey information through pictures or symbols, and a button has a visible box area with a colour fill or a line boundary. Therefore, the primary attribute for matching a non-text widget is the _Image Clip_. The approach first encodes the _Image Clips_ through a ResNet50 model and computes the cosine similarity between the encodings. However, there can be multiple widgets with the same appearance on the GUI, which requires additional information, other than the image, to match. So the _Widget Matcher_ includes every widget whose similarity is larger than 80% as a candidate. On the other hand, if no widget reaches the similarity threshold, the approach concludes that no widget on this GUI matches the target widget.
Similar to the text widget matching, in the case that multiple candidates are selected, the _Widget Matcher_ considers other attributes. It starts with the _Shape_ (i.e., (_Width, Height, Area, Aspect Ratio_)). But since there can be some nuances in _Shape_ for the same widgets on different devices, the approach allows a small degree of difference but filters out candidates that are too dissimilar. Finally, if the _Shape_ still fails to single out the final widget, the approach chooses the candidate that has the most matched _Surrounding Widgets_ with the target widget.
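Both matching paths rely on the same ResNet50-plus-cosine-similarity step; a minimal sketch of that step using torchvision is given below (the choice of torchvision and the preprocessing pipeline are our assumptions, as the paper only names ResNet50 and cosine similarity).

```python
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# ImageNet-pretrained ResNet50 with the classification head removed,
# so the output is a 2048-dimensional embedding of the widget image clip.
backbone = torchvision.models.resnet50(weights=torchvision.models.ResNet50_Weights.DEFAULT)
encoder = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(clip: Image.Image) -> torch.Tensor:
    with torch.no_grad():
        return encoder(preprocess(clip).unsqueeze(0)).flatten()

def image_similarity(clip_a: Image.Image, clip_b: Image.Image) -> float:
    """Cosine similarity between two widget image clips; >= 0.8 counts as a candidate match."""
    return float(torch.nn.functional.cosine_similarity(embed(clip_a), embed(clip_b), dim=0))
```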
### _GUI Matcher_
NiCro has to match the entire GUIs in two situations: (1) to determine if the page reaches the margin and (2) to check if the widget-independent action (i.e., swipe horizontally and scroll vertically) is complete on a different _Target Device_.
#### Iv-D1 GUI Matching on Same Device
NiCro is a purely non-intrusive system and cannot be informed by the underlying system of the current GUI's position on the page. Therefore, to examine if the GUI is at the margin of a page (e.g., the page bottom), NiCro checks if the GUIs are identical before and after the widget-independent action. For example, if the GUI remains unchanged after a scroll or swipe, it means the page has no content beyond the screen region in the operating direction and has reached its margin. In this case, the identical GUIs can be matched straightforwardly by comparing the pixel images. This simple practice is effective as the content on the same page is unlikely to change in the short time during the action execution. Even if some parts (e.g., Ads) of the GUI occasionally change, NiCro tries to execute the action and match the GUIs again until it finally finds the page has already reached its margin.
#### Iv-D2 GUI Matching on Different Devices
While replaying the widget-independent action on different _Target Devices_, the distance of scrolling or swiping would vary dramatically due to the different lengths of content filled on screens with diverse sizes and resolutions. Hence, to determine whether the page on the _Target Device_ reaches the same or similar position as the recording _Source Device_, the _GUI Matcher_ tries to match all the widgets on the two GUIs. In particular, it defines the GUI from the device with a smaller screen as \(G_{1}\) and the
GUI from the larger device as \(G_{2}\). If all the widgets in \(G_{1}\) are successfully matched to a counterpart in \(G_{2}\), NiCro treats these two GUIs as matched.
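Once widget-level matching is available, the cross-device GUI matching criterion reduces to a simple universal check over the smaller GUI's widgets; a minimal sketch under that assumption follows.

```python
def guis_match(widgets_small, widgets_large, match_widget):
    """Treat two GUIs as matched if every widget of the smaller-screen GUI (G1)
    has a matching counterpart on the larger-screen GUI (G2).

    `match_widget(w, candidates)` is assumed to return the matched widget or
    None; the text and non-text matching routines sketched above can serve as
    its implementation.
    """
    return all(match_widget(w, widgets_large) is not None for w in widgets_small)

# Usage sketch: after each screen-height scroll on the target device, capture its
# widgets and stop once guis_match(source_widgets, target_widgets, matcher) returns
# True, or once the target GUI stops changing (page margin reached).
```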
## V Evaluation
We conduct experiments and analyses in two aspects to evaluate NiCro: (1) the performance of NiCro's core multi-modal matching approach for widgets and GUIs and (2) its capability to complete the record and replay task for cross-device and cross-platform UI testing.
### _Experiment Setup_
To investigate NiCro's cross-device capability, our evaluation leverages 5 different Android devices, including physical and virtual ones, which have a wide range of screen sizes from 3.7" to 8.0". For the cross-platform replay, our evaluation also includes two iPhone devices of different models and an iPad. Table I shows the details of the experimental devices and Fig. 1 demonstrates their GUIs. Because the current _Device Farm_ in NiCro does not support iOS, we only use physical iOS devices, which are operated through the _Robotic System_.
Regarding the subject mobile apps, we select a total of 28 apps spanning 14 frequently-used categories (see Table III). Furthermore, we record and replay actions on the device's home page, for example, tapping the icon to open an app and swiping & scrolling on the home page. The app selection considers the following three perspectives: (1) it covers a range of the most popular app categories to show NiCro's generalization capability; (2) the apps are widely downloaded so that they function stably on different devices; and (3) the apps support both Android and iOS, so as to test NiCro's cross-platform ability.
NiCro's virtual _Device Farm_ is implemented based on Android Studio in version 2021.1.1.23, which provides a variety of emulators with various Android versions. For the _Robotic System_, we leverage an _UArm Swift Pro Robot Arm_ manufactured by Ufactory [35], a 4-axis robot that holds a stylus pen to mimic a human finger to interact with the touch screen. The camera used in our system is a _4K Logitech BRIO Webcam_[36], and the surface is simply a black pad placed under the camera. Finally, the _Host Computer_ we use in the experiment is a desktop and runs Ubuntu 18.04.
### _Accuracy of Widget Matching and GUI Matching_
The evaluation first examines the matching accuracy for widgets and GUIs. We implemented a range of image matching methods and tested them on the same experimental data as NiCro. Different from many previous image-based non-intrusive testing systems that solely or primarily rely on the widget image [37, 14, 3], NiCro adopts multi-modal information to represent and match GUI widgets and the entire GUI. The evaluation includes the techniques applied in the two closest non-intrusive testing works and two individual single-modal approaches NiCro utilizes. In particular, we evaluate (1) _Template Matching_ used in RoScript; (2) _SIFT_ used in LIRAT; (3) _ResNet_ that NiCro applies to encode widget images and (4) _Text Content Matching_ in NiCro to match the text widget's content and the texts on some non-text widgets.
In the experimental process, we iteratively select a _Source Device_ and use the other 7 devices as _Target Devices_. This yields an 8 x 8 matrix of results for each evaluated method, as shown in Fig. 5. From the collected 28 apps and the home page, we randomly chose 25 text widgets and 25
Fig. 5: Experimental result of widget matching
non-text widgets from each app. Overall, 1,450 (29 x (25 + 25)) widgets are used as the experimental subjects, and every widget is guaranteed to be visible on all of the 8 devices while being matched, although their relative position and size would vary in different devices. Besides, we randomly select 5 GUIs from each app on the 8 devices to examine the GUI matching, which brings a total of 1,160 (5 x 29 x 8) GUIs. Then, we apply the matching algorithms to find the most matched widget for a _Target Widget_ and the most matched GUIs for a _Target GUI_ on a _Target Device_. Finally, we manually inspect if the matching is correct and calculate the matching accuracy.
#### V-B1 Non-Text Widget Matching
Fig. 5 presents the experimental results. Overall, the _Text Content_ and _Template Matching_ perform worst in non-text widget matching and text widget matching (0.23 and 0.39), while _NiCro_'s approach achieves the best performance in both cases (0.9 and 0.93). For non-text widgets, 28% of them have text content (e.g., a button with text in the centre), while the remaining 72% do not contain any text information (e.g., image, icon). Since the _Text Content_ can only match widgets through text, it is reasonable that it performs poorly on the non-text widgets. The low accuracy for _SIFT_ is because many of the non-text widgets, such as icons, do not have complicated textures, so _SIFT_ fails to extract sufficient feature points as the widget image's descriptor for accurate matching. _Template Matching_ simply slides the _Target Widget_'s image over the _Target Device_'s GUI image to match the most similar patch. It is highly subject to the scales of the template image (_Target Widget_'s image) and the input image (_Target Device_'s GUI image) [38], and thus it can only perform well when the _Source Device_ and the _Target Device_ have similar screen sizes and the corresponding widgets' appearances are consistent. _ResNet50_ also achieves high accuracy because, as a well-pre-trained image encoder, it represents image features effectively, thereby minimizing the cosine distance between the embeddings of matched widget images.
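The snippet below sketches how a pre-trained ResNet50 can serve as the image encoder for non-text widget matching, following the cosine-distance idea described above. The crop size, normalisation constants, and the choice of the pooled feature layer are our assumptions for illustration, not NiCro's exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# ResNet50 with its classification head removed, used purely as an image encoder:
# crops of the same widget on different devices should end up close in cosine distance.
resnet = models.resnet50(weights="IMAGENET1K_V1")
encoder = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(widget_crop):
    """widget_crop: PIL image of a widget; returns a (1, 2048) embedding."""
    return encoder(preprocess(widget_crop).unsqueeze(0)).flatten(1)

def best_match(target_crop, candidate_crops):
    """Index and score of the candidate closest to the target in cosine similarity."""
    target = embed(target_crop)
    scores = [F.cosine_similarity(target, embed(c)).item() for c in candidate_crops]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```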
#### V-B2 Text Widget Matching
On the other hand, all three image matching algorithms _SIFT_, _Template Matching_ and _ResNet50_ fail to match text widgets precisely for two major reasons: (1) the visual and texture features of text characters are not as distinct as non-text images [39] and (2) a long line of text on a large device would be broken down into multiple short lines on a small device. As analyzed in _Text Widget Matching_ (Sec. IV-C1), the short lines are still supposed to be matched to the long one because they are its partial components, even though their visual properties are apparently different. However, the _Text Content_ reaches a high matching accuracy (0.87), which indicates that the text content is decisive for distinguishing text widgets. Nevertheless, using _Text Content_ solely still fails in some cases where additional information is required, especially when there are multiple same or similar text pieces on a GUI.
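A hedged sketch of the text-content matching idea discussed above: a short, wrapped line from the smaller device is accepted if it is contained in (or largely overlaps with) a longer line on the larger device. The containment rule and the token-overlap threshold are illustrative assumptions, not NiCro's exact rules.

```python
def text_widgets_match(text_small, text_large, token_overlap=0.9):
    """Match text widgets across devices, allowing a short wrapped line on the
    smaller device to match the longer line that contains it."""
    a = text_small.strip().lower()
    b = text_large.strip().lower()
    if a == b:
        return True
    shorter, longer = (a, b) if len(a) <= len(b) else (b, a)
    if shorter and shorter in longer:   # partial component of a wrapped line
        return True
    # Fall back to token overlap to tolerate OCR noise.
    ta, tb = set(shorter.split()), set(longer.split())
    return bool(ta) and len(ta & tb) / len(ta) >= token_overlap
```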
#### V-B3 GUI Matching
The GUI matching accuracy for the three image matching algorithms is interestingly different from their accuracy for individual widgets. _SIFT_ performs much better in GUI matching (0.80) because the entire GUI is much larger than a single widget and contains more SIFT features. It is even better than _ResNet_ (0.76), as _SIFT_ is scale-invariant while _ResNet_ is more subject to the GUI size. _Template Matching_ reaches a noticeably better accuracy for GUIs from similar-size devices, while it is incapable of matching GUIs from devices of different sizes that contain different lengths of content, because this method conducts pixel-level matching. Matching GUIs using _Text Content_ also produces an acceptable accuracy (0.85). Here we adopt the same text content matching strategy as NiCro does, in which if the text content in the smaller GUI can be matched to the larger GUI, it regards them as matched. NiCro's GUI matching approach based on multi-modal widget matching obviously outperforms the compared methods and achieves an accuracy of 0.94.
#### V-B4 NiCro's Performance
NiCro takes advantage of _ResNet50_ and _Text Content_ while matching non-text and text widgets, and also uses multi-modal attributes to further single out the most matched widget and GUI for the _Target Widget_ and _Target GUI_. Although NiCro's matching approach performs significantly better overall, it produces incorrect matching in some situations in our experiment. The most significant factor that affects the matching accuracy is the difference between the _Source Device_ and the _Target Device_. From the results shown in Fig. 5, we observe the three most influential aspects of the devices: (1) screen size, (2) platform and (3) virtual or physical presence. First, the difficulty of matching widgets increases as the screen size difference increases. For example, the widget matching accuracy for widgets in _D1_(8.0") and _D5_(3.7") is noticeably lower than the accuracy for _D3_(6") and _D4_(5"). This is because the GUI varies with the device's screen size, which causes changes in GUI layout and widget shape (e.g., size and aspect ratio). Second, the widget's appearance and surrounding widgets may change dramatically on different platforms, which may affect NiCro's widget matching performance. For instance, the performance is better when the widget in _D6_(iOS) is matched to those in _D7_(iOS) than to widgets on Android devices. Third, the virtual or physical presence of the compared devices may also impact the performance, because the GUI image of a physical device is a photo taken in a natural environment where the environmental light has some effect. This impact can be observed from the experiment, in which the widget matching accuracy for _D2_(physical) and _D3_(virtual) is always lower than the accuracy for _D3_(virtual) and _D4_(virtual), even though they are all Android devices and have similar screen sizes. These effects can also be seen in the GUI matching, which is based on the matched widgets.
### _Action and Test Case Replay Performance_
This part of the evaluation focuses on the system's overall performance in replaying test cases and actions. For each app, we applied 3 to 4 test cases, each composed of 3 to 8 GUI actions (widget-dependent or widget-independent). These test cases are all common usage scenarios that cover the general and specific functionalities of each app, for example, "Change user name" for apps that support user login, and "Search and enroll the most popular Python course" for the Education app. This yields a total of 107 test cases containing 507 widget-dependent actions and 132 widget-independent actions. In the experiment, we randomly select one device as the _Source Device_ to record and then replay the actions on the other 7 devices (_Target Devices_). Therefore, there are 639 (507 + 132) actions recorded and 4,473 (639 x 7) actions replayed. We use a strict replay success criterion: an action replay is successful if it is accurately reproduced on all 7 _Target Devices_, and a test case is successfully replayed if all its actions are replayed correctly.
We observe three phenomena in these app usage scenarios. (1) Widget-independent actions (i.e., swipe or scroll) are commonly involved in the test cases: 66 (61.7%) of the 107 cases contain at least one widget-independent action when recorded. This is because the screen is often not large enough to display all the page content, which requires scrolling or swiping to bring the remaining content into view. (2) The screen size variance across devices causes the target widget recorded on the _Source Device_ to be invisible on smaller _Target Devices_. This happens to 43 test cases (40.2%) when they are replayed on some smaller devices, in which the screen needs to scroll to expose the target widget. (3) The cross-platform implementation inconsistency changes the GUI layout and widget appearance in some apps. 78 test cases (72.9%) experience inconsistency at various levels when replayed on different platforms and they may need more comprehensive information to match target widgets.
With these observations, we compare NiCro with the three latest and closest works: MAPIT [2], LIRAT [11] and RoScript [14]. Table II shows the comparison. It contrasts the three factors in the respective approaches (Cross-Platform; Cross-Device; Widget-Independent actions). We also present the ratio of available test cases for each work in the **TC** column. LIRAT and MAPIT do not support widget-independent actions, making them unable to handle 66 (61.7%) test cases. Moreover, the intrusive technique MAPIT requires the target widget's metadata (e.g., accessibility id, text content, etc.), and we find that MAPIT fails to obtain the information of 91 widgets (18%) out of the 507 widgets in the widget-dependent actions on one or more devices, because of the absence or redundancy of specific widget attributes in the metadata. Besides, the intrusive approach cannot acquire any content within an external component such as an advertisement or WebView, which is the primary element of some apps (e.g., web browsers). This makes an additional 12 test cases fail to run on MAPIT, which leaves at most 29 test cases (27.1%) available for it. Although RoScript's approach solely considers the same UI on the same device, it supports both widget-dependent and widget-independent actions, and only requires the image as input, which enables it to run on all of the test cases in our experiment. Thus, we use RoScript as our baseline to examine NiCro's overall performance.
Table III summarizes the experimental results. Overall, NiCro achieves a promising performance for action replay, in which it accurately replays 86% of widget-dependent actions and 85% of widget-independent actions. In contrast, RoScript only succeeds in 40% and 23% of these actions. Regarding the test case replay, NiCro replays 63% of them without any manual correction (**0-Correction**), where the entire test case replay fails even if only one of its action replays fails. To investigate further, we examine the situation that allows one correction (**1-Correction**) per test case and run NiCro and RoScript again, in which NiCro successfully replays 89% of the cases and significantly outperforms its baseline (20%).
The performance of both approaches varies across different categories of apps. In detail, NiCro performs better on apps containing more textual content than on those containing more image elements. It achieves 0.93 and 0.91 widget-dependent replay accuracy on Email and Chat apps while only reaching 0.76 and 0.73 on Video and Photo apps. This is because it can better detect and match text widgets (as shown in Fig. 5), but for GUIs filled with complicated images or whose widgets overlap images, its UIED-based widget detector cannot perform perfectly and hence causes cascading errors in widget matching. Besides, the replay accuracy on Finance apps (0.81) and Map apps (0.81) is slightly lower for a similar reason: they contain many diagrams and have GUI widgets overlapping these diagrams, which fails the widget detection. On the contrary, RoScript's widget-dependent replay is even worse on textual apps (0.34 on Email apps and 0.36 on Chat apps) than on apps with more images (0.43 on Photo apps), because its Template Matching-based image matching approach cannot handle text-widget matching. Overall, RoScript performs poorly on widget-dependent action replay, also because it cannot work properly if the _Target Widget_ is invisible on the _Target Device_ due to the smaller screen, and it is mostly incapable of matching widgets with different appearances across platforms. On the other hand, NiCro's performance on widget-independent actions obviously surpasses RoScript: NiCro succeeds in 112 of 132 cases while RoScript only completes 30 cases. RoScript's poor performance is because it replays scroll and swipe simply with the same distance as in the recording phase. However, such a practice is inappropriate if the screen sizes of the recording device and the replay devices are inconsistent. This confirms the effectiveness of NiCro's widget-independent action replay approach based on GUI matching.
### _Threats to Validity_
(1) The apps involved could be a threat. To examine NiCro's generalization ability and counteract this, we use as many as 28 popular apps in 14 common categories plus the home page in the experiment. However, there are more apps that would affect the experimental results, such as Game apps that are mainly composed of graphical elements, which the UIED-based detector would not handle perfectly [15]. (2) The tested devices could be another threat. Although we intentionally select a diverse range of devices in terms of physical or virtual presence, screen sizes and operating systems, our selection may still miss some particular devices that would affect the result. However, our evaluation confirms that NiCro can perform record and replay well across physical and virtual devices with or without the same operating system. (3) The robotic system's experimental environment could also be a threat. We run NiCro's robotic system in a relatively stable environment with steady light and a fixed camera focus to acquire clear GUI photos. The quality of the GUI photo taken by the camera may change under different circumstances and affect the widget detection and matching. However, it is not difficult to adjust the physical setting to adapt to a new environment. (4) The code of LIRAT and RoScript is manually re-implemented by ourselves. In our communication with the authors of LIRAT and RoScript, they did not agree to release their source code and data for our comparative analysis due to various restrictions, forcing us to re-implement their techniques. We tried our best and implemented them in a way that strictly follows the description in the original papers.
## VI Related Work
There is a range of intrusive testing frameworks for Android, such as Instrumentation [40], Robotium [41] and Espresso [42]. Monkeyrunner [43] can replay the test script on several devices simultaneously. Behrang et al. [44] and Lin et al. [45] attempt to apply static code analysis to extract GUI models from Android apps to assist GUI testing, which is not suitable for closed-source systems such as iOS. For iOS, the mainstream automated testing tools include KIF [46] and UIAutomation [47]. Nevertheless, the intrusive tools for iOS are even harder to set up as it is not an open-source system, where the user has to pay a fee to apply for an Apple developer account and configure a set of software and hardware dependencies. Besides, all the above techniques do not support cross-platform testing.
TestMig [9] is an early attempt to migrate tests from iOS to Android. However, this migration is single-directional and does not support the reverse direction, i.e., replaying Android test scripts on iOS. Appium [47] uses Android Debug Bridge [48] and Web Driver Agent [49] to communicate with Android and iOS devices through its Python-based test script. MAPIT [2] is the latest work for cross-platform UI testing migration that enables test migration from iOS to Android and the other way around. However, it still relies on the metadata to extract and match widgets and fails to tackle the cases where the metadata is not available or analyzable.
Some works resort to image-based approaches for UI testing migration. AppTestMigrator [50] computes the similarity among GUI widgets and migrates test cases between apps with similar features. Sikuli [3] provides a visual test script that records the widget screenshot image and uses the template matching algorithm [29] to find the target widget on different devices while replaying. LIRAT [11] is the latest approach to perform cross-platform automated UI testing based on GUI images. In addition, some works utilize robot arms to physically interact with devices [12, 14, 51]. Among them, RoScript [14] is the state of the art for UI record-and-replay testing, which leverages image processing to recognise the human action in a recording video and uses template matching [29] to locate the target widgets to replay. However, these approaches are limited by their preliminary visual processing techniques and do not consider the GUI variance across devices with diverse screen sizes and operating systems.
## VII Conclusion and Future Work
This paper presents a novel non-intrusive cross-device and cross-platform system named NiCro to perform GUI action record and replay. NiCro consists of a _Device Farm_ and a _Robotic System_ to support interaction with virtual as well as physical devices. It also utilizes a UI-specific widget detector and extracts comprehensive UI information to detect and match widgets and GUIs across various devices. Our comprehensive evaluation demonstrates the promise of NiCro as an early attempt at visual intelligence-based techniques toward more general non-intrusive software engineering tasks across platforms, architectures and devices.
|
2303.17096
|
ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing
|
Recent studies have shown that higher accuracy on ImageNet usually leads to
better robustness against different corruptions. Therefore, in this paper,
instead of following the traditional research paradigm that investigates new
out-of-distribution corruptions or perturbations deep models may encounter, we
conduct model debugging in in-distribution data to explore which object
attributes a model may be sensitive to. To achieve this goal, we create a
toolkit for object editing with controls of backgrounds, sizes, positions, and
directions, and create a rigorous benchmark named ImageNet-E(diting) for
evaluating the image classifier robustness in terms of object attributes. With
our ImageNet-E, we evaluate the performance of current deep learning models,
including both convolutional neural networks and vision transformers. We find
that most models are quite sensitive to attribute changes. A small change in
the background can lead to an average of 9.23\% drop on top-1 accuracy. We also
evaluate some robust models including both adversarially trained models and
other robust trained models and find that some models show worse robustness
against attribute changes than vanilla models. Based on these findings, we
discover ways to enhance attribute robustness with preprocessing, architecture
designs, and training strategies. We hope this work can provide some insights
to the community and open up a new avenue for research in robust computer
vision. The code and dataset are available at
https://github.com/alibaba/easyrobust.
|
Xiaodan Li, Yuefeng Chen, Yao Zhu, Shuhui Wang, Rong Zhang, Hui Xue
|
2023-03-30T02:02:32Z
|
http://arxiv.org/abs/2303.17096v1
|
# ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing
###### Abstract
Recent studies have shown that higher accuracy on ImageNet usually leads to better robustness against different corruptions. Therefore, in this paper, instead of following the traditional research paradigm that investigates new out-of-distribution corruptions or perturbations deep models may encounter, we conduct model debugging in in-distribution data to explore which object attributes a model may be sensitive to. To achieve this goal, we create a toolkit for object editing with controls of backgrounds, sizes, positions, and directions, and create a rigorous benchmark named ImageNet-E(diting) for evaluating the image classifier robustness in terms of object attributes. With our ImageNet-E, we evaluate the performance of current deep learning models, including both convolutional neural networks and vision transformers. We find that most models are quite sensitive to attribute changes. A small change in the background can lead to an average of 9.23% drop on top-1 accuracy. We also evaluate some robust models including both adversarially trained models and other robust trained models and find that some models show worse robustness against attribute changes than vanilla models. Based on these findings, we discover ways to enhance attribute robustness with preprocessing, architecture designs, and training strategies. We hope this work can provide some insights to the community and open up a new avenue for research in robust computer vision. The code and dataset are available at [https://github.com/alibaba/easyrobust](https://github.com/alibaba/easyrobust).
+
Footnote †: This research is supported in part by the National Key Research and Development Program of China under Grant No.2020AAA0140000.
## 1 Introduction
Deep learning has triggered the rise of artificial intelligence and has become the workhorse of machine intelligence. Deep models have been widely applied in various fields such as autonomous driving [27], medical science [32], and finance [37]. With the spread of these techniques, the robustness and safety issues begin to be essential, especially after the finding that deep models can be easily fooled by negligible noises [15]. As a result, more researchers contribute to building datasets for benchmarking model robustness to spot vulnerabilities in advance.
Most of the existing work builds datasets for evaluating the model robustness and generalization ability on out-of-distribution data [21, 6, 29] using adversarial examples and common corruptions. For example, the ImageNet-C(orruption) dataset conducts visual corruptions such as Gaussian noise to input images to simulate the possible processors in real scenarios [21]. ImageNet-R(enditions) contains various renditions (_e.g._, paintings, embroidery) of ImageNet object classes [20]. Both studies have found that higher accuracy on ImageNet usually leads to better robustness against different domains [21, 50]. However, most previous studies try to achieve this in a top-down way, such as architecture design, exploring a better training strategy, _etc_. We advocate that it is also essential to manage it in a bottom-up way, that is, conducting model debugging with the in-distribution dataset to provide clues for model repairing and accuracy improvement. For example, it is interesting to explore whether a bird with a water background can be recognized correctly even if most birds appear with trees or grasses in the training data. Though this topic has been investigated in studies such as cause-and-effect analysis [8], the experiments and analysis are undertaken on domain generalization datasets. How a deep model generalizes to different backgrounds is still unknown due to the vacancy of a qualified benchmark. Therefore, in this paper, we provide a detached object editing tool to conduct model debugging from the perspective of object attributes and construct a dataset named ImageNet-E(diting).
The ImageNet-E dataset is a compact but challenging test set for object recognition that contains controllable object attributes including backgrounds, sizes, positions and directions, as shown in Fig. 1. It contrasts with ObjectNet [5], whose images are collected by workers posing objects according to specific instructions and therefore differ from the target data distribution, which makes it hard to tell whether the degradation comes from the change of attribute or of distribution. Our ImageNet-E is automatically generated with our object attribute editing tool based on the original ImageNet. Specifically, to change the object background, we provide an object background editing method that can make the background simpler or more complex based on diffusion models [24, 46]. In this way, one can easily evaluate how much the background complexity can influence the model performance. To control the object size, position, and direction to simulate pictures taken from different distances and angles, an object editing method is also provided. With the editing toolkit, we apply it to the large-scale ImageNet dataset [41] to construct our ImageNet-E(diting) dataset. It can serve as a general dataset for benchmarking robustness evaluation on different object attributes.
With the ImageNet-E dataset, we evaluate the performance of current deep learning models, including both convolutional neural networks (CNNs), vision transformers as well as the large-scale pretrained CLIP [39]. We find that deep models are quite sensitive to object attributes. For example, when editing the background towards high complexity (see Fig. 1, the \(3\)rd row in the background part), the drop in top-1 accuracy reaches 9.23% on average. We also find that though some robust models share similar top-1 accuracy on ImageNet, the robustness against different attributes may differ a lot. Meanwhile, some models, being robust under certain settings, even show worse results than the vanilla ones on our dataset. This suggests that improving robustness is still a challenging problem and the object attributes should be taken into account. Afterward, we discover ways to enhance robustness against object attribute changes. The main contributions are summarized as follows:
* We provide an object editing toolkit that can change the object attributes for manipulated image generation.
* We provide a new dataset called ImageNet-E that can be used for benchmarking robustness to different object attributes. It opens up new avenues for research in robust computer vision against object attributes.
* We conduct extensive experiments on ImageNet-E and find that models that have good robustness on adversarial examples and common corruptions may show poor performance on our dataset.
## 2 Related Work
The literature related to attribute robustness benchmarks can be broadly grouped into the following themes: robustness benchmarks and attribute editing datasets. Existing robustness benchmarks such as ImageNet-C(orruption) [21], ImageNet-R(endition) [20], ImageNet-Stylized [13] and ImageNet-3DCC [29] mainly focus on the exploration of the corrupted or out-of-distribution data that models may encounter in reality. For instance, the ImageNet-R dataset contains various renditions (_e.g._, paintings, embroidery) of ImageNet object classes. ImageNet-C analyzes image models in terms of various simulated image corruptions (_e.g._, noise, blur, weather, JPEG compression, _etc._). Attribute editing dataset creation is a new topic and few studies have explored it before. Among them, ObjectNet [5] and ImageNet-9 (_a.k.a._ background challenge) [50] can be the representative. Specifically, ObjectNet collects a large real-world test set for object recognition with controls where object backgrounds, rotations, and imaging viewpoints are random. The images in ObjectNet are collected by their workers who image objects in their homes. It consists of \(313\) classes which are mainly household objects. ImageNet-9 mainly creates a suit of datasets that help disentangle the impact of foreground and background signals on classification. To achieve this goal, it uses coarse-grained classes
with corresponding rectangular bounding boxes to remove the foreground and then pastes the cut area with other backgrounds. It can be observed that a dataset that can smoothly edit object attributes is still lacking.
## 3 Preliminaries
Since the editing tool is developed based on diffusion models, let us first briefly review the theory of denoising diffusion probabilistic models (DDPM) [24, 46] and analyze how it can be used to generate images.
According to the theory of Markov chains, one can always reach a desired stationary distribution from a given distribution along the Markov chain [14]. To get a generative model that can generate images from random Gaussian noise, one only needs to construct a Markov chain whose stationary distribution is the Gaussian distribution. This is the core idea of DDPM. In DDPM, given a data distribution \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\), a forward noising process produces a series of latents \(\mathbf{x}_{1},...,\mathbf{x}_{T}\) of the same dimensionality as the data \(\mathbf{x}_{0}\) by adding Gaussian noise with variance \(\beta_{t}\in(0,1)\) at time \(t\):
\[q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\sqrt{1-\beta_{t}}\mathbf{x}_{ t-1},\beta_{t}\mathbf{I}),s.t.\ 0<\beta_{t}<1, \tag{1}\]
where \(\beta_{t}\) is the diffusion rate. Then the distribution \(q(\mathbf{x}_{t}|\mathbf{x}_{0})\) at any time \(t\) is:
\[q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I}),\ \mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon \tag{2}\]
where \(\bar{\alpha}_{t}=\prod_{s=1}^{t}(1-\beta_{s})\), \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\). It can be proved that \(\lim_{t\to\infty}q(\mathbf{x}_{t})=\mathcal{N}(0,\mathbf{I})\). In other words, we can map the original data distribution into a Gaussian distribution with enough iterations. Such a stochastic forward process is called the diffusion process, since what the process \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\) does is add noise to \(\mathbf{x}_{t-1}\).
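The closed-form forward process in Eq. (2) can be sampled directly, as in the short sketch below; the linear \(\beta_{t}\) schedule is illustrative and not prescribed by the text.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # illustrative linear schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # bar{alpha}_t = prod_s (1 - beta_s)

def q_sample(x0: torch.Tensor, t: int, noise: torch.Tensor = None) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)  (Eq. 2)."""
    if noise is None:
        noise = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1.0 - alpha_bar[t]).sqrt() * noise
```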
To draw a fresh sample from the distribution \(q(\mathbf{x}_{0})\), the Markov process is reversed. That is, beginning from a Gaussian noise sample \(\mathbf{x}_{T}\sim\mathcal{N}(0,\mathbf{I})\), a reverse sequence is constructed by sampling the posteriors \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\). To approximate the unknown function \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\), in DDPMs, a deep model \(p_{\theta}\) is trained to predict the mean and the covariance of \(\mathbf{x}_{t-1}\) given \(\mathbf{x}_{t}\) instead. Then the \(\mathbf{x}_{t-1}\) can be sampled from the normal distribution defined as:
\[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mu_{\theta}(\mathbf{ x}_{t},t),\Sigma_{\theta}(\mathbf{x}_{t},t)). \tag{3}\]
Instead of inferring \(\mu_{\theta}(\mathbf{x}_{t},t)\) directly, [24] proposes to predict the noise \(\epsilon_{\theta}(\mathbf{x}_{t},t)\) which was added to \(\mathbf{x}_{0}\) to get \(\mathbf{x}_{t}\) via Eq. (2). Then \(\mu_{\theta}(\mathbf{x}_{t},t)\) is:
\[\mu_{\theta}(\mathbf{x}_{t},t)=\frac{1}{\sqrt{1-\beta_{t}}}\left(\mathbf{x}_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(\mathbf{x}_{t},t)\right). \tag{4}\]
[24] keeps the value of \(\Sigma_{\theta}(\mathbf{x}_{t},t)\) constant. As a result, given a sample \(\mathbf{x}_{t}\) at time \(t\), with a trained model that can predict the noise \(\epsilon_{\theta}(\mathbf{x}_{t},t)\), we can compute \(\mu_{\theta}(\mathbf{x}_{t},t)\) according to Eq. (4), sample \(\mathbf{x}_{t-1}\) from Eq. (3), and eventually reach \(\mathbf{x}_{0}\).
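A minimal sketch of one reverse step following Eqs. (3)-(4) is given below. Here `eps_model` is a placeholder for the trained noise predictor \(\epsilon_{\theta}\), the noise schedule is illustrative, and the variance is kept fixed at \(\beta_{t}\mathbf{I}\) as stated above.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # illustrative schedule, as above
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

@torch.no_grad()
def p_sample(eps_model, x_t: torch.Tensor, t: int) -> torch.Tensor:
    """One reverse step x_t -> x_{t-1} following Eqs. (3)-(4).

    eps_model(x, t) is a placeholder for the trained noise predictor epsilon_theta.
    """
    beta_t, abar_t = betas[t], alpha_bar[t]
    eps = eps_model(x_t, t)
    # Posterior mean (Eq. 4), written with the defined quantities beta_t and abar_t.
    mean = (x_t - beta_t / (1.0 - abar_t).sqrt() * eps) / (1.0 - beta_t).sqrt()
    if t == 0:
        return mean
    # Fixed variance Sigma_theta = beta_t * I, kept constant as in DDPM.
    return mean + beta_t.sqrt() * torch.randn_like(x_t)
```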
Previous studies have shown that diffusion models can achieve superior image generation quality compared to the current state-of-the-art generative models [1]. Besides, there have been plenty of works on utilizing the DDPMs to generate samples with desired properties, such as semantic image translation [36], high fidelity data generation from low-density regions [44], _etc_. In this paper, we also choose the DDPM adopted in [1] as our generator.
## 4 Attribute Editing with Diffusion Models and ImageNet-E
Most previous robustness-related work has focused on the important challenges of robustness on adversarial examples [6], common corruptions [21]. They have found that higher clean accuracy usually leads to better robustness.
Figure 2: Attribute editing with DDPMs. Given an input image and its corresponding object mask, the object is first removed with an inpainting operation to get the pure background image. Then, we leverage the diffusion process to edit the background image \(\mathbf{x}_{0}\) and the object image coherently. \(\odot\) denotes the element-wise blending of these two images using the object mask. For background editing, the background complexity objective function is added during the diffusion process (Alg. 1, line \(5\)). For other object attribute editing, the object image needs to be transformed first (Alg. 2, line \(1\)).
Therefore, instead of exploring a new corruption that models may encounter in reality, we pay attention to model debugging in terms of object attributes, hoping to provide new insights into clean accuracy improvement. In the following, we describe our object attribute editing tool and the generated ImageNet-E dataset in detail. The whole pipeline can be found in Fig. 2.
### Object Attribute Editing with Diffusion Models
**Background editing.** Most existing corruptions conduct manipulations on the whole image, as shown in Fig. 1. Compared to adding global corruptions that may hinder the visual quality, a more likely-to-happen way in reality is to manipulate the backgrounds to fool the model. Besides, it is shown that there exists a spurious correlation between labels and image backgrounds [12]. From this point, a background corruption benchmark is needed to evaluate the model's robustness. However, the existing background challenge dataset achieves background editing with a copy-paste operation, resulting in obvious artifacts in the generated images [50]. This may leave some doubts about whether the evaluation is precise, since the dataset's distribution may have changed. To alleviate this concern, we adopt the DDPM approach to incorporate background editing by adding a guiding loss that leads to backgrounds with desired properties, so that the generated images stay in/close to the original distribution. Specifically, we choose to manipulate the background in terms of texture complexity, based on the hypothesis that an object should be observed more easily against simple backgrounds than against complicated ones. In general, the texture complexity can be evaluated with the gray-level co-occurrence matrix (GLCM) [16], which calculates the gray-level histogram to show the texture characteristic. However, the calculation of GLCM is non-differentiable, thus it cannot serve as the conditional guidance of image generation. We hypothesize that a complex image should contain more frequency components in its spectrum and that higher amplitude indicates greater complexity. Thus, we define the objective of complexity as:
\[\mathcal{L}_{c}=\sum\left|\mathcal{A}(\mathcal{F}(\mathbf{x}))\right|, \tag{5}\]
where \(\mathcal{F}\) is the Fourier transform [45], \(\mathcal{A}\) extracts the amplitude of the input spectrum. \(\mathbf{x}\) is the evaluated image. Since minimizing this loss helps us generate an image with desired properties and should be conducted on the \(\mathbf{x}_{0}\), we need a way of estimating a clean image \(\mathbf{x}_{0}\) from each noisy latent representation \(\mathbf{x}_{t}\) during the denoising diffusion process. Recall that the process estimates at each step the noise \(\epsilon_{\theta}(\mathbf{x}_{t},t)\) added to \(\mathbf{x}_{0}\) to obtain \(\mathbf{x}_{t}\). Thus, \(\hat{\mathbf{x}}_{0}\) can be estimated via Equation (6) [1]. The whole optimization procedure is shown in Algorithm 1.
\[\hat{\mathbf{x}}_{0}=\frac{\mathbf{x}_{t}}{\sqrt{\bar{\alpha}_{t}}}-\frac{\sqrt{1-\bar{\alpha}_{t}}\epsilon_{\theta}(\mathbf{x}_{t},t)}{\sqrt{\bar{\alpha}_{t}}}. \tag{6}\]
As shown in Fig. 3(a), with the proposed method, when we guide the generation procedure with the proposed objective towards the complex direction, it will return images with visually complex backgrounds. We also provide the GLCM dissimilarity and contrast of each image to make a quantitative analysis of the generated images. A higher dissimilarity/contrast score indicates a more complex image background [16]. It can be observed that the complexity is consistent with that calculated with GLCM, indicating the effectiveness of the proposed method.
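A minimal PyTorch sketch of the complexity objective in Eq. (5): the loss is the total amplitude of the image's 2-D Fourier spectrum, so guiding the diffusion process to increase (or decrease) it pushes the estimated clean background toward more (or less) high-frequency content.

```python
import torch

def complexity_loss(x: torch.Tensor) -> torch.Tensor:
    """L_c = sum |A(F(x))|: total amplitude of the 2-D Fourier spectrum (Eq. 5).

    x: image tensor of shape (B, C, H, W).
    """
    spectrum = torch.fft.fft2(x, dim=(-2, -1))
    return spectrum.abs().sum()
```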
**Controlling object size, position and direction.** In general, the human vision system is robust to position, direction and small size changes. Whether deep models are also robust to these object attribute changes is still unknown to researchers. Therefore, we conduct the image editing with controls of object sizes, positions and directions to find the answer. For a valid evaluation on different attributes, all other variables should remain unchanged, especially the background. Therefore, we first disentangle the object and background with the in-painting strategy provided by [54]. Specifically, we mask the object area in input image \(\mathbf{x}\). Then we conduct in-painting to remove the object and get the pure background image \(\mathbf{x}^{b}\), as shown in Fig. 3(b) column \(3\). To realize the aforementioned object attribute controlling, we adopt the orthogonal transformation. Denote \(P\) as the pixel locations of the object in image \(\mathbf{x}\), where \(P\in\mathbb{R}^{3\times N_{o}}\). \(N_{o}\) is the number of pixels belonging to the object and \(p_{i}=[x_{i},y_{i},1]^{T}\) is the position of the object's \(i\)-th pixel. \(h^{\prime}\in[0,H-h],w^{\prime}\in[0,W-w]\), where \([x,y,w,h]\) stands for the enclosing rectangle of the object with mask \(M\). Then the newly edited \(\mathbf{x}[T_{\text{attribute}}\cdot P]=\mathbf{x}[P]\) and \(M[T_{\text{attribute}}\cdot P]=M[P]\), where
\[T_{\text{size}}=\left[\begin{smallmatrix}s&0&\Delta x\\ 0&s&\Delta y\\ 0&0&1\end{smallmatrix}\right],T_{\text{pos}}=\left[\begin{smallmatrix}1&0&w^{\prime}\\ 0&1&h^{\prime}\\ 0&0&1\end{smallmatrix}\right],T_{\text{direction}}=\left[\begin{smallmatrix}\cos\theta&\sin\theta&0\\ -\sin\theta&\cos\theta&0\\ 0&0&1\end{smallmatrix}\right] \tag{7}\]
where \(s\) is the resize scale. \(\theta\) is the rotation angle. \(\Delta x=(1-s)\cdot(x+w/2),\Delta y=(1-s)\cdot(y+h/2)\).
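The homogeneous transforms of Eq. (7) can be written down directly, as in the sketch below; applying \(T\) to the \(3\times N_{o}\) matrix of homogeneous pixel coordinates \(P\) yields the new object pixel positions.

```python
import numpy as np

def T_size(s, x, y, w, h):
    """Scale the object by s about the centre of its bounding box [x, y, w, h]."""
    dx = (1 - s) * (x + w / 2)
    dy = (1 - s) * (y + h / 2)
    return np.array([[s, 0, dx], [0, s, dy], [0, 0, 1]], dtype=float)

def T_pos(w_new, h_new):
    """Translate the object by the offset (w', h') inside the image."""
    return np.array([[1, 0, w_new], [0, 1, h_new], [0, 0, 1]], dtype=float)

def T_direction(theta):
    """Rotate the object's pixel coordinates by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]], dtype=float)

# P is the 3 x N_o matrix of homogeneous object pixel coordinates [x_i, y_i, 1]^T;
# the edited positions are T @ P, and the text assigns x[T @ P] = x[P].
```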
With the background image \(\mathbf{x}^{b}\) and edited object \(\mathbf{x}^{o}\), a naive way is to place the object from the original image onto the corresponding area of the background image \(\mathbf{x}^{b}\) as \(M\odot\mathbf{x}^{o}+(1-M)\odot\mathbf{x}^{b}\). However, the result generated in this manner may look disharmonic, lacking a delicate adjustment to blend them together. Besides, as shown in Fig. 3(b) column \(3\), the object-removing operation may leave some artifacts behind, failing to produce a coherent and seamless result. To deal with this problem, we leverage DDPM models to blend them at different noise levels along the diffusion process. Denote the image with the desired object attribute as \(\mathbf{x}^{o}\). Starting from the pure background image \(\mathbf{x}^{b}\) at time \(t_{0}\), at each stage we perform a guided diffusion step with a latent \(\mathbf{x}_{t}\) to obtain \(\mathbf{x}_{t-1}\) and, at the same time, obtain a noised version of the object image
\(\mathbf{x}_{t-1}^{o}\). Then the two latents are blended with the mask \(M\) as \(\mathbf{x}_{t-1}=M\odot\mathbf{x}_{t-1}^{o}+(1-M)\odot\mathbf{x}_{t-1}\). The DDPM denoising procedure may change the background. Thus a proper initial timing is required to maintain a high resemblance to the original background. We set the iteration steps \(t_{0}\) as 50 and 25 in Algorithm 1 and 2 respectively.
```
input : source image \(\mathbf{x}\), mask \(M\), diffusion model \((\mu_{\theta}(\mathbf{x}_{t}),\Sigma_{\theta}(\mathbf{x}_{t}))\), \(\bar{\alpha}_{t}\), \(\lambda\), iteration steps \(t_{0}\)
output : edited image \(\mathbf{x}_{0}\)
1 \(\mathbf{x}_{t_{0}}\sim\mathcal{N}(\sqrt{\bar{\alpha}_{t_{0}}}\mathbf{x},(1-\bar{\alpha}_{t_{0}})\mathbf{I})\);
2 for \(t\gets t_{0}\) to \(1\) do
3     \(\hat{\mathbf{x}}_{0}\leftarrow\frac{\mathbf{x}_{t}}{\sqrt{\bar{\alpha}_{t}}}-\frac{\sqrt{1-\bar{\alpha}_{t}}\,\epsilon_{\theta}(\mathbf{x}_{t},t)}{\sqrt{\bar{\alpha}_{t}}}\);
4     \(\nabla_{bg}\leftarrow\nabla_{\mathbf{x}_{t}}\mathcal{L}_{c}(\hat{\mathbf{x}}_{0})\);
5     \(\mathbf{x}_{t-1}^{b}\sim\mathcal{N}(\mu_{\theta}(\mathbf{x}_{t})+\lambda\Sigma_{\theta}(\mathbf{x}_{t})\nabla_{bg},\Sigma_{\theta}(\mathbf{x}_{t}))\);
6     \(\mathbf{x}^{o}\sim\mathcal{N}(\sqrt{\bar{\alpha}_{t}}\mathbf{x},(1-\bar{\alpha}_{t})\mathbf{I})\);
7     \(\mathbf{x}_{t-1}\gets M\odot\mathbf{x}^{o}+(1-M)\odot\mathbf{x}_{t-1}^{b}\);
8 end for
```
**Algorithm 1** Background editing
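The loop below is a hedged PyTorch-style sketch of Algorithm 1: at each step the clean background is estimated via Eq. (6), the complexity gradient of Eq. (5) shifts the predicted mean, and a correspondingly-noised copy of the original object is blended back in through the mask. The `diffusion_model` interface returning the mean, variance and predicted noise is an assumption for illustration, not the exact guided-diffusion API used in the paper.

```python
import torch

def edit_background(x, mask, diffusion_model, alpha_bar, lam=20.0, t0=50):
    """PyTorch-style sketch of Algorithm 1 (background editing).

    x: source image (1, C, H, W); mask: object mask, 1 on object pixels.
    diffusion_model(x_t, t) -> (mu, sigma2, eps_pred)   # assumed interface
    alpha_bar: cumulative products of (1 - beta_t); lam: guidance scale lambda.
    """
    # Line 1: noise the source image to the starting level t0.
    x_t = alpha_bar[t0].sqrt() * x + (1 - alpha_bar[t0]).sqrt() * torch.randn_like(x)
    for t in range(t0, 0, -1):
        x_t = x_t.detach().requires_grad_(True)
        mu, sigma2, eps_pred = diffusion_model(x_t, t)
        # Line 3 / Eq. (6): estimate the clean image from the current latent.
        x0_hat = (x_t - (1 - alpha_bar[t]).sqrt() * eps_pred) / alpha_bar[t].sqrt()
        # Lines 4-5: complexity guidance (Eq. 5) shifts the predicted mean.
        grad = torch.autograd.grad(torch.fft.fft2(x0_hat, dim=(-2, -1)).abs().sum(), x_t)[0]
        x_bg = mu + lam * sigma2 * grad + sigma2.sqrt() * torch.randn_like(x_t)
        # Lines 6-7: blend a correspondingly-noised copy of the original object back in.
        x_obj = alpha_bar[t].sqrt() * x + (1 - alpha_bar[t]).sqrt() * torch.randn_like(x)
        x_t = mask * x_obj + (1 - mask) * x_bg
    return x_t.detach()
```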
### ImageNet-E dataset
With the tool above, we conduct object attribute editing including background, size, direction and position changes based on the large-scale ImageNet dataset [41] and ImageNet-S [11], which provides the mask annotation. To guarantee the dataset quality, we choose the animal classes from ImageNet classes such as dogs, fishes and birds, since they appear more in nature without messy backgrounds. Classes such as stove and mortarboard are removed. Finally, our dataset consists of \(47872\) images with \(373\) classes based on the initial 4352 images, to each of which 11 transforms are applied. Detailed information can be found in Appendix A. For background editing, we choose three levels of complexity, including \(\lambda=-20\), \(\lambda=20\) and \(\lambda=20\)-adv with adversarial guidance instead of complexity guidance (see Sec. B for details). Larger \(\lambda\) indicates stronger guidance towards high complexity. For the object size, we design four levels of sizes in terms of the object pixel rates (\(=\text{sum}(M>0.5)/\text{sum}(M\geq 0)\)): \([\text{Full},0.1,0.08,0.05]\) where 'Full' indicates making the object as large as possible while maintaining its whole body inside the image. Smaller rates indicate smaller objects. For object position, we find that some objects hold a high object pixel rate in the whole image, resulting in a small \(H-h\). Take the first picture in Fig. 3 for example: the dog is large, and changing its position would make little visual difference. Thus, we adopt the data whose pixel rate is 0.05 as the initial images for the position-changing operation.
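For reference, the object pixel rate that defines the size levels above can be computed as in the following small sketch (mask values are assumed to lie in \([0,1]\)).

```python
import numpy as np

def object_pixel_rate(mask: np.ndarray) -> float:
    """Pixel rate = sum(M > 0.5) / sum(M >= 0), used to define the
    Full / 0.1 / 0.08 / 0.05 size levels (e.g. 0.05 means the object
    covers roughly 5% of the image)."""
    return float((mask > 0.5).sum()) / float((mask >= 0).sum())
```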
In contrast to benchmarks like ImageNet-C [21], which give images from different domains so that the model robustness in those situations may be assessed, our effort aims to give an editable image tool that can conduct model debugging with in-distribution (ID) data, in order to identify specific shortcomings of different models and provide some insights for clean accuracy improvement. Thus, the data distribution should not differ much from the original ImageNet. We choose the out-of-distribution (OOD) detection methods Energy [33] and GradNorm [26] to evaluate whether our editing tool moves the edited images out of their original distribution. These OOD detection methods aim to distinguish OOD examples from ID examples.
Figure 3: (a) Images generated with the proposed background complexity editing method. (b) Edited images with size changing. The Fréchet inception distance (FID) for pasting is 50.64 while it is 32.59 for ours, indicating the effectiveness of leveraging DDPMs.
The results are shown in Fig. 4. The \(x\)-axis is the ID score in terms of the quantities in Energy and GradNorm and the \(y\)-axis is the frequency of each ID score. A high ID score indicates that the detection method regards the input sample as ID data. Compared to other datasets, our method barely changes the data distribution under both the Energy (the 1st row) and GradNorm (the 2nd row) evaluation methods. Besides, the Fréchet inception distance (FID) [23] for our ImageNet-E is 15.57 under the random background setting, while it is 34.99 for ImageNet-9 (background challenge). These all imply that our editing tool preserves the proximity to the original ImageNet and thus enables a controlled evaluation of object attribute changes. To find out whether the DDPM itself induces some degradation to our evaluation, we have conducted an experiment in Tab. 1 with the setting \(\lambda=0\) during background editing. This operation first adds noises to the original images and then denoises them. It can be found in the "Inver" column that the degradation is negligible compared to the degradation induced by attribute changes.
## 5 Experiments
We conduct evaluation experiments on various architectures including both CNNs (ResNet (RN) [19], DenseNet [25], EfficientNet (EF) [47], ResNeSt [53], ConvNeXt [35]) and transformer-based models (Vision Transformer (ViT) [9], Swin Transformer (Swin) [34]). Other state-of-the-art models that are trained with extra data, such as CLIP [39] and EfficientNet-L2-Noisy-Student [51], are also evaluated in the Appendix. Apart from different sizes of these models, we have also evaluated their adversarially trained versions for comprehensive studies. We report the drop of top-1 accuracy as the metric, based on the idea that attribute changes should have little influence on a robustly trained model. More experimental details and results of top-1 accuracy can be found in the Appendix.
### Robustness evaluation
**Normally trained models.** To find out whether the widely used models in computer vision have gained robustness against changes in different object attributes, we conduct extensive experiments on different models. As shown in Tab. 1, when only the background is edited towards high complexity, the average drop in top-1 accuracy is 9.23% (\(\lambda=20\)). This indicates that most models are sensitive to object background changes. Other attribute changes such as size and position can also lead to model performance degradation. For example, when changing the object pixel rate to \(0.05\), as shown in Fig. 1 row \(4\) in the 'size' column, while we can still recognize the image correctly, the performance drop is 18.34% on average. We also find that the robustness under different object attributes improves along with improvements in clean accuracy (Original) on different models. Accordingly, a switch from an RN50 (92.69% top-1 accuracy) to a Swin-S (96.21%) decreases the average accuracy drop from 15.72% to 10.20%. By this measure, models have become more and more capable of generalizing to different backgrounds, which implies that they indeed learn some robust features. This shows that object attribute robustness can be a good way to measure future progress in representation learning. We also observe that larger networks possess better robustness against attribute editing. For example, swapping a Swin-S (96.21% top-1 accuracy) with the larger Swin-B (95.96% top-1 accuracy) decreases the dropped accuracy from 10.20% to 8.99% when \(\lambda=20\). In a similar fashion, a ConvNeXt-T (9.32% drop) is less robust than the giant ConvNeXt-B (7.26%). Consequently, models with even more depth, width, and feature aggregation may attain further attribute robustness. Previous studies [30] have shown that zero-shot CLIP exhibits better out-of-distribution robustness than the finetuned CLIP, whereas the opposite holds on our ImageNet-E, as shown in Tab. 1. This may serve as evidence that our ImageNet-E has good proximity to ImageNet. We also find that compared with fully
Figure 4: Distributions of ID score of different datasets in terms of the quantities in Energy (the first row) and GradNorm (the second row) for in-distribution (ImageNet) and other datasets. Higher overlap indicates greater proximity to ImageNet.
supervised trained models with the same backbone (ViT-B), CLIP fails to show better attribute robustness. We suspect this is because CLIP has spared some of its capacity for OOD robustness.
**Adversarially trained models.** Adversarial training [42] is one of the state-of-the-art methods for improving the adversarial robustness of deep models and has been widely studied [2]. To find out whether they can boost the attribute robustness, we conduct extensive experiments in terms of different architectures and perturbation budgets (constraints of \(l_{2}\) norm bound). As shown in Fig. 5, the adversarially trained ones are not robust against attribute changes including both backgrounds and size-changing situations. The dropped accuracies are much greater compared to normally trained models. As the perturbation budget grows, the situation gets worse. This indicates that adversarial training can do harm to robustness against attributes.
### Robustness enhancements
Based on the above evaluations, we step further to discover ways to enhance the attribute robustness in terms of preprocessing, network design and training strategies. More details including training setting and numerical experimental results can be found in Appendix C.5.
**Preprocessing.** Given that an object can be inconspicuous due to its small size or subtle position, viewing an object at several different locations may lead to a more stable prediction. With this intuition in mind, we perform the classical Ten-Crop strategy to find out whether this operation can provide a robustness boost. The Ten-Crop operation is executed by cropping all four corners and the center of the input image. We average the predictions of these crops together with their horizontal mirrors as the final result. We find this operation contributes a 0.69% and 1.24% boost in top-1 accuracy in the background and size change scenarios on average, respectively.
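A hedged sketch of this Ten-Crop preprocessing with torchvision: the ten views (four corners, the centre, and their horizontal mirrors) are classified and their softmax outputs averaged. The crop size and normalisation constants are illustrative assumptions.

```python
import torch
from torchvision import transforms
import torchvision.transforms.functional as TF

normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
ten_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),  # 4 corners + centre, plus their horizontal mirrors
    transforms.Lambda(lambda crops: torch.stack(
        [normalize(TF.to_tensor(c)) for c in crops])),
])

@torch.no_grad()
def ten_crop_predict(model, pil_image):
    crops = ten_crop(pil_image)           # (10, 3, 224, 224)
    probs = model(crops).softmax(dim=-1)  # (10, num_classes)
    return probs.mean(dim=0)              # average the predictions of the ten views
```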
**Network designs.** Intuitively, a robust model should focus more on the object of interest instead of the background. Therefore, recent models begin to enhance the backbone by employing attention modules. Of these, ResNeSt [53] can be a representative. ResNeSt is a modularized architecture, which applies channel-wise attention on different network branches to leverage their success in capturing cross-feature interactions and learning diverse representations. As it has achieved a great boost on the ImageNet dataset, it also shows superiority on ImageNet-E compared to ResNet. For example, a switch from RN50 to ResNeSt50 decreases the average dropped accuracy from 15.72% to 12.57%. This indicates that the channel-wise attention module can be a good choice to improve attribute robustness. Another representative model can be the vision transformer, which consists of multiple self-attention modules. To study whether incorporating the transformer's self-attention-like architecture into the model design can help attribute robustness generalization, we establish a hybrid architecture by directly feeding the output of the res_3 block in RN50 into ViT-S as the input feature, like [3]. The dropped accuracy decreases by 1.04% compared to the original RN50, indicating the effectiveness of self-attention-like architectures.
**Training strategy.** a) _Robust trained._ There have been plenty of studies focusing on robust training strategies to improve model robustness. To find out whether these works can boost the robustness on our dataset, we further evaluate these state-of-the-art models including SIN [13], Debiased-CNN [31], Augmix [22], ANT [40], DeepAugment [20] and a model trained with many standard augmentations (RN50-T) [48]. As shown in Tab. 2, apart from the RN50-T, while the Augmix model shows the best performance in the background change scenario, the Debiased model holds the best in the object size change scenario. What we find unexpected is the SIN performance. The SIN method features a novel data augmentation scheme where ImageNet images are stylized with style transfer as the training data to force the model to rely less on textural cues for classification. Though a robustness boost is achieved on ImageNet-C (mCE 69.32%) compared to its vanilla model (mCE 76.7%), it fails to improve the robustness in both the object background and size-changing scenarios. The drops of top-1 accuracy for vanilla RN50 and RN50-SIN are 21.26% and 24.23% respectively when the object size rate is 0.05, though they share similar accuracy on the original ImageNet. This indicates that existing benchmarks cannot reflect the real robustness under object attribute changes. Therefore, a dataset like ImageNet-E is necessary for comprehensive evaluations of deep models. b) _Masked image modeling._ Considering that masked image modeling has demonstrated impressive results in self-supervised representation learning by recovering corrupted image patches [4], it may be robust to attribute changes. Therefore, we choose the Masked AutoEncoder (MAE) [17] as the training strategy, since its objective is to recover images from only 25% of the patches. Specifically, we adopt the MAE training strategy with the ViT-B backbone and then finetune it with ImageNet
Figure 5: Comparisons between vanilla models and adversarially trained models across different architectures in terms of size changes (left). Evaluation of adversarial models (RN50) trained with different perturbation budgets is provided in the right figure.
training data. We find that the robustness is improved. For example, the dropped accuracy decreases from 10.62% to 9.05% on average compared to vanilla ViT-B.
### Failure case analysis
To explore the reason why some robustly trained models may fail, we leverage LayerCAM [28] to generate heat maps for different models, including the vanilla RN50, RN50+SIN and RN50+Debiased, for comprehensive studies. As shown in Fig. 6, the heat map of the Debiased model aligns better with the objects in the image than that of the original model. It is interesting to find that the SIN model sometimes makes wrong predictions even with its attention on the main object. We suspect that the SIN model relies too much on shape. For example, the 'sea urchin' looks like the 'acorn' with the shadow. However, its texture clearly indicates that it is the 'sea urchin'. In contrast, the Debiased model, which is trained to focus on both shape and texture, can recognize it correctly. More studies can be found in Appendix C.4.
### Model repairing
To validate that the evaluation on ImageNet (IN)-E can provide some insights into a model's applicability and enhancement, we conduct a toy example of model repairing. The previous evaluation shows that ResNet50 is vulnerable to background changes. Based on this observation, we randomly replace the backgrounds of objects with others during training and get a validation accuracy boost from 77.48% to 79.00%. Note that the improvement is not small, as only 8781 training images with mask annotations are available in ImageNet. We also step further to find out whether the improved model gets a boost in OOD robustness, as shown in Tab. 3. It can be observed that with the insights provided by the evaluation on ImageNet-E, one can explore a model's attribute vulnerabilities and manage to
\begin{table}
\begin{tabular}{c|c|c c c c c|c c c|c|c|c} \hline \multirow{2}{*}{Models} & \multirow{2}{*}{Original} & \multicolumn{6}{c|}{Background changes} & \multicolumn{6}{c|}{Size changes} & Position & Direction & \multirow{2}{*}{Avg.} \\ \cline{3-3} \cline{5-16} & & Inver & \(\lambda=-20\) & \(\lambda=20\) & \(\lambda=20\)-adv & Random & Full & 0.1 & 0.08 & 0.05 & rp & rd \\ \hline \hline RN50 & 92.69\% & 1.97\% & 7.30\% & 13.35\% & 29.92\% & 13.34\% & 2.71\% & 7.25\% & 10.51\% & 21.26\% & 26.46\% & 25.12\% & 15.72\% \\ DenseNet121 & 92.10\% & 1.49\% & 6.29\% & 9.00\% & 29.20\% & 12.43\% & 3.50\% & 7.00\% & 10.68\% & 21.55\% & 26.53\% & 23.64\% & 14.98\% \\ EF-B0 & 92.85\% & 1.07\% & 7.10\% & 10.71\% & 34.88\% & 15.64\% & 3.03\% & 8.00\% & 11.57\% & 23.28\% & 27.91\% & 19.15\% & 16.12\% \\ ResNet50 & 93.83\% & 1.44\% & 6.33\% & 8.98\% & 26.62\% & 11.28\% & 2.53\% & 5.27\% & 8.01\% & 18.03\% & 21.37\% & 17.32\% & 12.57\% \\ ViT-S & 94.14\% & **0.82\%** & 6.42\% & 8.98\% & 31.12\% & 13.06\% & **0.80\%** & 5.37\% & 8.95\% & 17.37\% & 22.86\% & 17.13\% & 13.17\% \\ Swin-S & **96.21\%** & 1.13\% & 5.18\% & 7.33\% & 23.50\% & 9.31\% & 1.27\% & 4.21\% & 6.29\% & 14.16\% & 17.35\% & **13.42\%** & 10.20\% \\ ConvNeXt-T & 96.07\% & 1.43\% & **4.69\%** & **6.26\%** & **19.83\%** & **7.93\%** & 1.75\% & **3.28\%** & **5.18\%** & **12.76\%** & **15.71\%** & 15.78\% & **9.32\%** \\ \hline \hline RN101 & 94.00\% & 2.11\% & 7.05\% & 11.62\% & 29.47\% & 13.57\% & 2.57\% & 6.81\% & 10.12\% & 26.05\% & 25.85\% & 24.42\% & 15.21\% \\ DenseNet169 & 92.37\% & 1.12\% & 5.81\% & 8.43\% & 27.51\% & 11.61\% & 2.25\% & 6.90\% & 10.41\% & 20.59\% & 24.93\% & 20.68\% & 13.91\% \\ EF-B3 & 94.97\% & 1.87\% & 7.77\% & 8.40\% & 29.90\% & 12.92\% & 1.36\% & 6.80\% & 10.16\% & 21.36\% & 24.98\% & 17.24\% & 14.09\% \\ ResNet101 & 95.54\% & 1.10\% & 5.58\% & 6.65\% & 23.03\% & 10.40\% & 1.35\% & 3.97\% & 6.53\% & 15.44\% & 19.11\% & 14.31\% & 10.64\% \\ ViT-B & 95.38\% & 0.83\% & 5.32\% & 8.43\% & 26.60\% & 10.98\% & **0.62\%** & 4.00\% & 6.30\% & 14.51\% & 18.82\% & 14.95\% & 11.05\% \\ Swin-B & 95.96\% & 0.79\% & 4.46\% & 6.23\% & 21.44\% & 8.25\% & 0.99\% & 3.16\% & 5.04\% & 12.34\% & 13.35\% & **12.60\%** & **8.99\%** \\ ConvNeXt-B & **96.42\%** & **0.69\%** & **3.75\%** & **4.86\%** & **16.49\%** & **6.04\%** & 9.09\% & **2.25\%** & **3.36\%** & **9.47\%** & **12.40\%** & 13.01\% & **7.26\%** \\ \hline CLIP-zeroshot & 80.01\% & 4.88\% & 11.56\% & 15.28\% & 36.14\% & 20.09\% & 3.33\% & 12.67\% & 15.77\% & 25.31\% & 28.87\% & 21.57\% & 19.06\% \\ CLIP-finetuned & 93.68\% & 2.17\% & 9.82\% & 11.83\% & 38.33\% & 18.19\% & 9.06\% & 9.25\% & 12.67\% & 23.32\% & 28.56\% & 22.00\% & 18.30\% \\ \hline \end{tabular}
\end{table}
Table 1: Evaluations with different state-of-the-art models in terms of Top-1 accuracy and the corresponding dropped accuracy under background changes, size changes, random position (rp) and random direction (rd).
\begin{table}
\begin{tabular}{c|c|c c c c|c c c c|c} \hline \multirow{2}{*}{Architectures} & \multirow{2}{*}{Ori} & \multicolumn{6}{c|}{Background changes} & \multicolumn{6}{c|}{Size changes} & Position & Direction & \multirow{2}{*}{Avg.} \\ \cline{3-3} \cline{5-16} & & Inver & \(\lambda=-20\) & \(\lambda=20\) & \(\lambda=20\)-adv & Random & Full & 0.1 & 0.08 & 0.05 & rp & rd \\ \hline RN50 & 92.69\% & 1.97\% & 7.30\% & 13.35\% & 29.92\% & 13.34\% & 2.71\% & 7.25\% & 10.51\% & 21.26\% & 26.46\% & 25.12\% & 15.72\% \\ RN50-Adversarial & 81.96\% & **0.66\%** & **4.75\%** & 13.62\% & 37.87\% & 15.25\% & 4.87\% & 9.62\% & 13.94\% & 25.51\% & 31.96\% & 18.99\% \\ RN50-SIN & 91.57\% & 2.23\% & 7.61\% & 12.19\% & 33.16\% & 13.58\% & 1.68\% & 8.30\% & 12.60\% & 24.23\% & 29.16\% & 27.24\% & 16.98\% \\ RN50-Debiased & 93.34\% & 1.43\% & 6.09\% & 11.45\% & 27.99\% & 12.12\% & 1.98\% & 5.53\%
repair the model and get a performance boost accordingly.
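The background-replacement training trick described above can be sketched as a simple augmentation that composites the masked object onto a randomly chosen background image. The file layout and the assumption of binary PNG masks are illustrative; this is not the exact training code used for the repaired model.

```python
# Hedged sketch of a background-swap augmentation: keep the object pixels given
# by its mask and paste them onto a randomly chosen replacement background.
import random
import numpy as np
from PIL import Image

def swap_background(image_path, mask_path, background_paths):
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)
    # Assumed mask format: white (255) on the object, black (0) on the background.
    mask = np.asarray(Image.open(mask_path).convert("L"), dtype=np.float32) / 255.0
    mask = mask[..., None]                                   # (H, W, 1) for broadcasting
    bg = Image.open(random.choice(background_paths)).convert("RGB")
    bg = np.asarray(bg.resize((img.shape[1], img.shape[0])), dtype=np.float32)
    out = mask * img + (1.0 - mask) * bg                     # composite object over new background
    return Image.fromarray(out.astype(np.uint8))
```

During training, such composites would simply replace the original images for the (relatively few) samples that come with mask annotations.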
## 6 Conclusion and Future work
In this paper, we put forward an image editing toolkit that can control object attributes smoothly. With this tool, we create a new dataset called ImageNet-E that can serve as a general dataset for benchmarking robustness against changes of different object attributes. Extensive evaluations conducted on different state-of-the-art models show that most models are vulnerable to attribute changes, especially the adversarially trained ones. Meanwhile, other robustly trained models can show worse results than vanilla models even when they have achieved a great robustness boost on other robustness benchmarks. We further explore ways to enhance robustness through preprocessing, network design and training strategies.
**Limitations and future work.** This paper proposes to edit object attributes in terms of backgrounds, sizes, positions and directions. Therefore, the annotated mask of the object of interest is required, which is a limitation of our method. Besides, since our editing toolkit is built on diffusion models, its generalization ability is determined by DDPMs. For example, we find that synthesizing high-quality person images is difficult for DDPMs. Considering both the mask-annotation requirement and data quality, our ImageNet-E is a compact test set. In future work, we would like to explore how to leverage the edited data to enhance the model's performance, including both validation accuracy and robustness.
\begin{table}
\begin{tabular}{c|c|c c c c c c} \hline \hline Models & IN & IN-v2 & IN-A & IN-C\(\downarrow\) & IN-R & IN-Sketch & IN-E \\ \hline RN50 & 77.5 & 65.7 & 6.5 & 68.6 & 39.6 & 27.5 & 83.7 \\ RN50-repaired & **79.0** & **67.2** & **9.4** & **65.8** & **40.7** & **29.4** & **85.0** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Model repairing results. Top-1 accuracy (%) is reported except for IN-C, which is mCE (mean Corruption Error). Higher top-1 accuracy and lower mCE indicate better performance. IN-E reports the average accuracy on ImageNet-E.
|
2307.12419
|
Challenges in aligning requirements engineering and verification in a
large-scale industrial context
|
[Context and motivation] When developing software, coordination between
different organizational units is essential in order to develop a good quality
product, on time and within budget. Particularly, the synchronization between
requirements and verification processes is crucial in order to assure that the
developed software product satisfies customer requirements. [Question/problem]
Our research question is: what are the current challenges in aligning the
requirements and verification processes? [Principal ideas/results] We conducted
an interview study at a large software development company. This paper presents
preliminary findings of these interviews that identify key challenges in
aligning requirements and verification processes. [Contribution] The result of
this study includes a range of challenges faced by the studied organization
grouped into the categories: organization and processes, people, tools,
requirements process, testing process, change management, traceability, and
measurement. The findings of this study can be used by practitioners as a basis
for investigating alignment in their organizations, and by scientists in
developing approaches for more efficient and effective management of the
alignment between requirements and verification.
|
Giedre Sabaliauskaite, Annabella Loconsole, Emelie Engström, Michael Unterkalmsteiner, Björn Regnell, Per Runeson, Tony Gorschek, Robert Feldt
|
2023-07-23T20:08:49Z
|
http://arxiv.org/abs/2307.12419v1
|
Challenges in Aligning Requirements Engineering and Verification in a Large-Scale Industrial Context
###### Abstract
[Context and motivation] When developing software, coordination between different organizational units is essential in order to develop a good quality product, on time and within budget. Particularly, the synchronization between requirements and verification processes is crucial in order to assure that the developed software product satisfies customer requirements. [Question/problem] Our research question is: what are the current challenges in aligning the requirements and verification processes? [Principal ideas/results] We conducted an interview study at a large software development company. This paper presents preliminary findings of these interviews that identify key challenges in aligning requirements and verification processes. [Contribution] The result of this study includes a range of challenges faced by the studied organization grouped into the categories: organization and processes, people, tools, requirements process, testing process, change management, traceability, and measurement. The findings of this study can be used by practitioners as a basis for investigating alignment in their organizations, and by scientists in developing approaches for more efficient and effective management of the alignment between requirements and verification.
Keywords:requirements engineering, software verification, software testing, coordination.
## 1 Introduction
Are we sure that the tests performed are based on requirements and not on technical specifications supplied by developers? Are we sure that the test coverage is adequate? In order to assure that customer requirements are realized as intended these questions must be asked and answered. However, this is not an easy task, since requirements tend to change over time [13], and in many cases the requirement specifications are not updated during the development of a product making it hard to use them as a solid base for creating e.g. test cases [7, 15]. In small systems with just a few requirements it could still be possible to handle the changes manually, but it gets extremely hard in complex systems with thousands of requirements. Therefore, there is a need for a
mechanism to manage coordination between the requirements and the verification processes. We call such coordination _alignment_.
In this paper, we examine the challenges in aligning the requirements and the verification processes. We present preliminary results of an interview study performed in a large software company in Sweden. The overall goal of our research is to understand how alignment activities are performed in practice, what the important problems are and what can be improved to gain better alignment. The results presented in this paper are a set of challenges that can help practitioners and researchers. Practitioners can, for instance, allocate more resources in the areas that are challenging when aligning the requirements and the verification processes. Researchers can also benefit from our results by focusing their research on the areas that are the most challenging. The results are valid in the context of the company under investigation. We are currently extending the case study to other companies. By comparing the results of this case with other case studies it will be possible to get a more general picture of challenges in different kinds of organizations and in different domains.
The paper is structured as follows. In Section 2, we present related work in the area. In Section 3, we describe the research approach used in this qualitative case study. Section 4 describes the alignment challenges found. Finally conclusions and future work are presented in Section 5.
## 2 Related Work
In [17], authors presented the findings of the discussions with test managers and engineers in software development organizations regarding difficulties of integrating independent test agencies into software development practices. The company where we have performed interviews does not commonly use independent test agencies, however it has separate requirements, development, and testing units. Therefore it would be interesting to compare the results of having independent test agency and independent test unit within the company under study.
Findings related to change management emphasize the importance of synchronization between the development and test with respect to modifications of functionality [17]. The results of our study confirm these findings. One of the most recurrent challenges identified in our study is that requirements are not being updated on time.
Findings related to people interactions and communication stress the need of communication between development and test organizations. If testers do not know who wrote or modified the code, they do not know whom to talk to when potential faults are detected. On the other hand, it could be difficult for developers to inform testers on upcoming functionality changes, if they don't know whose test cases will be affected [17]. Our study confirms these results as well. Most of the interviewees suggest that alignment could be greatly improved if requirements and testing people would interact more with each other.
Several surveys on requirements related challenges are present in the literature: problems in the requirements engineering process [9], requirements modeling [6],
quality requirements [7], requirements prioritization [1], and requirements interdependencies [8]. Among these, Karlsson et al. [1] have results similar to ours, i.e. tool integration is difficult and it is a challenge to write quality requirements.
Most of the studies above do not focus on the alignment between the requirements and the verification processes. Research in connecting requirements and testing has been performed by several authors, for instance Uusitalo et al. [4], Post et al. [3], and Damian and Chisan [10]. Uusitalo et al [4], have conducted a series of interviews in order to investigate best practices in linking requirements and testing. Among the best practices, authors mention early tester involvement in requirements activities. They conclude by suggesting to strengthening the links between requirements engineers and testers, since it is difficult to implement traceability between them; a conclusion supported by this study (see Section 4.7).
The importance of linking requirements and verification is also stressed by Post et al. [3]. They describe a case study showing that formalizing requirements in scenarios make it easier to trace them to test sets. Damian and Chisan [10] present a case study where they introduce a new requirements engineering process in a software company. Among the practices in the process, they include traceability links between requirements and testing, cross-functional teams, and testing according to requirements. They show that an effective requirements engineering process has positive influence on several other processes including testing process.
The case studies above [3, 4, 10] are performed in a medium scale requirements engineering context [11], while our study is performed in a large/very large scale context and includes many aspects of aligning requirements and verification.
## 3 Research Approach
The approach used in this study is qualitative. Qualitative research consists of an application of various methods of collecting information, mainly interviews and focus groups. This type of research is exploratory [16]. Participants are asked to respond to general questions, and the interviewers explore their responses to identify and define peoples' perceptions and opinions about the topic being discussed. As the study was meant to be deep and exploratory, interviews were the best tool since surveys are not exploratory in nature. The interviews were semi-structured to allow in-depth, exploratory freedom to investigate non-premeditated aspects.
In this study, we interviewed 11 professionals in a large software development company in Sweden, based on the research question: What are the current challenges in aligning the requirements and the verification processes?
The viewpoint taken in this research is from a process perspective. The researchers involved do not work directly with artifacts, but with processes and have expertise in fields like requirements, testing, quality, and measurement.
Based on our pre-understanding of the processes involved in aligning requirements and verification, a conceptual model has been designed (see Figure 1). This model was used as a guide during the interviews. In this model, we consider three dimensions of requirements and test artifacts, connected through work processes. One is the _Abstraction level dimension_, from general goals down to source code, which is
similar both for the requirements and the testing side. Test artifacts are used to verify the code, but also for verifying the requirements. The arrows are relationships that can be both explicit and implicit, and can be both bi- or uni-directional. Then, we have the _Time dimension_, in which the processes, the products, and the projects change and evolve. This has an effect on the artifacts. There is also the _dimension of Product lines_, which addresses variability, especially applicable when the development is based on a product line engineering approach [2].
**Case Context.** Our results are based on empirical data collected through interviews at a large anonymous company, which is using a product-line approach. The company develops embedded systems for a global market and has more than 5000 employees. A typical project in this company lasts about 2 years, involves circa 800-1000 people per year, and has around 14000 requirements and 200000 test cases. The tool DOORS is used for requirements management, and the tool Quality Center for test management. Further information about the company is not disclosed for confidentiality reasons.
The interviews have been distributed in time between May and October 2009.
### Research Methodology
In this study, challenges and problems, as well as, current good practices and improvement suggestions regarding alignment between the requirements and verification processes have been identified through interviews with software engineering practitioners. The results from 11 interviews are included in this paper. Employees with different roles have been interviewed: quality management related roles (quality manager and quality control leader), requirements related roles (requirements process manager, requirements architect and requirements coordinator),
Figure 1: Conceptual Model
developer and testing related roles (test leader, tester). The research was conducted in several steps:
1. Definition of interview guide;
2. Interview planning and execution;
3. Transcription of interviews, and division of transcriptions into sections;
4. Definition of codes (keywords) to be assigned to transcriptions' sections;
5. Coding of interview transcriptions using predefined codes;
6. Sorting of coded transcriptions to group transcription sections according to codes;
7. Analysis of the results;
8. Validation of results by feedback to the organization.
**Step 1.** We constructed an interview guide, which is a document containing 30 questions to be asked during the interviews. The first version of the interview guide contained 22 questions, defined based on the research questions. The questions were validated during 2 pilot interviews. This led to an updated list of questions, grouped into several topics, such as general questions about requirements and testing, questions on quality requirements, etc. An overview of the interview guide is available in Table 1.
Footnote 1: The complete version of the interview guide and coding guide are available at:
[http://serg.cs.lth.se/research/experiment_packages/interview_study_on_requirements_verification_alignment/](http://serg.cs.lth.se/research/experiment_packages/interview_study_on_requirements_verification_alignment/)
**Step 2.** Eleven professionals were interviewed; each interview lasted for about one and a half hour. All interviews were recorded in audio format and notes were taken. A semi-structured interview strategy [16] has been used in all interviews, where the interview guide acted as a checklist to make sure that all important topics were covered. 2-3 interviewers interviewed one interviewee. One of the interviewers lead the interview, while the others followed the interview guide, took notes, and asked additional questions. The selection of the interviewees has been made based on recommendations by requirements managers, test managers, and the interviewees themselves. (At the end of each interview we asked the interviewees if they could
\begin{table}
\begin{tabular}{|l|l|} \hline \multicolumn{1}{|c|}{**Interview topics**} & \multicolumn{1}{c|}{**Description**} \\ \hline Software requirements & Handling of functional and quality requirements, customer involvement \\ \hline Software testing & Handling of testing artifacts, customer involvement, testing of functional and quality requirements \\ \hline Alignment between requirements \& & Alignment importance, current alignment method (documents, processes, methods, tools, principles, practices, etc.), alignment responsible, problems \& challenges, improvement ideas \& expected benefits \\ \hline Measurements and feedback gathering & Alignment related measurements, performance indicators, customer satisfaction evaluation \\ \hline Product line engineering & Handling of requirements and testing, maintaining alignment \\ \hline Outsourcing & Maintaining alignment in case of outsourcing \\ \hline \end{tabular}
\end{table}
Table 1: Overview of the interview guide
recommend a person or a role in a company whom we could interview in order to get alignment related information).
**Step 3.** Interviews were transcribed into text in order to facilitate the analysis. The transcriptions were then divided into text sections containing 1-2 sentences. All the text sections have been numbered in order to keep the order of the sentences. The size of the transcriptions ranged from 4000 words to about 9000 words per interview.
**Step 4.** As suggested by C.B. Seaman [12], codes (keywords) were assigned to the transcriptions' sections in order to be able to extract all the sections related to a specific topic. However, the definition of the coding scheme turned out to be a non-trivial task. We started by making an initial list of possible codes, which included codes related to our research questions, alignment methods, quality requirements [14] and software development process activities. In order to extend and tailor this initial list of codes to our interview context, we decided to perform exploratory coding [16], which included six researchers analyzing several interview transcriptions individually and assigning suitable codes to the text sections.
The result of exploratory coding was a list with 169 codes. In the next stage, we reviewed the codes resulting from the exploratory coding, grouped them into several categories at different abstraction levels and developed a coding guide. The coding guide is a document containing the list of codes and detailed instructions of how to code a transcription. In order to validate the coding guide, seven researchers used it to code the same interview transcription (let's call it X) individually, and then had a meeting to discuss differences in coding and possible improvements of the coding guide. Kappa inter-rater agreement [18] has been used as a metric to evaluate improvement in homogeneity of coding by different researchers. Consequently, the coding guide was updated and the interview transcription (X) was coded again using the updated version of the coding guide to make sure that the differences between different coders were minimized. The coding guide included codes at three abstraction levels: high, medium, and low (see Table 2). The high-level codes were based on research questions. The medium-level codes included different categories relevant to our research, and the low-level codes were the coder's interpretation of the transcription's section. A summary of the codes is presented in Table 2.
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline
**Abstraction level** & **Description** \\ \hline High & Codes related to research questions, i.e. alignment practices, problems and challenges, improvement ideas and benefits \\ \hline Medium & Two groups of codes: Group 1 – thirteen categories, which include requirements, testing, traceability, configuration management, organization processes, interactions, product quality aspects, and measurements among others. Group 2 – additional categories, e.g. product-line engineering, outsourcing, open source. \\ \hline Low & Coder’s interpretation of the transcription’s section, a brief summary of the information described in the section \\ \hline \end{tabular}
\end{table}
Table 2: Overview of the codes assigned to transcription’s sections (see footnote 1 for a complete list of codes).
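The inter-rater agreement check mentioned in Step 4 can be illustrated with a small sketch: Cohen's kappa for two coders is computed below on made-up code assignments (the label lists are placeholders, not actual interview data).

```python
# Illustrative computation of Cohen's kappa between two coders who each assigned
# one medium-level code to every transcription section; the data are made up.
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1.0 - expected)

coder1 = ["requirements", "testing", "tools", "testing", "requirements",
          "traceability", "tools", "testing", "requirements", "measurement"]
coder2 = ["requirements", "testing", "tools", "requirements", "requirements",
          "traceability", "tools", "testing", "testing", "measurement"]
print(round(cohen_kappa(coder1, coder2), 3))   # 0.737 -- substantial agreement
```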
**Step 5.** Eleven interview transcriptions were randomly assigned to four researchers, who coded them using the final version of the coding guide. The template used during coding is shown in Table 3.
**Step 6.** Coded interview transcriptions were merged into one file, making it possible to group transcription sections according to codes.
**Step 7.** The identified transcription sections of each group were analyzed by two researchers. In order to identify alignment challenges, the researchers studied all the transcription sections coded as "challenges" with the goal of extracting challenges from the information provided by interviewees. Some challenges were similar and could therefore be reformulated or merged together, while others were kept apart as they were different.
**Step 8.** The results of the analysis were validated by feedback from the organization where the interviews have been conducted.
### Validity Discussion
A discussion of possible threats to validity will help us to qualify the results and highlight some of the issues associated with our study. As suggested by P. Runeson and M. Host [5], we have analyzed the construct validity, external validity, and reliability. Internal validity is concerned with threats to conclusions about cause and effect relationships, which is not an objective of this study. A detailed list of possible threats is presented in [16].
**Threats to Construct Validity.** The construct validity is the degree to which the variables are accurately measured by the measurement instruments used in the study [5]. The main construct validity threat in this study regards the design of the measurement instrument: are the questions formulated so that the interviews answer our research questions? Our main measurement instrument is the interview guide (see Section 3.1, Step 1), which includes the questions to be asked. Two researchers have constructed it by analysing the research questions and creating sub-questions. Five researchers have reviewed it to check for completeness and consistency; therefore we believe that the interview guide is accurate. The other measurement instrument is the coding guide. As described in Section 3.1, Step 4, this instrument has been validated
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{No} & \multirow{2}{*}{Text} & High-Level Coding & \multicolumn{2}{c|}{Medium-Level Coding} & Low-Level Coding \\ \cline{3-6} & & Research Questions & Group 1 & Group 2 & Comments \\ \hline & & & & & \\ \hline \end{tabular}
\end{table}
Table 3: Template used for coding the interview transcriptions.
by seven researchers in order to make sure that the result of the coding activity had minimal individual variations.
The questions in the interview guide were tailored on the fly to the interviewees since the professionals participating in the interviews had different roles and different background. Our study is qualitative; the goal is not to quantify answers of the same type, rather to explore the different activities in the company, which could be done best by investigating deeply the role of each interviewee.
Another potential threat in this study is that different interviewees may interpret the term "alignment" differently. For this reason, the conceptual model (see Figure 1) has been shown to the subjects during the interviews, in order to present our definition of alignment between requirements and verification.
**Threats to External Validity.**
The threats to external validity concern generalisation. The purpose of this study is not to do any statistical generalization of the results to other contexts, but to explore the problems and benefits of alignment in the context of the specific company. The study was performed in an industrial environment where the processes were real, and the subjects were professionals. Hence, we believe that the results can be analytically generalized to any company of similar size and application domain. The company might not be representative; therefore more companies will be interviewed in order to get results independent of the kind of company.
**Reliability.**
Reliability issues concern to what extent the data and the analysis are dependent on the researchers. Hypothetically, if another researcher later conducts the same study, the results should be the same. In this study, all findings have been derived by at least two researchers, and then reviewed by at least three other researchers. Therefore, this threat has been reduced.
In our study, the investigation procedures are systematic and well documented (see Section 3). The interview guide, the researchers' view (the conceptual model), and the coding scheme were reviewed independently by seven researchers with different background.
The presented observations reflect the views of the participants. The interviews have been recorded and transcribed. The transcriptions could contain errors due to misinterpretation, mishearing, inaccurate punctuation or mistyped words. In order to minimize these threats, the transcriber has also been present at the interview. Moreover, the transcriptions were sent to the interviewees so that they could correct possible misinterpretation of their answers.
One factor affecting the reliability of the data collected can be the fact that the interviews capture the subjective opinion of each interviewee. However, we interviewed 11 professionals, which we believe is a sufficient amount to capture the general view of the company. Influence among the subjects could not be controlled and we could only trust the answers received. The choice of the subjects in the company might not give a representative picture of the company; however, the subjects had different roles and we tried to cover diverse roles within the company.
Regarding the coding activity, it is a classification of pieces of text, which are taken out of context; hence there is a risk of misinterpretation. This risk was minimized by checking the whole context of the text while doing data analysis.
To summarize, we believe that the validity threats of our results are under control, although the results should not be generalized to all organizations.
## 4 Analysis and Result
The result of this study includes a range of challenges faced by the studied company grouped into these categories: organization and processes, people, tools, requirements process, testing process, change management, traceability, and measurement. The grouping is rough: if the challenge belonged to several categories, we assigned it to the category which was the most relevant. The choice of the categories was based on the medium level codes (see Table 2). All challenges are rooted in the interview transcriptions. The challenges of each group are presented in Subsections 4.1-4.8.
### Organization and Processes Related Issues
This section summarizes the alignment problems and challenges related to the company's organizational structure and processes.
* The requirements and verification processes are separate processes and are not aligned. Furthermore, processes can use different standards of documentation, which negatively influence the hand-over between different parts of organization. Moreover, some parts of the company follow a documented development process while other parts do not.
* Frequent process changes negatively influence alignment. It would take time for people to learn and use the new process. Sometimes, people are reluctant to use a process knowing that it will change soon. Also, some good practices could be lost due to the process changes.
* Distance in time between the development of requirements and test artifacts can create alignment problems. Requirements can be approved without having test cases associated with them. This can result in having non-testable requirements.
* In a large company, gaps in communication across different organizational units often occur, especially at the high level. Furthermore, as stated by an employee "_it is hard to find who is accountable for things because depending on who you ask you get very different answers_". Therefore, this could affect the alignment, especially at the high abstraction level of the requirements and verification processes.
* Implementation of process improvements is time consuming, especially when the improvements are involving several units. Several issues related to the management can affect the alignment, e.g. decisions are not documented, lessons learnt are not always collected and processes depends on individual commitment.
Summarizing the challenges, the requirements and the verification processes are not aligned and are distant in time. There are also communication problems across different organizational units and the decisions are not documented, therefore it is hard to know who is accountable for a decision. The organizational structure and the processes, as well as changes in these are influencing the alignment. One reason could be that the company is very large and many organizational units are involved, and not every unit follows the documented process, and the standard for documentation.
### People Related Issues
This subsection presents a list of issues that are related to people, their skills and communication with each other.
* Professionals do not always have good technical knowledge and understanding about the work of other units. Requirements engineers sometimes lack knowledge about implementation as well as testing, while testers lack knowledge of requirements. Also, professionals are sometimes unwilling to move within the company in order to gain this knowledge. This has a negative effect on alignment between requirements and verification processes.
* Lack of cooperation between requirements people, developers and testers is affecting the alignment. In some cases, requirements engineers and developers have a good communication, as well as developers and testers. However, when there is a lack of direct communication between requirements and testing people, alignment is influenced negatively.
The main challenge in this area is communication, cooperation, and understanding of each other's work within the company. This can be hard when working under tight deadlines; there is no time to communicate and understand each other's work. Adequate technical knowledge, communication and cooperation between requirements people, developers and testers greatly influence the alignment.
### Tools Issues
Software tools play a crucial role in maintaining alignment between different artifacts. The following are several tool related issues.
* The lack of appropriate tools influences the alignment. It is very important to have reliable and easy to use requirements and verification tools. If the tool is difficult to use, or it is not reliable, people are not willing to use them. Having a good requirements management tool, which includes not only information about requirements, but also the flow of requirements, is crucial for testers. Otherwise, testers try to get this kind of information from other sources, for instance the developers. Tools for managing quality requirements are needed, otherwise there is a risk that quality requirements are not implemented and/or tested.
* It is important to keep the requirements database updated. If requirements are not up to date, testers will test according to old requirements and will find issues, which are not really failures, but valid features.
* If there is no tool to collect customer needs, it is difficult to keep them aligned with requirements, hence with test cases as well. And this leads to misalignment between customer needs and requirements, and consequently affects customer satisfaction with the final product.
* In cases when requirements and testing artifacts are stored in different tools, there is a need of good interfaces between these tools, and access of all interested parties to the tools. Otherwise, it becomes very difficult to maintain alignment. Especially when there are many-to-many relationships between requirements and test cases.
* If the mapping between requirements and test cases is not presented in a clear way, it could contain too much redundant information, and therefore it could be difficult for requirements people and testers to use it.
Most of the interviewees stated the lack of adequate software tools, which would allow to handle requirements, verification, and to measure the alignment between them. Furthermore, the interface of the tools and tool integration is not always good. The consequence of this is that people become reluctant to use them and do not update the information stored in them. This is greatly affecting the alignment.
### Requirements Process Related Issues
This subsection presents a list of issues that are related to the requirements process.
* Requirements sometimes are not given enough attention and consideration by other organizational units, such as development and testing units. According to an employee "_Developers do not always review the requirements, and discover requirements that can not be implemented during development, even when having agreed on the requirements beforehand_". This could be due to the lack of involvement of developers and testers in requirements reviews.
* Not having a good way of managing customers' needs makes it more difficult to define requirements, especially requirements at a high abstraction level.
* Requirements engineers do not think about testability of requirements. Therefore, requirements could turn out to be non-testable.
* Dealing with quality requirements is a difficult task. Quality requirements tend to be badly structured or vague. Furthermore, it is difficult to assign quality requirements to different development groups for implementation, since several groups are usually involved in implementing a quality requirement, and none wants to take a full responsibility for that.
* It is difficult to maintain alignment in organizations working with a large set of requirements, when the number of requirements reaches tens of thousands or more. Furthermore, in the organizations, which are using a product lines engineering [2] approach, maintaining alignment between domain and application requirements and test cases could be a challenge.
As we can see, there are numerous challenges related to requirements process, which affect alignment. Most of the interviewees stress the importance of updating requirements as soon as changes occur, and finding adequate ways of defining and
managing quality requirements. These two are the most recurrent requirements process related challenges.
### Testing Process Related Issues
The following are the issues that related to the testing process.
* Sometimes testers lack clear directions on how to proceed with testing. Especially while testing high-level requirements, such as roadmaps for example. It is difficult to test that the products adhere to roadmaps, since such testing takes a long time and is costly. Usually short loops are preferred.
* In case several organizational units are involved in testing, the cooperation between them is crucial. It is particularly relevant to the companies, which have a product line engineering approach, since different organizational units could be performing domain and application testing, and the faults detected in applications should be removed from domain as well.
* There is a lack of verification at early development stages, especially of quality requirements verification. This results in lower quality of the product, as well as added cost and time spent on removal of defects at later development phases.
* It is inefficient to maintain alignment of requirements and test cases due to the large amount of test cases; sometimes their number reaches hundreds of thousands.
* It is difficult to get requirements people interested in having good quality test cases. Requirement people's involvement in reviewing test cases contributes to alignment, since this would help to assure that test cases comply with requirements.
As we can see from the above-mentioned challenges, there are numerous testing process related issues that can affect alignment. Having a well defined testing process at different development stages, and good cooperation between testing units could help improve the alignment.
### Change Management Issues
The following are the challenges related to change management.
* It is sometimes difficult to find the people responsible, if a change occurs, if a defect is found, or if there is a need of further information. Thus, requirements engineers do not always inform related developers and testers in case of a requirements change. Furthermore, if a failure is found during maintenance phase, it is extremely difficult for maintenance people to find requirements people who can give information regarding requirements, or whom to inform about implemented changes. Therefore, maintenance people sometimes need to use testers as a source of information about requirements.
* There is a lack of strategy in deciding which changes to implement in case there is not enough time or resources to implement all changes.
* The information about changes is not always timely updated in the requirements database. Therefore it is difficult for developers and testers to know that the change has occurred.
Updating the requirements on time is one of the most recurrent challenges. It is therefore important to find ways to cope with changes immediately so that the traceability with testing can be maintained. In addition, delta handling and good tracking and reporting on the requirements and test case tools is needed to easily track changes and verify completeness.
### Traceability Issues
The following are the challenges related to traceability between requirements and testing artifacts.
* There is a lack of links between requirements and test cases. Some test cases are very complex; therefore it is difficult to trace them back to requirements.
* If traceability between requirements and test cases is not maintained, testers keep testing requirements that have been removed. The reasons for lack of traceability could be the difficulty to implement traceability, and the lack of resources to maintain it.
* Having large legacies implies that a lot of test cases do not have requirements linked. This complicates implementation of alignment.
* Ideally, alignment should be implemented and maintained at all abstraction levels of requirements and verification processes. However, if it cannot be done for various reasons, such as lack of resources or time constraints, it is necessary to clearly define at which level to implement alignment.
The main challenge is the large volumes and complexity of requirements, test cases and test results. These are negatively influencing traceability. Better tools could help in managing the traceability in large scale requirements engineering and testing.
### Measurements Issues
The following are the measurements related challenges.
* Due to the lack of experience in using measurements, it is difficult to define appropriate metrics or indicators.
* There is a lack of alignment related metrics. For example, one of the alignment metrics is requirements coverage by test cases, which is measured by calculating a percentage of requirements that have associated test cases. However, if a requirement has several test cases associated to it, it still could lack complete test coverage. Therefore, additional metrics are needed in order to get more complete information about requirements coverage.
* Key Performance Indicators (KPI) and metrics should be appropriate at both operative, as well as, top management level. Sometimes KPIs are useful only at top
management level, but do not provide important information at the operative level regarding the things that could be improved.
* Sometimes target values for metrics and indicators are defined without a business case, not based on historical measurement data. Therefore, they could be in-achievable.
Among challenges regarding measuring the alignment that are mentioned, the most recurrent is the difficulty of defining metrics to measure the alignment, especially the requirements coverage. Definition and use of adequate alignment metrics could help improve the alignment.
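The requirements-coverage metric mentioned above, and its blind spot, can be made concrete with a toy sketch; the trace-link structure below is a placeholder, not the company's tooling.

```python
# Toy sketch of the requirements-coverage metric: the share of requirements with
# at least one linked test case. Note that it cannot tell whether those test
# cases actually cover the requirement completely.
def requirements_coverage(requirement_ids, trace_links):
    """trace_links: dict mapping a requirement id to the ids of linked test cases."""
    covered = [r for r in requirement_ids if trace_links.get(r)]
    return len(covered) / len(requirement_ids)

reqs = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
links = {"REQ-1": ["TC-11", "TC-12"], "REQ-3": ["TC-31"]}
print(requirements_coverage(reqs, links))   # 0.5 -- half of the requirements are 'covered'
```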
## 5 Conclusions and Further Research
In this paper, we have presented results of an interview study performed in a large software development company in Sweden. The goal of the study was to explore the current challenges in aligning requirements and verification processes.
One of the main challenges found regards software tools, both for managing requirements and for managing test cases. Often the tools are not easy to use, and when different tools are used, the interface between them is poor. The consequence is that employees tend not to update the requirements stored in the tools, and the stored information becomes obsolete and not useful. Traceability is also a challenge, and its importance is corroborated by other studies [3, 14]. Communication and cooperation across different units within the company is also a major challenge, confirming the results in [1, 17]. As a consequence of these challenges, the company has decided to improve its development process.
Our results can inspire other practitioners in their alignment improvement efforts since they can learn from this case what can be the most salient challenges in managing large quantities of requirements and test information in natural language. Researchers can also learn from this study since they can focus their research on existing challenges of potentially general interest.
We are extending this study to other companies of different size and domain. This will further enhance a general picture of alignment issues.
Acknowledgements. This work was funded by the Industrial Excellence Center EASE - Embedded Applications Software Engineering ([http://ease.cs.lth.se](http://ease.cs.lth.se)). Many thanks to the anonymous interviewees for their dedicated participation in this study, and to the reviewers of the paper for their valuable comments.
|
2303.16388
|
Complexity of Equilibria in First-Price Auctions under General
Tie-Breaking Rules
|
We study the complexity of finding an approximate (pure) Bayesian Nash
equilibrium in a first-price auction with common priors when the tie-breaking
rule is part of the input. We show that the problem is PPAD-complete even when
the tie-breaking rule is trilateral (i.e., it specifies item allocations when
no more than three bidders are in tie, and adopts the uniform tie-breaking rule
otherwise). This is the first hardness result for equilibrium computation in
first-price auctions with common priors. On the positive side, we give a PTAS
for the problem under the uniform tie-breaking rule.
|
Xi Chen, Binghui Peng
|
2023-03-29T01:57:34Z
|
http://arxiv.org/abs/2303.16388v1
|
# Complexity of Equilibria in First-Price Auctions
###### Abstract
We study the complexity of finding an approximate (pure) Bayesian Nash equilibrium in a first-price auction with common priors when the tie-breaking rule is part of the input. We show that the problem is PPAD-complete even when the tie-breaking rule is trilateral (i.e., it specifies item allocations when no more than three bidders are in tie, and adopts the uniform tie-breaking rule otherwise). This is the first hardness result for equilibrium computation in first-price auctions with common priors. On the positive side, we give a PTAS for the problem under the uniform tie-breaking rule.
## 1 Introduction
The first-price auction is arguably the most commonly used auction format in practice [21, 14, 15]: the highest bidder wins the item and pays her bid. First-price auctions and their variants have been widely used in online ad auctions: when a user visits a platform, an auction is run among interested advertisers to determine the ad to be displayed to the user. Despite its simplicity, the first-price auction is _not_ incentive compatible -- it is the most well-known example in auction theory that does not admit a truthful strategy. This has led to significant effort in economics [1, 1, 13, 12, 15, 16] and, more recently, in computer science [1, 17, 18] to understand equilibria of first-price auctions.
In this paper we study the computational complexity of finding a Bayesian Nash equilibrium in a first-price auction. We consider the following _independent common prior_ setting. There is a single item to sell, and \(n\) bidders are interested in it. Each bidder has a continuous value distribution \(\mathcal{D}_{i}\) supported over \([0,1]\). The joint value distribution \(\mathcal{D}\) is the product of the \(\mathcal{D}_{i}\)'s. While the \(\mathcal{D}_{i}\)'s are public, each bidder \(i\) has a private value \(v_{i}\) for the item drawn from \(\mathcal{D}_{i}\). Each bidder chooses a bidding strategy, which maps her private value to a bid from a _discrete_ bid space \(\mathcal{B}=\{b_{0},b_{1},\ldots,b_{m}\}\). A Bayesian Nash equilibrium is a tuple of bidding strategies, one for each bidder, such that every bidder plays a best response to the other bidders' strategies (see the formal definition in Section 2, including the two conditions that bidding strategies need to satisfy: no overbidding and monotonicity).
This game between bidders, however, is not fully specified without a tie-breaking rule: how the item is allocated when more than one bidder has the highest bid. A variety of tie-breaking rules have been considered in the literature. The uniform tie-breaking rule, where the item is allocated to one of the winners uniformly at random, has been the most common choice. Another commonly used tie-breaking rule is to perform an additional round of a Vickrey auction, which ensures the existence of equilibria when the bidding space is continuous [1, 14]. A recent line of works [19, 10, 11, 12, 13] used monopoly tie-breaking rules that always give the item to one player when establishing worst-case price-of-anarchy (POA) bounds.
To accommodate tie-breaking rules in the problem, we consider the setting where the auctioneer specifies a tie-breaking rule \(\Gamma\) to be used in the auction (as part of the input). \(\Gamma\) maps each set of winners \(W\subseteq[n]\) to a distribution \(\Gamma(W)\) over \(W\) describing how the item is allocated among the bidders in \(W\). While a general tie-breaking rule takes exponentially many entries to describe, our PPAD-hardness result is built upon the succinct family of so-called _trilateral_ tie-breaking rules: such a tie-breaking rule \(\Gamma\) specifies item allocations when no more than three bidders are in a tie, and follows the uniform tie-breaking rule otherwise (when more than three bidders are in a tie). The hardness result rules out the possibility of an efficient algorithm for finding a Bayesian Nash equilibrium in a first-price auction when the tie-breaking rule is given as part of the input (unless PPAD is in P). We complement our hardness result with a polynomial-time approximation scheme (PTAS) for finding a constant-approximate Bayesian Nash equilibrium under the uniform tie-breaking rule.
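To make the objects in this model concrete, the following is a small Monte Carlo sketch (our own illustration, not part of the paper's algorithms) that estimates the interim probability that a bidder wins with a given bid under the uniform tie-breaking rule, when all bidders play monotone threshold strategies over a three-point bid space; the uniform value distributions and the particular thresholds are illustrative assumptions.

```python
# Monte Carlo sketch of interim winning probability in a discretized first-price
# auction under the uniform tie-breaking rule. Uniform[0,1] values and the
# threshold strategies below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
bids = np.array([0.0, 0.3, 0.6])                 # bid space B = {b0, b1, b2}

def threshold_strategy(values, thresholds):
    """Monotone map: bid the largest b_j whose threshold the value exceeds."""
    return np.searchsorted(thresholds, values, side="right") - 1

def interim_win_prob(j, thresholds, n=4, samples=200_000):
    """Estimate P[a fixed bidder gets the item | she bids bids[j]] when the other
    n-1 bidders draw i.i.d. Uniform[0,1] values and ties are broken uniformly."""
    values = rng.random((samples, n - 1))        # values of the other bidders
    other_bids = threshold_strategy(values, thresholds)
    highest = other_bids.max(axis=1)
    n_tied = (other_bids == j).sum(axis=1)
    win = np.where(highest < j, 1.0,
          np.where(highest > j, 0.0, 1.0 / (n_tied + 1.0)))
    return win.mean()

# Everyone bids b1 once their value exceeds 0.3 and b2 once it exceeds 0.7.
thr = np.array([0.0, 0.3, 0.7])
p1 = interim_win_prob(j=1, thresholds=thr)
print(f"P[win | bid b1] ~ {p1:.3f}; utility at value v=0.5: {(0.5 - bids[1]) * p1:.3f}")
```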
### Our results
Our main hardness result shows that the problem of finding an \(\epsilon\)-approximate Bayesian Nash equilibrium in a first-price auction is PPAD-complete with trilateral tie-breaking.
**Theorem 1.1** (Computational hardness).: _It is PPAD-complete to find an \(\epsilon\)-approximate Bayesian Nash equilibrium in a first-price auction under a trilateral tie-breaking rule for \(\epsilon=1/\operatorname{poly}(n)\)._
It is worth pointing out that the hardness above holds even when (1) the bid space \(\mathcal{B}\) has size \(3\); and (2) the density function of each \(\mathcal{D}_{i}\) is a piecewise-constant function with no more than four nonzero pieces.
On the positive side, we obtain a PTAS for finding a Bayesian Nash equilibrium in a first-price auction under the uniform tie-breaking rule:
**Theorem 1.2** (PTAS under uniform tie-breaking).: _For any \(\epsilon>0\), \(n,m\geq 2\), there is an algorithm that finds an \(\epsilon\)-approximate Bayesian Nash equilibrium using \(O(n^{4}\cdot g(1/\epsilon))\) time under the uniform tie-breaking rule._
Our algorithm works as long as it has oracle access to the CDF of each value distribution \(\mathcal{D}_{i}\).
### Related work
**First-price auction.** The study of first-price auctions dates back to the seminal work of Vickrey [20] in the 1960s. Despite its extremely simple form and wide range of applications, incentives have been a central issue, and it is perhaps the most well-known mechanism that does not admit a truthful strategy. A long line of works in the economic literature [14, 21, 22, 15, 16, 17, 18, 19, 20, 21] is devoted to characterizing the existence, uniqueness and closed-form expression of a pure Bayesian Nash equilibrium (BNE). However, the BNE of a first-price auction is only well understood in a few special cases, including when the players have symmetric valuation distributions [1], when all players have probability density functions bounded away from 0 and atomic probability mass at the lowest point [16], when there are only two bidders with uniform valuation distributions [15], or when the players have discrete values and a continuous bidding space and ties are broken with an extra round of a Vickrey (second-price) auction [23].
A formal study of the computational complexity of equilibria in a first-price auction was initiated by the recent work of [13], which is closest to ours. [13] examines the computational complexity under a _subjective prior_, that is, each bidder has a different belief about the others' valuation distributions. They prove the PPAD-completeness and the FIXP-completeness of finding an \(\epsilon\)-BNE (for some constant \(\epsilon>0\)) and an exact BNE, respectively, under the uniform tie-breaking rule. As we shall explain soon, the techniques used to obtain their results are quite different from ours. It is worth noting that most of the aforementioned literature is on the common prior setup, and [13] also leaves the open question of characterizing the computational complexity of an \(\epsilon\)-BNE under the standard setting of an independent common prior. [13] also provides a polynomial-time algorithm for finding a high-precision BNE for a _constant_ number of players and bids, when the input distributions are piecewise polynomial. Their approach is based on polynomial system solvers and is thus different from ours. The work of [12] studies Bayesian combinatorial auctions, where multiple items are sold to multiple bidders. They prove that computing a Bayesian Nash equilibrium is at least PP-hard (a complexity class between the polynomial hierarchy and PSPACE); their model is quite different, because the agents' valuations can be much more complex, being defined over subsets of items.
Other aspects of first-price auctions have also been studied in the literature, including the price of anarchy/stability [15, 16, 17, 18, 19] and parameter estimation [12, 13].
**Equilibrium computation.** The complexity class PPAD (Polynomial Parity Arguments on Directed graphs) was first introduced by Papadimitriou [15] to capture one particular genre of total search problems. The seminal work [1, 2] established the PPAD-hardness of normal-form games. The hardness of approximation was settled by subsequent work [14, 15, 17] in the past few years. A broad range of problems have been proved to be PPAD-hard; notable examples include equilibrium computation in special but important classes of
games (win-or-lose game [1, 2], anonymous game [1], constant rank game [13], graphical game [2]), market equilibrium (Arrow-Debreu market [14, 15, 16], non-monotone market [1, 17, 18], Hylland-Zeckhauser scheme [13]), fair division [15, 16], min-max optimization [10] and reinforcement learning [11, 12].
A PTAS is known for anonymous games [15], which are closely related to our work. [15] presented an \(n^{g(m,1/\epsilon)}\cdot U\)-time algorithm for \(m\)-action \(n\)-player anonymous games, for some exponential function \(g\); here \(U\) denotes the number of bits needed to represent a payoff value in the game. In contrast, our algorithm finds an \(\epsilon\)-BNE of a first-price auction in time \(n^{4}\cdot g(1/\epsilon)\), which depends neither on the size of the bidding space nor on the bit-size of the representation of the distributions. It crucially utilizes the structure of the first-price auction in the rounding and searching steps, and could have broader applications in auction theory.
### Technical overview
The challenge in obtaining the PPAD-hardness is twofold. First, the utility function does not admit a closed-form expression in terms of the other players' strategies. It depends on an exponential number of possible bidding profiles and is computed only via a dynamic programming approach. Second, the game structure is highly symmetric under the (independent) common prior. In a first-price auction, the allocation is determined by the entire bidding profile, and each player faces "almost" the same set of profiles. From this perspective, it is more like an anonymous game. Perhaps even worse, in an anonymous game the utility function of each player is different and can be designed for the sake of the reduction, whereas in a first-price auction the utility function of each player is the same and depends only on the allocation probability. Of course, the general (non-uniform) tie-breaking rule as well as the different valuation distributions can be used to break the symmetry. We note that the above challenges are unique to the common prior setting. In sharp contrast, in the subjective prior setting [16] the players' subjective beliefs can differ: a player may presume that most other players have zero value and submit a zero bid; hence the game is _local_ and non-symmetric.
To resolve the above challenges, our key ideas are (1) linearizing the allocation probability and expanding a first order approximation of the utility function; and (2) carefully incorporating a (simple) general tie-breaking rule to break the symmetry.
**Technical highlight: Linearizing "everything".** Given a strategy profile \(s\), the distribution over the entire bidding profile (and therefore the allocation probability, the utility, the best response) can be complicated to compute, especially when multiple players submit the highest bid. To circumvent this issue, we assign a large probability \((1-\delta)\) around value \(0\) for all players, for some polynomially small \(\delta>0\) (this is the reason that our hardness result only applies to inverse-polynomially small \(\epsilon\)). By doing this, the probability that a player bids nonzero is small, so one can ignore higher-order terms. Concretely, let \(p_{i,j}\) be the probability that player \(i\) bids \(b_{j}\) under her strategy, and let \(\Gamma_{i}(b_{j},s_{-i})\) be the allocation probability for player \(i\) when bidding \(b_{j}\), given the other players' strategies \(s_{-i}\). The immediate advantage is that the allocation probability can be approximated as
\[\Gamma_{i}(b_{j},s_{-i})\approx(1-\sum_{i^{\prime}\in[n]\setminus\{i\}}\sum_{j^{\prime}>j}p_{i^{\prime},j^{\prime}})+\sum_{i^{\prime}\in[n]\setminus\{i\}}\Sigma_{i,i^{\prime}}\cdot p_{i^{\prime},j} \tag{1}\]
under a _bilateral_ tie-breaking rule. Here \(\Sigma\in[0,1]^{n\times n}\) specifies the allocation when there is a tie between a pair of players \((i_{1},i_{2})\) and satisfies \(\Sigma+\Sigma^{\top}=(J-I)\). At this stage, it is tempting to use
\(p_{i,j}\) to encode variables of a generalized circuit problem and the choice of best response to encode constraints. In our final construction, we only need three bids \(0=b_{0}<b_{1}<b_{2}\) and the variables are encoded by the jump point \(\tau_{i}\) between \(b_{1},b_{2}\) (i.e., when player \(i\) bids \(b_{2}\) instead of \(b_{1}\)), which has the closed-form expression of
\[\tau_{i}=b_{2}+\frac{\Gamma_{i}(b_{1},s_{-i})\cdot(b_{2}-b_{1})}{\Gamma_{i}(b_ {2},s_{-i})-\Gamma_{i}(b_{1},s_{-i})}. \tag{2}\]
Even after the linearization step of Eq. (1), the above expression is still quite formidable to handle. Our next idea is to restrict the jumping point to a small interval inside \((b_{2},1)\), and to assign only a small total probability mass of \(\beta\delta\) over that interval, where \(\beta\) is another polynomially small value. There is a (fixed) probability mass of \(\delta\) around \(b_{2}\) and around \(1\). One can then perform a first order approximation of Eq. (2) and again linearize the jumping point expression.
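To make the two approximations above concrete, the following minimal Python sketch evaluates the linearized allocation probability of Eq. (1) and the resulting jumping point of Eq. (2). Every concrete number in it (the number of players, the bids, the tie-breaking matrix \(\Sigma\) and the bid probabilities) is an illustrative assumption, not part of the actual construction.

```python
import numpy as np

# Minimal numerical sketch of Eq. (1) and Eq. (2). All numbers are illustrative.
rng = np.random.default_rng(0)
n, b1, b2 = 5, 1e-4, 1e-2
# p[i, j] = probability that player i bids b_j (j = 0, 1, 2); mass concentrated on b_0.
p = np.column_stack([np.full(n, 1 - 3e-3), np.full(n, 1e-3), np.full(n, 2e-3)])
# Bilateral tie-breaking matrix with Sigma + Sigma^T = J - I.
upper = np.triu(rng.random((n, n)), 1)
Sigma = upper + (1 - upper.T) * np.tri(n, k=-1)

def gamma(i, j):
    """First-order approximation of Gamma_i(b_j, s_{-i}), as in Eq. (1)."""
    others = [k for k in range(n) if k != i]
    lose = sum(p[k, jj] for k in others for jj in range(j + 1, 3))  # someone bids higher
    tie = sum(Sigma[i, k] * p[k, j] for k in others)                # win a pairwise tie
    return (1 - lose) + tie

def jumping_point(i):
    """Value at which player i switches from b_1 to b_2, as in Eq. (2)."""
    g1, g2 = gamma(i, 1), gamma(i, 2)
    return b2 + g1 * (b2 - b1) / (g2 - g1)

print([round(jumping_point(i), 3) for i in range(n)])
```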
Incorporating tie-breaking ruleAbstracting away some construction details, the above construction reduces the first-price auction from a fixed point problem of the following form
\[\vec{p}=f(G\vec{p})\quad\text{where}\quad G=2\Sigma^{A}-J+I, \tag{3}\]
where \(\vec{p}\) is the vector of probabilities of bidding \(b_{1}\) (inside the small interval), \(f=(f_{1},\ldots,f_{n})\) is applied coordinate-wise to \(G\vec{p}\), each \(f_{i}\) is a monotone function mapping from \([a_{i},b_{i}]\) (some fixed interval) to \([0,1]\), \(J\) is the all-\(1\) matrix and \(I\) is the identity matrix. The fixed point problem is fairly general and subsumes the generalized circuit problem _if_ \(\Sigma^{A}\) is an arbitrary matrix in \([0,1]^{n\times n}\). Unfortunately, this is not the case due to the constraint \(\Sigma^{A}+(\Sigma^{A})^{\top}=(J-I)\). We resolve the issue by adding an extra pivot player. The pivot player is guaranteed to bid \(b_{0}=0\) and \(b_{2}\) with equal probability \(1/2\). From a high level, the pivot player splits the equilibrium computation into two cases: the case when it bids \(b_{0}\) is similar to before, while the case of bidding \(b_{2}\) introduces another tie-breaking matrix \(\Sigma^{B}\in[0,1]^{n\times n}\) among the original players in \([n]\) (hence the rule becomes trilateral). This transforms the fixed point problem (i.e., Eq. (3)) into a more convenient form
\[\vec{p}=f(G^{\prime}\vec{p})\quad\text{where}\quad G^{\prime}=2\Sigma^{A}+ \Sigma^{B}-J+I, \tag{4}\]
and one can construct gadgets to reduce from the generalized circuit problem. The last step is fairly common and details can be found in Section 3.
## 2 Preliminary
Notation.We write \([n]\) to denote \(\{1,2,\ldots,n\}\) and \([n_{1}:n_{2}]\) to denote \(\{n_{1},n_{1}+1,\ldots,n_{2}\}\). Let \(\mathbf{1}_{i}\) be an indicator vector: it equals the all-\(0\) vector, except that the \(i\)-th coordinate equals \(1\). Let \(\Delta_{n}\) denote the set of all probability distributions over \([n]\). Given a vector \(v\in\mathbb{R}^{n}\) and an index \(i\in[n]\), \(v_{i}\) denotes the \(i\)-th entry of \(v\) while \(v_{-i}\) denotes \((v_{1},v_{2},\ldots,v_{i-1},v_{i+1},\ldots,v_{n})\), i.e., all entries except the \(i\)-th coordinate. We write \(x=y\pm\epsilon\) if \(x\in[y-\epsilon,y+\epsilon]\). Let \(J_{n}\in\mathbb{R}^{n\times n}\) be the \(n\times n\) all-\(1\) matrix and \(I_{n}\) be the \(n\times n\) identity matrix.
### Model
In a Bayesian first-price auction (FPA), there is a single item to sell and the auction is specified by a tuple \((\mathcal{N},\mathcal{B},\mathcal{D},\Gamma)\), where \(\mathcal{N}=[n]\) is the set of players, \(\mathcal{B}\) is the bid space, \(\mathcal{D}\) is the value distribution and \(\Gamma\) is the tie-breaking rule. Each player \(i\in\mathcal{N}\) has a private value \(v_{i}\) of the item that is drawn
from a (continuous) distribution \(\mathcal{D}_{i}\) supported over \([0,1]\) (written as \(v_{i}\sim\mathcal{D}_{i}\)). We consider the standard _independent common prior_ setting -- the joint value distribution \(\mathcal{D}=\mathcal{D}_{1}\times\cdots\times\mathcal{D}_{n}\) is the product distribution of \(\{\mathcal{D}_{i}\}_{i\in[n]}\) and we assume the value profile \(v=(v_{1},\ldots,v_{n})\in[0,1]^{n}\) is drawn from \(\mathcal{D}\). Let \(\mathcal{B}=\{b_{0},b_{1},\ldots,b_{m}\}\subset[0,1]\) be the bid space, where \(0=b_{0}<b_{1}<\cdots<b_{m}\leq 1\).
In a first-price (sealed-bid) auction, each bidder \(i\) submits a bid \(\beta_{i}\in\mathcal{B}\) simultaneously to the seller. The seller assigns the item to the winning player \(i^{*}\) which submits the highest bid, and charges \(i^{*}\) a payment equals to its bid \(\beta_{i^{*}}\).
Allocation and tie-breaking rule.When multiple players submit the same highest bid, the seller assigns and charges the item to one of those winning players, following a prescribed _tie-breaking_ rule \(\Gamma\). A tie-breaking rule \(\Gamma:\{0,1\}^{n}\to\Delta_{n}\) maps a set of winning players \(W\subseteq[n]\) to an allocation profile \(\Gamma(W)\in\Delta_{n}\) supported on \(W\) that specifies the winning probability of each player \(i\in W\) as \(\Gamma_{i}(W)\). Formally, given a bidding profile \(\beta\in\mathcal{B}^{n}\), the set of winning players \(W(\beta)\) consists of those who submit the highest bid
\[W(\beta)=\left\{i\in[n]:\beta_{i}=\max_{j\in[n]}\beta_{j}\right\}.\]
The tie-breaking rule \(\Gamma(W(\beta))\in\Delta_{n}\) specifies the winning probability of each player in \(W(\beta)\), and \(\Gamma_{i}(W(\beta))\) is the probability that bidder \(i\) obtains the item under the bidding profile \(\beta\). The tie-breaking rule needs to satisfy (1) \(\Gamma_{i}(W(\beta))>0\) only if \(i\in W(\beta)\), i.e., the item is assigned only to players with the highest bid; and (2) \(\sum_{i\in[n]}\Gamma_{i}(W(\beta))=1\), i.e., the total allocation is 1. When there is no confusion, we also abbreviate \(\Gamma(\beta)=\Gamma(W(\beta))\).
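As a small illustration of the notation, the following Python sketch computes the winning set \(W(\beta)\) and the induced allocation under the uniform tie-breaking rule, the simplest instance of \(\Gamma\); the bidding profile used is made up for the example.

```python
def winning_set(bids):
    """Indices of the players who submit the highest bid."""
    top = max(bids)
    return [i for i, b in enumerate(bids) if b == top]

def uniform_allocation(bids):
    """Uniform tie-breaking: split the item evenly among the winning set."""
    W = winning_set(bids)
    return [1.0 / len(W) if i in W else 0.0 for i in range(len(bids))]

bids = [0.3, 0.5, 0.5, 0.1]
print(winning_set(bids))         # [1, 2]
print(uniform_allocation(bids))  # [0.0, 0.5, 0.5, 0.0]
```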
It is known that the tie-breaking rule plays a subtle yet critical role in the equilibrium of a Bayesian FPA. Our hardness result is built upon the _trilateral_ tie-breaking rule, a simple generalization of the commonly used uniform tie-breaking method.
**Definition 2.1** (Trilateral tie-breaking).: _A trilateral tie-breaking rule \(\Gamma\) is specified by the following tuples of nonnegative numbers_
\[\left(w_{i,j}:1\leq i<j\leq n\right)\quad\text{and}\quad\left(\sigma^{(1)}_{i, j,k},\sigma^{(2)}_{i,j,k}:1\leq i<j<k\leq n\right)\]
_such that \(w_{i,j}\leq 1\) and \(\sigma^{(1)}_{i,j,k}+\sigma^{(2)}_{i,j,k}\leq 1\). Given a bidding profile \(\beta\in\mathcal{B}^{n}\) and the winning set \(W(\beta)\), the item is distributed according to \(\Gamma\) as follows_
1. _If_ \(W(\beta)=\{i\}\) _for some_ \(i\in[n]\)_, then_ \(\Gamma_{i}(\beta)=1\)_;_
2. _If_ \(W(\beta)=\{i,j\}\) _for some_ \(1\leq i<j\leq n\)_, then_ \(\Gamma_{i}(\beta)=w_{i,j}\) _and_ \(\Gamma_{j}(\beta)=1-w_{i,j}\)_;_
3. _If_ \(W(\beta)=\{i,j,k\}\) _for some_ \(1\leq i<j<k\leq n\)_, then_ \(\Gamma_{i}(\beta)=\sigma^{(1)}_{i,j,k}\)_,_ \(\Gamma_{j}(\beta)=\sigma^{(2)}_{i,j,k}\) _and_ \(\Gamma_{k}(\beta)=1-\sigma^{(1)}_{i,j,k}-\sigma^{(2)}_{i,j,k}\)_; and_
4. _When_ \(|W(\beta)|\geq 4\)_, the item is evenly distributed among players in_ \(W(\beta)\)_._ 2__
Footnote 2: We note our hardness result actually holds regardless of the tie-breaking rule among more than 3 players (i.e., not necessarily uniform).
Equilibrium and strategyGiven a tie-breaking rule \(\Gamma\) and a bidding profile \(\beta=(\beta_{1},\ldots,\beta_{n})\), the _ex-post_ utility of a bidder \(i\) is given by
\[u_{i}(v_{i};\beta_{i},\beta_{-i})=(v_{i}-\beta_{i})\cdot\Gamma_{i}(\beta).\]
A strategy \(s_{i}:[0,1]\to\mathcal{B}\) of player \(i\) is a map from her (private) value \(v_{i}\) to a bid \(s_{i}(v_{i})\in\mathcal{B}\), with the following two properties:
* **No overbidding**. A player never submits a bid larger than her private value, i.e., \(s_{i}(v_{i})\leq v_{i}\) for all \(v_{i}\in[0,1]\).
* **Monotonicity.**\(s_{i}\) is a non-decreasing function.
These are common assumptions in the first-price auction literature [14, 15, 16] and they rule out spurious equilibria in Bayesian auctions [1]. Due to the monotonicity assumption, one can write a strategy \(s_{i}\) as \(m\) thresholds \(0\leq\tau_{i,1}\leq\cdots\leq\tau_{i,m}\leq 1\), where player \(i\) bids \(b_{j}\) in the interval \((\tau_{i,j},\tau_{i,j+1}]\)3. Here we set by default \(\tau_{i,0}=0\) and \(\tau_{i,m+1}=1\).
Footnote 3: If the valuation distribution contains a point mass, then the strategy might be randomized at the point mass.
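The threshold representation above can be made concrete with a short sketch: given the thresholds \(\tau_{i,1}\leq\cdots\leq\tau_{i,m}\), the strategy maps a value to the bid of the interval containing it. The bids and thresholds below are illustrative.

```python
from bisect import bisect_left

def strategy_from_thresholds(bids, tau):
    """bids = [b_0, ..., b_m]; tau = [tau_1, ..., tau_m], non-decreasing.
    Player bids b_j on the value interval (tau_j, tau_{j+1}]."""
    def s(v):
        j = bisect_left(tau, v)  # number of thresholds strictly below v
        return bids[j]
    return s

s_i = strategy_from_thresholds(bids=[0.0, 0.2, 0.4], tau=[0.25, 0.55])
print([s_i(v) for v in (0.1, 0.3, 0.6)])  # [0.0, 0.2, 0.4]
```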
The \(\epsilon\)-approximate Bayesian Nash equilibrium (\(\epsilon\)-approximate BNE) of an FPA is defined as follows.
**Definition 2.2** (\(\epsilon\)-approximate Bayesian Nash equilibrium).: _Let \(n,m\geq 2\). Given a first-price auction (\(\mathcal{N},\mathcal{B},\mathcal{D},\Gamma\)), a strategy profile \(s=(s_{1},\ldots,s_{n})\) is an \(\epsilon\)-approximate Bayesian Nash equilibrium (\(\epsilon\)-approximate BNE) if for any player \(i\in[n]\), we have_
\[\operatorname*{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}u_{i}(v_{i};s_{i}(v_{i}), s_{-i}(v_{-i}))\big{]}\geq\operatorname*{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}u_{i} (v_{i};\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))\big{]}-\epsilon,\]
_where \(\mathsf{bs}(v_{i},s_{-i})\in\mathcal{B}\) is the best response of player \(i\) given other players' strategy \(s_{-i}\), i.e._
\[\mathsf{bs}(v_{i},s_{-i})\in\arg\max_{b\in\mathcal{B}}\operatorname*{\mathbb{ E}}_{v_{-i}\sim\mathcal{D}_{-i}}\big{[}u_{i}(v_{i};b,s_{-i}(v_{-i}))\big{]}.\]
The existence and the PPAD membership of finding a \(1/\operatorname{poly}(n)\)-approximate BNE can be established via a similar approach of [16] (In particular, Theorem 4.1 and Theorem 4.4 of [16]), and we omit the standard proof here.
We shall also use another notion of equilibrium which is more convenient in our hardness reduction. The \(\epsilon\)-approximately well-supported Bayesian Nash equilibrium (\(\epsilon\)-BNE) is defined as4
Footnote 4: We note the \(\epsilon\)-approximate BNE is known also _ex-ante_ approximate BNE, and the \(\epsilon\)-BNE is known as _ex-interim_ approximate BNE in some of the literature.
**Definition 2.3** (\(\epsilon\)-approximately well-supported Bayesian Nash equilibrium).: _Let \(n,m\geq 2\). Given a first-price auction (\(\mathcal{N},\mathcal{B},\mathcal{D},\Gamma\)), a strategy profile \(s=(s_{1},\ldots,s_{n})\) is an \(\epsilon\)-approximately well-supported Bayesian Nash equilibrium (\(\epsilon\)-BNE) if for any player \(i\in[n]\) and \(v_{i}\in[0,1]\), we have_
\[\operatorname*{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}\big{[}u_{i}(v_{i};s_{i }(v_{i}),s_{-i}(v_{-i}))\big{]}\geq\operatorname*{\mathbb{E}}_{v_{-i}\sim \mathcal{D}_{-i}}\big{[}u_{i}(v_{i};\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i})) \big{]}-\epsilon.\]
The notion of \(\epsilon\)-BNE and \(\epsilon\)-approximate BNE can be reduced to each other in polynomial time, losing at most a polynomial factor of precision. It is clear that an \(\epsilon\)-BNE is also an \(\epsilon\)-approximate BNE. Lemma 2.4 states the other direction and the proof is deferred to the appendix.
**Lemma 2.4**.: _Given a first-price auction (\(\mathcal{N},\mathcal{B},\mathcal{D},\Gamma\)) and an \(\epsilon\)-approximate BNE \(s\), there is a polynomial time algorithm that maps \(s\) to an \(\epsilon^{\prime}\)-BNE, where \(\epsilon^{\prime}=(2n+10)\sqrt{\epsilon}\)._
## 3 PPAD-hardness
Recall our main hardness result
**Theorem 1.1** (Computational hardness).: _It is PPAD-complete to find an \(\epsilon\)-approximate Bayesian Nash equilibrium in a first-price auction under a trilateral tie-breaking rule for \(\epsilon=1/\operatorname{poly}(n)\)._
In the rest of this section, we construct the hard instances of FPA in Section 3.1 and provide some basic properties in Section 3.2. We reduce from the generalized circuit problem in Section 3.3.
### Construction of first price auctions
By Lemma 2.4, it suffices to prove that finding an \(\epsilon\)-BNE is hard for some \(\epsilon=1/\operatorname{poly}(n)\). We will use the following three parameters in the construction:
\[\epsilon=\frac{1}{n^{40}},\quad\delta=\frac{1}{n^{10}}\quad\text{and}\quad \beta=\frac{1}{n^{4}}.\]
We describe the bidding space \(\mathcal{B}\), the valuation distribution \(\mathcal{D}\) and the tie-breaking rule \(\Gamma\).
Bidding space.The bidding space \(\mathcal{B}=\{b_{0},b_{1},b_{2}\}\) contains 3 bids in total, where \(b_{0}=0\),
\[b_{1}=\frac{\delta^{2}}{n^{4}}\quad\text{and}\quad b_{2}=\frac{\delta}{n^{2}}.\]
Valuation distribution.There are \(n+1\) players -- \(n\) standard players indexed by \([n]\) and one pivot player \(n+1\). We will describe the value distribution \(\mathcal{D}_{i}\) of player \(i\) by specifying its density function \(p_{i}:[0,1]\to\mathbb{R}^{+}\). The density function \(p_{n+1}\) of the pivot player is set as follows:
\[p_{n+1}(v)=\begin{cases}1/(2\epsilon)&v\in[0,\epsilon]\\ 1/(2\epsilon)&v\in[1-\epsilon,1]\\ 0&\text{otherwise}\end{cases}.\]
In other words, \(\mathcal{D}_{n+1}\) has \(0.5\) probability mass around \(0\) and \(0.5\) probability mass around \(1\).
The density function \(p_{i}\) of each standard player \(i\in[n]\) is set as follows:
\[p_{i}(v)=\begin{cases}(1-(2+\beta)\delta)/\epsilon&v\in[0,\epsilon]\\ \delta/\epsilon&v\in[b_{2}-\epsilon,b_{2}]\\ \widetilde{p}_{i}(v)&v\in(b_{2},1-\epsilon)\\ \delta/\epsilon&v\in[1-\epsilon,1]\end{cases}\]
where \(\widetilde{p}_{i}(v)\) is defined over \((b_{2},1-\epsilon)\) and satisfies \(\int_{b_{2}}^{1-\epsilon}\widetilde{p}_{i}(v)\mathrm{d}v=\beta\delta\), but will only be specified later in the reduction in Section 3.3. In short, a standard player \(i\) has most of its probability mass around \(0\), \(\delta\) mass around \(b_{2}\), \(\delta\) mass around \(1\), and \(\beta\delta\) mass in \((b_{2},1-\epsilon)\) to be specified later.
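As a quick sanity check of the construction so far, the following sketch computes the three bids and the probability-mass split of a standard player's value distribution for a given \(n\), and verifies that the masses sum to one and that \(b_{0}<b_{1}<b_{2}\); the value of \(n\) used is illustrative.

```python
def instance_parameters(n):
    """Bids and probability-mass bookkeeping of the hard instance (sketch only)."""
    eps, delta, beta = n ** -40, n ** -10, n ** -4
    b0, b1, b2 = 0.0, delta ** 2 / n ** 4, delta / n ** 2
    masses = {"near 0": 1 - (2 + beta) * delta, "near b_2": delta,
              "(b_2, 1 - eps)": beta * delta, "near 1": delta}
    assert b0 < b1 < b2 < 1 and abs(sum(masses.values()) - 1.0) < 1e-12
    return (eps, delta, beta), (b0, b1, b2), masses

print(instance_parameters(n=3))
```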
Tie-breaking rule.We describe the trilateral tie-breaking rule \(\Gamma\) as follows. For any bidding profile \(\beta\) with \(2\leq|W(\beta)|\leq 3\), the tie-breaking rule depends on the presence of \(n+1\) in \(W(\beta)\):
* Suppose \(n+1\notin W(\beta)\). Then
* If \(|W(\beta)|=2\), i.e., \(W(\beta)=\{i,j\}\), the tie-breaking rule is given by a matrix \(\Sigma^{A}\in[0,1]^{n\times n}\) such that player \(i\) obtains \(\Sigma^{A}_{i,j}\) unit of the item and player \(j\) obtains \(\Sigma^{A}_{j,i}\) unit. So the matrix \(\Sigma^{A}\) needs to satisfy \(\Sigma^{A}+(\Sigma^{A})^{\top}=(J_{n}-I_{n})\). We will specify \(\Sigma^{A}\) in the reduction later but will guarantee that all of its off-diagonal entries lie in \([1/4,3/4]\).
* If \(|W(\beta)|=3\), then we use the uniform allocation.
* Suppose \(n+1\in W(\beta)\). Then
* If \(|W(\beta)|=2\), then the item is fully allocated to the pivot player \(n+1\).
* If \(|W(\beta)|=3\), i.e., \(W(\beta)=\{i,j,n+1\}\), then the tie-breaking is given by a matrix \(\Sigma^{B}\in[0,1]^{n\times n}\) such that player \(i\) obtains \(\Sigma^{B}_{i,j}\) unit of the item, player \(j\) obtains \(\Sigma^{B}_{j,i}\) unit and player \(n+1\) obtains \(1-\Sigma^{B}_{i,j}-\Sigma^{B}_{j,i}\) unit. So the matrix \(\Sigma^{B}\) needs to satisfy \(\Sigma^{B}+(\Sigma^{B})^{\top}\leq J_{n}-I_{n}\), i.e., \(\Sigma^{B}+(\Sigma^{B})^{\top}\) is entrywise dominated by \(J_{n}-I_{n}\).
### Basic properties
Let \(s=(s_{1},\ldots,s_{n+1})\) be an \(\epsilon\)-BNE of the instance. We prove a few properties of \(s\) in this subsection. Given \(s\), for each player \(i\) we define \(f_{i}:\mathcal{B}\to[0,1]\) and \(F_{i}:\mathcal{B}\to[0,1]\) as follows:
\[f_{i}(b)=\Pr_{v_{i}\sim\mathcal{D}_{i}}[s_{i}(v_{i})=b]\quad\text{and}\quad F _{i}(b)=\Pr_{v_{i}\sim\mathcal{D}_{i}}[s_{i}(v_{i})\leq b].\]
In the rest of this section, we abbreviate
\[\Gamma_{i}(b,s_{-i}):=\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[\Gamma_{i}(b,s_{-i}(v_{-i}))]\quad\text{and}\quad u_{i}(v_{i};b,s_{-i}):=\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[u_{i}(v_{i};b,s_{-i}(v_{-i}))]\]
when there is no confusion.
We start with the following lemma.
**Lemma 3.1** (Separable bid).: _In any \(\epsilon\)-BNE, the equilibrium strategy \(s\) satisfies_
* _For a standard player_ \(i\in[n]\)_, its equilibrium strategy satisfies_
* _when_ \(v_{i}\in[0,\epsilon]\)_,_ \(s_{i}(v_{i})=b_{0}\)_;_
* _when_ \(v_{i}\in[b_{2}-\epsilon,b_{2}]\)_,_ \(s_{i}(v_{i})=b_{1}\)_; and_
* _when_ \(v_{i}\in[1-\epsilon,1]\)_,_ \(s_{i}(v_{i})=b_{2}\)_._
* _For the pivot player, its equilibrium strategy satisfies_
* _when_ \(v_{n+1}\in[0,\epsilon]\)_,_ \(s_{n+1}(v_{n+1})=b_{0}\)_; and_
* _when_ \(v_{n+1}\in[1-\epsilon,1]\)_,_ \(s_{n+1}(v_{n+1})=b_{2}\)_._
Proof.: The claim that \(s_{i}(v_{i})=b_{0}\) for \(v_{i}\in[0,\epsilon]\) holds trivially for all \(i\in[n+1]\) due to the no-overbidding assumption, since \(\epsilon<b_{1}\). A standard player \(i\) chooses between \(b_{0}\) and \(b_{1}\) when \(v_{i}\in[b_{2}-\epsilon,b_{2}]\). The allocation probability of bidding \(b_{0}\) satisfies \(\Gamma_{i}(b_{0},s_{-i})\leq\frac{1}{n+1}\), hence the utility of bidding \(b_{0}=0\) is at most
\[u_{i}(v_{i};b_{0},s_{-i})=(v_{i}-b_{0})\cdot\Gamma_{i}(b_{0},s_{-i})\leq\frac{ 1}{n}b_{2}.\]
The allocation probability of bidding \(b_{1}\) is at least
\[\Gamma_{i}(b_{1},s_{-i})\geq\prod_{j\in[n+1]\setminus\{i\}}f_{j}(b_{0})\geq(1-(2+\beta)\delta)^{n}\cdot\frac{1}{2}\geq\frac{1}{3}\]
hence the utility of bidding \(b_{1}\) is at least
\[u_{i}(v_{i};b_{1},s_{-i})=(v_{i}-b_{1})\cdot\Gamma_{i}(b_{1},s_{-i})\geq(b_{2}- \epsilon-b_{1})\cdot\frac{1}{3}>u_{i}(v;b_{0},s_{-i})+\epsilon.\]
Finally, we analyse the equilibrium strategy around \(v\in[1-\epsilon,1]\) for all \(n+1\) players. Via an analysis similar to the above argument, it is clear that both standard players and the pivot player would choose between \(b_{1}\) and \(b_{2}\). For a standard player \(i\in[n]\), the allocation probability of bidding \(b_{2}\) satisfies
\[\Gamma_{i}(b_{2},s_{-i}) \geq F_{n+1}(b_{1})\cdot\prod_{j\in[n]\setminus\{i\}}F_{j}(b_{1}) \tag{5}\] \[\geq F_{n+1}(b_{1})\cdot\left(\prod_{j\in[n]\setminus\{i\}}f_{j}(b_{0})+\sum_{j\in[n]\setminus\{i\}}f_{j}(b_{1})\prod_{r\in[n]\setminus\{i,j\}}f_{r}(b_{0})\right)\] \[\geq \frac{1}{2}\left(\prod_{j\in[n]\setminus\{i\}}f_{j}(b_{0})+\sum_{j\in[n]\setminus\{i\}}f_{j}(b_{1})\prod_{r\in[n]\setminus\{i,j\}}f_{r}(b_{0})\right)+\frac{1}{2}f_{n+1}(b_{1}),\]
where the last step holds as \(f_{n+1}(b_{0})=\frac{1}{2}\) and
\[\prod_{j\in[n]\setminus\{i\}}f_{j}(b_{0})\geq(1-(2+\beta)\delta)^{n-1}\geq\frac{1}{2}. \tag{6}\]
The allocation probability of bidding \(b_{1}\) satisfies
\[\Gamma_{i}(b_{1},s_{-i}) \leq f_{n+1}(b_{0})\cdot\left(\prod_{j\in[n]\setminus\{i\}}f_{j}(b_{0})+\sum_{j\in[n]\setminus\{i\}}f_{j}(b_{1})\cdot\Sigma_{i,j}^{A}\prod_{r\in[n]\setminus\{i,j\}}f_{r}(b_{0})+9n^{2}\delta^{2}\right) \tag{7}\] \[+ f_{n+1}(b_{1})\cdot\sum_{j\in[n]\setminus\{i\}}f_{j}(b_{1})\] \[\leq \frac{1}{2}\left(\prod_{j\in[n]\setminus\{i\}}f_{j}(b_{0})+\sum_{j\in[n]\setminus\{i\}}f_{j}(b_{1})\cdot\Sigma_{i,j}^{A}\prod_{r\in[n]\setminus\{i,j\}}f_{r}(b_{0})\right)+f_{n+1}(b_{1})\cdot 3n\delta+9n^{2}\delta^{2}.\]
Here the first step holds since (1) when the pivot player bids \(b_{0}\), player \(i\) obtains \(\Sigma_{i,j}^{A}\) unit of item when (only) player \(j\) bids \(b_{1}\), and the probability of at least two players bidding \(b_{1}\) is bounded as
\[\sum_{j,r\in[n]\setminus\{i\}}\Pr[s_{r}(v_{r})=b_{1}\wedge s_{j}(v_{j})=b_{1} ]\leq n^{2}\cdot 9\delta^{2},\]
(2) when the pivot player bids \(b_{1}\), player \(i\) obtains the item only if at least one other standard player \(j\) bids \(b_{1}\), as the tie-breaking rule assigns the item fully to player \(n+1\) when there are only two winners. The second step follows from \(f_{j}(b_{1})\leq 3\delta\) and the fact that the pivot player does not bid \(b_{0}\) in \([1-\epsilon,1]\), so \(f_{n+1}(b_{0})=1/2\).
Subtracting Eq. (7) from Eq. (5), one obtains
\[\Gamma_{i}(b_{2},s_{-i})-\Gamma_{i}(b_{1},s_{-i})\] \[\geq \frac{1}{2}\sum_{j\in[n]\setminus\{i\}}f_{j}(b_{1})\cdot(1-\Sigma_{i,j}^{A})\prod_{r\in[n]\setminus\{i,j\}}f_{r}(b_{0})+\frac{1}{2}f_{n+1}(b_{1})-3n\delta f_{n+1}(b_{1})-9n^{2}\delta^{2}\] \[\geq \frac{1}{2}\cdot(n-1)\cdot\delta\cdot\frac{1}{4}\cdot\frac{1}{2}+\frac{1}{2}f_{n+1}(b_{1})-3n\delta f_{n+1}(b_{1})-9n^{2}\delta^{2}\geq\frac{n\delta}{32}. \tag{8}\]
The second step holds due to \(f_{j}(b_{1})\geq\delta\), \(\Sigma^{A}_{i,j}\in[1/4,3/4]\) and Eq. (6). Hence player \(i\) prefers \(b_{2}\) to \(b_{1}\) at any value \(v_{i}\in[1-\epsilon,1]\), since
\[u_{i}(v_{i};b_{2},s_{-i})-u_{i}(v_{i};b_{1},s_{-i}) = (v_{i}-b_{2})\cdot\Gamma_{i}(b_{2},s_{-i})-(v_{i}-b_{1})\cdot \Gamma_{i}(b_{1},s_{-i})\] \[\geq v_{i}\cdot(\Gamma_{i}(b_{2},s_{-i})-\Gamma_{i}(b_{1},s_{-i}))-b_ {2}\] \[\geq (1-\epsilon)\cdot\frac{n\delta}{32}-b_{2}>\epsilon.\]
Finally, for the pivot player \(n+1\), the allocation probability of bidding \(b_{1}\) satisfies
\[\Gamma_{n+1}(b_{1},s_{-i})\leq\ \prod_{i\in[n]}F_{i}(b_{1}) \tag{9}\]
and the allocation probability of \(b_{2}\) satisfies
\[\Gamma_{n+1}(b_{2},s_{-i}) \geq \prod_{i\in[n]}F_{i}(b_{1})+\sum_{i\in[n]}f_{i}(b_{2})\prod_{j\in[n]\setminus\{i\}}F_{j}(b_{1}) \tag{10}\] \[\geq \prod_{i\in[n]}F_{i}(b_{1})+\frac{n\delta}{2}.\]
The first step holds since the tie-breaking rule favors player \(n+1\) when at most one player in \([n]\) bids \(b_{2}\); the second step holds due to \(f_{i}(b_{2})\geq\delta\) and Eq. (6). Hence, at any \(v_{n+1}\in[1-\epsilon,1]\) we have
\[u_{n+1}(v_{n+1};b_{2},s_{-i})-u_{n+1}(v;b_{1},s_{-i}) = (v_{n+1}-b_{2})\cdot\Gamma_{n+1}(b_{2},s_{-i})-(v_{n+1}-b_{1}) \cdot\Gamma_{n+1}(b_{1},s_{-i})\] \[\geq v_{n+1}\cdot(\Gamma_{n+1}(b_{2},s_{-i})-\Gamma_{n+1}(b_{1},s_{-i }))-b_{2}\] \[\geq (1-\epsilon)\cdot\frac{n\delta}{2}-b_{2}>\epsilon.\]
We conclude the proof of the lemma here.
Lemma 3.1 confirms that \(b_{0},b_{1},b_{2}\) would appear in an \(\epsilon\)-BNE profile for every player \(i\in[n]\). It still remains to determine at which value point a standard player \(i\in[n]\) jumps from \(b_{1}\) to \(b_{2}\) in \(s_{i}\). Let \(\tau_{i}\in(b_{2},1-\epsilon)\) be the jumping point from \(b_{1}\) to \(b_{2}\) of a standard player \(i\). The following formula is convenient to use.
**Lemma 3.2** (Jumping point formula).: _The jumping point \(\tau_{i}\) of a standard player \(i\in[n]\) satisfies_
\[\tau_{i}=b_{2}+\frac{\Gamma_{i}(b_{1},s_{-i})\cdot(b_{2}-b_{1})}{\Gamma_{i}(b_ {2},s_{-i})-\Gamma_{i}(b_{1},s_{-i})}\pm O\left(\frac{\epsilon}{\delta}\right).\]
Proof.: At any value point \(v\in[0,1]\), recall the utility of bidding \(b_{1}\) equals
\[u_{i}(v_{i};b_{1},s_{-i})=(v_{i}-b_{1})\Gamma_{i}(b_{1},s_{-i})\]
and the utility of bidding \(b_{2}\) equals
\[u_{i}(v_{i};b_{2},s_{-i})=(v_{i}-b_{2})\Gamma_{i}(b_{2},s_{-i}).\]
Solving for \(u_{i}(\tau_{i},b_{1},s_{-i})=u_{i}(\tau_{i},b_{2},s_{-i})\pm\epsilon\), one obtains
\[\tau_{i} = \frac{\Gamma_{i}(b_{2},s_{-i})b_{2}-\Gamma_{i}(b_{1},s_{-i})b_{1} \pm\epsilon}{\Gamma_{i}(b_{2},s_{-i})-\Gamma_{i}(b_{1},s_{-i})}\] \[= b_{2}+\frac{\Gamma_{i}(b_{1},s_{-i})\cdot(b_{2}-b_{1})\pm \epsilon}{\Gamma_{i}(b_{2},s_{-i})-\Gamma_{i}(b_{1},s_{-i})}\] \[= b_{2}+\frac{\Gamma_{i}(b_{1},s_{-i})\cdot(b_{2}-b_{1})}{\Gamma_{ i}(b_{2},s_{-i})-\Gamma_{i}(b_{1},s_{-i})}\pm O\left(\frac{\epsilon}{\delta} \right).\]
The last step follows from Eq. (8), and this finishes the proof of the lemma.
Let \(x_{i}\in[0,\beta\delta]\) be the probability mass over the interval \((b_{2},\tau_{i})\), i.e., \(x_{i}=\int_{b_{2}}^{\tau_{i}}p_{i}(v)\mathrm{d}v\), which we will refer to as the jumping probability of \(s_{i}\). We state a few facts that will be used repeatedly.
**Lemma 3.3** (Basic facts).:
* _For any standard player_ \(i\in[n]\)_, we have_ \[\prod_{j\in[n]\setminus\{i\}}f_{j}(b_{0})=1-(n-1)(2+\beta)\delta\pm O(n^{2} \delta^{2})\] _and_ \[\prod_{j\in[n]\setminus\{i\}}F_{j}(b_{1})=1-(n-1)(\beta+1)\delta+\sum_{j\in[ n]\setminus\{i\}}x_{j}\pm O(n^{2}\delta^{2}).\]
* _For any_ \(e\in\{1,2\}\)_, we have_ \[\sum_{i\neq j\in[n]}\Pr\big{[}s_{i}(v_{i})=b_{e}\wedge s_{j}(v_{j})=b_{e}\big{]}=O(n^{2}\delta^{2}).\]
Proof.: For the first claim, we have
\[\prod_{j\in[n]\setminus\{i\}}f_{j}(b_{0})=(1-(2+\beta)\delta)^{n-1}=1-(n-1)( 2+\beta)\delta\pm O(n^{2}\delta^{2})\]
due to the choice of \(\delta\). Similarly we have (using \(x_{j}\in[0,\beta\delta]\))
\[\prod_{j\in[n]\setminus\{i\}}F_{j}(b_{1}) =\prod_{j\in[n]\setminus\{i\}}(1-(1+\beta)\delta+x_{j})\] \[=1-(n-1)(1+\beta)\delta+\sum_{j\in[n]\setminus\{i\}}x_{j}\pm O(n^{2}\delta^{2}).\]
For the second claim, for any \(e\in\{1,2\}\), we have
\[\sum_{i\neq j\in[n]}\Pr\big{[}s_{i}(v_{i})=b_{e}\wedge s_{j}(v_{j})=b_{e} \big{]}\leq\sum_{i\neq j\in[n]}(1+\beta)\delta^{2}=O(n^{2}\delta^{2}).\]
We conclude the proof here.
The key step is to determine the jumping point, for which we use a first-order approximation.
**Lemma 3.4** (Jumping point).: _The jumping point \(\tau_{i}\) of a standard player \(i\in[n]\) satisfies_
\[\tau_{i}=\left(\frac{\Delta_{i,1}-\Delta_{i,2}}{\Delta_{i,1}^{2}}\pm O\left( \frac{\beta^{2}}{\Delta_{i,1}}\right)\right)\cdot b_{2}\]
_where_
\[\Delta_{i,1} :=\,(n-1)\delta+\sum_{j\in[n]\setminus\{i\}}\left(\beta\delta \cdot\Sigma_{i,j}^{A}+(1+\beta)\delta\cdot\Sigma_{i,j}^{B}\right)\in\big{[}(n -1)\delta,2n\delta\big{]}\quad\text{and}\] \[\Delta_{i,2} :=\,\sum_{j\in[n]\setminus\{i\}}\left(1-2\Sigma_{i,j}^{A}-\Sigma _{i,j}^{B}\right)x_{j}\in\big{[}-2n\beta\delta,n\beta\delta\big{]}.\]
Proof.: For any standard player \(i\in[n]\), we compute when it jumps from \(b_{1}\) to \(b_{2}\) using the formula in Lemma 3.2. To do so, we first compute \(\Gamma_{i}(b_{1},s_{-i})\) and \(\Gamma_{i}(b_{2},s_{-i})\).
\[\Gamma_{i}(b_{1},s_{-i}) =f_{n+1}(b_{0})\cdot\left(\prod_{j\in[n]\setminus\{i\}}f_{j}(b_{0})+\sum_{j\in[n]\setminus\{i\}}f_{j}(b_{1})\prod_{r\in[n]\setminus\{i,j\}}f_{r}(b_{0})\cdot\Sigma_{i,j}^{A}\pm O(n^{2}\delta^{2})\right)\] \[=\frac{1}{2}\left(1-(n-1)(2+\beta)\delta+\sum_{j\in[n]\setminus\{i\}}(\delta+x_{j})\Sigma_{i,j}^{A}\right)\pm O(n^{2}\delta^{2}). \tag{11}\]
Here the first step follows from the tie-breaking rule and the second claim of Lemma 3.3; the second step follows from \(f_{j}(b_{1})=\delta+x_{j}\) and the first claim of Lemma 3.3.
The allocation probability of bidding \(b_{2}\) obeys
\[\Gamma_{i}(b_{2},s_{-i}) =f_{n+1}(b_{0})\cdot\left(\prod_{j\in[n]\setminus\{i\}}F_{j}(b_{1})+\sum_{j\in[n]\setminus\{i\}}f_{j}(b_{2})\prod_{r\in[n]\setminus\{i,j\}}F_{r}(b_{1})\cdot\Sigma_{i,j}^{A}\pm O(n^{2}\delta^{2})\right)\] \[\quad+f_{n+1}(b_{2})\cdot\left(\sum_{j\in[n]\setminus\{i\}}f_{j}(b_{2})\prod_{k\in[n]\setminus\{i,j\}}F_{k}(b_{1})\cdot\Sigma_{i,j}^{B}\pm O(n^{2}\delta^{2})\right)\] \[=\frac{1}{2}\left(1-(n-1)(1+\beta)\delta+\sum_{j\in[n]\setminus\{i\}}x_{j}+\sum_{j\in[n]\setminus\{i\}}(\delta+\beta\delta-x_{j})\Sigma_{i,j}^{A}\right)\] \[\quad+\frac{1}{2}\sum_{j\in[n]\setminus\{i\}}(\delta+\beta\delta-x_{j})\Sigma_{i,j}^{B}\pm O(n^{2}\delta^{2}).\]
The first step uses the tie-breaking rule and requires some explanation. In particular, (1) when the pivot player \(n+1\) bids \(b_{0}\), player \(i\) obtains \(1\) unit of the item when all other players bid less than \(b_{2}\), and \(\Sigma^{A}_{i,j}\) unit of the item when only player \(j\) also bids \(b_{2}\); we make use of the second claim of Lemma 3.3 to omit the remaining cases; (2) when player \(n+1\) bids \(b_{2}\), player \(i\) obtains \(0\) unit of the item when no other standard player bids \(b_{2}\) and obtains \(\Sigma^{B}_{i,j}\) unit of the item when exactly one other player \(j\) bids \(b_{2}\), and we omit other cases using Lemma 3.3. The second step follows from Lemma 3.3 and \(f_{j}(b_{2})=\delta+\beta\delta-x_{j}\).
Combining the above expression, we have
\[\Gamma_{i}(b_{2},s_{-i})-\Gamma_{i}(b_{1},s_{-i}) =\frac{1}{2}(n-1)\delta+\frac{1}{2}\sum_{j\in[n]\setminus\{i\}} \Big{(}\beta\delta\cdot\Sigma_{i,j}^{A}+(1+\beta)\delta\cdot\Sigma_{i,j}^{B} \Big{)}\] \[\quad+\frac{1}{2}\sum_{j\in[n]\setminus\{i\}}\Big{(}1-2\Sigma_{i,j}^{A}-\Sigma_{i,j}^{B}\Big{)}x_{j}\pm O(n^{2}\delta^{2}). \tag{12}\]
Let \(\Delta_{i,1}\) and \(\Delta_{i,2}\) be defined as in the statement of the lemma. Note that \(\Delta_{i,1}\) does not depend on \(\{x_{j}\}_{j\neq i}\) while \(\Delta_{i,2}\) depends on \(\{x_{j}\}_{j\neq i}\). It is easy to see that
\[\Delta_{i,1}\in\big{[}(n-1)\delta,2n\delta\big{]}\quad\text{and}\quad\Delta_{i,2}\in\big{[}-2(n-1)\beta\delta,(n-1)\beta\delta\big{]}. \tag{13}\]
Finally we can compute the jumping point \(\tau_{i}\) using Lemma 3.2 as follows:
\[\tau_{i} =b_{2}+\frac{\Gamma_{i}(b_{1},s_{-i})\cdot(b_{2}-b_{1})}{\Gamma_{i}( b_{2},s_{-i})-\Gamma_{i}(b_{1},s_{-i})}\pm O\left(\frac{\epsilon}{\delta}\right)\] \[=b_{2}+\frac{1\pm O(n\delta)}{\Delta_{i,1}+\Delta_{i,2}}\cdot b_{ 2}\pm O\left(\frac{\epsilon}{\delta}\right)\] \[=\left(\frac{\Delta_{i,1}-\Delta_{i,2}}{\Delta_{i,1}^{2}}\pm O \left(\frac{\beta^{2}}{\Delta_{i,1}}\right)\right)\cdot b_{2}\]
The second step follows from Eq. (12), \(\Gamma_{i}(b_{1},s_{-i})=(1/2)\pm O(n\delta)\) (see Eq. (11)) and the choice of \(b_{1},b_{2}\). The last step follows from Eq. (13).
### Reduction from generalized circuit
Given \(\alpha<\beta\), we write \(\mathsf{T}_{[\alpha,\beta]}:\mathbb{R}\rightarrow[\alpha,\beta]\) to denote the truncation function with
\[\mathsf{T}_{[\alpha,\beta]}(x)=\min\big{\{}\max\{x,\alpha\},\beta\big{\}}.\]
We recall the generalized circuit problem [10] and present a simplified version from [11].
**Definition 3.5** ((Simplified) generalized circuit).: _A generalized circuit is a tuple \((V,G)\), where \(V\) is a set of nodes and \(G\) is a collection of gates. Each node \(v\in V\) is associated with a gate \(G_{v}\) that falls into one of two types \(\{G_{1-},G_{+}\}\): If \(G_{v}\) is a \(G_{+}\) gate, then it has two input nodes \(v_{1},v_{2}\in V\setminus\{v\}\); if it is a \(G_{1-}\) gate then it takes one input node \(v_{1}\in V\setminus\{v\}\). Given \(\kappa>0\), a \(\kappa\)-approximate solution to \((V,G)\) is an assignment \(x\in[0,1]^{V}\) such that for every node \(v\):_
* _If_ \(G_{v}\) _is a_ \(G_{+}\) _gate and takes input nodes_ \(v_{1},v_{2}\in V\backslash\{v\}\)_, then_ \(x_{v}=\mathsf{T}_{[0,1]}(x_{v_{1}}+x_{v_{2}}\pm\kappa)\)__
* _If_ \(G_{v}\) _is a_ \(G_{1-}\) _gate and takes an input node_ \(v_{1}\in V\backslash\{v\}\)_, then_ \(x_{v}=\mathsf{T}_{[0,1]}(1-x_{v_{1}}\pm\kappa)\)_._
The generalized circuit problem is known to be PPAD-hard for constant \(\kappa\).
**Theorem 3.6** ([11, 11]).: _There is a constant \(\kappa>0\) such that it is PPAD-hard to find an \(\kappa\)-approximate solution of a generalized circuit._
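A small sketch of Definition 3.5 may help fix ideas: it encodes a generalized circuit as a dictionary of gates and checks whether an assignment is a \(\kappa\)-approximate solution, using the equivalent condition \(\mathsf{T}_{[0,1]}(\text{target}-\kappa)\leq x_{v}\leq\mathsf{T}_{[0,1]}(\text{target}+\kappa)\). The tiny circuit at the end is made up for illustration.

```python
def T(x, lo=0.0, hi=1.0):
    """Truncation T_{[lo,hi]} from the text."""
    return min(max(x, lo), hi)

def is_kappa_approximate(gates, x, kappa):
    """gates[v] = ('+', v1, v2) or ('1-', v1); x is an assignment in [0,1]^V."""
    for v, gate in gates.items():
        target = x[gate[1]] + x[gate[2]] if gate[0] == '+' else 1.0 - x[gate[1]]
        # x_v = T(target +/- kappa) for some error in [-kappa, kappa]
        if not (T(target - kappa) <= x[v] <= T(target + kappa)):
            return False
    return True

gates = {'a': ('1-', 'b'), 'b': ('1-', 'a'), 'c': ('+', 'a', 'b')}
print(is_kappa_approximate(gates, {'a': 0.5, 'b': 0.5, 'c': 1.0}, kappa=0.01))  # True
```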
We prove Theorem 1.1 via a reduction from the generalized circuit problem.
Given an instance of generalized circuit defined over nodes set \(V\) (\(|V|=m\)), we let \(V_{1}=[m_{1}]\) be the set of nodes with gate \(G_{+}\) and \(V_{2}=[m_{1}+1:m]\) be the set of nodes with gate \(G_{1-}\). We construct an instance of first price auction with \(n=m_{1}+2(m-m_{1})=2m-m_{1}\) standard players and one pivot player.
Let \(\mathcal{N}=\mathcal{N}_{1}\cup\mathcal{N}_{2}\cup\mathcal{N}_{3}\) be the set of standard players, where \(\mathcal{N}_{1}=[m_{1}]\), \(\mathcal{N}_{2}=[m_{1}+1:m]\) and \(\mathcal{N}_{3}=[m+1:2m-m_{1}]\). From a high level, we use players in \(\mathcal{N}_{1}\) to represent the set of nodes with \(G_{+}\) gates, players in \(\mathcal{N}_{2}\) to represent the set of nodes with \(G_{1-}\) gates. Players in \(\mathcal{N}_{3}\) are used in constructing \(G_{1-}\). We first specify the probability density \(\widetilde{p}\) over interval \((b_{2},1-\epsilon)\) and the tie-breaking matrices \(\Sigma^{A}\) and \(\Sigma^{B}\) to complete the description of the FPA instance.
* For player \(i\) in \(\mathcal{N}_{1}\) (i.e., \(i\in[m_{1}]\)), its valuation distribution \(\widetilde{p}_{i}\) is uniform over the interval \[\left[\frac{1}{\Delta_{i,1}}\cdot b_{2},\left(\frac{1}{\Delta_{i,1}}+\frac{1}{ 10}\cdot\frac{\beta\delta}{\Delta_{i,1}^{2}}\right)\cdot b_{2}\right]\] with a total probability mass of \(\beta\delta\). Let \(i(1),i(2)\in[m]=\mathcal{N}_{1}\cup\mathcal{N}_{2}\) be the input nodes of the \(G_{+}\) gates, we set \(\Sigma^{B}_{i,i(1)}=\Sigma^{B}_{i,i(2)}=1/10\).
* For player \(m_{1}+j\in\mathcal{N}_{2}\) (i.e., \(j\in[m-m_{1}]\)), its valuation distribution \(\widetilde{p}_{m_{1}+j}\) is uniform over \[\left[\left(\frac{1}{\Delta_{m_{1}+j,1}}-\frac{1}{10}\cdot\frac{\beta\delta}{ \Delta_{m_{1}+j,1}^{2}}\right)\cdot b_{2},\frac{1}{\Delta_{m_{1}+j,1}}\cdot b _{2}\right]\] with a total probability mass of \(\beta\delta\). We set \(\Sigma_{m_{1}+j,m+j}^{A}=9/20\).
* For player \(m+j\in\mathcal{N}_{3}\) (i.e., \(j\in[m-m_{1}]\)), its valuation distribution \(\widetilde{p}_{m+j}\) is uniform over \[\left[\left(\frac{1}{\Delta_{m+j,1}}+\frac{1}{10}\cdot\frac{\beta\delta}{ \Delta_{m+j,1}^{2}}\right)\cdot b_{2},\left(\frac{1}{\Delta_{m+j,1}}+\frac{1} {5}\cdot\frac{\beta\delta}{\Delta_{m+j,1}^{2}}\right)\cdot b_{2}\right]\] with a total probability mass of \(\beta\delta\). We set \(\Sigma_{m+j,m_{1}+j}^{A}=11/20\). Let \(j(1)\in[m]\) be the input node of \(G_{1-}\), then set \(\Sigma_{m+j,j(1)}^{B}=1/5\).
* For any entry of \(\Sigma^{A}\) that has not been determined above, we set it to be \(1/2\), and for any entry of \(\Sigma^{B}\) that has not been determined, we set it to be \(0\).
It is easy to verify that \(\Sigma^{A}\) and \(\Sigma^{B}\) satisfy the following properties as promised earlier: (1) the off-diagonal entries of \(\Sigma^{A}\) lie in \([1/4,3/4]\); (2) \(\Sigma^{A}+(\Sigma^{A})^{\top}=J_{n}-I_{n}\); and (3) the off-diagonal entries of \(\Sigma^{B}\) belong to \([0,1/2]\).
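The following sketch spells out how the reduction fills in \(\Sigma^{A}\) and \(\Sigma^{B}\) from a given circuit and checks the three properties listed above. The indices are \(0\)-based and the tiny example circuit is illustrative only.

```python
import numpy as np

def build_tie_breaking(m1, plus_inputs, oneminus_inputs):
    """plus_inputs[i] = (i1, i2): inputs of the i-th G_+ node (i in N1);
    oneminus_inputs[j] = j1: input of the j-th G_{1-} node. 0-indexed throughout."""
    m = m1 + len(oneminus_inputs)
    n = 2 * m - m1                                  # number of standard players
    A = np.full((n, n), 0.5)
    np.fill_diagonal(A, 0.0)
    B = np.zeros((n, n))
    for i, (i1, i2) in enumerate(plus_inputs):      # players in N1
        B[i, i1] = B[i, i2] = 1 / 10
    for j, j1 in enumerate(oneminus_inputs):        # players m1+j in N2 and m+j in N3
        A[m1 + j, m + j], A[m + j, m1 + j] = 9 / 20, 11 / 20
        B[m + j, j1] = 1 / 5
    return A, B

A, B = build_tie_breaking(m1=1, plus_inputs=[(1, 2)], oneminus_inputs=[0, 0])
n = A.shape[0]
off = ~np.eye(n, dtype=bool)
assert np.allclose(A + A.T, np.ones((n, n)) - np.eye(n))    # Sigma^A + (Sigma^A)^T = J - I
assert (A[off] >= 1 / 4).all() and (A[off] <= 3 / 4).all()  # off-diagonal entries in [1/4, 3/4]
assert ((B + B.T) <= np.ones((n, n)) - np.eye(n) + 1e-12).all() and (B >= 0).all()
print(n, "standard players; the tie-breaking matrices satisfy the stated constraints")
```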
Letting \(\kappa:=n\beta=1/n^{3}\), we prove that any \(\epsilon\)-BNE of the first price auction gives an \(O(\kappa)\)-approximate solution to the generalized circuit. Indeed the following lemma shows that by taking \(x_{i}^{\prime}=x_{i}/(\beta\delta)\), where \(x_{i}\) is the jumping probability of \(s_{i}\), we obtain an \(O(\kappa)\)-approximate solution \((x_{1}^{\prime},\ldots,x_{m}^{\prime})\) to the input generalized circuit. This finishes the proof of Theorem 1.1.
**Lemma 3.7**.: _Given an \(\epsilon\)-BNE of the first price auction and let \((x_{1},\ldots,x_{n})\in[0,\beta\delta]^{n}\) be the tuple of jumping probabilities, then we have_
* _For any_ \(i\in[m_{1}]\)_,_ \(x_{i}=\mathsf{T}_{[0,\beta\delta]}(x_{i(1)}+x_{i(2)}\pm O(\kappa\beta\delta))\)__
* _For any_ \(j\in[m-m_{1}]\)_,_ \(x_{m_{1}+j}=\mathsf{T}_{[0,\beta\delta]}(\beta\delta-x_{j(1)}\pm O(\kappa\beta \delta))\)__
Proof.: For the first claim, for any \(i\in[m_{1}]\), one has
\[\Delta_{i,2}=\sum_{r\in[n]\setminus\{i\}}\Big{(}1-2\Sigma_{i,r}^{A}-\Sigma_{ i,r}^{B}\Big{)}x_{r}=-\frac{1}{10}x_{i(1)}-\frac{1}{10}x_{i(2)},\]
where the second step follows from \(\Sigma_{i,r}^{A}=1/2\) for all \(r\in[n]\setminus\{i\}\), \(\Sigma_{i,i(1)}^{B}=\Sigma_{i,i(2)}^{B}=1/10\) and \(\Sigma_{i,r}^{B}=0\) for all other \(r\in[n]\setminus\{i,i(1),i(2)\}\). By Lemma 3.4, one has
\[\tau_{i}=\left(\frac{\Delta_{i,1}-\Delta_{i,2}}{\Delta_{i,1}^{2}}\pm O\left( \frac{\beta^{2}}{\Delta_{i,1}}\right)\right)\cdot b_{2}=\left(\frac{1}{\Delta_ {i,1}}+\frac{1}{10}\cdot\frac{x_{i(1)}+x_{i(2)}}{\Delta_{i,1}^{2}}\pm O\left( \frac{\beta^{2}}{\Delta_{i,1}}\right)\right)\cdot b_{2}.\]
Since \(\mathcal{D}_{i}\) is uniform over \([\frac{1}{\Delta_{i,1}}\cdot b_{2},(\frac{1}{\Delta_{i,1}}+\frac{1}{10}\cdot \frac{\beta\delta}{\Delta_{i,1}^{2}})\cdot b_{2}]\) with probability mass \(\beta\delta\), we have
\[x_{i}=\mathsf{T}_{[0,\beta\delta]}(x_{i(1)}+x_{i(2)}\pm O(\kappa\beta\delta)).\]
For the second claim, we first analyse the jumping probability of player \(m_{1}+j\). We have
\[\Delta_{m_{1}+j,2}=\sum_{r\in[n]\setminus\{m_{1}+j\}}\Big{(}1-2\Sigma_{m_{1}+ j,r}^{A}-\Sigma_{m_{1}+j,r}^{B}\Big{)}x_{r}=\frac{1}{10}x_{m+j},\]
where the second step follows from \(\Sigma^{A}_{m_{1}+j,m+j}=9/20\), \(\Sigma^{A}_{m_{1}+j,r}=1/2\) for all \(r\in[n]\backslash\{m_{1}+j,m+j\}\), and \(\Sigma^{B}_{m_{1}+j,r}=0\) for all \(r\in[n]\backslash\{m_{1}+j\}\). Hence, by Lemma 3.4, we have
\[\tau_{m_{1}+j} = \left(\frac{\Delta_{m_{1}+j,1}-\Delta_{m_{1}+j,2}}{\Delta^{2}_{m _{1}+j,1}}\pm O\left(\frac{\beta^{2}}{\Delta_{m_{1}+j,1}}\right)\right)\cdot b _{2}\] \[= \left(\frac{1}{\Delta_{m_{1}+j,1}}-\frac{1}{10}\cdot\frac{x_{m+j }}{\Delta^{2}_{m_{1}+j,1}}\pm O\left(\frac{\beta^{2}}{\Delta_{m_{1}+j,1}} \right)\right)\cdot b_{2}\]
Given that \(\mathcal{D}_{m_{1}+j}\) is uniform over \([(\frac{1}{\Delta_{m_{1}+j,1}}-\frac{1}{10}\cdot\frac{\beta\delta}{\Delta^{2} _{m_{1}+j,1}})\cdot b_{2},\frac{1}{\Delta_{m_{1}+j,1}}\cdot b_{2}]\) with mass \(\beta\delta\), we have
\[x_{m_{1}+j}=\mathsf{T}_{[0,\beta\delta]}(\beta\delta-x_{m+j}\pm O (\kappa\beta\delta)). \tag{14}\]
It remains to analyse the jumping probability of player \(m+j\), and we have
\[\Delta_{m+j,2} = \sum_{r\in[n]\backslash\{m+j\}}\Big{(}1-2\Sigma^{A}_{m+j,r}- \Sigma^{B}_{m+j,r}\Big{)}x_{r}\] \[= -\frac{1}{10}x_{m_{1}+j}-\frac{1}{5}x_{j(1)}=-\frac{1}{10}\left( \beta\delta-x_{m+j}+2x_{j(1)}\right)\pm O(\kappa\beta\delta).\]
Here the second step follows from \(\Sigma^{A}_{m+j,m_{1}+j}=11/20\), \(\Sigma^{A}_{m+j,r}=1/2\) for \(r\in[n]\backslash\{m_{1}+j,m+j\}\), \(\Sigma^{B}_{m+j,j(1)}=1/5\) and \(\Sigma^{B}_{m+j,r}=0\) for any \(r\in[n]\backslash\{m+j,j(1)\}\). The last step follows from Eq. (14).
Now, by Lemma 3.4, one has
\[\tau_{m+j} = \left(\frac{\Delta_{m+j,1}-\Delta_{m+j,2}}{\Delta^{2}_{m+j,1}} \pm O\left(\frac{\beta^{2}}{\Delta_{m+j,1}}\right)\right)\cdot b_{2}\] \[= \left(\frac{1}{\Delta_{m+j,1}}+\frac{1}{10}\cdot\frac{\beta\delta -x_{m+j}+2x_{j(1)}\pm O(\kappa\beta\delta)}{\Delta^{2}_{m+j,1}}\pm O\left( \frac{\beta^{2}}{\Delta_{m+j,1}}\right)\right)\cdot b_{2}\]
Given that \(\mathcal{D}_{m+j}\) is uniform over \([(\frac{1}{\Delta_{m+j,1}}+\frac{1}{10}\cdot\frac{\beta\delta}{\Delta^{2}_{m +j,1}})\cdot b_{2},(\frac{1}{\Delta_{m+j,1}}+\frac{1}{5}\cdot\frac{\beta\delta }{\Delta^{2}_{m+j,1}})\cdot b_{2}]\) with mass \(\beta\delta\), we conclude that
\[x_{m+j}=\mathsf{T}_{[0,\beta\delta]}(x_{j(1)}\pm O(\kappa\beta \delta)).\]
Plugging into Eq. (14), we obtain
\[x_{m_{1}+j}=\mathsf{T}_{[0,\beta\delta]}(\beta\delta-x_{j(1)} \pm O(\kappa\beta\delta)).\]
This completes the proof of the second claim.
## 4 PTAS
We present a PTAS for computing an \(\epsilon\)-approximate BNE in an FPA under the uniform tie-breaking rule.
**Theorem 1.2** (PTAS under uniform tie-breaking).: _For any \(\epsilon>0\), \(n,m\geq 2\), there is an algorithm that finds an \(\epsilon\)-approximate Bayesian Nash equilibrium using \(O(n^{4}\cdot g(1/\epsilon))\) time under the uniform tie-breaking rule._
Our approach proceeds in the following four steps. Given an FPA \((\mathcal{N},\mathcal{B},\mathcal{D},\Gamma)\) where \(\Gamma\) is the uniform tie-breaking rule, we first round the bidding space \(\mathcal{B}\) and reduce its size to \(O(1/\epsilon)\). We then prune the valuation distribution \(\mathcal{D}\) and work on a weak notion of \((\epsilon,\delta)\)-approximate BNE that relaxes the no-overbidding requirement. In the third step, we argue the existence of an \((\epsilon,\delta)\)-approximate BNE profile over a discretized space and in the final step, we develop a suitable searching algorithm for \((\epsilon,\delta)\)-approximate BNE in the discretized space.
**Step 1: Rounding bids.** Given a first-price auction with bidding space \(\mathcal{B}=\{b_{0},b_{1},\ldots,b_{m}\}\) with \(0=b_{0}<b_{1}<\cdots<b_{m}\leq 1\), we define \(\mathcal{B}_{\epsilon}=\{b_{0,\max},b_{1,\max},\ldots,b_{10\epsilon^{-1},\max}\}\) as follows. First we take \(b_{0,\max}=b_{0}=0\). Then for each \(t\in[10\epsilon^{-1}]\), let \(b_{t,\max}\) be the maximum bid in
\[\mathcal{B}_{t}:=\mathcal{B}\cap\left(\frac{(t-1)\epsilon}{10},\frac{t \epsilon}{10}\right]\]
if \(\mathcal{B}_{t}\) is not empty; and set \(b_{t,\max}=\mathsf{nil}\) (meaning that we don't add an element to \(\mathcal{B}_{\epsilon}\)) if \(\mathcal{B}_{t}\) is empty. We prove that it suffices to find an \((\epsilon/2)\)-approximate BNE over bidding space \(\mathcal{B}_{\epsilon}\).
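A minimal sketch of this rounding step, assuming the bid space is given as a list of numbers in \([0,1]\): each non-empty bucket \(\mathcal{B}_{t}\) contributes its maximum element, and \(b_{0}=0\) is always kept.

```python
import math

def round_bids(bids, eps):
    buckets = {}
    for b in bids:
        if b <= 0:
            continue
        t = math.ceil(10 * b / eps)                # b lies in ((t-1)*eps/10, t*eps/10]
        buckets[t] = max(buckets.get(t, 0.0), b)   # keep the largest bid of the bucket
    return [0.0] + sorted(buckets.values())

print(round_bids([0.0, 0.03, 0.04, 0.11, 0.12, 0.58], eps=1.0))
# [0.0, 0.04, 0.12, 0.58]: one representative per non-empty bucket
```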
**Lemma 4.1**.: _Given any first-price auction \((\mathcal{N},\mathcal{B},\mathcal{D},\Gamma)\), let \(\mathcal{B}_{\epsilon}\) be the rounded bidding space defined above. Then any \((\epsilon/2)\)-approximate BNE of \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D},\Gamma)\) is also an \(\epsilon\)-approximate BNE of \((\mathcal{N},\mathcal{B},\mathcal{D},\Gamma)\)._
Proof.: Let \(s=(s_{1},\ldots,s_{n})\) be an \((\epsilon/2)\)-approximate BNE of \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D},\Gamma)\). By definition, each \(s_{i}\) satisfies
\[\mathop{\mathbb{E}}_{v\sim\mathcal{D}}\left[u_{i}(v_{i};s_{i}(v_{i}),s_{-i}(v _{-i}))-u_{i}(v_{i};\mathsf{bs}_{\epsilon}(v_{i},s_{-i}),s_{-i}(v_{-i}))\right] \geq-\frac{\epsilon}{2},\]
where
\[\mathsf{bs}_{\epsilon}(v_{i},s_{-i}):=\arg\max_{b\in\mathcal{B}_{\epsilon}} \mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}u_{i}(v_{i};b,s_{-i}(v_{-i}))\]
is the best response in \(\mathcal{B}_{\epsilon}\). For each value \(v_{i}\in[0,1]\), let
\[\epsilon_{i}(v_{i}):=\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}\left[u_ {i}(v_{i};s_{i}(v_{i}),s_{-i}(v_{-i}))-u_{i}(v_{i};\mathsf{bs}_{\epsilon}(v_ {i},s_{-i}),s_{-i}(v_{-i}))\right]\]
be the deficiency of utility at value point \(v_{i}\) and we know that \(\mathop{\mathbb{E}}_{v_{i}}[\epsilon_{i}(v_{i})]\geq-\epsilon/2\).
Let \(\mathsf{bs}(v_{i},s_{-i})\in\mathcal{B}\) be the best response in the original bidding space \(\mathcal{B}\). Given that the case of \(\mathsf{bs}(v_{i},s_{-i})=0\) is trivial, we may assume without loss of generality that
\[\mathsf{bs}(v_{i},s_{-i})\in\left(\frac{(t-1)\epsilon}{10},\frac{t\epsilon}{10 }\right],\quad\text{for some $t\in[10\epsilon^{-1}]$};\]
hence \(\mathcal{B}_{t}\neq\emptyset\). Our goal is to prove that the utility of bidding \(b_{t,\max}\) or \(0\) (both of which are in \(\mathcal{B}_{\epsilon}\)) is at most \(\epsilon/10\) worse compared to bidding \(\mathsf{bs}(v_{i},s_{-i})\).
**Case 1.** Suppose \(v_{i}\geq t\epsilon/10\). Then it is valid to bid \(b_{t,\max}\) and we have
\[\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}} \big{[}u_{i}(v_{i};\mathsf{bs}_{\epsilon}(v_{i},s_{-i}),s_{-i}(v_ {-i}))-u_{i}(v_{i};\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))\big{]}\] \[\geq\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}\big{[}u_{i} (v_{i};b_{t,\max},s_{-i}(v_{-i}))-u_{i}(v_{i};\mathsf{bs}(v_{i},s_{-i}),s_{-i} (v_{-i}))\big{]}\] \[=\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[(v_{i}-b_{t, \max})\Gamma_{i}(b_{t,\max},s_{-i}(v_{-i}))-(v_{i}-\mathsf{bs}(v_{i},s_{-i})) \Gamma_{i}(\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))]\] \[\geq\mathsf{bs}(v_{i},s_{-i})-b_{t,\max}\geq-\epsilon/10 \tag{15}\]
where the first step follows from \(b_{t,\max}\in\mathcal{B}_{\epsilon}\), the third step follows from \(v_{i}\geq b_{t,\max}\) and
\[\Gamma_{i}(b_{t,\max},s_{-i}(v_{-i}))\geq\Gamma_{i}(\mathsf{bs}(v_{i},s_{-i}),s _{-i}(v_{-i})).\]
**Case 2.** Suppose \(v_{i}\in(\frac{(t-1)\epsilon}{10},\frac{t\epsilon}{10}]\) (note that we cannot have \(v_{i}<(t-1)\epsilon/10\) due to the no-overbidding assumption). In this case bidder \(i\) may not be able to bid \(b_{t,\max}\) due to the no-overbidding assumption. However, in this case the utility is small anyway, so we can let the bidder bid \(b_{0}=0\) as follows:
\[\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}} \bigl{[}u_{i}(v_{i};\mathsf{bs}_{\epsilon}(v_{i},s_{-i}),s_{-i}(v_ {-i}))-u_{i}(v_{i};\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))\bigr{]}\] \[\geq 0-\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}\bigl{[}u_{ i}(v_{i};\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))\bigr{]}\] \[=\ \mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}\bigl{[}(-v_{i }+\mathsf{bs}(v_{i},s_{-i}))\Gamma_{i}(\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{i}) )\bigr{]}\geq-\epsilon/10. \tag{16}\]
Combining Eq. (15) and Eq. (16), we have proved that
\[\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}\bigl{[}u_{i}(v_{i};s_{i}(v_ {i}),s_{-i}(v_{-i}))-u_{i}(v_{i};\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i})) \bigr{]}\geq\epsilon_{i}(v_{i})-\frac{\epsilon}{10}.\]
Taking an expectation over \(v_{i}\), we complete the proof of the lemma.
**Step 2: Rounding distribution.** Given a first-price auction \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D},\Gamma)\), we would like to round the value distribution \(\mathcal{D}_{i}\) such that it is supported over discrete values, and we truncate off the probability mass if it is too small. Formally, letting \(\delta\in(0,\frac{\epsilon}{20})\) be a parameter to be specified later5, we define \(\mathcal{D}_{i}^{\epsilon,\delta}\) for each \(i\in[n]\) as follows:
Footnote 5: Looking ahead, the parameter \(\delta\) will be much smaller than \(\epsilon\).
\[p_{i}^{\epsilon,\delta}(t)=\Pr_{v_{i}\sim\mathcal{D}_{i}^{\epsilon,\delta}} \left[v_{i}=\frac{t\epsilon}{10}\right]=\max\left\{0,\int_{\frac{t\epsilon}{10 }}^{\frac{(t+1)\epsilon}{10}}\Pr_{v_{i}\sim\mathcal{D}_{i}}[v_{i}=v]\,\mathrm{ d}v-\delta\right\}\]
for each \(t\in[10\epsilon^{-1}-1]\), and define
\[p_{i}^{\epsilon,\delta}(0)=\Pr_{v_{i}\sim\mathcal{D}_{i}^{\epsilon,\delta}} \left[v_{i}=0\right]=1-\sum_{t=1}^{10\epsilon^{-1}-1}\Pr_{v_{i}\sim\mathcal{D} _{i}^{\epsilon,\delta}}\left[v_{i}=\frac{t\epsilon}{10}\right].\]
That is, the valuation distribution is rounded (down) to the discrete values \(V_{\epsilon}:=\{0,\epsilon/10,\ldots,1-\epsilon/10\}\) and truncated at \(\delta\); the extra probability mass is put on \(0\). From now on, we will consider both continuous and discrete valuation distributions of bidders.
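The rounding of the value distribution can be sketched as follows, assuming access to the CDF of \(\mathcal{D}_{i}\); the uniform CDF in the example is only for illustration.

```python
def round_distribution(cdf, eps, delta):
    """Round a continuous distribution on [0,1] onto the grid V_eps and truncate at delta."""
    k = round(10 / eps)                            # number of grid points, 10/eps
    masses = []
    for t in range(1, k):                          # t = 1, ..., 10/eps - 1
        cell = cdf((t + 1) * eps / 10) - cdf(t * eps / 10)
        masses.append(max(0.0, cell - delta))
    grid = [0.0] + [t * eps / 10 for t in range(1, k)]
    return grid, [1.0 - sum(masses)] + masses      # leftover mass goes onto value 0

grid, probs = round_distribution(cdf=lambda v: min(v, 1.0), eps=0.5, delta=0.001)
print(sum(probs), probs[0])                        # total mass 1.0, and the mass placed on 0
```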
Let \(\mathcal{D}^{\epsilon,\delta}=\mathcal{D}_{1}^{\epsilon,\delta}\times\cdots \times\mathcal{D}_{n}^{\epsilon,\delta}\). Next we define the notion of \((\epsilon,\delta)\)-approximate BNE -- a weaker notation of equilibrium with relaxed constraint on overbidding, and show that it suffices to find an \((\epsilon,\delta)\)-approximate BNE of the rounded FPA \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D}^{\epsilon,\delta},\Gamma)\).
**Definition 4.2** (\((\epsilon,\delta)\)-approximate Bayesian Nash equilibrium).: _Let \(n,m\geq 2\). Given a Bayesian FPA (\(\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D}^{\epsilon,\delta},\Gamma\)), a strategy profile \(s=(s_{1},\ldots,s_{n})\) is said to be an \((\epsilon,\delta)\)-approximate Bayesian Nash equilibrium (\((\epsilon,\delta)\)-approximate BNE), if for any player \(i\in[n]\), its strategy \(s_{i}\) is monotone and at most \(\epsilon\)-worse than the best response:_
\[\mathop{\mathbb{E}}_{v\sim\mathcal{D}}\bigl{[}u_{i}(v_{i};s_{i}(v_{i}),s_{-i}(v _{-i}))\bigr{]}\geq\mathop{\mathbb{E}}_{v\sim\mathcal{D}}\bigl{[}u_{i}(v_{i}; \mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))\bigr{]}-\epsilon,\]
_Moreover, letting \(v_{i,\max}=\max\{v_{i}\in V_{\epsilon}:p_{i}^{\epsilon,\delta}(v_{i})>0\}\), the player \(i\) never bids higher than \(v_{i,\max}\) and its total overbidding probability is at most \(\delta\), i.e.,_
\[\Pr_{v_{i}\sim\mathcal{D}_{i}^{\epsilon,\delta}}\big{[}s_{i}(v_{i})>v_{i,\max} \big{]}=0\quad\text{and}\quad\Pr_{v_{i}\sim\mathcal{D}_{i}^{\epsilon,\delta}} \big{[}s_{i}(v_{i})>v_{i}\big{]}<\delta.\]
We prove that it suffices to find an \((\epsilon,\delta)\)-approximate BNE in the rounded FPA \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D}^{\epsilon,\delta},\Gamma)\). In the proof we will perform the operation of converting a bidding strategy \(s_{i}\) over bidding space \(\mathcal{B}_{\epsilon}\) and value distribution \(\mathcal{D}_{i}\) to a (unique) monotone bidding strategy \(s^{\prime}_{i}\) over the same bidding space \(\mathcal{B}_{\epsilon}\) but a different value distribution \(\mathcal{D}^{\prime}_{i}\) such that their induced distributions over \(\mathcal{B}_{\epsilon}\) are the same:
**Definition 4.3** (Monotone analogue).: _Let \(s=(s_{1},\ldots,s_{n})\) be a strategy profile over bidding space \(\mathcal{B}_{\epsilon}\) and value distribution \(\mathcal{D}=\mathcal{D}_{1}\times\cdots\times\mathcal{D}_{n}\), and let \(\mathcal{D}^{\prime}=\mathcal{D}^{\prime}_{1}\times\cdots\times\mathcal{D}^{ \prime}_{n}\) be another value distribution. The monotone analogue of \(s\) with respect to \(\mathcal{D}^{\prime}\) (denoted as \(s^{\mathfrak{m}}\)) is defined as the unique monotone strategy profile \(s^{\mathfrak{m}}=(s_{1}^{\mathfrak{m}},\ldots,s_{n}^{\mathfrak{m}})\) such that_
\[\Pr_{v_{i}\sim\mathcal{D}^{\prime}_{i}}\big{[}s_{i}^{\mathfrak{m}}(v_{i})=b \big{]}=\Pr_{v_{i}\sim\mathcal{D}_{i}}\big{[}s_{i}(v_{i})=b\big{]},\quad\text{ for all }i\in[n]\text{ and }b\in\mathcal{B}_{\epsilon}.\]
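Operationally, the monotone analogue can be computed greedily: sort the values of \(\mathcal{D}^{\prime}_{i}\) and assign the bid probability mass from the lowest bid upwards. The following sketch does this for discrete distributions; all numbers in the example are illustrative.

```python
def monotone_analogue(bid_probs, value_probs):
    """bid_probs: {bid: prob}; value_probs: list of (value, prob); both sum to 1.
    Returns a list of (value, bid, prob) pieces describing s_i^m on D'_i."""
    pieces, bids = [], sorted(bid_probs)
    bi, remaining_bid = 0, bid_probs[bids[0]]
    for value, remaining_val in sorted(value_probs):
        while remaining_val > 1e-12:
            take = min(remaining_val, remaining_bid)
            if take <= 0:            # numerical guard: last bid's mass is exhausted
                break
            pieces.append((value, bids[bi], take))
            remaining_val -= take
            remaining_bid -= take
            if remaining_bid <= 1e-12 and bi + 1 < len(bids):
                bi += 1
                remaining_bid = bid_probs[bids[bi]]
    return pieces

print(monotone_analogue({0.0: 0.5, 0.2: 0.3, 0.4: 0.2},
                        [(0.0, 0.4), (0.3, 0.4), (0.6, 0.2)]))
# value 0.3 bids 0.0 w.p. 0.1 and bids 0.2 w.p. 0.3; value 0.6 bids 0.4 w.p. 0.2
```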
**Lemma 4.4**.: _Given any FPA \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D},\Gamma)\), let \(\mathcal{D}^{\epsilon,\delta}\) be the rounded distribution defined above. Then the monotone analogue of any \((\epsilon,\delta)\)-approximate BNE in \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D}^{\epsilon,\delta},\Gamma)\) is a \((2\epsilon+22\delta/\epsilon)\)-approximate BNE in \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D},\Gamma)\)._
Proof.: For clarity of the presentation, we divide the proof into two parts.
**First part.** Define the distribution \(\mathcal{D}^{\epsilon}_{i}\) for each \(i\in[n]\) as follow:
\[p^{\epsilon}_{i}(t)=\Pr_{v_{i}\sim\mathcal{D}^{\epsilon}_{i}}\bigg{[}v_{i}=\frac{t\epsilon}{10}\bigg{]}=\int_{\frac{t\epsilon}{10}}^{\frac{(t+1)\epsilon}{10}}\Pr_{v_{i}\sim\mathcal{D}_{i}}[v_{i}=v]\,\mathrm{d}v,\quad\text{ for each }t\in[10\epsilon^{-1}-1]\]
and
\[p^{\epsilon}_{i}(0)=\Pr_{v_{i}\sim\mathcal{D}^{\epsilon}_{i}}\big{[}v_{i}=0\big{]}=1-\sum_{t=1}^{10\epsilon^{-1}-1}\Pr_{v_{i}\sim\mathcal{D}^{\epsilon}_{i}}\bigg{[}v_{i}=\frac{t\epsilon}{10}\bigg{]}\,.\]
The distribution \(\mathcal{D}^{\epsilon}_{i}\) rounds down \(\mathcal{D}_{i}\) to the discrete values \(V_{\epsilon}\). Given any \(\epsilon\)-approximate BNE strategy \(s\) of the FPA \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D}^{\epsilon},\Gamma)\), we show that its monotone analogue \(s^{\mathfrak{m}}\) (with respect to \(\mathcal{D}\)) is an \(\frac{11}{10}\epsilon\)-approximate BNE of the FPA \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D},\Gamma)\). For any \(v\in[0,1]\), let \(v^{\epsilon}=\lfloor\frac{10v}{\epsilon}\rfloor\cdot\frac{\epsilon}{10}\) be the largest discrete value in \(V_{\epsilon}\) not exceeding \(v\). For any player \(i\in[n]\), we have
\[\begin{split}\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}& \big{[}u_{i}(v_{i};s_{i}^{\mathfrak{m}}(v_{i}),s_{-i}^{\mathfrak{m}}(v_{-i})) \big{]}-\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}\big{[}u_{i}(v_{i}; \mathsf{bs}(v_{i},s_{-i}^{\mathfrak{m}}),s_{-i}^{\mathfrak{m}}(v_{-i}))\big{]} \\ &\geq\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}\big{[}u_{i}( v_{i}^{\epsilon};s_{i}^{\mathfrak{m}}(v_{i}),s_{-i}^{\mathfrak{m}}(v_{-i})) \big{]}-\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}\big{[}u_{i}(v_{i}^{ \epsilon};\mathsf{bs}(v_{i},s_{-i}^{\mathfrak{m}}),s_{-i}^{\mathfrak{m}}(v_{-i})) \big{]}-\frac{\epsilon}{10}\\ &=\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}^{\epsilon}_{-i}} \big{[}u_{i}(v_{i}^{\epsilon};s_{i}^{\mathfrak{m}}(v_{i}),s_{-i}(v_{-i})) \big{]}-\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}^{\epsilon}_{-i}}\big{[}u_{i}( v_{i}^{\epsilon};\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))\big{]}-\frac{\epsilon}{10}\\ &\geq\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}^{\epsilon}_{-i}} \big{[}u_{i}(v_{i}^{\epsilon};s_{i}^{\mathfrak{m}}(v_{i}),s_{-i}(v_{-i})) \big{]}-\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}^{\epsilon}_{-i}}\big{[}u_{i}( v_{i}^{\epsilon};\mathsf{bs}(v_{i}^{\epsilon},s_{-i}),s_{-i}(v_{-i}))\big{]}-\frac{ \epsilon}{10}.\end{split}\]
The first step holds since \(u_{i}(v_{i};s_{i}^{\mathfrak{m}}(v_{i}),s_{-i}^{\mathfrak{m}}(v_{-i}))\geq u_{i}(v_{ i}^{\epsilon};s_{i}^{\mathfrak{m}}(v_{i}),s_{-i}^{\mathfrak{m}}(v_{-i}))\) (\(v_{i}\geq v_{i}^{\epsilon}\)) and
\[u_{i}(v_{i};\mathsf{bs}(v_{i},s_{-i}^{\mathfrak{m}}),s_{-i}^{\mathfrak{m}}(v_{-i})) -u_{i}(v_{i}^{\epsilon};\mathsf{bs}(v_{i},s_{-i}^{\mathfrak{m}}),s_{-i}^{ \mathfrak{m}}(v_{-i}))\leq(v_{i}-v_{i}^{\epsilon})\leq\frac{\epsilon}{10}.\]
The second step holds because the bidding profile \(s_{-i}(v_{-i})\) when \(v_{-i}\sim\mathcal{D}_{-i}^{\epsilon}\) is identical to \(s_{-i}^{\mathsf{m}}(v_{-i})\) when \(v_{-i}\sim\mathcal{D}_{-i}\).
Taking an expectation over \(v_{i}\sim\mathcal{D}_{i}\), we obtain
\[\mathop{\mathbb{E}}_{v\sim\mathcal{D}}[u_{i}(v_{i};s_{i}^{\mathsf{m}}(v_{i}),s_{-i}^{\mathsf{m}}(v_{-i}))]-\mathop{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}u_{i}(v_{i};\mathsf{bs}(v_{i},s_{-i}^{\mathsf{m}}),s_{-i}^{\mathsf{m}}(v_{-i}))\big{]}\] \[\geq \mathop{\mathbb{E}}_{v_{i}\sim\mathcal{D}_{i}}\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}^{\epsilon}}\big{[}u_{i}(v_{i}^{\epsilon};s_{i}^{\mathsf{m}}(v_{i}),s_{-i}(v_{-i}))\big{]}-\mathop{\mathbb{E}}_{v_{i}\sim\mathcal{D}_{i}}\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}^{\epsilon}}\big{[}u_{i}(v_{i}^{\epsilon};\mathsf{bs}(v_{i}^{\epsilon},s_{-i}),s_{-i}(v_{-i}))\big{]}-\frac{\epsilon}{10}\] \[= \mathop{\mathbb{E}}_{v_{i}\sim\mathcal{D}_{i}^{\epsilon}}\mathop{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}^{\epsilon}}\big{[}u_{i}(v_{i};s_{i}^{\mathsf{m}}(v_{i}),s_{-i}(v_{-i}))\big{]}-\mathop{\mathbb{E}}_{v\sim\mathcal{D}^{\epsilon}}\big{[}u_{i}(v_{i};\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))\big{]}-\frac{\epsilon}{10}\] \[\geq -\epsilon-\frac{\epsilon}{10}=-\frac{11}{10}\epsilon.\]
where the second step holds because \(v_{i}^{\epsilon}\) (with \(v_{i}\sim\mathcal{D}_{i}\)) has the same distribution as \(v_{i}\) (with \(v_{i}\sim\mathcal{D}_{i}^{\epsilon}\)). It is easy to verify that \(s^{\mathsf{m}}\) does not overbid, given that \(s\) does not overbid as an \(\epsilon\)-approximate BNE. We conclude that \(s^{\mathsf{m}}\) is an \(\frac{11}{10}\epsilon\)-approximate BNE of the FPA \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D},\Gamma)\).
**Second part.** We prove that, given any \((\epsilon,\delta)\)-approximate BNE \(s\) of FPA \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D}^{\epsilon,\delta},\Gamma)\), its monotone analogue \(s^{\mathsf{m}}\) with respect to the distribution \(\mathcal{D}^{\epsilon}\) must be an \((\epsilon+20\delta/\epsilon)\)-approximate BNE of \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D}^{\epsilon},\Gamma)\).
To prove the correctness, one needs to verify for each player \(i\in[n]\),
1. \(s_{i}^{\mathsf{m}}\) is monotone,
2. \(s_{i}^{\mathsf{m}}\) is at most \((\epsilon+20\delta/\epsilon)\)-worse than the best response; and
3. \(s_{i}^{\mathsf{m}}\) does not overbid.
The first claim of monotonicity follows directly from the definition of \(s^{\mathsf{m}}\) as the monotone analogue of \(s\). To prove the second claim, a critical observation is that the bidding profile is preserved, i.e., \(s(v)\) (with \(v\sim\mathcal{D}^{\epsilon,\delta}\)) and \(s^{\mathsf{m}}(v)\) (with \(v\sim\mathcal{D}^{\epsilon}\)) are identically distributed. As a consequence, we have
\[\mathop{\mathbb{E}}_{v\sim\mathcal{D}^{\epsilon}}\big{[}u_{i}(v_ {i};s_{i}^{\mathsf{m}}(v_{i}),s_{-i}^{\mathsf{m}}(v_{-i}))\big{]} = \mathop{\mathbb{E}}_{v\sim\mathcal{D}^{\epsilon}}\big{[}(v_{i}-s _{i}^{\mathsf{m}}(v_{i}))\Gamma_{i}(s_{i}^{\mathsf{m}}(v_{i}),s_{-i}^{\mathsf{ m}}(v_{-i}))\big{]} \tag{17}\] \[\geq \mathop{\mathbb{E}}_{v\sim\mathcal{D}^{\epsilon,\delta}}\big{[}( v_{i}-s_{i}(v_{i}))\Gamma_{i}(s_{i}(v_{i}),s_{-i}(v_{-i}))\big{]}\] \[= \mathop{\mathbb{E}}_{v\sim\mathcal{D}^{\epsilon,\delta}}\big{[}u_{ i}(v_{i};s_{i}(v_{i}),s_{-i}(v_{-i}))\big{]}.\]
The second step holds since (i) the expected payment is the same under \(s^{\mathsf{m}}\) and \(s\), i.e.,
\[\mathop{\mathbb{E}}_{v\sim\mathcal{D}^{\epsilon}}\big{[}s_{i}^{ \mathsf{m}}(v_{i})\Gamma_{i}(s_{i}^{\mathsf{m}}(v_{i}),s_{-i}^{\mathsf{m}}(v_{-i }))\big{]}=\mathop{\mathbb{E}}_{v\sim\mathcal{D}^{\epsilon,\delta}}\big{[}s_{ i}(v_{i})\Gamma_{i}(s_{i}(v_{i}),s_{-i}(v_{-i}))\big{]}\]
and (ii) the distribution \(\mathcal{D}_{i}^{\epsilon}\) stochastically dominates the distribution \(\mathcal{D}_{i}^{\epsilon,\delta}\).
Similarly, we have
\[\mathop{\mathbb{E}}_{v\sim\mathcal{D}^{\epsilon}}\big{[}u_{i}(v_{ i};\mathsf{bs}(v_{i},s_{-i}^{\mathsf{m}}),s_{-i}^{\mathsf{m}}(v_{-i}))\big{]} = \mathop{\mathbb{E}}_{v\sim\mathcal{D}^{\epsilon}}\big{[}(v_{i}- \mathsf{bs}(v_{i},s_{-i}^{\mathsf{m}}))\Gamma_{i}(\mathsf{bs}(v_{i},s_{-i}^{ \mathsf{m}}),s_{-i}^{\mathsf{m}}(v_{-i}))\big{]} \tag{18}\] \[\leq \mathop{\mathbb{E}}_{v\sim\mathcal{D}^{\epsilon,\delta}}\big{[}( v_{i}-\mathsf{bs}(v_{i},s_{-i}))\Gamma_{i}(\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i})) \big{]}+20\delta/\epsilon\] \[= \mathop{\mathbb{E}}_{v\sim\mathcal{D}^{\epsilon,\delta}}\big{[}u_{ i}(v_{i};\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))\big{]}+20\delta/\epsilon.\]
where the second step holds since the TV distance between \(\mathcal{D}^{\epsilon}\) and \(\mathcal{D}^{\epsilon,\delta}\) is bounded by \(20\delta/\epsilon\) and \(s_{-i}^{\mathsf{m}}(v_{-i})\) (\(v_{-i}\sim\mathcal{D}_{-i}^{\epsilon}\)) has the same distribution as \(s_{-i}(v_{-i})\) (\(v_{-i}\sim\mathcal{D}_{-i}^{\epsilon,\delta}\)). Combining Eq. (17) and Eq. (18), we conclude that \(s^{\mathsf{m}}\) is at most \((\epsilon+20\delta/\epsilon)\)-worse than the best response.
Finally, we prove the last item, that \(s_{i}^{m}\) does not overbid. Suppose, for contradiction, that at some bid \(b\in\mathcal{B}_{\epsilon}\) we have \(\Pr_{v_{i}\sim\mathcal{D}^{\epsilon}_{i}}[s_{i}^{m}(v_{i})=b>v_{i}]>0\). Let \(\nu_{i}\in[0,1]\) be the smallest value such that \(\Pr[s_{i}^{m}(\nu_{i})=b]>0\), and let \(v_{i,\max}^{\epsilon,\delta}=\max\{v_{i}\in V_{\epsilon}:p_{i}^{\epsilon,\delta}(v_{i})>0\}\). We note that \(\nu_{i}<b\leq v_{i,\max}^{\epsilon,\delta}\) due to the definition of an \((\epsilon,\delta)\)-approximate BNE, i.e., there is no overbidding above \(v_{i,\max}^{\epsilon,\delta}\). Hence, we have
\[\Pr_{v_{i}\sim\mathcal{D}^{\epsilon,\delta}}\big{[}s_{i}(v_{i}) \geq b\big{]} =\Pr_{v_{i}\sim\mathcal{D}^{\epsilon}}\big{[}s_{i}^{m}(v_{i}) \geq b\big{]}\geq\Pr_{v_{i}\sim\mathcal{D}^{\epsilon}}\big{[}v_{i}\geq\nu_{i} \big{]}\geq\Pr_{v_{i}\sim\mathcal{D}^{\epsilon,\delta}}\big{[}v_{i}\geq\nu_{i} \big{]}+\delta\] \[\geq\Pr_{v_{i}\sim\mathcal{D}^{\epsilon,\delta}}\big{[}v_{i}\geq b \big{]}+\delta \tag{19}\]
The first step follows from the definition of the monotone analogue, and the second step holds since \(s_{i}^{m}\) is monotone. The third step holds since \(\nu_{i}<v_{i,\max}^{\epsilon,\delta}\) and \(\mathcal{D}^{\epsilon,\delta}\) truncates at least \(\delta\) probability mass at \(v_{i,\max}^{\epsilon,\delta}\). Eq. (19) implies
\[\Pr_{v_{i}\sim\mathcal{D}^{\epsilon,\delta}}[v_{i}<b\wedge s_{i}(v_{i})\geq b] \geq\delta,\]
i.e., the overbidding probability is at least \(\delta\). This contradicts the fact that \(s\) is an \((\epsilon,\delta)\)-approximate BNE of FPA \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D}^{\epsilon,\delta},\Gamma)\), which completes the proof.
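For intuition, the monotone analogue used above (its formal definition appears earlier in the paper) can be constructed for discrete distributions by a quantile-style coupling: the same marginal bid mass is reassigned to values in increasing order, so higher values receive weakly higher bids. The sketch below is only illustrative, and the function and variable names are our own.

```python
def monotone_analogue(value_probs, bid_probs_given_value):
    """Illustrative sketch (not the paper's formal construction).

    value_probs[k]              : Pr[v = values[k]], values sorted increasingly
    bid_probs_given_value[k][j] : Pr[s(v) = bids[j] | v = values[k]], bids sorted increasingly
    Returns conditional bid probabilities that keep the marginal bid distribution
    and are monotone: higher values bid weakly higher.
    """
    m = len(bid_probs_given_value[0])
    # marginal probability mass that the original strategy places on each bid
    bid_mass = [sum(vp * bid_probs_given_value[k][j] for k, vp in enumerate(value_probs))
                for j in range(m)]
    out = [[0.0] * m for _ in value_probs]
    j = 0
    for k, vp in enumerate(value_probs):  # pour the lowest remaining bid mass onto the lowest values
        remaining = vp
        while remaining > 1e-12 and j < m:
            take = min(remaining, bid_mass[j])
            out[k][j] += take / vp
            bid_mass[j] -= take
            remaining -= take
            if bid_mass[j] <= 1e-12:
                j += 1
    return out
```

Because only the assignment of bid mass to value mass changes, the distribution of each player's submitted bid is untouched, which is exactly the preservation property invoked around Eq. (17).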
Step 3: Existence of discretized \((\epsilon,\delta)\)-approximate BNE. Given a first-price auction \((\mathcal{N},\mathcal{B},\mathcal{D},\Gamma)\), we prove the existence of a (suitably) discretized \((\epsilon,\delta)\)-approximate BNE. We describe this step using a generic FPA \((\mathcal{N},\mathcal{B},\mathcal{D},\Gamma)\) but will apply it to the rounded FPA \((\mathcal{N},\mathcal{B}_{\epsilon},\mathcal{D}^{\epsilon,\delta},\Gamma)\) later. For any \(j\in[0:m]\), \(i\in[n]\), let \(p_{i,j}=\Pr[s_{i}(v_{i})=b_{j}]\) be the probability of player \(i\) bidding \(b_{j}\) in a strategy profile \(s\).
**Lemma 4.5** (Discretization).: _Let \(m,n\geq 2\), given any first-price auction \((\mathcal{N},\mathcal{B},\mathcal{D},\Gamma)\) and \(\mathcal{B}=\{b_{0},b_{1},\ldots,b_{m}\}\) with \(0=b_{0}<b_{1}<\cdots<b_{m}\leq 1\), there exists an \((\epsilon,\delta)\)-approximate BNE strategy profile \(s\) such that \(p_{i,j}\) is an integer multiple of_
\[\ell(m)\cdot\frac{1}{\epsilon^{6}\delta}\]
_for any \(i\in[n]\) and \(j\in[0:m]\), where \(\ell(m)\) is some exponential function of \(m\)._
We make use of the following result from [4]. We note that this is the main place where the uniform tie-breaking rule is needed.
**Theorem 4.6** (Theorem 3 of [4]).: _Let \(p_{i}\in\Delta_{m+1}\) for \(i\in[n]\), and let \(\{X_{i}\in\mathbb{R}^{m+1}\}_{i\in[n]}\) be a set of independent \((m+1)\)-dimensional random unit vectors such that, for all \(i\in[n]\), \(j\in[0:m]\), \(\Pr[X_{i}=\mathtt{1}_{j}]=p_{i,j}\). Let \(z>0\) be an integer. Then there exists another set of probability vectors \(\{\widehat{p}_{i}\in\Delta_{m+1}\}_{i\in[n]}\) such that the following conditions hold:_
* \(|\widehat{p}_{i,j}-p_{i,j}|=O(1/z)\)_, for all_ \(i\in[n]\) _and_ \(j\in[0:m]\)_;_
* \(\widehat{p}_{i,j}\) _is an integer multiple of_ \(\frac{1}{2^{2n}}\cdot\frac{1}{z}\) _for all_ \(i\in[n]\) _and_ \(j\in[0:m]\)_;_
* _If_ \(p_{i,j}=0\) _then_ \(\widehat{p}_{i,j}=0\)_;_
* _Let_ \(\{\widehat{X}_{i}\in\mathbb{R}^{m+1}\}_{i\in[n]}\) _be a set of independent_ \((m+1)\)_-dimensional random unit vectors such that_ \(\Pr[\widehat{X}_{i}=\mathtt{1}_{j}]=\widehat{p}_{i,j}\) _for all_ \(i\in[n]\)_,_ \(j\in[0:m]\)_. Then_ \[\left\|\sum_{i\in[n]}X_{i}-\sum_{i\in[n]}\widehat{X}_{i}\right\|_{\mathsf{TV}}=O \left(h(m)\cdot\frac{\log z}{z^{1/5}}\right).\] (20)
_Moreover, for all_ \(i^{\prime}\in[n]\)_, we have_
\[\left\|\sum_{i\in[n]\setminus\{i^{\prime}\}}X_{i}-\sum_{i\in[n]\setminus\{i^{ \prime}\}}\widehat{X}_{i}\right\|_{\text{TV}}=O\left(h(m)\cdot\frac{\log z}{z^{ 1/5}}\right). \tag{21}\]
_where_ \(h(m)\) _is some exponential function of_ \(m\)_._
Proof of Lemma 4.5.: Given a BNE strategy profile \(s\) of FPA \((\mathcal{N},\mathcal{B},\mathcal{D},\Gamma)\), we take
\[z=\Omega\Big{(}\max\big{\{}2h(m)^{6}\epsilon^{-6},2m^{2}\delta^{-1}\big{\}} \Big{)}\quad\text{and}\quad\gamma=h(m)\cdot\frac{\log z}{z^{1/5}}.\]
Let \(\{p_{i}\in\Delta^{m+1}\}_{i\in[n]}\) be the probability vectors that correspond to \(s\), i.e., \(p_{i,j}=\Pr[s_{i}(v_{i})=b_{j}]\). Using Theorem 4.6, let \(\{\widehat{p}_{i}\in\Delta^{m+1}\}_{i\in[n]}\) be the set of discretized probability vectors, and let \(\widehat{s}\) be the unique monotone strategy determined by \(\widehat{p}\) (with respect to the same value distribution \(\mathcal{D}\)). We prove that \(\widehat{s}\) forms an \((\epsilon,\delta)\)-approximate BNE of the auction. We need to verify that for every player \(i\in[n]\), (1) \(\widehat{s}_{i}\) is at most \(\epsilon\)-worse than the best response; and (2) the overbidding probability is small.
By Eq. (21), for any \(j\in[0:m]\), one has
\[\operatorname*{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}\big{[}\Gamma_{i}(b_{j },s_{-i}(v_{-i}))\big{]}=\operatorname*{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{- i}}\big{[}\Gamma_{i}(b_{j},\widehat{s}_{-i}(v_{-i}))\big{]}\pm\gamma. \tag{22}\]
since the allocation probability (under uniform tie-breaking) is determined by the histogram of the other players' bids, which shifts by at most \(\gamma\) in total variation distance between \(s_{-i}\) and \(\widehat{s}_{-i}\).
For the utility, we have
\[\operatorname*{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}u_{i}(v_{i}; \widehat{s}_{i}(v_{i}),\widehat{s}_{-i}(v_{-i}))\big{]} =\operatorname*{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}(v_{i}- \widehat{s}_{i}(v_{i}))\cdot\Gamma_{i}(\widehat{s}(v_{i}),\widehat{s}_{-i}(v_ {-i}))\big{]}\] \[=\operatorname*{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}(v_{i}- \widehat{s}_{i}(v_{i}))\cdot\Gamma_{i}(\widehat{s}_{i}(v_{i}),s_{-i}(v_{i})) \big{]}\pm\gamma\] \[=\operatorname*{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}(v_{i}-s_{i }(v_{i}))\cdot\Gamma_{i}(s_{i}(v_{i}),s_{-i}(v_{i}))\big{]}\pm O\left(m^{2} \cdot\frac{1}{z}+\gamma\right)\] \[=\operatorname*{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}u_{i}(v_{i}; s(v_{i}),s_{-i}(v_{-i}))\big{]}\pm\frac{\epsilon}{2}. \tag{23}\]
The second step follows from Eq. (22); the third step holds since the TV distance between \((v_{i},s_{i}(v_{i}))\) and \((v_{i},\widehat{s}_{i}(v_{i}))\) is at most \(O(m^{2}\cdot\frac{1}{z})\). The last step follows from the choice of \(z\).
At the same time, we have
\[\operatorname*{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}u_{i}(v_{i}; \mathsf{bs}(v_{i},\widehat{s}_{-i}),\widehat{s}_{-i}(v_{-i}))\big{]} \leq\operatorname*{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}u_{i}(v_{i },\mathsf{bs}(v_{i},\widehat{s}_{-i}),s_{-i}(v_{-i}))\big{]}+\gamma\] \[\leq\operatorname*{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}u_{i}(v_{i },\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))\big{]}+\gamma\] \[\leq\operatorname*{\mathbb{E}}_{v\sim\mathcal{D}}\big{[}u_{i}(v_{ i};\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))\big{]}+\frac{\epsilon}{2}. \tag{24}\]
Here the first step follows since the bidding histogram shifts by at most \(\gamma\) in total variation distance between \(s_{-i}\) and \(\widehat{s}_{-i}\). The last step follows from the choice of parameters.
Combining Eq. (23) and Eq. (24), we have that \(\widehat{s}_{i}\) is at most \(\epsilon\)-worse than the best response.
To bound the probability of overbidding, let \(j(i)\in[0:m]\) be the index of the maximum bid that receives non-zero probability under \(\widehat{s}_{i}\); then it is easy to verify that \(b_{j(i)}\leq v_{i,\max}\): otherwise \(b_{j(i)}>v_{i,\max}\) and \(p_{i,j(i)}>0\) would contradict the no-overbidding property of the BNE \(s\).
Finally we have
\[\Pr_{v_{i}\sim\mathcal{D}_{i}}\left[\widehat{s}_{i}(v_{i})>v_{i}\right]\leq\Pr_{v_ {i}\sim\mathcal{D}_{i}}\left[s_{i}(v_{i})>v_{i}\right]+O\left(m^{2}\cdot\frac{1} {z}\right)=0+O\left(m^{2}\cdot\frac{1}{z}\right)\leq\delta\]
since the total variation distance between \((v_{i},s_{i}(v_{i}))\) and \((v_{i},\widehat{s}_{i}(v_{i}))\) is at most \(O(m^{2}\cdot\frac{1}{z})\).
Step 4: Searching for an \((\epsilon,\delta)\)-approximate BNE. Finally, we provide a simple search algorithm for an \((\epsilon,\delta)\)-approximate BNE over the discretized space. The key observation comes from the allocation rule of a first-price auction: only players with the highest bid can win the item. We say a strategy profile \(s\) lies in the grid \(S_{\omega}\) for some \(\omega\in(0,1)\) if \(p_{i,j}=\Pr_{v_{i}\sim\mathcal{D}_{i}}[s_{i}(v_{i})=b_{j}]\) is a multiple of \(\omega\) for every \(i\in[n]\) and \(j\in[0:m]\).
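As a small illustration of the quantities just introduced, the sketch below computes the vector \(p_{i}\) from a discretized strategy (assumed here, for simplicity, to map each value point to a single bid) and tests membership in the grid \(S_{\omega}\); the function names are ours.

```python
def bid_probability_vector(value_probs, bid_index_of_value, num_bids):
    """p[j] = Pr[s_i(v_i) = b_j], where value_probs[k] = Pr[v_i = values[k]] and
    bid_index_of_value[k] is the index of the bid placed at values[k]."""
    p = [0.0] * num_bids
    for vp, j in zip(value_probs, bid_index_of_value):
        p[j] += vp
    return p

def lies_in_grid(profile, omega, tol=1e-9):
    """A profile (one bid-probability vector per player) lies in S_omega iff
    every entry is an integer multiple of omega."""
    return all(abs(x - omega * round(x / omega)) <= tol for p in profile for x in p)
```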
**Lemma 4.7**.: _Given a first-price auction \((\mathcal{N},\mathcal{B},\mathcal{D},\Gamma)\), suppose there exists at least one \((\epsilon,\delta)\)-approximate BNE over the grid \(S_{\omega}\); then there is an algorithm that runs in \(n^{4}m\cdot 2^{\widetilde{O}(m/\epsilon\omega)}\) time and returns a \((2\epsilon,\delta)\)-approximate BNE under the uniform tie-breaking rule._
Proof.: Let \(R=100(m+1)/\epsilon\omega\). The discretized strategy profiles \(\mathsf{E}_{\omega}\subseteq[\Delta_{m+1}]^{R}\) are defined as follows. A strategy profile \((p_{1},\ldots,p_{R})\in\mathsf{E}_{\omega}\) is parameterized by a bid level \(j^{*}\in[0:m-1]\) and \(k_{0},k_{1},\ldots,k_{j^{*}}\in[0:10\epsilon^{-1}]\) such that \(p_{i,j}\) is a multiple of \(\omega\) for all \(i\in[n],j\in[0:m]\), and
\[\sum_{r=1}^{R}p_{r,m-j}=k_{j},\,\forall j\in[0:j^{*}]\quad\text{and}\quad p_{r,m-j}=0,\,\forall r\in[R],j\in[j^{*}+1:m-1].\]
The size of \(\mathsf{E}_{\omega}\) satisfies
\[|\mathsf{E}_{\omega}|\leq(m+1)\cdot(10/\epsilon\omega+1/\omega)^{m+1}\cdot \binom{R}{10/\epsilon\omega+1/\omega}\leq 2^{\widetilde{O}(m/\epsilon\omega)}.\]
For any strategy profile \((p_{1},\ldots,p_{R})\in\mathsf{E}_{\omega}\), one can augment it with \((n-R)\) default strategies \((1,0,\ldots,0)\) (that is, bidding \(b_{0}=0\) with probability \(1\)). Slightly abusing notation, we also use \(\mathsf{E}_{\omega}\subseteq[\Delta_{m+1}]^{n}\) to denote the augmented strategy profiles. We shall prove
* There is an \((2\epsilon,\delta)\)-approximate BNE in \(\mathsf{E}_{\omega}\) (up to a matching with players), and
* One can identify the matching efficiently.
Existence of a \((2\epsilon,\delta)\)-approximate BNE. Recall that there exists an \((\epsilon,\delta)\)-approximate BNE strategy \(s\) over the grid \(S_{\omega}\). Let \(j^{*}\) be the first index over \([0:m]\) such that \(\sum_{i\in[n]}p_{i,m-j}\leq 10/\epsilon\) (\(\forall j<j^{*}\)) and \(\sum_{i\in[n]}p_{i,m-j^{*}}\geq 10/\epsilon\). If \(j^{*}=m\), then we have \(s\in\mathsf{E}_{\omega}\) (up to a matching between players). If \(j^{*}\leq m-1\), then let \(n^{*}\in[n]\) be the smallest player such that \(\sum_{i\in[n^{*}]}p_{i,m-j^{*}}\geq 10/\epsilon\) (w.l.o.g. we assume it holds with equality). Truncate the strategy profile to \(s^{\prime}\) such that
\[p^{\prime}_{i,j}=\Pr_{v_{i}\sim\mathcal{D}_{i}}[s^{\prime}_{i}(v_{i})=b_{j}]= \begin{cases}p_{i,j}&j>m-j^{*}\vee(j=m-j^{*}\wedge i\leq n^{*})\\ 0&j\in[1:m-j^{*}-1]\vee(j=m-j^{*}\wedge i>n^{*})\\ 1-\sum_{j^{\prime}=1}^{m}p^{\prime}_{i,j^{\prime}}&j=0.\end{cases}\]
It is clear that the new strategy \(s^{\prime}\in\mathsf{E}_{\omega}\) (up to a matching between players), and for each player \(i\in[n]\), the new strategy \(s^{\prime}_{i}\) is monotone and the probability of overbidding is no more than \(\delta\) (because we only move bidding probability to \(b_{0}=0\)). It suffices to prove that it is at most \(2\epsilon\)-worse
than the best response. The key observation is that the allocation probability of bidding \(b_{j}\) with \(j>m-j^{*}\) remains the same, i.e.,
\[\operatorname*{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[\Gamma_{i}(b_{j},s^{ \prime}_{-i}(v_{-i}))]=\operatorname*{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[ \Gamma_{i}(b_{j},s_{-i}(v_{-i}))]\quad\forall j\in[m-j^{*}+1:m]\]
and moreover, the allocation probability of bidding no more than \(b_{m-j^{*}}\) is small
\[\operatorname*{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[\Gamma_{i}(b_{j};s^{ \prime}_{-i}(v_{-i}))]\leq\epsilon/4\quad\forall j\in[0:m-j^{*}].\]
This holds since, with probability at least \(1-\exp(-5/3\epsilon)\), at least \(5/\epsilon\) players bid no less than \(b_{m-j^{*}}\), by a Chernoff bound.
Therefore, at any value point \(v_{i}\), if \(s^{\prime}_{i}(v_{i})>b_{m-j^{*}}\), then \(s_{i}(v_{i})=s^{\prime}_{i}(v_{i})\) and
\[\operatorname*{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[u_{i}(v_{i};s^{\prime }_{i}(v_{i});s^{\prime}_{-i}(v_{-i}))]=\operatorname*{\mathbb{E}}_{v_{-i}\sim \mathcal{D}_{-i}}[u_{i}(v_{i};s_{i}(v_{i});s_{-i}(v_{-i}))] \tag{25}\]
If \(s^{\prime}_{i}(v_{i})\leq b_{m-j^{*}}\), then \(s_{i}(v_{i})\leq b_{m-j^{*}}\), and
\[\operatorname*{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[u_{i}(v_{i};s^{\prime }_{i}(v_{i});s^{\prime}_{-i}(v_{-i}))]\geq-\epsilon/4\quad\text{and}\quad \operatorname*{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[u_{i}(v_{i};s_{i}(v_{ i});s_{-i}(v_{-i}))]\leq\epsilon/4. \tag{26}\]
Combining Eq. (25) and Eq. (26), for any \(v_{i}\in[0,1]\), one has
\[\operatorname*{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[u_{i}(v_{i};s^{\prime }_{i}(v_{i});s^{\prime}_{-i}(v_{-i}))-u_{i}(v_{i};s_{i}(v_{i});s_{-i}(v_{-i})) ]\geq-\epsilon/2. \tag{27}\]
Similarly, one can prove
\[\operatorname*{\mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[u_{i}(v_{i};\mathsf{ bs}(v_{i},s^{\prime}_{-i}),s^{\prime}_{-i}(v_{-i}))]\leq\operatorname*{ \mathbb{E}}_{v_{-i}\sim\mathcal{D}_{-i}}[u_{i}(v_{i};\mathsf{bs}(v_{i},s_{-i}),s_{-i}(v_{-i}))]+\epsilon/2. \tag{28}\]
Combining Eq. (27) (28), we have proved \(s^{\prime}_{i}\) is at most \(2\epsilon\)-worse than the best response.
Find a matching. Given a strategy profile \((p_{1},\ldots,p_{n})\in\mathsf{E}_{\omega}\), we show how to define a bipartite matching problem such that \((2\epsilon,\delta)\)-approximate BNEs are in one-to-one correspondence with perfect bipartite matchings of the graph. The bipartite matching problem is defined between the players \([n]\) and the strategies \(\{p_{i}\}_{i\in[n]}\). We draw an edge between player \(i_{1}\) and the strategy \(p_{i_{2}}\) if \(p_{i_{2}}\) is at most \(2\epsilon\)-worse than the best response (note that the histogram of the other players' bidding profile is determined) and the overbidding probability is at most \(\delta\). We note that the best response can be computed in \(O(mn^{2})\) time and one can estimate the utility of a bid in \(O(n^{2})\) time using dynamic programming (similar to [10]). Hence, it takes \(n^{2}\cdot O(mn^{2})\) time to construct the bipartite graph, and a perfect matching can be found in time \(O(n^{3})\).
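The edge test above only needs expected utilities under the uniform tie-breaking rule, which can be evaluated exactly by a small dynamic program over the other players' bid distributions. The sketch below illustrates the idea with our own naming; it is not the authors' implementation.

```python
def win_probability(bid_idx, opp_bid_probs):
    """Probability of winning with bid b_{bid_idx} under uniform tie-breaking.
    opp_bid_probs: one vector per opponent, q[j] = Pr[opponent bids b_j].
    dp[k] = Pr[no processed opponent bids higher and exactly k of them tie]."""
    dp = {0: 1.0}
    for q in opp_bid_probs:
        below, tie = sum(q[:bid_idx]), q[bid_idx]
        new_dp = {}
        for k, p in dp.items():
            new_dp[k] = new_dp.get(k, 0.0) + p * below
            new_dp[k + 1] = new_dp.get(k + 1, 0.0) + p * tie
        dp = new_dp
    return sum(p / (k + 1) for k, p in dp.items())  # we are one of the k+1 tied highest bidders

def expected_utility(value, bid_idx, bids, opp_bid_probs):
    return (value - bids[bid_idx]) * win_probability(bid_idx, opp_bid_probs)

def best_response_utility(value, bids, opp_bid_probs):
    # a best response never overbids, so only bids not exceeding the value are considered
    return max(expected_utility(value, j, bids, opp_bid_probs)
               for j, b in enumerate(bids) if b <= value)
```

Averaging these quantities over \(v_{i_{1}}\sim\mathcal{D}_{i_{1}}\) gives both sides of the edge condition (at most \(2\epsilon\)-worse than the best response, overbidding probability at most \(\delta\)); the perfect matching itself can then be extracted with any standard bipartite matching routine.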
Combining the above four steps (Lemma 4.1, Lemma 4.4, Lemma 4.5 and Lemma 4.7), we conclude the proof of Theorem 1.2.
## Acknowledgement
X.C. and B.P. would like to thank Aviad Rubinstein and anonymous STOC reviewers for helpful suggestions on the paper. The research of X.C. and B.P. is supported by NSF grants CCF-1703925, IIS-1838154, CCF-2106429 and CCF-2107187, CCF-1763970, CCF-2212233, COLL2134095, COLL2212745.
|
2306.04152
|
A Unified One-Step Solution for Aspect Sentiment Quad Prediction
|
Aspect sentiment quad prediction (ASQP) is a challenging yet significant
subtask in aspect-based sentiment analysis as it provides a complete
aspect-level sentiment structure. However, existing ASQP datasets are usually
small and low-density, hindering technical advancement. To expand the capacity,
in this paper, we release two new datasets for ASQP, which contain the
following characteristics: larger size, more words per sample, and higher
density. With such datasets, we unveil the shortcomings of existing strong ASQP
baselines and therefore propose a unified one-step solution for ASQP, namely
One-ASQP, to detect the aspect categories and to identify the
aspect-opinion-sentiment (AOS) triplets simultaneously. Our One-ASQP holds
several unique advantages: (1) by separating ASQP into two subtasks and solving
them independently and simultaneously, we can avoid error propagation in
pipeline-based methods and overcome slow training and inference in
generation-based methods; (2) by introducing sentiment-specific horns tagging
schema in a token-pair-based two-dimensional matrix, we can exploit deeper
interactions between sentiment elements and efficiently decode the AOS
triplets; (3) we design a ``[NULL]'' token that can help us effectively identify the
implicit aspects or opinions. Experiments on two benchmark datasets and our
released two datasets demonstrate the advantages of our One-ASQP. The two new
datasets are publicly released at
\url{https://www.github.com/Datastory-CN/ASQP-Datasets}.
|
Junxian Zhou, Haiqin Yang, Yuxuan He, Hao Mou, Junbo Yang
|
2023-06-07T05:00:01Z
|
http://arxiv.org/abs/2306.04152v1
|
# A Unified One-Step Solution for Aspect Sentiment Quad Prediction
###### Abstract
Aspect sentiment quad prediction (ASQP) is a challenging yet significant subtask in aspect-based sentiment analysis as it provides a complete aspect-level sentiment structure. However, existing ASQP datasets are usually small and low-density, hindering technical advancement. To expand the capacity, in this paper, we release two new datasets for ASQP, which contain the following characteristics: larger size, more words per sample, and higher density. With such datasets, we unveil the shortcomings of existing strong ASQP baselines and therefore propose a unified one-step solution for ASQP, namely One-ASQP, to detect the aspect categories and to identify the aspect-opinion-sentiment (AOS) triplets simultaneously. Our One-ASQP holds several unique advantages: (1) by separating ASQP into two subtasks and solving them independently and simultaneously, we can avoid error propagation in pipeline-based methods and overcome slow training and inference in generation-based methods; (2) by introducing sentiment-specific horns tagging schema in a token-pair-based two-dimensional matrix, we can exploit deeper interactions between sentiment elements and efficiently decode the AOS triplets; (3) we design a "[NULL]" token that can help us effectively identify the implicit aspects or opinions. Experiments on two benchmark datasets and our two released datasets demonstrate the advantages of our One-ASQP. The two new datasets are publicly released at [https://www.github.com/Datastory-CN/ASQP-Datasets](https://www.github.com/Datastory-CN/ASQP-Datasets).
## 1 Introduction
Aspect-based sentiment analysis (ABSA) is a critical fine-grained opinion mining or sentiment analysis problem that aims to analyze and understand people's opinions or sentiments at the aspect level (Liu, 2012; Pontiki et al., 2014; Zhang et al., 2022). Typically, there are four fundamental sentiment elements in ABSA: (1) _aspect category_ (\(c\)) defines the type of the concerned aspect; (2) _aspect term_ (\(a\)) denotes the opinion target which is explicitly or implicitly mentioned in the given text; (3) _opinion term_ (\(o\)) describes the sentiment towards the aspect; and (4) _sentiment polarity_ (\(s\)) depicts the sentiment orientation. For example, given an opinionated sentence, "touch screen is not sensitive," we can obtain its \((c,a,o,s)\)-quadruple as ("Screen#Sensitivity", "touch screen", "not sensitive", NEG), where NEG indicates the negative sentiment polarity.
Due to the rich usage of applications, numerous research efforts have been made on ABSA to predict or extract fine-grained sentiment elements (Jiao et al., 2019; Pontiki et al., 2014, 2015, 2016; Zhang et al., 2022; Yang et al., 2021). Based on the number of sentimental elements to be extracted, existing studies can be categorized into the following tasks: (1) _single term extraction_ includes aspect term extraction (ATE) (Li and Lam, 2017; He et al., 2017), aspect category detection (ACD) (He et al., 2017; Liu et al., 2021); (2) _pair extraction_ includes aspect-opinion pairwise extraction (AOPE) (Yu et al., 2019; Wu et al., 2020), aspect-category sentiment analysis (ACSA) (Cai et al., 2020; Dai et al., 2020), and
\begin{table}
\begin{tabular}{l l l} \hline Task & Output & Example Output \\ \hline ATE & \{\(a\)\} & \{touch screen \} \\ \hline ACD & \{\(c\)\} & \{Screen\#Sensitivity\} \\ \hline AOPE & \{(a,o)\} & \{touch screen, not sensitive\} \\ \hline ACSA & \{\(c,s\)\} & \{(Screen\#Sensitivity, NEG)\} \\ \hline E2E- & \{(a,s)\} & \{touch screen, NEG\} \\ ABSA & & \\ \hline ASTE & \{\((a,o,s)\)\} & \{touch screen, not sensitive, NEG\} \\ \hline TASD & \{\(c,a,s\)\} & \{(Screen\#Sensitivity, touch screen, NEG)\} \\ \hline ASQP/ & \{(c,a,o,s)\} & \{Screen\#Sensitivity, touch screen, not sensitive, NEG\} \\ \hline \end{tabular}
\end{table}
Table 1: The outputs of an example, “touch screen is not sensitive”, for various ABSA tasks. \(a\), \(c\), \(o\), \(s\), and NEG are defined in the first paragraph of Sec. 1.
End-to-End ABSA (E2E-ABSA) (Li et al., 2019; He et al., 2019) to extract the aspect and its sentiment; (3) _triplet extraction_ includes aspect-sentiment triplet extraction (ASTE) (Mukherjee et al., 2021; Chen et al., 2021), and Target Aspect Sentiment Detection (TASD) (Wan et al., 2020); (4) _quadruple extraction_ includes aspect-category-opinion-sentiment (ACOS) quadruple extraction (Cai et al., 2021) and aspect sentiment quad prediction (ASQP) (Zhang et al., 2021). ACOS and ASQP are the same tasks, which aim to extract all aspect-category-opinion-sentiment quadruples per sample. Since ASQP covers the whole task name, we use ASQP to denote the ABSA quadruple extraction task. Table 1 summarizes an example of the outputs of various ABSA tasks.
This paper focuses on ASQP because it provides a complete aspect-level sentiment analysis (Zhang et al., 2022). We first observe that existing ASQP datasets are crawled from only one source and are small with low-density (Cai et al., 2021; Zhang et al., 2021). For example, the maximum sample size is around 4,000, while the maximum number of quadruples per sample is around 1.6. This limits the technical development of ASQP. Second, ASQP includes two extraction subtasks (aspect extraction and opinion extraction) and two classification subtasks (category classification and sentiment classification). Modeling the four subtasks simultaneously is challenging, especially when the quadruples contain implicit aspects or opinions (Cai et al., 2021). Though existing studies can resolve ASQP via pipeline-based (Cai et al., 2021) or generation-based methods (Zhang et al., 2021; Mao et al., 2022; Bao et al., 2022; Gao et al., 2022), they suffer from different shortcomings, i.e., pipeline-based methods tend to yield error propagation while generation-based methods perform slowly in training and inference.
To tackle the above challenges, we first construct two datasets, **en-Phone** and **zh-FoodBeverage**, to expand the capacity of datasets. en-Phone is an English ASQP dataset in the cell phone domain collected from several e-commercial platforms, while zh-FoodBeverage is the first Chinese ASQP dataset collected from multiple sources under the categories of Food and Beverage. Compared to the existing ASQP datasets, our datasets have 1.75 to 4.19 times more samples and a higher quadruple density of 1.3 to 1.8. This achievement is a result of our meticulous definition and adherence to annotation guidelines, which allow us to obtain more fine-grained quadruples.
After investigating strong ASQP baselines, we observed a decline in performance on our newly released datasets. This finding, coupled with the shortcomings of the existing baselines, motivated us to develop a novel one-step solution for ASQP, namely One-ASQP. As illustrated in Fig. 1, our One-ASQP adopts a shared encoder from a pre-trained language model (LM) and resolves two tasks, aspect category detection (ACD) and aspect-opinion-sentiment co-extraction (AOSC), simultaneously. ACD is implemented by a multi-class classifier and AOSC is fulfilled by a token-pair-based two-dimensional (2D) matrix with the sentiment-specific horns tagging schema, a popular technique borrowed from joint entity and relation extraction (Wang et al., 2020; Shang et al., 2022). The two tasks are trained independently and simultaneously, allowing us to avoid error propagation and overcome slow training and inference in generation-based methods. Moreover, we design a unique token, "[NULL]", appended at the beginning of the input, which helps us identify implicit aspects or opinions effectively.
Our contributions are three-fold: (1) We construct two new ASQP datasets consisting of more fine-grained samples with higher quadruple density while covering more domains and languages. Significantly, the released zh-FoodBeverage dataset is the first Chinese ASQP dataset, which provides opportunities to investigate potential technologies in a multi-lingual context for ASQP. (2) We propose One-ASQP to simultaneously detect aspect categories and co-extract aspect-opinion-sentiment triplets. One-ASQP can absorb deeper interactions between sentiment elements without error propagation and conquer slow performance in generation-based methods. Moreover, the delicately designed "[NULL]" token helps us to identify implicit aspects or opinions effectively. (3) We conducted extensive experiments demonstrating that One-ASQP is efficient and outperforms the state-of-the-art baselines in certain scenarios.
## 2 Datasets
We construct two datasets 1 to expand the capacity of existing ASQP datasets.
Footnote 1: More details are provided in Appendix A.
### Source
**en-Phone** is an English dataset collected from reviews on multiple e-commerce platforms in July and August of 2021, covering 12 cell phone brands. To increase the complexity and the quadruple density of the dataset, we apply the following filtering steps: (1) applying the LangID toolkit 2 to filter out comments whose body content is not in English; (2) filtering out samples with fewer than \(8\) valid tokens. **zh-FoodBeverage** is the first Chinese ASQP dataset, collected from Chinese comments from multiple sources in the years 2019-2021 under the categories of Food and Beverage. We clean the data by (1) filtering out samples with length less than \(8\) or greater than \(4000\); (2) filtering out samples with less than 70% valid Chinese characters; (3) filtering out ad texts with a classifier trained on marketing texts with 90% classification accuracy.
Footnote 2: [https://pypi.org/project/langid/](https://pypi.org/project/langid/)
### Annotation
A team of professional labelers is asked to label the texts following the guidelines in Appendix A.2. Two annotators individually annotate the same sample using our internal labeling system. The strict quadruple matching F1 score between the two annotators is 77.23%, which implies substantial agreement [16]. In case of disagreement, the project leader is asked to make the final decision. Some typical examples are shown in Table 10.
### Statistics and Analysis
Table 2 reports the statistics of two existing ASQP benchmark datasets and our released datasets for comparison. **en-Phone** contains 7,115 samples with 15,884 quadruples while **zh-FoodBeverage** contains 9,570 samples with 24,973 quadruples. The size and the number of quadruples are significantly larger than the current largest ASQP benchmark dataset, i.e., Laptop-ACOS. The statistics show that our released datasets contain unique characteristics and are denser than existing Restaurant-ACOS and Laptop-ACOS: (1) the number of words per sample is 25.78 and 193.95 for en-Phone and zh-FoodBeverage, respectively, while the number of quadruples per sample is 2.23 and 2.61 for en-Phone and zh-FoodBeverage accordingly. This shows that en-Phone and zh-FoodBeverage are much denser than existing ASQP datasets; (2) based on the annotation guidelines, we only label opinionated sentences with explicit aspects. Moreover, due to commercial considerations, we exclude sentences with neutral sentiment in zh-FoodBeverage; (3) here, we define more fine-grained aspects and opinions than existing ASQP datasets; see more examples in Appendix A. Consequently, we attain a longer average length per aspect and per opinion, as reported in the last two columns of Table 2.
## 3 Methodology
### ASQP Formulation
Given an opinionated sentence \(\mathbf{x}\), ASQP is to predict all aspect-level sentiment quadruples \(\{(c,a,o,s)\}\), which corresponds to the aspect category, aspect term, opinion term, and sentiment polarity, respectively. The aspect category \(c\) belongs to a category set \(\mathcal{C}\); the aspect term \(a\) and the opinion term \(o\) are typically text spans in \(\mathbf{x}\) while they can be null if the target is not explicitly mentioned, i.e., \(a\in V_{\mathbf{x}}\cup\{\emptyset\}\) and \(o\in V_{\mathbf{x}}\cup\{\emptyset\}\), where \(V_{\mathbf{x}}\) denotes the set containing all possible continuous spans of \(\mathbf{x}\). The sentiment polarity \(s\) belongs to one of the sentiment classes, SENTIMENT={POS, NEU, NEG}, which corresponds to the positive, neutral, and negative sentiment, respectively.
### One-ASQP
Our One-ASQP resolves two subtasks, ACD and AOSC, simultaneously, where ACD seeks a classifier to determine the aspect categories, and AOSC is to extract all \((a,o,s)\)-triplets.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline & **\%** & **w/s** & \(\epsilon_{c}\) & **\%** & **Rq** & **\%** & **EAEAQ** & **EAQ** & **IAEQ** & **t**\%** & **t**\%** & **t**\%** & **t**\%** & **t**\%** & **t**\%** \\ \hline Restaurant-ACOS & 2,286 & 15.11 & 13.6 & 1.60 & 2.431 & 350 & 530 & 1.007 & 151 & 2.503 & 1.46 & 1.20 \\ \hline Laptop-ACOS & 4,076 & 15.73 & 121 & 57.73 & 1.42 & 3.278 & 1.241 & 912 & 342 & 1.879 & 316 & 35.785 & 1.40 & 1.09 \\ \hline \hline en-Phone & 7,115 & 25.78 & 88 & 15.384 & 2.23 & 13,160 & 2.724 & - & - & 3.751 & 371 & 11.562 & 1.73 & 1.98 \\ \hline zh-FoodBeverage & 9,570 & 193.95 & 138 & 24.973 & 2.61 & 17,407 & 7.566 & - & - & 6,778 & - & 18,195 & 2.60 & 2.04 \\ \hline \end{tabular}
\end{table}
Table 2: Data statistics for the ASQP task. # denotes the number of corresponding elements. s, w, c, q stand for samples, words, categories, and quadruples, respectively. EA, EO, IA, and IO denote explicit aspect, explicit opinion, implicit aspect, and implicit opinion, respectively. “-” means this item is not included.
Given \(\mathbf{x}\) with \(n\) tokens, we construct the input as follows:
\[\left[\text{NULL}\right]x_{1}\,x_{2}\,\dots\,x_{n}, \tag{1}\]
where the token \(\left[\text{NULL}\right]\) is introduced to detect implicit aspects or opinions; see more details in Sec. 3.2.2. Now, via a pre-trained LM, both tasks share a common encoder to get the representations:
\[\mathbf{H}=\mathbf{h}_{\text{NULL}}\,\mathbf{h}_{1}\,\mathbf{h}_{2}\,\dots\, \mathbf{h}_{n}\in\mathbb{R}^{d\times(n+1)}, \tag{2}\]
where \(d\) is the token representation size.
#### 3.2.1 Aspect Category Detection
We apply a classifier to predict the probability of category detection:
\[\mathbf{C}=\text{sigmoid}(\mathbf{W}_{2}(\text{RELU}(\mathbf{W}_{1}\mathbf{ H}+\mathbf{b}_{1}))), \tag{3}\]
where \(\mathbf{W}_{1}\in\mathbb{R}^{d\times d}\), \(\mathbf{b}_{1}\in\mathbb{R}^{d}\), \(\mathbf{W}_{2}\in\mathbb{R}^{|\mathcal{C}|\times d}\). Here, \(|\mathcal{C}|\) is the number of categories in \(\mathcal{C}\). Hence, \(\mathbf{C}\in\mathbb{R}^{|\mathcal{C}|\times(n+1)}\), where \(C_{ij}\) indicates the probability of the \(i\)-th token to the \(j\)-th category.
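A minimal PyTorch rendering of the classifier in Eq. (3) is sketched below for concreteness; it is not the authors' released code, and the class and argument names are our own.

```python
import torch
import torch.nn as nn

class CategoryHead(nn.Module):
    """Token-level multi-label aspect-category scores, in the spirit of Eq. (3)."""
    def __init__(self, hidden_size: int, num_categories: int):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_categories),
        )

    def forward(self, h):                  # h: (batch, n+1, d) encoder states
        return torch.sigmoid(self.ffn(h))  # (batch, n+1, |C|) category probabilities
```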
#### 3.2.2 AOSC
We tackle AOSC via a token-pair-based 2D matrix with the sentiment-specific horns tagging schema to determine the positions of aspect-opinion pairs and their sentiment polarity.
Tagging. We define four types of tags: (1) **AB-OB** denotes the cell for the beginning position of an aspect-opinion pair. For example, as ("touch screen", "not sensitive") is an aspect-opinion pair, the cell corresponding to ("touch", "not") in the 2D matrix is marked by "AB-OB". (2) **AE-OE** indicates the cell for the end position of an aspect-opinion pair. Hence, the cell of ("screen", "sensitive") is marked by "AE-OE". (3) **AB-OE-*SENTIMENT** defines a cell with its sentiment polarity, where the row position denotes the beginning of an aspect and the column position denotes the end of an opinion. Hence, the cell of ("touch", "sensitive") is tagged by "AB-OE-NEG". As SENTIMENT consists of three types of sentiment polarity, there are three cases in AB-OE-*SENTIMENT.
Figure 1: The structure of our One-ASQP: solving ACD and AOSC simultaneously. ACD is implemented by a multi-class classifier. AOSC is fulfilled by a token-pair-based 2D matrix with sentiment-specific horns tagging. The results in the row of “[NULL]” indicate no aspect for the opinion of “very speedy”. In contrast, the results in the column of “[NULL]” imply no opinion for the aspect of “express package”.
(4) **-** denotes any cell other than the above three types. Hence, we have five types of unique tags, {AB-OB, AE-OE, AB-OE-POS, AB-OE-NEU, AB-OE-NEG}.
Triplet Decoding. Since the tagged 2D matrix has marked the boundary tokens of all aspect-opinion pairs and their sentiment polarity, we can decode the triplets easily. First, by scanning the 2D matrix column-by-column, we can determine the text span of an aspect, starting with "AB-OE-*SENTIMENT" and ending with "AE-OE". Similarly, by scanning the 2D matrix row-by-row, we can get the text span of an opinion, which starts from "AB-OB" and ends with "AB-OE-*SENTIMENT". Finally, the sentiment polarity can be easily determined by "AB-OE-*SENTIMENT".
Implicit Aspects/Opinions Extraction. Detecting implicit aspects or opinions is critical in ASQP Cai et al. (2021). Here, we append the "[NULL]" token at the beginning of the input. Our One-ASQP can then easily determine the cases of Implicit Aspects and Explicit Opinions (IA&EO) and Explicit Aspects and Implicit Opinions (EA&IO). The whole procedure is similar to the above triplet decoding: when a text span in the row of "[NULL]" starts from "AB-OB" and ends with "AB-OE-*SENTIMENT", we obtain an explicit opinion without an aspect. Meanwhile, when a text span in the column of "[NULL]" starts from "AB-OE-*SENTIMENT" and ends with "AE-OE", we obtain an explicit aspect without an opinion. As shown in Fig. 1, we can quickly obtain the corresponding aspect-opinion pairs as "(NULL, very speedy)" and "(express package, NULL)". The sentiment polarity can also be determined by "AB-OE-*SENTIMENT" accordingly. Although the IA&IO case cannot be solved directly in the current setting, it is possible to resolve it in two steps. First, we can identify IA&IO using tools such as Extract-Classify-ACOS Cai et al. (2021). Then, we can classify aspect categories and sentiment polarity. However, a unified solution with One-ASQP is left for future work.
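The decoding procedure above can be written down directly. The sketch below is a simplified version (our own code, ignoring corner cases such as overlapping pairs or a single-token aspect and opinion sharing one cell); the "[NULL]" row and column naturally produce the implicit-aspect and implicit-opinion cases.

```python
def decode_triplets(tags, tokens):
    """tags: (n+1) x (n+1) matrix of tag strings over
    {'AB-OB', 'AE-OE', 'AB-OE-POS', 'AB-OE-NEU', 'AB-OE-NEG', '-'};
    tokens: ['[NULL]', x_1, ..., x_n]; rows index aspect tokens, columns opinion tokens."""
    n, triplets = len(tokens), []
    for i in range(n):
        for j in range(n):
            if not tags[i][j].startswith("AB-OE-"):
                continue
            sentiment = tags[i][j].rsplit("-", 1)[-1]  # POS / NEU / NEG
            # aspect span: scan column j downwards until 'AE-OE'
            i_end = next((r for r in range(i, n) if tags[r][j] == "AE-OE"), i)
            # opinion span: scan row i leftwards until 'AB-OB'
            j_start = next((c for c in range(j, -1, -1) if tags[i][c] == "AB-OB"), j)
            aspect = "NULL" if i == 0 else " ".join(tokens[i:i_end + 1])
            opinion = "NULL" if j == 0 else " ".join(tokens[j_start:j + 1])
            triplets.append((aspect, opinion, sentiment))
    return triplets
```

On the running example, the cells ("touch", "not") = AB-OB, ("screen", "sensitive") = AE-OE, and ("touch", "sensitive") = AB-OE-NEG decode to ("touch screen", "not sensitive", NEG).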
Tagging Score. Given \(\mathbf{H}\), we compute the probabilities of the \((i,j)\)-th cell belonging to the corresponding tags by:
\[\mathbf{a}_{i} =\mathbf{W}_{a}\mathbf{h}_{i}+\mathbf{b}_{a}, \tag{4}\] \[\mathbf{o}_{j} =\mathbf{W}_{o}\mathbf{h}_{j}+\mathbf{b}_{o},\] (5) \[\mathbf{P}_{ij} =\text{sigmoid}(\mathbf{a}_{i}^{T}\mathbf{o}_{j})\in\mathbb{R}^{5} \tag{6}\]
where \(\mathbf{W}_{a}\in\mathbb{R}^{D\times d}\) and \(\mathbf{W}_{o}\in\mathbb{R}^{D\times d}\) are the weight matrices for the aspect token and the opinion token, respectively, \(\mathbf{b}_{a}\in\mathbb{R}^{D}\) and \(\mathbf{b}_{o}\in\mathbb{R}^{D}\) are the biases for the aspect token and the opinion token, respectively. \(D\) is the hidden variable size set to 400 as default.
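One possible PyTorch rendering of the pair scoring in Eqs. (4)-(6) is sketched below; to obtain one score per tag we give each of the five tags its own pair of projections, which may differ from the exact parameterization in the authors' implementation.

```python
import torch
import torch.nn as nn

class PairTagger(nn.Module):
    """Score every (aspect token, opinion token) pair against the five tags."""
    def __init__(self, hidden_size: int, pair_dim: int = 400, num_tags: int = 5):
        super().__init__()
        self.num_tags, self.pair_dim = num_tags, pair_dim
        self.aspect_proj = nn.Linear(hidden_size, num_tags * pair_dim)   # cf. Eq. (4)
        self.opinion_proj = nn.Linear(hidden_size, num_tags * pair_dim)  # cf. Eq. (5)

    def forward(self, h):                                  # h: (batch, n+1, d)
        b, n, _ = h.shape
        a = self.aspect_proj(h).view(b, n, self.num_tags, self.pair_dim)
        o = self.opinion_proj(h).view(b, n, self.num_tags, self.pair_dim)
        scores = torch.einsum("bitd,bjtd->bijt", a, o)     # cf. Eq. (6), one score per tag
        return torch.sigmoid(scores)                       # (batch, n+1, n+1, 5)
```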
### Training Procedure
Training. We train ACD and AOSC jointly by minimizing the following loss function:
\[\mathcal{L}_{total}=\alpha\mathcal{L}_{ACD}+\beta\mathcal{L}_{ AOSC}, \tag{7}\]
where \(\alpha\) and \(\beta\) are trade-off constants set to 1 for simplicity. The ACD loss \(\mathcal{L}_{ACD}\) and the AOSC loss \(\mathcal{L}_{AOSC}\) are two cross-entropy losses defined as follows:
\[\mathcal{L}_{ACD}=-\frac{1}{n\times|\mathcal{C}|}\times \tag{8}\] \[\sum_{i=1}^{n}\sum_{j=0}^{|\mathcal{C}|-1}\{y_{ij}^{\mathcal{C}} \log C_{ij}+(1-y_{ij}^{\mathcal{C}})\log(1-C_{ij})\},\] \[\mathcal{L}_{AOSC}=-\frac{1}{(n+1)\times(n+1)\times 5}\times\] (9) \[\sum_{i=0}^{n}\sum_{j=0}^{n}\{\mathbf{Y}_{ij}^{t}\log\mathbf{P} _{ij}+(1-\mathbf{Y}_{ij}^{t})\log(1-\mathbf{P}_{ij})\},\]
where \(C_{ij}\) is the predicted category computed by Eq. (3), \(y_{ij}^{\mathcal{C}}\in\{0,1\}\) and it is 1 when the \(i\)-th token is assigned to the \(j\)-th category and 0 otherwise. \(\mathbf{P}_{ij}\) is the predicted tagging score computed by Eq. (6) for all five types of tags while \(\mathbf{Y}_{ij}\in\mathbb{R}^{5}\) is the ground-truth one-hot encoding.
During training, we adopt the negative sampling strategy of Li et al. (2021) to improve the performance of our One-ASQP on unlabeled quadruples. We set the negative sampling rate to 0.4, a value within the range suggested by Li et al. (2021) that yields good results. Specifically, to minimize the loss in Eq. (7), we randomly sample 40% of the unlabeled entries as negative instances, which correspond to '0' in ACD and '-' in AOSC, as shown in Fig. 1.
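For concreteness, the AOSC loss with this negative sampling can be sketched as follows (a simplification of Eq. (9) in our own code; the masking and normalization details in the released implementation may differ).

```python
import torch
import torch.nn.functional as F

def aosc_loss_with_negative_sampling(pred, target, neg_rate=0.4):
    """pred, target: (batch, n+1, n+1, 5) float tensors; an all-zero target row marks
    an unlabeled ('-') cell. Labeled cells are always kept; a neg_rate fraction of the
    unlabeled cells is sampled as negatives and the remaining cells are ignored."""
    labeled = (target.sum(dim=-1, keepdim=True) > 0).float()
    sampled_neg = (torch.rand_like(labeled) < neg_rate).float() * (1.0 - labeled)
    mask = labeled + sampled_neg                              # (batch, n+1, n+1, 1)
    bce = F.binary_cross_entropy(pred, target, reduction="none")
    return (bce * mask).sum() / (mask.sum() * pred.size(-1)).clamp(min=1.0)
```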
### Quadruples Decoding
After attaining the model, we can obtain the category sequences of ACD and the AOS triplets in the AOSC matrix simultaneously. We then decode the quadruples in one step via their common terms. For example, as shown in Fig. 1, we can merge (Logistics#Speed, express package) and (express package, NULL, POS) via the common aspect term,
"express package", and obtain the quadruple (Logistics#Speed, express package, NULL, POS).
Overall, our One-ASQP consists of two independent tasks, ACD and AOSC. Their outputs are combined only in the final decoding stage and do not rely on each other during training, as pipeline-based methods require. This allows us to train the model efficiently and decode the results consistently in both training and testing.
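The one-step merging itself reduces to a lookup on the shared aspect term; the helper below is a hypothetical sketch whose input format (span lists produced by the two heads) is our own assumption.

```python
def merge_quadruples(category_spans, aos_triplets):
    """category_spans: (category, aspect_term) pairs decoded from ACD;
    aos_triplets: (aspect_term, opinion_term, sentiment) triplets decoded from AOSC."""
    category_of = {aspect: category for category, aspect in category_spans}
    return [(category_of[a], a, o, s) for (a, o, s) in aos_triplets if a in category_of]

# e.g. merge_quadruples([("Logistics#Speed", "express package")],
#                       [("express package", "NULL", "POS")])
# -> [("Logistics#Speed", "express package", "NULL", "POS")]
```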
## 4 Experimental Setup
Datasets. We conduct the experiments on four datasets in Table 2. For Restaurant-ACOS and Laptop-ACOS, we apply the original splitting on the training, validation, and test sets [14]. For en-Phone and zh-FoodBeverage, the splitting ratio is 7:1.5:1.5 for training, validation, and test, respectively.
Evaluation Metrics. We employ F1 scores as the main evaluation metric and also report the corresponding Precision and Recall scores. A sentiment quad prediction is counted as correct if and only if all the predicted elements are exactly the same as the gold labels. The time cost is also recorded to demonstrate the efficiency of One-ASQP.
Implementation Details. One-ASQP is implemented in PyTorch 1.13.1. All experiments are run on a workstation with an Intel Xeon E5-2678 [email protected] CPU, 256G memory, a single A5000 GPU, and Ubuntu 20.04. For the English datasets, we adopt the LMs DeBERTaV3-base and DeBERTaV3-large [13], which contain 12 layers with a hidden size of 768 and 24 layers with a hidden size of 1024, respectively. For the Chinese dataset, we adopt MacBERT [14], a Chinese LM with the same structure as DeBERTaV3. For the English datasets, the maximum token length is set to 128, as the average number of words per sample is only 25.78, as shown in Table 2. For zh-FoodBeverage, the maximum token length is 256. The batch size and learning rate for all experiments are 32 and 3e-5, respectively, as they perform well. We monitor the F1 score on the validation set and terminate the training when the score does not improve for four epochs. Finally, we report the scores on the test set using the model that performs best on the validation set.
Baselines. We compare our One-ASQP with strong baselines: (1) _pipeline-based methods_ consist of four methods, i.e., **DP-ACOS**, **JET-ACOS**, **TAS-BERT-ACOS**, and **Extract-Classify-ACOS**, which are all proposed in [14]; (2) _generation-based methods_ include BART for ABSA (**BARTABSA**) [14], Generative Aspect-based Sentiment analysis (**GAS**) [13], **Paraphrase** generation for ASQP [13], **Seq2Path**[15], **GEN-SCL-NAT**[11], and ABSA with Opinion Tree Generation (**OTG**) [12].
## 5 Results and Discussions
### Main Results
Table 3 reports the comparison results on two existing ASQP datasets. Since all methods apply the same splitting on these two datasets, we copy the results of baselines from corresponding references. The results show that: (1) Generation-based methods gain significant improvement over pipeline-based methods as pipeline-based methods tend to propagate the errors. (2) Regarding generation-based methods, OTG attains the best performance on the F1 score. The exceptional performance may come from integrating various features, e.g., syntax and semantic information, for forming the opinion tree structure [12]. (3) Our One-ASQP is competitive with generation-based methods. By checking the LM sizes, we know that the generation-based baselines except BARTABSA apply T5-base as the LM, which consists of 220M
\begin{table}
\begin{tabular}{l|c|c} \hline \hline
**Method** & **en-Phone** & **zh-FoodBeverage** \\ \hline Extract-Classify-ACOS & 31.28 33.23 32.23 & 41.39 32.53 36.43 \\ Paraphrase & 46.72 49.84 48.23 & 52.74 50.47 51.58 \\ GEN-SCL-NAT & 45.16 **51.56** 48.15 & 54.28 48.95 51.48 \\ \hline One-ASQP (base) & **57.90** **39.66** 35.35 & 56.51 **59.13** 37.579 \\ One-ASQP (large) & 57.42 50.96 **54.00** & **60.96** 56.24 **58.51** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of en-Phone and zh-FoodBeverage. Scores are averaged over five runs with different seeds.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline
**Method** & \multicolumn{3}{c|}{**Restaurant-ACOS**} & **Laptop-ACOS** \\ \hline DF-ACOS & 34.67 15.08 & 21.04 & 13.04 & 0.57 & 8.00 \\ JET-ACOS & 59.81 28.94 39.01 & 44.52 162.5 & 23.81 \\ TAS-BERT-ACOS & 26.29 46.29 33.3 & **47.15** 19.22 27.31 & 19.27 23.1 \\ Extract-Classify-ACOS & 38.54 52.96 44.61 & 45.45 29.48 35.80 & - \\ BARTABSA & 56.52 53.5 35.96 & 53.67 41.63 40.46 47.05 & - \\ GAS & 60.69 58.52 59.9 & 59.69 41.60 42.75 42.17 & - \\ Paraphrase & 58.98 91.51 59.90 & 41.77 **45.04** 43.34 \\ Seq2Path & - & - & 58.41 & - & - & 42.97 \\ GEN-SCL-NAT & - & - & 62.62 & - & - & 45.16 \\ OTG & 63.96 **61.74** **62.83** & 46.11 44.79 **45.44** \\ \hline One-ASQP (base) & 62.60 57.21 59.78 & 42.83 40.00 41.37 & - \\ One-ASQP (large) & **65.91** 56.24 60.69 & 43.80 39.54 41.56 & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of Restaurant-ACOS and Laptop-ACOS. Scores are averaged over 5 runs with different seeds.
parameters. In comparison, our One-ASQP model utilizes DeBERTaV3, which consists of only 86M and 304M backbone parameters for its base and large versions, respectively. The compact model parameter size is a crucial advantage of our approach. However, on the Restaurant-ACOS and Laptop-ACOS datasets, One-ASQP falls slightly behind some generation-based methods that can take advantage of the semantics of sentiment elements by generating natural language labels. In contrast, One-ASQP maps each label to a specific symbol, similar to the numerical indexing in classification models. Unfortunately, the limited quantity of these datasets prevents our One-ASQP model from achieving optimal performance.
We further conduct experiments on en-Phone and zh-FoodBeverage and compare our One-ASQP with three strong baselines, Extract-Classify-ACOS, Paraphrase, and GEN-SCL-NAT. We select them because Extract-Classify-ACOS is the best pipeline-based method, while Paraphrase and GEN-SCL-NAT are two strong generation-based baselines that release source code, which makes replication easier. Results in Table 4 are averaged over five runs with different random seeds and show that our One-ASQP, even when adopting the base LM version, outperforms the three strong baselines. We conjecture that the good performance comes from two reasons: (1) The newly released datasets contain a higher quadruple density and more fine-grained sentiment quadruples. This increases the task difficulty and amplifies the order issue in generation-based methods [14], i.e., the orders between the generated quads do not naturally exist, or the generation of the current quads should not condition on the previous ones. More evaluation tests are provided in Sec. 5.4. (2) The number of categories in the new datasets is much larger than in Restaurant-ACOS and Laptop-ACOS. This also increases the search space, which tends to yield generation bias, i.e., the generated tokens come neither from the original text nor from the pre-defined categories and sentiments. Overall, the results demonstrate the significance of our released datasets for further technical development.
Table 5 reports the time cost (in seconds) of training for one epoch and of inference on Restaurant-ACOS and en-Phone; more results are in Appendix B.1. The results show that our One-ASQP is much more efficient than the strong baselines, as Extract-Classify-ACOS needs to encode twice and Paraphrase can only decode tokens sequentially. To provide a fair comparison, we set the batch size to 1 and show the inference time in round brackets. The overall results show that our One-ASQP is more efficient than the baselines. Our One-ASQP can infer the quadruples in parallel, which is favorable for real-world deployment.
### Effect of Handling Implicit Aspects/Opinions
Table 6 reports the breakdown performance of the methods in addressing the implicit aspects/opinions problem. The results show that (1) the generation-based baseline, GEN-SCL-NAT, handles EA&IO better than our One-ASQP when the quadruple density is low. In contrast, One-ASQP performs much better than GEN-SCL-NAT on IA&EO in Restaurant-ACOS. GEN-SCL-NAT may perform worse on IA&EO because the decoding space of generated explicit opinions is huge compared to that of explicit aspects. (2) In en-Phone and zh-FoodBeverage, One-ASQP consistently outperforms all baselines on EA&EO and EA&IO. Our One-ASQP is superior in handling implicit opinions when the datasets are more fine-grained.
### Ablation Study on ACD and AOSC
To demonstrate the beneficial effect of sharing the encoder for the ACD and AOSC tasks, we train the two tasks separately, i.e., setting (\(\alpha\), \(\beta\)) in Eq. 7 to (1.0, 0.0) and (0.0, 1.0). The results in Table 7 show that our One-ASQP absorbs deeper information between the two tasks and attains better performance. By sharing the encoder and conducting joint training, the connection between the category and the other sentiment elements becomes tighter, so the two tasks contribute to each other.
### Effect of Different Quadruple Densities
We conduct additional experiments to test the effect of different quadruple densities. Specifically, we keep those samples with only one quadruple
\begin{table}
\begin{tabular}{l|c c|c c} \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**Restaurant-ACOS**} & \multicolumn{2}{c}{**en-Phone**} \\ \hline & Train & Inference & Train & Inference \\ \hline Extract-Classify-ACOS & 38.43 & 14.79 & 158.34 & 25.23 \\ Paraphrase & 30.52 & 58.23 & 99.23 & 160.56 \\ GEN-SCL-NAT & 35.32 & 61.64 & 104.23 & 175.23 \\ \hline OneASQP (base) & 11.23 & 6.34 (29.35) & 32.23 & 6.32 (35.45) \\ OneASQP (large) & 17.63 & 14.63 (44.62) & 105.23 & 10.34 (61.23) \\ \hline \end{tabular}
\end{table}
Table 5: Time cost (seconds) on Restaurant-ACOS and en-Phone. For a fair comparison with baselines, we record the inference time of our One-ASQP with the batch size of 1 and report them in the round bracket.
in en-Phone and zh-FoodBeverage and construct two lower-density datasets, en-Phone (one) and zh-FoodBeverage (one). We then obtain 1,528 and 3,834 samples in these two datasets, which are around one-fifth and two-fifths of the original datasets, respectively.
We report only the results of Paraphrase and of our One-ASQP with the base versions of the corresponding LMs. Results in Table 8 yield some notable observations: (1) Paraphrase attains better performance on en-Phone (one) than our One-ASQP. It seems that generation-based methods are powerful in the low-resource scenario. However, their performance decays on the full datasets due to the generation order issue. (2) Our One-ASQP significantly outperforms Paraphrase on zh-FoodBeverage in both cases. The results show that our One-ASQP needs sufficient training samples to perform well. However, even in zh-FoodBeverage (one), the number of labeled quadruples is 3,834, so the required annotation effort is light in real-world applications.
### Error Analysis and Case Study
To better understand the characteristics of our One-ASQP, especially when it fails, we conduct an error analysis and case study in this section. We check the incorrect quad predictions on all datasets and show one typical error example for each type from Laptop-ACOS in Fig. 2, where we report the percentage of errors for better illustration. The results show that (1) In general, extracting aspects and opinions tends to introduce more errors than classifying categories and sentiments. Aspects and opinions have more complex semantic definitions than categories and sentiments, and extracting implicit cases further increases the difficulty of these tasks. (2) There is a significant number of category errors in Laptop-ACOS, likely due to an imbalance issue: there are 121 categories with relatively few samples per category. For example, 35 categories have fewer than two quadruples. (3) The percentage of opinion errors is higher than that of aspect errors in all datasets because opinions vary more than aspects, and there are implicit opinions in the new datasets. This is reflected in the numbers of opinion errors in en-Phone and zh-FoodBeverage, which are 125 (37.31%) and 395 (49.94%), respectively, exceeding the corresponding aspect errors of 99 (29.55%) and 246 (31.10%). Removing samples with implicit opinions reduces the opinion errors to 102 and 260 in en-Phone and zh-FoodBeverage, indicating that explicit opinion errors are still slightly more frequent than explicit aspect errors. (4) The percentage of sentiment errors is relatively small, demonstrating the effectiveness of our proposed sentiment-specific horns tagging schema.
## 6 Related Work
**ABSA Benchmark Datasets** are mainly provided by the SemEval'14-16 shared challenges Pontiki et al. (2014, 2015, 2016). The initial task is only to identify opinions expressed about specific entities and their aspects. In order to investigate more tasks, such as AOPE, E2E-ABSA, ASTE, TASD, and ASQP, researchers have re-annotated the datasets and constructed some new ones Fan et al. (2019); Li
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**Restaurant-ACOS**} & \multicolumn{2}{c|}{**Laptop-ACOS**} & \multicolumn{2}{c|}{**en-Phone**} & \multicolumn{2}{c}{**zh-FoodBeverage**} \\ \cline{2-10} & EA\&EO & EA\&IO & IA\&EO & EA\&IO & IA\&EO & EA\&IO & EA\&IO \\ \hline Extract-Classify & 45.0 & 23.9 & 34.7 & 35.4 & 16.8 & 39.0 & 35.2 & 24.2 & 37.2 & 33.3 \\ Paraphrase & 65.4 & 45.6 & 53.3 & 45.7 & 33.0 & 51.0 & 49.1 & 45.6 & 50.9 & 49.9 \\ GEN-SCL-NAT & **66.5** & **46.2** & 56.5 & **45.8** & **34.3** & **54.0** & 50.1 & 45.4 & 50.9 & 49.9 \\ \hline One-ASQP & 66.3 & 31.1 & **64.2** & 44.4 & 26.7 & 53.5 & **54.8** & **52.9** & **55.4** & **59.8** \\ \hline \end{tabular}
\end{table}
Table 6: Breakdown performance (F1 scores) to depict the ability to handle implicit aspects or opinions. E and I stand for Explicit and Implicit, respectively, while A and O denote Aspect and Opinion, respectively.
\begin{table}
\begin{tabular}{l|c c|c c c c} \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**en-Phone**} & \multicolumn{2}{c}{**zh-FoodBeverage**} \\ \cline{2-7} & one & full & one & full \\ \hline Paraphrase & **49.78** & 48.23 & 49.23 & 50.23 \\ OneASQP & 36.12 & **53.58** & **53.39** & **57.79** \\ \hline \end{tabular}
\end{table}
Table 8: Comparison results on different datasets with different quadruple densities.
et al., 2019; Xu et al., 2020; Wan et al., 2020; Cai et al., 2021). However, re-annotated datasets still contain the following limitations: (1) The data is collected from only one source, limiting the scope of the data; (2) the data size is usually small, where the maximum one is only around 4,000; (3) there is only a labeled quadruple per sentence and many samples share a common aspect, which makes the task easier; (4) the available public datasets are all in English. The shortcomings of existing benchmark datasets motivate us to crawl and curate more data from more domains, covering more languages and with higher quadruple density.
**ASQP** aims to predict the four sentiment elements to provide a complete aspect-level sentiment structure (Cai et al., 2021; Zhang et al., 2021). The task has been extended to several variants, e.g., capturing the quadruple of holder-target-expression-polarity (R et al., 2022; Lu et al., 2022) or the quadruple of target-aspect-opinion-sentiment in a dialogue (Li et al., 2022). Existing studies can be divided into the pipeline paradigm and the generation paradigm. A typical _pipeline_-based work (Cai et al., 2021) has investigated different techniques to solve the subtasks accordingly. It consists of (1) first exploiting double propagation (DP) (Qiu et al., 2011) or JET (Xu et al., 2020) to extract the aspect-opinion-sentiment triplets and, after that, detecting the aspect category to output the final quadruples; (2) first utilizing TAS-BERT (Wan et al., 2020) and the Extract-Classify scheme (Wang et al., 2017) to perform the aspect-opinion co-extraction and predicting category-sentiment afterward. Most studies fall in the _generation paradigm_ (Zhang et al., 2021; Mao et al., 2022; Bao et al., 2022; Gao et al., 2022). Zhang et al. (2021) is the first generation-based method to predict the sentiment quads in an end-to-end manner via a _PARAPHRASE_ modeling paradigm. It has been extended and surpassed by Seq2Path (Mao et al., 2022) or tree-structure generation (Mao et al., 2022; Bao et al., 2022) to tackle the generation order issue or to capture more information. Prompt-based generative methods are proposed to assemble multiple tasks as LEGO bricks to attain task transfer (Gao et al., 2022) or tackle few-shot learning (Varia et al., 2022). GEN-SCL-NAT (Peper and Wang, 2022) is introduced to exploit supervised contrastive learning and a new structured generation format to improve the naturalness of the output sequences for ASQP. However, existing methods suffer either from error propagation, in the pipeline-based methods, or from slow computation, in the generation-based methods. These shortcomings motivate us to propose One-ASQP.
## 7 Conclusions
In this paper, we release two new datasets for ASQP, including the first Chinese ASQP dataset, and propose One-ASQP, a method for predicting sentiment quadruples simultaneously. One-ASQP utilizes a token-pair-based 2D matrix with sentiment-specific horns tagging, which allows for deeper interactions between sentiment elements, enabling efficient decoding of all aspect-opinion-sentiment triplets. An elaborately designed "[NULL]" token is used to identify implicit aspects or opinions effectively. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of One-ASQP. Notably, existing strong baselines exhibit a decay in performance on the newly released datasets. We hope these datasets and One-ASQP will inspire further technical development in this area.
Figure 2: Error analysis and case study. Though the predicted aspect and opinion differ from the golden ones in the above examples, they seem correct.
## Acknowledgments
The work was partially supported by the IDEA Information and Super Computing Centre (ISCC) and the National Nature Science Foundation of China (No. 62201576).
## Limitations
Our proposed One-ASQP still contains some limitations:
* Our One-ASQP does not handle the IA&IO case, i.e., samples where both the aspect and the opinion are implicit. We defer the technical exploration of this issue to future work.
* One-ASQP has to split the ASQP task into two subtasks, ADC and AOSC. It is still promising to explore more effective solutions, e.g., solving it with a single task that can absorb deeper interactions between all elements.
* Generally, One-ASQP suffers more opinion errors than errors on other sentiment elements due to the fine-grained annotation and implicit-opinion issues. It is possible to tackle this by exploring more advanced techniques, e.g., syntax or semantics augmentation, to dig out deeper connections between opinions and other sentiment elements.
* One-ASQP tends to make errors when there are many aspect categories with few labeled quadruples. It is therefore also important to explore more robust solutions for detecting aspect categories in the low-resource scenario.
* Though we have released datasets in both English and Chinese, we do not explore ASQP in the multi-lingual scenario. We leave this as future work.
## Ethics Statement
We follow the ACL Code of Ethics. In our work, there are no human subjects and informed consent is not applicable.
|
2304.12149
|
Exploring shared memory architectures for end-to-end gigapixel deep
learning
|
Deep learning has made great strides in medical imaging, enabled by hardware
advances in GPUs. One major constraint for the development of new models has
been the saturation of GPU memory resources during training. This is especially
true in computational pathology, where images regularly contain more than 1
billion pixels. These pathological images are traditionally divided into small
patches to enable deep learning due to hardware limitations. In this work, we
explore whether the shared GPU/CPU memory architecture on the M1 Ultra
systems-on-a-chip (SoCs) recently released by Apple, Inc. may provide a
solution. These affordable systems (less than \$5000) provide access to 128 GB
of unified memory (Mac Studio with M1 Ultra SoC). As a proof of concept for
gigapixel deep learning, we identified tissue from background on gigapixel
areas from whole slide images (WSIs). The model was a modified U-Net (4492
parameters) leveraging large kernels and high stride. The M1 Ultra SoC was able
to train the model directly on gigapixel images (16000$\times$64000 pixels,
1.024 billion pixels) with a batch size of 1 using over 100 GB of unified
memory for the process at an average speed of 1 minute and 21 seconds per batch
with Tensorflow 2/Keras. As expected, the model converged with a high Dice
score of 0.989 $\pm$ 0.005. Training up until this point took 111 hours and 24
minutes over 4940 steps. Other high RAM GPUs like the NVIDIA A100 (largest
commercially accessible at 80 GB, $\sim$\$15000) are not yet widely available
(in preview for select regions on Amazon Web Services at \$40.96/hour as a
group of 8). This study is a promising step towards WSI-wise end-to-end deep
learning with prevalent network architectures.
|
Lucas W. Remedios, Leon Y. Cai, Samuel W. Remedios, Karthik Ramadass, Aravind Krishnan, Ruining Deng, Can Cui, Shunxing Bao, Lori A. Coburn, Yuankai Huo, Bennett A. Landman
|
2023-04-24T15:00:42Z
|
http://arxiv.org/abs/2304.12149v1
|
# Exploring shared memory architectures for end-to-end gigapixel deep learning
###### Abstract
Deep learning has made great strides in medical imaging, enabled by hardware advances in GPUs. One major constraint for the development of new models has been the saturation of GPU memory resources during training. This is especially true in computational pathology, where images regularly contain more than 1 billion pixels. These pathological images are traditionally divided into small patches to enable deep learning due to hardware limitations. In this work, we explore whether the shared GPU/CPU memory architecture on the M1 Ultra systems-on-a-chip (SoCs) recently released by Apple, Inc. may provide a solution. These affordable systems (less than $5000) provide access to 128 GB of unified memory (Mac Studio with M1 Ultra SoC). As a proof of concept for gigapixel deep learning, we identified tissue from background on gigapixel areas from whole slide images (WSIs). The model was a modified U-Net (4492 parameters) leveraging large kernels and high stride. The M1 Ultra SoC was able to train the model directly on gigapixel images (\(16000\times 64000\) pixels, 1.024 billion pixels) with a batch size of 1 using over 100 GB of unified memory for the process at an average speed of 1 minute and 21 seconds per batch with Tensorflow 2/Keras. As expected, the model converged with a high Dice score of \(0.989\pm 0.005\). Training up until this point took 111 hours and 24 minutes over 4940 steps. Other high RAM GPUs like the NVIDIA A100 (largest commercially accessible at 80 GB, \(\sim\)$15000) are not yet widely available (in preview for select regions on Amazon Web Services at $40.96/hour as a group of 8). This study is a promising step towards WSI-wise end-to-end deep learning with prevalent network architectures.
**Keywords:** Gigapixel, GPU, Large Patch, Computational Pathology, Segmentation
## 1 Introduction
The deep learning revolution has largely been possible due to GPU acceleration (Dean, 2020). When training on very large images, GPU RAM may not be sufficient for a batch size of 1 (Jain et al., 2020). In this case, model parallelism can be implemented. However, model parallelism requires multiple available GPUs (Yadan et al., 2013) and introduces communication overhead (Keuper and Freundt, 2016).
In computational pathology, gigapixel (1 billion pixels) images are standard (Dimitriou et al., 2019). Instead of training on gigapixel images directly, popular approaches use small patches, for example \(256\times 256\) pixels (Chen et al., 2022). Unfortunately, small patches do not provide global contextual information, thus larger patches are still desired (Chen et al., 2022). Previous work has learned from whole slide images (without patching) by training model modules separately (not end-to-end) (Zhang et al., 2022). Recently, the use of multi-scale patches, including larger patches that provide more spatial context (\(4096\times 4096\) pixels) have been shown to be effective for learning (Chen et al., 2022).
In this work, we perform end-to-end training on gigapixel images (1.024 billion pixels) without distributing the model or data across multiple GPUs. We use a batch size of 1, and a small convolutional neural network that leverages large kernels with high stride. The model is trained to detect tissue from background as a proof of concept. Our training scheme is enabled by a Mac Studio with an M1 Ultra SoC with 128 GB of unified RAM (shared CPU/GPU RAM). A visual depiction can be seen in Figure 1.
## 2 Methods
The data consists of 342 hematoxylin and eosin (H&E) whole slide images acquired under institutional review board (IRB) approval (Vanderbilt IRBs #191738 and #191777). Briefly, 256 images were used for training, 5 for validation, and 81 for testing. The images were converted to grayscale, color inverted, normalized 0 to 1, and cropped to \(16000\times 64000\) pixels. Labels were created by cropping, downsampling by 8, tiling into non-overlapping \(1\times 1\) tiles, thresholding tiles with mean intensity over 230 to 0, conversion to grayscale, thresholding non-zero pixels to 255, median blurring, erosion, dilation, hole filling, upsampling to the original resolution, re-thresholding values above 0 to 255, and binarizing.
Figure 1: Usually, small patches (e.g., \(256\times 256\) pixels) are used in computational pathology. Enabled by the unified memory architecture, we instead use a gigapixel area (\(16000\times 64000\) pixels) from preprocessed images, which is larger than a traditional single tissue field of view.
We used a heavily modified U-Net (Ronneberger et al., 2015) with 7 convolutional layers and 4492 parameters. Skip connections used addition rather than concatenation. Early layers learned a downsampling with a few large kernels and high stride (no pooling). All training and inference were performed on a Mac Studio with M1 Ultra SoC and 128 GB of unified memory. The model was trained with the Adam optimizer, a learning rate of 1e-3, a batch size of 1, and binary cross-entropy loss. The model weights corresponding to the lowest validation loss were selected for evaluation.
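For readers who want a concrete starting point, the following is a minimal TensorFlow 2/Keras sketch of a similarly small encoder-decoder with large, high-stride kernels and additive skip connections. The filter counts, kernel sizes, and strides are illustrative assumptions and will not reproduce the paper's exact 7-layer, 4492-parameter architecture.

```python
# Minimal sketch (assumptions): a tiny encoder-decoder with large, high-stride
# kernels and additive skip connections, in the spirit of the paper's modified
# U-Net. Filter counts, kernel sizes, and strides are illustrative and will not
# reproduce the reported 4492-parameter model.
import tensorflow as tf
from tensorflow.keras import layers

def build_tiny_unet(input_shape=(None, None, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder: large kernels with high stride downsample in place of pooling.
    e1 = layers.Conv2D(4, kernel_size=16, strides=8, padding="same",
                       activation="relu")(inputs)
    e2 = layers.Conv2D(8, kernel_size=8, strides=4, padding="same",
                       activation="relu")(e1)
    b = layers.Conv2D(8, kernel_size=3, padding="same", activation="relu")(e2)
    # Decoder: transposed convolutions; skips use addition, not concatenation.
    d1 = layers.Conv2DTranspose(8, kernel_size=8, strides=4, padding="same",
                                activation="relu")(b)
    d1 = layers.Add()([d1, layers.Conv2D(8, kernel_size=1, padding="same")(e1)])
    d2 = layers.Conv2DTranspose(4, kernel_size=16, strides=8, padding="same",
                                activation="relu")(d1)
    outputs = layers.Conv2D(1, kernel_size=1, activation="sigmoid")(d2)
    return tf.keras.Model(inputs, outputs)

model = build_tiny_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy")
model.summary()
```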
## 3 Results & Discussion
The average time per step (including validation) was 1 minute and 21 seconds. The Apple Activity Monitor application was used to obtain an accurate reading of the unified memory usage for the process. The peak unified memory consumption from the first 8 steps of training was 103.61 GB. The model achieved a Dice score of \(0.989\pm 0.005\) on the testing set. Figure 2 shows a qualitative assessment of model performance.
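As a reference for the reported metric, here is a minimal numpy sketch of the Dice coefficient for binary masks; thresholding the sigmoid output at 0.5 is an assumption, not something stated in the text.

```python
# Minimal sketch: Dice coefficient for binary segmentation masks. Thresholding
# the sigmoid output at 0.5 is an assumption, not something stated in the text.
import numpy as np

def dice_score(pred, truth, threshold=0.5, eps=1e-7):
    p = (np.asarray(pred) >= threshold).astype(np.float64)
    t = (np.asarray(truth) >= threshold).astype(np.float64)
    intersection = np.sum(p * t)
    return (2.0 * intersection + eps) / (np.sum(p) + np.sum(t) + eps)

# Toy example: Dice = 2*2/(2+3) = 0.8.
pred = np.array([[0.9, 0.1], [0.8, 0.2]])
truth = np.array([[1, 0], [1, 1]])
print(dice_score(pred, truth))
```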
In this proof of concept, we have shown that it is possible to perform end-to-end training directly on images of size 1.024 billion pixels, with no patching, using $5000 Apple silicon (Mac Studio with M1 Ultra SoC and 128 GB of unified memory). The peak unified memory usage that was measured for the process was 103.61 GB. During development, this model was unable to run on an NVIDIA RTX A6000 (48 GB, \(\sim\$4500\)). Other high RAM GPUs, such as the NVIDIA A100 (80 GB, \(\sim\$15000\)) are not yet widely available (in preview for select regions on Amazon at $40.96/hour for a group of 8). The shared GPU/CPU RAM architecture on the Apple M1 Ultra SoC opens the field for the design of memory-efficient models for gigapixel images at a reasonable price point.
This research was supported by The Leona M. and Harry B. Helmsley Charitable Trust grant G-1903-03793 and G- 2103-05128, NSF CAREER 1452485, NSF 2040462, NSF GRFP Grant No. DGE-1746891, NCRR Grant UL1 RR024975-01 (now at NCATS Grant 2 UL1 TR000445-06), the NIDDK, VA grants I01BX004366 and I01CX002171, VUMC Digestive Disease Research Center supported by NIH grant R01DK135597, P30DK058404, NIH grant T32GM007347, NVIDIA hardware grant, resources of ACCRE at Vanderbilt University.
Figure 2: Segmentations with a large amount of true positives led to high Dice scores. This proof of concept demonstrates learnability of gigapixel images using Apple silicon.
|
2308.03391
|
Symplectic geometry and space mission design
|
Using methods from symplectic geometry, the second and fifth authors have
provided theoretical groundwork and tools aimed at analyzing periodic orbits,
their stability and their bifurcations in families, for the purpose of space
mission design. The Broucke stability diagram was refined, and the "Floer
numerical invariants" where considered, as numbers which stay invariant before
and after a bifurcation, and therefore serve as tests for the algorithms used.
These tools were later employed for numerical studies. In this article, we will
further illustrate these methods with numerical studies of families of orbits
for the Jupiter-Europa and Saturn-Enceladus systems, with emphasis on
planar-to-spatial bifurcations, from deformation of the families in Hill's
lunar problem studied by the first author. We will also provide an algorithm
for the numerical computation of Conley--Zehnder indices, which are
instrumental in practice for determining which families of orbits connect to
which. As an application, we use our tools to study a family of periodic orbits
that approaches Enceladus at an altitude of 29km, and therefore may be used in
future space missions to visit the water plumes.
|
Cengiz Aydin, Urs Frauenfelder, Otto van Koert, Dayung Koh, Agustin Moreno
|
2023-08-07T08:20:26Z
|
http://arxiv.org/abs/2308.03391v2
|
# Symplectic Methods in Space Mission Design
###### Abstract
Using methods from symplectic geometry, the second and fourth authors have provided theoretical groundwork and tools aimed at analyzing periodic orbits, their stability and their bifurcations in families, for the purpose of space mission design.1 The Broucke stability diagram2 was refined, and the "Floer numerical invariants" were considered, as numbers which stay invariant before and after a bifurcation, and therefore serve as tests for the algorithms used, as well as being easy to implement. These tools were further employed for numerical studies.3 In this article, we will further illustrate these methods with numerical studies of families of orbits for the Jupiter-Europa system, with emphasis on planar-to-spatial bifurcations, from deformation of the families in Hill's lunar problem studied by the first author.4
## Introduction
Symplectic geometry is the branch of mathematics that studies the geometric properties of phase spaces, those spaces that describe the possible states of a classical physical system. It provides the proper framework in order to address problems in classical mechanics, e.g., the gravitational problem of N bodies in three-dimensional space. In the last thirty years, a host of theoretical tools have been developed in the field, with _Floer theory_ as a notable example, whose emphasis is on the theoretical study of periodic orbits. In a more applied direction, periodic orbits are of interest for space mission design, as they model trajectories for spacecraft or satellites. Studying families of orbits aimed at placing a spacecraft around a target moon is relevant for space exploration, where optimizing over all possible trajectories is needed, in order to minimize fuel consumption, avoid collisions, and maximize safety. In this context, the influence on a satellite of a planet with an orbiting moon can be approximated by a three-body problem of _restricted_ type (i.e., the mass of the satellite is considered negligible by comparison). This is a classical problem which has been central to the development of symplectic geometry, and therefore it is not unreasonable to expect the modern available tools to provide insights. The need of organizing all information pertaining to orbits leads to the realm of data analysis, for which computationally cheap methods are important. The direction we will pursue is then encapsulated in the following questions:
**Guiding questions**
* **(Classification)** Can we tell when two orbits are _qualitatively different_?
* **(Catalogue)** Can we resource-efficiently refine data bases of known orbits?
* **(Symplectic geometry)** Can we use methods from symplectic geometry to guide/organize the numerical work?
The first two questions were addressed by the second and fourth authors, where the mathematical groundwork is explained.[1] The second, third, and fourth authors used it in combination with numerical work, addressing the third question.[3] In this article, we continue this line of research. We have the following tools at our disposal.
**Toolkit**
* **Floer numerical invariants:** Numbers which stay invariant before and after a bifurcation, and so can help predict the existence of orbits, as well as being easy to implement. There is one invariant for arbitrary periodic orbits, and another for _symmetric_ periodic orbits.[3]
* **The B-signs:**[1] a \(\pm\) sign associated to each elliptic or hyperbolic Floquet multiplier of an orbit\({}^{a}\), which helps predict bifurcations. This is a generalization of the classical Moser-Krein signature,[5, 6, 7] which originally applies only to elliptic Floquet multipliers, to also include the case of hyperbolic multipliers, whenever the corresponding orbit is _symmetric_. Footnote a: Recall that the Floquet multipliers of a closed orbit are by definition the non-trivial eigenvalues of the monodromy matrix.
* **Global topological methods:** the _GIT-sequence_,[1] a sequence of spaces whose global topology encodes (and sometimes forces) bifurcations, and refines Broucke's stability diagram[2] by adding the \(B\)-signs.
* **Conley-Zehnder index:**[8, 9] a winding number associated to each non-degenerate orbit, extracted from the topology of the symplectic group, which does not change unless a bifurcation occurs. Therefore it can be used to determine which families of orbits connect to which.
## Preliminaries
In this section, we review the toolkit. But first, we set up some language and notation.
### Basic notions
**Mechanics/symplectic geometry.** Given a \(2n\)-dimensional phase-space \(M\) with its symplectic form \(\omega\), a Hamiltonian function \(H:M\rightarrow\mathbb{R}\), with Hamiltonian flow \(\phi_{t}^{H}:M\to M\) which preserves \(\omega\) (i.e. \((\phi_{t}^{H})^{*}\omega=\omega\)), and a periodic orbit \(x\), the _monodromy matrix_ of \(x\) is \(M_{x}=D\phi_{T}^{H}\), where \(T\) is the period of \(x\). Then \(M_{x}\) is a symplectic \(2n\times 2n\)-matrix; we denote by \(Sp(2n)\) the space of such matrices (the _symplectic group_).
Note that if \(H\) is time-independent then \(1\) appears twice as a _trivial_ eigenvalue of \(M_{x}\). We can ignore these if we consider the _reduced_ monodromy matrix \(M_{x}^{red}\in Sp(2n-2)\), obtained by fixing the energy and dropping the direction of the flow.
* A _Floquet multiplier_ of \(x\) is an eigenvalue of \(M_{x}\), which is not one of the trivial eigenvalues (i.e. an eigenvalue of \(M_{x}^{red}\)).
* An orbit is _non-degenerate_ if \(1\) does not appear among its Floquet multipliers.
* An orbit is _stable_ if all its Floquet multipliers are semi-simple and lie in the unit circle.
We will only consider the cases \(n=2\) (planar problems) and \(n=3\) (spatial problems).
**Symmetries.** An _anti-symplectic involution_ is a map \(\rho:M\to M\) satisfying \(\rho^{2}=id\) and \(\rho^{*}\omega=-\omega\). Its _fixed-point locus_ is \(fix(\rho)=\{x:\rho(x)=x\}\). An anti-symplectic involution \(\rho\) is a _symmetry_ of the system if \(H\circ\rho=H.\) A periodic orbit \(x\) is _symmetric_ if \(\rho(x(-t))=x(t)\) for all \(t\). The _symmetric points_ of the symmetric orbit \(x\) are the two intersection points of \(x\) with \(fix(\rho)\). The monodromy matrix of a symmetric orbit at a symmetric point is a _Wonenburger_ matrix:
\[M=M_{A,B,C}=\left(\begin{array}{cc}A&B\\ C&A^{T}\end{array}\right)\in Sp(2n), \tag{1}\]
where
\[B=B^{T},\quad C=C^{T},\quad AB=BA^{T},\quad A^{T}C=CA,\quad A^{2}-BC=id,\]
equations which ensure that \(M\) is symplectic. The eigenvalues of \(M\) are determined by those of the first block \(A\):
* If \(\lambda\) is an eigenvalue of \(M\) then its stability index \(a(\lambda)=\frac{1}{2}(\lambda+1/\lambda)\) is an eigenvalue of \(A\).
* If \(a\) is an eigenvalue of \(A\) then \(\lambda(a)=a+\sqrt{a^{2}-1}\) is an eigenvalue of \(M\).
### B-signs
Assume \(n=2,3\). Let \(x\) be a symmetric orbit with monodromy \(M_{A,B,C}\) at a symmetric point. Assume \(a\) is a real, simple and nontrivial eigenvalue of \(A\) (i.e. \(\lambda(a)\) elliptic or hyperbolic). Let \(v\) be an eigenvector of \(A^{T}\) with eigenvalue \(a\), i.e. \(A^{T}v=a\cdot v\). The _B-sign_ of \(\lambda(a)\) is
\[\epsilon(\lambda(a))=\text{sign}(v^{T}Bv)=\pm.\]
One easily sees that this is independent of \(v\), and the basis chosen to write down the monodromy matrix. Note that if \(n=2\), we have two \(B\)-signs \(\epsilon_{1},\epsilon_{2}\), one for each symmetric point; and if \(n=3\), we have two _pairs_ of \(B\)-signs \((\epsilon_{1}^{1},\epsilon_{2}^{1}),(\epsilon_{1}^{2},\epsilon_{2}^{2})\), one for each symmetric point and each eigenvalue.
The second and fourth authors have recently shown that a planar symmetric orbit is negative hyperbolic iff the \(B\)-signs of its two symmetric points differ [10]. One can define the \(C\)-_signs_ similarly, obtained by replacing the \(B\)-block, with the \(C\)-block of \(M\), and \(A^{T}\), by \(A\).
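The B-sign definition translates directly into a short numerical routine. The sketch below is a minimal numpy implementation assuming the monodromy is already given in the Wonenburger form (1) at a symmetric point; the tolerance, the function name `b_signs`, and the handling of near-trivial eigenvalues are implementation choices made here, not part of the definition.

```python
# Minimal sketch (assumptions): B-signs from the Wonenburger form
# M = [[A, B], [C, A^T]] of the monodromy at a symmetric point, cf. eq. (1).
# The tolerance and the handling of near-trivial eigenvalues are choices made here.
import numpy as np

def b_signs(M, tol=1e-9):
    """One (Floquet multiplier, B-sign) pair per real, simple, non-trivial
    eigenvalue a of the A-block, i.e. per elliptic or hyperbolic multiplier."""
    n = M.shape[0] // 2
    A, B = M[:n, :n], M[:n, n:]
    out = []
    eigvals, eigvecs = np.linalg.eig(A.T)            # A^T v = a v
    for a, v in zip(eigvals, eigvecs.T):
        if abs(a.imag) > tol or abs(a.real**2 - 1.0) < tol:
            continue                                  # keep real a with a != +/-1
        a, v = a.real, v.real
        lam = a + np.lib.scimath.sqrt(a**2 - 1.0)     # Floquet multiplier lambda(a)
        out.append((lam, int(np.sign(v @ B @ v))))
    return out

# Example: a planar positive hyperbolic symmetric point, M = [[2, 1], [3, 2]]
# (a^2 - bc = 1, so M is symplectic); its B-sign is +1.
print(b_signs(np.array([[2.0, 1.0], [3.0, 2.0]])))
```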
**Conley-Zehnder index**
The CZ-index is part of the index theory of the symplectic group. It assigns a winding number to non-degenerate orbits. In practical terms, it helps understand which families of orbits connect to which (CZ-index stays constant if no bifurcation occurs, and jumps under bifurcation as shown in Figure 1). It may be defined as follows.
**Planar case.** Let \(n=2\), \(x\) planar orbit with (reduced) monodromy \(M_{x}^{red}\), and \(x^{k}\) its \(k\)-fold cover.
* **Elliptic case:**\(M_{x}^{red}\) is conjugated to a rotation, \[M_{x}^{red}\sim\left(\begin{array}{cc}\cos\varphi&-\sin\varphi\\ \sin\varphi&\cos\varphi\end{array}\right),\] (2) with Floquet multipliers \(e^{\pm i\varphi}\). Then \[\mu_{CZ}(x^{k})=1+2\cdot\lfloor k\cdot\varphi/2\pi\rfloor\] In particular, it is odd, and jumps by \(\pm 2\) if the eigenvalue \(1\) is crossed in a family. Recall from (1) that for symmetric periodic orbits we have \(M_{x}^{red}=\left(\begin{array}{cc}a&b\\ c&a\end{array}\right)\). Moreover, in view of (2) if \(b<0\) then the rotation is determined by \(\varphi\) and if \(b>0\) then the rotation is determined by \(-\varphi\); this determines the CZ-index jump, see Figure 1.
* **Hyperbolic case:**\(M_{x}^{red}\) is diagonal up to conjugation, \[M_{x}^{red}\sim\left(\begin{array}{cc}\lambda&0\\ 0&1/\lambda\end{array}\right),\] with Floquet multipliers \(\lambda,1/\lambda\). Then
\[\mu_{CZ}(x^{k})=k\cdot n,\]
where \(D\phi_{t}^{H}\) rotates the eigenspaces by angle \(\frac{\pi nt}{T}\), with \(n\) even/odd if \(x\) positive/negative hyperbolic. Notice that for symmetric periodic orbits the signatures of \(b\) and \(c\) are equal.
Figure 1: \(\mu_{CZ}\) jumps by \(\pm 1\) when crossing \(1\), according to direction of bifurcation, as shown. If it stays elliptic, the jump is by \(\pm 2\). This is determined by the \(B\)-sign.
**Spatial case.** Let \(n=3\). Assume that the reflection along the \(xy\)-plane gives rise to a symplectic symmetry of \(H\) (e.g., the 3BP). If \(x\subset\mathbb{R}^{2}\) is a planar orbit, then we have a symplectic splitting into planar and spatial blocks
\[M_{x}^{red}\sim\left(\begin{array}{cc}M_{p}^{red}&0\\ 0&M_{s}\end{array}\right)\in Sp(4),\quad M_{p}^{red},M_{s}\in Sp(2).\]
Then
\[\mu_{CZ}(x)=\mu_{CZ}^{p}(x)+\mu_{CZ}^{s}(x),\]
where each summand corresponds to \(M_{p}^{red}\) and \(M_{s}\) respectively. We have that
* Planar to planar bifurcations correspond to jumps in \(\mu_{CZ}^{p}\).
* Planar to spatial bifurcations correspond to jumps of \(\mu_{CZ}^{s}\).
While the CZ-index is defined in general, all our bifurcations will involve planar-to-spatial bifurcations at planar orbits and the above symmetry, so these definitions will suffice. In what follows, the computations of CZ-indices of families will not rely directly on the definition, but rather on knowing them analytically for special families (e.g., in the Kepler problem), and then determining the jumps at bifurcations arising after deformation, for which the \(B\)-signs are necessary, as explained above.
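As a small illustration of the planar formulas above, the following numpy sketch evaluates \(\mu_{CZ}(x^{k})\) from the reduced \(2\times 2\) monodromy. It is a simplified reading of the definitions, not the computation pipeline used for the tables in the Appendix: the hyperbolic winding number \(n\) must be supplied by the caller, since it is determined by the flow along the orbit rather than by the monodromy matrix alone, and the angle-sign convention follows the remark on \(b<0\) versus \(b>0\) above.

```python
# Minimal sketch (assumptions): planar Conley-Zehnder index of the k-th cover
# from the reduced 2x2 monodromy. The hyperbolic winding number n must be
# supplied by the caller; the angle-sign convention follows the b < 0 / b > 0
# remark above. Degenerate (parabolic) cases are not handled.
import numpy as np

def cz_planar_cover(M_red, k, n_hyp=None, tol=1e-9):
    tr = np.trace(M_red)
    if abs(tr) < 2.0 - tol:                          # elliptic: conjugate to a rotation
        phi = np.arccos(tr / 2.0)                    # rotation angle in (0, pi)
        if M_red[0, 1] > 0:                          # b > 0: rotation by -phi
            phi = 2.0 * np.pi - phi
        return 1 + 2 * int(np.floor(k * phi / (2.0 * np.pi)))
    if n_hyp is None:
        raise ValueError("hyperbolic orbit: supply the winding number n_hyp")
    return k * n_hyp                                 # hyperbolic: mu_CZ(x^k) = k*n

# Example: an elliptic orbit with rotation angle phi = 2.0 rad; its 4th cover
# has total angle 8.0 > 2*pi, so the index jumps from 1 to 3.
phi = 2.0
M = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
print(cz_planar_cover(M, 1), cz_planar_cover(M, 4))  # 1 3
```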
**Floer numerical invariants**
Recall that bifurcations occur when studying families \(t\mapsto x_{t}\) of periodic orbits, as a mechanism by which at some parameter time \(t=t_{0}\) the orbit \(x_{t_{0}}\) becomes degenerate, and several new families may bifurcate out of it; see Figure 2. The Floer numbers are meant to give a simple test to keep track of all new families. We will first need the following technical definition: a periodic orbit \(x\) is _good_ if
\[\mu_{CZ}(x^{k})=\mu_{CZ}(x)\bmod 2\]
for all \(k\geq 1\). Otherwise, it is _bad_. In fact, a planar orbit is bad iff it is an even cover of a negative hyperbolic orbit. And a spatial orbit is bad iff it is an even cover of either an elliptic-negative hyperbolic or a positive-negative hyperbolic orbit. Note that a good planar orbit can be bad if viewed in the spatial problem.
Given a bifurcation at \(x\), the _SFT-Euler characteristic_ (or the _Floer number_) of \(x\) is
\[\chi(x)=\sum_{i}(-1)^{CZ_{i}^{bef}}=\sum_{j}(-1)^{CZ_{j}^{aft}}.\]
The sum on the LHS is over **good** orbits _before_ bifurcation, and RHS is over **good** orbits _after_ bifurcation. As these numbers only involve the parity of the CZ-index, one has simple formulas which bypass the computation of this index, as they only involve the Floquet multipliers:
* **Planar case.**\(\chi(x)=\#\big{\{}\text{ good }\mathcal{H}^{+}\big{\}}-\#\big{\{}\mathcal{E}, \,\mathcal{H}^{-}\big{\}}\).
* **Spatial case.**\(\chi(x)=\#\big{\{}\mathcal{H}^{--}\,,\mathcal{EH}^{-}\,,\mathcal{E}^{2}\,, \text{ good }\mathcal{H}^{++}\,,\mathcal{N}\big{\}}-\#\big{\{}\mathcal{H}^{-+}, \text{ good }\mathcal{EH}^{+}\big{\}}\).
Here, \(\mathcal{E}\) denotes _elliptic_, \(\mathcal{H}^{\pm}\) denotes _positive/negative hyperbolic_, and \(\mathcal{N}\) denotes _nonreal_ quadruples \(\lambda,1/\lambda,\overline{\lambda},1/\overline{\lambda}\). The above simply tells us which type of orbit comes with a plus or a minus sign (the formula should be interpreted as either before or after).
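These counting formulas are trivial to implement. The sketch below computes the planar Floer number from a list of orbit types; whether an orbit is "good" must be supplied by the caller, since it depends on covering data, and the function name and input format are choices made here.

```python
# Minimal sketch: planar Floer number (SFT-Euler characteristic) from a list of
# orbit types before or after a bifurcation. Whether an orbit is "good" must be
# supplied by the caller, since it depends on covering data.
def floer_number_planar(orbits):
    """orbits: iterable of (kind, is_good) with kind in {'E', 'H+', 'H-'}."""
    chi = 0
    for kind, is_good in orbits:
        if not is_good:
            continue                       # bad orbits are omitted from the sums
        chi += 1 if kind == 'H+' else -1   # good H+ counts +1; E and H- count -1
    return chi

# Example (the pitchfork of Result I below): one elliptic orbit before, one
# positive hyperbolic plus two elliptic orbits after; both counts give -1.
print(floer_number_planar([('E', True)]))
print(floer_number_planar([('H+', True), ('E', True), ('E', True)]))
```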
**Invariance.** The fact that the sums agree before and after -_invariance_- follows from deep results from _Floer theory_ in symplectic geometry. We will accept this as a fact, and use it as follows:
The Floer number can be used as a **test**: if the sums do _not_ agree, we know the algorithm missed an orbit.
The invariant above works for arbitrary periodic orbits. There is a similar Floer invariant for _symmetric_ orbits.[3]
### Global topological methods
These methods encode: bifurcations; stability; eigenvalue configurations; obstructions to existence of regular families; and \(B\)-signs, in a visual and resource-efficient way. The main tool is the _GIT sequence_,[1] a refinement of the Broucke stability diagram via implementing the \(B\)-signs. This is a sequence of three branched spaces (or _layers_), together with two maps between them, which collapse certain branches together. Each branch is labelled by the \(B\)-signs. A symmetric orbit gives a point in the top layer, and an arbitrary orbit, in the middle layer. The base layer is \(\mathbb{R}^{n}\) (the space of coefficients of the characteristic polynomial of the first block of \(M_{A,B,C}\)). Then a family of orbits gives a path in these spaces, so that their topology encodes valuable information. The details are as follows.
**GIT sequence: 2D.** Let \(n=2\), \(\lambda\) eigenvalue of \(M^{red}\in Sp(2)\), with stability index \(a(\lambda)=\frac{1}{2}(\lambda+1/\lambda)\). Then \(\lambda=\pm 1\) iff \(a(\lambda)=\pm 1\); \(\lambda\) positive hyperbolic iff \(a(\lambda)>1\); \(\lambda\) negative hyperbolic iff \(a(\lambda)<-1\); and \(\lambda\) elliptic (stable) iff \(-1<a(\lambda)<1\). The Broucke stability diagram is
Figure 2: A sketch of a bifurcation at a degenerate orbit, with the before/after orbits determined by the deformation parameter (the energy), each branch with its own CZ-index. The Floer number is a signed count of orbits which stays invariant.
then simply the real line, split into three components; see Figure 3. If two orbits lie in different components of the diagram, then one should expect bifurcations in any family joining them, as the topology of the diagram implies that any path between them has to cross the \(\pm 1\) eigenvalues.
One can think that the stability index "collapses" the two elliptic branches in the middle layer of Figure 3 together. These two branches are distinguished by the \(B\)-signs, coinciding with the Krein signs.[5, 6] There is an extra top layer for symmetric orbits, where now each hyperbolic branch separates into two, and there is a collapsing map from the top to middle layer. Note that to go from one branch to the other, the topology of the layer implies that the eigenvalue 1 needs to be crossed. This means that one should expect bifurcations in any (symmetric) family joining them, _even if_ they project to the same component of the Broucke diagram. To sum up:
* \(B\)-signs "separate" hyperbolic branches, for symmetric orbits.
* If two points lie in different components of the Broucke diagram, one should expect bifurcation in any path joining them.
* If two points lie in the same component of the Broucke diagram, but if \(B\)-signs differ, one should _also_ expect bifurcation in any path joining them.
**GIT sequence: 3D.** Let \(n=3\). Given \(M^{red}=M_{A,B,C}\in Sp(4)\), its _stability point_ is \(p=(\text{tr}(A),\det(A))\in\mathbb{R}^{2}\). The plane splits into regions corresponding to the eigenvalue configuration of \(M^{red}\), as in Figure 4. The GIT sequence[1] adds two layers to this diagram, as shown in Figure 5. The top layer has two extra branches than the middle one, for each hyperbolic eigenvalue.
**Bifurcations in the Broucke diagram.** An orbit family \(t\mapsto x_{t}\) gives a path \(t\mapsto p_{t}\in\mathbb{R}^{2}\) of stability points. The family bifurcates if \(p_{t}\) crosses \(\Gamma_{1}\). More generally, let \(\Gamma_{\varphi}^{e}\) be the line with slope \(\cos(2\pi\varphi)\in[-1,1]\) tangent to \(\Gamma_{d}=\{y=x^{2}/4\}\), corresponding to matrices with eigenvalue \(e^{2\pi i\varphi}\); and \(\Gamma_{\lambda}^{h}\) the tangent line with slope \(a(\lambda)\in\mathbb{R}\backslash[-1,1]\), corresponding to matrices with eigenvalue \(\lambda\).
Figure 3: The 2D GIT sequence. One obtains more refined information for symmetric orbits.
A \(k\)-fold bifurcation happens when crossing \(\Gamma_{l/k}^{e}\) for some \(l\).
That is, higher order bifurcations are encoded by a pencil of lines tangent to a parabola, as in Figure 6.
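A minimal numpy sketch of this bookkeeping is given below: it follows the stability point \(p=(\operatorname{tr}A,\det A)\) along a family and flags crossings of the tangent lines \(y=sx-s^{2}\), \(s=\cos(2\pi l/k)\), of the pencil. The grid of \((l,k)\) pairs tested, the sign-change test, and the function names are assumptions of this sketch.

```python
# Minimal sketch (assumptions): track the stability point p = (tr A, det A)
# along a family and flag k-fold bifurcations when p crosses a line of the
# pencil tangent to Gamma_d = {y = x^2/4}. The tangent line with slope
# s = cos(2*pi*l/k) is y = s*x - s**2, so the residue det(A) - s*tr(A) + s**2
# changes sign at a crossing. The (l, k) grid and the sign-change test are
# choices made here; (l, k) pairs sharing a slope are reported repeatedly.
import numpy as np

def stability_point(A):
    return np.trace(A), np.linalg.det(A)

def pencil_crossings(points, k_max=5):
    """points: sequence of stability points p_t; returns (step, l, k) triples."""
    crossings = []
    for k in range(1, k_max + 1):
        for l in range(0, k + 1):
            s = np.cos(2.0 * np.pi * l / k)
            res = [y - s * x + s**2 for (x, y) in points]
            for i in range(len(res) - 1):
                if res[i] * res[i + 1] < 0:
                    crossings.append((i, l, k))
    return crossings

# Example: a straight path of stability points crossing Gamma_1 (slope-1 line).
path = [(x, 0.2 * x) for x in np.linspace(0.0, 3.0, 31)]
print(pencil_crossings(path, k_max=2))
```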
**Example: symmetric period doubling bifurcation.** We finish this section with an example where our invariants give new information. Consider a symmetric orbit \(x\) going from elliptic to negative hyperbolic. A priori there could be two bifurcations, one for each symmetric point (B or C in Figure 7). However, invariance of \(\chi(x^{2})\) implies only _one_ can happen (note \(x^{2}\) is _bad_). And where the bifurcation happens is determined by the \(B\)-sign, occurring at the symmetric point in which the \(B\)-sign does _not_ jump; or alternatively, where the \(C\)-sign jumps.
### Circular restricted three-body problem
The Circular Restricted Three-Body Problem (CRTBP) shown in Figure 8 describes the motion of an infinitesimal mass with two primaries under mutual gravitational attraction. A dimensionless rotating coordinate system (\(X^{R}-Y^{R}-Z^{R}\)) is defined at the barycenter of the two primaries with respect to the inertial frame (\(X^{I}-Y^{I}-Z^{I}\)), rotating about \(Z^{I}\) with true anomaly \(\nu\). The \(X\)-axis of the rotating coordinate system is aligned with the vector from the larger primary body (\(m_{1}\)) to the second primary body (\(m_{2}\)). The \(Z\)-axis is perpendicular to the primaries' orbital plane, and the \(Y\)-axis completes the right-handed coordinate system. The position vector \(\mathbf{r}\) points from the barycenter to the spacecraft in the rotating frame. The non-dimensional mass of the second primary is defined as
\[\mu=\frac{m_{2}}{m_{1}+m_{2}}=m_{2},\]
and then the larger body's mass is
\[1-\mu=\frac{m_{1}}{m_{1}+m_{2}}=m_{1}.\]
Figure 4: **The 3D Broucke stability diagram. Here, \(\Gamma_{\pm 1}\) corresponds to eigenvalue \(\pm 1\), \(\Gamma_{d}\) to double eigenvalue, \(\mathcal{E}^{2}\) to doubly elliptic (stable region), and so on.**
Figure 5: The branches (represented as lines) are two-dimensional, and come together at the 1-dimensional “branching locus” (represented as points), where we cross from one region to another of the Broucke diagram.
Figure 6: Bifurcations are encoded by a pencil of lines.
Figure 7: Symmetric period doubling bifurcation. The _fake_ symmetric points, while close to intersection points, do _not_ intersect the fixed-point loci.
Define the unit of time so that the mean motion of the primary orbit is \(1\). Then the equations of motion for the infinitesimal mass are written as
\[\ddot{x}=2\dot{y}+x-(1-\mu)\frac{x+\mu}{r_{1}^{3}}-\mu\frac{x-1+\mu}{r_{2}^{3}}\] \[\ddot{y}=-2\dot{x}+y-(1-\mu)\frac{y}{r_{1}^{3}}-\mu\frac{y}{r_{2}^{3}}\] \[\ddot{z}=-(1-\mu)\frac{z}{r_{1}^{3}}-\mu\frac{z}{r_{2}^{3}}\]
where \(r_{1}^{2}=(x+\mu)^{2}+y^{2}+z^{2}\), \(r_{2}^{2}=(x-1+\mu)^{2}+y^{2}+z^{2}\). No closed form general solution is possible for the model.
The Hamiltonian describing the CRTBP is given by
\[H:(\mathbb{R}^{3}\setminus\{M,P\})\times\mathbb{R}^{3}\to \mathbb{R},\] \[H(q,p)=\frac{1}{2}\|p\|^{2}-\frac{\mu}{\|q-M\|}-\frac{1-\mu}{\| q-P\|}+p_{1}q_{2}-p_{2}q_{1},\]
where \(q=(q_{1},q_{2},q_{3})\) is the position of a satellite, \(p=(p_{1},p_{2},p_{3})\) is its momentum, the mass of the secondary body \(m_{2}\) is fixed at \(M=(1-\mu,0,0)\), and the mass of the primary body \(m_{1}\) is fixed at \(P=(-\mu,0,0)\). The Jacobi constant \(c\) is then defined by the convention \(\Gamma:=H\equiv-c/2\). The Hamiltonian \(H\) is invariant under the anti-symplectic involutions
\[\rho:(q_{1},q_{2},q_{3},p_{1},p_{2},p_{3}) \mapsto(q_{1},-q_{2},-q_{3},-p_{1},p_{2},p_{3}),\] \[\widetilde{\rho}:(q_{1},q_{2},q_{3},p_{1},p_{2},p_{3}) \mapsto(q_{1},-q_{2},q_{3},-p_{1},p_{2},-p_{3}),\]
with corresponding fixed-point loci given by
\[L=\text{Fix}(\rho)=\{q_{2}=q_{3}=p_{1}=0\},\] \[\widetilde{L}=\text{Fix}(\widetilde{\rho})=\{q_{2}=p_{1}=p_{3}=0\}.\]
These correspond respectively to \(\pi\)-rotation around the \(x\)-axis, and reflection along the \(xz\)-plane. Their composition \(\sigma=\rho\circ\tilde{\rho}\) is a _symplectic_ symmetry corresponding to reflection along the \(xy\)-plane.
For instance, the _Jupiter-Europa system_ then corresponds to a CRTBP with mass ratio \(\mu=2.5266448850435\times 10^{-5}\).
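For concreteness, a minimal scipy sketch integrating the equations of motion above is given below, together with the Jacobi constant in the convention \(\Gamma:=H\equiv-c/2\) stated above, i.e. \(c=-2H\). The initial state and integration span are placeholders, not mission data.

```python
# Minimal sketch (assumptions): numerical integration of the CRTBP equations of
# motion above with scipy; the initial state and time span are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

MU = 2.5266448850435e-05   # Jupiter-Europa mass ratio quoted above

def crtbp_rhs(t, s, mu=MU):
    x, y, z, vx, vy, vz = s
    r1 = np.sqrt((x + mu)**2 + y**2 + z**2)        # distance to the primary
    r2 = np.sqrt((x - 1 + mu)**2 + y**2 + z**2)    # distance to the secondary
    ax = 2*vy + x - (1 - mu)*(x + mu)/r1**3 - mu*(x - 1 + mu)/r2**3
    ay = -2*vx + y - (1 - mu)*y/r1**3 - mu*y/r2**3
    az = -(1 - mu)*z/r1**3 - mu*z/r2**3
    return [vx, vy, vz, ax, ay, az]

def jacobi_constant(s, mu=MU):
    # c = -2H in the convention Gamma := H = -c/2 stated above.
    x, y, z, vx, vy, vz = s
    r1 = np.sqrt((x + mu)**2 + y**2 + z**2)
    r2 = np.sqrt((x - 1 + mu)**2 + y**2 + z**2)
    return x**2 + y**2 + 2*(1 - mu)/r1 + 2*mu/r2 - (vx**2 + vy**2 + vz**2)

s0 = [0.995, 0.0, 0.0, 0.0, 0.05, 0.0]             # placeholder initial state
sol = solve_ivp(crtbp_rhs, (0.0, 2.0), s0, rtol=1e-10, atol=1e-12)
print(jacobi_constant(s0), jacobi_constant(sol.y[:, -1]))  # conserved along the flow
```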
Figure 8: A schematic CRTBP configuration showing \(x_{1}=m_{1}\), \(x_{2}=m_{2}\), and two of the libration points in the non-dimensional rotating coordinate system \(X^{R}-Y^{R}\); \(Z^{R}\) (and \(Z^{I}\)) point in the out-of-plane direction.
### Hill's lunar problem
Hill's lunar problem is a limit case of the restricted three-body problem where the massless body is assumed very close to the small primary. This problem can therefore be viewed as an approximation to the Jupiter-Europa system, when one lets the mass of Europa go to zero. The Hamiltonian describing the system is
\[E:(\mathbb{R}^{3}\backslash\{0\})\times\mathbb{R}^{3}\rightarrow\mathbb{R},\]
\[E(q,p)=\frac{1}{2}\|p\|^{2}-\frac{1}{\|q\|}+p_{1}q_{2}-p_{2}q_{1}-q_{1}^{2}+ \frac{1}{2}q_{2}^{2}+\frac{1}{2}q_{3}^{2}.\]
The linear symmetries of this problem have been completely characterized.[11] While the planar restricted three-body problem is invariant under reflection at the \(x\)-axis, the planar Hill lunar problem is additionally invariant under reflection at the \(y\)-axis. For the spatial lunar problem, there are more symmetries: \(\rho,\widetilde{\rho}\) (which extend the reflection at the \(x\)-axis), and two additional symmetries \(\kappa,\widetilde{\kappa}\) (\(\pi\)-rotation along the \(y\)-axis, and reflection along the \(yz\)-plane; both extend the reflection along the \(y\)-axis). Their composition is also \(\sigma=\kappa\circ\widetilde{\kappa}\).
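As a quick sanity check of these statements, the following sketch evaluates the Hill Hamiltonian \(E(q,p)\) above and verifies numerically that \(\rho\) and \(\widetilde{\rho}\) leave it invariant; the sample point is arbitrary.

```python
# Minimal sketch: evaluate the Hill Hamiltonian E(q, p) above and check
# numerically that rho and rho-tilde are symmetries, E(rho(q, p)) = E(q, p).
# The sample point below is arbitrary.
import numpy as np

def hill_energy(q, p):
    q1, q2, q3 = q
    p1, p2, p3 = p
    return (0.5 * np.dot(p, p) - 1.0 / np.linalg.norm(q)
            + p1 * q2 - p2 * q1 - q1**2 + 0.5 * q2**2 + 0.5 * q3**2)

def rho(q, p):        # (q1, -q2, -q3, -p1, p2, p3)
    return np.array([q[0], -q[1], -q[2]]), np.array([-p[0], p[1], p[2]])

def rho_tilde(q, p):  # (q1, -q2, q3, -p1, p2, -p3)
    return np.array([q[0], -q[1], q[2]]), np.array([-p[0], p[1], -p[2]])

q = np.array([0.3, 0.1, 0.05])
p = np.array([0.02, 0.4, 0.01])
print(hill_energy(q, p), hill_energy(*rho(q, p)), hill_energy(*rho_tilde(q, p)))
```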
### Numerical Work
**Result I. Planar direct / prograde orbits**
Hénon[12] describes a family \(g\) of planar direct periodic orbits which are invariant with respect to reflections at both the \(x\)- and the \(y\)-axis. This family undergoes a non-generic pitchfork bifurcation, going from elliptic to positive hyperbolic, where two new families of elliptic orbits, called \(g^{\prime}\), appear; see Figure 9. These new families are still invariant under reflection at the \(x\)-axis, but not under reflection at the \(y\)-axis. Reflection at the \(y\)-axis maps one branch of the \(g^{\prime}\)-family to the other branch. Figure 9 shows the bifurcation graph which is constructed as follows: Each vertex denotes a degenerate orbit at which bifurcation happens and each edge represents a family of orbits with varying energy, labeled by the corresponding CZ-index. From this data, it is easy to determine the associated Floer number. For instance in Figure 9 on the left, the Floer number is \((-1)^{3}=-1\) before bifurcation, and \((-1)^{2}+2(-1)^{3}=-1\) after bifurcation; they coincide, as they should.
By deforming the mass parameter, we may go from Hill's lunar problem to the Jupiter-Europa system; see Figure 9. The pitchfork bifurcation deforms to a _generic_ situation, where one of the \(g^{\prime}\) branches glues to the before-bifurcation part of the \(g\) branch, the result of which we call the _g-LPO1_ branch, and where the other \(g^{\prime}\) branch glues to the after-bifurcation part of the \(g\) branch, which we call the _DPO-LPO2_ branch (undergoing birth-death bifurcation). The \(DPO\)-orbits are planar positive hyperbolic and the \(LPO2\)-orbits are planar elliptic. As the symmetry with respect to the \(y\)-axis is lost, the new orbits will be _approximately_ symmetric with respect to the \(y\)-axis, but not exactly symmetric; similarly, the \(y\)-symmetric relation between the \(g^{\prime}\) branches persists only approximately for the corresponding deformed orbits. These families are plotted in Figure 10, where this behavior is manifest. The data for each new branch is given in Tables 1, 2 and 3 in the Appendix. Via this bifurcation analysis, one may predict the existence of the DPO-LPO2 branch, which a priori is not straightforward to find. While these families are already known and appear, e.g., on page 12 of Reference [13], this suggests a general mechanism which we will exploit, cf. Figures 13 and 14. Note that Reference [13] provides an online database for _planar_ and _\(x\)-axis symmetric_ periodic orbits, and we match their notation for orbits (DPO, LPO, etc.). The novelty of this article is to focus on _spatial_ bifurcations of these planar orbits, employing our novel methods.
Figure 9: Bifurcation graphs for the planar direct/prograde orbits with CZ-index.
**Result III. Bifurcation graphs between third covers of prograde and fifth cover of retrograde orbits**
A bifurcation graph relating third covers of \(g,g^{\prime}\), and fifth covers of planar retrograde orbits, known as family \(f\), was obtained by the first author;[4] see Figure 13. The third covers of LPO2 and fifth covers of DRO were found using Cell-Mapping.[14] Taking Fig. 13 as a starting point, we compare it to the Jupiter-Europa system. The result is plotted in Figure 14.
Let us focus on the two vertices on the right of Figure 13 which are not of birth-death type. After deformation, the (red) family starting at \(g^{\prime 3}\) on the right of CZ-index 15 glues to the (blue) family of the same index ending in \(f^{5}\), resolving the vertex at which they meet; note that, similarly as in Result I, \(f\) is replaced with DRO, and \(g^{\prime}\), with LPO2. The two other families meeting at the same vertex coming from \(g^{\prime 3}\) and \(g^{3}\) now glue to a family undergoing birth-death, where now \(g^{\prime}\) is replaced by \(g\)-\(LPO1\), and \(g\), with \(DPO\). A similar phenomenon happens at the other vertex, where the (pink) family starting at \(g^{\prime 3}\) with CZ-index 14 on the right glues to the (green) family of the same index, and the other two families now undergo birth-death.
Another notable feature is the (red) family between \(LPO2^{3}\) and \(DRO^{5}\) of CZ-index 15, a _spatial_ family connecting two _planar_ orbits, one retrograde (\(DRO^{5}\)), and the other, prograde (\(LPO2^{3}\)).
Figure 11: Jupiter-Europa: A planar-to-spatial bifurcation of a simple closed planar DPO orbit, from the side, from above and with its symmetric family by using the reflection at the \(xy\)-plane.
Figure 12: Left: The bifurcation graph between simple closed \(g\)-orbit and the new families of spatial orbits generated by the spatial index jump in Hill’s system. Right: In the Jupiter–Europa system. The horizontal symmetry corresponds to the reflection at the \(xy\)-plane.
## Conclusion
We presented a toolkit extracted from the general methods of symplectic geometry/topology, aimed at studying periodic orbits of general Hamiltonian systems, together with their bifurcations in families, eigenvalue configurations, and stability, in a visual, and resource-efficient way. In the presence of symmetry, the information attached to orbits, and the methods involved, may be significantly refined. We illustrated these methods on numerical examples, for a system of current interest which is modelled by a restricted three-body problem (Jupiter-Europa). We have studied families of planar to spatial bifurcations for this system, via bifurcation analysis and deformation from the lunar problem. The numerical findings are in agreement with the theoretical predictions, and the bifurcation graphs are completely novel.
Figure 13: Bifurcation graph for Hill’s lunar problem by the first author,[4] between the 3rd cover of \(g\), the 3rd cover of \(g^{\prime}\) and the 5th cover of \(f\), based on work of Kalantonis.[15] A cross means collision, and b-d means birth-death bifurcation. The horizontal symmetry in the diagram, relating full and dashed edges, means that the corresponding families are related by a symmetry. For instance, the non-dashed red 15 on the right is related by the dashed red 15 on the right by reflection along the xy-plane. The other red 15 families on the left are obtained by applying the extra two spatial symmetries \(\kappa,\widetilde{\kappa}\). Similarly for the pink 14 families. The blue and green families are doubly symmetric; one of the symmetries breaks at bifurcation, where the red and pink families appear.
## Acknowledgment
A. Moreno is supported by the NSF under Grant No. DMS-1926686, by the Sonderforschungsbereich TRR 191 Symplectic Structures in Geometry, Algebra and Dynamics, funded by the DFG (Projektnummer 281071066 - TRR 191), and by the DFG under Germany's Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster).
Figure 14: **Bifurcation graph for the Jupiter–Europa system, between the 3rd cover of \(g\)-\(LPO1\), the third cover of \(DPO\), the third cover of \(LPO2\), and the 5th cover of \(DRO\). The data for the pink family is collected in Table 5, for the red one in Table 6, for the blue one in Table 7 and for the green in Table 8. Some orbits of the blue and green family are plotted in Figure 15. The horizontal symmetry is reflection along the \(xy\)-plane. Note that non-dashed red 15 and non-dashed blue 15 are no longer related by a symmetry.**
Figure 15: Jupiter-Europa: Two families of spatial orbits branching out from the \(g\)-\(LPO1^{3}\) orbit; above: these orbits are symmetric w.r.t. the \(x\)-axis and their data is collected in Table 7. This is the (blue) family of CZ-index 15 in Figure 14; below: these orbits are symmetric w.r.t. the \(xz\)-plane and their data is collected in Table 8. This is the (green) family of CZ-index 14 in Figure 14. Each family has a symmetric family by using the reflection at the ecliptic.
## Appendix: Tables
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \(\Gamma\) & \(x(0)\) & \(\dot{y}(0)\) & \(T\) & (\(C/B\))-sign \& Floquet multipliers & \(\mu_{CZ}^{p}\) / \(\mu_{CZ}^{s}\) / \(\mu_{CZ}\) \\ \hline
3.00429783 & 0.99502455 & 0.07670173 & 0.40998 & (\(-/+\)) \(\varphi_{p}=5.862\), (\(-/+\)) \(\varphi_{s}=5.894\) & 1 / 1 / 2 \\
3.00156431 & 0.99037034 & 0.06224607 & 1.02778 & (\(-/+\)) \(\varphi_{p}=5.245\), (\(-/+\)) \(\varphi_{s}=5.406\) & 1 / 1 / 2 \\
3.00101739 & 0.98833167 & 0.06026263 & 1.32856 & (\(-/+\)) \(\varphi_{p}=4.973\), (\(-/+\)) \(\varphi_{s}=5.216\) & 1 / 1 / 2 \\
3.00060753 & 0.98623049 & 0.05949811 & 1.64998 & (\(-/+\)) \(\varphi_{p}=4.712\), (\(-/+\)) \(\varphi_{s}=5.050\) & 1 / 1 / 2 \\
3.00054882 & 0.98587513 & 0.05946574 & 1.7052 & (\(-/+\)) \(\varphi_{p}=4.670\), (\(-/+\)) \(\varphi_{s}=5.026\) & 1 / 1 / 2 \\
2.99962388 & 0.97762100 & 0.06369886 & 3 & (\(-/+\)) \(\varphi_{p}=4.001\), (\(-/+\)) \(\varphi_{s}=4.787\) & 1 / 1 / 2 \\
2.99935885 & 0.97409965 & 0.06735824 & 3.5 & (\(-/+\)) \(\varphi_{p}=3.987\), (\(-/+\)) \(\varphi_{s}=4.863\) & 1 / 1 / 2 \\
2.99908502 & 0.97038828 & 0.07212000 & 4 & (\(-/+\)) \(\varphi_{p}=3.995\), (\(-/+\)) \(\varphi_{s}=5.001\) & 1 / 1 / 2 \\
2.99868251 & 0.96488658 & 0.08024713 & 4.6003 & (\(-/+\)) \(\varphi_{p}=4.185\), (\(-/+\)) \(\varphi_{s}=5.254\) & 1 / 1 / 2 \\ \end{tabular}
\end{table}
Table 4: Data for \(DRO\) branch.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \(\Gamma\) & \(x(0)\) & \(\dot{y}(0)\) & \(T\) & (\(C/B\))-sign \& Floquet multipliers & \(\mu_{CZ}^{p}\) / \(\mu_{CZ}^{s}\) / \(\mu_{CZ}\) \\ \hline
3.00374885 & 1.00955895 & 0.04118756 & 1.25694 & (\(+/-\)) \(\varphi_{p}=0.190\), (\(+/-\)) \(\varphi_{s}=1.397\) & 3 / 3 / 6 \\
3.00371150 & 1.01150895 & 0.03105844 & 1.34143 & (\(+/-\)) \(\varphi_{p}=0.538\), (\(+/-\)) \(\varphi_{s}=1.555\) & 3 / 3 / 6 \\
3.00369790 & 1.01200895 & 0.02882949 & 1.37591 & (\(+/-\)) \(\varphi_{p}=0.625\), (\(+/-\)) \(\varphi_{s}=1.620\) & 3 / 3 / 6 \\
3.00363027 & 1.01440084 & 0.01977091 & 1.62295 & (\(+/-\)) \(\varphi_{p}=1.101\), (\(+/-\)) \(\varphi_{s}=2.094\) & 3 / 3 / 6 \\
3.00357414 & 1.016776 & 0.0130572 & 2.1215 & (\(+/-\)) \(\varphi_{p}=1.878\), (\(+/-\)) \(\varphi_{s}=3.131\) & 3 / 3 / 6 \\
3.00357388 & 1.016787 & 0.013014 & 2.12519 & (\(+/-\)) \(\varphi_{p}=1.885\), (\(-/-\)) \(\lambda_{s}=-1.02\) & 3 / 3 / 6 \\
3.00356878 & 1.01701395 & 0.01253366 & 2.20708 & (\(+/-\)) \(\varphi_{p}=2.120\), (\(-/+\)) \(\varphi_{s}=3.225\) & 3 / 3 / 6 \\
3.00353952 & 1.01771395 & 0.01187914 & 2.65553 & (\(+/+\)) \(\lambda_{p}=-9.64\), (\(-/+\)) \(\varphi_{s}=4.144\) & 3 / 3 / 6 \\
3.00349789 & 1.01765259 & 0.01364657 & 2.95454 & (\(+/+\)) \(\lambda_{p}=-36.3\), (\(-/+\)) \(\varphi_{s}=4.743\) & 3 / 3 / 6 \\ \end{tabular}
\end{table}
Table 3: Data for \(LPO2\) branch.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \(\Gamma\) & \(x(0)\) & \(\dot{y}(0)\) & \(T\) & (\(C/B\))-sign \& Floquet multipliers & \(\mu_{CZ}^{p}\) / \(\mu_{CZ}^{s}\) / \(\mu_{CZ}\) \\ \hline
3.00363027 & 1.01440084 & 0 & 0.01974709 & 4.86 & (\(-/+\)) \(\varphi_{p}=3.305\), \(\varphi_{s}^{3}=0\) & 14 \(\to\) 16 \\
3.00362881 & 1.01439256 & 0.00046114 & 0.01976648 & 4.87 & (\(-/+\)) \(\varphi_{1}=3.300\), (\(-/+\)) \(\varphi_{2}=6.281\) & 14 \\
3.00359018 & 1.01415816 & 0.00242577 & 0.02031752 & 4.90 & (\(-/+\)) \(\varphi_{1}=3.208\), (\(-/+\)) \(\varphi_{2}=6.280\) & 14 \\
3.00357914 & 1.01409052 & 0.00273476 & 0.02047842 & 4.91 & (\(+/+\)) \(\lambda=-1.05\), (\(-/+\)) \(\varphi=6.278\) & 14 \\
3.00354287 & 1.01386628 & 0.00355363 & 0.02101794 & 4.94 & (\(+/+\)) \(\lambda=-1.18\), (\(-/+\)) \(\varphi=6.255\) & 14 \\
3.00325974 & 1.01198527 & 0.00688259 & 0.02594794 & 5.2 & (\(+/+\)) \(\lambda=-1.62\), (\(-/+\)) \(\varphi=5.963\) & 14 \\
3.00298774 & 1.00985792 & 0.00824897 & 0.03258269 & 5.5 & (\(+/-\)) \(\varphi_{1}=2.566\), (\(-/+\)) \(\varphi_{2}=5.657\) & 14 \\
3.00270453 & 1.00652898 & 0.00795347 & 0.04651756 & 5.85 & (\(-/+\)) \(\varphi_{1}=1.947\), (\(-/+\)) \(\varphi_{2}=5.978\) & 14 \\
3.00264234 & 1.00560524 & 0.00778449 & 0.05051319 & 5.88
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \(\Gamma\) & \(x(0)\) & \(\dot{y}(0)\) & \(\dot{z}(0)\) & \(T\) & (\(C/B\))-sign \& Floquet multipliers & \(\mu_{CZ}\) \\ \hline
3.00363027 & 1.01440084 & 0.01974709 & 0 & 4.86 & \((-/+)\)\(\varphi_{p}^{3}=3.305\), \(\varphi_{s}^{3}=0\) & 14 \(\rightarrow\) 16 \\
3.00351924 & 1.01408954 & 0.01928185 & 0.01342237 & 4.96 & \((-/-)\)\(\lambda_{1}=-1.24\), \((-/-)\)\(\lambda_{2}=1.09\) & 15 \\
3.00321170 & 1.01314307 & 0.01799332 & 0.02676988 & 5.25 & \((-/-)\)\(\lambda_{1}=-1.62\), \((-/-)\)\(\lambda_{2}=1.45\) & 15 \\
3.00302231 & 1.01246670 & 0.01723541 & 0.03300118 & 5.46 & \((+/-)\)\(\varphi=2.749\), \((-/-)\)\(\lambda=1.78\) & 15 \\
3.00273486 & 1.01099334 & 0.01657953 & 0.04285920 & 5.82 & \((+/-)\)\(\varphi=1.618\), \((-/-)\)\(\lambda=1.55\) & 15 \\
3.00270684 & 1.01077857 & 0.01644568 & 0.04412031 & 5.85 & \((+/-)\)\(\varphi=1.801\), \((-/-)\)\(\lambda=1.40\) & 15 \\
3.0026563 & 1.01548137 & 0.01548137 & 0.04575934 & 5.88 & \((+/-)\)\(\varphi=2.506\), \((-/-)\)\(\lambda=1.35\) & 15 \\
3.00243536 & 1.01068879 & 0.00516070 & 0.04995370 & 6 & \((-/+)\)\(\varphi=5.761\), \((-/-)\)\(\lambda=8.52\) & 15 \\
3.00204821 & 1.0119864 & \(-\)0.01174966 & 0.05089250 & 6.24 & \((-/+)\)\(\varphi=5.965\), \((-/-)\)\(\lambda=31.1\) & 15 \\
3.00172312 & 1.01167768 & \(-\)0.02457421 & 0.04793350 & 6.5 & \((-/+)\)\(\varphi=6.016\), \((-/-)\)\(\lambda=30.0\) & 15 \\
3.00147493 & 1.01207539 & \(-\)0.03349482 & 0.04374826 & 6.75 & \((-/+)\)\(\varphi=6.070\), \((-/-)\)\(\lambda=22.3\) & 15 \\
3.00127220 & 1.01242785 & \(-\)0.04020652 & 0.03910792 & 7 & \((-/+)\)\(\varphi=6.124\), \((-/-)\)\(\lambda=15.2\) & 15 \\
3.00096072 & 1.01304236 & \(-\)0.04944170 & 0.02947258 & 7.5 & \((-/+)\)\(\varphi=6.201\), \((-/-)\)\(\lambda=6.16\) & 15 \\
3.00073221 & 1.01358551 & \(-\)0.05528744 & 0.01932635 & 8 & \((-/+)\)\(\varphi=5.911\), \((-/-)\)\(\lambda=1.14\) & 15 \\
3.00055690 & 1.01409401 & \(-\)0.05914769 & 0.00388381 & 8.5 & \((-/+)\)\(\varphi=4.550\), \((-/-)\)\(\lambda=1.00\) & 15 \\
3.00054882 & 1.01412064 & \(-\)0.05930512 & 0 & 8.52 & \((-/+)\)\(\varphi_{p}^{5}=4.500\), \(\varphi_{s}^{5}=0\) & 16 \(\rightarrow\) 14 \\ \end{tabular}
\end{table}
Table 6: **Data for one branch bifurcation from 3rd cover of the \(LPO2\)-orbit. This family ends at the 5th cover of the \(DRO\)-orbit. Since the CZ-index is constant, this family forms a bridge between the two underlying planar periodic orbits. Moreover, these spatial orbits are simply-symmetric w.r.t. the \(x\)-axis. Its symmetric family is obtained by using the reflection at the ecliptic.**
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \(\Gamma\) & \(x(0)\) & \(\dot{y}(0)\) & \(\dot{z}(0)\) & \(T\) & (\(C/B\))-sign \& Floquet multipliers & \(\mu_{CZ}\) \\ \hline
3.00365597 & 0.985579006 & \(-\)0.01951876 & 0 & 4.79 & \((-/+)\)\(\varphi_{p}^{3}=3.205\), \(\varphi_{s}^{3}=0\) & 14 \(\rightarrow\) 16 \\
3.00365338 & 0.98556744 & \(-\)0.01945341 & 0.00182488 & 4.80 & \((-/+)\)\(\varphi=3.214\), \((-/-)\)\(\lambda=1.001\) & 15 \\
3.00363389 & 0.98561874 & \(-\)0.01938011 & 0.00580611 & 4.82 & \((-/-)\)\(\lambda_{1}=-1.01\), \((-/-)\)\(\lambda_{2}=1.005\) & 15 \\
3.00329911 & 0.98657072 & \(-\)0.01815373 & 0.02416862 & 5.12 & \((-/-)\)\(\lambda_{1}=-1.54\), \((-/-)\)\(\lambda_{2}=1.378\) & 15 \\
3.00314093 & 0.98708523 & \(-\)0.01763006 & 0.02950246 & 5.30 & \((-/-)\)\(\lambda_{1}=-1.02\), \((-/-)\)\(\lambda_{2}=1.685\) & 15 \\
3.0030399 & 0.98759083 & \(-\)0.01727225 & 0.03377964 & 5.46 & \((+/-)\)\(\varphi=2.290\), \((-/-)\)\(\lambda=1.977\) & 15 \\
3.00281046 & 0.98856773 & \(-\)0.01745284 & 0.04009001 & 5.73 & \((+/-)\)\(\varphi=0.976\), \((-/-)\)\(\lambda=2.451\) & 15 \\
3.00275889 & 0.98918471 & \(-\)0.01885469 & 0.04265726 & 5.82 & \((+/-)\)\(\varphi=0.134\), \((-/-)\)\(\lambda=4.422\) & 15 \\
3.00275823 & 0.98925887 & \(-\)0.01917383 & 0.04284561 & 5.82 & \((-/-)\)\(\lambda_{1}=1.041\), \((-/-)\)\(\lambda_{2}=5.013\) & 14 \\
3.00276196 & 0.98939330 & \(-\)0.01992825 & 0.04305515 & 5.83 & \((-/-)\)\(\lambda_{1}=1.148\), \((-/-)\)\(\lambda_{2}=6.636\) & 14 \\ \end{tabular}
\end{table}
Table 7: **Data for one branch bifurcation from 3rd cover of the \(g\)-\(LPO1\)-orbit. These spatial orbits are simply-symmetric w.r.t. the \(x\)-axis and they are connected to one branch bifurcation from the 3rd cover of the \(DRO\)-orbit via birth-death. Its symmetric family is obtained by using the reflection at the ecliptic.**
\begin{table}
\begin{tabular}{r|c|c|c|c|c|c} \hline \(\Gamma\) & \(x(0)\) & \(z(0)\) & \(\dot{y}(0)\) & \(T\) & \((C/B)\)-sign \& Floquet multipliers & \(\mu_{CZ}\) \\ \hline
3.00365597 & 0.985579006 & 0 & \(-\)0.01951876 & 4.79 & \((-/+)\)\(\varphi_{p}^{3}=3.205\), \(\varphi_{s}^{3}=0\) & 14 \(\rightarrow\) 16 \\
3.00365461 & 0.98556706 & \(-\)0.00036219 & \(-\)0.01947401 & 4.80 & \((-/+)\)\(\varphi_{1}=3.215\), \((-/+)\)\(\varphi_{2}=6.281\) & 14 \\
3.00363389 & 0.98568597 & \(-\)0.00175392 & \(-\)0.01975974 & 4.82 & \((+/+)\)\(\lambda=-1.01\), \((-/+)\)\(\varphi=6.277\) & 14 \\
3.00360033 & 0.98588066 & \(-\)0.00278699 & \(-\)0.02023321 & 4.84 & \((+/+)\)\(\lambda=-1.12\), \((-/+)\)\(\varphi=6.263\) & 14 \\
3.00331461 & 0.98766211 & \(-\)0.00655464 & \(-\)0.02492170 & 5.11 & \((+/+)\)\(\lambda=-1.52\), \((-/+)\)\(\varphi=5.978\) & 14 \\
3.00314742 & 0.98883839 & \(-\)0.00763969 & \(-\)0.02842198 & 5.29 & \((+/-)\)\(\varphi_{1}=3.031\), \((-/+)\)\(\varphi_{2}=5.756\) & 14 \\
3.00289637 & 0.99094696 & \(-\)0.00836889 & \(-\)0.03579903 & 5.61 & \((+/-)\)\(\varphi_{1}=1.373\), \((-/+)\)\(\varphi_{2}=5.309\) & 14 \\
3.00285045 & 0.99142732 & \(-\)0.00835072 & \(-\)0.03776509 & 5.67 & \(0.376\pm 0.570i\), \(0.806\pm 1.221i\) & 14 \\
3.00277633 & 0.99243798 & \(-\)0.00805049 & \(-\)0.04244268 & 5.78 & \(0.491\pm 0.121i\), \(1.917\pm 0.472i\) & 14 \\
3.00277358 & 0.99249304 & \(-\)0.00802039 & \(-\)0.04284315 & 5.79 & \((+/+)\)\(\lambda_{1}=1.987\), \((-/-)\)\(\lambda_{2}=2.016\) & 14 \\
3.00276770 & 0.993012244 & \(-\)0.00750257 & \(-\)0.04644237 & 5.82 & \(\lambda_{1}=1.000\), \((-/-)\)\(\lambda_{2}=7.181\) & b-d \\ \hline \end{tabular}
\end{table}
Table 8: **Data for one branch bifurcation from 3rd cover of the \(g\)-\(LPO1\)-orbit These spatial orbits are simply-symmetric w.r.t. the \(xz\)-plane and they are connected to one branch bifurcation from the 3rd cover of the \(DPO\)-orbit via birth-death. Its symmetric family is obtained by using the reflection at the ecliptic.**
|
2303.10921
|
Strong lensing and shadow of Ayon-Beato-Garcia (ABG) nonsingular black
hole
|
We study nonsingular black holes viewed from the point of view of
Ayon-Beato-Garcia (ABG) nonlinear electrodynamics (NLED) and present a complete
study of their corresponding strong gravitational lensing. The NLED modifies
the the photon's geodesic, and our calculations show that such effect increases
the corresponding photon sphere radius and image separation, but decreases the
magnification. We also show that the ABG's shadow radius is not compatible with
bound estimates of Sgr A* from Keck and VLTI (Very Large Telescope
Interferometer). Thus, the possibility of Sgr A* being a nonsingular ABG black
hole is ruled out.
|
H. S. Ramadhan, M. F. Ishlah, F. P. Pratama, I. Alfredo
|
2023-03-20T07:40:35Z
|
http://arxiv.org/abs/2303.10921v3
|
# Strong lensing and shadow of Ayon-Beato-Garcia (ABG) nonsingular black hole
###### Abstract
We study nonsingular black holes viewed from the point of view of Ayon-Beato-Garcia (ABG) nonlinear electrodynamics (NLED) and present a complete study of their corresponding strong gravitational lensing. The NLED modifies the photon's geodesic, and our calculations show that this effect increases the corresponding photon sphere radius and image separation, but decreases the magnification. We also show that the ABG's shadow radius is not compatible with bound estimates of Sgr A* from Keck and VLTI (Very Large Telescope Interferometer). Thus, the possibility of Sgr A* being a nonsingular ABG black hole is ruled out.
## I Introduction
Black hole (BH) is one of the most straightforward yet profound predictions of General Relativity (GR). Its extreme gravity distorts the surrounding spacetime and bends light, creating (among many things) the _gravitational lensing_ phenomenon. The recent observation by the Event Horizon Telescope (EHT), which successfully captured visual images of the supermassive BHs M87* [1] and Sgr A* [2], has established a triumph for gravitational lensing as a means to empirically prove black holes' existence. By "image" here we mean the corresponding _shadow_[3] surrounded by accreting material that emits and lenses light from the nearby background source.
Theoretically, the study of gravitational lensing is as old as GR itself (see, for example, [4; 5] and the references therein), but it was Darwin who first applied it to the Schwarzschild BH [6]. His exact calculation of the deflection angle shows that at small impact parameters there exists a critical value (close to the corresponding photon sphere) where the deflection angle suffers from a logarithmic divergence [7], beyond which photons fall into the horizon. His results were later rediscovered and developed by other authors, for example in [8; 9; 10; 11]. In the last two decades the study of gravitational lensing in the strong deflection limit has received renewed attention and extensive elaboration [12; 13; 14; 15]1. In particular, Bozza shows in [15] that the analytical expansion of the strong deflection angle in the limit \(r\to r_{ps}\) (\(r_{ps}\) being the photon sphere radius) is given by
Footnote 1: Recently, Virbadhra modeled the supermassive black hole \(M87^{*}\) as a Schwarzschild lens and studied its distorted (tangential, radial, and total) magnifications of images with respect to the change in angular source position and lens-source ratio distance [16].
\[\alpha(b)=-\tilde{a}_{1}\log\left(\frac{b}{b_{c}}-1\right)+\tilde{a}_{2}+O(b- b_{c}). \tag{1}\]
with \(\tilde{a}_{1}\) and \(\tilde{a}_{2}\) some constants. Upon closer inspection, Tsukamoto gave corrections to the higher-order expansion [17; 18]. The result for Schwarzschild was extended to the case of Reissner-Nordstrom by Eiroa _et al_[19], while the strong lensing in Kerr BH was studied in [20; 21; 22].
Probably the most intriguing property of black holes is the existence of a singularity due to gravitational collapse [23]. It was believed that such a singularity is an inherent feature of general relativity's solutions, but the stable ones (like all observable black holes) are disconnected
from the observers by an event horizon [24]. Nevertheless, Bardeen in 1968 constructed a metric function that produces a nonsingular spacetime [25]. The metric and all invariants are devoid of singularity everywhere, including at \(r=0\). Instead, we have a regular de Sitter core. (For an excellent review on regular BHs see, for example, [26].) The strong lensing phenomenon around the Bardeen BH has been studied in [27; 28].
At first nobody knew what kind of matter sources the Bardeen geometry, but later Ayon-Beato and Garcia (ABG) realized that this nonsingular metric can be obtained as a solution of Einstein's equations coupled to some nonlinear electrodynamics (NLED) source [29; 30]. Invoking NLED turns out to have a profound impact on the geodesics of test photons. Novello _et al_ showed that in a nonlinear electrodynamics background a photon moves in an effective modified geometry [31], and this radically modifies the corresponding optical observables. In [32] the authors model M87* as a (singular) NLED-charged black hole and study its shadow.
In this work, we discuss the effect of the effective geometry on the lensing phenomenon in the Bardeen BH using one of ABG's NLED models. In particular, we calculate the image separation and magnification. We use it as a model for the supermassive black hole at the center of our galaxy, Sgr A*, and calculate the astrophysical observables. Lastly, we investigate its shadow radius and compare it to the astrophysical data from Keck and VLTI. This paper is organized as follows. In Sec. II we briefly review the regular Bardeen solution and its corresponding ABG models. In Sec. III we present the effective metric of ABG and the corresponding photon sphere. Sec. IV is devoted to applying Bozza's and Tsukamoto's strong lensing formalism to our model. Sec. V is devoted to calculating the strong lensing observables using the Sgr A* data. In Sec. VI we calculate the shadow radius and plot it against the Keck-VLTI constraints. Finally, we summarize our findings in Sec. VII.
## II Bardeen spacetime
The Bardeen metric is given by [25]:
\[ds^{2}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+r^{2}d\Omega^{2}, \tag{2}\]
with
\[f(r)\equiv 1-\frac{2mr^{2}}{(r^{2}+q^{2})^{\frac{3}{2}}}, \tag{3}\]
and \(q\) is the charge. This spacetime is regular at \(r=0\), as can easily be seen from the Kretschmann scalar:
\[\lim_{r\to 0}R^{\alpha\beta\gamma\delta}R_{\alpha\beta\gamma\delta}=\frac{96m^{2 }}{q^{8/3}}, \tag{4}\]
while the metric behaves de Sitter-like
\[f(r)\approx 1-\frac{2m}{q^{3}}r^{2}. \tag{5}\]
The horizons \(r_{h}\) are given by the roots of
\[\left(r_{h}^{2}+q^{2}\right)^{3}-4m^{2}r_{h}^{4}=0. \tag{6}\]
Bardeen black hole can, in general, possess up to two horizons. The extremal condition is achieved when [30]
\[q^{2}=\frac{16}{27}m^{2}\ \to r_{extr}=\sqrt{\frac{32}{27}}m. \tag{7}\]
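As a quick numerical check of Eqs. (6)-(7), the horizon condition can be rewritten as a cubic in \(x=r_{h}^{2}\) and solved directly. The short Python sketch below is our own illustration (not part of the original analysis); function and variable names are ours.

```python
import numpy as np

def bardeen_horizons(m=1.0, q=0.5):
    """Horizon radii from (r^2 + q^2)^3 - 4 m^2 r^4 = 0, rewritten with x = r^2 as
    x^3 + (3 q^2 - 4 m^2) x^2 + 3 q^4 x + q^6 = 0."""
    roots = np.roots([1.0, 3*q**2 - 4*m**2, 3*q**4, q**6])
    # keep the (numerically) real, positive roots; near-extremal charges may
    # need a looser imaginary-part tolerance
    x = roots.real[(np.abs(roots.imag) < 1e-6) & (roots.real > 1e-8)]
    return np.sort(np.sqrt(x))          # inner and outer horizons (empty if none)

print(bardeen_horizons(q=0.0))              # [2.]  Schwarzschild limit r_h = 2m
print(bardeen_horizons(q=np.sqrt(16/27)))   # both roots near sqrt(32/27) ~ 1.089 (extremal case)
print(bardeen_horizons(q=0.9))              # []    overcharged: no horizon
```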
Ayon-Beato and Garcia proposed the NLED matter to source the Bardeen spacetime, given in [29]. This model, however, produces a slightly different metric function than the original Bardeen,
\[f(r)=1-\frac{2mr^{2}}{(r^{2}+q^{2})^{\frac{3}{2}}}+\frac{q^{2}r^{2}}{\left(r^{2}+q^{2}\right)^{2}}. \tag{8}\]
Strong lensing of this particular model has been discussed in [33]. Later ABG considered a simpler NLED sourced by magnetic monopole as follows [30],
\[\mathcal{L}=\frac{3}{2sq^{2}}\left(\frac{\sqrt{2q^{2}F}}{1+\sqrt{2q^{2}F}} \right)^{5/2}, \tag{9}\]
where \(F\equiv\frac{1}{4}F^{\mu\nu}F_{\mu\nu}\) and \(s\equiv q/2m\). By inserting the monopole ansatz \(A_{\mu}=\delta_{\mu}^{\varphi}q\left(1-\cos\theta\right)\), the field strength becomes \(F=q^{2}/2r^{4}\) and the Lagrangian produces the metric solution Eq. (3).
## III Effective geometry
The NLED Lagrangian above induces the effective metric tensor [31]
\[g_{eff}^{\mu\nu}=g^{\mu\nu}-\frac{4\mathcal{L}_{FF}}{\mathcal{L}_{F}}F_{ \alpha}^{\mu}F^{\alpha\nu}, \tag{10}\]
where \({\cal L}_{A}\equiv\partial{\cal L}/\partial A\). This, in turn, yields the effective length element
\[ds_{eff}^{2}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+h_{m}(r)r^{2}d\Omega^{2}, \tag{11}\]
where
\[h_{m}(r)=\left(1+\frac{4{\cal L}_{FF}}{{\cal L}_{F}}\frac{q^{2}}{r^{4}}\right)^ {-1}. \tag{12}\]
Inserting the Lagrangian (9) we obtain
\[h_{ABG}(r)=\left(1-\frac{2(6q^{2}-r^{2})}{(q^{2}+r^{2})}\right)^{-1}. \tag{13}\]
From the corresponding geodesic equation it is not difficult to see that the radial equation satisfies
\[\frac{1}{2}\dot{r}^{2}+V_{eff}=0, \tag{14}\]
where we define the effective potential \(V_{eff}\) as
\[V_{eff}\equiv\frac{f(r)}{h(r)}\frac{\mathbb{L}^{2}}{r^{2}}. \tag{15}\]
The corresponding photon sphere radius is given by the largest positive root of the following condition, [15]
\[\frac{f^{\prime}(r_{ps})}{f(r_{ps})}-\frac{2}{r_{ps}}-\frac{h^{\prime}(r_{ps} )}{h(r_{ps})}=0. \tag{16}\]
This yields,
\[\frac{2}{r_{ps}}+\frac{28q^{2}r_{ps}}{11q^{4}+8q^{2}r_{ps}^{2}-3r_{ps}^{4}}+ \frac{m\left(4q^{2}r_{ps}-2r_{ps}^{3}\right)}{\left(q^{2}+r_{ps}^{2}\right) \left[\left(q^{2}+r_{ps}^{2}\right)^{3/2}-2mr_{ps}^{2}\right]}=0. \tag{17}\]
Solving for the roots numerically, the behavior of \(r_{ps}\) as a function of \(q\), \(r_{ps}=r_{ps}(m=1,q)\), is shown in Fig. 1. It shows that the photon sphere radius decreases as the charge increases until some critical value \(q_{crit}\) where \(r_{ps}(m=1,q=q_{crit})\) is minimum, beyond which \(r_{ps}\) starts increasing without bound. Interestingly, the critical value \(q_{crit}\) is not given by the extremal charge \(q=\sqrt{16/27}\,m\), as in the Bardeen case. Rather, \(q_{crit}=(16/27)\,m\simeq 0.592\,m\).
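The behaviour of \(r_{ps}(q)\) can be reproduced with a few lines of code. Since, by Eq. (18), the photon sphere extremizes the impact parameter, one can simply minimize \(b^{2}(r_{0})=h(r_{0})r_{0}^{2}/f(r_{0})\) outside the outer horizon instead of root-finding Eq. (17) directly. The sketch below is our own illustration (the names and the search window are assumptions, not from the paper).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(r, m=1.0, q=0.3):
    return 1.0 - 2.0*m*r**2 / (r**2 + q**2)**1.5

def h_abg(r, q=0.3):
    # Effective-metric factor of Eq. (13): h = (q^2 + r^2) / (3 r^2 - 11 q^2)
    return (q**2 + r**2) / (3.0*r**2 - 11.0*q**2)

def outer_horizon(m=1.0, q=0.3):
    # Largest real root of (r^2 + q^2)^3 = 4 m^2 r^4, written as a cubic in x = r^2
    x = np.roots([1.0, 3*q**2 - 4*m**2, 3*q**4, q**6])
    x = x.real[(np.abs(x.imag) < 1e-9) & (x.real > 1e-8)]
    return np.sqrt(x.max()) if x.size else 0.0

def photon_sphere(m=1.0, q=0.3):
    """r_ps extremizes the impact parameter, i.e. minimizes b^2 = h r^2 / f of Eq. (18);
    this is equivalent to solving the condition (16)-(17)."""
    b2 = lambda r: h_abg(r, q)*r**2 / f(r, m, q)
    # stay where f > 0 (outside the horizon) and h > 0
    r_lo = max(outer_horizon(m, q), np.sqrt(11.0/3.0)*q) + 1e-6
    res = minimize_scalar(b2, bounds=(r_lo, 10.0*m), method="bounded")
    return res.x, np.sqrt(res.fun)       # (r_ps, critical impact parameter b_c)

for q in (0.0, 0.3, 0.6):
    r_ps, b_c = photon_sphere(q=q)
    print(f"q = {q:.1f}:  r_ps = {r_ps:.4f} m,  b_c = {b_c:.4f} m")
# For q -> 0 the effective factor h tends to 1/3, so r_ps -> 3m while b_c -> 3m,
# smaller than the pure Schwarzschild 3*sqrt(3) m -- consistent with the ABG row of Table 1.
```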
In Fig. 2 we show how \(r_{ps}\) varies with \(m\) for several values of \(q\). They differ only at small \(m\). When the mass is large, \(r_{ps}\) for different \(q\) asymptote to a single gradient. In Fig. 3 we show the deviation of \(r_{ps}\) as a function of \(q\) from the Schwarzschild (charge-less condition). The ABG model we consider here falls between the Schwarzschild and the RN at large \(q\), unlike the original Bardeen which falls the fastest.
## IV Deflection Angle in the Strong Field Limit
From the spherical symmetry and staticity conditions, Noether's theorem dictates that this spacetime has constants of motion, the total test particle's energy \(E\) and angular momentum \(\mathbb{L}\), related to the \(\partial_{t}\) and \(\partial_{\varphi}\) Killing vectors, respectively. We define the impact parameter for photon as
\[b(r_{0})\equiv\frac{\mathbb{L}}{E}=\sqrt{\frac{h_{m}(r_{0})r_{0}^{2}}{f(r_{0})}}. \tag{18}\]
Figure 1: Photon Sphere of Bardeen with ABG source as function of charge (\(q\)) for \(m=1\).
Figure 2: \(r_{ps}\) as function of \(m\) for a variation of charge \(q\).
Solving the null geodesic equation, the general expression for bending angle of light rays can be expressed as (see, for example, in [34])
\[\alpha(r_{0})=2\int_{r_{0}}^{\infty}\sqrt{\frac{1}{r^{2}f(r)h(r)R(r)}}dr-\pi, \tag{19}\]
where
\[R(r)=\frac{r^{2}h(r)}{b^{2}f(r)}-1. \tag{20}\]
The integral is divergent at \(r_{0}\to r_{ps}\). To circumvent this problem we define \(z\equiv 1-r_{0}/r\)[18] and write Eq. (19) as
\[\alpha(r_{0})=\int_{0}^{1}\mathcal{H}(z,r_{0})dz-\pi, \tag{21}\]
with
\[\mathcal{H}(z,r_{0})\equiv\frac{2}{\left(1-z\right)\sqrt{f\left(\frac{r_{0}}{1-z}\right)h\left(\frac{r_{0}}{1-z}\right)R\left(\frac{r_{0}}{1-z}\right)}}. \tag{22}\]
The singular part can be isolated by defining
\[\mathcal{H}(z,r_{0})\equiv\mathcal{H}_{R}(z,r_{0})+\mathcal{H}_{D}(z,r_{0}), \tag{23}\]
where the subscript R(D) refers to the regular (divergent) part, respectively.
To handle the divergent part:
\[\mathcal{I}_{D}(r_{0})\equiv\int_{0}^{1}\mathcal{H}_{D}(z,r_{0})dz, \tag{24}\]
Figure 3: \(r_{ps}\) as function of \(q\) for different models.
we define \(\mathcal{H}(z,r_{0})\equiv 2r_{0}/\sqrt{\mathcal{G}(z,r_{0})}\) and, by expanding it around \(z\to 0\), obtain the expression for \(\mathcal{G}(z,r_{0})\) up to second order:
\[\mathcal{G}_{0}(z,r_{0})=c_{1}(r_{0})z+c_{2}(r_{0})z^{2}, \tag{25}\]
where
\[c_{1}(r_{0}) = C_{0}\mathcal{D}_{0}r_{0}f(r_{0}),\] \[c_{2}(r_{0}) = C_{0}r_{0}f_{0}\bigg{\{}\mathcal{D}_{0}\left[\mathcal{D}_{0} \left(\mathcal{D}_{0}+\frac{f_{0}^{\prime}}{f_{0}^{3}}\right)r_{0}-3\right]+ \frac{\mathcal{D}_{0}r_{0}}{2}\bigg{\}}, \tag{26}\]
with \(X_{0}\equiv X(r=r_{0})\), \(C(r)\equiv h(r)r^{2}\), and
\[\mathcal{D}(r)\equiv\frac{C^{\prime\prime}(r)}{C(r)}-\frac{f^{\prime\prime}(r )}{f(r)}. \tag{27}\]
For the ABG model, the values of \(c_{1}\) and \(c_{2}\) are
\[c_{1}(r_{0}) = -\frac{2(r_{0}^{2}(11q^{4}+22q^{2}r_{0}^{2}-3r_{0}^{4})\tilde{j}_ {1}^{3/2}+m(9r_{0}^{8}-61q^{2}r_{0}^{6})))}{(11q^{2}-3r^{2})^{2}\tilde{j}_{1}^ {3/2}},\] \[c_{2}(r_{0}) = \frac{mr_{0}^{6}\tilde{j}_{2}+r_{0}^{2}(\tilde{j}_{3}\tilde{j}_{ 1}^{5/2}-\tilde{j}_{3})}{(11q^{2}-3r_{0}^{2})^{3}\tilde{j}_{1}^{5/2}},\]
with
\[\tilde{j}_{1} \equiv q^{2}+r_{0}^{2},\] \[\tilde{j}_{2} \equiv 2013q^{6}+3104q^{4}r_{0}^{2}+57q^{2}r_{0}^{4}-54r_{0}^{6},\] \[\tilde{j}_{3} \equiv 121q^{6}+825q^{4}r_{0}^{2}-99q^{2}r_{0}^{4}+9r_{0}^{6}. \tag{29}\]
Following Bozza [15] and Tsukamoto [18] it can be shown that the divergent integral in the strong field limit \(r\to r_{ps}\) (or equivalently \(b\to b_{ps}\)) is expressed as
\[\mathcal{I}_{D}(b) = -\sqrt{\frac{2}{f_{ps}C_{ps}^{\prime\prime}-f_{ps}^{\prime\prime} C_{ps}}}\] \[\times\bigg{\{}\log\left(\frac{b}{b_{c}}-1\right)+\log\left[r_{ ps}^{2}\left(\frac{C_{ps}^{\prime\prime}}{C_{ps}}-\frac{f_{ps}^{\prime\prime}}{ f_{ps}}\right)\right]\bigg{\}}.\]
The regular part is
\[{\cal I}_{R}\equiv\int_{0}^{1}{\cal H}_{R}(z,r_{0})dz. \tag{31}\]
Likewise, by expanding around \(r\to r_{ps}\) the integral can be expressed as
\[\frac{{\cal I}_{R}(r_{ps})}{2r_{ps}} = \int_{0}^{1}\frac{dz}{F(r_{ps},z)}-\int_{0}^{1}\sqrt{\frac{2}{C_{ ps}r_{ps}f_{ps}{\cal D}_{ps}z^{2}}}dz,\]
with
\[F(r_{ps},z)\equiv\sqrt{C\left(\frac{r_{ps}}{1-z}\right)R\left(\frac{r_{ps}}{1- z}\right)f\left(\frac{r_{ps}}{1-z}\right)\left(1-z\right)^{4}}. \tag{33}\]
For the ABG model, the integral can be evaluated numerically. Putting (30) and (32) into (21), we can express it as Eq. (1) by identifying
\[\tilde{a}_{1} \equiv \sqrt{\frac{2A_{ps}B_{ps}}{A_{ps}C^{\prime\prime}_{ps}-A^{\prime \prime}_{ps}C_{ps}}},\] \[\tilde{a}_{2} \equiv \tilde{a_{1}}\log\tilde{b}+I_{R}(r_{ps})-\pi, \tag{34}\]
with
\[\tilde{b}\equiv r_{ps}^{2}\left(\frac{C^{\prime\prime}_{ps}}{C_{ps}}-\frac{A^ {\prime\prime}_{ps}}{A_{ps}}\right). \tag{35}\]
In terms of the ABG model we consider, their expressions are
\[\tilde{a}_{1} = \sqrt{\frac{(11q^{2}-3r_{ps}^{2})^{3}\tilde{k}_{1}^{5/2}}{\tilde{k}_ {2}-\tilde{k}_{3}\tilde{k}_{1}^{5/2}}},\] \[\tilde{a}_{2} = -\pi+\mathcal{I}_{R}(r_{ps})+\sqrt{\frac{(11q^{2}-3r_{ps}^{2})^{3 }\tilde{k}_{1}^{5/2}}{\tilde{k}_{2}-\tilde{k}_{3}\tilde{k}_{1}^{5/2}}}\log \left[\frac{2\tilde{k}_{1}^{5/2}\tilde{k}_{3}-2\tilde{k}_{2}}{(11q^{2}-3r_{ps} ^{2})^{2}(\tilde{k}_{1}^{3/2}-2mr_{ps}^{2})\tilde{k}_{1}^{2}}\right],\]
with
\[\tilde{k}_{1} = q^{2}+r_{ps}^{2},\] \[\tilde{k}_{2} = m(3355q^{6}r_{ps}^{4}+466q^{4}r_{ps}^{6}+51q^{2}r_{ps}^{8}),\] \[\tilde{k}_{3} = (121q^{6}+825q^{4}r_{ps}^{2}-99q^{2}r_{ps}^{4}+9r_{ps}^{6}). \tag{37}\]
The calculated \(\tilde{a}_{1}\) and \(\tilde{a}_{2}\) are shown in Fig. 4. The \(\tilde{a}_{1}\) slowly increases until \(q=q_{crit}\) and then decreases linearly. The \(\tilde{a}_{2}\), on the other hand, decreases as \(q\) goes up until \(q=q_{crit}\), then it starts increasing. The deflection angles are depicted in Fig. 5. It is shown that the critical impact parameters for the NLED models are smaller than for the pure Bardeen. This critical value increases with increasing charge.
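The logarithmic behaviour of Eq. (1) can also be checked by brute-force quadrature of Eqs. (19)-(21), without using the expansion at all. The following sketch is ours (it fixes \(m=1\), \(q=0.3\) for illustration) and is not the paper's code.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

m, q = 1.0, 0.3
f = lambda r: 1 - 2*m*r**2/(r**2 + q**2)**1.5
h = lambda r: (q**2 + r**2)/(3*r**2 - 11*q**2)
b = lambda r0: np.sqrt(h(r0)*r0**2/f(r0))          # impact parameter, Eq. (18)

# photon sphere: minimum of b(r0); the outer horizon is at r ~ 1.93 here, so r > 2 is safe
res = minimize_scalar(b, bounds=(2.0, 10.0), method="bounded")
r_ps, b_c = res.x, res.fun

def alpha(r0):
    """Deflection angle of Eqs. (19)-(21), integrated in z = 1 - r0/r."""
    R = lambda r: h(r)*r**2/(b(r0)**2*f(r)) - 1.0
    integrand = lambda z: 2.0/((1.0 - z)*np.sqrt(f(r0/(1-z))*h(r0/(1-z))*R(r0/(1-z))))
    # integrable z^(-1/2) endpoint singularity; quad may warn very close to b_c
    val, _ = quad(integrand, 0.0, 1.0, limit=200)
    return val - np.pi

# alpha grows only logarithmically as r0 -> r_ps (b -> b_c)
for eps in (1e-1, 3e-2, 1e-2, 3e-3, 1e-3):
    r0 = r_ps*(1 + eps)
    print(f"b/b_c - 1 = {b(r0)/b_c - 1:.2e}   alpha = {alpha(r0):.4f} rad")
```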
## V Lensing of Sgr A* as an ABG black hole
The most straightforward effect of light deflection due to gravitational field is the notion of "gravitational lensing". The lensing mechanism can be inferred from Fig. 6. The straight
Figure 5: Comparison of light deflection for Regular Bardeen and Bardeen with ABG source
segment \(\overline{SO}\) is the path the light would have taken had it not been deflected due to the lens (BH) at \(L\). The angle \(\beta\) denotes the angular position of the source \(S\) from the observer \(O\) if there were no lensing. What the \(O\) observes is the "image" of \(S\) located at \(I\) whose angular position is given by \(\theta\). The deflection angle is given by \(\alpha\). From simple geometry the relation between \(\beta\) and \(\theta\) can be expressed as [13; 35]
\[\tan\beta=\tan\theta-\frac{D_{LS}}{D_{OL}}\left[\tan\theta+\tan(\alpha-\theta) \right], \tag{38}\]
known as the lens equation.
In the strong field limit (\(r_{0}\to r_{ps}\)) \(\beta\) and \(\theta\) are small, \(\alpha\) can exceed \(2\pi\) and light can loop around the black hole several (\(n\)) times before escaping out to the observer. In this sense, \(\alpha=2n\pi+\Delta\alpha_{n}\). We can then expand \(\tan(\alpha-\theta)\sim\Delta\alpha_{n}-\theta\)[14]. The lens equation thus
Figure 6: Gravitational lensing diagram. S, L, and O are the source, the lens, and the observer, respectively.
becomes
\[\beta=\theta-\frac{D_{LS}}{D_{OL}}\Delta\alpha_{n}. \tag{39}\]
We also have the relation
\[b=D_{OL}\theta. \tag{40}\]
Substituting it into (1) and inverting it results in [15]
\[\theta(\alpha)\simeq\frac{b_{c}}{D_{OL}}\left(1+e^{\frac{\tilde{a}_{2}-\alpha}{\tilde{a}_{1}}}\right). \tag{41}\]
From Fig. 6, the innermost image is given by
\[\theta_{\infty}=\frac{b_{c}}{D_{OL}}. \tag{42}\]
Expanding around \(\alpha\) yields
\[\theta_{n}=\theta_{n}^{0}-\gamma_{n}\Delta\alpha_{n}, \tag{43}\]
with
\[\theta_{n}^{0} = \frac{b_{c}}{D_{OL}}\left(1+e_{n}\right), \tag{44}\] \[\gamma_{n} = \frac{b_{c}}{\tilde{a}_{2}D_{OL}}e_{n}, \tag{45}\] \[e_{n} = e^{\frac{\tilde{a}_{2}-2n\pi}{\tilde{a}_{1}}}. \tag{46}\]
We can eliminate \(\Delta\alpha_{n}\) using Eq. (39). This results in the equation for the \(n\)-th image position [14; 15; 17]
\[\theta_{n}=\theta_{n}^{0}+\frac{D_{LS}}{D_{OS}}\gamma_{n}(\beta-\theta_{n}^{0 }), \tag{47}\]
where the second term on the right-hand side is small compared to the first one. For the Einstein ring case, \(\beta=0\) and
\[\theta_{nE}=\left(1-\frac{D_{LS}}{D_{OS}}\gamma_{n}\right)\theta_{n}^{0}. \tag{48}\]
The observables we wish to calculate are the image separation and the magnification. The separation \(s\) is the difference between the outermost and innermost images,
\[s=\theta_{1}-\theta_{\infty}=\theta_{\infty}e^{\frac{\tilde{a}_{2}-2\pi}{\tilde{a}_{1}}}. \tag{49}\]
The magnification \(\mu\) is the _inverse_ of the corresponding Jacobian determinant for the critical curve [4; 14]. The \(n^{th}\) image magnification is defined to be
\[\mu_{n}=\frac{1}{|\det J_{\theta_{n}^{0}}|}=\frac{1}{\frac{\beta}{\theta_{n}^ {0}}\frac{\partial\beta}{\partial\theta}\big{|}_{\theta_{n}^{0}}}, \tag{50}\]
from which the (relativistic) flux ratio is expressed as
\[r=\frac{\mu_{1}}{\sum\limits_{n=2}^{\infty}\mu_{n}}=e^{\frac{2\pi}{\tilde{a}_{1}}}. \tag{51}\]
In this paper we calculate the strong lensing from the Sgr A* black hole modeled as the Bardeen BH with ABG source. We use data from the GRAVITY collaboration, where the black hole mass and its distance from the Earth (observer) are \(m=4.154\times 10^{6}M_{\odot}\) and \(D_{OL}=8.178\ kpc\), respectively [36]. These values are consistent with the EHT results [2]. In Table 1 we show the observables. Here the relativistic flux ratio is converted to magnitudes, \(r_{m}=2.5\log_{10}r\)[15]. From the Table it can be seen that the observable values for the Bardeen BH do not differ much from those of RN. However, the observables for the ABG are significantly different from both Bardeen and RN; _i.e.,_ the ABG's are smaller. This shows that the NLED effect is quite significant here.
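For reference, the conversion from the geometric quantities (\(b_c\), \(\tilde{a}_1\), \(\tilde{a}_2\)) to the observables of Table 1 can be sketched as follows. This is our own code using standard physical constants and the GRAVITY values quoted above; the ABG inputs would come from the numerical strong-field coefficients of Sec. IV.

```python
import numpy as np

# Physical constants and Sgr A* data quoted in the text (GRAVITY collaboration)
G, c = 6.674e-11, 2.998e8
M_sun, kpc = 1.989e30, 3.086e19                 # kg, m
M, D_OL = 4.154e6*M_sun, 8.178*kpc
m_geo = G*M/c**2                                # gravitational radius in metres
muas = 180/np.pi*3600*1e6                       # radians -> micro-arcseconds

def observables(b_c, a1, a2):
    """Strong-lensing observables of Eqs. (42), (49) and (51).
    b_c is the critical impact parameter in units of m; a1, a2 are the coefficients of Eq. (1)."""
    theta_inf = b_c*m_geo/D_OL*muas             # innermost image position (muas)
    s = theta_inf*np.exp((a2 - 2*np.pi)/a1)     # separation, Eq. (49)
    r_m = 2.5*np.log10(np.exp(2*np.pi/a1))      # flux ratio in magnitudes, Eq. (51)
    return theta_inf, s, r_m

# Schwarzschild check: b_c = 3*sqrt(3) m, a1 = 1, a2 = log(216(7 - 4*sqrt(3))) - pi
print(observables(3*np.sqrt(3), 1.0, np.log(216*(7 - 4*np.sqrt(3))) - np.pi))
# ~ (26.06 muas, 0.033 muas, 6.82 mag), close to the Schwarzschild row of Table 1
```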
From Table 1 it can be inferred that the ABG has \(1.5\times\) smaller values of \(\theta_{\infty}\) compared to Bardeen. This means that a photon can orbit the ABG BH with a smaller radius. While the separation \(s\) for the ABG is surprisingly \(30\times\) larger, its magnification \(r_{m}\) is smaller than for Bardeen. The NLED thus strengthens the gravitational field by decreasing the innermost distance while at the same time increasing its separation from the outermost image. Interestingly, the observables in the ABG behave in ways opposite to the ones in the Bardeen. In Fig. 7 it is shown that while \(\theta_{\infty}\) in Bardeen decreases monotonically, in the ABG case it increases. From Fig. 8 the separation in the Bardeen model increases monotonically, while in the ABG there exists some maximum value \(s=s_{max}\) before which it initially increases and after which it starts decreasing. Similarly, in Fig. 9 we see that \(r_{m}\) decreases unboundedly for the Bardeen as \(q\) increases, whereas it decreases to its minimum value before increasing monotonically for the ABG.
## VI Is Sgr A* a nonsingular black hole?
Having calculated the lensing observables for Sgr A* if it is modeled as the ABG black hole, a tempting question would be: how realistic is Sgr A* as a nonsingular black hole? While to the best of our knowledge we have not found any strong lensing data of Sgr A* to compare with, we do have data for its shadow radius bound. The black hole _shadow_ is the dark region surrounding it due to the inability of the photon from the background light
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Model & \(Q/m\) & \(\theta_{\infty}\) (\(\mu\) as) & \(s(\mu\) as) & \(r_{m}\) \\ \hline Schwarzschild & - & 26.0592 & 0.0327 & 6.8184 \\ \hline Bardeen & 0 & 26.0592 & 0.0327 & 6.8184 \\ \cline{2-5} & 0.1 & 26.0156 & 0.0332351 & 6.79937 \\ \cline{2-5} & 0.2 & 25.8833 & 0.0349143 & 6.74083 \\ \cline{2-5} & 0.3 & 25.6569 & 0.0381989 & 6.63828 \\ \cline{2-5} & 0.4 & 25.3266 & 0.044135 & 6.48266 \\ \cline{2-5} & 0.5 & 24.8751 & 0.0552564 & 6.25695 \\ \cline{2-5} & 0.6 & 24.2718 & 0.0788401 & 5.92697 \\ \cline{2-5} & 0.7 & 23.4566 & 0.144124 & 5.41046 \\ \hline RN & 0 & 26.0592 & 0.0327 & 6.8184 \\ \cline{2-5} & 0.1 & 26.0157 & 0.0330446 & 6.81081 \\ \cline{2-5} & 0.2 & 25.8841 & 0.0340718 & 6.7875 \\ \cline{2-5} & 0.3 & 25.6612 & 0.0359467 & 6.74699 \\ \cline{2-5} & 0.4 & 25.3412 & 0.0389696 & 6.68646 \\ \cline{2-5} & 0.5 & 24.9146 & 0.043696 & 6.60104 \\ \cline{2-5} & 0.6 & 24.3668 & 0.0511167 & 6.48246 \\ \cline{2-5} & 0.7 & 23.6747 & 0.0628463 & 6.31574 \\ \hline ABG & 0 & 15.0453 & 1.01295 & 3.93662 \\ \cline{2-5} & 0.1 & 15.0593 & 1.015 & 3.93252 \\ \cline{2-5} & 0.2 & 15.1029 & 1.02079 & 3.92061 \\ \cline{2-5} & 0.3 & 15.1806 & 1.02911 & 3.90248 \\ \cline{2-5} & 0.4 & 15.3007 & 1.03751 & 3.88228 \\ \cline{2-5} & 0.5 & 15.4768 & 1.04185 & 3.87016 \\ \cline{2-5} & 0.6 & 15.7286 & 1.03655 & 3.8886 \\ \cline{2-5} & 0.7 & 16.0813 & 1.01629 & 3.9788 \\ \hline \end{tabular}
\end{table}
Table 1: Observables for the Sgr A* Schwarzschild, Bardeen, RN, and ABG. The \(\theta\) and its corresponding separation are expressed in \(\mu\) arc-second (as) unit.
source to escape the gravitational potential around the black hole. The size of the shadow corresponds to the critical impact parameter of the photon orbit [37; 38; 39]. The idea that the Sgr A* shadow can be observed was suggested in [3; 40; 41]. The shadow radius \(R_{sh}\) is represented by the critical impact parameter, _i.e.,_ the impact parameter evaluated at the photon sphere radius \(r_{ps}\)[42; 43],
\[R_{sh}=b_{c}=r_{ps}\sqrt{\frac{h(r_{ps})}{f(r_{ps})}}. \tag{52}\]
For Bardeen with ABG source we will have
\[R_{shABG}=r_{ps}\sqrt{\frac{\left(1-\frac{2(6q^{2}-r_{ps}^{2})}{(q^{2}+r_{ps}^ {2})}\right)^{-1}}{\left(1-\frac{2mr_{ps}^{2}}{(r_{ps}^{2}+q^{2})^{3/2}}\right)}}. \tag{53}\]
Recently, Vagnozzi _et al_[44] did a comprehensive horizon-scale test using the EHT data
Figure 7: \(\theta_{\infty}\) as function of \(q\) for: [top] The ABG model and, [down] the original Bardeen.
from Sgr A* to constrain a wide range of classical black hole solutions. They calculate \(\delta\), the fractional deviation between the inferred and the Schwarzschild shadow radii, and compare it with the observational estimates by Keck and VLTI (Very Large Telescope Interferometer) [45]:
\[\delta\simeq-0.060\pm 0.065. \tag{54}\]
Converting this bound into the shadow radius, and assuming Gaussian uncertainties, the constraint reads
\[4.54\lesssim R_{sh}/M\lesssim 5.22, \tag{55}\]
and
\[4.20\lesssim R_{sh}/M\lesssim 5.56, \tag{56}\]
for \(1\sigma\) and \(2\sigma\) levels, respectively.
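The comparison that follows can be reproduced numerically: compute \(R_{sh}\) from Eq. (53) over a range of charges and check it against the bounds (55)-(56). The sketch below is our own illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def outer_horizon(q, m=1.0):
    # Largest real root of (r^2 + q^2)^3 = 4 m^2 r^4, as a cubic in x = r^2
    x = np.roots([1.0, 3*q**2 - 4*m**2, 3*q**4, q**6])
    x = x.real[(np.abs(x.imag) < 1e-9) & (x.real > 1e-8)]
    return np.sqrt(x.max())

def shadow_radius(q, m=1.0):
    """R_sh = b_c of Eq. (53): minimize b^2 = h r^2 / f over r outside the horizon."""
    f = lambda r: 1 - 2*m*r**2/(r**2 + q**2)**1.5
    h = lambda r: (q**2 + r**2)/(3*r**2 - 11*q**2)
    b2 = lambda r: h(r)*r**2/f(r)
    r_lo = max(outer_horizon(q, m), np.sqrt(11.0/3.0)*q) + 1e-6
    res = minimize_scalar(b2, bounds=(r_lo, 10*m), method="bounded")
    return np.sqrt(res.fun)

two_sigma = (4.20, 5.56)                       # Keck/VLTI bound of Eq. (56)
for q in np.linspace(0.0, 0.7, 8):
    R = shadow_radius(q)
    tag = "within 2-sigma" if two_sigma[0] <= R <= two_sigma[1] else "excluded"
    print(f"q = {q:.1f}:  R_sh/M = {R:.3f}  ({tag})")
# The ABG shadow radius stays near 3M for all charges shown, well below the
# 2-sigma lower bound of 4.20M -- the basis of the exclusion discussed below.
```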
Figure 8: \(s\) as function of \(q\) for: [top] the ABG model, [down] the Bardeen model.
Among the many BH solutions the authors of Ref. [44] scrutinize, they left the ABG BH unchecked. We plot the shadow radius of Eq. (53) as a function of the magnetic charge for the ABG BH, \(R_{shABG}=R_{shABG}(q)\). The result is shown in Fig. 10. Interestingly, while the original Bardeen metric passes the horizon-scale test, Fig. (4) of [44], the effective metric of ABG does not, as can be seen from the figure. The shadow radius of the ABG BH is well below the EHT constraint for Sgr A*. Thus, we conclude that the possibility of Sgr A* being an ABG black hole is ruled out.
## VII Conclusion
In this work, we study both the strong lensing and shadow phenomenology of the nonsingular ABG black hole. Our approach differs from [27; 28] in that we regard the
Figure 9: \(r_{m}\) as function of \(q\) for: [top] the ABG model, [down] the Bardeen model.
nonsingularity as coming from the NLED charge, and from [33] in that we use a simpler ABG NLED model.
The NLED reduces the photon sphere radius with increasing \(q\) until it reaches \(q=q_{crit}\), after which it starts to increase monotonically. It also reduces the radius of the photon orbit and strengthens the gravitational field by pulling the innermost image closer while at the same time stretching its separation from the outermost image. Our observable quantities behave differently from the ordinary Bardeen [27; 28] in terms of the innermost image (Fig. 7), the separation between the innermost and outermost images (Fig. 8), and the magnification (Fig. 9). Interestingly, the NLED type that we choose to model Bardeen in this paper gives distinct observable results compared to other types. While the model of [33] predicts that the angular separation \(s\) decreases while the magnification \(r_{m}\) increases with increasing \(q\), our results show the opposite. From Table 1 we can observe that as the charge increases, the angular separation does increase while the magnification does decrease.
Lastly, and more importantly, in this work we try to answer the tempting question of whether Sgr A* at the center of our Milky Way is a nonsingular supermassive BH or not. Based on our shadow radius analysis, we show that Sgr A* cannot be the ABG black hole. Its shadow radius is well below the constraints imposed by EHT observational data. This, however, is by no means saying that Sgr A* cannot be a nonsingular BH. In Ref. [44] it is shown that the EHT observations are consistent with Sgr A* being a Hayward BH [46]. The static ABG model itself is non-realistic from an astrophysical point of view for modeling
Figure 10: Shadow radius \(r_{sh}\) of the ABG black hole as a function of magnetic charge \(q\).
Sgr A*, since all black holes are supposed to be rotating. It would be interesting if we can put the rotating nonsingular black holes (for example, see [47; 48; 49]) to this horizon-scale test to see whether they are suitable to model the Sgr A*. Recently, Walia, Ghosh, and Maharaj tested the three rotating regular BHs (Hayward, Bardeen, and Simpson-Visser) using the EHT observations for Sgr A* [50]. They concluded that those three metrics can be still within the EHT bounds and thus the possibility of Sgr A* being one of them cannot be ruled out. However, their black hole metrics are not sourced by the NLED charge. Due to the rotating nature, it is not known whether such exact solutions exist for coupled Einstein-NLED equations.
###### Acknowledgements.
We thank Reyhan Lambaga and Imam Huda for the fruitful discussions on the preliminary stage of this work.
## Data Availability Statement
Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.
|
2301.10180
|
A Multi-Purpose Audio-Visual Corpus for Multi-Modal Persian Speech
Recognition: the Arman-AV Dataset
|
In recent years, significant progress has been made in automatic lip reading.
But these methods require large-scale datasets that do not exist for many
low-resource languages. In this paper, we have presented a new multipurpose
audio-visual dataset for Persian. This dataset consists of almost 220 hours of
videos with 1760 corresponding speakers. In addition to lip reading, the
dataset is suitable for automatic speech recognition, audio-visual speech
recognition, and speaker recognition. Also, it is the first large-scale lip
reading dataset in Persian. A baseline method was provided for each mentioned
task. In addition, we have proposed a technique to detect visemes (a visual
equivalent of a phoneme) in Persian. The visemes obtained by this method
increase the accuracy of the lip reading task by 7% relatively compared to the
previously proposed visemes, which can be applied to other languages as well.
|
Javad Peymanfard, Samin Heydarian, Ali Lashini, Hossein Zeinali, Mohammad Reza Mohammadi, Nasser Mozayani
|
2023-01-21T05:13:30Z
|
http://arxiv.org/abs/2301.10180v1
|
A Multi-Purpose Audio-Visual Corpus for Multi-Modal Persian Speech Recognition: the Arman-AV Dataset
###### Abstract
In recent years, significant progress has been made in automatic lip reading. But these methods require large-scale datasets that do not exist for many low-resource languages. In this paper, we have presented a new multipurpose audio-visual dataset for Persian. This dataset consists of almost 220 hours of videos with 1760 corresponding speakers. In addition to lip reading, the dataset is suitable for automatic speech recognition, audio-visual speech recognition, and speaker recognition. Also, it is the first large-scale lip reading dataset in Persian. A baseline method was provided for each mentioned task. In addition, we have proposed a technique to detect visemes (a visual equivalent of a phoneme) in Persian. The visemes obtained by this method increase the accuracy of the lip reading task by 7% relatively compared to the previously proposed visemes, which can be applied to other languages as well.
persian dataset, audio-visual speech recognition, lip reading, viseme
## I Introduction
Automatic speech recognition (ASR) is the task of understanding speech from audio signals. This task has been developed over the years and has become mature enough to be used on any device. But the trained models, despite their high speech recognition ability, have weaknesses in special conditions such as environments with loud noise. That is where audio-visual speech recognition (AVSR) comes in to overcome this limitation. AVSR uses visual information alongside audio in order to decode speech more effectively in noisy environments such as inside cars. AVSR is closer to human comprehension, which uses both visual and audio perception. Lip reading is another task, which uses only visual information.
The traditional approaches [1, 2, 3] used a two-stage algorithm to deal with such problems. In the first stage, a hand-crafted feature extractor extracts useful features from lip movements, another feature extractor extracts features from audio signals, and the two are then fused. In the second stage, a classifier such as a hidden Markov model or an artificial neural network is used to classify digits, characters, etc. However, in the last few years, the availability of large public datasets and the emergence of deep neural networks have had a substantial impact on this field. Deep learning approaches usually consist of two parts, a front-end and a back-end, like the traditional ones, except that here they are end-to-end trainable. The front-end usually uses convolutional neural networks to extract visual and audio features, while the back-end uses temporal networks such as RNNs, attention models, and transformers to model temporal information. Recently, methods such as [4, 5, 6] used knowledge distillation to train lip reading and AVSR models. To do this, they usually use the ASR model as the teacher and the lip reading model as the student.
The early datasets were collected under laboratory conditions [7, 8]: usually, a person stands in front of the camera and reads some words or sentences in a quiet place at normal speed. The emergence of deep learning and automatic pipelines led to building datasets in the "wild" condition [9, 10, 11], which are larger, more challenging, and closer to real conditions than before. Unlike the old datasets, which were restricted to digit, character, or phrase classification, nowadays datasets are collected for word-level classification and sentence-level AVSR. Unfortunately, most of these datasets are in English and only a few come in other languages [12]. In this paper, we concentrate more on sentence-level AVSR, but we did not limit our dataset to speech recognition tasks only. We built our dataset in a way that it can be used for different tasks. In the following sections, we will explain more about it.
Currently, LRS2 [13] is the most widely used audio-visual dataset. The dataset contains more than 240 hours of videos and about 118,000 utterances. Also, they used BBC news
and talk shows as sources for their dataset. The dataset is publicly available to the research community. LSVSR [11] is the largest dataset with over 3800 hours of data collected from YouTube-uploaded videos. This dataset was collected by Google DeepMind and, unfortunately, they did not make it publicly available.
In this paper we propose a novel large-scale dataset collected in the "wild" condition from reviews, movies, etc. on the Aparat website. The dataset contains over 220 hours of videos from 1760 Persian celebrities. Also, we provide the name of the celebrity in each sample as a label to be used for different purposes, such as speaker recognition. To the best of our knowledge, this is the largest audio-visual dataset in the Persian language.
Since this is an audio-visual dataset, it could be used for a number of different applications such as automatic speech recognition, lip reading, speaker recognition, audio-visual speech synthesis, etc.
The organization of this paper is as follows. In section 2, we look at some of the most recent approaches and datasets. In section 3, we discuss the statistics of the data and the pipeline that we built for collecting the data. In section 4, we use baseline models to evaluate our dataset. Finally, we conclude the paper in section 5.
## II Related Works
In this section, we will review the two tasks of speech recognition and speaker recognition. Since we have introduced a new dataset and tested the performance of the baseline models on it, we will explore the related works from the perspective of the proposed methods and datasets.
### _Methods_
#### Ii-A1 Speech Recognition
**Lip reading.** There is a large body of research on automated lip reading. The advent of deep learning and the availability of large-scale datasets caused massive progress in the automation of this field. Here we discuss the works which focused on the sentence-level lip reading task with an open character set. For sentence-level recognition, the recent works can be divided into two approaches. The first approach utilizes Connectionist Temporal Classification (CTC). The model predicts frame-wise characters and marginalizes over all possible paths to find the optimal alignment between prediction and ground truth. An example based on this approach is LipNet [28]. LipNet was the first sentence-level lip reading model; it used spatiotemporal CNNs as the front-end, followed by a Bidirectional Gated Recurrent Unit (Bi-GRU) as the back-end of the architecture, and employed CTC loss to train the network. Shillingford _et al._[11] proposed the V2P model, closest to the previous work. The model outputs a sequence of phoneme distributions and uses CTC loss at train time. At inference time, a decoder based on finite-state transducers (FSTs) maps the sequence of phoneme distributions to a word sequence.
The second approach follows the sequence-to-sequence model, in which output characters are conditioned on each other, unlike in CTC-based models. WAS [9] was the first sequence-to-sequence model. The model consists of two key components, the image encoder Watch and the character decoder Spell, and has a unique dual attention mechanism. Chung _et al._[15] extended the WAS model to the MV-WAS model, which can decode visual sequences across all poses, and showed that it is possible to read lips in profile, although the performance is inferior to reading frontal faces. Peymanfard _et al._[29] suggested external viseme decoding, which divides the sequence-to-sequence model into two stages, video-to-viseme and viseme-to-character, respectively. The network outputs the character sequence given the visemes with the help of external text data. Afouras _et al._[30] proposed a visual transformer pooling (VTP) module that learns to track and aggregate the lip movement representation. This work also utilizes a WordPiece tokenizer (one of the subword algorithms) to learn the language more easily, resulting in time and memory efficiency.
Petridis _et al._[31] presented a hybrid architecture and used a joint decoder including RNN, attention, and CTC. Afouras _et al._[32] took a hybrid approach as well but suggested replacing RNN with a transformer. This work proposed a sequence-to-sequence and CTC on top of the transformer self-attention as the back-end. Both models use spatiotemporal CNNs, followed by ResNet as a common front-end. The extension of [31] is [33] that employed the conformer variant of the transformer to model both local and global dependencies of an image or audio sequence.
There is a trend in lip reading that benefits from the knowledge distillation area to improve performance by extracting information from the audio-only counterpart of datasets. In this approach, the Automatic Speech Recognition (ASR) model plays a role as the teacher for the Visual Speech Recognition (VSR) model as the student. The examples of this trend are [4, 5] and [6].
**Automatic Speech Recognition.** In ASR, self-supervision plays the main role in the state-of-the-art models. This approach has two steps, pre-training and fine-tuning. In the pre-training step, the model learns powerful speech representation from large amounts of unlabeled data through a pretext task. Then, the output of the previous step (speech representation) is fed to an acoustic model to be fine-tuned for a downstream task. In wav2vec [34], the pre-training step has two networks named encoder and context. The encoder network converts raw audio input to feature representations, and the context network converts feature representations to contextualized features. Both networks are convolutional networks. The objective of this model is defined by contrastive loss between future steps and distractors. The vq-wav2vec [35] is the improved version of the previous work, which benefited from the BERT model. This work is also composed of two stages. In the first stage, the continuous audio feature representations are discretized via Gumbel-Softmax or k-means clustering quantization methods. Thanks to this, the speech data takes a structure similar to language, and the BERT model can be applied. So in the second stage, the discretized representations are fed to a BERT model for contextualized representations. The wav2vec 2.0 [36]
showed better results by jointly learning the two-stage pre-training pipeline mentioned above. The model is pre-trained with a new contrastive loss between the contextualized output and the quantized representation. HuBERT [37] is very similar to wav2vec 2.0 in model architecture but different in the training process. First, HuBERT uses the cross-entropy loss for model optimization. Second, data discretization is done through a k-means algorithm instead of a quantization process. Third, the training process alternates between hidden unit discovery and target prediction. The model re-uses embeddings from the BERT encoder in the clustering step.
**Audio-Visual Speech Recognition.** The task of audio-visual speech recognition is lip reading in the presence of audio. Shi _et al._[38] presented the audio-visual counterpart of the HuBERT model as AV-HuBERT. Also, this work reported the performance of the visual-only HuBERT for the lip reading task. Works like [11, 31] and [32] addressed both the VSR and the AVSR problems, which were discussed above in the lip reading part.
#### Iii-A2 Speaker Recognition
Since this is a classification problem, the general models consist of two modules: the first is a feature extractor module and the second is a classification module. Depending on the input data, the feature extractor could be a convolutional network such as VGG-M or ResNet [25], or any other neural network suited to sequential data. For classification, a fully connected network is the leading choice, and the output size is the number of speakers in the dataset. Since the input is an audio file, the input to the network could be a spectrogram [25] or the output of a feature extraction algorithm such as MFCC, which is usually a 1D feature vector.
### _Datasets_
#### Iii-B1 Speech Recognition
As we know, the progress in deep learning is due to large datasets. The performance of lip reading systems is affected by position, illumination and speaker diversity, so the quality and quantity of the dataset are important. In recent years some datasets like LRS2 [13], LRS3-TED [10] and LSVSR [11] have tried to increase the number of utterances and video hours, and MV-LRS [15] has tried to cover more face angles. Table I shows three categories of datasets, named Audio-Visual Speech Recognition, Active Speaker Detection and Speaker Recognition. The table has general, transcription, video and speaker information columns for each dataset. We can see useful statistics in this information, such as the type of task run on the dataset, the source of the collected videos and the number of speakers in the videos. The language of most of the datasets in this table is English, but fortunately, datasets in other languages have been published in recent years, like CMLR [12], LRWR [18] and GLips [19]. Since 2013, no Persian dataset has been published for lip reading; we release Audio-Visual Speech Recognition in Persian (Arman-AV) with 89k utterances, 220 hours and 1760 speakers.
#### Iii-B2 Speaker Recognition
Most of the datasets in this field are collected under laboratory conditions and are not available to be used freely for research purposes. One of the first free datasets which were collected under the "wild" conditions is the Speakers in the Wild (SITW) [39] dataset. This dataset contains multimedia content of 299 speakers with hand-annotated speech samples. VoxCeleb [24] is a large-scale dataset that contains speech samples of over 1,000 celebrities with more than 100,000 utterances, extracted from YouTube videos; this dataset is gender-balanced (45% of the speakers are female), along with speakers with various accents, racial backgrounds, and ages. The dataset's creators included information about each speaker's gender and country of origin from Wikipedia. Videos contained in the dataset are recorded in a huge number of challenging environments such as quiet and noisy studio interviews, open-air stadiums, etc., and all are debased with real-world commotion, consisting of giggling, covering discourse, foundation chatter, etc. Also, another goal of the authors of the dataset was to propose a pipeline to create fully automated datasets. VoxCeleb 2 [25] which came shortly after VoxCeleb (and is very similar but much larger) is the largest dataset in this field which includes more than six thousand speakers that speak more than a million utterances. The three mentioned datasets are in English. CN-Celeb [27] is another large-scale dataset which contains over 130,000 utterances and is very similar to the VoxCeleb dataset except in the three following aspects: first, CN-Celeb covers 11 genres of speech such as singing, entertainment, etc.,
\begin{table}
\end{table} TABLE I: Statistics of recent datasets for audio-visual speech recognition, active speaker detection, and speaker recognition, with general, transcription, video and speaker information for each dataset.
which is more than VoxCeleb. Second, CN-Celeb concentrated on Chinese celebrities, containing videos of 1,000 celebrities. And last but not least, the dataset is not completely automated; human supervision was considered for the dataset. To the best of our knowledge, there is no suitable Persian dataset in this field.
## III Dataset
### _Automated Pipeline_
In this subsection, we describe the steps of data collection. Figure 1 shows an overview of these steps.
**Step 1: Video Collection.** We pick three kinds of videos for our dataset described in the following segments.
1. **Interviews.** Interviews and biographies are well-suited for audio-visual datasets. The main goal of these videos is to speak to artists, politicians or famous people in general. These kinds of videos are divided into two groups. In the first group, the shows include a host and narrator (which is not desirable in this case), and the second group consists of those that don't have them.
2. **Series and Movies.** These kinds of videos can be used if suitable preprocessing is applied to them. The main challenges are dealing with different directions and angles of the camera (when a speaker is talking but is not in the shot all of the time, unlike in interviews). Also, the entire video has a significant amount of silence or music. Another challenge is multi-speaker simultaneous speech. In general, this type of video is not ideal for collecting data because the percentage of redundant data in them is relatively high. In our investigation, less than 10% of the input data was considered appropriate.
3. **VODs.** Another idea to collect more data is searching for different keywords in user-generated content (UGC). Then we review the resulting videos and choose the top keywords for the dataset.
To obtain data, multiple search terms are used.
**Step 2: Scene Detection.** In this stage, we want to detect each scene. For this purpose, PySceneDetect1 is utilized. In this algorithm, the pixel values of two consecutive frames are subtracted to detect scene changes. This difference value is calculated in the HSV colour space.
Footnote 1: [https://github.com/Breakthrough/PySceneDetect](https://github.com/Breakthrough/PySceneDetect)
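For illustration, the content-based cut detection can be sketched with OpenCV as below; this is our own toy version of the idea (the threshold value and the function name are arbitrary), not the project's actual pipeline.

```python
import cv2
import numpy as np

def detect_scene_cuts(path, threshold=30.0):
    """Return frame indices where the mean HSV difference between consecutive
    frames exceeds `threshold` -- the same content-based idea PySceneDetect uses."""
    cap = cv2.VideoCapture(path)
    cuts, prev_hsv, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV).astype(np.float32)
        if prev_hsv is not None:
            # average per-pixel change over the H, S and V channels
            score = np.mean(cv2.absdiff(hsv, prev_hsv))
            if score > threshold:
                cuts.append(idx)
        prev_hsv, idx = hsv, idx + 1
    cap.release()
    return cuts

# cuts = detect_scene_cuts("interview.mp4")   # hypothetical input file
```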
**Step 3: Face Detection and Face Tracking.** Now each video is divided into smaller pieces called a scene. Up to this point, the video had a temporal clipping. Note that we need frames in which we see faces. So the S\({}^{3}\)FD model [40] and algorithm based on IoU [41] are used for face detection and face tracking respectively. Depending on the number of faces found in each part of the video, each video may be split into one or more videos or removed if there is no face.
Fig. 1: Data collection pipeline.
Fig. 2: Our dataset samples.
**Step 4: Active Speaker Detection.**
In this step, we automatically found the parts of the video that contain the speaker's face.
**Step 5: Speaker Diarization.** As stated, up to stage 4, the video parts including a speaker with a constant shooting angle have been chosen. But there is another potential challenge in interviews, a situation where the host and the guest talk at the same time. To address this problem, we can use Speaker Diarization2 based on UIS-RNN.
Footnote 2: [https://github.com/taylorlu/Speaker-Diarization](https://github.com/taylorlu/Speaker-Diarization)
**Step 6: Annotations.** As we mentioned before, the source of our dataset videos is Aparat. The videos on this video hosting website lack subtitles, so we utilize the commercial Aipaa3 ASR service to construct rough sentence-level transcripts of the videos.
Footnote 3: [https://aipaa.ir/](https://aipaa.ir/)
**Step 7: Face Recognition and Dataset Split.** We use ArcFace [42] and produce face feature embeddings for face recognition. This stage aims to create a dataset with a speaker-independent property. A dataset like this is beneficial for applications like lip reading. It should be noted that the results of the face recognition algorithm were checked manually, and its errors were corrected.
### _statistics_
First of all, the dataset will be available as mp4 face-cropped videos with 224×224 pixels resolution and a frame rate of 25 FPS. We leveraged Aparat (a Persian video-sharing website like YouTube) as a source for our dataset. The collected videos came from different channels and programs, which means our dataset contains various kinds of challenging environments and circumstances. The outcome of our work comprises 220 hours of video, which contains over 89 thousand utterances and 2.5 million words from 1760 celebrities. We split our dataset into train/validation and test sections, which contain 211 and 9 hours of data, respectively. Figure 2 shows some examples of the video samples. In Table I, in addition to the statistics of our dataset, we have also presented the statistics of other recent datasets.
## IV Viseme Analysis
The viseme is a significant challenge in the lip reading problem. A viseme is a set of phonemes spoken with the same lip shape, and the viseme inventory is not exactly known for any language. This mapping may even differ across speakers. However, some research has addressed the issue and proposed viseme models for languages such as English and Persian. In this study, we propose a method to automatically identify the Persian visemes on large-scale data, which one can apply to other languages by only having an appropriate dataset of the language of interest. Some works have addressed the visemes for the Persian language [43]. Collecting a set of laboratory datasets and considering the similarities in the articulation of the various phonemes, the authors in [43] proposed a categorization method for phonemes to identify the visemes in the language. In this study, combining deep learning to automatically extract the features with clustering, we introduce a method to categorize the phonemes. Then, we compared a viseme-based lip reading model using these visemes with one using the visemes proposed in previous works. Employing the Kaldi toolkit, we first determined the phoneme-level transcription for each sample in this experiment. In this transcription, the phoneme spoken by the speaker is known for every 30 milliseconds. Then, we employed the pre-trained AV-HuBERT model to obtain the visual embedding for each phoneme, which was used as a feature for our clustering with the k-means algorithm. In this step, we expected the phonemes having the same visual features, like the same lip movements and forms, to be grouped in the same category. We used the method in [29] to train our model. Employing the two models, video-to-viseme and viseme-to-character, and finally merging the two, we obtained the lip reading model. One benefit of such a method is that one can use textual data to improve the quality of the lip reading model. However, the model requires a proper grapheme-to-phoneme (G2P) model to convert the Persian text into phonemes and then convert them into visemes, for which we used an appropriate G2P. For the video-to-viseme model dataset, we used the phoneme-level transcription to convert the phonemes into visemes with the help of the corresponding mapping. We also employed about 1 million utterances for the viseme-to-character conversion dataset. In this regard, we first employed a G2P model to obtain the phoneme sequence for each utterance and then used the phoneme-to-viseme mapping to extract the viseme sequence for each one. To train the viseme-to-character model, we used the viseme sequence as the input, whereas the Persian text was considered the desired output. To implement the video-to-viseme model, we used a seq2seq model with an attention mechanism and a two-layer GRU. Furthermore, we employed the same architecture with fewer parameters and a different input type to train our viseme-to-character model. As seen in Table II, we obtained higher accuracy in the lip reading problem by employing the newly proposed model to identify the Persian visemes, which we believe provides a more appropriate phoneme-to-viseme mapping. Furthermore, one can use the method to identify visemes in any given language. All the data and their related transcriptions are publicly available.
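The clustering step described above can be sketched as follows. This is our own illustration with placeholder inputs; in the real experiment the rows would be AV-HuBERT visual embeddings aligned to the Kaldi phoneme segments, and 10 clusters are used (cf. Table III).

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical inputs: `embeddings` is an (N, 768) array of AV-HuBERT visual features,
# one row per phoneme occurrence; `phones` holds the aligned phoneme label of each row.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5000, 768))          # placeholder for real features
phones = rng.choice(["/B/", "/P/", "/M/", "/S/", "/AA/"], size=5000)

# Average the occurrences of each phoneme, then cluster phonemes into viseme groups.
labels = sorted(set(phones))
centroids = np.stack([embeddings[phones == p].mean(axis=0) for p in labels])
cluster_ids = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(centroids)

visemes = {}
for phone, cid in zip(labels, cluster_ids):
    visemes.setdefault(cid, []).append(phone)
print(visemes)    # phoneme groups analogous to Table III (10 clusters on the real data)
```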
## V Experiments
Here, the baseline methods are explained. We demonstrate how well they work with our dataset and provide information on training setup. The image sequence and audio sequence inputs are denoted as \(x_{1:T}^{v}\) and \(x_{1:T}^{a}\) in all baselines, respectively.
### _Automatic Speech Recognition (ASR)_
The present section reviews the experiments conducted on speech recognition using the produced dataset. Audio data alone was used in the first experiment to train Persian speech recognition. To this end, the pre-trained AV-HuBERT [44] model uses \(x_{1:T}^{a}\) to extract a 768-dimensional feature vector \(e_{1:T}^{a}\) for each 40ms of audio input. In this experiment, we used the AV-HuBERT base model pre-trained on English-language data for feature extraction. The output embedding in this
network for each window is a representation of that audio segment, including speaker and speech features. We expect the model to use speech-related features during the learning process. As such, we expect features corresponding to the speaker to be implicitly ignored, as they are irrelevant to our purpose. One further consideration during the experiment is whether or not a model pre-trained via the English language data can be used for the Persian language.
In the next step, the \(e_{1:T}^{a}\) vector was fed to a two-stacked Bidirectional LSTM, resulting in \(o_{1:T}^{a}\). This network was trained using CTC. We assume \(y=(y_{1},y_{2},...,y_{n})\) is a transcription and \(\pi\) denotes the alignment paths that the collapsing function \(B\) maps to \(y\). We calculated the \(p_{t}^{CTC}\) probability over 52 characters by applying Softmax on \(o_{1:T}^{a}\). As our CTC loss \(L_{CTC}\) decreases, the network begins to learn.
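A minimal PyTorch sketch of this audio-only baseline is given below. It is our own reconstruction from the description (the 768-dimensional features, the two-layer Bi-LSTM and the 52-character inventory follow the text); the hidden size, the blank index and the other details are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioCTCBaseline(nn.Module):
    """Two-layer bidirectional LSTM over 768-d AV-HuBERT audio features + CTC head."""
    def __init__(self, feat_dim=768, hidden=512, n_chars=52):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2*hidden, n_chars + 1)      # +1 for the CTC blank (assumed index 52)

    def forward(self, feats):                             # feats: (B, T, 768)
        out, _ = self.blstm(feats)
        return F.log_softmax(self.proj(out), dim=-1)      # (B, T, n_chars + 1)

model = AudioCTCBaseline()
ctc = nn.CTCLoss(blank=52, zero_infinity=True)

feats = torch.randn(4, 100, 768)                          # dummy batch of feature sequences
targets = torch.randint(0, 52, (4, 20))                   # dummy character indices
log_probs = model(feats).permute(1, 0, 2)                 # CTCLoss expects (T, B, C)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 100, dtype=torch.long),
           target_lengths=torch.full((4,), 20, dtype=torch.long))
loss.backward()
```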
As evident in Table IV, using the proposed architecture, we achieved an 84.66% character accuracy rate (CAR) using only 211 hours of Persian speech data, which is a suitable value considering the data volume. Moreover, the experiment has demonstrated that the AV-HuBERT model, trained on data in English, could extract a proper embedding for speech data in Persian.
### _Audio Visual Speech Recognition (AVSR)_
During the second experiment, we trained an audio-visual speech recognition model. Image pre-processing was first used in this model to crop only the mouth ROI. Then the AV-HuBERT model was applied to each cropped video frame to obtain a 768-dimensional vector. In this section, an architecture similar to the previous stage was used, but we also gave the visual features as input to the model. Moreover, we used a smaller output size for the visual features, as they presumably contain less information than the audio. This model is presented in Figure
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Viseme Mapping & CER (Greedy decoding) & WER (Greedy decoding) & CER (Beam search decoding) & WER (Beam search decoding) \\ \hline Traditional [43] & \%54.04 & **\%76.32** & \%52.19 & \%73.24 \\ AV-HuBERT + K-means (Ours) & **\%51.24** & \%76.71 & **\%48.54** & **\%72.83** \\ \hline \hline \end{tabular}
\end{table} TABLE II: Lip reading results on our dataset
Fig. 3: 2D and 3D projection of AV-HuBERT embeddings for Persian phonemes using our dataset. a) 3D projection of the phoneme embeddings using PCA. b) 2D projection of the phoneme embeddings using t-SNE.
\begin{table}
\begin{tabular}{l l} \hline \hline cluster id & phonemes \\ \hline \hline
1 & /F/ /N/ \\ \hline
2 & /AA/ / KCH/ \\ \hline
3 & /B/ / P/ /M/ \\ \hline
4 & /ZH/ \\ \hline
5 & /CH/ / JH/ / SH/ \\ \hline
6 & /O/ / U/ \\ \hline
7 & /S/ /Z/ /D/ T/ \\ \hline
8 & /A/ /I/ / H/ / GH/ \\ \hline
9 & /G/ / K/ /E/ /L/ / I/ Y/ / N/ R/ \\ \hline
10 & /si/ \\ \hline \hline \end{tabular}
\end{table} TABLE III: Persian visemes using clustering and AV-HuBERT
4. The input image sequence \(x_{1:T}^{v}\) is fed to the pre-trained single-modal visual HuBERT to obtain a 768-dimensional feature vector \(e_{1:T}^{v}\). The 768-dimensional feature vector \(e_{1:T}^{a}\) is concurrently obtained from the input audio sequence \(x_{1:T}^{a}\) through the pre-trained single-modal audio HuBERT. In the next step, two separate two-stacked Bi-LSTMs are used to map \(e_{1:T}^{v}\) and \(e_{1:T}^{a}\) to \(o_{1:T}^{v}\) and \(o_{1:T}^{a}\), respectively. Then two 1-dimensional convolutional filters are applied to \(c_{t}\), the concatenation of \(o_{t}^{v}\) and \(o_{t}^{a}\). The output of this step is \(f_{t}\). As mentioned above, CTC loss is used to train the model. First, we calculated the \(p_{t}^{CTC}\) probability. This probability is obtained by applying Softmax and a linear projection on \(f_{t}\) (where \(W\in\mathrm{I\!R}^{52\times(256+1024)}\) and \(b\in\mathrm{I\!R}^{52}\)). Then, the model is optimized by minimizing \(L_{CTC}\).
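The fusion architecture can be sketched as follows. This is our own reconstruction: the per-modality hidden sizes follow the stated 256- and 1024-dimensional outputs, while the convolution kernel size and other details are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVFusionCTC(nn.Module):
    """Sketch of the AVSR baseline: separate Bi-LSTMs per modality, concatenation,
    1-D convolutions over time, and a linear CTC head of size 52 (+ blank)."""
    def __init__(self, feat_dim=768, n_chars=52):
        super().__init__()
        self.v_lstm = nn.LSTM(feat_dim, 128, num_layers=2, bidirectional=True, batch_first=True)
        self.a_lstm = nn.LSTM(feat_dim, 512, num_layers=2, bidirectional=True, batch_first=True)
        # "two 1-dimensional convolutional filters" -- the kernel size is our assumption
        self.conv = nn.Sequential(
            nn.Conv1d(256 + 1024, 256 + 1024, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(256 + 1024, 256 + 1024, kernel_size=3, padding=1), nn.ReLU())
        self.proj = nn.Linear(256 + 1024, n_chars + 1)

    def forward(self, video_feats, audio_feats):           # both (B, T, 768)
        o_v, _ = self.v_lstm(video_feats)                  # (B, T, 256)
        o_a, _ = self.a_lstm(audio_feats)                  # (B, T, 1024)
        c = torch.cat([o_v, o_a], dim=-1)                  # (B, T, 1280)
        f = self.conv(c.transpose(1, 2)).transpose(1, 2)   # Conv1d operates on (B, C, T)
        return F.log_softmax(self.proj(f), dim=-1)

out = AVFusionCTC()(torch.randn(2, 50, 768), torch.randn(2, 50, 768))
print(out.shape)                                           # torch.Size([2, 50, 53])
```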
This model was trained with the same mechanism as the previous one. As visible in table IV, this model, which uses both audio and visual features, achieves higher accuracy than the ASR model. The experiment also shows that the embeddings obtained from the pre-trained AV-HuBERT model, trained on English data, can be used for Farsi, and reasonably good lip-movement features can be extracted from them.
### _Speaker recognition_
Another problem we tested using the generated data was speaker recognition. We first selected the speakers with more than five samples and randomly chose five samples for each. For each speaker, four samples were used as references and the remaining one for testing. We then used the TitaNet [45] pre-trained model, which was trained on several datasets (Voxceleb [24], Voxceleb2 [25], the NIST SRE portion of datasets from 2004-2008 (LDC2009E100), Switchboard-Cellular1 and Switchboard-Cellular2 [46], Fisher [47], and Librispeech [48]). We fed \(x_{1:T}^{a_{t}}\) as the test sample and \(x_{1:T}^{a_{0}},\ldots,x_{1:T}^{a_{3}}\) as references to the TitaNet [45] model, which produces a 192-dimensional embedding for each audio sample, e.g. \(v_{1:T}^{a_{t}}\). In the next step, the test embedding vector is compared with every reference of every speaker through cosine similarity (here \(i\in\{0,\ldots,3\}\)), and the similarities are averaged over the references. Each test sample is then assigned to the speaker with the highest average cosine similarity. Using this method, we achieved an accuracy of 84.5%.
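The scoring step can be summarized by the following sketch (not the actual TitaNet pipeline); it assumes each utterance has already been mapped to a 192-dimensional embedding and assigns a test sample to the speaker with the highest average cosine similarity over that speaker's four references.

```python
# Enrollment/verification sketch: score a test embedding against each speaker by the average cosine
# similarity to that speaker's reference embeddings, then pick the best-scoring speaker.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def identify(test_emb, references):
    """references: dict speaker_id -> list of reference embeddings (four per speaker here)."""
    scores = {
        spk: np.mean([cosine(test_emb, ref) for ref in refs])
        for spk, refs in references.items()
    }
    return max(scores, key=scores.get)          # speaker with the highest average similarity

# toy usage with random vectors standing in for 192-d TitaNet embeddings
rng = np.random.default_rng(0)
refs = {f"spk{k}": [rng.normal(size=192) for _ in range(4)] for k in range(3)}
test = refs["spk1"][0] + 0.1 * rng.normal(size=192)
print(identify(test, refs))                     # expected to print "spk1"
```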
## VI Conclusion
Arman-AV is a large-scale multipurpose dataset for Persian that can be used for various tasks such as lip reading, automatic speech recognition, audio-visual speech recognition, and speaker recognition, and it is publicly available. The dataset consists of almost 220 hours of video data (about 89,000 samples) from 1760 speakers. Although the main goal of collecting this dataset was lip reading, we report the results of a baseline method for each of the mentioned tasks. According to the obtained results, the character error rate of speech recognition was relatively reduced by 14.7% by using visual features. We also proposed a method for obtaining Persian visemes; using these visemes, higher lip-reading accuracy was achieved. In addition, we provide various analyses of the dataset, such as the age and gender of speakers and the audio-video synchronization of each segment. All of this metadata is released along with the dataset. The dataset can also be used for other tasks such as face recognition, face verification, and audio-visual speaker recognition, which we did not cover in this paper and which can be considered future work on the dataset.
## VII Acknowledgement
The project was mainly supported by Arman Rayan Sharif company, an AI company in Iran.
|
2303.07291
|
Anyon condensation in the string-net models
|
We study condensation of abelian bosons in string-net models, by constructing
a family of Hamiltonians that can be tuned through any such transition. We show
that these Hamiltonians admit two exactly solvable, string-net limits: one deep
in the uncondensed phase, described by an initial, uncondensed string net
Hamiltonian, and one deep in the condensed phase, described by a final,
condensed string net model. We give a systematic description of the condensed
string net model in terms of the uncondensed string net and the data associated
with the condensing abelian bosons. Specifically, if the uncondensed string net
is described by a fusion category $\mathcal{C}$, we show how the string labels
and fusion data of the fusion category $\mathcal{\tilde{C}}$ describing the
condensed string net can be obtained from that of $\mathcal{C}$ and the data
describing the string operators that create the condensing boson. This
construction generalizes previous approaches to anyon condensation in string
nets, by allowing the condensation of arbitrary abelian bosons, including
chiral bosons in string nets constructed from (for example) Chern-Simons
theories, which describe time-reversal invariant bilayer states. This gives a
method for obtaining the full data for string nets without explicit
time-reversal symmetry from such bilayer models. We illustrate our approach
with several examples.
|
Chien-Hung Lin, Fiona J. Burnell
|
2023-03-13T17:11:36Z
|
http://arxiv.org/abs/2303.07291v1
|
# Anyon condensation in the string-net models
###### Abstract
We study condensation of abelian bosons in string-net models, by constructing a family of Hamiltonians that can be tuned through any such transition. We show that these Hamiltonians admit two exactly solvable, string-net limits: one deep in the uncondensed phase, described by an initial, uncondensed string net Hamiltonian, and one deep in the condensed phase, described by a final, condensed string net model. We give a systematic description of the condensed string net model in terms of the uncondensed string net and the data associated with the condensing abelian bosons. Specifically, if the uncondensed string net is described by a fusion category \(\mathcal{C}\), we show how the string labels and fusion data of the fusion category \(\bar{\mathcal{C}}\) describing the condensed string net can be obtained from that of \(\mathcal{C}\) and the data describing the string operators that create the condensing boson. This construction generalizes previous approaches to anyon condensation in string nets, by allowing the condensation of arbitrary abelian bosons, including chiral bosons in string nets constructed from (for example) Chern-Simons theories, which describe time-reversal invariant bilayer states. This gives a method for obtaining the full data for string nets without explicit time-reversal symmetry from such bilayer models. We illustrate our approach with several examples.
## I Introduction
The universal, low-energy properties of gapped phases of quantum matter are described using two principles: symmetry and topological order. Considerable effort in recent years has gone into expanding our understanding of the resulting genealogy of quantum phases that cannot be described by the Landau paradigm of spontaneously broken symmetries, unveiling many new intriguing possibilities for strongly interacting systems. Among the earliest notable exceptions to Landau's framework are topologically ordered phases in 2+1 dimensions [1; 2; 3; 4], which harbor emergent point-like particles (known as anyons) with fractional statistics.
The long-ranged properties of topologically ordered phases are captured by a mathematical structure known as a unitary modular tensor category (UMTC) [5; 6; 7; 8; 9], which describes the rules governing fusion and braiding of point-like excitations. Thus our knowledge of the possible topologically ordered phases - much like our knowledge of the possible symmetry groups - is quite complete. Given this, it is natural to ask which phases can, in principle, be related by second-order phase transitions.
In the case of topological order, this question is closely related to the question of which topological orders are related by so-called anyon condensation transitions (see [10] for a brief review). Such transitions were first studied in the context of conformal field theory[11; 12; 13; 14; 15; 16; 17; 18; 19], and they have been discussed for general UMTC's in the mathematical literature [20; 21; 22; 23; 24; 25; 26; 27]. Refs. [28; 29; 30; 31; 32] described how, in the context of 2+1 dimensional topologically ordered phases, these transitions physically correspond to processes in which emergent bosons condense; the topological order obtained is then a direct consequence of the new, condensed, vacuum.
Anyon condensation has proven useful in understanding not only the structure of topological phases[33; 34; 35; 36; 37; 38], but also when they admit gappable boundaries[26; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51], and how to create non-abelian topological orders[52; 53; 54; 55; 56; 57; 58; 59; 60; 61] or topological defects[62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72] from abelian ones. Moreover, the possibility of condensing anyons to change a topological order also opens up the door for novel second-order critical points which may not have analogues in conventional symmetry-breaking transitions[73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83]. Recently, it has been observed that anyon condensation can also be used to study certain dynamical processes in open quantum systems and quantum codes [84; 85].
In studying anyon condensation, it is useful to have a lattice Hamiltonian that can be tuned between the two phases in question. This establishes beyond a doubt that a direct transition between the two topological orders can occur, and enables a variety of analytical and numerical approaches to be used to study both the corresponding phase transitions, and verify the above description of the condensed phase [36; 37; 73; 74; 77; 86; 87; 88]. Lattice models of anyon condensation are also useful for constructing Hamiltonians realizing symmetry-enriched topological orders [58; 59].
The present work focuses on a family of 2D topological orders known as Drinfeld centers, which are believed to be the most general class of (bosonic) topological orders compatible with gapped boundaries [26; 40; 90]. These can be realized by commuting projector lattice models known as string nets [91; 92; 93; 94; 95; 96; 40]. The string net construction begins not from the UMTC describing the anyon model, but from a pivotal fusion category \(\mathcal{C}\), which describes the Hamiltonian and ground states. The full topological order (i.e. UMTC) is exhibited by studying so-called string operators, which realize point-like anyonic excitations at their end-points. A number of works have previously studied anyon condensation in these models [73; 74; 76; 77; 86; 88; 89]. However, the literature so far has focused primarily on transitions which condense a particular type of boson within these string net models, corresponding to excitations of only the plaquette
term in the string net Hamiltonian. For such condensation processes a general prescription exists[73] to modify the string-net Hamiltonian by adding a term which can drive anyon condensation; in the limit that this term is very large the Hamiltonian reduces to a new string net model realizing the topological order of the condensed phase, constructed from a sub-category of the original fusion category \(\mathcal{C}\). In other words, the data for the string net model of the condensed phase follows straightforwardly from that of the uncondensed phase. (The relation between the string operators of the condensed and uncondensed phases, however, is not quite so straightforward).
Here, we will describe a general formalism that describes condensation of arbitrary _abelian bosons_ in string-net models.1 First, we describe an extended version of the string net construction, obtained by extending the Hilbert space using an approach similar to that of Ref. [97], albeit tailored to simplify the description of the condensed phase. Within this extended Hilbert space, we construct a family of model Hamiltonians that can be tuned through a transition involving the condensation of any abelian boson, and outline the topological order expected for the resulting condensed phase. Our modified model has the advantage that deep in the condensed phase, a general prescription can be given to identify both the low-energy effective Hilbert space and the ground state. We describe how the Hilbert space of the condensed phase can be described by a new, effective, set of string labels (i.e. a new effective fusion category \(\tilde{\mathcal{C}}\)), whose relationship to the original label set can be calculated explicitly. We further show that the ground state of the condensed phase is also a string-net ground state, described by the data of the new fusion category \(\tilde{\mathcal{C}}\). In this regime, our Hamiltonian acts like the regular string net Hamiltonian on the new label set. Moreover, we obtain an explicit expression for the fusion data of \(\tilde{\mathcal{C}}\) in terms of \(\mathcal{C}\) and the condensing bosons.
Footnote 1: An abelian boson is simply a boson that has a unique fusion outcome with any other anyon in the theory.
The relationship between the topological order of the uncondensed and condensed anyon models can be characterized by the fate of the anyons of the original topological order in this condensed vacuum. First, anyons that braid non-trivially with any of the condensing bosons become confined, and are absent from the topological order of the condensed theory. Second, two anyons that are related by fusion with one of the condensing bosons must be identified in the condensed phase, meaning that topologically speaking, they correspond to the same excitation. Finally, certain non-abelian anyons can split into multiple distinct anyon types after condensation. For condensation of abelian bosons, these relationships are well known in conformal field theory, where they go by the name of central extensions[11], and the \(S\) and \(T\) matrices of the final topological orders can be computed explicitly. However, as noted by Ref. [32], the task of computing the full topological data - namely \(F\) and \(R\) matrices - is significantly more challenging. Our approach does not easily give us access to the full topological data of the anyon model - indeed, though the confinement, identification, and splitting of anyons in the final topological order is apparent from the form of our Hamiltonian, in our approach the associated topological data is inferred only indirectly, through the emergence of the new string net data \(\tilde{\mathcal{C}}\), which in turn implies a new set of anyon- creating string operators. However, we show that our approach to string net condensation does allow one to straightforwardly compute the data of the fusion category underlying the condensed phase, as we demonstrate explicitly in a number of examples.
An interesting application of the condensation transitions studied here is that they can take a string net with an explicit time-reversal symmetry (of the type described in the original work by Levin and Wen[91]) into one that does not admit a naive time-reversal transformation (described in detail in Ref. [96]). One example that we will discuss in detail is the transition from \(\mathrm{SU(2)_{4}\times\overline{\mathrm{SU(2)}}_{4}}\) to \(\mathrm{U(1)_{3}\times\overline{\mathrm{SU(2)}}_{4}}\), for which \(\mathcal{C}=SU(2)_{4}\) has all of the symmetries assumed by Ref. [91], while \(\tilde{\mathcal{C}}=\mathrm{TY}_{3}\) does not.
The paper is organized as follows. In Sec. II, we review some basics of general string-net ground states, and introduce the extended string-net Hilbert space that we use to study anyon condensation. In Sec. III, we introduce a family of modified string net Hamiltonians, which can be tuned across a transition condensing an arbitrary abelian boson. We describe the effective Hilbert space deep in the condensed phase in Sec. IV, where we discuss how the string types in \(\tilde{\mathcal{C}}\) are related to those in \(\mathcal{C}\). In Sec. V, we study the condensed ground state, and show that it is indeed a string net. In particular, we show how the new ground state allows one to describe the fusion data of \(\tilde{\mathcal{C}}\), verify that this fusion data is indeed consistent, and argue that the full Hamiltonian projected into the condensed Hilbert space is indeed the associated string net Hamiltonian. We illustrate our construction with concrete examples in Sec. VI. A number of technical details are elaborated on in the appendices.
## II Extended string-net models
In this section, we introduce the extended string-net models that we will use for our models of anyon condensation.
### Review: Generalized string net models
We begin by reviewing the string-net construction. Here we use the generalized string-net construction of
Ref. [96] (see also Refs. [40; 89; 93]), since the symmetries assumed in the original construction[91] are not always present in the condensed phase.
We defer a discussion of string-net Hamiltonians to Sec. III.1, and here focus on the string-net Hilbert space, together with its ground states and certain excited states.
#### ii.1.1 The string-net ground state
The string net model consists of a Hamiltonian whose ground state(s) obey certain special properties, which we now describe. These string-net ground states live in a Hilbert space of _string-net configurations_, each of which is defined on an oriented, trivalent graph. (Though the string-net Hamiltonians are defined on the honeycomb lattice, this lattice structure is not necessary to describe the string-net ground states.) Throughout this work, we use the convention that _all strings are oriented upward_, i.e. the orientation vector has positive projection onto the \(\hat{y}\) direction. We therefore require this projection to be non-zero, such that our strings cannot have horizontal tangent vectors.
The string net configuration is obtained by assigning to each edge \(i\) a label (or string type) \(a_{i}\). The combinations of string types \(\{(a,b;c)\}\) that are allowed to meet at a vertex are dictated by a set of branching rules, i.e. if \((a,b;c)\) is among the branching rules, then the corresponding trivalent vertices are allowed.
[Eqs. (1)-(6), which are given graphically in the original, are not reproduced here: the allowed trivalent vertices for \((a,b;c)\), the local rules (4a)-(4d) relating string-net amplitudes through the quantum dimensions \(d_{a}\) and the \(F\)-symbols \(F^{abc}_{def}\), the pentagon identity (5), and the consistency constraints (6) on the \(F\)-symbols.]
where \((F_{d}^{abc})^{-1}\) is the matrix inverse of \((F_{d}^{abc})\), whose matrix elements are \((F_{d}^{abc})_{ef}\equiv F_{def}^{abc}\).
The conditions (6) also imply that the quantum dimensions obey:
\[d_{a}d_{b}=\sum_{c}d_{c} \tag{7}\]
where the sum runs over all values of \(c\) that satisfy the branching rules.
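For instance (an illustration, not taken from the original text), a theory with a single nontrivial self-dual string \(\tau\) and branching rules \((\tau,\tau;0)\) and \((\tau,\tau;\tau)\) must satisfy
\[d_{\tau}d_{\tau}=d_{0}+d_{\tau}=1+d_{\tau}\quad\Longrightarrow\quad d_{\tau}=\frac{1+\sqrt{5}}{2},\]
the familiar Fibonacci quantum dimension.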
Local unitary transformations of the string net wave function result in new coefficients \(\{\hat{F},\hat{d}\}\), which are related to the original coefficients \(\{F,d\}\) via the gauge transformation:
\[\begin{split}\hat{F}_{def}^{abc}&=F_{def}^{abc} \cdot\frac{f_{e}^{ab}f_{d}^{ec}}{f_{f}^{bc}f_{d}^{af}}\\ \hat{d}_{a}&=d_{a}\.\end{split} \tag{8}\]
Here \(f_{c}^{ab}\) parametrize the local unitary transformation; they are complex functions defined on upward vertices, with the downward vertices transformed by \(1/f_{c}^{ab}\). To preserve the constraints listed above, we require
\[|f_{c}^{ab}|=1,f_{c}^{ab}=1\ \text{if}\ a\ \text{or}\ b=0. \tag{9}\]
It is convenient to note that the local rules (4) imply the following identities:
[Graphical identities (10a) and (10b), not reproduced here; their coefficients are given in (11).]
with
\[[F_{cd}^{ab}]_{ef} =(F_{f}^{ceb})^{-1}_{da}\frac{d_{e}d_{f}}{d_{d}d_{a}} \tag{11a}\] \[=F_{fad}^{ceb}\frac{d_{e}d_{f}}{d_{a}d_{d}} \tag{11b}\]
#### ii.2.2 Abelian string operators
Next, we review the string operators that create point-like anyon excitations when acting on the ground state. Here we focus on the case where these anyons are _abelian bosons_, since these are the excitations we wish to condense. (For a discussion of more general string operators, see Ref. [96].) Recall that an abelian anyon is defined by the fact that it has a unique fusion product with any other anyon in the theory; it is a boson if it has trivial statistics with itself.
To create a particle-antiparticle pair \((a,\bar{a})\) at two points in our lattice, we act on the string-net ground state with a string operator \(W_{a}(P)\) along an oriented path \(P\). This creates \(a\) at the final endpoint of \(P\), and its antiparticle \(\bar{a}\) at the initial endpoint. On a given string-net state \(\langle X|\), we depict this action by drawing an \(a\)-labeled string along the path \(P\)_under_ the string-net graph. The string label \(a\) specifies both a choice of one or more string types, and some extra data required to resolve crossings between the path \(P\) and the string-net graph.
If \(a\equiv\phi\) is an abelian anyon, the label \(\phi\) corresponds to a single string type \(s\), meaning that in regions where \(P\) does not cross any edges of the string-net, we replace the label \(\phi\) with \(s\) on upward-oriented segments of \(P\), and \(\bar{s}\) on downward-oriented segments of \(P\). Further, \(s\) (and \(\bar{s}\)) must have a unique fusion product with all other string types, meaning that for each \(a\), the branching rules contain \((a,s;a^{\prime})\) (and also \((s,a;a^{\prime})\)) for only one \(a^{\prime}\), which we will sometimes denote as \(a^{\prime}\equiv a\times s\). It follows that
\[d_{s}=d_{\bar{s}}=1, \tag{12}\]
and thus \(d_{a}=d_{a^{\prime}}\) by Eq. (7). In this case the coefficients associated with the moves (4b) and (4c) are unity.
For abelian anyons, the crossings are resolved using the rules:
[Eqs. (13)-(15), given graphically in the original, are not reproduced here: crossings between the path \(P\) and an \(a\)-labeled edge of the string net are resolved with factors \(w_{\phi}(a)\) or \(\bar{w}_{\phi}(a)\), subject to consistency conditions (15) relating \(w_{\phi}\) to the \(F\)-symbols,]
where \(x^{\prime}=x\times s\) for \(x=a,b,c\), and \((a,b;c)\) is allowed by the branching rules. Given a set of \(F\)-symbols satisfying (5,6) and a choice of the string \(s\), in general we will find multiple solutions to Eqs. (15) for \(w_{\phi}\). We label these by \(m\), and the corresponding anyon by \(\phi=(s,m)\) where \(s\) is the string type created by the corresponding string operator \(W_{\phi}\) and \(m\) labels distinct solutions for a given \(s\).
For example, the \(\mathbb{Z}_{N}\) string-net model has \(N\) string types \(a\in\{0,1,\ldots,N-1\}\) with \(\mathbb{Z}_{N}\) branching rules \((a,b;c=a+b(\text{ mod }N))\). There are \(N\) distinct solutions to (5)[98; 99]
\[F(a,b,c)=e^{\frac{2\pi ip}{N^{2}}a(b+c-[b+c]_{N})}. \tag{17}\]
labeled by \(p=0,\ldots,N-1\). The arguments \(a,b,c\) take values in \(0,\ldots,N-1\) and \([b+c]_{N}\) denotes \(b+c\) (mod \(N\)) with values also taken in \(0,\ldots,N-1\). Each \(\mathbb{Z}_{N}\) string-net model has \(N^{2}\) topologically distinct quasiparticle excitations labeled by \(\phi=(s,m)\) where \(s,m=0,1,\ldots,N-1\). The corresponding string operators \(W_{\phi}\) are defined by the string parameters
\[w_{\phi}(a)=e^{2\pi i\left(\frac{psa}{N^{2}}+\frac{ma}{N}\right)}. \tag{18}\]
The braiding statistics of quasiparticles can be extracted from the commutation algebra of the corresponding string operators (see Refs. [91; 96] for details). Specifically, the exchange statistics of \(\phi=(s,m)\) is
\[e^{i\theta_{\phi}}=w_{\phi}(s). \tag{19}\]
Thus self-bosons satisfy
\[w_{\phi}(s)=1. \tag{20}\]
If \(\phi=(s,m)\) and \(\chi=(r,n)\) are two abelian bosons that we wish to condense simultaneously, then they must have trivial braiding. This requires that[96]
\[w_{\phi}(r)w_{\chi}(s)=1. \tag{21}\]
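These conditions are easy to check numerically. The sketch below does so for a small \(\mathbb{Z}_{N}\) example; the closed forms used for \(F\) and \(w_{(s,m)}\) follow our reading of Eqs. (17)-(18) above (which are partially garbled in this copy), so they should be treated as assumptions rather than the paper's verbatim formulas.

```python
# Numerical sanity check of the Z_N data quoted above (a sketch, under the stated assumptions).
import numpy as np
from itertools import product

N, p = 4, 1                                    # Z_4 string net with cocycle label p

def F(a, b, c):                                # Eq. (17): F(a,b,c) = exp(2*pi*i*p*a*(b+c-[b+c]_N)/N^2)
    return np.exp(2j * np.pi * p * a * (b + c - (b + c) % N) / N**2)

def w(s, m, a):                                # Eq. (18): string parameter of the anyon phi = (s, m)
    return np.exp(2j * np.pi * (p * s * a / N**2 + m * a / N))

# Pentagon identity (5) reduces, for these abelian F-symbols, to the 3-cocycle condition:
for a, b, c, d in product(range(N), repeat=4):
    lhs = F(b, c, d) * F(a, (b + c) % N, d) * F(a, b, c)
    rhs = F((a + b) % N, c, d) * F(a, b, (c + d) % N)
    assert np.isclose(lhs, rhs)

# Self-statistics theta_phi = w_phi(s) (Eq. 19); self-bosons satisfy w_phi(s) = 1 (Eq. 20).
bosons = [(s, m) for s, m in product(range(N), repeat=2) if np.isclose(w(s, m, s), 1)]
print("self-bosons:", bosons)

# Bosons that can condense together must braid trivially: w_phi(r) * w_chi(s) = 1 (Eq. 21).
compatible = [(f, g) for f in bosons for g in bosons
              if np.isclose(w(*f, g[0]) * w(*g, f[0]), 1)]
print("mutually transparent boson pairs:", compatible)
```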
### Extended string-net model
In the usual string-net construction, if \(s\neq 0\), \(W_{\phi}(P)\) creates states outside of the string net Hilbert space, since near the endpoints of \(P\) there is no way to fuse an \(s\)-labeled string into the string net graph without creating vertices that violate the branching rules. When we are only interested in the topological nature of the excitations, the resulting ambiguity in the action of \(W_{\phi}(P)\) near the endpoints is unimportant, since it affects only the immediate vicinity of the excitation and hence cannot impact its topological properties. In order to condense \(\phi\), however, we require a more careful treatment of these endpoints. We achieve this by extending the string-net Hilbert space.
#### ii.2.1 Extended string-net Hilbert space
The extended string-net Hilbert space, \(\mathcal{H}_{\{\phi\}}\), is defined with respect to a set \(\{\phi\}\) of abelian bosons that we wish to condense. Since every finite abelian group is isomorphic to a direct product of cyclic groups, we can assume without loss of generality that the group is \(G=\mathbb{Z}_{N_{1}}\times\cdots\times\mathbb{Z}_{N_{k}}\). To understand how to condense all bosons in \(G\), it is therefore sufficient to understand how to condense bosons in a single \(\mathbb{Z}_{N_{j}}\) factor; thus in what follows, for simplicity we will often restrict ourselves to the case that the set of bosons to be condensed comprise a cyclic group.
The string-nets in \(\mathcal{H}_{\{\phi\}}\) are oriented trivalent graphs with _two_ types of edges, as shown in Fig. 1. The first type, which we will simply call edges, are edges connecting two trivalent vertices. Each such edge carries a string label as defined in \(\mathcal{H}\). The second type of edge, which we will call sticks, has one end-point at a trivalent vertex, and one open endpoint. A stick carries a \(|G|\)-spin label, which takes values in the set of abelian bosons \(\{\phi\}\). This spin label \(\phi=(s,m)\) also dictates a string label \(s\) associated with the stick; we require the labels at each trivalent vertex to satisfy the branching rules, and the total \(G\)-spin label (i.e. the sum of spin labels of all sticks) must be trivial. An orthonormal basis for the extended Hilbert space \(\mathcal{H}_{\phi}\) is thus given by the set of all oriented trivalent graphs with sticks which (1) satisfy the branching rules at each trivalent vertex, and (2) have a net trivial \(G\)-spin.
To describe these extended string-nets on the lattice, we work on a decorated honeycomb lattice: at the center of each edge of the honeycomb lattice, we add an upward-pointing stick (see Fig. 1 (b)). We introduce two types of spins on the decorated lattice: link spins, which live on its edges, and end spins, which live at the endpoints of each stick. The link spins take values in the string types \(\{0,a,b,c,...\}\) of the standard string-net model, and end spins take values in excitation labels \(\{\phi\}\). We require a
Figure 1: A typical string-net configuration in \(\mathcal{H}_{\phi}\) in continuum (a) and on the decorated honeycomb lattice (b). Regular edges connecting two trivalent vertices can host any string label \(a\in\mathcal{C}\). Edges connecting to only one trivalent vertex, which we call sticks, may _only_ host edge labels \(\{s\}\) associated with the string operators that generate the set of condensing bosons \(\{\phi\}\). These edge labels necessarily have abelian fusion rules, \(a\times s=s\times a=a^{\prime}\) for any \(a\in\mathcal{C}\). Sticks also carry a label from the set \(\{\phi\}\) at their end-points.
stick carrying a label \(\phi=(s,m)\) to have the string label \(s\), and that all trivalent vertices satisfy the branching rules.
#### ii.1.2 \(W_{\phi}(P)\) in the extended string-net Hilbert space
In the following, we will use the extended string-net Hilbert space in two ways. First, we may use it to describe a system whose ground state is the original string-net ground state, but which can also describe certain excited states that are not allowed in the original string-net Hilbert space. In this case, sticks with non-trivial labels appear only in excited states, and the string-net ground state is exactly as described in Sec. II.1.1. Second, in order to describe the condensed phase, we can view all sticks as part of the ground-state Hilbert space. This will allow us to describe a modified set of local rules capturing the condensed phase, as we discuss in Sec. V.
Here, we take the first perspective, and describe the action of the string operator \(W_{\phi}(P)\) in the extended Hilbert space. The action of \(W_{\phi}(P)\) on the string net ground state \(|\Phi\rangle\) is exactly as specified in Sec. II.1.2 away from the end-points of \(P\). However, we now require \(P\) to begin and end on two sticks. In addition to its action on the edge labels, \(W_{\phi}(P)\) acts by raising the end spin at the final and initial end-point of the path \(P\) by \(\phi\) and \(\overline{\phi}\), respectively.
When \(\{\phi\}\) is a set of abelian bosons with trivial mutual statistics, we can describe any string operator \(W_{\phi}(P)\) as a product of "basic string operators" \(W^{i}_{\phi}\), each of which connects a pair of sticks on adjacent edges. The four basic string operators on the decorated honeycomb lattice act along the four paths \(p_{1},p_{2},p_{3},p_{4}\) shown in Fig. 2(a). The operators \(W^{1}_{\phi},W^{2}_{\phi}\) act on paths \(p_{1}\) and \(p_{2}\) centered at upward vertices, while \(W^{3}_{\phi},W^{4}_{\phi}\) act on paths \(p_{3}\) and \(p_{4}\), centered at the downward vertices. Their action is defined as follows. Let \(a,b,c,d\) and \(e,f,g\) denote the initial link spin states along \(p_{i}\) and on the external legs of \(p_{i}\) respectively, and let \(\phi_{a},\phi_{b}\) be the initial end spin states at stick \(a,b\) respectively (see Fig. 2). The matrix elements of \(W^{i}_{\phi}\) between an initial state \(a,b,c,d,e,f,g,\phi_{a},\phi_{b}\) and a final state \(a^{\prime},b^{\prime},c^{\prime},d^{\prime},e,f,g,\phi_{a^{\prime}},\phi_{b^{ \prime}}\) are then given by
\[W^{1,abcd;\phi_{a}\phi_{b}}_{\phi,a^{\prime}b^{\prime}c^{\prime} d^{\prime};\phi_{a^{\prime}}\phi_{b^{\prime}}}(eg) =\bar{w}_{\phi}(f)\delta_{\phi_{a}\times\phi,\phi_{a^{\prime}}}\delta_{\phi_{ b}\times\bar{\phi},\phi_{b^{\prime}}}\times\] \[F^{eas}_{c^{\prime}c^{\prime}}F^{cf}_{df^{\prime}}(F^{csf}_{d^{ \prime}c^{\prime}f^{\prime}})^{*}F^{s\pm b}_{bb^{\prime}}(F^{dsb^{\prime}}_{ gd^{\prime}b})^{*}\] \[W^{2,abcd;\phi_{a}\phi_{b}}_{\phi,a^{\prime}b^{\prime}c^{\prime} d^{\prime};\phi_{a^{\prime}}\phi_{b^{\prime}}}(eg) =\delta_{\phi_{a}\times\phi,\phi_{a^{\prime}}}\delta_{\phi_{b}\times\bar{ \phi},\phi_{b^{\prime}}}\times\] \[F^{eas}_{c^{\prime}c^{\prime}}F^{cf}_{c^{\prime}d^{\prime}}F^{ robb^{\prime}}(F^{dsb^{\prime}}_{gd^{\prime}b})^{*}\] \[W^{3,abcd;\phi_{a}\phi_{b}}_{\phi,a^{\prime}b^{\prime}c^{\prime} d^{\prime};\phi_{a^{\prime}}\phi_{b^{\prime}}}(eg) =w_{\phi}(f)\delta_{\phi_{a}\times\phi,\phi_{a^{\prime}}}\delta_{\phi_{b} \times\bar{\phi},\phi_{b^{\prime}}}\times\] \[F^{eas}_{c^{\prime}c^{\prime}}(F^{df}_{c^{\prime}cf})^{*}F^{ dsf}_{bb^{\prime}}(F^{s\pm b}_{gd^{\prime}b})^{*}\] \[F^{eas}_{c^{\prime}c^{\prime}d^{\prime};\phi_{a^{\prime}}\phi_{b^ {\prime}}}(eg) =\delta_{\phi_{a}\times\phi,\phi_{a^{\prime}}}\delta_{\phi_{b} \times\bar{\phi},\phi_{b^{\prime}}}\times\] \[F^{eas}_{c^{\prime}c^{\prime}}(F^{ds}_{c^{\prime}c^{\prime}})^{*}F ^{s\pm b}_{bb^{\prime}}(F^{dsb^{\prime}}_{gd^{\prime}b})^{*} \tag{22}\]
Here \(x^{\prime}=x\times s\) (or \(x\times\bar{s}\), if \(x=b\)), where we use multiplicative notation for the abelian group operation on both edge and end spins.
Notice that the matrix elements of open string operators are not invariant under local unitary transformations of the form (9), and thus are gauge dependent. When \(\phi\) are cyclic abelian bosons with trivial mutual statistics, there exists a convenient gauge
\[F(s^{i},s^{j},s^{k})\equiv F^{s^{i}s^{j}s^{k}}_{s^{(i+j)}s^{(i+j)}s^{(j+k)}}=1 \tag{23}\]
where \(s^{i},s^{j},s^{k}\) are any string types associated with condensing bosons (see Appendix A). We will work in the gauge (23) in the rest of the paper.
In the gauge (23), the basic string operators have the following important properties, which we derive in Appendix A. First, all basic string operators commute with each other:
\[[W^{i}_{\phi},W^{j}_{\phi^{\prime}}]=0. \tag{24}\]
Second, one can show that
\[W^{i\dagger}_{\phi}=W^{i}_{\phi} \tag{25}\]
for \(i=1,2,3,4\).
Finally, a general string operator can be expressed as a product of simple string operators. First, \(W^{i}_{\phi_{1}},W^{i}_{\phi_{2}}\) along the same path \(p_{i}\) can be combined as
\[W^{i}_{\phi_{1}}\cdot W^{i}_{\phi_{2}}=W^{i}_{\phi_{1}\times\phi_{2}} \tag{26}\]
where the \(\cdot\) operation is defined by
\[\begin{split} W^{i,abcd;\phi_{a}\phi_{b}}_{\phi_{3}=\phi_{1} \times\phi_{2},a_{3}b_{3}c_{3}d_{3};\phi_{a}\times\phi_{3},\phi_{b}\times\bar{ \phi}_{3}}(efg)=\\ W^{i,abcd;\phi_{a}\phi_{b}}_{\phi_{1},a_{1}b_{1}c_{3}d_{3};\phi_{a} \times\phi_{1},\phi_{b}\times\bar{\phi}_{1}}(efg)\times\\ W^{i,a_{1}a_{1}b_{1}c_{1}d_{3};\phi_{a}\times\phi_{1},\phi_{b} \times\bar{\phi}_{1}}_{\phi_{2},a_{1}b_{1}c_{1}d_{3};\phi_{a}\times\bar{\phi}_{1 }}_{\phi_{2},a_{1}b_{1}c_{1}d_{3};\phi_{a}\times\bar{\phi}_{1}}(efg)\times\\ W^{i,a_{1}b_{1}c_{1}d_{3};\phi_{a}\times\phi_{1},\phi_{b}\times\bar{ \phi}_{1}}_{\phi_{2},a_{1}b_{2}c_{3}d_{3};\phi_{a}\times\phi_{3},\phi_{b}\times \bar{\phi}_{3}}(efg)\end{split} \tag{27}\]
Figure 2: (a) Four building blocks of any string operator defined on the decorated honeycomb lattice. \(W^{1}_{\phi},W^{2}_{\phi},W^{3}_{\phi},W^{4}_{\phi}\) act along four different paths (the red line) connecting two nearest neighboring sticks. Here \(a,b,c,d\) and \(e,f,g\) denote the initial link spin states along the path and on the external legs of the path, respectively and \(\phi_{a},\phi_{b}\) are end spin states at two ends of the path. Their matrix elements are given in (22). (b) A typical open string operator \(W(P)\) along the path \(P\) can be decomposed into product of basic string blocks acting on each vertex along \(P\).
for \(i=1,\ldots,4\). Thus if the set of condensing bosons is cyclic and generated by \(\phi\), we can express all basic string operators as products of the basic string operator \(W_{\phi}^{i}\). Second, let \(P\) be a path obtained from a union of two basic paths, \(p_{i}(r_{1},r_{2})\), which begins on a stick at positions \(r_{1}\), and ends on a stick at position \(r_{2}\), and \(p_{j}(r_{2},r_{3})\), which begins on the stick at \(r_{2}\) and ends on a stick at \(r_{3}\) (see Fig. (3)). (Note that it is because \(w_{\phi}(s)=1\) that we can combine string end-points into a single string that crosses the sticks). Then we have:
\[W_{\phi}(P)=W_{\phi}^{i,(12)}W_{\phi}^{j,(23)} \tag{28}\]
and similarly for paths composed of more than two concatenated segments, as shown in the Figure. Here we define \(W_{\phi}^{i,(12)}=W_{\phi}^{i}\) if \(p_{i}(r_{1},r_{2})\) is oriented upwards, and \((W_{\phi}^{i})^{\dagger}\) otherwise. By joining string operators along multiple basic paths in this way, we can thus express \(W_{\phi}(P)\) as a product of basic string operators for any path \(P\). (Note that since the basic string operators commute, the order in which we apply them is unimportant.) It follows that _any_ product of \(\phi\)-string operators for \(\phi\) in our chosen set of abelian bosons can be expressed as a product of basic string operators.
## III Lattice Hamiltonians for condensing Abelian Bosons
Next, we identify a lattice Hamiltonian \(H(J)\) within the extended string-net Hilbert space that, by tuning a parameter \(J\), can bring a system through a transition in which a set of abelian bosons is condensed. Our lattice Hamiltonian has the general form
\[H(J)=H_{\mathcal{C}}-JH_{1}. \tag{29}\]
Here \(H_{\mathcal{C}}\) is a Hamiltonian in the extended string-net Hilbert space whose ground state is exactly the original string-net ground state; it can be viewed as a modification of the original string-net Hamiltonian (see Refs. [91; 96]) appropriate to the extended string-net Hilbert space. \(H_{1}\) is a term which creates particle-antiparticle pairs of anyons in the set \(\{\phi\}\) of condensing bosons. Here, for simplicity, we take this set to be a cyclic group of order \(p\), which we denote \(\langle\phi\rangle=\{\phi^{i},i=1,...,p\}\), with \(\phi^{p}=1\).
We will show that \(H(J)\) has the following properties. First, \(H(J=0)\) is identical to the original string-net Hamiltonian when acting on states where all stick labels are trivial, and states with sticks carrying non-trivial labels \(\phi^{i}\) have a finite energy cost. Thus in this limit string-net eigenstates with sticks carrying non-trivial labels \(\phi^{i}\) correspond to gapped excited states, and the ground state is the original string-net ground state \(|\Phi\rangle\). Second, \(H(J=\infty)\) is a commuting projector model with a frustration-free ground state \(|\Psi\rangle\), in which excitations in the set \(\langle\phi\rangle\) have condensed, in the sense that they are present in arbitrary number in the ground state. Third, \(|\Psi\rangle\) can be obtained by applying a certain projector to the \(J=0\) ground-state \(|\Phi\rangle\). Thus we can describe the \(J=\infty\) ground state explicitly in terms of the string types and local rules associated with \(|\Phi\rangle\), and use this description to investigate the topological data of the condensed phase.
It is worth noting that, as we show below, the ground state of \(H(J)\) for any \(J\) contains only excitations on the sticks, and no plaquette defects. Thus for \(\phi^{p}=1\), the critical point separating the condensed and uncondensed phases is always of the Potts or clock variety, depending on the specific choice of \(H_{1}\). Here we have chosen a Potts-like version, resulting in first-order transitions for \(p\geq 3\).
### The Hamiltonian \(H_{\mathcal{C}}\)
We first define the Hamiltonian \(H_{\mathcal{C}}\) in the extended string-net Hilbert space \(\mathcal{H}_{\phi}\) of the honeycomb lattice (see Fig. 1). \(H_{\mathcal{C}}\) is of the form
\[H_{\mathcal{C}}=-\sum_{e}Q_{e}-\sum_{p}B_{p}^{\phi} \tag{30}\]
The two sums run over end spins \(e\) and plaquettes \(p\) of the decorated honeycomb lattice. The operator \(Q_{e}\) acts on the end spins
\[Q_{e}=\delta_{e,\mathbf{1}} \tag{31}\]
where \(\delta_{e,\mathbf{1}}=1\) if \(e=\mathbf{1}\) (no excitation) and \(\delta_{e,\mathbf{1}}=0\) otherwise (\(\phi\) excitations). The operator \(Q_{e}\) penalizes the
Figure 3: (a) An example of a product of the basic string operators creating a \((\phi,\bar{\phi})\) pair on two neighbouring plaquettes. (b) The action of these products is equivalent to the action of a string \(W_{\phi}(P)\) starting on a stick in one plaquette, and ending on a stick in the other plaquette, which visits no other sticks in between. (c) By path independence (see Eq. (14)), such a string can be deformed to cross only the edge separating the plaquette pair.
states with \(\phi\) excitations at ends of sticks. Note that unlike in the usual string-net Hamiltonian, we have not included a term imposing the branching rules at each vertex; instead, we will work exclusively in the string-net Hilbert space, where these are necessarily satisfied.
The operator \(B_{p}^{\phi}\) on the decorated honeycomb lattice is more complicated, but the main idea is as follows. First, \([B_{p}^{\phi},B_{p^{\prime}}^{\phi}]=[B_{p}^{\phi},Q_{e}]=0\), ensuring that \(H_{\mathcal{C}}\) is a sum of commuting projectors. Second, analogous to the plaquette term in the usual string-net models[91, 96],\(B_{p}^{\phi}\) maps between different string-net configurations in the extended string-net Hilbert space, ensuring that the ground states (for which all stick labels are trivial) obey the local rules (4). Indeed, when acting on states where all stick labels are trivial, our plaquette term is identical to that of the generalized string net models [96]. Third, unlike the plaquette term of the usual string-net models, \(B_{p}^{\phi}\) commutes with the string operators \(W_{\phi^{k}}(P)\) even for paths \(P\) ending or beginning on the plaquette \(p\).
We note that here we use a prescription that ensures that \(B_{p}^{\phi}\) commutes with \(W_{\phi}(P)\) for _any_ choice \(\phi=(s,m)\); this allows us to discuss all abelian anyon condensation transitions on the same footing. For some classes of models, however (those for which the fusion category describing the string types is _braided_), an alternative and potentially computationally simpler formulation of the Hamiltonian resulting in the same condensed phase exists; this is discussed in Appendix B.
We now describe the operator \(B_{p}^{\phi}\) in detail. \(B_{p}^{\phi}\) has the form:
\[B_{p}^{\phi}=\sum_{s=0}^{N-1}\frac{d_{s}}{D}B_{p}^{\phi,s} \tag{32}\]
where \(D=\sum_{s=0}^{N-1}d_{s}^{2}\) and \(B_{p}^{\phi,s}\) describes a 27 spin interaction involving the 24 link spins around \(p\) and 3 end spins inside \(p\) (see Fig. 4). Its action can be understood as a sequence of three operations:
\[B_{p}^{\phi,s}=\sum_{\phi_{10},\phi_{11},\phi_{12}}W_{\phi_{10},\phi_{11}, \phi_{12}}^{\dagger}B_{p}^{s}W_{\phi_{10},\phi_{11},\phi_{12}} \tag{33}\]
where the sums run over the possible spin labels of the three sticks inside \(p\), with
\[W_{\phi_{10},\phi_{11},\phi_{12}}=(W_{\phi_{10}}^{1})^{\dagger}\mathcal{P}_{ \phi_{10}}\cdot W_{\phi_{11}}^{1}\mathcal{P}_{\phi_{11}}\cdot W_{\phi_{12}}^{ 3}\mathcal{P}_{\phi_{12}} \tag{34}\]
Here \(\mathcal{P}_{\phi_{a}}=|\phi_{a}\rangle\langle\phi_{a}|\) projects the end spin label of stick \(a\) onto \(\phi_{a}\), and \((W_{\phi_{10}}^{1})^{\dagger},W_{\phi_{11}}^{1},W_{\phi_{12}}^{3}\) are basic string operators (see Eq. (22)) that lower the spin label on sticks \(10,11\), and \(12\) by \(\phi_{10},\phi_{11}\), and \(\phi_{12}\) respectively. The operator \(W_{\phi_{10},\phi_{11},\phi_{12}}\) therefore moves any excitations on the sticks inside the plaquette \(p\) to sticks outside of \(p\):
(35)
This action is nontrivial only if \(\{\phi_{10},\phi_{11},\phi_{12}\}\) contains non-trivial end spin labels. In particular, it is trivial when acting on ground states of \(H_{\mathcal{C}}\).
The second operation \(B_{p}^{s}\) is the same as the plaquette operator defined in the Ref. [91] which adds a loop of type-\(s\) string around the boundary of \(p\):
(36)
Finally, the operation \(W_{\phi_{10},\phi_{11},\phi_{12}}^{\dagger}\) moves the excitations \(\{\phi^{10},\phi^{11},\phi^{12}\}\) back to the appropriate sticks in \(p\):
(37)
Here \(C_{1},C_{2},C_{3}\) are the corresponding matrix elements of the three operations. The product \(C_{1}\cdot C_{2}\cdot C_{3}\) gives the matrix elements of \(B_{p}^{\phi,s}\).
More precisely, the matrix elements of \(B_{p}^{\phi,s}\) are defined by
(38)
Figure 4: Decorated honeycomb lattice with an upward stick on each link of the honeycomb lattice. The \(Q_{e}\) operator acts on the end spin. The \(B_{p}\) operator acts on 27 spins adjacent to the plaquette \(p\).
where
\[B^{s,i_{1}\ldots i_{6}j_{1}\ldots j_{6}}_{p,i_{1}\ldots i_{6}j_{1} \ldots j_{6}}(e_{1}\ldots e_{12};\phi_{10},\phi_{11},\phi_{12})=\] \[d_{8}\sqrt{\frac{d_{i_{1}}d_{j_{2}}d_{3}d_{4}d_{4}j_{5}^{\prime} d_{6}}{d_{i_{1}}d_{2}d_{3}d_{4}^{\prime}d_{3}}{d_{5}d_{5}d_{5}^{\prime}}}\times\] \[(F^{ii_{1}i_{1}^{*}}_{j_{1}^{*}})^{*}(F^{ij_{2}i_{2}^{*}}_{j_{2}^ {*}})^{*}(F^{ii_{2}e_{2}}_{j_{2}^{*}j_{2}^{*}})^{*}F^{\bar{s}i_{5}s_{2}}_{j_{2} ^{*}}(F^{\bar{s}i_{3}e_{3}}_{j_{3}^{*}j_{3}^{*}})^{*}(F^{s_{5}i_{4}s}_{j_{5}^{ *}j_{3}^{*}})^{*}(F^{e_{5}i_{4}s}_{j_{5}^{*}j_{3}^{*}})^{*}\times\] \[F^{e_{6}^{\prime}\bar{s}i_{6}}_{j_{5}\bar{s}j_{6}^{*}}F^{b_{5} \bar{s}i_{6}^{*}}_{j_{5}\bar{s}i_{6}^{*}}(F^{i_{4}i_{4}^{*}}_{i_{4}^{*}})^{*}(F ^{e_{6}^{\prime}\bar{s}i_{6}}_{j_{1}^{*}})^{*}F^{e_{4}^{\prime}\bar{s}j_{3}^{* }}_{\xi_{4}i_{5}^{*}j_{3}^{*}}\times\] \[W^{1,e_{10}e_{13}e_{14}e_{14}e_{13}\phi_{13}}_{\phi_{10},0e_{13}^ {*}i_{4}\phi_{13}}(i_{4}j_{3}f_{4})W^{1,0e_{13}^{*}\alpha e_{4}^{*}i_{4}\phi_{1 3}^{*}}_{\phi_{10},0e_{13}^{*}\alpha e_{4}^{*}i_{4}\phi_{13}^{*}}\] \[W^{1,e_{11}e_{14}e_{14}e_{13}\phi_{14}}_{\phi_{11},\phi_{14}}(f_{ 6}j_{5})W^{1,e_{11}^{*}\alpha e_{4}^{*}\phi_{13}^{*}\phi_{10},\phi_{11}^{*}}_{ \phi_{11},e_{14}e_{14}\phi_{13}^{*}\phi_{10},\phi_{11}^{*}}(f_{6}j_{6}^{*}j_{5} ^{*})\times\] \[W^{3,e_{11}e_{12}e_{13}e_{14}\phi_{15}\phi_{12}}_{\phi_{12},e_{13 }^{*}\phi_{10},\phi_{11}^{*}}(f_{1}i_{1}j_{6})W^{3,e_{11}^{*}\alpha e_{4}^{*}i _{4}\phi_{13}^{*}\phi_{11}^{*}}_{\phi_{12},e_{13}^{*}\phi_{13}^{*}\phi_{13}^{* }}(f_{1}i_{1}^{*}j_{6}) \tag{39}\]
Here \(e_{7},e_{8}\ldots e_{12}\) take values in abelian string types and thus \(j_{p}=i_{p}\times e_{p+6}\) for \(p=1\ldots 6\), while \(e_{1}^{\prime}=e_{1}\times e_{12}\), \(e_{4}^{\prime}=e_{4}\times e_{10}\) and \(e_{6}^{\prime}=e_{6}\times e_{11}\). The matrix elements of \(W^{1}_{\phi},W^{3}_{\phi},W^{1\dagger}_{\phi},W^{3\dagger}_{\phi}\) are defined in (22) and (25).
From this explicit form, one can check that the plaquette operator \(B^{\phi,s}_{p}\) commutes with any basic string operator \(W^{j}_{\phi^{i}}\) (and hence any string operator \(W_{\phi^{i}}(P)\)):
\[[B^{\phi,s}_{p},W^{j}_{\phi^{i}}]=0. \tag{40}\]
We leave the derivation to Appendix D.
We can now show that \(H_{\mathcal{C}}\) (30) has the following properties. First, it is a sum of commuting projectors: clearly \([Q_{e},Q_{e^{\prime}}]=[Q_{e},B^{\phi}_{p}]=0\), since \(B^{\phi}_{p}\) does not alter the value of the spin-label on any stick, and hence preserves the eigenvalue of \(Q_{e}\). Moreover, in Appendix C we show that \([B^{\phi}_{p},B^{\phi}_{p^{\prime}}]=0\). Essentially, this results from the fact that the two plaquette operators commute in the absence of excitations on the sticks, and also that the string operator used to move a stick excitation off the shared edge between two adjacent plaquettes commutes with \(B^{\phi}_{p}\), where \(p\) is the plaquette that the stick points outward from.
It follows that, like the conventional string-net Hamiltonians, \(H_{\mathcal{C}}\) is exactly solvable. Second, there exists at least one state that satisfies \(Q_{e}=B^{\phi}_{p}=1\) for all \(e,p\); this state is therefore a ground state. Clearly, states with only trivial stick labels satisfy \(Q_{e}=1\) at every vertex; when restricted to these states, \(H_{\mathcal{C}}\) reduces to the original string-net model, whose ground state \(|\Phi\rangle\) is an eigenstate of the plaquette term of the corresponding Hamiltonian with eigenvalue \(1\) on every plaquette[96]. When \(Q_{e}\equiv 1\), \(|\Phi\rangle\) is therefore also an eigenstate satisfying \(B^{\phi}_{p}\equiv 1\). In other words, \(H_{\mathcal{C}}\) is exactly solvable, and its ground state(s) are exactly the string-net ground states defined by the local rules (4).
Though they have the same ground state(s), the excited states of our extended string-net model differ from those of conventional string-net Hamiltonians. In conventional string nets, where sticks are not included, excited eigenstates are either string-net states with \(B^{\phi}_{p}=0\) on some plaquettes, or states that violate the branching rules and hence are outside of the string-net Hilbert space (for which necessarily we also have \(B^{\phi}_{p}=0\) on some plaquettes). In our models, however, there are \(\phi^{j}\)-type excitations of \(H_{\mathcal{C}}\) satisfying \(B^{\phi}_{p}=1\) everywhere, with \(Q_{e}=0\) on some sticks.2 As a consequence, the ground state of \(H(J)\) satisfies \(B^{\phi}_{p}\equiv 1\)_for every positive_\(J\).
Footnote 2: We note that if we allow states outside of the string net Hilbert space, this leads to a redundancy, since in our extended Hilbert space \(\phi\) can also be realized by an eigenstate with \(Q_{e}\equiv 1\), with either some \(B^{\phi}_{p}=0\) or a violation of the branching rules.
### The Hamiltonian \(H_{1}\)
To define \(H_{1}\), we begin by defining the projector along the path \(p_{i},i=1,2,3,4\):
\[P^{i}_{\phi}(\mathbf{r})=\frac{1}{p}\sum_{k=1}^{p}W^{i}_{\phi^{k}}(\mathbf{r}) \tag{41}\]
where the sum runs over basic open string operators \(W^{i}_{\phi^{k}}\) with \(\phi^{k}\in\langle\phi\rangle\), and \(\mathbf{r}\) indexes a unit cell of the honeycomb lattice. The set of operators \(\{P^{i}_{\phi}\}\) form commuting projectors. The operator \(H_{1}\) is defined as a sum of commuting projectors over all neighboring sticks
\[H_{1}=\sum_{i,\mathbf{r}}P^{i}_{\phi}(\mathbf{r}) \tag{42}\]
By (40), we have \([H_{1},B^{\phi}_{P}]=0\). Thus, \(H_{1}\) creates excitations only of \(Q_{e}\) in \(H_{\mathcal{C}}\), while leaving the operator \(B^{\phi}_{P}\) in its ground state on every plaquette.
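The statement that averaging the powers of an order-\(p\) operator yields a projector can be illustrated on a single-site toy model (a sketch only; it does not reproduce the lattice operators \(W^{i}_{\phi^{k}}\)).

```python
# Toy illustration: averaging the powers of any unitary W with W^p = 1 gives a Hermitian projector
# P = (1/p) * sum_k W^k, exactly as in Eq. (41).
import numpy as np

p = 3
W = np.roll(np.eye(p), 1, axis=0)           # cyclic shift on C^p, so W has order p
P = sum(np.linalg.matrix_power(W, k) for k in range(1, p + 1)) / p

assert np.allclose(np.linalg.matrix_power(W, p), np.eye(p))   # W^p = identity
assert np.allclose(P @ P, P)                # idempotent
assert np.allclose(P, P.conj().T)           # Hermitian
assert np.allclose(W @ P, P)                # states in the image of P have eigenvalue 1 under W
print("rank of P:", int(round(np.trace(P).real)))   # projects onto the W-invariant subspace
```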
### Condensed phase and the \(J\to\infty\) limit
For \(J\) sufficiently large, the string net describes a new topological phase, in which the anyons \(\{\phi^{j},1\leq j<p\}\) have condensed. That this is so can be most easily understood by considering the limit \(J\to\infty\).
Since \(H_{1}\) is a sum of commuting projectors, in the \(J\to\infty\) limit, the low-energy Hilbert space \(\tilde{\mathcal{H}}\) consists of states in the image of the projector:
\[P_{\phi}=\prod_{i,\mathbf{r}}P^{i}_{\phi}(\mathbf{r}). \tag{43}\]
These states have eigenvalue \(1\) under all terms in \(H_{1}\).
To leading order in \(1/J\), the effective Hamiltonian, which acts within the low energy Hilbert space \(\tilde{\mathcal{H}}\), is
\[H_{\tilde{\mathcal{C}}}=P_{\phi}H_{\mathcal{C}}P_{\phi}=-\sum_{p}P_{\phi}B^{ \phi}_{p}+\text{const.} \tag{44}\]
In the second equality, we use (40) and the fact that \(H_{\phi}\equiv\sum_{e}P_{\phi}Q_{e}P_{\phi}\) is simply the number of ways to combine operators in \(P_{\phi}\) to obtain a trivial label on every vertex, which is a system size independent constant. Note that since \(P_{\phi}\) and \(B_{p}^{\phi}\) are projectors, and \([P_{\phi},B_{p}^{\phi}]=[B_{p}^{\phi},B_{p^{\prime}}^{\phi}]=0\), \(P_{\phi}B_{p}^{\phi}\) are also commuting projectors. Moreover, if \(|\Phi\rangle\) is the ground state of \(H_{\mathcal{C}}\), we have \(P_{\phi}B_{p}^{\phi}(P_{\phi}|\Phi\rangle)=P_{\phi}|\Phi\rangle\) for every \(p\). Hence the ground state \(|\Psi\rangle\) of \(H_{D}\) is given by3
Footnote 3: In fact, projecting any state \(|\Phi_{ex}\rangle\) satisfying \(B_{p}^{\phi}|\Phi_{ex}\rangle=|\Phi_{ex}\rangle\) in this way gives the ground state \(|\Psi\rangle\) of \(H_{D}\). This is because such states have the form \(|\Phi_{ex}\rangle=W_{\phi}(P)|\Phi\rangle\), and as \(P_{\phi}W_{\phi}(P)=P_{\phi}\), we have \(P_{\phi}|\Phi_{ex}\rangle=P_{\phi}W_{\phi}(P)|\Phi\rangle=P_{\phi}|\Phi\rangle\).
\[|\Psi\rangle=P_{\phi}|\Phi\rangle. \tag{45}\]
To show that \(|\Psi\rangle\) is indeed a state in which the bosons \(\langle\phi\rangle\) have condensed, we expand the projector \(P_{\phi}\) according to:
\[P_{\phi}=\frac{1}{p^{2N_{V}}}\sum_{\{\phi_{j},r_{ij}\}}W_{\{\phi_{j},r_{ij}\}} \tag{46}\]
Here \(N_{V}\) is the number of vertices on our honeycomb lattice; for each such vertex there are two simple string operators. (Note that throughout this paper, we will assume boundary conditions where this is the case). \(W_{\{\phi_{j},r_{ij}\}}\) is the composite string operator which creates excitations \(\{\phi_{j}\}\) using string operators along the paths \(\{r_{ij}\}\) on the lattice, and the sum runs over all possible configurations \(\{\phi_{j},r_{ij}\}\) on the lattice, subject to the constraint that \(\prod_{j}\phi_{j}=1\). We can use (46) to expand the new ground state (45) as
\[|\Psi\rangle=\frac{1}{p^{2N_{V}}}\sum_{\{\phi_{j},r_{ij}\}}W_{\{\phi_{j},r_{ij} \}}|\Phi\rangle. \tag{47}\]
In other words, the ground state \(|\Psi\rangle\) is a superposition of all possible configurations of \(\phi\) excitations - a \(\phi\) condensed state.
We can also make some educated guesses about the topological order in the condensed phase by studying the effect of \(P_{\phi}\) on low-lying excited states of \(H_{\mathcal{C}}\). These are created by generalized versions of the string operators \(W_{\phi}(P)\), which we denote \(W_{\alpha}(P)\), where \(P\) describes a path on the lattice, and \(\alpha\) is the anyon type. The data associated with \(W_{\alpha}(P)\) is essentially the same as that for \(W_{\phi}(P)\), except that resolving string crossings requires a matrix \(\Omega_{\alpha}(a)\), rather than a scalar \(w_{\phi}(a)\); a detailed description can be found in Ref. [96]. Unless \(\alpha=\phi^{j}\), here we require that \(P\) starts and ends at vertices of the lattice, rather than on sticks.
Consider how the operators \(P_{\phi}^{i}(\mathbf{r})\) act on the string operators \(W_{\alpha}(P)\). The latter can suffer one of three possible fates. First, if
\[W_{\phi}(P)W_{\alpha}(P)=W_{\alpha^{\prime}}(P) \tag{48}\]
then the operators \(W_{\alpha}(P)\), \(W_{\alpha^{\prime}}(P)\) have identical actions on states in the image of \(P_{\phi}\). This suggests that in the limit \(J\rightarrow\infty\), the two anyons \(\alpha\) and \(\alpha^{\prime}\) have been _identified_, meaning that they comprise a single anyon type in the condensed phase. For example, all of the condensing bosons \(\{\phi^{k}\}\) are identified with the vacuum in the condensed phase. This conclusion agrees with the expectations of other approaches to anyon condensation[31; 32].
Note that if
\[W_{\phi^{r}}(P)W_{\alpha}(P)=W_{\alpha}(P) \tag{49}\]
for \(r|p\), then in the condensed phase \(W_{\alpha}(P)\) becomes identified with \(r-1\) distinct anyon string operators, rather than with \(q-1\). For example, if \(r=1\), \(W_{\alpha}(P)\) does not become identified with any other anyon string operators. Though this statement seems rather innocuous here, in fact in such cases \(\alpha\)_splits_ into multiple anyon types after condensation[31; 32]. We will not discuss the splitting at the level of anyons in detail here; however it is closely related to the splitting of string net labels which, as we show in Section IV.2, arises in the ground states of our condensed string net model.
Second, if \(\alpha\) braids non-trivially with one of the condensing bosons \(\phi\), then when the path \(P_{1}\) crosses \(P_{2}\), and \(\phi\) is an abelian boson,[96]
\[W_{\bar{\phi}}(P_{1})W_{\alpha}(P_{2})W_{\phi}(P_{1})=\frac{S_{\alpha\phi}}{S_ {\alpha\mathbf{1}}}W_{\alpha}(P_{2})W_{\phi}(P_{1}) \tag{50}\]
In this case, the string operator \(W_{\alpha}(P)\) maps states in the image of \(P_{\phi}\) (for which \(W_{\phi}(P_{1})|\Psi\rangle=|\Psi\rangle\) for every choice of \(P_{1}\)) to states outside of this image. This suggests that \(\alpha\) anyons are no longer point-like excitations in the condensed phase, but instead become confined, again agreeing with expectations based on other approaches to anyon condensation.
In the following sections, rather than pursue the analysis of anyon string operators, we will instead focus on the fate of the string net ground state in the condensed phase. We will show how to describe the ground state of \(H_{\mathcal{C}}(J\rightarrow\infty)\) as a conventional string net of the type described in Ref. [96]. Such string net ground states can always be associated with a commuting projector string net Hamiltonian[96], whose topological order can be inferred directly from the string net data. (Specifically, it is the Drinfeld center of the fusion category comprising the string net). Thus this approach allows us to identify the topological order of the condensed phase without requiring an explicit discussion of anyon string operators in the condensed phase.
## IV The condensed Hilbert space
In order to understand the condensed phase, we begin by studying the effective Hilbert space
\[\tilde{\mathcal{H}}=\text{span}\{\langle\tilde{X}|:\langle\tilde{X}|P_{\phi}= \langle\tilde{X}|\}. \tag{51}\]
that describes states of finite energy in the limit \(J\to\infty\), which we refer to as the _condensed Hilbert space_. Our goal is to show that \(\tilde{\mathcal{H}}\) can be thought of as a new (non-extended) string-net Hilbert space, whose basis states are string-net states with new string labels and new branching rules.
To construct a basis state \(\langle\tilde{X}|\) in the effective Hilbert space defined by Eq. (51), we begin with a reference state \(\langle X|\) in the uncondensed string-net Hilbert space \(\mathcal{H}\). The corresponding basis state in \(\tilde{\mathcal{H}}\) is:
\[\langle\tilde{X}|=\langle X|P_{\phi} \tag{52}\]
Since \(W^{i}_{\phi^{k}}(\mathbf{r})P^{i}_{\phi}(\mathbf{r})=P^{i}_{\phi}(\mathbf{r})\) for every \(i\) and \(\mathbf{r}\), we have
\[W_{\{\phi_{j},r_{ij}\}}P_{\phi}=P_{\phi}W_{\{\phi_{j},r_{ij}\}}=P_{\phi}. \tag{53}\]
Consequently, if \(\langle X^{\prime}|=\langle X|W_{\{\phi_{j},r_{ij}\}}\), then \(\langle X|P_{\phi}=\langle X^{\prime}|P_{\phi}\). Thus, to construct a basis of \(\tilde{\mathcal{H}}\), we must find a suitable basis \(\{\langle X_{0}|\}\) of \(\mathcal{H}\) such that \(\langle X^{i}_{0}|W_{\{\phi_{j},r_{ij}\}}|X^{j}_{0}\rangle=\delta_{ij}\). Since we are interested in identifying a set of string-net labels appropriate to the condensed phase, we take \(\{\langle X_{0}|\}\) to be states in the string-label basis - i.e., \(\langle X^{j}_{0}|\) has a fixed string label for every edge.
To find the new string-label basis \(\{\langle\tilde{X}|\}\), we consider two classes of condensing bosons. The first class is \(\phi=(0,m)\) - i.e. the string operator \(W_{\phi}(P)\) does not change the string labels of edges that it acts on. This type of condensation, which does not require the extended Hilbert space, has been discussed in detail in Ref. [73]. The second class, with \(\phi=(s,m)\), condenses bosons whose string operators do change the string net labels. These condensation transitions do require the extended Hilbert space that we introduce here.
We start with the first case, \(\phi=(0,m)\). For any edge in the lattice, \(P_{\phi}\) contains an equal contribution from \(\phi^{j}\)-labeled strings that cross that edge, for every \(j\) (see Fig. 3). Thus, the strings in \(P_{\phi}\) that cross an edge carrying the label \(a\) contribute an overall factor
\[\frac{1}{p}\sum_{i=1}^{p}w_{\phi^{i}}(a)=\delta_{w_{\phi}(a),1} \tag{54}\]
where \(p\) is the order of \(\langle\phi\rangle\). Thus, only string types \(a\) with
\[w_{\phi}(a)=1 \tag{55}\]
remain after condensation of \(\phi=(0,m)\) bosons. Hence, the new string-label basis \(\{\langle\tilde{X}|\}\) is simply the subset of states in \(\langle X_{0}|\) containing only string types \(a\) satisfying Eq. (55). We say that the remaining string labels, which do not appear in the low-energy Hilbert space after condensation, are _confined_.
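This confinement criterion is easy to tabulate in the abelian examples of Sec. VI. The short Python sketch below is our own illustration (the function `deconfined_labels` and its arguments are not part of the model data); it lists the labels of a \(\mathbb{Z}_{N}\) theory that satisfy Eq. (55) when the condensing boson is \(\phi=(0,m)\).

```python
# Minimal sketch: for a Z_N string net and phi = (0, m), a label a survives
# condensation iff w_phi(a) = exp(2*pi*i*m*a/N) = 1, i.e. m*a = 0 (mod N), Eq. (55).
def deconfined_labels(N, m):
    return [a for a in range(N) if (m * a) % N == 0]

print(deconfined_labels(4, 2))  # Z_4, phi = (0,2): [0, 2]
print(deconfined_labels(6, 3))  # Z_6, phi = (0,3): [0, 2, 4]
```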
Now, we consider the second case, \(\phi=(s,m)\). If \(\langle\phi\rangle\) contains a subgroup \(\langle\phi_{m^{j}}\rangle\subset\langle\phi\rangle\) which is generated by \(\phi_{m^{j}}=(0,m^{j})\), then by the same reasoning as above, the string types \(a\) that appear in the condensed Hilbert space must satisfy
\[w_{\phi_{m^{j}}}(a)=1, \tag{56}\]
with other string types being confined. We find it useful to reorganize the deconfined string types into new string labels as follows. First, we define the new null string label via:
\[\bar{0}=\oplus_{j=0}^{q-1}s^{j}. \tag{57}\]
where we have assumed the condensing bosons form a cyclic group generated by \(\phi=(s,m)\), with \(s^{q}=1\) and \(q|p\). Here the symbol \(\oplus\) means that, in the original string net basis, an edge carrying the label \(\bar{0}\) carries a superposition of labels in the set \(\{s^{j}\}\). Similarly, other condensed string types are given by superpositions of the form:
\[\tilde{a}=\oplus_{j=0}^{q-1}(a\times s^{j}). \tag{58}\]
It is convenient to pick a particular representative for \(\tilde{a}\) in the original label set, which we will denote \(a\). We denote the remaining terms on the right-hand side of Eq. (58) as:
\[a^{j}\equiv a\times s^{j}\,\ \ a^{0}\equiv a \tag{59}\]
Then all \(a^{j}\) project to the same condensed string type, while if \(b\neq a\times s^{j}\), then \(a\) and \(b\) project to different string types. As we discuss in detail below, if one or more of the labels obey \(a^{r}=a\) for some \(r|q\), in the condensed phase the single string type \(a\) splits into multiple string types \(\tilde{a}_{1},...\tilde{a}_{q/r}\). Finally, the branching rules for the new string labels can be deduced from the branching rules of the original string labels. In the absence of splitting, given the branching rules \((a,b;c)\), the new branching rules are \((\tilde{a},\tilde{b};\tilde{c})\). We discuss the new branching rules in the presence of splitting in Sec. IV.2.2.
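For the abelian models treated in Sec. VI this reorganization can be spelled out explicitly. The following sketch is again our own illustration with hypothetical function names: it groups a set of surviving \(\mathbb{Z}_{N}\) labels into the orbits of Eq. (58) and reports how many condensed labels each orbit would split into; for abelian label sets the orbit length always equals \(q\), so no splitting occurs.

```python
# Minimal sketch: condensed labels ã = {a, a+s, ..., a+(q-1)s} of Eq. (58).
# An orbit of length r < q would signal a label that splits into q/r pieces.
def condensed_orbits(labels, N, s, q):
    orbits, seen = [], set()
    for a in labels:
        if a in seen:
            continue
        orbit = sorted({(a + j * s) % N for j in range(q)})
        seen.update(orbit)
        orbits.append((orbit, q // len(orbit)))  # (orbit, number of split copies)
    return orbits

# Z_4, condensing phi = (2,2): nothing is confined; labels pair up as {0,2}, {1,3}
print(condensed_orbits(range(4), N=4, s=2, q=2))
```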
The condensed string labels, together with the associated branching rules, define the string-label basis in the condensed Hilbert space. Specifically, a string-net state \(\langle\tilde{X}|\in\tilde{\mathcal{H}}\) has edges labeled with the condensed string types \(\{\tilde{0},\tilde{a},\tilde{b},...\}\), and satisfies the new branching rules \((\tilde{a},\tilde{b};\tilde{c})\) at each vertex. Note that \(\tilde{\mathcal{H}}\) should be viewed as a conventional string-net Hilbert space, since after condensation all sticks are effectively in the trivial vacuum state.
### Mapping between new and old string net labels
Since \(\langle\tilde{X}|=\langle X|P_{\phi}\), any state in \(\tilde{\mathcal{H}}\) can also be expressed as a superposition of string-net states in our original Hilbert space \(\mathcal{H}_{\phi}\). This superposition contains states in which each edge label is replaced by an appropriate superposition of original string net labels, with the branching rules obeyed at every trivalent vertex, and arbitrary allowed labels on the sticks. Notationally, we represent the resulting string-net configurations in the original label set by \(X\in\tilde{X}\), where \(X\) represents a labeling of edges in the original string-net basis, and \(\tilde{X}\) represents the corresponding labels in the condensed basis. Explicitly, we
may write
\[\langle\tilde{X}|=\frac{1}{p^{2N_{V}}}\sum_{X\in\tilde{X}}C(X)\langle X| \tag{60}\]
where \(C(X)\) are numerical coefficients.
The coefficients \(C(X)\) are highly constrained. For any \(X_{1},X_{2}\in\tilde{X}\), \(\langle X_{1}|\) is related to \(\langle X_{2}|\) by the action of some composite string operator \(W(\{\phi_{j},r_{ij}\})\):
\[\langle X_{1}|W(\{\phi_{j},r_{ij}\})=W(\{\phi_{j},r_{ij}\})_{X_{2}}^{X_{1}}\langle X_{2}| \tag{61}\]
where \(W(\{\phi_{j},r_{ij}\})_{X_{2}}^{X_{1}}\) is the relevant matrix element of \(W(\{\phi_{j},r_{ij}\})\), and for a fixed configuration \(\phi_{j},r_{ij}\) the state \(X_{2}\) is unique as the condensing anyons are abelian. Since the composite string operator is unitary, we equivalently have \(W(\{\phi_{j},r_{ij}\})|X_{2}\rangle=W(\{\phi_{j},r_{ij}\})_{X_{2}}^{X_{1}}|X_{1}\rangle\). Therefore, the coefficients satisfy:
\[C(X_{2}) =\langle\tilde{X}|X_{2}\rangle=\langle\tilde{X}|W(\{\phi_{j},r_{ ij}\})|X_{2}\rangle\] \[=W(\{\phi_{j},r_{ij}\})_{X_{2}}^{X_{1}}\langle\tilde{X}|X_{1}\rangle\] \[=W(\{\phi_{j},r_{ij}\})_{X_{2}}^{X_{1}}C(X_{1}). \tag{62}\]
where in the first line we have used Eq. (53). Eq. (62) allows us to determine the coefficients \(C(X)\), up to an overall coefficient \(C(X_{0})\) for each distinct reference configuration \(X_{0}\).
### Vertex coefficients
Solutions \(\{C(X)\}\) to Eq. (62) can be expressed as \(C(X)=\prod_{v}C_{v}(X)\), where the product runs over all trivalent vertices in the extended string-net configuration \(X\), and \(C_{v}(X)\) is a coefficient that depends only on the three string labels surrounding the vertex \(v\). This is because the action of any string operator can be broken up into a product of actions of simple string operators, with each simple string operator acting at a single honeycomb vertex and the vertices associated with nearby sticks. Thus for each simple string operator, Eq. (62) can be reduced to a set of equations relating products of at most three of the vertex coefficients \(C_{v}(X)\) to at most three of the vertex coefficients \(C_{v}(X^{\prime})\). We will show that the resulting equations are self-consistent, and sufficient to fully determine the coefficients of any configuration \(X\in\tilde{X}\) from that of a reference configuration \(X_{0}\).
To parametrize the vertex coefficients \(C_{v}(X)\), we define a set of _root vertices_, which contain one representative vertex \((a,b;c)\) in the original string label set for each condensed vertex \((\tilde{a},\tilde{b};\tilde{c})\). Then any condensed string net state \(\langle\tilde{X}|\) can be obtained by projecting a reference string-net state \(\langle X_{0}|\) for which all vertices are root vertices. Conversely, any two states that differ by at least one root vertex project to two distinct states in \(\tilde{\mathcal{H}}\). We then define two types of vertex coefficients: \(\{A^{ab}_{c}\}\) associated with the root vertices \((a,b;c)\), and \(\{B^{a^{j}b^{k}}_{c^{j+k}}\}\), associated with the remaining vertices \((a^{j},b^{k};c^{j+k})\), where \(a^{j}\equiv a\times s^{j}\). The coefficients \(\{B^{a^{j}b^{k}}_{c^{j+k}}\}\) can be expressed in terms of \(\{A^{ab}_{c}\}\) using Eq. (62). On the other hand, \(\{A^{ab}_{c}\}\), which are associated with the vertex coefficients of our reference configuration, are not fully determined, and in some cases admit multiple, physically distinct solutions.
We begin by defining the root vertices. Again, we have two cases to consider. The first case is the \(\phi=(0,m)\) condensed phases. In this case, we define the root vertices by
[Eqs. (63)–(68): pictorial definitions of the root vertices \((a,b;c)\) and of the associated root-vertex coefficients \(A^{ab}_{c}\) and \(A^{as^{j}}\), together with their normalization.]
In the absence of splitting, these are not constrained, and can be any complex number of unit modulus; in particular, we may choose them all to be 1.
The \(\phi=(0,m)\) condensed phases can be thought of as special cases of the \(\phi=(s,m)\) condensed phases where \(s=0\) and thus \(\bar{0}=\{0\}\). In this case, the vertex coefficients (67) associated with root vertices (63) correspond to a gauge choice for our string net model[96].
When \(s\neq 0\), to find the coefficients \(C(X)\) in Eq. (60), we define a set of vertex coefficients associated with non-root vertices via:
\[C\left(\raisebox{-14.226378pt}{\includegraphics[width=14.226378pt]{figs.eps}} \right)\sim B^{s^{i}a},\quad C\left(\raisebox{-14.226378pt}{\includegraphics[width=14.226378 pt]{figs.eps}}\right)\sim\frac{1}{B^{s^{i}a}} \tag{69}\]
and
\[C\left(\raisebox{-14.226378pt}{\includegraphics[width=14.226378pt]{figs.eps}} \right)\sim B^{a^{j}b}_{c^{i+j}},\quad C\left(\raisebox{-14.226378pt}{ \includegraphics[width=14.226378pt]{figs.eps}}\right)\sim\frac{1}{B^{a^{j}b}_{c ^{i+j}}}. \tag{70}\]
where \(B^{a^{j}b^{k}}_{c^{j+k}}\) are complex numbers, and at least one of \((k,j)\) is non-zero, so that at least one of \(a^{j},b^{k}\) is not in our chosen set of reference labels.
The division into \(A\)-type and \(B\)-type vertex coefficients is useful since the latter are fully determined by the root vertex coefficients \(A^{ab}_{c},A^{as^{i}}\) using Eq. (62). The coefficients \(A^{ab}_{c}\) and \(A^{as^{i}}\), on the other hand, are not fixed by Eq. (62) provided that \(a\times s^{r}\neq a\) for any \(r<q\). In this case, the coefficients \(C(X)\) in (60) can be parametrized as
\[C(X)=C(X_{0})\prod_{v\in X}B_{v} \tag{71}\]
where the product runs over all vertices \(v\) in \(X\) and \(B_{v}\) is the corresponding vertex coefficient.
The coefficient \(C(X_{0})\) associated with the given reference configuration \(X_{0}\) is determined by the root vertex coefficients via:
\[C(X_{0})=\prod_{v\in X_{0}}A_{v} \tag{72}\]
where \(v\) runs over all root vertices in \(X_{0}\) and \(A_{v}\) is the corresponding root vertex coefficient. When \(a\times s^{r}\neq a\) for any \(r<q\) we will find that all choices of root vertex coefficients are equivalent, and the freedom to choose \(C(X_{0})\) amounts to a gauge choice.
If \(a^{r}=a\) for some \(r|q\), the parametrization of \(C(X)\) is similar. However, in this case Eq. (62) imposes additional constraints on the root vertex coefficients \(A^{as^{k}}\). In this case we find that only \(A^{as^{k}}\) for \(k\leq r\) are free parameters, and that there are \(q/r\) distinct solutions for each of these coefficients. These distinct solutions correspond to the fact[31] that after condensation the label \(a\)_splits_ into \(q/r\) distinct labels; correspondingly we also obtain multiple vertex coefficients \(A^{ab}_{c}\). We now discuss each of these cases in turn.
#### iv.2.1 Case 1: \(a^{k}\neq a\)
First, let us verify that a solution to Eq. (62) can be expressed as a product of vertex coefficients - i.e. that any mapping between two configurations with the same sets of initial and final vertices has the same numerical coefficient. If \(a^{k}\neq a\) for any \(a\) or \(k\), then it suffices to consider sequences of simple string operators connecting the same initial and final vertex configurations. The properties of basic string operators outlined in Eqs. (24) - (26), as well as the consistency conditions (5) and (15), ensure that all combinations of basic string operators relating a given initial and final set of vertices will have the same numerical coefficient.
Second, we use Eq. (62) to solve for the \(B\)-type vertex coefficients in terms of \(\{A^{as^{i}},A^{ab}_{c}\}\). First, consider vertices where both \(a\) and \(b\) legs are powers of the condensing label \(s\). In this case, in the gauge (23), and using \(w_{\phi}(s^{j})=1\), all non-vanishing string operator matrix elements are simply \(+1\), and we have
\[\begin{split}& A^{0s^{j}}A^{0s^{-j}}B^{s^{j}s^{-j}}=A^{00}=1\\ & B^{s^{i}s^{j}}A^{0s^{i}}A^{0s^{j}}B^{s^{i+j}s^{-i-j}}=A^{00}=1. \end{split} \tag{73}\]
It follows that, given the condition (68),
\[B^{s^{i}s^{j}}=1 \tag{74}\]
for any \(i,j\).
Next suppose \(a\notin\bar{0}\), with \(b=s^{i}\), and \(a^{j}\neq a\). In this case, we have
\[A^{as^{k}}B^{a^{k},s^{l-k}}B^{s^{l},s^{-k}}=A^{as^{l}}(F^{as^{k}s^{l-k}}_{a^{l}a^{k}s^{l}})^{*} \tag{75}\]
where the coefficient is obtained by acting on the vertex \((a,s^{l};a^{l})\) with the product \(W^{1}_{\phi^{k}}W^{2}_{\phi^{-k}}\). Given Eq. (74), this fixes \(B^{a^{k},s^{l-k}}\) in terms of \(A^{as^{l}}\) and \(A^{as^{k}}\).
If \(a,b\neq s^{k}\), we have
\[A^{as^{j}}A^{bs^{k}}B^{a^{j},b^{k}}_{c^{j+k}}B^{c^{j+k}\bar{s}^{j+k}}=W_{jk}(abc)A^{ab}_{c}\, \tag{76}\]
The matrix element is given by acting with the product \(W^{2}_{\phi^{k}}W^{1}_{\phi^{j}}\) on the vertex \((a,b;c)\) (see Fig. 2):
\[W_{jk}(abc)=\bar{w}_{\phi^{j}}(b)F^{abs^{j}}_{c^{j}c^{j}b^{j}}(F^{a^{s}j^{j}b} _{c^{j}a^{j}b^{j}})^{*}F^{a^{j}bs^{k}}_{c^{j}+c^{j}b^{k}}(F^{c^{s}j^{j}}_{c^{j }0}F^{c^{j}s^{j}j^{+k}}_{c^{j}+\bar{s}j})^{*} \tag{77}\]
Given Eq. (75), this fixes \(B^{a^{j},b^{k}}_{c^{j+k}}\) in terms of \(A^{ab}_{c}\) and \(\{A^{as^{j}}\}\).
Finally, using string paths of the form
\[\begin{split}&\raisebox{-14.226378pt}{\includegraphics[width=14.226378pt]{figs.eps}} \end{split} \tag{78}\]
we have
\[B^{s^{j}a^{i}}W_{\bar{j}j}(s^{j},a^{i},a^{i+j}) =B^{a^{i}s^{j}} \tag{79a}\] \[B^{s^{j}a}W_{\bar{j}j}(s^{j},a,a^{j}) =A^{as^{j}}, \tag{79b}\]
which shows that both \(B^{s^{i}a}\) and \(B^{s^{k}a^{j}}\) can be expressed in terms of \(A^{as^{j}}\)-type vertex coefficients. These relations have a particularly simple form: From Eq. (77), we can show that
\[W_{\bar{j}j}(s^{j},a,a^{j})= w_{\phi^{j}}(a) \tag{80}\]
where we have used Eqs. (15b,15c), as well as the identity
\[F^{as^{j}\bar{s}^{j}}_{aa^{j}0}F^{s^{j}\bar{s}^{j}s^{j}}_{s^{j}00}(F^{a^{j},\bar{s}^{j},s^{j}}_{a^{j}a0})^{*}=1 \tag{81}\]
which follows from Eq. (5).
This leaves us with the vertex coefficients \(A^{ab}_{c}\) and \(A^{as^{j}}\) (where \(a,b\neq s^{k}\)). The former are clearly free parameters, since by definition there is no \(\phi\) string operator that takes such a root vertex to another root vertex. Indeed, they represent a choice of gauge for the \(F\) matrices describing the condensed phase (see Eq. (9)), and can be set to 1. To see that \(A^{as^{j}}\) are also free parameters, we note that there is a residual gauge freedom when solving (62). Specifically, given a set of vertex coefficients that satisfy Eq. (62), we can construct an infinite class of other solutions \(\tilde{A}^{ab}_{c},\tilde{B}^{a^{j}b^{k}}_{c^{j+k}}\) via:
\[\tilde{A}^{ab}_{c}=A^{ab}_{c}\cdot\frac{g(a)g(b)}{g(c)}\,\ \ \tilde{B}^{a^{j}b^{k}}_{c^{j+k}}=B^{a^{j}b^{k}}_{c^{j+k}}\cdot\frac{g(a^{j})g(b ^{k})}{g(c^{j+k})}. \tag{82}\]
Here \(a,b\) and \(c\) are any string labels (including \(s^{j}\)), and \(g(a)\) is any function with
\[g(s^{i})g(s^{j})=g(s^{i+j}),\quad g(0)=1.\]
It is straightforward to verify that the transformations (82) do not alter the equalities dictated by the action of any of the basic string operators at a vertex.
When \(a^{i}\neq a^{j}\) for \(i\neq j\) (mod \(q\)), the gauge transformation (82) fully fixes the coefficients \(A^{as^{i}}\): we can always choose the ratio \(g(a)/g(a^{j})\) to set
\[A^{as^{j}}=1. \tag{83}\]
Accordingly, we have
\[B^{s^{j}a}=w_{\phi^{j}}(a)\,\quad B^{a^{j}s^{i-j}}=(F^{as^{j}s^{i-j}}_{a^{ i}a^{j}s^{i}})^{*} \tag{84}\]
by Eqs. (75) and (79b).
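For an abelian \(\mathbb{Z}_{N}\) theory these relations can be evaluated directly from the data of Eqs. (118) and (119) below. The sketch that follows is an illustration under those assumptions, not part of the general construction; it implements Eq. (84) in the gauge \(A^{as^{j}}=1\) of Eq. (83).

```python
import numpy as np

# Minimal sketch for a Z_N theory with F-symbols of Eq. (118) and string
# parameters of Eq. (119): in the gauge A^{a s^j} = 1, Eq. (84) gives
#   B^{s^j a} = w_{phi^j}(a)   and   B^{a^j s^{i-j}} = F(a, s^j, s^{i-j})^*.
def F(a, b, c, N, p):
    return np.exp(2j * np.pi * p * a * (b + c - (b + c) % N) / N**2)

def w_phi(a, N, p, s, m, power=1):
    # string parameter of phi^power = (power*s, power*m) acting on the label a
    return np.exp(2j * np.pi * power * (p * s * a / N**2 + m * a / N))

def B_s_a(a, j, N, p, s, m):
    return w_phi(a, N, p, s, m, power=j)

def B_a_s(a, i, j, N, p, s):
    return np.conj(F(a, (j * s) % N, ((i - j) * s) % N, N, p))

# Z_4, p = 0, condensing phi = (2,2): B^{s a} = (-1)^a
print([round(B_s_a(a, 1, 4, 0, 2, 2).real) for a in range(4)])  # [1, -1, 1, -1]
# one of the B^{a^j s^{i-j}} coefficients in the p = 1 model, for illustration
print(B_a_s(1, 2, 1, 4, 1, 2))
```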
#### iv.2.2 Case 2: \(a^{r}=a\)
If \(a^{r}=a\) for some \(r|q\), there are additional constraints relating the coefficients \(A^{a,s^{nr}}\), and we cannot set these to 1 using transformations of the form (82).
We begin by considering configurations involving only vertices of the form \((a,s^{jr};a)\), \(j=1,...q/r\), and their cyclic permutations. We first show that for such configurations, there exists a solution to Eq. (62) that can be expressed as a product of vertex coefficients. To show this, we must establish that the coefficients relating configurations with the same sets of initial and final vertices do not depend on the relative positions of these vertices - an issue that did not crop up in case 1. For example, using a \(W^{1}\)-type simple string acting on a vertex \((a,s^{j},a)\) with sticks on the two \(a\) legs carrying labels \(s^{i}\) and \(s^{k}\), we can derive:
\[A^{as^{(i+l)r}}A^{as^{(k-l)r}}= A^{as^{ir}}A^{as^{kr}}F^{as^{jr}s^{lr}}_{aas^{(j+l)r}}(F^{as^{lr}s^{jr}}_{aas^{(j+l)r}})^{*}\] \[\times F^{as^{ir}s^{lr}}_{aas^{(i+l)r}}F^{s^{lr}\bar{s}^{lr}s^{kr}}_{s^{kr}0s^{(k-l)r}}(F^{as^{lr}s^{(k-l)r}}_{aas^{kr}})^{*} \tag{85}\]
where we have removed a common factor of \(A^{as^{jr}}\) (which is non-zero) from both sides, and used \(w_{\phi^{j}}(s^{k})=1\) for all \(j,k\). This can be true only if the coefficient does not depend on \(j\). Similar consistency requirements arise from acting with \(W^{2}_{\phi^{jr}}\) on a vertex \((s^{jr},a;a)\) and with \(W^{2}_{\phi^{jr}}W^{1}_{\phi^{jr}}\) on a vertex \((a,\bar{a};s^{jr})\). In Appendix E, using the conditions (5a, 15a), we show that in all three cases, in the gauge (23), the coefficients are indeed independent of \(j\). (A similar result holds for vertices of the form \((a,s^{i+jr};a^{i})\) with \(i<r\), for which the coefficient is also independent of \(j\)). Thus we see that, for configurations with only vertices involving \(a^{j}\), \(\bar{a}^{k}\) (\(j,k<r\)), and powers of \(s^{i}\), the simple string operators at each vertex yield a consistent set of equations for the \(A^{as^{j}}\).
Having established that a consistent solution exists, let us solve for the coefficients \(A^{as^{ir}}\). (As above, the coefficients \(A^{as^{i}}\) for \(i<r\) can be consistently set to 1 by a gauge transformation). Taking \(k=l=1\) in Eq. (85), we find:
\[A^{as^{(i+1)r}}=A^{as^{ir}}A^{as^{r}}F^{as^{ir}s^{r}}_{aas^{(i+1)r}}F^{s^{r}\bar{s}^{r}s^{r}}_{s^{r}00}F^{as^{jr}s^{r}}_{aas^{(j+1)r}}(F^{as^{r}s^{jr}}_{aas^{(j+1)r}})^{*} \tag{86}\]
As shown in Eq. (119), in fact
\[F^{as^{jr}s^{lr}}_{aas^{(j+l)r}}=F^{as^{lr}s^{jr}}_{aas^{(j+l)r}} \tag{87}\]
It follows that in our gauge of choice,
\[A^{as^{nr}}=(A^{as^{r}})^{n}\prod_{k=1}^{n-1}(F^{as^{kr}s^{r}}_{aas^{(k+1)r}}) \tag{88}\]
Thus of the \(q/r\) vertex coefficients \(A^{as^{nr}}\), we can freely choose only one, which we take to be \(A^{a,s^{r}}\).
Moreover, the coefficient \(A^{as^{r}}\) is not unconstrained: taking \(n=q/r\) in Eq. (88), and noting that \(s^{(q/r)r}=0\), we see that
\[(A^{as^{r}})^{q/r}=\prod_{k=1}^{q/r-1}(F^{as^{kr}s^{r}}_{aas^{(k+1)r}})^{*} \tag{89}\]
Thus, we see that \(A^{as^{r}}\) must be a \(q/r^{th}\) root of the product on the right-hand side, and we have exactly \(q/r\) possible choices for this coefficient, which we label \((A^{as^{r}})_{i}\), \(i=1,...q/r\). We note that the product on the right-hand side (and hence also \((A^{as^{r}})_{i}\)) has modulus \(1\), since by unitarity each factor \(F^{as^{lr}s^{r}}_{aas^{(l+1)r}}\) is a phase.
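As an illustration, the admissible values can be enumerated directly once the \(F\)-symbol product in Eq. (89) is known. The snippet below is a generic sketch of this counting; the name `F_product` stands for the right-hand side of Eq. (89) and is not part of the model data.

```python
import numpy as np

# Minimal sketch: the q/r allowed root coefficients (A^{a s^r})_i of Eq. (89)
# are the (q/r)-th roots of the (unit-modulus) F-symbol product F_product.
def allowed_A(F_product, q_over_r):
    base = F_product ** (1.0 / q_over_r)          # one particular root
    return [base * np.exp(2j * np.pi * k / q_over_r) for k in range(q_over_r)]

print(allowed_A(1.0 + 0j, 2))  # product = 1, two choices: +1 and -1
```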
multiplicity). In particular, there is no fusion multiplicity associated with the vacuum \(\bar{0}\), since the cyclic property of the fusion rules ensures that only vertices of the form \((a,\overline{a};s^{jr})\) are allowed.
The possible fusion rules for other types of vertices, such as vertices \((\tilde{a}_{\mu},\tilde{b}_{\lambda};\tilde{c}_{\nu})\) where all three labels split, are discussed in Appendix E.
In summary, the new Hilbert space \(\tilde{\mathcal{H}}\) consists of string-net states with both unsplit string types of the form (57), (58), whose branching rules are fixed by those of the original labels, and _split_ string types given by (90), whose branching rules are fixed by a combination of those of the original theory, and the solutions to Eq. (96).
## V String net model of the condensed phase
We now show that the ground state \(|\Psi\rangle\) of our extended string-net model as \(J\to\infty\) can be expressed as an ordinary string net ground state using the new label set \(\{\tilde{a}_{i},\tilde{b},\tilde{c},...\}\). In particular, we show how to use the vertex coefficients \(\{A^{ab}_{\rm c},B^{a^{j},b^{k}}_{o^{j+k}}\}\) described in the previous section to obtain the fusion data describing the string net in the condensed phase. If there is no splitting, we find that the vertex coefficients, together with the fusion data of the original category, fully fix the fusion data for the condensed string net. With splitting, these do not fully fix the fusion data; the remaining freedom can be eliminated by imposing the consistency conditions (5).
### The topological data for the condensed phases
Deep in the condensed phase, the basis states in \(\tilde{\mathcal{H}}\) allow us to express the condensed ground state \(|\Psi\rangle=P_{\phi}|\Phi\rangle\) as a new string-net condensed state with amplitudes
\[\Psi\left(\tilde{X}\right) \equiv\langle\tilde{X}|\Psi\rangle\] \[=\frac{1}{p^{2N_{\rm V}}}\sum_{X\in\tilde{X}}C_{\tilde{X}}(X) \langle X|\Psi\rangle\] \[\equiv\frac{1}{p^{2N_{\rm V}}}\sum_{X\in\tilde{X}}C_{\tilde{X}}(X) \Psi(X). \tag{98}\]
Here the sum over \(X\in\tilde{X}\) is over all configurations of uncondensed string labels compatible with the configuration \(\tilde{X}\). Note that when one or more labels split there are multiple distinct solutions for the vertex coefficients, associated with the multiple distinct split string labels; in this case the coefficients \(C(X)\) depend not only on \(X\) but on the choice of which label in each set \(\{\tilde{a}_{\mu}\}\) is in the configuration \(\tilde{X}\); to indicate this dependence, we have added a subscript, denoting the coefficient \(C(X)\) as \(C_{\tilde{X}}(X)\).
#### v.1.1 Topological data in theories without splitting
We first describe how to use Eq. (98) to obtain the topological data associated with the condensed string net in theories where none of the original labels split. We begin by simplifying Eq. (98), using the relation:
\[C_{\tilde{X}}(X_{1})\Psi(X_{1})=C_{\tilde{X}}(X_{2})\Psi(X_{2})\text{ for any }X_{1},X_{2}\in\tilde{X}. \tag{99}\]
To see that these are equal, observe that on the one hand, we have
\[C_{\tilde{X}}(X_{2})=C_{\tilde{X}}(X_{1})W(P)^{X_{1}}_{X_{2}} \tag{100}\]
for any \(X_{1},X_{2}\in\tilde{X}\) by (61,62). On the other hand, the new ground state \(|\Psi\rangle\) satisfies
\[\Psi(X_{2})=\langle X_{2}|\Psi\rangle=\frac{\langle X_{1}|W(P)|\Psi\rangle}{W( P)^{X_{1}}_{X_{2}}}=\frac{\Psi(X_{1})}{W(P)^{X_{1}}_{X_{2}}}. \tag{101}\]
Here we use (61) in the second equality and (45) in the third equality. Putting (100,101) together, we establish (99).
By using (99), we can rewrite (98) as
\[\Psi\left(\tilde{X}\right)=C(X_{0})\Psi(X_{0}) \tag{102}\]
where \(X_{0}\) denotes a reference configuration of our choice from the set \(X\in\tilde{X}\); in the following it will be convenient to choose \(X_{0}\) to have only trivial labels on all sticks. Observe that in the absence of splitting, each vertex in \(\tilde{X}\) corresponds to \(q^{2}\) configurations \(X\), obtained by acting with \(W^{1}_{\phi^{j}}W^{2}_{\phi^{k}}\) for \(0\leq j,k<q\) at each vertex. Each such configuration appears \((p/q)^{2}\) times when we act with \(P_{\phi}\) on \(X_{0}\). Thus summing over \(X\in\tilde{X}\) and expressing all terms in terms of \(\Psi(X_{0})\) gives a factor of \(p^{2}\) for each vertex, which exactly cancels the normalization pre-factor.
We can use the amplitudes of this new ground state to define the new F-symbols \(F^{\tilde{a}\tilde{b}\tilde{c}}_{\tilde{d}\tilde{e}\tilde{f}}\) and quantum dimensions \(d_{\tilde{a}}\) by
\[\Psi\left(\begin{array}{c}\includegraphics[width=142.376pt]{figs/2pt}\\ \end{array}\right) =\sum_{\tilde{f}}F^{\tilde{a}\tilde{b}\tilde{c}\tilde{b}\tilde{c} }_{\tilde{d}\tilde{e}\tilde{f}}\Psi\left(\begin{array}{c}\includegraphics[width=142.376pt]{figs/2pt}\\ \end{array}\right) \tag{103a}\] \[\Psi\left(\begin{array}{c}\includegraphics[width=142.376pt]{figs/2pt}\\ \end{array}\right) =\delta_{\tilde{c},\tilde{d}}\sqrt{\frac{d_{\tilde{d}_{\tilde{d}_{ \tilde{b}}}}}{d_{\tilde{c}}}}\Psi\left(\begin{array}{c}\includegraphics[width=142.376pt]{figs/2pt}\\ \end{array}\right). \tag{103b}\]
Here the grey regions denote the part of the configuration that is identical on both sides of the equation.
To relate the new coefficients to the old ones, consider a pair of reference configurations \(X_{0},X_{0}^{\prime}\) related by one of the local moves in Eq. (4). For convenience, we choose all reference configurations to be closed configurations in which all sticks carry the trivial label. These closed configurations are generated by those terms in \(P_{\phi}\) containing only closed loops of simple string operators, all
of which act as the identity on the ground state \(|\Phi\rangle\) of the original string net. (Recall that \(|\Psi\rangle=P_{\phi}|\Phi\rangle\)). Thus when \(X_{0},X_{0}^{\prime}\) are closed configurations, \(\Psi(X_{0})\propto\Phi(X_{0})\), and similarly for \(X_{0}^{\prime}\). Note that the constant of proportionality here depends only on the number of closed loop string operators in \(P_{\phi}\), and is the same for all reference configurations.
Using the fact that \(\Phi(X_{0}),\Phi(X_{0}^{\prime})\) are related by the original local rules, and applying (102) to both sides of (103), we conclude that when none of the labels \(\tilde{a},...\tilde{f}\) split, the old data and the new data are related by
\[\frac{B_{e}^{ab}B_{d}^{ec}}{B_{f}^{bc}B_{d}^{af}}F_{def}^{abc}=F_{ \tilde{d}\tilde{e}\tilde{f}}^{\tilde{a}\tilde{b}\tilde{c}} \tag{104a}\] \[d_{a}=d_{\tilde{a}} \tag{104b}\]
(The local moves (4b) and (4c) lead to the same definition (104b) of \(d_{\tilde{a}}\)). Here \(B_{e}^{ab}\) (which can also be root vertex coefficients \(A_{e}^{ab}\)) are the vertex coefficients defined in (66) -(70) and \(\{F_{def}^{abc},d_{a}\}\) are the original \(F\)-symbols and quantum dimensions for the ground state \(\Phi\). The labels \(a,b,c,d\) in \(F_{def}^{abc}\) are chosen such that they are compatible with the branching rules of the old theory, and such that \(a\in\tilde{a},b\in\tilde{b},c\in\tilde{c},d\in\tilde{d}\). This expression thus fully fixes the new \(F\)-symbols in terms of the old \(F\)-symbols and the vertex coefficients. Further, comparing this to the expression (8) for gauge transformations of the \(F\)-symbols, we can see that the root vertex coefficients \(A_{e}^{ab}\) with \(a,b\neq s^{i}\) are simply gauge transformations of the new \(F\)'s. (The remaining vertex coefficients, which are fully fixed by the choice of \(A_{e}^{ab}\), ensure that the left-hand side of the equation is independent of the specific choice \(a,b,c,d,e,f\) used in the calculation.)
#### v.1.2 Topological data in theories with splitting
In theories with splitting, instead of expressing amplitudes in terms of a single reference configuration, we replace Eq. (102) with:
\[\Psi\left(\tilde{X}\right)=\frac{1}{N(X_{0})}\sum_{X_{0}\in\tilde{X}}C_{\tilde {X}}(X_{0})\Psi(X_{0}) \tag{105}\]
where \(X_{0}\) denotes the set of reference configurations that are compatible with the choice of condensed labels \(\tilde{X}\), _and_ contain only trivial labels on the sticks. The reason for this replacement is that in theories with splitting, a single reference configuration \(X_{0}\) may not be sufficient to uniquely fix \(\tilde{X}\) via the choice of vertex coefficients entering \(C_{\tilde{X}}(X_{0})\). We therefore keep the minimum number of configurations needed to ensure that the right-hand side is associated with a specific condensed label set: a sum over all configurations compatible with \(\tilde{X}\) for which all sticks carry the trivial label.
Unlike in the unsplit case, the number of configurations associated with each reference configuration \(X_{0}\) in the sum does not, in general, fully cancel the pre-factor of \(1/p^{2N_{V}}\) in Eq. (98). Here \(N(X_{0})\) counts the number of distinct products of simple string operators that leave \(X_{0}\) unchanged - meaning that they change neither the labels on the sticks, nor any of the edge labels. This number depends on the number of closed loops in \(X_{0}\) along which all labels split. Relative to the unsplit case, the number of distinct configurations \(X\) in the sum (98) is reduced by \(N(X_{0})\).
To find \(F_{\tilde{d}\tilde{e}\tilde{f}}^{\tilde{a}\tilde{b}\tilde{c}}\), we note that:
\[\Psi \tag{106}\] \[=\sum_{e\in\tilde{e}}(B_{e}^{ab})_{\lambda}^{\mu\nu}(B_{d}^{ec})_{ \sigma}^{\lambda\rho}\] \[\quad\times\sum_{f}F_{def}^{abc}\Phi\]
In the last equality, the labels \(\mu,\nu,\rho,\sigma\) identify the external split legs as \(\tilde{a}_{\mu},\tilde{b}_{\nu},\tilde{c}_{\rho}\), and \(\tilde{d}_{\sigma}\) respectively. However, it can happen that there is more than one solution \(\tilde{f}_{\kappa}\) compatible with both the old label \(f\), and the new fusion rules. In other words, there may be more than one choice of \(\kappa\) for which \((B_{f}^{bc})_{\kappa}^{\nu\rho}(B_{d}^{af})_{\sigma}^{\mu\kappa}\neq 0\). We conclude that the old data and the new data are related by
\[\sum_{e:e\in\tilde{e}}\frac{(B_{e}^{ab})_{\lambda}^{\mu\nu}(B_{d}^{ec})_{\sigma}^{\lambda\rho}}{(B_{f}^{bc})_{\kappa}^{\nu\rho}(B_{d}^{af})_{\sigma}^{\mu\kappa}}F_{def}^{abc}=\sum_{\tilde{f}_{\kappa}:f\in\tilde{f}}F_{\tilde{d}_{\sigma}\tilde{e}_{\lambda}\tilde{f}_{\kappa}}^{\tilde{a}_{\mu}\tilde{b}_{\nu}\tilde{c}_{\rho}}. \tag{107}\]
To find \(d_{\tilde{a}}\), we observe that we also have:
\[\sum_{\tilde{X}\ni X_{0}}\Psi\left(\tilde{X}\right)=\frac{1}{N(X_{0})}\sum_{\tilde{X}\ni X_{0}}C_{\tilde{X}}(X_{0})\Psi(X_{0}). \tag{108}\]
where in this case, we can choose a single reference configuration. Letting \(\tilde{X}\) be a configuration with a single closed loop carrying the label \(\tilde{a}\), and \(X_{0}\) to have a single loop carrying the label \(a\), we find that the number of terms in the sum is precisely \(N(X_{0})\), and that all terms in the sum contribute equally. From this, we conclude that
\[d_{a}=\sum_{\tilde{a}:a\in\tilde{a}}d_{\tilde{a}}. \tag{109}\]
Thus, we see that in theories with splitting, the original fusion data and vertex coefficients do not fully fix all of the new \(F^{\prime}s\). In this case, the remaining freedom must be used to ensure that the new \(F\)'s satisfy the consistency conditions (5), as well as the unitarity conditions (6).
### Consistency conditions for \(F_{\tilde{d}\tilde{e}\tilde{f}}^{\tilde{a}\tilde{b}\tilde{c}}\)
We now show that the new data \(\{F_{\tilde{d}\tilde{e}\tilde{f}}^{\tilde{a}\tilde{b}\tilde{c}}\}\) satisfy the consistency conditions (5). We begin with the condition
(5b), which requires \(F^{\tilde{a}\tilde{b}\tilde{c}}_{\tilde{d}\tilde{e}\tilde{f}}=1\) if \(\tilde{a}\) or \(\tilde{b}\) or \(\tilde{c}=\tilde{0}\). We wish to show that the right-hand side is equal to 1 if \(a\), \(b\), or \(c\) are powers of \(s\). Indeed, using Eq. (62), one can show:
\[\frac{B^{ab^{i}}_{c}B^{bs^{i}}_{c}}{B^{cs^{i}}B^{ab}_{c}}F^{abs^{i}}_{c^{i}c^{i }b^{i}}=\frac{B^{as^{i}}_{c^{i}}B^{a^{i}b}_{c^{i}}}{B^{ab^{i}}_{c^{i}}B^{s^{i}b }}F^{as^{i}b}_{c^{i}a^{i}b^{i}}=\frac{B^{a^{i}b}_{c^{i}}B^{s^{i}a}}{B^{ab}_{c}B^ {s^{i}c}}F^{s^{i}ab}_{c^{i}a^{i}c}=1 \tag{110}\]
If \(a\), \(b\), or \(c\) are powers of \(s\), then there are no sums in Eq. (107); thus Eq. (110) ensures that the new \(F\)'s satisfy Eq. (5b).
To see that the first term in Eq. (110) is equal to unity, apply a \(W^{2}_{\phi^{i}}\)-type string to the vertex \((a,b;c)\), with the stick on the \(c\) edge carrying the label \(s^{i}\). From Eq. (22), in the gauge (23), the matrix element associated with this string operator is \(F^{abs^{i}}_{c^{i}cb^{i}}\), and Eq. (62) implies that \(B^{cs^{i}}B^{ab}_{c}=B^{ab^{i}}_{c^{i}}B^{bs^{i}}F^{abs^{i}}_{c^{i}cb^{i}}\). The second equality is obtained by applying a product of the form \(W^{2}_{\phi^{i}}W^{1}_{\phi^{-i}}\) to a configuration with the two vertices \((a,s^{i};a^{i})\) and \((a^{i},b;c^{i})\), and using Eqs. (5), (15), and (79). The third equality can be obtained by acting with a \(W^{1}_{\phi^{i}}\) string on the vertex \((a,b;c)\), with an \(s^{i}\) labeled stick on the \(c\) edge. In our gauge of choice, the corresponding matrix element is \(\overline{w}_{\phi^{i}}(b)F^{abs^{i}}_{c^{i}cb^{i}}(F^{as^{i}b}_{c^{i}a^{i}b^{i}})^{*}\), which by Eq. (15) is equal to \(w_{\phi^{i}}(a)\bar{w}_{\phi^{i}}(c)(F^{s^{i}ab}_{c^{i}a^{i}b})^{*}\). This gives
\[A^{ab}_{c}A^{cs^{i}}\bar{\omega}_{\phi^{i}}(c)=F^{s^{i}ab}_{c^{i}a^{i}c}\bar{ \omega}_{\phi^{i}}(a)A^{as^{i}}B^{a^{i}b}_{c} \tag{111}\]
Using Eq. (79), we obtain the stated result.
Next, we turn to the pentagon identity (5a). Multiplying both sides of (5a) by \(B^{fc}_{g}B^{gd}_{e}B^{ab}_{f}\), and summing over \(f\in\tilde{f}\) and \(g\in\tilde{g}\), gives
\[\sum_{f\in\tilde{f},g\in\tilde{g}}F^{fcd}_{egl}F^{abl}_{efk}B^{fc}_{g}B^{gd}_{e}B^{ab}_{f}\] \[=\sum_{f\in\tilde{f},g\in\tilde{g},h}F^{abc}_{gfh}F^{ahd}_{egk}F^{bcd}_{khl}B^{fc}_{g}B^{gd}_{e}B^{ab}_{f} \tag{112}\]
We first consider the right hand side of (112). Using Eq. (104), we have:
\[\begin{split}&\sum_{h,g\in\tilde{g}}(\sum_{f\in\tilde{f}}F^{abc}_{gfh}B^{ab}_{f}B^{fc}_{g})F^{ahd}_{egk}F^{bcd}_{khl}B^{gd}_{e}\\ &=\sum_{h,\tilde{h}\ni h}F^{\tilde{a}\tilde{b}\tilde{c}}_{\tilde{g}\tilde{f}\tilde{h}}(\sum_{g\in\tilde{g}}F^{ahd}_{egk}B^{ah}_{g}B^{gd}_{e})F^{bcd}_{khl}B^{bc}_{h}\\ &=\sum_{\tilde{h},\tilde{k}\ni k}F^{\tilde{a}\tilde{b}\tilde{c}}_{\tilde{g}\tilde{f}\tilde{h}}F^{\tilde{a}\tilde{h}\tilde{d}}_{\tilde{e}\tilde{g}\tilde{k}}(\sum_{h\in\tilde{h}}F^{bcd}_{khl}B^{hd}_{k}B^{bc}_{h})B^{ak}_{e}\\ &=\sum_{\tilde{h},\tilde{k}\ni k,\tilde{l}\ni l}F^{\tilde{a}\tilde{b}\tilde{c}}_{\tilde{g}\tilde{f}\tilde{h}}F^{\tilde{a}\tilde{h}\tilde{d}}_{\tilde{e}\tilde{g}\tilde{k}}F^{\tilde{b}\tilde{c}\tilde{d}}_{\tilde{k}\tilde{h}\tilde{l}}B^{cd}_{l}B^{bl}_{k}B^{ak}_{e}\end{split} \tag{113}\]
(In the third line, we exploit the fact that \(\sum_{h,\tilde{h}\ni h}=\sum_{\tilde{h},h\in\tilde{h}}\)). A similar treatment of the left hand side of (112) gives \(\sum_{\tilde{l}\ni l,\tilde{k}\ni k}F^{\tilde{f}\tilde{c}\tilde{d}}_{\tilde{e}\tilde{g}\tilde{l}}F^{\tilde{a}\tilde{b}\tilde{l}}_{\tilde{e}\tilde{f}\tilde{k}}B^{cd}_{l}B^{bl}_{k}B^{ak}_{e}\). Thus the new \(F\)-symbols satisfy:
\[\sum_{\tilde{l}\ni l,\tilde{k}\ni k}F^{\tilde{f}\tilde{c}\tilde{d}}_{\tilde{e}\tilde{g}\tilde{l}}F^{\tilde{a}\tilde{b}\tilde{l}}_{\tilde{e}\tilde{f}\tilde{k}}=\sum_{\tilde{l}\ni l,\tilde{k}\ni k}\sum_{\tilde{h}}F^{\tilde{a}\tilde{b}\tilde{c}}_{\tilde{g}\tilde{f}\tilde{h}}F^{\tilde{a}\tilde{h}\tilde{d}}_{\tilde{e}\tilde{g}\tilde{k}}F^{\tilde{b}\tilde{c}\tilde{d}}_{\tilde{k}\tilde{h}\tilde{l}}. \tag{114}\]
If \(a^{r}\neq a\) for any \(r<q\), then the sums over \(\tilde{l}\) and \(\tilde{k}\) can be dropped, and the new \(F\)-symbols automatically satisfy the pentagon identity (5a). Otherwise, Eq. (104) only constrains certain sums of the new \(F\)'s, and we must use the remaining freedom to choose the new \(F\)'s to satisfy Eq. (5a).
### Effective Hamiltonian
We have seen that the ground state \(|\Psi\rangle\) of the condensed phase is a string net state, described in terms of the new labels \(\{\tilde{a}\}\), with new \(F\) symbols and quantum dimensions given by Eqs. (107), (109), together with the consistency conditions (5). Since \(|\Psi\rangle\) is also a ground state of the effective Hamiltonian in the condensed phase, this suggests that our effective Hamiltonian acts on the labels in the new basis as a (conventional) string net Hamiltonian.
In the absence of splitting, it is relatively straightforward to see that this is indeed the case. Consider the action of \(P_{\phi}B^{\phi}_{p}\) on a state \(\langle\tilde{X}|\) in our new string-net basis. As above, we can use the fact that \(\langle\tilde{X}|P_{\phi}B^{\phi}_{p}=\langle\tilde{X}|B^{\phi}_{p}=C(X_{0}) \langle X_{0}|B^{\phi}_{p}\), where \(X_{0}\) is a configuration compatible with \(\tilde{X}\), and for which all stick labels are trivial. We thus have
\[\langle\tilde{X}|B^{\phi,t}_{p}=\sum_{X^{\prime}_{0}}C(X_{0})\langle X^{\prime}_{0}|B^{t,i_{1}...i_{6}j_{1}...j_{6}}_{p,i_{1}^{\prime}...i_{6}^{\prime}j_{1}^{\prime}...j_{6}^{\prime}}(e_{1},...e_{12};\mathbb{I},\mathbb{I},\mathbb{I}) \tag{115}\]
where \(j_{k}=i_{k},j^{\prime}_{k}=i^{\prime}_{k}\) for \(k=4,5,6\), and \(e_{10}=e_{11}=e_{12}=0\). Here \(\langle X^{\prime}_{0}|\) is identical to \(\langle X_{0}|\) except on the boundary of the plaquette \(p\), where edges labeled \(i_{1}...i_{6},j_{1}...j_{6}\) in \(\langle X_{0}|\) now carry labels \(i^{\prime}_{1}...i^{\prime}_{6},j^{\prime}_{1}...j^{\prime}_{6}\). Applying Eq. (104a) repeatedly, we find that the matrix element can be expressed as a product of new \(F\) symbols:
\[\sum_{X^{\prime}}C(X_{0})\langle X^{\prime}|B^{t,i_{1}...i_{6}j_{1}...j_ {6}}_{p,i_{1}^{\prime}...i_{\phi}j_{1}...j_{6}}(e_{1},...
in the third line. Thus our effective Hamiltonian in the string-net phase is exactly the new string net Hamiltonian.
The situation in theories with splitting is similar, though more subtle due to the fact that a single label \(t\) in the original theory can represent multiple labels \(\tilde{t}_{\mu}\) in the new theory.
## VI Examples
In this section, we work out some illustrative examples. We begin by considering condensation in the abelian \(\mathbb{Z}_{2},\mathbb{Z}_{4},\mathbb{Z}_{6}\), \(\mathbb{Z}_{4}\times\mathbb{Z}_{4}\) string-net models. In abelian theories there is never any splitting, and the new fusion data follows directly from the coefficients \(B^{a_{i},b^{j}}_{c^{i+k}\cdot\cdot\cdot}\). We also describe condensation of abelian bosons in two non-abelian string-net models, based on the fusion categories \(\text{Rep}(S_{3})\) and \(SU(2)_{4}\). In these models, we also fully construct the new Hilbert space \(\tilde{\mathcal{H}}\) after condensation and compute new \(F\)-symbols and quantum dimensions for the condensed phases.
Throughout our discussion of abelian string-net models, we will use
\[F(a,b,c)=F^{abc}_{def} \tag{117}\]
for brevity, since other indices can be deduced from the abelian branching rules. Moreover, for string nets based on the group \(G=\mathbb{Z}_{N}\), there are \(N\) distinct solutions to (4), with the explicit form[98; 99]
\[F(a,b,c)=e^{2\pi i\frac{pa}{N^{2}}(b+c-[b+c])}. \tag{118}\]
The integer parameter \(p=0,\ldots,N-1\) labels the \(N\) distinct solutions. The arguments \(a,b,c\) take values in \(0,\ldots,N-1\) and \([b+c]\) denotes \(b+c\) (mod \(N\)) with values also taken in \(0,\ldots,N-1\). For each of the \(N\) distinct solutions, we can construct a corresponding string-net model. Each such string net model has \(N^{2}\) topologically distinct quasiparticle excitations labeled by \(\phi=(s,m)\) where \(s,m=0,1,\ldots,N-1\). The string operator \(W_{\phi}(P)\) which creates \(\phi=(s,m)\) is defined by (13) with the string parameters
\[w_{\phi}(a)=e^{2\pi i(\frac{psa}{N^{2}}+\frac{ma}{N})} \tag{119}\]
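These \(F\)-symbols can be checked numerically. The sketch below is our own consistency check, assuming only Eq. (118); it verifies that each of the \(N\) solutions satisfies the abelian form of the pentagon identity (5a), in which all internal labels are fixed by the \(\mathbb{Z}_{N}\) branching rules.

```python
import numpy as np
from itertools import product

# F(a,b,c) of Eq. (118); [b+c] is b+c reduced mod N, so b+c-[b+c] is 0 or N.
def F(a, b, c, N, p):
    return np.exp(2j * np.pi * p * a * (b + c - (b + c) % N) / N**2)

# Abelian pentagon identity: F(a+b,c,d) F(a,b,c+d) = F(a,b,c) F(a,b+c,d) F(b,c,d),
# with all sums taken mod N (internal labels fixed by the branching rules).
def pentagon_ok(N, p, tol=1e-12):
    for a, b, c, d in product(range(N), repeat=4):
        lhs = F((a + b) % N, c, d, N, p) * F(a, b, (c + d) % N, N, p)
        rhs = F(a, b, c, N, p) * F(a, (b + c) % N, d, N, p) * F(b, c, d, N, p)
        if abs(lhs - rhs) > tol:
            return False
    return True

print(all(pentagon_ok(4, p) for p in range(4)))  # True: all four Z_4 models
print(all(pentagon_ok(6, p) for p in range(6)))  # True: all six Z_6 models
```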
### \(\mathbb{Z}_{2}\) string-net model
To set the stage, we begin with the \(\mathbb{Z}_{2}\) string-net model, whose condensation transitions and phase diagram have been studied extensively in the literature [100; 101; 102; 103; 104; 73; 87; 88; 105; 106; 73; 78; 107; 108; 109; 110; 111; 112; 113; 114]. Here, we briefly review how our construction replicates these results.
The \(\mathbb{Z}_{2}\) string-net model has two types of strings \(\{0,1\}\) with dual strings \(\bar{0}=0,\bar{1}=1\). The branching rules are \(\{(a,b;c)\) with \(a+b=c\) mod \(2\}\). There are two distinct solutions \(F(1,1,1)=\pm 1\) to (5). The corresponding models are the Toric code[3] and the double semion model respectively. The Toric code has two \(\mathbb{Z}_{2}\) bosons \(\phi=(1,0)\) and \(\phi=(0,1)\), while the double semion model has one \(\mathbb{Z}_{2}\) boson \(\phi=(0,1)\).
We first consider the condensation of \(\phi=(0,1)\) in the two models, as the two condensed phases are identical. After condensation, only the string types \(a\) satisfying \(w_{\phi}(a)=(-1)^{a}=1\) remain, namely the single remaining string type is \(\bar{0}=\{0\}\); thus \(\tilde{\mathcal{H}}\) contains only the vacuum state, which is the same as the vacuum state in \(\mathcal{H}_{\phi}\). Hence, there is no string-net topological order after \(\phi\) condensation.

Next, we consider the \(\phi=(1,0)\) condensation in the Toric code. After condensation, the single new string type is \(\bar{0}=\{0,1\}\), and thus \(\tilde{\mathcal{H}}\) contains only the vacuum state, which is the equal superposition of all states in \(\mathcal{H}_{\phi}\). Thus there is no string-net topological order after \(\phi\) condensation.
With the Hamiltonian described here, all of these phase transitions are in the 2+1D Ising universality class.
### \(\mathbb{Z}_{4}\) string-net model
We next show how our construction allows us to construct certain condensed phases of the \(\mathbb{Z}_{4}\) string net model. The full phase diagram of this model was studied in detail in Ref. [105].
The \(\mathbb{Z}_{4}\) string-net model has four types of strings \(\{0,1,2,3\}\) with dual strings \(\bar{0}=0,\bar{1}=3,\bar{2}=2,\bar{3}=1\). The branching rules are \(\{(a,b;c)\) with \(a+b=c\) mod \(4\}\). The Hilbert space consists of all possible string-nets with the above string types and branching rules.
There are four distinct solutions to the self-consistency conditions (5)
\[F(a,b,c)=e^{i\frac{2\pi pa(b+c-[b+c]_{4})}{4^{2}}} \tag{120}\]
labeled by \(p=0,1,2,3\). Here \([x]_{n}=x\) mod \(n\). The corresponding \(\mathbb{Z}_{4}\) string-net models realize the topological order described by the Chern-Simons theory with the \(K\)-matrix
\[K=\left(\begin{array}{cc}0&4\\ 4&-2p\end{array}\right).\]
All four models have a \(\mathbb{Z}_{4}\) boson \(\phi=(0,1)\) and a \(\mathbb{Z}_{2}\) boson \(\phi=(0,2)\). In addition, the \(p=0\) model has two other \(\mathbb{Z}_{2}\) bosons \(\phi=(2,0)\) and \(\phi=(2,2)\). We consider the topological order after condensation of each of these bosons.
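The boson content quoted here can be confirmed from the \(K\)-matrix. The sketch below assumes the standard identification of the anyon \(\phi=(s,m)\) with the integer vector \(l=(s,m)\), so that its topological spin is \(\theta=\pi l^{T}K^{-1}l\); the particles with \(\theta=0\) mod \(2\pi\) are listed for each \(p\).

```python
import numpy as np
from itertools import product

# Bosons of the Z_N string-net model with K = [[0, N], [N, -2p]]:
# theta/(2*pi) = (1/2) l K^{-1} l must be an integer for l = (s, m).
def bosons(N, p):
    Kinv = np.linalg.inv(np.array([[0, N], [N, -2 * p]], dtype=float))
    found = []
    for s, m in product(range(N), repeat=2):
        x = 0.5 * np.array([s, m]) @ Kinv @ np.array([s, m])
        if (s, m) != (0, 0) and abs(x - round(x)) < 1e-9:
            found.append((s, m))
    return found

print(bosons(4, 0))  # includes (0,1), (0,2), (2,0) and (2,2)
print(bosons(4, 1))  # only the pure charges (0,1), (0,2), (0,3) remain bosons
```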
As in the \(\mathbb{Z}_{2}\) case, condensing \(\phi=(0,1)\) leads to a trivial order. Thus, we begin with the condensation of \(\phi=(0,2)\). In the condensed phase, the remaining string types \(a\) are those which satisfy \(w_{\phi}(a)=e^{i2\pi\frac{2a}{4}}=1\), namely, the remaining string types are \(a\in\{0,2\}\). Thus the Hilbert space \(\tilde{\mathcal{H}}\) after \(\phi\)-condensation contains string-nets with the new string labels \(\{\tilde{0}=\{0\},\tilde{1}=\{2\}\}\) and
the \(\mathbb{Z}_{2}\) branching rules. As discussed in Eq. (68), in this case all non-vanishing vertex coefficients can be set to 1. Solving Eq. (104), we then find that the \(F\) symbols of the condensed phase are simply a subset of those of the uncondensed phase. Specifically:
\[\text{all }A=1,\quad F(\tilde{1},\tilde{1},\tilde{1})=1,\quad d_{\tilde{1}}=1 \tag{121}\]
for the \(p=0,2\) models, and
\[\text{all }A=1,\quad F(\tilde{1},\tilde{1},\tilde{1})=-1,\quad d_{\tilde{1}}=1 \tag{122}\]
for the \(p=1,3\) models. Thus, the \(\phi=(0,2)\) condensed phase in the \(p=0,2\) models is described by the Toric code while the \(\phi=(0,2)\) condensed phase in the \(p=1,3\) models is described by the double semion model.
Condensing \(\phi=(1,0)\) also leads to a trivial topological phase; thus we next turn to the condensation of the \(\phi=(2,0)\) and \(\phi=(2,2)\) bosons in the \(p=0\) model. The Hilbert space \(\tilde{\mathcal{H}}\) for both cases contains string-nets with the new string types \(\{\tilde{0}=\{0,2\},\tilde{1}=\{1,3\}\}\) and the \(\mathbb{Z}_{2}\) branching rules. To find the topological order after condensation, we solve for the vertex coefficients, and use Eq. (104) to deduce the topological data of the condensed phase. When the condensing boson is \(\phi=(2,0)\), we find that
\[\text{all }A=1,\quad F(\tilde{1},\tilde{1},\tilde{1})=1,\quad d_{\tilde{1}}=1. \tag{123}\]
In this case, the condensed \(F\)-symbols are simply a subset of the uncondensed ones; this is always the case when condensing \((q,0)\)-type bosons in untwisted abelian lattice gauge theories. Thus, the \(\phi=(2,0)\) condensed phase is described by the Toric code.
When the condensing boson is \(\phi=(2,2)\), in contrast, not all vertex coefficients can be chosen to be unity. In this case, we can choose:
\[A^{t2}=(-1)^{t},\text{ other }A=1,\quad F(\tilde{1},\tilde{1},\tilde{1})=-1, \quad d_{\tilde{1}}=1. \tag{124}\]
with \(t=0,1,2,3\). Thus, the \(\phi=(2,2)\) condensed phase is described by the double semion model.
### \(\mathbb{Z}_{6}\) string-net models
The \(\mathbb{Z}_{6}\) string-net model has six types of strings \(\{0,1,2,\ldots,5\}\). The dual string type is defined by \(\bar{a}=6-a\bmod 6\), while the branching rules are the triplets \((a,b;c)\) that satisfy \(a+b=c\) (mod 6). By using the general solution
\[F(a,b,c)=e^{i\frac{2\pi pa(b+c-[b+c]_{6})}{6^{2}}}, \tag{125}\]
we can construct six distinct string-net models labeled by \(p=0,1,\ldots,5\). The corresponding topological order can be described by the Chern-Simons theory with the K-matrix
\[K=\left(\begin{array}{cc}0&6\\ 6&-2p\end{array}\right).\]
Analogously to the previous examples, condensing a \(\mathbb{Z}_{6}\) boson results in a trivial topological phase. Thus we focus on condensing the \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{3}\) abelian bosons, which are summarized for the 6 distinct \(\mathbb{Z}_{6}\) string-net models in Table 1.
We first consider condensing \(\mathbb{Z}_{3}\) bosons. Condensing \(\phi=(0,2)\), which is a boson for any \(p\), leaves the string types \(\{\tilde{0}=\{0\},\tilde{1}=\{3\}\}\) with \(\mathbb{Z}_{2}\) branching rules. In this case, the condensed phase is described by:
\[\text{all }A=1,\quad F(\tilde{1},\tilde{1},\tilde{1})=1,\quad d_{\tilde{1}}=1 \tag{126}\]
for the \(p=0,2,4\) models and
\[\text{all }A=1,\quad F(\tilde{1},\tilde{1},\tilde{1})=-1,\quad d_{\tilde{1}}=1 \tag{127}\]
for the \(p=1,3,5\) models. Thus, the \(\phi=(0,2)\) condensed phase in the \(p=0,2,4\) models is described by the Toric code while the \(\phi=(0,2)\) condensed phase in the \(p=1,3,5\) models is described by the doubled semion model.
Second, we condense the \(\mathbb{Z}_{3}\) boson \(\phi=(2,0)\) in the \(p=0\) model and the \(\mathbb{Z}_{3}\) boson \(\phi=(2,2)\) in the \(p=3\) model. The new string labels are \(\{\tilde{0}=\{0,2,4\},\tilde{1}=\{1,3,5\}\}\) with \(\mathbb{Z}_{2}\) branching rules after condensation. Analogous to the \(\mathbb{Z}_{4}\) case, we find
\[\text{all }A=1,\quad F(\tilde{1},\tilde{1},\tilde{1})=d_{\tilde{1}}=1 \tag{128}\]
for condensation of \(\phi=(2,0)\) in the \(p=0\) model and
\[\begin{split} A^{12}=A^{32}=A^{52}=-1,\text{other }A=1,\\ F(\tilde{1},\tilde{1},\tilde{1})=-1,d_{\tilde{1}}=1.\end{split} \tag{129}\]
for condensation of \(\phi=(2,2)\) in the \(p=3\) model. Thus the two condensed phases are described by the Toric code and the doubled semion model respectively.
Next, we consider condensing \(\mathbb{Z}_{2}\) bosons. When the condensing boson is \(\phi=(0,3)\), the remaining string types are \(\{\tilde{0}=\{0\},\tilde{1}=\{2\},\tilde{2}=\{4\}\}\) with \(\mathbb{Z}_{3}\) branching rules. After solving (104), we find
\[\text{all }A=1,\quad F(\tilde{a},\tilde{b},\tilde{c})=e^{i2\pi\frac{q\tilde{a}(\tilde{b}+\tilde{c}-[\tilde{b}+\tilde{c}]_{3})}{3^{2}}},\quad d_{\tilde{a}}=1 \tag{130}\]
with \(q=0\) for the \(p=0,3\) models and \(q=1\) for the \(p=1,4\) models and \(q=2\) for the \(p=2,5\) models. Thus, the \(\phi=(0,3)\) condensed phase in the \(p=0,3\) models is described by the \(\mathbb{Z}_{3}\) string-net model with \(q=0\) while the \(\phi=(0,3)\) condensed phase in the \(p=1,4\) and \(p=2,5\) models is described by the \(\mathbb{Z}_{3}\) string-net model with \(q=1\) and \(q=2\) respectively.
Finally, we condense the \(\phi=(3,0),(3,1),(3,2)\) bosons in the \(\mathbb{Z}_{6}\) string-net models with \(p=0,2,4\) respectively. The new string types after condensation are \(\{\tilde{0}=\{0,3\},\tilde{1}=\{1,4\},\tilde{2}=\{2,5\}\}\) with \(\mathbb{Z}_{3}\) branching rules. Condensing \(\phi=(3,0)\) in the \(p=0\) model, all vertex coefficients can be taken to be 1, and we obtain:
\[\text{all }A=1,\quad F(\tilde{a},\tilde{b},\tilde{c})=1,\quad d_{\tilde{a}}=1. \tag{131}\]
To condense \(\phi=(3,1)\) in the \(p=2\) model, we may take:
\[A^{13}=A^{43}=e^{i\frac{2\pi}{3}},A^{23}=A^{53}=e^{-i\frac{2\pi}{3} },\quad\text{other }A=1\] \[F(\tilde{1},\tilde{1},\tilde{2})=F(\tilde{1},\tilde{2},\tilde{1} )=F(\tilde{1},\tilde{2},\tilde{2})=e^{-i\frac{2\pi}{3}},\] \[F(\tilde{2},\tilde{2},\tilde{1})=F(\tilde{2},\tilde{1},\tilde{2 })=F(\tilde{2},\tilde{2},\tilde{2})=e^{i\frac{2\pi}{3}},\quad\text{other }F=1 \tag{132}\]
Finally, to condense \(\phi=(3,2)\) in the \(p=4\) model, we obtain:
\[A^{13}=A^{43}=e^{-i\frac{2\pi}{3}},A^{23}=A^{53}=e^{i\frac{2\pi} {3}},\quad\text{other }A=1\] \[F(\tilde{1},\tilde{1},\tilde{2})=F(\tilde{1},\tilde{2},\tilde{1 })=F(\tilde{1},\tilde{2},\tilde{2})=e^{i\frac{2\pi}{3}},\] \[F(\tilde{2},\tilde{2},\tilde{1})=F(\tilde{2},\tilde{1},\tilde{2 })=F(\tilde{2},\tilde{2},\tilde{2})=e^{-i\frac{2\pi}{3}},\quad\text{other }F=1\] \[d_{\tilde{a}}=1. \tag{133}\]
Thus the three condensed phases are described by the \(\mathbb{Z}_{3}\) string-net models labeled by \(q=0,2,1\) respectively.
We summarize the condensed phases after condensing abelian bosons in the \(\mathbb{Z}_{6}\) models in Table 1.
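The boson content quoted in Table 1 can also be reproduced numerically. The sketch below assumes the standard form of the topological twist of a dyon \((s,m)\) in the \(\mathbb{Z}_{N}\) model labeled by \(p\), namely \(\theta_{(s,m)}=e^{2\pi i(sm/N+ps^{2}/N^{2})}\); this formula is not derived in this section and is used here only as a consistency check.

```python
import cmath

# Assumed twist of the dyon (s, m) in the Z_N string-net model labeled by p:
#   theta(s, m) = exp(2*pi*i*(s*m/N + p*s**2/N**2)).
def is_boson(s, m, p, N=6):
    theta = cmath.exp(2j * cmath.pi * (s * m / N + p * s**2 / N**2))
    return abs(theta - 1) < 1e-9

# (s, m) bosons of Table 1 and the p values for which they are listed:
checks = [((0, 2), range(6)), ((0, 3), range(6)),
          ((2, 0), [0]), ((2, 2), [3]),
          ((3, 0), [0]), ((3, 1), [2]), ((3, 2), [4])]
for (s, m), ps in checks:
    print((s, m), all(is_boson(s, m, p) for p in ps))   # all True
```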
### \(\mathbb{Z}_{4}\times\mathbb{Z}_{4}\) string-net model
The \(\mathbb{Z}_{4}\times\mathbb{Z}_{4}\) string-net model has 16 types of strings labeled by \(a\in\{(a_{1},a_{2}),a_{1},a_{2}\in\{0,1,2,3\}\}\) with dual strings \(\bar{a}=(\bar{a}_{1},\bar{a}_{2})=(4-a_{1},4-a_{2}).\) The branching rules are \(\{(a_{i},b_{i};c_{i})\text{ with }a_{i}+b_{i}=c_{i}\text{ mod }4\}\) with \(i=1,2\). The Hilbert space consists of all possible string-nets with the above string types and branching rules.
The general form of solutions to the self-consistency conditions (5) for \(\mathbb{Z}_{N}\times\mathbb{Z}_{N}\) string net models is known[98; 99]. Here, we consider one such solution, for which:
\[F(a,b,c)=e^{i2\pi a^{T}\mathbf{N}^{-1}\mathbf{P}\mathbf{N}^{-1 }(b+c-[b+c])} \tag{134}\]
with
\[\mathbf{N}=\left(\begin{array}{cc}4&0\\ 0&4\end{array}\right),\mathbf{P}=\left(\begin{array}{cc}0&2\\ 0&0\end{array}\right). \tag{135}\]
Here the square bracket \([b+c]\) denotes a 2-component vector whose \(i\)-th component is \(b_{i}+c_{i}\) (mod 4). From the solution (134), we can construct the \(\mathbb{Z}_{4}\times\mathbb{Z}_{4}\) string-net Hamiltonian.
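For concreteness, a short numerical sketch of (134) and (135) is given below; it evaluates the \(F\)-symbol from the matrices \(\mathbf{N},\mathbf{P}\) and spot-checks the abelian pentagon identity on random label choices (the variable names are ours).

```python
import itertools
import numpy as np

# Evaluate the Z4 x Z4 F-symbol of Eq. (134) with the matrices of Eq. (135)
# and spot-check the abelian pentagon identity on random quadruples of labels.
Ninv = np.linalg.inv(np.diag([4.0, 4.0]))
P = np.array([[0.0, 2.0], [0.0, 0.0]])
M = Ninv @ P @ Ninv

def F(a, b, c):
    return np.exp(2j * np.pi * (a @ M @ (b + c - (b + c) % 4)))

labels = [np.array(v) for v in itertools.product(range(4), repeat=2)]
rng = np.random.default_rng(0)
ok = True
for _ in range(2000):
    a, b, c, d = (labels[i] for i in rng.integers(len(labels), size=4))
    lhs = F(a, b, c) * F(a, (b + c) % 4, d) * F(b, c, d)
    rhs = F((a + b) % 4, c, d) * F(a, b, (c + d) % 4)
    ok = ok and abs(lhs - rhs) < 1e-9
print(ok)   # True
```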
We focus on the four \(\mathbb{Z}_{2}\) bosons in the model and we denote them by
\[\phi_{1}=(2,0,0,3),\quad\phi_{2}=(2,0,0,1),\] \[\phi_{3}=(2,0,2,1),\quad\phi_{4}=(2,0,2,3).\]
Here the bosons are labeled by \((s_{1},s_{2},m_{1},m_{2})\) with \(s_{1},s_{2}\) being the flux and \(m_{1},m_{2}\) being the charge carried by the particle. Now, we consider the condensation of the four bosons \(\phi_{i}\) in the \(\mathbb{Z}_{4}\times\mathbb{Z}_{4}\) model in order. In the \(\phi_{i}\) condensed phase, we define the 2-component new string labels by
\[\tilde{a}=(\tilde{a}_{1},\tilde{a}_{2})=\{(a_{1},a_{2}),(2+a_{1}, a_{2})\}\] \[\text{with }a_{1}\in\{0,1\},a_{2}\in\{0,1,2,3\}\]
To find the topological order for the \(\phi_{i}\) condensed phase, we have to solve for the vertex coefficients. First, we find that
\[\text{all }A=1\text{ for }\phi_{1}\text{ condensed phase}\] \[A^{a,(2,0)}=(-1)^{a_{2}},\text{other }A=1\text{ for }\phi_{2}\text{ condensed phase}. \tag{136}\]
For the \(\phi_{1}\) condensed phase, we then solve Eq. (104) to find:
\[F(\tilde{a},\tilde{b},\tilde{c})=e^{i2\pi\tilde{a}^{T}\tilde{\mathbf{N}}^{-1 }\tilde{\mathbf{P}}\tilde{\mathbf{N}}^{-1}(\tilde{b}+\tilde{c}-[\tilde{b}+ \tilde{c}])} \tag{137}\]
with
\[\tilde{\mathbf{N}}=\left(\begin{array}{cc}2&0\\ 0&4\end{array}\right),\tilde{\mathbf{P}}=\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right). \tag{138}\]
For the \(\phi_{2}\) condensed phase, the new \(F\)-symbol is gauge equivalent to (137) with the data (138). Thus the topological order in the \(\phi_{1}\) or \(\phi_{2}\) condensed phase is described by the Chern-Simons theory with \(K\)-matrix[89]
\[K=\left(\begin{array}{cccc}0&0&2&0\\ 0&0&0&4\\ 2&0&0&-1\\ 0&4&-1&0\end{array}\right). \tag{139}\]
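A quick sanity check on (139), assuming the standard counting rules that an abelian \(K\)-matrix theory has \(|\det K|\) anyons and that condensing a \(\mathbb{Z}_{2}\) boson in the doubled \(\mathbb{Z}_{4}\times\mathbb{Z}_{4}\) model (with \(16^{2}=256\) anyons) leaves \(256/2^{2}=64\) of them:

```python
import numpy as np

# |det K| for Eq. (139) should equal 256 / 4 = 64 under the assumptions above.
K = np.array([[0, 0, 2, 0],
              [0, 0, 0, 4],
              [2, 0, 0, -1],
              [0, 4, -1, 0]])
print(int(round(abs(np.linalg.det(K)))), 16**2 // 4)   # 64 64
```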
Second, we find that
\[A^{a,(2,0)} =(-1)^{a_{1}+a_{2}},\text{other }A=1\text{ for }\phi_{3}\text{ condensed phase}\] \[A^{a,(2,0)} =(-1)^{a_{1}},\text{other }A=1\text{ for }\phi_{4}\text{ condensed phase}. \tag{140}\]
\begin{table}
\begin{tabular}{|l|l|l||l|l|} \hline \(\mathbb{Z}_{6}\) Models & \(\mathbb{Z}_{2}\)\(\phi\) & condensed & \(\mathbb{Z}_{3}\)\(\phi\) & condensed \\ \(p\) & & phase & & phase \\ \hline \(0,\dots,5\) & \((0,3)\) & \(K_{3,0},\ p=0,3\) & \((0,2)\) & \(K_{2,0}\), \(p=0,2,4\) \\ & & \(K_{3,1},\ p=1,4\) & & \\ & & \(K_{3,2},\ p=2,5\) & & \\ \hline \(0\) & \((3,0)\) & \(K_{3,0}\) & \((2,0)\) & \(K_{2,0}\) \\ \hline \(2\) & \((3,1)\) & \(K_{3,2}\) & & \\ \hline \(3\) & & & \((2,2)\) & \(K_{2,1}\) \\ \hline \(4\) & \((3,2)\) & \(K_{3,1}\) & & \\ \hline \end{tabular}
\end{table}
Table 1: Six \(\mathbb{Z}_{6}\) string-net models labeled by \(p=0,1,\dots,5\). Column 2, 4 show the \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{3}\) bosons labeled by \((s,m)\) in the models. Column 3, 5 show the \(K\)-matrix for the condensed phases after condensing \(\phi\) bosons in Column 2, 4 respectively. Here \(K_{a,b}=\left(\begin{array}{cc}0&a\\ a&-2b\end{array}\right)\).
For the \(\phi_{4}\) condensed phase, we find that \(F(\tilde{a},\tilde{b},\tilde{c})\) is given by (137) with
\[\tilde{\mathbf{N}}=\left(\begin{array}{cc}2&0\\ 0&4\end{array}\right),\tilde{\mathbf{P}}=\left(\begin{array}{cc}1&1\\ 0&0\end{array}\right). \tag{141}\]
The new \(F\)-symbol for the \(\phi_{3}\) condensed phase is gauge equivalent to (137) with the data (141). Thus the topological order in the \(\phi_{3}\) or \(\phi_{4}\) condensed phase is described by the \(K\)-matrix
\[K=\left(\begin{array}{cccc}0&0&2&0\\ 0&0&0&4\\ 2&0&-2&-1\\ 0&4&-1&0\end{array}\right). \tag{142}\]
### \(S_{3}\) string-net model
Our last two examples concern condensation transitions involving splitting. We begin with the \(S_{3}\) string-net model (constructed from the fusion category \(\mathrm{Rep}(S_{3})\)), which has three types of strings \(\{0,1,2\}\) with dual strings \(\bar{0}=0,\bar{1}=1,\bar{2}=2\). The branching rules are
\[\begin{split}&\{(0,0;0),(1,0;1),(2,0;2),\\ &(1,1;0),(1,1;1),(1,1;2),(1,2;1),(2,2;0)\}.\end{split} \tag{143}\]
Here the triplets \((a,b;c)\) are understood as the fusion \(a\times b=c\) and are symmetric in the first two labels \(a,b\). The nontrivial \(F\)-symbols and quantum dimensions \(d\) solving the self-consistency conditions (5) are
\[\begin{split}& F^{111}_{1ef}=\left(\begin{array}{ccc}\frac{1}{2}&- \frac{1}{\sqrt{2}}&\frac{1}{2}\\ -\frac{1}{\sqrt{2}}&0&\frac{1}{\sqrt{2}}\\ \frac{1}{2}&\frac{1}{\sqrt{2}}&\frac{1}{2}\end{array}\right)\\ & F^{111}_{211}=F^{112}_{111}=F^{121}_{111}=F^{211}_{111}=-1\\ & d_{0}=d_{2}=1,d_{1}=2\end{split} \tag{144}\]
where the matrix indices \(e,f\) can be \(0,1,2\).
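As a minor consistency check on (144), the \(3\times 3\) block \(F^{111}_{1ef}\) should be a symmetric orthogonal matrix (a real unitary \(F\)-move); the sketch below verifies this numerically.

```python
import numpy as np

# The block F^{111}_{1ef} from Eq. (144): symmetric and orthogonal (F F^T = 1).
s = 1 / np.sqrt(2)
F = np.array([[0.5, -s, 0.5],
              [-s, 0.0, s],
              [0.5, s, 0.5]])
print(np.allclose(F, F.T), np.allclose(F @ F.T, np.eye(3)))   # True True
```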
The model has 8 quasiparticles. Among them, there is a \(\mathbb{Z}_{2}\) abelian boson, which we denote \(\phi=(2,0)\). The corresponding string operator is defined by the string parameter
\[w_{\phi}(a)=(-1)^{a}. \tag{145}\]
Since \(2\times 1=1\times 2=1\), condensing \(\phi\) will cause the original string label 1 to split into two distinct labels, which we denote \(\tilde{1}_{1},\tilde{1}_{2}\).
To describe the Hilbert space \(\tilde{\mathcal{H}}\) after condensation, we first solve (89) for \(A^{s^{\prime}a}\). The two distinct solutions are:
\[(A^{21})_{1}=(A^{12})_{2}=1,\quad(A^{12})_{1}=(A^{21})_{2}=-1. \tag{146}\]
Thus, the new string labels for \(\tilde{\mathcal{H}}\) are
\[\tilde{0}=\{0,2\},\quad\tilde{1}_{1}=\{1\},\quad\tilde{1}_{2}=\{1\}. \tag{147}\]
The branching rules can be deduced from the branching rules for the old string labels and are given by 4
Footnote 4: Before condensation, we have \(1\times 1=0+1+2\). After condensation, the fusion becomes \((\tilde{1}_{1}+\tilde{1}_{2})\times(\tilde{1}_{1}+\tilde{1}_{2})=\tilde{0}+ \tilde{1}_{1}+\tilde{1}_{2}+\tilde{0}\). Thus, \(\tilde{1}_{1},\tilde{1}_{2}\) can be either self-dual or not self-dual. However, from the associativity of the fusion \(\tilde{1}_{1}\times(\tilde{1}_{2}\times\tilde{1}_{1})=(\tilde{1}_{1}\times\tilde{1}_{2})\times\tilde{1}_{1}\), we conclude \(\tilde{1}_{1}\times\tilde{1}_{2}=\tilde{0},\tilde{1}_{1}\times\tilde{1}_{1}=\tilde{1}_{2},\tilde{1}_{2}\times\tilde{1}_{2}=\tilde{1}_{1}\). This can also be deduced directly from Eq. (94).
\[\{(\tilde{0},\tilde{0};\tilde{0}),(\tilde{1}_{1},\tilde{0};\tilde{1}_{1}),(\tilde{1}_{2},\tilde{0};\tilde{1}_{2}),(\tilde{1}_{1},\tilde{1}_{2};\tilde{0}),(\tilde{1}_{1},\tilde{1}_{1};\tilde{1}_{2}),(\tilde{1}_{2},\tilde{1}_{2};\tilde{1}_{1})\}. \tag{148}\]
These are \(\mathbb{Z}_{3}\) branching rules: after condensation the three new string types fuse according to \(\mathbb{Z}_{3}\), and the condensed phase is described by a \(\mathbb{Z}_{3}\) string-net model.
### \(\mathrm{SU}(2)_{4}\) string-net model

Our second example involving splitting is the string-net model constructed from the \(\mathrm{SU}(2)_{4}\) fusion category, which has five string types labeled \(0,1,\ldots,4\). The corresponding doubled model has 25 particles. Among these, there is one \(\mathbb{Z}_{2}\) abelian boson, which we denote \(\phi=(4,0)\). The corresponding string operator is defined by the string parameter
\[w_{\phi}(a)=(-i)^{a}. \tag{151}\]
We consider the string net obtained by condensing \(\phi=(4,0)\).
Since the string labels obey \(2\times 4=4\times 2=2\), the label 2 will split into two distinct string types after condensation, which we denote \(\tilde{2}_{1}\) and \(\tilde{2}_{2}\). We first define \(\tilde{\mathcal{H}}\) after condensation. Specifically, solving for the vertex coefficients \(A^{4,a}\), we obtain:
\[\begin{split}& A^{41}=1,\quad A^{43}=-1,\quad A^{14}=A^{34}=-i, \\ &(A^{42})_{1}=(A^{24})_{2}=1,\quad(A^{24})_{1}=(A^{42})_{2}=-1. \end{split} \tag{152}\]
Thus, the new string labels for \(\tilde{\mathcal{H}}\) are
\[\begin{split}\tilde{0}=\{0,4\},\quad\tilde{1}=\{1,3\},\quad \tilde{2}_{1}=\{2\},\quad\tilde{2}_{2}=\{2\}.\end{split} \tag{153}\]
The new branching rules can be deduced from the old branching rules, together with Eq. (94), and are given by
\[\begin{split}\{(\tilde{1},\tilde{1};\tilde{0}),(\tilde{1}, \tilde{1};\tilde{2}_{1}),(\tilde{1},\tilde{1};\tilde{2}_{2}),\\ (\tilde{1},\tilde{2}_{1};\tilde{1}),(\tilde{1},\tilde{2}_{2}; \tilde{1}),\\ (\tilde{2}_{1},\tilde{2}_{1};\tilde{2}_{2}),(\tilde{2}_{1},\tilde{ 2}_{2};\tilde{0}),(\tilde{2}_{2},\tilde{2}_{2};\tilde{2}_{1}).\}\end{split} \tag{154}\]
Thus, \(\tilde{\mathcal{H}}\) is the string-net Hilbert space with new string labels (153) and branching rules (154).
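Before solving for the new \(F\)-symbols, it is easy to confirm that the branching rules (154) are associative and that \(\tilde{1}\) has quantum dimension \(\sqrt{3}\) (the largest eigenvalue of its fusion matrix). The sketch below encodes the labels as \(0\to\tilde{0}\), \(1\to\tilde{2}_{1}\), \(2\to\tilde{2}_{2}\), \(3\to\tilde{1}\) (an ordering of our own choosing).

```python
import numpy as np
from itertools import product

# Fusion coefficients N[a, b, c] built from the branching rules (154):
# Z3 fusion among {0,1,2}, a x sigma = sigma, sigma x sigma = 0 + 1 + 2.
N = np.zeros((4, 4, 4), dtype=int)
for a, b in product(range(3), repeat=2):
    N[a, b, (a + b) % 3] = 1              # Z3 sector
for a in range(3):
    N[a, 3, 3] = N[3, a, 3] = 1           # a x sigma = sigma x a = sigma
for c in range(3):
    N[3, 3, c] = 1                        # sigma x sigma = 0 + 1 + 2

# Associativity: sum_e N[a,b,e] N[e,c,d] = sum_f N[b,c,f] N[a,f,d].
assoc = all((N[a, b, :] @ N[:, c, d]) == (N[b, c, :] @ N[a, :, d])
            for a, b, c, d in product(range(4), repeat=4))
dim_sigma = max(abs(np.linalg.eigvals(N[3])))
print(assoc, round(float(dim_sigma), 6))   # True 1.732051  (= sqrt(3))
```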
Next, we want to find \(\{F,d\}\) in the \(\phi\) condensed phase. A valid choice of the full vertex coefficients is given by (152), together with:
\[\begin{split}&(A_{2}^{11})_{1}=-\sqrt{2},\quad(A_{1}^{12})^{1}=- \frac{1}{\sqrt{2}},\quad(A_{2}^{22})_{2}^{11}=\frac{1}{\sqrt{2}},\\ &(A_{2}^{11})_{2}=1,\qquad(A_{1}^{12})^{2}=1,\qquad(A_{2}^{22})_ {1}^{22}=-2,\end{split} \tag{155}\]
where \((A_{1}^{12})^{2}=1\) pertains to the vertex \((\tilde{1},\tilde{2}_{2};\tilde{1})\) and so on. Using these, and Eq. (107), we find the nontrivial new \(F\)-symbols are
\[\begin{split}& F_{100}^{\tilde{1}\tilde{1}\tilde{1}}=F_{10 \tilde{2}_{a}}^{\tilde{1}\tilde{1}\tilde{1}}=F_{1\tilde{2}_{a}0}^{\tilde{1} \tilde{1}\tilde{1}}=-\frac{1}{\sqrt{3}},\quad F_{1\tilde{2}_{a}\tilde{2}_{b}}^ {\tilde{1}\tilde{1}\tilde{1}}=-\frac{1}{\sqrt{3}}e^{-i\frac{2\pi ab}{3}},\\ & F_{\tilde{2}_{b}\tilde{1}\tilde{1}}^{\tilde{2}_{a}\tilde{1}}=F_ {1\tilde{1}\tilde{1}}^{\tilde{2}_{b}\tilde{1}\tilde{2}_{b}}=e^{-i\frac{2\pi ab }{3}},\qquad F_{1\tilde{1}0}^{\tilde{1}\tilde{2}_{a}\tilde{2}_{b}}=F_{10 \tilde{1}}^{\tilde{2}_{a}\tilde{1}}=-1.\end{split} \tag{156}\]
Here \(a\neq b=1,2\). Interestingly, the data (156) are exactly the \(\mathbb{Z}_{3}\) Tambara-Yamagami category[106] (\(TY_{3}\)). The \(TY_{3}\) category has 4 labels \([0]=\tilde{0},[1]=\tilde{2}_{1},[2]=\tilde{2}_{2},\sigma=\tilde{1}\). The first 3 labels have \(\mathbb{Z}_{3}\) fusion rules. The last label \(\sigma\) represents the symmetry defect:
\[\begin{split}&[a]\times\sigma=\sigma,\\ &\sigma\times\sigma=[0]+[1]+[2].\end{split} \tag{157}\]
With the data (156), we can construct the ground state and effective string-net model for the \(\phi\) condensed phase. The braiding data of excitations, the S, T matrices, are known in Ref. [107].
Thus, condensing the \(\mathbb{Z}_{2}\) boson in the \(\mathrm{SU}(2)_{4}\) string net, we obtain the \(TY_{3}\) string net. In this case, because the input fusion category is modular, this transition is analogous to condensing the \(\mathbb{Z}_{2}\) boson in the top layer of an \(\mathrm{SU}(2)_{4}\times\overline{\mathrm{SU}(2)}_{4}\) bilayer system. The resulting topological order is \(\mathrm{SU}(3)_{1}\times\overline{\mathrm{SU}(2)}_{4}\), which is exactly that of the \(TY_{3}\) string net.
## VII Discussion
In this paper, we have systematically studied condensation of arbitrary abelian bosons in string-net models. We have introduced a Hamiltonian that tunes the system through a condensation transition, and given a detailed description of the string net in the condensed phase. We have shown how, in the low-energy Hilbert space of the condensed phase, the input fusion category \(\mathcal{C}\) of the uncondensed string net becomes a new fusion category \(\tilde{\mathcal{C}}\), with both the effective Hamiltonian and the ground state in the condensed phase being \(\tilde{\mathcal{C}}\) string nets. Finally, we have shown how both the labels and the fusion data for \(\tilde{\mathcal{C}}\) can be calculated directly from the data of the string operators of the condensing bosons, together with the fusion data of \(\mathcal{C}\).
Because the transitions discussed here involve condensation of abelian bosons, the degrees of freedom that become gapless at the critical point can all be mapped onto variables in a Potts model, using a method similar to that described in Ref. [73]. By modifying \(H_{1}\), one could also achieve phase transitions in the clock universality class.
One useful result of our construction is the possibility of systematically extracting not only the label set, but also the fusion data of \(\tilde{\mathcal{C}}\), by solving for the vertex coefficients implied by the string operators \(W_{\phi^{j}}^{a}\). We note that Ref. [32] similarly introduced vertex coefficients when studying the effect of anyon condensation on the fusion and braiding data of the UMTC describing the topological order, and used these to determine the \(F\) and \(R\) symbols for the condensed theory. The vertex coefficients that we introduce here can be viewed as analogs of Ref. [32]'s vertex coefficients, albeit for the fusion category underpinning the string net, rather than for the UMTC associated with the anyon model itself.
One potential application of our construction, illustrated in the last example, is to obtain the fusion data for string nets of lower symmetry by condensing anyons in string nets with higher symmetry. For example, we can begin with a string net that has explicit time-reversal symmetry, such as \(\mathrm{SU}(2)_{4}\times\overline{\mathrm{SU}(2)}_{4}\), and condense a chiral abelian boson in one of the two copies, to obtain a string net that does not have time reversal symmetry. This is useful because the data for many high-symmetry
string nets, such as those constructed from rational conformal field theories, is known.
A second potential application is to string nets realizing symmetry enriched topological phases, where the enriching symmetry is abelian. Specifically, condensing \(\mathbb{Z}_{p}\) abelian anyons can be viewed as "un-gauging" a \(\mathbb{Z}_{p}\) symmetry, and a modification of the construction here can lead to condensed phases in which the models exhibit a global \(\mathbb{Z}_{p}\) symmetry, similar to the constructions of Refs. [58; 59]. Such a construction may enable a simpler string-net realization of many of these symmetries than in the existing literature. It also gives a framework that could be used to construct similar models with anyon-permuting symmetry at the boundary of a three-dimensional Walker-Wang string net.
**Note added** Shortly before completing this work, we became aware of Ref. [108], which also discusses anyon condensation in string net models, including some non-abelian examples.
###### Acknowledgements.
C.-H. Lin thanks Lan Tian, Chenjie Wang, and Yidun Wan for helpful discussions. FJB is grateful for the support of NSF DMR 1928166.
## Appendix A Properties of Abelian string operators
In this section, we prove the basic properties of our abelian string operators that we use in the main text.
### Finding a gauge where \(F\)-symbols are trivial
Many of the properties of abelian string operators that we will use are valid only in the gauge where \(F(s^{i},s^{j},s^{k})=1\), where we use the notation \(F(a,b,c)=F^{abc}_{abc,ab,bc}\) appropriate to \(F\)-symbols involving only abelian string labels. To see that such a gauge exists, we use the fact that for \(\phi=(s,m)\) to be a boson, we must have \(w_{\phi}(s)=1\). (We note that while \(w_{\phi}(a)\) is not gauge invariant for general \(a\), \(w_{\phi}(s)\), which represents the self-twist of the particle, is a gauge-invariant quantity.) From Eq. (15), we see that
\[w_{\phi}(s)w_{\phi}(s^{j})=F(s,s^{j},s)w_{\phi}(s^{j+1}) \tag{104}\]
If \(s^{2}=0\) and \(j=1\), \(F(s,s^{j},s)\) is gauge invariant, and this tells us that only if \(F(s,s^{j},s)=1\) can \((s,m)\) be a boson. Otherwise, under gauge transformations, we have
\[\hat{F}(s,s^{j},s)=F(s,s^{j},s)\frac{f^{ss^{j}}_{s^{j+1}}f^{s^{j+1}s}_{s^{j+2}}}{f^{s^{j}s}_{s^{j+1}}f^{ss^{j+1}}_{s^{j+2}}} \tag{105}\]
where our string net construction requires \(f^{s0}_{s}=f^{0s}_{s}=1\). For \(1\leq j<p-1\), we can use the ratio \(f^{s^{j+1}s}_{s^{j+2}}/f^{ss^{j+1}}_{s^{j+2}}\) to fix \(F(s,s^{j},s)=1\), where \(s^{p}=0\). Further, we have \(w_{\phi}(s)w_{\phi}(s^{p-1})=F(s,s^{p-1},s)w_{\phi}(0)\), and hence also \(F(s,\bar{s},s)=1\), where \(\bar{s}=s^{p-1}\). It follows that in this gauge, for all \(i,j\), we have
\[F(s,s^{i},s)=1\,\quad F(s,s^{i},s^{j})F(s^{i},s^{j},s)=F(s^{i},s,s^{j}). \tag{106}\]
In this gauge, we see that \(w_{\phi}(s^{j})=1\) for all \(j\).
Next, consider \(F(s,s^{j},s^{k})\) with \(k>1\). Under gauge transformations, we have:
\[\hat{F}(s,s^{j},s^{k})=F(s,s^{j},s^{k})\frac{f^{ss^{j}}_{s^{j+1}}f^{s^{j+1}s^{k}}_{s^{j+k+1}}}{f^{s^{j}s^{k}}_{s^{j+k}}f^{ss^{j+k}}_{s^{j+k+1}}} \tag{107}\]
For \(k>1\), and a fixed choice of \(f^{ss^{i}}_{s^{i+1}}\) for each \(i\), we can set all of these to \(1\) by fixing the ratio \(f^{s^{j+1}s^{k}}_{s^{j+k+1}}/f^{ss^{j+k}}_{s^{j+k+1}}\). (In this case, this also works for \(j=p-1\).)
Thus, if \(w_{\phi}(s)=1\), we have enough gauge freedom to simultaneously set \(F(s,s^{j},s^{k})=1\) for all \(j,k\). Using the pentagon relation, we also have
\[F(s,s^{i},s^{j})F(s,s^{i+j},s^{k})F(s^{i},s^{j},s^{k})\] \[\qquad=F(s^{i+1},s^{j},s^{k})F(s,s^{i},s^{j+k}) \tag{108}\]
In the gauge where \(F(s,s^{j},s^{k})=1\) for all \(j,k\), we find that
\[F(s^{i},s^{j},s^{k})=F(s^{i+1},s^{j},s^{k}) \tag{109}\]
from which it follows that \(F(s^{i},s^{j},s^{k})=1\) for all \(i,j,k\).
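As an illustration of the obstruction discussed above, consider the \(\mathbb{Z}_{6}\) models discussed earlier and the \(\mathbb{Z}_{2}\) subgroup generated by \(s=3\). Assuming the \(\mathbb{Z}_{6}\) \(F\)-symbols take the standard form \(F(a,b,c)=e^{2\pi ipa(b+c-[b+c])/36}\) (the \(N=6\) analogue of Eq. (134)), the gauge-invariant quantity \(F(s,s,s)\) equals \((-1)^{p}\), so the trivializing gauge, and hence a condensable \((3,m)\) boson, exists only for even \(p\), consistent with Table 1.

```python
import cmath

# F(a,b,c) = exp(2*pi*i*p*a*(b + c - [b+c])/36) for the Z6 model labeled by p.
# The gauge-invariant obstruction F(3,3,3) equals (-1)**p.
def F(a, b, c, p, N=6):
    return cmath.exp(2j * cmath.pi * p * a * (b + c - (b + c) % N) / N**2)

print([round(F(3, 3, 3, p).real) for p in range(6)])   # [1, -1, 1, -1, 1, -1]
```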
### Basic string operators in a general gauge
The gauge choice \(F(s^{i},s^{j},s^{k})=1\) is convenient, because the action of the operator \(W_{\phi}(P)\) is identical to acting with a string operator with a fixed end-point that is located away from the stick, and fusing it into the lattice appropriately. With a different gauge choice, the difference between the operator \(W_{\phi}(P)\) and such open string operators can be described by a gluing operator \(O_{l}\), whose action is defined by
(110)
Here \(a\) denotes the string label of the stick, and the grey region denotes the configuration which does not change.
In addition, in this gauge, when taking the product \(W^{i}_{\phi^{1}}\cdot W^{i}_{\phi^{2}}\), we may ignore the vertical bendings of the jointed path \(p^{i}\cup p^{j}\). For example, consider two basic string operators \(W^{i}_{\phi^{1}},W^{i}_{\phi^{2}}\) along the same path \(p_{i}\). When acting the composite operator \(W^{i}_{\phi^{1}}\cdot W^{i}_{\phi^{2}}\) on the vacuum state, we have
\[\begin{split}\langle vac|W^{i}_{\phi^{1}}\cdot W^{i}_{\phi^{2}}&=\langle\cdots|\\ &=\langle\cdots|\\ &=\langle vac|W^{i}_{\phi^{1}\times\phi^{2}}\cdot\theta_{s_{1},s_{2}}\end{split} \tag{100}\]
with
\[\theta_{s_{1},s_{2}}=\frac{F(s_{2},\bar{s}_{2},\bar{s}_{1})}{F(s_{1},s_{2}, \bar{s}_{1}\times\bar{s}_{2})}. \tag{101}\]
Here we use (4) to remove the loop in the second line at the expense of the factor \(\theta_{s_{1},s_{2}}\).
Thus, in a general gauge, we do not have \(W^{i}_{\phi^{1}}\cdot W^{i}_{\phi^{2}}=W^{i}_{\phi^{1}\times\phi^{2}}\). Instead, we find:
\[\begin{split} W^{i,abcd;\phi^{a}\phi^{b}}_{\phi^{3}=\phi^{1}\times \phi^{2},a_{3}b_{3}c_{3}d_{3};\phi^{a}\times\phi^{3},\phi^{b}\times\bar{\phi}^{ 3}}(efg)=\\ W^{i,abcd;\phi^{a}\phi^{b}}_{\phi^{1},a_{1}b_{1}c_{1}d_{1};\phi^{a} \times\phi^{1},\phi^{b}\times\overline{\phi}^{1}}(efg)\times\\ W^{i,a_{1}b_{1}c_{1}d_{1};\phi^{a}\times\phi^{1},\phi^{b}\times \overline{\phi}^{1}}_{\phi^{2},a_{3}b_{3}c_{3}d_{3};\phi^{a}\times\phi^{3}, \phi^{b}\times\overline{\phi}^{3}}(efg)\times\theta^{-1}_{s_{1},s_{2},a,b}\end{split} \tag{102}\]
with
\[\theta_{s_{1},s_{2},a,b}=\frac{F(s_{2},\bar{s}_{2},\bar{s}_{1})F(\bar{s}_{2},\bar{s}_{1},b)}{F(s_{1},s_{2},\bar{s}_{1}\times\bar{s}_{2})F(a,s_{1},s_{2})} \tag{103}\]
for \(i=1,\ldots,4\).
In addition, one can show that
\[\begin{split} W^{i\dagger,a^{\prime}b^{\prime}c^{\prime}d^{ \prime};\phi^{a^{\prime}}\phi^{b^{\prime}}}_{\phi,abcd;\phi^{a}\phi^{b}}(efg)= \\ W^{i,a^{\prime}b^{\prime}c^{\prime}d^{\prime};\phi^{a^{\prime}} \phi^{b^{\prime}}}_{\bar{\phi},abcd;\phi^{a}\phi^{b}}(efg)\cdot\theta^{-1}_{s,\bar{s},a,b}\end{split} \tag{104}\]
for \(i=1,2,3,4\).
### Properties of basic string operators in the gauge \(F(s^{i},s^{j},s^{k})=1\)
Next, we establish the properties of basic string operators in the gauge
\[F(s^{i},s^{j},s^{k})=1. \tag{105}\]
First, from equations (102,104), we see that in this gauge,
\[W^{i}_{\phi^{1}}\cdot W^{i}_{\phi^{2}}=W^{i}_{\phi^{1}+\phi^{2}} \tag{106}\]
and
\[W^{i\dagger}_{\phi}=W^{i}_{\bar{\phi}}. \tag{107}\]
Second, all basic string operators commute:
\[[W^{i}_{\phi},W^{j}_{\phi^{\prime}}]=0. \tag{108}\]
This follows from Eq. (102) if \(i=j\) (i.e. if the two paths are the same). If the two paths intersect only on one stick (for example, \(i=1,j=3\)), this follows from the fact that using Eq. (5), one can show that the two operator products differ by a factor of \(F(s^{i},b\times s,s^{j})\), which is unity if \(b\) labels a stick. If \(i=1,j=2\), we can use the identity
\[w_{\phi}(f)w_{\phi}(s)=F^{sfs}_{f^{\prime\prime},f^{\prime}}w_{\phi}(f\times s) \tag{109}\]
to show that \(\bar{w}_{\phi}(f)F^{sfs}_{f^{\prime\prime},f^{\prime},f^{\prime}}=\bar{w}_{ \phi}(f\times s)\), which shows that \([W^{1}_{\phi},W^{2}_{\phi}]=0\). We can use this, together with Eq. (106), to show that \([W^{1}_{\phi^{1}},W^{2}_{\phi^{j}}]=0\). A similar argument shows \([W^{3}_{\phi^{1}},W^{4}_{\phi^{j}}]=0\).
Moreover, in this gauge we have \(F^{s\bar{s}b}_{\theta^{0}b}=w_{\phi}(b)=1\) when \(b\) is a string label associated with the condensing boson. It follows that \(W^{1}_{\phi}W^{3}_{\phi}=W_{\phi}(p_{1}\cup p_{3})\), where the path \(p_{1}\cup p_{3}\) crosses straight under the \(b\)-labeled stick. Similar results hold for other products of simple string operators with paths that overlap only on a single stick. Using the identity (derived from Eq. (5)) \(F^{f\bar{s}s}_{f,f\times s,0}F(\bar{s}s\bar{s})=F^{f\times\bar{s}s,\bar{s},f,0}\), the product \(W^{1}_{\phi}W^{2}_{\phi}\) can similarly be shown to be equal to an operator running along the path \((p_{1}\cup p_{2})\), which directly connects the two sticks. (The consistency relations ensure that any deformation of this path which does not change the end-points yields the same operator).
Thus, in this gauge, we may express a general string operator by concatenating string operators along a series of adjacent basic paths.
## Appendix B Diagrammatical representation of the \(B^{\phi,s}_{p}\) operator
In this section, we present the graphical representation of \(B^{\phi,s}_{p}\) in \(H_{\mathcal{C}}\) (30) which leads to the matrix elements in Eq. (39), as well as an alternative (simpler) formulation.
The action of \(B^{\phi,s}_{p}\) in \(H_{\mathcal{C}}\) is defined by
\[B^{\phi,s}_{p}=\sum_{\phi_{10},\phi_{11},\phi_{12}}W_{\phi_{10},\phi_{11},\phi_{12 }}B^{s}_{p}W^{\dagger}_{\phi_{10},\phi_{11},\phi_{12}}. \tag{110}\]
with
\[W_{\phi_{10},\phi_{11},\phi_{12}}=\mathcal{P}_{\phi_{10}}W^{1}_{\phi_{10}}\cdot \mathcal{P}_{\phi_{11}}W^{1}_{\phi_{11}}\cdot\mathcal{P}_{\phi_{12}}W^{3}_{\phi_{12 }}. \tag{111}\]
Here the sums run over the three end spin states \(\phi_{10},\phi_{11},\phi_{12}\) in \(p\). The \(\mathcal{P}_{\phi_{i}}=|\phi_{i}\rangle\langle\phi_{i}|\) is the projector onto the end spin state \(|\phi_{i}\rangle\), and \(W^{1}_{\phi_{10}},W^{1}_{\phi_{11}},W^{3}_{\phi_{12}}\) are three basic string operators defined in (22). The \(B^{s}_{p}\) is defined to add a loop-\(s\) to the boundary of \(p\) after \(W_{\phi_{10},\phi_{11},\phi_{12}}\) moves the excitations \(\{\phi_{10},\phi_{11},\phi_{12}\}\) to the exterior of \(p\). Finally, after fusing the loop-\(s\) to the boundary of \(p\), \(W^{\dagger}_{\phi_{10},\phi_{11},\phi_{12}}\) moves the excitations back to \(p\).
Diagrammatically, the matrix elements of \(B^{\phi,s}_{p}\) can be obtained by
\[=C_{1}\sum_{i^{\prime}_{1}i^{\prime}_{2}\cdots i^{\prime}_{6}}C_{2_{a}}C_{2_{b}}C_{2_{c}}C_{3}\left\langle\cdots\right\rangle \tag{13}\]
where
\[B^{s,i_{1}\ldots i_{6}j_{1}\ldots j_{6}}_{p_{1},i^{\prime}_{1} \ldots i^{\prime}_{6}j_{1}\ldots j_{6}}(e_{1}\ldots e_{12};\phi_{10},\phi_{11},\phi_{12})=C_{1}C_{2_{a}}C_{2_{b}}C_{2_{c}}C_{3} \tag{14}\]
with
\[C_{1} =W^{1,e_{10}e_{13}e_{14}e_{14}\phi_{10}\phi_{13}}_{\phi_{10},0e_{ 13}^{\prime}\ldots e_{14}^{\prime}\phi_{13}^{\prime}}(i_{4}j_{3}f_{4})W^{1,e_{ 14}e_{11}e_{6}\phi_{13};\phi_{14}\phi_{11}}_{\phi_{11},e_{14}^{\prime}\phi_{ 6}^{\prime}\phi_{13};\phi_{14}^{\prime}\mathbf{1}}(f_{6}j_{6}j_{5}) \tag{15}\] \[\quad\times W^{3,e_{15}e_{12}e_{13}e_{14}\phi_{15}\phi_{15}\phi_{ 12}}_{\phi_{12},e_{13}^{\prime}\phi_{15}^{\prime}\phi_{15}^{\prime}}\] \[C_{2_{a}}C_{2_{b}}C_{2_{c}}=\frac{1}{d_{s}}\sqrt{\frac{d_{i} \cdot d_{j}}{d_{j}}{d_{j}}{d_{e^{\prime}}d_{j}}{d_{e^{\prime}}d_{j}}{d_{e^{ \prime}}d_{j}}{d_{e^{\prime}}d_{j}}{d_{e^{\prime}}d_{j}}{d_{e^{\prime}}d_{j}} _{d_{e^{\prime}}d_{j}}d_{j}}_{j}\times\] \[[F^{i^{\prime}_{1}e_{2}}_{\bar{\psi}_{1}}]_{i_{1}i^{\prime}_{2}} [F^{j^{\prime}_{1}e_{2}}_{\bar{\psi}_{2}}]_{i^{\prime}_{1}i^{\prime}_{2}}[F^{ i^{\prime}_{2}e_{3}}_{\bar{\psi}_{2}}]_{i^{\prime}_{2}i^{\prime}_{2}}[F^{ \bar{\psi}^{\prime}^{\prime}_{3}e_{3}}_{\bar{\psi}_{3}}]_{i_{3}j^{\prime}_{3 }}\times\] \[[F^{j^{\prime}_{3}e_{4}}_{\bar{\psi}_{1}}]_{i_{4}j^{\prime}_{5}} [\bar{F}^{j^{\prime}_{3}e_{6}}_{\bar{\psi}_{6}}]_{j^{\prime}_{5}}[\bar{F}^{ j^{\prime}_{3}e_{6}}_{\bar{\psi}_{6}}]_{j_{6}}[F^{i^{\prime}_{4}\bar{\psi}_{ 1}}_{\bar{\psi}_{4}}]_{i_{4}}[F^{j^{\prime}_{3}e_{1}}_{\bar{\psi}_{6}}]_{i_{ 4}}\] \[C_{3} =W^{1,e_{10}e_{13}^{\prime}e_{13}^{\prime}e_{14}\phi_{13}}_{\phi_ {10},e_{13}^{\prime}\phi_{13}^{\prime}\phi_{14}}(i^{\prime}_{4}j^{\prime}_{3 }f_{4})\] \[\quad\times W^{1,e_{1}^{\prime},e_{14}^{\prime}e_{0}^{\prime}e_{ 13}^{\prime}\phi_{14}^{\prime}\mathbf{1}}_{\bar{\phi}_{11},e_{14}e_{11}e_{11}^{ \prime}\phi_{15}^{\prime}\phi_{14}\phi_{11}}(f^{\prime}_{6}j^{\prime}_{6}j^{ \prime}_{5})\] \[\quad\times W^{3,e_{1}^{\prime},e_{10}^{\prime}e_{1}^{\prime} \phi_{1}^{\prime}\mathbf{1}}_{\phi_{12},e_{15}e_{12}e_{13}^{\prime}\phi_{15} \phi_{12}}(f_{1}i^{\prime}_{1}j^{\prime}_{6}) \tag{16}\]
where \(\phi^{\prime}_{13}=\phi_{13}\times\phi_{10}\), and similarly for \(\phi^{\prime}_{14},\phi^{\prime}_{15}\). Each product is unique because all stick labels have abelian fusion rules. Here \(e_{7},e_{8}\ldots e_{12}\) take values in abelian string types and thus \(j_{p}=i_{p}\times e_{p+6}\) for \(p=1\ldots 6\), while \(e^{\prime}_{1}=e_{1}\times e_{12},f^{\prime}_{1}=f_{1}\times e_{12},e^{\prime}_{4}=e_{4}\times\bar{e}_{10},f^{\prime}_{4}=f_{4}\times\bar{e}_{11}\). The functions \(W^{1},W^{3}\) appearing in \(C_{1}\) and \(C_{3}\) are matrix elements of the basic string operators defined in (22).
can be condensed as described in Ref. [73]. Using the original string-net construction, however, a boson \(a\) (\(\overline{a}\)), corresponding to an anyon in the category \(\mathcal{C}\) (\(\overline{\mathcal{C}}\)), violates both vertex and plaquette terms. However, using a modification of the Walker-Wang construction of 3D string nets[110], it is relatively straightforward to construct a modified plaquette term for which open string operators creating these anyons commute with all plaquette terms.
When the condensing anyons are all from \(\mathcal{C}\), we can do this in the generalized string-net Hilbert space depicted in Fig. 1, and impose the constraint that at each trivalent vertex in the new lattice, the combination of edge labels is allowed by fusion. As in the construction outlined in the main text, we energetically penalize any sticks carrying a label other than the identity. Finally, we modify the plaquette operator's action on configurations where the sticks carry non-trivial labels, to ensure that \(B_{p}\) commutes with open string operators ending on the sticks. This can be done by threading the loop carrying the plaquette label under all sticks, and using the fusion and braiding rules (4,B6) to resolve the diagram and obtain the matrix elements, using exactly the same procedure as in the 3D Walker-Wang string net models (see [110; 111]). Intuitively, this construction can be viewed as starting from a single layer of the Walker-Wang Hamiltonian, with a smooth lower boundary (see [111]), and open vertical edges extending upwards out of the plane. In the full 3D construction, our sticks thus correspond to edges of vertical plaquettes, and anyon condensation is achieved by adding "half-plaquette" operators along these vertical plaquettes. Commutativity of adjacent plaquette operators, as well as of plaquette operators with the anyon string operators corresponding to adding such vertical plaquettes, follows from commutativity of the full Walker-Wang Hamiltonian.
Similarly, to condense a set of bosons that are all from \(\overline{\mathcal{C}}\), we reverse the procedure above, drawing a Walker-Wang model with a smooth upper boundary, and keeping half-plaquettes extending downwards from this plane. In this case, plaquette operator matrix elements are obtained by drawing the plaquette loop over the sticks, and then using appropriate fusion and braiding rules. Condensing some anyons in \(\mathcal{C}\), and some in \(\overline{\mathcal{C}}\), can similarly be achieved by adding two sticks on each edge, one extending above the plane, and one below it.
The procedure for identifying string net data in the condensed phase using this construction is exactly analogous to that of the more general construction outlined in the main text.
## Appendix C Showing that \(B_{p_{1}}^{\phi,t_{1}},B_{p_{2}}^{\phi,t_{2}}\) commute
In this section, we will show that the operators \(B_{p_{1}}^{\phi,t_{1}}\) and \(B_{p_{2}}^{\phi,t_{2}}\) commute with one another. We only need to consider two cases. One case is when two plaquettes are the same \(p_{1}=p_{2}\). The other case is when \(p_{1}\) and \(p_{2}\) are adjacent since two operators will commute if \(p_{1}\) and \(p_{2}\) are further apart.
The first case is when two \(B_{p}^{\phi,t}\) operators act on the same plaquette \(p_{1}=p_{2}=p\). We will show \(B_{p}^{\phi,t_{1}}\) and \(B_{p}^{\phi,t_{2}}\) commute if the branching rules \(\delta_{u}^{t_{1},t_{2}}\) are symmetric in \(t_{1},t_{2}\). We note that \(B_{p}^{\phi,t_{1}}B_{p}^{\phi,t_{2}}=\sum_{\phi_{10},\phi_{11},\phi_{12}}W_{ \phi_{10},\phi_{11},\phi_{12}}B_{p}^{t_{1}}B_{p}^{t_{2}}W_{\phi_{10},\phi_{11},\phi_{12}}^{\dagger}\). Thus, to show that \(B_{p}^{\phi,t_{1}},B_{p}^{\phi,t_{2}}\) commute, it is sufficient to show that \(B_{p}^{\phi,t_{1}},B_{p}^{t_{2}}\) commute.
To this end, we compute
First, the action of \(B^{\phi,t_{1}}_{p_{1}}B^{\phi,t_{2}}_{p_{2}}\) on the shared boundary contributes the factors
\[\sum_{a^{\prime\prime}b^{\prime\prime}a^{\prime\prime\prime}b^{\prime\prime\prime }}\frac{w_{\phi}(a^{\prime\prime})}{w_{\phi}(a)}[\tilde{F}^{aa^{\prime\prime}}_{ bt_{2}}]_{ab^{\prime\prime}}[F^{a^{\prime\prime\prime}a^{\prime\prime}}_{t_{1}b^{ \prime\prime}}]_{a^{\prime\prime}b^{\prime\prime}} \tag{104}\]
Second, the action of \(B^{\phi,t_{2}}_{p_{2}}B^{\phi,t_{1}}_{p_{1}}\) on the shared boundary contributes the factors
\[\sum_{a^{\prime\prime}b^{\prime\prime}a^{\prime\prime\prime}b^{\prime\prime \prime\prime}}\frac{w_{\phi}(a^{\prime\prime\prime})}{w_{\phi}(a^{\prime})}[ \tilde{F}^{aa^{\prime\prime\prime}}_{b^{\prime}t_{2}}]_{a^{\prime\prime}b^{ \prime\prime\prime}}[F^{a^{\prime}s}_{t_{1}b}]_{ab^{\prime}}F^{t_{1}a}_{a^{ \prime\prime}a^{\prime\prime}a^{\prime\prime}}\tilde{F}^{t_{1}b}_{b^{\prime \prime}b^{\prime\prime}}. \tag{105}\]
Here \(x^{\prime}=x\times t_{1}\), \(x^{\prime\prime}=x\times t_{2}\), \(x^{\prime\prime\prime}=x\times t_{1}\times t_{2}\) for \(x=a,b\), and \(a+s=b\). All we need to show is that (104) equals (105).
To this end, we use (15a) and (5) to simplify (104) and (105) as
\[\sum_{a^{\prime\prime}b^{\prime\prime}a^{\prime\prime\prime}b^{\prime\prime \prime}}w_{\phi}(t_{2})\frac{F^{aa_{2}}_{b^{\prime\prime}b^{\prime\prime}t_{2} ^{\prime}}F^{t_{1}a^{\prime\prime}s}_{b^{\prime\prime}a^{\prime\prime}b^{ \prime\prime}})^{*}}{F^{aa_{2}s}_{b^{\prime\prime}a^{\prime\prime}t_{2}^{ \prime}}}. \tag{106}\]
and
\[\sum_{a^{\prime}b^{\prime}a^{\prime\prime}b^{\prime\prime}a^{\prime\prime \prime\prime}}w_{\phi}(t_{2})\frac{F^{a^{\prime}s_{1}}_{b^{\prime\prime\prime }t_{2}^{\prime}}F^{t_{1}at_{2}}_{a^{\prime\prime\prime}a^{\prime\prime}}(F^{t _{1}as}_{b^{\prime}a^{\prime}b}F^{t_{1}b}_{b^{\prime\prime\prime}b^{\prime \prime}})^{*}}{F^{a^{\prime\prime}t_{2}s}_{b^{\prime\prime\prime}a^{\prime \prime\prime}t_{2}^{\prime}}} \tag{107}\]
respectively. Here \(t_{2}^{\prime}=t_{2}+s\).
The next step is to simplify (107) further. By using the two pentagon identities
\[\begin{split}& F^{a^{\prime}st_{2}}_{b^{\prime\prime\prime}b^{ \prime}t_{2}^{\prime}}F^{t_{1}at_{2}^{\prime}}_{b^{\prime\prime\prime}a^{ \prime\prime}b^{\prime\prime}}=F^{t_{1}as}_{b^{\prime}a^{\prime}b^{\prime}}F^{ t_{1}bt_{2}}_{b^{\prime\prime\prime}b^{\prime\prime}b^{\prime\prime}}F^{ ast_{2}}_{b^{\prime\prime}b^{\prime\prime}t_{2}^{\prime}}\\ & F^{t_{1}at_{2}^{\prime}}_{b^{\prime\prime\prime}a^{\prime}b^{ \prime\prime}}F^{a^{\prime}t_{2}s}_{b^{\prime\prime\prime}a^{\prime\prime\prime \prime}t_{2}^{\prime}}=\sum_{h}F^{t_{1}at_{2}}_{a^{\prime\prime}a^{\prime}h}F^{ t_{1}hs}_{b^{\prime\prime\prime}a^{\prime\prime\prime}b^{\prime\prime}}F^{ ast_{2}}_{b^{\prime\prime}ht_{2}^{\prime}}\end{split} \tag{108}\]
and the unitary conditions
\[\begin{split}&\sum_{b^{\prime}}F^{t_{1}bt_{2}}_{b^{\prime\prime \prime}b^{\prime\prime}}(F^{t_{1}bt_{2}}_{b^{\prime\prime\prime}b^{\prime \prime}})^{*}=1\\ &\sum_{a^{\prime}}F^{t_{1}at_{2}^{\prime}}_{a^{\prime\prime\prime} a^{\prime\prime}}(F^{t_{1}at_{2}}_{a^{\prime\prime\prime\prime}a^{\prime}h})^{*}= \delta_{a^{\prime\prime}h},\end{split} \tag{109}\]
we can show (106)=(107). This completes the proof that the \(B^{\phi,t_{1}}_{p_{1}},B^{\phi,t_{2}}_{p_{2}}\) terms commute with one another.
## Appendix D Showing that \([B^{\phi,s}_{p},W_{\phi^{i}}]=0\)
In this section, we show that \(B^{\phi,s}_{p}\) and \(W_{\phi^{i}}\) commute with one another. For our purpose, it is sufficient to show they commute in the gauge (23). In fact, we check that \(B^{\phi,s}_{p}\) and \(W_{\phi^{i}}\) commute in _any_ gauge.
It suffices to show that the basic string operators \(W^{k}_{\phi^{i}}\) commute with \(B^{\phi,s}_{p}\) since any \(W_{\phi^{i}}\) can be constructed by gluing the basic string operators along the path. Thus, we only have to consider the case when the basic string operators \(W^{k}_{\phi^{i}}\) are around the vertices surrounding the plaquette \(p\) since it is clear that two operators commute if they are further apart.
There are two independent basic string operators which act around each vertex surrounding the plaquette \(p\) (see Fig. 6(a)). We need to show all 12 basic string operators commute with \(B^{\phi,s}_{p}\). Among 12 string operators, there are 6 string operators like \(W^{4}_{\phi^{i}}\), whose ends lie outside \(p\), 4 string operators like \(W^{1}_{\phi^{i}}\) which intersect \(p\) and 2 string operators like \(W^{2}_{\phi^{i}}\) whose ends lie inside \(p\). We will show that \(W^{4}_{\phi^{i}},W^{1}_{\phi^{i}},W^{2}_{\phi^{i}}\) commute with \(B^{\phi,s}_{p}\). In a similar way, one can show other basic string operators also commute with \(B^{\phi,s}_{p}\).
First, we want to show that \(W^{4}_{\phi^{i}}B^{\phi,s}_{p}=B^{\phi,s}_{p}W^{4}_{\phi^{i}}\) (see Fig. 6(b)). To show this, we write out their matrix elements and compare the factors which are different. Specifically, we need to show that the product from the left of the equation
\[\begin{split}& W^{4,krmq}_{\phi^{i},k_{i}r_{i}m_{i},q_{i}}(pnt)W^{3, klm_{i}n}_{\phi^{i},k_{i+1}0m_{i+i}n_{i}}(pq_{i}f)\times\\ & W^{3l,k_{i+1}0m_{i+f}s}_{\phi^{i},k_{i}lm_{i}n_{*}}(pq_{i+\bar{ s}}f_{s})(F^{\bar{s}q_{i}r_{i}}_{t_{i}q_{i+\bar{s}}t}F^{\bar{s}q_{i}}_{m_{i+ \bar{s}}t_{i+\bar{s}}q_{i+\bar{s}}})^{*}\end{split} \tag{110}\]
and the product from the right
\[\begin{split}& W^{4,krmq}_{\phi^{i},k_{i}r_{i}m_{i},q_{i+\bar{s}}}( pn_{i}s_{\bar{s}})W^{3,krmn}_{\phi^{i},k_{1}0m_{i}n_{i}}(pqf)\times\\ & W^{3\dagger,k_{i}0m_{f}s}_{\phi^{i},klmn_{*}}(pq_{i}s_{\bar{s}})(F ^{\bar{s}qr}_{t_{i}q_{i}}F^{f_{\bar{s}}q_{i}}_{m_{i+\bar{s}}t_{q_{i}}})^{*} \end{split} \tag{111}\]
are equal. Here \(x_{y}=x+y\) and the matrix elements of \(W^{k}_{\phi}\) are defined in (22). By using (25) to write \(W^{k\dagger}_{\phi}\) in terms of \(W^{k}_{\phi}\) and (15,16) to write \(w\) in terms of \(F\) symbols, we can then show they are equal by (4).
Second, we want to show that \(W^{1}_{\phi}B^{\phi,s}_{p}=B^{\phi,s}_{p}W^{1}_{\phi^{i}}\) (see Fig. 6(b)). Again, we write down the matrix elements of both sides of the equation and compare the difference between the two. Specifically, from the left is the product
\[W^{1,abcd}_{\phi^{i},a_{i}b_{i}c_{i}d_{i}}(efg)W^{1,a,b_{i}c_{i}d_{i}}_{\phi^{i}+ \phi^{i},a_{i}b_{0}c_{0}d_{b}}(efg)W^{1\dagger,a_{0}c_{0}d_{b+s}}_{\phi^{i},a_{i }b_{i}c_{i}d_{i+s}}(ef_{s}
Third, we want to show that \(W^{2}_{\phi^{j}}B^{\phi,s}_{p}=B^{\phi,s}_{p}W^{2}_{\phi^{j}}\) (see Fig. 6(b)). We write down the their matrix elements and compare the difference. Specifically, from the left we have
\[W^{2,nbfd}_{\phi^{i},n_{i}b_{i}f_{i}d_{i}}(pcg)W^{1,ab_{i}cd_{i}}_{ \phi^{b}+\bar{\phi}^{i},a_{n_{i}+1}\bar{\phi}_{0c_{b}+\bar{d}}b_{i}}(ef_{i}g)\times\] \[W^{3,mn_{i}0p}_{\phi^{a}+\phi^{j},m_{a+i}0_{0n+p_{i}+i}}(qrf_{i}) W^{1\dagger,a_{b}+\bar{\phi}^{i},a_{b}+\bar{\phi}^{i},a_{b}c_{d+i}}_{\phi^{i}+ \bar{\phi}^{i},a_{b}c_{d+i}}(ef_{i+s}g_{s})\times\] \[W^{3,mn_{i}+\bar{\phi}^{i}0_{n+i}f_{i+s}}_{\phi^{a}+\phi^{j},mn_{ i}0p_{s}}(qr_{s}f_{i+s})F^{c_{b+i}f_{i}s}_{g_{s}gf_{i+s}}F^{f_{i}s\bar{s}}_{f_{i }f_{i},0}(F^{f_{i+s}\bar{r}}_{o_{n+i}f_{i}r_{s}})^{*} \tag{101}\]
and from the right we have
\[W^{1,abcd}_{\phi^{a},a_{0}c_{b}d_{b}}(efg)W^{3,mnrop}_{\phi^{a}, m_{a}0_{a}p_{n}}(qrf)\] \[W^{1\dagger,n_{0}ac_{0}d_{b+s}}_{\phi^{a},abc_{b}}(ef_{s}g_{s})W^{ 3\dagger,mn_{0}n_{p}+n_{s}}_{\phi^{a},mnrop_{a}}(qr_{\bar{s}}f_{s}) \tag{102}\] \[W^{2\dagger,nbfd}_{\phi^{i},n_{i}b_{i}f_{i+s}d_{i+s}}(p_{s}c_{g}) F^{c_{b}f_{s}s}_{g_{s}gf_{s}}F^{f_{s}s\bar{s}}_{f_{f},0}(F^{f_{i}\bar{s}\bar{r}}_{ o_{n}fr_{s}})^{*}.\]
Similarly, by a straightforward but tedious computation, one can show they are equal by (4).
## Appendix E Consistency of solutions for vertex coefficients in the presence of splitting
If \(a^{r}=a\) and \(c^{r}=c\), then a \(W^{1}_{\phi^{jr}}\) operator at a vertex \((a,b;c)\) does not change the labels about this vertex; its only effect is to change the labels of the two adjacent sticks. Thus, we obtain:
\[A^{as^{(i+j)r}}A^{cs^{(k-j)r}}A^{ab}_{c}= A^{as^{ir}}A^{cs^{kr}}A^{ab}_{c}\bar{w}_{\phi^{jr}}(b)F^{ abs^{jr}}_{cobj^{r}}(F^{as^{jr}b}_{cabi^{r}})^{*}\] \[\times F^{as^{ir}s^{jr}}_{aas(i+j)r}(F^{cs^{jr}s^{(k-j)r}}_{ccskr}) ^{*} \tag{103}\]
where we have used the gauge (23). If \(A^{ab}_{c}\) is non-zero, this can be true only if the coefficient does not depend on \(b\).
Similarly, applying a \(W^{2}_{\phi^{jr}}\) operator at a vertex \((b,a;c)\), we obtain:
\[A^{as^{(i+j)r}}A^{cs^{(k-j)r}}A^{ba}_{c}= A^{as^{ir}}A^{cs^{kr}}A^{ba}_{c}F^{bas^{jr}}_{ccb^{jr}}\] \[\times F^{as^{ir}s^{jr}}_{aas(i+j)r}F^{cs^{jr}}_{ccskr}(F^{cs^{jr }s^{(k-j)r}}_{\hphantom{c}cskr})^{*} \tag{104}\]
Finally, applying \(W^{2}_{\phi^{jr}}W^{1}_{\phi^{jr}}\) to a vertex \((a,c;b)\) with \(as^{r}=a\) and \(cs^{r}=c\) gives:
\[A^{as^{(i+j)r}}A^{cs^{(k-j)r}}A^{ac}_{b}= A^{as^{ir}}A^{cs^{kr}}A^{ac}_{b}\bar{w}_{\phi^{jr}}(c)(F^{cs^{jr }s^{-jr}}_{c0})^{*}\] \[\times(F^{as^{jr}c}_{bac})^{*}F^{as^{ir}s^{jr}}_{aas(i+j)r}F^{cs ^{kr}s^{-jr}}_{ces(k-j)r} \tag{105}\]
Similar equations appear about downward-oriented vertices, involving the string operators \(W^{3}_{\phi^{jr}},W^{4}_{\phi^{jr}}\); however these do not impose any new conditions required for consistency.
We now show that the coefficients identified above are independent of \(b\) _for different choices of \(b\) that are related by fusion with \(s^{jr}\)_. Iterating this, we see that the coefficients are the same for any \(b\) in the fusion orbit of \(s^{jr}\). In particular, for \(j=1\) we see that the coefficient is the same for any \(b\) in the fusion orbit of \(s^{r}\). Moreover, the string operator \(W^{i}_{\phi^{jr}}=(W^{i}_{\phi^{r}})^{j}\); hence we conclude that, for any \(j\), the coefficients in Eqs. (103-105) are the same for all \(b\) in the fusion orbit of \(s^{r}\).
We begin with Eq. (103). Using Eq. (5 a), we find
\[F^{as^{jr}b}_{cab^{jr}}F^{ab^{jr}s^{jr}}_{ccb^{jr}}F^{s^{jr}bs^{jr}}_{b^{jr}b^{ jr}b^{jr}}=F^{abs^{jr}}_{ccb^{jr}}F^{as^{jr}b^{jr}}_{cab^{2jr}}. \tag{106}\]
Next, we multiply both sides of the equation by \(w_{\phi^{jr}}(b^{(jr)})\), and use Eq. (15) to see that
\[w_{\phi^{jr}}(b)w_{\phi^{jr}}(s^{jr})=w_{\phi^{jr}}(b^{(jr)})F^{s^{jr}bs^{jr}}_{ b^{jr}b^{jr}b^{jr}}. \tag{107}\]
Since \(w_{\phi^{jr}}(s^{jr})=1\), we thus find:
\[w_{\phi^{jr}}(b)F^{as^{jr}b}_{cab^{jr}}(F^{bs^{jr}}_{ccb^{jr}})^{*}=w_{\phi^{jr}}( b^{(jr)})F^{as^{jr}b^{jr}}_{cab^{2jr}}(F^{sb^{jr}s^{jr}}_{ccb^{2jr}})^{*}. \tag{108}\]
Iterating this result, we see that the coefficient is the same for all choices of \(b\) in the same fusion orbit of \(s^{r}\).
Next, consider Eq. (104). Eq. (5 a) stipulates:
\[F^{s^{jr}b}_{cb^{jr}c}F^{s^{jr}cs^{kr}}_{cccs^{k}r}F^{bak^{kr}}_{cca}=F^{b^{jr}as ^{kr}}_{cca}F^{s^{jr}ba}_{cb^{jr}c} \tag{109}\]
Further, since \(s^{jr}c=c\), we have
\[w_{\phi^{jr}}(c)w_{\phi^{jr}}(s^{kr})=\frac{F^{jsr}_{cccc}F^{cs^{kr}sr}_{ccs(k^{ (k+j)r})}}{F^{cs^{jr}s^{kr}}_{ccs(k^{(k+j)r})}}w_{\phi^{jr}}(c) \tag{110}\]
and hence \(F^{s^{jr}cs^{jr}}_{ccc}=1\). It follows that \(F^{bs^{jr}}_{cca}=F^{b^{jr}as^{jr}}_{cca}\), and the coefficient is the same for any \(b\) in the fusion orbit of \(s^{jr}\).
Finally, consider Eq. (105). We have simplified the coefficient on the right-hand side as follows. Similar to Eq. (77), the product of \(W^{2}_{\phi^{jr}}W^{1}_{\phi^{jr}}\) on the vertex \((a,c;b)\) can be expressed as
\[\bar{w}_{\phi^{jr}}(c)F^{acs^{jr}}_{b^{jr}bc}(F^{as^{jr}c}_{b^{jr}ac })^{*}(F^{bs^{jr}s^{-jr}}_{bb^{jr}s^{r}})^{*}(F^{b^{jr}s^{-jr}s^{r}}_{bb^{jr}c}\] \[\times F^{as^{jr}}_{aas(i+j)r}F^{cs^{k}rs^{-jr}}_{ccs(k-j)r} \tag{111}\]
where the stick on the \(a\) edge initially carries the label \(s^{ir}\), that on the \(c\) edge carries a label \(s^{kr}\), and that on the \(b\) edge carries \(s^{t}\).
However, we can use Eq. (5) to show that when \(as^{r}=a\) and \(cs^{r}=c\),
\[F^{b^{jr}s^{-jr}s^{t}}_
We can show that the coefficient is the same for any \(b\) in the fusion orbit of \(s^{jr}\):
\[F^{aa^{jsr}}_{bac}F^{acs^{jsr}}_{b^{jr}bc}F^{s^{jsr}}_{ccc}=F^{acs^{jsr}}_{b^{jr}bc }F^{ss^{jsr}c}_{b^{jr}ac} \tag{111}\]
Cancelling the redundant factors on both sides, and noting that by Eq. (100), \(F^{s^{jsr}cs^{jsr}}_{ccc}=1\), we see that \(F^{as^{jsr}}_{bac}=F^{as^{jsr}c}_{b^{jr}ac}\), and again by iterating we find our result.
Now, if \(b\) is an abelian particle, and \(a\times s^{r}=a,c\times s^{r}=c\), then \(a\times c=\sum_{j}b^{jr}+...\), where \(...\) cannot contain \(b^{l}\) for \(l\neq jr\). This follows from the cyclic property of the branching rules. Suppose \((a,c;b)\) and \((a,c;b^{j})\) are allowed by the branching rules. Then so are \((c,\bar{b}^{-j};a)\) and \((\bar{b}^{-j},a;\tilde{c})\). However, if \(b^{j}=b\times s^{j}=s^{j}\times b\), then we must also have \((\bar{b},a;\bar{c}^{j})\). If \(j=lr\) then \(\bar{c}^{j}=\bar{c}\), and the outcome of fusing \(\bar{b}\) with \(a\) is unique. Otherwise, however, we see that fusing \(\bar{b}\) with \(a\) can have at least two different outcomes; hence \(b\) is not an abelian string label. In particular, if \(b=s^{j}\), it follows that all equations associated with acting with string operators \(s^{jr}\) on the vertices \((a,s^{j};a)\), \((s^{j},a;a)\), and \((a,\bar{a};s^{j})\) are consistent. This allows us to solve for the coefficients \(A^{as^{j}}\).
In general, however, vertices of the form (for example) \((b,a;c)\) and \((g,a;c)\), where \(g\neq bs^{j}\), can lead to multiple equations of the form (101), which relate the same pairs of coefficients \(A^{as^{(i+l)r}},A^{cs^{(k-l)r}}\) to \(A^{as^{irr}}A^{cs^{kr}}\). If \(F^{bas^{lr}}_{ccbr}\neq F^{gas^{lr}}_{ccgr}\), these equations are mutually inconsistent unless we choose either \(A^{ba}_{c}=0\) or \(A^{ga}_{c}=0\). The resolution to this is to recognize that since both \(a\) and \(c\) split, there are multiple non-zero choices for the coefficient: \(A^{ba}_{c}=(A^{ba}_{c})_{ij}\), denoting a choice of \(\tilde{a}_{i}\) and \(\tilde{c}_{j}\). For a given \(b\), we find that the coefficient \(F^{bas^{jsr}}_{ccgr}\) takes on at most \(p/r\) distinct values (and similarly for other vertices); hence we expect to find at least one non-zero \((A^{ba}_{c})_{ij}\) for each \(i\).
### Relations between coefficients \(A^{as^{jr}}\) when \(a^{r}=a\)
Suppose \(a^{r}=a\). Eq. (5a) gives:
\[F^{as^{lr}s^{jr}}_{aas^{(l+j)r}}F^{as^{(i+j)r}}_{aas^{(l+j+k)r}}F^{s^{l(i+j+k )r}}_{aas^{(l+j+k)r}s^{(l+j)r}s^{(j+k)r}}\] \[=F^{as^{jsr}}_{aas^{(j+k)r}}F^{as^{lr}s^{(j+k)r}}_{aas^{(l+j+k)r}} \tag{112}\]
In our gauge of choice, taking \(j\to j-1\) and \(k=1\) in the above expression gives
\[F^{as^{lr}s^{jr}}_{aas^{(l+j)r}}=\frac{F^{as^{l}s^{(j-1)r}}_{aas^{(l+j-1)r}}F^ {as^{(l+j-1)r}s^{r}}_{aas^{(l+j)r}}}{F^{as^{(l-j)r}s^{r}}_{aas^{(l+j)r}}} \tag{113}\]
Using this expression repeatedly, we can show that
\[F^{as^{lr}s^{jr}}_{aas^{(l+j)r}} =\left(\frac{\prod_{k=0}^{j-1}F^{as^{(l+k)r}s^{r}}_{aas^{(l+k+1)r}} }{\prod_{k=j}^{l-1}F^{as^{kr}s^{r}}_{aas^{(l+1)r}}}\right)\] \[=\left(\frac{\prod_{k=l}^{l+j-1}F^{as^{jkr}s^{r}}_{aas^{(k+1)r}}}{ \prod_{k=1}^{l-1}F^{as^{jkr}s^{r}}_{aas^{(k+1)r}}}\right) \tag{114}\]
From this, we see that
\[\frac{F^{as^{lr}s^{jr}}_{aas^{(l+j)r}}}{F^{as^{jsr}s^{jr}s^{jr}}_{ aas^{(l+j)r}}} =\left(\frac{\prod_{k=l}^{l+j-1}F^{as^{kr}s^{r}}_{aas^{(k+1)r}}}{ \prod_{k=j}^{l+j-1}F^{as^{kr}s^{r}}_{aas^{(k+1)r}}}\right)\left(\frac{\prod_{k=1 }^{l-1}F^{as^{kr}s^{r}}_{aas^{(k+1)r}}}{\prod_{k=1}^{j-1}F^{as^{kr}s^{r}}_{aas^{ (k+1)r}}}\right) \tag{115}\]
Without loss of generality, assume that \(l>j\). Then:
\[\left(\frac{\prod_{k=l}^{l+j-1}F^{as^{kr}s^{r}}_{aas^{(k+1)r}}}{ \prod_{k=j}^{l+j-1}F^{as^{kr}s^{kr}}_{aas^{(k+1)r}}}\right) =\frac{1}{\prod_{k=j}^{l-1}F^{as^{kr}s^{kr}s^{r}}_{aas^{(k+1)r}}}\] \[\left(\frac{\prod_{k=1}^{l-1}F^{as^{kr}s^{r}}_{aas^{(k+1)r}}}{ \prod_{k=1}^{j-1}F^{as^{kr}s^{r}}_{aas^{(k+1)r}}}\right) =\prod_{j}^{l-1}F^{as^{kr}s^{r}}_{aas^{(k+1)r}} \tag{116}\]
so that
\[F^{as^{lr}s^{jr}}_{aas^{(l+j)r}}=F^{as^{jsr}s^{lr}}_{aas^{(l+j)r}} \tag{117}\]
Further, from the expression (88),
\[(A^{as^{nr}})_{\mu}=(A^{as^{r}})_{\mu}^{n}\prod_{k=1}^{n-1}(F^{as^{kr}s^{r}}_{ aas^{(k+1)r}}) \tag{118}\]
we see that
\[\frac{(A^{as^{(j+l)r}})_{\mu}}{(A^{as^{lr}})_{\nu}} =\frac{(A^{as^{r}})_{\mu}^{j+l}}{(A^{as^{r}})_{\nu}^{l}}\prod_{k= 1}^{j+l-1}(F^{as^{kr}s^{r}s^{r}}_{aas^{(k+1)r}}) \tag{119}\] \[=\frac{(A^{as^{r}})_{\mu}^{j+l}}{(A^{as^{r}})_{\nu}^{l}}\prod_{k= 1}^{j+l-1}(F^{as^{kr}s^{r}}_{aas^{(k+1)r}}) \tag{120}\]
Hence:
\[\frac{(A^{as^{(j+l)r}})_{\mu}}{(A^{as^{lr}})_{\nu}(A^{as^{jr}})_{\rho}} =\frac{(A^{as^{r}})_{\mu}^{j+l}}{(A^{as^{r}})_{\nu}^{l}}\prod_{k= 1}^{j+l-1}(F^{as^{kr}s^{r}}_{aas^{(k+1)r}})\] \[=\frac{(A^{as^{r}})_{\mu}^{j+l}}{(A^{as^{r}})_{\nu}^{l}(A^{as^{r} })_{\rho}^{j}}F^{as^{lr}s^{jr}}_{aas^{(l+j)r}} \tag{121}\]
where in the last line, we have used Eq. (117). Evidently, if all factors come from the same solution (i.e. \(\mu=\nu=\rho\)), then the right-hand side is simply \(F^{as^{lr}s^{jr}}_{aas^{(l+j)r}}\).
### Vertices where multiple labels split
For vertices where multiple labels split (and none of the labels are a power of \(s\)), we must address two questions. First, is it the case that for every choice of \(b\), there exists at least one pair \((\mu,\nu)\) for which Eq. (96) can be satisfied? If not, we must conclude that at least one of the particles \(a,b,\) or \(c\) must be confined.
Recall that if \(s^{q}=1\), then if \(w_{\phi^{\prime}}(b)\neq 1\), \(b\) does not correspond to any label in our effective Hilbert space; hence in this case we must set \((A^{ab}_{c})_{\nu}^{\mu}=0\) for all \(\mu,\nu\). When \(w_{\phi^{\prime}}(b)=1\), \(W^{1}_{\phi^{\prime}}\) acts as the identity operator at the vertex \((a,b;c)\), up to the pure charge \((0,m)\), which does not involve any fusion. Applying the operator \(W^{1}_{\phi^{\prime}}\) \(q/r\) times, in the gauge (23) we obtain the matrix element
\[M_{1}(a,b,c))^{q/r}\left(\prod_{k=1}^{q/r-1}F^{as^{kr}s}_{aas^{(k+1)r})}\right) \left(\prod_{k=1}^{q/r-1}(F^{cs^{r}\tilde{s}^{kr}}_{cc\tilde{s}^{(k-1)r}})^{*}\right) \tag{101}\]
Since \(W^{1}_{\phi^{\prime}}=(W^{1}_{\phi^{\prime}})^{q/r}\), it follows that:
\[(M_{1}(a,b,c))^{q/r}=\left(\prod_{k=1}^{q/r-1}(F^{as^{kr}s}_{aas^{(k+1)r})})^{* }\right)\left(\prod_{k=1}^{q/r-1}(F^{cs^{r}\tilde{s}^{kr}}_{cc\tilde{s}^{(k-1)r }})\right) \tag{102}\]
hence \(M_{1}(a,b,c)\) is a \(q/r^{th}\) root of the product on the right-hand side. Now, \(A^{as^{r}}\) is a \(q/r^{th}\) root of \(\prod_{k=1}^{q/r-1}(F^{as^{kr}s}_{aas^{(k+1)r})})^{*}\), while \(A^{cs^{r}}\) is a \(q/r^{th}\) root of \(\prod_{k=1}^{q/r-1}(F^{cs^{kr}s^{kr}}_{cc\tilde{s}^{(k+1)r}})^{*}=\left(\prod_ {k=1}^{q/r-1}F^{cs^{r}\tilde{s}^{kr}}_{cc\tilde{s}^{(k+1)r}}\right)^{-1}\), where we have used Eq. (100). Thus \(A^{as^{r}}/A^{cs^{r}}\) is also a \(q/r^{th}\) root of the product on the right-hand side. It follows that for every \(b\), there exists at least one choice of \(\mu,\nu\) for which Eq. (94) is satisfied - in which case it is also satisfied for \(b^{kr}\), \(0\leq k<q/r\). This suggests that for a fixed \(M_{1}(a,b,c)\) of modulus \(1\), we expect \(q/r\) distinct solutions \(A^{as^{r}}_{\mu}/A^{cs^{r}}_{\nu}=M_{1}(a,b,c)e^{2\pi inr/q}\), \(0\leq n<q/r\).
At this point, it is worth commenting on the fusion rules of the new theory. In the most general case, we have
\[(\sum_{\mu=1}^{q/r}\tilde{a}_{\mu})\times(\sum_{\nu=1}^{q/r}\tilde{c}_{\nu})= \frac{q}{r}\sum_{\tilde{b}|b\times s^{r}\neq b}c_{b}\tilde{b}+\sum_{\tilde{d }|d\times s^{r}=d}\sum_{\lambda}c_{d,\lambda}\tilde{d}_{\lambda}. \tag{103}\]
(The first sum contains any terms that do not split, and the second contains terms that do.) As discussed in the main text, if \((a,c;b^{k})\) is not an allowed vertex for any \(0<k<r\), then \(c_{b}=1\); otherwise, \(c_{b}\) counts the number of distinct \(k\), \(0\leq k<r\), for which \((a,c;b^{k})\) is allowed. The second sum runs over labels \(d\) for which \(d\times s^{v}=d\), with \(0<v<q\). In this case, in the condensed theory a fusion channel \(\tilde{d}_{\lambda}\) appears with a coefficient \(c_{d,\lambda}\), whose value depends on \(v,r\), and the number of values of \(\lambda\) for which the new fusion rules admit solutions.
As discussed in the main text, when \(c_{b}=1\), the number of distinct values of \((\mu,\nu)\) for which \((\tilde{a}_{\mu},\tilde{c}_{\nu};\tilde{b})\) is allowed by the branching rules is equal to the number of copies of \(\tilde{b}\) on the right, and the new theory need not have fusion multiplicity. If \(c_{b}>1\), the label \(\tilde{b}=\sum_{j=1}^{q-1}b^{j}\) at the vertex \((\tilde{a}_{\mu},\tilde{c}_{\nu};\tilde{b})\) may be associated with multiple different values of the coefficient \(M_{l}(a,b^{j},c)\). Each distinct coefficient then corresponds to a distinct set of solutions \((\mu,\nu)\) to Eq. (94). Thus, if \(\{M_{l}(a,b^{j},c),0\leq j<r\}\) are all distinct, then we can find up to \(c_{b}q/r\) distinct solutions to Eq. (94), and again the new theory need not have fusion multiplicities. On the other hand, if these coefficients are not all distinct, then in general fusion multiplicities are expected, meaning that the coefficients \((A^{ab}_{c})^{\mu}_{\nu}\) are matrices.
The situation for vertices \((\tilde{a}_{\mu},\tilde{c}_{\nu};\tilde{d}_{\lambda})\) is similar, except that in this case if \(v\) and \(r\) are not mutually prime, additional constraints are imposed which fix which coefficients \((A^{ab}_{c})^{\mu,\lambda}_{\nu}\) are non-zero. Again, we cannot rule out the possibility of fusion multiplicities and the need to make these coefficients matrices.
|
2305.03379
|
Some Analytical Properties of the Hyperbolic Sine Integral
|
By using some tools of analysis, we establish some analytical properties such
as monotonicity and inequalities involving the hyperbolic sine integral
function. As applications of some of the established properties, we obtain some
rational bounds for the hyperbolic tangent function.
|
Kwara Nantomah
|
2023-05-05T09:16:24Z
|
http://arxiv.org/abs/2305.03379v1
|
# Some analytical properties of the hyperbolic sine integral
###### Abstract.
By using some tools of analysis, we establish some analytical properties such as monotonicity and inequalities involving the hyperbolic sine integral function. As applications of some of the established properties, we obtain some rational bounds for the hyperbolic tangent function.
Key words and phrases:Hyperbolic sine integral function, hyperbolic functions, hyperbolic sine function, bounds, inequalities 2010 Mathematics Subject Classification: 33B10, 33Bxx, 26D05
## 1. Introduction
The cardinal hyperbolic sine function, which is also known as the sinhc function or the hyperbolic sinc function, is defined for \(z\in(-\infty,\infty)\) as [15]
\[\text{sinhc}(z)=\begin{cases}\frac{\sinh(z)}{z},&z\neq 0\\ 1,&z=0.\end{cases} \tag{1}\]
It has been very useful in various areas of mathematics, physics and engineering. For example, it has been demonstrated that the function exhibits a clear geometric interpretation as the ratio between length and chord of a symmetric catenary segment [9], [15]. Due to its usefulness, it has been investigated by several researchers and many remarkable inequalities have been established concerning the function. For further information and recent developments on such inequalities, one may consult the works [3], [4], [5], [6], [7], [8], [10], [16], [17] and the references therein.
Closely related to the sinhc function is the hyperbolic sine integral function, which is defined for \(z\in(-\infty,\infty)\) as [1, p. 231]
\[\text{Shi}(z)=\int_{0}^{z}\frac{\sinh(t)}{t}\,dt. \tag{2}\]
In [12], the author considered three representations in terms of the hypergeometric function \({}_{2}F_{3}\) for a certain indefinite hyperbolic sine integral. A review of the literature reveals that, unlike the cardinal hyperbolic sine function, which is well researched in terms of its inequalities or bounds, the hyperbolic sine integral is yet to receive similar attention.
The purpose of this paper is to trigger the process for such investigations and attention. Precisely, we establish some analytical properties such as monotonicity and inequalities involving the hyperbolic sine integral function. As applications of some of the established properties, we obtain some rational bounds for the hyperbolic tangent function. We present our findings in the subsequent sections.
## 2. Some Properties of the Hyperbolic Sine Integral
The hyperbolic sine integral function may also be defined for \(z\in(-\infty,\infty)\) by the following equivalent forms.
\[\operatorname{Shi}(z) =\int_{0}^{1}\frac{\sinh(zt)}{t}\,dt, \tag{3}\] \[=\sum_{r=0}^{\infty}\frac{z^{2r+1}}{(2r+1)(2r+1)!}. \tag{4}\]
By change of variable, representation (3) is obtained from (2) and representation (4) is obtained from either (2) or (3) by using the series representation of \(\frac{\sinh(z)}{z}\).
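For illustration only (this is not part of the analysis, and the helper names are ours), the agreement between the integral representation (2) and the truncated series (4) can be checked numerically with standard scientific Python tools:

```python
# Minimal numerical cross-check of representations (2) and (4); function
# names and sample points are illustrative choices, not from the paper.
import math
from scipy.integrate import quad

def shi_series(z, terms=40):
    """Shi(z) from the power series (4), truncated after `terms` terms."""
    return sum(z**(2*r + 1) / ((2*r + 1) * math.factorial(2*r + 1))
               for r in range(terms))

def shi_quad(z):
    """Shi(z) from the integral representation (2)."""
    value, _ = quad(lambda t: math.sinh(t)/t if t != 0.0 else 1.0, 0.0, z)
    return value

for z in (0.5, 1.0, 2.0, 5.0):
    print(f"z={z}:  series={shi_series(z):.10f}  quadrature={shi_quad(z):.10f}")
```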
By utilizing representation (3), the derivatives of \(\operatorname{Shi}(z)\) are obtained as follows.
\[\operatorname{Shi}^{(k)}(z)=\int_{0}^{1}t^{k-1}\sinh(zt)dt,\quad k\in\{2m:m \in\mathbb{N}_{0}\}, \tag{5}\]
\[\operatorname{Shi}^{(k)}(z)=\int_{0}^{1}t^{k-1}\cosh(zt)dt,\quad k\in\{2m+1:m \in\mathbb{N}_{0}\}, \tag{6}\]
where \(\mathbb{N}_{0}=\{0,1,2,3,...\}\). In particular, the first and second derivatives are
\[\operatorname{Shi}^{\prime}(z)=\int_{0}^{1}\cosh(zt)dt=\frac{\sinh(z)}{z}, \tag{7}\]
\[\operatorname{Shi}^{\prime\prime}(z)=\int_{0}^{1}t\sinh(zt)dt=\frac{\cosh(z) }{z}-\frac{\sinh(z)}{z^{2}}. \tag{8}\]
**Remark 2.1**.: Identity (8) implies that
\[\cosh(z)>\frac{\sinh(z)}{z} \tag{9}\]
for \(z>0\) and this is well known in the literature.
**Lemma 2.2**.: _If a function \(\frac{p(x)}{x}\) is increasing or decreasing on an interval \(I\), then \(p(x)\) is superadditive or subadditive on \(I\), respectively._
Proof.: See Lemma 3.2 of [14] or Theorem 3.1 of [2].
**Theorem 2.3**.: _The function \(\operatorname{Shi}(z)\) is superadditive on \((0,\infty)\). That is, the inequality_
\[\operatorname{Shi}(u+v)>\operatorname{Shi}(u)+\operatorname{Shi}(v) \tag{10}\]
_holds for \(u>0\) and \(v>0\)._
First Proof.: Let \(A(z)=\frac{\operatorname{Shi}(z)}{z}\) for \(z>0\). Then
\[z^{2}A^{\prime}(z) =z\mathrm{Shi}^{\prime}(z)-\mathrm{Shi}(z)\] \[=\sum_{r=0}^{\infty}\frac{z^{2r+1}}{(2r+1)!}-\sum_{r=0}^{\infty} \frac{z^{2r+1}}{(2r+1)(2r+1)!}\] \[=\sum_{r=0}^{\infty}\left[1-\frac{1}{2r+1}\right]\frac{z^{2r+1}}{ (2r+1)!}\] \[>0.\]
Hence \(A(z)\) is increasing and the conclusion follows from Lemma 2.2.
Second Proof.: Let \(u>0\) and \(v>0\). Then
\[\mathrm{Shi}(u+v) =\int_{0}^{1}\frac{\sinh(ut+vt)}{t}\,dt\] \[=\int_{0}^{1}\frac{\sinh(ut)\cosh(vt)}{t}\,dt+\int_{0}^{1}\frac{ \cosh(ut)\sinh(vt)}{t}\,dt\] \[>\int_{0}^{1}\frac{\sinh(ut)}{t}\,dt+\int_{0}^{1}\frac{\sinh(vt)} {t}\,dt\] \[=\mathrm{Shi}(u)+\mathrm{Shi}(v)\]
since \(\cosh(z)>1\) for all \(z\neq 0\).
Third Proof.: Let \(\phi(u,v)=\mathrm{Shi}(u+v)-\mathrm{Shi}(u)-\mathrm{Shi}(v)\) for \(u>0\) and \(v>0\). Without loss of generality, let \(v\) be fixed. Then
\[\frac{\partial}{\partial u}\phi(u,v) =\mathrm{Shi}^{\prime}(u+v)-\mathrm{Shi}^{\prime}(u)\] \[=\int_{0}^{1}\cosh(ut+vt)dt-\int_{0}^{1}\cosh(ut)dt\] \[=\int_{0}^{1}\left[\cosh(ut)\cosh(vt)+\sinh(ut)\sinh(vt)\right]\, dt-\int_{0}^{1}\cosh(ut)\,dt\] \[=\int_{0}^{1}\cosh(ut)[\cosh(vt)-1]dt+\int_{0}^{1}\sinh(ut)\sinh(vt)dt\] \[>0\]
since \(\cosh(z)>1\) for all \(z\neq 0\). Thus, \(\phi(u,v)\) is increasing in \(u\) and so
\[\phi(u,v)>\lim_{u\to 0}\phi(u,v)=0\]
which gives the desired result.
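As a quick illustration, the superadditivity (10) can be spot-checked numerically; the sketch below reuses the shi_series helper from the earlier sketch, and the sample points are arbitrary.

```python
# Spot-check of inequality (10) at a few sample points (illustrative only);
# shi_series is the helper defined in the earlier sketch.
for u, v in [(0.3, 0.7), (1.0, 2.0), (4.0, 0.5)]:
    lhs = shi_series(u + v)
    rhs = shi_series(u) + shi_series(v)
    print(f"Shi({u + v}) = {lhs:.6f} > {rhs:.6f} = Shi({u}) + Shi({v}):", lhs > rhs)
```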
**Theorem 2.4**.: _The inequality_
\[\mathrm{Shi}(u)+\mathrm{Shi}(v)>u+v \tag{11}\]
_holds for \(u>0\) and \(v>0\), and the inequality_
\[\frac{\mathrm{Shi}(u)}{\mathrm{Shi}(v)}\leq\frac{u}{v} \tag{12}\]
_holds for \(0<u\leq v\)._
Proof.: The monotonicity property of the function \(\frac{\mathrm{Shi}(z)}{z}\) implies that, for \(z>0\), we have
\[\frac{\mathrm{Shi}(z)}{z}>\lim_{z\to 0^{+}}\frac{\mathrm{Shi}(z)}{z}=1.\]
That is,
\[\mathrm{Shi}(z)>z.\]
Hence for \(u>0\) and \(v>0\), we have \(\mathrm{Shi}(u)>u\) and \(\mathrm{Shi}(v)>v\), which results in (11). Likewise, for \(0<u\leq v\), we have
\[\frac{\mathrm{Shi}(u)}{u}\leq\frac{\mathrm{Shi}(v)}{v}\]
which results in (12).
**Theorem 2.5**.: _Let \(z>0\) and \(\lambda\in(0,1)\). Then the inequality_
\[\mathrm{Shi}(\lambda z)<\lambda\mathrm{Shi}(z) \tag{13}\]
_holds. If \(\lambda>1\), then the inequality is reversed._
Proof.: Let \(\alpha(z)=\mathrm{Shi}(\lambda z)-\lambda\mathrm{Shi}(z)\) for \(z>0\) and \(\lambda\in(0,1)\). Then
\[\alpha^{\prime}(z)=\lambda\left[\mathrm{Shi}^{\prime}(\lambda z)-\mathrm{Shi} ^{\prime}(z)\right]<0\]
since \(\mathrm{Shi}^{\prime}(z)\) is increasing for \(z>0\). Hence \(\alpha(z)\) is decreasing and then, we have
\[\alpha(z)<\lim_{z\to 0^{+}}\alpha(z)=0\]
which gives (13).
**Theorem 2.6**.: _For \(z>0\), the inequality_
\[\mathrm{Shi}(z)+\mathrm{Shi}(1/z)\geq 2\int_{0}^{1}\frac{\sinh(t)}{t}dt\approx 2.11450 \tag{14}\]
_holds. Equality is attained if \(z=1\)._
Proof.: The case for \(z=1\) is easily seen. Because of this, let \(P(z)=\mathrm{Shi}(z)+\mathrm{Shi}(1/z)\) for \(z\in(0,1)\cup(1,\infty)\). Then
\[P^{\prime}(z)=\mathrm{Shi}^{\prime}(z)-\frac{1}{z^{2}}\mathrm{Shi}^{\prime}(1/ z),\]
which means that
\[zP^{\prime}(z)=\sinh(z)-\sinh(1/z):=E(z)\]
Since \(\sinh(z)\) is increasing, then \(E(z)<0\) if \(z\in(0,1)\) and \(E(z)>0\) if \(z\in(1,\infty)\). Thus, \(P(z)\) is decreasing on \((0,1)\) and increasing on \((1,\infty)\). Therefore, on both intervals, we have
\[P(z)>\lim_{z\to 1}P(z)=2\mathrm{Shi}(1)=2\int_{0}^{1}\frac{\sinh(t)}{t}dt \approx 2.11450\]
completing the proof.
**Lemma 2.7** ([13]).: _Let \(-\infty\leq u<v\leq\infty\) and \(p\) and \(q\) be continuous functions that are differentiable on \((u,v)\), with \(p(u+)=q(u+)=0\) or \(p(v-)=q(v-)=0\). Suppose that \(q(z)\) and \(q^{\prime}(z)\) are nonzero for all \(z\in(u,v)\). If \(\frac{p^{\prime}(z)}{q^{\prime}(z)}\) is increasing (or decreasing) on \((u,v)\), then \(\frac{p(z)}{q(z)}\) is also increasing (or decreasing) on \((u,v)\)._
In the literature, Lemma 2.7 is referred to as the l'Hospital rule for monotonicity. It has become a remarkable tool in proving various results in mathematical analysis.
**Lemma 2.8**.: _For \(z>0\), the function \(T(z)=\frac{\sinh(z)}{\operatorname{Shi}(z)}\) is increasing._
Proof.: For \(z\in(0,\infty)\), we have
\[T(z)=\frac{\sinh(z)}{\operatorname{Shi}(z)}=\frac{p_{1}(z)}{q_{1}(z)},\]
where \(p_{1}(z)=\sinh(z)\), \(q_{1}(z)=\operatorname{Shi}(z)\) and \(p_{1}(0)=q_{1}(0)=0\). Then
\[\frac{p^{\prime}_{1}(z)}{q^{\prime}_{1}(z)}=\frac{z\cosh(z)}{\sinh(z)}=\frac{ p_{2}(z)}{q_{2}(z)}\]
where \(p_{2}(z)=z\cosh(z)\), \(q_{2}(z)=\sinh(z)\) and \(p_{2}(0)=q_{2}(0)=0\). Then
\[\sinh^{2}(z)\left(\frac{p_{2}(z)}{q_{2}(z)}\right)^{\prime} =[\cosh(z)+z\sinh(z)]\sinh(z)-z\cosh^{2}(z)\] \[=\cosh(z)\sinh(z)+z\left[\sinh^{2}(z)-\cosh^{2}(z)\right]\] \[=\cosh(z)\sinh(z)-z\] \[>0\]
since \(\cosh(z)>1\) and \(\sinh(z)>z\) for \(z>0\). Thus, \(\frac{p^{\prime}_{1}(z)}{q^{\prime}_{1}(z)}\) is increasing. Hence by Lemma 2.7, the function \(\frac{p_{1}(z)}{q_{1}(z)}\) is also increasing. This completes the proof.
**Theorem 2.9**.: _For \(z>0\), the inequality_
\[\operatorname{Shi}(z)\operatorname{Shi}(1/z)\geq\left(\int_{0}^{1}\frac{\sinh (t)}{t}dt\right)^{2}\approx 1.11778 \tag{15}\]
_holds. Equality is attained if \(z=1\)._
Proof.: The case for \(z=1\) is easily seen. And so, let \(Q(z)=\operatorname{Shi}(z)\operatorname{Shi}(1/z)\) and \(\theta(z)=\ln Q(z)\) for \(z\in(0,1)\cup(1,\infty)\). Then
\[z\theta^{\prime}(z) =z\frac{\operatorname{Shi}^{\prime}(z)}{\operatorname{Shi}(z)}-\frac{1}{z}\frac{\operatorname{Shi}^{\prime}(1/z)}{\operatorname{Shi}(1/z)}\] \[=\frac{\sinh(z)}{\operatorname{Shi}(z)}-\frac{\sinh(1/z)}{\operatorname{Shi}(1/z)}\] \[:=H(z).\]
By Lemma 2.8, \(H(z)<0\) if \(z\in(0,1)\) and \(H(z)>0\) if \(z\in(1,\infty)\). Consequently, \(Q(z)\) is decreasing on \((0,1)\) and increasing on \((1,\infty)\). Therefore,
on both intervals, we have
\[Q(z)>\lim_{z\to 1}Q(z)=(\operatorname{Shi}(1))^{2}=\left(\int_{0}^{1}\frac{ \sinh(t)}{t}dt\right)^{2}\approx 1.11778\]
completing the proof.
**Lemma 2.10**.: _For \(z>0\), the function \(V(z)=\operatorname{Shi}(z)-\sinh(z)\) is decreasing and the inequality_
\[\operatorname{Shi}(z)-\sinh(z)<0 \tag{16}\]
_holds._
Proof.: We have
\[V^{\prime}(z) =\operatorname{Shi}^{\prime}(z)-\cosh(z)\] \[=\frac{\sinh(z)}{z}-\cosh(z)<0\]
as a result of (9). Hence
\[V(z)<\lim_{z\to 0}V(z)=0\]
which proves (16).
**Lemma 2.11**.: _For \(z>0\), the function \(K(z)=\frac{\sinh(z)}{\operatorname{Shi}^{2}(z)}\) is decreasing._
Proof.: For \(z\in(0,\infty)\), we have
\[K(z)=\frac{\sinh(z)}{\operatorname{Shi}^{2}(z)}=\frac{p_{1}(z)}{q_{1}(z)},\]
where \(p_{1}(z)=\sinh(z)\), \(q_{1}(z)=\operatorname{Shi}^{2}(z)\) and \(p_{1}(0)=q_{1}(0)=0\). Then
\[\frac{p_{1}^{\prime}(z)}{q_{1}^{\prime}(z)}=\frac{z\cosh(z)}{2\operatorname{ Shi}(z)\sinh(z)}=\frac{p_{2}(z)}{q_{2}(z)}\]
where \(p_{2}(z)=z\cosh(z)\), \(q_{2}(z)=2\operatorname{Shi}(z)\sinh(z)\) and \(p_{2}(0)=q_{2}(0)=0\). Then
\[2\operatorname{Shi}^{2}(z)\left(\frac{p_{2}(z)}{q_{2}(z)}\right)^ {\prime} =\operatorname{Shi}(z)\coth(z)-z\operatorname{Shi}(z)\mathrm{ cosech}^{2}(z)-\cosh(z)\] \[=\cosh(z)\left[\frac{\operatorname{Shi}(z)}{\sinh(z)}-1\right]- z\operatorname{Shi}(z)\mathrm{cosech}^{2}(z)\] \[<0\]
as a result of (16). Thus, \(\frac{p_{1}^{\prime}(z)}{q_{1}^{\prime}(z)}\) is decreasing. Hence by Lemma 2.7, the function \(\frac{p_{1}(z)}{q_{1}(z)}\) is also decreasing. This completes the proof.
**Remark 2.12**.: The increasing property of the function \(\frac{\sinh(z)}{\operatorname{Shi}(z)}\) is equivalent to
\[z\operatorname{Shi}(z)\cosh(z)-\sinh^{2}(z)>0. \tag{17}\]
Also, the decreasing property of the function \(\frac{\sinh(z)}{\operatorname{Shi}^{2}(z)}\) is equivalent to
\[z\operatorname{Shi}(z)\cosh(z)-2\sinh^{2}(z)<0. \tag{18}\]
Combining (17) and (18) yields
\[\sinh^{2}(z)<z\mathrm{Shi}(z)\cosh(z)<2\sinh^{2}(z) \tag{19}\]
which is also equivalent to
\[\frac{\tanh(z)}{z}<\frac{\mathrm{Shi}(z)}{\sinh(z)}<2\frac{\tanh(z)}{z}. \tag{20}\]
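The two-sided bound (20) can be probed numerically; the following sketch is illustrative only and reuses the shi_series helper from the sketch in Section 2.

```python
# Numerical probe of (20): tanh(z)/z < Shi(z)/sinh(z) < 2*tanh(z)/z for z > 0.
import math

for z in (0.1, 0.5, 1.0, 3.0, 6.0):
    lower = math.tanh(z) / z
    middle = shi_series(z) / math.sinh(z)
    upper = 2.0 * math.tanh(z) / z
    print(f"z={z}: {lower:.6f} < {middle:.6f} < {upper:.6f}:", lower < middle < upper)
```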
**Theorem 2.13**.: _For \(z>0\), the inequality_
\[\frac{z}{2}+\frac{\cosh(z)-1}{z}<\mathrm{Shi}(z)<2\left(\frac{\cosh(z)-1}{z}\right) \tag{21}\]
_holds._
Proof.: Recall that \(t<\mathrm{Shi}(t)<\sinh(t)\) for \(t>0\). Then, integrating over the interval \((0,z)\), we have
\[\int_{0}^{z}tdt<\int_{0}^{z}\mathrm{Shi}(t)dt<\int_{0}^{z}\sinh(t)dt\]
which gives
\[\frac{z^{2}}{2}<z\mathrm{Shi}(z)-\cosh(z)+1<\cosh(z)-1\]
and this simplifies to (21).
**Theorem 2.14**.: _For \(z>0\), the inequality_
\[\frac{2\mathrm{Shi}(z)\mathrm{Shi}(1/z)}{\mathrm{Shi}(z)+\mathrm{Shi}(1/z)}\leq \int_{0}^{1}\frac{\sinh(t)}{t}dt\approx 1.05725 \tag{22}\]
_holds. Equality is attained if \(z=1\)._
Proof.: The case for \(z=1\) is easily seen. On that note, let \(\Psi(z)=\frac{2\mathrm{Shi}(z)\mathrm{Shi}(1/z)}{\mathrm{Shi}(z)+\mathrm{Shi}( 1/z)}\) and \(h(z)=\ln\Psi(z)\) for \(z\in(0,1)\cup(1,\infty)\). Then
\[h^{\prime}(z)=\frac{\mathrm{Shi}^{\prime}(z)}{\mathrm{Shi}(z)}-\frac{1}{z^{2} }\frac{\mathrm{Shi}^{\prime}(1/z)}{\mathrm{Shi}(1/z)}-\frac{\mathrm{Shi}^{ \prime}(z)-\frac{1}{z^{2}}\mathrm{Shi}^{\prime}(1/z)}{\mathrm{Shi}(z)+ \mathrm{Shi}(1/z)}\]
which implies that
\[z\left[\mathrm{Shi}(z)+\mathrm{Shi}(1/z)\right]h^{\prime}(z)=z\frac{\mathrm{ Shi}^{\prime}(z)}{\mathrm{Shi}(z)}\mathrm{Shi}(1/z)-\frac{1}{z}\frac{\mathrm{Shi}^{ \prime}(1/z)}{\mathrm{Shi}(1/z)}\mathrm{Shi}(z).\]
This further gives rise to
\[z\left[\frac{1}{\mathrm{Shi}(z)}+\frac{1}{\mathrm{Shi}(1/z)} \right]h^{\prime}(z) =z\frac{\mathrm{Shi}^{\prime}(z)}{\mathrm{Shi}^{2}(z)}-\frac{1}{z }\frac{\mathrm{Shi}^{\prime}(1/z)}{\mathrm{Shi}^{2}(1/z)}\] \[=\frac{\sinh(z)}{\mathrm{Shi}^{2}(z)}-\frac{\sinh(1/z)}{\mathrm{ Shi}^{2}(1/z)}\] \[=D(z).\]
Owing to Lemma 2.11, we have \(D(z)>0\) if \(z\in(0,1)\) and \(D(z)<0\) if \(z\in(1,\infty)\). Thus, \(h(z)\) is increasing on \((0,1)\) and decreasing on \((1,\infty)\). Accordingly, \(\Psi(z)\) is
increasing on \((0,1)\) and decreasing on \((1,\infty)\). Therefore, on both intervals, we have
\[\Psi(z)<\lim_{z\to 1}\Psi(z)=\operatorname{Shi}(1)=\int_{0}^{1}\frac{\sinh(t)}{t}dt \approx 1.05725\]
completing the proof.
**Remark 2.15**.: Theorem 2.14 can be interpreted to mean that, for \(z>0\), the harmonic mean of \(\operatorname{Shi}(z)\) and \(\operatorname{Shi}(1/z)\) can never be greater than the quantity \(\operatorname{Shi}(1)\). Inequality (22) can also be rearranged as
\[\frac{1}{2}\left[\frac{1}{\operatorname{Shi}(z)}+\frac{1}{ \operatorname{Shi}(1/z)}\right]\geq\left(\int_{0}^{1}\frac{\sinh(t)}{t}dt \right)^{-1}. \tag{23}\]
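An illustrative numerical check of the three mean-type inequalities (14), (15) and (22) against \(\operatorname{Shi}(1)\approx 1.05725\), again reusing the shi_series helper from Section 2, may look as follows.

```python
# Illustrative check of (14), (15) and (22); sample points are arbitrary.
shi1 = shi_series(1.0)
for z in (0.2, 0.5, 2.0, 7.0):
    a, b = shi_series(z), shi_series(1.0 / z)
    print(f"z={z}:",
          a + b >= 2 * shi1,          # (14): sum bounded below by 2*Shi(1)
          a * b >= shi1**2,           # (15): product bounded below by Shi(1)^2
          2*a*b / (a + b) <= shi1)    # (22): harmonic mean bounded above by Shi(1)
```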
**Lemma 2.16** ([11]).: _Let the function \(\alpha:I\subseteq(0,\infty)\to(0,\infty)\) be differentiable. Then \(\alpha(z)\) is geometrically convex (concave) if and only if \(\frac{z\alpha^{\prime}(z)}{\alpha(z)}\) is increasing (decreasing), respectively._
**Theorem 2.17**.: _The function \(\operatorname{Shi}(z)\) is geometrically convex on \((0,\infty)\). That is, the inequality_
\[\operatorname{Shi}(u^{k}v^{1-k})\leq(\operatorname{Shi}(u))^{k} \left(\operatorname{Shi}(v)\right)^{1-k} \tag{24}\]
_holds for \(u>0\), \(v>0\) and \(k\in[0,1]\)._
Proof.: Applying Lemma 2.8, we have
\[\frac{d}{dz}\left(\frac{z\operatorname{Shi}^{\prime}(z)}{\operatorname{Shi}(z )}\right)=\frac{d}{dz}\left(\frac{\sinh(z)}{\operatorname{Shi}(z)}\right)>0\]
and by Lemma 2.16, we conclude that \(\operatorname{Shi}(z)\) is geometrically convex. This is equivalent to (24).
**Remark 2.18**.: It is interesting to note that, by letting \(u=z\), \(v=1/z\) and \(k=\frac{1}{2}\) in (24), we recover the inequality (15).
## 3. Rational Bounds for the Hyperbolic Tangent Function
In this section, as applications of the hyperbolic sine integral, we obtain some rational bounds for the hyperbolic tangent function.
**Theorem 3.1**.: _For \(z>0\), the inequalities_
\[\frac{2z}{z^{2}+2}<\tanh(z)<\frac{z^{3}+6z}{3z^{2}+6} \tag{25}\]
_hold._
Proof.: By direct computations, we obtain
\[\operatorname{Shi}^{(3)}(z) =\int_{0}^{1}t^{2}\cosh(zt)dt\] \[=\frac{(z^{2}+2)\sinh(z)-2z\cosh(z)}{z^{3}}>0.\]
Upon rearrangement, we obtain
\[\tanh(z)>\frac{2z}{z^{2}+2}\]
which gives the left hand side of (25). Also,
\[\operatorname{Shi}^{(4)}(z) =\int_{0}^{1}t^{3}\sinh(zt)dt\] \[=\frac{(z^{3}+6z)\cosh(z)-(3z^{2}+6)\sinh(z)}{z^{4}}>0.\]
Hence
\[\tanh(z)<\frac{z^{3}+6z}{3z^{2}+6}\]
which gives the right hand side of (25). This completes the proof.
**Theorem 3.2**.: _For \(z>0\), the inequalities_
\[\frac{4z^{3}+24z}{z^{4}+12z^{2}+24}<\tanh(z)<\frac{z^{5}+20z^{3}+120z}{5z^{4}+ 60z^{2}+120} \tag{26}\]
_hold._
Proof.: By direct computations, we obtain
\[\operatorname{Shi}^{(5)}(z) =\int_{0}^{1}t^{4}\cosh(zt)dt\] \[=\frac{(z^{4}+12z^{2}+24)\sinh(z)-(4z^{3}+24z)\cosh(z)}{z^{5}}>0.\]
This implies that
\[\tanh(z)>\frac{4z^{3}+24z}{z^{4}+12z^{2}+24}\]
which gives the left hand side of (26). Also,
\[\operatorname{Shi}^{(6)}(z) =\int_{0}^{1}t^{5}\sinh(zt)dt\] \[=\frac{(z^{5}+20z^{3}+120z)\cosh(z)-(5z^{4}+60z^{2}+120)\sinh(z)}{ z^{6}}\] \[>0.\]
Hence
\[\tanh(z)<\frac{z^{5}+20z^{3}+120z}{5z^{4}+60z^{2}+120}\]
which gives the right hand side of (26). This completes the proof.
**Theorem 3.3**.: _For \(z>0\), the inequalities_
\[\frac{6z^{5}+120z^{3}+720z}{z^{6}+30z^{4}+360z^{2}+720}<\tanh(z)<\frac{z^{7}+42z^{5}+840z^{3}+5040z}{7z^{6}+210z^{4}+2520z^{2}+5040} \tag{27}\]
_hold._
Proof.: By direct computations, we obtain
\[\operatorname{Shi}^{(7)}(z) =\int_{0}^{1}t^{6}\cosh(zt)dt\] \[=\frac{(z^{6}+30z^{4}+360z^{2}+720)\sinh(z)-(6z^{5}+120z^{3}+720z) \cosh(z)}{z^{7}}\] \[>0.\]
This implies that
\[\tanh(z)>\frac{6z^{5}+120z^{3}+720z}{z^{6}+30z^{4}+360z^{2}+720}\]
which gives the left hand side of (27). Also,
\[\operatorname{Shi}^{(8)}(z)\] \[=\int_{0}^{1}t^{7}\sinh(zt)dt\] \[=\frac{(z^{7}+42z^{5}+840z^{3}+5040z)\cosh(z)-(7z^{6}+210z^{4}+2520z^{2}+5040)\sinh(z)}{z^{8}}\] \[>0.\]
Hence
\[\tanh(z)<\frac{z^{7}+42z^{5}+840z^{3}+5040z}{7z^{6}+210z^{4}+2520z^{2}+5040}\]
which gives the right hand side of (27). This completes the proof.
**Theorem 3.4**.: _For \(z>0\), the inequalities_
\[\frac{8z^{7}+336z^{5}+6720z^{3}+40320z}{z^{8}+56z^{6}+1680z^{4}+20160z^{2}+40320}<\tanh(z)\\ <\frac{z^{9}+72z^{7}+3024z^{5}+60480z^{3}+362880z}{9z^{8}+504z^{6}+15120z^{4}+181440z^{2}+362880} \tag{28}\]
_hold._
Proof.: By direct computations, we obtain
\[\operatorname{Shi}^{(9)}(z) =\int_{0}^{1}t^{8}\cosh(zt)dt\] \[=\frac{1}{z^{9}}\left[(z^{8}+56z^{6}+1680z^{4}+20160z^{2}+40320) \sinh(z)\right.\] \[\left.-(8z^{7}+336z^{5}+6720z^{3}+40320z)\cosh(z)\right]\] \[>0.\]
This implies that
\[\tanh(z)>\frac{8z^{7}+336z^{5}+6720z^{3}+40320z}{z^{8}+56z^{6}+1680z^{4}+20160 z^{2}+40320}\]
which gives the left hand side of (28). Also,
\[\begin{split}\operatorname{Shi}^{(10)}(z)&=\int_{0}^{1}t^{9}\sinh(zt)dt\\ &=\frac{1}{z^{10}}\left[(z^{9}+72z^{7}+3024z^{5}+60480z^{3}+362880z)\cosh(z)\right.\\ &\quad\left.-(9z^{8}+504z^{6}+15120z^{4}+181440z^{2}+362880)\sinh(z)\right]\\ &>0.\end{split}\]
Hence
\[\tanh(z)<\frac{z^{9}+72z^{7}+3024z^{5}+60480z^{3}+362880z}{9z^{8}+504z^{6}+15120z^{4}+181440z^{2}+362880}\]
which gives the right hand side of (28). This completes the proof.
**Remark 3.5**.: The bounds in (28) are better than those in (27). The bounds in (27) are also better than those in (26). And the bounds in (26) are also better than those in (25).
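Remark 3.5 can be illustrated numerically. The following sketch, which is only an illustration with an arbitrary sample point, evaluates the four pairs of bounds from (25)-(28) and shows that the enclosure of \(\tanh(z)\) tightens with increasing order.

```python
# Illustrative comparison of the rational bounds (25)-(28) for tanh(z).
import math

def bounds(z):
    """Return the (lower, upper) pairs of (25), (26), (27), (28)."""
    return [
        (2*z/(z**2 + 2), (z**3 + 6*z)/(3*z**2 + 6)),
        ((4*z**3 + 24*z)/(z**4 + 12*z**2 + 24),
         (z**5 + 20*z**3 + 120*z)/(5*z**4 + 60*z**2 + 120)),
        ((6*z**5 + 120*z**3 + 720*z)/(z**6 + 30*z**4 + 360*z**2 + 720),
         (z**7 + 42*z**5 + 840*z**3 + 5040*z)/(7*z**6 + 210*z**4 + 2520*z**2 + 5040)),
        ((8*z**7 + 336*z**5 + 6720*z**3 + 40320*z)/(z**8 + 56*z**6 + 1680*z**4 + 20160*z**2 + 40320),
         (z**9 + 72*z**7 + 3024*z**5 + 60480*z**3 + 362880*z)/(9*z**8 + 504*z**6 + 15120*z**4 + 181440*z**2 + 362880)),
    ]

z = 1.5
t = math.tanh(z)
for k, (lo, hi) in enumerate(bounds(z), start=1):
    print(f"order {k}: {lo:.8f} < tanh({z}) = {t:.8f} < {hi:.8f}")
```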
**Remark 3.6**.: Due to their monotonicity properties, for \(m\geq 2\), the derivatives of the hyperbolic sine integral, \(\operatorname{Shi}^{(m)}(z)\) give rational bounds for the hyperbolic tangent function. Particularly, odd derivatives give lower bounds and even derivatives give upper bounds. The corresponding bounds get better as \(m\) increases. It is also observed that, the lower bounds obtained this way, are of the form \(\frac{p^{\prime}(z)}{p(z)}\) and the upper bounds are of the form \(\frac{q(z)}{q^{\prime}(z)}\) for some polynomials \(p(z)\) and \(q(z)\).
As a byproduct of Theorem 3.1, we obtain the following result which provides bounds for the hyperbolic cosine function.
**Corollary 3.7**.: _For \(z>0\), the inequalities_
\[\frac{z^{2}+2}{2}<\cosh(z)<e^{\frac{z^{2}}{6}}\left(\frac{z^{2}+2}{2}\right)^ {\frac{2}{3}} \tag{29}\]
_hold._
Proof.: By integrating (25) over the interval \((0,z)\), we have
\[\int_{0}^{z}\frac{2t}{t^{2}+2}dt<\int_{0}^{z}\tanh(t)dt<\int_{0}^{z}\frac{t^{3 }+6t}{3t^{2}+6}dt\]
which gives
\[\ln(z^{2}+2)-\ln 2<\ln\cosh(z)<\frac{z^{2}}{6}+\frac{2}{3}\ln(z^{2}+2)-\frac{2 }{3}\ln 2.\]
That is
\[\ln\frac{z^{2}+2}{2}<\ln\cosh(z)<\ln\left\{e^{\frac{z^{2}}{6}}\left(\frac{z^{2 }+2}{2}\right)^{\frac{2}{3}}\right\}\]
and by taking exponents, we obtain (29).
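A quick numerical look at (29) (illustrative only, with arbitrary sample points) confirms that the bounds hold and are quite tight for small \(z\).

```python
# Illustrative evaluation of the cosh bounds in (29) at a few sample points.
import math

for z in (0.5, 1.0, 2.0, 4.0):
    lower = (z**2 + 2) / 2
    upper = math.exp(z**2 / 6) * ((z**2 + 2) / 2)**(2 / 3)
    print(f"z={z}: {lower:.5f} < cosh(z) = {math.cosh(z):.5f} < {upper:.5f}")
```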
|
2308.14821
|
Transverse dynamics of charmed hadrons in ultra-relativistic nuclear
collisions
|
Transverse momentum $p_{\rm T}$ spectra and anisotropic flow distributions
are studied for charmonia and charmed hadrons produced in Pb-Pb collisions and
measured with the ALICE detector at the CERN Large Hadron Collider (LHC). The
investigations are performed within the framework of the Statistical
Hadronization Model with the transverse dynamics evaluated using predictions
from relativistic viscous hydrodynamics as implemented in the computer codes
MUSIC and FluiduM. With this essentially parameter-free approach, mostly good
agreement is obtained for $p_{\rm T}$ spectra in the range $p_{\rm T}$ $< 10$
GeV/c. The calculations suggest a hardening of the ${\rm J}/\psi$ $p_{\rm T}$
distribution for more central collisions while the data show the opposite
trend. The observed wide distribution in $p_{\rm T}$ of anisotropic flow
coefficients v$_2$ and v$_3$ for charmonia is also well reproduced, while their
magnitude is generally somewhat over predicted. This finding may be connected
to a difference in spatial distribution between light and charmed hadrons due
to a different diffusion of light and heavy quarks in the hot fireball.
|
Anton Andronic, Peter Braun-Munzinger, Hjalmar Brunßen, Jana Crkovská, Johanna Stachel, Vytautas Vislavicius, Martin Völkl
|
2023-08-28T18:15:24Z
|
http://arxiv.org/abs/2308.14821v3
|
# Transverse dynamics of charmed hadrons in ultra-relativistic nuclear collisions
###### Abstract
Transverse momentum \(p_{\mathrm{T}}\) spectra and anisotropic flow distributions are studied for charmonia and charmed hadrons produced in Pb-Pb collisions and measured with the ALICE detector at the CERN Large Hadron Collider (LHC). The investigations are performed within the framework of the Statistical Hadronization Model with the transverse dynamics evaluated using predictions from relativistic viscous hydrodynamics as implemented in the computer codes MUSIC and Fluid\(u\)M. With this essentially parameter-free approach good agreement is obtained for \(p_{\mathrm{T}}\) spectra in the range \(p_{\mathrm{T}}<10\) GeV/c. The observed wide distribution in \(p_{\mathrm{T}}\) of anisotropic flow coefficients \(\mathrm{v}_{2}\) and \(\mathrm{v}_{3}\) for charmonia is also well reproduced, while their magnitude is generally somewhat over predicted. This finding may be connected to a difference in spatial distribution between light and charmed hadrons due to a different diffusion of light and heavy quarks in the hot fireball.
## 1 Introduction
The production and hadronization of charm quarks is of central interest in current quark-gluon plasma (QGP) research [1]. In particular, hadrons with one or more charm quarks such as charmonia or (multi-)charm hadrons play an increasing role in our quest to quantitatively understand the expansion and hadronization of the QGP fireballs formed in ultra-relativistic nuclear collisions with heavy beams. In addition, the production yields of such hadrons contain unique information about the degree of deconfinement of charm quarks reached in such collisions [2; 3; 4; 5].
The production probabilities of charmed hadrons in central Pb-Pb collisions are very well described within the framework of the statistical hadronization model (SHMc), see [3; 6] and the description in the following section 2. Within this framework alone there is no information about the transverse dynamics and, in particular, the transverse momentum spectra of the produced charm hadrons. Recently, we have shown [4; 7] that transverse momentum spectra can be well described if one couples hydrodynamical [8] or hydro-inspired models, such as those developed within the 'blast wave' approach [9; 10], to the SHMc. Using these models it is assumed that the particles under consideration, i.e. in our case the charm quarks, participate in the collective expansion and flow developed in the QGP fireball as a consequence of the pressure build-up and pressure gradients in the hot medium. This success implies that a much more direct route to transverse dynamics in
the SHMc is to implement the charmed hadrons, e.g. charmonia, directly into one of the now available relativistic hydrodynamics simulation frameworks. In particular, we focus on hydrodynamic modeling of open charm hadrons and charmonia in the framework of MUSIC [11] and of Fluid\(u\)M [12]. How this is done in detail is described in section 3 below. A key ingredient of the current approach is that the integrated yield for each charm hadron is taken directly from the SHMc calculation, and that, in order to describe the transverse spectral shape, the hydrodynamic evolution is stopped at the QCD phase boundary. Here our explicit assumption is that, for charmed hadrons, not only the yields but also the spectra freeze out at the phase boundary. For light valence flavor hadrons there is, on the other hand, indication that, while hadron yields are frozen, the expansion continues for a couple of fm/c in the now non-equilibrium hadronic phase and that transverse momentum spectra are determined at kinetic freeze out at a somewhat lower effective temperature. At LHC energies this implies that for charmed hadrons the evolution is stopped when the QGP temperature reaches T\({}_{pc}=156.5\) MeV. All charmed hadrons are then produced at the same thermal conditions as hadrons with light (u,d,s) valence quarks [3; 13], with the one major difference of the introduction of a charm quark fugacity \(g_{c}\)[4; 6]. The quantity \(g_{c}\) is introduced since charm quarks are dominantly produced in initial hard collisions, i.e. never reach chemical equilibrium but thermalize in terms of their momentum distributions subsequently in the hot QGP medium. Thus, \(g_{c}\) is not a free parameter but determined from the measured charm quark content of the fireball. Of course, as in all statistical hadronization models, the approach needs information about the full mass spectrum of charmed hadrons and their strong decays, see [4].
The main idea pursued in the present manuscript is to study the implications of the detailed hydrodynamic evolution on the transverse dynamics. In particular, we compare the predictions of the two hydrodynamic model codes cited above, which are very different in their technical implementation, with the measured transverse momentum spectra of charmonia and charm hadrons. In addition, we investigate how well the recently measured azimuthal anisotropies of charmonia [14; 15] and charm hadrons [16] are reproduced. This is a particularly interesting question as first attempts in this direction missed the experimental data in particular at higher values of \(p_{\rm T}>4\) GeV/c [17; 18]. In a recent publication [19] good agreement was obtained by employing Langevin simulations to more accurately account for spatial anisotropies and introducing space-momentum correlations of the charm quarks during the hydrodynamic expansion of the hot fireball. Below we show that both transverse momentum spectra and anisotropic flow distributions are in rather good agreement with SHMc predictions under the assumption that charm quarks fully participate in the hydrodynamic expansion. This is realized by directly inserting the charmonia when the hydrodynamic evolution is stopped at the pseudo critical temperature of the chiral phase transition such that charmonia inherit the full flow of the medium at this instance. The direct use of the hydrodynamic models rather than the simplification of a blast wave model (as in [4]) retains complexities of the medium evolution. For example, while in blast wave models freeze out is realized as a simple integration along a freeze out contour in a plane of proper time vs radius, in a hydrodynamic model individual fluid cells are followed in time and freeze out occurs when a given energy density or temperature is reached. This implies
complex geometries of the freeze out in space and time, with velocity vectors of a given fluid cell and freeze out surface normals pointing in all possible directions. Inspecting the radial profile of velocities of fluid cells, one finds indeed a strong correlation (as employed in the blast wave model) but, in particular for larger radii of 5-10 fm, a very wide distribution of velocities [4]. We also note that freeze out at early times actually gives a substantial contribution to particle spectra from fluid cells at the surface of the fireball and with large velocities. In addition, the use of anisotropic modeling (already in 2+1 dimensional hydrodynamics) allows for calculation of the flow coefficients.
In section 6 we show how a blast wave parametrization can be optimally matched to reproduce the full hydrodynamics calculation of the J/\(\psi\) spectrum. Finally in section 7 we discuss and estimate the consequence of the charm spatial distribution possibly being more compact than the energy distribution at freeze out.
## 2 Brief summary of the statistical hadronization model for heavy quarks, SHMc
Here we summarize the main ideas behind the SHMc. For much more detail on the original development see [2; 3; 4; 6]. Our main emphasis will be on the production of charmonia and in particular on the connection between SHMc for yields, and hydrodynamic models for the description of transverse dynamics. The key idea is based on the recognition that, contrary to what happens in the (u,d,s) sector, the heavy (mass \(\sim 1.2\) GeV) charm quarks are not thermally produced. Rather, production takes place in initial hard collisions. The produced charm quarks then thermalize in the hot fireball, but the total number of charm quarks is conserved during the evolution of the fireball [20] since charm quark annihilation is very small. In essence, this implies that charm quarks can be treated like impurities. Their thermal description then requires the introduction of a charm fugacity \(g_{c}\)[4; 6]. The value of \(g_{c}\) is not a free parameter but experimentally determined by measurement of the total charm cross section, relying on the precisely measured cross section for D\({}^{0}\) production in Pb-Pb collisions [21] and fragmentation as in the SHMc, see the section on quark matter in [5]. For 0-10% central Pb-Pb collisions at 5.02 TeV, the number of c\(\bar{c}\) pairs per unit of rapidity is 13.8\(\pm\)2.1 (16.3\(\pm\)2.4 with an enhanced charm-baryon spectrum, see below) resulting in \(g_{c}=31.5\pm 4.7\) and in a D\({}^{0}\) to J/\(\psi\) ratio of 53, in very good agreement with recent precision data [22]. Note that, with this approach, unlike in the case in which one starts with the charm production cross section in pp collisions, the uncertainties related to gluon shadowing in the Pb nucleus are avoided, leading to reduced uncertainties in \(g_{c}\) (currently \(\pm 15\%\)).
We note here that the charm balance equation should contain canonical corrections for more peripheral collisions or for lighter collision systems (or lower collision energies), i.e., whenever the number of charm pairs is not large compared to 1 [4; 23; 24]. For central Pb-Pb collisions at 5.02 TeV the canonical correction factors are in fact very close to 1 [4]. For more peripheral collisions or smaller systems the canonical corrections are important. We follow everywhere the canonical treatment described in detail in [4].
The large value of \(g_{c}\approx 30\) for charm production in central Pb-Pb collisions at mid rapidity implies very large enhancements for charmed hadrons compared to what is obtained in the purely thermal case. In the absence of canonical corrections the enhancement factor is about 900 for charmonia and doubly charmed, and \(2.6\cdot 10^{4}\) for triply charmed hadrons.
The statistical hadronization approach is applied to the core of the nuclear overlap region. As the density drop off in nuclei is gradual, we define a corona region. We assume this to start where the density of one of the two colliding nuclei has dropped to 10 % of the central density. The rationale is that in the overlap of this tail of the distribution with the core or tail of the density distribution of the other nucleus, the average number of collisions is less than 1. The two regions are derived from a Glauber model calculation as a function of centrality. As an example, for the 0-10 % centrality bin, the total overlap region contains 361.3 participants, of which we define 340.6 as the core. In this core there are 1645 binary collisions.
## 3 Extracting momentum information from hydrodynamic calculations
As described in the previous section, the core part of the fireball is assumed to hadronize statistically while preserving the fixed number of charm and anti charm quarks produced in hard initial collisions. This implies that the relative distribution of the charm quarks among hadrons is driven by the statistical weights at a given temperature. We use the temperature determined for chemical freeze out obtained when fitting the yields of hadrons with u,d,s valence quarks, T\({}_{chem}\) = 156.5 MeV [3], which happens to coincide with the pseudo critical temperature for the chiral phase transition T\({}_{pc}\). While the statistical hadronization model as such makes no prediction about the momentum distributions of the produced hadrons, the underlying assumption is nevertheless that the hadronizing charm quarks are in (or close to) local thermal equilibrium kinetically. Measurements of the various D-meson spectra and elliptic flow combined with model comparisons confirm this assumption [21]. Based on this statistical approach of charmed hadron formation a consistent prediction of the transverse dynamics is obtained by using the hydrodynamic modelling optimized for observables of light valence quark hadrons, albeit at T\({}_{pc}\). Local thermal equilibrium gives the momentum dependent yields in the rest frame of the fluid, i.e. in each fluid cell. Employing the boosts of all fluid cells and their geometric arrangement, transverse momentum spectra including azimuthal asymmetries can be obtained as described in this section.
In hydrodynamic simulations, the transition from the hydrodynamic phase to a hadron gas with little interaction is usually not modeled as part of the system evolution. Instead, the system evolves in time fully hydrodynamically. The end of the evolution is reached by defining a transition hypersurface, delineating the points in space and time where a certain critical temperature or energy density are reached. At this hypersurface particles are produced according to the Cooper-Frye formalism [25]:
\[E\frac{\mathrm{d}N}{\mathrm{d}^{3}p}=g_{i}\int_{\Sigma}f(u^{\mu}p_{\mu})p^{\mu }\mathrm{d}^{3}\Sigma_{\mu}. \tag{1}\]
This formalism assumes a thermalized momentum distribution \(f\) at the freeze out hypersurface and then counts the particles crossing the boundary. We assume that the charmed hadrons are produced at the same rate in all unit volumes at the transition temperature. This means that information about the shape of the overall momentum distribution can be obtained by calculating the freeze out momentum distribution from the hydrodynamic simulations according to Eq. (1). The integral of the distribution is then scaled according to the SHMc as described above. Compared to a description using a blast wave approach this gives a much more realistic description of the freeze out process. Here, the freeze out has significant contributions from hypersurfaces that are volume like (with time like surface normal vectors \(\mathrm{d}^{3}\Sigma_{\mu}\)) and ones that are surface like (with space like normal vectors). In the blast wave approach only volume like contributions are considered. Also, the full complexity of the radius-time-velocity association is reflected in the spectrum, while in the blast wave approach a simplified tight connection between these variables is employed. Further and new information on the physics behind the Cooper-Frye formalism can be found in [26].
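For orientation, the following sketch mimics this logic in a strongly simplified form: a thermal spectrum is summed over a set of fluid cells with given transverse velocities and volume weights, using the standard Bessel-function form of a transversely boosted Boltzmann source and keeping only volume-like hypersurface contributions. It is meant purely as an illustration of the procedure, not as a stand-in for the MUSIC or Fluid\(u\)M machinery; the toy freeze out surface, the Boltzmann approximation and all names are our own simplifications.

```python
# Schematic Cooper-Frye-like sum over fluid cells for a thermal J/psi
# spectrum (Boltzmann approximation, volume-like hypersurface elements only).
# The toy freeze out surface below is an arbitrary illustration.
import numpy as np
from scipy.special import i0, k1

T = 0.1565          # freeze out temperature in GeV (T_pc)
m = 3.097           # J/psi mass in GeV

def cell_spectrum(pT, beta, weight):
    """Contribution of one fluid cell with transverse velocity beta to
    dN/(pT dpT), up to an overall normalization."""
    mT = np.sqrt(pT**2 + m**2)
    rho = np.arctanh(beta)
    return weight * mT * k1(mT * np.cosh(rho) / T) * i0(pT * np.sinh(rho) / T)

# toy freeze out surface: transverse velocities and volume weights of the cells
betas = np.linspace(0.05, 0.75, 15)
weights = betas                         # e.g. an area-like weight

pT = np.linspace(0.1, 10.0, 200)
shape = sum(cell_spectrum(pT, b, w) for b, w in zip(betas, weights))
shape /= np.trapz(shape, pT)            # only the shape; the yield is fixed by the SHMc
```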
For our purposes we use two hydrodynamic models: MUSIC [11] and Fluid\(u\)M [12]. MUSIC was used with the settings of [27] and using a 2+1D, boost-invariant setup for the measurements at mid rapidity. This includes an initial state based on the IP-Glasma model [28; 29]. In Fluid\(u\)M, the fluid dynamic fields are decomposed into an azimuthally and Bjorken boost symmetric background contribution and perturbations. This allows a numerically very efficient calculation of the hydrodynamic evolution. For the calculation of transverse momentum spectra, terms of linear order in perturbation do not contribute so that in this case the azimuthally symmetric background evolution is sufficient. Fluid\(u\)M was used with the parameters as in [30] and a normalization of the energy density in the initial state model \(\mathrm{T}_{\mathrm{R}}\)ENTo [31] that reproduced the integrated pion, kaon and proton yields measured in central Pb-Pb collisions. In a very recent paper, the diffusion of charm quarks and its effect on transverse momentum spectra of charm hadrons is discussed in the context of the Fluid\(u\)M model [32]. The settings of both models are summarized in Table 1. In both cases, a constant ratio of shear viscosity to entropy density \(\eta/s\) is assumed. The ratio of bulk viscosity to entropy density \(\zeta/s\) is assumed to peak somewhat above the pseudo critical temperature and the values and positions of the maximum are given in the table. The precise functions can be found in [27] and [30], respectively.
## 4 Calculation of \(\mathrm{J}/\psi\) spectrum with MUSIC and Fluid\(u\)M
Figure 1 shows the distribution of the freeze out hypersurfaces in time and radius. In Fluid\(u\)M, the calculation is started with a rotationally symmetric solution as the basic
\begin{table}
\begin{tabular}{|l l l l l l l|} \hline model & initial conditions & \(\tau_{0}\) & \(\eta/s\) & \(\left(\zeta/s\right)_{\mathrm{max}}\) & \(T_{\mathrm{peak}}\) & \(T_{\mathrm{fo}}\) \\ \hline Fluid\(u\)M & \(\mathrm{T}_{\mathrm{R}}\)ENTo & 0.18 fm/c & 0.16 & 0.06 & 175 MeV & 156 MeV \\ \hline MUSIC & IP-Glasma & 0.4 fm/c & 0.12 & 0.13 & 160 MeV & 156 MeV \\ \hline \end{tabular}
\end{table}
Table 1: MUSIC and Fluid\(u\)M parameters
output around which perturbations are calculated. This means that the resulting freeze out hypersurface is azimuthally isotropic, which gives a line in the \(\tau\)-\(R\) space.
The \(p_{\rm T}\) distribution of thermal J/\(\psi\) mesons resulting from this freeze out represents the contribution of the core in the SHMc model. To this, a pp-like contribution of the corona is added (see above). The third component to the J/\(\psi\) spectrum is obtained from feed down from weak decays of beauty hadrons. This dominantly contributes at high momenta. Beauty quarks interact with the medium, but are likely not fully thermalized [33]. Thus, the transverse momentum distribution of the beauty hadrons is expected to lie somewhere between the extremes of a pp-like distribution and a fully thermalized distribution with strong effects of collective flow as represented by hydrodynamic simulations. We chose the arithmetic mean of the two cases. For this, we used a pp-like \(p_{\rm T}\) distribution obtained from FONLL [34] and a thermalized distribution represented by blast wave parameters described below. We included B mesons by using PYTHIA8 [35] to generate their weak decays to J/\(\psi\). For the sum of B\({}^{+}\), B\({}^{-}\), B\({}^{0}\), and \(\bar{\rm B}^{0}\) we used a total yield after all strong decays of d\(N\)/d\(y\) = 0.86 and 0.116 for 0-10% and 30-50% centrality ranges as given by SHM calculations for the bottom sector at mid rapidity [33].
Figure 2 shows the resulting \(p_{\rm T}\) distributions for J/\(\psi\) mesons produced in central and semi-central collisions at mid rapidity together with the ALICE measurement at 5 TeV. The centrality selection for both models is based on final state particle multiplicities as used in the ALICE experiment. The vertical size of the boxes shown reflects the uncertainty in the charm cross section, i.e. of \(g_{c}\), the uncertainty of the corona contribution due to the experimental uncertainty of the spectrum measured for pp collisions, as well as an uncertainty from the modeling of the feed down from beauty hadrons. The overall
Figure 1: Left: \(R\) and \(\tau\) distribution of the freeze out hypersurfaces for MUSIC (histogram) and Fluid\(u\)M (black line). The MUSIC hypersurface elements are weighted by their volume (but not the energy streaming outwards through this element). Right: Contributions to the J/\(\psi\)\(p_{\rm T}\)-distribution in the SHMc model. The extraction of the blast wave parameters is discussed in section 6.
agreement is rather good considering that there is no free parameter adjusted to data. The general shape of the distributions is very similar. For the most central bin, the SHMc model gives a distribution that is somewhat harder than the experimental data. The difference is in the intermediate \(p_{\rm T}\) region dominated by the thermalized core contribution from the hydrodynamic simulations, with very similar results from MUSIC and Fluid\(u\)M. In the semi-central case, where the transverse flow effect is reduced, the agreement with the data is rather close.
For light hadrons [36], central as compared to semi-central events result in a larger average \(p_{\rm T}\) for each particle species. This is well reproduced by an increased mean transverse velocity \(\langle\beta_{\perp}\rangle\) obtained from hydrodynamic models and also from simple blast wave fits to the data. We obtain a corresponding shift in the core contribution using both hydrodynamic generators as seen in Fig. 2. In the experimental data, however, the maximum of the \(p_{\rm T}\) distribution is, if anything, slightly lower for the more central events. This may point to some additional effect beyond the assumption that the charm quarks are carried along the hydrodynamical flow of the medium.
In Figure 3 we show the corresponding model prediction for the \(\psi(2S)\) spectrum. The model uncertainties were obtained as for the J/\(\psi\) case above. In comparison we show experimental data from ALICE for Pb-Pb collisions in the 0-90 % centrality range [37]. While the overall spectral shape is in reasonable agreement, the data exceed the model predictions at low \(p_{\rm T}\) by 1-2 standard deviations. High precision data expected from ALICE Run 3 for central collisions are needed to establish whether there are any significant differences between model predictions and data.
Comparing to our first modelling [7] of charmonium spectra using a blast wave model,
Figure 2: Transverse momentum distributions of J/\(\psi\) production in Pb–Pb collisions at mid rapidity. The result from the SHMc model using the two hydrodynamical generators is compared to the measurement by the ALICE Collaboration [22]. The bottom plot show ratios of model results to the measurements. There the red data points are displayed at unity to indicate the experimental uncertainties.
we note that the current spectra based on full hydrodynamics are somewhat harder. The most probable \(p_{\rm T}\) increases from about 1.5 GeV/\(c\) in the blast wave calculations to 2.5 GeV/\(c\) with full hydrodynamic treatment, a trend which was in fact expected. The blast wave model parameters were chosen to match the MUSIC distribution of the freeze out hypersurface with a polynomial of the velocity \(\beta\) as a function of radius. While the MUSIC distribution could be well matched with an exponent of n = 0.85 and a value of \(\beta_{max}\) = 0.62, it was apparent that the distribution was very broad for fluid elements at radii between 5 and 10 fm, reaching from velocities of about 0.1 to 0.85, and that there was therefore some arbitrariness in how to define the best \(\beta_{max}\) (see section 6 below).
## 5 Modeling of \({\rm J}/\psi\) anisotropic flow coefficients
The flow coefficients quantify the azimuthal anisotropy of particle distributions, driven in hydrodynamic models by the azimuthal anisotropy and resulting pressure gradients in the initial state and reduced by dissipation during the expansion phase. In the MUSIC calculations we have full access to the IP-Glasma initial state. We use the definition of the minor axis of the initial state ellipse and triangularity
\[\psi_{n}=\frac{\mbox{atan2}(\langle r^{2}\sin(n\phi)\rangle,\langle r^{2}\cos(n\phi)\rangle)+\pi}{n}\, \tag{10}\]
as introduced in [38]. Here, the average was taken over the distribution of the energy density in the initial state rather than that of the participants used in the original approach. The flow coefficients \(v_{2}\) and \(v_{3}\) were then calculated as
\[v_{n}=\langle\cos(n(\phi-\psi_{n}))\rangle\, \tag{11}\]
Figure 3: Calculation of the \(\psi(2S)\)\(p_{\rm T}\) spectrum in central Pb–Pb collisions with the SHMc and MUSIC. In comparison are shown experimental data from ALICE [37]. The error band is obtained as for the \({\rm J}/\psi\) spectrum.
with the averaging done over the expectation values of the number of J/\(\psi\) in each \(\phi\) bin. The resulting flow corresponds to fully thermalized charm quarks flowing with the medium and producing J/\(\psi\) mesons during hadronization. This was taken as the flow contribution of the core part of the model. The contribution of the corona was assumed to be independent of the angle \(\phi\), consistent with initial production without further interaction. The feed down from beauty hadrons was modeled as having a \(v_{2}\) and \(v_{3}\) constant in \(p_{\rm T}\) over the range shown, with values of \(v_{2}=0.01\pm 0.01\) and \(v_{3}=0.0\pm 0.02\) for the 0-10% and \(v_{2}=0.05\pm 0.02\) and \(v_{3}=0.0\pm 0.02\) for 30-50% centrality bins. These values were estimated from the measurements in [39].
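A schematic version of this averaging, for a yield binned in the azimuthal angle \(\phi\) and a given event-plane angle \(\psi_{n}\), is sketched below; the binning and the toy modulation are illustrative placeholders, not output of the actual calculation.

```python
# Schematic extraction of v_2 and v_3 from a phi-binned yield, following
# Eq. (11); the toy modulation below only exercises the formula.
import numpy as np

def flow_coefficient(phi_centers, yields, psi_n, n):
    """v_n = <cos(n(phi - psi_n))>, weighted by the yield in each phi bin."""
    weights = yields / np.sum(yields)
    return np.sum(weights * np.cos(n * (phi_centers - psi_n)))

phi = np.linspace(0.0, 2 * np.pi, 32, endpoint=False) + np.pi / 32
yields = 1.0 + 0.2 * np.cos(2 * (phi - 0.3)) + 0.1 * np.cos(3 * (phi - 1.1))
print(flow_coefficient(phi, yields, 0.3, 2))   # ~0.10
print(flow_coefficient(phi, yields, 1.1, 3))   # ~0.05
```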
Figure 4 shows the resulting \(p_{\rm T}\)-dependent elliptic flow coefficients for J/\(\psi\) in central and semi-central events. The error band contains the uncertainty of the overall charm cross section, the experimental uncertainty of the pp spectrum and the uncertainty attributed to the beauty feed down contribution. The elliptic flow at large \(p_{\rm T}\) is suppressed by the increasing contribution from the corona. This leads to a peak of the elliptic flow around 5 GeV/\(c\). For very high transverse momenta, the \(v_{2}\) of the core contribution becomes negative. As this happens at transverse momenta where the core contribution is already very small, this has little effect on the total \(v_{2}\) but is nevertheless interesting. We find that the negative values arise from regions close to the edge of the medium moving outward very rapidly and freezing out at early times. The contribution is stronger from fluid cells close to the tip of the ellipse. It seems plausible that this could be an effect of the curvature of the surface of the ellipse. The interface with neighboring fluid cells is smaller at the tip, leading to less slowing down due to dissipation of a given fluid cell in that region and hence larger momenta in this direction resulting in a negative \(v_{2}\). Further investigations are necessary to demonstrate this explicitly.
The resulting \(v_{2}\) exhibit a substantial flow contribution of thermalized J/\(\psi\) out to about 8-10 GeV/\(c\). This suggests that the flow observed in data even at these large transverse momenta may not be due to the effect of the path length difference, as commonly assumed
Figure 4: The \(v_{2}\) coefficients of the SHMc model prediction combined with the 2+1D MUSIC calculations. For the error band see text. The results are compared with data from the ALICE Collaboration at central and forward rapidity for central (left) and semi-central (right) Pb–Pb collisions [15].
for the high \(p_{\rm T}\) region, but still is of hydrodynamic origin coming from the anisotropy of the fluid cells and their motion.
Comparison to ALICE data at central and forward rapidities shows good agreement for central collisions. For semi-central events the range in \(p_{\rm T}\) of sizable \(v_{2}\) values is reasonably well reproduced, while the model produces about twice the maximum flow amplitude.
The \(v_{3}\) coefficients exhibit a similar behaviour for both central and more peripheral events. Similar to the \(v_{2}\), the contribution from the core gives a substantial contribution up to \(8-9\) GeV/\(c\), which is in line with measurements. The triangular flow coefficients from our model peak at values around 0.08 for transverse momenta of 5-6 GeV/\(c\). These peak values are somewhat larger than the experimental observations. For comparison, the \(v_{2}\) flow coefficients measured by ALICE in the forward direction are larger by roughly a factor of two.
## 6 Matching hydrodynamics to blast wave parametrization and resulting open charm spectra
To calculate the momentum distributions of open charm hadrons in the SHMc, the feed down of a large number of charm hadron states needs to be taken into account. In the SHMc, 55 charmed mesons and 74 charmed (anti)baryons with their decay branching fractions are included. These states are not usually included in the hydrodynamics codes, in which we only included explicitly the charmonia considered in the previous section. The inclusion of as complete a mass spectrum as possible is very important: strong decays quadruple the D meson yields, and for the \(\Lambda_{\rm c}\) the increase is by a factor of 5 [4]. To incorporate this, the FastReso [40] framework provides the necessary capability, including the decay kinematics of the open charm hadrons. The input to the calculation is provided by a blast wave model [7; 10; 41].
This model uses a power law dependence of the velocity on the radius, \(\beta(r)=\beta_{\rm max}(r/r_{\rm max})^{n}\), with a maximum transverse medium velocity \(\beta_{\rm max}\) and an exponent \(n\) of the radial dependence as input parameters. While we already employed this ansatz in our previous
Figure 5: The \(v_{3}\) coefficients of the SHMc model combined with the 2+1D MUSIC calculations. The model results are compared with data from the ALICE Collaboration at forward rapidity for central (left) and semi-central (right) Pb–Pb collisions [15].
publications [4; 7], the matching of the blast wave parameters to the hydrodynamic calculation was done differently here. Since we have now at our disposal the J/\(\psi\) spectrum from full hydrodynamics as shown in the previous section, we can find an optimum matching of the parametrization to full hydrodynamics.
In general, the production of particles from a part of the freeze out hypersurface depends on the mass of the particles, as given in equation 1. The freeze out hypersurface is in fact quite different in the full hydrodynamic model when compared to schematic hypersurfaces used in the blast wave approach. In particular it also has surface like contributions with space like normal vectors. Thus, the blast wave parameters which best reproduce the hydrodynamic model depend on the particle mass. However, in the limit of large masses, this matching becomes easier again. Dividing eq. 1 by the particle energy, we get:
\[\frac{\mathrm{d}N}{\mathrm{d}^{3}p}=g_{i}\int_{\Sigma}f(u^{\mu}p_{\mu})\frac{1 }{\gamma}u^{\mu}\mathrm{d}^{3}\Sigma_{\mu}. \tag{10}\]
In the limit of large masses, the particle velocity is approximately equal to the medium velocity as thermal velocity fluctuations are suppressed. Therefore, we can use \(u\) and \(\gamma\) corresponding to the medium. Then, \(\frac{1}{\gamma}u^{\mu}\mathrm{d}^{3}\Sigma_{\mu}\) is independent of the particle species. If the freeze out hypersurface is taken at constant time (\(\Sigma_{\mu}=(\Sigma_{0},\vec{0})\)), \(\frac{1}{\gamma}u^{\mu}\mathrm{d}^{3}\Sigma_{\mu}\) is simply the volume of the fluid cell. This volume can be used as a weight to take into account how fast the medium flows through the hypersurface element and thereby how much it contributes to the spectrum of the (large mass) particle. The velocity of each hypersurface element is weighted by its volume to find the best match of a blast wave parametrization to the full hydrodynamic result for charmed hadrons.
Figure 6: Comparison of velocity distribution of the freeze out hypersurface elements obtained from MUSIC (green line) to a matched blast wave distribution with a smoothed edge (short dashed blue line) and a velocity matched blast wave distribution (red dashed line).
In contrast to the previous method of extraction, this ignores the spatial position of the individual fluid cells and only uses their velocity distribution. The effort here is thus to optimally reproduce the resulting charmed hadron (J/\(\psi\)) momentum distribution of the hydrodynamic code with the blast wave model rather than reproducing the \(R-\beta\) distribution. In general, there is no need to equate the radius parameter \(r\) in the blast wave model with the radius in the hydrodynamic simulation. For a radius-independent freeze out time, the velocity distribution of the blast wave model has the form of a power law \(\mathrm{d}N/\mathrm{d}\beta\sim\beta^{\alpha}\), with an abrupt drop to 0 at \(\beta_{\mathrm{max}}\). Figure 6 shows the volume weighted distribution of velocities of the freeze out hypersurface elements from MUSIC. Obviously there is no sharp drop but rather a smooth fall off at large \(\beta\). This is due to the generally broad distribution of hypersurface elements as shown in Fig. 1 and also event-by-event fluctuations. The matching to the MUSIC distribution is done in two steps. First, the distribution from MUSIC is fitted with a power law multiplied by an error function at large \(\beta\). As shown in Fig. 6, this distribution (blue short-dashed curve) represents a very close match to the MUSIC output. This fit yields the power law parameter of the blast wave parametrization, \(n=2/(\alpha+1)\). Next, using the power law exponent \(n\), the \(\beta_{\mathrm{max}}\) of the blast wave parametrization is fixed such that its \(\langle\beta\gamma\rangle\) is equal to the one from MUSIC. This is represented by the long dashed red line in Fig. 6. The resulting optimal blast wave parameters are \(n=0.69\) and \(\beta_{max}=0.76\). Using these parameters and the blast wave parametrization yields a momentum distribution for the J/\(\psi\) very similar to that of the freeze out from the hydrodynamic simulations themselves (see Fig. 1 right), verifying that the current procedure produces an optimum match. Compared to the blast wave parameters used in [4; 7], \(n=0.85\) and \(\beta_{max}=0.62\), the optimum \(n\) is about 20 % smaller and the optimum \(\beta_{max}\) about 20 % larger. While the effect of the increase in \(\beta_{max}\) is more significant for the spectrum, both changes go into the same direction of making the spectra harder (such as the distributions shown in Figs. 2, 3) as compared to what is shown in [7].
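A schematic version of this two-step matching, assuming the hydrodynamic output is available as a sampled, volume-weighted velocity distribution \(\mathrm{d}N/\mathrm{d}\beta\), could look as follows; all function and variable names, starting values and bracketing intervals are illustrative placeholders.

```python
# Schematic two-step matching of a blast wave profile to a hydro dN/dbeta.
import numpy as np
from scipy.optimize import brentq, curve_fit
from scipy.special import erf

def smoothed_power_law(beta, A, alpha, beta_edge, width):
    """Power law with a smooth (error-function) fall off at large beta."""
    return A * beta**alpha * 0.5 * (1.0 - erf((beta - beta_edge) / width))

def mean_beta_gamma_blast_wave(beta_max, n, samples=2000):
    """<beta*gamma> of the profile beta(r) = beta_max (r/r_max)^n, weighted
    with the transverse area element r dr (radius-independent freeze out time)."""
    r = np.linspace(0.0, 1.0, samples)
    beta = beta_max * r**n
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.sum(r * beta * gamma) / np.sum(r)

def match_blast_wave(beta_grid, dN_dbeta):
    # step 1: fit the exponent alpha and translate it into the blast wave power n
    p0 = (dN_dbeta.max(), 1.0, 0.7, 0.05)
    (A, alpha, beta_edge, width), _ = curve_fit(smoothed_power_law,
                                                beta_grid, dN_dbeta, p0=p0)
    n = 2.0 / (alpha + 1.0)
    # step 2: fix beta_max such that <beta*gamma> equals the hydro value
    gamma = 1.0 / np.sqrt(1.0 - beta_grid**2)
    target = np.sum(dN_dbeta * beta_grid * gamma) / np.sum(dN_dbeta)
    beta_max = brentq(lambda b: mean_beta_gamma_blast_wave(b, n) - target,
                      0.05, 0.95)
    return n, beta_max
```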
As a further cross check we evaluate the equivalent freeze out volume represented by the hypersurface elements in MUSIC. The weights \(\frac{1}{\gamma}u^{\mu}\mathrm{d}^{3}\Sigma_{\mu}\) give the volume relevant for particle production in the collision center-of-momentum frame. When integrating over the momenta to get the total yield, the Lorentz contraction of this volume also needs to be considered, giving another factor of \(\gamma\). Summing up the contributions \(u^{\mu}\mathrm{d}^{3}\Sigma_{\mu}\) thus gives an estimate for the effective volume for particle production of about 5000 fm\({}^{3}\) very close to the result from the SHM fits [4].
The blast wave parameters were used as the input for a calculation of the D\({}^{0}\) transverse momentum spectrum, considering the core contribution with feed down from strong decays as in [4] and corona contribution based on the measured spectrum from pp collisions. The resulting \(p_{\mathrm{T}}\) distributions are shown in Figure 7 in comparison with data from the ALICE Collaboration. The uncertainties are determined as above for the J/\(\psi\) spectra. Also shown in the right panel are the corresponding distributions normalized to pp collisions scaled with the number of binary collisions, i.e. \(R_{\mathrm{AA}}\). For \(R_{\mathrm{AA}}\) the uncertainties introduced by the measured spectrum in pp collisions used for the corona part cancel. It can be seen that the contribution from the thermalized bulk of the medium is dominant at momenta below
a few times the particle mass, about 5 GeV/\(c\). In this region the data are in very good agreement with the model. At higher momenta there is a deficit in the model. The additional contribution could come from c-quarks not fully thermalized in the medium, which should show up at high \(p_{\rm T}\).
The transverse momentum distributions of all other charmed hadrons can be computed in an analogue way. We show here in Figure 8 the \(p_{\rm T}\) distribution calculated for \(\Lambda_{\rm c}\) together with the corresponding \(R_{\rm AA}\) distribution. Displayed are results from the SHMc with an enhanced charmed baryon scenario as outlined in [4]. The motivation is the observation by the ALICE collaboration of a very large fragmentation fraction of charm into the \(\Lambda_{\rm c}\) baryon in pp collisions at the LHC [43]. One explanation, proposed by He and Rapp [44], was based
Figure 8: Left: Spectra of \(\Lambda_{\rm c}\) for central Pb–Pb collisions including feed down from strong decays of higher mass charmed baryons using the optimized blast wave parametrization and FastReso. For this calculation additional undiscovered charm baryon states were considered (see [4]). Also shown are ALICE data [42]. Right: The same spectra normalized to those from pp collisions scaled with the number of binary collisions.
Figure 7: Left: Prompt D meson spectra for central Pb–Pb collisions including feed down from strong decays of higher mass charmed hadrons using the optimized blast wave parametrization and FastReso. They are compared to ALICE data [21]. Right: The same spectra normalized to those from pp collisions scaled with the number of binary collisions.
on the observation that the charmed baryon spectrum from the relativistic quark model contains many more charmed baryon states than established experimentally. This is also the case for the charmed baryon spectrum from lattice QCD [45]. The hypothesis was made that these states exist but were not yet discovered experimentally. This could be due to a large width and therefore significant overlap. Including these theoretically predicted states into their statistical model, He and Rapp found that they could reproduce the enhanced fragmentation into \(\Lambda_{\rm c}\). In the same spirit, we have enhanced the contribution of charmed baryons by the equivalent amount in our SHMc calculations and find that the \(\Lambda_{\rm c}\) yield in Pb-Pb collisions is about doubled while the D meson sector is practically unchanged, for a correspondingly larger charm cross section (see [4] where both the regular and the enhanced charm baryon results are shown).
The resulting \(\Lambda_{\rm c}\) spectral distribution (Fig. 8) is compared to recent data from ALICE and a similar picture emerges as for charmonia and D mesons: The yield at low and moderate \(p_{\rm T}\) and therefore the bulk yield are well reproduced by the current calculations, in particular considering the uncertainties. At high \(p_{\rm T}\) there is again a deficit in the model results. We also show in Figure 9 the ratio of \(\Lambda_{\rm c}\) to D meson \(p_{\rm T}\) spectra. The experimental data from ALICE are very well reproduced. In particular the maximum in this baryon to meson ratio comes out naturally from the hydrodynamic description combined with statistical hadronization without invoking an additional coalescence mechanism.
## 7 Estimating systematic effects of the charm space-time distribution in the medium
In the SHMc approach to charm hadronization we assume that the charm quarks, at hadronization, are thermalized in terms of the momentum distributions, i.e. the momentum distribution is determined by a (common) freeze out temperature and the fluid expansion at this instance. This is supported by observables of D mesons compared to transport models where a range is determined for the spatial diffusion coefficient \(1.5<2\pi D_{s}<4.5\) at the
Figure 9: Ratio of \(\Lambda_{\rm c}\) to D\({}^{0}\) yields as a function of transverse momentum for central Pb–Pb collisions, as shown separately in Figs. 7 and 8, compared to ALICE data [21; 42].
pseudo critical temperature [21]. From this a kinetic equilibration time \(\tau_{kin}\) = 2.5 - 7.6 fm/\(c\) can be estimated. Recently, for the first time spatial diffusion coefficients were derived in lattice QCD calculations with light dynamical quarks [46]. In the region between 1 and 2 times the pseudo critical temperature the values were found to be significantly smaller than in previous quenched lQCD calculations, implying very fast hydrodynamization of charm quarks within 1-2 fm/\(c\). This rapid thermalization does however not imply that the spatial distribution of charm quarks at hadronization is the same as that of light quarks and gluons.
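For orientation, the quoted window follows from the standard kinetic relaxation estimate below; the relation itself and the inputs \(m_{c}\simeq 1.3\) GeV and \(T\simeq T_{pc}\simeq 155\) MeV are our assumptions for illustration, not values taken from [21], and they reproduce the 2.5 - 7.6 fm/\(c\) range only approximately.

\[
\tau_{\rm kin}\simeq\frac{m_{c}}{T}\,D_{s}=\frac{m_{c}}{2\pi T^{2}}\,\bigl(2\pi D_{s}T\bigr),\qquad
2\pi D_{s}T=1.5\;(4.5)\;\Rightarrow\;\tau_{\rm kin}\approx 2.5\;(7.6)\ {\rm fm}/c .
\]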
While the initial energy density is distributed roughly following the areal density of participants \(N_{part}\), the charm quark density follows the areal density of binary collisions \(N_{coll}\) which is more compact. Therefore, at early times the outermost region of the nuclear overlap region is less densely populated with charm quarks as compared to energy, or equivalently, light quarks and gluons.
We have no experimental observable at present that gives any information about the spatial distribution of charm quarks at hadronization. In fact, since the momentum equilibration takes some time (several collisions), it is plausible that the expansion of charm quarks lags somewhat behind that of the overall medium and even at late times the outermost regions of the freeze out hypersurface may not be equally populated with charm quarks. In order to give an indication how charmed hadron momentum distributions would be affected, we present in the following two very schematic estimates basically removing contributions from (i) early times or (ii) large radii at late times.
The distribution of hypersurface elements at freeze out is shown in Fig. 1 and it is visible that the outermost regions of the nuclear overlap freeze out within the first few fm/\(c\). These are regions that, immediately after the collision, are underpopulated in charm as compared to energy. And at these early times it is also questionable that charm quarks are equilibrated in terms of their momenta. To test the effect on D meson spectra, we remove contributions from the hypersurface elements corresponding to the earliest 5 % of the effective volume at freeze out. This happens to correspond to radii where the density of \(N_{coll}\) is less than half the density of \(N_{part}\). Since in statistical hadronization J/\(\psi\) mesons are formed from uncorrelated charm and anticharm quarks, we cut off the earliest 15 % for this study (considering the density of \(N_{coll}^{2}\) vs density of \(N_{part}\)). This corresponds to removing the hypersurface elements located within the first 4.5 fm/\(c\) (see Fig. 1) for D meson production and 6.7 fm/\(c\) for J/\(\psi\) production. We recalculate blast wave parameters matching the remaining distribution and obtain for D mesons \(n=0.647\) and \(\beta_{\rm max}=0.74\). In fact some of the elements with the largest velocity were removed by this cut. The resulting spectra for D mesons and J/\(\psi\) are shown in Fig. 10 as the short dashed lines in comparison to the full spectra. Note that for this discussion only the core component is shown. One can see a visible but small effect in terms of softening of the \(p_{\rm T}\) spectra.
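A compact sketch of this volume-fraction cut; the per-cell arrays `tau` (freeze-out time), `beta` and `dV` (effective-volume weight) and the subsequent refit step are illustrative assumptions, not the actual analysis code.

```python
import numpy as np

def remove_early_freezeout(tau, beta, dV, fraction=0.05):
    """Drop the cells making up the earliest `fraction` of the effective volume
    (0.05 for D mesons, 0.15 for J/psi as described above) and return the rest,
    on which the blast wave parameters can then be refitted."""
    order = np.argsort(tau)                       # earliest cells first
    cum = np.cumsum(dV[order]) / dV.sum()         # cumulative volume fraction
    keep = order[cum > fraction]
    return beta[keep], dV[keep]
```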
To test the scenario that even at late times the charm quark spatial distribution could be more compact than the overall fireball, we resort to the blast wave distribution of velocities matched to the MUSIC freeze out hypersurface and remove the outermost 1 fm. This simply implies \(\beta_{\mathrm{max}}=0.69\) instead of 0.76. The corresponding spectra are also shown in Fig. 10 as the long dashed lines. Now a considerable softening of the distributions is
observed. The most probable \(p_{\rm T}\) shifts downward for D mesons by 0.3 GeV/\(c\) and for J/\(\psi\) by nearly 1 GeV/\(c\).
While the two scenarios discussed are very schematic, it is obvious that they both go in the direction of obtaining an even better match to experimental data for J/\(\psi\), where we currently see a deviation, while much less affecting the D meson spectrum, where we already see a good match between model and data. It is also obvious that a slightly more compact final state has a bigger effect than removal of early freeze out.
## 8 Summary and Outlook
In this paper we have explored the possibilities to describe quantitatively and with essentially no free parameters the transverse momentum spectra and anisotropic flow distributions for charmonia and charmed hadrons produced in Pb-Pb collisions and measured with the ALICE detector at the CERN Large Hadron Collider (LHC). The basis for this work is the realization that the SHM originally developed to describe yields of hadrons composed of (u,d,s) quarks can be expanded into the charm sector with no free additional parameters by treating charm quarks as 'impurities' in the hot and dense fireball with the charm quark number determined by the measured total open charm cross section in Pb-Pb or Au-Au collisions. Furthermore, the transverse dynamics in the charm sector is more straightforward to implement than in the light quark sector as we have evidence that all hadrons with heavy quarks are formed at or very near the QCD phase boundary.
The specific focus of the present paper is indeed the transverse dynamics in the charm quark sector which is further developed by coupling the SHMc to the computer codes
Figure 10: Comparison of core components of D meson and J/\(\psi\) spectra with the default freeze out, to those calculated removing the early freeze-out (short dashed lines) and to those removing, in the blast wave parametrization, the outermost radii (long dashed lines). The spectra are normalized to the same integral for easier comparison. For the D\({}^{0}\) mesons, only the direct contribution is shown.
MUSIC and Fluid\(u\)M. For charmonia and hadrons with one charm quark a rather quantitative description is obtained in the low \(p_{\mathrm{T}}\) sector for spectra of D-mesons, \(\Lambda_{\mathrm{c}}\) baryons and charmonia. The observed wide distribution in \(p_{\mathrm{T}}\) of anisotropic flow coefficients \(\mathrm{v}_{2}\) and \(\mathrm{v}_{3}\) for charmonia is also well reproduced, but their magnitude is generally somewhat over predicted.
This finding may be connected to a difference in spatial distribution between light and charmed hadrons due to a different diffusion of light and heavy quarks in the hot fireball. To make progress in this area two scenarios are explored by modifying the distribution of charm quarks at late and early times. The first results are promising but further development is needed to make these studies quantitative.
On the theory side, to make progress requires more thorough modelling for instance in the direction employed in [32]. On the experimental side, a measurement of Hanbury Brown-Twiss correlations between D mesons would give experimental evidence about the charm quark spatial distribution.
Finally, a completely open field awaits us in the area of hadrons with 2 or 3 charm quarks. These will be experimentally explored in LHC Run 3 and Run 4 with the upgraded ALICE apparatus and ultimately with the proposed new ALICE 3 detector [47]. It will be very exciting to test SHMc predictions in the multi-charm sector and to explore hadronization and deconfinement scenarios there.
We would like to thank A. Kirchner, C. Shen and B. Schenke for their support concerning Fluid\(u\)M and MUSIC. We acknowledge inspiring discussions with S. Floerchinger, B. Friman, A. Mazeliauskas and K. Redlich. This work was performed within and supported by the DFG Collaborative Research Centre "SFB 1225 (ISOQUANT)". V. V. gratefully acknowledges funding from the Knut and Alice Wallenberg Foundation (the CLASH project).
|
2310.01900
|
A Collaborative System of Systems Simulation of Urban Air Mobility
|
The implementation of Urban Air Mobility represents a complex challenge in
aviation due to the high degree of innovation required across various domains
to realize it. From the use of advanced aircraft powered by novel technologies,
the management of the air space to enable high density operations, to the
operation of vertidromes serving as a start and end point of the flights, Urban
Air Mobility paradigm necessitates significant innovation in many aspects of
civil aviation as we know it today. In order to understand and assess the many
facets of this new paradigm, a Collaborative Agent-Based Simulation is
developed to holistically evaluate the System of Systems through the modeling
of the stakeholders and their interactions as per the envisioned Concept of
Operations. To this end, models of vertidrome air-side operations,
unmanned/manned air space management, demand estimation and passenger mode
choice, vehicle operator cost and revenues, vehicle design, and fleet
management are brought together into a System of Systems Simulation of Urban
Air Mobility. Through collaboration, higher fidelity models of each domain can
be integrated into a single environment achieving fidelity levels not easily
achievable otherwise. Furthermore, the integration enables the capture of
cross-domain effects and allows domain-specific studies to be evaluated at a
holistic level. This work demonstrates the Collaborative Simulation and the
process of building it through the integration of several geographically
distributed tools into an Agent-Based Simulation without the need for sharing
code.
|
Nabih Naeem, Patrick Ratei, Prajwal Shiva Prakasha, Lukas Asmer, Roman Jaksche, Henry Pak, Karolin Schweiger, Asija Velieva, Fares Naser, Majed Swaid, Jan Pertz, Malte Niklass
|
2023-10-03T09:16:49Z
|
http://arxiv.org/abs/2310.01900v1
|
# A Collaborative System of Systems Simulation of Urban Air Mobility
###### Abstract
The implementation of Urban Air Mobility represents a complex challenge in aviation due to the high degree of innovation required across various domains to realize it. From the use of advanced aircraft powered by novel technologies, the management of the air space to enable high density operations, to the operation of vertidromes serving as a start and end point of the flights, Urban Air Mobility paradigm necessitates significant innovation in many aspects of civil aviation as we know it today. In order to understand and assess the many facets of this new paradigm, a Collaborative Agent-Based Simulation is developed to holistically evaluate the System of Systems through the modeling of the stakeholders and their interactions as per the envisioned Concept of Operations. To this end, models of vertidrome air-side operations, unmanned/manned air space management, demand estimation and passenger mode choice, vehicle operator cost and revenues, vehicle design, and fleet management are brought together into a System of Systems Simulation of Urban Air Mobility. Through collaboration, higher fidelity models of each domain can be integrated into a single environment achieving fidelity levels not easily achievable otherwise. Furthermore, the integration enables the capture of cross-domain effects and allows domain-specific studies to be evaluated at a holistic level. This work demonstrates the Collaborative Simulation and the process of building it through the integration of several geographically distributed tools into an Agent-Based Simulation without the need for sharing code.
Urban Air Mobility, eVTOL, Agent-Based Simulation, Collaborative Simulation, System of Systems
## Nomenclature
* UAM: Urban Air Mobility
* eVTOL: electric Vertical Take-Off and Landing aircraft
* ABS: Agent-Based Simulation
* RCE: Remote Component Environment
* SoS: System of Systems
* UTM: Unmanned-Air Traffic Manager
* FATO: Final Approach and Take-Off area
* MaaS: Mobility as a Service provider
* GUI: Graphical User Interface
* CPACS: Common Parametric Aircraft Configuration Schema
## 1 Introduction
Urban Air Mobility (UAM) envisions air travel within the urban air space utilizing small aircraft in novel concepts of operation enabled through leveraging advanced technologies and automation. Multiple use cases are classified within the umbrella of Urban Air Mobility such as Intra-City, Mega-City, Airport-Shuttle, Sub-Urban and Inter-City, each with different requirements with respect to technology as well as infrastructure [1]. As UAM presents a significant difference compared to aviation as we know it today, significant research and advancements in fields such as urban air space management, vertiport design and operations, fleet operations, integrated transport, and vehicle design, among the many others (as in FIG 1) are required [2]. In this regard, simulations have been utilized in the literature to carry out investigations with a primary focus on a singular domain. Simulations have been performed in literature to investigate the demand for UAM [3, 4], the operating costs [5], concepts for airspace management [6, 7], vertiport operations [8, 9, 10], as well as vehicle and fleet design [11, 12, 10, 13]. However, a focus on one domain typically comes at the cost of the modelling fidelity of the other domains, which pose a challenge in a UAM network that depends on tight integration between domains. Therefore, not only the advancements in the fields themselves but also the integration between these domains are necessary to enable the envisioned concepts of operations.
As the UAM network is currently still shrouded by significant uncertainty due to its early developmental stage, an integrated modelling of UAM system of systems can serve as a powerful tool in understanding and reducing the uncertainty. From the perspective of a single domain, it may be difficult to evaluate the effect a change in a domain specific parameter may have on the entire UAM network. Consider the number of FATOs at a given vertiport, an analysis solely from the vertiport perspective may be more forgiving to delays due to the limitations of the FATOs during peak hours, but considerations from a holistic perspective may find that the delays at the vertiport in peak hours would reduce the time savings possible for a passenger which may result in passengers rather choosing other modes of transport thereby reducing the revenue realized by the vehicle operator as well as the vertiport operator. Furthermore, delays for landing slots would require the vehicle to loiter in the airspace further congesting the airspace potentially leading to scheduled missions to be delayed if the airspace is saturated, which may in turn impact the revenue of the operators. As demonstrated by this brief example, changes in a single domain may have snowball effects due to the high level of interactions required across these domains, highlighting the need for holistic modelling even when conducting the domain-specific studies.
In order to achieve this holistic modelling, the authors of this work chose to collaborate together to bring together the higher fidelity models of the different domains into a single modelling environment upon which further studies can be performed. A first collaborative approach with various disciplines has been presented by [14], taking a sequential approach to the integration of the disciplines. In this study, a parallel approach is taken to the integration of the disciplines into an Agent-Based Simulation. By employing a collaborative modeling approach, domain specific studies can be carried out by the expert teams in a holistic manner without compromising the fidelity of the other domains. Agent-Based Modelling was chosen for this integration due to its effectiveness in modelling each stakeholder or entity involved in UAM and the interactions between them as they may occur in a real-life scenario. Through Agent-Based Modelling, the knowledge available to each entity can be modelled through only passing the data that is required/ what would be available to the entity as defined in a concept of operations. This property also makes it suitable to model the On-Demand operations as the foreknowledge of the prospective passenger demand cannot be assumed, which may be more challenging to accomplish through analytical models. Critically, Agent-Based Modelling can be used to model the behavior and decision logic of each stakeholder as independent agents, as they would act in a given scenario. Moreover, Agent-Based Modelling also enables a visual representation of the UAM network which would allow for a better understanding of the operations of the network in addition to the data generated by the simulation. For these reasons, an Agent-Based Simulation was chosen as the backbone upon which all domain tools are integrated. The Hanseatic City of Hamburg was selected as a model city because it is one of the most congested cities in Germany. Furthermore, it is home to a large community of aviation researchers and industry, and it is pushing innovative aviation concepts.
## 2 Methodology
Urban Air Mobility can be decomposed into its stakeholders to gain a better understanding of its constituent systems and their interactions. The different stakeholders involved in UAM are considered to be the Customer/Passenger, Mobility as a Service provider, Vehicle Operator (and Manufacturer), Vertidrome Operator, Unmanned-Air Traffic Management Operator (UTM) and the People & Regulators (as in FIG 2). Their roles are considered to be as follows:
1. Vehicle Operator
   * Operates a fleet of eVTOL vehicles
   * Seeks to maximize profit through transporting the greatest number of passengers at minimal cost
2. UTM & ATM
   * Controls unmanned and manned airspace
   * Seeks to ensure highest level of safety while maximizing operational density of the airspace
3. Vertidrome Operator
   * Operates either one or many Vertidrome(s)
   * Seeks to maximize profit through processing as many passengers as possible in the shortest time
4. Passenger
   * Customer of Urban Air Mobility Service
   * Wants to get from A to B cheaply and quickly
5. Mobility as a Service Provider (MaaS)
   * Operates a platform that connects Passengers with Mobility Services
   * Wants to maximize people using the platform through giving the best mobility services including UAM and connecting ground transport modes
6. People & Regulators
   * Inhabitants, leadership and regulators of the area within which UAM service is provided
   * Wants to ensure safe living environment and minimize disturbances such as noise and visual annoyances

Figure 1: Urban Air Mobility as a System of Systems, retrieved from [11]
### Concept of Operations - CONOPS
The Concept of Operations is developed taking inspiration from the user experience of the existing On-Demand ride-sharing services, and the operational requirements from the commercial aviation services and is presented in FIG 3. For UAM to be successful it is envisioned that the Customer interfaces with a singular application/website (MaaS provider), which seamlessly integrates all the aviation service providers and the ground transport options [16], to request travel options and book the desired itinerary from point A to point B with first and last mile options considered. One of the most significant benefits of UAM is the travel time saving; in order not to lose the time saved, it is critical that a seamless integration of UAM with the ground transport modes can be achieved. Once the customer makes the trip request to the MaaS, the MaaS compiles all travel options including ground transport modes, as well as UAM intermodal options together with the price of each option. It is considered that the MaaS communicates with the vehicle operator(s) to find a seat on a suitable vehicle, who in turn communicates with the UTM operator to find a suitable 4D trajectory and departure and arrival slots. The customer, when given the options, can select a binding transport option from those provided. Once the passenger selects the binding offer, the information is passed to the relevant entities and the itinerary is fixed. In this study, it is assumed that there are no deviations from the schedule and that everything happens exactly as planned.
### Components of Collaborative Agent-Based Simulation
The stakeholders of UAM are further broken down into domains considering the expertise involved. The passenger is decomposed into the Demand and Mode Choice tools, representing a subset of potential passengers of UAM and a decision model on whether UAM or an alternative transport mode is taken, respectively. The Vertiport Operator is decomposed into airside and groundside operations, where the airside is represented in this work by the Vertiport Airside Tool. The UTM is represented by the trajectory tool ensuring conflict free routes. The vehicle operator is decomposed into several different disciplines, from the vehicle itself through the Vehicle Design Tool, mission scheduling through the Vehicle Allocation Tool and the Mission Planning Tool, and the economic aspects through the Cost and Revenues Tool. The Agent-Based Simulation is extended to be able to communicate with the individual tools, and this extended version is referred to as the "Agent-Based Simulation Core".
#### 2.2.1 Agent-Based Simulation Core
The Agent-Based Simulation Core (ABS-C) is the _agent-based simulation environment_ developed in python which serves as the backbone for the integration of the domain-specific tools. The DLR in-house agent-based simulation [17, 15, 11] is extended in this work to be able to communicate with the other tools, prepare inputs, trigger workflow components, and parse their outputs. The aforementioned communication is achieved through the use of Remote Component Environment (RCE) which allows tool integration and execution as "black boxes", further details can be found in Section 2.3. This extended version of the ABS is referred to as ABS-C in this work. The ABS-C acts as the coordinator and integrator of each tool and triggers each module in the right order as defined by the CONOPS to retrieve the necessary outputs. For each flight request, the ABS-C communicates with the RCE workflow, and after getting its final output, models the customer decision and subsequent flights if any. The ABS-C acts as the active database, as it simulates each stakeholder and their activities throughout the day based on the outputs from the RCE workflow components. In developing the collaborative simulation, it was decided that the ABS-C would act as the single source of truth upon
Figure 2: Urban Air Mobility and its Stakeholders, retrieved from [15]
Figure 3: Booking Process of Trip
which all domain-specific tools act. This was done to avoid potential accounting errors which could be introduced when dealing with multiple, disconnected databases. At the initialization of the simulation, all air vehicles are distributed across the vertiports as per the user input, are fully charged, and are without any scheduled missions. Throughout the day passengers request and accept flights, flights are created and assigned, passengers are flown, vertiport slots and trajectories are occupied. Consequently, any incoming requests are processed given the exact current state of the UAM network at the time of the request. The GUI representation of the Urban Air Mobility Simulation is depicted in FIG 4, where the considered vertiports in Hamburg, active flights, and passengers at the vertiports can be observed.
#### 2.2.2 Vehicle Design Tool
The air vehicles which compose the UAM fleet in the Collaborative Simulation are small, vertical take-off and landing capable, fully-electric aircraft. A design tool for vertical take-off and landing aircraft is developed [20] to provide the designs and performance that can be integrated into the Agent-Based Simulation. The design tool is capable of designing vertical take-off and landing aircraft of different architectures, including tiltrotor (as in FIG 5), across a broad range of top-level aircraft requirements. Special attention is given here to ensure flexibility of the design tool so that a wide design space can be explored.
#### 2.2.3 Demand Tool
The Demand Tool defines a large dataset of trips in high spatial and temporal resolution (FIG 6 and FIG 7) for which UAM may be a viable option due to the distance between origin and destination. This dataset contains detailed information about each individual trip including location of origin and destination as well as the time the individual starts their journey. In order to generate travel requests, departure and arrival vertiports that are closest to origin and destination are identified, as well as travel times to them. This allows determining the earliest time the person would reach the origin vertiport, which is needed for mission planning.
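A minimal sketch of this origin/destination-to-vertiport mapping; the vertiport list, the ground access speed and the use of a simple haversine distance are illustrative assumptions, not the Demand Tool's actual implementation.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_vertiport(point, vertiports):
    """vertiports: list of (name, lat, lon); point: (lat, lon)."""
    return min(vertiports, key=lambda v: haversine_km(point[0], point[1], v[1], v[2]))

def earliest_vertiport_arrival(departure_time_s, origin, vertiports, access_speed_kmh=25.0):
    """Earliest time the person can reach the origin vertiport, given a ground access speed."""
    name, lat, lon = nearest_vertiport(origin, vertiports)
    access_time_s = haversine_km(origin[0], origin[1], lat, lon) / access_speed_kmh * 3600.0
    return name, departure_time_s + access_time_s
```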
#### 2.2.4 Mode Choice Tool
After the actual departure time, flight duration and UAM ticket price are provided by ABS-Core, the Mode Choice Tool first completes the travel chain for the route involving UAM by accounting for first and last mile travel modes. It then constructs a complete route with alternative transport means without the use of UAM. For both complete routes, the time and costs of each mode including the alternative transport modes are considered. Then, the Mode Choice Tool uses a multinomial logit model that uses the total travel times and travel costs of all alternatives as input and determines the probability for each transport mode for each passenger. Based on the calculated probabilities, a random assignment is made to assign the passenger to one of the available means of transport. In case the UAM is selected, the transport offer will be confirmed. In the current version of the simulation, only car is available as an alternative mode, but it is planned to consider other modes at a later stage.
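A minimal sketch of such a multinomial logit draw; the utility coefficients (time and cost sensitivities) are illustrative placeholders and not the calibrated values of the Mode Choice Tool.

```python
import math
import random

def mode_choice(options, beta_time=-0.05, beta_cost=-0.10):
    """options: dict mode -> (total_travel_time_min, total_cost_eur).
    Returns the randomly drawn mode and the logit probabilities."""
    utilities = {m: beta_time * t + beta_cost * c for m, (t, c) in options.items()}
    u_max = max(utilities.values())                       # for numerical stability
    expu = {m: math.exp(u - u_max) for m, u in utilities.items()}
    total = sum(expu.values())
    probs = {m: e / total for m, e in expu.items()}
    r, acc = random.random(), 0.0
    for m, p in probs.items():
        acc += p
        if r <= acc:
            return m, probs
    return m, probs  # fallback for floating-point edge cases

# e.g. mode_choice({"uam": (25, 60.0), "car": (55, 18.0)})
```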
Figure 4: Visualization of Urban Air Mobility Simulation (base map retrieved from [18, 19])
Figure 5: A 4 Pax Tiltrotor concept. Credits: DLR
Figure 6: Spatial distribution of transport demand and vertiports
Figure 7: Temporal distribution of transport demand
#### 2.2.5 Missions Planning Tool
The Missions Planning Tool (component of ABS-C) decomposes a travel request into multi-leg missions if the vehicles in the fleet are unable to perform the direct flight. In the case of a heterogeneous fleet, multiple potential mission decompositions are performed and a vehicle is searched for each leg. Moreover, for each leg of each mission, a priority is placed on grouping the passenger on existing scheduled missions; only where this is not a possibility is a new mission requested. Based on the different route options, a final route is chosen for the passenger and each leg in the itinerary is confirmed.
#### 2.2.6 Vehicle Allocation Tool
The Vehicle Allocation Tool finds the ideal vehicle from the fleet when a new mission needs to be scheduled considering the existing scheduled missions, the location of the vehicles, available and required energy of the vehicle, and best match between the estimated number of passengers for the mission, and the vehicles passenger capacity. Two separate approaches are available to the Collaborative Simulation in this regard, one through an external tool and the other directly integrated in the ABS-C.
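A sketch of a vehicle-allocation heuristic along these lines; the vehicle fields, feasibility checks and scoring weights are illustrative assumptions rather than either of the two implementations mentioned above.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vertiport: str
    free_from_s: float          # time the last scheduled mission ends
    energy_kwh: float
    seats: int

def allocate_vehicle(vehicles, origin, departure_s, pax, energy_needed_kwh,
                     w_wait=1.0, w_reposition=30.0, w_capacity=5.0):
    """Pick the feasible vehicle with the lowest penalty score."""
    best, best_score = None, float("inf")
    for v in vehicles:
        if v.energy_kwh < energy_needed_kwh or v.seats < pax:
            continue                                        # infeasible vehicle
        wait = max(0.0, v.free_from_s - departure_s) / 60.0  # minutes of delay
        reposition = 0.0 if v.vertiport == origin else 1.0   # deadhead flight needed
        capacity_gap = v.seats - pax                          # unused seats
        score = w_wait * wait + w_reposition * reposition + w_capacity * capacity_gap
        if score < best_score:
            best, best_score = v, score
    return best          # None if no feasible vehicle exists
```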
#### 2.2.7 Vertidrome Airside Tool
The Vertidrome Airside Tool allocates the next available and conflict-free take-off and landing time slot for each UAM request. This requires keeping track of all actual traffic being processed at each vertidrome inside the UAM network. Therefore, a detailed knowledge about the traffic flow and the current airside capacity is necessary. This information is calculated and provided by the V-Lab simulation which describes a discrete-event based simulation covering the airside air and ground operation of a vertidrome and which was specifically adjusted to fit the system-of-systems analysis. The full-size V-Lab simulation module is introduced in [21]. The SoS-tailored V-Lab simulation provides two different vertidrome layouts including two different concepts of operation, on the one hand targeting one-directional independent vehicle traffic flows, and bi-directional interdependent vehicle traffic flows on the other hand [22, 8]. Furthermore, it considers designated approach and departure paths for arrival and take-off procedures and performs the overall simulation of the airside traffic flow. A high-level workflow of the Vertiport Airside Tool is displayed in FIG 8.
#### 2.2.8 Trajectories Tool
The Trajectories Tool provides a set of trajectories for each mission, based on the origin and destination vertidrome as well as the available time slot at the origin. In order to calculate a conflict-free trajectory, it is necessary to keep track of all the active or planned missions in the airspace. Conflicts are resolved by changing the departure time of the UAM. The calculated trajectories are returned to the Vertidrome Airside Tool with the corresponding arrival time to check which trajectory matches the available time slot at the destination. The trajectory that satisfies the conditions is then suggested as a possible route. The workflow is summarized in FIG 9. Two route management approaches are defined within the tool: a trajectory-based approach and a slot-based approach. The two approaches are visualized for the city of Hamburg in FIG 10. The slot-based approach considers a fixed route network whereas the trajectory-based approach considers a free route network.
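A sketch of the departure-shift conflict resolution for a fixed-route (slot-based) network; the segment travel times, the separation buffer and the shift step are illustrative assumptions, not the tool's actual parameters.

```python
def first_conflict_free_departure(route_segments, requested_departure_s,
                                  planned_occupancy, separation_s=90.0,
                                  step_s=60.0, max_shifts=120):
    """route_segments: list of (segment_id, travel_time_s);
    planned_occupancy: dict segment_id -> list of (entry_s, exit_s) already booked.
    Shifts the departure in steps until no segment occupancy conflicts remain."""
    departure = requested_departure_s
    for _ in range(max_shifts):
        t, conflict = departure, False
        for seg, dt in route_segments:
            entry, exit_ = t, t + dt
            booked = planned_occupancy.get(seg, [])
            if any(entry < e1 + separation_s and e0 < exit_ + separation_s
                   for (e0, e1) in booked):
                conflict = True
                break
            t = exit_
        if not conflict:
            return departure, t       # departure and arrival time
        departure += step_s           # shift departure and try again
    return None                       # no slot found within the search window
```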
#### 2.2.9 Cost and Revenues Tool
The Cost and Revenues Tool sets the base fare and price per kilometer for UAM based on the revenue obtained through ticket sales, and the cost of operation of the fleet. The model considers the fleet size, share of deadhead (empty) flights, energy consumed by the entire network, load factor of the flights, and other parameters in its computations. The ticket price parameters are fine-tuned at the end of each simulation run based on the data from that simulation run. As the ticket price and the mode share achieved by UAM are directly related, the ticket price parameters have to be computed in an iterative loop between the Collaborative Simulation and the Cost and Revenue tool until the ticket price parameters converge.
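A sketch of the outer iteration between the simulation and the fare parameters, simplified to the per-kilometre component; `run_simulation` stands in for a full ABS-C day, and the cost figures and damping factor are placeholders, not the tool's actual model.

```python
def update_fare_per_km(fare_per_km, run_simulation, base_fare=5.0,
                       fixed_cost_per_day=50_000.0, cost_per_kwh=0.30,
                       damping=0.5, tol=0.01, max_iter=20):
    """Iterate until the fare implied by simulated costs and sold passenger-km converges.
    run_simulation(base_fare, fare_per_km) must return a dict with 'pax_km' and 'energy_kwh'."""
    for _ in range(max_iter):
        stats = run_simulation(base_fare, fare_per_km)       # one full simulated day
        cost = fixed_cost_per_day + cost_per_kwh * stats["energy_kwh"]
        target = cost / max(stats["pax_km"], 1.0)            # break-even EUR per pax-km
        new_fare = damping * fare_per_km + (1.0 - damping) * target
        if abs(new_fare - fare_per_km) < tol * fare_per_km:
            return new_fare
        fare_per_km = new_fare
    return fare_per_km
```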
### _Integration of Components_
A parallel integration of the tools and the Agent-Based Simulation is required to be able to closely model the On-Demand Operations as defined in the CONOPS. Essentially, the tools are to be triggered in a predefined order, using the data from the Agent-Based Simulation Core throughout its runtime i.e. the ABS-C runs in parallel with the intermittently executed tools. This is as opposed to executing each tool and the ABS-C in a sequential manner, one after the other's completion. The developed simulation is governed by the following logic depicted in FIG 11: the demand tool provides a set of trips within Hamburg, and the
Fig. 8: High-level workflow of the SoS-tailored V-Lab simulation (Vertidrome Airside Tool)
Fig. 9: High-level workflow of the trajectories tool
times at which they depart from their origin and would arrive at the origin Vertiport were they to take a UAM flight. The ABS Core takes this data as an input and creates each passenger request in the model a defined time period ahead of their potential arrival at the origin vertiport. By default, this time period is set to 30 minutes, i.e. the passenger requests a UAM flight 30 minutes prior to when they would arrive at the vertiport, giving them ample time to make their mode choice selection. This request is processed by the Mission Planning Tool of the ABS-C, which assesses whether the request can be allocated to an already scheduled mission. If this is the case, the Mode Choice Tool is directly triggered and a mode choice decision is obtained. If this is not the case, a new mission must be scheduled, requiring the allocation of the request to a vehicle in the fleet through the vehicle allocation tool which considers the scheduled missions, energy needed and available, and fleet positioning.
Once a vehicle is obtained, the vertiport and airspace tools are triggered to find a takeoff FATO slot at the origin vertiport, a trajectory to the destination vertiport, and an arrival FATO slot. Subsequently, the ABS-C compiles the information and, based on the ticket pricing structure, passes a total fare and estimated arrival time to the Mode Choice Tool. Based on the chosen mode, the passenger's seat on the flight is fixed. The Vehicle Design Tool provides the aircraft design and performance at the start of the simulation for all vehicle concepts used in the study, which are passed to the tools where needed through the ABS-C. The Cost and Revenues tool is triggered at the end of an entire simulation run, to evaluate and assess the best ticket prices given the history of performed flights.
In order to achieve this integration while still preserving the intellectual property and tool ownership, an integration using Remote Component Environment (RCE) [16] is realized. This approach allows the sharing of tools as "black boxes" through which data is processed, while achieving the desired capabilities without the need for code sharing. Furthermore, using RCE, this tight integration can be achieved in a distributed network where the tools can be located across multiple computers based in multiple locations. The interface between the tools and the ABS-C is set using a version of Common Parametric Aircraft Configuration Schema [24], adapted to describe the Urban Air Mobility network. However, as the tools need to be triggered during the runtime of the Agent-Based Simulation, a typical integration of the Agent-Based Simulation within RCE alongside the other modules would not suffice as a tool cannot stay active within an RCE workflow and communicate with other tools at the same time. The desired behavior is achieved through establishing a link between the ABS and a block within RCE which can be used to transfer the inputs and outputs between RCE and the ABS; this modified version of the ABS is referred to as the ABS-C in this work. Using this approach, the ABS-C can trigger
Figure 11: Flowchart of the collaborative simulation workflow logic
Figure 12: RCE Workflow for the integration of the tools into the ABS-C, composing the Collaborative Simulation
the workflows in RCE during runtime, passing the relevant inputs and processing the output of each module which in turn are integrated into the ABS-C. The RCE workflow connecting the ABS-C and the domain tools through the PyRCE block is shown in FIG 12. Each domain tool is placed in its own color group with a description, with multiple copies of the tool allowing for parallel processing as detailed in the subsequent section.
The integration of the tools into the ABS-C was achieved through establishing common interfaces. This was done by taking CPACS as the basis, and extending it to be able to describe the UAM system of systems at the fidelity level required. An excerpt of one such interface is depicted in FIG 13. The interfaces were defined with as much commonality as possible, although the flexibility allowed by having the ABS-C as the common interface with all tools meant it was not a requirement.
One major change was made to the overall integration of components to improve the runtime of the simulation: the grouped execution of flight requests. During the development process of the collaborative simulation, it was identified that the computational resources of the tool hosts were not under full utilization during the execution of the Collaborative Simulation with only one tool instance active.
Furthermore, a non-negligible portion of the overall tool execution time was devoted to the transferring of data between tool hosts and the start-up of tools. Most commonly, RCE workflows are exploited for computationally heavy tools with relatively low iterations needed. In this case, however, the computational effort demanded by each tool was minimal, but the number of iterations was exceedingly high, in the order of thousands, as each passenger request has to be processed through the RCE workflow. As a consequence, this meant that a significant amount of time was dedicated to the transmission of data from one tool to the other over the network. The change implemented by grouping the requests alleviated both of the aforementioned issues. The logic of the simulation was modified such that the flight requests can be grouped together within a user defined interval and processed at the same time. In order to ensure compatibility with all tools, serial and parallel processing of these grouped requests were implemented. In particular, the vehicle allocation and vertiport and trajectory tools required parallel triggering of the tools for each of the grouped requests due to the specific inputs required. Each flight request in the grouped requests queue is executed in parallel in one of the multiple tool instances in the workflow (see FIG 12). This ensures a higher utilization of the resources available to the tool host, and reduces the time needed for processing the requests by up to the number of tool instances available. For the mode choice tool, serial processing of the grouped requests was implemented, meaning several requests are input to the tool at a time. This reduces the time spent in transferring data and starting up the tools, by reducing the number of times these actions are performed. For the computational resources used in this project, 8 parallel instances of the vertiport and trajectory tool could be realized. However, it is noteworthy to mention that the exact performance
Figure 14: System of Systems Simulation Design Structure Matrix (S3DSM) for the Collaborative Simulation
Figure 13: An excerpt of the interface with ABS and Vertiport and Trajectory Tool
improvement achieved is also related to the temporal distribution of flight requests, and the grouping interval used.
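A sketch of the request grouping and parallel dispatch described above, with a thread pool standing in for the multiple RCE tool instances; `process_request` is a placeholder for one pass through the vertiport and trajectory tools, and the grouping interval is an illustrative choice.

```python
from concurrent.futures import ThreadPoolExecutor

def process_grouped_requests(requests, process_request,
                             grouping_interval_s=300.0, n_instances=8):
    """Collect requests falling within the grouping interval and dispatch each
    group to up to n_instances parallel tool instances."""
    requests = sorted(requests, key=lambda r: r["request_time_s"])
    results, group, group_start = [], [], None
    for req in requests:
        if group_start is None:
            group_start = req["request_time_s"]
        if req["request_time_s"] - group_start <= grouping_interval_s:
            group.append(req)
        else:
            with ThreadPoolExecutor(max_workers=n_instances) as pool:
                results.extend(pool.map(process_request, group))
            group, group_start = [req], req["request_time_s"]
    if group:
        with ThreadPoolExecutor(max_workers=n_instances) as pool:
            results.extend(pool.map(process_request, group))
    return results
```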
System of systems simulation problems are highly complex, due to the many stakeholders involved and the interactions between these stakeholders. Therefore, a clear dissemination method is needed. Towards this goal, the modified XDSM diagram, the System of Systems Simulation Design Structure Matrix (S3DSM) [15], is presented for the Collaborative Simulation in FIG 14, which clearly states the stakeholders considered, the processes and interactions between them, and the assumptions taken for each stakeholder.
## 3 Discussion and Results
A study is set up in Hamburg to demonstrate a proof-of-concept of the collaborative simulation. A fleet of 30 vehicles of Tiltrotor Architecture (see FIG 5) with a capacity of 3 passengers and a pilot are distributed across 20 Vertiport locations as defined in FIG 4, and a ticket price of 2 EUR/km is considered. The slot-based approach for airspace management is applied, as depicted in FIG 10. The demand dataset used is as visualized in FIG 6 and FIG 7.
In this work, results are generated for the Hamburg use case over a 4-hour period with several of the modules active within the agent-based simulation, specifically the Vehicle Design Tool, Vertidrome and Trajectory Tools, Mode Choice and the Demand Tools. The Cost and Revenues Tool was not executed in this study, due to the limited window of runtime used, and the Vehicle Allocation Tool was also omitted in this run, in favour of the vehicle allocation algorithm onboard the ABS-C for simplification purposes. In this study, the tools most active within the workflow and without an alternative were included to be able to assess and demonstrate the proof of concept of the collaborative simulation. Future publications will address comparative scenario analyses and sensitivity analyses of the domain-specific parameters.
In order to address the explanatory scope of the collaborative simulation, the results of the brief 4-hour study will be explored. The results are exemplary, and further in-depth investigations are required to make any conclusions with regards to its findings. In the 4-hour window of the study, a total of 1239 flight requests were processed, of which 43 took UAM indicating a 3.4% mode share. The use of the slot-based approach means that longer distances are travelled than the direct point to point distance as with the trajectory-based approach. As the ticket prices are computed based on the actual distance travelled, the slot-based approach indicates higher relative ticket prices according to the assumptions used in this study, which may consequently lead to lower mode share. The mode share of UAM throughout the study period is shown in FIG 15. The distribution of the passengers who chose UAM onto flights and the induced deadhead flights are depicted in FIG 16. It can be observed that the number of flights does not scale proportionally with the passenger, as multiple passengers can be grouped onto the same flight if the conditions are satisfied. The figure also denotes the deadhead flights, which are the non-revenue repositioning flights, required to fulfil the demand when an aircraft is not available at the origin vertiport. These results are of interest to the Vehicle Operator stakeholder. An insight relevant to the UTM stakeholder is given in FIG 17, on the number of aircraft in the airspace at any given time. Such a view can help in understanding whether the airspace is in full utilization, and at which value this is reached.
By evaluating several complete scenarios, deeper insights could be drawn into the UAM network.
## 4 Opportunities and Challenges
The collaborative simulation presents many opportunities for future research. The integration of the domain-specific modules into a SoS simulation enables a broader scope of evaluation. The domain specific studies can be performed and evaluated considering not only its impacts on that specific domain, but also on other domains and the overall SoS. Moreover, as any of the variables within the simulation can be varied, such as vertiport number and placement, separation distances enforced by air traffic management, time taken to clear the FATO area, the uncertainty associated with these domain specific variables can be assessed and better understood with respect to their impact on the overall SoS.
In addition to the evaluation of the impact of domain specific variables on the overall system, complete scenarios can be assessed as well. As an example, a low-maturity UAM scenario can be compared against a high-maturity UAM scenario in terms of the desired metrics such as mode share achieved, airspace occupancy etc.
On the topic of evaluation metrics, in SoS problems it is desirable to evaluate the SoS from the perspectives of each major stakeholder. This is done through definition of key performance indicators (KPIs) at each stakeholder level, which can then be measured based on the specific SoS. The individual stakeholder KPIs could then be merged into a singular KPI by using metrics to evaluate the importance of each stakeholder in the SoS. The Collaborative Simulation can support this as it models the stakeholders and their actions and can collect the relevant data at each stakeholder level to compute their KPIs, and subsequently a singular KPI if so desired.
Another research opportunity is the optimization of the SoS and the constituent systems and domains. The large SoS design space can be explored with the developed collaborative simulation, with the potential for optimization based on a set of criteria and constraints.
The collaborative simulation was developed to simulate nominal conditions, i.e. all constituent systems operate as planned and there are no deviations from the plan. While the nominal conditions are sufficient for addressing many research questions, the off-nominal conditions can open additional avenues of research. As Urban Air Mobility requires operation over inhabited areas at a fast pace in densely congested airspace, the investigation of off-nominal conditions is critical to ensure the design of a safe and robust SoS constellation. As the Collaborative Simulation is an Agent-Based Simulation, it is apt for the introduction of stochastic events to model off-nominal conditions and approaches to deal with them. In future work, this field of research can be explored.
When considering the opportunities posed by the collaborative simulation, it is pertinent to keep in mind the challenges associated with it as well. Due to the high degree of collaboration to achieve this, implementing certain changes, especially those affecting the interfaces and the logic behind them, can require more effort than a standalone development. In the same way, the execution of studies requires all participating tools to be reliably online for the duration of the run, which can exceed a full day. This can often not be the case, and can be a challenge to resolve, more so when the tools are geographically distributed as in the case of the Collaborative Simulation. Another challenge associated with this distributed approach is the commonly faced issue of debugging. In any software development project, a non-insignificant time contribution goes toward debugging and bug fixing. The time required to debug the collaborative simulation was significantly higher in part due to the nature of collaboration itself and also due to the no code-sharing approach used. In simple terms, bugs can be harder to find when there are more places they could hide in, especially if some of those places are out of sight and reach. At the same time, achieving collaboration without code-sharing is also one of the biggest advantages of this approach, as it ensures the competency is maintained where it was generated. Needless to say, this work is not the only work utilizing RCE workflows to perform studies. However, the unique aspect of this work is due to the integration of the workflow into the Agent-Based Simulation, which provides several benefits but is also the root of many of the challenges faced. As aforementioned, the agent-based simulation acts as the orchestrator, integrator and database for the workflow. This means that a failed workflow execution due to tool unavailability or memory issues cannot be continued from the failed point and has to be restarted from the same starting point to ensure consistent and meaningful results. In other words: the simulation is sensitive to its starting point. This, coupled with the long runtime and geographical distribution of the tools, can make for a considerable challenge. However, it is of utmost importance to note that many of these issues faced in this work are due to the novel approach taken, and several of the challenges have already been alleviated, and the rest could be entirely solved as the approach is matured.
Overall, the opportunities opened up by this work outweigh the challenges faced, of which many could be resolved. Further efforts can be made to completely resolve the remaining challenges and ensure the ease of exploitation. In this work, the authors were able to demonstrate the proof of concept of the Collaborative Simulation approach taken, and show some of its merits and challenges. The scope for future work is broad, and can take the perspective of any of the individual domains involved in the work, but also most interestingly, the holistic perspective of the UAM network. UAM SoS is shrouded with large degrees of uncertainties and unknowns, such a Collaborative Simulation can aid in the reduction of those uncertainties and unknowns, through integration and assessment of the large body of research in each of the constituting domains of the UAM SoS.
## Competing Interests
Henry Pak is also a guest editor for the special issue on the HorizonUAM project but has not been involved in the review of this manuscript.
|
2303.01838
|
Usability of Privacy Controls in Top Health Websites
|
With the increasing awareness and concerns around privacy, many service
providers offer their users various privacy controls. Through these controls,
users gain greater authority over the collection, utilisation, and
dissemination of their personal information by the services. However, these
controls may be buried deep within menus or settings, making them difficult for
a user to access. Additionally, the terminology used to describe privacy
controls can sometimes be confusing or technical, further complicating the
user's ability to understand and use them effectively. This is especially true
for health websites, as users often share sensitive information about their
health and well-being. While many privacy controls have been proposed to
protect user data on these sites, existing research focuses on individual
controls (e.g., privacy policies or cookie opt-outs) rather than providing a
comprehensive overview of the privacy landscape. In addition, many studies
concentrate on the technical aspects of privacy controls without considering
the usability of these features from a user's perspective. This paper aims to
fill the gaps in the existing work by analysing four privacy controls, namely
privacy nudge, privacy notice, privacy policy, and privacy setting, and
evaluating their usability on the top 100 most visited health websites. First,
we define usability attributes for each privacy control in three website visit
scenarios; the guest, registering, and log-in visits. These attributes include
awareness, efficiency, comprehension, functionality, and choice. Then, we
design a survey template based on these attributes and scenarios and collect
data about privacy controls. Next, we analyse the availability and usability of
each privacy control on health websites. Finally, we provide suggestions for
improving the design of these privacy controls based on the data analysis
results.
|
Ravin Gunawardena, Yuemeng Yin, Yi Huang, Rahat Masood, Suranga Seneviratne, Imran Razzak, Nguyen Tran, Aruna Seneviratne
|
2023-03-03T10:44:38Z
|
http://arxiv.org/abs/2303.01838v1
|
# Usability of Privacy Controls in Top Health Websites
###### Abstract
With the increasing awareness and concerns around privacy, many service providers are offering various privacy controls to their users. Through these controls, users gain greater authority over the collection, utilization, and dissemination of their personal information by the services. However, these controls may be buried deep within menus or settings, making them difficult for a user to access. Additionally, the terminology used to describe privacy controls can sometimes be confusing or technical, further complicating the user's ability to understand and use them effectively. This is especially true for health websites, as users often share sensitive information about their health and well-being. While many privacy controls have been proposed to protect user data on these sites, existing research tends to focus on individual controls (e.g., privacy policies or cookie opt-outs) rather than providing a comprehensive overview of the privacy landscape. In addition, many studies concentrate on the technical aspects of privacy controls without considering the usability of these features from a user's perspective. This paper aims to fill the gaps in existing literature by providing an analysis of four privacy controls, namely privacy nudge, privacy notice, privacy policy, and privacy setting, and evaluating their usability on the top 100 most visited health websites. We define usability attributes for each privacy control in three website visit scenarios; the guest, registering, and log-in visits. These attributes include awareness, efficiency, comprehension, functionality, and choice. Then, we design a survey template based on these usability attributes and scenarios and collect data about privacy controls. Next, we analyse the availability and usability of each privacy control on health websites. Finally, the paper provides suggestions for improving the design of these privacy controls based on the data analysis results. Our analysis and recommendations can help website designers and policymakers improve the usability and effectiveness of privacy controls to protect users' privacy rights.
Usable Privacy Health Websites Privacy Controls Privacy Nudges Privacy Notices Privacy Policies Privacy Settings
## 1 Introduction
User privacy has been one of the most focused areas in the healthcare sector. With the growth of online resources for health-based services, the number of people using these services has been increasing continuously. This raises concern for the security and privacy of the users utilizing web-based health services. Service developers have developed various security and privacy tools and techniques to address some of these concerns, including preserving anonymity online and controlling what data is shared with third parties. For instance, privacy-preserving algorithms such as Differential Privacy have been designed to protect user data and maintain user privacy. Similarly, various controls such as privacy nudges, cookie settings, and web browsers with extra privacy features such as Tor and Brave, have been developed to provide users with a higher degree of control over their online presence. In addition, regulations such as GDPR and CCPA require developers to follow responsible data collection practices. However, despite all these efforts, health websites act as continuous targets for hackers, often leading to large-scale privacy breaches, exposing personal and intimate details of users. For example, the recent data breach at Medibank compromised the health records of millions of Australian citizens, constituting a serious violation of their privacy rights. One major factor contributing to such breaches is the lack of user-friendly privacy controls on these websites. As a result, users either remain unaware of these controls or struggle to configure them correctly.
Quite recently, multiple studies have focused on the usability of privacy controls provided by online services, for example, data deletion and opt-out choices [1] and cookie management providers [2; 3]. These studies predominantly consider only one or two aspects of privacy, e.g., privacy policies or privacy nudges. Furthermore, existing works have yet to estimate the overall usability of privacy measures. In addition, only a limited amount of work has investigated the usability of privacy measures on health websites, which handle more private information.
To this end, we developed a holistic template to empirically analyse the usability of four different privacy controls (privacy nudges, privacy policies, privacy notices, and privacy settings). We perform this analysis on the top 100 most visited health websites [4]. Since users are more likely to share private information with these websites, privacy concerns are especially acute. Our evaluation is performed under several scenarios where the user can either visit as a guest, a registering user, or an already registered user. First, we analyse the availability of the above privacy controls on these websites. Moreover, we consider the impact of regulations on the availability of privacy controls by comparing all websites with the ones complying with regulations, especially the most widely used GDPR and CCPA. Then, based on the data collected through our holistic template, we analyse and evaluate the usability of each privacy control.
More specifically, we make the following contributions.
1. We have designed a holistic survey template that can be used to analyse and evaluate the usability of four different privacy controls. In this template we take into consideration various factors such as the location, display type, number of clicks, and content types across privacy controls.
2. Based on the template, we empirically analysed the usability of the top 100 health websites under three scenarios; guest visit, registration, and login. We also compare how privacy controls change under each scenario and the impact they have on the usability.
3. We evaluated these websites based on health guidelines to determine whether they conform to these recommendations.
4. Based on the findings of our survey, we propose several designs for privacy controls, as well as recommendations for integrating privacy policies and settings.
## 2 Related work
### 2.1 Research on Privacy Controls
GDPR and CCPA are two of the most commonly followed regulations by websites across the internet. Many researchers have analyzed the influence of these regulations alongside the regulatory compliance of websites that adhere to them. Degeling et al. [5] evaluated how the implementation of GDPR improved website privacy controls such as the privacy policy and cookie consent. Their analysis showed that across 28 countries of the European Union, around 16% of websites added privacy policies and cookie consent notices by the deadline set by GDPR. Hils et al. [6] analyzed 161 million web pages and found that implementing GDPR and CCPA quadrupled the adoption of Consent Management Platforms (CMPs) from 2018 to 2020. However, researchers have found evidence of disparities between the displayed/user-selected privacy settings and the behaviour of websites. Matte et al. [3] crawled websites following IAB Europe's Transparency and Consent Framework (TCF) and found that even when users opted out, their choice was recorded as positive consent, which violates the guidelines of the framework the websites adhere to.
Liu et al. [7] demonstrated that many websites violated data protection regulations by continuing to collect, process, and share user information after users had opted out. To this end, they proposed an auditing framework to evaluate the most popular CMPs and cookie opt-out tools of websites.
Another area that has gained the attention of many researchers is web tracking and the obfuscation of privacy-related information. Sanchez et al. [8] proposed an automatic tool (TrackingInspector) to detect and distinguish between known and unknown types of web tracking scripts among the Alexa top 1M websites. Mehmezhad et al. [9] analyzed the privacy notices and tracking behaviours on three platforms (PC browser, mobile browser, and mobile apps) and demonstrated that most web pages present the privacy consent banner inconsistently across the three platforms. They also discovered that websites collect user information before users approve the privacy consent. Another study leveraged a crawler to investigate 2.5M clicks to seek signs of misbehaviour on various websites [10] and demonstrated that most websites hindered users from making a decision based on their intention by hiding the links' real aim or providing insufficient information. The same type of practices was also found in mobile apps by Zimmeck et al. [11], who analyzed 1 million Android apps to identify the privacy policies these apps follow and how compliant the apps are with them.
Researchers have also developed survey templates to gather data based on users' perspectives pertaining to areas such as data deletion, email communication opt-outs and targeted advertising opt-outs. For example, results from a survey study conducted by Habib et al. [1] showed that the usability of these opt-outs needs improvement in several aspects, like display location and link quality. Another study by Redmiles et al. [12] explored the quality of 374 security and privacy advice pieces through their proposed dimensions: comprehensibility, actionability, and efficacy. Their findings revealed that though users can be aware of and understand most security advice, they needed guidance to prioritize these pieces of advice.
### 2.2 Privacy Research in the Health Sector
Researchers have been using several techniques to evaluate the degree of privacy health websites provide. The US Department of Health and Human Services' Office of Disease Prevention and Health Promotion (ODPHP) designed a survey template to assess the usability of the 100 top-ranked health-related websites [13]. This work built a standardized quality measurement method and benchmarks to measure and capture the current status of website quality, and based on these findings and previous research, it proposed improvement suggestions for websites. However, the template focuses on the general usability principles of a website rather than privacy-related usability. Boon-itt et al. [14] utilized the technology acceptance model (TAM) to classify health website quality attributes and demonstrated how quality influences the user intention to use a health website.
Ermakova et al. [15] provided evidence of the poor readability of privacy policies on health and e-commerce websites and demonstrated that privacy policies on health websites are more readable than e-commerce ones. This paper also provided an informative corpus of privacy policy texts for further research. In 2020, Ali et al. [16] discussed the readability issues in privacy policies through a literature review. They compared research results on different readability measurements and grouped them into two categories: the reader's perspective and the policy text content's perspective. Sangari et al. [17] analyzed 12,000 usernames and their profiles on a medical forum (Breastcancer.org) and demonstrated that patients' medical privacy was at significant risk. This paper suggested that medical forums provide privacy notices for patients before they create usernames.
In summary, current research on privacy in the health domain usually concerns data protection measures employed by websites or apps rather than privacy controls for users. A few articles discussed privacy controls such as the privacy policy, but comprehensive research on privacy controls in the health field is lacking.
### 2.3 Previous Usability Definition
In 1994, Nielsen [18] gave a general definition of usability, 'a quality attribute that assesses how easy user interfaces are to use', and described it along five dimensions: learnability, efficiency, memorability, errors, and satisfaction. In 1998, the International Organization for Standardization (ISO) 9241-11 [19] defined usability along three dimensions: effectiveness, efficiency, and satisfaction. In 2010, [20] described usability along six dimensions, combining [18] and [19]. In 2019, [21] reviewed research works from 2000 to 2018 and found that it is challenging to design a unified evaluation framework to measure all the usability dimensions for e-commerce websites. In 2021, [22] proposed a new definition of usability for all fields of study, 'an evaluation of the level of quality of a user's experience (UX)', based on the awareness, choice, security, and redress attributes. Regarding privacy controls, [23] described the usability of the AdChoice icon along four dimensions: 1) time to find the icon; 2) icon visibility; 3) icon functionality; and 4) adjusting preferences. Habib et al. [24, 25] then designed usability definitions for privacy choice and cookie consent interfaces from four and seven dimensions, respectively.
## 3 Background
### 3.1 Usability Definition
Applying a single standard definition of usability across various domains is difficult since usability is usually contextual. However, to analyze privacy controls, we need to collect related information that can be utilized to represent usability. One such definition in the privacy field evaluates the quality level of a user's experience on four attributes: notice/awareness, choice/consent, integrity/security, and enforcement/redress [22]. The usability of specific privacy controls, such as the AdChoice icon, can be evaluated under four attributes: time to find the icon, icon visibility (i.e., size, position, state, color), icon functionality, and adjusting preferences [23]. For privacy choices, the cognitive and physical processes required to use privacy choices (i.e., opt-out and data deletion) can be utilized to represent usability [24]. This was done by designing four Human-Computer Interaction (HCI) tasks based on finding, learning, using, and understanding a privacy policy. For the usability of cookies, previous research has measured usability through consent interfaces, inclusion of user needs, ability and effort, awareness, comprehension, sentiment, decision reversal, and nudging patterns [25].
### 3.2 Privacy Controls
A **privacy nudge** should provide both privacy information and the option for privacy choices to the users. Users should be able to respond to a privacy nudge by ticking check boxes, pressing a button, or toggling a selection switch. Generalized nudges can be found in various forms, for example, presentation, information, defaults, and incentives [26]. They can also be grouped into different categories based on the task they facilitate [27], such as defaults, presentation, information, feedback, error, social influence, and progress bars. In contrast, the **privacy notice** only provides the user with privacy information. There is no option for the user to take an action on the displayed notice except for removing it from their display. The **privacy policy** is a statement explaining what information is collected and how a website collects and processes the user's personal information in a relatively fixed text structure. Finally, a **privacy setting** is a tool that allows users to control their data privacy according to their service demands and personal willingness.
Figure 1 depicts the location of the privacy nudge on six different websites. From these six images, we can clearly see that there is no fixed location for the nudge, and it could appear anywhere on the screen. Some of these nudges are in the form of a banner, whereas some are presented as pop-up windows. According to previous research [28], pop-up windows attract more user attention due to their intrusive exposure format compared to the voluntary exposure format of banners. In addition, even among banners, full-screen banners are more likely to attract user attention than window/stripe banners. Another factor that plays a major role in drawing the user's attention to a banner is its relative location. Banners located at the top of websites obtain better visual attention than those in the bottom-right or bottom-left positions [29]. This behaviour is almost identical for privacy notices as well, as can be seen in Figure 2. Hence, to evaluate the awareness of privacy nudges and notices, both the display type (pop-up/banner) and the display location should be considered.
Figure 1: Privacy Nudge Examples
The number of clicks needed to reach the privacy policy or privacy setting can be utilized to measure the ease of use. This is evaluated under the efficiency attribute in the guest and log-in visit scenarios.
The privacy policy and privacy settings should be comprehensible to the users for them to make informed decisions. Comprehension can be measured through two finer features: language and table of content. Providing multiple languages helps certain users understand the complex and professional information in the privacy policy and privacy setting in their own languages, which can be used to estimate the quality of the website content design [13; 30]. The table of content acts as a feature representing the readability of a privacy policy because it provides a shortcut to the content that users care about the most [1]. In contrast, some websites purposefully degrade the comprehensibility of their privacy policy and privacy settings through dark patterns. These patterns, such as scarcity, urgency, information hiding, and asymmetric choices [31], are designed to trick users by misguiding or misinforming them. However, we can assume experts in the privacy domain can identify most dark patterns and not be misled by them during the survey. The types of options available under privacy settings can also influence or force users to neglect their privacy rights. For example, users usually give consent without much thought when the cookie consent only provides an 'Accept' choice. In contrast, the explicit choice guidance feature respects a user's privacy rights and tries to provide the most suitable privacy setting customization for each user.
## 4 Methodology
In this section, we describe the process we followed to create a template to evaluate the usability attributes Awareness, Efficiency, Comprehension, Functionality, and Choice across the four different privacy controls: nudge, notice, policy, and setting. Then we describe how we utilized this template to collect data from 100 health websites.
### 4.1 Survey Template Design
We designed five attributes for each privacy control which represent the ease of use (_awareness_, _efficiency_ and _comprehension_) and their utilities (_functionality_). In addition, we consider the influence of dark patterns on the usability of these privacy controls through _choice_ attribute. We use the number of clicks to access the privacy control and the type of privacy content to represent the efficiency and functionality, respectively. We utilize two finer features to represent awareness, comprehension, and choice attributes. A comprehensive breakdown of the usability features taken into consideration under each privacy control is shown in Table 1.
Based on these usability attributes, we designed pertinent questions for users in the survey. There are three website visit scenarios we are interested in: guest visit, registering, and log-in visit. The survey contains three separate modules
\begin{table}
\begin{tabular}{l l l l l l l l l l l} \hline \hline & & \multicolumn{4}{c}{**Guest Visit**} & \multicolumn{4}{c}{**Registering**} & \multicolumn{1}{c}{**Log-in Visit**} \\ \cline{3-11} & & **nudge** & **notice** & **policy** & **setting** & **nudge** & **notice** & **policy** & **setting** & **account setting** \\ \hline Awareness & Location & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ & Display type & ✓ & ✓ & & & ✓ & ✓ & ✓ & ✓ & \\ Efficiency & Number of clicks & & & ✓ & ✓ & & & & ✓ & \\ Comprehension & Language & & & ✓ & ✓ & & & & ✓ & ✓ \\ & Table of content & & & ✓ & & & & & ✓ & ✓ \\ Functionality & Type of privacy content & ✓ & ✓ & & ✓ & ✓ & ✓ & & ✓ & ✓ \\ Choice & Types of options & ✓ & & ✓ & & ✓ & & & \\ & Explicit choice guidance & ✓ & & ✓ & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Usability Attributes for Four Privacy Controls in Three Visit Scenarios
Figure 2: Privacy Notice Examples
corresponding to each of the visit scenarios. Under each module, there are four sections pertaining to each privacy policy we aim to evaluate. Finally, several subsections contain specific questions for each privacy control under each section. Following the natural flow of a user surfing the web page, we set the guest visit scenario as the first module and registering (i.e., creating an account) and log-in visit scenarios as the second and last module, respectively and the privacy controls are ordered as privacy nudge, privacy notice, privacy policy, and privacy setting.
In the first module, the first question under each privacy control is whether the user can access the relevant privacy control while visiting the web page. These questions take the simple format of a 'Yes' or 'No' answer and serve as a statistical measure of the availability of each privacy control. If answered 'Yes', the researcher proceeds to the questions under each subsection of the privacy control; otherwise, the questions are skipped and the survey moves on to the next section. Each subsection under the privacy controls consists of open questions based on 'What', 'Where', 'How many', and 'Which'. However, it is impossible to anticipate all available answer options before completing a single round of data collection. Therefore, we used the Excel data validation tool to create a drop-down list that records the seen options and avoids inconsistent options for the same answer. This enables us to maintain the scalability of the template while continuously updating the available options.
Since the privacy nudge and the privacy notice follow a similar format, they share the same questions under each subsection. They are, 1) display location ('Where'), e.g., 'Top', 'Bottom' and 'Full screen'; 2) display/trigger type ('What'), e.g., 'Pop up Window', 'Banner', and 'Check Box Content'; and 3) privacy content/function type ('What'), e.g., 'Cookie', 'Advertisement', 'Profile/Personally Identifiable Information (PII). Based on the answers for display location and trigger type, we summarize the frequently-used types of display and analyze their shares in the top 100 health websites. We can have a general catalog of the privacy information from the content type answers and analyze their functionality. However, there are two more questions for privacy nudge. One is about the response action type, and another is about whether there are detailed explanations for those actions. Both of these questions are used to estimate the efficacy of the privacy control.
Under the privacy policy section, we designed nine questions. The first four questions are used for estimating the ease of use, the subsequent three are used to estimate the utility, and the last two are leveraged for regulation influence and region distribution analysis. The first three questions are: 1) where the privacy policy button is displayed; 2) how many clicks are required to visit the privacy policy; 3) whether it is available in various languages. Since the privacy policy is a legal document containing many domain-specific terms and concepts, not everyone will be able to grasp the meaning at once. Therefore, it should help the user understand the content in their familiar language. Question 4), whether it contains contents to guide reading, is used to evaluate the readability of the privacy policy to some extent. Questions 5 to 7 focus on whether the privacy policy contains: 1) privacy setting options; 2) privacy setting links; 3) clear privacy setting guidance. These three questions are designed to see whether the privacy policy provides customized rights to users. The final two questions are about the regulations this privacy policy follows and the country this website belongs to. The options for open questions under these two sections can be found in Table 2.
For the privacy setting section, there are four questions altogether. The first three questions are similar to the ones on the privacy policy. The final question asks about the coverage of the privacy settings provided; it is functionally identical to the third question of the privacy nudge and determines how many aspects the privacy setting covers. If the website allows users to register, the questions related to this scenario are available under the second module of the survey template. The questions in this module are nearly the same as in the first module, except for the third-level questions about the privacy policy. The content of the privacy policy does not change under each visit scenario; hence, we only include the first question about the privacy policy button/link display location, which may differ in the registering scenario. Once the user successfully registers, we move on to the final module of the survey. Since most websites only allow changing privacy settings/preferences tied to a user account, in this section we concentrate primarily on the privacy setting aspects of the web page. The options for open questions under the privacy policy and privacy setting can be found in Table 3.
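To make the structure of the template concrete, the following is a minimal sketch in Python, assuming a nested-dictionary representation; the module, section, and option names mirror the drop-down lists described above, but the identifiers themselves are illustrative rather than the actual survey instrument.

```python
# A nested-dictionary sketch of the survey template (illustrative identifiers only).
# Each leaf either lists the controlled drop-down options for a closed question
# or notes that the question is open-ended.

SURVEY_TEMPLATE = {
    "guest_visit": {
        "privacy_nudge": {
            "available": ["Yes", "No"],  # gating Yes/No question
            "display_location": ["Top", "Middle", "Bottom", "Left", "Right", "Full screen"],
            "display_type": ["Pop up Window", "Banner"],
            "content_type": ["Cookie", "Advertisement", "Privacy policy", "Location",
                             "Email/Notification", "Profile/PII", "History"],
            "action_options": ["Accept", "Decline", "Manage"],
            "explicit_explanation": ["Yes", "No"],
        },
        "privacy_policy": {
            "available": ["Yes", "No"],
            "button_location": ["Footer", "Full screen", "Link in the privacy notice/nudge"],
            "clicks_to_access": "open question (integer)",
            "multiple_languages": ["Yes", "No"],
            "table_of_content": ["Yes", "No"],
        },
        # privacy_notice and privacy_setting sections follow the same pattern
    },
    # the "registering" and "log_in_visit" modules mirror this structure
}
```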
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Privacy nudge/notice location** & **Privacy nudge/notice triggers** & **Privacy nudge actions** & **Privacy nudge/notice types** \\ \hline top & pop up window & accept & cookie \\ middle & new page & decline & advertisement \\ bottom & banner & manage & privacy policy \\ left & check box content & & location \\ right & plain text & & email/notification \\ full screen & pull down menu & & profile/PII \\ nearby sign up/log in/subscribe & button/link & & history \\ email confirming pages & & & security guarantee \\ \hline \hline \end{tabular}
\end{table}
Table 2: Options for Open Questions About Privacy Nudge/Notice
### 4.2 Data Collection
During the template creation stage, we employed only one researcher to explore the websites in order to keep the recorded data consistent. Once the first iteration of the template was completed, the researcher browsed ten websites and recorded the data collection process according to the template. Based on these findings, we finalized the three modules of the template with their relevant sections and subsections. This template was used for the researcher's data collection on the 100 websites. We then checked the template for questions that had the same answer for all websites and removed them, since they do not provide any new information about the websites. The options for the open-ended questions were also finalized during this stage. Finally, two additional researchers were given the template and asked to gather information independently, giving us three independently gathered data samples.
During the data collection process, we followed rigorous instructions to ensure outside factors did not interfere with the data gathered. The researchers were free to pick any browser for data collection, since this is more realistic than every user using the same browser. However, the browser had to be in incognito mode with previous cookies cleared, to avoid any interference from previously stored cookies. For example, a privacy nudge/notice may not appear again if a website was previously visited with the same browser and saved cookies. For the account creation process, the researchers were provided with a set of mock personal data and an email address to be used.
On average, each researcher spent about 20 minutes per website, with some sites requiring around 50 minutes. The time taken depended mostly on whether the website contained a registering function and on the complexity of its privacy policy. If there is no sign-up option, the researcher only needs to explore the questions in the first module (i.e., the guest visit scenario), which usually takes 10 minutes. If a website has a registering option, the registration may fail due to a lack of specific information, such as a local phone number or a residential healthcare card; otherwise, a successful registration leads to the log-in scenario and a longer data collection duration. Finally, to answer the questions about the privacy policy, the researcher needs to skim through the whole document. While some privacy policies only have several pages, some contain over thirty pages. For a website with three visit scenarios and a complex privacy policy, a researcher may take about 50 minutes for the complete process.
### 4.3 Usability Analysis Methods
We summarize the usability analysis methods into two main categories. The first is based on the real user's perspective, which requires employing many potential users to complete surveys for several websites. Most questions in this kind of survey are subjective, such as 'Was it easy to find the privacy policy button?' or 'How satisfied are you with these privacy setting choices?'. The differences in subjective feelings are rated with the Likert scale method (e.g., a five-point or seven-point scale). Based on the users' rating scores, previous research proposed different models to fit the scores across multiple feature dimensions. These models are usually based on the concept of the Structural Equation Model (SEM). The number of survey participants is the key factor in ensuring the robustness of a model. This kind of usability analysis aims to build a measurement standard to evaluate each website.
On the other hand, another kind of usability analysis is based on the expert's perspective, which only needs one or several experts to complete surveys on numerous websites. Most questions in this kind of survey are objective (see Table 2). Thus experts can perform more professionally and consistently. In this situation, previous research usually focused on the statistical overview of the usability for certain privacy controls on all targeted websites. They analyzed each usability attribute separately with traditional statistical methods and gained insights from these results.
In this paper, we will use expert perspective-based usability analysis approaches and conduct the usability analysis for privacy nudge, privacy notice, privacy policy, and privacy setting on awareness, efficiency, comprehension, choice, and functionality attributes.
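As a minimal illustration of this expert-based, statistics-oriented analysis, the sketch below aggregates survey records into per-scenario, per-control availability ratios of the kind reported in the Results section; the record fields and the helper name `availability_by_control` are assumptions for illustration, not part of our actual tooling.

```python
from collections import defaultdict

def availability_by_control(records):
    """Return {(scenario, control): % of applicable websites providing that control}."""
    totals, present = defaultdict(int), defaultdict(int)
    for r in records:
        key = (r["scenario"], r["control"])
        totals[key] += 1                      # website is applicable in this scenario
        present[key] += int(r["available"])   # control was found on the website
    return {key: 100.0 * present[key] / totals[key] for key in totals}

# Toy records, not real survey data:
sample = [
    {"scenario": "guest", "control": "policy", "available": True, "region": "Europe"},
    {"scenario": "guest", "control": "nudge", "available": False, "region": "Asia-Pacific"},
]
print(availability_by_control(sample))  # {('guest', 'policy'): 100.0, ('guest', 'nudge'): 0.0}
```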
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Privacy policy button location** & **Whether the privacy policy contains the privacy setting link?** & **Privacy setting types** \\ \hline footer & Yes, third party link & cookie \\ full screen & Yes, customized link. & advertisement \\ link in the privacy notice/nudge & Yes, both kinds of link. & location \\ & No. & email/notification \\ & & profile/PII \\ & & history \\ & & activity \\ \hline \hline \end{tabular}
\end{table}
Table 3: Options for Open Questions About Privacy Policy and Privacy Setting
## 5 Results
### 5.1 Overview of Targeted Websites
Our research centers on the usability analysis of four privacy controls on the top 100 health websites with the highest traffic levels. Our initial selection of the 100 health websites contained three inaccessible websites and two adult websites, which had to be excluded. To resolve these constraints, we incorporated an additional set of five websites, sourced exclusively from Australia, into our sample.
#### 5.1.1 Regional Distribution
First, we examine the regional distribution of the 100 targeted health websites and count the number of websites hosted in each country, listed in descending order (see Table 4).
Our study revealed that the majority of the websites in our sample were hosted in the United States of America (46%), followed by Russia (6%), Indonesia (6%), the United Kingdom (6%), Australia (6%), Brazil (4%), India (3%), and Canada (3%). In the targeted set of 100 websites, the remaining thirteen countries only host one or two websites that are included.
Furthermore, the identified websites have been classified into six groups based on their respective geographic regions: North America, Europe, Asia-Pacific, South America, Middle East, and Global. A global website belongs to an international organization rather than a specific country or region, e.g., the World Health Organization. The geographical distribution of the 100 websites is shown in Figure 3. We can observe that North America, Europe, and Asia-Pacific constitute the majority of targeted websites (93%). Therefore, in the subsequent region-related analysis we conduct research exclusively on the websites originating from these top three regions.
#### 5.1.2 Regulation Compliance
Following the enforcement of regulations and protocols regarding user data privacy protection, exemplified by GDPR and CCPA, it has become imperative for website designers to abide by these regulations and respect users' privacy. The General Data Protection Regulation (GDPR) mainly regulates websites in Europe, while the California Consumer Privacy Act (CCPA)
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Rank** & **Website Count** & **Country Name** \\ \hline
1 & 46 & USA \\
2 & 6 & Russia, Indonesia, UK and Australia \\
3 & 4 & Brazil \\
4 & 3 & India and Canada \\
5 & 2 & France, China, Germany, Mexico, Poland and Japan \\
6 & 1 & Vietnam, Ukraine, Portugal, Italy, UAE, Jordan, Netherlands and Global \\ \hline \hline \end{tabular}
\end{table}
Table 4: Website Regional Distribution
Figure 3: Geographical Distribution for Top 100 Health Websites.
primarily applies to California and other states of the USA. While exploring the privacy-related regulations followed by the top 100 health websites, we notice that the degree of regulatory compliance varies across different geographical regions. North America (including the USA, Mexico, and Canada) and Europe (including Russia and Ukraine) place more emphasis on privacy regulatory compliance.
From Table 5, it can be observed that the predominant (top two) regulations commonly employed are the CCPA and GDPR, from the North American and European regions respectively. In addition, there are ten regulations employed by targeted websites originating from North America, three from Europe, and four from the Asia-Pacific region. As illustrated in Figure 4, the regulatory compliance ratios in North America and Europe exceed 80.0%, whereas in the Asia-Pacific region, the ratio is only half of that observed in the former two regions. Taking a closer look at the Asia-Pacific region, there are 20 websites from six countries and one area (China Taiwan). Nevertheless, amongst them, only 8 websites (6 from Australia, 1 from India, and 1 from Japan) comply with privacy regulations. In Section 5.2, when we analyze the usability of the privacy controls, we will attempt to further analyze the relationship between privacy control usability and the website regulatory compliance ratios of different regions.
### 5.2 Availability of Privacy Controls
Before specific privacy control analysis, we summarize the overall availability of privacy nudge, privacy notice, privacy policy, and privacy setting in three visit scenarios (log-in visit, guest visit, and registering) in Figure 5. We conduct an analysis of all 100 websites to assess their performance in the guest visit scenario. Our findings indicate that 42 of these
Figure 4: Regulatory Compliance Ratios in Different Regions
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Regulation Abbreviation** & **Regulation Full Name** & **Country** & **Count** \\ \hline CCPA & California Consumer Privacy Act & USA & 25 \\ GDPR & General Data Protection Regulation & Europe & 18 \\ HIPAA & Health Insurance Portability and Accountability Act Notice of Privacy Practices & USA & 11 \\ Act & Privacy Act 1988 & Australia & 6 \\ Law No. 152-FZ & The Federal law of July 27 2006 No. 152-FZ & Russia & 5 \\ LGPD & Lei Geral de Proteção de Dados (General Data Protection Law) & Brazil & 4 \\ PA & Privacy Act of 1974 USA & USA & 3 \\ CCC & California Civil Code & USA & 2 \\ APEC CBPR & Asia-Pacific Economic Cooperation Cross Border Privacy Rules Program Requirements. & USA & 2 \\ COPPA & Children’s Online Privacy Protection Act & USA & 2 \\ LFPDPP & Ley Federal de Proteccion de Datos Personales en Posesion de los Particulares (Federal Law & Mexico & 2 \\ & on Protection of Personal Data Held by Private Parties) & USA & 1 \\ NL & Nevada Law & USA & 1 \\ FOIA & The Freedom of Information Act and Privacy Act & USA & 1 \\ PIPEDA & Personal Information Protection and Electronic Documents Act & Canada & 1 \\ SPI & Regulation 4 of the Information Technology (Reasonable Security Practices and Procedures & India & 1 \\ & and Sensitive Personal Information) Rules, 2011 (the “SPI Rules”) & Australia & 1 \\ APP & Australian Privacy Principles & Australia & 1 \\ PCR & The Law of Ukraine “On Protection of Consumer Rights” & Ukraine & 1 \\ APPI & Act on the Protection of Personal Information & Japan & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The Count of Websites Complying with Each Regulation
websites allowed users to complete account creation; hence, only these 42 websites are explored in the registering and log-in visit scenarios.
The availability of each privacy control varies depending on the visit scenario, and the privacy nudge, privacy notice, and privacy setting are not available on most websites. Most websites (97.6%, 99.0%, and 81.0%) contain privacy policies in the three visit scenarios. It is noteworthy that only the Russian website 'sportkp.ru' does not present a hyperlink to its privacy policy on the homepage during the guest visit scenario; nevertheless, it provides a privacy policy link during the registering scenario. That means all targeted websites offer a privacy policy that is accessible to users. Typically, the privacy policy is made available on the website homepage, and in some cases, during the registration process.
On the other hand, the availability of privacy nudges, privacy notices, and privacy settings is considerably lower. In the guest visit scenario, 34.0% of websites have privacy setting buttons, whereas only 29.0% and 13.0% of websites apply the privacy nudge and privacy notice, respectively. In the registering scenario, the availability is significantly higher compared to the guest visit scenario, with websites having privacy nudges and privacy notices constituting 61.9% and 64.3%. In the log-in visit scenario, websites with privacy nudges and notices are fairly rare, with merely 0.0% and 4.8% presented, respectively. The availability of the privacy setting (52.4%) in the log-in visit scenario is significantly higher than in the guest visit and registering scenarios. In summary, the privacy policy is the privacy control with the highest availability in all three visit scenarios. In addition, websites tend to apply the privacy nudge and notice in the registering scenario rather than in the guest visit scenario. The privacy setting is utilized most frequently in the log-in visit scenario.
#### 5.2.1 Based on Regional Distribution
We provide the availability of each privacy control in different regions in Table 6 and find a regional imbalance. The availability of the privacy nudge, notice, and setting in any visit scenario (except the privacy nudge in the guest visit scenario) in the Asia-Pacific region is significantly lower than in the other two regions. In particular, websites hosted in the Asia-Pacific region notably lack both privacy notices (in the guest visit scenario) and privacy settings (in both the guest visit and registering scenarios). This finding implies that not all regulations improve the availability of privacy controls, as some regulations may lack content or clear guidance in these areas. That means the rate of regulatory compliance in a region is not necessarily directly and positively correlated with the availability of privacy controls. However, from the findings in Table 6, we can conclude that a low regulatory compliance ratio does result in a low availability of privacy controls.
#### 5.2.2 Based on Regulation Compliance
While not all regulations are effective in promoting privacy controls, both GDPR and CCPA contribute to some privacy controls significantly. Even though 73 of the 100 websites comply with certain regulations, the availability of each
Figure 5: The Availability of Each Privacy Control in Three Visit Scenarios
privacy control is not consistent with the general regulatory compliance ratio (see Table 7). This may be attributed to the fact that some regulations do not require certain privacy controls. For example, our findings suggest that all targeted websites complying with HIPAA and the Privacy Act 1988 (Act) provide neither the privacy nudge nor the notice in the guest visit scenario, nor the privacy setting in the registering and log-in visit scenarios. Only 10.0% of the above-mentioned websites provide the privacy setting in the guest visit scenario, and 20.0% offer the privacy nudge and notice in the registering scenario. Nonetheless, if we narrow our attention to websites that adhere to GDPR or CCPA, the data reveal that the rates of privacy setting availability are considerably greater, exceeding those of other regulations by over 10.0% in all three visit scenarios. In addition, websites complying with GDPR have better availability of the privacy nudge in the three visit scenarios, while websites complying with CCPA contribute to the privacy notice in the registering scenario.
### 5.3 Usability of Privacy Controls
In this section, a usability analysis is performed according to the usability attributes spanning all four privacy controls in Table 1. The following sub-sections present the analysis of the usability attributes corresponding to each privacy control in different scenarios.
#### 5.3.1 Privacy Nudge
First, we present the data analysis related to the _awareness_ attribute of the privacy nudge: location and display type (see Figure 6 and Figure 7). Then we conduct the analysis of the _choice_ attribute in Figure 8 and Figure 9. Finally, we analyze the _functionality_ attribute of privacy nudges in Figure 10.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Guest Visit Scenario** & **Nudge** & **Notice** & **Policy** & **Setting** \\ \hline The number of websites applicable & 100 & 100 & 100 & 100 \\ The number of total websites with the privacy control & 29 & 13 & 99 & 34 \\ The number of North America websites with the privacy control (total 51) & 9 & 7 & 51 & 19 \\ The number of Europe websites with the privacy control (total 22) & 14 & 5 & 21 & 14 \\ The number of Asia-Pacific websites with the privacy control (total 20) & 4 & 0 & 20 & 0 \\ The number of other region websites with the privacy control (total 7) & 2 & 1 & 7 & 1 \\ \hline
**Registering Scenario** & **Nudge** & **Notice** & **Policy** & **Setting** \\ \hline The number of websites applicable & 42 & 42 & 42 & 42 \\ The number of total websites with the privacy control & 26 & 27 & 34 & 8 \\ The number of North America websites with the privacy control (total 18) & 12 & 14 & 15 & 3 \\ The number of Europe websites with the privacy control (total 12) & 9 & 8 & 10 & 5 \\ The number of Asia-Pacific websites with the privacy control (total 8) & 4 & 2 & 5 & 0 \\ The number of other region websites with the privacy control (total 4) & 1 & 2 & 4 & 0 \\ \hline
**Log-in Scenario** & **Nudge** & **Notice** & **Policy** & **Account Setting** \\ \hline The number of websites applicable & 42 & 42 & 42 & 42 \\ The number of total websites with the privacy control & 0 & 2 & 41 & 22 \\ The number of North America websites with the privacy control (total 18) & 0 & 0 & 18 & 11 \\ The number of Europe websites with the privacy control (total 12) & 0 & 1 & 11 & 7 \\ The number of Asia-Pacific websites with the privacy control (total 8) & 0 & 0 & 8 & 2 \\ The number of other region websites with the privacy control (total 4) & 0 & 1 & 4 & 2 \\ \hline \hline \end{tabular}
\end{table}
Table 6: The Availability of Each Privacy Control in Different Regions
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Visit Scenarios** & **Nudge** & **Notice** & **Policy** & **Setting** \\ \hline \multicolumn{5}{c}{All Websites/Websites Complying With Regulations} \\ \hline Log-in Visit & 0.0\% / 0.0\% & 4.8\% / 100.0\% & 97.6\% / 100.0\% & 52.4\% / 51.5\% \\ Guest Visit & 29.0\% / 27.4\% & 13.0\% / 15.1\% & 99.0\% / 100.0\% & 34.0\% / 41.1\% \\ Registering & 61.9\% / 57.6\% & 64.3\% / 63.6\% & 81.0\% / 81.8\% & 19.0\% / 21.2\% \\ \hline \multicolumn{5}{c}{All Websites/Websites Complying With GDPR/Websites Complying With CCPA} \\ \hline Log-in Visit & 0.0\% / 0.0\% & 4.8\% / 50.0\%/0.0\% & 97.6\% / 100.0\%/100.0\% & 52.4\% / **62.5\%/66.7\%** \\ Guest Visit & 29.0\% / **57.1\%**/17.6\% & 13.0\% / 7.1\%/11.8\% & 99.0\% / 100.0\%/ 100.0\% & 34.0\% / **78.6\%/52.9\%** \\ Registering & 61.9\% / **75.0\%/77.8\%** & 64.3\% / 62.5\%/**77.8\%** & 81.0\% / 75.0\%/88.9\% & 19.0\% / **50.0\% / 33.3\%** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Availability Comparison of Each Privacy Control for All Websites and Websites Complying With Regulations
**Awareness.** The location and type of privacy nudges are displayed inconsistently in the guest visit scenario, and most websites either do not display a privacy nudge or display it inappropriately. Meanwhile, in the registering scenario, privacy nudges are presented in a more consistent manner in terms of location and type, and most of the displayed privacy nudges are conspicuous.
There are 29 websites with a privacy nudge presented in the guest visit scenario. The nudges have six display locations and two display types (see Figure 6). The most common display locations on the screen for privacy nudges are the 'Bottom' (41.4%), 'Top' (27.6%), and 'Middle' (17.2%). The privacy nudges in the other three locations ('Left', 'Right', and 'Full Screen') occupy only 13.8% in total. In terms of the distribution of display types, there is a nearly equal proportion of 'Pop-up Window' and 'Banner', with only a 3.4% variation between the two. Previous research suggests that pop-ups perform better than banners at attracting the user's attention, that a top banner outperforms a bottom banner, and that larger pop-ups or banners are more effective than smaller ones. Thus, 'Pop-up Window', 'Banner' (located at the top only), and full-screen privacy nudges are considered 'effective' as they are easily noticeable by users. We count the number of these effective privacy nudges in the guest visit scenario and notice that 18 websites display privacy nudges in a salient way. That means 82 out of 100 websites do not have, or only have subtle, privacy nudges.
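A minimal sketch of this salience rule is shown below, assuming each nudge is recorded with the display type and location values from the survey template; the function name and the toy records are illustrative only.

```python
def is_effective_nudge(display_type: str, location: str) -> bool:
    """Label a nudge as salient: pop-up window, top banner, or full-screen display."""
    if display_type == "Pop up Window":
        return True
    if display_type == "Banner" and location == "Top":
        return True
    return location == "Full screen"

# Toy examples, not survey data:
nudges = [("Banner", "Bottom"), ("Pop up Window", "Middle"), ("Banner", "Top")]
print(sum(is_effective_nudge(t, loc) for t, loc in nudges))  # 2 of 3 counted as effective
```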
In the registering scenario, Figure 7 shows that there are five display locations (i.e., 'Top', 'Middle', 'Full Screen', 'Email Confirming Page', and 'Nearby sign up/log-in/subscribe') and four display types (i.e., 'Banner', 'New Page', 'Check Box Content', and 'Plain Text'). Regarding location, 76.9% (20 out of 26) of privacy nudges are displayed near the sign-up/log-in/subscribe buttons, and 11.5% (3 out of 26) are shown on a full-screen web page. The privacy nudges in other positions altogether account for only 11.5%. Regarding display type, 'Check Box Content' dominates the four display types, and 'New Page' is the second most popular; together these two display types account for 92.3%. This suggests that 92.3% of privacy nudges will not be ignored, because a user needs to click the check box located near the sign-up button or close the new page to continue the registering process.
**Choice.** The privacy nudges usually provide action options (i.e., choices) to a user for decision-making, for example, 'Accept', 'Decline', and 'Manage'. Some privacy nudges only provide a single choice ('Accept'), while some provide the combination of 'Accept' and another one or two options. The privacy nudges with only the 'Accept' option disregard
Figure 6: Privacy Nudge: Awareness Attribute Analysis in the Guest Visit Scenario
Figure 7: Privacy Nudge: Awareness Attribute Analysis in the Registering Scenario
the user's choice since the proposed option is mandatory. 'Decline' gives users the right to refuse sharing private information with websites. However, 'Decline' may bring some limitations and impact the user's browsing experience. The 'Manage' option provides customized choices that maximize flexibility for users. In the guest visit scenario, Figure 8 indicates that over half of the privacy nudges provide neither the 'Manage' option nor an explicit explanation of the options. Similarly, Figure 9 displays that only a minority (4) of privacy nudges provide a 'Manage' option or explain the options explicitly in the registering scenario.
Specifically, in the guest visit scenario, 32.4% of privacy nudges only provide the 'Accept' option, which accounts for the highest proportion of all options. The share of the 'Accept' and 'Decline' combination is 20.6%, and the other combinations, which include 'Manage', together take up 47.0%. On the other hand, most privacy nudges (61.8%) do not provide a clear choice explanation. In the registering scenario, the shares of the 'Accept', 'Accept and Decline', and 'Accept and Manage' options are 42.3%, 42.3%, and 15.4%, respectively. In addition, privacy nudges with an explicit option explanation account for only 15.4%.
In summary, based on the choice attribute perspective, the majority of websites disregard user choice rights in both guest visits and registering scenarios. However, the privacy nudges in the guest visit scenario demonstrate better performance compared to those in the registering scenario, as they provide more user rights and clearer explanations of their options.
**Functionality.** The functionality of privacy nudges varies across the registering and guest visit scenarios. However, most privacy nudges only cover one or two of these functions, such as cookies and the privacy policy. The privacy nudges push eight types of privacy content (i.e., functions): cookie, advertisement, privacy policy, terms of conditions, location, email/notification, personally identifiable information (PII), and history (see Figure 10). We count the number of privacy nudges that contain each type of privacy content to calculate the share of every type of privacy content (see Table 8). Some privacy nudges contain several types of privacy content; therefore, these privacy nudges may be counted repeatedly.
From Figure 10, we can note that in the guest visit scenario, cookies are the most commonly used privacy type, while in the registering scenario, terms of conditions have the highest share among the various privacy types. Location (7.4%) and history (1.9%) are functions particular to the privacy nudges in the guest visit scenario. In addition, the privacy nudges with privacy policy content take a significant proportion of the type distribution, constituting similar proportions (22.2% and 29.0%, respectively) in both scenarios.
Figure 8: Privacy Nudge: Choice Attribute Analysis in the Guest Visit Scenario
Figure 9: Privacy Nudge: Choice Attribute Analysis in the Registering Scenario
From Table 8, we notice that 55.2% (16 out of 29) of websites only cover one function in the guest visit scenario, whereas the ratio is 61.5% (16 out of 26) in the registering scenario. Although there are five websites (guest) and three websites (registering) covering three or more functions, 82.8% (guest) and 88.5% (registering) of privacy nudges only contain two or fewer functions. For example, in the guest visit scenario, 'abczdrowie.pl' is the only website presenting all seven functions in the privacy nudge. Only the website 'fitbit.com' contains a maximum of four functions in the registering scenario.
#### 5.3.2 Privacy Notice
According to Table 1, there are only two usability attributes for the privacy notice control, namely _awareness_ and _functionality_.
**Awareness.** Similar to the privacy nudge, the locations and types of privacy notices are diverse. However, in both visit scenarios most websites either do not contain a privacy notice or display it subtly.
In the guest visit scenario (see Figure 11), the display location 'Bottom' and display type 'Banner', which are demonstrated to perform worse when attracting user attention, comprise 69.2% and 76.9%, respectively. More specifically, only 6 out of 100 websites display salient privacy notices with 'Pop-up Window' or top 'Banner'. The privacy notice and nudge utilized in the guest visit scenario can be perceived as intrusive since they may disrupt the browsing experience. However, if appropriately designed and presented, these mechanisms can effectively inform users about their rights regarding privacy control and increase their privacy awareness. They inform users of privacy-related information when users navigate the website for the first time. However, only 38 out of 100 websites contain either a privacy nudge or notice, and 21 websites amongst them prominently display a privacy nudge or notice.
In the registering scenario (see Figure 12), the privacy notice can appear in five locations or be displayed on the full screen. Like the privacy nudge, over three-quarters of the websites display privacy notices near the sign-up/log-in/subscribe buttons (the first-ranking display location). The second-ranking display location is the 'Bottom', and its share is only about a tenth of the first-ranking location. Regarding the display type of the privacy notice, the dominant type is 'Plain Text and Link' (81.5%). Plain text located near the sign-up/log-in/subscribe buttons is not considered intrusive, as it does not disrupt the registration experience of users.
Considering the privacy nudge and notice are complementary, we evaluate them together. In total, 39 out of 42 websites present either a privacy nudge or notice, and 29 websites are appropriately designed to present the privacy nudge or notice in an effective manner. The availability and usability (from an awareness perspective) of privacy nudge and
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Scenario** & **1** & **2** & **3** & **4** & **5** & **7** \\ \hline Guest Visit (29 total) & 16 & 8 & 1 & & 3 & 1 \\ Registering (26 total) & 16 & 7 & 2 & 1 & \\ \hline \hline \end{tabular}
\end{table}
Table 8: The Count for Websites Containing a Privacy Nudge with Different Number of Functions
Figure 10: Privacy Nudge: Functionality Attribute Analysis in the Guest and Registering Scenarios
notice in the registering scenario (92.9% and 69.0%) are much higher than those in the guest visit scenario (38.0% and 21.0%).
**Functionality.** Compared with privacy nudges, privacy notices contain fewer types of privacy content (i.e., functions): three in the guest visit scenario and five in the registering scenario. The majority of privacy notices (100.0% (guest) and 81.5% (registering)) only cover one or two functions (see Table 9).
Figure 13 represents the distribution of the privacy notice content types. The privacy notice in the guest visit scenario contains three types of functionality: cookie, privacy policy, and security guarantee. 61.5% of privacy notices are related to cookie consent. In contrast to the guest visit scenario, the registering scenario includes three additional functions: advertisement, email/notification, and PII. We can observe that the privacy notices in different visit scenarios focus on varying types of privacy content. For example, over half of the privacy notices in the registering scenario contain links to the privacy policy, but the share is only 30.8% in the guest visit scenario. In addition, 40.7%, 37.0%, and 7.4% of privacy notices inform users about notification methods, the utilization of PII, and the privacy content used for advertisement, respectively, whereas privacy notices in the guest visit scenario lack these types of privacy content. It is worth noting that, in the guest visit scenario, two government websites (i.e., 'medicare.gov' and 'fda.gov') provide security guarantee information such as 'An official website of the United States government'. The security guarantee feature can alleviate users' privacy concerns, yet monitoring mechanisms are needed to prevent the abuse of such privacy notices.
#### 5.3.3 Privacy Policy
We analyze the four usability attributes (i.e., _awareness_, _efficiency_, _comprehension_, and _choice_) for the privacy policy, following Table 1. It is unnecessary to focus on specific _functionality_, as a privacy policy should encompass
Figure 11: Privacy Notice: Awareness Attribute Analysis in the Guest Visit Scenario
Figure 12: Privacy Notice: Awareness Attribute Analysis in the Registering Scenario
comprehensive information regarding a website's privacy. In the registering scenario, we only take into account the location feature of _awareness_, as it may vary between the two scenarios. In terms of the _'efficiency'_ attribute, it is challenging to quantify the number of clicks to access the privacy policy since the registration process can have multiple variations, and the privacy policy link may appear at any point. Nevertheless, the remaining attributes of the privacy policy are consistent in both scenarios.
**Awareness.** In the guest visit scenario, the location of the privacy policy on the homepage is uniform, typically found in the footer section with either a button or a hyperlink display type. In the registering scenario, most privacy policies appear as links in privacy notices or nudges.
For privacy policy analysis, we only focus on the display location instead of the display type, since we access the privacy policy either through a button or a link. Figure 14 illustrates that 89.1% of websites display the privacy policy button at the footer in the guest visit scenario. It is worth noting that the footer represents the final bottom portion of a web page and may contain multiple components. As such, the privacy policy may be situated within various components of the footer. In the registering scenario, the location of the privacy policy is not as uniform as in the guest visit scenario. Most privacy policies appear in the link of a privacy notice/nudge (84.2%). One of these targeted websites, 'altibbi.com', displays the privacy policy in a full-screen new page manner, which is considered a better practice to attract the user's attention than links in the privacy notice/nudge.
**Efficiency.** While two clicks are required to access the privacy policy of most websites through the privacy policy button or link, users may also access the privacy policy with only one click through a link in the privacy nudge/notice.
To represent the efficiency of visiting the privacy policy, we examine the number of clicks needed to access it. While counting the clicks, we define the scrolling-down action as one click. From Figure 14, we can see that most privacy policy related buttons are at the footer; therefore, only two clicks (i.e., scrolling down and clicking on the button) are needed to access the privacy policy. Nonetheless, about 3.2% of websites (see Figure 15 (left)) do not directly display the privacy policy after the button on the homepage is clicked, prompting users to perform an extra click. If access to the privacy policy appears in multiple locations, such as in a privacy nudge/notice (requiring one click) and in the footer (requiring a minimum of 2 clicks), we record the lowest number of clicks required. In this case, 7.4% of the websites present privacy policy access in privacy nudges/notices. Such a location design is more efficient, as users only need to perform one click on the website to access the privacy policy.
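The click-counting convention can be summarized by the following minimal sketch, assuming illustrative path labels and costs; the helper name and constants are assumptions for illustration, not part of the survey template itself.

```python
# Illustrative access paths and their click costs (scrolling down counts as one click).
PATH_COSTS = {
    "link in privacy nudge/notice": 1,          # one direct click
    "footer button, policy shown directly": 2,  # scroll + click
    "footer button, extra page in between": 3,  # scroll + click + click
}

def clicks_to_policy(paths):
    """Given all access paths observed on a website, record the minimum click count."""
    return min(PATH_COSTS[p] for p in paths)

print(clicks_to_policy(["footer button, policy shown directly",
                        "link in privacy nudge/notice"]))  # -> 1
```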
Figure 13: Privacy Notice: Functionality Attribute Analysis in the Guest and Registering Scenarios
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Scenario** & **1** & **2** & **3** & **4** \\ \hline Guest Visit (13 total) & 12 & 1 & & \\ Registering (27 total) & 16 & 6 & 4 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 9: The Count for Websites Containing a Privacy Notice With Different Number of Functions
**Comprehension.** In this context, comprehension is not measured through 'readability', as previous research has illustrated the inadequate clarity of privacy policies in various domains. We utilize two other features to describe the ease of understanding the privacy policy: whether the privacy policy is viewable in various languages, and whether it contains a table of content to guide reading. The right sub-figure in Figure 15 demonstrates that the targeted websites perform poorly in terms of the comprehension attribute of the privacy policy. Over three-quarters of website privacy policies provide neither an alternative language version nor a table of content to assist users in comprehending the document.
**Choice.** In the context of privacy policy analysis, the "type of option" refers to the type of privacy setting links that are available to users; it determines the ways in which users can exercise their privacy choices. In Figure 16 (left), we note that 39.4% of privacy policies do not provide users with any privacy setting links. Approximately one out of every three privacy policies exclusively presents external links to third-party resources such as 'All About Cookies', 'Your Online Choices', 'All About Do Not Track', 'Network Advertising Initiative (NAI)' and links to the privacy setting pages of Adobe and Google. Meanwhile, a significantly smaller proportion (7.1%) of privacy policies contain customized privacy setting links.
On the other hand, if the privacy policy provides clear guidance for privacy settings, even without meaningful privacy setting links, we can still treat the privacy policy as a good practice that respects users' 'Choice' rights. Figure 16 (right) evaluates the clarity of this guidance. For the privacy policies that do not contain privacy setting links, we explore whether they provide clear privacy setting guidance. Only 4 out of the 40 aforementioned websites have explicit guidance. That means around 35.5% of privacy policies completely forfeit users' privacy choice rights, providing neither meaningful setting links nor clear guidance.
Figure 14: Privacy Policy: Awareness Attribute Analysis in the Guest Visit and Registering Scenarios
Figure 15: Privacy Policy: Efficiency and Comprehension Attribute Analysis
#### 5.3.4 Privacy Settings
We analyze the usability of the privacy setting in each visit scenario based on multiple attributes. It should be highlighted that our analysis is limited to websites that have implemented privacy settings. Specifically, we analyzed a total of 34, 8, and 22 websites for the guest visit, registering, and log-in visit scenarios respectively.
#### Guest Visit Scenario

**Awareness.** Figure 17 a) shows that, similar to the privacy policy, the display locations of the privacy setting are relatively standardized. Such uniformity in display location is advantageous in facilitating users' ability to locate the privacy setting. About three-quarters of privacy setting buttons/links are located at the footer of a website, and the others (23.5%) in the privacy nudge (7 out of 34) or notice (1 out of 34). These privacy nudges serve as a conspicuous reminder to users regarding the availability of privacy controls on a website. This approach is highly effective in informing users about privacy settings, even if they possess limited knowledge about privacy protection. A few websites (e.g., 'shop-apotheke.com') provide the privacy setting button/link at both the footer and the privacy nudge, which is an optimal practice combining both intrusive and voluntary methods. This ensures that the user can effectively perceive privacy controls and locate them effortlessly in a designated section whenever they require privacy protection.
**Efficiency.** We can access the privacy setting within three clicks on all targeted websites, with 14.7% of them providing one-click access (see Figure 17 b)). All websites with one-click access to the privacy setting present it through privacy nudges. It is worth noting that all five of these websites are from the Europe region. On the other hand, 60.0% of the websites with three-click access to the privacy setting (10 out of 34) are from North America. Therefore, we can deduce that, in comparison to other regions, the design of privacy settings in Europe is superior in the guest visit scenario from an _efficiency_ perspective.
Figure 16: Privacy Policy: Choice Attribute Analysis
**Comprehension.** Figure 17 c) shows that only about one-tenth of the websites provide privacy settings in alternative languages.
For analyzing the privacy setting, we focus solely on the language aspect of the _comprehension_ attribute. In view of the restricted length of the privacy setting text, the inclusion of a table of contents is not obligatory. Even though 4 out of 34 websites provide multiple-language privacy settings, three of them do so through a third-party link, 'YourAdChoices', which has alternative language options. The remaining website ('uptodate.com') provides alternative language options by redirecting users to country-specific versions of the site, such as China (Chinese) and Brazil (Portuguese). That means ultimately none of the targeted websites offers a customized privacy setting interface with language options.
**Functionality.** Most privacy settings (73.5%) in the guest visit scenario contain 'Cookie' management options, and over half of the privacy settings only cover one function.
Figure 17 d) summarizes the types of privacy content that privacy settings involve. The top three common types are the cookie, advertisement, and profile/PII. Other types of privacy content each account for less than 10.0%. Table 10 shows that 88.2% (30 out of 34) of privacy settings only cover one or two functions. For example, 'menshealth.com' provides three functions through three buttons at the footer ('Manage Email Preferences', 'Interest-Based Ads', 'DO NOT SELL MY PERSONAL INFORMATION'). The website 'familyandpets.com' provides four functions together with one toggle rather than separate toggles that correspond to each function. In this case, the best practices are the two European websites ('my-personaltrainer.it' and 'medonet.pl'), which provide five functions in their unified privacy setting interfaces with independent toggles for each function.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Scenario** & **1** & **2** & **3** & **4** & **5** \\ \hline Guest Visit (34 total) & 20 & 10 & 1 & 1 & 2 \\ Registering (8 total) & 6 & 1 & 1 & & \\ Log-in Visit (22 total) & 15 & 3 & 3 & 1 & \\ \hline \hline \end{tabular}
\end{table}
Table 10: The Count for Websites Containing a Privacy Setting With Different Number of Functions
Figure 17: Usability Attribute Analysis for Privacy Setting in the Guest Visit Scenario
#### Registering Scenario
**Awareness.** Locating privacy settings and recognizing their display patterns can be a difficult task due to their diverse placement and format.
In the registering scenario, we consider display location and type features (see Figure 18 a) and b)) for _awareness_ attribute. The display type of privacy setting during registration ranges diversely from banner, new page, check box content, and pull-down menu, which altogether accounts for 75.0%. The privacy settings may appear at five display locations with five display types, generating a wide range of display combinations. No display location or type has a significant advantage. Therefore, it is difficult for users to form habits and build awareness of where the privacy setting will appear.
**Comprehension.** Few websites (2 out of 8) provide multiple-language privacy settings (see Figure 18 c)).
Similar to the guest visit scenario, these two websites do not offer language options on the privacy setting interface. For example, 'medscape.com' offers a selection of language at the beginning of the registration, instead of in the privacy setting (i.e., email notification preference) step. Once the registration process has commenced, users are unable to alter the language selection, and must repeat the registration procedure should they wish to do so.
**Functionality.** The majority of privacy settings only cover one function, and the most prevalent function is 'Cookie'.
Table 10 shows that only two websites cover more than one function. The other six websites provide a single function, constituting four of 'Cookie' and two of 'Email/Notification'. From Figure 18 d), we can see that the top two types of privacy content also are the cookie and email/notification. Each of the other three types of privacy content: advertisement, location, and history, occupies a small proportion of 12.5%.
Figure 18: Usability Attribute Analysis for Privacy Setting in the Registering Scenario
#### Log-in Visit Scenario
**Awareness.** Approximately 90.0% of the account privacy settings are located at the header, which conforms to user account setup conventions.
Figure 19 a) shows that only 2 out of 22 (9.0%) privacy settings appear at other locations rather than the header. For example, 'amboss.com' displays the privacy setting with the 'Account' button in the sidebar, which appears once users log in. Meanwhile, that sidebar can also be triggered by the click of the 3-line menu icon in the header.
**Efficiency.** Based on our findings, over 80.0% (18 out of 22) of privacy settings can be accessed in an efficient way.
From Figure 19 b), we observe that two-click and three-click access accounts for 40.9% each. We consider the access within three clicks (included) efficient because the relationship between the clicks is straightforward. For example, the two-click access usually contains clicks on the header account setting icon and the privacy setting button (on the account setting page). Compared to the two-click access, the three-click access has an additional drop-down list after clicking the header account setting icon.
On the other hand, about one-fifth of privacy settings can not be accessed efficiently. For example, the website 'fitbit.com' needs five clicks to access the privacy setting, including clicks on the header account icon, the 'My Dashboard' option in the menu, the setting icon in the header of the dashboard page, the 'Setting' option in the menu, and the 'Privacy' option in the setting page. The account and setting icons may confuse users during the access process. It is possible for users to perceive that both icons lead to the same page, and consequently, disregard the second icon that links to the privacy setting page.
**Comprehension.** From Figure 19 c), we conclude that only a few websites (4 out of 22) provide multi-language privacy settings. Similar to the other two visit scenarios, most websites do not display the language options on the privacy setting page. The website 'medscape.com' is a unique case since it is possible for users to customize the language setting on the privacy interface during the log-in visit.
**Functionality.** In the log-in visit scenario, the privacy setting tends to cover more functions, and the most prevalent type of content is 'Email/Notification' rather than 'Cookie' (see Figure 19 d)), unlike the other two visit scenarios. In line with the other two visit scenarios, most privacy settings (15 out of 22) contain only one function (see Table 10).
There are seven types of privacy content (i.e., function): cookie, advertisement, location, email/notification, profile/PII, history, and activity. However, similar to the registering and guest visit scenarios, most privacy settings still cover merely one function. The top three functions are email/notification, profile/PII, and activity. Only one website ('doctolib.fr') covers the 'Cookie' function. The website, 'fitbit.com,' offers the widest range of functions and incorporates these functions into a privacy nudge during the registering scenario.
## 6 Discussion
### Designing Privacy Controls
In this section, we discuss how to improve the usability of each privacy control by complying with effective design principles.
**Privacy Nudge and Notice.** From the 'awareness' perspective, the majority of websites (82.0% and 94.0% respectively) lack clear privacy nudges and notices during guest visits. This suggests a need for designers to introduce more obvious privacy nudges or notices to increase users' awareness of privacy information. The use of intrusive display types, such as pop-up windows and banners placed consistently in the middle or top of the screen, is recommended, with full-screen displays occupying the user's entire field of view being the most effective. Hence, we recommend websites use pop-up windows or full-screen new pages to deliver privacy nudges and notices in a conspicuous manner, instead of plainly displaying them as text near sign-up/log-in/subscribe buttons. Moreover, due to the respect for users' freedom of choice, we suggest that privacy nudges should contain "Accept", "Reject", and "Manage" options, as well as an explanation for each option. However, survey results showed that only 10% of websites in guest visit scenarios and none in registering scenarios satisfy this design requirement. Therefore, most websites need improvement in this aspect. From a functionality perspective, most privacy notices only cover one or two types of privacy content, such as cookies and privacy policies, and the functions covered depend on the data the website plans to collect. The fifth clause of 999.305 b) in CCPA states that 'business shall not collect categories of personal information other than those disclosed in the notice at collection.' Based on this design principle and the analysis results (popularity of each function), we
suggest that the privacy notice in guest visit scenarios should contain "Cookie", "Privacy Policy", "Location", and "Advertisement" functions, while the privacy notice in registering scenarios should contain basic configurations like "Email/Notification" and "Profile/PII" functions in addition to the mandatory "Privacy Policy". Specific information like 'History' could also be added to the basic configuration according to a website's service.
**Privacy Policy.** According to the results in Section 5.3.3, most privacy policies are located at the homepage footer in the guest visit scenario and displayed as links in privacy notices/nudges in the registering scenario. The two improvement suggestions for the display of a privacy policy are: firstly, providing a privacy policy link in a privacy nudge rather than a privacy notice that is easily ignored; secondly, displaying a privacy policy with a full-screen new page to attract users' attention to the maximum extent. Our study also suggests leveraging privacy nudge/notice functionality to access the privacy policy with only one click (i.e., provide privacy policy access in the privacy nudge/notice). Over three-quarters of privacy policies do not provide different language versions or a table of content, making it easy for users to get lost in the lengthy text. Therefore, we recommend providing the privacy policy in multiple languages based on the potential user's first language and providing a table of content to help users locate the parts they are concerned about. Moreover, 36 out of 99 privacy policies provided neither privacy setting links nor clear privacy setting guidance. To fulfill the users' right to make decisions, we suggest the provision of customized privacy setting links where users can access all the privacy options with concise explanations (e.g., step-by-step), combined with third-party links with instructions for use. In general, implementing the recommended changes in the design of privacy policies has the potential to improve users' privacy protection and foster greater trust in the website.
**Privacy Setting.** Based on our findings in Section 5.3.4, we provide suggestions to improve the display location, functionality, comprehension, and access efficiency of privacy settings. In the guest visit scenario, the privacy setting should be displayed in multiple formats, both as a button at the footer and as a link in the privacy nudge. To improve access efficiency, directly linking to the privacy setting from the privacy nudge is recommended. For the comprehension attribute, providing privacy settings in alternative languages is the better practice. For the registering scenario, we suggest providing check box content near the sign-up button or a full-screen privacy setting page after clicking the sign-up button. Such a design could ensure that privacy control is available before user information collection. Regarding the comprehension attribute, it is better to provide the privacy setting with multi-language versions or at least allow the privacy setting page to be automatically translated by the browser extension. Moreover, the privacy setting should cover 'email/notification' and 'profile/PII' functions besides 'cookies' and 'location'. In the log-in visit scenario,
Figure 19: Usability Attribute Analysis for Account Privacy Setting in the Log-in Visit Scenario
it is recommended that the privacy setting should cover a full range of functions, including 'cookies', 'advertisement', 'location', 'email/notification', 'profile/PII', 'history', and 'activity'. The location of the privacy setting should be consistent, unified, and easy to find, preferably in the account settings located in the header section.
### Combining Privacy Controls
Effective design of each privacy control is an important aspect of ensuring user privacy, but it is equally essential to understand the proper combination of these controls to achieve the maximum possible benefit for users. The privacy notice and privacy policy give users the right to know, focusing on delivering privacy-related information. On the other hand, the privacy nudge and privacy setting provide users the freedom of choice, which require users' consent or selection. According to the requirement of GDPR and CCPA, the most basic and core rights to users are the right to know and the right to choose (e.g., consent or opt-out). In addition, Art. 9 GDPR states that processing of data concerning health shall be prohibited unless the data subject has given explicit consent to the processing of those personal data. Therefore, health websites have higher requirements for privacy controls. We discuss the combination of privacy controls to deliver these rights to users in different visit scenarios.
**Guest Visit Scenario.** The privacy policy comprehensively describes a website's privacy practices and must be accessible in all visit scenarios. In the guest visit scenario, while users may not leave behind significant browsing traces or personally identifiable information on websites, the CCPA mandates that websites must provide users with prompt notice, either at or before the point of data collection. Consequently, more than just a privacy policy is necessary to fulfill the timely notice requirement. The minimum combination of privacy controls includes the privacy policy and the privacy nudge (intrusive style and contains customized options), or the privacy policy (contains setting options, links, or clear setting guidance) and the privacy nudge/notice (intrusive style). The best combination of privacy controls consists of the privacy policy, the privacy nudge (intrusive type and has customized options or privacy setting link), and the privacy setting.
**Registering Scenario.** Under the CCPA notice requirement, websites shall provide new notice at or before data collection when they plan to collect additional categories of personal information. During the registration process, users are prompted to provide more personal information such as name, phone number, email, and address. Therefore, websites must provide notice during registration, using the same combination of privacy controls as in the guest visit scenario, except that an intrusive privacy nudge/notice is not mandatory for the minimum configuration. Plain text or check box content placed near the sign-up button can also be used, as such an arrangement is clear and easily noticeable by users.
**Log-in Visit Scenario.** In the log-in visit scenario, users should already have been aware of and consented to the privacy practices of the website. Therefore, the minimum configuration can include only the privacy policy, and the best combination would be both the privacy policy and the privacy setting. We observe from Figure 5 that the availability of the privacy nudge, privacy notice, and privacy setting is relatively deficient compared with the privacy policy. We recalculate the availability of the combination of privacy controls in Table 11. The websites that can meet the minimum requirement in the three visit scenarios account for 26.0%, 57.1%, and 69.0%, respectively. The ratios of websites having the best configuration are much lower. In addition, we note that the regulatory compliance of the websites with the minimum configuration is over 80.0%. From this finding, we can infer that regulations such as GDPR and CCPA provide guidance for designing websites, particularly in terms of ensuring compliance with relevant privacy requirements.
In summary, we recommend that website designers implement privacy controls following the best configuration to enhance user privacy, particularly in the guest visit scenario. It is also recommended that countries in the Asia-Pacific region establish or adopt privacy protection regulations similar to GDPR or CCPA to ensure the availability of privacy controls.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Scenario** & **Minimum Configuration** & **Best Configuration** & **Regulatory Compliance** \\ \hline Guest Visit (100 total) & 26 & 13 & 80.8\% (21/26) \\ Registering (42 total) & 24 & 2 & 87.5\% (21/24) \\ Log-in Visit (42 total) & 29 & 21 & 89.7\%(26/29) \\ \hline \hline \end{tabular}
\end{table}
Table 11: The Count for Websites With Different Privacy Control Configuration and Their Regulation Compliance
### Dark Patterns
Dark patterns are increasingly being used to coerce unsuspecting users into accepting less privacy-protective options. During our data gathering process we came across several such practices followed by different websites. One of the most common dark patterns we came across is the cookie option selection window. When cookie choices are listed, by default, all cookies are enabled and the user has to turn them off individually, one by one. Many websites do not provide the option to deselect all cookies with a single button. Another cookie-related dark pattern we observed is the colour usage of the 'accept all' button. Normally we associate the colour green with the more preferable option, and by presenting the 'accept all cookies' button in green, a website can lead users to click it unintentionally without reading the content properly.
A similar by default opt-in behaviour can be seen during the registering scenario too. We identified that several websites have users opt-in to their newsletters and/or email subscriptions during the profile creation process. These options are mostly available in the form of checkboxes and placed closely to the terms and services agreement checkbox. This can confuse the user to think that all of these options should be ticked in order to create a user account resulting in an unwanted privacy setting.
### User Study
Our current survey results are gathered by three researchers independently across three different time periods. This enabled us to capture the dynamic behaviour of privacy controls that are available in these websites. However, it is difficult to draw generalized conclusions about the usability of these websites perceived by the general public with this data. Another factor that should be taken into consideration is the domain knowledge of the researchers that currently gathered the data. Since they can be recognized as domain experts, their views and understandings of the privacy policies could be different from the average user. Therefore, to address these issues, we aim to conduct a user study covering a wide variety of user types. The same template we created will be used to gather data during the user study, with the same set of options being allowed under each question. However, a single user might not be able to evaluate all 100 websites in the survey. Hence, we can divide the 100 websites into groups that can be later combined by aggregating the results from several users to cover all 100 websites. The results of this study would provide us with a more detailed and accurate description of the level of usability realized by the general public.
## 7 Conclusion
We gathered information related to privacy controls of the top 100 most visited health websites using an analysis template designed by us. This template covers all four privacy controls: privacy nudges, privacy notices, privacy policies, and privacy settings. With the implementation of privacy regulations such as GDPR and CCPA, websites are offering more control and transparency over the privacy of the users visiting the website. However, the results from our survey showed that even though there are some forms of privacy controls made readily available for users, most of them lack the usability aspect. The unstructured display of privacy notices and nudges, the absence of clear instructions under privacy settings, and the limited language options provided for privacy policies are a few of the issues we have identified. To this end, we suggested several privacy control designs and combined guidelines to improve the shortcomings present in these websites.
|
2304.03050
|
Intermediate-qudit assisted Improved quantum algorithm for string
matching with an Advanced Decomposition of Fredkin gate
|
The circuit-level implementation of a quantum string-matching algorithm,
which matches a search string (pattern) of length $M$ inside a longer text of
length $N$, has already been demonstrated in the literature to outperform its
classical counterparts in terms of time complexity and space complexity.
Higher-dimensional quantum computing is becoming more and more common as a
result of its powerful storage and processing capabilities. In this article, we
have shown an improved quantum circuit implementation for the string-matching
problem with the help of higher-dimensional intermediate temporary qudits. It
is also shown that with the help of intermediate qudits not only the complexity
of depth can be reduced but also query complexity can be reduced for a quantum
algorithm, for the first time to the best of our knowledge. Our algorithm has
an improved query complexity of $O(\sqrt{N-M+1})$ with overall time complexity
$O\left(\sqrt{N-M+1}\left((\log {(N-M+1)} \log N)+\log (M)\right)\right)$ as
compared to the state-of-the-art work which has a query complexity of
$O(\sqrt{N})$ with overall time complexity $O\left(\sqrt{N}\left((\log
N)^{2}+\log (M)\right)\right)$, while the ancilla count also reduces to
$\frac{N}{2}$ from $\frac{N}{2}+M$. The cost of state-of-the-art quantum
circuit for string-matching problem is colossal due to a huge number of Fredkin
gates and multi-controlled Toffoli gates. We have exhibited an improved gate
cost and depth over the circuit by applying a proposed Fredkin gate
decomposition with intermediate qutrits (3-dimensional qudits or ternary
systems) and already existing logarithmic-depth decomposition of $n$-qubit
Toffoli or multi-controlled Toffoli gate (MCT) with intermediate ququarts
(4-dimensional qudits or quaternary systems). We have also asserted that the
quantum circuit cost is relevant instead of using higher dimensional qudits
through error analysis.
|
Amit Saha, Om Khanna
|
2023-04-06T13:11:07Z
|
http://arxiv.org/abs/2304.03050v1
|
Intermediate-qudit assisted Improved quantum algorithm for string matching with an Advanced Decomposition of Fredkin gate
###### Abstract
String-matching problem has a broad variety of applications due to its pattern-matching ability. The circuit-level implementation of a quantum string-matching algorithm, which matches a search string (pattern) of length \(M\) inside a longer text of length \(N\), has already been demonstrated in the literature to outperform its classical counterparts in terms of time complexity and space complexity. Higher-dimensional quantum computing is becoming more and more common as a result of its powerful storage and processing capabilities. In this article, we have shown an improved quantum circuit implementation for the string-matching problem with the help of higher-dimensional intermediate temporary qudits. It is also shown that with the help of intermediate qudits not only the complexity of depth can be reduced but also query complexity can be reduced for a quantum algorithm, for the first time to the best of our knowledge. Our algorithm has an improved query complexity of \(O(\sqrt{N-M+1})\) with overall time complexity \(O\left(\sqrt{N-M+1}\left(\left(\log\left(N-M+1\right)\log N\right)+\log(M) \right)\right)\) as compared to the state-of-the-art work which has a query complexity of \(O(\sqrt{N})\) with overall time complexity \(O\left(\sqrt{N}\left((\log N)^{2}+\log(M)\right)\right)\), while the ancilla count also reduces to \(\frac{N}{2}\) from \(\frac{N}{2}+M\). The cost of state-of-the-art quantum circuit for string-matching problem is colossal due to a huge number of Fredkin gates and multi-controlled Toffoli gates. We have exhibited an improved gate cost and depth over the circuit by applying a proposed Fredkin gate decomposition with intermediate qutrits (3-dimensional qudits or ternary systems) and already existing logarithmic-depth decomposition of \(n\)-qubit Toffoli or multi-controlled Toffoli gate (MCT) with intermediate ququarts (4-dimensional qudits or quaternary systems). We have also asserted that the quantum circuit cost is relevant instead of using higher dimensional qudits through error analysis.
String matching, Fredkin gate, Intermediate qudits, Quantum algorithm.
## 1 Introduction
Quantum entanglement and superposition are two examples of quantum mechanical phenomena that are used in the idea of quantum computing for an asymptotic advantage [17, 22]. While the fundamental physics of quantum systems is not inherently binary, quantum computation is frequently stated as a two-level binary abstraction of qubits. However, higher dimensional systems can also be used to describe quantum processing. A qubit is expanded to a d-level or d-dimensional structure as a qudit [15, 24]. In this article, an asymptotically improved binary circuit implementation of string-matching problem [10], has been addressed with temporary intermediate qutrits and ququarts by efficient decomposition of Fredkin gate [2]. Since these only exist as intermediary states in a qudit system, where the input and output states are qubits, we can readily create a higher dimensional quantum state for temporary use by adding a distinct energy level [7].
An essential family of algorithms known as "string-matching algorithms" looks for the location of one or more strings (also known as patterns) within a larger string or text. These algorithms are used to discover answers for problems like text mining, pattern recognition, document matching, information security, network intrusion detection, and plagiarism detection. When using exact matching, the pattern is precisely located within the text. The brute force algorithm is the most basic type of algorithm for finding a precise match in the string-matching problem. Let us see how it works with string \(\mathcal{T}\) (to be searched) = ABCDEFGH and pattern \(\mathcal{P}\) (to be matched) = CDEFG, where \(\mathcal{P}\) occurs once in \(\mathcal{T}\): ABCDEFGH. With the brute force method, we merely attempt to match the first character of the pattern with the first character of the text. If we are successful, we move on to the second character and so forth. We move the pattern over one letter and attempt again if we run into a failure point. As a result, this method runs in \(O(NM)\) time. However, the Knuth-Morris-Pratt algorithm, which has a worst-case time complexity of \(\Theta(N+M)\), is the most well-known classical string-matching algorithm [10]. The most popular approximate string-matching algorithm also has a comparable run-time of \(\Theta(N+M)\)[16].
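As a point of reference, a minimal classical sketch of this brute-force matcher (not part of the quantum construction; the function name and printed example are our own illustrative choices) is given below.

```python
def brute_force_match(text, pattern):
    """Naive O(N*M) string matching: slide the pattern over the text and
    compare character by character, restarting on the first mismatch."""
    n, m = len(text), len(pattern)
    positions = []
    for start in range(n - m + 1):           # candidate alignment of the pattern
        for j in range(m):
            if text[start + j] != pattern[j]:
                break                         # failure point: shift by one and retry
        else:
            positions.append(start)           # all M characters matched
    return positions

print(brute_force_match("ABCDEFGH", "CDEFG"))  # -> [2]
```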
Quantum computing can be used to speed up string-matching algorithms. A precise string-matching quantum algorithm with \(\tilde{O}(\sqrt{N}+\sqrt{M})\) query complexity was developed by Ramesh and Vinay [9]. In this method, each check is made using a nested Grover search to determine the location where a section of length \(M\) from \(\mathcal{T}\) matches the pattern \(\mathcal{P}\). However, this work does not create the specific oracles needed, and once we take into consideration the gate-level complexity of getting the text and pattern from a database, the total time complexity, expressed in units of gate depth, is bound to rise. For average-case matching, a different strategy for the dihedral hidden subgroup problem [5] has a time complexity of \(\tilde{O}\left((N/M)^{1/2}2^{O(\sqrt{\log(M)})}\right)\)[13]. The state-of-the-art work [21] presents a string-matching algorithm, based on generalized Grover's amplitude amplification [8], with a time complexity of \(O\left(\sqrt{N}\left((\log N)^{2}+\log(M)\right)\right)\) along with \(\frac{N}{2}+M\) ancilla for arbitrary text length \(N\) and pattern length \(M\leq N\). In this particular paper, we are also using the Grover-based string-matching algorithm to solve the string-matching problem, which achieves time complexity of \(O\left(\sqrt{N-M+1}\left((\log\left(N-M+1\right)\log N\right)+\log(M)\right)\right)\) with \(\frac{N}{2}\) ancilla. We are using a system of intermediate qudits to implement a circuit that provides an asymptotic advantage over the state-of-the-art algorithm.
The main contribution of the article is summarized below:
* We exhibit a first of its kind approach to implement an improved algorithm for the string-matching problem using a novel proposed decomposition of Fredkin gate using intermediate qutrit and multi-controlled Toffoli decomposition with intermediate ququart.
* The proposed approach is superior with respect to time complexity and space complexity, with reduced ancilla qubits as compared to the state-of-the-art approach [21].
* Our approach of solving string-matching problem outperforms the state-of-the-art approach [21] with respect to circuit cost.
This paper has the following format. The background research required to carry out this suggested work is covered in Section 2. The circuit construction for the string-matching algorithm is proposed in Section 3 by decomposing the Fredkin gate using intermediate qutrits. The efficacy of the suggested approach in comparison to the state-of-the-art is analyzed in Section 4. Our findings are summarized in Section 5.
## 2 Preliminaries
### A State-of-the-art Quantum Algorithm for String-matching
The primary objective of string-matching algorithms is to find the location of a specific text pattern (P) within a larger string (S). The practical importance of these algorithms is in a wide variety of applications, from something as simple as searching for a particular word in a word processor to mapping DNA.
In string-matching, we are given a long string S of length N, and our goal is to search for a pattern P of length M contained in the string, such that \(M\leq N\). In Niroula and Nam's state-of-the-art paper [21], they constructed a quantum string-matching algorithm with a time complexity of \(O\left(\sqrt{N}\left((\log N)^{2}+\log(M)\right)\right)\). The steps involved in their algorithm are as follows:
1. It is based on the generalized Grover's amplitude amplification technique. It works by initializing 2 quantum registers to store the bits of the target string of length \(N\) and the pattern of length \(M\). This process is done by using the identity and bit flip gates on 2 quantum registers (\(|t_{0}t_{1}t_{2}\ldots t_{N-1}\rangle\,|p_{0}p_{1}\ldots p_{M-1}\rangle\), where \(t_{i}\) and \(p_{i}\) denote the ith bit of string \(\mathcal{T}\) and pattern \(\mathcal{P}\), respectively).
2. The first register that contains the string \(\mathcal{T}\) is changed into a combination of \(N\) states, each of which is a bit-shifted version of the first register's initial state that has been moved by 0, 1, 2,..., \(N-1\) bits. As a consequence, and presuming that the bit indices are stored in modulo-\(N\) space, \[\left(\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}|t_{0+k}t_{1+k}t_{2+k}\dots t_{N-1+k}\rangle\right)|p_{0}p_{1}\dots p_{M-1}\rangle\] This is done by a cyclic shift operator \(S\) and the decomposition of the cyclic shift operator's circuit is shown in Figure 1.
3. Then, the XOR operation is performed on the first \(M\) bits of the first register and the entire \(M\) bits of the second register to obtain \[\frac{1}{\sqrt{N}}\sum_{k}|t_{0+k}t_{1+k}\dots t_{N-1+k}\rangle\] \[|(p_{0}\oplus t_{0+k})\left(p_{1}\oplus t_{1+k}\right)\dots(p_{M-1 }\oplus t_{M-1+k})\rangle\,.\]
4. If the pattern matches the first \(M\) bits of the shifted text, the second register contains only zeroes. If the text and the pattern differ in \(d\) bit positions, the register holds \(d\) ones.
5. The state in which the second register contains only zeros (for an exact match), or contains fewer than \(D\) ones (in the case of fuzzy search), should then be singled out using the generalized Grover search or amplitude amplification. A classical emulation of the shift-and-compare core of steps 2-4 is sketched below.
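The following minimal Python sketch (our own illustration, not part of [21]) emulates steps 2-4 on classical bit strings: every index \(k\) corresponds to one branch of the superposition, the cyclic shift is a rotation of the text, and an index is marked exactly when the XOR register is all zero; the quantum algorithm then amplifies precisely these marked indices.

```python
def matching_shifts(text, pattern):
    """Classical emulation of the shift-and-compare core of the quantum search:
    for every index k, cyclically shift the text by k and XOR its first M
    symbols against the pattern; k is a match iff the XOR register is all zero."""
    n, m = len(text), len(pattern)
    hits = []
    for k in range(n):                         # the index register superposes these k
        shifted = text[k:] + text[:k]          # action of the cyclic shift operator S
        xor_reg = [0 if shifted[j] == pattern[j] else 1 for j in range(m)]
        if not any(xor_reg):                   # all-zero register <=> exact match at k
            hits.append(k)
    return hits

print(matching_shifts("ABCDEFGH", "CDEFG"))    # -> [2]
```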
A comparison of query and time complexity between our work and other works [9; 13; 21] is given in Table 1. The oracles for [9; 13] offer arbitrary access to text and pattern bits. Because the execution time relies on the random-access oracles, which don't have a circuit-level design in the relevant papers, the time complexity for [9; 13] is unclear. In [21] and our work, such a random-access oracle is not necessary. Instead, for the purposes of our work, an oracle is a Grover oracle that determines whether a register is in an all-zero state, similar to [21]. Our detailed construction for such an oracle is discussed in this paper. We also follow the same algorithmic steps as [21]. However, we design our circuit in such a way that we achieve an asymptotic advantage over [21] with the help of intermediate temporary qudits. Hence we directly compare our time complexity with [21]. The time complexity consists of three different parts: the first is for the query, the next is for the depth of the cyclic shift operator, and the final part is for Grover's search. From Table 1, it can be seen that our proposed approach has an asymptotic advantage for the first two parts, i.e., \(\sqrt{N-M+1}\) and \(\log\left(N-M+1\right)\log N\) as compared to \(\sqrt{N}\) and \((\log N)^{2}\). For the last part, the complexity remains the same for the two approaches, but the ancilla count reduces to 0 from \(M\).
### Toffoli Decomposition via Intermediate Qutrits
Natural access to an infinite range of discrete energy levels is available to quantum processors. Therefore, using three-level qutrits is just an option to add another distinct energy level, but at the expense of allowing more space
Figure 1: Circuit construction of cyclic shift operator [21].
for error. Qutrits can replace the workspace provided by non-data ancilla qubits in typical circuits, allowing us to function more effectively. Qutrits are a 3-level quantum system where we consider the computational basis states \(|0\rangle\), \(|1\rangle\) and \(|2\rangle\). They are manipulated in a similar manner to qubits; however, there are additional ternary CNOT gates which may be performed on qutrits during the Toffoli decomposition. The Toffoli gate is the central building block of several quantum algorithms. Since the Toffoli involves 3-body interactions, it cannot be implemented naturally in real quantum devices. Usually, the Toffoli gate is constructed by decomposing it into single- and two-qubit gates; for example, the standard decomposition requires 6 CNOT gates plus 7 T gates [1], as shown in Fig. 2. Let us look at the decomposition of the Toffoli gate using intermediate qutrits, since in this paper we are using the Toffoli decomposition with an intermediate qutrit:
In [7], the authors demonstrated that we can momentarily occupy the \(|2\rangle\) state during the computation, making the circuit temporarily ternary. This circuit design can be integrated into any current qubit-only circuit because it maintains binary input and output. Fig. 3 depicts a Toffoli realization as seen through qutrits [7]. More precisely, the target qubit (third qubit) must undergo a NOT operation exactly when the two control qubits are both \(|1\rangle\). The first and second qubits are first subjected to a \(|1\rangle\)-controlled \(X_{+1}\), where \(+1\) stands for an increase of \(1\) (mod \(3\)) to the target qubit. If and only if the first and second qubits were both \(|1\rangle\), this raises the second qubit to \(|2\rangle\). The target qubit is then subjected to an \(X\) gate that is controlled by \(|2\rangle\). As anticipated, \(X\) is only performed when the first and second qubits were both \(|1\rangle\). Lastly, a \(|1\rangle\)-controlled \(X_{-1}\) gate cancels the impact of the first gate, returning the controls to their initial states. The main result of this reduction is that the transient information can be stored in the \(|2\rangle\) state of the ternary quantum system instead of in ancilla qubits. Therefore, three generalized ternary CNOT gates with a circuit depth of three are adequate to realize the Toffoli gate; in fact, no T gate is needed.
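A small NumPy sketch (our own illustration; the helper `controlled` and the wire ordering are assumptions, not the paper's notation) can be used to check that the three-gate qutrit circuit just described reproduces the Toffoli truth table on qubit inputs.

```python
import numpy as np
from itertools import product

d = 3  # each wire is simulated as a qutrit (3-level system)

def controlled(control, cval, target, level_map, n=3):
    """Permutation matrix on n qutrits: apply level_map to the target wire
    whenever the control wire is in level cval; wire 0 is the most significant."""
    dim = d ** n
    U = np.zeros((dim, dim))
    for basis in product(range(d), repeat=n):
        new = list(basis)
        if basis[control] == cval:
            new[target] = level_map.get(basis[target], basis[target])
        col = sum(v * d ** (n - 1 - w) for w, v in enumerate(basis))
        row = sum(v * d ** (n - 1 - w) for w, v in enumerate(new))
        U[row, col] = 1.0
    return U

plus1  = {0: 1, 1: 2, 2: 0}       # X_{+1}: add 1 (mod 3)
minus1 = {0: 2, 1: 0, 2: 1}       # X_{-1}: subtract 1 (mod 3)
xgate  = {0: 1, 1: 0}             # qubit X embedded in the qutrit space

G1 = controlled(0, 1, 1, plus1)   # |1>-controlled X_{+1}: q0 controls q1
G2 = controlled(1, 2, 2, xgate)   # |2>-controlled X:      q1 controls q2
G3 = controlled(0, 1, 1, minus1)  # |1>-controlled X_{-1}: restores q1
circuit = G3 @ G2 @ G1            # G1 acts first

for q0, q1, q2 in product(range(2), repeat=3):      # all 8 qubit inputs
    col = q0 * 9 + q1 * 3 + q2
    out = int(np.argmax(circuit[:, col]))
    assert (out // 9, (out // 3) % 3, out % 3) == (q0, q1, q2 ^ (q0 & q1))
print("Three qutrit-controlled gates reproduce the Toffoli truth table.")
```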
This Toffoli decomposition is further used to decompose the Fredkin gate for string-matching, which is thoroughly discussed in the next section. Before that, we have showcased the decomposition of multi-controlled Toffoli gate using ququarts, which is another important fundamental component for Grover's based string-matching.
\begin{table}
\begin{tabular}{l l l} \hline Paper & Query complexity & Time complexity \\ \hline
[9] & \(O\left(\sqrt{N}\log(\sqrt{N/M})\log M+\sqrt{M}(\log M)^{2}\right)\) & - \\
[13] & \(\tilde{O}\left((N/M)^{1/2}2^{O(\sqrt{\log(M)})}\right)\) & - \\
[21] & \(O(\sqrt{N})\) & \(O\left(\sqrt{N}\left((\log N)^{2}+\log(M)\right)\right)\) \\ This work & \(O(\sqrt{N-M+1})\) & \(O\left(\sqrt{N-M+1}\left((\log(N-M+1)\log N)+\log(M)\right)\right)\) \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of our work with prior algorithms discussed [9, 13, 21].
Figure 3: An example of Toffoli decomposition with intermediate qutrit, where input and output are qubits. The red controls activate on \(|1\rangle\) and the blue controls activate on \(|2\rangle\). The first gate temporarily elevates \(q_{1}\) to \(|2\rangle\) if both \(q_{0}\) and \(q_{1}\) were \(|1\rangle\). \(X\) operation is then only performed if \(q_{1}\) is \(|2\rangle\). The final gate acts as a mirror of first gate and restores \(q_{0}\) and \(q_{1}\) to their original state [7]
Figure 2: Qubit-only Toffoli decomposition with Clifford\(+T\) gate-set [1].
### Multi-controlled Toffoli Decomposition via Intermediate Ququarts
In the previous section, we dealt with the construction of a Toffoli gate using a 3-level quantum system i.e., an intermediate qutrit. For the decomposition of \(n\)-qubit Toffoli gate, the resources increase rapidly, requiring \(O(n^{2})\) two-qubit gates in qubit-only systems. However, \(n\)-qubit Toffoli gates can be constructed efficiently using fewer resources than previous qubit-only designs with the help of intermediate qutrits [7]. Similar to this, there is research on the realization of \(n\)-qubit Toffoli with intermediary qudits; see [7, 18, 19, 20, 23]. Since this decomposition [23] is more error-resistant and is the only one that can be scaled up to any finite dimensional quantum system as opposed to [7, 18, 19, 20], we are using it in this paper. It should be mentioned that the circuit cost of decomposition with intermediate qutrits [19] and the decomposition with intermediate ququarts [23] is comparable. Even so, not all quantum hardware supports the used gate-set from [19], and there is no error analysis for this decomposition because the error rates for the used gate-set and ternary systems are not documented in the literature. The gate-set utilised in [19] is also not scalable to any finite dimensional system because it is not generalized to any such system.
As an illustration, a multi-controlled Toffoli gate with 7 control qubits and 1 target qubit is taken into consideration, as shown in Fig. 4(a). Fig. 4(b) shows the realization of the generalized 8-qubit Toffoli gate of Fig. 4(a) following the Gokhale et al. [7] design. Like the present method, their circuit briefly saves information in the qutrit \(\left|2\right\rangle\) state of the controls; however, they decompose each ternary Toffoli into 13 one-qutrit and two-qutrit gates [4][7], rather than saving temporary states in the quaternary \(\left|3\right\rangle\) state. In the method of [23], each ternary Toffoli is instead decomposed into three ternary and/or quaternary CNOT gates using the \(\left|3\right\rangle\) state, as shown in Fig. 4(c), and these three gates can be further reduced to two ternary and/or quaternary CNOT gates by using the identity rule, as shown in Fig. 4(d). As a result, for a single Toffoli decomposition, this optimization can reduce the gate count from 13 to 2, and this method can also be applied to any dimensional quantum system.
This circuit design, as displayed in Fig. 4(c) or 4(d), can be understood as a binary tree of gates. More specifically, the circuit retains a tree structure with qubit inputs and outputs, and it has the characteristic that the intermediate qudit of each sub-tree (and of the root) can only be raised if all of its control leaves were \(\left|1\right\rangle\) (all seven of them, in the case of the root). As a result, the circuit depth, where \(n\) is the total number of controls, is logarithmic in \(n\). Additionally, the overall number of gates is optimized
Figure 4: (a) An 8-qubit Toffoli gate, (b) its decomposition in [7], (c) its decomposition using a few ternary and/or quaternary CNOT gates in [23], and (d) its optimized decomposition in [23].
because each quaternary qudit is acted on by a small constant number of gates (two). The \(n\)-qubit Toffoli decomposition is novel because it uses a maximum of \(2n-3\) generalized CNOT gates (\(n+1\) ternary CNOT gates and \(n-4\) quaternary CNOT gates), which is less than the state-of-the-art, and because of its logarithmic-depth optimization. This decomposition of the multi-controlled Toffoli gate further plays a vital role in reducing the query complexity and ancilla qubits of our proposed string-matching algorithm, which is discussed in the next section.
## 3 Quantum Algorithm for String-matching with Intermediate Qudits
### Our Proposed Fredkin Gate with Intermediate Qutrits
In this section, we show an explicit circuit decomposition of the Fredkin gate using intermediate qutrits. Before that, the state-of-the-art decomposition of a Fredkin gate with 7 CNOT and 7 \(\mathrm{T}\) gates is discussed. Let us start with the circuit of the Fredkin gate:
Using the circuit identity imported from Fig. 2 of [21], we find that the first CNOT gate in the Fredkin-gate circuit and the first two gates of the Toffoli-gate circuit form a subcircuit. Thus, we obtain the state-of-the-art decomposition of the Fredkin gate with the Clifford \(+\mathrm{T}\) gate set:
The proposed Fredkin gate with an intermediate qutrit is shown in Figure 5, where the Toffoli gate is decomposed as per Fig. 3. Only 2 CNOT gates and 3 ternary CNOT gates are required to construct this Fredkin gate; in fact, no \(\mathrm{T}\) gate is required. We use this Fredkin gate further in our proposed string-matching algorithm.
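The truth-table behaviour of such a construction can be checked with a short basis-state simulation. The sketch below is our own illustration: it composes the standard CNOT-Toffoli-CNOT identity for the controlled-SWAP with the intermediate-qutrit Toffoli of Fig. 3, so the exact wire layout of Fig. 5 may differ, but the gate count (2 CNOTs plus 3 ternary controlled gates) matches.

```python
from itertools import product

def ctrl(state, c, cval, t, fn):
    """Apply fn to the level of wire t when wire c holds level cval (basis states only)."""
    s = list(state)
    if s[c] == cval:
        s[t] = fn(s[t])
    return tuple(s)

def qutrit_toffoli(s, c0, c1, t):
    """Toffoli via an intermediate qutrit (Fig. 3): raise c1 to |2> if c0=|1>,
    flip t if c1=|2>, then lower c1 back."""
    s = ctrl(s, c0, 1, c1, lambda x: (x + 1) % 3)
    s = ctrl(s, c1, 2, t, lambda x: 1 - x if x < 2 else x)
    s = ctrl(s, c0, 1, c1, lambda x: (x - 1) % 3)
    return s

def fredkin(s, c, a, b):
    """Controlled-SWAP of wires a, b on control c: CNOT(b->a), qutrit Toffoli(c,a;b), CNOT(b->a)."""
    s = ctrl(s, b, 1, a, lambda x: 1 - x)
    s = qutrit_toffoli(s, c, a, b)
    s = ctrl(s, b, 1, a, lambda x: 1 - x)
    return s

for c, a, b in product(range(2), repeat=3):
    assert fredkin((c, a, b), 0, 1, 2) == ((c, b, a) if c else (c, a, b))
print("2 CNOTs + 3 ternary controlled gates act as a Fredkin gate on qubit inputs.")
```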
### Our Proposed Methodology for String-matching using Grover's Algorithm with Proposed Fredkin Gate
We outline the proposed algorithm's implementation thoroughly in this section. We specifically describe the registers and transformations used to carry out the method. The cyclic shift operator built from the proposed Fredkin gate is one of the key changes in our method compared to [21]. Another key aspect of our algorithm is that, due to the use of the multi-controlled Toffoli decomposition with intermediate qudits, the query complexity is reduced. We present the details of the complete circuit construction for string-matching using Grover's algorithm, which is portrayed in Fig. 6 for better visualization.
Figure 5: Advanced Fredkin gate with intermediate qutrit.
* We also use quantum registers of \(N\) and \(M\) qubits, respectively, to encapsulate a binary string \(\mathcal{T}\) of length \(N\) and a binary pattern \(\mathcal{P}\) of length \(M\) as [21]. To accomplish this, identity and bit-flip gates can be used on a quantum register with an initialization of \(|0\rangle^{\otimes(N+M)}\). The encoded quantum state is as follows \[\left|\mathcal{T}\right\rangle =\left|t_{0}t_{1}\ldots t_{N-1}\right\rangle=\bigotimes_{i=0}^{N- 1}\left|t_{i}\right\rangle\] \[\left|\mathcal{P}\right\rangle =\left|p_{0}p_{1}\ldots p_{M-1}\right\rangle=\bigotimes_{j=0}^{M -1}\left|p_{j}\right\rangle.\]
* We now construct a composite initial state with an index register of \(n=\lceil\log_{2}(N-M+1)\rceil\) qubits in the zero state, \[\left|\psi\right\rangle=\left|0\right\rangle^{\otimes n}\left[\bigotimes_{i=0}^{N-1}\left|t_{i}\right\rangle\right]\left[\bigotimes_{j=0}^{M-1}\left|p_{j}\right\rangle\right]\] where, for ease of use, we assumed \(N-M+1=2^{n}\). The index register is then subjected to an \(n\)-qubit Hadamard transform \(H^{\otimes n}\) (Fourier transform in case of \(N-M+1\neq 2^{n}\) for \(n\in\mathbb{N}\)) to produce a uniform superposition of \(\left|0\right\rangle,\left|1\right\rangle,\ldots\left|N-M\right\rangle\), \[\left(H^{\otimes n}|0\rangle^{\otimes n}\right)\left[\bigotimes_{i=0}^{N-1}\left|t_{i}\right\rangle\right]\left[\bigotimes_{j=0}^{M-1}\left|p_{j}\right\rangle\right]=\left(\frac{1}{\sqrt{N-M+1}}\sum_{k=0}^{N-M}\left|k\right\rangle\right)\left[\bigotimes_{i=0}^{N-1}\left|t_{i}\right\rangle\right]\left[\bigotimes_{j=0}^{M-1}\left|p_{j}\right\rangle\right].\]
* The next step is to use the cyclic shift operator \(\mathcal{S}\), which left-circularly shifts the target state's qubits by \(k\) places. \(k\)'s values are stored in the control state. The outcome of applying \(\mathcal{S}\) to the first two registers is
Figure 6: Complete circuit for proposed string-matching.
\[\left[\mathcal{S}\left(\frac{1}{\sqrt{N-M+1}}\sum_{k=0}^{N-M}|k \rangle\right)\left(\bigotimes_{i=0}^{N-1}|t_{i}\rangle\right)\right]\left( \bigotimes_{j=0}^{M-1}|p_{j}\rangle\right)\] \[=\frac{1}{\sqrt{N-M+1}}\sum_{k=0}^{N-M}|k\rangle\left(\bigotimes_ {i=0}^{N-1}|t_{i+k}\rangle\right)\left(\bigotimes_{j=0}^{M-1}|p_{j}\rangle\right)\]
Here, we provide a short explanation of the circuit design for the \(\mathcal{S}\) cyclic-shift operator. We consider \(k\) in its binary encoded form \(|k_{0}\rangle\,|k_{1}\rangle\ldots|k_{n-1}\rangle\), such that \(2^{0}k_{0}+2^{1}k_{1}+\ldots+2^{n-1}k_{n-1}=k\), to implement the \(k\)-controlled circular shift operator \(S_{k}\). Then, a combination of controlled-shift operators that shifts the target qubits by \(k\) bits while depending on the \(k\)-controlled qubits can be used to execute the circular bitwise shift by \(k\) in the second register. In other terms, a product of controlled shift operations can produce a shift of \(k\) bits. We need the controlled-SWAP (Fredkin) gates to put the circular shift operator into practice. As an instance, a permutation of the form \(P_{r}=\{N-r,N-r+1,N-r+2,\ldots,N-r-1\}\) is applied in modulo \(N\) space by applying a cyclic shift operator \(S_{r}\) by \(r\) bits, where the \(N-r\)th bit is inserted in the zeroth position, the \(N-r+1\)th bit is inserted in the first position, and so on. Any one of these permutations can be realized as a series of transpositions. As a consequence, a cyclic shift operation can be realized as a product of SWAP operations.
The number of SWAP-operation levels required to effectively implement the permutation is now determined. With a register having \(N\) qubits, we can perform \(\frac{N}{2}\) SWAP processes in parallel. We can transfer \(\frac{N}{2}\) qubits to the appropriate locations in a single time step by using the \(\frac{N}{2}\)-parallel SWAP operator. Now we just need to arrange the remaining \(N/2\) bits. The number of qubits that must be swapped drops by half at each succeeding time step. Therefore, using concurrent SWAP operations, we can arbitrarily permute \(N\) qubits in \(O(\log(N))\) time steps. This unitary process is illustrated diagrammatically with an example in Fig. 7. By using concurrent controlled-SWAP operators, shift operators can be implemented in \(O(\log(\mathrm{N}))\) time steps.
Next, we go over how to use the same qubits in the index register to control as many as \(\frac{N}{2}\) parallel SWAP processes. We succeed in doing this at the expense of \(\frac{N}{2}\) clean ancilla qubits. We start by considering an MCT operation, acting on the control qubits in a state \(|k\rangle\) and \(\frac{N}{2}\) clean ancilla qubits initialized to \(|0\rangle\) as targets. This results in \(\frac{N}{2}\) copies of \(|1\rangle\), which can then be used to implement up to \(\frac{N}{2}\) Fredkin gates in a single time step. Once all necessary Fredkin gates have been implemented, we undo the MCT operation and return all
Figure 7: The cyclic-shift operator is shown in this diagram. In this case, we left-circularly shifted an 8-qubit register by one location over the course of three time steps. Generally speaking, this type of procedure can be carried out in depth \(\log(N)\) using parallel SWAP operations, where \(N\) is the size of the qubit register states.
ancilla qubits to \(|0\rangle\) for further operations. The time cost of the MCT operations with intermediate qudits is \(O(\log(N-M+1))\), as an index register of \(\lceil\log(N-M+1)\rceil\) qubits is enough for our string matching, since the cyclic shift operator needs to be performed only \(N-M+1\) times. In [21], an index register of \(N\) qubits is needed, as the logarithmic decomposition of MCT gates is not directly achievable in qubit-only circuits; hence they decompose their cyclic shift operator as shown in Fig. 1. Since \(O(\log N)\) parallel SWAP layers are required for the implementation of the qubit permutation, the overall time complexity of the cyclic shift operator is \(O(\log(N-M+1)\log N)\). (A classical sketch of one possible parallel-SWAP schedule is given after step 5 below.)
4. At this juncture, we look to see if the pattern string kept in the third register matches the cyclically moved text strings in the second register. Each of the first \(M\) bits in the second register and each of the \(M\) bits in the third register are combined using an XOR operation. For instance, the sequences match if the XOR outputs are all zeros. Then, with the use of CNOT gates on a quantum computer, we acquire, \[\frac{1}{\sqrt{N-M+1}}\sum_{k=0}^{N-M}\left|k\right\rangle\text{ CNOT }^{\otimes M}\left[\left(\bigotimes_{i=0}^{N-1}\left|t_{i+k}\right\rangle \right)\left(\bigotimes_{j=0}^{M-1}\left|p_{j}\right\rangle\right)\right]\] \[=\frac{1}{\sqrt{N-M+1}}\sum_{k=0}^{N-M}\left[\left|k\right\rangle \left(\bigotimes_{i=0}^{N-1}\left|t_{i+k}\right\rangle\right)\left(\bigotimes _{j=0}^{M-1}\left|p_{j}\oplus t_{j+k}\right\rangle\right)\right].\] For this purpose, the number of discrepancies between the pattern and the first \(M\) bits of the string register is stored in the final register. In fact, if and only if those two string parts match exactly, it is all zero.
5. Finally, a Grover oracle that works on the pattern register is necessary to finish our algorithm, because it amplifies and helps in the identification of exact matches or near matches. We can obtain this oracle in \(O(\log(M))\) depth using the novel decomposition of the MCT gate with intermediate qudits, without ancilla qubits. For a better understanding, a classical sketch of the shift-and-test building blocks is given below, followed by a worked example of the proposed string-matching algorithm.
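Before the worked example, the following Python sketch (our own illustration, not a gate-level reproduction of Fig. 6) classically emulates two of the building blocks above: a parallel-SWAP schedule for the left-circular shift of step 3, using the standard three-reversal identity (which stays within the \(O(\log N)\) depth bound quoted above; the exact schedule of Fig. 7 may differ), and the all-zero test of step 5, realized on hardware by a layer of X gates followed by an \(M\)-controlled Toffoli on the pattern register.

```python
def reversal_layer(lo, hi):
    """One time step of disjoint SWAPs that reverses wire positions lo..hi."""
    return [(lo + i, hi - i) for i in range((hi - lo + 1) // 2)]

def shift_layers(n, k):
    """Parallel-SWAP schedule for a left-circular shift by k on n wires, via
    rotate_left(k) = reverse(0..k-1); reverse(k..n-1); reverse(0..n-1).
    On the quantum circuit each SWAP becomes a Fredkin gate whose control is
    an ancilla fanned out from the index register by an MCT gate."""
    k %= n
    layers = [reversal_layer(0, k - 1), reversal_layer(k, n - 1), reversal_layer(0, n - 1)]
    return [layer for layer in layers if layer]

def apply_layers(symbols, layers):
    out = list(symbols)
    for layer in layers:
        for i, j in layer:
            out[i], out[j] = out[j], out[i]
    return "".join(out)

def oracle_fires(xor_register):
    """Step 5: X on every qubit turns the all-zero (match) branch into all ones;
    the M-controlled Toffoli then flips the output qubit only on that branch."""
    return all(1 - b for b in xor_register)

text, pattern = "ABCDEFGH", "CDEFG"
for k in range(len(text)):
    shifted = apply_layers(text, shift_layers(len(text), k))
    assert shifted == text[k:] + text[:k]                  # the SWAP schedule is correct
    xor_reg = [0 if shifted[j] == pattern[j] else 1 for j in range(len(pattern))]
    if oracle_fires(xor_reg):
        print("match at shift", k)                         # -> match at shift 2
```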
**Example:** Let us take an example string \(\mathcal{T}\) (to be searched) = ABCDEFGH and pattern \(\mathcal{P}\) (to be matched) = CDEFG. As per Fig. 6, \(N=8\) and \(M=5\). The string and the pattern are stored in the registers \(\left|t\right\rangle\) and \(\left|p\right\rangle\), respectively. As per our proposed algorithm, we need an index register \(\left|k\right\rangle\) of \(\lceil\log(N-M+1)\rceil\), i.e., \(\lceil\log(8-5+1)\rceil=2\), extra qubits for the cyclic shift operator, which are initialized to 0. Next we need \(\frac{N}{2}\) ancilla qubits \(\left|a\right\rangle\) for the parallel Fredkin operations, which are also initialized to 0. Finally, one output qubit for Grover's search is prepared with 1 as input. So the initial quantum state is:
\[\psi_{0}\rightarrow\left|k\right\rangle\otimes\left|t\right\rangle\otimes \left|p\right\rangle\otimes\left|a\right\rangle\otimes\left|o\right\rangle\]
\[\psi_{0}\rightarrow\left|00\right\rangle\otimes\left|ABCDEFGH\right\rangle \otimes\left|CDEFG\right\rangle\otimes\left|0000\right\rangle\otimes\left|1\right\rangle\]
At first, we have to apply Hadamard transformation on first two qubits, hence the quantum state evolves as, \(\psi_{1}\rightarrow\frac{1}{2}(\left|00\right\rangle\otimes\left|ABCDEFGH\right\rangle\otimes\left|CDEFG\right\rangle\otimes\left|0000\right\rangle\otimes\left|1\right\rangle+\left|01\right\rangle\otimes\left|ABCDEFGH\right\rangle\otimes\left|CDEFG\right\rangle\otimes\left|0000\right\rangle\otimes\left|1\right\rangle+\left|10\right\rangle\otimes\left|ABCDEFGH\right\rangle\otimes\left|CDEFG\right\rangle\otimes\left|0000\right\rangle\otimes\left|1\right\rangle+\left|11\right\rangle\otimes\left|ABCDEFGH\right\rangle\otimes\left|CDEFG\right\rangle\otimes\left|0000\right\rangle\otimes\left|1\right\rangle)\)
Now, the cyclic-shift operator comes into action. When the value of the index register \(\left|k\right\rangle\) is \(\left|00\right\rangle\), there is no change in the system. For the value \(\left|01\right\rangle\), the string \(\left|t\right\rangle\) is cyclically shifted by one position. For that, the ancilla register \(\left|a\right\rangle\) is first set to \(\left|1111\right\rangle\) through MCT operations,
\(\psi_{2}\rightarrow\frac{1}{2}(|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|01\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|1111\rangle\otimes|1\rangle+|10\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|11\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle)\)
We now perform parallel Fredkin operations to cyclically shift the string \(\left|t\right\rangle\) by one position for the index value \(\left|01\right\rangle\), as shown in Fig. 7,
\(\psi_{3}\rightarrow\frac{1}{2}(|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|01\rangle\otimes|BCDEFGHA\rangle\otimes|CDEFG\rangle\otimes|1111\rangle\otimes|1\rangle+|10\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|11\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle)\)
We then restore the ancilla qubits to \(\left|0000\right\rangle\) through the inverse operations so that further cyclic-shift operations can be performed for the other index values,
\(\psi_{4}\rightarrow\frac{1}{2}(|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|01\rangle\otimes|BCDEFGHA\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|10\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|11\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle)\)
Similarly, we perform the cyclic-shift operation for the other two index register values, \(\left|10\right\rangle\) and \(\left|11\right\rangle\),
\(\psi_{5}\rightarrow\frac{1}{2}(|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|01\rangle\otimes|BCDEFGHA\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|10\rangle\otimes|CDEFGHAB\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|11\rangle\otimes|DEFGHABC\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle)\)
At this point, we perform the XOR operation using CNOT gates between the first \(M\) bits of the \(|t\rangle\) register and the \(M\) bits of the \(|p\rangle\) register. The \(|p\rangle\) register becomes all zeros for the index value \(|10\rangle\),
\(\psi_{6}\rightarrow\frac{1}{2}(|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|01\rangle\otimes|BCDEFGHA\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|10\rangle\otimes|CDEFGHAB\rangle\otimes|00000\rangle\otimes|0000\rangle\otimes|1\rangle+|11\rangle\otimes|DEFGHABC\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle)\)
We now perform the bit-flip operation through \(X\) gate on \(|p\rangle\) to get all ones,
\(\psi_{7}\rightarrow\frac{1}{2}(|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|01\rangle\otimes|BCDEFGHA\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|10\rangle\otimes|CDEFGHAB\rangle\otimes|11111\rangle\otimes|0000\rangle\otimes|1\rangle+|11\rangle\otimes|DEFGHABC\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle)\)
Next we perform Hadamard operation on output qubit \(|o\rangle\) to perform the Grover's search,
\(\psi_{8}\rightarrow\frac{1}{2\sqrt{2}}[|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|0\rangle+|01\rangle\otimes|BCDEFGHA\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|0\rangle+|10\rangle\otimes|CDEFGHAB\rangle\otimes|11111\rangle\otimes|0000\rangle\otimes|0\rangle+|11\rangle\otimes|DEFGHABC\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|0\rangle-(|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|01\rangle\otimes|BCDEFGHA\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|10\rangle\otimes|CDEFGHAB\rangle\otimes|11111\rangle\otimes|0000\rangle\otimes|1\rangle+|11\rangle\otimes|DEFGHABC\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle)]\)
We now perform the MCT operation between \(|p\rangle\) and \(|o\rangle\) and quantum state evolves as,
\(\psi_{9}\rightarrow\frac{1}{2\sqrt{2}}[|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|0\rangle+|01\rangle\otimes|BCDEFGHA\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|0\rangle+|10\rangle\otimes|CDEFGHAB\rangle\otimes|11111\rangle\otimes|0000\rangle\otimes|1\rangle+|11\rangle\otimes|DEFGHABC\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|0\rangle-(|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|01\rangle\otimes|BCDEFGHA\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|10\rangle\otimes|CDEFGHAB\rangle\otimes|11111\rangle\otimes|0000\rangle\otimes|0\rangle+|11\rangle\otimes|DEFGHABC\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle)]\)
We next perform the mirror operations to get back the quantum register \(|p\rangle\) to its initial state,
\(\psi_{10}\rightarrow\frac{1}{2\sqrt{2}}[|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|0\rangle+|01\rangle\otimes|BCDEFGHA\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|0\rangle+|10\rangle\otimes|CDEFGHAB\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|11\rangle\otimes|DEFGHABC\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|0\rangle-(|00\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|01\rangle\otimes|BCDEFGHA\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle+|10\rangle\otimes|CDEFGHAB\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|0\rangle+|11\rangle\otimes|DEFGHABC\rangle\otimes|CDEFG\rangle\otimes|0000\rangle\otimes|1\rangle)]\)
Finally we perform the Grover's amplitude amplification to obtain the final outcome,
\(\psi_{11}\rightarrow-(|10\rangle\otimes|ABCDEFGH\rangle\otimes|CDEFG\rangle \otimes|0000\rangle\otimes|1\rangle)\)
The final quantum state shows that the pattern '\(CDEFG\)' has a match in the string '\(ABCDEFGH\)'. The pattern starts at the third position of the string: the measured index value is \(10\) in binary, i.e., two in integer, so adding this offset to the first position of the string gives the starting position of the match. We also verify our results through simulation on the QuDiet platform [3].
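The quantum walkthrough above can also be cross-checked classically. The sketch below (plain Python, written for this description and not part of the original algorithm) enumerates the \(N-M+1\) cyclic shifts that the index register superposes over, applies the XOR comparison that the CNOT layer implements, and returns the shift indices the Grover oracle would mark; for the example it returns index 2 (binary 10), i.e., a match starting at the third position.

```python
def marked_indices(text: str, pattern: str):
    """Classically emulate the oracle: index k is marked iff the pattern
    XOR-matches the first M symbols of the k-times cyclically shifted text."""
    n, m = len(text), len(pattern)
    marks = []
    for k in range(n - m + 1):               # values taken by the index register |k>
        shifted = text[k:] + text[:k]        # cyclic shift operator applied k times
        if all(p == t for p, t in zip(pattern, shifted[:m])):   # XOR outputs all zero
            marks.append(k)                  # MCT on |p> would flip the output qubit
    return marks

print(marked_indices("ABCDEFGH", "CDEFG"))   # [2] -> index |10>, third position
# Amplitude amplification then needs O(sqrt(N - M + 1)) Grover iterations.
```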
## 4 Discussion
### Improved Time Complexity
We determine our algorithm's time complexity in this subsection. Strings \(\mathcal{T}\) and \(\mathcal{P}\) require \(O(1)\) time to encode. It also requires \(O(1)\) time to apply the Hadamard or Fourier transform to the index register. The time required by the cyclic-shift operator \(\mathcal{S}\) is \(O\left(\log\left(N-M+1\right)\log(N)\right)\). It takes \(O(1)\) time to evaluate the XOR outcomes using CNOT gates because they can be applied in parallel. Last but not least, the complexity of the Grover oracle is \(O(\log(M))\). The complexity of a single Grover step, which accounts for all the steps considered so far, is then \(O\left(\log\left(N-M+1\right)\log(N)+\log(M)\right)\). The Grover steps must be repeated \(O(\sqrt{N-M+1})\) times in order for the Grover search to be successful. Including these repetitions, the total complexity is \(O\left(\sqrt{N-M+1}\left(\log\left(N-M+1\right)\log(N)+\log(M)\right)\right)\).
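As a rough numerical illustration (not part of the original analysis), one can plug concrete values of \(N\) and \(M\) into the asymptotic expression to compare the dominant depth terms; all constant factors are assumed to be 1 in the sketch below.

```python
import math

def depth_estimate(n: int, m: int) -> float:
    """Asymptotic circuit-depth estimate of the proposed algorithm,
    O(sqrt(N-M+1) * (log(N-M+1) * log(N) + log(M))), with unit constants."""
    shifts = n - m + 1
    per_step = math.log2(shifts) * math.log2(n) + math.log2(m)   # one Grover step
    return math.sqrt(shifts) * per_step                          # Grover repetitions

for n, m in [(2**10, 2**4), (2**20, 2**8)]:
    print(n, m, round(depth_estimate(n, m), 1))
```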
### Improved Space Complexity
We also need \(O(\log(N-M+1))\) qubits for the index register in addition to the \(N\) and \(M\) qubits required to store the search string and the pattern. We require \(\frac{N}{2}\) ancilla qubits in order to implement our cyclic-shift operator in a depth-optimized manner. We do not need any other extra ancilla qubits for our proposed approach. A comparative study of space complexity with [21] is exhibited in Table 2.
### Improved Circuit Cost
One can calculate the gate count in terms of CNOT and \(\mathrm{T}\) gates according to the state-of-the-art circuit. Since \(\mathrm{T}\) gates are widely anticipated to dominate the implementation cost in the fault-tolerant regime, assuming the standard Clifford\(+\mathrm{T}\) gate set, those two gates were chosen as metrics. A key advantage of our proposed algorithm is that no \(\mathrm{T}\) gate is required for string matching. We estimate the gate count in terms of CNOT, ternary CNOT and quaternary CNOT gates. Recall that the \(n\)-qubit Toffoli decomposition has logarithmic depth and uses at most \(n+1\) ternary CNOT gates and \(n-4\) quaternary CNOT gates. A comparative study of circuit cost with [21] is exhibited in Table 3.
The cost of the encoding step is zero because the strings \(\mathcal{T}\) and \(\mathcal{P}\) can be encoded in qubits initially in the \(|0\rangle\) state using only identity and bit-flip (\(X\)) gates. The Hadamard transformation of the index register likewise incurs zero cost in this metric. The stated permutation of size as large as \(N\) can be divided into at most \(N-1\) transpositions, so the cyclic shift operator \(\mathcal{S}\) consists of an MCT gate with depth \(O(\log(N-M+1))\) and at most \(N-1\) Fredkin gates. As per our proposed decomposition, each Fredkin gate costs 2 CNOT gates and 3 ternary CNOT gates. Thus the cyclic shift operator costs at most \((2N-1)O(\log(N))\) CNOT gates, \((N-M+2)O(\log(N-M+1))+(3N-1)O(\log(N))\) ternary CNOT gates, and \((N-M-3)O(\log(N-M+1))\) quaternary CNOT gates. Next, the XOR operation requires \(M\) CNOT gates. The Grover oracle, with the multi-controlled Toffoli decomposition using intermediate qudits, can be implemented with \((M+1)\log(M)\) ternary CNOT gates and \((M-4)\log M\) quaternary CNOT gates without any ancilla. Finally, for amplitude amplification, we need to repeat this \(\sqrt{N-M+1}\) times. The total CNOT, ternary CNOT and quaternary CNOT counts are thus given in Table 3, where the factor of 2 comes from the necessity of applying a unitary to create the state \(|\psi\rangle=U|0\rangle\) and the inverse unitary \(U^{\dagger}\) in order to amplify the amplitude.
### Error Analysis
Any quantum system is susceptible to different types of errors, such as decoherence and noisy gates. For a binary quantum system, the gate error scales as \(2^{2}\) and \(2^{4}\) for 1- and 2-qubit gates respectively [7]. Furthermore, for qubits, the amplitude damping error decays the state \(|1\rangle\) to \(|0\rangle\) with probability \(\lambda_{1}\). For a higher dimensional system, every state in level \(|i\rangle\neq|0\rangle\) has a probability \(\lambda_{1}\) of decaying. In other words, the usage of higher dimensional states penalizes the system with more errors. Nevertheless, the effect of these errors [6] on the used decomposition of the multi-controlled Toffoli gate has been studied by Saha et al. [23], and the decomposition of the Toffoli gate used for the Fredkin gate has been studied by Gokhale et al. [7]. They have demonstrated that even though the use of intermediate qudits results in a rise in error per gate, the total error probability of the decomposition is lower than that of the currently used ones because there are fewer gates and less depth [11]. This interpretation is also applicable to our approach for solving the string-matching problem, since the gate cost and the depth have been reduced as compared to [21]. Hence, we claim that our solution for the string-matching problem with intermediate qudits is superior in terms of error efficiency as compared to [21]. The generalized Toffoli decomposition of [23] that has been used in our proposed circuits for string matching is also efficient with respect to crosstalk errors [14] due to its crosstalk-aware structure. Since the Fredkin gate decomposition is new to this paper, we show the probability of success for the Fredkin gate decomposition using the method of [21] and our proposed method. As shown in Fig. 8, we find that the decomposition in [21] has a considerably higher error rate than the one we propose. This is explained by our decomposition's shallower depth and fewer gates. The advantage of our decomposition lies in the overall substantial reduction in gate count and depth, despite the fact that some ternary and quaternary gates are used, which have a greater error probability due to the plague of dimensionality. Thus, through error analysis, we can conclude that our circuit for string matching remains advantageous even though it uses higher dimensional intermediate qudits.
\begin{table}
\begin{tabular}{l l l} \hline Space complexity & [21] & This work \\ \hline Data qubit & \(N+M+\log N\) & \(N+M+\lceil\log(N-M+1)\rceil\) \\ Ancilla qubit & \(\frac{N}{2}+M\) & \(\frac{N}{2}\) \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of space complexity of our work with state-of-the-art algorithm [21].
\begin{table}
\begin{tabular}{l l l} \hline Circuit cost & [21] & This work \\ \hline \(\mathrm{T}\) & \((8M-17+(N-1)O(\log(N)))\times 2\sqrt{N}\) & \(0\) \\ CNOT & \((7M-12+(8N-9)O(\log(N)))\times 2\sqrt{N}\) & \(((2N-1)O(\log(N))+M)\times 2\sqrt{N-M+1}\) \\ ternary CNOT & \(0\) & \(((N-M+2)O(\log(N-M+1))+(3N-1)O(\log(N))+(M-1)\log(M))\times 2\sqrt{N-M+1}\) \\ quaternary CNOT & \(0\) & \(((N-M-3)O(\log(N-M+1))+(M-4)\log(M))\times 2\sqrt{N-M+1}\) \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of circuit cost of our work with state-of-the-art algorithm [21].
## 5 Conclusion
We have built a quantum string-matching algorithm in this work that allows for a circuit-depth complexity of \(O\left(\sqrt{N-M+1}\left(\log\left(N-M+1\right)\log(N)+\log(M)\right)\right)\). Additionally, we offer a detailed gate-level version of our method, allowing for a precise calculation of the required quantum resources. This circuit for string matching can be designed for any dimensional quantum system since the gates used are generalized, which makes the proposed approach general in nature. The primary use cases of the matching algorithm, such as a quick text search in a big file or spotting patterns in an image, can now be carried out more effectively. The proposed decomposition of the Fredkin gate can be used in other algorithms for their efficient implementation. In fact, the overall findings show great promise for future work on effectively implementing other algorithms in intermediate qudit-assisted quantum computing. Whether the proposed approach will perform with similar efficiency in the fault-tolerant regime [12] can only be answered with the evolution of more scalable qudit-supported quantum hardware. Thus it is left as future work, to be revisited when error correction for qutrits and ququarts becomes feasible.
## Acknowledgments
There is no conflict of interest.
|
2308.12288
|
CHORUS: Learning Canonicalized 3D Human-Object Spatial Relations from
Unbounded Synthesized Images
|
We present a method for teaching machines to understand and model the
underlying spatial common sense of diverse human-object interactions in 3D in a
self-supervised way. This is a challenging task, as there exist specific
manifolds of the interactions that can be considered human-like and natural,
but the human pose and the geometry of objects can vary even for similar
interactions. Such diversity makes the annotating task of 3D interactions
difficult and hard to scale, which limits the potential to reason about that in
a supervised way. One way of learning the 3D spatial relationship between
humans and objects during interaction is by showing multiple 2D images captured
from different viewpoints when humans interact with the same type of objects.
The core idea of our method is to leverage a generative model that produces
high-quality 2D images from an arbitrary text prompt input as an "unbounded"
data generator with effective controllability and view diversity. Despite its
imperfection of the image quality over real images, we demonstrate that the
synthesized images are sufficient to learn the 3D human-object spatial
relations. We present multiple strategies to leverage the synthesized images,
including (1) the first method to leverage a generative image model for 3D
human-object spatial relation learning; (2) a framework to reason about the 3D
spatial relations from inconsistent 2D cues in a self-supervised manner via 3D
occupancy reasoning with pose canonicalization; (3) semantic clustering to
disambiguate different types of interactions with the same object types; and
(4) a novel metric to assess the quality of 3D spatial learning of interaction.
|
Sookwan Han, Hanbyul Joo
|
2023-08-23T17:59:11Z
|
http://arxiv.org/abs/2308.12288v2
|
# CHORUS: Learning Canonicalized 3D Human-Object Spatial Relations from Unbounded Synthesized Images
###### Abstract
We present a method for teaching machines to understand and model the underlying spatial common sense of diverse human-object interactions in 3D in a self-supervised way. This is a challenging task, as there exist specific manifolds of the interactions that can be considered human-like and natural, but the human pose and the geometry of objects can vary even for similar interactions. Such diversity makes the annotating task of 3D interactions difficult and hard to scale, which limits the potential to reason about that in a supervised way. One way of learning the 3D spatial relationship between humans and objects during interaction is by showing multiple 2D images captured from different viewpoints when humans interact with the same type of objects. The core idea of our method is to leverage a generative model that produces high-quality 2D images from an arbitrary text prompt input as an "unbounded" data generator with effective controllability and view diversity. Despite its imperfection of the image quality over real images, we demonstrate that the synthesized images are sufficient to learn the 3D human-object spatial relations. We present multiple strategies to leverage the synthesized images, including (1) the first method to leverage a generative image model for 3D human-object spatial relation learning; (2) a framework to reason about the 3D spatial relations from inconsistent 2D cues in a self-supervised manner via 3D occupancy reasoning with pose canonicalization; (3) semantic clustering to disambiguate different types of interactions with the same object types; and (4) a novel metric to assess the quality of 3D spatial learning of interaction.
## 1 Introduction
Humans interact with objects in specific ways. We wear shoes on our feet, a hat on our heads, and ride a bike by holding handles and putting two feet on the pedals. While this common sense regarding the 3D arrangements of the way we interact with objects is known to us, teaching such things to robots and machines is challenging, requiring numerous variations in diverse human-object interactions in 3D.
As providing 3D supervision by manually annotating various cases is hard to scale, an alternative way of teaching such things is by showing 2D photos from the Internet containing the interaction with the same object from many viewpoints. A text-based image retrieval (e.g., Google Image Search) can be an option to crawl many images from a text description, similar to NEIL [9]. However, apart from the challenges of 3D spatial relation reasoning itself, this approach fundamentally suffers from several obstacles: (1) viewpoint variations are insufficient and hard to control; (2) the number of related images decreases drastically as the compositionality of the prompt increases; and (3) the images are often biased due to commercial websites.
In this paper, we present a novel idea of leveraging a text-conditional generative model [59] as a controllable data producer in synthesizing "unbounded", "multi-view", "diverse" images to learn the 3D human-object spatial relations in a self-supervised manner. Despite its imperfectness in quality, we observe that the synthesized images from generative models are more suitable for our objective, as the generative model effectively links desired semantics of human-object interaction (HOI) described in natural language. Our synthesis-based approach allows better controllability in obtaining images for spatial relation learning, providing more relevant images from diverse viewpoints. See the examples in Fig. 2.
Nonetheless, inferring 3D spatial knowledge in human-object interaction from in-the-wild 2D image collections is still a non-trivial problem due to the inconsistency and "wildness" among synthesized images. In particular, a method should handle the following challenges: (1) semantic variation: given a category, there can be various semantic situations of human-object interaction; thus, the spatial distribution of object may vary significantly as semantics vary; (2) human pose variation: even assuming the same semantic situation, human postures can vary as diverse actions and pose are available in the same situation; (3) intra-class variation: object can exist in various forms (even if same category); and (4) visual variance: when learning from 2D cues, the visual properties (e.g., illumination, camera) may vary for even the same 3D arrangement, making it difficult to localize and extract 2D cues.
To this end, we present a self-supervised method to learn the spatial common sense of diverse human-object interactions in 3D for arbitrary object categories without any human annotations. We present several novel components to achieve the goal, including (1) automatic prompt generation for diverse view and semantic variations via chatGPT [51], (2) outlier filtering strategies, (3) automatic camera view calibrations by using a human as an anchor, (4) accumulating spatial interaction cues from inconsistent multi-view 2D knowledge with varying human pose, object geometry, and (5) clustering for various semantic human-object interaction types. The output of our method can be considered as _possible occupancy distributions_ of the category-specified 3D object relative to the human body in a canonical person-centric space regarding the intra-class object variation. We demonstrate the efficacy of our method on various object categories and human-object interaction types, as shown in Fig. 1 and Fig. 7. As the first in this direction, we also introduce a new metric, namely _Projective Average Precision (PAP)_, to quantify the quality of 3D spatial inference outputs. Our contributions are summarized as follows: (1) the first method to leverage a generative text/image model for 3D human-object spatial relation learning, including automatic prompt generation, outlier filtering, and 3D viewpoint estimation via estimated 3D humans; (2) a framework to reason about the 3D spatial relations from inconsistent 2D cues in a self-supervised manner via 3D occupancy reasoning with pose canonicalization; (3) semantic clustering to disambiguate different types of interactions with the same object types; (4) a novel metric to assess the quality of 3D spatial learning of interaction.
## 2 Related Work
**Text-to-Image Synthesis with Diffusion Models.** Recent approaches in text-to-image synthesis show great performance by leveraging emerging diffusion models [34, 50, 59, 65, 66, 67, 68]. Diffusion models are a class of generative models that "noises" the data from training distribution and learn to "denoise" the perturbed images at arbitrary noise scale. Diffusion models are likelihood-based generative models and are known to show high mode coverage and generate high-fidelity images. One drawback is the slow inference time due to multiple iterations of the denoising process, which can be mitigated by recent approaches [34, 40, 59, 66]. Diffusion models enable text-conditional image generation via utilizing CLIP [56] embeddings [49, 57, 59] or large-language-model encodings [61]. In practice, classifier guidance [15] or classifier-free guidance [26] is applied at inference steps further to enhance the quality and text-coherency in trade-off with diversity.
**Generative Models as Data Producer.** Similar to our approaches, there have been extensive approaches to leverage generative models as data generators. Prior approaches utilize GANs [19, 33, 55] to synthesize data for tasks including classification [2, 8, 43, 72, 75], 3D vision [21, 52, 64, 86], 2D segmentation [44, 87, 74], dense visual alignment [54]; or diffusion models [27, 59] for data augmentation [73] or as synthetic data for few-shot learning [25]. One major challenge of such methods is that generative models output "free" and "uncontrolled" images. Ali _et al._[29] presents a method to generate multi-view images for representation learning by applying transformations on the latent vectors of the generative models, similar to our approach.
**Learning 3D Spatial Arrangement between Human and Object.** Prior work models 3D human-object interaction for each specified interaction type and input. Approaches include inferring hand-object interaction from
Figure 2: **Internet-crawled images vs. synthesized images.** Under the prompt “A person is riding a bicycle, top view”, synthesized images exhibit superior fidelity to the intended “top view” perspective compared to internet-crawled images. The correct viewpoint image is indicated by the green bounding box.
3D [5, 71, 88], 2.5D such as heatmap [4, 6], or provided images [14, 16, 23, 35, 79]. These methods only model hand-object relationships and cannot predict the full body. The methods that model full-body interactions include generating/reconstructing 3D humans from 3D scene constraints [22, 63, 84, 85], capturing interaction from multi-view camera [3, 31, 69], or reconstructing 3D scene from human-scene interaction type [80], or human-human interactions [18]. Also, there exist approaches to model contacts [7, 12, 28, 38, 47, 58]. PHOSA [83] reconstructs 3D human and object from a single image; however, it requires manual labeling for the human-object interaction region, which limits the generalizability of the approach. CHORE [77] additionally models contact via distance between human and object by leveraging implicit surface learning to fit parametric SMPL [41] human and template object mesh. Unlike previous methods, our method does not require any object templates or supervision.
## 3 Method
### Overview
Our method extracts 3D spatial knowledge of human-object interactions (HOI) by modeling object location as an occupancy probability distribution \(\Phi_{o}\) when the category of the object \(o\) is given. The spatial relationship during HOI can vary even for the same object category due to the variation of human state (e.g., pose, body height and body shapes) and types of interactions (e.g., we can ride or hold a surfboard). Thus, we model the 3D spatial probability distribution in pose-conditioned and type-conditioned manners:
\[\Phi_{o}(\mathbf{x}|\theta,s)\in[0,1], \tag{1}\]
where \(\mathbf{x}\in\mathbb{R}^{3}\) is a 3D location of object occurrence in the "pose-deformed" space, \(\theta\) is the 3D state of a person in terms of pose and shape variation, and \(s\) is a type of human-object interaction among possible discrete variations \(\mathbf{S}\). We represent \(\Phi_{o}\) in an explicit voxel space with \(48^{3}\) resolutions, where the probability shows the likelihood that the specific location is occupied by the object \(o\) during interactions. The human state \(\theta\in\mathbb{R}^{24\times 3}\) is parameterized by SMPL [41] pose parameters for 24 joints to represent the condition of 3D human pose variations. \(\mathbf{S}\) represents plausible types of interactions for an object \(o\), which is used for clustering distributions as described in Sec. 3.4. Specifically, \(\mathbf{S}\) considers the semantics described by (1) natural language prompts and (2) contacted parts between humans and objects. For example, holding a surfboard and riding a surfboard should be considered as different interactions, despite the same object category, which is the objective to have \(\mathbf{S}\) as the input for our \(\Phi_{o}\).
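To make the representation concrete, a minimal sketch of the explicit voxel field \(\Phi_{o}\) is given below: a \(48^{3}\) grid of occupancy probabilities defined on a cube around the pelvis-centered human. The grid extent (here \(\pm 1.5\) m) and the class interface are our own illustrative assumptions, not values stated in the paper.

```python
import numpy as np

class OccupancyField:
    """Explicit 48^3 voxel grid storing P(occupied by object o) per cell,
    defined in a pelvis-centered (canonical or pose-deformed) frame."""
    def __init__(self, resolution: int = 48, half_extent: float = 1.5):
        self.res = resolution
        self.half_extent = half_extent                      # assumed cube half-size (meters)
        self.prob = np.zeros((resolution,) * 3, dtype=np.float32)

    def to_index(self, x: np.ndarray) -> np.ndarray:
        """Map 3D points (N, 3) in meters to integer voxel indices."""
        u = (x + self.half_extent) / (2 * self.half_extent)  # normalize to [0, 1]
        return np.clip((u * self.res).astype(int), 0, self.res - 1)

    def query(self, x: np.ndarray) -> np.ndarray:
        """Return occupancy probability Phi_o(x) at 3D points (N, 3)."""
        i, j, k = self.to_index(np.atleast_2d(x)).T
        return self.prob[i, j, k]

field = OccupancyField()
print(field.query(np.array([[0.3, 0.1, -0.2]])))   # 0.0 before aggregation
```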
As a way to learn the 3D spatial distribution from inconsistent 2D image data, our method aggregates the HOI cues in a canonical space:
\[\Phi_{o}^{c}(\mathbf{x}^{c}|s)\in[0,1], \tag{2}\]
where the \(\mathbf{x}^{c}\) is a 3D location at the canonical space. The canonical space is defined by the rest-pose of SMPL, as shown in Fig. 3, where we put zero rotations for most joints except hips; \(\pm\pi/6\) z-axis rotation on the left and right hips, respectively; following the approach proposed by SNARF [10] (we empirically find that it is advantageous to keep a sufficient distance between legs). Note that this canonical distribution \(\Phi_{o}^{c}\) is independent of the 3D human pose state \(\theta\), in contrast to Eq. 1. Our 3D spatial reasoning in the canonical space is inspired by the recently emerging approaches to building animatable 3D avatars [10, 11, 30, 62], where cues from multiple 3D scans or views from different postures are aggregated in the canonical space. Yet, our framework is much simpler without requiring precise linear-blend skinning estimation since our target is reasoning the approximate object locations w.r.t. human rather than high-fidelity 3D human surface estimation. By reasoning in the
Figure 3: **Method overview. Our method starts with generating prompts for human-object relationships within a specific object category. These prompts produce a multitude of images, incorporating HOI semantics. After applying filtering strategies to eliminate outliers, the chosen images are aggregated in canonical 3D space using a canonicalization approach. The resulting distribution can be flexibly adapted to different human postures.**
canonical space, we can handle the inconsistency and variation of synthesized multiview images in HOIs. To this end, our framework provides a warping function to convert back and forth between \(\Phi_{o}^{c}\) and \(\Phi_{o}\), and our learned 3D spatial distribution can be applied to any 3D human postures to guess potential object locations as shown in Fig. 7. As an interesting key idea for generating multi-view image collections for 3D spatial relation learning, we use a SOTA diffusion model [59] that can synthesize realistic images from text prompt inputs. We refer readers to Fig. 3 for an overview of our method, and Supp. Mat. A.1\(\sim\)A.8 for more details on each component.
### Dataset Generation
**Prompt Generation.** We aim to produce various text prompts that describe the diverse semantics of human-object interaction on a target object \(o\), which a text-conditioned diffusion model [59] can then take for image generation. In particular, we want to obtain visual cues on various human-object interactions with the target object \(o\) with many viewpoints for 3D reasoning.
Towards an automatic process, we present a solution by leveraging the ChatGPT [51], an instruction-tuned large language model, to produce a set of plausible prompts automatically. Specifically, we give the input query sentence as shown in Fig. 4, where we can simply replace the object category (e.g., skis as shown in a blue box) for any target objects. As shown, ChatGPT produces various related output prompts. We empirically observe that our solution with ChatGPT is much more efficient with higher quality, compared to the possible alternative way of directly generating prompt sentences via exhaustive combinations of a set of subjects and verbs, which may produce awkward expressions (e.g., "wear a bike"). Furthermore, to encourage similar scenes from diverse viewpoints as much as possible, we also present a strategy to control viewpoints of the synthesized images by augmenting view conditions on each prompt produced from ChatGPT, such as "back view", "side view", and "far view". As shown in Fig. 4, we demonstrate this viewpoint augmentation on prompts is advantageous in producing scenes from diverse views, while it is only partially guaranteed that the outputs follow the instructed viewpoints due to the imperfection of generative models. We generate 3 to 20 prompts per category, where each is augmented with 22 different auxiliary prompts to control viewpoints.
**Synthesizing Text-Conditioned Images via Diffusion.** Given a set of produced prompts regarding a human-object interaction, we link these natural language descriptions to visual entities by synthesizing images via an off-the-shelf latent diffusion model [59], which has great power in high-fidelity image generation with high mode coverage. Examples are shown in Fig. 4. Our strategy enables us to produce an unbounded number of diverse scenes containing the desired human object interaction with the target object category. Notably, the same prompt can be used multiple times by simply changing the initial random latent, resulting in different outputs. We create \(5000\sim 90000\) images per object category, evenly distributed for each given prompt within the category.
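A minimal sketch of the prompt-augmentation and synthesis loop is shown below. The specific view suffixes and the use of a Stable Diffusion pipeline from the `diffusers` library are our own illustrative choices; the paper only states that each ChatGPT prompt is augmented with 22 auxiliary view prompts and fed to an off-the-shelf latent diffusion model.

```python
# Sketch: augment HOI prompts with view conditions and synthesize images.
import torch
from diffusers import StableDiffusionPipeline   # assumed off-the-shelf pipeline

base_prompts = ["A person is riding a bicycle"]          # e.g., produced by ChatGPT
view_suffixes = ["front view", "back view", "side view", "top view", "far view"]

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

images = []
for p in base_prompts:
    for v in view_suffixes:
        prompt = f"{p}, {v}"
        # several samples per prompt; different initial latents give different scenes
        out = pipe(prompt, num_images_per_prompt=4)
        images.extend(out.images)
```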
**Filtering.** Despite the efficacy of using the diffusion model for an image producer, the collected images are still in a wild status, making it challenging to use them directly for 3D spatial HOI learning. Thus, we first apply a series of filtering strategies to retain useful images only. Notably, the unbounded nature of our data generation allows us to use tough thresholds to filter out useless image samples aggressively. We use the following criteria in determining "valid" image samples: (1) the image contains a single human for efficient localization without ambiguity; (2) the image contains the target object; (3) the human and the target object should be close enough with sufficient overlapping in the bounding boxes (with a certain threshold); and (4) the whole torso part of the human should be visible. We use an off-the-shelf object detector [37, 76] for the bounding box detection and instance segmentation. To check the torso's existence, we use a 2D keypoint detector [13, 70, 82] and check whether shoulders and hips are within the image boundary and their detection confidences are above certain thresholds.
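The filtering stage can be sketched as a predicate over the detector and keypoint outputs, as below. The exact thresholds are unspecified in the paper, so the values here are placeholders, and the inputs stand in for outputs of the off-the-shelf detector and 2D keypoint estimator.

```python
def box_intersection_ratio(a, b):
    """Intersection area divided by the smaller box area; boxes are (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    return ix * iy / max(min(area(a), area(b)), 1e-6)

def is_valid_sample(person_boxes, object_boxes, kp_conf,
                    overlap_thr=0.3, kp_thr=0.5):
    """Keep an image only if: exactly one person, the target object is present,
    their boxes overlap sufficiently, and the torso joints are confidently
    detected (the in-image-boundary test is omitted here for brevity)."""
    if len(person_boxes) != 1 or len(object_boxes) == 0:
        return False
    if box_intersection_ratio(person_boxes[0], object_boxes[0]) < overlap_thr:
        return False
    torso = ["left_shoulder", "right_shoulder", "left_hip", "right_hip"]
    return all(kp_conf[j] > kp_thr for j in torso)
```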
### Pose Canonicalized Aggregation
A large collection of images obtained from our approach can provide cues for 3D spatial HOI reasoning. As its first step, we estimate camera viewpoint by using the estimated 3D human orientation as an anchor. Then, we aggregate 2D object mask cues from each view to estimate 3D occupancy distribution, where we apply pose canonicalization to handle pose variations among synthesized images.
**Viewpoint Estimation via 3D Human Pose Estimation.** The orientation of the human body in the scene can provide a clue to estimate the relative camera viewpoint w.r.t the human. For this purpose, we use an off-the-shelf monocular 3D human pose estimator [60], which outputs the 3D global
Figure 4: **Viewpoint augmentation. Examples of prompts produced by ChatGPT and images synthesized via diffusion with viewpoint augmentation.**
orientation of humans along with 3D joint orientations in SMPL [41] parameterization. Specifically, given an image \(\mathbf{I}\) and a human bounding box \(\mathbf{B}\in\mathbb{R}^{4}\), a 3D pose regressor \(\mathcal{F}_{\text{pose3d}}\) outputs:
\[\{\phi,\theta,\beta,\pi,\mathbf{j}\}=\mathcal{F}_{\text{pose3d}}(\mathbf{I}, \mathbf{B}), \tag{3}\]
where \(\phi\in\mathbb{R}^{3}\) is the global orientation of the human defined in the camera-centric coordinate, \(\theta\in\mathbb{R}^{23\times 3}\) is 3D joint angles (excluding the pelvis root) in angle axis, \(\beta\in\mathbb{R}^{10}\) is shape parameters, and \(\pi\in\mathbb{R}^{3}\) is the weak-perspective camera parameters, and \(\mathbf{j}\in\mathbb{R}^{24\times 2}\) is the projected 2D joint locations on image space. We then convert \(\pi\) and \(\phi\) into a perspective camera model \(\Pi\) in the person-centric coordinate system by minimizing the distance between projected SMPL joints via \(\pi\) and \(\Pi\). To this point, all images are "calibrated" in a common 3D space, where the SMPL humans are aligned and centered at the origin (i.e., pelvis is set as origin), as depicted in Fig. 5.
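One way to obtain the perspective camera \(\Pi\) is to optimize its translation so that the perspectively projected SMPL joints match the 2D joints \(\mathbf{j}\) given by the weak-perspective camera, as sketched below. The fixed focal length, image size, and the least-squares setup are our assumptions about this conversion step, not details given in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_camera_translation(joints3d, joints2d, focal=5000.0, img_size=224):
    """Fit a camera translation t so that the perspective projection of the
    pelvis-centered 3D joints (J, 3) matches the 2D joints (J, 2) in pixels."""
    cx = cy = img_size / 2.0

    def project(t):
        p = joints3d + t                      # human centered at origin, camera offset t
        u = focal * p[:, 0] / p[:, 2] + cx
        v = focal * p[:, 1] / p[:, 2] + cy
        return np.stack([u, v], axis=1)

    def residual(t):
        return (project(t) - joints2d).ravel()

    t0 = np.array([0.0, 0.0, 2 * focal / img_size])   # coarse depth initialization
    return least_squares(residual, t0).x
```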
**3D Occupancy Estimation via Human Pose Canonicalization.** We represent spatial 3D HOI reasoning via the occupancy probability field \(\Phi_{o}^{c}\) as defined in Eq. 2. For brevity, we may first consider a holistic distribution \(\Phi_{o}^{c}(\mathbf{x}^{c})\in[0,1]\), which does not consider semantics and returns a marginalized distribution for the most probable HOI. Given the virtually calibrated multi-view images we processed above, we compute \(\Phi_{o}^{c}\) via visual hull reconstruction using the 2D object segmentation masks obtained in the filtering stage. However, as an issue, the 3D human poses may not be consistent across views, as shown in Fig. 5, making it challenging to apply visual hull reconstruction directly.
To handle this issue, we consider a canonical space defined via the rest pose of the SMPL model and compute the conversion between this canonical space and the pose-deformed space corresponding to each view. This process is inspired by the deformable object reconstruction [48] or the animatable human reconstruction pipelines [10, 11, 30, 62], where SMPL body posture provides the guidance to convert between two spaces. Unlike the 3D human modeling approaches that use MLPs for neural skinning [10, 11, 62], our goal is to enable the 3D voting to the corresponding 3D canonical volume space by warping into pose-deformed volume space and checking the means of 2D mask occupancies. Thus, we take a simple strategy to find the mapping between the canonical and posed spaces. Specifically, we define Linear Blending Skinning (LBS) weights for 3D point \(\mathbf{x}^{c}\) in the canonical space (we use \(48^{3}\) voxel grid) by averaging the ones from \(\mathbf{k}\)-nearest neighbor SMPL vertices with inverse distance weights, similar to SelfRecon [30]. That is:
\[\omega(\mathbf{x}^{c})=\frac{\sum\limits_{i\in\mathbf{N_{k}}(\mathbf{x}^{c})}w _{i}\ /\ \|\mathbf{x}^{c}-\mathbf{v}_{i}\|}{\sum\limits_{i\in\mathbf{N_{k}}(\mathbf{x}^ {c})}1\ /\ \|\mathbf{x}^{c}-\mathbf{v}_{i}\|} \tag{4}\]
where \(\mathbf{N_{k}}(\mathbf{x}^{c})\) is a set of \(\mathbf{k}\)-nearest neighbor vertex indices on SMPL mesh, \(w_{i}\in\mathbb{R}^{24}\) and \(\mathbf{v}_{i}\in\mathbb{R}^{3}\) are each associated LBS skinning weights and location of \(i\)-th vertex in SMPL. Intuitively, the 3D space is transformed by following the motion of the closest SMPL vertices. Additionally, assuming the monotonic decrease of the effect of bone transformation as \(\mathbf{x}^{c}\) is far from the SMPL mesh model, we encourage "zero deformations" for the far-away points by decreasing the effect of LBS with skinning weight adjustments by simply applying weighted sum between \(\omega(\mathbf{x}^{c})\) and LBS skinning weights for pelvis \(\mathbf{e}_{1}\in\mathbb{R}^{24}\) which has no effect on deformation. Refer to Supp. Mat. A.5 for more details.
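A direct numpy/scipy rendering of Eq. 4 is sketched below: the skinning weight of a grid point is the inverse-distance-weighted average of the LBS weights of its \(\mathbf{k}\) nearest SMPL vertices. The choice \(k=30\) is a placeholder, and the fall-off toward the rigid pelvis weight for far-away points is omitted here.

```python
import numpy as np
from scipy.spatial import cKDTree

def grid_skinning_weights(grid_points, smpl_vertices, smpl_weights, k=30):
    """grid_points: (P, 3) canonical-space points; smpl_vertices: (V, 3);
    smpl_weights: (V, 24) LBS weights. Returns (P, 24) weights as in Eq. 4."""
    tree = cKDTree(smpl_vertices)
    dist, idx = tree.query(grid_points, k=k)          # (P, k) distances and indices
    inv = 1.0 / np.maximum(dist, 1e-8)                # inverse-distance weighting
    inv /= inv.sum(axis=1, keepdims=True)
    w = (smpl_weights[idx] * inv[..., None]).sum(axis=1)   # (P, 24)
    return w / w.sum(axis=1, keepdims=True)
```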
Given computed LBS weights, the warping from the canonical space \(\mathbf{x}^{c}\) to a pose-deformed space \(\mathbf{x}\) with SMPL pose \(\theta\) can be performed by following the forward-kinematics pipeline of SMPL skeletal hierarchy:
\[\mathbf{x}=\mathcal{W}(\mathbf{x}^{c})=\sum\limits_{j=1}^{n_{b}}\omega_{j}( \mathbf{x}^{c})\cdot\mathbf{B}_{j}(\theta_{j})\cdot\mathbf{x}^{c}, \tag{5}\]
where \(\omega_{j}\) denotes the \(j\)-th LBS weights, \(\mathbf{B}_{j}(\theta_{j})\in\text{SE}(3)\) represent \(j\)-th bone's global 3D transformations.
Finally, the 3D occupancy aggregation can be performed by warping the 3D canonical points into the deformed space of each view and checking the means of the 2D object mask occupancies when projected:
\[\Phi_{o}^{c}(\mathbf{x}^{c})=\frac{\sum\limits_{k=1}^{|\mathbf{G}|}r_{k} \mathcal{M}_{k}(\Pi_{k}(\mathcal{W}(\mathbf{x}^{c})))}{\sum\limits_{k=1}^{| \mathbf{G}|}r_{k}\mathcal{I}_{k}(\Pi_{k}(\mathcal{W}(\mathbf{x}^{c})))} \tag{6}\]
where \(\mathbf{G}\) is the set of generated images, \(r_{k}\) is accumulation score for image \(k\) based on predicted camera distribution, \(\mathcal{M}_{k}\), \(\mathcal{I}_{k}\) are each mask and image operator for \(k\)-th image which returns \(1\) if the provided 2D value lies within mask/image space else \(0\), and \(\Pi_{k}\) is perspective projection for \(k\)-th image view.
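The aggregation of Eq. 6 can be sketched as a loop over the calibrated images: every canonical grid point is warped into each image's pose-deformed space, projected, and its numerator/denominator counters are updated from the 2D object mask. The helper names `warp_to_posed` and `project` are placeholders for the LBS warping of Eq. 5 and the perspective projection \(\Pi_{k}\).

```python
import numpy as np

def aggregate_occupancy(grid_points, images, warp_to_posed, project):
    """grid_points: (P, 3) canonical points. Each image dict carries its SMPL
    pose, camera, binary object mask, and view weight r (as in Eq. 6)."""
    num = np.zeros(len(grid_points))
    den = np.zeros(len(grid_points))
    for img in images:
        x_posed = warp_to_posed(grid_points, img["pose"])         # Eq. 5 warping
        uv = project(x_posed, img["camera"]).astype(int)          # (P, 2) pixels
        h, w = img["mask"].shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(grid_points), dtype=bool)
        hit[inside] = img["mask"][uv[inside, 1], uv[inside, 0]] > 0
        num += img["r"] * hit            # projected point falls on the object mask
        den += img["r"] * inside         # projected point falls inside the image
    return np.where(den > 0, num / den, 0.0)   # Phi_o^c on the grid
```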
**Uniform View Sampling.** Despite our viewpoint augmentation when prompting, the resulting images still may have biases toward specific viewpoints, making 3D reasoning difficult. Thus, we enforce uniform view sampling (similar to
Figure 5: **Human-anchored camera calibration. Predicted humans and calibrated cameras for synthesized images.**
importance sampling) by dividing the azimuth region into a fixed number of bins and setting the accumulation score \(r_{k}\) in Eq. 6 as the inverse of camera numbers in the bin.
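The accumulation score \(r_{k}\) can be computed with a simple azimuth histogram, as in the sketch below; the number of bins is a placeholder.

```python
import numpy as np

def view_weights(camera_positions, n_bins=12):
    """camera_positions: (K, 3) camera centers in the person-centric frame.
    Returns r_k = 1 / (#cameras in the same azimuth bin)."""
    azimuth = np.arctan2(camera_positions[:, 1], camera_positions[:, 0])   # [-pi, pi]
    bins = ((azimuth + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    counts = np.bincount(bins, minlength=n_bins)
    return 1.0 / counts[bins]
```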
**Inference for Posed Space.** At inference, we can transform our occupancy probability distribution \(\Phi_{o}^{c}(\mathbf{x}^{c})\) defined in the canonical space into the pose-deformed space \(\Phi_{o}(\mathbf{x}|\theta)\) by using backward skinning, unlike the forward skinning (refer to Eq. 5) during training. Specifically, we set \(\Phi_{o}(\mathbf{x}|\theta)\) by warping \(\mathbf{x}\) in pose-deformed space to \(\mathbf{x}^{c}\) in canonical space and retrieving the learned occupancy probability \(\Phi_{o}^{c}(\mathbf{x}^{c})\). To this end, we compute the LBS weights in the _pose-deformed_ space (in contrast to training) and apply inverse transformation \(\mathcal{W}^{-1}\).
### Selective Aggregation via Semantic Clustering
The canonical distribution \(\Phi_{o}^{c}(\mathbf{x}^{c})\) from Eq. 6 does not take into account the different semantics. However, the form of human-object interaction can differ even within the same object category, requiring corresponding variations of \(\Phi_{o}^{c}(\mathbf{x}^{c})\), as defined in Eq. 2. To formulate this, we define the interaction type \(s\in\mathbf{S}\) as a pair of prompt \(p\) and body part \(\mathbf{a}\) (optional) in contact with the object:
\[\mathbf{S}=\{(p,\mathbf{a})\mid p\in\mathbf{P},\mathbf{a}\in\mathbf{A}\}, \tag{7}\]
where \(\mathbf{P}\) is the set of entire prompts produced and \(\mathbf{A}\) is the set of body part segments, given as a part of SMPL mesh vertices. Intuitively, a prompt represents a semantic (e.g., "playing with a ball"), and the body segment can further specify the interaction type (e.g., foot\(\rightarrow\)"kicking").
We compute \(\Phi_{o}^{c}(\mathbf{x}^{c}|s)\) for each interaction type \(s=(p,\mathbf{a})\) by aggregating a semantic cluster of images that depict such interactions, selected from \(\mathbf{G}\). Specifically, we utilize images generated from the single prompt \(p\). Body part \(\mathbf{a}\) is used as a proximity cue to retain relevant image samples, where we consider the image irrelevant if the 3D rays from the object mask do not intersect with the interaction region of \(\mathbf{a}\). Our method of "selective aggregation" enhances spatial HOI reasoning for multimodal scenarios.
## 4 Experiments
We provide both quantitative and qualitative comparisons to verify our method. In Sec. 4.2, we present a new metric, namely _Projective Average Precision (PAP)_. We provide a brief explanation of PAP and detailed protocols in Supp. Mat. B.2. In Sec. 4.3, we quantitatively compare our method with various ablations to provide justifications for our design choices. We also compare our method trained on synthesized images with the one trained on internet-crawled images, and show that synthesized images are more suitable for learning 3D HOI. In Sec. 4.4, we show qualitative results. We demonstrate the quality of the learned HOI spatial distribution when various human pose is given for a diverse set of object categories. We then explore the effects of semantic types on the distribution by changing HOI prompts and body part specifications. We also analyze the effects of canonicalization via a comparison with an ablation. In Sec. 4.5, we exemplify an application of our method to the downstream task: _3D Human-Object Reconstruction from a Single-view Image_. Refer to Supp. Mat. B.1\(\sim\)B.2 for more details on each part, and Supp. Mat. C for extensive qualitative results.
### Dataset
**Generated Dataset.** Since our method is a fully-autonomous and self-supervised approach, no external dataset is required for training. We test our method for 19 object categories from COCO [39] (e.g., bicycle, chair) and 5 categories from LVIS [20] (e.g., hat, sweater), where we generate \(5000\sim 90000\) images per category.
**Image Search Dataset.** We provide a quantitative comparison between the results of our method using image search data and using synthesized images in Sec. 4.3. To this end, we use AutoCrawler [81] to collect images for category _motorcycle_ from the internet, using the same prompts used for preparing generated dataset.
**Extended COCO-EFT Dataset for Testing.** To quantitatively evaluate our learned HOI distributions, we use COCO dataset [39], which provides GT 2D object masks. In particular, we use COCO-EFT (val) [32], where pseudo GTs for 3D human poses are provided in SMPL [41] format. Among all samples in COCO-EFT, we perform a filtering process to keep only the samples where human-object interactions happen by retaining the samples with a single human and a single object with sufficient overlaps between them. After the filtering process, we compute perspective camera parameters similar to Sec. 3.3 from the weak-perspective cameras provided in the dataset.
### Projective Average Precision
As no standard metrics exist for this task, we define a new evaluation metric: _Projective Average Precision (PAP)_. Refer to Fig. 6 for an overview of the PAP evaluation protocols.
**Evaluation Protocols.** We utilize the COCO-EFT [32] dataset with pseudo-GT 3D human pose and object masks to compare the projection of estimated 3D distribution with GT object mask. Specifically, we first deform the canonical 3D distribution into pose-deformed space and apply threshold for discretization. We project discretized occupancy values to 2D using the perspective camera and compute _pixel-wise_ precision and recall between ground-truth object mask and projected occupancy. By setting multiple threshold values, we enable quantifying the validity of distribution regarding the intra-class object variation in terms of geometry. Note that we compute precision and recall for evaluation as our method aims to infer the _object distribution_, not to reconstruct the exact geometry of the target. The calculated precision and recall values are then employed to determine the average precision (AP) using two distinct methods (vanilla, strict). Averaging the AP values across all images within the category yields the _Projective Average Precision (PAP)_. Further insights and comprehensive details can be found in Supp. Mat. B.2.
**Human-Occlusion-Aware PAP.** We observe that human masks frequently overlap with objects, potentially causing inaccuracies in evaluating rendered 3D occupancy against partially-removed ground-truth masks as shown in Fig. 5(b). To address this issue, we exclude precision and recall calculations within the 2D region of the human projection, considering the likelihood of occlusions.
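A sketch of the pixel-wise precision/recall computation with the human-occlusion-aware masking is given below; `pred` is the rendered, thresholded occupancy, `gt` the COCO object mask, and `human` the projected human region to be excluded. The function names are our own.

```python
import numpy as np

def masked_precision_recall(pred, gt, human=None, eps=1e-8):
    """pred, gt, human: boolean (H, W) masks. Pixels inside the human
    projection are ignored in the occlusion-aware variant."""
    valid = np.ones_like(gt, dtype=bool) if human is None else ~human
    tp = np.logical_and(pred, gt)[valid].sum()
    precision = tp / (pred[valid].sum() + eps)
    recall = tp / (gt[valid].sum() + eps)
    return precision, recall
```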
### Quantitative Evaluation
**Ablation Studies.** To justify our design choice, we quantitatively compare our method with various ablations. We use mPAP (average of PAP for all categories) metric for comparison, and we report the results in Tab. 2. We list the baselines used for comparison and discussion for each result below. Unless specified, all baselines follow the same implementation details provided in Sec. 3.
* _w/o Keypoint Filtering_: We deactivate keypoint filtering and conduct experiments using an equivalent number of images as in the case with keypoint filtering. We randomly sample from a combination of unfiltered and filtered images. We report a drop in performance when keypoint filters are not applied, which is attributed to erroneous 3D human predictions.
* _w/o Canonicalization_: We compute the HOI distribution without pose canonicalization. Notably, we find that removing canonicalization damages the metrics by a large margin. This is an expected result, as removing canonicalization will disperse the 2D cues and limit the precise aggregation due to pose variation.
* _10%, 1% Trained_: To see the effect of the number of generated data used for training (i.e., training time), we reduce the images by randomly selecting 10% or 1% of filtered images per prompt and run the 3D aggregation. The results imply that using more synthesized images for aggregation is beneficial for the quality of learned distribution, and the potential of our method for scalability.
**Image Search vs. Synthesized Images.** To validate the use of synthesized images, we compare our method's output for _motorcycle_ category across diverse datasets. We opt for _motorcycle_ due to the abundance of accessible and common images, which ensures a fair and rigorous comparison. We assess four settings, altering (1) data source (image search or synthesis) and (2) image count (1k or 10k). To ensure fairness, we randomly select images from the generated dataset, equivalent to the size of image search dataset. We follow the same procedures described in Sec. 3 for filtering and aggregation. We report the dataset's retrieval statistics including post-filtering rejection rate, and camera distribution entropy for filtered images in Tab. 1. For the computation of camera distribution entropy, we fit Gaussian normals with a standard deviation of \(\sigma=0.01\) to each camera's location on the unit sphere, using geodesic metrics.
We observe a higher and increasing rejection rate in the image search dataset compared to the stable, relatively low rate in the generated dataset. We attribute this to the word-occurrence-based search algorithm, which may retrieve
\begin{table}
\begin{tabular}{l c c c c} \hline Method & \# Images & \# Images after Filtering & Rejection rate (\%) & Camera Entropy (hist) \\ \hline Image Search & 1201 & 391 & 67.44 & 10.04 \\ Image Search & 9408 & 2528 & 73.15 & 12.25 \\ Synthesized (Ours) & 1201 & 446 & 62.03 & 12.21 \\ Synthesized (Ours) & 9408 & 3628 & **64.46** & **14.63** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Image Search vs. Synthesized Images: Statistics.**
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Vanilla} & \multicolumn{2}{c}{Human-Occlusion Aware} \\ \cline{2-5} & mPAP\(\uparrow\) & mPAP\({}_{\text{strict}}\uparrow\) & mPAP\(\uparrow\) & mPAP\({}_{\text{strict}}\uparrow\) \\ \hline Ours\({}_{\text{w/o Keypoint Filtering}}\) & 18.31 & 15.90 & 19.97 & 16.52 \\ Ours\({}_{\text{w/o Canonicalization}}\) & 17.52 & 15.12 & 19.56 & 15.85 \\ Ours\({}_{\text{1\% Trained}}\) & 14.98 & 13.81 & 15.21 & 13.73 \\ Ours\({}_{\text{10\% Trained}}\) & 17.90 & 16.13 & 18.86 & 15.82 \\ Ours & **19.86** & **17.18** & **20.28** & **16.80** \\ \hline Image Search\({}_{\text{1k}}\) & 53.64 & 52.15 & 60.08 & 56.86 \\ Image Search\({}_{\text{10k}}\) & 55.97 & 54.59 & 63.62 & 61.11 \\ Synthesized\({}_{\text{1k}}\) (Ours) & 56.33 & 55.11 & 63.91 & 61.85 \\ Synthesized\({}_{\text{10k}}\) (Ours) & **57.14** & **55.19** & **64.82** & **62.11** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Quantitative Evaluation Results.** (top) Results for ablation studies on COCO-EFT categories. (bottom) Results for quantitative comparison between image search and synthesized images on the single category _motorcycle_.
Figure 6: **Overview of evaluation protocols for PAP.**
images without HOI (e.g., commercials). Thus, we argue that the generated dataset offers enhanced scalability for HOI learning. Furthermore, the superior camera distribution entropy in synthesized images implies a richer diversity of camera poses than the image search dataset. Such diversity provides more informative content for 3D aggregation in the generated dataset. While our method addresses imbalanced camera distribution during aggregation through uniform view sampling (Sec. 3.3), results in Tab. 2 consistently demonstrate the synthetic dataset's superior performance across metrics. This reaffirms the effectiveness of our data generation approach for HOI learning, even after minimizing information differences from camera imbalance.
### Qualitative Evaluation
**Various Categories.** We demonstrate that our method can learn spatial human-object relationships for a diverse set of object categories in Fig. 7. We use an SMPL pose and a reference image sampled from the COCO-EFT [32] dataset for COCO [39] categories, and from the generated dataset for LVIS [20] categories, to deform the distribution from canonical space to pose-deformed space. We visualize the semantic cluster that best matches the reference image. Refer to Supp. Mat. C for extensive results.
**Effects of Semantic Type Condition.** We find that our set of learned distributions conveys the human-object interaction effectively. We report examples of object distributions (category: _surfboard_) in canonical and pose-deformed space for different semantic types \(s=(p,\mathbf{a})\) in Fig. 8. Notably, the object distribution of the surfboard for the prompt "A person carries a surfboard" shows the mode near the human torso and arms, which is a plausible location for holding a surfboard when carrying it around. We also report results when the semantic body part \(\mathbf{a}\) varies for the same semantic prompt description. In Fig. 9, we observe that body parts act as an important condition when the provided semantic prompt contains little information on the HOI and is hence ambiguous. For example, the semantics of "A man playing with a sports ball" can differ across sports categories. In Fig. 10, we can see the effects of prompt change when body parts are the same. While the semantic type with the prompt "The cyclist pedaled the bicycle" returns a shape similar to the original bicycle as expected, we observe the donut-shaped distribution in Fig. 10 when we change the prompt to "The commuter standing next to a bicycle". We argue that such a distribution conveys accurate semantics of the spatial arrangement: a human can be oriented in an arbitrary direction while standing next to a bicycle, so the arrangement naturally leads to a rotationally-invariant (e.g., donut-shaped) distribution.
Figure 8: **Effects of semantic type condition on object distribution.** As semantic types vary, the canonical and pose-deformed object distribution changes accordingly.
Figure 7: **Qualitative Results for Various Categories.**
**Effects of Canonicalization.** We ablate the effects of canonicalization and report results in Fig. 11. From the results, we observe that precision is enhanced and variance is reduced when pose canonicalization is applied, as the aggregation of 2D cues becomes more coherent than casting rays directly under pose variance.
### Application
Our method returns generalizable 3D human-object spatial knowledge represented as probability occupancy distribution of the object, which can be easily applied to many downstream tasks. This section provides an example case of application to the following downstream task: _3D Human-Object Reconstruction from a Single-view Image_. Given a single-view image, we apply off-the-shelf monocular 3D human pose estimator [60] to extract SMPL pose and off-the-shelf object detector [37] to find the category of interacting object, where we find the object of interaction by computing bounding box overlap between human and object and picking the object with maximum value. Next, we extract 3D object occupancy distribution by simply inputting the category of the found object as a keyword to our method. Finally, we apply the marching cube algorithm [42] to the object distribution to convert 3D object occupancy into a 3D object mesh. Our method is applicable to in-the-wild images, fully automatic and template agnostic, in contrast to previous works that use template mesh to represent 3D objects [77, 83]. See Fig. 12 for results.
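For the reconstruction application, converting the queried occupancy grid into an object mesh only requires thresholding and marching cubes; a sketch using `skimage` is given below. The threshold level and voxel size are placeholders consistent with the assumed \(\pm 1.5\) m, \(48^{3}\) grid, not values stated in the paper.

```python
import numpy as np
from skimage import measure

def occupancy_to_mesh(prob_grid, level=0.5, voxel_size=3.0 / 48):
    """prob_grid: (48, 48, 48) occupancy probabilities in the pose-deformed frame.
    Returns mesh vertices (in meters) and triangle faces."""
    verts, faces, normals, values = measure.marching_cubes(prob_grid, level=level)
    verts = verts * voxel_size - 1.5          # voxel indices -> pelvis-centered meters
    return verts, faces
```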
## 5 Discussion
Our method introduces a novel approach, enabling machines to probabilistically learn 3D human-object spatial relationships based on pose and HOI types. Unlike prior methods, ours extracts this information through self-supervision, eliminating the need for laborious annotations. We present multiple strategies, including leveraging a generative model for prompt/image synthesis to enhance image control, and a canonicalization-based framework for computing 3D spatial HOI fields from synthesized 2D images. Furthermore, we introduce a novel metric that demonstrates the high quality of our learned HOI distribution. As the work presents a method to extract an intermediate representation, our approach offers numerous potential downstream applications; however, the work is new and thus has limitations, such as low granularity and inaccurate modeling of distributions for small objects. We extensively discuss these limitations and potential future avenues in Supp. Mat. D.
**Acknowledgements. We greatly appreciate Byungjun Kim for valuable insights and comments. This work was supported by SNU-Naver Hyperscale AI Center, New Faculty Startup Fund from Seoul National University, SNU Creative-Pioneering Researchers Program, NRF grant funded by the Korea government (MSIT) (No. 2022R1A2C2092724), and IITP grant funded by the Korea government(MSIT) (No.2022-0-00156 and No.2021-0-01343). H. Joo is the corresponding author.**
Figure 11: **Effects of canonicalization. Removing canonicalization from our approach leads to scattered aggregation of 2D mask occupancies, resulting in blurry outcomes and diminished precision.**
Figure 12: **3D Human-Object Reconstruction from a Single-view Image. Using learned distribution as a prior, we can reconstruct 3D human and object from single image without any template mesh.**
Figure 10: **Effects of prompt specification. For category _bicycle_, object distribution changes significantly as prompts representing the HOI type differs.**
Figure 9: **Effects of body-part specification. For category _sports ball_ and prompt “A man playing with a sports ball”, body parts play a significant role as a condition for further specifying the HOI type.**
|
2306.12724
|
Distribution Network Fault Prediction Utilising Protection Relay
Disturbance Recordings And Machine Learning
|
As society becomes increasingly reliant on electricity, the reliability
requirements for electricity supply continue to rise. In response,
transmission/distribution system operators (T/DSOs) must improve their networks
and operational practices to reduce the number of interruptions and enhance
their fault localization, isolation, and supply restoration processes to
minimize fault duration. This paper proposes a machine learning based fault
prediction method that aims to predict incipient faults, allowing T/DSOs to
take action before the fault occurs and prevent customer outages.
|
Ebrahim Balouji, Karl Bäckström, Viktor Olsson, Petri Hovila, Henry Niveri, Anna Kulmala, Ari Salo
|
2023-06-22T07:55:20Z
|
http://arxiv.org/abs/2306.12724v1
|
Distribution Network Fault Prediction Utilising Protection Relay Disturbance Recordings and Machine Learning
###### Abstract
As society becomes increasingly reliant on electricity, the reliability requirements for electricity supply continue to rise. In response, transmission/distribution system operators (T/DSOs) must improve their networks and operational practices to reduce the number of interruptions and enhance their fault localization, isolation, and supply restoration processes to minimize fault duration. This paper proposes a machine learning-based fault prediction method that aims to predict incipient faults, allowing T/DSOs to take action before the fault occurs and prevent customer outages.
## Introduction
The increasing scope and complexity of electrical power systems across all sectors, including generation, transmission, distribution, and load systems, is resulting in a higher frequency of faults. The most common types of grid faults are partial or complete short circuits of power lines to the ground or among themselves. These faults can lead to significant financial losses and decrease the reliability of the electrical system. Utilities and large industrial plants, which often have extensive power line systems, are prone to faults due to various reasons, including:
* Aging and wear and tear of power lines during operation
* Using power lines that are not suitable for the intended application
* Mechanical failure, such as damage to power lines during installation or subsequent use
* Degradation of power line sheaths and insulation due to e.g., extreme temperatures, chemicals, weather, or abrasion
* Moisture build-up in insulation
* Electrical overloading
* Birds or other animals
* Vegetation that is too close or trees falling on power lines
Early fault prediction can significantly benefit grid operators by enabling them to address potential issues before they lead to failures. This can improve the overall reliability of the grid, resulting in decreased operational costs and reduced revenue loss, as well as ensuring the continuity of power delivery to end users. Before a fault happens, the grid often suffers from precursor symptoms, making it possible to predict faults using appropriate models that detect these symptoms. There are some existing studies on the prediction and analysis of grid faults, which we briefly discuss here. One of the most well-known approaches to predicting power line failures, which has been studied over the past few years, is analysing partial discharges [1-5]. However, this analysis requires expensive measurement tools and is very challenging in noisy situations, which is often the case in large-scale grids. Another drawback of this approach is that it is not feasible for fault prediction in underground and underwater cables. Additionally, not all faults originate from insulation degradation; some arise from other causes, like the ones mentioned above, for which partial discharge-based methods are not suitable.
Other methods for predicting faults have also been investigated in recent years, such as using temperature sensors on power lines [6-9] and monitoring power lines with unmanned aerial vehicles [10-12]. However, these methods have limitations in detecting faults in underground and underwater cables and are also limited in their ability to detect faults originating from some root causes, making it necessary to use multiple approaches in order to cover all potential faults. Furthermore, these existing approaches require the deployment of additional sensors, which can be a costly endeavour as it involves not only the cost of the sensors themselves, but also the establishment of low-latency and high-throughput data stream communication pipelines, processing and storage.
In this paper, we propose a machine learning-based approach that (i) predicts faults in transmission and distribution power lines up to a week before they happen, (ii) is purely data-driven, applying feature engineering methods as a pre-processing step before feeding data to a long short-term memory (LSTM) based deep neural network, and (iii) utilizes only existing measurements of current and voltage without requiring the installation of any additional sensors. A high-level schema of the developed method is shown in Figure 1; each part is explained in detail in the next section.
## Methodology
### Data
The data used for prediction is collected without the need for additional sensors, using the same instrument transformers and I/O connections as the protection and control devices in the example installation. Current and voltage measurements and I/O statuses of primary equipment are shared using IEC 61850-9-2LE and IEC 61850 8-1 protocols with an edge computing device. The edge device runs multiple high-sensitivity protection-related algorithms [13], which trigger the recording of disturbances for use in the prediction.
After the edge device has collected the data from the substation, the prediction is made in a more computationally efficient environment. Disturbance recordings and other relevant data are uploaded to a cloud-based service for monitoring and storage. The data from this central storage is then transferred to a dedicated machine learning service, where the actual prediction is performed. The results of the prediction are then transferred back to the monitoring service for contextual processing. This approach of pre-filtering on the edge device significantly reduces the amount of data transferred to the cloud compared to a setup where all accurate measurements are sent to the cloud, which results in a more efficient use of computational resources.
### Feature Engineering
The waveform recordings are high-dimensional and contain a large amount of information. As a result, feature engineering is necessary before inputting the data into the machine learning model. Each disturbance recording is processed to extract several derived features, such as RMS, impedance, active and reactive power, harmonics, and phase angles. From these signals, several representative scalar values are calculated, such as max, min, standard deviation, in order to include as much information from the disturbance recording as possible with a minimal number of values. This processing results in a set of highly descriptive scalars arranged in a feature vector with approximately 300 values, serving as a lower-dimensional representation of the waveform recording, which can be more easily processed by the machine learning-based prediction model. The feature extraction pipeline is illustrated in Figure 3.
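As an illustrative sketch (not the exact production pipeline), the feature extraction could be organized as follows in Python/NumPy; the sampling rate, fundamental frequency and the particular set of derived quantities and statistics are assumptions, and the real feature vector contains roughly 300 values.

```python
# Illustrative feature-extraction sketch: derive per-phase quantities from a recording
# and summarize each with a few scalar statistics. Sampling rate, fundamental frequency
# and the chosen statistics are assumptions for illustration.
import numpy as np

def extract_features(voltage, current, fs=4000, f0=50):
    """voltage, current: arrays of shape (3, n_samples), one row per phase."""
    features = []
    for v, c in zip(voltage, current):
        rms_v, rms_c = np.sqrt(np.mean(v ** 2)), np.sqrt(np.mean(c ** 2))
        spectrum = np.abs(np.fft.rfft(c)) / len(c)
        freqs = np.fft.rfftfreq(len(c), d=1.0 / fs)
        # crude harmonic content: spectral magnitude near the first few multiples of f0
        harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 6)]
        for signal in (v, c):
            features += [signal.max(), signal.min(), signal.std()]
        features += [rms_v, rms_c, rms_v * rms_c, *harmonics]
    return np.asarray(features)
```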
### Machine Learning
The feature vectors are arranged to create a multidimensional time series with dimensions _(# recordings, # feature values)_. These time series are then used as input to the forecasting model. For a given prediction, the input data is taken from a one-week-long
Figure 1: The diagram provides an overview of the forecasting pipeline. The process begins by collecting anomalous measurements of current and voltage. A set of relevant features are extracted from each measurement, and any irrelevant measurements are filtered out. The filtered sequence of anomalies is then passed through a machine learning prediction model, which generates a warning if a fault is imminent.
Figure 3: The data pre-processing pipeline. Several relevant features are derived from the recorded waveforms. From these features, various scalar values are calculated and arranged in a feature vector to be used as input to the machine learning model.
Figure 2: An example of an anomaly recorded by the protection relay. The recording includes a short snippet of three phases of voltage and current waveforms.
period, meaning that all events happening within a time window stretching from the time of prediction and one week back are included in the input. The task is illustrated in figure 4; when making a prediction for a different time, the input time window is moved.
The target when training the model is a binary signal that at each time is one if there is a fault occurring within a predefined time period after that time, and zero otherwise. Specifically, this time period is also one week. This means that the model's output, which is a number between zero and one, can be interpreted as the probability of a fault occurring within one week. Thus, the model warns about faults up to one week before they happen.
Our forecasting model is a neural network that consists of three main components: a filtering component, an LSTM layer, and a classification head. The filtering component is a fully connected neural network that classifies each recording in the input time series as relevant or not relevant to the forecasting process. We manually labeled a subset of the recordings as either containing an actual anomaly, or a normal state that can be recorded if the recording device sensitivity is high and which is not very relevant to the forecasting. The filtering component is then trained on these labeled samples to learn how to distinguish between relevant and irrelevant recordings. Only relevant recordings are passed on to the next stage of processing. The LSTM (Long Short-Term Memory) layer is a type of recurrent neural network that is well-suited for handling irregularly sampled time series data due to its ability to selectively remember or forget certain pieces of information from the input sequence [14]. It processes the input time series and generates a single feature vector. The output is then processed through a classification head consisting of two fully connected layers and projected with a sigmoid activation function to a value between 0 and 1, representing the probability of a fault occurring within the next week. The model is visualized in figure 5.
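A minimal PyTorch sketch of this architecture is shown below; the layer widths and the hard relevance threshold are illustrative assumptions, and in practice the filtering component is trained separately on the manually labeled recordings before being used to mask the input sequence.

```python
# Minimal PyTorch sketch of the forecasting model: a per-recording relevance filter,
# an LSTM over the retained recordings, and a two-layer classification head with a
# sigmoid output. Layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class FaultForecaster(nn.Module):
    def __init__(self, n_features=300, hidden=64):
        super().__init__()
        self.filter = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                                   # x: (batch, n_recordings, n_features)
        relevant = (torch.sigmoid(self.filter(x)) > 0.5).float()
        x = x * relevant                                    # zero out recordings deemed irrelevant
        _, (h_n, _) = self.lstm(x)                          # h_n: (1, batch, hidden)
        return torch.sigmoid(self.head(h_n[-1]))            # P(fault within one week)
```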
If data from multiple connected locations is available, the network can utilize that information by separately processing a time series for each location and concatenating the outputs from the LSTM layer before jointly passing them to the classification head. This allows the model to separately process each location, while at the same time selectively pay attention to relevant information in each location.
We apply various augmentations to the data before it is used for training, i.e., we add transformed versions of the data to our data set in order to increase its variability. One significant augmentation is shuffling the input data such that the three phases change order, e.g., treating phase one as phase two, etc. This effectively increases the number of samples by six times when all permutations are used, which is crucial as it addresses the limitation of having a small amount of data.
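A minimal sketch of the phase-permutation augmentation, assuming each quantity is stored as an array of shape (3, n_samples), could look as follows.

```python
# Minimal sketch of the phase-permutation augmentation: every ordering of the three
# phases is added as an extra sample, giving a six-fold increase in data.
from itertools import permutations
import numpy as np

def augment_phases(voltage, current):
    voltage, current = np.asarray(voltage), np.asarray(current)
    return [(voltage[list(p)], current[list(p)]) for p in permutations(range(3))]
```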
## Results and Discussion
### Evaluation methodology
The results shown in this section are based on whether the output of our model is larger or smaller than 0.5 and whether the corresponding target value is zero or one, i.e., whether there will be a fault within one week. Predictions are made once each hour. To evaluate the model, we split the data into a training set and a test set, with the latter used to assess the model's ability to generalize to unseen data. We also used 5-fold cross-validation, which involves partitioning the data into five equal-sized sets, training the model on four sets, and evaluating its performance on the remaining set. This process was repeated for each of the five folds, and the average performance across all iterations was taken as the final result.
### Hyperparameter tuning
We present a limited hyperparameter tuning experiment to investigate the effect of learning rate and learning rate decay on the performance of the forecasting model. The learning rate is a crucial parameter that controls the speed at which the weights of the neural network are updated during training, while learning rate decay is a technique that gradually reduces the learning rate over time as the network approaches convergence.
Figure 4: The input to the prediction machine learning model is a sequence of feature vectors, each corresponding to a disturbance recording within a time window before the time of prediction. In this illustration, the data corresponding to the leftmost recording would not be included in the input, while the next three would. The target is a binary signal describing whether or not there will be a fault within a prediction horizon from the time of prediction. In this figure, the target would be one, since a fault occurs shortly after the time of prediction.
Figure 5: An overview of the architecture of the forecasting machine learning model. The data is arranged in a time series, where each time step corresponds to a data recording. The data is first filtered to remove irrelevant recordings. Then the time series is processed by an LSTM and passed to a classification head, giving a probability of an imminent fault as output.
To evaluate the performance of our model, we utilize the standard metrics of recall and specificity. Recall measures the proportion of positive samples that the model correctly identifies, or in other words, the percentage of times within a week from a fault that the model issues a warning. Specificity, on the other hand, reflects the proportion of negative samples that the model accurately predicts, which translates to the frequency with which the model does not wrongly predict a fault when there is none. Specificity has top priority in this task to avoid unnecessary costs from false positives.
Our results, presented in Table I, show that the best recall, namely 0.6694, was achieved with a learning rate of 0.00003 and a learning rate decay of 0.05. On the other hand, the highest specificity, 0.9127, was obtained with a learning rate of 0.0003 and a learning rate decay of 0.05. These findings suggest that the choice of learning rate and learning rate decay can have a significant impact on the performance of the model, and careful tuning of these hyperparameters may be necessary to achieve the best results.
The output of the forecasting model for a short time period is visualized in figure 6, together with two faults and a week-long period before each fault. The model output (blue line) is rising before the faults and dropping down to a lower level after the faults.
### Comparison to baseline
To the best of our knowledge, there have been very few published attempts to predict faults in the power grid based on disturbance recordings. A related approach is [15], which uses weather data to predict faults. In fact, there are few methods available for predicting discrete events based on time series data in general. One related approach is [16], which aims to predict time series _during_ extreme events, but this is still significantly different from our task.
Since there are few existing methods for solving our problem, and since our data is unique, there are no baselines for comparison. Therefore, we developed a simple baseline model for predicting faults based on the frequency of recorded anomalies. The model counts the number of recordings during a period of one week, filtered using the first component of our proposed model, and issues a warning if the count exceeds a certain threshold. Using this baseline method and adjusting the threshold to achieve a specificity of 0.9127, we obtained a recall of 0.0889. In comparison, our proposed method achieved a recall of 0.4367 at the same specificity. Adjusting the threshold to instead achieve a recall of 0.6694 resulted in a specificity of 0.3640, which is also significantly lower than that of our proposed method.
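For clarity, a small sketch of the counting baseline and of the recall/specificity computation used in this comparison is given below; variable names and the threshold sweep are assumptions for illustration.

```python
# Illustrative sketch of the evaluation metrics and the counting baseline. `counts`
# holds, per prediction time, the number of relevant recordings in the preceding week;
# `targets` is the binary fault-within-a-week signal.
import numpy as np

def recall_specificity(pred, target):
    pred, target = np.asarray(pred), np.asarray(target)
    recall = np.sum((pred == 1) & (target == 1)) / max(np.sum(target == 1), 1)
    specificity = np.sum((pred == 0) & (target == 0)) / max(np.sum(target == 0), 1)
    return recall, specificity

def baseline_predict(counts, threshold):
    return (np.asarray(counts) > threshold).astype(int)

# e.g. sweep thresholds and keep the one whose specificity matches the target value:
# recall, spec = recall_specificity(baseline_predict(counts, t), targets)
```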
### Discussion
**Feature engineering**
Our feature engineering serves two important purposes. First, it highlights the parts of the recorded anomalies that are most relevant for the task. Without first extracting key properties from the waveforms, each data point would be very high-dimensional, making it more difficult for the model to learn. In order to use the raw waveforms as input, without any pre-processing, a more complex model would be required, which likely means that more data would be needed for the model to train reliably. While it is possible that this could yield good results, in most practical situations, data is scarce, making data efficiency critical. It is worth noting that overdoing feature engineering can result in the loss of essential information, as seen in the poor performance of the baseline model based on the frequency of recorded anomalies. Therefore, careful development of appropriate feature engineering is crucial.
The second reason is that, without significant down sampling of the input data through our feature extraction process, the model would become more computationally expensive, requiring powerful hardware to train. One key advantage of our approach is that it is designed to be efficient enough to run on a standard laptop CPU, something that would not be possible if the model were more complex. This makes it possible to use the model in a wide range of computing environments without needing specialized hardware.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Learning rate & Learning rate decay & Test recall & Test specificity \\ \hline
0.00003 & 0.01 & 0.5316 & 0.8517 \\ \hline
0.00003 & 0.05 & **0.6694** & 0.7495 \\ \hline
0.0001 & 0.01 & 0.4921 & 0.9088 \\ \hline
0.0001 & 0.05 & 0.5550 & 0.8616 \\ \hline
0.0003 & 0.01 & 0.4607 & 0.9102 \\ \hline
0.0003 & 0.05 & 0.4367 & **0.9127** \\ \hline
0.001 & 0.01 & 0.4638 & 0.8955 \\ \hline
0.001 & 0.05 & 0.5111 & 0.8918 \\ \hline \end{tabular}
\end{table}
Table I: The results from hyperparameter tuning
Figure 6: Visualization of a period of time with two faults, along with a one-week period before each fault, as well as the model output for the period. The predicted risk (model output) exhibits a significant increase before each of the two faults occur, enabling an alarm to be raised.
**Importance of filtering**
As with the feature engineering, the filtering of recordings is important in order to not include unnecessary noise when making predictions, but instead focus the attention of the model on the most relevant information. With a more complex model, that is better able to ignore certain information and put extra focus on other parts, the filtering would not be needed. However, for the same reasons as noted previously, this would come at the cost of a more computationally expensive model, which in turn would increase the need for data.
## Conclusions
In this paper, we proposed a novel deep learning framework for predicting future faults in the power grid. We showed that our LSTM-based model is able to successfully predict faults based only on disturbance recordings. We also observed that tuning the parameters and hyperparameters, such as the features used for creating sequences or the learning rate of the optimization algorithm, can affect the prediction accuracy. Thus, for an accurate predictor model, four essential steps must be followed: suitable feature selection, feature engineering, innovative selection and structuring of neural networks, and hyperparameter optimization of the machine learning methods.
|
2307.05171
|
Enriching Verbal Feedback from Usability Testing: Automatic Linking of
Thinking-Aloud Recordings and Stimulus using Eye Tracking and Mouse Data
|
The think aloud method is an important and commonly used tool for usability
optimization. However, analyzing think aloud data could be time consuming. In
this paper, we put forth an automatic analysis of verbal protocols and test the
link between spoken feedback and the stimulus using eye tracking and mouse
tracking. The gained data - user feedback linked to a specific area of the
stimulus - could be used to let an expert review the feedback on specific web
page elements or to visualize on which parts of the web page the feedback was
given. Specifically, we test if participants fixate on or point with the mouse
to the content of the webpage that they are verbalizing. During the testing,
participants were shown three websites and asked to verbally give their
opinion. The verbal responses, along with the eye and cursor movements were
recorded. We compared the hit rate, defined as the percentage of verbally
mentioned areas of interest (AOIs) that were fixated with gaze or pointed to
with the mouse. The results revealed a significantly higher hit rate for the
gaze compared to the mouse data. Further investigation revealed that, while the
mouse was mostly used passively to scroll, the gaze was often directed towards
relevant AOIs, thus establishing a strong association between spoken words and
stimuli. Therefore, eye tracking data possibly provides more detailed
information and more valuable insights about the verbalizations compared to the
mouse data.
|
Supriya Murali, Tina Walber, Christoph Schaefer, Sezen Lim
|
2023-07-11T11:05:17Z
|
http://arxiv.org/abs/2307.05171v1
|
Enriching Verbal Feedback from Usability Testing: Automatic Linking of Thinking-Aloud Recordings and Stimulus using Eye Tracking and Mouse Data
###### Abstract
The think aloud method is an important and commonly used tool for usability optimization. However, analyzing think aloud data could be time consuming. In this paper, we put forth an automatic analysis of verbal protocols and test the link between spoken feedback and the stimulus using eye tracking and mouse tracking. The gained data - user feedback linked to a specific area of the stimulus - could be used to let an expert review the feedback on specific web page elements or to visualize on which parts of the web page the feedback was given. Specifically, we test if participants fixate on or point with the mouse to the content of the webpage that they are verbalizing. During the testing, participants were shown three websites and asked to verbally give their opinion. The verbal responses, along with the eye and cursor movements were recorded. We compared the hit rate, defined as the percentage of verbally mentioned areas of interest (AOIs) that were fixated with gaze or pointed to with the mouse. The results revealed a significantly higher hit rate for the gaze compared to the mouse data. Further investigation revealed that, while the mouse was mostly used passively to scroll, the gaze was often directed towards relevant AOIs, thus establishing a strong association between spoken words and stimuli. Therefore, eye tracking data possibly provides more detailed information and more valuable insights about the verbalizations compared to the mouse data.
Eye tracking, Usability testing, Mouse tracking, Think aloud, Verbalization
## 1 Introduction
Think aloud protocol is an important usability tool wherein participants verbalize their thoughts as they perform a usability task. It provides rich data to gain insights into an individual's thought process and attention. However, the analysis of the data can be quite time consuming, especially if it has been recorded, for instance, in an unmoderated remote test. Therefore, an automatic analysis would support usability experts and would, as a consequence, lead to a further spread of usability optimization. In this work, we investigate if a correlation between the stimulus, here a web page, and verbalizations could be established by using interaction data from the computer mouse and eye tracking.
Both eye and cursor movements could provide insights into cognitive processes and hence, could represent the correlation between the stimulus and think aloud data. However, one challenge in eye tracking is that it can be expensive and remote testing is more difficult. Mouse tracking, on the other hand, being easily available, nevertheless, shows a lot of variability among individuals. Hence, in our study, we put forth an automatic transcription of think-aloud data and an association of verbal content to specific areas of the web pages used as stimuli, based on the one hand on
mouse movements, and on the other hand, on gaze data. We found a correlation between verbal protocols and the stimulus, but importantly, gaze-based correlations were significantly stronger.
## 2 Literature review
While previous research has shown an association between eye movements and verbalizations, very few studies, have looked at it from a usability testing perspective. For instance, a consumer behavior study [14] found that when participants compared different products, they exhibited alternating fixation patterns between the choices, and that 83% of such alterations were accompanied by explicit statements of comparison. In fact, according to Cooke and Cuddihy [4] eye data and verbalizations match about 80% of the time. Although this does not necessarily allude to a temporal synchrony, Elling and colleagues [5] found that a verbalization might occur before or after fixation on a relevant region. It has also been suggested that the combined use of eye tracking and verbal protocols could provide additional insights into cognitive processes [15].
As mentioned earlier, eye tracking may not be the most feasible method. Although this has changed with webcam tracking [11, 12], an alternative method which is easily available and remote-friendly is mouse tracking. Studies have found that both eye and mouse movements represent visual attention and are determined by the position and relevance of an object on a search page [12]. Some have even suggested that mouse tracking can be used as a substitute for eye tracking, since the cursor position can be used to predict gaze. For example, one study showed that the cursor and gaze are present in the same area of the visual field at least 75% of the time [2]. A few studies have also alluded to the fact that cursor movements are related to verbalizations [3, 7, 9]. However, one issue when looking at cursor movements, as pointed out by Navalpakkam et al. [10], is that there is wide variability in mouse behavior between subjects. For instance, while some users point with the cursor, others keep it idle or use it just to scroll.
## 3 Method
### Technical setup
For the recording of user behavior, Eyevido Lab, a software for conducting web-based eye tracking analyses was utilized. The study was conducted within a dedicated user research lab, using a 21.5" monitor with a display resolution of 1920x1080 pixels. Data was recorded using a wireless mouse, a compact on-camera microphone with a sampling rate of 48 kHz and a Tobii 4C Pro infrared eye tracking device with a sampling rate of 90 Hz. Each gaze-, mouse- and voice-value has a global timestamp with millisecond accuracy to put them in temporal context.
### Participants
Participants were recruited through a targeted mailing list that individuals had voluntarily subscribed to. A total of ten individuals between 25 - 35 years (Mean = 28.5), including four males, five females, and one non-binary person, participated in the study. All participants provided written consent for data processing and accepted the terms and conditions.
### Procedure
Prior to the study, participants were briefed on the functionality of an eye tracker and were informed that the subsequent web pages were presented as scrollable screenshots and did not have interactive features. The stimuli consisted of three distinct screenshots extracted from the official websites of German cities or municipalities, namely Berchtesgaden,
Bielefeld and Koblenz (Figure 1). The selection of these websites was based on the visually engaging presentation of content and design, encompassing topics such as nearby tourist attractions, current local news, and cultural events. Each topic is complemented by relevant images displayed in various layouts, resulting in visually diverse sections across the websites. The captured screenshots accurately replicated the scrollable layout, display size, and appearance of the actual websites.
Prior to viewing each webpage, participants were instructed to share their thoughts and opinions aloud. To encourage this process, written prompts were provided, including the questions, "What aspects do you find negative?", "What stands out to you as positive?", and "What elements do you find confusing?". These prompts remained consistent for all three websites. Participants then took between 00:50 and 03:04 minutes (mean = 01:31 minutes) while they verbally expressed any thoughts or observations they had regarding the prompted questions or any other aspects on each web page.
In certain instances, some participants sought clarification while viewing a webpage by asking questions such as "Should I begin now?". In response, the study supervisor instructed them to commence expressing their opinions and to highlight anything that caught their attention. Note that the study was conducted in German, but examples in this paper have been translated to English for convenience.
### Marking of AOIs on the stimulus
All relevant AOIs for each page were drawn manually by an expert. They mark general blocks such as the navigation menu, a text section or an image. On average, 42 AOIs were drawn per web page. AOIs could also have been calculated algorithmically, for example, by image processing or analysis of the web page structure. But, at this point, we chose a manual method to avoid any inaccuracies as a result of using an algorithmic method.
### Transcription and diarization
Using a combination of two AIs from huggingface [6], we analyzed the audio track of the recorded sessions. With the first AI, "whisper" (medium) from OpenAI [13], the spoken words in the screencast were transcribed and time-stamped. A second AI, "speaker-diarization" from pyannote [1], was then able to identify and separate the interviewer's and the subject's spoken words. This pipeline consumes few resources (less than 1h on 6 CPUs for the entire dataset) and produces near-perfect results for German voice recordings. The transcripts did not require any manual post-processing.
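A hedged outline of such a pipeline, assuming the openai-whisper and pyannote.audio packages, is sketched below; model identifiers, access tokens and return formats vary between package versions, and the speaker label assigned to the tester is an assumption, so this should be read as an outline rather than a drop-in script.

```python
# Hedged outline of the transcription + diarization pipeline; exact APIs and the
# tester's speaker label ("SPEAKER_01") are assumptions that depend on package versions.
import whisper
from pyannote.audio import Pipeline

asr = whisper.load_model("medium")
transcript = asr.transcribe("session.wav", language="de")      # segments with start/end/text
diarization = Pipeline.from_pretrained("pyannote/speaker-diarization")("session.wav")

def speaker_at(t):
    # return the speaker whose turn contains time t, if any
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        if turn.start <= t <= turn.end:
            return speaker
    return None

tester_segments = [s for s in transcript["segments"]
                   if speaker_at(0.5 * (s["start"] + s["end"])) == "SPEAKER_01"]
```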
Figure 1: Shows a section of the stimuli, which are the three city websites and the AOIs marked for each one
The speaker diarization delivered perfect results and distinguished between the spoken feedback of the testers and instructions or comments given by the usability expert. In the further analysis, only the testers' feedback is used.
### Calculation of links between verbal feedback and AOIs
Since timestamps were synchronized for the gaze data, mouse data and spoken sentences, they were used to determine where the subjects looked or pointed and what they said at a specific time. Thus, each spoken sentence was assigned a segment of a gaze path and a mouse path. Using the intersection of the paths with the AOIs, the verbal utterances were assigned to the individual AOIs.
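A minimal sketch of this linking step is given below; gaze or mouse samples and AOIs are assumed to be given as timestamped points and axis-aligned rectangles in page coordinates, sharing the same global clock as the transcript.

```python
# Minimal sketch of the linking step: for one spoken statement, collect the gaze (or
# mouse) samples inside its time window and record which AOIs they fall into.
def aois_hit(samples, statement, aois):
    """samples: [(timestamp, x, y), ...]; statement: dict with 'start'/'end' times;
    aois: {name: (x1, y1, x2, y2)} rectangles in page coordinates."""
    hit = set()
    for t, x, y in samples:
        if statement["start"] <= t <= statement["end"]:
            for name, (x1, y1, x2, y2) in aois.items():
                if x1 <= x <= x2 and y1 <= y <= y2:
                    hit.add(name)
    return hit
```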
## 4 Analysis
The links between the verbal utterances and AOIs were evaluated to label the hits and calculate the hit rate, which was compared between the gaze and mouse data. Additionally, the sentiments for each statement were marked and analyzed.
### Manual labeling of hits and sentiments
Every verbal statement as obtained from the transcription was checked manually to see which AOIs had been mentioned by the subject. There were a total of 344 statements (Mean per subject = 34) made and 231 statements (Mean per subject = 23) that mentioned AOIs. The remaining statements were general ones with no reference to an AOI and were excluded. The mean duration of each statement was 7.2 seconds and 7.4 (SE = 3.3) AOIs were mentioned per statement.
In some instances, a single statement was split into two utterances by the AI transcription. The AOIs were marked separately for them. There were also instances, where the context of a statement had to be checked with the previous statement. For example, subject #8 says, "But I notice that a lot of large images are used." and the next utterance is "Again, I think that's a bit too much.". One would need to check the first statement to understand that the subject is referring to the images in the second statement. Finally, for the sentiment analysis, each statement was manually evaluated and marked with a positive, negative or neutral sentiment. There were a total of 100 positive, 64 negative and 67 neutral statements. The sentiment analysis could also be done based on AI algorithms. But available AIs we could test were unusable for our use case, probably because they were trained on datasets from other domains such as tweets or Amazon reviews whose utterances differ from Think Aloud statements in the usability context. Manual sentiment labeling, on the other hand, was straightforward and excluded AI inaccuracies.
### Calculation of hits rate
In order to calculate the hit rate, the following steps were taken:
1. For every statement, the AOIs that had been mentioned were noted (This step has already been explained in the previous section 4.1).
2. From step 1, the total number of mentioned AOIs for every statement was calculated.
3. Count for every statement: how many AOIs were mentioned and simultaneously fixated.
4. Count for every statement: how many AOIs were mentioned and simultaneously pointed by the mouse.
5. Finally, hit rate = (number of AOIs counted in step 3 for the gaze, or in step 4 for the mouse, divided by the total number of verbally mentioned AOIs from step 2) * 100; see the sketch after this list.
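A small sketch of this computation, assuming each statement carries the set of manually labeled mentioned AOIs and the sets of AOIs hit by gaze and mouse (e.g. as returned by the linking sketch above):

```python
# Hit-rate computation over all statements, for gaze and mouse separately.
def hit_rates(statements):
    """statements: list of dicts with 'mentioned', 'gaze_hits' and 'mouse_hits' AOI sets."""
    mentioned = sum(len(s["mentioned"]) for s in statements)
    gaze = sum(len(s["mentioned"] & s["gaze_hits"]) for s in statements)
    mouse = sum(len(s["mentioned"] & s["mouse_hits"]) for s in statements)
    return 100.0 * gaze / mentioned, 100.0 * mouse / mentioned
```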
## 5 Results
### Association between the verbalizations and AOIs based on the gaze/mouse data
We compared the hit rates, which is defined as the percentage of the verbally mentioned AOIs that are fixated on. A paired-sample T-test revealed a significant difference (T[9] = 4.3, p =.002) between the gaze (mean = 66.29%; SE = 2.1) and mouse (mean = 40%; SE = 5.6). In other words, while the gaze was on an average of 66% of AOIs that the subjects verbally mentioned at a given time, the mouse was only on 40% of them. Figure 2 shows the mean hit rate for the gaze and the mouse.
A closer look at the data revealed that, while the gaze was often directed towards the AOIs that was verbalized, the mouse was used more passively to scroll, which explains the low hit rate for the mouse. Figure 3 shows example screenshots for two different subjects. The gaze path is marked with the numbered circles and the mouse path is shown in the straight-line indicating scrolling. The text below reveals what was said during that time. In both instances, one can observe that,
while the mouse is being used to scroll, the gaze is being directed to relevant AOIs.
Figure 3: Screenshots for subjects 8 (left) and 5 (right). The numbered circles show the gaze path and the straight line shows the mouse path. The statement below shows the verbalization during the time-period the screenshot was taken. Note that in several instances, the transcription from the algorithm splits statements. This has been manually corrected for these examples
Figure 2: Hit Rate for Gaze vs. Mouse
### Is there an effect of sentiments?
There were an average of 10 positive, 6.4 negative and 6.7 neutral statements per subject. Figure 4 below shows the mean hit rate for the positive, negative and neutral sentiments for the gaze and the mouse. A two-factor within subject ANOVA with factors Sentiment (Positive, Negative and Neutral) and Tracking method (Gaze, Mouse) revealed a significant effect of Sentiment (F(57) = 3.9, p =.02) and Tracking method (F(57) = 21.5, p <.001), but no significant interaction between the factors (F(57) = 0.4, p =.6). Specifically, as seen in Figure 4, positive and neutral statements had the highest hit rate, whereas negative statements showed the lowest for both the gaze and the mouse.
## 6 Conclusion and future work
In our study, we have shown that a link between spoken feedback and the stimulus can be reasonably calculated based on gaze data: 66% of the AOIs of the web page mentioned by the testers were fixated at the same time. On average, each statement related to 7 AOIs out of an average of 42 marked AOIs per stimulus. The gaze outperforms the mouse data, where only 40% of the AOIs were pointed at. Our findings revealed that gaze-based correlations were significantly stronger; in other words, the gaze was directed more often towards stimuli that were verbally mentioned, whereas the mouse was mostly used to scroll. Additionally, we found an effect of sentiments on the association. Importantly, subjects fixated on the AOIs they were mentioning more often during positive and neutral assessments. For negative statements, this correlation was the lowest.
Although previous work has mentioned a correlation between mouse and verbal data [3], it has been pointed out that mouse behavior shows a lot of variability between subjects [10]. Given that eye movements are closely related to attention [8], it is possible that they exhibit more universal patterns and are therefore a better alternative for such automatic analysis. Future studies could focus on further automating the analysis; note that, in our study, the AOIs and sentiments were manually labelled.
The usage of the generated data - the feedback-stimulus pairs - will also be subject to future research. Possible options include interactive user interfaces for efficiently displaying the feedback to usability experts. Additionally, calculating sentiment maps to highlight areas with positive or negative verbal feedback holds promise in facilitating the analysis of thinking aloud data.
|
2304.13243
|
Direct detection of finite-size dark matter via electron recoil
|
In direct dark matter (DM) detection via scattering off the electrons, the
momentum transfer plays a crucial role. Previous work showed that for
self-interacting DM, if the DM particle has a size (the so-called puffy DM),
the radius effect could dominate the momentum transfer and become another
source of velocity dependence for self-scattering cross section. In this work
we investigate the direct detection of puffy DM particles with different radii
through electron recoil. We find that comparing with the available experimental
exclusion limits dominated by the mediator effect for XENON10, XENON100 and
XENON1T, the constraints on the puffy DM-electron scattering cross-section
become much weaker for large radius DM particles. For small-radius DM
particles, the constraints remain similar to the point-like DM case.
|
Wenyu Wang, Wu-Long Xu, Jin Min Yang
|
2023-04-26T02:09:47Z
|
http://arxiv.org/abs/2304.13243v2
|
# Direct detection of finite-size dark matter via electron recoil
###### Abstract
In direct dark matter (DM) detection via scattering off the electrons, the momentum transfer plays a crucial role. Previous work showed that for self-interacting DM, if the DM particle has a size (the so-called puffy DM), the radius effect could dominate the momentum transfer and become another source of velocity dependence for self-scattering cross section. In this work we investigate the direct detection of puffy DM particles with different radii through electron recoil. We find that comparing with the available experimental exclusion limits dominated by the mediator effect for XENON10, XENON100 and XENON1T, the constraints on the puffy DM-electron scattering cross-section become much weaker for large radius DM particles. For small-radius DM particles, the constraints remain similar to the point-like DM case.
###### Contents
* I Introduction
* II Puffy DM-electron ionization rates
* III Numerical analysis and results
* IV Conclusion
* Acknowledgements
## I Introduction
More than eighty percent of the matter in our universe is considered to be dark matter (DM), which does not participate in the electromagnetic interaction [1; 2; 3]. Through its gravitational interaction, the existence of DM has been strongly supported by evidence including galaxy and cluster rotation curves, gravitational lensing, the Bullet Cluster and the Cosmic Microwave Background radiation [4; 5; 6]. Meanwhile, various new physics models have been proposed to explain DM. One of the popular DM candidates is the Weakly Interacting Massive Particle (WIMP), which provides the so-called WIMP miracle: its mass lies around the electroweak scale and its thermal freeze-out naturally reproduces the observed relic density [7]. However, the signature of a WIMP has not yet been observed in the laboratory [8; 9]. Further details of the DM particle, such as its spin, mass and size, need to be studied on the theoretical and experimental sides in the future [1].
Traditional direct detection experiments of DM focus on masses above the GeV scale [10], and so far no evidence has been found. Below the GeV scale, the sensitivity of the direct detection technique based on observing nuclear-recoil signals from elastic DM-nucleus scattering drops quickly [11; 12; 13]. As a result, more attention and technological development are being devoted to searching for signatures of DM scattering off electrons. For example, in a two-phase xenon time projection chamber (TPC), the xenon atom is ionized and the electron recoils. At the same time, other atoms may
also be ionized by the recoiling electrons. These recoiling electrons are then drawn through the xenon gas region via an external electric field, which results in the emission of scintillation (S2) signals [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. However, from a theoretical perspective, it is indistinguishable whether the ionized signals are caused by DM scattering off the electrons or by DM scattering off the nucleus through the Migdal effect [26; 27; 28; 29; 30].
From a model-building perspective, the idea that the DM particle has a size provides a solution to the small-scale structure anomalies in the universe, which include the core-cusp problem and the Too-Big-to-Fail problem, _etc_. [31]. This scenario is called the puffy self-interacting DM (SIDM) model, in which the radius effect of DM particles is considered as another source of velocity dependence of the self-scattering cross section \(\sigma/m_{\rm DM}\) [32; 33; 34]. In fact, in many new physics models, DM can be regarded as a bound state of point-like particles, which may be some dark meson, nucleus, atom, _etc_. Meanwhile, such finite-size DM can also be considered as one kind of SIDM paradigm [35; 36; 37]. Given that puffy DM could lead to a wide range of phenomena and the radius effect could dominate the scattering momentum transfer [38; 39], we need to examine the direct detection of finite-size DM via electron recoil, which is the aim of this work.
This work is organized as follows. In Sec. II, the finite-size DM-electron ionization rates are studied. In Sec. III, the numerical results and analyses are described. Section IV gives our conclusions.
## II Puffy DM-electron ionization rates
In the scattering of two point-like particles, the momentum transfer plays a crucial role in calculating the scattering cross-section. When a particle has a finite size, the scattering process becomes considerably more complex. The reason is that if the momentum transfer is much smaller than the inverse radius of the particle, it can be treated as a point-like particle, and the internal structure of a puffy particle cannot be probed. However, when the momentum transfer is larger than the inverse radius of the DM particle, i.e., \(1<qR_{\rm DM}(\leq 10^{-3}m_{\rm DM}R_{\rm DM})\), the internal structure can possibly be measured, and a form factor needs to be introduced [40; 41]. In our study, the shape of the puffy DM particle is taken to be of the dipole form. The form factor describing the comprehensive momentum transfer
caused by the spatial distribution of puffy DM charge in the bulk is expressed as
\[F_{\rm RDM}(q)=\frac{1}{(1+r_{\rm DM}^{2}q^{2})^{2}}, \tag{1}\]
where \(r_{\rm DM}\) is the radius of the DM particle and \(q\) is the momentum transfer in the scattering. When the electron is scattered by puffy DM, the matrix element is corrected by multiplying this form factor relative to the point particle case.
Scattering between an electron and a DM particle with finite size can be treated in the same way as for the point-like DM case, both mathematically and physically. The target electrons are considered as single-particle states of an isolated atom, which are calculated by the numerical Roothaan-Hartree-Fock (RHF) bound wave functions [42]. The velocity-averaged differential cross section of the DM scattering off electron is given as
\[\frac{d\left\langle\sigma_{ion}^{nl}v\right\rangle}{d\ln E_{R}}=\frac{\bar{\sigma}_{e}}{8\mu_{\chi e}^{2}}\int qdq|f_{ion}^{nl}(k^{\prime},q)|^{2}|F_{\rm DM}(q)|^{2}|F_{\rm RDM}(q)|^{2}\eta(v_{\rm min}), \tag{2}\]
where \(\bar{\sigma}_{e}\) and \(\mu_{\chi e}=m_{\chi}m_{e}/(m_{\chi}+m_{e})\) are respectively the reduced cross section and the reduced mass of the DM-electron system, \(E_{R}\) and \(q\) denote respectively the electron recoil energy and the momentum transfer, \(F_{\rm DM}(q)\) is the form factor of the DM, and \(f_{ion}^{nl}(k^{\prime},q)\) is the form factor for the ionized electron in the \((n,l)\) shell, which can be expressed as
\[|f_{ion}^{nl}(k^{\prime},q)|^{2}=\frac{k^{\prime 3}}{4\pi^{3}}\times 2\sum_{n,l,l^{\prime}m^{\prime}}|\langle f|e^{i{\bf q}_{e}\cdot{\bf x}_{i}}|i\rangle|^{ 2}, \tag{3}\]
with the sum being over all electron shells and the momentum \(k^{\prime}=\sqrt{2m_{e}E_{R}}\)[43; 22; 44]. The inverse mean speed \(\eta(v_{\rm min})\), which obeys the Maxwell-Boltzmann velocity distribution of DM with the circular velocity \(v_{0}=220\) km/s and the escaped velocity \(v_{\rm esc}=544\) km/s, is the function of the minimum velocity \(v_{\rm min}\) of the DM particle required for the scattering. In this scattering process, the DM particle has an incoming velocity of \(v\), and the outgoing momenta of the DM particle and the electron are \({\bf p^{\prime}}_{\chi}\) and \(m_{e}v_{e}\), respectively. According to momentum conservation, we have
\[{\bf q}=m_{\chi}v-{\bf p^{\prime}}_{\chi}=m_{e}v_{e}. \tag{4}\]
The energy transferred into the electron can be obtained via energy conservation
\[\Delta E_{e}=\frac{1}{2}m_{\chi}v^{2}-\frac{|m_{\chi}v-q|^{2}}{2m_{\chi}}={\bf v }\cdot{\bf q}-\frac{{\bf q^{2}}}{2m_{\chi}}. \tag{5}\]
Meanwhile, \(\Delta E_{e}=E_{b}+E_{R}\) is related to the binding energy \(E_{b}\). Thus, the minimum velocity is
\[v_{\rm min}=\frac{|E_{b}^{nl}|+E_{R}}{q}+\frac{q}{2m_{\chi}}. \tag{6}\]
Then the complete expression of \(\eta(v_{\rm min})\) is written as
\[\eta(v_{\rm min})=\int_{v_{\rm min}}\frac{d^{3}v}{v}f_{\chi}(v)\Theta(v-v_{\rm min}), \tag{7}\]
where
\[f_{\chi}(\overrightarrow{v}_{\chi})\propto e^{-\frac{|\overrightarrow{v}_{\chi}+\overrightarrow{v}_{\rm E}|^{2}}{v_{0}^{2}}}\Theta(v_{\rm esc}-|\overrightarrow{v}_{\chi}+\overrightarrow{v}_{\rm E}|) \tag{8}\]
with the averaged Earth-relative velocity \(v_{E}=232\) km/s [45; 46]. The differential ionization rate is obtained as
\[\frac{dR_{ion}}{d\ln E_{R}}=N_{T}\frac{\rho_{\chi}}{m_{\chi}}\sum_{nl}\frac{d \left\langle\sigma_{ion}^{nl}v\right\rangle}{d\ln E_{R}}, \tag{9}\]
where \(N_{T}\) is the number of target atoms and \(\rho_{\chi}=0.4\) GeV/cm\({}^{3}\) is the local DM density.
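As an illustration, the inverse mean speed of Eqs. (7)-(8) can be evaluated numerically with a simple Monte Carlo, as sketched below; velocities are in km/s, so \(\eta\) comes out in units of (km/s)\({}^{-1}\). This is only a numerical convenience for checking the velocity integral, not the evaluation method used for the figures.

```python
# Monte Carlo estimate of eta(v_min) for the truncated, Earth-boosted Maxwell-Boltzmann
# distribution of Eqs. (7)-(8). Velocities in km/s.
import numpy as np

V0, VESC, VE = 220.0, 544.0, 232.0

def eta(v_min, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, V0 / np.sqrt(2.0), size=(n, 3))      # f ~ exp(-|u|^2 / v0^2), galactic frame
    u = u[np.linalg.norm(u, axis=1) < VESC]                   # escape-velocity cutoff
    v = np.linalg.norm(u - np.array([0.0, 0.0, VE]), axis=1)  # speed in the Earth frame
    return np.mean((v > v_min) / v)                           # <Theta(v - v_min)/v>, in (km/s)^{-1}
```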
Additionally, from a field-theoretical perspective, a predictive benchmark model should be implemented. We assume that the Dirac fermion DM particle \(\chi\) interacts with the electron via exchanging a dark photon mediator \(A^{\prime}\) with mass \(m_{v}\). The Lagrangian of this simplified model is
\[\mathcal{L}\supset g_{\chi}\bar{\chi}\gamma_{\mu}\chi A^{\prime}{}^{\mu}+g_{ e}\bar{e}\gamma_{\mu}eA^{\prime}{}^{\mu}, \tag{10}\]
where \(g_{\chi}\) and \(g_{e}\) are the coupling constants.
The scattering cross section between puffy DM and electron can be written as a product of the reduced cross section \(\bar{\sigma}_{e}\), the momentum-dependent factor \(F_{\rm DM}(q)\) of the point particle case and \(F_{\rm RDM}(q)\) in which the radius effect will be included:
\[\sigma_{e}=\bar{\sigma}_{e}|F_{\rm DM}|^{2}|F_{\rm RDM}|^{2}, \tag{11}\]
where \(\bar{\sigma}_{e}\) is taken as
\[\bar{\sigma}_{e}=\mu_{\chi e}^{2}\frac{1}{16\pi m_{\chi}^{2}m_{e}^{2}}\overline {|M_{\rm point}(q=q_{0})|^{2}}, \tag{12}\]
with the reference momentum \(q_{0}=\alpha m_{e}\) and \(\alpha=g_{\chi}^{2}/4\pi\).
Finally, the form factor \(F_{\rm DM}(q)\), in which the momentum dependence is from the mediator effect, can be written as [41]
\[\begin{split}|F_{\rm DM}(q)|^{2}&=\frac{\overline{| M(q)_{\rm point}|^{2}}}{|M_{\rm point}(q=q_{0})|^{2}}\\ &=\frac{(m_{v}^{2}+q_{0}^{2})^{2}(8m_{e}^{2}m_{\chi}^{2}-2q^{2}(m _{e}+m_{\chi})^{2}+q^{4})}{(m_{v}^{2}+q^{2})^{2}(8m_{e}^{2}m_{\chi}^{2}-2q_{0} ^{2}(m_{e}+m_{\chi})^{2}+q_{0}^{4})}.\end{split} \tag{13}\]
From Eqs. (1,11,13) we see that when the radius of DM particle is very small, the momentum dependence is dominated by the mediator, namely \(F_{\rm DM}(q)=1\) for a heavy dark photon or \(F_{\rm DM}(q)=\alpha^{2}m_{e}^{2}/q^{2}\) for a ultra-light dark photon, which is in agreement with the study in [17]. If the DM particle has a sizable radius, the total form factor can be rewritten approximately as
\[F_{\rm DM}(q)F_{\rm RDM}(q)\approx\begin{cases}\frac{1}{r_{\rm DM}^{4}q^{4}}& \qquad\text{for a massive mediator}\\ \\ \frac{q_{0}^{2}}{r_{\rm DM}^{2}q^{6}}&\qquad\text{for a massless mediator}\end{cases} \tag{14}\]
This implies a suppression of total momentum transfer for a large radius of DM. In the following, the effects from such a finite radius will be studied numerically.
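For illustration, this suppression can be made explicit numerically; the sketch below evaluates Eq. (1) together with the limiting heavy- and light-mediator forms of \(F_{\rm DM}\) quoted after Eq. (13), with all quantities in MeV (so \(r_{\rm DM}\) in MeV\({}^{-1}\)).

```python
# Numerical illustration of Eq. (1) and the limiting mediator form factors, showing how
# a large r_DM suppresses the effective momentum transfer. Units: MeV throughout.
import numpy as np

ALPHA, M_E = 1.0 / 137.0, 0.511
Q0 = ALPHA * M_E                                  # reference momentum transfer

def F_RDM(q, r_dm):
    return 1.0 / (1.0 + (r_dm * q) ** 2) ** 2     # Eq. (1)

def F_DM(q, light_mediator=False):
    return (Q0 / q) ** 2 if light_mediator else np.ones_like(q)

q = np.geomspace(1e-4, 1.0, 5)                    # sample momentum transfers in MeV
for r_dm in (0.0, 1.0, 100.0):                    # MeV^{-1}
    print(r_dm, F_DM(q) * F_RDM(q, r_dm))
```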
## III Numerical analysis and results
As in [15], the constraints on the scattering cross section between puffy DM and the electron can be obtained from the experimental data. Firstly, the observed electron number \(n_{e}\) and the scintillation photon number \(n_{\gamma}\) can be deduced from the electron recoil energy \(E_{R}\). The step energy \(W=13.8\) eV (the average energy required to create a single quantum [47]) determines the primary quantum number \(n^{(1)}\), defined as \(n^{(1)}=\rm Floor(E_{R}/W)\), where Floor is the round-down function. The probability of the initial electron recombining with an ion is taken as \(f_{R}=0\) and the fraction of observed electrons is \(f_{e}=0.83\). The corresponding uncertainties are set as \(0<f_{R}<0.2\), \(12.4\) eV \(<\rm W<16\) eV and \(0.62<f_{e}<0.91\). The shells contributing photons are \((5{\rm s},4{\rm d},4{\rm p},4{\rm s})\) with energies \((13.3,63.2,87.9,201.4)\) eV, respectively. The additional quantum numbers from the photoionization effect are \(n^{(2)}=(0,4,6-10,3-15)\), as shown in Table 1 [17]. The total quantum number \(n^{(1)}+n^{(2)}\) obeys a binomial distribution. In FIG. 1, the electron recoil spectra are shown for \(F_{\rm DM}=1\) and \(F_{\rm DM}=q_{0}^{2}/q^{2}\) with different DM radii. Comparing with the point-particle case \(R_{\rm DM}=0\), in which the momentum transfer is dominated by the mediator effect, we see that for a small radius \(R_{\rm DM}=10\) MeV\({}^{-1}\) the suppression from the radius effect of puffy DM is not sizable in the range of electron number \(1-15\), for both \(F_{\rm DM}=1\) and \(F_{\rm DM}=q_{0}^{2}/q^{2}\). However, for a large radius \(R_{\rm DM}=100\) MeV\({}^{-1}\) the suppression from the radius effect is significant and the momentum transfer is dominated by the radius effect. Moreover, when the number of ionization electrons is larger than 3, the recoil spectrum is very small.
Then, in a given event, the signal of \(n_{e}\) electrons is converted into photo-electrons (PEs), which are produced by the photomultiplier tubes and constitute the observed experimental signal. For an event with \(n_{e}\) electrons, the PE number is drawn from a Gaussian with mean \(n_{e}\mu\) and width \(\sqrt{n_{e}}\sigma\). Here \(\mu=27,19.7,11.4\) and \(\sigma=6.7,6.2,2.8\) for XENON10, XENON100 and XENON1T, respectively. We set the PE bins as in [15; 48; 49]. Note that the PE bin for XENON1T in our work is set as the range [165, 275]. Finally, to compare with the theoretical values, the experimental data of XENON10 (15 kg-days) [15], XENON100 (30 kg-years) [48; 50] and XENON1T (1.5 tonne-years) [49] are used. FIG. 2 shows the limits on the DM-electron scattering cross section for different DM radii, for \(F_{\rm DM}=1\) (upper three panels) and \(F_{\rm DM}=q_{0}^{2}/q^{2}\) (lower three panels), in the cases \(R_{\rm DM}=0\), \(R_{\rm DM}=1\) MeV\({}^{-1}\) and \(R_{\rm DM}=100\) MeV\({}^{-1}\), from XENON10 data (purple), XENON100 data (red) and XENON1T data (blue). Comparing the limits with \(R_{\rm DM}=0\) and \(R_{\rm DM}=1\) MeV\({}^{-1}\), we can infer that for a very small DM radius, the momentum transfer is much
Figure 1: Spectrum of expected number of events of different radius DM-electron scattering in \(F_{\rm DM}=1\) case with \(\bar{\sigma}_{e}=5\times 10^{-35}\) cm\({}^{2}\) (left panel) and \(F_{\rm DM}=q_{0}^{2}/q^{2}\) case with \(\bar{\sigma}_{e}=5\times 10^{-39}\) cm\({}^{2}\) (right panel) in xenon. The mass of DM is chosen as \(m_{\chi}=100\) MeV and the exposure is \(1000\rm kg-year\).
smaller than the inverse radius and the puffy DM can be seen as a point-like particle. In this case the momentum transfer is dominated by the mediator effect, namely \(F_{\rm DM}(q)F_{\rm RDM}(q)\sim F_{\rm DM}(q)\), and the excluded limits are similar to the case of point DM particle. Instead, for large radius puffy DM, the momentum transfer is dominated by the radius effect, namely \(F_{\rm DM}(q)F_{\rm RDM}(q)\sim F_{\rm RDM}(q)\), and the excluded limits are much weaker than for the point DM particle. Thus, when the DM particle has a large size, the momentum transfer from the radius effect cannot be ignored and it weakens the limits on the DM-electron scattering cross section.
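As a schematic illustration of the detector-response chain described at the beginning of this section, the conversion from recoil energy to PE signal can be sketched as follows; the shell-dependent secondary quanta \(n^{(2)}\) and the full binning are omitted for brevity, and the response parameters correspond to the XENON10 values quoted above.

```python
# Schematic detector-response sketch: recoil energy -> primary quanta -> observed
# electrons (binomial with probability f_e) -> photoelectrons (Gaussian per electron).
# Secondary quanta n^(2) are omitted; mu, sigma are the quoted XENON10 values.
import numpy as np

W_KEV, F_E = 13.8e-3, 0.83
MU_PE, SIGMA_PE = 27.0, 6.7

def pe_signal(E_R_keV, rng=np.random.default_rng(0)):
    n1 = int(E_R_keV // W_KEV)                       # primary quanta, Floor(E_R / W)
    n_e = rng.binomial(n1, F_E) if n1 > 0 else 0     # observed ionization electrons
    if n_e == 0:
        return 0.0
    return rng.normal(n_e * MU_PE, np.sqrt(n_e) * SIGMA_PE)
```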
## IV Conclusion
The momentum transfer plays a crucial role in calculating the cross-section for DM scattering off electrons. If the DM has a size, the momentum transfer can be dominated
Figure 2: The limits on the DM-electron scattering cross section for different DM radii, for \(F_{\rm DM}=1\) (upper panels) and \(F_{\rm DM}=q_{0}^{2}/q^{2}\) (lower panels), in the cases \(R_{\rm DM}=0\), \(R_{\rm DM}=1\) MeV\({}^{-1}\) and \(R_{\rm DM}=100\) MeV\({}^{-1}\), from XENON10 data (purple), XENON100 data (red) and XENON1T data (blue).
by the DM radius effect. In this work, assuming the shape of the DM particle takes the dipole form, we studied the direct detection of DM with different radii via electron recoil. We found that for a large DM radius, the excluded limits on the DM-electron scattering cross section are much weaker than for a point-like DM particle. For example, when the radius of DM is \(R_{\rm DM}=100\) MeV\({}^{-1}\) and the mass of DM is sub-GeV, the limit on the puffy DM-electron scattering cross section is about \(10^{-34}\) cm\({}^{2}\) (XENON10), \(10^{-29}\) cm\({}^{2}\) (XENON100) and \(10^{-24}\) cm\({}^{2}\) (XENON1T), which is weaker than \(10^{-35}\) cm\({}^{2}\) (XENON10), \(10^{-33}\) cm\({}^{2}\) (XENON100) and \(10^{-33}\) cm\({}^{2}\) (XENON1T) for the point-like DM particle. Instead, when the DM particle has a very small radius, the constraints of direct detection on the DM-electron scattering cross section remain similar to the point-like DM particle case.
## Acknowledgements
This work was supported by the National Natural Science Foundation of China (NNSFC) under grant Nos. 11775012, 11821505 and 12075300, by Peng-Huan-Wu Theoretical Physics Innovation Center (12047503), and by the Key Research Program of the Chinese Academy of Sciences, grant No. XDPB15.
|
2301.00387
|
Exactly Hittable Interval Graphs
|
Given a set system $\mathcal{X} = \{\mathcal{U},\mathcal{S}\}$, where
$\mathcal{U}$ is a set of elements and $\mathcal{S}$ is a set of subsets of
$\mathcal{U}$, an exact hitting set $\mathcal{U}'$ is a subset of $\mathcal{U}$
such that each subset in $\mathcal{S}$ contains exactly one element in
$\mathcal{U}'$. We refer to a set system as exactly hittable if it has an exact
hitting set. In this paper, we study interval graphs which have intersection
models that are exactly hittable. We refer to these interval graphs as exactly
hittable interval graphs (EHIG). We present a forbidden structure
characterization for EHIG. We also show that the class of proper interval
graphs is a strict subclass of EHIG. Finally, we give an algorithm that runs in
polynomial time to recognize graphs belonging to the class of EHIG.
|
S. M. Dhannya, N. S. Narayanaswamy, K. K. Nisha
|
2023-01-01T11:33:28Z
|
http://arxiv.org/abs/2301.00387v3
|
# Exactly Hittable Interval Graphs
###### Abstract
Given a set system \(\mathcal{X}=\{\mathcal{U},\mathcal{S}\}\), where \(\mathcal{U}\) is a set of elements and \(\mathcal{S}\) is a set of subsets of \(\mathcal{U}\), an exact hitting set \(\mathcal{U}^{\prime}\) is a subset of \(\mathcal{U}\) such that each subset in \(\mathcal{S}\) contains exactly one element in \(\mathcal{U}^{\prime}\). We refer to a set system as _exactly hittable_ if it has an exact hitting set. In this paper, we study interval graphs which have intersection models that are exactly hittable. We refer to these interval graphs as _exactly hittable interval graphs_ (\(\mathsf{EHIG}\)). We present a forbidden structure characterization for \(\mathsf{EHIG}\). We also show that the class of proper interval graphs is a strict subclass of \(\mathsf{EHIG}\). Finally, we give an algorithm that runs in polynomial time to recognize graphs belonging to the class of \(\mathsf{EHIG}\).
## 1 Introduction
We study classes of simple graphs which are intersection graphs of set systems that have exact hitting sets. In particular, we introduce a class of interval graphs which can be represented as intersection graphs of intervals that have exact hitting sets. We refer to this class as _Exactly Hittable Interval Graphs_ (\(\mathsf{EHIG}\)). We also present an infinite family of forbidden structures for \(\mathsf{EHIG}\). In the following, we introduce a setting of exact hitting sets and intersection graphs, before presenting our results.
**Exact Hitting Sets:** Set systems are synonymous with hypergraphs. A _hitting set_ of a hypergraph \(H\) is a subset \(T\) of the vertex set of \(H\) such that \(T\) has at least one vertex from every hyperedge. If every hyperedge has exactly one element from \(T\), then \(T\) is called an _exact hitting set_. Also known as the Unique Hitting Set problem, the Exact Hitting Set problem is a well-studied decision problem that aims to find if a given hypergraph has an exact hitting set. It finds applications in combinatorial cryptosystems [4] and computational biology among many others. Exact Hitting Set problem is the dual of Exact Cover problem which, in turn, seeks to find a set cover that covers all vertices of a hypergraph such that the number of occurrences each vertex has in the cover is exactly one. Some famous examples of Exact Cover problem are sudoku, tiling dominoes, and n-queens problem. Exact Cover problem is a special case of Minimum Membership Set Cover problem (MMSC) [12]. While the classic Set Cover problem seeks to find a set cover of minimum cardinality, MMSC aims to find a set cover that minimizes the maximum number of occurrences each vertex has in the cover. MMSC is known to be \(\mathsf{NP}\)-complete on arbitrary
set systems [13]. However, for interval hypergraphs, MMSC has been shown to be solvable in polynomial time by Dom et al. [3]. If a hypergraph \(H\) has an exact hitting set, we refer to \(H\) as an _exactly hittable hypergraph_. It is, indeed, a natural question to ask if a given hypergraph is exactly hittable. The algorithm by Dom et al. [3] for MMSC answers this question, but not efficiently for general hypergraphs (due to inherent NP-hardness for general cases). However, for special kinds of hypergraphs like interval hypergraphs, an exact solution can be obtained in polynomial time. In our work, we address a natural extension to this question and study simple graphs which are _intersection graphs_ (defined in Section 1.1) of exactly hittable hypergraphs, with a focus on interval hypergraphs.
**Intersection Graphs:** The theory of graphs and hypergraphs are connected by a very well-studied notion of intersection graphs [5]. It is well-known that every graph \(G\) is an intersection graph of some hypergraph \(H\)[11]. \(H\) is referred to as an _intersection model_ or a _set representation_ of \(G\)[10], [11]. Interestingly, certain special classes of graphs are characterized by the structure of their intersection models. For instance, Gavril has shown that the class of chordal graphs are the intersection graphs of subtrees of a tree [6]. When the hyperedges are restricted to be paths on a tree, the resulting intersection graph class is that of path chordal graphs which is a proper subclass of the class of chordal graphs [1],[7],[15],[17].
**Forbidden Structure Characterizations:** While a graph \(G\) may be identified as an intersection graph of a structured hypergraph, characterization of \(G\) based on forbidden structures has also been equally well-studied. For instance, the class of chordal graphs are characterized by the absence of induced cycles of size 4 or more [10]. Similarly, by the celebrated theorem of Kuratowski [19], the class of planar graphs must not have subgraphs that are subdivisions of \(K_{5}\) and \(K_{3,3}\). Interval graphs are known to be the class of chordal graphs without an asteroidal triple as induced subgraph [14]. The class of proper interval graphs is a subclass of interval graphs that do not have a \(K_{1,3}\) as an induced subgraph [18]. Refer to Table 1 for a summary of these examples. Clearly, characterization of simple graphs based on their intersection models and forbidden structures are extremely well-studied notions in defining graph classes.
**Our results**
1. We begin our set of results with a simple extension to a well-known theorem by Harary that every graph \(G\) is the intersection graph of some hypergraph \(H\)[11].
**Observation 1**: _Every simple undirected graph is the intersection graph of an exactly hittable hypergraph. Further, if \(G\) is a connected chordal graph, then it is the intersection graph of an exactly hittable set of subtrees of a tree._ We present a proof of this observation in Section 3. Further to this observation, we look at a subclass of chordal graphs, namely interval graphs, which are intersection graphs of subpaths on a path. We ask if there is an exactly hittable intersection model for every interval graph, as in the case of arbitrary graphs and arbitrary chordal graphs. Interestingly, the answer is no.
We present proof of this observation in Section 2. Further to this observation, we look at a subclass of chordal graphs namely interval graphs, which are intersection graphs of subpaths on a path. We ask if there is an exactly hittable intersection model for every interval graph, as in the case of arbitrary graphs and arbitrary chordal graphs. Interestingly, the answer is no.
2. We introduce the class of _Exactly Hittable Interval Graphs_ (\(\mathsf{EHIG}\)), which is the set of interval graphs that have an exactly hittable interval representation. We say that an interval graph is exactly hittable if and only if it has at least one exactly hittable interval representation. We present a forbidden structure characterization for \(\mathsf{EHIG}\). First, we define a family \(\mathcal{F}\) of simple graphs as follows: **Definition 1**: _Let \(\mathcal{F}\) be the infinite family of interval graphs with the following structure: each graph has an induced path \(P\) consisting of \(k\) vertices, for some \(k\geq 1\), and the open neighbourhood of \(P\) contains an independent set of size at least \(k+3\)._ Our main contribution in this paper is to prove that every graph in \(\mathcal{F}\) is a forbidden structure for \(\mathsf{EHIG}\). See Fig 1 for examples of forbidden structures. In Fig 1(i), \(u\) is the induced path \(P\) consisting of one vertex, with an independent set of four vertices \(\{a,b,c,d\}\) in its neighbourhood. Similarly, in Fig 1(ii), \(a\)-\(b\) is the induced path \(P\) consisting of two vertices and \(\{c,d,u,e,f\}\) is the independent set of five vertices in the neighbourhood of \(P\).
| _Graph Class_ | _Intersection Model_ | _Forbidden Structures_ |
| --- | --- | --- |
| Simple | An exactly hittable hypergraph | NIL |
| Planar | Segments on a plane | Subdivisions of \(K_{5}\) and \(K_{3,3}\) [19] |
| Chordal | Subtrees of a tree | \(C_{k}\), for \(k\geq 4\) [10] |
| Path chordal | Paths on a tree | List given in [16] |
| Interval | Subpaths on a path | \(C_{k}\), for \(k\geq 4\), and asteroidal triple [14] |
| Proper interval | Sets of intervals not properly contained in each other | \(C_{k}\), for \(k\geq 4\), asteroidal triple and \(K_{1,3}\) [18] |
| **Exactly hittable interval graphs** (new graph class) | Exactly hittable sets of intervals | \(C_{k}\), for \(k\geq 4\), asteroidal triple, \(K_{1,3}\), and an induced path \(P_{k}\) which has, in its open neighbourhood, an independent set of at least \(k+3\) vertices |

Table 1: Intersection models and forbidden structures for well-known graph classes
Figure 1: Two simple instances of forbidden structures
**Theorem 2**.: _An interval graph \(G\) is exactly hittable if and only if it does not contain any graph from the set \(\mathcal{F}\) as an induced subgraph._ This theorem has been proved in Section 3. We believe that this result is an interesting addition to the existing graph characterizations primarily because we could not find such an equivalence elsewhere in the literature, including graph classes repositories like graphclasses.org.
3. In Section 2, we introduce, what we refer to as, a _canonical interval representation_ for an interval graph. Given interval graph \(G\), we start with a maximal clique ordering of \(G\) and construct an interval representation from the ordering such that the intersection graph of this representation is isomorphic to \(G\). By construction, there exists exactly one canonical interval representation for every interval graph. While the canonical representation may be of independent interest, this representation is crucial in proving Theorem 2 in this paper. In Section 2, we prove the following theorem.
Theorem 3.: _Let \(G\) be an interval graph. Let \(H_{G}\) be its canonical interval representation constructed as described in Section 2. Then, \(G\) is exactly hittable if and only if \(H_{G}\) is exactly hittable._
4. Given an interval graph \(G\) and its canonical interval representation \(H_{G}\), we show that the algorithm by Dom et al. [3] to solve the MMSC problem in interval hypergraphs can be used to recognize EHIG. We present the details in Section 3.1.
5. We show that the class of EHIG is positioned between the class of proper interval graphs and the class of interval graphs in the containment hierarchy of graph classes.
Theorem 4.: _Proper interval graphs \(\subset\) EHIG \(\subset\) Interval Graphs._ The proof of the second part of the above theorem follows from the definition of EHIG. The first part of the containment relationship has been proven in Lemma 7. Interestingly, the smallest forbidden structure of EHIG is \(K_{1,4}\) whereas the forbidden structure for the class of proper interval graphs is \(K_{1,3}\).
### Preliminaries
Given a set system \(\mathcal{X}=(\mathcal{U},\mathcal{S})\), _the intersection graph_ \(G(\mathcal{X})\) of sets in \(\mathcal{X}\) is the simple graph obtained as follows. For every set \(S\in\mathcal{S}\), there exists a vertex \(v_{S}\in G\). An edge \((v_{S_{i}},v_{S_{j}})\) occurs in \(G\) if and only if the two sets \(S_{i},S_{j}\in\mathcal{S}\) satisfy \(S_{i}\cap S_{j}\neq\emptyset\). The family \(\mathcal{S}\) is called a _set representation_ of the graph \(G\). A set representation is also referred to as an _intersection model_ [10], [11]. A hypergraph \(H=(\mathcal{V},\mathcal{E})\) is a graph theoretic representation of a set system
\(\mathcal{X}=(\mathcal{U},\mathcal{S})\), where the set \(\mathcal{V}\) corresponds to \(\mathcal{U}\) and the set \(\mathcal{E}\) corresponds to \(\mathcal{S}\). The set \(\mathcal{V}\) contains _vertices_ of hypergraph \(H\) and the set \(\mathcal{E}\) contains _hyperedges_. In the intersection graph \(G\), for every hyperedge \(E\in\mathcal{E}\), there exists a vertex \(v_{E}\in G\). An edge \((v_{E_{i}},v_{E_{j}})\) occurs in \(G\) if and only if the hyperedges \(E_{i}\) and \(E_{j}\) have a non-empty intersection. A graph \(G=(V,E)\) is an _interval graph_ if an interval \(I(v)\) can be associated to each vertex \(v\in V(G)\) such that there is an edge \((u,v)\) in \(G\) if and only if the associated intervals \(I(u)\) and \(I(v)\) have a non-empty intersection. The set of intervals \(\{I(v)\}_{v\in V(G)}\) is an interval representation or intersection model of \(G\). When \(\mathcal{E}\) is a family of intervals on a line, then \(G\) is an interval graph and \(H\) is an _interval hypergraph_[9].
Definition 2: [2] The hypergraph \(H=([n],\mathcal{I})\), where \([n]=\{1,\ldots,n\}\) and \(\mathcal{I}\subseteq\{\{i,i+1,\ldots,j\}\mid i\leq j,i,j\in[n]\}\) is known as an **interval hypergraph**.
Each hyperedge in \(\mathcal{I}\) is a set of consecutive integers, which we call an _interval_. In an interval \(I=\{i,i+1,\ldots,j\}\), \(i\) and \(j\) are the _left_ and _right endpoints_ of \(I\) respectively, which we denote by \(l(I)\) and \(r(I)\), respectively. We use \(\mathcal{V}(H)\) (or simply \(\mathcal{V}\)) and \(\mathcal{I}(H)\) (or simply \(\mathcal{I}\)) to denote the vertex set and the hyperedge set, respectively, of an interval hypergraph \(H\). An interval hypergraph is said to be _proper_ if no interval is contained in another interval. If, for an interval graph \(G\), there exists an interval representation in which no interval is properly contained inside another interval, then \(G\) is a _proper interval graph_.
An interval graph is characterized by the existence of a linear ordering of its maximal cliques. In Section 3, we use the following characterization to obtain an exactly hittable interval representation for an interval graph, if such a representation exists.
Theorem 5 (Gilmore and Hoffman, 1964 [8]): _The maximal cliques of an interval graph \(G\) can be linearly ordered such that, for every vertex \(x\) of \(G\), the maximal cliques containing \(x\) occur consecutively._
The class of interval graphs is a subfamily of the class of _chordal graphs_. A chordal graph is a simple graph that does not contain any induced cycle of size \(\geq 4\)[10]. Chordal graphs are known to be intersection graphs of subtrees of a tree [6].
**Note:** We draw the reader's attention to the distinction between _interval hypergraphs_ and _interval graphs_, and between _proper interval hypergraphs_ and _proper interval graphs_, as these are used extensively throughout the paper. We also apply the adjective _exactly hittable_ to both simple graphs and hypergraphs. Recall that an interval graph is an Exactly Hittable Interval Graph if it has an intersection model that has an exact hitting set. On the other hand, an Exactly Hittable Hypergraph is one that has an exact hitting set.
**Observation 6**: _Since our goal is to characterize interval graphs that have an exactly hittable interval representation, we assume without loss of generality that, in the graph \(G\), for every sequence of consecutive maximal cliques in a linear ordering, there is at most one vertex which starts and ends in this sequence._
Indeed if a given graph violates this property and there are two or more vertices that start and end in a sequence, then we retain only one of those vertices. The justification for this assertion is that if the resulting graph has an exactly hittable representation, so does the original graph.
**Notations**: All standard definitions and notations from West [19] have been used throughout the paper.
## 2 A Canonical Interval Representation
In this section, we obtain _a canonical interval representation_\(H_{G}\) of a given interval graph \(G\). The canonical interval representation is nothing but a special intersection model of \(G\). Consequently, the intersection graph of intervals in \(H_{G}\) is isomorphic to \(G\). The construction follows a well-defined set of steps with the result that every interval graph has a unique canonical interval representation. The canonical representation \(H_{G}\) is obtained by _stretching_ intervals so that all intervals have distinct left endpoints and distinct right endpoints. In other words, no pair of intervals start at the same point or end at the same point. The canonical interval representation is crucial to the proof of our main result in Section 3.
**Outline:** The starting point of this construction is to use the well known linear ordering of maximal cliques associated with an interval graph [10] (Refer Theorem 5). Figure 2 gives an illustration of how to obtain the canonical interval representation of an interval graph. Let \(G=(V,E)\) be the given interval graph. Let \(\mathcal{O}=\{Q_{1},Q_{2}\ldots Q_{r}\}\) be an ordering of the maximal cliques in \(G\). For each \(v\in V(G)\), let the interval representation of \(G\) obtained from \(\mathcal{O}\) be \(I(v)=[l(v),r(v)]\), where \(l(v)\) is the index of the leftmost clique in \(\mathcal{O}\) that contains \(v\), and \(r(v)\) is the index of the rightmost clique containing \(v\). Let \(\mathcal{I}^{\prime}=\{I(v)\mid v\in V(G)\}\). To construct the canonical interval representation, we associate a gadget \(D_{i}\) with maximal clique \(Q_{i}\), for \(1\leq i\leq r\). For every maximal clique \(Q_{i}\), we look at \(D_{i}\) and stretch those intervals in \(\mathcal{I}^{\prime}\) that either start at \(i\) or end at \(i\). Intuitively, we can think of \(I(v)\) as being _stretched_ to the left if \(l(v)=i\) and as being _stretched_ to the right if \(r(v)=i\). Inside gadget \(D_{i}\), there is a point, which we denote by \(z_{i}\), with the following property: any interval for which \(l(v)=i\), starts at \(z_{i}\) or to the left of \(z_{i}\) and any interval for which \(r(v)=i\), ends at \(z_{i}\) or to the right of \(z_{i}\). We refer to \(z_{i}\) as the _zero point_ of gadget \(D_{i}\). The exact construction of stretched intervals is detailed in the subsequent paragraphs.
The gadgets \(D_{1},D_{2}\ldots,D_{r}\) are arranged in the same order as that of the maximal cliques in \(\mathcal{O}\). Further, for each \(v\in V(G)\), the stretched interval associated with \(I(v)\) has \(D_{l(v)}\) as its left-most gadget and \(D_{r(v)}\) as its rightmost gadget. To complete the construction, between each pair of consecutive gadgets we add an additional point. We show later that this additional point is crucial in obtaining the exact hitting set of special cases of exactly hittable interval graphs. The stretched interval of \(I(v)\) contains all these additional points between consecutive gadgets in the ordered set \(\{D_{l(v)},D_{l(v)+1},\ldots,D_{r(v)}\}\). Let \(H_{G}=(\mathcal{V},\mathcal{I})\)
denote the canonical interval hypergraph thus obtained. \(\mathcal{V}\) is the set of all points internal to the gadgets (defined below) and the \(r-1\) additional points between consecutive gadgets (as described above). The hyperedges in \(\mathcal{I}\) are the stretched intervals corresponding to each interval in \(\mathcal{I}^{\prime}\). We now describe the gadget \(D_{i}\) associated with maximal clique \(Q_{i}\), \(1\leq i\leq r\).
**Construction of the gadget \(D_{i}\) for maximal clique \(Q_{i}\):** Let \(\{I(v_{1}),I(v_{2}),\)\(\ldots,I(v_{t})\}\) be the ordered set of intervals such that for each \(1\leq k\leq t\), \(l(v_{k})=i\) and \(r(v_{k})>r(v_{j})\) whenever \(1\leq k<j\leq t\). In other words, the ordered set considers the intervals whose left endpoint is \(i\) in descending order of their right endpoints. Then, for each \(1\leq k\leq t\), the left endpoint of the stretched interval of
Figure 2: Construction of Canonical Interval Representation _(\(i\))_ Interval Graph \(G\) with its maximal cliques \(Q_{1},Q_{2},Q_{3},Q_{4}\)_(\(ii\))_ Linear ordering of maximal cliques \(\mathcal{O}=\{Q_{1},Q_{2},Q_{3},Q_{4}\}\)_(\(iii\))_ Interval representation of \(G\) obtained from \(\mathcal{O}\)_(\(iv\))_ Gadgets \(D_{1}\) to \(D_{4}\)_(\(v\))_ Canonical interval representation for \(G\)
\(I(v_{k})\) is \(-k+1\): this can be viewed as stretching \(I(v_{k})\) to the left. On the integer line, the left endpoint of the stretched interval of \(I(v_{k})\) is \(z_{i}-k+1\). We next consider those intervals \(I(v)\) such that \(r(v)=i\). Let \(\{I(v_{1}),I(v_{2}),\ldots,I(v_{t})\}\) be the ordered set of intervals such that for each \(1\leq k\leq t\), \(r(v_{k})=i\) and \(l(v_{k})<l(v_{j})\) whenever \(1\leq k<j\leq t\). In other words, the ordered set considers the intervals whose right endpoint is \(i\) in ascending order of their left endpoints. Then, for each \(1\leq k\leq t\), the right endpoint of the stretched interval of \(I(v_{k})\) is \(k-1\): this can be viewed as stretching \(I(v_{k})\) to the right. On the integer line, the right endpoint of the stretched interval of \(I(v_{k})\) would be \(z_{i}+k-1\). This completes the description of the gadget \(D_{i}\). Note that for \(I(v)\) in \(\mathcal{I}\), the stretched interval is stretched to the left only in the leftmost gadget in which it is present, and it is stretched to the right in the rightmost gadget in which it is present. By construction, no two intervals share the same left endpoint and the same right endpoint.
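The whole construction can be summarized in a short sketch (our own code; the absolute integer coordinates chosen below are an assumption, since the construction only fixes the relative positions of the stretched endpoints):

```python
from collections import defaultdict

def canonical_representation(intervals):
    """Build the stretched intervals of the canonical representation.
    `intervals` maps each vertex v to its clique interval (l(v), r(v)) over
    the clique indices 1..r.  Returns a dict v -> [L, R] of stretched
    intervals on the integer line; no two stretched intervals share a left
    endpoint or a right endpoint."""
    r_max = max(r for (_, r) in intervals.values())
    starts, ends = defaultdict(list), defaultdict(list)
    for v, (l, r) in intervals.items():
        starts[l].append(v)
        ends[r].append(v)
    # lay out the gadgets D_1..D_r left to right, one spare point in between
    zero, pos = {}, 0
    for i in range(1, r_max + 1):
        zero[i] = pos + max(len(starts[i]) - 1, 0)     # zero point z_i of D_i
        pos = zero[i] + max(len(ends[i]) - 1, 0) + 2   # spare point, then D_{i+1}
    stretched = {}
    for i in range(1, r_max + 1):
        # intervals starting at clique i, in descending order of right endpoint
        for k, v in enumerate(sorted(starts[i], key=lambda u: -intervals[u][1])):
            stretched[v] = [zero[i] - k, None]
        # intervals ending at clique i, in ascending order of left endpoint
        for k, v in enumerate(sorted(ends[i], key=lambda u: intervals[u][0])):
            stretched[v][1] = zero[i] + k
    return stretched

# canonical_representation({'a': (1, 2), 'b': (2, 3), 'c': (1, 3)})
#   -> {'a': [0, 3], 'b': [3, 6], 'c': [1, 5]}
```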
Lemma 1: _Let \(H_{G}\) be the canonical interval representation of graph \(G\) as constructed using the above procedure. Then, \(G\) is isomorphic to the intersection graph of intervals in \(H_{G}\)._
Proof: The gadgets \(D_{1},\ldots,D_{r}\) are arranged according to the same order as the maximal cliques in the ordered set \(\mathcal{O}=\{Q_{1},Q_{2}\ldots Q_{r}\}\). For each \(v\in G\), the starting gadget (and the ending gadget) of interval \(I(v)\) and the starting maximal clique (and the ending maximal clique) of vertex \(v\) in \(\mathcal{O}\) are the same by construction. Further, \(I(v)\) contains all the points in the intervening gadgets between the starting and ending gadgets of \(I(v)\) just as \(v\) occurs in all the intervening maximal cliques between the starting and ending maximal cliques to which \(v\) belongs to. It follows that \(I(u)\) and \(I(v)\) intersect if and only if the corresponding stretched intervals have a non-empty intersection. Thus the intersection graph of intervals in \(H_{G}\) is isomorphic to \(G\).
## 3 Exactly Hittable Interval Graphs
Characterizing simple graphs as intersection graphs is a well-pursued line of study in graph theory. Harary had presented results on this problem in his book [11]. We address the question of when a simple graph is the intersection graph of an exactly hittable hypergraph. We modify the proof given by Harary to answer this question. In addition, we present similar results for the class of chordal graphs (refer to Section 1.1 for definition). We prove Observation 1 about arbitrary graphs and arbitrary chordal graphs.
**Proof of Observation 1:**
Proof: The proof of the first statement is based on a slight modification to the intersection model constructed from \(G\) in Theorem 2.5 in the book by Harary [11]. Let \(H=(\mathcal{V},\mathcal{E})\) be the intersection model constructed as follows. The universe \(\mathcal{V}\) of the hypergraph is \(V(G)\cup E(G)\). The set \(\mathcal{E}\) contains a hyperedge
for each vertex \(v\in V(G)\); the hyperedge \(E_{v}\) contains all the edges incident on \(v\) together with the element \(v\) itself. Clearly, the intersection graph of \(H\) is isomorphic to \(G\) and \(V(G)\) is an exact hitting set of \(H\).
The proof of the second statement, which is for a chordal graph \(G\), is similar and is as follows. Since \(G\) is a chordal graph let it be isomorphic to the intersection graph of some subtrees of a tree \(T\). In particular, let \(T\) be the clique tree of the chordal graph \(G\)[10]. Let \(\{T_{v}\mid v\in V(G)\}\) be the set of subtrees in \(T\), where \(T_{v}\) is the subtree associated with \(v\) and the tree nodes in \(T_{v}\) correspond to those maximal cliques in \(G\) which contain the vertex \(v\). We modify \(T\) to get \(T^{\prime}\) by adding \(n=|V(G)|\) new nodes, each corresponding to a vertex in \(V(G)\). For each \(v\in V(G)\), the new node corresponding to \(v\) is made adjacent in \(T\) to some node in \(T_{v}\). The resulting tree is \(T^{\prime}\) and \(T^{\prime}_{v}\) is the subtree of \(T^{\prime}\) consisting of \(T_{v}\) and the new node corresponding to \(v\). Clearly, the newly added nodes form an exact hitting set of the set \(\{T^{\prime}_{v}\mid v\in V(G)\}\) in \(T^{\prime}\), and the intersection graph of the subtrees \(\{T^{\prime}_{v}\mid v\in G\}\) is same as \(G\).
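The intersection model built in the first part of the proof can be written down explicitly (a minimal sketch; the use of networkx and the helper name are our own choices):

```python
import networkx as nx

def exactly_hittable_model(G: nx.Graph):
    """For every vertex v, the hyperedge E_v = {v} plus the edges incident to v;
    the universe is V(G) together with E(G).  Two hyperedges E_u and E_v
    intersect exactly when (u, v) is an edge of G (the shared element is the
    edge itself), and V(G) hits every hyperedge exactly once, in its vertex
    element."""
    hyperedges = {v: {v} | {frozenset(e) for e in G.edges(v)} for v in G.nodes}
    exact_hitting_set = set(G.nodes)
    return hyperedges, exact_hitting_set
```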
Interestingly, the property of being exactly hittable is not universal for the class of interval graphs. There are interval graphs that do not have any exactly hittable intersection model. In this paper, we present a forbidden structure characterization for the class of interval graphs that have an exactly hittable intersection model. In this section, we prove that every graph in \(\mathcal{F}\) (see Definition 1) is a forbidden structure for EHIG. First, we state and prove one direction of Theorem 2.
We use the following notations throughout the section. \(H^{\prime}\) denotes an interval representation of \(G\). We denote the open neighbourhood of vertex \(v\) by \(N(v)\). \(N(P)\) denotes open neighbourhood of all vertices in path \(P\). \(\mathcal{I}(P)\) denotes the set of intervals in \(H^{\prime}\) corresponding to vertices in path \(P\), \(X_{N(P)}\) denotes the set of independent vertices in \(N(P)\) and \(\mathcal{I}(X_{N(P)})\) denotes set of intervals in \(H^{\prime}\) corresponding to \(X_{N(P)}\).
Lemma 2: _Let \(G\) be an interval graph. Let \(\mathtt{F}\in\mathcal{F}\) be any forbidden structure. If \(G\) contains \(\mathtt{F}\) as an induced subgraph, then \(G\) is not an Exactly Hittable Interval Graph._
Proof: Our proof is by contradiction. Let \(H^{\prime}\) be any exactly hittable interval representation of \(G\). Let \(P\) be an induced path on \(k\) vertices in \(G\) that has an independent set of at least \(k+3\) vertices in its neighbourhood. Let \(S\) be an exact hitting set of \(H^{\prime}\). Recall that \(\mathcal{I}(P)\) denotes the set of intervals in \(H^{\prime}\) corresponding to vertices in path \(P\). By our assumption that \(G\) contains \(\mathtt{F}\), the number of intervals in \(\mathcal{I}(X_{N(P)})\) is at least \(k+3\), and since these intervals are pairwise disjoint, \(S\) must contain at least \(k+3\) distinct points hitting them. Since \(X_{N(P)}\) is an independent set, there can be at most two intervals in \(\mathcal{I}(X_{N(P)})\) that have their endpoints outside the union of intervals in \(\mathcal{I}(P)\) - one on either side of \(P\). Therefore, even if these two intervals in \(\mathcal{I}(X_{N(P)})\) are hit outside the intervals in \(\mathcal{I}(P)\) at either end, the remaining \(k+1\) independent intervals have to be hit inside the union of intervals in \(\mathcal{I}(P)\). Hence at least \(k+1\) points of \(S\) lie inside the union of intervals in \(\mathcal{I}(P)\). But there are only \(k\) intervals in \(\mathcal{I}(P)\). Therefore, by the pigeonhole principle, at least one interval among the intervals in \(\mathcal{I}(P)\) has to be hit more
than once. Thus \(S\) cannot be an exact hitting set of \(H^{\prime}\). We have arrived at a contradiction to the assumption that \(H^{\prime}\) is exactly hittable. Since we started with an arbitrary exactly hittable representation and arrived at a contradiction, we conclude that \(G\) is not exactly hittable.
Now, we prove the other direction of Theorem 2. Let \({\cal O}=\{Q_{1},Q_{2}\ldots Q_{t}\}\) be a linear ordering of maximal cliques in \(G\) (refer to Theorem 5 and Section 2). Let \(H_{G}\) be the canonical interval representation of \(G\) obtained from \({\cal O}\). We use the following notations in this section.
We denote a minimum clique cover of neighbourhood of a vertex \(v\) which is formed by the minimum number of maximal cliques in \({\cal O}\) by \(C(N[v])\). Note that such a clique cover exists. We prove a simple observation here.
**Observation 7**: _If \(Q_{i}\ldots Q_{j},\ i,j\in[1,t],i\leq j\) denote the maximal cliques containing vertex \(v\in V\), then \(Q_{j}\in C(N[v])\)._
Proof: We prove this by contradiction. Let us assume that \(Q_{j}\notin C(N[v])\). As \(Q_{j}\neq Q_{j-1}\), there exists a vertex \(u\) in \(Q_{j}\) which is not in \(Q_{j-1}\). It follows that \(u\) is not contained in any maximal clique preceding \(Q_{j}\), since the maximal cliques containing a vertex occur consecutively in the linear ordering of maximal cliques of an interval graph. Note that \(u\in N[v]\) because \(u\) and \(v\) both belong to \(Q_{j}\). Therefore, if \(Q_{j}\notin C(N[v])\), then \(u\) is not covered, contradicting the fact that \(C(N[v])\) is a clique cover of \(N[v]\). It follows that \(Q_{j}\in C(N[v])\).
From now on, when we refer to a minimum clique cover of the input graph, we mean a minimum clique cover formed by the minimum number of maximal cliques in \({\cal O}\) unless specified otherwise. Let \(|C(N[v])|\) denote the number of cliques in \(C(N[v])\). Similarly, we denote a minimum clique cover of vertices in the maximal cliques \(Q_{i}\) to \(Q_{j}\) in the ordering \({\cal O}\), \(i<j\), by \(C(Q_{i},\ldots,Q_{j})\).
Our proof is based on the structural properties of a path \(P\) in \(G\), the construction of which is presented in Algorithm 1. The structural properties of path \(P\) are proved as lemmas later in the section.
**Outline of Algorithm 1 :**
We construct an induced path \(P\) which contains a minimal set of vertices from graph \(G\). The vertices in path \(P\) are selected such that every maximal clique in \({\cal O}\) has a non-empty intersection with path \(P\). Further, we incrementally construct a clique cover of the given graph by taking the clique cover of the closed neighborhood each of the individual vertices in \(P\).
Let \(\{v_{1},v_{2},\ldots,v_{p}\}\) be the ordered set of vertices in path \(P\) with respect to the linear ordering \(\mathcal{O}\). Let \(v_{i},\ldots,v_{j},\ 1\leq i\leq j\leq p\), be any subset of vertices in path \(P\). We use \(CC(N[v_{i},v_{i+1},\ldots,v_{j}]),\ i\leq j\), to denote a clique cover of \((N[v_{i}]\cup N[v_{i+1}]\cup\ldots\cup N[v_{j}])\) and \(|CC(N[v_{i},v_{i+1},\ldots,v_{j}])|\) to denote the number of cliques in \(CC(N[v_{i},v_{i+1},\ldots,v_{j}])\). Note that \(CC(N[v_{1},\ldots,v_{p}])\) is a clique cover of \(G\). However, there exist graphs for which it is not a minimum clique cover. The clique cover of the closed neighborhood of vertices in path \(P\) is stored in \(\mathcal{K}\). We denote the maximal cliques which constitute \(\mathcal{K}\), in the order in which they appear in \(CC(N[v_{1},\ldots,v_{p}])\), by \(K_{1},K_{2},\ldots,K_{\alpha^{\prime}}\).
In any perfect graph, the size of a minimum clique cover equals the size of a maximum independent set. Based on this, we state an observation that we
Figure 3: Construction of path \(P\)
use in proving some important properties of the constructed clique cover of the neighborhood of vertices in path \(P\).
**Observation 8**: _In any perfect graph \(G^{\prime}\), for each maximal clique \(K\) in a minimum clique cover \(\mathcal{K}\) of \(G^{\prime}\), there exists a vertex \(u\in G^{\prime}\) such that \(u\) does not belong to any other maximal clique in \(\mathcal{K}\)._
Lemma 3: _For \(1\leq i\leq p\), \(|C(N[v_{i}])|\ \leq 3\)._
Proof: The proof is by contradiction. Let \(|\ C(N[v_{i}])\ |>3\). By definition, \(C(N[v_{i}])\) contains only the maximal cliques in the linear ordering \(\mathcal{O}\). From Observation 8, it follows that for each maximal clique \(Q\in C(N[v_{i}])\), there exists a vertex \(w\) which is unique to \(Q\). Since \(|C(N[v_{i}])|>3\), there exist at least 4 vertices \(w_{1},w_{2},w_{3},w_{4}\) in \(N[v_{i}]\) that form an independent set. It follows that \(v_{i}\), together with \(w_{1},w_{2},w_{3},w_{4}\), forms a forbidden structure \(K_{1,4}\) (refer to Figure 1(i)). This is a contradiction to our premise that \(G\) does not contain any forbidden structure.
Note: Refer to Algorithm 1 for the notation used in the proofs of the lemmas given below.
Lemma 4: _In the path \(P\), if for any vertex \(v_{i}\), \(1\leq i<p\), \(|C(N[v_{i}])|\ =3\), then \(|C(N[v_{i+1}])|\ \leq 2\)._
Proof: The proof is by contradiction. Assume that there exists a vertex \(v_{i}\in P,\ 1\leq i\leq p\) for which \(|C(N[v_{i}])|=3\) and \(|C(N[v_{i+1}])|\geq 3\). By Lemma 3, \(|C(N[v_{i+1}])|\) cannot exceed 3. Vertices \(v_{i}\) and \(v_{i+1}\) form an edge in the path \(P\).
We consider the following cases based on the cardinality of \(C(N[v_{i}])\cup C(N[v_{i+1}])\).
**Case when \(|\,C(N[v_{i}])\cup C(N[v_{i+1}])\,|=4\)**: Recall from Algorithm 1 that \(Q_{r}^{i-1}\) and \(Q_{r}^{i}\) are the maximal cliques in the ordering \(\mathcal{O}\) which contain the right endpoints of the intervals corresponding to \(v_{i-1}\) and \(v_{i}\), respectively. For any vertex \(v_{i}\in P,\ Q_{r}^{i-1},Q_{r}^{i}\in C(N[v_{i}])\). By our choice of \(v_{i+1}\) in the construction of path \(P\), \(v_{i+1}\in Q_{r}^{i}\setminus Q_{r}^{i-1}\). Thus \(v_{i+1}\) is covered by \(Q_{r}^{i}\). As per our assumption,
Figure 4: Forbidden structure formation
\(\mid C(N[v_{i}])\cup C(N[v_{i+1}])\mid\ \ =4\), and \(\mid C(N[v_{i}])\mid\ \ =3\) by our premise. Therefore those vertices of \(N[v_{i+1}]\) which are not covered by \(Q_{r}^{i}\) have to be covered by exactly one more clique. It follows that \(\mid C(N[v_{i+1}])\mid\ \ =2\), which is a contradiction to our assumption that \(\mid C(N[v_{i+1}])\mid\ \ =3\). This, in turn, is a contradiction to our initial premise that \(\mid C(N[v_{i}])\cup\ C(N[v_{i+1}])\mid=4\). Thus the only possibility is, \(\mid C(N[v_{i}])\cup C(N[v_{i+1}])\mid=5\), which we discuss in the next case. Observe that \(\mid C(N[v_{i}])\cup C(N[v_{i+1}])\mid\) cannot be greater than 5 since \(v_{i+1}\in Q_{r}^{i}\) and \(\mid C(N[v_{i+1}])\mid\ \ =3\).
**Case when \(\mid C(N[v_{i}])\cup C(N[v_{i+1}])\mid=5\)**: The proof is by contradiction to our premise that \(G\) does not contain any forbidden structure. We first show that \(C(N[v_{i}])\cup C(N[v_{i+1}])\) is indeed a minimum clique cover of \(N[v_{i}]\cup N[v_{i+1}]\). Then, using Observation 8, we show that there exists a forbidden structure. By definition, \(C(N[v_{i}])\) is a minimum clique cover of \(N[v_{i}]\). Therefore, each of the three maximal cliques in \(C(N[v_{i}])\) has at least one unique vertex which does not belong to any other maximal clique. Since \(v_{i+1}\in Q_{r}^{i}\) and \(Q_{r}^{i}\in C(N[v_{i}])\), \(Q_{r}^{i}\in C(N[v_{i+1}])\). Let the other two maximal cliques in \(C(N[v_{i+1}])\) be \(Q_{j}\) and \(Q_{k}\). By Observation 8, \(Q_{j}\) and \(Q_{k}\) contain a unique vertex each. It follows that any minimum clique cover of \(N[v_{i}]\cup N[v_{i+1}]\) contains all three maximal cliques of \(C(N[v_{i}])\) along with \(Q_{j}\) and \(Q_{k}\). Hence \(C(N[v_{i}])\cup C(N[v_{i+1}])\) is a minimum clique cover of \(N[v_{i}]\cup N[v_{i+1}]\). Applying Observation 8 to \(C(N[v_{i}])\cup C(N[v_{i+1}])\), there is a set \(V^{\prime}\) of 5 vertices, one private to each of these cliques, which form an independent set of size five. The edge \((v_{i},v_{i+1})\), together with \(V^{\prime}\), forms a forbidden structure (see Definition 1). Thus we have arrived at a contradiction. It follows that if \(\mid C(N[v_{i}])\mid=3\), then \(\mid C(N[v_{i+1}])\mid=2\).
**Observation 9**: _For each vertex \(v\in P\setminus v_{p}\), \(\mid C(N[v])\mid\geq 2\)._
Proof: By construction of path \(P\),
\[CC(N[v_{1},\ldots,v_{i}])=CC(N[v_{1},\ldots,v_{i-1}])\cup C(Q_{r+1}^{i-1}, \ldots,Q_{r}^{i})\]
where \(Q_{r}^{i-1}\) is the rightmost maximal clique in \(CC(N[v_{1},\ldots,v_{i-1}])\) and it covers \(v_{i}\). \(Q_{r}^{i}\) is the rightmost maximal clique to which \(v_{i}\) belongs in the ordering \(\mathcal{O}\). Note that \(C(N[v_{i}])=\{Q_{r}^{i-1}\}\cup C(Q_{r+1}^{i-1},\ldots,Q_{r}^{i})\). Let \(A=N[v_{i}]\cap(Q_{r+1}^{i-1}\cup\cdots\cup Q_{r}^{i})\). Since \(v_{i}\neq v_{p}\), we know that there is \(v_{i+1}\in P\) which is chosen such that \(v_{i+1}\notin Q_{r}^{i-1}\) and \(v_{i+1}\in N[v_{i}]\). It follows that \(A\neq\emptyset\). By the choice of \(v_{i}\), it has the rightmost right endpoint among all the vertices in \(Q_{r}^{i-1}\setminus Q_{r}^{i-2}\), and \(v_{i}\neq v_{p}\). Hence \(\exists u\in A\) which is not covered by \(Q_{r}^{i-1}\). Therefore, at least one clique \(Q\in C(Q_{r+1}^{i-1},\ldots,Q_{r}^{i})\) is needed to cover the vertices in \(A\). In other words, \(C(Q_{r+1}^{i-1},\ldots,Q_{r}^{i})\neq\emptyset\). It follows that \(C(N[v_{i}])\) is of size at least 2.
Now we prove a stronger claim.
Lemma 5: _In path \(P\), there is at most one vertex \(v\) where \(\mid C(N[v])\mid=3\)._
Proof: The proof of this claim is again by contradiction. Assume that there is more than one vertex with minimum clique cover equal to 3 in path \(P\). Let \(v_{i}\) be
the first such vertex in \(P\) in the increasing order of left endpoints. By Lemma 4, we know that the minimum clique cover of \(v_{i+1}\) is of size less than \(3\). By our assumption, \(\exists j>i+1\) such that minimum clique cover of \(v_{j}\) is of size \(3\). Let the number of vertices in the subpath of \(P\) from \(v_{i}\) to \(v_{j}\) (including both \(v_{i}\) and \(v_{j}\)) be \(l\). It follows from Observation 9 that for each vertex \(v_{k}\in P,\ k\in[i+1,j-1]\), the minimum clique cover is of size \(2\). We compute \(CC(N[v_{i},\ldots,v_{k}])\) with respect to \(CC(N[v_{i},\ldots,v_{k-1}])\). Note that \(v_{k}\) is already covered in \(CC(N[v_{i},\ldots,v_{k-1}])\) and \(CC(N[v_{i},\ldots,v_{k}])\) additionally covers \(N[v_{k}]\setminus N[v_{k-1}]\). Since \(|C(N[v_{k}])|=2\), it adds just \(1\) to the number of cliques in \(CC(N[v_{i},\ldots,v_{k-1}])\). i.e,
\[|CC(N[v_{i},\ldots,v_{k}])|=|CC(N[v_{i},\ldots,v_{k-1}])|+1\]
It follows that each of the \(l-2\) vertices in \(\{v_{i+1},\ldots,v_{j-1}\}\) increments the size of the clique cover by \(1\). i.e, \(|CC(N[v_{i+1},\ldots,v_{j-1}])|=l-2\). Thus
\[|CC(N[v_{i},\ldots,v_{j}])| =|C(N[v_{i}])|+|CC(N[v_{i+1},\ldots,v_{j-1}])|+|C(N[v_{j}])|-1\] \[=3+(l-2)+3-1\] \[=l+3\]
Note that we deduct \(1\) from \(|\ C(N[v_{j}])\ |\) since \(v_{j}\) is already covered by
\(CC(N[v_{i+1},\ldots,v_{j-1}])\). We can see that the vertices from \(v_{i}\) to \(v_{j}\) form a path on \(l\) vertices which has an independent set of size \(l+3\) in its neighbourhood. The vertices from \(v_{i}\) to \(v_{j}\), together with the independent set of size \(l+3\) in its neighbourhood, form a forbidden structure. We have arrived at a contradiction to our premise that \(G\) does not contain any forbidden structures. Therefore it is proved that in path \(P\), there is at most one vertex which has a minimum clique cover of size \(3\).
By Algorithm 1, \(\mathcal{K}=\{K_{1},K_{2}\ldots K_{\alpha^{\prime}}\}\) is the clique cover of path \(P\). Note that it forms a minimal clique cover of input graph \(G\).
_Claim._\(|\ K_{i-1}\cap K_{i}\cap K_{i+1}\ |\leq 1,\ 2\leq i\leq\alpha^{\prime}-1\).
Figure 5: Vertices \(v_{i}\) and \(v_{j}\) belong to three consecutive cliques
Proof: By Lemma 5, there is at most one vertex with minimum clique cover of size \(3\) in \(P\). If such a vertex exists, it would belong to three consecutive maximal cliques in \(\mathcal{K}\), and let us denote them by \(K_{a-1},\ K_{a}\) and \(K_{a+1}\), \(a\in[2,\alpha^{\prime}-1]\). For all other \(i\neq a,\ i\in[2,\alpha^{\prime}-1]\), \(\mid K_{i-1}\cap K_{i}\cap K_{i+1}\mid=0\). It follows that \(\mid K_{i-1}\cap K_{i}\cap K_{i+1}\mid\leq 1\) for \(2\leq i\leq\alpha^{\prime}-1\).
We now use the properties of \(P\) proved in above lemmas to construct a clique cover of \(G\) with the property that the cliques are all vertex disjoint. The clique cover is denoted by \(\mathcal{B}=\{B_{1},B_{2}\ldots B_{\alpha^{\prime}}\}\). We outline the steps in the procedure below.
Let \(v_{l},1\leq l\leq p\) be the vertex in \(P\) such that \(|C(N[v_{l}])|=3\). If no such vertex exists in \(P\), then let \(l=p+1\), and let \(K_{p+2}\) be the empty set. Further, \(K_{p+3}\) is the empty set. We know that there is at most one such vertex, and the construction below will also take care of the case when for all \(v_{i},1\leq i\leq p\), \(|C(N[v_{i}])|=2\).
1. For \(1\leq i\leq l-1\), \(B_{i}=K_{i}\setminus K_{i+1}\).
2. For \(i=l\leq p\), we define \(B_{l},B_{l+1},B_{l+2}\). * \(B_{l}=K_{l}\setminus K_{l+1}\). * \(B_{l+1}=K_{l+1}\setminus(K_{l}\cup K_{l+2})\) * \(B_{l+2}=K_{l+2}\setminus B_{l+1}\).
3. For \(i\geq l+1\), \(B_{i+2}=K_{i+2}\setminus K_{i+3}\).
Since \(G\) does not have the forbidden structure, it follows that \(\mathcal{B}\) is a clique cover, and by construction, it is a partition of the vertex set. Further, the number of cliques in \(\mathcal{B}\) is \(\alpha^{\prime}\).
To complete the proof of Theorem 2, we use the crucial property of the canonical representation \(H_{G}\) that no two intervals in \(H_{G}\) have the same left endpoint or the same right endpoint. Using this property, we show that for each \(B_{i}\) there is a point \(p_{i}\) in \(H_{G}\) such that the intervals in \(H_{G}\) which contain \(p_{i}\) are exactly those which correspond to the vertices in \(B_{i}\). These points \(p_{i},1\leq i\leq\alpha^{\prime}\), form an exact hitting set of \(H_{G}\). This completes the characterization of EHIG.
### Algorithm to recognize exactly hittable interval graphs
In this section, we present an algorithm to recognize an exactly hittable interval graph. This algorithm makes use of the canonical interval representation in Section 2 and the result by Dom et al. for the MMSC problem on interval hypergraphs [3]. In their paper, Dom et al. showed that an integer linear programming (ILP) formulation, say \(\mathcal{L}\), of the MMSC problem on interval hypergraphs can be solved in polynomial time. The coefficients of the inequalities in \(\mathcal{L}\) form a totally unimodular matrix, and the polyhedron corresponding to \(\mathcal{L}\) is an integer polyhedron. If the input instance is exactly hittable, then the optimum value returned is \(1\). We use this algorithm below to test whether a given interval hypergraph instance is exactly hittable.
**Algorithm** isEHIG: Given an interval graph \(G\), construct the canonical interval representation as described in Section 2. Let \(H_{G}\) be the resulting interval representation. Run the MMSC algorithm by Dom et al. [3] with \(H_{G}\) as input. If the algorithm returns the value 1, then return yes. Else return no.
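The recognition step can be sketched directly as a linear program (our own illustrative formulation using scipy, not the implementation of Dom et al.; the total unimodularity of interval matrices is what makes the LP optimum integral):

```python
import numpy as np
from scipy.optimize import linprog

def is_exactly_hittable_hypergraph(n_points, intervals):
    """Decide whether the interval hypergraph on points 1..n_points, with
    hyperedges given as (l, r) pairs, has an exact hitting set: minimise the
    maximum number of chosen points per interval subject to every interval
    being hit at least once; an optimum of 1 certifies exact hittability."""
    if not intervals:
        return True
    n = n_points
    c = np.zeros(n + 1)
    c[-1] = 1.0                                  # minimise z
    A_ub, b_ub = [], []
    for (l, r) in intervals:
        hit = np.zeros(n + 1)
        hit[l - 1:r] = -1.0                      # -(sum of x_p) <= -1
        A_ub.append(hit)
        b_ub.append(-1.0)
        cap = np.zeros(n + 1)
        cap[l - 1:r] = 1.0
        cap[-1] = -1.0                           # (sum of x_p) - z <= 0
        A_ub.append(cap)
        b_ub.append(0.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n + [(0, None)], method="highs")
    return res.success and abs(res.fun - 1.0) < 1e-6
```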
Lemma 6: _Algorithm isEHIG(\(G\)) outputs yes if and only if \(G\) is exactly hittable._
Proof: The proof follows from Lemma 1, Theorem 3 and the correctness of the algorithm for the MMSC problem on interval hypergraphs.
### Proper Interval Graphs are a subclass of EHIG
The following lemma proves our Theorem 4.
Lemma 7: _The set of Proper Interval Graphs is strictly contained in the set of EHIG._
Proof: Let \(G\) be a proper interval graph and let it be the intersection graph of the interval hypergraph \(H=(\mathcal{V},\mathcal{I})\) in which no interval properly contains another. Since \(H\) is a proper interval hypergraph, no two intervals in \(\mathcal{I}\) can have the same left endpoint. Hence we order the intervals in \(\mathcal{I}\) in increasing order of their left endpoints. Let this ordering be \(I_{1}<I_{2}<\ldots<I_{m}\). Add \(r(I_{1})\) (which is the smallest right endpoint among all intervals) to the set \(S\). Remove all intervals hit by \(r(I_{1})\). Recurse on the remaining set of intervals until all the intervals are hit by \(S\). Clearly, \(S\) is an exact hitting set.
To show the strict containment, we show that the graph \(K_{1,3}\) which is a forbidden structure [18] for Proper Interval Graphs has an exactly hittable interval representation. Let the vertices of the \(K_{1,3}\) be \(\{u,a,b,c\}\) and edges be \(\{(u,a),(u,b),(u,c)\}\). The intervals assigned to the vertices \(a,b,c\) and \(u\) are shown in Figure 6. Hence the lemma.
This completes the proof of Theorem 4.
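The greedy selection used in the first half of the proof of Lemma 7 can be sketched as follows (our own code; intervals are given as \((l,r)\) pairs of a proper interval hypergraph, so no interval contains another):

```python
def greedy_exact_hitting_set(intervals):
    """Repeatedly add the smallest right endpoint among the intervals not yet
    hit, and discard everything it hits.  For a *proper* interval family this
    yields an exact hitting set: a later chosen point can never fall inside an
    interval that was already hit."""
    remaining = list(intervals)
    S = []
    while remaining:
        p = min(r for (_, r) in remaining)        # smallest right endpoint
        S.append(p)
        remaining = [(l, r) for (l, r) in remaining if not (l <= p <= r)]
    return S

# greedy_exact_hitting_set([(1, 3), (2, 4), (4, 6)])  ->  [3, 6]
```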
|
2305.05708
|
Language models can generate molecules, materials, and protein binding
sites directly in three dimensions as XYZ, CIF, and PDB files
|
Language models are powerful tools for molecular design. Currently, the
dominant paradigm is to parse molecular graphs into linear string
representations that can easily be trained on. This approach has been very
successful, however, it is limited to chemical structures that can be
completely represented by a graph -- like organic molecules -- while materials
and biomolecular structures like protein binding sites require a more complete
representation that includes the relative positioning of their atoms in space.
In this work, we show how language models, without any architecture
modifications, trained using next-token prediction -- can generate novel and
valid structures in three dimensions from various substantially different
distributions of chemical structures. In particular, we demonstrate that
language models trained directly on sequences derived directly from chemical
file formats like XYZ files, Crystallographic Information files (CIFs), or
Protein Data Bank files (PDBs) can directly generate molecules, crystals, and
protein binding sites in three dimensions. Furthermore, despite being trained
on chemical file sequences -- language models still achieve performance
comparable to state-of-the-art models that use graph and graph-derived string
representations, as well as other domain-specific 3D generative models. In
doing so, we demonstrate that it is not necessary to use simplified molecular
representations to train chemical language models -- that they are powerful
generative models capable of directly exploring chemical space in three
dimensions for very different structures.
|
Daniel Flam-Shepherd, Alán Aspuru-Guzik
|
2023-05-09T18:35:38Z
|
http://arxiv.org/abs/2305.05708v1
|
Language models can generate molecules, materials, and protein binding sites directly in three dimensions as XYZ, CIF, and PDB files
###### Abstract
Language models are powerful tools for molecular design. Currently, the dominant paradigm is to parse molecular graphs into linear string representations that can easily be trained on. This approach has been very successful, however, it is limited to chemical structures that can be completely represented by a graph- like organic molecules- while materials and biomolecular structures like protein binding sites require a more complete representation that includes the relative positioning of their atoms in space. In this work, we show how language models, without any architecture modifications, trained using next-token prediction- can generate novel and valid structures in three dimensions from various substantially different distributions of chemical structures. In particular, we demonstrate that language models trained directly on sequences derived directly from chemical file formats like XYZ files, Crystallographic Information files (CIFs), or Protein Data Bank files (PDBs) can directly generate molecules, crystals, and protein binding sites in three dimensions. Furthermore, despite being trained on chemical file sequences- language models still achieve performance comparable to state-of-the-art models that use graph and graph-derived string representations, as well as other domain-specific 3D generative models. In doing so, we demonstrate that it is not necessary to use simplified molecular representations to train chemical language models- that they are powerful generative models capable of directly exploring chemical space in three dimensions for very different structures.
## I Introduction
Language models are autoregressive models for sequence generation that have shown impressive progress recently in natural language understanding using deep neural networks [1; 2; 3]. These advancements are driven by architecture improvements like the Transformer- a powerful neural network for sequential data that uses self-attention [4]. Transformers have found use in many significant scientific applications like protein structure prediction and design [5; 6; 7] and various tasks in cheminformatics [8; 9; 10]. An important scientific objective is the exploration of chemical space, in order to discover new drugs and materials [11]. Language models have enormous potential for chemical space exploration- which remains almost entirely unexplored given the \(10^{60}\) drug-like molecules [12] and even more potentially accessible materials. Already, recent large language models [1; 13] are having an impact on research in chemistry and molecular design [14; 15; 16].
Substantial work has been done using other neural networks in the exploration of chemical space- deep generative models [17; 18; 19; 20] can be trained on large datasets to generate novel functional compounds from any target distribution. An important question arises on how to best represent a molecule when training models to learn molecular representations. There are many different model architectures appropriate for different representations; indeed, the task at hand will heavily impact design choice. A popular approach is to directly use molecular graphs and make use of geometric deep learning to learn representations directly on atoms and bonds [21; 22; 23; 24; 25; 26; 27]- this strategy has been used to screen large compound libraries, and one attempt led to the discovery of novel antibiotics [27].
An alternate approach is to use SMILES or SELFIES string representations [28; 29] that linearize molecular graphs into strings. SMILES and SELFIES are widely used for machine learning-assisted molecular design [30; 31; 32]. SMILES and SELFIES are convenient for neural networks designed for sequences- which have proven to be powerful generative models of natural language [1; 33]. Indeed, they have been used to achieve state-of-the-art results with chemical language models using Long Short Term Memory networks (LSTMs) [34; 35].
However, strings and graphs are simplified representations of molecules- which are naturally represented as point clouds of atoms that include their three-dimensional (3D) positions in space. For many molecular design tasks, such as catalysis [36], this geometric information is essential. SMILES and SELFIES sidestep this complete representation of molecules and are entirely unable to represent materials- which cannot be simplified as graphs and have a complex periodic 3D structure. Indeed, the geometric structure of molecules and materials is an important determinant of their properties. Moreover, molecules and materials are complex structured data- discrete in terms of their atomic elements but continuous in terms of the coordinates of those elements.
Any 3D molecule, biomolecule, or material can be fully represented and stored as text data in XYZ, PDB, or CIF files (as the most common formats). These text files are effectively long strings defined by atom coordinate pairs and other information- to directly model these files
as strings using a language model is somewhat unintuitive given the continuous nature of 3D space. Indeed, for success, a language model has to learn many layers of validity- from the basic elements of the file structure to the complex spatial arrangements of atoms in any molecule or material. Given the impressive ability of language models to model complex molecular distributions using simple string representations [34]- is it possible for language models to generate molecules and crystals in three dimensions by training on entire XYZ, CIF or even PDB files? There are many structural distributions of molecules and materials that simple string representations cannot model and many potential domain-specific applications that can be tackled- if language models could directly model more complex chemical file formats.
In recent work, a few models have achieved state-of-the-art results by focusing on generating molecules and materials in 3D in a way that satisfies permutation, translation, rotation, and periodic invariances with SE(3) equivariant architectures [37; 38; 39; 40]. In contrast, for a language model that has none of these invariances built in, it is challenging to generate structures by placing atoms using absolute or Cartesian coordinates. Files in chemical formats can easily turn into very long sequences, even for small molecules- chemical language models using LSTMs will inevitably have issues learning important long-range dependencies. However, other architectures like Transformers process the entire sequence at a time and do not have this issue- and therefore are the models most likely to succeed at this task.
We test the ability of a transformer-based language model to generate molecules from a standard benchmark, ZINC [30], using sequences parsed from the XYZ files of these molecules. We further investigate the model's ability on materials- specifically crystals- by training it to directly generate sequences parsed from CIFs (Crystallographic Information files), from two recent crystal benchmarks: PEROV5 and MP20 [38]. In particular, we focus on assessing the model's ability to generate valid molecules and materials that reproduce the distributional properties of the training datasets.
In addition to training language models on all datasets, we compare with state-of-the-art baseline models that generate molecules or materials as point clouds in 3D space [39]. Despite having no built-in equivariance and being constrained by the data structure to place atoms using absolute coordinates generated one character or coordinate at a time, the results presented in this work demonstrate that a language model is comparable in ability to domain-specific 3D generative models.
We also further demonstrate the ability of language models to generate large molecular structures in 3D. For this we show that they can scale to protein binding sites in Protein Data Bank files- these are specific structural regions within proteins with hundreds of atoms that can only be truly represented as 3D point clouds.
Figure 1: A) The training datasets of structures that we benchmark language models on in this work. B) The overview of the training workflow – chemical file formats are converted to sequences of tokens using either character or coordinate-level tokenization. The language model is trained to predict the next token in these sequences.
We establish that current language models are powerful generative models for chemistry that can learn to generate structural distributions of molecules, biomolecular structures, and materials- directly in three dimensions.
## Results
The training workflow, example molecules, materials, and a protein binding site are displayed in Figure 1. The model is trained to predict the next token in a sequence obtained by processing chemical file formats (XYZ, CIF, or PDB) into sequences using different tokenization strategies. After simplifying each file and removing unnecessary information, we use two strategies. The first is character-level tokenization (**LM-CH**), where the model must generate every individual element of the file, including the spaces between coordinates and the characters that indicate a newline. The second is atom+coordinate-level tokenization (**LM-AC**), where the model only generates atom tokens such as 'C' for carbon and coordinate tokens such as '-1.98'; placing an atom in 3D space then requires 4 tokens: the atom token and the x, y, and z coordinate tokens. In both cases, we must first specify a level of numerical precision and round all floating-point numbers to 1, 2, or 3 decimal places. Additionally, because the model is not rotation or translation invariant, data augmentation by randomly rotating training structures is a useful tool to improve performance. More technical details about the model and training are available in the Methods section. We evaluate the model on each of the 3D chemical structure distributions detailed in the next sections.
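To make the two strategies concrete, the sketch below tokenizes a toy two-atom structure both ways; the helper names, the '#' newline stand-in, and the toy coordinates are our own illustrative assumptions rather than the exact preprocessing code used here.

```python
# Minimal sketch of the two tokenization strategies, assuming a structure is given
# as (element, x, y, z) tuples and coordinates are rounded to 2 decimal places.
mol = [("C", 0.0, 0.0, 0.0), ("O", 1.23, -0.45, 0.07)]

def to_lines(atoms, ndec=2):
    return [f"{e} {x:.{ndec}f} {y:.{ndec}f} {z:.{ndec}f}" for e, x, y, z in atoms]

def char_tokens(atoms, ndec=2):
    # LM-CH: every character is a token; '#' stands in for the newline character.
    return list("#".join(to_lines(atoms, ndec)))

def atom_coord_tokens(atoms, ndec=2):
    # LM-AC: one token per atom type plus one token per rounded coordinate (4 tokens/atom).
    toks = []
    for e, x, y, z in atoms:
        toks += [e, f"{x:.{ndec}f}", f"{y:.{ndec}f}", f"{z:.{ndec}f}"]
    return toks

print(char_tokens(mol)[:10])   # ['C', ' ', '0', '.', '0', '0', ' ', '0', '.', '0']
print(atom_coord_tokens(mol))  # ['C', '0.00', '0.00', '0.00', 'O', '1.23', '-0.45', '0.07']
```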
### Molecules
We test the model on sequences derived from XYZ files of molecules from the ZINC dataset [41], which consists of 250K commercially available molecules with, on average, 23 heavy atoms. We generate the XYZ files using rdkit's [42] built-in conformer generation tools. While other molecular datasets exist, ZINC is the most established benchmark for graph and string generative models of molecules, enabling a wider comparison. We train both language models on the XYZ-derived sequences, specifying a numerical precision of 2 decimal places for all atomic coordinates.
We generate 10K (ten thousand) molecules from the model in order to evaluate its performance and its ability to sample from the distribution of molecules used in training. We evaluate the model in two ways: first, we assess the 3D molecular geometries the model learns; second, we compare the model against other approaches using standard metrics for generative models in chemistry. Samples from the model are of high quality and closely resemble the training samples, as can be seen directly by visualizing them; random examples are shown in the supplementary information.
First, we assess whether the language model (**LM-AC**) can learn to generate molecules with 3D structures similar to those that would be generated by rdkit. To do this we plot the distribution of the rdkit-computed root mean square deviation (r.m.s.d.)
Figure 2: A histogram of root mean squared deviations in atomic positions between 10K molecules sampled from the language model and their corresponding conformers generated by rdkit. Six example molecules and geometries with various r.m.s.d. values are visualized explicitly and compared with their rdkit conformers.
of atomic positions between molecules generated in 3D by the language model and the corresponding molecules with 3D structures produced by rdkit's conformer tools. To attach some relative meaning to the values in the histogram, for six different r.m.s.d. values we plot molecules generated by the language model that have different geometric structures. We also show the corresponding rdkit structure and, in between, the two molecules aligned. We label the r.m.s.d. of each and indicate which region of the histogram the molecule lies in. The distribution of r.m.s.d. values mostly ranges between 1.0 and 2.0, with a tail from 2.0 to 4.0 that quickly trails off. We can see that the model does produce molecular geometries that are close in overall structure to the geometries produced by rdkit. Additional examples comparing rdkit's geometries with the language model's can be found in the supplementary information.
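The alignment-based r.m.s.d. can be sketched with a standard Kabsch superposition. Note that the figures here were produced with rdkit's own conformer and alignment tools, and a symmetry-aware comparison (such as rdkit's GetBestRMS) may give slightly different values, so the snippet below, which assumes a fixed atom correspondence, is only an illustration.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """r.m.s.d. after optimal rigid alignment of P onto Q (fixed atom correspondence)."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # proper rotation (det = +1)
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum(axis=1).mean()))

# toy check: a rigidly rotated copy of a point cloud has r.m.s.d. ~ 0
P = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0], [0.0, 0.0, 1.5]])
t = 0.7
Rz = np.array([[np.cos(t), -np.sin(t), 0.0], [np.sin(t), np.cos(t), 0.0], [0.0, 0.0, 1.0]])
print(kabsch_rmsd(P @ Rz.T, P))   # ~ 0.0
```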
Next, we compare molecules generated by the language model with samples from various other widely applied generative models for molecules. For these baselines, we consider models that explicitly train on 3D structures as well as models that train on molecular graph or string representations. For 3D generative models, we compare with G-SchNet [39], an auto-regressive 3D generative model that places atoms using interatomic distances, equivariant normalizing flows (ENF) [43], and equivariant diffusion for 3D molecular generation (EDM) [37]. Additionally, we consider chemical language models using a recurrent neural network with long short-term memory [35] trained on either SMILES (SM-LM) or SELFIES (SF-LM). We also train some popular deep graph generative models: the junction tree variational autoencoder (JTVAE) [18], which pieces together substructures, and models that generate molecular graphs by predicting individual atoms and bonds, namely the constrained graph variational autoencoder (CGVAE) [24] and the deep generative auto-regressive model of graphs (DGMG) [44].
We also use standard metrics such as validity, uniqueness, and novelty [20] to assess the model's ability to generate a diverse set of real molecules distinct from the training data. For models using graph and string representations we use rdkit to determine validity, while for 3D models we use xyz2mol [45]: a generated structure is counted as valid if it can be converted into a valid Mol object in rdkit. For a quantitative evaluation of a model's ability to learn its training distribution, we compute the earth mover's distance (WA) between property values of generated molecules and training molecules. We also compute the earth mover's distance between different samples of training molecules (TRAIN in Table 1), which acts as an oracle baseline that lower-bounds all earth mover's distances. For molecular properties, we consider the quantitative estimate of drug-likeness (QED) [46], the synthetic accessibility score (SA) [47], and the exact molecular weight (MW).
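The WA property metrics can be sketched as below using scipy's one-dimensional earth mover's distance and rdkit's property calculators; the SMILES lists are placeholders for generated and training samples, and the SA score would additionally require rdkit's contrib sascorer, so only MW and QED are shown.

```python
# Illustrative sketch of the WA (earth mover's distance) property metrics.
from rdkit import Chem
from rdkit.Chem import Descriptors, QED
from scipy.stats import wasserstein_distance

def properties(smiles_list):
    mols = [Chem.MolFromSmiles(s) for s in smiles_list]
    mols = [m for m in mols if m is not None]          # simple validity filter
    return [Descriptors.MolWt(m) for m in mols], [QED.qed(m) for m in mols]

generated = ["CCO", "c1ccccc1", "CC(=O)O"]             # stand-ins for model samples
training = ["CCN", "c1ccncc1", "CC(=O)N"]              # stand-ins for training samples
mw_g, qed_g = properties(generated)
mw_t, qed_t = properties(training)
print("WA(MW)  =", wasserstein_distance(mw_g, mw_t))
print("WA(QED) =", wasserstein_distance(qed_g, qed_t))
```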
In Table 1, we can see that the language models using character- and coordinate-level tokenization achieve performance competitive with models using graph and string representations. Indeed, the character-level language model performs comparably to the graph models but worse than the SMILES (SM-LM) and SELFIES (SF-LM) language models. The coordinate-level language model, however, achieves performance that is comparable to or better than all other models.
### Crystals
Next, we turn to materials such as crystals, which are structures that cannot be represented as graphs. Specifically, crystals are materials whose constituent atoms are arranged in a highly ordered lattice structure that repeats in all directions. Crystals are stored in a standard text file format known as CIF (Crystallographic Information File). Within a CIF, the structural information necessary to describe the crystal includes the atomic elements and coordinates as well as the parameters defining the periodic lattice. Similar to an XYZ file, a CIF lists atomic elements positioned in a unit cell or lattice, with six additional parameters necessary to define the unit cell. This information can be generated before the atomic elements and positions, either one character at a time or by treating each parameter as a single token. To test whether language models can generate crystals as CIF-derived sequences, we turn to curated datasets from recent work on generative models for crystals [38]. We focus on two of these datasets. The first, Perov5 [48], includes 18928 perovskite materials that share the same structure but differ in composition; there are 56 possible elements and all materials have exactly 5 atoms in the unit cell. The second, MP20 [49], consists of 45231 materials varying in both structure and composition; there are 89 elements and the materials have between 1 and 20 atoms in their unit cells. We use the exact same setup and evaluation as [38]; further details regarding the datasets and evaluation can be found there.
Following [38], we use several metrics to evaluate the validity, property statistics, and diversity of generated materials. We briefly detail them here. 1) Validity: a crystal is structurally valid if the shortest distance between any pair of atoms is larger than 0.5 Å [50], and the composition of a crystal is valid if the overall charge is neutral as computed
| Model type | Model | Valid (%) | Unique (%) | Novel (%) | MW (WA ↓) | SA (WA ↓) | QED (WA ↓) |
|---|---|---|---|---|---|---|---|
| Graph/String | Train | 100.0 | 100.0 | 100.0 | 0.81 | 0.013 | 0.002 |
| | SM-LM | 98.35 | 100.0 | 100.0 | 3.640 | 0.049 | 0.005 |
| | SF-LM | 100.0 | 100.0 | 100.0 | 3.772 | 0.085 | 0.006 |
| | DGMG | 79.63 | 100.0 | 99.38 | 88.94 | 3.163 | 0.095 |
| | JTVAE | 100.0 | 98.56 | 100.0 | 22.63 | 0.126 | 0.023 |
| | CGVAE | 100.0 | 100.0 | 100.0 | 45.61 | 0.426 | 0.038 |
| 3D | ENF | 1.05 | 96.37 | 99.72 | 168.5 | 1.886 | 0.160 |
| | GSchNet | 1.20 | 55.96 | 98.33 | 152.7 | 1.126 | 0.185 |
| | EDM | 77.51 | 96.40 | 95.30 | 101.2 | 0.939 | 0.093 |
| | **LM-CH** | 90.13 | 100.0 | 100.0 | 3.912 | 2.008 | 0.077 |
| | **LM-AC** | **98.51** | **100.0** | **100.0** | **1.811** | **0.026** | **0.004** |
Table 1: Generation performance for ZINC.
by SMACT [51]. 2) Coverage (Cov): COV-R (recall) and COV-P (precision) [52] measure the similarity between ensembles of generated materials and ground-truth test materials. 3) Property statistics: we compute the earth mover's distance (WA) between the property distributions of generated and test materials; for properties, we use the density (\(\rho\)) and the number of unique elements (# elem.). Following [38], we sample 10K materials after training to compute the evaluation metrics.
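A minimal sketch of the structural-validity criterion is given below, assuming the unit-cell parameters and fractional coordinates have already been parsed from a generated CIF; it checks all atom pairs across a 3x3x3 block of neighboring cells, and the charge-neutrality check via SMACT is omitted. The function names and the toy cell are our own.

```python
import numpy as np
from itertools import product

def lattice_matrix(a, b, c, alpha, beta, gamma):
    """Rows are the lattice vectors; cell angles are given in degrees."""
    al, be, ga = np.radians([alpha, beta, gamma])
    cx = np.cos(be)
    cy = (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)
    return np.array([[a, 0.0, 0.0],
                     [b * np.cos(ga), b * np.sin(ga), 0.0],
                     [c * cx, c * cy, c * np.sqrt(max(1.0 - cx**2 - cy**2, 0.0))]])

def structurally_valid(frac, cell, cutoff=0.5):
    """Shortest interatomic distance over periodic images must exceed `cutoff` (in angstroms)."""
    L = lattice_matrix(*cell)
    cart = frac @ L
    dmin = np.inf
    for shift in product([-1, 0, 1], repeat=3):
        disp = cart[None, :, :] + np.array(shift) @ L - cart[:, None, :]
        d = np.linalg.norm(disp, axis=-1)
        if shift == (0, 0, 0):
            d = d + np.eye(len(cart)) * 1e9      # ignore each atom's distance to itself
        dmin = min(dmin, d.min())
    return dmin > cutoff

frac = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])      # toy 2-atom cell
print(structurally_valid(frac, (4.0, 4.0, 4.0, 90.0, 90.0, 90.0)))   # True
```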
We compare the language model with the baselines from [38], which include the latest state-of-the-art generative models and methods: FTCP [53], a 1D CNN-VAE trained on a crystal representation that concatenates various properties (atom positions, atom types, diffraction pattern, etc.); GSchNet, used in [38] both in its original form, with lattices determined computationally afterward, and in a modified version (PGSchNet) that directly incorporates periodicity; and the best-performing model in [38], the crystal diffusion variational autoencoder (CDVAE). We also include an oracle (TRAIN), computed using a sample from the training data, that defines an upper bound for validity and coverage and a lower bound for the property statistics. The results are displayed in Table 2.
We train language models on CIF-derived sequences and first specify a numerical precision of 3 decimal places for all floating-point numbers (unit cell parameters and coordinates) in the CIF files.
From the results, it is clear that language models are capable of generating novel materials that maintain the properties of the crystals in both training distributions. Both the character- and coordinate-level language models show strong performance across all evaluation metrics, from validity to property statistics and diversity. Indeed, on the smaller crystal dataset Perov5, the language models achieve better metrics than the baseline models. On the larger, more structurally diverse MP20, the coordinate-level language model achieves the best performance on three of the six metrics and is close to state-of-the-art performance on the others. The character-level language model is slightly worse but still comparable to the other baselines, including CDVAE.
There are more complex materials on which to test the capabilities of language models, but the experiments and results presented here indicate the strong potential of language models for materials generation and design. It is important to note that, beyond these metrics, more work is necessary to verify the results with computational simulation and experiments.
### Protein Binding Sites
For the most challenging task, we test whether language models can generate biomolecular structures stored within PDB files. To test this in a limited way, we train language models on sequences derived from PDB files of protein binding sites. These are regions on proteins that bind to ligands: other molecules including small molecules, peptides, or other proteins. Typically they are small subsections of a protein, containing a few dozen residues and hundreds of atoms, that define a distinct geometric pocket or cavity.
We use a dataset of \(\sim\)180K protein-ligand pairs from [54]. As shown in Figure 3 A), we process the protein binding sites by removing the atoms in the residues furthest from the center of the protein-ligand complex until roughly 200-250 atoms remain. The training structures consist only of the remaining residues; the ligand is removed as well. No generative model based on graphs would be able to generate these protein pockets directly, since their 3D structure is what gives rise to their function.
Similar to XYZ files, PDB files contain atom information such as elements and coordinates, but also additional information related to protein structure, such as the residue each atom belongs to. Therefore, after simplifying the files and removing other extraneous information, we convert them to sequences using the residue information as well, which we tokenize jointly with the atom information. For example, atoms that are part of cysteine residues are identified with one of the following tokens: CYS-C, CYS-N, CYS-O, CYS-S. This allows the model to organize how each atom is placed by associating it with a local neighborhood defined by its specific amino acid.
We fix the numerical precision of each atomic position to two decimal places and, exactly as in atom+coordinate tokenization, use a single token for each x, y, and z coordinate, so that every atom has four tokens associated with it: one identifying its atomic element and residue, and three for its atomic position. Character-level tokenization produces sequences that are substantially longer, so we do not experiment with it for this task.
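A small sketch of this joint residue-atom tokenization, assuming each pocket atom is available as a (residue, element, x, y, z) record; the toy data and helper name are illustrative.

```python
# Sketch of the joint residue-atom tokenization for pocket atoms (2-decimal coordinates).
pocket = [("CYS", "C", 10.12, -3.40, 7.88),
          ("CYS", "S", 11.01, -2.95, 9.30),
          ("SER", "O", 8.75, -1.12, 6.02)]

def pocket_tokens(atoms, ndec=2):
    toks = []
    for res, elem, x, y, z in atoms:
        toks.append(f"{res}-{elem}")                    # e.g. 'CYS-C'
        toks += [f"{v:.{ndec}f}" for v in (x, y, z)]    # three coordinate tokens per atom
    return toks

print(pocket_tokens(pocket))
# ['CYS-C', '10.12', '-3.40', '7.88', 'CYS-S', '11.01', '-2.95', '9.30', 'SER-O', ...]
```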
We cannot use xyz2mol [45] to assess validity in this context since it is only applicable to smaller molecules; similarly, other standard validity and distribution metrics for drug-
| Dataset | Model | Valid Struc. (%) ↑ | Valid Comp. (%) ↑ | COV-R (%) ↑ | COV-P (%) ↑ | WA \(\rho\) ↓ | WA # elem. ↓ |
|---|---|---|---|---|---|---|---|
| Perov5 | Train | 100.0 | 98.60 | 100.0 | 100.0 | 0.010 | 0.005 |
| | FTCP | 0.24 | 54.24 | 0.00 | 0.00 | 10.27 | 0.630 |
| | GSchNet | 99.92 | 98.79 | 0.18 | 0.23 | 1.625 | 0.037 |
| | PGSchNet | 96.93 | 99.13 | 0.37 | 0.25 | 0.276 | 0.455 |
| | CDVAE | 100.0 | 98.59 | 99.45 | 98.46 | 0.126 | 0.063 |
| | **LM-CH** | 100.0 | 98.51 | 99.60 | 99.42 | 0.071 | 0.036 |
| | **LM-AC** | 100.0 | 98.79 | 99.78 | 99.36 | 0.089 | 0.028 |
| MP20 | Train | 100.0 | 91.13 | 100.0 | 100.0 | 0.051 | 0.016 |
| | FTCP | 1.55 | 48.37 | 4.72 | 0.09 | 23.71 | 0.736 |
| | GSchNet | 99.65 | 75.96 | 38.33 | 0.957 | 0.304 | 0.641 |
| | PGSchNet | 77.51 | 76.40 | 41.93 | 99.74 | 4.04 | 0.623 |
| | CDVAE | 100.0 | 86.70 | 99.15 | 99.49 | 0.688 | 1.432 |
| | **LM-CH** | 54.81 | 83.55 | 99.25 | 97.89 | 0.864 | 0.132 |
| | **LM-AC** | 95.81 | 88.87 | 99.60 | 98.55 | 0.696 | 0.092 |
Table 2: Crystal generation performance.
like molecules are not meaningful for protein pockets. Instead, to measure validity, after sampling 10K pockets from the model we check each residue individually with xyz2mol and make sure the atom composition of each residue is correct. Additionally, to check for overlapping atoms between residues, we check whether any atoms from different residues are closer than the smallest possible bond distance. Almost all sampled pockets (\(\sim\)99%) pass the xyz2mol and residue-composition check, while \(\sim\)5% of pockets fail the overlap check; we show some examples of these pockets in the supplementary information.
We also compare the training distribution to the model's learned distribution. First, using a bivariate kernel density estimate, we plot the joint distribution of the interatomic distance between pocket atoms and their furthest or closest neighbors. In Figure 3 B) we can see that the model closely matches the training distribution for both the closest and furthest neighboring atoms. In addition, in Figure 3 B), pockets sampled from the language model and training pockets have similar numbers of carbon, nitrogen, and oxygen atoms.
As a sanity check for memorization, we examine the ordering of the residues, essentially the amino acid sequence of the binding site (defined by the order in which residues appear in the PDB sequence, ignoring coordinates, e.g. ARG-SER-ASP-ILE\(\cdots\)), in pockets generated by the model. For comparison, of the \(\sim\)180K training pockets approximately 86.1% have unique orderings. Evaluating the pockets that the language model generates, we get \(\sim\)89.8% unique orderings of residues and, further, of these pockets, 83.6% have novel residue orderings that do not occur in the training pockets. This indicates the model is learning to generate mostly novel protein pockets with new amino acid sequences while maintaining the higher-level geometric structure that defines a protein pocket.
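This uniqueness/novelty check amounts to simple set operations over residue-name sequences; the sketch below uses made-up orderings purely for illustration.

```python
# Sketch of the residue-ordering sanity check (orderings ignore coordinates entirely).
train_orderings = {("ARG", "SER", "ASP"), ("GLY", "SER", "ASP"), ("ARG", "SER", "ILE")}
sampled = [("ARG", "SER", "ASP"), ("LEU", "SER", "ASP"), ("LEU", "SER", "ASP")]

unique = set(sampled)
novel = unique - train_orderings
print(f"unique orderings: {len(unique) / len(sampled):.1%}")   # distinct among samples
print(f"novel orderings:  {len(novel) / len(unique):.1%}")     # distinct and not in training
```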
Additionally, we display a few examples of training and model-generated pockets in Figure 3 C), including pockets shown with individual residues and atoms as well as pockets with the surface explicitly rendered, which helps highlight the actual geometric structure of the pocket. Qualitatively, both ways of visualizing the pockets demonstrate that the language model generates pockets with a geometric structure similar to that of the training examples.
## Discussion
We have demonstrated that language models can learn to generate molecules, materials, and biomolecular structures directly in three dimensions when trained
Figure 3: A) Protein pockets are pre-processed by removing residues far from the center of the pocket-ligand complex. B) A comparison between the model and training-data distributions of the interatomic distance between 10 random pocket atoms and their closest and furthest pocket atoms. Additionally, we show a box plot of the number of carbon, nitrogen, and oxygen atoms. C) Six different examples from the training data and sampled from the language model; the first three are plotted showing individual atoms, and the last three show the surface of the pocket.
successfully on sequences derived from chemical file formats like XYZ, CIF, and PDB. The results show that language models are powerful generative models capable of learning to generate complex chemical structures in three dimensions. Language models are not limited to simple string molecular representations like SMILES and SELFIES but can directly learn structured representations by merely predicting the next token in sequences derived from these representations. In contrast to most generative models for molecules, materials, and biomolecules, which are designed for very specific classes of structures, we demonstrate that language models, without any architectural changes and simply using next-token prediction, can generate a wide variety of different chemical structures. We showed that character-level language models are able to model small drug-like molecules and simple crystals, and that, with atom+coordinate-level tokenization, language models can generate biomolecular structures like protein binding sites that contain hundreds of atoms.
In future work, building on these results, there is enormous potential to use language models for the inverse design of molecules or materials, optimizing properties that depend on the geometric structure. We are also interested in testing language models on other, more complex classes of molecules and materials, such as metal-organic frameworks and other structures in molecular biology; another important potential area is structure-based drug discovery. Other aspects should be explored for further success, including the use of different tokenization strategies. A particular issue when directly tokenizing entire coordinates is the size of the vocabulary, which grows enormously as the structures being modeled grow. Predicting absolute coordinates, which are not rotation or translation invariant, is challenging for structures with hundreds of atoms; training on even larger structures will be difficult and may require large amounts of data.
We predict that larger and larger datasets of molecules and materials will become available in the future. As more data becomes available, language models will continue to improve and demonstrate their power by modeling tasks once thought impossible for them.
## II Methods
### Chemical structure Representations
**Molecules (XYZ files).** We represent a molecule as a point cloud of \(n\) atoms with elements \(e_{i}\in\{\texttt{C},\dots\}\) and positions \(x_{i},y_{i},z_{i}\in\mathbb{R}\), as follows:
\[\mathcal{M}=(e_{1},x_{1},y_{1},z_{1},\dots,e_{n},x_{n},y_{n},z_{n}) \tag{1}\]
**Crystals (CIF files).** Any crystal can be represented similarly but must include necessary information about the unit cell or lattice in addition to atomic positions and elements. The unit cell is a parallelepiped, so there are six necessary lattice parameters taken as the lengths of the cell edges \((\ell_{a},\ell_{b},\ell_{c})\) and the angles between them \((\alpha,\beta,\gamma)\). The positions of atoms inside the unit cell are described by fractional coordinates \((x_{i},y_{i},z_{i})\) along the cell edges. Thus crystals can be completely described using the following tuple:
\[\mathcal{C}=(\ell_{a},\ell_{b},\ell_{c},\alpha,\beta,\gamma,e_{1},x_{1},y_{1}, z_{1},\dots,e_{n},x_{n},y_{n},z_{n}) \tag{2}\]
**Protein pockets (PDB files).** We represent a protein pocket as a point cloud of \(n\) atoms with residue-atom indicators \(a_{i}\in\{\texttt{HIS-C},\texttt{HIS-N},\dots\}\) and positions \(x_{i},y_{i},z_{i}\in\mathbb{R}\), as follows:
\[\mathcal{P}=(a_{1},x_{1},y_{1},z_{1},\dots,a_{n},x_{n},y_{n},z_{n}) \tag{3}\]
### Tokenization
Ignoring special tokens, character-level models use a small vocabulary of \(\sim\)30 tokens consisting of atom-type tokens C, N, \(\dots\), digit characters and the minus sign, and other file symbols such as the newline character, which we represent with a hashtag, as well as a space token ' '.
Atom+coordinate-level models use a larger vocabulary of \(\sim\)100-10K tokens consisting of atom-type tokens C, N, \(\dots\) or atom-residue tokens \(\{\texttt{HIS-C},\texttt{HIS-N},\dots\}\), together with coordinate tokens such as -1.9, -1.98, or -1.987; these range from the smallest to the largest coordinate value and can have a restricted precision of between 1 and 3 decimal places.
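A sketch of how such a coordinate vocabulary could be enumerated at a fixed precision; the coordinate range, element list, and helper name are assumptions for illustration, since the actual vocabulary depends on the dataset.

```python
# Enumerate coordinate tokens on an assumed range at a chosen decimal precision.
def coordinate_vocab(lo=-20.0, hi=20.0, ndec=1):
    step = 10 ** (-ndec)
    count = int(round((hi - lo) / step)) + 1
    return [f"{lo + i * step:.{ndec}f}" for i in range(count)]

atom_tokens = ["C", "N", "O", "F", "S", "Cl"]
vocab = atom_tokens + coordinate_vocab(ndec=1)
print(len(vocab))   # ~400 tokens at 1 decimal place; roughly 10x more per extra decimal
```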
### Language Modeling
We frame language modeling as unsupervised distribution estimation from a set of examples \((x_{1},x_{2},\dots,x_{n})\), each composed of a variable-length sequence of tokens \(t_{i}\) such that \(x=(t_{1},t_{2},\dots,t_{n})\). The sequences here are chemical structures and so admit many possible orderings (restricted by the file and structural information), but regardless we factorize the joint probability as
\[p(x)=\prod_{i=1}^{n}p(t_{i}|t_{i-1},\dots,t_{1}) \tag{4}\]
This probability is modeled using a Transformer [2] whose parameters are trained using stochastic gradient descent.
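The training objective is ordinary next-token prediction on the tokenized sequences. The sketch below shows one such training step with a small causally masked Transformer encoder in PyTorch; the architecture and hyperparameters here are illustrative stand-ins (positional encodings and other GPT details are omitted), not the configuration described in the Methods.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model, seq_len = 120, 64, 32
embed = nn.Embedding(vocab_size, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
head = nn.Linear(d_model, vocab_size)
opt = torch.optim.Adam(
    list(embed.parameters()) + list(encoder.parameters()) + list(head.parameters()), lr=1e-4)

tokens = torch.randint(0, vocab_size, (8, seq_len))   # a batch of tokenized structures
inputs, targets = tokens[:, :-1], tokens[:, 1:]       # predict token t from tokens < t
L = inputs.size(1)
causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)  # block attention to future
logits = head(encoder(embed(inputs), mask=causal))
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
opt.step()
opt.zero_grad()
print(float(loss))
```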
### Data Augmentation
Since the model is not invariant to rotations or translations, we can improve performance by randomly rotating every training structure about its center at each epoch, effectively expanding the training data. Models trained without data augmentation can still achieve performance close to the state of the art.
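A minimal sketch of this augmentation, assuming a structure's coordinates are stored as an N x 3 array; the rotation is sampled via a QR decomposition of a Gaussian matrix, one of several standard ways to draw a random rotation, and the helper names are ours.

```python
import numpy as np

def random_rotation():
    """Random proper rotation matrix (det = +1) from the QR decomposition of a Gaussian matrix."""
    Q, R = np.linalg.qr(np.random.randn(3, 3))
    Q = Q @ np.diag(np.sign(np.diag(R)))   # fix column signs so the factorization is unique
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1                      # flip one axis if we drew a reflection
    return Q

def augment(coords):
    """Rotate a structure about its center; element/residue tokens are left unchanged."""
    center = coords.mean(axis=0)
    return (coords - center) @ random_rotation().T + center

coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
print(augment(coords))
```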
### Model architecture and Training
We use Transformers with the GPT architecture [2] that have roughly between \(\sim\)1 and 100 million parameters and use 12 layers, an embedding size between \(\{128,1024\}\), and 4 to 12 attention heads. For training we use a small batch size of between 4 and 32 structures and a starting learning rate between \(10^{-5}\) and \(10^{-4}\), decayed to \(9\cdot 10^{-6}\) over training. Example code can be found at [https://github.com/danielflamshep/xyztransformer](https://github.com/danielflamshep/xyztransformer).
## III Acknowledgements
A.A.-G. acknowledge funding from Dr. Anders G. Froseth. A.A.-G. also acknowledges support from the Canada 150 Research Chairs Program, the Canada Industrial Research Chair Program, and from Google, Inc. Models were trained using the Canada Computing Systems [55]. A.A.-G. also acknowledges support from the Acceleration Consortium at the University of Toronto.
|
2303.10935
|
On the exact quantum query complexity of $\text{MOD}_m^n$ and
$\text{EXACT}_{k,l}^n$
|
The query model has generated considerable interest in both classical and
quantum computing communities. Typically, quantum advantages are demonstrated
by showcasing a quantum algorithm with a better query complexity compared to
its classical counterpart. Exact quantum query algorithms play a pivotal role
in developing quantum algorithms. For example, the Deutsch-Jozsa algorithm
demonstrated exponential quantum advantages over classical deterministic
algorithms. As an important complexity measure, exact quantum query complexity
describes the minimum number of queries required to solve a specific problem
exactly using a quantum algorithm.
In this paper, we consider the exact quantum query complexity of the
following two $n$-bit symmetric functions $\text{MOD}_m^n:\{0,1\}^n \rightarrow
\{0,...,m-1\}$ and $\text{EXACT}_{k,l}^n:\{0,1\}^n \rightarrow \{0,1\}$, which
are defined as $\text{MOD}_m^n(x) = |x| \bmod m$ and $ \text{EXACT}_{k,l}^n(x)
= 1$ iff $|x| \in \{k,l\}$, where $|x|$ is the number of $1$'s in $x$. Our
results are as follows: i) We present an optimal quantum algorithm for
computing $\text{MOD}_m^n$, achieving a query complexity of $\lceil
n(1-\frac{1}{m}) \rceil$ for $1 < m \le n$. This settles a conjecture proposed
by Cornelissen, Mande, Ozols and de Wolf (2021). Based on this algorithm, we
show the exact quantum query complexity of a broad class of symmetric functions
that map $\{0,1\}^n$ to a finite set $X$ is less than $n$. ii) When $l-k \ge
2$, we give an optimal exact quantum query algorithm to compute
$\text{EXACT}_{k,l}^n$ for the case $k=0$ or $k=1,l=n-1$. This resolves the
conjecture proposed by Ambainis, Iraids and Nagaj (2017) partially.
|
Zekun Ye
|
2023-03-20T08:17:32Z
|
http://arxiv.org/abs/2303.10935v4
|
# On the exact quantum query complexity of \(\text{MOD}_{m}^{n}\) and \(\text{EXACT}_{k,l}^{n}\)
###### Abstract
The query model has generated considerable interest in both classical and quantum computing communities. Typically, quantum advantages are demonstrated by showcasing a quantum algorithm with a better query complexity compared to its classical counterpart. As an important complexity measure, exact quantum query complexity describes the minimum number of queries required to solve a specific problem exactly using a quantum algorithm. In this paper, we consider the exact quantum query complexity of two symmetric functions: \(\text{MOD}_{m}^{n}\), which calculates the Hamming weight of an \(n\)-bit string modulo \(m\); \(\text{EXACT}_{k,l}^{n}\), which determines if the Hamming weight of an \(n\)-bit string is exactly \(k\) or \(l\). Although these two symmetric functions have received much attention, their exact quantum query complexities have not been fully characterized. Our results are as follows:
* We design an optimal quantum query algorithm to compute \(\text{MOD}_{m}^{n}\) exactly and thus provide a tight characterization of its exact quantum query complexity. Based on this algorithm, we show the exact quantum query complexity of a broad class of symmetric functions is less than their input size.
* We give a tight characterization of the exact quantum query complexity of \(\text{EXACT}_{k,l}^{n}\) for some specific values of \(k\) and \(l\).
keywords: query complexity, exact algorithms, quantum computing
## 1 Introduction
The quantum query model is a computational model that describes the power and limitations of quantum algorithms in solving problems in a query-based setting. It has demonstrated the powerful ability of a quantum computer to perform certain computational tasks more efficiently than a classical computer, such as Simon's algorithm Simon (1995) and Shor's integer factorization algorithm Shor (1996). Moreover, the quantum query model has found applications in a variety of areas, including cryptography Shor (1996); Shor (1996), optimization Shor (1996); Ekert (1997), and learning theory Shor (1996); Ekert (1997).
In this paper, we focus primarily on the exact quantum query complexity of symmetric functions. The exact quantum query complexity is the minimum number of queries required to solve a specific problem exactly using quantum algorithms. As a classical counterpart, the deterministic query complexity is the minimum number of queries required to solve a specific problem with certainty using classical deterministic algorithms. A comprehensive survey on the query complexity can be found in [14]. Symmetric functions are functions that are invariant under permutations of their inputs, which have a wide range of applications in various fields of computer science such as coding theory and cryptography. A symmetric function is partial if it is defined only on a subset of its domain, otherwise it is total.
### Related work
The study of the exact quantum query complexity of partial symmetric functions has a long history. The Deutsch-Jozsa algorithm [11; 9] demonstrated an exponential separation between exact query complexity and deterministic query complexity for the first time. Furthermore, several exact quantum algorithms [16; 21; 7] showed quadratic speedup over classical counterparts for the problem of determining whether the Hamming weight of an \(n\)-bit string is 0 or 1. Subsequently, Qiu and Zheng [24; 25] determined the exact quantum query complexity and deterministic query complexity of a generalized Deutsch-Jozsa problem. He, Sun, Yang, and Yuan [15] established an asymptotically optimal bound for the exact quantum query complexity of distinguishing the Hamming weight of an \(n\)-bit string between \(k\) and \(l\). Qiu and Zheng [24; 26] studied the exact quantum query complexity for symmetric partial Boolean functions with degrees 1 or 2. In regards to the symmetric functions with large alphabet inputs, Li and Li [18] studied the promised element distinctness problem and proposed an optimal exact quantum algorithm.
The exact quantum query complexity of total symmetric functions has also been studied extensively. On the one hand, the best-known exact quantum algorithm for any \(n\)-bit non-constant symmetric Boolean function requires at least \(n/2\) queries. On the other hand, combining the lower bound on the degree of symmetric Boolean functions [30], the best-known result about the difference between consecutive primes [27] with polynomial methods [6], it leads to the following conclusion: any exact quantum algorithm for computing any \(n\)-bit non-constant symmetric Boolean function requires at least \(n/2-O(n^{0.525})\) queries. Moreover, Montanaro, Jozsa and Mitchison [22] indicated the exact quantum query complexity of all symmetric Boolean functions on up to 6 bits by numerical results. Ambainis, Gruska and Zheng [2] showed \(\mathrm{AND}_{n}\) is the only \(n\)-bit Boolean function, up to isomorphism, that requires \(n\) quantum queries to compute exactly. While the deterministic query complexity of all non-constant total symmetric functions is \(n\)[1; 22; 2], there are only a few total symmetric functions whose exact quantum query complexity is fully characterized, which are summarized as Table 1 (up to isomorphic). Note that the functions \(\neg\mathrm{OR}_{n}\) and \(\mathrm{AND}_{n}\) are special cases of EXACT\({}_{k}^{n}\) when \(k=0\) and \(n\).
There are some total symmetric functions that have been studied but whose exact quantum query complexity is not fully characterized, including \(\mathrm{MOD}_{m}^{n}\) and \(\mathrm{EXACT}_{k,l}^{n}\). Specifically, \(\mathrm{MOD}_{m}^{n}\) aims to compute the Hamming weight of an \(n\)-bit string modulo \(m\), which is a generalization of \(\mathrm{PARITY}_{n}\). Recently, Cornelissen, Mande, Ozols and de Wolf [10] showed that when the only prime factors of \(m\) are \(2\) and \(3\), the exact quantum query complexity of \(\mathrm{MOD}_{m}^{n}\) is \(\lceil n(1-\frac{1}{m})\rceil\). Moreover, they proved the exact quantum query complexity of \(\mathrm{MOD}_{m}^{n}\) is at least \(\lceil n(1-\frac{1}{m})\rceil\) for any \(1<m\leq n\). They then conjectured that this lower bound is tight, as stated in Conjecture 1. Afterward, using variational learning algorithms, Wu, Hou, Zhang, Li and Zeng [31] suggested that when \(m=n=5\), there exists a quantum algorithm using \(4\) queries to compute \(\mathrm{MOD}_{m}^{n}\), which is consistent with Conjecture 1.
**Conjecture 1** ([10]).: _For \(1<m\leq n\), the exact quantum query complexity of \(\mathrm{MOD}_{m}^{n}\) is \(\lceil n(1-\frac{1}{m})\rceil\)._
For the function \(\mathrm{EXACT}_{k,l}^{n}\) (\(k<l\)), which aims to determine whether \(|x|\in\{k,l\}\), Ambainis, Iraids and Nagaj [3] gave the best-known result: the exact quantum query complexity of \(\mathrm{EXACT}_{k,l}^{n}\) falls within a range of \(\max\left\{n-k,l-1\right\}\) to \(\max\left\{n-k,l+1\right\}\). Moreover, they showed if \(l-k\in\{2,3\}\), the lower bound is tight and proposed Conjecture 2. For \(\mathrm{EXACT}_{k,l}^{n}\), Wu et al. [31] also gave the numerical result about some instances of small sizes. For the case \(l-k=2\), \(n\) is even and \(k,l\) is symmetrically distributed around \(n/2\), the numerical result is consistent with Conjecture 2.
**Conjecture 2** ([3]).: _If \(l-k\geq 2\), the exact quantum query complexity of \(\mathrm{EXACT}_{k,l}^{n}\) is \(\max\{n-k,l\}-1\)._
### Our contribution
In this paper, we consider the above two conjectures and study the exact quantum query complexity of \(\mathrm{MOD}_{m}^{n}\) and \(\mathrm{EXACT}_{k,l}^{n}\). Our motivation is as follows:
* The exact quantum query complexities of \(\mathrm{MOD}_{m}^{n}\) and \(\mathrm{EXACT}_{k,l}^{n}\) are not fully characterized. Thus, we aim to improve the best-known results for these two functions.
| Function | Definition | Exact quantum query complexity |
|---|---|---|
| \(\mathrm{PARITY}_{n}\) | \(\mathrm{PARITY}_{n}(x)=\lvert x\rvert\bmod 2\) | \(\lceil n/2\rceil\) [9; 12; 6] |
| \(\mathrm{EXACT}_{k}^{n}\) | \(\mathrm{EXACT}_{k}^{n}(x)=1\) if \(\lvert x\rvert=k\), and \(0\) otherwise | \(\max\left\{k,n-k\right\}\) [4] |
| \(\mathrm{TH}_{k}^{n}\) | \(\mathrm{TH}_{k}^{n}(x)=1\) if \(\lvert x\rvert\geq k\), and \(0\) otherwise | \(\max\left\{k,n-k+1\right\}\) [4] |
Table 1: The exact quantum query complexity of several symmetric functions
* In the quantum model, we say a function is _evasive_ if its exact quantum query complexity equals its input size. \(\mathrm{MOD}_{m}^{n}\) is a key function to analyze the quantum evasiveness of the symmetric functions with large alphabet output. By studying the exact quantum query complexity of \(\mathrm{MOD}_{m}^{n}\), we can better understand the quantum evasiveness of a broad class of symmetric functions.
* At present, only a limited number of exact quantum algorithm design techniques are available, and it is interesting to obtain more exact quantum algorithm design paradigms.
Our contribution is as follows: i) We propose an optimal quantum query algorithm to compute \(\mathrm{MOD}_{m}^{n}\) exactly and thus prove Conjecture 1. Compared to the algorithm proposed in [10], our algorithm is more natural and works for any \(1<m\leq n\). As a corollary of this algorithm, we prove that a wide range of symmetric functions are not evasive in the quantum model. ii) We prove Conjecture 2 for the cases \(k=0\) and \(k=1,l=n-1\). Thus, we give a tighter characterization of the exact quantum query complexity of \(\mathrm{EXACT}_{k,l}^{n}\).
### Organization
The remainder of the paper is organized as follows. In Section 2, we review some definitions and notations used in this paper. In Section 3, we give an optimal exact quantum query algorithm to compute \(\mathrm{MOD}_{m}^{n}\) and analyze the quantum evasiveness of a broad class of symmetric functions. In Section 4, we discuss the exact quantum query complexity of \(\mathrm{EXACT}_{k,l}^{n}\). Finally, a conclusion is made in Section 5.
## 2 Preliminary
This section first gives some formal definitions of the quantum query model. For convenience, for an \(n\)-bit Boolean string \(x\in\{0,1\}^{n}\), we let \(x=x_{0}\cdots x_{n-1}\).
**Definition 1** (POVM [23]).: _A set of operators \(\{E_{j}\}\) is a POVM (Positive Operator-Valued Measure) if each operator \(E_{j}\) is positive and \(\sum_{j}E_{j}=I\). If a measurement described by \(\{E_{j}\}\) is performed upon a quantum state \(|\psi\rangle\), then the probability of obtaining outcome \(j\) is given by \(p(j)=\langle\psi|E_{j}|\psi\rangle\)._
**Definition 2** (Quantum query algorithms).: _A quantum query algorithm \(\mathcal{A}\) consists of an initial state \(\ket{\psi_{0}}\), a unitary operator sequence \(U_{T}O_{x}U_{T-1}O_{x}\cdots O_{x}U_{0}\) and a POVM \(\{E_{j}\}\), where \(U_{i}\)'s are fixed unitary operators, and \(O_{x}\) is a quantum query oracle dependent on \(x\), which is defined as \(O_{x}\ket{i}\ket{b}=\ket{i}\ket{b\oplus x_{i}}\), where \(i\in\{0,...,n-1\}\), \(b\in\{0,1\}\). The algorithm process is as follows:_
* _Prepare the initial state_ \(\ket{\psi_{0}}\)_;_
* _Perform unitary operations_ \(U_{0},O_{x},...,O_{x},U_{T}\) _sequentially on_ \(\ket{\Psi_{0}}\) _to obtain the quantum state_ \(\ket{\Psi_{x}}=U_{T}O_{x}U_{T-1}O_{x}\cdots O_{x}U_{0}\ket{\Psi_{0}}\)_;_
* _Perform the measurement described by_ \(\{E_{j}\}\) _upon the quantum state_ \(\left|\Psi_{x}\right\rangle\)_, use the measurement result as the output_ \(\mathcal{A}(x)\) _of the algorithm._
_The query complexity of the algorithm is defined as the number of query oracle \(O_{x}\) used in the algorithm._
As mentioned in [4], a quantum algorithm can also be described as a recursive algorithm with the following structure: First, perform unitary operation \(U_{1}O_{x}U_{0}\) and measure; second, depending on the measurement result, call a smaller instance of the algorithm. Such a recursive algorithm can be transformed into a quantum query algorithm described as Definition 2 with the same query complexity.
**Definition 3** (Exact quantum algorithms).: _Given function \(f:\{0,1\}^{n}\to X\), where \(X\) is a finite set, if a quantum algorithm \(\mathcal{A}\) satisfies \(\mathcal{A}(x)=f(x)\) for any \(x\in\{0,1\}^{n}\), then \(\mathcal{A}\) is an exact quantum algorithm to compute \(f\)._
**Definition 4** (Exact quantum query complexity).: _For function \(f:\{0,1\}^{n}\to X\), where \(X\) is a finite set. The exact quantum query complexity of \(f\), \(Q_{E}(f)\), is the minimal number of queries an exact quantum algorithm requires to compute \(f\)._
**Definition 5** (Univariate version of symmetric functions).: _For a symmetric function \(f:\{0,1\}^{n}\to X\), where \(X\) is a finite set, we define \(F:\{0,1,...,n\}\to X\) as \(F(x)=f(\left|x\right|)\) for any \(x\in\{0,1\}^{n}\), where \(\left|x\right|\) is the Hamming weight of \(x\), i.e., the number of \(1\)'s in \(x\)._
**Definition 6** (Majority index).: _Suppose \(x\in\{0,1\}^{n}\) and \(\left|x\right|\neq\frac{n}{2}\), we say \(i\) is a majority index of \(x\) if i) \(\left|x\right|>\frac{n}{2}\) and \(x_{i}=1\), or ii) \(\left|x\right|<\frac{n}{2}\) and \(x_{i}=0\)._
Next, we give some notations used in this paper. For any angle \(\theta\), let
\[\left|\theta\right\rangle^{\circ}=\cos\theta\left|0\right\rangle+\sin\theta \left|1\right\rangle.\]
For positive integer \(n\), let \([n]=\{1,\ldots,n\}\) and \(\mathbb{Z}_{n}=\{0,1,...,n-1\}\). For any \(a\in\mathbb{Z}_{n}\), the permutation operation \(U_{a}\) is defined as
\[U_{a}\left|i\right\rangle=\left|i+a\ \mathrm{mod}\ n\right\rangle.\]
Let
\[O_{x,a}=(U_{a}\otimes I)O_{x}(U_{-a}\otimes I). \tag{1}\]
Then
\[O_{x,a}\left|i\right\rangle\left|0\right\rangle=\left|i\right\rangle\left|x_{i -a}\right\rangle,\]
where the subtraction operator is with modulo \(n\). For \(i\in\{0,1\}\) and any angle \(\eta\), the unitary operator \(\mathrm{CR}_{\theta}\) is defined as
\[\mathrm{CR}_{\theta}\left|\eta\right\rangle^{\circ}\left|i\right\rangle=\left| \eta+i\theta\right\rangle^{\circ}\left|i\right\rangle. \tag{2}\]
For \(j\in\{0,...,n-1\}\), let
\[\left|\phi_{j}\right\rangle=\frac{1}{\sqrt{n}}\sum_{i=0}^{n-1}\left|i\right\rangle \left|ij\theta\right\rangle^{\circ},\]
\[\left|\phi_{j}^{*}\right\rangle=\frac{1}{\sqrt{n}}\sum_{i=0}^{n-1}\left|i\right \rangle\left|ij\theta+\pi/2\right\rangle^{\circ},\]
where \(\theta=\frac{2\pi}{n}\). Then \(\left\langle\phi_{j}|\phi_{j}^{*}\right\rangle=0\). For any \(j,k\in\{0,...,n-1\}\) and \(j\neq k\), we have \(\sin\frac{n(j-k)\theta}{2}=0\). Thus, by Fact 1, we have
\[\left\langle\phi_{j}|\phi_{k}\right\rangle=\left\langle\phi_{j}^{*}|\phi_{k}^{ *}\right\rangle=\frac{1}{n}\sum_{i=0}^{n-1}\cos i(j-k)\theta =0.\]
\[\left\langle\phi_{j}|\phi_{k}^{*}\right\rangle=-\frac{1}{n}\sum_{i=0}^{n-1} \sin i(j-k)\theta =0.\]
As a result, \(\left\{\left|\phi_{0}\right\rangle,...,\left|\phi_{n-1}\right\rangle,\left| \phi_{0}^{*}\right\rangle,...,\left|\phi_{n-1}^{*}\right\rangle\right\}\) is an orthonormal basis.
**Fact 1** ([17]).: _If \(\sin\frac{x}{2}\neq 0\), we have_
\[\sum_{k=0}^{n-1}\sin kx=\frac{\sin\frac{nx}{2}\sin\frac{(n-1)x}{2}}{\sin\frac{x}{2}},\qquad\sum_{k=0}^{n-1}\cos kx=\frac{\sin\frac{nx}{2}\cos\frac{(n-1)x}{2}}{\sin\frac{x}{2}}.\]
Let \(P_{j}^{\prime}=\left|\phi_{j}\right\rangle\left\langle\phi_{j}\right|+\left| \phi_{j}^{*}\right\rangle\left\langle\phi_{j}^{*}\right|\) for \(j\in\{0,...,n-1\}\). Then for any \(j\in\{0,...,n-1\}\), \(P_{j}^{\prime}\) is a projection operator and \(\sum_{j=0}^{n-1}P_{j}^{\prime}=I\). Thus, \(\left\{P_{j}^{\prime}\right\}\) is a POVM. For any \(j\in\{0,...,n-1\}\), let
\[P_{j}=P_{j}^{\prime}\otimes I_{2^{n-1}}. \tag{3}\]
Then \(\left\{P_{j}\right\}\) is also a POVM.
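As a quick numerical sanity check of the orthonormality claim (not part of the original argument), the following sketch builds the \(2n\) vectors \(|\phi_{j}\rangle,|\phi_{j}^{*}\rangle\) as real vectors in \(\mathbb{C}^{n}\otimes\mathbb{C}^{2}\) for \(n=6\) and verifies that their Gram matrix is the identity.

```python
import numpy as np

n = 6
theta = 2 * np.pi / n

def phi(j, shift=0.0):
    # |phi_j> (shift = 0) or |phi_j^*> (shift = pi/2) as a vector in C^n tensor C^2
    comps = [np.array([np.cos(i * j * theta + shift), np.sin(i * j * theta + shift)])
             for i in range(n)]
    return np.concatenate(comps) / np.sqrt(n)

basis = [phi(j) for j in range(n)] + [phi(j, np.pi / 2) for j in range(n)]
G = np.array([[u @ v for v in basis] for u in basis])
print(np.allclose(G, np.eye(2 * n), atol=1e-12))   # True: the 2n vectors are orthonormal
```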
## 3 Computing the Hamming weight modulo \(m\)
In this section, we present an optimal algorithm to compute \(\mathrm{MOD}_{m}^{n}\), which is defined as \(\mathrm{MOD}_{m}^{n}(x)=\left|x\right|\bmod m\) for any \(x\in\{0,1\}^{n}\). First, we give Algorithm 1 to compute \(\mathrm{MOD}_{n}^{n}\), where \(\theta=\frac{2\pi}{n}\). We verify the correctness of Algorithm 1 as follows. For any \(x\in\{0,1\}^{n}\), let \(S_{x}=\{i:x_{i}=1\}\). If \(i-a=j\) for some \(j\in S_{x}\), then \(a\theta x_{i-a}=(i-j)\theta\); otherwise, \(a\theta x_{i-a}=0\). Thus, we have
\[\sum_{a=1}^{n-1}a\theta x_{i-a}=\sum_{a=0}^{n-1}a\theta x_{i-a}=\sum_{j\in S_{ x}}(i-j)\theta.\]
If \(\left|x\right|\bmod m=k\), then
\[\sum_{j\in S_{x}}(i-j)\theta=ik\theta+\eta_{x}\]
for some angle \(\eta_{x}\). Thus,
\[\left|\psi_{x}\right\rangle =\frac{1}{\sqrt{n}}\sum_{i=0}^{n-1}\left|i\right\rangle\left|ik \theta+\eta_{x}\right\rangle^{\circ}\left|x_{i-1}\right\rangle\left|x_{i-2} \right\rangle\cdots\left|x_{i-n}\right\rangle\] \[=\frac{1}{\sqrt{n}}\sum_{i=0}^{n-1}\cos\eta_{x}\left|i\right\rangle \left|ik\theta\right\rangle^{\circ}\left|x_{i-1}\right\rangle\left|x_{i-2} \right\rangle\cdots\left|x_{i-n}\right\rangle\] \[+\frac{1}{\sqrt{n}}\sum_{i=0}^{n-1}\sin\eta_{x}\left|i\right\rangle \left|ik\theta+\pi/2\right\rangle^{\circ}\left|x_{i-1}\right\rangle\left|x_{i- 2}\right\rangle\cdots\left|x_{i-n}\right\rangle.\]
Finally, the probability of obtaining the measurement result \(k\) is
\[\left\langle\psi_{x}|P_{k}|\psi_{x}\right\rangle=\frac{1}{n}\sum_{i=0}^{n-1}( \cos^{2}\eta_{x}+\sin^{2}\eta_{x})=1.\]
Thus, the algorithm will output \(k\) with the probability \(1\), i.e., Algorithm 1 always outputs the correct result. Since we have \(O_{x,a}=(U_{a}\otimes I)O_{x}(U_{-a}\otimes I)\) as Equation (1), the number of query oracle \(O_{x}\) used in Algorithm 1 is \(n-1\).
```
Input:\(x\in\{0,1\}^{n}\); Output:\(\left|x\right|\bmod n\).
1. Prepare initial state \(\left|\psi_{0}\right\rangle=\frac{1}{\sqrt{n}}\sum_{i=0}^{n-1}\left|i\right\rangle \left|0\right\rangle^{\circ}\left|0\right\rangle^{\otimes n-1}\), which consists of an \(n\)-dimensional qudit and \(n\) ancillary qubits.
2. For \(a=1\) to \(n-1\), we perform \(O_{x,a}\) given by Equation (1) in the first qudit and the \((a+1)\)-th ancillary qubit sequentially to obtain the state \[\frac{1}{\sqrt{n}}\sum_{i=0}^{n-1}\left|i\right\rangle\left|0\right\rangle^{ \circ}\left|x_{i-1}\right\rangle\left|x_{i-2}\right\rangle\cdots\left|x_{i-(n- 1)}\right\rangle.\]
3. For \(a=1\) to \(n-1\), we perform \(\text{CR}_{a\theta}\) given by Equation (2) on the first ancillary qubit and the \((a+1)\)-th ancillary qubit sequentially to obtain the final state \[\left|\psi_{x}\right\rangle=\frac{1}{\sqrt{n}}\sum_{i=0}^{n-1}\left|i\right\rangle \left|\sum_{a=1}^{n-1}a\theta x_{i-a}\right\rangle^{\circ}\left|x_{i-1} \right\rangle\left|x_{i-2}\right\rangle\cdots\left|x_{i-(n-1)}\right\rangle.\]
4. Perform the measurement described by \(\{P_{j}\}\) defined in Equation (3) upon the quantum state \(\left|\psi_{x}\right\rangle\), and then output the measurement result.
```
**Algorithm 1** Compute \(\text{MOD}_{n}^{n}\).
Next, for \(1<m<n\), let \(c=\lfloor\frac{n}{m}\rfloor\) and \(n=cm+q\). Then \(0\leq q<m\). We give Algorithm 2 to compute \(\left|x\right|\bmod m\). The algorithm procedure is as follows. i) If \(q=0\), we partition \(x\) into \(m\)-bit substrings \(x^{(0)},\ldots,x^{(c-1)}\). For any \(0\leq i\leq c-1\), we compute \(b_{i}=\left|x^{(i)}\right|\bmod m\) by Algorithm 1. Finally, we output \(\left(\sum_{i=0}^{c-1}b_{i}\right)\bmod m\). ii) If \(q\neq 0\), we partition \(x\) into \(c\)\(m\)-bit substrings \(\left\{x^{(0)},\ldots,x^{(c-1)}\right\}\) and one \(q\)-bit
substring \(x^{(c)}\). For \(0\leq i\leq c-1\), we compute \(b_{i}=|x^{(i)}|\bmod m\) by Algorithm 1. Then we query all the elements in \(x^{(c)}\) and compute \(b_{c}=|x^{(c)}|\bmod m\). Finally, we output \((\sum_{i=0}^{c}b_{i})\bmod m\). We verify the correctness of Algorithm 2. If \(q=0\), then
\[|x|\bmod m =\left(\sum_{i=0}^{c-1}|x^{(i)}|\right)\bmod m\] \[=\left(\sum_{i=0}^{c-1}\left(|x^{(i)}|\bmod m\right)\right) \bmod m\] \[=\left(\sum_{i=0}^{c-1}b_{i}\right)\bmod m.\]
If \(q>0\), then we have \(|x|\bmod m=(\sum_{i=0}^{c}b_{i})\bmod m\) similarly. Thus, Algorithm 2 always gives the correct output. Moreover, the number of queries in Algorithm 2 is \(c(m-1)+q=n-c=\lceil n(1-\frac{1}{m})\rceil\).
```
Input: \(x\in\{0,1\}^{n}\), integers \(m,c,q\) such that \(1<m<n\), \(c=\lfloor\frac{n}{m}\rfloor\) and \(n=cm+q\);
Output: \(|x|\bmod m\).
for \(i=0\) to \(c-1\) do
    Let \(x^{(i)}=x_{im}\cdots x_{(i+1)m-1}\);
    Compute \(b_{i}=|x^{(i)}|\bmod m\) by Algorithm 1;
if \(q=0\) then
    return \((\sum_{i=0}^{c-1}b_{i})\bmod m\);
else
    Let \(x^{(c)}=x_{cm}\cdots x_{n-1}\);
    Query all the elements in \(x^{(c)}\) and let \(b_{c}=|x^{(c)}|\);
    return \((\sum_{i=0}^{c}b_{i})\bmod m\);
```
**Algorithm 2** Compute \(\operatorname{MOD}_{m}^{n}\).
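The classical recombination step and the query count of Algorithm 2 are easy to check directly. The sketch below (plain Python, with each block's Hamming weight modulo \(m\) computed classically as a stand-in for the quantum subroutine of Algorithm 1) verifies that summing the block results modulo \(m\) recovers \(|x|\bmod m\), and that \(c(m-1)+q=\lceil n(1-\frac{1}{m})\rceil\); the function name is ours.

```python
import math
import random

def mod_m_via_blocks(x, m):
    """Classical stand-in for Algorithm 2: m-bit blocks plus a q-bit tail."""
    n = len(x)
    c, q = divmod(n, m)
    queries = c * (m - 1)                       # Algorithm 1 uses m-1 queries per m-bit block
    total = sum(sum(x[i * m:(i + 1) * m]) % m for i in range(c))
    if q > 0:
        total += sum(x[c * m:])                 # the q tail bits are queried one by one
        queries += q
    return total % m, queries

random.seed(0)
for _ in range(200):
    n = random.randint(3, 40)
    m = random.randint(2, n - 1)
    x = [random.randint(0, 1) for _ in range(n)]
    value, queries = mod_m_via_blocks(x, m)
    assert value == sum(x) % m
    assert queries == math.ceil(n * (1 - 1 / m))
print("recombination and query count verified")
```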
The above results imply the following theorem:
**Theorem 1**.: _For \(1<m\leq n\), there exists an exact quantum query algorithm to compute \(\operatorname{MOD}_{m}^{n}\) using \(\lceil n(1-\frac{1}{m})\rceil\) queries._
Since Cornelissen et al. [10] showed that any quantum algorithm needs \(\lceil n(1-\frac{1}{m})\rceil\) queries to compute \(|x|\bmod m\) exactly, our algorithm is optimal and Conjecture 1 is proved. As an implication of Theorem 1, we show the following corollary:
**Corollary 1**.: _For any symmetric functions \(f:\{0,1\}^{n}\to X\), where \(X\) is a finite set, let \(F(|x|)=f(x)\) for any \(x\in\{0,1\}^{n}\). If there exists \(k\in[n]\) such that \(F(0)=F(k)\) and \(F(n-k)=F(n)\), then the exact quantum query complexity of \(f\) is less than \(n\). Moreover, the upper bound is tight, i.e., there exists a symmetric function \(f\) satisfying the above conditions whose exact quantum query complexity is \(n-1\)._
Proof.: If \(k=n\), we compute \(a=|x|\bmod n\) using \(n-1\) quantum queries by Algorithm 1 and then \(f(x)=F(a)\). If \(k\in\{1,...,n-1\}\), we give Algorithm 3 to compute \(f\). The algorithm procedure is as follows. First, we partition \(x\) into two substrings \(x^{\prime}\in\{0,1\}^{n-k}\) and \(x^{\prime\prime}\in\{0,1\}^{k}\). Then we compute \(a=|x^{\prime}|\bmod(n-k)\) and \(b=|x^{\prime\prime}|\bmod k\) by Algorithm 1. Then we discuss the following cases:
* If \(a\neq 0,b\neq 0\), then \(|x|=a+b\).
* If \(a\neq 0,b=0\), then we query \(x_{n-k}\) to determine \(|x^{\prime\prime}|=0\) or \(k\), and thus determine \(|x|\).
* If \(a=0,b\neq 0\), the case is similar to the above case.
* if \(a=0,b=0\), then we query \(x_{0}\). If \(x_{0}=0\), then \(|x|=0\) or \(k\); if \(x_{0}=1\), then \(|x|=n-k\) or \(n\).
```
Input: \(x\in\{0,1\}^{n}\), a symmetric function \(f:\{0,1\}^{n}\to\{0,1\}\) such that \(F(0)=F(k)\) and \(F(n-k)=F(n)\) for some \(k\in[n]\), where \(F\) is the univariate version of \(f\);
Output: \(f(x)\).
Let \(x^{\prime}=x_{0}\cdots x_{n-k-1}\), \(x^{\prime\prime}=x_{n-k}\cdots x_{n-1}\);
Compute \(a=|x^{\prime}|\bmod(n-k)\) and \(b=|x^{\prime\prime}|\bmod k\) using Algorithm 1;
switch \((a,b)\) do
    case \((a\neq 0,b\neq 0)\) do
        \(f(x)=F(a+b)\);
    case \((a\neq 0,b=0)\) do
        Query \(x_{n-k}\);
        if \(x_{n-k}=0\) then \(f(x)=F(a)\); else \(f(x)=F(a+k)\);
    case \((a=0,b\neq 0)\) do
        Query \(x_{0}\);
        if \(x_{0}=0\) then \(f(x)=F(b)\); else \(f(x)=F(n-k+b)\);
    case \((a=b=0)\) do
        Query \(x_{0}\);
        if \(x_{0}=0\) then \(f(x)=F(0)\); else \(f(x)=F(n-k)\);
```
**Algorithm 3** Compute \(f\).
In every case, the number of queries used is at most \((n-k-1)+(k-1)+1=n-1\), so the exact quantum query complexity of \(f\) is less than \(n\).
Furthermore, Ambainis et al. [2] proved that the exact quantum query complexity of a total symmetric Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) is \(n\) if and only if \(f\) is isomorphic to the \(\textsc{AND}_{n}\) function. Correspondingly, we conjecture that there exists a generalized characterization for all total symmetric functions \(f:\{0,1\}^{n}\to X\), where \(X\) is a finite set. Thus we give the following conjecture:
**Conjecture 3**.: _Given a total symmetric function \(f:\{0,1\}^{n}\to X\), where \(X\) is a finite set. Let \(F\) be the univariate version of \(f\). Then the exact quantum query complexity of \(f\) is \(n\) if and only if one of the following conditions satisfies:_
1. \(F(0)\neq F(i)\) _for any_ \(i\in[n]\)_;_
2. \(F(n)\neq F(i)\) _for any_ \(i\in\{0,\ldots,n-1\}\)_._
Suppose a function \(f\) satisfies item i). Then for any \(x\), if there exists an algorithm to compute \(f(x)\), then the algorithm also can compute \(\textsc{AND}_{n}(x)\), and thus \(Q_{E}(f)\geq Q_{E}(\textsc{AND}_{n})\). Similarly, if \(f\) satisfies item ii), then \(Q_{E}(f)\geq Q_{E}(\textsc{OR}_{n})\). Thus, \(Q_{E}(f)=n\). As a result, to solve the above conjecture, we only need to solve the following question: if there exist \(i\in[n],j\in\{0,...,n-1\}\) such that \(F(0)=F(i)\) and \(F(n)=F(j)\), whether the exact quantum query complexity of \(f\) is less than \(n\)? By Corollary 1, we have already proven that if \(i+j=n\), then the exact quantum query complexity of \(f\) is less than \(n\). Thus, we propose the following conjecture:
**Conjecture 4**.: _If a total symmetric function \(f:\{0,1\}^{n}\to X\) satisfies \(F(0)=F(i)\), \(F(j)=F(n)\) for some \(i\in[n],j\in\{0,...,n-1\}\) such that \(i+j\neq n\), then \(Q_{E}(f)<n\), where \(F\) is the univariate version of \(f\)._
If Conjecture 4 is proved, then Conjecture 3 is also correct.
## 4 Exact Quantum Query Complexity of \(\textsc{EXACT}_{k,l}^{n}\)
In this section, we consider the exact quantum query complexity of \(\textsc{EXACT}_{k,l}^{n}\) for \(l-k\geq 2\). The \(n\)-bit Boolean function \(\textsc{EXACT}_{k,l}^{n}\) is defined as follows:
\[\textsc{EXACT}_{k,l}^{n}(x)=\begin{cases}1,&\text{if }|x|\in\{k,l\},\\ 0,&\text{otherwise}.\end{cases}\]
In the following context, we need to use the \(n\)-bit Boolean function \(\textsc{EXACT}_{k}^{n}\), defined as
\[\textsc{EXACT}_{k}^{n}(x)=\begin{cases}1,&\text{if }|x|=k,\\ 0,&\text{otherwise}.\end{cases}\]
First, we consider the case \(k=0\). We give the following lemma:
**Lemma 1**.: _For \(x\in\{0,1\}^{n}\) and \(2\leq l\leq n\), there exists a quantum algorithm to compute \(\operatorname{EXACT}_{0,l}^{n}(x)\) with \(n-1\) queries._
Proof.: If \(l<n\), we provide the algorithm as follows. For \(i=0\) to \(n-l-1\), we query \(x_{i}\) until \(x_{i}=1\) for some \(i\). Then we consider the following two cases: i) If we find the smallest integer \(i\in[0,n-l-1]\) such that \(x_{i}=1\), let \(x^{\prime}=x_{i+1}\cdots x_{n-1}\). Then \(\operatorname{EXACT}_{0,l}^{n}(x)=\operatorname{EXACT}_{l-1}^{n-i-1}(x^{ \prime})\). Since \(\operatorname{EXACT}_{l-1}^{n-i-1}(x^{\prime})\) can be computed by \(\max\left\{n-i-l,l-1\right\}\) quantum queries [4], the total number of queries is
\[(i+1)+\max\left\{n-i-l,l-1\right\}=\max\left\{n-l+1,i+l\right\}\leq\max\left\{n-l+1,n-1\right\}=n-1,\] since \(i\leq n-l-1\) and \(l\geq 2\).
ii) If \(x_{i}=0\) for all \(0\leq i\leq n-l-1\), let \(x^{\prime}=x_{n-l}\cdots x_{n-1}\) and compute \(|x^{\prime}|\bmod l\) using Algorithm 1. If \(|x^{\prime}|\bmod l=0\), then \(|x|=0\) or \(l\), and thus \(\operatorname{EXACT}_{0,l}^{n}(x)=1\); otherwise, \(\operatorname{EXACT}_{0,l}^{n}(x)=0\). The total number of queries is \(n-l+l-1=n-1\).
If \(l=n\), we compute \(|x|\bmod n\) using \(n-1\) quantum queries by Algorithm 1, and then \(|x|\in\{0,l\}\) if and only if \(|x|\bmod n=0\).
Second, we consider the case \(k=1\) and \(l=n-1\). We give the following lemma.
**Lemma 2**.: _For \(x\in\{0,1\}^{n}\) and \(n\geq 4\), there exists a quantum algorithm to compute \(\operatorname{EXACT}_{1,n-1}^{n}(x)\) with \(n-2\) queries._
Proof.: We give a recursive algorithm as follows. The goal of the algorithm is to determine whether \(|x|\in\{1,n-1\}\). If \(|x|\in\{1,n-1\}\), the algorithm finds at least a majority index of \(x\).
* If \(n=4\), we compute \(x_{0}\oplus x_{1}\) and \(x_{2}\oplus x_{3}\) using \(2\) quantum queries by Algorithm 1. i) If \(x_{0}\oplus x_{1}=0,x_{2}\oplus x_{3}=1\), then \(|x|\in\{1,3\}\) and \(\{0,1\}\) are majority indices of \(x\); ii) if \(x_{0}\oplus x_{1}=1,x_{2}\oplus x_{3}=0\), then \(|x|\in\{1,3\}\) and \(\{2,3\}\) are majority indices of \(x\) similarly; iii) if \(x_{0}\oplus x_{1}=x_{2}\oplus x_{3}\), then \(|x|\in\{0,2,4\}\).
* If \(n=5\), then there exists an algorithm to determine whether \(|x|\in\{1,4\}\) using \(3\) quantum queries [3]. It is worth noting if \(|x|\in\{1,4\}\), the algorithm will find some \(i,j\) such that \(x_{i}\neq x_{j}\). Thus, all the indices except \(i,j\) are the majority indices of \(x\).
* If \(n>5\), let \(x^{\prime}=x_{2}\cdots x_{n-1}\). We compute \(x_{0}\oplus x_{1}\) using \(1\) quantum query first. i) If \(x_{0}\neq x_{1}\), we compute \(|x^{\prime}|\bmod n-2\) using Algorithm 1 to determine whether \(|x^{\prime}|\in\{0,n-2\}\). If \(|x^{\prime}|\notin\{0,n-2\}\), then \(|x|\notin\{1,n-1\}\); if \(|x^{\prime}|\in\{0,n-2\}\), then \(|x|\in\{1,n-1\}\) and \(\{2,...,n-1\}\) are majority indices of \(x\). ii) If \(x_{0}=x_{1}\), we call the algorithm recursively to determine whether \(|x^{\prime}|\in\{1,n-3\}\) and find a majority index \(i\) in \(x^{\prime}\) if \(|x^{\prime}|\in\{1,n-3\}\). Then we discuss the following two cases:
1. If \(|x^{\prime}|\notin\{1,n-3\}\), we have \(|x^{\prime}|\in\{0,2,...,n-4,n-2\}\). Since \(x_{0}=x_{1}\) and \(|x|=x_{0}+x_{1}+|x^{\prime}|\), we have \(|x|\in\{0,2,...,n-2,n\}\). Thus, \(|x|\notin\{1,n-1\}\); 2. If \(|x^{\prime}|\in\{1,n-3\}\), we compute \(x_{0}\oplus x^{\prime}_{i}\) using 1 quantum query. If \(x_{0}\neq x^{\prime}_{i}\), then \(|x|\in\{3,n-3\}\) and thus \(|x|\notin\{1,n-1\}\); if \(x_{0}=x^{\prime}_{i}\), then \(|x|\in\{1,n-1\}\).
Next, we prove by induction that the number of queries in the above algorithm to compute \(\text{EXACT}^{n}_{1,n-1}(x)\) is at most \(n-2\). i) For \(n=4\) and \(5\), the claim is easy to check; ii) Suppose the claim holds for every \(n\) with \(4\leq n<m\) for some integer \(m\geq 6\); we prove it for \(n=m\). If \(x_{0}\neq x_{1}\), the number of queries in the algorithm is \(1+(m-2-1)=m-2\); if \(x_{0}=x_{1}\), then by the induction hypothesis, the number of queries in the algorithm is at most \(1+(m-2-2)+1=m-2\). Thus, the claim also holds for \(n=m\).
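The recursion can be sketched in the same spirit as before: the \(n=5\) subroutine of [3], the parity queries, and Algorithm 1 are again treated as classical stand-ins charged their quantum cost, and the returned index is a majority index whenever \(|x|\in\{1,n-1\}\).

```python
def exact_1_nm1(x):
    """Recursive strategy of Lemma 2 for EXACT_{1,n-1}^n, n >= 4.

    Returns (in_set, maj_index, queries): in_set is True iff |x| in {1, n-1},
    maj_index is a majority index when in_set is True (None otherwise), and
    queries charges each subroutine its quantum cost.
    """
    n = len(x)
    assert n >= 4
    if n == 4:                              # two parity queries
        a, b = x[0] ^ x[1], x[2] ^ x[3]
        if a == 0 and b == 1:
            return True, 0, 2               # |x| in {1,3}; x_0 holds the majority bit
        if a == 1 and b == 0:
            return True, 2, 2
        return False, None, 2               # |x| in {0,2,4}
    if n == 5:                              # subroutine of [3], 3 queries
        if sum(x) in (1, 4):
            minority = 1 if sum(x) == 1 else 0
            i = x.index(minority)
            return True, (i + 1) % 5, 3     # any index other than i is a majority index
        return False, None, 3
    q, xp = 1, x[2:]                        # query x_0 xor x_1
    if x[0] != x[1]:
        q += n - 3                          # |x'| mod (n-2) via Algorithm 1
        if sum(xp) % (n - 2) == 0:          # |x'| in {0, n-2}, so |x| in {1, n-1}
            return True, 2, q               # x' is constant, so index 2 is a majority index
        return False, None, q
    ok, i, qs = exact_1_nm1(xp)             # recurse on x' of length n-2
    q += qs
    if not ok:
        return False, None, q
    q += 1                                  # query x_0 xor x'_i
    if x[0] != xp[i]:
        return False, None, q               # |x| in {3, n-3}
    return True, i + 2, q                   # the majority index of x' also works for x
```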
Combining Lemmas 1 and 2 with the lower bound \(Q_{E}(\text{EXACT}^{n}_{k,l})\geq\max\left\{n-k,l\right\}-1\) from [3], we obtain Theorem 2, which confirms Conjecture 2 in the above two cases.
**Theorem 2**.: _If \(l-k\geq 2\), then the exact quantum query complexity of \(\text{EXACT}^{n}_{k,l}\) is \(\max\left\{n-k,l\right\}-1\) for the case \(k=0\) and the case \(k=1,l=n-1\)._
## 5 Conclusion
In this paper, we have characterized the exact quantum query complexity of \(\text{MOD}^{n}_{m}\) for any \(1<m\leq n\). As a corollary, we have shown that a broad class of symmetric functions is not evasive in the quantum model. Additionally, we have given the tight exact quantum query complexity of \(\text{EXACT}^{n}_{k,l}\) for some cases. Furthermore, there are some open questions worth exploring.
* For total symmetric Boolean functions, there are still some basic function classes whose exact quantum query complexity has not been fully characterized. It would be interesting to investigate whether the techniques used in this article can be extended to these functions.
* How can one give a complete characterization of the class of symmetric functions mapping \(\{0,1\}^{n}\) to a finite set \(X\) whose exact quantum query complexity is less than \(n\)?
The study of the exact quantum query complexity of symmetric functions is an important area of research in quantum computing. While the exact quantum query complexities of a few symmetric functions are well-established, there remain many challenges in this domain. Further research is necessary to enhance our understanding of the exact quantum query complexity of symmetric functions and to explore new quantum algorithms in this field.
## Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgment
We would like to thank Penghui Yao for helpful comments. We also would like to thank Lvzhou Li and Jingquan Luo for pointing out a flaw in Algorithm 1 in an early version of this manuscript. This research was supported by National Natural Science Foundation of China (Grant No. 61972191) and the Program for Innovative Talents and Entrepreneurs in Jiangsu.
|
2310.19197
|
concrete: Targeted Estimation of Survival and Competing Risks in
Continuous Time
|
This article introduces the R package concrete, which implements a recently
developed targeted maximum likelihood estimator (TMLE) for the cause-specific
absolute risks of time-to-event outcomes measured in continuous time.
Cross-validated Super Learner machine learning ensembles are used to estimate
propensity scores and conditional cause-specific hazards, which are then
targeted to produce robust and efficient plug-in estimates of the effects of
static or dynamic interventions on a binary treatment given at baseline
quantified as risk differences or risk ratios. Influence curve-based asymptotic
inference is provided for TMLE estimates and simultaneous confidence bands can
be computed for target estimands spanning multiple times or events. In
this paper we review the one-step continuous-time TMLE methodology as it is
situated in an overarching causal inference workflow, describe its
implementation, and demonstrate the use of the package on the PBC dataset.
|
David Chen, Helene C. W. Rytgaard, Edwin C. H. Fong, Jens M. Tarp, Maya L. Petersen, Mark J. van der Laan, Thomas A. Gerds
|
2023-10-29T23:36:41Z
|
http://arxiv.org/abs/2310.19197v1
|
# concrete: Targeted Estimation of Survival
###### Abstract
This article introduces the R package concrete, which implements a recently developed targeted maximum likelihood estimator (TMLE) for the cause-specific absolute risks of time-to-event outcomes measured in continuous time. Cross-validated Super Learner machine learning ensembles are used to estimate propensity scores and conditional cause-specific hazards, which are then targeted to produce robust and efficient plug-in estimates of the effects of static or dynamic interventions on a binary treatment given at baseline quantified as risk differences or risk ratios. Influence curve-based asymptotic inference is provided for TMLE estimates and simultaneous confidence bands can be computed for target estimands spanning multiple times or events. In this paper we review the one-step continuous-time TMLE methodology as it is situated in an overarching causal inference workflow, describe its implementation, and demonstrate the use of the package on the PBC dataset.
## 1 Introduction
In biomedical applications evaluating treatment effects on time-to-event outcomes, study subjects are often susceptible to competing risks such as all-cause mortality. In recent decades, several competing risk methods have been developed; including the Fine-Gray subdistributions model (Fine and Gray, 1999), cause-specific Cox regression (Benichou and Gail, 1990), pseudovalue (Klein and Andersen, 2005), and direct binomial (Scheike et al., 2008; Gerds et al., 2012) regressions; and authors have consistently cautioned against the use of standard survival estimators for causal questions involving competing risks. Nevertheless, reviews of clinical literature (Koller et al., 2012; Austin and Fine, 2017) found that most trials still fail to adequately address the effect of potential competing risks in their studies. Meanwhile, formal causal inference frameworks (Rubin, 1974; Pearl et al., 2016) gained recognition for their utility in translating clinical questions into statistical analyses and the targeted maximum likelihood estimation (TMLE) (Laan and Rubin, 2006; Laan and Rose, 2011, 2018) methodology developed from the estimating equation and one-step estimator lineage of constructing semi-parametric efficient estimators through solving efficient influence curve equations. The targeted learning roadmap (Petersen and van der Laan, 2014) combines these developments into a cohesive causal inference workflow and provides a structured way to think about statistical decisions. In this paper we apply the targeted learning roadmap to an analysis of time-to-event outcomes and demonstrate the R package concrete, which implements a recently developed continuous-time TMLE targeting cause-specific absolute risks (Rytgaard and van der Laan, 2021, 2022; Rytgaard et al., 2023).
Given identification and regularity assumptions, **concrete** can be used to efficiently estimate the treatment effect of interventions given at baseline. In short, the implemented one-step TMLE procedure consists of three stages: 1) an initial estimation of nuisance parameters, 2) a targeted update of the initial estimators to solve the estimating equation corresponding to the target statistical estimand's efficient influence curve (EIC), and 3) a plug-in of the updated estimators into the original parameter mapping to produce a substitution estimator of the target estimand.
In **concrete** the initial nuisance parameter estimation is performed using Super Learning, a cross-validated machine-learning ensemble algorithm with asymptotic oracle guarantees (Laan and Dudoit, 2003; Laan et al., 2007; Polley et al., 2021), as flexible machine-learning approaches such as Super Learners with robust candidate libraries and appropriate loss functions often give users the best chance of achieving the convergence rates needed for TMLE's asymptotic properties. The subsequent targeted update is based on semi-parametric efficiency theory in that efficient regular and asymptotically linear (RAL) estimators must have influence curves equal to the efficient influence curve (EIC) of the target statistical estimand, see e.g. (Bickel et al., 1998; Laan and Rose, 2011, 2018; Kennedy, 2016). In TMLE, initial nuisance parameter estimates are updated to solve the estimating equation corresponding to the target EIC, thus recovering normal asymptotic inference (given that initial estimators converge at adequate rates) while leveraging flexible machine-learning algorithms for initial estimation. In Section 2.3 we outline how Super Learner is used to estimate nuisance parameters in **concrete**; more detailed guidance on how to best specify Super Learner estimators is provided in e.g., (Phillips et al., 2023; Dudoit and van der Laan, 2005; Vaart et al., 2006). Section 2.3 outlines the subsequent targeted update which is fully described in (Rytgaard and van der Laan, 2021).
|
2308.03385
|
Robots as AI Double Agents: Privacy in Motion Planning
|
Robotics and automation are poised to change the landscape of home and work
in the near future. Robots are adept at deliberately moving, sensing, and
interacting with their environments. The pervasive use of this technology
promises societal and economic payoffs due to its capabilities - conversely,
the capabilities of robots to move within and sense the world around them is
susceptible to abuse. Robots, unlike typical sensors, are inherently
autonomous, active, and deliberate. Such automated agents can become AI double
agents liable to violate the privacy of coworkers, privileged spaces, and other
stakeholders. In this work we highlight the understudied and inevitable threats
to privacy that can be posed by the autonomous, deliberate motions and sensing
of robots. We frame the problem within broader sociotechnological questions
alongside a comprehensive review. The privacy-aware motion planning problem is
formulated in terms of cost functions that can be modified to induce
privacy-aware behavior - preserving, agnostic, or violating. Simulated case
studies in manipulation and navigation, with altered cost functions, are used
to demonstrate how privacy-violating threats can be easily injected, sometimes
with only small changes in performance (solution path lengths). Such
functionality is already widely available. This preliminary work is meant to
lay the foundations for near-future, holistic, interdisciplinary investigations
that can address questions surrounding privacy in intelligent robotic behaviors
determined by planning algorithms.
|
Rahul Shome, Zachary Kingston, Lydia E. Kavraki
|
2023-08-07T08:07:53Z
|
http://arxiv.org/abs/2308.03385v1
|
# Robots as AI Double Agents: Privacy in Motion Planning
###### Abstract
Robotics and automation are poised to change the landscape of home and work in the near future. Robots are adept at deliberately moving, sensing, and interacting with their environments. The pervasive use of robotics promises societal and economic payoffs due to its capabilities--conversely, the capabilities of robots to move within and sense the world around them is susceptible to abuse. Robots, unlike typical sensors, are inherently autonomous, active, and deliberate. Such automated agents can become _AI double agents_ liable to violate the privacy of coworkers, privileged spaces, and other stakeholders. In this work we highlight the understudied and inevitable threats to privacy that can be posed by the autonomous, deliberate motions and sensing of robots. We frame the problem within broader sociotechnological questions alongside a comprehensive review. The privacy-aware motion planning problem is formulated in terms of cost functions that can be modified to induce privacy-aware behavior: preserving, agnostic, or violating. Simulated case studies in manipulation and navigation, with altered cost functions, are used to demonstrate how privacy-violating threats can be easily injected, sometimes with only small changes in performance (solution path lengths). Such functionality is already widely available. This preliminary work is meant to lay the foundations for near-future, holistic, interdisciplinary investigations that can address questions surrounding privacy in intelligent robotic behaviors determined by planning algorithms.
## I Introduction
Recent advances have introduced robots--particularly robots with manipulation capabilities--into applications such as home assistance [1], healthcare [2], service [3], and industry [4]. These settings require the robot to (i) _adapt_ to sensed information (e.g., camera images) which is probabilistic and uncertain and (ii) share space and interact with humans, introducing ethical concerns. We contend that the powerful capabilities of these systems urgently burden us with novel ethical concerns relating to unprecedented use of these systems which, if not addressed now, will lead to dystopian uses of robotics by naive or malicious actors. Robots are inherently tools of surveillance, have unprecedented access to spaces [5], are trusted in ways cameras and other technology is not [6], and have sensing capabilities that are poorly understood [7]--a robot that avoids you is also tracking you. Modern uses of robots combine consumer hardware with intricate frameworks of open-source libraries, middleware [8], learned models, and probabilistic algorithms--all of which exacerbate the opacity of a robotic system. Potential abuses are understudied, under-litigated [9] and traditional mitigation strategies are hard or impossible to apply, motivating an urgent need for understanding such threats.
Privacy violations by robots can take many forms, such as data over-collection beyond what is strictly necessary for operation that can be used for later inference (as is often observed in web technologies [10, 11]). Compromised robotic systems with poor security can leak information to unknown third parties [12, 13]. Exfiltration of sensor data or post-hoc analysis of camera data can violate privacy by monitoring coworkers or users, extracting privileged information from the workplace and the home (e.g., [14]). It behooves us to be wary of the uneasy parallels between the proliferation of robots for generating economic value and surveillance capitalism [15].
Focus in the literature has primarily addressed privacy concerns raised by robots and smart devices from a computer vision perspective [16, 17, 18, 19]. There is little work from a privacy standpoint on what makes robots unique--_their ability to move and interact with the physical world._ In this work we focus on a fundamental element of robotic autonomy
Fig. 1: Deployments of robots with sensors can expose threats to privacy from the ability of robots to autonomously use motion planning for choosing how its sensors gather data. In the figure, a sensor attached to a manipulator can gather data about coworkers during typical operation.
motion planning--and its relation to privacy. Robot operators can use custom costs, constraints, and objectives that may place humans co-working with robots in situations that may violate their privacy. Abuses of privacy give rise to what we call _robots as AI double agents_. To the best of the authors' knowledge, a comprehensive study of privacy implications of generalized motion planning is sorely under-explored.
The key contributions of this work are to present (a) a detailed motivating review of interdisciplinary literature that explains considerations of privacy at the confluence of society, technology, and engineering, drawing out the connection to the robotic motion planning problem; (b) a formalization of privacy in the motion planning problem that identifies privacy-violating functional definitions of feasibility and costs as potential vulnerabilities; (c) a set of motivating case studies based on typical manipulation and navigation scenarios using simple simulations and a straightforward weighted cost function to demonstrate that (i) simple cost function alterations can cause severe privacy-violating behavior, and (ii) privacy-violating behavior can be accompanied by only minor changes in traditional performance metrics (path length). The technical choices in the simulated study are simple modifications to the motion planning problem, using readily available open-source functionality, that serve to provide an illustrative testbed. The takeaways from this work point out the clear and present dangers to privacy posed in robotic motion planning.
## II The Bigger Picture of Privacy and Robotics
We first take a step back and look at where robotics lies within the broader context of engineering and cyber-physical systems. Many of the privacy considerations attributed to traditional uses and abuses of technology are aggravated by the power of robotic systems to not only be passive sensors, but also be autonomous in the physical space.
**Privacy:** We must concretely define what we mean by _privacy_[20]. A precise definition is closely tied to societal and legal interpretations in different parts of the world. We choose to refer to GDPR, a push towards common law privacy safeguards [21]. A closely related definition [22] promises safeguards that _"protects the individual against abuses which may accompany the collection and processing of personal data and which seeks to regulate at the same time the transfrontier flow of personal data."_
Note that we are separating security concerns from those of privacy (e.g., see [23, 24, 25, 13, 26] for ROS and security). However, privacy violating systems are more readily exploited when security has been compromised.
**Engineering Ethics:** In the context of robotics, a significant portion of the system design falls on automation deployers, consultants, and engineers. This draws a close connection between the questions of ethical automation and engineering ethics [27]. Ethics has been studied as an important aspect of engineering problems where solutions have to trade off ethical considerations and risks versus profit, efficiency, and output. The choices made by engineers can have critical societal impacts and unethical choices can have significant fallout. Engineering ethics is also deeply connected to morality and responsibility [28]. Intelligent automation lies under the shadow of this complicated relationship between technology, ethics, and society. Beyond sharing many common problems [29], the powerful capabilities of robotics presents unique challenges and threats.
**Cyber-physical Systems:** While analysis on privacy has been done in traditional cyber-physical systems [30, 31], the internet of things [32], and so on, intelligent robot systems have received little analysis. There also has been little understanding and observation from policy makers to the threat that robotics bring to privacy, particularly as these systems become more ubiquitous [9, 33]. There is an uncomfortable relationship between smart home devices and considerations of privacy and legalese [34, 35], including innocuous products like smart toys [36].
**Robots as Sensors:** From the perspective of a robot as a passive sensor platform, there has been much work in preserving privacy [37], through methods such as obfuscating sensor input [16], using degraded images (e.g., anonymizing faces [17], reducing quality of the camera feed to a teleoperator [18]), and redacting relevant parts of the scene [19]. There has also been use of "privacy markers" that indicate regions that should be removed or redacted from sensors [38, 39], automatically detecting these regions [40], and also extended to a case with mobile robots [41]. However, all these methods merely operate on the camera feed passively, and do not actively direct what information the sensor should gather. Even with minimal data, powerful inference can identify individuals (e.g., with motion [42]). In general, robots must collect only the data they need [33].
**Robots around Humans:** Trust is essential for embodied systems to operate reliably near humans [43]. Moreover, people are more "comfortable" with robots than with un-embodied cameras, more willingly expose themselves to privacy violations [6], and misunderstand the full capabilities of robotic systems to gather information [7, 44]. There are certain qualities that humans expect from robots, which relates to how robots can "fly under the radar" while acting. This relates to intention-aware planning [43].
**Privacy and Learning:** There has been much recent work on using machine learning-based methods for robot control, particularly in learning from human demonstrations [45, 46]. Learning based methods require large amounts of data, which is at odds with privacy concerns that require minimal data collection. There has been work in addressing privacy in deep learning [47, 48], namely in differential privacy [49, 50], but little from this literature has been applied to robotics and control [51], particularly in the context of manipulation.
Given the complexity of machine learning-based models, there is also potential for subterfuge, e.g., adding undetectable backdoors to modify behavior of a system [52]. Such backdoors could be used to induce malicious behavior in models used to nominally preserve privacy without the awareness of stakeholders.
**Privacy in Robotics Research:** Privacy is a pressing issue throughout the broader AI community [53]. Robotics in
particular is a concern for privacy, given the direct nature of a robotic system as a tool of surveillance, with sensing an inherent part of a robot's ability to understand the world [54]. However, considerations of privacy in the field of robotics have been far outpaced by the potential usages that might pose a threat. Fig. 2 is an approximation of what is an undeniable trend--robotics solutions and applications that have the potential to interface with humans are growing beyond the understanding of privacy (and ethics) in these contexts. [55] makes a similar observation.
There is some work in understanding privacy concerns for social robots [56, 57]--that is, systems primarily designed for human interaction and entertainment. Systems, such as household robots, have been shown to have poor security and privacy properties [12]. We focus instead on general scenarios where the intended purpose, such as a logistics task, can be compromised by motion planning.
**Privacy and Planning:** There has been prior work in incorporating deliberate reasoning about sensing within planning. Generally falling into the category of "active vision" [58], work exists in autonomous surveillance [59], searching for objects [60, 61], and chronicling [62, 63]. Some work has used the capabilities of robotic arms [64] in assistive applications [65]. There has been work for protecting the privacy of robots themselves [66, 67, 68], but these works typically apply to mobile or aerial applications. Drones form a common platform of choice for visibility-aware problems [69] with some work on aspects of privacy [70, 71, 72, 73, 74, 75]. These works focus on clearly defined areas to avoid (e.g., similar to explicitly marked privacy zones) but only deal with low-dimensional systems (e.g., mobile robots and drones). General robots with many joints--like manipulators that are beginning to be more broadly deployed--pose significant challenges due to the high-dimensionality of their search space. Addressing the problem for general platforms and manipulators is necessary to apply to broader scenarios.
## III Privacy-Aware Motion Planning
In this section we take a closer look at the role privacy plays in a fundamental aspect of robotics--motion planning. We introduce some of the technical and theoretical tools necessary to define and understand elements of privacy in motion planning. We focus on a threat model where a robot fitted with a sensor collects data while moving. Privacy-aware motion planning will be defined in terms of data collected on privacy-sensitive regions. The vulnerabilities framed in this section can potentially lie exposed to deployers and end-users. Modifications to parameters, like cost functions, in motion planning can lead to altered behavior by robots acting as AI double agents.
**Definition 1** (AI Double Agent).: _An AI double agent is a robot, that by virtue of altered reasoning and planning, exhibits autonomous behavior which violates the privacy of any human agent in the robot's workspace._
### _Privacy-Aware Motion Planning_
A robotic agent \(\mathbf{A}\) with \(d\) degrees of freedom, e.g., a robotic arm with \(d\) joints, is situated in a workspace \(\mathcal{W}\subseteq\mathbb{R}^{3}\) with obstacle regions \(o\subseteq\mathcal{W}\). The robot has a \(d\)-dimensional configuration space \(\mathcal{X}\subseteq\mathbb{R}^{d}\). Each configuration of the robot can be checked for feasibility, typically defined in terms of being collision-free with the obstacles. Denote a boolean feasibility function as \(\mathbf{v}:\mathcal{X}\rightarrow\{\mathbf{1},\mathbf{0}\}\). The invalid subset corresponds to \(\mathcal{X}_{\mathrm{obs}}=\{x\mid\mathbf{v}(x)=\mathbf{0},x\in\mathcal{X}\}\), while the valid subset is \(\mathcal{X}_{\mathrm{free}}=\mathcal{X}\setminus\mathcal{X}_{\mathrm{obs}}\). A motion planning problem requires connecting a start configuration \(x_{0}\in\mathcal{X}\) to a goal region \(\mathcal{X}_{goal}\) with a continuous, collision-free, time-parameterized trajectory \(\pi:[0,1]\to\mathcal{X}_{\mathrm{free}}\) where \(\pi[0]=x_{0},\pi[1]\in\mathcal{X}_{goal}\). Given all possible such feasible trajectories \(\Pi\ni\pi\), a cost function \(\mathbf{c}:\Pi\rightarrow\mathbb{R}_{\geq 0}\) assigns a non-negative real number to a trajectory. The cost is typically considered to be (or some function proportional to) the Euclidean path length. An optimal solution corresponds to the minimum cost \(\pi^{*}\in\operatorname*{arg\,min}_{\pi\in\Pi}\ \mathbf{c}(\pi)\).
In this work we introduce the element of privacy into the motion planning problem. For notational clarity we will use subscripts with \(\mathbf{p}\) to denote privacy-aware variants. A privacy-sensitive region, denoted by \(o_{\mathbf{p}}\subseteq\mathcal{W}\), is a region of the workspace associated with requirements for privacy preservation. A set of \(k\) such regions is denoted by \(\mathcal{O}_{\mathbf{p}}=\{o_{\mathbf{p}}^{1}\cdots o_{\mathbf{p}}^{k}\}\). A privacy feasibility function applies to a configuration and is denoted by \(\mathbf{v}_{\mathbf{p}}:\mathcal{X}\rightarrow\{\mathbf{1},\mathbf{0}\}\). Without loss of generality we will consider privacy-violating evaluations when \(\mathbf{v}_{\mathbf{p}}\) evaluates to \(\mathbf{0}\). A non-negative privacy-aware cost is defined along a trajectory \(\mathbf{c}_{\mathbf{p}}:\Pi\rightarrow\mathbb{R}_{\geq 0}\).
The evaluation of both the constraint and cost \(\mathbf{v}_{\mathbf{p}}\) and \(\mathbf{c}_{\mathbf{p}}\) will depend upon the privacy regions \(\mathcal{O}_{\mathbf{p}}\). The exact nature of this relationship will be affected by the precise setting under consideration including the kinematics of the robot, the attachment of the sensor, the sensing model (for instance visibility cone for a camera, etc). Our general formulation will leave these as necessary privacy-aware pieces within the otherwise typical motion planning problem. Note that the definition of the problem thus far can be applied to many general combinations of robots, sensors, and privacy regions.
### _Types of Privacy-Awareness_
The definitions of \(\mathbf{v}_{\mathbf{p}}\) and \(\mathbf{c}_{\mathbf{p}}\) can allow interactions with privacy regions \(\mathcal{O}_{\mathbf{p}}\) with three types of privacy-awareness:
**Privacy-Agnostic \((\mathbf{v},\mathbf{c})\)** classical motion planning has unmodified feasibility and cost functions.
Fig. 2: The figure shows the logarithmic growth of submissions to _Arxiv:CS.RO_ with the keywords _human_, _ethic_, and _privacy_ in their abstract between 2010 and 2022. Privacy and ethical concerns are far outpaced by applications that interact with humans.
**Privacy-Preserving (\(\mathbf{v}_{\mathbf{p}}^{+},\mathbf{c}_{\mathbf{p}}^{+}\))** choices penalize privacy violations along robot motions.
**Privacy-Violating (\(\mathbf{v}_{\mathbf{p}}^{-},\mathbf{c}_{\mathbf{p}}^{-}\))** choices promote privacy violations along robot motions.
Privacy-aware behaviors present a choice of functional alternatives. This leads us to the step where these are defined, which is up to the problem designer or deployer.
**Definition 2** (Motion Planning Double Agent).: _The design of privacy-violating \(\mathbf{v}_{\mathbf{p}}^{-}\) or \(\mathbf{c}_{\mathbf{p}}^{-}\) to replace the privacy-agnostic feasibility and cost functions, creates a double agent generating motion plans for the privacy-violating variant of the problem._
### _Privacy as a Secondary Objective_
The privacy-aware cost function \(\mathbf{c}_{\mathbf{p}}\), which induces either privacy-preserving or privacy-violating behavior, can encode or be a part of multiple objectives [76] within the motion planning problem. The threat model of interest, and indeed of greater risk and harder to detect, is expected to involve robots that perform their primary automation operation satisfactorily while _also_ achieving a secondary privacy-aware objective. For instance, consider a possible combination of the length of solution path (a traditional cost in motion planning) and the privacy region visibility by a camera attached to the robot. The manner of this combination can lead to different flavors of pareto-optimal [77] problems, while the continuous nature of the problem can relate to cost maps [78].
_Key to the broader scope is not the prescription of such a specific cost and feasibility. Rather, in the next section, we show that it suffices to design **minor changes to the cost function in standard motion planners to make them privacy-aware**. This is particularly interesting because of the relatively low barrier of access, as it might be possible for deployers or end-users with enough expertise to modify the parameters and modules of motion planning._
## IV Motivating Simulated Case Study
In this section we demonstrate how--using a candidate modified cost function--privacy-aware behavior can be injected into normal operation of a motion planner.
### _Candidate Model of Privacy-Aware Cost Function_
Consider a PRM* [79] as the motion planner that reports shortest paths over roadmaps [80] constructed in the robot's configuration space. _Custom cost functions_ change the graph edge weights, altering the discovered solutions executed by the robot. This basic functionality is readily available through powerful open-source libraries [81]. While being careful not to prescribe what a privacy-aware cost function should be, we provide a straightforward candidate for studying the effects introduced by weighted modifications to the Euclidean path length cost function. The trajectory \(\pi_{\mathbf{p}}\) is weighted multiplicatively or fractionally using a _privacy weight_ (\(\mathbf{w}\)) parameter (such that \(|\mathbf{w}|\geq 1\)) depending upon the interaction with the privacy regions. A negative weight is privacy-violating. The total cost will be calculated over a discretization (\(\Delta\pi\)) of the trajectory \(\pi_{\mathbf{p}}\). \(|\mathbf{w}|=1\) is privacy-agnostic.
**Privacy-Preserving (\(\mathbf{w}>1\)):**
\[\mathbf{c}_{\mathbf{p}}^{+}(\pi_{\mathbf{p}})=\sum_{\Delta\pi\in\pi_{ \mathbf{p}}}\begin{cases}\mathbf{w}\|\Delta\pi\|&\text{if privacy violated}\\ \frac{1}{\mathbf{w}}\|\Delta\pi\|&\text{otherwise}\end{cases} \tag{1}\]
**Privacy-Violating (\(\mathbf{w}<-1\)):**
\[\mathbf{c}_{\mathbf{p}}^{-}(\pi_{\mathbf{p}})=\sum_{\Delta\pi\in\pi_{ \mathbf{p}}}\begin{cases}\frac{1}{|\mathbf{w}|}\|\Delta\pi\|&\text{if privacy violated}\\ |\mathbf{w}|\|\Delta\pi\|&\text{otherwise}\end{cases} \tag{2}\]
Here \(|\cdot|\) and \(\|\cdot\|\) denote the absolute value and Euclidean arc length. In essence, all that is needed is an approximation of privacy violation (for instance, intersection with a camera cone with \(\mathcal{O}_{\mathbf{p}}\)), and functional choices that penalize or promote the privacy preservation or violation. Due to the generality of the underlying planning, this should apply to a large variety of sensor-attached high-dimensional robotic platforms. Though the cost function weighting represents a simple alteration, what is not obvious is _how the cost function affects the double agent's privacy and efficiency?_
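A minimal sketch of how such a weighted cost could be evaluated over a discretized trajectory and plugged into a roadmap planner as an edge weight follows; the `violates` callable is an assumed stand-in for the camera-cone/privacy-region intersection test (which in the case studies depends on the robot's kinematics and sensor model) and is not prescribed by the formulation above.

```python
import numpy as np

def privacy_aware_cost(waypoints, violates, w):
    """Weighted path-length cost of Eqs. (1)-(2).

    waypoints: (m, d) array of configurations discretizing a trajectory.
    violates:  assumed stand-in returning True if the sensor cone intersects
               a privacy region at the given configuration.
    w:         privacy weight with |w| >= 1; w > 1 preserving, w < -1
               violating, |w| = 1 agnostic.
    """
    assert abs(w) >= 1
    total = 0.0
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        seg = np.linalg.norm(b - a)            # Euclidean arc length ||dpi||
        hit = violates(0.5 * (a + b))
        if w >= 1:                             # Eq. (1): preserving / agnostic
            total += (w if hit else 1.0 / w) * seg
        else:                                  # Eq. (2): violating (w < -1)
            total += (1.0 / abs(w) if hit else abs(w)) * seg
    return total
```

Used as an edge-weight function on a roadmap, changing only \(\mathbf{w}\) switches between the agnostic, preserving, and violating variants without touching the planner itself.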
### _Case Studies_
To highlight motivating scenarios, we focus on two case studies with cameras attached to the robot while it moves. The camera visibility is approximated by a cone (shown in pink in Fig. 3 top) defined similar to the specifications of an _Intel Realsense_ camera (a 42\({}^{\circ}\) field of view and 2m range). While sensing these privacy regions poses its own challenges, here we choose to focus on the effects of planning by assuming these as input. Our motivating scenarios will introduce typical workspace settings where specific areas might be expected to contain these privacy-regions. Humans are represented as static mesh obstacles with spherical approximations (40cm radius) of privacy regions around their heads.
**Manipulation:** A manipulator with a camera attached to its wrist is set up in a workspace opposite to human collaborator(s) or customer(s). Such settings are common in _warehouse automation or service industries_. The task itself is typically concentrated in the shared workspace between the robot and the human, e.g., a table, a cashier's desk, a counter, etc. The robot is free to move the sensor (wrist) unencumbered, as long as it reaches its planning goal and avoids obstacles. The camera interacts with the regions of the workspace expected to contain human occupancy (spherical approximations shown in Fig. 3_(left, middle)_ for one and three individuals respectively).
**Navigation:** A mobile robot with a camera attached to a controllable joint is set up to navigate through a planar workspace filled with human co-workers or crowds. Such scenarios will come up in a variety of _mapping, navigation, cleaning, and monitoring_ tasks where the mobile robot operates within the floorplan while avoiding the obstacles (here humans). Since the head camera is freely controllable during its motions, the camera interacts with the regions of the workspace expected to contain human occupancy (spherical approximations shown in Fig. 3_(right)_ for nine individuals).
_Experimental Details:_ The simulation uses a Fetch in two controllable modes: a) **arm+torso**: a camera attached to
its wrist (8-dim \(\mathcal{X}\)) and b) **base+head**: a controllable head camera (5-dim \(\mathcal{X}\)). A PRM* [79] is constructed and reused across the experiments. Each choice of privacy cost alters the weights on this roadmap. Given random valid starts and goals, uniform cost search reports a solution. The metrics reported for the resultant paths are a) **privacy violation fraction**, which is the fraction of the path where the sensing cone intersects with any of the privacy regions, and b) **path length**. **w** (Eqs. (1) and (2)) is chosen from \(\{1,\pm 2,\pm 5,\pm 10\}\). The code was implemented with Robowflex [82] and OMPL [81].
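For concreteness, metric (a) can be computed from a discretized path with the same assumed `violates` stand-in used in the cost sketch above.

```python
import numpy as np

def privacy_violation_fraction(waypoints, violates):
    """Fraction of the discretized path length along which the sensor cone
    intersects any privacy region (metric (a) of the case studies)."""
    total, violated = 0.0, 0.0
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        seg = np.linalg.norm(b - a)
        total += seg
        if violates(0.5 * (a + b)):
            violated += seg
    return violated / total if total > 0 else 0.0
```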
### _Effects of Changing Privacy Cost (\(\mathbf{c_{p}}\))_
Fig. 3 demonstrates the different behaviors obtained by changing \(\mathbf{c_{p}}\) by tuning \(\mathbf{w}\). All the motions successfully connect the start and goal but differ in the interactions between the camera cone and privacy regions (Fig. 3 middle). Privacy-violating (negative) weights significantly increase the portion of the motions in which the camera _lingers_ on the privacy regions. The privacy-violating solutions are also longer (Fig. 3 bottom) further increasing the data gathered. Note how an example violating motion (Fig. 4 left) ends up gathering data on all the regions. In contrast, the privacy-agnostic paths are shorter but are still liable to capture data on the privacy regions. The 3-region benchmark shows violations along a quarter of the motions on average (Fig. 3 middle-middle). This motivates explicit reasoning about privacy preservation. Privacy-preserving paths immediately show the benefits of penalizing intersections between the cone and the privacy regions, with such violations dropping below 0.25% in all cases, while still maintaining relatively short paths. Note the privacy-preserving motion in Fig. 4 (right) completely avoids the privacy regions. Increasing the absolute magnitude of \(\mathbf{w}\) strengthens the preserving or violating behavior.
A trade-off arises in the data between privacy violation and performance, both of which might be attributed economic value in the design of automation. Large negatively weighted privacy violations are associated with large path length increases. Interestingly, in certain cases (\(\mathbf{w}=-2\)) high privacy violations (\(>0.58\)) arise on average with only a small dip in performance (\(<1.4\) times the agnostic path length) in all the case studies. We highlight that a double agent might be deliberately designed to **aggravate low-privacy, high-performance behaviors**. Additionally, trade-offs can arise when such data gathering might be necessary for detecting humans for safe operation and handling cases of dynamic or uncertain detections. Multiple sensors and targeted data gathering among multiple regions also pose risks. Though the studied cost function is _not_ prescriptive and several alternate
Fig. 3: (Top) A Fetch robot (with camera visibility cone in pink) controlling either the **arm+torso** with a hand camera (left, middle) or the **base+head** with a head camera (right). (From left to right) 1, 3, and 9 privacy regions (blue spheres) overlaid on humans in the workspace. Violin plots (with mean markers) for 100 random runs with the privacy violating fraction (middle) and path lengths (bottom) for different privacy weights. Mean values are given at the top.
models exist, the chief takeaways remain unchanged.
**Key Causes for Alarm:** The observations from our simple simulated case studies already illustrate serious causes for concern. (a) With relatively straightforward alterations to the cost function, **drastic privacy-violating behavior was introduced**. (b) Significant privacy-violating behavior **might only introduce relatively small changes to the performance** (privacy-agnostic path length). (c) **Functionality to plug in custom cost function is readily available** in open-source planning libraries. (d) Privacy-preserving behavior **only emerges when deliberately included** in the cost function.
## V Discussion
The work presented here has highlighted the privacy threats exposed by robotic double agents capable of autonomous planning and reasoning to move and sense in deliberate ways. This work contributes a thorough review of the interdisciplinary connections between privacy and motion planning, formulates the role of privacy in the motion planning problem, and showcases imminent privacy-violating threats using motivating simulated case studies and straightforward cost function modifications to motion planning. The work calls for a more consolidated investigation.
**Human-Centric Factors:** The human aspect is essential in the problem. The current simulated studies motivate the need for a deeper understanding of how humans perceive privacy-aware robotic behavior. The human-centric factors here are intrinsic to privacy, necessitating the investigations to be centered around the rights and protections of humans. There has been related human-robot interaction work that studies the anthropocentric principles to align robot motion to human expectations in intention-aware planning [83, 84, 85], the consideration of ethical or human-focused value alignment [86], and planning with legibility or human interpretation as an objective [87, 88, 89]. There has also been work on communication [90], understanding robot motion [91, 92], and parameterized social interactions [93] in navigation. Tradeoffs can exist between human-awareness in robots and privacy. The current study also raises questions about the context and expectations of privacy from the robot--an understanding by the human of an ongoing or imminent privacy violation.
**Verification and Mitigation:** This work admittedly raises more questions than answers with respect to the ways in which the threats posed by robotic planning can be identified and mitigated. While intentional deployment and recommended practices of privacy-preserving planning can achieve some headway, it does not resolve the issue of the bad actor or even a naive one (who exposes threats that exist even in privacy-agnostic planning). There needs to be broader discussions among social scientists, ethicists, and policy-makers to inform rules, regulations, and deployments. The understanding of the human factors can also inform community stakeholders like coworkers or end-users. Privacy is also connected to vulnerabilities in open-source and general-purpose software. It is critical that the robotics community recognizes these imminent threats to step towards a future that avoids the worst of these threats.
**Call to Action:** We look into motion planning as one of the most fundamental capabilities of autonomous robots that can be abused for privacy violations. We demonstrate the imminent threats that exist on the near-horizon using technologies and solutions that exist today. The increased incidence of robots in our work and home will raise uncomfortable questions of how these robots are harvesting data and protecting privacy. A call to action is needed for the robotics community to reach out to other stakeholders to build towards a future with useful robots that are both capable and comply with fundamental human values, such as privacy.
|
2306.03629
|
Equality in Degrees of Compactness: Schauder's Theorem and s-numbers
|
We investigate an extension of Schauder's theorem by studying the
relationship between various $s$-numbers of an operator $T$ and its adjoint
$T^*$. We have three main results. First, we present a new proof that the
approximation number of $T$ and $T^*$ are equal for compact operators. Second,
for non-compact, bounded linear operators from $X$ to $Y$, we obtain a
relationship between certain $s$-numbers of $T$ and $T^*$ under natural
conditions on $X$ and $Y$. Lastly, for non-compact operators that are compact
with respect to certain approximation schemes, we prove results by comparing
the degree of compactness of $T$ with that of its adjoint $T^*$.
|
Asuman Güven Aksoy, Daniel Akech Thiong
|
2023-06-06T12:31:38Z
|
http://arxiv.org/abs/2306.03629v1
|
# Equality in degrees of compactness: Schauder's theorem and s-numbers
###### Abstract.
We investigate an extension of Schauder's theorem by studying the relationship between various \(s\)-numbers of an operator \(T\) and its adjoint \(T^{*}\). We have three main results. First, we present a new proof that the approximation number of \(T\) and \(T^{*}\) are equal for compact operators. Second, for non-compact, bounded linear operators from \(X\) to \(Y\), we obtain a relationship between certain \(s\)-numbers of \(T\) and \(T^{*}\) under natural conditions on \(X\) and \(Y\). Lastly, for non-compact operators that are compact with respect to certain approximation schemes, we prove results for comparing the degree of compactness of \(T\) with that of its adjoint \(T^{*}\).
Key words and phrases: s-numbers, approximation schemes, Schauder's theorem. 2010 Mathematics Subject Classification: Primary 47A16, 47B10; Secondary 47A68
## 1. Introduction
In the following, we give a brief review of the background, notation, and terminology that will be relevant to this paper. Let \(\mathcal{L}(X,Y)\) denote the normed vector space of all continuous operators from \(X\) to \(Y\), \(X^{*}\) be the dual space of \(X\), and \(\mathcal{K}(X,Y)\) denote the collection of all compact operators from \(X\) to \(Y\). Denote by \(T^{*}\in\mathcal{L}(Y^{*},X^{*})\) the adjoint operator of \(T\in\mathcal{L}(X,Y)\). The well known theorem of Schauder states that \(T\in\mathcal{K}(X,Y)\) if and only if \(T^{*}\in\mathcal{K}(Y^{*},X^{*})\). The proof of Schauder's theorem that uses Arzel\(\grave{a}\)-Ascoli Theorem is presented in most textbooks on functional analysis (see, e.g., [19]). A new and simple proof which does not depend on Arzel\(\grave{a}\)-Ascoli can be found in [20]. Recalling the fact that a class of operators \(\mathcal{A}(X,Y)\subset\mathcal{L}(X,Y)\) is called _symmetric_ if \(T\in\mathcal{A}(X,Y)\) implies \(T^{*}\in\mathcal{A}(Y^{*},X^{*})\), we note that Schauder's Theorem assures that the class \(\mathcal{K}(X,Y)\) of compact operators between arbitrary Banach spaces \(X\) and \(Y\) is a symmetric operator ideal in \(\mathcal{L}(X,Y)\).
In [18] F. Riesz proved that compact operators have an at most countable set of eigenvalues \(\lambda_{n}(T)\) which, arranged in a sequence, tend to zero. This result raises the question: what are the conditions on \(T\in\mathcal{L}(X,Y)\) such that \((\lambda_{n}(T))\in\ell_{q}\)? Specifically, what is the rate of convergence to zero of the sequence \((\lambda_{n}(T))\)? To answer these questions, in [15] and [17], A. Pietsch developed \(s\)-numbers \(s_{n}(T)\) (closely related to singular values), which characterize the degree of compactness of \(T\). The concept of _s-numbers_ \(s_{n}(T)\) is introduced axiomatically in [15]; their relationship to eigenvalues is given in detail in [17].
**Definition 1.1**.: A map which assigns to every operator \(T\) a scalar sequence is said to be an \(s\)-function if the following conditions are satisfied:
1. \(||T||=s_{1}(T)\geq s_{2}(T)\geq\cdots\geq 0\) for \(T\in\mathcal{L}(X,Y)\).
2. \(s_{m+n-1}(S+T)\leq s_{m}(S)+s_{n}(T)\) for \(S,T\in\mathcal{L}(X,Y)\) and \(m,n\in\mathbb{N}\).
3. \(s_{n}(RTS)\leq||R||\,s_{n}(T)\,||S||\) for \(S\in\mathcal{L}(X_{0},X)\), \(T\in\mathcal{L}(X,Y)\) and \(R\in\mathcal{L}(Y,Y_{0})\).
4. \(s_{n}(T)=0\) whenever \(\operatorname{rank}(T)<n\).
5. \(s_{n}(I:\ell_{2}^{n}\to\ell_{2}^{n})=1\).
Among the many examples of \(s\)-numbers, the following are the ones used in this paper.
**Definition 1.2**.: Let \(T\in\mathcal{L}(X,Y)\) and \(n\in\mathbb{N}\).
1. The _nth approximation number of_\(T\), \(a_{n}(T)\), is defined as: \[a_{n}(T)=\inf\{||T-A||:A\in\mathcal{L}(X,Y),\ \operatorname{rank}(A)<n\}.\]
2. The _nth Kolmogorov diameter of_\(T\), \(\delta_{n}(T)\), is defined as: \[\delta_{n}(T)=\inf_{G}||Q_{G}T||,\]
where the infimum is over all subspaces \(G\subset Y\) such that \(\dim G\leq n\) and \(Q_{G}\) denotes the canonical quotient map \(Q_{G}:Y\to Y/G\).
3. The _nth Gelfand number of_\(T\), \(c_{n}(T)\) is defined as: \[c_{n}(T)=\inf\{\epsilon>0:||Tx||\leq\sup_{1\leq i\leq k}|\langle x,a_{i}\rangle| +\epsilon||x||\}\] where \(a_{i}\in X^{*},1\leq i\leq k\) with \(k<n\). It follows that an operator \(T\) is compact if and only if \(c_{n}(T)\to 0\) as \(n\to\infty\).
4. The _nth symmetrized approximation number_\(\tau_{n}(T)\) for any operator \(T\) defined between arbitrary Banach spaces \(X\) and \(Y\) is defined as follows: \[\tau_{n}(T)=\delta_{n}(J_{Y}T)\quad\text{where}\quad J_{Y}:Y\to\ell_{\infty}(B_{Y^{*}})\] is an embedding map. Note that the above definition is equivalent to \[\tau_{n}(T)=a_{n}(J_{Y}TQ_{X})\] as well as to \[\tau_{n}(T)=c_{n}(TQ_{X}),\] where \(Q_{X}:\ell_{1}(B_{X})\to X\) is a metric surjection onto \(X\) given by \(Q_{X}(\xi_{x})=\sum_{B_{X}}\xi_{x}x\quad\text{for}\ \ (\xi_{x})\in\ell_{1}(B_{X})\).
It is possible to compare various s-numbers such as \(a_{n}(T)\), \(\delta_{n}(T)\), \(c_{n}(T)\) if one imposes some mild restrictions on \(X\) and \(Y\). With this purpose in mind we define well known concepts of lifting and extension properties.
**Definition 1.3**.: In the following we introduce two well-known important properties of Banach spaces. See [7] for details.
1. We say that a Banach space \(X\) has _the lifting property_ if for every \(T\in\mathcal{L}(X,Y/F)\) and every \(\epsilon>0\) there exists an operator \(S\in\mathcal{L}(X,Y)\) such that \[||S||\leq(1+\epsilon)||T||\quad\text{and}\ \,T=Q_{F}S,\] where \(F\) is a closed subspace of the Banach space \(Y\) and \(Q_{F}:Y\to Y/F\) denotes the canonical projection. **Example 1.4**.: The Banach space \(\ell_{1}(\Gamma)\) of _summable number families_\(\{\lambda_{\gamma}\}_{\gamma\in\Gamma}\) over an arbitrary index set \(\Gamma\), whose elements \(\{\lambda_{\gamma}\}_{\gamma\in\Gamma}\) are characterized by \(\sum_{\gamma\in\Gamma}|\lambda_{\gamma}|<\infty\), has the metric lifting property.
2. A Banach space \(Y\) is said to have _the extension property_ if for each \(T\in\mathcal{L}(M,Y)\) there exists an operator \(S\in\mathcal{L}(X,Y)\) such that \(T=SJ_{M}\) and \(||T||=||S||\), where \(M\) is a closed subspace of an arbitrary Banach space \(X\) and \(J_{M}:M\to Y\) is the canonical injection. **Example 1.5**.: The Banach space \(\ell_{\infty}(\Gamma)\) of _bounded number families_\(\{\lambda_{\gamma}\}_{\gamma\in\Gamma}\) over an arbitrary index set \(\Gamma\) has the metric extension property.
We mention a couple of facts to illustrate the importance of lifting and extensions properties with respect to \(s\)-numbers. If \(T\) is any map from a Banach space with metric lifting property to an arbitrary Banach space, then \(a_{n}(T)=\delta_{n}(T)\)
([7], Prop. 2.2.3). It is also known that every Banach space \(X\) appears as a quotient space of an appropriate space \(\ell_{1}(\Gamma)\) (see [7], p.52). Furthermore, if \(T\) is any map from an arbitrary Banach space into a Banach space with metric extension property, then \(a_{n}(T)=c_{n}(T)\) ([7], Prop. 2.3.3). Additionally, every Banach space \(Y\) can be regarded as a subspace of an appropriate space \(\ell_{\infty}(\Gamma)\) (see [7], p.60).
For a non-compact operator \(T\in\mathcal{L}(X,Y)\), we do not have too much information about the relationship between \(s_{n}(T)\) and \(s_{n}(T^{*})\). In this paper, by imposing certain natural conditions on \(X\) and \(Y\) we are able to obtain a relationship between \(s_{n}(T)\) and \(s_{n}(T^{*})\) for certain s-numbers. Moreover, using a new characterization of compactness due to Runde [20] together with the Principle of Local Reflexivity, we give a different, simpler proof of Hutton's theorem [11] establishing that for any compact map \(T\),
\[a_{n}(T)=a_{n}(T^{*})\quad\text{for all}\;\;n.\]
Next we consider operators which are not compact but compact with respect to certain approximation schemes Q. We call such operators Q-compact and prove that for any Q-compact operator \(T\), one has \(\tau_{n}(T)=\tau_{n}(T^{*})\). This result answers the question of comparing the degree of compactness for \(T\) and its adjoint \(T^{*}\) for non-compact operators \(T\).
## 2. Comparing \(s_{n}(T)\) and \(s_{n}(T^{*})\)
Hutton in [11] used the Principle of Local Reflexivity (PLR) to prove that for \(T\in\mathcal{K}(X,Y)\) we have
\[a_{n}(T)=a_{n}(T^{*})\quad\text{ for all}\;\;n.\]
This result fails for non-compact operators. For example, if \(T=I:\ell_{1}\to c_{0}\) is the canonical injection and \(T^{*}:\ell_{1}\to\ell_{\infty}\) is the natural injection, then one can show
\[1=a_{n}(T)\neq a_{n}(T^{*})=\frac{1}{2}\quad\text{for all }n\geq 2.\]
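For instance, the upper bound \(a_{n}(T^{*})\leq\frac{1}{2}\) for \(n\geq 2\) can already be seen from one standard rank-one approximant: let \(A\in\mathcal{L}(\ell_{1},\ell_{\infty})\) be defined by \(A(x)=\big(\tfrac{1}{2}\sum_{k}x_{k}\big)(1,1,1,\dots)\). Then for \(||x||_{1}\leq 1\) and every coordinate \(m\),
\[\Big|x_{m}-\frac{1}{2}\sum_{k}x_{k}\Big|=\frac{1}{2}\Big|x_{m}-\sum_{k\neq m}x_{k}\Big|\leq\frac{1}{2}||x||_{1}\leq\frac{1}{2},\]
so \(||T^{*}-A||\leq\frac{1}{2}\) and hence \(a_{n}(T^{*})\leq a_{2}(T^{*})\leq\frac{1}{2}\) for \(n\geq 2\); the reverse inequality \(a_{n}(T^{*})\geq\frac{1}{2}\) requires a separate argument.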
On the other hand by considering the ball measure of non-compactness, namely,
\[\gamma(T):=\inf\{r>0:T(B_{X})\subset\bigcup_{k=1}^{n}A_{k},\ \max_{1\leq k \leq n}\text{diam}\ (A_{k})<r,\,n\in\mathbb{N}\}\]
Astala in [4] proved that if \(T\in\mathcal{L}(X,Y)\), where X and Y are arbitrary Banach spaces with metric lifting and extension property, respectively, then
\[\gamma(T)=\gamma(T^{*}).\]
Our first result is a different, simpler proof of Hutton's Theorem. We use only the characterization of compactness by Runde [20], together with the Principle of Local Reflexivity. Lindenstrauss and Rosenthal [13] discovered a principle that shows that all Banach spaces are "locally reflexive" or, said in another way, that every bidual \(X^{**}\) is finitely representable in the original space \(X\). The following is a stronger version of this property called _Principle of Local Reflexivity_ (PLR) due to Johnson, Rosenthal and Zippin [9]:
**Definition 2.1**.: Let \(X\) be a Banach space regarded as a subspace of \(X^{**}\), let \(E\) and \(F\) be finite dimensional subspaces of \(X^{**}\) and \(X^{*}\) respectively, and let \(\epsilon>0\). Then there exists a one-to-one operator \(T:E\to X\) such that
1. \(T(x)=x\) for all \(x\in X\cap E\)
2. \(f(Te)=e(f)\) for all \(e\in E\) and \(f\in F\)
3. \(||T||||T^{-1}||<1+\epsilon\).
PLR is an effective tool in Banach space theory. For example Oja and Silja in [14] investigated versions of the principle of local reflexivity for nets of subspaces of a Banach space and gave some applications to duality and lifting theorems.
**Lemma 2.2** (Lemma 1 in [20]).: _Let \(X\) be a Banach space and let \(T\in\mathcal{L}(X)\). Then \(T\in\mathcal{K}(X)\) if and only if, for each \(\epsilon>0\), there is a finite-dimensional subspace \(F_{\epsilon}\) of \(X\) such that \(||Q_{F_{\epsilon}}T||<\epsilon\), where \(Q_{F_{\epsilon}}:X\to X/F_{\epsilon}\) is the canonical projection._
**Theorem 2.3**.: _Let \(T\in\mathcal{K}(X)\). Then \(a_{n}(T)=a_{n}(T^{*})\) for all \(n\)._
Proof.: One always has \(a_{n}(T^{*})\leq a_{n}(T)\) and, applying this inequality to \(T^{*}\), also \(a_{n}(T^{**})\leq a_{n}(T^{*})\). Hence it suffices to verify \(a_{n}(T)\leq a_{n}(T^{**})\). To this end, suppose \(T\in\mathcal{K}(X)\); by Schauder's theorem, \(T^{*}\) and \(T^{**}\) are compact. Let \(\epsilon>0\); then by definition, there exists \(A\in\mathcal{F}_{n}(X^{**})\) such that \(||T^{**}-A||<a_{n}(T^{**})+\epsilon\).
By Lemma 2.2, there are finite-dimensional subspaces \(E_{\epsilon}\) of \(X^{**}\) and \(F_{\epsilon}\) of \(X^{*}\) such that \(||Q_{E_{\epsilon}}T^{**}||<\epsilon\), where \(Q_{E_{\epsilon}}:X^{**}\to X^{**}/E_{\epsilon}\) and \(||Q_{F_{\epsilon}}T^{*}||<\epsilon\), where \(Q_{F_{\epsilon}}:X^{*}\to X^{*}/F_{\epsilon}\).
By the Principle of Local Reflexivity (PLR), there exists a one-to-one linear operator \(S:E_{\epsilon}\to X\) such that \(||S||||S^{-1}||<1+\epsilon\), \(y^{*}(Sx^{**})=x^{**}(y^{*})\) for all \(x^{**}\in E_{\epsilon}\) and all \(y^{*}\in F_{\epsilon}\), and \(S_{|E_{\epsilon}\cap X}=I\).
Let \(J:X\to X^{**}\) be the canonical map. By the Hahn-Banach theorem, since \(E_{\epsilon}\) is a subspace of \(X^{**}\), \(S:E_{\epsilon}\to X\) can be extended to a linear operator \(\overline{S}:X^{**}\to X\).
We now have \(T\in\mathcal{L}(X)\) and \(\overline{S}AJ\in\mathcal{L}(X)\) and rank \((\overline{S}AJ)=\text{rank}(A)<n\), and therefore
\[a_{n}(T)\leq||T-\overline{S}AJ||.\]
To get an upper bound for \(||T-\overline{S}AJ||\) we estimate \(||Tx-\overline{S}AJ(x)||\) for \(x\in B_{X}\) using an appropriate element \(z_{j}\) of a finite \(\epsilon\)-net for the set \(T(B_{X})\).
Indeed, the compactness of \(T\) implies that \(T(B_{X})\) is relatively compact, so one can choose finitely many elements \(z_{j}=Tx_{j}\in T(B_{X})\) such that every \(Tx\), \(x\in B_{X}\), lies within \(\epsilon\) of some \(z_{j}\).
Let \(x\in B_{X}\). Then we have
\[||Tx-\overline{S}AJ(x)|| \leq||Tx-z_{j}||+||z_{j}-\overline{S}AJ(x)||\] \[\leq\epsilon+||z_{j}-\overline{S}AJ(x)||=\epsilon+||\overline{S}Jz_{j}-\overline{S}AJ(x)||\] \[\leq\epsilon+(1+\epsilon)||Jz_{j}-AJ(x)||<\epsilon+(1+\epsilon)(a_{n}(T^{**})+2\epsilon)\]
since
\[||Jz_{j}-AJ(x)|| \leq||Jz_{j}-JTx||+||JTx-AJ(x)||\] \[\leq\epsilon+||T^{**}Jx-AJx||\leq\epsilon+||T^{**}-A||\] \[<a_{n}(T^{**})+2\epsilon.\]
Since \(x\in B_{X}\) and \(\epsilon>0\) were arbitrary, it follows that \(a_{n}(T)\leq a_{n}(T^{**})\), as promised.
**Theorem 2.4**.: _If \(T\in\mathcal{L}(X,Y)\), where X and Y are arbitrary Banach spaces with metric lifting and extension property, respectively, then \(\delta_{n}(T^{*})=\delta_{n}(T)\) for all \(n\)._
Proof.: It is known that if \(T\in\mathcal{L}(X,Y)\), where X and Y are arbitrary Banach spaces, then \(\delta_{n}(T^{*})=c_{n}(T)\) ([7], Prop. 2.5.5). We also know that if \(T\in\mathcal{L}(X,Y)\), where X and Y are arbitrary Banach spaces with metric lifting and extension property, respectively, then \(\delta_{n}(T)=a_{n}(T)=c_{n}(T)\). Hence,
\[\delta_{n}(T^{*})=c_{n}(T)=a_{n}(T)=\delta_{n}(T).\qed\]
_Remark 2.5_.: As stated before, Astala in [4] proved that if \(T\in\mathcal{L}(X,Y)\), where X and Y are arbitrary Banach spaces with metric lifting and extension property, respectively, then \(\gamma(T)=\gamma(T^{*})\), where \(\gamma(T)\) denotes the measure of non-compactness of \(T\). In [1], it is shown that \(\lim\limits_{n\to\infty}\delta_{n}(T)=\gamma(T)\). This relationship between Kolmogorov diameters and the measure of non-compactness together with Theorem 2.4 provide an alternative proof for the result of Astala.
**Theorem 2.6**.: _If \(T\in\mathcal{K}(X,Y)\), where X and Y are arbitrary Banach spaces with metric lifting and extension property, respectively, then \(c_{n}(T^{*})=c_{n}(T)\) for all \(n\)._
Proof.: If \(T\in\mathcal{K}(X,Y)\), then it is known that \(\delta_{n}(T)=c_{n}(T^{*})\) ([7], Prop. 2.5.6). If X and Y are Banach spaces with metric lifting and extension property, respectively, then we also have \(\delta_{n}(T)=a_{n}(T)=c_{n}(T)\). Thus, \(c_{n}(T^{*})=c_{n}(T)\) for all \(n\).
_Remark 2.7_.: In [10] it is shown that if \(X\) has the lifting property, then \(X^{*}\) has the extension property. However, if \(Y\) has the extension property, then \(Y^{*}\) has the lifting property if and only if \(Y\) is finite-dimensional. Therefore one can observe that if \(X\) has the lifting property and \(Y\) is finite-dimensional with the extension property, then \(Y^{*}\) has the lifting property and \(X^{*}\) has the extension property, so that we have
\[\delta_{n}(T^{*})=a_{n}(T^{*})=c_{n}(T^{*}).\]
## 3. Compactness with Approximation schemes
Approximation schemes were introduced in Banach space theory by Butzer and Scherer in 1968 [5] and independently by Y. Brudnyi and N. Kruglyak under the name of "approximation families" in [6]. They were popularized by Pietsch in his 1981 paper [16], for later developments we refer the reader to [1, 2, 3]. The following definition is due to Aksoy and generalizes the classical concept of approximation scheme in a way that allows using families of subsets of \(X\) instead of elements of \(X\), which is useful when we deal with n-widths.
**Definition 3.1** (Generalized Approximation Scheme).: Let \(X\) be a Banach space. For each \(n\in\mathbb{N}\), let \(Q_{n}=Q_{n}(X)\) be a family of subsets of \(X\) satisfying the following conditions:
1. \(\{0\}=Q_{0}\subset Q_{1}\subset\dots\subset Q_{n}\subset\dots\).
2. \(\lambda Q_{n}\subset Q_{n}\) for all \(n\in\mathbb{N}\) and all scalars \(\lambda\).
3. \(Q_{n}+Q_{m}\subseteq Q_{n+m}\) for every \(n,m\in\mathbb{N}\).
Then \(Q(X)=(Q_{n}(X))_{n\in\mathbb{N}}\) is called a _generalized approximation scheme_ on \(X\). We shall simply use \(Q_{n}\) to denote \(Q_{n}(X)\) if the context is clear.
We use here the term "generalized" because the elements of \(Q_{n}\) may be subsets of \(X\). Let us now give a few important examples of generalized approximation schemes.
**Example 3.2**.:
1. \(Q_{n}=\) the set of all at-most-\(n\)-dimensional subspaces of any given Banach space \(X\).
2. Let \(E\) be a Banach space and \(X=L(E)\); let \(Q_{n}=N_{n}(E)\), where \(N_{n}(E)=\) the set of all \(n\)-nuclear maps on \(E\)[15].
3. Let \(a^{k}=(a_{n})^{1+\frac{1}{k}}\), where \((a_{n})\) is a nuclear exponent sequence. Then \(Q_{n}\) on \(X=L(E)\) can be defined as the set of all \(\Lambda_{\infty}(a^{k})\)-nuclear maps on \(E\)[8].
**Definition 3.3** (Generalized Kolmogorov Number).: Let \(B_{X}\) be the closed unit ball of \(X\), \(Q=Q(X)=(Q_{n}(X))_{n\in N}\) be a _generalized approximation scheme_ on \(X\), and \(D\) be a bounded subset of \(X\). Then the \(n^{\rm th}\) _generalized Kolmogorov number_\(\delta_{n}(D;Q)\) of \(D\) with respect to \(Q\) is defined by
\[\delta_{n}(D;Q)=\inf\{r>0:D\subset rB_{X}+A\ \text{for some}\ A\in Q_{n}(X)\}. \tag{3.1}\]
Assume that \(Y\) is a Banach space and \(T\in\mathcal{L}(Y,X)\). The \(n^{\rm th}\) Kolmogorov number \(\delta_{n}(T;Q)\) of \(T\) is defined as \(\delta_{n}(T(B_{Y});Q)\).
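To make Definition 3.3 concrete, the following is a minimal numerical sketch in Python (the diagonal operator, its weights and the use of NumPy are illustrative assumptions, not taken from the text). It estimates the covering radius in (3.1) for one particular admissible set \(A\in Q_{n}\), which gives an upper bound on \(\delta_{n}(T;Q)\); for a diagonal operator on a Euclidean space and the classical scheme of at-most-\(n\)-dimensional subspaces, this bound is known to coincide with the exact value \(\sigma_{n+1}\).

```python
import numpy as np

# Minimal numerical sketch (illustrative weights, not from the text):
# a diagonal operator T(x) = (sigma_1 x_1, ..., sigma_d x_d) on R^d with the
# Euclidean norm, and the classical scheme Q_n = {subspaces of dim <= n}.
sigma = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])   # non-increasing weights
d = len(sigma)
rng = np.random.default_rng(0)

def radius_for_coordinate_subspace(n, samples=50_000):
    """Covering radius inf{r : T(B_X) subset r B_X + A} for the particular
    choice A = span{e_1,...,e_n}, estimated as the largest sampled distance
    from a point of T(B_X) to A.  This is an upper bound for delta_n(T; Q)."""
    x = rng.normal(size=(samples, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)     # points on the unit sphere
    tails = (x * sigma)[:, n:]                         # components of Tx outside A
    return np.linalg.norm(tails, axis=1).max()

for n in range(d):
    print(f"n={n}: sampled radius {radius_for_coordinate_subspace(n):.4f}  "
          f"(analytic value sigma_{n+1} = {sigma[n]:.4f})")
```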
It follows that \(\delta_{n}(T;Q)\) forms a non-increasing sequence of non-negative numbers:
\[\|T\|=\delta_{0}(T;Q)\geq\delta_{1}(T;Q)\geq\dots\geq\delta_{n}(T;Q)\geq 0. \tag{3.2}\]
We are now able to introduce \(Q\)-compact sets and operators:
**Definition 3.4** (\(Q\)-compact set).: Let \(D\) be a bounded subset of \(X\). We say that \(D\) is \(Q\)-_compact_ if \(\lim_{n}\delta_{n}(D;Q)=0\).
**Definition 3.5** (\(Q\)-compact map).: We say that \(T\in L(Y,X)\) is a \(Q\)-_compact map_ if \(T(B_{Y})\) is a \(Q\)-compact set,
\[\lim_{n}\delta_{n}(T;Q)=0.\]
\(Q\)-compact maps are a genuine generalization of compact maps since there are examples of \(Q\)-compact maps which are not compact in the usual sense.
In the following we present two examples of \(Q\)-compact maps which are not compact. The first of these examples is known (see [1]) and involves a projection \(P:L_{p}[0,1]\to R_{p}\), where \(R_{p}\) denotes the closure of the span of the
Rademacher functions. The second example is new and illustrates the fact that if \(B_{w}\) is a weighted backward shift on \(c_{0}(\mathbb{N})\) with \(w=(w_{n})_{n}\) a bounded sequence not converging to \(0\), then \(B_{w}\) is a \(Q\)-compact operator which is not compact.
**Example 3.6**.: Let \(\{r_{n}(t)\}\) denote the Rademacher functions. It can be seen from the Khinchin inequality [12] that the closed span of \(\{r_{n}(t)\}\) satisfies
\[\ell_{2}\approx\{r_{n}(t)\}\subset L_{p}[0,1]\text{ for all }1\leq p<\infty. \tag{3.3}\]
We define an approximation scheme \(A_{n}\) on \(L_{p}[0,1]\) as follows:
\[A_{n}=L_{p+\frac{1}{n}}. \tag{3.4}\]
Since \(L_{p+\frac{1}{n}}\subset L_{p+\frac{1}{n+1}}\), we have \(A_{n}\subset A_{n+1}\) for \(n=1,2,\dots\); it is easily seen that \(A_{n}+A_{m}\subset A_{n+m}\) for \(n,m=1,2,\dots\), and that \(\lambda A_{n}\subset A_{n}\) for all \(\lambda\). Thus \(\{A_{n}\}\) is an approximation scheme.
It can be shown that for \(p\geq 2\) the projection \(P:L_{p}[0,1]\to R_{p}\) is a non-compact \(Q\)-compact map, where \(R_{p}\) denotes the closure of the span of \(\{r_{n}(t)\}\) in \(L_{p}[0,1]\). (See [1] for details)
Next we give another example of a \(Q\)-compact operator which is not compact.
**Example 3.7**.: Consider the weighted backward shift
\[B(x_{1},x_{2},x_{3},\dots)=(w_{2}x_{2},w_{3}x_{3},w_{4}x_{4},\dots)\]
where \(w=(w_{n})_{n}\) is a sequence of non-zero scalars called a _weight sequence_. Any weighted shift is a linear operator and is bounded if and only if \(w\) is a bounded sequence.
Let \(w=(w_{n})_{n}\) be a bounded sequence of positive real numbers. The unilateral weighted shift on \(c_{0}(\mathbb{N})\) is defined by
\[B_{w}(e_{1})=0\quad\text{ and }\quad B_{w}(e_{n})=w_{n}e_{n-1}\quad\text{ for all }\quad n\geq 2.\]
**Proposition 3.8**.: _Suppose the approximation scheme \(Q=(A_{n})_{n=1}^{\infty}\) of \(c_{0}(\mathbb{N})\) is defined by \(A_{n}=\ell_{n}(\mathbb{N})\) for all \(n\). Then any bounded weighted shift on \(c_{0}\) is \(Q\)-compact._
Proof.: Let \(B_{w}\) be any bounded and linear weighted shift on \(c_{0}\), then \(w=(w_{n})_{n}\) is a bounded weight. Let \(m\geq 1\). Consider,
\[\delta_{m}(B_{w}(U_{c_{0}}),(A_{n})_{n}) =\inf\{r>0:B_{w}(U_{c_{0}})\subseteq rU_{c_{0}}+\ell_{m}\}\] \[=\inf\{r>0:\forall x\in U_{c_{0}},\exists y\in U_{c_{0}},\exists z \in\ell_{m}\text{ with }B_{w}(x)=ry+z\}.\]
Let \(x=(x_{n})_{n\geq 1}\in U_{c_{0}}\). Let us define \(y=(y_{n})_{n\geq 1}\in U_{c_{0}}\) and \(z=(z_{n})_{n\geq 1}\in\ell_{1}\subseteq\ell_{m}\) such that \(B_{w}(x)=\frac{1}{2^{m}}y+z\).
Let \(A:=\{n\geq 1:2^{m}\left|x_{n}w_{n}\right|>1\}\). The set \(A\) is finite, otherwise \((w_{n})_{n}\) is unbounded. Set,
\[\left\{\begin{array}{l}x_{n}w_{n}=z_{n-1}\\ y_{n-1}=0,\end{array}\right.\qquad\forall n\in A.\]
Observe that \((w_{n}x_{n})_{n\in\mathbb{N}\setminus A}\in c_{0}\), hence there exists a subsequence \((n_{k})_{k}\) such that \(\sum_{k=1}^{\infty}|w_{n_{k}}x_{n_{k}}|<\infty\). Set,
\[\left\{\begin{array}{ll}x_{n_{k}}w_{n_{k}}=z_{n_{k}-1}\\ y_{n_{k}-1}=0,\end{array}\right.\qquad\forall k\geq 1.\]
Finally, set
\[\left\{\begin{array}{ll}2^{m}x_{n}w_{n}=y_{n-1}\\ z_{n-1}=0,\end{array}\right.\qquad\forall n\in\mathbb{N}\setminus\{(n_{k})_{k} \cup A\}.\]
Hence, \(x_{n}w_{n}=\frac{1}{2^{m}}y_{n-1}+z_{n-1}\), for all \(n\geq 2\). In other words, \(B_{w}(x)=\frac{1}{2^{m}}y+z\). Note that \(y\in U_{c_{0}}\) and \(z\in\ell_{1}\subset\ell_{m}\). In conclusion, \(\delta_{m}(B_{w}(U_{c_{0}}),(A_{n})_{n})\leq\frac{1}{2^{m}}\). As \(m\) goes to \(\infty\), we obtain that \(\delta_{m}(B_{w}(U_{c_{0}}),\ (A_{n})_{n})\) goes to \(0\) and \(B_{w}\) is \(Q\)-compact.
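The splitting used in the proof can be carried out explicitly on finite truncations. Below is a minimal Python sketch (the particular weight sequence, the test vector and the truncation length are illustrative assumptions, not from the text): it reproduces the decomposition \(B_{w}(x)=\frac{1}{2^{m}}y+z\) with \(\|y\|_{\infty}\leq 1\) and \(z\) supported on the finite set \(A\); on a finite truncation the subsequence step of the proof, which guarantees \(z\in\ell_{1}\), is automatic.

```python
import numpy as np

# Minimal finite-dimensional sketch of the decomposition used in the proof
# (the weight sequence and the test vector are illustrative assumptions).
def decompose(x, w, m):
    """Split B_w(x) as (1/2**m) * y + z with ||y||_inf <= 1 and z finitely
    supported; (B_w x)_{n-1} = w_n x_n for n >= 2 (1-indexed sequences)."""
    Bwx = w[1:] * x[1:]                       # entries of B_w(x)
    y = np.zeros_like(Bwx)
    z = np.zeros_like(Bwx)
    big = 2**m * np.abs(Bwx) > 1              # the finite set A of the proof
    z[big] = Bwx[big]                          # large entries go to z
    y[~big] = 2**m * Bwx[~big]                 # small entries are rescaled into y
    return y, z

w = np.array([0.0, 1.0, 2.0, 1.5, 1.0, 2.0, 1.0, 0.5])        # bounded, not null
x = np.array([1.0, 0.9, 0.5, 0.1, 0.01, 0.001, 0.0001, 0.0])  # in the unit ball of c_0
m = 3
y, z = decompose(x, w, m)
assert np.allclose(w[1:] * x[1:], y / 2**m + z)
print("||y||_inf =", np.abs(y).max(), " support of z =", np.flatnonzero(z))
```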
It is well-known that \(B_{w}\) is compact if and only if \(w=(w_{n})_{n}\) is a null-sequence.
**Corollary 3.9**.: _Let \(B_{w}\) be a weighted backward shift on \(c_{0}(\mathbb{N})\) with \(w=(w_{n})_{n}\) a bounded sequence not converging to 0. Consider the approximation schemes on \(c_{0}(\mathbb{N})\) as \(Q=(A_{n})_{n=1}^{\infty}\) with \(A_{n}=\ell_{n}(\mathbb{N})\) for all \(n\). Then, \(B_{w}\) is a non-compact \(Q\)-compact operator._
Our next objective is to ascertain whether a Schauder-type theorem holds for \(Q\)-compact maps. For this purpose we use the symmetrized approximation numbers of \(T\). For our needs, we choose the closed unit ball \(B_{Z}\) of a Banach space \(Z\) as the index set \(\Gamma\). Our proof of Schauder's theorem for \(Q\)-compact operators will depend on the fact that \(\ell_{1}(B_{Z})\) has the lifting property and \(\ell_{\infty}(B_{Z})\) has the extension property. First we recall the following proposition.
**Proposition 3.10** (Refined version of Schauder's theorem [7], p.84).: _An operator \(T\) between arbitrary Banach spaces \(X\) and \(Y\) is compact if and only if_
\[\lim_{n\to\infty}\tau_{n}(T)=0\]
_and moreover,_
\[\tau_{n}(T)=\tau_{n}(T^{*}).\]
Motivated by this, we give the definition of Q-compact operators using the symmetrized approximation numbers.
**Definition 3.11**.: We say \(T\) is Q-symmetric compact if and only if
\[\lim_{n\to\infty}\tau_{n}(T,Q)=0.\]
_Remark 3.12_.: We need the following simple facts for our proof, for details we refer the reader to [7] Prop. \(2.5.4-6\).
* Recall that \(\tau_{n}(T,Q)=c_{n}(TQ_{X},Q)\), where \(Q_{X}:\ell_{1}(B_{X})\to X\).
* We will also abbreviate the canonical embedding \[K_{\ell_{1}(B_{Y^{*}})}:\ell_{1}(B_{Y^{*}})\to\ell_{\infty}(B_{Y^{*}})^{*}\] by \(K\) so that \(Q_{Y^{*}}=J_{Y}^{*}K\).
* Denote by \(P_{0}:\ell_{\infty}(B_{X^{**}})\rightarrow\ell_{\infty}(B_{X})\) the operator which restricts any bounded function on \(B_{X^{**}}\) to the subset \(K_{X}(B_{X})\subset B_{X^{**}}\) so that \(Q_{X}^{*}=P_{0}J_{X^{*}}\).
* The relations (b) and (c) are crucial facts for the estimates of \(\delta_{n}(T^{*},Q^{*})\) and \(c_{n}(T^{*},Q^{*})\). In particular, we have \(c_{n}(T^{*},Q^{*})\leq\delta_{n}(T,Q)\).
We now state and prove the following theorem which states that the degree of Q-compactness of \(T\) and \(T^{*}\) is the same in so far as it is measured by the symmetrized approximation numbers \(\tau_{n}\).
**Theorem 3.13** (Schauder's theorem for Q-compact operators).: _Let \(T\in\mathcal{L}(X,Y)\) with \(X,Y\) are arbitrary Banach spaces, and let \(Q=(Q_{n}(X))\) be a generalized approximation scheme on \(X\). Then_
\[\tau_{n}(T^{*},Q^{*})=\tau_{n}(T,Q)\]
_for all \(n\)._
Proof.: Let us show that \(\tau_{n}(T^{*},Q^{*})=\tau_{n}(T,Q)\). By Remark 3.12 parts (a) and (b) we have the following estimates:
\[\tau_{n}(T^{*},Q^{*}) =c_{n}(T^{*}Q_{Y^{*}},Q^{*})=c_{n}(T^{*}J_{Y}^{*}K,Q^{*})\] \[\leq c_{n}((J_{Y}T)^{*},Q^{*})\leq\delta_{n}(J_{Y}T,Q)=\tau_{n}(T,Q).\]
Conversely, we have by using Remark 3.12 parts (c) and (d):
\[\tau_{n}(T,Q) =c_{n}(TQ_{X},Q)=\delta_{n}((TQ_{X})^{*},Q^{*})\] \[=\delta_{n}(Q_{X}^{*}T^{*},Q^{*})=\delta_{n}(P_{0}J_{X^{*}}T^{*}, Q^{*})\] \[\leq\delta_{n}(J_{X^{*}}T^{*},Q^{*})=\tau_{n}(T^{*},Q^{*}).\]
Next we define approximation numbers with respect to a given scheme as follows:
**Definition 3.14**.: Given an approximation scheme \(\{Q_{n}\}\) on \(X\) and \(T\in\mathcal{L}(X)\), the n-th approximation number \(a_{n}(T,Q)\) with respect to this approximation scheme is defined as:
\[a_{n}(T,Q)=\inf\{||T-B||:\ B\in\mathcal{L}(X),\ \ B(X)\in Q_{n}\}\]
Let \(X^{*}\) and \(X^{**}\) be the dual and second dual of \(X\). Note that if we let \(J:X\to X^{**}\) be the canonical injection and let \((X,Q_{n})\) be an approximation scheme, then \((X^{**},J(Q_{n}))\) is an approximation scheme.
Let \(\{Q_{n}\}\) and \(\{Q_{n}^{**}\}:=\{J(Q_{n})\}\) denote the corresponding families of subsets of \(X\) and \(X^{**}\), respectively.
**Definition 3.15**.: We say \((X,Q_{n})\) has the _Extended Local Reflexivity Property_ (ELRP) if for each countable subset \(C\) of \(X^{**}\), for each \(F\in Q_{n}^{**}\) for some \(n\) and each \(\epsilon>0\), there exists a continuous linear map
\[P:\mathrm{span}(F\cup C)\to X\quad\text{such that}\]
* \(||P||\leq 1+\epsilon\)
* \(P|_{C\cap X}=I\) (the identity)
Note that ELRP is an analogue of the principle of local reflexivity, which is possessed by all Banach spaces.
**Theorem 3.16**.: _Suppose \((X,Q_{n})\) has ELRP and \(T\in\mathcal{L}(X)\) has separable range. Then for each \(n\) we have \(a_{n}(T,Q)=a_{n}(T^{*},Q^{*})\)._
Proof.: Since one always has \(a_{n}(T^{*},Q^{*})\leq a_{n}(T,Q)\), we only need to verify \(a_{n}(T,Q)\leq a_{n}(T^{**},Q^{**})\). Let \(J:X\to X^{**}\) be the canonical map and \(U_{X}\) be the unit ball of \(X.\) Given \(\epsilon>0,\) choose \(B\in\mathcal{L}(X^{**})\) such that \(B(X^{**})\in Q_{n}^{**}\) and
\[||B-T^{**}||<\epsilon+a_{n}(T^{**},Q_{n}^{**}).\]
Let \(\{z_{j}\}\) be a countable dense set in \(T(X)\), and write \(z_{j}=Tx_{j}\) with \(x_{j}\in X\). Consider the set
\[K=\operatorname{span}\{(JTx_{j})_{1}^{\infty}\cup B(X^{**})\}\]
Applying the ELRP of \(X\), we obtain a map
\[P:K\to X\text{ such that }||P||\leq 1+\epsilon\text{ \ and }P\upharpoonright_{(JTx_{j})_{1}^{\infty}\cap X}=I\]
For \(x\in U_{X},\) consider
\[||Tx-PBJx|| \leq||Tx-z_{j}||+||z_{j}-PBJx||\] \[\leq\epsilon+||PJTx_{j}-PBJx||\] \[\leq\epsilon+(1+\epsilon)||JTx_{j}-BJx||\] \[\leq\epsilon+(1+\epsilon)[||JTx_{j}-JTx||+||JTx-BJx||]\] \[\leq\epsilon+(1+\epsilon)[a_{n}(T^{**},Q_{n}^{**})+2\epsilon]\]
and thus
\[a_{n}(T,Q)\leq a_{n}(T^{**},Q_{n}^{**}).\qed\]
|
2302.09195
|
Data-Efficient Contrastive Self-supervised Learning: Most Beneficial
Examples for Supervised Learning Contribute the Least
|
Self-supervised learning (SSL) learns high-quality representations from large
pools of unlabeled training data. As datasets grow larger, it becomes crucial
to identify the examples that contribute the most to learning such
representations. This enables efficient SSL by reducing the volume of data
required. Nevertheless, quantifying the value of examples for SSL has remained
an open question. In this work, we address this problem for the first time, by
proving that examples that contribute the most to contrastive SSL are those
that have the most similar augmentations to other examples, in expectation. We
provide rigorous guarantees for the generalization performance of contrastive
learning on such subsets. Through extensive experiments, we show that we can
safely exclude 20% of examples from CIFAR100 and 40% from STL10 and
TinyImageNet, without affecting downstream task performance. In general,
subsets selected by our method outperform random subsets by over 3% across
these datasets. Interestingly, we also discover the subsets that contribute the
most to contrastive learning are those that contribute the least to supervised
learning. Code available at
https://github.com/bigml-cs-ucla/sas-data-efficient-contrastive-learning.
|
Siddharth Joshi, Baharan Mirzasoleiman
|
2023-02-18T00:15:06Z
|
http://arxiv.org/abs/2302.09195v5
|
# Data-Efficient Contrastive Self-supervised Learning: Most Beneficial Examples for Supervised Learning Contribute the Least
###### Abstract
Self-supervised learning (SSL) learns high-quality representations from large pools of unlabeled training data. As datasets grow larger, it becomes crucial to identify the examples that contribute the most to learning such representations. This enables efficient SSL by reducing the volume of data required for learning high-quality representations. Nevertheless, quantifying the value of examples for SSL has remained an open question. In this work, we address this for the first time, by proving that examples that contribute the most to contrastive SSL are those that have the most similar augmentations to other examples, in expectation. We provide rigorous guarantees for the generalization performance of SSL on such subsets. Empirically, we discover, perhaps surprisingly, the subsets that contribute the most to SSL are those that contribute the least to supervised learning. Through extensive experiments, we show that our subsets outperform random subsets by more than 3% on CIFAR100, CIFAR10, and STL10. Interestingly, we also find that we can safely exclude 20% of examples from CIFAR100 and 40% from STL10, without affecting downstream task performance.
Machine Learning, ICML
## 1 Introduction
Large datasets power modern machine learning models. However, a key question is: what data points are essential for learning and whether more data can always yield better performance? Answering this question is crucial as it can reduce the substantial costs of training on large datasets, boost performance of the trained models and guide data collection. This has motivated a body of recent research on finding the most essential subsets for supervised learning (Toneva et al., 2019; Paul et al., 2021; Mirzasoleiman et al., 2020; Mindermann et al., 2022; Sorscher et al., 2022; Swayamdipta et al., 2020). However, as datasets grow larger, obtaining high-quality labels for them becomes prohibitively expensive. As a result, there has been a surge in self-supervised (SSL) pretraining on large un-labeled dataset (Chen et al., 2020; Grill et al., 2020; Chen and He, 2021; Zbontar et al., 2021). Nevertheless, finding the most important data points for SSL has remained an open question.
Finding the examples that contribute the most to SSL is indeed very challenging. When labels are available, the value of every example for learning can be quantified based on its loss (or confidence of the prediction) or gradient norm. Effectively, difficult-to-learn examples, i.e. those with high loss or large gradient norm during the training, are the ones that contribute the most to minimizing the training loss. However, in the absence of labels, SSL methods cluster examples based on their similarity to the other data points. Therefore, the SSL loss and gradient of every example are tightly coupled with the other examples in the dataset. Hence, dropping an example affects the loss and gradient of all the other examples. This makes data selection inherently more challenging for SSL as compared to supervised learning.
In this work, we address the above challenge for the first time and find examples that provably contribute the most to SSL. In particular, we focus on contrastive SSL which learns representations by maximizing the alignment between augmented views of the same examples and minimizing the similarity between augmented views of different examples (Chen et al., 2020; Zbontar et al., 2021; Oord et al., 2018). We prove that examples that contribute the most to SSL are those that have the highest expected similarity between their augmented views and the augmented views of other examples in their latent class. Effectively, such examples pull different groups of examples in a class together and enable the contrastive loss to maximally push away representations of examples in different classes. We show that such examples (1) ensure a high alignment between augmented views of examples in every class and (2) preserve the centers of class representations learned by SSL on the full data. We leverage the above properties to provide generalization guarantee for the performance of a linear classifier trained on the SSL representations learned on the subset.
We observe that, perhaps surprisingly, examples that contribute the most to contrastive SSL contribute the least to
supervised learning. In particular, we quantify the difficulty of examples for supervised learning using confidence of the predictions as well as the forgetting score (Toneva et al., 2019), i.e. the number of times an example is misclassified after being correctly classified during the training. We show that examples that contribute the most to SSL are the easy examples with a high confidence and low forgetting score for supervised learning. Such examples can be safely excluded from a supervised learning pipeline (Toneva et al., 2019), without harming the accuracy. In contrast, difficult-to-learn examples that contribute the most to supervised learning significantly hurt SSL performance.
We extensively evaluate the performance of our proposed method for learning representations of examples in CIFAR10, CIFAR100 (Krizhevsky et al., 2009) and STL10 (Coates et al., 2011) with ResNet50. We show that our subsets outperform random subsets by more than 3% on CIFAR100 and STL10. Interestingly, we observe that up to 20% of examples for CIFAR100 and 40% for STL10 can be safely excluded without harming the downstream performance. We demonstrate that the subsets that contribute the most to SSL can be efficiently extracted early in training or via a smaller proxy model. We also confirm the applicability of our method to other contrastive learning methods, i.e. BYOL (Grill et al., 2020), and further observe that, for BYOL, discarding 20% of examples from STL10 can even improve downstream performance by 2%.
## 2 Related Work
**Contrastive Learning.** Contrastive learning has recently emerged as a performant self-supervised framework to learn representations that capture semantically relevant information from the data. The key idea behind this family of algorithms is learning representations by maximizing agreement between augmented views of the same example (positive pairs) and minimizing agreement between augmented views of different examples (negative pairs) (Chen et al., 2020; Zbontar et al., 2021; Grill et al., 2020; Chen and He, 2021; He et al., 2020). To improve the performance of contrastive learning, re-weighting the negative pairs in the contrastive loss (Chuang et al., 2020) or re-weighting the loss to emphasize the hard negatives (Robinson et al., 2020) has been recently explored. Here, we aim to find subsets of examples that contribute the most to contrastive learning. The above reweighting strategies are orthogonal to our work and can be applied to the subsets found by our method.
**Contrastive learning theory.** A recent line of theoretical works has studied self-supervised learning (SSL). In particular, under conditional independence between positive pairs given the label, representations learned by reconstruction-based SSL algorithms can achieve small errors in the downstream linear classification task (Arora et al., 2019; Saunshi et al., 2019; Tosh et al., 2021). The independence assumption was relaxed by (HaoChen et al., 2021), which showed that minimizing spectral-based contrastive loss results in spectral clustering on the augmented distribution and is guaranteed to achieve certain generalization performance under linear evaluation. Wang and Isola (2020) proved that asymptotically, the contrastive loss optimizes alignment (closeness) of positive pairs and uniformity of the representations on the hypersphere, relating them to positive effects on downstream tasks. More closely related to our work is the recent result of (Huang et al., 2021) which showed that contrastive SSL using InfoNCE (Oord et al., 2018) or cross-correlation loss (Zbontar et al., 2021) maximizes alignment of positive pairs as well as divergence of centers of the latent class representations. Here, we show that subsets that contribute the most to contrastive SSL introduce minimal error on the alignment and divergence of centers of class representations learned on the full data. Leveraging the above properties, we provide generalization guarantees for downstream performance of representations learned on such subsets.
**Essential Subsets for Supervised Learning.** There has been a recent body of efforts on finding the most important subsets for supervised learning. Empirical methods commonly rank examples from easiest to hardest--based on confidence, loss or gradient--and curate subsets preserving the hardest examples. Coleman et al. (2020) used a smaller trained proxy model to find the most uncertain examples to train a larger model. Toneva et al. (2019) selects examples with highest forgetting score, i.e. the number of times they transition from being classified correctly to incorrectly during training. Swayamdipta et al. (2020) selects examples with the highest variance of predictions during training. Paul et al. (2021) selected examples with the lowest expected gradient norm over multiple initializations. More theoretically motivated approaches iteratively select subsets by importance sampling based on gradient norm (Katharopoulos and Fleuret, 2018) or select weighted subset of examples which closely capture the full gradient (Mirzasoleiman et al., 2020; Pooladzandi et al., 2022; Killamsetty et al., 2021).
In contrast, we show, for the first time, that easy-to-learn examples with highest confidence and lowest forgetting score that contribute the least to supervised learning are provably the most beneficial for unsupervised contrastive learning.
## 3 Problem Formulation
Assume we have a dataset \(\textbf{{X}}=\{\boldsymbol{x}_{i}\}_{i\in V}\) of \(n=|V|\) training examples drawn i.i.d. from an unknown distribution. Each example belongs to one of the \(K\) latent classes i.e. \(V=\{V_{1},\cdots,V_{K}\}\), but the corresponding class labels are not known at training time.
Contrastive SSL learns representations of examples in the
training data, by learning an encoder \(f\) that maximizes agreement between representations of differently augmented views of the same example (i.e. positive pairs) and minimizes agreement between representations of augmented views of different examples (i.e. negative pairs). This is achieved by minimizing the following InfoNCE loss (Oord et al., 2018):
\[\mathcal{L}_{cl}(V)=-\mathop{\mathbb{E}}_{\begin{subarray}{c}\boldsymbol{x}_{i}\in V\\ \boldsymbol{x},\boldsymbol{x}^{+}\in A(\boldsymbol{x}_{i})\end{subarray}}\left[\log\frac{e^{f(\boldsymbol{x})^{\top}f(\boldsymbol{x}^{+})}}{e^{f(\boldsymbol{x})^{\top}f(\boldsymbol{x}^{+})}+\sum_{\boldsymbol{x}_{j}\neq\boldsymbol{x}_{i}}\mathop{\mathbb{E}}_{\boldsymbol{x}^{-}\in A(\boldsymbol{x}_{j})}e^{f(\boldsymbol{x})^{\top}f(\boldsymbol{x}^{-})}}\right] \tag{1}\]
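For illustration, the following is a minimal NumPy sketch of an InfoNCE-style loss of this form for a finite batch, in the SimCLR convention where the two augmented views of each example form the positive pair and all other examples in the batch serve as negatives. The batch construction, temperature value and variable names are illustrative assumptions; this is not the authors' implementation.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """Minimal InfoNCE-style loss for a batch of embeddings.

    z1[i], z2[i] are embeddings of two augmented views of example i
    (the positive pair); all other examples in the batch act as negatives.
    Illustrative sketch only, not the paper's implementation."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)                  # 2N x d
    sim = z @ z.T / temperature                           # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                        # exclude self-pairs
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])   # positive index
    log_prob = sim - np.logaddexp.reduce(sim, axis=1, keepdims=True)
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
print(f"InfoNCE on a random batch: {info_nce(rng.normal(size=(8, 16)), rng.normal(size=(8, 16))):.3f}")
```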
With good alignment (small \(R_{\epsilon}\)) and good divergence (small \(\mathbf{\mu}_{k}^{T}\mathbf{\mu}_{l}\)), the NN classifier \(g_{f}\) can correctly classify all the examples in the main part of every class, i.e. those with concentrated and aligned augmented views. If the fraction \(\sigma\) of examples in every class with concentrated augmented views is large, good generalization is guaranteed. Formally,
**Theorem 4.1** (Huang et al., 2021).: _For any \(l,k\in[K]\), if_
\[\mathbf{\mu}_{k}{}^{T}\mathbf{\mu}_{l}<\phi(\sigma,\delta,\epsilon), \tag{9}\]
_then the downstream error rate of NN classifier is_
\[\xi(g_{f}(V))\leq(1-\sigma)+R_{\epsilon}(V). \tag{10}\]
The exact form of \(\phi(\sigma,\delta,\epsilon)\) is discussed in Appendix B.
### Subsets that Preserve Alignment and Divergence
We rely on the above observations to find a subset that, when used to learn representations, provides similar generalization performance to that of the full data, for the downstream NN classifier. The key idea of our approach is to find a subset, such that minimizing the contrastive loss on this subset: (1) results in good alignment for all the examples and (2) preserves the class centers of full data. In doing so, we ensure that the divergence of the class centers is preserved. If such a subset can be found, minimizing the contrastive loss in Eq. (1) on the subset results in good alignment and divergence on the full data, hence guarantees similar generalization performance for the downstream NN classifier.
Next, we introduce the notion of expected augmentation distance and discuss how it can be leveraged to find a subset that satisfies the above two conditions.
**Definition 4.2** (Expected augmentation distance).: We define the expected augmentation distance between examples \(i,j\in V\) as the expected \(l_{2}\) norm between all pairs \((\mathbf{x},\mathbf{x}^{\prime})\) of augmented examples, such that \(\mathbf{x}\in A(\mathbf{x}_{i})\) and \(\mathbf{x}^{\prime}\in A(\mathbf{x}_{j})\). Formally, for every pair of examples \(i,j\in V\) we have:
\[d_{i,j}=\mathop{\mathbb{E}}_{\mathbf{x}\in A(\mathbf{x}_{i}),\mathbf{x}^{\prime}\in A(\mathbf{ x}_{j})}\|\mathbf{x}-\mathbf{x}^{\prime}\|. \tag{11}\]
Intuitively, expected augmentation distance captures the semantic dissimilarity between every pair of examples. That is, two examples that are semantically similar have a small expected augmentation distance. We visualize examples with small and large expected augmentation distance in Fig. 1.
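A minimal sketch of a Monte-Carlo estimate of the expectation in Eq. (11) is given below; the stochastic augmentation function, the toy inputs and the sample count are placeholders for the actual transformation set \(A(\cdot)\), which is not specified here.

```python
import numpy as np

def expected_augmentation_distance(x_i, x_j, augment, num_samples=32, seed=0):
    """Monte-Carlo estimate of d_{i,j} in Eq. (11): the expected l2 distance
    between independent augmented views of x_i and x_j.

    `augment(x, rng)` is a placeholder for the stochastic augmentation A(.)
    (e.g. random crop / flip / color distortion); it is not specified here."""
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(num_samples):
        x = augment(x_i, rng)
        x_prime = augment(x_j, rng)
        dists.append(np.linalg.norm(x - x_prime))
    return float(np.mean(dists))

# Toy usage with additive-noise "augmentations" standing in for image transforms.
toy_augment = lambda x, rng: x + 0.05 * rng.normal(size=x.shape)
print(expected_augmentation_distance(np.ones(10), np.zeros(10), toy_augment))
```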
### Ensuring Good Alignment
We start by finding a subset that, when used to minimize the contrastive loss, aligns the augmented views of all the training examples. From Eq. (8), we know that minimizing the alignment loss \(\mathcal{L}_{align}\) directly minimizes the probability \(R_{\epsilon}(V)\) of examples with non-aligned augmented views. That is, \(R_{\epsilon}(V)\leq\eta(\epsilon)\cdot\sqrt{\mathcal{L}_{align}(V)}\).
Here, we find a subset \(S_{k}\subseteq V_{k}\) of examples from every latent class \(k\) that ensures small \(R_{\epsilon}(V_{k})\) i.e. probability that examples in \(V_{k}\) are not well-aligned. For every (arbitrary) subset \(S_{k}\subseteq V_{k}\) of size \(r_{k}=|S_{k}|\) selected from class \(k\) with \(n_{k}=|V_{k}|\) examples, we can upper-bound the probability \(R_{\epsilon}(V_{k})\) based on the alignment loss of the subset i.e. \(\mathcal{L}_{align}(S_{k})\). In particular, using \(R_{\epsilon}(V_{k})\leq\eta(\epsilon)\cdot\mathop{\mathbb{E}}_{i\in V_{k}} \mathop{\mathbb{E}}_{\mathbf{x}_{1},\mathbf{x}_{2}\in A(\mathbf{x}_{i})}\|f(\mathbf{x}_{1})-f( \mathbf{x}_{2})\|\leq\eta(\epsilon)\sqrt{\mathcal{L}_{align}(V)}\)(Huang et al., 2021), we can write:
\[R_{\epsilon}(V_{k}) \leq\eta(\epsilon)\cdot\mathop{\mathbb{E}}_{i\in V_{k}}\mathop{\mathbb{E}}_{\mathbf{x}_{1},\mathbf{x}_{2}\in A(\mathbf{x}_{i})}\|f(\mathbf{x}_{1})-f(\mathbf{x}_{2})\| \tag{12}\]
\[=\frac{\eta(\epsilon)}{n_{k}}\cdot\Bigg{(}\sum_{i\in S_{k}}\mathop{\mathbb{E}}_{\mathbf{x}_{1},\mathbf{x}_{2}\in A(\mathbf{x}_{i})}\|f(\mathbf{x}_{1})-f(\mathbf{x}_{2})\|+\sum_{i\in V_{k}\setminus S_{k}}\mathop{\mathbb{E}}_{\mathbf{x}_{1},\mathbf{x}_{2}\in A(\mathbf{x}_{i})}\|f(\mathbf{x}_{1})-f(\mathbf{x}_{2})\|\Bigg{)} \tag{13}\]
In the second sum, the augmented views of every example \(i\in V_{k}\setminus S_{k}\) can be compared, via the triangle inequality, with the augmented views of its closest example \(j\in S_{k}\). Hence, for a normalized, \(L\)-Lipschitz encoder \(f\), where \(\left\|f(\mathbf{x})-f(\mathbf{x}^{\prime})\right\|\leq L\left\|\mathbf{x}-\mathbf{x}^{\prime}\right\|\ \forall\mathbf{x},\mathbf{x}^{\prime}\), we have:
\[R_{\epsilon}(V_{k})\leq\frac{\eta(\epsilon)}{n_{k}}\Bigg{(}r_{k}\sqrt{\mathcal{L}_{align}(S_{k})}+2L\sum_{i\in V_{k}\setminus S_{k}}\min_{j\in S_{k}}d_{i,j}+(n_{k}-r_{k})\sqrt{\mathcal{L}_{align}(S_{k})}\Bigg{)} \tag{14}\]
Thus, if every example in \(V_{k}\setminus S_{k}\) has a small expected augmentation distance to its closest example in \(S_{k}\), the above bound captures \(R_{\epsilon}(V_{k})\) closely. Hence, the subset introduces a small error on capturing the alignment and center of all the examples in the class. The subset can therefore be found by minimizing the total expected augmentation distance between the examples that are not selected and the selected ones.
The above minimization problem can be turned into maximizing the following monotone submodular1 cover problem:
Footnote 1: A set function \(F:2^{V}\rightarrow\mathbb{R}^{+}\) is _submodular_ if \(F(e|S)=F(S\cup\{e\})-F(S)\geq F(T\cup\{e\})-F(T)\), for any \(S\subseteq T\subseteq V\) and \(e\in V\setminus T\). \(F\) is _monotone_ if \(F(e|S)\geq 0\) for any \(e\in V\setminus S\) and \(S\subseteq V\).
\[S_{k}^{*}=\operatorname*{arg\,max}_{S\subseteq V_{k},|S|\leq r_{k}}\sum_{i\in V_{k}\setminus S}\sum_{j\in S}(C-d_{i,j}), \tag{20}\]
where \(C\) is a big constant. For maximizing a monotone submodular function \(F\), the greedy algorithm provides a \((1-1/e)\) approximation guarantee (Wolsey, 1982). The greedy algorithm starts with the empty set \(S_{k_{0}}=\emptyset\), and at each iteration \(l\), chooses an element \(e\in V\) such that \(S_{k_{l}}=S_{k_{l-1}}\cup\{\operatorname*{arg\,max}_{e\in V}F(e|S_{k_{l-1}})\}\). The greedy algorithm finds a near-optimal solution of size \(r_{k}\) in \(\mathcal{O}(|V_{k}|\cdot r_{k})\) time. This complexity can be reduced to \(\mathcal{O}(|V_{k}|)\) by stochastic evaluation (Mirzasoleiman et al., 2015) and accelerated further by lazy evaluations (Minoux, 1978).
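For concreteness, the following is a minimal Python sketch of a plain greedy selection for an objective of the form in Problem (20); the pairwise-distance matrix, the budget and the constant \(C\) are illustrative placeholders, and the lazy and stochastic accelerations mentioned above are omitted.

```python
import numpy as np

def objective(S, d, C):
    """F(S) = sum_{i in V \\ S} sum_{j in S} (C - d[i, j]), cf. Problem (20)."""
    V = np.arange(d.shape[0])
    rest = np.setdiff1d(V, S)
    if len(S) == 0 or len(rest) == 0:
        return 0.0
    return float((C - d[np.ix_(rest, S)]).sum())

def greedy_select(d, budget, C):
    """Plain greedy: repeatedly add the element giving the largest objective
    value (lazy / stochastic evaluations, used for speed in practice, omitted)."""
    S, candidates = [], set(range(d.shape[0]))
    for _ in range(budget):
        best = max(candidates, key=lambda e: objective(S + [e], d, C))
        S.append(best)
        candidates.remove(best)
    return S

# Toy usage: pairwise expected augmentation distances for one latent class.
rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 5))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(greedy_select(d, budget=5, C=d.max() + 1.0))
```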
Intuitively, as the subsets selected from different classes have a small expected augmentation distance to other examples in their class, they pull together different examples in a class during contrastive learning and let the representations of a class to cluster closely. At the same time, as they preserve the class centers, they allow the representations of different classes to be effectively pushed away from each other. In doing so, they produce a large gradient norm and effectively minimize the contrastive loss on the full data. Note that as \(d_{i,j}\) is a property of the data, the subset found by solving Problem (20) ensures good alignment and divergence throughout training.
The following Theorem captures the extra divergence required on the centers of the selected examples compared to the full data, such that the generalization performance of the downstream NN classifier can be guaranteed.
**Theorem 4.3**.: _Assume \(f\) is a normalized encoder and the subset \(S_{k}\) has \(\nu_{R}^{k}\) error (Eq. 15) in capturing \(R_{\epsilon}(f,V_{k})\) and \(\nu_{\mu}^{k}\) error (Eq. 17) in capturing the center of class \(k\). If for any pair of classes \(k,l\in[K]\), we have:_
\[\boldsymbol{\mu}_{k}^{S\top}\boldsymbol{\mu}_{l}^{S}<\phi(\sigma,\delta,\epsilon)-\big{(}C\nu_{R}^{k}+2(\max\{\nu_{\mu}^{k},\nu_{\mu}^{l}\})^{2}+4\max\{\nu_{\mu}^{k},\nu_{\mu}^{l}\}\big{)} \tag{21}\]
_where \(\phi(\sigma,\delta,\epsilon)\) is the requirement on divergence of full data class centers in Theorem 4.1 and \(C\) is a constant, then the generalization error of the model trained on the subset can be bounded by:_
\[\xi(g_{f^{S}}(V))\leq(1-\sigma)+R_{\epsilon}+\nu_{R}. \tag{22}\]
Theorem 4.3 shows that if the subset captures the class centers and alignment closely (i.e. \(\nu_{R}\) and \(\nu_{\mu}\) are small), then minimizing the contrastive loss on the subset results in similar downstream generalization performance as that of training on full data for the downstream NN classifier.
The proof can be found in Appendix B, where we also discuss that \(C\nu_{R}\) is generally small. Fig. 7(b) in Appendix A confirms that divergence of _full data_ class centers when training on sufficiently large _CL-Core_ subsets is in fact better than that of training on the full data. This explains the similar generalization performance of models trained on _CL-Core_ subsets to models trained on the full data.
### Finding the Subset in Practice
Finally, we discuss how we can approximately find the latent classes and estimate the expected augmentation distance, without having the labels.
**Approximately Finding the Latent Classes.** Problem (20) requires selecting a subset from every class separately. Without the labels, we need to approximately find the latent classes. If a small subset of labels is available, a proxy model can be used to approximately find the latent classes. In our experiments, we show that with as little as 1% of the labels, CLIP (Radford et al., 2021) can be used to approximately find the latent classes. Note that we do not necessarily need access to any downstream labels. In our experiments, we also show that using CLIP (Radford et al., 2021)'s image and text encoders, we can match image embeddings from STL10 to the closest text embeddings from ImageNet labels (fine-grained labels) to obtain approximate latent classes for STL10. Alternatively, one can find latent classes by clustering the representations of a model trained with contrastive SSL. However, this approach will be less accurate than using CLIP or fine-grained labels, when finding larger subsets.
**Estimating the Expected Augmentation Distance.** Expected augmentation distance captures the similarity of examples in the input space. However, using a proxy model can better capture the semantic similarities in practice. Note that the proxy model does not necessarily have to be the same as the model being trained with SSL. Indeed, the proxy model can be much smaller than the model being trained or can be partially trained, as we confirm experimentally. Having a proxy model, we calculate \(d_{i,j}\) for every pair of examples \(i,j\) in latent class \(k\) as follows. We uniformly sample pairs of augmented views of \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\), i.e., \(\mathbf{x}_{a}\in A(\mathbf{x}_{i}),\mathbf{x}_{a}^{\prime}\in A(\mathbf{x}_{j})\), where \(A(.)\) is the set of transformations. Then the pairwise expected augmentation similarity is calculated for all pairs \(i,j\in V_{k}\) as \(s_{i,j}=\mathbb{E}_{\mathbf{x}\in A(\mathbf{x}_{i}),\mathbf{x}^{\prime}\in A(\mathbf{x}_{j})} \left<f_{p}(\mathbf{x}),f_{p}(\mathbf{x}^{\prime})\right>\), where \(f_{p}\) is a proxy model. In Problem (20), we directly use \(s_{i,j}\) instead of \(C-d_{i,j}\). Empirically, we observed that using cosine similarities provides a larger variance on the pairwise distances and allows distinguishing examples better than \(\ell_{2}\). The pseudocode is illustrated in Alg. 1.
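The following is a short sketch of the proxy-based estimate of \(s_{i,j}\) described above (Alg. 1 itself is not reproduced); the proxy encoder \(f_{p}\), the augmentation \(A(\cdot)\) and the toy inputs are placeholders.

```python
import numpy as np

def proxy_similarity(x_i, x_j, proxy, augment, num_samples=16, seed=0):
    """Estimate s_{i,j} = E <f_p(x), f_p(x')> over x in A(x_i), x' in A(x_j),
    using cosine similarity of proxy-model embeddings.  `proxy` (f_p) and
    `augment` (A(.)) are placeholders, not the paper's actual models."""
    rng = np.random.default_rng(seed)
    sims = []
    for _ in range(num_samples):
        u = proxy(augment(x_i, rng))
        v = proxy(augment(x_j, rng))
        sims.append(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return float(np.mean(sims))

# Toy stand-ins: identity "proxy" and additive-noise "augmentation".
toy_proxy = lambda x: x
toy_augment = lambda x, rng: x + 0.1 * rng.normal(size=x.shape)
print(proxy_similarity(np.ones(8), np.ones(8), toy_proxy, toy_augment))
```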
## 5 Experiments
In this section, we first evaluate the downstream generalization performance of the models trained by contrastive learning on the subsets found by _CL-Core_ vs. random subsets, on common image classification benchmarks, namely, CIFAR10, CIFAR100 (Krizhevsky and Hinton, 2009) and STL10 (Coates et al., 2011). Then, we do an extensive ablation study on the effect of the approximate latent classes, and the proxy model used to estimate expected augmentation distance. Finally, we investigate the difficulty of examples in the subset for supervised learning.
**Training Setup** We use SimCLR (Chen et al., 2020) as the contrastive learning method to train ResNet-50 (He et al., 2016) as the encoder architecture and a 2-layer MLP to project the representation to a 128-dimensional latent space. We use InfoNCE with temperature as our loss. Following the standard training pipeline in (Chuang et al., 2020; Robinson et al., 2020) we train for 400 epochs using the Adam optimizer with a learning rate of \(0.001\). We also apply BYOL for training ResNet-18 on STL10 using SGD with a learning rate of \(0.001\).
**Data Augmentation** For data augmentations, we use random crop, random resizing, random horizontal flips and color distortion, as is done in (Chen et al., 2020).
**Evaluation.** For evaluation, we use the widely used linear evaluation protocol (Chen et al., 2020; Chuang et al., 2020). That is, we train a linear classifier using the learned representations of the training examples and their labels. Then, we evaluate the performance of the linear classifier on the test set representations and their corresponding labels. We compare _CL-Core_ subsets with random subsets of the same size sampled from the same approximate latent classes.
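As an illustration of the linear evaluation protocol, here is a minimal sketch assuming the frozen representations have already been extracted; the scikit-learn logistic-regression classifier and the random toy features are stand-ins, not the exact linear head used in the experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_evaluation(train_feats, train_labels, test_feats, test_labels):
    """Fit a linear classifier on frozen SSL representations and report
    test accuracy (a stand-in for the linear head used in the protocol)."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_feats, train_labels)
    return clf.score(test_feats, test_labels)

# Toy usage with random "features"; in practice these come from the frozen encoder.
rng = np.random.default_rng(0)
tr_x, te_x = rng.normal(size=(200, 64)), rng.normal(size=(50, 64))
tr_y, te_y = rng.integers(0, 10, 200), rng.integers(0, 10, 50)
print(f"linear-probe accuracy: {linear_evaluation(tr_x, tr_y, te_x, te_y):.2f}")
```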
### Downstream Generalization Performance
First, we evaluate the downstream generalization performance of the model pre-trained on subsets of different sizes found by _CL-Core_ vs. random subsets of the same size. Here, we use a pre-trained ResNet-50 as the proxy to calculate \(s_{i,j}\), as discussed in Sec. 4.5. For CIFAR100 and STL10, we consider all \(s_{i,j}>0\) and for CIFAR10 we consider all \(s_{i,j}>0.5\). This allows us to exclude examples that have a medium similarity to several other examples, but are not very similar to many of them. To approximately find the latent classes, we input the training data to CLIP (Radford et al., 2021), and train a linear classifier on the CLIP embeddings of the training data and a small randomly selected subset of training labels. In particular, for CIFAR10 and CIFAR100, we use 1% of the labels of training examples selected at random, and for STL10, we use all the available labels (\(<5\%\)). We use the trained linear classifiers to predict the latent class for all the training examples.
**SimCLR** Fig. 3 shows that training with SimCLR on subsets of various sizes found by _CL-Core_ outperforms training on random subsets by over 3% on CIFAR100 and STL10, and by up
Figure 3: Downstream Classification Accuracy of _CL-Core_ Subsets vs. Random Subsets (reporting mean and std over 3 runs).
to 2% on CIFAR10.
**BYOL** Fig. 4(a) shows that training with BYOL on subsets of various sizes found by _CL-Core_ from STL10 outperforms training on random subsets by more than 3%. Interestingly, subsets of size 80% outperform BYOL on the full data by 2%. This confirms that _CL-Core_ can effectively find examples that contribute the most to contrastive learning, and exclude those that might be harmful.
### Ablation Study
Next, we conduct an extensive ablation study on the effect of the approximate latent classes, and the proxy model used to estimate expected augmentation distance.
**Approximate Latent Classes** Fig. 4(b) compares the downstream performance on CIFAR100 when latent classes are obtained by training a linear classifier using 1% labeled training data on CLIP embeddings, to that of using the ground-truth class labels, and \(k\)-means clustering. We see that approximately finding the latent classes using 1% of the labels works nearly as well as ground-truth labels. Notably, while the accuracy of the linear classifier trained with 1% of the labels of CIFAR100 is only 70.8%, this does not negatively affect the quality of subsets found by _CL-Core_. Moreover, in the absence of any labels, using \(k\)-means clustering on the embeddings performs equally well for smaller subsets and still provides a significant improvement for larger subsets.
Next, we consider using a different set of labels than the original labels of the training data to find its latent classes. In particular, we use a pretrained CLIP to label STL10 images by ImageNet labels, using the zero-shot approach. That is, we match every image in STL10 to one of the ImageNet labels, by finding the CLIP text embedding of the ImageNet label that is most similar to the CLIP image embedding. Fig. 4(c) compares the downstream performance on STL10, when using ImageNet labels to find latent classes using a zero-shot approach to that of using the available (\(<5\%\)) STL10 labels to train a linear classifier on CLIP image embeddings. Importantly, no label information about STL is used in the first case. The results clearly demonstrate how _CL-Core_ can entirely avoid the use of labels for approximating the latent classes. Crucially, any relevant and potentially finer-grained set of labels are enough to approximately find the latent classes and achieve a superior downstream performance.
**Using Proxy Models to Estimate Expected Augmentation Distance** Fig. 4(d) shows estimating augmentation distance using various proxy models, such as a ResNet-50 that is partially trained for as few as 10% of epochs as well as smaller models such as a pre-trained ResNet-10, achieves a very similar downstream performance to that of using a fully pre-trained ResNet-50.
Figure 4: Ablation study on CIFAR100 and STL10.
### Investigating subsets found by _Cl-Core_
**Visualization** Fig. 5(a) uses t-SNE to visualize examples that are selected by _CL-Core_ vs. those that are not selected from the class "bed" in CIFAR100. Examples with small expected augmentation distance to selected and not selected examples are connected. We see that the selected examples have small expected augmentation distance to many other examples in the class. Fig. 5(b) illustrates some examples that are selected and not selected from the "bicycle" class. We see that the selected examples are representatives of the whole class, while those not selected present uncommon poses or views of the object.
**Easy Examples are the Most Important** Next, we use the forgetting score (Toneva et al., 2019), i.e. the number of times an example is misclassified after being correctly classified during _supervised_ learning, to quantify the difficulty of an example. Fig. 4(e) shows that the least forgettable examples, which can be safely discarded from supervised learning (Toneva et al., 2019), can considerably outperform the random baseline and achieve a comparable performance to _CL-Core_ for smaller subsets. Fig. 6 in Appendix A shows that subsets found by _CL-Core_ have low forgetting score and high confidence, in expectation. That is, they are easy for supervised learning. Effectively, the most important subsets for SSL are the least important for supervised learning.
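For reference, the following is a minimal sketch of how forgetting events can be counted from per-epoch correctness records during supervised training; the training loop itself is abstracted away and the toy history is illustrative.

```python
import numpy as np

def forgetting_scores(correct_per_epoch):
    """Count forgetting events per example, given a (num_epochs, num_examples)
    boolean array: entry [t, i] is True if example i is classified correctly
    at epoch t.  A forgetting event is a transition correct -> incorrect."""
    correct = np.asarray(correct_per_epoch, dtype=bool)
    return (correct[:-1] & ~correct[1:]).sum(axis=0)

# Toy usage: 5 epochs, 4 examples (columns).
history = np.array([[1, 0, 1, 1],
                    [1, 0, 0, 1],
                    [1, 1, 1, 1],
                    [0, 1, 1, 1],
                    [1, 1, 1, 1]], dtype=bool)
print(forgetting_scores(history))   # examples 0 and 2 are each forgotten once
```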
**Difficult Examples hurt Contrastive Learning** Finally, Fig. 4(f) confirms that training on examples that are ranked lowest by _CL-Core_, i.e., subsets comprised of examples with large expected augmentation distance to the rest of their latent class, significantly hampers contrastive learning. Such examples are difficult-to-learn and are very beneficial for supervised learning (Toneva et al., 2019).
## 6 Conclusion
We identified subsets of examples that contribute the most to contrastive SSL. Theoretically, we characterized important subsets for contrastive learning with rigorous generalization guarantees. Empirically, we provided _CL-Core_ to find these subsets and conducted extensive experiments on CIFAR10, CIFAR100 and STL10 showing improvements of \(>3\)% over random subsets. Surprisingly, we discovered that these important subsets are the least informative for supervised learning. Moreover, we showed that 20% - 40% examples can be discarded on CIFAR100 and STL10, observing no loss and even improvement in downstream accuracy.
|
2305.02067
|
Photosynthesis Under a Red Sun: Predicting the absorption
characteristics of an extraterrestrial light-harvesting antenna
|
Here we discuss the feasibility of photosynthesis on Earth-like rocky planets
in close orbit around ultra-cool red dwarf stars. Stars of this type have very
limited emission in the \textit{photosynthetically active} region of the
spectrum ($400 - 700$ nm), suggesting that they may not be able to support
oxygenic photosynthesis. However, photoautotrophs on Earth frequently exploit
very dim environments with the aid of highly structured and extremely efficient
antenna systems. Moreover, the anoxygenic photosynthetic bacteria, which do not
need to oxidize water to source electrons, can exploit far red and near
infrared light. Here we apply a simple model of a photosynthetic antenna to a
range of model stellar spectra, ranging from ultra-cool (2300 K) to Sun-like
(5800 K). We assume that a photosynthetic organism will evolve an antenna that
maximizes the rate of energy input while also minimizing fluctuations. The
latter is the 'noise cancelling' principle recently reported by Arp et al.
2020. Applied to the Solar spectrum this predicts optimal antenna
configurations in agreement with the chlorophyll Soret absorption bands.
Applied to cooler stars, the optimal antenna peaks become redder with
decreasing stellar temperature, crossing to the typical wavelength ranges
associated with anoxygenic photoautotrophs at $\sim 3300$ K. Lastly, we compare
the relative input power delivered by antennae of equivalent size around
different stars and find that the predicted variation is within the same order
of magnitude. We conclude that low-mass stars do not automatically present
light-limiting conditions for photosynthesis but they may select for anoxygenic
organisms.
|
Christopher D. P. Duffy, Gregoire Canchon, Thomas J. Haworth, Edward Gillen, Samir Chitnavis, Conrad W. Mullineaux
|
2023-05-03T12:17:27Z
|
http://arxiv.org/abs/2305.02067v1
|
Photosynthesis Under a Red Sun: Predicting the absorption characteristics of an extraterrestrial light-harvesting antenna
###### Abstract
Here we discuss the feasibility of photosynthesis on Earth-like rocky planets in close orbit around ultra-cool red dwarf stars. Stars of this type have very limited emission in the _photosynthetically active_ region of the spectrum (\(400-700\) nm), suggesting that they may not be able to support oxygenic photosynthesis. However, photoautotrophs on Earth frequently exploit very dim environments with the aid of highly structured and extremely efficient antenna systems. Moreover, the anoxygenic photosynthetic bacteria, which do not need to oxidize water to source electrons, can exploit far red and near infrared light. Here we apply a simple model of a photosynthetic antenna to a range of model stellar spectra, ranging from ultra-cool (2300 K) to Sun-like (5800 K). We assume that a photosynthetic organism will evolve an antenna that maximizes the rate of energy input while also minimizing fluctuations. The latter is the _noise cancelling_ principle recently reported by Arp et al. (2020). Applied to the Solar spectrum this predicts optimal antenna configurations in agreement with the chlorophyll Soret absorption bands. Applied to cooler stars, the optimal antenna peaks become redder with decreasing stellar temperature, crossing to the typical wavelength ranges associated with anoxygenic photoautotrophs at \(\sim 3300\) K. Lastly, we compare the relative input power delivered by antennae of equivalent size around different stars and find that the predicted variation is within the same order of magnitude. We conclude that low-mass stars do not automatically present light-limiting conditions for photosynthesis but they may select for anoxygenic organisms.
keywords: Astrobiology - Planets and satellites: terrestrial planets - Planets and satellites: individual: Trappist-1
## 1 Introduction
Over 5000 exoplanets (planets orbiting stars other than the Sun) have now been detected and confirmed (NASA Exoplanet Archive). In addition to detecting planets, we are in some cases now also able to characterise their masses/radii and hence bulk composition (e.g. Seager et al. 2007; Adams et al. 2008; Weiss & Marcy 2014) and through direct imaging and transmission spectroscopy probe their atmospheric compositions (e.g. Benneke & Seager 2012; Nikolov et al. 2016; Hinkley et al. 2022). With this information, we are getting ever closer to developing a census of possible habitats and it is becoming increasingly plausible that the presence of life on exoplanets might be inferred (Des Marais et al. 2002; Scalo et al. 2007; Kaltenegger 2017; Schwieterman et al. 2018).
Empirically, the distribution of stellar masses is well known to follow the stellar initial mass function (Kroupa, 2001; Chabrier, 2003), wherein low mass stars are much more common. This, coupled with evidence (Mulders et al., 2015; Hsu et al., 2019, 2020) that terrestrial planetary occurrence rates are higher around low mass stars, implies that the majority of terrestrial planets orbit stars that are lower mass than the Sun (\(T_{\rm eff}<5780\) K) (Hsu et al. 2019, 2020; Ment & Charbonneau, 2023). We note, though, that the occurrence rate of giants is lower around low mass stars (Bryant et al., 2023) and the planet occurrence rate is also sensitive to the stellar metallicity (e.g. Narang et al. 2018; Lu et al. 2020).
A number of terrestrial planets have been discovered around low mass stars on compact orbits, including well-known examples such as Trappist-1 (Gillon et al., 2017), Proxima Centauri (Anglada-Escude et al. 2016; Faria et al. 2022) and LHS 1140 (Dittmann et al. 2017). This includes "habitable zone" planets where it is expected that water would be in the liquid phase on the surface of a rocky planet (Kasting et al. 1993a; Grimm et al. 2018; Dittmann et al. 2017). Trappist-1 in particular has multiple planets in the habitable zone, making it a high-priority target with over 200 hours of observing time allocated to that planetary system in JWST cycle 1. However, to understand habitability we need to go beyond the simple definition of supporting liquid water. Currently, Earth's biosphere is largely supported by photosynthesis, with oxygenic photoautotrophs specifically being responsible for \(>99\)% of the global primary production (Overmann & Garcia-Pichel, 2013). This does, however, depend on the biome, with anoxygenic photosynthesis contributing \(\sim 30\)% in ancient, sulphide-rich lakes (Overmann & Garcia-Pichel, 2013), and
chemosynthetic microbes becoming the dominant producers in desert ecosystems (Bay et al., 2021). Previously, it was argued that evolution of multi-cellular life, particularly animals, was only possible in an atmosphere oxygenated by photosynthesis, though this view is currently being challenged (Cole et al., 2020; Bozdag et al., 2021). Regardless, oxygenic photosynthesis currently covers \(50-80\%\) of the Earth's surface (Field et al., 1998), in the form of canopy forests, savannah, grasslands and marine cyanobacteria and algae, and is detectable, at least at interplanetary distances, as a sharp increase in solar reflectance for wavelengths \(\lambda>700-750\) nm (Sagan et al., 1993; Arnold, L. et al., 2002). This 'vegetation red edge' (VRE) is therefore a promising potential biosignature to look for in exo-planet surveys (Seager et al., 2005), though it may be hidden in the case of life beneath substrates (Cockell et al., 2009). Biochemically it arises from chlorophyll \(a\), the pigment responsible for water oxidation and quinone reduction in Photosystem II (PSII), and NADP reduction in Photosystem I (PSI), photochemical processes that require quanta of \(\lambda=680\) and \(700\) nm respectively. While higher energy photons (\(400-700\) nm) are readily utilized, since excess energy is shed non-radiatively in the antenna (van Amerongen and van Grondelle, 2001), redder photons are insufficiently energetic to drive charge separation. This would imply that low mass stars, with limited emission in the 'photosynthetically active' region of \(400<\lambda<700\) nm (see Fig. 1 **a**), are unlikely to support complex biospheres (Covone et al., 2021). However, this definition of 'photosynthesis' is perhaps too narrow. Several species of cyanobacteria bind red-shifted Chl \(d\) and \(f\), with absorption maxima at \(790-800\) nm, as an adaptation to light that is strongly attenuated in the \(400-700\) nm region by the environment and other organisms (Viola et al., 2022; Tros et al., 2021). Moreover, anoxygenic photoautotrophs, such as purple and green sulphur bacteria (Gregersen et al., 2011), utilize light in the \(800-1000\) nm region. The fact that they source electrons from more readily oxidizable compounds, such as hydrogen sulphide, ferrous iron, hydrogen, etc., relaxes the requirement for quanta of \(\lambda<700-800\) nm (Bryant and Frigaard, 2006), though does limit their habitat. It was previously assumed that anoxygenic photosynthesis evolved first, with oxygenic photo-autotrophs evolving around the Great Oxidation Event (Lyons et al., 2014), although this has been challenged recently, with compelling genetic (Sanchez-Baracaldo and Cardona, 2019) and geological (Wang et al., 2018) evidence for a much earlier appearance. Even so, recent modelling implies that Earth's VRE signal may have risen to its current level as recently as only 500 Mya, which may limit the likelihood of VRE detection to older, warmer Earth-like planets (O'Malley-James and Kaltenegger, 2018). There is therefore significant motivation to broaden our search to redder, more exotic forms of photosynthesis.
It is assumed that the evolution of photosynthetic processes and structures was largely dictated by the intensity, spectral quality and dynamics of the available light. An early adaptation was the antenna-reaction centre architecture, which massively enhances the absorption cross-section of the photosystems and allows for dynamic regulation of energy input in a fluctuating light environment (Wolfe et al., 1994; Fleming et al., 2012). Physically, the antenna is a massive, modular assembly of light-harvesting pigment-protein complexes. Their function is to capture light and transfer the resulting excited electronic state or 'exciton' to the reaction centres (which they do with near-total quantum efficiency). Antenna proteins generally bind several types of pigments in order to provide broad spectral coverage, creating a 'funnel' structure in which higher energy pigments donate to lower energy ones. For example, the major light-harvesting complex of PSII in higher plants, LHCII, binds Chl \(a\) and \(b\) along with several carotenoids (Wei et al., 2016). Nevertheless, antenna complexes are not black bodies, instead possessing well-defined and often narrow absorption peaks.
Kiang et al. (2007b) published a comprehensive review of the relationship between the absorption profiles of Earth's photo-autotrophs and their respective local light environments. Based on empirical trends they proposed several rules: (1) The absorption peak of the antenna is located close to the local irradiance maximum, to maximize energy input. (2) The absorption peaks of the reaction centres are close to the longest wavelength in the irradiance range, since excitons must be funneled from a higher energy antenna. (3) Accessory pigments (such as Chl \(b\) or carotenoids in plants) will absorb towards the shortest wavelength of the irradiance window, since they must funnel excitons to the primary pigments (e.g. Chl \(a\)) in the antenna.
Following on from Bjorn (1976), Marosvolgyi and van Gorkom (2010) applied a different approach, considering the balance between the need to absorb as much light as possible with the potentially prohibitive metabolic cost of synthesizing and maintaining a vast array of different pigment co-factors. The model depends on a _cost parameter_, \(C\), which reflects the fraction of captured energy that is used to synthesize, regenerate and repair the light-harvesting antenna. Choosing \(C\approx 0.96\) reproduces the red absorption band of both the chloroplast and the chromatophore of purple bacteria in their respective native light environments. The argument is that there is an optimum band in which to harvest photons. Harvesting bluer photons carries the burden of synthesizing additional pigments, while harvesting redder photons is less efficient due to an increased likelihood of ambient thermal excitation of the antenna pigments (making stimulated emission rather than absorption more likely). This seems to miss the distinct blue absorption band of the chloroplast which is composed of the Chl \(a\) and \(b\) Soret bands and the \(S_{2}\) bands of various carotenoids. They argue that Chl \(a\) and \(b\) were selected for their red \(Q_{y}\) bands, with their Soret bands being a (fortunate) side effect of their electronic structures. Similarly, they argue that carotenoids were primarily selected for their structural, antioxidant and triplet-quenching properties. One criticism of this model is that the predicted spectrum is extremely sensitive to the choice of \(C\). For \(C=0.96\) the predicted antenna spectrum is in very good agreement with the chloroplast \(Q_{y}\) band, even down to the stoichiometry between Chl \(a\) and \(b\). However, \(C=0.75\) yields a single absorption band covering the entire \(500-750\) nm region of the spectrum, making it almost black. At \(C=0.5\) it absorbs more-or-less uniformly across \(400-900\) nm. Moreover, \(C=0.96\) seems to imply that nearly all of the light energy captured and converted by the organism is re-invested in synthesizing antenna pigments, though this parameter should not be over-interpreted and the fact that the same value of \(C=0.96\) was found for both plants and purple bacteria indicates this model captures some fundamental feature of the antenna.
More recently, Arp et al. (2020) took a different approach, arguing that the absorption profile of the antenna was primarily determined by a requirement for _noise cancellation_. An antenna is subject to noise, both externally as fluctuations in spectral intensity, and internally due to the stochastic nature of the branching exciton transfer pathways. This risks periods where the photosystems are either underpowered, which reduces growth rate, or over-powered, which risks photodamage. Arp et al. (2020) argue (very convincingly) that this noise is minimized by a two-channel antenna, composed of a pair of Gaussian absorbers with similar (but different) absorption maxima, located where the gradient of the spectrum is steepest (see Fig. 1 **b**; Arp et al., 2020). This principle seems to predict the \(Q_{y}\) and Soret profiles of plants, purple bacteria and green sulfur bacteria, based solely on the spectrum of locally available light, though it should be
noted that these solutions are neither unique nor necessarily optimal (see below).
These different approaches are complementary rather than contradictory, and the relationship between antenna structure and the local light environment is likely dependent on multiple factors. Since this relationship is based (largely) on fundamental rules of photo/redox chemistry, it is reasonable to assume it is universal.
In 2002 Wolstencroft and Raven modelled the rate of oxygenic photosynthesis on the surface of habitable-zone planets, with and without cloud cover, orbiting a range of stars (Wolstencroft & Raven, 2002). Assuming a model absorption profile taken from the model species _Nerium oliander_, they found that oxygenic photosynthesis performed best on cloudless planets orbiting F-type stars (6000\(-\)7500 K) and very poorly around cooler stars (K and M-type). The reason for the latter was a mismatch between the spectral irradiance and the _N. oliander_ absorption profile. They then proposed that oxygenic photosynthesis around these low mass stars may still be feasible via multi-photon processes, in which two or more \(\lambda>1000\) nm photons deliver the required energy as opposed to a single \(\lambda<700nm\) photon. Tinetti et al. (2006) explored this idea further and proposed that such processes could produce a red-shifted VRE in the Near Infrared (NIR, \(750<\lambda<1500\) nm).
In a companion paper to the one discussed above Kiang et al. (2007a) applied their empirical antenna-irradiance relationship to model irradiances for Earth-like planets orbiting in the habitable zones around F, K and M-type stars. They predicted that F2V stars (\(\sim 1.5\)M\({}_{\odot}\)) would favour photosynthetic pigments that absorb in the blue, K2V stars (\(\sim 0.8\)M\({}_{\odot}\)) would favour red-orange (as they do
Figure 1: **a.** Model surface spectral flux, \(I_{\star}(\lambda;T_{\star})\), for planets within the middle of the habitable zone around parent stars of different effective temperatures, \(T_{\star}\). The spectral fluxes are taken from PHOENIX radiative transfer models (Husser et al., 2013) of surface spectral flux and the habitable distances are estimated according to a simple radiative equilibrium model outlined in the Methodology. A sparse range of temperatures is shown purely for clarity and the vertical dashed lines approximately demarcate the absorption regions for oxygenic and anoxygenic photosynthesis, with the former often referred to as _photosynthetically active radiation_ (PAR). **b.** A schematic diagram of the concept of a dual-input noise-cancelling antenna. Two sub-populations of pigments with similar (but different) absorption maxima funnel energy to the reaction centre (RC) which oxidizes an electron donor and reduces an acceptor. The two absorbing populations tend to operate in series (e.g. Chl \(b\) transferring energy to Chl \(a\) in plant antenna complexes) and are subject to both external and internal noise. The former reflects the highly dynamic nature of the light-environment while the latter results from fluctuations of the energy transfer pathways within the antenna. **c.** An example of the matrix representation of \(\Delta^{op}\left(\lambda_{0},\Delta\lambda\right)\) for a fixed value of absorber width, \(\sigma=10\) nm. Above this is an illustration of two examples of antenna configuration superimposed on the 2800 K spectrum (dark red). \(T=2800\) K is chosen here purely for illustrative purposes as it exhibits many sharp bands of optimal (and extremely sub-optimal) antenna configurations. Note the distinction between the standard deviation or 'width', \(\sigma\), and the Full Width at Half Maximum, \(\Gamma\approx 2.63\sigma\), of the Gaussian peak.
on Earth), and M-type stars (red dwarf stars, \(0.08-0.6\mathrm{M}_{\odot}\)) would favour absorbers in several NIR bands (\(930-1100\) nm, \(1100-1400\) nm, \(1500-1800\) nm, and \(1800-2500\) nm). The reason for the multiple bands is that irradiance in the NIR is strongly modified by atmospheric transmission, leading to multiple, distinct bands in the local spectral flux (Segura et al., 2003).
More recently, Lehmer et al. (2021) applied the model of Marosvolgyi and van Gorkom (2010) discussed above to similar model irradiances. Assuming that the \(C=0.96\) cost parameter that applies to Earth organisms is universal, they predicted similar antenna absorption profiles to Kiang et al. (2007). The one difference was that even for very low mass M5V stars (\(\sim 0.16\mathrm{M}_{\odot}\)) the model did not predict absorption in NIR bands at \(1100-1400\) nm, \(1500-1800\) nm, and \(1800-2500\) nm, instead predicting a broad absorption peak at \(1050\) nm with an almost negligible shoulder predicted at \(987\) nm. By the logic of the Marosvolgyi and van Gorkom (2010) model, background thermal excitation of the antenna pigments would become significant at these longer wavelengths, making light-harvesting less efficient.
Finally, at time of writing, Hall et al. (2023) have published a model in which they define the _photosynthetic habitable zone_ as the range of orbital radii in which both liquid water can occur and oxygenic photosynthesis is feasible. The model assumes that the maximal rate of photosynthesis, \(P_{\mathrm{max}}\), is dependent on surface irradiance (via a set of empirical parameters derived from phytoplankton (Yang et al., 2020)), ambient surface temperature, and the dark respiration rate, \(R_{\mathrm{rate}}\). \(R_{\mathrm{rate}}\) is essentially the steady-state energy requirement of the organism merely to continue existing, including the metabolic burdens of protein turnover, repair, maintaining homeostasis, etc. To enable growth, reproduction, etc. photosynthesis must generate a surplus to \(R_{\mathrm{rate}}\). On Earth \(R_{\mathrm{rate}}\leq 0.3P_{\mathrm{max}}\), meaning that less than a third of the photosynthetic yield needs to be reinvested in base-level survival, reflecting generally favourable conditions for life on Earth (Geider and Osborne, 1989). If conditions are similarly favourable on other Earth-like planets then oxygenic photosynthesis may be feasible around stars of \(>0.6\mathrm{M}_{\odot}\). This range can be extended to even smaller stars in the absence of any atmospheric attenuation of light or any greenhouse effect (though the authors admit this may be extremely optimistic).
As with Arp et al. (2020) it may be instructive to apply a different set of criteria to this problem. Here we apply a modified form of their noise cancelling antenna principle to model spectral fluxes of a range of stars, from very low mass/temperature M-dwarfs to something more like the Sun. We hypothesize that the antenna absorption profile will evolve to minimize input noise while maximizing the total input power.
## 2 Methodology
### Model Stellar/Solar Spectra
We use stellar spectral models for stars of different masses/temperatures generated by the PHOENIX radiative transfer code (Husser et al., 2013). These are typically over one and a half million lines in length, so we smooth and re-sample the spectrum down to \(4000\) points, which still captures the large scale features and reduces the computation time for calculating the optimal antenna absorption characteristics (see Fig. 1**a**.). Smoothing features over wavelength ranges smaller than the smallest absorption band we consider is justified, given that all radiation across the band is absorbed. In other words, antenna bands cannot resolve spectral variations on scales smaller than their own effective width.
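As a concrete illustration of this down-sampling step, the following is a minimal Python sketch; the array names `wl` and `flux`, and the choice of simple bin-averaging, are assumptions, since the paper does not specify the smoothing kernel used:

```python
import numpy as np

def resample_spectrum(wl, flux, n_points=4000):
    """Smooth and down-sample a high-resolution spectrum by bin-averaging.

    wl, flux : 1D arrays of wavelength (nm) and spectral flux, assumed sorted.
    Each output point is the mean flux over one of n_points equal wavelength
    bins, so features narrower than a bin (and hence narrower than any
    antenna band) are averaged out.
    """
    edges = np.linspace(wl.min(), wl.max(), n_points + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(wl, edges) - 1, 0, n_points - 1)
    binned = np.bincount(idx, weights=flux, minlength=n_points)
    counts = np.bincount(idx, minlength=n_points)
    return centres, binned / np.maximum(counts, 1)

# e.g. wl_lo, flux_lo = resample_spectrum(wl_phoenix, flux_phoenix)
```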
The standard solar spectrum used in Fig. 3 is an AM1.5 spectrum generated by the latest version of the SMARTS (Simple Model of the Atmospheric Radiative Transfer of Sunshine) code (Gueymard, 2004) and published by NREL.
### Determining optimal antenna absorption characteristics for a given stellar spectral flux
Here we describe how we construct our basic model as a noise cancelling system that maximises the total power input, as discussed above. In line with the previous work of Arp et al. (2020) we assume that the antenna absorption profile is comprised of a closely spaced Gaussian doublet (see Fig. 1**b** and **c**),
\[A\left(\lambda;\lambda_{0},\Delta\lambda,\sigma\right)=\frac{1}{2}\left(A_{+} \left(\lambda;\lambda_{0},\Delta\lambda,\sigma\right)+A_{-}\left(\lambda; \lambda_{0},\Delta\lambda,\sigma\right)\right) \tag{1}\]
where,
\[A_{\pm}\left(\lambda;\lambda_{0},\Delta\lambda,\sigma\right)=\frac{1}{2\sigma \sqrt{2\pi}}\exp\left(-\frac{\left(\lambda-\lambda_{0}\pm\frac{\Delta\lambda }{2}\right)^{2}}{2\sigma^{2}}\right) \tag{2}\]
\(\lambda_{0}\) is the central wavelength of the absorber pair, \(\Delta\lambda\) is the separation between peaks, and \(\sigma\) is the standard deviation of the Gaussian curve, hereafter the 'width' (see Fig. 1**c**.). To limit the number of free parameters we assume that \(A_{+}\) and \(A_{-}\) have the same width and amplitude. \(A_{+}\) and \(A_{-}\) represent the two input channels of the antenna and we define the input power of each channel as,
\[P_{\pm}=\sigma_{0}\int_{0}^{\infty}d\lambda\,\frac{hc}{\lambda}A_{\pm}\left(\lambda;\lambda_{0},\Delta\lambda,\sigma\right)I_{S}\left(\lambda;T_{S}\right) \tag{3}\]
where \(\sigma_{0}\) is the integrated optical cross-section of the antenna, \(I_{S}\left(\lambda\right)\) is the incident spectral flux (see next sub-section), \(h\) is the Planck constant and \(c\) is the speed of light. Arp et al. (2020) showed that the requirement for a noise-cancelling antenna is satisfied by maximizing the _power input difference_,
\[\Delta^{op}\left(\lambda_{0},\Delta\lambda,\sigma\right)=\frac{1}{2}\left|P_{+ }-P_{-}\right| \tag{4}\]
subject to the constraint that \(\Delta\lambda<6\sigma\). They provide a thorough mathematical justification of this and the interested reader is directed to the original article (Arp et al., 2020). Here we assume that there will be an additional selection pressure to maximize the total power input of the antenna,
\[P_{in}\left(\lambda_{0},\Delta\lambda,\sigma\right)=\frac{1}{2}\left(P_{+}+P_{-}\right) \tag{5}\]
particularly in the dim, red light environment provided by M dwarf stars. We calculate the matrices \(\Delta^{op}\left(\lambda_{0},\Delta\lambda\right)\) and \(P_{in}\left(\lambda_{0},\Delta\lambda\right)\) for widths \(\sigma=10.0,15.0,20.0,25.0,\) and \(30.0\) nm, which is reasonably representative of the range of widths observed for photosynthetic pigments on Earth. For example, the \(0-0\) peak of the \(Q_{y}\) band of Chl \(a\) in solution is approximately Gaussian with width \(\sigma\approx 12\) nm (Knox and Spring, 2003), while the vibronic peaks of carotenoids such as lutein can have \(\sigma\approx 20-40\) nm (Gray et al., 2022). \(\Delta^{op}\) and \(P_{in}\) are then re-scaled so that their maximal values across the entire parameter space equal unity, allowing us to define a product function,
\[\begin{split}\Pi_{in}\left(\lambda_{0},\Delta\lambda,\sigma\right)& =\tilde{\Delta}^{op}\left(\lambda_{0},\Delta\lambda,\sigma\right)\tilde{P}_{in} \left(\lambda_{0},\Delta\lambda,\sigma\right)\\ &=\left|\left(\tilde{P}_{+}\right)^{2}-\left(\tilde{P}_{-}\right) ^{2}\right|\end{split} \tag{6}\]
where the tildes indicate re-scaled functions. The definition of \(\Pi_{in}\) means that an antenna configuration with \(\Pi_{in}\approx 1.0\) satisfies both the noise cancellation and power input maximization conditions, while \(\Pi_{in}\approx 0.0\) means one (or both) of these conditions is not well satisfied.
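A minimal sketch of how Eqs. (1)-(6) could be evaluated on a grid of \((\lambda_{0},\Delta\lambda)\) values is given below. It assumes a re-sampled spectrum with wavelength `wl` in nm and flux `flux` in W m\({}^{-2}\) nm\({}^{-1}\); the grid arrays and the unit value of \(\sigma_{0}\) are placeholders rather than the authors' actual settings:

```python
import numpy as np

H_C = 6.626e-34 * 2.998e8  # Planck constant times speed of light, J m

def gaussian_pair(wl, lam0, dlam, sigma):
    """Eq. (2): the two halves of the Gaussian doublet (all lengths in nm)."""
    norm = 1.0 / (2.0 * sigma * np.sqrt(2.0 * np.pi))
    a_plus = norm * np.exp(-(wl - lam0 + 0.5 * dlam) ** 2 / (2.0 * sigma ** 2))
    a_minus = norm * np.exp(-(wl - lam0 - 0.5 * dlam) ** 2 / (2.0 * sigma ** 2))
    return a_plus, a_minus

def antenna_maps(wl, flux, lam0_grid, dlam_grid, sigma, sigma0=1.0):
    """Eqs. (3)-(6): re-scaled Delta^op, P_in and Pi_in on a (lam0, dlam) grid."""
    p_plus = np.zeros((len(lam0_grid), len(dlam_grid)))
    p_minus = np.zeros_like(p_plus)
    photon_energy = H_C / (wl * 1e-9)              # J per photon
    for i, lam0 in enumerate(lam0_grid):
        for j, dlam in enumerate(dlam_grid):
            if dlam > 6.0 * sigma:                 # constraint Delta_lambda < 6 sigma
                continue
            a_p, a_m = gaussian_pair(wl, lam0, dlam, sigma)
            p_plus[i, j] = sigma0 * np.trapz(photon_energy * a_p * flux, wl)   # Eq. (3)
            p_minus[i, j] = sigma0 * np.trapz(photon_energy * a_m * flux, wl)
    delta_op = 0.5 * np.abs(p_plus - p_minus)      # Eq. (4)
    p_in = 0.5 * (p_plus + p_minus)                # Eq. (5)
    delta_op /= delta_op.max()                     # re-scale to unit maximum
    p_in /= p_in.max()
    return delta_op, p_in, delta_op * p_in         # Eq. (6)
```

Because \(\Delta^{op}\) and \(P_{in}\) are each re-scaled to a maximum of one before being multiplied, the resulting \(\Pi_{in}\) map is dimensionless and independent of the overall cross-section \(\sigma_{0}\).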
We therefore generate a set of matrices or maps (each corresponding to a different width, \(\sigma\)) for \(\bar{\Delta}^{op}\), \(\tilde{P}_{in}\) and \(\Pi_{in}\) (see Fig. 2). These are then analysed to identify the most topologically significant peaks via the method of persistence homology (Otter et al., 2017; Taskesen, 2020). For this particular problem - identifying local maxima on a 2D surface - this has a rather intuitive interpretation. Taking one of the maps shown in Fig. 2 we can scan the z-axis (in our case \(\Delta^{op}\), \(P_{in}\), or \(\Pi_{in}\)) from 1 \(\rightarrow\) 0. As we do so peaks will appear, initially as distinct 'islands', before merging together. The _persistence score_ of a peak is then the difference between the z-values at which it appears and when it merges with other peaks. A very tall, very sharp peak, well-separated from its neighbours, will have a high persistence score, while short, broad, clustered peaks will merge with each other very soon after they appear.
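The persistence calculation itself can be sketched without any topology library: scan the map from its highest value downwards, let local maxima grow into islands, and record a peak's persistence when its island is absorbed by a higher one. The following self-contained Python implementation of that idea is an illustration only, not the specific method of Otter et al. (2017) or the package of Taskesen (2020) used by the authors:

```python
import numpy as np

def peak_persistence(z):
    """Persistence of local maxima on a 2D map z (larger = higher).

    Pixels are processed from highest to lowest value. A pixel with no
    previously-processed neighbours starts a new peak (its birth value).
    When two growing islands meet, the one with the lower birth 'dies' and
    its persistence is birth - current level. Returns a list of
    (persistence, birth_value, (row, col)) sorted by persistence.
    """
    nr, nc = z.shape
    order = np.argsort(z, axis=None)[::-1]         # flat indices, high to low
    parent, birth, peak_pos, result = {}, {}, {}, []

    def find(a):                                    # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    steps = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for flat in order:
        r, c = divmod(int(flat), nc)
        roots = set()
        for dr, dc in steps:
            rr, cc = r + dr, c + dc
            if 0 <= rr < nr and 0 <= cc < nc and (rr * nc + cc) in parent:
                roots.add(find(rr * nc + cc))
        parent[flat] = flat
        if not roots:                               # a new peak is born here
            birth[flat] = z[r, c]
            peak_pos[flat] = (r, c)
        else:                                       # join the oldest neighbouring island
            roots = sorted(roots, key=lambda a: birth[a], reverse=True)
            keep = roots[0]
            parent[flat] = keep
            for dead in roots[1:]:                  # younger islands die at this level
                result.append((birth[dead] - z[r, c], birth[dead], peak_pos[dead]))
                parent[dead] = keep
    for root, b in birth.items():                   # survivors persist to the minimum
        if find(root) == root:
            result.append((b - z.min(), b, peak_pos[root]))
    return sorted(result, reverse=True)
```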
### Comparing power input for optimal antenna configurations around different stars
Having calculated the optimal antenna configurations for a given stellar temperature, we then compare the input powers across different temperatures. We define the ratio,
\[\phi\left(\lambda_{0},\Delta\lambda,\sigma;T\ |\ \lambda_{0}^{\prime}, \Delta\lambda^{\prime},\sigma^{\prime};T^{\prime}\right)=\\ \frac{P_{in}^{T}\left(\lambda_{0},\Delta\lambda,\sigma\right)}{ \sigma_{0}^{T}}\cdot\frac{\sigma_{0}^{T^{\prime}}}{P_{in}^{T^{\prime}}\left( \lambda_{0}^{\prime},\Delta\lambda^{\prime},\sigma^{\prime}\right)} \tag{7}\]
where \(T\) and \(T^{\prime}\) denote specific stellar temperatures. We will assume \(T^{\prime}=5800\) K, effectively treating the Earth-like antennae as our reference configurations. To constrain the parameter space we need to explore, we will assume that the total cross-sections are equivalent, \(\sigma_{0}^{T}=\sigma_{0}^{T^{\prime}}\), but we will discuss the implications of this assumption below.
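In code, Eq. (7) is simply a ratio of cross-section-normalized input powers; a one-function sketch (the argument names are illustrative):

```python
def relative_power(p_in, sigma0, p_in_ref, sigma0_ref):
    """Eq. (7): input power of an antenna around a star of temperature T,
    per unit optical cross-section, relative to the reference (5800 K) antenna.
    With sigma0 == sigma0_ref this reduces to p_in / p_in_ref."""
    return (p_in / sigma0) * (sigma0_ref / p_in_ref)
```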
Eqn. (7) depends on the _relative_ spectral irradiance of two antennae evolving around different parent stars. Formally, \(I_{s}(\lambda;T)\) are spectral flux densities (W m\({}^{-2}\) nm\({}^{-1}\)) at the stellar surfaces, so we estimate the spectral flux at the planetary surface by assuming an approximate habitable distance for each star type. Assuming that both the star and the orbiting planet are spherical black bodies, then the condition of radiative equilibrium implies the relation,
\[\left(\frac{R_{s}}{A_{P}}\right)^{2}=4\left(\frac{T_{P}}{T_{s}}\right)^{4} \tag{8}\]
where \(T_{\star}\) and \(T_{p}\) are the equilibrium temperatures of the star and planet respectively, \(R_{s}\) is the stellar radius, and \(A_{P}\) is the planetary orbital radius. We then estimate \(A_{P}\left(T_{P};R_{s},T_{s}\right)\) for a habitable (liquid water) temperature range \(273\)K \(\leq T_{P}\leq 373\)K. We then estimate the planetary surface spectral flux as,
\[I_{s}\left(\lambda;\left\langle A_{P}\right\rangle,R_{s},T_{s}\right)=F_{s} \left(\lambda;R_{s},T_{s}\right)\left(\frac{R_{s}}{A_{P}}\right)^{2} \tag{9}\]
where \(F_{s}\left(\lambda;R_{s},T_{s}\right)\) are the stellar surface spectral fluxes given by the PHOENIX radiative transfer models. A representative selection of \(I_{s}(\lambda)\) are shown in Fig. 1**a**.
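Eqs. (8) and (9) can be combined into a short helper that reproduces the distances in Table 1 and rescales the PHOENIX surface flux to the planetary surface; a sketch, with the mid-range planetary temperature of 323 K and the constant values as assumptions:

```python
R_SUN_AU = 6.957e8 / 1.496e11            # solar radius expressed in astronomical units

def habitable_radius(T_star, R_star_solar, T_planet):
    """Orbital radius (AU) at which a black-body planet equilibrates to T_planet.

    From Eq. (8), (R_s/A_P)^2 = 4 (T_p/T_s)^4, i.e. A_P = (R_s/2)(T_s/T_p)^2.
    """
    return 0.5 * R_star_solar * R_SUN_AU * (T_star / T_planet) ** 2

def surface_flux(F_star, T_star, R_star_solar, T_planet=323.0):
    """Eq. (9): scale the stellar-surface spectral flux to the planetary surface."""
    A_p = habitable_radius(T_star, R_star_solar, T_planet)
    return F_star * (R_star_solar * R_SUN_AU / A_p) ** 2
```

As a sanity check against Table 1, `habitable_radius(5800, 0.936, 273)` gives roughly 0.98 AU and `habitable_radius(5800, 0.936, 373)` roughly 0.53 AU, matching the Sun-like entry.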
## 3 Results
### Noise reduction and power maximization for the Solar spectrum
To validate the model of Arp et al. (2020) we first apply our model to the \(T_{s}=5800\) K spectral flux which is representative of the Solar spectrum at the top of Earth's atmosphere (Fig. 2). Fig. 2**a**, (left) shows the noise cancelling parameter \(\Delta^{op}\left(\lambda_{0},\Delta\lambda\right)\) for an absorber width of \(\sigma=10\) nm. The latter was chosen since it gave the absolute optimal solution but the same data for larger \(\sigma\) are listed in Fig. S1 of the Supplementary Material. We see that \(\Delta^{op}\) is more strongly-dependent on \(\lambda_{0}\) than on \(\Delta\lambda\). As expected from Arp et al. (2020), the values of \(\left(\lambda_{0},\Delta\lambda\right)\) that maximize \(\Delta^{op}\) are those that span the regions of \(I_{s}(\lambda)\) with the steepest gradient, with two optimal solutions on the blue edge of \(I_{s}(\lambda)\), centred on \(\lambda_{0}=304\) and \(396\) nm (see Fig. 2**a**, right).
Fig. 2**b**, (left) shows \(P_{in}\left(\lambda_{0},\Delta\lambda\right)\) for \(\sigma=10\) nm, revealing less of a dependence on \(\lambda_{0}\) and \(\Delta\lambda\) than \(\Delta^{op}\). The requirement to maximize \(P_{in}\) results in a single absorption line (\(\Delta\lambda=0\) nm) located at the spectral maximum. One would expect maximizing \(P_{in}\) would select for large \(\sigma\) (see Fig. S2 of the Supplementary Material), to cover as much of the incident spectral flux as possible, but the normalization condition in Eqn. (2) means that broader absorption lines have smaller peak amplitudes. While this may seem like an artificial model constraint it is essentially an assumption that increasing \(\sigma\) is predominantly via increasing inhomogeneous broadening, i.e. increasing the variance in the absorption peaks of chemically-identical pigments (due to an anisotropic solvent protein environment) rather than increasing the line width of the pigment itself (homogeneous broadening).
Fig. 2**c**, (left) shows \(\Pi_{in}\left(\lambda_{0},\Delta\lambda\right)\) for \(\sigma=10\) nm, which is qualitatively similar to \(\Delta^{op}\left(\lambda_{0},\Delta\lambda\right)\). The key difference is that simultaneously maximizing both \(P_{in}\) and \(\Delta^{op}\) favours the antenna configuration closer to the peak of \(I_{s}\) (see Fig. 2**c**, (right)). \(\Pi_{in}\left(\lambda_{0},\Delta\lambda\right)\) for larger \(\sigma\) are shown in the Fig. S3 of the Supplementary Material.
Fig. 2 suggests that the optimal light-harvesting system for the Solar spectral flux would be composed of a pair of \(\sigma=10\) nm absorbers centred on 385 and 415 nm respectively. This is qualitatively similar to the Chl \(a\) and \(b\) blue absorption (or 'Soret') bands at approximately 450 and 425 nm respectively but is not an exact match. Crucially, this model seems to miss the 680 and 650 nm red (or '\(Q_{y}\)') bands of Chl \(a\) and \(b\) respectively, though we note there are several smaller peaks in \(\Delta^{op}\) and \(\Pi_{in}\) on the red side of \(I_{s}\) (Fig. 2**a** and **c**). Three such configurations, with \(\Pi_{in}\sim 0.5\), are shown alongside the absolute optimal configuration in Fig. 3**a**. These less optimal solutions cover the range \(490\) nm \(<\lambda_{0}<625\) nm.
For Earth-based photosynthesis we have the benefit of very accurate atmospheric transmission data. Fig. 3**b**, shows how the position and height of the \(\Pi_{in}\) peaks change for an AM1.5 standard solar spectrum. The absolute optimum peak is largely unaffected but the peak at \(\lambda=489\) nm in Fig. 3**a**, is blue-shifted to \(\lambda_{0}=443\) nm. The peaks on the red side of the spectrum move a little and their relative \(\Pi_{in}\) scores decrease due to the overall gradient of the spectrum, and therefore \(\Delta^{op}\), decreasing. The \(\tilde{\Delta}^{op}\), \(\tilde{P}_{in}\), and \(\Pi_{in}\) matrices for the full range of widths are included in Figs S4-S6 of the Supplementary Material, which show that the number and density of distinct bands in \(\Delta^{op}\) and \(\Pi_{in}\) increases dramatically relative to the 5800 K spectrum. This simply reflects the fact that the AM1.5 spectrum contains a number of sharp features on its red edge (see Fig. 3**b**).
Finally, Fig. 3**c**, shows the absorption spectrum of LHCII, the major light-harvesting protein of vascular plants (spectrum digitized from Kondo et al. (2021), structure shown inset), with two of the
Figure 2: Optimal antenna properties for a \(T_{s}=5800\) K PHOENIX spectral flux. The noise cancelling condition requires an antenna absorption profile that is a Gaussian doublet with central wavelength \(\lambda_{0}\) and separation \(\Delta\lambda\). A width of \(\sigma=10\) nm gives the optimal values but a range of widths are shown in the Supplementary Material. **a. (left)** A heatmap of the noise cancelling parameter, \(\Delta^{op}(\lambda_{0},\Delta\lambda,\sigma)\), as a function of \(\lambda_{0}\) and \(\Delta\lambda\). The green dots indicate the most topologically distinct peaks as determined by the persistence score. **a. (right)** The optimal peaks shown with the spectrum (black). **b.** As a. but for the total power input, \(P_{in}(\lambda_{0},\Delta\lambda,\sigma)\). The most topologically distinct peak is indicated with a blue dot but the dependencies of \(P_{in}\) on \(\lambda_{0}\) and particularly \(\Delta\lambda\) are much flatter than for \(\Delta^{op}\). **c.** As a. but for the product function \(\Pi_{in}(\lambda_{0},\Delta\lambda,\sigma)\). For \(\Delta^{op}\) and \(\Pi_{in}\) we note that there are several smaller-amplitude, less-distinct peaks on the red edge of the spectral flux.
Figure 3: **a.** Absorption doublets corresponding to local peaks in \(\Pi_{in}\) on the red side of a \(T_{s}=5800\) K PHOENIX spectral flux, \(I_{s}(\lambda)\). These peaks are less topologically distinct than the absolute optimum (shown in blue) and less effective at maximizing noise cancellation, \(\Delta^{op}\), and power input, \(P_{in}\). **b.** Same as a. but for a standard Solar spectrum with AM1.5 atmospheric filtering. As well as adding another peak (orange) on the blue side of the spectrum, the overall flattening in the \(\lambda>500\) nm region reduces the \(\Pi_{in}\) scores for doublets on the red side. **c.** Two absorption doublets from b. shown against the absorption spectrum of the major plant light-harvesting complex LHCII (green, structure of LHCII shown inset).
antenna configurations identified in Fig. 3**b.** The broad peaks in the region \(400-500\) nm of the LHCII absorption spectrum are composed of the Soret bands of Chl \(a\) and \(b\), and broad, vibronically-structured absorption bands of several carotenoids (Wei et al., 2016), which approximately align with our predicted absorption pair at \(\lambda_{0}=443\) nm. The red peaks arise from the \(Q_{y}\) bands of Chl \(a\) and \(b\), which are roughly in the same place as one of the weaker solutions in Fig. 3**b.** We should note that the agreement is, at best, qualitative, and even then these configurations do not represent unique solutions.
### Optimal antenna configurations for different stellar spectral fluxes
We apply the \(\Pi_{in}\) maximization procedure to stellar spectral fluxes in the \(2300\) K \(\leq T_{s}\leq 5800\) K range, with \(T_{s}=2300\) K representing the lower limit of the M-class stars. We consider incident wavelengths only in the \(200\) nm \(<\lambda_{0}<1200\) nm range as this covers (with a wide margin) all Earth-based photo-autotrophs. We do not consider any atmospheric modulation of \(I_{s}\), focusing instead on general qualitative trends. To reduce the parameter space (and because narrower \(A(\lambda)\) performed better for the \(5800\) K spectrum) we consider only \(\sigma=10\) nm.
Fig. 4**a.** shows all antenna configurations for which \(\Pi_{in}(\lambda_{0},\Delta\lambda)\geq 0.9\) as a function of \(T_{s}\). These configurations represent highly optimized antennae that strongly satisfy both the requirements for noise cancellation and maximum power input (\(\Delta^{op}\sim\Pi_{in}\sim 0.95\)). As a reference, Fig. 4**a.** shows ranges of \(\lambda_{0}\) that approximately correspond to the absorption peaks of known photosynthetic pigments and antenna structures. For \(T_{s}=4800\) and \(5800\) K (K- and G-type stars respectively) the optimum antennae roughly correspond to the blue Soret band of Chl \(a\) and \(b\). The Soret band is composed of transitions from the Chl ground state to several high-lying singlet excited states. In LHCII the Chl Soret band captures blue photons (see Fig. 4**c.**, blue line) though a large proportion of this energy is lost as the resulting excitations rapidly (\(<1\) ps) and non-radiatively relax to the first singlet excited state which composes the \(Q_{y}\) band. The Chl \(a\)\(Q_{y}\) band, at \(\sim 680\) nm, is central to oxygenic photosynthesis as it is the precursor to the charge-separated states that initiate the photo-redox chemistry in the RCs.
For \(T_{s}=3800\) K, the lower end of the K-type stars, there is a narrow band of optimal antenna configurations at \(\lambda_{0}\sim 540\) nm, within the region of pigments like phycocyanin (see Fig. 4**c.**, green line), the major light-harvesting pigment of the _phycobilisome antenna_ of cyanobacteria. Cyanobacteria are also oxygenic and excitation of phycocyanins similarly rapidly relaxes to the same Chl \(a\)\(Q_{y}\) states in the same RCs. It is only at \(T_{s}=3300\) K that our model predicts an antenna configuration similar to the Chl \(a\) and \(b\)\(Q_{y}\) bands.
For \(T_{s}\leq 2800\) K, the lower end of the M-dwarf range, the optimal antenna configurations start to resemble the anoxygenic photoautotrophs. For \(T_{s}=2800\) K the optimum is within range of the Bacteriochlorophyll \(a\) (BChl \(a\)) \(Q_{y}\) band. For reference, Fig. 4**c.** shows the \(800\) and \(850\) nm bands from the peripheral LH2 antenna complex of the purple bacterium _Rhodobacter sphaeroides_ (maroon line), both of which originate from BChl \(a\). For \(T_{s}\leq 2600\) K the optimum antenna absorbs at \(\lambda_{0}\sim 1000\) nm which is comparable to the extremely red-shifted BChl \(b\) found in the antennae of purple bacteria such as _Blastochloris viridis_ (see Fig. 4**c.**, grey line).
In Fig. 4**b.** we consider \(\Pi_{in}\geq 0.5\), corresponding to antennae that are highly optimized to _either_ noise cancelling _or_ power input, or somewhat optimized for both (\(\Delta^{op}\sim\Pi_{in}\sim 0.7\)). We see the general temperature dependence is the same but with a broader variance. For example, for both \(T_{s}=3800\) and \(3300\) K there is a set of less optimal solutions in the Chl \(a\) and \(b\)\(Q_{y}\) regions. For \(T_{s}\leq 2600\) K there are less optimal solutions in the \(\lambda_{0}\sim 900\) nm region, though on Earth no organisms harvest light in this region as these photons are strongly absorbed by atmospheric water (Stevens, 2013).
### Comparing absolute power input for optimal antennae around different stars
The estimated habitable zones for each \(T_{s}\) are listed in Table 1. We note that \(\langle A_{P}\rangle(T_{s}=5800\,\mathrm{K})\sim 0.75\) AU, reflecting our neglect of any atmospheric greenhouse effect in our calculation of radiative equilibrium. Still, the estimates of \(\langle A_{P}\rangle(T_{s})\) are qualitatively reasonable.
Fig. 5**a.** plots the relative input power, \(\phi\), as defined in Eqn. 7 and with the optimum antenna configuration for \(T_{s}=5800\) K (see Fig. 2**c.**) as the reference configuration. Across the entire range of \(T_{s}\), \(\phi\) decreases by no more than a factor of \(\sim 0.2\). This reflects the fact that the assumption of habitable planetary temperatures means that low stellar luminosity,
\[L_{s}(R_{s},T_{s})=4\pi R_{s}^{2}\int_{0}^{\infty}d\lambda F_{s}(\lambda;R_{s}, T_{s}) \tag{10}\]
is compensated for by small orbital radii, \(A_{P}\). This implies that, in terms of the pure input power from a photosynthetic antenna, light limiting conditions are not necessarily a feature of exo-planets orbiting M-dwarf stars.
## 4 Discussion
Since we are discussing the absorption characteristics of hypothetical photosynthetic organisms, and since we are interested in general trends rather than specific exo-planetary systems, we have adopted a relatively simple model. We balance two considerations, the need to minimize input noise and the need to maximize overall energy input. In terms of the validity of the method, we tested this first on a \(T_{s}=5800\) K model stellar spectral flux and then on a standard sea-level terrestrial Solar spectrum. In both cases our model predicts optimal antenna configurations qualitatively similar to the Chl \(a\) and \(b\) Soret bands. This is also true if, as Arp et al. (2020), we consider only the requirement for noise cancellation. In neither case do we unambiguously predict anything resembling the Chl \(a\) and \(b\)\(Q_{y}\) bands. As Arp et al. (2020) acknowledged, for a realistic incident spectrum there are many local peaks in \(\Delta^{op}\) and their reported fits to the antennae of higher plants are obtained by matching solutions from a selection of many. It should be noted that we employed a different method for identifying local maxima in \(\Delta^{op}\), \(P_{in}\) and \(\Pi_{in}\), compared to Arp et al. (2020) (persistence homology rather than variations in curvature). Moreover, we explicitly compare the relative amplitude of these peaks rather than treating all local maxima as equivalent solutions. We found that \(Q_{y}\)-like antenna configurations were generally topologically indistinct and sub-optimal (\(\Delta^{op}\) and \(\Pi_{in}<0.5\)).
The Chl Soret band functions as an _accessory_ light-harvesting pathway in photosynthetic antennae, capturing blue photons (\(\lambda<500\) nm) and converting them to usable lower energy quanta (\(\lambda\sim 700\) nm) via non-radiative decay to the \(Q_{y}\)-band. Kiang et al. (2007b) identified the Soret band in their rules for the evolutionary relationship between antennae and spectral irradiance: antennae have accessory pigments that absorb on the blue edge of the irradiance window. The Chl \(Q_{y}\)-bands are the _primary_ antenna states representing the \(\lambda\sim 700\) nm quanta required by the RCs and the immediate precursor states to the primary charge separation event that initiates
Figure 4: **a.** All antenna configurations (\(\lambda_{0},\Delta\lambda\)) for which \(\Pi_{in}\geq 0.9\) as a function of \(T_{s}\). For comparison we show, as variously coloured bands, the approximate absorption regions of the Chl \(a\) and \(b\) Soret and \(Q_{y}\) bands typical of higher plants, phycocyanin, one of the antenna pigments in the phycobilisome antenna of cyanobacteria, the BChl \(a\)\(Q_{y}\) band which captures light for purple bacteria, and the extremely red-shifted BChl \(b\)\(Q_{y}\) band from the antenna of some NIR-adapted purple bacteria. **b.** All antenna configurations (\(\lambda_{0},\Delta\lambda\)) for which \(\Pi_{in}\geq 0.5\). **c.** Absorption spectra of LHCII (blue, digitized from Kondo et al. (2021)), phycocyanin (green, from He et al. (2021)), the peripheral antenna of the purple bacterium _Rhodobacter sphaeroides_, LH2 (maroon, from Papagiannakis et al. (2002)), and the extremely red-shifted, BChl \(b\)-enriched LH1 antenna from _Blastochloris viridis_ (grey, from Namoon et al. (2022)).
| \(T_{s}\) (K) | \(R_{s}/R_{\odot}\) | \(A_{p}^{max}\) (AU) | \(A_{p}^{min}\) (AU) | \(\langle A_{p}\rangle\) (AU) |
| --- | --- | --- | --- | --- |
| 2300 | 0.117 | 0.019 | 0.010 | 0.015 |
| 2600 | 0.133 | 0.028 | 0.015 | 0.021 |
| 2800 | 0.254 | 0.062 | 0.033 | 0.048 |
| 3300 | 0.299 | 0.102 | 0.055 | 0.078 |
| 3800 | 0.613 | 0.276 | 0.148 | 0.212 |
| 4300 | 0.694 | 0.400 | 0.214 | 0.307 |
| 4800 | 0.775 | 0.557 | 0.299 | 0.428 |
| 5800 | 0.936 | 0.982 | 0.526 | 0.754 |

Table 1: Estimated habitable orbital radii, \(A_{p}\), as a function of effective stellar temperature, \(T_{s}\), and stellar radius, \(R_{s}\) (relative to the Solar radius, \(R_{\odot}\)). \(A_{p}^{max}\) corresponds to an equilibrium planetary temperature of \(T_{p}=273\) K, while \(A_{p}^{min}\) corresponds to \(T_{p}=373\) K. \(\langle A_{p}\rangle\) is the mid-point between these distances.
Figure 5: **a.** The normalized input power, \(\phi\) (as defined in Eqn. 7), for optimal antenna configurations as a function of \(T_{s}\), assuming \(\sigma=10\) nm in all cases. The reference value is the absolute optimum antenna configuration for \(T_{s}=5800\) K (\(\lambda_{0}=397.5\) nm, \(\Delta\lambda=29.0\) nm). The dashed line indicates the average \(\phi\) across all identified configurations for a given temperature. **b.** For illustrative purposes we show the optimum antenna absorption profiles, \(A(\lambda;\lambda_{0},\Delta\lambda)\), for \(T_{s}=2300\), 3800, and 5800 K. For clarity the normalized \(A(\lambda)\) have been rescaled by a factor of 40. In each case the requirement for a noise-cancelling antenna (Arp et al., 2020) selects configurations located on the slope of \(I_{s}\).
the photosynthetic light-reaction. Clearly, \(Q_{y}\)-bands in the antenna are not (strongly) selected by the evolutionary pressures of noise cancellation and maximizing power input, but by the redox requirements of the RCs. These in turn may have been dictated by the availability of electron sources (such as water) and electron acceptors such as quinones and Fe/S clusters (Kiang et al., 2007b). Alternatively, recent quantum chemical simulations suggest that 'primitive' tetrapyrrole precursor molecules may have existed on Earth before life evolved. _Phod_, which resembles the metal-coordinating inner ring of Chl, has a Soret-\(Q_{y}\) absorption profile similar to Chl and could have formed abiotically in the reducing conditions of early Earth (de la Concepcion et al., 2022). Therefore, this absorption pattern could have been evolutionarily 'locked-in' long before complex photosynthetic pathways evolved.
In line with Lehmer et al. (2021), applying this model to decreasing \(T_{s}\) reveals optimal antenna configurations at increasing wavelengths, tracking the blue edge of the spectral irradiance as it progressively red-shifts. At around \(T_{s}=3300\) K the optimum antenna reaches the energetic lower-limit of oxygenic photosynthesis (\(\lambda_{0}\sim 700-750\) nm). In this region we find the absorption maxima of BChl \(c\)-\(f\) pigments in the antennae of various green sulfur bacteria (Gregersen et al., 2011; Bryant and Frigaard, 2006; Bryant et al., 2007; Vogl et al., 2012). However, there are also species of oxygenic cyanobacteria that harvest light in the \(700-800\) nm region of the spectrum using Chl \(d\) and \(f\) (Tros et al., 2021; Mascioli et al., 2022). Although it is still a matter of debate, there is compelling evidence that these far-red adapted cyanobacteria still possess the conventional 680 (PSII) and 700 nm (PSI) RCs, meaning that the \(700-800\) nm photons absorbed by the antenna are transferred 'uphill' to the RCs against a thermodynamic barrier. These organisms therefore sacrifice antenna efficiency for the ability to collect far-red photons for which there is reduced competition with other oxygenic photoautotrophs (Mascioli et al., 2020).
At \(T_{s}=2800\) K the optimal antenna overlaps with BChl \(a\) and \(b\) from purple bacteria (\(805-890\) nm). As an example, in Fig. 4**c**, we show the absorption profile of the LH2 antenna complex from _Rhodobacter sphaeroides_ which has two well-separated Gaussian peaks at approximately 800 and 850 nm (Papagiannakis et al., 2002). Both peaks originate from BChl \(a\), with the 850 nm band being red-shifted due to a combination of pigment-protein and pigment-pigment interactions, illustrating the inherent _tunability_ of the optical properties of antenna pigments.
For the lowest effective temperatures, \(T_{s}\lesssim 2600\) K, the optimal antenna configurations are in the region \(\lambda_{0}=1000-1100\) nm. There are examples of NIR-capturing antennae in our biosphere, with extremophiles such as _Blastochloris viridis_ (Namoon et al., 2022) and _Ectothiorhodospira halochloris_ (Steiner and Scheer, 1985) which possess antennae enriched in extremely red-shifted BChl \(a\) and \(b\) which capture light down to 1020 nm (see Fig. 4**c**.). However, it should be noted that they also absorb in the \(800-870\) nm region and still possess the usual purple bacterial 870 nm reaction centre. As with the far-red adapted cyanobacteria the antenna is transferring energy against a thermodynamic gradient. In this case the barrier is a seemingly prohibitive \(8k_{B}T\) which may suggest an extreme trade-off between antenna efficiency and the ability to capture a very under-utilized region of the spectrum, or that energy transfer occurs in a regime in which the steady state approximation does not apply.
Assuming a fixed antenna size/cross-section, we find that the input power delivered by these optimal antennae varies surprisingly little with stellar temperature, \(T_{s}\) (see Fig. 5). This is due, in part, to the fact that our model selects for maximal input power, but mainly because the requirement for 'habitable' surface temperatures requires high surface irradiance. There is some variation in input power but this is no more than a factor of 0.2. This is an extremely small difference when compared to the variation in photon fluxes that support photosynthesis on Earth. Canopy plants can be subject to fluxes as high as 2000 \(\mu\)mol photons \(m^{-2}s^{-1}\) (\(\sim 1.2\times 10^{21}\) photons \(m^{-2}s^{-1}\)) while for shade-adapted species it can be \(\ll 100\)\(\mu\)mol photons \(m^{-2}s^{-1}\). If we include the anoxygenic organisms then there are species of green sulfur bacteria that thrive in fluxes as low as \(10^{-7}\)\(\mu\)mol photons \(m^{-2}s^{-1}\) from deep sea hydrothermal vents (Beatty et al., 2005). This raises a very important feature of photosynthetic antennae that our model (and others) completely neglects: that antenna structure and cross-section are variable.
While the photosynthetic RCs are generally highly-conserved, antenna structure is very diverse (see Fig. 4). Moreover, antennae are flexible, in part due to their modular and hierarchical structures. For example, higher plants (tracheophytes) all possess the same antenna structures, RC-coupled light-harvesting complexes embedded in the internal membrane structures (thylakoids) of chloroplasts. Still, different species evolve in very different light environments by altering leaf size and orientation, chloroplast density, thylakoid size, etc. (Bateman et al., 1998). Flexibility exists within a single species, with plants like ivy (_Hedera helix_) able to acclimate to both high light (\(\sim 1000\)\(\mu\)mol photons \(m^{-2}s^{-1}\)) and low light (\(<100\)\(\mu\)mol photons \(m^{-2}s^{-1}\)), though growth rate is limited at the lower limit and photodamage is a problem at the upper (Oberhuber and Bauer, 1991). The ability of photosynthetic organisms to _acclimate_ to variable light environments is relevant to exo-biology. Recently, Battistuzzi et al. (2023) demonstrated that two species of cyanobacteria (the model organism _Synechocystis_ sp. PCC 6803 and the far-red adaptable _Chlorogloeopsis fritschii_) could acclimate to the spectral fluxes typical of low-mass (\(T_{s}\sim 2800\) K) M-dwarf stars. They did this by up-regulating pigment synthesis, effectively increasing antenna cross-section, allowing them to maintain normal growth rates. While this is acclimation of an organism to low-light stress, and not the evolution of an organism specifically to suit these light-conditions, it does indicate that our model (and previous) may be too pessimistic in predicting the range of light-conditions for photosynthesis.
Another problem with models based purely on average incident spectral flux is that they neglect wider considerations of habitability. We would actually expect a large proportion of the habitable-zone planets orbiting M-dwarf stars to be tidally-locked (Kasting et al., 1993; Barnes, 2017). Of these a significant fraction may therefore lack the tectonic activity needed to maintain a carbon cycle (Cockell et al., 2016) which in turn would preclude the atmospheric stabilization needed to maintain liquid water (McIntyre, S. R. N., 2022). While anoxygenic photoautotrophs don't exploit water as an electron source, they do require it as the universal intra-cellular solvent. Even if liquid water does exist it is not clear how the lack of a diurnal cycle would impact photosynthesis. On Earth, photosynthetic organisms depend on a diurnal circadian clock (Dvornyk, 2016) and there is some evidence that the evolution of oxygenic photosynthesis was influenced by increasing day length on young Earth (Klatt et al., 2021). Conversely, Tang and Vincent (2000), in experiments on cyanobacteria, showed that net photosynthetic productivity decreases significantly in the absence of a period of darkness. The influence of tidal locking may therefore have as significant an impact on photosynthetic viability as incident light quality.
In conclusion, though the question of photosynthetic viability around low-mass stars is extremely complex and multivariate, we have shown that lack of light is most likely not a reason to exclude it. However, around the lowest mass stars photosynthesis may resemble Earth's anoxygenic bacteria rather than complex oxygenic organisms. Nevertheless, an immediate extension of this work will be
a more careful consideration of antenna structure that factors in the inherent adaptability of Earth's photosynthetic light-harvesters.
###### Acknowledgements.
TIH is funded by a Royal Society Dorothy Hodgkin Fellowship, which also funded a summer studentship of GMMC. CDPD is funded by BBSRC (BB/T000023/1).
## Data Availability
The python scripts used to produce the models presented in this paper are available on request to Christopher Duffy ([email protected])
|
2306.14355
|
Current-induced deterministic switching of van der Waals ferromagnet at
room temperature
|
Recent discovery of emergent magnetism in van der Waals magnetic materials
(vdWMM) has broadened the material space for developing spintronic devices for
energy-efficient computation. While there has been appreciable progress in
vdWMM discovery, with strong perpendicular magnetic anisotropy (PMA) and Curie
temperatures exceeding room temperature, a solution for non-volatile,
deterministic switching of vdWMMs at room temperature has been missing,
limiting the prospects of their adoption into commercial spintronic devices.
Here, we report the first demonstration of current-controlled non-volatile,
deterministic magnetization switching in a vdW magnetic material at room
temperature. We have achieved spin-orbit torque (SOT) switching of the PMA vdW
magnet Fe3GaTe2 using a Pt spin-Hall layer up to 320 K, with a threshold
switching current density as low as $J_{sw} = 1.69\times10^6 A/cm^2$ at room
temperature. We have also quantitatively estimated the anti-damping-like SOT
efficiency of our Fe3GaTe2/Pt bilayer system to be $\xi_{DL}$ = 0.093, using
second harmonic Hall voltage measurement technique. These results mark a
crucial step in making vdW magnetic materials a viable choice for the
development of scalable, future spintronic devices.
|
Shivam N. Kajale, Thanh Nguyen, Corson A. Chao, David C. Bono, Artittaya Boonkird, Mingda Li, Deblina Sarkar
|
2023-06-25T22:08:46Z
|
http://arxiv.org/abs/2306.14355v1
|
# Current-induced deterministic switching of van der Waals ferromagnet at room temperature
###### Abstract
Recent discovery of emergent magnetism in van der Waals magnetic materials (vdWMM) has broadened the material space for developing spintronic devices for energy-efficient computation. While there has been appreciable progress in vdWMM discovery, with strong perpendicular magnetic anisotropy (PMA) and Curie temperatures exceeding room temperature, a solution for non-volatile, deterministic switching of vdWMMs at room temperature has been missing, limiting the prospects of their adoption into commercial spintronic devices. Here, we report the first demonstration of current-controlled non-volatile, deterministic magnetization switching in a vdW magnetic material at room temperature. We have achieved spin-orbit torque (SOT) switching of the PMA vdW magnet Fe\({}_{3}\)GaTe\({}_{2}\) using a Pt spin-Hall layer up to 320 K, with a threshold switching current density as low as \(J_{SW}=1.69\times 10^{6}\) A/cm\({}^{2}\) at room temperature. We have also quantitatively estimated the anti-damping-like SOT efficiency of our Fe\({}_{3}\)GaTe\({}_{2}\)/Pt bilayer system to be \(\xi_{DL}=0.093\), using second harmonic Hall voltage measurement technique. These results mark a crucial step in making vdW magnetic materials a viable choice for the development of scalable, future spintronic devices.
## Introduction
Magnetic materials-based spintronic devices[1, 2, 3] hold great promise as energy-efficient, non-volatile memories and building blocks of neuromorphic[4] and probabilistic[5, 6] computing hardware. However, just a few optimal material systems, like CoFeB/MgO[7, 8, 9] in particular, have been identified for scalable device applications over the last two decades. The discovery of emergent magnetism in van der Waals (vdW) magnetic materials[10, 11, 12, 13] has opened a new arena for material exploration for spintronic technologies. vdW magnetic materials provide scalable, perpendicular magnetic anisotropy (PMA) alternatives to CoFeB/MgO while also providing atomically smooth interfaces down to monolayer thicknesses to help maintain device performance. An important step in the development of spintronic devices with vdW materials is the deterministic switching of PMA magnetism using current or voltage drives. While there have been several reports of spin-orbit torque (SOT)-induced control of magnetism in bilayer systems of vdW ferromagnets with heavy metals[14, 15, 16, 17], topological insulators[18], and topological semimetals[19, 20, 21, 22] over the last few years, such control at room temperature has remained elusive.
Here, we report the non-volatile, deterministic magnetization switching at room temperature in a vdW ferromagnet employing Fe\({}_{3}\)GaTe\({}_{2}\)/Pt bilayer devices. Bulk crystals of Fe\({}_{3}\)GaTe\({}_{2}\) (FGaT) were grown using a self-flux method as reported earlier[23], and its magnetic properties were studied in the bulk and in exfoliated sheets. Bilayer devices of multi-layer FGaT and 6 nm Pt were fabricated to demonstrate current-induced magnetization switching up to 320 K in the presence of a 100 Oe in-plane magnetic field with a threshold current density of 1.69 \(\times\) 10\({}^{6}\) A/cm\({}^{2}\). Furthermore, second harmonic Hall voltage measurements were used to estimate the anti-damping-like field component which is responsible for the current-induced switching, and the SOT efficiency of the FGaT/Pt system.
### Growth and characterization of Fe\({}_{3}\)GaTe\({}_{2}\)
We first present our characterization of bulk crystalline vdW FGaT synthesized using a Te self-flux method (see Methods). The FGaT unit cell possesses a hexagonal symmetry with space group P6\({}_{3}\)/mmc (no. 194), like that of isostructural Fe\({}_{3}\)GeTe\({}_{2}\), wherein two adjacent quintuple layer substructures with two inequivalent Fe crystallographic sites form a vdW gap between the tellurium layers (Fig. 1A). Millimeter-sized hexagonal planar crystals were measured with powder X-ray diffraction and showcase prominent (00L) Bragg peaks which were analyzed using a Rietveld refinement (Fig. 1B). To verify the composition of our crystals, we performed energy-dispersive X-ray spectroscopy elemental mapping on a diverse set of bulk crystals and exfoliated flakes which all exhibited the correct atomic ratio (Fig. 1C). Hysteresis loops of the direct current magnetization on bulk FGaT were performed from 3 K to 400 K with a magnetic field applied out-of-plane (OOP) as shown in Fig. 1D. Both the coercive field and the saturation magnetization gradually decrease with increasing temperature up to the transition temperature near 340 K. These are complemented by measurements of the Hall resistivity with magnetic field applied out-of-plane which showcase a prominent anomalous Hall effect (AHE) accompanying the magnetization up to
the same temperature (Fig. 1E) with a room temperature Hall angle of 2.6\({}^{\circ}\). The temperature dependence of magnetization with an applied out-of-plane magnetic field of 1000 Oe under field-cooling shows a departure near the Curie temperature from the zero-field-cooling situation (Fig. 1F). The heat capacity manifests a prominent peak (Fig. 1G) and the temperature-dependent longitudinal resistivity displays a noticeable change of slope (Fig. 1H) both near 340 K. Altogether, these complementary measurements consistently indicate room-temperature magnetic properties and the high ferromagnetic transition temperature of the bulk FGaT crystals.
Next, we have characterized magnetism in exfoliated nanosheets of FGaT using a combination of AHE and polar magneto-optic Kerr effect (MOKE) measurements. Fig. 2A shows the temperature dependent AHE hysteresis loops for OOP field sweep for a 29 nm thick FGaT flake. As depicted in the schematic (Fig. 2A inset), current is applied along the \(x-\)axis and the transverse voltage (\(V_{xy}\)) is measured along \(y-\)axis for the AHE measurements. The hysteresis loops exhibit a rectangular nature right up to room temperature, indicating a strong perpendicular magnetic anisotropy (PMA). Fig. 2B shows the temperature dependence of remanent anomalous Hall resistance. The \(R_{xy}^{AHE}\) plotted here is the difference in transverse resistance of the device, recorded while warming up the device at zero magnetic field, after cooling under +3 T and -3 T out-of-plane field, respectively. The sharp drop in \(R_{xy}^{AHE}\) above 320 K (inset of Fig. 2B) marks the transition from ferromagnetic to paramagnetic state with a Curie temperature \(T_{C}\approx 328\) K, slightly lower than the bulk value. The material exhibits an anisotropy field of \(H_{k}=38\) kOe at 300 K as evident from the \(R_{xy}\) plot for near in-plane magnetic field sweeps (\(H\perp c\)) in Fig. 2C. The inset provides \(R_{xy}\) vs \(H\) with finer field steps indicating a room temperature coercivity of 220 Oe for the OOP field sweep. To further confirm room temperature magnetic properties of the FGaT flake, we report polar MOKE measurement of a 48 nm-thick flake in Fig. 2D. The hysteresis plot shows a near 100% remanence confirming the presence of strong PMA, with a coercivity of 235 Oe. The two-step nature of the hysteresis loop can be attributed to presence of multi-domains in the relatively thick flake, as previously reported[24]. We would like to note that the MOKE plots correspond to a flake exposed directly to air for more than a week, indicative of the lack of material degradation (barring surface oxidation which cannot directly be concluded from MOKE) and hence, the air-stability of FGaT nanosheets.
### Deterministic switching of vdW magnet
To demonstrate current induced switching of magnetism in FGaT, we have fabricated bilayer devices of exfoliated FGaT flakes (bottom) and 6 nm sputtered Pt (top). The bilayer stack is patterned into a Hall bar device and anomalous Hall resistance measurements are used to track the magnetization state of FGaT in the switching experiments. Fig. 2E shows the optical image of one such device with 57.9 nm FGaT/ 6 nm Pt patterned into a 5 \(\upmu\)m wide Hall bar. The device is subjected to a current waveform as shown in Fig. 2F, with 1 ms write pulses up to a maximum of 6 mA, at 1 s intervals, with \(R_{xy}\) being recorded after each current pulse. Fig. 2G shows the Hall resistance across this device for the cyclic current sweeps. The measurements are performed at 300 K in the presence of a 100 Oe in-plane magnetic field applied parallel to current direction.
The device transitions from a low resistance to a high resistance state and vice versa, for a current pulse of \(\pm\)5.4 mA, indicating 180\({}^{\circ}\) switching of the OOP magnetization, with appreciable repeatability across four consecutive measurement cycles. This corresponds to a threshold switching current density of 1.69 \(\times\) 10\({}^{6}\) A/cm\({}^{2}\), which compares well with previous reports of ferromagnet/Pt systems (Supplementary Section S3). The small asymmetry in the positive and negative current switching cycles can be attributed to an OOP component of the external magnetic field acting on the device due to slight misalignment between the field direction and the true sample plane. We have observed similar current-induced switching behavior in two more of our FGaT/Pt devices as documented in section S2 of Supplementary Information.
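As a consistency check, the quoted threshold density follows from numbers already stated above, assuming (our reading, not an explicit statement in the text) that it is referenced to the full FGaT/Pt cross-section of this device; a minimal sketch:

```python
# Threshold switching current density from the device geometry quoted above,
# assuming the density is defined over the full FGaT/Pt cross-section.
I_sw   = 5.4e-3      # threshold write pulse, A
width  = 5e-6        # Hall-bar width, m
t_FGaT = 57.9e-9     # FGaT thickness, m
t_Pt   = 6e-9        # Pt thickness, m

J_sw = I_sw / (width * (t_FGaT + t_Pt))        # A/m^2
print(f"J_sw = {J_sw * 1e-4:.2e} A/cm^2")      # ~1.69e6 A/cm^2, as quoted
```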
As depicted in Fig. 3A, a charge current \(I\) injected in the plane of the device results in the creation of a transverse spin-current in the Pt layer due to spin-Hall effect. The vertical component of the spin current results in an in-plane oriented spin accumulation at the FGaT/Pt interface. The in-plane damping-like torque applied by the moment of these spins drives the magnetization in-plane while the current pulse is active. The torque from the accompanying in-plane field, parallel to current direction, favors relaxation of the magnetization to one of the two OOP directions (opposite for positive and negative current) after the current pulse is removed[25], resulting in deterministic, non-volatile switching of magnetization in the PMA material. Fig. 3B shows the robust control of magnetization state in the device at room temperature using a train of random current pulses, 1 ms wide and \(\pm\)6 mA in magnitude. We further studied the dependence of magnetization switching curves for varying in-plane field and increasing temperature. In Fig. 3C, we observe that increasing the magnitude of applied in-plane magnetic field results in vertical shrinking of the hysteresis curves. Increasing in-plane magnetic field drives the steady state magnetization of FGaT further away from the \(c\)-axis, decreasing the OOP magnetization component. Thus, we can expect a reduction in the anomalous Hall resistance of the device (\(\propto M_{Z}\)) on increasing the in-plane magnetic field, in agreement with our observations (Fig. 3E). Similarly, we have found that the AHE splitting of the hysteresis curves decreases on increasing the temperature above 300 K (Fig. 3D). We continue to see a sharp transition in resistance up to 320 K, but at 330 K, there is no clear change in \(R_{xy}\) across the current cycle. Once again, this agrees well with expectations that the magnitude of magnetization decreases with increasing temperature and goes to zero beyond the Curie temperature (\(\approx 328K\)). In fact, the trend of \(R_{xy}^{AHE}\) observed for these current pulse sweeps (Fig. 3F) matches well with the \(R_{xy}^{AHE}\) vs \(T\) observed in our FGaT-only devices (Fig. 2A).
### Estimation of Spin-Orbit Torque efficiency.
To quantify the spin-orbit torque in our FGaT/Pt bilayer system, we have performed second harmonic Hall (SHH) voltage measurements[26, 27]. As depicted in Fig. 4A, the SHH voltage (\(V_{xy}^{2\omega}\)) is measured along the \(y\) - axis for an ac current excitation along the \(x\) - axis, in presence of an externally applied in-plane magnetic field, \(H_{ext}\), of varying magnitude and azimuthal angle \(\phi\) with respect to the \(x\) - axis. For an arbitrary orientation of magnetization, with polar angle \(\theta_{m}\) and
azimuthal angle \(\phi_{m}\), the transverse resistance contains contributions from the anomalous Hall effect and the planar Hall effect,
\[R_{xy}^{\omega}=R_{xy}^{AHE}\cos\theta_{m}+R_{xy}^{PHE}\sin^{2}\theta_{m}\sin(2 \phi_{m}) \tag{1}\]
where, \(R_{xy}^{AHE}\) and \(R_{xy}^{PHE}\) are the anomalous Hall resistance and planar Hall resistance of the device, respectively. Upon injecting an ac current into the device, the current induced field-like and anti-damping-like torques drive periodic oscillations of magnetization about its equilibrium position, resulting in harmonic oscillation of the transverse resistance \(R_{xy}^{\omega}\). Thus, the recorded transverse voltage \(V_{xy}(t)=I_{ac}(t)R_{xy}(t)\) contains a \(2\omega\) component which can be detected through lock-in measurement. However, the SHH voltage also has contributions from thermal effects which appear in addition to the SOT-induced components and can lead to overestimation of SOT efficiency unless systematically eliminated from \(V_{xy}^{2\omega}\)[28]. Joule heating in the device, proportional to \(I_{ac}^{2}\) (and hence containing \(2\omega\) component), creates a vertical thermal gradient. The voltage measured along the \(y-\) axis, is thus proportional to component of \(H_{ext}\) along \(x-\) axis through the ordinary Nernst effect (ONE)[29] and to \(M_{x}\) through the anomalous Nernst effect (ANE) and spin-Seebeck effect (SSE). As a result, the SHH voltage takes the form[29],
\[V_{xy}^{2\omega}=\left[\frac{I_{ac}R_{xy}^{AHE}}{2}\frac{\Delta H_{DL}}{H_{ext }-H_{k}}+V^{ONE}H_{ext}+V^{SSE}\right]\cos\phi+\left[I_{ac}R_{xy}^{PHE}\frac{ \Delta H_{FL}+H_{Oe}}{H_{ext}}\right]\cos(2\phi)\cos\phi \tag{2}\]
where, \(\Delta H_{DL},\Delta H_{FL}\) and \(H_{Oe}\) are the effective fields corresponding to current induced anti-damping-like (ADL) torque, field-like torque and Oersted field, respectively. \(H_{k}=38\) kOe is the effective anisotropy field, \(V^{ONE}\) is the SHH voltage per unit applied field and \(V^{SSE}\) is the field-independent SSE and ANE contribution.
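To make the fitting procedure concrete, a minimal sketch of the angular fit of Eq. (2) is given below. The data are synthetic stand-ins (the amplitudes and noise level are arbitrary assumptions, not the measured values); only the functional form comes from Eq. (2).

```python
import numpy as np
from scipy.optimize import curve_fit

def v2w_model(phi_deg, A, B):
    # Eq. (2): A*cos(phi) collects the damping-like + thermal (ONE, SSE/ANE) terms,
    # B*cos(2*phi)*cos(phi) collects the field-like + Oersted terms.
    phi = np.deg2rad(phi_deg)
    return A * np.cos(phi) + B * np.cos(2 * phi) * np.cos(phi)

# Synthetic stand-in for one in-plane rotation at fixed H_ext (angles in degrees).
rng = np.random.default_rng(0)
phi_meas = np.arange(0.0, 360.0, 10.0)
v2w_meas = v2w_model(phi_meas, 2.0e-6, 5.0e-7) + rng.normal(0.0, 5.0e-8, phi_meas.size)

(A, B), _ = curve_fit(v2w_model, phi_meas, v2w_meas)
print(f"cos(phi) amplitude: {A:.2e} V, cos(2phi)cos(phi) amplitude: {B:.2e} V")

# Repeating the fit at several H_ext and fitting A(H_ext) linearly at high fields
# isolates the thermal (ONE + SSE/ANE) background; the remainder is the
# anti-damping-like term, which scales as 1/(H_ext - H_k).
```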
The SHH voltage corresponding to ac excitation of amplitude 1 mA and 1.5 mA is plotted in Fig. 4B and 4C, respectively. The solid black lines correspond to least squared error fit of the recorded voltage to Eq. (2). The \(\cos\phi\) components of the voltages, corresponding to combined ADL torque, ONE and SSE/ANE contribution, is plotted against \(H_{ext}\) in Fig. 4D. As can be noted from Eq. (2), the ADL field dwindles upon increasing \(H_{ext}\) far over \(H_{k}\) and we expect to see a linear scaling of \(V_{xy}^{2\omega}\) at high fields due to the dominant ONE and SSE/ANE. Thus, a linear fit to \(V_{xy}^{2\omega}\) at high fields (dotted lines in Fig. 4D) can be used to eliminate thermal contributions from the SHH voltage. Fig. 4E shows the SHH voltage corresponding solely to ADL torque, obtained upon subtraction of thermal contributions from \(V_{xy}^{2\omega}\) in Fig. 4D. We estimate an anti-damping-like field per unit current density of \(\frac{\Delta H_{DL}}{J_{c}}=1.34\times 10^{-10}\) Oe/Am\({}^{-2}\), where \(J_{c}\) is the current density in Pt only, calculated based on the parallel resistor model (Supplementary Information, S1). The anti-damping-like spin torque efficiency can then be calculated as,
\[\xi_{DL}=\left(\frac{2e}{\hbar}\right)M_{s}t_{FM}\,\frac{\mu_{0}\Delta H_{DL}}{J_{c}}. \tag{3}\]
Using the bulk saturation magnetization of our FGaT crystals, \(M_{S}\) = 5.8 emu/g (or 3.95 \(\times\) 10\({}^{4}\) A/m), we obtain \(\xi_{DL}\) = 0.093 for our FGaT/Pt device. This value is in good agreement with that expected for SOT system using Pt as the spin-Hall material with a metallic ferromagnet[30]. We provide a comparison of threshold switching current density, ADL torque per unit current and efficiency of various previous reports of deterministic magnetization switching in vdW magnetic materials in Supplementary Information, S3. Our FGaT/Pt device not only works beyond room temperature but also achieves switching at one of the lowest current densities (barring the CGT/Pt system[16], where the FM is an insulator and hence not suitable for magnetic tunnel junctions) providing an energy efficient alternative for spintronic devices. During the preparation of this manuscript, we came across an archived report by Li et al. on a similar system of FGaT/Pt[31]. We would like to note that the threshold switching current density we report is almost an order of magnitude smaller than that of Li et al. Furthermore, while their reported \(\xi_{DL}\) = 0.22 is larger, it may be attributed to oversight of thermal contributions to the SHH voltage even upon operating at current densities larger than ours, as has been argued previously[16] to result in overestimation of SOT effects[15]. By fully accounting for these contributions, we assume that our SOT efficiency value, although smaller, might be a more representative estimate for the FGaT/Pt bilayer system.
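As a numerical cross-check of Eq. (3), the sketch below reproduces the quoted efficiency from values stated in the text; taking \(t_{FM}\) as the 57.9 nm FGaT thickness of the measured device is our assumption rather than an explicit statement.

```python
import scipy.constants as const

Ms          = 3.95e4            # saturation magnetization, A/m (5.8 emu/g as quoted)
t_FM        = 57.9e-9           # FGaT thickness, m (assumed: the 57.9 nm device)
dHdl_per_Jc = 1.34e-10 * 1e-4   # Oe per A/m^2, converted to T per A/m^2 (mu0*H)

xi_DL = (2 * const.e / const.hbar) * Ms * t_FM * dHdl_per_Jc
print(f"xi_DL = {xi_DL:.3f}")   # ~0.093, matching the value reported above
```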
## Conclusion
We have demonstrated current-induced, deterministic switching of out-of-plane magnetization in a Fe\({}_{3}\)GaTe\({}_{2}\)/Pt bilayer spin-orbit torque system up to 320 K, with a low switching current density of 1.69 \(\times\) 10\({}^{6}\) A/cm\({}^{2}\) at room temperature. Furthermore, using second harmonic Hall voltage measurements, we have quantitatively estimated the effective anti-damping-like field and spin-torque efficiency of the FGaT/Pt system, and found it to be in good agreement with previous reports of Pt-based SOT systems. This work marks an important step in the adoption of vdW magnetic materials for building spintronic devices for non-volatile, energy-efficient memories and computing devices.
## Methods
### Bulk crystal growth and characterization
Single crystals of Fe\({}_{3}\)GaTe\({}_{2}\) were synthesized through a Te self-flux method as described in Ref. [23]. A mixture of Fe powder (Beantown Chemical, 99.9%), Ga ingot (Alfa Aesar, 99.999999%), and Te pieces (Sigma Aldrich, 99.999%) were weighed in a molar ratio of 1:2:2 in a nitrogen-filled glovebox (with H\({}_{2}\)O and O\({}_{2}\) levels less than 0.1 ppm) and placed in an alumina Canfield crucible. The mixture-filled crucible was flame-sealed in an evacuated quartz tube with quartz wool and subsequently heated to 1000\({}^{\circ}\)C from room temperature within an hour. The mixture dwelled for 24 hours and subsequently cooled to 880\({}^{\circ}\)C within an hour followed by a slow cooling to 780\({}^{\circ}\)C at a rate of 1\({}^{\circ}\)C/h. Centrifugation was performed to remove the excess flux and afterwards the products were heat-treated to alleviate the concentration of tellurium defects. The resulting
products have a silver luster and contain, among other phases, millimeter-sized Fe\({}_{3}\)GaTe\({}_{2}\) crystals.
Powder X-ray diffraction (PXRD) data were measured on bulk samples using an X'Pert Pro diffractometer (PANalytical) in Bragg-Brentano geometry operating with a curved Ge(111) monochromator and Cu K\({}_{\alpha 1}\) radiation with a wavelength of 0.154 nm. Scanning electron microscope (SEM) images of both bulk crystals and exfoliated flakes on SiO\({}_{2}\) substrates were acquired using a Zeiss Merlin high-resolution SEM system at an acceleration voltage of 20 kV, a current of 1000 pA, and a working distance of 8.5 mm. SEM-energy dispersive X-ray spectroscopy (EDS) elemental maps were taken using the EDAX APEX software.
Electrical transport measurements were performed on the bulk crystals in a Physical Property Measurement System (PPMS Dynacool, Quantum Design) in a five-probe geometry with contacts made of silver epoxy H20E and platinum wires. Direct current magnetization of the sample was acquired using the Vibrating Sample Magnetometer (VSM) option on crystals placed in the proper orientation using Kapton tape on a quartz holder (OOP) or using a Brass holder (IP). Temperature-dependent field-cooled and zero-field-cooled magnetization measurements were performed with an applied magnetic field of 1000 Oe. Specific heat capacity was measured on a 4.4 mg crystal using the Heat Capacity (HC) option with Apiezon H vacuum grease applied on the platform.
### Device Fabrication
Fe\({}_{3}\)GaTe\({}_{2}\) flakes were exfoliated on Si/SiO\({}_{2}\) dies in a nitrogen-filled glovebox. For FGaT/Pt devices, the dies after exfoliation were sealed in the glovebox and only briefly exposed to air while transferring to the load-lock chamber of the sputterer. An RF plasma cleaning step was performed, under 4 mTorr Ar pressure and 40 W for 15 seconds, to remove any nascent oxide from the FGaT surface. Subsequently, 6 nm Pt was deposited at a rate of 1.8 Å/s. 6-terminal Hall bars were patterned into the FGaT/Pt stack through e-beam lithography using negative resist m-N2403, followed by Ar ion-milling (300 keV, 300 s, 70\({}^{\circ}\) inclination). Finally, lift-off through photolithography was used to pattern contact traces and pads for the etched Hall bars. Ti/Au (5 nm/70 nm) was e-beam deposited to create these metallic contacts. FGaT-only devices for magneto-transport characterization were fabricated using e-beam lithography with PMMA resist and Ti/Au (5 nm/35 nm) contacts, ensuring under a minute of direct exposure of FGaT flakes to ambient air. These devices were additionally encapsulated with thick hBN flakes in a glovebox using PDMS-based dry transfer.
### Transport Measurements
All transport measurements were performed in a 9 T PPMS DynaCool system. Anomalous Hall effect measurements on the FGaT-only devices were performed using the Electrical Transport Option of the PPMS Dynacool, with a drive current of 100 \(\mu\)A. Current-induced switching
experiments were performed by interfacing a Keithley 6221 current source and 2182A nanovoltmeter with the PPMS Dynacool. The current input sequence consisted of a 1 ms write-pulse followed by 999 ms of read pulses (\(\pm\)200 \(\mu\)A). For second harmonic Hall measurements, ac current was supplied by the Keithley 6221 at a frequency of 1711.97 Hz. Two lock-in amplifiers (Stanford Research Systems SR860) were used to simultaneously measure the \(f\) and \(2f\) transverse voltage components.
Fig.1: (A) Crystal structure of Fe\({}_{3}\)GaTe\({}_{2}\) under different orientations: general view, _bc_-plane and _ac_-plane. (B) Powder x-ray diffraction (PXRD) pattern of the bulk crystal with indexed prominent Bragg peaks. (C) Scanning electron microscope image of an exfoliated flake on SiO\({}_{2}\)/Si substrate (i) with corresponding elemental Fe (ii), Ga (iii) and Te (iv) maps and energy-dispersive spectra (v). The scale bar on the SEM image is 20 \(\upmu\)m. (D) Spontaneous magnetization dependence with magnetic field applied out-of-plane at different temperatures. (E) Hall resistivity of the bulk sample measured at different temperatures with the magnetic field applied in the out-of-plane direction. Inset shows the measured anomalous Hall angle as a function of temperature. (F) Spontaneous magnetization under field-cooled (FC) and zero-field-cooled (ZFC) conditions with respect to temperature with the magnetic field (1000 Oe) applied out-of-plane. (G) Specific heat capacity measurement with zero and a 10 kOe magnetic field applied in the out-of-plane direction. Inset shows a fine scan at zero field near the magnetic transition. (H) Temperature dependence of the bulk longitudinal resistivity. Inset shows the derivative of the longitudinal resistivity with respect to temperature. A broad peak indicates the presence of a magnetic transition.
Fig.2: (A) Temperature dependent hysteresis plots of the device for out-of-plane field sweeps (\(H\parallel c\)). Data offset along y-axis for clarity. Inset: Schematic of measurement geometry. (B) Variation of anomalous Hall resistance of a (29 nm) FGaT flake against temperature, indicating a Curie temperature of \(\approx 328\) K. Insets – optical image of the FGaT Hall bar on bottom left, zoomed in view of \(R_{xy}-T\) close to the Curie temperature. (C) Comparison of room temperature Hall resistance curves of the FGaT device for field swept OOP (\(H\parallel c\)) and in-plane (\(H\perp c\)), with anisotropy field of \(H_{k}=38\) kOe denoted by vertical dashed lines. Inset: Low field zoom-in of the plot. (D) Room temperature polar MOKE curve of a (48 nm) FGaT flake, indicative of strong PMA and a coercivity \(H_{c}=235\) Oe. (E) Optical image of the FGaT (57.9 nm)/Pt (6 nm) device (left) and its AFM micrograph (right). (F) Current sequence applied to the FGaT/Pt device for magnetization switching experiments, with the inset clarifying the nature of write pulses (\(t_{write}\)) and read pulses (\(t_{read}\)). (G) Cyclic magnetization switching curves observed for the FGaT/Pt devices over four consecutive current pulsing loops, at 300 K under in-plane bias field of 100 Oe. Scale bars: 5 \(\mu\)m.
Fig.3: (A) Schematic diagram showing generation of spin-current in Pt layer in response to a planar current injection (I) due to spin-Hall effect. Vertical component of the in-plane polarized spin-current is incident at the Pt/FGaT interface, resulting in spin accumulation and spin-orbit torque on FGaT magnetization \(M\). (B) Deterministic, non-volatile switching of FGaT magnetization by a train of current pulses, 1 ms wide and \(\pm\)6 mA in magnitude. Modulation of current-induced \(R_{xy}\) hysteresis curves upon variation of (C) externally applied in-plane magnetic field magnitude and (D) temperature of the system. Data offset along y-axis for clarity. (E, F) Variation of anomalous Hall resistance against in-plane field (E) and system temperature (F).
## References
* [1] Hirohata, A., Yamada, K., Nakatani, Y., Prejbeanu, L., Dieny, B., Pirro, P. & Hillebrands, B. Review on spintronics: Principles and device applications. _J. Magn. Magn. Mater._**509**, (2020).
* [2] Fong, X., Kim, Y., Yogendra, K., Fan, D., Sengupta, A., Raghunathan, A. & Roy, K. Spin-Transfer Torque Devices for Logic and Memory: Prospects and Perspectives. _IEEE Trans. Comput. Des. Integr. Circuits Syst._**35,** 1-22 (2016).
* [3] Ohno, H., Stiles, M. D. & Dieny, B. Spintronics. _Proc. IEEE_**104,** 1782-1786 (2016).
* [4] Grollier, J., Querlioz, D., Camsari, K. Y., Everschor-Sitte, K., Fukami, S. & Stiles, M. D. Neuromorphic spintronics. _Nat. Electron._**3,** 360-370 (2020).
* [5] Kaiser, J. & Datta, S. Probabilistic computing with p-bits. _Appl. Phys. Lett._**119**, (2021).
Figure 4: (A) Schematic illustration of the second harmonic Hall (SHH) voltage measurement. External field \(H_{ext}\) is applied in the sample (\(xy\) -) plane at an angle \(\phi\) from \(x\) - axis. Current is applied along \(x\) - axis and SHH voltage (\(V_{xy}^{2\omega}\)) is measured along the \(y\) - axis. (B, C) \(V_{xy}^{2\omega}\) measured for in-plane magnetic field rotation for (B) \(I_{ac}=1\) mA and (C) \(I_{ac}=1.5\) mA. Solid black lines are fits to equation (2). Data offset in y-axis for clarity. (D) Hollow squares represent the amplitude of \(\cos\phi\) components of \(V_{xy}^{2\omega}\) in equation (2). Dotted lines are fits for the linear, thermal contribution to \(V_{xy}^{2\omega}\) from ordinary Nernst effect and spin Seebeck effect. (E) Anti-damping-like field contribution to \(V_{xy}^{2\omega}\) (solid squares) and their theoretical fit, with \(H_{k}=38\) kOe. Inset: \(\Delta H_{DL}\) extracted for the two current levels, and their fitting line, with near zero y-intercept. Measurements taken at 300 K.
* [6] Camsari, K. Y., Sutton, B. M. & Datta, S. P-bits for probabilistic spin logic. _Appl. Phys. Rev._**6**, (2019).
* [7] Safranski, C., Hu, G., Sun, J. Z., Hashemi, P., Brown, S. L., Buzi, L., D'Emic, C. P., Edwards, E. R. J., Galligan, E., Gottwald, M. G., Gunawan, O., Karimeddiny, S., Jung, H., Kim, J., Latzko, K., Trouilloud, P. L. & Worledge, D. C. Reliable Sub-Nanosecond Switching in Magnetic Tunnel Junctions for MRAM Applications. _IEEE Trans. Electron Devices_**69**, 7180-7183 (2022).
* [8] Safranski, C., Kaiser, J., Trouilloud, P., Hashemi, P., Hu, G. & Sun, J. Z. Demonstration of Nanosecond Operation in Stochastic Magnetic Tunnel Junctions. _Nano Lett._**21**, 2040-2045 (2021).
* [9] Gajek, M., Nowak, J. J., Sun, J. Z., Trouilloud, P. L., O'Sullivan, E. J., Abraham, D. W., Gaidis, M. C., Hu, G., Brown, S., Zhu, Y., Robertazzi, R. P., Gallagher, W. J. & Worledge, D. C. Spin torque switching of 20 nm magnetic tunnel junctions with perpendicular anisotropy. _Appl. Phys. Lett._**100**, (2012).
* [10] Gong, C., Li, L., Li, Z., Ji, H., Stern, A., Xia, Y., Cao, T., Bao, W., Wang, C., Wang, Y., Qiu, Z. Q., Cava, R. J., Louie, S. G., Xia, J. & Zhang, X. Discovery of intrinsic ferromagnetism in two-dimensional van der Waals crystals. _Nature_**546**, 265-269 (2017).
* [11] Huang, B., Clark, G., Navarro-Moratalla, E., Klein, D. R., Cheng, R., Seyler, K. L., Zhong, Di., Schmidgall, E., McGuire, M. A., Cobden, D. H., Yao, W., Xiao, D., Jarillo-Herrero, P. & Xu, X. Layer-dependent ferromagnetism in a van der Waals crystal down to the monolayer limit. _Nature_**546**, 270-273 (2017).
* [12] Fei, Z., Huang, B., Malinowski, P., Wang, W., Song, T., Sanchez, J., Yao, W., Xiao, D., Zhu, X., May, A. F., Wu, W., Cobden, D. H., Chu, J.-H. & Xu, X. Two-dimensional itinerant ferromagnetism in atomically thin Fe3GeTe2. _Nat. Mater._**17**, 778-782 (2018).
* [13] Samarth, N. Magnetism in flatland. _Nature_**546**, 216-217 (2017).
* [14] Alghamdi, M., Lohmann, M., Li, J., Jothi, P. R., Shao, Q., Aldosary, M., Su, T., Fokwa, B. P. T. & Shi, J. Highly Efficient Spin-Orbit Torque and Switching of Layered Ferromagnet Fe3GeTe2. _Nano Lett._**19**, 4400-4405 (2019).
* [15] Wang, X., Tang, J., Xia, X., He, C., Zhang, J., Liu, Y., Wan, C., Fang, C., Guo, C., Yang, W., Guang, Y., Zhang, X., Xu, H., Wei, J., Liao, M., Lu, X., Feng, J., Li, X., Peng, Y., Wei, H., Yang, R., Shi, D., Zhang, X., Han, Z., Zhang, Z., Zhang, G., Yu, G. & Han, X. Current-driven magnetization switching in a van der Waals ferromagnet Fe3GeTe2. _Sci. Adv._**5**, 2-8 (2019).
* [16] Gupta, V., Cham, T. M., Stiehl, G. M., Bose, A., Mittelstaedt, J. A., Kang, K., Jiang, S., Mak, K. F., Shan, J., Buhrman, R. A. & Ralph, D. C. Manipulation of the van der Waals Magnet Cr2GeTe6 by Spin-Orbit Torques. _Nano Lett._**20**, 7482-7488 (2020).
* [17] Ostwal, V., Shen, T. & Appenzeller, J. Efficient Spin-Orbit Torque Switching of the Semiconducting Van Der Waals Ferromagnet Cr2Ge2Te6. _Adv. Mater._**32**, 1-7 (2020).
* [18] Fujimura, R., Yoshimi, R., Mogi, M., Tsukazaki, A., Kawamura, M., Takahashi, K. S., Kawasaki, M. & Tokura, Y. Current-induced magnetization switching at charge-transferred interface between topological insulator (Bi,Sb)2Te3 and van der Waals ferromagnet Fe3GeTe2. _Appl. Phys. Lett._**119**, 1-4 (2021).
* [19] Shin, I., Cho, W. J., An, E. S., Park, S., Jeong, H. W., Jang, S., Baek, W. J., Park, S. Y., Yang, D. H., Seo, J. H., Kim, G. Y., Ali, M. N., Choi, S. Y., Lee, H. W., Kim, J. S., Kim, S. D. & Lee, G.
H. Spin-Orbit Torque Switching in an All-Van der Waals Heterostructure. _Adv. Mater._**34**, 1-7 (2022).
* [20] Kao, I. H., Muzzio, R., Zhang, H., Zhu, M., Gobbo, J., Yuan, S., Weber, D., Rao, R., Li, J., Edgar, J. H., Goldberger, J. E., Yan, J., Mandrus, D. G., Hwang, J., Cheng, R., Katoch, J. & Singh, S. Deterministic switching of a perpendicularly polarized magnet using unconventional spin-orbit torques in WTe2. _Nat. Mater._**21**, 1029-1034 (2022).
* [21] Wang, L., Xiong, J., Cheng, B., Dai, Y., Wang, F., Pan, C., Cao, T., Liu, X., Wang, P., Chen, M., Yan, S., Liu, Z., Xiao, J., Xu, X., Wang, Z., Shi, Y., Cheong, S. W., Zhang, H., Liang, S. J. & Miao, F. Cascadable in-memory computing based on symmetric writing and readout. _Sci. Adv._**8**, 1-11 (2022).
* [22] Ou, Y., Yanez, W., Xiao, R., Stanley, M., Ghosh, S., Zheng, B., Jiang, W., Huang, Y. S., Pillsbury, T., Richardella, A., Liu, C., Low, T., Crespi, V. H., Mkhoyan, K. A. & Samarth, N. ZrTe2/CrTe2: an epitaxial van der Waals platform for spintronics. _Nat. Commun._**13**, 1-9 (2022).
* [23] Zhang, G., Guo, F., Wu, H., Wen, X., Yang, L., Jin, W., Zhang, W. & Chang, H. Above-room-temperature strong intrinsic ferromagnetism in 2D van der Waals Fe3GaTe2 with large perpendicular magnetic anisotropy. _Nat. Commun._**13**, 1-8 (2022).
* [24] Tan, C., Lee, J., Jung, S. G., Park, T., Albarakati, S., Partridge, J., Field, M. R., McCulloch, D. G., Wang, L. & Lee, C. Hard magnetic properties in nanoflake van der Waals Fe3GeTe2. _Nat. Commun._**9**, 1-7 (2018).
* [25] Liu, L., Lee, O. J., Gudmundsen, T. J., Ralph, D. C. & Buhrman, R. A. Current-induced switching of perpendicularly magnetized magnetic layers using spin torque from the spin hall effect. _Phys. Rev. Lett._**109**, 1-5 (2012).
* [26] Garello, K., Miron, I. M., Avci, C. O., Freimuth, F., Mokrousov, Y., Blugel, S., Auffret, S., Boulle, O., Gaudin, G. & Gambardella, P. Symmetry and magnitude of spin-orbit torques in ferromagnetic heterostructures. _Nat. Nanotechnol._**8**, 587-593 (2013).
* [27] _Phys. Rev. B - Condens. Matter Mater. Phys._**89**, 1-15 (2014).
* [28] _Phys. Rev. B - Condens. Matter Mater. Phys._**90**, 1-11 (2014).
* [29] Roschewsky, N., Walker, E. S., Gowtham, P., Muschinske, S., Hellman, F., Bank, S. R. & Salahuddin, S. Spin-orbit torque and Nernst effect in Bi-Sb/Co heterostructures. _Phys. Rev. B_**99**, 1-5 (2019).
* [30] Zhu, L., Ralph, D. C. & Buhrman, R. A. Maximizing spin-orbit torque generated by the spin Hall effect of Pt. _APL Mater._**8**, (2021).
* [31] Li, W., Zhu, W., Zhang, G., Wu, H., Zhu, S., Li, R., Zhang, E., Zhang, X., Deng, Y., Zhang, J., Zhao, L., Chang, H. & Wang, K. Room-temperature van der Waals 2D ferromagnet switching by spin-orbit torques. _arXiv_**2304.10718**, (2023).
Supplementary Information for
Current-induced deterministic switching of van der Waals ferromagnet at room temperature
Shivam N. Kajale1\({}^{\dagger}\), Thanh Nguyen2\({}^{\ddagger}\), Corson A. Chao3, David C. Bono3, Artittaya Boonkird2, Mingda Li2, Deblina Sarkar1\({}^{\ast}\)
\({}^{1}\) MIT Media Lab, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
\({}^{2}\) Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
\({}^{3}\) Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
+ These authors contributed equally.
\({}^{\ast}\) [email protected]
### S1. Parallel resistor model for current distribution in the FGaT/Pt bilayer

For a FGaT/Pt bilayer Hall bar of width \(w\) and length \(l\), equating the voltage drop across both materials (parallel current channels), we have,
\[I_{Pt}R_{Pt}=I_{FGaT}R_{FGaT}\]
\[\therefore\frac{I_{Pt}}{I_{FGaT}}=\frac{R_{FGaT}}{R_{Pt}}=\frac{\frac{\rho_{FGaT}l }{wt_{FGaT}}}{\frac{\rho_{Pt}l}{wt_{Pt}}}=\frac{\rho_{FGaT}t_{Pt}}{\rho_{Pt}t _{FGaT}}\]
where, \(\rho_{Pt},\rho_{FGaT},t_{Pt}\) and \(t_{FGaT}\) are resistivities of Pt, FGaT and thicknesses of Pt and FGaT flake in the device, respectively.
\[\therefore\frac{I_{Pt}}{I_{total}}=\frac{\rho_{FGaT}t_{Pt}}{\rho_{Pt}t_{FGaT}+ \rho_{FGaT}t_{Pt}}\]
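A minimal numerical sketch of this current-sharing estimate is given below; the resistivities are purely illustrative assumptions (they are not quoted in this excerpt), while the thicknesses are those of the measured device.

```python
rho_Pt, rho_FGaT = 2.0e-7, 2.0e-6   # ohm*m, assumed values for illustration only
t_Pt, t_FGaT = 6e-9, 57.9e-9        # m, Pt and FGaT thicknesses of the device

# Fraction of the total current carried by the Pt layer (parallel-resistor model).
f_Pt = (rho_FGaT * t_Pt) / (rho_Pt * t_FGaT + rho_FGaT * t_Pt)
print(f"I_Pt / I_total = {f_Pt:.2f}")

# The Pt current density used in the main text then follows as
# J_c = f_Pt * I_total / (w * t_Pt) for a Hall bar of width w.
```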
Figure S2: A. Current-induced switching in device D1, for consecutive current pulsing loops. Inset: Optical image of the device; scale bar – 5 \(\mu\)m. B. Deterministic, non-volatile switching in response to an arbitrary train of pulses. Measurements performed at 300 K under 2 kOe field parallel to current input.
|
2307.00009
|
Comparison of Machine Learning Methods for Assigning Software Issues to
Team Members
|
Software issues contain units of work to fix, improve, or create new threads
during the development and facilitate communication among the team members.
Assigning an issue to the most relevant team member and determining a category
of an issue is a tedious and challenging task. Wrong classifications cause
delays and rework in the project and trouble among the team members. This paper
proposes a set of carefully curated linguistic features for shallow machine
learning methods and compares the performance of shallow and ensemble methods
with deep language models. Unlike the state-of-the-art, we assign issues to
four roles (designer, developer, tester, and leader) rather than to specific
individuals or teams to contribute to the generality of our solution. We also
consider the level of experience of the developers to reflect the industrial
practices in our solution formulation. We collect and annotate five industrial
data sets from one of the top three global television producers to evaluate our
proposal and compare it with deep language models. Our data sets contain 5324
issues in total. We show that an ensemble classifier of shallow techniques
achieves 0.92 for issue assignment in accuracy which is statistically
comparable to the state-of-the-art deep language models. The contributions
include the public sharing of five annotated industrial issue data sets, the
development of a clear and comprehensive feature set, the introduction of a
novel label set, and the validation of the efficacy of an ensemble classifier
of shallow machine learning techniques.
|
Büşra Tabak, Fatma Başak Aydemir
|
2023-06-18T20:06:58Z
|
http://arxiv.org/abs/2307.00009v2
|
# Comparison of Machine Learning Methods for Assigning Software Issues to Team Members
###### Abstract
Software issues contain units of work to fix, improve, or create new threads during the development and facilitate communication among the team members. Assigning an issue to the most relevant team member and determining a category of an issue is a tedious and challenging task. Wrong classifications cause delays and rework in the project and trouble among the team members. This paper proposes a set of carefully curated linguistic features for shallow machine learning methods and compares the performance of shallow and ensemble methods with deep language models. Unlike the state-of-the-art, we assign issues to four roles (designer, developer, tester, and leader) rather than to specific individuals or teams to contribute to the generality of our solution. We also consider the level of experience of the developers to reflect the industrial practices in our solution formulation. We collect and annotate five industrial data sets from one of the top three global television producers to evaluate our proposal and compare it with deep language models. Our data sets contain 5324 issues in total. We show that an ensemble classifier of shallow techniques achieves 0.92 for issue assignment in accuracy which is statistically comparable to the state-of-the-art deep language models. The contributions include the public sharing of five annotated industrial issue data sets, the development of a clear and comprehensive feature set, the introduction of a novel label set, and the validation of the efficacy of an ensemble classifier of shallow machine learning techniques.
Keywords: issue assignment, software management, natural language processing, machine learning, IT management
## 1 Introduction
Software project development refers to the process of creating a software product from start to finish, including planning, designing, coding, testing, and maintenance. It involves a team of developers, often with different specializations, working together to produce a working software product. Software project management involves overseeing the development process, ensuring that the project is completed on time, within budget, and to the expected quality standards [7]. This includes managing resources, schedules, and budgets, as well as communicating with stakeholders and ensuring that the project meets its objectives. Effective project management is necessary for successful software development.
One of the primary responsibilities of a project manager is to identify and address software issues as they arise throughout the development process [49]. These issues can include technical challenges, quality assurance problems, or unexpected delays. The project manager must work with the development team to find solutions to these issues, prioritize tasks, and make adjustments to the project plan as needed. By effectively managing software issues, the project manager can help ensure that the development process stays on track, that the software product is delivered on time and to the expected quality standards, and that the project stays within budget.
Issue Tracking Systems (ITS) are designed to help development teams track and manage software issues throughout the development process. These systems allow developers to identify, report, and prioritize software issues and assign them to team members for resolution [33]. Issue tracking systems often include features such as issue tracking, bug reporting, status tracking, and reporting tools, enabling developers to manage issues effectively and ensure that they are resolved in a timely manner. Issues can be created by users with different roles such as software developers, team leaders, testers, or even customer support teams in these tools. Bertram et al. [7] carry out a qualitative study of ITS as used by small software development teams. They demonstrate that ITS plays a key critical role in the communication and collaboration within software development teams based on their interviews with various stakeholders.
Text classification, the task of assigning a label to a given text, is an important problem [45]. It has become a widely used tool in many fields owing to the abundance and diversity of data known as big data. The main focus of this paper is to address the issue classification problem through an issue assignment approach where we assign the identified issues to appropriate team members or departments for further resolution. To accomplish this, we treat the problem as a text classification challenge. We leverage machine learning algorithms and natural language processing techniques to analyze and classify the text data of the issues. By applying these techniques, we are able to extract relevant information from the issue descriptions, such as the issue severity, context, and other important details. Overall, by tackling the issue classification problem through this approach, we aim to provide a more comprehensive and effective solution for issue management and resolution.
The issue assignment approach enables us to allocate the issues to the most suitable team members or departments. This helps to streamline the resolution process and ensure that the issues are addressed by the right people, thereby improving the overall efficiency and effectiveness of the support system. We decide that assigning issues to groups of employees who can perform the same activities is preferable to assigning them to specific individuals. Some employees in the issue history may not have been able to complete the task that is automatically assigned to them in that planning time due to a variety of factors, including seasonal spikes in workload, illness, or employee turnover [25]. To effectively manage the employees in our data set, we have grouped them based on the fields they work
in. This approach has resulted in the identification of four main teams in the data set, namely the software developer, UI/UX designer, software tester, and team leader. The software developer team represents the majority of the data set, making them a crucial focus of our analysis. To improve time management and issue resolution, it is important to assign the right issues to the right developers. To achieve this, we have categorized the Software Developers using sub-labels that are generally accepted in the industry, such as senior, mid, and junior software developer levels. This categorization helps us identify the experience level and skill set of each developer, allowing us to allocate the most appropriate tasks to each team member. These teams may differ according to the project or the company. For example, new teams such as business analysts, and product owners can be added or some teams can be split or removed. At this point, we expect the side that will use the system to separate the individuals according to the teams. After a newly opened issue is automatically classified among these classes, it can be randomly assigned to the individuals in the relevant team, or the individuals in the team or the team leader can make this assignment manually.
In our study, we use a closed-source data set for our analysis contrary to the majority of studies in the literature. We obtain five projects from the company's Jira interface for analysis. We focus exclusively on the main language of the issues. To prepare the data set for this study, we determine the label values by changing the people assigned to the issue according to the fields they work in, based on information we receive from the company.
ITS often contain a wealth of valuable data related to software issues. In our study, we set out to analyze this data using NLP methods, with the goal of creating a feature set that would be simpler and more successful than the word embedding methods typically used in text classification. To create our feature set, we use a range of NLP techniques to analyze the language used in software issues like part-of-speech tagging and sentiment analysis. We then compare our feature set with commonly used word embedding methods and apply a range of machine learning, ensemble learning, and deep-learning techniques to our annotated data set. This allows us to evaluate the efficiency of our approach using a range of standard metrics, including accuracy, precision, recall, and F1-score.
We have made several significant contributions to the state of the art in issue classification.
* Data set: We provide a closed-source issue data set from the industry in both Turkish and English. This data set is publicly available for further research, and to the best of our knowledge, there is no shared commercial issue data set for both languages in the literature.
* Feature set: We develop an understandable feature set that is extracted from the information in the issues, which can be applied to all issue classification procedures with high accuracy and low complexity.
* Label set: We introduce novel labels for issue assignment. By incorporating these new labels, we expand the boundaries of current research and offer
unique insights into the underlying themes, contributing to a more comprehensive understanding of the domain.
The remainder of this paper is structured as follows: Section 2 describes the background of this study including the structure of software issues in issue tracking systems. In Section 3, we present our experimental setup and approach, followed by our results and analysis in Section 4. In Section 5, we discuss the threats to validity and user evaluation. In Section 6, we discuss related work and similar classification endeavors with Turkish issue reports. Section 7 concludes our work and discusses future work.
## 2 Background
To understand our approach, it is essential to have a solid understanding of the structure of software issues in Issue Tracking Systems (ITS).
### Software Issues in ITS
This section explores software issues within ITS, covering their anatomy, life cycle, and significance in software projects.
#### 2.1.1 Issue Tracking Systems (ITS)
An issue tracking system (ITS) is a software application that manages and maintains lists of issues. It provides a database of issue reports for a software project. Members of the software development team stay aligned and collaborate faster with the help of these tools. Users can log in with a password and post a new issue report or comment on an already-posted issue. There are various issue-tracking tools that are used in software development projects, such as JIRA, Bugzilla, and Azure DevOps. Although there are some differences among these tools, they all share a similar basic structure. We developed our approach with a data set of a company that uses JIRA software.
#### 2.1.2 Anatomy of an Issue
The issue report in ITS contains various fields, each of which contains a different piece of information as shown in Figure 1. The following details [48] could typically be included in the generic process of reporting an issue:
* Summary: Title of the issue.
* Description: Issue details including its cause, location, and timing.
* Version: Version of the project to which the issue belongs.
* Component: The predetermined component on which the issue depends.
* Attachment: Attaches such as pictures, videos, and documents uploaded to the issue.
* Priority: Urgency level of the issue.
* Severity: Frequent occurrence and systemic effects.
* Status: Current status of the issue.
* Reporter: Person who creates the issue.
* Assignee: Person to whom the issue is assigned.
* Revision History: Historical changes of the issue.
* Estimated time: Estimated time spent to develop or solve the issue.
* Comments: Additional details that can be used to understand the issue.
Figure 1: An example issue from the Jira interface of the company.

The summary, description, and comments all contain textual details about the issue. In our methodology, we extract numerical features using the language structure of the summary and description. Version is the current version of the project, which differs with each release. The issue can have a predefined tag or component added to it (e.g. project_v1.1.0). Users can upload files, referred to as attachments, to help others understand the issue. Priority, severity, and status are the categorical features of the issue. Priority is the urgency level, severity is the frequency of occurrence, and status is the current status of the issue such as committed or done. People play different roles as they interact with reports in ITS. The person who submits the report is the reporter, and the assignee is the person to whom the issue is assigned. If the issue has been reported previously, historical changes are shown in the revision history. The estimated time is the time spent on the development and varies depending on the effort metric. The fields offered by each ITS vary, and not all fields are filled out for each project. Our strategy makes use of the fields that are largely filled in.
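For illustration only, the fields listed above can be gathered into a single record type; this is a hypothetical sketch of such a record, not Jira's API schema nor the exact export columns.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IssueReport:
    # Textual fields (later concatenated into the "issue text")
    summary: str
    description: str
    comments: List[str] = field(default_factory=list)
    # Categorical / workflow fields
    version: Optional[str] = None
    component: Optional[str] = None
    priority: Optional[str] = None
    severity: Optional[str] = None
    status: str = "Open"
    # People involved
    reporter: Optional[str] = None
    assignee: Optional[str] = None
    # Other bookkeeping
    attachments: List[str] = field(default_factory=list)
    estimated_time_hours: Optional[float] = None
```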
#### 2.1.3 Life Cycle of an Issue
The series of phases and phase changes that an issue experiences throughout its life cycle is referred to as a "workflow." Workflows for issues typically represent development cycles and business processes. Figure 2 shows a standard workflow of JIRA. The following stages of the JIRA workflow must be monitored as soon as an issue is created:
* Open: The issue is open after creation and can be assigned to the assignee to begin working on it.
* In Progress: The assignee has taken the initiative to begin working on the issue.
* Resolved: The issue's sub-tasks and works have all been finished. The reporter is currently waiting to confirm the matter. If verification is successful, the case will be closed or reopened depending on whether any additional changes are needed.
* Reopened: Although this issue has already been solved, the solution is either flawed, omitted some important details, or needs some modifications. Issues are classified as assigned or resolved at the Reopened stage.
* Closed: The issue is now regarded as resolved, and the current solution is accurate. Issues that have been closed can be reopened at a later time as needed.
Figure 2: Jira workflow.
## 3 Approach
This section outlines the methodology employed in this study to address the research objectives. It is divided into several sections, each focusing on a crucial step in the process. The section begins with an overview of the data collection, which details the sources and methods used to gather the necessary data for analysis. The preprocessing describes the steps taken to clean and transform the raw data into a format suitable for further analysis. Next, feature extraction explains the techniques used to extract relevant features from the preprocessed data, capturing essential information for the subsequent classification. Finally, the classification discusses the algorithms and models employed to classify the issues based on their assigned labels.
### Data Collection
Our raw data come from the issues of five industrial projects documented on Jira software of a company that offers solutions in the fields of business-to-business (B2B) display systems for televisions, namely, hotel TV systems and advertising and promotional display systems. The company1 is a home and professional appliance manufacturer with 18 subsidiaries specializing in electronics, large appliances, and information technology and it is Europe's leading television manufacturer, accounting for a quarter of the European market with over eight million units sold in a year.
Footnote 1: [https://www.vestel.com](https://www.vestel.com)
Table 1 summarizes the raw data.

| **ID** | **Name** | **# Issues** | **Timespan** | **Team Size** |
|---|---|---|---|---|
| P1 | Ip Hotel TV | 1287 | 2011-2021 | 35 |
| P2 | Rf Hotel TV | 2004 | 2017-2021 | 15 |
| P3 | Hospital TV | 202 | 2017-2021 | 28 |
| P4 | Vsync | 126 | 2017-2021 | 7 |
| P5 | Html Hotel TV | 1705 | 2018-2021 | 16 |

Table 1: Data sets used in the experiments.

We use issue reports from five web application projects, all of which are two-sided apps with a management panel and a TV-accessible interface. The IP Hotel TV project is a browser-based interactive hospitality application used by hotel guests in their rooms. There is also a management application for managing the content that will be displayed. This is the company's first hospitality project, which began in 2011 and is still in use today by many hotels. The project Hospital TV is a hospital-specific version of the IP Hotel TV application. It is compatible with the Hospital Information Management System (HIMS), which is an integrated information system for managing all aspects of a hospital's operations, including medical, financial, administrative, legal, and compliance. The Rf Hotel TV project is a version of the Ip Hotel TV application that can be used in non-intranet environments. A coax cable is used for communication with the server. The HTML Hotel TV project is a cutting-edge hospitality platform. It will eventually replace the IP Hotel TV project. Instead of using an intranet, this version is a cloud application that works over the Internet. A central system oversees the management of all customers. Customers now have access to new features such as menu design and theme creation. The project Vsync is a signage application that synchronizes the media content played by televisions. Televisions play the media through their own players.
These projects are developed in different programming languages. The project Rf Hotel TV is written in Python using the Django framework, while the project Vsync is written in C# and JavaScript using the Angular framework. The rest of the projects are written in pure JavaScript and C# using Microsoft technologies such as the .NET and .NET Core frameworks.
The number of issues per project ranges from 126 to 2004, and the total number of issues is 5324. The data set contains issue reports submitted between 2011 and 2021, all related to different versions of the applications. The issues are created by users with different roles such as software developers, team leaders, testers, or even customer support teams in the data. Then, they are assigned to workers with different roles and experiences. The number of employees in the projects varies between seven and 35 when the "Creator", "Reporter", and "Assignee" columns in the data set are combined.
In the original data, the issues are assigned to the individual employees. We removed the names of the individuals to preserve their privacy and inserted their roles in the development team as a new column with the possible values "Software Developer", "UI/UX Designer", "Test Engineer", and "Team Leader". For the developers only, we also assigned another column indicating their level of expertise as Junior, Mid, and Senior. We inserted this new information in collaboration with the company. Figure 3(a) depicts the distribution of the assignees over the entire data set. As the first chart shows, team leaders receive the least number of issues and software engineers receive the majority of them. According to Figure 3(b), which shows the distribution of experience among software developers, issues are assigned mostly to junior-level developers and least often to senior-level developers.

Figure 3: Distribution of team roles and developers' experience.
We directly export the data from the company's Jira platform in Excel format, including all columns. Table 2 shows a small portion of the massive amount of data available. Although most columns are empty for each row, the exported tables have a total of 229 columns. To create an issue, required fields like "Summary," "Description" and "Assignee" are filled, but fields like "Prospect Effort" and "Customer Approval Target Date" are left blank because they aren't used in the project management.
The issues are originally written in Turkish with some English terms for technical details and jargon. We also translate the issues to English and share our data publicly in our repository2.
Footnote 2: [https://github.com/busrat/automated-software-issues](https://github.com/busrat/automated-software-issues)
### Preprocessing
Preprocessing is the first step in machine learning, and it involves preparing raw data for analysis as shown in Figure 4. The process starts with exporting all the issues from the selected projects in the Jira tracker system to an Excel file. Once exported, the data needs to be cleaned up to eliminate any rows that have fully empty columns. We eliminate the rows that contain empty assignee columns.
| **Project** | **Summary** | **Description** |
|---|---|---|
| P1 | Orders placed on the room | Although the success message is displayed on the screen, the order e-mail does not come. |
| P2 | Server v1.0.9 Test Request | Please test it. |
| P3 | Making a mother-baby | The mother's data should come into the room, since it is the same protocol number as the baby, it should be separated. |
| P4 | Multiple video wall | Multiple video wall setup should be added to the system and it should be synchronized independently. |
| P5 | Version Filter MacId Problem | Problem experienced due to different incoming Mac address in wifi and ethernet connections. |

Table 2: Part of issue from our data set.
Another issue that needs to be addressed during preprocessing is dealing with missing values. In the Jira tracker system, if the numerical data that will be used as a feature, such as reopen count, is not assigned, it appears as NaN (Not a Number) in the data set. To avoid this problem, the missing values are changed to zero in the entire data set.
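A minimal pandas sketch of these cleanup steps on a toy stand-in for the exported sheet; the column names are illustrative placeholders, not the exact Jira export headers.

```python
import pandas as pd

# Toy stand-in for the exported Excel data (illustrative column names).
df = pd.DataFrame({
    "Summary":        ["Order e-mail missing", "Server v1.0.9 test request", "Theme bug"],
    "Description":    ["Success message shown but no e-mail.", "Please test it.", None],
    "Assignee":       ["Software Developer", None, "Test Engineer"],
    "Reopen Count":   [1, None, None],
    "Reassign Count": [None, 2, None],
})

df = df.dropna(subset=["Assignee"])            # drop rows with no assignee label
for col in ["Reopen Count", "Reassign Count"]:
    df[col] = df[col].fillna(0)                # missing numeric fields become zero
```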
We concatenate the two textual parts, summary and description, into new metadata, which we refer to as the issue text. Note that these two fields are available for each issue when an issue report is submitted. We apply a lowercase transformation to ensure consistency in the issue text. This step involves converting all uppercase characters to their corresponding lowercase characters. After the transformation, we tokenize the text into words by splitting it at the spaces between words.
For our feature extraction methodology, we do not perform additional text cleaning steps as every word's feature is essential for our process. We perform Part-of-Speech (POS) tagging after the tokenization step. It involves assigning a POS (such as a noun, verb, adjective, etc.) to each word in a given text. We use Zeyrek library [10] for issue texts in Turkish because it is trained on a large corpus of Turkish text.
For most used word embedding methods, we perform additional text cleaning steps to reduce the dimensionality of the data, remove redundant information, and further improve the accuracy. We eliminate all numeric characters and punctuation marks from issue texts. Stop words are words that do not carry significant meaning in a given context and can be safely ignored without sacrificing the overall meaning of a sentence. Examples of stop-words in English include "the", "and", "is", "are", etc. Similarly, in Turkish, examples of stop-words include "ve", "ile", "ise", "ama", etc. We use NLTK which provides a built-in list of stop-words for many languages, including Turkish to remove them from issue texts. The last step is lemmatization which is a crucial step in NLP that involves reducing words to their base or dictionary form, known as the "lemma". The resulting lemma retains the essential meaning of the original word, making it easier to process and analyze text data.
Figure 4: Preprocessing steps (FE: Feature Extraction, WE: Word Embedding).
### Feature Extraction
This section describes the feature selection steps for the vectors we created and two popular word embedding methods to compare them. The data set obtained from Jira contains over a hundred important fields that we can potentially use as features. However, a significant number of these fields are empty as they are not filled out by the project's team. To avoid this issue, we have narrowed down the selection of features to only those fields that are either non-empty for each issue or automatically populated by the Jira system when an issue is opened.
The columns of the issue tracking system are utilized as the initial feature set in our study, as presented in Table 3. FJN indicates the numerical features from Jira ITS. We consider the data numbers in the columns for these values. Watchers are the users who have expressed interest in a particular issue and want to receive notifications about its progress. They can receive notifications whenever a comment is added or any changes are made to the issue. Typically, multiple individuals subscribe to problematic issues in order to receive notifications upon closure or new comments. Images column is used to attach relevant screenshots, diagrams, or other images to an issue. This helps in better understanding and resolving the issue. When a bug cannot be easily identified or located, it is common practice for test engineers to include an image of the bug as a reference for developers. This serves as a visual aid to help the developers understand the issue and resolve it more effectively. Reopen Count column tracks the number of times an issue has been reopened after being marked as resolved or closed. It provides insight into the recurring nature of the issue and can help identify if the issue is resolved properly or not. This feature serves to distinguish problematic issues that persist even after the developer has addressed them. Reassign Count column keeps track of how many times an issue has been reassigned to different users or teams. It can help in analyzing the workflow and identifying any inefficiencies. There are various reasons why an issue may be assigned to someone other than the initially assigned individual. These reasons include cases where the assigned person is unavailable or unable to resolve the issue. The linked issues column allows users to link related issues together. It helps in identifying dependencies and tracking progress across multiple issues. The sub-tasks column allows users to break down larger issues into smaller sub-tasks. It helps in better managing and tracking complex issues. The components column specifies the different modules or components of the software that are affected by the issue. It helps in identifying the root cause of the issue and assigning it to the appropriate team or individual.
We only consider whether or not there is a value present in the column for columns that are mostly empty across the issues and do not have diversity in the data to separate each other. We call these boolean features FJB. Reported by customer column indicates if a customer or an internal team member reports the issue. It helps in prioritizing and resolving customer-reported issues quickly. The tested versions column indicates the versions of the software in which the issue is tested. It helps in identifying the specific version where the issue is first detected. The test execution type column specifies the type of test execution, such as
Manual or Automated. It helps in tracking the progress and success of the testing phase. The approval type column is used to indicate the type of approval required for the issue, such as Manager Approval or Technical Approval. It helps ensure that the issue is reviewed and approved by the appropriate stakeholders before being resolved. Affects versions column indicates the versions of the software that are affected by the issue. It helps in identifying the scope of the issue and prioritizing it accordingly.
Several features in our feature set are categorical as FJC, and in order to use them in our analysis, we replaced them with numerical values using label encoding. This process assigns a unique numerical value between 0 and the number of classes minus one to each category, allowing us to use them in our computations. The issue type column defines the type of issue being reported, such as Bug, Improvement, Task, etc. It helps in categorizing and prioritizing issues based on their type. The reporter column indicates the user who reported the issue. It can help in contacting the user for additional information or to gather feedback. The priority column indicates the relative importance of an issue. It can be set to High, Medium, Low, or any other custom value based on the severity of the issue and its impact on the project. The frequency column tracks how often the issue occurs. It helps in identifying patterns and trends in the occurrence of the issue. The bug category column allows users to categorize the issue based on its root cause, such as Performance, Security, Usability, etc. It helps in prioritizing and assigning the issue to the appropriate team or individual. The labels column allows users to add descriptive tags to an issue. It helps in categorizing and searching for issues based on common themes or topics.
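A sketch of how the FJN, FJB, and FJC groups can be assembled with pandas and scikit-learn; the column names are shorthand for the Jira fields of Table 3, and the helper is illustrative rather than our exact implementation:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

NUMERIC_COLS = ["Watchers", "Images", "ReopenCount", "ReassignCount",
                "LinkedIssues", "SubTasks", "Components"]                   # FJN
BOOLEAN_COLS = ["ReportedByCustomer", "TestedVersions", "TestExecutionType",
                "ApprovalType", "AffectsVersions"]                          # FJB
CATEGORICAL_COLS = ["IssueType", "Reporter", "Priority", "Frequency",
                    "BugCategory", "Labels"]                                # FJC

def build_jira_features(df):
    feats = pd.DataFrame(index=df.index)
    # FJN: numerical counts, with missing values (NaN) replaced by zero
    for col in NUMERIC_COLS:
        feats[col] = df[col].fillna(0)
    # FJB: only whether the mostly-empty column holds any value at all
    for col in BOOLEAN_COLS:
        feats[col] = df[col].notna().astype(int)
    # FJC: label-encode categories to 0 .. n_classes - 1
    for col in CATEGORICAL_COLS:
        feats[col] = LabelEncoder().fit_transform(df[col].fillna("missing").astype(str))
    return feats
```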
Issue texts are utilized to extract features using NLP techniques, as detailed in Table 4. The FTN column indicates the numerical features extracted from the text fields. The Summary Words and Description Words columns indicate the number of words in the corresponding issue text columns. To analyze the sentiments of the issue texts, the TextBlob library [29] is used for sentiment analysis. Polarity represents the emotional tone of a piece of text, typically characterized as positive, negative, or neutral. The polarity score ranges from minus one (most negative) to one (most positive). Subjectivity, on the other hand, refers to the degree to which a piece of text expresses opinions or personal feelings, as opposed to being factual or objective. The subjectivity score ranges from zero (most objective) to one (most subjective). As described in Section 3.2, each word in the issue text is classified with known lexical categories using POS tagging for the morpheme-related features in both Turkish and English. The number of available tags, such as adjective, adverb, conjunction, verb, numeral, etc., is added as a new feature column for each issue. However, not all tags are added to the table. The most effective POS tags as features are discussed in Section 4.2. The FTB column indicates the boolean features extracted from the text fields. The issue text is searched for Bug Words, such as "error", "null", "bug", "server", and "undefined" to determine if there is a bug or for the developers. Test Words, such as "test" and "request" are searched for issues created for the test engineers. Document Words, such as "document(ation)" and "write" are
searched for the team leaders, and Design Words, such as "design", "icon", and "logo" are searched for the designers. The negative verb is a boolean value that checks for negative verbs in the issue text. It is assumed that bugs would be more likely to have negative verbs in their definitions rather than being by design or a new task opened. The necessity verb is a boolean value that checks for the verb for necessity in the issue text (e.g., "should" verb in English, "-meli/-mal" suffix in Turkish).
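A sketch of the text-derived features, using the TextBlob sentiment scores mentioned above; the dictionary keys follow Table 4, the keyword sets are the ones listed in the text, and the Zeyrek POS-tag counts (FTNP) are omitted here:

```python
from textblob import TextBlob

BUG_WORDS = {"error", "null", "bug", "server", "undefined"}
TEST_WORDS = {"test", "request"}

def text_features(summary, description):
    text = (summary + " " + description).lower()
    tokens = text.split()
    sentiment = TextBlob(text).sentiment
    return {
        "SummaryWords": len(summary.split()),
        "DescriptionWords": len(description.split()),
        "PolarityScore": sentiment.polarity,          # -1 (most negative) .. 1 (most positive)
        "SubjectivityScore": sentiment.subjectivity,  # 0 (most objective) .. 1 (most subjective)
        "BugWords": int(any(word in tokens for word in BUG_WORDS)),
        "TestWords": int(any(word in tokens for word in TEST_WORDS)),
    }
```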
Word Embedding techniques are used to represent text as vectors. To create vectors, we utilize the preprocessed combination of the title and description parts of issues. There are various forms of word embeddings available, with some of the most popular ones being Bag of Words (BoW) [32], Term Frequency-Inverse Document Frequency (TF-IDF) [41], and Word2Vec [34]. We have implemented
\begin{table}
\begin{tabular}{|c|c|l|} \hline
**Feature** & **Name** & **Description** \\ \hline FJN1 & Watchers & The number of users following the issue. \\ FJN2 & Images & The number of the images that have been attached to the issue. \\ FJN3 & ReopenCount & The number of times an issue has been reopened. \\ FJN4 & ReassignCount & The number of times an issue has been reassigned to different users. \\ FJN5 & LinkedIssues & The number of linked related issue keys to the issue. \\ & & (i.e. ProjectName-2037) \\ FJN6 & SubTasks & The number of added sub-issue keys to the issue. (i.e. ProjectName-2037) \\ FJN7 & Components & The number of different components of the software that are affected by the issue (i.e. cloud) \\ FJB1 & ReportedByCustomer & The customer who reports the issue. (i.e X Hotel) \\ FJB2 & TestedVersions & The tested versions of the software in which the issue is tested. (i.e. ProjectName 9.4.x) \\ FJB3 & TestExecutionType & The type of test execution (i.e. manual) \\ FJB4 & ApprovalType & The type of approval required for the issue. (i.e. P1-Pilot) \\ FJB5 & AffectsVersions & The versions of the software that are affected by the issue. (i.e. ProjectName 9.4.x) \\ FJC1 & IssueType & The type of issue. (Story, Epic, Request, Bug, Test Request, Technical task) \\ FJC2 & Reporter & The user who reported the issue. \\ FJC3 & Priority & The relative importance of an issue. (Low, Medium, High, Showstopper) \\ FJC4 & Frequency & The frequency of the issue occurs. (i.e. always-2/2, sometimes-2/4) \\ FJC5 & BugCategory & Category of the issue based on its root cause (i.e. General, Functional) \\ FJC6 & Labels & Added descriptive tags to an issue. (i.e. Admin) \\ \hline \end{tabular}
\end{table}
Table 3: Features from the columns of the issue tracking system.
Tf-Idf and BOW algorithms using the Sklearn library [38] and the Word2Vec algorithm using the Gensim library [40]. We have tested both BoW unigram and bigram models separately and together. The unigram model stores the text as individual tokens, while the bigram model stores the text as pairs of adjacent tokens. Based on our experiments, the BoW unigram model outperformed the bigram model. This is attributed to the unigram model's superior ability to capture essential text features.
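A sketch of the three baselines on a toy corpus; the Word2Vec parameters are illustrative, and representing an issue by the mean of its word vectors is our assumption rather than a detail stated above:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from gensim.models import Word2Vec

corpus = ["sunucu hata veriyor", "test talebi lütfen test edin"]   # cleaned issue texts

X_bow = CountVectorizer(ngram_range=(1, 1)).fit_transform(corpus)  # BoW unigram model
X_tfidf = TfidfVectorizer().fit_transform(corpus)                  # Tf-Idf weights

tokenized = [doc.split() for doc in corpus]
w2v = Word2Vec(sentences=tokenized, vector_size=100, window=5, min_count=1)

def issue_vector(tokens):
    """Mean of the word vectors of the tokens that appear in the vocabulary."""
    vectors = [w2v.wv[tok] for tok in tokens if tok in w2v.wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(w2v.vector_size)
```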
### Classification
We can train a classifier to predict the labels of the issues once we have our features. When working on a supervised machine learning problem with a given data set, we experiment with various algorithms and techniques in order to find models that produce general hypotheses and thus make the most precise predictions possible about future instances. We start by using machine learning techniques, several variants of which are included in Scikit-learn, to automatically assign issue reports to the developers. We try the best-known ML models, i.e., Support Vector Machine (SVM), Decision Tree, Random Forest, Logistic Regression, k-nearest Neighbors (kNN), and Naive Bayes (NB). We use Multinomial and Gaussian NB, which are the most suitable variants for text classification. The multinomial model offers the capability of classifying data that cannot be represented numerically. Its main benefit is that the complexity is significantly decreased. We test the one-vs-rest model with SVM, a heuristic technique for applying binary classification algorithms to multi-class problems. The multi-class data set is divided into several binary classification problems. Scikit-learn offers a high-level component called CountVectorizer that produces feature vectors for us. The work of tokenizing and counting is done by CountVectorizer, while the data is normalized by TfidfTransformer. In order to combine this tool with other machine learning models, we supply the combined title and description fields.
Most machine learning algorithms do not produce optimal results if their parameters are not properly tuned so we use grid search with cross-validation to build a high-accuracy classification model. We use the GridSearchCV tool from Sklearn library [38] to perform hyperparameter tuning in order to determine the optimal values for a given model. In particular, we use a 10-fold cross-validation. We first split the issues data set into 10 subsets. We train the classifier on nine of them and one subset is used as testing data. Several hyper-parameter combinations are entered, then we calculate the accuracy and the one with the best cross-validation accuracy is chosen and used to train a classification method on the entire data set.
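As an example of this tuning step, a GridSearchCV sketch with 10-fold cross-validation over an SVM; the parameter grid and the synthetic data are purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# stand-in data; in practice X holds the issue features and y the assignee labels
X, y = make_classification(n_samples=500, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)

param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=10, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```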
We also try ensemble learning methods [61] which combine the results of multiple machine learning algorithms to produce weak predictive results based on features extracted from a variety of data projections, and then fuse them with various voting mechanisms to achieve better results than any individual algorithm. First, we use the hard-voting classifier which can combine the predictions of each classifier to determine which class has the most votes. Soft voting
based on the probabilities of all the predictions made by different classifiers is also an option. Second, we try a classification method called extra trees, which combines the predictions of multiple decision trees. Finally, we combine machine learning methods with bagging, boosting, and stacking ensemble learning techniques. While boosting and stacking aim to create ensemble models that are less biased than their components, bagging will primarily focus on obtaining an ensemble model with less variance than its components [35].
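The stacking combination that performs best in our experiments (Random Forest and Linear SVC as base learners) can be set up as follows; the logistic-regression meta-learner and the hyper-parameters shown are illustrative choices, not the exact configuration used:

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

stacking = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svc", LinearSVC(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=10,
)
# stacking.fit(X_train, y_train); stacking.score(X_test, y_test)
# where X_train/X_test are the issue features and y_train/y_test the assignee labels
```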
The majority of classification studies using issue data sets do not use, or have limited success with, deep learning-based text mining techniques. Herbold et al. [22] believe that they lack sufficient (validated) data to train a deep neural network and that deep learning should instead be used for this task once the necessary data requirements are satisfied, such as through pre-trained word embeddings based on all issues reported at GitHub. To provide empirical evidence, we try some bidirectional language models: DistilBert, Roberta, and Electra. DistilBert [43] is developed using the Bert [13] model. In comparison to Bert pre-trained on the same corpus, this model is quicker and smaller in size. Roberta [28] retrains BERT with an improved training methodology, more data, and more compute power. Electra [11] uses less computation than Bert to pre-train transformer networks. In 2022, Guven [20] compares language models for the Turkish sentiment analysis approach, and the best performance is achieved by training the Electra language model. These models are pre-trained with a Turkish data set for Turkish approaches [44].
\begin{table}
\begin{tabular}{|c|c|p{142.3pt}|} \hline
**Feature** & **Name** & **Description** \\ \hline FTN1 & SummaryWords & The number of words in the issue summary. \\ FTN2 & DescriptionWords & The number of words in the issue description. \\ FTN3 & PolarityScore & Emotional tone of the issue text. (ranges from minus one (most negative) to one (most positive)) \\ FTN4 & SubjectivityScore & Issue text expresses opinions or personal feelings. (ranges from zero (most objective) to one (most subjective)) \\ FTNP & PosTags & The number of every POS tag in the issue text. \\ FTB1 & BugWords & Check for “error, null, bug, server, undefined” words in the issue text. \\ FTB2 & TestWords & Check for “test, request” words in the issue text. \\ FTB3 & DocumentWords & Check for “document/ation, write” words in the issue text. \\ FTB4 & DesignWords & Check for “design, icon, logo” words in the issue text. \\ FTB5 & NecessityVerb & The boolean value that checks for a verb for necessity in the issue text. (i.e. “should” verb in English, “-meli/mal” suffix in Turkish) \\ FTB6 & NegativeVerb & The boolean value that checks for negative verbs in the issue text. (i.e. “n’t, not” in English, “-me, -ma” in Turkish) \\ \hline \end{tabular}
\end{table}
Table 4: Features extracted from issue texts.
## 4 Experiments and Results
This section presents the findings and outcomes of the conducted experiments, providing a comprehensive analysis of the collected data. We critically assess the performance and effectiveness of the proposed methodology by employing various evaluation metrics and techniques. We compare the extracted features from different perspectives, evaluating their individual contributions to the classification task. Lastly, we present a thorough statistical examination of the obtained results, employing appropriate statistical tests and measures to validate the significance and reliability of the findings.
### Evaluation
Table 5 presents the experiment results for the issue assignment. The models are evaluated on Team Assignment (TA) and Developer Assignment (DA). The stacking model (RF and Linear SVC) achieved the highest accuracy, with values of 0.92 for TA and 0.89 for DA. Other models, such as Support Vector Machine, Logistic Regression, and Random Forest, also showed good performance with accuracies ranging from 0.86 to 0.88. The transformer-based models, including DistilBert, Roberta, and Electra, demonstrated competitive accuracies, with Roberta and Electra achieving the highest scores in some cases.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Classification Model** & **Acc\_TA** & **Acc\_DA** \\ \hline Support Vector Machine & 0.88 & 0.86 \\ Logistic Regression & 0.88 & 0.85 \\ Naive Bayes & 0.83 & 0.78 \\ Multilayer & 0.86 & 0.75 \\ Stochastic Gradient Descent & 0.87 & 0.82 \\ Decision Tree & 0.86 & 0.86 \\ Random Forest & 0.88 & 0.86 \\ KNN & 0.88 & 0.88 \\ One vs Rest & 0.82 & 0.81 \\ Voting Soft & 0.87 & 0.87 \\ Voting Hard & 0.88 & 0.88 \\ RF with Boosting & 0.90 & 0.88 \\ Bagged DT & 0.86 & 0.88 \\ Extra Trees & 0.89 & 0.88 \\ Stacking (RF and Linear SVC) & **0.92** & **0.89** \\ DistilBert & 0.88 & 0.87 \\ Roberta & 0.91 & 0.88 \\ Electra & 0.91 & 0.88 \\ \hline \end{tabular}
\end{table}
Table 5: Experiment results (TA: Team Assignment, DA: Developer Assignment).
Table 6 provides the performance metrics for each class in the Stacking algorithm for issue assignment. Under the Team Assignment (TA) approach, the Stacking algorithm achieved a high precision value of 0.90 for the Developer class, indicating a low rate of false positive assignments. The Recall score of 0.99 for the Developer class demonstrates the algorithm's ability to correctly identify the majority of instances assigned to developers. The Tester class shows a balance between precision (0.95) and recall (0.71), indicating accurate assignments with a relatively high rate of false negatives. The Designer class exhibits similar trends with a precision of 0.92 and a recall of 0.67. The Leader class has relatively lower precision and recall scores, indicating more challenging assignments for the algorithm.
Under the Developer Assignment (DA) approach, the Stacking algorithm achieved high precision values for the Senior class (0.90) and the Mid class (0.89), indicating accurate assignments with low rates of false positives. The Mid class also demonstrates a high recall score of 0.94, indicating effective identification of instances assigned to this class. The Junior class shows a lower precision (0.62) and recall (0.53) compared to the other classes, suggesting potential challenges in accurately assigning instances to this class.
We also test our classification algorithms using the most popular word embedding techniques to determine how well our features work. Figure 5 illustrates the comparison of our feature set with Tf-Idf and BOW methods for the issue assignment. Despite the potential of Word2Vec as a word embedding algorithm, the accuracy results in our approach do not yield comparable outcomes. We use the accuracy score as the comparison metric. The graph demonstrates that using our feature set yields superior results, while using Tf-Idf and BOW yields comparable results.
### Feature Comparison
Reducing the number of redundant and irrelevant features is an effective way to improve the running time and generalization capability of a learning algorithm [12]. Feature selection methods are used to choose a subset of relevant features that contribute to the intended concept. These methods can employ a variety of models and techniques to calculate feature importance scores. One
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Task** & **Class** & **Precision** & **Recall** & **F1** \\ \hline \multirow{4}{*}{TA} & Developer & 0.90 & 0.99 & 0.94 \\ & Tester & **0.95** & **0.71** & **0.83** \\ & Designer & 0.92 & 0.67 & 0.80 \\ & Leader & 0.65 & 0.57 & 0.62 \\ \hline \multirow{4}{*}{DA} & Senior & 0.90 & 0.67 & 0.80 \\ & Mid & 0.89 & 0.94 & 0.91 \\ \cline{1-1} & Junior & 0.62 & 0.53 & 0.57 \\ \hline \end{tabular}
\end{table}
Table 6: Performance metrics for each class in the Stacking algorithm.
simple approach [9] is to calculate the coefficient statistics between each feature and the target variable. This method can help to identify the most important features of a given problem and discard the redundant or irrelevant ones. By reducing the number of features used for training a model, the running time of the algorithm can be significantly reduced without sacrificing accuracy. Moreover, feature selection can also improve the interpretability of the model, as it helps to identify the key factors that influence the target variable. We present the coefficient values of each feature in Figure 6.
Figure 5: Comparison of our feature set with word embedding methods.

We find that Issue Type, namely FJC1, emerges as the most influential feature from our feature set. Apart from the Issue Type, we discover that the features Watchers and Summary Words also exhibit significant effectiveness in our analysis. Conversely, features such as Reopen Count, Test Execution Type, and Approval Type demonstrate no impact on our issue assignment process. In Figure 7, we present the effective POS tags in Turkish, highlighting the most influential ones among all POS tags. Notably, the number of unknown words, verbs, and nouns emerge as the most impactful features. Following the rigorous selection of the best features, we proceed to employ Scikit-learn's [38] SelectFromModel, a powerful meta-transformer designed to choose features based on their importance weights, to retrain our models. Through this process, we carefully identify and select the top eight features that exhibited the highest significance, as determined by the module. Remarkably, leveraging this refined feature subset allows us to achieve optimal performance and attain the most favorable outcome in our experiments.
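A sketch of this selection step; forcing exactly eight features by combining max_features with a threshold of minus infinity is our reading of scikit-learn's API, and the estimator and the synthetic data are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# stand-in data; in practice X holds the features of Tables 3 and 4, y the assignee labels
X, y = make_classification(n_samples=500, n_features=30, n_informative=12, random_state=0)

# keep only the eight most important features according to a fitted forest
selector = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0),
                           max_features=8, threshold=-np.inf)
X_reduced = selector.fit_transform(X, y)
selected_mask = selector.get_support()  # boolean mask over the original feature columns
```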
### Statistical Analysis
Statistical significance tests are conducted to compare classification methods and determine whether one learning algorithm outperforms another on a particular learning task. Dietterich [14] reviews five approximate statistical tests and concludes that McNemar's test and the 5x2 cv t-test both have low type I error and sufficient power. In our study, we combine all data sets into a single data set for the classification algorithm. Dietterich [14] recommends using a 5x2 t-test to statistically compare two classifiers on a single data set. Alpaydin [4] further expands the 5x2 cv t-test into the 5x2 cv f-test, which is also suggested as the new standard by the original authors above. The following lists the null hypothesis and the alternative hypothesis. The null hypothesis is that the probabilities are the same, or in simpler terms, that neither of the two models outperforms the other. The alternative hypothesis, therefore, holds that the performances of the two models are not equivalent.
Figure 6: Coefficient values of each feature.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Classification Model** & **p-value** & **Hypothesis** \\ \hline SVM & 0.0076 & **Reject** \\ Logistic Regression & 0.0412 & **Reject** \\ Naive Bayes & 0.0413 & **Reject** \\ Multilayer & 0.1336 & Accept \\ SGD & 0.0412 & **Reject** \\ Decision Tree & 0.0265 & **Reject** \\ Random Forest & 0.4795 & Accept \\ KNN & 0.0412 & **Reject** \\ One vs Rest & 0.0025 & **Reject** \\ Voting Soft & 0.0736 & Accept \\ Voting Hard & 0.1336 & Accept \\ RF with Boosting & 0.4795 & Accept \\ Bagged DT & 0.4795 & Accept \\ Extra Trees & 0.2482 & Accept \\ DistilBert & 0.1376 & Accept \\ Roberta & 0.3675 & Accept \\ Electra & 0.3675 & Accept \\ \hline \end{tabular}
\end{table}
Table 7: Analysis of the Stacking algorithm’s statistical test results in comparison to others.
Figure 7: Coefficient values of each POS tag in Turkish.
Accordingly, we apply the 5x2 f-test implemented by Alpaydin [4], which is an extension of the 5x2 cv t-test as stated above. We create the matrix for all pairwise comparisons of learning algorithms. In this test, the splitting process (50% training and 50% test data) is repeated five times. In each of the five iterations, the two classifiers A and B are fitted to the training split and their performance on the test split is assessed. The training and test sets are then rotated (the training set becomes the test set, and vice versa), and the performance is computed again, yielding two performance difference measures. Then, the mean and variance of the differences are estimated and the f-statistic proposed by Alpaydin is calculated as
\[f=\frac{\sum_{i=1}^{5}\sum_{j=1}^{2}(p_{i}^{j})^{2}}{2\sum_{i=1}^{5}s_{i}^{2}}, \tag{1}\]
where \(p_{i}^{j}\) is the difference in error rates between the two classifiers on fold \(j\in\{1,2\}\) of replication \(i\in\{1,\ldots,5\}\) and \(s_{i}^{2}\) is the estimated variance on replication \(i\). We reject the null hypothesis that the two models' performances are equal if the p-value is less than our chosen significance level (\(p\)-value \(<\alpha=0.05\)) and accept that the two models are significantly different. Table 7 presents the results of the statistical F test analysis comparing the Stacking algorithm to other classification models. The table provides the p-values and corresponding hypothesis decisions for each classification model. Based on the p-values compared to the chosen alpha value, we can accept the null hypothesis that there are no significant differences among all ensemble learning and deep-learning techniques. However, when comparing these techniques with machine learning methods, the majority of cases result in rejecting the null hypothesis, indicating significant performance variations. This observation is evident from the table, which highlights the substantial rejection of the null hypothesis in most comparisons between ensemble learning, deep learning, and machine learning methods.
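A self-contained sketch of this procedure for two scikit-learn classifiers; the helper implements Eq. (1) directly and is illustrative rather than the exact script we ran:

```python
import numpy as np
from scipy import stats
from sklearn.base import clone
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def ftest_5x2cv(clf_a, clf_b, X, y, seed=0):
    """Alpaydin's combined 5x2 cv F test; returns the f statistic and its p-value."""
    p_sq_sum, var_sum = 0.0, 0.0
    for i in range(5):                                    # five replications of 2-fold CV
        X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5,
                                          random_state=seed + i, stratify=y)
        p = []
        for Xtr, ytr, Xte, yte in [(X1, y1, X2, y2), (X2, y2, X1, y1)]:
            err_a = 1 - accuracy_score(yte, clone(clf_a).fit(Xtr, ytr).predict(Xte))
            err_b = 1 - accuracy_score(yte, clone(clf_b).fit(Xtr, ytr).predict(Xte))
            p.append(err_a - err_b)                       # difference in error rates
        p_bar = (p[0] + p[1]) / 2
        p_sq_sum += p[0] ** 2 + p[1] ** 2
        var_sum += (p[0] - p_bar) ** 2 + (p[1] - p_bar) ** 2
    f = p_sq_sum / (2 * var_sum)
    return f, stats.f.sf(f, 10, 5)                        # F distribution with (10, 5) dof
```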
## 5 Discussion and Qualitative Analysis
In this section, we focus on the validity of threats that could impact the reliability and generalizability of our study results. We discuss potential sources of bias, confounding variables, and other factors that may affect the validity of our study design, data collection, and analysis. We also describe our experiment for user evaluation in the company, which is aimed at investigating the effectiveness of our approach for issue assignment. We explain the methodology we use to gather feedback from users, such as surveys or interviews, and how we plan to analyze the results.
### Threats to Validity
In this section, we discuss the validity threats to our study concerning internal validity, external validity, construct validity, and conclusion validity. (Wohlin et al. [54])
Internal validity pertains to the validity of results internal to a study. It focuses on the structure of a study and the accuracy of the conclusions drawn. To avoid creating a data set with inaccurate or misleading information for the classification, the company's employees labeled the assignees by their fields in the data set. We attempt to use well-known machine learning libraries during the implementation phase to prevent introducing an internal threat that can be brought on by implementation mistakes. All of our classification techniques specifically make use of the Python Sklearn [47] package. Sklearn and NLTK are used to preprocess the text of the issues, and Sklearn metrics are used to determine accuracy, precision, and recall. We think that the step most likely to introduce an internal threat is the assignment of part-of-speech tags to Turkish issue texts. Since Turkish is not as common as English and is an agglutinative language, it is more difficult to find a highly trained POS tagger library that provides high precision. We decided to use the Turkish_pos_tagger [59] library after comparing many Turkish POS tagger libraries on parameters such as training data size, accuracy percentages, and usage popularity. The Turkish_pos_tagger library includes 5110 sentences, and the data set originally belongs to the Turkish UD treebank. For 10-fold validation, the accuracy of the model is 95%.
External validity involves the extent to which the results of a study can be applied beyond the sample. We use the data set of five different applications with thousands of issues. These projects use different software languages and various technologies at the front-end and back-end layers, including restful services, communication protocols such as gRPC and WebSocket, database systems, and hosting servers. The issue reports cover a long period of time from 2011 to 2021. However, all the projects we get from the issue reports are mainly concerned with the development of web projects made to run in the browser on the TV. Issues contain many TV-specific expressions, such as application behaviors that occur as a result of pressing a button on the remote or resetting the TV. We make great efforts to ensure that the features we design to prevent external validity concerns are not particular to the data we utilize. For our classification analysis to be applicable in other fields, we believe that it will be sufficient to replicate it using other data sets from various fields.
Construct validity refers to the degree to which a test or experiment effectively supports its claims. The performance of our automated issue assignment system is evaluated using the well-known accuracy metric. We additionally back it up with two other well-known metrics, namely recall, and precision. Tasks in the organization where we use the data set can only be given to one employee, and that employee is also in charge of the sub-tasks that make up the task. This makes assigning the issue report to a single employee group as a binary classification, an appropriate classification method. However, it could be necessary for a business analyst, a product owner, and a software developer to open the same task in different project management or different software teams. For this kind of data set, the binary classification research we conducted is not a suitable approach.
Conclusion validity refers to the extent to which our conclusion is considered credible and accurately reflects the results of our study. All the issue data we used are real issue reports collected from the software development team. We use issue reports from a 10-year time span, but according to the information we received from within the company, the turnover rate is low compared to other companies, and especially the team leaders and testers, who usually create the tasks, are generally people who have worked for the company for 10+ years. This may have caused a high similarity in the language, namely the text, of the opened tasks and thus created a conclusion threat to the accuracy rate. To assess how consistent the accuracy values we find are among themselves, we used statistical significance tests as outlined in Section 4.3. By proving our hypotheses in this manner, we showed the consistency of the outcomes we discovered and the effectiveness of our methods.
### User Evaluation
For this section, we interview two employees of the company about our application: a Senior Software Developer who created the architecture of the projects from which we use the data set, and the Team Leader of three of these projects. The software developer has been employed by the company for three years and has worked on all the projects from which we use the data set. The team leader has been employed by the company for ten years and leads three of the projects. After spending about 15 minutes outlining our application, we evaluate the outcomes by conducting a few joint experiments. In the first part, we pick a completed issue for which our model assigns a different assignee value than the one recorded in the ITS, and we discuss it. This issue is assigned to the team leader in the ITS, but our model assigns it to the junior software developer. We want to know what the team members think about this example of a wrong assignment, since our model assigns it to a different employee both in terms of experience and field. First off, they state that the test team or the customer support team can open an incorrectly tested issue that is not actually a problem, or issues that should not be developed. In this case, the team manager can take the issue and bring it to Won't Fix status. This is also what happened with this issue. In fact, this is something that should not be done. They state that for such situations, the team manager must decide.
In the second part, we assign an idle issue that has not yet been assigned by our classification method. The model labels the issue as the junior software developer. We are asking for their opinion to find out if this is the correct assignment. Considering the scope of the job, both team members state that it is appropriate to assign this job to a junior friend, as the requirement for seniority is quite low. Assigning it to mid or senior employees would not be a problem either, but they would not consider assigning this issue to more experienced employees.
In the next section, we give a data set consisting of 20 issues assigned and closed in ITS to the senior software developer and team leader in the company. They label these issues according to the labels we set. We compare the tags
made with the assigned values in the issue's data set and the assignments made by our best working system. Table 8 shows the results of this comparison.
First, we compare the labels given by the Senior Software Developer and the Team Leader with the assigned values on the ITS and the labels of our best model, and find the issues where all four are the same. This is the smallest intersection, with 11 issues on which all four labels agree. In order to understand whether this difference is due to mislabeling by our model or due to labeling differences between the employees and the ITS, in combination 2 we check the issues where the label of our model and the labels of both employees intersect. Here the total intersection turns out to be 13. We show the employees the two issues that are assigned differently on the ITS. For both issues the two employees gave the same label, but a different employee type seems to have closed the issue in the ITS. They think that if it is a problem that an employee has dealt with before, they may have taken it for that reason, and it could be either type of label. In the third combination, we find the issues that our model has labeled in common with at least one employee, to see whether there are labels on which the employees think differently or whether our model makes assignments completely different from both. Here, the number of common labels increases to 18. We find five issues that the two employees tagged differently, and we ask the employees what they think about these differences. After an exchange of ideas, in two of the differently tagged issues the developer thinks that the leader's label is more appropriate, and in another two the leader thinks that the developer's label is more accurate. On one issue, they cannot reach a common decision. Finally, we add the values from the ITS to the combination and find the issues where our model coincides with at least one of the two employees' labels or with the label from the ITS. Thus, we see that our model and the ITS have the same label value for that undecided issue.
In the last section, we direct the questions we prepared to the employees to get an idea about the system. We ask whether they would prefer such a system to be used in business life. They state that if it is converted into an application and the necessary features are added, for example an interface where they can enter the current number of personnel and their experience, and add or remove employees who are currently on leave, and if it turns into a plugin that integrates with the Jira interface, they will want to use it. Afterward, we ask whether they find the system reliable and whether they trust the assignments made.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Combination** & **\# labels** \\ \hline LabelByDeveloper \(\cap\) LabelByLeader \(\cap\) Model \(\cap\) ITS & 11 \\ LabelByDeveloper \(\cap\) LabelByLeader \(\cap\) Model & 13 \\ Model \(\cap\) (LabelByDeveloper \(\vee\) LabelByLeader) & 18 \\ Model \(\cap\) (LabelByDeveloper \(\vee\) LabelByLeader \(\vee\) ITS) & 19 \\ \hline \end{tabular}
\end{table}
Table 8: Comparison of label results.
They say that they cannot completely leave the assignment to the application, and they will want to take a look at the assignments it makes. The team leader adds that if there is a feature to request his approval, for example by sending an e-mail before making the assignment, he will take a look and approve it, except in exceptional cases, and his work will be accelerated. We then address the question of whether they think that the average task resolution time will decrease as a result of the assignments made with the system. They think it can reduce the average task resolution time, but they state that if similar tasks are constantly sent to similar employee groups, this may have undesirable consequences for employee happiness and development. Next, we ask whether they think using the system will reduce planning time. There are times when they have talked at length in team planning meetings about who would get a job and who would be more suitable. They think that, at the least, having a second source of data can be a savior in cases where they are undecided. Finally, we ask for their suggestions to improve the system. They state that if this system is going to turn into an application, they will want to see the values that the application pays attention to, and to be able to edit them, remove them, or add new ones. They think that if it has a comprehensive and user-friendly interface, it will be suitable for use in business processes.
## 6 Related Work
Several studies in the literature have focused on issue classification, which has addressed a variety of objectives, including issue assignment, effort estimation, issue prioritization, and so on. In this section, we briefly give details regarding issue assignment studies in general and all Turkish-language issue classification studies in particular.
Several types of research have been conducted in order to automate the time-consuming task of issue assignment. In 2017, Goyal et al. [18] review and categorize 75 research papers on the automated bug assignment area. They identify seven categories: machine learning [25, 47, 55, 5, 8, 36], information retrieval [58], auction [24], social network [57], tossing graph [50], fuzzy set [37] and operational research based [26] techniques. They capture the fact that for automatic bug report assignment, machine learning and information retrieval techniques are the most popular ones. In recent years, deep learning algorithms have also been successfully applied in this field, which has recently revolutionized the idea of word sequence representation and demonstrated encouraging advancements in a number of classification tasks [17]. In this section, we restrict our focus to machine learning and deep learning architectures used to train issue assignment systems.
The machine learning algorithms use historical bug reports to build a supervised or unsupervised machine learning classifier, which is then used to choose appropriate developers for new bug reports. Naive Bayes is the most widely used classifier in machine learning-based approaches according to prior studies [56, 8, 6, 30, 47], and it has been extensively tested [18] in the bug reports of open
source projects. Most studies use Eclipse [8; 5; 3; 51; 2; 39] and Mozilla [8; 51; 39; 23] projects to validate their proposals. Machine learning models in most approaches [5; 36; 51] use only summary and description as textual features of the issues. Jonsson et al. [25] use the combined title and description as textual features and version, type, priority, and submitter columns as nominal features. Sharma et al. [46] consider bug attributes, namely, severity, priority, component, operating system, and the bug assignee.
To estimate the value of terms, most of the approaches [51; 25; 47; 42] in the literature employ term-weighting techniques like Tf-Idf. Jonsson et al. [25] represent textual parts in the bug reports as the 100 words with the highest Tf-Idf. Shokripour et al. [47] use time metadata in Tf-Idf (Time-Tf-Idf). To determine the value of terms in a document and corpus, the Tf-Idf technique only considers their frequency. However, in determining the weight, time-based Tf-Idf considers the time spent using the term in the project. The developer's recent use of the term is taken into account when determining the value of the developer's expertise. They rank the developers according to their calculated term expertise, and the first developer on the list is assigned to fix the new bug.
However, prior studies focused on open-source projects only but rarely [25; 36] attempted in industrial environments like our study. Jonsson et al. [25] use ensemble learner Stacked Generalization, which is our best method also, that combines several machine learning classifiers on data from the automation and telecommunication company. In their approach, the different classes correspond to the development teams. Oliveira et al. [36] also use the data set of a large electronic company. They create a model that can distribute new issues according to the responsibilities of the teams using a variety of machine learning techniques and the WEKA [21] tool.
To improve prediction accuracy, some researchers use incremental learning methods. Bhattacharya et al. [8] use various machine learning algorithms and achieve the best results using the NB classifier in combination with the product-component features, tossing graphs, and incremental learning in mostly used large projects: Mozilla and Eclipse. Xia et al. [55] offer the multi-feature topic model (MTM), a specialized topic modeling approach that extends Latent Dirichlet Allocation (LDA) for the bug assignment. To map the term space to the subject space, their approach takes into account product and component information from issue reports. Then, they suggest an incremental learning mechanism that uses the topic distribution of a new bug report to assign an appropriate developer based on the reports that the developer has previously fixed.
The deep learning algorithms are attempted first in 2017 [16] for bug report assignment recommendation, to the best of our knowledge. Gupta et al. [19] describe the popular deep learning approaches applied to the domain of bug reports and Recurrent Neural Networks (RNN) and Long Short Term Memory (LSTM) are a few famous approaches being used for the deep learning-based approaches [16; 31]. Mani et al. [31] use title and description parts and Florea et al. [16] use the component id, product id, and bug severity fields as one-hot-encoded
categorical variables in addition to title, description, and comments to represent the issues. In 2022, Feng et al. [15] use four transformers models BERT and RoBERTa along with their distilled counterparts DistilBERT, DistilRoBERTa in an ensemble using a resolver team, resolver person, and description columns of the issues.
In research using Turkish issue reports, only limited studies are available in a few fields. The reason may be the agglutinative nature of the Turkish language and the absence of a shared data set for Turkish issues in the literature. Aktas et al. [1] classify the issues they gathered from the banking industry among various software development teams. They use Jira issue reports for their research, as we do in our study. They use SVC, CNN, and LSTM models to solve the classification problem, and they represent the summary and description columns of the issue reports as ordered vectors of Tf-Idf scores. The linear SVC classifier offers the best assignment accuracy for their research with a 0.82 score. Koksal et al. [27] present an automated bug classification approach using a commercial proprietary bug data set. They apply several machine learning algorithms, and the SVM classifier is the best algorithm with 0.72 accuracy. In 2022, Tunali [52] prioritizes the software development demands of a private insurance company in Turkey. He proposes several deep-learning architectures and a pre-trained transformer model called distilbert-base-turkish-cased, based on DistilBERT, to achieve the highest accuracy of 0.82.
## 7 Conclusions and Future Work
This study focuses on automated issue assignment using proprietary issue reports obtained from the electronic product manufacturer's issue tracking system. The objective of the issue assignment approach is to assign issues to appropriate team members based on their respective fields. The team members are categorized into Software Developer, Software Tester, Team Leader, and UI/UX Designer. Among these categories, the majority of the data set consists of developers. Efficiently allocating issues to developers is critical for effective time management. To achieve this, we further classify developers into Senior, Mid, and Junior levels, which are widely accepted labels in the industry.
Our focus lies in extracting features from the filled Jira columns, as well as the title and description texts of the issues, utilizing NLP techniques. These features serve as inputs to our learning methods, enabling us to analyze and classify the issues effectively. Additionally, we employ other commonly used word embedding methods which are Tf-Idf, BOW, and Word2Vec to generate feature vectors from the text fields. This step, implemented using the Sklearn and Gensim library, allows us to compare the performance of our feature set against alternative approaches. Furthermore, to assess the effectiveness of our overall methodology, we incorporate widely adopted deep-learning techniques, namely DistilBert, Roberta, and Electra.
Following the production of feature vectors, we proceed to implement the proposed system utilizing established machine learning techniques. With the aim
of enhancing predictive performance, we employ ensemble methods that leverage a diverse range of machine-learning algorithms. To evaluate the effectiveness of our system, we employ widely recognized metrics such as accuracy, precision, recall, and F1-score which serve as indicators of its performance. To further refine our predictions, we employ a robust technique known as 10-fold cross-validation. In order to conduct a thorough statistical analysis, we construct a matrix to compare and contrast the effectiveness of our proposed strategies. This matrix allows us to assess the performance of our system across different algorithms, ensemble techniques, and evaluation metrics.
Our future endeavors involve the development of a versatile tool applicable to diverse software team models. To fortify our work, we actively engage in discussions and pursue collaborations to acquire data sets from businesses operating across various domains, such as game development and banking applications. This broadened data set will enable us to enhance our model's capabilities for multi-class classification, accommodating different roles within software teams, including product owners and business analysts. Furthermore, we are committed to ensuring compatibility and flexibility by incorporating various business branches into our data set. By incorporating real-world data obtained directly from industry sources, both in English and Turkish, we will conduct comprehensive evaluations through diverse studies. Expanding on the existing features, we intend to utilize the same data set for future research endeavors, such as effort estimation [53, 60], further solidifying the value and applicability of our work in the field.
|
2306.06989
|
Please, not \textit{another} note about Generalized Inverses
|
We prove some statements of left- and right-continuous variants of
generalized inverses of non-decreasing real functions.
|
Philipp Wacker
|
2023-06-12T09:34:49Z
|
http://arxiv.org/abs/2306.06989v1
|
# Please, not _another_ note about Generalized Inverses
###### Abstract
We prove some statements of left- and right-continuous variants of generalized inverses of non-decreasing real functions.
## 1 Introduction
This short manuscript fills a few theoretical gaps in the recorded knowledge about generalized inverses (also called _quantile functions_ in the context of probability theory), and corrects a few inaccuracies in the existing literature. While there is a certain overlap with parts of [1, 1, 1, 2, 3, 4], none of these give the full picture of generalized inverses, and there are persistent errors that need rectification. Finally, this note1 presents some (as far as the author is aware) new insights about the exact form of \(T\circ T^{-1}\) and \(T^{-1}\circ T\), where \(T^{-1}\) is a generalized inverse.
Footnote 1: this is a popular title for communicating results about generalized inverses, see the references section.
**Notation:** In the following, we define \(f(x+)=\lim_{\varepsilon\searrow 0}f(x+\varepsilon)\) and \(f(x-)=\lim_{\varepsilon\searrow 0}f(x-\varepsilon)\) for a function \(f:\mathbb{R}\to\mathbb{R}\). Similarly, \(f(-\infty)=\lim_{x\to-\infty}f(x)\) and \(f(\infty)=\lim_{x\to+\infty}f(x)\). A non-decreasing function is a map \(f:\mathbb{R}\to\mathbb{R}\) such that \(x<y\) implies \(f(x)\leq f(y)\). We denote by \(\overline{\mathbb{R}}=\mathbb{R}\cup\{-\infty,\infty\}\) the set of the extended real numbers.
We start by defining generalized inverses.
**Definition 1** (generalized inverse).: _Let \(T:\mathbb{R}\to\mathbb{R}\) be a non-decreasing function where we set \(T(-\infty)=\lim_{x\to-\infty}T(x)\) and \(T(\infty)=\lim_{x\to\infty}T(x)\). Then the generalized inverses \(T^{+}:\mathbb{R}\to\overline{\mathbb{R}}\) and \(T^{-}:\mathbb{R}\to\overline{\mathbb{R}}\) of \(T\) are defined by_
\[T^{+}(y) =\inf\{x\in\mathbb{R}:T(x)>y\} \tag{1}\] \[T^{-}(y) =\inf\{x\in\mathbb{R}:T(x)\geq y\}. \tag{2}\]
_with the convention that \(\inf\emptyset=\infty\) and \(\inf\mathbb{R}=-\infty\)._
_Remark 1_.: [10] proved that we can equivalently write \(T^{+}(y)=\sup\{x\in\mathbb{R}:T(x)\leq y\}\) and \(T^{-}(y)=\sup\{x\in\mathbb{R}:T(x)<y\}\), as long as we make sure that the domain of \(T\) is the whole of \(\mathbb{R}\).
_Remark 2_.: \(T^{+}\) and \(T^{-}\) are the right- and left-continuous generalized inverses of \(T\), in the sense outlined by lemma 1(c) below.
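For a concrete illustration of these definitions, consider the right-continuous step function \(T(x)=0\) for \(x<0\) and \(T(x)=1\) for \(x\geq 0\). Then

\[T^{+}(y)=\begin{cases}-\infty,&y<0,\\ 0,&0\leq y<1,\\ \infty,&y\geq 1,\end{cases}\qquad T^{-}(y)=\begin{cases}-\infty,&y\leq 0,\\ 0,&0<y\leq 1,\\ \infty,&y>1,\end{cases}\]

so \(T^{+}\) and \(T^{-}\) differ exactly at the two values \(y\in\{0,1\}\) that \(T\) attains on plateaus, and one checks directly that \(T^{+}\) is right-continuous while \(T^{-}\) is left-continuous.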
## 2 Useful statements for working with generalized inverses
We follow up with a list of elementary properties of \(T^{+}\). Parts 1-1 are a generalization of [1, Proposition 1] to the case of both \(T^{+}\) and \(T^{-}\). Part 1 is proven in [11, section 2]. Parts 2 is similar to [1, Proposition 4.2], and prove and sharpen all results to cover both the case \(T^{+}\) and \(T^{-}\). 2 and 2 are new and show what we can say if \(T\) is left- or right-continuous. Parts 1 and 2 correct a mistake in [1, Proposition 4.3] (see remark below), and generalize the statement to handle \(T^{-}\), as well.
**Lemma 1**.: _Let \(T:\mathbb{R}\to\mathbb{R}\) be a nondecreasing map._
(a) (1) \(T^{+}(y)=-\infty\) if and only if \(T(x)>y\) for all \(x\in\mathbb{R}\). (2) \(T^{+}(y)=\infty\) if and only if \(T(x)\leq y\) for all \(x\in\mathbb{R}\). (3) \(T^{-}(y)=-\infty\) if and only if \(T(x)\geq y\) for all \(x\in\mathbb{R}\). (4) \(T^{-}(y)=\infty\) if and only if \(T(x)<y\) for all \(x\in\mathbb{R}\).

(b) \(T^{+}\) and \(T^{-}\) are nondecreasing.

(c) \(T^{+}\) is right-continuous, and \(T^{-}\) is left-continuous. For all \(y\in\mathbb{R}\), (1) \(T^{+}(y-)=T^{-}(y-)=T^{-}(y)\) and (2) \(T^{-}(y+)=T^{+}(y+)=T^{+}(y)\).

(d) \(T^{+}\) is continuous at \(y\) if and only if \(T^{-}\) is continuous at \(y\).

(e) \(T^{-}(y)\leq T^{+}(y)\).

(f) \(T^{-}(y)=T^{+}(y)\) if and only if \(\operatorname{Card}(T^{-1}(\{y\}))\leq 1\).

(g) The following relations hold:
(1) If \(y\leq T(x)\), then \(T^{-}(y)\leq x\). Equivalently, if \(x<T^{-}(y)\), then \(T(x)<y\).
(2) If \(y<T(x)\), then \(T^{+}(y)\leq x\). Equivalently, if \(x<T^{+}(y)\), then \(T(x)\leq y\).
(3) \(T^{-}(T(x))\leq x\).
(4) If \(y>T(x)\), then \(T^{-}(y)\geq x\). Equivalently, if \(x>T^{-}(y)\), then \(T(x)\geq y\).
(5) If \(y\geq T(x)\), then \(T^{+}(y)\geq x\). Equivalently, if \(x>T^{+}(y)\), then \(T(x)>y\).
(6) \(T^{+}(T(x))\geq x\).
(7) If \(T^{+}(T(x))=T^{-}(T(x))\), then \(T^{+}(T(x))=T^{-}(T(x))=x\).
(8) \(T(T^{+}(y)-)\leq y\).

(h) Let \(T\) be right-continuous at \(x\). Then the following relations hold:
(1) If \(y>T(x)\), then \(T^{-}(y)>x\). Equivalently, if \(x\geq T^{-}(y)\), then \(T(x)\geq y\).
(2) If \(y>T(x)\), then \(T^{+}(y)>x\). Equivalently, if \(x\geq T^{+}(y)\), then \(T(x)\geq y\).
(3) \(y\leq T(x)\) if and only if \(T^{-}(y)\leq x\).

(i) If \(T\) is right-continuous at \(x=T^{+}(y)\), then \(T(T^{+}(y))\geq y\). If \(T\) is right-continuous at \(x=T^{-}(y)\), then \(T(T^{-}(y))\geq y\).

(j) Let \(T\) be left-continuous at \(x\). Then the following relations hold:
(1) If \(y<T(x)\), then \(T^{-}(y)<x\). Equivalently, if \(x\leq T^{-}(y)\), then \(T(x)\leq y\).
(2) If \(y<T(x)\), then \(T^{+}(y)<x\). Equivalently, if \(x\leq T^{+}(y)\), then \(T(x)\leq y\).
(3) \(y\geq T(x)\) if and only if \(T^{+}(y)\geq x\).

(k) If \(T\) is left-continuous at \(x=T^{+}(y)\), then \(T(T^{+}(y))\leq y\). If \(T\) is left-continuous at \(x=T^{-}(y)\), then \(T(T^{-}(y))\leq y\).

(l) If \(T\) is continuous at \(T^{+}(y)\), then \(T(T^{+}(y))=y\). If \(T\) is continuous at \(T^{-}(y)\), then \(T(T^{-}(y))=y\).

(m) If \(T\) is constant on an interval \(I=(x_{1},x_{2})\), then for all \(x\in I\) we have \(T^{+}(T(x))>x>T^{-}(T(x))\).

(n) We define the left-continuous and right-continuous versions \(T_{l}(x):=T(x-)\) and \(T_{r}(x):=T(x+)\) of \(T\). Then \(T_{l}^{+}=T_{r}^{+}\) as well as \(T_{l}^{-}=T_{r}^{-}\).
Proof.: (a), (b), (c), (d) and (e) follow immediately from elementary properties of the infimum as well as the monotonicity of \(T\). We just prove part of (c) for illustration: Let \(A_{0}=\{x:T(x)>y\}\) and \(A_{\varepsilon}=\{x:T(x)>y+\varepsilon\}\). Then \(A_{0}=\bigcup_{\varepsilon>0}A_{\varepsilon}\) and thus
\[T^{+}(y) =\inf A_{0}=\inf_{\varepsilon>0}\inf A_{\varepsilon}=\inf_{ \varepsilon>0}T^{+}(y+\varepsilon)\] \[=\lim_{\varepsilon\searrow 0}T^{+}(y+\varepsilon).\]
Regarding (g):
* Follows directly from definition and the infimum: Let \(T(x)\geq y\), then \(x\in A:=\{\xi\in\mathbb{R}:T(\xi)\geq y\}\), i.e. \(T^{-}(y)=\inf A\leq x\).
* Follows directly from definition and the infimum: Let \(T(x)>y\), then \(x\in A:=\{\xi\in\mathbb{R}:T(\xi)>y\}\), i.e. \(T^{+}(y)=\inf A\leq x\)
* This follows from (g).(1), by setting \(y=T(x)\).
* We assume that \(y>T(x)\). Thus for any \(\xi\in\mathbb{R}\) with the property that \(T(\xi)\geq y\), we have \(T(\xi)>T(x)\), i.e. \(\xi>x\) by monotonicity of \(T\). Since \(A\subset B\) implies \(\inf A\geq\inf B\), this shows \(T^{-}(y)=\inf\{\xi\in\mathbb{R}:T(\xi)\geq y\}\geq\inf\{\xi\in\mathbb{R}:\xi >x\}=x\).
* We assume that \(y\geq T(x)\). Thus for any \(\xi\in\mathbb{R}\) with the property that \(T(\xi)>y\), we have \(T(\xi)>T(x)\), i.e. \(\xi>x\) by monotonicity of \(T\). Since \(A\subset B\) implies \(\inf A\geq\inf B\), this shows \(T^{+}(y)=\inf\{\xi\in\mathbb{R}:T(\xi)>y\}\geq\inf\{\xi\in\mathbb{R}:\xi>x\}=x\).
* This follows from (g).(5), by setting \(y=T(x)\)
* Follows from (g).(3) and (g).(6).
* Clearly, \(x:=T^{+}(y)-\varepsilon<T^{+}(y)\), thus by (g).(2), \(T(x)=T(T^{+}(y)-\varepsilon)\leq y\), which shows the statement via \(\varepsilon\to 0\).
Regarding (h):
* We prove the equivalent characterization: Let \(x\geq T^{-}(y)\). We choose a sequence \(\xi_{n}\searrow x\), with \(\xi_{n}>x\), hence \(\xi_{n}>T^{-}(y)\), and thus \(T(\xi_{n})\geq y\) (by (g).(4)). Using right continuity of \(T\), we see that \(T(x)=\lim_{n}T(\xi_{n})\geq y\).
* This follows from (h).(1) and (e)
* is a direct implication of (g).(1) and (h).(1).

(i) follows from (h).(1) and (h).(2) by setting \(x=T^{-}(y)\) and \(T^{+}(y)\), respectively. (j) is proven quite similarly to (h). (k) is proven quite similarly to (i). Regarding (l): Assuming continuity of \(T\) at \(T^{+}(y)\) or \(T^{-}(y)\), respectively, the statement follows from an application of (i) and (k). (m) is proven as follows. Since \(T(x)=y\) for all \(x\in(x_{1},x_{2})\), (g).(1) and (g).(5) imply that \(T^{-}(y)\leq x\) for all \(x\in I\) as well as \(T^{+}(y)\geq x\) for all \(x\in I\), i.e. by taking the limit \(x\to x_{1/2}\), we have \(T^{-}(y)\leq x_{1}\) and \(T^{+}(y)\geq x_{2}\). Thus, for any \(x\in I\), we get \(T^{+}(T(x))=T^{+}(y)\geq x_{2}>x>x_{1}\geq T^{-}(y)=T^{-}(T(x))\).
Lastly we prove (n). Let \(T\) be an arbitrary non-decreasing function, and \(T_{l}\), \(T_{r}\) as above. We fix an arbitrary \(y\) and set \(M_{l}=\{x:T(x-)>y\}\) and \(M_{r}=\{x:T(x+)>y\}\), so that \(T_{l}^{+}(y)=\inf M_{l}\) and \(T_{r}^{+}(y)=\inf M_{r}\). By the elementary inclusion \(M_{l}\subseteq M_{r}\) we have \(T_{l}^{+}(y)\geq T_{r}^{+}(y)\). It remains to show the opposite inequality. If \(M_{l}=M_{r}\), then the statement follows directly. Otherwise, let
\(x^{\star}\in M_{r}\setminus M_{l}\), which means that \(T(x^{\star}-)\leq y\) and \(T(x^{\star}+)>y\). We will now show that \(x^{\star}\) is a lower bound for \(M_{r}\). Indeed, let \(x<x^{\star}\); then \(T(x+)\leq T(x^{\star}-)\leq y\), i.e. \(x\not\in M_{r}\). Since \(M_{l}\subseteq M_{r}\), this also shows that \(x^{\star}\) is a lower bound for \(M_{l}\). Now we show that \(x^{\star}\in\overline{M_{l}}\). Indeed, for any \(\varepsilon>0\) we have \(T((x^{\star}+\varepsilon)-)\geq T(x^{\star}+)>y\), i.e. \(x_{\varepsilon}:=x^{\star}+\varepsilon\in M_{l}\) for all \(\varepsilon>0\). Since \(x_{\varepsilon}\searrow x^{\star}\), \(x^{\star}\in\overline{M_{l}}\). By inclusion, \(x^{\star}\in\overline{M_{r}}\) as well. \(x^{\star}\) being a lower bound and a limit point proves that \(x^{\star}=\inf M_{l}\) and \(x^{\star}=\inf M_{r}\), i.e. \(T_{l}^{+}(y)=x^{\star}=T_{r}^{+}(y)\). The version with \(T^{-}\) is proven analogously.
_Remark 3_.: [1, Section 3.2] points out that there are errors in previous work on generalized inverses and collects four statements from published manuscripts together with counterexamples showing why they are wrong, but without providing a way of resolving these contradictions. This manuscript does: (i), (k), and (l) give the exact conditions under which something can be said about \(T\circ T^{+}\) as well as \(T\circ T^{-}\). In particular, the counterexamples from [1] do not apply here, since \(\infty\) is not a point of (right- or left-)continuity of \(T\) (it does not even make sense to ask for continuity there).
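To make these properties concrete, the following minimal numerical sketch (an illustration added here, not part of the original argument) approximates \(T^{-}\) and \(T^{+}\) on a grid; the step function `T`, the grid, and the tolerances are assumptions chosen only for illustration.

```python
import numpy as np

# Illustrative nondecreasing T: a jump at x = 1 (from 1 to 2) and a plateau T = 3 on [2, 3).
def T(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < 1.0, x, np.where(x < 2.0, x + 1.0, np.where(x < 3.0, 3.0, x)))

xs = np.linspace(-2.0, 6.0, 80001)   # fine grid standing in for the real line
Txs = T(xs)

def T_minus(y):
    """T^-(y) = inf{x : T(x) >= y}, approximated on the grid."""
    mask = Txs >= y
    return xs[mask][0] if mask.any() else np.inf

def T_plus(y):
    """T^+(y) = inf{x : T(x) > y}, approximated on the grid."""
    mask = Txs > y
    return xs[mask][0] if mask.any() else np.inf

# Sanity checks of the order relations in (g): T^- <= T^+ and T^-(T(x)) <= x <= T^+(T(x)),
# up to the grid resolution of 1e-4.
for y in (0.5, 1.5, 3.0):
    assert T_minus(y) <= T_plus(y)
for x in (0.5, 1.0, 2.5):
    assert T_minus(T(x)) <= x + 1e-3 and x <= T_plus(T(x)) + 1e-3

# The plateau of T on [2, 3) shows up as a jump of the generalized inverses at y = 3.
print(T_minus(3.0), T_plus(3.0))     # approximately 2 and 3
```

The last line shows how the plateau of \(T\) on \([2,3)\) turns into a jump of the generalized inverses at \(y=3\), which is exactly the duality discussed in the next section.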
There is one particular application that is especially interesting in practice, namely the inverse sampling method for univariate random variables. The statement is well known, but it is also an elementary, direct consequence of the previous lemma.
**Corollary 1**.: _Let \((\Omega,\mathcal{B},\mathbb{P})\) be a probability space and \(X:\Omega\to\mathbb{R}\) a random variable with cumulative distribution function \(F_{X}\). Then the push-forward of the uniform distribution \(U([0,1])\) under the generalized inverse \(F_{X}^{-}\) is the law of \(X\). This means that we can generate independent samples \(u_{i}\sim U([0,1])\), plug them into \(F_{X}^{-}\), and the \(\{F_{X}^{-}(u_{i})\}\) will be samples from \(X\)._
Proof.: Since \(F_{X}\) is right-continuous, we know (from Lemma 1(h).(3)) that \(y\leq F_{X}(x)\) if and only if \(F_{X}^{-}(y)\leq x\). We compute the cumulative distribution function of \(F_{X}^{-}(U)\):
\[\mathbb{P}(F_{X}^{-}(U)\leq\lambda) =\mathbb{P}(U\leq F_{X}(\lambda))\] \[=F_{X}(\lambda),\]
since the cumulative distribution function of \(U\) is given by \(\mathbb{P}(U\leq r)=r\) (for \(r\in[0,1]\)). This means that the law of \(F_{X}^{-}(U)\) is identical to the law of \(X\).
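As a quick sanity check of the corollary, here is a small sketch (added for illustration; the mixed target law with an atom at \(0\) and a uniform part on \([1,2]\) is an assumed example) in which \(F_{X}^{-}\) is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x):
    """CDF of the illustrative mixed law: P(X = 0) = 1/2 and, with probability 1/2, X ~ Uniform[1, 2]."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.0, 0.0, np.clip(0.5 + 0.5 * (x - 1.0), 0.5, 1.0))

def F_inv(u):
    """Generalized inverse F^-(u) = inf{x : F(x) >= u}, written in closed form for this F."""
    u = np.asarray(u, dtype=float)
    return np.where(u <= 0.5, 0.0, 2.0 * u)

u = rng.uniform(size=100_000)
samples = F_inv(u)

print("empirical mass at 0:   ", np.mean(samples == 0.0))    # approximately 0.5
print("empirical P(X <= 1.5): ", np.mean(samples <= 1.5))     # approximately F(1.5) = 0.75
```

The atom of mass \(1/2\) at \(0\) corresponds to \(F_{X}^{-}\) being constant equal to \(0\) on \((0,\tfrac{1}{2}]\); an ordinary inverse of \(F_{X}\) would not exist for this example.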
## 3 Jumps and Plateaus
The following two lemmata are an adaptation of [13, Lemma 2], and can also be found in [1, Proposition 4.3]. However, the former is concerned with \(T^{-}\) instead of \(T^{+}\), discusses "third order terms" like \(T(T^{+}(T(x)))>T(x)\) instead of "second order terms" like \(T^{+}(T(x))>x\), and does not prove maximality of the half-open intervals involved, while the latter has a typo (and refers to the proof of the former instead of providing a direct proof). We therefore give a proof of these statements for completeness' sake. Additionally, we fix an error in the literature (see Remark 4 below).
The following statements relate plateaus and jumps of \(T\) and \(T^{\pm}\) to one another. For a visualization of these connections, see [14, 15].
**Lemma 2**.: _Let \(T\) be nondecreasing._
1. _If_ \[T^{+}(y)>T^{-}(y)\] (3) _then for any_ \(x\in(T^{-}(y),T^{+}(y))=I\)_, both_ \[T(x) =y\text{ and }\] (4) \[T^{+}(T(x)) >x>T^{-}(T(x))\] (5) _and there is no greater interval than_ \(I\) _of the same type such that (_4_) holds._
2. _Conversely, if either_
	1. _there is a proper interval_ \(I=(x_{1},x_{2})\) _such that_ \(T(x_{0})=y\) _for all_ \(x_{0}\in I\)_, or_
	2. _for some_ \(x_{0}\) _we have_ \(T^{+}(T(x_{0}))>x_{0}\)_, in which case we set_ \(y:=T(x_{0})\)_, or_
	3. _for some_ \(x_{0}\) _we have_ \(T^{-}(T(x_{0}))<x_{0}\)_, in which case we set_ \(y:=T(x_{0})\)_,_
_then_ \[T^{+}(y)>T^{-}(y)\] (6)
3. _For any given_ \(y\)_, the following two statements are equivalent:_
	* \(T\equiv y\) _on a proper interval_ \((x_{1},x_{2})\)_._
	* \(T^{+}(y)>T^{-}(y)\)_._
Proof.: We start by proving 1. Let \(x\in(T^{-}(y),T^{+}(y))\), then \(T^{-}(y)<x<T^{+}(y)\), i.e. by an application of Lemma 1(g).(2) and (g).(4), \(y\leq T(x)\leq y\), which shows that \(T\) is indeed constant on the interval \((T^{-}(y),T^{+}(y))\). Now we show maximality. If \(x>T^{+}(y)\), then \(T(x)>y\) by Lemma 1(g).(5). Similarly, if \(x<T^{-}(y)\), then \(T(x)<y\) by Lemma 1(g).(1). This shows that there is no larger open interval on which \(T\equiv y\). The relation (5) is a direct implication of (4) (which we just proved to be true) and Lemma 1(m).
Regarding 2: We first prove that 2.(1) implies (6). This is a direct consequence of Lemma 1(m).
Assumption 2.(2) also implies (6): by Lemma 1(g).(3), \(T^{-}(T(x_{0}))\leq x_{0}<T^{+}(T(x_{0}))\). Similarly for 2.(3) (via Lemma 1(g).(6)).
Part 3 follows from a combination of the other two statements.
**Lemma 3**.: _Let \(T\) be nondecreasing._
* _If_ \[T(x+)>T(x-)\] (7) _then for any_ \(y\in(T(x-),T(x+))=I\)_,_ \[T^{+}(y)=x=T^{-}(y)\] (8) _and there is no greater interval than_ \(I\) _of the same type such that either equality in (_8_) holds._
* _Conversely, if there is a proper interval_ \(I=(y_{1},y_{2})\) _such that either_
	1. _for all_ \(y\in I\)_,_ \(T^{+}(y)=x\)_, or_
	2. _for all_ \(y\in I\)_,_ \(T^{-}(y)=x\)_,_
_then_ \[T(x+)>T(x-)\] (9)
* _For any given_ \(x\)_, the following two statements are equivalent:_
* \(T^{+}\equiv x\equiv T^{-}\) _on a proper interval_ \((y_{1},y_{2})\)_._
* \(T(x+)>T(x-)\)_._
Proof.: We start by proving 1. Let \(T(x+)>T(x-)\). Then for any \(y\in(T(x-),T(x+))\), we have that for any \(\varepsilon>0\), \(T(x-\varepsilon)<y<T(x+\varepsilon)\), i.e. (using again the relevant statements in lemma 1) \(x-\varepsilon\leq T^{-}(y)\) and \(T^{+}(y)\leq x+\varepsilon\). By letting \(\varepsilon\to 0\), we obtain the statement. Maximality is proven similarly: Take \(y>T(x+)\), i.e. there exists \(\varepsilon>0\) such that \(T(x+\varepsilon)<y\), and thus \(T^{-}(y)\geq x+\varepsilon\), which shows that \(y\) is not an element of the set on which \(T^{-}\equiv x\). Since \(T^{+}\geq T^{-}\), this also proves that \(y\) is not an element of the set on which \(T^{+}\equiv x\). Maximality from below is proven in the same way.
We now prove 2, assuming 2.(1), i.e. \(T^{+}(y)\equiv x\) on \((y_{1},y_{2})\). For any \(\varepsilon>0\), \(T^{+}(y)<x+\varepsilon\), i.e. \(T(x+\varepsilon)>y\) for all \(y\in(y_{1},y_{2})\). Thus, \(T(x+\varepsilon)\geq y_{2}\) and \(T(x+)\geq y_{2}\). In the same way we prove \(T(x-)\leq y_{1}\): for any \(\varepsilon>0\) and \(y\in(y_{1},y_{2})\) we have \(x-\varepsilon<T^{-}(y)\) (note that \(T^{-}(y)\geq T^{+}(y^{\prime})=x\) for any \(y^{\prime}\in(y_{1},y)\)), i.e. \(T(x-\varepsilon)<y\). Thus, \(T(x-\varepsilon)\leq y_{1}\) and \(T(x-)\leq y_{1}\). All in all, this proves the statement since \(T(x-)\leq y_{1}<y_{2}\leq T(x+)\). The statement that 2.(2) implies (9) is proven in a similar fashion.
Part 3 follows from a combination of the other two statements.
_Remark 4_.: Lemma 3 and Lemma 2 are a correction of [1, Proposition 4.3]. In fact, it is not true that \(T(T^{+}(y))>T(T^{+}(y)-)\) implies \(T(T^{+}(y))>y\) or that \(T^{+}(T(x))>T^{+}(T(x)-)\) implies \(T^{+}(T(x))>x\), as Figure 1 shows: We set \(x=x_{2}\). Then \(T(x)=y\), and \(T^{+}(T(x))=T^{+}(y)=x_{2}=x\), i.e. the first condition in [1, Proposition 4.3, 1.)] holds. But, \(T(x)=y\in H(T)\), since \(y\) is a plateau of \(T\). This is in contradiction to the statement of [1, Proposition 4.3, 1.)].
## 4 Inversion Statements
The remaining statements classify exactly what can be said about \(T\circ T^{\pm}\) and \(T^{\pm}\circ T\) under suitable continuity assumptions.
**Lemma 4**.: _Let \(X=\{x_{i}\}\) be the (ordered) list of all discontinuities of \(T\), where we denote \(y_{i}^{+}=T(x_{i}+)\) and \(y_{i}^{-}=T(x_{i}-)\). Then_
\[T(T^{+}(y))=\begin{cases}T(x_{i}),&\text{ for }y\in(y_{i}^{-},y_{i}^{+})\\ y,&\text{ for }y\not\in\bigcup_{i}[y_{i}^{-},y_{i}^{+}]\end{cases}\]
\[T(T^{-}(y))=\begin{cases}T(x_{i}),&\text{ for }y\in(y_{i}^{-},y_{i}^{+})\\ y,&\text{ for }y\not\in\bigcup_{i}[y_{i}^{-},y_{i}^{+}]\end{cases}\]
_Let \(Y=\{y_{i}\}\) be the (ordered) list of all discontinuities of \(T^{\pm}\), where we denote \(x_{i}^{+}=T^{+}(y_{i})\) and \(x_{i}^{-}=T^{-}(y_{i})\). Then_
\[T^{+}(T(x))=\begin{cases}x_{i}^{+}=T^{+}(y_{i}),&\text{ for }x\in(x_{i}^{-},x_{i}^ {+})\\ x,&\text{ for }x\not\in\bigcup_{i}[x_{i}^{-},x_{i}^{+}]\end{cases}\]
\[T^{-}(T(x))=\begin{cases}x_{i}^{-}=T^{-}(y_{i}),&\text{ for }x\in(x_{i}^{-},x_{i}^ {+})\\ x,&\text{ for }x\not\in\bigcup_{i}[x_{i}^{-},x_{i}^{+}]\end{cases}\]
Proof.: We show the characterization for \(T\circ T^{+}\): Let first \(y\in(y_{i}^{-},y_{i}^{+})\). Then Lemma 3(a) proves \(T^{+}(y)=x_{i}=T^{-}(y)\), i.e. \(T(T^{+}(y))=T(x_{i})=T(T^{-}(y))\). If \(y\not\in\bigcup_{i}[y_{i}^{-},y_{i}^{+}]\), then \(T^{+}(y)\not\in X\), or otherwise \(T^{+}(y)=x_{j}\) for some \(j\), which would be in contradiction to the maximality of the set \((y_{j}^{-},y_{j}^{+})\) in Lemma 3(a). Similarly, \(T^{-}(y)\not\in X\). Thus \(T\) is continuous at \(T^{+}(y)\) and at \(T^{-}(y)\), i.e. \(T(T^{\pm}(y))=y\) by virtue of Lemma 1(l).
Regarding \(T^{+}\circ T\): If \(i<j\), then \(y_{i}<y_{j}\) and thus \(x_{i}^{+}=T^{+}(y_{i})\leq T^{-}(y_{j})=x_{j}^{-}\), so the intervals \((x_{i}^{-},x_{i}^{+})\) are pairwise disjoint. Let \(x\in(x_{i}^{-},x_{i}^{+})=(T^{-}(y_{i}),T^{+}(y_{i}))\). Then by Lemma 2(a), \(T^{+}(T(x))=T^{+}(y_{i})=x_{i}^{+}\) and \(T^{-}(T(x))=T^{-}(y_{i})=x_{i}^{-}\). On the other hand, let \(x\not\in\bigcup_{i}[x_{i}^{-},x_{i}^{+}]\). Then \(T(x)\not\in Y\), because otherwise \(T(x)=y_{j}\) for some \(j\), and then \((x_{j}^{-},x_{j}^{+})=(T^{-}(y_{j}),T^{+}(y_{j}))\) would not be the greatest interval possible, in
contradiction to Lemma 2(a). This means that \(T^{+}\) is continuous at \(T(x)\), and thus (because \(T^{-}\) and \(T^{+}\) are left- and right-continuous versions of one another, see Lemma 1(c)) \(T^{+}(T(x))=T^{-}(T(x))\). By Lemma 1(g).(7), \(T^{+}(T(x))=x=T^{-}(T(x))\).
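The following short sketch (an added illustration reusing the step function from the earlier sketch; all concrete values are assumptions) checks the characterization of \(T\circ T^{+}\) and \(T^{+}\circ T\) from Lemma 4 for a right-continuous example with one jump and one plateau.

```python
import numpy as np

# Right-continuous nondecreasing T: one jump at x = 1 (y_1^- = 1, y_1^+ = 2) and one plateau T = 3 on [2, 3).
def T(x):
    x = np.asarray(x, dtype=float)
    return np.where(x < 1.0, x, np.where(x < 2.0, x + 1.0, np.where(x < 3.0, 3.0, x)))

xs = np.linspace(-2.0, 6.0, 80001)
Txs = T(xs)

def T_plus(y):
    """T^+(y) = inf{x : T(x) > y}, approximated on the grid."""
    mask = Txs > y
    return xs[mask][0] if mask.any() else np.inf

# Inside the jump interval (y_1^-, y_1^+) = (1, 2): T(T^+(y)) = T(x_1) = 2.
print([round(float(T(T_plus(y))), 3) for y in (1.25, 1.5, 1.75)])   # all approximately 2

# Outside all intervals [y_i^-, y_i^+]: T(T^+(y)) gives back y.
print([round(float(T(T_plus(y))), 3) for y in (0.3, 2.4, 3.7)])     # approximately the inputs

# Inside the plateau interval (x_1^-, x_1^+) = (2, 3): T^+(T(x)) = x_1^+ = 3.
print([round(float(T_plus(T(x))), 3) for x in (2.2, 2.5, 2.8)])     # all approximately 3
```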
_Remark 5_.: Note that there is no statement about the edge cases \(T(T^{\pm}(y_{i}^{\pm}))\) and \(T^{\pm}(T(x_{i}^{\pm}))\). In particular, while \(T\circ T^{+}=T\circ T^{-}\) on the two sets considered in the statement of Lemma 4, it is entirely possible that, e.g., \(T(T^{+}(y_{i}^{+}))\neq T(T^{-}(y_{i}^{+}))\). The remaining values depend on the type of continuity of \(T\) at those edge points. Assuming global (left- or right-)continuity of \(T\) allows us to precisely characterize the invertibility interaction between \(T\) and \(T^{\pm}\), and close the gaps in Lemma 4.
**Lemma 5**.: _Let \(T\) be nondecreasing and **continuous from the right**. We denote by \(X=\{x_{i}\}\) the (ordered) list of all discontinuities of \(T\), i.e. \(y_{i}^{+}:=T(x_{i})>T(x_{i}-)=:y_{i}^{-}\) and \(T(x)=T(x-)\) for \(x\not\in X\). We denote by \(Y=\{y_{i}\}\) the (ordered) list of plateau values of \(T\), i.e. for each \(y_{i}\) there exists a proper (maximal in the set of half-open intervals) interval \(I_{i}=[x_{i}^{-},x_{i}^{+})\) such that \(T(x)\equiv y_{i}\) for all \(x\in I_{i}\). Then_
\[T(T^{+}(y)) =\begin{cases}y_{i}^{+},&\text{ for }y\in[y_{i}^{-},y_{i}^{+})\\ y,&\text{ else}\end{cases}\] \[T^{+}(T(x)) =\begin{cases}x_{i}^{+},&\text{ for }x\in[x_{i}^{-},x_{i}^{+})\\ x,&\text{ else}\end{cases}\]
_Let \(T\) be nondecreasing and **continuous from the left**. We denote by \(X=\{x_{i}\}\) the (ordered) list of all discontinuities of \(T\), i.e. \(y_{i}^{+}:=T(x_{i}+)>T(x_{i})=:y_{i}^{-}\) and \(T(x+)=T(x)\) for \(x\not\in X\). We denote by \(Y=\{y_{i}\}\) the (ordered) list of plateau values of \(T\), i.e. for each \(y_{i}\) there exists a proper (maximal in the set of half-open intervals) interval \(I_{i}=(x_{i}^{-},x_{i}^{+}]\) such that \(T(x)\equiv y_{i}\) for all \(x\in I_{i}\). Then_
\[T(T^{-}(y)) =\begin{cases}y_{i}^{-},&\text{ for }y\in(y_{i}^{-},y_{i}^{+}]\\ y,&\text{ else}\end{cases}\] \[T^{-}(T(x)) =\begin{cases}x_{i}^{-},&\text{ for }x\in(x_{i}^{-},x_{i}^{+}]\\ x,&\text{ else}\end{cases}\]
Proof.: This follows directly from the fact that the composition of right-continuous, nondecreasing functions is again right-continuous and nondecreasing (and similarly for left-continuous functions), so we can fill the gaps in our knowledge of, say, \(T\circ T^{+}\) by taking limits from the right, etc.
## 5 Conclusion
This manuscript tries to unify, organize, generalize, and correct some statements about generalized inverses. Since this is just the latest work in a long succession
of notes claiming to do just that, we close with only cautious optimism of having done so successfully.
|
2308.11959
|
Scalable δ-Level Coherent State Synchronization of Multi-Agent
Systems in the Presence of Bounded Disturbances
|
In this paper, we study scalable {\delta}-Level coherent state
synchronization for multi-agent systems (MAS) where the agents are subject to
bounded disturbances/noises. We propose a scale-free framework designed solely
based on the knowledge of agent models and agnostic to the communication graphs
and size of the network. We define the level of coherency for each agent as the
norm of the weighted sum of the disagreement dynamics with its neighbors. The
objective is to restrict the level of coherency of the network to {\delta}
without a-priori information about the disturbances.
|
Donya Nojavanzadeh, Zhenwei Liu, Ali Saberi, Anton A. Stoorvogel
|
2023-08-23T06:58:03Z
|
http://arxiv.org/abs/2308.11959v4
|
Scalable \(\delta\)-Level Coherent State Synchronization of Multi-Agent Systems with Adaptive Protocols and Bounded Disturbances
###### Abstract
In this paper, we study scalable \(\delta\)-level coherent state synchronization for multi-agent systems (MAS) where the agents are subject to bounded disturbances/noises. We propose a scale-free framework designed solely based on the knowledge of agent models and agnostic to the communication graphs and size of the network. We define the level of coherency for each agent as the norm of the weighted sum of the disagreement dynamics with its neighbors. The objective is to restrict the level of coherency of the network to \(\delta\) without _a-priori_ information about the disturbance.
## I Introduction
Synchronization and consensus problems of MAS have become a hot topic in recent years due to a wide range of applications in cooperative control of MAS including robot networks, autonomous vehicles, distributed sensor networks, and energy power systems. The objective of synchronization of MAS is to secure an asymptotic agreement on a common state or output trajectory by local interaction among agents, see [1, 20, 31, 21, 7, 22] and references therein.
State synchronization inherently requires homogeneous networks. When each agent has access to a linear combination of its own state relative to that of the neighboring agents, it is called full-state coupling. If the combination only includes a subset of the states relative to the corresponding states of the neighboring agents, it is called partial-state coupling. In the case of state synchronization based on diffusive full-state coupling, the agent dynamics progress from single- and double-integrator dynamics (e.g. [14, 18, 19]) to more general dynamics (e.g. [28, 30, 22]). State synchronization based on diffusive partial-state coupling has been considered, including static design [10, 11], dynamic design [4, 23, 24, 27, 29], and designs based on localized information exchange with neighbors [2, 22]. Recently, we have introduced a new generation of _scale-free_ protocols for synchronization and almost synchronization of homogeneous and heterogeneous MAS where the agents are subject to external disturbances, input saturation, communication delays, and input delays; see for example [6, 5, 8, 13].
Synchronization and almost synchronization in the presence of external disturbances are studied in the literature, where three classes of disturbances have been considered namely
1. Disturbances and measurement noise with known frequencies
2. Deterministic disturbances with finite power
3. Stochastic disturbances with bounded variance
For disturbances and measurement noise with known frequencies, it is shown in [33] and [34] that exact synchronization is achievable. This is shown in [33] for heterogeneous MAS with minimum-phase and non-introspective agents and networks with time-varying directed communication graphs. Then, [34] extended these results to non-minimum-phase agents utilizing localized information exchange.
For deterministic disturbances with finite power, the notion of \(H_{\infty}\) almost synchronization was introduced by Peymani et al. for homogeneous MAS with non-introspective agents utilizing additional communication exchange [15]. The goal of \(H_{\infty}\) almost synchronization is to reduce the impact of disturbances on the synchronization error to an arbitrary degree of accuracy (expressed in the \(H_{\infty}\) norm). This work was later extended in [35, 32, 16] to heterogeneous MAS with non-introspective agents, without the additional communication, and for networks with time-varying graphs. \(H_{\infty}\) almost synchronization via static protocols is studied in [25] for MAS with passive and passifiable agents. Recently, necessary and sufficient conditions were provided in [26] for the solvability of \(H_{\infty}\) almost synchronization for homogeneous networks with non-introspective agents and without additional communication exchange. Finally, we developed a scale-free framework for \(H_{\infty}\) almost state synchronization for homogeneous networks [9] utilizing suitably designed localized information exchange.
In the case of stochastic disturbances with bounded variance, the concept of stochastic almost synchronization is introduced by [36] where both stochastic disturbance and disturbance with known frequency can be present. The idea of stochastic almost synchronization is to make the stochastic RMS norm of synchronization error arbitrary small in the presence of colored stochastic
disturbances that can be modeled as the output of linear time-invariant systems driven by white noise with unit power spectral intensities. By augmenting this model with the agent model one can essentially assume that the stochastic disturbances are white noise with unit power spectral intensities. In this case, under linear protocols, the stochastic RMS norm of the synchronization error is the \(H_{2}\) norm of the transfer function from the disturbance to the synchronization error. As such, one can formulate stochastic almost synchronization equivalently in a deterministic framework where the objective is to make the \(H_{2}\) norm of the transfer function from disturbance to synchronization error arbitrarily small. This deterministic approach is referred to as the almost \(H_{2}\) synchronization problem, which is equivalent to the stochastic almost synchronization problem. Recent work on the \(H_{2}\) almost synchronization problem is [26], which provided necessary and sufficient conditions for solvability of \(H_{2}\) almost synchronization for homogeneous networks with non-introspective agents and without additional communication exchange. Finally, \(H_{2}\) almost synchronization via static protocols is also studied in [25] for MAS with passive and passifiable agents.
As explained above, \(H_{\infty}\) and \(H_{2}\) almost synchronization of MAS have the following disadvantages.
* _Tuning requirement:_ The protocols designed for \(H_{\infty}\) and \(H_{2}\) almost synchronization of MAS are parameterized by a tuning parameter which must be adjusted to make the \(H_{\infty}\) or \(H_{2}\) norm of the transfer function from the external disturbances to the synchronization error arbitrarily small. It is worth noting that how far this tuning parameter has to be increased (or decreased) depends on knowledge of the communication graph.
* _Dependency on the size of disturbance:_ In \(H_{\infty}\) and \(H_{2}\) almost synchronization, the size of the synchronization error depends on both the size of the transfer function from the disturbance to the synchronization error and the size of the disturbance; in other words, if the size of the disturbances increases, the synchronization error increases as well.
On the other hand, in this paper, we consider scalable \(\delta\)-level coherent state synchronization of homogeneous MAS in the presence of bounded disturbances/noises, where one can reduce the effect of the disturbances to a certain level for any MAS with any communication network and for any size of external disturbances/noises as long as they are bounded. The contributions of this work are threefold.
1. The protocols are designed solely based on the knowledge of the agent models and do not depend on the information about the communication network such as bounds on the spectrum of the associated Laplacian matrix or the number of agents. That is to say, the universal nonlinear protocols are scale-free and can work for any communication network as long as it is connected.
2. We achieve scalable \(\delta\)-level coherent state synchronization for MAS in the presence of bounded disturbances/noises such that for any given \(\delta\), one can restrict the level of coherency of the network to \(\delta\) independently of the size of the disturbances/noises.
3. The proposed protocol is independent of any information about the disturbances, such as their statistics or a known bound on their size, and it achieves \(\delta\)-level coherent state synchronization as long as the disturbances are bounded, which is a reasonable assumption.
Note that we only consider disturbances to the different agents and not measurement noise. For further clarification see Remark 1.
### Preliminaries on graph theory
Given a matrix \(A\in\mathbb{R}^{m\times n}\), \(A^{\top}\) denotes its conjugate transpose. A square matrix \(A\) is said to be Hurwitz stable if all its eigenvalues are in the open left half complex plane. \(A\otimes B\) depicts the Kronecker product between \(A\) and \(B\). \(I_{n}\) denotes the \(n\)-dimensional identity matrix and \(0_{n}\) denotes \(n\times n\) zero matrix; sometimes we drop the subscript if the dimension is clear from the context.
To describe the information flow among the agents we associate a _weighted graph_\(\mathcal{G}\) to the communication network. The weighted graph \(\mathcal{G}\) is defined by a triple \((\mathcal{V},\mathcal{E},\mathcal{A})\) where \(\mathcal{V}=\{1,\ldots,N\}\) is a node set, \(\mathcal{E}\) is a set of pairs of nodes indicating connections among nodes, and \(\mathcal{A}=[a_{ij}]\in\mathbb{R}^{N\times N}\) is the weighted adjacency matrix with non negative elements \(a_{ij}\). Each pair in \(\mathcal{E}\) is called an _edge_, where \(a_{ij}>0\) denotes an edge \((j,i)\in\mathcal{E}\) from node \(j\) to node \(i\) with weight \(a_{ij}\). Moreover, \(a_{ij}=0\) if there is no edge from node \(j\) to node \(i\). We assume there are no self-loops, _i.e._ we have \(a_{ii}=0\). A _path_ from node \(i_{1}\) to \(i_{k}\) is a sequence of nodes \(\{i_{1},\ldots,i_{k}\}\) such that \((i_{j},i_{j+1})\in\mathcal{E}\) for \(j=1,\ldots,k-1\). A _directed tree_ is a subgraph (subset of nodes and edges) in which every node has exactly one parent node except for one node, called the _root_, which has no parent node. A _directed spanning tree_ is a subgraph which is a directed tree containing all the nodes of the original graph. If a directed spanning tree exists, the root has a directed path to every other node in the tree [3].
For a weighted graph \(\mathcal{G}\), the matrix \(L=[\ell_{ij}]\) with
\[\ell_{ij}=\left\{\begin{array}{cc}\sum_{k=1}^{N}a_{ik},&i=j,\\ -a_{ij},&i\neq j,\end{array}\right.\]
is called the _Laplacian matrix_ associated with the graph \(\mathcal{G}\). The Laplacian matrix \(L\) has all its eigenvalues in the closed right half plane and at least one eigenvalue at zero associated with right eigenvector \(\mathbf{1}\)[3]. Moreover, if the graph contains a directed spanning tree, the Laplacian matrix \(L\) has a single eigenvalue at the origin and all other eigenvalues are located in the open right-half complex plane [20].
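As a small illustration (added here; the particular 4-node graph and its weights are assumptions), the Laplacian matrix and the two spectral properties just quoted can be checked directly from a weighted adjacency matrix:

```python
import numpy as np

# Illustrative weighted adjacency matrix of a directed graph on 4 nodes
# (a_ij > 0 encodes an edge from node j to node i); this graph contains
# a directed spanning tree rooted at node 1.
Adj = np.array([[0.0, 0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 2.0, 0.0, 0.5],
                [1.0, 0.0, 1.0, 0.0]])

L = np.diag(Adj.sum(axis=1)) - Adj      # l_ii = sum_k a_ik and l_ij = -a_ij for i != j

print(np.round(np.linalg.eigvals(L), 4))   # a single eigenvalue at 0, the rest in the open right half-plane
print(L @ np.ones(4))                      # L 1 = 0: the all-ones vector is a right eigenvector for eigenvalue 0
```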
## II Problem formulation
Consider a MAS consisting of \(N\) identical linear agents
\[\dot{x}_{i}(t)=Ax_{i}(t)+Bu_{i}(t)+Ew_{i}(t),\quad i=1,\ldots,N \tag{1}\]
where \(x_{i}\in\mathbb{R}^{n}\), \(u_{i}\in\mathbb{R}^{m}\), and \(w_{i}\in\mathbb{R}^{w}\) are state, input, and external disturbance/noise, respectively.
The communication network is such that each agent observes a weighted combination of its own state relative to the state of other agents, _i.e._, for the protocol for agent \(i\), the signal
\[\zeta_{i}(t)=\sum_{j=1}^{N}a_{ij}(x_{i}-x_{j}) \tag{2}\]
is available where \(a_{ij}\geqslant 0\) and \(a_{ii}=0\). The matrix \(\mathcal{A}=[a_{ij}]\) is the weighted adjacency matrix of a directed graph \(\mathcal{G}\) which describe the communication topology of the network where the nodes of network correspond to the agents. We can also express the dynamics in terms of an associated Laplacian matrix \(L=[\ell_{ij}]_{N\times N}\), such that the signal \(\zeta_{i}\) in (2) can be rewritten in the following form
\[\zeta_{i}(t)=\sum_{j=1}^{N}\ell_{ij}x_{j}. \tag{3}\]
The size of \(\zeta_{i}(t)\) can be viewed as the level of coherency at agent \(i\).
We define the set of communication graphs considered in this paper as following.
**Definition 1**: \(\mathbb{G}^{N}\) _denotes the set of directed graphs of \(N\) agents which contains a spanning tree._
We make the following assumption.
**Assumption 1**:
1. \((A,B)\) _is stabilizable._
2. \(\operatorname{Im}E\subset\operatorname{Im}B\)_._
3. _The disturbances_ \(w_{i}\) _are bounded for_ \(i\in\{1,2,\ldots,N\}\)_. In other words, we have that_ \(\|w_{i}\|_{\infty}<\infty\) _for_ \(i\in\{1,2,\ldots,N\}\)_._
4. _The network_ \(\mathcal{G}\) _associated to our MAS has a directed spanning tree, i.e._ \(\mathcal{G}\in\mathbb{G}^{N}\)_._
Next, in the following definition we define the concept of \(\delta\)-level-coherent state synchronization for the MAS with agents (1) and communication information (3).
**Definition 2**: _For any given \(\delta>0\), the MAS (1) and (3) achieves \(\delta\)-level-coherent state synchronization if there exists a \(T>0\) such that_
\[\|\zeta_{i}(t)\|\leq\delta,\]
_for all \(t>T\), for all \(i\in\{1,\ldots,N\}\), and for any bounded disturbance._
We formulate the following problem.
**Problem 1**: _Consider a MAS (1) with associated network communication (3) and a given parameter \(\delta>0\). The **scalable \(\delta\)-level-coherent state synchronization in the presence of bounded external disturbances/noises** is to find, if possible, a fully distributed nonlinear protocol using only knowledge of agent models, i.e., \((A,B)\), and \(\delta\) of the form_
\[\left\{\begin{array}{ll}\dot{x}_{i,c}=f(x_{i,c},\zeta_{i}),\\ u_{i}=g(x_{i,c},\zeta_{i}),\end{array}\right.\quad x_{i,c}\in\mathbb{R}^{n_{c}}, \tag{4}\]
_such that the MAS with the above protocol achieves \(\delta\)-level-coherent state synchronization in the presence of disturbances/noises. In other words, for any graph \(\mathcal{G}\in\mathbb{G}^{N}\) with any size of the network \(N\), and for all bounded disturbances \(w_{i}\), the MAS achieves \(\delta\)-level coherent state synchronization as defined in Definition 2._
## III Protocol design
In this section, we will design an adaptive protocol to achieve the objectives of Problem 1.
Under the assumption that \((A,B)\) is stabilizable, there exists a matrix \(P>0\) satisfying the following algebraic Riccati equation
\[A^{\intercal}P+PA-PBB^{\intercal}P+I=0. \tag{5}\]
We define \(\bar{\delta}=\delta^{2}\lambda_{\min}(P)\). We note that \(\zeta_{i}^{\intercal}P\zeta_{i}\leq\bar{\delta}\) implies \(\|\zeta_{i}(t)\|\leq\delta\). Next for any parameter \(d\) satisfying
\[0<d<\bar{\delta}, \tag{6}\]
we define the following adaptive protocol
\[\dot{\rho}_{i} =\begin{cases}\zeta_{i}^{\intercal}PBB^{\intercal}P\zeta_{i}& \text{if }\zeta_{i}^{\intercal}P\zeta_{i}\geqslant d,\\ 0&\text{if }\zeta_{i}^{\intercal}P\zeta_{i}<d,\end{cases} \tag{7}\] \[u_{i} =-\rho_{i}B^{\intercal}P\zeta_{i}.\]
In classical adaptive controllers without disturbances we would use
\[\dot{\rho}_{i}=\zeta_{i}^{\intercal}PBB^{\intercal}P\zeta_{i}\]
and it can be shown using classical techniques that the scaling parameter \(\rho_{i}\) remains finite if no disturbances are present. However, with persistent disturbances, this classical adaptation would imply that the scaling parameter \(\rho_{i}\) becomes arbitrarily large over time except in some degenerate cases. This paper shows that the introduction of the deadzone in (7) is a crucial modification with very desirable properties: the scaling parameters remain bounded. In this section, we will formally prove the key characteristics of this protocol. Unfortunately, the introduction of the deadzone makes the proofs quite tricky, even though the simulations illustrate very nice behavior.
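To make the construction concrete, the following is a minimal simulation sketch of (5)-(7) (an illustration added here, not the authors' implementation): the double-integrator agents, the directed cycle graph, the sinusoidal disturbances, the value of \(\delta\), the Euler step size, and the use of `scipy.linalg.solve_continuous_are` to solve the Riccati equation (5) are all assumptions made only for this example.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative data: N double-integrator agents on a directed cycle, bounded sinusoidal disturbances.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
b = B.flatten()                                         # here E = B, so Assumption 1 (Im E in Im B) holds
N, n, delta = 5, 2, 0.5

P = solve_continuous_are(A, B, np.eye(n), np.eye(1))    # A'P + PA - P B B' P + I = 0, i.e. equation (5)
delta_bar = delta**2 * np.linalg.eigvalsh(P).min()      # bar-delta = delta^2 * lambda_min(P)
d = 0.5 * delta_bar                                     # any 0 < d < bar-delta works, cf. (6)

Adj = np.zeros((N, N))
for i in range(N):
    Adj[i, (i - 1) % N] = 1.0                           # agent i observes agent i-1: a directed cycle
L = np.diag(Adj.sum(axis=1)) - Adj

rng = np.random.default_rng(1)
x = rng.normal(size=(N, n))                             # initial agent states
rho = np.zeros(N)                                       # adaptive gains rho_i
dt, T_end = 1e-3, 30.0

for k in range(int(T_end / dt)):
    t = k * dt
    w = 0.8 * np.sin(2.0 * t + np.arange(N))            # bounded disturbances w_i(t)
    zeta = L @ x                                        # zeta_i = sum_j l_ij x_j, cf. (3)
    for i in range(N):
        g = b @ P @ zeta[i]                             # B' P zeta_i (the input is scalar here)
        if zeta[i] @ P @ zeta[i] >= d:                  # deadzone adaptation, cf. (7)
            rho[i] += dt * g * g
        u = -rho[i] * g
        x[i] = x[i] + dt * (A @ x[i] + b * (u + w[i]))  # forward-Euler step; E w_i = B w_i here

print("final coherency levels ||zeta_i||:", np.round(np.linalg.norm(L @ x, axis=1), 3))
print("final adaptive gains rho_i:       ", np.round(rho, 3))
```

By Theorem 1 below, the printed coherency levels \(\|\zeta_{i}\|\) eventually stay at or below \(\delta\) while the gains \(\rho_{i}\) remain bounded; the horizon and step size above are purely illustrative.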
We have the following theorem.
**Theorem 1**: _Consider a MAS (1) with associated network communication (3) and a given parameter \(\delta>0\). Then, the **scalable \(\delta\)-level-coherent state synchronization in the presence of bounded external disturbances/noises** as stated in problem 1 is solvable. In particular, protocol (7) with any \(d\) satisfying (6) solves \(\delta\)-level-coherent state synchronization in the presence of disturbances/noises \(w_{i}\), for any graph \(\mathcal{G}\in\mathbb{G}^{N}\)._
_Proof:_ Lemma 1 shows that we achieve scalable \(\delta\)-level-coherent state synchronization whenever all the \(\rho_{i}\) remain bounded. After that, in lemma 2, we will establish that the \(\rho_{i}\) are always bounded.
**Lemma 1**: _Consider MAS (1) with associated network communication (3) and a given parameter \(\delta>0\). Assume Assumption 1 is satisfied. Choose any \(d\) satisfying (6). If all \(\rho_{i}\) remain bounded then there exists a \(T>0\) such that_
\[\zeta_{i}^{\intercal}(t)P\zeta_{i}(t)\leq\bar{\delta}, \tag{8}\]
_for all \(t>T\) and for all \(i=1,\ldots,N\)._
_Proof of Lemma 1:_ The network is assumed to have a directed spanning tree. Without loss of generality we assume that agent \(N\) corresponds to a root agent of such a directed spanning tree. We define
\[x=\begin{pmatrix}x_{1}\\ \vdots\\ x_{N}\end{pmatrix},\quad w=\begin{pmatrix}w_{1}\\ \vdots\\ w_{N}\end{pmatrix},\quad\bar{x}=\begin{pmatrix}x_{1}-x_{N}\\ \vdots\\ x_{N-1}-x_{N}\end{pmatrix},\quad\bar{w}=\begin{pmatrix}w_{1}-w_{N}\\ \vdots\\ w_{N-1}-w_{N}\end{pmatrix},\]
and
\[\rho=\begin{pmatrix}\rho_{1}&0&\cdots&0\\ 0&\rho_{2}&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ 0&\cdots&0&\rho_{N}\end{pmatrix},\quad\rho^{j}=\begin{pmatrix}\rho_{1}&0& \cdots&0\\ 0&\rho_{2}&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ 0&\cdots&0&\rho_{j}\end{pmatrix},\quad L=\begin{pmatrix}\bar{L}&L_{12}\\ L_{21}&L_{22}\end{pmatrix},\]
where \(L_{22}\in\mathbb{R}\). We obtain
\[\dot{x}=(I\otimes A)x-(\rho L\otimes BB^{\intercal}P)x+(I\otimes E)w,\]
which yields
\[\dot{\bar{x}}=(I\otimes A)\bar{x}-(\rho^{N-1}\left(\bar{L}\quad L_{12}\right)\otimes BB^{\intercal}P)x+(\rho_{N}\mathbf{1}_{N-1}\left(L_{21}\quad L_{22}\right)\otimes BB^{\intercal}P)x+(I\otimes E)\bar{w},\]
where \(\mathbf{1}\) indicates a vector with each entry equal to \(1\). Note that \(L\mathbf{1}_{N}=0\) implies
\[\bar{L}\mathbf{1}_{N-1}+L_{12}=0,\quad L_{21}\mathbf{1}_{N-1}+L_{22}=0.\]
We obtain
\[\dot{\bar{x}}=(I\otimes A)\bar{x}-(\rho^{N-1}\bar{L}\otimes BB^{\mathrm{T}}P)\bar{x}+(\rho_{N}\mathbf{1}_{N-1}L_{21}\otimes BB^{\mathrm{T}}P)\bar{x}+(I\otimes E)\bar{w}. \tag{9}\]
By [21, Lemma 2.8], there exists \(w_{1},\ldots,w_{N}\) such that
\[\begin{pmatrix}w_{1}&w_{2}&\cdots&w_{N-1}&w_{N}\end{pmatrix}L=0.\]
with \(w_{N}\neq 0\) because agent \(N\) was assumed to be a root agent. Then it is easy to see that there exist \(a_{1},\ldots,a_{N-1}\) such that
\[\begin{pmatrix}a_{1}&a_{2}&\cdots&a_{N-1}&1\end{pmatrix}L=0. \tag{10}\]
Define
\[a^{N-1}=\begin{pmatrix}a_{1}&a_{2}&\cdots&a_{N-1}\end{pmatrix}.\]
Using the above we can simplify (9) and obtain
\[\dot{\bar{x}}=(I\otimes A)\bar{x}-[(\rho^{N-1}+\rho_{N}\mathbf{1}_{N-1}a^{N-1})\bar{L}\otimes BB^{\mathrm{T}}P]\bar{x}+(I\otimes E)\bar{w}, \tag{11}\]
where we used \(aL=0\) yielding \(L_{21}=-a^{N-1}\bar{L}\). Next, we use
\[\zeta_{i}=(L_{i}\otimes I)x=[\ell_{i}\otimes I]\,\bar{x},\]
where \(L_{i}\) is the \(i\)'th row of \(L\) for \(i=1,\ldots N\). On the other hand, \(\ell_{i}\) is the \(i\)'th row of \(\bar{L}\) for \(i=1,\ldots,N-1\), and \(\ell_{N}=L_{21}\). We obtain
\[\zeta^{N-1}=(\bar{L}\otimes I)\bar{x}. \tag{12}\]
We define
\[\zeta=\begin{pmatrix}\zeta_{1}\\ \vdots\\ \zeta_{N}\end{pmatrix},\quad\zeta^{j}=\begin{pmatrix}\zeta_{1}\\ \vdots\\ \zeta_{j}\end{pmatrix}.\]
By combining (11) and (12), we obtain
\[\dot{\zeta}^{N-1}=(I\otimes A)\zeta^{N-1}-[\bar{L}(\rho^{N-1}+\rho_{N}\mathbf{1}_{N-1}a^{N-1})\otimes BB^{\mathrm{T}}P]\zeta^{N-1}+(\bar{L}\otimes E)\bar{w}. \tag{13}\]
Note that given that \(\bar{w}\) and \(\rho\) are bounded, (13) implies that there exist \(\alpha\) and \(\beta\) such that
\[\|\dot{\zeta}^{N-1}\|\leqslant\alpha\|\zeta^{N-1}\|+\beta. \tag{14}\]
Next, we note that if \((I\otimes B^{\mathrm{ T}}P)\zeta^{N-1}\) is very large, it will remain large for some time due to the bound (14). This would result in a substantial increase in \(\rho_{i}\). The \(\rho_{i}\) are increasing and bounded which yields that the \(\rho_{i}\) converge. Therefore, we find that there exists some \(T_{0}>0\) and \(M_{1}>0\) such that for all \(t>T_{0}\) we have
\[\|(I\otimes B^{\mathrm{ T}}P)\zeta^{N-1}(t)\|\leqslant M_{1}. \tag{15}\]
For any \(i\in\{1,\ldots,N\}\), from (13) we have that
\[\dot{\zeta}_{i}=A\zeta_{i}-[\ell_{i}(\rho^{N-1}+\rho_{N}\mathbf{1}_{N-1}a^{N-1 })\otimes BB^{\mathrm{ T}}P]\zeta^{N-1}+(\ell_{i}\otimes E)\bar{w}.\]
Define
\[V_{i}=\zeta_{i}^{\mathrm{ T}}P\zeta_{i},\]
then we obtain
\[\dot{V}_{i}=-\zeta_{i}^{\mathrm{ T}}\zeta_{i}+\zeta_{i}^{ \mathrm{ T}}PBB^{\mathrm{ T}}P\zeta_{i}-2\zeta_{i}^{ \mathrm{ T}}\left[\ell_{i}(\rho^{N-1}+\rho_{N}\mathbf{1}_{N-1}a^{N-1})\otimes PBB^{ \mathrm{ T}}P\right]\zeta^{N-1}+2\zeta_{i}^{\mathrm{ T}}(\ell_{i}\otimes PE)\bar{w}. \tag{16}\]
Since \(\bar{w}\) is bounded and we have the bound (15), we find that there exists a constant \(R\) such that \(\|s_{i}(t)\|<R\) for all \(t>T_{0}\), where
\[s_{i}=-2\left[\ell_{i}(\rho^{N-1}+\rho_{N}\mathbf{1}_{N-1}a^{N-1})\otimes B^{ \mathrm{ T}}P\right]\zeta^{N-1}+2(\ell_{i}\otimes X)\bar{w}. \tag{17}\]
Note that in (17) we used Assumption 1 implying that there exists a matrix \(X\) such that \(E=BX\). We also define
\[r_{i}=B^{\mathrm{ T}}P\zeta_{i}.\]
We obtain from (16) that
\[\dot{V}_{i}\leqslant-\gamma V_{i}+r_{i}^{\mathrm{ T}}r_{i}+r_{i}^{ \mathrm{ T}}s_{i}, \tag{18}\]
where \(\gamma=\lambda_{\max}(P)^{-1}\). Choose \(\varepsilon>0\) such that
\[R\sqrt{\varepsilon}\leqslant\gamma d,\qquad\varepsilon+\sqrt{\varepsilon}R< \bar{\delta}-d. \tag{19}\]
Convergence of \(\rho_{i}\) implies there exists \(T>T_{0}\) such that
\[\rho_{i}(t_{2})-\rho_{i}(t_{1})<\varepsilon\]
for all \(t_{2}>t_{1}>T\). Next, assume \(V_{i}>d\) in the interval \([t_{1},t_{2}]\) with \(t_{2}>t_{1}>T\). In that case
\[\int_{t_{1}}^{t_{2}}r_{i}(t)^{\mathrm{ T}}r_{i}(t)\,\mathrm{d}t=\rho_{i}(t_{2})-\rho_{i}(t_{1})<\varepsilon\]
which implies
\[V_{i}(t_{2})-V_{i}(t_{1})\leqslant-\gamma d(t_{2}-t_{1})+\varepsilon+\sqrt{ \varepsilon}R\sqrt{t_{2}-t_{1}}, \tag{20}\]
where we used (18) and the fact that
\[\int_{t_{1}}^{t_{2}}r_{i}(t)^{\mathrm{ T}}s_{i}(t)\,\mathrm{d}t \leqslant\left(\int_{t_{1}}^{t_{2}}r_{i}(t)^{\mathrm{ T}}r_{i}(t)\,\mathrm{d}t\right)^{1/2}\left(\int_{t_{1}}^{t_{2}}s_{i}(t)^{ \mathrm{ T}}s_{i}(t)\,\mathrm{d}t\right)^{1/2}.\]
From (20), it is clear that if we increase \(t_{2}\) (and keep \(t_{1}\) fixed) then there will be a moment that \(V_{i}(t_{2})\) becomes negative which yields a contradiction. Therefore, there exists some \(T_{1}>t_{1}>T\) for which \(V_{i}(T_{1})\leqslant d\). Next, we prove that for all \(t>T_{1}\) we will have that \(V_{i}(t)<\bar{\delta}\). We will show this by contradiction. If \(V_{i}(t)\geqslant\bar{\delta}\) for some \(t>T_{1}\) while \(V_{i}(T_{1})<d\), then there must exist \(t_{4}>t_{3}>T_{1}\) such that
\[V_{i}(t_{3})=d\text{ and }V_{i}(t_{4})=\bar{\delta}\text{ while }V_{i}(t) \geqslant d\text{ for }t\in[t_{3},t_{4}].\]
Similar to (20), we get
\[V_{i}(t_{4})-V_{i}(t_{3})\leqslant-\gamma d(t_{4}-t_{3})+\varepsilon+\sqrt{ \varepsilon}R\sqrt{t_{4}-t_{3}}.\]
If \(t_{4}-t_{3}>1\), then this implies
\[\bar{\delta}-d=V_{i}(t_{4})-V_{i}(t_{3})\leqslant-\gamma d(t_{4}-t_{3})+ \varepsilon+\sqrt{\varepsilon}R(t_{4}-t_{3})\leqslant\varepsilon<\bar{\delta }-d\]
using (19) which yields a contradiction. On the other hand, if \(t_{4}-t_{3}\leqslant 1\) then we obtain
\[\bar{\delta}-d=V_{i}(t_{4})-V_{i}(t_{3})\leqslant\varepsilon+\sqrt{ \varepsilon}R\sqrt{t_{4}-t_{3}}\leqslant\varepsilon+\sqrt{\varepsilon}R< \bar{\delta}-d\]
which also yields a contradiction. In this way, we can show for any \(i\in\{1,\ldots,N\}\) that (8) is satisfied for \(t\) sufficiently large.
**Lemma 2**: _Consider MAS (1) with associated network communication (3) and the protocol (7). Assume Assumption 1 is satisfied. In that case, all \(\rho_{i}\) remain bounded._
_Proof of Lemma 2:_ We prove this result by contradiction. Without loss of generality, we assume that \(\rho_{i}\) is unbounded for \(i\leqslant k\) while \(\rho_{i}\) is bounded for \(i>k\). In case all \(\rho_{i}\) are unbounded we choose \(k=N-1\). First we define, using the notation of Lemma 1:
\[a^{k}=(a_{1}\quad\cdots\quad a_{k}),\qquad a_{c}^{k}=(a_{k+1}\quad\cdots\quad a _{N-1}).\]
We have:
\[\bar{L}=\begin{pmatrix}\bar{L}_{11}&\bar{L}_{12}\\ \bar{L}_{21}&\bar{L}_{22}\end{pmatrix},\quad\bar{x}^{k}=\begin{pmatrix}\bar{x}_{1}\\ \vdots\\ \bar{x}_{k}\end{pmatrix},\quad\bar{x}^{k}_{c}=\begin{pmatrix}\bar{x}_{k+1}\\ \vdots\\ \bar{x}_{N-1}\end{pmatrix},\quad\zeta^{k}_{c}=\begin{pmatrix}\zeta_{k+1}\\ \vdots\\ \zeta_{N-1}\end{pmatrix},\]
with \(\bar{L}_{11}\in\mathbb{R}^{k\times k}\). Note that \(L\) has rank \(N-1\) since the network contains a directed spanning tree. Given that we know (10), we find that \(\bar{L}\) is invertible. Using [17, Theorem 4.25] we find that \(\bar{L}_{11}\), as a principal sub-matrix of \(\bar{L}\), is also invertible. We define
\[\zeta^{k}_{c}=\left(\bar{L}_{21}\quad\bar{L}_{22}\right)\begin{pmatrix}\bar{x }^{k}\\ \bar{x}^{k}_{c}\end{pmatrix},\]
and we obtain
\[(I\otimes B^{\mathrm{ T}}P)\zeta^{k}_{c}=\bar{s}+\bar{v}\]
with \(\bar{s}\in L_{\infty}\) and \(\bar{v}\in L_{2}\) where
\[\|\bar{s}\|_{\infty}<K_{1},\quad\|\bar{v}\|_{2}<K_{2}. \tag{21}\]
This is easily achieved by noting that if \(\zeta_{i}^{\mathrm{T}}P\zeta_{i}<d\) then we set \(v_{i}=0\) and \(s_{i}=B^{\mathrm{T}}P\zeta_{i}\), while for \(\zeta_{i}^{\mathrm{T}}P\zeta_{i}\geqslant d\) we set \(v_{i}=B^{\mathrm{T}}P\zeta_{i}\) and \(s_{i}=0\). It is obvious that this construction yields that \(s_{i}\in L_{\infty}\), while the fact that \(\rho_{i}\) is bounded for \(i>k\) implies that \(v_{i}\in L_{2}\) (note that \(\dot{\rho}_{i}=v_{i}^{\mathrm{T}}v_{i}\) in this construction). We define
\[\hat{x}^{k}=\bar{x}^{k}+(\bar{L}_{11}^{-1}\bar{L}_{12}\otimes I)\bar{x}_{c}^{k}.\]
Using (11) we then obtain
\[\dot{\hat{x}}^{k}=(I\otimes A)\hat{x}^{k}-[(\rho^{k}+\rho_{N}\mathbf{1}_{k}a^{k})\bar{L}_{11}\otimes BB^{\mathrm{T}}P]\hat{x}^{k}-[\rho_{N}\mathbf{1}_{N-k-1}a_{c}^{k}\otimes B](\bar{s}+\bar{v})-(\rho_{N}\bar{L}_{11}^{-1}\bar{L}_{12}\mathbf{1}_{N-k-1}a^{k}\bar{L}_{11}\otimes BB^{\mathrm{T}}P)\hat{x}^{k}\\ -[\bar{L}_{11}^{-1}\bar{L}_{12}(\rho_{c}^{k}+\rho_{N}\mathbf{1}_{N-k-1}a_{c}^{k})\otimes B]\left(\bar{s}+\bar{v}\right)+[(I-\bar{L}_{11}^{-1}\bar{L}_{12})\otimes BX]\bar{w} \tag{22}\]
where we used, as before, that \(E=BX\) while
\[\rho^{k}=\begin{pmatrix}\rho_{1}&0&\cdots&0\\ 0&\rho_{2}&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ 0&\cdots&0&\rho_{k}\end{pmatrix},\quad\rho_{c}^{k}=\begin{pmatrix}\rho_{k+1}& 0&\cdots&0\\ 0&\rho_{k+2}&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ 0&\cdots&0&\rho_{N-1}\end{pmatrix}.\]
Define
\[\hat{s}=-[\rho_{N}\mathbf{1}_{N-k-1}a_{c}^{k}\otimes I]\bar{s}- \bar{L}_{11}^{-1}\bar{L}_{12}(\rho_{c}^{k}+\rho_{N}\mathbf{1}_{N-k-1}a_{c}^{k} \otimes I)\bar{s}+[(I\quad-\bar{L}_{11}^{-1}\bar{L}_{12})\otimes X]\bar{w},\] \[\hat{v}=-[\rho_{N}\mathbf{1}_{N-k-1}a_{c}^{k}\otimes I]\bar{v}- \bar{L}_{11}^{-1}\bar{L}_{12}(\rho_{c}^{k}+\rho_{N}\mathbf{1}_{N-k-1}a_{c}^{k} \otimes I)\bar{v},\]
then (21) in combination with the boundedness of \(\rho_{c}^{k}\) and \(\rho_{N}\) implies that there exists \(K_{3}\) and \(K_{4}\) such that
\[\|\hat{s}\|_{\infty}<K_{3},\quad\|\hat{v}\|_{2}<K_{4}. \tag{23}\]
We also define
\[V_{\rho}=-\rho_{N}\bar{L}_{11}^{-1}\bar{L}_{12}\mathbf{1}_{N-k-1}a^{k}\bar{L}_ {11}.\]
Note that there exists a \(K_{5}\) such that
\[\|V_{\rho}\|<K_{5}. \tag{24}\]
We obtain
\[\dot{\hat{x}}^{k}=(I\otimes A)\hat{x}^{k}-[(\rho^{k}+\rho_{N}\mathbf{1}_{k}a^{k})\bar{L}_{11}\otimes BB^{\mathrm{T}}P]\hat{x}^{k}+[I\otimes B](\hat{s}+\hat{v})+(V_{\rho}\otimes BB^{\mathrm{T}}P)\hat{x}^{k}. \tag{25}\]
If all \(\rho_{i}\) are unbounded or if \(\rho_{N}\) is the only one which is bounded, then the above decomposition is not needed and we get immediately from (11) the equation (25) with \(\bar{L}_{11}=\bar{L}\), \(V_{\rho}=0\), \(\hat{s}=(I\otimes X)\bar{w}\) and \(\hat{v}=0\). Obviously, in this case (23) and (24) are also satisfied for an appropriate choice of \(K_{3}\), \(K_{4}\) and \(K_{5}\). We have
\[\hat{x}^{k}=\begin{pmatrix}\hat{x}_{1}\\ \vdots\\ \hat{x}_{k}\end{pmatrix}.\]
For each \(s\in\mathbb{N},\) we define a time-dependent permutation \(p\) of \(\{1\,\ldots,N\}\) such that
\[\rho_{p_{s}(1)}(s)\geqslant\rho_{p_{s}(2)}(s)\geqslant\rho_{p_{s}(3)}(s) \geqslant\cdots\geqslant\rho_{p_{s}(N)}(s),\]
and we choose \(p_{t}=p_{s}\) for \(t\in[s,s+1).\) We define
\[\tilde{x}_{i}(t)=\hat{x}_{p_{t}(i)}(t),\qquad\tilde{\rho}_{i}(t)=\rho_{p_{t}(i)}(t),\qquad\tilde{a}_{i}(t)=a_{p_{t}(i)}.\]
We have that (25) implies
\[\dot{\tilde{x}}^{k}=(I\otimes A)\tilde{x}^{k}-[(\tilde{\rho}^{k}+\tilde{\rho}_{N}\mathbf{1}_{k}\tilde{a}^{k})\tilde{L}_{11}\otimes BB^{\mathrm{T}}P]\tilde{x}^{k}+[I\otimes B](\tilde{s}+\tilde{v})+(\tilde{V}_{\rho}\otimes BB^{\mathrm{T}}P)\tilde{x}^{k}, \tag{26}\]
where \(\tilde{s}\), \(\tilde{\zeta}\), \(\tilde{v}\), \(\tilde{L}_{11}\), and \(\tilde{V}_{\rho}\) are also obtained by applying the permutation introduced above. A permutation clearly does not affect the bounds we obtained in (23) and (24), and we obtain
\[\|\tilde{V}_{\rho}\|<K_{5}, \tag{27}\]
\[\|\tilde{s}\|_{\infty}<K_{3},\qquad\|\tilde{v}\|_{2}<K_{4}. \tag{28}\]
For any \(j<k\) we decompose
\[\tilde{x}_{I}^{j}=\begin{pmatrix}\tilde{x}_{1}\\ \vdots\\ \tilde{x}_{j}\end{pmatrix},\quad\tilde{x}_{II}^{j}=\begin{pmatrix}\tilde{x}_{j+ 1}\\ \vdots\\ \tilde{x}_{k}\end{pmatrix},\quad\tilde{L}_{11}=\begin{pmatrix}\tilde{L}_{11}^ {j}&\tilde{L}_{12}^{j}\\ L_{21}^{j}&L_{22}^{j}\end{pmatrix},\quad\tilde{V}_{\rho}=\begin{pmatrix} \tilde{V}_{\rho}^{I}\\ \tilde{V}_{\rho}^{II},\end{pmatrix}\]
with \(\tilde{L}_{11}^{j}\in\mathbb{R}^{j\times j},\tilde{V}_{\rho}^{I}\in\mathbb{R} ^{j\times k}\),
\[\tilde{a}^{j}=\begin{pmatrix}\tilde{a}_{1}&\cdots&\tilde{a}_{j}\end{pmatrix}, \quad\tilde{a}_{c}^{j}=\begin{pmatrix}\tilde{a}_{j+1}&\cdots&\tilde{a}_{k} \end{pmatrix},\quad\tilde{s}=\begin{pmatrix}\tilde{s}^{j}\\ \tilde{s}_{c}^{j}\end{pmatrix},\quad\tilde{v}=\begin{pmatrix}\tilde{v}^{j}\\ \tilde{v}_{c}^{j}\end{pmatrix},\]
with \(\tilde{s}^{j}\in\mathbb{R}^{nj}\), \(\tilde{v}^{j}\in\mathbb{R}^{nj}\) and
\[\tilde{x}^{j}=\tilde{x}_{I}^{j}+(\tilde{L}_{11}^{j})^{-1}\tilde{L}_{12}^{j} \tilde{x}_{II}^{j}, \tag{29}\]
for \(j<k\), while \(\tilde{x}^{k}=(\tilde{x}_{1}^{\mathrm{T}},\ldots,\tilde{x}_{k}^{\mathrm{T}})^{\mathrm{T}}\) is simply the full permuted vector. We will show that
\[\tilde{\rho}_{j}^{2}\tilde{V}_{j} \tag{30}\]
is bounded for \(j=1,\ldots,k\) where
\[\tilde{V}_{j}=(\tilde{x}^{j})^{\mathrm{T}}\left[\tilde{H}^{j}\left(\tilde{\rho}^{j}+\tilde{\rho}_{N}\mathbf{1}_{j}\tilde{a}^{j}\right)^{-1}\otimes P\right]\tilde{x}^{j}, \tag{31}\]
while
\[\tilde{\rho}^{j}=\begin{pmatrix}\tilde{\rho}_{1}&0&\cdots&0\\ 0&\tilde{\rho}_{2}&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ 0&\cdots&0&\tilde{\rho}_{j}\end{pmatrix},\quad\tilde{H}^{j}=\begin{pmatrix} \tilde{a}_{1}&0&\cdots&0\\ 0&\tilde{a}_{2}&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ 0&\cdots&0&\tilde{a}_{j}\end{pmatrix}.\]
Note that
\[\tilde{H}^{j}\left(\tilde{\rho}^{j}+\tilde{\rho}^{N}\mathbf{1}_{j}\tilde{a}^ {j}\right)^{-1}=\left(\tilde{\rho}^{j}(\tilde{H}^{j})^{-1}+\tilde{\rho}_{N} \mathbf{1}_{j}\mathbf{1}_{j}^{\mathrm{ T}}\right)^{-1}\]
is a symmetric and positive definite matrix for \(j=1,\ldots,k\).
It is not hard to verify that boundedness of (30) guarantees that \(\tilde{x}_{j}\) is arbitrarily small for large \(t\) and for all \(j=1,\ldots,k\), since the \(\tilde{\rho}_{j}\) are increasing and converge to infinity. On the other hand, \(\tilde{x}_{j}\) being arbitrarily small for large \(t\) and for all \(j=1,\ldots,k\) would imply that the \(\rho_{j}\) are all constant for large \(t\), which contradicts our premise that \(\rho_{j}\) is unbounded for \(j=1,\ldots,k\).
We first consider the case that \(j=k\). Note that [17, Theorem 4.31] implies that
\[\tilde{H}^{k}\tilde{L}_{11}+\tilde{L}_{11}^{\mathrm{ T}}\tilde{H}^{k}>2\mu I, \tag{32}\]
for some \(\mu>0\). We get from (26) that
\[\begin{split}\dot{\tilde{V}}_{k}=(\tilde{x}^{k})^{ \mathrm{ T}}\left[\tilde{H}^{k}\left(\tilde{\rho}^{k}+\tilde{\rho}^{N}\mathbf{1}_{k} \tilde{a}^{k}\right)^{-1}\otimes(-I+PBB^{\mathrm{ T}}P)\right]\tilde{x}^{k}-(\tilde{x}^{k})^{\mathrm{ T}}\left[(\tilde{H}^{k}\tilde{L}_{11}+\tilde{L}_{11}^{\mathrm{ T}}\tilde{H}^{k})\otimes PBB^{\mathrm{ T}}P\right]\tilde{x}^{k}\\ +2(\tilde{x}^{k})^{\mathrm{ T}}\left[\tilde{H}^{k}\left(\tilde{\rho}^{k}+ \tilde{\rho}^{N}\mathbf{1}_{k}a^{k}\right)^{-1}\otimes PB\right](\tilde{s}+ \hat{v}).\end{split}\]
We get for \(t>T\) that
\[\dot{\tilde{V}}_{k}\leqslant-\tilde{V}_{k}-\mu(\tilde{x}^{k})^{ \mathrm{ T}}\left[I\otimes PBB^{\mathrm{ T}}P\right]\tilde{x}^{k}+2(\tilde{x}^{k})^{\mathrm{ T}}\left[\tilde{H}^{k}\left(\tilde{\rho}^{k}+\tilde{\rho}^{N}\mathbf{1}_{k} \tilde{a}^{k}\right)^{-1}\otimes PB\right](\tilde{s}+\hat{v}),\]
where \(T\) is such that
\[\tilde{H}^{k}\left(\tilde{\rho}^{k}+\tilde{\rho}^{N}\mathbf{1}_{k}\tilde{a}^{ k}\right)^{-1}<\mu I,\]
for \(t>T\) which is possible since we have \(\tilde{\rho}^{j}\rightarrow\infty\) for \(j=1,\ldots,k\). Note that there exists some fixed \(\alpha\) such that
\[\|\tilde{\rho}_{k}\tilde{s}\|_{2}\leqslant\alpha,\qquad\|\tilde{\rho}_{k} \tilde{v}\|_{\infty}\leqslant\alpha, \tag{33}\]
with
\[\tilde{s} =\sqrt{\frac{2}{\mu}}\left[\tilde{H}^{k}\left(\tilde{\rho}^{k}+ \tilde{\rho}^{N}\mathbf{1}_{k}\tilde{a}^{k}\right)^{-1}\otimes I\right] \tilde{s},\] \[\tilde{v} =\sqrt{\frac{2}{\mu}}\left[\tilde{H}^{k}\left(\tilde{\rho}^{k}+ \tilde{\rho}^{N}\mathbf{1}_{k}\tilde{a}^{k}\right)^{-1}\otimes I\right] \tilde{v}.\]
We get
\[\dot{\tilde{V}}_{k}\leqslant-\tilde{V}_{k}-\frac{\mu}{2}(\tilde{x}^{k})^{\mathrm{T}}\left[I\otimes PBB^{\mathrm{T}}P\right]\tilde{x}^{k}+\tilde{s}^{\mathrm{T}}\tilde{s}+\tilde{v}^{\mathrm{T}}\tilde{v}, \tag{34}\]
as well as
\[\dot{\tilde{V}}_{k}\leqslant-\tilde{V}_{k}+\tilde{s}^{\mathrm{T}}\tilde{s}+\tilde{v}^{\mathrm{T}}\tilde{v}. \tag{35}\]
We already know that \(\tilde{\rho}_{k}\) increases to infinity, and (35) together with our bounds (33) clearly implies that \(\tilde{V}_{k}\) is bounded. Note that there exists \(\gamma>0\) such that
\[\tfrac{\mu}{2}(\tilde{x}^{k})^{\tau}\left[I\otimes PBB^{\tau}P\right]\tilde{x }^{k}\geqslant\gamma(\tilde{x}^{k})^{\tau}\left[L_{11}^{2}\otimes PBB^{\tau}P \right]\tilde{x}^{k}\geqslant\gamma\sum_{i=1}^{k}\dot{\tilde{\rho}}_{j}, \tag{36}\]
and hence (34) implies
\[\dot{\tilde{V}}_{k}\leqslant-\tilde{V}_{k}-\gamma\sum_{i=1}^{k}\dot{\tilde{\rho}}_{i}+\tilde{s}^{\mathrm{T}}\tilde{s}+\tilde{v}^{\mathrm{T}}\tilde{v}\leqslant-\tilde{V}_{k}-\gamma\dot{\tilde{\rho}}_{k}+\tilde{s}^{\mathrm{T}}\tilde{s}+\tilde{v}^{\mathrm{T}}\tilde{v}, \tag{37}\]
as well as
\[\dot{\tilde{V}}_{k}\leqslant-\gamma(\tilde{x}^{k})^{\mathrm{T}}\left[\tilde{L}_{11}^{2}\otimes PBB^{\mathrm{T}}P\right]\tilde{x}^{k}+\tilde{s}^{\mathrm{T}}\tilde{s}+\tilde{v}^{\mathrm{T}}\tilde{v}. \tag{38}\]
Inequality (37) yields that for \(t>T_{1}\) we have
\[\left[\tilde{\rho}_{k}^{2}\tilde{V}_{k}\right]^{\prime} \leqslant 2\tilde{\rho}_{k}\dot{\tilde{\rho}}_{k}\tilde{V}_{k}- \tilde{\rho}_{k}^{2}\tilde{V}_{k}-\gamma\tilde{\rho}_{k}^{2}\dot{\tilde{\rho} }_{k}+\tilde{\rho}_{k}^{2}(\tilde{s}^{\tau}\tilde{s}+\tilde{v}^{\tau}\tilde{v})\] \[\leqslant-\tilde{\rho}_{k}^{2}\tilde{V}_{k}+\tilde{\rho}_{k}^{2} (\tilde{s}^{\tau}\tilde{s}+\tilde{v}^{\tau}\tilde{v}), \tag{39}\]
where we choose \(T_{1}>T\) such that \(2\tilde{V}_{k}\leqslant\gamma\tilde{\rho}_{k}\) for \(t>T_{1}\) which is obviously possible since \(\tilde{\rho}_{k}\) increases to infinity while \(\tilde{V}_{k}\), as argued before, is bounded. Then, using (33) and (39) we find
\[\tilde{\rho}_{k}^{2}(s+\sigma)\tilde{V}_{k}(s+\sigma)<e^{-\sigma}\tilde{\rho} _{k}^{2}(s)\tilde{V}_{k}(s)+2\alpha^{2},\]
for all \(\sigma\in(0,1]\) and any \(s\in\mathbb{N}\) with \(s>T_{1}\). This by itself does not yield that \(\tilde{\rho}_{k}^{2}\tilde{V}_{k}\) is bounded because we have potential discontinuities for \(s\in\mathbb{N}\) because of the reordering process we introduced. Hence
\[\tilde{\rho}_{k}^{2}(s^{+})\tilde{V}_{k}(s^{+})\text{ and }\tilde{\rho}_{k}^{2}(s^{-}) \tilde{V}_{k}(s^{-})\]
might be different and, strictly speaking, we have obtained
\[\tilde{\rho}_{k}^{2}(s+\sigma)\tilde{V}_{k}(s+\sigma)<e^{-\sigma}\tilde{\rho} _{k}^{2}(s^{+})\tilde{V}_{k}(s^{+})+2\alpha^{2}, \tag{40}\]
for all \(\sigma\in(0,1]\) and any \(s\in\mathbb{N}\) with \(s>T_{1}\). Note that \(\tilde{V}_{k}\) bounded and (33) combined with (37) implies that there exists some \(\eta>0\) such that
\[\tilde{\rho}_{i}(s+1)-\tilde{\rho}_{i}(s)<\eta, \tag{41}\]
for \(i=1,\dots,k\) and \(s\) sufficiently large.
We also need a bound like (41) for \(i=N\). Clearly, if \(\tilde{\rho}_{N}\) is bounded we trivially obtain that for any \(\varepsilon_{1}>0\) we have
\[\tilde{\rho}_{N}(s+1)-\tilde{\rho}_{N}(s)<\varepsilon_{1}. \tag{42}\]
It is then not hard to prove that \(\tilde{\rho}_{i}\) unbounded for \(i=1,\dots,k\) together with the bounds (41) and (42) implies that for any \(\varepsilon\) there exists \(T_{2}>T_{1}\) such that
\[\tilde{\rho}_{k}^{2}(s^{+})\tilde{V}_{k}(s^{+})\leqslant(1+\varepsilon)\tilde{\rho}_{k}^{2}(s^{-})\tilde{V}_{k}(s^{-}), \tag{43}\]
for \(s>T_{2}\). On the other hand, when \(\tilde{\rho}_{N}\) is unbounded we note that (41) is true for \(i=1,\dots,N-1\) since in this case \(k=N-1\). But then
\[\tilde{\zeta}_{N}=-(\tilde{a}^{N-1}\otimes I)\tilde{\zeta}^{N-1}.\]
Hence similarly as in (36) we obtain there exists \(\gamma_{1}\) such that
\[\tfrac{\mu}{2}(\tilde{x}^{k})^{\tau}\left[I\otimes PBB^{\tau}P\right]\tilde{x }^{k}\geqslant\gamma_{1}\dot{\tilde{\rho}}_{N}.\]
Then, using the same arguments as before, we can conclude (41) is satisfied for \(i=N\) as well provided \(\eta\) is chosen appropriately. But then we note that (41) implies
\[\tilde{\rho}_{i}(s^{+})-\tilde{\rho}_{i}(s^{-})<\eta,\]
for \(i=1,\dots,k\) and \(i=N\). Combined with the fact that \(\tilde{\rho}_{i}\) is unbounded we also obtain (43) for any \(\varepsilon>0\). Combining (43) with (40) implies
\[\tilde{\rho}_{k}^{2}(t)\tilde{V}_{k}(t) \tag{44}\]
is bounded if we choose \((1+\varepsilon)e^{-1}<1\).
In addition, (44) and (33) combined with (38) implies that
\[\gamma\int_{s}^{s+1}\tilde{\rho}_{k}^{2}(\tilde{x}^{k})^{\mathrm{T}}\left[\tilde{L}_{11}^{2}\otimes PBB^{\mathrm{T}}P\right]\tilde{x}^{k}\,\mathrm{d}t\leqslant\tilde{\rho}_{k}^{2}(s)\tilde{V}_{k}(s)+\int_{s}^{s+1}\tilde{\rho}_{k}^{2}(\tilde{s}^{\mathrm{T}}\tilde{s}+\tilde{v}^{\mathrm{T}}\tilde{v})\,\mathrm{d}t\leqslant 6\alpha^{2}.\]
Note that the above in particular implies that
\[\int_{s}^{s+1}\tilde{\rho}_{k}^{2}(\bar{\zeta}^{k})^{\tau}(I\otimes PBB^{\tau} P)\bar{\zeta}^{k}\,\mathrm{d}t\]
is bounded. We define
\[\bar{\zeta}^{j}=\begin{pmatrix}\tilde{\zeta}_{1}\\ \vdots\\ \tilde{\zeta}_{j}\end{pmatrix},\qquad\bar{\zeta}_{c}^{j}=\begin{pmatrix}\tilde {\zeta}_{j+1}\\ \vdots\\ \tilde{\zeta}_{k}\end{pmatrix}.\]
Assume for \(i=j\) we have either
\[\tilde{\rho}_{i}^{2}\tilde{V}_{i} \tag{45}\]
is unbounded or
\[\int_{s}^{s+1}\tilde{\rho}_{i}^{2}(\bar{\zeta}^{i})^{\mathrm{T}}(I\otimes PBB^{\mathrm{T}}P)\bar{\zeta}^{i}\,\mathrm{d}t \tag{46}\]
is unbounded while, for \(j<i\leqslant k\), both (45) and (46) are bounded. We will show this yields a contradiction. Note that in the above we already established (45) and (46) are bounded for \(i=k\).
Using (26) and (29), we obtain
\[\hat{\bar{s}}^{j}=(I\otimes A)\bar{x}^{j}-\left[(\tilde{\rho}^{j} +\rho_{N}\mathbf{1}_{j}\bar{a}^{j})L_{11}^{j}\otimes BB^{\tau}P\right]\bar{x} ^{j}+(I\otimes B)(\bar{s}+\bar{v})+(\tilde{V}_{\rho}\otimes BB^{\tau}P)\bar{ \zeta}^{k}+\tilde{\rho}_{N}(W_{j}^{1}\otimes BB^{\tau}P)\bar{\zeta}^{j}_{c}+ \tilde{\rho}_{N}(W_{j}^{2}\otimes BB^{\tau}P)\bar{\zeta}^{j}\] \[+(W_{j}^{3}\tilde{\rho}_{c}^{j}\otimes BB^{\tau}P)\bar{\zeta}^{j} _{c},\]
where
\[\tilde{s} =\tilde{s}_{I}+(\bar{L}_{11}^{j})^{-1}\bar{L}_{12}^{j}s_{II},\] \[\tilde{v} =\tilde{v}_{I}+(\bar{L}_{11}^{j})^{-1}\bar{L}_{12}^{j}s_{II},\] \[\tilde{V}_{\rho} =\left[\tilde{V}_{\rho}^{I}+(\bar{L}_{11}^{j})^{-1}\bar{L}_{12}^{ j}\tilde{V}_{\rho}^{II}\right](\bar{L}_{11}^{j})^{-1},\]
and
\[W_{j}^{1} =(\mathbf{1}_{k-j}+(\bar{L}_{11}^{j})^{-1}\bar{L}_{12}^{j}\mathbf{ 1}_{k-j})\bar{a}_{c}^{j},\] \[W_{j}^{2} =(\bar{L}_{11}^{j})^{-1}\bar{L}_{12}^{j}\mathbf{1}_{k-j}\bar{a}^{ j},\] \[W_{j}^{3} =(\bar{L}_{11}^{j})^{-1}\bar{L}_{12}^{j}.\]
Note that \((\tilde{V}_{\rho}\otimes B^{\tau}P)\bar{\zeta}^{k}\) has bounded energy on the interval \([s,s+1]\) by (46) for \(i=k\) together with the fact that \(\tilde{V}_{\rho}\) is bounded. We also find \(\tilde{\rho}_{N}(I\otimes B^{\tau}P)\bar{\zeta}^{k}\) has bounded energy on the interval \([s,s+1]\) by (46) for \(i=k\) together with \(\tilde{\rho}_{N}\leqslant\tilde{\rho}_{k}\). Finally, (46) implies \(\tilde{\rho}_{i}B^{\tau}P\bar{\zeta}^{j}_{i}\) has bounded energy for \(i=j+1,\ldots,k\) which in turn yields \((\tilde{\rho}_{c}^{j}\otimes B^{\tau}P)\bar{\zeta}^{j}_{c}\) has bounded energy. Clearly, \(\tilde{s}\) is bounded while \(\tilde{v}\) has bounded energy by (28). This yields that we have
\[\dot{\tilde{x}}^{j}=(I\otimes A)\tilde{x}^{j}-\left[(\tilde{\rho}^{j}+\tilde{\rho}_{N}\mathbf{1}_{j}\tilde{a}^{j})\tilde{L}_{11}^{j}\otimes BB^{\mathrm{T}}P\right]\tilde{x}^{j}+(I\otimes B)(\bar{s}^{j}+\bar{v}^{j}), \tag{47}\]
where \(\bar{s}^{j}=\tilde{s}\) and
\[\bar{v}^{j}=\bar{v}+(\tilde{V}_{\rho}\otimes B^{\tau}P)\bar{\zeta}^{k}+\tilde{ \rho}_{N}(W_{j}^{1}\otimes B^{\tau}P)\bar{\zeta}^{j}_{c}+\tilde{\rho}_{N}(W_{j }^{2}\otimes B^{\tau}P)\bar{\zeta}^{j}+(W_{j}^{3}\tilde{\rho}_{c}^{j}\otimes B ^{\tau}P)\bar{\zeta}^{j}_{c}\]
and that there exist constants such that
\[\|\bar{s}^{j}\|_{\infty}<\tilde{K}_{3},\quad\int_{s}^{s+1}(\bar{v}^{j})^{\tau} \bar{v}^{j}\mathrm{d}t<\tilde{K}_{3}. \tag{48}\]
We obtain
\[\hat{\bar{V}}_{j}=(\bar{x}^{j})^{\tau}\left[\bar{H}^{j}(\tilde{\rho}^{j}+\tilde {\rho}_{N}\mathbf{1}_{j}\bar{a}^{j})^{-1}\otimes(-I+PBB^{\tau}P)\right]\bar{x} ^{j}-(\bar{x}^{j})^{\tau}\left[(\tilde{H}^{j}L_{11}^{j}+L_{11}^{j}\bar{H}^{j}) \otimes PBB^{\tau}P\right]\bar{x}^{j}+2\left[\bar{H}^{j}(\tilde{\rho}^{j}+ \tilde{\rho}_{N}\mathbf{1}_{j}\bar{a}^{j})^{-1}\otimes PB\right](\bar{s}^{j}+ \bar{v}^{j}),\]
using (31) and (47). (32) implies
\[\bar{H}^{j}L_{11}^{j}+L_{11}^{j}\bar{H}^{j}>\mu I,\]
for some \(\mu>0\). Moreover, we have
\[\tilde{H}^{j}(\tilde{\rho}^{j}+\tilde{\rho}_{N}\mathbf{1}_{j}\tilde{a}^{j})^{-1} \leqslant\frac{\gamma}{\tilde{\rho}_{j}},\]
for some \(\gamma>0\). This yields that there exists \(\eta>0\) such that
\[\dot{\tilde{V}}_{j}\leqslant-\tilde{V}_{j}-\tfrac{\mu}{2}(\tilde{x}^{j})^{\mathrm{T}}\,[I\otimes PBB^{\mathrm{T}}P]\,\tilde{x}^{j}+\tfrac{\eta}{\tilde{\rho}_{j}^{2}}\big[(\bar{s}^{j})^{\mathrm{T}}\bar{s}^{j}+(\bar{v}^{j})^{\mathrm{T}}\bar{v}^{j}\big].\]
Next, (48) and (49) imply
\[\tilde{\rho}_{j}^{2}(s+\sigma)\tilde{V}_{j}(s+\sigma)<e^{-\sigma}\tilde{\rho }_{j}^{2}(s^{+})\tilde{V}_{j}(s^{+})+2\tilde{K}_{3}^{2}\eta, \tag{49}\]
for all \(\sigma\in(0,1]\), and any \(s\in\mathbb{N}\) with \(s>T_{1}\). This by itself does not imply that \(\tilde{\rho}_{j}^{2}(s)\tilde{V}_{j}(s)\) is bounded because at time \(s\in\mathbb{N}\) there might be a discontinuity due to the reordering we performed. However, as we did for the case \(i=k\), it is easy to see that there exists \(A_{0}>0\) such that a discontinuity can only occur when
\[\tilde{\rho}_{j}(s)-\tilde{\rho}_{j+1}(s)<A_{0},\]
for \(s\) sufficiently large. But then there exists some \(A>0\) such that
\[\tilde{\rho}_{j}^{2}(s)\tilde{V}_{j}(s)<(\tilde{\rho}_{j+1}(s)+A_{0})^{2} \tilde{V}_{j}(s)<(\tilde{\rho}_{j+1}(s)+A_{0})^{2}\tilde{V}_{j+1}(s)<\frac{( \tilde{\rho}_{j+1}(s)+A_{0})^{2}}{\tilde{\rho}_{j+1}^{2}(s)}\tilde{\rho}_{j+1 }^{2}(s)\tilde{V}_{j+1}(s)<A,\]
for large \(s\) since we already established that \(\tilde{\rho}_{j+1}^{2}\tilde{V}_{j+1}(s)\) is bounded while \(\tilde{\rho}_{j+1}\) is increasing to infinity. In that case (49) shows
\[\tilde{\rho}_{j}^{2}(s+\sigma)\tilde{V}_{j}(s+\sigma)\]
is bounded as well. On the other hand, if \(\tilde{\rho}_{j}^{2}(s)\tilde{V}_{j}(s)\) is larger than \(A\), then we know discontinuities do not arise and hence (49) shows that \(\tilde{\rho}_{j}^{2}\tilde{V}_{j}\) remains bounded. It remains to show that (46) is bounded. This follows immediately from (49) in combination with (48) and the boundedness of \(\tilde{\rho}_{j}^{2}\tilde{V}_{j}\).
In this way, we recursively established that (45) is bounded for \(i=1,\ldots,k\) and, as noted before, this implies that \(\rho_{i}(t)\) is constant for large \(t\) and \(i=1,\ldots,k\), which contradicts our assumption that some of the \(\rho_{i}\) are unbounded. This completes the proof of the lemma.
**Remark 1**: _It is easy to show that measurement noise that converges to zero asymptotically will not affect synchronization since, eventually, it will be arbitrarily small. However, if we have arbitrary measurement noise that is bounded, for instance, \(|v_{i}(t)|<V\), then clearly one can never guarantee synchronization with an accuracy of less than \(2V\). Bounded measurement noise (with upper bound \(V\)) in our argument will impose a lower bound on \(d\) (and hence on \(\delta\)) to prevent the scaling parameter from diverging in the worst case. This will create completely different arguments and results. In particular, our aim was to find protocols that do not depend on a specific upper bound for the disturbances. With measurement noise, both the choice of \(d\) in the protocol and the accuracy of our synchronization will depend on the upper bound \(V\)._
## IV **Numerical examples**
In this section, we will show that the proposed protocol achieves scalable \(\delta\)-level coherent state synchronization. We study the effectiveness of our proposed protocol as it is applied to systems with different sizes, different communication graphs, different noise patterns, and different \(\delta\) values.
We consider agent models as
\[\dot{x}_{i}(t)=\begin{pmatrix}0&1&0\\ 0&0&1\\ 0&0&0\end{pmatrix}x_{i}(t)+\begin{pmatrix}0\\ 0\\ 1\end{pmatrix}u_{i}(t)+\begin{pmatrix}0\\ 0\\ 1\end{pmatrix}w_{i}(t), \tag{50}\]
for \(i=1,\ldots,N\). We utilize the adaptive protocol (7) as follows
\[\dot{\rho}_{i} =\begin{cases}\zeta_{i}^{\mathrm{T}}\begin{pmatrix}1&2.41&2.41\\ 2.41&5.82&5.82\\ 2.41&5.82&5.82\end{pmatrix}\zeta_{i}&\text{if }\zeta_{i}^{\mathrm{T}}P\zeta_{i}\geqslant d,\\ 0&\text{if }\zeta_{i}^{\mathrm{T}}P\zeta_{i}<d,\end{cases} \tag{51}\] \[u_{i} =-\rho_{i}\left(1\ 2.41\ 2.41\right)\zeta_{i},\]
where \(P\) is the solution of the algebraic Riccati equation (5) and equals
\[P=\begin{pmatrix}2.41&2.41&1\\ 2.41&4.82&2.41\\ 1&2.41&2.41\end{pmatrix}.\]
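As a sanity check, \(P\) and the protocol (51) can be reproduced with a few lines of Python. The sketch below is only an illustration, not the code used for our simulations: it solves the continuous-time algebraic Riccati equation for the agent model (50), assuming unit weights \(Q=I\) and \(R=1\) (an assumption of this sketch, which reproduces the matrix \(P\) above up to rounding), and evaluates the adaptation law (51) for a given relative state \(\zeta_{i}\).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Agent model (50): triple integrator with the input entering the last state.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
B = np.array([[0.], [0.], [1.]])

# Solve A^T P + P A - P B B^T P + Q = 0.  Q = I and R = 1 are assumptions made
# here; they reproduce the matrix P printed above (up to rounding).
P = solve_continuous_are(A, B, np.eye(3), np.array([[1.]]))
print(np.round(P, 2))      # ~ [[2.41 2.41 1.  ], [2.41 4.82 2.41], [1.   2.41 2.41]]

K = B.T @ P                # feedback gain (1  2.41  2.41), cf. u_i in (51)
PBBP = P @ B @ B.T @ P     # ~ the 3x3 matrix in the rho_i update of (51)

def protocol(zeta_i, rho_i, d=0.5):
    """One evaluation of the adaptive protocol (51) for agent i.
    zeta_i : relative (network) state of agent i,  rho_i : current adaptive gain."""
    u_i = -rho_i * float(K @ zeta_i)
    rho_dot_i = float(zeta_i @ PBBP @ zeta_i) if float(zeta_i @ P @ zeta_i) >= d else 0.0
    return u_i, rho_dot_i

u, rho_dot = protocol(np.array([0.3, -0.1, 0.2]), rho_i=1.0)   # example call
```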
### **Scalability**
In this section, we consider MAS with agent models (50) and disturbances
\[w_{i}(t)=0.1\sin(0.1it+0.01t^{2}),\quad i=1,\ldots,N. \tag{52}\]
In the following, to illustrate the scalability of the proposed protocols, we study three MAS with 5, 25, and 121 agents communicating over the directed and undirected Vicsek fractal graphs shown in Figure 1. In all examples in the paper, the weights of the edges of the communication graphs are taken equal to 1. In both of the following examples, we consider \(d=0.5\).
#### Iv-A1 Directed graphs
First, we consider directed Vicsek fractal graphs. The simulation results presented in Figures 3-5 clearly show the scalability of our one-shot-designed protocol. In other words, the scalable adaptive protocol achieves \(\delta\)-level coherent state synchronization independent of the size of the network.
#### Iv-A2 Undirected graphs
Second, we consider undirected Vicsek fractal graphs. The algebraic connectivity of the undirected Vicsek fractal graphs is presented in Table I. It can easily be seen that as the size of the graph increases, the algebraic connectivity of the associated Laplacian matrix decreases. The simulation results presented in Figures 6-8 show that the one-shot-designed protocol (51) achieves \(\delta\)-level coherent state synchronization regardless of the number of agents and the algebraic connectivity of the associated Laplacian matrices of the graphs.
### **Effectiveness with different types of communication graphs**
In this example, we illustrate that the protocol designed for our MAS also achieves synchronization for different types of communication graphs. In the previous section, we considered the MAS (50) with \(N=121\) where the agents are subject to noise (52). In this example, the agents communicate through the directed circulant graph shown in Figure 2. Figure 9 shows the effectiveness of our designed protocol (51) for the MAS with a directed circulant communication graph.
Fig. 1: Vicsek fractal graphs
Fig. 2: Circulant graph
### **Robustness to different noise patterns**
In this example, we analyze the robustness of our protocols to different noise patterns. As before, we consider the MAS with \(N=121\) from Section IV-A, communicating through the directed Vicsek fractal graph. In this example, we assume that the agents are subject to
\[w_{i}(t)=0.01it-\operatorname{round}(0.01it),\quad i=1,\ldots,N. \tag{53}\]
Figure 10 shows that our designed protocol is robust even in the presence of noise with a different pattern.
### **Effectiveness for different \(d\) values**
Finally, in this section, we show the effectiveness of the proposed protocol for different values of \(d\). Similar to the previous examples, we consider the MAS of Section IV-A with \(N=121\) communicating through directed Vicsek fractal graphs, where the agents are subject to noise (52). In this example, we choose \(d=0.2\). The simulation results presented in Figure 11 show the effectiveness of our protocol independent of the value of \(d\).
## V Conclusion
This paper studied scalable \(\delta\)-level coherent state synchronization of MAS where agents were subject to bounded disturbances/noises. The results of this paper can be utilized in the string stability of vehicular platoon control and in power systems. The directions for future research on scalable \(\delta\)-level coherent synchronization are: 1. scalable \(\delta\)-level coherent synchronization of MAS with partial-state coupling; 2. considering non-input-additive disturbances, _i.e.,_ where \(\operatorname{Im}E\not\subset\operatorname{Im}B\). The authors are conducting research in both directions.
|
2304.11454
|
An approach to extract information from academic transcripts of HUST
|
In many Vietnamese schools, grades are still being inputted into the database
manually, which is not only inefficient but also prone to human error. Thus,
the automation of this process is highly necessary, which can only be achieved
if we can extract information from academic transcripts. In this paper, we test
our improved CRNN model in extracting information from 126 transcripts, with
1008 vertical lines, 3859 horizontal lines, and 2139 handwritten test scores.
Then, this model is compared to the Baseline model. The results show that our
model significantly outperforms the Baseline model with an accuracy of 99.6% in
recognizing vertical lines, 100% in recognizing horizontal lines, and 96.11% in
recognizing handwritten test scores.
|
Nguyen Quang Hieu, Nguyen Le Quy Duong, Le Quang Hoa, Nguyen Quang Dat
|
2023-04-22T17:29:55Z
|
http://arxiv.org/abs/2304.11454v1
|
# An approach to extract information from academic transcripts of HUST
###### Abstract
In many Vietnamese schools, grades are still being inputted into the database manually, which is not only inefficient but also prone to human error. Thus, the automation of this process is highly necessary, which can only be achieved if we can extract information from academic transcripts. In this paper, we test our improved CRNN model in extracting information from 126 transcripts, with 1008 vertical lines, 3859 horizontal lines, and 2139 handwritten test scores. Then, this model is compared to the Baseline model. The results show that our model significantly outperforms the Baseline model with an accuracy of 99.6% in recognizing vertical lines, 100% in recognizing horizontal lines, and 96.11% in recognizing handwritten test scores.
Keywords:Academic transcript; Image Processing; CRNN; CTC; Digit string recognition
## 1 Introduction
At Hanoi University of Science & Technology, after every exam, scores are recorded in academic transcripts and then transferred to the school's database by teachers. Until now, this process has been done manually, which is time-consuming for the teachers and may lead to accidental mistakes such as the scores inputted being incorrect or the scores being assigned to the wrong students.
Currently, machine learning methods have been applied to automate these processes (see [1], [2], [3], [11], [12], [13], [14], [15], [16], [17]), which also helps to free up manpower. By utilizing image-processing techniques and deep learning, we can automate this procedure with a system that can extract the necessary data.
This paper contains five sections. The first section is the introduction. The second section reviews prior research by various authors on image-processing techniques. The third section presents the methods studied in this paper, including our proposed method. The fourth section presents our results on real data. The last section contains the conclusion and acknowledgment.
## 2 Related works
Digit recognition is extremely useful. This is especially the case for schools where the process of inputting grades into the database is still being done manually. In such schools, the assistance of digit recognition can significantly increase accuracy and reduce the time allotted to this process.
In 2018, Liu et al. [4] proposed a hybrid CNN-LSTM algorithm for identifying CO2 welding defects. The algorithm is evaluated using 500 images of molten pools from the Robotics and Welding Technology Laboratory at Guilin University of Aerospace Technology. The results from CNN-LSTM algorithm were considered to be better than those of other basic algorithms (CNN, LSTM, CNN-3), with an accuracy of 85%, 88%, and 80% for 32x32, 64x64, and 128x128 images, respectively.
In 2019, Rehman et al. [5] utilized a hybrid CNN-LSTM model to analyze opinions in people's online comments. This model outperforms conventional machine learning techniques on the IMDB movie reviews dataset and the Amazon movie reviews dataset, with a precision of 91%, recall of 86%, an F-measure of 88%, and an accuracy of 91%.
In 2020, Yang et al. [6] compared the performance of a CNN-LSTM model with analytical analysis and an FEA algorithm in detecting the natural frequency of six types of beams. The goal was to assess the first, second, and third-order frequencies of each beam. The authors' model was concluded to be superior in both the test on robustness, with 96.6%, 93.7%, and 95.1% accuracy, respectively, and the test on extrapolability, with 95.4%, 92%, and 92.5% accuracy, respectively.
In 2019, Sivaram et al. [7] proposed a CNN-RNN combination model to detect facial landmarks. The proposed model outperforms existing methods, such as CFAN, SDM, TCDN, and RCPR, on the FaceWarehouse database, with 99% precision, 99% recall, 99% F1-Score, 98.65% Accuracy, and 98.65% AUC or ROC.
In 2018, Xiangyang et al. [8] utilized a hybrid CNN-LSTM model to tackle the problem of text classification. With the Chinese news dataset (proposed by the Sogou Lab), the model proved superior to traditional KNN, SVM, CNN, and LSTM, with a precision of 90.13% under the CBOW model and 90.68% under the Skip-Gram model.
In 2017, Yin et al. [9] used CNN-RNN and C3D hybrid networks to detect emotions in videos from the AFEW 6.0 database. The objective was to assign
emotion from 7 categories, namely anger, disgust, fear, happiness, sadness, surprise, and neutral, to each video in the test dataset. It was found that with 1 CNN-RNN model and 3 C3D models, an accuracy of 59.02% was achieved, surpassing the baseline accuracy of 40.47% and last year's highest accuracy of 53.8%.
In 2017, Zhan et al. [10] introduced a new RNN-CTC model to recognize digit sequences in three datasets, namely CVL HDS, ORAND-CAR (including CAR-A and CAR-B), and G-Captcha. Even though the proposed model only achieved a recognition rate of 27.07% for the CVL dataset, the model outperformed other state-of-the-art methods on the CAR-A, CAR-B, and G-Captcha datasets, with recognition rates of 89.75%, 91.14%, and 95.15%, respectively, due to the absence of segmentation.
## 3 Methodology
### Convolutional Neural Networks
A convolutional neural network can be successfully applied to most computer vision problems. Its characteristics make it more effective than other conventional methods. Since its inception, the CNN has witnessed numerous optimizations.
However, when it comes to deeper networks, a degradation problem arises. To tackle this, He et al. proposed a deep residual learning framework, ResNet. The basic building block of this network is the shortcut connection, that is, a connection that traverses multiple layers. With this type of connection, we can overcome the problem of vanishing gradients and construct more extensive networks; in other words, better feature representations can be acquired. In practice, shortcut connections can be adjusted on a case-by-case basis, depending on each specific problem.
In our proposed model, we design a 10-layer residual network that does not have any global pooling layers. To prevent divergence, we avoid employing a CNN that is excessively deep. In addition, we maximize the use of residual learning to enhance gradient propagation.
### Recurrent Neural Networks
A recurrent neural network is an architecture where multiple connections between neurons create a recursive cycle. Self-connection brings the advantage of utilizing contextual data when making a correspondence between sequences of input and output. Nevertheless, for a conventional RNN, the range of accessible context is restricted in practice due to the vanishing gradient problem. Applying a memory structure to the RNN, which produces a cell known as long short-term memory (LSTM), is one method that can be utilized to solve this problem. It has been demonstrated that the LSTM variant of the RNN is capable of resolving some of the issues inherently present in conventional RNNs, in addition to learning how to resolve issues of long-term dependency. At this time, the LSTM has developed into one of the most frequently utilized RNNs.
Regarding the sequence labeling problem, it is helpful to have access to context in both the future and the past. However, the normal LSTM only considers information from the past and pays no attention to the future context. One alternative solution is to create a second LSTM that processes the input in the opposite direction; this type of LSTM is known as a bidirectional LSTM (Bi-LSTM). Every training sequence is conveyed both forward and backward to two distinct LSTM layers, both of which are connected to the same output layer. This structure provides the output layer with complete context, past and future, for every point in the input sequence.
### Handwritten digit string recognition with CTC
Sequence character recognition is a common OCR problem. In this paper, we propose an approach to recognize handwritten digit strings. The main idea is to first extract features with a convolutional neural network, then recognize the sequence information in the image with a recurrent neural network, and finally obtain the predicted results through an output connectionist temporal classification (CTC) layer.
The input is a single-channel image tensor (resized to 40x100x1). For feature extraction, a backbone network is constructed with convolutional layers, max-pooling layers, and residual blocks. After every convolution layer, we perform batch normalization to prevent internal covariate shift. The output of the feature-extraction block is fed as a sequence into the labeling block. To avoid the vanishing gradient problem, we use two Bi-LSTM layers instead of a traditional RNN layer. Finally, a fully connected layer is used to reduce the dimension to 13 before passing the sequence to the CTC layer. The CTC layer serves primarily two functions: the first is to decode the output, and the second is to calculate the loss. The full architecture is shown in Figure 1.
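For concreteness, the sketch below shows one way to assemble such a CNN + Bi-LSTM + CTC recognizer in PyTorch. It follows the description above (single-channel 40x100 input, batch-normalized convolutions, two Bi-LSTM layers, a 13-class output, CTC loss), but the channel widths, pooling sizes, and the omission of the residual blocks are simplifications of this sketch, not the exact architecture of Figure 1.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Sketch of the CNN + Bi-LSTM + CTC recognizer described above.
    Channel sizes and the number of layers are illustrative only."""
    def __init__(self, n_classes=13, hidden=128):
        super().__init__()
        # Feature extraction: convolutions + batch norm + max pooling (residual blocks omitted).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2, 2),                        # 40x100 -> 20x50
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),              # 20x50 -> 10x50
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),           # collapse height -> 1x50
        )
        # Sequence labeling: two bidirectional LSTM layers.
        self.rnn = nn.LSTM(128, hidden, num_layers=2, bidirectional=True, batch_first=True)
        # 13 output classes as in the text (e.g. digits, auxiliary symbols and the CTC blank).
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, 1, 40, 100)
        f = self.cnn(x)                    # (batch, 128, 1, W)
        f = f.squeeze(2).permute(0, 2, 1)  # (batch, W, 128): one feature vector per column
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(-1)   # (batch, W, 13), ready for CTC

model = CRNN()
logp = model(torch.randn(4, 1, 40, 100)).permute(1, 0, 2)    # CTC expects (T, batch, classes)
targets = torch.randint(1, 13, (4, 5))                        # dummy digit-string labels
loss = nn.CTCLoss(blank=0)(logp, targets,
                           input_lengths=torch.full((4,), logp.size(0), dtype=torch.long),
                           target_lengths=torch.full((4,), 5, dtype=torch.long))
```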
### Proposed method
The first step of our method is image preprocessing. Transcript images are binarized, and noise is removed with a Gaussian filter. We deskew the images using the projection-profile algorithm. For class ID recognition, we use template matching followed by OCR tools. To detect and compute the coordinates of the lines in the transcript, horizontal and vertical masks generated by morphological operations are fed into the Hough transform. Once the full coordinates of the lines are known, the cells containing student IDs and test scores are cropped. Student IDs, being sequences of printed digits, can easily be recognized by available OCR tools; in our method, we use Tesseract-OCR, which is inherently very accurate in recognizing printed characters. For test scores, we use our handwritten digit recognition model with CTC decoding. Finally, the student IDs and test scores are combined.
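A minimal OpenCV sketch of the line-detection part of this pipeline is given below. The thresholds, kernel sizes, and Hough parameters are illustrative placeholders rather than the values used in our experiments.

```python
import cv2
import numpy as np

def detect_table_lines(path):
    """Binarize a transcript image, build horizontal/vertical masks with
    morphological opening, and extract line coordinates with the Hough transform."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                        # noise removal
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 15, 10)   # binarization

    # Morphological masks that keep only long horizontal / vertical strokes.
    h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
    v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40))
    h_mask = cv2.morphologyEx(binary, cv2.MORPH_OPEN, h_kernel)
    v_mask = cv2.morphologyEx(binary, cv2.MORPH_OPEN, v_kernel)

    # Probabilistic Hough transform gives the line-segment coordinates,
    # from which the table cells (student IDs, scores) can be cropped.
    h_lines = cv2.HoughLinesP(h_mask, 1, np.pi / 180, threshold=100,
                              minLineLength=200, maxLineGap=10)
    v_lines = cv2.HoughLinesP(v_mask, 1, np.pi / 180, threshold=100,
                              minLineLength=200, maxLineGap=10)
    return h_lines, v_lines
```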
Figure 1: Our proposed architecture model
Figure 2: Our proposed method flowchart
## 4 Experiment and results
### Evaluation of Image-preprocessing
By using a dataset consisting of images of 126 academic transcripts with 1008 vertical lines and 3859 horizontal lines, the results of the Baseline model and our improved model in detecting lines are shown below:
The Baseline model achieved an accuracy of 65.57% for vertical lines, whereas the improved model had an accuracy of 99.6%.
Figure 4: Results of the two models in detecting vertical lines
Figure 3: Result of automatic score-inputting system
The Baseline model achieved an accuracy of 74.65% for horizontal lines, whereas the improved model had an absolute accuracy of 100%.
### Evaluation of models in recognizing handwritten test scores
By using an extracted dataset with 2139 handwritten test scores, the results of the CNN Baseline model and our CRNN model are shown below:
The Baseline model achieved an accuracy of approximately 45.86%, whereas our improved CRNN model had an accuracy of 96.11%.
### Evaluation of automatic score-inputting system
To evaluate the accuracy of the entire score-inputting system, we tested it on a dataset of 75 scanned academic transcripts with 162 images of size 1653 x 2337.
Figure 5: Results of the two models in detecting horizontal lines
Figure 6: Results of the two models in recognizing handwritten test scores
**a) Evaluation of Baseline model**
Results of the Baseline model:
* The model was able to accurately detect lines in 92 images and recognize class IDs of 51 images (20 academic transcripts), achieving an accuracy of 31.4%.
* Among 3596 student IDs, the model correctly recognized 2201 IDs, achieving an accuracy of 61.2% (The majority of images in which the lines were accurately detected all had their student IDs recognized by the model).
* Among 3596 test scores, the model was able to accurately recognize 1532 test scores, achieving an accuracy of 42.6%.
**b) Evaluation of improved model**
Results of the improved model:
* In 22 images, the model misidentified 1 vertical line. However, these misidentified lines didn't affect the columns of data that needed to be recognized. Horizontal lines, on the other hand, were all accurately detected. In all 162 images, the model correctly recognized the class IDs with an accuracy of 100%.
* Among 3596 student IDs, the model correctly recognized 3481 IDs, achieving an accuracy of 96.8%.
* Among 3596 test scores, the model was able to accurately recognize 3451 test scores, achieving an accuracy of 95.9%.
## 5 Conclusion
In this research paper, we have introduced a new approach to inputting handwritten test scores into the computer. By using additional auxiliary features on the printout, such as the horizontal and vertical lines on the A4 paper, we have achieved very good results in clearly separating handwritten letters and numbers, thereby increasing the precision of reading handwritten letters and numbers into numeric data.
In the future, we will conduct more research on some related issues in order to further increase the accuracy:
- Identify several records of the same person.
- Identify both letters and numbers at the same time (scores are written in both numbers and words on the same handwritten paper).
## 6 Acknowledgment
We would like to extend our gratitude to the researchers at Hanoi University of Science and Technology (Hanoi, Vietnam), who were of great assistance to us in the process of collecting data and testing the application of the model in real-world scenarios.
|
2305.06201
|
Collisionless dynamics of superconducting gap excited by spin-splitting
field
|
We study the coherent dynamic interaction of a time-dependent spin-splitting
field with the homogeneous superconducting order parameter $\Delta(t)$ mediated
by spin-orbit coupling using the time-dependent Bogoliubov-de Gennes theory. In
the first part of the work we show that linear response of the superconductor
is strongly affected by the Zeeman field and spin-flip processes, giving rise
to multiple resonant frequencies of the superconducting Higgs modes. These
modes can be excited either by a quench, or by an additional non-stationary
component of the spin-splitting field, which couples linearly to the Higgs
modes. In the second part, we analyze the nonadiabatic dynamics of
quasiparticle states arising from the intersection of spectral branches from
different spin subbands, which can be provoked by a linearly growing Zeeman
field. We provide insights into the dependence of the order parameter
$\Delta(t)$ on this field and interference effects caused by tunneling of
states at the avoided crossing points. We also show that since the nonadiabatic
tunneling is related to spin-flip processes, the quasiparticle gas experiences
a dynamic magnetization that contributes to its spin susceptibility.
|
V. Plastovets, A. S. Mel'nikov, A. I. Buzdin
|
2023-05-10T14:33:38Z
|
http://arxiv.org/abs/2305.06201v1
|
# Collisionless dynamics of superconducting gap excited by spin-splitting field
###### Abstract
We study the coherent dynamic interaction of a time-dependent spin-splitting field with the homogeneous superconducting order parameter \(\Delta(t)\) mediated by spin-orbit coupling using the time-dependent Bogoliubov-de Gennes theory. _In the first part_ of the work we show that linear response of the superconductor is strongly affected by the Zeeman field and spin-flip processes, giving rise to multiple resonant frequencies of the superconducting Higgs modes. These modes can be excited either by a quench, or by an additional non-stationary component of the spin-splitting field, which couples linearly to the Higgs modes. _In the second part_, we analyze the nonadiabatic dynamics of quasiparticle states arising from the intersection of spectral branches from different spin subbands, which can be provoked by a linearly growing Zeeman field. We provide insights into the dependence of the order parameter \(\Delta(t)\) on this field and interference effects caused by tunneling of states at the avoided crossing points. We also show that since the nonadiabatic tunneling is related to spin-flip processes, the quasiparticle gas experiences a dynamic magnetization that contributes to its spin susceptibility.
## I Introduction
Extensive studies of nonequilibrium states of superconductors [1; 2] pay considerable attention to the so-called collisionless dynamics of a superconducting condensate, described by the complex-valued pairing potential \(\Delta(t)\). At timescales shorter than the typical inelastic relaxation time \(t\ll\tau_{\varepsilon}\) the dynamics of Cooper pairs is in the coherent regime and is described by the Keldysh technique for Green's functions or its quasiclassical approximation [3; 4]. The collisionless regime manifests itself most clearly in the existence of oscillations of the amplitude of the order parameter \(\Delta(t)=\Delta_{0}+\delta\Delta(t)\) near the equilibrium gap value \(\Delta_{0}\). This mode comes from the excited interference interaction between the wavefunctions of the quasiparticles (QP) from broken Cooper pairs. Due to the QP dispersion the summation over all interference contributions results in an inhomogeneous broadening of the total gap mode, which is equivalent to a weak damping with a typical time evolution \(\delta\Delta(t)\propto\cos(2\Delta_{0}t)/\sqrt{\Delta_{0}t}\)[3]. By analogy with electroweak particle theory this amplitude mode is called the Higgs mode [5; 6]. Since the Higgs mode is a scalar excitation, it cannot be coupled to the electromagnetic field \(\mathbf{A}(t)\) linearly, and several indirect mechanisms have been studied, such as linear excitation by THz radiation in the presence of a dc supercurrent [7; 8]. It is also possible to realize a nonlinear and coherent (or incoherent [9]) Higgs mode excitation using high-intensity THz light with frequency just above the equilibrium superconducting gap \(\Delta_{0}\), which can be detected by ultrafast pump-probe spectroscopy and third harmonic generation measurements [10; 11; 12; 13; 14].
It is known that, in addition to electromagnetic fields, superconductors also respond to nonstationary spin-splitting fields \(\mathbf{h}(t)\). Typically, this field is produced by an external magnetic field \(\mathbf{h}=\mu_{B}\mathbf{H}\) or by the exchange field of an adjacent ferromagnetic layer \(\mathbf{h}\sim J_{\mathrm{ex}}\mathbf{m}\), which is induced by proximity to the superconductor. Spin-split systems serve as good platforms for spintronic applications, and extensive studies of various non-equilibrium processes have been carried out over the last few decades [15; 16; 17]. In particular, by inducing magnetic-moment dynamics in S/F junctions, an effective spin-triplet component of the superconducting gap is generated, resulting in long-range proximity effects [18; 19; 20; 21]. On the other hand, experimental observations indicate that the superconducting subsystem has a direct impact on the ferromagnetic resonance in hybrid S/F structures [22; 23; 24].
In recent years there has been a growing interest in studying of the Higgs modes in the proximized superconducting systems [25; 26] as well as the interaction of collective modes in S/F systems. For instance, it was recently shown that in a superconductor in the helical phase, which can be achieved in the presence of a strong spin-orbit coupling (SOC) and an exchange field, the Higgs mode can be linearly coupled to the electromagnetic field through the nonzero superconducting phase gradient in the ground state [27]. Also, it was revealed that the coupling of the Higgs mode \(\delta\Delta(t)\) in a superconductor to external light \(\mathbf{A}(t)\) and magnetic dynamics \(\mathbf{m}(t)\) in the F layer allows the generation of time-dependent spin currents [28]. These currents can themselves excite the Higgs mode in the superconductor through the resonance of the ferromagnet due to the reciprocal effect [28]. Another example is an interplay between the superconducting Higgs mode and a magnon mode in the adjacent F layer in the presence of a SOC and static proximity effect [29]. Interestingly, the Higgs mode here is coupled to the Zeeman field \(\mathbf{h}(t)\) linearly due to the presence of both the spin-orbit interaction and some preferred direction given by wave-vector of the magnetic mode.
According to the aforementioned works, the SOC is critical for interaction of different spin subbands of the QP spectrum, which directly leads to the gap dynamics \(\Delta(t)\). Some preconditions for this can be taken from the elementary analysis of the equilibrium state. The equilibrium superconducting gap does not depend on the Zeeman field below the so-called paramagnetic limit \(h_{\rm cr}=\Delta_{0}/\sqrt{2}\), so that \(\Delta(h<h_{\rm cr},T=0)=\Delta_{0}\); and above this limit the superconductivity is completely suppressed with \(\Delta(h>h_{\rm cr},T=0)=0\)[30; 31]. The SOC drastically changes the dependence \(\Delta(h)\) and promotes a generation of triplet component superconducting correlations, leading to the survival of the gap at \(h>h_{\rm cr}\)[32]. This effect is associated with mixing of the different spin states of QP, and the appearance of such mixing is naturally expected in the dynamic regime.
The general description of the dynamics of the superconducting condensate in the presence of both a time-dependent spin-flipping field and SOC is a rather difficult problem [28; 29]. In this paper we will focus on a fairly simple and specific system configuration, which allows us to explicitly trace the temporal evolution of QP states and their contribution to the order parameter \(\Delta(t)\). For the sake of simplicity we consider a uniform superconductor at zero temperature \(T=0\) and consider short timescales \(t\ll\tau_{\varepsilon}\) at which the collisionless regime holds, so one can treat the system with a purely quantum-mechanical approach within the time-dependent Bogoliubov-de Gennes (TDBdG) equations [33]. We confine ourselves to addressing the homogeneous spin-splitting field with only one component \({\bf h}(t)=h(t){\bf z}_{0}\). A simple approach based on the expansion of the QP wave function in terms of the eigenstates of the BdG Hamiltonian \(\psi(t)=\sum_{n}C_{n}(t)\Psi_{n}\) can be developed. The behavior of the QPs and the related self-consistent gap function \(\Delta(t)\) are determined by the coefficients \(C_{n}(t)\), which describe how the states with a specific spin quantum number and momentum are refilled due to nonstationary transitions. After introducing the TDBdG equations in Sec. II, we examine two different regimes of coherent evolution of the order parameter.
In Sec. III we analyze linearized gap dynamics where the temporal evolution of the Higgs modes \(\delta\Delta(t)\) is traced in the presence of a stationary spin-splitting field \(h_{0}\). Since the SOC allows the transitions between the QP states with different spins, the induced perturbation of the gap \(\delta\Delta(t)\) acquires three eigenfrequencies including the standard \(2\Delta_{0}\) and two additional frequencies \(2(\Delta_{0}\pm h_{0})\). These modes define both the free oscillation of the perturbed gap at \(h_{0}^{-1}\ll t\ll\tau_{\varepsilon}\), and resonant peaks in the case of driven oscillations. It was also shown that in the specific configuration, the linear coupling of the Higgs mode and the perturbation of the Zeeman field \(\delta{\bf h}(t)\) co-directional with \({\bf h}_{0}\) is possible. Note that the frequencies shifted by the spin-splitting field have been observed in the numerical simulation of the dynamics of the one-dimensional Fermi superfluid exposed to the nonstationary Zeeman field and strong SOC in the Ref. [34].
In Sec. IV we consider the dynamics of the gap \(\Delta(t)\) driven by a linearly growing field \({\bf h}(t)\). At some point the field becomes larger than the equilibrium gap value \(|{\bf h}(t)|>\Delta_{0}\) and thus provokes the crossing of the branches from different spin subbands of the QP spectrum. The appearance of non-adiabatic transitions between the states at the intersection point is equivalent to the dynamical spin-flip process and can be described with the Landau-Zener-Stuckelberg-Majorana (LZSM) tunneling problem [35]. The corresponding refilling of the amplitudes \(C_{n}(t)\) contributes to the gap function \(\Delta(t)\) and can drastically change its behavior depending on the field growth rate. In Sec. IV.3-IV.4 we derive an analytical expression for \(\Delta(t)\) from the self-consistency equation which contains two different terms: (i) a smooth dependence \(\Delta_{h}[h(t)]\) arising directly from spin-flip tunneling and depending on the occupation probability of QP states \(|C_{n}(t)|\); (ii) an oscillating part \(\delta\Delta(t)\) originating from the interference between redistributed states with terms of the type \(C_{n}^{*}(t)C_{n^{\prime}}(t)\). In addition, in Sec. IV.5-IV.6 we discuss the spin imbalance generated due to LZSM tunneling and the corresponding dynamical magnetization of the QP gas. Discussion and some experimental proposals are presented in Sec. V.
## II Time-dependent Bogoliubov - de Gennes equations
We consider a homogeneous s-wave superconductor in the presence of the uniform time-dependent Zeeman field \(h(t)\) and Rashba spin-orbit coupling (RSOC). The coherent QP dynamics is governed by the TDBdG equations [33]
\[i\frac{\partial}{\partial t}\tilde{\psi}_{k}=\tilde{\mathcal{H}}(k,t)\tilde{ \psi}_{k}, \tag{1}\]
where the Hamiltonian
\[\tilde{\mathcal{H}}(k,t)=\begin{pmatrix}\hat{H}(k,t)&i\hat{\sigma}_{y}\Delta(t )\\ -i\hat{\sigma}_{y}\Delta(t)&-\hat{H}^{*}(-k,t)\end{pmatrix} \tag{2}\]
is the \(4\times 4\) matrix in the Nambu\(\times\)Spin space with the Pauli matrices \(\hat{\sigma}_{i}\) acting on the four-component wave function \(\tilde{\psi}_{k}(t)\). The single particle matrix Hamiltonian in the spin space \(\hat{H}(k,t)=\xi_{k}\hat{\sigma}_{0}-h(t){\bf z}_{0}\hat{\mathbf{\sigma}}+\alpha( \hat{\mathbf{\sigma}}\times{\bf k}){\bf z}_{0}\) depends on the modulus \(k=|{\bf k}|\) and the relative phase \(\theta_{k}=\arg(k_{x}+ik_{y})\) of the momentum. Here \(\xi_{k}=k^{2}/2m-E_{F}\) is a free particle spectrum measured from the Fermi level and \(\alpha\) is a strength of RSOC. Hereafter we put \(\hbar=1\). For simplicity we consider here the motion of QPs only in the \(x-y\) plane neglecting their dispersion along the \({\bf z}_{0}\) axis, so that \({\bf k}=(k_{x},k_{y})\).
The pairing potential \(\Delta(t)\) should satisfy the self-consistency equation, which at zero temperature \(T=0\) can be written as follows
\[\Delta(t)=-\frac{\lambda}{2}\sum_{\rm i.c.}\tilde{\psi}_{k}^{\dagger}(t)\tilde{ \tau}_{\Delta}\tilde{\psi}_{k}(t), \tag{3}\]
where \(\lambda\) is the pairing constant, \(\tilde{\tau}_{\Delta}=(\hat{\tau}_{x}+i\hat{\tau}_{y})\otimes i\hat{\sigma}_{y}/2\) and the independence of \(\Delta\) on \(\theta_{k}\) is taken into account. The summation here is performed over all solutions of Eq. (1) for different initial conditions (i.c.) at \(t=0\). The information about the dynamics as well as the distribution function of the QP excitations is contained in the functions \(\tilde{\psi}_{k}(t)\), which self-consistently define the temporal evolution of the gap. In the homogeneous problem, the initial conditions are numbered by the momentum \(k\), which, in the case of a spin-split superconductor, must be supplemented by the spin quantum number. All possible initial configurations of the QP states are defined by an equilibrium distribution function. The pairing potential \(\Delta(t)\) can be chosen as a real function of time, and this choice will be justified below.
Generally speaking, the concept of an energy spectrum for a dynamical system is not clearly defined. However, in the case of adiabatic evolution one can introduce the eikonal approximation for the QP wavefunctions \(\tilde{\psi}_{k}(t)\propto\tilde{\Psi}_{k}(t)e^{iS_{k}(t)}\), from which the adiabatic spectrum \(E_{k}(t)=-\partial_{t}S_{k}\) can be extracted. The functions \(\tilde{\Psi}_{k}(t)\) are the instantaneous eigenstates of the Hamiltonian \(\tilde{\mathcal{H}}(t)\) from Eq. (2). The resulting spectrum is
\[E_{kn}(t)=\pm\sqrt{E_{0}^{2}+\alpha^{2}k^{2}+h^{2}(t)\mp\text{sgn}(\sigma)\,2\sqrt{\xi_{k}^{2}\alpha^{2}k^{2}+h^{2}(t)E_{0}^{2}}} \tag{4}\]
where \(E_{0}=\sqrt{\xi_{k}^{2}+\Delta^{2}}\). We use the index \(n\equiv\sigma\pm=\{\uparrow+,\downarrow+,\uparrow-,\downarrow-\}\), which refers to the different spin subbands and to positive/negative energy (these notations will be used in the text below). There are four corresponding instantaneous eigenstates which can be written as \(\tilde{\Psi}_{kn}(t)=(u_{k\uparrow n},u_{k\downarrow n},v_{k\uparrow n},v_{k\downarrow n})^{T}\). The detailed structure of the vectors is given in Appendix A. The functions \(\tilde{\Psi}_{kn}(t)\) form an orthonormal basis with the normalization condition \(\tilde{\Psi}_{kn}^{\dagger}\tilde{\Psi}_{kn^{\prime}}=\delta_{nn^{\prime}}\) and the completeness relation \(\sum_{n}\tilde{\Psi}_{kn}\tilde{\Psi}_{kn}^{\dagger}=\mathbb{I}\). Obviously, in the limit of a stationary Zeeman field, \(\tilde{\Psi}_{kn}\) becomes an exact solution of the stationary problem (1).
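The adiabatic spectrum (4) is straightforward to evaluate numerically, which is convenient for visualizing the branch structure discussed below and in Sec. IV. The following NumPy sketch uses illustrative parameter values and assigns the spin labels assuming \(\text{sgn}(\uparrow)=+1\) (an assumption of this sketch, since the sign convention is fixed in Appendix A).

```python
import numpy as np

def adiabatic_spectrum(xi, h, delta, alpha_k):
    """Four instantaneous branches E_{kn} of Eq. (4) as functions of xi_k.
    xi: array of single-particle energies; h, delta, alpha_k = alpha*k are the
    Zeeman field, the gap and the RSOC energy (all in the same units)."""
    E0 = np.sqrt(xi**2 + delta**2)
    root = np.sqrt(xi**2 * alpha_k**2 + h**2 * E0**2)
    e_up = np.sqrt(E0**2 + alpha_k**2 + h**2 - 2.0 * root)   # sgn(sigma) = +1 assumed for spin up
    e_dn = np.sqrt(E0**2 + alpha_k**2 + h**2 + 2.0 * root)   # sgn(sigma) = -1 for spin down
    return {'up+': e_up, 'down+': e_dn, 'up-': -e_up, 'down-': -e_dn}

# Illustrative values (units of Delta_0): weak RSOC, field below the paramagnetic limit.
xi = np.linspace(-3.0, 3.0, 601)
bands = adiabatic_spectrum(xi, h=0.4, delta=1.0, alpha_k=0.1)
# With alpha_k -> 0 the two inner branches touch at xi = 0 once h = delta; a finite
# alpha_k turns this into the avoided crossing discussed in Sec. IV.
```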
It is important to keep in mind that in the presence of both RSOC and a spin-splitting field the equilibrium gap value depends on the values of these fields \(\Delta_{\text{eq}}=\Delta_{\text{eq}}(h,\alpha)\). In what follows, the RSOC strength \(\alpha\) will be considered as a small parameter, and the dependence \(\Delta(h,\alpha)\) will be neglected for simplicity. Thus, the equilibrium gap value is defined as follows
\[\Delta_{\text{eq}}=\Delta_{0}=2\hbar\omega_{D}e^{-\frac{1}{\lambda N(0)}},\]
where \(\omega_{D}\) is Debye frequency and \(N(0)\) is the density of states at Fermi energy.
## III Linearized gap dynamics
In this section we want to address the temporal evolution of a small fluctuation of the gap \(\Delta_{0}+\delta\Delta(t)\) in the presence of the static spin-splitting field \(\mathbf{h}=h_{0}\mathbf{z}_{0}\). The gap dynamics can be excited by some external pulse at \(t=0\) or can be driven, for instance, by time-dependent spin-splitting field \(\delta\mathbf{h}(t)=\delta h(t)\mathbf{z}_{0}\). In linear order in small perturbations \(\delta\Delta(t),\delta h(t)\ll h_{0}<\Delta_{0}\), the TDBdG equations for the QP wave functions read
\[i\frac{\partial}{\partial t}\tilde{\psi}_{k}(t)=\Big{[}\tilde{\mathcal{H}}_{0 }+\tilde{\mathcal{V}}(t)\Big{]}\tilde{\psi}_{k}(t), \tag{5}\]
where the operators in the Nambu\(\times\)Spin space are
\[\tilde{\mathcal{H}}_{0}=\begin{pmatrix}\hat{H}_{0}(k)&i\hat{\sigma}_{y}\Delta _{0}\\ -i\hat{\sigma}_{y}\Delta_{0}&-\hat{H}_{0}^{*}(-k)\end{pmatrix}, \tag{6}\] \[\tilde{\mathcal{V}}(t)=\begin{pmatrix}-\delta h(t)\hat{\sigma}_{z }&i\hat{\sigma}_{y}\delta\Delta(t)\\ -i\hat{\sigma}_{y}\delta\Delta(t)&-\delta h(t)\hat{\sigma}_{z}\end{pmatrix},\]
and single particle Hamiltonian is \(\hat{H}_{0}(k)=\xi_{k}\hat{\sigma}_{0}-h_{0}\hat{\sigma}_{z}+\alpha(k_{y}\hat{ \sigma}_{x}-k_{x}\hat{\sigma}_{y})\).
Time-dependent equation (5) can be written in the adiabatic basis using stationary eigenfunctions \(\tilde{\Psi}_{kn}\) of the operator \(\tilde{\mathcal{H}}_{0}\). Additionally, the RSOC energy \(\alpha k\approx\alpha k_{F}\) is considered a perturbative parameter. By approximating the eigenvectors up to first order in \(\alpha k_{F}/\Delta_{0}\) (see Appendix A), we can infer from equation (3) that the fluctuation in the gap will have an order up to \(\mathcal{O}(\alpha^{2}k_{F}^{2}/\Delta_{0}^{2})\). However, in the general case, the gap \(\Delta\) should not be affected by the direction of the SOC. Therefore, the first-order change in the gap \(\delta\Delta\propto\mathcal{O}(\alpha k_{F}/\Delta_{0})\) must vanish.
Instead of the general eikonal theory, we use the perturbative approach with the ansatz written in terms of the dynamical phase
\[\tilde{\psi}_{k}(t)=\sum_{n}\tilde{\Psi}_{kn}C_{kn}(t)e^{-iE_{kn}t}. \tag{7}\]
The index \(n=\{\uparrow+,\downarrow+,\uparrow-,\downarrow-\}\) denotes the spectral branches and all negative/positive energy terms are involved into the dynamics of QPs. Substituting the function (7) into Eq. (5) we obtain the equation for the dynamics of the coefficients
\[i\frac{\partial}{\partial t}C_{km}(t)=\sum_{n}\tilde{\Psi}_{m}^{\dagger}\tilde{ \mathcal{V}}(t)\tilde{\Psi}_{n}e^{-i(E_{n}-E_{m})t}C_{kn}(t), \tag{8}\]
which completely determine the temporal evolution of gap \(\Delta(t)\) through the self-consistency equation
\[\Delta_{0}+\delta\Delta(t)= \tag{9}\] \[-\frac{\lambda}{2}\sum_{\text{i.c.}}\sum_{n,n^{\prime}}C_{kn}^{*}( t)C_{kn^{\prime}}(t)e^{-i(E_{n^{\prime}}-E_{n})t}\tilde{\Psi}_{kn}^{\dagger}\tilde{ \tau}_{\Delta}\tilde{\Psi}_{kn^{\prime}}.\]
In the case of zero temperature \(T=0\) there are two possible initial configurations at \(t=-\infty\): All QP states with energies below Fermi level in the first(second) spin subband with \(\sigma=\uparrow(\downarrow)\) are fully occupied for all momenta with \(\xi_{k}\in(-\omega_{D},\omega_{D})\). The corresponding initial conditions can be written as
\[C_{kn}(t=-\infty)=\delta_{n,l} \tag{10}\]
with Kronecker delta \(\delta_{n,n^{\prime}}\) and indices \(l=\{\uparrow-,\downarrow-\}\). Therefore, it is natural to linearize the equation (8) as follows
\[C_{kn}(t)=C_{kn}(-\infty)+\delta C_{kn}(t). \tag{11}\]
Performing Laplace transform in the complex plane \(s=i\omega+\zeta\) for the linearized equations (8, 9, 11) (see Appendix B) we get the following expression for the gap perturbation
\[\delta\Delta(s) =\Big{[}\mathcal{K}_{0}(s)+\mathcal{K}_{+}(s)+\mathcal{K}_{-}(s) \Big{]}\delta\Delta(s) \tag{12}\] \[+\delta h(s)\Big{[}\mathcal{F}_{+}(s)-\mathcal{F}_{-}(s)\Big{]}+ \mathcal{I}(s).\]
Here \(\mathcal{K}_{0,\pm}(s)\) represent the kernels of the self-consistency equation; \(\mathcal{F}_{\pm}(s)\) defines the dynamical structure of the "force" term (in analogy with a mechanical oscillator) related to \(\delta h(t)\); and \(\mathcal{I}(s)\) includes all terms related to perturbations at the moment \(t=0\). Due to the absence of particle-hole asymmetry, which couples the phase and amplitude fluctuations [6], the imaginary part of \(\delta\Delta(s)\) naturally vanishes and we consider only amplitude (or Higgs) modes of the superconducting gap. Knowing the function \(\mathcal{K}(s)\) one can find the eigenfrequencies and the free dynamics of the system, while \(\mathcal{F}_{\pm}(s)\) induces the driven dynamics. We will conduct a thorough examination of these terms below.
### Spin-split Higgs modes
It is known that in the absence of a spin-splitting field and RSOC the Higgs mode has a singular behavior in the vicinity of the eigenfrequency \(\omega=2\Delta_{0}\), which defines the free evolution of the gap perturbation \(\delta\Delta(t)\propto\cos(2\Delta_{0}t)/\sqrt{t}\)[14]. Since the energy of the Higgs mode lies at the lower bound of the QP spectrum, the oscillatory behavior here can be represented as a coherent decay and formation of a Cooper pair into two QPs with opposite spins and energies \(\Delta_{0}\) at \(k\approx k_{F}\). The contribution from the pairs of QPs with other momenta leads to the inhomogeneous broadening of the mode with the corresponding damping law. The presence of Zeeman field and RSOC makes the dynamics more complicated. To analyze the eigenmodes of the superconductor one can set \(\delta h(t)=0\) and write the self-consistency equation as follows
\[\chi^{-1}_{\Delta\Delta}(s)\delta\Delta(s)=0,\]
where we define the bare pair susceptibility
\[\chi_{\Delta\Delta}(s)=\frac{1}{1-\mathcal{K}_{0}(s)-\mathcal{K}_{+}(s)- \mathcal{K}_{-}(s)}. \tag{13}\]
The corresponding kernels read (see Appendix B)
\[\mathcal{K}_{0}(s)=\Big{\langle}\frac{2\xi^{2}}{E_{0}}\frac{1}{s^ {2}+4E_{0}^{2}}\Big{\rangle}\propto\mathcal{O}\Big{(}\frac{\alpha^{0}k_{F}^{0 }}{\Delta_{0}^{0}}\Big{)}, \tag{14}\] \[\mathcal{K}_{\pm}(s)=\Big{\langle}\mathcal{A}^{2}(\xi)\frac{E_{0 }\pm h_{0}}{s^{2}+4(E_{0}\pm h_{0})^{2}}\Big{\rangle}\propto\mathcal{O}\Big{(} \frac{\alpha^{2}k_{F}^{2}}{\Delta_{0}^{2}}\Big{)},\]
where the notation \(\big{\langle}\dots\big{\rangle}=\lambda N(0)\int_{-\omega_{D}}^{\omega_{D}}d\xi\) is used. The function \(\mathcal{A}(\xi)\propto\tilde{\Psi}_{kn}^{0\dagger}\hat{\tau}_{\Delta}\tilde{ \Psi}_{kn}^{0}\propto\alpha k_{F}/\Delta_{0}\) is proportional to nonzero triplet component of the wave function, therefore the kernels \(\mathcal{K}_{\pm}\) are of the second order in the RSOC parameter.
The frequencies of the eigenmodes of the superconducting condensate can be traced out from the condition \(|\chi^{-1}_{\Delta\Delta}(\omega)|=0\), which reflects the singular points of the kernels (14). Consider these points in more detail. Instead of straightforward integrating, we are going to implement the analysis in the spirit of the work [3] and analytically obtain the limit \(\zeta\to 0\). The functions \(\mathcal{K}_{0,\pm}(s\rightarrow\omega)\) can be represented as \(\mathcal{K}(s)=\mathcal{K}^{\prime}(\omega)+i\text{sgn}(\omega\zeta)\mathcal{ K}^{\prime\prime}(\omega)\). The real parts of the kernels
\[\frac{\mathcal{K}_{0}^{\prime}(\omega)}{\lambda N(0)} =\fint_{-\omega_{D}}^{\omega_{D}}\frac{2\xi^{2}}{\sqrt{\xi^{2}+\Delta_{0}^{2}}\,(4\xi^{2}+4\Delta_{0}^{2}-\omega^{2})}d\xi, \tag{15}\] \[\frac{\mathcal{K}_{\pm}^{\prime}(\omega)}{\lambda N(0)} =\fint_{-\omega_{D}}^{\omega_{D}}\frac{\mathcal{A}^{2}(\xi)(E_{0}\pm h_{0})}{4(E_{0}\pm h_{0})^{2}-\omega^{2}}d\xi, \tag{16}\]
are regular on the imaginary axis \(s=i\omega\). The imaginary parts are
\[\frac{\mathcal{K}_{0}^{\prime\prime}(\omega)}{\lambda N(0)} =-\frac{\pi}{2}\frac{\sqrt{\omega^{2}-\omega_{0}^{2}}}{|\omega|} \Theta[\omega^{2}-\omega_{0}^{2}], \tag{17}\] \[\frac{\mathcal{K}_{\pm}^{\prime\prime}(\omega)}{\lambda N(0)} =-\frac{\pi}{8}\frac{|\omega|\mp 2h_{0}}{\xi_{\pm}}\mathcal{A}^{2}(\xi_{ \pm})\Theta[\omega^{2}-\omega_{\pm}^{2}], \tag{18}\]
Figure 1: (a) Branch points \(\omega_{0}=2\Delta_{0},\omega_{\pm}=2(\Delta_{0}\pm h_{0})\) (red dots) corresponding to the kernels \(\mathcal{K}_{0}(s)\) and \(\mathcal{K}_{\pm}(s),\mathcal{F}_{\pm}(s)\) [Eq. (12)] in the complex plane \(s=\zeta+i\omega\). Red lines show the chosen branch cuts. Black crosses correspond to the poles of the external force \(\delta h(s)\). (b-d) Illustration of physical mechanism behind the appearance of three eigenfrequencies \(\omega_{+}\) (b), \(\omega_{0}\) (c), \(\omega_{-}\) (d).
where \(\xi_{\pm}=\frac{1}{2}\sqrt{\left(|\omega|-\omega_{\pm}\right)^{2}+4\Delta_{0}(|\omega|-\omega_{\pm})}\). The discontinuities across the real axis \(\zeta\) imply the existence of the branch points
\[\omega_{0}=2\Delta_{0}, \tag{19}\] \[\omega_{+}=2(\Delta_{0}+h_{0}),\] \[\omega_{-}=2(\Delta_{0}-h_{0}),\]
and corresponding cuts in the complex plane [Fig. 1(a)].
The analysis of the general linear response of the order parameter can be significantly simplified by expanding the susceptibility \(|\chi_{\Delta\Delta}(\omega)|\) in powers of the small parameter \(\alpha k_{F}/\Delta_{0}\), since the kernels \(\mathcal{K}_{\pm}\propto\mathcal{O}(\alpha^{2}k_{F}^{2}/\Delta_{0}^{2})\). As mentioned before, the maximum order we can take into account is \(|\chi_{\Delta\Delta}|\propto\mathcal{O}(\alpha^{2}k_{F}^{2}/\Delta_{0}^{2})\). The resonance condition \(|\chi_{\Delta\Delta}^{-1}(\omega)|=0\) is satisfied at \(\omega=\omega_{\pm}\), where the kernels \(\mathcal{K}_{\pm}^{\prime\prime}(s)\) have a singularity (note that \(\mathcal{A}\) is regular at \(\xi=\xi_{\pm}\)), and at \(\omega=\omega_{0}\), where the function \(\mathcal{K}_{0}^{\prime\prime}(s)\) goes to zero. Thus, the branch points (19) define new eigenmodes of the superconductor in the presence of a spin-splitting field and weak RSOC. By taking some constant initial condition \(\mathcal{I}(s)=const\) in the RHS of Eq. (12) and using the inverse Laplace transform one can consider the impulse response of the superconductor. It can be shown that the peculiarities in the vicinities of the eigenfrequencies lead to three partial contributions to the long-time gap dynamics
\[\delta\Delta(t)\propto\frac{\cos(\omega_{0}t+\pi/4)}{\sqrt{ \Delta_{0}t}} \tag{20}\] \[+ \Big{(}\frac{\alpha k_{F}}{\Delta_{0}}\Big{)}^{2}A_{+}(h_{0}) \frac{\cos(\omega_{+}t+\pi/4)}{\sqrt{\Delta_{0}t}}\] \[+ \Big{(}\frac{\alpha k_{F}}{\Delta_{0}}\Big{)}^{2}A_{-}(h_{0}) \frac{\cos(\omega_{-}t+\pi/4)}{\sqrt{\Delta_{0}t}}\]
with some amplitudes \(A_{\pm}(h_{0})\), which can be identified as spin-split Higgs modes. Details of the derivation of \(\delta\Delta(t)\) from Eq. (12) will be provided in the next subsection together with Appendix C.
The appearance of the frequencies (19) and the corresponding oscillations (20) in the spin-split superconductor can be explained qualitatively. Coherent decay of the Cooper pairs from the Fermi level can occur into two different spin subbands of the QP spectrum. When two electrons with opposite spins from a pair dissociate into two QPs at \(k\approx k_{F}\) with the energies \(\Delta_{0}\pm h_{0}\) without spin-flipping, the total decay energy is equal to the QP threshold \(\approx 2\Delta_{0}\). This process corresponds to the mode \(2\Delta_{0}\) and is shown in Fig. 1(c). A decay into two QPs with the same spins is possible in the presence of RSOC due to the effective spin-flip scattering. The energies of these two QPs are either \(\Delta_{0}+h_{0}\) or \(\Delta_{0}-h_{0}\). This process leads to the modes \(2(\Delta_{0}\pm h_{0})\), respectively [Fig. 1(b,d)]. Note that this naive interpretation of the complicated QP dynamics is valid for sufficiently small RSOC \(\alpha k_{F}\ll\Delta_{0}\).
The numerically calculated susceptibility \(|\chi_{\Delta\Delta}(\omega)|\) from Eqs. (13)-(14) is shown in Fig. 2(a). The observed resonances have a different parametric order of smallness. The Higgs mode with the frequency \(\omega_{0}\), which exists in the absence of the RSOC, becomes dominant, with a more pronounced peak \(|\chi_{\Delta\Delta}(\omega\approx\omega_{0})|\propto\alpha^{0}k_{F}^{0}\), whereas the two other modes at the shifted frequencies \(\omega_{\pm}\) are of the order of \(|\chi_{\Delta\Delta}(\omega\approx\omega_{\pm})|\propto\alpha^{2}k_{F}^{2}/\Delta_{0}^{2}\). These modes merge with \(\omega_{0}\) at \(h_{0}\to 0\) and disappear for \(\alpha\to 0\). It is expected that the excitation of the bare response of the superconductor can be implemented with standard THz laser pump-probe techniques. The electric field of the pump pulse produces a quench of the spin-split superconductor and the subsequent probe pulse detects the multifrequency Higgs oscillations.
Note that similar dynamics of the order parameter have been studied in spin-orbit-coupled Fermi gases [36; 37; 38]. In particular, the existence of the Higgs modes modified by the Zeeman field in the presence of strong SOC with \(\alpha k_{F}\sim h(t)\sim E_{F}\) was discussed in Ref. [34]. The authors performed a numerical simulation of the one-dimensional Fermi superfluid and examined the excitation of gap oscillations with several frequencies by an abrupt change of the Zeeman field. Despite the significant differences between the models, there is a general tendency for the shift of the spectral QP branches to influence the behavior of the order parameter modes.
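The bare susceptibility (13) can also be evaluated directly from the kernels (14) by numerical quadrature. The sketch below keeps a small real part \(\zeta\) of \(s=\zeta+i\omega\) as a regularization and replaces the vertex \(\mathcal{A}(\xi)\) of Appendix A by a constant of order \(\alpha k_{F}/\Delta_{0}\) (an assumption of this sketch); it therefore reproduces only the qualitative three-peak structure near \(\omega_{0,\pm}\), not the exact lineshape of Fig. 2(a).

```python
import numpy as np

# Illustrative parameters in units of Delta_0 = 1.
delta0, h0, omega_D = 1.0, 0.3, 30.0
alpha_kF = 0.1
lamN0 = 1.0 / np.arcsinh(omega_D / delta0)   # pairing constant fixed by the gap equation
zeta = 0.02                                  # small Re(s) regularizing the branch cuts

xi = np.linspace(-omega_D, omega_D, 40001)
dxi = xi[1] - xi[0]
E0 = np.sqrt(xi**2 + delta0**2)
A2 = (alpha_kF / delta0)**2 * np.ones_like(xi)   # placeholder for the vertex A(xi)^2 of Appendix A

def chi_dd(omega):
    """Bare pair susceptibility (13) built from the kernels (14) at s = zeta + i*omega."""
    s2 = (zeta + 1j * omega)**2
    K0 = lamN0 * np.sum(2 * xi**2 / (E0 * (s2 + 4 * E0**2))) * dxi
    Kp = lamN0 * np.sum(A2 * (E0 + h0) / (s2 + 4 * (E0 + h0)**2)) * dxi
    Km = lamN0 * np.sum(A2 * (E0 - h0) / (s2 + 4 * (E0 - h0)**2)) * dxi
    return 1.0 / (1.0 - K0 - Kp - Km)

omegas = np.linspace(0.5, 4.0, 800)
resp = np.abs([chi_dd(w) for w in omegas])
# resp develops peaks near 2*delta0 and 2*(delta0 +/- h0), i.e., the eigenfrequencies (19).
```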
### Coupling of Higgs modes and Zeeman field
We found that, in addition to an electromagnetic field, the gap dynamics in a spin-split superconductor can be excited by a nonstationary component of Zeeman field \(\mathbf{h}(t)=(h_{0}+\delta h(t))\mathbf{z}_{0}\). In this particular configuration the perturbation of the spin-splitting field \(\delta h(t)\) appears in the self-consistency equation (12) in the first order, which is the trace of a dot product \((\mathbf{h}_{0}\cdot\delta\mathbf{h})\). Note that the field \(\delta h(s)\) is weighted by the functions
\[\mathcal{F}_{\pm}(s)=\Big{\langle}\mathcal{A}(\xi)\mathcal{B}( \xi)\frac{(E_{0}\pm h_{0})}{s^{2}+4(E_{0}\pm h_{0})^{2}}\Big{\rangle}\propto \mathcal{O}\Big{(}\frac{\alpha^{2}k_{F}^{2}}{\Delta_{0}^{2}}\Big{)}, \tag{21}\]
which can be written as \(\mathcal{F}(s)=\mathcal{F}^{\prime}(\omega)+i\mathrm{sgn}(\omega\zeta) \mathcal{F}^{\prime\prime}(\omega)\) and have the same order in \(\alpha k_{F}\) and the same analytical properties as the kernels \(\mathcal{K}_{\pm}(s)\) in (16,18), because both functions \(\mathcal{A}^{2}(\xi)\) and \(\mathcal{A}(\xi)\mathcal{B}(\xi)\) are regular for \(\xi\in(-\omega_{D},\omega_{D})\). The presence of the singular points in the force term makes the analysis of Eq. (12) more sophisticated, despite the fact that these points are shared with other kernels.
Consider the general case of forced oscillations of the order parameter driven by some field \(\delta h(t)\) which is abruptly turned on at \(t=0\). Since we want to consider dynamical effects related only to the external force, we neglect the initial conditions \(\mathcal{I}(s)\) in Eq. (12), i.e., we assume an equilibrium system with \(\delta\Delta(t)=0\). It is convenient to introduce the linear response function
\[\chi_{\Delta h}(s)=\frac{\mathcal{F}_{+}(s)-\mathcal{F}_{-}(s)}{1-\mathcal{K}_ {0}(s)-\mathcal{K}_{+}(s)-\mathcal{K}_{-}(s)} \tag{22}\]
and write the self-consistency equation as \(\delta\Delta(s)=\chi_{\Delta h}(s)\delta h(s)\). One can obtain the expression for \(\delta\Delta(t)\) in the interval \(t\in[0,\infty)\) using inverse Laplace method:
\[\delta\Delta(t)=\frac{1}{2\pi i}\int_{-i\infty+\epsilon}^{i\infty+\epsilon} \chi_{\Delta h}(s)\delta h(s)e^{st}ds, \tag{23}\]
where \(\epsilon\) should be larger than the real part of the poles of \(\delta h(s)\). The integral can be evaluated using the closed contour shown in Fig. 1(a). Making sure that all integrals on infinitely large and small arcs vanish and applying the residue theorem we get
\[\delta\Delta(t)=\sum_{p}\chi_{\Delta h}(s_{p})e^{s_{p}t}\underset{ s=s_{p}}{\mathrm{Res}}\Big{[}\delta h(s)\Big{]} \tag{24}\] \[+\frac{2}{\pi}\int_{\omega_{-}}^{\infty}\mathrm{Im}\chi_{\Delta h }(s)\big{|}_{\zeta\rightarrow+0}\mathrm{Im}\Big{[}e^{i\omega t}\delta h(i \omega)\Big{]}d\omega.\]
The first term represents the contribution from the poles of the external field \(\delta h(t)\), while the second term is the contribution from the integrals along the branch cuts.
The imaginary part of the susceptibility (22) can be expanded up to the second order in \(\alpha k_{F}/\Delta_{0}\), since all the kernels \(\mathcal{F}_{\pm},\mathcal{K}_{\pm}\propto\mathcal{O}(\alpha^{2}k_{F}^{2}/ \Delta_{0}^{2})\). That allows one to distinguish different strongly dominant terms of the function \(\mathrm{Im}\chi_{\Delta h}(\omega)\) in (24) in the vicinity of different branch points (19) and estimate their contribution to an asymptotic expression for \(\delta\Delta(t)\). The detailed calculations are provided in Appendix C. Here we write the result for the superconducting gap oscillations, which at large times \(h_{0}^{-1}\ll t\) reads
\[\delta\Delta(t) \approx\sum_{p}\chi_{\Delta h}(s_{p})e^{s_{p}t}\underset{s=s_{p}} {\mathrm{Res}}\big{[}\delta h(s)\big{]}+\frac{4\Delta_{0}}{\pi\sqrt{\pi}} \frac{\big{[}\mathcal{F}_{+}^{\prime}(\omega_{0})-\mathcal{F}_{-}^{\prime}( \omega_{0})\big{]}}{\lambda N(0)}\frac{\mathrm{Im}\Big{[}\delta h(i\omega_{0} )e^{i(\omega_{0}t+\pi/4)}\Big{]}}{\sqrt{\Delta_{0}t}} \tag{25}\] \[+\frac{\sqrt{\pi}}{8\sqrt{2}}\frac{(\alpha k_{F})^{2}\Delta_{0}}{ (\Delta_{0}-h_{0})^{2}}\sum_{j=\pm}\frac{\lambda N(0)\big{[}1-\mathcal{K}_{0}^ {\prime}(\omega_{j})\big{]}}{\big{[}1-\mathcal{K}_{0}^{\prime}(\omega_{j}) \big{]}^{2}+\big{[}\mathcal{K}_{0}^{\prime\prime}(\omega_{j})\big{]}^{2}} \frac{\mathrm{Im}\Big{[}\delta h(i\omega_{j})e^{i(\omega_{j}t+\pi/4)}\Big{]}}{ \sqrt{\Delta_{0}t}}.\]
The first term here is related to the forced oscillations of the gap, caused by the Zeeman field \(\delta h(t)\). The last three terms correspond to the free oscillations triggered by \(\delta h(t)\) at \(t=0\) in the long time asymptote, with three characteristic frequencies (19) and square-root damping law. The latter can be interpreted as partial contribution from the Higgs modes in the spin-splitting field \(h_{0}\).
The eigenmodes decay as \(t\rightarrow\infty\) and in the long-time asymptote the forced oscillations prevail. Consider the steady-state behavior of \(\delta\Delta(t)\) (the first term in Eq. (25)) in the time interval restricted by the inelastic relaxation processes where the presented description of the coherent gap dynamics is valid. Assume the general harmonic perturbation \(\delta h(t)=\delta h_{0}e^{-\beta t}\cos(\omega t)\) at \(t\geq 0\) with a small finite damping factor \(\beta\to 0\). The amplitude of the driven gap perturbation \(\delta\Delta(t)\) is defined by the Zeeman field \(h_{0}\) and the susceptibility \(\chi_{\Delta h}(s_{0})\) taken at the pole of the force \(s_{0}=-\beta+i\omega\). The numerically integrated shape of \(|\chi_{\Delta h}(\omega)|\) is shown in Fig. 2(a). The response of the superconductor, as expected, has three resonance peaks at the frequencies \(\omega_{0,\pm}\). However, since the external field \(\delta\mathbf{h}(t)\) couples to the gap through the RSOC, the amplitude of the susceptibility in the vicinity of the resonances has the same order of smallness
\(|\chi_{\Delta h}(\omega_{0,\pm})|\propto\mathcal{O}(\alpha^{2}k_{F}^{2}/\Delta_{0}^ {2})\), which differs from the bare response (13).
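The pole contribution in Eq. (24) requires only the Laplace image of the drive and its residues. For the harmonic perturbation \(\delta h(t)=\delta h_{0}e^{-\beta t}\cos(\omega t)\) considered above these can be checked symbolically; the short sketch below (Python/sympy, with symbol names chosen by us) confirms that the drive has simple poles at \(s=-\beta\pm i\omega\) with residues \(\delta h_{0}/2\).

```python
# Minimal check (sympy) of the Laplace image and residues of the drive
# dh(t) = dh0 * exp(-beta*t) * cos(w*t), which enter the pole term of Eq. (24).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
dh0, beta, w = sp.symbols('dh0 beta omega', positive=True)

dh_t = dh0 * sp.exp(-beta * t) * sp.cos(w * t)
dh_s = sp.laplace_transform(dh_t, t, s, noconds=True)
print(sp.simplify(dh_s))            # dh0*(s + beta)/((s + beta)**2 + omega**2)

# Poles of the drive sit at s = -beta +/- i*omega; each residue equals dh0/2,
# so the forced part of dDelta(t) oscillates at the drive frequency with the
# envelope exp(-beta*t), weighted by chi_{Delta h}(s_p) as in Eq. (24).
s_plus = -beta + sp.I * w
print(sp.simplify(sp.residue(dh_s, s, s_plus)))   # dh0/2
```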
In this section, we have focused solely on the longitudinal component of the field perturbation \(\delta h(t)\mathbf{z}_{0}\) with respect to the stationary field \(h_{0}\mathbf{z}_{0}\). However, it is also possible to introduce a time-dependent transversal component \(\delta\mathbf{h}_{\perp}(t)\) and examine its dynamic interaction with the superconducting system in Eq. (5). This component generates triplet correlations, but these do not contribute to the order parameter since only singlet pairing is considered in (3). Consequently, in second-order perturbation theory with respect to \(\alpha k_{F}/\Delta_{0}\), there is no linear coupling between the field \(\delta\mathbf{h}_{\perp}(t)\) and the gap \(\delta\Delta(t)\). This outcome is unsurprising since the only true scalar in this regime, \(\delta\mathbf{h}_{\perp}\cdot\mathbf{h}_{0}\), is zero.
## IV Evolution of QP states in strong Zeeman field
In this section we address the case of a linearly growing spin-splitting field \(h(t)=\gamma t\), which can exceed the equilibrium value of the superconducting gap \(\Delta_{\rm eq}\equiv\Delta_{0}\) and thus induce a crossing of the two QP spectral branches \(E_{\uparrow+}(\xi_{k})\) and \(E_{\downarrow-}(\xi_{k})\) from different spin subbands [Fig. 3(a, c)]. In the collisionless regime and in the absence of RSOC the intersecting spectral branches do not interact, so that the occupation of the quasiparticle states defined at \(t=-\infty\) does not change in time. This means that the self-consistent gap function will not change even above the paramagnetic limit \(h(t)>\Delta_{0}\) and will be defined by the initial condition \(\Delta(t)=\Delta_{0}\). It is clear from general considerations that the spin-orbit coupling is capable of provoking an interplay between QP states with different spins; we investigate the mechanism of this interaction and its effect on the superconducting order parameter \(\Delta(t)\). As mentioned in Section II, we treat the RSOC energy as a small parameter \(\alpha k_{F}/\Delta\ll 1\). Therefore, we neglect the dependence of the equilibrium gap \(\Delta_{\rm eq}\) on \(\alpha\) and assume \(\Delta_{\rm eq}\equiv\Delta_{0}\).
### Adiabatic evolution of QP states
The evolution of the QP wavefunction governed by the TDBdG equations (1) can be described with the help of the general adiabatic ansatz
\[\tilde{\psi}_{k}(t)=\sum_{n}C_{kn}(t)\tilde{\Psi}_{kn}(t), \tag{26}\]
where \(\tilde{\Psi}_{kn}(t)\) are the instantaneous eigenstates of the Hamiltonian (2). Here all negative/positive energy terms with the indices \(n=\{\uparrow+,\downarrow+,\uparrow-,\downarrow-\}\) are taken into account. The coefficients \(C_{kn}(t)\) define the occupation of QP states and its temporal evolution. The initial conditions for \(C_{kn}(t)\) should be fixed by the equilibrium distribution at \(t=0\). In the case of a spin-split superconductor at zero temperature \(T=0\) there are two possible initial configurations: all QP states with energies below the Fermi level in the first (second) spin subband with \(\sigma=\uparrow(\downarrow)\) are fully occupied for all momenta with \(\xi_{k}\in(-\omega_{D},\omega_{D})\). In short notation this can be written as \(C_{kn}(t=0)=\delta_{n,l}\), where \(\delta_{n,n^{\prime}}\) is the Kronecker delta and \(l=\{\uparrow-,\downarrow-\}\). This means that for the given \(l\) we have \(C_{kl}(t=0)=1\), and all other \(C_{k(n\neq l)}(t=0)=0\).
The vector
\[\hat{C}_{k}(t)=(C_{k\uparrow+},C_{k\downarrow+},C_{k\uparrow-},C_{k\downarrow-} )^{T} \tag{27}\]
contains all the information about the dynamics of the QP states. Corresponding adiabatic temporal evolution can be described with the help of the unitary operator \(\hat{C}_{k}(t_{2})=\hat{U}_{k}(t_{2},t_{1})\hat{C}_{k}(t_{1})\) where \(\hat{U}_{k}=\mathrm{diag}(U_{k\uparrow+},U_{k\downarrow+},U_{k\uparrow-},U_{k \downarrow-})\) and
\[U_{kn}(t_{2},t_{1})=\exp\Big{(}-i\int_{t_{1}}^{t_{2}}E_{kn}(t)dt\Big{)}. \tag{28}\]
The interaction of the branches \(E_{k\uparrow+}(t)\) and \(E_{k\downarrow-}(t)\) leads to an _avoided crossing_ of the QP levels at fixed energy \(\xi_{k}\), with the splitting proportional to \(\alpha k_{F}\). Thus the adiabatic approximation is justified only for the levels with \(E_{k}(t)\gg\alpha k_{F}\), i.e., far enough from the crossing points. Therefore, for the Zeeman field \(h(t)\lesssim\Delta_{0}\) all nonadiabatic transitions are suppressed and the gap function defined by the self-consistency equation (3) is equal to the equilibrium value \(\Delta(t)=\Delta_{0}\).
### Transition evolution matrix
The avoided crossing between the spectral terms at \(h(t)\gtrsim\Delta_{0}\) should be described in terms of nonadiabatic dynamics. For this we consider the branch intersection as consecutive avoided crossings of pairs of QP states with fixed energy \(\xi_{k}\) at the time instants \(t_{0}(\xi_{k})=\sqrt{\xi_{k}^{2}+\Delta^{2}}/\gamma\) [Fig. 3]. For each crossing at \(\xi_{k}\in(-\omega_{D},\omega_{D})\) it is possible to formulate the time-dependent Landau-Zener-Stückelberg-Majorana (LZSM) problem [35], which describes the transitions between two QP states with different spins during their temporal evolution. Note that the resulting nonadiabatic tunneling is equivalent to a dynamical spin-flip process.
In general, the description of such tunneling (the LZSM problem) requires a joint solution of the TDBdG equation (1) and the self-consistency equation (3). However, some important results can be obtained analytically using certain approximations:
(i) If the time variation of the gap function \(\Delta(t)\) is small on the typical tunneling time scale \(\tau_{\rm LZ}\) (see Appendix D), then the tunneling of QP states is not affected by the dynamics of the order parameter.
(ii) On the other hand, the gap \(\Delta(t)\) is defined by all states in the range \(\xi_{k}\in(-\omega_{D},\omega_{D})\), and a time-dependent perturbation of the states caused by the dynamical LZSM transition makes a small contribution to the sum over all \(\xi_{k}\). Thus, one can neglect the transient dynamics of the coefficients \(\hat{C}_{k}(t)\) in the vicinity of the transition point for each \(\xi_{k}\)-th mode. This also means that one can investigate the tunneling problem with the help of the so-called transition evolution matrix [35] connecting the two adiabatic regimes before (\(t<t_{0}-\)) and after (\(t>t_{0}+\)) the avoided crossing [Fig. 3(e)]. These conditions allow one to effectively decouple the LZSM problem from the self-consistency equation and solve the two separately.
Taking into account all these assumptions, the time evolution of the vector \(\hat{C}_{k}(t)\) from the adiabatic ansatz (26) is described as follows
\[\hat{C}_{k}(t)=\begin{cases}\hat{U}_{k}(t,t_{0}+)\hat{S}_{\text{LZ}}\hat{U}_{k} (t_{0}-,0)\hat{C}_{k}(0),&t>t_{0}(\xi_{k})\\ \hat{U}_{k}(t,0)\hat{C}_{k}(0),&t<t_{0}(\xi_{k})\end{cases}. \tag{29}\]
Here the nonadiabatic transitions between QP states are included in the transition matrix \(\hat{S}_{\text{LZ}}\), which acts on the state vector \(\hat{C}_{k}(t)\) at the time instant \(t=t_{0}(\xi_{k})\). The matrix \(\hat{S}_{\text{LZ}}\) can be obtained by considering the interaction of the two intersecting energy branches \(E_{\uparrow+}\) and \(E_{\downarrow-}\) in the TDBdG equation (1). Using the so-called diabatic basis (the basis of the Hamiltonian (2) in the absence of RSOC), one gets a system of dynamical equations whose asymptotic solution forms a transition matrix describing the passage through the avoided intersection point. Then we return to the original adiabatic basis (26) and obtain the matrix \(\hat{S}_{\text{LZ}}\). The complete derivation of \(\hat{S}_{\text{LZ}}\) is presented in Appendix D and it reads
\[\hat{S}_{\text{LZ}}=\begin{pmatrix}\sqrt{p_{k}}&0&0&\sqrt{1-p_{k}}e^{i(\dots)} \\ 0&1&0&0\\ 0&0&1&0\\ -\sqrt{1-p_{k}}e^{-i(\dots)}&0&0&\sqrt{p_{k}}\end{pmatrix}, \tag{30}\]
where \((\dots)=\chi_{k}-\theta_{k}-\frac{\pi}{2}\text{sgn}(\alpha)\). The coefficient
\[p_{k}=\exp\Big{[}-\delta_{\text{LZ}}\frac{\Delta^{2}}{\xi_{k}^{2}+\Delta^{2}} \Big{]}\]
is expressed through the dimensionless LZSM parameter \(\delta_{\text{LZ}}=\pi\alpha^{2}k_{F}^{2}/\gamma\) and determines the probability of tunneling between QP states with different spins. The transition is accompanied by the appearance of the Stokes phase \(\chi_{k}\) (see Appendix D) and the phase \(\theta_{k}=\arg\big{(}k_{x}+ik_{y}\big{)}\).
To avoid confusion, we use the same notations for the spectral branches (4) before (\(t<t_{0}-\)) and after (\(t>t_{0}+\)) QP transitions, as shown in Fig. 3. Thereby, we do not need to keep track of the indices of the eigenvectors \(\hat{\Psi}_{kn}(t)\) and the evolution operators \(U_{kn}(t)\) from (28). It is sufficient that these functions take into account the permutation of the branches of the spectrum (4), so that all QP levels change their indices after the transition in accordance with the chosen notation.
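As an illustration of how Eqs. (28)-(30) are used in practice, the sketch below propagates an initial occupation vector through one avoided crossing: adiabatic phase factors before and after \(t_{0}\), glued by \(\hat{S}_{\text{LZ}}\). It is only a minimal sketch; the instantaneous energies are replaced by the \(\alpha=0\) spectrum, the Stokes phase is evaluated from the expression given in Appendix D, and all numerical values are illustrative assumptions rather than parameters of the paper.

```python
# Sketch of the piecewise evolution of C_k(t), Eqs. (28)-(30): adiabatic phase
# factors before and after the crossing, glued by the LZSM transition matrix.
# Parameter values and the energy callable are illustrative assumptions.
import numpy as np
from scipy.special import loggamma
from scipy.integrate import quad

def stokes_phase(p):
    """Stokes phase chi_k written through p_k (see Appendix D)."""
    d = -np.log(p) / (2 * np.pi)                  # effective LZSM exponent
    return np.pi / 4 + np.imag(loggamma(1 - 1j * d)) + d * (np.log(d) - 1)

def s_lz(p, chi, theta, sgn_alpha):
    """Transition matrix of Eq. (30) in the basis (up+, down+, up-, down-)."""
    ph = np.exp(1j * (chi - theta - 0.5 * np.pi * sgn_alpha))
    S = np.eye(4, dtype=complex)
    S[0, 0] = S[3, 3] = np.sqrt(p)
    S[0, 3] = np.sqrt(1 - p) * ph
    S[3, 0] = -np.sqrt(1 - p) * np.conj(ph)
    return S

def u_adiabatic(energies, t1, t2):
    """Diagonal evolution operator of Eq. (28) for a callable energies(t) -> 4-vector."""
    phases = np.array([quad(lambda t: energies(t)[n], t1, t2)[0] for n in range(4)])
    return np.diag(np.exp(-1j * phases))

# --- illustrative numbers (not taken from the paper) ---
Delta, xi, alpha_kF, gamma = 1.0, 0.3, 0.1, 0.05
E0 = np.hypot(xi, Delta)
t0 = E0 / gamma                                   # crossing time for this xi_k
p = np.exp(-np.pi * alpha_kF**2 / gamma * Delta**2 / E0**2)
S = s_lz(p, stokes_phase(p), theta=0.0, sgn_alpha=1.0)

def energies(t):                                  # alpha=0 spectrum as a simple stand-in
    h = gamma * t
    return np.array([E0 - h, E0 + h, -E0 - h, -E0 + h])

C0 = np.array([0, 0, 0, 1], dtype=complex)        # initially filled down- branch
C_after = u_adiabatic(energies, t0, 2 * t0) @ S @ u_adiabatic(energies, 0.0, t0) @ C0
print(np.abs(C_after)**2)                         # ~[1-p, 0, 0, p]
```

Since the adiabatic factors (28) are pure phases, only \(\hat{S}_{\text{LZ}}\) changes the occupations, reproducing the expected redistribution \((1-p_{k},0,0,p_{k})\).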
### Time dependence of superconducting gap
The time-dependent order parameter subjected to the field \(h(t)\gtrsim\Delta_{0}\) depends on both the adiabatic wave function (26) and nonadiabatic LZSM tunneling (29). The
Figure 3: (a,c) QP spectrum \(E_{k}\) from (4) for two values of the Zeeman field \(h(t)\), before (a) and after (c) the avoided crossing. Colored/empty circles correspond to filled/empty states. The parameters are chosen as follows: \(\Delta_{0}/E_{F}=0.01\), \(\alpha/E_{F}=0.0025\). In (e) the schematic temporal evolution of the filling probabilities \(\left|C_{k}\right|^{2}(t)\) for two states at fixed \(\xi_{k}\) is shown. The gray lines show a tunneling process similar to the real one in the vicinity of the avoided transition point \(t_{0}(\xi_{k})=\sqrt{\xi_{k}^{2}+\Delta_{0}^{2}}/\gamma\), while the red and blue lines refer to the transition-matrix approximation of the LZSM tunneling with probability \(p_{k}\). (b,d) The QP distribution function for one spin projection \(f_{\uparrow}(E,t)\) from Eq. (43) before and after the crossing of the spectral branches at \(\delta_{\text{LZ}}=0.5\).
calculation of \(\Delta(t)\) can be accomplished using the self-consistency equation (3), which takes the following form
\[\Delta(t)=-\frac{\lambda}{2}\sum_{l}\sum_{k}\sum_{n,n^{\prime}}C^{*}_{kn}(t)C_{kn^{\prime}}(t)\tilde{\Psi}^{\dagger}_{kn}\tilde{\tau}_{\Delta}\tilde{\Psi}_{kn^{\prime}}, \tag{31}\]
where the index \(l\) labels the different initial configurations of the occupation of the QP spectrum at \(t=0\) (see Section IV.1). The first configuration with \(C_{kn}(t=0)=\delta_{n,\uparrow-}\) corresponds to the occupation of all QP states belonging to the spectral branch \(E_{k,\uparrow-}\) for all momenta with \(\xi_{k}\in(-\omega_{D},\omega_{D})\). The evolution of the coefficients \(C_{kn}(t)\) is determined by Eq. (29) together with Eqs. (27-28). Since the branch \(E_{k,\uparrow-}\) does not cross any other branch, the coefficients \(C_{kn}(t)\) have trivial adiabatic dynamics, which can be written as follows
\[\hat{C}_{k}(t)=\begin{pmatrix}0\\ 0\\ U_{\uparrow-}(t,0)\\ 0\end{pmatrix}. \tag{32}\]
The second initial configuration with \(C_{kn}(t=0)=\delta_{n,\downarrow-}\) leads to the intersection of the filled branch \(E_{k,\downarrow-}\) and the empty branch \(E_{k,\uparrow+}\). Using equation (29) we obtain nontrivial dynamics of the states with LZSM tunneling, which reads
\[\hat{C}_{k}(t)= \tag{33}\] \[\begin{pmatrix}\sqrt{1-p_{k}}e^{i(\dots)}U_{\uparrow+}\big{(}t,t_ {0}+\big{)}U_{\downarrow-}\big{(}t_{0}-,0\big{)}\Theta\big{[}t-t_{0}\big{]}\\ 0\\ 0\\ \Big{(}\sqrt{p_{k}}\Theta\big{[}t-t_{0}\big{]}+\Theta\big{[}t_{0}-t\big{]} \Big{)}U_{\downarrow-}\big{(}t,0\big{)}\end{pmatrix}.\]
Here \((\dots)=\chi_{k}-\theta_{k}-\frac{\pi}{2}\text{sgn}(\alpha)\) and \(\Theta(t)\) is the Heaviside function.
Substituting coefficients (32) and (33) obtained from different initial conditions together with the QP wavefunctions \(\tilde{\Psi}_{kn}\) from (A3) into the self-consistency equation (31) we get
\[\Delta(t)=\lambda\sum_{|\xi_{k}|>\sqrt{h^{2}-\Delta^{2}}}u_{0}v_{0} \tag{34}\] \[+\lambda\sum_{|\xi_{k}|<\sqrt{h^{2}-\Delta^{2}}}\Bigg{[}\Big{(}|\overset{=1}{C_{k\uparrow-}}|^{2}+|\overset{\propto p_{k}}{C_{k\downarrow-}}|^{2}-|\overset{\propto 1-p_{k}}{C_{k\uparrow+}}|^{2}\Big{)}\frac{u_{0}v_{0}}{2}\] \[+u_{0}u_{1}ie^{-i\theta_{k}}C^{*}_{k\uparrow+}C_{k\downarrow-}+v_{0}v_{1}(-i)e^{i\theta_{k}}C^{*}_{k\downarrow-}C_{k\uparrow+}\Bigg{]}.\]
The last two terms are of the order of \(\mathcal{O}(\alpha k_{F}/\Delta)\), so it is convenient to write the gap function as follows
\[\Delta(t)=\Delta_{h}[h(t)]+\delta\Delta(t). \tag{35}\]
We have identified two contributions of significantly different origin: \(\Delta_{h}\) is defined by the amplitude of the LZSM tunneling and depends on time only through the Zeeman field \(h(t)\); \(\delta\Delta(t)\propto\mathcal{O}(\alpha k_{F}/\Delta)\) is defined by the cross terms, reflects interference effects between the QP wavefunctions caused by the LZSM transitions, and depends on time explicitly.
If one neglects the small perturbation \(\delta\Delta(t)\) in (35) then it becomes possible to get a simplified self-consistency equation for \(\Delta_{h}[h(t)]\) from Eq. (34). In general form it reads
\[\Delta_{h}=\Delta_{0}\exp\Bigg{(}\int_{1}^{h/\Delta_{h}}\frac{e^{-\delta_{ \text{LZ}}/s^{2}}-1}{\sqrt{s^{2}-1}}ds\Bigg{)}, \tag{36}\]
where \(\delta_{\text{LZ}}=\pi\alpha^{2}k_{F}^{2}/\gamma\). The numerically integrated function \(\Delta_{h}[h(t)]\) is shown in Fig. 4 and below we discuss its behavior for different tunneling regimes.
(i) The value \(\delta_{\text{LZ}}=0\) corresponds to zero RSOC effects (\(\alpha=0\)), so that the spectral branches do not change after crossing and the trivial solution for the gap \(\Delta_{h}=\Delta_{0}\) holds.
(ii) The limit of \(\delta_{\text{LZ}}\ll 1\) means that \(\gamma\gtrsim\alpha^{2}k_{F}^{2}\) and the spectral branches intersect nonadiabatically, or so rapidly that they do not feel the RSOC. The Landau-Zener tunneling is suppressed and the gap has a weak dependence on the Zeeman field at \(h(t)>\Delta_{0}\):
\[\Delta_{h}\approx\Delta_{0}\exp\Big{(}-\delta_{\text{LZ}}\frac{\sqrt{h^{2}(t)- \Delta_{h}^{2}}}{h(t)}\Big{)}.\]
(iii) In the opposite limit of \(\delta_{\text{LZ}}\gg 1\) with \(\gamma\ll\alpha^{2}k_{F}^{2}\) the QPs undergo strong spin-flip tunneling during an almost adiabatic avoided crossing. This leads to the effective formation of the triplet superconducting correlations (or related triplet component of the anomalous Green function [39]) even for the small RSOC energy \(\alpha k_{F}/\Delta\ll 1\). Such dynamically generated correlations are determined by the rate of field change \(\gamma\) and their effect on the gap can significantly exceed the static mixing of singlet-triplet pairs for \(\alpha\neq 0\)[32]. As a result,
Figure 4: Functional dependence of the superconducting gap \(\Delta_{h}\) on the spin-splitting field from expression (36) for different values of \(\delta_{\text{LZ}}\). The dashed-dotted line separates two regions where \(\Delta_{h}\lessgtr h\). The dashed line shows the critical value of the field \(h(t)=\Delta_{0}\) and the red circle marks the point of change in the behavior of the gap \(\Delta_{h}\) in the region \(\Delta_{h}>h\). After this point the equilibrium solution \(\Delta_{h}=\Delta_{0}\) should jump to one of the solutions fixed by the parameter \(\delta_{\text{LZ}}\).
the singlet gap function (3) is suppressed and the self-consistency equation reads
\[\Delta_{h}\approx\begin{cases}\sqrt{\Delta_{0}(2h(t)-\Delta_{0})}\quad\text{for}\quad\Delta_{h}>h/\delta_{\text{LZ}},\\ \Delta_{0}\exp\Big{(}-\delta_{\text{LZ}}\frac{\sqrt{h^{2}(t)-\Delta_{h}^{2}}}{h(t)}\Big{)}\frac{\exp(\sqrt{\delta_{\text{LZ}}}\sqrt{\delta_{\text{LZ}}-1})}{\sqrt{\delta_{\text{LZ}}+\sqrt{\delta_{\text{LZ}}-1}}}\\ \quad\text{for}\quad\Delta_{h}<h/\delta_{\text{LZ}}.\end{cases}\]

(iv) The critical value \(\delta_{\text{LZ}}\to\infty\) corresponds to complete Landau-Zener spin-flip tunneling, so that there are no QPs at the energies \(E>0\). In this case we have restored the thermodynamically metastable branch \(\Delta_{h}\approx\sqrt{\Delta_{0}(2h(t)-\Delta_{0})}\) from the well-known static case [30].
The actual behavior of the gap in time is determined by switching between the different branches of \(\Delta_{h}[h]\) as the Zeeman field \(h(t)\) increases. The first solution, which is fixed by the initial condition \(\Delta_{h}(t=0)=\Delta_{0}\), holds until \(h(t)=\Delta_{0}\), where \(\Delta_{h}\) jumps to the unique remaining solution \(\Delta_{h}[h]\) for the given \(\delta_{\text{LZ}}\) (see the red point and black dashed line in Fig. 4). The exact dynamics of the gap in the jump region is a difficult question, because due to the rapid change of \(\Delta_{h}\) the decoupling of the LZSM problem from the self-consistency equation may no longer be guaranteed [Section IV.2]. Qualitatively, one expects the jump at \(t\approx\Delta_{0}/\gamma\) to be smeared both by the non-zero static contribution of the SOC to the gap (since the equilibrium gap value depends on \(\alpha\)) and by the QP tunneling dynamics. At large times \(h(t)\gg\Delta_{0}\) there are no transitions between the QP states (\(p_{k}\to 1\)), since the splitting between the spectral branches becomes zero, and therefore the gap tends to the constant asymptote \(\Delta_{h}(\infty)\).
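For completeness, a minimal numerical recipe for Eq. (36): substituting \(s=\cosh u\) removes the integrable \(1/\sqrt{s^{2}-1}\) singularity at the lower limit, after which \(\Delta_{h}\) can be found by root finding at each value of \(h\). The bracket used below selects a single solution branch with \(\Delta_{h}<\Delta_{0}<h\) and is our sketch-level choice; reproducing all branches of Fig. 4 (and the jump at \(h=\Delta_{0}\)) requires tracking the branches explicitly.

```python
# Numerical solution of the implicit gap equation (36), Delta_h[h], for h > Delta_0.
# The substitution s = cosh(u) removes the 1/sqrt(s^2-1) endpoint singularity.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

Delta0 = 1.0

def rhs(Delta_h, h, d_lz):
    """Right-hand side of Eq. (36) for a trial value Delta_h (requires h > Delta_h)."""
    u_max = np.arccosh(h / Delta_h)
    integral, _ = quad(lambda u: np.exp(-d_lz / np.cosh(u) ** 2) - 1.0, 0.0, u_max)
    return Delta0 * np.exp(integral)

def gap(h, d_lz):
    """One root of Delta_h = rhs(Delta_h); branch selection is a sketch-level choice."""
    return brentq(lambda D: D - rhs(D, h, d_lz),
                  1e-4 * Delta0, min(h, Delta0) * (1 - 1e-9))

for h in (1.05, 1.5, 2.0, 3.0):
    print(h, gap(h, d_lz=0.5))        # gap decreasing toward a constant value
```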
### QP interference effects
In addition to the dominating term \(\Delta_{h}\), the gap equation (34) also contains a small rapidly oscillating term
\[\delta\Delta(t)=\lambda \sum_{|\xi_{k}|<\sqrt{h^{2}-\Delta^{2}}}u_{0}u_{1}ie^{-i\theta}C^ {*}_{k\uparrow+}C_{k\downarrow-}\] \[+v_{0}v_{1}(-i)e^{i\theta}C^{*}_{k\downarrow-}C_{k\uparrow+},\]
arising from the interference of the QP states which have experienced LZSM transitions. In its structure this function resembles the collective Higgs mode, which is excited in a natural way during the redistribution of states in the QP spectrum. Let us look at it in more detail. Using the time-dependent coefficients (32-33) we obtain
\[\frac{\delta\Delta(t)}{\lambda N(0)}=\int_{-\sqrt{h^{2}-\Delta^{2}}}^{\sqrt{ h^{2}-\Delta^{2}}}\sqrt{p_{k}}\sqrt{1-p_{k}}G(\xi,t)\cos(D_{k}(t))d\xi, \tag{38}\]
where we introduce the dynamical phase \(D_{k}(t)=2\int_{t_{0}}^{t}(E_{0}-h(t))dt+\chi_{k}+\pi\) and the function \(G(\xi,t)=\text{sgn}(\alpha)(u_{0}u_{1}+v_{0}v_{1})\). The function \(G(\xi)\) is proportional to \(\alpha k_{F}/\Delta\), which means that \(\delta\Delta(t)\) is parametrically small and can be considered against the background of the main change in the gap \(\Delta_{h}\) from the equation (36).
For the integral (38) it is easy to estimate the asymptotic behavior at large times \(\Delta_{0}/\gamma\ll t\). The dynamical phase is written as \(D_{k}(t)=-(E_{0}^{2}+\gamma^{2}t^{2})/\gamma+\chi_{k}+\pi+2E_{0}t\) for the spin-splitting field \(h(t)=\gamma t\). Here \(2E_{0}t\) is a fast oscillating term at \(t\to\infty\), and one can use a stationary phase approximation for the \(\xi\)-integration in Eq. (38) with the stationary phase point \(\xi=0\). Using Eq. (45) for \(G(\xi=0,t)\), we find the asymptotic behavior of \(\delta\Delta(t)\):
\[\delta\Delta(t)\approx\lambda N(0)e^{-\delta_{\text{LZ}}/2}\sqrt{1-e^{-\delta _{\text{LZ}}}}\frac{|\alpha|k_{F}}{2(\gamma t-\Delta_{h})}\sqrt{\frac{\pi \Delta_{h}}{t}} \tag{39}\]
\[\times\cos\Big{(}\frac{(\gamma t-\Delta_{h})^{2}}{\gamma}+\frac{3\pi}{4}-\chi _{0}\Big{)},\]
where \(\Delta_{h}[h(t\to\infty)]\) from Eq. (36) is a constant determined by \(\delta_{\text{LZ}}\). The result obtained means that the collective interference between the two QP states at each \(\xi_{k}\) after LZSM crossing behaves at large times as a modified Higgs mode. Due to linear dependence \(h(t)\), this mode has a modulated frequency and polynomial damping law \(\propto t^{-3/2}\) arising from the inhomogeneous broadening of the mode. Note that for large times only the contribution from the point \(\xi=0\) survives, so the amplitude of \(\delta\Delta(t)\) does not depend on the number of redistributed states in the QP spectrum.
If the linear growth of the spin-splitting field \(h(t)\) stops at a certain value \(h_{f}>\Delta_{0}\) after the redistribution of some of the QP states, then the accumulated dynamic phase \(D_{k}(t)\) and the gap fluctuation will depend only on this value \(h_{f}\)
\[\delta\Delta(t)\approx\lambda N(0)e^{-\delta_{\text{LZ}}/2}\sqrt{1- e^{-\delta_{\text{LZ}}}}\frac{|\alpha|k_{F}}{2(h_{f}-\Delta_{h})} \tag{40}\] \[\times\sqrt{\frac{\pi\Delta_{h}}{t}}\cos\Big{(}2(h_{f}-\Delta_{h}) t-\frac{3\pi}{4}-\frac{\Delta_{h}^{2}-h_{f}^{2}}{\gamma}+\chi_{0}\Big{)}.\]
The specific spectral distortion occurring between the two branches \(E_{k\uparrow+}\) and \(E_{k\downarrow-}\) during the Landau-Zener dynamics at \(h(t)<h_{f}\) acts as an initial perturbation for the gap function at \(t=h_{f}/\gamma\). The free gap dynamics at \(t>h_{f}/\gamma\) resembles the Higgs mode with \(\delta\Delta(t)\propto\cos(2(h_{f}-\Delta_{h})t)/\sqrt{t}\) at the frequency \(\omega=2|\Delta_{h}-h_{f}|\) (or \(\omega=\omega_{-}\) in our previous notation) with the standard damping law. Interestingly, the amplitude of this mode is proportional to \(|\alpha|k_{F}\) instead of \(\alpha^{2}k_{F}^{2}\), as would be expected in the case of small perturbations [Section III]. Such an amplification is a direct consequence of the intersection of the two specific spectral branches and the subsequent nonadiabatic dynamics. Thus, this mode turns out to be leading in comparison with other nonadiabatic corrections arising due to the interaction of all QP spectral branches. Note that the method for calculating the self-consistency equation developed in Section III can be combined with the Landau-Zener problem (43) and all corrections can be computed within the perturbation theory.
### Density of states and distribution function
Rearrangement of the spectrum as a result of the intersection of the spectral branches naturally leads to a change in the structure of the density of states (DOS), which becomes time dependent. Since the temporal evolution of the spectrum is adiabatic, except in the small region where the crossing occurs, one can use a quasistatic description of the DOS. For small RSOC the DOS for one spin projection can be written in terms of the Bogoliubov-de Gennes functions
\[N_{\uparrow}(E,t)\approx \sum_{k}\sum_{n=\uparrow+,\uparrow-}|u_{0}|^{2}\delta\Big{(}E-E_{ kn}[h(t)]\Big{)} \tag{41}\] \[+|v_{0}|^{2}\delta\Big{(}E+E_{kn}[h(t)]\Big{)}.\]
Here we use the static QP amplitudes \(u_{0}\) and \(v_{0}\) (see Eq. 10) to distinguish the particle/hole contributions, and \(E_{k\uparrow\pm}\) are defined in Eq. (4). The calculation of \(N_{\uparrow}\) is cumbersome, because the RSOC shifts the spectral branches and opens a minigap \(\propto\alpha k_{F}\) at \(E=0\) [Appendix E]. For a small RSOC parameter these changes are negligible and one can use the standard expression for the DOS, which now depends on time through the spin-splitting field
\[\frac{N_{\uparrow}(E,t)}{N(0)}\approx\frac{|E+h(t)|}{\sqrt{(E+h(t))^{2}-\Delta _{h}[h(t)]^{2}}}. \tag{42}\]
Here the gap function \(\Delta_{h}\) is taken from (36) and two coherence peaks are present at \(E=\pm\Delta_{h}[h(t)]-h(t)\).
The amplitude of the QP wavefunction \(\tilde{\psi}_{k}(t)\) from (26) contains the information about the filling (or occupation) of the \(\xi_{k}\)-th state. More precisely, the coefficients \(|C_{k\uparrow\pm}(t)|^{2}\) and \(|C_{k\downarrow\pm}(t)|^{2}\) can serve as effective distribution functions \(f_{\uparrow\downarrow}(E)\) for QPs with different spin projections. As discussed in Section IV.3, the temporal evolution of these coefficients is defined by the LZSM problem, and for spin-up states one has
\[|C_{k\uparrow-}(t)|^{2}=1,\] \[|C_{k\uparrow+}(t)|^{2}=(1-p_{k})\Theta\big{[}\sqrt{h^{2}(t)- \Delta_{h}^{2}}-\xi_{k}\big{]},\]
which can be rewritten as a distribution function
\[f_{\uparrow}(E,t)\approx\begin{cases}0,&E>0\\ 1-\exp\Big{[}-\frac{\delta_{L\pm}\Delta_{h}^{2}[h(t)]}{(E+h(t))^{2}}\Big{]},& \Delta_{h}-h<E<0\\ 1,&E<\Delta_{h}-h\end{cases} \tag{43}\]
The dependence \(f_{\uparrow}(E)\) is shown in Fig. 3 for \(\delta_{\rm LZ}=0.5\). The most pronounced change of the distribution function occurs at \(E\approx\Delta_{h}-h\), since for large QP energies the LZSM tunneling is suppressed. For the opposite spin projection the DOS \(N_{\downarrow}(E)\) has a similar structure (42) with \(h\rightarrow-h\), while the corresponding distribution function \(f_{\downarrow}(E)\) is different and is given by Eqs. (32-33). The DOS structure and the effective distribution function enable the calculation of the system's optical or transport response, which can be experimentally measured.
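Equations (42) and (43) are explicit, so the quasistatic DOS and the effective distribution can be evaluated directly on an energy grid. The sketch below does this for one instant of the ramp; the values of \(h\), \(\Delta_{h}\) and \(\delta_{\rm LZ}\) are illustrative assumptions.

```python
# Quasistatic spin-up DOS (42) and effective distribution function (43) on an
# energy grid, for one instant of the ramp h(t). Parameter values are illustrative.
import numpy as np

Delta0, d_lz = 1.0, 0.5
h, Delta_h = 1.5, 0.75           # a field above Delta_0 and a placeholder Eq. (36) gap

def dos_up(E):
    """N_up(E)/N(0) from Eq. (42); zero inside the (shifted) gap."""
    arg = (E + h) ** 2 - Delta_h ** 2
    return np.where(arg > 0, np.abs(E + h) / np.sqrt(np.abs(arg)), 0.0)

def f_up(E):
    """Effective spin-up distribution from Eq. (43)."""
    out = np.zeros_like(E)
    mid = (E > Delta_h - h) & (E < 0)
    out[mid] = 1.0 - np.exp(-d_lz * Delta_h ** 2 / (E[mid] + h) ** 2)
    out[E <= Delta_h - h] = 1.0
    return out

E = np.linspace(-3.0, 1.0, 9)
for e, n, f in zip(E, dos_up(E), f_up(E)):
    print(f"E={e:+.2f}  N_up/N(0)={n:5.2f}  f_up={f:4.2f}")
```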
### Dynamical magnetization of QP gas
Nonadiabatic LZSM tunneling of QP states causes a spin imbalance in the spectrum, which results in the appearance of nonzero dynamical magnetization. Using the notations from the previous section we get an expression for the z-component of the magnetization per unit volume
\[m_{z}(t)=\mu_{B}\sum_{\rm i.c.}\tilde{\psi}_{k}^{\dagger}(t)\tilde{\tau}_{m} \tilde{\psi}_{k}(t), \tag{44}\]
where \(\tilde{\tau}_{m}=(\hat{\tau}_{0}+\hat{\tau}_{z})\otimes\hat{\sigma}_{z}/2\); the vector \(\tilde{\psi}_{k}(t)\) is a solution of the TDBdG problem (26) and "i.c." here means the summation over all initial conditions (see Eq. (3)). Due to the symmetry and homogeneity of the problem for the field \({\bf h}(t)=h(t){\bf z}_{0}\), the transversal components of the magnetization \(m_{x,y}(t)\) are zero.
Taking the dynamical amplitudes \(C_{n}(t)\) from (32-33) and implementing the same procedure as for the self-consistency equation (34-35), we find that the magnetization can be written as follows
\[m_{z}(t)=m_{h}[h(t)]+\delta m(t). \tag{45}\]
As in the case of the gap equation (35), we have two contributions: \(m_{h}[h(t)]\), which is a slow function of time arising from the redistribution of the quasiparticle states, and \(\delta m(t)\propto\alpha k_{F}\), which is a small oscillatory term originating from the interference of the redistributed states. The first term can be easily calculated with the help of the quasiparticle density
\[n_{\sigma}(t)=\int N_{\sigma}(E,t)f_{\sigma}(E,t)dE, \tag{46}\]
where \(\sigma=\uparrow,\downarrow\), and the DOS \(N_{\sigma}(E)\) and distribution function \(f_{\sigma}(E)\) are defined in the previous subsection. The corresponding spin imbalance results in the dynamical magnetization \(m_{h}[h(t)]=\mu_{B}(n_{\uparrow}-n_{\downarrow})\), which is shown in Fig. 5(a).
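With the closed forms (42) and (43), the quasiparticle density (46) for the spin-up projection reduces to a one-dimensional quadrature. The sketch below integrates over the window \((\Delta_{h}-h,0)\) where the LZSM redistribution modifies the occupation, using the substitution \(E=(\Delta_{h}-h)+u^{2}\) to tame the coherence-peak singularity; the spin-down counterpart (and hence \(m_{h}\)) follows the same pattern once \(f_{\downarrow}\) is constructed from Eqs. (32)-(33). Parameter values are illustrative.

```python
# Quadrature of Eq. (46) for the spin-up quasiparticle density, restricted to the
# window (Delta_h - h, 0) where the LZSM redistribution modifies the occupation.
# The substitution E = (Delta_h - h) + u^2 removes the coherence-peak singularity.
# Parameter values are illustrative placeholders.
import numpy as np
from scipy.integrate import quad

N0, d_lz = 1.0, 0.5
h, Delta_h = 1.5, 0.75

def dos_up(E):                       # Eq. (42)
    arg = (E + h) ** 2 - Delta_h ** 2
    return N0 * abs(E + h) / np.sqrt(arg) if arg > 0 else 0.0

def f_up(E):                         # Eq. (43) inside the window (Delta_h - h, 0)
    return 1.0 - np.exp(-d_lz * Delta_h ** 2 / (E + h) ** 2)

u_max = np.sqrt(h - Delta_h)
n_up, err = quad(lambda u: 2 * u * dos_up(Delta_h - h + u**2) * f_up(Delta_h - h + u**2),
                 0.0, u_max)
print(n_up, err)                     # spin-up density generated by the tunneling
```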
For \(h(t)<\Delta_{0}\) there is no crossing of the QP spectral branches and, according to our model, there is no tunneling between QP states; therefore the dynamical magnetization is zero. Once the intersection has occurred at \(h(t)=\Delta_{0}\), the distribution functions \(f_{\uparrow,\downarrow}(E)\) transform and a nonzero spin imbalance \(n_{\uparrow}-n_{\downarrow}\) is generated. Due to the jump of the \(\Delta_{h}\) function at \(h(t)=\Delta_{0}\) [Fig. 4] the magnetization at this point also has a sharp discontinuity. At large times the tunneling of QP states is suppressed, therefore the magnetization saturates to a constant value determined by the parameter \(\delta_{\rm LZ}\). Obviously, an increase in \(\delta_{\rm LZ}\) makes the spin-flip tunneling more efficient and thereby increases the maximum value of \(m_{h}\). The second term in (45) resembles the Higgs mode term (37) and gives a negligible contribution to \(m_{z}(t)\), therefore it can be discarded.
In addition, one can compute the dynamical susceptibility of the QP gas in a Zeeman field of the general form \(h(t)=\mu_{\rm B}H(t)\). It is known that the orbital and spin parts of the magnetic susceptibility can be separated in the case of small spin-orbit effects [39]. Since we consider a homogeneous system and neglect all orbital effects, only the spin part plays a role, which can be written as follows
\[\chi^{\rm sp}[h(t)]=\mu_{B}\frac{\partial m_{h}}{\partial h}. \tag{47}\]
The ratio of the numerically calculated susceptibility \(\chi^{\rm sp}[h(t)]\) to the normal-state susceptibility \(\chi^{\rm sp}_{N}=2\mu_{B}^{2}N(0)\)[40] is shown in Fig. 5(b). It is seen that the spin-flip tunneling in the QP spectrum provokes a paramagnetic response of the superconducting condensate. The function (47) should have a singularity \(\chi^{\rm sp}\propto(h(t)-\Delta_{0})^{-1/2}\) in the vicinity of \(h(t)\approx\Delta_{0}\), which is defined by the shape of the QP spectrum at \(k\approx k_{F}\) and has the same origin as the coherence peak in the DOS (42). However, due to the jump of the order parameter \(\Delta_{h}\) at this point we observe shifted peaks, which should be smeared out near \(h(t)=\Delta_{0}\) if a more realistic model of the LZSM tunneling [Section IV.2] is taken into account. We note again that we discuss only the dynamic contribution to the susceptibility, which, generally speaking, has to be added to the static one, which is not equal to zero at \(T=0\) in the presence of SOC [39; 40].
## V Discussion and experimental perspectives
We analyzed the coherent dynamics of the superconducting condensate in the presence of a Zeeman field and SOC in the collisionless regime. First, it was established that the Higgs mode of the superconducting gap is sensitive to the spin-splitting field \(h_{0}\) and can be directly triggered either by its harmonic perturbation \(\delta h(t)\) or by an external laser pulse. Second, it was shown that the field \(h(t)\sim t\) can provoke an avoided crossing of the QP spectral branches, at which nonadiabatic spin-flip tunneling of the QPs between the different branches occurs. The corresponding redistribution of the QPs in the spectrum leads to the appearance of the dependence \(\Delta[h(t)]\) and to the generation of interference effects. The emerging spin imbalance reveals itself in the effective dynamical distribution function and in the generation of a weak magnetization of the QP gas.
We propose superconductor-ferromagnet hybrid structures as an experimental platform for detecting the described effects. The ferromagnetic layer can serve as a source of both Rashba spin-orbit coupling and an exchange field. Since it is important to remove orbital effects from the system, the most suitable geometry for superconductor is either thin film or one-dimensional nanowire [41].
The excitation and observation of Higgs modes in superconductors require frequencies of the order of \(\Delta_{0}/\hbar\), which vary from the far infrared to the terahertz range. The laser excitation of the modes seems to be the most practical and feasible, and the detection can be implemented using a THz light source with ultrafast pump-probe spectroscopy or third-harmonic generation measurements [10; 11; 12]. Generation of a fast oscillating component of the homogeneous Zeeman field \(\delta h(t)\) inside a superconductor is a difficult task, especially in the THz range. Some proposals can be made amid encouraging progress in the ultrafast optical control of magnetization in various materials [42; 43; 44]. Ferromagnetic resonance induces a time-dependent stray field which, in combination with geometric constraints, may serve as a Zeeman field inside a thin superconducting film, as was discussed for a two-dimensional electron gas in Ref. [45]. Once the Higgs modes are excited by the Zeeman field, THz spectroscopy measurements can again be implemented.
It is possible to make basic parameter estimates for the experimental observation of LZSM transitions in the QP spectrum. For example, consider \(\Delta_{0}=0.1\) meV (for \(T_{c}\approx 1\) K) and \(\alpha k_{F}\sim 10^{-3}\) meV \(\ll\Delta_{0}\). Then the constraint for the small tunneling rate is \(\hbar\gamma\gtrsim\alpha^{2}k_{F}^{2}\) in dimensional units, which is equivalent to \(\gamma\gtrsim 10^{-3}\) meV/ns. Consider inelastic relaxation of the QPs with a typical time \(\tau_{\rm ph}\sim 100\) ns in the case of electron-phonon scattering at low temperatures [46; 47]. The collisionless regime is maintained at \(t\ll\tau_{\rm ph}\), which corresponds to times \(t\lesssim 10\) ns. Under such conditions the field \(h(t)=\gamma t\sim\Delta_{0}\) is achievable only for \(\gamma\sim 10^{-2}\) meV/ns. The measurements of the various properties of the superconducting
Figure 5: (a) Dynamical magnetization \(m_{h}\) per unit volume induced by the nonadiabatic tunneling of QP states and (b) corresponding spin susceptibility \(\chi^{\rm sp}\) versus time-dependent spin-splitting field \(h(t)\) for different values of \(\delta_{\rm LZ}\).
condensate above the Pauli limit at short times can be implemented with the help of ultrafast THz techniques, such as pump-probe measurements of the optical conductivity [12].
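The order-of-magnitude estimates quoted above follow from \(\hbar\approx 6.6\times 10^{-4}\) meV\(\cdot\)ns; a short check with the numbers of the text (the script below is only an illustration of the arithmetic):

```python
# Order-of-magnitude check of the estimates in the text, using hbar in meV*ns units.
hbar_meV_ns = 6.582e-4            # hbar ~ 0.6582 meV*ps = 6.582e-4 meV*ns
Delta0 = 0.1                      # meV  (T_c ~ 1 K)
alpha_kF = 1e-3                   # meV
tau_ph = 100.0                    # ns, inelastic electron-phonon time

gamma_min = alpha_kF**2 / hbar_meV_ns          # hbar*gamma >~ (alpha k_F)^2
print(f"gamma >~ {gamma_min:.1e} meV/ns")      # ~1.5e-3 meV/ns
t_max = 0.1 * tau_ph                           # collisionless window t << tau_ph
print(f"gamma needed to reach h ~ Delta_0 within {t_max:.0f} ns: "
      f"{Delta0 / t_max:.1e} meV/ns")          # ~1e-2 meV/ns
```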
###### Acknowledgements.
This work has been supported by ANR OPTOFLUXONICS, ANR SUPERFAST, the LIGHT S&T Graduate Program and the Russian Science Foundation (Grant No. 21-72-10161). A.S.M. acknowledges support from the State Contract of the Ministry of Science and Higher Education of the Russian Federation No. 075-03-2022-106 (project FSMG-2023-0011) of Moscow Institute of Physics and Technology.
## Appendix A Eigenvectors
The instantaneous eigenvectors of the Hamiltonian \(\tilde{\mathcal{H}}(k,t)\) from Eq. (2) can be written as follows
\[\tilde{\Psi}_{kn}(t)=\frac{1}{\sqrt{1+a_{1n}^{2}+a_{2n}^{2}+a_{3n}^{2}}}\begin{pmatrix} 1\\ -ia_{1n}e^{i\theta_{k}}\\ -ia_{2n}e^{i\theta_{k}}\\ a_{3n}\end{pmatrix}, \tag{30}\]
where we have defined the phase \(\theta_{k}=\arg\big{(}k_{x}+ik_{y}\big{)}\) and real coefficients
\[a_{1n} =\frac{(h+E_{kn})^{2}-E_{0}^{2}-\alpha^{2}k^{2}}{2\alpha k(\xi_{k }+h)}, \tag{31}\] \[a_{2n} =\frac{\alpha k}{\Delta}-\frac{E_{kn}-\xi_{k}-h}{\Delta}a_{1n},\] \[a_{3n} =\frac{E_{kn}-\xi_{k}+h}{\Delta}-\frac{\alpha k}{\Delta}a_{1n}.\]
The instantaneous eigenvalues of \(\tilde{\mathcal{H}}(k,t)\) are
\[E_{kn}(t)\equiv E_{k\sigma\pm}(t)=\] \[\pm\sqrt{E_{0}^{2}+\alpha^{2}k^{2}+h^{2}(t)\mp\text{sgn}(\sigma)2 \sqrt{\xi_{k}^{2}\alpha^{2}k^{2}+h^{2}(t)E_{0}^{2}}},\]
where \(E_{0}=\sqrt{\xi_{k}^{2}+\Delta^{2}}\); the subscript \(\pm\) refers to the spectral branch above/below the Fermi level and \(\sigma=\{\uparrow,\downarrow\}\) denotes the spin subband. Note that the Hamiltonian (2) implies the symmetry relations between the energies, \(E_{k\uparrow+}=-E_{k\downarrow-}\) and \(E_{k\downarrow+}=-E_{k\uparrow-}\), and between the corresponding eigenvectors, \(\tilde{\Psi}_{k\uparrow+}=i\hat{\tau}_{y}\otimes\hat{\sigma}_{z}\tilde{\Psi}_{k\downarrow-}^{*}\) and \(\tilde{\Psi}_{k\downarrow+}=i\hat{\tau}_{y}\otimes\hat{\sigma}_{z}\tilde{\Psi}_{k\uparrow-}^{*}\), where \(\hat{\tau}_{i}(\hat{\sigma}_{i})\) is the Pauli matrix in the Nambu (spin) space.
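The explicit spectrum above makes it straightforward to verify the avoided-crossing statement used in Section IV: at \(\xi_{k}=0\) the splitting between \(E_{k\uparrow+}\) and \(E_{k\downarrow-}\) is \(2\sqrt{(\Delta-h)^{2}+\alpha^{2}k^{2}}\), minimal and equal to \(2\alpha k\) at \(h=\Delta\). A short numerical check with illustrative parameters:

```python
# Numerical check of the avoided crossing between E_{k,up,+} and E_{k,down,-}
# using the instantaneous spectrum given above. Illustrative parameter values.
import numpy as np

Delta, alpha_k = 1.0, 0.05

def E_up_plus(xi, h):
    E0sq = xi**2 + Delta**2
    return np.sqrt(E0sq + alpha_k**2 + h**2
                   - 2.0 * np.sqrt(xi**2 * alpha_k**2 + h**2 * E0sq))

h = np.linspace(0.5, 1.5, 2001)
splitting = 2.0 * E_up_plus(0.0, h)        # E_{down,-} = -E_{up,+}
i = np.argmin(splitting)
print(h[i], splitting[i], 2 * alpha_k)     # minimum at h ~ Delta, equal to 2*alpha*k
```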
For the case of weak SOC \(\alpha k_{F}\ll\{E_{F},h(t),\Delta(t)\}\) the eigenvectors (30) can be expanded up to the first order in \(\alpha k_{F}/\Delta\) as follows
\[\tilde{\Psi}_{k\uparrow+}\approx\begin{pmatrix}u_{0}\\ -iu_{1}e^{i\theta_{k}}\\ -iv_{1}e^{i\theta_{k}}\\ v_{0}\end{pmatrix},\quad\tilde{\Psi}_{k\downarrow+}\approx\begin{pmatrix}iu_{1}e^{-i\theta_{k}}\\ -u_{0}\\ v_{0}\\ -iv_{1}e^{-i\theta_{k}}\end{pmatrix}, \tag{32}\] \[\tilde{\Psi}_{k\uparrow-}\approx\begin{pmatrix}-v_{0}\\ iv_{1}e^{i\theta_{k}}\\ -iu_{1}e^{i\theta_{k}}\\ u_{0}\end{pmatrix},\quad\tilde{\Psi}_{k\downarrow-}\approx\begin{pmatrix}-iv_{1}e^{-i\theta_{k}}\\ v_{0}\\ u_{0}\\ -iu_{1}e^{-i\theta_{k}}\end{pmatrix}.\]
Here we define equilibrium QP amplitudes
\[u_{0}=\frac{1}{\sqrt{2}}\sqrt{1+\frac{\xi_{k}}{\sqrt{\xi_{k}^{2}+\Delta^{2}}}},\quad v_{0}=\frac{1}{\sqrt{2}}\sqrt{1-\frac{\xi_{k}}{\sqrt{\xi_{k}^{2}+ \Delta^{2}}}}, \tag{33}\]
and \(u_{1},v_{1}\propto\mathcal{O}(\alpha k_{F}/\Delta)\) correspond to the triplet component of the QP wavefunctions.
The function \(G(\xi,t)=\text{sgn}(\alpha)(u_{0}u_{1}+v_{0}v_{1})\) from Eq. (38) can be found at the stationary phase point \(\xi=0\) by expanding the coefficients (31) in the small parameter \(\alpha k_{F}/\Delta\) and putting \(\Delta\approx\Delta_{h}\) (see Eq. (36)). The result reads
\[G(0,t)\approx\frac{|\alpha|k_{F}}{2(h(t)-\Delta_{h})}. \tag{34}\]
## Appendix B Derivation of linearized self-consistency equation
We start with the linearized dynamical equations (8, 11)
\[i\frac{\partial}{\partial t}\delta C_{km}(t)=\sum_{n}\tilde{ \Psi}_{m}^{\dagger}\tilde{\mathcal{V}}(t)\tilde{\Psi}_{n}e^{-i(E_{n}-E_{m})t} \big{(}\delta_{n,l}+\delta C_{kn}(t)\big{)}, \tag{35}\]
where the indices \(n,m=\{\uparrow+,\downarrow+,\uparrow-,\downarrow-\}\) and two possible initial configurations (10) are marked as \(l=\{\uparrow-,\downarrow-\}\). The compact form of the self-consistency equation for the gap (9) is
\[\Delta_{\text{eq}}+\delta\Delta(t)= \tag{36}\] \[-\frac{\lambda}{2}\sum_{l}\sum_{k}\sum_{n,n^{\prime}}\big{(} \delta_{n,l}+\delta C_{kn}(t)^{*}\big{)}\big{(}\delta_{n^{\prime},l}+\delta C _{kn^{\prime}}(t)\big{)}\] \[\times e^{-i(E_{n^{\prime}}-E_{n})t}\tilde{\Psi}_{kn}^{\dagger} \tilde{\tau}_{\Delta}\tilde{\Psi}_{kn^{\prime}}.\]
As was mentioned in Section II, we neglect the effect of RSOC on the equilibrium value of the gap, which can be taken as \(\Delta_{\text{eq}}=\Delta_{0}\). It also makes sense to omit the negligibly small corrections from the RSOC to the energy spectrum, so one can put \(E_{n}\equiv E_{k\sigma\pm}\approx\pm E_{0}-\text{sgn}(\sigma)h_{0}\).
The equations (14, 15) can be simplified and written as follows
\[\frac{\partial f_{1}}{\partial t}=ie^{i(2(E_{0}-h_{0})t)}[\mathcal{A }\delta\Delta(t)-\mathcal{B}\delta h(t)], \tag{16}\] \[\frac{\partial f_{2}}{\partial t}=ie^{i(2(E_{0}+h_{0})t)}[ \mathcal{A}\delta\Delta(t)+\mathcal{B}\delta h(t)],\] \[\frac{\partial g}{\partial t}=i\frac{\xi}{E_{0}}e^{i(2E_{0}t)} \delta\Delta(t),\] \[\delta\Delta(t)=\Big{\langle}\frac{\mathcal{A}}{2}\mathrm{Re}f_{ 1}(t)e^{-i(2(E_{0}-h_{0})t)}\Big{\rangle}\] \[+\Big{\langle}\frac{\mathcal{A}}{2}\mathrm{Re}f_{2}(t)e^{-i(2(E_ {0}+h_{0})t)}\Big{\rangle}+\Big{\langle}\frac{\xi}{E_{0}}\mathrm{Re}g(t)e^{-i( 2E_{0}t)}\Big{\rangle}.\]
We have used the notation \(\big{\langle}\dots\big{\rangle}=\lambda\sum_{k}\approx\lambda N(0)\int_{- \omega_{D}}^{\omega_{D}}d\xi\) and introduced new complex-valued functions
\[f_{1}\equiv-ie^{i\theta}\delta C_{\uparrow+},\quad g_{1}\equiv- \delta C_{\downarrow+}, \tag{17}\] \[f_{2}\equiv-ie^{-i\theta}\delta C_{\downarrow+},\quad g_{2} \equiv-\delta C_{\uparrow+},\] \[g=\frac{g_{1}+g_{2}}{2},\]
where the subscript corresponds to the two possible initial conditions. The functions
\[\mathcal{A}(\xi)=2(u_{0}u_{1}+v_{0}v_{1})\propto\mathcal{O}( \alpha k_{F}/\Delta_{0}), \tag{18}\] \[\mathcal{B}(\xi)=2(u_{0}v_{1}+u_{1}v_{0})\propto\mathcal{O}( \alpha k_{F}/\Delta_{0})\]
have the lowest order in the \(\alpha k_{F}\) parameter [Appendix A] and are even in \(\xi\). All terms odd in \(\xi\) in Eq. (16) are related to the imaginary part of \(\delta\Delta(t)\) and vanish due to the approximate electron-hole symmetry of the BdG Hamiltonian (2), by virtue of which the density of states is approximated as \(N(\xi)\approx N(0)\) in the \(\langle...\rangle\)-integration [48].
Applying the Laplace transform \(f(s)=\int_{0}^{\infty}e^{-st}f(t)dt\) with \(s=i\omega+\zeta\) (where \(\zeta\to 0\)) for Eq. (16) we obtain the gap equation in the complex plane, which is found to be
\[\delta\Delta(s)=\delta\Delta(s)\Big{\langle}\frac{2\xi^{2}}{E_{0 }}\frac{1}{s^{2}+4E_{0}^{2}}\Big{\rangle}+\delta\Delta(s)\Big{\langle} \mathcal{A}^{2}(\xi)\frac{(E_{0}+h_{0})}{s^{2}+4(E_{0}+h_{0})^{2}}\Big{\rangle} +\delta\Delta(s)\Big{\langle}\mathcal{A}^{2}(\xi)\frac{(E_{0}-h_{0})}{s^{2}+4 (E_{0}-h_{0})^{2}}\Big{\rangle} \tag{19}\] \[+\delta h(s)\Big{\langle}\mathcal{A}(\xi)\mathcal{B}(\xi)\frac{ (E_{0}+h_{0})}{s^{2}+4(E_{0}+h_{0})^{2}}\Big{\rangle}-\delta h(s)\Big{\langle} \mathcal{A}(\xi)\mathcal{B}(\xi)\frac{(E_{0}-h_{0})}{s^{2}+4(E_{0}-h_{0})^{2} }\Big{\rangle}\] \[+\Big{\langle}f_{1}^{\prime}(0)\frac{\mathcal{A}(\xi)}{2}\frac{ s}{s^{2}+4(E_{0}-h_{0})^{2}}\Big{\rangle}+\Big{\langle}f_{1}^{\prime\prime}(0) \frac{\mathcal{A}(\xi)(E_{0}-h_{0})}{s^{2}+4(E_{0}-h_{0})^{2}}\Big{\rangle}\] \[+\Big{\langle}f_{2}^{\prime}(0)\frac{\mathcal{A}(\xi)}{2}\frac{ s}{s^{2}+4(E_{0}+h_{0})^{2}}\Big{\rangle}+\Big{\langle}f_{2}^{\prime\prime}(0) \frac{\mathcal{A}(\xi)(E_{0}+h_{0})}{s^{2}+4(E_{0}+h_{0})^{2}}\Big{\rangle}\] \[+\Big{\langle}g^{\prime}(0)\frac{\xi}{E_{0}}\frac{s}{s^{2}+4E_{0 }^{2}}\Big{\rangle}+\Big{\langle}g^{\prime\prime}(0)\frac{2\xi E_{0}}{s^{2}+4E _{0}^{2}}\Big{\rangle}.\]
Here \(f=f^{\prime}+if^{\prime\prime}\) and the initial conditions \(f_{1,2}(0)=f_{1,2}(t=0)\), \(g(0)=g(t=0)\) implicitly contain the initial value of the gap perturbation \(\delta\Delta(t=0)\). Note that terms with initial conditions will be discarded when calculating the superconductor response (see Section III). Now we can single out functions of \(s\) with different singularities in the complex plane and denote them using short notations
\[\mathcal{K}_{0}(s)=\Big{\langle}\frac{2\xi^{2}}{E_{0}}\frac{1}{s^ {2}+4E_{0}^{2}}\Big{\rangle}, \tag{20}\] \[\mathcal{K}_{\pm}(s)=\Big{\langle}\mathcal{A}^{2}(\xi)\frac{(E_{0 }\pm h_{0})}{s^{2}+4(E_{0}\pm h_{0})^{2}}\Big{\rangle},\] \[\mathcal{F}_{\pm}(s)=\Big{\langle}\mathcal{A}(\xi)\mathcal{B}(\xi) \frac{(E_{0}\pm h_{0})}{s^{2}+4(E_{0}\pm h_{0})^{2}}\Big{\rangle}\]
and \(\mathcal{I}(s)\), which consists of all initial perturbations at \(t=0\). The functions \(\mathcal{A}\) and \(\mathcal{B}\) are of the first order in the small parameter \(\alpha k_{F}/\Delta\), therefore we have
\[\mathcal{K}_{0}(s)\propto\mathcal{O}\Big{(}\frac{\alpha^{0}k_{F }^{0}}{\Delta_{0}^{0}}\Big{)},\] \[\mathcal{K}_{\pm}(s),\mathcal{F}_{\pm}(s)\propto\mathcal{O}\Big{(} \frac{\alpha^{2}k_{F}^{2}}{\Delta_{0}^{2}}\Big{)}.\]
It can be shown that the difference \([\mathcal{F}_{+}(s)-\mathcal{F}_{-}(s)]\) is proportional to \(h_{0}\). This allows one to write the terms with \(\delta h(t)\) in (19) as \(\delta h(s)[\mathcal{F}_{+}(s)-\mathcal{F}_{-}(s)]\) or \((\mathbf{h}_{0}\cdot\delta\mathbf{h}(s))[\mathcal{F}_{+}(s)-\mathcal{F}_{-}(s)]/h_{0}\), where both vectors are oriented along the \(\mathbf{z}_{0}\) axis. By rewriting equation (19) with the newly introduced functions (20) we get the self-consistency equation (12).
## Appendix C Long-time behavior of \(\delta\Delta(t)\)
The susceptibility \(\text{Im}\chi_{\Delta h}(s)|_{\zeta\to 0}=\text{Im}\chi_{\Delta h}(\omega)\) in Eq. (24) has strongly dominant terms in the vicinity of different branch points in the interval \(\omega\in[\omega_{-},\infty)\). In order to demonstrate this, the function \(\text{Im}\chi_{\Delta h}(s)\) can be expanded in a series up to the second order in the parameter \(\alpha k_{F}/\Delta\), and this expansion must be carried out accurately near the branch points and may differ in different regions of \(\omega\). Therefore, we assume that the value of the integral is determined by these dominant contributions of \(\text{Im}\chi_{\Delta h}(\omega)\) and can be evaluated sequentially as \(\int_{\omega_{-}}^{\infty}=\int_{\omega_{-}}^{\omega_{0}}+\int_{\omega_{0}}^{ \omega_{+}}+\int_{\omega_{+}}^{\infty}.\) Let us consider the small regions \(\Omega\ll\omega_{0,\pm}\) in the vicinity of these points separately.
**1:**\(\omega\approx\omega_{-}+\Omega\)
Close to the point \(\omega=\omega_{-}\) the term \(\mathcal{F}_{-}^{\prime\prime}(\omega)\) dominates:
\[\mathcal{F}_{-}^{\prime\prime}(\Omega)\approx-\lambda N(0)\frac{\pi\Delta_{0 }\mathcal{A}(0)\mathcal{B}(0)}{4\sqrt{2\Delta_{0}\Omega}}\propto\frac{1}{ \sqrt{\Omega}}. \tag{10}\]
Although the kernel \(1-\mathcal{K}_{0}^{\prime}(\omega)\) goes to zero at \(\omega\to\omega_{0}\), there is no singularity in \(\chi_{\Delta h}(\omega)\) at this point due to the small terms of the order of \((\alpha k_{F})^{2}\) in the denominator. Therefore, the region in the vicinity of \(\omega_{0}\) does not contribute to the integral. Thus, the behavior of the first integral for \(\omega\in[\omega_{-},\omega_{0})\) at large times \(h_{0}t\gg 1\) can be estimated as follows
\[\int_{\omega_{-}}^{\omega_{0}}\approx\text{Im}\Bigg{[}\frac{ \delta h(i\omega_{-})e^{i\omega_{-}t}}{\left[1-\mathcal{K}_{0}^{\prime}(\omega _{-})\right]}\int_{0}^{\omega_{0}-\omega_{-}}\mathcal{F}_{-}^{\prime\prime}( \Omega)e^{i\Omega t}d\Omega\Bigg{]} \tag{11}\] \[\approx-\lambda N(0)\frac{\pi^{3/2}\Delta_{0}\mathcal{A}(0) \mathcal{B}(0)}{4\sqrt{2\Delta_{0}t}}\frac{\text{Im}\Big{[}\delta h(i\omega_{ -})e^{i(\omega_{-}t+\pi/4)}\Big{]}}{\left[1-\mathcal{K}_{0}^{\prime}(\omega_{ -})\right]}.\]
**2:**\(\omega\approx\omega_{0}+\Omega\)
In the vicinity of the branch point \(\omega=\omega_{0}\) the main contribution is defined by
\[\mathcal{K}_{0}^{\prime\prime}(\Omega)\approx-\lambda N(0)\frac{\pi}{2\Delta_ {0}}\sqrt{\Delta_{0}\Omega}\propto\sqrt{\Omega}. \tag{12}\]
Thus at large times \(h_{0}t\gg 1\) we get
\[\int_{\omega_{0}}^{\omega_{+}}=\int_{\omega_{0}}^{\omega_{+}} \frac{\left[\mathcal{F}_{+}^{\prime}(\omega)-\mathcal{F}_{-}^{\prime}(\omega) \right]}{\mathcal{K}_{0}^{\prime\prime}(\omega)}\text{Im}\Big{[}e^{i\omega t }\delta h(i\omega)\Big{]}d\omega \tag{13}\] \[\approx-\frac{2\sqrt{\Delta_{0}}}{\sqrt{\pi t}}\frac{\left[ \mathcal{F}_{+}^{\prime}(\omega_{0})-\mathcal{F}_{-}^{\prime}(\omega_{0}) \right]}{\lambda N(0)}\text{Im}\Big{[}\delta h(i\omega_{0})e^{i(\omega_{0}t+ \pi/4)}\Big{]}.\]
**3:**\(\omega\approx\omega_{+}+\Omega\)
For the last branch point \(\omega=\omega_{+}\) the kernel \(\mathcal{F}_{+}^{\prime\prime}(\omega)\) dominates:
\[\mathcal{F}_{+}^{\prime\prime}(\Omega)\approx\lambda N(0)\frac{\pi\Delta_{0} \mathcal{A}(0)\mathcal{B}(0)}{4h_{0}\sqrt{2\Delta_{0}\Omega}}\propto\frac{1}{ \sqrt{\Omega}}. \tag{14}\]
At large times \(h_{0}t\gg 1\) we get
\[\int_{\omega_{+}}^{\infty}\approx\int_{\omega_{+}}^{\infty}-\frac{\left[1-\mathcal{K}_{0}^{\prime}(\omega_{+})\right]\mathcal{F}_{+}^{\prime\prime}(\omega)}{\left[1-\mathcal{K}_{0}^{\prime}(\omega_{+})\right]^{2}+\left[\mathcal{K}_{0}^{\prime\prime}(\omega_{+})\right]^{2}} \tag{15}\] \[\times\text{Im}\Big{[}e^{i\omega t}\delta h(i\omega)\Big{]}d\omega\] \[\approx-\lambda N(0)\frac{\pi^{3/2}\Delta_{0}\mathcal{A}(0)\mathcal{B}(0)}{4\sqrt{2\Delta_{0}t}}\frac{\left[1-\mathcal{K}_{0}^{\prime}(\omega_{+})\right]}{\left[1-\mathcal{K}_{0}^{\prime}(\omega_{+})\right]^{2}+\left[\mathcal{K}_{0}^{\prime\prime}(\omega_{+})\right]^{2}}\] \[\times\text{Im}\Big{[}\delta h(i\omega_{+})e^{i(\omega_{+}t+\pi/4)}\Big{]}.\]
**4:** total integral
By combining all three contributions (11, 13, 14) we get Eq. (25) of the main text. Note that the discussed approximations work for \(0<h_{0}<\Delta_{0}\). The functions \(\mathcal{A},\mathcal{B}\) from (14) at the point \(\xi=0\) can be calculated using the wavefunctions (12). By expanding the coefficients (10) we obtain
\[\mathcal{A}(0)\mathcal{B}(0)=\mathcal{A}^{2}(0)\approx\frac{(\alpha k_{F})^{2}} {(\Delta_{0}-h_{0})^{2}}. \tag{16}\]
Also, the analytical expressions for the kernel \(\mathcal{K}_{0}(\omega)\) at \(\omega>0\) can be found:
\[\frac{1-\mathcal{K}_{0}^{\prime}(\omega)}{\lambda N(0)}= \tag{17}\] \[\begin{cases}\frac{\sqrt{4\Delta_{0}^{2}-\omega^{2}}}{\omega} \arctan\Big{(}\frac{\omega}{\sqrt{4\Delta_{0}^{2}-\omega^{2}}}\Big{)}&\text{ for}\quad\omega<2\Delta_{0}\\ -\frac{\sqrt{\omega^{2}-4\Delta_{0}^{2}}}{\omega}\frac{1}{2}\ln\Big{(}\frac{ \omega-\sqrt{\omega^{2}-4\Delta_{0}^{2}}}{\omega+\sqrt{\omega^{2}-4\Delta_{0}^{ 2}}}\Big{)}&\text{for}\quad\omega>2\Delta_{0}\end{cases},\]
\[\frac{\mathcal{K}_{0}^{\prime\prime}(\omega)}{\lambda N(0)}=-\frac{\pi}{2}\frac{ \sqrt{\omega^{2}-4\Delta_{0}^{2}}}{\omega}\Theta[\omega-2\Delta_{0}]. \tag{18}\]
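The analytic kernel above is easy to evaluate directly; the sketch below checks the behavior at the branch point \(\omega=2\Delta_{0}\), where \(1-\mathcal{K}_{0}^{\prime}\) vanishes and \(\mathcal{K}_{0}^{\prime\prime}\) switches on (both in units of \(\lambda N(0)\)).

```python
# Direct evaluation of the analytic kernel K_0(omega) near the branch point
# omega = 2*Delta_0: the real combination 1 - K_0' (in units of lambda*N(0))
# vanishes there, while the imaginary part K_0'' switches on.
import numpy as np

Delta0 = 1.0

def one_minus_K0p(w):                 # (1 - K_0'(w)) / (lambda N(0))
    if w < 2 * Delta0:
        r = np.sqrt(4 * Delta0**2 - w**2)
        return (r / w) * np.arctan(w / r)
    r = np.sqrt(w**2 - 4 * Delta0**2)
    return -(r / w) * 0.5 * np.log((w - r) / (w + r))

def K0pp(w):                          # K_0''(w) / (lambda N(0))
    return -np.pi / 2 * np.sqrt(max(w**2 - 4 * Delta0**2, 0.0)) / w

for w in (1.8, 1.99, 2.0 + 1e-9, 2.01, 2.2):
    print(f"w={w:.3f}  1-K0'={one_minus_K0p(w):+.4f}  K0''={K0pp(w):+.4f}")
```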
Finally, the expression with the kernels \(\mathcal{F}_{\pm}(\omega)\) from (13) can be calculated numerically for small \(\alpha k_{F}\ll\Delta_{0}\):
\[\frac{\left[\mathcal{F}_{+}^{\prime}(\omega_{0})-\mathcal{F}_{-}^{\prime}( \omega_{0})\right]}{\lambda N(0)} \tag{19}\] \[=h_{0}\fint_{0}^{\omega_{D}}\mathcal{A}(\xi)\mathcal{B}(\xi)\frac{h_ {0}^{2}-\xi^{2}-2\Delta_{0}^{2}}{(\xi^{2}-h_{0}^{2})^{2}-4\Delta_{0}^{2}h_{0} ^{2}}d\xi.\]
## Appendix D Derivation and solution of LZSM problem
The dynamics of two levels with an avoided crossing can be conveniently described with the help of the so-called _diabatic_ basis formed by the instantaneous eigenfunctions \(\tilde{\Phi}_{kn}^{0}(t)\) of the time-dependent Hamiltonian (1) at \(\alpha=0\). Note that for \(\alpha=0\) the eigenstates do not depend on \(h(t)\) at all and consist only of the Bogoliubov amplitudes \(u_{0}\) and \(v_{0}\) (one can use (12) and put \(\alpha=0\) there). The complete solution of the time-dependent Hamiltonian can be written as follows
\[\tilde{\Psi}_{k}(t)=\sum_{n}C^{d}_{kn}(t)\tilde{\Phi}^{0}_{kn}(t), \tag{104}\]
where \(n=\{\uparrow+,\downarrow+,\uparrow-,\downarrow-\}\). In order to avoid confusion with the adiabatic basis in (26), the superscript "d" is used to denote the diabatic basis. The time-dependent coefficients obey the following equation derived from (1):
\[i\frac{\partial}{\partial t}C^{d}_{m}=\sum_{n}C^{d}_{n}\tilde{\Phi}^{0\dagger}_{km}\Big{[}\tilde{\mathcal{H}}(t)-i\frac{\partial}{\partial t}\Big{]}\tilde{\Phi}^{0}_{kn}. \tag{105}\]
Note that here \(\tilde{\mathcal{H}}(t)\tilde{\Phi}^{0}_{kn}\neq E_{n}(t)\tilde{\Phi}^{0}_{kn}\). Keeping in mind that \(\tilde{\Phi}^{0}_{kn}(t)\) depends on time only through \(\Delta(t)\), one can rewrite (105) as follows
\[i\frac{\partial}{\partial t}\begin{pmatrix}C^{d}_{\uparrow+}\\ C^{d}_{\downarrow+}\\ C^{d}_{\uparrow-}\\ C^{d}_{\downarrow-}\end{pmatrix}=\begin{pmatrix}E_{0}-h(t)&-\frac{\xi_{k}}{E_{0}}i\alpha ke^{-i\theta_{k}}&i\frac{\xi_{k}}{2E_{0}^{2}}\frac{\partial\Delta}{\partial t}&\frac{\Delta}{E_{0}}i\alpha ke^{-i\theta_{k}}\\ \frac{\xi_{k}}{E_{0}}i\alpha ke^{i\theta_{k}}&E_{0}+h(t)&-\frac{\Delta}{E_{0}}i\alpha ke^{i\theta_{k}}&i\frac{\xi_{k}}{2E_{0}^{2}}\frac{\partial\Delta}{\partial t}\\ -i\frac{\xi_{k}}{2E_{0}^{2}}\frac{\partial\Delta}{\partial t}&\frac{\Delta}{E_{0}}i\alpha ke^{-i\theta_{k}}&-E_{0}-h(t)&\frac{\xi_{k}}{E_{0}}i\alpha ke^{-i\theta_{k}}\\ -\frac{\Delta}{E_{0}}i\alpha ke^{i\theta_{k}}&-i\frac{\xi_{k}}{2E_{0}^{2}}\frac{\partial\Delta}{\partial t}&-\frac{\xi_{k}}{E_{0}}i\alpha ke^{i\theta_{k}}&-E_{0}+h(t)\end{pmatrix}\begin{pmatrix}C^{d}_{\uparrow+}\\ C^{d}_{\downarrow+}\\ C^{d}_{\uparrow-}\\ C^{d}_{\downarrow-}\end{pmatrix}, \tag{106}\]
where \(E_{0}=\sqrt{\xi_{k}^{2}+\Delta^{2}}\). One can remove the phase \(\theta_{k}=\arg\big{(}k_{x}+ik_{y}\big{)}\) from (106) by the unitary operator
\[\hat{U}_{\theta}=\begin{pmatrix}e^{i(\frac{\pi}{4}-\frac{\theta_{k}}{2})\hat{ \sigma}_{x}}&0\\ 0&e^{i(\frac{\pi}{4}-\frac{\theta_{k}}{2})\hat{\sigma}_{x}}\\ \end{pmatrix}, \tag{107}\]
so that in the new basis we have
\[i\frac{\partial}{\partial t}\begin{pmatrix}\tilde{C}^{d}_{\uparrow+}\\ \tilde{C}^{d}_{\downarrow+}\\ \tilde{C}^{d}_{\uparrow-}\\ \tilde{C}^{d}_{\downarrow-}\end{pmatrix}=\begin{pmatrix}E_{0}-h(t)&-\frac{\xi_{k}}{E_{0}}\alpha k&i\frac{\xi_{k}}{2E_{0}^{2}}\frac{\partial\Delta}{\partial t}&\frac{\Delta}{E_{0}}\alpha k\\ -\frac{\xi_{k}}{E_{0}}\alpha k&E_{0}+h(t)&\frac{\Delta}{E_{0}}\alpha k&i\frac{\xi_{k}}{2E_{0}^{2}}\frac{\partial\Delta}{\partial t}\\ -i\frac{\xi_{k}}{2E_{0}^{2}}\frac{\partial\Delta}{\partial t}&\frac{\Delta}{E_{0}}\alpha k&-E_{0}-h(t)&\frac{\xi_{k}}{E_{0}}\alpha k\\ \frac{\Delta}{E_{0}}\alpha k&-i\frac{\xi_{k}}{2E_{0}^{2}}\frac{\partial\Delta}{\partial t}&\frac{\xi_{k}}{E_{0}}\alpha k&-E_{0}+h(t)\end{pmatrix}\begin{pmatrix}\tilde{C}^{d}_{\uparrow+}\\ \tilde{C}^{d}_{\downarrow+}\\ \tilde{C}^{d}_{\uparrow-}\\ \tilde{C}^{d}_{\downarrow-}\end{pmatrix}. \tag{108}\]
We assume that the time evolution of the gap function \(\Delta(t)\) is adiabatic on the timescale of the problem (108). Therefore one can take \(\Delta\) to be constant during a transition with the typical time \(\sim\tau_{\rm LZ}\). Since the most pronounced dynamics occurs between the two crossing branches, it is convenient to consider the interaction of only the corresponding terms \(\tilde{C}^{d}_{k\uparrow+}\) and \(\tilde{C}^{d}_{k\downarrow-}\) [Fig. 3]. Hence, one can extract an effective two-level problem for the crossing levels:
\[i\frac{\partial}{\partial t}\begin{pmatrix}\tilde{C}^{d}_{k\uparrow+}\\ \tilde{C}^{d}_{k\downarrow-}\end{pmatrix}=\begin{pmatrix}E_{0}-\gamma t&\frac{\Delta}{E_{0}}\alpha k\\ \frac{\Delta}{E_{0}}\alpha k&-E_{0}+\gamma t\end{pmatrix}\begin{pmatrix}\tilde{C}^{d}_{k\uparrow+}\\ \tilde{C}^{d}_{k\downarrow-}\end{pmatrix}. \tag{109}\]
This system can be viewed as the LZSM problem, which allows an exact solution [35]. However, as discussed in Section IV.2, one can neglect the transient dynamics of the \(C_{k}(t)\) coefficients in the gap equation (3) and use the transition-matrix approach instead. Thus, we need to obtain the relation between the long-time asymptotes of the functions \(\tilde{C}^{d}_{k\uparrow+}(t)\) and \(\tilde{C}^{d}_{k\downarrow-}(t)\) before (\(t_{0}-\)) and after (\(t_{0}+\)) the transition at the point \(t_{0}(\xi_{k})=\sqrt{\xi_{k}^{2}+\Delta^{2}}/\gamma\). Here we use the short notation (\(t_{0}\mp\)) \(\approx t_{0}\mp\tau_{\rm LZ}/2\). The asymptotic solution of the problem (109) is well known [35] and reads
\[\begin{pmatrix}\tilde{C}^{d}_{k\uparrow+}(t_{0}+)\\ \tilde{C}^{d}_{k\downarrow-}(t_{0}+)\end{pmatrix}= \tag{110}\] \[\begin{pmatrix}\sqrt{p_{k}}&-\mathrm{sgn}(\alpha)\sqrt{1-p_{k}}e^{i\chi_{k}}\\ \mathrm{sgn}(\alpha)\sqrt{1-p_{k}}e^{-i\chi_{k}}&\sqrt{p_{k}}\end{pmatrix}\] \[\times\begin{pmatrix}\tilde{C}^{d}_{k\uparrow+}(t_{0}-)\\ \tilde{C}^{d}_{k\downarrow-}(t_{0}-)\end{pmatrix},\]
where the coefficient
\[p_{k}=\exp\Big{[}-\delta_{\rm LZ}\frac{\Delta^{2}}{\xi_{k}^{2}+\Delta^{2}} \Big{]}\]
with \(\delta_{\rm LZ}=\pi\alpha^{2}k^{2}/\gamma\approx\pi\alpha^{2}k_{F}^{2}/\gamma\), defines the probability of tunneling. Here \(\chi_{k}=\pi/4+\arg\Gamma(1+i\ln p_{k}/2\pi)-\ln p_{k}(\ln(-\ln p_{k}/2\pi)-1)/2\pi\) is the Stokes phase, with \(\Gamma\) the Gamma function.
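The asymptotic result can be cross-checked by integrating the reduced two-level problem (109) directly and comparing the final occupation with \(p_{k}\). The sketch below does this for illustrative parameters, starting on the filled branch well before the crossing; the residual discrepancy comes from the finite integration window.

```python
# Direct integration of the reduced two-level problem (109) and comparison of the
# final occupation with the asymptotic LZSM probability p_k.
# Parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

Delta, xi, alpha_k, gam = 1.0, 0.0, 0.1, 0.05
E0 = np.hypot(xi, Delta)
v = Delta * alpha_k / E0                      # off-diagonal coupling
t0 = E0 / gam                                 # crossing time

def rhs(t, y):
    C = y[:2] + 1j * y[2:]
    H = np.array([[E0 - gam * t, v], [v, -(E0 - gam * t)]])
    dC = -1j * H @ C
    return np.concatenate([dC.real, dC.imag])

y0 = np.concatenate([[0.0, 1.0], [0.0, 0.0]])     # start on the filled down- branch
sol = solve_ivp(rhs, (0.0, 2 * t0), y0, rtol=1e-9, atol=1e-11, max_step=0.05)
C_final = sol.y[:2, -1] + 1j * sol.y[2:, -1]

p_numeric = abs(C_final[1]) ** 2                  # probability of no spin flip
p_lzsm = np.exp(-np.pi * alpha_k**2 / gam * Delta**2 / E0**2)
print(p_numeric, p_lzsm)                          # agree up to finite-window effects
```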
For small energies \(\xi_{k}\lesssim\Delta\) two different tunneling regimes are possible:
\[\begin{array}{llllllll}\mathrm{(weak)}&\gamma\gtrsim\alpha^{2}k_{F}^{2}& \rightarrow&\delta_{\rm LZ}\approx 0&\rightarrow&p_{k}\approx 1,\\ \mathrm{(strong)}&\gamma\ll\alpha^{2}k_{F}^{2}&\rightarrow&\delta_{\rm LZ}\gg 1& \rightarrow&p_{k}\approx 0.\end{array}\]
When \(\xi_{k}\gg\Delta\), tunneling is suppressed (\(p_{k}\to 1\)) as the quasiparticle spectrum resembles that of a normal metal with no splitting between crossing spectral branches.
The typical transient time \(\tau_{\rm LZ}\) for the LZSM tunneling can be estimated as follows [35]
\[\tau_{\rm LZ}\sim\sqrt{\frac{\hbar}{\gamma}}{\rm max}\Bigg{\{}1,\frac{\alpha k_ {F}}{\sqrt{2\gamma}}\frac{\Delta}{\sqrt{\xi_{k}^{2}+\Delta^{2}}}\Bigg{\}}.\]
If the intersection of the branches of the QP spectrum occurs at some \(\xi_{k}\), then it is possible to determine the interval \(\Delta\xi_{k}\) in which all QP states experience transient dynamics. The size of \(\Delta\xi_{k}\) depends on the transient time; however, it can be shown that the upper limit for this interval is \(\Delta\xi_{k}\sim\alpha k_{F}\ll\Delta\). The smallness of \(\Delta\xi_{k}\) and the fact that the gap function \(\Delta(t)\) is determined by all QP states in \((-\omega_{D},\omega_{D})\) confirm the validity of the approximations made in Section IV.2.
Combining all the results we write the asymptotic transition matrix \(\hat{S}_{\rm LZ}^{d}\) in diabatic basis as
\[\begin{pmatrix}C^{d}_{k\uparrow+}(t_{0}+)\\ C^{d}_{k\downarrow+}(t_{0}+)\\ C^{d}_{k\uparrow-}(t_{0}+)\\ C^{d}_{k\downarrow-}(t_{0}+)\end{pmatrix}=\begin{pmatrix}\sqrt{p_{k}}&0&0&\sqrt{1-p_{k}}e^{i\chi_{k}-i\theta_{k}-i\frac{\pi}{2}{\rm sgn}(\alpha)}\\ 0&1&0&0\\ 0&0&1&0\\ -\sqrt{1-p_{k}}e^{-i\chi_{k}+i\theta_{k}+i\frac{\pi}{2}{\rm sgn}(\alpha)}&0&0&\sqrt{p_{k}}\end{pmatrix}\begin{pmatrix}C^{d}_{k\uparrow+}(t_{0}-)\\ C^{d}_{k\downarrow+}(t_{0}-)\\ C^{d}_{k\uparrow-}(t_{0}-)\\ C^{d}_{k\downarrow-}(t_{0}-)\end{pmatrix}. \tag{10}\]
The LZSM transition matrix in the adiabatic basis (26) has the form \(\hat{S}_{\rm LZ}=\hat{R}^{-1}(t_{0}+)\hat{S}_{\rm LZ}^{d}\hat{R}(t_{0}-)\), where we use the relationship between the two bases (10) and (26), written in general form as a time-dependent matrix \(\hat{R}(t)\). Using perturbation theory with respect to the small parameter \(\alpha k_{F}/\Delta\) and considering points \(t_{0}\pm\) far from the nonadiabatic region, one can show that the matrix \(\hat{R}(t_{0}\pm)\) can be approximated by the identity matrix. The corrections proportional to \(\alpha k_{F}/\Delta\) in all elements of the matrix \(\hat{R}(t_{0}\pm)\) as well as of \(\hat{S}_{\rm LZ}\) can be neglected, since in all equations of Section IV we keep only the lowest possible order of perturbation theory with respect to the parameter \(\alpha k_{F}/\Delta\). With these approximations the matrices \(\hat{S}_{\rm LZ}\) and \(\hat{S}_{\rm LZ}^{d}\) actually coincide, and the LZSM transition matrix in the adiabatic basis can be taken from (10). Thus, we get Eq. (30).
## Appendix E Calculation of spin-split DOS
The DOS for one spin projection can be written as follows
\[N_{\uparrow}(E,t)\approx\sum_{k}\sum_{n=\uparrow+,\uparrow-}|u_{0}|^{2}\delta \Big{(}E-E_{kn}[h(t)]\Big{)}+|v_{0}|^{2}\delta\Big{(}E+E_{kn}[h(t)]\Big{)}. \tag{11}\]
Here we use static QP amplitudes \(u_{0}\) and \(v_{0}\) to distinguish the particle/hole contributions and \(E_{k\uparrow\pm}\) are defined in Eq. (4). Note that the function \(N_{\uparrow}(E,t)\) depends on time only through the Zeeman field \(h(t)\). The straightforward calculations for \(h(t)<\Delta_{0}\) yield
\[\frac{N_{\uparrow}(E,t)}{N(0)}\approx\frac{|E|}{\xi_{0}}\Bigg{|}1-{\rm sgn}(E )\frac{\alpha^{2}k_{F}^{2}+h^{2}(t)}{\sqrt{\xi_{0}^{2}\alpha^{2}k_{F}^{2}+h^{ 2}(t)(\xi_{0}^{2}+\Delta_{h}^{2})}}\Bigg{|}^{-1}, \tag{12}\]
where
\[\xi_{0}(E,t)\approx\sqrt{E^{2}+h^{2}(t)-\Delta_{h}^{2}+\alpha^{2}k_{F}^{2}+{ \rm sgn}(E)2\sqrt{E^{2}(h^{2}(t)+\alpha^{2}k_{F}^{2})-\Delta_{h}^{2}\alpha^{2} k_{F}^{2}}} \tag{13}\]
and we have assumed \(\alpha k\approx\alpha k_{F}\) due to the vicinity to the Fermi energy. The time-dependent gap function \(\Delta_{h}[h(t)]\) is defined in (36). Two standard coherence peaks at the energies \(E=-\sqrt{(\Delta_{h}+h(t))^{2}+\alpha^{2}k_{F}^{2}}\) and \(E=\sqrt{(\Delta_{h}-h(t))^{2}+\alpha^{2}k_{F}^{2}}\) appear [Fig. 6(a)].
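For illustration only, the expressions (12)-(13) can be evaluated numerically; the sketch below assumes arbitrary parameter values and treats \(\Delta_{h}\) as a fixed constant rather than solving Eq. (36) for it self-consistently.

```python
import numpy as np

# Illustrative evaluation of Eqs. (12)-(13) for h(t) < Delta_0; the values of
# alpha*k_F, h(t) and Delta_h below are assumptions, and Delta_h is treated as
# a constant instead of being determined from Eq. (36).
alpha_kF = 0.1     # alpha * k_F
h        = 0.3     # instantaneous Zeeman field h(t)
Delta_h  = 1.0     # gap Delta_h[h(t)]

E = np.linspace(-2.5, 2.5, 5001)
sgnE = np.sign(E)

rad = E**2 * (h**2 + alpha_kF**2) - Delta_h**2 * alpha_kF**2
xi0_sq = (E**2 + h**2 - Delta_h**2 + alpha_kF**2
          + sgnE * 2.0 * np.sqrt(np.clip(rad, 0.0, None)))
xi0 = np.sqrt(np.where(xi0_sq > 0, xi0_sq, np.nan))   # NaN inside the gap

with np.errstate(invalid="ignore", divide="ignore"):
    denom = np.abs(1.0 - sgnE * (alpha_kF**2 + h**2)
                   / np.sqrt(xi0**2 * alpha_kF**2 + h**2 * (xi0**2 + Delta_h**2)))
    dos = np.abs(E) / xi0 / denom                      # N_up(E, t) / N(0)

# Coherence-peak positions quoted in the text:
print(-np.sqrt((Delta_h + h)**2 + alpha_kF**2),
      np.sqrt((Delta_h - h)**2 + alpha_kF**2))
```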
For the case of large Zeeman fields \(h(t)>\Delta_{0}\) one obtains
\[\frac{N_{\uparrow}(E,t)}{N(0)}\approx\begin{cases}\frac{|E|}{\xi_{0}}\bigg{|}1- \text{sgn}(E)\frac{\alpha^{2}k_{F}^{2}+h^{2}(t)}{\sqrt{\xi_{1}^{2}\alpha^{2}k_{ F}^{2}+h^{2}(t)(\xi_{2}^{2}+\Delta_{h}^{2})}}\bigg{|}^{-1},\quad E>\Delta_{m} \quad\text{and}\quad E<-\sqrt{(\Delta_{h}+h(t))^{2}+\alpha^{2}k_{F}^{2}}\\ \frac{|E|}{\xi_{0}}\bigg{|}1-\frac{\alpha^{2}k_{F}^{2}+h^{2}(t)}{\sqrt{\xi_{2} ^{2}\alpha^{2}k_{F}^{2}+h^{2}(t)(\xi_{2}^{2}+\Delta_{h}^{2})}}\bigg{|}^{-1}, \quad-\sqrt{(\Delta_{h}-h(t))^{2}+\alpha^{2}k_{F}^{2}}<E<-\Delta_{m}\end{cases}. \tag{51}\]
where
\[\Delta_{m}\approx\frac{\Delta_{h}\alpha k_{F}}{\sqrt{h^{2}(t)+\alpha^{2}k_{F}^ {2}}}.\]
The splitting of the energy spectrum in the vicinity of \(E=0\) leads to the appearance of two additional coherence peaks and a corresponding minigap at the energies \(E=\pm\Delta_{m}\), which are shown in Fig. 6(b).
|
2310.04878
|
Hybrid Recommendation System using Graph Neural Network and BERT
Embeddings
|
Recommender systems have emerged as a crucial component of the modern web
ecosystem. The effectiveness and accuracy of such systems are critical for
providing users with personalized recommendations that meet their specific
interests and needs. In this paper, we introduce a novel model that utilizes a
Graph Neural Network (GNN) in conjunction with sentence transformer embeddings
to predict anime recommendations for different users. Our model employs the
task of link prediction to create a recommendation system that considers both
the features of anime and user interactions with different anime. The
hybridization of the GNN and transformer embeddings enables us to capture both
inter-level and intra-level features of anime data. Our model not only
recommends anime to users but also predicts the rating a specific user would
give to an anime. We utilize the GraphSAGE network for model building and
weighted root mean square error (RMSE) to evaluate the performance of the
model. Our approach has the potential to significantly enhance the accuracy and
effectiveness of anime recommendation systems and can be extended to other
domains that require personalized recommendations.
|
Shashidhar Reddy Javaji, Krutika Sarode
|
2023-10-07T17:24:41Z
|
http://arxiv.org/abs/2310.04878v1
|
# Hybrid Recommendation System using Graph Neural Network and BERT Embeddings
###### Abstract
Recommender systems have emerged as a crucial component of the modern web ecosystem. The effectiveness and accuracy of such systems are critical for providing users with personalized recommendations that meet their specific interests and needs. In this paper, we introduce a novel model that utilizes a Graph Neural Network (GNN) in conjunction with sentence transformer embeddings to predict anime recommendations for different users. Our model employs the task of link prediction to create a recommendation system that considers both the features of anime and user interactions with different anime. The hybridization of the GNN and transformer embeddings enables us to capture both inter-level and intra-level features of anime data. Our model not only recommends anime to users but also predicts the rating a specific user would give to an anime. We utilize the GraphSAGE network for model building and weighted root mean square error (RMSE) to evaluate the performance of the model. Our approach has the potential to significantly enhance the accuracy and effectiveness of anime recommendation systems and can be extended to other domains that require personalized recommendations.
## 1 Introduction
Recommendation systems are algorithms that suggest items to users based on their past behavior. They are used in a variety of applications, such as online shopping, music streaming, and social media. There are two main types of recommendation systems: collaborative filtering and content-based filtering. Collaborative filtering systems recommend items to users based on the ratings or preferences of other users. For example, if you have rated a number of movies on Netflix, the collaborative filtering system will recommend other movies that other users with similar ratings have also enjoyed.
Content-based filtering systems recommend items to users based on the content of the items themselves. For example, if you have listened to a number of songs by a particular artist, the content-based filtering system will recommend other songs by the same artist. In recent years, there has been a trend towards using hybrid recommendation systems that combine the strengths of collaborative filtering and content-based filtering. These systems can provide more accurate recommendations than either type of system on its own.
There are a number of different ways to build recommendation systems. One common approach is to use machine learning algorithms. Machine learning algorithms can be trained on large datasets of user ratings or preferences to learn how to predict which items a user will like. Another approach to building recommendation systems is to use artificial intelligence (AI) techniques. AI techniques, such as deep learning, can be used to create more complex and powerful recommendation systems. Recommendation systems have become an integral part of our daily lives, aiding us in making informed decisions about the products and services we use. The success of these systems can be attributed to their ability to filter and personalize vast amounts of information, making it easier for
users to find relevant and useful items. However, the increasing complexity and heterogeneity of data have made it challenging to develop accurate and efficient recommendation systems.
In recent years, graph neural networks (GNNs) have emerged as a promising solution to this problem, allowing us to incorporate relational data into our recommendation models. GNNs can effectively capture the inherent structure and dependencies in the data, enabling us to make more accurate and personalized recommendations. Graph Neural Networks (GNNs) have emerged as a powerful approach to solving problems in the domain of recommendation systems. Recommendation systems aim to recommend items to users that are relevant and useful to them, based on their past behavior and preferences. GNNs can help in creating better recommendations by modeling the complex relationships between users and items in a graph-based representation. One of the key challenges in recommendation systems is the sparsity of the data. In many cases, users may have only interacted with a small subset of items, and the available data may not be sufficient to learn accurate models. GNNs can help address this challenge by leveraging the graph structure of the data to propagate information from observed to unobserved nodes.
GNNs can be used in both content-based and collaborative filtering approaches to the recommendation. In a content-based approach, GNNs can be used to model the features of the items and users and create recommendations based on the similarity between their embeddings. In a collaborative filtering approach, GNNs can be used to model the interactions between users and items in a graph, and create recommendations based on the relationships between the nodes.
One of the popular approaches for GNN-based recommendation is GraphSAGE. GraphSAGE is a variant of GNN that aggregates information from neighboring nodes to generate node embeddings. In GraphSAGE, each node is assigned an initial feature vector, and these features are updated iteratively by aggregating information from the node's neighbors. The aggregated features are then passed through a neural network layer to generate a new embedding for the node. In the context of recommendation, GraphSAGE can be used to generate embeddings for both users and items. The model can be trained to predict the likelihood of a user interacting with an item, based on the embeddings of the user and item. The learned embeddings can then be used to generate recommendations for users.
To improve the performance of the recommendation system, additional features can be incorporated into the model. For example, in the case of movie recommendations, features such as the genre and the synopsis of the movie can be used to augment the embeddings of the movies. Similarly, features such as the age and gender of the user can be used to augment the embeddings of the users. Overall, GNNs have shown great promise in the domain of recommendation systems and can help in creating more accurate and personalized recommendations for users. With the availability of large amounts of data and the increasing interest in personalized recommendations, GNN-based approaches are likely to play an increasingly important role in this domain.
The rest of the paper is organized as follows: Section 2 provides a brief overview of related work. Section 3 describes the dataset and the pre-processing steps used to prepare the data. Section 4 presents the proposed model in detail. Section 5 presents the experimental setup and results. Finally, Section 6 concludes the paper with a summary of the contributions and directions for future work
## 2 Related Work
Recommender systems have been widely used to provide personalized recommendations to users. Collaborative filtering (CF) is a popular technique that utilizes users' past behavior to make recommendations. Matrix factorization, a type of CF algorithm, decomposes the user-item interaction matrix into two lower-dimensional matrices to represent users and items. The regularization weights of the latent factors can be assigned based on items' popularity and users' activeness, which can improve the prediction results of the matrix factorization technique. [4]
The paper on graph neural networks in recommender systems provides a survey of various graph-based techniques for recommender systems, including GCNs, GATs, and GAEs. The paper discusses how these techniques can be used to handle cold-start problems, incorporate side information, and enhance recommendation accuracy. [5] Graph-based models have become increasingly popular in recent years for their ability to handle complex interactions between users and items. The linear residual graph convolutional network approach for CF-based recommender systems revisits GCNs in CF models and shows that removing non-linearities can enhance recommendation performance. The
proposed model uses a residual network structure that is specifically designed for CF with user-item interaction modeling, which alleviates the over-smoothing problem in graph convolution aggregation operation with sparse data. [3]
The graph-based hybrid recommendation system (GHRS) combines content-based and collaborative filtering approaches to extract new features based on users' ratings, demographic, and location information. These features are then used for clustering users, which improves recommendation accuracy and dominates other methods' performance in the cold-start problem. The experimental results on the MovieLens dataset show that the proposed algorithm outperforms many existing recommendation algorithms on recommendation accuracy. [1]
Inductive matrix completion is another popular approach to building recommender systems that can handle the cold-start problem. The paper on learning to transfer graph embeddings for inductive graph-based recommendation proposes a transfer learning framework for personalized video highlight recommendation. The proposed framework is composed of two parts: a graph neural network that exploits the higher-order proximity between users and segments to alleviate the user cold-start problem and an item embedding transfer network that approximates the learned item embeddings from graph neural networks. [2]
Matrix factorization, specifically, is a widely used technique in recommender systems that utilizes users' past behavior, such as ratings or purchases, to make recommendations. One of the most popular CF algorithms is matrix factorization, which decomposes the user-item interaction matrix into the product of two lower dimensionality rectangular matrices, user and item embeddings, that represent users and items in a lower-dimensional space. The regularization weights of the latent factors can be assigned based on items' popularity and users' activeness, which can improve the prediction results of the matrix factorization technique. The paper on matrix factorization techniques for recommender systems provides a foundational understanding of collaborative filtering and matrix factorization for building recommender systems. [4]
In summary, the related papers cover various techniques for building recommender systems, including matrix factorization, graph-based models, inductive matrix completion, and transfer learning. These papers provide further insights into the use of these techniques in recommender systems and how they can be used to handle cold-start problems, incorporate side information, and enhance recommendation accuracy.
## 3 Dataset
The Anime Recommendation Database 2020 is a dataset available on Kaggle, containing information about anime and user interactions from the website MyAnimeList. The dataset was created by scraping the website and contains recommendation data from 320,000 users and 16,000 animes.
The dataset consists of two main tables: the anime table and the rating table. The anime table contains information about each anime, including its ID, name, genre, type, episodes, and synopsis. The genre field is a list of genres associated with the anime, such as "Action", "Comedy", "Drama", and "Fantasy". The type field indicates whether the anime is a TV series, movie, OVA, or other format. The episodes field indicates the number of episodes in the series. The synopsis field provides a brief description of the anime's plot.
The rating table contains information about user interactions with the animes, including the user ID, the anime ID, and the user's rating for the anime on a scale of 1 to 10. The dataset also includes a timestamp field indicating the time when the user rated the anime.
The dataset contains a total of 78,460,895 user-anime interactions, with an average of 4.9 ratings per user. The most popular anime in the dataset is "Death Note", with over 150,000 ratings. The dataset is useful for building recommendation systems for anime, as it contains information about both the animes and user preferences.
### Preprocessing
The dataset used in this research consists of two primary data sources: the "anime with synopsis" and "rating complete" files, which were merged to obtain relevant columns for the model. Specifically, the dataset includes anime id, user id, synopsis, genres, and rating. Prior to analysis, the dataset
underwent a preprocessing step which involved data cleaning to remove rows with null values in any column. One-hot encoding was also applied to the genres column in order to transform the categorical variable into a numerical format suitable for analysis.
Furthermore, two dictionaries were created to map the user id's and anime id's in the dataset. These dictionaries were used to facilitate the analysis and interpretation of the data. Overall, the resulting dataset is suitable for use in conducting research on anime recommendation systems, and provides a robust foundation for the development and evaluation of machine learning algorithms for this purpose.
We created three classes: SequenceEncoder, IdentityEncoder, and GenresEncoder, which encode different types of data into PyTorch tensors. These classes are used to load and process node and edge data for a graph-based recommendation system. The SequenceEncoder class encodes text data using the SentenceTransformer model. The input data is a Pandas dataframe, and the output is a PyTorch tensor that represents the sentence embeddings. The IdentityEncoder class converts raw column values to PyTorch tensors, and the GenresEncoder class encodes genre information from the raw data. The load_node_csv function uses these encoders to process the node data, concatenating the resulting tensors into a single tensor.
The load_edge_csv function loads edge data and generates labels for each edge. It takes two arguments, ratings_user_id and ratings_movie_id, which are the user and movie IDs for each rating. It then generates edge labels by looking up the corresponding ratings from a dictionary user_anime_rating and returns a PyTorch tensor containing the edge labels. Overall, the code shows how the dataset is preprocessed before being fed into the graph-based recommendation system. The SequenceEncoder, IdentityEncoder, and GenresEncoder classes are used to encode different types of data into PyTorch tensors, which are then concatenated into a single tensor using the load_node_csv function. The load_edge_csv function loads edge data and generates labels for each edge, completing the dataset preprocessing pipeline.
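The encoder classes and loading helpers described above are not listed in the paper; the sketch below shows one way they could look, following the standard PyTorch Geometric load_node_csv pattern. The column names, genre separator, and sentence-transformer model name are illustrative assumptions, not taken from the original code.

```python
import torch
import pandas as pd
from sentence_transformers import SentenceTransformer

class SequenceEncoder:
    """Encodes a text column (e.g. the synopsis) into sentence embeddings."""
    def __init__(self, model_name="all-MiniLM-L6-v2", device=None):
        self.model = SentenceTransformer(model_name, device=device)

    @torch.no_grad()
    def __call__(self, df: pd.Series) -> torch.Tensor:
        return self.model.encode(df.tolist(), convert_to_tensor=True).cpu()

class GenresEncoder:
    """Multi-hot encodes a comma-separated genres column."""
    def __init__(self, sep=", "):
        self.sep = sep

    def __call__(self, df: pd.Series) -> torch.Tensor:
        genres = sorted({g for row in df for g in row.split(self.sep)})
        index = {g: i for i, g in enumerate(genres)}
        x = torch.zeros(len(df), len(genres))
        for i, row in enumerate(df):
            for g in row.split(self.sep):
                x[i, index[g]] = 1.0
        return x

class IdentityEncoder:
    """Converts a numeric column directly into a float tensor."""
    def __init__(self, dtype=torch.float):
        self.dtype = dtype

    def __call__(self, df: pd.Series) -> torch.Tensor:
        return torch.from_numpy(df.values).view(-1, 1).to(self.dtype)

def load_node_csv(path, index_col, encoders):
    df = pd.read_csv(path, index_col=index_col)
    mapping = {idx: i for i, idx in enumerate(df.index.unique())}
    # Concatenate the per-column encodings into a single node-feature tensor.
    x = torch.cat([enc(df[col]) for col, enc in encoders.items()], dim=-1)
    return x, mapping
```

The same pattern produces the combined genre plus synopsis node features for anime that are used in the methodology of Section 4.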
## 4 Proposed Methodology
In an anime recommendation system, the features used for node creation can have a significant impact on the performance of the system. One common approach is to use genres as the features for each anime. Genres are categorical variables that can be one-hot encoded and used to represent the anime's content. This approach is straightforward and easy to implement, but it has some limitations.
Figure 1: Bar graph of ratings given by each user
One limitation is that genres alone may not capture the complexity and nuances of the anime. For example, two anime could have the same genres, but one could be a comedy with a light-hearted tone while the other could be a dark psychological thriller. In this case, relying solely on genres may not differentiate between the two anime and could lead to poor recommendations.
To overcome this limitation, we can combine the genres with the sentence embeddings of the synopsis. The synopsis is a brief summary of the anime's plot, and it can provide additional information about the anime's content and style. By using sentence embeddings, we can capture the meaning and context of the synopsis, which can help to differentiate between anime with similar genres.
To do this, we first preprocess the synopsis by removing stop words, punctuation, and other irrelevant information. We then use a pre-trained sentence embedding model such as BERT or GloVe to generate embeddings for each sentence in the synopsis. We can then average these embeddings to obtain a single embedding for the entire synopsis. We can then concatenate the one-hot encoded genres with the synopsis embedding to create a feature vector for each anime. This feature vector captures both the categorical information about the anime's genres and the semantic information about the anime's content and style. Once we have the feature vectors for each anime, we can use them to create nodes in the graph. We can then use graph neural networks (GNNs) to learn the representations of these nodes and generate recommendations based on the learned representations. Compared to using genres alone, combining genres with the synopsis embeddings can lead to more accurate and personalized recommendations. This approach can capture the complex and nuanced content of the anime and provide better differentiation between anime with similar genres. Additionally, this approach can be extended to incorporate other textual features such as reviews or user feedback, which can further improve the recommendations.
The Model class inherits from the PyTorch Module class, which provides a convenient way to define a neural network model. The __init__ method defines the components of the model and initializes their parameters. The forward method defines the computation that will be performed by the model when it is run on input data. The GNNEncoder class is a custom implementation of a GNN encoder that takes as input a set of node features and edge connections and outputs a set of node embeddings. The hidden_channels argument specifies the dimensionality of the node embeddings. The GNNEncoder class is defined in a separate file and is not shown in the code snippet provided.
```
HeteroData(
    user={ x=[100, 100] },
    anime={ x=[3534, 427] },
    (user, rates, anime)={ edge_index=[2, 16143], edge_label=[16143] },
    (anime, rev_rates, user)={ edge_index=[2, 16143] }
)
```

HeteroData structure of the user-anime graph
Figure 2: Architecture of the model
The encoder attribute of the Model class is an instance of the GNNEncoder class. It takes the hidden_channels argument as input and is initialized with the same dimensionality for both the input and output features.
The to_hetero function is a utility function that converts the GNNEncoder object to a heterogeneous GNN. The data.metadata() argument specifies the schema of the heterogeneous graph, which includes information about the node types, edge types, and features of the graph. The aggr argument specifies the type of aggregation to be used when combining information from different node types. The EdgeDecoder class is a custom implementation of an edge decoder that takes as input a set of node embeddings and a set of edge connections and outputs a set of edge predictions. The hidden_channels argument specifies the dimensionality of the node embeddings. In the GNNEncoder class, the GraphSAGE implementation is achieved by using the SAGEConv module from PyTorch Geometric library. The SAGEConv module implements the GraphSAGE convolutional operator, which aggregates the feature vectors of a node and its neighbors using a graph convolutional operation.
The decoder attribute of the Model class is an instance of the EdgeDecoder class. It takes the hidden_channels argument as input and is initialized with the same dimensionality for both the input and output features. The forward method takes as input a dictionary of node features, a dictionary of edge connections, and a set of edge labels. The x_dict argument is a dictionary of PyTorch tensors representing the node features for each node type. The edge_index_dict argument is a dictionary of PyTorch tensors representing the edge connections for each edge type. The edge_label_index argument is a PyTorch tensor representing the edge labels.The forward method of the GNNEncoder class first applies a GraphSAGE layer to the input node features using the SAGEConv module. This layer aggregates the feature vectors of each node and its neighbors using a graph convolutional operation. The resulting feature vectors are then normalized and passed through a ReLU activation function.
The forward method first passes the input data through the encoder to obtain a set of node embeddings, represented as a dictionary of PyTorch tensors. It then passes these node embeddings and the edge labels through the decoder to obtain a set of predicted edge labels. In summary, the model architecture consists of a GNN encoder that takes as input node features and edge connections, a heterophily operator that converts the GNN encoder to a heterogeneous GNN, and an edge decoder that takes as input node embeddings and edge connections and outputs a set of predicted edge labels. The model is designed for semi-supervised learning on heterogeneous graphs and can handle multiple node and edge types with different feature representations.
In the context of graph neural networks (GNNs), the heterophily operator is a mechanism used to combine information from nodes of different types in a heterogeneous graph. In a heterogeneous graph, nodes can have different types, which correspond to different features or attributes. For example, in a citation network, nodes can represent papers, authors, or conferences, and each node type can have different attributes such as publication year, paper topic, or author affiliation. To capture such heterogeneity, GNNs use different weight matrices for each node type, allowing the model to learn different representations for nodes of different types.
In the GNNEncoder class, the GraphSAGE implementation is achieved by using the SAGEConv module from the PyTorch Geometric library. The SAGEConv module implements the GraphSAGE convolutional operator, which aggregates the feature vectors of a node and its neighbors using a graph convolutional operation. The GNNEncoder class takes two arguments: the number of input feature dimensions and the number of output feature dimensions. The forward method of this class applies two GraphSAGE layers to the input node features to generate the output node features. The forward method of the GNNEncoder class first applies a GraphSAGE layer to the input node features using the SAGEConv module. This layer aggregates the feature vectors of each node and its neighbors using a graph convolutional operation. The resulting feature vectors are then normalized and passed through a ReLU activation function. The output of the first GraphSAGE layer is then passed through a second GraphSAGE layer in a similar fashion. Finally, the resulting output features are returned as the output of the forward method of the GNNEncoder class. Overall, the GNNEncoder class implements a GraphSAGE-based neural network architecture for learning node representations in a graph by aggregating neighborhood information of each node in the graph
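A minimal sketch of the architecture described in this section is given below, assuming PyTorch Geometric's SAGEConv and to_hetero utilities and the node types (user, anime) shown in the HeteroData structure above; the hyperparameters are illustrative and this is not the paper's exact implementation.

```python
import torch
from torch_geometric.nn import SAGEConv, to_hetero

class GNNEncoder(torch.nn.Module):
    """Two GraphSAGE layers that turn node features into node embeddings."""
    def __init__(self, hidden_channels, out_channels):
        super().__init__()
        # (-1, -1) lets PyG infer the input feature sizes for each node type.
        self.conv1 = SAGEConv((-1, -1), hidden_channels)
        self.conv2 = SAGEConv((-1, -1), out_channels)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        return self.conv2(x, edge_index)

class EdgeDecoder(torch.nn.Module):
    """Predicts a rating for each (user, anime) edge from their embeddings."""
    def __init__(self, hidden_channels):
        super().__init__()
        self.lin1 = torch.nn.Linear(2 * hidden_channels, hidden_channels)
        self.lin2 = torch.nn.Linear(hidden_channels, 1)

    def forward(self, z_dict, edge_label_index):
        row, col = edge_label_index
        z = torch.cat([z_dict["user"][row], z_dict["anime"][col]], dim=-1)
        return self.lin2(self.lin1(z).relu()).view(-1)

class Model(torch.nn.Module):
    def __init__(self, hidden_channels, metadata):
        super().__init__()
        # to_hetero duplicates the encoder weights per node/edge type and
        # aggregates messages across edge types (here with a sum).
        self.encoder = to_hetero(GNNEncoder(hidden_channels, hidden_channels),
                                 metadata, aggr="sum")
        self.decoder = EdgeDecoder(hidden_channels)

    def forward(self, x_dict, edge_index_dict, edge_label_index):
        z_dict = self.encoder(x_dict, edge_index_dict)
        return self.decoder(z_dict, edge_label_index)

# model = Model(hidden_channels=32, metadata=data.metadata())
```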
## 5 Evaluation and Results
The evaluation proceeds as follows. The model is evaluated using the root mean square error (RMSE). Given a user, the model predicts a rating for every anime that the user has not watched yet, so each such candidate link receives a predicted weight. The predicted ratings are then sorted in descending order, and the top 10 anime from this ranked list are recommended to the user as the titles they are predicted to rate highest. For evaluation we use the test set, which contains ratings given by users to anime that were not shown at training time; the trained model predicts these ratings, and the RMSE against the ground-truth labels measures how close the model's predictions are for the given graph.
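The following sketch outlines these evaluation and recommendation steps; model and data are assumed to be the trained model and HeteroData object from the previous section, the plain (unweighted) RMSE is shown rather than the weighted variant, and all names are illustrative.

```python
import torch

@torch.no_grad()
def rmse(model, data, edge_label_index, edge_label):
    # Predict ratings for held-out (user, anime) edges and compare to labels.
    pred = model(data.x_dict, data.edge_index_dict, edge_label_index)
    pred = pred.clamp(min=1, max=10)          # ratings live on a 1..10 scale
    return torch.sqrt(torch.mean((pred - edge_label.float()) ** 2)).item()

@torch.no_grad()
def recommend_top_k(model, data, user_id, unseen_anime_ids, k=10):
    # Score every anime the user has not rated yet, then keep the k best.
    src = torch.full((len(unseen_anime_ids),), user_id, dtype=torch.long)
    dst = torch.tensor(unseen_anime_ids, dtype=torch.long)
    scores = model(data.x_dict, data.edge_index_dict, torch.stack([src, dst]))
    top = scores.argsort(descending=True)[:k]
    return [unseen_anime_ids[i] for i in top.tolist()]
```

Sample top-10 recommendations produced in this way for a few users are listed below.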
**Recommendation for user: 415 ['Pokemon Movie 14 White: Victim to Kuroki Eiyuu Zekrom', 'Tsuki no Sango', 'Charlotte', 'Tanaka-kun wa Kyyou mo Kedaruge', 'Iblard Jikan', 'Teekyuu', 'Tenshi Nanka ja Nai', 'No Game No Life: Zero', 'Puchitto Gargantia', 'Pokemon: Sernitsu no Mirage Pokemon']**
**Recommendation for user: 30 ['Jormungand', 'Hanayamata', 'BlackRock Shooter (OVA)', 'Mahou Shoujo Ore', 'Selector Infected WIXOSS', 'Kara no Kyoukai 6: Boukyaku Rokuon', 'Claymore', 'Kamigami no Asobi', 'Zettai Bouei Leviathan', 'Kakeguru']**
**Recommendation for user: 189 ['Zero no Tsukaima F', 'Mahouka Koukou no Rettousei Movie: Hoshi wo Yobu Shoujo', 'Kami-tachi ni Hirowerata Otoko', 'Sunohara-sou no Kanrinin-san', 'Dragon Ball GT', 'Seishun Buta Yarou wa Bunny Girl Senpai no Yume wo Minai', 'Tamako Market', 'School Days', 'Kono Bijutsubu ni wa Mondai ga Aru!', 'Re:Zero kara Hajimeru Isekai Seikatsu 2nd Season']**
**Recommendation for user: 298 ['Mobile Suit Gundam 00', 'Bannou Bunka Neko-Musume DASH', 'Fullmetal Alchemist: Premium Collection', 'Naruto Movie 2: Dai Gekitotsu!' Madoroshi no Chiteiseki Dattebayo!', 'Ischo ni Training: Training with Hinako', 'Doraemon', 'School Rumble', 'Golden Boy', 'Rurouni Kenshin: Meiji Kenkaku Romantan - Tsuioku-hen', 'Death Note: Rewrite']**
The results of our experiment are presented in this section. The model was trained and tested on a dataset consisting of 800 users. The following are the results of our experiment:
**Train loss: 0.659**
**Test Loss: 0.667**
**Train Accuracy: 0.52**
**Test Accuracy: 0.37**
Figure 3: Results
## 6 Conclusion and Future Work
The results show that the model achieved a higher accuracy on the training data (52%) than on the testing data (37%). The loss values for both the training and testing data are relatively high, indicating that the model may not be performing optimally. Although the accuracy is not high, the model works and gives reasonable results with a very small amount of data; the compute resources required to run on a large amount of data are very high. Future work includes adding more nodes built from additional features, with edges between users and their features as well as between anime and their features. Another direction is to train on more data, which would be possible with more compute resources. There is also the possibility of trying GNN architectures other than the GraphSAGE network for the training process.
|
2307.11104
|
Pseudorandomness of the Sticky Random Walk
|
We extend the pseudorandomness of random walks on expander graphs using the
sticky random walk. Building on prior works, it was recently shown that
expander random walks can fool all symmetric functions in total variation
distance (TVD) upto an $O(\lambda(\frac{p}{\min f})^{O(p)})$ error, where
$\lambda$ is the second largest eigenvalue of the expander, $p$ is the size of
the arbitrary alphabet used to label the vertices, and $\min f = \min_{b\in[p]}
f_b$, where $f_b$ is the fraction of vertices labeled $b$ in the graph.
Golowich and Vadhan conjecture that the dependency on the $(\frac{p}{\min
f})^{O(p)}$ term is not tight. In this paper, we resolve the conjecture in the
affirmative for a family of expanders. We present a generalization of the
sticky random walk for which Golowich and Vadhan predict a TVD upper bound of
$O(\lambda p^{O(p)})$ using a Fourier-analytic approach. For this family of
graphs, we use a combinatorial approach involving the Krawtchouk functions to
derive a strengthened TVD of $O(\lambda)$. Furthermore, we present
equivalencies between the generalized sticky random walk, and, using
linear-algebraic techniques, show that the generalized sticky random walk
parameterizes an infinite family of expander graphs.
|
Emile Anand, Chris Umans
|
2023-07-18T17:46:23Z
|
http://arxiv.org/abs/2307.11104v1
|
# Pseudorandomness of the Sticky Random Walk
###### Abstract
We extend the pseudorandomness of random walks on expander graphs using the sticky random walk. Building on the works of [1][2], it was recently shown in [2] that expander random walks can fool all symmetric functions in total variation distance (TVD) up to an \(O(\lambda(\frac{p}{\min f})^{O(p)})\) error, where \(\lambda\) is the second largest eigenvalue of the expander, \(p\) is the size of the arbitrary alphabet used to label the vertices, and \(\min f=\min_{b\in[p]}f_{b}\), where \(f_{b}\) is the fraction of vertices labeled \(b\) in the graph. [2] conjectures that the dependency on the \((\frac{p}{\min f})^{O(p)}\) term is not tight. In this paper, we resolve the conjecture in the affirmative for a family of expanders. We present a generalization of [2]'s sticky random walk for which [2] predicts a TVD upper bound of \(O(\lambda p^{O(p)})\) using a Fourier-analytic approach. For this family of graphs, we use a combinatorial approach involving the Krawtchouk functions used in [2] to derive a strengthened TVD of \(O(\lambda)\). Furthermore, we present equivalencies between the generalized sticky random walk, and, using linear-algebraic techniques, show that the generalized sticky random walk parameterizes an infinite family of expander graphs.
Keywords: Sticky Random Walk, Expander Graphs, Pseudorandomness, Derandomization
## 1 Introduction
Expander graphs are undirected spectral sparsifiers of the clique with high expansion properties, and they are among the most useful combinatorial objects in theoretical computer science due to their applications in pseudorandomness and in error-correcting codes (see [10][21][22][23]). A \(d\)-regular graph \(G\!=\!(V,\!E)\) where \(|V|\!=\!n\) is said to be an \((n,d,\lambda)\)-expander if (for eigenvalues \(1=\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\geq-1\) of its normalized adjacency matrix \(A\)) the second largest magnitude of the eigenvalue of its adjacency matrix \(A\) is at most \(\lambda\). In other words, \(\max_{i\neq 1}|\lambda_{i}|\leq\lambda\). It is a well-known fact that \(\lambda<1\) implies \(G\) is connected; so, smaller values of \(\lambda\) pertain to stronger combinatorial connectivity properties (see [16]). There are several known constructions [14][21][23][24] of optimal (Ramanujan) expander graphs that saturate the Alon-Boppana bound [15] which characterizes "good" expanders. The most useful fact about expander graphs in pseudorandomness arises from the fact that random walks on them mix fast. Let \(v_{0},...,v_{t-1}\) be a sequence of vertices obtained by a \(t\)-step walk on an expander graph \(G\) with second largest eigenvalue at most \(\lambda\). Gilman's [17] result uses the Chernoff bound to characterize the rate of mixing (this has since been improved in [19] and [18]).
_Expander-Walk Chernoff Bound (Gilman)_. For a graph \(G=(V,E)\), let \(v_{0},\ldots,v_{t-1}\) denote a sequence of vertices obtained from a \(t\)-step walk on an expander graph \(G\). For any function \(f:[n]\to\{0,1\}\), let the stationary average of \(f\) be \(\pi(f):=\lim_{t\to\infty}\frac{1}{t}\sum_{i=0}^{t-1}f(v_{i})\). The expander-walk Chernoff bound [15] states that \(\forall\varepsilon>0\),
\[\Pr\left[\left|\frac{1}{t}\sum_{i=0}^{t-1}f(v_{i})-\pi(f)\right|\geq\varepsilon\right]\leq 2e^{-\Omega(\varepsilon^{2}(1-\lambda)t)}\]
For a proof of the expander walk Chernoff bound, we refer the reader to [21][23]. An important direct consequence of the expander Chernoff bound is that the mixing time of a \(d\)-regular expander graph on \(n\) vertices is at most \(O(\log n)\).
Expander graphs have (oftentimes, surprisingly) ubiquitous applications. They were initially studied for the purpose of constructing fault-tolerant networks in [20], where if a small number of channels (edges) broke down, the system could be made to be still largely intact due to its good connectivity properties if it were modeled as an expander. More recently, they have been used in representation learning theoretic settings (see [13]) to create graph neural networks that can propagate information to train models more efficiently. In coding theory, expander codes (created from linear bipartite expanders - see [1]) are the only known construction (see [23]) of asymptotically good error-correcting codes which can be decoded in linear time when a constant fraction of symbols are in error.
More recent works that combine ideas from combinatorial topology and algebraic geometry have also led to the exciting study of high dimensional expanders (HDX) which are pure simplicial complexes (hypergraphs that are downwards closed under containment) where the \(1\)-skeletons are spectral expanders and the links exhibit good expansion properties. We direct the reader to [14] and [15].
One of the most important applications of expanders (which is the topic of this thesis) is in derandomization and pseudorandomness. Suppose that there is a randomized algorithm for a language \(L\) using \(n\) random bits such that: if a string \(x\in L\), then the algorithm accepts with probability \(1\); if a string \(x\notin L\), then the algorithm rejects with probability at least \(1/2\). Our goal is to reduce the error probability of the algorithm. If we repeat the algorithm \(t\) times then the error probability goes down to \(1/2^{t}\), which is ideal. However, the number of random bits used by the algorithm is then equal to \(nt\), which is very large. One workaround is to "reuse the randomness by weakening our independent choices to correlated choices on an expander graph" [16]. If we start at a random vertex \(v\) in \(G\) (a random number in \(\{1,\ldots,n\}\), which uses \(\log n\) random bits) and repeatedly pick a random neighbor, then since a good expander has degree \(d=O(1)\) and each step needs only \(\log d=O(1)\) bits, we can continue this process until we have picked \(t\) vertices overall, and the total number of random bits needed equals \(\log n+O(t)\). Further, by the expander mixing lemma, for \(t\gg O(\log n)\) the sequence of vertices will still be extremely close to uniformly random.
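A back-of-the-envelope sketch of this bit-counting argument (the values of \(n\), \(d\), and \(t\) below are illustrative):

```python
import math

# Random bits for t independent vertex samples vs. a t-step expander walk
# on an n-vertex, degree-d expander (d = O(1) for a good expander).
def bits_independent(n, t):
    return t * math.ceil(math.log2(n))

def bits_expander_walk(n, d, t):
    return math.ceil(math.log2(n)) + t * math.ceil(math.log2(d))

n, d, t = 2**20, 8, 100
print(bits_independent(n, t), bits_expander_walk(n, d, t))   # 2000 vs 320
```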
This makes expander graphs invaluable in the field of pseudorandomness. Consider the \(t\)-step expander random walk which generates a sequence of vertices \(v_{0},\ldots,v_{t-1}\). We are then interested in the degree to which \((v_{0},\ldots,v_{t-1})\) "fool" classes of test functions, where we say that a test-function \(T\) is \(\epsilon\)-fooled by a pseudorandom function \(g:X\to[\chi]\) if the statistical distance between distributions \(T(g(X))\) and \(T(U)\) (here \(U\) is the uniform distribution on \([\chi]\)), is less than \(\epsilon\).
Ta-Shma's breakthrough construction of optimal \(\epsilon\)-balanced codes [23], which showed that the expander random walk can fool the extremely sensitive parity function, led to an exciting series of results that showed the pseudorandomness of expander random walks for increasingly many classes of test functions. [15] introduces the 'sticky random walk', a canonical two-vertex expander which can be thought of as a Markov chain on two states, where the probability of staying at the same state is \(\frac{1+\lambda}{2}\) and the probability of switching between states is \(\frac{1-\lambda}{2}\). [15] goes on to use the discrete orthogonal Krawtchouk functions to show that the Hamming weight distribution of the sticky random walk
can fool any symmetric function with error \(O(\lambda)\), where \(\lambda<0.16\). [3][4] then used Fourier analysis to expand this result. Specifically, they showed that test functions computed by \(\mathrm{AC}^{0}\) circuits and symmetric functions are fooled by the random walks on the full expander random walk, but only for balanced binary labelings. These works culminate in [1] which establishes that random walks on expanders where the vertices are labeled from an arbitrary alphabet can fool symmetric functions (upto an \(O(\lambda)\) error) and permutation branching programs. Specifically, we are interested in the result concerning symmetric functions (Theorem 2) which we restate below.
_Fooling symmetric functions (Corollary 4 of [1])._ For all integers \(t\geq 1\) and \(p\geq 2\), let \(G=(G_{i})_{1\leq i\leq t-1}\) be a sequence of \(\lambda\)-spectral expanders on a shared vertex set \(V\) with labeling \(\mathrm{val}:V\to[p]\) that assigns each label \(b\in[p]\) to \(f_{b}\)-fraction of the vertices. Then, for any label \(b\), we have that the total variation distance between the number of \(b\)'s seen in the expander random walk and the uniform distribution on \([p]\) has the following bound (where \([\Sigma(Z)_{b}]\) counts the number of occurrences of \(b\) in \(Z\)):
\[\mathrm{TVD}([\Sigma(\mathrm{RW}_{\mathcal{G}}^{t})]_{b},[\Sigma(U[d])]_{b}) \leq O\left(\left(\frac{p}{\min_{b\in[p]}f_{b}}\right)^{O(p)}\cdot\lambda\right)\]
[1] asks whether the \((\frac{p}{\min_{b\in[p]}f_{b}})^{O(p)}\) dependence in the upper bound of the total variation distance is tight. In this paper, we answer in the negative for a family of graphs. Specifically, we present a family of generalized sticky random walks (where the alphabet size can be arbitrarily large), where in Theorem 5.3 we find that the optimal TVD is \(O(\lambda)\), for \(\lambda<0.27\), whereas Corollary 4 in [1] predicts a bound of \(O(\lambda p^{2p})\), which provides evidence that the \((\frac{p}{\min_{b\in[p]}f_{b}})^{O(p)}\) factor is not tight. [1] studied the sticky random walk because it was an "essential step" to understanding the full expander random walk - specifically, Theorem 4 in [1] shows that every \(\lambda\)-parameterized sticky random walk is bijective with a corresponding expander graph. We extend their result in Theorem 7.1 by showing that our generalized sticky random walk (parameterized by \(\lambda\) and \(p\)) also corresponds to expander graphs with a linearly-reduced spectral expansion of \(\lambda p\). We then show that our generalized sticky random walk reduces from [1]'s two-vertex sticky random walk in Theorem 7.2. Finally, in Section B we provide a novel alternate treatment of the Krawtchouk functions in the complex domain which can be used to attain an \(O(\lambda p^{p})\) bound on the TVD.
## 2 Preliminaries: Notation and Conventions
This section describes the basic notation and problem setup that is used throughout the paper.
For any \(n\in\mathbb{N}\), let \([n]=\{1,...,n\}\) and \(\mathbb{Z}_{n}=\{0,...,n-1\}\). Next, let \([n]^{k}\) denote \(k\) copies of elements in \([n]\), and let \(\mathbb{Z}_{n}^{k}\) denote \(k\) copies of elements in \(\mathbb{Z}_{n}\). Furthermore, let \(\binom{[n]}{k}\) denote the set of all \(k\)-sized subsets of \([n]\), which has cardinality \(\binom{n}{k}\). For any \(n\)-bit string \(s\), let \(|s|\) denote the Hamming weight (the Hamming distance from \(0^{n}\)) of \(s\). Similarly, let \(|s|_{i}\) denote the number of \(i\)'s in \(s\). We generalize the notion of counting the number of occurrences of any character \(\chi\in\mathbb{Z}_{p}\) for \(p\geq 2\) with the symmetric function \(\Sigma:\mathbb{Z}_{p}^{n}\to\mathbb{N}^{p}\), where \(\Sigma(x)\) is a vector that counts the number of occurrences of each \(\chi\in\mathbb{Z}_{p}\). Specifically, for all \(\chi\in\mathbb{Z}_{p}\) and for all \(x\in\mathbb{Z}_{p}^{n}\), we can write that \([\Sigma(x)]_{\chi}=|\{i\in[n]:x_{i}=\chi\}|=|x|_{\chi}\).
Let \(\mathrm{Ber}(q)\) denote the Bernoulli distribution on \(\{0,1\}\), such that if \(X\sim\mathrm{Ber}(q)\), then \(\mathrm{Pr}[X=1]=q\) and \(\mathrm{Pr}[X=0]=1-q\). Next, let \(\mathrm{Bin}(n,1/2)\) denote the binomial distribution of \(\sum_{i=1}^{k}b_{i}\) with independent choices of \(b_{i}\sim\mathrm{Ber}(1/2)\). Let \(\mathrm{U}_{p}^{n}=\mathrm{U}[\{0,\ldots,p-1\}]^{n}\) denote \(n\) samples of the uniform distribution on \(\mathbb{Z}_{p}\), where each number is sampled with probability \(1/p\). Then, \([\Sigma(\mathrm{U}_{p}^{n})]_{0}\) reports the number of \(0\)'s in an \(n\)-bit sample from the uniform distribution on \(\mathbb{Z}_{p}\). Furthermore, we write that \(x\in A\) if \(x\) is an element of \(A\), and \(x\in_{U}A\) if \(x\) is an element chosen uniformly randomly from \(A\). Finally, we use \(\overline{C}\) to denote the complement of a set \(C\subseteq\Omega\), and for any two sets \(A,B\subseteq\Omega\), we define their symmetric difference \(A\Delta B\) as \((A\cap\overline{B})\cup(B\cap\overline{A})\).
**Definition 1** (\((n,d,\lambda)\)-expanders).: A \(d\)-regular graph \(G=(V,E)\) where \(|V|=n,|E|=m\) is said to be an \((n,d,\lambda)\)-expander if (for eigenvalues \(1=\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\geq-1\) of its degree-normalized adjacency matrix \(A\)) the second largest magnitude of the eigenvalue of its adjacency matrix \(A\) is at most \(\lambda\). In other words, \(\max_{i\neq 1}|\lambda_{i}|\leq\lambda\). Intuitively, the spectrum (\(\lambda_{1},...,\lambda_{n}\)) of an expander graph approximates the spectrum of the complete graph, which makes expanders a spectral sparsification of the clique.
**Definition 2** (Sticky Random Walk).: The Sticky Random Walk (SRW) \(S(n,\lambda)\) is a distribution on \(n\)-bit strings that represent \(n\)-step walks on a Markov chain with states \(\{0,1\}\) such that for each \(s\sim S(n,\lambda),\mathrm{Pr}[s_{i+1}=b|s_{i}=b]=\frac{1+\lambda}{2}\)
for \(b\in\{0,1\}\), and \(s_{1}\sim\text{Ber}(1/2)\) such that \(\Pr[s_{1}=0]=\Pr[s_{1}=1]=1/2\). As \(\lambda\to 0\), the distribution of strings from the Markov chain converges to the distribution of \(n\) independent coin-flips.
**Definition 3** (Krawtchouk Functions).: The Krawtchouk function \(K_{k}:\mathbb{Z}_{n+1}\to\mathbb{R}\) is defined, for \(\ell\in\mathbb{Z}_{n+1}\) and an arbitrary \(n\)-bit string \(\alpha\) where \(|\alpha|=\ell\), (\(|\cdot|\) is the number of \(0\)'s) such that:
\[K_{k}(\ell):=\sum_{\begin{subarray}{c}y\in\{0,1\}^{n}\\ |y|=k\end{subarray}}(-1)^{\alpha\cdot y}=\sum_{t=0}^{k}(-1)^{t}\binom{\ell}{t}\binom{n-\ell}{k-t}\]
**Lemma 2.1** (Krawtchouk orthogonality).: The Krawtchouk functions, viewed as functions \(\{0,\ldots,n\}\to\mathbb{R}\), are orthogonal with respect to the inner product below. The proof of this is deferred to Appendix A. Specifically:
\[\langle K_{r},K_{s}\rangle=\underset{b\sim\text{Bin}(n,1/2)}{\mathbb{E}}[K_{r}(b)K_{s}(b)]=\begin{cases}0,&\text{if }r\neq s\\ \binom{n}{r},&\text{if }r=s\end{cases}\]
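For concreteness, the following small sketch evaluates \(K_{k}(\ell)\) from Definition 3 and checks the orthogonality relation of Lemma 2.1 numerically for a small, arbitrarily chosen \(n\):

```python
import math
from itertools import product

# Evaluate K_k(l) from Definition 3 and check Lemma 2.1 for a small n.
def K(n, k, l):
    return sum((-1) ** t * math.comb(l, t) * math.comb(n - l, k - t)
               for t in range(k + 1))

def inner(n, r, s):
    # <K_r, K_s> = E_{b ~ Bin(n, 1/2)}[K_r(b) K_s(b)]
    return sum(math.comb(n, b) / 2 ** n * K(n, r, b) * K(n, s, b)
               for b in range(n + 1))

n = 6   # illustrative choice
for r, s in product(range(n + 1), repeat=2):
    expected = math.comb(n, r) if r == s else 0
    assert abs(inner(n, r, s) - expected) < 1e-9
print("Krawtchouk orthogonality verified for n =", n)
```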
**Lemma 2.2** (Krawtchouk invariance).: The Krawtchouk function is invariant against choices of \(\alpha\) which satisfy \(|\alpha|=\ell\). This statement is proven in [1]. We formally state this property below:
\[\underset{\begin{subarray}{c}B\in\{0,1\}^{n},\,|B|=k\\ \text{fixed }A\in\mathbb{Z}_{p}^{n},\,|A|=\ell\end{subarray}}{\mathbb{E}}[(-1)^{A\cdot B}]=\underset{\begin{subarray}{c}B\in\{0,1\}^{n},\,|B|=k\\ \text{random }A^{\prime}\in\mathbb{Z}_{p}^{n},\,|A^{\prime}|=\ell\end{subarray}}{\mathbb{E}}[(-1)^{A^{\prime}\cdot B}]\]
**Definition 4** (Total variation distance).: Given a measure space \((\Omega,\mathcal{F},\mu)\) and a \(\sigma\)-algebra \(\mathcal{A}\subseteq\mathcal{F}\), the total variational distance \(d_{\text{TV}}(\mu_{1},\mu_{2})\) between probability measures \(\mu_{1},\mu_{2}:\mathcal{F}\to\mathbb{R}\) is
\[d_{\text{TV}}(\mu_{1},\mu_{2})=\sup_{A\in\mathcal{A}}|\mu_{1}(A)-\mu_{2}(A)|\]
Similarly, the \(\ell_{1}\) distance between \(\mu_{1}\) and \(\mu_{2}\) is defined as \(d_{\ell_{1}}(\mu_{1},\mu_{2})=2d_{\text{TV}}(\mu_{1},\mu_{2})\). Additionally, for countable \(\Omega\) and \(\mathcal{A}=2^{\Omega}\), we have \(d_{\ell_{1}}(\mu_{1},\mu_{2})=\sum_{x\in\Omega}|\mu_{1}(x)-\mu_{2}(x)|\). Transitively,
\[d_{\text{TV}}(\mu_{1},\mu_{2})=\frac{1}{2}\sum_{x\in\Omega}|\mu_{1}(x)-\mu_{2} (x)|\]
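A minimal helper realizing Definition 4 for finite sample spaces, with distributions represented as dictionaries of probabilities (the example values are arbitrary):

```python
# Total variation distance for finite distributions given as dicts.
def tvd(mu1, mu2):
    support = set(mu1) | set(mu2)
    return 0.5 * sum(abs(mu1.get(x, 0.0) - mu2.get(x, 0.0)) for x in support)

# Example: TVD between Ber(1/2) and Ber(3/4) is 1/4.
print(tvd({0: 0.5, 1: 0.5}, {0: 0.25, 1: 0.75}))
```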
**Definition 5**.: _[Primitive root of unity] For \(p\geq 2\), let \(\omega_{p}\) denote the \(p^{th}\) primitive root of unity if it satisfies \((\omega_{p})^{p}=1\), and if there does not exist \(q\in\mathbb{N}\) where \(q<p\) such that \((\omega_{p})^{q}=1\). Specifically, the multiplicative order of the \(p^{th}\) primitive root of unity must be \(p\). Then, for \(1\leq k<p\), we must have that_
\[\sum_{j=0}^{p-1}(\omega_{p})^{kj}=\omega_{p}^{k\cdot 0}+\omega_{p}^{k\cdot 1}+... +\omega_{p}^{k\cdot(p-1)}=0\]
## 3 A Generalization of the Sticky Random Walk
We consider the case where the vertices of the sticky random walk (SRW) can be labeled with an arbitrary alphabet \(\mathbb{Z}_{p}\), since a decomposition of the vertex-set \(V=\{0,\ldots,p-1\}=V_{1}\sqcup\ldots\sqcup V_{q}\) for \(q\geq 2\), could allow us to model random walks where the probability of transitioning between different states is asymmetric, while allowing us to study the pseudorandomness of random walks on graphs with vertices with more complex labelings which has already been explored in [10]. This section generalizes the sticky random walk on \(p\) characters, and provides the context for bounding the total variation distance between the sticky random walk and \(\mathrm{U}_{p}^{n}\).
Figure 1: The Markov chain of the sticky random walk \(S(n,\lambda)\).
**Definition 6** (The generalized sticky random walk).: The generalized sticky random walk \(S(n,p,\lambda)\) is an \(n\)-step long, \(p\)-symbol walk on a Markov chain with \(p\) states \(\mathbb{Z}_{p}\) labeled as vertices on a complete graph with self-loops \(J_{p}\), where \(s_{0}\in_{U}\mathbb{Z}_{p}\) and at each subsequent step, we either stick to the same state with probability \(\frac{1}{p}+(p-1)\lambda\), or change to any other state with uniform probability \(\frac{1}{p}-\lambda\). So, for instance, comparing \(S(n,4,\lambda)\) to \(S(n,4,0)=U_{4}^{n}\) (the uniform random walk on 4 vertices) yields the following Markov chain graphs:
**Proposition 1** (Probability Invariance Under Permutations).: _For any \(x_{1},x_{2}\in\mathbb{Z}_{p}^{n}\), we have \(\text{Pr}[x_{1}]=\text{Pr}[x_{2}]\) iff \(|(i,i+1)|\) such that \((x_{1})_{i}=(x_{1})_{i+1}\) is equal to \(|(j,j+1)|\) such that \((x_{2})_{j}=(x_{2})_{j+1}\). This can be shown by considering the products of conditional probabilities on each state. A slight weakening of this statement is that for any permutation \(\pi\) such that \(\pi:\mathbb{Z}_{p}\to\mathbb{Z}_{p}\), we must have that \(\text{Pr}[x_{1}...x_{n}]=\text{Pr}[\pi(x_{1})...\pi(x_{n})]\). Consider the case of \(p=2\). Then, the lemma yields that \(\text{Pr}(x)=\text{Pr}(\tilde{x})\), which says that inverting the labels of a string from the sticky random walk does not change its probability. This proposition extends the same argument to all \(p\in\mathbb{N}\) for \(p\geq 2\)._
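A minimal sampler for the generalized sticky random walk \(S(n,p,\lambda)\) of Definition 6 is sketched below; the parameter values are illustrative, and the empirical counts of \(|s|_{0}\) it tabulates are the statistic studied in the rest of this section.

```python
import random
from collections import Counter

# Sampler for the generalized sticky random walk S(n, p, lambda) of
# Definition 6; parameter values below are illustrative only.
def sticky_walk(n, p, lam, rng=random):
    assert 0 <= lam <= 1 / p, "Definition 6 needs 1/p - lambda >= 0"
    s = [rng.randrange(p)]                 # s_0 uniform on Z_p
    stay = 1 / p + (p - 1) * lam           # probability of keeping the state
    for _ in range(n - 1):
        if rng.random() < stay:
            s.append(s[-1])
        else:
            # move to one of the other p-1 states uniformly
            s.append(rng.choice([c for c in range(p) if c != s[-1]]))
    return s

# Empirical distribution of |s|_0 (number of 0-labels); as lambda -> 0 it
# approaches the counts of the uniform distribution U_p^n.
n, p, lam, trials = 10, 4, 0.05, 20000
counts = Counter(sticky_walk(n, p, lam).count(0) for _ in range(trials))
print(sorted(counts.items()))
```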
**Proposition 2** (Krawtchouk Orthogonality).: The orthogonality of the Krawtchouk function \(K_{k}(\ell)\) implies that for any function \(f:\mathbb{Z}_{n+1}\to\mathbb{R}\), there exists a unique expansion \(f(\ell)=\sum\limits_{k=0}^{n}\hat{f}(k)K_{k}(\ell)\), where for \(0\leq k\leq n\),
\[\hat{f}(k)=\mathbb{E}\left[\binom{n}{k}f(b)K_{k}(b)\right]\]
**Definition 7** (Probability ratio).: Let \(q:\mathbb{Z}_{n+1}\to\mathbb{R}\), where \(q(\ell)=\frac{\Pr_{s\sim S(n,p,\lambda)}[|s|_{0}=\ell]}{\binom{n}{\ell}(p-1)^{n-\ell}}p^{n}\). Intuitively, \(q(\ell)\) is the ratio of the probability of getting a string with \(\ell\) 0s from the generalized sticky random walk \(S(n,p,\lambda)\) to the probability of getting a string with \(\ell\) 0s from \(\mathrm{U}_{p}^{n}\).
**Lemma 3.1**.: _[Krawtchouk coefficient of the probability ratio]_ Expanding \(q(\ell)\) through the Krawtchouk function expansion in Proposition 2 yields that:
\[\hat{q}(k)=\frac{1}{\binom{n}{k}(p-1)^{n-k}}\mathop{\mathbb{E}}_{s\sim S(n,p, \lambda)}[K_{k}(|s|_{0})]\]
Proof.: Writing the expected value of \(K_{k}(|s|_{0})\) using \(q(b)\) and \(K_{k}(b)\), we get:
\[\hat{q}(k) =\frac{1}{\binom{n}{k}(p-1)^{n-k}}\sum\limits_{b=0}^{n}\binom{n}{b }\frac{(p-1)^{n-b}}{p^{n}}q(b)K_{k}(b)\] \[=\frac{1}{\binom{n}{k}(p-1)^{n-k}}\sum\limits_{b=0}^{n}\Pr_{s\sim S (n,p,\lambda)}[|s|_{0}=b]K_{k}(b)\quad\text{(substituting $q(b)$)}\] \[=\frac{1}{\binom{n}{k}(p-1)^{n-k}}\mathop{\mathbb{E}}_{s\sim S(n, p,\lambda)}[K_{k}(|s|_{0})]\quad\text{(by definition of $\mathop{\mathbb{E}}_{s\in S(n,p,\lambda)}[K_{k}(|s|_{0})]$)}\qed\]
Figure 2: Markov chains of the generalized sticky random walk on 4 states and the uniform distribution on \(4\) states. The figure on the left corresponds to \(S(n,4,\lambda)\), the \(\lambda\)-biased sticky random walk on four vertices, and the figure on the right corresponds \(S(n,4,0)=U_{4}^{n}\), the unbiased random walk on four vertices.
**Lemma 3.2**.: For \(s\in S(n,p,\lambda)\), we have that \(\Pr[|s|_{0}=\ell]=\frac{1}{p^{n}}\sum\limits_{k=0}^{n}K_{\ell}(k)\mathop{\mathbb{ E}}_{s\sim S(n,p,\lambda)}[K_{k}(|s|_{0})]\).
Proof.: Writing out \(\Pr[|s|_{0}=\ell]\) in terms of the probability ratio \(q(\ell)\), we get:
\[\Pr[|s|_{0}=\ell] =\frac{\binom{n}{\ell}(p-1)^{n-\ell}}{p^{n}}q(\ell)\] \[=\frac{\binom{n}{\ell}(p-1)^{n-\ell}}{p^{n}}\sum\limits_{k=0}^{n} \hat{q}(k)K_{k}(\ell)\quad\text{(Krawtchouk expansion of $q(\ell)$)}\] \[=\frac{\binom{n}{\ell}(p-1)^{n-\ell}}{p^{n}}\sum\limits_{k=0}^{n} \frac{K_{k}(\ell)}{\binom{n}{k}(p-1)^{n-k}}\operatorname{\mathbb{E}}[K_{k}(|s |_{0})]\quad\text{(Lemma \ref{eq:Krawtchouk})}\] \[=\frac{1}{p^{n}}\sum\limits_{k=0}^{n}\frac{\binom{n}{\ell}}{ \binom{n}{k}}\frac{(p-1)^{n-\ell}}{(p-1)^{n-k}}\operatorname{\mathbb{E}}[K_{k }(|s|_{0})]K_{k}(\ell)\] \[=\frac{1}{p^{n}}\sum\limits_{k=0}^{n}K_{\ell}(k)\mathop{\mathbb{ E}}_{s\sim S(n,p,\lambda)}[K_{k}(|s|_{0})]\quad\text{(By the reciprocity relation)}\qed\]
Therefore, we observe that to compute \(\Pr[|s|_{0}=\ell]\), it is imperative to calculate the expected value of the Krawtchouk function. Section 4 is devoted to computing \(\operatorname{\mathbb{E}}_{s\sim S(n,p,\lambda)}[K_{k}(|s|_{0})]\).
## 4 Expectation of the Krawtchouk Function for the p-vertex Sticky Random Walk
**Definition 8** (Shift Function).: Given any set \(T\subseteq[n]\) such that \(|T|=k\), let \(a_{1}<...<a_{k}\) be the elements of \(T\) in increasing order. Then, for any \(c\in\mathbb{Z}_{p}\), let
\[\operatorname{shift}_{c}(T)=\sum\limits_{i=0}^{\lfloor(k-c)/p\rfloor}(a_{c+ip}-a_{c+ip-1})\]
Then, for any \(c\) such that \(k\mod p=-c\), and for any \(d\in\mathbb{Z}_{n+1}\), let \(\phi_{c}(d)\) denote the number of subsets of \([n]\) of size \(k\) such that \(\operatorname{shift}_{c}(T)=d\). Note that for any \(t\leq 0,a_{t}=0\).
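A brute-force sketch of Definition 8 for small, illustrative parameters is given below; it simply enumerates all \(k\)-subsets and tabulates \(\mathrm{shift}_{c}(T)\), and is not an efficient or general implementation.

```python
from itertools import combinations

# Brute-force evaluation of shift_c(T) and a tabulation of phi_c(d)
# (without the 1/p^k normalization) for small n, k, c, p.
def shift_c(T, c, p):
    a = sorted(T)                                  # a_1 < ... < a_k
    k = len(a)
    get = lambda t: a[t - 1] if t >= 1 else 0      # a_t = 0 for t <= 0
    return sum(get(c + i * p) - get(c + i * p - 1)
               for i in range((k - c) // p + 1))

def tabulate_shifts(n, k, c, p):
    counts = {}
    for T in combinations(range(1, n + 1), k):
        d = shift_c(T, c, p)
        counts[d] = counts.get(d, 0) + 1
    return counts

print(tabulate_shifts(n=8, k=4, c=0, p=2))
```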
**Lemma 4.1**.: The expected value of the Krawtchouk function is given by:
\[\operatorname{\mathbb{E}}[K_{k}(|s|_{0})]=\begin{cases}(p-1)^{n-k}\sum\limits _{d=k}^{n-k}\phi_{0}(d)\lambda^{d},&\text{if $c=k\mod p\equiv 0$}\\ 0,&\text{if $c=k\mod p\not\equiv 0$}\end{cases}\]
Proof.: By the formula of expected values, we have that:
\[\operatorname{\mathbb{E}}[K_{k}(|s|_{0})] =\sum\limits_{s\sim S(n,p,\lambda)}\Pr[s]\sum\limits_{\begin{subarray} {c}y\in\mathbb{Z}_{p}^{n}\\ |y|_{0}=k\end{subarray}}(-1)^{y\cdot s}\] \[=\sum\limits_{s\sim S(n,p,\lambda)}\Pr[s]\sum\limits_{ \begin{subarray}{c}y\in\mathbb{Z}_{p}^{n}\\ |y|_{0}=k\end{subarray}}(-1)^{\sum\limits_{i=1}^{n}y_{i}\cdot s_{i}}\] \[=\sum\limits_{s\sim S(n,p,\lambda)}\Pr[s]\sum\limits_{ \begin{subarray}{c}y\in\mathbb{Z}_{p}^{n}\\ |y|_{0}=k\end{subarray}}\prod\limits_{i=1}^{n}(-1)^{y_{i}\cdot s_{i}}\]
We note then that the dot-product on the exponent of \((-1)\) only takes the summation of the element-wise product of \(\alpha\) and \(y\) for positions on \(y\) that are strictly non-zero. Therefore, we can rewrite the summation by considering the indices
corresponding to locations of non-zeros in \(y\), and instead take the summation of the dot-product along these indices. So, for \(T=\{a_{1}<...<a_{n-k}\}\), we have that:
\[\mathbb{E}[K_{k}(|s|_{0})]=\sum_{s\sim S(n,p,\lambda)}\Pr[s]\sum_{T\in\binom{[n]} {n-k}}\prod_{i\in T}(-1)^{s_{i}}\]
Further, choosing a \(T\in\binom{[n]}{n-k}\) implies a choice of \(\overline{T}=\binom{[n]}{k}=[n]\setminus T\). Hence, the summation reduces to:
\[\mathbb{E}[K_{k}(|s|_{0})] =\sum_{s\sim S(n,p,\lambda)}\Pr[s]\sum_{T\in\binom{[n]}{k}}\prod_{ i\in\overline{T}}(-1)^{s_{i}}\] \[=\sum_{T\in\binom{[n]}{k}}\mathop{\mathbb{E}}_{s\sim S(n,p, \lambda)}\left[\prod_{i\in\overline{T}}(-1)^{s_{i}}\right]\quad\text{( definition of expectations)}\]
Next, observe that the sticky random walk is a Markov chain where \((-1)^{s_{i}}=(-1)^{s_{i-1}}\) with probability \(1/p+(p-1)\lambda\). So, we can instead model the transitions of strings from the sticky random walk as random variables \(u\), where \(u_{1}\) is uniformly distributed in \(\mathbb{Z}_{p}\) and for \(i\geq 2\), \(u_{i}\) is distributed according to the mixture \((1-\lambda)U[\mathbb{Z}_{p}]+\lambda\cdot 1_{0}\). To provide an intuition for this refactorization, \((1-\lambda)U[\mathbb{Z}_{p}]\) is the 'base' probability of switching to any vertex and \(\lambda\) is the additional probability of staying on the same vertex.
Then, since \(s_{i}=\sum_{i\in T}\sum_{j=1}^{i}u_{j}\), we write that:
\[\mathop{\mathbb{E}}_{s\in S(n,p,\lambda)}\left[\prod_{i\in\overline{T}}(-1)^{s_{i}}\right]=\mathop{\mathbb{E}}_{s\in S(n,p,\lambda)}\left[(-1)^{\sum_{i\in\overline{T}}\sum_{j=1}^{i}u_{j}}\right]\] \[=\prod_{j=1}^{a_{n-k}}\mathop{\mathbb{E}}_{s\in S(n,p,\lambda)}\left[(-1)^{\sum_{i\in\overline{T},\,i\geq j}u_{j}}\right]\quad\text{(independence of $u_{j}$'s)}\]
When \(j=1\), we get that:
\[\mathbb{E}\left[(-1)^{\left(\sum_{i\in\overline{T},\,i\geq 1}1\right)u_{1}}\right]=\mathbb{E}\left[(-1)^{|\overline{T}|u_{1}}\right]=\begin{cases}1,&\text{if }|\overline{T}|\bmod p\equiv 0\\ 0,&\text{otherwise}\end{cases}\]
Conversely, when \(j\geq 2\), let \(T_{j}=\{i\in\overline{T};i\geq j\}\). Then,
\[\mathbb{E}\left[(-1)^{\left(\sum_{i\in\overline{T},\,i\geq j}1\right)u_{j}}\right]=\mathbb{E}\left[(-1)^{|T_{j}|u_{j}}\right]=\begin{cases}1,&\text{if }|T_{j}|\bmod p\equiv 0\\ \mathbb{E}[(-1)^{u_{j}}],&\text{otherwise}\end{cases}\]
Next, observe that for \(j\geq 2\), \(\mathbb{E}[(-1)^{u_{j}}]=\lambda\).
Proof.: To show this, we write the expression for \(u_{j},j\geq 2\) in the exponent and take the expected value of \((-1)^{u_{j}}\).
\[\mathbb{E}[(-1)^{u_{j}}] =\mathbb{E}[(-1)^{(1-\lambda)U[\mathbb{Z}_{p}]+\lambda\cdot 1_{0} }]\quad\text{(since $u_{j}\sim(1-\lambda)U[\mathbb{Z}_{p}]+\lambda\cdot 1_{0}$)}\] \[=\sum_{k=0}^{p-1}(-1)^{k}\Pr[u_{j}=k]\quad\text{(definition of expectation)}\] \[=(-1)^{0}\Pr[u_{j}=0]+(-1)^{1}\Pr[u_{j}=1]+...+(-1)^{p-1}\Pr[u_{j} =p-1]\] \[=\left(\frac{1}{p}+\lambda\bigg{(}\frac{p-1}{p}\bigg{)}\right)- \left(\frac{1}{p}-\frac{\lambda}{p}\right)+...+(-1)^{p-1}\bigg{(}\frac{1}{p}- \frac{\lambda}{p}\bigg{)}\] \[=\frac{1}{p}\sum_{k=0}^{p-1}(-1)^{k}-\frac{\lambda}{p}\sum_{k=0}^ {p-1}(-1)^{k}+\lambda(-1)^{0}\] \[=\lambda\]
Then, for \(k\mod p\equiv 0\), we have that:
\[\mathbb{E}[K_{k}(|s|_{0})] =\sum_{T\in\binom{[n]}{k}}\underset{s\sim S(n,p,\lambda)}{\mathbb{E }}\bigg{[}\prod_{i\in T}(-1)^{s_{i}}\bigg{]}\] \[=\sum_{T\in\binom{[n]}{k}}\prod_{i\in T}\underset{s\sim S(n,p, \lambda)}{\mathbb{E}}[(-1)^{s_{i}}]\quad\text{(independence of $(-1)^{s_{i}}$)}\] \[=\sum_{T\in\binom{[n]}{k}}\prod_{j=1}^{a_{n-k}}\lambda\quad\text{( since $|T|=a_{n-k}$)}\] \[=\sum_{T\in\binom{[n]}{k}}\lambda^{a_{n-k}}\]
We then parameterize the summation over every possible value of the shift of T (for \(k\mod p\equiv 0\)), where the shift function is given in Definition 8.
\[\mathbb{E}[K_{k}(|s|_{0})] =\sum_{d=k}^{n-k}\bigg{(}\sum_{T\in\binom{[n]}{k}}1\bigg{)} \lambda^{d}\] \[=\sum_{d=k}^{n-k}\phi_{0}(d)\lambda^{d}\]
This yields the claim.
**Lemma 4.2**.: For \(c\in\mathbb{N}\) where \(0\leq c\leq p\), and for \(d\in\mathbb{N}\) where \(k\leq d\leq n-k\), the number of \(k\)-sized subsets \(T\) of \([n]\) that satisfy \(\text{shift}_{c}(T)=d\) is:
\[\phi_{c}(d)=\frac{1}{p^{k}}\sum_{\begin{subarray}{c}T\in\binom{[n]}{k}\\ \operatorname{shift}_{c}(T)=d\end{subarray}}1=\frac{1}{p^{k}}\binom{d-1}{\lfloor\frac{|k-c|}{p-1}\rfloor-1}\binom{n-d}{\lfloor\frac{|k-c|}{p-1}\rfloor}\]
Proof.: To determine \(\phi_{c}(d)\), we count the total number of ways to choose \(a_{1}{<}a_{2}{<}...{<}a_{k}\) such that the lengths of the intervals \((a_{c}-a_{c-1})+(a_{c+p}-a_{c+p-1})+(a_{c+2p}-a_{c+2p-1})+...=d\), where for any \(j\leq 0\), \(a_{j}=0\). To do this, we combine each element-wise interval \((a_{c+ip},a_{c+ip-1})\) to form a contiguous interval of length \(d\) (starting from \(a_{c-1}=0\)). The remaining contiguous region that excludes these intervals must then have a length of \(n-d\). We then abstract the number of ways to count \(a_{1}<...<a_{k}\) by counting the number of intervals that have a length of \(d\) when combined, such that the remaining intervals have a length \(n-d\). From a length of \(d-1\) (accounting for \(a_{0}=0\)), we need to select intervals that form a length of \(\lfloor|k-c|/(p-1)\rfloor-1\) since they represent the number of choices of elements of \(T\) that are index-separated by \(p\). Similarly, from a length of \(n-d\), we need to select intervals that form a length of \(\lfloor|k-c|/(p-1)\rfloor\) possible intervals, since they represent every other element of \(T\). This second constraint is to ensure that the total length of the intervals chosen is exactly \(n\). Finally, we divide by the maximum number of repetitions to prevent duplicates, which is \(p^{k}\). Hence, we write that:
\[\sum_{\begin{subarray}{c}T\in\binom{[n]}{k}\\ \text{shift}_{c}(T)=d\end{subarray}}1=\binom{d-1}{\lfloor\frac{|k-c|}{p-1} \rfloor-1}\binom{n-d}{\lfloor\frac{|k-c|}{p-1}\rfloor}\]
Therefore,
\[\phi_{c}(d)=\frac{1}{p^{k}}\sum_{\begin{subarray}{c}T\in\binom{[n]}{k}\\ \text{shift}_{c}(T)=d\end{subarray}}1=\frac{1}{p^{k}}\binom{d-1}{\lfloor\frac {|k-c|}{p-1}\rfloor-1}\binom{n-d}{\lfloor\frac{|k-c|}{p-1}\rfloor}\]
**Corollary 4.2.1**.: By combining the results from Lemmas 4.1 and 4.2, we have that the expectation of the Krawtchouk function is:
\[\mathbb{E}[K_{k}(|s|_{0})]=\begin{cases}\frac{1}{p^{k}}\sum\limits_{d=k}^{n-k}\binom{d-1}{\lfloor\frac{k}{p-1}\rfloor-1}\binom{n-d}{\lfloor\frac{k}{p-1}\rfloor}\lambda^{d},&\text{ if }k\bmod p\equiv 0\\ 0,&\text{ if }k\bmod p\not\equiv 0\end{cases}\]
Thus, having computed \(\mathbb{E}[K_{k}(|s|_{0})]\), we are now prepared to upper-bound the total variation distance between \([\Sigma(S(n,p,\lambda))]_{0}\) and \([\Sigma(\mathrm{U}_{p}^{n})]_{0}\). We devote Section 5 to deriving an optimal upper bound of \(O(\lambda)\).
## 5 Upper Bounds for the Total Variation Distance
**Lemma 5.1**.: The total variation distance between the generalized sticky random walk on \(p\) vertices and the uniform distribution on \(p\) states is given by:
\[\mathrm{TVD}([\Sigma(S(n,p,\lambda))]_{0},[\Sigma(\mathrm{U}_{p}^{n})]_{0})=\frac{1}{2}\mathop{\mathbb{E}}_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0}}[|q(b)-1|]\]
Proof.: We write the expression of the total variation distance between the \(n\)-step sticky random walk on \(p\) states and \(n\)-samples from the uniform distribution on \(p\) states. This then yields:
\[\mathrm{TVD}([\Sigma(S(n,p,\lambda))]_{0},[\Sigma(\mathrm{U}_{p}^ {n})]_{0}) =\frac{1}{2}\sum_{\ell=0}^{n}\bigg{|}\Pr[|s|_{0}=\ell]-\frac{ \binom{n}{\ell}(p-1)^{n-\ell}}{p^{n}}\bigg{|}\] \[=\frac{1}{2}\sum_{\ell=0}^{n}\bigg{|}\binom{n}{\ell}q(\ell)\frac{ (p-1)^{n-\ell}}{p^{n}}-\frac{\binom{n}{\ell}(p-1)^{n-\ell}}{p^{n}}\bigg{|}\] \[=\frac{1}{2}\sum_{\ell=0}^{n}\bigg{|}\frac{\binom{n}{\ell}}{p^{n} }(p-1)^{n-\ell}(q(\ell)-1)\bigg{|}\] \[=\frac{1}{2}\sum_{\ell=0}^{n}|\Pr[\ell](q(\ell)-1)|\] \[=\frac{1}{2}\mathop{\mathbb{E}}_{b\sim[\Sigma(\mathrm{U}_{p}^{n} )]_{0}}[|q(b)-1|]\qed\]
**Corollary 5.1.1**.: The total variation distance between the generalized sticky random walk and the uniform distribution on \(p\) states has the following upper bound as a consequence of convexity (Jensen's inequality):
\[\mathrm{TVD}([\Sigma(S(n,p,\lambda))]_{0},[\Sigma(\mathrm{U}_{p}^{n})]_{0}) \leq\frac{1}{2}\sqrt{\mathop{\mathbb{E}}_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{ 0}}[q(b)-1]^{2}}\]
**Lemma 5.2**.: For \(k\leq n\) and for \(b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0}\), we have that
\[\mathop{\mathbb{E}}_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0}}[q(b)-1]^{2}=\sum_{ k=1}^{n}\frac{\mathbb{E}[K_{k}(|s|_{0})]^{2}}{\binom{n}{k}(p-1)^{n-k}}\]
Proof.: By Section 3, \(q(b)\) has a unique Krawtchouk expansion whose coefficients are given by Proposition 2, so we observe that:
\[\mathop{\mathbb{E}}_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0}}[q(b)-1]^{2}=\mathop {\mathbb{E}}_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0}}\bigg{[}\bigg{(}\sum_{k= 0}^{n}\hat{q}(k)K_{k}(b)-1\bigg{)}^{2}\bigg{]}\]
Then, recall that \(\hat{q}(k)=\frac{\mathbb{E}[K_{k}(|s|_{0})]}{\binom{n}{k}(p-1)^{n-k}}\). So, \(\hat{q}(0)=\frac{\mathbb{E}[K_{0}(|s|_{0})]}{(p-1)^{n}}=\frac{1}{(p-1)^{n}}\). Similarly, by the definition of the Krawtchouk function and the reciprocity relation \(\frac{K_{k}(\ell)}{\binom{n}{k}(p-1)^{n-k}}=\frac{K_{\ell}(k)}{\binom{n}{\ell}(p-1)^{n-\ell}}\), we have that \(K_{0}(b)=K_{n}(b)=(p-1)^{n}\). Therefore,
\(\hat{q}(0)K_{0}(b)=1\). Hence, the above equation simplifies to:
\[\mathop{\mathbb{E}}\limits_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0}}[q(b)-1]^{2}= \mathop{\mathbb{E}}\limits_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0}}\left[\bigg{(} \sum\limits_{k=1}^{n}\hat{q}(k)K_{k}(b)\bigg{)}^{2}\right]\]
Since the generalized Krawtchouk functions are orthogonal (as proven in Lemma 3.2), the cross terms in the square above all vanish in expectation, so the expected square of the summation is just the sum of the expected squares of its terms. Thus, exploiting the orthogonality of the generalized Krawtchouk functions and the linearity of expectation, we write that:
\[\mathop{\mathbb{E}}\limits_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0 }}[q(b)-1]^{2} =\mathop{\mathbb{E}}\limits_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0 }}\bigg{[}\sum\limits_{k=1}^{n}\hat{q}(k)^{2}K_{k}(b)^{2}\bigg{]}\] \[=\sum\limits_{k=1}^{n}\hat{q}(k)^{2}\mathop{\mathbb{E}}\limits_{b \sim[\Sigma(\mathrm{U}_{p}^{n})]_{0}}[K_{k}(b)^{2}]\quad\text{(linearity of expectation)}\] \[=\sum\limits_{k=1}^{n}\frac{\mathop{\mathbb{E}}[K_{k}(|s|_{0})]^{ 2}}{\binom{n}{k}^{2}(p-1)^{2n-2k}}\cdot\mathop{\mathbb{E}}\limits_{b\sim[ \Sigma(\mathrm{U}_{p}^{n})]_{0}}[K_{k}(b)^{2}]\]
Finally, we use Lemma 3.2 to write \(\mathop{\mathbb{E}}\limits_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0}}[K_{k}(b)^{ 2}]\) as \(\langle K_{k},K_{k}\rangle=\binom{n}{k}\).
\[\mathop{\mathbb{E}}\limits_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0}}[q(b)-1]^{2} =\sum\limits_{k=1}^{n}\frac{\binom{n}{k}\mathop{\mathbb{E}}[K_{k}(|s|_{0})]^{2}}{\binom{n}{k}^{2}(p-1)^{2n-2k}}=\sum\limits_{k=1}^{n}\frac{\mathop{\mathbb{E}}[K_{k}(|s|_{0})]^{2}}{\binom{n}{k}(p-1)^{2n-2k}}\qed\]
**Theorem 5.3**.: For \(\lambda\leq 0.27\),
\[\mathrm{TVD}([\Sigma(S(n,p,\lambda))]_{0},[\Sigma(\mathrm{U}_{p}^{n})]_{0}) \leq O(\lambda)\]
Proof.: Substituting the result of Corollary 4.2.1 into the equation derived in Lemma 5.2, and rescaling the indices of the summation, we have that:
\[\mathop{\mathbb{E}}\limits_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0}}[q(b)-1]^{2} =\sum\limits_{k=1}^{n/p}\frac{1}{\binom{n}{pk}(p-1)^{2n-2pk}}\bigg{(}\sum\limits_{d=pk}^{n-pk}\frac{1}{p^{k}}\binom{d-1}{\lfloor\frac{pk}{p-1}\rfloor-1}\binom{n-d}{\lfloor\frac{pk}{p-1}\rfloor}\lambda^{d}\bigg{)}^{2}\] \[=\frac{1}{p^{2k}}\sum\limits_{k=1}^{n/p}\frac{1}{\binom{n}{pk}(p-1)^{2n-2pk}}\bigg{(}\sum\limits_{d=pk}^{n-pk}\binom{d-1}{\lfloor\frac{pk}{p-1}\rfloor-1}\binom{n-d}{\lfloor\frac{pk}{p-1}\rfloor}\lambda^{d}\bigg{)}^{2}\] \[\leq\frac{1}{p^{2k}}\sum\limits_{k=1}^{n/p}\frac{\binom{n}{k}^{2}}{\binom{n}{pk}(p-1)^{2n-2pk}}\bigg{(}\sum\limits_{d=pk}^{n-pk}\binom{d-1}{k-1}\lambda^{d}\bigg{)}^{2}\] \[\leq\frac{1}{p^{2k}}\sum\limits_{k=1}^{n/p}\frac{\binom{n}{k}^{2}}{\binom{n}{pk}}\bigg{(}\sum\limits_{d=pk}^{n-pk}\binom{d-1}{k-1}\lambda^{d}\bigg{)}^{2}\]
Note the following generating function relation that \((\frac{x}{1-x})^{k}=\sum\limits_{m\geq k}\binom{m-1}{k-1}x^{m}\). Then,
\[\mathop{\mathbb{E}}\limits_{b\sim[\Sigma(\mathrm{U}_{p}^{n})]_{0} }[q(b)-1]^{2} \leq\frac{1}{p^{2k}}\sum\limits_{k=1}^{n/p}\frac{\binom{n}{k}^{2} }{\binom{n}{pk}}\bigg{(}\frac{\lambda}{1-\lambda}\bigg{)}^{2k}\] \[\leq\frac{1}{p^{2k}}\sum\limits_{k=1}^{n/p}\bigg{(}\frac{pk}{n} \bigg{)}^{pk}\bigg{(}\frac{en}{k}\bigg{)}^{2k}\bigg{(}\frac{\lambda}{1- \lambda}\bigg{)}^{2k}\quad\text{(From Appendix A)}\] \[=\sum\limits_{k=1}^{n/p}\bigg{(}\frac{pk}{n}\bigg{)}^{pk-2k} \bigg{(}\frac{e\lambda}{1-\lambda}\bigg{)}^{2k}\] \[\leq\sum\limits_{k=1}^{n/p}\bigg{(}\frac{e\lambda}{1-\lambda} \bigg{)}^{2k}\] \[\leq O(\lambda^{2}),\quad\text{(for $\lambda\leq\frac{1}{1+e}$ by geometric sums)}\]
Therefore, for \(\lambda\leq\frac{1}{1+e}\approx 0.27\), we have that \(\text{TVD}\leq\sqrt{\operatorname*{\mathbb{E}}_{b\sim[\Sigma(U_{p}^{n})]_{0}}[q(b)-1]^{2}}\leq O(\lambda)\).
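As a sanity check of the \(O(\lambda)\) behaviour, the following Monte-Carlo sketch (our own illustration, not part of the original proof) estimates the total variation distance between the zero-count of the sticky random walk and that of i.i.d. uniform strings. It assumes the increment model of Section 4 (\(u_{1}\) uniform on \(\mathbb{Z}_{p}\) and, for \(i\geq 2\), \(u_{i}=0\) with probability \(\lambda\) and uniform otherwise) and that \(|s|_{0}\) counts the zero coordinates of \(s\); the printed values are empirical estimates only.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def sticky_walk(n, p, lam):
    """One n-step sticky walk: s_1 uniform; s_i = s_{i-1} + u_i (mod p)."""
    s = np.empty(n, dtype=int)
    s[0] = int(rng.integers(p))
    for i in range(1, n):
        step = 0 if rng.random() < lam else int(rng.integers(p))
        s[i] = (s[i - 1] + step) % p
    return s

def tvd_zero_counts(n, p, lam, trials=100_000):
    """Empirical TVD between |s|_0 under the sticky walk and under U_p^n."""
    counts = np.zeros(n + 1)
    for _ in range(trials):
        counts[int(np.count_nonzero(sticky_walk(n, p, lam) == 0))] += 1
    emp = counts / trials
    unif = np.array([comb(n, l) * (p - 1) ** (n - l) / p ** n
                     for l in range(n + 1)])
    return 0.5 * float(np.abs(emp - unif).sum())

for lam in (0.05, 0.10, 0.20):
    print(lam, round(tvd_zero_counts(12, 3, lam), 4))
```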
## 6 Proof Strengths and Limitations
When \(\lambda>0.27\), our proof method fails to provide the desired \(O(\lambda)\) total variation distance, since the bound \(\sum_{k=1}^{n/p}(\frac{e\lambda}{1-\lambda})^{2k}\) blows up as \(n\) grows. We do, however, obtain a larger range of validity (\(\lambda\leq 0.27\)) than the \(\lambda\leq 0.16\) of [1] and the more general result of [12], whose interpretation of their \(O(\lambda p^{O(p)})\) bound holds only for \(\lambda<0.01\) and fixed \(p\); moreover, our methodology removes the dependency on \(p\) in the total variation distance (though only for the generalized sticky random walk). We conjecture that \(\lambda\leq 0.27\) is not the optimal radius of convergence for the generalized sticky random walk and leave this as an open problem for future research to resolve.
## 7 Reductions
**Theorem 7.1**.: Let \(p\) be the number of vertices in the generalized sticky random walk. Then, for every \(k\) with \(p\bmod k\equiv 0\), \(S(n,p,\lambda)\) reduces to \(S(n,k,p\lambda(1-\frac{1}{k}))\).
Proof.: Consider a generalized sticky random walk, which is an irreducible homogeneous Markov chain on \(p\) states where the probability of staying at the same state is \(\frac{1}{p}+(p-1)\lambda\) and the probability of switching to each other state is \(\frac{1}{p}-\lambda\). Then, consider a 'grouped' random walk for a grouping of states \(V=[p]=V_{0}\sqcup\cdots\sqcup V_{k-1}\), where each \(V_{i}\) contains an arbitrary selection of \(p/k\) vertices and \(p\bmod k\equiv 0\). Here, \(S(n,p,\lambda)\) denotes an \(n\)-step-long walk of this chain. We note that the probability that the grouped random walk stays in its current group must be \((\frac{1}{p}+(p-1)\lambda)+(\frac{1}{p}-\lambda)(\frac{p}{k}-1)=\frac{1}{k}+p\lambda(1-\frac{1}{k})\). Here, the first term comes from the probability of any state \(E\) in the sticky random walk staying at itself, and the second term comes from the probability that \(E\) transitions to one of the other vertices in the same group as \(E\). Since this is true for every group, we conclude that our grouping of the random walk yields a sticky random walk on \(k\) vertices with bias \(p\lambda(1-\frac{1}{k})\), which completes the reduction.
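The grouping argument can be checked numerically. The sketch below is our own illustration, using the transition probabilities stated in the proof (stay with probability \(\frac{1}{p}+(p-1)\lambda\), move to each other state with probability \(\frac{1}{p}-\lambda\)); it estimates the probability that the grouped chain stays in its current group and compares it with \(\frac{1}{k}+p\lambda(1-\frac{1}{k})\).

```python
import numpy as np

rng = np.random.default_rng(1)

def grouped_stay_probability(p, k, lam, steps=200_000):
    """Empirical stay probability of the chain obtained by grouping the p states
    into k blocks of size p/k (states i, j share a block iff i//(p//k) == j//(p//k))."""
    stay = 1 / p + (p - 1) * lam                     # probability of staying at the same state
    others = {v: [w for w in range(p) if w != v] for v in range(p)}
    state, same = int(rng.integers(p)), 0
    for _ in range(steps):
        nxt = state if rng.random() < stay else int(rng.choice(others[state]))
        same += (nxt // (p // k)) == (state // (p // k))
        state = nxt
    return same / steps

p, k, lam = 6, 3, 0.05
print(grouped_stay_probability(p, k, lam), 1 / k + p * lam * (1 - 1 / k))
```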
**Theorem 7.2**.: Every generalized sticky random walk \(S(n,p,\lambda)\) corresponds to a \(n\)-step long random walk on a \(p\lambda\)-spectral expander with \(p\) vertices.
Proof.: We first extend Definition 45 of the sticky random walk matrix in [12]. For subsets \(A,B\subseteq[p]\), let \(J_{A,B}\in\mathbb{R}^{A\times B}\) denote the matrix with all entries equal to \(\frac{1}{|A|}\). Then, for \(\lambda\in[0,1]\), let \(G_{\lambda,p}\in\mathbb{R}^{p\times p}\) denote the generalized sticky random walk matrix. Then, by definition, we write that:
\[G_{\lambda,p}=(1-\lambda)J_{V,V}+p\lambda I_{p\times p}\]
Then, note that \(\|I_{p\times p}\|_{2}=1\), as it acts as the identity on the orthogonal subspaces of \(\mathbb{R}^{p}\). Therefore, \(\lambda(G_{\lambda,p})\leq p\lambda\): every eigenvector \(\nu\) of \(G_{\lambda,p}\) other than the all-ones direction is orthogonal to \(J_{V,V}\), and so \(G_{\lambda,p}\nu=((1-\lambda)J_{V,V}+p\lambda I_{p\times p})\nu=(1-\lambda)J_{V,V}\nu+p\lambda I_{p\times p}\nu=p\lambda\nu\). The opposite inequality comes from the fact that \(\frac{p-1}{p}\mathbb{1}_{\{v_{0}\}}-\frac{1}{p}\sum_{i=1}^{p-1}\mathbb{1}_{\{v_{i}\}}\in\mathbb{1}^{\perp}\) is an eigenvector of \(G_{\lambda,p}\) with eigenvalue \(p\lambda\), which proves the theorem.
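A quick numerical check of the spectral claim, using the matrix \(G_{\lambda,p}\) exactly as defined above (our own illustration; the specific values of \(p\) and \(\lambda\) are arbitrary):

```python
import numpy as np

def sticky_matrix(p, lam):
    """G_{lambda,p} = (1 - lambda) * J_{V,V} + p * lambda * I, as in the proof."""
    J = np.full((p, p), 1 / p)
    return (1 - lam) * J + p * lam * np.eye(p)

p, lam = 5, 0.03
eigs = np.sort(np.linalg.eigvalsh(sticky_matrix(p, lam)))[::-1]
# The top eigenvalue belongs to the all-ones direction; the rest equal p*lambda.
print(eigs[0], eigs[1], p * lam)
```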
## 8 Acknowledgements
This work was done as part of a Caltech undergraduate thesis, and this preprint submission has been reformatted from the thesis submission [1]. EA thanks Professor Chris Umans for his mentorship during this project, and Elia Gorokhovsky and David Hou for helpful suggestions over the course of this work.
|
2306.09462
|
Motion Comfort Optimization for Autonomous Vehicles: Concepts, Methods,
and Techniques
|
This article outlines the architecture of autonomous driving and related
complementary frameworks from the perspective of human comfort. The technical
elements for measuring Autonomous Vehicle (AV) user comfort and psychoanalysis
are listed here. At the same time, this article introduces the technology
related to the structure of automatic driving and the reaction time of
automatic driving. We also discuss the technical details related to the
automatic driving comfort system, the response time of the AV driver, the
comfort level of the AV, motion sickness, and related optimization
technologies. The function of the sensor is affected by various factors. Since
the sensor of automatic driving mainly senses the environment around a vehicle,
including "the weather" which introduces the challenges and limitations of
second-hand sensors in autonomous vehicles under different weather conditions.
The comfort and safety of autonomous driving are also factors that affect the
development of autonomous driving technologies. This article further analyzes
the impact of autonomous driving on the user's physical and psychological
states and how the comfort factors of autonomous vehicles affect the automotive
market. Also, part of our focus is on the benefits and shortcomings of
autonomous driving. The goal is to present an exhaustive overview of the most
relevant technical matters to help researchers and application developers
comprehend the different comfort factors and systems of autonomous driving.
Finally, we provide detailed automated driving comfort use cases to illustrate
the comfort-related issues of autonomous driving. Then, we provide implications
and insights for the future of autonomous driving.
|
Mohammed Aledhari, Mohamed Rahouti, Junaid Qadir, Basheer Qolomany, Mohsen Guizani, Ala Al-Fuqaha
|
2023-06-15T19:32:04Z
|
http://arxiv.org/abs/2306.09462v1
|
# Motion Comfort Optimization for Autonomous Vehicles: Concepts, Methods, and Techniques
###### Abstract
This article outlines the architecture of autonomous driving and related complementary frameworks from the perspective of human comfort. The technical elements for measuring Autonomous Vehicle (AV) user comfort and psychoanalysis are listed here. At the same time, this article introduces the technology related to the structure of automatic driving and the reaction time of automatic driving. We also discuss the technical details related to the automatic driving comfort system, the response time of the AV driver, the comfort level of the AV, motion sickness, and related optimization technologies. The function of the sensor is affected by various factors. Since the sensor of automatic driving mainly senses the environment around a vehicle, including "the weather" which introduces the challenges and limitations of second-hand sensors in autonomous vehicles under different weather conditions. The comfort and safety of autonomous driving are also factors that affect the development of autonomous driving technologies. This article further analyzes the impact of autonomous driving on the user's physical and psychological states and how the comfort factors of autonomous vehicles affect the automotive market. Also, part of our focus is on the benefits and shortcomings of autonomous driving. The goal is to present an exhaustive overview of the most relevant technical matters to help researchers and application developers comprehend the different comfort factors and systems of autonomous driving. Finally, we provide detailed automated driving comfort use cases to illustrate the comfort-related issues of autonomous driving. Then, we provide implications and insights for the future of autonomous driving.
Autonomous driving, Autonomous vehicles, Sensors, Comfort, Driving safety, Psychoanalysis, Machine learning, Response time, Reaction time
## I Introduction
Currently, the autonomous vehicles (AV) market is forecasted to reach 186.40 billion USD by 2030, and in 2021 alone, the AV market was worth 4 billion USD. Such high investments prove that AVs have become the primary revolution in connectivity technology and automation, and many of their operations have become reality [1]. Research on AVs is still developing rapidly since AVs have emerged to provide travel services for humans. Recent research regarding AVs is focused on achieving significant improvement in the AV's safety, fuel consumption, efficiency, and comfort. Evaluation factors for AVs have also been examined and implemented in terms of safety and energy, but comfort seems to be the most difficult evaluation factor to define properly. This brings to attention a problem within the AV research community: unclear guidelines and standards on how to provide the best driving experience via a human-machine interaction system. The comfort of the customer for the AV involves not only the passenger but also the driver [2]. AVs are classified from the automation perspective into six levels, 0-5. Level 0 refers to vehicles with no automation; level 5 represents fully autonomous vehicles, where drivers do not intervene. Level 2 indicates that drivers must control some main driving tasks while others can be performed partially autonomously. Vehicles have complete control to drive autonomously in levels 3 and 4, but drivers are allowed to intervene if needed. In levels 3 and 4, drivers can manage their time to relax for specific periods and drive at other times. As such, with each increasing level of automation, comfort becomes more essential [3].
Comfort can be categorized by different factors as shown in Figure 1. Here, controllability factors represent the probabilistic attributes linking accidents to hazardous events based on the likelihood of the driver's ability to control the hazardous situation and thus evade harm. Robotic control factors represent system factors contributing to the movement of the car, which involve the program and mechanical aspects that make it possible to control the car's operations [4, 5, 6, 7]. In general, the discomfort of AV users is challenging to eliminate or minimize because comfort is subjective. Different people will have different levels of comfort. Comfort of self-driving cars is impacted by many aspects, among which the reaction time of the self-driving car impacts the comfort of passengers both physiologically and psychologically [8]. Note that the user's comfort level includes both the driver and the passengers of the self-driving car because once it is successfully implemented, the driver will now become a passenger. Unfortunately, when it comes to AV planning and design, as far as the performance of self-driving cars and user comfort of self-driving cars are concerned, user comfort is treated as an after-thought [9]. The lack of extensive investigations of control strategies of AVs and their impact on the comfort factors of AV users and drivers contributed to such significant issues [10, 11, 12, 13].
Further complicating matters regarding comfort for AVs
is that there is no standard definition for it in the scientific community [14]. Some researchers have categorized the comfort factors into either comfort or discomfort classes. As such, comfort would simply be defined as the absence of discomfort. Additionally, the authors of [14] discussed whether comfort and discomfort should be treated with the same significance or whether one of them is more important than the other. This problem is also relevant to the Human-Machine-Interaction (HMI) areas. Moreover, some literature divided the comfort factors into three levels of a human's expressive states, as seen in Figure 2. This also applies to riding/experiencing AVs for the first time [15, 16]. State A is a negative state, which relates to apprehension of riding AVs. These are the most complex and essential psychological and physiological factors for passengers to satisfy the required AV regulations since it is natural for people to get anxious when they are first using new devices. Usually, people have a significant level of anxiety if they do not have prior practice or use for the new technology or device [17, 18]. This anxiety may accumulate in other forms of discomfort for the passenger [19, 20]. State B corresponds to the openness (willingness) of that passenger who wants to use the AV as an ordinary use case, which represents the main goal for autonomous vehicles. State C represents the positive state (excitability) when passengers get used to riding with AVs.
When studying these topics, some researchers focused on a specific type of discomfort (e.g., motion sickness, anxiety) in relation to AVs. Out of all the types of discomforts relating to AVs, motion sickness is the most prevalent, as it has a huge impact on the consumer's riding experience, acceptance, and trust in the AV. What causes motion sickness has not yet been conclusively clarified, but according to a common theory, the cause is likely a sensory conflict; The AV's movement and its expectation (what we see) do not match. According to this theory, it should be able to help give the AV's occupants as precise information as possible about the impending movement - acceleration, curves, braking, and so on. Motion sickness can be identified in several ways, such as the presence of eye strain, nausea, headache, vomiting, sweating, and difficulty focusing. Such symptoms can be detected in AV drivers and passengers by checking and monitoring their physiological conditions. Motion sickness is one of the active AV research areas, with many strategies and frameworks being implemented to mitigate to achieve the AV's maximum comfort.
Safety and security are further crucial factors that affect the comfort of AVs. In terms of safety, AVs are designed to prevent accidents and minimize the risk of injury to passengers and road users. The safety features of AVs include advanced sensors, cameras, and machine learning algorithms that detect obstacles, pedestrians, and other vehicles in real-time. These features work together to ensure that the vehicle stays on the road and does not collide with other objects or vehicles. In terms of security, AVs are vulnerable to cyber attacks, which may compromise the safety and security of passengers in the vehicle. For instance, an adversary could take control of the vehicle and cause it to crash or divert it from its intended route. Thus, safety and security are critical factors that impact the comfort of AV users [9]. Passengers must feel safe and secure while using these vehicles to enjoy a comfortable ride. Therefore, manufacturers must prioritize safety and security in the design and implementation of autonomous vehicles.
This article explores the various possibilities that affect AV user comfort based on the elements listed in technical aspects, biomedical sensors, and psychological analysis. It looks at problems that affect the driver's reaction time in an incident, the comfort level in AVs, the movement dysfunction, smoothness, and jerkiness, and whether autonomous driving is relaxing and safe. Moreover, the article also explores comfort-related factors of autonomous driving, such as the impact of safety plus self-driving car architectures. This article not only reviews the comfort of autonomous driving from a technical perspective but also involves the physiological conditions of autonomous driving. It also explores the impact and benefits and costs of autonomous driving from an economic perspective.
### _Scope and Contributions of the Paper_
This work reviews and discusses topics of AVs with a focus on comfort and motion sickness from the humans/users (i.e., drivers and passengers) perspective. We cover a broad range of technical aspects of both topics in regards to some current frameworks that have been developed and some common questionnaires that have been proposed. The paper also covers movement dysfunction, smoothness, and jerkiness, as those factors impact both the comfort of AVs for the user as well as the level of motion sickness experienced. Our main contribution is an exploration of the comfort factors
Fig. 1: Crucial concepts to consider for improving AV experience for consumers.
Fig. 2: A passenger’s emotive state for using AVs based on the concept of HMI [14].
and motion sickness of AVs from a technical perspective. We examine current architectures and frameworks proposed for mitigating motion sickness for AVs. We also examine the correlation between trust and comfort for AVs, as trust is also a crucial topic of AVs that is still talked about. There has been discussion about trust in terms of safety and security, but little about whether trust and comfort are correlated. We provide arguments as to how and why trust and comfort for AVs are correlated. Unlike existing studies that are summarized in Table I, the key contributions of this article are:
* Providing a comprehensive and informative overview of the technical and non-technical elements that impact human comfort in autonomous driving.
* Exploring the different comfort factors and motion sickness from a technical perspective, examining prominent frameworks and architectures proposed for addressing motion sickness, and analyzing the correlation between trust and comfort for AVs.
* Presenting a clear understanding of the challenges and limitations in assessing and improving the comfort of AV users and drivers, which is a missing piece of the existing literature.
### _Organization of the Paper_
The roadmap and organization of this article are shown in Figure 3. Specifically, the document structure includes a discussion of similar work in Section II. These include similar research areas and different aspects of AVs such as comfort factors, motion sickness, and incorporating trust in the AVs comfort factors. Then, the article discusses some of the most common physiological measurements used for comfort in Section III. In Section IV, the article discusses some current frameworks and representative frameworks that address comfort, motion sickness, and trust in AVs. Then, Section V discusses some control process models for enhancing comfort for AVs. Section VI also discusses some strategies, applications, and implementations done by some researchers to enhance comfort, trust, and other aspects of AVs. Furthermore, in Section VII security risks and health issues, best practices in designing safe AVs, and future research directions are provided. Section VII further identifies important and growing research opportunities for autonomous driving computing technology. Last, the conclusion is in Section VIII.
## II Related Papers
Numerous studies attempt to discuss and cover the safety and human comfort factors for AVs. A study by Noy et al. [27] reviews the promises for improved safety that are espoused by proponents of automated driving and discusses considerations in the safety of automated driving, while the work by Yurtsever et al. [33] discusses challenges of autonomous driving and covers the technical factors of automated driving. They also discuss some issues in high-level system architectures
Fig. 3: A roadmap of the paper.
and methodologies, including planning and human-machine interfaces.
### _AV Technological Innovations and Risks_
Further work by Rojas et al. [34] and Fleetwood [24] focuses on AV risks to public safety. Specifically, Rojas et al. [34] discuss the implications and risks of autonomous driving for public health and review policies and regulatory frameworks related to public safety and transportation, whereas Fleetwood [24] reviews critical public health implications of AVs and analyzes the key ethical issues inherent in AV design.
In light of the technological evolution in the automated driving industry, the work by Crayton et al. [25] reviews relations between technological innovations in AVs and public health risks and also discusses the relevant health implications of AV policies. Similarly, the work by Koopman and Wagner [23] identifies the challenges and safety risks associated with validating dependable AV systems and how they relate to other safety areas. Additionally, another work by Kelly [26] analyzes public health challenges in the USA and reviews critical factors for determining AV benefits and risks to public safety.
Further research attempts such as the work by [35] considered developing a framework to systematically identify possible AV health risks. This work also reviewed AV's impact on public health and safety. Furthermore, the work by Curto et al. [42] and Morando et al. [28] address the safety implications and impacts of AVs. Notably, Curto et al. [42] cover the effects and ramifications of connected and autonomous vehicles on public safety, comfort, and security, while Morando et al. [28] analyzes the driving safety of AVs using a simulator.
Taeihagh and Lim in [31] review governance strategies adopted in response to AV evolution and discussed technological risks of AVs in different areas, including safety and liability. Meanwhile, Cunningham and Regan [21] identify and analyze challenges related to transitioning from manually driven cars to AV and outline requirements to address them.
### _Correlations Between Motion Sickness and Passenger Comfort_
The work of Guo et al. incorporates driving condition prompts to alleviate motion sickness and improve passenger comfort [37]. The authors utilized a driving simulator to assess the influence of a developed Driving Condition Prompt (DCP) system on passengers' motion sickness and comfort. The authors claimed their experiments could improve passenger comfort when utilizing the vehicle's DCP systems and alleviate motion sickness. This system claims to provide promising
enhancement for AV comfort and delivers sufficient guidelines for future HMI implementations for smart cars.
In another work [41], Li and Chen tackle the challenge of motion sickness, with semi- and fully automated vehicles. They develop a vibration cue system that uses tactile stimulation to inform passengers of future AV movement. Furthermore, this work analyzed some AV simulators' results of motion planning that aim to optimize vibration cueing time and patterns. The proposed approach utilized a vibration cue cushion-based system by evaluating the data and reactions of 20 participants. The results of the examined system showed a significant increase in passengers' understanding of the cues. The proposed approach claimed the motion sickness of the participants improved significantly, with results of an average success rate of 89% for generating motion anticipation.
The work of Reuten et al. [38] also focuses on motion sickness. Specifically, the authors investigate whether there is a possible correlation between general unpleasantness and specific symptomatology for motion sickness. The article highlights a research gap in the symptomatology of motion sickness and unpleasantness, mainly that the correlation between them is unclear. After all, motion sickness is shown via feelings of unpleasantness, which range from slight discomfort to absolute dreadfulness. The authors also argue that there have been previous studies that have reported positive correlations between motion sickness symptomatology and measures of unpleasantness. However, the correlation-based investigation could hide potential deviations. The most significant manifesting symptom of this issue is vomiting, which is said to offer relief. For that reason, it will not be fair to rate the bad feeling of individuals in a similar fashion to how close they are to vomiting. The proposed work investigated a linear relationship to consider the bad feeling of passengers as potential symptoms progress. The suggested work divided their experiments into two phases; 1) the development of unpleasantness and other signs during continuous motion stimulation; 2) the development of unpleasantness progression of motion sickness symptoms. The authors studied the sickness rates in two published experimentations that deployed 20 to 30-minute motion sickness stimuli through virtual and natural motion. Each test included numerous sessions on different days, and the rates were obtained at 2 to 5 minutes. The experiment results inferred a positive correlation between motion sickness symptoms and unpleasantness, but still, there is a period of relief at the onset of nausea. However, there is a need for further studies and analysis before the generalization of procedure outcomes that used only two different scales in addition to the listed anomaly.
The authors of [39] investigated the impact of discomfort factors through monitoring the motion sickness and associated factors that were expressed in (_Experiment 1_) and (_Experiment 2_). The listed experiments utilized the MISC test. _Experiment 1_ employed 15 participants and analyzed the discomfort levels and associated MISC test results with each level, all participants were males aged between 20 and 38 years old. _Experiment 2_ has utilized 17 participants whose data were analyzed at 60 minutes or until reaching level 6 of MISC. After those periods, a 10-minute break is given, followed by 30 minutes for either movement measurements or reaching MISC 6 with a 0.3 Hz oscillation frequency. After that, participants conducted a second experiment in a completely dark environment. The MISC at 30-second intervals is used to rate the level of sickness of participants in each session. In _Experiment 1_, the authors found that the discomfort level is increased using the MISC score. The experiments also found that participants with headaches, warmth, severe dizziness, stomach awareness, and sweating symptoms are more associated with a high uncomfortable level. The authors also found that discomfort levels associated with each level of the MISC increase monotonously in _Experiment 2_. While both experiments were in-depth and the results were strong, it should be noted that for both experiments, the participants were mostly male. The sample sizes of those experiments could have included more females. It would have been interesting to see the differences between genders in terms of discomfort.
The authors of [36] conducted a literature review on some comfort factors using the PRISMA framework. The above article studied 41 articles for the period of 2006 to 2021. The literature review employed several keywords to identify articles discussing motion sickness, causation, and mitigation methods in human and auto-pilot driving modes. Several databases were employed in this work, such as IEEE Xplore, EBSCOhost, Scopus, and ACM Digital Library. Moreover, the mentioned article used some keyword combinations, such as "_autonomous vehicles and motion sickness_", "_motion sickness and mitigation and autonomous vehicles_", "_motion sickness and situation awareness and autonomous vehicles_." The human factors articles represented the majority of the literature review findings. Also, the above article included vehicle dynamics and control algorithms research. The article found that most studies utilized virtual reality (VR) headsets to simulate and collect data on AVs and drivers' behaviors in non-driving tasks, which is a limitation. The reviewed studies utilized driving routes that include known causes of motion sickness. Moreover, some studies developed heuristic models involving mathematical prototypes for motion sickness and AV performance that aim to improve the dynamics and offer comfortable rides. The literature review also concluded that real-world driving studies have better reliability than driving simulators and VR headsets.
The authors in [43] take an interesting approach to investigating motion sickness in AVs. The hypothesis is that olfaction (which relates to the sense of smell) may play a role in reducing motion sickness non-invasively. They focused on special scents, such as lavender and ginger. The test track investigated the impacts of those smells on drivers and passengers in chauffeured drives. The results showed that drivers could be at risk due to those smells in the pre-test and post-test.
### _The Correlation Between Comfort and Trust_
The work of [32] has explored the potential connection between comfort and trust in AVs. The article examined specific characteristics of rider experiences in the AV environment that impact trust and comfort. The study involved 55 participants
that rode a shared AV under experimental conditions at a test zone. Each experiment employed 2 participants in four trips, accompanied by a researcher and a safety operative. Two factors were varied for each trip: the direction (forward/backward) and the maximum vehicle speed. Each participant rated the 'comfort' and 'trust' factors after each trip. The investigation found a statistically significant connection between the trust and comfort factors. Moreover, other studies have explored trust in AVs, such as [44, 45], and [46].
The work of [40] investigated the comfort factor, considering other characteristics such as weather conditions, road type, and traffic. That study assumed participants drove under level 3 of autonomy in various situations, including congested traffic, highways, heavy rain, and following defined speeds. The participants rated their perceived comfort level as they were driving an AV. The outcomes of the experimentation showed unfavorable changes in comfort when driving downtown and on the highway under various conditions, such as heavy rain and traffic. At the same time, the experiment showed that decreasing the car's speed improved driving comfort. Later, the experiment studied four profiles: 1) aversion to speed reduction, 2) trust in automation, 3) mistrust in automation, and 4) risk aversion.
Comfort in AVs has also been studied using some more novel techniques, as in the case of [47]. The authors noted that human comfort in AVs is still not discussed often, and it is a crucial aspect of user acceptance of AVs. The argument is that existing studies relating to AV comfort only concentrated on physical factors such as sitting posture, vibration, and noise. Fewer studies investigated some psychological factors, for which this article aims to provide sufficient details and discussions. The authors attempt to define being comfortable as having no discomfort since it has a dominant effect on human comfort. The authors carried out their study using a driving simulator. The work investigated the issue in a simulated AV environment. Furthermore, a collection of AV trips that include motions have been experimented (27 videos total). The collected data included the physiological signals and comfort score. The listed work added a pressing button that enabled participants to express comfortable levels through the press strengths while riding the vehicle. Pushing the button referred to discomfort, while not pushing indicated the comfortable. What is unique about this work is that the authors used wearable sensors to collect physiological signals. The experiment also involved different types of roads (e.g city roads, highway roads, and mountain/rural roads) and different types of driving styles (gentle, aggressive, and normal). The listed videos lasted 3 to 5 minutes, and the participants were asked to express their feelings after watching each video. The experiment participants' feelings were measured and monitored by dividing the test into many sessions. Each session ran for 75 minutes per day for each participant. The authors also used a Support Vector Machine (SVM) for comfort level detection accuracy, and the SVM achieved a 71% accuracy detection rate. The authors' experiment was highly detailed and the results were comprehensive, but the study suffers from sample size issues, a lack of diversity regarding participants, and generalizability regarding the machine method used; Similarly, the work of [48] measured sitting position, heartbeat, discomfort, and skin conductance through deploying sensors.
The work [29] attempts to address comfort in AVs with a different approach. Unlike the work discussed in this section thus far, the authors actually address the ISO-2631 standard, which evaluates the impact of environmental vibrations on comfort, health, and efficiency. The authors proposed and experimented with a modified ISO-2631 standard. This work utilized a body measurement system (BMS) for documenting occupant movements while driving in actual and simulated traffic. The safety head cap used is equipped with a sensor, while another is hooked to the seat rail. The proposed modified model evaluated the head movement during the experiments. The outcomes of this model found that measuring the occupant's head can improve the objectification of driving comfort for an AV.
The work [30, 49] focused on the discomfort of AV passengers. The authors assessed the discomfort level by collecting measurements of passenger feelings while driving in real-time on the road. The main goal of the work is to identify the passenger discomfort levels using multimodal. The above work ran image processing techniques using recorded data of the outward-looking camera, location, and routing to analyze the comfort and discomfort. The discomfort is defined as adverse changes in passengers' emotions. Those negative changes can happen in (1) unwanted body moving, such as a sharp brake, or (2) perceptions of exterior risks, such as a car being too close to a leading car. The main idea of the listed work was to recognize potential driving scenarios that may cause discomfort and classify those scenarios in real-time. Ten sessions of rides with two participants were conducted in the experiment. Two passengers were involved in this experiment, one sitting in the front seat and the other sitting behind the driver. The experimentation was carried out in an urban area of San Francisco. Participants drove two pre-designed paths using two distinct driving manners (1) driving in the AV environment with caution and (2) driving in the non-AV environment without caution. Drivers rated their driving discomfort and comfort in real-time. The driving was in real-time measurement using a scale of 1 to 10. One refers to the lowest comfort level, and 10 indicates the highest level. Also, a logistic regression model was used to analyze the participant discomfort readings. The experimental results showed that the perception of risk is critical to a motorist's pressure and discomfort.
## III Physiological Measures for Comfort in AVs
Using various physiological measures for assessing comfort in AVs is beneficial because they allow for a closer and more objective analysis of the user for the AV. We can compare several physiological measures against other potential factors of AV driving. Numerous studies have undertaken this approach, with a similar consensus being that it may be easier to quantify discomfort, rather than comfort itself. It is not easy to assess the AV driver's comfort precisely because it's subjective. Moreover, measuring the AV drivers' and passengers' comfort and discomfort in real-time is not
pleasant for them and might result in wrong readings and feelings. So, for measuring comfort for AV driving, we need non-intrusive and objective discomfort measurement systems that can be utilized for adapting to the AV's driving style and make sure the driver and passengers are relaxed [50, 51]. Thankfully, recent technological advancements such as wearable sensors are able to give non-intrusive physiological measurements. Some examples of these are in Table II. Note that the Empatica E4 and Microsoft Band 2 have been used in multiple studies pertaining to motion sickness and comfort in AVs.
The physiological signals analyzed in studies regarding AV participants' comfort include heart rate variability (HRV), systolic blood pressure (SBP), heart rate (HR), electrodermal activity (EDA), and EEG. Some studies used the Galvanic Skin Response (GSR) and Skin Conductance Level (SCL) as alternative terms for the EDA [52]. Other physiological measurements such as eye-tracking have also been used, as demonstrated in Table III.
### _Heart Rate Variability and Heart Rate_
Heart Rate Variability (HRV) and Heart Rate (HR) are two types of measures that have been used in studies involving AVs, driving simulations, and on-the-road driving studies. Both HR and HRV have been used to indicate levels of anxiety, cognitive load, workload, and task demands [60]. However, there is a distinction between HR and HRV; HR is measured in BPM (beats per minute) and usually does not require exact times. HR focuses on average BPM while HRV focuses on measuring specific changes in time between successive heartbeats. The time duration between these heartbeats is measured in milliseconds and is referred to as the R-R interval or inter-beat interval (IBI).
Regarding their involvement in AV-related studies, some findings, such as those from [56] and [57], indicated that HR decreased consistently during uncomfortable driving situations. Similarly, HRV was shown to decrease. Interestingly, in [55], the authors actually found that GSR and HR were significantly impacted by driving parameters such as acceleration and jerk, especially in the presence of another vehicle. These aspects led to the HR increasing between 3.6 and 6.5 BPM. In another work [59], the authors developed a real-time webcam-based detection system to measure drivers' HR and monitor their physical activities. The color variations caused by the skin's blood circulation were used to identify the HR.
### _Eda_
The autonomic nervous system (ANS) signals are used to track the electric conductance changes in the skin, which represent the EDA. The sweat gland activity can be increased using the arousal of the sympathetic ANS activity that leads
to high skin conductivity. For that reason, EDA can be utilized to measure the physiological arousal readings of participants in response to the external stimulus [57].
EDA is utilized to monitor and read information for various AV-related studies, similar to HR and HRV. EDA information includes the workload, task difficulty, mental effort, skin conductance level (SCL) with higher arousal, alertness, emotional load, and stress [61]. However, EDA is not a reliable metric due to its sensitivity to various stimuli. The increase in SCL readings would be expected due to higher alertness and arousal prediction indicating discomfort. There have been some notable findings with this physiological measure as demonstrated by [51], where the authors found that EDA values were more sensitive than HRV values in terms of scenarios that induced discomfort in AVs. Other examples come from [44, 46, 52, 62], and [63], which examined the correlation between EDA and trust, where a high level of EDA refers to a low level of trust. However, these claims were disproven in [55], where the authors indicated there were no significant correlations between trust and EDA. A similar case also occurred in Julius Horsting et al. [45], where the authors compared EDA among three groups in regard to trust and EDA for AVs. The goal was to investigate the correlation between AV speeds and participants' trust. The EDA was used as the physiological measurement. The study didn't find a significant correlation between the three groups using the EDA readings.
### _Eeg_
An electroencephalogram (EEG) allows us to measure the electrical activity in the human brain. EEG readings are obtained by applying electrodes to the individual's scalp. EEGs have been used in AV studies for a few aspects, mainly emotional state and stress. One example comes from [58], where the authors incorporate EEG as an indicator of the emotional state of a passenger in an AV simulator. Another example comes from [64], where the authors hypothesize that abnormalities in braking timing for AVs are reflected in the physiological signals. Upon experimentation and evaluation, the authors' results indicated that it was in fact possible to estimate the driver's abnormal braking by using the physiological signals before and after the braking timing. In [53], the authors developed a wearable EEG headband to measure stress-related brain activity during driving. Upon testing the proposed approach on 10 volunteers, results showed that manual driving imposed the highest stress on drivers.
Perhaps the most notable work that uses EEG comes from [65], where the authors focus on designing a complex sensor for AVs by equipping a semi-AV with an intricate sensor structure that can provide centralized data regarding EEGs of both passenger and driver of the AV. This would in turn transform the AV into a mobile sensor connected via the Internet. The authors design the proposed sensor such that the acquisition devices of EEG signals are installed in the headrests and backrests of car seats. Not only does this allow for maximum comfort, but the approach can be used as a way to help the transition between manually driven vehicles and AVs by bringing forth a system that monitors the well-being of the driver in AV and non-AV environments.
### _Other Physiological Measures of Comfort_
Some other physiological measures of comfort for AVs involve eye-tracking. Studies by Drewitz et al. [66], Dillen et al. [55], and Beggiato et al. [56, 57] all incorporated eye-tracking when assessing comfort/discomfort. The work found that the diameter of the pupil depends primarily on ambient light and also on mental states. Niermann also found that the pupil diameter is the best factor for predicting and evaluating AV drivers' discomfort [48]. Additionally, Beggiato et al. found that there is a reduced eye-blinking rate during discomforting AV scenarios. Furthermore, [67] explores the possibility of incorporating face-tracking strategies for driver state monitoring regarding discomfort in AV driving. Another technique, [54], was used to identify potential correlations between increased amplitude in recorded EGGs and the severity of self-reported sickness.
## IV Architectures/Frameworks Addressing Comfort, Motion Sickness and Trust
### _Motion Planning Algorithm_
Many researchers have proposed various frameworks that address comfort and motion sickness for AVs, with one example coming from [68], which proposes a novel concept to remediate car-sickness problems in motion planning instead of motion control. This work uses a frequency-shaping method, commonly used for assessing automobile sickness, to develop a motion planning algorithm. The outcomes of the simulation and experiment showed promising reductions of 21% and 37% in the motion sickness dose value (MSDV). The MSDV-based motion planning algorithm follows the ISO 2631-1:1997 standard. The MSDV is defined as follows:
\[MSDV\ =\ \sqrt{\int_{0}^{T}\left[\widehat{a}\left(t\right)\right]^{2}dt} \tag{1}\]
Here, \(T\) refers to the total exposure time and \(\widehat{a}(t)\) is the stimulus acceleration weighted by the frequency-weighting function defined in ISO 2631-1:1997.
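For illustration only, the following sketch evaluates Eq. (1) from a sampled acceleration trace using a simple Riemann sum; it assumes the signal has already been frequency-weighted according to ISO 2631-1 (the weighting filter itself is omitted), and the sinusoidal trace is a made-up example rather than data from [68].

```python
import numpy as np

def msdv(weighted_accel, dt):
    """Motion Sickness Dose Value: square root of the time-integral of the
    squared frequency-weighted acceleration (units: m/s^1.5)."""
    return float(np.sqrt(np.sum(np.square(weighted_accel)) * dt))

# Example: 60 s of 0.2 Hz lateral sway, 0.5 m/s^2 amplitude, sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 60.0, dt)
a_w = 0.5 * np.sin(2 * np.pi * 0.2 * t)
print(round(msdv(a_w, dt), 3), "m/s^1.5")
```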
Figure 4 [68] shows the motion planning algorithm, formulated as an optimal control problem (OCP). The planned vehicle trajectory is determined by minimizing a cost computed from the route planner's target positions and velocity profiles.
The actual mitigation happens in the motion planning phase, so the motion control stage does not contribute to motion sickness. The need for fewer car parameters is another advantage of
Fig. 4: Illustration of the motion planning architecture [68].
the proposed approach, making it a sufficient and preferable algorithm for motion planning.
The proposed algorithm is designed to work in real-time, which requires initial and final conditions. The experiment set the initial vehicle speed to 1.7 m/s for simulated congested urban driving conditions and 30 km/h for the main road. The experimentation assumed the vehicle was equipped with the optimal control algorithm to keep the car at the planned velocity on the intended path. The experiment included comparing the proposed framework to a polynomial-curve-based planning methodology as a benchmark algorithm. Polynomial-curve-based methods are widely used in AV research; a polynomial function of time is used to calculate the AV path and velocity. The polynomial function parameters are defined in two steps and evaluated using the Root Mean Squared Error (RMSE). The authors claimed that the experimental results showed that the proposed framework performed better than the benchmark algorithm in terms of comfort. They also stated that their method increased sickness alleviation by up to 37%.
Overall, motion planning algorithms can improve comfort by providing a smooth, predictable, and customizable ride experience through the following features (a minimal smoothing sketch follows this list).
* Optimizing the trajectory to ensure that the ride is as smooth as possible and minimizing sudden accelerations, decelerations, and turns. This can alleviate the discomfort and motion sickness.
* Anticipating the behavior of other road users and adjusting the trajectory accordingly. This helps avoid sudden stops and maneuvers, reducing the likelihood of discomfort.
* Providing customizable driving preferences to take into account the driving preferences of individual AV users. For instance, if a user prefers a more conservative driving style, the AV can be programmed to adjust its trajectory accordingly.
* Providing adaptive ride comfort by adjusting to changes in the environment, such as road conditions and traffic patterns. By adjusting the trajectory in real-time, motion planning algorithms can ensure that the ride is as comfortable as possible, even in challenging conditions.
### _XR Mobility Platform - For Passenger Comfort Improvement_
A multi-modal system introduced in [69] could be mounted on the AV to improve passenger comfort. The work argues that once drivers are freed from driving and become passengers, the windshield and windows can serve as dashboards that provide driving information. The goal was to implement technology for improving passenger comfort during automated driving. A concept covered in this framework is comfort intelligence (CI), which focuses on passenger comfort in AVs and on both the positive and negative states of the passenger's feelings inside the AV. The authors claimed their approach improves on other approaches, particularly because it uses immersive displays together with a controllable seat capable of tilting in all directions in an actual AV environment. The proposed architecture is shown in Figure 5.
The proposed framework includes several elements, as shown in Figure 5. A Head Mounted Display (HMD) provides a highly immersive VR presence inside the AV, and a dashboard camera records images. A passenger seat in the back of the minivan, equipped with the motion platform, is used to reproduce motion; numerous actuators connected to the back of that seat control tilting through a computer. The proposed framework also uses open-source software called Autoware.AI, which handles driving control. Autoware.AI obtains behavioral data of the AV from sensors through a controller area network (CAN) bus.
Furthermore, the proposed framework uses two control methods to improve passenger comfort. The first focuses on reducing car sickness by controlling the passenger's body movement directly, thereby reducing sensory conflict for the passenger: both the cylindrical screen and the motion-platform seat are tasked with reducing acceleration stimuli, so the passenger feels weaker acceleration stimuli that resemble the driver's movement. The second control strategy focuses on reducing perceived passenger movement. It uses the immersive HMD attached to the passenger's head to control the passenger's visual and body cues, creating the illusion that the passenger is not tilting, which helps the passenger feel less movement overall.
Overall, the XR Mobility Platform has the potential to enhance passenger comfort by providing a more immersive, personalized, and interactive experience for passengers based on the following aspects.
* VR simulations for different transportation scenarios, allowing passengers to experience their journey before it even begins. This can help to reduce anxiety and uncertainty, which may cause discomfort for some passengers.
* Passengers can create personalized virtual environments that suit their individual needs. For instance, they can adjust the lighting, temperature, and sound to enable a more comfortable environment.
* Real-time information for passengers about their journey, such as the location of their vehicle and any delays or disruptions. This helps reduce stress and improve the overall passenger experience.
* Interactive entertainment options for passengers, such as virtual games and movies, which may help to pass the time and distract them from any discomfort they may be experiencing.
Fig. 5: XR Mobility Platform Infrastructure [69].
### _Simulation Framework for Assessing Ride Comfort_
A simulation framework to evaluate comfort using data collected from vehicle dynamics has been proposed in [70]. The framework also defines a process for producing optimal comfort estimates by incorporating a Monte Carlo simulator and a road surface model. The authors then develop a case study with the proposed simulation framework, consisting of a few real road sites that demonstrate the effectiveness of the approach with actual data and achieve near-optimal comfort results.
The work followed the ISO 2631 application domains to select the frequency ranges that provide accurate comfort readings: health, comfort, and perception are assessed between 0.5 Hz and 80 Hz, while motion sickness is associated with the range from 0.1 Hz to 0.5 Hz. The proposed technique used a frequency-weighting approach based on the ISO 2631 standard to match frequency ranges to specific applications. The frequency weights were selected based on the passenger position and posture (i.e., sitting, standing, or recumbent). The overall conceptual architecture of the proposed method is shown in Figure 6.
The simulation framework is shown in Figure 6. An XML file stores the framework's configuration parameters, which provides the flexibility to specify various test scenarios. The input parameters and their corresponding probability distributions are shown in block 1, and the automation functionality and the communication interface to the simulation environment are presented in block 2. The framework was implemented in a MATLAB/Simulink setting; MATLAB Simulink and CarMaker are both used as simulation software for set-up, input configuration, and data export (block 3). Block 4 contains the data used to quantify comfort, to which several methods are applied, such as bilinear transformation, signal filtering, and the final calculation. Finally, block 5 shows the storage of the comfort prediction. The authors chose a generic framework so that input or output parameters can easily be replaced by different technologies. Moreover, other simulators compatible with MATLAB/Simulink can be used instead of CarMaker without negatively impacting the performance of the proposed approach. The authors reported that their simulation framework revealed a relationship between acceleration signal peaks and comfort.
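For intuition, a simplified stand-in for the comfort quantification in block 4 can be sketched as follows: band-pass filter the acceleration to the application-specific frequency range and compute an RMS value. The Butterworth band-pass below is only a crude substitute for the exact ISO 2631 weighting curves, and all signal names, sampling rates, and test data are hypothetical.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_rms(accel: np.ndarray, fs: float, band=(0.5, 80.0)) -> float:
    """Crude comfort metric: RMS of acceleration restricted to a frequency band.

    accel : raw acceleration samples [m/s^2]
    fs    : sampling frequency [Hz]
    band  : (low, high) cut-offs, e.g. 0.1-0.5 Hz for the motion-sickness range.
    """
    nyq = fs / 2.0
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, accel)          # zero-phase band-pass filtering
    return float(np.sqrt(np.mean(filtered ** 2)))

# Hypothetical vertical acceleration trace: 30 s sampled at 200 Hz.
fs = 200.0
accel = np.random.default_rng(0).normal(0.0, 0.4, int(30 * fs))
print("comfort band RMS :", round(band_rms(accel, fs), 3))
print("sickness band RMS:", round(band_rms(accel, fs, band=(0.1, 0.5)), 3))
```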
### _A Hardware-Software based ELM Approach for Improved Ride Comfort in AVs_
The authors in [71] also focus on ride comfort. More specifically, they propose a hybrid hardware/software approach to improve AV ride quality, with an integrated driving style identification system since driving style is also taken into account. The approach is realized as a single-chip implementation that reads driving comfort levels in real-time and categorizes the comfort-related driving data into several types. The experiment used hierarchical cluster analysis (HCA), an unsupervised learning approach, together with a supervised extreme learning machine (ELM) to improve performance. HCA uses a bottom-up approach that measures the distance between clusters and merges the two closest clusters at each step, using the Euclidean distance in a manner similar to the nearest neighbor algorithm. The proposed framework uses Xilinx's Zynq-7000 programmable system-on-chip for real-time data processing and for determining comfort levels, operating at high frequencies with low latency. The goal of this approach was to alert drivers whenever passenger comfort was compromised.
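The clustering step can be illustrated with a small sketch (not the authors' implementation): hierarchical agglomerative clustering with Euclidean distances groups driving segments into comfort classes that a supervised model such as an ELM could then learn to predict. The feature names, synthetic data, and number of clusters below are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical per-segment features: [max |accel|, max |jerk|, mean speed].
rng = np.random.default_rng(1)
segments = np.vstack([
    rng.normal([0.5, 0.3, 12.0], 0.1, size=(20, 3)),   # smooth driving
    rng.normal([2.5, 1.8, 15.0], 0.3, size=(20, 3)),   # aggressive driving
])

# Bottom-up (agglomerative) clustering using Euclidean distances.
Z = linkage(segments, method="ward", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")        # ask for 2 comfort classes
print("cluster sizes:", np.bincount(labels)[1:])
```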
A key feature of this framework is detecting discontinuous events, such as sudden lane shifts and maneuvers that cause motion sickness and discomfort, especially if they are maintained over an extended time. The proposed work identifies discomfort by recording the highest acceleration and jerk values, which usually occur during aggressive lane changes; discomfort can arise when acceleration and jerk values are high, even for a short period. Figure 7 shows the framework components that handle the data exchange between the AV's CAN bus and the rest of the system. The driving style is determined using the ELM-based model. For the system to work correctly and precisely, specific configurations must be followed carefully to integrate the hardware with the software. The framework uses a dual-core ARM microprocessor with peripherals, so the software runs on the ARM cores while the ELM is deployed on the FPGA fabric.
Experiments with the proposed approach showed a 95% success rate in classifying the comfort of driving data. The authors claimed that the proposed framework is useful, especially given the lack of driver-advising solutions for improving comfort. They also indicated that the approach could be integrated into current production lines with minor modifications to increase the comfort level of AV driving.
### _Lateral Control Strategy based on Head Movement Responses - Mitigating Motion Sickness_
A new control strategy has been proposed to mitigate AV motion sickness, which uses the head roll angle as a control variable to reduce lateral acceleration [72]. The authors first develop a base model for driver and passenger head roll. The driver and passenger models are treated as control
Fig. 6: The proposed simulation framework generates a process for producing optimal comfort estimates by incorporating a Monte Carlo simulator and road surface model [70].
subjects. The strategy employs fuzzy logic control of the head roll angle response to induce a corrective steering wheel angle. The proposed approach was evaluated in simulations, which showed a 3% to 10% reduction in motion sickness.
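A rough sketch of the idea (a proportional rule table standing in for the authors' fuzzy controller) is shown below: the measured head roll angle is mapped through a few coarse rules to a small corrective steering angle that reduces lateral acceleration. All thresholds and gains are hypothetical.

```python
def corrective_steer(head_roll_deg: float) -> float:
    """Map measured head roll [deg] to a corrective steering angle [deg].

    Coarse rule table: larger head roll -> larger counter-correction,
    saturated at +/- 2 degrees of steering correction.
    """
    magnitude = abs(head_roll_deg)
    if magnitude < 2.0:      # negligible roll: no correction
        gain = 0.0
    elif magnitude < 6.0:    # moderate roll: mild correction
        gain = 0.15
    else:                    # large roll: strong correction
        gain = 0.30
    correction = -gain * head_roll_deg
    return max(-2.0, min(2.0, correction))

for roll in (1.0, 4.0, 10.0):
    print(f"head roll {roll:>4.1f} deg -> steer correction {corrective_steer(roll):+.2f} deg")
```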
Overall, lateral control strategies based on head movement responses are an effective way to mitigate motion sickness in autonomous vehicles by reducing the sensory conflict that causes it. By aligning the visual field with the physical motion of the vehicle, passengers can enjoy a more comfortable and enjoyable ride experience. For instance, if the vehicle turns left, the display can be shifted to the right, creating a compensatory visual stimulus that aligns with the physical motion of the vehicle. This mitigates the sensory conflict between the passengers' visual and vestibular systems, which is the primary cause of motion sickness. It is crucial to note, however, that this strategy may not work for all passengers and that individual differences in susceptibility to motion sickness should be taken into account.
### _A Device for Predicting Motion Sickness in AVs_
In [73], the proposed framework is a device that predicts user motion sickness in real-time using motion, head tilt, and ambient conditions. The approach focuses heavily on non-driving-related tasks (NDRTs). The requirements for the device were the following:
* Objective motion dose score
* A subjective sickness score
* Ambient temperature sensitivity
* The position of the occupant's head
* Variability of the sensitivity tuning
* Real-time display and calculation
The introduced model captures the AV acceleration using an Arduino microprocessor and an inertial measurement unit (IMU) equipped with an OLED display. The model was calibrated against single-user trials comprising over 100 samples drawn from more than 1500 experiments to identify a subjective sickness score. Figures detailing both the motion sickness measurement device and the device software schematic are shown in Figure 8.
The proposed device also detects ambient temperature in addition to motion sickness signals in real-time. Furthermore, multiple journey routes were driven during the data collection and testing phases, each lasting more than 30 minutes, for a total of 101 trips. The samples were collected from 14-seat mini-buses and passenger coaches, using data from multiple seating positions. The vehicle's air conditioning system controlled the ambient temperature, and testing was also conducted in partly cloudy weather.
Overall, devices that can predict motion sickness can potentially improve comfort in AVs by enabling advanced warning to users and allowing them to take preventative measures. These devices employ a variety of sensors to track the vehicle's movements and the passenger's physiological responses to them. For instance, they may assess the user's heart rate, skin
Fig. 8: The proposed device predicts user motion sickness in real-time using motion, head tilt, and ambient conditions [73]. Part (A) shows the hardware schematic and Part (B) shows the device software schematic.
Fig. 7: Hardware-Software based ELM approach for Improved Ride Comfort in AVs [71].
conductance, and other indicators of motion sickness. This information can then be analyzed by algorithms for predicting the motion sickness occurrence likelihood. Upon detecting a high risk of motion sickness, the device can alert the user and provide suggestions on how to prevent it. For instance, it may suggest adjusting the temperature, opening a window, changing the seating position, or taking medication. This can help take proactive measures to prevent motion sickness before it becomes severe.
In addition to architectures that address comfort and motion sickness, there are also architectures/frameworks which address the trust aspect of autonomous vehicles. Such examples of these frameworks are discussed in the next few subsections.
### _Sense-Assess-eXplain (SAX)_
Numerous assurance and regulation barriers critical to the large-scale deployment of AVs, together with their potential solutions, were raised and discussed in [74]. The work attempts to address the following objectives with the proposed approach:
1. Reading and analyzing weather conditions on multiple scenarios to test potential sensing modalities' limitations.
2. Continually evaluating and optimizing the perception performance and navigation methods.
3. Demonstrating the system's capability in interpreting the vehicle vision and how that impacted its decision-making.
To satisfy these objectives, the authors develop a model known as sense-assess-eXplain (SAX). The proposed method is used to measure assurance and trust in AV operations. SAX comprises three core research aspects that aim to identify and match various levels of AV sensing and evaluation, which helps understand vehicle observations and decision-making in real-time. The AV can also explain how current and predicted environmental conditions impact its performance. Figure 9 illustrates the overall schematic of SAX.
### _Real-time Trust-Building Schemes for Mitigating Malicious Behaviors_
The importance of trust for AV safety and security was discussed in [75], which also suggested two architectures, centralized and distributed, to reduce dangerous behaviors using AV trust measurements. These architectures divide the roles of the trusted authority and the vehicle nodes in plausibility checking and in sustaining trust, so that trust can be assessed in real-time. The suggested method (shown in Figure 10) also assumes the adoption of a public-key infrastructure to facilitate inter-vehicle communications, in which case a trusted authority is in charge of issuing and revoking the certificates that certify AV identities.
The centralized architecture uses the trusted authority to monitor vehicle functions and interactions via V2X messages within a pre-defined area. Each vehicle collects broadcasted V2X messages and transfers them to the trusted authority for verification. Figure 10 illustrates how the trusted authority communicates, verifies vehicle identity and message integrity, and calculates trust. The main advantage of the centralized architecture for detecting malicious behaviors is that the authorized source has a broader view of each AV's behavior over time and travel distance. Nevertheless, storing trajectories and running plausibility checks for every vehicle increases memory consumption, a significant weakness of this architecture that needs to be considered when deploying such a system in AVs.
In contrast, vehicle trajectory verification is delegated to the AVs in the distributed architecture, as shown in Figure 10 (B). A vehicle verifier estimates a neighboring vehicle's location, decides the AV's trust value, and shares the data with the trusted authority, which then only updates its trust in the related node. The strengths of this architecture are that it detects local malicious behaviors outside the trusted authority's direct coverage and reduces memory consumption and the associated calculations. However, its weakness is the vehicle verifier's inability to detect attackers on a global scale.
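The core plausibility check in both architectures can be sketched as follows: a verifier rejects a reported position if the implied speed between two consecutive V2X messages is physically implausible, and the sender's trust value is adjusted accordingly. The speed bound, trust-update rule, and example reports below are hypothetical, not the rules used in [75].

```python
from math import hypot

MAX_SPEED = 60.0  # m/s, generous physical upper bound for a road vehicle

def plausible(prev_pos, new_pos, dt):
    """Reject position jumps that would require an implausible speed."""
    dist = hypot(new_pos[0] - prev_pos[0], new_pos[1] - prev_pos[1])
    return dist <= MAX_SPEED * dt

def update_trust(trust, ok, reward=0.02, penalty=0.2):
    """Simple bounded trust update: small reward, larger penalty."""
    trust = trust + reward if ok else trust - penalty
    return min(1.0, max(0.0, trust))

trust = 0.8
prev, dt = (0.0, 0.0), 1.0
for reported in [(25.0, 0.0), (55.0, 5.0), (500.0, 5.0)]:   # last jump is implausible
    ok = plausible(prev, reported, dt)
    trust = update_trust(trust, ok)
    prev = reported
    print(f"report {reported}: plausible={ok}, trust={trust:.2f}")
```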
The work built a Python-based discrete-event simulator to evaluate V2V message exchange among several units. Verifiers for the trusted authority were implemented for the centralized architecture, while vehicle-side verifiers were developed for the distributed architecture. Moreover, four existing algorithms were incorporated to test the plausibility-checking module, and the open-source VeReMi dataset was used to assess algorithms for detecting malicious behaviors in vehicular networks. Both the centralized and distributed architectures were deployed to test and evaluate four malicious strategies under low- and medium-density conditions. Despite the modest performance of the evaluated methods, they represent a good starting point for further investigation into building user trust in AVs.
### _Safe and Online MPC_
An AV navigation system based on an MPC controller design was presented in [76], comprising longitudinal and lateral
Fig. 9: The developed sense-assess-eXplain (SAX) model [74]. The proposed model is used to measure assurance and trust in AV operations.
controls. Besides the safety monitoring module, it also has a cost function designed to reflect human driving habits, and it estimates the time remaining before unacceptable situations are reached, taking tracking performance and comfort constraints into account under different tracking conditions. The strength of this kind of controller lies in its ability to guarantee a smooth trajectory and acceptable performance. Moreover, a simulator was implemented to test the proposed MPC controller under typical urban settings.
The initial Guidance, Navigation, and Control (GNC) architecture components helped develop the suggested MPC controller to facilitate navigation and wayfinding in urban areas. The MPC controller handles both the lateral and longitudinal axes. The lateral control signal manages the car's steering by passing signals to a low-level controller. Calculating the risks associated with the current tracking condition is another task of the MPC controller and serves as a safety monitoring algorithm; this risk is calculated using the predicted lateral error and the MPC accuracy. The proposed architecture is shown in Figure 11.
The suggested model was developed and examined using the 4D-Virtualize simulation engine and the Robot Operating System (ROS) framework to provide semi-realistic vehicle experiments. The simulation used IPCar, an urban electric vehicle with a maximum speed of 3 meters per second and a turning radius of 3 meters. The presented technique is robust to noise and is expected to address ground vehicles' trajectory-tracking issues. Additionally, the proposed work considers GPS data and verifies its accuracy. Furthermore, the approach also achieves lower complexity; a strong benefit of this is that the AV can plan its destination and route more easily, so that passenger discomfort is potentially minimized.
Fig. 11: The proposed AV navigation system that uses MPC controller, containing longitudinal and lateral controls [76].
Fig. 10: The two architectures developed in [75]—(1) centralized and (2) distributed—to reduce dangerous behaviors using AVs trust measurements.
## V Intelligent Control Methods and Optimizations
When designing control process models and robust machine-processing implementations for enhancing comfort and trust in AVs, intelligent control methods are increasingly favored over conventional control methods. These include model-based control, neural network control, and deep learning methods [77, 78, 79]. Such algorithms are gradually being adopted in mainstream automotive control. Figure 12 presents an overall taxonomy of the application of these intelligent control methods.
### _Model-based Control_
Model-based control is also known as model predictive control (MPC), moving horizon control (MHC), or receding horizon control (RHC). It is a computerized control scheme based on model prediction that has been extensively researched and applied over the last few years [80][81]. The basic idea can be summarized as follows: at each sampling instant, a finite-horizon open-loop optimization problem is solved using the currently available measurements, and only the first element of the resulting control sequence is applied to the controlled object; at the next sampling instant the procedure is repeated with updated measurements. The key distinction between model predictive control and conventional control methods is that the control sequence is obtained by numerically solving an open-loop optimization problem online. A predictive control algorithm consists primarily of four parts: the prediction model, feedback correction, rolling optimization, and the reference trajectory.
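The rolling-optimization idea can be made concrete with a toy sketch: at every sampling instant a finite-horizon cost is minimized over candidate control moves, only the first control is applied, and the procedure repeats with the new measurement. The scalar dynamics, horizon, candidate grid, and cost weights below are hypothetical simplifications, not a production MPC.

```python
import numpy as np

def mpc_step(x, horizon=5, candidates=np.linspace(-1.0, 1.0, 21)):
    """Brute-force finite-horizon optimization for a scalar system x+ = x + u."""
    best_u, best_cost = 0.0, float("inf")
    for u0 in candidates:                     # enumerate only the first move
        x_pred, cost = x, 0.0
        for k in range(horizon):
            u = u0 if k == 0 else -0.5 * x_pred   # crude tail policy
            x_pred = x_pred + u
            cost += x_pred ** 2 + 0.1 * u ** 2    # tracking + effort penalty
        if cost < best_cost:
            best_u, best_cost = u0, cost
    return best_u                              # apply only the first control

x = 3.0
for t in range(6):                             # receding-horizon loop
    u = mpc_step(x)
    x = x + u                                  # plant update with fresh "measurement"
    print(f"t={t}: u={u:+.2f}, x={x:+.2f}")
```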
### _Neural Network and Imitation Learning Control_
Neural network control studies and exploits structural mechanisms of the human brain, together with human knowledge and experience, to control a system [83]. The control problem can be viewed as a pattern recognition problem for the neural network, in which the recognized pattern (a deviation signal) is transformed into a control signal. The most remarkable feature of neural control is its ability to learn and adapt, achieved by continuously correcting the weights of the connections between neurons and storing knowledge discretely in the connection network. It works well for controlling nonlinear systems and systems that are difficult to model. In general, neural networks are used in control systems in two ways: one is modeling, exploiting the fact that neural networks can approximate any continuous function and have effective learning algorithms, with feedforward and recurrent networks as the two main types; the other is using the network directly as a controller.
In a typical imitation-learning setup, the vehicle's longitudinal dynamics are handled via a pre-tuned PID controller, whereas the lateral dynamics are handled using an end-to-end trained neural network. The main advantages of deploying imitation learning-based control in autonomous vehicles are its straightforward implementation and its quick, inexpensive training, in comparison with reinforcement learning. Furthermore, the neural network lets the vehicle learn driving behavior from a provided dataset. Although this can be accomplished without complex formulations governing the system's input-output relation, biased and/or improperly labeled data may lead to incorrect learning by the neural network. Therefore, advancements are still needed in order to attain reliable performance and accuracy when adopting such imitation learning-based control for autonomous driving [82].
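A minimal illustration of imitation-learning-based lateral control: fit a regressor from recorded state features to the human driver's steering command and use it as the controller. The features, synthetic demonstration data, and linear least-squares model below are hypothetical stand-ins for the end-to-end network described above.

```python
import numpy as np

# Hypothetical demonstrations: [lateral offset, heading error] -> steering angle.
rng = np.random.default_rng(2)
states = rng.normal(0.0, 1.0, size=(500, 2))
expert_steer = -0.8 * states[:, 0] - 0.5 * states[:, 1] + rng.normal(0, 0.02, 500)

# Least-squares "imitation": learn weights that mimic the expert demonstrations.
X = np.hstack([states, np.ones((500, 1))])          # add bias term
w, *_ = np.linalg.lstsq(X, expert_steer, rcond=None)

def learned_controller(lateral_offset, heading_error):
    """Steering command predicted by the imitation model."""
    return w[0] * lateral_offset + w[1] * heading_error + w[2]

print("steering for offset=0.5 m, heading error=0.1 rad:",
      round(learned_controller(0.5, 0.1), 3))
```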
### _Deep Learning Control_
Deep learning control builds on research into artificial neural networks (ANNs), extended to deep neural networks (DNNs) [84]. The strength of this family of models lies in automatically extracting features from high-dimensional data, which avoids complex manual feature selection. The most important benefit of using deep learning in this field is that feature extraction is handled within the model (without separate data-preprocessing techniques to detect features) while fitting the model. For control systems with high-dimensional data, introducing deep learning is therefore highly significant. Recently, several studies have applied deep learning approaches to control systems, including deep belief networks (DBN), stacked autoencoders (SAE), CNNs [85], and recurrent neural networks (RNNs).
### _Reinforcement Learning Control_
Unlike imitation learning-based control (neural network methods), where the learner trained to mimic the trainer's behavior may never outperform the trainer, the reinforcement learning method learns through trial and error. The "agent" trained using reinforcement learning discovers the surrounding environment and independently explores new control strategies. Specifically, the policy \(\pi\) to be trained for optimal control can be a neural network acting as a function approximator that maps observations (inputs) \(o\) to actions (outputs) \(a\), \(\pi(a|o)\). The agent is rewarded for valid actions and penalized for improper ones, and thus the goal of the agent, given a specific policy, is to maximize the expected reward, \(\operatorname*{argmax}_{\pi}E[R\,|\,\pi(a|o)]\). However, defining the expected reward remains critical: rewarding the agent too loosely for good actions may let it wander randomly while still accumulating reward, i.e., there is a risk of never accomplishing the training objective [82].
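In a comfort-oriented setting, the expected-reward objective above is typically shaped so that the agent is penalized for motions that cause discomfort. The sketch below shows one hypothetical reward function combining progress with jerk and lateral-acceleration penalties; the weights and example values are illustrative only.

```python
def comfort_reward(progress_m, jerk, lat_accel, collision=False,
                   w_jerk=0.5, w_lat=0.3):
    """Reward = progress bonus minus comfort penalties; a crash ends the episode."""
    if collision:
        return -100.0
    return progress_m - w_jerk * abs(jerk) - w_lat * abs(lat_accel)

# An RL agent maximizing this reward learns to make progress while keeping
# jerk and lateral acceleration (the main discomfort drivers) small.
print(comfort_reward(progress_m=2.0, jerk=0.4, lat_accel=1.2))   # smooth step
print(comfort_reward(progress_m=2.0, jerk=3.0, lat_accel=4.0))   # harsh step
```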
Last, a summary of comparisons between the traditional and intelligent control methods is provided in Table IV. The potential integration of AI into control models (fuzzy logic, evolutionary algorithms, and ML) with a summary of optimization applications for driving comfort enhancement is further depicted in Figure 12.
### _Trajectory Optimization_
#### V-E1 Linear Quadratic Path Planning
In traditional driving, if drivers feel uncomfortable, they can easily adjust the driving style to improve comfort and thus the driving experience. After vehicles gained autonomous functions, this role passed to the car's operating system. A system for determining driving comfort is therefore required in order to achieve a comfortable ride in autonomous vehicles.
Considering the importance of ride comfort measurement during the development and tuning of autonomous vehicles, Svensson's team designed a tuning strategy based on frequency analysis [86]. The system evaluates ride quality by simulating the lateral control system. They then updated the controller configuration and proposed a frequency-weighted linear quadratic controller in order to further improve the ride quality delivered by the path planning algorithm. The frequency content of the acceleration data is assessed over the relevant frequency range using the power spectral density (PSD).
The result is a lateral control system that adapts lateral path planning for self-driving vehicles while taking ride quality into consideration [86]. It works by providing a way to determine the right set of tuning parameters by examining the trade-off between responsiveness and ride quality; these trade-offs can be assessed with graphical tools [86].
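The PSD-based evaluation can be sketched as follows: estimate the power spectral density of the lateral acceleration with Welch's method and compare the power falling into the motion-sickness band against the general comfort band. The signal, sampling rate, and band limits below are hypothetical examples, not the tuning procedure of [86].

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # Hz, hypothetical sampling rate
t = np.arange(0, 120, 1 / fs)
lat_accel = (0.5 * np.sin(2 * np.pi * 0.3 * t)
             + 0.1 * np.random.default_rng(3).normal(size=t.size))

f, psd = welch(lat_accel, fs=fs, nperseg=4096)   # PSD estimate via Welch's method

def band_power(f, psd, lo, hi):
    """Integrate the PSD over a frequency band [lo, hi)."""
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])

print("motion-sickness band (0.1-0.5 Hz):", round(band_power(f, psd, 0.1, 0.5), 4))
print("comfort band (0.5-80 Hz):         ", round(band_power(f, psd, 0.5, 80.0), 4))
```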
#### V-E2 Path Planning and Optimization
Trajectory optimization is a widely used approach in path planning [87, 88]; it casts route planning as an optimization problem that accounts for the desired vehicle performance and the relevant constraints [89]. Its main advantage is that it can flexibly and easily encode various requirements on the desired path of the AV. At the same time, advances in solvers for online optimization (CVX [90] and Ipopt) have helped to provide real-time, fast, and reliable path generation. Owing to the typically complex and unpredictable driving conditions (which cannot be completely modeled), the model predictive control (MPC) approach is also used for online AV trajectory planning [91, 92, 93]. MPC solves a sequence of finite-horizon optimization problems and can react to changes in the environment during planning.
Liu's team proposed an MPC-based path planning system that handles multiple driving modes in a unified way [94]. Rather than using preset rules to decide on particular maneuvers, they solve an online nonlinear MPC problem, and the paths generated by the MPC automatically produce maneuvers including lane changes, lane keeping, ramp merges, and intersections. A relaxed convex formulation is chosen for encoding lane change and lane-keeping behaviors in the MPC objective function [94]. In the same framework, vehicles are modeled as polygons in order to ensure driving safety, and a constraint formulation enforces collision avoidance between the ego vehicle and surrounding cars. They also define a lane-related region that allows nearby vehicles in the same lane to be handled easily and efficiently [94].
### _Comfort Optimization Techniques in AVs_
#### V-F1 Overall Comfort Optimization
Comfort is principally affected by two factors: _jerk_, which refers to the rate of change of acceleration, and _bending rate_, which refers to the rate of change of curvature. Higher comfort corresponds to a smaller rate of change of acceleration and a smaller rate of change of curvature. The corresponding cost function can be set as "acceleration + acceleration change rate + curvature + curvature change rate". Therefore, the optimization of comfort mainly concerns lateral optimization, longitudinal optimization, and reference-line issues.
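A hypothetical discretized version of this cost function is sketched below: for a candidate trajectory sampled at fixed time steps, the cost sums squared acceleration, jerk, curvature, and curvature rate, so smoother and straighter candidates score lower. The weights and example trajectories are illustrative only.

```python
import numpy as np

def comfort_cost(accel, curvature, dt, w=(1.0, 1.0, 1.0, 1.0)):
    """Cost = accel + accel change rate (jerk) + curvature + curvature change rate."""
    jerk = np.diff(accel) / dt
    curv_rate = np.diff(curvature) / dt
    return (w[0] * np.sum(accel ** 2) + w[1] * np.sum(jerk ** 2)
            + w[2] * np.sum(curvature ** 2) + w[3] * np.sum(curv_rate ** 2))

dt = 0.1
smooth = comfort_cost(np.full(50, 0.2), np.full(50, 0.01), dt)
harsh = comfort_cost(np.r_[np.zeros(25), np.full(25, 2.0)], np.full(50, 0.05), dt)
print(f"smooth trajectory cost: {smooth:.2f}, harsh trajectory cost: {harsh:.2f}")
```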
#### V-F2 Response Time Optimization
The reaction time is the overall system response time, which involves the time for cloud and network communication, data analysis and system measurement, and the braking time of the car. The system's overall response time cannot exceed about 10 milliseconds if the distance traveled at 100 km/h before braking begins is not to exceed 30 cm; for comparison, the individual reaction time of the fastest F1 drivers is around 100 milliseconds.
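The 30 cm bound follows from simple kinematics:

\[100\ \text{km/h}\approx 27.8\ \text{m/s},\qquad 27.8\ \text{m/s}\times 0.010\ \text{s}\approx 0.28\ \text{m}\approx 30\ \text{cm},\]

so any end-to-end latency much above 10 ms already adds more than 30 cm of travel before braking even begins.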
Self-driving vehicles carry thousands of sensors that keep increasing in speed and intelligence. Such devices produce data volumes far beyond those of any other IoT device. If data needs to be interpreted and analyzed faster than current 4G systems allow, self-driving vehicle systems require exceptional data processing speed to approach human reaction times. 5G will speed up the network and reduce latency for self-driving vehicles in order to achieve faster response times.
Leading semiconductor manufacturers worldwide, including Intel and Qualcomm, are pursuing ASIC designs to tackle the difficult problem of integrating 5G bandwidth with wireless radio and antenna architectures [95]. Briefly, these firms create chips that turn autonomous cars into mobile data centers, allowing cars to make complex decisions in real-time.
In terms of computing power, with the continuous improvement of cloud computing and on-board computing capability, the on-board computer system can process more complex tasks in a shorter time and realize real-time perception of road conditions, intelligent decision-making, and control for automated driving. With the emergence and wide application of algorithms such as ML and DL, AI entered a new stage of rapid development after 2013 and has been applied to the field of automated driving; the error rate of vehicle recognition using deep learning methods is now lower than that of traditional methods.
Low end-to-end latency depends even more on improvements in sensors, processors, algorithms, and network transmission. After the deployment of ultra-low-latency 5G networks, more advanced communication technologies are expected to bring further innovations to car companies and make AVs more secure.
The real innovation of the new system lies in its comprehensive signal processing capability. All processing can be performed directly within the module, and the system selectively filters the data from the radar system and the stereo camera so that processing can be performed immediately or deliberately delayed to subsequent stages. Irrelevant information is identified and not forwarded. Sensor fusion is applied to the data from the camera and radar. Then,
the neural network evaluates the data and determines the actual traffic impact based on machine learning techniques. Therefore, the system does not need to send raw status information to the vehicle, only a response command. This frees the vehicle's bus to process important signals, such as detecting a child suddenly running onto the road.
#### V-F3 Motion Sickness Optimization
Researchers have made many efforts to mitigate motion sickness in autonomous driving, summarized as follows.
_Anti-motion-sickness glasses frame._ French automaker Citroen has designed an anti-motion-sickness eyeglass frame called SEETROEN [96]. In this solution, blue liquid is contained inside four rings located at the edge of the wearer's visual field. When the vehicle turns, brakes, or bumps, the blue liquid flows accordingly, so that the visual information obtained by the eyes stays consistent with the movement signals sensed by the inner ear, keeping the brain's perception consistent. SEETROEN reportedly alleviates motion sickness within 10 minutes, with a success rate as high as 95%.
_Tricking the brain._ One strategy is to trick passengers' brains into thinking they see the movement of the car, even when they do not. The University of Michigan Transportation Research Institute has patented a system that simulates the visual cues people get when viewing from outside a car [97]. This technology mimics the vestibular input corresponding to the vehicle's speed, acceleration, yaw, and lateral motion; its goal is to align what the eyes see with what the vestibular system senses. This allows passengers to focus on other things, which has been a big selling point for autonomous vehicle technology since its inception, without the effects of motion sickness.
_Smooth driving and motion sickness warning system._ Google's Waymo has filed a patent describing a system that mitigates motion sickness by identifying the route that minimizes it [98]. Waymo's patented solution calculates a motion sickness index for each route, selects the route with the smallest index, and communicates motion sickness information to passengers through the display panel.
The system suggests less complicated roads to prevent motion sickness, or sends passengers an alert on bumpy roads prone to motion sickness and recommends that passengers not look down or read books during the journey [98]. There are also special seats in the car for people who are prone to motion sickness, and the algorithm evaluates the congestion of alternative paths in advance to determine the motion sickness index. Waymo's technology studies the car's back-and-forth acceleration and sway to calculate whether motion sickness may occur on a particular route. When passengers indicate that they feel unwell, the system can also change its driving style or route. In addition, Waymo's driverless vehicles can adjust the driving style based on passengers' motion sickness reactions, for example by increasing the distance to the car in front.
_Sensory stimulation system._ Uber plans to use vibrating and moving seats, airflow targeted at the face or other parts of the body, and light bars and screens to prevent passengers from feeling uncomfortable. It applied for a patent on a "sensory stimulation system" that essentially trains passengers' vestibular systems so that what they see is related to what they feel [99]. One proposed mechanism includes a light bar that surrounds a portion of the interior and provides visual cues to alert passengers to acceleration, braking, or changes in direction, possibly using color and brightness levels to indicate different actions.
Uber not only attempts to unify visual and vestibular information, but also provides a set of immersive experiences including vision, hearing, seating, and airflow; for instance, passengers might hear a rushing breeze during an abrupt emergency stop [99]. Visually, Uber's car has an AR (augmented reality, overlaying virtual information on the real environment) system that can present information captured outside the car inside the cabin, compensating for the perceptual mismatch caused by a visually static interior and synchronizing somatosensory information. A matching tactile system on the seat provides corresponding airflow when turning sharply and rapidly.
_VR games._ Holoride is trying to eliminate motion sickness with VR glasses [100]. The movement in the VR game is kept consistent with the vehicle trajectory, so that the movement passengers see matches the movement their bodies feel.
_Vestibular electrical stimulation._ Most of the above methods change the visual information so that it is consistent with the sensed motion; vMocion instead attempts to alter the vestibular information to align it with the visual information [101]. vMocion's 3v\({}^{TM}\) platform is a device based on galvanic vestibular stimulation (GVS), which can potentially be used to convey motion feedback to users [102]. By placing four electrodes in the ears, neck, and forehead, electrical stimulation is used to align the movement detected by the nervous system with the visual content. Figure 13 shows vMocion's 3v\({}^{TM}\) platform.
Samsung has already created an Entrim 4D headset built on a GVS device. The team sees Entrim 4D as a potential answer to the motion sickness problem [103]; however, the technology is still under development.
#### V-F4 Lane Change Maneuver and Optimization
The required speed can be regulated according to the route radius so that a smooth transition is achieved [104]. This reduces the disparity between vision and the human vestibular system during travel, making the journey smoother. Smooth driving techniques, which carefully manage acceleration, braking, and steering, improve ride satisfaction. Quick lane changes or curves generate lateral acceleration, so developing
Fig. 13: vMocion’s 3v\({}^{TM}\) Platform based on GVS to convert motion feedback to users using 4 electrodes in the ears, neck, and forehead [102].
new sufficient lane change maneuver algorithms helps improve AV comfort.
There are different techniques for planning the routes of driverless vehicles, with Bezier curves being the most widely used way to create paths for lane change maneuvers (LCM). A Bezier-curve-based route planning algorithm was proposed in [105] to create a reference path that meets roadside restrictions. A Bezier-curve-based collision-prevention technique was introduced in [106] that considers the path lengths between automobiles. Chen et al. considered the nonholonomic constraints and yaw rate parameters of vehicle steering and developed a Bezier-based lane change algorithm [107]. In another work, Kawabata et al. proposed a constrained optimization method for generating 3-dimensional Bezier reference curves [108]. Moreover, an efficient Bezier-curve-based LCM algorithm was developed in [109] to estimate the potential lateral acceleration in the local path planning phase so that the lateral motion of the vehicle is transferred smoothly during lane change maneuvers. The algorithm bounds the curvature of the Bezier curve using the lateral acceleration limit acceptable to the occupant in order to restrict lateral motion. If a lane change maneuver is required, space availability must first be confirmed in order to generate a local path; the planning block then produces alternative paths and tests the generated local paths to determine whether they fulfill the acceleration limit.
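A cubic Bezier reference path for a lane change can be sketched in a few lines: the four control points fix the start and end lanes and the tangent directions, and the curve's curvature (and hence the lateral acceleration at a given speed) can be kept within a comfort limit by stretching the maneuver length. The control-point values, speed, and maneuver geometry below are hypothetical, not taken from the cited works.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=100):
    """Evaluate a cubic Bezier curve at n parameter values in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Lane change of 3.5 m lateral offset over 40 m, tangent to both lanes.
p0, p3 = np.array([0.0, 0.0]), np.array([40.0, 3.5])
p1, p2 = np.array([15.0, 0.0]), np.array([25.0, 3.5])
path = cubic_bezier(p0, p1, p2, p3)

# Rough curvature estimate along the path (finite differences).
dx, dy = np.gradient(path[:, 0]), np.gradient(path[:, 1])
ddx, ddy = np.gradient(dx), np.gradient(dy)
kappa = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
print(f"max curvature: {kappa.max():.4f} 1/m "
      f"(max lateral accel at 15 m/s: {15**2 * kappa.max():.2f} m/s^2)")
```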
## VI Applications of Driving Control Methods and Comfort Optimization Techniques for AVs
This section covers some applications of driving control methods and comfort optimization strategies for AVs in various scenarios. Interestingly, a couple of these works incorporate path-planning mechanisms, as seen in [110] and [111].
### _Speed Control Framework for AVs - Rough Pavements_
A deep reinforcement learning-based speed control strategy has been proposed for AVs on rigid pavements [112]. The developed approach aims to estimate riding comfort and provide energy-efficient speed control, and it uses open-access real-world data for training the model. The model achieved energy and computation efficiency gains of 8%, 24%, and 94%, for which the authors claimed it would be a good choice for controlling AV speed on suburban roads.
### _SAINT-ACC_
An adaptive deep reinforcement learning-based cruise control system called Safety Aware Intelligent ACC (SAINT-ACC) has been proposed to optimize traffic efficiency, safety, and comfort [113]. SAINT-ACC performs driving safety assessments by modifying safety model parameters in response to various traffic conditions. The proposed approach uses a dual reinforcement learning design: one reinforcement learning component finds a safety threshold based on traffic data obtained from the environment and interacts with a second reinforcement learning-based model that identifies shortfalls in traffic flow, safety, and comfort. The two agents are trained on numerous highway scenarios, including on-ramp and off-ramp highways, and a simulation platform was used to perform more than 12,000 training runs. Results from these tests confirmed SAINT-ACC's ability to enhance driving safety, traffic flow, and comfort.
### _Real-time NMPC Path Tracker_
A nonlinear model-predictive path tracker for AVs has been proposed in [114], implemented in C++ with several high-performance libraries to guarantee real-time performance. The work claims high flexibility, allowing multiple objective terms to be incorporated into the cost function. The proposed model aims to provide precise tracking while maintaining a comfortable ride. After testing the approach in numerous simulation studies featuring intricate tracks with many sharp turns, the authors confirmed that the proposed path tracker can outperform a regular PID-based controller for an AV.
### _TDR-OBCA_
An optimized collision avoidance model for comfortable and efficient AV driving has been presented in [115]. TDR-OBCA is designed to provide a smooth, collision-free trajectory for AVs in accessible environments such as parking lots or off-road areas. The TDR-OBCA algorithm has been tested and validated in simulations and real-world road experiments, with valet parking and various obstacle scenarios illustrated in the simulations. The approach is claimed to drastically reduce the loss by 97% while improving the computation time by 45% and the overall driving comfort scores, and to lower the steering control output by more than 13.5%, for which the authors suggested using it in real-world AV applications.
### _Model Predictive Control_
The purpose of [116] was to propose a model predictive control scheme that enhances passenger comfort in AVs. According to the authors, the proposed strategy differs from previous attempts in its wide applicability to various driving scenarios and in incorporating an offline method for weighting the parameters of the model predictive control scheme. The authors tested the approach with experiments in Matlab/Simulink using highway and urban scenarios. The MSDV and acceleration coefficient metrics were used to evaluate the model's compliance with the ISO 2631 standard requirements. The system also considers the ride comfort and drive quality described by the ISO 5805 standard, in which ride comfort is defined as a subjective state of well-being and absence of mechanical disturbance with respect to the ride environment.
## VII Implications, Insights, and Future Works
### _Comfort and Health Risks_
AVs may worsen humans' already disadvantaged position in the battle against pollution and sedentary behavior, affecting drivers' health and increasing health risks. Automated driving reduces the stress and apparent danger of car travel, which leads people to rely more on vehicles; even for short distances, people may choose to drive rather than walk.
Sedentary behavior and frequent exposure to pollution are two of the most common risk factors for heart disease. Statistics show that about one-quarter of deaths in the United States each year are due to heart disease. People who drive frequently often live sedentary lives, and their incidence of obesity, diabetes, and heart disease increases accordingly.
In the 20th century, more and more people owned their own cars, which allowed them to work in the city without having to live in the city, which means that the core of the city is surrounded by a growing number of suburban residents. Therefore, for many people, walking is no longer an option. The growth of the city also presents significant problems for public transport: the need for buses or subways with longer kilometers is growing as the number of residents grows. This situation requires everyone to make a choice, either buy a car or bow to reality and choose a longer commute.
A future full of self-driving cars means less walking time and more urban expansion for people, and if the development and progress of electric vehicles cannot keep up with the requirements of the times, then pollution is bound to increase sharply. After all, when people drive self-driving cars and can do anything they want to do on the commute, the driver doesn't care whether the time on the road is 20 minutes or an hour.
### _Benefits and Costs of Driving Comfort Adoptions_
#### VII-B1 Practices in Designing Safety and Comfort of AVs
The vehicle must not rely entirely on GPS. It needs to cross-reference a pre-drawn map with real-time sensor data and, after comparing the two, determine its location. A notable example of such a design practice is Google's Waymo. The Waymo team aims to develop a comprehensive high-resolution 3D map before driving, which shows road lines, curbs and sidewalks, streets, crosswalks, stop signs, and other road features [117]. Furthermore, the self-driving car's sensors and software must continuously scan the surrounding environment within 300 meters and read traffic control information, such as traffic light colors and temporary stop signs at railway crossings. To enhance driving comfort, the deployed algorithms forecast the likely movement of complex targets on the basis of their present speed and direction, and can even distinguish between vehicles, cyclists, and pedestrians. The software uses information obtained from other road users to anticipate a variety of possible driving routes and to predict how the road surface will affect other targets around the vehicle.
#### VII-B2 Benefits
The generalized travel cost (GTC) is expected to decrease due to the reduced time, energy, and money required to travel. For comfortable travel, the safety of the trip and the reliability of travel time come first, followed by the ability to spend the time on activities other than driving, such as work, meetings, eating, or sleeping [118].
So far, the effect of self-driving cars on travelers' valuation of time appears to be limited. A stated preference survey conducted by Yap, Correia, and van Arem in the Netherlands found that using fully automated (level 5) vehicles as an egress mode for train travel carries a higher value of time than using manually driven vehicles [119]. The researchers suggest that riding in an automated vehicle gives passengers a feeling of unease, that travelers lack real-life experience with self-driving vehicles, and that egress trips are typically short, so travelers cannot fully benefit from self-driving vehicles during their journey. Cyganski's team reported that, in a questionnaire survey in Germany, only a very small number of respondents referred to the ability to work in autonomous vehicles (level 3 and higher) as an advantage [120]. In comparison, most respondents expected to continue, while traveling in an autonomous vehicle, the activities they already practice when driving a regular car, such as looking out of the window, chatting, or enjoying music. Respondents who already work during their current commute preferred to work in autonomous vehicles as well.
Free-flow capacity, the distribution of vehicles along the roads, and the consistency of traffic can all benefit from automated vehicles [118]. Higher free-flow levels and reduced traffic levels improve road capacity and thus decrease congestion time, which in turn reduces the pollutant emissions produced by queued vehicles. Congestion delays are minimized as a result of improved road availability, and the time spent searching for parking is reduced (or even eliminated) because the vehicle can park itself. The usability of cars is thereby expanded, and driving can take less time.
In their study, Hoogendoorn et al. concluded that self-driving could decrease traffic congestion by 50 percent, and that this reduction could be improved further with the help of technology in cars and buses [121]. Michael's team showed in simulations of an automated highway network that efficiency increases as the degree of coordination between cars grows [122].
Autonomous vehicles are also expected to facilitate the growth of ride-sharing and car-sharing systems, since they greatly lower the operating costs of ride-sharing and car rental services. Autonomous cars can fulfill individual travel needs easily, at lower cost and with greater flexibility [118]. City residents may then agree to limit or even give up car ownership. Using simulations of a hypothetical city and the real city of Zurich in Switzerland, Zhang et al. showed that each shared self-driving car can replace roughly ten to fourteen conventional cars [123]. According to Chen, Kockelman, and Hanna, the substitution rates for private vehicles are between 3.7 and 6.8 for shared autonomous electric vehicles, depending on the charging infrastructure assumed [124].
Regional and local area preparation can be influenced by autonomous vehicles. People can choose to use self-driving shared vehicles instead of driving themselves, so parking
areas can be reduced, and the increased space can be used to build more suitable public facilities for life. Reducing parking demand can also bring about changes in land use and architectural design.
In the context of the urban economy, Zakharenko studied the effect of fully autonomous vehicles [125]. The researcher developed a semi-circular, single-center, two-dimensional city model calibrated to representative cities in the United States. He assumes that workers can choose between not commuting, traditional commuting, and automated commuting, taking into account the variability of each choice, parking fees, and fixed charges. The results suggest that about 97 percent of daily car parking needs would move beyond the city center to "dedicated parking areas." In addition, economic activity in the city center would benefit, raising land prices there by 34%. On the other hand, the decreased travel costs arising from the use of autonomous vehicles would lead to urban expansion, with land rent declining by about 40% in areas beyond the city center.
Due to the lower risk of accidents and the higher fuel efficiency of vehicles, the efficiency of traffic flow improves and the monetary cost of travel can also be reduced. Short following distances allow air resistance to be minimized, and therefore fuel consumption and costs to be decreased. The adoption of automated driving will also provide incentives for electrification and pollution reduction.
Wu et al. presented a fuel-efficiency optimization program that suggests appropriate acceleration/deceleration values to the driver or the automatic device, considering vehicle velocity, acceleration, and size, as well as the current speed limit, road lighting, and road signs [126]. Their driving-simulator studies in urban environments with signalized intersections demonstrated a 31% drop in drivers' fuel consumption.
#### VII-B3 Costs
It is expected that autonomous driving incurs additional costs where the AV cost (e.g., vehicle equipment development, equipment maintenance, etc.) is regarded as the main constraint to achieving the maximum level of benefits. The overall AV cost comprises manufacturing cost, operating cost, and social cost.
_Manufacturing cost._
* R&D
Needless to say, R&D is the core of autonomous driving. Because it has been difficult to reach the desired output in recent years, the unit R&D cost is high. If a self-driving R&D team has 500 people and the average annual salary is 200,000 US dollars, the total salary the company pays its employees every year is 100 million dollars. Even if this company has an impressive production capacity and can build 5,000 self-driving cars every year, the unit R&D cost is still 20,000 dollars per vehicle.
* Sensor
Sensors are the most important part of the cost of autonomous vehicles [127]. Taking GM Cruise as an example, in November 2017 a Cruise vehicle was equipped with 14 cameras, 5 LiDARs, and 21 radars. Moreover, sensor technology is updated very quickly, and a LiDAR introduced at the beginning of the year may become obsolete by the end of the year. So even if the cost of the sensors themselves keeps decreasing, the cost of replacement will not.
* Assembly and mould
The sensors of self-driving cars are numerous and complicated, and the assembly process will not be simpler than for traditional cars. A set of molds for a traditional car can cost up to 200 million US dollars, and up to 600 million US dollars for a new design. For self-driving cars, if the body is a new design and the replacement cycle is fast, the mold cost will remain high.
* Materials
Most self-driving car R&D companies have begun to use energy-saving and environmentally friendly materials to manufacture car bodies, ensuring that the body is light and energy-saving [127]. Waymo uses plastic for the windshield and fiber material for the seats. There are also cushioning materials in the front and rear to reduce the force of a collision. Some of these materials are cheaper than the metal plates of traditional cars.
* Interior
Although the interior of a self-driving car will add many experience-oriented products, such as in-car screens, in-car sensors, desks, and stereos, many traditional car functions will be transferred to a mobile phone app, so the overall cost may still be lower than that of ordinary cars.
_Operating cost._ Maintenance should be the largest cost in autonomous driving operations. Self-driving car repairers are not as easy to recruit as regular car repairers: not only must they be rigorously trained, but software engineers are also needed to assist them in maintenance [128]. Various parts and batteries also need to be replaced from time to time. However, because each driving action of an autonomous vehicle can be precisely computed, it will not step on the accelerator or brake as abruptly as a human driver and can avoid demanding road sections (such as climbs), so the maintenance of parts is easier.
_Social cost._ Big data is a subject for many sectors, and the automotive industry is no exception. With the introduction of self-driving vehicles, the automotive industry will become a collector of data and a developer of large databases. An autonomous vehicle can generate 100 GB of data per second [129]. Self-driving cars depend on huge quantities of data transmitted by the various sensors installed on them. They must know exactly where they are, where they are going, and what they encounter while driving [130]; the environment in which the vehicle is situated and the people who use it must always be recognized by the vehicle. The smarter a car is, the more useful it is for consumers, but cars need more and more personal data to become smart and to incorporate the results into the services they provide.
Users are paying more attention to the threat AVs pose to their privacy, and self-driving vehicles are likely to compound this problem. Consumers and regulatory authorities are aware that vast amounts of personal data and information about users and their environment will be produced, used, and stored in these vehicles. Automobiles are becoming real data factories, which means that anonymity is compromised in a world of mobile surveillance. From the perspective of protecting privacy, the data that self-driving cars can collect and how these data are used have received increasing attention.
Travel patterns are among the most revealing results. Automobiles provide both historical and continuous real-time location data. Such details would not only let a third party know where the user has been and where the user is now, but also where the user is going. Advertisers could map the shops a user frequents to recognize purchasing habits. Insurance companies may infer particular personality types [131] by tracking users' daily activities or eating habits, such as frequent visits to fast food restaurants or gyms.
The constant information gathering by cars raises the worry that personal information will be used for targeted advertising, which drivers may find annoying or even dangerous. Self-driving sensors that continuously monitor the surroundings and collect images could also result in privacy breaches. Users of autonomous vehicles are likely to worry about their personal details being gathered or used without their consent, without knowing the purpose, or without understanding the effect on themselves.
Self-driving vehicles gather and reveal information about where, when, and how people travel from one location to another, essentially automatically. Users are concerned about the purposes for which such personal information will be used. How is this information gathered? What is it used for? How long is it retained? Who is able to access it? Some analysts suggest that user adoption of autonomous vehicles will be influenced significantly by safety issues. Anderson et al. [132] believe that autonomous driving will lead to additional costs such as increased travel, loss of parking revenue, and loss of jobs. Litman [133] has summarized the costs of autonomous driving in terms of vehicles and infrastructure, security threats, privacy, increased traffic, social equity, and reduced employment. A summary of AV passengers' benefits and costs is provided in Table V.
### _Future Research Directions_
Because the technology is not yet mature and the mass production of autonomous vehicles will take time, autonomous driving still has a long way to go before L4 and L5 systems are deployed. Autonomous driving must confront various problems if it is to be widely used. In addition to advanced sensor technology, more sensor fusion technology is needed, and many technical obstacles remain to be solved. Secondly, regulation is indispensable: the environments in which autonomous vehicles may be used should be clearly specified. In addition, consumers' familiarity with and acceptance of autonomous driving solutions cannot be ignored. Autonomous driving must allow people to safely hand control of driving to the set of highly intelligent processors, sensors, and software systems behind it. The arrival of driverless driving may take some time, but one day it will become common. When it does, it will change not just the type of cars people buy and the way they buy them, but also how they own cars. This section discusses potential directions for future research work focusing on the comfort aspects of AV users. A taxonomy of these directions is depicted in Figure 14 and discussed in detail next.
#### VII-C1 Motion Sickness Reduction
Although motion sickness is an unavoidable problem for autonomous driving (about 40% of people experience serious motion sickness symptoms with severe discomfort), comfortable autonomous driving can support our various activities in the car and reduce the severity of motion sickness. A future improvement direction is to address the vehicle stimuli sensed by the vestibular system, which directly contribute to motion sickness. These stimuli are mainly generated when the vehicle undergoes linear acceleration or deceleration and during acceleration and braking.
The overall comfort of autonomous driving can be enhanced through control of jerk and acceleration. This can also improve the AV user's experience and enable sustainable driverless services. Therefore, more research is still needed to investigate comfort levels in AVs from the perspective of different jerk and acceleration factors in order to provide proper jerk and acceleration criteria for planning and self-driving strategies. Furthermore, it is highly recommended that future planning and controller solutions for AVs be verified and validated in real-time software environments in accordance with different acceleration criteria.
Finally, proposed solutions for motion sickness optimization must be validated through a comparative analysis of autonomous driving and human driving in terms of the driving comfort experienced by both drivers and passengers.
#### VII-C2 Enhancement of Non-driving Tasks Experience
As AV comfort concerns both drivers and passengers, non-driving related tasks must be considered and tested in accordance with safety and comfort design practices. These non-driving activities or tasks can be challenging when designing a universal prototype to improve users' situation awareness. Hence, further automotive research may focus on specific non-driving tasks inside the AV to maximize the overall comfort experience of drivers and passengers.
#### VII-C3 Modalities of Peripheral Information Display
To improve the situation awareness of AV users, further research may consider testing a broad range of modalities to convey peripheral information in the vehicle. To date, research works have focused on a single modality of peripheral information display, such as the haptic cue. Such a single modality can improve situation awareness, but it cannot mitigate motion sickness. Therefore, a future research direction could be the exploration and integration of various modalities to enhance the overall awareness of AV users.
#### VII-C4 Minimizing Involuntary Movement
To enhance AV users' experience and comfort, active mechanisms can be considered to reduce AV users' involuntary movement. This can help avoid or reduce the prolonged postural instability that increases motion sickness over time. Specifically, reducing or eliminating the involuntary movement of AV users by countering the effects of centrifugal force when turning corners is a potential research direction for improving the overall AV comfort level. This direction will require collaboration between human factors experts and automotive designers and engineers.
#### VII-C5 Visual Capabilities and Road Conditions Adaptation
Improving the visual capability of the car is another challenge faced by current driverless cars. A driverless car needs to recognize other vehicles in its surroundings and must also be able to detect the surrounding lanes, pedestrians, traffic signs, etc. in various challenging environments (such as rain and snow). In addition, challenging road conditions are another problem that driverless cars need to consider and solve.
#### VII-C6 AI and Driving Comfort
The simulation of human intelligence via AI is strongly influencing the growing number of AVs and the sophisticated services they offer. Over 90% of all car accidents are typically due to driver error (e.g., poor reaction to a hazard or a violation of traffic laws, poor anticipation, etc.). Hence, further integration of AI techniques in future autonomous driving technology can help reverse the stagnation in the decline of road accident rates, while increasing users' comfort and trust in AVs.
Furthermore, governments and legislative authorities may consider making ADAS technology mandatory in all future AVs. ADAS technology will include an AI module to monitor, analyze, and recognize possibly unsafe and uncomfortable driving behaviors. Specifically, if comfort-disturbing behaviors such as distraction, drowsiness, or lane departure are detected, the system can issue a real-time alert warning the driver about the potential danger.
AI-enabled driving control methods can improve the overall driving experience. For instance, navigation systems leverage a broad range of information (e.g., road traffic conditions, real-time weather conditions, etc.) to optimally guide the driver to their destination with the least hassle. A similar component can be integrated as a sub-module of the overall intelligent speed assistance system to support fuel efficiency by providing suggestions about the ideal moments to brake/accelerate to the driver [66, 134].
#### VII-C7 Manufacturer Considerations
When designing future AVs, the key distinct areas of safety and comfort that must be addressed are comportment safety, functional safety, collision tolerance, operational safety, and non-collision safety. These critical factors are summarized as follows.
* Comportment safety concerns driving decisions and actions on the road. AVs need to abide by traffic rules and must navigate users so as to ensure driving safety in various expected and unexpected scenarios. This problem can be formulated based on precise safety and comfort criteria, where a multi-pronged test over various methods and driving tests must be used.
* Functional safety helps to guarantee safe operation of AVs in the case of a device malfunction, with a backup framework for adapting to the vehicle's unexpected circumstances.
Fig. 14: Taxonomy of potential future research directions in AV comfort.
* Collision tolerance (crash safety) refers to the vehicle's ability to shield its occupants through different measures, such as structural design, seat belts, and airbags, and to minimize injuries.
* Operational safety refers to the interaction between the vehicle and its passengers (i.e., the HMI). When operational safety is ensured, the AV can give consumers the comfortable experience it is designed to provide. Manufacturers may enhance users' comfort by integrating an interface that allows passengers to specify their destination, direct the vehicle to pull over, and contact the car's manufacturer to obtain technical assistance.
* Non-collision safety concerns people who may interact with the vehicle. AV manufacturers must aim to provide physical protection against hazards caused by electronic systems or sensors.
#### VII-C8 Efficient Comfort Level Analysis
AV companies must consider employing different methods for analyzing safety and comfort risk levels. Risk analysis methods will help define the necessary specifications for the design of the autonomous driving system, its subsystems, and components. The designated safety and comfort standards need to follow a set of methods for device and system review, different procedures for systems engineering, and the specifications of international and national legislation.
## VIII Conclusions
With the promotion of artificial intelligence (AI) technology, self-driving cars are developing at a fast pace, mainly reflected in environmental awareness, decision-making and planning, control and execution, high-precision maps, and real-time positioning technologies. This article has examined the comfort of autonomous driving from various perspectives. Comfort and its associated factors are critical indicators for self-driving cars. Although comfort levels vary from one user to another, they are affected by many factors, such as the reaction time of autonomous vehicles (AVs), which impacts the comfort of passengers both physiologically and psychologically. Furthermore, in the planning and design of self-driving cars, user comfort is often ignored in favor of vehicle performance. This is typically due to the lack of comprehensive investigations of the control mechanisms of self-driving service operations that relate to user comfort factors. This paper discussed the various factors impacting the AV user's comfort based on a broad range of technical aspects, biomedical sensors, and psychological analysis. This paper can serve as a baseline for researchers to further address the issues impacting the driver's reaction time in an incident, the comfort level of the AV, movement dysfunction, smoothness and jerkiness, and whether autonomous driving is relaxed and safe.
|
2301.02590
|
Studies on the Transcorrelated Method
|
We investigate the possibility of using a transcorrelated Hamiltonian to
describe electron correlation. A method to obtain
transcorrelated wavefunctions was developed based on the mathematical framework
of the bi-variational principle. This involves the construction of an effective
transcorrelated Hamiltonian matrix which can be solved in a self-consistent
manner. This was optimised using a method we call Second Order Moment (SOM)
minimisation to give highly accurate energies for some closed-shell atoms and
helium-like ions. The effect of certain correlator terms on the description of
electron-electron and electron-nuclear cusps were also examined graphically and
some transcorrelated wavefunctions were compared against near-exact Hylleraas
wavefunctions.
|
Nicholas Lee, Alex J. W. Thom
|
2023-01-06T16:38:18Z
|
http://arxiv.org/abs/2301.02590v1
|
# Studies on the Transcorrelated Method
###### Abstract
We investigate the possibility of using a transcorrelated Hamiltonian to describe electron correlation. A method to obtain transcorrelated wavefunctions was developed based on the mathematical framework of the bi-variational principle. This involves the construction of an effective transcorrelated Hamiltonian matrix which can be solved in a self-consistent manner. This was optimised using a method we call Second Order Moment (SOM) minimisation to give highly accurate energies for some closed-shell atoms and helium-like ions. The effect of certain correlator terms on the description of electron-electron and electron-nuclear cusps were also examined graphically and some transcorrelated wavefunctions were compared against near-exact Hylleraas wavefunctions.
## 1 Introduction
Capturing the effects of electron correlation is a central problem in electronic structure theory. A possible approach to tackle the problem involves the use of a similarity transformed Hamiltonian \(\bar{H}=e^{-\tau}\hat{H}e^{\tau}\), where the \(\tau\) is a polynomial dependent on electronic positions and incorporates explicitly the correlation between various electron pairs. The use of such a Hamiltonian is known as the _transcorrelated method_. The inclusion of \(r_{12}\) terms to describe electronic correlation can be dated back to Hylleraas,[1] and later popularised by Kutzelnigg,[2] forming the basis of R12/F12 methodology[3, 4] today. Boys and Handy employed the transcorrelated formalism to introduce correlation terms to get near-exact energies for various atoms and molecules.[5, 6, 7, 8, 9, 10, 11] This was done using a custom basis set and an optimised Jastrow factor. Hirschfelder,[12] Bartlett and Nooijen,[13] and Klopper and coworkers[14, 15] have also considered the use of such similarity-transformed Hamiltonians to eliminate the singularities associated with the \(\frac{1}{r_{ij}}\) term in the many-electron Hamiltonian.
There are two principal difficulties working with the transcorrelated Hamiltonian. Firstly, the transcorrelated Hamiltonian will involve three-electron operators which can be expensive computationally. Secondly, the transcorrelated Hamiltonian is non-Hermitian. Unlike with Hermitian operators, the variational principle does not hold for non-Hermitian operators. This implies that the expectation value of the transcorrelated Hamiltonian is not bounded from below and hence unphysical energies may be obtained. Furthermore, non-Hermitian matrices are difficult to work with due to the possibility for numerical instability in matrix computations.
However, with the introduction of Variational Monte Carlo (VMC), the calculation became more computationally feasible and promising results were shown for a variety of atoms, molecules[15, 16, 17, 18] and periodic systems.[19, 20] To tackle the issue of non-Hermiticity, Luo proposed to replace the non-Hermitian ansatz with a Hermitian approximation so that a variational approach becomes viable.[21, 22]
The transcorrelated Hamiltonian has also more recently been used with a variety of quantum chemistry methods with promising results. Alavi and co-workers have applied Full Configuration Interaction Quantum Monte Carlo (FCIQMC) to the transcorrelated ansatz for a variety of systems successfully.[23, 24, 25, 26] Their numerical results show that the unboundedness of the non-Hermitian operator did not pose serious difficulties and that highly accurate results, even up to spectroscopic accuracies,[27] could be realised. More recently, they have used the transcorrelated Hamiltonian with Coupled Cluster[28] and the energies found demonstrated better basis set convergence. On the other hand, Reiher and co-workers have developed a transcorrelated analogue of Density Matrix Renormalisation Group (DMRG) and applied it to the Fermi-Hubbard model and some homonuclear diatomics [29, 30]. They have similarly found that the use of transcorrelation accelerates the convergence to the complete basis set limit.
Building on the recent successes of the transcorrelated method, this work shall attempt a deterministic approach and examine its viability as a computational tool to capture the effects of electron correlation. While Boys and Handy's formulation was deterministic, the computational resources of their time would have restricted the scope of their work. We therefore think that it would be useful to explore the possibilities of a deterministic approach with the computational resources today. Unlike Boys and Handy, however, we will solve the non-Hermitian Hamiltonian matrix self-consistently. Instead of parametrising the correlator using the contraction equations, we propose to find it via what we shall call second-order-moment (SOM) minimisation, an analogue of variance minimisation that we have adapted for this work.
## 2 Theoretical Background
### Transcorrelated Hamiltonian
The transcorrelated Hamiltonian, \(\bar{H}=e^{-\tau}\hat{H}e^{\tau}\), can be expanded using the Baker-Campbell-Hausdorff (BCH) expansion:
\[\bar{H}=\hat{H}+[\hat{H},\tau]+\frac{1}{2!}[[\hat{H},\tau],\tau]+\frac{1}{3!}[[[\hat{H},\tau],\tau],\tau]+... \tag{1}\]
By using a correlator of the form \(\tau=\sum_{i<j}u(\mathbf{r}_{i},\mathbf{r}_{j})\), the third- and higher-order commutator terms vanish. The commutators can be further expanded to give:
\[\bar{H}=\hat{H}-\sum_{i}^{N_{e}}\Big{(}\frac{1}{2}\nabla_{i}^{2}\tau+\mathbf{ \nabla}_{i}\tau\cdot\mathbf{\nabla}_{i}+\frac{1}{2}\Big{(}\mathbf{\nabla}_{i}\tau \Big{)}^{2}\Big{)} \tag{2}\]
where \(N_{e}\) is the total number of electrons in the system studied.
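The truncation of the BCH series for a purely multiplicative, coordinate-dependent correlator can also be checked symbolically. The following minimal sketch (our illustration, not part of the original derivation) verifies the one-dimensional, one-particle analogue of equation 2 with SymPy:

```python
# Check e^{-tau} H e^{tau} = H - (1/2 tau'' + tau' d/dx + 1/2 (tau')^2)
# for H = -1/2 d^2/dx^2 + V(x) and a coordinate-only correlator tau(x).
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)        # arbitrary trial function
V = sp.Function('V')(x)        # arbitrary local potential
tau = sp.Function('tau')(x)    # arbitrary correlator

H = lambda g: -sp.Rational(1, 2) * sp.diff(g, x, 2) + V * g

# Left-hand side: e^{-tau} H (e^{tau} f)
lhs = sp.exp(-tau) * H(sp.exp(tau) * f)

# Right-hand side of the truncated expansion acting on f
rhs = (H(f)
       - sp.Rational(1, 2) * sp.diff(tau, x, 2) * f
       - sp.diff(tau, x) * sp.diff(f, x)
       - sp.Rational(1, 2) * sp.diff(tau, x)**2 * f)

print(sp.simplify(lhs - rhs))  # prints 0, confirming the truncation
```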
Substituting \(\tau=\sum_{i<j}u(\mathbf{r}_{i},\mathbf{r}_{j})\), the transcorrelated Hamiltonian takes the form
\[\bar{H}=\hat{H}-\sum_{i<j}^{N_{e}}\hat{K}(\mathbf{r}_{i},\mathbf{r}_{j})-\sum_{i<j<k} ^{N_{e}}\hat{L}(\mathbf{r}_{i},\mathbf{r}_{j},\mathbf{r}_{k}) \tag{3}\]
where
\[\hat{K}(\mathbf{r}_{i},\mathbf{r}_{j})=\frac{1}{2}\Big{(}\nabla_{i}^{2}u(\mathbf{r}_{i},\mathbf{r}_{j})+\nabla_{j}^{2}u(\mathbf{r}_{i},\mathbf{r}_{j})+(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{j}))^{2}+(\mathbf{\nabla}_{j}u(\mathbf{r}_{i},\mathbf{r}_{j}))^{2}\Big{)}+\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{j})\cdot\mathbf{\nabla}_{i}+\mathbf{\nabla}_{j}u(\mathbf{r}_{i},\mathbf{r}_{j})\cdot\mathbf{\nabla}_{j} \tag{4}\] \[\hat{L}(\mathbf{r}_{i},\mathbf{r}_{j},\mathbf{r}_{k})=\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{j})\cdot\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{k})+\mathbf{\nabla}_{j}u(\mathbf{r}_{j},\mathbf{r}_{k})\cdot\mathbf{\nabla}_{j}u(\mathbf{r}_{j},\mathbf{r}_{i})+\mathbf{\nabla}_{k}u(\mathbf{r}_{k},\mathbf{r}_{i})\cdot\mathbf{\nabla}_{k}u(\mathbf{r}_{k},\mathbf{r}_{j}) \tag{5}\]
The presence of the terms \(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{j})\cdot\mathbf{\nabla}_{i}+\mathbf{\nabla}_{j}u( \mathbf{r}_{i},\mathbf{r}_{j})\cdot\mathbf{\nabla}_{j}\) in \(\hat{K}(\mathbf{r}_{i},\mathbf{r}_{j})\) makes the transcorrelated Hamiltonian non-self-adjoint. This has been derived previously in several papers [6, 14, 15, 24], but is recapitulated here for completeness.
### One-electron effective Hamiltonian
The transcorrelated Hamiltonian is non-self-adjoint and will therefore have left- and right-eigenvectors. The left- and right- eigenvectors \(\Psi=\hat{\mathcal{A}}(\psi_{1}\psi_{2}\cdots\psi_{n})\) and \(\Phi=\hat{\mathcal{A}}(\phi_{1}\phi_{2}\cdots\phi_{n})\) are Slater Determinants formed from molecular orbitals \(\{\psi_{1}\psi_{2}\cdots\psi_{n}\}\) and \(\{\phi_{1}\phi_{2}\cdots\phi_{n}\}\), respectively. A bi-orthonormal set of molecular orbitals, that is, \(\langle\psi_{i}|\phi_{j}\rangle=\delta_{ij}\) can always be found via Lowdin pairing [31] and hence we assume bi-orthonormality throughout this paper.
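One common way to realise such a pairing numerically is through a singular value decomposition of the occupied-orbital overlap matrix. The sketch below is our own illustration; the function name and the symmetric \(\Sigma^{-1/2}\) splitting are choices made here rather than prescriptions from reference [31].

```python
# Bi-orthonormalise occupied left (Psi) and right (Phi) orbitals via an SVD of
# their overlap, so that <psi_i|phi_j> = delta_ij afterwards.
import numpy as np

def biorthonormalise(D_occ, C_occ, S):
    """D_occ, C_occ: AO coefficients of occupied left/right orbitals
    (n_ao x n_occ); S: AO overlap matrix (n_ao x n_ao)."""
    M = D_occ.conj().T @ S @ C_occ              # occupied-occupied overlap
    U, sigma, Wh = np.linalg.svd(M)             # pairing transformation
    inv_sqrt = np.diag(sigma ** -0.5)
    D_new = D_occ @ U @ inv_sqrt                # rotated left orbitals
    C_new = C_occ @ Wh.conj().T @ inv_sqrt      # rotated right orbitals
    return D_new, C_new                         # now D_new^dagger S C_new = I

# quick self-test with random data
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10)); S = A @ A.T + 10 * np.eye(10)
D, C = rng.standard_normal((10, 3)), rng.standard_normal((10, 3))
Dn, Cn = biorthonormalise(D, C, S)
assert np.allclose(Dn.conj().T @ S @ Cn, np.eye(3))
```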
The Slater Determinants satisfy the following equations:
\[\bar{H}\Phi=E\Phi\qquad\qquad\Psi\bar{H}=E\Psi \tag{6}\]
with \(E\) denoting the energy associated with the eigenvectors. The transcorrelated energy can be identified as:
\[E =\frac{\langle\Psi|\bar{H}|\Phi\rangle}{\langle\Psi|\Phi\rangle} \tag{7}\] \[=\langle\Psi|\bar{H}|\Phi\rangle \tag{8}\]
The denominator is unity due to the bi-orthonormality condition. The effective transcorrelated Hamiltonian can be found by taking the functional variation of the transcorrelated energy. The functional variation can be found by using the method of Lagrange multipliers. Forming the Lagrangian \(\mathcal{L}\) under the constraint of bi-orthonormal orbitals:
\[\mathcal{L}=E-\sum_{i=1}\sum_{j=1}\epsilon_{ij}(\langle\psi_{i}|\phi_{j}\rangle -\delta_{ij}) \tag{9}\]
We seek the solution to \(\delta\mathcal{L}\) to find a stationary point of the energy with respect to the constraint. We prove in Appendix A that using the condition \(\delta\mathcal{L}=0\), we get the equation:
\[\Big{[}\,\hat{h}+\sum_{j=1}\bar{G}_{j}+\frac{1}{2}\sum_{j=1}\sum_ {k=1}\bar{L}_{jk}\,\Big{]}\phi_{i}(\mathbf{r}_{1}) =\sum_{j=1}\epsilon_{ij}\phi_{i}(\mathbf{r}_{1})\] \[\bar{H}_{\text{eff}}(\mathbf{r}_{1})\phi_{i}(\mathbf{r}_{1}) =\sum_{j=1}\epsilon_{ij}\phi_{i}(\mathbf{r}_{1}) \tag{10}\]
such that
\[\bar{G}_{j}=\sum_{j=1}\int d\mathbf{r}_{2}\psi^{*}_{j}(\mathbf{r}_{2})(r_{12}^{-1}-\hat{K}( \mathbf{r}_{1},\mathbf{r}_{2}))\mathcal{P}_{2}\phi_{j}(\mathbf{r}_{2}) \tag{11}\]
\[\bar{L}_{jk}=\int\int d\mathbf{r}_{2}d\mathbf{r}_{3}\psi^{*}_{j}(\mathbf{r}_{2})\psi^{*}_{k} (\mathbf{r}_{3})\hat{L}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3})\mathcal{P}_{3}\phi_{j}( \mathbf{r}_{2})\phi_{k}(\mathbf{r}_{3}) \tag{12}\]
We also introduce a notation \(\mathcal{P}_{N}=\sum_{\hat{P}\in S_{N}}(-1)^{p}\hat{P}\). \(S_{N}\) is the symmetric group of degree \(N\). For example,
\[\mathcal{P}_{3}\ket{ijk}=\ket{ijk}-\ket{ikj}+\ket{jki}-\ket{jik}+\ket{kij}- \ket{kji} \tag{13}\]
\(\mathcal{P}_{3}\) therefore gives all the possible permutations (with the correct parity) of the three-particle ket \(\ket{ijk}\). \(\bar{H}_{\text{eff}}\) is the effective transcorrelated Hamiltonian. It is a functional of the bi-orthogonal set of molecular orbitals \(\{\psi_{i}\}\) and \(\{\phi_{i}\}\) and can thus be solved for iteratively through a self-consistent approach.
### Jastrow factor
The following form of the correlator was first introduced by Boys and Handy [8]:
\[u(\mathbf{r}_{i},\mathbf{r}_{j})=\sum_{mno}c_{mno}\Delta_{mn}(\bar{r}_{iA}^{m}\bar{r} _{jA}^{n}+\bar{r}_{iA}^{n}\bar{r}_{jA}^{m})\bar{r}_{ij}^{o} \tag{14}\]
where
\[\bar{r}=\frac{ar}{1+br} \tag{15}\]
and
\[\Delta_{mn}=\begin{cases}\frac{1}{2}&m=n\\ 1&\text{otherwise}\end{cases} \tag{16}\]
Scaling of the inter-particle distances as \(\bar{r}\) is known as the Pade form [32]. Scaled distances are commonly used for Jastrow factors such that at large inter-particle distances, the terms in the Jastrow factors will approach a constant. There have been a number of scaling functions employed in literature [33]. Following the work of Schmidt and Moskowitz [34, 35], we will use the Pade form with \(a=b=1\) due to the simplicity of implementation.
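For concreteness, a direct evaluation of this correlator for a single electron pair and a single nucleus might look as follows. This is a minimal sketch of equations 14-16; the function names and the example parameter list are ours and are purely illustrative.

```python
# Evaluate the Boys-Handy correlator u(r_i, r_j) with Pade-scaled distances.
import numpy as np

def pade(r, a=1.0, b=1.0):
    """Scaled distance r_bar = a r / (1 + b r) of equation 15."""
    return a * r / (1.0 + b * r)

def u_pair(ri, rj, rA, params):
    """u(r_i, r_j) for a nucleus at rA; params is a list of (m, n, o, c_mno)."""
    riA, rjA = pade(np.linalg.norm(ri - rA)), pade(np.linalg.norm(rj - rA))
    rij = pade(np.linalg.norm(ri - rj))
    val = 0.0
    for m, n, o, c in params:
        delta = 0.5 if m == n else 1.0          # Delta_mn of equation 16
        val += c * delta * (riA**m * rjA**n + riA**n * rjA**m) * rij**o
    return val

# e.g. only the c_001 = 1/2 term that enforces the electron-electron cusp
params = [(0, 0, 1, 0.5)]
ri, rj, rA = np.array([0.5, 0, 0]), np.array([-0.5, 0, 0]), np.zeros(3)
print(u_pair(ri, rj, rA, params))
```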
### Optimising correlator parameters
The correlators are a function of the set of parameters \(\{c_{mno}\}\). However, determination of these parameters is a non-trivial task. While the parameter \(c_{001}\) in equation 14 has been determined previously to be \(\frac{1}{2}\) to satisfy the cusp condition, the other parameters \(c_{mno}\) have yet to be determined. The unbounded nature of the non-self-adjoint transcorrelated Hamiltonian operator prevents the use of energy minimisation for this. However, minimisation of the local energy variance can be performed to find these parameters. Schmidt and Moskowitz applied Variational
analyse some limiting cases to gain a better understanding of the quantity \(U^{\text{SOM}}\).
In the limit of \(\bar{H}^{\dagger}=\bar{H}\) (self-adjointness), the left- and right-eigenfunctions become identical, \(\Psi=\Phi\). \(U^{\text{SOM}}\) therefore reduces to the standard definition of the variance:
\[\begin{split} U^{\text{SOM}}&=\langle\Psi|\bar{H} \bar{H}|\Phi\rangle-\langle\bar{H}\rangle^{2}\\ &=\langle\Phi|\bar{H}^{\dagger}\bar{H}|\Phi\rangle-\langle\bar{H} \rangle^{2}\\ &=\langle\Psi|\bar{H}^{2}|\Phi\rangle-\langle\Phi|\bar{H}|\Phi \rangle^{2}\end{split} \tag{19}\]
In the limit that \(\Phi\) is an exact eigenfunction, that is, \(\bar{H}\Phi=\lambda\Phi\) where \(\lambda\in\mathbb{C}\),
\[\begin{split}\langle\bar{H}\rangle&=\langle\Psi| \bar{H}|\Phi\rangle\\ &=\lambda\,\langle\Psi|\Phi\rangle\\ &=\lambda\end{split} \tag{20}\]
where we have made use of the bi-orthogonality of the left- and right-eigenfunctions. Then,
\[\begin{split} U^{\text{SOM}}&=\langle\Psi|(\bar{H} -\lambda)(\bar{H}-\lambda)|\Phi\rangle\\ &=(\lambda-\lambda)\,\langle\Psi|(\bar{H}-\lambda)|\Phi\rangle\\ &=0\end{split} \tag{21}\]
A similar proof holds for the limit that \(\Psi\) is an exact eigenfunction. Since the exact eigenfunction has to satisfy the condition that \(U^{\text{SOM}}=0\), the parameters should be varied such that the quantity \(U^{\text{SOM}}\) becomes as close to zero as possible. The evaluation of \(U^{\text{SOM}}\) would similarly require the evaluation of six-electron terms (three from each \(\bar{H}\)) and it is therefore as computationally challenging as Handy's transcorrelated variance. To side-step this difficulty, the resolution of identity is employed. In the bi-orthogonal basis, the identity is given by:
\[\mathbb{I}=\sum_{k}|\Phi_{k}\rangle\,\langle\Psi_{k}| \tag{22}\]
where \(k\) runs through all of the possible Slater Determinants for the given basis.
\[\begin{split} U^{\text{SOM}}&=\langle\Psi|(\bar{H} -\langle\bar{H}\rangle)(\bar{H}-\langle\bar{H}\rangle)|\Phi\rangle\\ &=\sum_{k}\,\langle\Psi|(\bar{H}-\langle\bar{H}\rangle)|\Phi_{k} \rangle\,\langle\Psi_{k}|(\bar{H}-\langle\bar{H}\rangle)|\Phi\rangle\\ &=(\langle\Psi|\bar{H}|\Phi\rangle-\langle\bar{H}\rangle)( \langle\Psi|\bar{H}|\Phi\rangle\\ &\quad-\langle\bar{H}\rangle)+\sum_{k\neq 0}\,\langle\Psi|\bar{H}| \Phi_{k}\rangle\,\langle\Psi_{k}|\bar{H}|\Phi\rangle\\ &=\sum_{k\neq 0}\,\langle\Psi|\bar{H}|\Phi_{k}\rangle\,\langle\Psi_{k}| \bar{H}|\Phi\rangle\\ &\approx\sum_{\sigma}\sum_{ia}\,\langle\Psi|\bar{H}|\Phi_{i}^{a} \rangle\,\langle\Psi_{i}^{a}|\bar{H}|\Phi\rangle\\ &\quad+\frac{1}{2}\sum_{\sigma\sigma^{\prime}}\sum_{ijab}\, \langle\Psi|\bar{H}|\Phi_{ij}^{ab}\rangle\,\langle\Psi_{ij}^{ab}|\bar{H}| \Phi\rangle\end{split} \tag{23}\]
where the factor of a half was added to take into account double counting of \(ij\) and \(ab\). The \(\sigma\) terms denote the various spins of electrons. The penultimate step is an approximation as we ignore the triple excitation terms when they are much smaller than the double excitation terms. In addition, the single excitation term has terms with: \(\langle\Psi_{i}^{a}|\bar{H}|\Phi\rangle\). By analogy to Brillouin's theorem for the Hartree-Fock method, single excitation determinants will not interact _directly_ with the ground-state determinant, that is, \(\langle\Psi_{i}^{a}|\bar{H}|\Phi\rangle=0\). We can therefore ignore the single excitation terms and deduce that:
\[U^{\text{SOM}}\approx\frac{1}{2}\sum_{\sigma\sigma^{\prime}}\sum_{ijab}\, \langle\Psi|\bar{H}|\Phi_{ij}^{ab}\rangle\,\langle\Psi_{ij}^{ab}|\bar{H}|\Phi\rangle \tag{24}\]
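Given arrays of the doubly-excited matrix elements, the estimate in equation 24 reduces to a single tensor contraction. The sketch below is our illustration; the input arrays `h_right` and `h_left` are hypothetical containers assumed to hold \(\langle\Psi|\bar{H}|\Phi_{ij}^{ab}\rangle\) and \(\langle\Psi_{ij}^{ab}|\bar{H}|\Phi\rangle\), already summed over the spin cases.

```python
# Double-excitation estimate of U^SOM (equation 24).
import numpy as np

def som_estimate(h_right, h_left):
    # the factor 1/2 removes the double counting of (ij) and (ab) pairs
    return 0.5 * np.einsum('ijab,ijab->', h_right, h_left)

# toy shapes: 2 occupied, 4 virtual orbitals
rng = np.random.default_rng(1)
h_right = rng.standard_normal((2, 2, 4, 4))
h_left = rng.standard_normal((2, 2, 4, 4))
print(som_estimate(h_right, h_left))
```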
### Bi-variational Principle
Having found the appropriate correlator parameters, we can construct the transcorrelated Hamiltonian and solve for its eigenfunctions. However, when the Hamiltonian is non-self-adjoint (as is the case for the transcorrelated Hamiltonian), the variational principle does not hold and the expectation value of the Hamiltonian is not bounded from below. A naive minimisation of the Hamiltonian's expectation value can therefore lead to values below the exact ground state energy, which are unphysical. However, one can formulate a different variational principle for a generic operator, which is not necessarily self-adjoint. While the mathematical exposition on the bi-variational principle has been previously undertaken by Lowdin,[39, 40, 41] the essential parts of the proofs are reviewed here as it is crucial to the development of the transcorrelated method.
For a non-self-adjoint operator \(\bar{H}\), we can define left and right eigenfunctions \(\Psi\) and \(\Phi\), respectively such that
\[\bar{H}\Phi=\lambda\Phi\qquad\bar{H}^{\dagger}\Psi=\mu\Psi\qquad\lambda,\mu\in \mathbb{C} \tag{25}\]
We note that \(\lambda\) and \(\mu\) are related by complex conjugation, that is, \(\lambda=\mu^{*}\). For a given pair of trial functions \(\Psi_{i}\) and \(\Phi_{i}\) such that
\[\Phi_{i}=\Phi+\delta\Phi\qquad\qquad\Psi_{i}=\Psi+\delta\Psi \tag{26}\]
the expectation values \(\lambda_{i}\) and \(\mu_{i}\) are given by
\[\lambda_{i}=\lambda+\frac{\langle\delta\Psi|\bar{H}-\lambda|\delta\Phi\rangle}{ \langle\Psi_{i}|\Phi_{i}\rangle}\quad\mu_{i}=\mu+\frac{\langle\delta\Phi|\bar {H}^{\dagger}-\mu|\delta\Psi\rangle}{\langle\Phi_{i}|\Psi_{i}\rangle} \tag{27}\]
The expectation values \(\lambda_{i}\) and \(\mu_{i}\) have vanishing first-order variations (\(\delta\lambda_{i}=0\)), that is, they correspond to stationary points about the exact eigenvalues \(\lambda\) and \(\mu\)
respectively. This is known as the _bi-variational principle for a pair of adjoint operators_.[39]
Conversely, we can show that if \(\delta\lambda_{i}=0\) for all \(\delta\Phi\) and \(\delta\Psi\),
\[(\bar{H}-\lambda_{i})\Phi_{i}=0\hskip 28.452756pt(\bar{H}-\lambda_{i})^{\dagger} \Psi_{i}=0 \tag{28}\]
This implies that the trial function \(\Phi_{i}\) is an eigenfunction of \(\bar{H}\) with eigenvalue \(\lambda_{i}\) and \(\Psi_{i}\) is an eigenfunction of \(\bar{H}^{\dagger}\) with eigenvalue \(\lambda_{i}^{*}=\mu_{i}\). Equation 27 implies that if the trial functions \(\Phi_{i}\) and \(\Psi_{i}\) are correct to first order, the approximation of the eigenvalue \(\lambda_{i}\) to the exact eigenvalue \(\lambda\) is correct to second order.
### Matrix Representation
The bi-variational equations can be recast in matrix form. In the following, the tensor notation of Head-Gordon _et al._[42] shall be used. Given an atomic-orbital basis \(\{\chi_{1}...\chi_{n}\}\), we can expand any pair of trial functions \(\Phi_{i}\) and \(\Psi_{i}\) as
\[\Phi_{i}=\sum_{\tau}\chi_{\tau}c_{\cdot i}^{\tau}\hskip 28.452756pt\Psi_{i}= \sum_{\tau}\chi_{\tau}d_{\cdot i}^{\tau} \tag{29}\]
From equation 28, the bi-variation equations can then be expressed as
\[\bar{H}\Phi_{i}=\lambda_{i}\Phi_{i}\qquad\qquad\bar{H}^{\dagger}\Psi_{i}=\lambda_{i}^{*}\Psi_{i} \tag{30}\] \[\sum_{\tau}\left\langle\chi_{\sigma}|\bar{H}|\chi_{\tau}\right\rangle c_{\cdot i}^{\tau}=\lambda\sum_{\tau}\left\langle\chi_{\sigma}|\chi_{\tau}\right\rangle c_{\cdot i}^{\tau}\qquad\sum_{\tau}\left\langle\chi_{\sigma}|\bar{H}^{\dagger}|\chi_{\tau}\right\rangle d_{\cdot i}^{\tau}=\lambda^{*}\sum_{\tau}\left\langle\chi_{\sigma}|\chi_{\tau}\right\rangle d_{\cdot i}^{\tau} \tag{31}\] \[\bar{\mathbf{H}}\mathbf{c}=\mathbf{\Lambda}\mathbf{S}\mathbf{c}\qquad\qquad\bar{\mathbf{H}}^{\dagger}\mathbf{d}=\mathbf{\Lambda}^{\dagger}\mathbf{S}\mathbf{d} \tag{32}\]
The expressions in equation 31 were obtained through left-multiplying by \(\chi_{\sigma}\) and integrating over all space. In the last step we make the identification that \(\bar{\mathbf{H}}_{\sigma\tau}=\left\langle\chi_{\sigma}|\bar{H}^{\dagger}|\chi_{ \tau}\right\rangle\) and \(\mathbf{S}_{\sigma\tau}=\left\langle\chi_{\sigma}|\chi_{\tau}\right\rangle\).
### Solving the Transcorrelated Equation
We are now in a position to apply the bi-variational approach on the transcorrelated Hamiltonian. The effective transcorrelated Hamiltonian matrix has to be solved iteratively as the two- and three-electron terms are dependent the trial functions \(\Phi_{i}\) and \(\Psi_{i}\). The following workflow was utilised:
1. Perform Hartree-Fock calculation and use the Hartree-Fock coefficients as a starting guess.
2. Build the effective transcorrelated Hamiltonian matrix.
3. Diagonalise the matrix to get new coefficients for the left- and right-eigenvectors.
4. Repeat until convergence.
In doing so, we are _simultaneously_ optimising both the left- and right-eigenvectors. This is a different approach to that of Dobrautz, Luo, and Alavi[25] where only the right-eigenvector is optimised. While our approach requires the optimisation of both left- and right-eigenvectors, which translates to a more expensive calculation, we gain the benefit of bounding the error of the calculation by the bi-variational principle.
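A schematic version of this cycle, written as a generalised non-Hermitian eigenvalue problem in the atomic-orbital basis, is sketched below. This is our illustration rather than the production code: `build_heff` is a placeholder for the construction of the effective transcorrelated Hamiltonian of equation 10, and selecting occupied orbitals by eigenvalue ordering is a simplification of the maximum overlap selection described next.

```python
# Schematic self-consistent solution of the transcorrelated equations.
import numpy as np
from scipy.linalg import eig

def solve_tc_scf(S, C0, n_occ, build_heff, max_cycles=50, tol=1e-8):
    """S: AO overlap matrix; C0: Hartree-Fock coefficients used as the starting
    guess for both left (D) and right (C) orbitals."""
    C = C0.copy()          # right orbitals
    D = C0.copy()          # left orbitals
    e_old = np.inf
    for cycle in range(max_cycles):
        H_eff = build_heff(D[:, :n_occ], C[:, :n_occ])
        # Right problem: H_eff c = e S c; left problem: H_eff^dagger d = e* S d.
        e_r, C = eig(H_eff, S)
        e_l, D = eig(H_eff.conj().T, S)
        # Keep the lowest-eigenvalue orbitals (eigenvalue ordering is a
        # simplification; occupied orbitals are really picked by maximum overlap).
        C = C[:, np.argsort(e_r.real)]
        D = D[:, np.argsort(e_l.real)]
        # The occupied blocks should also be bi-orthonormalised at this point.
        e_new = np.sort(e_r.real)[:n_occ].sum()
        if abs(e_new - e_old) < tol:
            break
        e_old = e_new
    return e_new, D, C
```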
### Maximum Overlap Method
Convergence of the bi-variational approach can be difficult in some cases. Taking inspiration from the work of Gilbert and co-workers,[45] we first assume that the Hartree-Fock coefficients are a good guess at our final coefficients. Therefore, at each iteration, the set of orbitals with the largest overlap to the occupied orbitals in the previous iterations will be picked. This process proceeds until convergence is reached. This is known as the Maximum Overlap Method (MOM). Given the right coefficient matrix from the previous iteration \(\mathbf{C}_{\text{old}}\), the left coefficient matrix from the current iteration \(\mathbf{D}_{\text{new}}\) and the atomic orbital overlap matrix \(\mathbf{S}\), the maximum overlap matrix \(\mathbf{O}_{\text{MOM}}\) is given by:
\[\mathbf{O}_{\text{MOM}}=|\mathbf{C}_{\text{old}}^{\dagger}\mathbf{S}\mathbf{D}_{\text{new}}| \tag{33}\]
The bi-orthogonal solutions from each iteration are determined only up to a phase factor, and hence the modulus is taken to ensure that the overlap remains positive.
Even with traditional implementations of MOM, it is found that it is possible for SCF iterations to converge onto unwanted solutions. This has led to the introduction of the Initial Maximum Overlap Method (IMOM),[44] where new orbitals in each iteration are picked based on their overlaps with the initial guess orbitals. This prevents the solutions from drifting away from the initial guess and has been shown to give better convergence to desired solutions. In this work, we adapt it for bi-orthogonal orbitals, such that the maximum overlap matrix \(\mathbf{O}_{\text{IMOM}}\) is given by:
\[\mathbf{O}_{\text{IMOM}}=|\mathbf{C}_{\text{initial}}^{\dagger}\mathbf{S}\mathbf{D}_{\text{new}}| \tag{34}\]
where \(\mathbf{C}_{\text{initial}}\) is the initial left coefficient matrix.
Both forms of the maximum overlap method were implemented for improved convergence of the iterative procedure.
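A sketch of the selection step, adapted to the bi-orthogonal setting of equations 33 and 34, is given below. This is our illustration; scoring each new orbital by its summed overlap with the reference occupied orbitals is one possible criterion and is not necessarily the exact criterion of the original MOM implementations.

```python
# Occupied-orbital selection by maximum overlap for bi-orthogonal orbitals.
import numpy as np

def select_by_overlap(C_ref, D_new, S, n_occ):
    """Pick the n_occ columns of D_new with the largest summed overlap with the
    occupied reference orbitals C_ref (C_old for MOM, C_initial for IMOM)."""
    O = np.abs(C_ref[:, :n_occ].conj().T @ S @ D_new)   # |C_ref^dagger S D_new|
    scores = O.sum(axis=0)               # total overlap of each new orbital
    occ_idx = np.argsort(scores)[::-1][:n_occ]
    return np.sort(occ_idx)              # indices of the newly occupied orbitals
```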
## 3 Computational Details
The transcorrelated method is implemented in Python. The matrix elements relating to the correlator were found via numerical integration. These integrations were performed with grids found in the PySCF package. Throughout this work, we used Treutler-Ahlrichs grids with Becke partitioning. Q-Chem 5.3 was used for conventional NOCI calculations and for finding Hartree-Fock solutions. Mathematica was used to plot Figures 2-8.
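For readers wishing to reproduce this kind of grid-based integration, a minimal PySCF example of the grids used here might look as follows. This is our sketch; the grid level and the density check are illustrative choices rather than the settings used for the production calculations.

```python
# Build a Becke-partitioned Treutler-Ahlrichs grid and check that it
# integrates the Hartree-Fock density of He to the correct electron count.
from pyscf import gto, scf, dft
import numpy as np

mol = gto.M(atom='He 0 0 0', basis='cc-pvqz')
mf = scf.RHF(mol).run()

grids = dft.gen_grid.Grids(mol)
grids.radi_method = dft.radi.treutler_ahlrichs    # radial scheme
grids.becke_scheme = dft.gen_grid.original_becke  # atomic partitioning
grids.level = 5
grids.build()

ao = dft.numint.eval_ao(mol, grids.coords)        # AO values on the grid
dm = mf.make_rdm1()
rho = np.einsum('pi,ij,pj->p', ao, dm, ao)        # density at each grid point
print(np.dot(grids.weights, rho))                 # ~ 2.0 electrons
```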
## 4 Transcorrelated energies using Schmidt-Moskowitz Parameters
Schmidt and Moskowitz have previously found sets of 7, 9, and 17 correlator parameters for first-row atoms via variance minimisation.[45] Following Alavi and co-workers, we shall refer to these sets as SM7, SM9, and SM17, respectively. The correlator has the form in equation 14. For ease of reference, the various terms incorporated in the correlator for SM7, SM9, and SM17 are tabulated in Table 1.
Using the correlator parameters found by Schmidt and Moskowitz, we solved the transcorrelated Hamiltonian for the first-row atoms with a series of correlation-consistent basis sets (cc-pVXZ, X=D, T, Q). This was done by using the corresponding Unrestricted Hartree-Fock orbitals as a starting guess and varying it until self-consistency. The data in Tables 2, 3 and 4 shows that for a fixed set of correlator parameters, the transcorrelated total energies of small atoms converge with basis set size. However, the transcorrelated energies do not necessarily decrease with an increasing number of parameters used. For example, in atoms from boron through neon, the transcorrelated energies increase going from 9 parameters to 17 parameters. This can be understood when we consider the origin of the parameters used. Schmidt and Moskowitz used Slater-type orbitals (STOs) in their optimisation studies to obtain the parameters. On the other hand, this work employs Gaussian-type orbitals (GTOs). One major difference between the STOs and GTOs is that the electron-nuclear cusp condition is fulfilled while using STOs but not when using GTOs. Different corrections are therefore required for the Hartree-Fock solutions expressed with different orbital bases, leading to the need for different parameters. Hence, the parameters used in this study may not be optimal.
Alavi and co-workers have also used correlation consistent bases in their work on the transcorrelated Hamiltonian. However, they are able to find energies in excellent agreement with experimental values (Tables 2, 3 and 4). We believe that this is due to the effective multi-reference nature of the FCIQMC method such that any errors incurred from using these SM parameters are corrected for by adjusting the weight of each determinant.
## 5 SOM minimisation of correlator parameters
### Singlet state atoms
To improve upon the accuracy of our results, we allowed the correlator parameters to vary alongside the orbitals. The correlator parameters were optimised by using SOM minimisation. The parameters found by Schmidt and Moskowitz (the set of 7 parameters) were used as a starting guess for the optimisation, with an additional \(mno=100\) term to correct for the electron-nuclear cusp conditions (SOM8 in Table 1). The starting guess for the \(mno=100\) term is zero. The analogous 18-parameter set, obtained by adding the \(mno=100\) term to the 17-parameter set, is referred to as SOM18. In practice, we have found it to be useful to optimise the parameters using a two-step SOM minimisation procedure where we first keep the orbitals fixed through the optimisation process and, after the first round of optimisation, we perform SOM minimisation with orbital relaxation in each iteration of the second optimisation cycle. In doing so, we are less likely to get caught in local minima after orbital relaxation. The transcorrelated energies found using the optimised parameters are tabulated in Table 5.
The energies found did not appear to suffer from non-variationality. For the \({}^{1}S\) states of Be, C and O, very accurate energies could be found which recover more than 98% of the correlation energy. While the results for helium and neon were not as encouraging, we note that a highly accurate energy of \(-2.9037E_{\mathrm{h}}\) for helium could be found by using a different starting guess (Table 6). A similarly good result of \(-2.9033E_{\mathrm{h}}\) (see Table 7) can also be found by using the SOM18 set of parameters.
In practice, there were a number of local minima found during optimisations depending on the starting guess and hence a number of possible energies can in principle be found. A sample of possible solutions for helium are tabulated in Table 6. This suggests that the starting guess is very important to obtain the right correlator parameters and one should be cautious about the parameters found from such an optimisation.
\begin{table}
\begin{tabular}{c c} \hline \hline Set & Parameters \\ \hline SM7 & 001, 002, 003, 004, 200, 300, 400 \\ SM9 & 001, 002, 003, 004, 200, 300, 400, 220, 202 \\ SM17 & 001, 002, 003, 004, 200, 300, 400, 220, 202, 222, 402, 204, 422, 602, 404, 224, 206 \\ SOM8 & 001, 002, 003, 004, 100, 200, 300, 400 \\ SOM10 & 001, 002, 003, 004, 100, 200, 300, 400, 220, 202 \\ SOM18 & 001, 002, 003, 004, 100, 200, 300, 400, 220, 202, 222, 402, 204, 422, 602, 404, 224, 206 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the various sets of parameters used, where each number in the second column has the form \(mno\). For example, \(001\) corresponds to the \(m=0\), \(n=0\), and \(o=1\) term. i.e. \(r_{ij}\) term. “SM” refers to Schmidt–Moskowitz parameters while “SOM” refers to parameters found via SOM minimisation.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & cc-pVDZ & cc-pVTZ & cc-pVQZ & SM7[45] & FCIQMC (cc-pVQZ)[24] & Experimental[46] \\ \hline He & -2.8962 & -2.9021 & -2.9025 & -2.8997 & - & -2.9037 \\ Li & -7.4670 & -7.4671 & -7.4672 & -7.4746 & -7.4779 & -7.4781 \\ Be & -14.6111 & -14.6112 & -14.6113 & -14.6259 & -14.6679 & -14.6674 \\ B & -24.5740 & -24.5756 & -24.5764 & -24.5946 & -24.65417 & -24.6539 \\ C & -37.7431 & -37.7475 & -37.7489 & -37.7721 & -37.8479 & -37.8450 \\ N & -54.4502 & -54.4593 & -54.4618 & -54.5019 & -54.5878 & -54.5892 \\ O & -74.8659 & -74.8849 & -74.8659 & -74.9469 & -75.0630 & -75.0673 \\ F & -99.4619 & -99.4912 & -99.4989 & -99.5746 & -99.7251 & -99.7339 \\ Ne & -128.6119 & -128.6528 & -128.6640 & -128.7689 & -128.9297 & -128.9376 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of the transcorrelated total energies (in Hartrees) found with the bi-variational approach using 7 parameters against literature and experimental values. The parameters were the same as that used by Schmidt and Moskowitz.[45] FCIQMC (cc-pVQZ basis) data was found by Alavi and co-workers.[24] Experimental values were found by Chakraverty and co-workers.[46]
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & cc-pVDZ & cc-pVTZ & cc-pVQZ & SM9[45] & FCIQMC[24] & Experimental[46] \\ \hline He & -2.8935 & -2.8995 & -2.8998 & -2.9029 & - & -2.9037 \\ Li & -7.4746 & -7.4727 & -7.4724 & -7.4731 & - & -7.4781 \\ Be & -14.6205 & -14.6191 & -14.6192 & -14.6332 & - & -14.6674 \\ B & -24.6057 & -24.6055 & -24.6062 & -24.6113 & - & -24.6539 \\ C & -37.7592 & -37.7632 & -37.7644 & -37.7956 & - & -37.8450 \\ N & -54.5262 & -54.5334 & -54.5349 & -54.5390 & - & -54.5892 \\ O & -74.9971 & -75.0136 & -75.0164 & -75.0109 & - & -75.0673 \\ F & -99.6589 & -99.6873 & -99.6920 & -99.6685 & - & -99.7339 \\ Ne & -128.8567 & -128.8985 & -128.9070 & -128.8796 & - & -128.9376 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of the transcorrelated total energies (in Hartrees) found with the bi-variational approach using 9 parameters against literature and experimental values. The parameters were the same as that used by Schmidt and Moskowitz.[45] FCIQMC (cc-pVQZ basis) data was found by Alavi and co-workers.[24] Experimental values were found by Chakraverty and co-workers.[46]
\begin{table}
\begin{tabular}{c c} \hline \hline Initial guess & TC energy \\ \hline \(c_{001}=0.5\), \(c_{100}=+1\) & -2.8969 \\ \(c_{001}=0.5\), \(c_{100}=-1\) & -2.8989 \\ \(c_{001}=0.5\), \(c_{100}=-2\) & -2.9037 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Transcorrelated (TC) energies of helium atom, in Hartrees for different starting guesses. The set of parameters SOM8 was used, with starting guesses of \(0\) unless otherwise stated in the first column. Starting from SOM8 with \(c_{001}=0.5\), \(c_{100}=-2\) and other parameters zero, a highly accurate energy of helium atom could be found. The optimised parameters can be found in the Appendix (Table 10)
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & cc-pVDZ & cc-pVTZ & cc-pVQZ & SM17[45] & FCIQMC[24] & Experimental[46] \\ \hline He & -2.8959 & -2.9020 & -2.9023 & -2.9036 & - & -2.9037 \\ Li & -7.4770 & -7.4766 & -7.4765 & -7.4768 & -7.4785 & -7.4781 \\ Be & -14.6304 & -14.6300 & -14.6283 & -14.6370 & -14.6675 & -14.6674 \\ B & -24.5974 & -24.5980 & -24.5977 & -24.6156 & -24.6529 & -24.6539 \\ C & -37.7740 & -37.7766 & -37.7772 & -37.8017 & -37.8446 & -37.8450 \\ N & -54.5022 & -54.5099 & -54.5116 & -54.5456 & -54.5884 & -54.5892 \\ O & -74.9549 & -74.9719 & -74.9549 & -75.0146 & -75.0661 & -75.0673 \\ F & -99.5830 & -99.6117 & -99.6187 & -99.6736 & -99.7328 & -99.7339 \\ Ne & -128.7533 & -128.7925 & -128.8043 & -128.8796 & -128.9354 & -128.9376 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of the transcorrelated total energies (in Hartrees) found with the bi-variational approach using 17 parameters against literature and experimental values. The parameters were the same as that used by Schmidt and Moskowitz.[45] FCIQMC (cc-pVQZ basis) data was found by Alavi and co-workers.[24] Experimental values were found by Chakraverty and co-workers.[46]
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & HF & SOM8 & Exact* & Difference & Correlation energy (\%) \\ \hline \({}^{1}S\) He & -2.8615 & -2.8947 & -2.9037 & 0.0090 & 79 \\ \({}^{1}S\) Be & -14.5730 & -14.6663 & -14.6674 & 0.0011 & 99 \\ \({}^{1}S\) C & -37.6042 & -37.7435 & -37.7465 & 0.0030 & 98 \\ \({}^{1}S\) O & -74.6897 & -74.9093 & -74.9133 & 0.0040 & 98 \\ \({}^{1}S\) Ne & -128.5435 & -128.8758 & -128.9376 & 0.0618 & 84 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of energies found after SOM minimisation and the exact energies. The difference (in Hartrees) and the percentage of correlation energy found were similarly reported. *The exact energies of \({}^{1}S\) C and \({}^{1}S\) O were deduced from spectroscopic measurements.[47, 48] The exact energies of the other closed shell atoms were taken from experimental values found by found by Chakraverty and co-workers.[46] The optimised parameters can be found in the Appendix (Table 9)
In the case of neon, the use of a larger set of parameters (SOM18) gave a non-variational energy of \(-129.0019E_{\text{h}}\). This demonstrates the possibility of obtaining non-variational energies and highlights the potential pitfalls of using SOM minimisation to obtain correlator parameters.
### Helium-like systems
Encouraged by the possibility of highly accurate energies using SOM minimisation, we examined the approach on a series of helium-like systems (Table 7). To find the correlator parameters for this series, we used the set of 17 parameters from Schmidt and Moskowitz together with an additional \(mno=100\) term as a starting guess for the helium atom and performed SOM minimisation to obtain the optimised parameters (SOM18). These optimised parameters were then used as starting guesses for each of these ions.
We found that it was important to use an augmented basis set (aug-cc-pVQZ) for the negatively charged hydride anion as more diffuse functions are required to describe the expanded orbitals. In contrast, a basis optimised for describing core-core correlations (cc-pCVQZ) was found to be useful to describe the contracted orbitals in cations.
For comparison, the same calculations were performed with a cc-pVQZ basis set and the results are tabulated in Table 8. The use of cc-pVQZ basis increased the absolute error from SOM minimisation and the transcorrelated energies found were mostly non-variational. This shows that the choice of basis set is imperative to the accuracy of the transcorrelated method.
Using the appropriate basis sets for cations and anions, we found that the absolute error from SOM minimisation is lower than that for HF for each ion (Table 7). From H\({}^{-}\) through N\({}^{5+}\), SOM minimisation recovers a large proportion of the correlation energy. From
\[\Psi_{He}=e^{-1.755656s}(1+0.337294u+0.112519t^{2}-0.145874s+0.023634s^{2}-0.037024u^{2}) \tag{35}\] \[\text{where }s=|\mathbf{r}_{1}|+|\mathbf{r}_{2}|,t=|\mathbf{r}_{1}|-|\mathbf{r}_{2}| \text{ and }u=|\mathbf{r}_{1}-\mathbf{r}_{2}|.\]
The Jastrow factor's introduction of electron-electron cusps to the Hartree-Fock solution can be seen from Figure 2. The shape of the electron-electron cusp gets increasingly similar to that of the Hylleraas wavefunction as the coefficient increases from 0.1 to 0.5.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & Basis & HF & SOM18 & Exact & Error & Error & Correlation \\ & & & & (HF) & (SOM) & energy (\%) \\ \hline \({}^{1}S\) H\({}^{-}\) & aug-cc-pVQZ & -0.4878 & -0.5231 & -0.5278 & 0.0400 & 0.0047 & 88 \\ \hline He & cc-pVQZ & -2.8615 & -2.9033 & -2.9037 & 0.0422 & 0.0004 & 99 \\ \hline \({}^{1}S\) Li\({}^{+}\) & cc-pCVQZ & -7.2364 & -7.2807 & -7.2799 & 0.0435 & -0.0008 & 101 \\ \hline \({}^{1}S\) Be\({}^{2+}\) & cc-pCVQZ & -13.6113 & -13.6558 & -13.6556 & 0.0443 & -0.0002 & 100 \\ \hline \({}^{1}S\) B\({}^{3+}\) & cc-pCVQZ & -21.9862 & -22.0298 & -22.0309 & 0.0447 & 0.0011 & 98 \\ \hline \({}^{1}S\) C\({}^{4+}\) & cc-pCVQZ & -32.3611 & -32.4038 & -32.4062 & 0.0451 & 0.0024 & 95 \\ \hline \({}^{1}S\) N S\({}^{5+}\) & cc-pCVQZ & -44.7360 & -44.7779 & -44.7814 & 0.0454 & 0.0035 & 92 \\ \hline \({}^{1}S\) O\({}^{6+}\) & cc-pCVQZ & -59.1110 & -59.1486 & -59.1566 & 0.0456 & 0.0080 & 82 \\ \hline \({}^{1}S\) F\({}^{7+}\) & cc-pCVQZ & -75.4859 & -75.5214 & -75.5317 & 0.0458 & 0.0103 & 78 \\ \hline \({}^{1}S\) Ne\({}^{8+}\) & cc-pCVQZ & -93.8608 & -93.8966 & -93.9068 & 0.0460 & 0.0102 & 78 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Comparison of absolute error in the HF energy against the absolute error in the energy found by SOM minimisation. The absolute error for both HF and SOM minimisation methods increases with the magnitude of nuclear charge. The absolute error found from HF is consistently above that of those found from SOM minimisation. The optimised correlator parameters found are tabulated in the Appendix (Tables 11, 12).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & Basis & HF & SOM18 & Error & Error \\ & & & & (HF) & (SOM) \\ \hline \({}^{1}S\) H\({}^{-}\) & cc-pVQZ & -0.4735 & -0.5078 & 0.0543 & 0.0200 \\ \hline \({}^{1}S\) Li\({}^{+}\) & cc-pVQZ & -7.2364 & -7.2872 & 0.0435 & -0.0073 \\ \hline \({}^{1}S\) Be\({}^{2+}\) & cc-pVQZ & -13.6113 & -13.6652 & 0.0443 & -0.0096 \\ \hline \({}^{1}S\) B\({}^{3+}\) & cc-pVQZ & -21.9862 & -22.0455 & 0.0447 & -0.0146 \\ \hline \({}^{1}S\) C\({}^{4+}\) & cc-pVQZ & -32.3611 & -32.4231 & 0.0451 & -0.0169 \\ \hline \({}^{1}S\) N\({}^{5+}\) & cc-pVQZ & -44.7360 & -44.8001 & 0.0454 & -0.0187 \\ \hline \({}^{1}S\) O\({}^{6+}\) & cc-pVQZ & -59.1108 & -59.1780 & 0.0458 & -0.0214 \\ \hline \({}^{1}S\) F\({}^{7+}\) & cc-pVQZ & -75.4857 & -75.5548 & 0.0460 & -0.0231 \\ \hline \({}^{1}S\) Ne\({}^{8+}\) & cc-pVQZ & -93.8605 & -93.9309 & 0.0463 & -0.0241 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Comparison of absolute error in the HF energy against those found by SOM minimisation in cc-pVQZ basis. The absolute error from SOM minimisation is significantly higher than those found in Table 7. Most of the transcorrelated energies found are also non-variational.
Figure 1: (Left) Two electrons constrained at a fixed electron-nuclear distance of \(R\). The electron-electron distance is modulated by their angle of separation, \(\theta\). \(\theta\) is measured in radians. (Right) One electron is fixed at distance \(R\) and the other is constrained to the (dotted) line defined by the nucleus and the fixed electron. \(R\) is found by the expectation value of the electron-nuclear separation in near-exact wavefunction given by Nakatsuji and coworkers.[50]
Figure 3: Graphical description of the electron-nuclear cusp of the transcorrelated wavefunction of He with \(c=-2\), \(-1\), and \(1\) against Hartree–Fock and Hylleraas wavefunctions. The position at which the other electron is fixed is shown by the dashed blue line.
Figure 2: Graphical description of the electron-electron cusp of the transcorrelated wavefunction with \(c=0.1\) through \(c=0.5\) against Hartree–Fock and Hylleraas wavefunctions. The function with \(c=0.5\) most closely matches that of the Hylleraas wavefunction, but the cusp is shallow as compared to the Hylleraas wavefunction.
To examine the effect of the electron-nuclear cusp, we use the special case whereby the nucleus and an electron are constrained to be 1.26 Bohr apart and the other electron is free to move along the line defined by them (Figure 1, Right). We use a different correlator \(\tau=\frac{1}{2}\frac{r_{12}}{1+r_{12}}+c\frac{r}{1+r}\), where \(r\) is the variable electron-nuclear distance and \(r_{12}=r-R\) is the electron-electron distance, and vary the value of the parameter \(c\). The Hartree-Fock wavefunction most closely matches the Hylleraas wavefunction at the nucleus, while the function with \(c=-1\) is more similar further away from it (Figure 3).
For comparison, the function with \(c=-2\) was plotted, as \(c=-2\) is what would be expected from a simple application of Kato's cusp conditions. This illustrates that the coefficient \(c\) need not be equal to the negative of the nuclear charge, \(-Z\), as the electron-nuclear interaction term in the Jastrow factor affects the overall wavefunction, not only the cusp. Performing SOM minimisation with this correlator, we found a value of \(c=-1.12\), and the transcorrelated wavefunction with \(c=-1.12\) is plotted. The plot shows good agreement with the Hylleraas wavefunction (Figure 4).
A similar series of calculations were attempted for Li\({}^{+}\). The Hylleraas wavefunction for Li\({}^{+}\) is given by:
Figure 4: Plotting the Hylleraas wavefunction against the transcorrelated wavefunction with \(c=-1.12\). \(c\) was found by SOM minimisation was performed with a cc-pV5Z basis set. The Slater Determinant used is found via Hartree–Fock calculation using a cc-pVQZ basis. While there is some discrepancy between the transcorrelated wavefunction and the Hylleraas wavefunction near the nucleus, the two wavefunctions are very similar further away from the nucleus.
Figure 5: Graphical description of the electron-nuclear cusp of the transcorrelated wavefunction of Li\({}^{+}\) with varying values of \(c\) against Hartree–Fock and Hylleraas wavefunctions. The Hartree–Fock wavefunction resembles the Hylleraas wavefunction near the nucleus. The transcorrelated wavefunction with \(c=-1\) most closely matches that of the Hylleraas wavefunction further away from the nucleus.
\[\Psi_{\rm Li^{+}}=e^{-2.784751s}(1+0.354317u+0.154657t^{2}-0.127225s+0.042220s^{2}-0.06 6731u^{2}) \tag{36}\]
From Figure 5 it can be seen that the Hartree-Fock wavefunction is very similar to that of the Hylleraas wavefunction near the nucleus. At regions further from the nucleus, the \(c=-1\) wavefunction most closely resembles the Hylleraas wavefunction. SOM minimisation gave \(c=-3.79\) but in this case, we
Figure 8: Two plots describing the electron-nuclear cusp corresponding to Figure 1 (Right).
(Left) Plot of the transcorrelated wavefunction for He against the corresponding Hylleraas wavefunction. (Right) Plot of the transcorrelated wavefunction for Li\({}^{+}\) against the corresponding Hylleraas wavefunction. Both plots show that the 18 parameter transcorrelated wavefunction reproduces the Hylleraas wavefunction well.
Figure 6: Plotting the Hylleraas wavefunction against the transcorrelated wavefunction with \(c=-3.79\) and \(c=-1\). \(c\) was found by SOM minimisation with a cc-pCVQZ basis set. The Slater Determinant used is found via Hartree–Fock calculation using a cc-pVQZ basis.
Figure 7: Two plots describing the electron-electron cusp corresponding to Figure 1 (Left).
(Left) Plot of the transcorrelated wavefunction for He against the corresponding Hylleraas wavefunction. (Right) Plot of the transcorrelated wavefunction for Li\({}^{+}\) against the corresponding Hylleraas wavefunction. Both plots show that the 18 parameter transcorrelated wavefunction reproduces the shape of the electron-electron cusp, but the cusp is shallower than that of the Hylleraas wavefunction.
were unable to reproduce the Hylleraas wavefunction (Figure 6). There are several reasons why such a discrepancy can exist.
Firstly, resolution of identity is an _approximation_ which may not be valid depending on the size of the basis set used.
Secondly, the Hartree-Fock wavefunction already resembles the Hylleraas wavefunction well. While the addition of a Jastrow factor to it can provide a better description of the electron-nuclear cusp, it comes at the cost of affecting other parts of the wavefunction. More terms may need to be added to the correlator to describe the transcorrelated wavefunction more accurately at different points in space.
To address the latter point, the transcorrelated wavefunctions for He and Li\({}^{+}\) found using the SOM18 set of parameters (Table 7) were plotted against the respective Hylleraas wavefunctions (Figures 7, 8). From Table 7, it can be observed that the transcorrelated energies are very close to the exact energies, which suggests that the transcorrelated wavefunctions should closely resemble the Hylleraas wavefunctions. This is reflected graphically in the depictions of the electron-electron cusp (Figure 7) and the electron-nuclear cusp (Figure 8), where each of the transcorrelated wavefunctions agrees well with the corresponding Hylleraas wavefunction. The main difference between the transcorrelated and Hylleraas wavefunctions appears to be the description of electron-electron interactions at larger electron-electron distances, which may hint at the use of a differently scaled form of the correlator to account for longer-range effects. Overall, the plots show that SOM minimisation can yield highly accurate wavefunctions given sufficiently many correlator parameters, supporting the utility of SOM minimisation when appropriate starting guesses are used.
## 7 Conclusions
A self-consistent method for solving the non-self-adjoint transcorrelated Hamiltonian has been implemented successfully to obtain highly accurate energies of some first row atoms. The correlator parameters found in the literature are not optimised for the Gaussian orbital basis used in this current study and had to be re-optimised through a method we refer to as SOM minimisation. This allowed us to find optimised parameters for any system, in principle. However, the optimisation of multiple parameters is challenging and in practice, we have found it to be useful to optimise the parameters using a two-step SOM minimisation procedure. SOM minimisation has been found to give good energies for the first row atoms. However, the percentage of correlation energy recovered has been found to decrease with increased nuclear charge across a series of helium-like ions. We believe that this is due to the inability of the basis set to accurately describe highly charged cations and a custom basis with more contracted basis functions would be helpful to describe the correlation in these systems.
Thus far, SOM minimisation has been attempted for closed shell systems. Further work has to be done on open-shell systems where there is a possibility of spin-symmetry breaking, leading to an unphysical wavefunction. SOM minimisation should also be attempted on larger systems to test if the method works more generally. However, as pointed out by Alavi and coworkers,[24] memory use is a bottleneck in transcorrelated calculations. This can be challenging especially with the need to use large basis sets for SOM minimisation as it relies on the resolution-of-the-identity approximation. A possible solution would be to use an auxiliary basis set instead which would allow us to use a smaller basis set to represent the Slater Determinant but a large auxiliary basis to satisfy the resolution-of-the-identity approximation. A graphical analysis has also been done to illustrate the effects of some correlator terms on the overall wavefunction and demonstrated the importance of including higher-order correlator terms in the Jastrow factor to give a more accurate wavefunction.
## 8 Acknowledgements
NL is grateful for the helpful feedback and insights provided by Prof. Ali Alavi and Prof. Matthew Foulkes.
## Appendix A Transcorrelated Hamiltonian
### Expansion of commutator terms
We first separate the many-electron electronic Hamiltonian into a kinetic energy (\(\hat{T}\)) and potential energy (\(\hat{V}\)) term:
\[\hat{H}=\underbrace{-\frac{1}{2}\sum_{i}\nabla_{i}^{2}}_{\hat{T}}\underbrace{-\sum_{iA}\frac{Z_{A}}{|\mathbf{r}_{i}-\mathbf{R}_{A}|}+\sum_{i<j}\frac{1}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}}_{\hat{V}} \tag{37}\]
\(\hat{V}\) is multiplicative and hence commutes with \(\tau\) in the commutator \([\hat{H},\tau]\), that is,
\[[\hat{H},\tau] =[\hat{T}+\hat{V},\tau] \tag{38}\] \[=[\hat{T},\tau]\]
The commutator terms in equation 1 will be evaluated term by term as follows:
\[[\hat{H},\tau]f =[\hat{T},\tau]f \tag{39}\] \[=\hat{T}(\tau f)-\tau\hat{T}f\] \[=-\frac{1}{2}\sum_{i}\mathbf{\nabla}_{i}\cdot\mathbf{\nabla}_{i}(\tau f)+ \frac{1}{2}\sum_{i}\tau\nabla_{i}^{2}f\] \[=-\frac{1}{2}\sum_{i}\mathbf{\nabla}_{i}\cdot(\tau\mathbf{\nabla}_{i}f+f \mathbf{\nabla}_{i}\tau)+\frac{1}{2}\sum_{i}\tau\nabla_{i}^{2}f\] \[=-\frac{1}{2}\sum_{i}(\tau\nabla_{i}^{2}f+f\nabla_{i}^{2}\tau+2 \mathbf{\nabla}_{i}\tau\cdot\mathbf{\nabla}_{i}f)\] \[\quad+\frac{1}{2}\sum_{i}\tau\nabla_{i}^{2}f\] \[=\Big{(}-\frac{1}{2}\sum_{i}\nabla_{i}^{2}\tau-\sum_{i}\mathbf{ \nabla}_{i}\tau\cdot\mathbf{\nabla}_{i}\Big{)}f\]
We can now make the identification
\[[\hat{H},\tau]\equiv-\sum_{i}\Big{(}\frac{1}{2}\nabla_{i}^{2}\tau+\mathbf{\nabla}_ {i}\tau\cdot\mathbf{\nabla}_{i}\Big{)} \tag{40}\]
\[[[\hat{H},\tau],\tau]f =[\hat{H},\tau](\tau f)-\tau[\hat{H},\tau]f \tag{41}\] \[=\sum_{i}\Big{(}-\mathbf{\nabla}_{i}\tau\cdot\mathbf{\nabla}_{i}(\tau f)+ \tau\mathbf{\nabla}_{i}\mathbf{\tau}\cdot\mathbf{\nabla}_{i}f\Big{)}\] \[=\sum_{i}\Big{(}-f\mathbf{\nabla}_{i}\tau\cdot\mathbf{\nabla}_{i}\tau- \tau\mathbf{\nabla}_{i}\tau\cdot\mathbf{\nabla}_{i}f\] \[\quad+\tau\mathbf{\nabla}_{i}\tau\cdot\mathbf{\nabla}_{i}f\Big{)}\] \[=\sum_{i}\Big{(}-f\mathbf{\nabla}_{i}\tau\cdot\mathbf{\nabla}_{i}\tau \Big{)}\] \[=\sum_{i}-\Big{(}\mathbf{\nabla}_{i}\tau\Big{)}^{2}f\]
Therefore,
\[[[\hat{H},\tau],\tau]\equiv\sum_{i}-\Big{(}\mathbf{\nabla}_{i}\tau\Big{)}^{2} \tag{42}\]
Since \([[\hat{H},\tau],\tau]\) is a multiplicative term, higher-order commutators of the form \([[[\hat{H},\tau],\tau]...]\) vanish.
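The identities in equations 40 and 42 can be verified symbolically. The sketch below checks the one-particle, one-dimensional analogues with SymPy for arbitrary functions \(\tau(x)\) and \(f(x)\); it is an illustration of the algebra above, not part of the transcorrelated implementation.

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Function('f')(x)      # arbitrary test function
tau = sp.Function('tau')(x)  # arbitrary correlator (one particle, 1D)

def T(g):
    """Kinetic-energy operator -1/2 d^2/dx^2 applied to g."""
    return -sp.Rational(1, 2) * sp.diff(g, x, 2)

def comm(g):
    """First commutator [T, tau] applied to g."""
    return T(tau * g) - tau * T(g)

# Equation 40: [T, tau] f = -(1/2 tau'' + tau' d/dx) f
lhs1 = comm(f)
rhs1 = -(sp.Rational(1, 2) * sp.diff(tau, x, 2) * f + sp.diff(tau, x) * sp.diff(f, x))
print(sp.simplify(lhs1 - rhs1))   # prints 0

# Equation 42: [[T, tau], tau] f = -(tau')^2 f, a purely multiplicative term
lhs2 = comm(tau * f) - tau * comm(f)
rhs2 = -(sp.diff(tau, x))**2 * f
print(sp.simplify(lhs2 - rhs2))   # prints 0
```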
Given that \(\tau=\sum_{i<j}u(\mathbf{r}_{i},\mathbf{r}_{j})\), we can further expand equation 2.
\[\mathbf{\nabla}_{i}\tau =\mathbf{\nabla}_{i}\sum_{a<b}u(\mathbf{r}_{a},\mathbf{r}_{b}) \tag{43}\] \[=\sum_{a}\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{a})\]
where we used the symmetry \(u(\mathbf{r}_{i},\mathbf{r}_{j})=u(\mathbf{r}_{j},\mathbf{r}_{i})\). We therefore find:
\[\sum_{i}\nabla_{i}^{2}\tau =\sum_{i}\sum_{a}\nabla_{i}^{2}u(\mathbf{r}_{i},\mathbf{r}_{a}) \tag{44}\] \[=\frac{1}{2}\sum_{i}\sum_{a}\nabla_{i}^{2}u(\mathbf{r}_{i},\mathbf{r}_{a})\] \[\quad+\frac{1}{2}\sum_{i}\sum_{a}\nabla_{a}^{2}u(\mathbf{r}_{a},\mathbf{r }_{i})\] \[=\sum_{i<j}\nabla_{i}^{2}u(\mathbf{r}_{i},\mathbf{r}_{j})+\sum_{i<j} \nabla_{j}^{2}u(\mathbf{r}_{i},\mathbf{r}_{j})\]
where we relabelled \(a\) by \(j\) and used the symmetry of \(u\) in the last line.
We similarly find:
\[\sum_{i}\mathbf{\nabla}_{i}\tau\cdot\mathbf{\nabla}_{i} =\sum_{i}\sum_{a}\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{a})\cdot\mathbf{ \nabla}_{i} \tag{45}\] \[=\frac{1}{2}\sum_{i}\sum_{a}\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{a })\cdot\mathbf{\nabla}_{i}\] \[\quad+\frac{1}{2}\sum_{i}\sum_{a}\mathbf{\nabla}_{a}u(\mathbf{r}_{a},\bm {r}_{i})\cdot\mathbf{\nabla}_{a}\] \[=\sum_{i<j}\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{j})\cdot\mathbf{\nabla }_{i}+\sum_{i<j}\mathbf{\nabla}_{j}u(\mathbf{r}_{i},\mathbf{r}_{j})\cdot\mathbf{\nabla}_{j}\]
Finally,
\[\sum_{i}\left(\mathbf{\nabla}_{i}\tau\right)^{2} =\sum_{i}\mathbf{\nabla}_{i}\tau\cdot\mathbf{\nabla}_{i}\tau\] \[=\sum_{i}\sum_{a}\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{a})\cdot\sum_ {b}\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{b})\] \[=\sum_{i,j}\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{j})\cdot\mathbf{\nabla} _{i}u(\mathbf{r}_{i},\mathbf{r}_{j})\] \[\quad+\sum_{i,a\neq b}\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{a}) \cdot\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{b})\] \[=\sum_{i<j}\left(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{j})\right)^{ 2}+\sum_{i<j}\left(\mathbf{\nabla}_{j}u(\mathbf{r}_{i},\mathbf{r}_{j})\right)^{2}\] \[\quad+\sum_{i}\sum_{a<b}\left(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r} _{a})\cdot\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{b})\right)\] \[\quad+\sum_{i}\sum_{b<a}\left(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r} _{a})\cdot\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{b})\right)\] \[=\sum_{i<j}\left(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{j})\right)^ {2}+\sum_{i<j}\left(\mathbf{\nabla}_{j}u(\mathbf{r}_{i},\mathbf{r}_{j})\right)^{2}\] \[\quad+2\sum_{i}\sum_{a<b}\left(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r} _{a})\cdot\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{b})\right)\] \[=\sum_{i<j}\left(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{j})\right)^ {2}+\sum_{i<j}\left(\mathbf{\nabla}_{j}u(\mathbf{r}_{i},\mathbf{r}_{j})\right)^{2}\] \[\quad+2\sum_{i<a<b}\left(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{a}) \cdot\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{b})\right)\] \[\quad+2\sum_{a<b<i}\left(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{a}) \cdot\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{b})\right)\] \[=\sum_{i<j}\left(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{j})\right)^ {2}+\sum_{i<j}\left(\mathbf{\nabla}_{j}u(\mathbf{r}_{i},\mathbf{r}_{j})\right)^{2}\] \[\quad+2\sum_{i<j<k}\left(\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{j}) \cdot\mathbf{\nabla}_{i}u(\mathbf{r}_{i},\mathbf{r}_{k})\right.\] \[\quad+\mathbf{\nabla}_{j}u(\mathbf{r}_{j},\mathbf{r}_{i})\cdot\mathbf{\nabla}_{j} u(\mathbf{r}_{j},\mathbf{r}_{k})\] \[\quad\qquad\qquad+\mathbf{\nabla}_{k}u(\mathbf{r}_{k},\mathbf{r}_{i})\cdot \mathbf{\nabla}_{k}u(\mathbf{r}_{k},\mathbf{r}_{j})\right) \tag{46}\]
Substituting these commutator terms back into equation 1 recovers equation 2.
### Lagrangian approach to the Transcorrelated Hamiltonian
We show here that the method of Lagrange multipliers can be used to derive the effective transcorrelated Hamiltonian.
\[\begin{split}\mathcal{L}&=E-\sum_{i=1}\sum_{j=1} \epsilon_{ij}(\langle\psi_{i}|\phi_{j}\rangle-\delta_{ij})\\ &=\langle\Psi|\hat{H}|\Phi\rangle-\sum_{i=1}\sum_{j=1}\epsilon_{ij }(\langle\psi_{i}|\phi_{j}\rangle-\delta_{ij})\\ &=\langle\Psi|\hat{H}-\sum_{i<j}\hat{K}(\mathbf{r}_{i},\mathbf{r}_{j})- \sum_{i<j<k}\hat{L}(\mathbf{r}_{i},\mathbf{r}_{j},\mathbf{r}_{k})|\Phi\rangle\\ &\quad-\sum_{i=1}\sum_{j=1}\epsilon_{ij}(\langle\psi_{i}|\phi_{j }\rangle-\delta_{ij})\\ &=\langle\Psi|\sum_{i}\hat{h}_{i}|\Phi\rangle+\langle\Psi|\sum_{i <j}(r_{ij}^{-1}-\hat{K}(\mathbf{r}_{i},\mathbf{r}_{j}))|\Phi\rangle\\ &\quad-\langle\Psi|\sum_{i<j<k}\hat{L}(\mathbf{r}_{i},\mathbf{r}_{j}, \mathbf{r}_{k})|\Phi\rangle\\ &\quad-\sum_{i=1}\sum_{j=1}\epsilon_{ij}(\langle\psi_{i}|\phi_{j }\rangle-\delta_{ij})\\ &=\langle\Psi|\hat{O}_{1}|\Phi\rangle+\langle\Psi|\hat{O}_{2}|\Phi \rangle+\langle\Psi|\hat{O}_{3}|\Phi\rangle\\ &\quad-\sum_{i=1}\sum_{j=1}\epsilon_{ij}(\langle\psi_{i}|\phi_{j }\rangle-\delta_{ij})\end{split} \tag{47}\]
In the last line we have renamed the \(n\)-electron operators as \(\hat{O}_{n}\). This is for brevity of notation, with the understanding that the mathematics that follows is concerned only with the number of electrons the operators act upon. Taking an infinitesimal change of the Lagrangian with respect to variation of the spin-orbitals,
\[\begin{split}\delta\mathcal{L}&=\delta\langle\Psi| \hat{O}_{1}|\Phi\rangle+\delta\,\langle\Psi|\hat{O}_{2}|\Phi\rangle+\delta\, \langle\Psi|\hat{O}_{3}|\Phi\rangle\\ &\quad-\delta\sum_{i=1}\sum_{j=1}\epsilon_{ij}(\langle\psi_{i}| \phi_{j}\rangle-\delta_{ij})\end{split} \tag{48}\]
We shall analyse each term of the above expression in turn. We will adopt the shorthand \(\langle\psi_{i}\psi_{j}...\psi_{k}|\hat{O}|\phi_{a}\phi_{b}...\phi_{c}\rangle= \langle ij...k|\hat{O}|ab...c\rangle\). We assume that the bra always contains molecular orbitals from the set \(\{\psi_{i}\}\) and that the ket always contains molecular orbitals from the set \(\{\phi_{i}\}\).
#### a.2.1 One electron term
\[\begin{split}\delta\,\langle\Psi|\hat{O}_{1}|\Phi\rangle& =\delta\Big{(}\sum_{i}\langle i|\hat{O}_{1}|i\rangle\,\Big{)}\\ &=\sum_{i}\Big{(}\langle\delta i|\hat{O}_{1}|i\rangle+\langle i| \hat{O}_{1}|\delta i\rangle\,\Big{)}\end{split} \tag{49}\]
#### a.2.2 Two electron term
\[\delta\left\langle\Psi|\hat{O}_{2}|\Phi\right\rangle =\sum_{i<j}\left(\delta\left\langle ij|\hat{O}_{2}|ij\right\rangle-\delta\left\langle ij|\hat{O}_{2}|ji\right\rangle\right)\] \[=\sum_{i<j}\Big{(}\left\langle\delta ij|\hat{O}_{2}|ij\right\rangle+\left\langle i\delta j|\hat{O}_{2}|ij\right\rangle\] \[\quad+\left\langle ij|\hat{O}_{2}|\delta ij\right\rangle+\left\langle ij|\hat{O}_{2}|i\delta j\right\rangle\] \[\quad-\left\langle\delta ij|\hat{O}_{2}|ji\right\rangle-\left\langle i\delta j|\hat{O}_{2}|ji\right\rangle\] \[\quad-\left\langle ij|\hat{O}_{2}|\delta ji\right\rangle-\left\langle ij|\hat{O}_{2}|j\delta i\right\rangle\Big{)}\] \[=\sum_{i<j}\Big{(}2\left\langle\delta ij|\hat{O}_{2}|ij\right\rangle-2\left\langle\delta ij|\hat{O}_{2}|ji\right\rangle\] \[\quad+2\left\langle ij|\hat{O}_{2}|\delta ij\right\rangle-2\left\langle ji|\hat{O}_{2}|\delta ij\right\rangle\Big{)} \tag{50}\]
where we have used the permutation symmetry of the integrals, e.g. \(\langle ij|\hat{O}_{2}|ij\rangle=\langle ji|\hat{O}_{2}|ji\rangle\).
#### a.2.3 Three electron term
\[\delta\left\langle\Psi|\hat{O}_{3}|\Phi\right\rangle =\sum_{i<j<k}\Big{(}\delta\left\langle ijk|\hat{O}_{3}|ijk\right\rangle-\delta\left\langle ijk|\hat{O}_{3}|ikj\right\rangle\] \[\quad+\delta\left\langle ijk|\hat{O}_{3}|jki\right\rangle-\delta\left\langle ijk|\hat{O}_{3}|jik\right\rangle\] \[\quad+\delta\left\langle ijk|\hat{O}_{3}|kij\right\rangle-\delta\left\langle ijk|\hat{O}_{3}|kji\right\rangle\Big{)}\] \[=\sum_{i<j<k}\Big{(}3\left\langle\delta ijk|\hat{O}_{3}|ijk\right\rangle_{P}+3\left\langle ijk|\hat{O}_{3}|\delta ijk\right\rangle_{P}\Big{)} \tag{51}\]
#### a.2.4 Lagrangian differential
\[\delta\mathcal{L} =\delta\left\langle\Psi|\hat{O}_{1}|\Phi\right\rangle+\delta \left\langle\Psi|\hat{O}_{2}|\Phi\right\rangle\] \[\quad+\delta\left\langle\Psi|\hat{O}_{3}|\Phi\right\rangle- \delta\sum_{i=1}\sum_{j=1}\epsilon_{ij}(\left\langle i|j\right\rangle-\delta_ {ij})\] \[=\delta\left\langle\Psi|\hat{O}_{1}|\Phi\right\rangle+\delta \left\langle\Psi|\hat{O}_{2}|\Phi\right\rangle\] \[\quad+\delta\left\langle\Psi|\hat{O}_{3}|\Phi\right\rangle-\sum_ {i=1}\sum_{j=1}\epsilon_{ij}(\left\langle\delta i|j\right\rangle+\left\langle i |\delta j\right\rangle)\] \[=\delta\mathcal{L}_{\psi}+\delta\mathcal{L}_{\phi} \tag{52}\]
where we have defined the following:
\[\delta\mathcal{L}_{\psi} =\sum_{i}\left\langle\delta i|\hat{O}_{1}|i\right\rangle\] \[\quad+\sum_{i<j}\left(2\left\langle\delta ij|\hat{O}_{2}|ij \right\rangle-2\left\langle\delta ij|\hat{O}_{2}|ji\right\rangle\right)\] \[\quad+\sum_{i<j<k}\left(3\left\langle\delta ijk|\hat{O}_{3}|ijk \right\rangle-3\left\langle\delta ijk|\hat{O}_{3}|ikj\right\rangle\right.\] \[\quad+3\left\langle\delta ijk|\hat{O}_{3}|kij\right\rangle-3 \left\langle\delta ijk|\hat{O}_{3}|kji\right\rangle\left.\right)\] \[\quad-\sum_{i=1}\sum_{j=1}\epsilon_{ij}\left\langle\delta i|j\right\rangle\] \[=\sum_{i}\left\langle\delta i|\hat{O}_{1}|i\right\rangle+2\sum_{i <j}\left\langle\delta ij|\hat{O}_{2}|ij\right\rangle_{P}\] \[\quad+3\sum_{i<j<k}\left\langle\delta ijk|\hat{O}_{3}|ijk\right\rangle _{P}-\sum_{i=1}\sum_{j=1}\epsilon_{ij}\left\langle\delta i|j\right\rangle \tag{53}\]
We use the shorthand such that \(\left|ij..k\right\rangle_{P}=\sum_{\hat{P}\in S_{n}}(-1)^{p}\hat{P}\left|ij..k \right\rangle=\mathcal{P}_{n}\left|ij..k\right\rangle\), \(S_{n}\) being the permutation group of \(n\) elements (Equation 13). Similarly,
\[\delta\mathcal{L}_{\phi} =\sum_{i}\left\langle i|\hat{O}_{1}|\delta i\right\rangle+2\sum_ {i<j}\left\langle ij|\hat{O}_{2}|\delta ij\right\rangle_{P}\] \[\quad+3\sum_{i<j<k}\left\langle ijk|\hat{O}_{3}|\delta ijk\right\rangle _{P}-\sum_{i=1}\sum_{j=1}\epsilon_{ij}\left\langle i|\delta j\right\rangle \tag{54}\]
We seek \(\delta\mathcal{L}=0\) for any arbitrary changes of \(\psi\) and \(\phi\) independently. Hence, \(\delta\mathcal{L}_{\psi}=0\) and \(\delta\mathcal{L}_{\phi}=0\) independently.
#### a.2.5 Effective transcorrelated Hamiltonian
Consider the condition \(\delta\mathcal{L}_{\psi}=0\).
\[\delta\mathcal{L}_{\psi} =\sum_{i}\left\langle\delta i\middle|\hat{O}_{1}\middle|i\right\rangle +2\sum_{i<j}\left\langle\delta ij\middle|\hat{O}_{2}\middle|ij\right\rangle_{P} \tag{57}\] \[\quad+3\sum_{i<j<k}\left\langle\delta ijk\middle|\hat{O}_{3} \middle|ijk\right\rangle_{P}-\sum_{i=1}\sum_{j=1}\epsilon_{ij}\left\langle \delta i\middle|j\right\rangle\] \[=\sum_{i}\left\langle\delta i\middle|\hat{h}_{i}\middle|i\right\rangle +2\sum_{i<j}\left\langle\delta ij\middle|(r_{12}^{-1}-\hat{K}(\mathbf{r}_{1},\mathbf{r}_{ 2}))\middle|ij\right\rangle_{P}\] \[\quad+3\sum_{i<j<k}\left\langle\delta ijk\middle|\hat{L}(\mathbf{r}_{ 1},\mathbf{r}_{2},\mathbf{r}_{3})\middle|ijk\right\rangle_{P}\] \[\quad-\sum_{i=1}\sum_{j=1}\epsilon_{ij}\left\langle\delta i \middle|j\right\rangle\] \[=\sum_{i}\left\langle\delta i\middle|\hat{h}_{i}\middle|i\right\rangle +\sum_{i=1}\sum_{j=1}\left\langle\delta ij\middle|(r_{12}^{-1}-\hat{K}(\mathbf{r}_{ 1},\mathbf{r}_{2}))\middle|ij\right\rangle_{P}\] \[\quad+\frac{1}{2}\sum_{i=1}\sum_{j=1}\sum_{k=1}\left\langle\delta ijk \middle|\hat{L}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3})\middle|ijk\right\rangle_{P}\] \[\quad-\sum_{i=1}\sum_{j=1}\epsilon_{ij}\left\langle\delta i \middle|j\right\rangle\] \[=\sum_{i}\left[\left\langle\delta i\middle|\hat{h}_{i}\middle|i \right\rangle+\sum_{j=1}\left\langle\delta ij\middle|(r_{12}^{-1}-\hat{K}(\mathbf{ r}_{1},\mathbf{r}_{2}))\middle|ij\right\rangle_{P}\right.\] \[\quad+\frac{1}{2}\sum_{j=1}\sum_{k=1}\left\langle\delta ijk \middle|\hat{L}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3})\middle|ijk\right\rangle_{P}\] \[\quad-\sum_{j=1}\epsilon_{ij}\left\langle\delta i\middle|j\right\rangle\] \[=0\]
Rewriting the expression more explicitly in integral form,
\[\sum_{i}\left[\left\langle\delta i\middle|\hat{h}_{i}\middle|i\right\rangle+\sum_{j=1}\left\langle\delta ij\middle|(r_{12}^{-1}-\hat{K}(\mathbf{r}_{1},\mathbf{r}_{2}))\middle|ij\right\rangle_{P}\right.\] \[\left.+\frac{1}{2}\sum_{j=1}\sum_{k=1}\left\langle\delta ijk\middle|\hat{L}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3})\middle|ijk\right\rangle_{P}\right.\] \[\left.-\sum_{j=1}\epsilon_{ij}\left\langle\delta i\middle|j\right\rangle\right]=0 \tag{56}\] \[\sum_{i=1}\int d\mathbf{r}_{1}\,\delta\psi_{i}^{*}(\mathbf{r}_{1})\Big{[}\hat{h}_{i}(\mathbf{r}_{1})\] \[\quad+\sum_{j=1}\int d\mathbf{r}_{2}\,\psi_{j}^{*}(\mathbf{r}_{2})(r_{12}^{-1}-\hat{K}(\mathbf{r}_{1},\mathbf{r}_{2}))\mathcal{P}_{2}\phi_{j}(\mathbf{r}_{2})\] \[\quad+\frac{1}{2}\sum_{j,k=1}\int\int d\mathbf{r}_{2}d\mathbf{r}_{3}\,\psi_{j}^{*}(\mathbf{r}_{2})\psi_{k}^{*}(\mathbf{r}_{3})\hat{L}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3})\mathcal{P}_{3}\phi_{j}(\mathbf{r}_{2})\phi_{k}(\mathbf{r}_{3})\] \[\quad-\sum_{j=1}\epsilon_{ij}\Big{]}\phi_{i}(\mathbf{r}_{1})=0\]
Since the expression holds for any \(\delta\psi_{i}^{*}\), the terms in the square bracket must be zero. Hence,
\[\hat{h}_{i}(\mathbf{r}_{1})+\sum_{j=1}\int d\mathbf{r}_{2}\psi_{j}^{*}( \mathbf{r}_{2})(r_{12}^{-1}-\hat{K}(\mathbf{r}_{1},\mathbf{r}_{2}))\mathcal{P}_{2}\phi_{j}( \mathbf{r}_{2}) \tag{58}\] \[\quad+\frac{1}{2}\sum_{j,k=1}\int\int d\mathbf{r}_{2}d\mathbf{r}_{3}\psi _{j}^{*}(\mathbf{r}_{2})\psi_{k}^{*}(\mathbf{r}_{3})\hat{L}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{ r}_{3})\mathcal{P}_{3}\phi_{j}(\mathbf{r}_{2})\phi_{k}(\mathbf{r}_{3})\] \[\quad-\sum_{j=1}\epsilon_{ij}=0\]
for all \(i\). Rearrangement of the equation recovers the equation given in equation 10.
## Appendix B Correlator parameters
### Correlator parameters for closed shell atoms (SOM8)
The correlator parameters for closed shell ions (Table 5) found from using SOM minimisation on a set of 8 parameters are tabulated in Table 9.
### Correlator parameters for helium from different initial guesses (SOM8)
The correlator parameters for helium (Table 6) found from using different initial guesses are tabulated in Table 10.
### Correlator parameters for helium-like ions (SOM18)
The correlator parameters for helium-like ions found from using SOM minimisation on a set of 18 parameters are tabulated in Tables 11 and 12. The parameters for helium were used as a starting guess for the helium-like systems in Table 7.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(m\) & \(n\) & \(o\) & He & Be & C & O & Ne \\ \hline
0 & 0 & 1 & 0.50000 & 0.50000 & 0.50000 & 0.50000 \\
0 & 0 & 2 & 0.29627 & 0.29152 & 0.29071 \\
0 & 0 & 3 & 0.12627 & 0.14407 & 0.15048 \\
0 & 0 & 4 & 0.04485 & 0.06602 & 0.07330 \\
1 & 0 & 0 & 0.99565 & -1.00073 & -1.99929 \\
2 & 0 & 0 & -0.00338 & -0.00177 & -0.00116 \\
3 & 0 & 0 & -0.00200 & -0.00159 & -0.00134 \\
4 & 0 & 0 & -0.00110 & -0.00101 & -0.00100 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Correlator parameters for helium found from using SOM minimisation with different initial guesses. These parameters are used to obtain the transcorrelated energies in table 6.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(m\) & \(n\) & \(o\) & \({\rm H}^{-}\) & He & \({\rm Li}^{+}\) & \({\rm Be}^{2+}\) & \({\rm B}^{3+}\) \\ \hline
0 & 0 & 1 & 0.50000 & 0.50000 & 0.50000 & 0.50000 \\
\hline \hline \end{tabular}
\end{table}
Table 11: Correlator parameters for helium-like ions (\({\rm H}^{-}\) to \({\rm B}^{3+}\)) found from using SOM minimisation.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(m\) & \(n\) & \(o\) & He & Be & C & O & Ne \\ \hline
0 & 0 & 1 & 0.50000 & 0.50000 & 0.50000 & 0.50000 \\
0 & 0 & 2 & 0.42476 & 0.21162 & 0.03755 & -0.45608 & -0.75272 \\
0 & 0 & 3 & -0.23302 & 0.30937 & -0.18384 & 0.67271 & 1.31436 \\
0 & 0 & 4 & 0.28445 & -0.21062 & 0.42119 & 0.04159 & -0.36159 \\
1 & 0 & 0 & 0.00151 & 0.01082 & -0.00952 & -0.02441 & -0.00979 \\
2 & 0 & 0 & -0.16865 & -0.11872 & -0.12690 & -0.12667 & -0.14499 \\
3 & 0 & 0 & -0.34421 & -0.17257 & -0.05827 & -0.01992 & -0.00973 \\
4 & 0 & 0 & -0.54727 & 0.16579 & 0.08346 & 0.02964 & 0.08552 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Correlator parameters for closed shell atoms found from using SOM minimisation. These parameters are used to obtain the transcorrelated energies in Table 5.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \(m\) & \(n\) & \(o\) & C\({}^{4+}\) & N\({}^{5+}\) & O\({}^{6+}\) & F\({}^{7+}\) & Ne\({}^{8+}\) \\ \hline
\hline \hline \end{tabular}
\end{table}
Table 12: Correlator parameters for helium-like ions (C\({}^{4+}\) to Ne\({}^{8+}\)) found from using SOM minimisation.
|
2304.07236
|
Learning Perceptive Bipedal Locomotion over Irregular Terrain
|
In this paper we propose a novel bipedal locomotion controller that uses
noisy exteroception to traverse a wide variety of terrains. Building on the
cutting-edge advancements in attention based belief encoding for quadrupedal
locomotion, our work extends these methods to the bipedal domain, resulting in
a robust and reliable internal belief of the terrain ahead despite noisy sensor
inputs. Additionally, we present a reward function that allows the controller
to successfully traverse irregular terrain. We compare our method with a
proprioceptive baseline and show that our method is able to traverse a wide
variety of terrains and greatly outperforms the state-of-the-art in terms of
robustness, speed and efficiency.
|
Bart van Marum, Matthia Sabatelli, Hamidreza Kasaei
|
2023-04-14T16:33:42Z
|
http://arxiv.org/abs/2304.07236v1
|
# Learning Perceptive Bipedal Locomotion over Irregular Terrain
###### Abstract
In this paper we propose a novel bipedal locomotion controller that uses noisy exteroception to traverse a wide variety of terrains. Building on the cutting-edge advancements in attention based belief encoding for quadrupedal locomotion, our work extends these methods to the bipedal domain, resulting in a robust and reliable internal belief of the terrain ahead despite noisy sensor inputs. Additionally, we present a reward function that allows the controller to successfully traverse irregular terrain. We compare our method with a proprioceptive baseline and show that our method is able to traverse a wide variety of terrains and greatly outperforms the state-of-the-art in terms of robustness, speed and efficiency.
## I Introduction
Humanoid robots hold immense potential as a general-purpose platform for various applications due to their compatibility with human-designed environments. This compatibility enables humanoid robots to seamlessly work alongside humans, reducing the need for expensive modifications to existing infrastructure. Despite the benefits, creating a fully functional and general-purpose humanoid robot still poses several technical challenges, including locomotion over irregular and previously unseen terrain. To address this challenge, the present work focuses on developing a robust and reliable bipedal locomotion controller.
Conventionally bipedal locomotion controllers are designed as complicated state machines and explicit dynamical models [1, 2]. However, these models lack in robustness, do not generalize to new scenarios or terrains without explicit modelling, and are laborious and complicated to develop and maintain. Moreover, adding exteroceptive capabilities to such methods is not straightforward.
In recent years, there has been a shift towards the use of Reinforcement Learning (RL) [3] based controllers for simulated [4, 5], as well as real-world bipedal robots [6, 7, 8, 9, 10]. These methods model the control policy as a neural network and train them to maximize some reward signal. This approach has proven to be robust, even in the face of motor malfunctions [7].
Many RL based approaches rely on the use of reference trajectories and imitation rewards to train a policy to produce a gait [6], limiting the policy to learn predetermined behaviour. However, recent work suggests that it is possible to learn a wide variety of different bipedal gaits in a single neural network by using periodic reward functions [8]. Despite the great progress in recent years for neural network based bipedal locomotion controllers, most approaches are compatible with flat terrain only. Some successful attempts have been made to learn blind bipedal locomotion controllers for more challenging terrains [10], however, these policies resort to more conservative foot trajectories with higher steps and are unable to avoid dangerous areas. Such blind strategies do not generalize well to a wide variety of unseen and irregular terrains and lack in efficiency, and therefore are not feasible for a fully capable humanoid robot.
In order for a legged locomotion controller to traverse any random previously unseen terrain robustly, it needs information about the world ahead. A controller that is based on proprioception only is limited to reactive and precautionary behaviour. Only a controller that has information about the world ahead can actively plan steps. Exteroceptive inputs are necessary, however using both proprioception and exteroception presents a challenge. Exteroceptive sensors such as cameras, Lidar, or radar often produce spurious readings in cases such as reflection (water puddle), transparency (glass), deformation (snow), or fake obstacles (tall grass). This may lead locomotion policies based on such inputs to unnecessarily avoid certain areas or fail outright, and raises the question of how to handle disagreements between exteroceptive and proprioceptive information.
Fig. 1: In this work we develop a bipedal locomotion control policy based on both exteroception and proprioception that is able to traverse a wide variety of terrains. The first row shows Cassie walking over random terrain in the physics simulator. The second row shows noisy exteroceptive samples that are input to the policy at the same timesteps. The bottom shows the policy architecture during inference.
The field of quadrupedal locomotion control has shown great progress in recent years in learning controllers for navigating challenging terrain [11, 12]. Most notably, [13] shows that using a recurrent belief encoder with an attention mechanism, a neural network policy is able to learn when to trust and when not to trust the exteroceptive data. This allows the locomotion controller to utilize exteroceptive data when it is most useful, and fall back to proprioceptive data when it is not. Our work extends these methods to the bipedal domain, resulting in a robust and reliable internal belief of the terrain ahead despite noisy sensor inputs.
_Contribution:_ In this work we apply a recurrent attention based belief encoder to a bipedal locomotion policy to develop a robust controller capable of traversing irregular terrain based on noisy exteroception. We present a reward function leveraging prior work that allows the policy to learn traversing rough terrain. We perform a wide range of simulation based experiments to show that our controller is able to navigate a variety of terrains while outperforming state-of-the-art proprioceptive controllers in terms of robustness, efficiency, and speed. Figure 1 shows the architecture of our controller.
The remainder of the paper is organized as follows: The next section describes the methods used to train our controller. Section III describes the training process and the experiments performed to evaluate the performance of our controller, including the results. Finally, Section IV concludes the paper.
## II Methods
### _Learning Setup_
The main goal is to develop a robust bipedal locomotion controller that is able to navigate irregular terrain while following a command. In order to do so we use privileged learning [14] to distill a policy that is able to work with potentially noisy and spurious exteroceptive observations. Previous work has shown that directly learning the desired behavior over rough terrain with RL does not converge within reasonable time budgets [11]. First a teacher policy with access to perfect, noise free observations is trained in simulation through reinforcement learning to traverse a wide range of different terrains. We then train a student policy to imitate the behaviour of the teacher policy, but without privileged information and noisy inputs. We use Proximal Policy Optimization (PPO) [15] to train our policies, as PPO has shown to yield good results in bipedal locomotion control [6, 7, 8].
### _State and Action Representation_
We define three observations \(\mathbf{o}_{t}^{p},o_{t}^{e},o_{t}^{n}\). Here \(\mathbf{o}_{t}^{p}\in\mathbb{R}^{44}\) is the proprioceptive input, consisting of motor positions, motor speeds, joint positions, joint speeds, pelvis orientation, pelvis angular velocity, user commanded velocity \(v_{c}\) and clock inputs \(i_{t}\). The user commanded velocity \(v_{c}\) is defined as the pair \((\mathbf{v}_{cmd},\omega_{cmd})\) where \(\mathbf{v}_{cmd}\in\mathbb{R}^{2}\) represents the commanded velocity in the \(x\) and \(y\) directions and \(\omega_{cmd}\) represents the commanded angular velocity around the \(z\) axis. The clock input \(i_{t}\) is defined as the pair \(\sin(2\pi(\phi))\) and \(\sin(2\pi(\phi+0.5))\), where \(\phi\) is defined as \(\phi=(t\mod T)/T\), with \(t\) denoting the current timestep, and \(T\) a user defined gait period in terms of timesteps. Although it has been noted in past research [9] that clock inputs are necessary, we found in preliminary experiments that a policy without clock inputs learns a gait, with no meaningful effect on the reward. However, we have not yet performed a thorough investigation on this matter.
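For concreteness, the clock inputs can be computed as in the following sketch; the gait period used in the example call is a placeholder, not a tuned value.

```python
import numpy as np

def clock_inputs(t: int, T: int) -> np.ndarray:
    """Clock inputs i_t = (sin(2*pi*phi), sin(2*pi*(phi + 0.5))) with phi = (t mod T)/T."""
    phi = (t % T) / T
    return np.array([np.sin(2 * np.pi * phi), np.sin(2 * np.pi * (phi + 0.5))])

print(clock_inputs(t=10, T=40))   # e.g. a 40-timestep gait period (1 s at the 40 Hz policy rate)
```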
Additionally, \(o_{t}^{e}=(\mathbf{e}_{t}^{l},\mathbf{e}_{t}^{e})\) is the pair of noiseless exteroceptive observations for the left and right feet. To obtain the observations the terrain height is sampled with a sampling pattern centered at the location of the respective foot. The sampling pattern consists of 318 points spaced circularly. Figure 2 shows the height sampling taking place in the simulation environment.
Finally, \(o_{t}^{n}=(\mathbf{n}_{t}^{l},\mathbf{n}_{t}^{r})\) is the pair of noisy exteroceptive observations. Sampling is similar to \(\mathbf{o}_{t}^{e}\); however, noise is applied to the sampling pattern coordinates and to the sampled values. Further details on noise generation are discussed in section II-E.
The action \(\mathbf{a}_{t}\in\mathbb{R}^{10}\) represents the PD targets for the actuators in the robot model. Previous research has shown that PD targets are an effective action parametrization for learning locomotion [16]. The robot PD controller runs at 2 kHz, whereas actions are sampled from the policy at a rate of 40 Hz.
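A minimal sketch of how such PD targets are typically tracked is shown below; the gains and the zero joint-velocity target are illustrative assumptions, not the controller settings used on the robot.

```python
import numpy as np

KP, KD = 50.0, 2.0                 # illustrative PD gains (assumed)
N_PD_PER_POLICY = 2000 // 40       # 2 kHz PD loop, 40 Hz policy -> 50 inner steps

def pd_torque(a_t, q, qdot):
    """Torque tracking the policy's PD target a_t with a zero joint-velocity target."""
    return KP * (a_t - q) - KD * qdot

a_t = np.zeros(10)                 # one policy action (10 actuators)
q, qdot = np.zeros(10), np.zeros(10)
for _ in range(N_PD_PER_POLICY):
    torque = pd_torque(a_t, q, qdot)
    # ... apply `torque` in the simulator and read back updated q, qdot ...
```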
### _Policy Architecture_
The teacher and student policy architectures are illustrated in Figure 3. In this section, we will provide a detailed description of these architectures.
#### Ii-C1 Teacher Policy
The teacher policy \(\pi^{t}\) consists of an exteroceptive encoder \(f_{e}\) and an LSTM [17]. The encoder \(f_{e}\) consists of 3 fully connected layers of size {256, 160, 96} and the LSTM has two layers of 256 nodes. The teacher policy receives the observation \(o_{t}^{t}=(\mathbf{o}_{t}^{p},\mathbf{o}_{t}^{e})\). The exteroceptive encoder \(f_{e}\) receives both exteroceptive observations \(\mathbf{e}_{t}^{l},\mathbf{e}_{t}^{r}\) that are in \(o_{t}^{e}\) and encodes them separately into the latent vectors \(\mathbf{I}_{t}^{e^{l}}\) and \(\mathbf{I}_{t}^{e^{r}}\), which are concatenated into the latent vector \(\mathbf{I}_{t}^{e}\in\mathbb{R}^{200}\). The LSTM receives the concatenation of \(\mathbf{o}_{t}^{p}\) and \(\mathbf{I}_{t}^{e}\) and outputs an action \(\mathbf{a}_{t}\).
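A minimal PyTorch sketch of this teacher architecture follows. The per-foot input size, the ELU activations and the linear action head are assumptions; only the encoder widths {256, 160, 96} and the two-layer LSTM of width 256 are taken from the text.

```python
import torch
import torch.nn as nn

class TeacherPolicy(nn.Module):
    def __init__(self, n_height=318, n_prop=44, n_act=10):
        super().__init__()
        # per-foot exteroceptive encoder f_e with layer sizes {256, 160, 96}
        self.encoder = nn.Sequential(
            nn.Linear(n_height, 256), nn.ELU(),
            nn.Linear(256, 160), nn.ELU(),
            nn.Linear(160, 96), nn.ELU(),
        )
        self.lstm = nn.LSTM(input_size=n_prop + 2 * 96,
                            hidden_size=256, num_layers=2, batch_first=True)
        self.head = nn.Linear(256, n_act)

    def forward(self, o_p, e_left, e_right, hidden=None):
        # encode each foot's height samples separately, then concatenate
        l_e = torch.cat([self.encoder(e_left), self.encoder(e_right)], dim=-1)
        out, hidden = self.lstm(torch.cat([o_p, l_e], dim=-1), hidden)
        return self.head(out), hidden

policy = TeacherPolicy()
a_t, _ = policy(torch.zeros(1, 1, 44), torch.zeros(1, 1, 318), torch.zeros(1, 1, 318))
```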
Fig. 2: (a) A close up view of the exteroceptive simulator. The red dots represent the height sample for the right foot, and the green dots for the left foot. Samples are taken with the sampling pattern centered around the \(xy\) position of each foot and rotated to match pelvis orientation. (b) Shows a detailed plot of the pattern used to sample heights from the terrain.
#### Iii-C2 Student Policy
The student policy \(\pi^{s}\) takes in the noisy observation \(o_{t}^{s}=(\mathbf{o}_{t}^{p},\mathbf{o}_{t}^{n})\) and has a partially similar architecture as the teacher policy, using the same encoder \(f_{e}\) and LSTM. An added component is a recurrent belief encoder, which receives the exteroceptive latent vector \(\mathbf{I}_{t}^{e}\) and the proprioceptive observation \(\mathbf{o}_{t}^{p}\) and outputs a belief vector \(\mathbf{b}_{t}\in\mathbb{R}^{192}\). The belief vector is then concatenated with the proprioceptive observation \(\mathbf{o}_{t}^{p}\) and fed into the LSTM, which in turn outputs an action \(\mathbf{a}_{t}\).
The main aspect of the student policy is the recurrent belief encoder, an approach introduced by [13], which is intended to take the proprioception and noisy exteroception and develop an internal representation of what the terrain looks like. In order to train this internal representation a belief decoder is added to the policy, which takes as input the hidden state of the recurrent belief encoder. The belief decoder outputs a reconstruction of the exteroceptive inputs, which is trained to minimize the difference with the noise free exteroceptive observation \(o_{t}^{e}\). This method encourages the internal hidden state of the belief encoder to represent a representation of the outside world that is as accurate as possible, despite noisy inputs. Additionally, the belief encoding system is fitted with an attention mechanism, such that the policy is able to learn when exteroceptive data is not useful, and rely on proprioception instead to construct the belief. For more detail about the belief encoder and decoder we refer the reader to [13].
The student policy is trained with both an action imitation loss, and an observation reconstruction loss to encourage the internal belief representation of the outside world to be as accurate as possible.
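A minimal sketch of the student objective, assuming simple mean-squared-error terms and an unspecified relative weighting:

```python
import torch.nn.functional as F

def student_loss(a_student, a_teacher, height_recon, o_e_clean, w_recon=1.0):
    """Imitation loss against the teacher action plus reconstruction loss of the
    belief decoder output against the noise-free height samples; the weight
    w_recon is a placeholder."""
    imitation = F.mse_loss(a_student, a_teacher)
    reconstruction = F.mse_loss(height_recon, o_e_clean)
    return imitation + w_recon * reconstruction
```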
### _Terrain Generation_
We use a linear curriculum to ramp terrain generation intensity. The ramp starts after the policy has learned to walk on flat ground. All generated terrains are modelled as a height map in meters and multiplied with the curriculum factor \(c_{t}\in[0,1]\).
We define five different terrain modes, as shown in Figure 4. The first is _hills_, which is modelled as a sum of a low frequency and a higher frequency Perlin noise [18]. The generated values are normalized to a range \([0,0.8]\). The second terrain mode is _edges_, which consists of a Perlin noise that has been quantized to two levels \(\{0,h\sim\mathcal{U}(0.15,0.25)\}\). The third mode is _squares_ which consists of a grid of squares with sides \(d\in[0.4,0.6]\) of random height \(h\in[0,0.4]\). The fourth mode is _quantized hills_ which is Perlin noise that has been quantized to discrete levels with a random step size \(h\in[0.12,0.18]\). The fifth and final mode of terrain generation is _stairs_, consisting of alternating ascending and descending staircases. To generate a staircase we randomly select a run \(d\in[0.3,0.4]\) and rise \(r\in[0.1,0.22]\) for 10 equal stairs.
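The sketch below generates height maps in the spirit of three of these modes. A cheap bilinearly interpolated value noise is used as a stand-in for Perlin noise, and the grid and cell sizes are placeholders; it illustrates the construction rather than reproducing the generator used for training.

```python
import numpy as np

rng = np.random.default_rng(0)

def value_noise(shape, cell):
    """Bilinearly interpolated random lattice values; a stand-in for Perlin noise."""
    h, w = shape
    lat = rng.random((h // cell + 2, w // cell + 2))
    ys, xs = np.mgrid[0:h, 0:w] / cell
    y0, x0 = ys.astype(int), xs.astype(int)
    fy, fx = ys - y0, xs - x0
    top = lat[y0, x0] * (1 - fx) + lat[y0, x0 + 1] * fx
    bot = lat[y0 + 1, x0] * (1 - fx) + lat[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

shape = (128, 128)
c_t = 1.0                                   # curriculum factor in [0, 1]
# hills: low- plus high-frequency noise, scaled into [0, 0.8]
hills = 0.8 * (0.7 * value_noise(shape, 32) + 0.3 * value_noise(shape, 8))
# edges: noise quantised to two levels {0, h}, h ~ U(0.15, 0.25)
edges = np.where(value_noise(shape, 24) > 0.5, rng.uniform(0.15, 0.25), 0.0)
# quantized hills: noise rounded to multiples of a random step h ~ U(0.12, 0.18)
step = rng.uniform(0.12, 0.18)
quantized = np.round(value_noise(shape, 32) * 0.4 / step) * step
height_map = c_t * hills                    # every mode is scaled by the curriculum factor
```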
### _Randomization_
Although we only focus on simulation based experiments in this work, previous work has shown that policies trained in simulation are able to bridge the sim-to-real gap given proper domain randomization [9, 19]. Therefore we randomize joint damping, body part masses and friction coefficients at the start of each episode. We use the same parametrization as presented in [10].
Furthermore, we randomize the velocity command to expose the policy to a range of different velocity commands during training. At the start of each episode and at one random timestep during each episode a new velocity command is sampled. The probability distribution for the velocity commands is shown in Table I.
To obtain the noisy exteroceptive student observation \(o_{t}^{n}\) we apply noise to the noise-free exteroceptive teacher observation \(o_{t}^{e}\). We sample noises for the sampling coordinates \(x,y\) and the sampled height values \(z\) at the episode, foot and timestep level. Additionally, we select random points on the height sampling pattern and apply a large noise to simulate outliers. For the parametrization of these noises we leverage the three modes {_nominal_, _offset_, _noisy_} defined in prior work by [13], designed to mimic the noise of real exteroceptive sensors.
### _Reward_
For the teacher policy to learn to follow arbitrary commands over arbitrary terrain we use a number of reward terms that are divided into three main categories: (1) _gait_, (2) _command
Fig. 3: The top figure shows the teacher policy architecture which is trained with PPO. The bottom figure shows the student policy architecture which is trained to both imitate the action output of the teacher policy, and to denoise the noisy exteroceptive input.
following_ and (3) _smoothness_. The full reward function we use is defined as:
\[r=(0.25r_{frc}+0.25r_{vel}+0.2)\cdot c_{r}+(r_{air}+0.1r_{one})\cdot(1-c_{r})\\ +0.2r_{v,xy}+0.2r_{\omega,z}+0.05r_{lov}+0.05r_{fo}\\ +0.05r_{pm}+0.05r_{po}+0.025r_{t}+0.025r_{a}\]
of which the components are explained in the next subsections.
#### Iii-B1 Gait rewards
We use a combination of two reward methods to learn a gait. The first is a clock-based reward function that oscillates between swing and stance modes within a gait period, as introduced in [8]. In the swing phase the reward function penalizes foot forces, while in the stance phase foot velocity is penalized. By offsetting the clock functions for the left and right legs, a gait can be learned. Figure 5 shows the gait clocks. The foot force reward component \(r_{frc}\) is defined as:
\[r_{frc}=\tanh(\pi F_{l}k_{frc,l})+\tanh(\pi F_{r}k_{frc,r})\]
where \(F_{r}\) and \(F_{l}\) represent the normalized norm of the foot forces on the respective foot, and \(k_{frc,l}\) and \(k_{frc,r}\) are the respective foot force gait clocks. The foot velocity reward component \(r_{vel}\) is defined as:
\[r_{vel}=\tanh(\pi v_{l}k_{vel,l})+\tanh(\pi v_{r}k_{vel,r})\]
where \(v_{r}\) and \(v_{l}\) represent the normalized norm of the foot velocities of the respective foot, and \(k_{vel,l}\) and \(k_{vel,r}\) are the respective foot velocity gait clocks.
We find that this periodic reward function produces good quality gaits on flat terrain, however performance on rough terrain is less satisfactory. We hypothesize that this is due to the fixed cadence embedded in the gait clocks, restricting the policy to a fixed gait period, and that a more flexible reward function is better suited for rough terrain.
The second is a more flexible reward function that simply rewards foot airtime [12] for both feet. The foot airtime reward component \(r_{air}\) is defined as:
\[r_{air}=\sum_{f=0}^{2}(\mathbf{t}_{air,f}-0.5)*\mathbf{1}_{first\ contact,f}\]
where \(\mathbf{t}_{air}\in\mathbb{R}^{2}\) denotes the cumulative airtime during the swing phase of both feet, and \(\mathbf{1}_{first\ contact}\in\mathbb{R}^{2}\) holds binary values indicating whether the swing phase is ended by contact. To prevent the policy from learning to simply jump, a component \(r_{one}\) is added to reward standing on one foot:
\[r_{one}=\mathbf{1}_{single\ contact}\]
where \(\mathbf{1}_{single\ contact}\) is a binary value indicating whether the robot is standing on one foot. We find that this reward function leads to more stable gaits on rough terrain, but convergence is slower. Therefore we start the training process with the clock based reward function until a satisfactory gait has been learned on flat terrain. We then switch to the airtime based reward function by setting the reward curriculum factor \(c_{r}\) from 1 to 0.
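A sketch of how these gait terms combine is given below. The clock shapes of Figure 5 are not reproduced; a simple cosine clock in \([-1,1]\) and an anti-phased velocity clock are used as stand-ins, so this illustrates the structure of the reward rather than the exact functions used in training.

```python
import numpy as np

def gait_clock(phi, offset=0.0):
    """Illustrative smooth clock in [-1, 1]; the clocks of Fig. 5 differ in shape."""
    return np.cos(2 * np.pi * (phi + offset))

def gait_reward(phi, F_l, F_r, v_l, v_r, t_air, first_contact, single_contact, c_r):
    # clock-based terms r_frc, r_vel; left/right stance offset by half a period
    k_frc_l, k_frc_r = gait_clock(phi), gait_clock(phi, 0.5)
    k_vel_l, k_vel_r = -k_frc_l, -k_frc_r            # assumed anti-phased velocity clocks
    r_frc = np.tanh(np.pi * F_l * k_frc_l) + np.tanh(np.pi * F_r * k_frc_r)
    r_vel = np.tanh(np.pi * v_l * k_vel_l) + np.tanh(np.pi * v_r * k_vel_r)
    # airtime-based terms r_air, r_one
    r_air = float(np.sum((np.asarray(t_air) - 0.5) * np.asarray(first_contact)))
    r_one = float(single_contact)
    # blend between the two gait formulations with the reward curriculum factor c_r
    return (0.25 * r_frc + 0.25 * r_vel + 0.2) * c_r + (r_air + 0.1 * r_one) * (1 - c_r)

print(gait_reward(0.1, F_l=0.8, F_r=0.0, v_l=0.0, v_r=0.6,
                  t_air=[0.0, 0.7], first_contact=[0, 1],
                  single_contact=True, c_r=1.0))
```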
#### Iii-B2 Command following rewards
We employ similar command following rewards as [11, 13] with the goal of maximizing velocity in a given direction. The velocity reward component \(r_{v,xy}\) is defined as:
\[r_{v,xy}=\begin{cases}\exp(-2.5\cdot\left\|\mathbf{v}_{xy}\right\|^{2})&||\mathbf{v}_{cmd}||=0\\ 1&\mathbf{v}_{cmd}\cdot\mathbf{v}_{xy}\geq 1\\ \exp(-2\cdot(\mathbf{v}_{cmd}\cdot\mathbf{v}_{xy}-1)^{2})&else\end{cases}\]
where \(\mathbf{v}_{xy}\in\mathbb{R}^{2}\) represents the linear velocity in the \(xy\) plane. The angular velocity reward component \(r_{\omega,z}\) is defined as:
\[r_{\omega,z}=\begin{cases}\exp(-5\cdot\omega_{z}^{2})&\omega_{cmd}=0\\ 1&\omega_{cmd}\cdot\omega_{z}\geq 1\\ \exp(-2\cdot(\omega_{cmd}\cdot\omega_{z}-1)^{2})&else\end{cases}\]
Fig. 4: The five different terrain modes used for training: (a) _hills_, (b) _edges_, (c) _quantized hills_, (d) _squares_, (e) _stairs_.
Fig. 5: The gait clocks \(k_{frc}(\phi)\) and \(k_{vel}(\phi)\) for both feet. The stance phases of both feet are offset by \(\phi/2\), but overlap slightly, producing a walking gait.
where \(\omega_{z}\) represents the pelvis angular velocity. The linear orthogonal velocity offset reward component \(r_{lov}\) is defined as:
\[r_{lov}=\exp(-5\cdot\left\|\mathbf{v}_{xy}-\mathbf{v}_{cmd}\cdot\mathbf{v}_{xy} \right\|)\]
where \(\mathbf{v}_{xy}\in\mathbb{R}^{2}\) again represents pelvis velocity in the \(xy\) plane. It is intended to penalize linear velocities orthogonal to the commanded velocity.
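A sketch of the command-following terms is given below. In \(r_{lov}\) the product \(\mathbf{v}_{cmd}\cdot\mathbf{v}_{xy}\) is interpreted as the projection of the velocity onto the commanded direction, which is a reading of the expression rather than something stated explicitly above.

```python
import numpy as np

def r_v_xy(v_xy, v_cmd):
    """Linear-velocity tracking reward."""
    v_xy, v_cmd = np.asarray(v_xy, float), np.asarray(v_cmd, float)
    if np.linalg.norm(v_cmd) == 0:
        return np.exp(-2.5 * np.linalg.norm(v_xy) ** 2)
    proj = float(np.dot(v_cmd, v_xy))
    return 1.0 if proj >= 1.0 else np.exp(-2.0 * (proj - 1.0) ** 2)

def r_omega_z(w_z, w_cmd):
    """Angular-velocity tracking reward."""
    if w_cmd == 0:
        return np.exp(-5.0 * w_z ** 2)
    prod = w_cmd * w_z
    return 1.0 if prod >= 1.0 else np.exp(-2.0 * (prod - 1.0) ** 2)

def r_lov(v_xy, v_cmd):
    """Penalise the velocity component orthogonal to the commanded direction."""
    v_xy, v_cmd = np.asarray(v_xy, float), np.asarray(v_cmd, float)
    if np.linalg.norm(v_cmd) == 0:
        return np.exp(-5.0 * np.linalg.norm(v_xy))
    u = v_cmd / np.linalg.norm(v_cmd)
    return np.exp(-5.0 * np.linalg.norm(v_xy - np.dot(u, v_xy) * u))
```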
#### Ii-B3 Smoothness rewards
The foot orientation reward component \(r_{fo}\) is defined as:
\[r_{fo}=\exp(-1.5\cdot(\mathbf{\hat{z}}\cdot\boldsymbol{\psi}_{lf}+\mathbf{\hat {z}}\cdot\boldsymbol{\psi}_{rf}))\cdot(1-c_{t})+c_{t}\]
where \(\mathbf{\hat{z}}\in\mathbb{R}^{3}\) represents the unit vector in the \(z\) direction and \(\boldsymbol{\psi}_{lf},\boldsymbol{\psi}_{rf}\in\mathbb{R}^{3}\) represent the foot orientation vectors pointing along the length of both feet. This reward component encourages the policy to keep the feet flat on the ground, but in any planar direction. Additionally, the reward is gradually shifted to a constant reward component as the curriculum is ramped up to allow the policy to adapt to terrains where a non flat position might be more beneficial. The pelvis motion reward component \(r_{pm}\) is defined as:
\[r_{pm}=\exp(-(v_{z}^{2}+\omega_{y}^{2}+\omega_{x}^{2}))\]
and is intended to penalize pelvis motions in directions not part of the command. The pelvis orientation reward component \(r_{po}\) is defined as:
\[r_{po}=\exp(-3\cdot(\left|\psi_{x}\right|+\left|\psi_{y}\right|))\]
where \(\psi_{x}\) and \(\psi_{y}\) represent pelvis orientation and is intended to encourage the policy to keep the pelvis level. The torque reward component \(r_{t}\) is defined as:
\[r_{t}=\exp(-0.02\cdot\overline{\left|\boldsymbol{\tau}\right|})\]
where \(\boldsymbol{\tau}\) represents the torque vector exerted by the actuators, with the aim to reduce energy consumption. The action reward component \(r_{a}\) is defined as:
\[r_{a}=\exp(-5\cdot\overline{\left|\mathbf{a}_{t}-\mathbf{a}_{t-1}\right|})\]
where \(\mathbf{a}_{t}\) represents the action vector at time \(t\) and is intended to penalize large changes in the action vector in order to improve smoothness and stability.
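The smoothness terms can be assembled as in the sketch below, using the weights from the total reward given earlier; the inputs are the quantities named in the text.

```python
import numpy as np

def smoothness_reward(z_dot_lf, z_dot_rf, v_z, w_x, w_y, psi_x, psi_y,
                      torques, a_t, a_prev, c_t):
    """z_dot_lf / z_dot_rf are the z-components z_hat . psi of the foot orientation vectors."""
    r_fo = np.exp(-1.5 * (z_dot_lf + z_dot_rf)) * (1 - c_t) + c_t
    r_pm = np.exp(-(v_z ** 2 + w_y ** 2 + w_x ** 2))
    r_po = np.exp(-3.0 * (abs(psi_x) + abs(psi_y)))
    r_t = np.exp(-0.02 * np.mean(np.abs(torques)))
    r_a = np.exp(-5.0 * np.mean(np.abs(np.asarray(a_t) - np.asarray(a_prev))))
    return 0.05 * r_fo + 0.05 * r_pm + 0.05 * r_po + 0.025 * r_t + 0.025 * r_a
```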
## III Experimental Results
### _Simulation_
Training and experimentation are performed in simulation on the Cassie robot [20]. We use the Mujoco [21] physics simulator with the _cassie-mujoco-sim_ environment [22].
### _Training_
For teacher policy training the recurrent PPO algorithm from StableBaselines3 (SB3) [23] is used. In order to achieve a 7 times speedup in terms of timesteps per second we modified SB3 to use batches of whole sequences. The hyperparameters for PPO are listed in Table IIIa. Teacher policy training takes around 36 hours for \(60\times 10^{6}\) timesteps on a single 12-core V100 node.
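A minimal sketch of such a training setup is shown below, using the RecurrentPPO implementation from the SB3 companion package sb3_contrib; the environment and the hyperparameter values are placeholders, not the Cassie environment or the values of Table IIIa.

```python
import gymnasium as gym
from sb3_contrib import RecurrentPPO

env = gym.make("Pendulum-v1")          # stand-in environment; training uses a Cassie MuJoCo sim
model = RecurrentPPO(
    "MlpLstmPolicy", env,
    n_steps=2048, batch_size=256, learning_rate=3e-4,   # placeholder hyperparameters
    verbose=1,
)
model.learn(total_timesteps=10_000)    # the teacher is trained for roughly 60e6 timesteps
```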
The student policy is trained on a dataset of \(10^{6}\) timesteps sampled from the teacher policy, with the hyperparameters listed in Table IIIb. The student policy is implemented in PyTorch [24], and training takes around 1 hour on a high end laptop.
In order to compare and quantify the performance of our exteroceptive student policy we train a baseline policy \(\pi^{b}\). This baseline policy observes proprioception \(\mathbf{o}_{t}^{p}\) only and has the same architecture as the teacher policy but without the exteroceptive encoder, similar to the current state-of-the-art [8, 10]. The baseline policy is trained on the same terrain and curriculum as the exteroceptive student policy.
### _Experiments_
We conduct a number of simulation based experiments to evaluate the performance of our exteroceptive policy against the proprioceptive baseline policy. In all experiments our policy has access to _nominal_ noise exteroception, and is commanded to walk forward, unless otherwise mentioned. For all results 100 episodes were attempted and the results averaged. We include a supplementary video providing a visual representation of the experiments. 1
Footnote 1: [https://youtu.be/B3Qr-7ZZHZQ](https://youtu.be/B3Qr-7ZZHZQ)
#### Iii-C1 Maximum speed over various terrains
We record mean speed, average actuator torque and the success rate for all terrains in the training curriculum. A success is defined as the episode completing at 300 timesteps without the robot falling over. The results are shown in Figure 6. Our policy outperforms the baseline policy on all terrains in terms of success rate, with near perfect scores on all terrains but stairs. Although the proprioceptive baseline is able to achieve a near 100% success rate on hills and flat terrain, its performance is worse on quantized hills, edges and squares. The largest outperformance is found in the stairs terrain, and upon further investigation we find that most failures occur during the descent of the staircase. This underperformance of the baseline policy is likely due to the large vertical change in the stairs, quantized hills, edges and squares terrains, which the robot is unable to anticipate due to the lack of exteroceptive information. To investigate this further we conduct a more detailed analysis of the staircase terrain in the next experiments.
Our exteroceptive policy is able to achieve higher speeds than the baseline policy on all terrains but stairs. This is due to the fact that the exteroceptive policy is able to take more decisive actions, as it can gather information about the environment in advance. The baseline policy is forced to resort to more conservative gaits, at slower speeds and lifting feet higher. The slow speed on stairs for our exteroceptive policy is caused by a combination of caution and the policy veering slightly off course during stair descent. This behaviour emerged from the training process and we hypothesize that, although undesirable, it is beneficial for the policy success, as it effectively lengthens the run of the stairs, making it easier to find a suitable foothold.
Applied torque is lower for our policy than the baseline policy on all terrains, indicating that less energy is consumed despite walking at higher speeds. Our exteroceptive policy is therefore more energy efficient. However, we did not investigate whether the difference is large enough to counteract the added computational cost of the exteroceptive encoder.
#### V-B2 Success rate over a step
We command both policies to go forward over flat terrain with a 1 meter wide step of varying heights at 1 meter from the starting position and record the success rate. The results are shown in Figure 7. Our policy outperforms the baseline policy, reliably traversing over steps up to 30 cm in height, while the baseline policy starts to fail at 20 cm. This experiment clearly shows the advantage of exteroceptive information in the case of a step, as the policy can anticipate the step and take a more reliable action.
#### V-B3 Success rate over a trench
We command both policies to go forward over a 50 cm deep trench of varying widths to gauge the effects of exteroception on the policy's ability to avoid dangerous areas. The results are shown in Figure 7. Surprisingly, the proprioceptive baseline outperforms our exteroceptive policy. We find that the exteroceptive policy does not avoid the trench, but tries to step inside it. We hypothesize that this is caused by the policy not encountering terrains with dangerous areas during training.
#### V-B4 Success rate for stair descent
As can be seen from the success rate results in Figure 6, the stair terrain is most difficult for both policies. Specifically, most failures occur when the robot is descending stairs. We believe this is due to Cassie's morphology, since the steep backwards incline of the legs makes them interfere with the stairs, effectively shortening the usable portion of a stair run and requiring more precision in foot placement. To investigate whether exteroception is beneficial in this environment we command both policies to go down staircases of 10 equal steps and vary the step heights while recording the success rates. All staircases use the same run of 35 cm. The results are shown in Figure 7. Our exteroceptive policy is able to walk down stairs with step heights up to 16 cm with near 100% success rate, while the baseline policy starts failing at 12 cm. This clearly shows the advantage of exteroception in this environment. Figure 8 shows both policies traversing down stairs of 12 cm step height.
#### V-B5 Command Following
To gauge the capability of the policy to locomote over irregular terrain according to a user
Fig. 6: Policy performance metrics for the five different terrain modes as well as flat terrain. Error bars denote the standard deviation of the mean. Our exteroceptive policy achieves higher success rates and speeds as it can take more decisive actions, thanks to its ability to gather information about the environment in advance.
Fig. 7: Success rate metrics for stepping over a single step (top), a trench (middle) and walking downstairs (bottom). In all cases the exteroceptive policy is able to traverse more challenging terrains than the proprioceptive baseline policy.
specified command we command the policy to walk in all directions over the squares terrain and record the speed. We choose the squares terrain for this experiment for its high density of changes in terrain height. Figure 9 shows the commanded linear and angular velocities and the policies responses to them. The policies are able to follow the commands in the \(x\) direction over rough terrain accurately, albeit with a delay. Similar results apply for speed achieved in the \(y\) direction, however the maximum speed is lower than commanded. We believe this is due to the low range of motion available in the roll actuators of the robot hips, requiring smaller steps and limiting velocity. We found that results are similar on flat terrain, confirming that terrain is not the limiting factor. Lastly, both policies are able to accurately follow the commanded angular velocity over the squares terrain. These results clearly show that the policies have learned to locomote over irregular terrain according to a user specified command, with our exteroceptive policy slightly outperforming in terms of speed.
#### Iii-C6 Dealing with spurious exteroceptive inputs
An important aspect of the belief encoder system is the ability to interpret noisy exteroceptive inputs along with proprioception to form an accurate belief of the environment. In order to demonstrate this capability we show a view of the exteroceptive reconstruction produced by the belief decoder in Figure 10. Our policy is able to denoise large, sometimes alternating offsets in the height map. Additionally, the belief encoder is able to eliminate outliers, while keeping an accurate representation of the terrain.
## IV Discussion
In this work we presented a method to learn a bipedal locomotion policy that can utilize exteroceptive observations to successfully traverse irregular terrain, while following a command. We have shown that such a policy observing
Fig. 8: The top two rows shows our exteroceptive policy successfully walking down a staircase of ten 12 cm steps, with the second row showing the noisy exteroceptive inputs at the same timesteps. The bottom row shows the proprioceptive baseline policy attempting to walk down the same staircase, with less success.
Fig. 9: The policy is commanded to walk in all directions over the squares terrain, and the commanded and actual linear and angular velocities are recorded. All plots represent the mean velocity of 100 independent trials. The shaded area denotes the standard deviation of the mean.
exteroception greatly outperforms a purely proprioception based locomotion policy when traversing irregular terrains. An exteroceptive policy is able to achieve this outperformance on terrain while at the same time increasing speed, stability and energy efficiency. Critically, we have shown that our policy has learned to achieve such behavior while relying on noisy exteroceptive observations, showcasing the robustness of the control policy.
Limitations of this work include the fact that our experiments have only been conducted in simulation. Although most methods used in this work have been proven to work on real robots in the past, future work should include testing on a real robot to confirm results. Another limitation we observe is the low speed of iteration in reward and curriculum design caused by the multiple day training time of the policy. Future work could explore the use of faster training methods to more effectively optimize the rewards and curriculum, such as presented in [12].
## Acknowledgment
This work was done as part of the Master's thesis of the first author. We thank the authors of the _cassie-mujoco-sim_ environment [22] for making their code publicly available. Finally, we thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster.
|
2308.00854
|
Training on Foveated Images Improves Robustness to Adversarial Attacks
|
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial
attacks -- subtle, perceptually indistinguishable perturbations of inputs that
change the response of the model. In the context of vision, we hypothesize that
an important contributor to the robustness of human visual perception is
constant exposure to low-fidelity visual stimuli in our peripheral vision. To
investigate this hypothesis, we develop R-Blur, an image transform that
simulates the loss in fidelity of peripheral vision by blurring the image and
reducing its color saturation based on the distance from a given fixation
point. We show that compared to DNNs trained on the original images, DNNs
trained on images transformed by R-Blur are substantially more robust to
adversarial attacks, as well as other, non-adversarial, corruptions, achieving
up to 25% higher accuracy on perturbed data.
|
Muhammad A. Shah, Bhiksha Raj
|
2023-08-01T21:40:30Z
|
http://arxiv.org/abs/2308.00854v1
|
# Training on Foveated Images Improves Robustness to Adversarial Attacks
###### Abstract
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial attacks - subtle, perceptually indistinguishable perturbations of inputs that change the response of the model. In the context of vision, we hypothesize that an important contributor to the robustness of human visual perception is constant exposure to low-fidelity visual stimuli in our peripheral vision. To investigate this hypothesis, we develop _R-Blur_, an image transform that simulates the loss in fidelity of peripheral vision by blurring the image and reducing its color saturation based on the distance from a given fixation point. We show that compared to DNNs trained on the original images, DNNs trained on images transformed by _R-Blur_ are substantially more robust to adversarial attacks, as well as other, non-adversarial, corruptions, achieving up to 25% higher accuracy on perturbed data.
## 1 Introduction
Deep Neural Networks (DNNs) are exceptionally adept at many computer vision tasks and have emerged as one of the best models of the biological neurons involved in visual object recognition (Yamins et al., 2014; Cadieu et al., 2014). However, their lack of robustness to subtle image perturbations to which humans are largely invariant (Szegedy et al., 2014; Geirhos et al., 2018; Dodge and Karam, 2017) has raised questions about their reliability in real-world scenarios. Of these perturbations, perhaps the most alarming are _adversarial attacks_, which are specially crafted distortions that can change the response of DNNs when added to their inputs (Szegedy et al., 2014; Ilyas et al., 2019) but are either imperceptible to humans or perceptually irrelevant enough to be ignored by them.
While several defenses have been proposed over the years to defend DNNs against adversarial attacks, only a few of them have sought inspiration from biological perception, which, perhaps axiomatically, is one of the most robust perceptual systems in existence. Instead, most methods seek to _teach_ DNNs to be robust to adversarial attacks by exposing them to adversarially perturbed images (Madry et al., 2018; Wong et al., 2019; Zhang et al., 2019) or random noise (Cohen et al., 2019; Fischer et al., 2020; Carlini et al., 2022) during training. While this approach is highly effective in making DNNs robust to the types of perturbations used during training, the robustness often does not generalize to other types of perturbations (Joos et al., 2022; Sharma and Chen, 2017; Schott et al., 2018). In contrast, biologically-inspired defenses seek to make DNNs robust by integrating into them biological mechanisms that would bring their behavior more in line with human/animal vision (Paiton et al., 2020; Bai et al., 2021; Dapello et al., 2020; Jonnalagadda et al., 2022; Luo et al., 2015; Gant et al., 2021; Vuyyuru et al., 2020). As these defenses do not require DNNs to be trained on any particular type of perturbation, they yield models that, like humans, are robust to a variety of perturbations (Dapello et al., 2020) in addition to adversarial attacks. For this reason, and in light of the evidence indicating a positive correlation between biological alignment and adversarial robustness (Dapello et al., 2020), we believe biologically inspired defenses are more promising in the long run.
Following this line of inquiry, we investigate the contribution of low-fidelity visual sensing that occurs in peripheral vision to the robustness of human/animal vision. Unlike DNNs, which sense visual stimuli at maximum fidelity at every point in their visual field, humans sense most of their visual field in low fidelity, i.e without fine-grained contrast [Stewart et al., 2020] and color information [Hansen et al., 2009]. In adults with fully developed vision, only a small region (less than 1% by area) of the visual field around the point of fixation [Kolb, 2005] can be sensed with high fidelity. In the remainder of the visual field (the periphery), the fidelity of the sensed stimuli decreases exponentially with distance from the fixation point [Dragoi and Tsuchitani, 2020]. This phenomenon is called "foveation". Despite this limitation, humans can accurately categorize objects that appear in the visual periphery into high-level classes [Ramezani et al., 2019]. Meanwhile, the presence of a small amount of noise or blurring can decimate the accuracy of an otherwise accurate DNN. Therefore, we hypothesize that the experience of viewing the world at multiple levels of fidelity, perhaps even at the same instant, causes human vision to be invariant to low-level features, such as textures, and high-frequency patterns, that can be exploited by adversarial attacks.
In this paper, we propose _R-Blur_ (short for Retina Blur), which simulates foveation by blurring the image and reducing its color saturation adaptively based on the distance from a given fixation point. This causes regions further away from the fixation point to appear more blurry and less vividly colored than those closer to it. Although adaptive blurring methods have been proposed as computational approximations of foveation [Deza and Konkle, 2021, Pramod et al., 2018, Wang and Cottrell, 2017], their impact on robustness has not been evaluated to the best of our knowledge. Furthermore, color sensitivity is known to decrease in the periphery of the visual field [Hansen et al., 2009, Johnson, 1986], yet most of the existing techniques do not account for this phenomenon.
Similar to how the retina preprocesses the visual stimuli before it reaches the visual cortex, we use _R-Blur_ to preprocess the input before it reaches the DNN. To measure the impact of _R-Blur_, we evaluate the object recognition capability of ResNets [He et al., 2016] trained with and without _R-Blur_ on three image datasets: CIFAR-10 [Krizhevsky et al.], Ecoset [Mehrer et al., 2021] and Imagenet [Russakovsky et al., 2015], under different levels of adversarial attacks and common image corruptions [Hendrycks and Dietterich, 2019]. We find that _R-Blur_ models retain most of the high classification accuracy of the base ResNet while being more robust. Compared to the base ResNet, _R-Blur_ models achieve 12-25 percentage points (pp) higher accuracy on perturbed images. Furthermore, the robustness achieved by _R-Blur_ is certifiable using the approach from [Cohen et al., 2019]. We also compare _R-Blur_ with two biologically inspired preprocessing defenses, namely _VOneBlock_ [Dapello et al., 2020], a fixed-parameter module that simulates the primate V1, and a non-uniform sampling-based foveation technique [Vuyyuru et al., 2020], which we refer to as _R-Warp_. We observe that _R-Blur_ induces a higher level of robustness, achieving accuracy up to 33 pp higher than _R-Warp_ and up to 15 pp higher than _VOneBlock_ against adversarial attacks. Compared to adversarial training (_AT_) [Madry et al., 2018, Wong et al., 2019] - the state-of-the-art non-biological defense, _R-Blur_ achieves up to 7 pp higher accuracy on average against non-adversarial corruptions of various types and strengths, thus indicating that the robustness of _R-Blur_ generalizes better to non-adversarial perturbations than _AT_. Finally, an ablation study showed that both adaptive blurring and desaturation contribute to the improved robustness of _R-Blur_.
## 2 Retinal Blur: An Approximation for Peripheral Vision
To simulate the loss in contrast and color sensitivity of human perception with increasing eccentricity, we propose _R-Blur_, an adaptive Gaussian blurring, and color desaturation technique. The operations performed by _R-Blur_, given an image and fixation point, are shown in Figure 1. First, _R-Blur_ adds Gaussian noise to the image to simulate stochastic firing rates of biological photoreceptors. It then creates color and greyscale copies of the image and estimates the acuity of color and grayscale vision at each pixel location, using distributions that approximate the relationship between distance from the fixation point (eccentricity) and visual acuity levels in humans. _R-Blur_ then applies _adaptive_ Gaussian blurring to both image copies such that the standard deviation of the Gaussian kernel at each pixel in the color and the grayscale image is a function of the estimated color and grayscale acuity at that pixel. Finally, _R-Blur_ combines the two blurred images in a pixel-wise weighted combination in which the weights of the colored and gray pixels are a function of their respective estimated acuity values. Below we describe some of the more involved operations in detail.
### Eccentricity Computation
The distance of a pixel location from the fixation point, i.e. its eccentricity, determines the standard deviation of the Gaussian kernel applied to it and the combination weight of the color and gray images at this location. While eccentricity is typically measured radially, in this paper we use a different distance metric that produces un-rotated square level sets. This property allows us to efficiently extract regions having the same eccentricity by simply slicing the image tensor. Concretely, we compute the eccentricity of the pixel at location \((x_{p},y_{p})\) as
\[e_{x_{p},y_{p}}=\frac{\max(|x_{p}-x_{f}|,|y_{p}-y_{f}|)}{W_{V}}, \tag{1}\]
where \((x_{f},y_{f})\) and \(W_{V}\) represent the fixation point and the width of the visual field, i.e. the rectangular region over which _R-Blur_ operates and defines the maximum image size that is expected by _R-Blur_. We normalize by \(W_{V}\) to make the \(e_{x_{p},y_{p}}\) invariant to the size of the visual field.
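For concreteness, the eccentricity map of Eq. (1) can be computed for a whole image in a few lines. The following Python sketch is illustrative and not taken from the authors' implementation; the array conventions and names are ours.

```python
import numpy as np

def eccentricity_map(height, width, fixation, field_width):
    """Eq. (1): Chebyshev-style distance from the fixation point, normalized by
    the visual-field width, so that level sets are un-rotated squares."""
    xf, yf = fixation
    ys, xs = np.mgrid[0:height, 0:width]
    return np.maximum(np.abs(xs - xf), np.abs(ys - yf)) / field_width

# e.g. a 224x224 visual field fixated at its centre
ecc = eccentricity_map(224, 224, fixation=(112, 112), field_width=224)
```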
### Visual Acuity Estimation
We compute the visual acuity at each pixel location based on its eccentricity. The biological retina contains two types of photoreceptors. The first type, called cones, is color-sensitive and gives rise to high-fidelity visual perception at the fovea, while the second type, called rods, is sensitive to illumination only, not color, and gives rise to low-fidelity vision in the periphery. The acuity of color and grayscale vision, arising from the cones and rods, is estimated at each pixel location, \((x,y)\), using the two sampling distributions \(D_{C}(e_{x,y})\) and \(D_{R}(e_{x,y})\), respectively. In this work, we use:
\[\mathcal{D}(e;\sigma,\alpha)=\max\left[\lambda(e;0,\sigma),\gamma(e;0,\alpha\sigma)\right]\tag{2}\]
\[D_{C}(e;\sigma_{C},\alpha)=\mathcal{D}(e;\sigma_{C},\alpha)\tag{3}\]
\[D_{R}(e;\sigma_{R},\alpha,p_{max})=p_{max}(1-\mathcal{D}(e;\sigma_{R},\alpha)),\tag{4}\]
where \(\lambda(.;\mu,\sigma)\) and \(\gamma(.;\mu,\sigma)\) are the PDFs of the Laplace and Cauchy distribution with location and scale parameters \(\mu\) and \(\sigma\), and \(\alpha\) is a parameter used to control the width of the distribution. We set \(\sigma_{C}=0.12,\sigma_{R}=0.09,\alpha=2.5\) and \(p_{max}=0.12\). We choose the above equations and their parameters to approximate the curves of photopic and scotopic visual acuity from (Dragoi and Tsuchitani, 2020). The resulting acuity estimates are shown in Figure 1(b). Unfortunately, the measured photopic and scotopic acuity curves from (Dragoi and Tsuchitani, 2020) cannot be reproduced here due to copyright reasons, however, they can be viewed at [https://nba.uth.tmc.edu/neuroscience/m/s2/chapter14.html](https://nba.uth.tmc.edu/neuroscience/m/s2/chapter14.html) (see Figure 14.3).
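A literal transcription of Eqs. (2)-(4) with the stated parameter values is sketched below; any rescaling or normalization applied in the authors' released implementation is not reproduced here.

```python
import numpy as np
from scipy.stats import laplace, cauchy

SIGMA_C, SIGMA_R, ALPHA, P_MAX = 0.12, 0.09, 2.5, 0.12

def acuity(e, sigma, alpha=ALPHA):
    """Eq. (2): pointwise maximum of a Laplace and a widened Cauchy density."""
    return np.maximum(laplace.pdf(e, loc=0.0, scale=sigma),
                      cauchy.pdf(e, loc=0.0, scale=alpha * sigma))

def color_acuity(e):
    """Eq. (3): photopic (cone / color) acuity."""
    return acuity(e, SIGMA_C)

def gray_acuity(e):
    """Eq. (4): scotopic (rod / grayscale) acuity."""
    return P_MAX * (1.0 - acuity(e, SIGMA_R))
```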
### Quantizing the Visual Acuity Estimate
In the form stated above, we would need to create and apply as many Gaussian kernels as the distance between the fixation point and the farthest vertex of the visual field. This number can be quite large as the size of the image increases and will drastically increase the per-image computation time. To mitigate this issue we quantize the estimated acuity values. As a result, the locations to which the same kernel is applied no longer constitute a single pixel perimeter but become a much wider region (see Figure 1 (c) and (d)), which allows us to apply the Gaussian kernel in these regions very efficiently using optimized implementations of the convolution operator.

Figure 1: _R-Blur_ adds Gaussian noise to image (a) with the fixation point (red dot) to obtain (b). It then creates a colored and a grayscaled copy of the image and applies adaptive Gaussian blurring to them to obtain the low-fidelity images (c) and (d), where the numbers indicate the standard deviation of the Gaussian kernel applied in the region bounded by the boxes. The blurred color and gray images are combined in a pixel-wise weighted combination to obtain the final image (e), where the weights of the colored and gray pixels are a function of their respective estimated acuity values (see 2.2).
To create a quantized eccentricity-acuity mapping, we do the following. We first list all the color and gray acuity values possible in the visual field by assuming a fixation point at \((0,0)\), computing eccentricity values \(e_{0,y}\) for \(y\in[0,W_{V}]\) and the corresponding values of \(\mathcal{D}_{R}=\{D_{R}(e_{0,y})|y\in[0,W_{V}]\}\) and \(\mathcal{D}_{C}=\{D_{C}(e_{0,y})|y\in[0,W_{V}]\}\). We then compute and store the histograms, \(H_{R}\) and \(H_{C}\), from \(\mathcal{D}_{R}\) and \(\mathcal{D}_{C}\), respectively. To further reduce the number of kernels we need to apply and increase the size of the region each of them is applied to, we merge the bins containing less than \(\tau\) elements in each histogram with the adjacent bin to their left. After that, given an image to process, we will compute the color and gray visual acuity for each pixel, determine in which bin it falls in \(H_{R}\) and \(H_{C}\), and assign it the average value of that bin.
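A sketch of this bin-merging quantization is given below; the number of histogram bins and the threshold \(\tau\) are illustrative choices, not the paper's settings.

```python
import numpy as np

def quantize_acuity(acuity_values, n_bins=16, tau=4):
    """Replace each per-pixel acuity value by the mean of its histogram bin,
    after merging bins holding fewer than `tau` values into the bin on their left."""
    flat = acuity_values.ravel()
    counts, edges = np.histogram(flat, bins=n_bins)

    # dropping the left edge of an under-populated bin merges it into the bin on its left
    kept = [edges[0]] + [edges[i] for i in range(1, n_bins) if counts[i] >= tau] + [edges[-1]]
    kept = np.asarray(kept)

    idx = np.clip(np.digitize(flat, kept) - 1, 0, len(kept) - 2)
    means = np.array([flat[idx == b].mean() for b in range(len(kept) - 1)])
    return means[idx].reshape(acuity_values.shape)
```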
### Changing the Viewing Distance
Increasing the viewing distance can be beneficial as it allows the viewer to gather a more global view of the visual scene and facilitates object recognition. To increase the viewing distance, we drop the \(k\) lowest-acuity bins and shift every pixel's bin assignment \(k\) bins ahead (towards the fovea), so that the pixels that were in bins 1 through \(k+1\) are now all assigned to bin 1. Figure 3 shows the change in the viewing distance as the value of \(k\) increases from 0 to 5. Formally, given the quantized \(D_{C}(e_{x,y})\) and \(D_{R}(e_{x,y})\), let \(D=[d_{1},...,d_{n}]\) represent the value assigned to each bin and \(P_{i}\) be the pixel locations assigned to the \(i^{th}\) bin, with \(P_{1}\) and \(P_{n}\) corresponding to points with the lowest and highest eccentricity, respectively. To increase the viewing distance, we merge bins 1 through \(k+1\) such that \(D^{\prime}=[d_{1},...,d_{n-k}]\) and the corresponding pixels are \(P^{\prime}_{1}=[P_{1},...,P_{k+1}]\) and \(P^{\prime}_{i>1}=P_{k+i}\).
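Under this reading (all pixel groups shift \(k\) bins towards the fovea, with the first \(k+1\) groups collapsing into bin 1), the operation is a simple re-indexing; the sketch below is illustrative only.

```python
import numpy as np

def increase_viewing_distance(bin_values, bin_pixels, k):
    """Simulate a larger viewing distance by dropping the k lowest-acuity values
    and shifting pixel groups k bins towards the fovea.

    bin_values: [d_1, ..., d_n]  quantized acuity per bin (bin 1 = fovea)
    bin_pixels: [P_1, ..., P_n]  pixel-index arrays assigned to each bin
    """
    if k == 0:
        return bin_values, bin_pixels
    new_values = list(bin_values[:-k])                 # D' = [d_1, ..., d_{n-k}]
    merged = np.concatenate(bin_pixels[:k + 1])        # P'_1 = P_1 ∪ ... ∪ P_{k+1}
    new_pixels = [merged] + list(bin_pixels[k + 1:])   # P'_{i>1} = P_{k+i}
    return new_values, new_pixels
```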
### Blurring and Color Desaturation
We map the estimated visual acuity at each pixel location, \((x_{p},y_{p})\), to the standard deviation of the Gaussian kernel that will be applied at that location as \(\sigma_{(x_{p},y_{p})}=\beta W_{V}(1-D(e_{x,y}))\), where \(\beta\) is a constant that controls the standard deviation and is set to \(\beta=0.05\) in this paper, and \(D=D_{C}\) for pixels in the colored image and \(D=D_{R}\) for pixels in the grayscaled image. We then apply Gaussian kernels of the corresponding standard deviation to each pixel in the colored and grayscale image to obtain an adaptively blurred copy of each, which we combine in a pixel-wise weighted combination to obtain the final image. The weight of each colored and gray pixel is given by the normalized color and gray acuity, respectively, at that pixel. Formally, the pixel at \((x_{p},y_{p})\) in the final image has value
\[v^{f}_{(x_{p},y_{p})}=\frac{v^{c}_{(x_{p},y_{p})}D_{C}(e_{x,y};\sigma_{C},\alpha)+v^{g}_{(x_{p},y_{p})}D_{R}(e_{x,y};\sigma_{R},\alpha,p_{max})}{D_{C}(e_{x,y};\sigma_{C},\alpha)+D_{R}(e_{x,y};\sigma_{R},\alpha,p_{max})},\]
where \(v^{c}_{(x_{p},y_{p})}\) and \(v^{g}_{(x_{p},y_{p})}\) are the pixel values at \((x_{p},y_{p})\) in the blurred color and gray images, respectively.
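Putting the pieces together, the mapping from acuity to kernel width and the final pixel-wise blend can be sketched as follows, reusing the `color_acuity` / `gray_acuity` helpers from the earlier sketch; `ecc` is an eccentricity map such as the one produced by the Eq. (1) sketch, and the clipping of the kernel width is our guard on the literal transcription, not part of the paper's equations.

```python
import numpy as np

BETA = 0.05

def kernel_sigma(acuity_val, field_width):
    """Standard deviation of the Gaussian kernel applied at a pixel with the
    given acuity, clipped at zero so the sketch stays well defined."""
    return np.maximum(0.0, BETA * field_width * (1.0 - acuity_val))

def rblur_blend(blurred_color, blurred_gray, ecc):
    """Pixel-wise weighted blend of the adaptively blurred color (H, W, 3) and
    grayscale (H, W) copies, weighted by the estimated acuities."""
    w_c = color_acuity(ecc)[..., None]
    w_r = gray_acuity(ecc)[..., None]
    return (blurred_color * w_c + blurred_gray[..., None] * w_r) / (w_c + w_r)
```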
## 3 Evaluation
In this section, we determine the accuracy and robustness of _R-Blur_ by evaluating it on clean data and data that has been perturbed by either adversarial attacks or common - non-adversarial - corruptions. We compare the performance of _R-Blur_ with an unmodified ResNet, two existing biologically-inspired defenses, _R-Warp_(Vuyyuru et al., 2020) and _VOneBlock_(Dapello et al., 2020), and two
non-biological adversarial defenses: Adversarial Training (_AT_) (Madry et al., 2018) and Randomized Smoothing (_RS_) (Cohen et al., 2019). We show _R-Blur_ to be significantly more robust to adversarial attacks and common corruptions than the unmodified ResNet and prior biologically inspired methods. Moreover, the certified accuracy of _R-Blur_ is close to that of _RS_, and even better in certain cases. While _AT_ is more robust than _R-Blur_ against adversarial attacks, _R-Blur_ is more robust than _AT_ against high-intensity common corruptions, thus indicating that the robustness of _R-Blur_ generalizes better to different types of perturbation than _AT_. We also analyze the contribution of the various components of _R-Blur_ in improving robustness.
### Experimental Setup
**Datasets:** We use natural image datasets, namely CIFAR-10 (Krizhevsky et al.), Ecoset (Mehrer et al., 2021) and a 10-class subset of Ecoset (Ecoset-10). Ecoset contains around 1.4M images, mostly obtained from ImageNet (the database (Deng et al., 2009), not the ILSVRC dataset), that are organized into 565 basic object classes. The classes in Ecoset correspond to commonly used nouns that refer to concrete objects. To create Ecoset-10, we order the classes in the Ecoset dataset by the number of images assigned to each class and then select the top 10 classes. The training/validation/test splits of Ecoset-10 and Ecoset are 48K/859/1K and 1.4M/28K/28K, respectively. For most experiments with Ecoset and Imagenet, we use 1130 and 2000 test images, with an equal number of images per class. During training, we use random horizontal flipping and padding + random cropping, as well as AutoAugment (Cubuk et al., 2018) for CIFAR-10 and RandAugment for Ecoset and Imagenet. Images were resized and cropped to \(224\times 224\).
**Model Architectures:** For CIFAR-10 we use a Wide-Resnet (Zagoruyko and Komodakis, 2016) model with 22 convolutional layers and a widening factor of 4, and for Ecoset and Imagenet we use XResNet-18 from fastai (Howard and Gugger, 2020) with a widening factor of 2. Results for additional architectures are presented in Appendix C.
**Baselines and Existing Methods:** We compare the performance of _R-Blur_ to two baselines: (1) an unmodified ResNet trained on clean data (ResNet), and (2) a ResNet which applies five affine transformations 1 to the input image and averages the logits (_RandAffine_). We also compare _R-Blur_ with two biologically inspired defenses: _VOneBlock_ pre-processing proposed in (Dapello et al., 2020), which simulates the receptive fields and activations of the primate V1 2, and _R-Warp_ preprocessing proposed in (Vuyyuru et al., 2020), which simulates foveation by resampling input images such that the sampling density of pixels is maximal at the point of fixation and decays progressively in regions further away from it. Finally, we compare _R-Blur_ with two non-biological adversarial defenses: fast adversarial training (Wong et al., 2019) with \(\|\delta\|_{\infty}=0.008\) (_AT_), and Randomized Smoothing (_RS_) (Cohen et al., 2019).
Footnote 1: We apply rotation, translation, and shearing, with their parameters sampled from \([-8.6^{\circ},8.6^{\circ}]\), \([-49,49]\) and \([-8.6^{\circ},8.6^{\circ}]\) respectively. The ranges are chosen to match the ranges used in RandAugment. The random seed is fixed during evaluation to prevent interference with adversarial attack generation.
Footnote 2: As in (Dapello et al., 2020), we remove the first conv, batch norm, ReLU, and MaxPool from the ResNet with _VOneBlock_.
**Fixation Selection for _R-Blur_ and _R-Warp_**: While training models with _R-Blur_ and _R-Warp_, we split each batch into sub-batches of 32 images, and for each sub-batch, we randomly sample a single fixation point that we use to apply _R-Blur_ or _R-Warp_ to all the images in that sub-batch. While training the _R-Blur_ model, we also set the viewing distance uniformly at random using the procedure described in 2.4. During inference, we determine a sequence of five fixation points (a scanpath) using DeepGaze-III (Kummerer et al., 2022). Given an image, DeepGaze-III passes it through a pretrained CNN backbone (DenseNet-201 in (Kummerer et al., 2022)) and extracts the activations from several intermediate layers of the CNN. It then applies a sequence of pointwise convolution and normalization layers to the activations to obtain a heatmap indicating where a human is likely to fixate. We found that it was more efficient to not use the scanpath prediction module in DeepGaze-III, and instead obtain scanpaths by keeping track of the past fixation points, and masking the predicted heatmap at these locations prior to sampling the next fixation point from it. This process is illustrated in Figure 4. We trained two instances of DeepGaze-III using the ResNets we trained with _R-Blur_ and _R-Warp_ as the CNN backbone. We use the corresponding DeepGaze-III models to predict the scanpaths for _R-Blur_ and _R-Warp_ models.
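The fixation-selection loop of Figure 4 can be sketched as follows; `heatmap_fn` stands in for DeepGaze-III, `apply_foveation` for _R-Blur_ / _R-Warp_, and the mask width is an illustrative choice rather than the value used in the experiments.

```python
import numpy as np

def select_scanpath(image, heatmap_fn, apply_foveation, n_fixations=5,
                    mask_sigma=20.0, seed=0):
    """Greedily build a scanpath by repeatedly foveating, predicting a fixation
    heatmap, masking past fixations with inverted Gaussians, and sampling."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    fixations = [(0, 0)]                                  # initial fixation: top-left
    for _ in range(n_fixations):
        foveated = apply_foveation(image, fixations[-1])
        heat = np.clip(heatmap_fn(foveated), 0.0, None).astype(np.float64)
        for fy, fx in fixations:                          # suppress visited regions
            heat *= 1.0 - np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2)
                                 / (2.0 * mask_sigma ** 2))
        probs = heat.ravel() / heat.sum()
        idx = rng.choice(h * w, p=probs)
        fixations.append((idx // w, idx % w))
    return fixations[1:]
```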
### Results
_R-Blur_ Improves Robustness to White-Box Attacks: We evaluate robustness by measuring the accuracy of models under the Auto-PGD (APGD) [Croce and Hein, 2020] attack, which is a state-of-the-art white-box adversarial attack. We run APGD for 25 steps on each image. We find that increasing the number of steps beyond 25 only minimally reduces accuracy (see Appendix A). We take a number of measures to ensure that we avoid the pitfalls of gradient obfuscation [Athalye et al., 2018a, Carlini et al., 2019] so that our results reflect the true robustness of _R-Blur_. These steps and the detailed settings used for adversarial attacks are mentioned in Appendix A.
To determine if _R-Blur_ improves robustness, we compare _R-Blur_ with the unmodified ResNet and _RandAffine_ under the APGD attack. We observe that _R-Blur_ is significantly more robust than the unmodified ResNet and _RandAffine_ models, consistently achieving higher accuracy than the two on all datasets and against all perturbation types and sizes, while largely retaining accuracy on clean data (Figure 5). While _RandAffine_ does induce some level of robustness, it significantly underperforms _R-Blur_. On smaller datasets, _R-Blur_ suffers relatively little loss in accuracy at small to moderate levels (\(\|\delta\|_{\infty}\leq 0.004\), \(\|\delta\|_{2}\leq 1\)) of adversarial perturbations, while the accuracy of baseline methods quickly deteriorates to chance or worse. On larger datasets - Ecoset and Imagenet, even the smallest amount of adversarial perturbation (\(\|\delta\|_{\infty}=0.002\), \(\|\delta\|_{2}=0.5\)) is enough to drive the accuracy of the baselines to \(\sim\)10%, while _R-Blur_ still is able to achieve 35-44% accuracy. As the perturbation is increased to \(\|\delta\|_{\infty}=0.004\) and \(\|\delta\|_{2}=1.0\), the accuracy of the baselines goes to 0%, while _R-Blur_ achieves 18-22%. We do observe that the accuracy of _R-Blur_ on clean data from Ecoset and Imagenet is noticeably lower than that of the baseline methods.
We also compare _R-Blur_ to two existing biologically motivated adversarial defenses: _VOneBlock_ and _R-Warp_, and find that _R-Blur_ achieves higher accuracy than both of them at all perturbation sizes and types. From Figure 6 we see that _R-Blur_ achieves up to 33pp higher accuracy than _R-Warp_, and up to 15 pp higher accuracy than _VOneBlock_ on adversarially perturbed data.
Figure 4: Illustration of fixation selection. The initial fixation point is set to top-left (0,0) and the image at \(t_{0}\) is processed with _R-Blur_ /_R-Warp_ to get the image at \(t_{1}\). DeepGaze-III is used to generate a fixation heatmap from this image. The next fixation point is sampled from the heat map, and _R-Blur_ /_R-Warp_ is applied to get the image at \(t_{2}\). The region in the heatmap around the chosen fixation point is masked with an inverted Gaussian kernel to prevent spatial clustering of fixation points. This process is repeated to get a sequence of fixation points.
Figure 5: Comparison of accuracy on various datasets (a-d) under adversarial attacks of several \(\ell_{2}\) (top) and \(\ell_{\infty}\) (bottom) norms between _R-Blur_ (green) and two baseline methods: _RandAffine_ (orange) and ResNet (blue). The dashed lines indicate accuracy on clean images. _R-Blur_ models consistently achieve higher accuracy than baseline methods on all datasets, and adversarial perturbation sizes.
_R-Blur_ is Certifiably Robust: To verify that the gains in robustness observed above are indeed reliable, we use the certification method (Certify) from (Cohen et al., 2019) to provide formal robustness guarantees for _R-Blur_. This entails obtaining predictions for an input under a _very_ large number (\(10^{5}\)) of noise samples drawn from \(\mathcal{N}(0,\sigma_{c})\), and using a hypothesis test to determine the _certified radius_ around the input in which the model's prediction is stable _with high probability_ (\(\geq 99.9\%\)). Given a dataset, we can compute the _certified accuracy_ at a radius \(r\) as the proportion of data points for which the certified radius is \(\geq r\) and the model's prediction is correct. We compute the certified accuracy of _R-Blur_ on 200 images sampled from Imagenet and Ecoset, and compare it with a model trained on data perturbed with Gaussian noise, which is known to achieve high certified accuracy (Cohen et al., 2019). We call this model _G-Noise_.
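Given per-example outputs of Certify, the certified accuracy at a radius \(r\) defined above reduces to a single vectorized expression; the array names in this sketch are ours.

```python
import numpy as np

def certified_accuracy(cert_radii, predictions, labels, radius):
    """Fraction of examples whose certified radius is at least `radius` and whose
    certified prediction is correct (Cohen et al., 2019). Abstentions can be
    encoded with a certified radius of 0 (or -inf)."""
    correct = np.asarray(predictions) == np.asarray(labels)
    return float(np.mean(correct & (np.asarray(cert_radii) >= radius)))
```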
We expose both _R-Blur_ and _G-Noise_ to Gaussian noise of scale \(\sigma_{t}=0.125\) during training and compute their certified accuracy at radii \(r\in\{0.5,1.0\}\). According to (Cohen et al., 2019), if the scale of the noise used in Certify is \(\sigma_{c}\), then the maximum radius for which certified accuracy can be computed (with \(10^{5}\) noise samples) is \(r=4\sigma_{c}\). Therefore, when computing certified accuracy at \(r\leq 0.5\), Certify adds noise of the same scale as was used during training (\(\sigma_{c}=0.125=\sigma_{t}\)), thus we call this the _matched_ setting. However, to compute certified accuracy at \(r\leq 1.0\), Certify adds noise of a larger scale than was used during training (\(\sigma_{c}=0.25>\sigma_{t}\)), and thus in order to achieve high certified accuracy at \(r\leq 1.0\) the model must be able to generalize to a change in noise distribution. We call this the _unmatched_ setting.
Figures 7(a) and 7(b) show the certified accuracy of _R-Blur_ and _G-Noise_ on Ecoset and Imagenet at several \(\ell_{2}\) norm radii under the matched and unmatched settings. In both settings, we see that _R-Blur_ achieves a high certified accuracy on both Ecoset and Imagenet, with the certified accuracy at \(r\approx 0.5\) and \(r\approx 1.0\) being close to the ones observed in Figure 5, indicating that our earlier results are a faithful representation of _R-Blur_'s robustness. Furthermore, we see that even if _R-Blur_ was trained without any noise, it can still achieve more than 50% of the certified accuracy achieved by _R-Blur_ trained with noise. This indicates that adaptive blurring and desaturation do in fact endow the model with a significant level of robustness. Finally, we note that while _G-Noise_ has (slightly) higher certified accuracy than _R-Blur_ in the matched setting, _R-Blur_ achieves significantly higher certified accuracy in the unmatched setting, outstripping _G-Noise_ by more than 10 pp at \(r\approx 1.0\) on Imagenet. This shows that the robustness of _R-Blur_ generalizes beyond the training conditions, while _G-Noise_ overfits to them. This makes _R-Blur_ particularly suited for settings in which the exact adversarial attack budget is not known, and the model must be able to generalize.
Figure 6: The difference in accuracy under adversarial attacks of several \(\ell_{2}\) and \(\ell_{\infty}\) norms between _R-Blur_ and two biologically inspired defenses: _R-Warp_ (blue) and _VOneBlock_ (orange). _R-Blur_ consistently achieves higher accuracy on all adversarial perturbation sizes than _R-Warp_ and _VOneBlock_.
Figure 7: The certified accuracy at various \(\ell_{2}\)-norm radii of _R-Blur_ and _G-Noise_ models. _R-Blur_-CFI uses 1 fixation at the center of the image, and _R-Blur_-5FI averages logits from 5 fixations (corners + center). \(\sigma_{t}\) denotes the scale of noise added during training and is 0.125 unless specified, whereas \(\sigma_{c}\) is the scale of the noise used to compute the certified accuracy. _G-Noise_ outperforms _R-Blur_ in the matched scenario, while _R-Blur_ is superior in the unmatched scenario, indicating that the robustness of _R-Blur_ is more generalizable.
_R-Blur_ Improves Accuracy on Common (Non-Adversarial) Corruptions: Adversarial perturbations constitute only a small subset of perturbations that human vision is invariant to; therefore, we evaluate _R-Blur_ on some other common types of image corruptions that humans are largely invariant to but DNNs are not. At this point, we also include the adversarially trained ResNet (_AT_) in the evaluation to measure how well the robustness endowed by adversarial training generalizes to other, non-adversarial, corruptions, and compare it with biologically motivated methods. We sampled 2 images/class from Imagenet and 5 images/class from Ecoset. Then we applied 17 of the 19 common (non-adversarial) corruptions 3 proposed in [Hendrycks and Dietterich, 2019] at 5 different severity levels to generate 85 corrupted versions of each image, leading to corrupted versions of Imagenet and Ecoset containing 170K and 240K images, respectively.
Footnote 3: We exclude Gaussian blur and Gaussian noise since they are similar to the transformations done by _R-Blur_.
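The sizes of the two corrupted evaluation sets quoted above follow directly from these choices; a quick check, assuming 1000 Imagenet classes and the 565 Ecoset classes mentioned earlier:

```python
n_versions = 17 * 5                    # 17 corruption types x 5 severity levels = 85
imagenet_c = 2 * 1000 * n_versions     # 2 images/class x 1000 classes -> 170,000
ecoset_c   = 5 * 565  * n_versions     # 5 images/class x 565 classes  -> 240,125 (~240K)
```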
Table 1 shows the accuracy achieved by the models on Ecoset and Imagenet under adversarial and non-adversarial perturbations. As expected, _AT_ achieved the highest accuracy on whitebox attacks by virtue of being trained on data perturbed with attacks very similar to APGD. After _AT_, the most adversarially robust model is _R-Blur_, followed by _VOneBlock_. We see that _R-Blur_ is the most accurate model on non-adversarial common corruptions, followed by _VOneBlock_, which supports our hypothesis that the robustness of biologically motivated methods in general, and _R-Blur_ in particular, is highly generalizable. In contrast, the accuracy of _AT_ on common corruptions is almost the same as that of the unmodified ResNet, indicating that the robustness of _AT_ does not generalize well.
### Ablation Study
Having established that _R-Blur_ indeed improves the robustness of image recognition models, we examine, by way of ablation, how much each component of _R-Blur_ contributes towards this improvement, as shown in Figure 8. The most significant contributor to robustness is the addition of noise during training, followed by adaptive blurring. Importantly, applying non-adaptive blurring by convolving the image with a single Gaussian kernel having \(\sigma=10.5\) (\(\sigma=10.9\) is the maximum used for adaptive blurring) improves robustness very slightly while degrading clean accuracy, thus indicating the significance of simulating foveation via adaptive blurring. The next most significant factor is evaluating at multiple fixation points, which improved robustness significantly compared to a single fixation point in the center of the image. This suggests that multiple fixations and saccades are important when the image is hard to recognize and presents a promising direction for future work. Furthermore, not adaptively desaturating the colors reduces the robustness slightly. To summarize, all the biologically-motivated components of _R-Blur_ contribute towards improving the adversarial robustness of object recognition DNNs from close to 0% to 45% (\(\ell_{\infty}=0.008\) for Ecoset-10).
**Ecoset**

| Method | Mean | Mean\({}_{\text{L}}\) | Mean\({}_{\text{H}}\) | CC | WB | Clean |
| --- | --- | --- | --- | --- | --- | --- |
| ResNet | 37.1 | 25.5 | 12.7 | 39.4 | 0.8 | **71.2** |
| _RandAffine_ | 35.7 | 27.6 | 11.1 | 35.8 | 3.6 | 67.6 |
| _AT_ | **49.0** | **50.9** | **35.5** | 38.5 | **47.5** | 61.1 |
| _R-Warp_ | 38.5 | 29.8 | 13.2 | 40.0 | 4.5 | 71.1 |
| _VOneBlock_ | 42.9 | 43.7 | 16.1 | **40.7** | 16.1 | **72.0** |
| _R-Blur_ | _44.2_ | _48.7_ | _24.2_ | **45.6** | _23.8_ | 63.3 |

**Imagenet**

| Method | Mean | Mean\({}_{\text{L}}\) | Mean\({}_{\text{H}}\) | CC | WB | Clean |
| --- | --- | --- | --- | --- | --- | --- |
| ResNet | 34.7 | 21.7 | 9.7 | 33.6 | 0.1 | **70.3** |
| _RandAffine_ | 33.7 | 23.0 | 8.6 | 30.8 | 2.0 | 68.3 |
| _AT_ | **46.3** | **48.0** | **30.6** | 34.2 | **43.5** | 61.3 |
| _R-Warp_ | 34.1 | 19.5 | 16.2 | 32.5 | 2.2 | 67.7 |
| _VOneBlock_ | 38.8 | 37.5 | 12.3 | _35.8_ | 11.9 | _68.7_ |
| _R-Blur_ | _38.9_ | _41.0_ | _18.2_ | **39.0** | _17.2_ | 60.5 |

Table 1: Accuracy of the evaluated models on clean and perturbed data from Ecoset and Imagenet. “Mean” is the overall mean over all perturbations, “WB” refers to the accuracy under APGD attacks, and “CC” refers to the accuracy under common non-adversarial corruptions [Hendrycks and Dietterich, 2019]. Mean\({}_{\text{L}}\) and Mean\({}_{\text{H}}\) refer to the average accuracy on low-intensity (\(\|\delta\|_{\infty}=0.002\), \(\|\delta\|_{2}=0.5\), severity \(\leq 3\)) and high-intensity (\(\|\delta\|_{\infty}\in\{0.004,0.008\}\), \(\|\delta\|_{2}\in\{1.5,2.0\}\), severity \(>3\)) perturbations. _R-Blur_ significantly improves the robustness of ResNet, and outperforms prior biologically motivated defenses, while approaching the performance of _AT_.
## 4 Related Work
**Non-biological defenses:** Perhaps the most successful class of adversarial defenses are adversarial training algorithms (Madry et al., 2018; Zhang et al., 2019; Rebuffi et al., 2021; Bai et al., 2021; Wong et al., 2019), which train models on adversarially perturbed data generated by backpropagating gradients from the loss to the input during each training step. Another popular class of defenses is certified defenses (Cohen et al., 2019; Fischer et al., 2020; Kumar and Goldstein, 2021; Li et al., 2019), which are accompanied by provable guarantees of the form: with probability \(1-\delta\), the model's output will not change if a given image is perturbed by at most \(\epsilon\). Perhaps most closely related to our work are preprocessing defenses (Guo et al., 2017; Raff et al., 2019) that apply a large number of transforms to the input during inference. Usually, these defenses rely on non-differentiable transformations, and a high degree of randomization in the number, sequence, and parameters of the transforms they apply to each image. Therefore, these defenses tend to obfuscate gradients (Athalye et al., 2018), and have been shown to be compromised by attacks with a higher step budget. We would like to point out that R-Blur does not have these aforementioned pitfalls - the transforms that R-Blur applies (Gaussian blur and desaturation) are fully differentiable and totally deterministic. In general, it is our opinion that by not being cognizant of the biological basis of robust vision, current approaches are excluding a large set of potentially effective approaches for defending against adversarial attacks.
**Biologically inspired defenses:** These defenses involve integrating computational analogues of biological processes that are absent from common DNNs, such as predictive/sparse coding (Paiton et al., 2020; Bai et al., 2021), biologically constrained visual filters, nonlinearities, and stochasticity (Dapello et al., 2020), foveation (Jonnalagadda et al., 2022; Luo et al., 2015; Gant et al., 2021), non-uniform retinal sampling and cortical fixations (Vuyyuru et al., 2020), into DNNs. The resulting models are made more robust to adversarially perturbed data.
**Retina Models for Robustness:** Some prior works study the impact of simulating foveation in CNNs on their adversarial robustness, but the body of literature in this space is rather sparse. One of the earliest works (Luo et al., 2015) implements a foveation-based defense by cropping the input image using various schemes. (Vuyyuru et al., 2020) proposed a defense that simulates the non-uniform sampling of the visual stimuli that occurs in the retina. They do this by resampling input images so that the sampling density of pixels is maximal at the point of fixation and decays progressively in regions further away from it. More recently, (Gant et al., 2021) proposed a neural network that modifies the textures in the image such that the modified image appears identical to human observers when viewed in peripheral vision. They report improvement in adversarial robustness when DNNs are trained on images to which this transform has been applied. While we do not implement and evaluate their approach in this paper, we note that they report 40% accuracy against perturbation of \(\ell_{\infty}\) norm 0.005 by a 5-step PGD attack, while we achieve an accuracy close to 50% at \(\ell_{\infty}\) norm 0.008 with a 25-step APGD attack. Although the accuracy of these biologically inspired approaches is higher than that of an unmodified model under adversarial attack, there is still a large gap in robustness between the proposed approaches and non-biological defenses, indicating that there is room for improvement.
## 5 Limitations
Adding _R-Blur_ reduces accuracy on clean data, however, it is possible to significantly improve the accuracy of _R-Blur_ by developing better methods for selecting the fixation point. Further experimental results presented in Appendix B show that if the optimal fixation point was chosen by an oracle the clean accuracy of _R-Blur_ can be improved to within 2% of the accuracy of the unmodified ResNet.
Figure 8: Accuracy on clean and APGD (\(\|\delta\|_{\infty}=0.008\)) perturbed Ecoset-10 images when individual components of _R-Blur_ are removed (left), and when non-adaptive transforms are used (right). Removing the biologically-motivated components of _R-Blur_ harms the robustness
## 6 Conclusion
Since the existence of adversarial attacks presents a divergence between DNNs and humans, we ask if some aspect of human vision is fundamental to its robustness that is not modeled by DNNs. To this end, we propose _R-Blur_, a foveation technique that blurs the input image and reduces its color saturation adaptively based on the distance from a given fixation point. We evaluate _R-Blur_ and other baseline models against APGD attacks on two datasets containing real-world images. _R-Blur_ outperforms other biologically inspired defenses. Furthermore, _R-Blur_ also significantly improves robustness to common, non-adversarial corruptions and achieves accuracy greater than that of adversarial training. The robustness achieved by _R-Blur_ is certifiable using the approach from (Cohen et al., 2019) and the certified accuracy achieved by _R-Blur_ is at par or better than that achieved by randomized smoothing (Cohen et al., 2019). Our work provides further evidence that biologically inspired techniques can improve the accuracy and robustness of AI models.
|
2308.11781
|
Addressing Dynamic and Sparse Qualitative Data: A Hilbert Space
Embedding of Categorical Variables
|
We propose a novel framework for incorporating qualitative data into
quantitative models for causal estimation. Previous methods use categorical
variables derived from qualitative data to build quantitative models. However,
this approach can lead to data-sparse categories and yield inconsistent
(asymptotically biased) and imprecise (finite sample biased) estimates if the
qualitative information is dynamic and intricate. We use functional analysis to
create a more nuanced and flexible framework. We embed the observed categories
into a latent Baire space and introduce a continuous linear map -- a Hilbert
space embedding -- from the Baire space of categories to a Reproducing Kernel
Hilbert Space (RKHS) of representation functions. Through the Riesz
representation theorem, we establish that the canonical treatment of
categorical variables in causal models can be transformed into an identified
structure in the RKHS. Transfer learning acts as a catalyst to streamline
estimation -- embeddings from traditional models are paired with the kernel
trick to form the Hilbert space embedding. We validate our model through
comprehensive simulation evidence and demonstrate its relevance in a real-world
study that contrasts theoretical predictions from economics and psychology in
an e-commerce marketplace. The results confirm the superior performance of our
model, particularly in scenarios where qualitative information is nuanced and
complex.
|
Anirban Mukherjee, Hannah H. Chang
|
2023-08-22T20:40:31Z
|
http://arxiv.org/abs/2308.11781v1
|
# Addressing Dynamic and Sparse Qualitative Data: A Hilbert Space Embedding of Categorical Variables
###### Abstract
We propose a novel framework for incorporating qualitative data into quantitative models for causal estimation. Previous methods use categorical variables derived from qualitative data to build quantitative models. However, this approach can lead to data-sparse categories and yield inconsistent (asymptotically biased) and imprecise (finite sample biased) estimates if the qualitative information is dynamic and intricate.
We use functional analysis to create a more nuanced and flexible framework. We embed the observed categories into a latent Baire space and introduce a continuous linear map--a Hilbert space embedding--from the Baire space of categories to a Reproducing Kernel Hilbert Space (RKHS) of representation functions. Through the Riesz representation theorem, we establish that the canonical treatment of categorical variables in causal models can be transformed into an identified structure in the RKHS. Transfer learning acts as a catalyst to streamline estimation--embeddings from traditional models are paired with the kernel trick to form the Hilbert space embedding.
We validate our model through comprehensive simulation evidence and demonstrate its relevance in a real-world study that contrasts theoretical predictions from economics and psychology in an e-commerce marketplace. The results confirm the superior performance of our model, particularly in scenarios where qualitative information is nuanced and complex.
**Keywords:** Qualitative data, Functional analysis, Hilbert space embedding, Riesz representation theorem, Causal estimation.
## 1 Introduction
Qualitative data refers to non-numerical information used in data analysis. It encompasses attributes or characteristics such as color, taste, texture, smell, or appearance. Derived from interviews, focus groups, observations, and open-ended survey responses, this descriptive data offers deep insights into behaviors, emotions, experiences, and social phenomena.
Integrating qualitative data into quantitative models for causal estimation presents challenges. A prevalent strategy involves transforming qualitative data into categorical variables for inclusion in quantitative models (Powers and Xie, 2008). However, this method can lead to estimates that are both imprecise and biased in a finite sample (i.e., finite-sample
bias) or inconsistent, exhibiting bias that does not vanish as the sample size increases (i.e., asymptotic bias) when handling dynamic and complex qualitative information.
Specifically, dynamic information environments exhibit swift and continuous evolution. For instance, on e-commerce platforms like Amazon, new products with unique qualitative descriptors (e.g., shape, style, and look) are released. Concurrently, older products with other qualitative descriptors become obsolete. Similar scenarios occur in healthcare research, where new diseases and treatments emerge (Murray et al., 2012), network science research with changing node interactions (Holme and Saramaki, 2012), and climate science research facing continuously shifting climate patterns (Knutti et al., 2010). In each of these instances, a portion of the description (e.g., the symptoms of a disease) is communicated qualitatively.
In these examples, as the qualitative data changes over time, so do the categorical variables used to express the qualitative data, and this dynamic can significantly impact data analysis. When categories fade or become obsolete, the accumulation of relevant information ceases. For example, in a demand model, additional observations may relate to new products because outdated products have left the market, limiting the sample suitable for estimating parameters tied to obsolete products (e.g., brand fixed effects, coefficients relating to older technologies and fashion elements) (Broda and Weinstein, 2010; Mukherjee and Kadiyali, 2011). This can yield imprecise and inconsistent estimates because only fixed-length partitions of the samples are pertinent for estimating corresponding sets of parameters; the information in a period is neither relevant to outdated products nor to future products.
Moreover, in scenarios requiring intricate categorization, some categories may correspond to only a few observations. Consider, for instance, Netflix's in-house movie classification system, which comprises 76,000 microgenres such as 'Cult Evil Kid Horror Movies,' 'British set in Europe Sci-Fi & Fantasy from the 1960s,' and 'Time Travel Movies starring William Hartnell' (Madrigal, 2014). The system's complexity likely results in sparse categories in a given dataset, as such specific category definitions can only correspond to a few movies, and thus, to a limited number of observations. Consequently, the estimates of coefficients associated with such sparse categories may exhibit severe finite-sample bias (Kim et al., 2006; Meier et al., 2008).
The number of observations within a category may not necessarily correspond to its informational significance. For instance, in a wine dataset, the coefficient associated with the taste descriptors associated with rare but expensive wines could hold substantial informational value. However, conventional approaches aimed at managing data and model complexity may either : merge the category describing the descriptors with other categories, thereby rendering all related category coefficients equivalent, or assume it to be zero by eliminating the corresponding dummy variable. As such, in many social science applications, including this example, the qualitative data components represented by sparse categories are likely to exert a non-zero and distinct influence on the outcome. Therefore, accurately estimating coefficients on data-sparse categories may be pivotal to the analysis.
Lastly, imprecision and inconsistency in category-level fixed effects, or 'group-level' parameters, can cause inconsistency in the estimates of the structural parameters of the model (i.e., parameters relating to the entire dataset), a phenomenon known as the incidental parameters problem (Fernandez-Val, 2009; Greene, 2004; Honore and Kyriazidou, 2000; Lancaster, 2000; Manski, 1987; Small and Murdoch, 1993). Furthermore, the exclusion of relevant variables to minimize variance in the bias-variance tradeoff, guided either by domain-specific
information or principled variable selection methods such as the Least Absolute Shrinkage and Selection Operator (LASSO ; Chetverikov et al., 2021; Hastie et al., 2015; Tibshirani, 1996; Yuan and Lin, 2006), has been shown to significantly bias estimates of the included variables in the model (Wuthrich and Zhu, 2021). These factors constrain the classical model's ability to accurately depict the true underlying relationships in the data-generating process.
### Color Mentions in Amazon Product Titles: A Case Study
As a concrete example of the dynamism and complexity of qualitative information, consider color references on Amazon. Figure 1 presents a frequency analysis of color mentions in product titles in the 'Fashion' category. To fix ideas, we consistently refer back to this example throughout our paper.
To construct this figure, we began with a large-scale dataset of Amazon metadata, as detailed by Ni et al. (2019), focusing exclusively on fashion products. We then searched these product titles for a selection of 90 color names that were chosen based on the presence of a corresponding Wikipedia article, to ensure the names are in common usage and are likely to influence consumer behavior. Our final data includes 105,275 products.
The bars in the figure represent the number of product titles that reference each color, presented on a logarithmic scale to compensate for the highly skewed nature of the data. For example, 'Black' is the most commonly referenced color, appearing in 31,103 instances, while the 25\({}^{\text{th}}\) and 75\({}^{\text{th}}\) percentiles account for 3 and 506.25 references, respectively. Despite our search being confined to common color names and solely including products within the 'Fashion' category, we identified 72 unique colors, demonstrating a wide spectrum of references. The count of unique colors would increase if our search were to encompass less common names or a broader range of product types.
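A sketch of the counting procedure behind Figure 1 is given below; whole-word, case-insensitive matching is assumed here, and the exact matching rules used to build the figure may differ.

```python
import re
from collections import Counter

def count_color_mentions(titles, color_names):
    """Count how many product titles mention each color name at least once."""
    patterns = {c: re.compile(rf"\b{re.escape(c)}\b", re.IGNORECASE) for c in color_names}
    counts = Counter()
    for title in titles:
        for color, pattern in patterns.items():
            if pattern.search(title):
                counts[color] += 1
    return counts

# e.g. count_color_mentions(["Black Leather Belt", "Lemon Print Summer Dress"],
#                           ["Black", "Lemon", "Lime"])
```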
The data comprises a variety of both similar ('Lemon' and 'Lime') and dissimilar ('Lemon' and 'Raspberry') color names. Suppose we were to specify a demand model based on this data. Traditional categorical variable approaches would treat these colors as entirely unrelated, thereby failing to capture the latent relationships. Crucially, these latent relationships, while not evident in a simple categorical variable for color, are readily discernible in color composition models such as the RGB model. For instance, the similarities between the colors 'Lemon' and 'Lime', as well as the differences between 'Lemon' and 'Raspberry', are well-reflected in their respective RGB values. At the core of our proposed modeling framework is the utilization of such latent similarities in the information represented by these categories. This can be achieved either by applying an alternate model to the category labels or by analyzing the physical phenomena they represent (for example, examining the actual colors corresponding to the color names) to enhance estimation efficiency.
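To make the intuition concrete, consider a positive-definite (Mercer-admissible) Gaussian kernel on an RGB representation of the color names. The RGB coordinates and bandwidth below are illustrative, and this kernel is not the one used by the paper's estimator; it only shows how a representation of the physical phenomenon recovers similarities that a one-hot categorical encoding discards.

```python
import numpy as np

# Illustrative sRGB coordinates; the paper does not prescribe these values.
RGB = {"Lemon": (255, 247, 0), "Lime": (191, 255, 0), "Raspberry": (227, 11, 92)}

def rgb_similarity(c1, c2, gamma=1e-4):
    """Gaussian (RBF) kernel between two colors in RGB space."""
    d2 = sum((a - b) ** 2 for a, b in zip(RGB[c1], RGB[c2]))
    return float(np.exp(-gamma * d2))

print(rgb_similarity("Lemon", "Lime"))       # ~0.66: similar shades
print(rgb_similarity("Lemon", "Raspberry"))  # ~0.0015: dissimilar shades
```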
Note: The length of each bar represents the natural logarithm of the number of times each color was referenced in a product title. The colors are organized by their RGB representation, with red, green, and blue channels increasing from left to right.

Figure 1: Frequency Analysis of Color References in Product Titles

### Overview of Contributions

This paper introduces a novel machine learning and econometrics framework for integrating qualitative data into quantitative models for causal estimation. Our approach addresses the key challenges of data sparsity, evolving categorizations, and complex categorical structures, thereby improving upon standard models. The primary contributions include:
1. **Advanced Processing of Qualitative Data:** We use functional analysis to map observed categories to a latent Baire space, which we then embed in a Reproducing Kernel Hilbert Space (RKHS). This approach offers more nuanced handling of categorical variables than traditional methods.
2. **Addressing Inconsistency and Imprecision in Dynamic Environments:** Our framework provides consistent estimators in dynamic settings, even where traditional methods may fail. It efficiently handles categories that may become obsolete due to rapid environmental changes.
3. **Handling Data-Sparse Categories:** Our estimator is designed for complex categorization systems with sparse data. We use Hilbert space embeddings to enhance the interpretation and representation of this data, which increases econometric estimation efficiency.
4. **Incorporating Supplementary Data:** Our study illustrates how auxiliary information can be used in the modeling process, such as physical phenomena models related to categories (e.g., color representation models), inferences from language models based on textual category descriptions, and transformations of direct stimuli.
5. **Demonstration of Model Efficacy and Traditional Model Limitations:** We validate our model through rigorous simulation experiments, establishing superior performance in handling dynamic and complex qualitative information and uncovering traditional models' limitations.
6. **Balancing Intuition and Mathematical Admissibility:** Our research elucidates the following dichotomy between intuition and mathematical admissibility--certain intuitive functions of the initial embeddings such as the dot product are admissible under Mercer's theorem, while others (e.g., the Euclidean norm) are inadmissible. In developing theory, we provide a rigorous foundation that is flexible in offering alternatives, while clearly outlining key requirements.
By applying our model to a novel research question relating to color references in Amazon product titles, we showcase how it can address key challenges and integrate qualitative data into quantitative causal models.
## 2 Conceptual Framework
In this section, we establish the conceptual framework for our research. We begin with a review of the relevant statistical, econometric, and empirical modeling literature. Next, we outline the mathematical structures that underpin our model. We conclude with a discussion of our analytical and empirical studies, and how they relate to the extant literature.
### Prior Literature
Despite the pervasiveness of qualitative data and the challenges posed by the sparse and dynamic categorical variables it generates, comprehensive solutions have yet to be found. Practitioners typically employ one of two strategies: either consolidating rare categories or implementing variable selection methods (Hastie et al., 2015; Tibshirani, 1996; Yuan and Lin, 2006). However, these strategies fall short in several aspects. They fail to provide
estimates for rarely observed or emerging categories, often leading to these being dropped or consolidated. Furthermore, they struggle to address potential endogeneity issues, which can arise from the loss of unobserved qualitative information when categories are merged or dropped. Specifically, while variable selection models like LASSO offer consistent estimation in sparse variable scenarios where many coefficients are nonzero (Zou, 2006), they introduce bias in situations with dense relationships, where many or all coefficients are nonzero--a manifestation of the bias-variance tradeoff (Vapnik, 1999; Wuthrich and Zhu, 2021).
Existing methods are partly limited by the approach of considering high-dimensional categorical parameters as 'nuisance' parameters, with an assumption that they are not the central point of investigation. This perspective is evident in the Neyman-Scott paradox. In this paradox, the group means (nuisance parameters) are estimated with consistency but not precision. Conversely, the standard deviation of these group means (the structural parameter) is estimated precisely but inconsistently. Current methods mainly aim to restore consistency in the structural parameter estimates at the expense of the nuisance parameter estimates (Greene, 2004; Honore and Kyriazidou, 2000; Small and Murdoch, 1993). For instance, the analogue of the Neyman-Scott paradox in panel data is typically addressed by differencing out the group-level panel fixed-effects. While this method can produce consistent estimates of the structural parameters, it does not provide estimates of the panel-level nuisance parameters (Lancaster, 2000; Manski, 1987).
With the proliferation of data sources and the advancement of large-scale databases cataloging human activity, it is increasingly common for economic models to be situated within high dimensional data. Consequently, there is a growing appreciation for sparse or approximately sparse models, where numerous variables and their transformations are candidates for inclusion in a statistical model but only a few are principal, so that the true model depends on a small number of nonzero coefficients (e.g., Athey et al., 2018; Belloni and Chernozhukov, 2011; Bickel et al., 2009; Chen et al., 2022; Chernozhukov et al., 2021; D'Amour et al., 2021; Fan et al., 2023; Farrell, 2015; VanderWeele, 2019; Zhang and Huang, 2008). Alternatively, in many cases, the empirical process can be well approximated by a sparse model or a model with regularized coefficients, such as those proposed by Belloni et al. (2014) and Candes and Tao (2005).
To handle high parametric complexity and high dimensionality, contemporary econometrics research methods often turn to machine learning. For instance, in the treatment effects literature, it is increasingly common to initially estimate two machine learning models--one for the outcome and the other for the propensity score--followed by the estimation of a causal econometric model (Chernozhukov et al., 2018). In such instances, the causal estimator can significantly benefit from techniques such as sample splitting, cross-fitting, and specifying Neyman orthogonal moment conditions (Chernozhukov et al., 2017).
Distinct from these use-cases, our research focuses on applications where complex and dynamic categorical information holds primary importance. For instance, researchers may wish to include a brand fixed effect in an Amazon demand model or a micro-genre fixed effect in a Netflix demand model. In such situations, estimating the fixed effects of data-sparse categories, i.e., categories with few corresponding observations, can be crucial. Take, for instance, the challenge of estimating the fixed effects of watch micro-brands like Autodromo and Farer in a demand-side analysis. Estimating these parameters may be challenging due to their infrequent appearance in the data. However, these estimates may be indispensable for a comprehensive analysis if the research question pertains to brands and branding such that
micro-brands play a significant role. Although these types of scenarios are commonplace, they have received comparatively less attention in the field.
We accomplish these goals by identifying and incorporating categories based on commonalities found within the existing data. For example, even if products in 'Carmine' and 'Cerise' are infrequent, our model can infer the fixed effects for these colors by analyzing data related to other shades of red, such as 'Maroon', 'Burgundy', and 'Red'. Figure 2 shows the frequency of fashion product titles on Amazon referencing eight popular shades of red. This visual representation underscores the intuition behind our model, demonstrating how color categories, although closely related, might be referenced with varying frequencies due to linguistic conventions. Such variations lead to sparse categorical data, which our method navigates by learning and leveraging inter-category relationships.
Modeling data-sparse categories carries important implications for the research process. The results generated by existing qualitative data management strategies often hinge on the specifics of a particular dataset. This is because the inclusion and exclusion of categories, and thus any bias arising from omitting relevant variables, depend on the frequency of these variables in a specific dataset, rather than their theoretical significance.
Figure 2: Frequency Analysis of Shades of Red in Amazon Product Titles

Recall our earlier example of Netflix's micro-genre categorization of movie storylines. If we were to conduct a demand analysis for Netflix, one data complication that may arise is that the prevalence of micro-genres in a dataset is likely to fluctuate over time. This fluctuation occurs because movies often evolve in cycles or waves, with themes and storylines recurring over time. Various factors, including societal shifts, technological advances, and commercial successes that inspire imitation, contribute to these trends (Mukherjee and Kadiyali, 2018). For instance, the success of films like 'The Matrix' spurred an influx of technologically-driven, dystopian narratives at the turn of the 21st century (Neale, 2005).
If the inclusion of categories is determined by the frequency with which the categories appear in the data, then the micro-genres included in the analysis depend on the data selection criteria (e.g., which years are covered by the data and which movies are included in the data). Thus, for example, a study of movies released prior to the release of 'The Matrix' would be based on data that reflects a different frequency of micro-genres, and therefore would employ a different set of micro-genres in the analysis, than a study of movies released after 'The Matrix'.
This scenario underscores a broader problem in managing qualitative data : the absence of a standardized approach to category selection complicates the comparison of findings across studies, exacerbating the challenge of managing sparse and dynamic categorical variables. Our method alleviates this issue by automatically projecting categorizations into a continuous space. As this process applies the same mapping, regardless of covariate balance, across all studies, it enables researchers to compare findings in a more meaningful way.
In sum, our research addresses data sparsity in dense statistical models (i.e., where many or all of the coefficients on variables are nonzero). In classical settings, where the size of the category set and its prevalence in the data remain constant over time (i.e., categories do not become obsolete), estimating fixed effects is straightforward. However, in data-sparse settings, excluding fixed effects is likely to bias the inferences and reduce their specificity, whereas including them is likely to lead to imprecision and therefore the incidental parameters problem. Therefore, our research is driven by different considerations than those of the existing literature; our focus arises from the complexity of qualitative data, as represented in a glut of categorical variables, rather than from a large number of quantitative variables. Our proposed methodology aims to bridge this critical gap.
### Overview of Our Approach
We introduce a model that integrates principles from econometrics, functional analysis, and machine learning to adeptly handle categorical variables. This subsection offers a high-level outline of the model's principal components, with subsequent sections providing in-depth explanations.
1. **Baire Space of Categories :** Unlike traditional models that treat categories as discrete, orthogonal entities, we embed categories in a latent Baire space. The topology of this space is designed to capture inherent similarities among categories, as reflected in their impact on a target outcome.
2. **Reproducing Kernel Hilbert Space (RKHS) of Category Representations :** We introduce a latent RKHS of category representation functions. By leveraging the kernel trick in its formulation, we lay the foundation for a comprehensive model of qualitative data, sidestepping the need for its explicit construction.
3. **Bounded Linear Bijective Operator Between Spaces :** A continuous, linear operator \(T\) bridges the Baire space and the RKHS of representations. Its inverse serves as a continuous linear map from the RKHS back to the category space. This
architecture enables us to stipulate a continuous linear functional on the RKHS, which corresponds to the fixed effect linked with each category within the RKHS. This functional symbolizes the conditional mean shift occurring when the reference category is supplanted by its RKHS counterpart.
4. **Reformulation of the Semiparametric Framework :** By utilizing the defined functional, we augment the semiparametric framework, transitioning from a traditionally discrete structure to a continuous one. This adaptation allows us to seamlessly integrate qualitative data into quantitative models using the RKHS and the kernel trick. As a result, the model's resilience is notably amplified in dynamic situations and when dealing with sparse category-level data.
5. **Application of the Riesz Representation Theorem :** Using the Riesz Representation Theorem, we establish a one-to-one correspondence between the Riesz representer of the functional and parameters in the original model. This ensures the identifiability of our transformed model.
6. **Initial Embedding and the Kernel Trick :** We construct an initial embedding \(\Gamma\), mapping the Baire space to a complete inner-product vector space. This mapping is deliberately designed to provide a numerical representation of the data, reflecting its inherent properties, and enabling the use of the kernel trick to compute the inner product in an expanded feature space. This approach enhances computational efficiency while preserving constraints on the dimensionality of the parameter space in the transformed model.
Figure 3 provides a visual encapsulation of our innovative approach, emphasizing the synergy of diverse mathematical concepts and computational techniques. By bridging traditional econometric methods with functional analysis and machine learning principles, we present a model capable of adeptly managing categorical variables. This integration promises increased resilience in handling dynamic situations and sparse category-level data, paving the way for new analytical possibilities.
In concert, these elements construct an identifiable empirical process that can be used to compute the conditional mean shift linked to the qualitative data. The following illustrates how qualitative data can be incorporated into a quantitative model, using colors as an example. We can form an initial finite-dimensional embedding by applying a color model to the qualitative data. To make an inference about a color (e.g., 'Cerise'), we obtain its description in the color model and use our parameter estimates to compute its fixed effect in the econometric model. By construction, the model elements are designed to reflect the conditional mean shift associated with the data represented in the categories. These estimates will then reflect both the information on 'Cerise' products and the influence of other colors, varying in their similarity to 'Cerise'. For instance, in our data on Amazon products, only 12 are 'Cerise', even though related colors such as 'Red' are common (13,992 products in our data). Thus, pooling information across categories (i.e., pooling information across colors based on color similarity) serves a dual purpose : it boosts efficiency and ensures convergence in contexts that do not adhere to classical requirements, such as a bounded number of categories and therefore bounded estimator entropy.
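To make this concrete, the following minimal sketch illustrates the pooling idea under a linear kernel. The RGB lookup table (standing in for the initial embedding) and the weight vector are illustrative assumptions, not estimates from our data; the point is only that the fixed effect of a rare color is computed from its embedding rather than from its own scarce observations.

```python
import numpy as np

# Hypothetical RGB coordinates (values in [0, 1]) standing in for the initial embedding;
# any published color model could supply these numbers.
rgb = {
    "Red":    np.array([1.00, 0.00, 0.00]),
    "Maroon": np.array([0.50, 0.00, 0.00]),
    "Cerise": np.array([0.87, 0.19, 0.39]),
}

# Weights assumed to have been estimated from the frequent colors
# (e.g., by regressing ratings on RGB coordinates).
w_hat = np.array([0.30, -0.10, 0.05])

def fixed_effect(color):
    """Conditional mean shift implied by a linear kernel: beta(c) = <embedding(c), w_hat>."""
    return float(rgb[color] @ w_hat)

print(fixed_effect("Cerise"))  # inferred from the embedding, not from the 12 'Cerise' rows
```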
### Analytical Approach and Empirical Investigation
Our research methodology combines a comprehensive analytical approach with an empirical investigation. The analysis employs a reinforcement algorithm based on the seminal work of Yule (1925) and Simon (1955). This algorithm generates the Yule-Simon distribution, a discrete probability distribution commonly used to describe the distribution of ranks in various contexts, such as in language and species richness. Named after G. Udny Yule and Herbert A. Simon, this distribution is characterized by a power-law tail, meaning that large ranks can still have substantial probability. Its probability mass function is governed by a parameter often referred to as the reinforcement parameter.
The Yule-Simon distribution has been employed in various scientific fields, including linguistics, ecology, and economics, particularly in the modeling of processes where new categories can emerge over time or where rich-get-richer dynamics are present. Examples of such phenomena include the number of species per genus in certain higher taxonomic levels of biotic organisms, the size distribution of cities, the distribution of wealth among individuals, and the number of links to webpages on the World Wide Web (Garcia, 2011). Indeed, this distribution was presented in Simon (1955) as a model for generating text, a prominent form of qualitative data. Importantly, except in the trivial case, the reinforced sequence is neither exchangeable nor partially exchangeable, as we describe later in the paper.
Figure 3: An Overview of Our Model

A recent study by Bertoin (2020) provides several significant insights. First and foremost, although the empirical process satisfies the conclusion of the Glivenko-Cantelli theorem, it does not satisfy the conclusions of the Donsker theorem. Specifically, for certain parameter values, the empirical process converges in law to a Brownian bridge up to a scale factor. For other parameter values, an additional rescaling is necessary, and the limit is a Brownian bridge with exchangeable increments and discontinuous paths.
To illustrate the forms of data we consider, we employ the Yule-Simon distribution to model categories derived from qualitative variables, a subset of the independent variables. Estimating the fixed effects of these categories can become intractable without additional model structure. This issue arises when the number of parameters increases with the number of categories. Consequently, the entropy of the model, as measured by the logarithm of the covering number, increases as a function of \(C_{n}\). Therefore, if \(C_{n}\) increases in \(n\), the parameter space may be unbounded in complexity. This is especially evident when examining the empirical measure indexed by \(\mathcal{L}\).
\[\mathcal{P}_{i}(l)=\frac{1}{i}\sum_{n=1}^{i}l(\{I_{nc}\}_{c=1}^{C_{n}},y_{n},v_{n};\{\beta_{c}\}_{c=1}^{C_{n}},\zeta),\quad l\in\mathcal{L},\ y_{n}\in\mathcal{Y},\ v_{n}\in\mathcal{V}. \tag{1}\]
Here, \(l:\mathcal{C}\times\mathcal{Y}\times\mathcal{V}\rightarrow\mathbb{R}\) denotes a real-valued loss function that incorporates category fixed effects. \(\mathcal{C}\) is the sample space of categories, \(\mathcal{Y}\) is the sample space of the dependent variable, and \(\mathcal{V}\) is the sample space of other independent variables. \(\mathcal{L}\) is a collection of such loss functions. The fixed effect parameter for category \(c\) is denoted by \(\beta_{c}\). \(I_{nc}\) is a dummy variable indicating whether observation \(n\) corresponds to category \(c\). \(C_{n}\), representing the dependence of the cardinality of the observed categories on \(n\), is non-decreasing in \(n\). \(\zeta\) are the structural parameters.
The empirical measure may fail to converge in distribution to the mean-zero \(P\)-Brownian bridge. Thus, \(l\) is not a \(P\)-Donsker class function\({}^{1}\) if the Yule-Simon model determines the categories denoted by \(I_{nc}\) (Van Der Vaart et al., 1996). Here \(P\) is a probability measure.
Footnote 1: A class \(\mathcal{F}\) of measurable functions on a probability space \((A,\mathcal{A},P)\) is called a \(P\)-Donsker class if the empirical processes \(X_{n}^{P}(f)\equiv\sqrt{n}(P_{n}(f)-P(f))\), for all \(f\in\mathcal{F}\), converge weakly to a \(P\)-Brownian bridge with almost surely bounded and uniformly continuous sample paths (Sheehy and Wellner, 1992).
A failure of the empirical measure to converge occurs if \(C_{n}\) grows with \(n\), as previously discussed. However, it is important to note that in this formulation, a failure of the empirical measure to converge may occur even if the category assignment is achieved through a fixed partition of the unit interval that does not change with \(n\). In particular, it is not necessary for \(C_{n}\) to increase in \(n\). This is clarified by the sequence of indicator functions \(\hat{G}_{n}(u)\), given by
\[\hat{G}_{n}(u)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\Big{(}\mathbf{1}_{\hat{U}_{i} \leq u}-u\Big{)},\quad u\in[0,1]. \tag{2}\]
This sequence of indicator functions \(\hat{G}_{n}(u)\) defined by Equation (2) may not converge to the \(P\)-Brownian bridge if the \(\hat{U}_{i}\) variables are distributed reinforced uniform (Bertoin, 2020). Consequently, the empirical measure in Equation (1), in which the loss function incorporates category fixed effects, may not converge in law if the category assignments are defined using the indicator functions \(\hat{G}_{n}(u)\).
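The following minimal sketch evaluates the empirical process of Equation (2) on a grid. The generator of the draws is left abstract: i.i.d. uniforms are used here purely as a placeholder, and reinforced (Yule-Simon) draws would be plugged in the same way once generated.

```python
import numpy as np

def empirical_bridge(draws, grid):
    """Evaluate G_n(u) = n^{-1/2} * sum_i (1{U_i <= u} - u) on a grid of u values."""
    draws = np.asarray(draws, dtype=float)
    n = draws.size
    return np.array([(np.sum(draws <= u) - n * u) / np.sqrt(n) for u in grid])

# Placeholder example with i.i.d. uniforms; reinforced draws are handled identically.
rng = np.random.default_rng(0)
u_grid = np.linspace(0.0, 1.0, 101)
g_hat = empirical_bridge(rng.uniform(size=5000), u_grid)
print(g_hat.max())
```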
We employ the Yule-Simon process due to its wide-ranging applicability. Various refinements have been proposed for other classical processes introducing 'innovations' (e.g., the addition of new colors in the Polya urn process, Bertoin, 2022; Janson et al., 2023), 'reinforcements' (where, at each step, a process can repeat one of its preceding steps), and 'stops'
(where, at each step, the process might remain at its current position). A notable example of such enhanced processes is the recently proposed variations of the elephant random walk process (Bertenghi, 2021; Laulin, 2022). These modifications resonate with real-world contexts where prior categories reappear, new ones surface, and extant ones fade out, thereby aligning the data generating process with many examples of categorical data.
When these features are integrated into statistical processes, they can introduce complex dynamics and disturb the convergence to the Brownian bridge process, an assumption prevalent in many statistical and econometric models. In such contexts, if category incidence indicators are constructed using these models, they might both emulate data generating processes that mirror or align closely with observed qualitative data and present similar challenges due to the growth in parametric entropy. Our model provides a potential alternative : the additional structure we integrate ensures that the parametric complexity remains bounded as the sample size grows, guaranteeing convergence. While our discussion is anchored in the Yule-Simon process, our findings likely hold more widely.
We translate our theoretical insights into practical implications by generating data across varying parameter values using the Yule-Simon reinforcement process. This data is then scrutinized using the standard fixed-effects regression model. Intuitively, one might assume that the conventional model would be robust when the rate of new category emergence is low, especially in qualitative datasets dominated by few recurring elements. This notion originates from the idea that there would be fewer category fixed effects to estimate. However, our results indicate a divergence even in these scenarios.
This divergence can be traced back to the reinforcing nature of the category emergence process. Specifically, categories that appear later in the data sequence are less represented, as the likelihood of selecting an existing category is influenced by its past appearances. This pattern gives rise to a large number of data-sparse categories, leading to an increase in the entropy of the inferred parameter distribution with the addition of new data. This phenomenon persists even in situations where a new category emerges only once in every 100 observations, indicative of a relatively stable information structure.
Next, we apply our model to a widely used e-commerce dataset to explore a theory-driven research question about user ratings for fashion products of different colors. We divide our investigation into two main theoretical perspectives : economic factors and psychological factors.
**Economic factors** : Economic theory posits that evaluations of products, averaged over a large-scale dataset of potentially functionally identical items that vary only in color, should be equal in distribution. This stems from the notion that any discernible relationship between customer satisfaction and color should be competed away in a market with free entry and exit (Dixit, 1979; Tirole, 1988). Specifically, if a color is evaluated highly, more products of that color, possibly of lower quality, can be released ; conversely, fewer products may be released for colors evaluated poorly. This occurs because fashion products can generally be produced in any color at nearly the same cost. Consider an example where 'Blue' shirts are received more favorably than 'Black' shirts. Since both can be produced at the same cost, market dynamics involving the entry and exit of products should eliminate this advantage. Formally, in a market characterized by differentiated Bertrand competition with symmetric costs, free entry, and no exit barriers, any advantage provided by a factor that incurs no additional cost, such as color, should not persist (Singh and Vives, 1984).
**Psychological factors** : While the economic theory provides a rational perspective, the complexity of human behavior introduces additional variables, as seen in the psychological factors. Reasons stemming from consumer behavior and psychology literature may suggest that products of different colors are received differently by consumers. For instance, prior literature indicates that darker colors might elicit a broader spectrum of emotions, both positive and negative (Valdez and Mehrabian, 1994). They are also more attention-grabbing, possibly leading to polarized opinions (Gorn et al., 1997). The cultural significance of colors, such as black's association with death in Western cultures and with power elsewhere (Ho et al., 2014), further complicates evaluations. Additionally, personal color preferences may lead to diverse reviews. These preferences are influenced by various factors such as culture, personal experiences, and current trends (Palmer et al., 2013). Even the perceived quality of a product may be influenced by its color, with colors deemed high-quality or aesthetically pleasing likely to receive more coherent and positive reviews (Labrecque and Milne, 2013). These factors collectively indicate differential preferences in user reviews and ratings for items of varying colors.
The efficacy of the economic factors hinges on analysts' ability to relate observed outcomes to color references. If analysts fail to infer this relationship, the economic rationale no longer holds, as market participants can only exploit arbitrage opportunities that they can identify. In this scenario, the economic argument may be eclipsed by psychological explanations. Conversely, if the relationship is straightforward to detect, then competitive pressures may dominate.
Crucially, this information is relatively straightforward to infer in the data for frequent colors such as 'Blue' and 'Black'. However, it may be more complex to discern for infrequent colors such as 'Cerise', as previously discussed in Section 1.1. In such cases, the empirical relationship may be challenging to identify using traditional estimators, causing the economic arguments to falter. Thus, the degree to which a systematic and predictable relationship exists between outcomes and color mentions serves as a barometer of the practical relevance of the estimation issues that motivate our research.
To shed light on this debate, we demonstrate the versatility of our proposed methodology by estimating various specifications of our key innovation--a mechanism to include qualitative descriptors like color in a theoretically grounded causal estimation model. We uncover a clear relationship between color and ratings that is indicative of the influence of the psychological factors over the economic. Moreover, by examining the 'Fashion' category on Amazon, we open avenues for practical application and further academic inquiry. This aligns with the intricate intersection of economics, psychology, and consumer behavior that forms the central theme of our investigation.
## 3 Model Formulation
This section lays the groundwork for the mathematical foundations of our proposed framework, focusing on a versatile semiparametric model that strikes a balance between flexibility and interpretability.
We begin by introducing a continuous representation of categories through a Baire space, a mathematical construct that allows for more effective integration of qualitative data. This
representation forms the basis for connecting the category space and representation space, achieved through the construction of a RKHS and a bounded linear operator.
Leveraging the Riesz Representation theorem, we reformulate the model to seamlessly incorporate qualitative data while retaining identifiability. We further detail how constructing the representation map through an initial embedding and the kernel trick enhances efficiency. Finally, we touch upon nonparametric specifications and describe the estimation procedure, culminating in a framework adept at handling dynamic and intricate qualitative data.
Our model builds on the semiparametric framework, a key component in achieving the balance between model flexibility and interpretability that we have outlined. Semiparametric models maintain the virtues of parametric models, including having a finite parameter vector that governs the influence of key explanatory variables (Li and Racine, 2023; Robinson, 1988; Schmalensee and Stoker, 1999). This aspect facilitates interpretation and inference. At the same time, semiparametric models also incorporate nonparametric components that can model complex, nonlinear relationships without strong modeling assumptions, thereby providing greater model flexibility than pure parametric models.
In our application, the parametric linear component captures the conditional mean shifts induced by the membership in qualitative categories, our primary explanatory construct of interest. The nonparametric component flexibly models the effects of any additional covariates. This hybrid approach balances interpretability of the effects of interest with model flexibility to accurately characterize the true data generating process. In addition, by restricting the nonparametric complexity to certain model components, our approach retains more estimation efficiency than fully nonparametric models, which can suffer from the curse of dimensionality. We revisit this theme in Section 3.5, where we discuss the extension of our model to be fully nonparametric in qualitative information.
With this foundation, our model formulation strives to :
1. Craft a continuous representation of categories, aiming for deeper insights and relationships, achieved via embedding in a Baire space and then in a RKHS of category representation functions, thereby smoothly integrating qualitative data.
2. Guarantee model identifiability using the Riesz Representation theorem.
3. Boost computational and econometric efficiency through the use of an initial embedding and the kernel trick.
As some target application domains of our model may not regularly involve functional analysis and abstract mathematics, we offer a more detailed description to make our model and paper more broadly accessible. To maintain focus on the novel aspects of our approach and their applications, we reference established theorems and results without delving into the proofs. In general, these proofs can be found in seminal textbooks such as Berlinet and Thomas-Agnan (2011). Our primary contributions lie in combining insights from these areas of mathematics and statistics with classical formulations of qualitative data estimands and modern embedding methods to develop novel inference techniques.
### Semiparametric Framework : An Overview
We consider semiparametric models of the following form :
\[y_{n}(I_{n},v_{n}) =\alpha+\sum_{c=1}^{C_{n}}\beta_{c}I_{nc}+g(v_{n};\zeta)+\epsilon_{n}, \tag{3}\] \[o_{n}(I_{n},v_{n}) =\mathrm{obs}(y_{n}). \tag{4}\]
The latent variable \(y_{n}\), associated with the \(n^{\text{th}}\) observation, is influenced by two types of variables : the categorical variables derived from qualitative data, represented by binary indicators \(I_{n}=\{I_{nc}\}_{c=1}^{C_{n}}\), and the observed quantitative and categorical variables. The indicators denote the membership of the \(n^{\text{th}}\) observation among the \(C_{n}\) categorical divisions, allowing the model to integrate qualitative data. Importantly, to account for the emergence of new categories and the subsequent expansion of the observed category set, \(C_{n}\) may increase with respect to \(n\).
The second type, represented by \(v_{n}\in\mathcal{V}\), incorporates observed quantitative and categorical variables related to the \(n^{\text{th}}\) observation. We define \(\mathcal{V}\) to be a second countable Hausdorff space, aligning it with other semiparametric models. This specification ensures the separation of distinct points and the existence of a countable base for the topology. \(g(v_{n};\zeta)\) denotes a Lipschitz continuous and smooth function, which is parameterized by \(\zeta\) and accounts for the additional nonlinear influences of the covariates \(v_{n}\) on \(y_{n}\). These are general conditions that can accommodate many typical empirical specifications.
The model's linear component, \(\alpha+\sum_{c=1}^{C_{n}}\beta_{c}I_{nc}\), captures the shift in the conditional mean of \(y_{n}\) resulting from membership in categories derived from the qualitative data. Each coefficient, \(\beta_{c}\), represents the change in the conditional mean of \(y_{n}\) given \(v_{n}\) when the \(c^{\text{th}}\) category replaces the reference category. For identification purposes, we adopt the regularity conditions that the intercept, \(\alpha\), is synonymous with the conditional mean of the reference category, \(\beta_{c}=0\) for the reference category, and \(\beta_{c}\) is finite for all \(c\).
The error term, \(\epsilon_{n}\), accounts for the influence of unobserved variables or random fluctuations on the relationship between \(y_{n}\) and \(o_{n}\). The observed outcome, \(o_{n}\), is derived via an observation function, obs, which links the latent variable \(y_{n}\) to the observed data. In standard linear regression, obs is the identity function, thereby directly linking the latent variable \(y_{n}\) to the observed outcome \(o_{n}\). However, in more specialized regression models such as censored regression or models for limited dependent variables, obs operates as a càdlàg function. Under certain conditions (e.g., in probit, tobit models), inaccurate estimation of \(\beta\) may bias the estimates of \(\zeta\), even if \(\zeta\) is adequately informed by the data.
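The following minimal sketch simulates from Equations (3)-(4) with obs taken to be the identity and then fits the classical dummy-variable regression. The number of categories, the choice \(g(v)=\sin(v)\), and all parameter values are illustrative assumptions; the sketch shows only how the parameter count grows one-for-one with the category set.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_cat = 2000, 50

# Category memberships, a covariate, and a smooth covariate effect g(v) = sin(v).
cats = rng.integers(0, n_cat, size=n)
v = rng.normal(size=n)
beta = rng.normal(scale=0.5, size=n_cat)
beta[0] = 0.0                                # reference category
y = 1.0 + beta[cats] + np.sin(v) + rng.normal(scale=0.3, size=n)
o = y                                        # obs(.) taken to be the identity

# Classical fixed-effects design: intercept, C_n - 1 dummies, and sin(v) as a basis for g.
X = np.column_stack([np.ones(n),
                     (cats[:, None] == np.arange(1, n_cat)).astype(float),
                     np.sin(v)])
coef, *_ = np.linalg.lstsq(X, o, rcond=None)
print(coef[:3])   # intercept and the fixed effects of the first two non-reference categories
```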
Additional restrictions on \(g\) and \(\zeta\) are necessary. For instance, without such restrictions, \(|\zeta|=N\) for \(N\) observations could yield a \(g\) that perfectly fits the data but provides little insight into the true data-generating process. Multiple approaches exist to regularize \(g\). Given our paper's focus on the first type of variables, we abstract from the specific form of restrictions chosen by the researcher except for requiring that Equation (3) is identified conditional on the linear component of the model : \(\zeta\) can be uniquely determined from \(y_{n}\) given \(\alpha+\sum_{c=1}^{C_{n}}\beta_{c}I_{nc}\) and \(v_{n}\).
We do not require the parameters \(\{\beta_{1},\ldots,\beta_{C_{n}}\}\) to be identified in Equation (3). For example, consider a case where each observation belongs to a category that is 'nearly identical' to the preceding one. In this scenario, the classical model is underidentified because
\(C_{n}=n\). The classical model, however, does not use the information that the categories are only 'slightly' different between observations. Our model aims to formalize these latent similarities. Therefore, identification in our model is determined in a different parameter space and is not tied to the parameters of the original model. This adjustment ensures the utility of our model, even when the classical model is underidentified.
### Baire Space of Categories
To develop a model that more effectively integrates qualitative data into quantitative models, we propose a shift from a discrete to a continuous representation. Specifically, we expand the set of categories to form a latent Baire space (a complete pseudometric space) and denote it as \(\mathcal{X}\). We introduce a continuous, linear function \(\beta:\mathcal{X}\rightarrow\mathbb{R}\) that belongs to \(\mathcal{X}\)'s dual space, denoted by \(\mathcal{X}^{*}\), and that corresponds to \(\{\beta_{1},\ldots,\beta_{C_{n}}\}\) in Equation (3) such that \(x_{c}\) represents the \(c^{\text{th}}\) category and \(\beta(x_{c})=\beta_{c}\). This function symbolizes the conditional mean shift in \(y\) when a category \(x\) replaces the reference category.
Since \(\beta\) belongs to \(\mathcal{X}^{*}\), it is linear, additive, and homogeneous in \(x\). This enhanced capacity of a continuous function better captures the complexity and nuances of qualitative data. Moreover, the specification of a continuous function and associated vector space improves inference, as these model elements account for latent similarities in the categories, as opposed to treating the categories as orthogonal elements in the parameter space.
Together, these modifications enable us to refine the representation \(\sum_{c=1}^{C_{n}}\beta_{c}I_{nc}\) in Equation (3) as follows :
\[y_{n}(x_{n},v_{n})=\alpha+\int_{\mathcal{X}}\beta(x)\delta(x-x_{n})\,dx+g(v_{n};\zeta)+\epsilon_{n}, \tag{5}\]
where \(\delta(x-x_{n})\) refers to the Dirac delta function centered at \(x_{n}\), which corresponds to the category of the \(n^{\text{th}}\) observation. The Dirac delta function 'selects' the effect of the specific category \(x_{n}\) corresponding to the \(n^{\text{th}}\) observation.
\(\mathcal{X}\) is constructed to conform to Equation (3) and Equation (5) in three key aspects :
1. We posit that the combination of the latent properties of two categories will correspond to a category with the resultant combined latent properties. Therefore, the addition operation in \(\mathcal{X}\), defined by the union set operation, aligns with the equation : \[\beta(x_{1}+x_{2})=\beta(x^{*}),\] for \(x^{*}\) whose latent properties are the sum of the latent properties of \(x_{1},x_{2}\in\mathcal{X}\).
2. We posit that the scalar multiplication of a category's latent properties will correspond to the category possessing the scalar multiple of those latent properties. Therefore, we require that the scalar multiplication operation in \(\mathcal{X}\) aligns with the equation : \[\beta(kx_{1})=\beta(x^{**}),\ k\in\mathbb{R},\] for \(x^{**}\) whose latent properties are \(k\) times the latent properties of \(x_{1}\in\mathcal{X}\).
3. The pseudometric on \(\mathcal{X}\) is the Euclidean metric on \(\mathbb{R}\) applied to the distance between the mean shifts induced by the two categories. This metric satisfies the three properties of a pseudometric : it is nonnegative, symmetric, and satisfies the triangle inequality. However, unlike a full metric, this pseudometric may not satisfy the identity of indiscernibles. Even if two categories are distinct within the qualitative data, they could yield the same conditional mean shift, resulting in a distance of zero between them. For example, in a model where qualitative data on colors is mapped to the Baire space, two colors might induce the same conditional mean shift.
\(\mathcal{X}\) is the minimal complete pseudometric space that satisfies the properties outlined above. A Baire space and continuous function can always be constructed from a dataset. For example, we could construct a Baire space by distributing the observed categories on the standard basis on \(\mathbb{R}^{C_{n}-1}\). This construction is similar to the way that categorical variables are typically treated in statistical models, wherein each category is associated with a unique dimension in a high-dimensional space. Once we have created this mapping, we can define \(\beta(x)=\sum_{c=1}^{C_{n}-1}\beta_{c}x_{c}\). Here \(\{x_{c}\}_{c=1}^{C_{n}-1}\) is the representation of \(x\) on the standard basis on \(\mathbb{R}^{C_{n}-1}\), the \(C_{n}^{\text{th}}\) category is the reference category, and \(\beta_{c}\) is the coefficient on dimension \(c\).
Such a high-dimensional, orthogonal representation might not be the most efficient way to capture the relationships between categories. Specifically, in many types of qualitative data, the categories might be based on underlying low-dimensional properties. For example, colors are combinations of primary colors, human speech comprises basic phonetic sounds, and brands derive from elements of brand identity (Keller, 2020). In these examples, the minimal complete pseudometric space \(\mathcal{X}\) that satisfies our properties will likely be lower in rank than \(\mathbb{R}^{C_{n}-1}\). It is in such contexts and cases that our research is centered.
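A minimal sketch of the standard-basis construction described above follows. The category names and mean-shift values are hypothetical; the example illustrates the linear functional \(\beta\) on the standard basis and the fact that the induced pseudometric can assign distance zero to distinct categories.

```python
import numpy as np

# Observed categories placed on the standard basis of R^(C_n - 1);
# the last category acts as the reference (the zero vector).
categories = ["Red", "Maroon", "Burgundy", "Navy"]        # "Navy" is the reference
dim = len(categories) - 1
basis = {c: np.eye(dim)[i] for i, c in enumerate(categories[:-1])}
basis["Navy"] = np.zeros(dim)

beta_coef = np.array([0.30, 0.30, 0.25])                  # hypothetical mean shifts

def beta(x):
    """Linear functional beta(x) = sum_c beta_c x_c on the standard basis."""
    return float(beta_coef @ x)

def pseudometric(c1, c2):
    """Distance between categories = Euclidean distance between their induced mean shifts."""
    return abs(beta(basis[c1]) - beta(basis[c2]))

print(pseudometric("Red", "Maroon"))   # 0.0: distinct categories, identical mean shift
```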
### Bounded Linear Bijective Operator
We introduce an RKHS of representation functions, denoted by \(\mathcal{H}\). This space encapsulates the commonalities and variations among categories in the qualitative data, facilitating their seamless integration into the quantitative model. Furthermore, we define a bounded linear bijective operator, denoted by \(T:\mathcal{X}\rightarrow\mathcal{H}\). We introduce this operator to simplify the specification of a linear functional \(L\) on \(\mathcal{H}\) :
\[L(h)=\beta(T^{-1}(h)),h\in\mathcal{H}. \tag{6}\]
The operator \(T\) acts as a bridge, connecting the category space \(\mathcal{X}\) and the representation space \(\mathcal{H}\). Since each representation in \(\mathcal{H}\) corresponds to a unique category in the pre-image space of \(\mathcal{H}\), the functional \(L\) is implicitly defined through the continuous and linear function \(\beta\). This function accounts for the conditional mean shift that occurs when each category supersedes the reference category.
The graph of the operator \(T\), denoted as \(G(T)=\{(x,T(x)):x\in\mathcal{X}\}\), pairs two distinct representations of qualitative data. The first is the abstract representation in \(\mathcal{X}\), which endows the observed discrete categories with a continuous representation. The second is the functional representation in \(\mathcal{H}\). The operator \(T\) in Equation (6) serves as an embedding of \(\mathcal{X}\), an extended set of categories equipped with a pseudometric determined by the outcomes, into \(\mathcal{H}\). Therefore, we refer to \(T\) as the Hilbert space embedding of the categories.
#### 3.3.1 Model Reformulation
The properties of \(T\) and \(\beta\) imply that \(L\) is continuous and linear. Specifically, \(L=\beta\circ T^{-1}\) is a composition of the continuous functions \(\beta\) and \(T^{-1}\) (the inverse of a bounded linear bijection), thus it is also continuous. For all \(h_{1},h_{2}\in\mathcal{H}\) and all scalars \(c\), we have :

\[\begin{aligned} L(ch_{1}+h_{2})&=\beta(T^{-1}(ch_{1}+h_{2}))\\ &=\beta(cT^{-1}(h_{1})+T^{-1}(h_{2}))&&\text{(since }T^{-1}\text{ is linear)}\\ &=c\beta(T^{-1}(h_{1}))+\beta(T^{-1}(h_{2}))&&\text{(since }\beta\text{ is linear)}\\ &=cL(h_{1})+L(h_{2}).\end{aligned}\]
Therefore, by the Riesz representation theorem (for a proof see Akhiezer and Glazman, 2013), there exists a unique vector \(p_{L}\in\mathcal{H}\) satisfying :
\[L(h)=\langle h,p_{L}\rangle_{H},\]
where \(p_{L}\) is the Riesz representer of \(L\).
This element of the representation space fulfills a role similar to \(\beta\) in Equation (5) and enables us to refine Equation (3) as follows :
\[y_{n}=\alpha+\langle h_{n},p_{L}\rangle_{H}+g(v_{n};\zeta)+\epsilon_{n}, \tag{7}\]
where \(h_{n}\) denotes the element in \(\mathcal{H}\) corresponding to observation \(n\).
Our construction is informed by recent proposals for the use of Riesz representers in causal estimation (Bennett et al., 2022; Chernozhukov et al., 2021, 2022), yet it deviates significantly. Specifically, prevalent methodologies primarily focus on estimating a single linear functional of a high-dimensional function, such as the classic treatment effect model. In this context, the domain is a high-dimensional function that represents the difference in conditional expectations given treatment and control, and the codomain is the real line, symbolizing the treatment effect.
Unlike the cited papers that focus on single-valued functionals, our paper studies potentially infinite categories embodied in qualitative data. Consequently, while these methodologies apply to our model in the simplest case, where one category is represented in the qualitative data, they do not apply more generally.
As previously highlighted, the classical literature comprehensively addresses situations involving low-dimensional categories, but these are often ill-suited for many qualitative data applications. Therefore, we instead focus on scenarios where the qualitative data is complex, leading to the classical fixed effects estimator becoming either inadmissible or inherently inefficient to the point of impracticality. Moreover, even when the functional approach in extant papers can be extended to a vector-valued functional, the mathematical results would necessitate the codomain of the functional to be finite-dimensional, and likely even of fixed dimensionality. Our method uniquely accommodates the case of dynamically growing sets of categories through category emergence, a key consideration in our target applications.
### Constructing the Representation Map
RKHSs possess several fundamental properties that have been extensively discussed in the literature (Scholkopf et al., 2002). One of the pivotal properties is that for an RKHS
defined by the kernel \(k\),
\[h(x)=\langle h,k(x,\cdot)\rangle,\]
for any point \(x\) in the domain and any function \(h\) within the RKHS (Aronszajn, 1950). Building on this property, the value of \(p_{L}\)--the Riesz representer of a continuous linear functional \(L\)--at point \(x\) is expressed as :
\[p_{L}(x)=\langle p_{L},k(x,\cdot)\rangle.\]
Taking \(k(x,\cdot)\) as \(h\), we get :
\[L(k(x,\cdot))=\langle k(x,\cdot),p_{L}\rangle.\]
From this, we can conclude that \(p_{L}(x)=L(k(x,\cdot))\). In the context of our model, the Riesz representer \(p_{L}\) of the functional \(L\) defined on \(\mathcal{H}\) evaluated at point \(x\) provides insights into how an outcome changes when the category associated with \(k(x,\cdot)\) replaces the reference category. If the Hilbert space embedding, denoted as \(T\), aligns with the feature map (i.e., \(x\to h\implies h=k(x,\cdot)\)), two significant implications emerge :
1. **Consistency** : Every category in the Baire space maps back to itself through \(T^{-1}(k(x,\cdot))\).
2. **Practicality** : We can efficiently construct \(T\).
Our focus is on models of this nature. To bridge the conceptual framework with its implementation, we introduce an initial injective mapping, \(\Gamma:\mathcal{X}\rightarrow\mathcal{Z}\), from the Baire space of categories \(\mathcal{X}\) to a numerical representation in a complete inner product space, \(\mathcal{Z}\).
Examples of \(\Gamma\) can be found across a wide range of domains. In color theory, an RGB model can act as \(\Gamma\), mapping color categories into numerical representations. In audio processing, the Fourier transform can serve a similar purpose, mapping sound categories into a frequency representation. In more abstract scenarios, like the categorization of Netflix's micro-genres, the textual descriptions serve to define the categories. Phrases such as 'critically-acclaimed emotional underdog movies' and 'British set in Europe Sci-Fi & Fantasy from the 1960s' can be processed through a sophisticated language model to generate numerical representations residing in \(\mathcal{Z}\). Many traditional models and embedding algorithms map non-numerical data onto bounded convex subsets within the Euclidean space, using the dot product (often normalized to yield the cosine similarity). Such algorithms would be candidates for \(\Gamma\).
Although initially designed for other applications, we repurpose these embeddings in our model using transfer learning. To refine these representations, we introduce a subsequent feature map, \(\phi:\mathcal{Z}\rightarrow\mathcal{H}\). In line with traditional kernel methods, we implicitly define \(\phi\) using the kernel function \(k\), thereby avoiding direct computation :
\[\phi(x)=k(x,\cdot),\]
for any point \(x\) in the domain.
We focus on \(\mathcal{H}\) that can be constructed through the kernel trick, and where we can further refine Equation (7) to :
\[y_{n}=\alpha+\sum_{i}\alpha_{i}K(z_{n},w_{i})+g(v_{n};\zeta)+\epsilon_{n}. \tag{8}\]
Here \(z_{n}\) denotes the element in \(\mathcal{Z}\) corresponding to observation \(n\), while \(\alpha_{i}\) and \(w_{i}\) denote weights and weight vectors, respectively.
Equation (8) provides identifiable and robust estimates, even in scenarios where Equation (3) might be underidentified due to the unrestricted cardinality of the parameter vector \(\beta=(\beta_{1},\ldots,\beta_{C_{n}})\). Specifically, \(C_{n}\) could increase with \(n\) at a rate that potentially disrupts inference. This is particularly likely in dynamic settings, such as longitudinal studies, where new categories continually emerge, and older categories become obsolete. Unlike \(\beta\), the weights \(\alpha_{i},w_{i}\) possess a fixed cardinality that depends solely on the complexity of the categories, irrespective of \(n\). We refer to \(w_{i}\) as the weight vector to distinguish it from the structural parameters in the original model formulation, and because in the case where \(K\) is the inner product in \(\mathbb{R}^{d}\), \(w_{i}\) acts as a set of weights on the dimensions of the category representations in \(\mathcal{Z}\). In other cases, \(K\) may have a distinct interpretation. For instance, in the case of a radial basis function, \(w_{i}\) may be the center point, yielding the 'ideal point' model (Srinivasan and Shocker, 1973) and the multiple 'ideal point' model (Lee et al., 2002).
It is important to underscore that our proposed model framework is tailored for causal inference. The primary construct of interest is the impact of qualitative variables, represented through the Hilbert space, on the outcome variable. In line with prior work, solutions to risk minimization problems involving both an empirical risk and a regularizer are often expressible as expansions in terms of the training examples (Kimeldorf and Wahba, 1971; Scholkopf et al., 2001). We can effectively apply this principle to Equation (8) by aligning the loss function with the error term as indicated in Equation (4), and integrating an appropriate regularizer like the quadratic regularizer. In these circumstances, an optimal solution resides within the linear hull of the training examples as mapped into the feature space. An approximation to this solution can be computed effectively utilizing well-established methodologies and solutions, circumventing the need to explicitly compute the fixed effects.
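Consistent with the representer-theorem remark above, the following minimal sketch fits the category component of Equation (8) by kernel ridge regression, expanding the solution over the training embeddings and solving a single linear system. The RBF kernel, the simulated embeddings, and the regularization constant are assumptions for illustration, and the covariate component \(g(v_{n};\zeta)\) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 3
Z = rng.uniform(size=(n, d))                       # category embeddings Gamma(x_n) in Z
y = 1.0 + np.sin(2 * np.pi * Z[:, 0]) + rng.normal(scale=0.1, size=n)

def rbf(A, B, gamma=2.0):
    """K(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Representer theorem: the minimizer of squared loss plus a ridge penalty lies in the span
# of k(z_1, .), ..., k(z_n, .); solve (K + lam I) c = y for the expansion coefficients.
lam = 1e-2
K = rbf(Z, Z)
c = np.linalg.solve(K + lam * np.eye(n), y)

def category_effect(z_new):
    """Predicted conditional mean contribution of an embedded category."""
    return float(rbf(np.atleast_2d(z_new), Z) @ c)

print(category_effect(np.array([0.25, 0.5, 0.5])))
```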
### Nonparametric Model
It is feasible to construct \(\mathcal{X}\) and \(\beta\) within a nonparametric model of qualitative information. Specifically, suppose Equation (3) takes the form :
\[y_{n}(I_{n},v_{n})=\alpha+h(I_{n},v_{n};\eta)+\epsilon_{n},\]
where \(h\) is Lipschitz continuous and smooth, and \(\eta\) incorporates the effects captured by \(\beta\) and \(\zeta\) in the semiparametric form. We can arrange the observed categories on the standard basis, and \(v_{n}\) on additional dimensions, such that \(\beta\equiv h\). This construction is isometric; there is no distortion in the embedding of categories and independent variables, even in the case of a fairly general \(h\) where we may otherwise have anticipated distortion, because we have access to \(\beta\) in addition to \(\mathcal{X}\), and the pseudometric on \(\mathcal{X}\) is defined by applying the Euclidean metric to \(\beta(x)\) for \(x\in\mathcal{X}\). It is for these features that we specify \(\mathcal{X}\) as a Baire space, as it enables the distortion-free embedding of \(C\) categories and any additional independent variables in the 'flat' space of \(\mathcal{X}\) for a wide range of functional forms.
However, even though the nonparametric approach is mathematically appealing, it may be challenging to estimate it in the kinds of data for which our model is designed. Specifically, in typical applications, we expect data sparsity and categories to become outdated. Under these conditions, we hypothesize that the parsimonious linear and additive form of the
traditional semiparametric linear model may outperform more flexible specifications in a majority of applications.
### Model Estimation Procedure
Previous sections established this model's theoretical consistency with qualitative data, a known challenge for conventional econometrics, and its adaptability through diverse kernel functions that represent categorical data. This section details practical implementation, highlighting the integration of transfer learning and established econometric techniques like regression.
1. **Baseline Representation :** Choose a baseline representation using a mapping \(\Gamma\) and kernel \(K\) within the RKHS framework. \(\Gamma\) could leverage existing models, transfer learning, or feature extraction. The model is a linear regression with \(\alpha\) representing the baseline category and \(\sum_{i}\alpha_{i}K(z_{n},w_{i})\) a linear additive term.
2. **Determination of \(\alpha_{i}\) and \(w_{i}\) :** Estimate \(\alpha_{i}\) and \(w_{i}\) based on \(z_{n}\), \(\Gamma\), and \(K\). Potential estimation techniques include Maximum Likelihood Estimation, Bayesian methodologies, or the Method of Moments. For linear kernels, \(z_{n}\) can be treated as covariates in the regression, with \(\sum_{i}\alpha_{i}w_{i}\) as coefficients.
3. **Kernel Function Selection :** When applying the kernel trick, it is important that the kernel \(K\) satisfies Mercer's theorem : \(K\) must be positive definite. This property ensures that \(K\) corresponds to an inner product in some feature space induced by a feature map \(\phi\).
4. **Fixed Effects Calculation :** For category \(c\), calculate the fixed effect, denoted \(\beta(c)\), using the equation \(\beta(c)=\sum_{i}\alpha_{i}K(z(c),\hat{w}_{i})\), where \(z(c)\) represents category \(c\) in \(\mathcal{Z}\) and \(\hat{w}_{i}\) are the inferred weights.
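A minimal end-to-end sketch of these steps under a linear kernel follows. The RGB lookup table standing in for \(\Gamma\), the data-generating process, and all parameter values are illustrative assumptions rather than part of our empirical application; the sketch shows how the weight vector is estimated by ordinary regression on the embeddings and how the fixed effect of a never-observed category ('Cerise') is then computed in step 4.

```python
import numpy as np

rng = np.random.default_rng(3)

# Step 1: baseline representation Gamma (hypothetical RGB coordinates) and a linear kernel.
gamma = {"Red": [1.0, 0.0, 0.0], "Maroon": [0.5, 0.0, 0.0], "Navy": [0.0, 0.0, 0.5],
         "Green": [0.0, 0.5, 0.0], "White": [1.0, 1.0, 1.0], "Cerise": [0.87, 0.19, 0.39]}

# Simulated data in which 'Cerise' never appears.
observed = [c for c in gamma if c != "Cerise"]
colors = rng.choice(observed, size=1000)
Z = np.array([gamma[c] for c in colors])
v = rng.normal(size=1000)
y = 0.5 + Z @ np.array([0.3, -0.1, 0.05]) + 0.2 * v + rng.normal(scale=0.1, size=1000)

# Steps 2-3: with a linear kernel, z_n enters the regression directly and
# sum_i alpha_i w_i collapses to a single weight vector w.
X = np.column_stack([np.ones_like(v), Z, v])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
w_hat = coef[1:4]

# Step 4: fixed effect of any category, observed or not, via beta(c) = <Gamma(c), w_hat>.
print({c: round(float(np.array(gamma[c]) @ w_hat), 3) for c in gamma})
```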
## 4 Empirical Process
In this section, we introduce and illustrate a novel data generating process that uncovers key limitations of traditional fixed effects regressors in handling complex categorical data. Through an analogy we term 'Fido's Ball', we employ the Yule-Simon process to highlight scenarios where conventional approaches struggle, underscoring the need for our proposed model that handles such complex categorical data more effectively. We delve into the empirical process to expose the underlying issues impacting traditional techniques, including data sparsity and the emergence of new categories. Subsequent sections will empirically demonstrate these challenges through simulations and a real-world dataset, further revealing the advantages of our method, as well as establishing the convergence and stability of our proposed model and estimator.
### Yule-Simon Process
The Yule-Simon model is a statistical process that yields a distribution that is particularly suited to representing natural phenomena with many rare events. We consider the following setup, as described and analysed by Bertoin (2020). To aid comprehension and facilitate further exploration, we intentionally adopt the same notation as Bertoin. This
provides an opportunity for interested readers to delve deeper into the theoretical results in Bertoin's paper as a means of expanding upon our own findings.
Consider \(U_{1},U_{2},\ldots\) as i.i.d. uniform random variables on the interval \([0,1]\). We define the sequence of (uniform) empirical processes as :
\[G_{n}(u)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(1_{U_{i}\leq u}-u\right),\ u\in[0,1],\]
where this sequence converges in distribution as \(n\to\infty\) towards a Brownian bridge \((G(u))_{0\leq u\leq 1}\) in the Skorokhod metric.
To illustrate the empirical process, consider a sequence \(\varepsilon_{2},\varepsilon_{3},\ldots\) of i.i.d. Bernoulli variables with a fixed parameter \(p\in(0,1)\) that signify when repetitions occur, such that the \(n^{\text{th}}\) step of the algorithm is a repetition if \(\varepsilon_{n}=1\), and an innovation if \(\varepsilon_{n}=0\). For every \(n\geq 2\), we also define \(v(n)\) as a uniform random variable on the set \(\{1,\ldots,n-1\}\), independent of the other variables, which specifies which preceding item is copied when a repetition happens.
We set \(\varepsilon_{1}=0\) for definiteness and recursively construct a sequence of random variables \(\hat{U}_{1},\hat{U}_{2},\ldots\) :
\[\hat{U}_{n}=\begin{cases}\hat{U}_{v(n)},&\text{if }\varepsilon_{n}=1,\\ U_{i(n)},&\text{if }\varepsilon_{n}=0,\end{cases}\]
where
\[i(n)=\sum_{j=1}^{n}(1-\varepsilon_{j})\quad\text{for }n\geq 1\]
denotes the total number of innovations after \(n\) steps. It is implicit that the sequences \((v(n))_{n\geq 2}\), \((\varepsilon_{n})_{n\geq 2}\), and \((U_{j})_{j\geq 1}\) are independent.
This represents a linear reinforcement procedure (i.e., a preferential attachment process), implying that, provided \(i(n)\geq j\) (i.e., the variable \(U_{j}\) has already appeared at the \(n^{\text{th}}\) step of the algorithm), the probability that \(U_{j}\) is repeated at the \((n+1)^{\text{th}}\) step is proportional to the number of its previous occurrences. We henceforth refer to the parameter \(p\) of the Bernoulli variables \(\varepsilon_{n}\) as the reinforcement parameter. While each variable \(\hat{U}_{n}\) adheres to the uniform distribution on \([0,1]\), the reinforced sequence \((\hat{U}_{n})_{n\geq 1}\) is not stationary, exchangeable, or even partially exchangeable. In what follows, we refer to realizations of \((\hat{U}_{n})_{n\geq 1}\) as Yule-Simon draws.
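The recursion above can be simulated directly. The following minimal sketch generates Yule-Simon draws; copying a uniformly chosen earlier item is what produces the linear reinforcement, since the chance of copying a particular value is proportional to its number of previous occurrences. The seed and parameter values are arbitrary.

```python
import numpy as np

def yule_simon_draws(n, p, seed=None):
    """Reinforced sequence U_hat_1, ..., U_hat_n: with probability p repeat a uniformly
    chosen earlier draw (linear reinforcement); otherwise innovate with a fresh Uniform(0,1)."""
    rng = np.random.default_rng(seed)
    draws = np.empty(n)
    draws[0] = rng.uniform()                      # epsilon_1 = 0: the first step innovates
    for i in range(1, n):
        if rng.uniform() < p:                     # repetition step
            draws[i] = draws[rng.integers(0, i)]  # copy one of the first i draws uniformly
        else:                                     # innovation step
            draws[i] = rng.uniform()
    return draws

sample = yule_simon_draws(10_000, p=0.8, seed=0)
print(np.unique(sample).size)                     # roughly (1 - p) * n distinct values
```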
Moreover, while the conclusions of the Glivenko-Cantelli theorem apply, the conclusions of the Donsker theorem do not (Bertoin, 2020). Specifically, for certain parameter values, the sequence of empirical processes \((\hat{G}_{n}(u))_{0\leq u\leq 1}\) defined in Equation (2) converges in law to a Brownian bridge up to a scale factor, whereas for other parameter values, an additional rescaling is necessary and the limit is a Brownian bridge with exchangeable increments and discontinuous paths.
### Fido's Ball
We describe our proposed empirical process through the following analogy. Consider an ongoing game of fetch between Fido, a 'memoryless' Golden Retriever, and his owner, Odif.
This serves as an illustration of the empirical process. In each iteration of this never-ending game, Odif selects a ball using the Yule-Simon process. Though the balls are inherently unordered, they are organized using a map from the interval \([0,1]\) to the type (i.e., category) of ball. This arrangement allows a Yule-Simon draw to specify ball type. Importantly, the process does not require the selection of distinct ball types; the map need not be injective, and many, or potentially all, values within the closed interval may map to the same type. Nonetheless, our research is most relevant to scenarios where the codomain of the map is a diverse and potentially infinite set of ball types.
During each iteration, Fido chases and retrieves the ball thrown by Odif. The joy Fido experiences in the \(i^{th}\) round depends on the type of ball thrown :
\[\text{joy}_{i}=\alpha+\beta(\text{type}(\hat{u}_{i}))+\epsilon_{i}, \tag{9}\]
Here, \(\beta\) is a mean-shift function, \(\hat{u}_{i}\) is the \(i^{\text{th}}\) Yule-Simon draw, and type is a map from the closed interval \([0,1]\) to ball types, such that \(\text{type}(\hat{u}_{i})\) denotes the type of ball thrown in iteration \(i\).
We consider continuous and linear compositional maps of the form \(\tau(\text{type}(\hat{u}_{i}))\) from the closed interval \([0,1]\) to some bounded and closed subset of \(\mathbb{R}^{d}\), where \(d\) is finite. Specifically, in the context of color, consider the RGB color model with the slight modification that the intensity of each of the three primary colors is expressed in the unit interval, so that \(\tau(\text{type}):[0,1]\rightarrow[0,1]^{3}\). We use this map to define \(\beta\) as follows :
\[\beta(\text{type}(\hat{u}_{i}))=w_{r}\text{red}(\hat{u}_{i})+w_{g}\text{green }(\hat{u}_{i})+w_{b}\text{blue}(\hat{u}_{i}),\]
where \(w_{r},w_{g},w_{b}\) is a weight vector, and \(\{\text{red},\text{green},\text{blue}\}\) are constituent submaps of \(\tau(\text{type})\) such that each draw is matched to a category (i.e., color), which is then mapped to a vector of primary color intensities. Note that, although we use the linear kernel to construct \(\beta(\text{type}(\hat{u}_{i}))\) for the sake of clarity in this exposition, more complex kernels could also be employed.
As we discuss in Section 2.3, traditional models are likely to struggle to estimate Fido's joy empirically. First, an infinite number of categories can be introduced through type and the Yule-Simon process. This would require an infinite number of fixed effects, which would lead to concerns about the complexity of the parameter space. These concerns are likely to be amplified in parameter ranges where the rate of emergence of new categories is high as the rate of emergence of new categories may be faster than the rate at which information accumulates on the parameters associated with each category.
Crucially, the traditional estimator may still be consistent if new categories emerge rarely (i.e., the reinforcement parameter is close to 1). Even then, categories that emerge late in the data tend to be reinforced rarely, because the process places more weight on values that occurred more often in the past. Therefore, even if new categories emerge slowly, their rate of emergence may still outpace the rate at which novel information accumulates for the rare categories as the dataset grows.
In this paper, we pose the question: given that Odif is unaware of \(\beta\), but cognizant of \(\tau\) and type (with the former being a natural fact and the latter an ordering), can Odif use \(\tau(\text{type})\) to estimate Fido's joy? More generally, can the qualitative data inference problem be solved by deriving a map of the form \(\tau(\text{type})\) to recast the qualitative data as a fixed-length vector of numerical representations that can then be incorporated in the statistical model?
We answer this question affirmatively. In the context of the linear kernel, we reframe Equation (9) as :
\[\text{joy}_{i}=\alpha+w_{r}\text{red}(\hat{u}_{i})+w_{g}\text{green}(\hat{u}_{i })+w_{b}\text{blue}(\hat{u}_{i})+\epsilon_{i}. \tag{10}\]
Equation (10) is straightforward to estimate and identify from the data. Notably, its analysis uncovers an essential identification condition in the linear model--the coefficients are only identified based on the variations induced in the data along the dimensions of the codomain of type. For instance, if Odif selects balls varying only in the primary color 'red,' then the coefficients for 'blue' and 'green' become underidentified. This requirement is well established in the literature. A coefficient such as \(w_{r}\) can only be determined if variation exists in the corresponding dimension of the independent variables, e.g., the color of the ball. Moreover, this condition is naturally likely to be satisfied in relevant data, as the categories emerge uniformly in it, and are therefore equally likely to vary in each dimension of type (i.e., 'red' balls are equally likely as 'green' or 'blue' balls).
In the case of a non-linear kernel, the estimation is expressed as :
\[\text{joy}_{i}=\alpha+K(\{\text{red}(\hat{u}_{i}),\text{green}(\hat{u}_{i}), \text{blue}(\hat{u}_{i})\};w)+\epsilon_{i}.\]
Here \(w=\{w_{r},w_{g},w_{b},\dots\}\) corresponds to the single component model, and we focus on models of this specific form. More intricate forms that seek a blend of kernels or adopt a compositional strategy in constructing advanced kernels can also be estimated through non-linear least squares (in the case of the specification of an empirical loss function) or likelihood-based methodologies (when delineating the distribution of the error term, among other model elements) provided the data has sufficient resolution to identify the more complex form.
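As a sketch of the non-linear estimation just described, the following fits a single-component Gaussian RBF specification by non-linear least squares; the feature values, true parameters, and the exact kernel parameterization are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
rgb = rng.uniform(size=(2000, 3))                      # placeholder RGB features
true_center, true_gamma = np.array([0.4, 0.5, 0.3]), 0.8
joy = (1.0 + true_gamma * np.exp(-np.sum((rgb - true_center) ** 2, axis=1))
       + rng.normal(scale=0.1, size=2000))

def residuals(theta):
    alpha, gamma, cr, cg, cb = theta
    k = np.exp(-np.sum((rgb - np.array([cr, cg, cb])) ** 2, axis=1))
    return joy - (alpha + gamma * k)                   # non-linear least squares residuals

fit = least_squares(residuals, x0=np.array([0.0, 1.0, 0.5, 0.5, 0.5]))
print(np.round(fit.x, 3))   # estimates of (alpha, gamma, center)
```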
## 5 Model Validation Through Simulations
Simulations play a crucial role in validating our proposed model, particularly when contrasting it with traditional approaches for estimating complex qualitative data. Our validation pursues two main objectives: (1) elucidating the challenges faced by traditional models in handling evolving categorizations and sparse data categories, and (2) revealing the robustness of our model, even in the face of swiftly proliferating new categories. We undertake these evaluations by generating data through the Yule-Simon process, characterized by the following (a code sketch of this data-generating process follows the list):
1. Unique Yule-Simon draws form distinct categories.
2. The reinforcement parameter, ranging between 0.01 and 0.99, signifies the probability that an extant category recurs.
3. The value of each draw is the fixed effect of its category ; hence, all fixed effects fall within the range of 0 to 1.
4. A continuous covariate follows an i.i.d. standard Normal distribution.
5. The error term also follows an i.i.d. standard Normal distribution.
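A minimal sketch of this data-generating process (our own illustrative code, using the reinforcement convention stated above and assuming, as Table 1 suggests, unit coefficients on the category effect and the covariate):

```python
import numpy as np

def simulate(n, p, seed=0):
    """Simulate the validation DGP: Yule-Simon categories whose fixed effect
    equals the draw value, a standard-normal covariate, and normal noise."""
    rng = np.random.default_rng(seed)
    draws = [rng.uniform()]
    for _ in range(n - 1):
        if rng.uniform() < p:                       # an existing category recurs
            draws.append(draws[rng.integers(len(draws))])
        else:                                       # a new category emerges
            draws.append(rng.uniform())
    draws = np.array(draws)
    x = rng.standard_normal(n)                      # continuous covariate
    y = draws + x + rng.standard_normal(n)          # fixed effect = draw value
    return draws, x, y

draws, x, y = simulate(5_000, p=0.99)
print("number of categories:", len(np.unique(draws)))
```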
Subsequent subsections first detail the convergence dynamics for the conservative case, where the reinforcement parameter is set at 0.99. This setting implies that a new category emerges once every 100 observations, with the remaining 99 relating to existing categories. We explore a variety of reinforcement parameters, ranging from a very high rate of category emergence to the lower rate described in the previous simulation. Next, we illustrate the implications of estimation error on inference by showcasing parameter estimates. We compare the performance of variable selection methods, such as LASSO, and regularized parameter estimation methods like Ridge Regression, to traditional estimators on the simulated data. Finally, we present our proposed model and conclude.
As a brief preview, we demonstrate that traditional models struggle to provide consistent estimates when dealing with complex, evolving qualitative data. In contrast, our proposed model maintains robust performance even when new categories emerge rapidly, validating its ability to effectively handle data-sparse categories and shifting categorization systems. The method's superior consistency, precision, and computational efficiency make it a modern alternative to traditional models for qualitative data analysis.
### Convergence Dynamics
In this subsection, we explore the convergence dynamics of the traditional estimator by generating simulated datasets of varying sizes, ranging from \(10^{2.5}\) to \(10^{5}\), spaced evenly on a logarithmic scale. This broad range provides a comprehensive view of the estimator's performance across different scales, capturing both nuances and overarching trends. Our analysis focuses on two key metrics : estimation error and entropy. The absolute estimation error measures the accuracy in recovering true fixed effects, while entropy gauges uncertainty in the inferred estimates. Together, these metrics illuminate the traditional estimator's efficacy in assimilating information.
#### 5.1.1 Estimation Error
Under classical conditions, the discrepancy between true and estimated values should decrease as dataset size increases, reflecting the assimilation of information. To investigate this, we computed the absolute estimation error for each category in the simulated data as the absolute difference between the estimated and actual values. We identified the \(10^{\text{th}}\), \(25^{\text{th}}\), \(50^{\text{th}}\), \(75^{\text{th}}\), and \(90^{\text{th}}\) percentiles of these errors and plotted them against the logarithm (base 10) of the dataset size in Figure 4.
Figure 4 reveals two principal observations. Initially, in smaller datasets, estimation errors remain consistent across percentiles, a predictable outcome given the restricted range of categories and limited observations. However, as the dataset grows, a clear divergence occurs. Errors for the lower percentiles decrease, reflecting the incorporation of new information in the more commonly observed categories, while errors for the higher percentiles increase due to the emergence of new, infrequently reinforced categories, leading to imprecision. Strikingly, the median absolute error remains relatively stable at around 0.5 across dataset sizes. Considering that true fixed effects range between 0 and 1, this consistent median error underscores a potentially non-informative estimation process.
The widening gap in errors between frequently and rarely observed categories highlights the challenges that conventional regression techniques face when handling intricate categorical data. Categories that appear later in the data, particularly those that are seldom observed, do not show improved estimates as the dataset grows. This inefficiency, especially when faced with swiftly emerging and underrepresented categories, underscores a critical challenge and emphasizes the need for innovative approaches that are adaptive to the dynamic nature of category emergence and representation.
#### 5.1.2 Entropy
We proceed to calculate the entropy of the inferred estimates derived from the traditional regression estimator. The entropy of a \(D\)-dimensional multivariate Gaussian with covariance matrix \(\Sigma\) is given by :
\[H(x_{MVN})=\frac{D}{2}\left(1+\log(2\pi)\right)+\frac{1}{2}\log|\Sigma|. \tag{11}\]
Setting aside the Donsker conditions, the inferred distribution of parameters converges to a multivariate Normal with estimable asymptotic mean and covariance using standard techniques. Furthermore, in our simulations, the regression estimator is efficient since the error term follows a standard Normal distribution. Thus, we compute the entropy of the inferred parameter distribution based on this functional relationship.
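A small helper (our own sketch) that evaluates Equation (11) for an estimated covariance matrix:

```python
import numpy as np

def gaussian_entropy(cov):
    """Entropy of a D-dimensional Gaussian with covariance `cov`, Equation (11)."""
    d = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)       # numerically stable log-determinant
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * d * (1.0 + np.log(2.0 * np.pi)) + 0.5 * logdet

print(gaussian_entropy(np.eye(3)))              # 3/2 * (1 + log(2*pi)) ~ 4.257
```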
Figure 5 shows that the entropy of the inferred parameters increases with the sample size, a trend driven by the introduction of new but infrequently reinforced categories within the data. In a scenario where the reinforcement process were to favor newly emerging categories, the model might preserve consistency through the aggregation of information. However, our simulations reveal a different behavior. Specifically, the increase in entropy from the first term in Equation (11) is not adequately offset by the decrease in the second term, causing a divergence in the values. This divergence not only highlights the limitations of conventional modeling techniques when dealing with rapidly evolving categories but also emphasizes the necessity for more sophisticated methods that can accurately interpret and represent the inherent complexity of such categorical data.
Figure 4: Absolute Estimation Error and Dataset Size
### Reinforcement Parameter
Our initial exploration utilized a reinforcement parameter of 0.99, signifying a context where new categories arise infrequently. To deepen our understanding, we extended our study to investigate a spectrum of reinforcement parameter values, holding the dataset size constant at 5,000 observations to ensure a clear manifestation of the data's long-term patterns.
We systematically generated data for reinforcement parameters, incrementing by 0.01, spanning from 0.01 to 0.99. A parameter of 0.01 implies a high frequency of new categories, whereas 0.99, as previously studied, establishes a conservative testing ground for the regression model.
Figure 6 shows that estimation errors are notably higher for lower reinforcement parameters. Surprisingly, even at higher values, where the resulting data is similar in category frequency to many observed datasets and where one might expect less error, significant deviations persist. This divergence from the expected root-n convergence can be traced to two fundamental processes: (1) the emergence of new categories, where lower reinforcement values usher in a surge of new categories, saturating the model; and (2) the reinforcement dynamics of existing categories, where the preferential attachment process makes gaining reinforcement for sporadically reinforced categories challenging, thereby inducing data sparsity.
Figure 5: Entropy and Dataset Size
### Parameter Estimates
Expanding on our prior examination of the reinforcement parameter's impact on estimation accuracy, we now delve into the credibility of our derived estimates. Figure 7 showcases the p-values of estimates for the same simulated dataset as in Section 5.2 that has 5,000 observations and a reinforcement parameter of 0.99. Despite all fixed effects in our simulation being non-zero, the results are alarming : 87.72% of the cases yield a p-value greater than 0.05, and 80.7% surpass 0.1. This indicates that in many cases, the estimated effects are statistically no different from the null hypothesis, casting doubt on the reliability of traditional estimation methods in complex, real-world contexts.
Our dataset comprises a total of 58 parameters: 57 fixed effects and one continuous covariate. Accepted guidelines, like the rule of thumb suggesting at least 10 data points for each estimated parameter (Harrell Jr et al., 1984), as well as more stringent benchmarks such as 1 in 20 or 1 in 50 observations per variable (Steyerberg et al., 2000), would deem our sample size to be sufficiently large. However, instead we find that the data is insufficient to obtain reliable estimates for the vast majority of parameters using the traditional estimator. This discrepancy underscores the limitations of the classical model and cautions against unduly relying on such sample-size heuristics.
Figure 6: Absolute Estimation Error and Reinforcement Parameter Values
Note: Dataset size is 5,000; local polynomial regression fits for various error percentiles are shown in different colors; standard error bands are represented in grey.
Delving into the estimates, Figure 8 depicts a histogram of them. It is concerning to observe that nearly half (27 out of 57, or 47.4%) of these estimates fall outside their true value range, which is between 0 and 1. Such out-of-range estimates align with our earlier concerns regarding data sparsity. Moreover, examining the significant estimates reveals a tendency for them to be large in magnitude, with 5 out of 7 falling outside the [0, 1] range and 1 falling almost at the boundary (0.987). This occurs because smaller effect sizes require more statistical power to resolve. Therefore, it is the larger estimates that are more likely to be significant, either because the underlying effect size is large or due to noise. These findings highlight how models can be misled by sparse categorical data, raising serious questions about the validity of existing estimation techniques. The results underscore the urgency of reevaluating conventional benchmarks and exploring alternative methodologies, as detailed in the subsequent sections.
### Lasso
LASSO, a prominent variable selection method, provides promising solutions for confronting the estimation challenges and overfitting risks that are prevalent in traditional estimators for high-dimensional categorical data. Leveraging regularization procedures, LASSO distinguishes between influential and non-essential predictors, focusing on the selection of the most pertinent variables. By systematically penalizing the magnitudes of coefficients and nullifying those of less impactful variables, LASSO creates a more concise and interpretable model. When applied to dummy variable representations of categories, LASSO presents the potential to differentiate the truly impactful categories from the non-essential ones, effectively addressing the limitations observed in conventional regression methods within our simulated dataset.
Figure 7: Histogram of Regression P-values
From the 57 fixed effects, LASSO provides estimates for only 29 in our 5,000-observation dataset, signifying a drop of about half the fixed effects. On average, each retained effect is backed by roughly 170 observations. Despite this seemingly substantial support, the derived estimates differ significantly from the actual values. Figure 9 depicts our derived estimates against the true values; the significant deviation from the ideal straight line with a slope of 1 illustrates the disparity.
This analysis underscores LASSO's limitations in sparse categorical data contexts such as dynamic category structures and data sparsity. The method might falter in offering consistently accurate insights for retained categories and omit many entirely. This limitation arises from LASSO's reliance on observed data frequencies to determine variable importance, failing to discern latent relationships between categories that could promote information sharing.
### Ridge Regression
Alongside LASSO, we explore Ridge Regression as an alternative regularization approach designed to address the complexities of high-dimensional models. Both LASSO and Ridge Regression aim to curtail overfitting and mitigate the estimation challenges inherent in high-dimensional data, with their main point of divergence residing in their penalization strategies.
Figure 8: Histogram of Regression Estimates
LASSO, utilizing an L1 penalty, can lead to certain coefficients being completely nullified. This effect is particularly concerning in scenarios with sparse categorical data, as it may result in the omission of predictors with latent importance that is not readily apparent from mere data frequencies. In contrast, Ridge Regression adopts an L2 penalty, ensuring that all predictors remain in the model, albeit with potentially shrunken coefficients, and are never entirely eliminated. This differentiation is critical, especially in sparse categorical data contexts where it is known that the categories are likely to exert a non-zero influence, making Ridge Regression an appealing alternative for capturing nuanced category relationships often missed by LASSO.
The L2 penalty in Ridge Regression suggests that the optimal penalty parameter, \(\lambda\), might differ from its counterpart in LASSO. Therefore, as with LASSO, we employ cross-validation to pinpoint \(\lambda\).
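The following sketch shows how the cross-validated LASSO and Ridge fits described here could be run on dummy-encoded categories; it assumes scikit-learn and reuses the simulated `draws`, `x`, `y` from the earlier simulation sketch, with each distinct draw value treated as a category.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV, RidgeCV

# Dummy-encode the categories (one column per distinct Yule-Simon draw).
cats = pd.Series(draws).astype("category")
X = np.column_stack([pd.get_dummies(cats, drop_first=True, dtype=float).to_numpy(), x])

lasso = LassoCV(cv=5).fit(X, y)                           # L1: some coefficients set to zero
ridge = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X, y)  # L2: coefficients shrunk, none dropped

print("LASSO kept", np.sum(lasso.coef_[:-1] != 0), "of", X.shape[1] - 1, "fixed effects")
print("Ridge penalty chosen by cross-validation:", ridge.alpha_)
```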
The scatter plot in Figure 10 illustrates the relationship between our derived estimates from Ridge Regression and the actual values. While there's a discernible departure from an ideal line with a slope of 1, Ridge Regression offers an alternative perspective when juxtaposed with LASSO. This suggests that under specific conditions, Ridge Regression might be more adept at unveiling subtle relationships within the data.
Interestingly, the line illustrating the relationship between the estimated coefficients and the true values in both LASSO and Ridge Regression has a slope that closely approaches 0, with a positive intercept. This behavior indicates that both estimators, in an effort to minimize estimation variance, gravitate towards assigning the mean of the actual fixed effect values (0.5) to all fixed effects. Consequently, this strategy may render the estimates somewhat uninsightful, as the deviations of the category coefficients (i.e., the fixed effects) from the mean become relatively inconsequential.
Figure 9: Scatter Plot of LASSO Estimates
### Rule-of-Thumb Aggregation
To address the complexity posed by a vast number of categories, another viable strategy is category aggregation. This method entails combining several low-frequency categories into a broader, unified category. As a general guideline, we merge any category containing fewer than 5 observations, given the low probability of accurate estimation for such categories. Similar rule-of-thumb-based aggregation strategies are prevalent in the social sciences literature.
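A sketch of this rule-of-thumb aggregation, assuming the category labels are held in a pandas Series; categories observed fewer than 5 times are pooled into a single 'other' level before dummy encoding.

```python
import pandas as pd

def aggregate_rare(labels, min_count=5, other="other"):
    """Pool categories observed fewer than `min_count` times into one level."""
    counts = labels.value_counts()
    keep = counts[counts >= min_count].index
    return labels.where(labels.isin(keep), other)

labels = pd.Series(["a", "a", "a", "a", "a", "b", "b", "c"])
print(aggregate_rare(labels, min_count=3).value_counts())
```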
Figure 11 presents a scatter plot of the estimates against the true values. As indicated by the regression line, these estimates demonstrate consistency to the extent that the slope of the line approximates 1, enhancing the credibility of the analysis. However, only 10 out of 57 categories were retained in this procedure. Consequently, the results of the analysis are drastically less informative than those derived from LASSO or Ridge Regression.
Moreover, only 4 out of 57 fixed effects are significant--6 fixed effects were estimated but are nonsignificant, in addition to the 47 dropped fixed effects. In our current analysis, we study categories whose incidence probability comprises independent and conditionally independent processes as described in Section 4. If the presence of a dropped (infrequent) category is systematically associated with the presence of an included category, a bias may manifest in the estimates, which would then be apparent as a departure of the regression line's slope from 1. The potential for such correlations underscores a further limitation of the category aggregation approach.
Figure 10: Scatter Plot of Ridge Regression Estimates
### Proposed Method
The results of the analysis conducted using our proposed model are presented in Table 1. The Yule-Simon Draws and the Continuous Covariate yield estimated coefficients of 0.964 and 1.009 respectively, suggesting precise and consistent estimation given their true values of 1.
Figure 12 presents a scatter plot of the fixed effect estimates against the true values. As both the points and the regression line visually indicate, these estimates demonstrate consistency to the extent that the slope of the line approximates 1.
The improvement in efficiency and specificity is achieved by using a map from categories to Yule-Simon draws. This technique is incorporated to simulate real-life contexts where we anticipate deploying our model. In these settings, analysts are typically aware of each category's description, such as its color, enabling a map from categories to a numerical representation. This mapping allows the creation of a higher-dimensional Hilbert space embedding for the categories, and the estimation of a finite-dimensional parameter vector sufficient to identify the category fixed effects. Through this structured approach, our model gains efficiency and maintains estimator convergence where traditional estimators fall short, as detailed earlier.
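For completeness, here is a sketch of the proposed estimation on the simulated data from the earlier sketch: because each category's fixed effect equals its known draw value times an unknown scalar, the regression reduces to two continuous covariates, as reported in Table 1. Variable names follow the simulation sketch above.

```python
import numpy as np

# Design matrix: intercept, Yule-Simon draw (the known representation), covariate.
X = np.column_stack([np.ones_like(y), draws, x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, draw coefficient, covariate coefficient:", np.round(coef, 3))
# A category's fixed effect is then recovered as coef[1] * draw_value.
```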
Moreover, crucially, the extent to which category fixed effects are found to be significant is decoupled from the effect size. This is because the coefficient on the Yule-Simon draw converges with increasing sample size, and the fixed effect in this model is the product of the known draw and the inferred coefficient. Therefore, categories with much smaller fixed effects are likely to be significant in this model. In essence, by introducing additional model structure, we convert the discrete estimation problem into a more efficient continuous estimation framework.
Figure 11: Scatter Plot of Rule-of-Thumb Estimates
\begin{table}
\begin{tabular}{l c}
\hline \hline
 & _Dependent variable:_ \\
\cline{2-2}
 & y \\
\hline
Yule-Simon Draws & 0.964*** \\
 & (0.063) \\
Continuous Covariate & 1.009*** \\
 & (0.014) \\
\hline
Observations & 5,000 \\
R\({}^{2}\) & 0.511 \\
Adjusted R\({}^{2}\) & 0.511 \\
Residual Std. Error & 1.009 (df = 4998) \\
F Statistic & 2,615.147*** (df = 2; 4998) \\
\hline \hline
_Note:_ & *p\textless{}0.1; **p\textless{}0.05; ***p\textless{}0.01 \\
\end{tabular}
\end{table}
Table 1: Regression Results using the Proposed Method
Figure 12: Scatter Plot of Estimates from Proposed Procedure
Furthermore, as the model is stable and the empirical process meets the classical conditions, the convergence of the coefficient estimates takes place across all categories and the entirety of the data. This includes both the elements that relate to a category and the elements that relate to other categories. In sum, these two features ensure that the fixed effect estimates are likely to be both consistent and precise in a wide variety of settings.
### Conclusion
Our simulations demonstrate the limitations of applying traditional regression models to estimate complex and evolving categorical data. Specifically, we find that traditional estimators struggle to provide accurate and consistent estimates for categories that are sparse or emerging. This stems from two key issues :
First, when new categories emerge rapidly, the model becomes saturated and unable to precisely estimate the effects. Our analysis of varying reinforcement parameter values showed that higher rates of new category introduction lead to increased estimation errors.
Second, even when new categories emerge slowly, categories that are infrequently reinforced remain challenging to estimate. This manifests in diverging estimation errors between common and rare categories as the sample size grows. Our investigation into entropy and parameter credibility further supports the view that accumulating observations does not resolve the deficiencies for sparsely reinforced categories.
Critically, these characteristics are common in many real-world categorical datasets, especially those capturing evolving social constructs and classifications. However, diagnosing the limitations can be difficult. Strategies like variable selection and category aggregation may not fully address the underlying sparse data challenges.
In contrast, our proposed approach demonstrates superior consistency by substituting categorical variables with their numerical representations. This resolves the estimation difficulties by converting the problem into a more efficient continuous estimation framework. The strengths of our model include :
1. Consistent and precise estimates, regardless of category dynamics.
2. Avoidance of instability due to emerging or sparse categories.
3. Computational scalability, as the dimensionality remains fixed.
4. No requirement for ad-hoc solutions such as dropping categories.
In summary, our simulations validate the advantages of our proposed approach over conventional models in estimating intricate and evolving categorical data. The method's consistency, precision, and computational efficiency position it as an attractive modern alternative to traditional regression techniques for qualitative data analysis.
## 6 Empirical Analysis
Our empirical analysis navigates the intricate relationship between color and consumer ratings in the fashion industry. Traditional economic theory posits that color should not influence evaluations of functionally identical items ; yet human psychology and consumer behavior suggest a more complex reality. Colors may elicit emotional responses, convey cultural meanings, and influence personal preferences, thereby affecting product reviews and ratings.
Moreover, the complexity of inferring these relationships from data introduces further uncertainty. Whereas manufacturer and seller competition would imply that any costless factor conferring a quality advantage should be competed away, this explanation hinges on the discoverability of the factor's influence such that it can be exploited by market forces. Yet, despite the centrality of color in marketing and economics, most prior evidence has been directional in scope and confined to behavioral laboratory studies (Labrecque and Milne, 2012). Therefore, an open question remains as to the extent to which any systematic and predictable relationship exists in data from a real-world marketplace.
We examine this question using data from Amazon. Specifically, we investigate the product rating, averaged across all customer reviews for each product. Utilizing the average rating provides a consistent numerical measure of satisfaction on a 1-5 scale. Averaging accounts for individual variability and enables comparison across products. This allows us to assess whether color mentions systematically influence ratings, shedding light on underlying economic and psychological factors.
We represent the influence of color on product ratings through a mathematical model, formulated as follows :
\[r_{i}=\alpha+\gamma K(\mathrm{red}(i),\mathrm{green}(i),\mathrm{blue}(i);w)+ \epsilon_{i}. \tag{12}\]
Here, \(\alpha\) is the intercept, \(\gamma\) is the influence parameter that determines the extent to which the kernel function influences the mean rating \(r_{i}\) of product \(i\), \(\mathrm{red}(i),\mathrm{green}(i),\mathrm{blue}(i)\) are the RGB encodings of product color, \(w\) are the parameters of the kernel function \(K\), and \(\epsilon_{i}\) is the error term. RGB values are normalized to lie within the \([0,1]\) interval for numerical stability, and in cases where multiple colors are referenced, the corresponding RGB values are averaged.
This model is directly comparable to the data-generating process ('Fido's Ball') presented earlier in Section 4.2. In particular, our specification for the average rating of each product aligns with the framework and method for estimating Fido's joy, as given in Equation (10). The mathematical representation of color in the RGB encoding parallels the selection and mapping of balls in the game of fetch between Fido and Odif, guided by the Yule-Simon draws. The versatility and multifaceted nature of the kernel functions, and their applicability to marketing and economic models, resonate with the various types of balls and their impact on Fido's joy. A direct alignment between the process analogy and empirical estimation ensures a coherent and consistent bridge between our theoretical results and empirical analysis.
A key advantage of our proposed method is its compatibility with a broad array of kernel functions. This flexibility makes our method a versatile tool for addressing challenges in the realm of categorical variables, regardless of the specific characteristics of the dataset
under consideration. To make the connection between our focus on color and the broader methodology more explicit, we detail below the specific kernel functions that correspond to well-known models in marketing and economics, demonstrating how they can be applied to our analysis.
First, we consider the linear kernel, which translates to the dot product. This form of the model corresponds to a regression with the inclusion of representations from \(\mathcal{Z}\) as covariates. Second, we consider the Gaussian Radial Basis Function (RBF) kernel, which translates to an ideal point model--the center point of the kernel corresponds to the idealized product (i.e., the ideal point), with deviations from the ideal point providing less utility. Third, we consider the multiquadric RBF kernel. This kernel is similar to the Gaussian RBF but distinct in the following way : whereas the Gaussian RBF peaks at the center point and diminishes exponentially with the square of the distance, the multiquadric RBF reaches its nadir at its center point, and increases at the rate of the inverse of the distance moving out from the center. We estimate these models using maximum likelihood estimation, as each corresponds to a tractable functional form--more complex kernels can be estimated using either simulated maximum likelihood or Bayesian methods such as tracing out the posterior using Monte Carlo methods (e.g., the No U-Turn Sampler).
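The following sketch illustrates the three kernel specifications and a Gaussian maximum-likelihood fit (equivalently, non-linear least squares); the `rgb` and `rating` arrays are simulated placeholders rather than the Amazon data, and the exact kernel parameterizations are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def kernel(rgb, center, kind):
    d2 = np.sum((rgb - center) ** 2, axis=1)
    if kind == "linear":
        return rgb @ center                    # dot product (influence is confounded, so fixed)
    if kind == "gaussian":
        return np.exp(-d2)                     # peaks at the center (ideal-point) and decays
    if kind == "multiquadric":
        return np.sqrt(1.0 + d2)               # value 1 at the center, grows with distance

def fit(rgb, rating, kind):
    def nll(theta):                            # Gaussian negative log-likelihood (up to a constant)
        alpha, gamma, cr, cg, cb, log_sigma = theta
        mu = alpha + gamma * kernel(rgb, np.array([cr, cg, cb]), kind)
        sigma = np.exp(log_sigma)
        return 0.5 * np.sum(((rating - mu) / sigma) ** 2) + rating.size * log_sigma
    x0 = np.array([rating.mean(), 0.1, 0.5, 0.5, 0.5, 0.0])
    return minimize(nll, x0, method="Nelder-Mead")

rng = np.random.default_rng(1)
rgb = rng.uniform(size=(1000, 3))              # placeholder normalized RGB values
rating = 3.7 + 0.2 * np.exp(-np.sum((rgb - 0.4) ** 2, axis=1)) + rng.normal(0, 0.5, 1000)
print(fit(rgb, rating, "gaussian").x.round(2))
```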
### Results
Table 2 details our findings from employing linear, Gaussian RBF, and multiquadric RBF kernels. Within the linear kernel model, the influence parameter is confounded with other kernel parameters due to the inherent linearity of the kernel, thus necessitating the setting of this parameter to 1 for the estimation procedure. Our results indicate a highly significant intercept, whereas the coefficient related to the Red variable is nonsignificant. The Green coefficient is significant at the 0.05 level, and the Blue coefficient is significant at the 0.01 level; however, the effect sizes are notably modest. For example, a unit increase in the Blue color value from 0 to 1 (corresponding to 0 to 255 in the original RGB model) leads to an average rating change of -0.048. Although statistically significant, this result is substantively unimportant, given that the ratings variable spans a range of 1 to 5, resulting in a maximum effect size of approximately 0.01%.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & & _Kernels_ & \\ & _Linear_ & _Gaussian RBF_ & _Multiquadric RBF_ \\ \cline{2-4} Intercept & 3.838*** & 3.698*** & 4.221*** \\ Red & \(-\)0.013 & 0.442*** & 0.427*** \\ Green & 0.039** & 0.537*** & 0.552*** \\ Blue & \(-\)0.048*** & 0.276*** & 0.297*** \\ Influence & & 0.211*** & \(-\)0.319*** \\ \hline Observations & 105,272 & 105,272 & 105,272 \\ Log-likelihood & -179329 & -179281 & -179278 \\ \hline \hline _Note :_ & & \({}^{*}\)p\(<\)0.1; **p\(<\)0.05; ***p\(<\)0.01 \\ \hline \end{tabular}
\end{table}
Table 2: Results from Linear, Gaussian RBF, and Multiquadric RBF Kernels
The influence function estimates within the Gaussian RBF and multiquadric RBF models underscore the critical role of a kernel capable of projecting the initial RGB embedding into a significantly higher-dimensional feature space. As detailed in Table 2, the estimates for this parameter are 0.211 and -0.319 in the Gaussian RBF and multiquadric RBF model specifications, respectively. This divergence translates to contrasting behaviors in the kernels : In the Gaussian RBF, the kernel value is 1 at the center point and decreases with the Euclidean distance from this point, as evidenced by the exponential of the negative square of the distance. Conversely, the multiquadric RBF begins with a value of 1 at the center point but increases as the distance from the center point expands. Consequently, the findings in the Gaussian RBF model, where the ratings decrease moving away from the center point (indicated by the positive influence coefficient), are mirrored by the negative influence coefficient of the multiquadric RBF, where the kernel's value augments with increasing distance.
In both the Gaussian RBF and multiquadric RBF models, the center point bears the same substantive interpretation : it signifies the color where the average rating peaks. Specifically, our analysis identifies this color as [113, 137, 70] in the Gaussian RBF model and [109, 141, 76] in the multiquadric RBF model. Colloquially referred to as a shade of dirty olive green, and formally aligned closest to 'Dingley'--an 'unsaturated light cold chartreuse,' this color emerges as the most favored in Amazon fashion. Its popularity can likely be attributed to its versatility across genders, various garments, and occasions, making it a standout choice in our dataset.
Critically, our findings demonstrate the distinct superiority of the two RBF kernel models over the linear kernel, underscoring the importance of allowing for a rich and expansive feature space derived from the RGB representation of color references. Among the two RBF kernels, the multiquadric kernel marginally outperforms the Gaussian RBF in representation, as evidenced by the log-likelihood. This suggests that the decline in expected ratings with distance from the center point manifests more gradually than an exponential decrease and is more aptly characterized by the multiquadric function. Thus, the analysis not only showcases the value of employing nonlinear kernel models but also highlights the nuanced differences between them in accurately capturing the underlying dynamics of color preferences.
More broadly, these results highlight a consistent yet hitherto unexploited influence of color on customer ratings within a highly competitive market of over 100,000 products from various sellers. The fact that such a systematic and enduring impact has not been competed away may indicate that market participants using extant estimators have been unable to decipher this relationship. That is, the data is consistent with the explanation that traditional estimators falter in uncovering this intricate link between a seemingly minor qualitative variable like color and significant market outcomes. Our proposed model seeks to circumvent these limitations by employing additional structure relating to the categories described in the qualitative data, thereby enhancing efficiency and ensuring consistency.
## 7 General Discussion
We introduce a novel framework for integrating qualitative data into quantitative models for causal estimation. This methodology employs functional analysis, embedding observed categories into a latent Baire space, and mapping them through a continuous linear map--a Hilbert space embedding--to an RKHS of representations. By leveraging the kernel trick and
transfer learning, we streamline the estimation process. Validation through extensive simulations and a case study shows that our model outperforms traditional methods, particularly when managing complex and dynamic qualitative information.
We develop our paper in the context of unstructured qualitative data, where challenges such as category dynamics and sparsity are particularly pronounced. However, the underlying mathematical theory and model structure have broader applicability and can be adapted even when categorical variables are directly observed. For example, in a demand model for automobile sales, a car's color might be treated as a categorical variable, given its influence on consumers' choices between different colors. Our model can be applied using the RGB representation of color, paralleling how color might be expressed in unstructured textual data. This adaptability to different data types highlights the potential for our modeling proposals to extend well beyond the confines of unstructured data.
Our research reveals a distinct dichotomy between intuition and mathematical admissibility. Specifically, while some admissible functions in accordance with Mercer's theorem may seem elementary, such as the dot product, other intuitive choices, like the Euclidean norm, contradict Mercer's condition, leading to inconsistency with the inner product in a feature space. This observation underscores the critical role played by the development of a framework establishing clear requirements that correspond to desired properties. Moreover, as the model is compatible with any positive semi-definite kernel, it offers many options for applied researchers to discover the best fitting model for a particular application. Thus, for instance, an analyst may choose from a variety of radial basis functions that both meet Mercer's condition and capture the influence of the Euclidean norm on an outcome of interest.
A limitation of our model is that, although it efficiently handles related categories that can be condensed into a singular schema, it is less effective when dealing with many unrelated or orthogonal categories. Moreover, potential challenges with feature sparsity and the extraction of continuous covariates from qualitative data also point to future model extensions. Specifically, it would be interesting to extend the model to account for feature sparsity in qualitative data, such as through the use of the Representer theorem. In addition, while we can easily extract and map simple variables like price or size using modern tools from qualitative data, intricate information (such as the emotional content in a user review) might require further model adaptations.
We hope these ideas further the use of functional analysis in incorporating qualitative data into causal econometric models. The framework we present here offers a modern approach to harnessing unstructured data that holds significant promise.
This research was supported by the Ministry of Education (MOE), Singapore, under its Academic Research Fund (AcRF) Tier 2 Grant, No. MOE-T2EP40221-0008.
2305.00670
Regularity of powers of path ideals of line graphs
Let $L_n$ be a line graph with $n$ vertices and let $I$ be its $t$-path ideal. It is shown that $I^s$ has a linear resolution for some $s\geq 1$ (or equivalently for all $s\geq 1$) if and only if $I^s$ has linear quotients for some $s\geq 1$ (or equivalently for all $s\geq 1$) if and only if $t\leq n\leq 2t$. In addition, we present an explicit formula for the regularity of $I^s$ for all $s\geq 1$. It turns out it is linear in $s$ from the very beginning.
Jiawen Shan, Dancheng Lu
2023-05-01T05:54:50Z
http://arxiv.org/abs/2305.00670v3
# Regularity of powers of path ideals of line graphs
###### Abstract.
Let \(I\) be the \(t\)-path ideal of \(L_{n}\), the line graph with \(n\)-vertices. It is shown that \(I^{s}\) has a linear resolution for some \(s\geq 1\) (or equivalently for all \(s\geq 1\)) if and only if \(I^{s}\) has linear quotients for some \(s\geq 1\) (or equivalently for all \(s\geq 1\)) if and only if \(t\leq n\leq 2t\). In addition, we present an explicit formula for the regularity of \(I^{s}\) for all \(s\geq 1\). It turns out it is linear in \(s\) from the very beginning.
Key words and phrases:regularity; linear resolution; path ideals; powers; line graphs
## 1. Introduction
Let \(R=\mathbb{K}[x_{1},\ldots,x_{n}]\) be the polynomial ring in variables \(x_{1},x_{2},\ldots,x_{n}\) over a field \(\mathbb{K}\). For a homogeneous ideal \(I\subset R\), the Castelnuovo-Mumford regularity (regularity for short) is an important algebraic invariant which measures its complexity. The study of the regularity of powers of homogeneous ideals began with a celebrated result, which was independently proved by Cutkosky-Herzog-Trung and Kodiyalam in [12] and [18] respectively. This result states that the regularity of \(R/I^{s}\) is asymptotically a linear function in \(s\), i.e., there exist integers \(d,e\) and \(s_{0}\) such that \(\operatorname{reg}\,R/I^{s}=ds+e\) for all \(s\geq s_{0}\). Research in this direction can be roughly divided into two classes. One is to understand the constant \(e\) and the minimum power \(s_{0}\) starting from which \(\operatorname{reg}\,R/I^{s}\) becomes a linear function, for some special or general ideals \(I\), see for instance [10, 15]. The other is to bound or provide explicit formulas for \(\operatorname{reg}\,R/I^{s}\) for some special ideals, see e.g. [21, 9].
The edge ideal was defined and studied by Villarreal in [23]. Let \(G\) be a finite simple graph with vertex set \(V=\{x_{1},\cdots,x_{n}\}\) and edge set \(E\). The ideal generated by all quadratic monomials \(x_{i}x_{j}\) such that \(\{x_{i},x_{j}\}\in E\) is called the _edge ideal_ of \(G\), denoted by \(I(G)\). The regularity of powers of edge ideals has been investigated extensively and many exciting results appear, see for instance [2, 4, 5, 9, 17] and the references therein.
In 1999, Conca and De Negri generalized the definition of an edge ideal and first introduced the notion of a \(t\)-path ideal in [11]. For an integer \(2\leq t\leq n\), a _\(t\)-path_ of \(G\) is a sequence of \((t-1)\) distinct edges \(\{x_{i_{1}},x_{i_{2}}\},\{x_{i_{2}},x_{i_{3}}\},\cdots,\{x_{i_{t-1}},x_{i_{t}}\}\), denoted by \(\{x_{i_{1}},x_{i_{2}},\ldots,x_{i_{t}}\}\). The _\(t\)-path ideal_\(I_{t}(G)\) associated to \(G\) is the squarefree monomial ideal
\[I_{t}(G)=(x_{i_{1}}\cdots x_{i_{t}}\mid\{x_{i_{1}},\ldots,x_{i_{t}}\}\text{ is a $t$-path of $G$})\]
in the polynomial ring \(R\). In the recent years, some algebraic properties of path ideals have been investigated extensively, see for instance [1, 7, 8, 16, 19, 20].
However, little is known about the powers of path ideals. In this note we focus on powers of path ideals of line graphs. A line graph of length \(n\), denoted by \(L_{n}\), is the graph with vertex set \(\{x_{1},\ldots,x_{n}\}\) and edge set \(\{\{x_{j},x_{j+1}\}\mid j=1,\ldots,n-1\}\). That is, \(L_{n}\) is the path \(x_{1}-x_{2}-\cdots-x_{n}\).
Recently, Balanescu-Cimpoeas [6] obtained an explicit formula for the depth of powers of path ideals of line graphs. The aim of this paper is to explore the equivalent conditions for powers of such ideals to have a linear resolution, and to provide a regularity formula for powers of the same class of ideals, see Theorem 2.2 and Theorem 3.6 respectively.
Throughout this note, we let \(L_{n}\) be a line graph of length \(n\) and \(I_{t}(L_{n})\) the \(t\)-path ideal of \(L_{n}\). Let \(G(I_{t}(L_{n}))=\{u_{1},u_{2},\ldots,u_{n-t+1}\}\) denote the minimal monomial generating set of \(I_{t}(L_{n})\) in which \(u_{i}=x_{i}x_{i+1}\cdots x_{i+t-1}\) for \(i=1,\ldots,n-t+1\).
## 2. When do powers of \(I_{t}(L_{n})\) have a linear resolution?
In this section, we study when the powers of \(I_{t}(L_{n})\) have a linear resolution and compute graded Betti numbers for such ideals.
Let \(I\) be a monomial ideal generated in a single degree and \(G(I)\) the minimal set of monomial generators of \(I\). Recall from [22] that \(I\) is _quasi-linear_ if for each \(u\in G(I)\), the colon ideal \(I\backslash_{u}:u\) is generated by variables. Here, \(I\backslash_{u}\) is the ideal generated by monomials in \(G(I)\setminus\{u\}.\) It was shown in [22] that if \(I\) has a linear resolution then \(I\) is quasi-linear. Hence we have the following implications:
\[I\text{ has linear quotients}\Longrightarrow I\text{ has a linear resolution for every field }\mathbb{K}\]
\[\Longrightarrow I\text{ has a linear resolution for some field }\mathbb{K}\Longrightarrow I\text{ is quasi-linear.}\]
Restricting to powers of \(t\)-path ideals of line graphs, we can show that the converses of the above implications also hold true. We begin with the following lemma.
**Lemma 2.1**.: _Let \(\alpha=u_{1}^{a_{1}}u_{2}^{a_{2}}\cdots u_{n-t+1}^{a_{n-t+1}}\) and \(\beta=u_{1}^{b_{1}}u_{2}^{b_{2}}\cdots u_{n-t+1}^{b_{n-t+1}}\) be monomials in \(G(I_{t}(L_{n})^{s})\). Then \(\alpha=\beta\) if and only if \(a_{i}=b_{i}\) for \(i=1,\ldots,n-t+1\). In particular, we have \(|G(I_{t}(L_{n})^{s})|=\binom{s+n-t}{s}\)._
Proof.: For the first statement, it suffices to prove that if \(\alpha=\beta\) then \(a_{i}=b_{i}\) for all \(i\). We proceed by induction on \(s\). The case \(s=1\) is clear. Suppose \(s>1\). Let \(k\) denote the minimal \(i\) such that \(x_{i}\) divides \(\alpha(=\beta)\). Then \(a_{k}>0\), \(b_{k}>0\) and \(a_{i}=b_{i}=0\) for \(i=1,\cdots,k-1\). Since \(u_{k}^{a_{k}-1}u_{k+1}^{a_{k+1}}\cdots u_{n-t+1}^{a_{n-t+1}}=u_{k}^{b_{k}-1}u_{k+1}^{b_{k+1}}\cdots u_{n-t+1}^{b_{n-t+1}}\) belongs to \(G(I_{t}(L_{n})^{s-1})\), it follows by induction that \(a_{k}-1=b_{k}-1\) and \(a_{i}=b_{i}\) for \(i=k+1,\ldots,n-t+1\). This completes the proof.
The last statement follows from the well-known fact that the number of \(\{(a_{1},\cdots,a_{k})\mid a_{1}+\cdots+a_{k}=s,a_{i}\geq 0\text{ for all }i=1,\ldots,k\}\) is equal to \(\binom{s+k-1}{k-1}\).
Let \(\alpha,\beta\) be monomials in \(R\). As usual, we set \(\deg_{k}\alpha:=\max\{i\mid x_{k}^{i}\text{ divides }\alpha\}\) for \(k=1,\ldots,n\), and \(\deg\alpha:=\sum_{k=1}^{n}\deg_{k}\alpha\). For convenience, we use \(\alpha/\beta\) to denote the monomial \(\frac{\alpha}{(\alpha,\beta)}\). Here, \((\alpha,\beta)\) is the greatest common factor of \(\alpha\) and \(\beta\). We now present our first main result of this paper.
**Theorem 2.2**.: _Let \(I=I_{t}(L_{n})\) be the \(t\)-path ideal of \(L_{n}\). Then the following statements are equivalent:_
1. \(I^{s}\) _has a linear resolution for some_ \(s\geq 1\)_;_
2. \(I^{s}\) _has a linear resolution for all_ \(s\geq 1\)_;_
3. \(t\leq n\leq 2t\)_;_
4. \(I^{s}\) _has linear quotients for some_ \(s\geq 1\)_;_
5. \(I^{s}\) _has linear quotients for all_ \(s\geq 1\)_;_
6. \(I^{s}\) _is quasi-linear for some_ \(s\geq 1\)_;_
7. \(I^{s}\) _is quasi-linear for all_ \(s\geq 1\)_._
Proof.: The implications \((2)\Rightarrow(1)\), \((5)\Rightarrow(4)\) and \((7)\Rightarrow(6)\) are immediate, and \((5)\Rightarrow(2)\Rightarrow(7)\) follow from [13, Proposition 8.2.1] and [22, Theorem 2.3]. We only need to prove \((6)\Rightarrow(3)\Rightarrow(5)\).
\((6)\Rightarrow(3)\) Assume on the contrary that \(n\geq 2t+1\). Fix \(s\geq 1\); we will prove \(I^{s}\) is not quasi-linear. To this end, we denote \(\alpha=u_{n-t+1}^{s}\) and let \(J\) be the ideal generated by the monomials in \(G(I^{s})\setminus\{\alpha\}\). Choose any variable \(x_{k}\in J:\alpha\); then there is \(\beta\in G(I^{s})\setminus\{\alpha\}\) such that \(x_{k}=\beta/\alpha=\frac{\beta}{(\alpha,\beta)}\). This implies \(\deg(\alpha,\beta)\) is equal to \(ts-1\), and so we have \(\beta=u_{n-t}u_{n-t+1}^{s-1}\). Hence \(x_{k}=x_{n-t}\), and consequently \(x_{n-t}\) is the unique variable in \(J:\alpha\). Note that \(x_{n-t}\) does not divide \(u_{1}^{s}\); it follows that \(J:\alpha\) is not generated by variables, a contradiction.
\((3)\Rightarrow(5)\) By an easy calculation, we have \((u_{1},\ldots,u_{i-1}):u_{i}\) is generated by the unique variable \(x_{i-1}\) for \(i=2,\ldots,n-t+1\). Hence \(I\) has linear quotients. For the case where \(s\geq 2\), we first observe the following key fact due to \(n\leq 2t\) which we will use later: if \(1\leq k\leq n-t\), then \(\deg_{k}u_{1}=\deg_{k}u_{2}=\cdots=\deg_{k}u_{k}=1\) and \(\deg_{k}u_{k+1}=\cdots=\deg_{k}u_{n-t+1}=0\).
In view of Lemma 2.1, we have
\[G(I^{s})=\{u_{1}^{a_{1}}\cdots u_{n-t+1}^{a_{n-t+1}}\mid a_{1}+\cdots+a_{n-t+ 1}=s,a_{i}\in\mathbb{Z}_{\geq 0}\text{ for all }i=1,\ldots,n-t+1\}.\]
We define a linear order on \(G(I^{s})\) as follows: if \(\alpha=u_{1}^{a_{1}}\cdots u_{n-t+1}^{a_{n-t+1}}\) and \(\beta=u_{1}^{b_{1}}\cdots u_{n-t+1}^{b_{n-t+1}}\) are elements of \(G(I^{s})\), then
\[\alpha<\beta\Longleftrightarrow\text{ there exists }i>0\text{ such that }a_{1}=b_{1},\ldots,a_{i-1}=b_{i-1}\text{ and }a_{i}<b_{i}.\]
With respect to this order, \(u_{n-t+1}^{s}\) and \(u_{1}^{s}\) are the least element and greatest element of \(G(I^{s})\) respectively.
Denote by \(q\) the cardinality of \(G(I^{s})\) and let \(\alpha_{1},\alpha_{2},\ldots,\alpha_{q}\) be all the elements of \(G(I^{s})\) such that \(\alpha_{q}>\alpha_{q-1}>\cdots>\alpha_{1}\). We want to show that for each \(1\leq i\leq q-1\), the colon ideal \((\alpha_{q},\ldots,\alpha_{i+1}):\alpha_{i}\), which is denoted by \(C_{i}\), is generated by variables.
Fix \(i\) and assume \(\alpha_{i}=u_{1}^{a_{1}}\cdots u_{n-t+1}^{a_{n-t+1}}\). We may write the support set \(\{1\leq j\leq n-t+1\mid a_{j}>0\}\) as \(\{j_{1},j_{2},\ldots,j_{g}\}\) with \(j_{1}<j_{2}<\cdots<j_{g}\). We claim that \(C_{i}=(x_{j_{1}-1},\ldots,x_{j_{g}-1})\). Here, we use the convention that \(x_{0}=0\). Namely, if \(j_{1}=1\), then \(C_{i}=(x_{j_{2}-1},\ldots,x_{j_{g}-1})\).
Let \(j\in\{j_{1},j_{2},\ldots,j_{g}\}\) with \(j\geq 2\). Since \(\alpha:=u_{1}^{a_{1}}\cdots u_{j-1}^{a_{j-1}+1}u_{j}^{a_{j}-1}\cdots u_{n-t+1}^{a_{n-t+1}}>\alpha_{i}\), we have \(\alpha/\alpha_{i}=x_{j-1}\in C_{i}\). This proves the inclusion \((x_{j_{1}-1},\ldots,x_{j_{g}-1})\subseteq C_{i}\). Conversely, let \(\beta=u_{1}^{b_{1}}\cdots u_{n-t+1}^{b_{n-t+1}}\) with \(\beta>\alpha_{i}\). Then there exists \(\ell<j_{g}\) such that \(a_{1}=b_{1}\), \(\ldots\), \(a_{\ell-1}=b_{\ell-1}\) and \(a_{\ell}<b_{\ell}\). Let \(k\) be the minimal such \(\ell\). If \(k<j_{1}\), then \(\deg_{(j_{1}-1)}\beta\geq b_{k}>\deg_{(j_{1}-1)}\alpha_{i}=0\). Hence \(\beta/\alpha_{i}\in(x_{j_{1}-1})\subseteq(x_{j_{1}-1},\ldots,x_{j_{g}-1})\). Suppose next that \(j_{f}\leq k<j_{f+1}\) for some \(f\in\{1,2,\ldots,g-1\}\). Then, in view of the fact mentioned at the beginning of this proof, we have
\[\deg_{(j_{f+1}-1)}\beta=b_{1}+\cdots+b_{(j_{f+1}-1)}>a_{1}+\cdots+a_{(j_{f+1}- 1)}=\deg_{(j_{f+1}-1)}\alpha_{i},\]
and so \(x_{(j_{f+1}-1)}\) divides \(\beta/\alpha_{i}\). It follows that \(\beta/\alpha_{i}\in(x_{(j_{f+1}-1)})\subseteq(x_{j_{1}-1},\ldots,x_{j_{g}-1})\) and thus the claim follows. This completes the proof.
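The ordering argument can be checked by brute force for small parameters. The following verification sketch (our own code, not part of the paper) lists the exponent vectors of \(G(I^{s})\) in the order defined above and tests whether each colon ideal is generated by variables; for \(n\geq 2t+1\), a failure of this particular order is consistent with, though not by itself a proof of, Theorem 2.2.

```python
import itertools
from math import comb

def has_linear_quotients_in_this_order(n, t, s):
    """Brute-force check of the ordering used in the proof of Theorem 2.2."""
    # u_1,...,u_{n-t+1} of the t-path ideal of L_n, as exponent vectors in x_1,...,x_n
    paths = [[1 if j <= k < j + t else 0 for k in range(n)] for j in range(n - t + 1)]
    # exponent tuples (a_1,...,a_{n-t+1}) with sum s, listed in the order of the proof
    combos = sorted(a for a in itertools.product(range(s + 1), repeat=n - t + 1) if sum(a) == s)
    mons = [tuple(sum(ai * u[k] for ai, u in zip(a, paths)) for k in range(n)) for a in combos]
    assert len(set(mons)) == comb(s + n - t, s)          # Lemma 2.1
    for i, alpha in enumerate(mons):
        colon = {tuple(max(b - a, 0) for b, a in zip(beta, alpha)) for beta in mons[i + 1:]}
        minimal = [m for m in colon
                   if not any(o != m and all(ok <= mk for ok, mk in zip(o, m)) for o in colon)]
        if any(sum(m) != 1 for m in minimal):            # a minimal generator is not a variable
            return False
    return True

print(has_linear_quotients_in_this_order(6, 3, 2))   # t <= n <= 2t: expected True
print(has_linear_quotients_in_this_order(8, 3, 2))   # n >= 2t+1: this order fails, as expected
```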
**Corollary 2.3**.: _Let \(I=I_{t}(L_{n})\) be the \(t\)-path ideal of \(L_{n}\). If \(t\leq n\leq 2t\), then \(\operatorname{reg}\,R/I^{s}=ts-1\)._
We now compute the Betti numbers of \(I_{t}(L_{n})^{s}\) when \(t\leq n\leq 2t\). Recall the elementary fact that the number of tuples \((a_{1},\cdots,a_{k})\) with \(a_{1}+\cdots+a_{k}=s\) and \(a_{i}\geq 1\) for all \(i=1,\ldots,k\) is equal to \(\binom{s-1}{k-1}\).
**Corollary 2.4**.: _Let \(I=I_{t}(L_{n})\) be the \(t\)-path ideal of \(L_{n}\). If \(t\leq n\leq 2t\), then_
\[\beta_{i}(I^{s})=\beta_{i,i+st}(I^{s})=\sum_{k=i}^{n-t}\binom{n-t}{k}\binom{s}{k}\binom{k}{i}.\]
_In particular, \(\operatorname{pd}\,R/I^{s}=\min\{n-t+1,s+1\}\)._
Proof.: For \(1\leq i\leq q-1\), let \(C_{i}\) denote the same ideal as in the proof of (3) \(\Rightarrow\) (5) of Theorem 2.2 and let \(r_{i}\) be the number of variables in \(C_{i}\). Then, for each \(1\leq k\leq n-t\), the number of \(\{1\leq i\leq q\mid r_{i}=k\}\), denoted by \(S_{k}\), is equal to the number of tuples \((a_{1},a_{2},\ldots,a_{n-t+1})\) such that \(a_{1}+\cdots+a_{n-t+1}=s\) and \(|\{2\leq i\leq n-t+1\mid a_{i}>0\}|=k.\) This implies
\[S_{k}=\binom{n-t}{k}\left(\binom{s-1}{k-1}+\cdots+\binom{k-1}{k-1}\right)=\binom{n-t}{k}\binom{s}{k}.\]
It follows from [13, Corollary 8.2.2.] that
\[\beta_{i}(I^{s})=\beta_{i,i+st}(I^{s})=S_{i}\binom{i}{i}+S_{i+1}\binom{i+1}{i}+\cdots+S_{n-t}\binom{n-t}{i}.\]
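As a quick sanity check (ours, not the paper's), the case \(i=0\) of this formula must recover the number of minimal generators \(\binom{s+n-t}{s}\) from Lemma 2.1, which follows from the Vandermonde identity \(\sum_{k}\binom{n-t}{k}\binom{s}{k}=\binom{s+n-t}{s}\); a short numerical verification:

```python
from math import comb

def betti(n, t, s, i):
    """Graded Betti number of I_t(L_n)^s for t <= n <= 2t, per Corollary 2.4."""
    return sum(comb(n - t, k) * comb(s, k) * comb(k, i) for k in range(i, n - t + 1))

for n, t, s in [(5, 3, 2), (6, 3, 4), (8, 4, 3)]:
    assert betti(n, t, s, 0) == comb(s + n - t, s)   # matches |G(I^s)| from Lemma 2.1
print("beta_0 agrees with the generator count in all test cases")
```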
## 3. Regularity of powers of \(I_{t}(L_{n})\)
In this section we give an explicit formula for the regularity of the powers of \(I_{t}(L_{n})\).
We first recall a result in [1] by Alilooee and Faridi. For simplicity we define a function \(\Gamma\) on \(\mathbb{Z}_{\geq 0}\times\mathbb{Z}_{\geq 0}\) as below:
\[\Gamma(n,t)=\left\{\begin{array}{ll}p(t-1),&\text{ if }n=p(t+1)+d\text{ and }0\leq d\leq t-1;\\ (p+1)(t-1),&\text{ if }n=p(t+1)+t.\end{array}\right.\]
With this notion, the second part of [1, Corollary 4.15] can be rephrased in the following form.
**Lemma 3.1**.: _Let \(n,t,p,d\) be non-negative integers such that \(n=p(t+1)+d\) with \(0\leq d\leq t\). Then \(\operatorname{reg}\,R/I_{t}(L_{n})=\Gamma(n,t)\)._
Note that [1, Corollary 4.15] states that \(\operatorname{reg}\,R/I_{t}(L_{n})=\Gamma(n,t)\) under the assumption \(n\geq t\geq 2\), but the formula holds true for all \(n\geq 1\) and \(t\geq 1\). In fact, if either \(n<t\) or \(t=1\), then \(\operatorname{reg}\,R/I_{t}(L_{n})=\Gamma(n,t)=0\). This fact is used in the proof of Theorem 3.6. The following lemma collects some easy but useful observations on \(\Gamma(n,t)\).
**Lemma 3.2**.: _Let \(n,t,a,b\) be integers \(\geq 0\) with \(n\geq t+1\). Then we have_
(1)_\(\Gamma(n-t-1,t)=\Gamma(n,t)-(t-1)\);_
(2)_\(\Gamma(a,t)+\Gamma(b,t)\leq\Gamma(a+b+1,t)\) _for all_ \(a,b\geq 1\)_._
Proof.: (1) Straightforward.
(2) Let \(a=p_{1}(t+1)+d_{1}\) and \(b=p_{2}(t+1)+d_{2}\), where \(p_{1},p_{2}\geq 0\) and \(0\leq d_{1},d_{2}\leq t\). As a consequence we have \(a+b+1=(p_{1}+p_{2})(t+1)+(d_{1}+d_{2}+1)\).
If both \(d_{1}\) and \(d_{2}\) are equal to \(t\), then \(\Gamma(a,t)=(p_{1}+1)(t-1)\) and \(\Gamma(b,t)=(p_{2}+1)(t-1)\). Note that \(a+b+1=(p_{1}+p_{2}+1)(t+1)+t\). Hence \(\Gamma(a+b+1,t)=(p_{1}+p_{2}+2)(t-1)=\Gamma(a,t)+\Gamma(b,t)\).
If exactly one of \(d_{1}\) and \(d_{2}\) is equal to \(t\), we may assume that \(d_{1}=t\) and \(d_{2}\neq t\). Then \(\Gamma(a,t)=(p_{1}+1)(t-1)\) and \(\Gamma(b,t)=p_{2}(t-1)\). Note that \(a+b+1=(p_{1}+p_{2}+1)(t+1)+d_{2}\), so \(\Gamma(a+b+1,t)=(p_{1}+p_{2}+1)(t-1)=\Gamma(a,t)+\Gamma(b,t)\).
If neither \(d_{1}\) nor \(d_{2}\) is equal to \(t\), then \(\Gamma(a,t)=p_{1}(t-1)\) and \(\Gamma(b,t)=p_{2}(t-1)\). Note that
\[a+b+1=\left\{\begin{array}{ll}(p_{1}+p_{2})(t+1)+(d_{1}+d_{2}+1),&d_{1}+d_{ 2}+1\leq t\\ (p_{1}+p_{2}+1)(t+1)+(d_{1}+d_{2}-t),&d_{1}+d_{2}+1\geq t+1\end{array}\right.,\]
it follows that
\[\Gamma(a+b+1,t)=\left\{\begin{array}{ll}(p_{1}+p_{2})(t-1),&d_{1}+d_{2}+1< t\\ (p_{1}+p_{2}+1)(t-1),&d_{1}+d_{2}+1\geq t\end{array}\right..\]
Hence \(\Gamma(a,t)+\Gamma(b,t)\leq\Gamma(a+b+1,t)\). This finishes the proof.
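A brute-force verification sketch (ours, not part of the paper) of both parts of Lemma 3.2 over a small range, using the definition of \(\Gamma\) above:

```python
def gamma(n, t):
    """Gamma(n, t) as defined above: write n = p(t+1) + d with 0 <= d <= t."""
    p, d = divmod(n, t + 1)
    return (p + 1) * (t - 1) if d == t else p * (t - 1)

for t in range(2, 7):
    # Lemma 3.2(1): Gamma(n - t - 1, t) = Gamma(n, t) - (t - 1) for n >= t + 1
    assert all(gamma(n - t - 1, t) == gamma(n, t) - (t - 1) for n in range(t + 1, 60))
    # Lemma 3.2(2): Gamma(a, t) + Gamma(b, t) <= Gamma(a + b + 1, t) for a, b >= 1
    assert all(gamma(a, t) + gamma(b, t) <= gamma(a + b + 1, t)
               for a in range(1, 40) for b in range(1, 40))
print("Lemma 3.2 verified on the tested range")
```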
We record the following well-known lemmas for later use.
**Lemma 3.3**.: _Let \(0{\longrightarrow}M{\longrightarrow}N{\longrightarrow}P{\longrightarrow}0\) be a short exact sequence of finitely generated graded \(S\)-modules. Then_
(1) _\(\operatorname{reg}\,N\leq\max\{\operatorname{reg}\,M,\operatorname{reg}\,P\}\), with equality if \(\operatorname{reg}\,P\neq\operatorname{reg}\,M-1\)._
(2) _\(\operatorname{reg}\,M\leq\max\{\operatorname{reg}\,N,\operatorname{reg}\,P+1\}\), with equality if \(\operatorname{reg}\,N\neq\operatorname{reg}\,P\)._
(3) _\(\operatorname{reg}\,P\leq\max\{\operatorname{reg}\,M-1,\operatorname{reg}\,N\}\), with equality if \(\operatorname{reg}\,M\neq\operatorname{reg}\,N\)._
**Lemma 3.4**.: _[_14_, Lemma 3.2]_ _Let \(A=\mathbb{K}[\boldsymbol{x}]=\mathbb{K}[x_{1},\ldots,x_{n}]\), \(B=\mathbb{K}[\boldsymbol{y}]=\mathbb{K}[y_{1},\ldots,y_{N}]\) and \(S=\mathbb{K}[\boldsymbol{x},\boldsymbol{y}]\) be polynomial rings. Then for nonzero homogeneous ideals \(I\subset A\) and \(J\subset B\), we have_
(1) _\(\operatorname{reg}\,S/(I+J)=\operatorname{reg}\,A/I+\operatorname{reg}\,B/J\);_
(2) _\(\operatorname{reg}\,S/(IJ)=\operatorname{reg}\,A/I+\operatorname{reg}\,B/J+1\)._
**Lemma 3.5**.: _[_6_, Lemma 2.1]_ _Let \(1\leq t\leq n\) and \(s\geq 2\) be some integers. Then_
\[(I_{t}(L_{n})^{s}:u_{n-t+1})=I_{t}(L_{n})^{s-1}.\]
We now are ready to prove the second main result of this paper.
**Theorem 3.6**.: _Let \(I_{t}(L_{n})\) be the \(t\)-path ideal of \(L_{n}\). Then the regularity of \(I_{t}(L_{n})^{s}\) is_
\[\operatorname{reg}\,R/I_{t}(L_{n})^{s}=\Gamma(n,t)+t(s-1).\]
Proof.: If either \(s=1\) or \(t=1\), the result is given by Lemma 3.1 (i.e., [1, Corollary 4.15]). The case \(t\leq n\leq 2t\) follows from Corollary 2.4. Suppose now that \(n\geq 2t+1\) and \(t\geq 2\); we proceed by induction on \(s\).
As before we write
\[I_{t}(L_{n})=(u_{1},u_{2},\ldots,u_{n-t+1}),\]
where \(u_{i}\) denotes the monomial \(x_{i}x_{i+1}\cdots x_{i+t-1}\) for \(1\leq i\leq n-t+1\). We consider the following short exact sequence:
\[0{\longrightarrow}\frac{R}{(I_{t}(L_{n})^{s}:u_{n-t+1})}[-t]\stackrel{{ u_{n-t+1}}}{{\longrightarrow}}\frac{R}{I_{t}(L_{n})^{s}}{\longrightarrow} \frac{R}{(I_{t}(L_{n})^{s},u_{n-t+1})}{\longrightarrow}0.\]
By the induction hypothesis and Lemma 3.5, we have
\[\operatorname{reg}\,\frac{R}{(I_{t}(L_{n})^{s}:u_{n-t+1})}=\operatorname{reg}\, \frac{R}{I_{t}(L_{n})^{s-1}}=\Gamma(n,t)+t(s-2).\]
Hence
\[\operatorname{reg}\,\frac{R}{(I_{t}(L_{n})^{s}:u_{n-t+1})}[-t]=\operatorname{ reg}\,\frac{R}{(I_{t}(L_{n})^{s}:u_{n-t+1})}+t=\Gamma(n,t)+t(s-1).\]
It remains to be shown that \(\operatorname{reg}\,\frac{R}{(I_{t}(L_{n})^{s},u_{n-t+1})}=\Gamma(n,t)+t(s-1)\). To this end, we put \(A_{0}=(I_{t}(L_{n})^{s},u_{n-t+1})\), and then put \(A_{j}=(A_{j-1},u_{n-t-j+1})\) and \(B_{j}=(A_{j-1}:u_{n-t-j+1})\) for \(1\leq j\leq n-t\) recursively. It follows that
\[A_{j}=(A_{0},u_{n-t},\ldots,u_{n-t-j+1})=(I_{t}(L_{n})^{s},u_{n-t+1},\ldots,u_ {n-t-j+1}). \tag{3.1}\]
In particular, we have \(A_{n-t}=(I_{t}(L_{n})^{s},u_{n-t+1},\ldots,u_{1})=I_{t}(L_{n})\), and thus
\[\operatorname{reg}\,\frac{R}{A_{n-t}}=\Gamma(n,t). \tag{3.2}\]
According to Lemma 3.5, for any \(1\leq j\leq n-t\), we have
\[B_{j} = ((I_{t}(L_{n})^{s},u_{n-t+1},\ldots,u_{n-t-j+2}):u_{n-t-j+1}) \tag{3.3}\] \[= (((u_{1},\ldots,u_{n-t-j+1})^{s},u_{n-t+1},\ldots,u_{n-t-j+2}):u_{ n-t-j+1})\] \[= ((u_{1},\ldots,u_{n-t-j+1})^{s-1},u_{n-t+1},\ldots,u_{n-j+2},x_{n -j+1})\] \[= I_{t}(L_{n-j})^{s-1}+(u_{n-t+1},\ldots,u_{n-j+2})+(x_{n-j+1}).\]
Note that the ideal \((u_{n-t+1},\ldots,u_{n-j+2})\) is isomorphic to the \(t\)-path ideal of a line graph of length \(j-1\). By Lemma 3.4(1), Lemma 3.2(2) and the induction hypothesis, it follows that
\[\mbox{reg }\frac{R}{B_{j}}[-t] = \mbox{reg }\frac{R}{I_{t}(L_{n-j})^{s-1}}+\mbox{reg }\frac{R}{I_{t}(L_{j-1})}+t \tag{3.4}\] \[= \Gamma(n-j,t)+t(s-2)+\Gamma(j-1,t)+t\] \[\leq \Gamma(n,t)+t(s-1).\]
We now show that (3.4) becomes an equality when \(j=n-t\). In fact, we have
\[B_{n-t}=I_{t}(L_{t})^{s-1}+I_{t}(L_{n-t-1})+(x_{t+1}).\]
From this it follows that
\[\mbox{reg }\frac{R}{B_{n-t}}[-t] = \mbox{reg }\frac{R}{I_{t}(L_{t})^{s-1}}+\mbox{reg }\frac{R}{I_{t}(L_{n-t-1})}+t \tag{3.5}\] \[= t(s-1)-1+\Gamma(n-t-1,t)+t\] \[= t(s-1)-1+\Gamma(n,t)-(t-1)+t\] \[= \Gamma(n,t)+t(s-1).\]
Here, the third equality follows from Lemma 3.2(1). Now, we consider the following short exact sequences of cyclic \(R\)-modules:
\[0{\longrightarrow}\frac{R}{B_{1}}[-t]\stackrel{{ \cdot u_{n-t}}}{{\longrightarrow}}\frac{R}{A_{0}}{\longrightarrow}\frac{R} {A_{1}}{\longrightarrow}0;\] \[0{\longrightarrow}\frac{R}{B_{2}}[-t]\stackrel{{ \cdot u_{n-t-1}}}{{\longrightarrow}}\frac{R}{A_{1}}{\longrightarrow}\frac{R} {A_{2}}{\longrightarrow}0;\] \[\vdots\] \[0{\longrightarrow}\frac{R}{B_{n-t}}[-t]\stackrel{{ \cdot u_{1}}}{{\longrightarrow}}\frac{R}{A_{n-t-1}}{\longrightarrow}\frac{R} {A_{n-t}}{\longrightarrow}0.\]
In view of (3.2) together with (3.5), and applying Lemma 3.3(1) to the last short exact sequence above, we obtain
\[\mbox{reg }\frac{R}{A_{n-t-1}}=\Gamma(n,t)+t(s-1). \tag{3.6}\]
In view of (3.6) and the bound (3.4), applying Lemma 3.3(1) to the second-to-last short exact sequence above, we obtain
\[\mbox{reg }\frac{R}{A_{n-t-2}}=\Gamma(n,t)+t(s-1).\]
Continuing in this way, we finally obtain
\[\text{reg }\frac{R}{A_{0}}=\Gamma(n,t)+t(s-1).\]
This is what we wanted to show, and the proof is complete.
**Remark 3.7**.: By checking the proof of Theorem 3.6, we obtain not only the regularity of \(\frac{R}{I_{t}(L_{n})^{s}}\), but also the following formula:
\[\text{reg }\frac{R}{(I_{t}(L_{n})^{s},u_{n-t+1},\ldots,u_{j})}=\Gamma(n,t)+t(s-1),\]
for all \(s\geq 1\) and \(2\leq j\leq n-t+1\).
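For concrete values, the closed formula of Theorem 3.6 can be evaluated directly; the helper below is a convenience sketch (the function name is ours) that simply combines \(\Gamma(n,t)\) with the shift \(t(s-1)\).

```python
def reg_power(n, t, s):
    """reg R / I_t(L_n)^s = Gamma(n, t) + t(s - 1), as in Theorem 3.6."""
    p, d = divmod(n, t + 1)
    gamma_nt = (p + 1) * (t - 1) if d == t else p * (t - 1)
    return gamma_nt + t * (s - 1)

print([reg_power(7, 2, s) for s in (1, 2, 3)])  # -> [2, 4, 6]
```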
**Acknowledgment:** This research is supported by NSFC (No. 11971338).
|
2303.13582
|
SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates
|
Neural radiance fields (NeRFs) have enabled high fidelity 3D reconstruction
from multiple 2D input views. However, a well-known drawback of NeRFs is the
less-than-ideal performance under a small number of views, due to insufficient
constraints enforced by volumetric rendering. To address this issue, we
introduce SCADE, a novel technique that improves NeRF reconstruction quality on
sparse, unconstrained input views for in-the-wild indoor scenes. To constrain
NeRF reconstruction, we leverage geometric priors in the form of per-view depth
estimates produced with state-of-the-art monocular depth estimation models,
which can generalize across scenes. A key challenge is that monocular depth
estimation is an ill-posed problem, with inherent ambiguities. To handle this
issue, we propose a new method that learns to predict, for each view, a
continuous, multimodal distribution of depth estimates using conditional
Implicit Maximum Likelihood Estimation (cIMLE). In order to disambiguate
exploiting multiple views, we introduce an original space carving loss that
guides the NeRF representation to fuse multiple hypothesized depth maps from
each view and distill from them a common geometry that is consistent with all
views. Experiments show that our approach enables higher fidelity novel view
synthesis from sparse views. Our project page can be found at
https://scade-spacecarving-nerfs.github.io .
|
Mikaela Angelina Uy, Ricardo Martin-Brualla, Leonidas Guibas, Ke Li
|
2023-03-23T18:00:07Z
|
http://arxiv.org/abs/2303.13582v1
|
# SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates
###### Abstract
Neural radiance fields (NeRFs) have enabled high fidelity 3D reconstruction from multiple 2D input views. However, a well-known drawback of NeRFs is the less-than-ideal performance under a small number of views, due to insufficient constraints enforced by volumetric rendering. To address this issue, we introduce SCADE, a novel technique that improves NeRF reconstruction quality on sparse, unconstrained input views for in-the-wild indoor scenes. To constrain NeRF reconstruction, we leverage geometric priors in the form of per-view depth estimates produced with state-of-the-art monocular depth estimation models, which can generalize across scenes. A key challenge is that monocular depth estimation is an ill-posed problem, with inherent ambiguities. To handle this issue, we propose a new method that learns to predict, for each view, a continuous, multimodal distribution of depth estimates using conditional Implicit Maximum Likelihood Estimation (cIMLE). In order to disambiguate exploiting multiple views, we introduce an original space carving loss that guides the NeRF representation to fuse multiple hypothesized depth maps from each view and distill from them a common geometry that is consistent with all views. Experiments show that our approach enables higher fidelity novel view synthesis from sparse views. Our project page can be found at scade-spacecarving-nerfs.github.io.
## 1 Introduction
Neural radiance fields (NeRF) [26] have enabled high fidelity novel view synthesis from dozens of input views. Such a number of views is difficult to obtain in practice, however. When given only a small number of sparse views, vanilla NeRF tends to struggle with reconstructing shape accurately, due to inadequate constraints enforced by the volume rendering loss alone.
Shape priors can help remedy this problem. Various forms of shape priors have been proposed for NeRFs, such as object category-level priors [14], and handcrafted data-independent priors [27]. The former requires knowledge of category labels and is not applicable to scenes, and the latter is agnostic to the specifics of the scene and only encodes low-level regularity (e.g., local smoothness). A form of prior that is both scene-dependent and category-agnostic is per-view monocular depth estimates, which have been explored in prior work [34, 6]. Unfortunately, monocular depth estimates are often inaccurate, due to estimation errors and inherent ambiguities, such as albedo vs. shading (cf. checker shadow illusion), concavity vs. convexity (cf. hollow face illusion), distance vs. scale (cf. miniature cinematography), etc. As a result, the incorrect or inconsistent priors imposed by such depth estimates may mislead the NeRF into reconstructing incorrect shape and produce artifacts in the generated views.
In this paper, we propose a method that embraces the uncertainty and ambiguities present in monocular depth estimates, by modelling a probability distribution over depth estimates. The ambiguities are retained at the stage of monocular depth estimation, and are only resolved once information from multiple views is fused together. We do so with a principled loss defined on probability distributions over depth estimates for different views. This loss selects the subset of modes of probability distributions that are consistent across all views and matches them with the modes of the depth distribution along different rays as modelled by the NeRF. It turns out that this operation can be interpreted as a probabilistic analogue of classical depth-based space carving [19]. For this reason, we dub our method _Space Carving with Ambiguity-aware Depth Estimates_, or SCADE for short.
Compared to prior approaches to depth supervision [34, 6] that only supervise the moments of NeRF's depth distribution (rather than the modes), our key innovation is that we supervise the modes of NeRF's depth distribution. The supervisory signal provided by the former is weaker than the latter, because the former only constrains the value of an integral aggregated along the ray, whereas the latter constrains the values at different individual points along the ray. Hence, the supervisory signal provided by the former is 2D (because it integrates over a ray), whereas the supervisory signal provided by our method is 3D (because it is point-wise) and thus can be more fine-grained.
An important technical challenge is modelling probability distributions over depth estimates. Classical approaches use simple distributions with closed-form probability densities such as Gaussian or Laplace distributions. Unfortunately these distributions are not very expressive, since they only have a single mode (known as "unimodal") and have a fixed shape for the tails. Since each interpretation of an ambiguous image should be a distinct mode, these simple unimodal distributions cannot capture the complex ambiguities in depth estimates. Naive extensions like a mixture of Gaussians are also not ideal because some images are more ambiguous than others, and so the number of modes needed may differ for the depth estimates of different images. Moreover, learning such a mixture requires backpropagating through E-M, which is nontrivial. Any attempt at modifying the probability density to make it capable of handling a variable number of modes can easily run into an intractable partition function [33, 4, 10], which makes learning difficult because maximum likelihood requires the ability to evaluate the probability density, which is a function of the partition function.
To get around this conundrum, we propose representing the probability distribution with a set of samples generated from a neural network. Such a distribution can be learned with a conditional GAN; however, because GANs suffer from mode collapse, they cannot model multiple modes [12] and are therefore unsuited to modelling ambiguity. Instead, we propose leveraging conditional Implicit Maximum Likelihood Estimation (cIMLE) [30, 21] to learn the distribution, which is designed to avoid mode collapse.
We consider the challenging setting of leveraging out-of-domain depth priors to train NeRFs on real-world indoor scenes with sparse views. Under this setting, the depth priors we use are trained on a different dataset (e.g., Taskonomy) from the scenes our NeRFs are trained on (e.g., ScanNet). This setting is more challenging than usual due to domain gap between the dataset the prior is trained on and the scenes NeRF is asked to reconstruct. We demonstrate that our method outperforms vanilla NeRF and NeRFs with supervision from depth-based priors in novel view synthesis.
In summary, our key contributions include:
* A method that allows the creation of NeRFs in unconstrained indoor settings from only a modest number of 2D views by introducing a sophisticated way to exploit ambiguity-aware monocular depth estimation.
* A novel way to sample distributions over image depth estimates based on conditional Implicit Maximum Likelihood Estimation that can represent depth ambiguities and capture a variable number of discrete depth modes.
* A new space-carving loss that can be used in the NeRF formulation to optimize for a mode-seeking 3D density that helps select consistent depth modes across the views and thus compensate for the under-constrained photometric information in the few view regime.
## 2 Related Work
**Novel View Synthesis and 3D Reconstruction.** Reconstructing the structure of a scene from a few views is a long-standing problem in computer vision with a long literature. Neural Radiance Fields, or NeRF [26], revolutionized the field by showing how to use the weights of an MLP to represent a scene that is rendered using volume rendering [25]. A key innovation was the use of positional encoding [39, 40] to increase the effective capacity of the MLPs that model emitted radiance and density as a function of position and viewing direction. Extensions of NeRF include works on unconstrained photo collections [24], dynamic scenes [22], deformable scenes [29], and reflective materials [1, 41].

Figure 2: **Ambiguities from a single image. We show samples from our ambiguity-aware depth estimates that are able to handle different types of ambiguities. (Top) An image of a chair taken from the top view. Without context of the scene, it is unclear that it is an image of a chair. We show different samples from our ambiguity-aware depth estimates that capture different degrees of convexity. (Bottom) An image of cardboard under bad lighting conditions that captures the albedo vs shading ambiguity, which is also represented by our different samples.**
A critical shortcoming of NeRF is its reliance on having many input views of a scene. Several approaches have been proposed, including adding patch likelihood losses [27], data-driven priors [38, 50], a semantic consistency prior [13], image features [43], or surface [41, 27], occupancy [28], and depth [45, 16, 34] priors. DS-NeRF [6] uses the sparse point reconstructions recovered during Structure-from-Motion to supervise the depth of sparse points in the recovered NeRF. In the same spirit, DDP [34] uses a depth completion network that takes as input a sparse point cloud projected onto an input view, and produces a depth estimate. Both works [16, 34] model depth with uncertainty but only supervise with _moments_ of NeRF's depth distribution. In contrast, our work is able to represent _multimodal_ depth estimates, which can handle the inherent ambiguities of depth estimation, and is able to seek the modes for each depth estimate to make a consistent prediction of the scene structure.
Inspired by traditional multi-view stereo, MVS-NeRF [3] and RC-MVSNet [2] incorporate the use of cost volumes [9] into NeRF. To construct the cost volumes, these works look for agreement between features to find correspondences, which is difficult under the setting of large variations in appearance or viewpoint. In contrast, our approach introduces a novel space carving loss that does not rely on feature correspondences and instead directly defines the loss in 3D.
Another line of works focuses on geometry reconstruction [47, 42, 51] by using the volumetric rendering scheme to learn a neural SDF representation. These works tackle a different problem, modeling geometry reconstruction, unlike NeRFs, whose focus is modeling appearance for novel view synthesis, which is where our work falls.
**Depth Estimation from Single View.** Depth estimation from a single image is a complex task due to multiple ambiguities, including scene scale and shading / albedo ambiguities. Early efforts used MRFs to compute predictions on superpixels [35]. The advent of consumer depth cameras like Kinect [55] enabled the acquisition of larger scale indoor 3D datasets [37], leading to methods that used deep neural networks to predict depth from a single color image [7]. Nonetheless, scaling datasets for depth estimation remained a challenge. Deep3D [46] enabled the creation of stereo views by training on a large dataset of stereo movies, while [8] used the self-supervision of left-right consistency to learn depth from stereo views from driving cars. This approach was extended to videos where the egomotion is also estimated and self-supervision happens across time [56]. MegaDepth [23] uses the depth from SfM reconstructions of internet photos, together with semantic segmentation that conveys ordinal depth supervision cues, i.e. transient objects must be in front of the static scenes. Most recently, multi-task learning has shown promise in training depth estimators that work well on out-of-domain data [32].
In some cases, surface normal estimation has fewer pitfalls than direct depth estimation, as it suffers somewhat less from scale ambiguities. GeoNet [31] jointly estimates depth and surface normals, then uses the geometric relation between them to refine the estimates. LeRes [48] uses a second geometry reasoning module that refines a focal length estimate and the depth itself. Our depth estimation approach is derived from LeReS without the second stage, as the focal length is already estimated during SfM for NeRF scenes.
Most relevant is the work of Kendall and Gal [16], which learns depth estimation in a Bayesian setting, where their objective maximizes the likelihood under a predicted distribution. In our work, we use instead a multimodal depth estimation technique built on cIMLE [21], which makes our prior robust to ambiguities in depth prediction.
## 3 Background
### Neural Radiance Fields (NeRF)
A neural radiance field [26], or NeRF for short, represents a field in 3D space, where each point represents an infinitesimal particle with a certain opacity that emits varying amounts of light along different viewing directions. The opacity at a point is represented as a volume density, and the amount of emitted light is represented as a colour. Mathematically, a NeRF is represented as two parameterized functions, a volume density \(\sigma_{\theta}:\mathbb{R}^{3}\rightarrow\mathbb{R}_{\geq 0}\) and a colour \(\mathbf{c}_{\psi}:\mathbb{R}^{3}\times S^{2}\rightarrow[0,255]^{3}\). The former maps a 3D coordinate \(\mathbf{x}\in\mathbb{R}^{3}\) to a volume density \(\sigma\in\mathbb{R}_{\geq 0}\), and the latter maps a 3D coordinate \(\mathbf{x}\in\mathbb{R}^{3}\) and a viewing direction \(\mathbf{d}\in S^{2}\) to a colour \(\mathbf{c}\in[0,255]^{3}\).
To render a NeRF from a view, we shoot rays from each pixel on the camera sensor and integrate over the product of colour, volume density and visibility along the ray to arrive at each pixel value. Visibility at a point is represented with transmittance, which accumulates the exponentiated negative volume density multiplicatively up to the point. The higher transmittance at a point, the more visible the point is from the camera. Mathematically, if the camera ray is \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\), where \(\mathbf{o}\) is the camera centre and \(\mathbf{d}\) is the ray direction, the pixel value \(\hat{I}_{\theta,\psi}(\mathbf{o},\mathbf{d})\) we would have is:
\[\begin{split}\hat{I}_{\theta,\psi}(\mathbf{o},\mathbf{d}):=\int_{ t_{n}}^{t_{f}}T_{\theta,\mathbf{o},\mathbf{d}}(t)\sigma_{\theta}(\mathbf{o}+t \mathbf{d})\mathbf{c}_{\psi}(\mathbf{o}+t\mathbf{d},\mathbf{d})\;\mathrm{d}t \\ \text{where}\;\;T_{\theta,\mathbf{o},\mathbf{d}}(t)=\exp(-\int_{t_{ n}}^{t}\sigma_{\theta}(\mathbf{o}+s\mathbf{d})\;\mathrm{d}s).\end{split} \tag{1}\]
In the expressions above, \(t_{n}\) and \(t_{f}\) denote the points the ray intersects with the near plane and far plane respectively.
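For concreteness, the rendering integral of Eq. (1) is in practice approximated by quadrature over discrete samples along each ray. The sketch below is a minimal NumPy illustration of that quadrature; `sigma_fn` and `color_fn` stand in for the density and colour networks \(\sigma_{\theta}\) and \(\mathbf{c}_{\psi}\), and the uniform sampling and piecewise-constant discretisation are our simplifying assumptions rather than details taken from the original implementation.

```python
import numpy as np

def render_ray(o, d, sigma_fn, color_fn, t_n, t_f, n_samples=64):
    """Piecewise-constant quadrature of the volume rendering integral (Eq. 1)."""
    t = np.linspace(t_n, t_f, n_samples)                        # depths along the ray
    delta = np.full(n_samples, (t_f - t_n) / (n_samples - 1))   # sample spacing
    pts = o[None, :] + t[:, None] * d[None, :]                  # 3D sample positions
    sigma = sigma_fn(pts)                                       # densities, shape (S,)
    rgb = color_fn(pts, d)                                      # colours,  shape (S, 3)
    alpha = 1.0 - np.exp(-sigma * delta)                        # per-segment opacity
    T = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))   # transmittance
    weights = T * alpha
    return (weights[:, None] * rgb).sum(axis=0)                 # rendered pixel colour
```

The `weights` computed here are exactly the factors \(T_{\theta,\mathbf{o},\mathbf{d}}(t_i)\sigma_{\theta}(\mathbf{o}+t_i\mathbf{d})\) (times the spacing) that reappear below as the discretised ray termination distribution of Eq. (3).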
### Inverse Rendering with NeRF
Given a set of real-world images from known views, inverse rendering aims to find the scene whose rendered images match the real-world images. If we use \(I(\mathbf{o},\mathbf{d})\) to denote the pixel value for the ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\), this problem can be cast as an optimization problem, namely:
\[\min_{\theta,\psi}\sum_{\mathbf{o}}\sum_{\mathbf{d}}\|\hat{I}_{\theta,\psi}( \mathbf{o},\mathbf{d})-I(\mathbf{o},\mathbf{d})\|_{2}^{2} \tag{2}\]
If \(\hat{I}_{\theta,\psi}(\mathbf{o},\mathbf{d})\) is differentiable w.r.t. the parameters \(\theta,\psi\), this problem can be tackled straightforwardly with gradient-based optimization. To enable this, the volume density function \(\sigma_{\theta}\) and the colour function \(\mathbf{c}_{\psi}\) are chosen to be neural networks. As a result, inverse rendering amounts to training the NeRF to reconstruct the scene in a way that is consistent with the images that are given. Once the NeRF is trained, novel view synthesis can be achieved by rendering the NeRF from new, unseen views.
Since this optimization problem is underconstrained, in general many views are needed to reconstruct the scene accurately. Yet, in practical applications, typically only a few views are available, hence priors are needed to provide sufficient constraints. In this paper, we consider priors in the form of monocular depth estimates from each view. The monocular depth estimates are trained on different datasets from the dataset NeRF is trained on to simulate real-world conditions where there is a domain gap between what the prior was trained on and what the NeRF is trained on.
### Ray Termination Distance
In order to leverage depth priors, a natural way is to use them to constrain the ray termination distance. Because NeRF can represent non-opaque surfaces, there is no single ray termination distance. Instead, there is a distribution of ray termination distances. The cumulative distribution function (CDF) of this distribution represents the probability that the ray terminates before reaching a given point. It turns out that the probability density of this distribution \(f_{\theta,\mathbf{o},\mathbf{d}}(t)\) is given by [6]:
\[f_{\theta,\mathbf{o},\mathbf{d}}(t)=T_{\theta,\mathbf{o},\mathbf{d}}(t)\sigma _{\theta}(\mathbf{o}+t\mathbf{d}) \tag{3}\]
## 4 Method
### Supervising Ray Termination Distance
If all surfaces were opaque and the ground truth depth were given, supervising NeRF with the ground truth depth would be straightforward. All that is needed is to make the NeRF's distribution of ray termination distances as close as possible to a delta distribution centred at the ground truth depth. However, when the ground truth depth is not known, we have to use monocular depth estimates as supervision. The challenge is that unlike ground truth depth, monocular depth estimates may not be stereo-consistent, due to estimation error and inherent ambiguity. As a result, it may not be possible to find a scene that is consistent with the depth estimates from every view. Therefore, in order to reconstruct the scene, we must model the uncertainty and ambiguity in the depth estimates, which can be mathematically characterized by distributions. Such ambiguities cannot be resolved from a single view and can often be resolved from multiple views. So we need a method that can fuse together uncertain depth estimates from different views.
A natural way of representing uncertainty is through a probability distribution. When there are ambiguities, the distribution of depth estimates are typically _multimodal_, i.e., their probability densities have disjoint peaks, where each mode represents one interpretation of the scene structure. For example, albedo vs shading ambiguity as shown in Figure 2 can result in multiple plausible interpretations and make the distribution multimodal.
An added challenge arises when non-opaque surfaces are present. In this case, there is no single ground truth depth, and so the distribution of ground truth depths is multimodal. So even after training the NeRF, the distribution of ray termination distances induced by NeRF should be multimodal, as illustrated in the glass walls in Figure 1.
Prior methods compute moments (i.e., mean and/or variance) of the distribution of depth estimates or ray termination distance induced by NeRF. For example, DS-NeRF [6] predicts the mean depth estimate and uses reprojection error as a proxy for the variance. It then minimizes the KL divergence from the distribution of ray termination distances induced by NeRF to a Gaussian whose moments match those of the depth estimates. DDP [34] fits a Gaussian to distribution of ray termination distances and maximizes the likelihood of the depth estimate under the Gaussian.
Distilling complex distributions to moments has the drawback of ignoring their possible multimodality. Because both the distribution of depth estimates and ray termination distances are potentially multimodal, it is important to handle the multimodality.
### Scade
We now present our novel method _Space Carving with Ambiguity-aware Depth Estimates_ (**SCADE**). Our contributions address the existing problems and drawbacks mentioned in the previous section. First, SCADE accounts for the _multimodality_ of both the distributions of monocular depth estimates and ray termination distances that arise due to inherent ambiguities (Sec 4.2.1) and non-opaque surfaces.
Figure 3: Network Architecture for our Ambiguity-aware Depth Estimates.
Second, SCADE is also able to resolve these ambiguities by fusing together information from multiple views through our space carving loss (Sec 4.2.3) that uses the reverse cross entropy loss. Because this is a loss on the distribution rather than moments of the distribution (which was the paradigm in prior work [6, 34]), it achieves supervision in 3D rather than 2D. Third, our loss formulation is sample-based (Sec 4.2.2) and so is computationally efficient to optimize.
#### 4.2.1 Our Ambiguity-aware Depth Estimates
We handle multimodality of monocular depth distributions by introducing our ambiguity-aware depth estimation module to account for ambiguities in depth estimation from a single view (Figure 1, 2). In contrast to existing monocular depth estimation networks [49] that predict single point estimates for depth, we model the inherent uncertainty in monocular depth estimation by representing the distribution as a _set of samples_ generated by a neural network. We propose leveraging conditional Implicit Maximum Likelihood estimation (cIMLE) [21] to learn a multimodal distribution of depth estimates. We chose cIMLE rather than conditional GANs [15] in order to avoid mode collapse, which can lead to a unimodal distribution.
Figure 3 shows our network architecture. Concretely, we combine cIMLE with a state-of-the-art monocular depth estimation network, LeReS [49]. Our ambiguity-aware depth estimation module (\(G\)) takes an input image (\(I\)) and a latent code \(z\sim\mathcal{N}(0,\mathbf{I})\) and outputs a conditional depth sample \(G(I,z)\) for the input image \(I\). To inject randomness into the network, we follow the technique used in [15] and incorporate AdaIn layers into the network backbone that predicts a scale and shift to the intermediate network features. Specifically, we added four AdaIn layers to the encoder of the depth estimation backbone.
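The conditioning mechanism can be pictured with a small PyTorch-style sketch (all module names here are hypothetical; the real backbone is the LeReS encoder with four such layers, which we do not reproduce): the latent code \(z\) is mapped to a per-channel scale and shift that modulates an intermediate feature map, so different draws of \(z\) yield different depth hypotheses for the same input image.

```python
import torch
import torch.nn as nn

class AdaINCondition(nn.Module):
    """Scale-and-shift conditioning of a feature map on a latent code z."""
    def __init__(self, z_dim, n_channels):
        super().__init__()
        self.to_scale_shift = nn.Linear(z_dim, 2 * n_channels)

    def forward(self, feat, z):
        # feat: (B, C, H, W), z: (B, z_dim)
        scale, shift = self.to_scale_shift(z).chunk(2, dim=-1)
        return feat * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

# usage inside a depth estimator G(I, z): sample several latent codes and
# collect one depth hypothesis per code, i.e. z_i ~ N(0, I), depth_i = G(image, z_i).
```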
#### 4.2.2 Sample-based Losses on Distributions
We desire to achieve agreement between the distribution from our ambiguity-aware prior and the ray termination distance distribution from NeRF, so as to constrain the NeRF optimization under the sparse-view set-up using our learned prior. As distributions are continuous, taking their analytical integral is computationally intractable. Thus, in order to define a differentiable loss function for our optimization problem, we resort to a sample-based loss on these two distributions.
**Samples from depth prior.** As previously described, we leverage cIMLE to sample from our ambiguity-aware depth prior. Sampling \(z_{1},...,z_{M}\sim\mathcal{N}(0,\mathbf{I})\), we get depth map samples \(G(I,z_{1}),...,G(I,z_{M})\) for input image \(I\). We denote the corresponding samples that represent the distribution for ray \(\mathbf{r}\) by \(y_{1},y_{2},...,y_{M}\sim G_{\mathbf{o},\mathbf{d}}\).
**Samples from NeRF.** For the distribution given by the ray termination distances modelled by NeRF, we sample from its probability density function \(f_{\theta,\mathbf{o},\mathbf{d}}\) (Eq. 3) with inverse transform sampling. Concretely, we define the probability mass function for ray \(\mathbf{r}\) as \(f_{\theta,\mathbf{o},\mathbf{d}}(t_{i})\) as given in Eq. 3 with samples \(t_{1},...,t_{K}\), where \(t_{i}\in[t_{n},t_{f}]\) for all \(i\). By inverse transform sampling, we first compute the cumulative distribution function (CDF) of the ray termination distribution at samples \(t_{i}\), which we denote as \(F_{\theta,\mathbf{o},\mathbf{d}}(t_{i})=\sum_{j\leq i}f_{\theta,\mathbf{o},\mathbf{d}}(t_{j})\). Thus our samples \(x_{1},x_{2},...,x_{N}\) of ray termination distances from NeRF are then given by:
\[\begin{split}& x_{i}=F_{\theta,\mathbf{o},\mathbf{d}}^{-1}(u_{i}), \\ &\text{where }u_{i}\sim\mathcal{U}(0,1)\end{split} \tag{4}\]
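In code, the inverse transform sampling of Eq. (4) over the discretised ray reduces to a cumulative sum and a searchsorted lookup. The sketch below assumes the unnormalised masses \(f_{\theta,\mathbf{o},\mathbf{d}}(t_i)\) (e.g. the quadrature weights from the rendering pass) are already available; it is an illustration of the sampling step, not the authors' code.

```python
import numpy as np

def sample_ray_termination(t, weights, n_draws):
    """Inverse transform sampling of the discretised ray-termination distribution."""
    pmf = weights / weights.sum()          # normalise f(t_i) into a probability mass
    cdf = np.cumsum(pmf)                   # F(t_i)
    u = np.random.uniform(size=n_draws)    # u_i ~ U(0, 1)
    idx = np.searchsorted(cdf, u)          # invert the CDF
    return t[np.clip(idx, 0, len(t) - 1)]  # samples x_1, ..., x_N
```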
#### 4.2.3 Our Space Carving Loss
We now introduce our novel space carving loss, which utilizes the samples from both multimodal distributions to constrain the NeRF optimization. The goal is to find a subset of modes captured in the monocular depth distributions from each view that are globally consistent. In doing so, we can fuse together the information from the different views, since the inherent ambiguities are only resolved given information from multiple views. This would allow us to find a common shape that is consistent across all views and "snap" object surfaces together.
We thus desire a loss that has **mode seeking** behavior, that is, the modes of the distribution being trained should be a subset of the modes of the distribution that provides supervision. One such loss is the reverse cross entropy, which is the cross entropy from the NeRF ray termination distance distribution to the distribution of our ambiguity-aware depth estimates:
\[H(f_{\theta,\mathbf{o},\mathbf{d}},G_{\mathbf{o},\mathbf{d}})=-\mathbb{E}_{f_ {\theta,\mathbf{o},\mathbf{d}}}[\log G_{\mathbf{o},\mathbf{d}}]. \tag{5}\]
As shown in the IMLE papers [21, 30], this is equivalent to penalizing the minimum of the L2 norm between the samples from the two distributions; see [20] for the full proof. Hence, our space carving loss is given by
\[\mathcal{L}_{\text{space\_carving}}(\mathbf{r})=\sum_{i\in[N]}\min_{j\in[M]} ||x_{i}-y_{j}||_{2}^{2}. \tag{6}\]
Because our novel space carving loss operates on distributions rather than moments of distributions, we can have different supervision targets for different points along the ray. In contrast, if the loss were only to operate on moments, which are integrals along the ray, this only yields the same supervision target for all points along the ray. Therefore, our loss allows us to supervise in 3D, unlike prior methods [6, 34] (which use losses on moments) that supervise in 2D. 3D supervision allows us to clear out dust in space, since in order to move a sample from the ray termination distance distribution farther in depth, the loss would have to decrease the probability density of _all_ the points in front of it. These advantages are highlighted in Figure 4.

Figure 4: Our space carving loss supervises the ray termination in 3D as opposed to existing approaches that supervise on 2D expected depth. As shown, this clears out clouds of dust in space. Moreover, supervising in 3D allows for wide baseline input views and still finds the common mode that correctly snaps the surfaces.
Our total loss to optimize and train our NeRF model is given by
\[\mathcal{L}=\mathcal{L}_{\text{photometric}}+\lambda\mathcal{L}_{\text{space \_carving}}, \tag{7}\]
where \(\mathcal{L}_{\text{photometric}}\) is the standard MSE loss on the predicted and ground truth image. See Fig. 1 for our overall pipeline.
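Read as code, Eqs. (6)-(7) amount to a one-sided, mode-seeking chamfer-style term between the two sample sets plus the photometric MSE. The PyTorch-style sketch below reflects our reading of the loss; the tensor layout (rays \(\times\) samples) and the batching over rays are assumptions made for illustration.

```python
import torch

def space_carving_loss(x, y):
    """x: (R, N) ray-termination samples from NeRF; y: (R, M) samples from the depth prior.
    Each NeRF sample is pulled towards its nearest prior sample (Eq. 6)."""
    d2 = (x[:, :, None] - y[:, None, :]) ** 2      # (R, N, M) pairwise squared distances
    return d2.min(dim=2).values.sum(dim=1).mean()  # min over prior samples, sum over N

def total_loss(pred_rgb, gt_rgb, x, y, lam=1.0):
    photometric = ((pred_rgb - gt_rgb) ** 2).mean()        # standard MSE term
    return photometric + lam * space_carving_loss(x, y)    # Eq. (7)
```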
## 5 Results
In this section, we present our experimental evaluation to demonstrate the advantages of **SCADE**.
### Datasets and Evaluation Metrics
We evaluate our method on ScanNet [5] and an in-the-wild dataset we collected 1. We use the sparse-view ScanNet data used by DDP [34] in their evaluations, which comprises three sample scenes, each with 18 to 20 train images and 8 test images. To further test the robustness of our method, we also evaluated it on three in-the-wild scenes collected using an iPhoneX. We captured sparse views in three different scenes - basement, kitchen and lounge, where each scene has 18 to 23 train images and 8 test images. Following similar data preprocessing steps as DDP [34], we ran SfM [36] on all images to obtain camera poses for NeRF training. For quantitative comparison, we follow the original NeRF [26] paper and report the PSNR, SSIM [44] and LPIPS [54] on the novel test views.
Footnote 1: Please also see supplement for results on the Tanks and Temples dataset [18].
### Implementation Details
We train our ambiguity-aware prior on the Taskonomy [52] dataset. We initialize the weights with the pre-trained LeReS [49] model. We use \(M=20\) depth estimates in our experiments. Please see supplement for additional implementation details.
### Experiments on ScanNet
We evaluate on ScanNet following the evaluation from DDP [34]. We compare to the original NeRF [26] (**Vanilla NeRF**) and the recent state-of-the-art NeRF with depth prior-based supervision, Dense Depth Prior [34] (**DDP**). Table 1 shows SCADE quantitatively outperforming the baselines. Because we are interested in simulating real-world conditions with a domain gap between the prior and the NeRF, our setting requires the use of out-of-domain priors. Since DDP uses an in-domain prior, we retrain DDP's depth completion network using their official code and hyperparameters on Taskonomy [52], the same out-of-domain dataset that our prior is trained on.
Figure 6 shows our qualitative results. As shown, compared to the baselines, SCADE is able to avoid producing the clouds of dust that are present in the results of baselines (first, third and last column). Moreover, SCADE is also able to snap and recover objects in the scene such as the details on the blinds (second column), the back of the chair (fourth column) and the legs of the piano stool (fifth column). Moreover, Fig. 5 shows rendered depthmaps and fusion results using [53] on a Scannet scene. Notice that SCADE is able to recover better geometry compared to DDP - see corner of the calendar, cabinets and office chair in the right image.
the ground truth distribution of ray termination distances induced by NeRF and the distribution of estimated depths from the prior must be multimodal. The fact that we do well validates the ability of our model to capture the multimodal distributions.
Similar to the ScanNet results, we are able to recover crisp shapes of objects such as the thin table leg and the white chair (second and last column), and we are also able to clear up dust near the microwave and printer compared to the baselines (fifth and last column).
### Ablation Study
**Supervision on full distribution vs moments.** We first validate the importance of using our novel space carving loss that supervises the full distribution of ray termination distance rather than just its moments. The latter integrates along the ray, and so provides one target value for the entire ray, which effectively makes it 2D supervision. The former provides different target values for different samples along the same ray, which makes it 3D supervision. We adapt our method to use the 2D supervision proposed in the recent MonoSDF [51], which computes the expected ray termination distance and aligns the output depth map with a monocular depth estimation prior using the MiDaS [32] loss. We use our learned prior as the monocular depth estimate. Table 3 validates the effectiveness of our space carving loss.
**Multimodal prior vs unimodal prior.** Our prior does not constrain the depth distribution to a particular form and allows it to be multimodal. In contrast, DDP's depth completion prior assumes a Gaussian, which is unimodal. We ablate our prior against DDP's by sampling from their Gaussian distribution. We train with our space carving loss using both a single sample and \(M\) samples drawn from DDP's prior. As shown in Table 3, while both perform better than the 2D supervision provided by MonoSDF, they are subpar compared to using our multimodal prior.
**Multiple vs single sample.** Finally, we also ablate on using a single sample vs multiple samples for our space carving loss. For the single sample set-up, we supervise using the sample mean of the depth estimates, which is equivalent to the maximum likelihood estimate of the depth under Gaussian likelihood. Results in Table 3 show the importance of using multiple samples for our space carving loss.
**Sparsity.** We further show additional results under a varying number of views in Table 4. Note that in general sparsity is not fully reflected by the absolute number of views, because all else being equal, a larger scene requires more views to attain complete coverage. Following prior work [34], we also report the average number of views that see the same point as another measure of sparsity. As shown, our setting is at the edge of the theoretical lower limit of 2 for general-purpose reconstruction, which shows view sparsity.
| Method | PSNR \(\uparrow\) | SSIM \(\uparrow\) | LPIPS \(\downarrow\) |
|---|---|---|---|
| Vanilla NeRF [26] | 20.46 | 0.713 | 0.398 |
| DDP [34] | 21.28 | 0.727 | 0.366 |
| **SCADE** (Ours) | **22.82** | **0.743** | **0.347** |

Table 2: **In-the-wild Results.**
| Method | PSNR \(\uparrow\) | SSIM \(\uparrow\) | LPIPS \(\downarrow\) |
|---|---|---|---|
| MonoSDF supervision | 20.13 | 0.710 | 0.332 |
| DDP prior - single sample | 20.85 | 0.712 | 0.320 |
| DDP prior - multiple samples | 21.00 | 0.718 | 0.316 |
| Our prior - single sample | 21.22 | 0.714 | 0.318 |
| **SCADE** (Ours) | **21.54** | **0.732** | **0.292** |

Table 3: **Ablation Study**. Results on the ScanNet dataset. MonoSDF supervision refers to supervising with the MonoSDF loss on expected ray termination distance using our prior.
Figure 6: **Qualitative results on ScanNet.**
### Discussion on our Ambiguity-Aware Prior
We provide additional details on our ambiguity-aware prior on i) how and why it works, ii) its performance on reflective surfaces, which is a common failure case for depth estimation, and iii) an intuition of what our depth modes look like. i) We achieve variable depth modes by exploiting inconsistently labeled training data. As shown in Fig 8-a, in Taskonomy [52], different training images with non-opaque surfaces label depth differently: shooting through the glass (left), on the glass (middle), or a mixture of both (right). Despite multiple possible depth labels, each image only has one ground truth label. Training with cIMLE allows our prior to model these multiple possible (ambiguous) outputs through sampling2, even when given only one label per image. Interestingly, when testing our prior on a test image with a mirror, we find that it is able to capture variable modes on reflective surfaces, including the "correct" flat surface as shown in Fig 8-b. As an intuition of what the depth modes look like: if a ray intersects \(n-1\) non-opaque surfaces, the \(n\) modes are the intersections with the non-opaque surfaces and the terminating point on the opaque surface.
Footnote 2: See supplement for more qualitative examples on our ambiguity-aware depth estimates.
## 6 Conclusion
In this paper, we present a new approach towards NeRF reconstruction that can work with a modest number of in-the-wild views of an indoor scene. We address the under-constrained nature of this problem by regularizing the NeRF optimization with additional depth estimates for each view. Our key technical contribution is to model multimodality in the depth estimates, which can capture inherent ambiguities in monocular depth estimation as well as the possible presence of non-opaque surfaces. We resolve ambiguities using a novel space carving loss that fuses the multimodal depth estimates from different views and seeks the modes that are consistent across views so as to arrive at a globally consistent 3D reconstruction. The improved recovery of shape and appearance enables higher fidelity novel view synthesis from sparse views.
**Limitations and Future Work.** The performance of our method is constrained by the quality of the monocular depth priors. While we found that our prior generalizes well across domains, if the domain gap is too great, the performance of our method will degrade. A future direction would be to detect when this happens and dynamically adjust the strength of depth supervision in response.
**Acknowledgements.** We sincerely thank the Teleportation and CCI team at Google for all the insightful discussions during the summer. We also thank Mirko Visontai for internship logistics, Guandao Yang for experiment set-up assistance, and Weicheng Kuo for looking over the paper.
| ave. #views visible | 1.87 | 1.98 | 2.01 | 2.2 |
|---|---|---|---|---|
| absolute #views | 18 | 20 | 22 | 24 |
| DDP / SCADE | 19.35 / **21.66** | 22.4 / **23.67** | 23.10 / **24.00** | 23.56 / **24.84** |

Table 4: PSNR for DDP [34]/SCADE on scene781 of ScanNet.
Figure 8: **Depth Mode Discussion.** a) Train images from Taskonomy [52] and their labels. Notice that non-opaque surfaces are labelled differently. b) Output of our prior on reflective surfaces.
Figure 7: **Qualitative results on In-the-Wild data.**
|
2307.01471
|
On Hofstadter's G-sequence
|
We characterize the entries of Hofstadter's G-sequence in terms of the lower
and upper Wythoff sequences. This can be used to give a short and comprehensive
proof of the equality of Hofstadter's G-sequence and the sequence of averages
of the swapped Wythoff sequences. In a second part we give some results that
hold when one replaces the golden mean by other quadratic algebraic numbers. In
a third part we prove a close relationship between Hofstadter's G-sequence and
a sequence studied by Avdivpahic and Zejnulahi.
|
Michel Dekking
|
2023-07-04T04:39:36Z
|
http://arxiv.org/abs/2307.01471v3
|
###### Abstract
We characterize the entries of Hofstadter's G-sequence in terms of the lower and upper Wythoff sequences. This can be used to give a short and comprehensive proof of the equality of Hofstadter's G-sequence and the sequence of averages of the swapped Wythoff sequences. In a second part we give some new results that hold when one replaces the golden mean by other quadratic algebraic numbers.
**On Hofstadter's G-sequence**
F. M. Dekking
CWI Amsterdam and Delft University of Technology
Faculty EEMCS
P.O. Box 5031
2600 GA Delft
The Netherlands
[email protected]
## 1 Introduction
Hofstadter's G-sequence \(G\) is defined by \(G(1)=1,G(n)=n-G(G(n-1))\) for \(n\geq 2\).
It was proved in 1988, independently in the two articles [5, 6] that there is a simple expression for Hofstadter's G-sequence as a slow Beatty sequence, given by
\[G(n)=\lfloor(n+1)\gamma\rfloor, \tag{1}\]
where \(\gamma:=(\sqrt{5}-1)/2\), the small golden mean.
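Equation (1) is easy to verify numerically; the short Python check below compares the recursive definition of \(G\) with the floor formula for the first few thousand values (a sanity check only, not a substitute for the proofs in [5, 6]).

```python
from math import floor

def G_recursive(N):
    g = [0, 1]                      # g[1] = 1; g[0] is a dummy entry
    for n in range(2, N + 1):
        g.append(n - g[g[n - 1]])   # G(n) = n - G(G(n-1))
    return g[1:]

gamma = (5 ** 0.5 - 1) / 2          # the small golden mean
assert G_recursive(5000) == [floor((n + 1) * gamma) for n in range(1, 5001)]
```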
The terminology 'slow Beatty sequence' comes from the paper [7] by Kimberling and Stolarsky. From this paper we copy the following useful result.
**Theorem 1**.: **[Kimberling and Stolarsky]** _Suppose that \(\sigma\) in \((0,1)\) is irrational, and let \(s(n)=\lfloor n\sigma\rfloor\). Let a be the sequence of numbers n such that \(s(n+1)=s(n)\), and b the sequence of those n such that \(s(n+1)=s(n)+1\). Then a is the Beatty sequence of \(1/(1-\sigma)\), and b is the Beatty sequence of \(1/\sigma\)._
## 2 Hofstadter and Wythoff
Let \(\varphi:=(1+\sqrt{5})/2\) be the golden mean. The lower and upper Wythoff sequences are given by \(L(n)=\lfloor n\varphi\rfloor\) and \(U(n)=\lfloor n\varphi^{2}\rfloor\) for \(n\geq 1\).
**Theorem 2**.: _The Hofstadter G-sequence satisfies_
\[G(L(n))=n,G(U(n))=L(n),\text{ for all }n\geq 1.\]
Proof.: The lower Wythoff sequence \(L\) satisfies by definition
\[n\varphi=L(n)+\varepsilon_{n}\Rightarrow\varphi=(L(n)+\varepsilon_{n})/n,\]
for some \(\varepsilon_{n}\) with \(0<\varepsilon_{n}<1\). Since \(\varphi\gamma=1\), this leads to
\[G(L(n))=\Big{\lfloor}(L(n)+1)\gamma\Big{\rfloor}=\Big{\lfloor}\frac{L(n)+1}{ \varphi}\Big{\rfloor}=\Big{\lfloor}\frac{n(L(n)+1)}{L(n)+\varepsilon_{n}} \Big{\rfloor}=\Big{\lfloor}n+\delta_{n}\Big{\rfloor},\]
with
\[\delta_{n}=\frac{n(1-\varepsilon_{n})}{L(n)+\varepsilon_{n}}.\]
Since obviously \(n<L(n)+\varepsilon_{n}\), we have \(0<\delta_{n}<1\), and we conclude that \(G(L(n))=n\).
We turn to the second equation. The upper Wythoff sequence \(U\) satisfies by definition
\[n\varphi^{2}=U(n)+\varepsilon_{n}^{\prime},\]
for some \(\varepsilon_{n}^{\prime}\) with \(0<\varepsilon_{n}^{\prime}<1\). Since \(\varphi\gamma=1\), this leads to
\[G(U(n))=\Big{\lfloor}(U(n)+1)\gamma\Big{\rfloor}=\Big{\lfloor}\frac{U(n)}{\varphi}+\gamma\Big{\rfloor}=\Big{\lfloor}\frac{n\varphi^{2}-\varepsilon_{n}^{\prime}}{\varphi}+\gamma\Big{\rfloor}=\Big{\lfloor}n\varphi+(1-\varepsilon_{n}^{\prime})\gamma\Big{\rfloor}=L(n),\]
since \(\varepsilon_{n}^{\prime}\) is also the fractional part of \(n\varphi\) (because \(n\varphi^{2}=n\varphi+n\)), and \(0<(1-\varepsilon_{n}^{\prime})\gamma<1-\varepsilon_{n}^{\prime}\), so that the floor equals \(\lfloor n\varphi\rfloor=L(n)\).
We next turn our attention to sequence A002251, described as: Start with the nonnegative integers; then swap \(L(k)\) and \(U(k)\) for all \(k\geq 1\), where \(L\) and \(U\) are the lower and upper Wythoff sequences.
This means that this sequence, which we call \(W,\) satisfies
\[W(L(n))=U(n),W(U(n))=L(n)\text{ for all }n\geq 1. \tag{2}\]
Regretfully, the sequence \(W\) has been given offset \(0\) in OEIS. One of the unpleasant consequences of the useless addition of \(0\) is that sequence A073869 is not a clean Cesaro average of A002251. Another unpleasant consequence is that A073869 is basically a copy of A019444.
The sequence \(W\) has the remarkable property that the sum of the first \(n+1\) terms is divisible by \(n+1\). This leads to the sequence A073869, defined as \(A073869(n)=\sum_{i=0}^{n}W(i)/(n+1)\).
The following theorem is a conjecture by Amarnath Murthy in [10, A073869], but is proved in the long paper [11]. We give a new short proof below.
**Theorem 3**.: _The averaged Wythoff swap sequence \(\overline{W}\) is equal to Hofstadter's G-sequence._
Proof.: The result holds for \(n=0,1\). It suffices therefore to consider the sequence of differences. Subtracting \(G(n-1)=\sum_{i=0}^{n-1}W(i)/n\) from \(G(n)=\sum_{i=0}^{n}W(i)/(n+1)\), we see that we have to prove
\[(n+1)G(n)-nG(n-1)=W(n). \tag{3}\]
But we know that there are only two possibilities for the recursion from \(G(n-1)\) to \(G(n)\). Therefore Equation (3) turns into the following two equations.
\[G(n)=G(n-1) \Rightarrow G(n)=W(n), \tag{4}\] \[G(n)=G(n-1)+1 \Rightarrow G(n)=W(n)-n. \tag{5}\]
It is not clear how to prove these equalities directly. However, we can exploit Theorem 1. According to this theorem with \(\sigma=\gamma\),
\[G(n)=G(n-1) \Leftrightarrow\exists M\text{ such that }n=U(M), \tag{6}\] \[G(n)=G(n-1)+1 \Leftrightarrow\exists M\text{ such that }n=L(M). \tag{7}\]
So we first have to prove that \(n=U(M)\) implies \(G(n)=W(n)\). This holds indeed by an application of Theorem 2 and Equation (2):
\[G(n)=G(U(M))=L(M)=W(U(M))=W(n).\]
Similarly, for the second case \(n=L(M)\):
\[G(n)=G(L(M))=M=U(M)-L(M)=W(L(M))-L(M)=W(n)-n.\]
Here we applied \(U(M)=L(M)+M\) for \(M\geq 1\), a direct consequence of \(\varphi^{2}M=(\varphi+1)M\).
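Theorem 2, Equation (2) and Theorem 3 can also be checked by machine. The sketch below builds \(W\) by swapping \(L(k)\) and \(U(k)\) in the list \(0,1,2,\ldots\) (the OEIS offset-0 convention) and confirms that the running averages are integers equal to \(G(n)=\lfloor(n+1)\gamma\rfloor\); indices are kept well below the truncation point so that every relevant swap has been performed.

```python
from math import floor

phi = (1 + 5 ** 0.5) / 2
gamma = (5 ** 0.5 - 1) / 2
N = 3000

W = list(range(N + 1))                 # start with the nonnegative integers
k = 1
while floor(k * phi ** 2) <= N:        # swap L(k) and U(k) while both indices fit
    L, U = floor(k * phi), floor(k * phi ** 2)
    W[L], W[U] = W[U], W[L]
    k += 1

partial = 0
for n in range(N // 2):                # indices unaffected by the truncation at N
    partial += W[n]
    assert partial % (n + 1) == 0                         # the sum is divisible
    assert partial // (n + 1) == floor((n + 1) * gamma)   # average equals G(n)
```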
In the comments of A073869 there is a scatterplot by N. J. A. Sloane, cf. Figure 1. The points have a nice symmetric distribution around the line \(y=x\), since they consist of all pairs \((L(n),U(n))\) and \((U(n),L(n))\) for \(n=1,2,\dots\) (ignoring \((0,0)\)). Apparently the points are almost lying on two lines. What are the equations of these lines? This is answered by the following proposition.
**Proposition 4**.: _Let \(W\) be the Wythoff swap sequence. Then for all \(n\geq 1\)_
\[W(U(n))=\lfloor\gamma U(n)\rfloor,W(L(n))=\lfloor\varphi L(n)\rfloor+1.\]
Proof.: From Equation (4) and Equation (5) we see that
\[\begin{array}{rl}W(n)&=G(n),&\mbox{if $G(n)=G(n-1)$},\\ W(n)&=G(n)+n&\mbox{if $G(n)=G(n-1)+1$}.\end{array}\]
Since \(G(n)=\lfloor(n+1)\gamma\rfloor\) by Equation (1), it follows from Equation (6) that
\[W(U(M))=\lfloor U(M)\gamma\rfloor.\]
Figure 1: Scatterplot of the first 68 entries of \(W\).
Since all \(M=1,2\ldots\) will occur, this gives the first half of the proposition.
For the second half of the proposition we do the following computation under the assumption that \(n=L(M)\):
\[G(n)+n=G(n-1)+n+1=\lfloor n\gamma\rfloor+n+1=\lfloor n(\gamma+1)\rfloor+1= \lfloor n\varphi\rfloor+1.\]
Now Equation (7) gives that \(W(L(M))=\lfloor\varphi L(M)\rfloor+1\).
_Remark 5_.: Simple applications of Theorem 3 prove the conjectures in A090908 (Terms \(a(k)\) of A073869 for which \(a(k)=a(k+1)\).), and A090909 (Terms \(a(k)\) of A073869 for which \(a(k-1),a(k)\) and \(a(k+1)\) are distinct.). It also proves the conjectured values of sequence A293688.
_Remark 6_.: The results in this section can all be proved automatically by Walnut: see the paper [9].
## 3 Generalizations
There is a lot of literature on generalizations of Hofstadter's recursion \(G(n)=n-G(G(n-1))\). In most cases there is no simple description of the sequences that are generated by such recursions. An exception is the recursion \(V(n)=V(n-V(n-1))+V(n-V(n-4))\) analysed in [3]. The sequence with initial values 1,1,1,1 generated by this recursion is sequence A063882. Allouche and Shallit prove in [1] that the 'frequencies' of this sequence can be generated by an automaton. See the recent paper [8] for more results on this type of Hofstadter's recursions, known as Hofstadter Q-sequences. We consider the paper [4], which gives a direct generalization of Hofstadter's G-sequence.
**Theorem 7**.: **[Celaya and Ruskey]** _Let \(k\geq 1\), and let \(\gamma=[0;k,k,k,\ldots]\). Assume \(H(n)=0\) for \(n<k\), and for \(n\geq k\), let_
\[H(n)=n-k+1-\Big{(}\sum_{i=1}^{k-1}H(n-i)\Big{)}-H(H(n-k)).\]
_Then for \(n\geq 1\), \(H(n)=\lfloor\gamma(n+1)\rfloor\)._
As an example, we take the case \(k=2\). In that case \(\gamma=\sqrt{2}-1\), the small silver mean. The recursion for what we call the Hofstadter Pell sequence is
\[H(n)=n-1-H(n-1)-H(H(n-2)).\]
Here Theorem 7 gives that
\[(H(n))=\lfloor\gamma(n+1)\rfloor=0,0,1,1,2,2,2,3,3,4,4,4,5,5,6,6,7,7,7,8,8,9,9, 9,10,10,\ldots.\]
This is sequence A097508 in OEIS.
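As with the golden mean case, the identification of the Hofstadter Pell sequence with a slow Beatty sequence is easy to test numerically; the following sketch runs the recursion with the initial values \(H(0)=H(1)=0\) against the formula of Theorem 7 for \(k=2\).

```python
from math import floor

gamma = 2 ** 0.5 - 1                  # the small silver mean
N = 5000

H = [0, 0]                            # H(n) = 0 for n < 2
for n in range(2, N + 1):
    H.append(n - 1 - H[n - 1] - H[H[n - 2]])   # the Hofstadter Pell recursion

assert all(H[n] == floor(gamma * (n + 1)) for n in range(N + 1))
```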
Let \(1/\gamma=1+\sqrt{2}\) and \(1/(1-\gamma)=1+\frac{1}{2}\sqrt{2}\) form the Beatty pair given by Theorem 1. Let \(L^{\mathrm{P}}=(\lfloor n(1+\sqrt{2})\rfloor)\) and \(U^{\mathrm{P}}=(\lfloor n(1+\frac{1}{2}\sqrt{2})\rfloor)\) be the associated Beatty sequences. One has \(L^{\mathrm{P}}=\textsc{A003151}\), and \(U^{\mathrm{P}}=\textsc{A003152}\).
The following version of Theorem 2 holds, where \(R\) is the slow Beatty sequence A049472 given by \(R(n)=\lfloor\frac{1}{2}\sqrt{2}n\rfloor\).
**Theorem 8**.: _The Hofstadter Pell sequence \(H\) satisfies_
\[H(L^{\mathrm{P}}(n))=n,H(U^{\mathrm{P}}(n))=R(n),\text{ for all }n\geq 1.\]
The proof of Theorem 8 is very similar to the proof of Theorem 2, based on the relation \(\gamma(1+\sqrt{2})=1\).
The sequence with \(L^{\mathrm{P}}\) and \(U^{\mathrm{P}}\) swapped is
\(\textsc{A109250}=2,1,4,3,7,9,5,12,6,14,16,8,19,10,21,11,24,26\ldots\).
Apparently there is nothing comparable to the averaging phenomenon that occurred in the golden mean case.
_Remark 9_.: See A078474, and in particular A286389 for two generalizations of Hofstadter's recursion, with conjectured expressions similar to Equation (1).
For the recursion \(a(n)=n-\lfloor\frac{1}{2}a(a(n-1))\rfloor\) given in A138466 it is proved by Benoit Cloitre that \((a(n))\) satisfies Equation (1) with \(\gamma=\sqrt{3}-1\). For generalizations of this see A138467.
## 4 Acknowledgment
I thank Jean-Paul Allouche for useful remarks. Thanks are also due to an anonymous referee for pointing out an important reference.
|
2304.06896
|
Machine Perception-Driven Image Compression: A Layered Generative
Approach
|
In this age of information, images are a critical medium for storing and
transmitting information. With the rapid growth of image data amount, visual
compression and visual data perception are two important research topics
attracting a lot attention. However, those two topics are rarely discussed
together and follow separate research path. Due to the compact compressed
domain representation offered by learning-based image compression methods,
there exists possibility to have one stream targeting both efficient data
storage and compression, and machine perception tasks. In this paper, we
propose a layered generative image compression model achieving high human
vision-oriented image reconstructed quality, even at extreme compression
ratios. To obtain analysis efficiency and flexibility, a task-agnostic
learning-based compression model is proposed, which effectively supports
various compressed domain-based analytical tasks while reserves outstanding
reconstructed perceptual quality, compared with traditional and learning-based
codecs. In addition, joint optimization schedule is adopted to acquire best
balance point among compression ratio, reconstructed image quality, and
downstream perception performance. Experimental results verify that our
proposed compressed domain-based multi-task analysis method can achieve
comparable analysis results against the RGB image-based methods with up to
99.6% bit rate saving (i.e., compared with taking original RGB image as the
analysis model input). The practical ability of our model is further justified
from model size and information fidelity aspects.
|
Yuefeng Zhang, Chuanmin Jia, Jiannhui Chang, Siwei Ma
|
2023-04-14T02:12:38Z
|
http://arxiv.org/abs/2304.06896v1
|
# Machine Perception-Driven Image Compression: A Layered Generative Approach
###### Abstract
In this age of information, images are a critical medium for storing and transmitting information. With the rapid growth of image data amount, visual compression and visual data perception are two important research topics attracting a lot attention. However, those two topics are rarely discussed together and follow separate research path. Due to the compact compressed domain representation offered by learning-based image compression methods, there exists possibility to have one stream targeting both efficient data storage and compression, and machine perception tasks. In this paper, we propose a layered generative image compression model achieving high human vision-oriented image reconstructed quality, even at extreme compression ratios. To obtain analysis efficiency and flexibility, a task-agnostic learning-based compression model is proposed, which effectively supports various compressed domain-based analytical tasks while reserves outstanding reconstructed perceptual quality, compared with traditional and learning-based codecs. In addition, joint optimization schedule is adopted to acquire best balance point among compression ratio, reconstructed image quality, and downstream perception performance. Experimental results verify that our proposed compressed domain-based multi-task analysis method can achieve comparable analysis results against the RGB image-based methods with up to \(99.6\)% bit rate saving (i.e., compared with taking original RGB image as the analysis model input). The practical ability of our model is further justified from model size and information fidelity aspects.
Image compression, machine perception, generative adversarial network, visual applications.
## I Introduction
Visual data is created at an incredible speed due to the great progress in the area of electronic image acquisition. On the one hand, data compression is needed to efficiently store and transmit mass visual data. On the other hand, high-level semantic understanding is required to translate pixel-level signals into machine-oriented knowledge. In the past, those two domains were seldom studied simultaneously due to their different targets [1].
The increasing volume of visual data emphasizes the need for not only storage and transmission, but also machine perception. Apart from the goal of effectively reconstructing pixels for human vision, image/video coding standards are beginning to pay attention to high-level machine perception and pattern recognition tasks. To support large-scale image/video retrieval and analysis, MPEG introduced standards: compact descriptor for visual search (CDVS) (ISO/IEC15938-13) [2] in Sep. 2015 and compact descriptors for video analysis (CDVA) (ISO/IEC15938-14) [3] in July 2019. At the same time, JPEG AI [4] was launched in March 2019, targeting both human vision and computer vision tasks.
Video coding for machines (VCM) [5] or intelligent coding [6] is proposed aiming at building an efficient joint visual compression and analysis framework. Those methods can be categorized into two groups. The first one [7, 8] intends to make the reconstructed image more suitable for machine-oriented tasks. Most methods of this category are built on traditional codecs by introducing analysis-related information. The second one [9, 10] conducts visual analysis directly on the intermediate compressed features. In this category, learning-based compression methods have the advantage that their deep features are full of compact semantic information, which traditional codecs do not have.
Taking advantage of deep learning, learning-based compression methods begin to surpass traditional compression techniques in terms of rate-distortion efficiency [11, 12]. At the same time, visual applications are also undergoing huge changes, such as image detection [13], recognition [14], and segmentation [15] tasks. Therefore, it becomes possible to bridge compression and visual analysis together. One advantage of using learning-based methods in image compression is that their compressed domain is _learned_ and thus full of compact semantic information, which traditional codecs do not have. Intuitively, compressed data serve the purpose of reconstructing the original image, so they can be interpreted as a kind of highly compacted visual representation [9], which implies the feasibility of directly conducting high-level image processing and machine perception tasks on the compressed domain.
After the pandemic, organizations began to encourage employees to work from home and interact with coworkers through video conferences, and short-form video platforms helped people pass the time while self-quarantining at home; all of these factors contribute to an explosion of visual data. The question of how to store and transfer this data in an efficient and cost-effective manner has piqued the industry's interest. Facebook announced their technical solution [16] for video chatting by transmitting face landmarks and using generative models under low-bandwidth situations. Qualcomm released a real-time \(1280\times 704\) video decoding demo [17] on a smartphone powered by the Qualcomm Snapdragon 888 processor. In their work, neural video decoding is supported by an AI inference engine and parallelizable entropy coding. Due to the enormous demand for bandwidth at this time, the importance of visual integrity at extremely high compression ratios in real-world applications is highlighted.
To bridge the gap between previous research and practical applications, we focus on human face data and present a layered generative compression model that maintains visual fidelity even at extremely high compression ratios.
The design of the layered end-to-end compression pipeline is fundamentally motivated by Marr's computational theory [18] that vision can be construed as a signal processing system, which naturally implies support for layered operation. According to the image-to-image series work [19, 20, 21, 22], image style and texture information can be described by a one-dimensional vector, while content information still needs spatial knowledge whose representation form is two-dimensional. Moreover, it is proved that key information can be effectively preserved through layered operation, especially under extremely low bitrate coding scenarios [23, 24].
Meanwhile, to directly analyze the compressed data for machine perception tasks, a task-agnostic multi-task analysis model is proposed.
In this paper, our main contributions can be summarized as follows:
1. We study and theoretically analyze the characteristics of the machine perception-driven learning-based compression model frameworks. Through these analyses, we present the optimization formulations of those methods and uncover the lack of flexibility of existing layered image compression methods.
2. We propose a machine perception-driven layered compression model, which is learned in an end-to-end way so that each layer is _learnable_ and thus semantic-rich. The proposed model serves both human vision and machine perception tasks, and extensive experiments are conducted on the CelebAMask-HQ dataset.
3. Regarding the coding performance, the proposed model outperforms both traditional codecs (e.g., VVC) and state-of-the-art learning-based codecs in perceptual metrics (i.e., FID [25], DISTS [26], and LPIPS [27]), especially at extremely low bit rate condition (i.e., bpp \(\leq 0.1\)). For the semantic analysis tasks, comparable visual analysis performance is achieved on compressed data, saving up to \(99.6\)% bit rates in transmission compared with analyzing original RGB images. The multi-task analysis model is proposed to further bring gains on visual tasks, and reduce model complexity.
The remainder of this paper is organized as follows. Section II analyzes coding for machine perception and optimization formulations. Section III reviews the related works of this paper. Section IV introduces the main contribution of the proposed approach by presenting the framework. Section V and Section VI show experimental results and discussions based on our model. Finally, conclusions and implications for future work are discussed in Section VII.
## II Problem Formulation
To illustrate general setups for the machine perception-driven compression models, we define the following components as [1]:
* \(\mathrm{E}(\cdot|\theta_{\mathrm{E}})\) which transforms visual input to feature matrix;
* \(\mathrm{C}(\cdot|\theta_{\mathrm{C}})\) which codes features into a bit-stream, including probability estimation and entropy coding steps;
* \(\mathrm{D}(\cdot|\theta_{\mathrm{D}})\) which decodes bit-stream into feature matrix.
For different input data types and purposes, we define the following analysis networks:
* \(\mathrm{A}(\cdot|\theta_{\mathrm{A}})\) that projects features to reconstructed images for human vision purpose;
* \(\mathrm{A}^{\prime}(\cdot|\theta_{\mathrm{A}^{\prime}})\) serves machine vision tasks, taking reconstructed images as input;
* \(\bar{\mathrm{A}}(\cdot|\theta_{\bar{\mathrm{A}}})\) serves machine vision tasks, taking hand-crafted side features as input;
* \(\widetilde{\mathrm{A}}(\cdot|\theta_{\widetilde{\mathrm{A}}})\) serves machine vision tasks, taking intermediate decoded features as input.
We categorize previous related research by their training targets and analyze their corresponding training components
Fig. 1: Image compression framework.
Fig. 4: The framework of side information assisted layered image compression.
Fig. 3: The framework of directly conducting machine perception on the compressed visual data.
Fig. 2: The framework of conducting machine perception on the reconstructed image.
in gradient back-propagation. For learning-based image compression methods shown in Fig. 1, their training target is to balance the trade-off between rate and distortion as:
\[\operatorname*{arg\,min}_{\Theta}\operatorname{R}_{z}+\lambda\mathrm{d},\quad \Theta=\{\theta_{\mathrm{E}},\theta_{\mathrm{C}},\theta_{\mathrm{D}},\theta_{ \mathrm{A}}\}. \tag{1}\]
Taking machine perception into consideration, the training target is to find the balance point between rate, distortion, and visual task performance then the optimization function can be formulated as:
\[\operatorname*{arg\,min}_{\Theta}\operatorname{R}_{z}+\lambda d(X,\hat{X})+\sum_{0\leq i\leq n}w_{i}l_{i}(\hat{Y}_{i},Y_{i}), \tag{2}\]
where \(X\) is input image, \(\hat{X}\) is reconstructed image, \(Y_{i}\) is \(i\)th task's groundtruth downstream task label, \(\hat{Y}_{i}\) is the prediction results for task \(i\), \(d\) is image distortion metric and \(l_{i}\) represents the task loss metric for task \(i\). If downstream tasks take reconstructed images \(\hat{X}\) as input illustrated as Fig. 2, visual task prediction results \(\hat{Y}\) and optimization parameters are as:
\[\hat{Y}=\mathrm{A}^{\prime}(\hat{X}),\quad\Theta=\{\theta_{\mathrm{E}},\theta_{\mathrm{C}},\theta_{\mathrm{D}},\theta_{\mathrm{A}},\theta_{\mathrm{A}^{\prime}}\}. \tag{3}\]
If the analysis model takes intermediate features \(Z\) as input, as shown in Fig. 3, the corresponding prediction and optimized parameters are:
\[\hat{Y}=\widetilde{\mathrm{A}}(Z),\quad\Theta=\{\theta_{\mathrm{E}},\theta_{\mathrm{C}},\theta_{\mathrm{D}},\theta_{\mathrm{A}},\theta_{\widetilde{\mathrm{A}}}\}. \tag{4}\]
For separate training, the analysis module is trained by optimizing \(\theta_{\mathrm{A}^{\prime}}\) or \(\theta_{\widetilde{\mathrm{A}}}\), independently from the compression modules.
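As a minimal illustration of the formulations above, the following sketch (in PyTorch-style Python; the function name and all numeric values are hypothetical) combines the rate term \(\operatorname{R}_{z}\), the weighted distortion \(d\), and the weighted task losses of Eq. (2) into a single scalar objective.

```python
import torch

def perception_driven_objective(rate, distortion, task_losses, lam=1.0, task_weights=None):
    """Eq. (2): R_z + lambda * d(X, X_hat) + sum_i w_i * l_i(Y_hat_i, Y_i)."""
    if task_weights is None:
        task_weights = [1.0] * len(task_losses)
    total = rate + lam * distortion
    for w, l in zip(task_weights, task_losses):
        total = total + w * l
    return total

# toy usage with scalar tensors standing in for the real loss terms
rate = torch.tensor(0.35)                              # estimated bits per pixel
distortion = torch.tensor(0.12)                        # e.g. MAE between X and X_hat
task_losses = [torch.tensor(0.8), torch.tensor(1.4)]   # classification, segmentation losses
print(perception_driven_objective(rate, distortion, task_losses, lam=10.0, task_weights=[1.0, 2.0]))
```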
## III Related Work
### _Compressed Visual Data for Machine Perception_
There are two types of previous studies on how to analyze compressed representations: traditional codec compressed representations and learning-based codec compressed representations.
From JPEG [28] to recent VVC [29] standard, traditional codecs can all be considered as a block-based prediction, transform, and quantization. To directly analyze image/video bit-stream from traditional codecs, it is common to take transform parameters [7, 8, 30] or intermediate coding blocks [31, 32] as input. However, in comparison to learned methods, those compressed representations are hand-crafted and lack extensibility.
Learning-based codecs [33, 11, 34] directly optimize rate-distortion trade-off commonly using an auto-encoder architecture where quantization, probability estimation, and entropy coding are conducted to achieve high compression ratio. The basic framework for variational auto-encoder (VAE) based compression is illustrated in Fig. 1 where pixel-level distortion loss is used for training. With the machine vision attracting more attention, research works [35, 36, 37] are done to figure out how compression affects machine perception performance when the analysis model's input is a reconstructed image rather than the original one. A high-level view of those methods is shown in Fig. 2 where the reconstructed image \(\hat{X}\) is input into vision analysis model \(A^{\prime}(\cdot)\).
In order to efficiently execute machine perception tasks, since the compressed representation of learning-based compression methods can be deemed flexible and semantics-rich, recent researches [1, 9, 10, 38] have proven the potential of directly conducting visual tasks on the compressed representation without decoding, whose framework is shown as Fig. 3. Those methods can be categorized into two groups based on their demand for task-related previous knowledge: task-aware and task-agnostic. For the first group, compression models are designed to solve particular downstream tasks [1, 6, 10, 39] by training compression and task analysis models together. For the second one, compression works are done without noticing downstream tasks to be solved [9, 40].
### _Layered Image Compression_
Layered image compression is aided by prior knowledge that the pre-defined information can take several forms, e.g., edge map [23], semantic segmentation [41], face landmarks [16], etc. As shown in Fig. 4, visual data is first transformed into layered feature components, one of which is hand-designed and the other learned, and then coded separately. Pre-defined layer information is designed to help with specific downstream visual analysis tasks, for example, face edge map can support landmark detection [42].
## IV Method
The overall flowchart of our proposed model is shown in Fig. 5, which involves the compression part and the machine task analysis part.
### _Learning-Based Layered Compression_
Our learning-based layered compression model has three main components: encoder, decoder, and probability estimation model. We first designed a layered encoder and generative decoder to efficiently compress visual data. The optimization goal is then outlined.
#### Iv-A1 Layered encoder
According to the image-to-image series work [19, 20, 21, 22], image style and texture information can be described by a one-dimensional vector, while content information still needs spatial knowledge whose representation form is two-dimensional. Moreover, it is proved that key information can be effectively preserved through layered operation, especially under extremely low bitrate coding scenarios [23, 24].
Different encoders are adopted to encode the input image signal \(X\) into features \(I_{R}\) and \(I_{S}\), supporting reconstruction and semantic analysis purposes, respectively. Denoting the corresponding encoders as \(E_{R}\) and \(E_{S}\), the latent representations can be calculated as:
\[I_{R}=E_{R}(X),I_{S}=E_{S}(X). \tag{5}\]
Regarding distinct data distributions, we use independent probability distribution models to further transform \(I\) into compact \(z\), which is then encoded into bit-streams using entropy encoding methods based on data distribution estimation. By calculating cross-entropy \(H(p,q)\), which is proven
to be the upper bound of \(z\)'s theoretical bit-stream size [33], the parametric probability model \(q\) is adopted to indirectly estimate the probability distribution of \(z\) whose probability distribution is \(p\).
Specifically, the hyperprior entropy model [33] is utilized for the entropy estimation of \(I_{R}\). Latent representation \(I_{R}\) is first quantized to a vector of discrete symbols \(\mathbf{X}=\{X_{1},X_{2},\ldots,X_{n}\}\). Denoting hyperprior as \(\mathbf{Y}\), the conditional probability can be written as \(P_{\mathbf{X}|\mathbf{Y}}\). The conditional distribution of each element \(X_{i}\) in \(\mathbf{X}\) can be assumed as Gaussian distribution, i.e.,
\[p_{\hat{\mathbf{X}}|\hat{\mathbf{Y}}}(\hat{\mathbf{X}}\mid\hat{\mathbf{Y}}) \sim\mathcal{N}\left(\mathbf{\mu},\mathbf{\sigma}\right) \tag{6}\]
The likelihood of the latent representation can be calculated as follows:
\[\begin{split} p_{\hat{\mathbf{X}}|\hat{\mathbf{Y}}}(\hat{\mathbf{X}}=\hat{x}_{i}\mid\hat{\mathbf{Y}})&=\left(\mathcal{N}\left(\mathbf{\mu},\mathbf{\sigma}\right)*\mathcal{U}(-\tfrac{1}{2},\tfrac{1}{2})\right)(\hat{x}_{i})\\ &=\phi(\hat{x}_{i}+\tfrac{1}{2})-\phi(\hat{x}_{i}-\tfrac{1}{2})\end{split} \tag{7}\]
where \(\phi\) denotes the cumulative distribution function of the Gaussian distribution \(\mathcal{N}(\mathbf{\mu},\mathbf{\sigma})\). A non-parametric factorized density model [11] with Gaussian distribution is adopted for \(I_{S}\) due to its low data dimension setting.
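A minimal sketch of Eq. (6)-(7), assuming PyTorch and hyperprior-predicted \(\mathbf{\mu},\mathbf{\sigma}\) of the same shape as the latent; the function names and toy shapes are illustrative, not the paper's implementation.

```python
import torch
from torch.distributions import Normal

def gaussian_conditional_likelihood(x_hat, mu, sigma, eps=1e-9):
    """Eq. (7): likelihood of a quantized symbol under N(mu, sigma) convolved with
    U(-1/2, 1/2), i.e. CDF(x_hat + 1/2) - CDF(x_hat - 1/2), clamped for stability."""
    dist = Normal(mu, sigma)
    p = dist.cdf(x_hat + 0.5) - dist.cdf(x_hat - 0.5)
    return p.clamp_min(eps)

def rate_in_bits(x_hat, mu, sigma):
    """Cross-entropy estimate of the bit cost of x_hat (an upper bound on the code length)."""
    return (-torch.log2(gaussian_conditional_likelihood(x_hat, mu, sigma))).sum()

# toy usage: a 4x4 quantized latent with predicted mean/scale
x_hat = torch.randint(-3, 4, (4, 4)).float()
mu, sigma = torch.zeros(4, 4), torch.ones(4, 4)
print(rate_in_bits(x_hat, mu, sigma))
```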
#### Ii-B2 Fusion module and decoder
Our decoder works in a generative way with the help of the fusion module \(F\) to reconstruct the original visual signal. Adaptive Instance Normalization (AdaIN) [19] is applied as the fusion module: the semantic-rich representation \(z_{2}\) is taken as the input of a multi-layer perceptron (MLP) predicting the mean and variance for convolution layers in residual blocks, i.e., \(F(z_{1},z_{2})=\mathrm{AdaIN}(z_{1},\gamma,\beta)=\gamma\left(\frac{z_{1}-\mu(z_{1})}{\sigma(z_{1})}\right)+\beta\), where \(\mu,\sigma\) are the channel-wise mean and standard deviation, and \(\gamma,\beta\) are learnt by the MLP taking \(z_{2}\) as the input. An adversarial loss is used for the decoder, producing human-eye-friendly reconstructed images.
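A minimal sketch of the fusion module described above, assuming PyTorch; the MLP width and feature sizes are illustrative and not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AdaINFusion(nn.Module):
    """Fuse spatial features z1 with the 1-D semantic vector z2: an MLP maps z2 to
    per-channel (gamma, beta), which modulate the channel-normalized z1,
    i.e. F(z1, z2) = AdaIN(z1, gamma, beta)."""
    def __init__(self, channels, z2_dim, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(z2_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * channels),
        )

    def forward(self, z1, z2, eps=1e-5):
        gamma, beta = self.mlp(z2).chunk(2, dim=1)        # (B, C) each
        mu = z1.mean(dim=(2, 3), keepdim=True)            # channel-wise mean
        std = z1.std(dim=(2, 3), keepdim=True) + eps      # channel-wise std
        normalized = (z1 - mu) / std
        return gamma[:, :, None, None] * normalized + beta[:, :, None, None]

# toy usage: batch of 2, 64-channel 16x16 feature map, 8-dim semantic vector
fusion = AdaINFusion(channels=64, z2_dim=8)
print(fusion(torch.randn(2, 64, 16, 16), torch.randn(2, 8)).shape)
```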
#### Ii-B3 Optimization target
Training setting is important in bridging data compression and visual analysis. In terms of the task goal, there are two types of training strategies: a) **for human vision task**; b) **for machine perception task**. We first introduce the training setup for human vision task here. Then, in the next subsection, we consider the setup for the machine perception task.
Regarding the human vision task, our model's goal is to rebuild the original image signal by predicting pixel values, minimizing the distortion between \(X\) and \(\hat{X}\) at the same time. In order to satisfy human vision besides preserving signal fidelity, our distortion metric includes three components: the pixel-wise mean-absolute error (MAE) loss \(d_{MAE}\), holistic structure-wise SSIM-based loss (i.e., \(d_{SSIM}=1-SSIM(X,\hat{X})\)), and perception-wise loss \(d_{p}\) as [44]. The distortion metric \(d\) can be formulated as follows:
\[d(X,\hat{X})=\lambda_{MAE}d_{MAE}+\lambda_{SSIM}d_{SSIM}+\lambda_{p}d_{p}, \tag{8}\]
where \(\lambda_{MAE}\), \(\lambda_{SSIM}\), and \(\lambda_{p}\) are hyper-parameters. For encoder and decoder parts, training is optimized by:
\[\mathcal{L}_{1}=\mathbb{E}_{X}[-\lambda\log(R(\hat{z}))+d(X,\hat{X})-\beta\log(\mathrm{J}(\hat{X},z_{1}))], \tag{9}\] \[\Theta=\{\theta_{\mathrm{E}},\theta_{\mathrm{C}},\theta_{\mathrm{D}},\theta_{\mathrm{F}},\theta_{\mathrm{A}}\}, \tag{10}\]
where \(\mathrm{J}(\cdot)\) is the discriminator helping generate/decode images meeting the perceptual requirements by telling whether the reconstructed image can be determined as real or fake in the training process. The training loss of \(\mathrm{J}(\cdot)\) can be defined as:
Fig. 5: Overview of our proposed approach. We transform the original image signal into two-layered representations: reconstruction-oriented \(I_{R}\) and semantic-oriented \(I_{S}\). Two targets are illustrated, including 1) compression; 2) machine perception of the compressed representation.
Fig. 6: Illustration of our multi-task analysis network architecture. Here \(C\) and \(S\) are class numbers for classification and segmentation, respectively. For the _Resblock_, we follow the second kind of deep residual block design in [43].
\[\mathcal{L}_{\rm J}=\mathbb{E}_{X}[-\log(1-{\rm J}(\hat{X},z_{1}))]+\mathbb{E}_{X} [-\log({\rm J}(X,z_{1}))], \tag{11}\]
where \(\mathcal{L}_{\rm J}\) is the loss function for the discriminator. Note that the discriminator \(J\) is conditioned on \(z_{1}\) as in [44] to obtain sharp images, and we adopt the multi-scale discriminator method proposed by [45] to alleviate the mode collapse problem. Network design details for discriminator \(J\) are shown in Table II.
Under this target setting, the optimized parameters in the training process are \(\Theta=\{\theta_{\rm E},\theta_{\rm C},\theta_{\rm D},\theta_{\rm F},\theta_{\rm A}\}\) and \(\Theta=\{\theta_{\rm J}\}\), updated iteratively. In order to use gradient descent methods with the quantization process, uniform noise is added in training, and a rounding operation is adopted in the inference stage, where values are rounded to their nearest integer, following [11]. The optimization target setting for the machine vision task will be discussed in Section IV-B.
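The following sketch (PyTorch-style Python; coefficient values and function names are hypothetical) illustrates how Eq. (9) and Eq. (11) could be evaluated, together with the noise/rounding quantization mentioned above.

```python
import torch

def quantize(z, training=True):
    """Additive uniform noise during training, rounding at inference (following [11])."""
    if training:
        return z + torch.empty_like(z).uniform_(-0.5, 0.5)
    return torch.round(z)

def generator_loss(rate_bits, distortion, d_fake, lam=0.01, beta=0.15, eps=1e-9):
    """Eq. (9): rate + distortion + adversarial term, with rate_bits = -log R(z_hat)
    and d_fake = J(X_hat, z1), the conditional discriminator's score on the reconstruction."""
    return lam * rate_bits + distortion - beta * torch.log(d_fake + eps).mean()

def discriminator_loss(d_real, d_fake, eps=1e-9):
    """Eq. (11): conditional GAN discriminator objective on real and reconstructed images."""
    return (-torch.log(1.0 - d_fake + eps)).mean() + (-torch.log(d_real + eps)).mean()

# toy usage with sigmoid scores in (0, 1)
d_real, d_fake = torch.tensor([0.9, 0.8]), torch.tensor([0.2, 0.3])
print(generator_loss(torch.tensor(0.4), torch.tensor(0.1), d_fake))
print(discriminator_loss(d_real, d_fake))
```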
### _Machine Perception on Compressed Data_
Learning-based image compression models have the potential to close the gap between visual data compression and machine perception owing to their flexible and learned latent representation. Visual analysis with compressed data input and the setup for the optimization target will be introduced next.
#### Iv-B1 Multi-task analysis
Our hypothesis is that the essential semantics have been embedded and learned in \(\hat{z}_{1}\) and \(\hat{z}_{2}\) so that multiple downstream tasks can be solved without decoding. To further improve analysis efficiency, we treat multiple tasks from a multi-task analysis perspective, as,
\[\hat{Y}=\widetilde{\rm A}(z_{1},z_{2}),\quad\hat{Y}=[\hat{Y}_{0},\cdots,\hat{Y }_{n}], \tag{12}\]
where \(n\) is the total number of downstream tasks and \(\hat{Y}_{i}\) is the prediction for \(i\)th task. Classification and segmentation tasks are included as examples because they are indicative of machine perception tasks and have been widely explored, preventing generality loss. The multi-task analysis network details are shown in Fig. 6 with task-related modules. One of the most important aspects of multi-task training is learning how to balance distinct task losses [46], here we establish hyper-parameters to control the trade-off between tasks, thus the multi-task visual analysis loss can be formulated as,
\[\mathcal{L}_{\widetilde{\rm A}}=\lambda_{cls}l_{cls}+\lambda_{seg}l_{seg}, \tag{13}\]
Fig. 8: Rate-perception curves of our compression model. Arrow \(\downarrow\) in the title indicates the metric value is the lower the better. Grey arrow bars shown in the NIQE metric suggest the data variance while the data variance in the other three metrics is small to be neglected.
Fig. 7: Visual comparison of the reconstructed images with both traditional and learning-based codecs. For each reconstructed image, we display its bit rate (i.e., in bit per pixel, bpp) under itself. Since the resulting bit rates of the compression models are not continuous, we adapt the binary search to find the closest bit rate output for each compression method to ours.
where \(l_{cls}\) and \(l_{seg}\) are task-related losses for classification and segmentation, and \(\lambda_{cls},\lambda_{seg}\) are their weight hyper-parameters, respectively. In Ablation Studies, we discuss the choice of those hyper-parameters empirically.
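A minimal sketch of the multi-task analysis of Eq. (12)-(13), assuming PyTorch; the layer sizes are illustrative placeholders rather than the architecture in Fig. 6, and only \(z_{1}\) is used for brevity although Eq. (12) also takes \(z_{2}\).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHead(nn.Module):
    """Shared trunk on the compressed latent with two task branches:
    attribute classification and semantic segmentation."""
    def __init__(self, in_ch=128, num_attrs=40, num_seg_classes=19):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU())
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_attrs))
        self.seg_head = nn.Conv2d(64, num_seg_classes, 1)

    def forward(self, z1):
        h = self.trunk(z1)
        return self.cls_head(h), self.seg_head(h)

def multitask_loss(cls_logits, cls_labels, seg_logits, seg_labels, w_cls=1.0, w_seg=2.0):
    """Eq. (13): weighted sum of the per-task losses."""
    l_cls = F.binary_cross_entropy_with_logits(cls_logits, cls_labels)  # 40 binary attributes
    l_seg = F.cross_entropy(seg_logits, seg_labels)                     # pixel-wise classes
    return w_cls * l_cls + w_seg * l_seg

# toy usage on a random latent
head = MultiTaskHead()
cls_logits, seg_logits = head(torch.randn(2, 128, 16, 16))
print(multitask_loss(cls_logits, torch.rand(2, 40).round(),
                     seg_logits, torch.randint(0, 19, (2, 16, 16))))
```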
#### Iv-B2 Optimization target
We investigate two types of training procedures to further explore the relationship between compression and machine perception in order to optimize the compression model for machine perception tasks: separate training and joint training. Different parameters that need to be optimized are taken into account.
In the separate training setup, we fix the compression model and train the multi-task analysis model only, i.e., \(\Theta=\{\theta_{\widetilde{\mathrm{A}}}\}\). For joint training, the compression model and the multi-task analysis model are jointly optimized by the total loss and corresponding parameters:
\[\mathcal{L}=\mathcal{L}_{1}+\mathcal{L}_{\mathrm{J}}+\gamma\mathcal{L}_{\widetilde{\mathrm{A}}}, \tag{14}\] \[\Theta=\{\theta_{\mathrm{E}},\theta_{\mathrm{C}},\theta_{\mathrm{D}},\theta_{\mathrm{F}},\theta_{\mathrm{A}},\theta_{\widetilde{\mathrm{A}}}\}, \tag{15}\]
where the hyper-parameter \(\gamma\) balances the optimized point between human vision and machine perception targets. Specifically, the parameters \(\theta_{\mathrm{E}},\theta_{\mathrm{C}},\theta_{\mathrm{D}},\theta_{\mathrm{F}},\theta_{\mathrm{A}}\) are fine-tuned from the separate-training results instead of being trained from scratch, following [9], and we set \(\gamma\) as \(1\) in Section V.
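A small sketch of the two schedules, assuming PyTorch modules `compressor` and `analyzer` standing in for the codec and the multi-task analysis network; the learning rates and module names are hypothetical.

```python
import itertools
import torch

def build_optimizer(compressor, analyzer, joint=False, lr=1e-4):
    """Separate training freezes the codec and optimizes only theta_A~;
    joint training fine-tunes codec and analysis model together (Eq. (14)-(15))."""
    if joint:
        params = itertools.chain(compressor.parameters(), analyzer.parameters())
    else:
        for p in compressor.parameters():
            p.requires_grad_(False)
        params = analyzer.parameters()
    return torch.optim.Adam(params, lr=lr)

# toy usage with placeholder modules
opt = build_optimizer(torch.nn.Linear(4, 4), torch.nn.Linear(4, 4), joint=False)
print(opt)
```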
## V Experiments
We empirically show that the main advantage of learning-based image compression methods is not in signal-level preservation but in high perceptual reconstruction quality and semantic-level information preservation, which makes it feasible to conduct machine perception tasks on compressed visual data.
### _Experimental Settings_
#### V-A1 Dataset
To evaluate the proposed model, we conduct extensive experiments on the CelebAMask-HQ [47] dataset to study the proposed model's performance for human vision and machine perception. CelebAMask-HQ is a large-scale high-resolution facial semantic dataset that contains \(30,000\) face images with resolution \(512\times 512\). Each image is labeled with \(40\) attribute classes and has high-quality pixel-level labels of \(19\) semantic classes, i.e., \(18\) foreground classes and \(1\) background class. We follow the official dataset split setting in [48] and report the performance on \(256\times 256\) images.
#### V-A2 Image quality metrics
Reconstructed images are evaluated by human perception-oriented image quality assessments (i.e., FID [25], DISTS [26], LPIPS [27]), which are proposed to rank models for low-level vision tasks according to their perceptual performance. We use the VGG16 network pre-trained on ImageNet [49] as the feature extractor of the LPIPS assessment. A no-reference metric, NIQE [50], is also adopted, which measures how much the reconstructed image distribution deviates from the natural image distribution in statistics.
#### V-A3 Machine perception metrics
We verify the feasibility of directly analyzing the compressed domain and the effectiveness of multi-task training on two tasks: a) **multi-attribute estimation**, evaluated by calculating the average prediction accuracy (Accu.) among \(40\) attributes; b) **semantic segmentation**, evaluated by the pixel accuracy of all regions, the class accuracy for each semantic class, the mean Intersection over Union (mIoU) averaged among all \(19\) classes, and the Frequency-weighted Intersection over Union (FWIoU), which takes class appearance frequency into consideration on the basis of mIoU.
#### V-A4 Compared Codecs.
We compare our method's reconstructed image quality with the following baselines of both traditional and learning-based codecs. Three traditional codecs (i.e., JPEG, BPG, VVC) are taken as the comparison methods. For JPEG, we use the cjpeg1 implementation. BPG is an open-sourced software2 of the H.265/HEVC standard. The VTM3 version 11.0 is the reference software of H.266/VVC. For both BPG and VVC, we encode the images using the YUV420 color space. For learning-based compression methods, we include HiFiC [44], which is based on generative decoding, and CHENG [12], which achieves performance comparable with VVC in terms of the objective metric PSNR.
Footnote 1: [https://linux.die.net/man/1/cjpeg](https://linux.die.net/man/1/cjpeg)
Footnote 2: [https://bellard.org/bpg/](https://bellard.org/bpg/)
Footnote 3: [https://cveti.hhi.fraunhofer.de/jvet/VVCSoftware_VTM](https://cveti.hhi.fraunhofer.de/jvet/VVCSoftware_VTM)
### _Implementation Details_
#### V-B1 Separate training schedule
A separate training schedule is adopted independently for the compression and machine perception parts, in which the optimization target in Eq. (10) and \(\Theta=\{\theta_{\widetilde{\mathrm{A}}}\}\) are optimized separately. For the compression part, we use Adam [51] as our optimizer, where \(\beta_{1}\) and \(\beta_{2}\) take the default values of \(0.9\) and \(0.999\). We adopt a mini-batch size of \(8\), and a fixed learning rate of \(0.0002\). We follow the Least Squares GAN (LSGAN) training strategy [52] and use the two time-scale update rule (TTUR) [25] with independent learning rates for the discriminator and decoder/generator to help training convergence. We train the compression model for 3.6M iterations from scratch. \(\lambda_{MAE}\), \(\lambda_{SSIM}\), and \(\lambda_{p}\) in Eq. (8) are set as \(10\), \(0.25\), \(0.2\), respectively.
For the machine perception part, we use the same training strategy for the single-task and multi-task analysis networks in separate training. Models are optimized with a stochastic gradient optimizer for \(50\) epochs with mini-batches of size \(32\). The learning rate starts at \(0.001\) and is multiplied by \(0.1\) every \(10\) epochs. Notably, we do not apply image augmentation methods (e.g., random cropping and flipping) because those methods lack semantic meaning in the compressed domain.
#### V-B2 Joint training schedule
In the joint training phase, we optimize the compression and multi-task analysis models together by the target loss in Eq. (14), where the parameters to be optimized are given in Eq. (15). We fine-tune the parameters in Eq. (15) from the pretrained checkpoints of the separate training for \(10\) epochs with a fixed learning rate of \(0.0001\).
#### V-B3 Bit rate control
Given the fusion module \(\mathrm{F}(\cdot)\) used in our design, we propose a straightforward bit rate control strategy to obtain different compression ratios. After observing Fig. 10(a), we find that nearly \(80\)% of the latent channels contribute little to the estimated bit rate. Moreover, as depicted in the heatmap of Fig. 10(b), channel relationships are quite weak, and thus we can represent the latent representations by a subset of their feature channels. Based on this analysis, we conduct the bit rate control strategy in which we directly cut down the channel number of the latent representations (i.e., \(Z\)) to obtain compression models that can reach extremely high compression ratios. For our compression models from extreme to high bit rate, we set the latent representations' channel number in the set \(\{8,16,64,128\}\), in order.
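A minimal sketch of the channel-truncation idea, assuming PyTorch; the backbone layers are illustrative placeholders, and only the final projection width changes across the rate points.

```python
import torch
import torch.nn as nn

class ChannelTruncatedEncoder(nn.Module):
    """Bit-rate control by shrinking the latent channel count: the same backbone is
    paired with a final projection of 8/16/64/128 output channels, one model per target rate."""
    def __init__(self, latent_channels=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.project = nn.Conv2d(64, latent_channels, 3, padding=1)

    def forward(self, x):
        return self.project(self.backbone(x))

# latent shapes for the extreme-to-high bit-rate settings
for c in (8, 16, 64, 128):
    z = ChannelTruncatedEncoder(latent_channels=c)(torch.randn(1, 3, 64, 64))
    print(c, tuple(z.shape))
```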
### _Compression Performance_
From the point of subjective evaluation, we display the reconstructed images in Fig. 7 to illustrate that our human vision-oriented compression model can maintain vision-friendly information even at extremely low bit rate restriction (i.e., images in the bottom row of Fig. 7). It is apparent that traditional codecs including JPEG, BPG, and VVC suffer from severe blurring and blocking artifacts. In comparison to VVC baseline, the proposed approach produces noticeably improved reconstruction quality with finer and more accurate texture at two-thirds or less of its bit rate. Although the HiFiC model likewise generates images with great visual quality using image generation methods, it does not investigate the condition of high compression ratio.
From the point of objective evaluation, image quality assessments aiming at reflecting human visual perception, including FID, DISTS, LPIPS, and NIQE, are illustrated in Fig. 8. Especially on the FID and NIQE metrics, our compression model outperforms the comparison methods greatly, and on DISTS and LPIPS, our model shows advantages under extremely low bit rate circumstances (i.e., bpp \(\leq 0.1\)).
Fig. 10: Channel-wise visualization analysis results. (a) The rate used for each feature channel is sorted in descending order. (b) Absolute Pearson correlation coefficients among 20 randomly selected channels and consider their absolute value which is in the range \([0,1]\), the lower value demonstrates the weaker correlation.
Fig. 9: The qualitative comparison results with VVC(VTM 11.0, qp = 37) and our method. Lower LPIPS and DISTS indicate better quality.
Objectively, based on the results in Fig. 8, taking the DISTS metric as the example and VVC as the anchor, the proposed method achieves 39.1%, 27.7%, 15.1%, and 6.6% average reconstruction quality gains under equivalent bit rates (i.e., bpp in the set of {0.2, 0.4, 0.6, 0.8}).
### _Machine Perception Performance_
#### V-D1 Single-task vs. multi-task
In Table III, we explore the potential classification performance gain brought by the multi-task learning analysis network shown in Fig. 6. Taking the classification task as an example, a \(0.7\%\) accuracy gain is obtained by introducing multi-task learning, compared with single-task learning.
#### V-D2 Semantic analysis
Semantic analysis results with our multi-task analysis model are displayed in Table I.
Here we have four metrics for the segmentation task. The results reveal that different metrics reflect different degrees of degradation. We attribute this to the fact that our generative learning-based compression model relies heavily on prior knowledge (i.e., the data distribution), so it performs worse when a rare accessory appears, which severely harms class-related metrics such as class accuracy.
#### V-D3 Comparisons with perception on RGB images
We take machine perception analysis with RGB image input as the comparison method for our compressed-latent-input model. Take our low bit rate compression model as an example. In Table I, multi-task analysis with its compressed latent as input sacrifices \(0.8\)% regarding the mIoU metric of the segmentation task while saving \(99.6\)% of the bit rate (i.e., \((24-0.096)/24\times 100\%\)) compared with the original RGB images as input. After observing the experimental results shown in Table I, the findings can be concluded as:
* Machine perception can be conducted on compressed domain achieving comparable performance with the RGB image input.
* Compression ratio is obtained at the cost of reconstructed image quality and machine perception performance.
* Compressed visual data contains limited information, which shows different analysis effects on different tasks; pixel-wise segmentation suffers a more serious loss than the classification task.
#### V-D4 Joint training result
We evaluate the joint training schedule's effect from three dimensions: compression ratio, reconstructed image quality, and machine perception performance. As displayed in the radar chart in Fig. 11, machine perception performance is elevated at the limited cost of the other two evaluation aspects at extreme compression levels, while reconstructed image quality shows a trade-off between different quality assessments at the high compression level. We show that there exists a machine perception-distortion-human perception trade-off, which can be seen as a real-world practical extension of the theoretical work in [53].
classification accuracy, respectively. It can be observed that the best reconstruction and analysis combination is achieved at \(\lambda_{MAE}=10\), \(\lambda_{SSIM}=1\).
### _Multi-task Analysis Model Loss_
The relationship among tasks is balanced with hyper-parameters, as noted in the loss function Eq. (13). We verify the relationship between those two tasks following the separate training schedule: we set the weight \(\lambda_{cls}\) for classification as a constant \(1.0\) and change the task weight \(\lambda_{seg}\) for segmentation to see how their performance varies. As shown in Fig. 12, a trade-off relationship between the classification and segmentation tasks is revealed, and the best visual analysis performance is reached when \(\lambda_{seg}=2\).
### _Biometric Information Reservation_
To evaluate the accuracy of the reconstructed images with respect to biometric features, we use two image processing tasks: facial feature detection and 1:1 face recognition. The face landmark detection task provides information about the structural change of the face, and the 1:1 face recognition task determines whether two facial images are from the same person or not.
**Face landmark detection**: Image compression should produce structure-fidelity and semantic-fidelity reconstructed images for practical use. Considering the characteristic of human face image, we evaluate the structure-fidelity performance by facial landmark detection. We set the detection results on uncompressed images by the pre-trained detection model from [58] as groundtruth landmarks \(g\). Here normalized error is defined as:
\[Error=\frac{\left\|d_{i}-g_{i}\right\|_{2}}{d_{inter}} \tag{16}\]
where \(d_{i}\) is the predicted 2-d locations for landmark \(i\) and \(d_{inter}\) represents for inter-ocular distance [57]. _Mean Error_ is the mean value of \(Error\) in Eq. (16) for all landmarks.
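A minimal NumPy sketch of Eq. (16) averaged over all landmarks; the eye indices used for the inter-ocular distance follow the common 68-point convention and are an assumption here, not taken from the paper.

```python
import numpy as np

def normalized_mean_error(pred, gt, left_eye_idx=36, right_eye_idx=45):
    """Mean of Eq. (16) over landmarks: ||d_i - g_i||_2 / inter-ocular distance.
    pred, gt: (N, 2) arrays of predicted / reference 2-D landmark locations."""
    d_inter = np.linalg.norm(gt[right_eye_idx] - gt[left_eye_idx])
    per_point = np.linalg.norm(pred - gt, axis=1) / d_inter
    return per_point.mean()

# toy usage with 68 landmarks perturbed by small noise
gt = np.random.rand(68, 2) * 256
pred = gt + np.random.randn(68, 2)
print(normalized_mean_error(pred, gt))
```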
As illustrated in Fig. 14, our compression model suffers from smaller detection errors at high compression ratios (i.e., \(\leq 0.1\) bpp) compared with JPEG, producing reconstructed images that preserve more structural information.
**1:1 face recognition**: We compute the 1:1 similarity rate of face recognition between the original image and the compressed image. A higher similarity rate means less loss of biometric features. We use the online 1:1 face recognition SDK from iFLYTECH4. The comparison results are shown in Table VII: under extreme bit rate constraints, the proposed method can save \(67.24\%\) of the bit rate while achieving comparable 1:1 face recognition performance on the reconstructed image, taking the JPEG codec as the comparison method. Intuitively, JPEG undergoes severe blocking artifacts and color shift, which damages the biometric information of facial images and thus reduces the similarity comparison accuracy in 1:1 face recognition.
Footnote 4: [https://global.iflytek.com/](https://global.iflytek.com/)
### _Hierarchical Representation Effect_
Hierarchical representations can help boost the model's robustness to the quality of visual analysis. We take two approaches to the issue of figuring out how effective hierarchical representations are: 1) _quantitative analysis_ in which we conduct ablation tests to determine the impact of hierarchical representations on the outcomes of the visual analysis. 2) _qualitative analysis_ to determine the visual effects of latent features following hierarchical decomposition.
Fig. 12: Classification and segmentation tasks’ performance by adjusting hyper-parameter \(\lambda_{seg}\). The point of intersection has \(\lambda_{seg}=2\).
**Quantitative analysis:** We evaluate the effect of hierarchical features by removing the \(I_{S}\) feature, leaving the rest unchanged. We find that at the comparable bit rate, utilizing layered structure (i.e., using both \(I_{R}\) and \(I_{S}\)) can result in an average gain of 1.66% on downstream visual analysis tasks when compared to using non-layered structure (i.e., using \(I_{R}\) only).
**Qualitative analysis:** We analyze the effect of each part in the proposed hierarchical representations from the visual viewpoint, as shown in Figure 13. We directly show the effect of the reconstruction representation \(I_{R}\) by zeroing the semantic component \(I_{S}\), giving image A. The \(\delta\) image is the difference between reconstructed image A and \(N_{0}\)\(I_{S}\), from which we can find that the semantic component \(I_{S}\) intuitively controls luma and color from the point of visual effects. As defined in Section IV-A, \(I_{S}\) is a one-dimensional feature, so it does not reflect much information if visualized directly. Thus we employ the method that takes \(I_{R}\) from image A and \(I_{S}\) from image B. Then we combine them to see the effect of \(I_{S}\) through the combination image. From the image in the bottom right corner of Figure 13, we can find that the luma and color of the facial and hair parts are controlled by the target image B.
## VII Conclusion
In this paper, we propose an efficient layered generative compression model which achieves comparable downstream machine perception performance with the compressed visual data as input; meanwhile, the reconstructed images maintain human visual fidelity, as verified with four perceptual quality metrics. A task-agnostic perception model with compressed data input is developed to achieve analysis efficiency and flexibility, and it efficiently supports diverse analytical tasks. Thorough experiments on face images are conducted, verifying that multiple independent visual analysis tasks can be directly inferred from the compressed representations while skipping the decoding process and decoder-side computation complexity, which demonstrates the proposed model's practical value.
The analysis and observations in this paper provide feasible directions for conducting visual tasks on latent representations, particularly learned ones. Although learning-based compression models have their limitations for pixel-fidelity-based metrics (e.g., PSNR and MSE), we argue that the principal advantages of learning-based image compression models lie in perceptual retention and flexible analysis support for machine perception, rather than signal-level fidelity preservation, which points to a direction for future codecs.
|
2307.11662
|
BlockCampus: A Blockchain-Based DApp for enhancing Student Engagement
and Reward Mechanisms in an Academic Community for E-JUST University
|
In today's digital age, online communities have become an integral part of
our lives, fostering collaboration, knowledge sharing, and community
engagement. Higher education institutions, in particular, can greatly benefit
from dedicated platforms that facilitate academic discussions and provide
incentives for active participation. This research paper presents a
comprehensive study and implementation of a decentralized application (DApp)
leveraging the blockchain technology to address these needs specifically for
E-JUST (Egypt-Japan University of Science and Technology) students and academic
staff.
|
Mariam Ayman, Youssef El-harty, Ahmed Rashed, Ahmed Fathy, Ahmed Abdullah, Omar Wassim, Walid Gomaa
|
2023-07-07T19:12:19Z
|
http://arxiv.org/abs/2307.11662v1
|
BlockCampus: A Blockchain-Based DApp for enhancing Student Engagement and Reward Mechanisms in an Academic Community for E-JUST University
###### Abstract
In today's digital age, online communities have become an integral part of our lives, fostering collaboration, knowledge sharing, and community engagement. Higher education institutions, in particular, can greatly benefit from dedicated platforms that facilitate academic discussions and provide incentives for active participation. This research paper presents a comprehensive study and implementation of a decentralized application (DApp) leveraging the blockchain technology to address these needs specifically for E-JUST (Egypt-Japan University of Science and Technology) students and academic staff.
DApp, Solidity, Decentralization, Blockchain Development
## I Introduction
The Internet is a gateway for students to learn basically anything anytime. The reliance of students on the Internet for information can be quite challenging: not everything is true and/or reliable, which can cause confusion and mistrust, as shown in [1] and [2]. As a result, many platforms were created to provide an online community where people can ask questions and receive answers from a diverse range of users, such as Quora [3], Reddit [4], and Stack Exchange [5]. These serve as knowledge-sharing platforms, enabling individuals to seek assistance, share insights, and learn from others' experiences in various fields. They aim to foster collaboration, problem-solving, and knowledge exchange among their users.
However, the problem with these platforms is that they are not foolproof [6]. Maintaining the quality and accuracy of information can be a challenge on these platforms; since anyone can contribute answers, there is a risk of incorrect or unreliable information and/or misinformation being shared [7]. Like any online community, these platforms can be susceptible to trolling, spamming, or inappropriate behavior [8].
Another critical problem is over-reliance on individual opinions, as answers provided on these platforms are typically based on personal opinions and experiences. While they can be valuable, they may not always align with expert advice or best practices, as shown in [9]. Our motivation for creating a dedicated Q&A platform for our university E-JUST stems from several challenges. First, the decentralized nature of discussions across different channels creates a fragmented knowledge base, making it difficult for users to access relevant information efficiently [10]. Second, the lack of proper incentives and rewards discourages active participation and fails to acknowledge the valuable contributions of the community members [11]. Lastly, the absence of a transparent and immutable system for tracking user reputations limits the ability to establish trust and credibility within the community. The primary objective of this research is to design and implement a blockchain-based DApp, named **BlockCampus**, that serves as a centralized hub for academic discussions, knowledge sharing, and community engagement within the E-JUST ecosystem [12]. By leveraging the Ethereum blockchain [13], we aim to address the aforementioned challenges by introducing a reward mechanism based on user participation and reputation. The DApp will provide a seamless and user-friendly interface, empowering students and academic staff to ask questions, provide answers, vote on content, and earn rewards based on their contributions. The scope of this research paper encompasses the conceptualization, implementation, and evaluation of the DApp, along with the analysis of user engagement and the effectiveness of the reward mechanism [14][15].
By providing a dedicated platform tailored to E-JUST's unique requirements, the BlockCampus DApp aims to foster a vibrant and collaborative community, encourage knowledge sharing, and recognize the contributions of individuals within the ecosystem [14]. The blockchain technology ensures transparency, immutability, and decentralization, enhancing trust and enabling a seamless experience for all participants [16][14]. Through this research, we envision a future where students and academic staff at E-JUST (and consequently any other academic institution) can harness the power of blockchain technology to fuel knowledge sharing, collaboration, and innovation. By building a vibrant and inclusive community, we strive to create a more enriching and rewarding academic experience for all participants. Designed to serve this purpose, BlockCampus is built on top of the Ethereum blockchain and adopts a consortium blockchain model.
This paper is organized as follows. Section I is an introduction. Section II is an overview of the underlying concepts and protocols that were used to make this project possible. Section III is an abstract view of the structure of the individual nodes, the blockchain network, and the decentralized app. Section IV is a deep dive into how the technologies the DApp utilizes interact with each other and
the blockchain network. Section V is a discussion of the importance of testing. Section VI is a final discussion, some observations, and conclusions. Section VII is a discussion of what further improvements can be made and applications that this kind of architecture can be used in.
## II Background
### _Blockchain_
The Blockchain technology, initially introduced as the underlying technology for cryptocurrencies like Bitcoin [14], has gained significant attention in various industries beyond the realm of digital currencies. At its core, _blockchain is a distributed and decentralized ledger_ that records and verifies transactions across multiple nodes, providing transparency, immutability, and security. It operates as a chain of blocks, where each block contains a set of transactions, linked together using cryptographic hashes as introduced in [17]. With the help of these technologies, blockchain provides a secure, reliable, and a decentralized ledger database that can be utilized for many applications and this is greatly shown in [18] and [19].
### _Decentralization in Blockchain_
Centralization is a common form of governance, where we must trust one or a few central authorities to maintain order over a structured system, like banks [12]. This creates 'centrality', meaning that if these authorities cannot control the structure or entity they operate, it will fail, making it vulnerable to attacks that may damage the system or any related entities [16]. This creates a monopolization of power and a security risk.
One of the fundamental characteristics of blockchain technology is _decentralization_. Unlike traditional centralized systems that rely on a single authority or intermediary, blockchain distributes data and control among multiple participants (called **Nodes**), eliminating the need for a central governing entity [20]. This decentralized architecture enhances trust, resilience, and security by removing single points of failure and reducing the risks of failure, manipulation, or censorship [21][16].
### _Ethereum Blockchain_
Blockchain technology has revolutionized various industries by introducing decentralized and secure systems for data management and transaction processing as indicated by [21] and [22]. One prominent blockchain platform that has garnered significant attention is **Ethereum**. It was proposed by Vitalik Buterin in late 2013 and launched in July 2015, with the aim of enabling the development of decentralized applications **DApps** through the use of smart contracts [12][23].
Ethereum distinguishes itself from traditional blockchain platforms by providing a programmable infrastructure, allowing developers to create and deploy **smart contracts** on its blockchain [24]. **Smart contracts** are self-executing agreements with predefined rules and conditions written into code. They automate the execution and enforcement of contractual obligations without the need for intermediaries, thereby enhancing transparency, efficiency, and trust in various sectors.
The Ethereum platform operates on a decentralized network of computers, known as nodes, that collectively maintain the integrity and security of the blockchain. Transactions on Ethereum are verified and validated through a consensus mechanism, initially based on proof-of-work (PoW) and transitioning to proof-of-stake (PoS) in Ethereum 2.0 [12]. This consensus mechanism ensures the immutability of the blockchain and prevents malicious activities. The potential applications of Ethereum extend far beyond simple transactions. Its programmable nature enables the development of complex DApps, security systems, decentralized finance (DeFi) [16][15] protocols, Internet of Things (IoT) [18] applications, and more. Ethereum has become a breeding ground for innovation and experimentation in the blockchain space, attracting developers, entrepreneurs, and organizations worldwide.
### _Smart Contracts_
Smart contracts are **self-executing** software built on blockchain that automatically facilitate, verify, and enforce the terms of a contract or an agreement without the need for intermediaries. They are programmable contracts that operate on decentralized applications (DApps) and are encoded with specific conditions and actions. An accurate definition given by Gideon Greenspan is "A smart contract is a piece of code which is stored on an Blockchain, triggered by Blockchain transactions, and which reads and writes data in that Blockchain's database" [25]. Blockchain transactions are any transfer of data or digital assets on the blockchain network, data could range from reading or writing simple text information to complex smart contract interactions that include asset ownership transfer like tokens and cryptocurrencies. Once deployed, the smart contract waits for specific triggers or events to occur. These triggers can be predefined conditions, such as a specific date, a certain action by a user, or the fulfillment of certain criteria. When a trigger is met, the smart contract is executed automatically, without the need for any central authority. During execution, the smart contract carries out a series of actions based on its programmed instructions. These actions can include transferring digital assets, updating data records, or triggering other smart contracts. The execution process is entirely deterministic, meaning that the outcome of the contract is predictable and reproducible.
Smart contracts offer several notable advantages in DApps. First, they eliminate the need for intermediaries, reducing costs and the potential for human error. By automating the execution and enforcement of agreements, smart contracts enhance the efficiency of transactions and minimize the risk
Fig. 1: Structure of interconnected blocks
of fraud [24]. Moreover, smart contracts provide increased security through their decentralized nature. They are stored on a distributed ledger, making them resistant to tampering and hacking. Once deployed on the blockchain, the terms of the contract are immutable and transparent to all participants, ensuring accountability and reducing disputes.
Another key benefit of smart contracts is their ability to streamline complex processes. They can handle a wide range of transactions, from simple financial transfers to more intricate multi-step operations. This capability opens up opportunities for various sectors, such as supply chain management, real estate, insurance, and finance, where processes can be automated, and trust can be established through code execution [12].
Smart contracts serve as the backbone of DApps, providing a secure, efficient, and transparent mechanism for executing agreements. By removing intermediaries, enhancing security, and streamlining processes, smart contracts have the potential to revolutionize industries and reshape the way transactions are conducted in the digital era [18].
### _DApps_
The popularity of blockchain and cryptocurrency has increased in the recent years which gave rise to decentralized Applications (DApps). DApps are an innovative approach to software systems. DApps leverage blockchain technology to provide decentralized, transparent, and secure platforms for various domains. Researchers have explored the potential of DApps in different areas, such as finance, health, and accountability [26]. These applications offer a range of benefits, including reduced reliance on intermediaries, enhanced data integrity, and increased user control. The surveyed papers shed light on key aspects of DApps, such as their architecture, execution mechanisms, and potential challenges [27]. They emphasize the importance of accountability and trust in DApp ecosystems and highlight the potential of leveraging blockchain technology to create robust and efficient software systems. By harnessing the power of smart contracts and decentralized networks, DApps offer new avenues for developing innovative solutions, revolutionizing industries, and transforming traditional centralized approaches to software development [26].
## III System Architecture and Logic
### _DApp Overview_
#### Iii-A1 DApp Goals
Our DApp, known as BlockCampus, is designed to provide an exceptional online community where users can connect, share knowledge, and interact with peers who share similar interests. Leveraging the power of Ethereum blockchain technology, BlockCampus ensures a reliable and secure platform for students to collaborate and thrive academically. The users of this platform are the stakeholders, including students, academic staff (professors and teaching assistants), university administrative staff, and the developers of the DApp. Each has their own role.
#### Iii-A2 DApp Users and their Roles
**Students:** Once they are logged in, students choose their field of interest and join communities with similar interests. Upon completion, they have the ability to view the community posts and questions and interact with other members of the community. An example of these interactions is posting questions or inquiring about topics related to the community and sharing useful study materials, such as asking for the proof of a mathematical problem or asking questions concerning a programming language or advancements in finance and business models. These questions can be commented on, which is the primary way for users to interact with each other. Comments and posts can be voted on by other users and, along with other parameters, they are ranked accordingly, and the most highly rated contributions are shown first. **Academic staff:** In addition to the features mentioned above, the academic staff members on BlockCampus play a pivotal role in fostering an engaging and productive learning environment. Academic staff have access to an expanded set of functionalities that allow them to not only participate in the community but also contribute to its growth and development. They possess additional privileges and responsibilities that enhance their engagement within the community. One of these is the ability to rate students' contributions with a distinct rating system that provides expert feedback and guidance on students' queries. They can comment on posts, provide clarifications, suggest alternative approaches, and offer constructive criticism when necessary. **EJUST administrative staff:** Since the DApp is built in a decentralized manner, users have complete freedom to express their thoughts, opinions, and ideas openly on the platform, which is one of the main features of the DApp. But sometimes these opinions can be controversial, inappropriate, or unrelated to the community, and in some cases lead to harassment and trolling. So, to ensure a safe and conflict-free environment, community guidelines that are compatible with the university
Fig. 2: Operational Mechanism of a Smart Contract
rules are implemented to mitigate illegal activities and the spread of false information, and to introduce a culture of respect and inclusivity. The role of the administrative staff is to do exactly that by moderating the overall DApp content and flagging inappropriate content while providing freedom of opinion at the same time. **Developers**: Their role is the most important as they handle the operation, maintenance, and updates of the system throughout the lifecycle of the DApp. Having a well-maintained DApp creates a secure and reliable system resilient to attacks and safeguards the community. This ensures an optimal user experience and the development and growth of the platform.
#### Iii-A3 Incentives: Reputations System and Awards
Rewards can help foster long-term engagement and retention within the community. Users who consistently contribute and earn rewards feel a sense of loyalty and commitment to the community. This sustained engagement promotes a stable and active user base, ensuring the continuity and growth of the community over time.
**Reputation points (Bateekh):** As a way to recognize contributions and establish a sense of trust between users, a reputation system measures credibility within the community; it reflects the user's engagement, behavior, trustworthiness, and commitment. The simplest way to implement such a system is with points, which are affected by engagement with the contribution through comments, votes, and awards (more on these later). Each upvote increases the user's points according to a calculation that takes into account the rating given by academic staff, the reputation of the user, and the age of the post. Combined, these give an accurate estimate of how many points the user should earn: the higher these parameters, the greater the points. Users with enough reputation points earn the native token of the DApp (Tofu), which can be used on the platform. These tokens can be exchanged for waiving university tuition fees, bonus grades, or services provided by other users or the university. Examples of such services are free courses, premium features, profile customization, and access to exclusive material. These services can vary depending on the organization and the user; for example, academic staff may have services available to them that differ from those offered to students. The distribution of the tokens and points is managed automatically by the Token smart contract and the distribution model created within the contract by the developers. More details are discussed in Sections III.4 and III.5.
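Since the exact scoring formula is left to the Token smart contract's distribution model, the Python sketch below only illustrates one way the listed parameters (upvotes, staff rating, author reputation, post age) could be combined; every coefficient in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    upvotes: int            # net upvotes from other users
    staff_rating: float     # rating given by academic staff (0 if unrated)
    author_reputation: int  # author's current Bateekh balance
    age_days: float         # age of the post or comment in days

def bateekh_earned(c: Contribution) -> int:
    """Illustrative scoring: newer, staff-endorsed, highly upvoted posts earn more."""
    age_decay = 1.0 / (1.0 + 0.05 * c.age_days)          # older posts earn less per upvote
    staff_bonus = 1.0 + 0.5 * c.staff_rating             # expert rating acts as a multiplier
    rep_bonus = 1.0 + min(c.author_reputation, 1000) / 1000.0
    return round(10 * c.upvotes * age_decay * staff_bonus * rep_bonus)

# A 2-day-old answer with 7 upvotes, rated 4/5 by a professor, from a user holding 250 Bateekh
print(bateekh_earned(Contribution(upvotes=7, staff_rating=4.0, author_reputation=250, age_days=2.0)))
```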
**Awards:** Another fun way to show appreciation and gratitude to users is through awards. They make users feel recognized and valued for their efforts, and this recognition acts as a motivator, encouraging them to continue participating actively. As an incentive, users who receive recognition and awards for their contributions are more likely to attract attention from other students and academic staff. This can lead to networking opportunities, mentorship possibilities, or even career advancement for the rewarded users.
### _Blockchain Architecture_
#### III-B1 Block Anatomy
The blockchain in the BlockCampus DApp consists of a series of blocks, as seen in Figure [1], and each block contains a set of transactions. These transactions can represent various activities within the DApp, such as posting questions, providing answers, voting on content, or earning rewards. Additionally, each block is linked to the previous block through a cryptographic hash, which serves as a unique identifier for the block and incorporates the previous block's hash. This linking mechanism creates a chain of blocks, ensuring the integrity and immutability of the blockchain (Figure [3]). If any modification is attempted on a block, it would result in a change in its hash, which, in turn, would affect the subsequent block's hash. This property makes it extremely difficult for malicious actors to tamper with the blockchain, as any tampering would be immediately detectable [16].
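As a minimal illustration of this hash-linking (independent of Ethereum's actual block format), the following Python sketch chains two toy blocks and shows how tampering with an earlier block is detected:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash of the block contents, which include the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"index": 0, "prev_hash": "0" * 64, "transactions": ["alice posts a question"]}
block_1 = {"index": 1, "prev_hash": block_hash(genesis), "transactions": ["bob answers", "carol upvotes"]}

# Tampering with the genesis block changes its hash, breaking the link stored in block_1.
genesis["transactions"][0] = "alice posts spam"
print(block_hash(genesis) == block_1["prev_hash"])  # False -> tampering is detected
```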
#### III-B2 Blockchain Type (Consortium)
BlockCampus adopts a _consortium blockchain model_ to meet the specific requirements of the E-JUST ecosystem. Unlike public blockchains, which are open to anyone for participation and validation, a consortium blockchain restricts access to a predefined set of trusted entities. In the case of BlockCampus, the consortium members include E-JUST students, academic staff, and other relevant stakeholders within the university ecosystem. By limiting the participation to trusted entities, the consortium blockchain enhances privacy, scalability, and transaction throughput compared to public blockchains. Therefore, the participants have a higher level of control and can make decisions collectively, enabling more efficient consensus mechanisms. It also offers enhanced privacy as the transactions and data shared within the consortium are not visible to the public. This privacy is particularly important for academic discussions and sensitive information sharing within the E-JUST community. In addition, it ensures a higher degree of trust among participants since they are known entities within the ecosystem which facilitates effective collaboration, knowledge sharing, and community engagement. By utilizing a consortium blockchain, BlockCampus can strike a balance between openness and privacy, allowing the E-JUST community to benefit from the advantages of blockchain technology while maintaining control over the network and data.
#### III-B3 Node Architecture
Fig. 3: Block Anatomy

The BlockCampus blockchain network consists of multiple nodes, which can be represented by E-JUST, academic staff, students, and other authorized stakeholders. These nodes are distributed across different locations and are connected through the blockchain network. Each node maintains a copy of the blockchain and participates in the consensus process. The distributed nature of the node architecture ensures that no single entity or centralized authority has complete control over the network. This decentralization enhances the security and resilience of the network, as it becomes more resistant to single points of failure and external attacks. If one node goes offline or becomes compromised, the other nodes in the network continue to validate transactions and maintain the integrity of the blockchain.
#### III-B4 Consensus Mechanism (PoA)
To maintain consensus on the state of the blockchain and validate transactions, BlockCampus employs a proof-of-authority (PoA) consensus mechanism. In PoA, consortium members take turns acting as validators based on their authority within the network. In the BlockCampus DApp, the consortium members are E-JUST administrative staff (acting as regulators and compliance authorities), who are given the authority to validate transactions. The validators take turns adding blocks to the blockchain and confirming the validity of transactions. This consensus mechanism ensures faster transaction confirmation times and higher throughput compared to more computationally intensive consensus mechanisms like proof-of-work (PoW). PoA is suitable for our consortium blockchain as it strikes a balance between decentralization and scalability: consortium members are known and trusted entities within the network, and their authority allows for efficient transaction validation and consensus.
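Besu's PoA protocols (Clique, IBFT 2.0, QBFT) additionally involve block signing and validator voting; the sketch below only illustrates the round-robin turn-taking described above, with hypothetical node names.

```python
validators = ["ejust-admin-1", "ejust-admin-2", "ejust-admin-3"]  # hypothetical staff-run nodes

def proposer_for(block_number: int) -> str:
    """Round-robin turn-taking: each validator proposes blocks in a fixed rotation."""
    return validators[block_number % len(validators)]

for n in range(6):
    print(f"block {n} proposed and signed by {proposer_for(n)}")
```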
### _DApp Architecture_
As shown in Figure [4], the BlockCampus DApp architecture consists of the following key components:
**User Interface (UI) Layer:** The UI layer is responsible for providing an intuitive and user-friendly interface for users to interact with the BlockCampus DApp. It includes the graphical user interface (GUI) elements such as web pages, forms, buttons, and menus that enable users to access different functionalities of the DApp. The UI layer is designed to be responsive, ensuring a seamless experience across different devices and screen sizes.
**Backend Layer:** The backend layer in the BlockCampus DApp architecture plays a pivotal role in facilitating seamless interactions between the user interface layer, smart contracts, and IPFS (InterPlanetary File System) integration. Web3 serves as a bridge between these components, handling communication between the DApp's UI and the blockchain network. One of the key responsibilities of the backend layer is to interact with the smart contracts deployed on the Ethereum blockchain. Through Web3 API calls and integration with the smart contracts written in Solidity, the backend layer enables the execution of transactions and the retrieval and updating of data stored on the blockchain. This ensures transparency, security, and immutability of the information exchanged within the BlockCampus ecosystem.
As shown in Figure [5], this layer handles the critical functionalities of the DApp's logic: secure data storage and communication with the blockchain network, including interactions with smart contracts, and leveraging IPFS functionalities for decentralized file storage and retrieval. Additionally, the backend layer enforces access controls and ensures user authentication and authorization, safeguarding sensitive information and maintaining user privacy within the BlockCampus ecosystem.
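As an illustration of this bridging role, the following web3.py (v6-style) sketch records a post whose body is stored on IPFS, keeping only the content identifier on-chain; the Forum contract, its postQuestion function, and all endpoints and addresses are hypothetical placeholders.

```python
from web3 import Web3

# All endpoints, addresses, and the ABI fragment below are placeholders for illustration.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local Besu or Hardhat node

forum_abi = [{
    "name": "postQuestion", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "ipfsCid", "type": "string"}], "outputs": [],
}]
forum = w3.eth.contract(address=Web3.to_checksum_address("0x" + "00" * 20), abi=forum_abi)

# The question body is pinned to IPFS off-chain; only its content identifier goes on-chain.
tx_hash = forum.functions.postQuestion("QmExampleCid").transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("question recorded in block", receipt.blockNumber)
```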
**Blockchain Network:** The blockchain network forms the foundation of the BlockCampus DApp architecture. In our case, the DApp is built on top of the Ethereum blockchain which comprises multiple nodes that collaborate to maintain the integrity and security of our blockchain.
Fig. 4: DApp Architecture

Fig. 5: IPFS Gateway (Adapted from [28])

The interactions within the BlockCampus DApp architecture follow a coordinated flow, as shown in Figure [6]. Users interact with the user interface layer, accessing various functionalities and submitting requests. These requests are then processed by the backend layer, which communicates with the Ethereum blockchain network, including IPFS, to execute transactions, store files, and retrieve data from smart contracts and IPFS storage. The backend layer handles the necessary computations, validates inputs, and ensures data integrity and security. Finally, the results are processed by the backend layer and presented to the user through the user interface layer, enabling a seamless and secure user experience within the BlockCampus DApp.
Overall, the DApp architecture combines the user-friendly interface, the logic and rules defined in smart contracts, the backend layer for processing and communication, and the underlying blockchain network to provide a decentralized, transparent, and secure platform for academic discussions, knowledge sharing, and community engagement within the E-JUST ecosystem.
### _Tokens Creation_
In order to establish a vibrant ecosystem within our blockchain platform, the creation of tokens plays a crucial role. We have implemented a robust token creation process that adheres to industry standards and best practices. Our platform supports token standards such as ERC-20, which enable seamless integration with existing decentralized applications (DApps) and ensure compatibility with various wallets and exchanges. To create our tokens, we utilize smart contracts that facilitate the automatic issuance and management of tokens, ensuring transparency and security. Through the use of smart contracts, we eliminate the need for intermediaries and enable peer-to-peer token transfers.
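For example, once deployed, the token contract can be queried through the standard ERC-20 interface; in the sketch below the Tofu contract address and RPC endpoint are placeholders.

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

# Minimal slice of the standard ERC-20 ABI; the Tofu contract address is a placeholder.
erc20_abi = [
    {"name": "totalSupply", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
]
tofu = w3.eth.contract(address=Web3.to_checksum_address("0x" + "11" * 20), abi=erc20_abi)

print("total Tofu supply:", tofu.functions.totalSupply().call())
print("student balance:  ", tofu.functions.balanceOf(w3.eth.accounts[0]).call())
```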
### _Tokenomics_
Tokenomics forms the foundation of our token's value proposition and economic model. By carefully designing the tokenomics, we aim to incentivize adoption, promote network growth, and reward DApp users. The token's value is influenced by factors such as supply and demand dynamics, utility within the ecosystem, and the overall economic model. Additionally, we have incorporated mechanisms that recognize and reward user contributions to the community:
**Bateekh:** Bateekh is the reputation point of our system, indicating a user's contribution to the community through questions and answers. Users will gain Bateekh when others upvote their questions or answers. The rating of the academic staff will also grant Bateekh points to users. The number of points gained depends on various parameters, including the age of the questions or answers and the rating of the academic staff.
**Tofu:** Tofu serves as the main currency (Token) of our website and is utilized in all transactions. It holds tangible value as users can exchange Tofu for services on the website or receive acknowledgment from the university. The Tofu token serves as a reward mechanism within our ecosystem, further motivating community engagement and participation.
#### III-D1 Distribution Model
The distribution model refers to the way in which tokens are initially allocated and distributed among users or participants. It determines how the token supply is divided and distributed among various stakeholders. In our application we used the following distribution model:
**- Token supply:** In accordance with the ERC-20 standard for token creation and the project's goals, we decided to launch the token with an initial supply large enough to reward users for their contributions without flooding the DApp with too many tokens. The maximum token supply is fixed to ensure a predictable and finite token availability.
**- Token utility:** Tofu token will exclusively serve as a reward mechanism within the DApp ecosystem. Users can earn Tofu tokens by actively participating in the community, contributing valuable content, providing insightful comments, or receiving positive feedback from other users.
**- Token Rewards:** Users who accumulate Tofu tokens can redeem them for various services offered within the DApp. These services can include accessing premium content, receiving personalized assistance or support, obtaining exclusive features, or participating in specialized activities or events.
**- Token Exchange:** Tofu tokens will not be tradable on external cryptocurrency exchanges. Instead, users can only use the tokens within the DApp to access the designated services. This ensures that the tokens retain their value within the ecosystem and encourages users to actively participate to earn more tokens.
**- Token Burns:** To maintain the value of Tofu tokens, a portion of the tokens may be periodically burned, i.e., permanently removed from circulation. This is done by the DApp's smart contract, keeping the token supply stable over time and potentially keeping the value of the remaining tokens constant. In cases where a user stops using the DApp and deactivates their account, or is no longer affiliated with the university, their tokens are burned so as not to keep unused tokens in the network.
**- Token Grants:** Tokens are allocated to specific individuals, organizations, or community members as a reward, incentive, or support for their contributions to the project.

Fig. 6: Data Sharing on IPFS (Adapted from [29])

The distribution model chosen for a token project depends on factors such as the project's goals, regulatory considerations, funding needs, community engagement strategy, and desired token economics. Therefore, to ensure a fair and transparent token distribution, we have devised a comprehensive distribution model based on these considerations. The allocation of tokens is designed to benefit the different stakeholders involved in the project, including investors, team members, advisors, and community members. We have allocated tokens based on predetermined percentages to ensure equity and align incentives. Furthermore, the distribution model takes into account the contributions of community members through the accumulation of Tofu tokens. Users who actively contribute and accrue Tofu tokens may be eligible for additional token allocations or rewards, fostering a sense of fairness and recognition within the community.
#### III-B2 Access Control
Access controls refer to the mechanisms and rules in a smart contract that govern who can perform certain actions or access specific functions or data within the contract. Access controls are implemented through a set of defined functions and rules written in the smart contracts to ensure that only authorized individuals or entities can execute specific operations, modify contract state, or access sensitive information. Access controls do not affect the decentralization of the network, as they only govern sensitive information that concerns the developers and owners of the blockchain. They are crucial for maintaining the security, integrity, and proper functioning of a smart contract, and they help prevent unauthorized actions or malicious behavior that could compromise the contract or its associated assets. By enforcing access controls, we can mitigate risks and ensure that only trusted parties can perform critical operations or access sensitive data.
In addition to the ownership role, we implement Role-Based Access Control (RBAC), which assigns specific roles and privileges to certain user accounts, such as Teaching Assistants (TAs) and Professors, who have the authority to rate and validate answers provided by all users. This ensures that contributions from these designated individuals hold significance and are recognized within the BlockCampus community. By implementing access control mechanisms in the smart contracts, we provide a secure and controlled environment for users, ensuring that only authorized individuals can perform specific actions or access certain features of the DApp, which helps maintain the integrity of the system and prevents unauthorized use or malicious activities.
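The enforcement itself lives in the Solidity contracts; the following Python sketch only mirrors the RBAC check conceptually, with hypothetical roles, addresses, and rating range.

```python
# Hypothetical role registry; on-chain this would be a mapping guarded by a Solidity modifier.
ROLES = {"0xProfessorAddr": "PROFESSOR", "0xTaAddr": "TA", "0xStudentAddr": "STUDENT"}
RATING_ROLES = {"PROFESSOR", "TA"}

def rate_answer(caller: str, answer_id: int, rating: int) -> None:
    """Only accounts holding a rating role may rate answers."""
    if ROLES.get(caller) not in RATING_ROLES:
        raise PermissionError("caller lacks the rating role")
    if not 0 <= rating <= 5:
        raise ValueError("rating out of range")
    print(f"answer {answer_id} rated {rating} by a {ROLES[caller]}")

rate_answer("0xProfessorAddr", answer_id=42, rating=5)   # accepted
try:
    rate_answer("0xStudentAddr", answer_id=42, rating=5)  # rejected
except PermissionError as err:
    print(err)
```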
### _User Management and Authentication_
User management and authentication are integral parts of the BlockCampus DApp, ensuring secure access and personalized experiences for users. The system supports two main user categories: students and academic staff. **For students,** functions such as registration, login, asking questions, answering questions, voting, giving awards, and posting are available. Students' reputation, known as Bateekh, is influenced by their question and answer contributions. **For academic staff members,** including professors and teaching assistants, they have similar functionalities to students, but with additional information during registration, such as their academic staff ID and position. User management and authentication are facilitated through the use of usernames and passwords, allowing users to securely access their accounts. On top of that, the blockchain provides a unique address to each user, and this address is included in all transactions within the network. This creates an extra layer of security as any fraudulent activity such as tampering with transactions or data in the blockchain can be detected and traced back to the attacker who will be penalized accordingly. By implementing robust user management and authentication mechanisms, BlockCampus ensures the privacy and security of user information, while providing a seamless and tailored experience for both students and academic staff.
### _Rewarding Mechanism_
As the project goes into production and expands, the rewarding mechanism or token distribution can be adjusted based on the needs and feedback received from the platform's users and stakeholders. Launching provides an opportunity to gather real-world data and insights on user engagement, behavior, and preferences. This information can be used to refine and optimize the rewarding mechanism and token distribution to ensure its effectiveness and alignment with the platform's goals. Users may provide feedback on the current rewarding mechanism, suggesting improvements or highlighting areas where adjustments could enhance their experience. Analyzing user behavior and engagement patterns can also benefit the developers by providing information about the effectiveness of the current rewarding mechanism. It may reveal areas where adjustments can be made to better incentivize desired user actions or promote specific behaviors. As the platform grows and attracts a larger user base (if, for example, other universities join the platform), the initial token distribution model may need to be adjusted to accommodate the increased demand for rewards. Scaling the rewarding mechanism ensures that the platform can sustainably support a growing user community.
## IV Implementation
### _Blockchain Network_
One of our goals was to create a private blockchain network to deploy the smart contracts on. Creating a private Ethereum network built on Besu Hyperledger involves setting up a local blockchain environment that allows for secure and private transactions within a specific network of participants. This private network can be utilized for various purposes such as testing and development of our DApp.
To create a private Ethereum network with Besu Hyperledger, several steps need to be followed. Firstly, the network participants must set up the required infrastructure, including installing the Besu client and configuring the network parameters. Besu is an open-source Ethereum client developed by Hyperledger that offers enhanced privacy features and enterprise-grade capabilities.
Once the infrastructure is in place, the network's genesis block needs to be defined. The genesis block serves as the
initial state of the blockchain and includes information such as the network ID, chain ID, account balances, and other network-specific configurations. It can be customized to meet the specific requirements of the private network.
Next, the network participants need to configure the nodes to join the private network. Each participant should have a node running the Besu client and connect to the network using the appropriate network ID and bootnodes. Bootnodes act as the initial network entry points and help nodes discover and connect with each other.
Once the nodes are connected, the private network is ready for use. Participants can interact with the network by deploying and executing smart contracts, sending transactions, and accessing the blockchain's data. The private nature of the network ensures that only authorized participants can access and participate in the blockchain's activities.
By creating a private Ethereum network with Besu Hyperledger, we can leverage the benefits of blockchain technology while maintaining control over the network's privacy and configuration. This allows for experimentation, development, and deployment of decentralized applications in a secure and controlled environment, suited for the specific needs of the participants.
### _TechTools_
The development of the DApp involved the utilization of various tools and technologies to ensure efficient and secure implementation. This section provides an overview of the key tools and technologies employed throughout the development process.
**Hardhat:** A popular development framework for Ethereum, Hardhat was chosen as the primary tool for compiling, testing, and deploying smart contracts. It offers a comprehensive suite of developer-friendly features, including built-in testing capabilities, a robust plugin system, and integration with popular Ethereum networks. Its extensive functionality and strong community support made it an ideal choice for the development workflow.
**React:** For the front-end application, the React framework was employed due to its flexibility, modularity, and the extensive ecosystem of libraries and tools. React enables the creation of interactive user interfaces, providing a smooth and responsive user experience. Additionally, its component-based architecture facilitates code reusability and maintainability, contributing to efficient development practices.
**Remix IDE:** Remix IDE, a web-based integrated development environment (IDE), played a vital role in testing the smart contracts. With its intuitive interface and powerful debugging capabilities, Remix IDE enabled developers to deploy and interact with contracts in a simulated environment. Its comprehensive testing framework facilitated thorough unit testing and ensured the reliability and correctness of the smart contract code.
**Besu Hyperledger:** To establish a private blockchain network, Besu Hyperledger was utilized. Besu, an Ethereum client developed by Hyperledger, offers enterprise-grade features and enhanced privacy mechanisms. It enables the creation of permissioned networks with configurable consensus algorithms and network parameters. By leveraging Besu Hyperledger, the DApp ensured a secure and controlled environment for transaction processing and data storage.
**MetaMask:** MetaMask, a widely adopted wallet and browser extension, served as the primary tool for user wallet management and interaction with the DApp. MetaMask provided a convenient and secure way for users to access and interact with the Ethereum network, sign transactions, and manage their digital assets. Its compatibility with multiple browsers and user-friendly interface made it a popular choice among Ethereum users.
**Infura:** Infura played a crucial role in providing reliable and scalable access to the Ethereum Virtual Machine (EVM). By leveraging Infura's RPC endpoint, the DApp was able to connect to the Ethereum network without the need to run a local node. Infura's infrastructure allowed for seamless deployment and interaction with smart contracts, ensuring high availability and reliability.
The development of the DApp involved a comprehensive stack of tools and technologies. The combination of these tools and technologies along with the features and services of the blockchain technology formed a robust foundation for the development and deployment of the DApp.
### _Challenges and Optimizations_
Throughout the development process, several challenges were encountered, primarily stemming from the deprecation of certain frameworks and tools. These deprecations posed significant hurdles that required careful mitigation strategies to overcome. One notable challenge was the deprecation of frameworks and tools that were initially relied upon for certain development tasks. For instance, some testnets were discontinued, which made migration to other testing environments a necessity. As another testnet example, the Goerli testnet (the most common testnet) only provided test ether to users who held real Ether on the Ethereum mainnet, a recent decision made due to abuse of the Goerli faucets. This migration resulted in compatibility issues and error-prone deployments, and required the exploration and adoption of alternative tools and approaches to accomplish these tasks effectively.
The deprecation of Microsoft Azure Blockchain-as-a-Service (BaaS) in 2021 also posed a significant challenge in creating a private blockchain network and hosting it on the cloud. Azure BaaS provided a convenient and streamlined platform for deploying private blockchains, but its deprecation necessitated the exploration of alternative cloud providers and infrastructure solutions. We explored the ConsenSys Quorum blockchain service, which was very similar to Azure's BaaS, but like its counterpart, it was deprecated in March 2023. We therefore decided to use the Sepolia testnet.
As with any complex development project, the process was not without its fair share of errors and troubleshooting. The
integration of multiple tools and frameworks introduced potential points of failure and compatibility issues. Addressing these errors required meticulous debugging, error handling, and collaboration among the development team. Thorough testing and meticulous code review were essential in identifying and resolving these issues promptly.
The development of the DApp involved the utilization of various tools and technologies that were relatively new to the development team. This introduced a steep learning curve and necessitated acquiring new knowledge and skills to effectively leverage these tools. To comprehend the underlying mechanisms of blockchain technology and the secure nature of Ethereum, the development team embarked on a journey of studying cryptography. Understanding concepts such as encryption, digital signatures, and cryptographic hashing algorithms was crucial to grasp the security principles of blockchain and effectively implement them within the DApp. The learning process involved rigorous self-study, consultation of academic resources, and engaging in online courses to gain a solid foundation in cryptography and blockchain fundamentals.
## V Testing
Blockchains are immutable: no data within the network can be changed or deleted, which is one of the key features of blockchain technology. Consequently, modifying any data would be very difficult and would require the consensus of the whole network. For these reasons, testing a smart contract before deploying it is a requirement to ensure that the contract satisfies requirements for reliability, usability, and security. An ideal approach to testing is to use a local Ethereum environment, which eases the process of testing and catching any minor or major flaws in the smart contract. Remix IDE, a tool used for testing contracts locally without affecting the mainnet, is used in our case; it provides all the tools needed for comprehensive testing and a compatible environment focused on handling and discovering bugs within the system. Testing is a crucial step in development: upgrading or editing a contract on the blockchain, if it is possible at all, is error-prone and can cause massive losses for users. Before writing the smart contracts, we carried out in-depth research into the working mechanism of smart contracts and the underlying blockchain, as well as understanding the requirements of the DApp, how users will access and use its functionalities, and the assumptions regarding contract execution; we then conducted unit tests to validate those assumptions and implemented any security or reliability measures necessary for a reliable service. This is particularly useful for running happy-path tests that determine whether functions in a contract return the correct output for valid user inputs. Another effective approach is to go beyond testing expected user behavior and include negative tests that assess the failure of functions and how users could manipulate or abuse the system in their favor (for example, cheating or creating fake accounts to upvote their own posts and comments).
## VI Discussion and Conclusions
We presented the design, implementation, and evaluation of the BlockCampus DApp, a blockchain-based decentralized application tailored for E-JUST university students and academic staff. The DApp serves as a centralized hub for academic discussions, knowledge sharing, and community engagement within the E-JUST ecosystem.
By leveraging the Ethereum blockchain, the BlockCampus DApp addresses the challenges of fragmented knowledge base, lack of incentives, and transparency in tracking user reputation. The blockchain architecture ensures transparency, immutability, and decentralization, enhancing trust and security. The consortium blockchain model, with trusted entities as participants, provides privacy, scalability, and transaction throughput while maintaining control over the network and data. The PoA consensus mechanism ensures fast transaction confirmation times and higher throughput.
The token creation and tokenomics model incentivize adoption, promote network growth, and reward users' contributions. Bateekh and Tofu tokens provide tangible value within the ecosystem, recognizing and rewarding user participation and fostering community engagement.
The implementation of the BlockCampus DApp has the potential to foster a vibrant and collaborative community within the academic environment. By harnessing the power of blockchain technology, E-JUST students and academic staff can benefit from enhanced knowledge sharing, collaboration, and innovation. The transparent and decentralized nature of the DApp promotes trust and facilitates a rewarding academic experience for all participants. Future enhancements to the BlockCampus DApp could include further refining the tokenomics model, expanding the functionality and user base, and integrating additional features to enhance user experience and engagement.
In conclusion, the BlockCampus DApp presents a promising solution to enhance community engagement, knowledge sharing, and collaboration within the academic settings of E-JUST. By leveraging the blockchain technology and implementing a robust tokenomics model, the DApp offers transparency, incentivization, and trust among participants. However, further research, evaluation, and refinement are needed to fully realize the potential of the BlockCampus DApp and its impact on academic communities.
## VII Future Plans
Our project opens up avenues for future research and exploration. Additional studies could focus on the scalability of the DApp architecture, exploring solutions such as sharding or layer-two protocols to accommodate a growing user base and increasing transaction volume [30]. Security audits and vulnerability assessments can be conducted to ensure the robustness and resilience of the DApp against potential attacks.
Moreover, extending the functionality of the BlockCampus DApp to include features such as collaborative document editing, peer-to-peer mentoring, or research collaboration tools could enhance its value proposition and attract a wider range
of users. Integration with other academic systems and platforms, such as learning management systems or academic repositories, can further streamline the academic experience and facilitate seamless knowledge sharing.
The utilization of blockchain-based decentralized applications (DApps) also has the potential to revolutionize the way students are granted ownership of their credentials, such as certificates and diplomas. By harnessing the power of blockchain technology, it becomes possible to bring this innovative approach to educational institutions, making the process of verifying academic qualifications more efficient, streamlined, and transparent.
Traditionally, verifying academic credentials involves complex and time-consuming procedures, relying on manual checks and third-party intermediaries. However, by leveraging the blockchain technology, a tamper-proof and decentralized ledger, the entire verification process can be facilitated. Each academic achievement can be securely recorded on the blockchain, providing an immutable record of a student's accomplishments. With blockchain-enabled DApps, employers, educational institutions, and other stakeholders can effortlessly authenticate the validity of a student's credentials, eliminating the need for cumbersome paperwork and reducing the chances of fraudulent qualifications. Moreover, this system ensures that students retain ownership and control over their academic records, empowering them to share their achievements securely and seamlessly with potential employers or academic institutions worldwide.
The integration of blockchain technology into the realm of education not only enhances the trust and reliability of academic qualifications but also fosters a more accessible and inclusive environment for learners.
## Acknowledgements
We would like to express our sincere gratitude to Dr. Walid Gomaa for his invaluable guidance, expertise, and support throughout the entire research process of this paper. Dr. Gomaa's profound knowledge in the field, as well as his insightful feedback and constructive criticism, have significantly contributed to the quality and depth of this research.
Additionally, we would like to express our gratitude to Eng. Amine Ben Mansour for his valuable contributions and assistance throughout the project. His expertise, collaboration, and dedication played a crucial role in the successful completion of this research. His experience in Blockchain development enabled us to achieve our research goals.
|
2304.08663
|
Continuous Versatile Jumping Using Learned Action Residuals
|
Jumping is essential for legged robots to traverse through difficult
terrains. In this work, we propose a hierarchical framework that combines
optimal control and reinforcement learning to learn continuous jumping motions
for quadrupedal robots. The core of our framework is a stance controller, which
combines a manually designed acceleration controller with a learned residual
policy. As the acceleration controller warm starts policy for efficient
training, the trained policy overcomes the limitation of the acceleration
controller and improves the jumping stability. In addition, a low-level
whole-body controller converts the body pose command from the stance controller
to motor commands. After training in simulation, our framework can be deployed
directly to the real robot, and perform versatile, continuous jumping motions,
including omni-directional jumps at up to 50cm high, 60cm forward, and
jump-turning at up to 90 degrees. Please visit our website for more results:
https://sites.google.com/view/learning-to-jump.
|
Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots
|
2023-04-17T23:28:32Z
|
http://arxiv.org/abs/2304.08663v1
|
# Continuous Versatile Jumping Using Learned Action Residuals
###### Abstract
Jumping is essential for legged robots to traverse through difficult terrains. In this work, we propose a hierarchical framework that combines optimal control and reinforcement learning to learn continuous jumping motions for quadrupedal robots. The core of our framework is a stance controller, which combines a manually designed acceleration controller with a learned residual policy. As the acceleration controller warm starts the policy for efficient training, the trained policy overcomes the limitation of the acceleration controller and improves the jumping stability. In addition, a low-level whole-body controller converts the body pose command from the stance controller to motor commands. After training in simulation, our framework can be deployed directly to the real robot, and perform versatile, continuous jumping motions, including omni-directional jumps at up to 50cm high, 60cm forward, and jump-turning at up to 90 degrees. Please visit our website for more results: [https://sites.google.com/view/learning-to-jump](https://sites.google.com/view/learning-to-jump).
Keywords: Legged Locomotion, Robot Agility, Optimal Control, Reinforcement Learning
## 1 Introduction
Jumping can greatly extend the capabilities of legged robots. Compared to walking, jumping exhibits a long "air phase", where all legs leave the ground at the same time. This air phase enables the robot to traverse through large areas without making contact, which is essential for difficult terrains with large gaps or abrupt height changes. While recent works have greatly improved the speed (Di Carlo et al., 2018; Kim et al., 2019; Margolis et al., 2022) and robustness (Kumar et al., 2021; Agarwal et al., 2022; Miki et al., 2022) of legged robots, most of them focused on standard walking behaviors with continuous foot contacts. Meanwhile, long-distance jumping is still a difficult task, and usually requires manual trajectory design (Li et al., 2022; Park et al., 2021) as well as long periods of offline planning (Chignoli et al., 2022; Winkler et al., 2018).
In this work, we present a hierarchical learning framework for quadrupedal robots to jump continuously, where the jumping direction and distance can be specified online. Continuous jumping has long been a difficult task for legged robots due to complex robot dynamics and frequent, abrupt
contact changes. On the one hand, while optimal control based controllers have achieved robust walking in many quadruped platforms (Di Carlo et al., 2018; Grandia et al., 2019), they usually assume a simplified dynamics model for computation efficiency (Di Carlo et al., 2018), and cannot control the robot pose precisely in highly dynamic motions like jumping (Chignoli et al., 2022). On the other hand, despite recent success in learning for locomotion (Kumar et al., 2021; Margolis et al., 2022; Miki et al., 2022), reinforcement learning (RL) based controllers still require careful reward tuning (Peng et al., 2020) and extended training times to learn jumping motions, due to the non-smooth reward landscape created by abrupt contact changes. Therefore, it can be difficult to use standard control or learning techniques for the jumping task.
Our framework addresses the challenges above, and learns continuous, versatile jumping motions that can be transferred directly to the real world. The core of our framework is a _stance controller_, which computes the desired body pose by summing over the outputs from a manually-designed _acceleration controller_ and a learned _residual policy_. Our design of the stance controller has two major benefits. Firstly, warm-starting the policy training with the acceleration controller reduces noise in the reward landscape, so that the training process converges smoothly and efficiently. Secondly, with the residual policy trained, the robot's performance is no longer limited by the simplified dynamics model used by the acceleration controller, and the stance controller can therefore better stabilize the robot throughout the entire jumping episode. In addition to the stance controller, we also implemented a low-level _whole-body controller_ to convert the body pose command to motor actions. By combining the acceleration controller, the residual policy and the low-level whole-body controller, our framework learns continuous, versatile jumping motions automatically.
We train our framework on a simulated environment of the Go1 quadruped robot from Unitree (Unitree), and test the trained policy directly on the real robot. The trained framework enables versatile jumping motions for the robot, including jumping at different directions and distances (up to 50cm high, 60cm forward), and jump-turning (up to 90 degrees). We then conduct detailed analysis on the behavior of our overall framework, and verify that the combination the acceleration controller and residual policy can learn more stable jumping motions than each individual method. Additionally, we compare our method to end-to-end RL and find that our method is at least 1 order-of-magnitude more data efficient, thanks to the hierarchical setup and the smooth reward landscape from the acceleration controller.
In summary, the contributions of this paper include the following:
1. We propose a hierarchical framework that combines optimal control and reinforcement learning to learn continuous, versatile jumping for quadrupedal robots.
2. The trained framework can be directly transferred to the real robot and achieves continuous jumping motions at substantial height (50cm) and distance (60cm).
3. Our experiments show that the combination of controller and residual policy can learn more stable jumping motions than using either method individually.
## 2 Related Works
**Learning Agile Locomotion:** Recently, researchers have made significant progress in applying reinforcement learning (RL) for quadrupedal locomotion. Using reinforcement learning, the legged robots can adapt to the environment (Kumar et al., 2021; Miki et al., 2022) and learn diverse skills
such as self-righting (Hwangbo et al., 2019; Wu et al., 2022), high-speed walking (Margolis et al., 2022), and goal-keeping (Huang et al., 2022). However, for successful real-robot deployment, RL-based controllers usually require significant effort in imitation learning (Peng et al., 2020), reward shaping (Kumar et al., 2021), and sim-to-real transfer (Tan et al., 2018; Hwangbo et al., 2019; Song et al., 2020), especially for more agile tasks such as jumping. Compared to end-to-end RL approaches, we re-design the RL task to learn a residual action (Johannink et al., 2019) that adds to a manually-designed acceleration controller. As a result, the training process consumes significantly less data and does not require complex reward specification. Moreover, the learned policy can be deployed directly to the real world without additional fine-tuning.
**Optimal Control for Locomotion:** With recent advancements in actuator design and numerical optimization, optimal-control based controllers have enabled high-speed and robust locomotion (Di Carlo et al., 2018; Kim et al., 2019; Grandia et al., 2019) for legged robots. For computation efficiency, these controllers usually simplify the robot dynamics model, such as the single rigid body model with massless legs (Di Carlo et al., 2018; Li et al., 2022, 2022), so that the optimization can run in real-time. However, these simplified models usually cannot capture the robot's orientation dynamics accurately, especially when the robot is in the air. Therefore, it can be difficult to use them for jumping tasks with long periods of air time. To optimize for more precise jumps, controllers usually need to pre-compute the entire jumping trajectory offline using higher-fidelity models (Chignoli et al., 2022; Winkler et al., 2018). Compared to these approaches, our method does not require offline optimization, jumps continuously, and can respond to different landing positions and orientations.
**Combining control and learning for legged locomotion:** Recently, a number of works have proposed to use reinforcement learning and optimal control jointly for robust and versatile locomotion. A common approach is to set up the controller hierarchically, where a high-level policy outputs intermediate commands, which are converted by a low-level controller into motor actions. While such hierarchical approaches have enabled the robot to walk more efficiently (Yang et al., 2022; Da et al., 2020) and conquer difficult terrains (Xie et al., 2021), they have not yet been demonstrated on more agile tasks such as jumping. We use a similar hierarchical setup, but use a different whole-body controller at the low level for precise tracking of body acceleration, and achieve continuous, versatile jumping on the real robot. Another common approach is to use RL to finetune the controller's outputs. Recently, Bellegarda and Nguyen (2020) demonstrated that deep RL can improve the quality of a single jump in simulation. Our work extends their result by using RL to optimize the controller's performance in the real world, and achieves continuous, omni-directional jumping on the real robot, as well as jump turns.
## 3 Overview
We design a hierarchical framework to learn continuous robot jumping (Fig. 1). The timing of each jump is regulated by an open-loop time-based _contact scheduler_, which subsequently specifies the desired state of each leg (_swing_ or _stance_). We adopt the pronking gait, where all legs enter and leave the ground at the same time, and set the duration of each jump to be 1 second, which consists of 0.5s of stance time and 0.5s of swing time. Based on the output from the contact scheduler, we use separate control strategies for swing and stance legs, where the _stance controller_ controls the desired body pose and _swing controller_ controls the foot positions. At the low level, a whole-body
controller converts the pose command from the swing or stance controller to motor commands. We also implemented a Kalman Filter based state estimator to estimate the position and velocity of the robot. We run the entire pipeline at 500Hz so that the robot can respond quickly to external perturbations.
As a critical component of the entire control process, the stance controller needs to achieve sufficient lift-off speed for each jump, while maintaining body stability throughout the entire jump. To achieve that, we compute the pose command of the stance controller as the sum of a manually designed _acceleration controller_ and a learned _residual policy_, where the acceleration controller computes base acceleration to ensure sufficient lift-off velocity based on simplified robot dynamics, and the residual policy fine-tunes the controller's action to ensure stability. Since the robot is under-actuated in the air, we use a simple trajectory controller for swing legs, which computes the desired landing position of each feet according to the Raibert Heuristic (Raibert, 1986).
## 4 Learning Residual Policy for Continuous, Versatile Jumping
### Reinforcement Learning Preliminaries
The reinforcement learning (RL) problem is represented as a Markov Decision Process (MDP), which consists of a state space \(\mathcal{S}\), an action space \(\mathcal{A}\), transition probability \(p(s_{t+1}|s_{t},a_{t})\), reward function: \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\), an initial state distribution \(p_{0}(s_{0})\) and an episode length \(T\). We aim to learn a policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\) that maximizes the expected cumulative reward over the entire episode, which is defined as:
\[J(\pi)=\mathbb{E}_{s_{0}\sim p_{0}(\cdot),a_{t}\sim\pi(s_{t}),s_{t+1}\sim p( \cdot|s_{t},a_{t})}\sum_{t=0}^{T}r(s_{t},a_{t}) \tag{1}\]
### Environment setup
Figure 1: Our hierarchical learning framework controls the jumping of the Go1 robot from two different levels.

We design our environment so that the robot can jump in several directions within each episode. More specifically, each episode consists of 5 jumps, where the robot jumps in-place, 1m forward, 0.5m backward, 0.2m to the left, and 0.2m to the right, in that order. Before each jump, the environment records the robot's current position and computes the desired landing position based on the jumping directions. This information is then supplied to the residual policy to compute the desired pose commands, which are subsequently sent to the low-level whole-body controller. The details of our environment are as follows:
**State Space:** The state space includes the robot state and task information. More specifically, the robot state includes the position, orientation, linear velocity, and angular velocity of the base, as well as the foot positions. The task information includes the robot's distance to the desired landing position, and the remaining time in the current locomotion cycle.
**Action Space:** Since the low-level whole-body controller is effective for precise tracking of the body pose (Kim et al., 2019), we directly command the desired body pose in our environment. In the original work by Kim et al. (2019), the whole-body controller takes in an 18-dimensional vector specifying the position, velocity, and acceleration for each of the base's 6 DoFs. To reduce the search dimension, we design the action space to specify one command for each DoF of the base, and compute the other two dimensions of each DoF heuristically. More specifically, the action space specifies the desired linear _acceleration_ (3-dimensional) and angular _acceleration_ around the \(z\) axis (1-dimensional), so that the policy can directly control the planar and vertical movements of the body, as well as turning, which can change rapidly within each jump. The action space specifies the desired _position_ for the remaining two DoFs, namely the body roll and pitch, to avoid unnecessary body oscillations. The remaining commands for each DoF are specified heuristically. See Section 5 for more details.
**Reward:** We design the reward function as the linear combination of alive bonus, distance penalty, orientation penalty and contact consistency penalty:
\[r=r_{\text{alive}}+w_{p}\cdot r_{\text{position}}+w_{o}\cdot r_{\text{ orientation}}+w_{c}\cdot r_{\text{contact}} \tag{2}\]
where the alive bonus, \(r_{\text{alive}}=4\), is a fixed constant that makes the total return positive. The position reward, \(r_{\text{position}}\) is the squared distance between the robot's current position and the desired landing position, normalized by the total desired jumping distance. The orientation reward, \(r_{\text{orientation}}=-(\text{roll}^{2}+\text{pitch}^{2})\) encourages the robot to stay upright. The contact consistency reward \(r_{\text{contact}}=\sum_{i=1}^{4}\mathds{1}(c_{i}\neq\hat{c}_{i})\) is the sum of 4 indicator functions about the contact situation of each leg, where \(\hat{c}_{i}\in\{0,1\}\) is the desired contact of foot \(i\) according to contact scheduler (Section. 3) and \(c_{i}\in\{0,1\}\) is the actual contact of foot \(i\). Intuitively, the contact consistency reward ensures that the robot jumps according to the desired schedule without significant mismatch in lift-off and touch-downs. We use \(w_{p}=1,w_{o}=5,w_{c}=0.4\) in all our experiments.
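A minimal Python sketch of this reward is given below; the signs of the penalty terms and the exact normalization are assumptions based on the "penalty" wording above, and the example inputs are arbitrary.

```python
import numpy as np

W_P, W_O, W_C = 1.0, 5.0, 0.4  # weights from the text

def jump_reward(pos, target, jump_dist, roll, pitch, desired_contacts, actual_contacts):
    """Per-step reward following Eq. (2); penalty terms are taken to enter with a negative sign."""
    r_alive = 4.0
    r_position = -np.sum((pos - target) ** 2) / jump_dist      # normalized squared-distance penalty
    r_orientation = -(roll ** 2 + pitch ** 2)                   # encourage staying upright
    r_contact = -sum(int(c != c_hat)                            # penalize contact mismatches
                     for c, c_hat in zip(actual_contacts, desired_contacts))
    return r_alive + W_P * r_position + W_O * r_orientation + W_C * r_contact

print(jump_reward(pos=np.array([0.9, 0.0]), target=np.array([1.0, 0.0]), jump_dist=1.0,
                  roll=0.05, pitch=0.1,
                  desired_contacts=[1, 1, 1, 1], actual_contacts=[1, 1, 1, 0]))
```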
**Early Termination:** To prevent the learning algorithm from unnecessary explorations in suboptimal regions, we terminate an episode early if the robot falls (i.e. when the base height is lower than 8cm, when the cosine distance between the body's upright vector and the gravity direction is less than 0.6, or when any body part comes in contact with the ground).
### Manually-designed Acceleration Controller
Due to the discrete contact change, the reward landscape in the jumping environment can be highly non-smooth with local minima, which makes it challenging to learn using standard exploration
strategies in RL algorithms. To facilitate learning, we manually design an acceleration controller as the base policy for the environment, and uses reinforcement learning to learn _residual actions_ to finetune the policy's performance. The acceleration controller models the robot body as a single point mass, computes the desired lift-off velocity based on contact timing, and tracks this lift-off velocity using simple heuristics.
**Computing the Lift-off Velocity:** The acceleration controller computes the desired lift-off velocity \(\mathbf{v}_{\text{liftoff}}=(v_{x},v_{y},v_{z},v_{\text{yaw}})\) based on the desired landing displacement \(p_{x},p_{y},p_{\text{yaw}}\) and the swing time \(t_{\text{swing}}\). The planar velocities, \(v_{x}=\frac{p_{x}}{t_{\text{swing}}},v_{y}=\frac{p_{y}}{t_{\text{swing}}},v_{\text{yaw}}=\frac{p_{\text{yaw}}}{t_{\text{swing}}}\), are the average flying speeds required for the robot to land at the desired position and orientation. The vertical velocity, \(v_{z}=\frac{1}{2}gt_{\text{swing}}\), is the minimum vertical velocity for the robot to maintain the desired swing time, where \(g\) is the gravity constant.
**Tracking the Lift-off Velocity:** To track the desired lift-off velocity \(\mathbf{v}_{\text{liftoff}}\), the acceleration controller computes the desired acceleration \(\mathbf{a}_{\text{des}}=\frac{\mathbf{v}_{\text{liftoff}}-\mathbf{v}}{t}\) based on the remaining time in the stance phase \(t\), the desired liftoff velocity \(\mathbf{v}_{\text{liftoff}}\) and the current CoM velocity \(\mathbf{v}\). The acceleration controller then either executes this acceleration \(\mathbf{a}_{\text{des}}\), or moves the robot to a preparation position, based on an estimation of the lift-off position (Fig. 2). More specifically, the acceleration controller assumes the robot as a point-mass, and computes the body's CoM position at lift-off time based on the current pose and desired accelerations. If the lift-off position violates kinematics limits (e.g. if the base is so high that foot contact cannot be maintained), the controller computes an alternative acceleration that moves the robot to a low-standing preparation position. Otherwise, the controller outputs the desired acceleration \(\mathbf{a}_{\text{des}}\). To simplify computation, we approximate the feasible CoM positions as a bounding box around the robot's current CoM.
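A minimal sketch of the two computations above, omitting the kinematic-limit check and the preparation pose, with illustrative numbers:

```python
import numpy as np

G = 9.81  # gravity constant

def liftoff_velocity(p_x, p_y, p_yaw, t_swing):
    """Desired lift-off velocity (v_x, v_y, v_z, v_yaw) for a given landing displacement."""
    return np.array([p_x / t_swing, p_y / t_swing, 0.5 * G * t_swing, p_yaw / t_swing])

def desired_acceleration(v_liftoff, v_current, t_remaining):
    """Constant acceleration that reaches v_liftoff by the end of the stance phase."""
    return (v_liftoff - v_current) / t_remaining

v_des = liftoff_velocity(p_x=0.6, p_y=0.0, p_yaw=0.0, t_swing=0.5)   # e.g. a 60 cm forward jump
a_des = desired_acceleration(v_des, v_current=np.zeros(4), t_remaining=0.25)
print(v_des, a_des)
```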
### Training the Residual Policy
Figure 2: The acceleration controller decides whether to execute the desired acceleration command \(\mathbf{a}_{\text{des}}\) based on the estimated lift-off position, which is computed by numerical integration. If the lift-off position (dark red dot) is within the approximated kinematics limit (dashed square), the controller executes \(\mathbf{a}_{\text{des}}\). Otherwise the controller moves the robot to a low-standing preparation pose.

To improve the performance of the acceleration controller, we train a policy to add an action residual to the acceleration controller's outputs. We represent our policy as a neural network with 1 hidden layer of 256 units and tanh activation. We train our policy using Augmented Random Search (ARS) (Mania et al., 2018), a simple, parallelizable evolutionary algorithm that has been successfully applied to locomotion tasks (Iscen et al., 2018; Tan et al., 2011, 2016; Geijtenbeek et al., 2013). We chose ARS because it explores in the policy parameter space and does not directly inject noise into action outputs. Moreover, ARS does not require accurate estimation of the value function, which can be challenging for hierarchical tasks (Yang et al., 2022) due to their non-Markovian nature.
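For reference, a bare-bones sketch of one basic ARS update is given below; the full algorithm adds refinements such as observation normalization, reward scaling, and top-direction selection, and all hyperparameters and the toy rollout here are placeholders.

```python
import numpy as np

def ars_step(theta, rollout, n_dirs=8, step_size=0.02, noise=0.03):
    """One basic ARS update: perturb parameters, compare +/- rollouts, move along the average."""
    grad = np.zeros_like(theta)
    for _ in range(n_dirs):
        delta = np.random.randn(*theta.shape)
        r_plus = rollout(theta + noise * delta)    # episode return with positive perturbation
        r_minus = rollout(theta - noise * delta)   # episode return with negative perturbation
        grad += (r_plus - r_minus) * delta
    return theta + step_size / n_dirs * grad

# Toy objective standing in for the episode return of the jumping environment
rollout = lambda th: -np.sum((th - 1.0) ** 2)
theta = np.zeros(4)
for _ in range(300):
    theta = ars_step(theta, rollout)
print(theta)  # approaches the optimum at 1.0
```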
## 5 Low-level Whole-body Controller
At the low-level, we use a whole-body controller (WBC) to convert the body and foot pose commands into motor actions. Our implementation is based on the work by Kim et al. (2019) with a few modifications. We briefly summarize the controller design here. Please refer to the original work for further details.
**Interface with Stance Controller:** In summary, WBC takes in an 18-dimensional vector that specifies the desired pose \(\mathbf{p}_{\text{body}}\), velocity \(\mathbf{v}_{\text{body}}\), and acceleration \(\mathbf{a}_{\text{body}}\) for each of the 6 DoFs of the robot _body_, as well as the foot swing positions \(\mathbf{p}_{\text{foot}}\). The foot swing positions \(\mathbf{p}_{\text{foot}}\) are directly specified by the swing controller (Section 3). The stance controller (Section 4.2) specifies 4 out of the 6 dimensions of the base accelerations \(\mathbf{a}_{\text{body}}\) (3 linear accelerations and the angular acceleration around the \(z\) axis), as well as 2 out of the 6 dimensions of the base pose \(\mathbf{p}_{\text{body}}\) (roll and pitch). For the remaining commands, we set the desired body pose \(\mathbf{p}_{\text{body}}\) to the current body pose, the desired linear body velocity to the current velocity, the desired angular body velocity to 0, and the desired angular accelerations around the \(x,y\) axes to 0.
Computation of Motor CommandsWBC computes an impedance command that specifies the desired position \(\bar{\mathbf{q}}\), velocity \(\dot{\bar{\mathbf{q}}}\) and torque \(\bar{\mathbf{\tau}}\) for each _motor_. The applied torque \(\mathbf{\tau}\) is the sum of the desired feed-forward torque and the PD feedback:
\[\mathbf{\tau}=k_{p}(\bar{\mathbf{q}}-\mathbf{q})+k_{d}(\dot{\bar{\mathbf{q}}}-\dot{\mathbf{q}})+\bar{\mathbf{\tau}} \tag{3}\]
where \(\mathbf{q},\dot{\mathbf{q}}\) are the current position and velocity of each motor, and \(k_{p},k_{d}\) are fixed gains. To compute the motor command, WBC first applies an inverse kinematics algorithm, which computes the desired position \(\bar{\mathbf{q}}\) and velocity \(\dot{\bar{\mathbf{q}}}\) for each motor in order to move the robot to the desired body position \(\mathbf{p}_{\text{body}}\) and velocity \(\mathbf{v}_{\text{body}}\). After that, WBC computes the additional motor torque \(\bar{\mathbf{\tau}}\) required to achieve the desired base accelerations \(\mathbf{a}_{\text{body}}\), based on the full rigid-body dynamics model of the robot.
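A direct transcription of Eq. (3) is shown below; the gain values are arbitrary placeholders, not the gains used on the robot.

```python
import numpy as np

def motor_torque(q, dq, q_des, dq_des, tau_ff, kp=100.0, kd=2.0):
    """Applied joint torque from Eq. (3): feed-forward torque plus PD feedback.

    q, dq:         measured motor positions and velocities (12,)
    q_des, dq_des: desired motor positions and velocities from inverse kinematics
    tau_ff:        feed-forward torque from the rigid-body dynamics model
    kp, kd:        fixed feedback gains (placeholder values)
    """
    return kp * (np.asarray(q_des) - q) + kd * (np.asarray(dq_des) - dq) + tau_ff
```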
## 6 Results
### Experiment Setup
We test our framework on a Go1 robot from Unitree (Unitree), which is a small-scale, 15kg quadrupedal robot with 12 degrees of freedom. We build the simulation environment of Go1 using Pybullet (Coumans and Bai, 2016), and implement the entire control pipeline, including the state estimator, low-level WBC, the jump controller and the residual policy, in Python. We train the residual policy on a desktop computer with a 16-core CPU, where training takes around 3 hours to complete.
### Continuous, Versatile Jumping
We test the performance of our learned framework on a series of jumping tasks, where each task specifies either a different desired jumping direction or a desired turning rate (Fig. 3). Although we train the framework in only 4 jumping directions, the resulting controller interpolates between them and jumps in intermediate directions (Fig. 3, left). The framework also enables the robot to jump and turn _simultaneously_, achieving an average turning rate of about 3.5 rad/s (Fig. 3, right). For omni-directional jumping, we also notice that the controller tends to overshoot for forward jumps and undershoot for backward jumps. This is likely due to the asymmetric leg design of the robot platform, which generates higher accelerations in the forward direction than in the backward direction.
### Transfer to the real robot
We deploy the learned residual policy directly to the real world without additional finetuning. Thanks to the robustness of the low-level WBC, our framework completes several high jumps in the real world, including jumping in multiple directions and turning (Fig. 4). Please visit the website for videos. The robot achieves a maximum jumping height of around 50cm, a forward jumping distance of around 60cm, and a maximum turning rate of 90 degrees per jump. Note that the jumping performance in the real world is slightly lower than in simulation. We hypothesize that this is a result of unmodeled motor saturation, and plan to investigate further in future work.
### Comparison with baseline policies
To further validate our design choices, we conduct an ablation study by removing either the residual policy or the acceleration controller from our pipeline. We also compare our framework with an end-to-end RL policy that directly outputs motor position commands. The results are summarized in Fig. 5.
Acceleration Controller OnlyWe test the performance of the manually designed acceleration controller without the learned residual policy on the same jumping task. While the controller completes
Figure 3: Different jumping skills learned by our framework. **Left**: Omni-directional jumping. Lines show birds-eye view of CoM trajectory, where the robot starts facing positive \(x\) direction. Crosses show desired landing positions. Colors show whether the direction is seen during training. **Right**: Continuous jump-turn. Plot shows the z-component of angular velocity (approximately the change rate of yaw angle). Shaded area indicates foot contact.
all the jumps without falling over, it achieves a lower reward without the residual policy (Fig. 5). We also find that the residual policy increases the overall success rate of the robot in the real world (Table 1). To further understand the effectiveness of the residual policy, we perform a backward jump with and without the residual policy and plot the base pitch angle in Fig. 6. While the acceleration controller keeps the body pitch closer to the reference in the stance phase, it causes significantly larger pitch angle deviations in the swing phase. This is because the acceleration controller approximates the robot as a point mass and cannot account for changes in body orientation. In contrast, the residual policy learns to correct this pitch deviation by slightly shifting the body pitch in the stance phase, which results in lower overall pitch deviations throughout the entire jump.
Policy OnlyWithout the acceleration controller, the residual policy gets stuck in a local minimum and achieves a low reward (Fig. 5). The resulting policy does not achieve sufficient height for each jump and keeps the legs in contact with the ground most of the time (Fig. 6(a)). We hypothesize that this is a result of the frequent contact changes in the environment, which can create a noisy reward landscape for the algorithm to optimize. In contrast, the policy trained with the controller achieves consistent, long flight times for each jump (Fig. 6(b)).
Figure 4: Footage of the Go1 robot completing several jumps in the real world.
Figure 5: Learning curves of our framework compared with baseline methods. Error bars show 1 standard deviation.
End-to-End RLLastly, we compare our method to a baseline, where we train an end-to-end reinforcement learning policy that directly outputs motor position commands. Similar to previous works (Kumar et al., 2021; Miki et al., 2022; Tan et al., 2018), we reduce the control frequency to 50Hz to avoid high-frequency motor oscillations. We use the same state space and reward function in the environment design, and train the policy using the same ARS algorithm. We find that the E2E policy learns significantly slower, and requires around 10 times more training episodes (Fig. 5). The policy also achieves the lowest overall reward compared to the other policies. While additional efforts in reward shaping and imitation learning (Peng et al., 2020) can improve the performance of the E2E policy, our method creates a simple, data-efficient alternative that does not require pre-recorded demonstration trajectories.
## 7 Conclusion
In this work, we present a hierarchical framework for continuous quadruped jumping, which consists of a manually designed acceleration controller, a learned residual policy, and a low-level whole-body controller. The trained framework can be transferred directly to the real world and is capable of continuous jumping at user-specified angles and distances, as well as jump-turning. One limitation of our work is that it currently only supports the pronking gait, where all 4 legs leave or touch the ground at the same time. Supporting more versatile gaits such as bounding or galloping can potentially increase the height and distance of the jump, which we plan to investigate in future work. Another future direction is to integrate perception into our framework, so that the robot can use its jumping skills to traverse difficult terrain autonomously.
| **Task** | **Controller + Policy** | **Controller Only** |
| --- | --- | --- |
| Forward | 100% | 20% |
| Backward | 80% | 0% |
| Left | 100% | 80% |
| Right | 100% | 60% |
| Jump-Turn | 100% | 0% |

Table 1: Success rates (over 5 trials) with or without the residual policy. A jump is successful if it does not trigger the early termination condition (Sec. 4.2).
|
2306.09304
|
Radars for Autonomous Driving: A Review of Deep Learning Methods and
Challenges
|
Radar is a key component of the suite of perception sensors used for safe and
reliable navigation of autonomous vehicles. Its unique capabilities include
high-resolution velocity imaging, detection of agents in occlusion and over
long ranges, and robust performance in adverse weather conditions. However, the
usage of radar data presents some challenges: it is characterized by low
resolution, sparsity, clutter, high uncertainty, and lack of good datasets.
These challenges have limited radar deep learning research. As a result,
current radar models are often influenced by lidar and vision models, which are
focused on optical features that are relatively weak in radar data, thus
resulting in under-utilization of radar's capabilities and diminishing its
contribution to autonomous perception. This review seeks to encourage further
deep learning research on autonomous radar data by 1) identifying key research
themes, and 2) offering a comprehensive overview of current opportunities and
challenges in the field. Topics covered include early and late fusion,
occupancy flow estimation, uncertainty modeling, and multipath detection. The
paper also discusses radar fundamentals and data representation, presents a
curated list of recent radar datasets, and reviews state-of-the-art lidar and
vision models relevant for radar research. For a summary of the paper and more
results, visit the website: autonomous-radars.github.io.
|
Arvind Srivastav, Soumyajit Mandal
|
2023-06-15T17:37:52Z
|
http://arxiv.org/abs/2306.09304v3
|
# Radars for Autonomous Driving: A Review of Deep Learning Methods and Challenges
###### Abstract
Radar is a key component of the suite of perception sensors used for safe and reliable navigation of autonomous vehicles. Its unique capabilities include high-resolution velocity imaging, detection of agents in occlusion and over long ranges, and robust performance in adverse weather conditions. However, the usage of radar data presents some challenges: it is characterized by low resolution, sparsity, clutter, high uncertainty, and lack of good datasets. These challenges have limited radar deep learning research. As a result, current radar models are often influenced by lidar and vision models, which are focused on optical features that are relatively weak in radar data, thus resulting in under-utilization of radar's capabilities and diminishing its contribution to autonomous perception. This review seeks to encourage further deep learning research on autonomous radar data by 1) identifying key research themes, and 2) offering a comprehensive overview of current opportunities and challenges in the field. Topics covered include early and late fusion, occupancy flow estimation, uncertainty modeling, and multipath detection. The paper also discusses radar fundamentals and data representation, presents a curated list of recent radar datasets, and reviews state-of-the-art lidar and vision models relevant for radar research. For a summary of the paper and more results, visit the website: autonomous-radars.github.io.
Radar, perception, autonomous driving, self-driving cars, electric vehicles, 4D data, deep learning, early fusion, occupancy estimation, transformers, point cloud, uncertainty modeling, multipath
## I Introduction
adds clutter - false positive observations and noise - in radar data [6].
Detection and tracking of agents in these complex scenes requires deep learning models. As with vision and lidar research, training these models requires quality radar datasets, which were scarce until recently. The first acceptable radar dataset, NuScenes [7], was released only in 2019, and higher quality datasets are just now becoming available [8; 9; 10; 11]. This data shortage, combined with the challenges of radar data, has led to a dearth of radar-focused models [12; 13; 14; 15; 16].
As a result, current radar models are often based on lidar and camera detection models [17; 18; 19; 20]. Such models are easily implemented after formatting radar data to match the required input format, such as a pseudo-image [17]. Additionally, perception has conventionally followed a late fusion-based multi-object tracking paradigm [21] that needs full object detection from each sensor model. Employing lidar and camera models simplifies the integration of radar into this paradigm, since these models are designed for object detection.
Lidar and camera models are primarily optimized to learn from optical features and produce outputs that include accurate class, shape, and orientation of objects. However, radars have relatively weak optical features, as evident from their low resolution, sparse, clutter-filled, and highly uncertain data. Instead, radars offer unique features like high resolution range and velocity imaging and early detection. Employing camera and lidar detection models therefore under-utilizes the capabilities of radar, leading to less reliable detection and reduced contribution of radars to perception tasks.
Recently, researchers have begun exploring new types of models to enhance the contribution of radar to perception, including early fusion models, occupancy estimation models, and uncertainty-based models. The "late fusion" approach traditionally used for perception tasks requires full object detection from each sensor's data and lacks access to sensor data features during fusion. This approach makes the final fused outputs less robust. By contrast, early fusion models learn jointly from multi-sensor data features to produce robust unified detection results. Early fusion has gained more traction with the advent of transformers [22], which use cross-attention to learn effectively from dissimilar modalities. Radar-based early fusion models primarily use optical sensor data for class, shape, and orientation inference, and radar data to enhance depth and velocity estimation [23; 24; 25; 26]. Certain radar-based early fusion models also achieve tolerance to adverse weather or lighting conditions by leveraging radar features in innovative architectures and training methods [27; 28].
Both late fusion and early fusion paradigms in perception require object detection for tracking. Detection models, however, can be unpredictable on rare types of agents like an airborne chair or rolling tyre due to their under-representation in the training dataset. Thus, these rare cases make perception susceptible to failure.
To overcome this issue, occupancy estimation models dispense with the need for detection during tracking. These models divide the scene into high-resolution 2D or 3D grids and learn occupancy and velocity features for each cell [29]. They track these features over frames to estimate the drivable area for ego vehicle navigation. Using this approach, occupancy models achieve robustness to all shapes and types of agents. Radar-based occupancy [30; 31] and scene flow [32; 33; 34] estimation models contribute more significantly to this paradigm by providing features rich
Figure 1. An overview of the capabilities and challenges associated with using radars for autonomous driving. Radars excel in getting observations from occluded agents, covering long ranges, and performing in adverse weather conditions. However, their data is often sparse, low resolution, contains clutter, and has high uncertainty. These issues can lead to low confidence in shape estimation and false positive detection by deep learning models.
in velocity, range, and early detection that are tolerant to weather conditions.
As previously mentioned, radar data is far from perfect. The challenges for a radar model increase when considering that object labels for radar data are typically copied from lidar and camera labels [35], which may not be very accurate for the radar data. Inaccuracies can arise in labels due to issues with sensor time-synchronization, the low angular resolution of radar, and differences in sensor imaging physics. Additionally, labels for all objects in the dataset carry the same confidence, irrespective of their distance, size, or visibility. Training models on such a dataset frequently produces incorrect and overconfident detections as the data doesn't capture the inherent uncertainty in labels [36]. Some studies leverage the uncertainty in the data and labels [37] to train probabilistic models [38] that improve detection performance for low-certainty agents.
Radars used in autonomy generally belong to one of two generations. Most are older-generation radars, referred to as _3D radars_, that image the scene in three dimensions - range, velocity, and azimuth angle. These radars provide data that is sparse and low-resolution and lacks height information. On the other hand, a few newer-generation radars, termed _4D radars_, markedly enhance density and resolution compared to 3D radars and provide an elevation angle dimension. These devices are more costly, but their data holds the potential to significantly boost the performance of learned radar models due to stronger features. However, research using superior quality 4D radar datasets, such as the K-radar [8], is still limited, largely due to the prevalence of the 3D radar NuScenes dataset as the benchmark for comparison. We anticipate that the adoption of 4D radars will accelerate as researchers begin to publish their benchmark results on 4D radar datasets.
In this review, we examine state-of-the-art radar models for the areas identified above and discuss challenges and opportunities within these areas. Additionally, we discuss the fundamentals of radar, explore radar data representation, and provide a curated list of both 3D and 4D autonomous radar datasets. Further, as 4D radars make strides towards enhancing the quality of optical-like features, we also review relevant state-of-the-art lidar and vision models that may guide research on generation and training of radar models.
The rest of the paper is organized as follows: Section II offers an intuitive understanding of radar fundamentals and explores various radar data formats and their representations for deep learning. Section III introduces a curated list of recent radar datasets useful for radar deep learning research. Section IV provides an overview of perception and the integration of different areas of radar deep learning within perceptual models. Section V reviews state-of-the-art radar models in both late fusion and early fusion paradigms. Section VI reviews radar-only and multi-sensor fusion models proposed for occupancy and scene flow estimation. Section VII discusses uncertainty issues in radars used for autonomous navigation and reviews methods to reduce uncertainty in both the models and the labels to improve detection accuracy. Section VIII reviews research on reducing multipath clutter in radar data and discusses the prospects of generating synthetic radar data. The review concludes with a summary and our final thoughts.
## II Radar Fundamentals
### A brief history of radar
Radars were invented during World War II primarily for detection and tracking of enemy aircraft from distances of hundreds of kilometers. By the late 20th century, radars had also found numerous applications in civilian sectors such as air traffic control and weather prediction.
As technology advanced, radars evolved into lower-power, higher-resolution devices, opening up new applications, notably in the automotive sector around the turn of the 21st century. These automotive radars mainly support the advanced driver-assistance system (ADAS) in vehicles, enabling functionalities like adaptive cruise control and blind spot detection through classical radar signal processing algorithms.
The advent of the 2005 DARPA Grand Challenge [39] and AlexNet [40] catalyzed the development of autonomous vehicles and associated deep learning research. Given the complexity of driving conditions, autonomous driving requires accurate instantaneous velocity estimation, navigation in adverse weather, and long-range detection. These needs have driven the development of autonomous radar-based sensors, including 3D radars, and more recently, 4D radars. These devices offer a substantial improvement in resolution compared to their automotive counterparts and facilitate the application of deep learning models to their data for detection and tracking in complex environments.
### Radar Physics
A radar is a time-of-flight sensor which provides an X-ray-like view by measuring the range, radial velocity, and angle of agents within its scene. It transmits electromagnetic (EM) waves within a solid angle at regular intervals and, for each transmitted wave, measures the _range_ of agents by computing the delay in receiving the wave reflected from those agents. By taking rapid successive range measurements from waves at regular intervals, it can also compute the rate of range change, i.e., the _radial velocity_ of agents.
Radars use arrays of transmitter and receiver antennas to measure the angles of agents. For a transmitted EM wave, each receiver receives signals at varying delays from the same agent due to different path lengths created by the agent's angle relative to the transmitter orientation. These differences in delay assist in measuring the angles of agents in the scene. The _azimuth angle_ is determined by a horizontal array of antennas, while the _elevation angle_ is determined by a vertical array of antennas. The total size of the array in both dimensions (i.e., the spatial aperture) governs the diffraction-limited angular resolution.
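For orientation, these three measurements follow the standard time-of-flight and phased-array relations (a generic textbook summary, not formulas taken from a specific radar):

\[R=\frac{c\,\Delta t}{2},\qquad v_{rel}=\frac{\lambda f_{D}}{2},\qquad \sin\theta=\frac{\lambda\,\Delta\phi}{2\pi d},\]

where \(c\) is the speed of light, \(\Delta t\) the round-trip delay, \(\lambda\) the wavelength, \(f_{D}\) the measured Doppler shift, \(\Delta\phi\) the phase difference between adjacent receive antennas, and \(d\) the antenna spacing.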
Nonetheless, due to physical constraints on the number of antennas that can be integrated into a standard-sized radar sensor, the angle measurement in radars used for autonomous
navigation tends to be coarse, with a resolution generally exceeding \(1^{\circ}\). Fig. 2(a) illustrates the range (\(R\)), azimuth (\(Az\)), and radial velocity (\(V_{rel}\)) components of a radar observation from an incoming car.
### Overview of Radars Used in Autonomous Navigation
The radars used in autonomous navigation typically operate in the 77-81 GHz frequency band, as allocated by the Federal Communications Commission (FCC). The radar waves in this band have a wavelength of \(\sim\)4 mm, which is larger than fog and dust particles and light rain droplets. Thus, these waves undergo minimal scattering, which makes them immune to adverse weather conditions. These radars provide high-resolution range and velocity imaging and low-resolution angle imaging of the scene and can observe agents early in occlusion and over long ranges. These characteristics, combined with immunity to weather conditions, make radars useful complementary sensors for autonomous perception.
Radars used in autonomous navigation are primarily of two types: frequency-modulated continuous wave (FMCW) [41] and coded [42]. FMCW radars are more common due to their low cost and simple architecture. FMCW radars operate in the frequency domain, translating time-of-flight into frequency shifts in their received signals. A Fast Fourier transform (FFT) is required to translate these signals back into the time domain and obtain range, velocity, and angle measurements.
FMCW radars typically use a _linear chirp_, i.e., a wave of linearly increasing frequency over a fixed duration, as the basic unit of the transmitted signal. The chirp waveform is reflected upon interacting with various agents. The resulting received signal is sampled and FFT-processed to obtain observations from objects at different ranges (known as echoes). The amplitudes of echoes at different range bins indicate the absence or presence of objects, along with their sizes and types. For example, large buses with metallic bodies generate large-amplitude echoes that span several range bins, while pedestrians generate small-amplitude echoes that span only a few bins. The geometric size and reflectivity of any object can be combined into a radar cross section (RCS). The latter can be used to analytically estimate the signal-to-noise ratio (SNR) of the object using the classical radar range equation [43].
Figure 2: An illustration of the working of an FMCW radar. (a) An autonomous driving scene featuring two objects along with their relative ranges, velocities, and azimuth angles. (b) An FMCW radar antenna array composed of 3 transmitting antennas and 4 receiving antennas, forming a \(3\times 4\) array. (c) An FMCW frame with \(P\) transmitted chirps and their corresponding received chirps; and (d) chirps from all three transmitters. (e) The range fast Fourier transform (FFT) of each chirp, leading to a 2D range-Doppler spectrogram after performing a Doppler FFT. (f) The final radarcube obtained after performing an angle FFT.
Rao [44] provides a great tutorial on the working of FMCW radars.
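For reference, the classical radar range equation referred to above is commonly written as (symbols in their usual textbook meaning; the exact form varies slightly between references):

\[\mathrm{SNR}=\frac{P_{t}\,G_{t}\,G_{r}\,\lambda^{2}\,\sigma}{(4\pi)^{3}\,R^{4}\,k\,T_{s}\,B\,L},\]

where \(P_{t}\) is the transmit power, \(G_{t}\) and \(G_{r}\) the antenna gains, \(\lambda\) the wavelength, \(\sigma\) the RCS, \(R\) the range, \(k\) Boltzmann's constant, \(T_{s}\) the system noise temperature, \(B\) the receiver bandwidth, and \(L\) the system losses. The \(R^{-4}\) dependence explains why small, distant agents produce weak echoes.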
During a typical FMCW radar frame of about 100 ms, each transmitter emits multiple chirps. The FFT-processed data from these chirps are stacked to form a 2D matrix, on which an FFT is performed over the chirp dimension to calculate the radial velocity for each observation. Concurrently, data from multiple transmit-receive pairs is combined to generate a synthesized 2D array, as demonstrated in Fig. 2(b). As with the chirp dimension, an FFT is performed along the horizontal antenna dimension after stacking the 2D FFT-processed matrices from the horizontal antennas, thus resolving the azimuth angle (\(Az\)) of the observations. Subsequently, 4D radars perform a similar FFT along the vertical antenna dimension to resolve the elevation angle of the observations.
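At its core, this processing chain is three successive FFTs over the sampled data cube; the following numpy sketch is illustrative only (array shapes, windowing, and calibration are simplified away).

```python
import numpy as np

def radarcube_from_samples(adc):
    """Turn one frame of FMCW samples into a range-Doppler-angle radarcube (sketch).

    adc: complex baseband samples of shape (n_antennas, n_chirps, n_samples),
         stacked over the synthesized (virtual) antenna array.
    """
    # 1) Range FFT over the fast-time samples of each chirp.
    rng = np.fft.fft(adc, axis=2)
    # 2) Doppler FFT over the chirp (slow-time) dimension.
    dop = np.fft.fftshift(np.fft.fft(rng, axis=1), axes=1)
    # 3) Angle FFT over the antenna dimension (azimuth for a horizontal array).
    cube = np.fft.fftshift(np.fft.fft(dop, axis=0), axes=0)
    return cube  # shape: (angle bins, Doppler bins, range bins)
```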
While the frequency domain architecture of FMCW radars renders them simple and affordable, it also leaves them vulnerable to substantial interference from neighboring FMCW radars due to frequency overlap. This problem has propelled development of coded radars, which transmit orthogonally-coded noise-like signals to minimize mutual interference. However, coded radars require high-speed signal processing due to their use of noise-like signals, which in turn makes them expensive and power-hungry and leaves FMCW radars as the more practical choice. Moreover, architecture-level strategies to mitigate interference in FMCW radars have been devised [45], making them more usable in the presence of other FMCW radars.
The first generation of FMCW radars employed in autonomous navigation were 3D devices that offered low resolution in azimuth, resulting in a relatively low detection probability. However, the increasing demand for high-resolution radars led to the development of next-generation 4D radars. These devices offer significantly improved resolution across all dimensions, including elevation, and thereby promise to enhance the contributions of radar to autonomous perception.
### Comparison with Lidar
The new 4D radars represent a significant upgrade over their predecessors, offering 3D spatial and velocity imaging of objects. By contrast, lidars generate a dense, high-resolution 3D point cloud of objects over a 360\({}^{\circ}\) field of view. Traditional lidars, made of delicate spinning lasers, are cost-prohibitive and high-maintenance, thus they are primarily used in ride service-targeted autonomous vehicles and not in general-purpose electric vehicles.
However, the advent of semiconductor chip-based solid-state lidars, although possessing lower resolution and a reduced field of view, has introduced a more affordable alternative that is better suited to general-purpose vehicles. Moreover, a category of FMCW lidars has been developed recently, which can detect the instantaneous radial velocity of objects similar to radars [46]. Hence, a comparison of lidar and radar capabilities seems pertinent. Table 1 provides a comparative analysis of previous-generation 3D radars, new 4D radars, traditional 32-beam lidars, and newer, more affordable solid-state lidars.
As the table suggests, lidars still outperform radars significantly on angular resolution due to the much smaller optical wavelength. However, radars maintain their dominance in velocity estimation and detection in occlusions and adverse weather conditions. Thus, it is likely that autonomous vehicles will utilize both lidars and radars to ensure high levels of safety and reliability during full self-driving. General-purpose electric vehicles will likely utilize radars, sometimes accompanied by solid-state lidars, to balance affordability and high levels of autonomy.
### Radar Data Formats
As previously discussed in Section II.C, FMCW radars store the range, Doppler, and angle observations of agents for each frame in a multi-dimensional FFT-processed matrix, referred to as the _radarcube_. For 3D radars, the size of a radarcube typically reaches \(32\) million cells (\(1024\) range bins \(\times 1024\) velocity bins \(\times 32\) angle bins), and it is even larger for 4D radars. With a 16-bit value for each matrix cell, the data rate required to transfer radarcubes can reach 2.5 Gbps for 3D radars and 10 Gbps for 4D radars. Such large multi-dimensional radarcube sizes are impractical for deep learning, as they imply the need for large convolutions at each layer of the model.
| **Specification** | **4D Radar** | **3D Radar (Near)** | **3D Radar (Far)** | **32-Beam Lidar** | **Solid-State Lidar** |
| --- | --- | --- | --- | --- | --- |
| Max range | 300 m | 70 m | 250 m | 200 m | 200 m |
| FoV (H/V) | 120°/30° | 20°/2° | 20°/2° | 360°/40° | 120°/2° |
| Ang. res. (H/V) | 1°/1° | 4°/4° | 1.5°/2° | 0.1°/0.3° | 0.2°/0.2° |
| Doppler res. | 0.1 m/s | 0.1 m/s | 0.1 m/s | ✗ | ✗ |
| Point density | Medium | Low | Low | High | High |
| All-weather | ✓ | ✓ | ✓ | ✗ | ✗ |
| Cost | Medium | Low | Low | High | Medium |

Table 1: A comparison of the capabilities of radars and lidars. Generally, 3D radars provide two types of low-resolution imaging: near range and far range. 4D radars have substantially higher imaging resolution and more capabilities than their 3D counterparts, thus bridging the performance gap with lidars. Here FoV refers to the field of view, and H/V denotes horizontal/vertical.
Figure 3: A diagram showing different radar data formats obtained from the radar signal processing chain. The chain takes the sampled receive signal as input and is capable of generating a variety of formats, including the radar point cloud data, within the radar itself.
Fortunately, crucial observations within the radarcube are extremely sparse, as agents only occupy a few localized subspaces. Hence, it is common practice to extract useful observations in various compressed data formats for model development. These formats include the range-azimuth-Doppler (RAD) tensor, range-azimuth heatmap, point cloud, and micro-Doppler spectrogram, as illustrated in Fig. 4, with the point cloud being the most common format. These formats are extracted at various stages of the FFT-based signal processing chain in radars, as shown in Fig. 3. We describe these formats below.
#### RAD Tensor
The range-azimuth-Doppler (RAD) tensor is a 3D representation extracted from the radar signal processing chain following the azimuth angle FFT. Typically, the observation values are averaged over the elevation dimension, if present. In addition, the tensor is occasionally downsampled across the three dimensions to reduce its size. Despite being relatively large, the RAD tensor offers the benefit of localizing agents in the top-down view (via range-azimuth), while retaining robust radar features within the high-resolution range and Doppler dimensions. Consequently, this format allows learned detection models to infer object attributes, such as position, speed, and 2D shape, directly from the tensor.
#### Range-Azimuth Heatmap
The range-azimuth heatmap is a 2D radar image (top-down view) obtained by compressing the Doppler dimension of the RAD tensor. Each bin in the heatmap carries the amplitude of the observations, occasionally including averaged Doppler. While the heatmap provides a more manageable data size for deep learning models compared to RAD tensors, it loses the rich radar features essential for high performance of learned detection models. This is due to the compression of fine-grained Doppler features and the high uncertainty associated with low azimuth resolution.
#### Radar Point Cloud
The radar point cloud is a commonly used radar data format that represents the relevant observations within the radarcube as a sparse set of data points in a 3D space. Each point in the cloud corresponds to a peak in the radarcube, obtained using a constant false alarm rate (CFAR) peak extraction algorithm. CFAR classifies a cell in the radarcube as containing a signal (i.e., echo) if its amplitude relative to the average noise floor (computed from surrounding cells) exceeds a certain threshold. The value of this threshold can be adjusted to set the desired false alarm rate, i.e., incorrect classification of noise as a signal of interest. Each point in the cloud contains both 3D location features consisting of range, azimuth, and elevation (if present), and also radar-specific features that include radial velocity, SNR, and measured RCS.
The point cloud format is especially beneficial for object detection and classification models, as it substantially reduces data dimensionality while preserving the essential details of the object and the scene. However, the point cloud format may filter out weaker (i.e., low SNR) observations from agents, which may be undesirable for a high-performance radar deep learning model.
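To make the extraction step concrete, a one-dimensional cell-averaging CFAR along, say, the range dimension can be sketched as follows; the guard/training sizes and the threshold factor are illustrative, and practical implementations run CFAR over multiple dimensions.

```python
import numpy as np

def ca_cfar_1d(power, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR detector along one radarcube dimension (sketch).

    power: non-negative detection powers, e.g. one range profile
    guard: guard cells on each side of the cell under test
    train: training cells on each side used to estimate the local noise floor
    scale: threshold factor controlling the false-alarm rate
    Returns indices of cells declared as detections (point-cloud candidates).
    """
    n = len(power)
    detections = []
    for i in range(train + guard, n - train - guard):
        # Estimate the noise floor from the training cells, excluding guard cells.
        left = power[i - train - guard:i - guard]
        right = power[i + guard + 1:i + guard + 1 + train]
        noise = np.mean(np.concatenate([left, right]))
        if power[i] > scale * noise:
            detections.append(i)
    return np.array(detections, dtype=int)
```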
Figure 4: Illustration of commonly used radar data formats. (a) The range-azimuth-Doppler (RAD) tensor offers high resolutions along the range and Doppler dimensions but has a low resolution along the azimuth dimension. (b) The range-azimuth heatmap, a relatively coarser data format, is obtained by compressing the Doppler dimension of the RAD tensor. (c) The radar point cloud includes strong observations from the radarcube in a sparse point cloud representation. (d) The micro-Doppler spectrogram captures fine-grained temporal variations of object motion features, but does not provide spatial information.
Figure 5: The micro-Doppler spectrograms of a pedestrian (d), a bicycle (e), and a car (f) for their motions outlined in (a-c) [47]. These spectrograms capture distinct motion patterns of different parts of these agents, as illustrated in these plots. During turning, the velocity features of the agents become more complex as their body parts undergo different rapidly changing motions.
#### Micro-Doppler Spectrogram
The micro-Doppler spectrogram is a 2D representation that maps the Doppler frequency shift against time [48]. It is a classical format obtained by performing a short-time Fourier transform after the range FFT in the radar signal processing chain. This format excels in capturing rich motion features of agents. These features are quite distinctive, both for different agents and their different motion types, as illustrated by the micro-Doppler features from moving and turning events of a car, a bicycle, and a pedestrian shown in Fig. 5. Patterns within the micro-Doppler spectrogram can be used to classify the type of object and infer its activities. An interesting application of this format to detect drones and birds was studied in [49].
The 2D spectrogram format does not capture spatial features, which makes it unsuitable for object detection models. However, this format underscores the importance of time-varying Doppler features in radar data for reliable object detection.
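A micro-Doppler spectrogram for a single range bin can be sketched with a short-time Fourier transform of the slow-time signal; the window length and sampling-rate argument below are illustrative (scipy is assumed to be available).

```python
import numpy as np
from scipy.signal import stft

def micro_doppler(range_profiles, range_bin, chirp_rate_hz, win=64):
    """Micro-Doppler spectrogram for one range bin (illustrative sketch).

    range_profiles: complex range-FFT output of shape (n_chirps, n_range_bins),
                    with chirps collected over many consecutive frames
    range_bin:      index of the range bin containing the agent of interest
    chirp_rate_hz:  chirps per second (the slow-time sampling rate)
    """
    slow_time = range_profiles[:, range_bin]
    f, t, spec = stft(slow_time, fs=chirp_rate_hz, nperseg=win, return_onesided=False)
    # Shift so that zero Doppler sits in the middle of the frequency axis.
    return np.fft.fftshift(f), t, np.fft.fftshift(np.abs(spec), axes=0)
```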
### Input formats for deep learning models
While the formats mentioned above are commonly used to represent radar data, object detection and perception models often use different representations for input data. Typically, these models detect and track objects in either a bird-eye view or a perspective view. We discuss these representations for the radar point cloud below.
#### Bird-Eye View (BEV)
The bird-eye view (BEV) offers a top-down view of the scene data. This representation excels in preserving distances between agents in the scene and separating all objects spatially, including those in occlusion. The conversion of radar point cloud data to BEV usually involves accumulating points within each cell of a top-down grid, followed by learning fixed-sized features within each cell to retain most of the properties present in the point cloud data [17].
The BEV projection compresses the spatial dimensions of the data from 3D to 2D, thus reducing the required complexity of the detection models. However, the size of the 2D grid expands quadratically with increase in either the range or the grid resolution of the BEV representation, making the detection models quadratically larger. As a result, this representation is not ideal for long-range detection models.
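The gridding step itself is simple; the sketch below scatters a radar point cloud into a BEV grid with per-cell count, mean radial velocity, and mean RCS. The grid extents, cell size, and chosen features are placeholders rather than the settings of any particular model.

```python
import numpy as np

def radar_points_to_bev(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), cell=0.5):
    """Scatter a radar point cloud into a bird-eye-view grid (illustrative sketch).

    points: array of shape (N, 5) with columns (x, y, z, radial velocity, RCS)
    Returns a grid of shape (H, W, 3): per-cell point count, mean radial
    velocity, and mean RCS.
    """
    h = int((y_range[1] - y_range[0]) / cell)
    w = int((x_range[1] - x_range[0]) / cell)
    grid = np.zeros((h, w, 3))
    counts = np.zeros((h, w))

    for x, y, _, vr, rcs in points:
        if not (x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]):
            continue
        r = int((y - y_range[0]) / cell)
        c = int((x - x_range[0]) / cell)
        counts[r, c] += 1
        grid[r, c, 1] += vr
        grid[r, c, 2] += rcs

    grid[..., 0] = counts
    nonzero = counts > 0
    grid[nonzero, 1:] /= counts[nonzero, None]   # per-cell means
    return grid
```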
#### Perspective View
The perspective view is the view normally seen by human eyes. It is similar to camera images from the ego vehicle, but with an added depth feature (and accompanying velocity feature for radar data) assigned to each pixel. Unlike the BEV, the size of the grid in this view remains constant, regardless of range. This characteristic makes it a suitable choice for long-range detection tasks. Fig. 6(b) shows the perspective view of a radar point cloud.
Despite these advantages, using a perspective view for radar data presents considerable challenges. Behind each pixel containing an observation, radars often register several weaker observations at longer ranges that become hidden in the perspective view. Moreover, radars provide low-resolution perspective-view images due to their low azimuth and elevation resolutions, which further diminishes the effectiveness of this representation.
## 4 Autonomous radar datasets
High-quality datasets are the backbone of cutting-edge deep learning research. The proliferation of computer vision research, for instance, was largely propelled by the availability of the extensive and high-quality ImageNet dataset [51]. Similarly, for autonomous vision and lidar research, high-grade, large-scale datasets like the KITTI robotics dataset [52] have been available since 2013. Over the years, the quality of these datasets has continued to improve, with offerings such as NuScenes [7] and the Waymo Open Dataset [53].
However, the evolution of radar datasets has not been as smooth. The first radar datasets for autonomous applications, the multimodal NuScenes dataset [7], DENSE [54] and RADIATE [35], were released around 2019. But the quality of radar data in these datasets was somewhat lacking, primarily due to the limitations of 3D radars. Additionally, these datasets contained a limited amount of labeled radar data. Fortunately, new and more promising autonomous radar datasets are surfacing with the development of next-generation 4D radars, as required to advance radar deep learning research.
The most useful publicly-available radar datasets are summarized in Table 2. For each dataset, we have noted their targeted tasks, popularity gauged by citation count, release time, a description of the radar data, additional sensor data included, the size of the dataset, its diversity, and the size and type of annotations. This comprehensive summary should serve as a valuable resource when selecting the right dataset for model development. The list is arranged in order of decreasing assessed value.
Figure 6: Different radar data input formats used for deep learning models. The bird-eye view (BEV) (a) represents data in a top-down grid, while the perspective view (b) projects data as seen from an image perspective.

Next, we provide a brief overview of the datasets. K-Radar [8], though recent, is a high-quality radar dataset. It includes dense 4D radar tensors along with 4D radar point cloud data, contains a wide diversity of scenes, and has a large size. However, it only contains data from a front-facing radar. The View-of-Delft (VoD) dataset [9] also offers high-quality 4D radar point cloud data from a front-facing radar, but falls short in terms of scene diversity and size compared to K-Radar.
On the other hand, the most popular radar dataset for research at present is the multimodal NuScenes dataset, which is larger in size compared to K-radar but contains radar point cloud data from previous-generation 3D radars. It has also been extensively used to develop radar-based early fusion models, discussed later. RadarScenes [10] is another large radar dataset worth considering, particularly for models requiring point-wise annotations. Finally, Boreas [11] and Oxford Radar Robotcar [50] offer radar range-azimuth heatmap data, albeit without velocity information, and they have been used for a few object localization and lidar-radar early fusion model studies [55].
## IV High-level overview of perception
At a high level, autonomous perception typically has two steps: detection and tracking. Detection involves identifying objects in the scene from sensor data. For example, identifying other vehicles, bicyclists, pedestrians, and road signs in the images or point clouds captured by the sensors. Tracking, on the other hand, involves following the identified objects over time [56, 57, 58].
Traditional methods have used a late fusion approach for detection in which results from individual sensor detection models are fused, but more recent methods have begun exploring early fusion where raw features from multiple sensors are fused in a model to infer joint detections [59]. Additionally, there are methods that eschew detection altogether for tracking, such as occupancy-based methods. We provide an overview of late fusion, early fusion, and occupancy-based methods below.
### Detection-based tracking
#### IV-A1 Late Fusion
The late fusion approach employs separate full object detection models on each sensor's data to infer the class, distance, shape, and orientation attributes of agents in the scene. Some dissimilarities in the attributes from different sensor models exist, such as cameras providing 2D perspective view detections, lidars 3D BEV detections, and radars 2D BEV detections with associated velocities.
The detections from different models are associated together to infer unified perception detections, commonly using a learned associator [21]. The associator produces the unified detections in a joint 3D space and learns to assign weights to model outputs from sensors based on their respective strengths. For instance, in the unified output, it might assign greater weights to class, shape, and orientation attributes from lidar and camera detections and to velocity attributes from radar detections. The tracker updates agent tracks in the current frame by using tracks from the preceding frame and detections in the present frame, commonly by applying classical models such as the extended Kalman filter [60] or learned models like the unsupervised learned Kalman filter [61].
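To make the tracking step concrete, the sketch below performs one predict-and-update cycle of a linear, constant-velocity Kalman filter for a single BEV track, which is a simplification of the extended Kalman filter trackers cited above. The state layout and noise magnitudes are illustrative assumptions, not values from any cited system.

```python
import numpy as np

def kf_predict_update(x, P, z, dt=0.1, q=1.0, r=0.5):
    """One constant-velocity Kalman filter step for a single BEV track.

    x: state [px, py, vx, vy]; P: 4x4 state covariance; z: associated measured position [px, py].
    q, r: illustrative process and measurement noise scales.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = q * np.eye(4)
    R = r * np.eye(2)

    # Predict the track forward to the current frame.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update the track with the associated detection.
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

A production tracker additionally handles track birth and death, gating, and nonlinear measurement models (e.g., radar range and azimuth), which is where the extended Kalman filter or a learned variant comes in.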
In the early days of autonomous driving, the late fusion approach was widely adopted due to its ease of implementation and iteration. It was challenging to develop unified detection models on disparate data from different sensors. By contrast, the detection models for each sensor produced similar outputs that could be effectively combined using an associator. This approach also facilitated quick upgrades of individual sensor models as more advanced detection models emerged from research.
While the late fusion approach is generally effective, it can struggle to reliably detect agents with weak or partial signals in multiple sensors. Large vehicles such as vans or trucks positioned directly ahead of the ego vehicle are a good example.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
**Dataset** & **Tasks** & **Citations** & **Year** & **Radar Data** & **Other Sensors** & **Data Size** & **Diversity** & **Annotations** \\ \hline
K-Radar [8] & Detection, & 3 & 2022 & 4D Radar Tensors and Point Cloud from Front Radar & Lidars, 4 Frames & 35K Frames & Different Weather and Lighting, Korea & 93K 3D \\ \hline
View-of-Delft (VoD) [9] & Detection, & 22 & 2022 & 4D Radar Point Cloud from Front Radar & 64-Beam Lidar, Stereo Camera & 8.6K Frames & Sunny, Dense Traffic, Refurbes & 123K 3D \\ \hline
NuScenes [7] & Detection, & 2877 & 2019 & 3D Radar Point Cloud from 5 Surround Radars & 32-Beam Lidar, 6 Cameras & 1.3M Frames & Urban Scenes, Boston, Singapore & 1.4M 3D \\ \hline
RadarScenes [10] & Detection, & 52 & 2020 & 3D Radar Point Cloud from 4 Surround Radars & & 118M Points & Urban Scenes, Germany & Point-wise \\ \hline
Boreas [11] & Detection, & 24 & 2022 & Range-Azimuth Heatmap from Top Rotating Radar (No Velocity) & 128-Beam Lidar, Front Camera & 7.1K Frames & Sunny, Toronto, Canada & 320K 3D \\ \hline
Oxford Radar Robotcar [50] & Localization & 249 & 2020 & Range-Azimuth Heatmap from Top Rotating Radar (No Velocity) & 2 32-Beam Lidars, Front Camera & 240K Frames & Varied Lighting, Limited Weather, UK & N/A \\ \hline \hline
\end{tabular}
\end{table} TABLE II: A comparison of recent high-quality radar datasets for autonomous systems
Lidars capture limited points from the side surfaces of these vehicles when they are directly ahead. By contrast, radars yield dense uniform returns from these agents, owing to their large metallic bodies and underbody reflections [62], as discussed in Section VIII. Nevertheless, the associator in a late fusion system may routinely disregard shape features from radar model outputs due to the absence of in-context learning. A similar situation arises when driving in adverse weather, where the associator might continue to underweight the shape features of radar model outputs, even though these features might be more reliable than those from the lidar model output. These challenges of late fusion have led to a growing interest in early fusion, which is discussed below.
#### IV-A2 Early Fusion
One of the main limitations of the late fusion approach is its association of model outputs that have stripped away all data features. By contrast, early fusion models learn object detections jointly from multiple sensor inputs, which give them access to data-level features from the sensors. These models effectively perform the roles of both multiple single-sensor models and the associator within the late fusion pipeline, but in an improved fashion. Early fusion has gained further traction with the advent of transformers, which enable more effective joint learning from dissimilar multi-modal data through their cross-attention mechanism. In fact, recent early fusion models have demonstrated notable improvements in detection performance compared to their single-sensor counterparts [55, 63].
Thus far, the bulk of early fusion research has focused on the fusion of data from two sensors such as camera-radar, lidar-radar, and camera-lidar. Consequently, an associator remains necessary to produce final unified detections for tracking. Fig. 7 presents a high-level overview of both late fusion and early fusion methods in perception, along with an emerging occupancy-based tracking method which we describe below.
### _Occupancy-based Tracking_
A significant drawback of the detection-based tracking paradigm for perception is its dependence on object detection, which can be unreliable for rarely observed agents such as rolling tires, reflective trucks, protruding construction vehicles, or workers emerging from manholes, among others [64]. Given the limitless variety of such uncommon scenarios, there will inevitably be elements that are under-represented in model datasets.
However, object detection is not essential for tracking or ego vehicle navigation. Fundamental principles of robotic autonomy suggest that a safe navigation of the ego vehicle is feasible if the occupancy and velocity of cells in a fine-grained 3D gridded world around the ego vehicle are known. Occupancy-based models aim to do just this by developing an ego-centric, fine-grained 3D grid of the world and determining the occupancy and velocity of each cell. Such a representation allows for the estimation of occluded regions, drivable areas, and moving agents, which in turn facilitates tracking, prediction, and planning for ego vehicle navigation.
Occupancy-based approaches are effective in handling rare objects since they do not rely on object detection priors for tracking. All objects occupy cells that can be tracked and constitute non-drivable regions regardless of their rarity or under-representation in data. Another strength of these methods is their reduced dependency on labeled data for model training as occupancy estimation can be learned from sensor data using semi-supervised methods.
Occupancy estimation is more reliable at short-to-medium ranges than at long ranges due to the increasing uncertainty of sensor data with range. In both occupancy-based and detection-based methods, a significant source of uncertainty is sensor noise in dynamically changing environments, known as _aleatoric heteroscedastic uncertainty_. The regions with high aleatoric heteroscedastic uncertainty in sensor data typically yield low-confidence occupancy predictions. Recognizing and harnessing this uncertainty can enhance the robustness of the occupancy models. We delve deeper into this topic in Section VII.
Figure 7: These high-level block diagrams represent popular paradigms in autonomous perception. Diagram (a) depicts late fusion detection-based tracking, which fuses predictions of individual object detection models. Diagram (b) represents early fusion detection-based tracking, which jointly detects objects via early fusion models and follows this by association and tracking. Diagram (c) illustrates occupancy-based tracking, which predicts fine-grained scene occupancy for estimating the drivable surface area, as well as for object detection and occupancy tracking.
## V Radars in Detection-based Tracking
### _Radar Detection Models_
Radar detection models for perception typically provide a comprehensive set of attributes such as class, distance, shape, orientation, and velocity for each detection via multi-task learning [65]. These attributes are utilized by the late fusion associator to generate unified object detections. Of these attributes, velocity and distance are the primary contributions from radar models to the unified late fusion output. Additional minor contributors include robustness to weather conditions and early detection of dynamic agents in occlusions or over long ranges.
As previously noted, the majority of radar models are influenced by vision and lidar detection models due to challenges associated with the radar data and the scarcity of radar datasets. A handful of studies propose radar-based models, such as [18], which performs semantic segmentation of radar point cloud, and [66], which detects objects from the RAD tensor. These studies, however, utilize small, custom datasets and have a relatively narrow scope.
A high-level architectural block diagram of radar models is shown in Fig. 8. The input radar data, usually in point cloud format, is encoded using feature encoders in one of the popular perception input representations formats, as discussed in Section II.F. These encoded features are then used by a backbone to extract learned features at varying levels, which are subsequently utilized by detection heads for multi-task predictions.
This design follows a modular detector framework, in which existing components can be replaced with components from various state-of-the-art models at relevant stages with minor modifications. We examine such state-of-the-art models and illustrate components from them at the appropriate stages in the architectural block diagram. We begin with a review of feature encoders, followed by an examination of backbones and detection heads for radar point cloud data.
Figure 8: A high-level architectural block diagram of radar models, which typically consist of three stages: a feature encoder, a backbone, and detection heads. The feature encoder processes radar data, often in point cloud format, and generates encoded BEV or perspective view features. The backbone, comprising several convolutional or transformer-based layers, takes in the encoded features and outputs learned features. Detection heads take in the learned features and predict various detection attributes using multiple heads in convolutional models or a feed-forward network (FFN) head in transformer-based models. Notable components from leading studies are shown for each stage.

**Feature encoders.** Point clouds are an unstructured data format and therefore unsuitable for tensor-based convolutional models. PointPillars [67, 17] is a pioneering work that proposed a pillars-based feature encoder to transform point cloud data into a structured BEV pseudo-image representation. It achieves this by employing stacked vertical pillars in a BEV 2D grid, aggregating points that fall within each pillar, and subsequently generating fixed-size learned features for each of these pillars. This results in a 2D pseudo-image, which is used by a 2D convolutional backbone to extract features at various levels. These features are then fed into a single-shot detection head [68] for prediction.
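The pillarization idea can be sketched in a few lines of PyTorch. The module below assumes points have already been grouped and padded per pillar; it learns per-point features, max-pools them within each pillar, and scatters the pooled features into a BEV pseudo-image. The layer sizes, input layout, and class name are illustrative and not the exact PointPillars implementation.

```python
import torch
import torch.nn as nn

class TinyPillarEncoder(nn.Module):
    """Toy pillar feature encoder: per-point MLP + max-pool per pillar, scattered to a BEV pseudo-image."""

    def __init__(self, in_dim=4, feat_dim=32, grid_hw=(200, 200)):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.grid_hw = grid_hw

    def forward(self, pillar_points, pillar_rc):
        """pillar_points: (P, N, in_dim) padded points per pillar (e.g., x, y, rcs, doppler).
        pillar_rc: (P, 2) integer (row, col) BEV location of each pillar."""
        feats = self.mlp(pillar_points)          # (P, N, feat_dim) per-point features
        pooled = feats.max(dim=1).values         # (P, feat_dim)  symmetric max-pool per pillar
        h, w = self.grid_hw
        rc = pillar_rc.long()
        canvas = feats.new_zeros(pooled.size(1), h, w)
        canvas[:, rc[:, 0], rc[:, 1]] = pooled.t()  # scatter pillar features into the pseudo-image
        return canvas.unsqueeze(0)               # (1, feat_dim, H, W) for a 2D conv backbone
```

The same skeleton extends to VoxelNet-style encoders by replacing the 2D pillar grid with a 3D voxel grid and a 3D convolutional backbone.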
Although the feature encoder proposed in PointPillars is lightweight and effective, its pillars lack height differentiation, leading to a significant loss of vertical structure encoded in the elevation of points. VoxelNet [69, 70] is an extension of the PointPillars architecture to 3D which preserves the vertical structure of the data. Its voxel feature encoder converts raw point cloud data into a volumetric representation, or "voxel grid", using several fully connected layers to learn fixed-sized features for each voxel. This process results in a 4D tensor, which is then passed through a convolutional backbone followed by a region proposal network [71] to generate 3D bounding boxes.
VoxelNet achieves enhanced detection performance over PointPillars, particularly for high-resolution and dense point clouds. However, it has a higher memory and computational footprint than PointPillars due to its 3D voxel grid. Thus, PointPillars is often a better choice than VoxelNet for radar data due to its lower density and resolution of points.
PointPillars and VoxelNet are ideal for short and medium-range BEV detections, but not for long-range detections due to the quadratic model complexity associated with range scaling. To overcome this, Range Sparse Net [72, 73] projects points in the perspective view and encodes point features in the perspective view grid. It subsequently uses an image-space backbone to extract multi-level features followed by detection heads to generate predictions.
**Backbones and detection heads.** There are two popular types of backbones in the literature: convolutional and transformer-based, each with its own type of detection heads. Convolutional backbones usually extract multi-scale features by down-sampling successive convolutional layers from their input, after which several deconvolution-based detection heads are implemented for multi-task object detection. By contrast, transformer-based backbones utilize their self-attention mechanism to learn object-level features, which are then used by feed-forward network (FFN) based detectors for inference. Though the transformer-based models generally perform better, their gain comes at the cost of larger model sizes that require more training data.
Initially, convolutional backbones commonly employed a detection head only on the final backbone layer output for predictions. However, this approach was less effective, as the resulting bottleneck failed to use features from previous layers. The feature pyramid network (FPN) [74] addressed this problem by utilizing features from all layers of a backbone. By implementing feature pyramids via top-down and lateral connections in the deconvolution layers and attaching detection heads to all deconvolution layers, an FPN obtains detections at different feature scales. Subsequently, it combines detections via non-max suppression (NMS), which improves accuracy while keeping latency in check. Deep layer aggregation [75] further improved upon FPN by using hierarchical and iterative skip connections from previous layers that deepen the representation and refine its resolution.
The backbones described above have state-of-the-art object detection performance on perspective view image datasets. However, in a BEV autonomous driving scene, objects exhibit significantly less variation in scale and are small relative to the image size. It has been argued that multi-stride backbones for BEV object detection bring little advantage and in fact lead to inevitable information loss due to the coarser resolution of the downsampled layers [76]. Through an experiment using the PointPillars model, the authors demonstrate that single-stride backbones with an appropriately-sized receptive field and without downsampling provide better detection performance than multi-stride backbones. They further argue that transformer backbones are superior to convolutional backbones for BEV detections as they are better at learning features for objects that are small relative to image size. Building on these insights, the authors propose a single-stride sparse transformer (SST) architecture that takes voxelized input and implements a backbone consisting of single-stride SWIN transformer blocks [77] to generate a dense feature map for feeding to the detection heads. SST achieves a substantial improvement in detection performance over state-of-the-art multi-stride backbone-based models on the Waymo Open Dataset.
DETR [78] proposes a transformer encoder-decoder backbone for feature extraction followed by an FFN with bipartite Hungarian matching loss [79] for end-to-end object detection. DETR is simple in design and fully differentiable as it does not use any non-differentiable components, such as NMS, that are commonly used in convolutional architectures.
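DETR's set-based training hinges on an optimal one-to-one assignment between predicted and ground-truth boxes before the loss is computed. The sketch below shows this matching step with SciPy's Hungarian solver, using a simplified cost of class probability plus a weighted L1 box distance; the full DETR cost also includes a generalized IoU term, and the weight here is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_predictions(pred_boxes, pred_cls_prob, gt_boxes, gt_labels, l1_weight=5.0):
    """Hungarian matching between predicted and ground-truth boxes (DETR-style, simplified cost).

    pred_boxes: (num_pred, box_dim); pred_cls_prob: (num_pred, num_classes);
    gt_boxes: (num_gt, box_dim); gt_labels: (num_gt,) integer class ids.
    """
    # (num_pred, num_gt) classification cost: negative probability of the true class.
    cls_cost = -pred_cls_prob[:, gt_labels]
    # (num_pred, num_gt) box cost: L1 distance between box parameters.
    box_cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)
    cost = cls_cost + l1_weight * box_cost
    pred_idx, gt_idx = linear_sum_assignment(cost)  # optimal one-to-one assignment
    return list(zip(pred_idx, gt_idx))
```

Unmatched predictions are trained toward a "no object" class, which is what removes the need for NMS at inference time.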
Masked autoencoders (MAE) [80] provide a self-supervised pre-trained backbone for developing fully trained models with a small labeled dataset. During pre-training, a MAE segments the input into a top-down 2D grid (patches) and masks over \(75\%\) of randomly selected patches. From the unmasked patches, a transformer encoder generates encoded features, which, along with mask tokens, are fed to a simple decoder to reconstruct the input. Afterwards, a detector model consisting of the pre-trained backbone and detection heads can be fine-tuned using a small labeled dataset.
The models discussed so far provide full object attributes predictions, including class, shape, and orientation. However, these attributes are learned from optical features of the data, which are relatively weak in radar data, leading to reduced detection performance on radar data processed by such models.
CenterNet [81, 82] is a notable work that eliminates the need to predict full object attributes upfront. Instead, it models objects as their center points and trains the model parameters to estimate centers as keypoints. The remaining attributes of objects, such as shape, position, and orientation, are regressed subsequently. Hence, CenterNet presents an arguably better-suited architecture for radar-based object detection. CenterNet also proves useful as a detection head for perspective radar detectors, as the object attributes in radar data are even noisier when represented in a low-resolution azimuth-elevation space [83].
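To make the center-point idea concrete, the sketch below renders the kind of training-target heatmap a keypoint head is regressed against: a Gaussian bump at each object center. A fixed Gaussian radius is assumed here for simplicity, whereas the actual CenterNet implementation derives the radius from the object size.

```python
import numpy as np

def render_center_heatmap(centers, heatmap_hw, sigma=2.0):
    """Render a training-target heatmap with a Gaussian bump at each object center.

    centers: iterable of (row, col) grid indices; sigma: simplified fixed Gaussian radius.
    """
    h, w = heatmap_hw
    heatmap = np.zeros((h, w), dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    for cy, cx in centers:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)  # keep the maximum where bumps overlap
    return heatmap
```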
### _Early Fusion with Radar_
Radar-based early fusion models learn from features of both radar and optical sensor data. Such joint learning results in improved detections, with optical data features contributing accurate optical attributes and radar data features complementing them with the unique strengths of radar. The two most prevalent fusion pairings with radar data are camera-radar fusion and lidar-radar fusion. In both cases, radar serves a complementary role, enriching detections with its strengths such as velocity, distance, and robustness to weather and lighting conditions. Initially, the majority of early fusion models in the literature were convolution-based, though transformer-based models have recently gained popularity. These models demonstrate effective joint learning through their cross-attention mechanism, albeit at the cost of increased model size.
#### 1) Camera-Radar Early Fusion
Camera and radar are both cost-effective and low-maintenance sensors with excellent complementary capabilities. Cameras capture detailed semantic information of the scene in a perspective view up to long distances, while radars provide accurate depth and velocity imaging, detections in occlusion and over long ranges, and robustness to weather and lighting conditions. Combined, they offer rich complementary features to facilitate advanced levels of autonomy.
However, early fusion of camera and radar presents a significant challenge due to their view disparity. Cameras image the scene in the perspective view, while radars capture rich features in the BEV. Converting one sensor's data to another's view is a complex and error-prone task. Although models such as Lift-Splat-Shoot [84] can convert perspective view images to BEV, the results are not always accurate due to the ill-posed nature of camera monocular depth estimations. Similarly, representing radar data in the perspective view increases its ambiguity due to low azimuth and elevation resolutions of radars.
The camera-radar early fusion models in the literature primarily use either a perspective view [83, 85] or a BEV [86, 87], depending on their application. Some models, however, use non-standard joint views that offer a balanced compromise in view disparity for both sensors, such as CramNet [28].
CenterFusion [83] is a convolutional early fusion model that uses perspective views to perform fusion of learned features from camera and radar to improve depth and velocity estimations of camera 3D detections. It employs a CenterNet-based architecture with the DLA backbone for extracting camera features and predicting preliminary 3D detections. Afterwards, it filters radar points falling in the frustum of each camera detection and concatenates learned features from those points with camera features for inferring final 3D detections. However, this model has issues with associating radar points to camera detections, especially when points at different depths fall within the same frustum. This may result in imprecise depth estimations.
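A minimal sketch of this frustum-style association is given below, assuming radar points have already been projected into the image plane with known calibration and that each camera detection carries a monocular depth estimate. The depth tolerance, array layouts, and function name are illustrative assumptions, not CenterFusion's actual mechanism.

```python
import numpy as np

def radar_points_in_frustum(radar_uv, radar_depth, det_box, det_depth, depth_tol=2.0):
    """Select radar points that project inside a camera detection box and lie near its estimated depth.

    radar_uv: (N, 2) image-plane projections of radar points (assumes known calibration).
    radar_depth: (N,) radar ranges; det_box: (u_min, v_min, u_max, v_max); det_depth: monocular depth estimate.
    """
    u_min, v_min, u_max, v_max = det_box
    in_box = ((radar_uv[:, 0] >= u_min) & (radar_uv[:, 0] <= u_max) &
              (radar_uv[:, 1] >= v_min) & (radar_uv[:, 1] <= v_max))
    near_depth = np.abs(radar_depth - det_depth) < depth_tol
    return np.where(in_box & near_depth)[0]
```

The depth gating is exactly where such models run into trouble: when several objects at different depths fall inside the same frustum, a fixed tolerance cannot disambiguate them.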
CramNet [28] fuses learned features from perspective view camera and BEV radar data in a ray-constrained projected 3D space via a transformer-based cross-attention mechanism. It also implements sensor dropout training by feeding Gaussian noise to one sensor input at random and demonstrates some model robustness to sensor data corruption in situations like adverse weather. However, it faces limitations in fusing features in the projected 3D space due to the lack of elevation resolution in the radar data.
RADIANT [85] addresses the issue of imprecise association of radar points with camera object detections, which can lead to sub-optimal depth estimations. Instead of associating radar points to objects as other models do, RADIANT learns to predict 3D offsets between radar points and object centers, followed by a feedforward depth weighting network to refine the monocular depth estimates. This method could be extended for learning the velocity of objects as well.
Contrary to the previous studies, Camera Radar Net (CRN) [86] performs fusion in the BEV, using a multi-modal deformable cross-attention module to fuse multi-view image and 3D radar point cloud data. It introduces some innovative techniques to mitigate issues with camera-to-BEV projection and BEV range scaling. It uses a radar-assisted view transform module to project image features into the BEV, which outperforms Lift-Splat-Shoot [84] by including depth signals from radar data. In addition, its use of sparse aggregation by querying select features makes it efficient for long-range detection. These key innovations result in a \(\sim\)100 m range for the model and comparable detection performance to state-of-the-art lidar detector models on the NuScenes dataset.
TransCAR [87] is another BEV early fusion model that utilizes DETR3D [88] to generate 3D object features from multi-view images and uses an FFN on radar point cloud data followed by positional encoding to generate spatial radar features. Subsequently, it fuses camera and radar features via three multi-head cross-attention decoders, assisted by queries to a features closeness mask. The model achieves excellent results, but is large and slow due to its heavy reliance on transformers.
While the models discussed above feature innovative architectures for camera-radar early fusion, they still face challenges due to issues with 3D radar point cloud data and disparities between camera and radar views. Apart from such models, an interesting study estimates the full instantaneous velocity of objects from their camera optical flow and radar radial velocity features using a simple closed-form solution [89]. The authors demonstrate the accuracy of their velocity estimation by precise accumulation of radar points from the past 20 frames, which are spread over long distances due to their motion over frames.
#### 2) Lidar-Radar Early Fusion
Radars provide excellent complementary features to lidars for early fusion. Radars can enhance the accurate 3D detections provided by lidars due to their strong range and velocity features and robustness to adverse weather. Moreover, radars can improve the detection of large agents, such as buses and trucks, as radars receive strong signals from these objects. Furthermore, an early fusion model can leverage the complementary strengths of lidars and radars to reduce ghost detections that arise from each other's spurious observations. These spurious observations are often sensor-specific, such as radar clutter arising from multipath phenomena and lidar distractors arising from conditions like steam, fog, and retro-reflections.
Interestingly, the lidar-radar pair is more compatible with early fusion compared to camera-radar or camera-lidar pairs due to the high similarity in their data. Both sensors generate point cloud data that can be represented in a BEV, and being active sensors, their data remains unaffected by lighting conditions. Nevertheless, research in the area of lidar-radar fusion remains limited. This scarcity likely arises from the limited use of lidars outside of the autonomous driving field because of their high cost, and a prevalent interest in camera-lidar fusion within the autonomous driving field due to the availability of high-quality public datasets and fewer data-related challenges, a common theme with radar data.
LiRaNet [90] is a lidar-radar early fusion study that extracts spatio-temporal BEV features from raw radar data and fuses them with lidar and road map network BEV features. By incorporating radar features in the model, it not only enhances detection performance but also improves the prediction of tracks with high accelerations and at long ranges.
MVDNet [55] is a transformer-based early fusion model that generates common region proposal tokens from lidar and radar BEV features and fuses them via a transformer cross-attention mechanism. It shows robustness on foggy weather data and overall improved detection performance over lidar-only models. However, it is trained on a small dataset and its prediction tasks are relatively simple. ST-MVDNet [27] further improves MVDNet's performance on foggy weather data by wrapping a student-teacher framework around the MVDNet model and applying modality dropout during training.
Although the studies mentioned above have shown improved performance by integrating radar with lidar, they have not yet fully exploited the potential of radars. This limitation is partly due to the sparsity and low resolution of 3D radar data and partly due to low effectiveness of the proposed early fusion model architectures. The issue with radar data quality could be mitigated by using more advanced 4D radar datasets. On the other hand, the architectural issues could be addressed by developing more effective fusion methods, similar to those proposed for camera-lidar fusion such as BEVFusion [91, 92], CrossFusion [93], and FullySparseFusion [94].
## VI Radars for Occupancy-based Tracking
Occupancy estimation methods partition the ego-centric world into a fine-grained 2D or 3D grid and predict the occupancy and dynamics of each cell. These methods essentially perform an inverse sensor modeling (ISM) of the scene based on sensor data. Such modeling is typically achieved via semantic segmentation models that commonly segment the grid into occupied, free, unobserved, and ignored regions. Some models also segment the grid into static and dynamic regions and estimate the velocity of each cell. A related method, _scene flow estimation_, predicts the motion field to estimate the sensor data in the next frame from the current one. Radar-based models in both methods aim to harness the unique features of radar data, including accurate velocity and range values, early detection abilities, and weather tolerance. We review the literature on both methods below.

Figure 9: These high-level block diagrams depict common camera-radar early fusion models. Diagram (a) illustrates fusion in the perspective view, where radar points are associated with preliminary vision detections to extract learned radar features. The features are then fused with camera features to improve depth and velocity attributes of the preliminary detections. Diagram (b) shows fusion in the BEV, in which camera images are projected in the BEV by a learned model. The extracted BEV features of camera and radar are then fused to infer joint detections.

Figure 10: A high-level block diagram of common lidar-radar early fusion models. These models learn BEV features from both sensors, typically using similar backbones, and then fuse these features for joint inference.
### Occupancy estimation
The early occupancy estimation methods, including those based on radar, utilized classical approaches such as Bayesian filtering, particle filtering, and Monte-Carlo methods to predict the occupancy of each cell [96]. However, these traditional methods used rigid models that struggled to generalize well in the complex scenes encountered during autonomous navigation tasks.
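For reference, the classical grid update that these learned models replace can be written as a simple log-odds inverse sensor model. The sketch below only raises the log-odds of cells containing radar returns; a full implementation would also lower the log-odds of cells traversed by each ray, and the increment and clipping values are illustrative.

```python
import numpy as np

def update_occupancy(log_odds, point_cells, l_occ=0.85, l_min=-5.0, l_max=5.0):
    """Classical log-odds occupancy update: cells containing radar returns become more 'occupied'.

    log_odds: (H, W) grid of log-odds values; point_cells: (N, 2) integer (row, col) cells of returns.
    """
    for r, c in point_cells:
        log_odds[r, c] = np.clip(log_odds[r, c] + l_occ, l_min, l_max)
    return 1.0 / (1.0 + np.exp(-log_odds))  # per-cell occupancy probability
```

Learned models keep this grid representation but replace the hand-tuned inverse sensor model with a network trained, often in a self-supervised fashion, against lidar-derived targets.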
Certain learned radar occupancy estimation models, such as [97, 98], utilize UNet's [99] encoder-decoder architecture on radar point clouds for 2D BEV semantic segmentation of the scene. However, these models only predict static occupancy despite having radar data containing velocity features. Moreover, their segmentation is coarse due to high angular uncertainty associated with the data. [100] improves the segmentation performance over these models by leveraging the uncertainty in radar data to learn the variance of occupancy for each cell along with its mean value.
By contrast, [101] implements an evidential grid-based tracking on the radar point cloud and employs a clustering algorithm to predict dynamic occupancy. This study further shows that radar-based occupancy estimation achieves superior segmentation of static and dynamic regions compared to similar lidar-based methods, owing to a more consistent velocity grid prediction from radar data.
The aforementioned models estimate occupancy solely using radar data and typically generate self-supervised segmentation labels from the accompanying lidar point cloud, thereby removing the need for labeled data. While these models exhibit good capabilities in segmenting dynamic regions of a scene, they suffer from coarse resolution in their occupancy estimation. To counteract this, multi-sensor occupancy models fuse lidar and radar point clouds, resulting in accurate scene occupancy and velocity estimations.
As a multi-sensor occupancy model, [30] predicts future dynamic occupancy of cells in the BEV from statistically computed multi-sensor dynamic occupancy input using a convolutional model. While this model outperforms single-sensor occupancy models, it occasionally mis-predicts static regions in the subsequent frame as dynamic and suffers from temporal inconsistency of tracks, particularly in occlusions.
Another multi-sensor occupancy model is [102], which addresses the temporal inconsistency problem of [30] by embedding LSTM cells into the convolutional layers. The resulting Recurrent Neural Network (RNN) architecture accumulates long-term sequence information, which adds context to dynamic tracks, enabling tracking them in occlusions and estimating their shapes more accurately.
Apart from radar-only and multi-sensor models, a notable model worth mentioning is [103], which employs a convolutional-recurrent UNet architecture to predict occupancy, velocity estimates, object segmentation, and drivable area from lidar point cloud data. It uses object labels from the dataset and the accompanying lidar point cloud to label the occupancy of the grid and perform frame interpolation for labeling the velocity of dynamic cells. This model achieves robust performance for occupancy and drivable area estimation and could potentially also be used for radar-only and multi-sensor occupancy estimation.
Several challenges remain in radar-based occupancy estimation, primarily due to the sparsity and low resolution of radar data, which result in coarse occupancy predictions. However, as some studies above have demonstrated, radar data still holds the potential for improving velocity estimation. Furthermore, radar data can enhance the weather tolerance of occupancy estimation and provide early tracking of moving agents in occlusion and over long ranges. Achieving these improvements would require (i) 4D radar data from new datasets, which significantly enhances the resolution and density of the data relative to 3D radar data, and (ii) more effective multi-sensor occupancy estimation architectures that strategically leverage the strengths of radar data.

Figure 11: These two sub-figures illustrate the tasks of occupancy estimation and scene flow models. (a) Occupancy estimation models segment the scene into occupied, free, occluded, and unobserved regions [96]. Some models also segment regions into static and dynamic and predict the velocity of each cell. (b) Scene flow models predict a motion field to estimate the point cloud in the next frame from the point cloud in the current frame.
### Scene Flow
Scene flow estimation methods predict a motion field that describes the translation of points induced by motion of dynamic agents in the scene and motion of the ego vehicle itself. Scene flow gained popularity for lidar data due to its dense point cloud representation. Some prominent lidar-based scene flow estimation models include Object Scene Flow [32], Just Go with the Flow [33], Flot [104], SLIM [105], PointFlowNet [106], and FlowNet3D [107].
An example of a radar-based scene flow model is RaFlow [108], which proposes a self-supervised model to estimate scene flow on 4D radar point cloud data. Its tasks include estimation of the motion field, static and dynamic segmentation, and rigid ego motion transformation to predict the subsequent point cloud frame. RaFlow takes two consecutive point cloud frames as input to generate a static mask and then uses this mask to segment the dynamic points. One of RaFlow's main limitations is its emphasis on scene flow for static points to improve overall metrics, which leads to less reliable flow estimation of dynamic points that carry more useful information.
CMFlow [109] extends RaFlow by implementing cross-model supervised scene flow estimation using co-located data from lidar, camera, and odometer sensors. Such sensor data is incorporated at various stages of RaFlow to enhance supervision and constrain losses, thereby improving scene flow estimation of dynamic points.
## VII Modeling Uncertainty in Radar
Every sensor measurement inherently carries a degree of uncertainty. The nature of this uncertainty can be either _homoscedastic_ or _heteroscedastic_, contingent upon whether successive measurements are drawn from a random distribution with a fixed or variable variance, respectively. Inaccurately presuming heteroscedastic variables as homoscedastic can result in biased error estimates [110].
The autonomous environment is a quintessential example of heteroscedasticity due to its ever-changing dynamics. For example, measurements of a car at 10 m range typically exhibit lower variance compared to measurements of a car at 100 m. Similarly, the variance in measurements differs between a car moving straight directly in front of the ego vehicle and a car turning towards the ego vehicle due to two distinct sources of randomness.
Uncertainty can also be categorized as _epistemic_ (knowledge uncertainty) or _aleatoric_ (uncertainty related to inherent randomness). Epistemic uncertainty can be reduced by improving system knowledge, whereas aleatoric uncertainty, which arises from randomness inherent to the system, is irreducible and can only be lessened by selecting a new system with lower aleatoric uncertainty [36].
In the context of deep learning, epistemic uncertainty arises from uncertainty in the model parameters and can be mitigated by increasing the model's knowledge. This could involve selecting a larger, more balanced dataset and/or using an ensemble of models. Aleatoric uncertainty, on the other hand, pertains to the observational noise in the input data. Reducing this form of uncertainty requires improving the system, for example by using new measurement sensors with higher resolution and/or better dataset labels.
Radar data is often characterized by high _heteroscedastic aleatoric uncertainty_ due to its lower spatial resolution and weaker optical features compared to lidar and camera data. Such uncertainty often results in decreased radar model performance. However, a few studies have managed to use knowledge of uncertainty to reduce model output uncertainty or label uncertainty, thus improving model performance. While some of these studies focus on lidar data, they can be applied to radar data as well. Important examples are reviewed below.
### Reducing Uncertainty in Model Tasks
While not a lot can be done to reduce uncertainty in sensor data post-capture, the model's performance on the data can be improved by properly leveraging data uncertainty. One radar model that demonstrates this is [100], which incorporates heteroscedastic uncertainty into its model formulation and learns the variance of occupancy for each cell in addition to its mean value. This improves segmentation of occluded regions during occupancy estimation tasks.

Figure 12: The high uncertainty of radar data and measurements results in less reliable detections from learned radar models. (a) Some studies seek to improve task robustness by learning the uncertainty associated with each task using auxiliary heads during the forward pass, followed by weighing the loss for each task within the total loss based on its learned uncertainty. (b) Other studies focus on improving model robustness by reducing the uncertainty of the labels.
Similarly, the lidar-based work in [111] leverages observation noise in the data to predict heteroscedastic aleatoric uncertainty associated with each detection, resulting in a \(9\%\) improvement in average precision over similar state-of-the-art models. The model uses a modified Faster-RCNN [71] architecture for object detection and includes auxiliary heads to predict variances in uncertainty for the region proposal network and for location and orientation prediction heads. The predicted variances regularize losses in a multi-loss function, thus promoting higher learning from informative samples and lower learning from noisy samples.
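The loss-attenuation idea behind such models can be summarized as a Gaussian negative log-likelihood in which the network predicts a log-variance alongside each regression output; high predicted variance down-weights the squared error but is penalized by the log-variance term. The snippet below is a generic PyTorch sketch in this spirit, not the exact loss of [111].

```python
import torch

def heteroscedastic_l2(pred, log_var, target):
    """Aleatoric-uncertainty-weighted regression loss.

    pred, target: predicted and ground-truth regression values; log_var: predicted per-output log-variance.
    Noisy samples receive a smaller gradient, while confident predictions are held to the full squared error.
    """
    return (0.5 * torch.exp(-log_var) * (pred - target) ** 2 + 0.5 * log_var).mean()
```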
Lastly, the vision segmentation model in [112] learns homoscedastic uncertainty for tasks and weighs each task's loss in the multi-task loss function according to its homoscedastic uncertainty, leading to improved semantic segmentation, instance segmentation, and per-pixel depth regression.
### _Reducing Uncertainty in Labels_
Radar data labels often carry substantial uncertainty, which can negatively affect the performance of radar models. This uncertainty primarily originates from two sources: the lack of confidence scores and the presence of error-prone labels. All objects in the dataset carry the same confidence, irrespective of their distance, size, or visibility, which can lead to overconfident predictions for agents with low confidence. Moreover, labels for radar data are usually transferred from the spatial labels of lidar and camera data, which can be error-prone due to different imaging physics between radars and optical sensors. Such uncertainty can result in occasional erroneous predictions.
Patel et al. [37] propose a solution to the confidence issue in the form of a P-smoothing method that generates soft labels for objects based on their average received power. According to the radar equation [113], objects with lower reflectivity, such as pedestrians and bicyclists, and at longer ranges have larger aleatoric uncertainty, which is reflected in their average received power. The study trained a model on P-smoothed labels, which achieved a significant improvement in detection performance on a corrupted test dataset containing low-certainty agents, compared to a similar model trained on hard labels.
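The exact mapping used in [37] is not reproduced here, but the sketch below shows one plausible way to turn an object's average received power into a soft confidence label. The linear ramp, power thresholds, and confidence floor are purely illustrative assumptions.

```python
import numpy as np

def power_soft_labels(avg_power_dbsm, p_low=-10.0, p_high=20.0, floor=0.6):
    """Illustrative P-smoothing-style soft labels.

    Maps each object's average received power (in dB) to a confidence in [floor, 1];
    weak, distant, or low-reflectivity objects receive lower target confidence.
    """
    t = np.clip((avg_power_dbsm - p_low) / (p_high - p_low), 0.0, 1.0)
    return floor + (1.0 - floor) * t
```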
## VIII Radar: Challenges and Opportunities
### _Multipath and clutter_
Radars exhibit robustness against weather conditions due to their substantially larger wavelength compared to optical sensors like lidars and cameras. However, this large wavelength causes nearby surfaces, such as vehicle walls or guardrails, to behave as sources of specular reflections (i.e., mirrors), thus producing spurious observations in radar data known as clutter. Hence, clutter does not arise from real objects but from indirect paths created by multiple reflections from surfaces, a phenomenon often referred to as multipath. Observations of clutter can sometimes be indistinguishable from those of real objects, leading to false positive detections in learned radar models.
Multipath effects for autonomous vehicles can be classified into three types: double bounce, underbody reflection, and mirrored ghost detections [62]. Radar double bounces occur due to two back-and-forth reflections between an object and the radar-equipped ego vehicle, resulting in false radar observations at double the range and velocity relative to the real object. Underbody reflections usually occur under a vehicle due to multiple reflections between undercarriage of the vehicle and the road. Although underbody observations increase the density of radar points on vehicles, they often appear behind the vehicle due to their longer indirect path, elongating the vehicle's shape in the learned model. Finally, mirrored ghost detections are caused by radar-reflective surfaces in the environment, such as concrete walls, guardrails, and vehicle walls. These create ghost observations that appear behind these surfaces, potentially leading to false positive detections in radar models.
It is desirable to remove clutter from radar data to reduce false positive detections in learned models. Classical studies focused on rule-based filters to remove clutter. For example, Kopp et al. [62] developed rule-based filters to remove three types of multipath clutter from radar data in three successive steps. However, such designs are not robust in detecting clutter in radar data from autonomous navigation scenes due to the complex nature of multipath propagation.
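As a toy illustration of such a rule, the sketch below flags candidate double-bounce ghosts: points whose range and radial velocity are both roughly twice those of another point at a similar azimuth. The tolerances and brute-force search are illustrative assumptions, not the actual filters of [62], and the brittleness of this kind of check is exactly why learned approaches have taken over.

```python
import numpy as np

def flag_double_bounce(rng, azimuth, doppler, az_tol=0.5, rel_tol=0.1):
    """Flag points that look like double-bounce ghosts of another point (toy rule-based filter).

    rng, azimuth (deg), doppler: (N,) arrays. Point i is flagged if some point j at a similar
    azimuth has roughly half its range and half its radial velocity.
    """
    flags = np.zeros(len(rng), dtype=bool)
    for i in range(len(rng)):
        for j in range(len(rng)):
            if i == j or abs(azimuth[i] - azimuth[j]) > az_tol:
                continue
            if (abs(rng[i] - 2.0 * rng[j]) < rel_tol * rng[i] and
                    abs(doppler[i] - 2.0 * doppler[j]) < rel_tol * max(abs(doppler[i]), 1e-3)):
                flags[i] = True
    return flags
```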
More recent work has focused on learned approaches for clutter filtering. In particular, architectures based on PointNet [114] and PointNet++ [115] have proven to be popular [116, 117, 118, 119]. These models learn individual point features and global scene features to predict clutter points. Among these models, [116] accumulates points over multiple frames to create dense point cloud data and feeds it into a custom-modified PointNet++ model. It also generates an auto-labeled dynamic clutter points dataset by filtering dynamic radar points that fall outside bounding boxes beyond a certain error margin. By contrast, [119] uses a PointNet model and encodes point features in a spherical coordinate system which better learns connection between points, thereby improving the segmentation of clutter points.
However, many clutter points, especially mirrored ghost detections, show inconsistency in motion between frames due to irregular changes in multipath length. Single-frame models like the ones mentioned above often miss these points because they lack the temporal context of the data. MATLAB's proposed multipath filter [120] utilizes spatio-temporal information in radar data to achieve robust clutter detection performance. It first detects reflector surfaces in the scene from static radar points and identifies occluded dynamic clutter points arising from them. Next, it identifies occluded clutter points arising from agents detected by a learned radar model. Finally, it filters out clutter points with velocities that are inconsistent with the actual motion of the target by using a sophisticated probability hypothesis density tracker. This approach correctly detected \(97\%\) of static and
\(80\%\) of dynamic clutter in test data.
### Radar Simulation
Deep learning models used for perception typically require a large amount of labeled data, with a good diversity balance, for robust performance across various driving scenarios. However, both collecting such diverse data and labeling it are extremely expensive and time-consuming processes. Synthetic data, generated in simulation, offers a cost-effective way to complement real-word data.
While the synthetic data for cameras and lidars can be made realistic with modern techniques like ray-tracing and deep learning, generating realistic synthetic radar data is quite challenging. This is due to significant differences between radar physics and the optical physics for which most simulation models have been developed. To accurately simulate radar data of a scene, knowledge of the fine-grained radar reflectivity of every object in the scene is required. Therefore, there are not many useful simulation models for radars.
Generative adversarial networks (GANs) [121] offer a promising solution to generate simulated radar data, especially when large amounts of real world radar data is available. GANs operate within a framework in which a generator model produces synthetic output, while an adversarial model, trained on real world data, detects whether the output is realistic or not. When the generator model becomes so good that its output can't be reliably discriminated by the adversarial model, the adversarial model is discarded, and the generator model is used to produce synthetic data.
Along these lines, [122] presents a GAN model that simulates real-world range-azimuth heatmap radar data. The generator produces synthetic radar data from lidar elevation measurements. During training, in addition to this forward mapping, a backward mapping is also learned where real radar data is used to generate synthetic elevation measurements. This enforces cyclic consistency in the model, leading to more realistic synthetic radar data generation. Furthermore, this study shows that a radar model trained on the generated synthetic data performs similarly to one trained on real-world data when evaluated on real-world test data.
### Velocity Labeling
Radars used in autonomous navigation, whether 3D or 4D, offer very high-resolution imaging along their velocity dimension. However, the velocity dimension of radar data is generally not labeled, as the labels for radar data are usually transferred from spatial labels of lidar and camera data. Given that radars offer relatively low-resolution imaging along their azimuth and elevation dimensions, radar observations from neighboring objects can often overlap, potentially corrupting shape, velocity, and class predictions of objects in radar models. This issue could negatively impact the detection of critical objects in occlusion and at long ranges, where such objects have lower angular separation from adjacent objects.
Figure 13: Three types of multipath phenomena that lead to spurious radar observations. (a) Double bounce, which arises due to two back-and-forth reflections between an object and the radar, resulting in observations roughly at double the distance and velocity, as illustrated by points 1 and 2. (b) Underbody reflections, which occur between the undercarriage of a vehicle and the road, depicted by point 3. (c) Mirrored ghost detections, which occur from a surface adjacent to the main object and the radar and result in observations at various positions based on their direction of arrival and path length, shown by points 4, 5, and 6.

Figure 14: This figure illustrates the forward and backward flow of a generative adversarial network (GAN) that produces synthetic radar data. GANs present a promising direction to generate realistic radar data, which has been challenging using traditional methods due to the intricacies of radar physics.

Notably, observations from neighboring objects are clearly separated along the velocity dimension due to differences in their velocities and the fine-grained velocity imaging provided by radars. Therefore, observations from a moving agent in occlusion or at long range are easily distinguished from those of stationary objects in the scene along the velocity dimension. Thus, labeling this dimension could significantly improve the robustness of radar detection models.
Velocity dimension labeling of radar data could be achieved by using semi-supervised or clustering-based methods. More learned approaches could also be utilized, such as the method described in [123], which generates range-Doppler labels of objects in radar data by warping a range-Doppler spectrogram into the image domain, obtaining object segmentation using camera and lidar data, and then warping it back into the range-Doppler domain.
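One possible clustering-based starting point is to group radar points jointly in position and (scaled) radial velocity, so that spatially overlapping objects with different velocities fall into different clusters whose velocity ranges can then be labeled. The scaling factor and DBSCAN parameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_points_with_velocity(xyz, radial_velocity, v_weight=2.0, eps=1.5, min_samples=3):
    """Cluster radar points in joint position-velocity space.

    xyz: (N, 3) point positions; radial_velocity: (N,) Doppler values.
    Returns per-point cluster ids; -1 marks noise points.
    """
    feats = np.hstack([xyz, v_weight * radial_velocity.reshape(-1, 1)])
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
```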
## IX Discussion and Conclusion
In conclusion, the role of radar as a critical sensor for secure and reliable autonomous navigation has been highlighted. The notable strengths of radar include its ability to perform high-resolution velocity imaging, detect objects in occlusion and over long ranges, and maintain robust performance in adverse weather conditions. However, these strengths are counterbalanced by challenges, notably the low resolution, sparsity, clutter, and high uncertainty of radar data. This review identified and discussed key areas of focus within radar deep learning, presenting a comprehensive examination of the existing research topics, the challenges, and the potential opportunities. Along with presenting radar fundamentals, this review also included an in-depth exploration of approaches such as early and late fusion, occupancy flow estimation, uncertainty modeling, and multipath detection.
Autonomous driving is arguably one of the most challenging applications of high-resolution radars. However, the ideas and methods discussed in this review are not limited to this application. In fact, they can be readily extended to a variety of related applications, including indoor autonomy [124, 125], ground penetrating radar [126, 127, 128], next-generation planetary rovers, and joint communication and sensing for future wireless networks.
|
2305.10305
|
Effects of Kaluza-Klein Neutrinos on $R_{D}$ and $R_{D^{*}}$
|
Recent measurements of $R_{D}$ and $R_{D^{*}}$ by the LHCb collaboration show
deviations from their respective Standard Model values. These semileptonic $B$
meson decays, associated with $b\rightarrow c \tau \bar{\nu}$ transition, are
pointing toward new physics beyond the Standard Model via leptonic flavor
universality violation. In this paper, we show that such an anomaly can be
resolved by the cumulative Kaluza-Klein (KK) modes of singlet right-handed
neutrino which propagates in the large extra dimensional space. We found that
the number of extra dimensions should be 2 to explain $R_{D}$ and $R_{D^{*}}$.
We show that both $R_{D}$ and $R_{D^{*}}$ constrain the energy scale $M_{F}$
of this extra dimension which are compatible with the limits from lepton flavor
violating tau decays. In contrast, our findings are in tension with the limits
coming from the neutrino experiments which set the most stringent lower bound
on $M_{F}$. The future measurements of $R_{D^{(*)}}^{exp}$ with reduced
uncertainties will exclude this extra dimensional model with right-handed
neutrino propagating in the bulk, if the central values stay.
|
Janus Capellan Aban, Chuan-Ren Chen, Chrisna Setyo Nugroho
|
2023-05-17T15:43:33Z
|
http://arxiv.org/abs/2305.10305v1
|
# Effects of Kaluza-Klein Neutrinos on \(R_{D}\) and \(R_{D^{*}}\)
###### Abstract
Recent measurements of \(R_{D}\) and \(R_{D^{*}}\) by the LHCb collaboration show deviations from their respective Standard Model values. These semileptonic \(B\) meson decays, associated with the \(b\to c\tau\bar{\nu}\) transition, are pointing toward new physics beyond the Standard Model via leptonic flavor universality violation. In this paper, we show that such an anomaly can be resolved by the cumulative Kaluza-Klein (KK) modes of singlet right-handed neutrino which propagates in the large extra dimensional space. We found that the number of extra dimensions should be 2 to explain \(R_{D}\) and \(R_{D^{*}}\). We show that both \(R_{D}\) and \(R_{D^{*}}\) constrain the energy scale \(M_{F}\) of this extra dimension which are compatible with the limits from lepton flavor violating tau decays. In contrast, our findings are in tension with the limits coming from the neutrino experiments which set the most stringent lower bound on \(M_{F}\). The future measurements of \(R_{D^{(*)}}^{exp}\) with reduced uncertainties will exclude this extra dimensional model with right-handed neutrino propagating in the bulk, if the central values stay.
## I Introduction
Discrepancies between the Standard Model (SM) predictions and experimental data in the decays of B mesons have gained much attention for years since they may be hints of new physics beyond the SM, in particular the violations of lepton flavor universality (LFU) in the measurements of \(R_{K^{(*)}}\) and \(R_{D^{(*)}}\). Even though the updated LHCb measurement [1] confirmed the consistency with SM predictions in \(R_{K}\) and \(R_{K^{*}}\) due to the \(b\to s\) transitions, the LFU violation remains about \(3\sigma\) away from the SM expectations in the \(b\to c\tau\bar{\nu}_{\tau}\) transition in the \(R_{D}\) and \(R_{D^{*}}\) measurements.
The definition of \(R_{D^{(*)}}\) is given as
\[R_{D^{(*)}}=\frac{Br(B\to D^{(*)}\tau\bar{\nu}_{\tau})}{\left[Br(B\to D^{(*)}e\bar{\nu}_{e})+Br(B\to D^{(*)}\mu\bar{\nu}_{\mu})\right]/2}, \tag{1}\]
which is independent of \(|V_{cb}|\) and also of the \(B\to D^{(*)}\) form factor to a large extent [2]. A newly calculated world average [3] using the data from Belle, BaBar, and LHCb collaborations in 2022 obtained \(R_{D}^{ave}=0.358\pm 0.027\) and \(R_{D^{*}}^{ave}=0.285\pm 0.013\). The SM predictions of the branching ratios \(R_{D}^{SM}\) and \(R_{D^{*}}^{SM}\) are \(0.298\pm 0.004\) and \(0.254\pm 0.005\)[2], respectively, which are clearly smaller than the measurements. After incorporating all the recent developments in \(B\to D^{(*)}\) that include form factors for predicting \(R_{D^{(*)}}^{SM}\), the largest pull of the combined new world average is \(4.1\sigma\) from SM predictions [3]. Furthermore, even the most updated results of LHCb [4] using the data collected in 2015 and 2016 are included, the global picture of the combined new world average does not change. And this corresponds to the most recent combined world average \(R_{D}^{ave}=0.356\pm 0.029\) and \(R_{D^{*}}^{ave}=0.284\pm 0.013\). For this large deviation, the new physics effect could be comparable to the tree-level SM contributions, therefore many models have been proposed, such as introducing leptoquarks [5], or new colorless vector \(W^{\prime}\)[6], or scalar particles [7] as tree-level mediators.
In this work, we study a large extra-dimensional model in which three generations of right-handed neutrinos propagate in the bulk [8]. As a result of the compactification of these fields, active neutrinos become massive and eventually mix with KK neutrinos through the mass matrix diagonalization. Concerning \(R_{D^{(*)}}\), it is important to note that tree-level semileptonic processes with \(W^{\pm}\) as mediators preserve LFU in the SM. With the existence of KK neutrinos, there may be additional decay channels for \(b\) decays into \(c\) and KK neutrinos, \(b\to c\ell\bar{\nu}_{\ell}^{KK}\), if KK neutrinos are light enough. Furthermore, if
the \(b\to c\tau\bar{\nu}_{\tau}^{KK}\) decay width is much larger than those of \(b\to ce\bar{\nu}_{e}^{KK}\) and \(b\to c\mu\bar{\nu}_{\mu}^{KK}\), \(R_{D^{(*)}}\) would be larger than the SM prediction, and this is the case we consider in this study. Namely, new physics contributes to \(R_{D^{(*)}}\) from the cumulative effects of \(\tau\) neutrino KK modes in the \(b\to c\tau\bar{\nu}_{\tau}^{KK}\) transition as a tree-level process via \(W^{\pm}\) exchange.
The paper is structured as follows: we briefly discuss the extra-dimensional KK model of [8] in section II. In section III, we calculate the tree-level decay width \(\Gamma(B\to D\tau\bar{\nu}_{\tau}^{KK})\) to determine the total contribution of the KK neutrinos to the \(R_{D^{(*)}}\) measurements. Section IV discusses the constraints we consider for the lower limits on the fundamental scale \(M_{F}\) of the extra dimension. In this part, we also implement the results of a combined analysis of several neutrino experiments performed in [9], which bound the size \(R\) of the Large Extra Dimension (LED) from above. From [9], the upper bound on \(R\) at 90% confidence level (C.L.) is \(R<0.20\,\mu m\) for normal ordering (NO) and \(R<0.10\,\mu m\) for inverted ordering (IO). Finally, in section V we present our conclusions.
## II Model
In this section, we briefly review a model in the extra-dimensional framework, where three right-handed neutrinos are introduced and able to propagate in the bulk, while all the SM particles are on the brane [8].
The effective action of such interaction is given as [8]
\[S=\int d^{4}xdy[\bar{\Psi}_{i}\Gamma_{A}i\partial^{A}\Psi_{i}]+\int\,d^{4}x[ \bar{\nu}_{i}i\not{\partial}\nu_{i}+\nu_{i}\lambda_{ij}\psi_{j}(x^{\mu},0)H+h.c.]\,, \tag{2}\]
where \(A=0,1,2,3,4\), \(\Psi_{i}(x,y)\) and \(\nu_{i}\) with \(i=1,2,3\) are respectively the right-handed neutrino and active neutrino fields, \(H\) is the 4D Higgs doublet, and \(\lambda\) is the matrix of Yukawa couplings. The right-handed neutrino fields can be written as
\[\Psi_{j}(x,y)=\begin{pmatrix}\psi_{j}(x,y)\\ \bar{\psi}_{j}^{c}(x,y)\end{pmatrix}, \tag{3}\]
such that \(\psi_{j}\) and \(\bar{\psi}_{j}^{c}\) are two-component Weyl spinors in five dimensions. Here, we denote \(x\equiv x^{\mu}\) as the four-vector with \(\mu=0,1,2,3\), where \(x^{0}\) is the time coordinate and \(x^{i}\) are the spatial coordinates for each \(i=1,2,3\). The extra-dimensional coordinates are represented by \(y\equiv y^{k}\), where \(k=1,2,...,\delta\) with \(\delta\) the number of extra dimensions. Note that the fundamental
scale \(M_{F}\) is related to the Planck scale \(M_{P}\simeq 10^{19}\) GeV by \(M_{F}^{\delta+2}\simeq M_{P}^{2}R^{-\delta}\). Suppose \(\Psi_{j}(x,y)\) is \(2\pi R\)-periodic in the variable \(y\); then we can expand its components in Fourier modes as
\[\psi_{j}(x,y) =\frac{1}{\sqrt{2\pi R}}\sum_{n=-\infty}^{+\infty}\psi_{j}^{(n)}( x)\exp(\frac{iny}{R})\,, \tag{4}\] \[\psi_{j}^{c}(x,y) =\frac{1}{\sqrt{2\pi R}}\sum_{n=-\infty}^{+\infty}\psi_{j}^{(n)c} (x)\exp(\frac{iny}{R})\,. \tag{5}\]
With a redefinition of the standard left-handed neutrinos \(\nu_{i}\) in terms of the neutrino flavor eigenstates, the relation
\[\nu_{\alpha,L}^{f}=\sum_{i=1,2,3}V_{\alpha i}\nu_{i},\ \ \ \ \ \alpha=e,\mu,\tau. \tag{6}\]
and the compactification in coordinate \(y\) on a circle of radius \(R\), the action (2), after spontaneous symmetry breaking, will give \(S=\int\mathcal{L}d^{4}x\)[8], where
\[\mathcal{L}=\sum_{i=1,2,3}\left(\bar{\nu}_{i}i\not{\partial}\nu_{i}+\sum_{n=- \infty}^{+\infty}\left[\bar{\psi}_{i}^{(n)}i\not{\partial}\psi_{i}^{(n)}+\bar{ \psi}_{i}^{(n)c}i\not{\partial}\psi_{i}^{(n)c}+m_{i}\nu_{i}\psi_{i}^{(n)} \right]+\sum_{n=1}^{+\infty}\frac{n}{R}(\psi_{i}^{(n)}\psi_{i}^{(n)c}-\psi_{i} ^{(-n)}\psi_{i}^{(-n)c})+h.c.\right) \tag{7}\]
such that \(m_{i}=\frac{M_{F}}{M_{P}}\frac{h_{i}v}{\sqrt{2}}\) as in [13], with \(h_{i}\) as the corresponding Yukawa coupling for \(i=e,\mu,\tau\). It is important to note that \(m_{i}\) is suppressed by a volume factor \(\frac{M_{F}}{M_{P}}\) of the extra compactified dimensions [10, 11]. Finally, the relevant mass terms in the action are given by
\[\mathcal{L}_{mass}^{KK}=\sum_{i,j=1}^{3}N_{Li}^{T}M_{ij}N_{Rj}+h.c.\,, \tag{8}\]
with KK index being suppressed, and the mass matrix is given by
\[M=\begin{pmatrix}m_{i}&\sqrt{2}m_{i}&\sqrt{2}m_{i}&...\\ 0&\frac{1}{R}&0&...\\ 0&0&\frac{2}{R}&...\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix}. \tag{9}\]
The corresponding basis vectors for this mass matrix are
\[N_{Li}=\begin{pmatrix}\nu\\ \nu_{nL}\end{pmatrix}_{i},\ \ \ \text{and}\ \ \ N_{Ri}=\begin{pmatrix}\nu_{R}\\ \nu_{nR}\end{pmatrix}_{i}, \tag{10}\]
where we have redefined the fields as [8]
\[\nu_{nR,i}=\frac{1}{\sqrt{2}}\big{(}\psi_{i}^{(n)}+\psi_{i}^{(-n)}\big{)},\quad\nu_{nL,i}=\frac{1}{\sqrt{2}}\big{(}\psi_{i}^{(n)c}+\psi_{i}^{(-n)c}\big{)}\quad\text{for}\quad n\geq 1\quad\text{and}\quad\nu_{R,i}=\psi_{i}^{(0)}. \tag{11}\]
We diagonalize the matrix \(MM^{T}\) for each generation \(i\) using a unitary matrix \(U^{(i)}\) to obtain the square of the masses of the mass eigenstates. Let's call the eigenvalues of \(MM^{T}\) to be \(\dfrac{\lambda_{i}^{(n)2}}{R^{2}}\) which satisfies [8]
\[\lambda_{i}^{(n)2}-\pi\lambda_{i}^{(n)}\xi_{i}^{2}\cot\pi\lambda_ {i}^{(n)}=0\,, \tag{12}\]
for \(n\geq 0\) and \(\xi_{i}=m_{i}R\) with \(i=e,\ \mu,\ \tau\).
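As a quick numerical illustration (not part of the original analysis), the transcendental condition (12) can be solved for the lowest KK levels with a standard root finder; the sketch below assumes \(\xi_{i}\ll 1\), so that the \(n\)-th root lies just above the integer \(n\).

```python
# Minimal sketch: solve Eq. (12), lambda^2 - pi*lambda*xi^2*cot(pi*lambda) = 0,
# for the first few KK levels.  The bracket assumes xi << 1, so the n-th root
# sits between n and n + 1/2; the chosen xi is purely illustrative.
import numpy as np
from scipy.optimize import brentq

def kk_eigenvalue(n, xi):
    """Return lambda_i^(n) for KK level n >= 1 and mixing parameter xi = m_i * R."""
    f = lambda lam: lam**2 - np.pi * lam * xi**2 / np.tan(np.pi * lam)
    return brentq(f, n + 1e-3 * xi**2, n + 0.499)

xi = 1e-3
print([kk_eigenvalue(n, xi) for n in range(1, 4)])   # each root is ~ n + xi**2/n
```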
Following [13; 15], the mixing of the KK neutrinos with the active neutrinos can be written as
\[\Xi^{(i)}\equiv\big{(}\dfrac{U_{01}^{(i)}}{U_{00}^{(i)}},\dfrac{ U_{02}^{(i)}}{U_{00}^{(i)}},\dfrac{U_{03}^{(i)}}{U_{00}^{(i)}},...\big{)}\,. \tag{13}\]
As shown in [8; 11; 12; 14] by setting \(\xi_{i}=m_{i}R\) we obtain
\[U_{0n}^{(i)2}=\dfrac{2}{1+\pi^{2}\xi_{i}^{2}+\dfrac{\lambda_{i}^ {(n)2}}{\xi_{i}^{2}}}\,. \tag{14}\]
In the case of \(\xi_{i}\ll 1\), the approximation \(\lambda_{i}^{(n)}\approx n\) for \(n>0\) gives
\[U_{0n}^{(i)2}\approx\dfrac{2\xi_{i}^{2}}{\xi_{i}^{2}+n^{2}}. \tag{15}\]
Consequently, for \(n=0\), \(\lambda_{0}^{(i)}\) is approximately equal to \(\xi_{i}\) which yields \(U_{00}^{(i)2}\approx 1\). Therefore it can be easily checked in equation (13) that
\[\Xi^{(i)}\approx(U_{01}^{(i)},U_{02}^{(i)},U_{03}^{(i)},...). \tag{16}\]
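These approximations can be cross-checked numerically by truncating the mass matrix of Eq. (9). The sketch below is our own illustration, works in units of \(1/R\), and is only meaningful when \(\xi^{2}N\ll 1\), so that the truncation does not distort the zero-mode mixings.

```python
# Minimal numerical check of Eqs. (9)-(15): build a truncated KK mass matrix M,
# diagonalize M M^T, and compare the zero-mode mixings U_{0n}^2 with the small-xi
# formula 2*xi^2/(xi^2 + n^2).  N and xi are illustrative; require xi**2 * N << 1.
import numpy as np

def zero_mode_mixings(xi, N=300):
    M = np.zeros((N + 1, N + 1))
    M[0, 0] = xi                       # masses in units of 1/R, so m = xi
    M[0, 1:] = np.sqrt(2.0) * xi
    M[np.arange(1, N + 1), np.arange(1, N + 1)] = np.arange(1, N + 1)
    _, U = np.linalg.eigh(M @ M.T)     # eigenvalues ascending: light mode, then n = 1, 2, ...
    return U[0, :] ** 2

xi = 0.01
numeric = zero_mode_mixings(xi)[1:6]
analytic = 2 * xi**2 / (xi**2 + np.arange(1, 6) ** 2)   # Eq. (15)
print(numeric, analytic)
```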
The relevant interaction Lagrangian involving the neutrino mass eigenstates \(\nu_{l}\) and \(\chi^{(n)}\), the charged leptons \(l\), the weak bosons \(W^{\pm}\), and their corresponding Goldstone bosons \(G^{\pm}\) is given by [17]
\[\mathcal{L}_{W^{\pm}}^{int} =-\dfrac{g_{w}}{\sqrt{2}}W^{-\mu}\sum_{l=e,\mu,\tau}\Big{(}B_{l\nu_{l}}\bar{l}\gamma_{\mu}P_{L}\nu_{l}+\sum_{n=-\infty}^{\infty}B_{l,n}\bar{l}\gamma_{\mu}P_{L}\chi^{(n)}+h.c.\Big{)}\,,\] \[\mathcal{L}_{G^{\pm}}^{int} =-\dfrac{g_{w}}{\sqrt{2}M_{W}}G^{-}\sum_{l=e,\mu,\tau}\Big{[}B_{l\nu_{l}}m_{l}\bar{l}P_{L}\nu_{l}+\sum_{n=-\infty}^{\infty}B_{l,n}\bar{l}\big{(}m_{l}P_{L}-m_{(n)}P_{R}\big{)}\chi^{(n)}+h.c.\Big{]}, \tag{17}\]
where \(g_{w}\) is the weak coupling constant and \(P_{R,L}=\frac{1\pm\gamma_{5}}{2}\) are the chirality projection operators. Here \(m_{(n)}\) and \(m_{l}\) represent the masses of KK neutrinos and charged leptons respectively. Also the expressions for the elements of the matrix \(B\) are given in [13]
\[B_{l,n}=\sum_{i=e,\mu,\tau}V_{li}^{l}U_{i,n}^{\nu}\,,\qquad B_{l\nu_{k}}=\sum_{ i=e,\mu,\tau}V_{li}^{l}U_{ik}^{\nu}\quad\text{for each }k=e,\mu,\tau\,, \tag{18}\]
where the matrix \(V^{l}\) diagonalizes the charged lepton mass matrix. Indeed, the KK neutrino mixing parameters emphasized in [13; 15; 16] are
\[(s_{L}^{\nu_{l}})^{2}=\sum_{n=1}^{+\infty}|B_{l,n}|^{2}\approx\Xi^{(l)}\Xi^{(l)T}\approx\sum_{n=1}^{\infty}U_{0n}^{(l)2}. \tag{19}\]
The discrete summation over all of the KK modes can be converted into a continuous integration over the energy scale \(E\) following the prescription of [13; 16]
\[\sum_{n=1}^{\infty}\longrightarrow S_{\delta}R^{\delta}\int_{\frac{1}{R}}^{M_ {F}}E^{\delta-1}dE\,, \tag{20}\]
with \(M_{F}\) the ultraviolet (UV) cut-off, \(R\) the radius of the extra dimension, and \(S_{\delta}=2\pi^{\delta/2}/\Gamma(\frac{\delta}{2})\) the surface area of the unit sphere in \(\delta\) dimensions. As a result, the mixings can be expressed as
\[(s_{L}^{\nu_{k}})^{2}\approx\begin{cases}\frac{\pi h_{i}^{2}v^{2 }}{M_{F}^{2}}\ln\Bigl{[}\frac{M_{P}^{2}}{M_{F}^{2}}\Bigr{]}&\text{for}\ \ \delta=2\\ \frac{S_{\delta}h_{i}^{2}v^{2}}{(\delta-2)M_{F}^{2}}\Bigl{[}\Bigl{(}1-\bigl{(} \frac{M_{F}}{M_{P}}\bigr{)}^{2-\frac{4}{\delta}}\Bigr{)}\Bigr{]}&\text{for}\ \ \delta>2\,.\end{cases} \tag{21}\]
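For orientation, Eq. (21) can be evaluated directly; the short sketch below is our own illustration and assumes \(v=246\) GeV and \(M_{P}\simeq 1.22\times 10^{19}\) GeV as numerical inputs.

```python
# Minimal sketch of Eq. (21): the summed KK mixing (s_L^{nu_i})^2 as a function of
# the fundamental scale M_F (GeV), the Yukawa coupling h_i and the number of extra
# dimensions delta.  v = 246 GeV and M_P = 1.22e19 GeV are assumed numerical inputs.
import numpy as np
from math import gamma

V_HIGGS = 246.0        # GeV
M_PLANCK = 1.22e19     # GeV

def s_L_squared(M_F, h_i, delta):
    if delta == 2:
        return np.pi * h_i**2 * V_HIGGS**2 / M_F**2 * np.log(M_PLANCK**2 / M_F**2)
    S_delta = 2.0 * np.pi**(delta / 2) / gamma(delta / 2)
    return (S_delta * h_i**2 * V_HIGGS**2 / ((delta - 2) * M_F**2)
            * (1.0 - (M_F / M_PLANCK)**(2.0 - 4.0 / delta)))

print(s_L_squared(110e3, 5.0, 2))    # mixing of nu_tau for h_tau = 5, M_F = 110 TeV
```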
## III \(b\to c\tau\nu_{\tau}^{KK}\) transition and constraints
In addition to the SM diagram, the relevant Feynman diagram in the extra-dimensional model that gives the same experimental signature as the SM \(B\to D^{(*)}\tau\bar{\nu}\) decay is shown in Fig. 1. Therefore, together with the new contributions from the KK neutrinos, the prediction for \(R_{D^{(*)}}\) is
\[R_{D^{(*)}}=\frac{Br(B\to D^{(*)}\tau\bar{\nu}_{\tau})+Br(B\to D^{(*)}\tau\bar{\nu}_{\tau}^{KK})}{[Br(B\to D^{(*)}e\bar{\nu}_{e})+Br(B\to D^{(*)}\mu\bar{\nu}_{\mu})]/2}\approx R_{D^{(*)}}^{SM}\left(1+\frac{\Gamma(B\to D^{(*)}\tau\bar{\nu}_{\tau}^{KK})}{\Gamma^{SM}(B\to D^{(*)}\tau\bar{\nu}_{\tau})}\right) \tag{22}\]
\[\approx R_{D^{(*)}}^{SM}\left(1+\sum_{n=1}^{+\infty}\eta_{n}B_{\tau,n}^{*}B_{ \tau,n}\right)=R_{D^{(*)}}^{SM}\left(1+\frac{h_{\tau}^{2}v^{2}M_{P}^{\frac{4}{ \delta}-2}}{M_{F}^{\frac{4}{\delta}}}S_{\delta}R^{\delta-2}\int_{\frac{1}{R}}^ {m_{B}-m_{D^{(*)}}-m_{\tau}}\frac{E^{\delta-1}}{m^{2}+E^{2}}\eta(E)dE\right),\]
where \(m^{2}=\frac{h_{\tau}^{2}v^{2}M_{F}^{2}}{2M_{P}^{2}}\) and the \(\eta_{n}\) are three-body phase-space factors, with \(\eta(E)\) the corresponding continuous version of \(\eta_{n}\) (see the Appendix for details) obtained after summing over all the KK-neutrino contributions while invoking Eq. (20). Also note that \(\frac{1}{R}\simeq\frac{M_{F}^{\frac{2}{\delta}+1}}{M_{P}^{\frac{2}{\delta}}}\) from the fundamental relation between \(M_{P}\) and \(M_{F}\). To fit the \(R_{D}\) central value, the needed \(M_{F}\) is about 7 TeV for Yukawa coupling \(h_{\tau}=1\), which is excluded by the LHC mono-jet plus missing energy search [18] that imposes a lower bound \(M_{F}\gtrsim 11.2\) TeV. Therefore, we choose the Yukawa coupling \(h_{\tau}=5\) as our benchmark value throughout this paper unless otherwise stated. Fig. 2 shows the relation between the fundamental scale \(M_{F}\) and \(R_{D^{(*)}}\). The plots are made by plugging the SM central values of \(R_{D^{(*)}}^{SM}\)[2] into Eq. (41). The solid and dashed lines represent \(R_{D}\) and \(R_{D^{*}}\), respectively, for \(\delta=2,3,4,5\), and 6. Clearly, as the fundamental scale \(M_{F}\) increases, \(R_{D^{(*)}}\) approaches the SM values. The best-fit values of \(M_{F}\) for the most recent experimental central values of \(R_{D^{(*)}}^{exp}\)[4] with Yukawa coupling \(h_{\tau}=5\) are
\[M_{F}=33\,{\rm TeV},\,748\,{\rm GeV},\,127\,{\rm GeV},\,46\,{\rm GeV},\,24\, {\rm GeV}\quad{\rm for}\,\,R_{D}\,, \tag{23}\]
\[M_{F}=42\,{\rm TeV},\,834\,{\rm GeV},\,135\,{\rm GeV},\,48\,{\rm GeV},\,24\, {\rm GeV}\quad{\rm for}\,\,R_{D^{*}}\,, \tag{24}\]
corresponding to \(\delta=2,3,4,5,6\). It is clear that the fundamental scales \(M_{F}\) are excluded by LHC searches, even when \(h_{\tau}\) is increased up to \(4\pi\), except for \(\delta=2\). Hence, the only feasible scenario is when \(\delta=2\).
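The numbers above follow from a one-dimensional integral once \(\eta(E)\) is known. A sketch of how Eq. (22) can be evaluated numerically is given below; it is our own illustration, assumes a user-supplied callable `eta(E)` implementing Eq. (42) of the Appendix, and uses assumed numerical inputs (\(v=246\) GeV, \(M_{P}=1.22\times 10^{19}\) GeV, PDG masses).

```python
# Minimal sketch of the R_D prediction of Eq. (22).  `eta` is a callable E -> eta(E)
# implementing Eq. (42) (see the Appendix); masses are in GeV and the numerical
# constants below are assumed values.
import numpy as np
from math import gamma
from scipy.integrate import quad

V_HIGGS, M_PLANCK = 246.0, 1.22e19
M_B, M_D, M_TAU = 5.279, 1.870, 1.777
R_D_SM = 0.298

def r_d_prediction(M_F, h_tau, eta, delta=2):
    R_inv = M_F**(1.0 + 2.0 / delta) / M_PLANCK**(2.0 / delta)   # 1/R from M_F^{d+2} = M_P^2 R^{-d}
    m2 = h_tau**2 * V_HIGGS**2 * M_F**2 / (2.0 * M_PLANCK**2)
    S_delta = 2.0 * np.pi**(delta / 2) / gamma(delta / 2)
    prefactor = (h_tau**2 * V_HIGGS**2 * M_PLANCK**(4.0 / delta - 2.0) / M_F**(4.0 / delta)
                 * S_delta * (1.0 / R_inv)**(delta - 2))
    integral, _ = quad(lambda E: E**(delta - 1) / (m2 + E**2) * eta(E),
                       R_inv, M_B - M_D - M_TAU, limit=200)
    return R_D_SM * (1.0 + prefactor * integral)
```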
Fig. 3 shows the \((1\sigma,2\sigma,3\sigma)\) contour plots between \(R_{D}\) and \(R_{D^{*}}\) taken from [3; 19; 20]. The Standard Model predictions are shown as crosses [21; 22; 23; 24]. Including the \(KK\)-neutrino contributions to \(R_{D}\) and \(R_{D^{*}}\), four pairs of points \((R_{D},R_{D^{*}})\) are plotted, corresponding to Yukawa couplings \(h_{\tau}=1,5,8,10\) with fixed \(M_{F}=110\) TeV and \(M_{F}=128\) TeV, derived from the lower bounds [9] of the neutrino fits for normal and inverted ordering, respectively. It can be observed that as the Yukawa coupling increases, the contributions from KK neutrinos increase, and as a result the predictions of \(R_{D^{(*)}}\) move closer to the experimental central values of the new world average [3].
In the case of normal ordering, the two points corresponding to \(h_{\tau}=5\) and \(h_{\tau}=8\) lie within \(3\sigma\) of the world average, the point with \(h_{\tau}=1\) lies outside \(3\sigma\), and the point with \(h_{\tau}=10\) lies within \(2\sigma\). Moreover, in inverted ordering the points with \(h_{\tau}=8\) and \(h_{\tau}=10\) lie within \(3\sigma\), and the remaining two points lie outside \(3\sigma\). With regard to the most recent LHCb results shown in the lower panel, the global picture of the combined world average does not change.
Figure 2: The plot of \(M_{F}\) vs \(R_{D^{(*)}}\) for a fixed Yukawa coupling strength \(h_{\tau}=5\). The solid lines represent the \(R_{D}\) while the dashed lines represent the \(R_{D^{*}}\).
Figure 3: The upper panel shows the \((1\sigma,2\sigma,3\sigma)\) contour plots of the experimental results of \(R_{D}\) and \(R_{D^{*}}\) of the world average (red) [3] in mid-autumn 2022, [19] HFLAV 2021 average results (dashed orange) and with [20](dashed purple). The Standard Model (SM) predictions are denoted by the crosses [21; 22; 23; 24]. The pairs of points \((R_{D},R_{D^{*}})\) in normal (inverted) ordering are shown in red squares (green triangles) with Yukawa couplings \(h_{\tau}=1,5,8,10\) and fixed \(M_{F}=110(128)\) TeV. The lower panel shows the most updated results of the combined new world average using the most recent results of LHCb [4], and the global picture is almost the same as in upper panel.
## IV Constraints from \(\tau\) decays
Now let's consider constraints for the predictions of \(R_{D}\) and \(R_{D^{*}}\). Since the new contributions come from the \(\tau\) lepton sector only, the most relevant and stringent constraints we consider are the experimental bounds for rare \(\tau\) decays, including \(\tau\to e\gamma\), \(\tau\to\mu\gamma\), \(\tau\to e\mu^{+}\mu^{-}\) and \(\tau\to\mu e^{+}e^{-}\).
After summing up all the KK neutrino modes, the expression of the branching ratios of \(\tau\to l^{{}^{\prime}}\gamma\) is given as [13]
\[Br(\tau\to l^{\prime}\gamma)\approx\frac{\alpha_{w}^{3}\,s_{w}^{2}}{1024\pi^{2}}\frac{m_{\tau}^{4}}{M_{W}^{4}}\frac{m_{\tau}}{\Gamma_{\tau}}\,(s_{L}^{\nu_{l^{\prime}}})^{2}\,(s_{L}^{\nu_{\tau}})^{2}\,, \tag{25}\]
where \(M_{W}\) is the mass of the \(W\)-boson, \(\alpha_{w}=\frac{g_{w}^{2}}{4\pi}\) with \(g_{w}\) being the \(SU(2)_{L}\) coupling strength, \(s_{w}=\sin\theta_{w}\) with \(\theta_{w}\) being the weak mixing angle, and \(\ell^{\prime}=e,\ \mu\). Using the current experimental bounds \(Br_{\rm exp}(\tau\to e\gamma)<3.3\times 10^{-8}\) and \(Br_{\rm exp}(\tau\to\mu\gamma)<4.2\times 10^{-8}\)[25] at 90% confidence level (CL), together with \(\Gamma_{\tau}=2.23\times 10^{-12}\) GeV obtained from the mean lifetime of the \(\tau\) lepton [25], one obtains the lower limits
\[M_{F}>\begin{cases}67\,{\rm TeV}&\text{for}\ \ \tau\to e\gamma\,,\\ 63\,{\rm TeV}&\text{for}\ \ \tau\to\mu\gamma\,,\end{cases} \tag{26}\]
by setting \(h_{e}=h_{\mu}=1\) and \(h_{\tau}=5\). The next constraints come from the three-body decays of the \(\tau\) lepton, namely \(\tau\to e\mu\mu\) and \(\tau\to\mu ee\). The corresponding branching ratios are given by [13]
\[Br(\tau\to e\mu\mu) =\frac{\alpha_{w}^{4}}{98304}\frac{m_{\tau}^{4}}{M_{W}^{4}}\frac{ M_{F}^{4}}{M_{W}^{4}}d_{\delta}^{2}(s_{L}^{\nu_{\tau}})^{2}(s_{L}^{\nu_{\mu}})^{ 2}\Bigg{\{}(s_{L}^{\nu_{\mu}})^{4}+2(1-2s_{w}^{2})(s_{L}^{\nu_{\mu}})^{4} \big{[}\sum_{l=e,\mu,\tau}\,(s_{L}^{\nu_{l}})^{2}\big{]}\] \[+8s_{w}^{4}\Big{[}\sum_{l=e,\mu,\tau}(s_{L}^{\nu_{l}})^{2}\big{]} ^{2}\Bigg{\}}\,, \tag{27}\]
\[Br(\tau\to\mu ee) =\frac{\alpha_{w}^{4}}{98304}\frac{m_{\tau}^{4}}{M_{W}^{4}}\frac{ M_{F}^{4}}{M_{W}^{4}}d_{\delta}^{2}(s_{L}^{\nu_{\tau}})^{2}(s_{L}^{\nu_{\mu}})^{ 2}\Bigg{\{}(s_{L}^{\nu_{e}})^{4}+2(1-2s_{w}^{2})(s_{L}^{\nu_{e}})^{4}\big{[} \sum_{l=e,\mu,\tau}\,(s_{L}^{\nu_{l}})^{2}\big{]}\] \[+8s_{w}^{4}\Big{[}\sum_{l=e,\mu,\tau}(s_{L}^{\nu_{l}})^{2}\big{]} ^{2}\Bigg{\}}\,, \tag{28}\]
where \(d_{\delta}=d_{2}=\frac{\pi^{2}}{12\ln^{2}(M_{P}/M_{F})}\lesssim 1\) is a dimension-dependent factor. The current experimental
bounds \(Br(\tau\to e\mu\mu)<2.7\times 10^{-8}\) and \(Br(\tau\to\mu ee)<1.8\times 10^{-8}\)[25] give
\[M_{F}>\begin{cases}54\,\mathrm{TeV}&\text{for}\ \ \tau\to e\mu\mu\,,\\ 60\,\mathrm{TeV}&\text{for}\ \ \tau\to\mu ee\,,\end{cases} \tag{29}\]
with \(h_{\tau}=5\) and \(h_{e}=h_{\mu}=1\). The last constraints are from the fittings of various neutrino experiments. It is shown that the upper bounds on the size of the extra dimension should be smaller than \(0.2\ \mu\)m and \(0.1\ \mu\)m at \(90\ \%\) C.L. for normal and inverted ordering, respectively [9].
The corresponding lower limits of fundamental scale for \(\delta=2\) are given as
\[M_{F}>110\,(128)\,\mathrm{TeV}\,, \tag{30}\]
for normal ordering (inverted ordering). Fig. 4 summarises the \(R_{D^{(*)}}\) results for \(\delta=2\) with Yukawa coupling \(h_{\tau}=5\), together with the experimental bounds on \(M_{F}\) from \(Br(\tau\to e\mu\mu)\), \(Br(\tau\to\mu ee)\), \(Br(\tau\to\mu\gamma)\), \(Br(\tau\to e\gamma)\) and neutrino oscillations. The yellow bands mark the \(1\sigma\), \(2\sigma\) and \(3\sigma\) regions of \(R_{D^{(*)}}\). The horizontal dashed lines give the central values and the \(1\sigma\), \(2\sigma\) and \(3\sigma\) boundaries of \(R_{D^{(*)}}^{exp}\). The most stringent bound comes from the limits on the size of the extra dimensions for normal and inverted ordering in [9]. As determined in (30), these lower limits correspond to \(R_{D}=0.304\,(0.301)\) and \(R_{D^{*}}=0.259\,(0.257)\) for normal (inverted) ordering, respectively. All these \(R_{D^{(*)}}\) predictions lie very close to the \(2\sigma\) boundary below the central values of \(R_{D^{(*)}}^{exp}\), for both NO and IO.
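For completeness, the radiative-decay limits of Eq. (26) can be reproduced by scanning \(M_{F}\) in Eq. (25). The sketch below is our own illustration: the mixing uses the \(\delta=2\) form of Eq. (21), and \(\alpha_{w}\simeq 0.034\), \(s_{w}^{2}\simeq 0.231\), the \(W\), \(\tau\) masses and \(\Gamma_{\tau}\) are assumed numerical inputs.

```python
# Minimal sketch: lower bound on M_F from tau -> l' gamma, Eq. (25), scanning M_F
# until the predicted branching ratio falls below the experimental limit.
import numpy as np

V_HIGGS, M_PLANCK = 246.0, 1.22e19
ALPHA_W, SW2 = 0.034, 0.231
M_W, M_TAU, GAMMA_TAU = 80.4, 1.777, 2.23e-12    # GeV

def mixing2(M_F, h):                              # (s_L)^2 for delta = 2, Eq. (21)
    return np.pi * h**2 * V_HIGGS**2 / M_F**2 * np.log(M_PLANCK**2 / M_F**2)

def br_tau_lgamma(M_F, h_lp=1.0, h_tau=5.0):      # Eq. (25)
    return (ALPHA_W**3 * SW2 / (1024 * np.pi**2) * M_TAU**4 / M_W**4
            * M_TAU / GAMMA_TAU * mixing2(M_F, h_lp) * mixing2(M_F, h_tau))

for br_limit, mode in [(3.3e-8, "tau -> e gamma"), (4.2e-8, "tau -> mu gamma")]:
    M_F = next(m for m in np.linspace(10e3, 300e3, 20000) if br_tau_lgamma(m) < br_limit)
    print(mode, M_F / 1e3, "TeV")
```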
## V Conclusion
A possible violation of lepton flavor universality can be found in anomalies involving rare B meson decays. This is a promising hint of physics beyond the SM. A newly calculated world average of the data from the BaBar, Belle, and LHCb collaborations again strongly supports lepton flavor universality violation in the \(b\to c\tau\bar{\nu}_{\tau}\) transition. We show in this work that it is possible to explain these anomalies in the extra-dimensional framework, where the Planck scale \(M_{P}\) is lowered to the fundamental scale \(M_{F}\). By introducing three right-handed neutrinos propagating in the bulk, the contributions from their corresponding KK neutrino modes after compactification give a plausible description of the anomalies through mixing with the active neutrinos. The central values \(R_{D}^{exp}=0.356\)
Figure 4: The left (right) panel of Fig. 4(a) and Fig. 4(b) is the plot of \(M_{F}\) vs \(R_{D^{(*)}}\) for \(\delta=2\) with Yukawa coupling \(h_{\tau}=5\) together with the following constraints: (i.) \(\text{Br}(\tau\to e\mu\mu)<2.7\times 10^{-8}\) (dashed cyan), (ii.) \(\text{Br}(\tau\rightarrow\mu ee)<1.8\times 10^{-8}\) (dashed brown), (iii.) \(\text{Br}(\tau\rightarrow\mu\gamma)<4.2\times 10^{-8}\) (dashed green), (iv.) \(\text{Br}(\tau\to e\gamma)<3.3\times 10^{-8}\) (dashed orange) [25], and (v.) neutrino bounds (solid purple) [9]. The yellow bands give \(1\sigma\), \(2\sigma\) and \(3\sigma\) regions of \(R_{D^{(*)}}^{exp}\). The dashed black lines determine the central values of \(R_{D^{(*)}}^{exp}\), while dashed blue and red lines are the boundaries of \(1\sigma\) and \(2\sigma\), and \(3\sigma\) regions of \(R_{D^{(*)}}^{exp}\). The data we used here is the most updated world average [4].
and \(R_{D^{*}}^{exp}=0.284\) rule out the cases \(\delta=3,4,5,\) and \(6\), since the required values of \(M_{F}\) lie below the bounds from LHC searches. As a result, we only considered the case of \(\delta=2\) extra dimensions. The most severe bounds from neutrino experiments on the size of the large extra dimension are \(R<0.2\,\mu m\) and \(R<0.1\,\mu m\) for NO and IO, respectively. To satisfy these bounds, the lower limits on the fundamental scale \(M_{F}\) must be 110 TeV and 128 TeV for NO and IO, respectively. With Yukawa coupling strength \(h_{\tau}=5\), the predictions for \(R_{D}\) and \(R_{D^{*}}\) at the corresponding lower limits of \(M_{F}\) from neutrino experiments are 0.304 (0.301) and 0.259 (0.257), near the boundary of the \(2\sigma\) contour, for NO (IO). There is thus a tension between the central values of \(R_{D^{(*)}}^{exp}\) and the lower bounds from neutrino experiments. Future measurements of \(R_{D^{(*)}}^{exp}\) will exclude this extra-dimensional model with a right-handed neutrino propagating in the bulk, if the central values stay.
## Acknowledgment
We would like to acknowledge the support of National Center for Theoretical Sciences (NCTS). This work was supported in part by the National Science and Technology Council (NSTC) of Taiwan under Grant No.MOST 110-2112-M-003-003-, 111-2112-M-003-006 and 111-2811-M-003-025-.
## Appendix
The three-body phase-space integral of \(B\to D\tau\bar{\nu}_{\tau}^{(n)KK}\) is given as
\[\int_{y_{min}^{(n)}}^{y_{max}}\int_{x_{min}^{(n)}}^{x_{max}^{(n)}}(x+y+s)(t^{(n )}-x-y)dxdy\,, \tag{31}\]
where
\[s=-(m_{D^{(*)}}^{2}+m_{\tau}^{2}),\quad t^{(n)}=(m_{B}^{2}+m_{KK}^{(n)2})\,. \tag{32}\]
The lower and upper limits in the variable \(y\) are
\[y_{min}^{(n)}=(m_{KK}^{(n)}+m_{\tau})^{2},\quad y_{max}=(m_{B}-m_{D^{(*)}})^{2 }\,, \tag{33}\]
while the lower and upper limits for variable \(x\) can be written in terms of \(y\)
\[x_{max/min}^{(n)}=\frac{-(y^{2}+A^{(n)}-yB^{(n)})\pm\sqrt{(y^{2}+A^{(n)}-yB^{( n)})^{2}-4y(yC^{(n)}+D^{(n)})}}{2y}\,, \tag{34}\]
such that the expressions for \(A^{(n)},B^{(n)},C^{(n)}\), and \(D^{(n)}\) are the following:
\[A^{(n)} = \big{(}m_{\tau}^{2}-m_{KK}^{(n)2}\big{)}\big{(}m_{B}^{2}-m_{D^{(*)} }^{2}\big{)}\,, \tag{35}\] \[B^{(n)} = \big{(}m_{B}^{2}+m_{D^{(*)}}^{2}+m_{KK}^{(n)2}+m_{\tau}^{2}\big{)}\,,\] (36) \[C^{(n)} = \big{(}m_{D^{(*)}}^{2}-m_{KK}^{(n)2}\big{)}\big{(}m_{B}^{2}-m_{ \tau}^{2}\big{)}\,,\] (37) \[D^{(n)} = \big{(}m_{KK}^{(n)2}m_{B}^{2}-m_{D^{(*)}}^{2}m_{\tau}^{2}\big{)} \big{(}m_{B}^{2}-m_{D^{(*)}}^{2}+m_{KK}^{(n)2}-m_{\tau}^{2}\big{)}. \tag{38}\]
Here \(m_{B}\), \(m_{D^{(*)}}\), \(m_{KK}^{(n)}\), and \(m_{\tau}\) are the masses of the B-meson, D-meson, KK mass eigenstates, and tau lepton respectively. When we sum over \(n\), the integration replacement of the discrete sum in Eq. (20) transforms Eq. (31) into
\[S_{\delta}R^{\delta-2}\int_{\frac{1}{R}}^{m_{B}-m_{D^{(*)}}-m_{\tau}}\frac{E^{\delta-1}}{m^{2}+E^{2}}\times\int_{y_{min}}^{y_{max}}\int_{x_{min}}^{x_{max}}(x+y+s)(t-x-y)\,dx\,dy\,dE\,, \tag{39}\]
such that \(m^{2}=\frac{h_{\tau}^{2}v^{2}M_{F}^{2}}{2M_{P}^{2}}\) and every appearance of \(m_{KK}^{(n)}\) in the expression
\[\int_{y_{min}^{(n)}}^{y_{max}}\int_{x_{min}^{(n)}}^{x_{max}^{(n)} }(x+y+s)(t^{(n)}-x-y)dxdy\,, \tag{40}\]
is replaced by variable \(E\). The prediction for \(R_{D^{(*)}}\) with contributions from the KK neutrinos is given by
\[R_{D^{(*)}}=\frac{Br\big{(}B\to D^{(*)}\tau\bar{\nu}_{\tau}\big{)}+Br\big{(}B\to D^{(*)}\tau\bar{\nu}_{\tau}^{KK}\big{)}}{\big{[}Br\big{(}B\to D^{(*)}e\bar{\nu}_{e}\big{)}+Br\big{(}B\to D^{(*)}\mu\bar{\nu}_{\mu}\big{)}\big{]}/2}\approx R_{D^{(*)}}^{SM}\left(1+\frac{\Gamma\big{(}B\to D^{(*)}\tau\bar{\nu}_{\tau}^{KK}\big{)}}{\Gamma^{SM}\big{(}B\to D^{(*)}\tau\bar{\nu}_{\tau}\big{)}}\right) \tag{41}\] \[\approx R_{D^{(*)}}^{SM}\left(1+\sum_{n=1}^{+\infty}\eta_{n}B_{\tau,n}^{*}B_{\tau,n}\right)=R_{D^{(*)}}^{SM}\left(1+\frac{h_{\tau}^{2}v^{2}M_{P}^{\frac{4}{\delta}-2}}{M_{F}^{\frac{4}{\delta}}}S_{\delta}R^{\delta-2}\int_{\frac{1}{R}}^{m_{B}-m_{D^{(*)}}-m_{\tau}}\frac{E^{\delta-1}}{m^{2}+E^{2}}\eta(E)dE\right),\]
where
\[\eta_{n}=\frac{\int_{y_{min}^{(n)}}^{y_{max}}\int_{x_{min}^{(n)}}^ {x_{max}^{(n)}}\big{(}x+y+s\big{)}\big{(}t^{(n)}-x-y\big{)}dxdy}{\int_{y_{min}} ^{y_{max}}\int_{x_{min}}^{x_{max}}\big{(}x+y+s\big{)}\big{(}t-x-y\big{)}dxdy \,\,\Big{|}_{SM}},\,\,\,\,\eta(E)=\eta_{n}\,\,\Big{|}_{m_{KK}^{(n)}=E}\,. \tag{42}\]
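For readers who want to evaluate \(\eta(E)\) directly, the nested integrals of Eqs. (31)-(38) can be computed numerically. The sketch below is our own illustration; it takes the SM denominator of Eq. (42) as the \(m_{KK}\to 0\) limit, uses assumed masses in GeV, and shows only the \(B\to D\) channel.

```python
# Minimal sketch of eta(E) in Eq. (42): the phase-space integral of Eqs. (31)-(38)
# with m_KK -> E in the numerator and m_KK -> 0 (massless SM neutrino) in the
# denominator.
import numpy as np
from scipy.integrate import dblquad

M_B, M_D, M_TAU = 5.279, 1.870, 1.777

def phase_space(m_kk, m_meson=M_D):
    s = -(m_meson**2 + M_TAU**2)
    t = M_B**2 + m_kk**2
    A = (M_TAU**2 - m_kk**2) * (M_B**2 - m_meson**2)
    B = M_B**2 + m_meson**2 + m_kk**2 + M_TAU**2
    C = (m_meson**2 - m_kk**2) * (M_B**2 - M_TAU**2)
    D = (m_kk**2 * M_B**2 - m_meson**2 * M_TAU**2) * (M_B**2 - m_meson**2 + m_kk**2 - M_TAU**2)

    def x_lim(y, sign):
        b = y**2 + A - y * B
        disc = max(b**2 - 4 * y * (y * C + D), 0.0)     # clip tiny negative round-off
        return (-b + sign * np.sqrt(disc)) / (2 * y)

    val, _ = dblquad(lambda x, y: (x + y + s) * (t - x - y),
                     (m_kk + M_TAU)**2, (M_B - m_meson)**2,      # y-range, Eq. (33)
                     lambda y: x_lim(y, -1.0), lambda y: x_lim(y, +1.0))
    return val

def eta(E):
    return phase_space(E) / phase_space(0.0)

print(eta(0.0), eta(1.0))    # eta -> 1 as E -> 0 and shrinks as E grows
```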
|
2304.11332
|
Input Augmentation with SAM: Boosting Medical Image Segmentation with
Segmentation Foundation Model
|
The Segment Anything Model (SAM) is a recently developed large model for
general-purpose segmentation for computer vision tasks. SAM was trained using
11 million images with over 1 billion masks and can produce segmentation
results for a wide range of objects in natural scene images. SAM can be viewed
as a general perception model for segmentation (partitioning images into
semantically meaningful regions). Thus, how to utilize such a large foundation
model for medical image segmentation is an emerging research target. This paper
shows that although SAM does not immediately give high-quality segmentation for
medical image data, its generated masks, features, and stability scores are
useful for building and training better medical image segmentation models. In
particular, we demonstrate how to use SAM to augment image input for
commonly-used medical image segmentation models (e.g., U-Net). Experiments on
three segmentation tasks show the effectiveness of our proposed SAMAug method.
The code is available at \url{https://github.com/yizhezhang2000/SAMAug}.
|
Yizhe Zhang, Tao Zhou, Shuo Wang, Peixian Liang, Danny Z. Chen
|
2023-04-22T07:11:53Z
|
http://arxiv.org/abs/2304.11332v2
|
# Input Augmentation with SAM: Boosting Medical Image Segmentation with Segmentation Foundation Model
###### Abstract
The Segment Anything Model (SAM) is a recently developed large model for general-purpose segmentation for computer vision tasks. SAM was trained using 11 million images with over 1 billion masks and can produce segmentation results for a wide range of objects in natural scene images. SAM can be viewed as a general perception model for segmentation (partitioning images into semantically meaningful regions). Thus, how to utilize such a large foundation model for medical image segmentation is an emerging research target. This paper shows that although SAM does not immediately give high-quality segmentation for medical image data, its generated masks, features, and stability scores are useful for building and training better medical image segmentation models. In particular, we demonstrate how to use SAM to augment image input for commonly-used medical image segmentation models (e.g., U-Net). Experiments on three segmentation tasks show the effectiveness of our proposed SAMAug method. The code is available at [https://github.com/yizhezhang2000/SAMAug](https://github.com/yizhezhang2000/SAMAug).
## 1 Introduction
The Segment Anything Model (SAM) [11] is a remarkable recent advance in foundation models for computer vision tasks. SAM was trained using 11 million images and over 1 billion masks. Despite its strong capability in producing segmentation for a wide variety of objects, several studies [9, 4, 32] showed that SAM is not powerful enough for segmentation tasks that require domain expert knowledge (e.g., medical image segmentation).
For a given medical image segmentation task with image and annotation pairs, we aim to build and train a medical image segmentation model, denoted by \(\mathcal{M}\), on top of the segmentation foundation model SAM. We propose a new
method called SAMAug that directly utilizes the segmentation masks (with stability scores) generated by SAM to augment the raw inputs of the medical image segmentation model \(\mathcal{M}\). The input augmentation is performed by a fusion function. The inference process (with SAMAug) for a given image is illustrated in Fig. 1. The task-specific medical image segmentation model \(\mathcal{M}\) is trainable using a specific dataset4 (e.g., MoNuSeg [12]). The parameters of SAM remain fixed, the fusion (augmentation) function is a parameter-free module, and the learning process aims to update the parameters of \(\mathcal{M}\) with respect to the given foundation model SAM, the fusion function, and the training data.
Footnote 4: SAMAug performs on all images, including training images and testing images.
Our main contributions can be summarized as follows. (1) We identify that the emerging segmentation foundation model SAM can provide attention (prior) maps for downstream segmentation tasks. (2) With a simple and novel method (SAMAug), we combine segmentation outputs of SAM with raw image inputs, generating SAM-augmented input images for building downstream medical image segmentation models. (3) We conduct comprehensive experiments to demonstrate that our proposed method is effective for both CNN and Transformer segmentation models in three medical image segmentation tasks.
## 2 Related Work
**Data Augmentation.** Data augmentation (DA) has been widely used in training medical image segmentation models [31, 3]. A main aim of DA is to synthesize new views of existing samples in training data. Our SAMAug can be viewed as a type of DA technique. Unlike previous DA methods which often use hand-designed transformations (e.g., rotation, cropping), SAMAug utilizes a segmentation foundation model to augment raw images, aiming to impose semantically useful structures to the input of a medical image segmentation model.
**Image Enhancement.** From the image enhancement (IE) view point, SAMAug enhances images by adding semantic structures from a segmentation foundation model. A critical difference between SAMAug and the previous enhancement methods [17, 5] is that traditional IE often works at a low level, e.g., de-blurring
Figure 1: Input augmentation with SAM for boosting medical image segmentation.
and noise reduction, and the purpose of enhancement is to reconstruct and recover. In contrast, SAMAug aims to add high-level structures to raw images, providing better semantics for the subsequent medical image segmentation model.
**Recent SAM-related Methods.** Since the introduction of SAM, many attempts have been made to understand and utilize SAM for medical image analysis (e.g., [7, 32, 13, 25, 30, 27, 26, 15, 6, 28]). Recent work has shown that SAM alone, without further fine-tuning and/or adaptation, often delivers unsatisfactory results for medical image segmentation tasks [7, 32]. In order to utilize SAM more effectively, Ma et al. [13] proposed to fine-tune SAM using labeled images. Wu et al. [25] proposed to add additional layers to adapt SAM for a medical image segmentation task. Compared with these fine-tuning and adaptation methods, our method is more efficient in computation and memory costs during model training. At test time, these fine-tuning, adapting, and augmentation methods all require performing forward propagation of test images through SAM.
## 3 Methodology
In Section 3.1, we describe the two key image representations obtained by applying SAM to a medical image, a segmentation prior map and a boundary prior map. In Section 3.2, we show how to augment a medical image using the two obtained prior maps. In Section 3.3, we present the details of using augmented images in training a medical image segmentation model. Finally, in Section 3.4, we show how to use the trained model in model deployment (model testing).
Figure 2: Visual examples of a raw input image, its segmentation prior map by SAM, boundary prior map by SAM, and SAM-augmented image input (illustrated in Fig. 1). The image sample is from the MonuSeg dataset [12].
### Segmentation and Boundary Prior Maps
In the grid prompt setting, SAM uses a grid prompt to generate segmentation masks for a given image. That is, segmentation masks are generated at all plausible locations in the image. The generated segmentation masks are then stored in a list. For each segmentation mask in the list, we draw the mask on a newly created segmentation prior map using the value suggested by the mask's corresponding stability score (generated by SAM). In addition to the segmentation prior map, we further generate a boundary prior map according to the masks provided by SAM. We draw the exterior boundary of each segmentation mask in the mask list and put all the boundaries together to form a boundary prior map. For a given image \(x\), we generate two prior maps, \(\text{prior}_{\text{seg}}\) and \(\text{prior}_{\text{boundary}}\), using the process discussed above. In Fig. 2 (the second and third columns), we give visual examples of these two prior maps thus generated.
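A minimal sketch of this construction is given below. It assumes the official `segment_anything` package and its automatic (grid-prompt) mask generator, a locally available ViT-H checkpoint, and OpenCV for the boundary extraction; how overlapping masks are merged (here, by taking the maximum score) is our assumption, since the details are not spelled out above.

```python
# Minimal sketch of Section 3.1: run SAM with the automatic mask generator and
# accumulate a stability-score-weighted segmentation prior and a boundary prior.
# Checkpoint path and the max-merge of overlapping masks are assumptions.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

def sam_prior_maps(image):                                  # image: HxWx3 uint8, RGB
    masks = mask_generator.generate(image)
    prior_seg = np.zeros(image.shape[:2], dtype=np.float32)
    boundary = np.zeros(image.shape[:2], dtype=np.uint8)
    for m in masks:
        seg = m["segmentation"].astype(np.uint8)            # binary mask from SAM
        prior_seg = np.maximum(prior_seg, seg * m["stability_score"])
        contours, _ = cv2.findContours(seg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        cv2.drawContours(boundary, contours, -1, color=1, thickness=1)
    return prior_seg, boundary.astype(np.float32)
```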
### Augmenting Input Images
With the prior maps generated, our next step is to augment the input image \(x\) with the generated prior maps. We choose a simple method for this augmentation: adding the prior maps to the raw image. Note that many medical image segmentation tasks can be reduced to a three-class segmentation task in which the 1st class corresponds to the background, the 2nd class corresponds to the regions of interest (ROIs), and the 3rd class corresponds to the boundaries between the ROIs and background. We add the segmentation prior map to the second channel of the raw image and the boundary prior map to the third channel of the raw image. If the raw image is in gray-scale, we create a 3-channel image with the first channel consisting of the gray-scale raw image, the second channel consisting of its segmentation prior map (only), and the third channel consisting of its boundary prior map (only). For each image \(x\) in the training set, we generate its augmented version \(x^{aug}=\text{Aug}(\text{prior}_{\text{seg}},\text{prior}_{\text{boundary}},x)\). Fig. 2 (the fourth column) gives a visual example of the SAM-augmented image input.
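A corresponding sketch of the fusion step is shown below; the value-range handling (scaling the raw image to \([0,1]\) and clipping after addition) is our assumption.

```python
# Minimal sketch of the augmentation in Section 3.2: add the SAM priors to the
# second and third channels of the raw image; gray-scale inputs become the first
# channel of a new 3-channel image.  Normalisation details are assumptions.
import numpy as np

def sam_augment(image, prior_seg, prior_boundary):
    img = image.astype(np.float32) / 255.0
    if img.ndim == 2:                                   # gray-scale input
        return np.stack([img, prior_seg, prior_boundary], axis=-1)
    aug = img.copy()
    aug[..., 1] = np.clip(aug[..., 1] + prior_seg, 0.0, 1.0)
    aug[..., 2] = np.clip(aug[..., 2] + prior_boundary, 0.0, 1.0)
    return aug
```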
### Model Training with SAM-Augmented Images
With the input augmentation on each image sample in the training set, we obtain a new augmented training set \(\{(x_{1}^{aug},y_{1}),(x_{2}^{aug},y_{2}),\ldots,(x_{n}^{aug},y_{n})\}\), where \(x_{i}^{aug}\in\mathbb{R}^{w\times h\times 3}\), \(y_{i}\in\{0,1\}^{w\times h\times C}\) is the annotation of the input image \(x_{i}\), and \(C\) is the number of classes for the segmentation task. A common medical image segmentation model \(\mathcal{M}\) (e.g., a U-Net) can be directly utilized for learning from the augmented training set. A simple way to learn from SAM-augmented images is to use the following learning objective with respect to the parameters of \(\mathcal{M}\):
\[\sum_{i=1}^{n}loss(\mathcal{M}(x_{i}^{aug}),y_{i}). \tag{1}\]
The above objective only uses SAM-augmented images for model training. Consequently, in model testing, the trained model accepts only images augmented by SAM. In situations where SAM fails to give plausible prior maps, we consider training a segmentation model using both raw images and images with SAM augmentation. The new learning objective is to minimize the following objective with respect to the parameters of \(\mathcal{M}\):
\[\sum_{i=1}^{n}\beta loss(\mathcal{M}(x_{i}),y_{i})+\lambda loss(\mathcal{M}(x_{i}^ {aug}),y_{i}), \tag{2}\]
where \(\beta\) and \(\lambda\) control the importance of the training loss for samples with raw images and samples with augmented images. When setting \(\beta=0\) and \(\lambda=1\), the objective function in Eq. (2) is reduced to Eq. (1). By default, we set both \(\beta\) and \(\lambda\) equal to 1. The spatial cross-entropy loss or Dice loss can be used for constructing the loss function in Eq. (1) and Eq. (2). An SGD-based optimizer (e.g., Adam [10]) can be applied to reduce the values of the loss function.
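A minimal PyTorch sketch of one epoch under the objective of Eq. (2) is given below; the data loader yielding `(x, x_aug, y)` batches with integer-class label maps and the segmentation model are assumed to exist.

```python
# Minimal sketch of Eq. (2): joint training on raw and SAM-augmented inputs.
# beta = 0, lam = 1 recovers Eq. (1).  Cross-entropy follows the setup of Sec. 4.1.
import torch
import torch.nn.functional as F

def train_epoch(model, loader, optimizer, beta=1.0, lam=1.0, device="cuda"):
    model.train()
    for x, x_aug, y in loader:                        # y: (N, H, W) class indices
        x, x_aug, y = x.to(device), x_aug.to(device), y.to(device)
        loss = (beta * F.cross_entropy(model(x), y)
                + lam * F.cross_entropy(model(x_aug), y))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```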
### Model Deployment with SAM-Augmented Images
When the segmentation model is trained using only SAM-augmented images, the model deployment (testing) requires the input also to be SAM-augmented images. The model deployment can be written as:
\[\hat{y}=\tau(\mathcal{M}(x^{aug})), \tag{3}\]
where \(\tau\) is an output activation function (e.g., a sigmoid function, a softmax function), and \(x^{aug}\) is a SAM-augmented image (as described in Section 3.2). When the segmentation model \(\mathcal{M}\) is trained using both raw images and SAM-augmented images, we identify new opportunities in inference time to fully realize the potential of the trained model. A simple way of using \(\mathcal{M}\) would be to apply sample inference twice for each test sample: The first time inference uses the raw image \(x\) as input and the second time inference uses its SAM augmented image as input. The final segmentation output can be generated by the average ensemble of the two outputs. Formally, this inference process can be written as:
\[\hat{y}=\tau(\mathcal{M}(x)+\mathcal{M}(x^{aug})). \tag{4}\]
Another way of utilizing the two output candidates \(\mathcal{M}(x)\) and \(\mathcal{M}(x^{aug})\) is to select a plausible segmentation output from these two candidates:
\[\hat{y}=\tau(\mathcal{M}(x^{*})), \tag{5}\]
where \(x^{*}\) is obtained via solving the following optimization:
\[x^{*}=\text{argmin}_{x^{\prime}\in\{x,x^{aug}\}}Entropy(\tau(\mathcal{M}(x^{ \prime}))). \tag{6}\]
Namely, we choose an input version out of the two input candidates (\(x\) and \(x^{aug}\)) according to the entropy (prediction certainty) of the segmentation output. Segmentation output with a lower entropy means that the model is more certain in its prediction, and a higher certainty in prediction often positively correlates to higher segmentation accuracy [22].
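The two deployment options of Eqs. (4)-(6) can be sketched as follows (our illustration); the per-image entropy is taken here as the mean pixel-wise entropy of the softmax output.

```python
# Minimal sketch of model deployment: average ensemble (Eq. (4)) or entropy-based
# selection between the raw and SAM-augmented inputs (Eqs. (5)-(6)).
import torch

@torch.no_grad()
def predict(model, x, x_aug, mode="entropy"):
    logits_raw, logits_aug = model(x), model(x_aug)
    if mode == "ensemble":                                        # Eq. (4)
        return torch.softmax(logits_raw + logits_aug, dim=1)
    probs = [torch.softmax(l, dim=1) for l in (logits_raw, logits_aug)]
    entropies = [(-p * torch.log(p + 1e-8)).sum(dim=1).mean() for p in probs]
    return probs[0] if entropies[0] < entropies[1] else probs[1]  # Eqs. (5)-(6)
```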
## 4 Experiments and Results
### Datasets and Setups
We perform experiments on the Polyp [32], MoNuSeg [12], and GlaS [19] benchmarks to demonstrate the effectiveness of our proposed SAMAug method. For the polyp segmentation experiments, we follow the training setup used in training the state-of-the-art (SOTA) model HSNet [29]5. For the MoNuSeg and GlaS segmentation, the training of a medical image segmentation model uses the Adam optimizer [10], with batch size = 8, image cropping window size = 256 \(\times\) 256, and learning rate = \(5e-4\). The total number of training iterations is 50K. The spatial cross entropy loss is used for the model training.
Footnote 5: [https://github.com/baiboat/HSNet](https://github.com/baiboat/HSNet)
### Polyp Segmentation on Five Datasets
Automatic polyp segmentation in endoscopic images can help improve the efficiency and accuracy in clinical screenings and tests for gastrointestinal diseases. Many deep learning (DL) based models have been proposed for robust and automatic segmentation of polyps. Here, we utilize the SOTA model HSNet [29] for evaluating our proposed SAMAug method. We use the objective function described in Eq. (2) in model training. In test time, we use the model deployment strategy given in Eq. (6). In Fig. 3, we show the segmentation performance
Figure 3: Polyp segmentation results of the vanilla HSNet and SAMAug-enhanced HSNet.
(in Dice score) of the vanilla HSNet and SAMAug-enhanced HSNet on the test sets of CVC-300 [21], CVC-ClinicDB [1], Kvasir [8], CVC-ColonDB [20], and ETIS [18]. All the model training sessions were run ten times with different random seeds for reporting the means and standard deviations of the segmentation performance. In Fig. 3, we observe that SAMAug improves HSNet on the CVC-ClinicDB and CVC-ColonDB datasets significantly, and remains at the same level of performance on the other three datasets (all validated by t-test). Furthermore, we give visual result comparisons in Fig. 4.
### Cell Segmentation on the MoNuSeg Dataset
The MoNuSeg dataset [12] was constructed using H&E stained tissue images (at 40x magnification) from the TCGA archive [24]. The training set consists of 30 images with about 22000 cell nuclear annotations. The test set contains 14 images with about 7000 cell nuclear annotations. We use the objective function described in Eq. (1) in model training. In test time, we use the model deployment strategy given in Eq. (3). In Table 1, we show clear advantages of our proposed method in improving segmentation results for the U-Net, P-Net, and Attention U-Net models. AJI (Aggregated Jaccard Index) is a standard segmentation
Figure 4: Visual result comparisons of the vanilla HSNet and SAMAug-enhanced HSNet in polyp segmentation.
evaluation metric6 used on MoNuSeg which evaluates segmentation performance on the object level. F-score evaluates the cell segmentation performance on the pixel level. In addition, we give visual result comparisons in Fig. 5. Note that, although the segmentation generated by SAM (e.g., see the 3rd column of Fig. 5) does not immediately give accurate cell segmentation, SAM provides a general segmentation perceptual prior for the subsequent DL models to generate much more accurate task-specific segmentation results.
Footnote 6: [https://monuseg.grand-challenge.org/Evaluation/](https://monuseg.grand-challenge.org/Evaluation/)
### Gland Segmentation on the GlaS Dataset
The GlaS dataset [19] has 85 training images (37 benign (BN), 48 malignant (MT)), and 60 test images (33 BN, 27 MT) in part A and 20 test images (4 BN, 16 MT) in part B. We use the official evaluation code7 for evaluating segmentation performance. For simplicity, we merge test set part A and test set part B, and perform segmentation evaluation at once for all the samples in the test set. We use the objective function described in Eq. (1) in model training. In test time, we use the model deployment strategy given in Eq. (3). From Table 2, one can see that U-Net with SAMAug augmentation performs considerably better than that without SAMAug augmentation.
Footnote 7: [https://warwick.ac.uk/fac/cross_fac/tia/data/glascontest/evaluation/](https://warwick.ac.uk/fac/cross_fac/tia/data/glascontest/evaluation/)
## 5 Conclusions
In this paper, we proposed a new method, SAMAug, for boosting medical image segmentation that uses the Segment Anything Model (SAM) to augment image input for commonly-used medical image segmentation models. Experiments on three segmentation tasks showed the effectiveness of our proposed method. Future work may consider conducting further research on: (1) designing a more robust and advanced augmentation function; (2) improving the efficiency of applying SAM in the SAMAug scheme; (3) utilizing SAMAug for uncertainty estimations and in other clinically-oriented applications.
|
2305.06567
|
Electron-phonon coupling and non-equilibrium thermal conduction in
ultrafast heating systems
|
The electron-phonon coupling in ultrafast heating systems is studied within
the framework of Boltzmann transport equation (BTE) with coupled electron and
phonon transport. A discrete unified gas kinetic scheme is developed to solve
the BTE, in which the electron/phonon advection, scattering and electron-phonon
interactions are coupled together within one time step by solving the BTE again
at the cell interface. Numerical results show that the present scheme can
correctly predict the electron-phonon coupling constant, and is in excellent
agreement with typical two-temperature model (TTM) and experimental results in
existing literatures and our performed time-domain thermoreflectance technique.
It can also capture the ballistic or thermal wave effects when the
characteristic length/time is comparable to or smaller than the mean free
path/relaxation time where the TTM fails. Finally, the electron-phonon coupling
in transient thermal grating geometry and Au/Pt bilayer metals with interfacial
thermal resistance is simulated and discussed. For the former, heat flow from
phonon to electron is predicted in both the ballistic and diffusive regimes.
For the latter, the reflected signal increases in the early tens of picoseconds
and then decreases with time after the heat source is removed.
|
Chuang Zhang, Rulei Guo, Meng Lian, Junichiro Shiomi
|
2023-05-11T04:53:59Z
|
http://arxiv.org/abs/2305.06567v2
|
# Electron-phonon coupling and non-equilibrium thermal conduction in ultrafast heating systems
###### Abstract
The electron-phonon coupling in ultrafast heating systems is studied within the framework of the Boltzmann transport equation (BTE) with coupled electron and phonon transport. To directly solve the BTE, a discrete unified gas kinetic scheme is developed, in which the electron/phonon advection, scattering and electron-phonon interactions are coupled together within one time step by solving the BTE at the cell interface. Numerical results show that the present scheme can correctly predict the electron-phonon coupling constant, and is in excellent agreement with the typical two-temperature model (TTM) and experimental results in the existing literature and our home-made time-domain thermoreflectance measurements of the ultrafast laser heating problem. In the transient thermal grating (TTG) geometry, the present scheme not only recovers the TTM in the diffusive regime, but also captures the ballistic and thermal wave effects when the characteristic length is comparable to or smaller than the mean free path, where the TTM fails. More interestingly, an unexpected heat flow from phonon to electron is predicted in both the ballistic and diffusive regimes in the TTG geometry. It results from the competition of the thermal diffusivity and electron-phonon coupling in the diffusive regime, and in the ballistic regime it results from the competition of the phonon/electron advection and electron-phonon coupling.
## I Introduction
Ultrafast laser heating is playing a more and more important role in industrial manufacturing of micro/nano electronic devices, medical detection and the exploration of frontier basic science, which involves the interactions between various energy-carrying particles [1; 2; 3; 4]. One of the key factors in multiscale energy transfer and conversion is the coupling between electron and phonon (lattice vibration) in solid materials [5; 6; 7; 8; 9]. The electron-phonon coupling at the microscopic level can explain many macroscopic thermal and electrical transport phenomena, including thermoelectric conversion [10; 11], electrothermal power consumption and transfer in semiconductor chips [12; 13; 14] and so on.
Over the past decades, many macroscopic phenomenological heat conduction models have been developed to describe the electron-phonon coupling [15; 16; 9; 17]. One of the most widely used theoretical models is the two-temperature model (TTM) [18; 19; 20; 21] proposed by Anisimov \(et~al.\)[22]. In this empirical model, the electron temperature \(T_{e}\) and phonon temperature \(T_{p}\) are introduced individually and their interactions are represented by a single phenomenological electron-phonon coupling constant \(G\). Although it is simple, it is widely used in ultrafast pump-probe experiments for measuring the electron-phonon coupling constant in metals [23; 24; 25; 26; 27; 28]. Fourier's law is used to express the evolution of electrons and phonons in space and time, so that the TTM is a parabolic two-step heat conduction equation with infinite heat propagation speed. In order to remove this non-physical assumption, a hyperbolic two-temperature model has been developed by introducing a delay term of the derivative of heat flow with respect to time for electron or phonon transport [29; 19], which is akin to an extension of the Cattaneo equation [30]. In addition, the non-thermal lattice model [16] or multitemperature model [17; 29] has been developed, in which phonons with different modes or branches are not in local thermal equilibrium and many lattice temperatures and electron-phonon coupling constants are introduced to describe the complex physical interactions.
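To make the structure of the TTM concrete, a one-dimensional explicit finite-difference sketch is given below; it is our own illustration with periodic boundaries and placeholder material parameters, not the scheme developed in this paper.

```python
# Minimal 1D sketch of the two-temperature model:
#   C_e dT_e/dt = k_e d2T_e/dx2 - G (T_e - T_p) + S,
#   C_p dT_p/dt = k_p d2T_p/dx2 + G (T_e - T_p).
# Explicit Euler step, periodic boundaries via np.roll; parameters are placeholders.
import numpy as np

def ttm_step(Te, Tp, S, dx, dt, Ce, Cp, ke, kp, G):
    lap = lambda T: (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
    Te_new = Te + dt / Ce * (ke * lap(Te) - G * (Te - Tp) + S)
    Tp_new = Tp + dt / Cp * (kp * lap(Tp) + G * (Te - Tp))
    return Te_new, Tp_new
```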
Although the above phenomenological heat conduction models have been very successful in describing ultrafast energy exchange, they are not suitable for capturing highly non-equilibrium situations, for example when the system characteristic length is comparable to or smaller than the electron/phonon mean free path, where diffusive transport breaks down [9; 31]. To study the multiscale energy transfer in ultrafast heating systems, the time-dependent Boltzmann transport equation (BTE) [32; 33; 34; 35; 36] with coupled electron and phonon thermal transport becomes an optimal compromise between efficiency and accuracy, which can capture diffusive and ballistic thermal transport simultaneously. Instead of directly describing
|