2305.10461
Singular Value Inequalities related to PPT Blocks
In this article, we introduce several singular value and norm inequalities comparing the main diagonal and the off-diagonal components of a two by two PPT block. Some applications are given to obtain a new set of inequalities, some of which generalize and improve many well-known singular value and norm inequalities in the literature.
Mohammad Alakhrass
2023-05-17T08:08:57Z
http://arxiv.org/abs/2305.10461v1
# Singular value inequalities related to positive partial transpose blocks

###### Abstract.

In this article, we introduce several singular value and norm inequalities comparing the main diagonal and the off-diagonal components of a \(2\times 2\) PPT block. Some applications are given to obtain a new set of inequalities, some of which generalize and improve many well-known singular value and norm inequalities in the literature.

2010 Mathematics Subject Classification: 15A18, 15A42, 15A45, 15A60

**keywords:** Block matrices; positive partial transpose matrices; singular value inequalities; norm inequalities.

## 1. Introduction

Let \(\mathbb{M}_{n}\) be the algebra of all \(n\times n\) complex matrices. For \(X\in\mathbb{M}_{n}\), the singular values of \(X\) are the eigenvalues of the positive semidefinite matrix \(|X|=(X^{*}X)^{1/2}\). They are denoted by \(s_{j}(X),j=1,2,...,n\), and are arranged so that \(s_{1}(X)\geq s_{2}(X)\geq...\geq s_{n}(X)\). For \(A,B,X\in\mathbb{M}_{n}\), let \(H\) be the \(2\times 2\) block matrix defined as follows:
\[H=\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right).\]
It is well known that \(H\geq 0\) if and only if
\[B-X^{*}A^{-1}X\geq 0, \tag{1}\]
provided that \(A\) is strictly positive. Notice that the notation \(X\geq 0\) (resp. \(X>0\)) means that \(X\) is positive semidefinite (resp. positive definite). For two Hermitian \(X,Y\in\mathbb{M}_{n}\), \(X\leq Y\) means \(Y-X\geq 0\). We remark that \(2\times 2\) blocks are very important in studying matrices in general, and they also play an important role in studying sectorial matrices; see for example [1], [2] and [3].

The partial transpose of the block \(H\) is defined by
\[H^{\tau}=\left(\begin{array}{cc}A&X^{*}\\ X&B\end{array}\right).\]
The block \(H\) is said to be positive partial transpose, or PPT for short, if both \(H\) and \(H^{\tau}\) are positive semidefinite. Therefore, by (1), \(H\) is PPT if and only if
\[B-X^{*}A^{-1}X\geq 0\quad\text{and}\quad B-XA^{-1}X^{*}\geq 0,\]
provided that \(A\) is strictly positive. The class of PPT matrices has been thoroughly studied in the literature; see for example [8, 9, 11, 14, 15, 17] and the references therein.

In this article, we focus on singular value inequalities and unitarily invariant norm inequalities connecting the main diagonal and the off-diagonal of a PPT block. Namely, we show that if \(\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\) is PPT, then for \(t\in[0,1]\)
\[\prod_{j=1}^{k}s_{j}^{2}(X)\leq\prod_{j=1}^{k}s_{j}(A\#_{t}B)s_{j}(A\#_{1-t}B ),\quad k=1,2,...,n.\]
As a consequence of this log-majorization result, we establish the following inequality for any unitarily invariant norm:
\[\||X|^{r}\|^{2}\leq\|(A\#_{t}B)^{r}\|\|(A\#_{1-t}B)^{r}\|,\]
where \(r>0\) and \(t\in[0,1]\). We also show that
\[s_{j}(X)\leq s_{\lfloor\frac{j+1}{2}\rfloor}\left(\frac{(A\#_{t}B)+(A\#_{1-t}B )}{2}\right),\quad j=1,2,...,n,\]
where \(\lfloor x\rfloor\) is the greatest integer \(\leq x\). Several applications are given to obtain new sets of singular value and unitarily invariant norm inequalities, in which we obtain results that generalize and improve many well-known results in the literature.

## 2. Proofs of main results

Before stating the main results, let us recall some important facts about the geometric and weighted geometric mean of two positive matrices.
For positive definite \(X,Y\in\mathbb{M}_{n}\) and \(t\in[0,1]\), the weighted geometric mean of \(X\) and \(Y\) is defined as follows:
\[X\#_{t}Y=X^{1/2}(X^{-1/2}YX^{-1/2})^{t}X^{1/2}.\]
When \(t=\frac{1}{2}\), we drop \(t\) from the above definition, and we simply write \(X\#Y\) and call it the geometric mean of \(X\) and \(Y\). It is well known that
\[X\#_{t}Y\leq(1-t)X+tY. \tag{2}\]
See [7, Chapter 4]. When \(t=\frac{1}{2}\), we have the following characterization of the geometric mean:
\[X\#Y=\max\left\{Z:Z=Z^{*},\left(\begin{array}{cc}X&Z\\ Z&Y\end{array}\right)\geq 0\right\}. \tag{3}\]
See [7, Theorem 4.1.3]. We have for \(k=1,2,...,n\) and \(r>0\)
\[\prod_{j=1}^{k}s_{j}^{r}(X\#_{t}Y) \leq\prod_{j=1}^{k}s_{j}^{r}(e^{(1-t)\log X+t\log Y})\] \[\leq\prod_{j=1}^{k}s_{j}(Y^{rt/2}X^{(1-t)r}Y^{rt/2})\] \[\leq\prod_{j=1}^{k}s_{j}(X^{(1-t)r}Y^{tr}). \tag{4}\]
See [6].

The following lemma is important in our proofs.

**Lemma 2.1**.: _If \(\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\) is PPT, then for every \(t\in[0,1]\) the block \(\left(\begin{array}{cc}A\#_{t}B&X\\ X^{*}&A\#_{1-t}B\end{array}\right)\) is PPT._

Proof.: Since \(\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\) is PPT, it is clear that both
\[\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\quad\text{and}\quad\left(\begin{array}{cc}B&X\\ X^{*}&A\end{array}\right)\]
are positive semidefinite. Without loss of generality, we may assume they are positive definite; otherwise we use the well-known continuity argument. Therefore,
\[X^{*}A^{-1}X\leq B\quad\text{and}\quad X^{*}B^{-1}X\leq A.\]
Observe,
\[X^{*}(A\#_{t}B)^{-1}X =X^{*}(A^{-1}\#_{t}B^{-1})X\] \[=(X^{*}A^{-1}X)\#_{t}(X^{*}B^{-1}X)\] \[\leq B\#_{t}A\quad\text{(by the increasing property of means)}\] \[=A\#_{1-t}B. \tag{5}\]
And so \(A\#_{1-t}B\geq X^{*}(A\#_{t}B)^{-1}X.\) This implies that \(\left(\begin{array}{cc}A\#_{t}B&X\\ X^{*}&A\#_{1-t}B\end{array}\right)\) is positive semidefinite. Similarly, it can be proved that \(\left(\begin{array}{cc}A\#_{t}B&X^{*}\\ X&A\#_{1-t}B\end{array}\right)\) is also positive semidefinite. This completes the proof.

Now, we state the following log-majorization inequalities, which govern the off-diagonal and the main diagonal of a PPT block.

**Theorem 2.1**.: _If \(\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\) is PPT, then for \(k=1,2,...,n\) and for \(t\in[0,1]\)_
\[\prod_{j=1}^{k}s_{j}^{2r}(X) \leq\prod_{j=1}^{k}s_{j}^{r}(A\#_{t}B)s_{j}^{r}(A\#_{1-t}B)\] \[\leq\prod_{j=1}^{k}s_{j}^{r}\left(e^{(1-t)\log A+t\log B}\right) s_{j}^{r}\left(e^{t\log A+(1-t)\log B}\right)\] \[\leq\prod_{j=1}^{k}s_{j}\left(B^{tr/2}A^{(1-t)r}B^{tr/2}\right)s_ {j}\left(A^{tr/2}B^{(1-t)r}A^{tr/2}\right)\] \[\leq\prod_{j=1}^{k}s_{j}\left(A^{(1-t)r}B^{tr}\right)s_{j}\left( A^{tr}B^{(1-t)r}\right).\]

Proof.: Since \(\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\) is PPT, Lemma 2.1 implies that the block \(\left(\begin{array}{cc}A\#_{t}B&X\\ X^{*}&A\#_{1-t}B\end{array}\right)\) is positive semidefinite. Therefore, by [7, page 13],
\[X=(A\#_{t}B)^{1/2}K(A\#_{1-t}B)^{1/2}\quad\text{for some contraction}\quad K.\]
Then,
\[\prod_{j=1}^{k}s_{j}(X) =\prod_{j=1}^{k}s_{j}\left((A\#_{t}B)^{1/2}K(A\#_{1-t}B)^{1/2}\right)\] \[\leq\prod_{j=1}^{k}s_{j}\left((A\#_{t}B)^{1/2}\right)s_{j}\left(K \right)s_{j}\left((A\#_{1-t}B)^{1/2}\right)\] \[\leq\prod_{j=1}^{k}s_{j}^{1/2}\left(A\#_{t}B\right)s_{j}^{1/2} \left(A\#_{1-t}B\right).\]
Raising both sides to the power \(2r\) gives the first inequality. The other inequalities follow from (4).
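These statements are easy to test numerically. The following is a minimal numpy sketch, not part of the paper: generating a PPT block by shrinking a random off-diagonal entry is a convenience assumption, and the check covers only the first inequality of Theorem 2.1 (with \(r=1\)) on a random example.

```python
import numpy as np

def mpow(M, p):
    # p-th power of a symmetric positive (semi)definite matrix via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * w**p) @ V.T

def geo_mean(A, B, t):
    # Weighted geometric mean A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}
    Ah, Amh = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Amh @ B @ Amh, t) @ Ah

def random_ppt_block(n, rng):
    # A, B positive definite; shrink X until H and its partial transpose are PSD
    psd = lambda M: np.linalg.eigvalsh(M).min() >= -1e-10
    G1, G2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    A, B = G1 @ G1.T + np.eye(n), G2 @ G2.T + np.eye(n)
    X = rng.standard_normal((n, n))
    while not (psd(np.block([[A, X], [X.T, B]]))
               and psd(np.block([[A, X.T], [X, B]]))):
        X *= 0.5
    return A, B, X

rng = np.random.default_rng(0)
n, t = 4, 0.3
A, B, X = random_ppt_block(n, rng)
sX = np.linalg.svd(X, compute_uv=False)
s1 = np.linalg.eigvalsh(geo_mean(A, B, t))[::-1]      # eigenvalues = singular values here
s2 = np.linalg.eigvalsh(geo_mean(A, B, 1 - t))[::-1]
for k in range(1, n + 1):
    assert np.prod(sX[:k] ** 2) <= np.prod(s1[:k] * s2[:k]) + 1e-8
```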
Recall that a norm \(\|\cdot\|\) on \(\mathbb{M}_{n}\) is called a unitarily invariant norm if \(\|UXV\|=\|X\|\) for all \(X\in\mathbb{M}_{n}\) and all unitary \(U,V\in\mathbb{M}_{n}\). Let \(\mathbb{R}_{+\downarrow}^{n}\) denote the set of all vectors \(\gamma=(\gamma_{1},\gamma_{2},...,\gamma_{n})\) in \(\mathbb{R}^{n}\) with \(\gamma_{1}\geq\gamma_{2}\geq...\geq\gamma_{n}\geq 0\). For each \(\gamma\in\mathbb{R}_{+\downarrow}^{n}\), let \(\|\cdot\|_{\gamma}\) be the norm defined on \(\mathbb{M}_{n}\) as follows:
\[\|X\|_{\gamma}=\sum_{j=1}^{n}\gamma_{j}s_{j}(X).\]
Let \(\|\cdot\|\) be a unitarily invariant norm on \(\mathbb{M}_{n}\). Then there is a compact set \(K_{\|\cdot\|}\subset\mathbb{R}_{+\downarrow}^{n}\) such that
\[\|X\|=\max\{\|X\|_{\gamma}:\gamma\in K_{\|\cdot\|}\}\quad\text{for all}\quad X \in\mathbb{M}_{n}. \tag{6}\]
See [12].

**Theorem 2.2**.: _Let \(\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\) be PPT, and let \(r>0\) and \(t\in[0,1]\). Then_
\[\||X|^{r}\|^{2}\leq\|(A\#_{t}B)^{r}\|\|(A\#_{1-t}B)^{r}\|.\]
_In particular,_
\[\|X\|\leq\|A\#B\|.\]

Proof.: Theorem 2.1 implies that
\[\prod_{j=1}^{k}s_{j}^{r}(X)\leq\prod_{j=1}^{k}s_{j}^{r/2}(A\#_{t}B)s_{j}^{r/2 }(A\#_{1-t}B).\]
Let \(\gamma=(\gamma_{1},\gamma_{2},...,\gamma_{n})\in K_{\|\cdot\|}\). Then
\[\prod_{j=1}^{k}\gamma_{j}s_{j}^{r}(X)\leq\prod_{j=1}^{k}\gamma_{j}^{1/2}s_{j}^ {r/2}(A\#_{t}B)\gamma_{j}^{1/2}s_{j}^{r/2}(A\#_{1-t}B).\]
Using the Cauchy-Schwarz inequality and the fact that log majorization implies weak majorization, we have for \(k=1,2,...,n\) and for \(t\in[0,1]\)
\[\sum_{j=1}^{k}\gamma_{j}s_{j}^{r}(X) \leq\sum_{j=1}^{k}\gamma_{j}^{1/2}s_{j}^{r/2}(A\#_{t}B)\gamma_{j}^ {1/2}s_{j}^{r/2}(A\#_{1-t}B)\] \[\leq\left(\sum_{j=1}^{k}\gamma_{j}s_{j}(A\#_{t}B)^{r}\right)^{1/2 }\left(\sum_{j=1}^{k}\gamma_{j}s_{j}(A\#_{1-t}B)^{r}\right)^{1/2}\] \[=\|(A\#_{t}B)^{r}\|_{\gamma}^{1/2}\|(A\#_{1-t}B)^{r}\|_{\gamma}^{ 1/2}\] \[\leq\|(A\#_{t}B)^{r}\|^{1/2}\|(A\#_{1-t}B)^{r}\|^{1/2}.\]
Hence,
\[\||X|^{r}\|_{\gamma}\leq\|(A\#_{t}B)^{r}\|^{1/2}\|(A\#_{1-t}B)^{r}\|^{1/2}.\]
The result follows by taking the maximum over all \(\gamma\in K_{\|\cdot\|}\).
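Theorem 2.2 can be spot-checked the same way for the trace norm (sum of singular values), one instance of a unitarily invariant norm. The sketch below is again illustrative only and reuses `mpow`, `geo_mean`, `A`, `B`, `X` from the previous snippet.

```python
# Continuing the previous sketch (reuses mpow, geo_mean, A, B, X).
trace_norm = lambda M: np.linalg.svd(M, compute_uv=False).sum()
r, t = 1.5, 0.25
abs_X_r = mpow(X.T @ X, r / 2)                       # |X|^r for real X
lhs = trace_norm(abs_X_r) ** 2
rhs = (trace_norm(mpow(geo_mean(A, B, t), r))
       * trace_norm(mpow(geo_mean(A, B, 1 - t), r)))
assert lhs <= rhs + 1e-8
```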
The next result can be stated as follows.

**Theorem 2.3**.: _Let \(\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\) be PPT. Then for \(t\in[0,1]\)_
\[s_{j}(X)\leq s_{\lfloor\frac{j+1}{2}\rfloor}\left(\frac{(A\#_{t}B)+(A\#_{1-t} B)}{2}\right),\quad j=1,2,...,n.\]
_In particular,_
\[s_{j}(X)\leq s_{\lfloor\frac{j+1}{2}\rfloor}\left(A\#B\right),\quad j=1,2,...,n,\]
_where \(\lfloor x\rfloor\) is the greatest integer \(\leq x\)._

Proof.: Since \(\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\) is PPT, we have
\[\left(\begin{array}{cc}B&-X\\ -X^{*}&A\end{array}\right)=\left(\begin{array}{cc}0&-I\\ I&0\end{array}\right)\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\left(\begin{array}{cc}0&I\\ -I&0\end{array}\right)\geq 0.\]
Therefore,
\[\frac{1}{2}\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\leq\frac{1}{2}\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)+\frac{1}{2}\left(\begin{array}{cc}B&-X\\ -X^{*}&A\end{array}\right)=\left(\begin{array}{cc}\frac{A+B}{2}&0\\ 0&\frac{A+B}{2}\end{array}\right).\]
Hence,
\[\frac{1}{2}\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)-\left(\begin{array}{cc}0&X\\ X^{*}&0\end{array}\right)\leq\left(\begin{array}{cc}\frac{A+B}{2}&0\\ 0&\frac{A+B}{2}\end{array}\right)-\left(\begin{array}{cc}0&X\\ X^{*}&0\end{array}\right).\]
Note that the left-hand side of the above inequality,
\[\frac{1}{2}\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)-\left(\begin{array}{cc}0&X\\ X^{*}&0\end{array}\right)=\frac{1}{2}\left(\begin{array}{cc}A&-X\\ -X^{*}&B\end{array}\right),\]
is positive semidefinite. Then, we have
\[\left(\begin{array}{cc}0&X\\ X^{*}&0\end{array}\right)\leq\left(\begin{array}{cc}\frac{A+B}{2}&0\\ 0&\frac{A+B}{2}\end{array}\right).\]
Therefore, by Weyl's monotonicity principle, we have
\[\lambda_{j}\left(\begin{array}{cc}0&X\\ X^{*}&0\end{array}\right)\leq\lambda_{j}\left(\begin{array}{cc}\frac{A+B}{2}&0 \\ 0&\frac{A+B}{2}\end{array}\right),\quad\text{for }j=1,2,...,2n.\]
Note that the eigenvalues of \(\left(\begin{array}{cc}0&X\\ X^{*}&0\end{array}\right)\) are \(s_{1}(X)\geq s_{2}(X)\geq...\geq s_{n}(X)\geq-s_{n}(X)\geq-s_{n-1}(X)\geq...\geq-s_{ 1}(X),\) and the eigenvalues of \(\left(\begin{array}{cc}\frac{A+B}{2}&0\\ 0&\frac{A+B}{2}\end{array}\right)\) are \(s_{1}\left(\frac{A+B}{2}\right)\geq s_{1}\left(\frac{A+B}{2}\right)\geq s_{2} \left(\frac{A+B}{2}\right)\geq s_{2}\left(\frac{A+B}{2}\right)\geq...\geq s_{n }\left(\frac{A+B}{2}\right)\geq s_{n}\left(\frac{A+B}{2}\right).\) Therefore, we have shown that if \(\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\) is PPT, then
\[s_{j}(X)\leq s_{\lfloor\frac{j+1}{2}\rfloor}\left(\frac{A+B}{2}\right),\quad j =1,2,...,n. \tag{7}\]
Since \(\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\) is PPT, Lemma 2.1 implies that for \(t\in[0,1]\) the block \(G=\left(\begin{array}{cc}A\#_{t}B&X\\ X^{*}&A\#_{1-t}B\end{array}\right)\) is PPT. Applying (7) to \(G\) implies the result.
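The bound of Theorem 2.3, together with the intermediate inequality (7) (which is recovered at \(t=0\)), can be checked on the same random example; this again continues the illustrative sketch above.

```python
# Continuing the sketch: check (7) and Theorem 2.3 on the same PPT block.
sX = np.linalg.svd(X, compute_uv=False)
for t in (0.0, 0.25, 0.5):                           # t = 0 recovers (7)
    M = (geo_mean(A, B, t) + geo_mean(A, B, 1 - t)) / 2
    sM = np.linalg.eigvalsh(M)[::-1]                 # decreasing order
    for j in range(1, len(sX) + 1):
        # 1-indexed floor((j+1)/2), converted to a 0-based index
        assert sX[j - 1] <= sM[(j + 1) // 2 - 1] + 1e-8
```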
## 3. Applications

In this section, we present several applications of Theorem 2.1. Before doing so, we notice that for any \(A,B\in\mathbb{M}_{n}\), the block \(\left(\begin{array}{cc}A^{*}A&A^{*}B\\ B^{*}A&B^{*}B\end{array}\right)\) is positive semidefinite, since
\[\left(\begin{array}{cc}A^{*}A&A^{*}B\\ B^{*}A&B^{*}B\end{array}\right)=\left(\begin{array}{cc}A&B\end{array}\right)^ {*}\left(\begin{array}{cc}A&B\end{array}\right).\]
Moreover, if \(\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\) is positive semidefinite and \(X=U|X|\) is the polar decomposition of \(X\), then \(\left(\begin{array}{cc}U^{*}AU&|X|\\ |X|&B\end{array}\right)\) is PPT, since
\[\left(\begin{array}{cc}U^{*}AU&|X|\\ |X|&B\end{array}\right)=\left(\begin{array}{cc}U^{*}&0\\ 0&I\end{array}\right)\left(\begin{array}{cc}A&X\\ X^{*}&B\end{array}\right)\left(\begin{array}{cc}U&0\\ 0&I\end{array}\right).\]

**Example 3.1**.: _For \(j=1,2,...,m\), let \(A_{j},B_{j}\in\mathbb{M}_{n}\) be such that \(A_{j}^{*}B_{j}=B_{j}^{*}A_{j}\). Then_
\[\sum_{j=1}^{m}\left(\begin{array}{cc}A_{j}^{*}A_{j}&A_{j}^{*}B_{j}\\ B_{j}^{*}A_{j}&B_{j}^{*}B_{j}\end{array}\right)=\left(\begin{array}{cc}\sum _{j=1}^{m}|A_{j}|^{2}&\sum_{j=1}^{m}A_{j}^{*}B_{j}\\ \sum_{j=1}^{m}B_{j}^{*}A_{j}&\sum_{j=1}^{m}|B_{j}|^{2}\end{array}\right)\]
_is PPT. Applying Theorem 2.1 yields_
\[\prod_{j=1}^{k}s_{j}^{2r}\left(\sum_{j=1}^{m}A_{j}^{*}B_{j}\right)\] \[\leq\prod_{j=1}^{k}s_{j}^{r}\left(\left(\sum_{j=1}^{m}|A_{j}|^{2} \right)\#_{t}\left(\sum_{j=1}^{m}|B_{j}|^{2}\right)\right)s_{j}^{r}\left( \left(\sum_{j=1}^{m}|A_{j}|^{2}\right)\#_{1-t}\left(\sum_{j=1}^{m}|B_{j}|^{2} \right)\right)\] \[\leq\prod_{j=1}^{k}s_{j}^{r}\left(f_{t}\left(\sum_{j=1}^{m}|A_{j} |^{2},\sum_{j=1}^{m}|B_{j}|^{2}\right)\right)s_{j}^{r}\left(f_{1-t}\left(\sum _{j=1}^{m}|A_{j}|^{2},\sum_{j=1}^{m}|B_{j}|^{2}\right)\right)\] \[\leq\prod_{j=1}^{k}s_{j}\left(g_{r,t}\left(\sum_{j=1}^{m}|A_{j}|^ {2},\sum_{j=1}^{m}|B_{j}|^{2}\right)\right)s_{j}\left(g_{r,1-t}\left(\sum_{j=1 }^{m}|A_{j}|^{2},\sum_{j=1}^{m}|B_{j}|^{2}\right)\right),\]
_where \(f_{t}(A,B)=e^{(1-t)\log A+t\log B}\) and \(g_{r,t}(A,B)=B^{rt/2}A^{(1-t)r}B^{rt/2}.\) A particular case of this is when \(A_{j},B_{j}\in\mathbb{M}_{n}\) are positive semidefinite and \(t=1/2\). In this case, if we replace \(A_{j},B_{j}\) by \(A_{j}^{1/r},B_{j}^{1/r}\), respectively, we have_
\[\prod_{j=1}^{k}s_{j}^{r}\left(\sum_{j=1}^{m}A_{j}^{1/r}B_{j}^{1/ r}\right) \leq\prod_{j=1}^{k}s_{j}^{r}\left(\left(\sum_{j=1}^{m}A_{j}^{2/r} \right)\#\left(\sum_{j=1}^{m}B_{j}^{2/r}\right)\right)\] \[\leq\prod_{j=1}^{k}s_{j}^{r}\left(e^{\frac{1}{2}\log\left(\sum_{ j=1}^{m}A_{j}^{2/r}\right)+\frac{1}{2}\log\left(\sum_{j=1}^{m}B_{j}^{2/r} \right)}\right)\] \[\leq\prod_{j=1}^{k}s_{j}\left(\left(\sum_{j=1}^{m}B_{j}^{2/r} \right)^{r/4}\left(\sum_{j=1}^{m}A_{j}^{2/r}\right)^{r/2}\left(\sum_{j=1}^{m} B_{j}^{2/r}\right)^{r/4}\right).\]
_This implies_
\[\left\|\left(\sum_{j=1}^{m}A_{j}^{1/r}B_{j}^{1/r}\right)^{r}\right\| \leq\left\|\left(\left(\sum_{j=1}^{m}A_{j}^{2/r}\right)\#\left(\sum _{j=1}^{m}B_{j}^{2/r}\right)\right)^{r}\right\|\] \[\leq\left\|\left(e^{\frac{1}{2}\log\left(\sum_{j=1}^{m}A_{j}^{2/r }\right)+\frac{1}{2}\log\left(\sum_{j=1}^{m}B_{j}^{2/r}\right)}\right)^{r}\right\|\] \[\leq\left\|\left(\sum_{j=1}^{m}B_{j}^{2/r}\right)^{r/4}\left(\sum_{j= 1}^{m}A_{j}^{2/r}\right)^{r/2}\left(\sum_{j=1}^{m}B_{j}^{2/r}\right)^{r/4}\right\|. \tag{8}\]
_Notice that \(\|\sum_{j=1}^{m}f(A_{j})\|\leq\|f\left(\sum_{j=1}^{m}A_{j}\right)\|\) for every nonnegative convex function \(f\) on \([0,\infty)\) such that \(f(0)=0\). See [13]._
_In particular, if \(f(x)=x^{r},r\geq 1\), then_
\[\left\|\sum_{j=1}^{m}A_{j}B_{j}\right\|=\left\|\sum_{j=1}^{m}\left(A_{j}^{1/r }B_{j}^{1/r}\right)^{r}\right\|\leq\left\|\left(\sum_{j=1}^{m}A_{j}^{1/r}B_{j }^{1/r}\right)^{r}\right\|. \tag{9}\]
_For \(r=2\), combining the inequalities in (8) and (9) implies the following result._

**Theorem 3.1**.: _For \(j=1,2,...,m\), let \(A_{j},B_{j}\in\mathbb{M}_{n}\) be positive semidefinite such that, for each \(j\), \(A_{j}\) commutes with \(B_{j}\). Then for all unitarily invariant norms_
\[\left\|\sum_{j=1}^{m}A_{j}B_{j}\right\| \leq\left\|\left(\sum_{j=1}^{m}A_{j}^{1/2}B_{j}^{1/2}\right)^{2}\right\|\] \[\leq\left\|\left(\left(\sum_{j=1}^{m}A_{j}\right)\#\left(\sum_{j =1}^{m}B_{j}\right)\right)^{2}\right\|\] \[\leq\left\|\left(e^{\frac{1}{2}\log\left(\sum_{j=1}^{m}A_{j} \right)+\frac{1}{2}\log\left(\sum_{j=1}^{m}B_{j}\right)}\right)^{2}\right\|\] \[\leq\left\|\left(\sum_{j=1}^{m}B_{j}\right)^{1/2}\left(\sum_{j=1} ^{m}A_{j}\right)\left(\sum_{j=1}^{m}B_{j}\right)^{1/2}\right\|\] \[\leq\left\|\left(\sum_{j=1}^{m}A_{j}\right)\left(\sum_{j=1}^{m}B_ {j}\right)\right\|.\]
_Notice that the last inequality follows from the general facts that \(\|Re(X)\|\leq\|X\|\) for all \(X\in\mathbb{M}_{n}\), and that if a product \(XY\) is Hermitian, then \(\|XY\|\leq\|Re(YX)\|\)._

_We remark that the above result is an improvement of Audenaert's theorem [4, Theorem 1]. See also [10] and [16] for alternative proofs of Audenaert's result._

Before considering the second example, one may ask what happens if we drop the condition \(A_{j}^{*}B_{j}=B_{j}^{*}A_{j},j=1,2,...,m\). In fact, a weaker result can be obtained. To be more specific, let \(A_{j},B_{j}\in\mathbb{M}_{n},j=1,2,...,m\). Then
\[\sum_{j=1}^{m}\left(\begin{array}{cc}A_{j}^{*}A_{j}&A_{j}^{*}B_{j}\\ B_{j}^{*}A_{j}&B_{j}^{*}B_{j}\end{array}\right)=\left(\begin{array}{cc}\sum _{j=1}^{m}|A_{j}|^{2}&\sum_{j=1}^{m}A_{j}^{*}B_{j}\\ \sum_{j=1}^{m}B_{j}^{*}A_{j}&\sum_{j=1}^{m}|B_{j}|^{2}\end{array}\right)\]
is positive semidefinite. Let \(\sum_{j=1}^{m}A_{j}^{*}B_{j}=U\left|\sum_{j=1}^{m}A_{j}^{*}B_{j}\right|\) be the polar decomposition of \(\sum_{j=1}^{m}A_{j}^{*}B_{j}\). Then
\[\left(\begin{array}{cc}\sum_{j=1}^{m}U^{*}|A_{j}|^{2}U&\left|\sum_{j=1}^{m}A _{j}^{*}B_{j}\right|\\ \left|\sum_{j=1}^{m}A_{j}^{*}B_{j}\right|&\sum_{j=1}^{m}|B_{j}|^{2}\end{array}\right)\]
is PPT. Therefore, Theorem 2.1, with \(t=1/2\) and \(r=2\), implies the following result.

**Theorem 3.2**.: _For \(j=1,2,...,m\), let \(A_{j},B_{j}\in\mathbb{M}_{n}\). Then for some unitary \(U\in\mathbb{M}_{n}\) and for all unitarily invariant norms_
\[\left\|\left|\sum_{j=1}^{m}A_{j}^{*}B_{j}\right|^{2}\right\| \leq\left\|\left(\left(\sum_{j=1}^{m}U^{*}|A_{j}|^{2}U\right)\# \left(\sum_{j=1}^{m}|B_{j}|^{2}\right)\right)^{2}\right\|\] \[\leq\left\|\left(\sum_{j=1}^{m}|B_{j}|^{2}\right)^{1/2}U^{*}\left( \sum_{j=1}^{m}|A_{j}|^{2}\right)U\left(\sum_{j=1}^{m}|B_{j}|^{2}\right)^{1/2}\right\|\] \[\leq\left\|\left(\sum_{j=1}^{m}|A_{j}|^{2}\right)U\left(\sum_{j=1 }^{m}|B_{j}|^{2}\right)\right\|.\]

**Example 3.2**.: _Let \(A,B\in\mathbb{M}_{n}\). Then_
\[\left(\begin{array}{cc}I+AA^{*}&A+B\\ (A+B)^{*}&I+B^{*}B\end{array}\right)=\left(\begin{array}{cc}I&A\\ B^{*}&I\end{array}\right)\left(\begin{array}{cc}I&B\\ A^{*}&I\end{array}\right)\]
_is positive semidefinite._
_Therefore, \(\left(\begin{array}{cc}I+U^{*}|A|^{2}U&|A+B|\\ |A+B|&I+|B|^{2}\end{array}\right)\) is PPT, with \(U=U_{1}U_{2}\), where \(U_{1}\) is the unitary in the polar decomposition of \(A+B\) and \(U_{2}\) is the unitary such that \(AA^{*}=U_{2}^{*}A^{*}AU_{2}\). Then, using Theorem 2.1 and the fact that \(\prod_{j=1}^{k}s_{j}(XY)\leq\prod_{j=1}^{k}s_{j}(X)s_{j}(Y)\), we have for \(r>0\) and \(k=1,2,...,n\)_
\[\prod_{j=1}^{k}s_{j}(|A+B|^{r}) \leq\prod_{j=1}^{k}s_{j}^{r}\left((I+U^{*}|A|^{2}U)\#(I+|B|^{2})\right)\] \[\leq\prod_{j=1}^{k}s_{j}^{r}\left((I+U^{*}|A|^{2}U)^{1/2}\right)s _{j}^{r}\left((I+|B|^{2})^{1/2}\right)\] \[=\prod_{j=1}^{k}s_{j}^{r/2}\left(I+|A|^{2}\right)s_{j}^{r/2} \left(I+|B|^{2}\right).\]
_Therefore,_
\[\prod_{j=1}^{k}s_{j}(|A+B|^{r})\leq\prod_{j=1}^{k}s_{j}^{r/2}\left(I+|A|^{2} \right)s_{j}^{r/2}\left(I+|B|^{2}\right). \tag{10}\]
_We remark that \((1+x^{2})^{r}\leq(1+x^{r})^{2}\) for all positive real numbers \(x\) and for \(1\leq r\leq 2\). See [5, Lemma 2.7]. Then_
\[s_{j}^{r/2}\left(I+|A|^{2}\right)=\left(1+s_{j}^{2}(A)\right)^{r/2}\leq 1+s_ {j}^{r}(A)=s_{j}(I+|A|^{r}). \tag{11}\]
_By combining (10) and (11) we obtain the following result, which was given in [5], where the proof was more complicated._

**Theorem 3.3**.: _([5, Theorem 2.8]) Let \(A,B\in\mathbb{M}_{n}\). Then_
\[\prod_{j=1}^{k}s_{j}(|A+B|^{r})\leq\prod_{j=1}^{k}s_{j}\left(I+|A|^{r}\right)s _{j}\left(I+|B|^{r}\right),\]
_for all \(1\leq r\leq 2\)._

_We can get an improvement of the above result if \(A,B\) are Hermitian. In fact, if \(A,B\) are Hermitian, then \(\left(\begin{array}{cc}I+A^{2}&A+B\\ A+B&I+B^{2}\end{array}\right)\) is PPT. Therefore, for \(k=1,2,...,n\),_
\[\prod_{j=1}^{k}s_{j}(|A+B|^{r}) \leq\prod_{j=1}^{k}s_{j}^{r}\left((I+A^{2})\#(I+B^{2})\right)\] \[\leq\prod_{j=1}^{k}s_{j}\left((I+B^{2})^{r/4}(I+A^{2})^{r/2}(I+B^{2 })^{r/4}\right)\] \[\leq\prod_{j=1}^{k}s_{j}\left((I+A^{2})^{r/2}\right)s_{j}\left((I+ B^{2})^{r/2}\right)\quad\text{(for all $r\geq 0$)}\] \[\leq\prod_{j=1}^{k}s_{j}\left(I+|A|^{r}\right)s_{j}\left(I+|B|^ {r}\right)\quad\text{(for $1\leq r\leq 2$)}.\]

Before introducing the next application, we need to recall the Hadamard product of two matrices. Let \(A=[a_{ij}],B=[b_{ij}]\in\mathbb{M}_{n}\). The Hadamard product of \(A\) and \(B\) is defined as \(A\circ B=[a_{ij}b_{ij}]\).

**Example 3.3**.: _Let \(A,B\in\mathbb{M}_{n}\) be Hermitian. Then, by (1), it is clear that \(\left(\begin{array}{cc}A^{2}&A\\ A&I\end{array}\right)\) and \(\left(\begin{array}{cc}I&B\\ B&B^{2}\end{array}\right)\) are PPT. Since the Hadamard product of positive semidefinite matrices is positive semidefinite, the block \(\left(\begin{array}{cc}I\circ A^{2}&A\circ B\\ A\circ B&I\circ B^{2}\end{array}\right)\) is PPT._
_Then, by Theorem 2.1,_
\[\prod_{j=1}^{k}s_{j}^{2}\left(A\circ B\right) \leq\prod_{j=1}^{k}s_{j}^{2}\left((I\circ A^{2})\#(I\circ B^{2}) \right)\] \[=\prod_{j=1}^{k}s_{j}^{2}\left((I\circ A^{2})^{1/2}(I\circ B^{2}) ^{1/2}\right)\] \[=\prod_{j=1}^{k}s_{j}\left((I\circ A^{2})(I\circ B^{2})\right),\]
_where the first equality holds because \(I\circ A^{2}\) and \(I\circ B^{2}\) are diagonal and hence commute. Hence, we have_
\[\prod_{j=1}^{k}s_{j}^{2}\left(A\circ B\right)\leq\prod_{j=1}^{k}s_{j}\left((I \circ A^{2})(I\circ B^{2})\right),\:k=1,...,n.\]
_This implies that, for every unitarily invariant norm,_
\[\left\|\left|A\circ B\right|^{2}\right\| \leq\left\|(I\circ A^{2})(I\circ B^{2})\right\|\] \[\leq\left\|(I\circ A^{2})\right\|\ \left\|(I\circ B^{2})\right\|\] \[\leq\left\|A^{2}\right\|\ \left\|B^{2}\right\|\] \[\leq\left(\left\|A\right\|\ \left\|B\right\|\right)^{2}.\]
_Notice that the third inequality holds by Schur's theorem._

**Example 3.4**.: _For \(j=1,2,...,m\), let \(A_{j}\in\mathbb{M}_{n}\). Then \(\left(\begin{array}{cc}I&A_{j}\\ A_{j}^{*}&A_{j}^{*}A_{j}\end{array}\right)\) is positive semidefinite. Hence, \(\left(\begin{array}{cc}I&A_{1}\circ A_{2}\circ...\circ A_{m}\\ (A_{1}\circ A_{2}\circ...\circ A_{m})^{*}&A_{1}^{*}A_{1}\circ A_{2}^{*}A_{2} \circ...\circ A_{m}^{*}A_{m}\end{array}\right)\) is positive semidefinite. Therefore,_
\[\left(\begin{array}{cc}I&\left|A_{1}\circ A_{2}\circ...\circ A_{m}\right|\\ \left|A_{1}\circ A_{2}\circ...\circ A_{m}\right|&A_{1}^{*}A_{1}\circ A_{2}^{* }A_{2}\circ...\circ A_{m}^{*}A_{m}\end{array}\right)\]
_is PPT. By Theorem 2.1, we have for \(k=1,...,n\)_
\[\prod_{j=1}^{k}s_{j}^{2}\left(\left|A_{1}\circ A_{2}\circ...\circ A _{m}\right|\right) \leq\prod_{j=1}^{k}s_{j}^{2}\left(I\#\left(A_{1}^{*}A_{1}\circ A _{2}^{*}A_{2}\circ...\circ A_{m}^{*}A_{m}\right)\right)\] \[=\prod_{j=1}^{k}s_{j}\left(A_{1}^{*}A_{1}\circ A_{2}^{*}A_{2} \circ...\circ A_{m}^{*}A_{m}\right)\] \[=\prod_{j=1}^{k}s_{j}\left(\left|A_{1}\right|^{2}\circ\left|A_{2} \right|^{2}\circ...\circ\left|A_{m}\right|^{2}\right),\]
_and so_
\[\prod_{j=1}^{k}s_{j}^{2}\left(\left|A_{1}\circ A_{2}\circ...\circ A_{m}\right| \right)\leq\prod_{j=1}^{k}s_{j}\left(\left|A_{1}\right|^{2}\circ\left|A_{2} \right|^{2}\circ...\circ\left|A_{m}\right|^{2}\right),\quad k=1,2,...,n.\]
_This implies_
\[\left\|\left|A_{1}\circ A_{2}\circ...\circ A_{m}\right|^{2}\right\|\leq\left\| \left|A_{1}\right|^{2}\circ\left|A_{2}\right|^{2}\circ...\circ\left|A_{m}\right| ^{2}\right\|.\]

## Disclosure statement

The author declares no conflict of interest.
2305.06203
Multiclass MRI Brain Tumor Segmentation using 3D Attention-based U-Net
This paper proposes a 3D attention-based U-Net architecture for multi-region segmentation of brain tumors using a single stacked multi-modal volume created by combining three non-native MRI volumes. The attention mechanism added to the decoder side of the U-Net helps to improve segmentation accuracy by de-emphasizing healthy tissues and accentuating malignant tissues, resulting in better generalization power and reduced computational resources. The method is trained and evaluated on the BraTS 2021 Task 1 dataset and demonstrates improved accuracy over other approaches. My findings suggest that the proposed approach has the potential to enhance brain tumor segmentation using multi-modal MRI data, contributing to better understanding and diagnosis of brain diseases. This work highlights the importance of combining multiple imaging modalities and incorporating attention mechanisms for improved accuracy in brain tumor segmentation.
Maryann M. Gitonga
2023-05-10T14:35:07Z
http://arxiv.org/abs/2305.06203v1
# Multiclass MRI Brain Tumor Segmentation using 3D Attention-based U-Net

###### Abstract

This paper proposes a 3D attention-based U-Net architecture for multi-region segmentation of brain tumors using a single stacked multi-modal volume created by combining three non-native MRI volumes. The attention mechanism added to the decoder side of the U-Net helps to improve segmentation accuracy by de-emphasizing healthy tissues and accentuating malignant tissues, resulting in better generalization power and reduced computational resources. The method is trained and evaluated on the BraTS 2021 Task 1 dataset and demonstrates improved accuracy over other approaches. My findings suggest that the proposed approach has the potential to enhance brain tumor segmentation using multi-modal MRI data, contributing to better understanding and diagnosis of brain diseases. This work highlights the importance of combining multiple imaging modalities and incorporating attention mechanisms for improved accuracy in brain tumor segmentation.

**Keywords:** attention mechanism, U-Net, MRI, NIfTI.

## 1 Introduction

Glioma is a common malignant brain tumor that originates from glial cells in the brain and spinal cord. Gliomas are aggressive, and the median survival time for glioma patients is about 12 months [1]. Early detection of these tumors is critical, and MRI is a primary tool used for this purpose. MRI provides high spatial resolution anatomical information and different sequences such as T1-weighted, T2-weighted, T1-weighted contrast-enhanced, and T2 Fluid Attenuated Inversion Recovery, which highlight different tumor characteristics [2]. Accurate annotation and segmentation of tumor borders are essential for tumor diagnosis. However, manual segmentation is costly, time-consuming, and prone to human error, especially in cases where tumors have varying intensities and shapes in different sub-regions [3].

## 2 Literature Review

Consequently, deep learning techniques have transformed brain tumor segmentation from feature-driven to data-driven. Two types of deep learning algorithms, Convolutional Neural Network (CNN)-based and Fully Convolutional Network (FCN)-based, are used in brain tumor segmentation. Havaei _et al._ proposed a multi-path CNN network, InputCascadeCNN, which uses variable-size convolution kernels to extract context features and has both local and global routes [4]. The model attained a Dice coefficient of 0.81 for the complete segmentation on the BraTS 2013 dataset. Myronenko proposed a 3D CNN-based approach using a shared decoder and a variational auto-encoder branch for regularization. The encoder of the network extracts features of the images and the decoder reconstructs the dense segmentation masks for the scan. The variational auto-encoder branch reconstructs the input image into itself and is used only during training to regularize the shared decoder. The model attained an average Dice coefficient of 0.82 on the BraTS 2018 dataset [5]. Bukhari _et al._ proposed the 3D U-Net, which has a contracting path for capturing context information and an expanding path for ensuring precise localization, greatly improving performance on medical image segmentation tasks. This state-of-the-art model attained a Dice coefficient of approximately 0.92 on the BraTS 2021 dataset [6]. Lin _et al._ incorporated a feature pyramid module into the U-Net architecture to combine multi-scale semantic and location information.
The solution shortened the distance between the output layers and deep features through bottom-up path aggregation to reduce noise in the segmentation. An efficient feature pyramid was used to improve mask prediction while consuming fewer resources. The model attained a Dice coefficient of 0.80 on the BraTS 2017 and BraTS 2018 datasets [7]. Jun _et al._ introduced an nnU-Net architecture which included an encoder and decoder composed of convolutions, normalization, and skip connections, with deep supervision added to all but the two lowest resolutions in the decoder. It attained an average Dice coefficient of 0.90 on the BraTS 2021 dataset [8]. However, these techniques still take up significant time and resources in training and evaluation to get good results. Additionally, small-scale tumors are difficult to segment accurately due to the decreased image dimensions during downsampling.

In this paper, I test the 3D attention-based U-Net network [9] with the Dice Coefficient and Tversky Loss Function as metrics, to improve the segmentation accuracy. [10] applied the same network but used the Hausdorff Distance and augmented the dataset using the Positive Mining technique. I apply the proposed method to the BraTS (Brain Tumor Segmentation) 2021 Dataset provided by Medical Image Computing and Computer-Assisted Intervention (MICCAI). To provide richer spatial information in the input as well as enable one-time segmentation, three of the four modalities are combined into one volume. This is because the native modality (T1) highlights the healthy anatomy of the brain and not the tumor regions [11]. This results in a 4D input of dimensions \(3M\times L\times W\times S\), where \(M\) is the modality, \(L\) is the length of the scan, \(W\) is the width of the scan and \(S\) is the number of slices in each volume. This allows the model to put more focus on the regions of interest (regions showing a potential tumor).

## 3 Methodology

### 3D Attention U-Net

The proposed architecture (cf. Figure 1) uses the U-Net architecture [12], which employs a contracting path to down-sample image dimensions and an expanding path to up-sample while retaining spatial information through skip connections. Attention modules are utilized at the skip connections to highlight relevant activations and suppress those in irrelevant regions during training [9]. The soft attention module, which is differentiable and essential for back-propagation, is used, consisting of two sub-modules: the channel attention module and the spatial attention module. The former selects important feature maps, while the latter identifies important regions within the feature maps, and both are used to take full advantage of the architecture. 3D attention gates are introduced to generate 3D channel and spatial attention by utilizing 3D inter-channel and inter-spatial feature relationships. The gate takes two input vectors, x and g: g is acquired from the lower part of the network and represents better features, while x comes from early layers and represents better spatial information. To align the dimensions of the two vectors, x undergoes a strided 3D convolution while g undergoes a 3D convolution with number of filters = \(F_{g}\). The two vectors are summed element-wise, with aligned weights becoming larger while unaligned weights become relatively smaller. The resultant vector undergoes a ReLU activation layer and a 1x1x1 convolution that reduces the dimensions to 1xHxWxD (a code sketch of the complete gate, including the sigmoid rescaling described next, is given below).
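As referenced above, the gate can be written down compactly. The following is a minimal TensorFlow/Keras sketch of the mechanism (my own illustration under stated assumptions, not the published code; the intermediate channel count and exact layer arrangement are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

def attention_gate_3d(x, g, inter_channels):
    # x: encoder skip features, shape (batch, H, W, D, Fx)
    # g: gating signal from the coarser decoder level, shape (batch, H/2, W/2, D/2, Fg)
    # Align dimensions: strided conv on x, 1x1x1 conv on g
    theta_x = layers.Conv3D(inter_channels, 2, strides=2, padding="same")(x)
    phi_g = layers.Conv3D(inter_channels, 1, padding="same")(g)
    # Element-wise sum: aligned responses reinforce, unaligned ones shrink
    f = layers.Activation("relu")(layers.Add()([theta_x, phi_g]))
    # 1x1x1 convolution collapses channels to a single map, squashed to (0, 1)
    att = layers.Conv3D(1, 1, padding="same", activation="sigmoid")(f)
    # Upsample coefficients back to x's resolution and rescale the skip features
    att = layers.UpSampling3D(size=2)(att)
    return x * att  # broadcast the single-channel map over all feature channels
```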
A sigmoid layer is applied to scale the vector between 0 and 1, generating attention coefficients that indicate the relevance of the features. The attention coefficients are upsampled to the original dimensions of vector x and multiplied element-wise with vector x. The resultant scaled vector is passed along in the skip connection. This mechanism helps to increase the sensitivity of the network to small but important details and to reduce the impact of noisy or irrelevant features. The overall network is able to learn better and more discriminative features, which improves its accuracy and efficiency.

Figure 1: 3D Attention U-Net Architecture

Figure 2: Visual representation of the 3D Attention mechanism

## 4 Experiments

### Dataset and Pre-processing

The BraTS 2021 Dataset, which consists of 1400 cases of multi-parametric MRI (mpMRI) scans with expert neuro-radiologists' ground truth annotations, was used for this project. The dataset provides mpMRI scans in NIfTI format and includes native (T1), post-contrast T1-weighted (T1CE), T2-weighted (T2), and T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) volumes, along with manually annotated GD-enhancing tumor, peritumoral edematous/invaded tissue, necrotic tumor core, and normal tissue. The scans in each folder were transformed by scaling and translating the features using the MinMax Scaler to shrink the features within the range of 0 to 1 [13]. The combined MRI scan was generated by merging the 3 volumes of each brain scan to form a 4D array of 3 (modalities) x length x width x number of slices. This provides richer spatial information for one-time segmentation. The fourth volume (native modality) was left out because the scan highlights the healthy tissues of the brain [11], which does not majorly contribute to the segmentation of the tumor regions. The combined scan and corresponding mask were then cropped to remove useless blank regions, reducing bias and focusing on the important parts of the volume. The pre-processed combined scan and mask were saved as numpy arrays, with the mask features converted to class values (labels) 0, 1, 2, and 3. Masks with a segmented region of less than 1% were excluded to retain only significant feature representation for the segmented regions. The resulting dataset after pre-processing had about 1200 cases of tumor without any additional in-house data. The dataset was divided into three sets: the train, test, and validation datasets in the ratio 6:2:2 respectively.

### Implementation Details

During the training of the segmentation model, several hyper-parameters were investigated, including batch size, learning rate, epochs, activation function, dropout rate, metric functions, and loss functions. To prevent overfitting, the dropout approach and batch normalization were utilized for model regularization. The dropout rate was distributed across the layers in the range of 0.1 to 0.3 in both the encoder and decoder modules. To fit the data into memory, a batch size of 2 was utilized, which was the maximum allowable limit based on the GPU specifications obtained for this research. This small batch size also offered a regularizing effect, resulting in lower generalization error [14]. The Adam optimizer with a learning rate of \(1.0\times 10^{-4}\) was utilized for weight updates [15]. A pixel-wise softmax activation function was employed in the last layer of the model.
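For context, the pre-processing described in the Dataset subsection can be sketched as a single helper. This is a hypothetical illustration (nibabel for NIfTI I/O, scikit-learn's MinMaxScaler; the crop bounds and the BraTS label remapping 4 -> 3 are assumptions based on common conventions, not stated in the paper):

```python
import numpy as np
import nibabel as nib
from sklearn.preprocessing import MinMaxScaler

def load_stacked_case(t2_path, t1ce_path, flair_path, mask_path):
    """Scale each modality to [0, 1], stack the three non-native volumes,
    and crop blank borders. Paths and crop bounds are illustrative."""
    scaler = MinMaxScaler()
    vols = []
    for path in (t2_path, t1ce_path, flair_path):
        vol = nib.load(path).get_fdata()
        # MinMaxScaler expects a 2D array, so flatten and restore the shape
        vols.append(scaler.fit_transform(vol.reshape(-1, 1)).reshape(vol.shape))
    combined = np.stack(vols, axis=0)            # (3, L, W, S)
    mask = nib.load(mask_path).get_fdata().astype(np.uint8)
    mask[mask == 4] = 3                          # assumed BraTS remap: label 4 -> class 3
    # Crop away mostly blank regions (illustrative fixed bounds)
    combined = combined[:, 56:184, 56:184, 13:141]
    mask = mask[56:184, 56:184, 13:141]
    return combined, mask
```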
The Dice Coefficient (Equation 1) was utilized as an evaluation metric for both the training and testing phases. It calculates the ratio between the intersection and the union of the segmented and ground truth regions, focusing only on the segmentation classes and not the background class. The pixels are classified as True Positive (\(TP\)), False Negative (\(FN\)) and False Positive (\(FP\)).
\[D=\frac{2TP}{2TP+FN+FP} \tag{1}\]
The Tversky Loss [16] (Equation 3), based on the Tversky Index (Equation 2) with \(\alpha=0.7,\beta=0.3\) [17], was used for both the training and testing phases.
\[TI=\frac{TP}{TP+\alpha FN+\beta FP} \tag{2}\]
\[TL=1-TI \tag{3}\]
This loss function is a generalized approach to handle class imbalance issues, resulting in a better balance between precision and recall during model training. ReLU was used in the first trial of training and evaluation; in the second trial, the activation function was changed to LeakyReLU to prevent the dying ReLU problem [18].

## 5 Results

The network was implemented in TensorFlow and trained on an NVIDIA Tesla V100 32GB GPU. Results for the Dice Coefficient and Tversky Loss metrics evaluated on the validation and testing datasets are presented in Table 1. The developed model achieved promising results in brain tumor segmentation, with the best performance attained during the second trial at the 127th epoch. Table 2 shows the Dice coefficients attained by other models from different studies in comparison to the developed model. The use of the Dice Coefficient and Tversky Loss metrics on the validation and testing datasets demonstrated the model's effectiveness. The visualization of the testing dataset is shown in Figure 3. The model's ability to accurately delineate the tumor and its sub-regions from the input stack of three volumes (T2-FLAIR, T1CE, and T2) helps to create an effective treatment plan based on the nature of the tumor sub-regions observed from the error-proof segmentations obtained. This method has significant implications for early brain tumor detection, which is crucial for effective treatment and ultimately saving lives.

| Trial | Dice Coefficient | Tversky Loss |
| --- | --- | --- |
| _Validation Dataset (BraTS 2021)_ | | |
| Trial 1 (epoch = 75) | 0.9430 | 0.0570 |
| Trial 2 (epoch = 127) | **0.9562** | **0.0438** |
| _Testing Dataset (BraTS 2021)_ | | |
| Trial 2 | **0.9864** | **0.0136** |

Table 1: Dice Coefficient and Tversky Loss metrics evaluated on the validation and testing datasets.

| Model | Dataset | Dice Coefficient |
| --- | --- | --- |
| InputCascadeCNN [4] | BraTS 2013 | 0.81 |
| Encoder-decoder with a variational auto-encoder branch [5] | BraTS 2018 | 0.82 |
| 3D U-Net [6] | BraTS 2021 | 0.92 |
| Path aggregation U-Net [7] | BraTS 2017, BraTS 2018 | 0.80 |
| 3D Attention U-Net [19] | BraTS 2019 | 0.86 |
| nnU-Net [8] | BraTS 2021 | 0.90 |
| 3D Attention U-Net* | BraTS 2021 | **0.98** |

Table 2: Dice Coefficient metric comparison with models from other studies. The developed model is marked with an *.

Figure 3: Predictions and ground truths visualized from a combined scan generated from 3 modalities (T2-FLAIR, T1CE and T2) from the validation set of BraTS 2021. The annotations can be interpreted as necrosis, edema (invaded tissue) and enhancing tumor.
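For reference, Equations 1-3 map directly to code. A minimal TensorFlow sketch follows (a soft relaxation computed over one-hot masks; this is an assumption, since the exact implementation is not given in the paper):

```python
import tensorflow as tf

def dice_coefficient(y_true, y_pred, eps=1e-6):
    # Equation 1 over the foreground channels of one-hot masks
    # (background channel 0 excluded, as described above)
    y_true_f = tf.reshape(y_true[..., 1:], [-1])
    y_pred_f = tf.reshape(y_pred[..., 1:], [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + eps) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + eps)

def tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, eps=1e-6):
    # Equations 2-3 with alpha = 0.7, beta = 0.3: false negatives are
    # penalized more than false positives to counter class imbalance
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    tp = tf.reduce_sum(y_true_f * y_pred_f)
    fn = tf.reduce_sum(y_true_f * (1.0 - y_pred_f))
    fp = tf.reduce_sum((1.0 - y_true_f) * y_pred_f)
    return 1.0 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)
```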
With tumors being one of the leading causes of mortality worldwide, the model's output is critical in forecasting the tumor's aggressiveness and the patient's survival early enough, allowing the best chance for successful treatment. The developed solution helps facilitate accurate and effective medical diagnostics by optimizing the computational resources consumed on irrelevant areas of the images and facilitating better generalization of the network used for the task.

## 6 Conclusion

In this paper, the model implements a 3D attention mechanism in a 3D U-Net network to improve model sensitivity and accuracy to foreground pixels, without requiring significant computation overhead, by progressively suppressing feature responses in irrelevant background regions. The model performs segmentation on a stack of 3 modalities of the MRI scan, in their original format (NIfTI), to attain richer feature representation during segmentation. This work clearly exhibits the significance of the 3D attention mechanism in multi-class segmentation with limited computational resources. The significance of stacking the modalities into one array is also demonstrated: providing better feature representation in the input and facilitating one-time segmentation of the multi-modal scans. This solution in its entirety contributes to the development of accurate, extensive delineation tools for brain tumors, allowing physicians to develop effective treatment plans for patients based on the nature of the tumor sub-regions observed from the error-proof segmentations obtained.
2301.12959
GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis
Synthesizing high-fidelity complex images from text is challenging. Based on large pretraining, the autoregressive and diffusion models can synthesize photo-realistic images. Although these large models have shown notable progress, there remain three flaws. 1) These models require tremendous training data and parameters to achieve good performance. 2) The multi-step generation design slows the image synthesis process heavily. 3) The synthesized visual features are difficult to control and require delicately designed prompts. To enable high-quality, efficient, fast, and controllable text-to-image synthesis, we propose Generative Adversarial CLIPs, namely GALIP. GALIP leverages the powerful pretrained CLIP model both in the discriminator and generator. Specifically, we propose a CLIP-based discriminator. The complex scene understanding ability of CLIP enables the discriminator to accurately assess the image quality. Furthermore, we propose a CLIP-empowered generator that induces the visual concepts from CLIP through bridge features and prompts. The CLIP-integrated generator and discriminator boost training efficiency, and as a result, our model only requires about 3% training data and 6% learnable parameters, achieving comparable results to large pretrained autoregressive and diffusion models. Moreover, our model achieves 120 times faster synthesis speed and inherits the smooth latent space from GAN. The extensive experimental results demonstrate the excellent performance of our GALIP. Code is available at https://github.com/tobran/GALIP.
Ming Tao, Bing-Kun Bao, Hao Tang, Changsheng Xu
2023-01-30T14:58:23Z
http://arxiv.org/abs/2301.12959v1
# GALIP: Generative Adversarial CLIPs for Text-to-Image Synthesis

###### Abstract

Synthesizing high-fidelity complex images from text is challenging. Based on large pretraining, the autoregressive and diffusion models can synthesize photo-realistic images. Although these large models have shown notable progress, there remain three flaws. 1) These models require tremendous training data and parameters to achieve good performance. 2) The multi-step generation design slows the image synthesis process heavily. 3) The synthesized visual features are difficult to control and require delicately designed prompts. To enable high-quality, efficient, fast, and controllable text-to-image synthesis, we propose Generative Adversarial CLIPs, namely GALIP. GALIP leverages the powerful pretrained CLIP model both in the discriminator and generator. Specifically, we propose a CLIP-based discriminator. The complex scene understanding ability of CLIP enables the discriminator to accurately assess the image quality. Furthermore, we propose a CLIP-empowered generator that induces the visual concepts from CLIP through bridge features and prompts. The CLIP-integrated generator and discriminator boost training efficiency, and as a result, our model only requires about \(3\%\) training data and \(6\%\) learnable parameters, achieving comparable results to large pretrained autoregressive and diffusion models. Moreover, our model achieves \(\sim\)120\(\times\) faster synthesis speed and inherits the smooth latent space from GANs. The extensive experimental results demonstrate the excellent performance of our GALIP. Code is available at [https://github.com/tobran/GALIP](https://github.com/tobran/GALIP).

## 1 Introduction

Over the last few years, we have witnessed the great success of generative models for various applications [4, 45]. Among them, text-to-image synthesis [3, 18, 19, 20, 25, 28, 29, 33, 46, 57, 48, 50] is one of the most appealing applications. It generates high-fidelity images according to given language guidance. Owing to the convenience of language for users, text-to-image synthesis has attracted many researchers and become an active research area. Based on a large scale of data collection, model size, and pretraining, recently proposed large pretrained autoregressive and diffusion models, _e.g._, DALL-E [33] and LDM [35], show an impressive ability to synthesize complex scenes and outperform the previous text-to-image GANs significantly.

Although these large pretrained generative models have achieved significant advances, they still suffer from three flaws. First, these models require tremendous training data and parameters for pretraining. The large data and model sizes bring an extremely high computing budget and hardware requirements, making them inaccessible to many researchers and users. Second, the generation of large models is much slower than that of GANs. The token-by-token generation and progressive denoising require hundreds of inference steps and make the generated results lag seriously behind the language inputs. Third, there is no intuitive smooth latent space as in GANs, which maps meaningful visual attributes to the latent vector. The multi-step generation design breaks the synthesis process and scatters the meaningful latent space, so the synthesis process requires delicately designed prompts to control.

To address the above limitations, we rethink Generative Adversarial Networks (GANs).
GANs are much faster than autoregressive and diffusion models and have a smooth latent space, which enables more controllable synthesis. However, GAN models are known for potentially unstable training and less diversity in the generation [6]. This makes current text-to-image GANs suffer from unsatisfactory synthesis quality under complex scenes.

Figure 1: (a) Existing text-to-image GANs conduct adversarial training from scratch. (b) Our proposed GALIP conducts adversarial training based on the integrated CLIP model.

In this work, we introduce the pretrained CLIP [30] into text-to-image GANs. The large pretraining of CLIP brings two advantages. First, it enhances the complex scene understanding ability. The pretraining dataset has many complex images under different scenes. Armed with the Vision Transformer (ViT) [8], the image encoder can extract informative and meaningful visual features from complex images that align with the corresponding text descriptions after adequate pretraining. Second, the large pretraining dataset also enables excellent domain generalization ability. It contains various kinds of images, _e.g._, photos, drawings, cartoons, and sketches, collected from a variety of publicly available sources. The varied images enable the CLIP model to map different kinds of images to shared concepts, yielding impressive domain generalization and zero-shot transfer ability. These two advantages of CLIP, complex scene understanding and domain generalization ability, motivate us to build a more powerful text-to-image model.

We propose a novel text-to-image generation framework named Generative Adversarial CLIPs (GALIP). As shown in Figure 1, GALIP integrates the CLIP model [30] in both the discriminator and generator. To be specific, we propose the CLIP-based discriminator and CLIP-empowered generator. The CLIP-based discriminator inherits the complex scene understanding ability of CLIP [30]. It is composed of a frozen ViT-based CLIP image encoder (CLIP-ViT) and a learnable mate-discriminator (Mate-D). The Mate-D is mated to the CLIP-ViT for adversarial training. To retain the knowledge of complex scene understanding in the CLIP-ViT, we freeze its weights and collect the predicted CLIP image features from different layers. Then, the Mate-D further extracts informative visual features from the collected CLIP features to distinguish the synthesized and real images. Based on the complex scene understanding ability of CLIP-ViT and the continuous analysis of Mate-D, the CLIP-based discriminator can assess the quality of generated complex images more accurately.

Furthermore, we propose the CLIP-empowered generator, which exerts the domain generalization ability of CLIP [30]. It is hard for the generator to synthesize complex images directly. Some works employ sketch [10] and layout [20, 22] as bridge domains to alleviate the difficulty. However, such a design requires additional labeled data. Different from these works, the excellent domain generalization of CLIP [30] suggests that there may be an implicit bridge domain, which is easier to synthesize but can be mapped to the same visual concepts through the CLIP-ViT. Thus, we design the CLIP-empowered generator. It is composed of a frozen CLIP-ViT and a learnable mate-generator (Mate-G). The Mate-G first predicts the implicit bridge features from text and noise. Then the bridge features are mapped to visual concepts through the CLIP-ViT. Furthermore, we add some text-conditioned prompts to the CLIP-ViT for task adaptation.
The predicted visual concepts close the gap between text features and target images, which enhances the complex image synthesis ability. As shown in Figure 2, the proposed GALIP achieves \(\sim\)120\(\times\) faster synthesis speed and comparable synthesis ability with significantly fewer trainable parameters and less training data. Overall, our contributions can be summarized as follows:

* We propose an efficient, fast, and more controllable model for text-to-image synthesis that can synthesize high-quality complex images.
* We propose the CLIP-based discriminator, which assesses the quality of generated complex images more accurately.
* We propose the CLIP-empowered generator, which synthesizes images based on text features and predicted CLIP visual features.
* Extensive experiments demonstrate that the proposed GALIP can achieve comparable performance with large pretrained models at significantly smaller computational cost.

Figure 2: Compared with Latent Diffusion Models (LDM) [35], our GALIP achieves comparable zero-shot Fréchet Inception Distance (ZS-FID) with only 320M parameters (0.08B trainable parameters + 0.24B frozen CLIP parameters) and 12M training data. Furthermore, our GALIP only requires 0.04s to synthesize one image, which is \(\sim\)120\(\times\) faster than LDM. Speed is calculated on an NVIDIA 3090 GPU and Intel Xeon Silver 4314 CPU.

## 2 Related Work

**Text-to-Image GANs.** GAN-INT-CLS [34] first adopted conditional GANs to synthesize images from text descriptions. To enable higher-resolution synthesis, StackGAN [54, 55], AttnGAN [48], and DM-GAN [57] stack multiple generators and discriminators. Tao _et al._ [42] proposed a simpler yet effective text-to-image framework called DF-GAN that enables one-stage high-resolution generation. LAFITE [56] introduces a CLIP text-image contrastive loss for text-to-image training and shows large improvements on CC3M [40].

**Text-to-Image Large Models.** Recently, large pretrained autoregressive and diffusion models have shown impressive performance on text-to-image synthesis. DALL-E [33], CogView [6], and M6 [23] leverage VQ-VAE [43] or VQ-GAN [9] to tokenize images into discrete image tokens. Then they take the word tokens and image tokens together to pre-train a large unidirectional transformer for autoregressive generation. Parti [51] proposes a sequence-to-sequence autoregressive model to treat text-to-image synthesis as a translation task. CogView2 [7] employs hierarchical transformers and local parallel autoregressive generation for faster autoregressive image generation. Some works employ the diffusion model [5, 13, 26, 41] to overcome the slow generation of autoregressive models. VQ-Diffusion [11] combines the VQ-VAE [43] and diffusion model [26, 14] to eliminate the unidirectional bias and avoid accumulated prediction errors. GLIDE [27] applies guided diffusion to the problem of text-conditional image synthesis. DALL-E2 [32] combines the CLIP representation and diffusion model to make a CLIP decoder. Latent Diffusion Models (LDM) [35] apply the diffusion model in the latent space to enable training on limited computational resources while retaining image quality. A particularly popular text-to-image LDM is Stable Diffusion [36], an open-source project that provides an easy-to-use interface. Imagen [38] introduces a large language model [31] to provide high-quality text features and proposes an Efficient U-Net for diffusion models.
## 3 Generative Adversarial CLIPs

In this paper, we propose a novel framework for text-to-image synthesis named Generative Adversarial CLIPs (GALIP). To synthesize high-quality complex images, we propose: (i) a novel CLIP-based discriminator that inherits the complex scene understanding ability of CLIP [30] for more accurate image quality assessment; (ii) a novel CLIP-empowered generator that exerts the domain generalization ability of CLIP [30] and induces the CLIP visual concepts to close the gap between text and image features. In the remainder of this section, we first present the overall structure of our GALIP. Then, we introduce the CLIP-based discriminator and CLIP-empowered generator in detail.

### Model Overview

As shown in Figure 3, the proposed GALIP is composed of a CLIP text encoder, a CLIP-based discriminator, and a CLIP-empowered generator. The pretrained CLIP text encoder takes the text description and yields a global sentence vector \(\mathbf{T}\). After the text encoder come the CLIP-empowered generator and CLIP-based discriminator under the GAN framework.

Figure 3: The architecture of the proposed GALIP for text-to-image synthesis. Armed with the CLIP-based discriminator and CLIP-empowered generator, our model can synthesize more realistic complex images.

The CLIP-empowered generator is composed of a frozen CLIP-ViT and a mate generator (Mate-G). There are three main modules in the Mate-G: the bridge feature predictor (Bridge-FP), the prompt predictor, and the image generator. The CLIP-empowered generator has two inputs, the sentence vector \(\mathbf{T}\) encoded by the text encoder and the noise vector \(\mathbf{Z}\) sampled from the Gaussian distribution. The noise vector ensures the diversity of the synthesized images. In the CLIP-empowered generator, the sentence vector and noise are first fed into the bridge feature predictor. The bridge feature predictor translates the sentence vector and noise to the bridge feature for the CLIP-ViT. Furthermore, we add several text-conditioned prompts to the transformer blocks (TransBlock) in CLIP-ViT for task adaptation. Finally, the image generator takes the predicted visual concepts, bridge features, sentence, and noise vectors to synthesize high-quality images.

The CLIP-based discriminator is composed of a frozen CLIP-ViT and a mate discriminator (Mate-D). The CLIP-ViT converts images into image features through a convolution layer and a series of transformer blocks. The CLIP feature extractor (CLIP-FE) in Mate-D collects the image features from different layers in CLIP-ViT. Then it further extracts informative visual features from the collected CLIP features for the quality assessor. Lastly, an adversarial loss is predicted by the quality assessor based on the extracted informative features and sentence vectors. By distinguishing synthesized images from real ones, the discriminator promotes the generator to synthesize higher-quality images.

### CLIP-based Discriminator

In this section, we detail the proposed CLIP-based discriminator, which is composed of a frozen CLIP-ViT and a Mate-D. The CLIP-based discriminator inherits the complex scene understanding ability from the frozen CLIP-ViT. Furthermore, we propose the Mate-D, which is mated to the CLIP-ViT to further extract informative visual features and distinguish real and synthesized images. The CLIP-ViT and Mate-D enable the discriminator to assess the quality of generated complex images more accurately.
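To make the frozen-backbone idea concrete, the following minimal PyTorch sketch collects multi-layer features from a frozen CLIP-ViT. It assumes OpenAI's `clip` package; the tapped layer indices are illustrative, and this is not GALIP's released code, which is available at the repository linked above.

```python
import torch
import clip  # OpenAI's CLIP package

# Load CLIP and freeze its ViT image encoder, as both GALIP branches do.
model, _ = clip.load("ViT-B/32", device="cpu")
vit = model.visual
for p in vit.parameters():
    p.requires_grad_(False)

# Collect intermediate features from several transformer blocks via hooks;
# which layers to tap is an assumption (the paper says "shallow to deep").
feats = []
hooks = [vit.transformer.resblocks[i].register_forward_hook(
             lambda mod, inp, out: feats.append(out))
         for i in (2, 5, 8, 11)]

with torch.no_grad():
    _ = vit(torch.randn(2, 3, 224, 224))     # stand-in image batch
print([tuple(f.shape) for f in feats])       # (seq_len, batch, width) per tapped block
for h in hooks:
    h.remove()
```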
As shown in Figure 4, the Mate-D consists of a CLIP-FE and a quality assessor. To fully utilize the knowledge of complex scene understanding in the CLIP-ViT, the CLIP-FE takes CLIP image features from multiple layers. There are \(N\) CLIP features collected by the CLIP-FE; we name them CLIP Feature \(1\) to \(N\), collected from shallow to deep layers of the CLIP-ViT. To further extract informative visual features from these CLIP features, the CLIP-FE contains a sequence of extraction blocks, each consisting of two convolution layers and two ReLU activation functions. The extracted image feature is summed with the shortcut and the next CLIP feature. There are \(N-1\) extraction blocks stacked in the CLIP-FE, so the CLIP feature \(N\) is only added to the processed image features in the last extraction block. To fuse the CLIP feature \(N\), we append two further convolution layers without CLIP feature addition. The CLIP-FE thus extracts informative visual features for the quality assessor. The sentence vector is then replicated and concatenated with the extracted image features, and an adversarial loss is predicted by two convolution layers to evaluate the image quality. Furthermore, to stabilize the adversarial learning process of the Mate-D, we apply the matching-aware gradient penalty (MAGP) [42] on the collected CLIP features and the corresponding text features.

Figure 4: The architecture of the proposed Mate-D for text-to-image synthesis. It further extracts informative visual features from collected CLIP features and assesses the image quality more accurately.

Based on the complex scene understanding ability of the CLIP-ViT, the CLIP-based discriminator can extract more informative visual features from complex images. The higher-quality extracted visual features make it easier for the discriminator to detect unreal image parts, which improves the discriminative efficiency and thus prompts the generator to generate more realistic images.

### CLIP-empowered Generator

In this section, we detail the proposed CLIP-empowered generator, which is composed of a frozen CLIP-ViT and a Mate-G. The CLIP-empowered generator exerts the domain generalization ability of the CLIP-ViT. Furthermore, we propose the Mate-G, which is mated to the CLIP-ViT to induce useful visual features from the CLIP-ViT and to generate images from text and induced visual features. The Mate-G consists of a bridge feature predictor (Bridge-FP), a prompt predictor, a frozen CLIP-ViT, and an image generator (see Figure 3). We detail them next.

**Bridge Feature Predictor.** The structure of the Bridge-FP is shown in Figure 5, highlighted by the red dashed box. The Bridge-FP consists of an FC (fully-connected) layer and \(M\) fusion blocks (F-BLKs). The input noise is fed into the FC layer and reshaped to \((7,7,64)\) as an initial bridge feature. The initial bridge feature output by the FC layer still contains a lot of noise; therefore, we apply a sequence of F-BLKs to fuse text information into it and make it more meaningful. The F-BLK is composed of two convolution layers (Conv) and two deep text-image fusion blocks (DFBlocks) [42]. The DFBlock has shown its effectiveness in fusing text and image features through stacked affine transformations; thus, we adopt it to fuse text features and intermediate bridge features. There is a shortcut addition in the F-BLK for effective information propagation and gradient back-propagation, as sketched below.
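The following is a hedged sketch of one fusion block: two convolutions, two text-conditioned affine transformations in the spirit of the stacked affines of DFBlock [42], and a shortcut. The exact conditioning form and channel sizes are assumptions made for illustration.

```python
# A minimal F-BLK sketch; the affine conditioning follows the DFBlock idea
# (scale/shift predicted from the sentence vector), details are assumptions.
import torch
import torch.nn as nn

class TextAffine(nn.Module):
    """Predicts channel-wise scale and shift from the sentence vector."""
    def __init__(self, tdim, ch):
        super().__init__()
        self.gamma = nn.Linear(tdim, ch)
        self.beta = nn.Linear(tdim, ch)

    def forward(self, x, t):
        g = self.gamma(t)[:, :, None, None]
        b = self.beta(t)[:, :, None, None]
        return x * (1 + g) + b

class FusionBlock(nn.Module):
    def __init__(self, tdim=512, ch=64):
        super().__init__()
        self.aff1, self.aff2 = TextAffine(tdim, ch), TextAffine(tdim, ch)
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x, t):
        h = self.conv1(self.act(self.aff1(x, t)))   # first Conv + text affine
        h = self.conv2(self.act(self.aff2(h, t)))   # second Conv + text affine
        return x + h                                # shortcut addition

x, t = torch.randn(2, 64, 7, 7), torch.randn(2, 512)
out = FusionBlock()(x, t)   # refined bridge feature, same shape as input
```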
Through the Bridge-FP, the sentence and noise vectors are translated into the bridge feature, which is adjusted to induce meaningful visual concepts from the CLIP-ViT.

**Prompt Predictor.** The CLIP-ViT is pretrained to predict visual features from image data, and there is a large gap between text and image data. To alleviate the difficulty of translating text features into bridge features, we employ prompt tuning [16], which has shown effectiveness in domain transfer for ViTs. We design a prompt predictor, which predicts prompts from the sentence and noise vectors through an FC layer. The predicted text-conditioned prompts are appended to the visual patch embeddings in the CLIP-ViT. Furthermore, we find that it is better not to add prompts to the last few layers of the CLIP-ViT: these layers summarize the visual features and output the final image representations, and prompts predicted from text and noise in the last few layers may degrade performance.

**Image Generator.** The image generator consists of \(K\) generation blocks (G-BLKs). We sum the predicted visual concepts and bridge features through a shortcut addition for effective information propagation and gradient back-propagation. The image generator receives the summed visual features as input and fuses sentence and noise vectors through the DFBlocks [42] in each G-BLK. The intermediate image features grow larger during the generation process through upsampling layers. Finally, the image features are converted into high-resolution RGB images.

Figure 5: The architecture of the proposed CLIP-empowered generator for text-to-image synthesis. Armed with the bridge feature predictor and prompt predictor, it can induce meaningful visual concepts from the frozen CLIP-ViT for image synthesis.

### Objective Functions

To stabilize the training process of adversarial learning, we employ the hinge loss [52] and the one-way discriminator [42]. The full formulation of our GALIP is as follows:

\[\begin{split} L_{D}=&-\mathbb{E}_{x\sim\mathbb{P}_{r}}[\min(0,-1+D(C(x),e))]\\ &-(1/2)\mathbb{E}_{G(z,e)\sim\mathbb{P}_{g}}[\min(0,-1-D(C(G(z,e)),e))]\\ &-(1/2)\mathbb{E}_{x\sim\mathbb{P}_{mis}}[\min(0,-1-D(C(x),e))]\\ &+k\mathbb{E}_{x\sim\mathbb{P}_{r}}[(\|\nabla_{C(x)}D(C(x),e)\|+\|\nabla_{e}D(C(x),e)\|)^{p}],\\ L_{G}=&-\mathbb{E}_{G(z,e)\sim\mathbb{P}_{g}}[D(C(G(z,e)),e)]\\ &-\lambda\mathbb{E}_{G(z,e)\sim\mathbb{P}_{g}}[S(G(z,e),e)],\end{split} \tag{1}\]

where \(z\) is the noise vector sampled from a Gaussian distribution; \(e\) is the sentence vector; \(G\) is the CLIP-empowered generator; \(D\) is the Mate-D; \(C\) is the frozen CLIP-ViT in the CLIP-based discriminator; \(S\) represents the cosine similarity between the CLIP-encoded visual and text features; \(k\) and \(p\) are two hyper-parameters of the gradient penalty; \(\lambda\) is the coefficient of the text-image similarity; and \(\mathbb{P}_{g}\), \(\mathbb{P}_{r}\), \(\mathbb{P}_{mis}\) denote the synthetic, real, and mismatching data distributions, respectively.
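A hedged PyTorch sketch of Eq. (1) is given below. Here \(D\), \(C\) (frozen CLIP-ViT), \(G\), and the CLIP similarity \(S\) are assumed callables, and the MAGP term is written in a simplified per-batch form; this is an illustration of the loss structure, not the reference implementation.

```python
# Sketch of the hinge losses in Eq. (1), assuming D, C, G, S are callables.
import torch

def d_loss(D, C, G, x_real, x_mis, z, e, e_mis, k=2.0, p=6.0):
    f_real = C(x_real).requires_grad_(True)       # track grads on CLIP features
    e_g = e.detach().requires_grad_(True)         # and on the sentence vector
    out_real = D(f_real, e_g)
    # hinge terms: -E[min(0, -1 + D)] for real, -E[min(0, -1 - D)] otherwise
    loss = -torch.clamp(-1.0 + out_real, max=0.0).mean()
    loss += -0.5 * torch.clamp(-1.0 - D(C(G(z, e)), e), max=0.0).mean()
    loss += -0.5 * torch.clamp(-1.0 - D(C(x_mis), e_mis), max=0.0).mean()
    # matching-aware gradient penalty on the real pair
    grads = torch.autograd.grad(out_real.sum(), [f_real, e_g], create_graph=True)
    magp = ((grads[0].flatten(1).norm(dim=1)
             + grads[1].flatten(1).norm(dim=1)) ** p).mean()
    return loss + k * magp

def g_loss(D, C, G, S, z, e, lam=4.0):
    fake = G(z, e)
    # adversarial term plus CLIP text-image similarity reward
    return -D(C(fake), e).mean() - lam * S(fake, e).mean()
```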
## 4 Experiments

In this section, we introduce the datasets, training details, and evaluation metrics employed in our experiments, and then evaluate our proposed GALIP and its variants quantitatively.

**Datasets.** We conduct experiments on four challenging datasets: CUB bird [44], COCO [24], CC3M [40], and CC12M [2]. The CUB bird dataset contains 11,788 images belonging to 200 bird species, with each image corresponding to ten language descriptions. The train and validation splits of the CUB bird dataset follow previous works [48, 54, 55, 57, 42]. Since the birds in the CUB dataset exhibit various shapes, colors, and postures, it is commonly employed to evaluate fine-grained content synthesis. The COCO dataset contains 80k images for training and 40k images for testing, each corresponding to 5 language descriptions. Images in COCO are complex and often contain multiple objects in diverse scenes, so the dataset is widely employed in recent works to evaluate complex image synthesis. CC3M and CC12M are two large datasets containing about 3 and 12 million text-image pairs, respectively; they are commonly adopted for pretraining and for evaluating the zero-shot performance of text-to-image models.

**Training and Evaluation Details.** We choose the ViT-B/32 [30] model as the CLIP model in our GALIP. In the CLIP-based discriminator, the CLIP-FE collects the CLIP features from the \(2^{nd}\), \(5^{th}\), and \(9^{th}\) layers of the CLIP-ViT, and two extraction blocks are stacked in the CLIP-FE. In the CLIP-empowered generator, the Bridge-FP contains 4 fusion blocks, and the image generator contains 6 generation blocks for \(224\times 224\) image synthesis. The prompt predictor predicts 8 prompts for TransBlocks 2 to 10 in the CLIP-ViT. We conduct ablation studies on these design choices. The hyper-parameters \(k\) and \(p\) of the discriminator are set to 2 and 6, following [42]. The generator hyper-parameter \(\lambda\) is set to 4 for all datasets. Furthermore, we employ the Adam optimizer [17] with \(\beta_{1}=0.0\) and \(\beta_{2}=0.9\) to train our model. According to the two-timescale update rule (TTUR) [12], the learning rate is set to 0.0001 for the generator and 0.0004 for the discriminator. Following previous text-to-image works [48, 47, 57], we adopt the Fréchet Inception Distance (FID) [12] and CLIPSIM [47] to evaluate image fidelity and text-image semantic consistency. All GALIP models are trained on 8\(\times\)3090 GPUs, for 0.5, 1.5, 2, and 3 days on the CUB, COCO, CC3M, and CC12M datasets, respectively.
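The TTUR setup above translates into a short PyTorch configuration like the following; the module names are placeholders.

```python
# TTUR: slower generator, faster discriminator; Adam with beta1=0, beta2=0.9.
import torch
import torch.nn as nn

generator, mate_d = nn.Linear(10, 10), nn.Linear(10, 10)   # placeholder modules
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_d = torch.optim.Adam(mate_d.parameters(), lr=4e-4, betas=(0.0, 0.9))
```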
### Quantitative Evaluation

To evaluate the performance of our GALIP, we compare the proposed model with several state-of-the-art methods [56, 57, 53, 42, 37, 11] that have achieved impressive results in text-to-image synthesis. The results are shown in Table 1. Compared with other leading models, our GALIP achieves significant improvements on both the CUB and COCO datasets. In particular, compared with the recently proposed LAFITE [56], which employs a CLIP text-image contrastive loss for text-to-image training, our GALIP decreases the FID from 14.58 to 10.08 and improves the CLIPSIM (CS) from 0.3125 to 0.3164 on the CUB dataset. Furthermore, our GALIP decreases the FID on COCO from 8.21 to 5.85. Compared with VQ-Diffusion [11], which adopts diffusion models for text-to-image synthesis, our GALIP decreases the FID from 10.32 to 10.08 on the CUB dataset and from 13.86 to 5.85 on COCO. The quantitative comparisons on the CUB and COCO datasets demonstrate that our GALIP is more effective at synthesizing high-fidelity images, especially for complex image generation.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{CUB} & \multicolumn{2}{c}{COCO} \\ \cline{2-5} & FID \(\downarrow\) & CS \(\uparrow\) & FID \(\downarrow\) & CS \(\uparrow\) \\ \hline DM-GAN [57] & 16.09 & - & 32.64 & - \\ XMC-GAN [53] & - & - & 9.30 & - \\ DAE-GAN [37] & 15.19 & - & 28.12 & - \\ DF-GAN [42] & 14.81 & 0.2920 & 19.32 & 0.2972 \\ LAFITE [56] & 14.58 & 0.3125 & 8.21 & 0.3335 \\ VQ-Diffusion [11] & 10.32 & - & 13.86 & - \\ \hline GALIP (Ours) & **10.08** & **0.3164** & **5.85** & **0.3338** \\ \hline \hline \end{tabular} \end{table} Table 1: The results of FID and CLIPSIM (CS) compared with the state-of-the-art methods on the test sets of CUB and COCO.

Moreover, we evaluate the zero-shot text-to-image synthesis ability of our GALIP. The results are shown in Table 2. Compared with LAFITE [56] trained on CC3M, our GALIP (CC3M) decreases the FID from 26.94 to 16.12 significantly. This demonstrates that integrating the CLIP model in the generator and discriminator is more effective than only introducing a CLIP loss for the GAN model. Compared with autoregressive (AR) and diffusion (DF) models that are pretrained with much larger model sizes and datasets, our GALIP also achieves competitive performance. In particular, compared with LDM [35], one of the most important open-source large pretrained models, our GALIP achieves better performance with far fewer model parameters and much less data. Furthermore, as shown in Figure 2, our GALIP only requires 0.04s to generate one image, which is \(\sim\)120\(\times\) faster than LDM [35]. Besides, our GALIP can run inference quickly on a CPU without additional acceleration, which significantly reduces the hardware requirements for users. In addition, the computational cost of pretraining our GALIP is far lower than that of large pretrained autoregressive and diffusion models: the CC12M version of GALIP is pretrained on 8\(\times\)3090 GPUs for only 3 days, whereas those models require hundreds of GPUs and many weeks to pretrain.

### Qualitative Evaluation

To evaluate the visual quality of the synthesized images, we first compare images synthesized by LAFITE [56], VQ-Diffusion [11], and our GALIP trained on COCO in Figure 6, and then compare our GALIP (CC12M) with LDM (LAION-400M) [35, 36] in Figure 7.

Figure 6: Examples of images synthesized by LAFITE [56], VQ-Diffusion [11], and our proposed GALIP conditioned on text descriptions from the test sets of the CUB and COCO datasets.

As shown in the 1\({}^{st}\), 2\({}^{nd}\), 4\({}^{th}\), and 5\({}^{th}\) columns of Figure 6, the birds synthesized by LAFITE [56] and VQ-Diffusion [11] contain broken or wrong shapes. Moreover, both LAFITE [56] and VQ-Diffusion [11] lose some fine-grained visual features (e.g., 1\({}^{st}\), 2\({}^{nd}\), 5\({}^{th}\), and 6\({}^{th}\) columns), which makes the synthesized images lack detail and look unreal. In contrast, the images synthesized by our GALIP have correct object shapes and clear fine-grained contents. The superiority is more obvious on complex COCO images, which contain various shapes and multiple objects. As shown in the 7\({}^{th}\), 8\({}^{th}\), 9\({}^{th}\), and 10\({}^{th}\) columns of Figure 6, LAFITE [56] and VQ-Diffusion [11] cannot synthesize the right shapes of "train", "children", "woman", and "stuffed bear". Furthermore, they also cannot synthesize the right visual concepts of "showing off toy cell phone" and "sitting on a book shelf".
However, armed with the proposed CLIP-based D and CLIP-empowered G, our GALIP can cope with stricter visual requirements, synthesizing various shapes of different objects (see the 8\({}^{th}\), 9\({}^{th}\), 10\({}^{th}\), and 12\({}^{th}\) columns) and presenting the right visual concepts in the synthesized images. We also observe that LAFITE [56] and VQ-Diffusion [11] cannot synthesize correct human facial features; for example, as shown in the 8\({}^{th}\), 9\({}^{th}\), and 12\({}^{th}\) columns, they cannot synthesize realistic human faces, whereas our GALIP synthesizes these features correctly. Moreover, we compare the images synthesized by LDM (LAION-400M) [35, 36] and our GALIP (CC12M) in Figure 7. As shown in the 1\({}^{st}\), 4\({}^{th}\), 5\({}^{th}\), 8\({}^{th}\), and 11\({}^{th}\) columns of Figure 7, the LDM does not generate the objects ("ghost", "teddy bear", "modem", "person", "model") described in the texts, but our GALIP synthesizes these objects correctly. Also, our model can generate correct visual features such as "shining eyes", "Blue Lighthouse", "smiling statue", and "surprised girl" in the 3\({}^{rd}\), 6\({}^{th}\), 7\({}^{th}\), and 10\({}^{th}\) columns. Furthermore, as shown in the 9\({}^{th}\), 10\({}^{th}\), and 12\({}^{th}\) columns of Figure 7, our GALIP retains its superior performance on human face synthesis. The extensive qualitative results demonstrate the superiority and effectiveness of our proposed GALIP, which is able to generate high-fidelity, creative, and complex images with various shapes and multiple objects.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline Model & Type & Param [B] & Data size [M] & ZS-FID \(\downarrow\) \\ \hline DALL-E [33] & AR & 12 & 250 & 27.5 \\ CogView [6] & AR & 4 & 30 & 27.1 \\ CogView2 [7] & AR & 6 & 30 & 24.0 \\ Parti-350M [51] & AR & 0.35 & \(>\)800 & 14.10 \\ Parti-20B [51] & AR & 20 & \(>\)800 & 7.23 \\ \hline GLIDE [27] & DF & 5 & 250 & 12.24 \\ LDM [35] & DF & 1.45 & 400 & 12.63 \\ DALL-E 2 [32] & DF & 6.5 & 250 & 10.39 \\ Imagen [38] & DF & 7.9 & 860 & 7.27 \\ eDiff-I [1] & DF & 9.1 & 1000 & 6.95 \\ \hline LAFITE [56] & GAN & 0.15+0.08 & 3 & 26.94 \\ GALIP (CC3M) & GAN & 0.24+0.08 & 3 & 16.12 \\ GALIP (CC12M) & GAN & 0.24+0.08 & 12 & 12.54 \\ \hline \hline \end{tabular} \end{table} Table 2: We compare the performance of large pretrained autoregressive models (AR), diffusion models (DF), and GANs under the zero-shot setting on the COCO test dataset.

Figure 7: Text-to-image samples from GALIP (CC12M) and Latent Diffusion (LAION-400M) [35, 36]. We sample 16 images from each given text description and randomly select one as the final generation result.

Additionally, we conduct experiments that show the smooth latent space of our GALIP. Current autoregressive and diffusion models are sensitive to input sentences; this instability forces users to try many prompts to obtain satisfactory images. In contrast, our GALIP inherits the smooth latent space of GANs, which enables gradual and smooth changes of the image along with text changes. As shown in Figure 8, there is a smooth transition of the synthesized images from top to bottom and left to right. The smooth latent space makes the degree of stylization of the image controllable: users can fine-tune synthesized image styles like a style knob, and they can also create new styles by blending different image styles, as highlighted by the red dashed lines.
### Ablation Study

To verify the effectiveness of each component in the proposed GALIP, we conduct ablation studies on the test set of the COCO dataset. The components evaluated in this subsection include the CLIP-based D (CD) and the CLIP-empowered G (CG). We further conduct ablation studies on the Bridge-FP (BFP) and Prompt Predictor (PP) in the CLIP-empowered G, and on the CLIP-FE (CFE) in the CLIP-based D. Furthermore, we compare our CLIP-FE with the CCM&CSM of Projected GAN [39], which employs a U-Net architecture to enable multi-scale feedback. In addition, we investigate the layer-choice strategy for the CLIP-FE and the Prompt Predictor. The results on the COCO dataset are shown in Table 3.

**Baseline.** Our baseline is a one-stage text-to-image GAN [42] composed of a CLIP text encoder and a CNN-based generator and discriminator; it generates complex images directly from sentence vectors.

**Effect of CLIP-based D and CLIP-FE.** The CLIP-based D decreases the FID from 17.31 to 7.92 and improves the CLIPSIM (CS) from 0.2996 to 0.3221. The results demonstrate that the complex scene understanding ability of the CLIP-ViT promotes the complex image synthesis ability significantly. Furthermore, we compared our CLIP-FE (CFE) with CCM&CSM [39]; our CLIP-FE achieves better FID and CLIPSIM, showing that it is more effective at extracting informative visual features from the CLIP-ViT.

**Effect of CLIP-empowered G and Bridge-FP.** The CLIP-empowered G with Bridge-FP further decreases the FID from 7.92 to 6.52 and improves the CLIPSIM from 0.3221 to 0.3301. This demonstrates that the predicted bridge features and the CLIP-ViT effectively enhance the complex image synthesis ability.

**Effect of Prompt Predictor.** The proposed Prompt Predictor (PP) further decreases the FID from 6.52 to 5.85 and improves the CLIPSIM from 0.3301 to 0.3338. The result demonstrates that the Prompt Predictor makes the CLIP-ViT more suitable for generation tasks and induces more meaningful features from the CLIP-ViT to improve the generative ability.

\begin{table} \begin{tabular}{l|c|c} \hline \hline Architecture & FID \(\downarrow\) & CS \(\uparrow\) \\ \hline Baseline & 17.31 & 0.2996 \\ Baseline w/ CD w/ CFE & 7.92 & 0.3221 \\ Baseline w/ CD w/ CCM\&CSM & 10.77 & 0.3123 \\ Baseline w/ CD w/ BFP & 6.52 & 0.3301 \\ Baseline w/ CD w/ BFP w/ PP (GALIP) & **5.85** & **0.3338** \\ \hline GALIP w/ CFE (\(2^{nd}\)) & 13.41 & 0.3015 \\ GALIP w/ CFE (\(5^{th}\)) & 8.60 & 0.3145 \\ GALIP w/ CFE (\(12^{th}\)) & 10.72 & 0.3104 \\ GALIP w/ CFE (\(2^{nd}\),\(5^{th}\)) & 6.70 & 0.3301 \\ GALIP w/ CFE (\(2^{nd}\),\(5^{th}\),\(12^{th}\)) & 6.61 & 0.3305 \\ GALIP w/ CFE (\(2^{nd}\),\(5^{th}\),\(9^{th}\)) & **5.85** & **0.3338** \\ GALIP w/ CFE (\(2^{nd}\),\(5^{th}\),\(8^{th}\),\(9^{th}\)) & 6.01 & 0.3305 \\ \hline GALIP w/ PP (\(1^{st}\)-\(12^{th}\)) & 6.24 & 0.3320 \\ GALIP w/ PP (\(1^{st}\)-\(9^{th}\)) & **5.85** & **0.3338** \\ GALIP w/ PP (\(1^{st}\)-\(6^{th}\)) & 6.40 & 0.3310 \\ GALIP w/ PP (\(1^{st}\)-\(3^{rd}\)) & 6.52 & 0.3305 \\ \hline \hline \end{tabular} \end{table} Table 3: The performance of different components of our model on the test set of COCO.

Figure 8: Images synthesized by interpolating four sentence embeddings. Our GALIP supports gradual changes when interpolating sentence embeddings describing different image styles. This makes the degree of stylization of the image controllable and enables creating new styles by blending different styles.

**CLIP Layer Selection.** We find that the last few layers of the CLIP-ViT degrade the performance of the CLIP-based D. The reason may be that the first layers of the CLIP-ViT extract useful visual features and understand complex images, while the last layers focus on the generalization ability to align
with high-level concepts in the text features. This generalization ability may degrade the performance of the CLIP-based D because it reduces the differences between synthetic and real images and thereby weakens the discriminator. Conversely, since the CLIP-empowered G requires the generalization ability to map the bridge feature to meaningful visual features, adding prompts to the last few layers may degrade this generalization ability. We therefore extract the CLIP features from the \(2^{nd}\), \(5^{th}\), and \(9^{th}\) layers in the CLIP-based D, and add prompts to the \(1^{st}\)-\(9^{th}\) layers. We also find that extracting more CLIP features does not lead to better performance.

### Limitations

Our GALIP shows superiority in text-to-image synthesis, but some limitations should be considered in future studies. First, our model employs the CLIP model to provide text features for the generator and discriminator. However, current models [38] show that generic large language models [31] (e.g., T5) effectively improve the performance of text-to-image synthesis; replacing the CLIP text encoder with T5 may further improve performance. Second, the model size and pretraining dataset are much smaller than those of other large pretrained models [1, 32, 35, 38, 51], which limits the ability to synthesize imaginary images (see Figure 9). Pretraining on a larger dataset with a larger model size may benefit performance. We will address these limitations in future work.

## 5 Conclusion

In this paper, we propose a novel Generative Adversarial CLIPs (GALIP) framework for text-to-image synthesis. Compared with previous models, our GALIP can synthesize higher-quality complex images. Moreover, we propose a CLIP-based discriminator and a CLIP-empowered generator, which exert the complex scene understanding and domain generalization abilities of CLIP. Our GALIP achieves significant improvements on challenging datasets. Furthermore, current large models are pretrained for either generative or understanding tasks; in this work, we integrate an understanding model (CLIP-ViT) into a generative model and achieve impressive results. This shows that there are commonalities between understanding and generative models, which may be enlightening for building a general large model.
**arXiv:2307.07764v3** (2023-07-15). Bastian Pfeifer, Mateusz Krzyzinski, Hubert Baniecki, Anna Saranti, Andreas Holzinger, Przemyslaw Biecek. http://arxiv.org/abs/2307.07764v3
# Explaining and visualizing black-box models through counterfactual paths

###### Abstract

Explainable AI (XAI) is an increasingly important area of machine learning research, which aims to make black-box models transparent and interpretable. In this paper, we propose a novel approach to XAI that uses the so-called _counterfactual paths_ generated by conditional permutations of features. The algorithm measures feature importance by identifying sequential permutations of features that most influence changes in model predictions. It is particularly suitable for generating explanations based on counterfactual paths in knowledge graphs incorporating domain knowledge. _Counterfactual paths_ introduce an additional graph dimension to current XAI methods in both explaining and visualizing black-box models. Experiments with synthetic and medical data demonstrate the practical applicability of our approach.

Keywords: explainable machine learning, knowledge graph, feature importance, counterfactual explanation

## 1 Introduction

Explainable AI (XAI) methods are becoming a promising solution to the emerging challenge of providing human-understandable justifications for AI decisions (Holzinger et al., 2022), e.g. in medicine (Gozzi et al., 2022; Krzyzinski et al., 2023; Xu et al., 2022), biology (Anguita-Ruiz et al., 2020) or finance (Bucker et al., 2022). One of the key problems in model explanation is determining the importance of variables, and to this end many methods have been proposed, either specific to some model families or model-agnostic; here we focus on the latter. One approach to XAI is through counterfactual explanations, which involve generating alternative scenarios that could have led to different prediction outcomes. Counterfactual explanations help users understand how an AI model arrived at a particular decision and what factors influenced that decision (Chou et al., 2022). Another widely adopted approach is feature importance methods that express the reliance of model predictive performance on a specific feature in the data (Casalicchio et al., 2019; Fisher et al., 2019). However, these methods most often present the importance of variables separately, without taking into account often complex relationships between variables, such as interactions or correlations. To challenge this status quo, we introduce the counterfactual paths (CPATH) algorithm for model-agnostic global explanations of machine learning predictive models trained on tabular data. This algorithm is inspired by both counterfactual explanations and permutation feature importance (see Figure 1 for a graphical illustration). Compared to classical feature importance methods, CPATH provides additional graph information about the counterfactual dependence of the black-box model on particular features, and thus can help to uncover its decision-making process.
Counterfactual paths represent the relationships between input features and the model's output, allowing users to explore how changing individual features or their combinations affects the model's predictions. One of the main advantages of counterfactual paths is their ability to provide insights into how the model works and to identify potential biases or confounding factors that may affect its predictions. These insights can then be used to improve the model's accuracy and robustness. Furthermore, counterfactual paths and their visualization can provide a more intuitive and interpretable explanation of the model's behaviour than traditional feature importance methods. For example, rather than simply providing a list of the top features that contribute to the model's predictions, counterfactual paths can show how changes to specific combinations of features lead to changes in the output of the model. This can help users to better understand the underlying patterns in the data and the overall behaviour of the model.

## 2 Related work

**Model-agnostic feature importance.** Explaining a predictive model on a _global_ level aims to understand how important a given feature is to its performance. Several model-agnostic (i.e., explaining any black-box function) feature importance measures have been proposed. A widely adopted approach is permutation feature importance (Fisher et al., 2019), which has also been extended to local and partial importance (Casalicchio et al., 2019). Molnar et al. (2021) introduce confidence intervals for permutation feature importance, and Au et al. (2022) propose to group features to explain their combined importance. In practice, estimating the marginal importance of a single feature without taking into account the correlation structure in the data is a challenge (Molnar et al., 2023). To address feature dependence, Watson and Wright (2021) propose to measure the conditional predictive impact between features and predictions using the knockoff sampling framework, also for categorical features (Blesch et al., 2023). Related is work on the conditional estimation of Shapley-based feature attributions (Aas et al., 2021). As opposed to Shapley-based feature importance measures (Casalicchio et al., 2019; Covert et al., 2020), in this paper we specifically relate permutation importance to counterfactual explanations.

**Counterfactual explanations.** A counterfactual explanation of a model prediction describes the smallest change to the feature values that changes the predicted class. It is a useful _local_ "what-if" explanation for making actionable decisions (Wachter et al., 2017; Saranti et al., 2022). One can find counterfactuals using model-specific optimization methods, e.g. gradients. Our work relates more to model-agnostic approaches applicable to any black-box model (Karimi et al., 2020). Dandl et al. (2020) propose multi-objective counterfactual explanations that take into account the data manifold, i.e. how likely it is that the counterfactual data point originates from the training data distribution. Mothilal et al. (2020) focus on finding a diverse set of counterfactual data points for a given prediction. Most recent work considers the robustness of such explanations to (potentially adversarial) data perturbations (Pawelczyk et al., 2023). Contrary to related work, we use the intuition behind counterfactual explanations to construct a novel global explanation of feature importance.
**Domain knowledge in the context of model explanation.** Relating explanations to domain knowledge is an emerging research topic (Biecek and Burzykowski, 2021). Crucially, interpreting models should be done with respect to the data distribution and its correlation structure (Baniecki et al., 2023; Molnar et al., 2021). One idea is to build a surrogate model based on explanations of a black-box function, achieving an interpretable predictive interface consistent with domain knowledge (Alaa et al., 2021; Gosiewska et al., 2021). Other approaches consider including domain knowledge directly in the algorithm used to learn a predictive function (Confalonieri et al., 2021; Panigutti et al., 2020; Pfeifer et al., 2022), which is challenging to do in an algorithm-agnostic way. In this paper, we introduce feature importance explanation and visualization through counterfactual paths that provide additional information about the graph structure of features. Domain knowledge can both be derived from explanations and participate in estimating more accurate explanations.

## 3 Counterfactual paths for explaining black-box models

**Intuition.** The herein proposed CPATH algorithm randomly selects paths through the feature space, where each feature represents a node in a fully-connected graph or a user-specified knowledge graph that masks (i.e., restricts) the sampling scheme. Once a path is created, CPATH permutes the features of the sampled path one after the other and terminates as soon as a certain number of class labels swap to the inverse class. We call these paths _counterfactual paths_. From these counterfactual paths, we derive an adjacency matrix weighted by the inferred path lengths. Based on the weighted adjacency matrix, the _global_ importance of a certain feature is determined.

### Mathematical formulation

Let \(\mathcal{M}\colon\mathbb{R}^{p}\to\{1,\ldots,g\}\), where \(g\) is the number of classes, denote the model of interest, which is a classifier. Given an observation \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{p})\in\mathbb{R}^{p}\), the model's prediction is denoted as \(\mathcal{M}(\mathbf{x})\). We describe the problem using weighted directed graphs (weighted digraphs). A graph \(G_{\mathcal{M}}=(V,E,w_{\mathcal{M}})\) is defined as an ordered triple, where \(V=\{1,\ldots,p\}\) represents the set of predictors (explanatory variables) that form the vertices of the graph. The set of directed edges (arcs), denoted as \(E\), contains ordered pairs of vertices \((i,j)\) such that there exists a directed edge from vertex \(i\) to vertex \(j\). The function \(w_{\mathcal{M}}\colon E\to\mathbb{R}\) maps the edges to their weights and can also be represented as an adjacency matrix \(W\). The variables used in the model \(\mathcal{M}\), which constitute the set of vertices \(V\), are predetermined, i.e., these are the variables used in the model fitting process. Similarly, the set \(E\) is known: its form can be derived from a domain knowledge graph describing the process of generating the data. However, if the domain knowledge graph is unknown, it can be assumed that \(G_{\mathcal{M}}\) is a complete digraph, meaning that each pair of vertices is connected by a symmetric pair of directed arcs. The objective is to estimate the function \(w_{\mathcal{M}}\) (related to the adjacency matrix \(W\)) by aggregating _counterfactual paths_ obtained from sampling trajectories of finite-length random walks on the unweighted version of the graph, \(\widetilde{G_{\mathcal{M}}}\).
Definition 1 (Counterfactual path): Given a model's predictions \(\mathcal{M}(\mathbf{X})=[\mathcal{M}(\mathbf{x}_{1}),\ldots,\mathcal{M}(\mathbf{x}_{n})]\in\{1,\ldots,g\}^{n}\) for a dataset \(\mathbf{X}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})^{T}\), we call a sequence \(\mathbf{v}=(v_{1},\ldots,v_{k})\), \(k\leq p\), a counterfactual path under counterfactual policy \(\Psi\) if perturbing (e.g., by permuting) the values of the features \(v_{1},\ldots,v_{k}\) leads to a substantial change in the model's predictions according to the indicator given by \(\Psi\), i.e., if

\[\Psi(\mathcal{M}(\mathbf{X}),\mathcal{M}(\mathbf{X}^{\prime}_{\mathbf{v}}))=1.\]

In the above definition, \(\mathbf{X}^{\prime}_{\mathbf{v}}\) represents the dataset with perturbed values of the features in the sequence \(\mathbf{v}\) (note that the perturbations are performed sequentially, for individual variables as they are added to the path). The counterfactual policy \(\Psi\) is used to determine whether a perturbation of features leads to a significant change in predictions. Various choices for the counterfactual policy \(\Psi\) are possible. One approach is to define \(\Psi\) based on the fraction of changed predictions \(p\), where

\[p=\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\left(\mathcal{M}(\mathbf{x}_{i})\neq\mathcal{M}(\mathbf{x}^{\prime}_{i})\right).\]

With that, we define \(\Psi\) in a stochastic way as follows:

\[\Psi_{\text{s}}(\mathcal{M}(\mathbf{X}),\mathcal{M}(\mathbf{X}^{\prime}_{\mathbf{v}}))\coloneqq X\sim \mathrm{Bern}(p).\]

Alternatively, \(\Psi\) can be defined as an indicator of whether the fraction of changed predictions exceeds a threshold \(\kappa\) chosen by the user, i.e.,

\[\Psi_{\text{d}}(\mathcal{M}(\mathbf{X}),\mathcal{M}(\mathbf{X}_{\mathbf{v}}^{\prime}))\coloneqq\mathbb{1}\left(p>\kappa\right).\]

Another option is to penalize long paths by considering their length \(k\) in the definition of \(\Psi\). The selection of an appropriate counterfactual policy \(\Psi\) is a crucial step in the proposed methodology. Once the counterfactual policy is defined, Algorithm 1 is employed to identify and collect counterfactual paths. This algorithm takes several inputs, including the model of interest \(\mathcal{M}\), the dataset \(\mathbf{X}\), the counterfactual policy \(\Psi\), the unweighted graph \(\widetilde{G_{\mathcal{M}}}\), the number of iterations \(n_{iter}\), and the maximal path length \(k\). The CPATH algorithm initializes a set to store counterfactual paths and iterates a specified number of times. In each iteration, a new path is generated by sampling vertices from the unweighted graph and checking for significant changes in the model's predictions based on the counterfactual policy.

```
Data: M, X, Psi, G~_M, n_iter ∈ Z+, k ∈ Z+
CPATHS <- {}
for i = 1 to n_iter do
    v <- []
    for j = 1 to k do
        v_j <- sample_vertex(G~_M)
        v <- v + [v_j]
        if Psi(M(X), M(X'_v)) = 1 then
            CPATHS <- CPATHS ∪ {v}
            break
        end if
    end for
end for
```
**Algorithm 1:** CPATH: counterfactual paths generation
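The following is one concrete Python reading of Algorithm 1 under the stochastic policy \(\Psi_{\text{s}}\). The dict-based graph encoding, the random-walk interpretation of `sample_vertex`, and the helper names are illustrative assumptions; a reference implementation is provided in the authors' `cpath` package.

```python
# A runnable sketch of Algorithm 1 with the stochastic policy Psi_s.
import numpy as np

rng = np.random.default_rng(0)

def psi_stochastic(y_orig, y_pert):
    """Bernoulli draw with p = fraction of changed predictions."""
    p = np.mean(y_orig != y_pert)
    return rng.random() < p

def cpath(model, X, graph, n_iter=1000, k=5, psi=psi_stochastic):
    """Collect counterfactual paths via random walks on `graph`
    (a dict: vertex -> list of neighbour vertices)."""
    y_orig = model.predict(X)
    paths = []
    for _ in range(n_iter):
        X_pert = X.copy()
        path = []
        v = rng.choice(list(graph))                      # random start vertex
        for _ in range(k):
            path.append(v)
            X_pert[:, v] = rng.permutation(X_pert[:, v])  # sequentially permute feature v
            if psi(y_orig, model.predict(X_pert)):
                paths.append(list(path))                  # counterfactual path found
                break
            v = rng.choice(graph[v])                      # walk to a neighbour
    return paths

# Example usage (sketch): rf = RandomForestClassifier().fit(X, y)
# full = {j: [i for i in range(X.shape[1]) if i != j] for j in range(X.shape[1])}
# paths = cpath(rf, X, full)
```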
Subsequently, the observed counterfactual paths are used in Algorithm 2 to estimate the function \(w_{\mathcal{M}}\), i.e., to derive the corresponding adjacency matrix \(W\). Specifically, the transition matrix \(\mathbf{T}\) is calculated by iterating through the generated paths. For each path, the length \(l\) of the path is computed. Then, for each consecutive pair of vertices \((v_{i},v_{i+1})\) in the path, the corresponding entry in \(\mathbf{T}\) is incremented by \(k-l+1\). This accounts for the penalization of longer paths, as shorter paths contribute more to the edge weights. The resulting adjacency matrix should capture the underlying dependencies and interactions between variables based on the observed counterfactual paths.

```
Data: CPATHS, k ∈ Z+
T <- 0_{p×p}
for v in CPATHS do
    l <- length(v)
    if l = 1 then
        T[v_1, v_1] <- T[v_1, v_1] + k - l + 1
    end if
    for i = 1 to l - 1 do
        T[v_i, v_{i+1}] <- T[v_i, v_{i+1}] + k - l + 1
    end for
end for
```
**Algorithm 2:** CPATH: transition matrix generation

Based on the determined adjacency matrix \(W\), we can calculate the importance of individual variables. A straightforward approach is to compute the fraction of weights of edges adjacent to the node corresponding to variable \(\mathbf{X}_{j}\). This can be expressed as:

\[Imp(\mathbf{X}_{j})=\frac{\sum_{i\in[p]}\mathbf{T}[v_{i},v_{j}]}{\sum_{i\in[p],k\in[p]}\mathbf{T}[v_{i},v_{k}]}, \tag{1}\]

where \(\mathbf{T}\) is the transition matrix obtained from Algorithm 2 and \([p]\) denotes the set of indices from \(1\) to \(p\). Another way to estimate the importance values is to consider the stationary distribution of a Markov chain with transition matrix \(\mathbf{P}\), the row-wise normalized matrix \(\mathbf{T}\). The stationary distribution represents the long-term probability distribution of being at each node in the graph; it is a vector \(\pi\) that satisfies the condition \(\pi\mathbf{P}=\pi\). In this setting, the importance of variable \(\mathbf{X}_{j}\) is given by the corresponding element \(\pi_{j}\) of the stationary distribution vector.
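Continuing the sketch above, Algorithm 2 and the two importance estimators can be written as follows; the eigenvector-based computation of the stationary distribution assumes every row of \(\mathbf{T}\) carries positive mass and is one possible numerical choice.

```python
# Sketch of Algorithm 2 plus Eq. (1) and the stationary-distribution variant.
import numpy as np

def transition_matrix(paths, n_features, k):
    T = np.zeros((n_features, n_features))
    for v in paths:
        l = len(v)
        if l == 1:
            T[v[0], v[0]] += k - l + 1          # self-loop for length-1 paths
        for i in range(l - 1):
            T[v[i], v[i + 1]] += k - l + 1      # shorter paths weigh more
    return T

def importance_eq1(T):
    return T.sum(axis=0) / T.sum()              # fraction of incoming edge weight

def importance_stationary(T, eps=1e-12):
    P = T / (T.sum(axis=1, keepdims=True) + eps)   # row-normalize T
    vals, vecs = np.linalg.eig(P.T)                # pi P = pi: left eigenvector
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return pi / pi.sum()
```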
### Counterfactual paths for causal modeling

In the previous section, the transition matrix \(\mathbf{T}\) was used to derive feature importances of single features. However, it also contains information about important interactions of features in the form of a weighted directed graph. In fact, the generated counterfactual paths in \(CPATHS\) can be used for causal modeling and inference, for instance with Bayesian networks. From the absence and presence of features within the detected counterfactual paths, one can estimate the conditional probability distribution between the important features, which may help to discover causal effects. We illustrate this on two concrete examples in Section 5.

## 4 Evaluation and experimental set-up

**General approach.** We have generated synthetic data under three conditions: conditional dependency, correlation, and conditional independence (see Appendix section A.1). For each of these datasets, two features were simulated fulfilling the above-mentioned conditions; the remaining features were added as noise. A random forest was trained on the generated synthetic data and predictions were made on the training set. The model was treated as a black box, and explanation methods were applied to verify the relevant features.

### Explaining the model

Our evaluation strategy was to compare the feature importance scores computed by the explainers (e.g., SHAP (Aas et al., 2021) and LIME (Ribeiro et al., 2016)) with the model-internal Gini impurity scores generated by the random forest, which we consider here as ground truth. A high correlation between the explainers' feature importance scores and the model's Gini impurity values indicates that the explainer can detect the underlying patterns in the model's behaviour. Here, we report the correlation with the model-specific Gini impurity scores associated with each simulated feature. We compare our approach with SHAP, LIME, and Permutation Feature Importance (PFI; Fisher et al., 2019). We report and analyze the performance of PFI because it relies on a permutation scheme to derive feature importances and is thus related to our method. Unlike CPATH, however, PFI requires the ground-truth labels to determine feature importance.

### Explaining the data

We are aware that the Gini impurity scores themselves are biased due to several shortcomings (Nembrini et al., 2018). As a consequence, we also studied the capability of the explainers to detect the most important features within the data. We could show that the model-specific Gini impurity scores efficiently reflect the simulated ground truth, which supports the validity of the first evaluation set-up described in Section 4.1. In this particular investigation, however, the focus was on the simulated features within the data and not on the features the model actually preferred, which is addressed by our first evaluation (see Section 4.1). In the case of SHAP and LIME, feature importance scores for _global_ explanations were computed as the mean absolute values of the local explanation scores. For this data-centric experiment, we also compared our method to Conditional Predictive Impact (CPI; Watson and Wright, 2021). CPI provides variable importance measures taking into account the association between one or several features and a given outcome.

### Evaluation of the explanations' quality

There are several methods used to measure the quality of explanations; in a survey of surveys, Schwalbe and Finzel (2023) describe a taxonomy of explainable AI methods and discuss the fundamental differences and commonalities between several explanation-quality metrics. The sensitivity metric expresses the expectation that a substantial change in the model's decision logic will also be reflected in its explanations. Furthermore, fidelity (also referred to as faithfulness) is mainly used in conjunction with a surrogate model (Ancona et al., 2017); it describes the agreement between the original model and the surrogate model that is used to provide the explanations. Typically, models strive for low infidelity, which is computed by the following function:

\[\text{INFD}(\Phi,\mathbf{f},\mathbf{x})=\mathbb{E}_{\mathbf{I}\sim\mu_{\mathbf{I}}}\bigg{[}\Big{(}\mathbf{I}^{T}\Phi(\mathbf{f},\mathbf{x})-\big{(}\mathbf{f}(\mathbf{x})-\mathbf{f}(\mathbf{x}-\mathbf{I})\big{)}\Big{)}^{2}\bigg{]} \tag{2}\]

Equation 2 provides the infidelity for a model computing the function \(\mathbf{f}\), with \(\mathbf{x}\) as input. The explanation method is represented by \(\Phi\), called the "explanation functional". This metric is based on the definition of suitable perturbations, tailored to the needs of the task, the used model, the input, and the XAI method.
They are represented by the difference \(\mathbf{I}=\mathbf{x}-\mathbf{x}_{0}\in\mathbb{R}^{d}\) between the input \(\mathbf{x}\) and \(\mathbf{x}_{0}\), where \(\mathbf{x}_{0}\) can be a baseline value, a noisy baseline value, or even a random variable (RV); in Equation 2, \(\mathbf{I}\sim\mu_{\mathbf{I}}\) is an RV with probability measure \(\mu_{\mathbf{I}}\). Another metric related to infidelity to a certain extent, since it is also based on the principle of using perturbations, is sensitivity. In this case, it is not the input data \(\mathbf{x}\) that is perturbed directly; the perturbation strategy is not defined solely by the data scientist but is driven by the relevance/attribution values each feature has according to the XAI method of interest. This metric expresses the expectation that the computed attribution values of each feature have some relationship with the computed output of the model. If a feature that is said to be highly important or decisive for a particular prediction is removed or replaced by a less informative one, then this action has to have some impact on the model's prediction. Even more, this change has to be in some way analogous to the assigned relevance/attribution of the feature, as computed by the XAI method. The extension of sensitivity to subsets of features (and not just one) is called sensitivity-\(n\), where \(n\) is the cardinality of the selected feature subset \(S_{i}\). The corresponding equation is:

\[\text{Sens}_{n}=r\Big{(}\sum_{s\in S_{i}}\mathbf{e}_{s},\mathbf{f}(\mathbf{x})_{c}-\mathbf{f}(\mathbf{x}_{S_{i}})_{c}\Big{)} \tag{3}\]

where \(c\) is the predicted class, \(\mathbf{x}\) is the original input, and \(\mathbf{x}_{S_{i}}\) is \(\mathbf{x}\) with all features in the subset \(S_{i}\) removed. The sum of attributions over the subset \(S_{i}\) is denoted by \(\sum_{s\in S_{i}}\mathbf{e}_{s}\), and its Pearson correlation \(r\) with the aforementioned difference gives the value of sensitivity-\(n\). What is questionable is the use of this correlation without considering Spearman's rank correlation coefficient or Mutual Information (MI) (MacKay, 2003) for capturing non-linear correlations. A detailed description of the relationship between sensitivity and infidelity for different scenarios is given in (Yeh et al., 2019); the researchers have shown that a small decrease in sensitivity can be achieved when the explanation is smoothed with a kernel (in this case a Gaussian) without increasing the infidelity, and under particular circumstances even decreasing it. This provides a general methodology to transform any explanation into one with more beneficial quality properties, at least as far as those metrics are concerned. A detailed analysis of several related metrics, with representative experiments on various datasets in the image processing domain, is presented in (Gevaert et al., 2022).
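A small numpy sketch of how sensitivity-\(n\) (Eq. 3) can be estimated is shown below. The "removal" of features (here replacement by a baseline value) and the random sampling of subsets are assumptions, since Eq. 3 leaves these choices open.

```python
# Hedged sketch of sensitivity-n: Pearson correlation between summed
# attributions of random feature subsets and the induced output change.
import numpy as np

def sensitivity_n(f, x, attributions, n, n_subsets=100, baseline=0.0, seed=0):
    rng = np.random.default_rng(seed)
    c = int(np.argmax(f(x)))                      # predicted class
    attr_sums, deltas = [], []
    for _ in range(n_subsets):
        S = rng.choice(len(x), size=n, replace=False)
        x_S = x.copy()
        x_S[S] = baseline                         # "remove" the features in S
        attr_sums.append(attributions[S].sum())
        deltas.append(f(x)[c] - f(x_S)[c])
    return np.corrcoef(attr_sums, deltas)[0, 1]   # Pearson r of Eq. (3)
```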
### Explanations based on domain knowledge graphs

In the above-described experiments, the counterfactual paths are computed on a fully-connected graph. The domain expert, however, might have prior knowledge about the functional relationships between the studied features (nodes in the graph), which can be reflected as edges in a knowledge graph. In these cases, the explanatory factors induced by the counterfactual paths and their visited nodes are restricted and guided by domain knowledge. In this specific evaluation, we generate the network, features, and classes in the following way. First, we generate Barabasi networks (Barabasi and Albert, 1999) of varying size and structure. Second, we compute the features associated with a node by sampling from a normal distribution \(N(\mu=0,\sigma=1)\). In the next step, we randomly select two connected nodes \(v_{1}\) and \(v_{2}\) and apply the formula

\[z=5v_{1}+3v_{2}.\]

We apply the sigmoid function to transform \(z\) into the range between 0 and 1,

\[S(z)=\frac{1}{1+e^{-z}},\]

and sample the outcome class from a Bernoulli distribution with success probability \(S(z)\).
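This generation procedure can be written compactly as follows; the use of networkx and the particular graph size are illustrative assumptions.

```python
# Sketch of the synthetic Barabasi data generation described above.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
G = nx.barabasi_albert_graph(n=20, m=2, seed=1)             # network topology
X = rng.normal(0.0, 1.0, size=(500, G.number_of_nodes()))   # N(0, 1) node features

v1, v2 = next(iter(G.edges))          # two connected nodes
z = 5 * X[:, v1] + 3 * X[:, v2]
p = 1.0 / (1.0 + np.exp(-z))          # sigmoid S(z)
y = rng.binomial(1, p)                # Bernoulli outcome class
```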
In the described experiment, the feature sampling in CPATH is guided by the network through random walks; in the remainder of this paper, we refer to this variant as \(\text{CPATH}_{know}\). Finally, we showcase the potential of our approaches (CPATH and \(\text{CPATH}_{know}\)) in a bioinformatics application for biomarker discovery. We used the gene expression data of human breast cancer patient samples for an experimental evaluation of the herein proposed methodologies. The data was retrieved from The Cancer Genome Atlas (TCGA) and was preprocessed as described in (Chereda et al., 2021). We masked the data by the topology of the Human Protein Reference Database (HPRD) protein-protein interaction (PPI) network (Keshava Prasad et al., 2009). The resulting dataset comprised 981 patients and 8469 genes. The binary prediction task was to classify the samples into a group of patients with the luminal A subtype (499 samples) and patients with other breast cancer subtypes (482 samples). Knowledge-guided explanations were generated using \(\text{CPATH}_{know}\) for the detection of potential breast cancer-specific biomarkers.

## 5 Results

### Synthetic data: correlation with ground-truth

The correlation with the Gini impurities (herein assumed as the _model-specific ground-truth_) is depicted in Figure 2. The results indicate that path-based explanations (CPATH) and permutation-based feature importances (PFI) are more accurate compared to SHAP and LIME. Especially in the case of conditional feature dependency, they efficiently reflect the model's internal behaviour (see Figure 2 & Table 1). The explanations based on LIME are not informative. The same observation can be made when interpreting the ability of the methods to detect the relevant features within the data (see Figure 3 and Table 2). Here, the simulated features are defined as the _data-specific ground-truth_. In the case of conditional feature dependency, CPATH is clearly more accurate than SHAP and LIME; CPATH almost perfectly aligns with the ground truth. CPI outperforms all methods when the signal-to-noise ratio is high. However, there is a substantial performance degradation when this ratio decreases. It should be noted that CPI is specifically designed for feature selection purposes and not for explaining the internals of the model. For instance, specialized feature selection methods often use a random forest, or any other classifier, as a wrapper to detect the most relevant features within the data (Pfeifer et al., 2022b). In the case of correlated features, CPATH performs slightly worse than the competing SHAP method (see Figure 2). Figure 3 also suggests that SHAP is the more appropriate explainer in the presence of correlated features. In this data-focused experiment (Figure 3, Table 2), LIME is also closer to the _ground-truth_. Notably, when the signal-to-noise ratio is high, all methods perform almost identically.

Figure 2: Correlation with Gini importance values based on 50 simulations.

Figure 3: Mean coverage of important features within the simulated data based on 50 simulations and along the signal-to-noise ratio.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Experiment** & **signal/noise** & **GINI** & **CPATH** & **SHAP** & **LIME** & **CPI** \\ \hline Conditional dependency (1) & 2/2 & 0.990 & 0.960 & 0.855 & 0.720 & 1 \\ Conditional dependency (2) & 2/2 & 0.990 & 0.950 & 0.905 & 0.730 & 0.995 \\ Correlation & 2/2 & 0.995 & 0.960 & 0.905 & 0.725 & 1 \\ Conditional independence & 2/2 & 1 & 1 & 1 & 1 & 1 \\ \hline Conditional dependency (1) & 2/4 & 0.940 & 0.910 & 0.700 & 0.655 & 0.955 \\ Conditional dependency (2) & 2/4 & 0.940 & 0.880 & 0.710 & 0.665 & 0.925 \\ Correlation & 2/4 & 0.760 & 0.700 & 0.705 & 0.770 & 0.805 \\ Conditional independence & 2/4 & 1 & 1 & 1 & 1 & 1 \\ \hline Conditional dependency (1) & 2/6 & 0.875 & 0.830 & 0.635 & 0.595 & 0.860 \\ Conditional dependency (2) & 2/6 & 0.870 & 0.820 & 0.670 & 0.655 & 0.895 \\ Correlation & 2/6 & 0.730 & 0.745 & 0.675 & 0.770 & 0.775 \\ Conditional independence & 2/6 & 1 & 1 & 1 & 1 & 1 \\ \hline Conditional dependency (1) & 2/8 & 0.860 & 0.850 & 0.640 & 0.600 & 0.505 \\ Conditional dependency (2) & 2/8 & 0.845 & 0.805 & 0.615 & 0.600 & 0.400 \\ Correlation & 2/8 & 0.685 & 0.710 & 0.660 & 0.755 & 0.415 \\ Conditional independence & 2/8 & 1 & 1 & 1 & 1 & 0.500 \\ \hline Conditional dependency (1) & 2/10 & 0.815 & 0.715 & 0.615 & 0.600 & 0.520 \\ Conditional dependency (2) & 2/10 & 0.820 & 0.735 & 0.585 & 0.575 & 0.325 \\ Correlation & 2/10 & 0.645 & 0.670 & 0.640 & 0.655 & 0.390 \\ Conditional independence & 2/10 & 0.995 & 0.900 & 0.995 & 0.995 & 0.495 \\ \hline \hline \end{tabular} \end{table} Table 2: Mean coverage of the important features within the data.

To assess the quality of the explanations (see Section 4.3), we trained models on four benchmark datasets and generated explanations based on the test set. In terms of sensitivity, CPATH performs well on all datasets and outperforms LIME and SHAP in two out of four cases (see Appendix Figure 1); LIME is the worst-performing method in this experiment. The fidelity of the explanations based on CPATH is not as good as that of SHAP, but it is competitive with LIME (see Appendix Figure 2). To showcase the interpretability of our proposed approach, we analyzed the Diabetes dataset in more detail (see Figure 4). The random forest classifier achieved \(AUC=0.97\). A graphical summary of the generated counterfactual paths is shown in Figure 4. We see that the glucose variable causes the largest number of swapped labels when it is used as the first node in a counterfactual path: more than 20% of the labels, on average, swap when this feature is permuted. Once glucose is permuted, insulin and mass increase the swapped fraction up to 30%. However, the largest fraction is obtained by the path starting with pedigree and further going through glucose, mass, and age; this particular path caused \(>40\%\) of swapped labels on average. These observations suggest that a combination of the mentioned variables might be a marker for a detailed medical risk assessment. The feature importances derived from the counterfactual paths can be obtained from Figure 5.

Figure 4: Diabetes dataset. Graphical summary of the counterfactual paths.

Figure 5: Diabetes dataset. Feature importances and conditional dependencies inferred by Bayesian network learning, here represented as a Directed Acyclic Graph.

We further learned a Bayesian network from the inferred counterfactual paths (Figure 5), using the R package bnlearn (Scutari, 2017). While the glucose variable was inferred as the most important feature, the Directed Acyclic Graph in Figure 5(b) indicates that it depends on several features, including age, triceps, and insulin, causing its overall importance. From the learned conditional distributions we see that when these features are present within a given path, the probability that glucose is also part of that path (causing the label swap) increases up to 0.72.
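The paper performs this structure-learning step with the R package bnlearn; the following is an analogous, hedged Python sketch using pgmpy, where each counterfactual path is encoded as a binary presence/absence vector over features (an assumed encoding, consistent with the reported conditional probabilities).

```python
# Hedged sketch: Bayesian network structure learning from counterfactual paths.
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

def paths_to_indicator_frame(paths, feature_names):
    """One row per path; 1 if the feature occurs in the path, else 0."""
    rows = [{f: int(j in path) for j, f in enumerate(feature_names)}
            for path in paths]
    return pd.DataFrame(rows)

# Illustrative usage with hypothetical feature names:
# df = paths_to_indicator_frame(paths, ["glucose", "insulin", "mass", "age"])
# dag = HillClimbSearch(df).estimate(scoring_method=BicScore(df))
# print(dag.edges())   # conditional dependencies, represented as a DAG
```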
### Knowledge-guided explanations and application on Protein-Protein Interaction Network (PPI)

**Synthetic Barabasi networks.** The experiments on synthetic Barabasi networks indicate that incorporating domain knowledge generates more parsimonious and interpretable explanations (see Figure 6). CPATH with incorporated knowledge is more accurate in detecting the relevant features when the path length is low; CPATH without guided domain knowledge requires paths about three times longer to capture the ground truth. Furthermore, knowledge-guided counterfactual explanations converge faster, as indicated by the higher performance when the number of sampled paths is low (Figure 6(a)). Interestingly, CPATH without domain knowledge significantly outperforms \(\text{CPATH}_{know}\) when the size of the sampled paths is high. In that case, CPATH is able to explore the feature space more efficiently, while the incorporated knowledge graph seems to hinder optimal convergence. The reason for this behavior is the edge degrees of the graph, which lead to repeated visits of the same node in a random walk. Thus, we conclude that for knowledge incorporated in the form of a graph, a high number of generated paths is essential to ensure a sufficient number of starting nodes, so that the whole graph can be explored by the random walks.

Figure 6: Mean coverage in detecting the important features within the simulated data, masked by Barabasi networks. The results are based on 100 simulations, where data and network topology vary in each iteration. (**a**) The path length is set to \(k=4\). (**b**) The number of paths is set to 100.

**Application on PPI networks.** The retrieved breast cancer gene expression data was split into a train (80%) and test set (20%). We trained a random forest comprising 1000 trees and evaluated the accuracy on the independent hold-out test set. The trained classifier achieved an Area Under the ROC Curve (AUC) of 0.93. Knowledge-guided explanations were generated using \(\text{CPATH}_{know}\) with the PPI network. We generated 10,000 paths using a path length of \(k=10\); overall, 118 of the generated paths were counterfactuals. The feature importances derived from these paths can be obtained from Figure 7. The top-3 genes were LAT, RANBP1, and KDM5B. The single-feature test-set accuracies for these genes were AUC(LAT) = 0.49, AUC(RANBP1) = 0.73, and AUC(KDM5B) = 0.54. These results suggest that LAT and KDM5B on their own are not sufficient as predictive markers. LAT is a protein that plays a critical role in the immune system and is primarily involved in T cell activation and signaling. It is expressed in T cells and other immune cells, facilitating the transmission of signals from the T cell receptor to the downstream signaling molecules. The KDM5B gene, also known as JARID1B or PLU-1, encodes a protein belonging to the family of histone demethylases. Histone demethylases are enzymes involved in the regulation of gene expression by modifying histone proteins, which are involved in packaging DNA within the nucleus. Studies have implicated KDM5B in various biological processes, including development, differentiation, and cancer.
While KDM5B's role in breast cancer is still being actively investigated, emerging research suggests its involvement in breast cancer progression and metastasis (Di Nisio et al., 2023). The RANBP1 gene, also known as Ran-binding protein 1, encodes a protein involved in the regulation of the Ran GTPase cycle. The Ran GTPase is essential for nucleocytoplasmic transport, which controls the movement of molecules between the nucleus and the cytoplasm. While RANBP1 itself is not directly associated with breast cancer, aberrant expression or dysregulation of proteins involved in the Ran GTPase cycle, including RANBP1, have been implicated in cancer, including breast cancer (Yuen et al., 2016)(Bamodu et al., 2016). Some Figure 6: Mean coverage in detecting the important features within the simulated data, masked by Barabasi networks. The results are based on 100 simulations, where data and network topology vary in each iteration. (**a**) The path length is set to \(k=4\). (**b**) The number of paths is set to 100. studies have suggested that RANBP1 may have tumor suppressor properties. Decreased expression of RANBP1 has been associated with poor prognosis and aggressive features in breast cancer patients. Loss or downregulation of RANBP1 expression may contribute to tumor development and progression. In a further investigation, we retrieved the first-order neighbourhood (N) of the detected and aforementioned genes. The results were AUC(N(LAT))= 0.78, AUC(N(RANBP1))= 0.82, AUC(N(KDM5B))= 0.67. The neighborhood of the RANBP1 gene (N(RANBP1)) included four additional genes, namely RAN, CO93, RCC1, and RANGRF. For this subset, we repeated the generation of CPATH explanations, but this time without incorporating domain-knowledge. The number of paths was set to 1000, and the length of paths was \(k=5\). We obtained 864 counterfactual paths. The generated counterfactual paths can be obtained from Figure 8. The shortest counterfactual path goes through RANBB1, RCC1 and RAN, leading to an average of 50% label swaps. For the first-order neighborhood of RANBP1 we applied bayesian network learning based on the detected counterfactual paths. The RANBP1 strongly depends on the CD93 gene. When CD93 is part of a counterfactual path the probability is 0.74 that RANBP1 also is included. Figure 7: Breast cancer dataset. (**a**) Top-10 relevant genes inferred by CPATH with incorporated PPI domain knowledge. (**b**) Conditional dependencies of the neighborhood features of RANBP1 inferred by Bayesian network learning and here represented as a Directed Acyclic Graph. ## 6 Discussion Apart from the proposed technique, various other approaches can be used to obtain importance values based on sampling trajectories of finite-length random walks on the \(\widetilde{G_{\mathcal{M}}}\) graph (with corresponding changes in model predictions). In particular, other approaches can use any sampled trajectory to compute importance scores without relying directly on the definition of counterfactual paths using the counterfactual policy \(\Psi\). Therefore, in our initial experiments, we also tested solutions based on reinforcement learning (RL) methods. The main difference with the technique presented earlier is that in reinforcement learning-based approaches, successive sampled trajectories can be used to update the transition matrix (or directly the importance vector) iteration by iteration, after each episode of the learning process. Figure 8: Breast cancer dataset. Graphical summary of the counterfactual paths. 
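The episode-by-episode transition-matrix update mentioned above is easy to prototype. The snippet below is an illustration only, not the cpath implementation: the helper name, the uniform initialization, and the placeholder reward are assumptions; in practice the reward would be the fraction of swapped labels produced by the sampled path.

```python
import numpy as np

def update_transition_matrix(T, path, reward, eta=0.1):
    """Reinforce the edges visited by a path in proportion to the observed
    reward, then re-normalize each row so T stays a stochastic matrix."""
    for a, b in zip(path[:-1], path[1:]):
        T[a, b] += eta * reward
    return T / T.sum(axis=1, keepdims=True)

p = 5                                    # number of explanatory variables
T = np.ones((p, p)) / p                  # start from uniform transitions
rng = np.random.default_rng(0)
for _ in range(100):                     # episodes
    path = [rng.integers(p)]
    for _ in range(3):                   # walk according to the current T
        path.append(rng.choice(p, p=T[path[-1]]))
    reward = rng.random()                # placeholder: swap fraction of the path
    T = update_transition_matrix(T, path, reward)
importance = T.sum(axis=0)               # one possible per-feature aggregation
```

Row re-normalization keeps T stochastic, so later episodes preferentially revisit edges whose paths caused many label swaps.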
When RL techniques are used to estimate feature importance, the agent learns a policy that maximizes the cumulative reward over time. The agent explores the graph by traversing different paths and observing the resulting changes in the model predictions. Technically, we can formulate the problem as a Markov decision process (MDP), which is a tuple \(\langle S,A,P,R\rangle\), where * \(S\) represents the state space corresponding to the set of vertices in the graph \(G_{\mathcal{M}}\) (explanatory variables). * \(A\) is the action space, representing the available actions that an RL agent can take in each state. In this case, the actions correspond to traversing from one vertex to another in the graph. * \(P\) denotes the transition probabilities to other states, given the current state and the chosen action. In this particular environment, the state to which an action leads is deterministic (since the action is to select the appropriate edge of the graph). * \(R\) represents the reward function that provides feedback to the RL agent based on its actions. In the context of feature importance, the reward can be defined to reflect the changes in model predictions induced by traversing different paths. The terminal state is then determined by a fixed maximum number of variables on the path or is associated with exceeding the threshold of the reward (significant change in the model predictions). In our implementation, a solution using the model-free Q-learning algorithm is available. It estimates the matrix \(Q_{p\times p}\) of state-action quality, from which the importance of variables can be extracted by aggregation, analogous to the main method. Other available techniques allow the direct estimation of the state value function (which translates into the importance of the variables) - this can be done using the temporal difference learning TD(0) or the first-visit Monte Carlo algorithm. Although these methods give promising results, they suffer from sensitivity to hyperparameters. They require further research, investigation and validation, which is beyond the scope of this paper. Furthermore, our proposed approach was herein evaluated based on global explanations. A single test-instance, however, could be explained by first learning the transition matrix \(\mathbf{T}\) on the training set. From this transition matrix candidate paths could be generated by a Markov process. The generated path features and their corresponding values could be exchanged with those in the training set to test for counterfactuals. ## 7 Conclusion We have developed a novel explainable AI method: counterfactual paths. Unlike classical feature importance methods, the generated explanations are efficiently visualized through graphs, which could help detect causal effects and important interactions between features. ## Code availability The herein presented method is implemented within the cpath package available for R and Python at [https://github.com/pievos101/cpath](https://github.com/pievos101/cpath). ## Acknowledgements Parts of this work have been funded by the Austrian Science Fund (FWF), Project: P-32554 "explainable Artificial Intelligence" (Grantholder AH). Parts of this work have been funded by the Polish National Science Centre (NCN) grant 2019/34/E/ST6/00052 (Grantholder PB).
2304.01645
Symmetry-breaking motility of penetrable objects in active fluids
We investigate how a symmetric penetrable object immersed in an active fluid becomes motile due to a negative drag acting in the direction of its velocity. While similar phenomena have been reported only for active fluids that possess polar or nematic order, we demonstrate that such motility can occur even in active fluids without any preexisting order. The emergence of object motility is characterized by both continuous and discontinuous transitions associated with the symmetry-breaking bifurcation of the object's steady-state velocity. Furthermore, we also discuss the relevance of the transitions to the nonmonotonic particle-size dependence of the object's diffusion coefficient.
Ki-Won Kim, Yunsik Choe, Yongjoo Baek
2023-04-04T09:02:18Z
http://arxiv.org/abs/2304.01645v2
# Generic symmetry-breaking motility in active fluids ###### Abstract We investigate how a symmetric, porous object immersed in an active fluid becomes motile due to a negative drag acting in the direction of its velocity. Previous research has suggested that this phenomenon is restricted to active fluids that possess polar or nematic order. However, using mean-field analysis, we demonstrate that such motility can occur even in active fluids without any preexisting order. The emergence of object motility is characterized by both continuous and discontinuous transitions associated with the symmetry-breaking bifurcation of the object's steady-state velocity. Furthermore, we also discuss the relevance of the transitions to the nonmonotonic particle-size dependence of the object's diffusion coefficient. _Introduction. --_ An active fluid is a fluid consisting of _active particles_, which utilize stored energy to propel themselves [1; 2; 3; 4; 5; 6]. One important feature of such fluids is current rectification by asymmetric potentials. An asymmetric object immersed in an active fluid generally induces long-range density gradients [7; 8; 9; 10] or persistent motion [11; 12; 13; 14]. These phenomena have been applied to the design of targeted delivery systems [15] and self-starting micromotors [16; 17; 18]. But asymmetric shape is not always necessary for an object immersed in an active fluid to be motile. Indeed, there are various examples of symmetric objects that exhibit motility via spontaneous symmetry breaking. Many of them feature preexisting order in the system. For instance, given sufficiently strong contractile stress, an active droplet with polar order in a passive fluid is known to develop splay instability, which in turn induces unidirectional motion [19]. Conversely, a passive droplet inside a polar active gel can become motile by spontaneous creation of a topological defect on one side of the droplet [20]. Another possible scenario is when the object itself can be deformed. Polymer chains immersed in active fluids spontaneously develop curvatures, which then results in current rectification that turns them into traveling structures [21; 22; 23]. In this Letter, we show that neither ordered medium nor deformable object are needed for an object in an active fluid to become motile. Using a simple model of a symmetric, porous object immersed in an ideal active fluid lacking any order, we analytically describe the steady-state dynamics of the object. It turns out that the object motion by itself induces a rectifying effect, which creates a _negative drag force_ that acts in the direction of motion. Based on this, we show that symmetry-breaking motility emerges via continuous or discontinuous phase transitions. While negative drag has been reported for cargo transport by molecular motors [24] and contractile active mechanics [25], the mechanism we discuss is at work even in an ideal active gas. We note that the same type of negative drag has recently been discussed in [26], but the study focused on the regime where the negative drag is so small that it only affects the diffusive properties of the object. Here we investigate the case where the negative drag is strong enough to give rise to persistent motion of the object. Our results also have interesting implications for the nonmonotonic object size dependence of effective diffusivity in an active fluid. 
While previous studies have attributed the phenomenon to the interplay of diffusion and advection [27; 28; 29; 30; 31], we discover that symmetry-breaking motility contributes an alternative mechanism for the nonmonotonic behaviors observed in the effective diffusion coefficient. _Model. --_ We consider a symmetric, overdamped, penetrable object of size \(\Lambda\) immersed in an active ideal gas on a one-dimensional (1-d) ring of length \(L\), see Fig. 1(a). The gas consists of \(N\) run-and-tumble particles (RTPs), which are a simple model of bacterial motion [32; 33]. Each RTP travels to the left or to the right at constant velocity \(u\), flipping the direction at rate \(\alpha/2\). The RTPs do not interact with each other but interact only with the object via the potential \[V(x)=\begin{cases}F(x+\Lambda/2)&\text{for }-\Lambda/2\leq x<0,\\ -F(x-\Lambda/2)&\text{for }0\leq x<\Lambda/2,\\ 0&\text{otherwise},\end{cases} \tag{1}\] which means that the RTP and the object repel each other at constant force \(F\) whenever they overlap. We also assume that the thermal noise is negligible compared to the other forces. With these assumptions, each RTP obeys \[\dot{x}_{i}=-\mu\,V^{\prime}(x_{i}-X)+u\,s_{i}(t)\quad\text{for }i\in\{1, \dots,N\}, \tag{2}\] where \(x_{i}\) is the position of the \(i\)-th RTP, \(\mu\) its mobility, \(s_{i}(t)=\pm 1\) its polarity that flips sign at rate \(\alpha/2\). Meanwhile, \(X\) denotes the position of the object and evolves according to \[\dot{X}=\mu_{\text{obj}}\sum_{i=1}^{N}V^{\prime}(x_{i}-X), \tag{3}\] where \(\mu_{\rm obj}\) is the mobility of the object. _Drag force._ -- We first calculate the force applied by the RTPs on the object when it is dragged at constant velocity \(v\). For convenience, we adopt the frame of reference fixed to the object (\(x_{i}\to x_{i}+X\)). Then Eq. (2) changes to \[\dot{x}_{i}=-\mu\,V^{\prime}(x_{i})-v+u\,s_{i}(t)\quad\text{for }i\in\{1,\dots,N\}. \tag{4}\] It is straightforward to convert this to the equations for the densities \(\rho_{\pm}\) of the right-/left-moving RTPs: \[\begin{cases}\partial_{t}\rho_{+}=-\partial_{x}\{[F_{\rm eff}(x)+u]\rho_{+}\} +\frac{\alpha}{2}(\rho_{-}-\rho_{+}),\\ \partial_{t}\rho_{-}=-\partial_{x}\{[F_{\rm eff}(x)-u]\rho_{-}\}+\frac{\alpha }{2}(\rho_{+}-\rho_{-}),\end{cases} \tag{5}\] where \(F_{\rm eff}\equiv-\mu\partial_{x}V-v\) is the effective force felt by each RTP in the object frame. Using the total density \(\rho\equiv\rho_{+}+\rho_{-}\) and the polarization \(\Delta\equiv\rho_{+}-\rho_{-}\), Eq. (5) can be rewritten as \[\begin{cases}\partial_{t}\rho=-\partial_{x}J,&J\equiv F_{\rm eff}\rho+u \Delta,\\ \partial_{t}\Delta=-\partial_{x}J_{\Delta}-\alpha\Delta,&J_{\Delta}\equiv F_{ \rm eff}\Delta+u\rho,\end{cases} \tag{6}\] where \(J\) and \(J_{\Delta}\) are the density and the polarization currents, respectively. Then, solving Eq. (6) for the steady state (\(\partial_{t}\rho=\partial_{t}\Delta=0\)), we can calculate the drag force on the object \[F_{\rm obj}(v)=\int_{0}^{L}dx\,\rho_{\rm s}(x;v)\,V^{\prime}(x), \tag{7}\] where \(\rho_{\rm s}(x;v)\) denotes the steady-state density profile. Remarkably, as illustrated in Fig. 1(b), dragging the object can result in the RTPs piling up behind the object, in contrast to the case of a passive fluid where the particles always accumulate in front of the object. This implies that the force applied by the RTPs on the object is in the direction of motion, not against it. In other words, the RTPs exert a _negative drag_ on the object. 
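The negative drag can also be seen in a direct simulation: drag the object at fixed velocity through the RTP gas, integrate Eq. (4) with an Euler scheme, and time-average the force of Eq. (7). The following is a minimal sketch; all parameter values are illustrative rather than those used in the figures.

```python
import numpy as np

L, N, u, alpha, mu, F, Lam = 50.0, 1000, 1.0, 1.0, 1.0, 0.3, 0.5
v, dt, steps, burn = 0.05, 1e-3, 200_000, 50_000
rng = np.random.default_rng(2)

x = rng.uniform(0, L, N)                 # RTP positions in the object frame
s = rng.choice([-1.0, 1.0], N)           # RTP polarities

def Vp(d):
    # V'(x) of the tent potential (1): +F on the left slope, -F on the right
    d = (d + L/2) % L - L/2
    return np.where(np.abs(d) < Lam/2, np.where(d < 0.0, F, -F), 0.0)

force = 0.0
for step in range(steps):
    g = Vp(x)
    x = (x + dt*(-mu*g - v + u*s)) % L        # Eq. (4), object frame
    s[rng.random(N) < 0.5*alpha*dt] *= -1.0   # tumbling at rate alpha/2
    if step >= burn:
        force += g.sum()                      # discrete analog of Eq. (7)
print("time-averaged force:", force / (steps - burn))
```

For sufficiently small rescaled size and repulsion, the averaged force comes out positive for v > 0, i.e., in the direction of motion.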
Let us denote by \(\bar{\rho}\equiv N/L\) the mean density of the RTPs and by \(l_{p}\equiv 2u/\alpha\) their persistence length. When \(v\) is small, the drag force on the object can be linearized as \(F_{\rm obj}\simeq-(\bar{\rho}l_{p}/\mu)a_{1}v\), where \(a_{1}\) is the dimensionless drag coefficient. In the limit \(L\to\infty\), the coefficient can be expressed as a function of the dimensionless parameters \(\lambda\equiv\Lambda/l_{p}\) (rescaled object size) and \(f\equiv\mu F/u\) (rescaled object-RTP repulsion): \[a_{1}(f,\lambda)=\frac{2}{f}\sinh\left(\frac{f\lambda}{1-f^{2}}\right)-\frac {\lambda\left(2-f^{2}+f^{4}\right)}{\left(1-f^{2}\right)^{2}}. \tag{8}\] The boundary of the negative drag regime (\(a_{1}<0\)) is indicated by the dashed green line in Fig. 1(c). This shows that \(a_{1}<0\) requires both \(\lambda\) and \(f\) to be small enough. These properties can be understood intuitively as follows. The RTPs, due to their persistent motion, tend to move in the same direction even after penetrating into the object. Since they slow down inside the object, their density has to increase to keep the current uniform, as required by the steady-state condition. Thus, in contrast to the passive particles, whose density is always lower inside an object, the RTPs accumulate and form a high-density region at the object's surface, as illustrated in Fig. 1(b). Due to the symmetry, the same numbers of particles accumulate on both sides of a static object. But if the object moves, the magnitude \(|F_{\rm eff}|\) of the effective repulsion is stronger behind the object than in front. This means that the RTP finds it easier to cross the object from the front to the rear than the other way around. Thus, the RTPs tend to accumulate more on the rear side of the object, inducing the negative drag. We note that this mechanism would work only when the object size is smaller than or comparable to the persistence length, so that the RTP keeps its direction of motion as it crosses the object. Moreover, the RTP-object repulsion should be weak enough to allow a sufficiently large flux between the two sides of the object. These are the reasons why the negative drag requires small \(\lambda\) and \(f\). _Phase transitions._ -- Thus far, we have assumed that Figure 1: (a) A schematic illustration of the model. (b) Density profiles of the left-moving (\(\rho_{-}\)) and the right-moving (\(\rho_{+}\)) RTPs around a symmetric object moving to the right. (c) A diagram showing the stable fixed points of the mean-field approximation in the large \(L\) limit. Negative drag is observed above the dashed green line, and continuous transitions occur on the thick black line. The star indicates the parameters used in (b). the object moves at constant velocity. But what would be the steady-state velocity of the object if it is allowed to move freely? Let us revisit Eq. (3) describing the object motion. Assuming that the RTPs instantaneously relax to the steady state for a given object velocity \(v=\dot{X}\), Eq. (3) can be rewritten as a self-consistency equation \[F_{\rm tot}(v)\equiv F_{\rm obj}(v)-\frac{1}{\mu_{\rm obj}}v=0. \tag{9}\] Then the solutions of the above equation that satisfy the stability condition \(F^{\prime}_{\rm obj}(v)<1/\mu_{\rm obj}\) approximate the steady-state object velocity. In this scheme, \(v\) plays the role of the mean field for every RTP. Thus we may call Eq. (9) the _mean-field theory_ for this system. 
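Eq. (8) makes the negative-drag boundary easy to locate numerically. A small sketch (the bracketing interval is an assumption that happens to work for the λ values shown):

```python
import numpy as np
from scipy.optimize import brentq

def a1(f, lam):
    """Dimensionless linear drag coefficient of Eq. (8); requires 0 < f < 1."""
    return (2.0/f)*np.sinh(f*lam/(1.0 - f**2)) \
           - lam*(2.0 - f**2 + f**4)/(1.0 - f**2)**2

# Negative-drag boundary (dashed green line of Fig. 1(c)): sign change of a1.
for lam in [0.2, 0.5, 1.0]:
    f_star = brentq(lambda f: a1(f, lam), 0.05, 0.99)
    print(f"lam={lam}: a1 changes sign at f ~ {f_star:.3f}")
```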
Depending on the types of stable solutions, the steady-state object motion can be classified into the following four regimes, as shown in Fig. 1(c): (i) in the Immotile (I) regime, \(v=0\) is the only stable solution. Here the object diffuses without any persistent traveling in a single direction; (ii) in the Motile (M) regime, \(v>0\) is the only stable solution. Here the object always travels persistently in a single direction; (iii) the Motile-Immotile (MI) coexistence regime has stable solutions at both \(v=0\) and \(v>0\). Given enough time, the object vacillates between the motile state and the immotile (diffusing) state; (iv) The regime of multiple motile states (M') has two stable positive solutions. Given enough time, the object vacillates between two different traveling velocities [34]. Note that, by the symmetry of the system, \(-v\) is a stable solution of Eq. (9) if \(v\) is. For more details about how these regimes differ from each other, see Fig. 2. All solutions of Eq. (9) for various values of \(f\) and \(\lambda\) are shown by contours in Fig. 2(a). The diagonal line \(\mu F+v=u\) marks the boundary above (below) which the RTPs approaching the object from behind cannot (can) overtake the object. The line is important for determining which boundary conditions should be used in the mean-field theory, as detailed in Appendix A. Meanwhile, the behavior of the lhs of Eq. (9) as a function of \(v\) is schematically illustrated for each dynamical regime in Fig. 2(b). The stable solutions are marked with diamonds. This system is invariant under reflection about the object center, so it has the \(Z_{2}\) symmetry. Any dynamical regime with a nonzero stable solution indicates that the symmetry is spontaneously broken. This implies the existence of phase transitions between the I and the M regimes in the proper thermodynamic limit. In this study, we focus on the thermodynamic limit defined as \(N\to\infty\) with \(L\) and \(N\mu_{\rm obj}\) fixed. This ensures that the rhs of Eq. (3) converges to a finite value as \(N\to\infty\). Then, the dynamical regimes shown in Fig. 1(c) indicate that there are two types of transitions between the immotile state and the motile state. Along the thick black curve between the two white circles on the right, the M regime is in direct contact with the I regime. As the system crosses this curve, the steady-state velocity changes continuously between \(0\) and nonzero values, marking a _continuous transition_. Indeed, in the vicinity of the thick black curve, the total force on the object can be expanded as \[F_{\rm tot}\simeq\frac{\bar{\rho}l_{p}u}{\mu}\left\{-\left[a_{1 }(f,\lambda,L)+\gamma\right]\frac{v}{u}+a_{3}(f,\lambda,L)\left(\frac{v}{u} \right)^{3}\right\}, \tag{10}\] where the even-order terms in \(v\) do not appear because of the \(Z_{2}\) symmetry, and \(\gamma\equiv\mu/(\bar{\rho}l_{p}\,\mu_{\rm obj})\) is the rescaled friction coefficient of the object, which stays finite in the limit \(N\to\infty\) because \(\bar{\rho}\sim N\) and \(\mu_{\rm obj}\sim 1/N\). We have fixed \(\gamma=0.1\) throughout this study, including Fig. 1(c). Figure 2: (a) Self-consistent velocities of the object as the repulsion strength \(f\) is varied for a fixed object size \(\lambda\) (colors). The intervals of \(f\) corresponding to each dynamical regime at \(\lambda=0.2\) indicated by colored areas. Note that \(v=0\) is always self-consistent. (b) Behavior of the net force on the object as a function of the object velocity \(v\) for each dynamical regime. 
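Near the critical line, Eq. (10) reduces the problem to a cubic force balance, so the bifurcation can be illustrated in a few lines. Here a1 is swept as a plain number and a3 = -1 is an arbitrary negative value, not the coefficient computed in Appendix B:

```python
import numpy as np

gamma = 0.1                          # rescaled friction, as fixed in the text

def v_stable(a1, a3, u=1.0):
    """Nonzero stable root of Eq. (10), if any, assuming a3 < 0:
    -(a1 + gamma)(v/u) + a3 (v/u)^3 = 0  =>  v = u*sqrt((a1 + gamma)/a3)."""
    return 0.0 if a1 + gamma >= 0.0 else u*np.sqrt((a1 + gamma)/a3)

# Sweeping a1 through -gamma shows the pitchfork: the motile branch turns on
# with a square-root onset, v ~ |a1 + gamma|^(1/2).
for a1 in [-0.05, -0.10, -0.15, -0.30]:
    print(a1, v_stable(a1, a3=-1.0))
```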
This amounts to fixing the mobilities (\(\mu\) and \(\mu_{\rm obj}\)) and the RTP properties (\(u\) and \(\alpha\)) while varying the object porosity (\(F\)) and size (\(\Lambda\)). Explicit calculations of \(a_{1}\) and \(a_{3}\) are given in Appendix B. Along the thick black line shown in Fig. 1(c), the dimensionless coefficients of Eq. (10) are given by \(a_{1}=-\gamma\) and \(a_{3}<0\). For a given value of \(\lambda\), we denote the value of \(f\) satisfying the condition \(a_{1}=-\gamma\) as \(f_{c}(\lambda,L)\), at which a continuous transition occurs with the critical behavior \(v\sim|f-f_{c}|^{\beta}\) with \(\beta=1/2\). Meanwhile, we expect there to be a discontinuous transition line in each of the two MI regimes shown in Fig. 1(c). While both motile and immotile states are possible at finite \(N\), we expect one of the two states to be exponentially more likely as \(N\) grows. Also, there are two multicritical points located at the junctions between the critical line and the discontinuous transition lines, which are indicated by white circles in Fig. 1(c). _Observations of phase transitions. --_ To verify the existence of discontinuous and continuous transitions, we ran extensive simulations of Eqs. (1)-(3) and examined the steady-state statistics of the system, with the results shown in Fig. 3 for \(\lambda=1\). As shown in Fig. 1(c), the mean-field theory predicts that varying \(f\) along the \(\lambda=1\) line produces a discontinuous transition somewhere within the MI regime and a continuous transition at the critical line. In the heat map shown in Fig. 3(b), the colors indicate the probability density of the rescaled object velocity \(v/u\) for a given value of \(f\). With the object mobility scaling as \(\mu_{\rm obj}\sim 1/N\), one can expect the mean-field theory to be more and more exact as \(N\) grows because the dynamics of the object becomes slower compared to the relaxation of the RTPs. At \(N=30000\) RTPs, the result already seems to be in good agreement with the mean-field predictions represented by the solid red curves. The red curves clearly indicate the existence of a discontinuous transition. To verify this, in Fig. 3(a) we plot the _Binder cumulant_[35]\(U_{4}\equiv 1-\langle v^{4}\rangle/(3\langle v^{2}\rangle^{2})\) as a function of \(f\) for various values of \(N\). As \(N\) increases, \(U_{4}\) develops a dip which becomes narrower and deeper. This a hallmark of a discontinuous transition as predicted by the mean-field theory. Meanwhile, in Fig. 3(c), we present a finite-size scaling (FSS) analysis of the continuous transition behavior observed at \(f_{c}\approx 0.81\). Using the FSS form \[v=N^{-\beta/\bar{\nu}}\,\Phi((f-f_{c})N^{1/\bar{\nu}}) \tag{11}\] with the mean-field Ising critical exponents \(\beta=1/2\) and \(\bar{\nu}=2\), all the data obtained at different values of \(N\) collapse onto a single curve. This implies that the critical phenomena are of the mean-field Ising universality class. Why do we observe such behavior, even though the system is 1-d? This is because, via Eqs. (1)-(3) with the scaling \(\mu_{\rm obj}\sim 1/N\), each RTP is coupled to the "mean field" of all the other RTPs. Via interactions with the object, all the RTPs are effectively coupled to each other, which resembles the all-to-all Ising model for which the mean-field theory is known to be exact. _Effective diffusion. --_ When the noise is not negligible, the object dynamics eventually becomes diffusive in the long-time limit. 
Then we can define the effective diffusion coefficient \(D_{\rm eff}\equiv\lim_{t\to\infty}\langle[X(t)-X(0)]^{2}\rangle/(2t)\). We can use the diagram shown in Fig. 1(c) to guess the behaviors of \(D_{\rm eff}\) as \(f\) and \(\lambda\) are varied. Since \(D_{\rm eff}\) is proportional to the product of the object velocity and its persistence length, we expect it to be the largest in the M regime, somewhat large in the MI regime, and quite small in the I regime. Our numerics indeed confirm this intuition. In Fig. 4, we show the behaviors of \(D_{\rm eff}\) as functions of \(f\) and \(\lambda\), respectively. These indicate that the diffusivity of a porous object immersed in an active fluid exhibit a nonmonotonic behavior as the porosity or the size of the object is increased [36]. The latter phenomenon is similar to [28], but the mechanism we propose is completely novel. _Summary and outlook. --_ We theoretically described the steady-state dynamics of a 1-d symmetric porous object immersed in an ideal gas of RTPs. We found that Figure 3: Analysis of the phase transitions of the object’s dynamical state. (a) The Binder cumulant \(U_{4}\) exhibits the hallmarks of a discontinuous transition. (b) Simulation confirms the bifurcations of the steady-state velocity as \(f\) is varied. At \(N=3\times 10^{4}\), the results are in good agreement with the mean-field prediction (solid red lines). (c) The continuous transition at \(f_{c}\approx 0.81\) exhibits characteristics of the mean-field Ising universality class (\(\beta=1/2\) and \(\bar{\nu}=2\)). We use \(\lambda=1\) in all the results. the drag coefficient of the object becomes negative when the object size and the object-RTP repulsion are both sufficiently small. In that case, the object moves persistently in a single direction by breaking the symmetry. Provided the complete time scale separation between the object and the RTPs, the steady-state velocity of the object exhibits discontinuous and continuous phase transitions, with the latter showing the mean-field Ising critical phenomena. Even if the time scale separation is not complete, these transitions lead to dramatic changes in the effective diffusion coefficient of the object. It would be interesting to check these effects by experiments with porous objects immersed in active fluids. Moreover, while 1-d systems require the object to be penetrable for the negative drag to be possible, the condition may not be necessary for higher-dimensional systems, as the active particles can interact with both sides of the object without going through the object. Thus, we expect the same symmetry-breaking mechanism to be at work even for a higher-dimensional object with a hard core. Finally, it would be interesting to explore possible collective phenomena involving multiple symmetric objects arising from the long-range interactions mediated by active particle currents [37, 38]. _Acknowledgments._ -- This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2020R1C1C101443613). YB also thanks Yariv Kafri, Alexander Solon, Nikolai Nikola, Xinpeng Xu, and Patrick Pietzonka for helpful comments. ## Appendix A Derivation of the exerted force on the object We derive the expression for \(F_{\rm obj}\) using the steady-state density of RTPs around the object moving at constant velocity \(v\). Towards this goal, as pointed out in [39], we need to separately address three different cases described below and illustrated in Fig. 
5 (assuming that the object moves to the right with \(v>0\)): * Case A: for \(\mu F+v<u\), the RTPs can penetrate the object from both sides. * Case B: for \(\mu F+v>u\) and \(\mu F-v<u\), the right-moving RTPs cannot pass through the object, while the left-moving RTPs can. * Case C: for \(\mu F>u+v\), no RTPs can penetrate the object. We neglect the case where the object is faster than the RTPs since such situations would not arise naturally from the object-RTP interactions. Moreover, we are not interested in case C where the RTPs are bound to accumulate more in front of the object than behind, making the immotile state the only stable solution. Thus, we focus on cases A and B. ### Force-current relationship Utilizing Eqs. (6) and (7), \[F_{\rm obj}=-\frac{1}{\mu}\int_{0}^{L}dx\,(J+v\rho-u\Delta)=-\frac{1}{\mu} \left(JL+vN-u\int_{0}^{L}dx\,\Delta\right). \tag{10}\] Figure 4: Nonmonotonic behaviors of the effective diffusion coefficient of the object as (a) the object-RTP repulsion \(f\) is varied and as (b) the object size \(\lambda\) is varied. In the steady state, Eq. (6) implies \[F_{\rm obj}=-\frac{1}{\mu}\left[JL+vN-\frac{u}{\alpha}\int_{0}^{L}dx\left(\partial _{x}J_{\Delta}\right)\right]. \tag{10}\] Since the system is periodic, the last term on the rhs is zero. Thus, we obtain \[F_{\rm obj}=-\frac{1}{\mu}(JL+vN), \tag{11}\] which describes the force-current relationship in the object frame. We note that a similar expression was derived in [21] for the lab frame of reference. ### Steady-state RTP density In order to obtain \(J\), we first derive expressions for the steady-state density of the RTPs. In the steady state, the elimination of \(\Delta\) in Eq. (6) yields \[\partial_{x}\{[u^{2}-F_{\rm eff}(x)^{2}]\,\rho(x)\}-\alpha F_{\rm eff}(x)\, \rho(x)+[\alpha+F^{\prime}_{\rm eff}(x)]J=0. \tag{12}\] Defining \[g(x) \equiv [u^{2}-F_{\rm eff}(x)^{2}]\rho(x), \tag{13}\] \[a(x) \equiv \frac{\alpha F_{\rm eff}(x)}{u^{2}-F_{\rm eff}(x)^{2}},\] (14) \[b(x) \equiv [\alpha+F^{\prime}_{\rm eff}(x)]J, \tag{15}\] Figure 5: Three regimes classified according to the penetrability of the object the equation can be rewritten as \[g^{\prime}(x)-a(x)\,g(x)+b(x)=0. \tag{10}\] Since this is a first-order ODE, its general solution is straightforwardly obtained as \[g(x)-g(c)\exp\bigg{\{}\int_{c}^{x}dx_{2}\,a(x_{2})\bigg{\}}=-\int_{c}^{x}dx_{1}\,b (x_{1})\exp\bigg{\{}-\int_{x}^{x_{1}}dx_{2}\,a(x_{2})\bigg{\}}, \tag{11}\] where \(c\) is an arbitrary constant. Then, using the definition of \(g(x)\), we can write \[\rho(x) =\frac{u^{2}-F_{\rm eff}(c)^{2}}{u^{2}-F_{\rm eff}(x)^{2}}\,\rho(c )\exp\bigg{\{}\int_{c}^{x}dx_{2}\,a(x_{2})\bigg{\}}\] \[\qquad-\frac{1}{u^{2}-F_{\rm eff}(x)^{2}}\int_{c}^{x}dx_{1}\,b(x_{ 1})\exp\bigg{\{}\int_{x_{1}}^{x}dx_{2}\,a(x_{2})\bigg{\}}. \tag{12}\] From this expression, we learn the following: * Since \(F_{\rm eff}(x)\) changes discontinuously at \(x=\pm\Lambda/2\) and \(x=0\), \(b(x)\) has delta peaks at these locations. This implies that \(\rho(x)\) has discontinuous jumps at the same locations. For this reason, as shown in Fig. 6, we put \[\rho(x)=\begin{cases}\rho_{1}(x)&\text{for $-\Lambda/2<x<0$,}\\ \rho_{2}(x)&\text{for $0<x<\Lambda/2$,}\\ \rho_{3}(x)&\text{otherwise,}\end{cases}\] (13) and apply Eq. (12) separately to \(\rho_{1}\), \(\rho_{2}\), and \(\rho_{3}\) to calculate these functions. * The discontinuities of \(F_{\rm eff}(x)\) should be understood as the limiting behaviors of some smoothly behaved effective force profile. 
That is, while we have assumed \(F_{\rm eff}(x)\) to take only one of the three values \(\pm\mu F-v\) and \(-v\), the function actually takes all values in between, rapidly changing in the infinitesimal neighborhoods of \(x=\pm\Lambda/2\) and \(x=0\). In case A, where \(u>|F_{\rm eff}|\), such continuous changes of \(F_{\rm eff}(x)\) never satisfies \(u^{2}-F_{\rm eff}^{2}(x)=0\) Figure 6: RTP densities for respective region. thus, \(\rho(x)\) as described by Eq. (101) stays finite. In contrast, in case B, \(u^{2}-F_{\rm eff}^{2}(x)=0\) is achieved in the infinitesimal neighborhoods of \(x=-\Lambda/2\) and \(x=0\); thus, Eq. (101) implies that \(\rho(x)\) may diverge to infinity at these locations. Hence extra care must be taken when dealing with the boundary conditions there. ### Solution for case A Let \(x_{0}^{+}\) (\(x_{0}^{-}\)) indicate a point infinitesimally close to \(x_{0}\) with \(x_{0}^{-}<x_{0}<x_{0}^{+}\). Then, using \(c=-\Lambda/2^{+}\), \(0^{+}\), and \(\Lambda/2^{+}\) in Eq. (101), we obtain \[\rho_{1}(x) = \rho(-\Lambda/2^{+})\exp\left[-\frac{\alpha(\mu F+v)}{u^{2}-(\mu F +v)^{2}}\left(x+\frac{\Lambda}{2}\right)\right] \tag{102}\] \[-\frac{J}{\mu F+v}\left\{1-\exp\left[-\frac{\alpha(\mu F+v)}{u^{ 2}-(\mu F+v)^{2}}\left(x+\frac{\Lambda}{2}\right)\right]\right\},\] \[\rho_{2}(x) = \rho(0^{+})\exp\left[\frac{\alpha(\mu F-v)}{u^{2}-(\mu F-v)^{2} }x\right]+\frac{J}{\mu F-v}\left\{1-\exp\left[\frac{\alpha(\mu F-v)}{u^{2}-( \mu F-v)^{2}}x\right]\right\},\] (103) \[\rho_{3}(x) = \rho(+\Lambda/2^{+})\exp\left[-\frac{\alpha v}{u^{2}-v^{2}}\left( x-\frac{\Lambda}{2}\right)\right]-\frac{J}{v}\left\{1-\exp\left[-\frac{\alpha v}{u^{2}-v ^{2}}\left(x-\frac{\Lambda}{2}\right)\right]\right\} \tag{104}\] for each partition shown in Fig. 6, respectively. Note that we used the relation \(F_{\rm eff}(c)=F_{\rm eff}(x)\) for each partition in Eq. (101). In the steady state, Eq. (6) implies \[\partial_{x}J_{\Delta}=-\alpha\Delta. \tag{105}\] As discussed above, in case A, \(\rho\equiv\rho_{+}+\rho_{-}\) always stays finite. Since \(\rho_{+}\geq 0\) and \(\rho_{-}\geq 0\), this also means that \(\Delta\equiv\rho_{+}-\rho_{-}\) is finite as well. Then, Eq. (105) implies the continuity of \(J_{\Delta}\) in space. Using the definitions \(J\equiv F_{\rm eff}\rho+u\Delta\) and \(J_{\Delta}\equiv F_{\rm eff}\Delta+u\rho\), the elimination of \(\Delta\) yields \[J_{\Delta}=\frac{F_{\rm eff}}{u}J+\frac{u^{2}-F_{\rm eff}^{2}}{u}\rho. \tag{106}\] Then the continuity of \(J_{\Delta}\) at \(x=\pm\Lambda/2\) and \(x=0\) leads to \[-\frac{v}{u}J+\frac{u^{2}-v^{2}}{u}\rho_{3}(L-\Lambda/2) = -\frac{\mu F+v}{u}J+\frac{u^{2}-(\mu F+v)^{2}}{u}\rho_{1}(-\Lambda /2), \tag{107}\] \[-\frac{\mu F+v}{u}J+\frac{u^{2}-(\mu F+v)^{2}}{u}\rho_{1}(0) = \frac{\mu F-v}{u}J+\frac{u^{2}-(\mu F-v)^{2}}{u}\rho_{2}(0),\] (108) \[\frac{\mu F-v}{u}J+\frac{u^{2}-(\mu F-v)^{2}}{u}\rho_{2}(\Lambda /2) = -\frac{v}{u}J+\frac{u^{2}-v^{2}}{u}\rho_{3}(\Lambda/2). \tag{109}\] Also, the solution must satisfy the normalization condition \[\int_{-\Lambda/2}^{0}dx\,\rho_{1}(x)+\int_{0}^{\Lambda/2}dx\,\rho_{2}(x)+\int_ {\Lambda/2}^{L-\Lambda/2}dx\,\rho_{3}(x)=N=\bar{\rho}L. \tag{110}\] In case A, Eqs. (107)-(110) fix the boundary conditions of the system. Since we have four unknown parameters \(J\) \(\rho(\pm l/2^{+})\) and \(\rho(0^{+})\), these four equations completely determine the steady-state density profile and the current \(J\). Then, using Eq. (10), we obtain \(F_{\rm obj}\). ### Solution for case B Next, we address case B. 
As discussed in Sec. A.2, \(u^{2}-F_{\rm eff}(x)^{2}=0\) is achieved in the infinitesimal neighborhoods of \(x=-\Lambda/2\) and \(x=0\). Let us denote by \(c_{0}\) and \(c_{0}^{\prime}\) the points at which \(F_{\rm eff}(x)=-u\) near \(x=-\Lambda/2\) and \(x=0\), respectively. We need to check whether \(\rho(x)\) diverges to infinity at these points. For this purpose, we express the steady-state current \(J\) in terms of the densities of the right-moving and the left-moving RTPs in the neighborhoods of \(x=-\Lambda/2\) and \(x=0\). When \(F_{\rm eff}(x)=-u\), the total effective force on a right-moving particle (including the self-propulsion) disappears. Thus, we can write \[J=-2u\,\rho_{-}(c_{0})=-2u\,\rho_{-}(c_{0}^{\prime}), \tag{12}\] which implies that \(\rho_{-}(x)\) stays finite at \(x=c_{0}\) and \(x=c_{0}^{\prime}\). According to Eq. (11), these are the only points where \(\rho(x)\) can possibly diverge. Thus, \(\rho_{-}(x)\) must be finite throughout the system. Now it remains to examine the behaviors of \(\rho_{+}(x)\). Near \(x=-\Lambda/2\), for positive and infinitesimal \(\epsilon\), applying \(\int_{-\Lambda/2-\epsilon}^{-\Lambda/2+\epsilon}dx\) to the first identity of Eq. (5) in the steady state, we obtain \[\left(\mu F+v-u\right)\rho_{+}\!\left(-\frac{\Lambda}{2}+\epsilon\right)+ \left(u-v\right)\rho_{+}\!\left(-\frac{\Lambda}{2}-\epsilon\right)\simeq\frac {\alpha}{2}\int_{-\frac{\Lambda}{2}-\epsilon}^{-\frac{\alpha}{2}+\epsilon}dx\, \rho_{+}(x). \tag{13}\] Since the lhs is bound to be positive, \(\rho_{+}(x)\) must diverge to infinity in the infinitesimal neighborhood of \(x=-\Lambda/2\). Since the current \[J=\rho_{+}(x)\left[F_{\rm eff}(x)+u\right]+\rho_{-}(x)\left[F_{\rm eff}(x)-u\right] \tag{14}\] must be finite, the divergence of \(\rho_{+}(x)\) occurs precisely at \(x=c_{0}\), where \(F_{\rm eff}(x)+u=0\). This implies the existence of a delta peak of the RTP density at \(x=c_{0}\). Meanwhile, applying \(\int_{-\epsilon}^{\epsilon}dx\) to the first identity of Eq. (5) in the steady state, we obtain \[-\!\left(\mu F-v+u\right)\rho_{+}(\epsilon)-\left(\mu F+v-u\right)\rho_{+}\! \left(-\epsilon\right)=\frac{\alpha}{2}\int_{-\epsilon}^{\epsilon}dx\,\rho_{+ }(x)-\frac{\alpha}{2}\int_{-\epsilon}^{\epsilon}dx\,\rho_{-}(x). \tag{15}\] Since the lhs cannot be greater than zero, the two sides can be equal only if \(\rho_{+}(\pm\epsilon)\sim\epsilon\). Thus, \(\rho_{+}(x)\) converges to zero at \(x=c_{0}^{\prime}\). As \(\rho_{-}(c_{0}^{\prime})\) is finite, this implies that \(\rho(c_{0}^{\prime})\) is also finite. To sum up, \(\rho(x)\) diverges to infinity only at \(x=c_{0}\) in the infinitesimal neighborhood of \(x=-\Lambda/2\), and \(\rho(x)=\rho_{-}(x)\) at \(x=c_{0}^{\prime}\) in the infinitesimal neighborhood of \(x=0\). Combining the latter observation with Eq. (14), we can show that \(J\) directly determines \(\rho(0^{\pm})\) as follows: \[\rho(0^{\pm})=\rho_{-}(0^{\pm})=\frac{J}{F_{\rm eff}(0^{\pm})-u}. \tag{16}\] Then, using Eq. (105), we obtain \[\rho_{1}(x)=\rho(0^{-})\exp\left[-\frac{\alpha(\mu F+v)}{u^{2}-(\mu F+v)^{2}}x \right]-\frac{J}{\mu F+v}\left\{1-\exp\left[-\frac{\alpha(\mu F+v)}{u^{2}-(\mu F +v)^{2}}x\right]\right\} \tag{110}\] for \(-\Lambda/2<x<0\), where \(J\) within \(\rho(0^{-})\) is the only unknown coefficient. We can similarly express \(\rho_{2}(x)\) and \(\rho_{3}(x)\) in terms of \(J\) by applying Eqs. (104), (105), and (106). 
It should be noted that \(\rho_{1}(x)\), \(\rho_{2}(x)\), and \(\rho_{3}(x)\) are all smooth and finite-valued functions. The delta peak at \(x=-\Lambda/2\) must be separately taken into account. Thus, the normalization condition of the RTP density profile can be written as \[\bar{\rho}L=\int_{-\Lambda/2}^{0}dx\,\rho_{1}(x)+\int_{0}^{\Lambda/2}dx\,\rho_ {2}(x)+\int_{\Lambda/2}^{L-\Lambda/2}dx\,\rho_{3}(x)+M, \tag{111}\] where \(M\) is the magnitude of the delta peak at \(x=-\Lambda/2\). To fully determine the unknown coefficients \(J\) and \(M\), we revisit Eq. (107): \(\partial_{x}J_{\Delta}=-\alpha\Delta\). Since the delta peak is entirely due to \(\rho_{+}(x)\), the polarization \(\Delta(x)\) also has a delta peak with the same magnitude at \(x=-\Lambda/2\). Thus, integrating Eq. (107) across the infinitesimal interval \([-\Lambda/2-\epsilon,\,-\Lambda/2+\epsilon]\), we obtain \[M=-\frac{1}{\alpha}\left[J_{\Delta}\!\left(-\frac{\Lambda}{2}+\epsilon\right)- J_{\Delta}\!\left(-\frac{\Lambda}{2}-\epsilon\right)\right]. \tag{112}\] Using Eq. (108), this can be rewritten as \[M=-\frac{1}{\alpha}\left[-\frac{\mu F}{u}J+\frac{u^{2}-(\mu F+v)^{2}}{u}\rho_ {1}\!\left(-\frac{\Lambda}{2}\right)-\frac{u^{2}-v^{2}}{u}\rho_{3}\!\left(L- \frac{\Lambda}{2}\right)\right], \tag{113}\] which relates \(M\) to \(J\). Together with the normalization condition in Eq. (111), this equation fully determines the values of \(J\) and \(M\). Thus we have fully determined the steady-state RTP density for case B, and \(F_{\rm obj}\) can also be derived from \(J\) using Eq. (104). ## Appendix B Small \(v\) expansion of the force on the object With \(F_{\rm obj}\) determined by the procedure described in the previous section, we can expand the expression in terms of small \(v\) and single out the leading-order terms for large \(L\), getting the linear-order coefficient \[a_{1}(f,\lambda,L)=\frac{2}{f}\sinh\left(\frac{f\lambda}{1-f^{2}}\right)-\frac {\lambda\left(f^{4}-f^{2}+2\right)}{\left(1-f^{2}\right)^{2}}+\mathcal{O}\! \left(L^{-1}\right) \tag{114}\] and the coeffcient of \(v^{3}\) \[a_{3}(f,\lambda,L)=-\frac{L}{\Lambda}\!\left[-\frac{\lambda}{3f^{ 2}}+\frac{\lambda^{3}}{6}\frac{(1+f^{2})^{2}}{(1-f^{2})^{4}}+\frac{\lambda}{3f ^{2}}\cosh\!\left(\frac{f\lambda}{1-f^{2}}\right)\right.\\ -\frac{\lambda^{2}}{3}\frac{1+f^{2}}{f(1-f^{2})^{2}}\sinh\!\left( \frac{f\lambda}{1-f^{2}}\right)\bigg{]}+\mathcal{O}\!\left(L^{0}\right). \tag{115}\]
2305.16010
On the solution of the Kolmogorov-Feller equation arising in the model of biological evolution
The Kolmogorov-Feller equation for the probability density of a Markov process on a half-axis, which arises in important problems of biology, is considered. This process consists of random jumps distributed according to Laplace's law and a deterministic return to zero. It is shown that the Green's function for such an equation can be found both in the form of a series and, for some ratios of the parameters, in explicit form. This allows one to find explicit solutions of the Kolmogorov-Feller equation for many types of initial data.
Olga S. Rozanova
2023-05-25T12:50:43Z
http://arxiv.org/abs/2305.16010v2
# On the solution of the Kolmogorov-Feller equation arising in the model of biological evolution ###### Abstract. The Kolmogorov-Feller equation for the probability density of a Markov process on a half-axis, which arises in important problems of biology, is considered. This process consists of random jumps distributed according to Laplace's law and a deterministic return to zero. It is shown that the Green's function for such an equation can be found both in the form of a series and in explicit form for some ratios of the parameters. This allows one to explicitly find solutions to the Kolmogorov-Feller equation for many initial data. Key words and phrases:probability density, gene expression, Kolmogorov-Feller equation, fundamental solution, exact solution 2020 Mathematics Subject Classification: Primary 60E05; Secondary 35Q84; 82C31 ## 1. Introduction and problem statement The cells of all living organisms contain three main macromolecules: DNA, mRNA and proteins. Matrix ribonucle nucleic acid (mRNA) contains information about the primary structure (amino acid sequence) of proteins and plays an important role in gene expression. mRNA is synthesized from DNA during transcription, after which, in turn, it is used during translation as a template for protein synthesis. Gene expression, that is, the process of transferring information from mRNA to proteins, consisting of a series of biochemical reactions that occur randomly inside living cells, has been studied from an experimental and theoretical point of view for half a century. However, the simplest mathematical model of protein distribution in a cell population depending on the protein concentration inside a particular cell was introduced only in 2006 in [1]. It assumes a stochastic spasmodic nature of gene expression according to an exponential law, accompanied by continuous deterministic degradation (reversion to zero). Namely, the protein is produced in jumps, in which an mRNA molecule is translated into several protein molecules before disintegrating. The lifetime of an mRNA is considered to be short compared to the lifetime of a protein molecule; protein production occurs in random exponentially distributed uncorrelated events. The probability density \(P(t,x)\geq 0\) of such a Markov process is described by the following integro-differential equation [1] \[\frac{\partial}{\partial t}P\left(t,x\right)=\frac{\partial}{ \partial x}\left(\beta\,x\,P\left(t,x\right)\right)+\lambda\,\left(k\,\int_{0} ^{x}\!\!P\left(t,z\right)\mathrm{e}^{-k\left(x-z\right)}dz-P\left(t,x\right) \right), \tag{1}\] \[0\leq z\leq x,\,t\geq 0,\] where \(\lim\limits_{x\to 0}xP(t,x)=0\) and \(\beta,\lambda,k\) are positive constants. This is a generalization of the Fokker-Planck-Kolmogorov equation, which is sometimes called the Kolmogorov-Feller equation. In the biological interpretation, the variable \(x\) corresponds to the concentration of the protein inside a particular cell, \(\beta\) is the rate of protein degradation, \(\lambda\) is the rate of DNA transcription in mRNA, \(k\) is the ratio of the rate of mRNA degradation to the rate of mRNA translation in protein molecules. The constants \(\alpha=\frac{\lambda}{\beta}\) and \(k\) are the main parameters characterizing protein production. 
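Before turning to the literature, note that the underlying Markov process is straightforward to simulate, which gives a useful check on any analytic formula: the concentration decays deterministically at rate \(\beta\) between jump times that arrive at rate \(\lambda\), and each jump adds an exponentially distributed amount with mean \(1/k\). A minimal sketch with illustrative parameter values:

```python
import numpy as np

beta, lam, k = 1.0, 2.0, 1.0        # illustrative; alpha = lam/beta = 2
rng = np.random.default_rng(0)

def endpoint(t_final, x0=0.0):
    """One trajectory: deterministic decay between Poisson(lam) jump times,
    jump sizes drawn from an exponential law with mean 1/k."""
    t, x = 0.0, x0
    while True:
        dt = rng.exponential(1.0/lam)             # waiting time to next jump
        if t + dt > t_final:
            return x*np.exp(-beta*(t_final - t))  # decay until the end
        x = x*np.exp(-beta*dt) + rng.exponential(1.0/k)
        t += dt

samples = np.array([endpoint(20.0) for _ in range(20000)])
print(samples.mean(), samples.var())  # gamma limit: alpha/k = 2, alpha/k^2 = 2
```

For large times, the histogram of such samples approaches the gamma-distributed stationary density recovered analytically below.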
There is a very large number of works in which the [1] model is generalized, for example, [2], [3] and the references contained there, but the study of solutions of the (1) equation is limited to the study of stationary solutions and the asymptotics of solutions for large \(x\). The dynamics of the solution in time, as a rule, is studied only numerically. In this communication, we want to show that the Green's function \(\mathcal{G}(t,x,y)\) of the Cauchy problem, that is, the solution of the equation (1) with initial conditions \[P|_{t=0}=\delta(x-y),\quad x\geq 0,\,0\leq y\leq x, \tag{2}\] can be found analytically in the form of a series, and for some relations between the parameters and in the form of a finite sum. This allows us to find a solution to the Cauchy problem for any integrable on the semi-axis and bounded initial conditions \[P|_{t=0}=\phi(x)\geq 0,\quad\int\limits_{\mathbb{R}_{+}}\phi\,dx=1,\] as \[P(t,x)=\int_{0}^{\infty}\mathcal{G}\left(t,x,y\right)\phi(y)dy, \tag{3}\] which is an explicit formula for some types of initial data. To ensure the classical smoothness of the solution, it is necessary to require \(\phi\in C^{1}(\overline{\mathbb{R}}_{+})\). **2. Finding the Green's function** 1. Applying the Laplace transform \(x\to w\) to the (1) equation and the initial data (2), we obtain the Cauchy problem for \(\mathcal{L}\{P\}=\mathcal{L}\{P\}\left(t,w\right)\) \[\frac{\partial}{\partial t}\mathcal{L}\{P\}+\beta w\frac{\partial}{\partial w }\,\mathcal{L}\{P\}+\frac{\lambda k}{w+k}\mathcal{L}\{P\}=0,\quad\mathcal{L}\{ P\}|_{t=0}=\Theta(y)e^{-wy},\] whose solution has the form \[\mathcal{L}\{P\}(t,w)=\mathcal{L}\{\mathcal{G}\}(t,w,y)=\left(\frac{we^{- \beta t}+k}{w+k}\right)^{\alpha}\,e^{-ywe^{-\beta t}},\quad\alpha=\frac{ \lambda}{\beta}.\] Denote \(\bar{x}=x-ye^{-\beta t}\geq 0\). Note that \(\frac{we^{-\beta t}+k}{w+k}=1+W,\) where \(W=\frac{w(e^{-\beta t}-1)}{w+k}\), \(|W|<1\). Then, expanding \(\left(\frac{we^{-\beta t}+k}{w+k}\right)^{\alpha}\) into a convergent binomial series and applying the inverse Laplace transform, we obtain \[\mathcal{G}(t,x,y)=\sum_{i=0}^{\infty}C_{\alpha}^{i}(e^{-\beta t}-1)^{i} \mathcal{L}^{-1}\left\{\left(\frac{w}{w+k}\right)^{i}\right\}(t,\bar{x}). \tag{4}\] Using the properties of the Laplace transform, we find that \[\mathcal{L}^{-1}\left\{\left(\frac{w}{w+k}\right)^{i}\right\}(t,\bar{x})= \mathcal{L}^{-1}\left\{\left(1-\frac{k}{w+k}\right)^{i}\right\}(t,\bar{x})= \tag{5}\] \[\delta(\bar{x})+\sum_{s=1}^{i}(-1)^{s}C_{i}^{s}\Psi_{s}(\bar{x}),\] \[\Psi_{s}(\bar{x})=\frac{1}{(s-1)!}k^{s}\bar{x}^{s-1}e^{-k\bar{x}},\quad s\in \mathbb{N}.\] Substituting (5) into (4) and noticing that \(1+\sum\limits_{i=1}^{\infty}C^{i}_{\alpha}(e^{-\beta t}-1)^{i}=e^{-\alpha\beta t}\), we obtain a representation of the solution in the form of a series converging for each \(\bar{x}\in\mathbb{R}_{+}\) as a sum of the singular component \(\mathcal{G}_{sing}=A(t)\delta(\bar{x})\) and the regular component \(\mathcal{G}_{reg}\): \[\mathcal{G}(t,x,y)=e^{-\alpha\beta t}\,\delta(\bar{x})+e^{-k\bar{x}}\,\sum \limits_{i=1}^{\infty}C^{i}_{\alpha}(e^{-\beta t}-1)^{i}\sum\limits_{s=1}^{i}C ^{s}_{i}\frac{(-1)^{s}}{(s-1)!}k^{s}\bar{x}^{s-1}, \tag{6}\] \[i,s\in\mathbb{N},\quad s\leq i.\] We see that the Green's function contains a singular component \(\mathcal{G}_{sing}\) for all \(t>0\), but its amplitude \(A(t)\to 0\) for \(t\to\infty\). 2. If \(\alpha=n\in\mathbb{N}\), then the sum (6) becomes finite. 3. 
The regular component \(\mathcal{G}_{reg}\) tends at \(t\to\infty\) to the probability density of the gamma distribution, \[\mathcal{G}_{st}(x)=e^{-kx}\,\sum\limits_{i=1}^{\infty}C^{i}_{ \alpha}(-1)^{i}\sum\limits_{s=1}^{i}C^{s}_{i}\frac{(-1)^{s}}{(s-1)!}k^{s}x^{s- 1}=\frac{k^{\alpha}x^{\alpha-1}\,e^{-kx}}{\Gamma(\alpha)},\] \[i,s\in\mathbb{N},\quad s\leq i,\quad x\geq 0,\] where \(\Gamma(\alpha)\) is Euler's gamma function. The stationary solution of the equation (1) of the form \(\mathcal{G}_{st}(x)\) was already obtained in [1]. Its maximum at \(\alpha\leq 1\) is at the origin, and at \(\alpha>1\) it is at the point \(x=\frac{\alpha-1}{k}\). Note that all the transformations were done formally, but after the explicit form of the Green's function is obtained, we see from (3) that, under the conditions imposed above on the initial data, \(P(t,x)\) is absolutely integrable on the half-axis function (due to the presence of the factor \(e^{-kx}\)), so the Laplace transform is defined. The inverse Laplace transform is also defined since the image is an analytic function. **3. Examples.** For some fairly wide classes of initial data for \(\alpha=n\in\mathbb{N}\), it is possible to represent the solution of the Cauchy problem in the form of an explicit formula. This, for example, \(\phi(x)=A_{1}x^{a_{1}}e^{-b_{1}x}\), \(\phi(x)=A_{2}x^{a_{2}}e^{-b_{2}x^{2}}\), where \(A_{1},A_{2},a_{1},a_{2},b_{1},b_{2}\) are positive constants, chosen so as to ensure that integral over the semi-axis is equal to one, as well as piecewise constant or piecewise polynomial initial data. Note that discontinuities in the initial conditions do not smooth out, as happens in the case of the heat equation, but continue to be present for all \(t>0\), but their amplitude tends to zero for \(t\to\infty\). This happens due to the hyperbolicity of the equation (see below). Therefore, to extend the class of initial data to piecewise-smooth functions, one has to use the generalized formulation of the solution of equation (1). For large \(n\), the formulas can be quite cumbersome, but they are easily found using computer algebra packages. These formulas provide a large stock of tests for numerical methods for solving integro-differential equations. As examples illustrating the dynamics of density, we consider the cases \(n=1\) and \(n=2\), for which the corresponding Green's functions \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) are written rather short. Namely, \[\mathcal{G}_{1}(t,\bar{x}) = (1-e^{-\beta t})e^{-k\bar{x}}+e^{-\beta t}\delta(\bar{x})\] \[\mathcal{G}_{2}(t,\bar{x}) = \left(2ke^{-\beta t}(1-e^{-\beta t})+k^{2}(1-e^{-\beta t})^{2} \bar{x}\right)\,e^{-k\bar{x}}+e^{-2\beta t}\delta(\bar{x}).\] Their limit behavior is significantly different: \(\mathcal{G}_{1st}\) has a maximum at zero, while \(\mathcal{G}_{2st}\) has a maximum at the point \(x=\frac{1}{k}>0\). As the initial data in both cases, we choose the function \(\phi(x)=xe^{-x}\). In this case, integral (3) can be elementary calculated. Fig.1 shows plots of the solution at different times for \(n=1\) (left) and \(n=2\) (right). We see that for \(n=1\) the density maximum tends monotonically in time to the origin, while for \(n=2\) the density maximum first also tends to the origin, but then the graph has a competing maximum, which eventually tends to maximum of the stationary solution, while the first maximum vanishes. ## 4. Generalizations and remarks Equation (1) uses the simplest form of the density of jumps \(p(z)=ke^{-kx},x\geq 0\). 
Initially, it was chosen not only for reasons of simplicity, but also because of its compliance with experimental data. However, if we solve the purely mathematical problem of finding the Green's function, and hence the solutions of the Cauchy problem in explicit form, then we can consider wider classes of functions as the kernel \(p(z)\). The solution can be obtained explicitly if \(p(z)\) is a solution to a linear equation with constant coefficients of any order. For example, it can be a finite sum of exponents of the form \(\frac{1}{j}\ \sum\limits_{i=1}^{j}k_{i}e^{-k_{i}x}\), \(x\geq 0\), \(j\in\mathbb{N}\). Note that the method of finding the Green's function is standard, but the fact that it is possible to obtain an explicit expression with the inverse Laplace transform is a rather rare phenomenon. As shown in [2], by replacing \(Y(t,x)=\int\limits_{0}^{x}P(t,x-z)e^{-kz}dz\), the integro-differential equation (1) can be reduced to the differential equation \[\frac{\partial^{2}Y}{\partial t\partial x}-\beta x\frac{\partial^{2}Y}{ \partial x^{2}}+k\frac{\partial Y}{\partial t}+(\lambda-\beta(kx+1))\frac{ \partial Y}{\partial x}+k\beta Y=0,\quad x\geq 0,\] belonging to the hyperbolic type. This explains the fact that discontinuities in the initial data do not disappear with time, but propagate along the characteristics. The characteristics are \(x(t)=x_{0}e^{-\beta t}\), \(x_{0}\geq 0\), and \(t=\text{const}\). We see that one more family of characteristics is added to the "parabolic" one.
2306.00322
Search for Boosted Dark Matter in COSINE-100
We search for energetic electron recoil signals induced by boosted dark matter (BDM) from the galactic center using the COSINE-100 array of NaI(Tl) crystal detectors at the Yangyang Underground Laboratory. The signal would be an excess of events with energies above 4 MeV over the well-understood background. Because no excess of events is observed in a 97.7 kg$\cdot$years exposure, we set limits on BDM interactions under a variety of hypotheses. Notably, we explored the dark photon parameter space, leading to competitive limits compared to direct dark photon search experiments, particularly for dark photon masses below 4\,MeV and considering the invisible decay mode. Furthermore, by comparing our results with a previous BDM search conducted by the Super-Kamiokande experiment, we found that the COSINE-100 detector has advantages in searching for low-mass dark matter. This analysis demonstrates the potential of the COSINE-100 detector to search for MeV electron recoil signals produced by dark sector particle interactions.
G. Adhikari, N. Carlin, J. J. Choi, S. Choi, A. C. Ezeribe, L. E. Franca, C. Ha, I. S. Hahn, S. J. Hollick, E. J. Jeon, J. H. Jo, H. W. Joo, W. G. Kang, M. Kauer, B. H. Kim, H. J. Kim, J. Kim, K. W. Kim, S. H. Kim, S. K. Kim, W. K. Kim, Y. D. Kim, Y. H. Kim, Y. J. Ko, D. H. Lee, E. K. Lee, H. Lee, H. S. Lee, H. Y. Lee, I. S. Lee, J. Lee, J. Y. Lee, M. H. Lee, S. H. Lee, S. M. Lee, Y. J. Lee, D. S. Leonard, N. T. Luan, B. B. Manzato, R. H. Maruyama, R. J. Neal, J. A. Nikkel, S. L. Olsen, B. J. Park, H. K. Park, H. S. Park, K. S. Park, S. D. Park, R. L. C. Pitta, H. Prihtiadi, S. J. Ra, C. Rott, K. A. Shin, D. F. F. S. Cavalcante, A. Scarff, N. J. C. Spooner, W. G. Thompson, L. Yang, G. H. Yu
2023-06-01T03:43:18Z
http://arxiv.org/abs/2306.00322v2
# Search for Boosted Dark Matter in COSINE-100 ###### Abstract We search for energetic electron recoil signals induced by boosted dark matter (BDM) from the galactic center using the COSINE-100 array of NaI(Tl) crystal detectors at the Yangyang Underground Laboratory. The signal would be an excess of events with energies above 4 MeV over the well-understood background. Because no excess of events are observed in a 97.7 kg - years exposure, we set limits on BDM interactions under a variety of hypotheses. Notably, we explored the dark photon parameter space, leading to competitive limits compared to direct dark photon search experiments, particularly for dark photon masses below 4 MeV and considering the invisible decay mode. Furthermore, by comparing our results with a previous BDM search conducted by the Super-Kamionkande experiment, we found that the COSINE-100 detector has advantages in searching for low-mass dark matter. This analysis demonstrates the potential of the COSINE-100 detector to search for MeV electron recoil signals produced by the dark sector particle interactions. A number of astrophysical observations provide evidence that the dominant matter component of the Universe is not ordinary matter but rather non-baryonic dark matter [1; 2]. Many searches for signs of dark matter have been pursued by direct detection experiments [3; 4; 5], indirect detection experiments [6; 7; 8], and collider experiments [9; 10; 11] without success [12]. It motivates searches for alternative types of dark matter that produce substantially different signatures in detectors, such as light (mass) dark matter models that predict extremely low-energy signals [13; 14; 15; 16] or relativistically boosted dark matter (BDM) that produce more energetic signals [17; 18; 19]. A relativistic dark matter particle, \(i.e.\) one that is boosted by interactions with cosmic-rays in the galaxy [20; 21; 22; 23; 24] or produced by the decay [25; 26; 27] or annihilation [28; 29; 30] of heavier dark sector particles, can deposit signals with energies that are above MeV in detectors. Since typical direct detection experiments search for low-energy nuclear recoil signals, scenarios for such energetic events have not been very well studied. Here we consider a model in which a BDM is produced by heavier dark matter particles [31; 32; 33]. It would require at least two species of dark matter particles, denoted by \(\chi_{0}\) and \(\chi_{1}\) for the heavier and lighter dark matter particles, respectively [31; 34]. The first direct search for BDM from annihilations of heavy dark matter particles in the galactic center was performed with the Super-Kamiokande detector that searched for energetic electron recoil signals above 100 MeV induced by BDM elastic scattering [35]. With COSINE-100 data, we searched for the inelastic scattering of BDM (IBDM) [18] induced by the existence of another dark sector particle [29; 30]. Recently, searches for cosmic-ray BDM interacting with protons in dark matter detectors with energies of a few keV [36; 37], as well as in neutrino detectors with energies between a few MeV [38] and a few GeV [39], were performed. In this Letter, we report on a search for BDM that elastically interacts with electrons in the NaI(Tl) crystals of the COSINE-100 detector. Such interactions would produce energetic electrons in the NaI(Tl) crystals. 
Our region of interest for the BDM interaction consists of an energy deposition above 4 MeV, since radioactive \(\gamma\) or \(\beta\) particles primarily have energies less than 4 MeV. COSINE-100 [40] is composed of an array of eight ultra-pure NaI(Tl) crystals, each coupled to two photomultiplier tubes (PMTs). Due to the high background levels and low light yields of three crystals, this analysis only uses data from five crystals, corresponding to an effective mass of 61.3 kg [41; 42]. The crystals are immersed in an active veto detector that is composed of 2,200 L of linear alkylbenzene (LAB)-based liquid scintillator (LS) [43]. The LS is contained within a shield comprising a 3 cm thick layer of oxygen-free copper, a 20 cm thick layer of lead, and an array of plastic scintillation counters for cosmic-ray muon tagging [44; 45]. We used data obtained between 21 October 2016 and 18 July 2018, corresponding to 1.7 years of effective live time and a total exposure of 97.7 kg \(\cdot\) years for this search. The same dataset was already adopted for a precise understanding and modeling of the observed background between 1 keV and 3 MeV [46], as well as for a dark matter search that concentrated on the low-energy nuclear recoil spectrum [41]. The COSINE-100 data acquisition system recorded two different signals from the crystal PMTs, covering a wide dynamic range from the single-photoelectron level up to high energies of 4 MeV [47]. In addition to the low-energy (0-100 keV) anode readout, the \(5^{\rm th}\)-stage dynode readout was recorded by 500 MHz flash analog-to-digital converters (FADCs) as 8 \(\mu\)s long waveforms. It provided sufficient energy resolution for events with energies between 50 keV and 3 MeV. We have previously presented a background model for the COSINE-100 detectors that covered energies below 3 MeV [46; 48]. However, events with energies greater than 4 MeV were above the limit of the FADC dynamic range and suffered from a saturated, non-linear response. To address this issue, we developed an algorithm to detect the saturation of the recorded pulse and reconstruct the saturated event. A template built from unsaturated events in the 2-3 MeV energy range was compared to the saturated pulse, and the waveform in the saturated region was reconstructed, as shown in Fig. 1 (a). The original energy spectrum as well as the recovered energy spectrum are shown in Fig. 1 (b). The energy scale above 4 MeV is calibrated with 7.6 MeV and 7.9 MeV \(\gamma\)-rays from \({}^{56}\)Fe and \({}^{63}\)Cu, respectively, that are produced by thermal neutron capture in the steel supporter of the lead shield and the copper encapsulation of the NaI(Tl) crystals [40]. Figure 1 (b) shows the reconstructed energy spectrum of the single-hit events, in which the spectrum above 6 MeV is well described by the neutron capture events from a Geant4 [49]-based simulation. Candidate events are selected if the reconstructed energy is greater than 4 MeV and there are no coincident muon candidate tracks in the muon detector [44]. We reject \(\alpha\)-induced events in the crystals using a pulse shape discrimination method [50]. Selected candidate events are sorted into two categories: single-hit and multiple-hit events. A multiple-hit event has accompanying crystal signals with more than four photoelectrons or a liquid scintillator signal above 80 keV [43]. A single-hit event is one where the other detectors do not meet these criteria.
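Returning to the saturation-recovery step: the template-based reconstruction described above can be sketched as a least-squares scaling of an unsaturated template onto the unclipped samples of a saturated waveform. The snippet below is purely illustrative (it is not the collaboration's code, and the function and threshold names are our own assumptions), but it captures the structure of the algorithm.

```python
import numpy as np

def reconstruct_saturated(pulse, template, adc_max=4095):
    """Schematic template-based recovery of a saturated waveform.

    `pulse` and `template` are same-length sample arrays; samples at the
    12-bit ADC ceiling are treated as saturated and excluded from the fit.
    """
    ok = pulse < adc_max                    # unclipped samples only
    # Least-squares amplitude scaling the template onto the good samples
    scale = np.dot(template[ok], pulse[ok]) / np.dot(template[ok], template[ok])
    recovered = pulse.astype(float).copy()
    recovered[~ok] = scale * template[~ok]  # fill in the clipped region
    return recovered, scale
```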
Although a BDM interaction with the NaI(Tl) crystal would generate an energetic single electron with energy between a few MeV and a few hundred MeV [29; 30], this energetic electron could generate a number of bremsstrahlung-induced \(\gamma\)s that could convert and deposit energy in the other crystals or the LS. Therefore, we use both the single- and multiple-hit channels in this analysis. Four different categories of events contribute to the background above 4 MeV. Internal or external \(\beta/\gamma\) radiation induced by environmental radioactivity was well understood from the background modeling of the COSINE-100 detector for energies below 3 MeV [46; 48]. We extended this model to energies above 4 MeV. Here the main contribution is caused by internal \({}^{228}\)Th decay, especially the sequential \({}^{212}\)Bi (\(\beta\)-decay with a Q-value of 2.25 MeV) and \({}^{212}\)Po (\(\alpha\)-decay with a Q-value of 8.95 MeV) decays with the 300 ns half-life of \({}^{212}\)Po. Because of the short half-life, \({}^{212}\)Bi and \({}^{212}\)Po events pile up in the 8 \(\mu\)s event window. Based on their distinct pulse shapes, we can partially reject them, but their residual is the main contribution above 4 MeV from environmental \(\beta/\gamma\) radiation. Although the muon veto detector tags events coincident with muons [45], 2.14\(\pm\)0.21% of the muons that transit the detector are mis-tagged due to gaps between plastic scintillator panels. We applied a data-driven method to estimate the muon mis-tag contribution in the signal region, as described in Ref. [18]. Because of the 4\(\pi\) solid-angle coverage of the LS active shield [43], almost no events can reach the NaI(Tl) crystals without hits in the LS detector. Therefore, we do not consider the mis-tagged muon contribution for the single-hit events. Thermal neutron capture by copper or iron nuclei in the shielding materials produces \(\gamma\)-rays with energies as high as 8 MeV via \((n,\gamma)\) reactions. The thermal neutron and total neutron fluxes measured at the Yangyang underground laboratory are (1.44\(\pm\)0.15)\(\times 10^{-5}/(\rm cm^{2}\cdot s)\) and (4.46\(\pm\)0.66)\(\times 10^{-5}/(\rm cm^{2}\cdot s)\), respectively [51]. The neutron-induced events shown in Fig. 1 (b) were simulated based on this flux. In addition, we estimate the expected background events from \({}^{8}\)B solar neutrino elastic scattering on electrons. Table 1 presents the expected backgrounds from the aforementioned contributions for the single-hit and multiple-hit channels, which are compared with the measured data. The measured data agree with the total expected backgrounds within their uncertainties. In Ref. [31], it is proposed that the boosted, lighter \(\chi_{1}\) dark matter particles are produced in the pair annihilation of two heavier \(\chi_{0}\) with a total flux \[\mathcal{F}=1.6\times 10^{-4}\mathrm{cm}^{-2}\mathrm{s}^{-1}\left(\frac{< \sigma v>_{0\to 1}}{5\times 10^{-26}\mathrm{cm}^{3}\mathrm{s}^{-1}}\right) \left(\frac{\mathrm{GeV}}{m_{0}}\right)^{2}, \tag{1}\] where the reference value for \(<\sigma v>_{0\to 1}\), the velocity-averaged annihilation cross section of \(\chi_{0}\chi_{0}\rightarrow\chi_{1}\chi_{1}\), corresponds to the correct dark matter thermal relic density for \(\chi_{0}\) derived by the so-called "assisted" freeze-out mechanism [34], and \(m_{0}\) denotes the mass of \(\chi_{0}\). This production rate is subject to uncertainties in the dark matter halo models [52; 53; 54].
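As a numerical illustration, Eq. (1) can be evaluated directly; the helper below is purely illustrative and simply reproduces the quoted scaling with \(m_{0}\).

```python
def bdm_flux(sigma_v=5e-26, m0_gev=1.0):
    """chi_1 flux of Eq. (1) in cm^-2 s^-1."""
    return 1.6e-4 * (sigma_v / 5e-26) * (1.0 / m0_gev) ** 2

print(bdm_flux())            # reference point: 1.6e-4 cm^-2 s^-1 at m0 = 1 GeV
print(bdm_flux(m0_gev=0.2))  # m0 = 200 MeV: flux rises to 4.0e-3 cm^-2 s^-1
```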
Here we assume the NFW halo profile [52; 55] described in Ref. [31]. Note that we implicitly assume that \(\chi_{0}\) and its anti-particle \(\bar{\chi_{0}}\) are distinguishable, as discussed in Ref. [56]. The relativistic \(\chi_{1}\) (mass \(m_{1}\)) travels and interacts with terrestrial detector elements either elastically or inelastically. We consider \(\chi_{1}e^{-}\) elastic scattering via the exchange of a mediator \(X\) (mass \(m_{X}\)). We generate expected signals for various values of the BDM parameters (\(\gamma_{1}=m_{0}/m_{1}\), \(m_{X}\), and \(\epsilon\), where \(\epsilon\) is the coupling between the dark sector mediator \(X\) and the electron) based on Refs. [29; 30]. The generated signal events undergo detector simulation and event selection. To search for BDM-induced events, we use a Bayesian approach with a likelihood function based on Poisson probability [41]. We perform binned maximum likelihood fits to the measured energy spectra of the two channels, single-hit and multiple-hit, for each set of BDM parameters. Each crystal in each channel is fitted with a crystal- and channel-specific background model and a crystal- and channel-correlated BDM signal, and the combined fit multiplies the ten likelihoods of the five crystals. We use the evaluated background contributions to set Gaussian priors for the known background rates. Figure 2 presents an example of the maximum likelihood fit for BDM signals with assumed parameters of \(m_{1}\)=6 MeV, \(m_{X}\)=13 MeV, \(\gamma_{1}\)=20, \(\epsilon\)=8\(\times 10^{-4}\). The summed event spectra for the five crystals in the single-hit (a) and multiple-hit (b) events are shown together with the best-fit result. For comparison, the expected signals for the BDM parameters \(m_{1}\)=10 MeV, \(m_{X}\)=1 MeV, \(\gamma_{1}\)=50, \(\epsilon\)=3\(\times 10^{-4}\) and \(m_{1}\)=6 MeV, \(m_{X}\)=13 MeV, \(\gamma_{1}\)=20, \(\epsilon\)=8\(\times 10^{-4}\) are presented. No excess of events that could be attributed to BDM interactions is found for the considered BDM signals. The posterior probabilities of the signals are consistent with zero in all cases, and 90 % CL upper limits are determined. We interpret this result in the context of dark photon phenomenology by assuming that the interaction between the standard model particles and the dark sector particles is mediated by a dark photon. This allows us to compare this result with other dark photon searches in terms of the parameters \(m_{X}\) and \(\epsilon\). A similar interpretation with 59.6 days of COSINE-100 data for IBDM was presented in Ref. [18]. In our analysis, we generate signals using different sets of model parameters, fixing \(m_{1}\) and \(\gamma_{1}\) while varying \(m_{X}\). Figure 3 shows the measured 90 % CL upper limits obtained from the 1.7 years of COSINE-100 data for the aforementioned model parameters.
Figure 1: (a) An example of a saturated event due to the limited dynamic range (12 bit, 4096 for 2.5 V) is presented as a solid line. Reconstruction of the saturated event (red dashed line) is achieved by comparison with the template from unsaturated events. In this example, an energy of 6.02 MeV is reconstructed for this event. (b) The measured energy spectra before (black solid line) and after (red dots) reconstruction of the saturated events are presented. The reconstructed energy spectrum is calibrated with 7.6 MeV and 7.9 MeV \(\gamma\)-rays from the neutron capture of iron and copper, respectively, as shown by the blue dotted line.
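The binned likelihood fit described above can be sketched as a Poisson likelihood per bin with a signal-strength parameter and Gaussian-constrained background scales. The snippet below shows this structure for a single channel; it is a schematic of the approach, not the collaboration's fitter, and all names are illustrative.

```python
import numpy as np

def neg_log_likelihood(mu, bkg_scales, data, signal, bkgs, priors):
    """-ln L for one channel: Poisson bins plus Gaussian background priors.

    data, signal : per-bin counts; bkgs: list of background templates;
    bkg_scales   : one fitted scale per background; priors: (mean, sigma).
    Constant log(n!) terms are dropped.
    """
    expected = mu * signal + sum(s * b for s, b in zip(bkg_scales, bkgs))
    expected = np.clip(expected, 1e-10, None)
    nll = np.sum(expected - data * np.log(expected))
    for s, (m, sig) in zip(bkg_scales, priors):
        nll += 0.5 * ((s - m) / sig) ** 2
    return nll
```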
We compare our results with those of direct dark photon searches for both the visible decay mode (\(m_{X}<2m_{1}\)) and the invisible decay mode (\(m_{X}\geq 2m_{1}\)) in Fig. 3 (a) and (b)\({}^{1}\), respectively. Notably, for the invisible mode, our analysis yields a competitive limit for dark photon masses below 4 MeV, assuming parameters of \(m_{1}\)=0.5 MeV and \(\gamma_{1}\)=10. This result highlights the complementarity of our search for the dark photon, although the specific model discussed in this paper has to be assumed. Footnote 1: Note that additional constraints from cosmological and astrophysical observations, depending on the detailed model of the dark sector particles discussed in Ref. [64], need to be taken into account. An additional interpretation of the Super-Kamiokande (SK) IV search result [17] considers the relic dark matter mass (\(m_{0}\)) and coupling (\(\epsilon\)) parameter space, as shown in Fig. 4. Two results are obtained using the same NFW halo profile [52] with a 0.5 coupling constant between the BDM and the mediator through the elastic interaction. Despite using a much smaller dataset of 97.7 kg \(\cdot\) years compared to the 161.9 kiloton \(\cdot\) years SK exposure, the lowest bound for \(\epsilon\) at \(m_{0}\) of about 200 MeV is at a similar level as that of the SK search result for an \(m_{0}\) of about 5 GeV. Because of the well-understood backgrounds in the COSINE-100 detector above 4 MeV, the COSINE-100 data are complementary to results from the SK detector in searching the unexplored parameter space of low-mass dark matter. In summary, we searched for evidence of boosted dark matter (BDM) by observing energetic electron recoil events induced by the elastic scattering of the BDM. Based on 1.7 years of COSINE-100 data, we found no evidence of BDM interactions, and we set 90 % CL limits for various model parameters.
\begin{table} \begin{tabular}{c|c|c|c|c|c||c|c|c|c|c} \hline Energy & \multicolumn{5}{c||}{Single-hit} & \multicolumn{5}{c}{Multiple-hit} \\ \cline{2-11} (MeV) & \(\beta/\gamma\) & neutron & neutrino & total & data & \(\beta/\gamma\) & neutron & muon & total & data \\ \hline 4\(-\)6 & 172\(\pm\)26 & 203\(\pm\)30 & 0.039\(\pm\)0.006 & 375\(\pm\)40 & 322 & 12\(\pm\)2 & 889\(\pm\)91 & 15\(\pm\)2 & 915\(\pm\)91 & 873 \\ 6\(-\)8 & 0 & 592\(\pm\)63 & 0.024\(\pm\)0.004 & 592\(\pm\)63 & 545 & 0 & 1165\(\pm\)120 & 16\(\pm\)2 & 1181\(\pm\)120 & 1194 \\ 8\(-\)10 & 0 & 60\(\pm\)25 & 0.011\(\pm\)0.002 & 60\(\pm\)25 & 78 & 0 & 30\(\pm\)11 & 21\(\pm\)3 & 51\(\pm\)12 & 37 \\ \(>\)10 & 0 & 0 & 0.003\(\pm\)0.001 & 0.003\(\pm\)0.001 & 0 & 0 & 2\(\pm\)1 & 211\(\pm\)4 & 213\(\pm\)4 & 218 \\ \hline \end{tabular} \end{table} Table 1: The expected number of background events and the observed events from the 1.7-year COSINE-100 dataset are shown for both the single-hit and the multiple-hit channels. The individual contributions from environmental \(\beta/\gamma\), thermal neutron capture, muon mis-tag, and solar neutrino are also listed.
Figure 2: The summed energy spectra for the five crystals (black filled circles) and the best fit (green solid lines) with the BDM signal of \(m_{1}\)=6 MeV, \(m_{X}\)=13 MeV, \(\gamma_{1}\)=20, \(\epsilon\)=8\(\times\)10\({}^{-4}\) are presented for the single-hit events (a) and the multiple-hit events (b). Fitted contributions to the background from \(\beta/\gamma\) radiation, neutron capture, and muon mis-tag are indicated. The green bands are the 68 % confidence level (CL) intervals of the uncertainties obtained from the likelihood fit. For presentation purposes, we draw the BDM signal shapes assuming BDM parameters of \(m_{1}\)=10 MeV, \(m_{X}\)=1 MeV, \(\gamma_{1}\)=50, \(\epsilon\)=3\(\times\)10\({}^{-4}\) and \(m_{1}\)=6 MeV, \(m_{X}\)=13 MeV, \(\gamma_{1}\)=20, \(\epsilon\)=8\(\times\)10\({}^{-4}\) with \(\times\)30 amplification of the signal amplitude.
Our investigation of dark photon interactions explored a parameter space that complements other dark photon search experiments. We also demonstrate that a small-scale dark matter search detector has some unique advantages for low-mass dark matter in the BDM scenario compared to the much larger neutrino detectors. Although our results are interpreted in the context of a BDM model that elastically scatters off electrons, this search applies to any theory that predicts an excess of electron recoil events of a few MeV, for which the COSINE-100 detector has world-competitive sensitivity. ###### Acknowledgements. We thank Jong-Chul Park and Seodong Shin for insightful discussions. We thank the Korea Hydro and Nuclear Power (KHNP) Company for providing underground laboratory space at Yangyang and the IBS Research Solution Center (RSC) for providing high performance computing resources. This work is supported by: the Institute for Basic Science (IBS) under project code IBS-R016-A1, NRF-2021R1A2C3010989 and NRF-2021R1A2C1013761, Republic of Korea; NSF Grants No. PHY-1913742, DGE-1122492, WIPAC, the Wisconsin Alumni Research Foundation, United States; STFC Grant ST/N000277/1 and ST/K001337/1, United Kingdom; Grant No. 2021/06743-1 and 2022/12002-7 FAPESP, CAPES Finance Code 001, CNPq 131152/2020-3, Brazil.
2305.19323
Turbulent convection as a significant hidden provider of magnetic helicity in solar eruptions
Solar flares and coronal mass ejections, the primary space weather disturbances affecting the entire heliosphere and near-Earth environment, mainly emanate from sunspot regions harbouring high degrees of magnetic twist. However, it is not clear how magnetic helicity, the quantity for measuring the magnetic twist, is supplied to the upper solar atmosphere via the emergence of magnetic flux from the turbulent convection zone. Here, we report state-of-the-art numerical simulations of magnetic flux emergence from the deep convection zone. By controlling the twist of emerging flux, we find that with the support of convective upflow, the untwisted emerging flux can reach the solar surface without collapsing, in contrast to previous theoretical predictions, and eventually create sunspots. Because of the turbulent twisting of magnetic flux, the produced sunspots exhibit rotation and inject magnetic helicity into the upper atmosphere, amounting to a substantial fraction of injected helicity in the twisted cases that is sufficient to produce flare eruptions. This result indicates that the turbulent convection is responsible for supplying a non-negligible amount of magnetic helicity and potentially contributes to solar flares.
Shin Toriumi, Hideyuki Hotta, Kanya Kusano
2023-05-30T18:00:05Z
http://arxiv.org/abs/2305.19323v1
# Turbulent convection as a significant hidden provider of magnetic helicity in solar eruptions ###### Abstract Solar flares and coronal mass ejections, the primary space weather disturbances affecting the entire heliosphere and near-Earth environment, mainly emanate from sunspot regions harbouring high degrees of magnetic twist. However, it is not clear how magnetic helicity, the quantity for measuring the magnetic twist, is supplied to the upper solar atmosphere via the emergence of magnetic flux from the turbulent convection zone. Here, we report state-of-the-art numerical simulations of magnetic flux emergence from the deep convection zone. By controlling the twist of emerging flux, we find that with the support of convective upflow, the untwisted emerging flux can reach the solar surface without collapsing, in contrast to previous theoretical predictions, and eventually create sunspots. Because of the turbulent twisting of magnetic flux, the produced sunspots exhibit rotation and inject magnetic helicity into the upper atmosphere, amounting to a substantial fraction of injected helicity in the twisted cases that is sufficient to produce flare eruptions. This result indicates that the turbulent convection is responsible for supplying a non-negligible amount of magnetic helicity and potentially contributes to solar flares. ## Introduction Solar flares and coronal mass ejections are the primary sources of space weather disturbances driving various plasma processes in the entire interplanetary space, including the near-Earth environment [1, 2, 3]. The strongest events among these eruptions emanate from the solar active regions, in which the rotation and shear motions of strongly magnetised sunspots are often observed [4]. Once a solar flare occurs and a coronal mass ejection is launched, a helical magnetic flux rope is observed in the interplanetary space [5, 6], which is the clearest evidence that solar flares are the sudden release of the excessive magnetic energy accumulated in the helical magnetic structure in the solar corona through magnetic reconnection and plasma instability [7, 8, 9]. Magnetic helicity is a measure to quantify the topology of a magnetic field, such as twists, kinks, and internal linkages: \[H=\int A\cdot B\,dV, \tag{1}\] where \(A\) is the vector potential of a magnetic field \(B\), i.e., \(B=\nabla\times A\). It is well conserved even in resistive magnetohydrodynamic (MHD) processes but gauge-invariant only if the magnetic field is fully contained in a closed volume. Otherwise, the relative magnetic helicity [10, 11], \[H_{\text{R}}=\int\left(A+A_{\text{p}}\right)\cdot\left(B-B_{\text{p}}\right)dV, \tag{2}\] is widely used because it is gauge invariant in any case, and the potential magnetic field \(B_{\text{p}}\) is often adopted as a reference. The total amount of injected helicity, \(H_{\text{R}}\), and the helicity injection rate (helicity flux), \[\frac{dH_{\text{R}}}{dt}=2\int_{S}\left[\left(A_{\text{p}}\cdot B_{\text{h}} \right)V_{z}-\left(A_{\text{p}}\cdot V_{\text{h}}\right)B_{z}\right]\,dS, \tag{3}\] where the subscripts h and \(z\) denote the horizontal and vertical directions, respectively, are widely used in analyses of flare-productive active regions. For instance, observations show that active regions with a larger amount of magnetic helicity tend to produce stronger flares [12, 13, 14] and that the activity level is enhanced in association with the total injected helicity or the temporal variation of the helicity flux [15, 16, 17, 18, 19].
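In discretized form, the surface integral of Eq. (3) reduces to a sum over photospheric pixels. A minimal sketch of this evaluation is given below, assuming the field, velocity, and vector-potential maps are already available as 2-D arrays (the construction of \(A_{\text{p}}\) is a separate step, described in the Methods).

```python
import numpy as np

def helicity_flux(Apx, Apy, Bx, By, Bz, Vx, Vy, Vz, dS):
    """Discrete form of Eq. (3): helicity injection rate for one snapshot.

    All arguments are 2-D photospheric maps; dS is the pixel area.
    """
    ap_dot_bh = Apx * Bx + Apy * By   # (A_p . B_h)
    ap_dot_vh = Apx * Vx + Apy * Vy   # (A_p . V_h)
    return 2.0 * np.sum(ap_dot_bh * Vz - ap_dot_vh * Bz) * dS
```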
Such observations explain why magnetic helicity attracts broad attention from the solar and heliophysics communities in the context of flare prediction and forecasting. Based on a copious amount of observational evidence, it is widely believed that the injection of magnetic helicity into the corona is mainly due to the emergence of twisted magnetic flux from the convection zone and the consequential motions of sunspots, such as rotation and shearing [16, 20, 21, 22, 23, 24, 25, 26, 27]. Therefore, investigating how magnetic helicity is accumulated in the corona as the magnetic flux emerges and builds up active regions is an important factor in understanding the occurrence of solar flares. However, how and to what extent the background convection affects the helicity injection has received little attention, probably because we cannot probe the solar interior using direct optical observations. The detection of flux emergence in the convection zone using helioseismology is a promising technique but still in the development stage [28, 29, 30]. To solve this problem, we perform numerical simulations in which a twist-free magnetic flux tube emerges from the turbulent convection zone and examine whether the helicity injection into the upper atmosphere is negligibly small or comparable to the observations. Previous theoretical studies suggested that an untwisted emerging flux cannot reach the photosphere because it experiences counteracting aerodynamic drag (see [31] and the references therein). Under the ideal condition with no background convection, even if the non-twisted flux tube reaches the photosphere, the helicity injection will be zero (as far as the emerging bipole keeps its geometrical symmetry). In this study, we calculate the case with convection to explore whether helicity injection occurs and, if so, to what extent. We perform our computations using the radiative MHD code _R2D2_ [32] to reproduce the realistic turbulent thermal convection in the Sun. We use the convection model that is in a statistically steady state as the initial condition (\(t=0\)). We do not consider solar rotation in our model; therefore, the net kinetic helicity of the background flow field is negligibly small. The simulations with no solar rotation and thus infinitesimal net kinetic helicity enable investigation of the magnetic helicity injection that is caused purely by turbulence. At \(t=0\), a magnetic flux tube with an axial field strength of 12 kG and a typical radius of 8 Mm is placed at a depth of 22 Mm in the rectangular computational box that covers the entire solar convection zone, with the box spanning over \(0\leq x\leq 98.3\) Mm, \(0\leq y\leq 98.3\) Mm, and \(-201\) Mm \(\leq z\leq 676\) km (bottom panel of Figure 1). The bottom boundary (\(-201\) Mm) is deeper than most previous convective flux emergence simulations, which were at most \(-30\) Mm [33, 34, 35]. Thus, the present model allows us to investigate the effects of large-scale convection on the emergence of magnetic flux and the resultant sunspot formation [36, 37, 38, 39]. In the initial state, a mechanical balance between the flux tube and surroundings is achieved by lowering the entropy inside the tube. Therefore, the magnetic flux starts rising in response to the background velocity field without artificial buoyancy. For the purpose of comparison, we calculate two additional cases where the flux tubes are weakly and strongly twisted (right-handed twist), but the background convection field remains the same.
The twist strengths of the weakly- and strongly-twisted tubes are \(q_{\rm cr}/4\) and \(q_{\rm cr}/2\), respectively, where \(q_{\rm cr}(=0.125\) Mm\({}^{-1})\) is the critical twist for the kink instability [40]; i.e., these tubes are stable against the instability. In all three cases, the total magnetic flux in the axial direction is on the order of \(2\times 10^{22}\) Mx. ## Results ### General evolution Figure 1 shows the vertical magnetic field \(B_{z}\) in the photospheric surface, the emergent intensity and the magnetic field strength and field lines in the 3D space for the three runs, i.e., the non-, weakly- and strongly-twisted cases. In this study, we define the layer where the optical depth is unity (\(\tau=1\)) as the photospheric surface and measure physical quantities there to make direct comparisons with observations. Figure 1 shows that the flux tube is levitated by a convective upflow located in the centre of the computational box in all three cases. Even the non-twisted flux tube successfully reaches the photosphere, in contrast to the previous prediction that a flux tube needs some twisting to maintain its cohesion against aerodynamic drag [31]. From \(t=20\) hr, the positive and negative magnetic elements distributed in the photosphere gradually assemble and build up sunspots with positive and negative polarities. The positive sunspot moves beyond the side periodic boundary and, at around \(t=30\) hr, the two sunspots collide with each other, eventually creating a strongly-packed bipolar sunspot (\(\delta\)-sunspot), which is known to be highly flare-active [41]. This situation agrees with the scenario where a single flux system emerges at multiple locations to build up a colliding bipolar sunspot [42, 36, 43]. The magnetic energy of the flux tube at the initial state is set to be the same for the three cases. However, once the sunspots are established in the photosphere, their decay is more rapid for the weaker twist cases because the local convection cells can more easily intrude into the sunspots and break them into pieces. In the twist-free case, the sunspots disappear by around \(t=60\) hr, i.e., the lifetime in the photosphere is approximately 40 hr. One remarkable feature of the developed sunspots is their continued rotation. In the strong twist case, the two sunspots rotate in the same clockwise direction because the flux tube, endowed initially with right-handed torsion, releases its twist as it appears in the photosphere [44, 45, 36]. Observations show that far more flaring active regions exhibit bipolar sunspots rotating in the same direction than in opposite directions [46]. Of particular note, however, is that the sunspots in the no-twist case also exhibit rotations, with the negative sunspot rotating in a clockwise direction and the positive sunspot in a counterclockwise direction (the square-framed zone in Figure 1). Because this flux tube is not given any twist at the initial state, the observed sunspot rotations are presumed to be driven by the background turbulence beneath the solar surface. ### Magnetic helicity injection and sunspot rotation Figure 2 shows the temporal evolutions of the injected magnetic helicity, \(H_{\rm R}\), and the helicity flux, \(dH_{\rm R}/dt\), as compared with the total unsigned magnetic flux, \[\Phi=\int|B_{z}|\,dS, \tag{4}\] all measured in the photosphere for the three cases. The initial total magnetic flux in the flux tubes in the axial direction is of the order of \(2\times 10^{22}\) Mx.
Thus, if a flux tube emerges bodily to form a pair of sunspots, the total photospheric flux would be \(4\times 10^{22}\) Mx. However, in all cases it is on the order of \(3\times 10^{22}\) Mx, indicating that some fraction of the flux tubes remains below the photosphere. The total fluxes peak around \(t=40\) hr and then gradually decrease as the sunspots decay. The middle panel of Figure 2 illustrates that positive magnetic helicity is injected in the cases of strong (black line) and weak (blue line) twists. This is reasonable considering that these flux tubes were initially given right-handed twists and that the sunspots displayed clockwise rotations. What is noteworthy about this plot is that positive helicity injection also occurs in the non-twisted case (red line). The accumulated helicity amounts to as much as \(2.9\times 10^{43}\) Mx\({}^{2}\), which is about 20% to 50% of those in the twisted cases. This result reveals that a non-negligible amount of magnetic helicity can be injected even when a twist-free flux tube emerges in a convection zone with null net kinetic helicity. It should be noted that the magnetic helicity normalised by the square of the total photospheric flux, \(H_{\rm R}/\max{(\Phi)}^{2}\), for the three cases is 0.029, 0.058 and 0.094 (peak values), which are comparable to the recent observations of flaring active regions [47, 48]. Therefore, the present simulations provide a reasonable reproduction of solar active regions. The bottom panel of Figure 2 shows the evolution of the helicity flux (see Equation (3)). Then, what effect causes a finite positive helicity injection in the non-twisted case? Figure 3 shows the 3D rendering below the sunspot collision area (corresponding to the square-framed zone in Figure 1) and the plots of velocity fields at three different depths. We average the data over the period from \(t=35\) hr to 38 hr, when the helicity flux peaks (bottom panel of Figure 2). Below the positive and negative sunspots, a pair of vertical magnetic fluxes (represented by yellow magnetic field lines) extend downward into the deep convection zone. The velocity plots reveal that the two magnetic pillars (indicated by contours) reside in strong downflowing plumes, toward which the surrounding plasmas stream, leaving local vortices (highlighted by cyan arrows). These structures are created because some portions of the initially horizontal flux tube are dragged into the strong downflow plumes and become vertical magnetic concentrations [35, 37]. At the same time, the plumes also drive the surrounding plasmas to flow into them, accompanied by the vortices. The directions of the vortices agree with those of the sunspot rotations in the photosphere, i.e., clockwise (counterclockwise) for the negative (positive) sunspot. This indicates that the local vortices streaming into the downflow plumes spin the magnetic pillars and drive the sunspot rotations in the photosphere. If the sunspot rotation were due to the release of magnetic twist, the two magnetic pillars (sunspots in the photosphere) should rotate in the same direction [44, 45]. However, this is not the case. Thus, even if the flux tube is initially endowed with no magnetic twist, it is possible that the surrounding turbulent flow, in which the local vortices reside, exerts a spinning effect on the flux tube and, accordingly, sunspot rotation occurs in the photosphere, leading to a significant injection of magnetic helicity into the upper atmosphere.
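Returning to the normalised helicity quoted above, a quick consistency check of the numbers can be made: taking \(\max(\Phi)\approx 3.2\times 10^{22}\) Mx (within the stated order of \(3\times 10^{22}\) Mx, and our own assumption for the peak value) gives \[\frac{H_{\rm R}}{\max(\Phi)^{2}}\approx\frac{2.9\times 10^{43}\ {\rm Mx}^{2}}{\left(3.2\times 10^{22}\ {\rm Mx}\right)^{2}}\approx 0.028,\] in line with the quoted peak value of 0.029 for the non-twisted case.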
### Flare productivity One of the most promising methodologies suggested for predicting flare eruptions is to analyse the spatial distribution of high free-energy regions (HiFERs), where the non-potential magnetic field, \(B_{\rm np}=|B-B_{\rm p}|\), exceeds the critical value of 1 kG, and examine the occurrence of the double-arc instability (DAI [50]) for each HiFER patch near the polarity inversion line [49]. The top and middle panels of Figure 4 show the distributions of \(B_{\rm np}\) and the magnetic twist flux density \(\tau_{\rm twist}\) at representative times for the non-twisted and strongly-twisted cases. The bottom panels show the flare phase diagram, which is the scatterplot of the minimum critical radius of the circular reconnection region required to satisfy the DAI condition, \(r_{\rm c}\), versus the magnetic energy that can be released, \(E_{\rm r}\), for the identified HiFERs. Observations show that large X-class flares likely occur when there is a HiFER with \(r_{\rm c}<1\) Mm and \(E_{\rm r}>4\times 10^{31}\) erg on the flare phase diagram [49]. In the strongly-twisted case, the HiFER patches (\(B_{\rm np}>1\) kG) are coherent and the positive twist is distributed along the polarity inversion line between the two major sunspots, which is in agreement with the analysis results in the previous subsections. In the flare phase diagram, several locations are very close to the X-flare criticality. Therefore, we can predict that M-class flares are possible, with a slight chance of X flares, for the strong twist case. In the non-twisted case, however, although there are regions where \(B_{\rm np}\) exceeds 2 kG, the HiFERs are more fragmented. The twists around the polarity inversion lines are a mixture of positive and negative signs, suggesting that when a twist-less magnetic flux emerges, twists of both signs can be generated, probably randomly, because of the nature of turbulent convection. No HiFERs satisfy the X-flare threshold in the flare phase diagram, but lower-class (e.g. C-class) flares may still occur. ## Discussion We investigate the physical mechanisms that provide magnetic helicity to the solar corona, which is critically important in understanding the occurrence of flare eruptions, by comparing flux emergence simulations with and without magnetic twists under the existence of turbulent thermal convection. It is widely believed that the emergence of twisted flux tubes and the associated sunspot rotations supply helicity into the active region corona. However, this is not trivial because of the lack of optical observations below the photosphere, which is the primary motivation of this study. We summarise the key results as follows. 1. A non-twisted flux tube can reach the photosphere and develop an active region. This is contrary to previous simulation results without thermal convection, in which a rising untwisted flux tube splits into two vortex rolls during the emergence and quickly loses its identity because of the counteracting aerodynamic drag [31]. With the support of convection, a non-twisted flux tube may complete its journey to the solar surface. 2. The injected magnetic helicity is non-zero even in the case of no magnetic twist. In fact, the amount of injected helicity was quite large, reaching up to about 50% of that in the twisted cases.
Our analysis reveals that in the convection zone below the developed sunspots, inflows into the strong downwelling plumes produce local vortex motions, which spin the vertically standing magnetic flux tubes that extend along the plumes. As a result, the sunspots in the photosphere show rotations in the directions determined by the subsurface vortices, and a considerable amount of helicity injection into the upper atmosphere occurs. 3. The instability analysis suggests that the non-potential magnetic field generated in the non-twisted case can produce eruptions, albeit weaker than in the twisted cases. This indicates that the "twist" observed in a variety of forms in the flare-productive active regions of the Sun is not only supplied by the magnetic "twist" of the emerging flux and the associated sunspot rotations, as previously believed, but also includes a non-negligible fraction of "twist" brought by the background turbulent convection. In the present simulations, although the net kinetic helicity of the initial background convection was infinitesimally small, the resultant helicity injection in the photosphere is comparable to that provided by the emergence of twisted flux tubes. Considering the nature of the turbulence, where we do not consider the solar rotation, the directions of the rotations of the local vortices that contributed to the spinning of the flux tubes may well be determined by chance. Because the downflow plumes accompanied by the vortices are coherent structures, owing to the long time scales in the deeper convection zone, the sign of helicity injection may remain the same (positive in the present runs) over time even though the directions of the vortices are randomly determined. This indicates that the sign of helicity injection in simulations with different background turbulence can be negative, which will be examined in future studies. Asymmetry in sunspot rotation is reported in bipolar active regions [51, 46], and this could be due to the difference in vortex motions in the convection zone. Also, a considerable fraction of solar active regions do not obey the hemispheric helicity sign rule, with the fraction of violators being 20-40% [52, 53]. This randomness may be the result of the stochastic imposition of magnetic helicity by the background turbulence on the twist of the emerging flux that is determined by the solar dynamo [54]. ## Methods In this study, we use the radiative MHD code _R2D2_ [32], which realistically simulates the thermal convection over the entire solar convection zone, to model the emergence of magnetic flux tubes and the spontaneous sunspot formation [36, 37, 38, 39]. This code computes the 3D MHD equations with realistic radiation transfer and an equation of state, and, by implementing the RSST [55, 56, 57, 58], it effectively suppresses the high sound speed in the deep solar interior and relaxes the Courant-Friedrichs-Lewy condition. We use a rectangular Cartesian box as the computational domain, spanning \(0\leq x\leq 98.3\) Mm, \(0\leq y\leq 98.3\) Mm, and \(-201\) Mm \(\leq z\leq 676\) km, resolved by a \(1024\times 1024\times 384\) grid. The grid spacing in the horizontal directions was \(\Delta x=\Delta y=96\) km (uniform), while that in the vertical direction was uniform at \(\Delta z=48\) km from the top boundary down to \(z=-5.6\) Mm and then increased linearly toward the bottom boundary, reaching \(\Delta z=1486\) km.
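The stated spacings can be cross-checked with a short construction; the exact stretching law used in _R2D2_ is not given here, so the linear ramp below is our own assumption, but the cell counts it implies reproduce the quoted 384 vertical grid points.

```python
import numpy as np

# Schematic vertical grid implied by the stated spacings: uniform 48 km
# cells from z = +676 km down to z = -5.6 Mm, then spacing growing
# linearly toward the bottom boundary near z = -201 Mm.
dz_top, dz_bot = 48.0, 1486.0                  # km
n_uniform = int((676.0 + 5600.0) / dz_top)     # 130 cells in the uniform zone
span_stretch = 201000.0 - 5600.0               # km covered by the stretched zone
# A linear ramp has mean spacing (dz_top + dz_bot)/2 = 767 km
n_stretch = int(span_stretch / (0.5 * (dz_top + dz_bot)))   # 254 cells
dz = np.concatenate([np.full(n_uniform, dz_top),
                     np.linspace(dz_top, dz_bot, n_stretch)])
z = 676.0 - np.cumsum(dz)                      # cell positions in km
print(len(z), z[-1])   # 384 points; bottom at about -2.0e5 km (~ -201 Mm)
```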
The horizontal boundaries assumed the periodic boundary condition, while the magnetic field at the top boundary was connected to the potential field. The initial background convection was the same as that in [38], in which a magnetic flux tube emerges to successfully produce a bipolar sunspot group. The net kinetic helicity is negligibly small because the calculation box did not consider the solar rotation. The normalised net kinetic helicity at \(t=0\), when the flux tube was embedded, was \[\frac{\int_{z<0}V\cdot\left(\nabla\times V\right)dV}{\int_{z<0} \left|V\cdot\left(\nabla\times V\right)\right|dV}=0.00523\%. \tag{5}\] The magnetic flux tube was embedded at a depth of 22 Mm. Unlike the previous _R2D2_ simulations, which used force-free magnetic flux tubes, we adopt the commonly used Gaussian-type flux tube, whose longitudinal and azimuthal components of the magnetic field are given by \[B_{x}(r)=B_{\rm tb}\exp\left(-\frac{r^{2}}{R_{\rm tb}^{2}}\right) \tag{6}\] and \[B_{\phi}(r)=qrB_{x}(r), \tag{7}\] where \(B_{\rm tb}\), \(r\), \(R_{\rm tb}\) and \(q\) are the axial field strength, the radial distance from the tube axis, the typical radius and the twist intensity, respectively. We calculate three cases with varying \(q\) (Table 1). Here, \(B_{\rm tb}\) was also varied so that the magnetic energies of the flux tubes, \(E_{\rm mag}(=\int B^{2}/(8\pi)\,dV)\), are the same in all three cases. The total axial magnetic flux \(\Phi_{x}(=\int B_{x}\,dS)\) is of the order of \(2\times 10^{22}\) Mx. To achieve mechanical balance with no magnetic buoyancy, we adjust the internal pressure and density by reducing the entropy by \(\frac{B^{2}}{8\pi}/\left(\frac{\partial p}{\partial s}\right)_{\rho}\). Therefore, the flux tubes start to emerge solely because of the advection exerted by the background turbulence. In all three cases, we set \(q\) to be 0 or below the critical value for the kink instability, \(q_{\rm cr}(=1/R_{\rm tb})\) [40], where \(q>0\) indicates that the tube has a right-handed twist. We measure several physical quantities, including the relative magnetic helicity (\(H_{\rm R}\)), the helicity flux (\(dH_{\rm R}/dt\)) and the total unsigned magnetic flux (\(\Phi\)), at the photospheric surface, here defined as where the optical depth is unity (\(\tau=1\)). For measuring the helicity, there is a degree of freedom in choosing the vector potential \(A_{\rm p}\) and, in this study, we select vector potentials satisfying \(A_{\rm p}\cdot\hat{z}=0\) and calculate it using the method of [59], \[A_{\rm p}=\frac{1}{2\pi}\hat{z}\times\int_{S^{\prime}}B_{z}(x^{\prime})\frac{r }{r^{2}}\,dS^{\prime}, \tag{8}\] where \(r=x-x^{\prime}\). In this process, the grid number was reduced from \(1024\times 1024\) to \(256\times 256\) to accelerate the computation. The helicity flux was then calculated using Equation (3), and the injected magnetic helicity over the course of flux emergence (Equation (2)) was obtained by integrating the helicity flux over time.
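For concreteness, the tube initialisation of Eqs. (6)-(7) can be sketched as follows; the projection of \(B_{\phi}\) onto the grid axes and the tube position are our own bookkeeping, and the snippet is illustrative rather than the actual _R2D2_ setup code. The parameter values follow Table 1.

```python
import numpy as np

def gaussian_flux_tube(y, z, y0, z0, B_tb, R_tb=8.0, q=0.0):
    """Field of the Gaussian tube in Eqs. (6)-(7), axis along x.

    y, z : 2-D coordinate grids transverse to the tube axis (Mm)
    B_tb : axial field strength (G); R_tb : typical radius (Mm)
    q    : twist intensity (Mm^-1); q = 0 gives the non-twisted case
    """
    r = np.hypot(y - y0, z - z0)
    Bx = B_tb * np.exp(-(r / R_tb) ** 2)
    Bphi = q * r * Bx
    phi = np.arctan2(z - z0, y - y0)   # azimuth around the tube axis
    # phi-hat = (-sin(phi), cos(phi)); q > 0 gives a right-handed twist
    return Bx, -Bphi * np.sin(phi), Bphi * np.cos(phi)

# Strongly-twisted case: q = q_cr/2 = 0.0625 Mm^-1 and B_tb = 11.5 kG
yy, zz = np.meshgrid(np.linspace(0, 98.3, 256), np.linspace(-44, 0, 128))
Bx, By, Bz = gaussian_flux_tube(yy, zz, y0=49.15, z0=-22.0,
                                B_tb=11.5e3, q=0.0625)
```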
To investigate the flare productivity of the generated active regions, the instability analysis is performed based on the DAI theory, which states that a flux rope becomes unstable if it is sufficiently twisted and has a sufficient amount of magnetic flux against the overlying confinement field [50]. This instability is characterised by the parameter \[\kappa=\left|\frac{\int_{\rm rec}T_{\rm W}\,d\Phi}{\Phi_{\rm over}}\right|, \tag{9}\] or equivalently, \[\kappa=\left|\frac{\int_{S_{\rm rec}}\tau_{\rm twist}\,dS}{\Phi_{\rm over}} \right|, \tag{10}\] where \(T_{\rm W}\) is the amount of twist integrated over each magnetic field line in a flux rope, \(S_{\rm rec}\) is the footpoint area of the magnetic field lines that reconnect to form the flux rope, \(\Phi_{\rm over}\) is the magnetic flux of the overlying field, and \(\tau_{\rm twist}=T_{\rm W}|B_{z}|\). In DAI, \(\kappa\) is usually larger for larger \(S_{\rm rec}\); therefore, there is a critical value \(S_{\rm c}\) at which the instability occurs: \(\kappa>\kappa_{0}\sim 0.1\). It is shown that the ratio of the magnetic helicity of the current-carrying magnetic field (\(|H_{\rm j}|\)) to the total relative helicity (\(|H_{\rm R}|\)) well discriminates whether a flare event becomes eruptive or not [60]. While the \(\kappa\) parameter is the critical parameter which determines the onset of DAI (i.e., whether a flare occurs), the helicity ratio \(|H_{\rm j}|/|H_{\rm R}|\) may be capable of distinguishing the eruptivity when the flare occurs.
\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Case & \(R_{\rm tb}\) & \(q\) & \(B_{\rm tb}\) & \(E_{\rm mag}\) & \(\Phi_{x}\) \\ \hline Non-twisted & 8 Mm & 0 & 12.2 kG & \(5.9\times 10^{34}\) erg & \(2.4\times 10^{22}\) Mx \\ \hline Weakly-twisted & 8 Mm & \(q_{\rm cr}/4\) & 12.1 kG & \(5.9\times 10^{34}\) erg & \(2.4\times 10^{22}\) Mx \\ \hline Strongly-twisted & 8 Mm & \(q_{\rm cr}/2\) & 11.5 kG & \(5.9\times 10^{34}\) erg & \(2.3\times 10^{22}\) Mx \\ \hline \end{tabular} \end{table} Table 1: Summary of the simulation cases. The critical twist for the kink instability corresponds to \(q_{\rm cr}=1/R_{\rm tb}=0.125\) Mm\({}^{-1}\).
In the \(\kappa\)-scheme [49], the coronal magnetic field is first calculated by the non-linear force-free field extrapolation based on the vector magnetic field of the photosphere. To limit the computational cost, the grid number was reduced from \(1024\times 1024\) to \(512\times 256\) in this process. Then, HiFERs are identified as the regions where the non-potential field \(B_{\rm np}(=|B-B_{\rm p}|)\) exceeds the threshold value of 1 kG, which is based on the observational result that \(B_{\rm np}=1\) kG sufficiently encompasses the distribution of non-potential magnetic fields that drive large flares [49]. For each HiFER, the critical area \(S_{\rm c}\) is measured as the minimum circular area that satisfies the DAI condition (\(\kappa>\kappa_{0}\)), and \(r_{\rm c}\) is obtained as the radius of \(S_{\rm c}\), i.e., \(S_{\rm c}=\pi r_{\rm c}^{2}\). The releasable energy for each HiFER is estimated as \[E_{\rm r}=\frac{S_{\rm r}^{1/2}}{8\pi}\int_{S_{\rm r}}B_{\rm np}^{2}\,dS, \tag{11}\] where \(S_{\rm r}\) is the area of the footpoints of the magnetic flux that passes over the circular area \(S_{\rm c}\). ## Data Availability The data are available from the corresponding author upon reasonable request.
2309.01861
FlexRDZ: Autonomous Mobility Management for Radio Dynamic Zones
FlexRDZ is an online, autonomous manager for radio dynamic zones (RDZ) that seeks to enable the safe operation of RDZs through real-time control of deployed test transmitters. FlexRDZ leverages Hierarchical Task Networks and digital twin modeling to plan and resolve RDZ violations in near real-time. We prototype FlexRDZ with GTPyhop and the Terrain Integrated Rough Earth Model (TIREM). We deploy and evaluate FlexRDZ within a simulated version of the Salt Lake City POWDER testbed, a potential urban RDZ environment. Our simulations show that FlexRDZ enables up to a 20 dBm reduction in mobile interference and a significant reduction in the total power of leaked transmissions while preserving the overall communication capabilities and uptime of test transmitters. To our knowledge, FlexRDZ is the first autonomous system for RDZ management.
Aashish Gottipati, Jacobus Van der Merwe
2023-09-04T23:35:54Z
http://arxiv.org/abs/2309.01861v1
# FlexRDZ: Autonomous Mobility Management for Radio Dynamic Zones ###### Abstract FlexRDZ is an online, autonomous manager for radio dynamic zones (RDZ) that seeks to enable the safe operation of RDZs through real-time control of deployed test transmitters. FlexRDZ leverages Hierarchical Task Networks and digital twin modeling to plan and resolve RDZ violations in near real-time. We prototype FlexRDZ with GTPyhop and the Terrain Integrated Rough Earth Model (TIREM). We deploy and evaluate FlexRDZ within a simulated version of the Salt Lake City POWDER testbed, a potential urban RDZ environment. Our simulations show that FlexRDZ enables up to a 20 dBm reduction in mobile interference and a significant reduction in the total power of leaked transmissions while preserving the overall communication capabilities and uptime of test transmitters. To our knowledge, FlexRDZ is the first autonomous system for RDZ management. Radio Dynamic Zone, Mobility Management, Network Control, AI Planning ## I Introduction Radio technology has progressed tremendously in recent years, as seen by the proliferation of software-defined radios, 5G networks, and the advent of terahertz technology. To continue to drive innovation, researchers require access to adequate radio test facilities, enabling them to develop, benchmark, and validate their test transmitters without worrying about potential impacts on nearby radio infrastructure. Accordingly, the notion of developing a universal radio test facility led to the proposed concept of a National Radio Dynamic Zone (NRDZ) [1]. Radio Dynamic Zones (RDZ) are spectrally isolated from the outside world, enabling the deployment of new test transmitters. These zones prevent internal transmissions from escaping, freeing test operators from worrying about interference with nearby infrastructure. While an RDZ seeks to coexist with existing infrastructure, it also seeks to enable users to deploy and test new transmitter technology. In many cases, these test transmitters may be inimical towards nearby infrastructure, motivating the RDZ operator to exercise oversight over test transmitters. While operators may be able to exercise oversight over stationary test transmitters due to their static position, mobile transmitters further tax an RDZ operator's ability to supervise the RDZ properly. For example, many spectrum allocation techniques assume that test transmitters are stationary, failing to model mobile entities accurately [2, 3]; however, in an RDZ, test transmitters may or may not be stationary (e.g., naval and air radar systems). In these cases, utilizing traditional techniques may not be sufficient to prevent test transmitters from impacting nearby infrastructure. We, therefore, choose to explore RDZ mobility based on the increase in management complexity and the assumption that many future test transmitters will be mobile [4]. To simplify our model of transmitter mobility, like others [5], we assume a complete control framework, meaning that all transmitters cede control to a management entity, enabling a higher level of flexibility within the RDZ (see Figure 1). For example, the area in which a mobile test transmitter can operate becomes more flexible through techniques such as spectrum sharing and real-time transmit power adjustments, since leakage and interference can be handled dynamically. To realize a more flexible RDZ, we propose FlexRDZ.
Fig. 1: Mobile RDZ Deployment.
FlexRDZ is a closed-loop, autonomous RDZ manager that seeks to enable the safe operation of an RDZ through real-time control of deployed test transmitters. FlexRDZ is primarily defined by its Hierarchical Task Network (HTN), a graph-based planning approach, and its RDZ digital twin. These two components enable FlexRDZ to dynamically model the RDZ environment and generate a plan to maintain the "health" of the RDZ in the event a compromise occurs, e.g., internal transmissions are detected beyond the RDZ boundary. FlexRDZ utilizes its programmable control framework to execute its generated plan in order to preserve the health of the RDZ. To validate our implementation of FlexRDZ, we deploy and evaluate FlexRDZ within a simulated version of the Salt Lake City POWDER testbed [6], a potential urban RDZ environment. Our simulations show that FlexRDZ enables up to a \(20\) dBm reduction in mobile interference and a significant reduction in the total power of leaked transmissions while preserving the overall communication capabilities and uptime of test transmitters. To realize an autonomous RDZ management system, we make the following contributions: \(\bullet\) The design of FlexRDZ, a near real-time, autonomous RDZ manager that aims to maintain an RDZ in an online fashion. \(\bullet\) A prototype implementation of FlexRDZ that reduces RDZ leakage and preserves transmitter communication. \(\bullet\) Quantitative results that demonstrate the benefits of FlexRDZ by highlighting its performance and use cases. The remainder of this paper is laid out as follows. Section II discusses key background information related to planning. Section III surveys related work in the space of dynamic spectrum environments. Section IV discusses the design and implementation decisions pertaining to FlexRDZ. Section V highlights FlexRDZ's capability within the context of autonomous RDZ management. Lastly, section VI comments on the limitations of our approach, proposes future areas of research, and provides concluding remarks. ## II Background AI planning refers to the problem of finding a set of actions that, if executed, will transition the environment from an initial state to a goal state. More formally, given a set of states \(\mathcal{S}\) and actions \(\mathcal{A}\), we require a mapping between states and actions \(\pi:\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) where the actions push the agent in the direction of the goal state. Furthermore, the transition model defines the probability of taking an action given the current state: \(\pi(a,s)=Pr(a_{t}=a|s_{t}=s)\). We call the combination of the mapping and transition model the policy, i.e., a distribution of actions conditioned on the current environment state. This problem is canonically known as a Markov decision process (MDP). Modern approaches to AI planning utilize a mix of deep reinforcement learning (RL), statistical techniques, and integer programming [7], with the former becoming the dominant planning approach given its application in robot planning and manipulation [8]. At a high level, RL seeks to learn a policy for acting in an environment where the environment is represented by an MDP. In contrast to other deep learning disciplines, RL provides an agent with a reward function, which aims to guide the agent toward favorable states in the state space. Upon taking an action, the agent receives a reward according to a predefined reward function, which provides a metric for gauging the performance of an action.
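As a toy illustration of this formalism, the sketch below samples actions from a stochastic policy \(\pi(a|s)\) and accumulates reward, down-weighting later rewards by a discount factor (discussed next); the environment callbacks are placeholders of our own, not part of any planning library.

```python
import random

def rollout(policy, step, s0, gamma=0.99, horizon=100):
    """Sample one trajectory and return its discounted reward.

    policy(s) -> {action: probability}; step(s, a) -> (next_state, reward)
    """
    s, ret, discount = s0, 0.0, 1.0
    for _ in range(horizon):
        probs = policy(s)
        a = random.choices(list(probs), weights=list(probs.values()))[0]
        s, r = step(s, a)
        ret += discount * r   # a reward t steps ahead is scaled by gamma**t
        discount *= gamma
    return ret
```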
Long-term rewards are penalized via a discount rate, which prioritizes events in the immediate future over the distant future. In essence, the agent seeks to learn a policy by maximizing the returned reward, which, in turn, guides the agent toward favorable states. However, there are two issues with this formulation. One, while deep learning-based techniques perform well in practice, many of these techniques suffer from a lack of explainability [9]; hence, robust safeguards are required when deploying in environments with risky tail-end events. Two, in the real world, state observations are noisy, i.e., states are not fully observable. In this case, the problem becomes a partially observable MDP (POMDP), a well-studied problem in robotics [10], but one that requires more assumptions for safe deployments. In the context of RDZs, transparency and safety are crucial since an incorrect plan could broadly impact surrounding telecommunications services [5]. Given this, we focus on a classical planning approach known as HTNs [11]. HTNs are dependency graphs that are constructed with domain knowledge, meaning that the policy \(\pi\) is hand-crafted rather than learned. In contrast to other modern methods, HTNs are human-interpretable, offering more transparency into why certain decisions or plans were generated [12]. HTNs follow a hierarchical structure where the root node corresponds to the initial state or the start task. The starting task is then recursively broken down into subtasks until an atomic action can be executed (see Figure 2).
Fig. 2: FlexRDZ's AI Planner.
The root node connects to its descendants via edges, which represent dependencies. To transition from the initial state (root node) to the goal state, the root's dependencies must be satisfied by carrying out the corresponding atomic actions. When called upon, an HTN generates a sequence of atomic actions to satisfy an overarching goal. We seek to leverage HTNs to dynamically generate plans to mitigate RDZ violations and autonomously maintain the RDZ environment. ## III Related Work ### _Citizens Broadband Radio Service_ **Interference Management.** To mitigate interference within CBRS, the Wireless Innovation Forum (WINNFORM) suggests three different graph-based approaches to spectrum allocation [13, 14, 15]. Approaches one and two utilize a graph coloring algorithm to allocate non-overlapping spectrum to nearby transmitters, while approach three employs a recursive clustering algorithm. Gao et al. evaluate all three approaches using a suite of propagation models and GIS map data of Virginia Beach and San Diego [2, 3], showing that all three approaches are indeed effective at reducing potential interference. In contrast to WINNFORM, Abbass et al. explore the application of Q-learning for spectrum allocation in CBRS [16]. Specifically, the authors investigate opening up idle priority access license (PAL) channels to general authorized access (GAA) users. Numerical results demonstrate improvements in spectrum utilization and data rate per unit cost; however, real-world evaluations are necessary. However, while the WINNFORM approaches excel at mitigating interference, there are many drawbacks in practice. For example, since these approaches utilize the overlap in deployment area as an estimate of the interference between transmitters, the estimates may underestimate or overestimate the actual interference depending on the deployed environment [17].
In addition, graph coloring is an NP-complete problem, and, as such, most algorithms are subject to non-polynomial growth, meaning that, with dense transmitter deployments, the computation layer becomes the bottleneck [17]. Lastly, these interference management techniques yield their highest performance when transmit nodes are stationary, which may not hold in an RDZ environment where test transmitters may be mobile, e.g., drones. Thus, to better model dynamic RDZ environments, we supplement our approximations with simulations, leading to the addition of FlexRDZ's digital twin. We note that we are not solving the interference management problem within CBRS; instead, we build upon techniques used in CBRS and generalize them to RDZs. **Mobility Management.** The most relevant work to handling mobility within an RDZ-like environment is related to detecting naval incumbents in CBRS. To detect naval incumbents, the National Telecommunications and Information Administration (NTIA) proposed the use of an environmental sensing capability (ESC) network [18]. An ESC network comprises multiple sensors employed to detect an incumbent's presence and trigger protective measures upon detection. Nguyen et al. formulated the ESC deployment problem as a set cover problem to compute the minimum number of sensors to cover an area of interest while minimizing the overlapping area between sensors, since overlapping areas may lead to false positives [19]. As opposed to preemptively turning off nearby equipment upon detecting an incumbent, Kang et al. offer a different approach: using a management entity to oversee interactions between transmitters [20]. Instead of proactively pausing communications in inflection areas (e.g., where naval incumbents are detected), Kang suggests employing more dynamic techniques such as spectrum sharing and virtualization. In our formulation of an RDZ, we implicitly cover both of these cases; however, we emphasize that we are not bound by the same assumptions as CBRS. ### _Radio Dynamic Zones_ We now cover related work on RDZs. Maeng et al. propose a spectrum monitoring approach for out-of-zone signal leakage detection and explore spatial correlation-based estimation techniques for signal prediction [21]. The authors constrain their RDZ formulation geographically and assume sensor nodes that cover and monitor the boundary of the RDZ. Through simulation, the authors demonstrate that their spatial correlation-based algorithm enables a larger RDZ radius with sparsely deployed sensor nodes in comparison to propagation-loss techniques. In addition, Maeng et al. present an RDZ concept that relies on both autonomous aerial and ground sensor nodes for radio environment monitoring, enabling real-time radio environment maps of all relevant frequencies and locations [22]. Lastly, [5] discusses key challenges for real RDZ deployments and details a Zone Management Engine for RDZ supervision. Specifically, the Zone Management Engine is composed of a decision engine and three subsystems: spectrum, experiment, and policy management systems. In concert, these three subsystems provide the decision engine with experiment information, spectrum policy rules, user information, and resource allocations to enable dynamic real-time coordination of systems within the zone coupled with user interference protection. In other words, FlexRDZ is a realized prototype of the previously proposed Zone Management Engine [5]. We diverge from previous work in fundamental ways. 
First, we primarily focus on the problem of streamlining RDZ maintenance through dynamic, flexible planning practices rather than spectrum monitoring. Second, we assume infrastructure for spectrum monitoring and leverage real-time environment maps to estimate and plan for future transmitter behavior. Lastly, we approach RDZ maintenance from a systems perspective, prototyping and evaluating a system for real-time RDZ maintenance and control.

## IV Design and Prototype

### _Overview_

FlexRDZ is an operational tool that seeks to mitigate RDZ violations through swift resolution and to maintain the environment in an autonomous and online fashion. RDZ violations encompass situations that compromise the state of the RDZ (e.g., transmissions are detected beyond the environment boundary or the RDZ fails to accommodate local infrastructure) and cases where user test objectives are undermined.

Fig. 3: FlexRDZ Architecture.

As seen in Figure 3, FlexRDZ receives management parameters via its northbound interface. These management parameters encompass RDZ maintenance variables (e.g., boundary parameters, leakage thresholds, and interference thresholds) and user test objectives (e.g., reserved areas and frequencies). When these parameters fail to hold, FlexRDZ utilizes its planner to generate a plan and resolve the situation dynamically. Lastly, FlexRDZ functions in an online, autonomous fashion. These characteristics are a byproduct of FlexRDZ's southbound interface, which routinely pulls the state of the RDZ from the environment and enables direct control over test transmitters. As illustrated in [22], real-time RDZ state can be monitored via ground-based and aerial sensing infrastructure. By retrieving real-time updates, FlexRDZ can, in near real-time, generate solutions and resolve environmental issues with its planner. The combination of FlexRDZ's southbound and northbound interfaces enables it to function in a closed-loop, online manner. The architecture of FlexRDZ is in line with previously envisioned RDZ supervisors, e.g., a Zone Management Engine [5].

### _Intelligent Control_

While planning may appear straightforward, RDZ environments quickly become too convoluted to manage exhaustively: the sheer number of observable environment states over time makes it infeasible to iterate over all potential control solutions. Although this problem is challenging, a great deal of related work has sought to overcome similar complexities in adjacent problems by efficiently searching over the planning space through the use of symbolic methods, artificial intelligence (AI), and various optimization techniques [7]. Like other planning techniques, symbolic methods such as HTNs are utilized to search over the planning space efficiently [11]. We leverage HTNs for their increased decision transparency and reliability, two characteristics essential for maintaining an urban RDZ. We note that the design of FlexRDZ does not preclude the use of other planning techniques.

### _Digital Twin Modeling_

In combination with GIS map data and transmitter parameters, FlexRDZ leverages an internal radio-frequency (RF) model to "digitize" the RDZ environment and generate a radio-environment map in real time. FlexRDZ's RF model serves to model the RF interactions among transmitters, estimate the coverage of a transmitter, and derive RDZ-specific key performance indicators.
Note that the design of FlexRDZ does not rely on any one modeling technique and can generalize to various methods, such as path-loss models or AI-based approaches. By encoding the environment, FlexRDZ can estimate potential states of the RDZ by simulating future actions through its RF model, enabling more robust control updates.

### _Implementation_

To prototype FlexRDZ, we utilized the Terrain Integrated Rough Earth Model (TIREM) propagation model [23]. TIREM is a set of physics-based algorithms used to estimate coverage for mobile land radios and point-to-point links. We opted for TIREM as it is the standard propagation model used by the United States government. However, through preliminary evaluations, we observed that modeling the entire RDZ environment via TIREM can be extremely costly. In our case, we observed estimation times of approximately \(30\) seconds for large areas, e.g., the downtown Salt Lake City area, preventing real-time control. Thus, similar to the idea proposed in [24], to alleviate this bottleneck, we trained a neural network to approximate the RF maps produced by TIREM, cutting inference time to approximately \(70\) ms and enabling real-time control. We opted for a fusion-based network for its ability to learn more intricate feature representations [25]. As input, the RF model accepts GIS map data corresponding to the RDZ deployment area and the transmit parameters of the mobile transmitter. The GIS map data is processed through a series of ResNet blocks [26], while transmit parameters are encoded via fully-connected layers. The two embeddings are concatenated and used as input to another fully-connected layer to generate a fusion embedding. From this embedding, the model learns a decoding function to output the predicted RF map. To train our model, we first generated \(10000\) training instances. A training instance consisted of the GIS map data, the mobile transmit parameters, and the ground truth TIREM RF map of the downtown Salt Lake City area. After generating the dataset, we randomly split it into train, validation, and test splits following an \(80/10/10\) ratio. We empirically chose a fusion embedding size of \(2048\) as it resulted in fast training times and an overall storage footprint of approximately \(0.8\) Mb. Furthermore, we trained our model for \(150\) epochs via Adam with an initial learning rate of \(0.001\), using mean-squared error (MSE) as our loss function, until we observed model convergence on our validation set. We verified that our model generates reasonable RF maps, producing an average empirical error of \(-0.045\) dBm per cell (see Figure 4). We emphasize that our learned model is unlikely to generalize to new settings as our training data is drawn from the Salt Lake City area. We leave learning a general model as an area for future work.

Fig. 4: Simulated RF Map vs. Generated RF Map. The left figure represents the path-loss contour output of our simulation tool while the right figure represents the path-loss contour output of our model.

Furthermore, to realize FlexRDZ's HTN planning component, we leveraged the open-source HTN planner known as GTPyhop [27]. GTPyhop provides a basic framework for constructing HTNs, enabling users to specify objectives, sub-tasks, and action primitives. GTPyhop utilizes a modified depth-first search to search over the planning space efficiently. Nau et al. provide an in-depth analysis of the GTPyhop planning algorithm [27].
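To make this concrete, the sketch below shows how a violation-resolution task might be declared in GTPyhop's style, using the atomic primitives described in the next paragraph. The state fields (`enabled`, `leaking`, `freq`), method names, and triggering conditions are illustrative assumptions, not FlexRDZ's actual domain.

```python
import gtpyhop

# Illustrative sketch only: state fields and method names are assumptions.
domain = gtpyhop.Domain('rdz_sketch')

def idle(state):
    return state  # No-op primitive.

def disable_transmitter(state, tx):
    state.enabled[tx] = False
    return state

def enable_transmitter(state, tx):
    state.enabled[tx] = True
    return state

def reassign_frequency(state, tx):
    # Round-robin over a fixed frequency pool.
    pool = state.frequencies
    state.freq[tx] = pool[(pool.index(state.freq[tx]) + 1) % len(pool)]
    return state

gtpyhop.declare_actions(idle, disable_transmitter, enable_transmitter,
                        reassign_frequency)

def resolve_by_disabling(state, tx):
    # Applicable when the transmitter is leaking past the boundary.
    return [('disable_transmitter', tx)] if state.leaking[tx] else False

def resolve_by_retuning(state, tx):
    # Applicable when the violation is interference rather than leakage.
    return [('reassign_frequency', tx)] if not state.leaking[tx] else False

gtpyhop.declare_task_methods('resolve_violation',
                             resolve_by_disabling, resolve_by_retuning)

s0 = gtpyhop.State('s0')
s0.enabled = {'tx1': True}
s0.leaking = {'tx1': True}
s0.frequencies = [3600, 3610, 3620]
s0.freq = {'tx1': 3600}

plan = gtpyhop.find_plan(s0, [('resolve_violation', 'tx1')])
print(plan)  # e.g., [('disable_transmitter', 'tx1')]
```

The methods encode the hand-crafted policy: the planner's depth-first search tries each method in order until one produces an executable sequence of atomic actions.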
To foster generalizability, we constructed FlexRDZ's HTN planner (see Figure 2) from a small set of atomic primitives that are routinely utilized to manage dynamic spectrum environments: idle, disable transmitter, enable transmitter, and round-robin frequency reassignment. Like other works [28], we intertwined the digital twin with the HTN to produce a more robust management policy. For example, suppose that a transmitter has been disabled due to leaked signals. To re-enable the transmitter, FlexRDZ must ensure that the newly enabled transmitter does not violate the terms of the RDZ. By leveraging its digital twin, FlexRDZ can simulate the impact of its future decisions; hence, the transmitter is only re-enabled when its digital counterpart does not leak, which leads to a more compliance-oriented policy.

## V Evaluation

### _Setup_

We utilized our internal RDZ simulation tool to simulate an RDZ deployed within the POWDER testbed. We opted to simulate the POWDER testbed environment as POWDER may provide RDZ functionality to researchers in the future. In addition, the testbed is situated near the downtown Salt Lake City urban area, which directly reflects the urban RDZ deployment scenario laid out in Section I. The simulated environment consisted of the Salt Lake City area projected into a \(400\times 400\) matrix. Each entry in the projected matrix corresponded to the summation of predicted TIREM signal strengths of the deployed transmitters at that given point. Additionally, we leveraged GIS map data of the Salt Lake City area and approximations of the POWDER testbed's campus buildings to model the terrain of the POWDER testbed. The parameters of the digital twin are fixed; hence, we leave dynamic digital twin modeling (using real-time updates to adjust the parameters of the digital twin) for future work. The evaluation targets the performance of FlexRDZ's AI planner (see Figure 2) rather than the performance of FlexRDZ's digital twin. Note that we present a suite of planning-based techniques and seek to demonstrate the efficacy of FlexRDZ in bolstering RDZ integrity relative to non-management environments, e.g., an environment with no planning agent. To evaluate our HTN planner, we compare with the following planning approaches.

1. _Stochastic HTN._ The stochastic HTN generates an identical plan to our HTN implementation; however, the agent executes an action with probability \(1-\epsilon\). We set \(\epsilon\) to \(0.1\), \(0.2\), and \(0.3\) (HTN 1, HTN 2, and HTN 3, respectively).
2. _Proximal Policy Optimization (PPO)._ A state-of-the-art model-free RL algorithm.
3. _Random._ The agent adopts a random policy and executes an action at random.
4. _Naive._ No planning agent. In this scenario, the RDZ operator adopts a trust-based policy (users will seek not to violate the terms of the RDZ); hence, no control framework is utilized. This is the primary baseline for comparison.

Furthermore, the simulated RDZ was defined by a rigid geographical boundary, which specified where signals above a given power must not be detected, mimicking our desired urban RDZ deployment. The deployment consisted of \(10\) fixed endpoints deployed at a height of \(1.8\) m with an antenna gain of \(-2\) dBi, \(9\) rooftop stationary test transmitters deployed at a height between \(20\) m and \(40\) m with an antenna gain of \(4.9\) dBi, and \(8\) densely deployed, stationary test transmitters placed at a height of \(8\) m with an antenna gain of \(4.9\) dBi.
The deployed transmitters were evenly split among the frequencies of \(3600\), \(3610\), and \(3620\) MHz. The placement and parameters of transmitters were selected to mimic the current POWDER testbed deployment. A single mobile transmitter was deployed at a height of \(30\) m with an antenna gain of \(4.9\) dBi, on one of the previously mentioned frequencies, depending on the simulated evaluation. We justify our limited evaluation with only one mobile node because, in an early-stage POWDER RDZ deployment, we expect the number of mobile transmitters to be constrained by the POWDER testbed deployment area, the density of existing stationary transmitters, and the restricted set of operating parameters. To measure the performance of our system, we track the following metrics across trials.

1. _Leakage Points._ The number of locations outside the RDZ boundary with a value above the designated RDZ power threshold.
2. _Total Strength of Leaked Signals._ The total power of induced signals outside the RDZ boundary with a value above the designated RDZ leakage threshold.
3. _Induced Interference._ The total mobile interference observed within the RDZ boundary for a given trial.
4. _Mobile Signal-to-Interference-plus-Noise Ratio._ The observed SINR during each evaluation step.
5. _Mobile Transmitter Uptime._ The percentage of valid time steps in which the transmitter is active. A time step is considered valid if the mobile transmitter is not violating the policies defined by the RDZ operator (e.g., mobile transmissions are limited to the area of the RDZ).

### _RL Training Procedure_

To benchmark our HTN approach against PPO, we leverage Stable Baselines and OpenAI Gym to train an agent for RDZ maintenance. We adopt the following RL formulation.

**State.** A state observation corresponds to a \(2048\)-dimensional fusion vector that encodes information about the current state of the simulated RDZ environment as well as the mobile transmit parameters. We leverage the fusion backbone of our RF model to encode the state of the simulation.

**Action.** A discrete value between \(0\) and \(3\). We map each action of our HTN to a number between \(0\) and \(3\); specifically, \(0\) corresponds to idle, \(1\) to disable transmitter, \(2\) to enable transmitter, and \(3\) to round-robin frequency reassignment.

**Reward.** Our reward is defined per time step as

\[10\times U+\frac{S}{30}-\frac{I}{I_{T}}-\frac{P}{A}-\frac{L}{L_{T}} \tag{1}\]

where \(U\) corresponds to the number of time steps during which the mobile transmitter is enabled, \(S\) is the SINR of the mobile transmitter in dB, \(I\) is the induced mobile interference in dBm, \(I_{T}\) is the interference threshold in dBm, \(P\) is the number of leakage points detected, \(A\) is the total area of the RDZ in units, \(L\) is the total power of leaked signals in dBm, and \(L_{T}\) is the RDZ power threshold for leaked signals in dBm. While simple, our reward function encourages uptime and open communication channels while minimizing mobile interference and leakage, all of which are critical to maintaining an RDZ. Note that we clip each reward component by an empirically chosen constant to better shape the reward function and reduce asymmetries between goals. We represent the learned policy with a multi-layer perceptron (MLP). We trained our agent in simulation for \(8000\) episodes via PPO [29], a state-of-the-art on-policy training approach, with each episode consisting of \(10\) time steps.
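As a concrete illustration, the snippet below evaluates the per-step reward in equation (1). The threshold and area defaults follow the values used in our evaluations (\(I_{T}=-70\) dBm, \(L_{T}=-95\) dBm, a \(400\times 400\) grid), while the per-component clipping constant is an illustrative stand-in, since the empirically chosen constants are not reported here.

```python
import numpy as np

def step_reward(U, S, I, P, L, I_T=-70.0, A=400 * 400, L_T=-95.0, clip=5.0):
    """Per-step reward from equation (1).

    U: time steps during which the mobile transmitter has been enabled
    S: mobile SINR (dB); I: induced mobile interference (dBm)
    P: number of leakage points; L: total power of leaked signals (dBm)
    I_T, A, L_T: interference threshold, RDZ area, leakage threshold
    clip: illustrative clipping constant (the empirically chosen
          constants used in training are not reported here)
    """
    components = np.array([10.0 * U, S / 30.0, -I / I_T, -P / A, -L / L_T])
    # Clip each component to reduce asymmetries between goals.
    components = np.clip(components, -clip, clip)
    return components.sum()
```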
After each time step, we apply the agent's action and calculate the resulting reward according to equation (1), continuing until policy convergence is observed.

### _Mobile Leakage_

For this evaluation, we set the RDZ leakage threshold to \(-95\) dBm, indicating that any signal detected beyond the RDZ boundary above \(-95\) dBm constituted an RDZ violation, i.e., leakage. Since our problem formulation considered an urban RDZ with nearby existing infrastructure, and signals with a strength of \(-90\) dBm are likely to drown in the noise floor, signals at \(-95\) dBm are likely to be too weak to impact nearby systems. The evaluation compared FlexRDZ's HTN against other planning approaches in terms of leakage mitigation. The naive approach relies on "good faith" that the mobile transmitter will not violate the trust agreement of the RDZ. The evaluation procedure was as follows. A partition of POWDER nodes (\(9\) nodes) was deployed on \(3600\) MHz. A single mobile transmitter was then randomly deployed on \(3600\) MHz. The mobile transmitter was then simulated for \(50\) discrete time steps, with the transmitter moving \(5\) units in a random direction at each time step. The number of leakage points detected and the total strength of leaked signals were recorded and summed at each step. We repeated the experiment for \(5\) trials, reusing the same transmit and movement parameters for all planning methods.

Fig. 5: Observed Mobile Leakage. A smaller number indicates less RDZ leakage, while a larger number indicates more leakage.

We summarize the results of the evaluation in Figure 5. The results show that the HTN-based planners consistently led to less leakage than the RL-based and naive planning approaches. We omit the results from the random policy as the transmitter uptime was significantly lower than for the other planning approaches (see Table I). The decrease in leakage can be observed in the fact that the first four bars in Figure 5 are consistently lower than the last two, which correspond to PPO and the naive approach. It is important to note that the stochastic planners outperform FlexRDZ's deterministic planner in some cases, e.g., trials \(4\) and \(5\) in terms of leakage points and trials \(1\)-\(3\) in terms of leaked signals. This is likely due to the exploration vs. exploitation trade-off commonly discussed in the RL literature [29]. In this case, due to the \(\epsilon\) hyperparameter, the stochastic planners are able to travel to new states that are not reachable by the deterministic HTN. For example, since the deterministic HTN is reactive and not preemptive, it is likely that these stochastic planners are preemptively disabling a transmitter that is about to leak, leading to enhanced leakage mitigation. On the other hand, while the planners reduce the total number of observed leakage points in comparison to the naive approach, the number of leakage points detected is still relatively high. One explanation is that these trials encompass some adversarial cases in which a transmitter attempts to straddle the boundary of the RDZ, i.e., the transmitter continues to oscillate between leaking and compliant behavior. Since FlexRDZ does not record the history of a transmitter, this periodic behavior circumvents FlexRDZ's basic leakage mitigation policy. To alleviate these adversarial cases, one could incorporate a strike policy into the RDZ, i.e., after \(x\) strikes, the offending transmitter may be penalized (e.g., disabled for a period of time).
Nonetheless, based on the results, the HTN-based planners outperform both the PPO-based policy and the naive approach, while offering greater transparency than their RL counterparts, as we can easily trace the planning path through the HTN. We observe improved performance among the stochastic HTN methods for certain trials, indicating that stochastic behavior can potentially enhance our planning policies. Furthermore, we observe a significant reduction in leakage points (nearly \(10\%\) in Trial 1) and leaked signal strength (approximately \(100\) dBm in Trial 1) across trials. Note that the signal results in Figure 5 are presented in log scale (base \(10\)). Most importantly, we emphasize that FlexRDZ's oversight greatly reduces the amount of leakage in comparison to non-agent-based environments.

### _Mobile Interference_

We set the mobile interference threshold to \(-70\) dBm for this evaluation. Therefore, if the mobile interference exceeded \(-70\) dBm, this constituted an RDZ violation. While weak, signals at \(-70\) dBm are likely to impact nearby transmitters, users, and test experiments. We collected and recorded the aggregate mobile interference observed and the SINR for the mobile transmitter. In this evaluation, we aimed to measure FlexRDZ's ability to preserve communication (i.e., maintain adequate SINR and reduce mobile interference). The procedure and setup were nearly identical to the previous evaluation; however, two POWDER partitions (\(18\) nodes total) were deployed and split among \(3600\) and \(3610\) MHz, respectively. The results of the evaluation are summarized in Figure 6 and Figure 7. We omit the results from the random policy as the transmitter uptime was significantly lower than for the other planning approaches (see Table I). Based on the results in Figure 6, the stochastic HTN planning methods significantly outperformed all other planning methods; additionally, all planning methods consistently outperformed the naive approach. The consistent decrease in interference is apparent from the smaller (more negative) interference values obtained by the various planning methods in comparison to the naive approach. As in the previous evaluation, the stochastic HTN methods' improved interference policy appears to stem from greater state space exploration, with the agent landing in states that have preemptive properties; however, the benefits of the stochastic approaches over their deterministic counterpart are more pronounced in this case. Given this, adopting a small perturbation (e.g., \(\epsilon=0.1\)) can lead to a noticeable boost in interference reduction and communication preservation. In essence, in most trials FlexRDZ outperforms the naive approach, with a decrease in total interference of nearly \(20\) dBm in some cases. As for the observed mobile SINR, we observe a small difference between FlexRDZ and the naive case. To highlight the performance differences, we plot the SINR results for the mobile transmitter as a CDF (see Figure 7). We assume that the SINR values are normally distributed according to the sample mean and the sample standard deviation of the collected results. For all approaches, most of the collected SINR measurements of the mobile transmitter are above \(25\) dB, which indicates that the transmitter's transmissions are strong and communication is preserved. Note that PPO significantly outperforms all other planning methods here. One reason for this may be skewed credit assignment.
In this case, the provided reward function may be asymmetrically promoting the mobile SINR above all other metrics; hence, the agent learns a policy that prioritizes the mobile SINR, e.g., always adjusting the mobile transmit frequency to promote the mobile communication channel. Although more evaluations may shed light on this discrepancy, the motivations behind the planning trajectory are opaque, unlike the HTN, which is human interpretable. As in the previous evaluations, these results indicate that FlexRDZ's planning-based approaches outperform the naive approach by preserving a higher level of communication for mobile transmitters. We emphasize that FlexRDZ's oversight greatly reduces the magnitude of interference in comparison to non-agent-based environments.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
 & HTN & HTN 1 & HTN 2 & HTN 3 & PPO & Random & Naive \\
\hline
Uptime & \(1.0\) & \(1.0\) & \(1.0\) & \(1.0\) & \(1.0\) & \(\approx 0.51\) & \(1.0\) \\
\hline
\end{tabular}
\end{table}
TABLE I: Average Mobile Uptime across Trials.

Fig. 6: Observed Mobile Interference. A lower value indicates reduced interference, while a higher value indicates increased interference.

## VI Conclusion

We presented FlexRDZ, an autonomous RDZ manager, and validated its design through a proof-of-concept prototype. While promising, further research is necessary. For example, leveraging online samples to update the internal propagation model's parameters would improve FlexRDZ's generalizability. Additionally, real-world benchmarks are crucial, motivating further research into enhancing testbed infrastructure. Despite its limitations, FlexRDZ represents an intersection between a growing interest in autonomous control, digital twin modeling, and dynamic spectrum environments. We argue that systems such as FlexRDZ will be critical to realizing an autonomous RDZ amidst the growing sophistication and complexity of wireless environments.
2303.05484
Let's talk about the weather: A cluster-based approach to weather forecast accuracy
Improved understanding of characteristics related to weather forecast accuracy in the United States may help meteorologists develop more accurate predictions and may help Americans better interpret their daily weather forecasts. This article examines how spatio-temporal characteristics across the United States relate to forecast accuracy. We cluster the United States into six weather regions based on weather and geographic characteristics and analyze the patterns in forecast accuracy within each weather region. We then explore the relationship between climate characteristics and forecast accuracy within these weather regions. We conclude that patterns in forecast errors are closely related to the unique climates that characterize each region.
Jill Lundell, Brennan Bean, Juergen Symanzik
2023-03-09T18:36:26Z
http://arxiv.org/abs/2303.05484v1
# Let's talk about the weather

###### Abstract

Improved understanding of characteristics related to weather forecast accuracy in the United States may help meteorologists develop more accurate predictions and may help Americans better interpret their daily weather forecasts. This article examines how spatio-temporal characteristics across the United States relate to forecast accuracy. We cluster the United States into six weather regions based on weather and geographic characteristics and analyze the patterns in forecast accuracy within each weather region. We then explore the relationship between climate characteristics and forecast accuracy within these weather regions. We conclude that patterns in forecast errors are closely related to the unique climates that characterize each region.

_Keywords:_ Climate, Clustering, Data Expo 2018, Glyph Plots, Random Forests, Visualization

From the icy, wet winters along the Great Lakes to the hot and dry summers in the Southwest, the United States (U.S.) experiences a wide range of climatic extremes. These extremes create unique challenges when forecasting the weather. Understanding forecast errors across such a diverse landscape is equally challenging, requiring multi-dimensional visualizations across space, time, and climate measurements. Better understanding of the nature and patterns in forecast errors across the U.S. helps meteorologists as they strive to improve weather forecasts. It can also help everyday Americans know how much faith to put in the weather forecast on the day of an important event. The 2018 Data Expo of the Sections on Statistical Computing and Statistical Graphics of the American Statistical Association (ASA) provided an opportunity to explore and compare weather forecast errors across the U.S. Our analysis focused on the question: How do weather forecast errors differ across regions of the U.S.? This motivating question prompted the subsequent questions:

* Do U.S. weather stations cluster into regions based on weather characteristics?
* How do error variables correlate and do these correlations change by region?
* How do forecast errors change by region and by season?
* Where are the best and worst forecast accuracies?
* Which variables are important in determining forecast errors?

Preliminary results of our analysis are published in the proceedings of the 2018 Joint Statistical Meetings [1]. This article is devoted to answering these questions. We use ensemble graphics to create an overall picture of weather forecast errors across different regions of the U.S. [2]. Ensemble graphics enhance traditional analyses by connecting several visualizations of the data with adjoining text. This presentation is able to tell a cohesive story of the data more effectively than would be possible with a few disjointed graphics. In Section 1, we summarize the data and then show that the U.S. can be clustered into six well-defined weather regions using the provided climate measurements, elevation, and distance to coast. These clusters, or weather regions, form the basis of our comparison of forecast accuracy across the U.S. through a series of multi-dimensional plots and variable importance analyses described in Section 2. In Section 3, we introduce the interactive application we created to enhance our data explorations. We conclude in Section 4 that the climate differences that distinguish the weather regions of the U.S. also create region-specific patterns and differences in forecast accuracy.
Two appendixes are included at the end of this paper to explain data cleaning and how to create the glyphs used in this article.

## 1 Weather regions

The data contain measurements and forecasts for 113 U.S. weather stations from July 2014 to September 2017. These data can be obtained from our supplemental materials or at the following URL: http://community.amstat.org/stat-computing/data-expo/data-expo-2018. Daily measurements for eight different weather metrics were recorded for each location, including temperature, precipitation, dew point, humidity, sea level pressure, wind speed, cloud cover, and visibility. Many notable weather events, such as thunderstorms and fog, are also textually recorded. Daily measurements of the minimum, maximum, and mean were recorded for each metric. Weather characteristics used in this article are listed in Table 1. Data were supplemented with some geographic information and carefully examined and cleaned. Details on data cleaning, obtaining additional data, and the justification behind our final variable selection are found in Appendix A.

### Developing weather clusters

The U.S. has been divided into regions based on environmental characteristics such as watersheds and climate [3][4]. We examined the set of existing environmental regions and were unable to find one that made sense in terms of weather in the context of this analysis. We created our own weather regions by clustering the weather stations based on the metrics in Table 1. Thus, clusters are defined by weather characteristics observed at each station. We use these clusters to determine how weather forecast error patterns are related to the unique climate measurements of a particular region. A review of existing weather regions and how they correspond to our weather regions is discussed in Section 1.2. Data were aggregated across each weather station by taking the mean and standard deviation of each variable in Table 1 for each of the 113 weather stations over the period of record.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
**Variable** & **Unit** & **Range** \\
\hline
Min/Max Temperature & \({}^{\circ}F\) & \([-37,127]\) \\
Precipitation & in & \([0,12.95]\) \\
Min/Max Dew Point & \({}^{\circ}F\) & \([-50,90]\) \\
Min/Max Humidity & \% & \((0,100]\) \\
Min/Max Sea Level Pressure & inHg & \([28.2,31.2]\) \\
Mean/Max Wind Speed & mph & \([0,70]\) \\
Min Visibility & mi & \([0,10]\) \\
Cloud Cover & okta & \(\{0,1,\cdots,8\}\) \\
Distance to Coast & mi & \([0,807]\) \\
Elevation & ft & \([3,7422]\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: List of weather variables included in our analysis. All observations outside the indicated ranges were removed prior to our analysis.

Hierarchical clustering [5] with Euclidean distance and Ward's minimum variance clustering method [6] was used to identify clusters. The clusters were examined spatially to determine the performance of the clustering method and to select the final number of clusters. We wanted to ensure the weather station clusters were of a sufficient size to be practical. Five clusters resulted in one cluster that included all of the stations from the Midwest to the East Coast, which we consider too large because of the differences in coastal and inland climates. Seven clusters produced a cluster that contained only five weather stations, which is too small. Thus, we chose six clusters to divide the U.S. into weather regions.
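For readers who wish to reproduce this step, the following minimal sketch performs the same Ward clustering in Python's SciPy. Our analysis was carried out in R, so this translation and the stand-in feature matrix are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# station_features: rows = 113 stations, columns = the mean and standard
# deviation of each variable in Table 1 (random stand-in for illustration).
rng = np.random.default_rng(0)
station_features = rng.normal(size=(113, 30))

# Standardize columns so no single variable dominates the distances.
X = zscore(station_features, axis=0)

# Ward's minimum variance method with Euclidean distance.
Z = linkage(X, method='ward', metric='euclidean')

# Cut the dendrogram into six weather regions.
regions = fcluster(Z, t=6, criterion='maxclust')
print(np.bincount(regions)[1:])  # number of stations per region
```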
Figures 1 and 2 show the results of the cluster analysis. Figure 3 shows a parallel coordinate plot of the characteristics for each weather region. The Z-score of the mean and standard deviation for each of the variables in Table 1 was computed and plotted on the parallel coordinate plot. It is difficult to distinguish the six weather regions from each other, so an interactive app was created that provides a better view of the features of each cluster. The app is discussed in Section 3. The names and characteristics of each weather cluster are as follows:

* **Cali-Florida** (13 stations): Warm and humid with high dew point and pressure. Low variability in almost all measurements.
* **Southeast** (22 stations): Warm and humid with lots of rain. High variability in precipitation and low variability in temperature.
* **Northeast** (39 stations): Cold, humid, and low visibility. High variability in temperature, dew point, and pressure.
* **Intermountain West** (19 stations): Cold and dry, with high variability in temperature, wind speed, and pressure. Low variability in precipitation and dew point.
* **Midwest** (13 stations): Landlocked with high wind speed and high variability in temperature, pressure, and wind speed.
* **Southwest** (7 stations): Warm, sunny, and dry with little variation in temperature or precipitation. High variability in wind speed and humidity.

### Comparison to existing climate regions

Ecological and climate regions have been developed for the U.S. in other studies. Many of these studies focused on smaller regions of the U.S., but a few have looked at the U.S. as a whole. Clustering methods and the variables used to identify clusters differ from study to study. The ecological regions of North America defined by the Commission for Environmental Cooperation [3] used ecosystems to develop regions. Air, water, land, and biota, including humans, were used to create the ecoregions. These ecoregions show a strong longitudinal trend that corresponds well with the longitudinal trends in our clusters. Clusters were not determined by statistical clustering methods, but by careful assessment of ecological properties across North America. The National Oceanic and Atmospheric Administration (NOAA) developed climate regions that incorporate seasonal temperature and precipitation information [7]. These regions differ substantially from the North American ecological regions as they also have a lateral trend in addition to the longitudinal trend and are constrained by state boundaries. Spectral curves assessing drought and wet spells were used to define the NOAA regions [8]. The NOAA regions correspond roughly to our general weather regions despite region borders being defined by state boundaries. The north/south division in the eastern U.S. closely aligns with our cluster division in that area. The major east/west division in our clusters is in a similar location to the NOAA clusters as well. The International Energy Conservation Code (IECC) climate clustering of the U.S. [4] and the subsequent reclassification by Hathaway et al. [9] divided the U.S. into fourteen regions based on temperature, dew point, wind speed, and radiation. Cluster methods included K-means clustering and Monte-Carlo sifting. Both sets of regions show a strong lateral trend in the Eastern U.S. These regions also show distinct separation of the West Coast and Southwest deserts from the rest of the Western U.S. Similar trends are also seen in our clusters. The lateral trend in the Eastern U.S.
is not as strong in our clusters, but this is likely because we chose a smaller number of weather clusters. The inclusion of additional variables insensitive to lateral trends, such as distance to coast, elevation, and humidity, serves to reduce the lateral separation in our clusters. One key difference between our weather regions and the regions seen in other studies is that we combine Florida and the Pacific coast into a single weather region. This is likely a result of our choice to omit geographic proximity of weather stations in the cluster analysis calculations and consider only similarities in weather patterns. Both Florida and the Pacific coast experience less seasonality in their weather patterns than the rest of the country. This results in smaller than average standard deviations for many of the climate variables in both of these regions. These small standard deviations create a measure of closeness between Florida and the Pacific coast, which likely explains why these two geographic areas fall into a single cluster when working with six or fewer clusters. The Florida and Pacific stations separate into distinct clusters when using seven clusters, with the exception of two stations from the Pacific Coast that cluster with the Florida stations. Hawaii and Alaska are either ignored in the literature or placed in their own regions. Because we did not use spatial proximity as a clustering variable and we assigned all weather stations to one of our six weather clusters, Hawaii and Alaska are clustered with Cali-Florida and the Northeast, respectively. Our clusters show that weather patterns typically have strong spatial correlations, with temperate coastal regions being a notable exception.

Figure 1: Map of the six weather regions. The color band at the bottom identifies each region by name and color.

Figure 2: Dendrogram of weather clusters identified in Figure 1.

## 2 Forecast error explorations

Given the clear separation of the country into distinct weather regions, we seek to determine if there are clear differences in forecast error patterns among the regions. Forecasts were restricted to minimum temperature, maximum temperature, and the probability of precipitation. The forecast error for minimum and maximum temperature is calculated as the absolute difference between forecast and measurement. The forecast error for precipitation is measured using the Brier Skill Score (BSS), a well-known measure of probabilistic forecast accuracy [10]. It is defined for a particular weather station as

\[\text{BSS}=1-\frac{\sum\limits_{i=1}^{N}\sum\limits_{j=0}^{M}(Y_{ij}-O_{i})^{2}}{\sum\limits_{i=1}^{N}\sum\limits_{j=0}^{M}(P-O_{i})^{2}} \tag{1}\]

where

* \(Y_{ij}\in[0,1]\) is the predicted probability of rain on day \(i\) with forecast lag \(j\);
* \(O_{i}\in\{0,1\}\) is a binary variable with value \(1\) if _any_ precipitation fell during the day and \(0\) otherwise. We define a precipitation event as a positive precipitation measurement or the inclusion of the words "rain" or "snow" in the event information;
* \(P\in[0,1]\) is the average daily chance of precipitation over the period of interest, defined as \(P=\frac{1}{N}\sum\limits_{i=1}^{N}O_{i}\);
* \(N\) denotes the number of days of recorded precipitation in the period of record and \(M\in\{0,\ldots,5\}\) denotes the number of forecast lags.

Note that \(\text{BSS}\in(-\infty,1]\), with \(1\) indicating perfect forecast skill and movement towards \(-\infty\) indicating worse forecasts.
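For reference, a minimal sketch of this computation (returning \(1-\text{BSS}\), the orientation used below; the array names are illustrative):

```python
import numpy as np

def one_minus_bss(Y, O):
    """Forecast error 1 - BSS from equation (1).

    Y: array of shape (N, M + 1); Y[i, j] is the forecast probability
       of precipitation on day i at lag j.
    O: binary array of length N; O[i] = 1 if any precipitation fell.
    Returns 1 - BSS, so 0 is a perfect forecast and larger is worse.
    """
    Y = np.asarray(Y, dtype=float)
    O = np.asarray(O, dtype=float)
    P = O.mean()  # average daily chance of precipitation
    num = ((Y - O[:, None]) ** 2).sum()
    # (P - O_i)^2 does not depend on j, so sum over i and scale by M + 1.
    den = Y.shape[1] * ((P - O) ** 2).sum()
    return num / den
```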
We chose to use \(1-\text{BSS}\) so that all three error variables are consistent in orientation. The following subsections explore differences in forecast errors both between and within the previously defined weather regions visualized in Figure 1. Forecast errors are averaged over lag and in some cases averaged over month in each graph. The visualizations in the following subsections confirm our hypothesis that different weather regions experience distinctly different weather forecast error patterns.

### Error correlations

Are the forecast errors for the three different measurements (i.e., minimum temperature, maximum temperature, and precipitation) correlated with each other? How do these relationships change between the different weather regions? We explore such correlations through the use of correlation ellipses [11] superimposed on a map of the U.S. in Figure 4. We calculated Spearman correlations between each pair of measurements for the locations within each cluster. The sign of the correlation coefficient is denoted by the slope of the ellipse and the strength of correlation is denoted by the width of the ellipse. All of the correlations between error variables are positive except for the correlations between minimum temperature and the other two variables in the Northeast. The strongest relationships are seen in the Midwest, the South, and the Southwest. The weakest relationships are found in the Northeast. Only a few cluster-specific correlations are significant. This is likely due to the small number of stations in many of the weather regions. However, the overall correlations for the 113 weather stations are all positive and significant. This indicates that areas with good predictions for one forecast variable have generally good predictions for the other forecast variables as well. The weakest correlations are between minimum temperature and precipitation predictions. Although there are relationships between the three weather forecast variables, those relationships are not particularly strong and the strength differs within each region. The observations made using this correlation ellipse map illustrate how this plot style facilitates multi-dimensional comparisons across space. Information on the calculations and implementation of the correlation glyphs can be found in Appendix B.

Figure 3: Parallel coordinate plot of the means and standard deviations of the weather variables listed in Table 1. Each line in the plot represents one of the 113 weather stations. The color of each line matches the weather region to which the station belongs. An interactive app is available that allows for better identification of regional trends. The Southwest region is highlighted in this graph to emphasize its weather characteristics.

### Error scatterplots

Scatterplots reveal outliers and overall trends within weather regions and across forecast lag. Forecast lag is defined as the number of days between the day of forecast and the day being forecast. Thus, same-day forecasts have a lag of 0, forecasts made one day prior a lag of 1, and so on. Because we are comparing three variables spatially and temporally across the U.S., static graphs are not optimal for assessing all relationships of interest. We constructed an interactive scatterplot app that facilitates examination of trends between the three forecast error variables aggregated across all forecast lags or for individual forecast lags. Figure 5 (a-c) shows examples of plots from the interactive app.
The figure shows the scatterplot for the data aggregated over all forecast lags, as well as the scatterplots for lags of 5, 3, and 1, to illustrate how forecast accuracy changes over forecast lag. Figure 5(a) compares minimum temperature forecast accuracy with precipitation accuracy. Weather stations with the worst predictions of minimum temperature are located in New England and the Intermountain West. New England is known for extreme winter weather, and the frequency of extreme weather events seems to be increasing [12]. This likely contributes to the struggle these stations have predicting minimum temperature. The worst predictor of minimum temperature is Austin, Nevada. This location is addressed further in Figure 5(c). Cali-Florida uniformly has the best predictions of minimum temperature. However, Cali-Florida also has some of the greatest variability in precipitation prediction accuracy when examining individual lags. Figure 5(b) compares maximum temperature prediction accuracy with precipitation accuracy. Four weather stations in the Great Lakes region have the worst precipitation predictions in the dataset. Poor precipitation forecast accuracy in this region illustrates the difficulty of forecasting lake-effect snow. This phenomenon is discussed in greater depth in Section 2.3. Precipitation forecast accuracy for the Great Lakes region improves substantially as the forecast lag decreases, and forecasts with lag 1 are as accurate as those for the rest of the nation. Figure 5(c) shows the relationship between minimum and maximum temperature forecast accuracy. Three outliers stand out in these scatterplots, namely Key West, Florida, Austin, Nevada, and San Francisco, California. Key West predicts both minimum and maximum temperature more accurately than any other weather station. Key West also ranks in the top five for lowest variability in eight of the weather variables, which likely explains the accurate forecasts. Austin is the poorest predictor of both measures. Seventy miles along the "loneliest highway in America" [13] separate Austin from its weather measurements, which were collected in Eureka, Nevada. The poor predictions for maximum and minimum temperature can be explained by the change in climate over such a large distance. This is reflected in a negative prediction bias of around 5\({}^{\circ}\)F for maximum temperature and a positive bias of around 7\({}^{\circ}\)F for minimum temperature. San Francisco has good predictions of minimum temperature and poor predictions for maximum temperature. This phenomenon is further explained in Section 2.3. The interactive app developed in conjunction with this project allows for further investigation of forecast accuracy trends. The app is discussed in Section 3.

Figure 4: Spearman correlations between forecast error variables represented as ellipses superimposed on a map of the United States. The p-value for each correlation is compared against a 0.05 level of significance.

Figure 5: Scatterplots comparing the three forecast error variables. The scatterplot to the left of the map is aggregated over all forecast lags. Points of interest discussed in the text are highlighted in the respective plots.

### Seasonal trends

The position of the U.S. in the northern hemisphere makes most of the country subject to distinct weather seasons. Seasons are most pronounced in the northern U.S. We hypothesize that forecast error behavior is inextricably linked to this seasonality. We explore this through a series of space-time graphs.
Modeling space and time simultaneously creates a three-dimensional problem usually visualized as small multiples. Small multiples are "a series of graphics, showing the same combination of variables [e.g., latitude and longitude], indexed by changes in another variable [e.g., time]" [14]. The issue with this approach is that it becomes difficult to visually comprehend all but the most drastic changes from graph to graph. One alternative that allows simultaneous visualization of both space and time is the use of glyphs, or symbols, that allow for multi-dimensional visualization in a spatial context [15][16]. Figure 6 shows glyph plots of seasonal forecast errors throughout time. The forecast error is visualized as the scaled distance from a center point to the edge of a polygon with twelve observations, starting with January at the 12:00 position and proceeding clockwise. The asymmetry of the glyphs about their center points illustrates how forecast errors change across time and across space. For example, locations in the Northeast are worse at forecasting precipitation in the winter than in the summer, while locations in the Southeast forecast precipitation equally well throughout the year. In addition to highlighting forecasting asymmetries, Figure 6 reveals location-specific anomalies. For example, San Francisco, California, predicts minimum temperatures well all year, but only predicts maximum temperatures well in the winter months. This is likely due to chilling coastal fogs known to frequent the region throughout the year that can create sharp temperature differences over short distances [17]. The struggle to predict temperature seems reasonable in light of these facts, as this measurement location is more than 11 miles inland from the forecast location. The issue is likely less pronounced in the winter because the contrast between inland and coastal temperatures is reduced. Maximum temperature predictions are particularly poor in the summer months in Austin, Nevada. It is unclear why predictions are worse in the summer than in the winter. Another location-specific anomaly of note is the drastic seasonality of precipitation forecasts for locations surrounding the Great Lakes, as observed in Figure 6. The error scatterplots in Figure 5(b) show that precipitation accuracy is poor in this region, but the seasonality of the predictions cannot be observed in the scatterplots. The unusually bad forecasting in the winter is likely due to lake-effect snow, which is prevalent in the region. Up to 100% more snow falls downwind of Lake Superior in the winter than would be expected without the lake effect [18]. This area has been previously identified as having the most unpredictable precipitation patterns in the nation [19]. The above examples demonstrate the ease with which comparisons can be made across space and time with these glyph-based plots. Information about how to generate the glyphs is included in Appendix B.

### Variable importance

The differences in forecast error patterns across regions prompt identification of the most important climate measurements for predicting forecast error. We used random forests [20] to determine which weather variables had the greatest impact on the forecast errors. The data were aggregated over forecast lag and month. Three random forest models were generated for each weather region using the forecast error variables as the response.
The means and standard deviations of each of the weather variables listed in Table 1, along with the forecast lag, were the predictor variables. Figure 7 contains three parallel coordinate plots that show the variable importance measures in each region for each forecast error variable. The importance measures obtained from the random forests were recentered by subtracting the minimum importance measure and then rescaled to the interval (0, 100) by dividing by the maximum importance measure of the recentered values for each weather cluster and forecast error variable combination and multiplying by 100. Thus, the most important variable within each weather region has a value of 100 and the least important has a value of 0 for each error measure. This allows direct comparisons of importance between weather regions and across error measures.

Figure 6: Glyph plots of weather forecast accuracy averaged by month. The error is represented as the scaled distance from a center point to the edge of a polygon beginning with January at the 12:00 position and proceeding clockwise.

Figure 7 shows that the most important variable for the precipitation error is forecast lag, regardless of weather region. None of the other variables are very important relative to lag. The Southeast shows minimum dew point (DP) and the standard deviation of maximum dew point as being somewhat important. Cloud cover is important for the precipitation error in the Northeast. Forecast lag is also the most important variable for the maximum temperature error for all weather regions except Cali-Florida. The standard deviation of maximum temperature and maximum wind speed (WS) are more important than lag in Cali-Florida. The variability in maximum temperature is also important for the Southeast, the Northeast, and the Intermountain West. Distance to coast (Dist2Coast) and elevation are important for the maximum temperature error in the Intermountain West. Variables that are important for the minimum temperature error varied substantially across weather regions. The variability in minimum temperatures is important for all regions, but other important variables differ widely from region to region. Minimum temperature is the most important for the Northeast and Intermountain West, but maximum temperature is important for the Southeast. Minimum dew point and the variability in maximum sea level pressure (SLP) are important in the Southwest, while variability in minimum sea level pressure is the most important for the Midwest, Southeast, and Southwest. Forecast lag is not particularly important for any of the regions except the Midwest.

## 3 Interactive application

It is difficult to identify the patterns in climate measurements and forecast errors for all weather regions with static visualizations. We developed an interactive Shiny app to enhance our weather data explorations. This app can be accessed at https://jilllundell.shinyapps.io/finaldataexpoapp/. The first tab of the app is an interactive version of the parallel coordinate plot introduced in Figure 3. The app allows the user to select a weather region, which is highlighted on the graph. Characteristics of the selected region can be easily seen and compared to all other observations.

Figure 7: Variable importance for each of the three forecast accuracy measurements. Variable importance measures have been rescaled to make the measures directly comparable between weather regions and accuracy measures.
The second tab of the app is an interactive scatterplot. Figure 5 (a-c) shows examples of the graphs generated in this tab. The user can select up to two of the three forecast error variables to be on the axes. The forecast lag can also be selected. Points on the scatterplot can be brushed or clicked, and the selected points show up on a map of the U.S. Information about selected stations is listed in a table under the graph. The idea of linked brushing between scatterplots and maps was first introduced in Monmonier [21]. This app allows for a more complete exploration of outliers and trends in the data across forecast lags and between error variables than a static graph.

## 4 Conclusions

Climate patterns in the United States cleanly separate into six recognizable regions through a cluster analysis using the means and standard deviations of the weather variables provided in Table 1. We explored the relationship between the three weather forecast variables (i.e., minimum temperature, maximum temperature, and precipitation) using the correlation ellipses shown in Figure 4. We found that all clusters show signs of positive correlations among the error variables, with the exception of the Northeast cluster. We visualized the pairwise relationships between forecast errors through a series of scatterplots across all forecast lags in Figure 5. These plots highlight the superiority of locations in the Cali-Florida region for predicting minimum temperature across all lags, and also show that the poor precipitation predictions of the Great Lakes region are mostly confined to forecasts greater than lag 2. Lastly, the abnormally high errors in Austin, Nevada, are likely a product of the large distance between forecast and measurement locations. We explored seasonal differences in forecast errors in Figure 6 and observed that seasonal differences in forecast errors tend to be more pronounced in northern, inland clusters than in southern clusters. We also showed that location-specific anomalies, such as the asymmetry in seasonal maximum temperature forecast errors in San Francisco and the precipitation forecast errors near the Great Lakes, have plausible explanations in the literature. Next, we compared the important variables in determining forecast errors across clusters using scaled random forest variable importance measures in Figure 7. These measures demonstrate that forecast lag is most important in determining the maximum temperature and precipitation forecast errors, but not important in predicting the minimum temperature forecast errors. Many clusters place similar importance on a few variables, but some variables are important only in a single cluster, such as maximum wind speed in predicting the maximum temperature forecast error in Cali-Florida. For further insight regarding the nature of forecast errors across these six clusters, we refer readers to our R Shiny app described in the previous section. A current version of the app can be found at the following URL: https://jillundell.shinyapps.io/finaldataexpoapp/. This app, in conjunction with the visualizations presented in this article, reinforces the idea that the U.S. cleanly clusters into well-defined weather regions and that patterns in forecast errors are closely related to the unique climates that characterize each region.
The visualizations in this paper, both interactive and static, were designed to be scalable to larger weather datasets. We anticipate illustrating this capability on an expanded set of stations in the future. An expanded analysis will also serve to validate the regional patterns observed and described in this paper. In addition, we anticipate adapting several of the static glyph plots presented in this paper for interactive use. Greater interactivity will allow for more detailed explorations of weather patterns in the United States across both time and space.

## 5 Acknowledgements

The authors would like to thank the Sections on Statistical Computing and Statistical Graphics of the ASA for providing the data used in this analysis. The primary analytical tool for this analysis was R [22]. Additional information regarding specific measurement locations was provided by the weatherData R package [23]. Distance and spatial calculations made use of the fields [24], geosphere [25], mapproj [26], rgdal [27], and sp [28] R packages. Other data manipulations and visualizations made use of the tidyverse [29], as well as the ggforce [30], latex2exp [31], RColorBrewer [32], and reshape2 [33] R packages. Variable importance models made use of the randomForest [34] R package.

## Appendix A: Data cleaning

We primarily used the dataset provided by the Data Expo to perform the analyses described in the article. We supplemented the provided location information with elevation and distance to the nearest major coast. Elevation information was obtained for each location through Google's API server [35] via the rgbif R package [36]. Distance to coast was calculated as the closest geographical distance between each measurement location and one of the vertices in the U.S. Medium Shoreline dataset [37], which includes all ocean and Great Lakes coasts for the contiguous 48 states. Because this dataset does not include the coastlines of Alaska and Hawaii, distance to coast calculations for these locations used manually extracted shorelines from NOAA's Shoreline Data Explorer [38]. We acknowledge there are limitations to this method of distance calculation, as distances for some locations, such as Arizona (Flagstaff, Nogales, and Phoenix), are slightly longer than they would be had we used shoreline information for Mexico's Gulf of California. Nevertheless, these measurements effectively separate inland weather stations from coastal stations. Table 1 shows the weather variables included in our final analysis. We excluded mean daily measurements for temperature, precipitation, dew point, humidity, and sea level pressure as these measurements were near perfect linear combinations of their corresponding minimum and maximum measurements. We also excluded maximum visibility from the analysis as this measurement was equal to 10 miles for more than 97% of all recorded measurements. Lastly, we combined the information provided by maximum wind speed and maximum wind gust, retaining only the lower of the two measurements after removing outliers. The decision to combine the information from these two wind variables was motivated by the fact that 13% of all maximum wind gust values were missing. In addition, it is difficult to separate unusually high, yet valid, maximum wind gust and wind speed measurements from true outliers. Some stations did not record relevant climate variables.
When possible, observations missing in this way were replaced with corresponding measurements from the nearest National Weather Service (NWS) first order station, as obtained through the National Climatic Data Center (NCDC) [39]. Missing values include wind speed in Baltimore, Maryland, precipitation in Denver, Colorado, and replacements of outlier precipitation measurements at multiple locations. When replacements were not readily obtained through the NCDC, systematic missing observations were replaced with corresponding observations from the nearest geographical neighbor within the dataset, as was the case for visibility and cloud cover in Baltimore, Maryland (replaced with Dover, Delaware, measurements) and Austin, Nevada (replaced with Reno, Nevada, measurements). Table 1 also shows the observation ranges for each of the included variables. These measurement ranges are either definitional, such as the bounds for humidity, or simply practical, such as the bounds for temperature. All measurements falling outside the bounds shown in Table 1 were removed prior to our analysis. Several individual outliers were also removed or replaced based on location-specific inconsistencies including

* removal of one unusually low minimum temperature measurement in Honolulu, Hawaii (\(<10^{\circ}\)F) and two in San Francisco, California (\(<20^{\circ}\)F);
* replacement of the following unusually high precipitation readings with precipitation readings at nearby weather stations [39]:
  * Oklahoma City, Oklahoma, on 8/10/2017 (\(38.33\)in \(\to 0.8\)in)
  * Salmon, Idaho, on 4/21/2015, 5/2/2016, and 5/3-4/2017 (\(10.02\)in \(\to 0\)in)
  * Flagstaff, Arizona, on 12/24/2016 (\(7.48\)in \(\to 0.97\)in)
  * Indianapolis, Indiana, on 7/15/2015 (\(9.99\)in \(\to 0\)in);
* removal of one unusually low minimum dew point measurement in Honolulu, Hawaii (\(<40^{\circ}\)F), two in Hoquiam, Washington (\(<0^{\circ}\)F), four in Las Vegas, Nevada (\(<-15^{\circ}\)F), and two in Denver, Colorado (\(<-20^{\circ}\)F).

Forecast variables were restricted to minimum temperature, maximum temperature, and the probability of precipitation. We found no obvious outliers in the weather forecasts. This is reasonable due to the fact that forecasts are not subject to the inevitable sensor technology failures that occur when taking an actual measurement. Rather, the forecast data were replete with duplicate values for minimum temperature and precipitation. We retained the lowest forecast of minimum temperature and the highest forecast of precipitation probability for each forecast. Forecast lags of six or seven days contained a large number of missing values. We removed all forecasts past lag 5. We also removed all forecasts containing negative lags (i.e., a forecast made _after_ the actual observation).

## Polar coordinate considerations for geographic maps

The glyph plots in Figures 4 and 6 rely on proper conversions from polar to geographic or Cartesian coordinates. This allows the glyphs to be plotted directly on the underlying map, rather than embedding polar coordinate subplots in the image. Avoiding subplots allows for greater precision in the placement of the glyphs and avoids the computational burden of creating and embedding multiple figures. This direct plotting approach requires special considerations for geographical maps, as polar coordinate glyphs become distorted when projecting geographical coordinates to a Cartesian plane.
For example, a perfect circle in geographical coordinates will appear elongated in the vertical direction when the circle is projected in the northern hemisphere. One solution to this issue is to project all geographical coordinates to a Cartesian plane prior to the glyph construction. This can be conveniently accomplished using the mapproject() function in the mapproj R package [26].

Polar coordinates are defined in terms of radius \(r\) and angle \(\theta\). Figure 6 defines \(r\in[0,1]\) as the scaled average absolute error between predicted and actual temperature and \(\theta=\frac{(4-m)\pi}{6}\) where \(m\) represents the numeric month. We center each glyph at 0 with Cartesian coordinates

\[(x,y)=(r\cos\theta,r\sin\theta).\]

Let \((\mathbf{x}_{i},\mathbf{y}_{i})\) represent the set of Cartesian coordinates centered at the origin that create the glyph associated with location \(i\). These coordinates are defined using the same units as the underlying map projection. The final coordinates of the rendered glyph are defined as

\[\alpha\cdot(\mathbf{x}_{i}+u_{x},\mathbf{y}_{i}+u_{y})\]

where \((u_{x},u_{y})\) represents the coordinates of location \(i\) and \(\alpha\) represents a global scaling parameter used to adjust the size of the rendered glyphs on the map. A point is drawn at location \((u_{x},u_{y})\) to serve as a reference for the glyph. Asymmetry about the point \((u_{x},u_{y})\) reveals seasonal patterns in the forecast errors.

We construct the correlation ellipses of Figure 4 with foci \(F_{1},F_{2}\) located along the semi-major axis \(y=x\) (\(\theta=\frac{\pi}{4}\)) for positive correlations and \(y=-x\) (\(\theta=-\frac{\pi}{4}\)) for negative correlations. We fix \(F_{1}\) at the origin and denote \(r\) as the radius extending from \(F_{1}\) to the edge of the ellipse, as illustrated in Figure 8. This approach to ellipse creation is outlined in Knisley and Shirley [40] and adapted here where we define \(r\) for \(\theta\in[0,2\pi]\) as

\[r=\frac{(1-|\rho|)^{2}}{1-\sqrt{|\rho|(2-|\rho|)}\cos\left(\theta-\frac{\text{sign}(\rho)\pi}{4}\right)},\]

where \(\rho\in(-1,1)\backslash\{0\}\) represents the desired correlation between forecast errors. In the event that \(\rho=-1,1,\text{ or }0\), we use \(\rho\pm\epsilon\) \((\epsilon>0)\) when creating the ellipse to avoid numerical precision errors. The ellipse is then converted to Cartesian coordinates and centered at the origin as

\[\left(r\cos(\theta)-\frac{|\rho|(2-|\rho|)}{\sqrt{2}},\;r\sin(\theta)-\text{sign}(\rho)\frac{|\rho|(2-|\rho|)}{\sqrt{2}}\right).\]

Each ellipse is scaled to be circumscribed in the \([-0.5,0.5]\times[-0.5,0.5]\) square. This scaling makes it possible to create a matrix of ellipses using a common grid size. It also reduces the difference in areas between ellipses which facilitates comparisons of shape. This scaling is defined as

\[(\mathbf{x}_{i}^{\prime},\mathbf{y}_{i}^{\prime})=\left(\frac{\mathbf{x}_{i}}{2\cdot\text{max}(|\mathbf{x}_{i}|)},\frac{\mathbf{y}_{i}}{2\cdot\text{max}(|\mathbf{y}_{i}|)}\right).\]

Note that there are three ellipses for each location. We define a matrix of ellipses centered at the shared vertex of the lattice denoted by \((u_{x},u_{y})\). Let \((\mathbf{x}_{i},\mathbf{y}_{i})\) represent the coordinates of one of the three ellipses centered at this location.
Each ellipse is centered and scaled on the map as \[\alpha\cdot(\mathbf{x}_{i}^{\prime}+u_{x}+o_{1},\mathbf{y}_{i}^{\prime}+u_{y}+ o_{2})\] where \(o_{1}\) and \(o_{2}\) represent offset terms used to separate the centers of the three ellipses in the matrix defined for each location. This direct plotting approach of the ellipses eases plot customization, as there is no need to reconcile formatting differences between independently created subplots. This approach can also be generalized to plot other geometric shapes on a geographic map. It is also helpful for interactive applications that require fast renderings of images in response to dynamic inputs.
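The ellipse construction above translates directly into code. The following Python sketch (ours; the paper's implementation is in R) evaluates the polar-form radius for a given correlation \(\rho\), converts to Cartesian coordinates, re-centers the ellipse, and rescales it to the \([-0.5,0.5]\times[-0.5,0.5]\) square:

```python
import numpy as np

def correlation_ellipse(rho, n=200, eps=1e-6):
    """Correlation ellipse in the polar form used for Figure 4 (focus F1 at origin)."""
    rho = float(np.clip(rho, -1 + eps, 1 - eps))  # avoid rho = +-1
    if rho == 0.0:
        rho = eps                                  # avoid rho = 0
    a = abs(rho)
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    r = (1 - a) ** 2 / (1 - np.sqrt(a * (2 - a)) * np.cos(theta - np.sign(rho) * np.pi / 4))
    # Convert to Cartesian coordinates and re-center at the origin.
    x = r * np.cos(theta) - a * (2 - a) / np.sqrt(2)
    y = r * np.sin(theta) - np.sign(rho) * a * (2 - a) / np.sqrt(2)
    # Rescale so the ellipse is circumscribed in the [-0.5, 0.5] x [-0.5, 0.5] square.
    return x / (2 * np.max(np.abs(x))), y / (2 * np.max(np.abs(y)))

x, y = correlation_ellipse(0.6)  # one ellipse of a location's three-ellipse matrix
```

The \(\alpha\), \((u_{x},u_{y})\), and offset terms of the final placement step can then be applied to the returned coordinates exactly as in the expression above.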
2301.10196
Overlap-ADAPT-VQE: Practical Quantum Chemistry on Quantum Computers via Overlap-Guided Compact Ansätze
ADAPT-VQE is a robust algorithm for hybrid quantum-classical simulations of quantum chemical systems on near-term quantum computers. While its iterative process systematically reaches the ground state energy, ADAPT-VQE is sensitive to local energy minima, leading to over-parameterized ans\"atze. We introduce the Overlap-ADAPT-VQE to grow wave-functions by maximizing their overlap with any intermediate target wave-function that already captures some electronic correlation. By avoiding building the ansatz in the energy landscape strewn with local minima, the Overlap-ADAPT-VQE produces ultra-compact ans\"atze suitable for high-accuracy initializations of a new ADAPT procedure. Spectacular advantages over ADAPT-VQE are observed for strongly correlated systems including massive savings in circuit depth. Since this compression strategy can also be initialized with accurate Selected-Configuration Interaction (SCI) classical target wave-functions, it paves the way for chemically accurate simulations of larger systems, and strengthens the promise of decisively surpassing classical quantum chemistry through the power of quantum computing.
César Feniou, Muhammad Hassan, Diata Traoré, Emmanuel Giner, Yvon Maday, Jean-Philip Piquemal
2023-01-24T18:10:58Z
http://arxiv.org/abs/2301.10196v3
# Overlap-ADAPT-VQE: Practical Quantum Chemistry on Quantum Computers via Overlap-Guided Compact Ansatze

###### Abstract

ADAPT-VQE is a robust algorithm for hybrid quantum-classical simulations of quantum chemical systems on near-term quantum computers. While its iterative process systematically reaches the ground state energy, practical implementations of ADAPT-VQE are sensitive to local energy minima, leading to over-parameterized ansatze. We introduce the Overlap-ADAPT-VQE to grow wave-functions by maximizing their overlap with any intermediate target wave-function that already captures some electronic correlation. By avoiding building the ansatz in the energy landscape strewn with local minima, the Overlap-ADAPT-VQE produces ultra-compact ansatze suitable for high-accuracy initialization of a new ADAPT procedure. Spectacular advantages over ADAPT-VQE are observed for strongly correlated systems including massive savings in circuit depth. Since this compression strategy can also be initialized with accurate Selected-Configuration Interaction (SCI) classical target wave-functions, it paves the way for chemically accurate simulations of larger systems, and strengthens the promise of decisively surpassing classical quantum chemistry through the power of quantum computing.

## Introduction

The computational cost of approximating the ground state energy of an \(n\)-electron molecular system on classical computing architectures typically grows exponentially in \(n\). Quantum computers allow for the encoding of the exponentially scaling underlying Hilbert space using only \(\mathcal{O}(n)\) qubits, and are therefore likely to outperform classical devices on a range of chemical simulations [1, 2, 3]. The Variational Quantum Eigensolver (VQE) is a hybrid quantum-classical algorithm that is considered a very promising candidate for chemical calculations on Noisy Intermediate Scale Quantum (NISQ) devices [4, 5]. In this approach, a parameterized wave-function is generated and variationally tuned to minimize the expectation value of the molecular electronic Hamiltonian. A variety of different parameterized wave-functions have been proposed, including the Trotterised Unitary Coupled Cluster (tUCC) ansatz [6, 7] which consists of a sequence of exponential, unitary operators acting on a judiciously chosen reference state. While the tUCC approach includes electronic correlation and has, in principle, a rather simple quantum circuit structure, the excessive depth of these quantum circuits makes them ill-suited for applications in the NISQ regime. This issue has led to the proposal that ansatz wave-functions be constructed through the action of a selective _subset_ of possible unitary operators, i.e., only those operators whose inclusion in the ansatz can potentially lead to the largest decrease in the expectation value of the molecular electronic Hamiltonian. In this context, the Adaptive Derivative-Assembled Pseudo-Trotter VQE (ADAPT-VQE) [8] has emerged as the gold standard for generating highly accurate and compact ansatz wave-functions. In ADAPT-VQE, the ansatz is grown iteratively by appending a sequence of unitary operators to the reference Hartree-Fock state. At each iteration, the unitary operator to be applied is chosen according to a simple criterion based on the gradient of the expectation value of the Hamiltonian (see the section on technical background and methods for details).
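Before turning to the technical details, it may help to see what the classical half of this hybrid workflow looks like in practice. The Results section of the paper states that integral computation, second quantization, and the Jordan-Wigner mapping are handled with OpenFermion and PySCF; the snippet below is a minimal illustrative sketch of that pre-processing for a toy H\({}_{2}\) molecule (our own example, not the authors' code):

```python
from openfermion import MolecularData, get_fermion_operator, jordan_wigner
from openfermionpyscf import run_pyscf

# Classical pre-processing for H2 in the minimal STO-3G basis (toy system).
geometry = [("H", (0.0, 0.0, 0.0)), ("H", (0.0, 0.0, 0.74))]
molecule = MolecularData(geometry, basis="sto-3g", multiplicity=1, charge=0)
molecule = run_pyscf(molecule, run_scf=True, run_fci=True)  # integrals + references

# Second-quantized Hamiltonian and its Jordan-Wigner qubit representation.
fermion_hamiltonian = get_fermion_operator(molecule.get_molecular_hamiltonian())
qubit_hamiltonian = jordan_wigner(fermion_hamiltonian)
print(molecule.hf_energy, molecule.fci_energy)
```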
Assuming that the number of spin-orbitals \(N\) being considered is proportional to the number of electrons \(n\) in the system, the pool of potential unitary operators in tUCC-based VQEs scales as \(\mathcal{O}(N^{\ell})\) for \(\ell\geq 4\) [1]. Consequently, conventional VQEs based on the tUCC ansatz require the representation of a product of \(\mathcal{O}(N^{\ell})\) unitary operators on quantum circuitry and the optimization of an \(\mathcal{O}(N^{\ell})\)-dimensional cost-function, both of which are practically impossible using the current generation of NISQ devices. The ADAPT-VQE algorithm attempts to alleviate these problems by avoiding the inclusion of unitary operators in the ansatz wave-function that are not expected to lead to a lowering of the resulting energy. Numerical evidence suggests that ADAPT-VQE is indeed resource-saving and the energy-gradient criterion employed by ADAPT-VQE leads to much more accurate wave-functions than conventional VQE algorithms while preserving moderate circuit depth [8, 9, 10]. In spite of this comparative advantage, such an energy-gradient-guided procedure has a tendency to fall into local minima of the energy landscape. Exiting from such minima comes at the expense of adding and optimizing operators through multiple ADAPT iterations [11] and leads to over-parameterized wave-functions. In practice, this is associated with an unnecessary increase of the quantum circuit depth required for the representation of the ansatz wave-function coupled to an increasingly difficult classical optimization. Therefore, simulating strongly correlated systems on existing NISQ devices is simply too demanding to accomplish with ADAPT-VQE.

Our proposed approach for overcoming the challenges of energy plateaus requires modifying the manner in which the ansatz wave-function is constructed. Indeed, rather than constructing an ansatz wave-function through an energy minimisation procedure and potentially encountering local minima, we grow the ansatz wave-function through a process that maximizes its overlap with a (potentially intermediate) target wave-function that already captures some electronic correlation of the system. We thus use such a target wave-function as a guide to help us build our ansatz in the right direction so as to catch the bulk of electronic correlation. The workflow of this routine is depicted in Figure 2. The resulting overlap-guided ansatz is subsequently used as a high accuracy initialization for an ADAPT-VQE procedure, an algorithm that we refer to as Overlap-ADAPT-VQE. We benchmark and compare the ansatz wave-functions obtained with the Overlap-ADAPT-VQE method to standard ADAPT-VQE on a range of small chemical systems with varying levels of correlation.

## Technical Background and Methods

### Qubit Representation of the Molecular Hamiltonian

The molecular electronic Hamiltonian with one-body and two-body interactions can be expressed in second-quantization notation as

\[H:=\sum_{p,q}h_{pq}a_{p}^{\dagger}a_{q}+\sum_{p,q,r,s}h_{pqrs}a_{p}^{\dagger}a_{q}^{\dagger}a_{r}a_{s}.\tag{1}\]
Here, \(p,q,r,\) and \(s\) are indices that label the spin-orbitals used to discretize the system, \(a_{p}\) and \(a_{p}^{\dagger}\) are the \(p^{\text{th}}\) fermionic annihilation and creation operators that satisfy the anti-commutation relations:

\[\left\{a_{p},a_{q}^{\dagger}\right\}:=a_{p}a_{q}^{\dagger}+a_{q}^{\dagger}a_{p}=\delta_{pq}\quad\text{and}\quad\left\{a_{p},a_{q}\right\}:=a_{p}a_{q}+a_{q}a_{p}=0,\tag{2}\]

with \(\delta_{pq}\) denoting the Kronecker symbol, and \(h_{pq}\) and \(h_{pqrs}\) are one-electron and two-electron integrals that can be computed on classical hardware through the expressions

\[h_{pq}:=\int_{\mathbb{R}^{3}}\Psi_{p}^{*}(\mathbf{x})\left(-\frac{1}{2}\Delta-V_{\text{nuc}}\right)\Psi_{q}(\mathbf{x})\ d\mathbf{x},\tag{3}\]
\[h_{pqrs}:=\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\Psi_{p}^{*}(\mathbf{x})\Psi_{q}^{*}(\mathbf{y})\left(\frac{1}{|\mathbf{x}-\mathbf{y}|}\right)\Psi_{r}(\mathbf{x})\Psi_{s}(\mathbf{y})\ d\mathbf{x}d\mathbf{y},\]

where \(\Psi_{p},\Psi_{q},\Psi_{r},\Psi_{s}\) denote spin-orbitals labeled by the indices \(p,q,r,\) and \(s\) respectively. In order to represent the second-quantized Hamiltonian \(H\) on a quantum computer, we use the Jordan-Wigner transform [12, 13] to map the creation and annihilation operators to tensor products involving unitary matrices. To this end, we denote by \({\left|0\right\rangle}_{p}\) and \({\left|1\right\rangle}_{p}\) states corresponding to an _empty_ and _occupied_ spin-orbital \(p\) respectively. Using this formalism, the reference Hartree-Fock state for a system having \(n\) electrons in \(N\) spin-orbitals can be expressed as \({\left|\Psi_{\text{HF}}\right\rangle}:={\left|1_{0}\ldots 1_{n}0_{n+1}\ldots 0_{N}\right\rangle}\), and the corresponding fermionic creation and annihilation operators are given by

\[a_{p}=\left(\bigotimes_{i=0}^{p-1}Z_{i}\right)\otimes\frac{X_{p}+iY_{p}}{2}=:\left(\bigotimes_{i=0}^{p-1}Z_{i}\right)\otimes Q_{p},\tag{4}\]
\[a_{p}^{\dagger}=\left(\bigotimes_{i=0}^{p-1}Z_{i}\right)\otimes\frac{X_{p}-iY_{p}}{2}=:\left(\bigotimes_{i=0}^{p-1}Z_{i}\right)\otimes Q_{p}^{\dagger},\]

where \(X_{p},Y_{p},Z_{p}\) are single qubit Pauli gates applied to qubit \(p\) [14]. Note that in Equation (4), we have introduced the so-called qubit excitation and de-excitation operators \(Q_{p}\) and \(Q_{p}^{\dagger}\) respectively that switch the occupancy of the spin-orbital. These operators will be the subject of further discussion in the sequel. Let us also remark here that the Jordan-Wigner-transformed excitation and de-excitation operators (4) respect the anti-commutation relations (2). This is simply a consequence of including the tensor product of \(Z\)-Pauli gates in Equation (4) [12].

### The Variational Quantum Eigensolver

Equipped with the single-qubit Pauli gate representation of the molecular Hamiltonian \(H\), we are now interested in approximating its ground-state eigenvalue. The Variational Quantum Eigensolver (VQE) is a hybrid quantum-classical algorithm that couples a classical optimization loop to a subroutine that computes, on a quantum computer, the expectation value of the Hamiltonian with respect to a proposed ansatz wave-function. This quantum subroutine involves two fundamental steps:

1. The preparation of a trial quantum state (the ansatz wave-function) \(\left|\Psi(\vec{\theta})\right\rangle\).
A variety of different functional forms for the ansatz wave-function have been proposed [7, 15, 16, 17], including the aforementioned tUCC ansatz which consists of a sequence of parameterized, exponential fermionic excitation and de-excitation operators acting on a reference state (see below for explicit expressions of these operators).

2. The measurement of the expectation value \(\left\langle\Psi(\vec{\theta})\right|H\left|\Psi(\vec{\theta})\right\rangle\).

The output of the quantum subroutine is fed into a classical optimization algorithm which calculates the optimal set of parameters \(\vec{\theta}_{\text{opt}}\) that minimizes the expectation value of the Hamiltonian \(H\). The variational principle ensures that the resulting optimized energy is always an upper bound for the exact ground-state energy \(E_{0}\) of \(H\), i.e.,

\[\left\langle\Psi(\vec{\theta}_{\text{opt}})\right|H\left|\Psi(\vec{\theta}_{\text{opt}})\right\rangle\geq E_{0}.\tag{5}\]

The fundamental challenge in implementing the VQE methodology on NISQ devices is thus to construct an ansatz wave-function that can capture the most important contributions to the electronic correlation energy and, at the same time, is capable of being represented on rather shallow quantum circuits. A necessary condition to achieve the latter is that the chosen ansatz wave-function be parameterized with a relatively small number of optimization parameters. Thus, the major computational shortcoming of the popular tUCCSD method, which otherwise possesses an attractive functional form [7], is that its actual implementation on quantum computers requires extremely deep circuits which generate far too much noise on the current generation of NISQ devices [18]. Indeed, implementing the tUCCSD algorithm on quantum architectures through the Jordan-Wigner mapping (4) requires \(O(N^{3}n^{2})\) quantum gates [7] (recall that \(N\) is the number of spin-orbitals being considered and \(n\) is the number of electrons in the system, so that if \(N\) is proportional to \(n\), then the number of quantum gates required will be of the order of \(O(N^{5})\)). This problem is further exacerbated by the ubiquitous usage of CNOT gates in the construction of quantum circuits for fermionic excitation and de-excitation operators. tUCCSD has recently been extended to triple excitations (tUCCSDT) [19] and coupled to both spin and orbital symmetries to reduce the operator count, but the latter remains too high for implementation on real-life QPUs despite a significantly increased accuracy over tUCCSD.

### The ADAPT-VQE Ansatz

The adaptive derivative-assembled pseudo-Trotter variational quantum eigensolver (ADAPT-VQE) [8] was designed to overcome the computational shortcomings of the traditional tUCCSD method by proposing an ansatz function that is adaptively grown through an iterative process. ADAPT-VQE is based on the fact [20] that the full-CI quantum state can be represented by the action of a potentially infinitely long product of only one-body and two-body operators on the reference Hartree-Fock determinant, i.e.,

\[\left|\Psi_{\text{FCI}}\right\rangle=\prod_{k=1}^{\infty}\left[\prod_{pq}\hat{A}_{p}^{q}(\theta_{k}^{pq})\prod_{pqrs}\hat{A}_{pq}^{rs}(\theta_{k}^{pqrs})\right]\left|\Psi_{\text{HF}}\right\rangle.\tag{6}\]
Here, \(\hat{A}_{p}^{q}(\theta_{k}^{pq}):=e^{\theta_{k}^{pq}\hat{\tau}_{p}^{q}}\) and \(\hat{A}_{pq}^{rs}(\theta_{k}^{pqrs}):=e^{\theta_{k}^{pqrs}\hat{\tau}_{pq}^{rs}}\), where \(\hat{\tau}_{p}^{q}\) and \(\hat{\tau}_{pq}^{rs}\) denote the anti-symmetric operators \(\hat{a}_{p}^{q}-\hat{a}_{q}^{p}\) and \(\hat{a}_{pq}^{rs}-\hat{a}_{rs}^{pq}\), and \(\theta_{k}^{pq}\) (resp. \(\theta_{k}^{pqrs}\)) is the expansion coefficient of the \(k^{\text{th}}\) repetition of the operator \(\hat{A}_{p}^{q}\) (resp. \(\hat{A}_{pq}^{rs}\)). The general workflow of the ADAPT-VQE algorithm is as follows:

1. On classical hardware, compute one-electron and two-electron integrals, and map the molecular Hamiltonian into a qubit representation. **On quantum hardware**, boot the qubits to a reference state \(\left|\Psi^{0}\right\rangle=\left|\Psi_{\text{HF}}\right\rangle\).
2. Define a pool of parameterized unitary operators that will be used to construct the ansatz.
3. **On quantum hardware,** at the \(m^{\text{th}}\) iteration, identify the parameterized unitary operator \(\hat{\mathcal{U}}_{m}(\theta_{m})\) whose action on the current ansatz \(\ket{\Psi^{m-1}}\) will produce a new wave-function with the largest drop in energy. This identification is done by computing suitable gradients at \(\theta_{m}=0\), the gradients being expressed in terms of commutators involving the molecular Hamiltonian acting on the current ansatz wave-function:
\[\frac{\partial}{\partial\theta_{m}}\bra{\Psi^{m-1}}\hat{\mathcal{U}}_{m}^{\dagger}(\theta_{m})H\hat{\mathcal{U}}_{m}(\theta_{m})\ket{\Psi^{m-1}}\Big|_{\theta_{m}=0}=\bra{\Psi^{m-1}}\big[\hat{H},\hat{\mathcal{U}}_{m}(\theta_{m})\big]\ket{\Psi^{m-1}}\Big|_{\theta_{m}=0}\tag{7}\]
4. Exit the iterative process if the gradient norm is smaller than some threshold \(\varepsilon\). Otherwise, append the selected operator to the left of the current ansatz wave-function \(\ket{\Psi^{m-1}}\), i.e., define \(\ket{\widetilde{\Psi}^{m}}:=\hat{\mathcal{U}}_{m}(\theta_{m})\ket{\Psi^{m-1}}=\hat{\mathcal{U}}_{m}(\theta_{m})\hat{\mathcal{U}}_{m-1}(\theta_{m-1}^{\prime})\dots\hat{\mathcal{U}}_{1}(\theta_{1}^{\prime})\ket{\Psi^{0}}\).
5. **Hybrid Quantum-Classical VQE:** Optimize all parameters \(\theta_{m},\theta_{m-1},\dots,\theta_{1}\) in the new ansatz wave-function so as to minimize the expectation value of the molecular Hamiltonian, i.e., solve the optimization problem
\[\bar{\theta}^{\text{opt}}:=(\theta_{1}^{\prime},\dots,\theta_{m-1}^{\prime},\theta_{m}^{\prime}):=\operatorname*{argmin}_{\theta_{1},\dots,\theta_{m-1},\theta_{m}}\langle\hat{\mathcal{U}}_{m}(\theta_{m})\hat{\mathcal{U}}_{m-1}(\theta_{m-1})\dots\hat{\mathcal{U}}_{1}(\theta_{1})\Psi^{0}|H\hat{\mathcal{U}}_{m}(\theta_{m})\hat{\mathcal{U}}_{m-1}(\theta_{m-1})\dots\hat{\mathcal{U}}_{1}(\theta_{1})\Psi^{0}\rangle\]
and define the new ansatz wave-function \(\ket{\Psi^{m}}\) using the newly optimized parameters \(\theta_{1}^{\prime},\dots,\theta_{m}^{\prime}\), i.e., define \(\ket{\Psi^{m}}:=\hat{\mathcal{U}}_{m}(\theta_{m}^{\prime})\hat{\mathcal{U}}_{m-1}(\theta_{m-1}^{\prime})\dots\hat{\mathcal{U}}_{1}(\theta_{1}^{\prime})\ket{\Psi^{0}}\). Let us emphasize that although we also denote the newly optimized parameters at the current \(m^{\text{th}}\) iteration by \(\theta_{1}^{\prime},\dots,\theta_{m}^{\prime}\), these optimized values are not necessarily the same as those used to define \(\ket{\Psi^{m-1}}\) and referenced in Step 4 above.
6. Return to Step 3 with the updated ansatz \(\ket{\Psi^{m}}\).
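The following is a toy, statevector-level rendering of this workflow in Python/NumPy (our own sketch, not the authors' implementation): the Hamiltonian is a dense Hermitian matrix, the pool consists of anti-Hermitian generators \(A\) so that \(e^{\theta A}\) is unitary, Step 3 screens candidates via the commutator expectation \(\langle\Psi^{m-1}|[\hat{H},A]|\Psi^{m-1}\rangle\), and Step 5 re-optimizes all angles with BFGS, matching the optimizer the paper reports using in its numerical section:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def adapt_vqe(H, pool, psi0, max_ops=10, grad_tol=1e-3):
    """Toy dense-matrix ADAPT-VQE. H: Hermitian matrix; pool: anti-Hermitian
    generators A (so exp(theta*A) is unitary); psi0: reference statevector."""
    ansatz, thetas = [], []

    def prepare(ts):
        psi = psi0
        for A, t in zip(ansatz, ts):
            psi = expm(t * A) @ psi
        return psi

    def energy(ts):
        psi = prepare(ts)
        return float(np.real(np.vdot(psi, H @ psi)))

    for _ in range(max_ops):
        psi = prepare(thetas)
        # Step 3: the gradient of candidate A at theta = 0 is <psi|[H, A]|psi>.
        grads = [abs(np.vdot(psi, (H @ A - A @ H) @ psi)) for A in pool]
        if max(grads) < grad_tol:            # Step 4: convergence test
            break
        ansatz.append(pool[int(np.argmax(grads))])
        thetas.append(0.0)
        # Step 5: re-optimize every angle in the enlarged ansatz (the VQE step).
        thetas = list(minimize(energy, thetas, method="BFGS").x)
    return energy(thetas), thetas
```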
There are essentially three types of operator pools that are used to construct the ADAPT-VQE ansatz.

* Fermionic-ADAPT-VQE [8] uses a pool of spin-complemented pairs of single and double fermionic excitation operators. The quantum circuits performing these unitary operations are of the staircase shape (see Figure 11).
* Qubit-ADAPT-VQE [10] divides the fermionic-ADAPT operators after the Jordan-Wigner mapping and takes the individual Pauli strings as operators of the pool. The quantum circuit for an operator is a single layer of fermionic excitation "CNOT-staircase" circuits, similar to the circuit displayed in Figure 11.
* Qubit-Excitation-Based-ADAPT-VQE (QEB-ADAPT-VQE) [9] uses a pool of qubit excitation operators. Exponential single-qubit and double-qubit excitation evolutions can be expressed using the qubit creation and annihilation operators \(Q_{p}\) and \(Q_{p}^{\dagger}\) defined through Equation (4) as
\[U_{pq}^{\text{(sq)}}(\theta)=\exp\left(\theta(Q_{p}^{\dagger}Q_{q}-Q_{q}^{\dagger}Q_{p})\right)\tag{9}\]
\[U_{pqrs}^{\text{(dq)}}(\theta)=\exp\left(\theta(Q_{p}^{\dagger}Q_{q}^{\dagger}Q_{r}Q_{s}-Q_{r}^{\dagger}Q_{s}^{\dagger}Q_{p}Q_{q})\right),\]
which, after the Jordan-Wigner encoding, yields
\[U_{pq}^{\text{(sq)}}(\theta)=\exp\left(-i\frac{\theta}{2}\big{(}X_{q}Y_{p}-Y_{q}X_{p}\big{)}\right)\]
\[U_{pqrs}^{\text{(dq)}}(\theta)=\exp\left(-i\frac{\theta}{2}\big{(}X_{r}Y_{s}X_{p}X_{q}+Y_{r}X_{s}X_{p}X_{q}+Y_{r}Y_{s}Y_{p}X_{q}+Y_{r}Y_{s}X_{p}Y_{q}-X_{r}X_{s}Y_{p}X_{q}-X_{r}X_{s}X_{p}Y_{q}-Y_{r}X_{s}Y_{p}Y_{q}-X_{r}Y_{s}Y_{p}Y_{q}\big{)}\right),\]
with \(p,q,r,\) and \(s\) denoting, as usual, indices for the spin-orbitals, and we have written (sq) and (dq) as abbreviations for single-qubit and double-qubit excitation evolutions respectively. The quantum circuits corresponding to the single-qubit and double-qubit excitation operators [21] are then given in Figure 1.

Extensive comparisons between these pools of operators have been carried out by Yordanov et al. [9] and numerical evidence suggests that QEB-ADAPT-VQE generates the most computationally tractable ansatz wave-functions. This is primarily due to the fact that qubit excitation circuits can be constructed using much fewer quantum gates than fermionic excitation circuits [21], in combination with the observation that qubit excitation evolutions approximate molecular electronic wave-functions with almost the same level of accuracy as fermionic excitation evolutions. For the purpose of this article therefore, we will restrict our attention to operator pools involving qubit excitation evolutions and work in the framework of QEB-ADAPT-VQE.

### The Overlap-Guided Adaptive Algorithm (Overlap-ADAPT)

The numerical evidence presented in the articles [8, 9, 11] demonstrates that the ADAPT-VQE algorithm is capable of approximating the ground state Full-CI energy to a very high accuracy. Unfortunately, achieving a suitably accurate approximation to the sought-after energy may require a large number of ADAPT iterations, which results both in deep quantum circuits that cannot be implemented on the current generation of NISQ devices as well as an increasingly computationally expensive optimization procedure. This problem is particularly apparent in strongly correlated systems for which the ADAPT algorithm frequently encounters energy plateaus\({}^{2}\) _prior_ to achieving the classical chemical accuracy threshold of \(10^{-3}\) Hartree.
Since quantum chemists are primarily interested in numerical results in the regime \(10^{-3}\) to \(10^{-4}\) Hartree, i.e., slightly below the chemical accuracy threshold, it is natural to ask if the ADAPT-VQE procedure could be modified so as to avoid these initial energy plateau slowdowns and achieve the required accuracy using an ansatz compact enough to be implementable on current NISQ devices.

Footnote 2: During these plateaus, a series of new operators are added to the ansatz without meaningfully reducing the energy.

To make these ideas more precise, let us first introduce, for any natural number \(p\), the set of all wave-functions that can be represented by the product of exactly \(p\) exponential, one-body and two-body qubit excitation evolution operators acting on the Hartree-Fock reference state:

\[W_{p}:=\left\{\left(\prod_{k=1}^{p}\exp\left(\theta_{k}Q_{p_{k}}Q_{p_{k}}^{\dagger}\right)\right)\left|\Psi_{\text{HF}}\right\rangle:\;\theta_{k}\in\mathbb{R},\;Q_{p_{k}},Q_{p_{k}}^{\dagger}\text{ defined as in Equation (4)}\right\}.\]

Given now an arbitrary electronic wave-function \(\ket{\Psi_{\text{ref}}}\), we can define the best approximation of \(\ket{\Psi_{\text{ref}}}\) in the set \(W_{p}\) as

\[\ket{\Psi_{p}^{*}}:=\operatorname*{argmin}_{\ket{\Psi}\in W_{p}}\Big\|\ket{\Psi}-\ket{\Psi_{\text{ref}}}\Big\|,\tag{15}\]

where \(\|\cdot\|\) denotes a suitable norm such as the usual \(L^{2}\) or \(H^{1}\) norms on the space of all electronic wave-functions. The \(L^{2}\)-norm and the \(H^{1}\)-norm can both be computed on either classical computers or on quantum devices, depending on whether the underlying wave-functions are represented classically or on quantum circuitry. The computation of the \(L^{2}\)-norm, however, is more direct and we will therefore adopt this choice of norm for the subsequent numerical simulations considered in this study. Returning now to Equation (15), we see that \(\ket{\Psi_{p}^{*}}\) is the best approximation of an arbitrary target wave-function \(\ket{\Psi_{\text{ref}}}\) using a product of exactly \(p\) exponential qubit excitation evolution operators acting on the Hartree-Fock reference state.
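As a toy numerical illustration of the best-approximation problem (15), consider \(p=1\) with a single _fixed_ generator (our own sketch; it deliberately sidesteps the combinatorial search over operator choices that makes the exact problem intractable). Using the single-qubit-pair excitation generator \(Q_{p}^{\dagger}Q_{q}-Q_{q}^{\dagger}Q_{p}\) of Equation (9) on two qubits, the \(L^{2}\) distance to a target state is minimized over the single angle \(\theta\):

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

# Single-qubit excitation generator from Equation (9) on a 2-qubit register.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Q = (X + 1j * Y) / 2                                   # Q = |0><1|
G = np.kron(Q.conj().T, Q) - np.kron(Q, Q.conj().T)    # anti-Hermitian generator

psi_hf = np.array([0, 1, 0, 0], dtype=complex)         # |01>: one occupied orbital
# Hypothetical target with partial weight transferred to |10>.
psi_ref = np.cos(0.4) * psi_hf + np.sin(0.4) * np.array([0, 0, 1, 0], dtype=complex)

# Equation (15) with p = 1: minimize the L2 distance over the single angle theta.
dist = lambda t: np.linalg.norm(expm(t * G) @ psi_hf - psi_ref)
theta_opt = minimize_scalar(dist, bounds=(-np.pi, np.pi), method="bounded").x
print(theta_opt, dist(theta_opt))                      # theta_opt is close to 0.4
```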
The question we are now interested in answering is the following: If we take the _full-CI wave-function_ \(\ket{\Psi_{\text{FCI}}}\) as the target, does the corresponding best approximation \(\ket{\Psi_{p}^{\text{FCI}}}\) defined according to (15) provide a chemically accurate wave-function for small choices of \(p\)? More precisely, we wish to explore if for small choices of maximal operator count \(p\) it holds that

\[\bra{\Psi_{p}^{\text{FCI}}}H\ket{\Psi_{p}^{\text{FCI}}}-\bra{\Psi_{\text{FCI}}}H\ket{\Psi_{\text{FCI}}}=\bra{\Psi_{p}^{\text{FCI}}}H\ket{\Psi_{p}^{\text{FCI}}}-E_{0}<10^{-3}\;\text{Ha}.\tag{16}\]

The answer to this question will be a strong indication as to whether there exists an ansatz wave-function that is simultaneously more compact than the ADAPT-VQE ansatz and which can also capture the bulk of the electronic correlation in the system. Let us emphasise that we are specifically interested in understanding whether we can obtain a more compact ansatz wave-function than that produced by ADAPT-VQE at _chemical accuracy_ and not at the level of full-CI accuracy. Unfortunately, answering this question by solving the optimization problem (15) for an arbitrary target wave-function exactly is not computationally feasible since the size of the set \(W_{p}\) grows exponentially in \(p\). Nevertheless, an adaptive, iterative procedure that generates an _approximate_ solution to the optimization problem (15) can be defined as follows (see also Figure 2). Given a target wave-function \(\ket{\Psi_{\text{ref}}}\) and a maximal operator count \(p\):

1. Set the initialisation to the Hartree-Fock reference state, i.e., set \(\ket{\Psi^{0}}=\ket{\Psi_{\text{HF}}}\).
2. At the \(m^{\text{th}}\) iteration, identify the parametrised exponential qubit excitation evolution operator \(\widehat{A}_{m}(\theta_{m})\) whose action on the current ansatz \(\ket{\Psi^{m-1}}\) will produce a new wave-function with the largest overlap with respect to the target wave-function. This identification is done by computing the following gradient involving the current ansatz wave-function at \(\theta_{m}=0\):
\[\frac{\partial}{\partial\theta_{m}}\bra{\Psi_{\text{ref}}}\widehat{A}_{m}(\theta_{m})\ket{\Psi^{m-1}}\Big|_{\theta_{m}=0}.\tag{17}\]
3. Append the selected operator to the left of the current ansatz wave-function \(\ket{\Psi^{m-1}}\), i.e., define \(\ket{\widetilde{\Psi}^{m}}:=\widehat{A}_{m}(\theta_{m})\ket{\Psi^{m-1}}=\widehat{A}_{m}(\theta_{m})\widehat{A}_{m-1}(\theta_{m-1}^{\prime})\dots\widehat{A}_{1}(\theta_{1}^{\prime})\ket{\Psi^{0}}\).
4. Optimize all parameters \(\theta_{m},\theta_{m-1},\dots,\theta_{1}\) in the new ansatz wave-function \(\ket{\widetilde{\Psi}^{m}}\) so as to maximize its overlap with the target wave-function, i.e., solve the optimization problem
\[\bar{\theta}^{\text{opt}}:=(\theta_{1}^{\prime},\dots,\theta_{m-1}^{\prime},\theta_{m}^{\prime}):=\operatorname*{argmax}_{\theta_{1},\dots,\theta_{m-1},\theta_{m}}\bra{\Psi_{\text{ref}}}\widehat{A}_{m}(\theta_{m})\widehat{A}_{m-1}(\theta_{m-1})\dots\widehat{A}_{1}(\theta_{1})\ket{\Psi^{0}},\tag{18}\]
and define the new ansatz wave-function \(\ket{\Psi^{m}}\) using the newly optimized parameters \(\theta_{1}^{\prime},\dots,\theta_{m}^{\prime}\), i.e., define \(\ket{\Psi^{m}}:=\widehat{A}_{m}(\theta_{m}^{\prime})\widehat{A}_{m-1}(\theta_{m-1}^{\prime})\dots\widehat{A}_{1}(\theta_{1}^{\prime})\ket{\Psi^{0}}\).
Let us emphasize that although we also denote the newly optimized parameters at the current \(m^{\text{th}}\) iteration by \(\theta_{1}^{\prime},\dots,\theta_{m}^{\prime}\), these optimized values are not necessarily the same as those used to define \(\ket{\Psi^{m-1}}\) and referenced in Step 3 above.

5. If the total number of operators in the updated ansatz is equal to \(p\), exit the iterative process. Otherwise go to Step 2 with the updated ansatz wave-function.

We refer to this adaptive procedure as the Overlap-ADAPT-VQE. Let us emphasise here that rather than fixing a maximal operator count, we may employ some other convergence criteria such as the magnitude of the overlap or the magnitude of the gradient vectors as in the original ADAPT-VQE. Moreover, depending on whether the target wave-function is in a quantum or a classical representation, the gradient screening and the overlap measurements can be performed using either a quantum or a classical device. In particular, if the targeted wave-function is classically computed, then no additional quantum resources or measurements are required to compute the overlaps.

We are now interested in applying the Overlap-ADAPT procedure to the reference full-CI wave-functions of some simple, yet strongly correlated molecular systems in an effort to understand the compactness of the wave-function generated by QEB-ADAPT-VQE in the chemical accuracy regime. To do so, we will compute the energy of the Overlap-ADAPT approximation of the target full-CI wave-functions of a stretched BeH\({}_{2}\) molecule and a stretched linear H\({}_{6}\) chain in a minimal basis set as a function of the number of optimisation parameters, and plot this energy in comparison to the energy obtained using QEB-ADAPT-VQE. The resulting energy plots, which are displayed in Figure 3, clearly show that the overlap-guided adaptive procedure is able to avoid the initial energy plateaus afflicting the ADAPT procedure that prevent the attainment of chemical accuracy in a small number of iterative steps. These results strongly suggest the potential for creating a more condensed ansatz wave-function than that generated by ADAPT-VQE which can sidestep the issue of early energy plateaus.

Before proceeding, let us point out that a key metric for evaluating the efficiency of the Overlap-ADAPT algorithm is to compute the overlap between the ansatz wave-function and the full-CI wave-function over the course of several algorithm iterations. Consequently, for the stretched BeH\({}_{2}\) and stretched linear H\({}_{6}\) chain considered above, we plot the overlap convergence with respect to the full-CI wave-function in Figure 4.

Figure 2: Workflow for the Overlap-Guided Adaptive Algorithm (Overlap-ADAPT).

It is readily seen that the Overlap-ADAPT procedure targeted at the full-CI wave-function far outperforms the original ADAPT-VQE, achieving a notably higher overlap with the full-CI wave-function for both a stretched BeH\({}_{2}\) molecule and a stretched linear H\({}_{6}\) chain. In particular, for the H\({}_{6}\) system, while ADAPT-VQE reaches a plateau and stalls its progress, the Overlap-ADAPT procedure smoothly advances without interruption. Of course, the Overlap-ADAPT-VQE targeted at a full-CI wave-function does not define a practical VQE since the full-CI ground state energy is precisely the quantity we wish to approximate.
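In the same toy statevector setting used for the ADAPT-VQE sketch earlier, the Overlap-ADAPT procedure differs only in its screening and optimization objectives, which now follow Equations (17) and (18) (again our own illustration, with dense matrices standing in for quantum circuits):

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def overlap_adapt(pool, psi0, psi_ref, max_ops):
    """Grow an ansatz by greedily maximizing |<ref|ansatz>| (Overlap-ADAPT)."""
    ansatz, thetas = [], []

    def prepare(ts):
        psi = psi0
        for A, t in zip(ansatz, ts):
            psi = expm(t * A) @ psi        # apply exp(theta*A), A anti-Hermitian
        return psi

    def neg_overlap(ts):
        return -abs(np.vdot(psi_ref, prepare(ts)))

    for _ in range(max_ops):
        psi = prepare(thetas)
        # Magnitude of the overlap gradient at theta = 0, cf. Equation (17).
        grads = [abs(np.vdot(psi_ref, A @ psi)) for A in pool]
        ansatz.append(pool[int(np.argmax(grads))])
        thetas.append(0.0)
        # Re-optimize all angles to maximize the overlap, cf. Equation (18).
        thetas = list(minimize(neg_overlap, thetas, method="BFGS").x)
    return prepare(thetas), thetas
```

The returned state can then be used to seed a standard QEB-ADAPT-VQE run, which is precisely the Overlap-ADAPT-VQE pipeline sketched in Figure 2.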
A practical VQE based on overlap optimization can, however, be developed by replacing the targeted full-CI wave-function with a tractable high accuracy approximation thereof and using the resulting overlap-guided ansatz wave-function as a high accuracy initialisation for a new ADAPT-VQE procedure. The targeted "computable" wave-function in this situation can be completely general, i.e., it can be the output of any existing numerical algorithm, whether classical or quantum.

Figure 3: Comparison of the Full-CI Overlap Guided ADAPT-VQE and ADAPT-VQE for the ground state energy of a stretched BeH\({}_{2}\) molecule and a stretched linear H\({}_{6}\) chain with an interatomic distance of 3 Angstrom for both. The plots represent the energy convergence as a function of the number of parameters in the ansatz. The pink area indicates chemical accuracy at \(10^{-3}\) Hartree.

Figure 4: Comparison of the Full-CI Overlap Guided ADAPT-VQE and ADAPT-VQE for maximising the overlap with the full-CI wave-function of a stretched BeH\({}_{2}\) molecule and a stretched linear H\({}_{6}\) chain with an interatomic distance of 3 Angstrom for both. The plots represent the discrepancy between the ansatz and the full-CI wave-function, calculated as one minus the overlap, as a function of the number of parameters in the ansatz.

The goal of the subsequent sections is to showcase the efficacy of this Overlap-ADAPT algorithm at obtaining chemically accurate results using a minimal number of optimisation parameters. Such findings are important for practical uses of quantum computing for quantum chemistry since, as we have already stated, real-life chemists are interested in reaching convergence in energies corresponding to the so-called chemical accuracy, i.e., \(10^{-3}\) to \(10^{-4}\) Hartree. Our results can therefore introduce a practical route for compactifying the ADAPT-VQE operator counts using the Overlap-ADAPT-VQE within this accuracy regime.

## Results

### Setting of Numerical Simulations

The classical numerical simulations reported in this section have been carried out with an in-house code, using the Openfermion-PySCF module [22] for integral computations and OpenFermion [23] for second quantization and the Jordan-Wigner mapping. All calculations are performed within the minimal STO-3G basis set [24] without considering frozen orbitals unless otherwise specified. Note that the number of qubits that a simulation requires is equal to the number of spin-orbitals of a system, which therefore limits the quality of the single-particle basis and the size of the system that can be simulated. All optimization routines use the BFGS algorithm implemented in the SciPy Python module [25]. We use a pool of non-spin-complemented restricted single- and double-qubit excitation evolutions. By 'restricted', we mean that we consider only excitations from occupied orbitals to virtual orbitals with respect to the Hartree-Fock determinant. Using fewer operators in the pool makes the gradient screening process faster and easier to handle from a computational point of view [9]. To ensure a fair comparison, this same operator pool is used for both the overlap-guided ansatz and ADAPT-VQE. To anticipate applications of such adaptive algorithms on noisy quantum machines, there are essentially two constraints to respect:

* The circuit depth should be kept as shallow as possible so as to reduce the effect of decoherence in NISQ devices.
In the current context, the circuit depth corresponds to the number of gates used to construct our wave-function ansatz.

* The number of measurements an NISQ device can undertake is very limited. On the other hand, the ADAPT-VQE algorithm requires a large number of measurements both in the form of gradient evaluations at the beginning of each iteration and during the VQE optimization step of the ansatz wave-function. The optimization step in particular often requires an excessive number of measurements since the cost function is both high-dimensional and noisy. Consequently, the optimization of the ansatz wave-function is simply intractable with a limited number of evaluations, thus preventing practical application of ADAPT-VQE on current quantum devices.

In order to implement such adaptive algorithms on the current generation of NISQ devices, therefore, we must minimise both the circuit depth and the number of evaluations. Indeed, as the depth of a circuit increases, the noise level also increases, which results in a greater number of samples being required for accurate measurement of the Hamiltonian expectation values. In ADAPT-VQE, each operator added to the ansatz corresponds to an additional layer of quantum gates in the circuit and an additional parameter in the ansatz. Consequently, to address both the circuit depth and the number of evaluations constraints, we will evaluate the energy convergence as a function of the number of operators present in the ansatz.

### Application of Overlap-ADAPT-VQE for Compactification of ADAPT-VQE Ansatze

As a first test of its effectiveness, we apply the overlap-guided adaptive algorithm to a target wave-function provided by an existing QEB-ADAPT-VQE procedure and then use the result as a high-accuracy initialisation for a new QEB-ADAPT-VQE procedure. Essentially, this first set of numerical experiments is meant to model the situation where we have a strong constraint on the circuit depth (represented by the number of optimisation parameters in the ansatz wave-function), and we wish to see if it is possible to use the Overlap-ADAPT-VQE procedure to compactify the ADAPT-VQE ansatz, thereby obtaining a higher accuracy wave-function that respects the constraint on the circuit depth. We compute the ground state energy of the benchmark Beryllium Hydride (BeH\({}_{2}\)) molecule considered in the original ADAPT-VQE articles [8]. We consider the BeH\({}_{2}\) molecule both at its equilibrium geometry (bond length of 1.3264 Angstrom) as well as at a stretched geometry (bond length of 3.0 Angstrom), which is meant to model a more strongly correlated system. Our results are depicted in Figure 5. The numerical results indicate that the Overlap-ADAPT-VQE can indeed compactify the QEB-ADAPT-VQE ansatz wave-function and that using the output as an initialization for a new QEB-ADAPT-VQE yields a much more accurate wave-function. Under the constraint of a maximal operator count of 50, the overlap-guided procedure improves the final accuracy of the computed BeH\({}_{2}\) ground state energy at equilibrium and stretched geometries by a factor of 3 and 10 respectively. Note that the improvement in accuracy is much higher in the case of the stretched BeH\({}_{2}\) molecule which exhibits strong correlation, and this suggests that the comparative advantage of the overlap-guided adaptive algorithm over a pure ADAPT-VQE procedure will be more conspicuous for molecules with strong correlation, i.e., systems for which the ADAPT-VQE algorithm struggles to compute the ground state energy.
Thus, in the case of the BeH\({}_{2}\) molecule for instance, we are able to achieve chemical accuracy using only a 34-operator ansatz wave-function whereas the QEB-ADAPT-VQE algorithm requires more than 50. Numerical simulations for stretched BeH\({}_{2}\) using a lower maximal operator count of 40 and 45 are displayed in Figure 6 and show similar improvements in the final accuracy of the ansatz wave-function, although the advantage decreases as the maximal operator count becomes smaller. A further test of the Overlap-ADAPT-VQE applied to a target QEB-ADAPT-VQE wave-function is carried out for the diatomic Nitrogen (N\({}_{2}\)) molecule at equilibrium and stretched geometries. Although the minimal basis set for N\({}_{2}\) is quite large, a tractable computation can be carried out using an active space approach where the eight core electrons of the N\({}_{2}\) molecule are frozen and the ground state energy of the system is computed using the resulting frozen-core effective Hamiltonian, an approach commonly referred to as CAS(6,6).

Figure 5: Comparison of the Overlap-ADAPT-VQE and ADAPT-VQE for the ground state energy of a BeH\({}_{2}\) molecule at equilibrium and stretched geometries. The plot represents the energy convergence as a function of the number of parameters in the ansatz. The right-pointing triangles denote the start of an ADAPT-VQE procedure. The left-pointing triangle denotes the end of an ADAPT-VQE procedure at which point the resulting wave-function is taken as a target for the Overlap-ADAPT-VQE method. The green dotted line corresponds to an FCI-Overlap-ADAPT-VQE procedure which is plotted simply as a reference. The pink area indicates chemical accuracy at \(10^{-3}\) Hartree.

Figure 6: Comparison of the Overlap-ADAPT-VQE and ADAPT-VQE for the ground state energy of a BeH\({}_{2}\) molecule with a stretched geometry for a maximal operator count of 40 and 45. The plot represents the energy convergence as a function of the number of parameters in the ansatz. The right-pointing triangles denote the start of an ADAPT-VQE procedure. The left-pointing triangle denotes the end of an ADAPT-VQE procedure at which point the resulting wave-function is taken as a target for the Overlap-ADAPT-VQE method. The green dotted line corresponds to an FCI-Overlap-ADAPT-VQE procedure which is plotted simply as a reference. The pink area indicates chemical accuracy at \(10^{-3}\) Hartree.

As shown in Figure 7, we see that the Overlap-ADAPT procedure does not further compactify the QEB-ADAPT-VQE wave-function at equilibrium, the final accuracy of the Overlap-QEB-ADAPT-VQE being only slightly higher than that of the classical QEB-ADAPT-VQE procedure. Nevertheless, applying the Overlap-ADAPT-VQE procedure _twice_, i.e., taking a QEB-ADAPT-VQE wave-function as the first target, performing an Overlap-ADAPT-VQE procedure, and then taking the resulting wave-function as the target for an additional Overlap-ADAPT-VQE procedure, yields a huge gain in accuracy for the stretched geometry. Indeed, the Overlap-QEB-ADAPT-VQE energy is nearly an order of magnitude more accurate than the classical QEB-ADAPT-VQE energy. Let us remark that, as a rule of thumb, for all these simulations the Overlap-ADAPT algorithm is used to construct an approximate wave-function using a number of operators equal to about 40%-50% of the maximal operator count.
If the maximal operator count is more flexible, then as a general rule we observe that the ADAPT-VQE ansatz taken immediately after the ADAPT process has exited an energy plateau serves as an effective choice of target wave-function for an overlap-guided adaptive procedure, i.e., the Overlap-ADAPT-VQE can produce a more compact wave-function with comparable energy to that of the target ADAPT wave-function. On the other hand, taking an ADAPT-VQE ansatz wave-function from the middle of an energy plateau as the overlap-guided target seems to be a less effective strategy.

### Application of Overlap-ADAPT-VQE to Classically Computed Wave-Functions

The stretched linear H\({}_{6}\) chain is a molecular system that exhibits a high degree of electronic correlation. The complex electronic structure creates a rough energy landscape with many local minima, making the finding of the global energy minimum difficult. This system has already been extensively studied [9] and it was shown that achieving chemical accuracy with the ADAPT-VQE method required constructing an ansatz wave-function with more than 150 operators from a pool of either generalized fermionic or generalized qubit excitations. Clearly, resources of this kind will never be accessible on NISQ devices, and it is therefore necessary to develop adaptive methods for simulating systems of this type using a much smaller operator count. Unfortunately, the ADAPT-VQE ansatz wave-function, presumably not constructed with a satisfactory choice of qubit excitation evolution operators prior to an unreachable number of iterations, cannot be used as the target of the overlap-guided adaptive algorithm as in the previous subsection. Instead, we propose the use of an intermediate, classically computed, multi-configuration wave-function as the overlap-guided target. This approach has the consequent advantage of not costing additional quantum resources. Particularly well-suited choices which fit in the framework of adaptive methods are provided by the so-called Selected-CI (SCI) methods.

Figure 7: Comparison of the Overlap-ADAPT-VQE and ADAPT-VQE for the ground state energy of an N\({}_{2}\) molecule at equilibrium and stretched geometries. The plots represent the energy convergence as a function of the number of parameters in the ansatz. The right-pointing triangles denote the start of an ADAPT-VQE procedure. The left-pointing triangle denotes the end of an ADAPT-VQE procedure at which point the resulting wave-function is taken as a target for the Overlap-ADAPT-VQE method. The green dotted line corresponds to an FCI-Overlap-ADAPT-VQE procedure which is plotted simply as a reference. The pink area indicates chemical accuracy at \(10^{-3}\) Hartree.

#### Combining Classical Selected-CI Approaches and Quantum Computing

The key idea of SCI methods is to build a compact representation of the reference wave-function by selecting _on-the-fly_ the most relevant Slater determinants thanks to an importance criterion based on perturbation theory (PT). Owing to this clever selection of the Slater determinants, the variational energy of the reference wave function converges rapidly towards the full-CI energy.
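As a toy illustration of this selection idea (ours; the precise criterion is formalized below in Equations (19)-(21)), a single CIPSI-style step diagonalizes the Hamiltonian in the current reference space and ranks the determinants outside that space by the magnitude of their second-order energy contribution:

```python
import numpy as np

def cipsi_select(H, ref, n_add):
    """One toy CIPSI selection step on a dense Hamiltonian matrix H:
    diagonalize H in the reference space `ref`, then rank determinants
    outside `ref` by the magnitude of their second-order contribution e^(2)."""
    Href = H[np.ix_(ref, ref)]
    w, v = np.linalg.eigh(Href)
    E_var, c = w[0], v[:, 0]                      # variational energy, coefficients
    outside = [k for k in range(H.shape[0]) if k not in ref]
    e2 = {}
    for k in outside:
        coupling = c @ H[ref, k]                  # <Psi^(0)|H|k>
        e2[k] = coupling**2 / (E_var - H[k, k])   # Epstein-Nesbet denominator
    ranked = sorted(outside, key=lambda k: abs(e2[k]), reverse=True)
    return E_var + sum(e2.values()), ref + ranked[:n_add]

# Toy 6-determinant "Hamiltonian": a random symmetric matrix standing in for H.
rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
H = (M + M.T) / 2 + np.diag([-2.0, 0, 0, 0, 0, 0])
E_cipsi, new_ref = cipsi_select(H, ref=[0], n_add=2)
```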
Although the recent revival of SCI approaches [26, 27, 28, 29, 30, 31, 32, 33, 34] has significantly pushed further the size limit of systems for which near full-CI quality energies can be obtained (typically a few tens of correlated electrons in about two hundred orbitals [35, 36]), the scaling of SCI methods is intrinsically exponential in the number of correlated electrons and orbitals. The reason for this exponential scaling is directly linked to the linear parametrization of the sought-after wave-function in terms of Slater determinants, which implies that the intrinsic exponential structure of the wave function must be built explicitly by adding more and more determinants to the reference wave function. This necessarily leads to size consistency errors which manifest through an underestimation of the coefficients of the reference and perturbative wave functions and therefore of the correlation energy. Because the size consistency errors grow with the total (absolute) value of the correlation energy, SCI methods struggle more and more as the number of correlated electrons increases and/or the strength of correlation increases. Recently, attempts to cure this problem have been proposed with a selection of the individual excitation operators [37, 38] in a single-reference CC approach. To overcome these limitations of SCI approaches, an alternative idea is to combine the robust and linear parametrization of SCI with the intrinsic exponential parametrization of the ansatz used in QC computation to take advantage of both worlds:

1. While reaching chemical accuracy in SCI methods is a struggle in the strong correlation regime, obtaining a compact and robust representation of the bulk of correlation effects is an easy task thanks to the smart selection of Slater determinants and the simplicity of the linear parametrization;
2. Use this compact SCI wave-function as the target of the overlap-guided adaptive algorithm so as to obtain an intermediate wave-function represented in terms of qubit excitation evolution operators acting on the Hartree-Fock reference state;
3. Use the intermediate wave-function as a high accuracy initialization of a new QEB-ADAPT-VQE procedure.

For the purpose of this study, we choose to employ the so-called CI perturbatively selected iteratively (CIPSI) algorithm implemented in QP2 [33] to generate the required SCI wave-function. Before proceeding to the application of this algorithm to the linear H\({}_{6}\) chain, we provide a brief recap of the CIPSI methodology.

#### The CIPSI algorithm in a nutshell

The CIPSI algorithm, which was originally introduced in the late seventies [39, 40], is the archetype of SCI approaches: it approximates the FCI wave function through an iterative selected CI procedure, and the FCI energy through a second-order multi-reference perturbation theory (in this case, with an Epstein-Nesbet [41, 42] partition). The CIPSI energy is defined as

\[E_{\text{CIPSI}}:=E_{\text{v}}+E^{(2)}.\tag{19}\]
#### CIPSI-Overlap-ADAPT Numerical results

We performed CIPSI calculations through the open-source quantum chemistry environment Quantum Package [33] for the different molecular systems. As mentioned previously, the CIPSI wave-function is used as a target for the overlap-guided adaptive algorithm and is therefore not required to be very accurate. In particular, all the CIPSI wave-functions employed in this study fall well short of chemical accuracy. In the remainder of this section, we compare the energy convergence of the QEB-ADAPT-VQE algorithm starting from an intermediate wave-function, obtained by applying the overlap-guided algorithm to a CIPSI wave-function, with the traditional QEB-ADAPT-VQE procedure that initializes from a simple Hartree-Fock ansatz. As a rule of thumb, for all these simulations, the Overlap-ADAPT-VQE is used to construct an approximate wave-function with energy comparable to that of the targeted CIPSI wave-function before initiating the subsequent QEB-ADAPT-VQE procedure. Figure 8 shows the energy convergence plot of the two different ADAPT-VQE protocols on the stretched linear H\({}_{6}\) system. We observe a significant difference in the results, with chemical accuracy being achieved using only 40 parameters when the QEB-ADAPT-VQE procedure is initialized with the overlap-guided-CIPSI intermediate wave-function, whereas the classical ADAPT-VQE ansatz is nearly 15 times less accurate despite using 50 parameters. Additional calculations revealed that the classical QEB-ADAPT-VQE protocol requires more than 150 parameters to achieve chemical accuracy [9]. This massive performance gap demonstrates that the CIPSI wave-function initialization guides the ansatz construction in a manner that avoids a massive energy plateau which impedes the progress of classical QEB-ADAPT-VQE. Let us emphasize here that the initial CIPSI wave-function was composed of only 50 determinants and had an accuracy below \(10^{-2}\) Hartree, which suggests that even a low accuracy classically computed target wave-function for the overlap-guided algorithm is enough to improve the convergence of the subsequent QEB-ADAPT-VQE procedure. This observation is particularly important since it highlights the potential of applying this CIPSI-Overlap-ADAPT procedure to much larger systems with strong correlation, where CIPSI approaches are not effective and are simply unable to achieve chemical accuracy.
For such systems, we can envision computing a CIPSI wave-function at the limit of classical computational resources, using this non-chemically accurate CIPSI wave-function as a target for the overlap-guided adaptive algorithm, and initialising a subsequent QEB-ADAPT-VQE procedure on a quantum computer in order to obtain a final result with chemical accuracy.

Figure 8: Comparison of the CIPSI-Overlap-ADAPT-VQE and ADAPT-VQE for the ground state energy of a linear H\({}_{6}\) chain with an interatomic distance of 3 Angstrom. The plot represents the energy convergence as a function of the number of parameters in the ansatz. The CIPSI-Overlap ansatz is grown up to 20 parameters and then used as the initial state for an ADAPT-VQE process. This transition from Overlap-ADAPT-VQE to classical ADAPT-VQE is denoted by the top-pointing triangle. The horizontal black dotted line corresponds to the energy error of the initial CIPSI target wave-function. The light blue dotted line corresponds to the energy of the tUCCSD method [9], which consists of an ansatz wave-function composed of 118 generalised excitation evolutions acting on a reference Hartree-Fock state. The green dotted line corresponds to an FCI-Overlap-ADAPT-VQE procedure which is plotted simply as a reference. The pink area indicates chemical accuracy at \(10^{-3}\) Hartree.

To further test the effectiveness of this CIPSI-Overlap-ADAPT approach, we return to the stretched BeH\({}_{2}\) molecule considered in the previous subsection. We employ two different CIPSI wave-functions as targets for the overlap-guided adaptive algorithm and use the approximate wave-functions obtained as high accuracy initializations for QEB-ADAPT-VQE procedures. Our results are displayed in Figure 9 and demonstrate that the CIPSI-Overlap-ADAPT produces a significantly more compact ansatz than the classical QEB-ADAPT-VQE procedure for both choices of CIPSI wave-functions. In both cases, the final accuracy of the wave-function with a maximal operator count of 50 operators is nearly an order of magnitude better than that of QEB-ADAPT-VQE. Furthermore, as noted in the case of the H\({}_{6}\) molecule, the choice of a low accuracy CIPSI wave-function as the initial target for the Overlap-ADAPT-VQE does not meaningfully degrade the final accuracy. Let us also remark here that the CIPSI-Overlap-ADAPT-VQE wave-function obtained at the end of the iterative process can then further be used as a target for _an additional_ Overlap-ADAPT-VQE procedure, thereby further increasing the accuracy of the ansatz wave-function. In the case of the stretched BeH\({}_{2}\) molecule, this results in further minor improvements to the final energy that is achievable using a maximal operator count of 50, as displayed in Figure 9.

## Discussion

In this study, we have explored the possibility of creating ansatz wave-functions for the variational quantum eigensolver that are more compact than the popular ADAPT-VQE at the chemical accuracy level for some small molecular systems. Since the overparametrization phenomenon observed in the ADAPT algorithm can be attributed to the algorithm's natural propensity to encounter local energy minima, we have proposed a new overlap-guided adaptive algorithm called Overlap-ADAPT-VQE, wherein the ansatz wave-function is grown by maximizing its overlap with an intermediate target wave-function that already captures some electronic correlation. We then use this overlap-guided ansatz as a high accuracy initialization for a classical ADAPT-VQE procedure.
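To fix ideas on how an ansatz can be grown against a target, the sketch below implements a greedy, single-parameter version of the overlap-maximization step on dense toy statevectors; the random anti-Hermitian generator pool, the one-dimensional angle scan, and the absence of any re-optimization of previously selected parameters are simplifying assumptions made purely for illustration, not the actual Overlap-ADAPT-VQE implementation.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

def grow_by_overlap(psi0, target, pool, n_ops):
    """At each iteration, append the pool generator G (applied as the
    unitary exp(theta*G)) whose optimal angle maximizes |<target|psi>|."""
    psi, ansatz = psi0.copy(), []
    for _ in range(n_ops):
        def neg_overlap(theta, G):
            return -abs(np.vdot(target, expm(theta * G) @ psi))
        scored = []
        for G in pool:
            res = minimize_scalar(neg_overlap, args=(G,),
                                  bounds=(-np.pi, np.pi), method="bounded")
            scored.append((-res.fun, res.x, G))
        overlap, theta, G = max(scored, key=lambda s: s[0])
        psi = expm(theta * G) @ psi              # grow the ansatz by one operator
        ansatz.append((G, theta))
    return psi, ansatz

# Toy usage with random anti-Hermitian generators and a random target state.
rng = np.random.default_rng(1)
dim = 8
def rand_antiherm():
    M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return M - M.conj().T
pool = [rand_antiherm() for _ in range(6)]
psi0 = np.eye(dim, dtype=complex)[:, 0]
target = np.linalg.qr(rng.normal(size=(dim, dim))
                      + 1j * rng.normal(size=(dim, dim)))[0][:, 0]
psi, ansatz = grow_by_overlap(psi0, target, pool, n_ops=4)
print(abs(np.vdot(target, psi)))
```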
As a first test of our proposed approach, we used an existing ADAPT-VQE ansatz wave-function as a target for the overlap-guided adaptive algorithm. The resulting ansatz wave-function was shown to achieve chemical accuracy using significantly fewer operators than the classical ADAPT-VQE ansatz. We have also shown that this compression process can be carried out more than once and leads to an even more compact ansatz. For strongly correlated systems, the overlap-guided ansatz is noticeably steered by the target wave-function away from the majority of local traps that are typically encountered in standard ADAPT-VQE when starting from the Hartree-Fock state. While it appears that the ADAPT ansatz is already quite compact for systems with weak electronic correlation, the Overlap-ADAPT approach remains able to offer slight improvements. Motivated next by the inability of ADAPT-VQE to process highly correlated systems such as the stretched linear H\({}_{6}\) chain using a reasonably compact ansatz, we combined classical selected-CI approaches and quantum computing by taking a CIPSI wave-function as a target for our overlap-guided adaptive algorithm. The resulting CIPSI-Overlap-ADAPT-VQE procedure produced a massive improvement over standard ADAPT-VQE, allowing us to reach chemical accuracy using an ansatz with only 40 operators compared to more than 150 for the classical ADAPT-VQE method.

Figure 9: Comparison of the CIPSI-Overlap-ADAPT-VQE and ADAPT-VQE for the ground state energy of a BeH\({}_{2}\) molecule with an interatomic distance of 3 Angstrom. The plots represent the energy convergence as a function of the number of parameters in the ansatz for two initial CIPSI wave-functions. The CIPSI-Overlap ansatz is grown up to 12 parameters (resp. 25 parameters) for the less accurate (resp. more accurate) initial CIPSI wave-function and then used as the initial state for an ADAPT-VQE process. This transition from Overlap-ADAPT-VQE to classical ADAPT-VQE is denoted by the top-pointing triangle. The horizontal dotted lines correspond to the energy error of the initial CIPSI target wave-functions. The green dotted line corresponds to an FCI-Overlap-ADAPT-VQE procedure which is plotted simply as a reference. The blue and purple hexagons denote the final energy obtained by taking the CIPSI-Overlap-ADAPT-VQE wave-function with 50 parameters as a target for an additional Overlap-ADAPT-VQE procedure. The pink area indicates chemical accuracy at 10\({}^{-3}\) Hartree.

Previous studies have already investigated the use of additional classical computation to enhance the UCCSD or ADAPT-VQE methods and have demonstrated promising improvements [7, 43, 44, 45, 46]. Our work builds upon this research and contributes to this line of study. It is worth noting that the overlap-guided ansatz can also be interpreted as a state preparation algorithm within a VQE framework, but this prepared state is not frozen and will be further optimized through the ADAPT process. However, within our new framework, the hybrid selected-CI-Overlap algorithm has the potential to bring a quantum advantage over classical quantum chemistry methods by following this procedure: pushing the classical computation of a complex molecular system to its limits, then generating the corresponding ansatz in a quantum computer using the Overlap adaptive algorithm, and further improving this ansatz through ADAPT-VQE and potentially additional overlap-guided compression steps.
We are also testing the possibility of a final perturbative (PT2) correction step, following the spirit of the modern classical selected-CI approaches. Finally, let us emphasise that Overlap-ADAPT-VQE is, by design, able to integrate seamlessly with the recent improvements made to ADAPT-VQE [46, 47], sharing the same structure and adaptive property while still leveraging its own unique approach to operator selection, and many combinations with ADAPT variants can now be proposed and studied. Conversely, convergence in overlaps can be achieved more quickly by incorporating a wider range of operators, such as generalized excitations or symmetry-breaking operators, into the pool of operators used. This would lead to immediate improvements in the performance of the Overlap-ADAPT-VQE algorithm. To explore further the capabilities of the various Overlap-ADAPT approaches and their potential practical advantage over classical methods, we are currently working towards larger scale simulations on extended implementations encompassing larger qubit counts on present NISQ machines and new-generation advanced simulators.

## Acknowledgements

This work has been funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant No 810367), project EMC2 (J.-P. P. and Y.M.)
2307.03502
Collapse transition in epidemic spreading subject to detection with limited resources
Compartmental models are the most widely used framework for modeling infectious diseases. These models have been continuously refined to incorporate all the realistic mechanisms that can shape the course of an epidemic outbreak. Building on a compartmental model that accounts for early detection and isolation of infectious individuals through testing, in this article we focus on the viability of detection processes under limited availability of testing resources, and we study how the latter impacts on the detection rate. Our results show that, in addition to the well-known epidemic transition at ${\mathcal{R}}_0=1$, a second transition occurs at ${\mathcal{R}}^*_0>1$ pinpointing the collapse of the detection system and, as a consequence, the switch from a regime of mitigation to a regime in which the pathogen spreads freely. We characterize the epidemic phase diagram of the model as a function of the relevant control parameters: the basic reproduction number, the maximum detection capacity of the system, and the fraction of individuals in shelter. Our analysis thus provides a valuable tool for estimating the detection resources and the level of confinement needed to face epidemic outbreaks.
Santiago Lamata-Otín, Adriana Reyna-Lara, David Soriano-Paños, Vito Latora, Jesús Gómez-Gardeñes
2023-07-07T10:39:10Z
http://arxiv.org/abs/2307.03502v1
# Collapse transition in epidemic spreading subject to detection with limited resources

###### Abstract

Compartmental models are the most widely used framework for modeling infectious diseases. These models have been continuously refined to incorporate all the realistic mechanisms that can shape the course of an epidemic outbreak. Building on a compartmental model that accounts for early detection and isolation of infectious individuals through testing, in this article we focus on the viability of detection processes under limited availability of testing resources, and we study how the latter impacts on the detection rate. Our results show that, in addition to the well-known epidemic transition at \(\mathcal{R}_{0}=1\), a second transition occurs at \(\mathcal{R}_{0}^{*}>1\) pinpointing the collapse of the detection system and, as a consequence, the switch from a regime of mitigation to a regime in which the pathogen spreads freely. We characterize the epidemic phase diagram of the model as a function of the relevant control parameters: the basic reproduction number, the maximum detection capacity of the system, and the fraction of individuals in shelter. Our analysis thus provides a valuable tool for estimating the detection resources and the level of confinement needed to face epidemic outbreaks.

pacs: 89.20.-a, 89.75.Hc, 89.75.Kd

## I Introduction

The COVID-19 pandemic has affected the entire world, causing significant loss of life, economic hardship, and widespread disruption of social and cultural norms. As a result, it is essential to understand how communicable diseases spread and what can be done to mitigate their impact. Mathematical models are particularly useful in this regard [1; 2], as they allow researchers to gain valuable insights into the transmission dynamics of infectious diseases [3; 4], inform public health policies [5; 6], and guide efforts to control [7; 8] and prevent future outbreaks [9]. Compartmental models are the most widely used framework for modeling infectious diseases [10; 11]. In these models, a population is divided into compartments or states, with transitions between them mediated by different parameters. While simple and widely used, these basic models, such as the Susceptible-Infectious-Recovered model [12], have limitations in accounting for complex demographic or social factors that may impact transmission dynamics and the effectiveness of containment policies. Over the past years, researchers have made significant efforts to overcome the limitations of compartmental models used for modeling infectious diseases. One approach has been to incorporate various elements that make the models more realistic, thus broadening their range of applicability. These refinements cover the use of complex networks to model interactions through which the pathogen can spread [13; 14], thus allowing one to better capture the heterogeneity of the connections between individuals and its impact on disease transmission dynamics. Another refinement has been their combination with diffusion processes [15], which mimic mobility flows between densely populated areas [16], enabling the development of metapopulation frameworks to analyze the role of travel and movement patterns in the spread of infectious diseases [17; 18; 19]. Additionally, researchers have coupled the spreading dynamics of infectious diseases with behavioral factors that impact the acceptance of interventions [20; 21].
This refinement acknowledges the importance of human behavior in the success of disease control measures and allows for the exploration of interventions more likely to be adopted by the population [22; 23]. Following this line of research, recently, new compartmental models have been developed to study how early detection and isolation of infectious individuals through testing and the subsequent activation of contact tracing strategies can interrupt the advance of transmission chains [24; 25; 26; 27]. In this study, we explore how limited availability of testing resources alters the viability of detection processes and their impact on the ongoing epidemic outbreak. We propose a minimal compartmental model that can simulate the effects of different interventions, such as lockdowns and testing, and derive the epidemic phase diagram analytically. Our results show that, in addition to the well-known epidemic transition that occurs when the basic reproduction number is \(\mathcal{R}_{0}=1\), a second transition takes place at \(\mathcal{R}_{0}^{*}>1\), which depends on the maximum detection capacity of the system. When \(\mathcal{R}_{0}>\mathcal{R}_{0}^{*}\), the system moves from a phase where detection can mitigate the epidemic outbreak to a phase where the pathogen spreads freely. By characterizing this transition, we can determine the precise value of \(\mathcal{R}_{0}^{*}\) as a function of both the detection capacity of the system and the fraction of individuals in shelter. Our model provides a valuable tool for estimating the detection resources and confinement needed to face epidemic outbreaks and can be adapted for use in more elaborate models.

## II Epidemic spreading dynamics

To simulate the time progression of infections during a single epidemic wave without considering reinfections, we adopt a Susceptible-Infected-Recovered (\(SIR\)) framework. This compartmental modeling approach divides the population into three epidemiological states: Susceptible (\(S\)), i.e. individuals who lack prior exposure to the pathogen and, therefore, possess no immunity, Infected (\(I\)), individuals infected with the pathogen and carrying a sufficient viral load to infect others, and Recovered (\(R\)), individuals who have overcome the infection and have developed immunity. In addition to the three typical SIR categories, we introduce two further categories: Locked (\(L\)) and Detected (\(D\)), so that we can refer to our model as SLIDR. The \(L\) category comprises epidemiological \(S\) individuals who are under strict lockdown measures and thus cannot contract the disease. The \(D\) group denotes those individuals who were Infected but have been detected through testing, and consequently, are no longer infectious due to their isolation. As is typical in compartmental models, each individual of a population can occupy only one state at a given time, and the transition from one state to another is governed by the flow diagram illustrated in Fig. 1.a. Initially, a fraction \(l_{0}\) of the population transitions from \(S\) to \(L\), while the remaining individuals in \(S\) are susceptible to infection at a rate \(\beta\) per contact with an agent in compartment \(I\). Infectious individuals transition to either recovered, \(R\), at a rate \(\mu\), or detected, \(D\), at a rate \(g(t)\). Note that the detection rate, \(g(t)\), is time-dependent, as it is contingent upon the testing capacity, as we will discuss in detail below.
Finally, detected individuals transition to the recovered state at a rate \(\gamma\). These transitions allow us to write a set of mean-field equations by considering that each agent is involved in \(\left\langle k\right\rangle\) contacts per unit time in a population of size \(N\). Considering the fractions of the population in each compartment (\(s=S/N\), \(l=L/N\), \(i=I/N\), \(d=D/N\), and \(r=R/N\)) fulfilling \(s+l+i+d+r=1\), the differential equations that govern their time evolution read as:
\[\dot{s} = -\left(\left\langle k\right\rangle-1\right)\beta si\;, \tag{1}\]
\[\dot{l} = 0\;, \tag{2}\]
\[\dot{i} = \left(\left\langle k\right\rangle-1\right)\beta si-\left(\mu+g(t)\right)i\;, \tag{3}\]
\[\dot{d} = g(t)\,i-\gamma d\;, \tag{4}\]
\[\dot{r} = \mu i+\gamma d\;. \tag{5}\]
Note that, as explained above, Eq. (2) implies that the fraction of the population initially set under lockdown remains constant in time (\(l(t)=l_{0}\)). As mentioned above, the detection rate \(g(t)\) is not constant, in order to capture the limited nature of testing resources. In particular, we assume the time dependence of \(g(t)\) shown in Fig. 1.b, whose mathematical expression reads:
\[g(t)=\begin{cases}g_{0}&\text{if}\quad i(t)<\theta\;,\\ g_{0}e^{-\lambda N(i(t)-\theta)}&\text{if}\quad i(t)>\theta\;.\end{cases} \tag{6}\]
The previous functional form assumes that detection operates in a normal way, i.e. with a constant rate \(g_{0}\), provided that the fraction of infectious agents remains below a specified capacity threshold \(\theta\). Under normal conditions, tests are readily available, and the entire detection process, including identification of infectious agents, testing, and processing of results, takes place optimally within an average time period of \(1/g_{0}\). However, when the number of infected individuals \(i(t)\) exceeds the capacity threshold \(\theta\), we assume that the national health system begins to experience delays, causing a reduction in detection efficiency. Ultimately, when the detection system becomes too slow, infected individuals may recover before being detected, so detection does not alter their infectious period. To model the collapse of the detection system, we introduce an exponential decay of the baseline rate \(g_{0}\) of the flow towards the detected compartment, governed by the difference between the demand \(i(t)\) and the availability threshold \(\theta\). The exponential decay is regulated by a tunable decay rate \(\lambda\) times the size of the population \(N\).

Figure 1: In panel (a) we show the flux diagram of the \(SLIDR\) model. The model has five compartments: susceptible (\(S\)), locked (\(L\)), infectious (\(I\)), detected (\(D\)) and recovered (\(R\)). Arrows indicate the possible transitions between different states. In panel (b) we sketch the dependence of the detection rate on the fraction of infected individuals in the population, given by Eq. (6).
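Trajectories of the model can be reproduced by integrating Eqs. (1)-(5) with the state-dependent rate of Eq. (6); below is a minimal forward-Euler sketch, where the integrator, step size, and initial infected fraction are arbitrary choices and the parameter values mirror the caption of Fig. 2.c-d.

```python
import numpy as np

def detection_rate(i, g0, theta, lam, N):
    """State-dependent detection rate of Eq. (6)."""
    return g0 if i < theta else g0 * np.exp(-lam * N * (i - theta))

def simulate_slidr(beta=0.137, mu=1/7, gamma=3/20, g0=0.2, k=5,
                   theta=0.07, lam=1.0, N=700, l0=0.0, i0=1e-3,
                   dt=0.01, t_max=400.0):
    """Forward-Euler integration of Eqs. (1)-(5) for the SLIDR model."""
    s, l, i, d, r = 1.0 - l0 - i0, l0, i0, 0.0, 0.0
    traj = []
    for step in range(int(t_max / dt)):
        g = detection_rate(i, g0, theta, lam, N)
        ds = -(k - 1) * beta * s * i                    # Eq. (1)
        di = (k - 1) * beta * s * i - (mu + g) * i      # Eq. (3); Eq. (2): l stays constant
        dd = g * i - gamma * d                          # Eq. (4)
        dr = mu * i + gamma * d                         # Eq. (5)
        s, i, d, r = s + dt * ds, i + dt * di, d + dt * dd, r + dt * dr
        traj.append((step * dt, s, l, i, d, r))
    return np.array(traj)

traj = simulate_slidr()
print("attack rate r_inf ~", traj[-1, -1])
```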
In Fig. 2, we study how the interplay between the implementation of lockdown policies and the existence of limited testing resources affects epidemic trajectories. To set a reference, we represent an epidemic trajectory in Fig. 2.a-b with a baseline detection rate \(g_{0}=0.2\), unlimited resources, i.e. \(\theta=1\), and no lockdown policies at play, i.e. \(l_{0}=0\). First, we explore in Fig. 2.c-d the impact of limited resources by setting the capacity threshold to \(\theta=0.07\) and the decay rate to \(\lambda=1\). The selection of the capacity threshold value draws inspiration from the typical testing ratios observed during the most challenging weeks of the COVID-19 epidemic in Europe [28]. The time evolution of each compartment clearly shows that, once the infectious population reaches the value \(\theta\), indicated by the horizontal dashed red line in Fig. 2.d, the fraction of detected agents decays and, as a consequence, the growth of the infectious population speeds up. As a result, a much larger attack rate \(r^{\infty}=\lim_{t\rightarrow\infty}r(t)\) is observed. For longer times we observe that detection has a second peak, pinpointing that after the epidemic peak the infectious population falls back under the threshold \(\theta\). Finally, Fig. 2.e-f shows what happens when \(\theta=0.07\) but a finite fraction \(l_{0}=0.1\) of the susceptible population is under lockdown. In this case, the pool of susceptible individuals available to be infected is smaller and, as a result, the capacity threshold \(\theta\) is more effective in stopping the disease spreading, leading to a better mitigation of the outbreak and, consequently, to a smaller attack rate. The overall effect of detection and its limited capacity can be analyzed by computing the epidemic diagram, i.e. the impact of the contagion wave, measured by the value of \(r^{\infty}\), as a function of the basic reproduction number of the pathogen, \(\mathcal{R}_{0}\), whose expression for the \(SLIDR\) model, considering an initially fully susceptible population, is given by:
\[\mathcal{R}_{0}=\frac{\beta(\langle k\rangle-1)}{g_{0}+\mu}\;. \tag{7}\]
In Fig. 3.a, we show (thin curves) the epidemic diagrams, \(r^{\infty}(\mathcal{R}_{0})\), in the case \(l_{0}=0\) and a force of detection characterized by a baseline rate \(g_{0}=1\), a capacity threshold \(\theta=0.07\), and two different values of \(\lambda\). From these diagrams, we observe that there exist two transition points. First, the well-known epidemic threshold at \(\mathcal{R}_{0}=\mathcal{R}_{0}^{c}=1\), pinpointing that beyond this point the infective power \(\beta(\langle k\rangle-1)\) is larger than the effective recovery rate \((g_{0}+\mu)\). In addition to the epidemic threshold, a second transition point appears at \(\mathcal{R}_{0}^{\star}>1\), which corresponds to the collapse transition. To better illustrate the collapse transition point, we also show the epidemic diagrams (thick curves) corresponding to free propagation, \(r_{\text{FP}}^{\infty}(\mathcal{R}_{0})\), i.e. in the absence of detection policies, and to perfect mitigation, \(r_{\text{PM}}^{\infty}(\mathcal{R}_{0})\), which is computed assuming the availability of unlimited resources (\(\theta=1\)). With these two phase diagrams as limiting cases, it is clear that the collapse point \(\mathcal{R}_{0}^{\star}\) corresponds to the minimum reproduction number that a pathogen needs to jeopardize the detection capacity and decrease the mitigation effects of the early removal of infectious agents. Beyond this point, the epidemic diagram separates from the one corresponding to unlimited detection resources and eventually reaches that corresponding to null detection at a value \(\mathcal{R}_{0}>\mathcal{R}_{0}^{\star}\) for which the mitigation power of detection is completely suppressed. It is also important to discuss the role played by the decay rate \(\lambda\).
Although \(\lambda\) does not affect the precise value of \(\mathcal{R}_{0}^{\star}\), it controls the transition between the perfect-mitigation and the free-propagation regimes. In particular, the larger the value of \(\lambda\), the more abrupt the collapse transition becomes. For large values of \(\lambda\), the explosive nature of the collapse transition implies that once the system reaches the collapse transition \(\mathcal{R}_{0}^{\star}\), the attack rate will be equivalent to the one observed in an uncontrolled scenario. The increase in the sharpness of the collapse transition is clear from Fig. 3.b, where \(r^{\infty}(\mathcal{R}_{0})\) is reported for a continuous range of \(\lambda\) values.

Figure 2: In panels (a)-(b) we show the temporal evolution of all the compartments of the \(SLIDR\) model in the absence of lockdown (\(l_{0}=0\)), with unlimited resources (\(\theta=1\)), an average connectivity \(\langle k\rangle=5\), an infectivity \(\beta=0.137\), a recovery rate \(\mu=1/7\), a transition rate from \(D\) to \(R\) regulated by \(\gamma=3/20\), and assuming a baseline detection rate of \(g_{0}=0.2\). In panels (c)-(d) we appreciate how considering limited resources, \(\theta=0.07\) (the rest of the parameters are identical to panels (a)-(b)), gives rise to an acceleration of the growth of infected cases once the capacity threshold is reached. This acceleration depends on the decay rate \(\lambda=1\) and on the population size \(N=700\). In panels (e)-(f) we show how collapse can be avoided when a fraction of the population is under lockdown (\(l_{0}=0.1\)). The \(r^{\infty}\) reference curves display how lockdown reduces the attack rate of the disease even below the one corresponding to unlimited resources (panel (a)). The rest of the parameters are identical to panels (c)-(d) (\(\theta=0.07\), \(\langle k\rangle=5\), \(\beta=0.137\), \(\mu=1/7\), \(\gamma=3/20\), \(g_{0}=0.2\)). Note that in all panels the basic reproduction number is fixed to \(\mathcal{R}_{0}=1.6\) and time is measured in arbitrary units.

To better characterize the collapse transition shown in Fig. 3 we have introduced the mitigation effectiveness \(\eta\), which quantifies the global impact of the detection collapse. The mitigation effectiveness \(\eta\) compares the actual epidemic diagram \(r^{\infty}(\mathcal{R}_{0})\) for a given detection force (\(g_{0}\), \(\theta\), \(\lambda\)) with that obtained with the same detection rate \(g_{0}\) and unlimited resources (\(\theta=1\)), and is defined as:
\[\eta=\frac{\int_{\mathcal{R}_{0}}\left(r_{\text{FP}}^{\infty}(\mathcal{R}_{0})-r^{\infty}(\mathcal{R}_{0})\right)d\mathcal{R}_{0}}{\int_{\mathcal{R}_{0}}\left(r_{\text{FP}}^{\infty}(\mathcal{R}_{0})-r_{\text{PM}}^{\infty}(\mathcal{R}_{0})\right)d\mathcal{R}_{0}}\;. \tag{8}\]
It takes values in the range \([0,1]\), with \(\eta=0\) when the mitigation effect is null, and \(\eta=1\) when the mitigation attained is the best possible one. Fig. 4 shows the mitigation effectiveness as a function of \(\theta\) and \(\lambda\). The latter parameter becomes more relevant when few resources are available, yielding a great variance in \(\eta\) when considering low \(\theta\) values. However, as the availability of resources increases, the transition is delayed and the nature of the transition is less relevant. This is because, for diseases with a high basic reproduction number, the free-propagation and the perfect-mitigation curves are quite close, as shown in Fig. 3.a.
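Once the three diagrams are sampled on a common grid of \(\mathcal{R}_{0}\) values, Eq. (8) reduces to a ratio of two numerical integrals; a minimal sketch:

```python
import numpy as np

def mitigation_effectiveness(R0_grid, r_inf, r_fp, r_pm):
    """Eq. (8): compare the area gained by the actual diagram r_inf over
    free propagation r_fp with the area gained under perfect mitigation
    r_pm, all sampled on the common grid R0_grid."""
    num = np.trapz(r_fp - r_inf, R0_grid)
    den = np.trapz(r_fp - r_pm, R0_grid)
    return num / den
```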
## III The collapse threshold \(\mathcal{R}_{0}^{\star}\)

Having characterized numerically the existence of a collapse transition leading to the failure of epidemic containment through the detection of infectious agents, we now proceed to derive analytically the precise value of \(\mathcal{R}_{0}^{\star}\). This analysis will shed light on the dependence of \(\mathcal{R}_{0}^{\star}\) on the epidemiological parameters characterizing the spread of the pathogen. As mentioned earlier, the collapse threshold \(\mathcal{R}_{0}^{\star}\) is the maximum value of the basic reproduction number for which the epidemic curve remains always below the capacity threshold \(\theta\). Therefore, \(\mathcal{R}_{0}^{\star}\) can be determined as the value of \(\mathcal{R}_{0}\) that gives rise to \(i_{\text{max}}=\theta\). Although our \(SLIDR\) model includes two additional compartments \(L\) and \(D\), we can effectively treat it as an \(SI\bar{R}\) model with a \(\bar{R}\) compartment which aggregates the populations \(L\), \(D\) and \(R\) of the original \(SLIDR\) formulation. Since we assume that the fraction of the population in \(L\) is contained in the new compartment \(\bar{R}\), we have \(\bar{r}_{0}=l_{0}\) and \(s_{0}=1-l_{0}-i_{0}\). Secondly, using our time-continuous approach, we obtain the transition rate from the \(I\) compartment to the new \(\bar{R}\) compartment by adding the detection rate \(g_{0}\) and the recovery rate \(\mu\). This way, we can transfer the outgoing flow from \(I\) to the effective compartment \(\bar{R}\), neglecting the internal dynamics \(D\to R\) within the new effective compartment. After this reformulation we have:
\[\dot{s} = -\left(\left\langle k\right\rangle-1\right)\beta si\;, \tag{9}\]
\[\dot{i} = \left(\left\langle k\right\rangle-1\right)\beta si-\left(\mu+g_{0}\right)i\;, \tag{10}\]
\[\dot{\bar{r}} = \left(\mu+g_{0}\right)i\;, \tag{11}\]
with initial conditions: \(s_{0}=1-l_{0}-i_{0}\), \(i_{0}\ll 1\), and \(\bar{r}_{0}=l_{0}\).

Figure 3: In panel (a) we show the \(SLIDR\) attack rate \(r^{\infty}\) for two values of \(\lambda\) in the absence of lockdown (\(l_{0}=0\)). The lower value of \(\lambda\) yields a second-order collapse transition, while the higher one displays a first-order transition. The gray curve corresponds to the perfect-mitigation case and the red one to the free-propagation dynamics. In panel (b) the complete phase diagram is shown. The horizontal dashed lines indicate the values of \(\lambda\) corresponding to the parameters used to represent the curves in panel (a). The vertical black dashed line indicates the critical value \(\mathcal{R}_{0}^{\star}\) computed according to Eq. (17). In both panels simulations are performed for the range of infectivity values \(\beta\in[0,0.75]\), assuming a baseline detection rate \(g_{0}=1\), a capacity threshold \(\theta=0.07\), a population size \(N=700\) with average connectivity \(\langle k\rangle=5\), a recovery rate \(\mu=1/7\) and a transition rate from \(D\) to \(R\) regulated by \(\gamma=3/20\).

Figure 4: The mitigation effectiveness \(\eta\) (color code) is reported as a function of the capacity threshold \(\theta\) and of the decay rate \(\lambda\). Simulations are performed for a large range of infectivity values with no lockdown policies implemented (\(l_{0}=0\)). We assume a baseline detection rate \(g_{0}=1\), a population size \(N=700\) with average connectivity \(\langle k\rangle=5\), a recovery rate \(\mu=1/7\) and a transition rate from \(D\) to \(R\) regulated by \(\gamma=3/20\).
Note that in the formulation of this effective model we have fixed the detection rate to \(g_{0}\), as we are interested in the maximum possible value of \(\mathcal{R}_{0}^{\star}\) not triggering the decrease of the detection rate during the epidemic evolution. To obtain an analytical expression for \(i_{\text{max}}\) we proceed as usual in the SIR model by dividing Eq. (9) by Eq. (11), yielding:
\[\frac{ds}{d\bar{r}}=-\frac{(\left\langle k\right\rangle-1)\beta}{(g_{0}+\mu)}s=-\mathcal{R}_{0}s\;, \tag{12}\]
which can be integrated with the initial conditions above to obtain the evolution of infectious agents as a function of the fraction of susceptible ones:
\[i(t)=1-l_{0}-s(t)+\frac{1}{\mathcal{R}_{0}}\log\left(\frac{s(t)}{1-l_{0}}\right)\;, \tag{13}\]
where we have set \(\bar{r}_{0}=l_{0}\) and made the approximation \(s_{0}\simeq 1-l_{0}\), considering that, at the beginning of an epidemic, the number of infectious agents is small, \(i_{0}\ll 1\). The implicit expression \(i(s)\) in Eq. (13) allows us to determine the density of susceptibles when the outbreak reaches its peak. Setting \(\frac{di}{ds}=0\) we obtain \(s(i_{\text{max}})=\mathcal{R}_{0}^{-1}\) and, by inserting this value into Eq. (13), we finally obtain:
\[i_{\text{max}}=1-l_{0}-\frac{1}{\mathcal{R}_{0}}\left[1+\log\left(\mathcal{R}_{0}(1-l_{0})\right)\right]\;. \tag{14}\]
Having derived the expression for \(i_{\text{max}}\), we impose in Eq. (14) the condition fulfilled at \(\mathcal{R}_{0}^{\star}\), i.e. \(\theta=i_{\text{max}}\), and after some algebra we obtain the implicit relation for \(\mathcal{R}_{0}^{\star}\), given the values of \(\theta\) and \(l_{0}\):
\[\mathcal{R}_{0}^{\star}(1-l_{0})e^{\mathcal{R}_{0}^{\star}(1-l_{0})\left[\frac{\theta}{(1-l_{0})}-1\right]}=\frac{1}{e}\;. \tag{15}\]
To solve the former relation for \(\mathcal{R}_{0}^{\star}\) we perform the change of variables \(x=(\theta/(1-l_{0})-1)e^{-1}\) and \(y=\mathcal{R}_{0}^{\star}(1-l_{0})\left[\theta/(1-l_{0})-1\right]\). With these new variables the former expression reads:
\[ye^{y}=x\;, \tag{16}\]
which is a well-known transcendental equation whose solution is the Lambert function, \(y=W(x)\). In order to assess the validity of the expression in Eq. (16) to derive \(\mathcal{R}_{0}^{\star}\), we recall some of the properties of the Lambert function. First, we note that the Lambert \(W(x)\) function is only real-valued in the case \(x\geq-\frac{1}{e}\). Considering the expression of \(x\), it is easy to derive that the former condition demands \(\theta\geq 0\), which is automatically satisfied. Additionally, for \(x<0\) (\(\theta+l_{0}<1\)), the Lambert function possesses two branches, namely \(y=W_{0}(x)\) and \(y=W_{-1}(x)\). This particular range of \(x\) is of interest, as it corresponds to situations in which both the capacity threshold and the fraction of locked individuals are small enough for the collapse transition to show up. After comparing the numerical solution for \(\mathcal{R}_{0}^{\star}\) with the analytical values obtained from the two branches of the Lambert function, we found that the correct behavior is captured by the \(y=W_{-1}(x)\) branch. Thus, the expression of the collapse threshold can finally be written as:
\[\mathcal{R}_{0}^{\star}(\theta,l_{0})=\frac{1}{\theta-(1-l_{0})}W_{-1}\left(\frac{\theta-(1-l_{0})}{e(1-l_{0})}\right)\;. \tag{17}\]
In Fig. 3.b we show (see the vertical dashed line) that the analytical expression in Eq. (17) works fairly well in reproducing the precise value of \(\mathcal{R}_{0}\) at which the numerical curve \(r^{\infty}(\mathcal{R}_{0})\) detaches from the one corresponding to perfect mitigation, \(r^{\infty}_{\text{PM}}(\mathcal{R}_{0})\), and starts approaching that corresponding to free propagation, \(r^{\infty}_{\text{FP}}(\mathcal{R}_{0})\). We continue by addressing the question of the minimum amount of resources \(\theta\) needed to avoid collapse, given that the spreading pathogen is characterized by \(\mathcal{R}_{0}\). This threshold value for \(\theta\), hereafter called \(\theta^{\star}\), can be straightforwardly calculated by imposing \(\theta=i_{\text{max}}(\mathcal{R}_{0})\) in Eq. (14), which yields:
\[\theta^{\star}(\mathcal{R}_{0},l_{0})=(1-l_{0})\left\{1-\frac{1}{\mathcal{R}_{0}(1-l_{0})}\left[1+\log\left(\mathcal{R}_{0}(1-l_{0})\right)\right]\right\}\;. \tag{18}\]

Figure 5: Phase diagram of the \(SLIDR\) model in the absence of lockdown (\(l_{0}=0\)). \(\mathcal{R}_{0}^{c}=1\) is the epidemic threshold that separates the absorbing phase from the active epidemic region. Within the active phase, the model allows the computation of the minimum amount of resources \(\theta^{\star}\) needed to avoid collapse, given by Eq. (18), as a function of the basic reproduction number \(\mathcal{R}_{0}\). Thus, \(\mathcal{R}_{0}^{\star}\) is the critical point related to the transition between the Perfect-Mitigation Active Phase and the Active Collapse Phase. Some \(\mathcal{R}_{0}\) values have been indicated as reference, with their associated \(\theta^{\star}\).

Note that the expression (18) can be obtained as the inverse function of equation (17) in the parameter range (\(\theta+l_{0}<1\)) prescribed above. To complete our analytical derivations, we now focus on determining the minimum lockdown fraction, denoted by \(l_{0}^{*}\), required to maintain perfect mitigation when facing a spreading pathogen characterized by the basic reproduction number \(\mathcal{R}_{0}\), assuming a fixed resource capacity \(\theta\). We obtain the threshold value \(l_{0}^{*}\) by imposing \(\theta=i_{\text{max}}(\mathcal{R}_{0})\) in Eq. (14). By resorting to the Lambert function, we obtain the following expression for the threshold value:
\[l_{0}^{*}(\mathcal{R}_{0},\theta)=1+\frac{1}{\mathcal{R}_{0}}W_{-1}\left(-e^{-\theta\mathcal{R}_{0}-1}\right)\;, \tag{19}\]
where \(W_{-1}\) denotes the lower branch of the Lambert function. It is worth noting that the validity limits of the Lambert function are always satisfied, since \(\theta\mathcal{R}_{0}\geq 0\) by definition of the parameters. Additionally, the branch \(W_{-1}\) is well-defined, as the condition \(e^{-\theta\mathcal{R}_{0}-1}>0\) is always fulfilled.
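The thresholds of Eqs. (17)-(19) are straightforward to evaluate with a standard implementation of the Lambert function; a minimal sketch, where the parameter values in the sanity check mirror those of Fig. 3:

```python
import numpy as np
from scipy.special import lambertw

def collapse_threshold(theta, l0):
    """R0*(theta, l0) of Eq. (17), using the W_{-1} branch (theta + l0 < 1)."""
    x = (theta - (1.0 - l0)) / (np.e * (1.0 - l0))
    return float(np.real(lambertw(x, k=-1))) / (theta - (1.0 - l0))

def theta_star(R0, l0):
    """Minimum detection capacity avoiding collapse, Eq. (18)."""
    return (1 - l0) * (1 - (1 + np.log(R0 * (1 - l0))) / (R0 * (1 - l0)))

def l0_star(R0, theta):
    """Minimum lockdown fraction keeping perfect mitigation, Eq. (19)."""
    return 1 + float(np.real(lambertw(-np.exp(-theta * R0 - 1), k=-1))) / R0

# Sanity check: Eq. (18) is the inverse of Eq. (17), so theta_star should
# return the capacity threshold we started from.
R0s = collapse_threshold(theta=0.07, l0=0.0)
print(R0s, theta_star(R0s, 0.0))   # theta_star(R0s, 0) ~ 0.07
```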
## IV Epidemic phase diagrams

The analytical results derived in Section III allow us to analyze the epidemic phase diagram of the system as a function of the relevant parameters: the infectiousness of the pathogen, \(\mathcal{R}_{0}\), the maximum detection capacity of the system, \(\theta\), and the fraction of the population in shelter, \(l_{0}\). In the first case, when \(l_{0}=0\) so that no lockdown is imposed, as shown in Fig. 5, the phase diagram can be plotted as a function of \(\mathcal{R}_{0}\) and \(\theta\). The diagram exhibits three possible phases: the disease-free phase when \(\mathcal{R}_{0}<1\), the perfect-mitigation phase when \(\mathcal{R}_{0}>1\) and \(\theta>\theta^{*}(\mathcal{R}_{0})\), and the active collapse phase provided \(\mathcal{R}_{0}>1\) and \(\theta<\theta^{*}(\mathcal{R}_{0})\). The two latter phases are separated by the curve provided by Eq. (17) (or alternatively Eq. (18)). The phase diagram in Fig. 5 shows that for pathogens with \(\mathcal{R}_{0}>3\) the maximum detection capacity should be \(\theta^{*}>0.3\), pointing out that detection alone demands an extraordinary amount of resources to avoid the collapse phase. Thus, in the following we explore how combining active detection and lockdown can suffice to achieve the mitigation of the outbreak. From Eq. (18) it is clear that when lockdown enters into the game (\(l_{0}>0\)) two beneficial effects show up. First, the basic reproduction number turns into a lower effective one, \(\bar{\mathcal{R}}_{0}=\mathcal{R}_{0}(1-l_{0})\), and, second, the maximum detection capacity is effectively increased from \(\theta^{*}\) to \(\bar{\theta}^{*}=\theta^{*}/(1-l_{0})\). These two effects combined increase the area of the perfect-mitigation phase in the \((\mathcal{R}_{0},\theta)\)-plane, as shown in Fig. 6.a, which reports the curves \(\theta^{*}(\mathcal{R}_{0})\) that separate the perfect-mitigation and active collapse phases for different values of the fraction \(l_{0}\). The beneficial effects of combining partial lockdown with detection are also illustrated in Fig. 6.b. Here we show, for different values of \(\mathcal{R}_{0}\), the lockdown fraction \(l_{0}^{*}\) needed to remain in the perfect-mitigation phase considering a fixed maximum detection capacity \(\theta\).

Figure 6: In panel (a) we show the critical value \(\theta^{*}(\mathcal{R}_{0},l_{0})\) according to Eq. (18). The blue line separates the region of values where the collapse transition can occur from the region where the transition never exists because there is always perfect mitigation. In panel (b) the critical value \(l_{0}^{*}(\mathcal{R}_{0},\theta)\) is drawn according to Eq. (19). The blue line indicates the boundary \(\theta+l_{0}=1\) and separates the region where \(\mathcal{R}_{0}^{\star}\) is finite from the region where the transition never exists. Note that the contour lines correspond to non-equally spaced \(\mathcal{R}_{0}\) values.

## V Conclusions

The implementation of reliable detection systems is key to ensure that policies such as contact tracing and isolation of infectious individuals have the desired impact on outbreak control. In this study, we have introduced and studied a compartmental model in which detection resources are limited. Using a mean-field approach, we have characterized the different dynamic regimes of the system as a function of its parameters. The most relevant result of this work is the observation of two transitions as a function of the basic reproduction number: the epidemic (\(\mathcal{R}_{0}=1\)) and the collapse (\(\mathcal{R}_{0}=\mathcal{R}_{0}^{\star}\)) transitions. At the latter transition the health system becomes unable to meet the demand for detection (for \(\mathcal{R}_{0}>\mathcal{R}_{0}^{\star}\)) and we move from a controlled regime, where detection drives the mitigation of the epidemic outbreak, to a regime in which the pathogen spreads freely. The existence of a collapse transition has motivated the analysis of a combined implementation of detection and lockdowns [29].
We have observed that the combination of the two strategies can help to avoid the collapse point, especially for those pathogens with a large \(\mathcal{R}_{0}\). Besides, our results show how, for certain values of the decay constant \(\lambda\), the nature of the collapse transition turns out to be explosive. This result is striking because it means that, above the collapse transition, the attack rate can be the same as in the unmitigated dynamics. The way in which this explosive behavior shows up is also remarkable, since it arises in the active phase, i.e. well beyond the epidemic threshold. Thus, it stands in contrast to conventional forms of explosivity observed in contagion models [30; 31], in which deliberate delays in epidemic onset lead to abrupt transitions deviating from the usual smooth ones [32]. Finally, it should be noted that the analytical results presented here are based on a mean-field approach and, thus, some limitations are worth mentioning. First, our model does not account for all observed features of social connections and pathogen performance. In this context, it would be worthwhile to refine the mechanisms built into \(SLIDR\) to be able to incorporate real connectivity patterns given by networks of close contacts. In this area, the inclusion of contact tracing strategies, and not only symptomatic detection, could be of particular interest. This approach could also be applied to reaction-diffusion processes that simultaneously incorporate mobility flows and contact patterns [33], paving the way for the identification of optimal distributions of detection resources [34; 35]. In addition, the lockdown that complements detection has been implemented in a stylized way, i.e. starting from the beginning of the epidemic wave rather than being applied at later times. For this particular scenario, we have checked that the reported results are robust provided the time elapsed between the start of the epidemic wave and the lockdown is small enough compared to the time associated with the epidemic peak. Overall, our model is able to provide analytical insights as a benchmark for more realistic models that can capture the full range of complexities involved in infectious disease outbreaks.

## Acknowledgements

S.L.O and J.G.G. acknowledge financial support from the Departamento de Industria e Innovacion del Gobierno de Aragon y Fondo Social Europeo (FENOL group grant E36-23R) and from grant PID2020-113582GB-I00 funded by MCIN/AEI/10.13039/501100011033.
2306.14357
PolicyClusterGCN: Identifying Efficient Clusters for Training Graph Convolutional Networks
Graph convolutional networks (GCNs) have achieved huge success in several machine learning (ML) tasks on graph-structured data. Recently, several sampling techniques have been proposed for the efficient training of GCNs and to improve the performance of GCNs on ML tasks. Specifically, the subgraph-based sampling approaches such as ClusterGCN and GraphSAINT have achieved state-of-the-art performance on the node classification tasks. These subgraph-based sampling approaches rely on heuristics -- such as graph partitioning via edge cuts -- to identify clusters that are then treated as minibatches during GCN training. In this work, we hypothesize that rather than relying on such heuristics, one can learn a reinforcement learning (RL) policy to compute efficient clusters that lead to effective GCN performance. To that end, we propose PolicyClusterGCN, an online RL framework that can identify good clusters for GCN training. We develop a novel Markov Decision Process (MDP) formulation that allows the policy network to predict ``importance" weights on the edges which are then utilized by a clustering algorithm (Graclus) to compute the clusters. We train the policy network using a standard policy gradient algorithm where the rewards are computed from the classification accuracies while training GCN using clusters given by the policy. Experiments on six real-world datasets and several synthetic datasets show that PolicyClusterGCN outperforms existing state-of-the-art models on node classification task.
Saket Gurukar, Shaileshh Bojja Venkatakrishnan, Balaraman Ravindran, Srinivasan Parthasarathy
2023-06-25T22:17:25Z
http://arxiv.org/abs/2306.14357v1
# PolicyClusterGCN: Identifying Efficient Clusters for Training Graph Convolutional Networks

###### Abstract

Graph convolutional networks (GCNs) have achieved huge success in several machine learning (ML) tasks on graph-structured data. Recently, several sampling techniques have been proposed for the efficient training of GCNs and to improve the performance of GCNs on ML tasks. Specifically, the subgraph-based sampling approaches such as ClusterGCN and GraphSAINT have achieved state-of-the-art performance on the node classification tasks. These subgraph-based sampling approaches rely on _heuristics_ - such as graph partitioning via edge cuts - to identify clusters that are then treated as minibatches during GCN training. In this work, we hypothesize that rather than relying on such heuristics, one can learn a reinforcement learning (RL) policy to compute efficient clusters that lead to effective GCN performance. To that end, we propose PolicyClusterGCN, an online RL framework that can identify good clusters for GCN training. We develop a novel Markov Decision Process (MDP) formulation that allows the policy network to predict "importance" weights on the edges which are then utilized by a clustering algorithm (Graclus) to compute the clusters. We train the policy network using a standard policy gradient algorithm where the rewards are computed from the classification accuracies while training GCN using clusters given by the policy. Experiments on six real-world datasets and several synthetic datasets show that PolicyClusterGCN outperforms existing state-of-the-art models on node classification task.

## 1 Introduction

Graph convolution networks (GCNs) learn high-quality node representations of graph-structured data. Such representations allow GCN to achieve state-of-the-art performance on several graph-based machine learning tasks such as node classification [17], link prediction [14], molecular graph generation [21], and recommendation systems [22]. However, GCN cannot easily operate on large-scale graphs as it requires \(O(nfl)\) memory [18], where \(n,f,\) and \(l\) are the number of nodes, features, and GCN layers, respectively. To improve the efficiency and scaling of GCNs, several sampling-based models have been proposed recently [13]. Liu et al. categorize these sampling methods as node-wise, layer-wise, and subgraph-based sampling models. The recent subgraph-based sampling models such as ClusterGCN [18] and GraphSAINT [15] have shown superior performance on node classification tasks as compared to GCNs and other sampling models. These subgraph-based models identify subgraphs (clusters) that are treated as minibatches for GCN training. The resultant mini-batch training helps improve the performance (such as node classification) of the GCN and also helps in scaling GCNs to large graphs. Subgraph-based sampling models rely on _predetermined heuristics_ - such as graph partitioning via edge cuts, or communication volume minimization - to identify subgraphs (clusters). For instance, ClusterGCN [18] and DistDGL [15] rely on the clustering algorithm Metis [12] to identify clusters, while Aligraph [16] relies on four graph partitioning algorithms to identify clusters. GraphSAINT [15], on the other hand, relies on random subgraph sampling techniques to identify subgraphs. These techniques achieve better performance than GCN by either avoiding the neighborhood expansion problem [18] in GCN training or by mitigating the high variance in GCN training through normalization [Zeng et al., 2019].
To the best of our knowledge, such heuristics are oblivious to the impact of the formed clusters on GCN training effectiveness. The optimization objective of GCN [Kipf and Welling, 2016a] suggests that its training performance through clusters is dependent on both the graph structure of the clusters and the distribution of labels within the clusters. However, it is difficult to know _a priori_ which heuristics will lead to the identification of efficient clusters for a given input graph. Here, a cluster configuration is efficient if it results in good GCN performance. Identifying efficient clusters is a hard problem: for \(k\) clusters, there exist \(k^{n}\) possible cluster configurations, where \(n\) is the number of nodes. Training GCN on \(k^{n}\) cluster configurations to find the best performance is not computationally feasible. Hence, we require systematic exploration of the possible configurations. To that end, we propose PolicyClusterGCN, an online reinforcement learning framework that relies on a novel Markov Decision Process (MDP) formulation for computing clusters. The MDP policy is parameterized with a neural network. However, using an RL approach to compute efficient clusters is a challenging problem. For instance, a straightforward approach of designing a policy that directly assigns nodes to clusters would lead to clusters with a large fraction of nodes without an immediate edge between them. Hence, we design our policy neural network such that it predicts an "importance" weight for each edge. Edges that are deemed unimportant are assigned a low weight and vice-versa by the policy. A standard edge-cut-based graph partitioning algorithm would then output different cluster configurations based on the edge weights of the graph. This novel setup allows the exploration of a diverse set of clusters while including nodes that are immediately connected in the same clusters. We train the policy network using a standard policy gradient algorithm where the rewards are computed from the classification accuracies while training GCN using clusters given by the policy. Our contributions can be summarized as follows:

1. We discover that the choice of clusters has a significant impact on GCN performance.
2. To compute efficient clusters for GCN training, we formulate a novel MDP in which the policy predicts edge weights that are utilized by a clustering algorithm to compute clusters.
3. Our experiments show that GCN trained on clusters identified by PolicyClusterGCN outperforms existing methods on five real-world datasets.
4. We also analyze the clusters computed by PolicyClusterGCN by studying their graph structure (via synthetic datasets) and label distribution (via the label entropy metric [Chiang et al., 2019]).

## 2 Notations

Let \(G(V,E)\) be the input graph \(G\) with \(|V|\) and \(|E|\) number of nodes and edges, respectively. Each edge \(e=(u,v,w)\) consists of nodes \(u\) and \(v\) and has an edge weight \(w\in\mathbb{Z}^{+}\) that denotes the strength of the connection. Let \(A\in\mathbb{R}^{|V|\times|V|}\) and \(\hat{A}\in\mathbb{R}^{|V|\times|V|}\) be the adjacency matrix and normalized adjacency matrix of graph \(G\), respectively. Let \(F\in\mathbb{R}^{|V|\times f}\) be the node features. Let \(Z^{(l)}\) be the node embedding at layer \(l\) with \(Z^{0}=F\) and \(W^{(l)}\) be the learning parameters of GCN. The node labels are denoted by \(y_{L}\). Let \(\mathcal{T}_{\mathcal{G}}\) be the training graph consisting of all the training nodes and edges incident on those training nodes.
Similarly, let \(\mathcal{V}_{\mathcal{G}}\) and \(\mathcal{T}e_{\mathcal{G}}\) be the validation and test graphs.

## 3 Methodology

### MDP Formulation

The overview of PolicyClusterGCN is shown in Figure 1. At a step \(t\) in the MDP, the agent selects an action that updates all the edge weights of the graph in state \(s_{t}\). As a result, the MDP transitions to a new state \(s_{t+1}\). Our MDP is non-episodic and trains continuously during each step. We experimented with episodic MDP formulations; however, we empirically found our presented MDP design to work better.

Figure 1: Overview of PolicyClusterGCN (best viewed in color)

**Agent**: Let \(\mathcal{S}\) and \(\mathcal{A}\) be the state space and action space, respectively. In this work, a state represents the graph. Let \(\alpha\) be the set of possible discrete edge weights on an edge and \(a=(e_{1},\ldots,e_{|E|})\) be the selected edge weights where \(e_{i}\in\alpha\). The edge placement policy can then be defined as a mapping \(\pi:\mathcal{S}\rightarrow\mathcal{A}\) that assigns an edge weight to all the edges, where \(\mathcal{A}=\alpha^{|E|}\). The goal of the policy network is to find a placement policy \(\pi\) that results in efficient clusters.

**Environment**: The environment accepts a graph and its edge weights computed by the agent as its input. It then partitions the graph into several clusters based on the edge weights and trains the GCN model on each cluster.

**State**: A state observation \(s\in S\) comprises the training graph \(\mathcal{T}_{\mathcal{G}}\) with the following features on each edge \(e=(u,v)\): (1) concatenation of the node features of nodes \(u\) and \(v\); (2) concatenation of the node embeddings of nodes \(u\) and \(v\). Here, we utilize an unsupervised graph representation learning (UGRL) method, node2vec [10], to learn the node embeddings (later, we show in the experiments section that one can also choose other UGRL methods for this step); (3) concatenation of the embeddings of the 1-hop neighbor nodes of nodes \(u\) and \(v\). A node's neighbor embeddings are aggregated with a sum operation. We choose the sum operation as it allows us to capture the graph neighborhood in an expressive manner [11]; (4) the previous \(m\) edge weights chosen by the policy \(\pi\); (5) the previous \(m\) rewards received from the environment. At the initial state \(s_{0}\), the previous \(m\) edge weights and previous \(m\) edge rewards are assigned values of 1 and 0, respectively.

**Action**: An action at step \(t\) is given by the policy \(\pi\): \(a_{t}=\pi(s_{t})=(e_{1},\ldots,e_{|E|})\). To accelerate the exploration phase through actions, we utilize exponential edge weights. With exponential edge weights, if there are \(p\) possible actions, an action \(i\) corresponds to edge weight \(2^{i}\), where \(0\leq i\leq p\). An edge with weight \(2^{p}\) has the strongest connection and is less likely to be eliminated during graph partitioning. The choice of discrete edge weights instead of continuous edge weights for the actions is primarily due to the restriction induced by graph partitioning algorithms (e.g. Metis [11], Graclus [14], and MLR-MCL [21]), as illustrated in the sketch below.
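As an illustration of this action encoding, the sketch below samples one discrete action per edge from a softmax over per-edge scores and maps it to the integer weight \(2^{i}\) consumed by the partitioner; the softmax sampling is an illustrative stand-in for the \(\epsilon\)-greedy exploration described in the training section.

```python
import numpy as np

def sample_edge_weights(logits, rng):
    """`logits` has shape (num_edges, p): one row of p action scores per
    edge. Each edge's action i is sampled from the softmax of its row and
    mapped to the exponential edge weight 2**i (integer weights, as
    required by Metis/Graclus-style partitioners)."""
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    actions = np.array([rng.choice(len(row), p=row) for row in probs])
    return actions, np.left_shift(1, actions)

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 4))    # 5 edges, p = 4 possible actions
actions, weights = sample_edge_weights(logits, rng)
print(actions, weights)             # e.g. actions (0, 1, 3) -> weights (1, 2, 8)
```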
**Reward**: Now, to identify a good cluster configuration from the space of all possible cluster configurations, we require a reward signal indicating the goodness of a given cluster configuration. Here, we treat the performance (such as node classification) of GCN on the training nodes as the reward signal, obtained by training GCN for a few iterations (\(iters\)) over the computed clusters. We train GCN for a small number of iterations (for example, 50 iterations in our experiments) to get rewards in a short amount of time. Once the GCN model is trained, we compute a score for each edge \(e=(u,v)\) of the training graph by summing the scores of nodes \(u\) and \(v\). Here, the score of a node is +1 if the node label is predicted correctly by the trained GCN and -1 otherwise. In the case of multilabel classification, we compute the score for each label and then sum the scores. We compute the reward as the mean of the scores across all the edges.

### Policy Network Architecture

PolicyClusterGCN learns a good cluster configuration of a graph by parameterizing the MDP policy using a neural network. Here, one could design the policy network using GCN. However, GCN has a huge memory requirement for large graphs. Hence, we select a two-layer neural network as our policy network. The input to the network is the state of edge \(i\), and the output of the network is the predicted edge weight. This design allows the parallelization of the edge weight prediction task. This policy network design can also perform the edge weight prediction task for large graphs. The policy network is trained with a standard policy gradient algorithm.

### Training

PolicyClusterGCN is trained with a standard policy gradient algorithm, actor-critic [12]. Let \(\theta\) and \(w\) be the policy parameters (actor) and state-value function parameters (critic), respectively. Let \(r_{t}\) be the reward obtained by following policy \(\pi_{\theta}\) at step \(t\). Then the actor and critic parameters are updated as follows:

\[\delta\gets r_{t}+\gamma\hat{v}(s_{t+1},w)-\hat{v}(s_{t},w), \tag{1}\]

where \(\gamma\) is the discount factor and \(\hat{v}\) is the value function, and

\[\begin{split} w&\gets w+\alpha^{w}\delta\;\nabla_{w}\;\hat{v}(s_{t},w)\\ \theta&\leftarrow\theta+\alpha^{\theta}\delta\;\nabla_{\theta}\;\text{ln}\;\pi(a_{t}|s_{t},\theta),\end{split} \tag{2}\]

where \(\alpha^{w}>0\) and \(\alpha^{\theta}>0\) are the step sizes for the critic and actor parameters, respectively, and \(a_{t}\) denotes the action taken at step \(t\). The training algorithm of PolicyClusterGCN is presented in Algorithm 1. In each step, we update the edge weights of all the edges. Here, given the state \(s_{t}\) at step \(t\), we employ an \(\epsilon\)-greedy method for exploring the edge weights. With probability \(\epsilon\), we set the edge weight of an edge \(i\) to one of the possible edge weights chosen uniformly at random, and with probability \(1-\epsilon\), we set it to the edge weight given by the policy network (\(\pi_{\theta}(s_{t})[i]\)). We decay \(\epsilon\) as \(\epsilon_{end}+(\epsilon_{start}-\epsilon_{end})\times\exp(-t/\epsilon_{decay})\). We update the training graph \(\mathcal{T}_{\mathcal{G}}\) with the predicted edge weights. The change in the edge weights of the graph allows us to explore different cluster configurations. We identify clusters using a multi-level graph partitioning algorithm, Graclus [13], which allows the identification of imbalanced clusters. Metis [11], on the other hand, has a load-balance constraint that allows only limited exploration of diverse cluster configurations. Once we identify the clusters, we reset the edge weights of the non-cut edges to those of the original graph. This change is important because we are interested in identifying a good cluster configuration rather than modifying the input graph.
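The following sketch shows how one step of this training loop could look, combining the \(\epsilon\)-greedy edge-weight selection with the actor-critic update of equations 1 and 2. It is a simplified reading of Algorithm 1 under our own assumptions: the critic consumes a pooled summary of the edge states, the environment interaction is stood in for by a hypothetical `cluster_and_get_reward` callback (Graclus partitioning plus short GCN training), and all names are ours.

```python
import math
import torch

def epsilon(t, eps_start=0.9, eps_end=0.05, eps_decay=100):
    # Decay schedule from the text: eps_end + (eps_start - eps_end) * exp(-t / eps_decay).
    return eps_end + (eps_start - eps_end) * math.exp(-t / eps_decay)

def policy_step(policy, critic, opt_actor, opt_critic,
                edge_states, cluster_and_get_reward, t, p, gamma=0.95):
    log_probs = policy(edge_states)                      # [num_edges, p] log-probabilities
    greedy = log_probs.argmax(dim=-1)
    explore = torch.rand(greedy.shape) < epsilon(t)      # eps-greedy, per edge
    actions = torch.where(explore, torch.randint(0, p, greedy.shape), greedy)

    # Environment: partition with the new weights 2^action, train GCN, return (reward, s').
    reward, next_edge_states = cluster_and_get_reward(2 ** actions)

    # TD error, Eq. (1): delta = r + gamma * v(s', w) - v(s, w).
    v_s = critic(edge_states.mean(dim=0))
    v_next = critic(next_edge_states.mean(dim=0)).detach()
    delta = reward + gamma * v_next - v_s

    # Critic update, Eq. (2): a gradient step on delta^2 moves v(s, w) toward the TD target.
    opt_critic.zero_grad(); delta.pow(2).mean().backward(); opt_critic.step()

    # Actor update, Eq. (2): raise the log-probability of the taken actions, scaled by delta.
    chosen = log_probs.gather(1, actions.unsqueeze(1)).sum()
    actor_loss = -(delta.detach() * chosen)
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
    return actions
```

Here `policy` can be the two-layer network described above (ending in a `log_softmax` over the \(p\) possible weights), and `critic` a small MLP with a scalar output.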
Next, to compute the reward signal, we train the ClusterGCN model (outlined in Algorithm 2). In ClusterGCN, given \(k\) clusters, we form \(\mathcal{T}_{\mathcal{G}_{1}},\mathcal{T}_{\mathcal{G}_{2}},\ldots,\mathcal{T}_{\mathcal{G}_{k}}\) subgraphs. Let \(\hat{A}_{\mathcal{G}_{x}}\in\mathbb{R}^{n_{x}\times n_{x}}\) be the normalized adjacency matrix of subgraph \(x\) with \(n_{x}\) nodes. Let \(Z_{x}^{(l)}\in\mathbb{R}^{n_{x}\times f}\) be the node embedding at layer \(l\) with \(Z_{x}^{(0)}=F_{x}\), where \(F_{x}\) denotes the initial node features and \(W^{(l)}\) the learnable parameters. Then, we train GCN by following the training procedure outlined in ClusterGCN [10]. The ClusterGCN training process also includes a parameter (\(bsize\)) to enable a stochastic multi-partition approach, and we follow the same approach [10] in our implementation (not shown in Algorithm 2 for expository simplicity). Once we train ClusterGCN, we compute the reward by following the procedure described in the "MDP Formulation" section. We also compute the validation score on the supervised task (such as node classification) and store the clusters that resulted in the best validation score. The policy and critic networks are then trained using equations 1 and 2.

\begin{table} \begin{tabular}{l c c c c c} \hline **Datasets** & **Nodes** & **Edges** & **Feats** & **L/C** & **train/ val/ test** \\ \hline Romania & 41,773 & 996,404 & 16 & 84 & 0.60 / 0.10 / 0.30 \\ \hline \end{tabular} \end{table} Table 1: Datasets statistics. The remaining datasets (Hungary, Croatia, Twitter, Facebook, and Blogcatalog) are described in the text.
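For concreteness, here is a minimal sketch of the edge-based reward described in the "MDP Formulation" section, using our own variable names; the multilabel case would compute such \(\pm 1\) scores per label and sum them, as the text describes.

```python
import numpy as np

def edge_reward(edges, y_true, y_pred):
    # score(n) = +1 if node n is predicted correctly, else -1;
    # reward = mean over training-graph edges of (score(u) + score(v)).
    node_score = np.where(y_pred == y_true, 1, -1)
    edge_scores = [node_score[u] + node_score[v] for u, v in edges]
    return float(np.mean(edge_scores))

# Example: node 2 is mispredicted, dragging down both edges that touch it.
edges = [(0, 1), (1, 2), (2, 0)]
y_true = np.array([0, 1, 1])
y_pred = np.array([0, 1, 0])
print(edge_reward(edges, y_true, y_pred))   # (2 + 0 + 0) / 3 = 0.667
```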
## 4 Experiments

### Datasets

We evaluate PolicyClusterGCN on six frequently evaluated datasets. The statistics of these datasets are presented in Table 1. The column train/val/test in Table 1 refers to the sizes of the training, validation, and test splits of each dataset. The Romania, Hungary, and Croatia datasets were introduced in [11] and have been extensively used in the literature. We evaluate models on multilabel classification tasks on the Romania, Hungary, Croatia, Twitter, and Facebook datasets and on multiclass classification on the Blogcatalog dataset. In the case of the Twitter and Facebook datasets, certain labels are present in only a few nodes; hence, in our evaluation, we consider only those labels that are present in at least 10 nodes. Node attributes are not present for the Romania, Hungary, and Croatia datasets. Hence, we factorize the adjacency matrix of these datasets and treat the 16 left singular vectors as node attributes [10].

### Baselines

We consider state-of-the-art GCN training models, including several models that perform efficient sampling for GCN training [14].

* **GCN** [12]: The original graph convolutional network.
* **GraphSAGE** [15]: A node-sampling-based GCN that samples a few \(l\)-hop neighbors of each node to learn the node embeddings.
* **VR-GCN** [16]: A node-sampling-based GCN that utilizes historical activations to sample a smaller number of neighbors per node.
* **FAST-GCN** [16]: A layer-sampling-based GCN that samples new node neighbors at each layer of GCN.
* **LADIES** [17]: A layer-sampling-based GCN that utilizes importance sampling to sample neighbor-dependent nodes at each layer of GCN.
* **RippleWalk** [18]: A subgraph-sampling-based GCN that proposes a random process to form a subgraph for GCN training.
* **GraphSaint** [19]: A subgraph-sampling-based GCN that proposes samplers for forming subgraphs and introduces normalization techniques to eliminate bias and reduce variance.
* **ClusterGCN** [15]: A subgraph-sampling-based GCN that partitions graphs into multiple clusters and treats each cluster as a minibatch for GCN training.
* **PolicyClusterGCN**: Our proposed algorithm that performs efficient training of GCN.
* **PolicyClusterHOGCN**: Our proposed algorithm in which, instead of GCN, we train a Higher-Order GCN (HOGCN) model [26]. Each layer in HOGCN combines a GraphSAGE-concat [13] layer with a MixHop [12] layer. We tune the same HOGCN parameters for both GraphSaint and PolicyClusterHOGCN.

\begin{table} \begin{tabular}{l|l|c c c c c c} \hline \hline **Sampling Strategy** & **Model** & **Croatia** & **Romania** & **Hungary** & **Facebook** & **Twitter** & **Blogcatalog** \\ \hline & GCN & 0.343 & 0.340 & 0.384 & 0.500 & 0.150 & 0.940 \\ & & (\(\pm\) 0.030) & (\(\pm\) 0.034) & (\(\pm\) 0.036) & (\(\pm\) 0.031) & (\(\pm\) 0.015) & (\(\pm\) 0.012) \\ \hline \multirow{2}{*}{Node wise sampling} & GraphSage & 0.355 & 0.352 & 0.399 & 0.478 & 0.132 & 0.933 \\ & & (\(\pm\)0.011) & (\(\pm\)0.017) & (\(\pm\)0.024) & (\(\pm\)0.022) & (\(\pm\)0.022) & (\(\pm\)0.005) \\ & VR-GCN & 0.338 & 0.366 & 0.396 & 0.525 & 0.156 & 0.953 \\ & & (\(\pm\) 0.026) & (\(\pm\) 0.030) & (\(\pm\) 0.030) & (\(\pm\) 0.010) & (\(\pm\) 0.055) & (\(\pm\) 0.010) \\ \hline \multirow{2}{*}{Layer wise sampling} & LADIES & 0.403 & 0.377 & 0.422 & 0.459 & 0.137 & 0.889 \\ & & (\(\pm\) 0.040) & (\(\pm\) 0.043) & (\(\pm\) 0.043) & (\(\pm\)0.024) & (\(\pm\)0.023) & (\(\pm\)0.006) \\ & FAST-GCN & 0.404 & 0.375 & 0.383 & 0.368 & 0.115 & 0.832 \\ & & (\(\pm\) 0.075) & (\(\pm\) 0.070) & (\(\pm\) 0.056) & (\(\pm\)0.009) & (\(\pm\)0.034) & (\(\pm\)0.032) \\ \hline \multirow{2}{*}{Subgraph sampling} & RippleWalk & 0.359 & 0.358 & 0.406 & 0.466 & 0.144 & 0.877 \\ & & (\(\pm\) 0.027) & (\(\pm\) 0.030) & (\(\pm\) 0.0340) & (\(\pm\) 0.046) & (\(\pm\) 0.052) & (\(\pm\) 0.011) \\ \multirow{2}{*}{} & GraphSaint & 0.459 & 0.467 & 0.485 & 0.537 & 0.164 & **0.959** \\ & & (\(\pm\)0.005) & (\(\pm\)0.002) & (\(\pm\)0.001) & (\(\pm\)0.007) & (\(\pm\)0.014) & (\(\pm\)0.005) \\ \multirow{2}{*}{} & ClusterGCN & 0.403 & 0.387 & 0.420 & 0.510 & 0.148 & 0.950 \\ & & (\(\pm\)0.012) & (\(\pm\)0.005) & (\(\pm\)0.043) & (\(\pm\)0.006) & (\(\pm\)0.032) & (\(\pm\)0.004) \\ \multirow{2}{*}{} & ClusterHOGCN & 0.453 & 0.469 & 0.488 & 0.557 & 0.171 & 0.951 \\ & & (\(\pm\)0.003) & (\(\pm\)0.006) & (\(\pm\)0.001) & (\(\pm\)0.013) & (\(\pm\)0.022) & (\(\pm\)0.004) \\ \hline \multirow{2}{*}{Proposed} & PolicyClusterGCN & 0.428 & 0.413 & 0.453 & 0.515 & 0.155 & 0.952 \\ & & (\(\pm\)0.009) & (\(\pm\)0.001) & (\(\pm\)0.007) & (\(\pm\)0.008) & (\(\pm\)0.024) & (\(\pm\)0.006) \\ \multirow{2}{*}{} & PolicyClusterHOGCN & **0.463** & **0.478** & **0.497** & **0.563** & **0.189** & 0.956 \\ & & (\(\pm\)0.003) & (\(\pm\)0.003) & (\(\pm\)0.001) & (\(\pm\)0.001) & (\(\pm\)0.016) & (\(\pm\)0.006) \\ \hline \hline \end{tabular} \end{table} Table 2: Test Micro-f1 score on the node classification task. Results averaged over five independent runs for all the models.

### Training Details

PolicyClusterGCN is implemented in PyTorch. For the baselines, we utilize the code provided by the authors. For PolicyClusterGCN, we apply the Graclus [14] partitioning method to formulate clusters. We set the number of GCN layers to 2 for all the baselines. We tune the following hyper-parameters for all the baselines: learning rates = [0.01, 0.001], dropouts = [0.0, 0.1, 0.2], and embedding dimensions = [128, 512, 1024]. The number of epochs is set to 1500 for all the models. For GraphSAINT, we also tune the normalization parameters = ["norm", "norm-nn"] and the embedding aggregation process = ["mean", "concatenation"].
For ClusterGCN and PolicyClusterGCN, we tune the number of clusters = [4, 8, 16, 32] and the number of clusters per batch = [1, 4, 8]. For all the models, we perform the evaluation on the same five train/val/test splits and report the average test micro-f1 score over five runs. For PolicyClusterGCN, we set \(m\), the number of previous edge weights and previous edge rewards, to 5. To get the node embeddings, we set the node2vec parameters as walk-length=40, context-size=5, walks-per-node=20, p=1.0, q=1.0. We train PolicyClusterGCN on the cluster configuration identified by Algorithm 1. PolicyClusterGCN parameters: we set the \(\epsilon_{decay}\) value to 100, the step sizes to \(\alpha^{w}=0.001\) and \(\alpha^{\theta}=0.001\), and the discount factor \(\gamma\) to 0.95. During development, we also experimented with the following node2vec parameters: walk-lengths=[20, 40, 80], context-sizes=[3, 5], and walks-per-node=[20, 40]; however, we did not find PolicyClusterGCN to be sensitive to the node2vec parameters. All the experiments are conducted on a machine with an NVIDIA Tesla V100 (32 GB memory), an Intel Xeon E5-2680 CPU (28 cores, 2.40GHz), and 128 GB of RAM.

## 5 Results

### Performance

We compare the node classification performance of PolicyClusterGCN with the other baselines in Table 2. The values in parentheses in Table 2 represent standard deviations. We rely on the Micro-f1 metric for evaluation as it is a frequently used metric in the literature. We observe that our proposed PolicyClusterHOGCN outperforms state-of-the-art models on five out of six real-world datasets. Moreover, our proposed PolicyClusterHOGCN and PolicyClusterGCN outperform ClusterHOGCN and ClusterGCN on all the datasets. This result suggests that our approach is able to identify stronger cluster configurations than prior state-of-the-art approaches. The difference in performance between PolicyClusterHOGCN and the other baselines is statistically significant under a standard paired t-test at a significance level of 0.05. Another observation is that, with HOGCN, the performance of ClusterHOGCN is close to or often better than that of GraphSAINT. Note that our proposed framework is not limited to GCN or HOGCN, and one can experiment with other graph convolutional network designs [21] for further improvement on the supervised task.

### Reward Analysis

Figure 2 shows the mean edge reward, averaged over five independent runs. We observe that, as the policy network's training progresses, PolicyClusterGCN identifies better cluster configurations, which results in improved GCN training performance.

Figure 2: Mean edge reward per PolicyClusterHOGCN step

### Robustness: State Embedding Methods

We test the sensitivity of the proposed PolicyClusterGCN framework with respect to the choice of state embedding methods. We consider three UGRL methods: a random-walk-based method, node2vec [16]; a community-aware embedding method, M-NMF [20]; and a matrix-factorization-based approach, NetMF [21]. The average results over five independent runs are presented in Table 3. We observe that PolicyClusterHOGCN achieves similar performance on all six real-world datasets and is robust to the choice of state embedding method.

### Robustness: Policy Gradient Methods

Next, we test the sensitivity of our framework with respect to the training algorithm.
Here, we train PolicyClusterHOGCN with the REINFORCE [20] and actor-critic [14] algorithms. Table 4 shows the performance of PolicyClusterHOGCN. We observe that PolicyClusterHOGCN trained with actor-critic often outperforms PolicyClusterHOGCN trained with the REINFORCE algorithm. The performance improvement is likely due to the better ability of the actor-critic algorithm to reduce gradient variance [17].

### Synthetic Datasets

We compare the performance of PolicyClusterHOGCN and ClusterHOGCN on the synthetic LFR datasets [11]. Each LFR dataset consists of a set of communities, with a fraction \(\mu\) of inter-community connections. In this section, we study how the performance of PolicyClusterHOGCN and ClusterHOGCN changes as we change the value of \(\mu\) from 0.1 to 0.4 with a 0.05 step size. We keep the rest of the parameters of the LFR dataset the same (number of nodes=5,000, average degree=5, min community size=50, power-law exponents for the degree distribution and the community size distribution set to 3 and 1.5, respectively). We use the cdlib library [15] for generating the datasets. A node's community is used as its class, and we perform multi-class classification. For node features, we factorize the adjacency matrix of these datasets and treat the 16 left singular vectors as node attributes. The test Micro-f1 scores are reported in Table 5. We observe that PolicyClusterHOGCN is consistently able to identify better clusters compared to ClusterHOGCN for all \(\mu\) values. However, as the percentage of inter-community edges increases, the performance of both PolicyClusterHOGCN and ClusterHOGCN decreases. This result is not terribly surprising, since at that point the noise (represented by inter-cluster edges) tends to dominate the signal (represented by node clusters).

### Cluster Analysis: Label Entropy

The performance of GCN is dependent on both the cluster structure and the node label distribution in the cluster. Here, we compare and contrast the node labels present in the clusters identified by ClusterHOGCN and PolicyClusterHOGCN. For comparison, we rely on the label entropy metric utilized elsewhere [10]. To compute a cluster's label entropy, we assume labels are independent and a node's label can take the value 0 or 1. For \(q\) labels, cluster \(c\)'s label entropy can then be calculated as

\begin{table} \begin{tabular}{c|c|c c c c c c} \hline \hline & & **Croatia** & **Romania** & **Hungary** & **Facebook** & **Twitter** & **Blogcatalog** \\ \hline \multirow{3}{*}{PolicyClusterHOGCN} & Node2vec & 0.463 & 0.478 & 0.497 & 0.563 & 0.189 & 0.956 \\ & M-NMF & 0.461 & 0.476 & 0.497 & 0.556 & 0.163 & 0.956 \\ & Net-MF & 0.462 & 0.478 & 0.498 & 0.562 & 0.168 & 0.956 \\ \hline \hline \end{tabular} \end{table} Table 3: Robustness: state embedding methods.

\begin{table} \begin{tabular}{c c c c} \hline \hline \(\text{LFR}_{\mu}\) & **ClusterGCN** & **PolicyClusterGCN** & **\% diff.** \\ \hline 0.10 & 0.711 & 0.731 & +2.00 \\ 0.15 & 0.739 & 0.753 & +1.40 \\ 0.20 & 0.552 & 0.561 & +0.90 \\ 0.25 & 0.452 & 0.472 & +2.05 \\ 0.30 & 0.223 & 0.237 & +1.40 \\ 0.35 & 0.193 & 0.201 & +0.80 \\ 0.40 & 0.135 & 0.144 & +0.90 \\ \hline \hline \end{tabular} \end{table} Table 5: Synthetic Dataset: LFR. \(\text{LFR}_{\mu}\) corresponds to the percentage of inter-cluster links.
\begin{table} \begin{tabular}{c|c|c c c c c c} \hline \hline & & **Croatia** & **Romania** & **Hungary** & **Facebook** & **Twitter** & **Blogcatalog** \\ \hline \multirow{2}{*}{PolicyClusterHOGCN} & Actor-critic & 0.463 & 0.478 & 0.497 & 0.563 & 0.189 & 0.956 \\ & Reinforce & 0.461 & 0.479 & 0.488 & 0.562 & 0.177 & 0.955 \\ \hline \hline \end{tabular} \end{table} Table 4: Robustness: policy gradient methods.

\(S_{c}=H(X_{1},X_{2},\ldots,X_{q})=H(X_{1})+H(X_{2})+\ldots+H(X_{q})\), where each \(X_{i}\) is a Bernoulli random variable. For instance, with base-2 logarithms, a cluster in which each of the \(q\) labels is present in exactly half of the nodes attains the maximum label entropy \(S_{c}=q\). A dataset's label entropy is represented through the distribution of the label entropies of its \(k\) clusters. Since we report the results over five independent runs, our reported distribution size is \(k\) times five. Figure 3 presents the distribution of cluster label entropies, obtained using the kernel density estimation algorithm, for the six real-world datasets. We observe that, in general, the distribution of label entropies of ClusterHOGCN has high variance compared to that of PolicyClusterHOGCN, which has low variance; see Figure 3(a) vs 3(b), Figure 3(e) vs 3(f), and Figure 3(g) vs 3(l). The low variance, concentrated at high label entropies, suggests that PolicyClusterGCN identifies clusters with consistently high label uncertainty.

## 6 Related Work

Graph convolutional neural networks (GCNs) have shown promising results on several machine learning tasks on graph-structured data. However, GCNs require the full normalized adjacency matrix of the graph, and hence scaling them to large networks becomes challenging [Hamilton et al., 2017, Chiang et al., 2019, Zheng et al., 2020]. A plethora of efficient sampling methods has been proposed to solve the scalability bottleneck and improve GCN training [Liu et al., 2021]. The sampling-based techniques can be categorized into node sampling, layer sampling, and subgraph sampling. In node sampling techniques, one usually samples a fixed number of neighbors per node and forms a subgraph consisting of multiple connected nodes and their sampled neighbors [Hamilton et al., 2017]. VR-GCN utilizes the historical activations of nodes in GCN training to efficiently sample a much smaller number of neighbors per node. Layer-sampling-based methods usually sample diverse nodes per GCN layer [Chen et al., 2018b]. LADIES [Zou et al., 2019] improved the sampling process through importance sampling. Subgraph-sampling-based GCN approaches have achieved state-of-the-art performances on the node classification task. These approaches often perform clustering using graph partitioning algorithms [Chiang et al., 2019, Zheng et al., 2020, Zhu et al., 2019], and the identified subgraph is treated as a minibatch for GCN training. RippleWalk [Bai et al., 2021] proposed a novel set-expansion-based subgraph sampling method to construct minibatches. GraphSAINT [Zeng et al., 2019] proposed several subgraph samplers to identify subgraphs and also introduced several normalization techniques to reduce the bias and variance of GCN training. While we have not evaluated its use for the placement of operations on computational devices, we believe PolicyClusterGCN can be used for such a purpose (see Placeto [Addanki et al., 2019]). Placeto [Addanki et al., 2019] learns efficient placement of nodes (TensorFlow operations) on compute devices (GPUs). Placeto's single episode has an episode length equal to the number of nodes in the graph.
One episode step moves a node to a different cluster and then passes the modified graph to the environment to get the reward, rendering the training process quite expensive. We believe PolicyClusterGCN can simplify this task, and we plan to examine such ideas in the future.

Figure 3: Label Entropy

## Conclusion

We propose PolicyClusterGCN, an online RL-based approach, to identify efficient cluster configurations for GCN training. PolicyClusterGCN's policy network modifies the edge weights of the graph, which allows it to explore diverse cluster configurations. We train the policy network using an actor-critic algorithm, where the reward signal is received through GCN performance. We perform experiments on six real-world datasets and show that our proposed model can outperform state-of-the-art baselines. In the future, we plan to explore two directions for PolicyClusterGCN: adding generalizability functionality and improving scalability. To add the generalizability functionality, we propose to transform the node embeddings of different graphs into a common embedding space and devise an efficient training algorithm for PolicyClusterGCN. To improve scalability, we plan to explore multi-level frameworks that rely on graph coarsening to significantly reduce the size of the graph (Deng et al., 2019; Liang et al., 2021).

## 7 Acknowledgements

This material is partially supported by the National Science Foundation (NSF) under grants CNS-2112471, OAC-2018627, and CCF-2028944. Any opinions, findings, and conclusions in this material are those of the author(s) and may not reflect the views of the respective funding agencies.
2301.00385
Inner Riesz pseudo-balayage and its applications to minimum energy problems with external fields
For the Riesz kernel $\kappa_\alpha(x,y):=|x-y|^{\alpha-n}$, $0<\alpha<n$, on $\mathbb R^n$, $n\geqslant2$, we introduce the inner pseudo-balayage $\hat{\omega}^A$ of a (Radon) measure $\omega$ on $\mathbb R^n$ to a set $A\subset\mathbb R^n$ as the (unique) measure minimizing the Gauss functional \[\int\kappa_\alpha(x,y)\,d(\mu\otimes\mu)(x,y)-2\int\kappa_\alpha(x,y)\,d(\omega\otimes\mu)(x,y)\] over the class $\mathcal E^+(A)$ of all positive measures $\mu$ of finite energy, concentrated on $A$. For quite general signed $\omega$ (not necessarily of finite energy) and $A$ (not necessarily closed), such $\hat{\omega}^A$ does exist, and it maintains the basic features of inner balayage for positive measures (defined when $\alpha\leqslant2$), except for those implied by the domination principle. (To illustrate the latter, we point out that, in contrast to what occurs for the balayage, the inner pseudo-balayage of a positive measure may increase its total mass.) The inner pseudo-balayage $\hat{\omega}^A$ is further shown to be a powerful tool in the problem of minimizing the Gauss functional over all $\mu\in\mathcal E^+(A)$ with $\mu(\mathbb R^n)=1$, which enables us to improve substantially many recent results on this topic, by strengthening their formulations and/or by extending the areas of their applications. For instance, if $A$ is a quasiclosed set of nonzero inner capacity $c_*(A)$, and if $\omega$ is a signed measure, compactly supported in $\mathbb R^n\setminus{\rm Cl}_{\mathbb R^n}A$, then the problem in question is solvable if and only if either $c_*(A)<\infty$, or $\hat{\omega}^A(\mathbb R^n)\geqslant1$.
Natalia Zorii
2023-01-01T10:58:28Z
http://arxiv.org/abs/2301.00385v1
# Inner Riesz pseudo-balayage and its applications to minimum energy problems with external fields

**Natalia Zorii**

_Dedicated to Professor Stephen J. Gardiner on the occasion of his 65th birthday_

+ Footnote †: Key words: Minimum Riesz energy problems with external fields, inner Riesz balayage, inner Riesz pseudo-balayage.

**Abstract.** For the Riesz kernel \(\kappa_{\alpha}(x,y):=|x-y|^{\alpha-n}\) of order \(0<\alpha<n\) on \(\mathbb{R}^{n}\), \(n\geqslant 2\), we introduce the so-called inner pseudo-balayage \(\hat{\omega}^{A}\) of a (Radon) measure \(\omega\) on \(\mathbb{R}^{n}\) to a set \(A\subset\mathbb{R}^{n}\) as the (unique) measure minimizing the Gauss functional \[\int\kappa_{\alpha}(x,y)\,d(\mu\otimes\mu)(x,y)-2\int\kappa_{\alpha}(x,y)\,d(\omega\otimes\mu)(x,y)\] over the class \(\mathcal{E}^{+}(A)\) of all positive measures \(\mu\) of finite energy, concentrated on \(A\). For quite general signed \(\omega\) (not necessarily of finite energy) and \(A\) (not necessarily closed), such \(\hat{\omega}^{A}\) does exist, and it maintains the basic features of inner balayage for positive measures (defined when \(\alpha\leqslant 2\)), except for those implied by the domination principle. (To illustrate the latter, we point out that, in contrast to what occurs for the balayage, the inner pseudo-balayage of a positive measure may increase its total mass.) The inner pseudo-balayage \(\hat{\omega}^{A}\) is further shown to be a powerful tool in the problem of minimizing the Gauss functional over all \(\mu\in\mathcal{E}^{+}(A)\) with \(\mu(\mathbb{R}^{n})=1\), which enables us to improve substantially many recent results on this topic, by strengthening their formulations and/or by extending the areas of their applications. For instance, if \(A\) is a quasiclosed set of nonzero inner capacity \(c_{*}(A)\), and if \(\omega\) is a signed measure, compactly supported in \(\mathbb{R}^{n}\setminus\operatorname{Cl}_{\mathbb{R}^{n}}A\), then the problem in question is solvable if and only if either \(c_{*}(A)<\infty\), or \(\hat{\omega}^{A}(\mathbb{R}^{n})\geqslant 1\). In particular, if \(c_{*}(A)=\infty\), then the problem has no solution whenever \(\omega^{+}(\mathbb{R}^{n})<1/C_{n,\alpha}\), where \(C_{n,\alpha}:=1\) if \(\alpha\leqslant 2\), and \(C_{n,\alpha}:=2^{n-\alpha}\) otherwise; whereas \(\omega^{-}(\mathbb{R}^{n})\), the total amount of the negative charge, has no influence on this phenomenon. The results obtained are illustrated by some examples.

## 1. Inner pseudo-balayage: a motivation and a model case

This paper deals with the theory of potentials with respect to the \(\alpha\)-Riesz kernels \(\kappa_{\alpha}(x,y):=|x-y|^{\alpha-n}\) of order \(0<\alpha<n\) on \(\mathbb{R}^{n}\), \(n\geqslant 2\), \(|x-y|\) being the Euclidean distance in \(\mathbb{R}^{n}\). Our main goal is to proceed further with the study of minimum \(\alpha\)-Riesz energy problems in the presence of external fields \(f\), a point of interest for many researchers (see e.g. the monographs [2, 20] and references therein, [14, 18], [21]-[25], as well as [1, 8, 32, 33], some of the most recent papers on this topic). In the current work we improve substantially many recent results in this field, by strengthening their formulations and/or by extending the areas of their applications (see Section 6 for the results obtained).
This has become possible due to the development of a new tool, the inner pseudo-balayage (see Section 3, cf. also the present section for a motivation of the proposed definition as well as for a model case). It is well known that the \(\alpha\)-Riesz balayage (sweeping out) serves as an efficient tool in the problems in question (see e.g. [8, 23, 25, 33]). However, its application is limited to the case of \(\alpha\) ranging over \((0,2]\), and to external fields \(f\) of the form \[f(x):=-U^{\omega}(x):=-\int\kappa_{\alpha}(x,y)\,d\omega(y), \tag{1.1}\] where \(\omega\) is a suitable _positive_ Radon measure. To extend the area of application of such a tool to _arbitrary_ \(\alpha\in(0,n)\) and/or to external fields \(f\) given by (1.1), but now with _signed_ \(\omega\) involved, we generalize the standard concept of inner balayage of positive measures (defined for \(\alpha\in(0,2]\)) to the so-called inner pseudo-balayage of signed measures, by maintaining the basic features of the former concept, except for those implied by the domination principle. Being crucial to our study of minimum energy problems with external fields, the concept of inner pseudo-balayage is also of independent interest, looking promising for further generalizations and other applications. Before introducing it, we first review some basic facts of the theory of \(\alpha\)-Riesz potentials. We denote by \(\mathfrak{M}\) the linear space of all (real-valued Radon) measures \(\mu\) on \(\mathbb{R}^{n}\), equipped with the _vague_ topology of pointwise convergence on the class \(C_{0}(\mathbb{R}^{n})\) of all continuous functions \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\) of compact support, and by \(\mathfrak{M}^{+}\) the cone of all positive \(\mu\in\mathfrak{M}\), where \(\mu\) is _positive_ if and only if \(\mu(\varphi)\geqslant 0\) for all positive \(\varphi\in C_{0}(\mathbb{R}^{n})\). Given \(\mu,\nu\in\mathfrak{M}\), the _potential_ \(U^{\mu}\) and the _mutual energy_ \(I(\mu,\nu)\) are introduced by \[U^{\mu}(x):=\int\kappa_{\alpha}(x,y)\,d\mu(y),\quad x\in\mathbb{R}^{n},\] \[I(\mu,\nu):=\int\kappa_{\alpha}(x,y)\,d(\mu\otimes\nu)(x,y),\] respectively, provided that the integral on the right is well defined (as a finite number or \(\pm\infty\)). For \(\mu=\nu\), \(I(\mu,\nu)\) defines the _energy_ \(I(\mu):=I(\mu,\mu)\) of \(\mu\in\mathfrak{M}\). The following property of _strict positive definiteness_ for the \(\alpha\)-Riesz kernels, discovered by M. Riesz [19, Chapter I, Eq. (13)] (cf. also [17, Theorem 1.15]), is crucial to the current study: \(I(\mu)\geqslant 0\) for any (signed) \(\mu\in\mathfrak{M}\), and \(I(\mu)=0\iff\mu=0\). This implies that all (signed) \(\mu\in\mathfrak{M}\) with \(I(\mu)<\infty\) form a pre-Hilbert space \(\mathcal{E}\) with the inner product \(\langle\mu,\nu\rangle:=I(\mu,\nu)\) and the energy norm \(\|\mu\|:=\sqrt{I(\mu)}\), see e.g. [10, Lemma 3.1.2]. The topology on \(\mathcal{E}\) defined by means of this norm is said to be _strong_. Another fact decisive to this paper is that the cone \(\mathcal{E}^{+}:=\mathcal{E}\cap\mathfrak{M}^{+}\) is _strongly complete_, and that the strong topology on \(\mathcal{E}^{+}\) is _finer_ than the (induced) vague topology on \(\mathcal{E}^{+}\) (see J. Deny [6]; for \(\alpha=2\), cf. also H. Cartan [4]).
Thus any strong Cauchy sequence (net) \((\mu_{j})\subset\mathcal{E}^{+}\) converges both strongly and vaguely to the same unique limit \(\mu_{0}\in\mathcal{E}^{+}\), the strong topology on \(\mathcal{E}\) as well as the vague topology on \(\mathfrak{M}\) being Hausdorff. (Following B. Fuglede [10], such a kernel is said to be _perfect_.)

### A model case

As a model case for introducing the concept of inner \(\alpha\)-Riesz pseudo-balayage, consider first a _closed_ set \(F\subset\mathbb{R}^{n}\) and a (signed) measure \(\omega\in\mathfrak{M}\) of _finite_ energy. Since the class \(\mathfrak{M}^{+}(F)\) of all \(\mu\in\mathfrak{M}^{+}\) with the support \(S(\mu)\subset F\) is vaguely closed [3, Section III.2, Proposition 6], the convex cone \(\mathcal{E}^{+}(F):=\mathfrak{M}^{+}(F)\cap\mathcal{E}\) is strongly closed, and hence strongly complete, the \(\alpha\)-Riesz kernel being perfect. By applying [9] (Theorem 1.12.3 and Proposition 1.12.4(2)), we therefore conclude that for the given \(\omega\in\mathcal{E}\), there exists the unique \(P\omega\in\mathcal{E}^{+}(F)\) such that \[\|\omega-P\omega\|=\min_{\mu\in\mathcal{E}^{+}(F)}\,\|\omega-\mu\|, \tag{1.2}\] and the same \(P\omega\) is uniquely characterized within \(\mathcal{E}^{+}(F)\) by the two relations \[\langle P\omega-\omega,\mu\rangle\geqslant 0\ \text{ for all }\mu\in\mathcal{E}^{+}(F), \tag{1.3}\] \[\langle P\omega-\omega,P\omega\rangle=0. \tag{1.4}\] This \(P\omega\) is said to be the _orthogonal projection_ of \(\omega\in\mathcal{E}\) onto \(\mathcal{E}^{+}(F)\). By a slight modification of [26, Proof of Theorem 3.1] we infer from the above that \(P\omega\) is the only measure in \(\mathcal{E}^{+}(F)\) having the two properties \[U^{P\omega}\geqslant U^{\omega}\ \ \text{n.e.\ on }F, \tag{1.5}\] \[U^{P\omega}=U^{\omega}\ \ P\omega\text{-a.e.}, \tag{1.6}\] where the abbreviation _n.e._ (_nearly everywhere_) means that the inequality holds true everywhere on \(F\) except for a subset \(N\subset F\) of _inner capacity_ zero: \(c_{*}(N)=0\) (see footnote 1).

Footnote 1: For closed \(F\), the set \(N\) of all \(x\in F\) where (1.5) fails is Borel, hence capacitable, and so (1.5) actually holds true even _quasi-everywhere_ (_q.e._) on \(F\), namely everywhere on \(F\) except for \(N\) of _outer capacity_ zero: \(c^{*}(N)=0\). (For the concepts of inner and outer capacities, see [17, Section II.2.6].)

Assume for a moment that \(\alpha\leqslant 2\), and that the above \(\omega\) is _positive_, i.e. \(\omega\in\mathcal{E}^{+}\). By use of the _complete maximum principle_ [17, Theorems 1.27, 1.29], we derive from (1.5) and (1.6) that \(P\omega\) is then uniquely characterized within \(\mathcal{E}^{+}(F)\) by the equality \[U^{P\omega}=U^{\omega}\ \ \text{n.e.\ on }F, \tag{1.7}\] see [26, Theorem 3.1], and hence \(P\omega\) is actually the _balayage_ \(\omega^{F}\) of \(\omega\in\mathcal{E}^{+}\) onto \(F\): \[P\omega=\omega^{F}.\] However, if either \(\alpha>2\), or if the above \(\omega\) is _signed_, then relations (1.2)-(1.6) still hold, but they no longer result in (1.7). Motivated by this observation, we introduce the following definition.
**Definition 1.1**.: The _pseudo-balayage_ \(\hat{\omega}^{F}\) of \(\omega\in\mathcal{E}\) onto a closed set \(F\subset\mathbb{R}^{n}\) with respect to the \(\alpha\)-Riesz kernel of arbitrary order \(\alpha\in(0,n)\) is defined as the only measure in \(\mathcal{E}^{+}(F)\) satisfying (1.2) (with \(\hat{\omega}^{F}\) in place of \(P\omega\)); or equivalently, as the unique measure in \(\mathcal{E}^{+}(F)\) having properties (1.3)-(1.6) (with \(\hat{\omega}^{F}\) in place of \(P\omega\)).

**Remark 1.2**.: Assume for a moment that \(\omega\) is positive. It follows from the above that the pseudo-balayage \(\hat{\omega}^{F}\) coincides with the balayage \(\omega^{F}\) whenever \(\alpha\leqslant 2\); while otherwise, the former concept presents a natural extension of the latter, the problem of balayage for \(\alpha>2\) being unsolvable.2 But if now \(\omega\) is signed, then for \(\alpha\leqslant 2\), both \(\hat{\omega}^{F}\) and \(\omega^{F}\) still exist and are unique, whereas, in general, \[\hat{\omega}^{F}\neq\omega^{F},\] the balayage of signed \(\omega\) being defined by linearity (for more details see Remark 3.3).

Footnote 2: See e.g. [17, Section IV.5.20]; this is caused by the fact that for \(\alpha>2\), the maximum principle fails to hold. Therefore, when speaking of \(\alpha\)-Riesz balayage, we understand that \(\alpha\in(0,2]\).

**Remark 1.3**.: In Section 3 below, we shall extend the above definition of the pseudo-balayage \(\hat{\omega}^{F}\), given in the model case of \(\omega\in\mathcal{E}\) and closed \(F\subset\mathbb{R}^{n}\), to:

* \(\omega\in\mathfrak{M}\) _that are not necessarily of finite energy_.
* \(F\subset\mathbb{R}^{n}\) _that are not necessarily closed_.

For the former goal, we observe that problem (1.2) is equivalent to that of minimizing _the Gauss functional_ \(\|\mu\|^{2}-2\int U^{\omega}\,d\mu\), which makes sense _not_ only for \(\omega\in\mathcal{E}\). Indeed, \(\|\omega-\mu\|^{2}=\|\omega\|^{2}+\|\mu\|^{2}-2\int U^{\omega}\,d\mu\), and the term \(\|\omega\|^{2}\) does not depend on \(\mu\).

We complete this section with some general conventions, used in what follows. From now on, when speaking of a (signed) measure \(\mu\in\mathfrak{M}\), we understand that its potential \(U^{\mu}\) is well defined and finite almost everywhere with respect to the Lebesgue measure on \(\mathbb{R}^{n}\); or equivalently (cf. [17, Section I.3.7]) that \[\int_{|y|>1}\frac{d|\mu|(y)}{|y|^{n-\alpha}}<\infty, \tag{1.8}\] where \(|\mu|:=\mu^{+}+\mu^{-}\), \(\mu^{+}\) and \(\mu^{-}\) being the positive and negative parts of \(\mu\) in the Hahn-Jordan decomposition. Actually, then (and only then) \(U^{\mu}\) is finite q.e. on \(\mathbb{R}^{n}\), cf. [17, Section III.1.1]. This would necessarily hold if \(\mu\) were required to be _bounded_ (that is, with \(|\mu|(\mathbb{R}^{n})<\infty\)), or of finite energy, cf. [10, Corollary to Lemma 3.2.3]. A measure \(\mu\in\mathfrak{M}^{+}\) is said to be _concentrated_ on a set \(A\subset\mathbb{R}^{n}\) if \(A^{c}:=\mathbb{R}^{n}\setminus A\) is \(\mu\)-negligible, or equivalently if \(A\) is \(\mu\)-measurable and \(\mu=\mu|_{A}\), \(\mu|_{A}\) being the restriction of \(\mu\) to \(A\). Denoting by \(\mathfrak{M}^{+}(A)\) the cone of all \(\mu\in\mathfrak{M}^{+}\) concentrated on \(A\), we further write \(\mathcal{E}^{+}(A):=\mathfrak{M}^{+}(A)\cap\mathcal{E}\), and let \(\mathcal{E}^{\prime}(A)\) stand for the closure of \(\mathcal{E}^{+}(A)\) in the strong topology on \(\mathcal{E}^{+}\). We emphasize that \(\mathcal{E}^{\prime}(A)\) _is strongly complete_, being a strongly closed subcone of the strongly complete cone \(\mathcal{E}^{+}\).
Given \(A\subset\mathbb{R}^{n}\), denote by \(\mathfrak{C}_{A}\) the upward directed set of all compact subsets \(K\) of \(A\), where \(K_{1}\leqslant K_{2}\) if and only if \(K_{1}\subset K_{2}\). If a net \((x_{K})_{K\in\mathfrak{C}_{A}}\subset Y\) converges to \(x_{0}\in Y\), \(Y\) being a topological space, then we shall indicate this fact by writing \[x_{K}\to x_{0}\ \ \text{in $Y$ as $K\uparrow A$}.\]

## 2. On the inner Riesz balayage

Before proceeding with an extension of the concept of pseudo-balayage announced in Remark 1.3, we first recall some basic facts of the theory of inner \(\alpha\)-Riesz balayage. Such a theory, generalizing Cartan's pioneering work [5] on the inner Newtonian balayage (\(\alpha=2\)) to any \(\alpha\in(0,2]\), was initiated in the author's recent papers [26, 27], and it was further developed in [28]-[30].3 Throughout this section, \(0<\alpha\leqslant 2\). Footnote 3: See also [31] for an application of this theory to Deny's principle of positivity of mass.

**Definition 2.1** ([26, Sections 3, 4]).: The _inner balayage_ \(\omega^{A}\) of a measure \(\omega\in\mathfrak{M}^{+}\) to a set \(A\subset\mathbb{R}^{n}\) is defined as the measure of minimum potential in the class \(\Gamma_{A,\omega}\), \[\Gamma_{A,\omega}:=\big{\{}\mu\in\mathfrak{M}^{+}:\ U^{\mu}\geqslant U^{\omega}\ \ \text{n.e. on $A$}\big{\}}.\] That is, \(\omega^{A}\in\Gamma_{A,\omega}\) and \[U^{\omega^{A}}=\min_{\mu\in\Gamma_{A,\omega}}\ U^{\mu}\ \ \text{on $\mathbb{R}^{n}$}. \tag{2.1}\]

**Theorem 2.2** ([26, Sections 3, 4]).: Given arbitrary \(\omega\in\mathfrak{M}^{+}\) and \(A\subset\mathbb{R}^{n}\), the inner balayage \(\omega^{A}\), introduced by Definition 2.1, exists and is unique. Furthermore,4 Footnote 4: As pointed out in [26, Remark 3.12], (2.2) no longer characterizes \(\omega^{A}\) uniquely (as it does for closed \(A\) and \(\omega\in\mathcal{E}^{+}\)). For more details see footnote 5, Corollary 2.4, Remark 2.5, and Theorem 2.6. \[U^{\omega^{A}}=U^{\omega}\ \ \text{n.e. on $A$}, \tag{2.2}\] \[U^{\omega^{A}}\leqslant U^{\omega}\ \ \text{on $\mathbb{R}^{n}$}.\] The same \(\omega^{A}\) can alternatively be characterized by means of either of the following (equivalent) assertions:

(a) \(\omega^{A}\) is the unique measure in \(\mathfrak{M}^{+}\) satisfying the symmetry relation \[I(\omega^{A},\sigma)=I(\sigma^{A},\omega)\ \ \text{for all $\sigma\in\mathcal{E}^{+}$},\] where \(\sigma^{A}\) denotes the only measure in \(\mathcal{E}^{\prime}(A)\) with \(U^{\sigma^{A}}=U^{\sigma}\) n.e. on \(A\).5

(b) \(\omega^{A}\) is the unique measure in \(\mathfrak{M}^{+}\) satisfying either of the two limit relations \[\omega^{A}_{j}\to\omega^{A}\ \ \text{vaguely in $\mathfrak{M}^{+}$ as $j\to\infty$},\] \[U^{\omega^{A}_{j}}\uparrow U^{\omega^{A}}\ \ \text{pointwise on $\mathbb{R}^{n}$ as $j\to\infty$},\] where \((\omega_{j})\subset\mathcal{E}^{+}\) is an arbitrary sequence having the property6 Footnote 6: Such \(\omega_{j}\in\mathcal{E}^{+}\), \(j\in\mathbb{N}\), do exist; they can be defined, for instance, by means of the formula \[U^{\omega_{j}}:=\min\big{\{}U^{\omega},\,jU^{\lambda}\big{\}},\] \(\lambda\in\mathcal{E}^{+}\) being fixed (see e.g. [26, 27]).
Here we have used the fact that for any \(\mu_{1},\mu_{2}\in\mathfrak{M}^{+}\), there is \(\mu_{0}\in\mathfrak{M}^{+}\) such that \(U^{\mu_{0}}:=\min\big{\{}U^{\mu_{1}},\,U^{\mu_{2}}\big{\}}\)[17, Theorem 1.31]. \[U^{\omega_{j}}\uparrow U^{\omega}\ \ \text{\rm pointwise on $\mathbb{R}^{n}$ as $j\to\infty$},\] whereas \(\omega^{A}_{j}\) denotes the only measure in \(\mathcal{E}^{\prime}(A)\) with \(U^{\omega^{A}_{j}}=U^{\omega_{j}}\) n.e. on \(A\). **Remark 2.3**.: For signed \(\omega\in\mathfrak{M}\), we define the inner balayage \(\omega^{A}\) by linearity: \[\omega^{A}:=(\omega^{+})^{A}-(\omega^{-})^{A}. \tag{2.3}\] If moreover the mutual energy \(I(\omega,\sigma)\) is well defined for all \(\sigma\in\mathcal{E}\), then this \(\omega^{A}\) is uniquely characterized by the symmetry relation \[I(\omega^{A},\sigma)=I(\sigma^{A},\omega)\ \ \text{for all $\sigma\in\mathcal{E}$},\] which actually only needs to be verified for certain countably many \(\sigma\in\mathcal{E}\), independent of the choice of \(\omega\) (cf. [30], Theorem 1.4 and Remark 1.4). \(\bullet\) In the rest of this paper, we shall always require \(A\subset\mathbb{R}^{n}\) to have the property7 Footnote 7: As shown in [33, Theorem 3.9], \((\mathcal{P}_{1})\) is fulfilled, for instance, if \(A\) is _quasiclosed_ (_quasicompact_), that is, if \(A\) can be approximated in outer capacity by closed (compact) sets, see Fuglede [11]. \((\mathcal{P}_{1})\)_\(\mathcal{E}^{+}(A)\) is strongly closed_. Then (and only then) \[\mathcal{E}^{\prime}(A)=\mathcal{E}^{+}(A),\] and hence Theorem 2.2 remains valid with \(\mathcal{E}^{\prime}(A)\) replaced throughout by \(\mathcal{E}^{+}(A)\). In particular, the following useful corollary holds true. **Corollary 2.4**.: For this \(A\) and for any \(\omega\in\mathcal{E}^{+}\), the inner balayage \(\omega^{A}\) is, in fact, the orthogonal projection of \(\omega\) onto the (convex, strongly complete) cone \(\mathcal{E}^{+}(A)\): \[\|\omega-\omega^{A}\|=\min_{\mu\in\mathcal{E}^{+}(A)}\|\omega-\mu\|.\] The same \(\omega^{A}\) is uniquely characterized within \(\mathcal{E}^{+}(A)\) by \(U^{\omega^{A}}=U^{\omega}\) n.e. on \(A\). **Remark 2.5**.: Assumption \((\mathcal{P}_{1})\) is important for the validity of Corollary 2.4. Indeed, if \(\alpha=2\) and \(A:=B_{r}\), \(r\in(0,\infty)\), then for any \(\omega\in\mathfrak{M}^{+}(B_{r}^{c})\) with \(\omega(\overline{B}_{r}^{c})>0\), we have \(S(\omega^{B_{r}})=S_{r}\)[27, Theorems 4.1, 5.1], and hence the inner balayage \(\omega^{B_{r}}\) is _not_ concentrated on the set \(B_{r}\) itself. (Actually, \(S(\omega^{B_{r}})\cap B_{r}=\varnothing\).)8 Footnote 8: Here and in the sequel we use the notations \(B_{r}:=\{|x|<r\}\), \(\overline{B}_{r}:=\{|x|\leqslant r\}\), \(S_{r}:=\{|x|=r\}\). The following generalization of Corollary 2.4 will be useful in the sequel. **Theorem 2.6**.: Given \(\omega\in\mathfrak{M}^{+}\), assume that \(\omega^{A}\) is of finite energy.9 Then \(\omega^{A}\) is concentrated on \(A\), that is, \(\omega^{A}\in\mathcal{E}^{+}(A)\), and it is the unique solution to the problem of minimizing the Gauss functional \(\|\mu\|^{2}-2\int U^{\omega}\,d\mu\), \(\mu\) ranging over \(\mathcal{E}^{+}(A)\). Alternatively, \(\omega^{A}\) is uniquely characterized within \(\mathcal{E}^{+}(A)\) by \(U^{\omega^{A}}=U^{\omega}\) n.e. on \(A\). 
Proof.: Since \(\omega^{A}=(\omega^{A})^{A}\) (see [26, Corollary 4.2]), Corollary 2.4 applied to \(\omega^{A}\in\mathcal{E}^{+}\) shows that, indeed, \(\omega^{A}\in\mathcal{E}^{+}(A)\), and hence \(\omega^{A}\) is the orthogonal projection of itself onto \(\mathcal{E}^{+}(A)\); or equivalently, it is the (unique) solution to the problem of minimizing the functional \(\|\mu\|^{2}-2\int U^{\omega^{A}}\,d\mu\), \(\mu\) ranging over \(\mathcal{E}^{+}(A)\). This implies the former part of the claim by noting that \(\int U^{\omega^{A}}\,d\mu=\int U^{\omega}\,d\mu\) for all \(\mu\in\mathcal{E}^{+}(A)\), which, in turn, is derived from (2.2) by use of the fact, to be often used in what follows, that any \(\mu\)-measurable subset of \(A\) with \(c_{*}(\cdot)=0\) is \(\mu\)-negligible for any \(\mu\in\mathcal{E}^{+}(A)\). For the latter part, assume (2.2) holds for some \(\mu_{0}\in\mathcal{E}^{+}(A)\) in place of \(\omega^{A}\). By the strengthened version of countable subadditivity for inner capacity (Lemma 2.7), \[U^{\mu_{0}}=U^{\omega}=U^{\omega^{A}}\mbox{ n.e. on }A,\] whence \(\mu_{0}=(\omega^{A})^{A}\), again by Corollary 2.4 applied to \(\omega^{A}\in\mathcal{E}^{+}\). Combining this with \((\omega^{A})^{A}=\omega^{A}\) (see above) gives \(\mu_{0}=\omega^{A}\), thereby completing the whole proof. **Lemma 2.7**.: For arbitrary \(Q\subset\mathbb{R}^{n}\) and Borel \(U_{j}\subset\mathbb{R}^{n}\), \[c_{*}\Bigl{(}\bigcup_{j\in\mathbb{N}}Q\cap U_{j}\Bigr{)}\leqslant\sum_{j\in \mathbb{N}}\,c_{*}(Q\cap U_{j}).\] Proof.: See [10, pp. 157-158] (for \(\alpha=2\), cf. [5, p. 253]); compare with [17, p. 144]. ## 3. An extension of the concept of pseudo-balayage In what follows, we assume that \[c_{*}(A)>0. \tag{3.1}\] Then (and only then) the class \(\mathcal{E}^{+}(A)\) is not reduced to \(\{0\}\), see [10, Lemma 2.3.1], and the problems in question become nontrivial. Fix a (signed) measure \(\omega\in\mathfrak{M}\) (not necessarily of finite energy), and define the external field \(f:\mathbb{R}^{n}\to[-\infty,\infty]\) by means of the formula \[f:=-U^{\omega}.\] Being the difference between two lower semicontinuous (l.s.c.) functions, \(f\) is Borel measurable, and, due to (1.8), \(f\) is finite q.e. on \(\mathbb{R}^{n}\). Let \(\mathcal{E}^{+}_{f}(A)\) stand for the convex cone of all \(\mu\in\mathcal{E}^{+}(A)\) such that \(f\) is \(\mu\)-integrable; then for every \(\mu\in\mathcal{E}^{+}_{f}(A)\), the Gauss functional10 Footnote 10: In constructive function theory, the Gauss functional is also referred to as _the \(f\)-weighted energy_. \[I_{f}(\mu):=\|\mu\|^{2}+2\int f\,d\mu=\|\mu\|^{2}-2\int U^{\omega}\,d\mu\] is finite. Denoting \[\hat{w}_{f}(A):=\inf_{\mu\in\mathcal{E}^{+}_{f}(A)}\,I_{f}(\mu), \tag{3.2}\] we have \[-\infty\leqslant\hat{w}_{f}(A)\leqslant 0, \tag{3.3}\] the upper estimate being caused by the fact that \(0\in\mathcal{E}^{+}_{f}(A)\) while \(I_{f}(0)=0\). **Definition 3.1**.: A measure \(\hat{\omega}^{A}\in\mathcal{E}^{+}_{f}(A)\) with \(I_{f}(\hat{\omega}^{A})=\hat{w}_{f}(A)\) is said to be _the inner \(\alpha\)-Riesz pseudo-balayage_ of \(\omega\) onto \(A\). **Lemma 3.2**.: The inner pseudo-balayage \(\hat{\omega}^{A}\) is unique (if it exists). Proof.: This follows by standard methods based on the convexity of the class \(\mathcal{E}^{+}_{f}(A)\) and the parallelogram identity in the pre-Hilbert space \(\mathcal{E}\), by use of the strict positive definiteness of the Riesz kernel. (See e.g. [21, Proof of Lemma 6]. 
Note that this proof requires \(\hat{w}_{f}(A)\) to be finite, which however necessarily holds whenever \(\hat{\omega}^{A}\) exists.)

**Remark 3.3**.: If \(\omega\) is positive, then, as seen from Theorem 2.6, the concept of inner pseudo-balayage \(\hat{\omega}^{A}\) extends that of inner balayage \(\omega^{A}\) (introduced for \(\alpha\leqslant 2\)) to arbitrary \(\alpha\in(0,n)\). See the present section as well as Section 4 for details. But if \(\omega\) is _signed_, then the inner balayage \(\omega^{A}\), defined for \(\alpha\leqslant 2\) by means of (2.3), may _not_ coincide with the inner pseudo-balayage \(\hat{\omega}^{A}\). (Thus, in case \(\alpha\leqslant 2\), the theory of inner pseudo-balayage may be thought of as an alternative theory of inner balayage for signed measures, which is _not_ equivalent to the standard one.) Indeed, take \(\omega\in\mathfrak{M}\) such that \(\omega=-\omega^{-}\neq 0\). Then \(\omega^{A}=-(\omega^{-})^{A}\neq 0\), whereas \(\hat{\omega}^{A}\), minimizing \(\|\mu\|^{2}+2\int U^{\omega^{-}}\,d\mu\geqslant 0\) over \(\mathcal{E}^{+}_{f}(A)\), is obviously \(0\), and so \(\hat{\omega}^{A}\neq\omega^{A}\).

**Remark 3.4**.: It follows easily from Definition 3.1 that, if \(\hat{\omega}^{A}\) exists, then so does \((\widehat{c\omega})^{A}\) for any \(c\in[0,\infty)\), and moreover \[(\widehat{c\omega})^{A}=c\hat{\omega}^{A}. \tag{3.4}\] However, this fails to hold if \(c<0\), cf. Remark 3.3.

### On the existence of the inner pseudo-balayage

Recall that we are working under the permanent requirements \((\mathcal{P}_{1})\) and (3.1). In the rest of this paper, we also assume that \(\omega\in\mathfrak{M}\) satisfies either of the following properties:11 Footnote 11: \((\mathcal{P}_{3})\) is certainly fulfilled if \(\omega\in\mathfrak{M}\) is compactly supported in \(\overline{A}^{c}\).

\((\mathcal{P}_{2})\) \(\omega\) _is of finite energy:_ \(\omega\in\mathcal{E}\).

\((\mathcal{P}_{3})\) \(\omega^{+}\) _is bounded, \(U^{\omega}\) is upper semicontinuous on \(\overline{A}:=\mathrm{Cl}_{\mathbb{R}^{n}}A\), and_ \[M_{A}:=\sup_{x\in A}\,U^{|\omega|}(x)<\infty. \tag{3.5}\]

Note that in case \((\mathcal{P}_{2})\), the class \(\mathcal{E}^{+}_{f}(A)\) actually coincides with the whole of \(\mathcal{E}^{+}(A)\), cf. (3.13), while in case \((\mathcal{P}_{3})\), it necessarily contains all _bounded_ \(\mu\in\mathcal{E}^{+}(A)\).

**Theorem 3.5**.: The inner pseudo-balayage \(\hat{\omega}^{A}\), introduced by Definition 3.1, does exist. Hence, \[-\infty<\hat{w}_{f}(A)\leqslant 0. \tag{3.6}\] The same \(\hat{\omega}^{A}\) can alternatively be characterized by either of the following (a) or (b):

(a) \(\hat{\omega}^{A}\) is the only measure in \(\mathcal{E}^{+}_{f}(A)\) having the two properties \[\int U^{\hat{\omega}^{A}-\omega}\,d\mu\geqslant 0\ \ \text{for all}\ \mu\in\mathcal{E}^{+}_{f}(A), \tag{3.7}\] \[\int U^{\hat{\omega}^{A}-\omega}\,d\hat{\omega}^{A}=0. \tag{3.8}\]

(b) \(\hat{\omega}^{A}\) is the only measure in \(\mathcal{E}^{+}(A)\) having the two properties12 Footnote 12: Due to (3.9), \(U^{\hat{\omega}^{A}}\geqslant U^{\omega}\) holds true \(\hat{\omega}^{A}\)-a.e.; hence, (3.10) is equivalent to the apparently weaker relation \(U^{\hat{\omega}^{A}}\leqslant U^{\omega}\) \(\hat{\omega}^{A}\)-a.e. Similarly, (3.8) can be replaced by \(\int U^{\hat{\omega}^{A}-\omega}\,d\hat{\omega}^{A}\leqslant 0\).
\[U^{\hat{\omega}^{A}}\geqslant U^{\omega}\ \ \text{n.e.\ on}\ A, \tag{3.9}\] \[U^{\hat{\omega}^{A}}=U^{\omega}\ \ \hat{\omega}^{A}\text{-a.e.} \tag{3.10}\] Proof.: Fixing \(\nu\in{\mathcal{E}}^{+}_{f}(A)\), we shall first show that (3.9) and (3.10) hold true for \(\nu\) in place of \(\hat{\omega}^{A}\) if and only if so do (3.7) and (3.8). It is enough to verify only the "if" part of this claim, the opposite being obvious from the fact that any \(\mu\)-measurable subset of \(A\) with \(c_{*}(\cdot)=0\) is \(\mu\)-negligible for any \(\mu\in{\mathcal{E}}^{+}(A)\). Assuming, therefore, that (3.7) and (3.8) hold true (for \(\nu\) in place of \(\hat{\omega}^{A}\)), suppose to the contrary that (3.9) fails. But then there is compact \(K\subset A\) such that \(U^{\nu}<U^{\omega}\) on \(K\) while \(c(K)>0\),13 hence \(\int U^{\nu-\omega}\,d\tau<0\) for any \(\tau\in{\mathcal{E}}^{+}(K)\), \(\tau\neq 0\), which contradicts (3.7), \(\int f\,d\tau\) being obviously finite. Thus (3.9) does indeed hold, whence Footnote 13: If \(A\) is capacitable (e.g. Borel), we write \(c(A):=c_{*}(A)=c^{*}(A)\). \[U^{\nu}\geqslant U^{\omega}\ \ \nu\mbox{-a.e.} \tag{3.11}\] Further, assuming to the contrary that (3.10) fails to hold, we infer from (3.11) that there exists compact \(Q\subset A\) such that \(\nu(Q)>0\) while \(U^{\nu}>U^{\omega}\) on \(Q\). Together with (3.11), this yields \(\int U^{\nu-\omega}\,d\nu>0\), which however contradicts (3.8). The equivalence thereby verified enables us to prove the statement on the uniqueness in each of assertions (a) and (b). Indeed, suppose that (3.9) and (3.10) are fulfilled by some \(\nu,\nu^{\prime}\in{\mathcal{E}}^{+}(A)\) in place of \(\hat{\omega}^{A}\). Noting from (3.10) that then necessarily \(\nu,\nu^{\prime}\in{\mathcal{E}}^{+}_{f}(A)\), we conclude by applying (3.7) and (3.8) to each of \(\nu\) and \(\nu^{\prime}\) that \[\langle\nu,\nu^{\prime}\rangle\geqslant\int U^{\omega}\,d\nu^{\prime}=\|\nu^ {\prime}\|^{2},\quad\langle\nu^{\prime},\nu\rangle\geqslant\int U^{\omega}\,d \nu=\|\nu\|^{2}.\] Therefore, \[0\leqslant\|\nu-\nu^{\prime}\|^{2}=\big{(}\|\nu\|^{2}-\langle\nu^{\prime}, \nu\rangle\big{)}+\big{(}\|\nu^{\prime}\|^{2}-\langle\nu,\nu^{\prime}\rangle \big{)}\leqslant 0,\] whence \(\nu=\nu^{\prime}\), by virtue of the strict positive definiteness of the \(\alpha\)-Riesz kernel. _Case_ (\({\mathcal{P}}_{2}\)). Assume first that (\({\mathcal{P}}_{2}\)) holds; then the Gauss functional has the form \[I_{f}(\mu)=\|\omega-\mu\|^{2}-\|\omega\|^{2}\ \ \mbox{for all}\ \mu\in{\mathcal{E}}^{+}(A), \tag{3.12}\] whence \[{\mathcal{E}}^{+}_{f}(A)={\mathcal{E}}^{+}(A). \tag{3.13}\] Thus the problem on the existence of the inner pseudo-balayage \(\hat{\omega}^{A}\) is reduced to that on the existence of the orthogonal projection of \(\omega\in{\mathcal{E}}\) onto \({\mathcal{E}}^{+}(A)\), i.e. \[\hat{\omega}^{A}\in{\mathcal{E}}^{+}(A)\ \ \mbox{and}\ \ \|\omega-\hat{\omega}^{A}\|= \min_{\mu\in{\mathcal{E}}^{+}(A)}\ \|\omega-\mu\|.\] Since the convex cone \({\mathcal{E}}^{+}(A)\) is strongly closed by (\({\mathcal{P}}_{1}\)), hence strongly complete, \({\mathcal{E}}^{+}\) being strongly complete by the perfectness of the \(\alpha\)-Riesz kernel, applying [9] (Theorem 1.12.3 and Proposition 1.12.4(2)) shows that such an orthogonal projection does exist, and it is uniquely characterized within \({\mathcal{E}}^{+}(A)\) by both (3.7) and (3.8). In view of the equivalence of (a) and (b) proved above, this implies the theorem. _Case_ (\({\mathcal{P}}_{3}\)). 
The remaining case \((\mathcal{P}_{3})\) will be treated in four steps. _Step 1._ The purpose of this step is to show that the inner pseudo-balayage \(\hat{\omega}^{A}\) exists if and only if there exists the (unique) measure \(\mu_{0}\in{\mathcal{E}}^{+}_{f}(A)\) satisfying (a) (equivalently, (b)), and then necessarily \(\mu_{0}=\hat{\omega}^{A}\). Assume first that \(\hat{\omega}^{A}\) exists. To verify (3.9), suppose to the contrary that there is a compact set \(K\subset A\) with \(c(K)>0\), such that \(U^{\hat{\omega}^{A}}<U^{\omega}\) on \(K\). A straightforward verification then shows that for any \(\tau\in{\mathcal{E}}^{+}(K)\), \(\tau\neq 0\), and any \(t\in(0,\infty)\), \[I_{f}(\hat{\omega}^{A}+t\tau)-I_{f}(\hat{\omega}^{A})=2t\int\!\left(U^{\hat{\omega}^{A}}-U^{\omega}\right)d\tau+t^{2}\|\tau\|^{2}. \tag{3.14}\] As \(\|\tau\|<\infty\), the value on the right in (3.14) (hence, also that on the left) is \(<0\) when \(t>0\) is small enough, which however contradicts Definition 3.1, for \(\hat{\omega}^{A}+t\tau\in\mathcal{E}^{+}_{f}(A)\). Having thus established (3.9), we obtain \[U^{\hat{\omega}^{A}}\geqslant U^{\omega}\ \ \hat{\omega}^{A}\text{-a.e.} \tag{3.15}\] Suppose now that (3.10) fails to hold; then there exists a compact set \(Q\subset A\) with \(\hat{\omega}^{A}(Q)>0\), such that \(U^{\hat{\omega}^{A}}>U^{\omega}\) on \(Q\), cf. (3.15). Denoting \(\upsilon:=\hat{\omega}^{A}|_{Q}\), we have \(\hat{\omega}^{A}-t\upsilon\in\mathcal{E}^{+}_{f}(A)\) for all \(t\in(0,1)\), hence \[I_{f}(\hat{\omega}^{A}-t\upsilon)-I_{f}(\hat{\omega}^{A})=-2t\int\bigl{(}U^{\hat{\omega}^{A}}-U^{\omega}\bigr{)}\,d\upsilon+t^{2}\|\upsilon\|^{2},\] which again contradicts Definition 3.1 when \(t\) is small enough. (Note that, by (3.10), \[U^{\hat{\omega}^{A}}\leqslant U^{\omega}\ \ \text{on}\ S(\hat{\omega}^{A}), \tag{3.16}\] because \(U^{\omega}\) is upper semicontinuous (u.s.c.) on \(\overline{A}\) by \((\mathcal{P}_{3})\), while \(U^{\hat{\omega}^{A}}\) is l.s.c. on \(\mathbb{R}^{n}\).) For the "if" part of the claim, assume that (a) holds true for some (unique) \(\mu_{0}\in\mathcal{E}^{+}_{f}(A)\). To show that then necessarily \(\mu_{0}=\hat{\omega}^{A}\), we only need to verify that \[I_{f}(\mu)-I_{f}(\mu_{0})\geqslant 0\ \ \text{for any}\ \mu\in\mathcal{E}^{+}_{f}(A). \tag{3.17}\] But obviously \[I_{f}(\mu)-I_{f}(\mu_{0}) =\|\mu-\mu_{0}+\mu_{0}\|^{2}-2\int U^{\omega}\,d\mu-\|\mu_{0}\|^ {2}+2\int U^{\omega}\,d\mu_{0}\] \[=\|\mu-\mu_{0}\|^{2}+2\int U^{\mu_{0}-\omega}\,d(\mu-\mu_{0}),\] and relations (3.7) and (3.8) (with \(\mu_{0}\) in place of \(\hat{\omega}^{A}\)) immediately lead to (3.17). _Step 2._ To complete the proof of the theorem, it thus remains to establish the existence of the inner pseudo-balayage \(\hat{\omega}^{A}\). To this end, suppose throughout this step that \(A=K\) is _compact_. As \(c(K)>0\), see (3.1), we may restrict ourselves to _nonzero_ measures \(\mu\in\mathcal{E}^{+}(K)\). For each of those \(\mu\), there are \(t\in(0,\infty)\) and \(\tau\in\mathcal{E}^{+}(K)\) with \(\tau(K)=1\) such that \(\mu=t\tau\). The potential \(U^{|\omega|}\) being bounded on \(K\) by \((\mathcal{P}_{3})\), \[I_{f}(\mu) =t^{2}\|\tau\|^{2}-2t\int U^{\omega}\,d\tau\geqslant t^{2}\|\tau \|^{2}-2t\int U^{\omega^{+}}\,d\tau\] \[\geqslant t^{2}c(K)^{-1}-2tM_{K}=t^{2}\bigl{(}c(K)^{-1}-2M_{K}t^{- 1}\bigr{)}, \tag{3.18}\] \(M_{K}\in[0,\infty)\) being introduced by (3.5) (with \(A:=K\)). 
Hence, by virtue of (3.18), \(I_{f}(\mu)>0\) for all \(\mu\in\mathcal{E}^{+}(K)\) having the property \[\mu(K)>2M_{K}c(K)=:L_{K}\in[0,\infty).\] On the other hand, \(\hat{w}_{f}(K)\leqslant 0\), cf. (3.3). In view of the above, \(\hat{w}_{f}(K)\) would therefore be the same if \(\mathcal{E}^{+}_{f}(K)\) in (3.2) were replaced by \[\mathcal{E}^{+}_{L_{K}}(K):=\bigl{\{}\mu\in\mathcal{E}^{+}(K):\ \mu(K)\leqslant L_{K} \bigr{\}}, \tag{3.19}\] that is, \[\hat{w}_{f}(K)=\inf_{\mu\in\mathcal{E}^{+}_{L_{K}}(K)}I_{f}(\mu)=:\hat{w}_{f,L _{K}}(K). \tag{3.20}\] Hence, \[-\infty<-2M_{K}L_{K}\leqslant\hat{w}_{f}(K)\leqslant 0.\] Choose a (minimizing) sequence \((\mu_{j})\subset\mathcal{E}^{+}_{L_{K}}(K)\) such that \[\lim_{j\to\infty}\,I_{f}(\mu_{j})=\hat{w}_{f,L_{K}}(K).\] Being vaguely bounded, cf. (3.19), the sequence \((\mu_{j})\) is vaguely relatively compact [3, Section III.1, Proposition 15], and so there is a subsequence \((\mu_{j_{k}})\) converging vaguely to some \(\mu_{0}\in\mathfrak{M}^{+}(K)\). (Here we have used the first countability of the vague topology on \(\mathfrak{M}\), see [30, Lemma 4.4], as well as the fact that \(\mathfrak{M}^{+}(K)\) is vaguely closed, see [3, Section III.2, Proposition 6].) By the principle of descent [17, Eq. (1.4.5)], \[\|\mu_{0}\|^{2}\leqslant\liminf_{k\to\infty}\,\|\mu_{j_{k}}\|^{2}.\] Furthermore, by [3, Section IV.4.4, Corollary 3], \[\int U^{\omega}\,d\mu_{0}\geqslant\limsup_{k\to\infty}\,\int U^{\omega}\,d \mu_{j_{k}}\in(-\infty,\infty),\] \(U^{\omega}\) being bounded and u.s.c. on the (compact) set \(K\). This altogether gives \[\hat{w}_{f}(K)\leqslant I_{f}(\mu_{0})\leqslant\liminf_{k\to\infty}\,I_{f}( \mu_{j_{k}})=\hat{w}_{f,L_{K}}(K),\] which combined with (3.20) shows that \(\mu_{0}\) serves as the pseudo-balayage \(\hat{\omega}^{K}\). _Step 3_. Our next aim is to show that the constant \(L_{K}\), satisfying (3.20), can be defined to be independent of \(K\in\mathfrak{C}_{A}\) large enough. (To be exact, here and in the sequel \(K\in\mathfrak{C}_{A}\) is chosen to follow some \(K_{0}\) with \(c(K_{0})>0\).) By (3.9) and (3.16), \(U^{\hat{\omega}^{K}}=U^{\omega}\) n.e. on \(\mathcal{S}:=S(\hat{\omega}^{K})\), hence \(\gamma_{\mathcal{S}}\)-a.e., where \(\gamma_{\mathcal{S}}\) denotes the capacitary measure on \(\mathcal{S}\) (see [10, Theorem 2.5]). Since \(U^{\gamma_{\mathcal{S}}}\geqslant 1\) holds true n.e. on \(\mathcal{S}\), hence \(\hat{\omega}^{K}\)-a.e., we get, by Fubini's theorem, \[\hat{\omega}^{K}(\mathbb{R}^{n}) =\int 1\,d\hat{\omega}^{K}\leqslant\int U^{\gamma_{\mathcal{S}}} \,d\hat{\omega}^{K}=\int U^{\hat{\omega}^{K}}\,d\gamma_{\mathcal{S}}\] \[=\int U^{\omega}\,d\gamma_{\mathcal{S}}=\int U^{\gamma_{\mathcal{ S}}}\,d\omega\leqslant\int U^{\gamma_{\mathcal{S}}}\,d\omega^{+}.\] As \(U^{\gamma_{\mathcal{S}}}\leqslant 1\) on \(S(\gamma_{\mathcal{S}})\), applying the Frostman maximum principle if \(\alpha\leqslant 2\) (see [17, Theorem 1.10]), or [17, Theorem 1.5] otherwise, gives \[\hat{\omega}^{K}(\mathbb{R}^{n})\leqslant C_{n,\alpha}\omega^{+}(\mathbb{R}^ {n})=:L\ \ \text{for all compact}\ K\subset A, \tag{3.21}\] where \[C_{n,\alpha}:=\left\{\begin{array}{cc}1&\text{if}\ \ \alpha\leqslant 2,\\ 2^{n-\alpha}&\text{otherwise}.\end{array}\right. \tag{3.22}\] We are thus led to the following conclusion, crucial to our proof. 
_\(\blacklozenge\) We have_ \[\hat{w}_{f}(K)=\hat{w}_{f,L}(K)\in[-2M_{A}L,0]\ \ \text{for all}\ K\in\mathfrak{C}_{A}, \tag{3.23}\] \(M_{A}\in[0,\infty)\) _and \(L\in[0,\infty)\) being introduced by (3.5) and (3.21), respectively._ _The infimum \(\hat{w}_{f,L}(K)\) is an actual minimum with the minimizer \(\hat{\omega}^{K}\)._ _Step 4_. To establish the existence of \(\hat{\omega}^{A}\) for noncompact \(A\), we first note that the net \(\big{(}\hat{w}_{f,L}(K)\big{)}_{K\in\mathfrak{C}_{A}}\) decreases, and moreover, by (3.23), \[-\infty<\lim_{K\uparrow A}\,\hat{w}_{f,L}(K)\leqslant 0. \tag{3.24}\] For any compact \(K,K^{\prime}\subset A\) such that \(K\subset K^{\prime}\) and \(c(K)>0\), \[(\hat{\omega}^{K}+\hat{\omega}^{K^{\prime}})/2\in\mathcal{E}^{+}_{L}(K^{\prime}),\] whence \[\|\hat{\omega}^{K}+\hat{\omega}^{K^{\prime}}\|^{2}-4\int U^{\omega}\,d(\hat{ \omega}^{K}+\hat{\omega}^{K^{\prime}})\geqslant 4\hat{w}_{f,L}(K^{\prime})=4I_{f} (\hat{\omega}^{K^{\prime}}).\] Applying the parallelogram identity to \(\hat{\omega}^{K},\hat{\omega}^{K^{\prime}}\in\mathcal{E}^{+}\) we therefore get \[\|\hat{\omega}^{K}-\hat{\omega}^{K^{\prime}}\|^{2}\leqslant 2I_{f}(\hat{\omega}^{ K})-2I_{f}(\hat{\omega}^{K^{\prime}}). \tag{3.25}\] Noting from (3.24) that the net \(\big{(}I_{f}(\hat{\omega}^{K})\big{)}_{K\geqslant K_{0}}\) is Cauchy in \(\mathbb{R}\), we infer from (3.25) that the net \((\hat{\omega}^{K})_{K\geqslant K_{0}}\) is strong Cauchy in \(\mathcal{E}^{+}(A)\). The cone \(\mathcal{E}^{+}(A)\) being strongly closed (hence strongly complete) by \((\mathcal{P}_{1})\), there exists \(\zeta\in\mathcal{E}^{+}(A)\) such that \[\hat{\omega}^{K}\to\zeta\ \ \text{strongly (hence vaguely) in $\mathcal{E}^{+}(A)$ as $K\uparrow A$}. \tag{3.26}\] Moreover, \(\zeta\in\mathcal{E}^{+}_{L}(A)\), the mapping \(\mu\mapsto\mu(\mathbb{R}^{n})\) being vaguely l.s.c. on \(\mathfrak{M}^{+}\).14 Footnote 14: See [3, Section IV.1, Proposition 4] applied to the (positive, l.s.c.) function \(1\) on \(\mathbb{R}^{n}\). It is useful to point out that in the case where the set in question is _compact_, the mapping \(\mu\mapsto\int g\,d\mu\) remains vaguely l.s.c. on \(\mathfrak{M}^{+}(K)\) for _any_ l.s.c. function \(g\) (not necessarily positive). This follows by replacing \(g\) by \(g^{\prime}:=g+c\geqslant 0\), where \(c\in(0,\infty)\), a l.s.c. function on a compact set being lower bounded, and then by making use of the vague continuity of the mapping \(\mu\mapsto\mu(K)\) on \(\mathfrak{M}^{+}(K)\). We claim that this \(\zeta\) serves as the inner pseudo-balayage \(\hat{\omega}^{A}\). As shown above (Step 1), this will follow once we verify (3.9) and (3.10) for \(\zeta\) in place of \(\hat{\omega}^{A}\). To verify (3.9) (for \(\zeta\) in place of \(\hat{\omega}^{A}\)), it is enough to do this for any given compact \(K_{*}\subset A\). The strong topology on \(\mathcal{E}^{+}\) being first-countable, in view of (3.26) there is a subsequence \((\hat{\omega}^{K_{j}})_{j\in\mathbb{N}}\) of the net \((\hat{\omega}^{K})_{K\in\mathfrak{C}_{A}}\) such that \(K_{j}\supset K_{*}\) for all \(j\), and \[\hat{\omega}^{K_{j}}\to\zeta\ \ \text{strongly (hence vaguely) in $\mathcal{E}^{+}$ as $j\to\infty$}. \tag{3.27}\] Passing if necessary to a subsequence and changing the notations, we conclude from (3.27), by use of [10, p. 166, Remark], that \[U^{\zeta}=\lim_{j\to\infty}\,U^{\hat{\omega}^{K_{j}}}\ \ \text{n.e. on $\mathbb{R}^{n}$}. 
\tag{3.28}\] Applying now (3.9) to each \(\hat{\omega}^{K_{j}}\), and then letting \(j\to\infty\), on account of the countable subadditivity of inner capacity on Borel sets [17, p. 144] we infer from (3.28) that (3.9) (with \(\zeta\) in place of \(\hat{\omega}^{A}\)) does indeed hold n.e. on \(K_{*}\), whence n.e. on \(A\). To establish (3.10) for \(\zeta\) in place of \(\hat{\omega}^{A}\), we note from (3.16) applied to \(K_{j}\) that \[U^{\hat{\omega}^{K_{j}}}\leqslant U^{\omega}\ \ \text{on $S(\hat{\omega}^{K_{j}})$}, \tag{3.29}\] (\(K_{j}\)) being the sequence chosen above. Since \((\hat{\omega}^{K_{j}})\) converges to \(\zeta\) vaguely, see (3.27), for every \(x\in S(\zeta)\) there exist a subsequence \((K_{j_{k}})\) of \((K_{j})\) and points \(x_{j_{k}}\in S(\hat{\omega}^{K_{j_{k}}})\) such that \(x_{j_{k}}\), \(k\in\mathbb{N}\), approach \(x\) as \(k\to\infty\). Thus, by (3.29), \[U^{\hat{\omega}^{K_{j_{k}}}}(x_{j_{k}})\leqslant U^{\omega}(x_{j_{k}})\ \ \text{for all $k\in\mathbb{N}$}.\] Letting here \(k\to\infty\), in view of the upper semicontinuity of \(U^{\omega}\) on \(\overline{A}\) and the lower semicontinuity of the mapping \((x,\mu)\mapsto U^{\mu}(x)\) on \(\mathbb{R}^{n}\times\mathfrak{M}^{+}\), where \(\mathfrak{M}^{+}\) is equipped with the vague topology [10, Lemma 2.2.1(b)], we obtain (3.10) for \(\zeta\) in place of \(\hat{\omega}^{A}\). This implies that \[\zeta=\hat{\omega}^{A}, \tag{3.30}\] thereby completing the proof of the whole theorem. **Remark 3.6**.: Assume for a moment that \(\omega\) is positive, and that \(A\) is quasiclosed and Borel. If moreover either \(\omega\in\mathcal{E}^{+}\), or \(U^{\omega}|_{A}\) is bounded while \(c^{*}(A)<\infty\),15 then Theorem 3.5 can be deduced from Fuglede's result [12, Theorem 4.10] on the _outer_ pseudo-balayage with respect to a perfect kernel on a locally compact (Hausdorff) space. The methods developed in the above proof are essentially different from those in [12], which enabled us to establish the existence of the inner Riesz pseudo-balayage \(\hat{\omega}^{A}\) for fairly general \(\omega\) and \(A\), see \((\mathcal{P}_{1})\) and \((\mathcal{P}_{2})\), or \((\mathcal{P}_{1})\) and \((\mathcal{P}_{3})\). In this regard, it is also worth noting that the above proof seems to admit a generalization to suitable perfect kernels on locally compact spaces, which we plan to pursue in future work. Footnote 15: A quasiclosed set of finite outer capacity is actually quasicompact [12, Lemma 3.14]. ### Further properties of the inner pseudo-balayage The following theorem justifies the term "inner" pseudo-balayage. **Theorem 3.7**.: \(\hat{\omega}^{K}\to\hat{\omega}^{A}\) strongly and vaguely in \(\mathcal{E}^{+}\) as \(K\uparrow A\). Proof.: In case \((\mathcal{P}_{3})\), this follows by substituting (3.30) into (3.26). It is thus left to consider case \((\mathcal{P}_{2})\). Then \(\omega\in\mathcal{E}\), and therefore \(\hat{\omega}^{K}\), resp. \(\hat{\omega}^{A}\), is the orthogonal projection of \(\omega\) onto the (convex, strongly complete) cone \(\mathcal{E}^{+}(K)\), resp. \(\mathcal{E}^{+}(A)\). 
A slight modification of the proof of (3.25) shows that \[\|\hat{\omega}^{K}-\hat{\omega}^{K^{\prime}}\|^{2}\leqslant 2I_{f}(\hat{ \omega}^{K})-2I_{f}(\hat{\omega}^{K^{\prime}})\ \ \text{whenever}\ K\subset K^{\prime}\quad(K,K^{\prime}\in\mathfrak{C}_{A}).\] Noting that the net \(\big{(}\hat{w}_{f}(K)\big{)}_{K\in\mathfrak{C}_{A}}\) is decreasing and, by (3.12), bounded: \[-\|\omega\|^{2}\leqslant\hat{w}_{f}(K)\leqslant 0\ \ \text{for all}\ K\in\mathfrak{C}_{A},\] we conclude from the above that the net \((\hat{\omega}^{K})_{K\in\mathfrak{C}_{A}}\subset\mathcal{E}^{+}(A)\) is strong Cauchy, and hence converges strongly and vaguely to some (unique) \(\mu_{0}\in\mathcal{E}^{+}(A)\). This implies \[\hat{w}_{f}(A)\leqslant I_{f}(\mu_{0})=\lim_{K\uparrow A}\,I_{f}(\hat{\omega} ^{K})=\lim_{K\uparrow A}\,\hat{w}_{f}(K),\] the former equality being derived from the strong convergence of \((\hat{\omega}^{K})\) to \(\mu_{0}\) by use of (3.12). To verify that this \(\mu_{0}\) actually equals \(\hat{\omega}^{A}\), it thus remains to show that \[\lim_{K\uparrow A}\,\hat{w}_{f}(K)\leqslant\hat{w}_{f}(A). \tag{3.31}\] But for every \(\mu\in\mathcal{E}^{+}(A)\), \[I_{f}(\mu)=\lim_{K\uparrow A}\,I_{f}(\mu|_{K})\geqslant\lim_{K\uparrow A}\, \hat{w}_{f}(K),\] where the equality follows by applying [10, Lemma 1.2.2] to each of the positive, l.s.c., \(\mu\)-integrable functions \(\kappa_{\alpha}\), \(U^{\omega^{+}}\), and \(U^{\omega^{-}}\), the set \(A\) being \(\mu\)-measurable. Letting now \(\mu\) range over \(\mathcal{E}^{+}(A)\) we get (3.31), thereby completing the proof of the theorem. **Corollary 3.8**.: If \(U^{\omega}\) is u.s.c. on \(\overline{A}\) (which holds in particular in case \((\mathcal{P}_{3})\)), then \[\hat{\omega}^{A}(\mathbb{R}^{n})\leqslant C_{n,\alpha}\omega^{+}(\mathbb{R}^{n }), \tag{3.32}\] \(C_{n,\alpha}\) being introduced by (3.22). Proof.: In view of the upper semicontinuity of \(U^{\omega}\) on \(\overline{A}\), the proof of (3.21), provided in case \((\mathcal{P}_{3})\), remains valid in case \((\mathcal{P}_{2})\) as well. Hence, in both cases \((\mathcal{P}_{2})\) and \((\mathcal{P}_{3})\), \[\hat{\omega}^{K}(\mathbb{R}^{n})\leqslant C_{n,\alpha}\omega^{+}(\mathbb{R}^{n })\ \ \text{for all}\ K\in\mathfrak{C}_{A},\] which results in (3.32) since the net \((\hat{\omega}^{K})_{K\in\mathfrak{C}_{A}}\) converges vaguely to \(\hat{\omega}^{A}\) (Theorem 3.7) while the mapping \(\mu\mapsto\mu(\mathbb{R}^{n})\) is vaguely l.s.c. on \(\mathfrak{M}^{+}\) (cf. footnote 14). ## 4. The comparison of the concepts of pseudo-balayage and balayage The aim of Examples 4.1-4.4 below is to demonstrate that the usual nice properties of the inner balayage may fail to hold when dealing with the inner pseudo-balayage, and this occurs even in the simplest case of the Dirac measure and a sphere. \(\blacklozenge\) The first fact illustrating the difference between these two concepts is that the inner pseudo-balayage may increase the total mass of a positive measure (see e.g. (4.5), pertaining to \(\alpha>2\)). Recall that, if \(\alpha\in(0,2]\), then, by [26, Corollary 4.9], \[\mu^{A}(\mathbb{R}^{n})\leqslant\mu(\mathbb{R}^{n})\ \ \text{for any}\ \mu\in \mathfrak{M}^{+}\ \text{and}\ A\subset\mathbb{R}^{n}. 
\tag{4.1}\] \(\blacklozenge\) The second one is that, for \(\alpha>2\), there exist a set \(A\subset\mathbb{R}^{n}\) which is not inner \(\alpha\)-thin at infinity16 Footnote 16: By [16, Definition 3.1], \(A\subset\mathbb{R}^{n}\) is said to be _inner \(\alpha\)-thin at infinity_ if \[\sum_{j\in\mathbb{N}}\frac{c_{*}(A_{j})}{q^{j(n-\alpha)}}<\infty,\] where \(q\in(1,\infty)\) and \(A_{j}:=A\cap\{x\in\mathbb{R}^{n}:\ q^{j}\leqslant|x|<q^{j+1}\}\). For \(\alpha\in(0,2]\), see also [27, Definition 2.1], while for \(\alpha=2\) and Borel \(A\), see [7, pp. 175–176]. and a measure \(\mu\in\mathfrak{M}^{+}\) such that \[\hat{\mu}^{A}(\mathbb{R}^{n})>\mu(\mathbb{R}^{n}),\] see e.g. (4.5). In contrast to that, not being inner \(\alpha\)-thin at infinity is necessary and sufficient for equality to prevail in (4.1) for _all_\(\mu\in\mathfrak{M}^{+}\), see [27, Corollary 5.3]. **Example 4.1.** Let \(D\subset\mathbb{R}^{n}\) be a bounded (connected, open) domain, \(\omega:=\varepsilon_{x_{0}}\), where \(\varepsilon_{x_{0}}\) is the unit Dirac measure at \(x_{0}\in D\), and let \(A\) be the inverse of \(\overline{D}\setminus\{x_{0}\}\) with respect to the sphere \(S_{x_{0},1}:=\{|x-x_{0}|=1\}\). For these \(A\) and \(\omega\), \((\mathcal{P}_{1})\) and \((\mathcal{P}_{3})\) are fulfilled, and hence the pseudo-balayage \(\hat{\varepsilon}^{A}_{x_{0}}\) exists and is unique (Theorem 3.5). Assume first that \(\alpha\in(0,2]\). Since the balayage \(\varepsilon^{A}_{x_{0}}\) of \(\varepsilon_{x_{0}}\) onto \(A\) is obviously of finite energy, Theorem 2.6 yields \[\hat{\varepsilon}^{A}_{x_{0}}=\varepsilon^{A}_{x_{0}}. \tag{4.2}\] Applying [17, Section IV.5.20] we therefore infer that the pseudo-balayage \(\hat{\varepsilon}^{A}_{x_{0}}\) is actually the Kelvin transform \(\gamma^{*}_{\overline{D}}\) of the capacitary measure \(\gamma_{\overline{D}}\) on \(\overline{D}\) with respect to the sphere \(S_{x_{0},1}\). Hence, in particular, \[S(\hat{\varepsilon}^{A}_{x_{0}})=\left\{\begin{array}{ll}\partial A&\text{ if }\ \alpha=2,\\ A&\text{ if }\ \alpha<2,\end{array}\right. \tag{4.3}\] where \(\partial A:=\partial_{\mathbb{R}^{n}}A\). The set \(A\) not being \(\alpha\)-thin at infinity, we also get, in consequence of (4.2) and [13, Theorem 3.22], \[\hat{\varepsilon}^{A}_{x_{0}}(\mathbb{R}^{n})=\varepsilon_{x_{0}}(\mathbb{R}^ {n})=1. \tag{4.4}\] Let now \(\alpha\in(2,n)\). We aim to show that then, in contrast to (4.4),17 Footnote 17: Compare with (6.11). \[1=\varepsilon_{x_{0}}(\mathbb{R}^{n})<\hat{\varepsilon}^{A}_{x_{0}}(\mathbb{R} ^{n})\leqslant 2^{n-\alpha}, \tag{4.5}\] the latter inequality being valid by virtue of (3.32). As is known, the capacitary measure \(\gamma_{\overline{D}}\) on \(\overline{D}\) is the (unique) measure in \(\mathcal{E}^{+}(\overline{D})\) of (finite) total mass \(c(\overline{D})\), and such that (see [17, Sections II.1.3, II.3.13]) \[U^{\gamma_{\overline{D}}}\geqslant 1\ \ \text{n.e.\ on}\ \overline{D}, \tag{4.6}\] \[U^{\gamma_{\overline{D}}}=1\ \ \text{n.e.\ on}\ S(\gamma_{ \overline{D}}),\] (4.7) \[U^{\gamma_{\overline{D}}}>1\ \ \text{on}\ D, \tag{4.8}\] (4.8) being caused by the fact that for \(\alpha\in(2,n)\), the \(\alpha\)-Riesz potential of a positive measure is superharmonic on \(\mathbb{R}^{n}\)[17, Theorem 1.4]. For the Kelvin transform \(\gamma_{\overline{D}}^{*}\) of \(\gamma_{\overline{D}}\), applying [17, Eqs. 
(4.5.2)-(4.5.4)] yields \[U^{\gamma_{\overline{D}}^{*}}(x)=|x-x_{0}|^{\alpha-n}U^{\gamma_{ \overline{D}}}(x^{*}),\] \[I(\gamma_{\overline{D}}^{*})=I(\gamma_{\overline{D}}),\] \[\gamma_{\overline{D}}^{*}(\mathbb{R}^{n})=U^{\gamma_{\overline{D }}}(x_{0}), \tag{4.9}\] \(x^{*}\) being the inverse of \(x\) with respect to \(S_{x_{0},1}\). Combined with (4.6) and (4.7), this shows that \(\gamma_{\overline{D}}^{*}\) is a measure of the class \(\mathcal{E}^{+}(A)\) having the properties \[U^{\gamma_{\overline{D}}^{*}}\geqslant U^{\varepsilon_{x_{0}}} \text{ n.e. on }A,\] \[U^{\gamma_{\overline{D}}^{*}}=U^{\varepsilon_{x_{0}}} \text{ n.e. on }S(\gamma_{\overline{D}}^{*}),\] whence (3.9) and (3.10) with \(\omega:=\varepsilon_{x_{0}}\) and \(\hat{\omega}^{A}:=\gamma_{\overline{D}}^{*}\). Thus, by virtue of Theorem 3.5, \[\hat{\varepsilon}_{x_{0}}^{A}=\gamma_{\overline{D}}^{*},\] which together with (4.8) and (4.9) establishes (4.5). Also note that \[S(\hat{\varepsilon}_{x_{0}}^{A})\subset\partial A\] (compare with (4.3)). **Example 4.2.** A slight modification of arguments in Example 4.1 shows that if \(G\subset\mathbb{R}^{n}\) is an open, relatively compact set, then for any \(\alpha\in(2,n)\) and any \(x_{0}\in G\), \[1<\hat{\varepsilon}_{x_{0}}^{G^{c}}(\mathbb{R}^{n})\leqslant 2^{n-\alpha}.\] **Example 4.3.** Let \(\alpha\in(2,n)\), \(A:=B_{R}^{c}:=\{|x|\geqslant R\}\), \(R\in(0,\infty)\), and let \(\omega:=\varepsilon_{0}\), where \(\varepsilon_{0}\) denotes the unit Dirac measure at \(x=0\). As follows from Example 4.1, \[\hat{\varepsilon}_{0}^{B_{R}^{c}}=\gamma_{\overline{B}_{r}}^{*},\] where \(\gamma_{\overline{B}_{r}}\) is the capacitary measure on the ball \(\overline{B}_{r}\), \(r:=1/R\), and \(\gamma_{\overline{B}_{r}}^{*}\) is the Kelvin transform of \(\gamma_{\overline{B}_{r}}\) with respect to the unit sphere \(S_{1}\). Therefore, by symmetry arguments applied to \(\gamma_{\overline{B}_{r}}\), \(\hat{\varepsilon}_{0}^{B_{R}^{c}}\) is uniformly distributed over the sphere \(S_{R}\), and such that \[1<\hat{\varepsilon}_{0}^{B_{R}^{c}}(\mathbb{R}^{n})\leqslant 2^{n-\alpha}.\] **Example 4.4.** Let \(\alpha\in(2,n)\), \(A:=S_{R}\), \(R\in(0,\infty)\), and let \(\omega:=\varepsilon_{0}\). As shown in Example 4.3, \[S(\hat{\varepsilon}_{0}^{B_{R}^{c}})=S_{R},\] which in view of Definition 3.1 gives \[\hat{\varepsilon}_{0}^{S_{R}}=\hat{\varepsilon}_{0}^{B_{R}^{c}}.\] Thus \(\hat{\varepsilon}_{0}^{S_{R}}\) is uniformly distributed over the sphere \(S_{R}\), and such that \[1<\hat{\varepsilon}_{0}^{S_{R}}(\mathbb{R}^{n})\leqslant 2^{n-\alpha}.\] Also note that \(\hat{\varepsilon}_{0}^{S_{R}}\) is, in fact, the Kelvin transform of the capacitary measure \(\gamma_{S_{r}}\) (\(=\gamma_{\overline{B}_{r}}\)), where \(r:=1/R\), with respect to the unit sphere \(S_{1}\). ## 5. The inner Gauss variational problem
As before, consider a set \(A\subset\mathbb{R}^{n}\) such that (3.1) and \((\mathcal{P}_{1})\) are fulfilled, a (signed) measure \(\omega\in\mathfrak{M}\) satisfying either \((\mathcal{P}_{2})\) or \((\mathcal{P}_{3})\), and the external field \(f\) given by \[f:=-U^{\omega}.\] According to Theorem 3.5, the problem of minimizing the Gauss functional \(I_{f}(\mu)\), \[I_{f}(\mu):=\|\mu\|^{2}+2\int f\,d\mu=\|\mu\|^{2}-2\int U^{\omega}\,d\mu,\] over the class \(\mathcal{E}_{f}^{+}(A)\) of all \(\mu\in\mathcal{E}^{+}(A)\) with finite \(I_{f}(\mu)\) is uniquely solvable, and its solution \(\hat{\omega}^{A}\), called the inner pseudo-balayage of \(\omega\) onto \(A\), is uniquely characterized within \(\mathcal{E}_{f}^{+}(A)\) by both (3.7) and (3.8) -- or, equivalently, by both (3.9) and (3.10). The rest of the paper is devoted to showing that the concept of inner pseudo-balayage serves as a powerful tool in the inner Gauss variational problem, which reads as follows. **Problem 5.1.** Does there exist \(\lambda_{A,f}\) minimizing \(I_{f}(\mu)\) within \(\breve{\mathcal{E}}^{+}(A)\)? Here, \[\breve{\mathcal{E}}^{+}(A):=\big{\{}\mu\in\mathcal{E}^{+}(A):\ \mu(\mathbb{R}^{n})=1\big{\}}.\] Recent results on Problem 5.1, which goes back to C.F. Gauss [14], are reviewed in the monographs [2, 20] (see also the numerous references therein); for some of the latest research on this topic, see [8, 32, 33]. **Remark 5.2.** If \(A=K\) is compact while \(f\) is l.s.c. on \(K\), then the existence of the minimizer \(\lambda_{K,f}\) can easily be verified, by use of the fact that the class \(\breve{\mathcal{E}}^{+}(K)\) is vaguely compact, cf. [3, Section III.1.9, Corollary 3], while the Gauss functional \(I_{f}(\cdot)\) is vaguely l.s.c. on \(\mathfrak{M}^{+}(K)\), the latter being obvious from the principle of descent and the vague lower semicontinuity of the mapping \(\mu\mapsto\int f\,d\mu\) on \(\mathfrak{M}^{+}(K)\) (footnote 14). However, such a proof, based on the vague topology only, is no longer applicable if either \(A\) is noncompact, or \(f\) is not l.s.c. To investigate Problem 5.1 in the general case where \(A\) is noncompact and/or \(f\) is not l.s.c., we have recently developed an approach based on the systematic use of both the strong and vague topologies on the pre-Hilbert space \(\mathcal{E}\), which relies essentially on the perfectness of the Riesz kernels, see [33]. However, if \(c_{*}(A)=\infty\), then the analysis performed in [33] was limited to \(\alpha\leqslant 2\) and \(\omega\geqslant 0\), being mainly based on the theory of inner balayage for positive measures. Motivated by this observation, we generalize the approach, suggested in [33], to _arbitrary_ \(\alpha\in(0,n)\) and _signed_ \(\omega\), by use of the theory of inner pseudo-balayage, developed in Sections 3, 4 above. The results thereby established are formulated in Section 6, and are proved in Sections 7-9. It is worth emphasizing that those results improve substantially many recent ones from [8, 33] (see Section 6.3 for some details). ### Preliminary results To begin with, observe that under either of assumptions \((\mathcal{P}_{2})\) or \((\mathcal{P}_{3})\), the Gauss functional \(I_{f}(\mu)\) is finite for all \(\mu\in\breve{\mathcal{E}}^{+}(A)\). Thus \[\breve{\mathcal{E}}^{+}(A)\subset\mathcal{E}_{f}^{+}(A), \tag{5.1}\] and therefore \[\min_{\mu\in\mathcal{E}_{f}^{+}(A)}\,I_{f}(\mu)=:\hat{w}_{f}(A)\leqslant w_{f} (A):=\inf_{\mu\in\breve{\mathcal{E}}^{+}(A)}\,I_{f}(\mu). 
\tag{5.2}\] **Lemma 5.3.**\(-\infty<w_{f}(A)<\infty\). Proof.: According to [21, Lemma 5], \(w_{f}(A)<\infty\) is equivalent to the inequality \[c_{*}\big{(}\{x\in A:\ |f|(x)<\infty\}\big{)}>0,\] which indeed holds true by virtue of (3.1) and the fact that \(f\) is finite n.e. on \(\mathbb{R}^{n}\). (Here the strengthened version of countable subadditivity for inner capacity has been utilized, see Lemma 2.7.) Finally, combining (5.2) and (3.6) gives \(w_{f}(A)>-\infty\). The solution \(\lambda_{A,f}\) to Problem 5.1 is _unique_ (if it exists), which can be proved by use of the convexity of the class \(\breve{\mathcal{E}}^{+}(A)\) and the parallelogram identity in the pre-Hilbert space \(\mathcal{E}\). Such \(\lambda_{A,f}\) is said to be _the inner \(f\)-weighted equilibrium measure_. The following theorem, providing characteristic properties of \(\lambda_{A,f}\), can be derived from the author's earlier paper [21] (see Theorems 1, 2 and Proposition 1 therein). **Theorem 5.4.** For \(\lambda\in\breve{\mathcal{E}}^{+}(A)\) to be the (unique) solution \(\lambda_{A,f}\) to Problem 5.1, it is necessary and sufficient that either of the following two inequalities be fulfilled: \[U_{f}^{\lambda}\geqslant\int U_{f}^{\lambda}\,d\lambda\ \ \text{n.e.\ on}\ A, \tag{5.3}\] \[U_{f}^{\lambda}\leqslant w_{f}(A)-\int f\,d\lambda\ \ \text{$\lambda$-a.e.\ on}\ A, \tag{5.4}\] where \[U_{f}^{\lambda}:=U^{\lambda}+f\] is said to be the \(f\)-weighted potential of \(\lambda\). If (5.3) or (5.4) holds true, then actually \[\int U_{f}^{\lambda}\,d\lambda=w_{f}(A)-\int f\,d\lambda=:c_{A,f}\in(-\infty, \infty), \tag{5.5}\] \(c_{A,f}\) being referred to as the inner \(f\)-weighted equilibrium constant.18 Footnote 18: Similarly to [20, p. 27], \(c_{A,f}\) might also be referred to as _the inner modified Robin constant_. **Remark 5.5.** If \(f\) is l.s.c. on \(\overline{A}\) (which occurs e.g. in case \((\mathcal{P}_{3})\)), then, by (5.4), \[U_{f}^{\lambda_{A,f}}\leqslant c_{A,f}\ \ \text{on}\ S(\lambda_{A,f}),\] which combined with (5.3) gives \[U_{f}^{\lambda_{A,f}}=c_{A,f}\ \ \text{n.e.\ on}\ A\cap S(\lambda_{A,f}).\] ## 6. On the existence of \(\lambda_{A,f}\) and its properties In all that follows, except for Corollary 6.8, we assume \(A\) and \(f\) to satisfy the permanent requirements, recalled at the beginning of Section 5. ### On the solvability of Problem 5.1 Sufficient and/or necessary conditions for the existence of the (unique) solution \(\lambda_{A,f}\) to Problem 5.1 are established in Theorems 6.1, 6.3, 6.4, 6.7 and Corollaries 6.2, 6.5, 6.6, 6.8 below. **Theorem 6.1.** For \(\lambda_{A,f}\) to exist, it is sufficient that19 Footnote 19: See Remark 8.2 below for an extension of Theorem 6.1 and Corollary 6.2. \[c_{*}(A)<\infty. \tag{6.1}\] **Corollary 6.2.**\(\lambda_{A,f}\) does exist whenever \(A\) is quasicompact. Proof.: This is obvious since for quasicompact \(A\), both \((\mathcal{P}_{1})\) and (6.1) hold true, by virtue of [33, Theorem 3.9] and [11, Definition 2.1], respectively. **Theorem 6.3.** For \(\lambda_{A,f}\) to exist, it is sufficient that \[\hat{\omega}^{A}(\mathbb{R}^{n})=1, \tag{6.2}\] \(\hat{\omega}^{A}\) being the inner pseudo-balayage of \(\omega\) onto \(A\). Furthermore, then \[\lambda_{A,f}=\hat{\omega}^{A},\quad w_{f}(A)=\hat{w}_{f}(A),\quad c_{A,f}=0,\] \(c_{A,f}\) being the inner \(f\)-weighted equilibrium constant. 
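For instance, combining Theorem 6.3 with the homogeneity relation (3.4) yields a worked rescaling device, stated here under the additional assumption that \(q:=\hat{\omega}^{A}(\mathbb{R}^{n})\in(0,\infty)\): since \[\widehat{(\omega/q)}^{A}=\hat{\omega}^{A}/q,\quad\text{whence}\quad\widehat{(\omega/q)}^{A}(\mathbb{R}^{n})=1,\] Theorem 6.3 applies to the rescaled field \(f_{q}:=-U^{\omega/q}\), giving \[\lambda_{A,f_{q}}=\hat{\omega}^{A}/q,\quad w_{f_{q}}(A)=\hat{w}_{f_{q}}(A),\quad c_{A,f_{q}}=0.\] This is precisely the device employed in Example 6.11 below.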
\(\bullet\) On account of Theorem 6.1, in the rest of this section we assume that \[c_{*}(A)=\infty. \tag{6.3}\] \(\bullet\) Unless \((\mathcal{P}_{2})\) holds, assume additionally that \[\lim_{|x|\to\infty,\ x\in\overline{A}}U^{\omega^{-}}(x)=0. \tag{6.4}\] **Theorem 6.4.** Problem 5.1 is unsolvable whenever \[\hat{\omega}^{A}(\mathbb{R}^{n})<1. \tag{6.5}\] **Corollary 6.5.** If \(\omega^{+}=0\), then Problem 5.1 is always unsolvable. _Proof._ Indeed, if \(\omega=-\omega^{-}\), then (6.5) is fulfilled, for \(\hat{\omega}^{A}=0\) (see Remark 3.3). \(\square\) **Corollary 6.6.** If \(U^{\omega}\) is u.s.c. on \(\overline{A}\), then Problem 5.1 is unsolvable whenever \[\omega^{+}(\mathbb{R}^{n})<1/C_{n,\alpha},\] \(C_{n,\alpha}\) being introduced by (3.22). _Proof._ This follows from Theorem 6.4 by use of (3.32). \(\square\) **Theorem 6.7.** Unless \((\mathcal{P}_{2})\) holds, assume \(U^{\omega}\) is continuous20 on \(\overline{A}\) and Footnote 20: When speaking of a continuous function, we understand that the values are _finite_ numbers. \[\lim_{|x|\to\infty,\ x\in\overline{A}}U^{\omega^{\pm}}(x)=0. \tag{6.6}\] Then \[\lambda_{A,f}\ \text{exists}\ \Longleftrightarrow\ \hat{\omega}^{A}(\mathbb{R}^{n})\geqslant 1.\] If moreover \[\hat{\omega}^{A}(\mathbb{R}^{n})>1, \tag{6.7}\] then21 Footnote 21: Compare with Theorem 6.3 as well as with Remark 8.3. \[\lambda_{A,f}\neq\hat{\omega}^{A},\quad w_{f}(A)\neq\hat{w}_{f}(A),\quad c_{A,f}\neq 0. \tag{6.8}\] **Corollary 6.8.** Dropping assumption (6.3) as well as all those imposed on \(\omega\), consider signed \(\omega\in\mathfrak{M}\), compactly supported in \(\overline{A}^{c}\). Then \(\lambda_{A,f}\) exists if and only if either \(c_{*}(A)<\infty\), or \(\hat{\omega}^{A}(\mathbb{R}^{n})\geqslant 1\). In particular, \(\lambda_{A,f}\) does not exist if both \(c_{*}(A)=\infty\) and \(\omega^{+}(\mathbb{R}^{n})<1/C_{n,\alpha}\) are fulfilled, \(C_{n,\alpha}\) being introduced by (3.22). _Proof._ This follows by combining Theorems 6.1, 6.7 and Corollary 6.6. \(\square\) **Remark 6.9.** Thus, if \(\omega\in\mathfrak{M}\) is compactly supported in \(\overline{A}^{c}\) while \(c_{*}(A)=\infty\), then, according to Corollary 6.8, Problem 5.1 is unsolvable whenever \(\omega^{+}(\mathbb{R}^{n})\) is small enough, whereas \(\omega^{-}(\mathbb{R}^{n})\), the total amount of the negative charge, has no influence on this phenomenon. (Note that, when appealing to the electrostatic interpretation of the problem, the fact just observed agrees with our physical intuition.) ### On the description of the support \(S(\lambda_{A,f})\) The following Theorem 6.10 establishes sufficient conditions for the minimizer \(\lambda_{A,f}\) to be of compact support, whereas Examples 6.11 and 6.12 analyze their sharpness. **Theorem 6.10**.: Under the hypotheses of Theorem 6.7, assume moreover that \(A\) is not inner \(\alpha\)-thin at infinity. Then \(S(\lambda_{A,f})\) is compact whenever (6.7) is fulfilled. Examples 6.11 and 6.12 provide explicit formulae for \(S(\lambda_{A,f})\) for some specific \(A\) and \(f\). The latter equality in (6.9) as well as both equalities in (6.13) show that Theorem 6.10 would fail in general if \(\hat{\omega}^{A}(\mathbb{R}^{n})>1\) were replaced by \(\hat{\omega}^{A}(\mathbb{R}^{n})=1\). 
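Before turning to these examples, it is instructive to evaluate the mass threshold in Corollary 6.6 numerically (a direct computation from (3.22)): for \(n=3\) and \(\alpha=5/2\), \[C_{3,5/2}=2^{3-5/2}=2^{1/2},\qquad 1/C_{3,5/2}=2^{-1/2}\approx 0.71,\] so that, with \(U^{\omega}\) u.s.c. on \(\overline{A}\) and under the standing assumption (6.3), Problem 5.1 is unsolvable whenever \(\omega^{+}(\mathbb{R}^{3})<2^{-1/2}\).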
**Example 6.11**.: For \(A:=B_{R}^{c}:=\{|x|\geqslant R\}\), \(R\in(0,\infty)\), and for the unit Dirac measure \(\varepsilon_{0}\) at \(x=0\), define \[\omega:=\varepsilon_{0}/q,\ \ \text{where}\ \ q:=\hat{\varepsilon}_{0}^{B_{R}^{c }}(\mathbb{R}^{n}).\] As shown in Example 4.1, \(q=1\) if \(\alpha\leqslant 2\), and \(q>1\) otherwise. By (3.4), \[\hat{\omega}^{B_{R}^{c}}=\hat{\varepsilon}_{0}^{B_{R}^{c}}/q,\] whence \[\hat{\omega}^{B_{R}^{c}}(\mathbb{R}^{n})=\hat{\varepsilon}_{0}^{B_{R}^{c}}( \mathbb{R}^{n})/q=1.\] Therefore, by virtue of Theorem 6.3, the solution \(\lambda_{B_{R}^{c},f}\) to Problem 5.1 with \(f:=-U^{\omega}\) does exist, and moreover \[\lambda_{B_{R}^{c},f}=\hat{\omega}^{B_{R}^{c}}=\hat{\varepsilon}_{0}^{B_{R}^ {c}}/q.\] In view of what was pointed out in Examples 4.1 and 4.3, this gives \[S(\lambda_{B_{R}^{c},f})=\left\{\begin{array}{ll}S_{R}&\text{if }\alpha \geqslant 2,\\ B_{R}^{c}&\text{otherwise.}\end{array}\right. \tag{6.9}\] **Example 6.12**.: Let \(\overline{B}_{z,r}\) be the closed ball of radius \(r\) centered at the point \(z:=(0,\ldots,0,-r)\), and let \(A_{0}\) denote the inverse of \(\overline{B}_{z,r}\setminus\{0\}\) with respect to the unit sphere \(S_{1}\) centered at the origin \(0\). Then \(A_{0}\) is a closed subset of \(\mathbb{R}^{n}\), not \(\alpha\)-thin at infinity, such that \(0\not\in A_{0}\), and whose boundary \(\partial A_{0}\) is unbounded. In a manner similar to that in Example 4.1, we see that \[\hat{\varepsilon}_{0}^{A_{0}}=\gamma_{\overline{B}_{z,r}}^{\star}, \tag{6.10}\] \(\gamma_{\overline{B}_{z,r}}^{\star}\) being the Kelvin transform of \(\gamma_{\overline{B}_{z,r}}\), the capacitary measure on \(\overline{B}_{z,r}\), with respect to \(S_{1}\). Noting that \(\gamma_{\overline{B}_{z,r}}^{\star}(\mathbb{R}^{n})=U^{\gamma_{\overline{B}_{ z,r}}}(0)\) (cf. (4.9)), whereas \(U^{\gamma_{\overline{B}_{z,r}}}(0)=1\), we obtain \[\hat{\varepsilon}_{0}^{A_{0}}(\mathbb{R}^{n})=1\ \ \text{for any}\ \alpha\in(0,n) \tag{6.11}\] (compare with (4.5)). Applying Theorem 6.3 we therefore infer that the solution \(\lambda_{A_{0},f}\) to Problem 5.1 with \(A:=A_{0}\) and \(f:=-U^{\varepsilon_{0}}\) does exist, and moreover \[\lambda_{A_{0},f}=\hat{\varepsilon}_{0}^{A_{0}}. \tag{6.12}\] By virtue of the description of \(S(\gamma_{\overline{B}_{z,r}})\) [17, Section II.3.13], (6.10) and (6.12) yield \[S(\lambda_{A_{0},f})=\left\{\begin{array}{ll}\partial A_{0}&\text{if }\alpha \geqslant 2,\\ A_{0}&\text{otherwise.}\end{array}\right. \tag{6.13}\] ### Remark The results thereby obtained improve substantially many recent ones from [8, 33], by strengthening their formulations and/or by extending the areas of their applications. For instance, [8, Corollary 2.6] only deals with closed sets \(A\) that are not thin at infinity, and with external fields \(f\) of the form \(-U^{\omega}\), where \(\omega:=c\varepsilon_{x_{0}}\), \(c\in(0,\infty)\), \(\varepsilon_{x_{0}}\) being the unit Dirac measure at \(x_{0}\not\in A\). However, even for these very particular \(A\) and \(\omega\), all the assertions in [8, Corollary 2.6] are in general weaker than the relevant ones, established above. This is caused, in particular, by the fact that those assertions from [8] are given in terms of \(\omega(\mathbb{R}^{n})\), whereas ours are stated in terms of \(\hat{\omega}^{A}(\mathbb{R}^{n})\) (see Section 4 for the relations between these two values). Regarding the advantages of our current approach in comparison with that suggested in [33], see Remark 5.2 above. ## 7. Proof of Theorem 6.3
Due to condition (6.2), the inner pseudo-balayage \(\hat{\omega}^{A}\), minimizing \(I_{f}(\mu)\) over the class \(\mathcal{E}_{f}^{+}(A)\), actually belongs to its proper subclass \(\breve{\mathcal{E}}^{+}(A)\), see (5.1). Therefore, \[w_{f}(A)\leqslant I_{f}(\hat{\omega}^{A})=\hat{w}_{f}(A),\] which combined with (5.2) gives \[I_{f}(\hat{\omega}^{A})=w_{f}(A)=\hat{w}_{f}(A).\] Thus \(\hat{\omega}^{A}\) serves as the (unique) solution to Problem 5.1, i.e. \(\hat{\omega}^{A}=\lambda_{A,f}\). Substituting this equality into (3.8), we get \[\int U_{f}^{\lambda_{A,f}}\,d\lambda_{A,f}=\int\bigl{(}U^{\hat{\omega}^{A}}-U^ {\omega}\bigr{)}\,d\hat{\omega}^{A}=0,\] which according to Theorem 5.4 establishes the remaining relation \(c_{A,f}=0\). ## 8. Proofs of Theorems 6.1 and 6.4 ### Extremal measures Let \(\mathbb{M}_{f}(A)\) stand for the (nonempty) set of all nets \((\mu_{s})_{s\in S}\subset\breve{\mathcal{E}}^{+}(A)\) having the property \[\lim_{s\in S}\,I_{f}(\mu_{s})=w_{f}(A); \tag{8.1}\] those nets \((\mu_{s})_{s\in S}\) are said to be _minimizing_ (in Problem 5.1). Using the finiteness of \(w_{f}(A)\) (Lemma 5.3), the convexity of \(\breve{\mathcal{E}}^{+}(A)\), and the perfectness of the \(\alpha\)-Riesz kernel, one can see with the aid of arguments similar to those in [33, Lemma 4.2] that there exists the unique \(\xi_{A,f}\in\mathcal{E}^{+}\) such that, for every \((\mu_{s})_{s\in S}\in\mathbb{M}_{f}(A)\), \[\mu_{s}\to\xi_{A,f}\ \text{ strongly and vaguely in $\mathcal{E}^{+}$ (as $s$ ranges through $S$)}. \tag{8.2}\] This \(\xi:=\xi_{A,f}\) will be referred to as _the extremal measure_ (in Problem 5.1). Due to (8.2), we have \[\xi_{A,f}\in\mathcal{E}^{+}(A), \tag{8.3}\] \(\mathcal{E}^{+}(A)\) being strongly closed by \((\mathcal{P}_{1})\), and moreover \[\xi_{A,f}(\mathbb{R}^{n})\leqslant 1, \tag{8.4}\] the mapping \(\mu\mapsto\mu(\mathbb{R}^{n})\) being vaguely l.s.c. on \(\mathfrak{M}^{+}\) [3, Section IV.1, Proposition 4]. The following simple observation is crucial to the proofs given below. **Lemma 8.1**.: Problem 5.1 is solvable if and only if \[\xi_{A,f}(\mathbb{R}^{n})=1\ \ \text{and}\ \ I_{f}(\xi_{A,f})=w_{f}(A), \tag{8.5}\] and in the affirmative case \[\lambda_{A,f}=\xi_{A,f}. \tag{8.6}\] Proof.: The "if" part is evident by (8.3), whereas the opposite is implied by the fact that the trivial net \((\lambda_{A,f})\) is obviously minimizing, and hence converges strongly to both \(\lambda_{A,f}\) and \(\xi_{A,f}\). Since the strong topology on \(\mathcal{E}\) is Hausdorff, (8.6) follows. ### Proof of Theorem 6.1 Fix a minimizing sequence \((\mu_{j})\in\mathbb{M}_{f}(A)\); by (8.2), \[\mu_{j}\to\xi\ \ \text{strongly and vaguely in $\mathcal{E}^{+}$ (as $j\to\infty$)},\] \(\xi:=\xi_{A,f}\) being the (unique) extremal measure in Problem 5.1, whence \[\sup_{j\in\mathbb{N}}\,\|\mu_{j}\|<\infty. \tag{8.7}\] To show that this \(\xi\) serves as the solution to Problem 5.1, it is enough to verify (8.5). Since \(\mu_{j}\to\xi\) vaguely, applying [3, Section IV.1, Proposition 4] gives \[\xi(\mathbb{R}^{n})\leqslant\liminf_{j\to\infty}\,\mu_{j}(\mathbb{R}^{n})=1, \tag{8.8}\] whereas [3, Section IV.4.4, Corollary 3] yields \[\int 1_{K}\,d\xi\geqslant\limsup_{j\to\infty}\,\int 1_{K}\,d\mu_{j}\ \ \text{for every compact $K\subset\mathbb{R}^{n}$}, \tag{8.9}\] the indicator function \(1_{K}\) of \(K\) being bounded, of compact support, and u.s.c. on \(\mathbb{R}^{n}\). 
Combining (8.8) and (8.9) with \[\xi(\mathbb{R}^{n})=\lim_{K\uparrow\mathbb{R}^{n}}\,\int 1_{K}\,d\xi,\] we get \[1\geqslant\xi(\mathbb{R}^{n})\geqslant\limsup_{(j,K)\in\mathbb{N}\times \mathfrak{C}}\,\int 1_{K}\,d\mu_{j}=1-\liminf_{(j,K)\in\mathbb{N}\times \mathfrak{C}}\,\int 1_{A\setminus K}\,d\mu_{j},\] \(\mathbb{N}\times\mathfrak{C}\) being the directed product of the directed sets \(\mathbb{N}\) and \(\mathfrak{C}:=\mathfrak{C}_{\mathbb{R}^{n}}\)[15, p. 68]. The former relation in (8.5) will therefore follow once we establish the equality \[\liminf_{(j,K)\in\mathbb{N}\times\mathfrak{C}}\,\int 1_{A\setminus K}\,d\mu_{j}=0. \tag{8.10}\] By [17, Theorem 2.6] applied to \(A\setminus K\), \(K\in\mathfrak{C}\) being arbitrarily chosen, there exists the (unique) inner capacitary measure \(\gamma_{A\setminus K}\), minimizing the energy \(\|\mu\|^{2}\) over the (convex) set \(\Gamma_{A\setminus K}\) consisting of all \(\mu\in\mathcal{E}^{+}\) with \[U^{\mu}\geqslant 1\ \ \text{n.e.\ on $A\setminus K$}.\] For any \(K^{\prime}\in\mathfrak{C}\) such that \(K\subset K^{\prime}\), we have \(\Gamma_{A\setminus K}\subset\Gamma_{A\setminus K^{\prime}}\), and [17, Lemma 2.2] therefore gives \[\|\gamma_{A\setminus K}-\gamma_{A\setminus K^{\prime}}\|^{2}\leqslant\|\gamma_ {A\setminus K}\|^{2}-\|\gamma_{A\setminus K^{\prime}}\|^{2}. \tag{8.11}\] Since \(\|\gamma_{A\setminus K}\|^{2}=c_{*}(A\setminus K)\)[17, Theorem 2.6], \(\|\gamma_{A\setminus K}\|^{2}\) decreases as \(K\) ranges through \(\mathfrak{C}\), which together with (8.11) implies that the net \((\gamma_{A\setminus K})_{K\in\mathfrak{C}}\subset\mathcal{E}^{+}\) is Cauchy in the strong topology on \(\mathcal{E}^{+}\). Noting that \((\gamma_{A\setminus K})_{K\in\mathfrak{C}}\) converges vaguely to zero,22 we get Footnote 22: Indeed, for any given \(\varphi\in C_{0}(\mathbb{R}^{n})\), there exists an open, relatively compact set \(G\subset\mathbb{R}^{n}\) such that \(\varphi(x)=0\) for all \(x\not\in\overline{G}\). Hence, \(\gamma_{A\setminus K}(\varphi)=0\) for all \(K\in\mathfrak{C}\) with \(K\supset\overline{G}\), and the claim follows. \[\gamma_{A\setminus K}\to 0\ \ \text{strongly in $\mathcal{E}^{+}$ as $K\uparrow\mathbb{R}^{n}$}, \tag{8.12}\] the \(\alpha\)-Riesz kernel being perfect. It follows from the above that \[U^{\gamma_{A\setminus K}}\geqslant 1_{A\setminus K}\ \ \text{n.e.\ on $A \setminus K$}, \tag{8.13}\] and, therefore, \(\mu_{j}\)-a.e. for all \(j\in\mathbb{N}\). Integrating (8.13) with respect to \(\mu_{j}\) we obtain, by the Cauchy-Schwarz (Bunyakovski) inequality, \[\int 1_{A\setminus K}\,d\mu_{j}\leqslant\int U^{\gamma_{A\setminus K}}\,d \mu_{j}\leqslant\|\gamma_{A\setminus K}\|\cdot\|\mu_{j}\|\ \ \text{for all $K\in\mathfrak{C}$ and $j\in\mathbb{N}$}.\] Combined with (8.7) and (8.12), this gives (8.10), hence \(\xi\in\breve{\mathcal{E}}^{+}(A)\), and consequently \[w_{f}(A)\leqslant I_{f}(\xi). \tag{8.14}\] To complete the proof of the theorem, it remains to verify the latter relation in (8.5), which in view of (8.1) and (8.14) is reduced to the inequality \[I_{f}(\xi)\leqslant\lim_{j\to\infty}\,I_{f}(\mu_{j}). \tag{8.15}\] In case \((\mathcal{P}_{2})\), (8.15) follows at once from the strong convergence of \((\mu_{j})\) to \(\xi\), by applying (3.12) to each of \(\mu_{j}\) and \(\xi\). Otherwise, case \((\mathcal{P}_{3})\) holds, and hence \(f=-U^{\omega}\) is l.s.c. and bounded on \(\overline{A}\). 
Thus there is \(c\in(0,\infty)\) such that \(f^{\prime}:=f+c\geqslant 0\) on \(\overline{A}\), and [3, Section IV.1, Proposition 4] applied to \(f^{\prime}\) gives \[\int f\,d\xi+c=\int f^{\prime}\,d\xi\leqslant\liminf_{j\to\infty}\,\int f^{ \prime}\,d\mu_{j}=\liminf_{j\to\infty}\,\int f\,d\mu_{j}+c,\] the first and last equalities being valid by virtue of \(\xi(\mathbb{R}^{n})=\mu_{j}(\mathbb{R}^{n})=1\). Therefore, \[\int f\,d\xi\leqslant\lim_{j\to\infty}\,\int f\,d\mu_{j}.\] Multiplied by 2, and then added to \[\lim_{j\to\infty}\,\|\mu_{j}\|^{2}=\|\xi\|^{2},\] this results in (8.15), thereby completing the whole proof. **Remark 8.2.** A slight generalization of the above proof shows that Theorem 6.1 and, hence, Corollary 6.2 remain valid for an external field \(f\) represented as the sum \[f:=u+U^{\vartheta},\] where \(\vartheta\in\mathcal{E}\), while \(u:\overline{A}\to(-\infty,\infty]\) is l.s.c., bounded from below, and such that \[c_{*}(\{x\in A:\ u(x)<\infty\})>0.\] ### Proof of Theorem 6.4 As \(c_{*}(A)=\infty\) by (6.3), there are mutually nonintersecting, compact sets \(K_{j}\subset A\), \(j\in\mathbb{N}\), such that \(|x|\geqslant j\) for all \(x\in K_{j}\) and \(c(K_{j})\geqslant j\). If \(\lambda_{j}:=\gamma_{K_{j}}/c(K_{j})\in\breve{\mathcal{E}}^{+}(K_{j})\) denotes the normalized capacitary measure on \(K_{j}\), then \[\|\lambda_{j}\|\to 0\ \ \text{as}\ j\to\infty, \tag{8.16}\] \[\lambda_{j}\to 0\ \ \text{vaguely in}\ \mathcal{E}^{+}\ \text{as}\ j\to\infty, \tag{8.17}\] the latter being implied by the fact that for any compact subset \(K\) of \(\mathbb{R}^{n}\), we have \(K\cap S(\lambda_{j})=\varnothing\) for all \(j\) large enough. Define \[\mu_{j}:=\hat{\omega}^{A}+c_{j}\lambda_{j},\ \ \text{where}\ \ c_{j}:=1-\hat{ \omega}^{A}(\mathbb{R}^{n}). \tag{8.18}\] Noting from (6.5) that \[0<c_{j}\leqslant 1\ \ \text{for all}\ j, \tag{8.19}\] we get \(\mu_{j}\in\breve{\mathcal{E}}^{+}(A)\) for all \(j\), whence \[w_{f}(A)\leqslant\liminf_{j\to\infty}\,I_{f}(\mu_{j}). \tag{8.20}\] On the other hand, \(I_{f}(\hat{\omega}^{A})=\hat{w}_{f}(A)\) (see Definition 3.1 and Theorem 3.5). By means of a straightforward verification, we derive from (8.16), (8.18), and (8.19) that \[\limsup_{j\to\infty}\,I_{f}(\mu_{j})\leqslant\limsup_{j\to\infty}\Big{(}I_{f} (\hat{\omega}^{A})+2c_{j}\int U^{\omega^{-}}\,d\lambda_{j}\Big{)}\leqslant \hat{w}_{f}(A)+2L_{\infty}, \tag{8.21}\] where \[L_{\infty}:=\limsup_{j\to\infty}\,\int U^{\omega^{-}}\,d\lambda_{j}.\] Let first \((\mathcal{P}_{2})\) take place. Applying the Cauchy-Schwarz inequality to the measures \(\omega^{-},\lambda_{j}\in\mathcal{E}^{+}\), and then letting \(j\to\infty\), we infer from (8.16) that \[L_{\infty}=0. \tag{8.22}\] Otherwise, \((\mathcal{P}_{3})\) and hence (6.4) must be fulfilled, which again results in (8.22), \(\lambda_{j}\) being the unit measure supported by \(\overline{A}\cap\{|x|\geqslant j\}\). Substituting (8.22) into (8.21), and then combining the inequality thus obtained with (8.20) and (5.2), we get \[\lim_{j\to\infty}\,I_{f}(\mu_{j})=\hat{w}_{f}(A)=w_{f}(A), \tag{8.23}\] which shows that the sequence \((\mu_{j})\) is, in fact, minimizing in Problem 5.1, and hence converges both strongly and vaguely to the extremal measure \(\xi_{A,f}\): \[\mu_{j}\to\xi_{A,f}\ \ \text{strongly and vaguely in}\ \mathcal{E}^{+}\ \text{as}\ j\to\infty.\] On account of (8.17)-(8.19), this gives \[\hat{\omega}^{A}=\xi_{A,f},\] the vague topology on \(\mathfrak{M}\) being Hausdorff. 
Therefore, by (6.5), \[\xi_{A,f}(\mathbb{R}^{n})=\hat{\omega}^{A}(\mathbb{R}^{n})<1,\] and an application of Lemma 8.1 shows that Problem 5.1 is indeed unsolvable. **Remark 8.3**.: As shown in (8.23), under the assumptions of Theorem 6.4, equality prevails in (5.2), i.e. \[\hat{w}_{f}(A)=w_{f}(A)\] (compare with Theorems 6.3 and 6.7). ## 9. Proofs of Theorems 6.7 and 6.10 ### Auxiliary results According to Corollary 6.2, for every \(K\in\mathfrak{C}_{A}\) such that \(K\geqslant K_{0}\), where \(c(K_{0})>0\), there is the (unique) solution \(\lambda_{K,f}\) to Problem 5.1 with \(A:=K\), whereas by virtue of Lemma 9.1 below, those \(\lambda_{K,f}\) form a minimizing net: \[(\lambda_{K,f})_{K\geqslant K_{0}}\in\mathbb{M}_{f}(A). \tag{9.1}\] **Lemma 9.1**.: \(w_{f}(K)\downarrow w_{f}(A)\) _as \(K\uparrow A\)._ Proof.: For any \(\mu\in\breve{\mathcal{E}}^{+}(A)\), \(\mu(K)\uparrow 1\) as \(K\uparrow A\). Applying [10, Lemma 1.2.2] to each of the (positive, l.s.c., \(\mu\)-integrable) functions \(\kappa_{\alpha}\), \(U^{\omega^{+}}\), and \(U^{\omega^{-}}\), we therefore get \[I_{f}(\mu)=\lim_{K\uparrow A}\,I_{f}(\mu|_{K})=\lim_{K\uparrow A}\,I_{f}(\nu _{K})\geqslant\lim_{K\uparrow A}\,w_{f}(K),\] where \[\nu_{K}:=\mu|_{K}/\mu(K)\in\breve{\mathcal{E}}^{+}(K),\] \(K\in\mathfrak{C}_{A}\) being large enough. Letting now \(\mu\) range over \(\breve{\mathcal{E}}^{+}(A)\) gives \[w_{f}(A)\geqslant\lim_{K\uparrow A}\,w_{f}(K),\] whence the lemma, the opposite being obvious by the monotonicity. \(\bullet\) Unless case \((\mathcal{P}_{2})\) takes place, assume in what follows that the external field \(f=-U^{\omega}\) is continuous on \(\overline{A}\), and that (6.6) is fulfilled. **Lemma 9.2**.: For the extremal measure \(\xi_{A,f}\), we have \[I_{f}(\xi_{A,f})=w_{f}(A). \tag{9.2}\] Proof.: By virtue of (8.2) and (9.1), \[\lambda_{K,f}\to\xi_{A,f}\ \ \text{strongly and vaguely in $\mathcal{E}^{+}$ as $K\uparrow A$},\] hence \[\lim_{K\uparrow A}\,\|\lambda_{K,f}\|^{2}=\|\xi_{A,f}\|^{2}. \tag{9.3}\] If case \((\mathcal{P}_{2})\) takes place, then the strong convergence of \((\lambda_{K,f})_{K\geqslant K_{0}}\) to \(\xi_{A,f}\) yields, by applying (3.12) to each of \(\lambda_{K,f}\) and \(\xi_{A,f}\), \[\lim_{K\uparrow A}\,I_{f}(\lambda_{K,f})=I_{f}(\xi_{A,f}), \tag{9.4}\] whence, by Lemma 9.1, \[I_{f}(\xi_{A,f})=\lim_{K\uparrow A}\,w_{f}(K)=w_{f}(A).\] In the remaining case \((\mathcal{P}_{3})\), for any \(t>0\) choose \(r\) so that \[|f|<t/2\ \ \text{on $A\cap\overline{B}_{r}^{c}$},\] which is possible in view of (6.6). On account of (8.4), \[\left|\int_{\overline{B_{r}}^{c}}\,f\,d(\lambda_{K,f}-\xi_{A,f})\right|<t\ \ \text{for all $K\geqslant K_{0}$}. \tag{9.5}\] The above \(r\) can certainly be chosen so that \[\xi_{A,f}(S_{r})=0,\] the measure \(\xi_{A,f}\) being bounded. Then, according to [17, Theorem 0.5\({}^{\prime}\)], \[\lambda_{K,f}|_{\overline{B_{r}}}\to\xi_{A,f}|_{\overline{B_{r}}}\ \ \text{ vaguely in $\mathfrak{M}^{+}$ as $K\uparrow A$},\] whence, by the continuity of \(f\) on \(\overline{A}\), \[\lim_{K\uparrow A}\,\int f\,d\lambda_{K,f}|_{\overline{B}_{r}}=\int f\,d\xi_{A, f}|_{\overline{B}_{r}},\] which combined with (9.5), taken for \(t>0\) arbitrarily small, results in23 Footnote 23: In case \((\mathcal{P}_{2})\), (9.6) holds true as well, which is obtained by subtracting (9.3) from (9.4). 
Alternatively, it can be derived from the strong convergence of the net \((\lambda_{K,f})_{K\geqslant K_{0}}\) to \(\xi_{A,f}\), by applying the Cauchy–Schwarz inequality to the measures \(\omega,\lambda_{K,f}-\xi_{A,f}\in\mathcal{E}\). \[\lim_{K\uparrow A}\,\int f\,d\lambda_{K,f}=\int f\,d\xi_{A,f}. \tag{9.6}\] Multiplied by \(2\), and then added to (9.3), this yields (9.4), whence (9.2), again by making use of Lemma 9.1. **Corollary 9.3.**\(\lambda_{A,f}\) exists if and only if \(\xi_{A,f}(\mathbb{R}^{n})=1\), and in the affirmative case \(\lambda_{A,f}=\xi_{A,f}\). _Proof_. This follows by combining Lemmas 8.1 and 9.2. **Lemma 9.4.** For the extremal measure \(\xi=\xi_{A,f}\), we have \[U_{f}^{\xi}\geqslant C_{\xi}\ \ \text{n.e.\ on }A, \tag{9.7}\] \[U_{f}^{\xi}\leqslant C_{\xi}\ \ \text{on }S(\xi), \tag{9.8}\] where \[C_{\xi}:=\int U_{f}^{\xi}\,d\xi\in(-\infty,\infty). \tag{9.9}\] _Proof_. In case \((\mathcal{P}_{3})\), the finiteness of \(\int U_{f}^{\xi}\,d\xi\) follows from the boundedness of \(U^{\omega^{\pm}}\) on \(\overline{A}\), the extremal measure \(\xi\) being bounded by (8.4); while in case \((\mathcal{P}_{2})\), it is obvious. By Theorem 5.4 applied to each \(K\in\mathfrak{C}_{A}\) large enough, \[U_{f}^{\lambda_{K,f}}\geqslant c_{K,f}\ \ \text{n.e.\ on }K, \tag{9.10}\] \[U_{f}^{\lambda_{K,f}}\leqslant c_{K,f}\ \ \text{on }S(\lambda_{K,f}), \tag{9.11}\] where \[c_{K,f}=\int U_{f}^{\lambda_{K,f}}\,d\lambda_{K,f}.\] Furthermore, by combining (9.3) with (9.6), \[\lim_{K\uparrow A}\,c_{K,f}=C_{\xi}, \tag{9.12}\] \(C_{\xi}\) being the (finite) constant appearing in (9.9). Fix \(K_{*}\in\mathfrak{C}_{A}\). The strong topology on \(\mathcal{E}^{+}\) being first-countable, one can choose a subsequence \((\lambda_{K_{j},f})_{j\in\mathbb{N}}\) of the net \((\lambda_{K,f})_{K\in\mathfrak{C}_{A}}\) such that \[\lambda_{K_{j},f}\to\xi\ \ \text{strongly (hence vaguely) in }\mathcal{E}^{+}\ \text{as }j\to\infty. \tag{9.13}\] There is certainly no loss of generality in assuming that \[K_{*}\subset K_{j}\ \ \text{for all }j,\] for if not, we replace \(K_{j}\) by \(K_{j}^{\prime}:=K_{j}\cup K_{*}\); then, by the monotonicity of \(\big{(}w_{f}(K)\big{)}\), the sequence \((\lambda_{K_{j}^{\prime},f})_{j\in\mathbb{N}}\) remains minimizing, and hence also converges strongly to \(\xi\). Due to the arbitrary choice of \(K_{*}\in\mathfrak{C}_{A}\), (9.7) will follow once we show that \[U_{f}^{\xi}\geqslant C_{\xi}\ \ \text{n.e.\ on }K_{*}. \tag{9.14}\] Passing if necessary to a subsequence and changing the notations, we conclude from (9.13), by virtue of [10, p. 166, Remark], that \[U^{\xi}=\lim_{j\to\infty}\,U^{\lambda_{K_{j},f}}\ \ \mbox{n.e. on $\mathbb{R}^{n}$}. \tag{9.15}\] Applying now (9.10) to each \(K_{j}\), and then letting \(j\to\infty\), on account of (9.12) and (9.15) we arrive at (9.14). (Here the countable subadditivity of inner capacity on Borel sets has been utilized.) Since \((\lambda_{K_{j},f})\) converges to \(\xi\) vaguely, see (9.13), for every \(x\in S(\xi)\) there exist a subsequence \((K_{j_{k}})\) of the sequence \((K_{j})\) and points \(x_{j_{k}}\in S(\lambda_{K_{j_{k}},f})\), \(k\in\mathbb{N}\), such that \(x_{j_{k}}\) approach \(x\) as \(k\to\infty\). 
Thus, according to (9.11), \[U_{f}^{\lambda_{K_{j_{k}},f}}(x_{j_{k}})\leqslant\int U_{f}^{\lambda_{K_{j_{k }},f}}\,d\lambda_{K_{j_{k}},f}\ \ \mbox{for all $k\in\mathbb{N}$}.\] Letting here \(k\to\infty\), and applying (9.12), the continuity of \(f\) on \(\overline{A}\), and the lower semicontinuity of the mapping \((x,\mu)\mapsto U^{\mu}(x)\) on \(\mathbb{R}^{n}\times\mathfrak{M}^{+}\), \(\mathfrak{M}^{+}\) being equipped with the vague topology [10, Lemma 2.2.1(b)], we get the remaining inequality (9.8). ### Proof of Theorem 6.7 We first remark from Theorems 6.3 and 6.4 that it is actually enough to consider the case when (6.7) is fulfilled. For the extremal measure \(\xi=\xi_{A,f}\), we infer from (9.7) and (9.8) that \[U_{f}^{\xi}=C_{\xi}\ \ \mbox{n.e. on $S(\xi)\cap A$},\] whence \[U_{f}^{\xi}=C_{\xi}\ \ \mbox{$\xi$-a.e.}, \tag{9.16}\] the measure \(\xi\) being of the class \(\mathcal{E}^{+}(A)\). We claim that then necessarily \[C_{\xi}\neq 0. \tag{9.17}\] Indeed, if this were not true, then (9.7) and (9.16) would imply, by virtue of Theorem 3.5, that \(\xi=\hat{\omega}^{A}\), which however contradicts (6.7), for \(\xi(\mathbb{R}^{n})\leqslant 1\) by (8.4). Integrating (9.16) with respect to \(\xi\) we obtain \[\int U_{f}^{\xi}\,d\xi=C_{\xi}\cdot\xi(\mathbb{R}^{n}),\] whence, by (9.9) and (9.17), \[\xi(\mathbb{R}^{n})=1.\] Applying Corollary 9.3 we see that under assumption (6.7), \(\lambda_{A,f}\) does indeed exist, and moreover \(\lambda_{A,f}=\xi\). The equality \(\lambda_{A,f}=\xi\) implies, by use of (5.5), (9.9), and (9.17), that \[c_{A,f}=\int U_{f}^{\lambda_{A,f}}\,d\lambda_{A,f}=\int U_{f}^{\xi}\,d\xi=C_ {\xi}\neq 0,\] which proves the third relation in (6.8). The first is obvious since, by (6.7), \[\lambda_{A,f}(\mathbb{R}^{n})=1<\hat{\omega}^{A}(\mathbb{R}^{n}).\] Finally, the first relation implies the second, for if not, then \(w_{f}(A)=\hat{w}_{f}(A)\), whence \(\lambda_{A,f}=\hat{\omega}^{A}\), by the uniqueness of \(\hat{\omega}^{A}\) and the inclusion \(\breve{\mathcal{E}}^{+}(A)\subset\mathcal{E}_{f}^{+}(A)\), cf. (5.1). ### Proof of Theorem 6.10 According to Theorem 6.7, under the stated assumptions there exists the solution \(\lambda_{A,f}\) to Problem 5.1, and moreover, by (6.8), \[c_{A,f}\neq 0. \tag{9.18}\] Assume to the contrary that \(S(\lambda_{A,f})\) is noncompact. As seen from Remark 5.5, then there exists a sequence \((x_{j})\subset A\) such that \(|x_{j}|\to\infty\) as \(j\to\infty\), and \[U_{f}^{\lambda_{A,f}}(x_{j})=c_{A,f}\ \ \text{for all}\ j\in\mathbb{N}.\] On account of (6.6), this yields \[\liminf_{j\to\infty}\,U^{\lambda_{A,f}}(x_{j})=c_{A,f},\] whence \(c_{A,f}\geqslant 0\), which in view of (9.18) shows that, actually, \[c_{A,f}>0. \tag{9.19}\] But, by (5.3), \(U_{f}^{\lambda_{A,f}}\geqslant c_{A,f}\) n.e. on \(A\), which together with (6.6) and (9.19) gives \[\liminf_{|x|\to\infty,\ x\in A}\,U^{\lambda_{A,f}}(x)>0.\] However, this is impossible in consequence of [16, Remark 4.12(i)], the set \(A\) not being inner \(\alpha\)-thin at infinity.
2303.10937
Boosting Weakly Supervised Object Detection using Fusion and Priors from Hallucinated Depth
Despite recent attention to and exploration of depth for various tasks, it remains an unexplored modality for weakly-supervised object detection (WSOD). We propose an amplifier method for enhancing the performance of WSOD by integrating depth information. Our approach can be applied to any WSOD method based on multiple-instance learning, without necessitating additional annotations or incurring large computational expense. Our proposed method employs a monocular depth estimation technique to obtain hallucinated depth information, which is then incorporated into a Siamese WSOD network using contrastive loss and fusion. By analyzing the relationship between language context and depth, we calculate depth priors to identify the bounding box proposals that may contain an object of interest. These depth priors are then utilized to update the list of pseudo ground-truth boxes, or to adjust the confidence of per-box predictions. Our proposed method is evaluated on six datasets (COCO, PASCAL VOC, Conceptual Captions, Clipart1k, Watercolor2k, and Comic2k) by implementing it on top of two state-of-the-art WSOD methods, and we demonstrate a substantial enhancement in performance.
Cagri Gungor, Adriana Kovashka
2023-03-20T08:26:29Z
http://arxiv.org/abs/2303.10937v2
# Boosting Weakly Supervised Object Detection using Fusion and Priors from Hallucinated Depth ###### Abstract Despite recent attention and exploration of depth for various tasks, it is still an unexplored modality for weakly-supervised object detection (WSOD). We propose an amplifier method for enhancing the performance of WSOD by integrating depth information. Our approach can be applied to any WSOD method based on multiple-instance learning, without necessitating additional annotations or inducing large computational expenses. Our proposed method employs a monocular depth estimation technique to obtain hallucinated depth information, which is then incorporated into a Siamese WSOD network using contrastive loss and fusion. By analyzing the relationship between language context and depth, we calculate depth priors to identify the bounding box proposals that may contain an object of interest. These depth priors are then utilized to update the list of pseudo ground-truth boxes, or adjust the confidence of per-box predictions. Our proposed method is evaluated on six datasets (COCO, PASCAL VOC, Conceptual Captions, Clipart1k, Watercolor2k, and Comic2k) by implementing it on top of two state-of-the-art WSOD methods, and we demonstrate a substantial enhancement in performance. ## 1 Introduction Weakly-supervised object detection (WSOD) is a challenging task since it is unclear which instances have the label that was provided at the image level. Traditional methods only use appearance information in RGB images. However, appearance information is insufficient to localize objects in complex, cluttered environments. On the other hand, humans are capable of finding useful information in complex environments because they rely on object function, not just appearance. For example, they might reason about which objects are within reach, which can be captured with depth from stereo vision [2]. The depth modality provides additional cues about the spatial relationships and geometrical structure of objects in a scene and is invariant to appearance variations (e.g. in texture), making it complementary to the RGB modality. However, weakly-supervised object detection methods do not use depth information. We equip WSOD methods with the ability to reason about functional information (depth). Importantly, our method does so without requiring additional annotations or suffering significant computational costs. We propose an amplifier method that can enhance the performance of any weakly supervised object detection method based on multiple-instance learning. Since traditional WSOD datasets do not contain ground-truth depth information, the proposed method utilizes hallucinated (predicted) depth information obtained through a monocular depth estimation technique. During training, the method incorporates depth information to improve representation learning and to prune or down-weight predictions at the box level, which leads to improved object detection performance during inference. First, depth can directly be used as a feature to aid in representation learning, or to produce predictions which can be fused with those computed from RGB. Figure 1: Depth ranges of objects are estimated using the depth-language relationship before training. These ranges are further used to spot relevant visual proposals that may contain target objects, enhancing weakly supervised object detection. While simple, this
technique has not been used for WSOD, and we show that it is very effective: it boosts the performance of appearance-only methods by up to 2.6 mAP@0.5 (11% relative gain). Further, depth can provide strong priors about which of the bounding box proposals in the noisy WSOD setting contain an object of interest. We examine the rough depth at which objects of particular categories occur by computing the depth range of an object in 1-5% of annotated images. To make this range more precise, we examine the relationship between language context and depth, by keeping track of depth range statistics conditioned on co-mentioned objects and averaging across the most common co-occurring objects. We then use this range to prune the pseudo ground-truth bounding boxes used to iteratively update weakly-supervised detection methods, or to down-weight predictions at the box level. This approach boosts WSOD performance further, for a total of 14% mAP@0.5 relative gain. Our method is simple and can boost multiple WSOD methods that rely on iterative improvement. We test it in a variety of settings, using two state-of-the-art WSOD baselines, MIST [27] and SoS-WSOD [30], and five datasets, COCO, PASCAL VOC, Clipart1k, Watercolor2k, and Comic2k. Inspired by recent work that trains object detection methods with language supervision [38, 33, 11], we further test our method in a setting where labels at the image level are not ground-truth but estimated. In this setting, our method boosts the basic WSOD performance even more, by 18% when labels for training are extracted from COCO, and 63% when they are extracted from Conceptual Captions. To summarize, our contributions are: (1) We examine for the first time the use of depth in weakly-supervised object detection. (2) In addition to depth fusion, we propose a technique specific to WSOD, which estimates depth priors with the help of language, and uses them to refine pseudo boxes and box predictions. (3) We show large performance gains in a large variety of settings, with the biggest boost from depth refinement when supervision is least expensive. ## 2 Related Work **Weakly-supervised object detection (WSOD).** WSOD is the task of learning to detect the location and type of multiple objects given only image-level labels during training. The multi-instance learning (MIL) framework is commonly utilized in WSOD methods, with early works such as WSDDN [1] integrating MIL into an end-to-end WSOD system. OICR [32] improved upon this by proposing pseudo-ground-truth mining and an online instance refinement branch, which was further refined by PCL [31] through proposal clustering. C-MIL [34] and MIST [27] introduced modifications to the MIL loss and pseudo-ground-truth mining, respectively. More recently, SoS-WSOD [30] produces pseudo boxes for FSOD using an improved WSOD module, and then splits the noisy data for semi-supervised object detection. Additionally, there have been efforts to bypass the need for image-level labels by utilizing noisy label information extracted from caption or subtitle data [35, 4, 38, 33, 11]. In contrast to these works, our methodology leverages depth information as an additional modality, leading to improved performance in WSOD and a reduction of the noise between text and label information.
**RGB-D detection.** The integration of RGB and depth information to derive complementary features has been previously studied for fully-supervised indoor analysis [37, 39, 16, 19] and salient object detection [17, 14, 8, 9, 18]. Fusing the information contained in the RGB and depth modalities is crucial, as they provide complementary information. The strategies for merging the two modalities can be classified into three groups, depending on the point in the processing pipeline where the fusion occurs: early fusion [6, 24], middle fusion [9, 10, 40, 3], and late fusion [13, 23]. Early fusion techniques involve combining the RGB and depth images into a single four-channel matrix at the earliest stage of the process. This integrated matrix is then treated as a single input. Middle fusion provides a balance between early and late fusion by utilizing CNNs for both feature extraction and subsequent merging. In late fusion, individual saliency prediction maps are produced from the RGB and depth channels. These two predicted maps are then combined through post-processing operations such as element-wise summation or multiplication. In contrast to the majority of the aforementioned methods, which use separate networks to extract features from RGB and depth images, several studies [9, 10, 29, 21] employ Siamese networks to learn hierarchical features from both RGB and depth inputs by utilizing shared parameters. However, _we are the first to leverage depth data in weakly-supervised object detection._ Our approach is not specific to a particular method, as it can be applied to any MIL-based WSOD method to improve its performance without incurring any extra annotation expenses and with minimal computational overhead during training. Although the depth modality is not used during the inference stage, incorporating it during training enhances inference performance. **Monocular depth estimation** involves predicting the depth map of a scene from a single RGB image [22, 26, 25, 36]. We utilize the method described in [22] due to its strong performance. This estimated ("hallucinated") depth information is then utilized to improve the performance of weakly supervised object detection. ## 3 Approach We propose an amplifier approach that incorporates a depth modality to improve the effectiveness of WSOD methods. Our method can be used with all MIL-based WSOD methods to boost their performance while incurring little extra cost during training. It does not use the depth modality during inference to avoid any slow-downs and reliance on additional data (depth estimation or captions). The proposed approach comprises three main steps (Sec. 3.1, 3.2, 3.3, respectively). First, a Siamese network with a shared backbone is employed to improve representation learning through contrastive learning between RGB and depth features (referred to as Siamese-Only in the experiments). Second, we combine detection and classification scores obtained from both RGB and depth modalities, which can be categorized as late fusion (Fusion). Third, we use captions and a small number of ground truth bounding box annotations to calculate depth priors. These depth priors are then used to improve the OICR-style [32] module in two WSOD methods (Depth-OICR) and create attention with combined score probabilities (Depth-Attention). Note that Siamese-Only is always applied, while Fusion, Depth-OICR and Depth-Attention build on top of it, and can be used alone or combined. ### The Siamese WSOD Network **WSOD.** Following Bilen et al.
[1], let \(\mathbf{I}\in\mathbb{R}^{h\times w\times 3}\) denote an RGB image, and \(y_{c}\in\{0,1\}\) (where \(c\in\{1,...,C\}\) and \(C\) is the total number of object categories) be its corresponding ground-truth class labels. Let \(v_{i}\), \(i\in\{1,...,R\}\) (where \(R\) is the number of proposals), denote the visual proposals in image \(\mathbf{I}\). RoI pooling is applied and a fixed-length feature vector \(\phi(v_{i})\) is extracted for each visual region. The proposal features \(\phi(v_{i})\) are fed into two parallel fully-connected layers to compute the visual detection score \(v_{i,c}^{det}\in\mathbb{R}^{1}\) and classification score \(v_{i,c}^{cls}\in\mathbb{R}^{1}\): \[v_{i,c}^{det}=w_{c}^{det\intercal}\phi(v_{i})+b_{c}^{det},\quad v_{i,c}^{cls}=w_{c}^{cls\intercal}\phi(v_{i})+b_{c}^{cls} \tag{1}\] where \(w\) and \(b\) are weights and bias, respectively. **Estimating the depth images.** To extract depth information from RGB images, we employ the monocular depth estimation technique by Mahdi et al. [22]. This enables us to use existing RGB-only object detection datasets without the need for additional annotations. Although the extracted depth images are initially grayscale, we use a color map to convert them to RGB images with three channels. **Siamese design.** Our approach utilizes a Siamese network with contrastive learning to incorporate depth information in the weakly-supervised object detection network during training. This design allows us to use a backbone pre-trained with RGB images to extract features from both RGB and depth images, without adding extra complexity to the model's parameters. We enhance the representation learning of the backbone by defining a contrastive loss between RGB and depth features, similar to [21]. Utilizing a Siamese network provides the advantage of using only RGB images during inference, similar to other WSOD methods. This ensures that our contribution does not introduce any additional overhead on the inference time. With the help of a pre-trained backbone model, the feature map \(\psi(\mathbf{I})\) of the RGB image is extracted. Let \(\mathbf{D}\in\mathbb{R}^{h\times w\times 3}\) denote a depth image associated with the RGB image \(\mathbf{I}\) and let \(\psi(\mathbf{D})\) be the feature map of the depth image \(\mathbf{D}\) extracted by the Siamese backbone. The RGB feature map \(\psi(\mathbf{I})\) and depth feature map \(\psi(\mathbf{D})\) are fed into adaptive pooling and the Siamese fully connected layer to obtain \(d\)-dimensional projected feature vectors \(\psi_{proj}(\mathbf{I})\) and \(\psi_{proj}(\mathbf{D})\). The only extra parameters we add to the traditional MIL-based WSOD network come from the fully connected layer for projection, an 8 percent overhead (13M parameters for the projection layer vs. 154M total). Figure 2: This figure illustrates the design of our proposed amplifier technique that takes advantage of depth information to enhance the performance of other weakly-supervised object detection methods. During inference, we only use the RGB branch (shown in orange). If no late fusion is performed in the experiments, we train as described in Sec. 3.2, but excluding the \(d_{i,c}\) variables in Eq. 5.
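To make the two scoring branches and the Siamese projection concrete, here is a minimal PyTorch sketch of this design; it is not the authors' implementation, and the module names, feature dimension, and projection size are our own illustrative assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class SiameseWSODHead(nn.Module):
    """Sketch: one shared backbone feeds both branches; per-proposal
    RoI features get the two parallel FC scores of Eq. (1), and pooled
    image features are projected for the contrastive loss."""
    def __init__(self, feat_dim=4096, num_classes=20, proj_dim=128):
        super().__init__()
        self.det = nn.Linear(feat_dim, num_classes)  # (w^det, b^det) of Eq. (1)
        self.cls = nn.Linear(feat_dim, num_classes)  # (w^cls, b^cls) of Eq. (1)
        self.proj = nn.Linear(feat_dim, proj_dim)    # Siamese projection layer

    def scores(self, roi_feats):
        # roi_feats: (R, feat_dim), from either the RGB or the depth branch
        return self.det(roi_feats), self.cls(roi_feats)

    def embed(self, pooled_feats):
        # pooled_feats: adaptively pooled feature map, flattened to (B, feat_dim)
        return F.normalize(self.proj(pooled_feats), dim=-1)  # unit-norm psi_proj
```

Because the backbone and this head are shared between the two branches, only the projection layer adds parameters, consistent with the overhead counted above.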
**Contrastive learning.** We L2-normalize the RGB and depth feature vectors \(\psi_{proj}(\mathbf{I})\) and \(\psi_{proj}(\mathbf{D})\), and compute their cosine similarity: \[S(\mathbf{I},\mathbf{D})=\langle\psi_{proj}(\mathbf{I}),\psi_{proj}(\mathbf{D})\rangle/\rho \tag{2}\] where \(\rho\) is a learnable temperature parameter. We use noise contrastive estimation (NCE) [12] to define the contrastive loss over RGB-depth image pairs \((\mathbf{I},\mathbf{D})\in\mathcal{B}\), where \(\mathcal{B}\) is a batch of such pairs. The first component of the NCE loss contrasts an RGB image with negative depth images to measure how closely the RGB image matches its paired depth among others in the batch: \[\mathcal{L}_{D\to I}=-\frac{1}{|\mathcal{B}|}\sum_{(\mathbf{I},\mathbf{D})\in\mathcal{B}}log\frac{exp(S(\mathbf{I},\mathbf{D}))}{exp(S(\mathbf{I},\mathbf{D}))+\sum_{(\mathbf{I}^{\prime},\mathbf{D}^{\prime})\in\mathcal{B}}exp(S(\mathbf{I},\mathbf{D}^{\prime}))} \tag{3}\] The second component of the NCE loss, \(\mathcal{L}_{I\to D}\), is analogously defined to contrast a depth image with negative RGB image samples, and the two components are averaged: \[\mathcal{L}_{NCE}=(\mathcal{L}_{D\to I}+\mathcal{L}_{I\to D})/2 \tag{4}\] ### Late Fusion of the Modalities The detection and classification scores computed from the RGB and depth modalities carry disparate and complementary details that jointly enrich our understanding of the target objects. Therefore, we combine these scores to amplify the performance of object detection. As the depth images are derived from the RGB images, the spatial arrangement of the objects is equivalent in both modalities. Hence, we utilize the same visual region proposals for both RGB and depth modalities. Following the application of the RoI pooling layer and the Siamese box feature extractor to the depth feature map \(\psi(\mathbf{D})\), we obtain the feature vector \(\phi(d_{i})\) for each depth region. Thereafter, we employ the approach presented in Eq. 1 to derive the depth detection score \(d_{i,c}^{det}\in\mathbb{R}^{1}\) and the depth classification score \(d_{i,c}^{cls}\in\mathbb{R}^{1}\). Subsequently, we fuse (sum) the scores from the RGB and depth modalities: \[f_{i,c}^{det}=v_{i,c}^{det}+d_{i,c}^{det},\quad f_{i,c}^{cls}=v_{i,c}^{cls}+d_{i,c}^{cls} \tag{5}\] where \(f_{i,c}^{det}\) and \(f_{i,c}^{cls}\) are fusion detection and classification scores, respectively. Following the WSDDN [1] architecture, these classification and detection scores are converted to probabilities such that \(p_{i,c}^{cls}\) is the probability that class \(c\) is present in proposal \(f_{i}\), and \(p_{i,c}^{det}\) is the probability that \(f_{i}\) is important for predicting image-level label \(y_{c}\). \[p_{i,c}^{det}=\frac{exp(f_{i,c}^{det})}{\sum_{k=1}^{R}exp(f_{k,c}^{det})},\quad p_{i,c}^{cls}=\frac{exp(f_{i,c}^{cls})}{\sum_{k=1}^{C}exp(f_{i,k}^{cls})} \tag{6}\] We element-wise multiply the classification and detection scores to obtain the combined score \(p_{i,c}^{comb}\): \[p_{i,c}^{comb}=p_{i,c}^{det}p_{i,c}^{cls} \tag{7}\] Finally, image-level predictions \(\hat{p}_{c}\) are computed as follows, where greater values of \(\hat{p}_{c}\in[0,1]\) mean a higher likelihood that \(c\) is present in the image. \[\hat{p}_{c}=\sigma\left(\sum_{i=1}^{R}p_{i,c}^{comb}\right) \tag{8}\]
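The two training-time computations just defined can be sketched as follows (PyTorch; a hedged illustration, not the released code). We write the NCE loss in the standard InfoNCE form, in which each positive pair enters the denominator once, and we read \(\sigma\) in Eq. (8) as a logistic function; batch layout and tensor shapes are our assumptions.

```python
import torch
import torch.nn.functional as F

def nce_loss(z_rgb, z_depth, rho=0.07):
    """Symmetric contrastive loss of Eqs. (2)-(4) for L2-normalized
    embeddings z_rgb, z_depth of shape (B, d)."""
    sim = z_rgb @ z_depth.t() / rho            # S(I, D) for all pairs, Eq. (2)
    targets = torch.arange(sim.size(0), device=sim.device)
    l_d2i = F.cross_entropy(sim, targets)      # Eq. (3): RGB -> paired depth
    l_i2d = F.cross_entropy(sim.t(), targets)  # mirrored component L_{I->D}
    return 0.5 * (l_d2i + l_i2d)               # Eq. (4)

def fused_image_scores(v_det, v_cls, d_det, d_cls):
    """Late fusion of Eqs. (5)-(8); inputs are (R, C) score matrices
    from the RGB (v) and depth (d) branches."""
    f_det, f_cls = v_det + d_det, v_cls + d_cls   # Eq. (5)
    p_det = F.softmax(f_det, dim=0)               # over proposals, Eq. (6)
    p_cls = F.softmax(f_cls, dim=1)               # over classes, Eq. (6)
    p_comb = p_det * p_cls                        # Eq. (7)
    return torch.sigmoid(p_comb.sum(dim=0))       # image-level p_hat, Eq. (8)
```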
Assuming the label \(y_{c}=1\) if and only if class \(c\) is present, the classification loss used for training the model is defined as follows. Since no region-level labels are provided, we must derive region-level scores indirectly, by optimizing this loss. \[\mathcal{L}_{mil}=-\sum_{c=1}^{C}\left[y_{c}\log\hat{p}_{c}+(1-y_{c})\log(1-\hat{p}_{c})\right] \tag{9}\] ### Depth Priors By leveraging a small amount of caption and ground-truth bounding box annotations, we extract prior knowledge about the relative depths of objects. We subsequently exploit these depth priors to guide the identification of the relevant visual regions that may contain the target objects. We note that the depth priors we estimate vary very little whether we use 1%, 5%, 10%, or 50% of the available COCO training data; see Table 3. Further, we show that even though we estimate the depth priors from COCO, they generalize to Conceptual Captions (Table 2). We use the notation \(pd_{i}\in[0,1]\), \(i\in 1,...,R\), where \(R\) is the number of pre-computed region proposals for depth image \(\mathbf{D}\), to represent the average depth value in the \(i\)-th region proposal. Each region proposal contains pixels with values ranging from 0 to 1, which correspond to the smallest and largest depth values, respectively. We employ a limited set of ground-truth bounding box annotations \(B\) to approximate the depth value of objects using the caption that describes the image in which the objects are present. Let \(C\) be the set of object categories, \(W\) be the set of distinct words in the vocabulary that includes every word in the captions, and \(B\) be the set of ground-truth bounding box annotations. Let \(d_{c,w,b}\in\{[0,1],\varnothing\}\) denote the depth value for object \(c\in C\), word \(w\in W\) and box \(b\in B\), which is calculated by averaging the depth values in the pixels of \(b\), similar to the calculation of \(pd_{i}\). As an example, \(d_{\text{bird,ocean},b}\) represents the depth value of the "bird" object corresponding to box \(b\) of a depth image that has a caption that includes the "ocean" word. In the absence of "ocean" in the caption or when annotation \(b\) does not correspond to the "bird" object, the depth value \(d_{\text{bird,ocean},b}\) is set to null \(\varnothing\). Further, \(d_{c,w}\) represents a set of depth values calculated from annotations \(B\), excluding \(\varnothing\) values. The depth range \(r_{c,w}=[\mathrm{mean}-\mathrm{std},\ \mathrm{mean}+\mathrm{std}]\) for class \(c\) and word \(w\) is obtained by utilizing the mean and standard deviation (std) of this set of depth values in \(d_{c,w}\). Once these depth ranges \(r_{c,w}\) are computed, they can be applied to estimate an allowable depth range for a class \(c\) in a new image, without any boxes on that image. For any new depth image \(\mathbf{D}\), the range of estimated depth priors for an object \(c\) is \(dr_{c}\): \[dr_{c}=\frac{\sum_{s\in S}r_{c,s}}{|S|} \tag{10}\] where \(S\) denotes the set of words in the caption corresponding to \(\mathbf{D}\). We **only require captions at training time.**
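The statistics behind \(r_{c,w}\) and the per-image prior \(dr_{c}\) of Eq. (10) amount to a couple of dictionary passes. A hedged sketch in plain Python follows; the input format (triples of class name, caption words, and mean box depth) is our own illustrative assumption, not the authors' data layout.

```python
from collections import defaultdict
from statistics import mean, stdev

def depth_ranges(annotations):
    """r_{c,w} = [mean - std, mean + std] of the box depths d_{c,w,b};
    null (missing) pairs are simply never inserted."""
    d = defaultdict(list)                      # (class, word) -> depth values
    for cls, words, box_depth in annotations:
        for w in set(words):
            d[(cls, w)].append(box_depth)
    ranges = {}
    for key, vals in d.items():
        if len(vals) > 1:                      # std needs at least two samples
            m, s = mean(vals), stdev(vals)
            ranges[key] = (m - s, m + s)
    return ranges

def depth_prior(ranges, cls, caption_words):
    """dr_c of Eq. (10): average the observed ranges r_{c,s} over the
    caption words s; returns None when no word has a recorded range."""
    hits = [ranges[(cls, s)] for s in caption_words if (cls, s) in ranges]
    if not hits:
        return None
    lo = sum(h[0] for h in hits) / len(hits)
    hi = sum(h[1] for h in hits) / len(hits)
    return (lo, hi)
```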
We utilize the estimated depth prior range \(dr_{c}\) to identify potentially important regions in \(pd\) for each class. We define a depth mask indicator variable \(m_{i,c}\in\{0,1\}\) for each region \(i\in R\) and class \(c\in C\), which indicates the likelihood of a particular region in an image containing an object of a certain class. The computation of this variable is as follows: \[m_{i,c}=\begin{cases}1,&\text{if }pd_{i}\in dr_{c}\\ 0,&\text{otherwise}\end{cases} \tag{11}\] If the proposal depth value \(pd_{i}\) falls within the estimated depth prior range \(dr_{c}\) for class \(c\), it is considered a relevant region for that class, and the corresponding mask variable \(m_{i,c}\) is set to 1; otherwise, it is set to 0. Subsequently, we utilize the mask variable \(m_{i,c}\) in combination with our end-to-end network to improve its performance. As an example, Fig. 3 presents two images featuring a "bird" object, with different depths. The estimated depth prior ranges \(dr_{\text{bird}}\) are calculated using Eq. 10 for each image based on the words present in the caption. The caption of the first image includes the "feeding" and "hand" words, which suggest the "bird" object is likely to have a smaller depth value, while the caption of the second image includes the "flying" and "ocean" words, which suggest the "bird" object is likely to have a bigger depth value. The regions on the images having a proposal depth value of \(pd_{1}\) are in the estimated depth prior range \(dr_{\text{bird}}\); we observe that they indeed include the "bird" object. The range allows us to rule out regions with depth values \(pd_{2}\), which do not contain "bird". #### 3.3.1 Depth Priors: Updated OICR Input: Proposals \(R\), depth mask indicator variable \(m\). Output: Pseudo boxes \(\hat{R}\).
```
1: \(\hat{R}=\varnothing\)
2: for all \(c=1:C\) do
3:   for all \(i=1:|R|\) do
4:     \(\hat{R}_{c}=\hat{R}_{c}\cup R_{i}\) if \(m_{i,c}=1\)
5: return \(\hat{R}\)
```
**Algorithm 1** OICR Mining with Depth Priors
Online Instance Classifier Refinement (OICR) [32] is a weakly supervised object detection algorithm that iteratively refines object proposals. Recent studies [31, 27, 30] have highlighted the importance of more effective proposal mining strategies for achieving better recall and precision of objects in WSOD detectors. We propose an algorithm, given in Alg. 1, that incorporates the depth priors during proposal mining. As our proposed method aims to enhance MIL-based WSOD methods, we utilize our algorithm in conjunction with recent OICR-style/self-training/mining strategies, subject to the depth prior condition specified in the fourth line of Alg. 1. After using the depth prior condition, OICR-style mining selects fewer but more relevant proposals, so our contribution increases mining precision.1 Footnote 1: In early experiments, we verified that our method’s gains persist if the baseline is allowed to drop the lowest-scoring pseudo boxes but without using depth. #### 3.3.2 Depth Priors: Attention The depth mask variable \(m_{i,c}\) indicates the potentially important proposal regions for each class. We use this variable to employ an attention mechanism with the combined score probabilities \(p_{i,c}^{comb}\) provided in Eq. 7 as follows: \[p_{i,c}^{comb}=p_{i,c}^{comb}*0.5,\quad\text{if }m_{i,c}=0 \tag{12}\] This mechanism reduces the probability of a region for class \(c\) by half if the region is determined as less likely to be important by \(m_{i,c}\). These scores are then used in Eq. 8. Figure 3: The figure displays a row of images accompanied by their respective depth and caption data, as well as the proposal depth values of different regions and the estimated depth prior ranges.
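Eq. (11), the mining rule of Alg. 1, and the attention of Eq. (12) reduce to a few array operations; below is a hedged NumPy sketch of this logic (the array layouts are our assumption).

```python
import numpy as np

def depth_mask(pd, dr):
    """Eq. (11): m[i, c] = 1 iff proposal depth pd[i] lies in the prior
    range dr[c] = (lo, hi). pd: (R,), dr: (C, 2) -> boolean (R, C)."""
    lo, hi = dr[:, 0], dr[:, 1]
    return (pd[:, None] >= lo[None, :]) & (pd[:, None] <= hi[None, :])

def mine_pseudo_boxes(proposals, m):
    """Alg. 1: keep, per class, only proposals whose depth is plausible."""
    return {c: [proposals[i] for i in range(len(proposals)) if m[i, c]]
            for c in range(m.shape[1])}

def depth_attention(p_comb, m):
    """Eq. (12): halve the combined score of depth-implausible regions."""
    return np.where(m, p_comb, 0.5 * p_comb)
```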
## 4 Experiments We test our method on top of two weakly-supervised detection techniques, and verify the contributions of the constituents of our approach:
* Siamese WSOD Network (Siamese-Only, Sec. 3.1);
* Late Fusion of the Modalities (Fusion, Sec. 3.2), which combines classification/detection from RGB and depth, and builds on top of the Siamese WSOD Network (Sec. 3.1);
* Depth Priors are utilized to enhance the OICR-style module (Depth-Oicr, Sec. 3.3.1) and construct attention (Depth-Attention, Sec. 3.3.2) with visual-only score probabilities, both building upon the Siamese WSOD Network (Sec. 3.1);
* Finally, we use all components of our method (Wsod-Amplifier, Sec. 3.1, 3.2, 3.3.1, 3.3.2).
### Experimental Setup #### 4.1.1 Dataset and Metrics **PASCAL Visual Object Classes 2007 (VOC-07)** [5] contains 20 classes. For training, we use 2501 images from the train set and 2510 images from the validation set. We evaluate using 4952 images from the test set. **Common Objects in Context (COCO)** [20] consists of 80 classes. We utilize approximately 118k images from the train set and use the labels provided at the image level. Additionally, to test how well our method works when labels are obtained from noisy language supervision (in captions), we train our models using labels obtained through an exact match (EM) method following [33], also referred to as substring matching in [7]. Since no labels could be extracted from the captions of around 15k images, we exclude them from the training set and use 103k images. We evaluate using 5k images from the validation set. **Conceptual Captions (CC)** [28] is a large-scale image captioning dataset containing over 3 million images annotated with only captions. We use around 30k images with their corresponding captions; labels for the 80 COCO classes are extracted from the captions using the exact match method. During evaluation, we use 5k images from the COCO validation set. **Domain shift datasets** [15]. Clipart1k has the same 20 classes as VOC with 1,000 images, while Watercolor2k and Comic2k share 6 classes with VOC and have 2,000 images each. We use these datasets for evaluation only. **Evaluation protocols.** We utilize mean Average Precision (mAP) at various IoU thresholds as the common evaluation metric for the COCO and VOC datasets. Additionally, we report mAP for objects of different sizes during COCO evaluation, and Correct Localization (CorLoc) for VOC evaluation. #### 4.1.2 Implementation details We employ the official PyTorch implementations of the SoS-WSOD [30] and MIST [27] methods to apply our amplifier technique. SoS-WSOD uses four images per GPU (two augmented images and their flipped versions) on a total of 4 GPUs, whereas MIST uses only one image per GPU on a total of 8 GPUs. However, we use one image per GPU for SoS-WSOD due to VRAM limitations in our GPUs, as we also utilize a depth image for each corresponding RGB image. Therefore, the baseline results of SoS-WSOD reported in Table 1 are slightly lower than those reported in the original paper. Moreover, we solely use the first stage of SoS-WSOD, since it includes the MIL-based WSOD module that our method conveniently builds on. The other settings are kept the same as in the official implementations, with the VGG16 backbone. To compute the depth prior, we use only 5 percent of the total annotations from the COCO dataset as the ground-truth bounding box annotations \(B\). Furthermore, we utilize the same depth ranges \(r_{c,w}\) from the COCO annotations for the Wsod-Amplifier method on the Conceptual Captions dataset, as the latter lacks ground-truth box annotations.
### Comparing our amplifier to state of the art We evaluate our proposed methods, Fusion and Wsod-Amplifier, on two state-of-the-art WSOD approaches, SoS-WSOD [30] and MIST [27], and conduct experiments on the COCO and VOC-07 datasets. The performance of our proposed methods is compared with the baseline methods in Table 1. When our Wsod-Amplifier method is applied to MIST, it improves the baseline performance by \(17\%\) in \(mAP_{50:95}\) (relative gain, 13.8/11.8 - 1) and \(14\%\) in \(mAP_{50}\). Similarly, when our Wsod-Amplifier method is applied to SoS-WSOD, it improves the baseline performance by \(2\%\) in \(mAP_{50:95}\), \(1.5\%\) in \(mAP_{50}\), and \(6\%\) in \(mAP_{75}\). As the VOC-07 dataset does not have captions, we are only able to apply the Siamese-Only and Fusion methods on SoS-WSOD, but not Depth-Oicr and Depth-Attention. On this dataset, our improvements outperform the baseline SoS-WSOD by \(5\%\) in \(mAP_{50:95}\), \(2\%\) in \(mAP_{50}\), and \(9\%\) in \(mAP_{75}\). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Avg. Precision, IoU} & \multicolumn{3}{c}{Avg. Precision, Area} \\ \cline{2-7} Methods on COCO & 0.5:0.95 & 0.5 & 0.75 & S & M & L \\ \hline MIST [27] & 11.8 & 24.3 & 10.7 & 3.6 & 13.2 & 18.9 \\ + Fusion & 13.5 & 26.9 & **12.4** & 4.0 & 14.7 & 21.6 \\ + Wsod-Amplifier & **13.8** & **27.7** & 12.3 & **4.6** & **14.7** & **22.4** \\ \hline SoS-WSOD [30] & 10.2 & 21.5 & 8.6 & **2.5** & 10.6 & 17.7 \\ + Fusion & 10.3 & 21.6 & 8.9 & 2.3 & 10.8 & 18.4 \\ + Wsod-Amplifier & **10.4** & **21.8** & **9.1** & 2.4 & **11.0** & **18.7** \\ \hline \hline & \multicolumn{3}{c}{Avg. Precision, IoU} & \multicolumn{3}{c}{CorLoc} \\ \cline{2-7} Methods on VOC-07 & 0.5:0.95 & 0.5 & 0.75 & 0.5:0.95 & 0.5 & 0.75 \\ \hline SoS-WSOD [30] & 24.8 & 52.2 & 20.4 & 38.7 & 71.7 & 36.9 \\ + Fusion & **26.0** & **53.1** & **22.3** & **39.6** & **72.1** & **38.5** \\ \hline \hline \end{tabular} \end{table} Table 1: This table compares the performance enhancement of our methods to their baselines, SoS-WSOD [30] and MIST [27], on COCO and VOC-07. The best performer per column is in **bold**. ### Ablation studies and visualization **Experiments with labels from captions.** Several attempts [35, 33, 7] have been made to eliminate the requirement for image-level labels by leveraging noisy label information obtained from captions or subtitles. Although it is cost-effective to use text information for label extraction, it results in a decrease in the performance of weakly supervised object detection. [33] propose a text classifier approach to extract labels more effectively than the simple exact match (EM) and reduce the noise between text and ground truth (GT) labels. In contrast to previous studies, our research employs the depth modality to reduce the noise in labels extracted from captions. Our approach improves the model's detection capability and employs captions during the calculation of depth priors. We conducted experiments with MIST [27] using both GT and EM labels and observed that, as expected, training with GT labels leads to significantly better performance than training with EM labels (Table 2), due to the noise in labels extracted from captions. However, our proposed Wsod-Amplifier method applied on MIST with EM labels surpasses both the EM baseline and MIST trained with GT labels.
These findings demonstrate that our method effectively reduces noise and enables the model trained with EM labels to achieve better performance than those trained with GT labels. It is worth noting that the text classifier approach proposed by [33] also performs better than EM-labeled training data, but falls short of the performance achieved by GT-labeled data. **Results on noisy datasets.** We also extract labels from captions on the Conceptual Captions dataset, which lacks labels at the image level. We also observe that our Wsod-Amplifier boosts results by an impressive 63% relative gain using \(mAP_{50}\). Conceptual Captions is a noisier dataset than COCO, since captions were not collected through crowdsourcing, but were crawled as alt-text for web search results. Thus, it is noteworthy that _the benefit of our approach becomes more pronounced as the cost of supervision decreases, and the noise in the supervision increases._ **Analysing the components of our approach.** To understand the impact of each component of our approach on the overall performance, we conducted experiments with MIST [27] using EM labels as a baseline and applied each component of our method on top of the baseline in Table 2. Our Siamese-Only method, which incorporates the depth modality in the Siamese network using contrastive learning, improves feature extraction and results in a \(4\%\) increase in \(mAP_{50:95}\) and \(mAP_{50}\). Our Depth-Oicr method, which utilizes depth priors in the OICR module to improve the mining strategy, increases \(mAP_{50:95}\) and \(mAP_{50}\) over Siamese-Only by \(7-8\%\) on COCO and \(47-50\%\) on Conceptual Captions (CC). Our Depth-Attention method, which incorporates depth priors to use potentially important regions in an attention mechanism with combined score probabilities, increases \(mAP_{50:95}\) and \(mAP_{75}\) by \(6-7\%\). Our Fusion method, which combines RGB and depth image scores, improves by \(14-16\%\) on COCO and \(7-21\%\) on CC. Comparing Fusion and Depth-Oicr, the bigger gain using \(mAP_{50}\) is achieved by Fusion on COCO, and Depth-Oicr on CC. _Thus, the benefit of our WSOD-specific method component increases as the noise in the dataset increases, which is appealing due to its real-world applicability._ Finally, our Wsod-Amplifier method, which includes all components of our approach, achieves the highest performance increase over the MIST w/ EM baseline, with improvements in all \(mAP\) metrics by \(18-20\%\) on COCO and \(42-63\%\) on CC. **Generalization of depth priors.** Even though we apply the depth priors calculated from COCO to CC, without any bounding boxes available in CC, our proposed method exhibits a more substantial enhancement in CC performance compared to COCO, achieving an improvement of \(63\%\) \(mAP_{50}\) over the MIST baseline (50% improvement from Depth-Oicr alone). Thus, our Depth-Oicr method has a higher impact than Fusion on CC, in contrast to COCO. Given the recent interest in learning from vision-language data, our approach has the potential to be highly impactful. **Generalization to appearance changes.** By relying on depth information, our method builds some robustness to overfitting to appearance, which may not be the same across datasets. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Avg. Precision, IoU} & \multicolumn{3}{c}{Avg.
Precision, Area} \\ \cline{2-7} Methods on COCO & 0.5:0.95 & 0.5 & 0.75 & S & M & L \\ \hline MIST [27] w/ GT & 9.7 & 21.1 & 8.0 & 3.0 & 10.4 & 15.1 \\ \hline MIST [27] w/ EM & 8.5 & 17.9 & 7.3 & 3.0 & 9.4 & 14.9 \\ + Siamese-Only & 8.8 & 18.7 & 7.3 & 2.9 & 9.6 & 15.4 \\ + Depth-Oicr & 9.1 & 19.4 & 7.4 & 3.0 & 9.8 & 16.0 \\ + Depth-Attention & 9.1 & 18.8 & 7.8 & 2.8 & 9.4 & 16.1 \\ + Fusion & 9.9 & 20.4 & **8.5** & 3.0 & 10.1 & 17.1 \\ + Wsod-Amplifier & **10.2** & **21.1** & 8.4 & **3.2** & **10.3** & **17.4** \\ \hline \hline & \multicolumn{3}{c}{Avg. Precision, IoU} & \multicolumn{3}{c}{Avg. Precision, Area} \\ \cline{2-7} Methods on CC & 0.5:0.95 & 0.5 & 0.75 & S & M & L \\ \hline MIST [27] w/ EM & 1.7 & 3.8 & 1.4 & 0.3 & 1.7 & 3.4 \\ + Fusion & 2.0 & 4.1 & 1.7 & 0.3 & 1.9 & 4.0 \\ + Depth-Oicr & 2.5 & 5.7 & 2.0 & 0.4 & 2.1 & 5.3 \\ + Wsod-Amplifier & **2.6** & **6.2** & **2.0** & **0.4** & **2.7** & **5.6** \\ \hline \hline \end{tabular} \end{table} Table 2: This table introduces the effect of each component of our method implemented on MIST [27] with exact match (EM) labels on COCO (top) and Conceptual Captions (CC) (bottom). The best performer per column is in **bold**. On top, all proposed methods that outperform the Siamese-Only are underlined. To test this hypothesis, we conduct experiments with domain shift datasets [15]. Table 4 shows that our Wsod-Amplifier boosts the performance of the MIST baseline in \(mAP_{50}\) by \(4-10\%\), even though no training was performed on these datasets. **Analyzing the depth priors calculated with varying percentages of GT annotations.** In this section, we examine how the proportion of ground truth (GT) annotations affects the ability to estimate the relative depth range of objects. Table 3 presents the average depth ranges and standard deviations for ten objects at various annotation percentages. Across all 80 objects, the average difference between the upper and lower bounds is \(0.26\), while the average standard deviation is a mere \(0.01\). This indicates that the standard deviation is significantly lower than the average range difference. Although we utilize \(5\%\) of the annotations by default in our experiments, our findings suggest that even utilizing only \(1\%\) of GT annotations is sufficient to approximate depth priors successfully. **Qualitative analysis.** We visualize the object detection performance improvement of our proposed Wsod-Amplifier method in comparison to MIST [27] in Figure 4. In the first image, the baseline struggles to accurately identify multiple instances of the same "vase" objects, instead grouping them together in a single bounding box with a high score. Our method overcomes this challenge, precisely detecting each individual "vase" instance. In the second image, the baseline faces the problem of part domination due to some discriminative parts of a "zebra" object. Our proposed method helps to overcome this issue by utilizing the depth modality, which emphasizes the geometric variations of objects, while comparatively ignoring the complex background. In other images, unlike our method, the baseline misses objects entirely, or produces large and imprecise bounding boxes. Moreover, the bounding boxes detected by our method tend to have higher prediction scores. ## 5 Conclusion We have demonstrated how the depth modality can be useful for weakly-supervised object detection, without incurring annotation or significant computation costs.
Our Siamese WSOD network efficiently incorporates the RGB and depth modalities, along with contrastive loss and fusion. Using the relationship between language and depth, depth priors identify the bounding box proposals that may contain an object of interest. We have implemented our boosting approach on two WSOD methods, SoS-WSOD and MIST, and significantly increased their performance. \begin{table} \begin{tabular}{l c c c} \hline \hline Methods & Clipart & Watercolor & Comic \\ \hline MIST [27] w/ GT & 9.4 & 13.3 & 9.2 \\ + Wsod-Amplifier & **10.2** & **14.7** & **9.6** \\ \hline \hline \end{tabular} \end{table} Table 4: This table introduces the improvement of our Wsod-Amplifier over MIST on domain shift datasets. The results are in the \(mAP_{50}\) metric. The best performer per column is in **bold**. Figure 4: Qualitative comparison of MIST [27] (top) and our proposed Wsod-Amplifier method (bottom) implemented on MIST. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c c c} \hline \hline \% of GT anno. & person & car & motorcycle & handbag & bowl & hair drier & toaster & scissors & parking meter & tennis racket \\ \hline 1\% & 0.23 & 0.49 & 0.26 & 0.53 & 0.17 & 0.43 & 0.13 & 0.41 & 0.13 & 0.43 & 0.39 & 0.52 & 0.32 & 0.42 & 0.08 & 0.42 & 0.11 & 0.39 & 0.23 & 0.48 \\ 5\% & 0.23 & 0.49 & 0.28 & 0.54 & 0.15 & 0.41 & 0.14 & 0.41 & 0.13 & 0.43 & 0.22 & 0.46 & 0.27 & 0.55 & 0.15 & 0.45 & 0.12 & 0.35 & 0.16 & 0.44 \\ 10\% & 0.23 & 0.49 & 0.28 & 0.54 & 0.16 & 0.41 & 0.14 & 0.42 & 0.13 & 0.43 & 0.27 & 0.47 & 0.26 & 0.54 & 0.15 & 0.46 & 0.16 & 0.44 & 0.16 & 0.43 \\ 50\% & 0.23 & 0.49 & 0.27 & 0.54 & 0.16 & 0.42 & 0.15 & 0.42 & 0.14 & 0.44 & 0.24 & 0.50 & 0.26 & 0.54 & 0.16 & 0.45 & 0.15 & 0.44 & 0.17 & 0.44 \\ \hline Average & 0.230 & 0.490 & 0.272 & 0.537 & 0.160 & 0.417 & 0.140 & 0.415 & 0.132 & 0.432 & 0.280 & 0.487 & 0.277 & 0.512 & 0.135 & 0.445 & 0.135 & 0.405 & 0.180 & 0.447 \\ Stdev & 0 & 0 & 0.009 & 0.005 & 0.008 & 0.009 & 0.008 & 0.005 & 0.005 & 0.005 & 0.070 & 0.027 & 0.028 & 0.061 & 0.036 & 0.017 & 0.023 & 0.043 & 0.033 & 0.022 \\ \hline \hline \end{tabular} \end{table} Table 3: The table displays the lower and upper depth range values, denoted as \(r_{c}=[\text{lower, upper}]\), for each object class \(c\), which are extracted using varying percentages of ground truth bounding box annotations. Among the ten object classes considered, the first 5 exhibit the lowest standard deviations across the different percentages, and the last 5 display the highest deviations out of the 80 COCO classes.
2307.05173
Diagnostics of Anomalous Conductance Plateaus in Abelian Quantum Hall Regime
Two-terminal conductance quantization in the context of quantum Hall (QH) physics is intimately related to current carried by a discrete number of edge modes. Upon pinching off of a QH bar, one may reach regimes in which some modes which are fully transmitted (while the others are fully reflected), giving rise to quantized conductance plateaus. Here we study an alternative protocol for quantized values of the conductance, which relies on inter-mode equilibration. Concretely this is the result of considering different inequalities between charge equilibration and thermal equilibration lengths on one hand and geometrical lengths on the other hand, and accounting for the possibility of edge reconstruction. Taking the 2/3 state as a prototypical example, we obtain a host of scenarios leading to conductance plateaus at $e^2/3h$ (observed previously), $e^2/2h$ (recently observed), and $5e^2/9h$ (our prediction). Our approach facilitates distinguishing among various scenarios based on dc current auto- and cross-correlation (shot noise).
Sourav Manna, Ankur Das, Yuval Gefen, Moshe Goldstein
2023-07-11T11:10:39Z
http://arxiv.org/abs/2307.05173v3
# Shot Noise as a Diagnostic in the Fractional Quantum Hall Edge Zoo ###### Abstract Bulk-boundary correspondence allows one to probe the bulk topological order by studying the transport properties of the edge modes. However, edge modes in a fractional quantum Hall (FQH) state can undergo edge reconstruction; moreover, they can be in the coherent regime or exhibit varying degrees of charge and thermal equilibration, giving rise to a zoo of intriguing scenarios. Even more possibilities arise when a quantum point contact (QPC) is introduced and tuned into a conductance plateau. Distinguishing among the different models and equilibration regimes is an outstanding problem, which cannot be resolved by dc electrical conductance measurements. In this work we show that _electrical shot noise_ at a QPC conductance plateau can serve as such a diagnostic. As a prototypical example we consider the \(\nu=2/3\) FQH state, and show that different inequalities between the auto- and cross-correlation electrical shot noise hold for different edge models. In particular, our results offer several possible scenarios for the QPC conductance plateaus \(e^{2}/3h\) (observed previously), \(e^{2}/2h\) (recently observed), and \(5e^{2}/9h\) (our prediction), as well as how to distinguish among them via shot noise. _Introduction.--_ The oldest known examples of topological states of matter are the quantum Hall states in a two-dimensional electron gas (2DEG) subject to a strong magnetic field [1; 2; 3]. These gapped bulk phases have chiral edge modes [4; 5] carrying both charge and energy [6; 7; 8]. In simple cases, such as the Laughlin states [9], these modes can be co-propagating. The situation becomes more interesting when counter-propagating modes appear, either due to topological constraints [10; 11] (e.g. the hole-conjugate \(\nu=2/3\) filling fraction) or due to edge reconstruction [12; 13]. Therefore, for a given bulk topological order a number of edge models can be found which are consistent with the bulk-boundary correspondence. Moreover, the edge modes can be coherent, or exhibit varying degrees of charge and thermal equilibration, which gives rise to a rich zoo of scenarios. Distinguishing between them based on an experimentally relevant diagnostic in a single device is an important and interesting avenue. A quantum point contact (QPC), that is, a constriction in the 2DEG, is an essential component for manipulating and controlling edge modes. As we make the QPC constriction narrower, the conductance across the QPC can have plateaus; e.g., for \(\nu=2/3\) bulk filling, plateaus at \(e^{2}/2h\) [14; 15] and at \(e^{2}/3h\) [16] were observed experimentally. This leads to even more possibilities for the corresponding edge mode structure, which cannot be resolved by dc electrical conductance measurements. In this work we will both enumerate such possibilities and show how they could be distinguished experimentally using purely-electrical means. Earlier, it was shown that the shot noise (described later) across the QPC can be used to determine the fractional charge carried by an edge mode [17; 18; 19; 20; 21; 22]. However, even though noise is not expected at a QPC conductance plateau, it was observed experimentally [16; 23; 24] and discussed theoretically [25; 26]. In this work we show how auto- and cross-correlation shot noise can resolve the edge structure and equilibration state. _System.--_ We consider a Hall bar at filling \(\nu\) interrupted by a QPC with filling \(\nu_{i}\).
We denote its four contacts as a source \(S\), on which a dc voltage \(V_{\rm dc}\) is applied, a ground \(G\), and two drains \(D_{1},D_{2}\), and we assume the typical arm length \(L_{\rm A}\) to be much larger than the typical QPC size \(L_{\rm Q}\) (Figs. 1, 2, 3). In addition to the edge modes dictated by topology, there can be edge reconstruction leading to the introduction of counter-propagating edge modes for each filling. For each edge structure the modes can be in the coherent regime at zero temperature, and can be renormalized due to inter-mode interactions and random disorder-induced charge tunneling, reaching a renormalization group (RG) fixed point. Also, in each edge structure equilibration can take place at finite temperature. Recent experiments have shown that the charge equilibration length \(l_{\rm eq}^{\rm ch}\) is typically very short [27; 28; 29], hence full charge equilibration can be assumed in each segment of the device, leading to \(l_{\rm eq}^{\rm ch}\ll L_{\rm Q}\ll L_{\rm A}\). On the other hand, the thermal equilibration length \(l_{\rm eq}^{\rm th}\) can be parametrically larger, allowing for three regimes of thermal equilibration: (1) each segment is thermally unequilibrated, \(L_{\rm Q}\ll L_{\rm A}\ll l_{\rm eq}^{\rm th}\) (no), (2) the QPC is thermally unequilibrated while the other segments are thermally equilibrated, \(L_{\rm Q}\ll l_{\rm eq}^{\rm th}\ll L_{\rm A}\) (mixed), and (3) each segment is thermally equilibrated, \(l_{\rm eq}^{\rm th}\ll L_{\rm Q}\ll L_{\rm A}\) (full). For full charge and thermal equilibration, the modes in each segment form a chiral hydrodynamic mode characterized by its electrical and thermal conductances, which eliminates any effect of edge reconstruction. We denote by \(I_{1}\) and \(I_{2}\) the currents (correspondingly \(Q_{1}\) and \(Q_{2}\) are the charges) entering the drains \(D_{1}\) and \(D_{2}\), respectively. The dc current-current auto-correlations are defined as \(\delta^{2}I_{1}=\langle(I_{1}-\langle I_{1}\rangle)^{2}\rangle\) in \(D_{1}\) and \(\delta^{2}I_{2}=\langle(I_{2}-\langle I_{2}\rangle)^{2}\rangle\) in \(D_{2}\), while the cross-correlation is \(\delta^{2}I_{c}=\langle\left(I_{1}-\langle I_{1}\rangle\right)\left(I_{2}-\langle I_{2}\rangle\right)\rangle\) [30]. Correspondingly, the correlations in charge fluctuations are \(\delta^{2}Q_{1},\delta^{2}Q_{2}\) and \(\delta^{2}Q_{c}\). The Fano factors are defined as \(F_{j}=|\delta^{2}I_{j}|/2e\langle I\rangle t(1-t)=|\delta^{2}Q_{j}|/e\tau\langle I\rangle t(1-t)\), with \(j\in\{1,2,c\}\), where \(\langle I\rangle\) is the source current, \(\tau\) is time, and \(t=\langle I_{1}\rangle/\langle I\rangle\) is the QPC transmission [31]. The QPC conductance is \(G_{D_{1}}e^{2}/h\), where \(G_{D_{1}}=t\langle I\rangle\tau/e\). To make the discussion of the different edge models concrete, from now on we focus our attention on the prototypical example of \(\nu=2/3\), its QPC conductance plateaus, and shot noise in those plateaus. As mentioned above, we will show that, by measuring both the auto- and cross-correlations of the electrical current across a QPC, one may discern both the edge configuration and its degree of equilibration.
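Anticipating the classification developed below and summarized in Table 1, the measured correlations enter the diagnostic only through the Fano factors and the inequalities \(F_{1}>F_{2}>|F_{c}|\) (coherent) versus \(F_{1}=F_{2}>|F_{c}|\) (equilibrated). A small illustrative Python sketch follows; the tolerance used for the equality test is our own arbitrary choice.

```python
def fano(delta2_I, e, I_avg, t):
    """F = |delta^2 I| / [2 e <I> t (1 - t)]."""
    return abs(delta2_I) / (2.0 * e * I_avg * t * (1.0 - t))

def diagnose(d2I1, d2I2, d2Ic, e, I_avg, t, rtol=0.05):
    """Apply the Table-1 inequalities to measured auto-/cross-correlations."""
    F1, F2, Fc = (fano(x, e, I_avg, t) for x in (d2I1, d2I2, d2Ic))
    if abs(F1 - F2) <= rtol * max(F1, F2) and min(F1, F2) > Fc:
        return "equilibrated: F1 = F2 > |Fc|"
    if F1 > F2 > Fc:
        return "coherent: F1 > F2 > |Fc|"
    return "outside the scenarios classified in Table 1"
```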
\(\nu=2/3\) _edge models.--_ We consider the prototypical example of filling \(\nu=2/3\) in a QPC (Figs. 1, 2, 3) and take the structure of the bare edge modes as the MacDonald model [32; 10; 33], consisting of two counter-propagating modes having filling factor discontinuities (from bulk to edge) \(\delta\nu=[-1/3,+1]\). We note that in the coherent regime the MacDonald edge structure fails to be consistent with several experimental observations [34]. Subsequently, it was realized that an interplay of the inter-mode interactions and disorder-induced charge tunneling can drive the system into a disorder-dominated RG fixed point, known as the Kane-Fisher-Polchinski (KFP) RG fixed point [11], consistent with the experimental observations. Later, a number of experimental observations [35; 16; 36] were found which could not be explained by the KFP RG fixed point. To reconcile these experimental observations, a model (reconstructed MacDonald edge [13; 37]) was proposed consisting of four counter-propagating modes having \(\delta\nu=[-1/3,+1,-1/3,+1/3]\), which, in the coherent regime, may give rise to a new Wang-Meir-Gefen (WMG) [13] intermediate fixed point. For each edge structure, different equilibration regimes can occur, as explained earlier. The emergence of different \(G_{D_{1}}\) plateaus from different models, and a classification of those based on the shot noise, are listed in Table 1. We assume that there is no bulk-leakage [38; 39; 40; 41]. _The \(G_{D_{1}}=1/2\) plateau.--_ Recent experiments have shown the emergence of an intermediate QPC conductance plateau at \(G_{D_{1}}=1/2\) [14; 15]. A theoretical explanation for its appearance was provided [14], which is similar to an earlier work [46]. Here, we show that this plateau may arise due to different mechanisms in either the coherent or equilibrated regimes, and show that shot noise can be used to discriminate among them (Table 1). Figure 1: Different scenarios for a \(1/2\) conductance plateau in a Hall bar at filling \(\nu=2/3\) interrupted by a QPC. The source contact \(S\) is biased by a dc voltage \(V_{\rm dc}\); in addition there are a ground contact \(G\) and two drains \(D_{1}\) and \(D_{2}\). The typical geometric lengths \(L_{\rm A},L_{Q}\) are depicted, while the charge propagation chirality is indicated by circular arrows. (a) The MacDonald edge structure, consisting of counter-propagating \(e/3\) and \(e\) charge modes (from bulk to edge) [10]. We assume clean contacts, where the modes are non-interacting [43; 44]. In each region between a contact and QPC the \(e\) and \(e/3\) modes are renormalized to counter-propagating \((2/3+\epsilon)e\) and \(\epsilon e\) charge modes, where \(\epsilon>0\) (KFP RG fixed point refers to \(\epsilon=0\)) [11]. At the QPC, the \(e/3\) mode is fully backscattered and the \(e\) mode is fully transmitted. (b) Edge equilibration model with bulk filling \(\nu=2/3\) and QPC filling \(\nu_{i}=1\). We denote the boundary of the vacuum with \(\nu\) as the “outer”, and with \(\nu_{i}\) as the “upper”, and between \(\nu\) and \(\nu_{i}\) as the “line”. Voltage drops occur at the hot spots \(H_{1},H_{2}\) (red circles) resulting in the noise spots \(M,N,O,P\) (green circles) [45]. (a) Coherent scenario.-- We consider the MacDonald edge structure [10], consisting of counter-propagating \(e/3\) and \(e\) charge modes (from bulk to edge) (Fig. 1(a)). We assume that the contacts are clean, where the modes are non-interacting [43; 44]. In each region between a contact and the QPC the \(e\) and \(e/3\) modes are renormalized to \((2/3+\epsilon)e\) and \(\epsilon e\) charge modes, respectively, where \(\epsilon>0\) (KFP region) [11]. At the KFP RG fixed point we have \(\epsilon=0\) and the \(\epsilon e\) mode becomes neutral. At the QPC the \(e/3\) mode is fully backscattered and the \(e\) mode is fully transmitted at a plateau having QPC conductance \(G_{D_{1}}\). We consider a wavepacket having charge \(e\) emanating from \(S\) in the charge mode \(e\) in time \(\tau\). The wavepacket encounters an infinite number of stochastic reflections and transmissions while entering and leaving the KFP regions. The values of those reflection and transmission coefficients are parametrized by the elements of the density kernel matrix [47] in each KFP region, which are determined by the conductance matrix [43]. The wavepacket leaves the device through either \(D_{1}\) or \(D_{2}\). To calculate the total charge reaching different contacts we write down an infinite series of terms, each of which is composed of the following factors:
(i) a tunnelling factor for the first entrance to the QPC region from \(S\), (ii) a factor for the shortest path leaving the QPC region to reach a contact (\(D_{1}\) or \(D_{2}\)), and (iii) a factor giving the contribution of multiple reflections from different KFP regions, where the piece arising from (iii) is the same for both \(D_{1}\) and \(D_{2}\) [31]. This process gives rise to the shot noise, and we note that \(\delta^{2}Q_{1}=\langle Q_{1}\rangle(1-\langle Q_{1}\rangle)\), \(\delta^{2}Q_{2}=\langle Q_{2}\rangle(1-\langle Q_{2}\rangle)\), and \(\delta^{2}Q_{c}=-\langle Q_{1}\rangle\langle Q_{2}\rangle\). Summing up this series we find that to second order in \(\epsilon\) the source current becomes \(I\approx(2/3+0.25\epsilon+2.25\epsilon^{2})e/\tau\), \(t\approx(3/4+0.281\epsilon+2.21\epsilon^{2})\), and \(G_{D_{1}}\approx(1/2+0.75\epsilon+3.375\epsilon^{2})\) [31]. Moreover, \(F_{1}\approx 2-0.75\epsilon+3.375\epsilon^{2}\), \(F_{2}\approx 1.111-4.416\epsilon+7.375\epsilon^{2}\), and \(F_{c}\approx-0.666+2.25\epsilon-7.875\epsilon^{2}\) [31]. For \(\epsilon=0\) we find the results at the KFP RG fixed point, and we note that \(G_{D_{1}}=1/2\) matches the recent experimental observations [14; 15].
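These expansions can be checked numerically; the following snippet merely evaluates them near the KFP fixed point and confirms that the coherent-scenario ordering \(F_{1}>F_{2}>|F_{c}|\) of Table 1 holds for small \(\epsilon\) (the coefficients are those quoted above; the range of \(\epsilon\) is an arbitrary illustrative choice).

```python
import numpy as np

eps = np.linspace(0.0, 0.05, 6)
G  = 0.5 + 0.75*eps + 3.375*eps**2           # QPC conductance G_{D1}
F1 = 2.0 - 0.75*eps + 3.375*eps**2           # auto-correlation, drain D1
F2 = 1.111 - 4.416*eps + 7.375*eps**2        # auto-correlation, drain D2
Fc = -0.666 + 2.25*eps - 7.875*eps**2        # cross-correlation

for e_, g, f1, f2, fc in zip(eps, G, F1, F2, Fc):
    assert f1 > f2 > abs(fc)                 # coherent ordering of Table 1
    print(f"eps={e_:.3f}  G={g:.4f}  F1={f1:.3f}  F2={f2:.3f}  |Fc|={abs(fc):.3f}")
```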
(b) Equilibration scenario.-- In this case, differences in the transport properties can arise depending on the degree of edge equilibration. We assume that the charge transport is ballistic (B), moving “downstream” along each segment of the QPC (Fig. 1(b)). The nature of heat transport in that segment can be B, diffusive (D), or antiballistic (AB, i.e., “upstream”) [45; 38]. In these three regimes we have, respectively, an exponentially suppressed, an algebraically decaying, or a constant shot noise as a function of the geometric length of the segment [45; 38]. From now on, we neglect the exponentially suppressed contribution to the shot noise, arising from B heat transport. As contacts \(S\) and \(G\) are at different potentials, there are potential drops in the device which occur in the regions marked as hot spots \(H_{1},H_{2}\), resulting in Joule heating (Fig. 1(b)) [45; 48]. In principle, there exist two possible hot spots near the drains \(D_{1},D_{2}\). However, in this configuration the heat generated there cannot flow back to the QPC region, and hence cannot contribute to the noise [45]. In addition, four noise spots \((M,N,O,P)\) are formed due to the creation of thermally excited particle-hole pairs and their stochastic splitting into the two drains \(D_{1},D_{2}\) (Fig. 1(b)) [45; 48]. The shot noise is computed by collecting the contributions from \(M,N,O,P\), which are determined by the nature of heat transport in the outer, line, and upper segments (Fig. 1(b)). The Fano factors are found to be [31] \(F_{1}=F_{2}=F_{O}+F_{P}+F_{M}+F_{N}\) and \(F_{c}=F^{\prime}_{O}+F^{\prime}_{P}-F_{M}-F_{N}\), where \(F_{\alpha}\) is the contribution from the noise spot \(\alpha\in\{M,N,O,P\}\) and \(F^{\prime}_{\beta}\) is the contribution to the cross-correlation from the noise spot \(\beta\in\{O,P\}\). We consider three possible edge structures giving rise to a \(1/2\) QPC conductance plateau. They correspond to three different combinations of \(\{\nu,\nu_{i}\}\), where \(\nu\) and \(\nu_{i}\) are the bulk and QPC filling factors, respectively. We take \(\{\nu,\nu_{i}\}=\{2/3,1\}\) or \(\{2/3(\text{R}),1\}\) or \(\{2/3(\text{R}),1(\text{R})\}\), where \(2/3(\text{R})\) refers to the reconstructed MacDonald edge and \(1(\text{R})\) denotes edge reconstruction in the QPC leading to the QPC filling factor discontinuities (from bulk to edge) \(\delta\nu_{i}=[+1,-1/3,+1/3]\) [42]. We sum up an infinite series to compute the total current at \(D_{1}\) and thereby find \(G_{D_{1}}=1/2\), that is, \(t=3/4\), for all those edge structures; this result is due to the assumed full charge equilibration. For no thermal equilibration we have only B and AB heat transports, leading to constant Fano factors (Table 1). For mixed and full thermal equilibration, the heat transport in the outer segment becomes D, and hence the heat, generated at the hot spots \(H_{1},H_{2}\), flows to the contacts very slowly. Thus, the noise spots \(M,N\) acquire a \(\sqrt{L_{\text{A}}/l_{\text{eq}}^{\text{th}}}\) contribution to their temperatures, which is manifested in the shot noise, while the noise spots \(O,P\) provide asymptotically constant contributions (Table 1) [31]. \begin{table} (Columns: \(G_{D_{1}}(e^{2}/h)\); Coherent (\(F_{1}>F_{2}>|F_{c}|\)); Thermal equilibration (\(F_{1}=F_{2}>|F_{c}|\)), subdivided into No, Mixed, Full.) \end{table} Table 1: The different QPC conductance plateaus \(G_{D_{1}}(e^{2}/h)\) arising in either the coherent (RG fixed point) or equilibrated regimes, and a classification of those based on the Fano factors \(F_{1},F_{2}\) (auto-correlations for drains \(D_{1},D_{2}\), respectively) and \(F_{c}\) (cross-correlation). We always consider full charge equilibration, while no, mixed, or full thermal equilibration may occur. Here, \(2/3\)(R) refers to the reconstructed MacDonald edge [13; 37] and \(1\)(R) denotes edge reconstruction in the QPC leading to the QPC filling factor discontinuities (from bulk to edge) \(\delta\nu_{i}=[+1,-1/3,+1/3]\) [42]. Pink, cyan, and yellow shades display distinct inequalities among \(F_{1},F_{2},|F_{c}|\). Note that \(G_{D_{1}}=1/2\) matches the recent experimental observations [14; 15], while \(G_{D_{1}}=1/3\) was observed previously [16] and \(G_{D_{1}}=5/9\) is our prediction. _The \(G_{D_{1}}=5/9\) plateau.--_ Here only a coherent scenario is possible. We consider the reconstructed MacDonald edge structure, consisting of counter-propagating \(e/3\) (“innermost”), \(e\), \(e/3\) and \(e/3\) (“outermost”) charge modes (from bulk to edge) [13; 37] (Fig. 2). At the QPC, we consider the case when the outermost \(e/3\)
mode is fully transmitted, the innermost \(e/3\) mode is fully backscattered, and the remaining modes are renormalized to the vicinity of the KFP RG fixed point [11]. The renormalized charge modes become \((2/3+\epsilon_{3})e\) and \(\epsilon_{3}e\), where \(\epsilon_{3}>0\) (KFP region). In each region between a contact and the QPC the remaining modes are renormalized to the vicinity of the WMG RG fixed point [13]. The renormalized charge modes become \((1/3+\epsilon_{1}+\epsilon_{2})e,\ \epsilon_{1}e\) and \(\epsilon_{2}e\), where \(\epsilon_{1}>0,\ \epsilon_{2}>0\) (WMG region). At the RG fixed points we have, respectively, \(\epsilon_{3}=0\) or \(\epsilon_{1}=\epsilon_{2}=0\), and the \(\epsilon_{1}e,\epsilon_{2}e,\epsilon_{3}e\) modes become neutral. Similarly to the 1/2 QPC plateau considered before, we write down an infinite series with contributions (i), (ii), and (iii) to calculate the total charge reaching different contacts. Differently from before, here the piece (iii) contains three types of contributions as a factor, arising from multiple reflections (iiia) among all the contacts, (iiib) between \(S\) and \(D_{1}\), and (iiic) between \(G\) and \(D_{2}\) [31]. Summing up all the contributions to first order in \(\epsilon_{1,2,3}\), the source current becomes \(I\approx\left[2/3+0.55(\epsilon_{1}+\epsilon_{2})\right]e/\tau\), hence \(t\approx\left[5/6+0.5\epsilon_{3}-1.36(\epsilon_{1}+\epsilon_{2})\right]\), and \(G_{D_{1}}\approx\left[5/9+0.33\epsilon_{3}-0.44(\epsilon_{1}+\epsilon_{2})\right]\) [31]. We also find \(F_{1}\approx\left[1.866+6.48\epsilon_{3}-16.41(\epsilon_{1}+\epsilon_{2})\right]\), \(F_{2}\approx\left[1.066+2.56\epsilon_{3}-15.32(\epsilon_{1}+\epsilon_{2})\right]\) and \(F_{c}\approx\left[-0.266-1.04\epsilon_{3}+4.63(\epsilon_{1}+\epsilon_{2})\right]\). For \(\epsilon_{1}=\epsilon_{2}=\epsilon_{3}=0\) we obtain the results at the RG fixed points. _The \(G_{D_{1}}=1/3\) plateau.--_ Earlier experiments have shown the emergence of an intermediate QPC conductance plateau at \(G_{D_{1}}=1/3\) [16]. Here, we provide its theoretical explanation based on either coherent or equilibrated scenarios and show that shot noise can be used to discriminate among those (Table 1). (a) Coherent scenario.-- We consider the renormalized reconstructed MacDonald edge structure (WMG RG fixed point [13]), consisting of \(n_{1},n_{2},e/3\) ("inner") and \(e/3\) ("outer") modes (from bulk to edge), where \(n_{1},n_{2}\) denote the neutral modes (Fig. 3(a)). A plateau is observed at transmission \(t=1/2\), leading to \(G_{D_{1}}=1/3\), when the inner \(e/3\) charge mode is fully backscattered and the outer \(e/3\) charge mode is fully transmitted [16]. At this transmission plateau, it has been shown earlier that the neutral modes can create particle-hole pairs, which stochastically split and reach different contacts, thus creating current fluctuations in \(D_{1}\) and \(D_{2}\); the Fano factors were found to be \(F_{1}=F_{2}=2/3\) [25]. Using the same stochastic variable approach, one finds that \(F_{c}=-2/3\) [31]. (b) Equilibration scenario.-- We consider two possible edge structure combinations, \(\{\nu,\nu_{i}\}=\{2/3,1/3\}\) or \(\{2/3(\text{R}),1/3\}\). Employing the same techniques as for the 1/2 plateau, we find \(G_{D_{1}}=1/3\), leading to \(t=1/2\), for both of these edge structures (Fig. 3(b)); again, the equality is due to full charge equilibration. For no thermal equilibration we have constant Fano factors, while for mixed and full thermal equilibration the Fano factors acquire \(\sqrt{L_{\text{A}}/l_{\text{eq}}^{\text{th}}}\) contributions (Table 1) [31]. Figure 2: A scenario for a 5/9 QPC conductance plateau for the same geometry as in Fig. 1. We show the reconstructed MacDonald edge structure, consisting of counter-propagating \(e/3\) (innermost), \(e\), \(e/3\) and \(e/3\) (outermost) charge modes (from bulk to edge) [13; 37] in a QPC. At the QPC, the outermost \(e/3\) mode is fully transmitted, the innermost \(e/3\) mode is fully backscattered, and the remaining modes are renormalized to counter-propagating \((2/3+\epsilon_{3})e\) and \(\epsilon_{3}e\) charge modes (the KFP RG fixed point refers to \(\epsilon_{3}=0\)) [11]. In each region between a contact and the QPC the remaining modes are renormalized to \((1/3+\epsilon_{1}+\epsilon_{2})e\), counter-propagating to \(\epsilon_{1}e\) and \(\epsilon_{2}e\) charge modes, where \(\epsilon_{1}>0,\ \epsilon_{2}>0\) (the WMG RG fixed point refers to \(\epsilon_{1}=\epsilon_{2}=0\)) [13].
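Returning to the coherent 1/3-plateau picture above: the following toy Monte Carlo (our sketch, not the stochastic-variable calculation of Ref. [25]) illustrates why stochastic splitting of particle-hole pairs forces equal auto-correlations and an anticorrelated cross term, i.e. \(F_{1}=F_{2}=-F_{c}\); the normalisation yielding the value 2/3 follows from the full calculation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 200_000

# Each thermally excited particle-hole pair splits stochastically:
# the particle reaches D1 or D2 with probability 1/2, the hole the other.
particle_to_d1 = rng.random(n_pairs) < 0.5
q1 = np.where(particle_to_d1, +1.0, -1.0)  # charge (units of e) at D1
q2 = -q1                                   # the partner lands in D2

print(q1.var(), q2.var(), np.cov(q1, q2)[0, 1])
# -> ~1.0, ~1.0, ~-1.0: equal auto-correlations and a cross-correlation
#    equal in magnitude but negative, consistent with F1 = F2 = -Fc.
```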
Figure 3: Different scenarios for the 1/3 conductance plateau. Details of the QPC setup are the same as in Fig. 1. (a) The renormalized (at the WMG RG fixed point [13]) reconstructed MacDonald edge structure, consisting of \(n_{1},n_{2},e/3\) (inner), \(e/3\) (outer) modes (from bulk to edge), where \(n_{1},n_{2}\) denote the neutral modes (wiggly red), is shown. At the QPC, the inner \(e/3\) charge mode is fully backscattered and the outer \(e/3\) charge mode is fully transmitted. (b) Edge equilibration model for \(\nu=2/3\) and QPC filling \(\nu_{i}=1/3\). _Summary and outlook.--_ One FQH state may feature different edge modes due to reconstruction. Moreover, the modes can be coherent or equilibrated to varying extent. We show that different models can give rise to the same QPC conductance plateau, but the models can be distinguished based on shot noise. We have established our claim by studying the \(\nu=2/3\) FQH state, and found that different inequalities among the Fano factors hold for different scenarios. Our results include several possible scenarios for the recently observed \(e^{2}/2h\) [14; 15] and the previously observed \(e^{2}/3h\) [16] QPC conductance plateaus in experiments, together with the means to distinguish between them. In addition, we predict a possible \(5e^{2}/9h\) QPC conductance plateau (only in the coherent regime). Our scheme is realizable with present day experimental abilities. The analyses can be extended to other quantum Hall states [49], graphene quantum Hall systems, and edge reconstructed \(\mathbb{Z}_{2}\) topological insulators [50; 51; 52]. Recently, Ref. [53] has also discussed the auto- and cross-correlation noise. We thank Yuval Gefen for many illuminating discussions and collaboration on related works. We also thank Christian Glattli, Kun Yang, Michael J. Manfra, and Udit Khanna for their useful discussions. S.M. was supported by the Weizmann Institute of Science, Israel, Dean's fellowship through the Feinberg Graduate School, as well as the Raymond Beverly Sackler Center for Computational Molecular and Material Science at Tel Aviv University. A.D. was supported by the German-Israeli Foundation Grant No. I-1505-303.10/2019, DFG MI 658/10-2, DFG RO 2247/11-1, DFG EG 96/13-1, and CRC 183 (project C01). A.D. also thanks the Israel Planning and Budgeting Committee (PBC) and the Weizmann Institute of Science, the Dean of Faculty fellowship, and the Koshland Foundation for financial support. M.G.
has been supported by the Israel Science Foundation (ISF) and the Directorate for Defense Research and Development (DDR&D) Grant No. 3427/21, and by the US-Israel Binational Science Foundation (BSF) Grant No. 2020072.
2301.08901
Roughness in Anti Semigroup
In this paper, we present the concepts of the upper and lower approximations of anti-rough subgroups, anti-rough subsemigroups, and homomorphisms of anti-rough anti-semigroups in approximation spaces. The concepts of roughness in finite anti-groups of type (4) are studied. Moreover, some properties of approximations and these algebraic structures are introduced. In addition, we give the definition of a homomorphism of anti-groups.
Faraj A. Abdunabi, Ahmed Shletiet, Najah A. Bosaif
2023-01-21T05:57:04Z
http://arxiv.org/abs/2301.08901v1
# Roughness in Anti Semigroup ###### Abstract In this paper, we present the concepts of the upper and lower approximations of anti-rough subgroups, anti-rough subsemigroups, and homomorphisms of anti-rough anti-semigroups in approximation spaces. The concepts of roughness in finite anti-groups of type (4) are studied. Moreover, some properties of approximations and these algebraic structures are introduced. In addition, we give the definition of a homomorphism of anti-groups. keywords: upper approximation; lower approximation; anti-semigroups; homomorphisms; anti-groups
2302.11900
On the Observability of Recurrent Nova Super-Remnants
The nova super-remnant (NSR) surrounding M31N 2008-12a (12a), the annually erupting recurrent nova (RN), is the only known example of this phenomenon. As this structure has grown as a result of frequent eruptions from 12a, we might expect to see NSRs around other RNe; this would confirm the RN--NSR association and strengthen the connection between novae and type Ia supernovae (SN Ia) as NSRs centered on SN Ia provide a lasting, unequivocal signpost to the single degenerate progenitor type of that explosion. The only previous NSR simulation used identical eruptions from a static white dwarf (WD). In this Paper, we simulate the growth of NSRs alongside the natural growth/erosion of the central WD, within a range of environments, accretion rates, WD temperatures, and initial WD masses. The subsequent evolving eruptions create dynamic NSRs tens of parsecs in radius comprising a low-density cavity, bordered by a hot ejecta pile-up region, and surrounded by a cool high-density, thin, shell. Higher density environments restrict NSR size, as do higher accretion rates, whereas the WD temperature and initial mass have less impact. NSRs form around growing or eroding WDs, indicating that NSRs also exist around old novae with low-mass WDs. Observables such as X-ray and H$\alpha$ emission from the modelled NSRs are derived to aid searches for more examples; only NSRs around high accretion rate novae will currently be observable. The observed properties of the 12a NSR can be reproduced when considering both the dynamically grown NSR and photoionisation by the nova system.
M. W. Healy-Kalesh, M. J. Darnley, E. J. Harvey, C. M. Copperwheat, P. A. James, T. Andersson, M. Henze, T. J. O'Brien
2023-02-23T10:12:35Z
http://arxiv.org/abs/2302.11900v1
# On the Observability of Recurrent Nova Super-Remnants ###### Abstract The nova super-remnant (NSR) surrounding M31N 2008-12a (12a), the annually erupting recurrent nova (RN), is the only known example of this phenomenon. As this structure has grown as a result of frequent eruptions from 12a, we might expect to see NSRs around other RNe; this would confirm the RN-NSR association and strengthen the connection between novae and type Ia supernovae (SN Ia) as NSRs centered on SN Ia provide a lasting, unequivocal signpost to the single degenerate progenitor type of that explosion. The only previous NSR simulation used identical eruptions from a static white dwarf (WD). In this Paper, we simulate the growth of NSRs alongside the natural growth/erosion of the central WD, within a range of environments, accretion rates, WD temperatures, and initial WD masses. The subsequent evolving eruptions create dynamic NSRs tens of parsecs in radius comprising a low-density cavity, bordered by a hot ejecta pile-up region, and surrounded by a cool high-density, thin, shell. Higher density environments restrict NSR size, as do higher accretion rates, whereas the WD temperature and initial mass have less impact. NSRs form around growing or eroding WDs, indicating that NSRs also exist around old novae with low-mass WDs. Observables such as X-ray and H\(\alpha\) emission from the modelled NSRs are derived to aid searches for more examples; only NSRs around high accretion rate novae will currently be observable. The observed properties of the 12a NSR can be reproduced when considering both the dynamically grown NSR and photoionisation by the nova system. keywords: hydrodynamics - novae, cataclysmic variables ## 1 Introduction Recurrent novae (RNe) are a subclass of the cataclysmic variables that experience repeated thermonuclear eruptions on timescales of a human lifetime. Like classical novae (CNe) - systems observed in eruption just once - RNe are interacting binary systems (Walker, 1954; Warner, 1995) containing a white dwarf (WD) and a main-sequence, subgiant, or red giant donor (Darnley et al., 2012). Hydrogen-rich material is expelled from the outer layers of the donor through stellar winds or Roche lobe overflow, following which it accumulates on the surface of the WD, usually via an accretion disc. At the base of the accreted layer, compression and heating continually increase until the critical pressure for a thermonuclear runaway (TNR; Starrfield et al., 1972, 1976, 2020) is reached. Once degeneracy is lifted, the accreted envelope is driven upwards by radiation pressure and expands violently, with material travelling faster than the escape velocity of the WD being ejected into the surrounding environment as the nova eruption (see, for example, Starrfield et al., 1976, 2020). Mass accretion then continues after (and possibly during; Kato et al., 2017; Henze et al., 2018) the eruption, leading to successive RN eruptions, separated by a recurrence period (\(P_{\rm rec}\)) which can vary.
Novae with carbon-oxygen WDs present a compelling single degenerate (SD) pathway to type Ia supernovae (SN Ia; Whelan & Iben, 1973; Hachisu et al., 1999). Observations of nova shells can inform us of the underlying configuration of the binary. In particular, nova shells are structured with an equatorial waist and polar cones of emission (Hutchings, 1972). This structure forms from the originally near-spherically symmetrical nova ejecta interacting with the material in the orbital plane lost by the donor (see, e.g., Mohamed et al., 2013). Polar blobs, equatorial (and/or tropical) rings as well as knots are common to almost all nova shells; see, for example, DQ Her (Williams et al., 1978), HR Del (Harman and O'Brien, 2003), DO Aql and V4362 Sgr (Harvey et al., 2020) as well as V5668 Sgr (Takeda et al., 2022). In addition, due to the repeating nature of RNe, we have an example of interacting ejecta from successive eruptions producing clumping and shock heating around the RN T Pyxidis (Shara et al., 1997; Toraskar et al., 2013). Even though the accretion disk surrounding the WD can be altered (Henze et al., 2018) to the point of removal in many cases (Drake and Orlando, 2010; Figueira et al., 2018), it will re-establish after the nova outburst (Worters et al., 2007) in preparation for future eruptions. Consequently, all nova systems are predicted to experience repeated outbursts with substantial variation in recurrence period between systems (Y05). Yet, only the recurrence periods for the _known_ RNe, all contained within the Galaxy (10; Schaefer, 2010; Darnley, 2021), the Large Magellanic Cloud (4) and M31 (19; Darnley and Henze, 2020), have been determined, ranging from 98 years (Pagnotta et al., 2009) to 1 year (Henze et al., 2015, 2018; Darnley and Henze, 2020). Such short inter-eruption intervals are powered by a combination of a massive WD and a high mass accretion rate (Starrfield et al., 1988). The most rapidly recurring nova known is M31N 2008-12a, or simply '12a' (see, e.g., Darnley et al., 2016; Henze et al., 2018; Darnley and Henze, 2020; Darnley, 2021, and references therein).
This extreme example erupts annually (\(\overline{P}_{\rm rec}=0.99\pm 0.02\) years; Darnley and Henze, 2020) and has the most massive WD known (\(\simeq 1.38\) M\({}_{\odot}\); Kato et al., 2015), likely CO in composition (Darnley et al., 2017), accreting with a substantial mass accretion rate of \((0.6\leq\dot{M}\leq 1.4)\times 10^{-6}\) M\({}_{\odot}\) yr\({}^{-1}\) from a red giant (or clump) companion (Darnley et al., 2014, 2017). First associated with 12a by Darnley et al. (2015), this RN is surrounded by a vastly extended nebulosity. Compared to some of the largest Galactic CN shells known, such as GK Persei (\(\sim\)0.5 pc; Bode et al., 2004; Harvey et al., 2016), Z Camelopardalis (\(\sim\)0.7 pc; Shara et al., 2007) and AT Cancri (0.2 pc; Shara et al., 2012), 12a's shell has semi-major and -minor axes of 67 and 45 pc, respectively, justifying a nova super-remnant (NSR; Darnley et al., 2019, hereafter DHO19) status. DHO19 ruled out the possibility of the shell being a SN remnant, a superbubble or a fossil H ii region with H\(\alpha\)+[N ii] imaging and deep low-resolution spectroscopy. Instead, the NSR's existence was attributed to the cumulative sweeping up of \(\sim\)10\({}^{5-6}\) M\({}_{\odot}\) (DHO19) of local interstellar medium (ISM) from many previous nova eruptions. To test the viability of a RN origin for 12a's NSR, DHO19 utilised Morpheus (Vaytet et al., 2007) to perform a 1D hydrodynamical simulation of 10\({}^{5}\) 12a eruptions. Each of these eruptions ejected 5\(\times\)10\({}^{-8}\) M\({}_{\odot}\) at a terminal velocity of 3000 km s\({}^{-1}\) over seven days, repeating every 350 days (DHO19). We assign the DHO19 simulation as Run 0 and it will be used as a comparison throughout this work. Self- and ISM-interaction of the ejecta from each Run 0 eruption formed a huge cavity surrounded by an expanding shell with a relative thickness of 22%. An unavoidable consequence of continual eruptions from a central system is the formation of a dynamical structure, be that a nova shell or larger remnant. However, the existence of a dynamical NSR does not necessarily signify a NSR that is observable. Nevertheless, the simulated dynamic remnant of Run 0 was found to be consistent with observations of the 12a NSR (DHO19). A unique feature of a structure formed from repeatedly interacting eruptions is a continuously shock-heated region located inside the outer shell (DHO19). Extrapolating the growth rate from these simulations to the observed size of the super-remnant, DHO19 suggested an age of \(6\times 10^{6}\) yrs. Importantly, the mechanism driving the NSR formation is also growing the 12a CO WD, which Darnley et al. (2017) predict will surpass the Chandrasekhar limit and explode as a SN Ia in \(<\) 20,000 years. In this paper, we build upon the NSR hydrodynamic modelling presented by DHO19 through consideration of the complete eruption history of a nova system as the WD mass grows from its formation toward the Chandrasekhar mass. We also explore a number of factors, both intrinsic and extrinsic to the nova system, that might impact NSR formation, to aid the search for more NSRs. This will be the first attempt to determine if the NSR associated with 12a is unique or whether it is simply the first of these phenomena to be found. In Section 2 we describe the eruption model used to generate input parameters. We describe the Morpheus hydrodynamic code employed in this paper in Section 3 before outlining each of the separate runs of our main simulations.
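As a quick consistency check on the Run 0 parameters quoted above (our arithmetic, not a calculation from DHO19), the total ejected mass and kinetic energy of the 10\({}^{5}\) identical eruptions follow directly:

```python
M_SUN = 1.989e33       # g
KM_S  = 1e5            # cm s^-1

m_ej   = 5e-8 * M_SUN  # ejected mass per Run 0 eruption
v_ej   = 3000 * KM_S   # terminal ejecta velocity
n_erup = 100_000       # eruptions in Run 0

ke_one = 0.5 * m_ej * v_ej**2
print(f"KE per eruption : {ke_one:.2e} erg")           # ~4.5e42 erg
print(f"Total KE        : {n_erup * ke_one:.2e} erg")  # ~4.5e47 erg
print(f"Total ejected   : {n_erup * 5e-8:.1e} Msun")   # 5e-3 Msun
```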
Various tests conducted after the main simulations are presented in Section 4. We explore the observability of NSRs in Section 5 by modelling emission from the simulations and then compare our simulations to observations of the 12a NSR in Section 6, before concluding our paper in Section 7. ## 2 Generating Nova Ejecta Properties The DHO19 simulations of the 12a NSR utilised 10\({}^{5}\) identical eruptions with a fixed recurrence period. While a good approximation for this system during its recent evolution, identical eruptions do not match the expected long term evolution of such a system, whereby the characteristics of the ejecta evolve with the changing WD mass. Therefore, to obtain the properties of a nova system with incrementally changing nova eruptions, we were required to grow a WD (see Section 2.2). We will only describe the model we used to grow the WD for a 'reference simulation' as an illustration; however, this model was utilised for each of the different WD temperatures and accretion rates. As a reference simulation corresponding to the 12a system, we chose to grow a \(10^{7}\) K WD with \(\dot{M}=10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) (see Section 2.1 for details), which we then placed within an environment with a hydrogen-only ISM density of \(1.67\times 10^{-24}\) g cm\({}^{-3}\) (1 H atom per cubic centimetre). We refer to this ISM density throughout the paper by the number density \(n=1\) cm\({}^{-3}\) (but drop the units for clarity). ### Parameter space Y05 provides a parameter space for the characteristics of a nova envelope and the outburst characteristics for an extended grid of nova models with varying WD mass, temperature and accretion rate. This grid runs through all permutations of these parameters and outputs various eruption characteristics such as the mass accreted onto the WD which ignites during the TNR (\(m_{\rm acc}\)), the mass ejected from the WD during the nova eruption (\(m_{\rm ej}\)) and the duration of the mass-loss phase (\(t_{\rm ml}\)), i.e., the timescale of each eruption. For this study, we use values of \(m_{\rm acc}\), which we equate to the ignition mass (\(m_{\rm ig}\)), \(m_{\rm ej}\), and \(t_{\rm ml}\) for WDs with masses 0.65, 1.0, 1.25 and 1.4 M\({}_{\odot}\)\({}^{1}\), three temperatures of 10 MK, 30 MK and 50 MK, and three accretion rates of \(10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\), \(10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\) and \(10^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\) (we consider three temperatures for \(\dot{M}=10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) but only 10 MK for the other accretion rate values). To interpolate and extrapolate these points for a continuous set of values for our WD growth model, we required a function that evolved smoothly, behaved as a power law for lower masses, yet which became asymptotic as the Chandrasekhar mass was approached (see Section 4.1 for an alternative approach). The functions we fit to \(m_{\rm ig}\) and \(t_{\rm ml}\) are shown in Figure 1, as well as the continuous function for \(P_{\rm rec}\) (the ratio of \(m_{\rm ig}\) and accretion rate). As we also wish to be consistent with observed characteristics of the nova eruption, we utilised observationally determined relations from Warner (1995) and Henze et al. (2014) to determine a function for the terminal ejecta velocity of the outburst. Footnote 1: Here, \(m_{\rm ig}\) and \(t_{\rm ml}\) are functions of the WD mass, temperature and accretion rate \(\dot{M}\), as defined in Section 2.1.
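The exact functional forms fitted to the Y05 grid are not reproduced here, but the following sketch shows one form with the required limiting behaviour (power law at low WD mass, vanishing smoothly as \(M_{\rm WD}\to M_{\rm Ch}\)); the coefficients `a` and `b` are illustrative placeholders, not the authors' fitted values.

```python
M_CH = 1.4  # WD mass upper limit adopted in this work (Msun)

def m_ig(m_wd, a=2e-4, b=2.0):
    """Illustrative ignition-mass interpolation: decays smoothly and
    vanishes as the WD approaches M_Ch (placeholder coefficients)."""
    return a * (M_CH - m_wd) ** b

def p_rec(m_wd, mdot=1e-7):
    """Recurrence period (yr): ignition mass over accretion rate."""
    return m_ig(m_wd) / mdot

for m in (1.0, 1.25, 1.38):
    print(f"M_WD={m}: m_ig={m_ig(m):.2e} Msun, P_rec={p_rec(m):.2f} yr")
```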
### Growing a white dwarf We grew a 1 M\({}_{\odot}\) WD to a M\({}_{\rm Ch}\) WD by accumulating the retained mass from iterated nova eruptions and using the interpolated relationships given in Section 2.1 to obtain properties of each eruption. For this example, a 1 M\({}_{\odot}\) WD with a temperature of 10 MK experiences approximately 1,900,000 eruptions while growing from 1 M\({}_{\odot}\) to 1.4 M\({}_{\odot}\), reaching a recurrence period lower limit of \(\sim\)282 days. This WD mass upper limit of 1.4 M\({}_{\odot}\) is assumed for all WD scenarios, which we equate to the Chandrasekhar mass (M\({}_{\rm Ch}\)) for this study. A WD is grown (or eroded) according to the amount of accreted material retained (or removed) between eruptions. To model the evolution of the mass accumulation efficiency (\(\eta\)) over the evolution of a WD, we utilised the values of \(m_{\rm ig}\) and \(m_{\rm ej}\) from Y05 such that \(\eta=(m_{\rm ig}-m_{\rm ej})/m_{\rm ig}\) and interpolated between these points for a continuous set of values (see top right panel of Figure 1). The changing mass of the WD can thus be described as: \[M_{\rm WD,\,i+1}=M_{\rm WD,\,i}+\left(m_{\rm ig,\,i}\times\eta_{i}\right), \tag{1}\] where \(M_{\rm WD,\,i}\) is the pre-eruption mass of the WD, \(m_{\rm ig,\,i}\) is the mass accreted by the WD before the eruption, \(\eta_{i}\) is the evolving mass accumulation efficiency, and \(M_{\rm WD,\,i+1}\) is the post-eruption mass of the WD. With the initial WD mass being 1 M\({}_{\odot}\), we utilised the relationships found in Section 2.1 to give the associated \(m_{\rm ig}\) value for equation 1. The post-eruption mass was then used as the \(M_{\rm WD}\) value in the next iteration and we continued this until we reached the limiting mass stated previously. We used the output parameters from this iterative model in our simulations. With each iteration, we were also able to use the relationships found in Section 2.1 to illustrate the evolution of a number of parameters, including ejecta kinetic energy and momentum, in terms of WD mass, recurrence period, elapsed time (from the first eruption), and the number of eruptions. Utilising the WD growth model, we generated nova ejecta with incrementally changing properties. As the mass of the WD increases, eruptions become more frequent, and ejecta become less massive but with higher velocity in response to the increasing WD surface gravity. ## 3 Hydrodynamical simulations As the net mass loss rate from the WD varies as the WD mass grows, an analytic relation for the growth of the NSR shell cannot be derived. As such, full hydrodynamic simulations are a necessity if we are to understand the evolution of NSRs and their emission characteristics. As in DHO19, the hydrodynamical simulations in this work were performed with Morpheus (Vaytet et al., 2007) - developed by the Nova Groups from the University of Manchester and Liverpool John Moores University. Morpheus brings together one-dimensional (Asphere; see Vaytet et al., 2007), two-dimensional (Novarot; see Lloyd et al., 1997) and three-dimensional (CubeMPI; see Wareing et al., 2006) codes to form an MPI-OpenMP Eulerian second-order Godunov simulation code that functions with Cartesian, spherical or cylindrical coordinates, and includes radiative cooling and gravity.
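A minimal sketch of the iteration in equation (1), assuming placeholder stand-ins for the continuous Y05 interpolations (the functional forms and coefficients below are our illustrative assumptions, not the fitted relations used in the paper):

```python
M_CH = 1.4  # Msun

def m_ig(m_wd, a=2e-4, b=2.0):
    # placeholder ignition-mass interpolation (see previous sketch)
    return a * (M_CH - m_wd) ** b

def eta(m_wd):
    # placeholder mass-accumulation efficiency, rising with WD mass
    return 0.2 + 0.4 * (m_wd - 1.0) / (M_CH - 1.0)

def grow_wd(m_wd=1.0, mdot=1e-7, m_stop=1.39):
    """Iterate equation (1): M_{i+1} = M_i + m_ig,i * eta_i, accumulating
    the elapsed time via P_rec = m_ig / Mdot, until near the 1.4 Msun cap."""
    n_eruptions, elapsed = 0, 0.0
    while m_wd < m_stop:
        mig = m_ig(m_wd)
        m_wd += mig * eta(m_wd)   # equation (1)
        elapsed += mig / mdot     # recurrence period of this cycle
        n_eruptions += 1
    return n_eruptions, elapsed

n, t = grow_wd()
print(f"{n} eruptions over {t:.2e} yr")
```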
The nova systems in this work are configured in the same manner as in DHO19, such that the mass donor is a red giant exhibiting a continuous wind mass loss rate (after accretion) of \(2.6\times 10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\) with a terminal velocity of 20 km s\({}^{-1}\). These values are assumed to be consistent with the donor in the RS Ophiuchi system (Bode & Kahn, 1985), and thus are used as representative values, with the red giant wind having negligible influence on the NSR evolution. The nova eruption is represented by an instantaneous increase in mass loss and ejecta velocity (the red giant wind's contribution becomes negligible here) followed by a quiescent period in which only the red giant wind (with decreased mass loss and lower ejecta velocity) is present. Furthermore, unless otherwise stated, each ejection is modelled as a wind with a mass-loss rate and velocity that incrementally increase throughout the simulation, as governed by the relationships determined from the Y05 models (see Section 2 for details and Figure 1). The eruptions are separated by incrementally decreasing recurrence periods, also governed by the aforementioned relationships. True nova ejecta are not spherically symmetric; however, largely for computational reasons, we have assumed one-dimensional spherical symmetry for these simulations, effectively modelling the bulk equatorial ejecta (see, e.g., Mohamed et al., 2013). The spatial resolution of the full simulations (\(\geq\)200 AU/cell) is larger than the expected orbital separation of the WD and the donor (for example, the orbital separation for 12a is \(\sim\)1.6 AU; Henze et al., 2018) so we assume that both are located at the origin. Therefore, interaction between the ejecta and the donor or accretion disk is ignored. Ideally, we would want to run each complete simulation at a high spatial resolution; however, this is not feasible given temporal and computing constraints. Running the reference simulation (see Section 3.2) several times with varying spatial resolution (and varying number of eruptions), we found that running its full 1,900,750 eruptions at 200 AU/cell would yield the same long-term structure as a simulation with a resolution of 1 AU/cell (the resolution of a test run with 100 eruptions). Consequently, we set a spatial resolution of 200 AU/cell for most of our simulations, while those with lower spatial resolution (as indicated in Table 1) are set in response to the infrequency of eruptions, and therefore lessened impact on resolving the gross NSR structure, within those particular runs. ### Incorporating radiative cooling Nova ejecta lose energy through radiative cooling, which affects the evolution of any NSR. Therefore, the effects of cooling were tested in DHO19, with a NSR grown from \(10^{3}\) eruptions with the inclusion of the radiative cooling module in Morpheus. The cooling model utilised in Morpheus was taken from Raymond et al. (1976, their Figure 1). The cooling rate is given as a function of gas temperature of an optically thin plasma, with no dust or molecules, made up of H, He, C, N, O, Ne, Mg, Si, S, Ca, Fe and Ni. Radiative cooling becomes ineffective below a temperature of \(10^{4}\) K. Above \(10^{8}\) K, the gas is ionised and only radiates through free-free Bremsstrahlung (Vaytet et al., 2007). Between these limits, cooling is dominated by line-cooling from the metals within the gas (Vaytet, 2009).
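For orientation, here is a schematic cooling law with the limiting behaviour just described: no radiative cooling below \(10^{4}\) K, line cooling at intermediate temperatures, and a free-free \(\propto\sqrt{T}\) scaling above \(10^{8}\) K. This is our sketch; the normalisations are illustrative and are not the Raymond et al. (1976) tabulation actually used by Morpheus.

```python
import numpy as np

def cooling_rate(temps):
    """Schematic optically thin cooling function Lambda(T) in erg cm^3 s^-1.
    Illustrative normalisations only (not the Raymond et al. 1976 table)."""
    temps = np.asarray(temps, dtype=float)
    lam = np.zeros_like(temps)                      # no cooling below 1e4 K
    line = (temps >= 1e4) & (temps < 1e8)           # metal line cooling
    lam[line] = 1e-22 * (temps[line] / 1e5) ** -0.7
    ff = temps >= 1e8                               # free-free bremsstrahlung
    lam[ff] = 2.3e-27 * np.sqrt(temps[ff])
    return lam

print(cooling_rate([1e3, 1e5, 1e6, 1e9]))  # first entry is zero
```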
DHO19 demonstrated that there was no significant difference between the Run 0 NSR structure with or without cooling (see their Extended Data Figure 4). Cooling was suppressed in the Run 0 NSR as the recurrence period was much shorter than the cooling timescale. Hence, radiative cooling in the full simulation of Run 0 was not included. In all cases, the NSR evolution presented in this work begins with high-mass and low-velocity ejecta (see Section 2), leading to less energetic eruptions and, crucially, with long gaps between consecutive eruptions. Therefore, at early times, the recurrence period will be longer than the cooling timescale and, as such, we incorporate radiative cooling in all simulations. ### Reference simulation -- Run 1 Our reference simulation, Run 1, models nova eruptions from a growing WD with a temperature \(T_{\rm WD}=10^{7}\) K, with \(\dot{M}=10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\), and within a low density ISM (\(n=1\)). With the varying mass accumulation efficiency, it would take \(\sim\)31 Myr (1,900,750 eruptions) for this WD to grow from 1 M\({}_{\odot}\) to M\({}_{\rm Ch}\). Run 1 has a spatial resolution of 200 AU. This information, including the total kinetic energy released, is summarised in Table 1 for all simulations in this paper. Run 1 is presented in Figure 2: the left-hand plot shows the density, pressure, velocity and temperature characteristics of the NSR after all \(\sim\)1,900,000 eruptions; the right-hand plot shows the evolution of the NSR shell outer edge and the inner edge, and the inner edge of the ejecta pile-up boundary (regions of the NSR are outlined in the top left panel of Figure 2). In the top-left panel of the left-hand plot of Figure 2, we see that the inner and outer edges of the dynamical NSR shell extend to \(\sim\)70.5 and \(\sim\)71.3 pc, respectively - a shell thickness of 1.1%. As can be seen in the right-hand plot of Figure 2, shell thickness varies over the NSR evolution. For example, the shell compresses from 2.72% (\(P_{\rm rec}=50\) years) to 1.14% (\(P_{\rm rec}=1\) year) to 1.10% (\(P_{\rm rec}=282\) days). At all times, this is much thinner than the 12a NSR shell (DHO19), which is 22% from observations and remained at this thickness throughout Run 0 (see Figure 3). The shell thickness evolution during Run 1 is directly related to energy losses via cooling and to the evolution of eruption properties, whereby the increasing frequency and kinetic energy of the ejecta drive a compression through the NSR shell. In Run 1, the higher density found at the NSR shell inner edge (\(n\approx 160\)) compared to the outer edge (\(n\approx 3\)), seen in the top panel of Figure 3, is attributed to the contribution from the more recent, more frequent and more energetic eruptions - the rate of change of eruption properties surpasses the dynamic time-scale of the NSR shell at later times. The rate of propagation of the NSR shell into the surrounding ISM, and therefore the outer edge of the shell, remains largely based upon the combined properties of the entire eruption history, whereas the inner edge is shaped by newly arriving material. Figure 1: Top left: Ignition mass (\(m_{\rm ig}\)) as a function of WD mass (\(M_{\rm WD}\)) derived from fitting to the output characteristics for \(m_{\rm acc}\) (circles, squares and stars) from Y05.
Top middle: Recurrence period (\(P_{\rm rec}\)) as a function of WD mass found by dividing the ignition mass in the \(m_{\rm ig}-M_{\rm WD}\) relation (top left panel) by \(\dot{M}=10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\). Top right: Mass accumulation efficiency (\(\eta\)) as a function of WD mass derived from fitting to the output characteristics for \((m_{\rm ig}-m_{\rm ej})/m_{\rm ig}\) (circles, squares and stars) from Y05. We set \(\eta=1\) for all \(\dot{M}=10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) points at \(M_{\rm WD}=0.65\) M\({}_{\odot}\) as Y05 indicated that there were no eruptions (no mass ejected) for these models. Bottom left: Mass-loss phase duration (\(t_{\rm ml}\)) as a function of WD mass derived from fitting to the output characteristics for \(t_{\rm ml}\) (circles, squares and stars) from Y05. Bottom middle: Terminal ejecta velocity (\(v_{\rm ej}\)) as a function of WD mass derived by applying relations presented in Warner (1995) and Henze et al. (2014) to the \(t_{\rm ml}-M_{\rm WD}\) relation (bottom left panel). Purple lines indicate broken exponential (or linear for \(\eta\)) fits to the data as described in Section 4.1. \begin{table} \begin{tabular}{c c c c c c c c c} \hline Run \# & M\({}_{\rm WD}\) & \(T_{\rm WD}\) & \(\dot{M}\) & ISM density & Spatial resolution & Number & Cumulative time & Total kinetic energy \\ & (M\({}_{\odot}\)) & (K) & (M\({}_{\odot}\) yr\({}^{-1}\)) & (\(1.67\times 10^{-24}\) g cm\({}^{-3}\)) & (AU/cell) & of eruptions & (years) & (erg) \\ \hline 0 & n/a & n/a & \(1.6\times 10^{-7}\) & 1 & 4 & 100,000 & \(1.0\times 10^{5}\) & \(4.5\times 10^{47}\) \\ \hline \end{tabular} \end{table} Table 1: Parameters for each run. Columns record the simulation number, initial WD mass, WD temperature, accretion rate, ISM density, spatial resolution, number of eruptions to grow the WD to M\({}_{\rm Ch}\) or for the simulation to reach the temporal upper limit of \(10^{8}\) years, the cumulative time of the simulation, and the total kinetic energy released. Run 0 relates to the \(10^{5}\) identical eruptions as modelled by DHO19. Ejecta characteristics for Run 1\({}^{\dagger}\) used a broken exponential/linear interpolation (see Section 4.1). Runs 1\({}^{\star}\), 2\({}^{\star}\), 5\({}^{\star}\) and 7\({}^{\star}\) have the same ejecta characteristics as Runs 1, 2, 5 and 7, respectively, but do not include radiative cooling. Run 22 contains the same nova system as Run 1 but tuned with an ISM density of \(n=1.278\) to match the ISM predicted in Section 6.3 for the reference simulation WD to grow a NSR to the size (67 pc) of the observed NSR around M31N 2008-12a. Figure 2: Left: The dynamics of Run 1 (with radiative cooling; black). As evident in the bottom left panel of the left-hand plot of Figure 2, the velocity of material in the inner cavity is high (\(\sim\)\(6.7\times 10^{3}\) km s\({}^{-1}\)) as it is essentially in free expansion. The velocity then drops substantially as the ejecta pile-up region is encountered, with the resultant shock-heating increasing temperatures by five orders of magnitude (see bottom right panel of the left-hand plot). The velocity and temperature in the ejecta pile-up region decline continuously out to the NSR shell as the ejecta encounter previously ejected material and reverse shocks (from the pile-up/inner shell boundary), with the cool outer edge expanding at a relatively low \(\sim\)1 km s\({}^{-1}\).
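The shell-thickness percentages quoted throughout can be recovered straightforwardly from the simulated density profiles; the sketch below (our own, with an illustrative overdensity threshold rather than the paper's criterion) measures the relative thickness from the shell's inner and outer edges and reproduces the \(\sim\)1.1% value for the Run 1 final epoch.

```python
import numpy as np

def shell_thickness(r, rho, rho_ism, threshold=1.5):
    """Locate the outermost region overdense relative to the ISM and return
    (r_inner, r_outer, relative thickness). Threshold is illustrative."""
    idx = np.where(rho > threshold * rho_ism)[0]
    r_in, r_out = r[idx[0]], r[idx[-1]]
    return r_in, r_out, (r_out - r_in) / r_out

# Synthetic profile with a thin dense shell between 70.5 and 71.3 pc:
r = np.linspace(0.0, 80.0, 8001)               # pc, 0.01 pc steps
rho = np.full_like(r, 1.0)                     # ISM, n = 1
rho[(r >= 70.5) & (r <= 71.3)] = 100.0         # dense shell
print(shell_thickness(r, rho, rho_ism=1.0))    # thickness ~0.011, i.e. ~1.1%
```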
Figure 2 provides a comparison between Run 1 and Run 1\({}^{\star}\) (with and without radiative cooling, respectively), to illustrate the significant difference in the NSR size and shell structure. The outer edge of the NSR in Run 1 extends to 71.3 pc yet, without radiative cooling in Run 1\({}^{\star}\), the NSR extends to \(\sim\)90 pc (having swept up around twice as much ISM). This substantial reduction in size can only be attributed to radiative losses within the NSR. Additionally, the radiatively cooled NSR shell from Run 1 is much thinner (\(\sim\)1%) than the uncooled equivalent in Run 1\({}^{\star}\) (\(\sim\)21%; see Figure 2). This results from the material in the early NSR shell losing energy via radiative cooling and therefore lacking the necessary pressure to maintain its size. This suppresses the early NSR shell formation such that, when shell compression takes effect at later times (as increasingly energetic ejecta collide with the inner edge of the shell), the starting point is a thinner shell. The NSR cavity and ejecta pile-up boundary at \(\sim\)10 pc have similar density, pressure, velocity and temperature in Run 1 and Run 1\({}^{\star}\). At later stages, the increased frequency and energy of the eruptions results in a scenario that tends toward the Run 0 regime, whereby there is not enough time for the ejecta or remnant to cool radiatively between consecutive eruptions. Consequently, we see the effects of radiative cooling at the outer edge of the remnant, a relic of the earlier spaced-out, less energetic eruptions, and the centre of the NSR reflecting the later frequent eruptions. Furthermore, this point can be extended to all of the simulations conducted throughout this paper, whereby the growth and subsequent size of the nova super-remnant is shaped heavily by its early evolution. So far, we have only considered the final epoch of Run 1, after the full 1,900,750 eruptions (Figure 2). However, to appreciate the changing structure and characteristics of the NSR, we have provided an animation of Run 1 in Figure 4. We illustrate in Figure 5 the spatiotemporal analysis of the evolution of the Run 1 NSR in terms of density, pressure, velocity and temperature. The NSR shell in Figure 5 can be identified most clearly in the top left panel as the narrowing light green segment running from bottom left (at \(\sim\)0.25 parsec) to the top right. In addition, the boundary of the ejecta pile-up, separating the cavity and the ejecta pile-up region, can be seen as the other apparent line left of the remnant shell, running from the bottom left to the top centre of the panel (this boundary can be seen most clearly in the bottom right panel showing temperature evolution). This radial evolution of the shell and ejecta pile-up boundary directly replicates those seen in the right-hand plot of Figure 2; however, here we show how each parameter changes over the full simulation. The average density of the early NSR shell is approximately \(n\simeq 6\) for the first \(10^{6}\) years of growth (see top left panel of Figure 5). Beyond this epoch, we see the effect of radiative cooling as the NSR shell loses energy and is compressed by the surrounding ISM and incoming eruptions, thereby leading to an increase in the average density within the shell to \(n\simeq 36\) after \(\sim\)\(3\times 10^{7}\) years. Figure 4: Animated evolution of density, pressure, velocity, and temperature for Run 1. Figure 3: NSR shell thickness evolution comparison between Run 1 (top) and Run 0 (bottom). Percentages indicate progress through each simulation, with the recurrence period given for Run 1; for Run 0, \(P_{\rm rec}\simeq 1\) year throughout. Radii are normalised to the outer edge of the NSR at each epoch; density is normalised to the ISM. Note that the range of radial size is different in each panel. The average density within the ejecta pile-up region is much lower than the surrounding ISM and continuously decreases throughout the evolution, dropping as low as \(n=2.4\times 10^{-4}\) by the final epoch. After \(10^{6}\) years the mass of the shell is \(\sim\)50 M\({}_{\odot}\), but it then substantially increases to \(4\times 10^{3}\) M\({}_{\odot}\) after \(10^{7}\) years, ending with a mass of \(\sim\)\(4\times 10^{4}\) M\({}_{\odot}\) by the final epoch (\(\sim\)\(3\times 10^{7}\) years). This is consistent with the upper limiting shell masses derived from imaging and spectroscopy of the 12a
The average density within the ejecta pile-up region is much lower than the surrounding ISM and continuously decreases throughout the evolution, dropping as low as \(n=2.4\times 10^{-4}\) by the final epoch. After \(10^{6}\) years the mass of the shell is \(\sim\)50 M\({}_{\odot}\) but then substantially increases to \(4\times 10^{3}\) M\({}_{\odot}\) after \(10^{7}\) years and ending with a mass of \(\sim\)\(4\times 10^{4}\) M\({}_{\odot}\) by the final epoch (\(\sim\)\(3\times 10^{7}\) years). This is consistent with the upper limiting shell masses derived from imaging and spectroscopy of the 12a Figure 4: Animated evolution of density, pressure, velocity, and temperature for Run 1. Figure 3: NSR shell thickness evolution comparison between Run 1 (top) and Run 0 (bottom). Percentages indicate progress through each simulation, with the recurrence period given for Run 1; for Run 0 \(P_{\rm rec}=1\) throughout. Radii are normalised to the outer edge of the NSR at each epoch, density is normalised to the ISM. Note the range of radial size is different in each panel. NSR (\(7\times 10^{5}\) M\({}_{\odot}\) and \(10^{6}\) M\({}_{\odot}\) from assuming oblate and prolate geometries, respectively; DHO19). As shown in the top right panel of Figure 5, the average pressure within the NSR shell is initially high as this thin high density region initially forms at high temperature. The pressure within the shell decreases until it matches the average pressure within the pile-up region after \(\sim\!2\times 10^{7}\) years. The outer edge of the shell remains at the same pressure for the remainder of the simulation. However, the pressure at the inner edge increases, creating a pressure gradient within the shell. With the average temperature of the ejecta pile-up region increasing monotonically throughout its evolution (see the bottom right panel of Figure 5), the pressure within follows the same trend once that region's size is established. The average pressure evolution illustrates how the NSR shell compression takes place during an intermediary period. The shell forms initially without compression, is then compressed as it is subjected to pressure gradients and after \(\sim\!2\times 10^{7}\) years, the thinner shell remains. The average temperature of the Run 1 NSR shell falls as a direct result of cooling due to expansion and radiative losses, dropping from an initial \(5\times 10^{3}\) K to 40 K after \(\sim\!\!2.8\times 10^{7}\) years before increasing modestly to 90 K as later eruptions become more frequent and begin to impact the inner edge of the shell through the pile-up region, leading to compression and re-heating (see bottom right panel of Figure 5). On the other hand, the pile-up region begins with higher temperatures of \(\sim\!\!1\times 10^{6}\) K and continues to experience this temperature throughout before dramatically increasing to \(\sim\!\!2.5\times 10^{8}\) K after the full \(3\times 10^{7}\) years, maintaining these extremely high temperatures through shock-heating. The average velocity of the NSR shell, like the average temperature and average pressure, decreases throughout the evolution before a slight increase for the final \(6\times 10^{6}\) years (see bottom left panel of Figure 5). The velocity of the shell's outer edge at \(\sim\!\!6\times 10^{3}\) years is \(\sim\!\!10\) km s\({}^{-1}\) and remains below this velocity throughout. 
However, the velocity of the inner edge does increase due to the more frequent collisions occurring within the pile-up region, leading to a small velocity gradient within the shell. The ejecta pile-up region follows a similar trend but with higher average velocities, a result of increasingly frequent and higher velocity ejecta impacting the ejecta pile-up boundary. As the cavity is essentially a vacuum, the increasing velocities within this region directly reflect the increasing velocities of the nova ejecta. Figure 5: Run 1 spatiotemporal evolution of density, pressure, velocity, and temperature. The structure apparent at \(\lesssim 0.1\) pc is associated with individual eruptions. At early times (\(t\lesssim 3\times 10^{4}\) years), the temporal resolution becomes evident. As shown in the bottom left panel, the velocity of the ISM is negligible (\(\ll 10\) km s\({}^{-1}\)). ### Varying the ISM density Here we consider the same nova system as Run 1 (\(T_{\rm WD}=10^{7}\) K; \(\dot{M}=10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\)), but placed in lower and higher density surroundings. Run 2 is pre-populated by ISM with a lower density of \(1.67\times 10^{-25}\) g cm\({}^{-3}\) (\(n=0.1\)), and the ISM density of Run 5 and Run 7 is \(1.67\times 10^{-23}\) g cm\({}^{-3}\) (\(n=10\)) and \(1.67\times 10^{-22}\) g cm\({}^{-3}\) (\(n=100\)), respectively. We also sampled between these ISM densities with Run 3 (\(n=10^{-0.5}\approx 0.316\)), Run 4 (\(n=10^{0.5}\approx 3.16\)) and Run 6 (\(n=10^{1.5}\approx 31.6\)). As illustrated in Figure 6, the full simulations extend progressively further as the ISM density is decreased (e.g., \(\sim\)116 pc, \(\sim\)43 pc and \(\sim\)26 pc for Run 2 (\(n=0.1\)), Run 5 (\(n=10\)) and Run 7 (\(n=100\)), respectively) and all maintain an exceptionally thin shell due to the suppression of the early shell formation, reminiscent of Run 1. Furthermore, the remnants grown in Runs 1, 2, 5 and 7 with radiative cooling are 78.26%, 63.29%, 78.31%, and 77.96%, respectively, of the size of their counterparts without cooling (from Runs 1\({}^{\star}\), 2\({}^{\star}\), 5\({}^{\star}\) and 7\({}^{\star}\)), a direct result of radiative losses from cooling. The relative thickness of the NSR shell varies for each simulation but remains small (\(\lesssim 4\%\)) for all ISM densities, resulting from the same amount of work done by the same nova system on surroundings that present increasingly higher resistance. As expected, the density in the NSR cavity and pile-up region increases approximately in line with ISM density. These regions are not only denser as a result of the ISM environment, but are also more compressed for higher \(n\), leading to increased pressure. The velocity of material inside the NSR cavity in Runs 1-7 is identical, as in all cases the ejecta are essentially undergoing free expansion. Also, temperatures in this region for each of Runs 1-7 reach the same extreme value of \(\sim 1\times 10^{9}\) K, as nova ejecta expanding without resistance collide with earlier ejected matter in the pile-up region, before dropping away to \(<10\) K at the nova shell's inner edge (i.e., the properties in this region do not strongly depend upon \(n\)). The growth of the outer edge of the NSR shells within the \(n=10\) and \(n=100\) ISM follows a similar evolution to that of Run 1 (see the red line on the right plot of Figure 2). We can summarise our findings for this section as follows: for a given total kinetic energy, an increase in local ISM density results in a smaller nova super-remnant.
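The quoted final radii also admit a quick scaling check (our comparison, not one made in the paper): for fixed total kinetic energy, a Sedov-like \(r\propto(E/\rho)^{1/5}\) expectation gives \(r\propto n^{-0.2}\), close to the slope implied by Runs 2, 1, 5 and 7.

```python
import numpy as np

# Final NSR outer radii (pc) for the same system in different ISM densities:
n = np.array([0.1, 1.0, 10.0, 100.0])     # Runs 2, 1, 5, 7
r = np.array([116.0, 71.3, 43.0, 26.0])

slope = np.polyfit(np.log10(n), np.log10(r), 1)[0]
print(f"d(log r)/d(log n) = {slope:.2f}")  # ~-0.22 vs -0.2 (Sedov-like)
```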
### Varying the mass accretion rate

The next six simulations (Runs 8-13) explored NSR evolution while varying the accretion rate. We considered a WD with a temperature of \(10^{7}\) K accreting hydrogen rich material at a rate of \(\dot{M}=10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\), as well as a nova with the same WD temperature but with a lower accretion rate of \(\dot{M}=10^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\), placed within the three ISM densities used in Runs 1, 5 and 7, see Table 1. Runs 1-7 presumed that accretion was driven by the wind of a giant donor. We include mass loss from the donor between eruptions, although this has no impact upon the results (yet is computationally favourable, see Section 3). As such, we reduce the mass loss rate from the donor in line with any simulated changes to accretion rate, for consistency and to ensure that the donor wind does not become important. The WD growth models for \(\dot{M}=10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\) and \(\dot{M}=10^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\) reveal that the WD loses mass with every eruption; it does not grow towards the Chandrasekhar limit, but is instead eroded. We therefore imposed a temporal upper limit of 100 Myr for the \(\dot{M}=10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\) and \(\dot{M}=10^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\) simulations. The WD growth models indicate that these systems undergo 40,343 eruptions and 2,094 eruptions, respectively, before reaching the temporal upper limit, at which point they would have recurrence periods of \(\sim\)3,000 years and \(\sim\)49,000 years, respectively. Focusing on Runs 8-10 (\(\dot{M}=10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\)), presented in the second row of Figure 7, we find that the overall structure of the remnants is similar to those grown with the higher accretion rate. The major difference is their much larger size and thicker shells. The shell grown in the lowest density ISM (Run 8; \(n=1\)) extends to \(\sim\)99 pc, with a shell thickness of \(\sim\)11%, and Run 9 (\(n=10\)) and Run 10 (\(n=100\)) grow remnants with radial sizes of \(\sim\)62 pc and \(\sim\)40 pc, and shell thicknesses of \(\sim\)22% and \(\sim\)25%, respectively. These more extended shells are a consequence of the larger amount of kinetic energy ejected by the underlying system and the longer time over which it can act (\(1\times 10^{8}\) years compared to \(\sim\)3.1 \(\times\) 10\({}^{7}\) years in Run 1; see Figure 8). The outer edge of the NSR shell follows the same evolutionary trend as seen in Runs 1-7 (in the same manner as the remnant in the right plot of Figure 2). In Runs 11-13 (\(\dot{M}=10^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\); \(n=1\), 10, 100, respectively), we see that the NSRs take the familiar shape seen in Runs 1-10, with a very low density cavity preceding a high density shell (see the third row of Figure 7). The remnants grown in Run 11 (\(n=1\)), Run 12 (\(n=10\)) and Run 13 (\(n=100\)) extend to \(\sim\)75 pc, \(\sim\)48 pc and \(\sim\)26 pc, respectively, and have shell thicknesses of 17%, 34% and 39%, respectively. Yet for each of these runs, the remnant shell is difficult to discern from the surroundings, with the peak density within the NSR shell of Run 11, Run 12 and Run 13 reaching only 10.9%, 1.9% and 1.4% beyond that of the pre-populated ISM density, respectively. As expected, the outer shells of the remnants grown in systems with the lower accretion rate (\(\dot{M}=10^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\)) follow the same growth curve over time as previous runs.
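The WD growth bookkeeping described above (eruptions counted until either the Chandrasekhar mass or the 100 Myr cut-off is reached) can be sketched in a few lines. The `ignition_mass()` and `ejecta_mass()` functions below are hypothetical placeholders standing in for interpolation over the Y05 grid, not the fits used in this work.

```python
M_CH = 1.4        # Chandrasekhar mass, M_sun
T_MAX = 1e8       # temporal upper limit for eroding WDs, yr

def ignition_mass(m_wd):      # placeholder: less massive WDs need more fuel
    return 2e-5 * (M_CH / m_wd) ** 4

def ejecta_mass(m_wd, mdot):  # placeholder: low mdot ejects more than accreted
    eta = 0.4 if mdot >= 1e-7 else -0.1   # toy accumulation efficiency
    return (1.0 - eta) * ignition_mass(m_wd)

def grow(m_wd, mdot):
    """Loop over eruptions until M_Ch or the temporal upper limit."""
    t, n_erupt = 0.0, 0
    while m_wd < M_CH and t < T_MAX:
        m_ig = ignition_mass(m_wd)
        t += m_ig / mdot                  # recurrence period = m_ig / mdot
        m_wd += m_ig - ejecta_mass(m_wd, mdot)
        n_erupt += 1
    return m_wd, t, n_erupt

print(grow(1.0, 1e-7))   # grows toward M_Ch
print(grow(1.0, 1e-9))   # erodes; stops at the temporal upper limit
```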
The nova eruptions from the systems in Runs 11-13 occur infrequently for the vast majority of the evolution, starting with \(P_{\rm rec}\sim\)46,600 years when \(M_{\rm WD}=1\) M\({}_{\odot}\) and increasing to \(\sim\)49,000 years after the full \(1\times 10^{8}\) years. Therefore, a combination of low energy eruptions and long recurrence periods leads to a very broad, low-contrast shell as the ejecta individually dissipate into the surrounding ISM with minimal pile-up. Dynamically, such a NSR would be difficult to discern from the local environment. However, we would not expect this form of shell to exist around the known RNe, as these systems would not (currently) be recognised as recurrent novae, their recurrence periods being \(\gg 100\) years (see, for example, Darnley and Henze 2020).

Figure 6: Dynamics of Run 1 (\(n=1\)) compared to Run 2 (\(n=0.1\)), Run 3 (\(n=0.316\)), Run 4 (\(n=3.16\)), Run 5 (\(n=10\)), Run 6 (\(n=31.6\)) and Run 7 (\(n=100\)).

Figure 7: End point dynamics of Runs 1–21. _First row:_\(\dot{M}=10^{-7}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\), \(T_{\mathrm{WD}}=10\,\mathrm{MK}\), and \(M_{\mathrm{WD}}=1\,\mathrm{M}_{\odot}\) for \(n=0.1,0.316,1,3.16,10\), \(31.6,100\) (Runs 2, 3, 1, 4, 5, 6, 7, respectively). _Second row:_\(\dot{M}=10^{-8}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\), \(T_{\mathrm{WD}}=10\,\mathrm{MK}\), and \(M_{\mathrm{WD}}=1\,\mathrm{M}_{\odot}\) for \(n=1,10,100\) (Runs 8–10, respectively). _Third row:_\(\dot{M}=10^{-9}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\), \(T_{\mathrm{WD}}=10\,\mathrm{MK}\) and \(M_{\mathrm{WD}}=1\,\mathrm{M}_{\odot}\) for \(n=1,10,100\) (Runs 11–13, respectively). _Fourth row:_\(\dot{M}=10^{-7}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\), \(n=1\) and \(M_{\mathrm{WD}}=1\,\mathrm{M}_{\odot}\) for \(T_{\mathrm{WD}}=10\,\mathrm{MK}\), \(30\,\mathrm{MK}\), \(50\,\mathrm{MK}\) (Runs 1, 14, 15, respectively). _Fifth row:_\(\dot{M}=10^{-7}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\), \(n=1\) and \(T_{\mathrm{WD}}=10\,\mathrm{MK}\) for \(M_{\mathrm{WD}}=0.65\,\mathrm{M}_{\odot}\), \(0.8\,\mathrm{M}_{\odot}\), \(0.9\,\mathrm{M}_{\odot}\), \(1\,\mathrm{M}_{\odot}\), \(1.1\,\mathrm{M}_{\odot}\), \(1.2\,\mathrm{M}_{\odot}\), \(1.3\,\mathrm{M}_{\odot}\) (Runs 16–18, 1, 19–21, respectively).

Equipped with the simulations of NSRs grown from systems with different accretion rates, we find that a lower accretion rate leads to more extended, but less well-defined, NSRs: a direct result of the longer evolutionary timescale.

### Varying the white dwarf temperature

The underlying WD temperature does not have a significant impact on the evolution of most of the various parameters given in Section 2.2. For example, for \(\dot{M}=10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\), the evolution of each parameter is very similar throughout, regardless of the WD temperature. Yet, there is a moderate difference in the evolution of the mass accumulation efficiency for the different temperatures. This is also true for the total kinetic energy of the ejecta generated over the entirety of the nova eruptions, whereby the 30 MK and 50 MK WDs have approximately twice the kinetic energy output of the cooler 10 MK WD. This is reflected in the set of simulations with the WD temperature varied from 10 MK (Run 1) to 30 MK (Run 14) to 50 MK (Run 15) with \(\dot{M}=10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) and \(n=1\).
A comparison of the NSR shell, as shown in the fourth row of Figure 7 for the three different WD temperatures, reveals the overall structure of each to be similar, but with the 30 MK WD remnant extending moderately further than the others. The outer edge of the remnant shell for the coolest WD is 71.3 pc and the hottest WD leads to an outer edge of 79.7 pc, whereas the outer edge of the 30 MK WD remnant shell is 97.4 pc. Yet, this informs us that, for the highest accretion rate we have considered, the WD temperature has a small impact on the large scale structure of the NSR in comparison to the effects of ISM density (Section 3.3) and mass accretion rate (Section 3.4). The evolution of the shell is similar for each WD temperature, and at each epoch the density and thickness of the shells are a close match. By analysing how the recurrence period and the total kinetic energy change as the NSR grows in each of these systems, it is apparent that the WD temperature only has a relatively small impact. This may be due to the system having a high accretion rate (\(10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\)), so being dominated by accretion heating2.

Footnote 2: Y05 accounted for accretion heating within their computations.

Any influence of WD temperature may become more substantial as the accretion rate decreases, as accretion heating will become less severe and the WD would have more time to cool between eruptions. A further consideration is that, unlike accretion rate and ISM density, which were both varied by factors of 10 and 100, the WD temperatures considered here only vary by factors of 3 and 5. The range we use (10 MK, 30 MK and 50 MK) was initially employed by Prialnik and Kovetz (1995)3 and was chosen to represent two extremes and an intermediate WD core temperature; the lower limit was set because a colder WD delays hydrogen ignition, leading to long accretion times (hence more substantial eruptions), and the upper limit accounts for hot WDs being able to quickly reach the conditions for TNR.

Footnote 3: Before consequently being adopted by Y05 with the incorporation of lower accretion rates for the cooler WDs.

We can conclude, for the accretion rate and ISM density (\(n\)) sampled in Runs 1, 14, 15, that the expected variation in WD temperature has much less impact on NSR evolution than plausible variations in accretion rate or \(n\).

### Varying the initial white dwarf mass

So far we have considered nova eruptions generated by a WD growing from 1 M\({}_{\odot}\) to M\({}_{\rm Ch}\). Here, we consider a number of different initial WD masses: 0.65 M\({}_{\odot}\) in Run 16, 0.8 M\({}_{\odot}\) in Run 17, 0.9 M\({}_{\odot}\) in Run 18 and 1.1 M\({}_{\odot}\) in Run 19, with \(\dot{M}=10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) and \(n=1\). This upper initial mass is the upper formation limit for a CO WD (Ritossa et al. 1996). We also sample WDs with masses of 1.2 M\({}_{\odot}\) in Run 20 and 1.3 M\({}_{\odot}\) in Run 21. The number of eruptions appreciably increases as we lower the initial WD mass, as more eruptions are needed to reach M\({}_{\rm Ch}\) (see Table 1). A comparison of the NSR shells from these runs, presented in the last row of Figure 7, shows that each remnant becomes marginally larger as the initial WD mass is lowered, as more eruptions lead to more ejecta impacting the surrounding ISM over a longer period of time.
The radial size of the NSRs in Runs 16, 17, 18 and 19 (0.65 M\({}_{\odot}\), 0.8 M\({}_{\odot}\), 0.9 M\({}_{\odot}\) and 1.1 M\({}_{\odot}\)) almost completely resembles that of the NSR from Run 1 (1 M\({}_{\odot}\)), whereas starting with a WD mass \(>1.1\) M\({}_{\odot}\) (in the regime of ONe WDs; Ritossa et al. 1996), as simulated in Run 20 (1.2 M\({}_{\odot}\)) and Run 21 (1.3 M\({}_{\odot}\)), does make a difference to the radial size of the NSR. The structure of the shell for each NSR is remarkably similar, with the 0.65 M\({}_{\odot}\) WD simulation finishing with a shell thickness of \(\sim\)1.1% compared to \(\sim\)1.2% for the 1.3 M\({}_{\odot}\) WD. Each NSR shell also follows a very similar transition, with similar shell width ratios at the same epochs. The radial growth curves of each simulation follow the same evolution, with the 0.65 M\({}_{\odot}\) WD taking ten times as long (37 Myr) to reach M\({}_{\rm Ch}\) as the 1.3 M\({}_{\odot}\) WD (3.7 Myr). We can therefore conclude that the initial mass of the growing WD has little impact on the final structure of the NSR, much less than the prominent influence of the ISM density (Section 3.3) and accretion rate (Section 3.4).

## 4 Additional tests

In Section 3, we presented the full set of simulations. Here, we outline several tests of alternative models of the ejecta characteristics.

Figure 8: Kinetic energy evolution from the simulated nova eruptions and red giant wind for Runs 1–13. The vertical black line represents the temporal cut-off point for Runs 8–13, i.e., the upper time limit for the simulations in which the WD is shrinking and therefore never reaches the Chandrasekhar limit.

### Using broken fits to estimate system parameters

For Runs 1-21, we utilised ejecta characteristics determined from our WD growth model. This was based on interpolating between the results of multi-cycle nova evolutionary simulations by Y05 (see Section 2.1). In our work, a smooth function asymptotically approaching \(\rm{M_{Ch}}\) was fitted to the Y05 grid. An alternative way of interpolating between the Y05 grid points is with a 'knee' function (e.g., Soraisam & Gilfanov, 2015, their Figure 1), which we replicated by fitting two distinct exponentials (Figure 1). From here, we grew a 1 \(\rm{M_{\odot}}\) WD with our model as outlined in Section 2.2, but referring in this case to the broken exponential fits. Eruption parameters evolve in the same way as those from the smooth function fitting, with the main difference being the abrupt 'knee' at 1.25 \(\rm{M_{\odot}}\). The total kinetic energy at the end of the WD growth is \(\sim\)\(5.3\times 10^{-2}\) foe (1 foe \(\equiv 10^{51}\) erg). This is much greater than the total kinetic energy generated from our smooth fitting function in Section 2.2, which ended with \(\sim\)\(2.5\times 10^{-2}\) foe. This reflects the more extreme eruptions later on in this system's evolution, a direct result of the higher ejecta velocities after the WD has surpassed 1.25 \(\rm{M_{\odot}}\). We ran a simulation (Run 1\({}^{\dagger}\)) of nova eruptions generated from the two distinct exponential fits, with the same parameters as our reference simulation (Run 1), including \(\dot{M}=10^{-7}\)\(\rm{M_{\odot}}\) yr\({}^{-1}\), a WD temperature of \(10^{7}\) K, and \(n=1\) ISM (see Table 1). As can be seen in Figure 9, the shell grown from the broken exponential fitting does not grow as large as the shell grown from the smooth fitting.
This is a consequence of the much higher mass accumulation efficiency between 1 \(\rm{M_{\odot}}\) and 1.25 \(\rm{M_{\odot}}\) (see Figure 1) for the broken exponential fit, resulting in lower levels of ejecta and substantially less kinetic energy during the early stages of NSR growth; this period has a major impact on the subsequent evolution. Beyond 1.25 \(\rm{M_{\odot}}\), the radial growth curve of the Run 1\({}^{\dagger}\) shell deviates from that of Run 1 (at approximately \(1.4\times 10^{7}\) years; see the inset of the right panel in Figure 9) as a result of the later eruptions becoming more extreme. As shown in the inset of the left panel in Figure 9, both the shells in Run 1 and Run 1\({}^{\dagger}\) have a similar structure; however, the shell in Run 1\({}^{\dagger}\) is thinner and, consequently, has a higher density inner edge. This also has a greater impact on the temperature gradient of the shell in Run 1\({}^{\dagger}\), with the outer edge being much hotter than the inner edge, unlike Run 1. It is clear that using an alternative interpolation to the values given in Y05 does have an effect on the final simulated NSR. In the case of Run 1\({}^{\dagger}\), the shell width is approximately half that of the remnant in Run 1, and the size of the remnant decreases by \(\sim\)12%. Whilst a non-negligible difference, we consider the more realistic smooth evolution of system parameters adopted for our study to be a truer representation for NSR simulations. Nevertheless, this does indicate the need for more finely sampled nova model grids.
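To make the contrast between the two interpolation choices concrete, the sketch below compares a single smooth function asymptotically approaching \(\rm{M_{Ch}}\) with a broken fit of two exponentials meeting at 1.25 \(\rm{M_{\odot}}\). The functional forms and coefficients are illustrative assumptions, not the fitted values from this work.

```python
import numpy as np

M_CH = 1.4

def smooth_fit(m_wd):                 # e.g. an ejecta-velocity-like parameter
    return 4000.0 * np.exp(-3.0 * (M_CH - m_wd))

def broken_fit(m_wd):                 # two exponentials with a knee at 1.25
    knee = 1.25
    lo = 1500.0 * np.exp(0.5 * (m_wd - knee))   # shallow branch below knee
    hi = 1500.0 * np.exp(8.0 * (m_wd - knee))   # steep branch above knee
    return np.where(m_wd < knee, lo, hi)

for m in (1.0, 1.2, 1.3, 1.39):
    print(f"M_WD={m:.2f}: smooth={smooth_fit(m):7.1f}, "
          f"broken={float(broken_fit(m)):7.1f}")
```

The steep branch above the knee mimics the abrupt late-time increase in eruption energetics discussed above.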
### Eruption characteristics

Although we are predominantly concerned with the long term evolution (and therefore large scale structure) of a NSR, we explored several eruption characteristics to observe how NSR evolution is affected. Firstly, we know that the timescale of the nova eruption can vary, as we see a wide range of SSS periods (see, e.g., Henze et al., 2014). Secondly, shocks play a key role within the nova ejecta, and instead of material being ejected in one event, the eruption contains a number of components with varying masses and velocities (Metzger et al., 2014; Aydi et al., 2020a,b; Murphy-Glaysher et al., 2022).

#### 4.2.1 Eruption duration

As an extension to the Run 0 tests (DHO19), to determine if the duration of a nova eruption affects NSR large scale structure we ran high resolution (\(\sim\)4 AU/cell) simulations, each with 1000 eruptions utilising the Run 0 setup with a range of eruption durations: 0.07 d, 0.7 d, 7 d, 70 d and 350 d. For each test, the eruption duration plus the quiescent period match the recurrence period (350 days; e.g., \(349.93+0.07\) d or \(343+7\) d), with a fixed ejecta velocity of 3000 km s\({}^{-1}\). We required each test to inject the same total kinetic energy, so the eruption mass-loss rate was decreased to account for the longer timescales. After around 100 eruptions, the inner and outer edges of the NSR shell followed the same evolutionary trend regardless of eruption duration, and even though the NSR pile-up fluctuates more than the shell, they again settle into similar growth rates. This removes any eruption duration dependency and indicates that our NSR results are not sensitive to assumptions made about eruption time-scales.

#### 4.2.2 Intra-eruption shocks

We also wanted to test whether a non-uniform ejection of material from the nova would affect the large scale structure of the shell. For this, we considered the composition of a classical nova, whereby the eruption takes place over a certain timescale and, over that time period, the speed of ejection increases (Bode & Evans, 1989; O'Brien et al., 1994; Metzger et al., 2014; Aydi et al., 2020a,b). This implies that the outburst is comprised of a slow wind followed by a faster wind, creating a shock within the ejecta (O'Brien et al., 1994; Metzger et al., 2014; Aydi et al., 2020a,b). We ran a Run 0-based simulation following 1000 eruptions with a 7 day duration. To incorporate intra-ejecta shocks, we split the ejecta into two separate components. For moderate-speed novae, the ejecta velocities range from 500-2000 km s\({}^{-1}\), but for fast novae this range is 1000-4000 km s\({}^{-1}\) (O'Brien et al., 1994). As we are considering recurrent nova eruptions and therefore dealing with fast novae, we used the latter range of velocities for this test. We ejected half the mass at 1000 km s\({}^{-1}\) over 3.5 days, immediately followed by half of the mass at 4123 km s\({}^{-1}\) over the next 3.5 days, such that the total kinetic energy matched that of a 7 day eruption with an ejecta velocity of 3000 km s\({}^{-1}\). As the second half of the mass is ejected at a higher velocity than the first, we see intra-ejecta shocks as the later ejecta overtakes and interacts with the earlier ejecta. Again, after around 100 years, the inner and outer edges of the NSR shells created from ejecta with and without intra-eruption shocks follow the same evolutionary trend. In Sections 4.2.1 and 4.2.2, we have demonstrated that the long term evolution of nova ejecta is not affected by the nova eruption duration nor by the presence of intra-ejecta shocks, and consequently, neither is any NSR. NSR evolution only depends upon the total kinetic energy of the ejecta and the surrounding medium4.

Footnote 4: Here, we are considering a pure adiabatic scenario.

## 5 Observational predictions

Here, we investigate the evolution of NSR observables, derived from Run 1 (Section 3.2), in part to inform any NSR follow-up observations or searches. The simplest and computationally cheapest way to predict the emission over a full simulation of a NSR is by assuming a pure hydrogen environment. We can thus compute the ionisation fraction (\(f\)), emission measure (EM), recombination time-scale, X-ray luminosity, and H\(\alpha\) emission. In general, an assumption of pure hydrogen provides a good estimate of \(f\) throughout the NSR.

### Evolution of emission measure

Assuming pure hydrogen, we employed the Saha (1921) equation to compute \(f\) for each NSR cell across all epochs. As the number of free protons in a medium of fully ionised hydrogen is equal to the number of electrons, we define the EM in each NSR cell as the square of the electron density (\(n_{\rm e}^{2}\)) integrated over the volume of the spherical shell represented by each cell. The EM from the different Run 1 NSR regions (cavity, ejecta pile-up, shell, and the entire NSR) at each epoch was calculated by integrating over all shells within each region. The mean ionisation fraction (\(\bar{f}\)) in each region, per epoch, was computed in a similar fashion while also weighting each shell by density. The evolution of \(\bar{f}\) and the total EM for each region is shown in Figure 10. In Figure 11, we show the evolution of \(\bar{f}\) and EM for the cavity, ejecta pile-up region and shell alongside the evolution of the mean density and temperature.
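A hedged sketch of the pure-hydrogen Saha ionisation fraction and the per-cell EM described above is given below; CGS units throughout, and the example cell values are illustrative only.

```python
import numpy as np

K_B, H, M_E = 1.3807e-16, 6.6261e-27, 9.1094e-28   # erg/K, erg s, g
CHI_H = 2.1787e-11                                  # 13.6 eV in erg

def saha_fraction(n_h, temp):
    """Ionisation fraction f of pure hydrogen at number density n_h [cm^-3]."""
    s = (2.0 * np.pi * M_E * K_B * temp / H**2) ** 1.5 \
        * np.exp(-CHI_H / (K_B * temp))
    a = s / n_h
    return 0.5 * (-a + np.sqrt(a * a + 4.0 * a))    # root of f^2/(1-f) = a

def cell_em(n_h, temp, r_in_cm, r_out_cm):
    """EM = n_e^2 * V over the spherical shell represented by one cell."""
    f = saha_fraction(n_h, temp)
    vol = 4.0 / 3.0 * np.pi * (r_out_cm**3 - r_in_cm**3)
    return (f * n_h) ** 2 * vol

pc = 3.086e18
print(saha_fraction(1.0, 1e4))                      # hot diffuse gas: f ~ 1
print(saha_fraction(100.0, 100.0))                  # cold dense shell: f ~ 0
print(f"{cell_em(100.0, 1e4, 70.0*pc, 70.1*pc):.2e} cm^-3")
```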
As illustrated in Figure 11, the mean temperature of the pile-up region is approximately a few \(\times 10^{6}\) K for \(\sim\)\(2.7\times 10^{7}\) years (except during the initial eruption) and begins to increase toward \(\sim\)\(2.8\times 10^{8}\) K during the next \(\sim\)\(3.5\times 10^{6}\) years of the NSR evolution. The density in this region decreases by over a factor of 2 as it grows, but the extremely high temperatures maintain \(\bar{f}\gtrsim 25\%\). As a result, the EM from this region remains high. Within the cavity, \(\bar{f}\gtrsim 1\%\), and so even with the density decreasing over time, the emission from this region remains a contributing, albeit fluctuating, factor until the latter stages of NSR evolution. If we focus on \(\bar{f}\) within the NSR shell in Figure 11, we see the effect of recombination as a result of the high densities and cooling. For the first \(10^{5}\) years, the NSR shell is fully ionised; here the shell EM is high and is the dominant source. After this, \(\bar{f}\) in the shell decreases to negligible levels as the material recombines and remains neutral for the majority of the NSR lifetime (from \(\sim\)\(10^{5}\) years to \(\sim\)\(3\times 10^{7}\) years), which, combined with an almost constant mean density during this period, leads to a drop in EM to effectively zero. However, as with the other regions, the late-time frequent highly energetic eruptions begin to re-heat the NSR shell, increasing \(\bar{f}\) marginally. The high NSR shell density at this time leads to the NSR shell again contributing to the EM at the end of the simulation. The evolution of the total NSR EM is shown in the bottom-right panel of Figure 10. The NSR shell initially dominates the EM as this high density region begins to sweep up ISM. After \(\sim\)\(5\times 10^{5}\) years, the average temperature within the shell has decreased enough for the material to recombine, resulting in a dramatic reduction in EM from this region. As a result, the total EM from the NSR becomes dominated by the pile-up region between \(\sim\)\(5\times 10^{5}\) years and \(\sim\)\(3\times 10^{7}\) years, with additional contribution from the fluctuating cavity emission throughout (originating from the eruptions themselves). Once the later stages have been reached (the last \(\sim\)\(5\times 10^{5}\) years), with frequent highly energetic ejecta, the rate of ionisation within the very high density shell (particularly at the inner edge) leads to a substantial increase in EM from this region. However, unlike at early times when the EM was dominated by the entire NSR shell, the emission at these later times emanates exclusively from the pile-up region and the inner edge of the shell.

Figure 10: Run 1 ionisation fraction (blue) and emission measure (black) evolution within the cavity, the pile-up region, shell and the entire NSR. The 'bump' in the cavity emission measure at \(\sim\)\(3\times 10^{5}\) years is an artefact of the temporal sampling. Note that the cavity and ejecta pile-up region panels have different ionisation fraction limits to the NSR shell and total NSR panels.

Figure 9: As Figure 2, but comparing Run 1 (grey) to Run 1\({}^{\dagger}\) (black), i.e., smooth versus broken exponential interpolation of the Y05 relations. In the right panel, we indicate the point at which the break in the exponential fitting occurs (\(\sim\)\(1.4\times 10^{7}\) years).
### Evolution of recombination time

The Morpheus code only determines the ionisation state of the material from the dynamics of the simulation; it does not include radiative transfer. As such, when considering the emission from simulated NSRs, and indeed their observability, we must also take account of recombination timescales (\(t_{\rm recomb}\)). As the recombination time depends upon the relative abundances of the gas, from this point on we assume that all material is of Solar composition. While this will be a good approximation for the ISM, it will be less so for the ejecta. However, the NSR is predominantly swept-up ISM. Abundances from Wilms et al. (2000) were utilised to determine \(f\) for H, He, C, N, O, Ne, Na, Mg, Al, Si, P, S, Cl, Ar, Ca, Ti, Cr, Mn, Fe, Co, and Ni within the NSR. We compute the minimum recombination time for all cells of Run 1 by assuming the NSR is fully ionised, thus providing a lower limit on the recombination time for each cell. The recombination time evolution across the entire Run 1 NSR remnant is shown in Figure 12. Here, we see that the maximum recombination time within the NSR shell (except for the first epoch considered) is always \(\leq 3\times 10^{4}\) years, with the peak always at the inner edge of the shell; the minimum recombination time of the shell approximately corresponds with the peak density. As the evolving WD approaches \(\rm M_{Ch}\), the amount of ionised mass within the NSR ejecta pile-up region (effectively the entire NSR) reaches \(\sim\)10 \(\rm M_{\odot}\), as gas within the pile-up region is heated by the late-time frequent and energetic eruptions. This is once again reflected in the moderate rise of the recombination time at the inner edge of the shell in Figure 12 (the thick black dotted line tracing the NSR shell inner edge). Notably, the mass weighted median recombination time (indicated by the dashed line) remains essentially constant after \(\sim\)\(5\times 10^{6}\) years; hence we adopt \(t_{\rm recomb}=315\) yr throughout the Run 1 NSR shell (the mass weighted median recombination time during the epoch when \(P_{\rm rec}=1\) yr).
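A minimal sketch of this lower-limit recombination timescale, assuming fully ionised gas and approximating the Case B recombination coefficient with a standard textbook power law (not the exact per-element coefficients used here), is:

```python
import numpy as np

YR = 3.156e7                                   # seconds per year

def alpha_B(temp):
    """Approximate Case B recombination coefficient [cm^3 s^-1]."""
    return 2.59e-13 * (temp / 1e4) ** -0.7

def t_recomb_yr(n_e, temp):
    """Recombination timescale t = 1/(n_e * alpha_B) in years."""
    return 1.0 / (n_e * alpha_B(temp)) / YR

print(f"shell inner edge (n_e~100, 1e4 K): {t_recomb_yr(100.0, 1e4):.0f} yr")
print(f"cavity (n_e~1e-4, 1e8 K):          {t_recomb_yr(1e-4, 1e8):.2e} yr")
```

Dense shell gas recombines in hundreds to thousands of years, while the rarefied hot cavity gas effectively never does, in line with the behaviour described above.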
### Evolution of X-ray luminosity

Following Vaytet (2009), Vaytet et al. (2011) and DHO19, we compute the EM contribution from each Run 1 NSR spherical shell (as defined by the simulation cells) and then bin the EM contribution into 95 logarithmically divided temperature bins ranging from 149 K to \(\sim\)\(3.9\times 10^{9}\) K (based on the shell/cell temperature). The temperature-binned EMs are used as inputs to XSPEC. Within XSPEC, we utilise the APEC (Smith et al., 2001) model, which computes an emission spectrum containing lines for H, He, C, N, O, Ne, Mg, Al, Si, S, Ar, Ca, Fe, and Ni with Solar abundances (He fixed at cosmic) from a collisionally-ionised diffuse gas. The EM histograms can also be used to broadly explore the evolution of NSR emission as a function of photon energy and hence wavelength. Tracking the emission evolution for the Run 0 NSR (see Extended Data Figure 7 in DHO19) reveals that it starts off at high temperatures, emitting mostly in X-rays at \(\sim\)1 keV, as the eruptions in Run 0 are immediately frequent and highly energetic. But, as the NSR shell grows and cools, the EM peak moves toward lower energies, ending in the optical/NIR region (\(\sim\)\(2\times 10^{-3}\) keV) after the full \(10^{5}\) eruptions. A logarithmic extrapolation of the EM indicates that the present day peak might be in the infrared, around 12-13 \(\mu\)m, and could be a potential target for _JWST_ (DHO19). On the other hand, the Run 1 NSR begins with the peak EM at low energies (optical/NIR) due to the long period between the initial low energy eruptions allowing the NSR to cool. The temperature of the NSR _as a whole_ remains low throughout the evolution, and the EM peak remains at low energies through all 1,900,750 eruptions. Separating the EM evolution into the component NSR parts, namely the cavity, pile-up, and shell, provides the contributions from each of these regions. The cavity emission remains relatively low compared to the other regions throughout the full evolution. For the first \(\sim\)\(10^{4}\) eruptions, the cavity emits in the optical/NIR regime. However, when the recurrence period approaches one year, the contribution from the cavity, albeit small, branches across to higher energies. This may be attributed to the ejected material colliding with the inner edge of the pile-up region. Emission levels from the pile-up region are considerably higher than from the cavity and contribute more to the X-ray emission at later times as incoming ejecta continuously shock-heat this region. In fact, after the full 1,900,750 eruptions, a portion of the pile-up region emits in excess of 100 keV.

Figure 11: Run 1 average density, average temperature, ionisation fraction, and emission measure evolution within the cavity (light blue), the pile-up region (dark blue) and the NSR shell (black).

Figure 12: Run 1 recombination time evolution at various epochs when assuming that all material is completely ionised. The median mass weighted recombination time at each epoch is represented with the dashed line. The thick black dotted line traces the inner edge of the NSR shell.

In contrast, the NSR shell emits mostly in the optical at early times before peaking after only \(10^{3}\) eruptions, when the majority of the emission lies in the NIR. Beyond this epoch, for the entire evolution of the NSR, the shell contributes a negligible amount of emission and remains the coolest part of the NSR, largely shielded from the highly energetic material. We use the EM to predict the evolution of the Run 1 X-ray luminosity. We assumed that our simulated NSR is at a distance of 778 kpc (Stanek & Garnavich, 1998, i.e., within M 31). To remove the impact of single eruptions, we re-bin to a lower temporal resolution. This is illustrated in Figure 13 with comparison to the X-ray luminosity evolution from the NSR created in Run 0. As shown in the left plot of Figure 13, for the Run 0 NSR, the X-ray luminosity peaks at \(\sim\)\(6\times 10^{31}\) erg s\({}^{-1}\) after approximately \(10^{3}\) years (equivalent to \(10^{3}\) eruptions for Run 0). This luminosity then fades to \(\sim\)\(9\times 10^{29}\) erg s\({}^{-1}\) after \(10^{5}\) years/eruptions and, with a power-law extrapolation to the latest time (representing the present day in DHO19), the total X-ray luminosity drops to \(\sim\)\(3\times 10^{29}\) erg s\({}^{-1}\). As detailed in DHO19, the X-ray luminosities predicted for the entire NSR evolution lie well below the \(3\sigma\) upper limiting luminosity of \(\sim\)\(9\times 10^{34}\) erg s\({}^{-1}\) constrained by archival X-ray observations (see horizontal dotted line in the left plot of Figure 13).
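The temperature-binning step used to build the XSPEC inputs described above can be sketched as follows; the toy cell temperatures and EM values are illustrative assumptions.

```python
import numpy as np

def bin_em_by_temperature(temps, ems, n_bins=95, t_lo=149.0, t_hi=3.9e9):
    """Histogram per-cell EM into logarithmically spaced temperature bins."""
    edges = np.logspace(np.log10(t_lo), np.log10(t_hi), n_bins + 1)
    binned, _ = np.histogram(temps, bins=edges, weights=ems)
    return edges, binned

# Toy NSR: a hot pile-up region plus a cool dense shell.
rng = np.random.default_rng(0)
temps = np.concatenate([10 ** rng.uniform(6, 8, 500),    # pile-up cells, K
                        10 ** rng.uniform(2, 4, 500)])   # shell cells, K
ems = np.concatenate([np.full(500, 1e55), np.full(500, 1e57)])

edges, binned = bin_em_by_temperature(temps, ems)
hot = binned[edges[:-1] >= 1e6].sum()
print(f"EM in bins above 1e6 K: {hot:.2e} cm^-3")
```

Each non-empty bin would then set the normalisation of one APEC component at that bin's temperature.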
The Run 1 NSR X-ray luminosity follows an entirely different evolution from Run 0 (see right plot of Figure 13). While X-ray emission is predicted from the onset of Run 0, we predict negligible X-ray emission from the Run 1 NSR until \(3\times 10^{5}\) years; \(P_{\rm rec}\lesssim 85\) yr. From that point, the X-ray luminosity rises as the recurrence period falls and the ejecta become more energetic, increasing significantly during the final \(\sim\)\(10^{7}\) years. Starting at \(\sim\)\(2\times 10^{29}\) erg s\({}^{-1}\) after \(\sim\)\(3\times 10^{5}\) years, the initial X-ray luminosity is dominated by soft emission between 0.3-1 keV. The influence of the more frequent and energetic eruptions becomes evident over the next 26 Myr as harder emission from shock-heating, with energies between 1-10 keV, reaches \(\sim\)\(1.5\times 10^{30}\) erg s\({}^{-1}\) after \(\sim\)\(2.7\times 10^{7}\) years (see inset of the right plot in Figure 13), contributing greatly to the total X-ray luminosity of \(\sim\)\(1\times 10^{31}\) erg s\({}^{-1}\) at this epoch. However, this is still much fainter than typical nova X-ray luminosities such as, for example, those of M31N 2004-01b, 2005-02a, and 2006-06b with \(L_{\rm X}=(11.1\pm 1.6)\times 10^{36}\) erg s\({}^{-1}\), \(2.6\times 10^{37}\) erg s\({}^{-1}\) and \((3.6\pm 0.3)\times 10^{36}\) erg s\({}^{-1}\), respectively5 (see Henze et al., 2010, 2011, for a large sample of M 31 CNe X-ray luminosities). Instead, this X-ray luminosity is more akin to that seen in quiescent novae, such as \(\sim\)\(6\times 10^{31}\) erg s\({}^{-1}\) for RS Ophiuchi (Page et al., 2022).

Footnote 5: Unabsorbed luminosity between 0.2–10 keV.

The NSR X-ray luminosity then continues to increase for the remainder of the evolution, ending with a luminosity of \(\sim\)\(1\times 10^{31}\) erg s\({}^{-1}\). This is due to hard emission (1-10 keV) becoming increasingly significant, with harder emission between 10–50 keV appearing in the final \(4\times 10^{6}\) years. If we consider the \(P_{\rm rec}=1\) yr epoch, the Run 1 NSR X-ray luminosity is \(\sim\)\(9\times 10^{30}\) erg s\({}^{-1}\) (see inset of the right plot in Figure 13). This is 30\(\times\) greater than the present day extrapolated luminosity from Run 0.

### Evolution of H\(\alpha\) flux

From observations of the 12a NSR, we know such structures should be visible through their H\(\alpha\) emission (DHO19). As such, we utilised Run 1 to predict the evolution of H\(\alpha\) emission from a NSR in a similar manner to that described in Andersson (2021). The H\(\alpha\) luminosity was calculated by convolving the EM histograms with the appropriate temperature-dependent recombination coefficient for the given temperature (from Pequignot et al., 1991). The NSR was placed at the distance of M 31 and we applied an extinction of \(A_{\rm H\alpha}=0.253\) to find the H\(\alpha\) flux across the simulated NSR. The evolution of the Run 1 NSR H\(\alpha\) flux is presented in Figure 14.
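A hedged sketch of this H\(\alpha\) step follows: each temperature-binned EM is weighted by an effective H\(\alpha\) recombination coefficient, converted to luminosity, and then to flux at the distance of M 31 with extinction applied. The power-law coefficient below is a rough stand-in for the Pequignot et al. (1991) fits, and the bin values are illustrative.

```python
import numpy as np

E_HA = 3.03e-12          # erg per H-alpha photon (6563 A)
D_M31 = 778e3 * 3.086e18 # 778 kpc in cm
A_HA = 0.253             # adopted extinction, mag

def alpha_eff_halpha(temp):
    """Approximate effective H-alpha recombination coeff [cm^3 s^-1]."""
    return 1.17e-13 * (temp / 1e4) ** -0.9

def halpha_flux(bin_temps, bin_ems):
    lum = np.sum(alpha_eff_halpha(bin_temps) * bin_ems) * E_HA   # erg/s
    flux = lum / (4.0 * np.pi * D_M31**2)
    return flux * 10 ** (-0.4 * A_HA)                            # extincted

t = np.array([1e4, 5e3])          # bin temperatures, K
em = np.array([1e58, 5e58])       # binned EM, cm^-3
print(f"F(H-alpha) ~ {halpha_flux(t, em):.2e} erg s^-1 cm^-2")
```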
The Run 1 NSR H\(\alpha\) evolution broadly follows the EM evolution (cf. Figures 10 and 11). Initially, as the early NSR shell sweeps into the ISM, the H\(\alpha\) emission (predominantly emanating from the shell) follows a roughly power-law increase, reaching a peak of \(\sim\)\(8\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) after \(10^{5}\) yr. Beyond this time, however, the shell temperature decreases, allowing for recombination and a consequent (power-law-like) drop in H\(\alpha\) emission. As described in Section 5.1, between \(\sim\)\(10^{5}\) yrs and \(\sim\)\(3\times 10^{7}\) yrs, the main sources of H\(\alpha\) emission are the pile-up region and cavity. The cavity contribution can be seen as the numerous spikes in H\(\alpha\) flux, with the later energetic eruptions from the nova colliding with the sparse material within that region. As shown in Figure 14, the last \(\sim\)\(8\times 10^{6}\) years then see a dramatic increase in H\(\alpha\) emission, almost exclusively coming from the highly energetic eruptions at this stage impacting the high density inner edge of the formed NSR shell and pile-up region, reaching a maximum of \(\sim\)\(6\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) after the full \(3.1\times 10^{7}\) years. As shown in Figure 14, we also modelled the NSR H\(\alpha\) flux evolution for Run 8 (\(\dot{M}=10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\)) and Run 11 (\(\dot{M}=10^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\)) to explore the impact of mass accretion rate on H\(\alpha\) observability. Early in their evolution, the H\(\alpha\) emission from these NSRs follows a similar, but much fainter, evolution to the NSR emission in Run 1 (with \(\dot{M}=10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\)). However, unlike in Run 1 where the H\(\alpha\) flux begins to increase beyond \(\sim\)\(10^{6}\) years, the emission in Run 8 and Run 11 drops away, and continues to do so for the rest of the NSR's growth. In both Run 8 and Run 11 the H\(\alpha\) flux drops to \(\sim\)\(2\times 10^{-20}\) erg s\({}^{-1}\) cm\({}^{-2}\) after \(1\times 10^{7}\) years (compared to \(\sim\)\(1\times 10^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\) in Run 1) and ends with \(\sim\)\(6\times 10^{-21}\) erg s\({}^{-1}\) cm\({}^{-2}\) and \(\sim\)\(4\times 10^{-21}\) erg s\({}^{-1}\) cm\({}^{-2}\), respectively, after \(1\times 10^{8}\) years. We can tentatively conclude from these models that NSR H\(\alpha\) emission for systems with high accretion rates is significant early on in NSR growth (younger RNe systems) and again late on in the NSR evolution, from older RNe systems such as the RRNe. Furthermore, the brightest NSRs are the systems containing near-Chandrasekhar mass WDs. However, for systems with lower accretion rates, in which the WD is eroding, the H\(\alpha\) emission at the latter stages of evolution is orders of magnitude fainter than observed in high accretion systems.

## 6 Comparing simulations and observations

### Run 1 versus the 12a nova super-remnant: dynamics

To determine how well these simulations recreate properties of the only known NSR, we compare them to observations of the 12a NSR. For this, we will consider the simulated NSR grown from a nova with parameters that most resemble 12a. The 12a mass accretion rate derived from observations is \((6-14)\times 10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\); the closest accretion rate we were able to consider, \(10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\), is used within Runs 1-7. The 12a \(P_{\rm rec}=1\) yr; therefore we compare with simulations at this recurrence period (\(\sim\)99.54% through the simulations). At this point, the simulated WD mass is \(\sim\)1.396 M\({}_{\odot}\). The most immediate difference we see between observations and the simulations is the NSR radial size and the shell thickness. Within the reference simulation (Run 1; \(n=1\)), the NSR extends to \(\sim\)71.3 pc compared to the observed 67 pc (DHO19). Furthermore, DHO19 assumed that 12a is located within a high density environment, which leads to a smaller NSR, more closely resembling the Run 7 NSR (\(n=100\)).
The shell thickness of the Run 1 NSR is \(\sim\)1%, dramatically smaller than the 22% derived from observations of the inner and outer edges of the 12a NSR (DHO19). As with the first simulation (Run 0) of a NSR, the general shell structure of the NSR in Runs 1-7 is reminiscent of the observed shell. They all have a very low density central cavity (not apparent in observations) with freely expanding high velocity ejecta leading up to a very hot pile-up region. Spectroscopy of an inner 'knot' in the 12a NSR reveals strong [O iii] emission, indicative of higher temperatures closer to the 12a system (DHO19). In the 12a observations, we see evidence for a high density shell sweeping up the surrounding ISM, which is replicated in Runs 1-7. The lack of [O iii] emission in the 12a shell demonstrates that the shell has cooled below the ionisation temperature of O\({}^{+}\) (DHO19). We can conclude that the simulations that most resemble the 12a NSR, in terms of accretion rate and ISM density (Runs 1-7), can replicate the radial size of the NSR that is observed, but not its shell thickness. As a result, we can only conclude that there must be other contributing factors in the evolution and shaping (or geometry) of these structures that we have not yet considered. In particular, we wish to explore the impact of early helium flashes as well as a non-fixed accretion rate on NSR evolution in future work. Additionally, the simulations presented in this work are one-dimensional and so are not susceptible to Rayleigh-Taylor or Richtmyer-Meshkov instabilities. This additional physics is likely to influence the dynamics of the growing shell, through, for example, shell fragmentation as seen in Toraszkar et al. (2013). But importantly, our models only simulate the dynamically grown structure, and associated emission, of a NSR; they do not (yet) consider additional effects that photoionisation may have on any observed NSR (see Section 6.3).

### Run 1 versus the 12a nova super-remnant: emission

We again explore the epoch of Run 1 that coincides with \(P_{\rm rec}=1\) yr (after \(3.04\times 10^{7}\) yr) to predict the X-ray luminosity and H\(\alpha\) flux, as in Section 5, to directly compare to the emission from 12a's NSR.

Figure 14: Run 1, 8 and 11 H\(\alpha\) flux evolution from a NSR at the distance of M 31 with respect to elapsed time. The top abscissa indicates the associated recurrence period _only_ for Run 1. The nova systems within Run 8 and Run 11 have very different recurrence periods after the same cumulative time. Individual eruptions are responsible for the 'spikes' in each run.

Figure 13: Left: Reproduced from DHO19 (see their Figure 5) showing the evolution of the Run 0 synthetic X-ray luminosity. The soft (0.3–1 keV; blue), hard (1–10 keV; red) and total X-ray luminosity (0.3–10 keV; black) are shown alongside the hardness ratio (hard/soft; green). The horizontal dashed line indicates the 3\(\sigma\) upper limit derived from extensive and deep XMM-Newton observations (see DHO19, for more details). Right: Evolution of the Run 1 synthetic X-ray (0.3–50 keV) luminosity with respect to elapsed time (bottom abscissa) and recurrence period (top abscissa). The soft (0.3–1 keV; blue), hard (1–10 keV; red), and harder X-rays (10–50 keV; green) are shown alongside the total X-ray luminosity (0.3–50 keV; black).
Recurrence periods for the RNe 12a, U Scorpii and RS Ophiuchi are shown by vertical grey lines, and the inset zooms in on the X-ray luminosity from \(1.8\times 10^{7}\) years to the end of the evolution.

#### 6.2.1 Emission measure at one year recurrence period

We follow the procedures in Section 5.1 to compute the ionisation fraction (\(f\)) and emission measure (EM) for the NSR at \(3.04\times 10^{7}\) yr (see Figure 15). Here, the entire NSR, up to the inner edge of the shell, is fully ionised (\(f=1\)). The ionisation decreases dramatically, to negligible values, within the shell. This fully ionised state within the cavity (up to \(\sim\)10 pc) can be attributed to the ejecta interaction with the RGW and subsequent free expansion. Within the pile-up region (between \(\sim\)10 - 70.2 pc), gas is continuously impacted by incoming eruptions and shocks, resulting in collisional ionisation and, consequently, \(f=1\). Shocks are also present at the inner edge of the NSR shell (\(\sim\)70.2 pc) as gas flows through the pile-up region into the swept-up shell. However, further into the shell, toward the outer edge (\(\sim\)71 pc), the gas is dynamically shielded from incoming shocks and does not experience a high level of ionisation.

#### 6.2.2 Recombination time at one year recurrence period

In Section 5.2, we computed minimum recombination times throughout the NSR evolution by considering the recombination time for a hypothetical fully ionised NSR. For the epoch of this simulation where \(P_{\rm rec}=1\) yr, we also compute the recombination time for the NSR given the \(f\) predicted by the dynamic growth. The recombination times for a NSR dominated by Solar material are illustrated in Figure 16 with the red line. Recombination times within the cavity (up to \(\sim\)10 pc) are extremely long, owing to the extremely low density and continuous ejecta-RGW shocks. Within the pile-up region (\(\sim\)10 - 70.2 pc) the continual shock-heating from colliding ejecta drives the recombination time high. At the inner edge of the NSR shell, where the gas density dramatically increases, we see the recombination time drop to \(\sim\)\(2\times 10^{5}\) yr. Beyond the inner edge (at the front end of the shell), cooler neutral gas forces the recombination time to increase substantially. When considering an already fully-ionised NSR, we still see extremely long recombination times within the cavity and pile-up regions. However, we do see a significant difference within the NSR shell. As before, the recombination time drops dramatically at the inner edge, yet now we see \(t_{\rm recomb}\)\(\sim\)10 yr at the inner edge, rising to \(\sim\)10\({}^{4}\) yrs at the outer edge. As a result of the high recombination times within the cavity and pile-up regions of the NSR, and recombination times in the shell on a par with the travel time for nova ejecta to cross the NSR (\(\sim\)3.4\(\times\)10\({}^{4}\) yrs for ejecta travelling at \(\sim\)2000 km s\({}^{-1}\)), the NSR shell may exhibit emission induced by photoionisation from the nova eruptions. Furthermore, if the ISM density is low enough (see Section 6.3), then ionising radiation from the central source might traverse the (fully collisionally ionised) inner regions of the NSR with the ability to potentially create an ionised region beyond (or within the shell of) the dynamically grown NSR.

#### 6.2.3 X-ray luminosity at one year recurrence period

The output from the Run 1 NSR at the epoch coinciding with \(P_{\rm rec}=1\) yr was processed and passed to XSPEC.
The X-ray luminosity as a function of radius was calculated using the APEC model (without the incorporation of absorption) and is shown in Figure 17. At the centre of the remnant, there is a high X-ray luminosity from the underlying system due to the nova eruptions; however, this is then followed by negligible emission from the cavity as the ejecta are in free expansion. Beyond this cavity (which extends to \(\sim\)10 pc), the ejecta begin to impact the higher density pile-up region, leading to a significant jump in the X-ray luminosity (\(\sim\)1 \(\times\) 10\({}^{22}\) erg s\({}^{-1}\)). As more and more ejecta contribute toward shock-heating the pile-up region further from the centre, we see a continuous increase in X-ray emission up to the inner edge of the NSR shell at \(\sim\)70.2 pc, where L\({}_{\rm X-ray}\simeq 4\,\times 10^{27}\) erg s\({}^{-1}\). The total predicted X-ray luminosity from the NSR at this epoch is \(\sim\)\(1\times 10^{31}\) erg s\({}^{-1}\) (see Figure 13). This is consistent with the unabsorbed luminosity upper limit of the NSR associated with 12a derived from archival _XMM-Newton_ observations (\(<\)9 \(\times 10^{34}\) erg s\({}^{-1}\); DHO19).

Figure 15: Run 1 NSR emission measure (black) and density (grey) distribution for \(P_{\rm rec}=1\) year. Inset focuses on the NSR shell emission peak.

Figure 16: Run 1 recombination timescale distribution for simulated Solar material (red) and completely ionised material (green) at \(P_{\rm rec}=1\) year. The black line is the corresponding density distribution. The left and right insets zoom in on the NSR shell for the simulated Solar material and the completely ionised case, respectively.

Figure 17: Run 1 synthetic X-ray luminosity (without absorption; black) for \(P_{\rm rec}=1\) year, along with the NSR density distribution (grey). The inset zooms in on the NSR shell to illustrate the peak X-ray luminosity.

#### 6.2.4 H\(\alpha\) flux at one year recurrence period

We applied the technique set out in Section 5.4 to Run 1 at the epoch corresponding to \(P_{\rm rec}=1\) yr to compare the predicted H\(\alpha\) emission to that from the 12a NSR (see Figure 18). Here, we see that there is H\(\alpha\) emission from the cavity and increasingly from the ejecta pile-up region, yet this always remains below \(\sim\)\(10^{-27}\) erg s\({}^{-1}\) cm\({}^{-2}\). However, as is the case for X-ray emission, the majority of the H\(\alpha\) flux originates at the inner edge of the NSR shell. Here, the density of hydrogen is extremely high compared to the rest of the NSR, and so the large amount of collisional excitation from the impacting ejecta results in high levels of recombination and H\(\alpha\) emission of \(\sim\)\(3\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\), many orders of magnitude higher than anywhere else across the NSR. The total predicted H\(\alpha\) luminosity from the NSR at this epoch is L\({}_{\rm H\alpha}\simeq 3.6\,\times 10^{32}\) erg s\({}^{-1}\).

### A photoionisation remnant?

As stated in the previous section, the dynamic simulations in this work with parameters most similar to 12a replicate the broad observed structure, but not the shell thickness, or potentially the observability, of the 12a NSR. But so far, we have only considered the growth and emission of the dynamically formed NSR. However, a proportion of the NSR will be exposed to photoionisation directly from the central system, the accretion disk, and the eruptions, as well as any shock emission.
As such, we consider here the formation and radial size of the photoionisation remnant, and any dependence upon ISM density. We will assume that material inwards from the NSR shell is fully ionised throughout the evolution, as discussed in Section 5.1 and shown in Figure 11. We show in Figure 19 the dynamical remnant inner (purple) and outer (green) radii for Runs 1-7 with respect to ISM density at the epoch when each of the runs has a recurrence period of one year, assuming a mass accretion rate of \(10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\). We then interpolated these points with a power-law fit. To estimate the size of any photoionisation region generated by the nova eruptions, we can perform a Stromgren-like analysis, as \(t_{\rm recomb}\gg P_{\rm rec}\) within the NSR shell and the ISM. However, because all material inwards of the NSR shell is always fully shock ionised, instead of a Stromgren sphere, we will have a Stromgren shell. Consequently, the photoionisation region can be estimated thus: \[r_{\rm out}^{3}=\frac{3S_{\star}}{4\pi n(r)^{2}\beta(r,T)}+r_{\rm in}^{3}, \tag{2}\] where \(r_{\rm out}\) is the outer radius of the photoionised region, \(S_{\star}\) is the ionising luminosity from the source, \(n(r)\) is the number density of the medium, \(\beta(r,T)\) is the total recombination rate for Case B recombination (see, e.g., Dyson & Williams, 1980), and \(r_{\rm in}\) is assumed to be the outer edge of the fully ionised region of the NSR. This \(r_{\rm in}\) was determined to be the first point from the centre of the NSR at which the ionisation fraction (\(f\)) falls below 100%. We will take the ionising luminosity from the nova eruptions (or the SSS emission) as the Eddington luminosity of a 1.396 M\({}_{\odot}\) WD (the mass of the WD in our models at the time when \(P_{\rm rec}=1\) yr) minus the observed luminosity of the 12a SSS, such that L\({}_{\rm Edd}-\) L\({}_{\rm obs}\approx 41,400\) L\({}_{\odot}\) for two weeks (the SSS timescale of each eruption; Henze et al., 2018), and assume a spectrum of 15 eV photons, giving a time averaged \(S_{\star,\rm SSS}=6.6\times 10^{48}\) photons s\({}^{-1}\). Substituting this into Equation 2, along with varying values for \(n\) (ISM density), provides us with the width of the ionisation region for Runs 1-7 (see the orange points in Figure 19).
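A minimal sketch of this Stromgren-shell estimate follows, assuming a constant Case B recombination coefficient at \(\sim\)\(10^{4}\) K and a uniform medium; the photon-rate arithmetic reproduces the value quoted above, while the example density and inner radius are illustrative values taken from the Run 1 shell.

```python
import numpy as np

L_SUN = 3.828e33   # erg s^-1
EV = 1.602e-12     # erg
BETA = 2.59e-13    # Case B recombination coefficient at ~1e4 K, cm^3 s^-1
PC = 3.086e18      # cm

def photon_rate(lum_lsun, e_photon_ev=15.0):
    """Ionising photon rate for a given luminosity and photon energy."""
    return lum_lsun * L_SUN / (e_photon_ev * EV)

def r_out_pc(s_star, n, r_in_pc):
    """Outer radius of the Stromgren shell, Equation 2."""
    r_in = r_in_pc * PC
    r3 = 3.0 * s_star / (4.0 * np.pi * n**2 * BETA) + r_in**3
    return r3 ** (1.0 / 3.0) / PC

s_sss = photon_rate(41_400.0)      # reproduces ~6.6e48 photons/s
print(f"S_star,SSS = {s_sss:.2e} photons/s")
# Within the dense shell (n ~ 100) the photoionised layer barely advances
# beyond r_in, i.e. it stays contained within the remnant shell.
print(f"r_out = {r_out_pc(s_sss, n=100.0, r_in_pc=70.2):.4f} pc")
```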
We also calculated a similar ionisation region but with the inclusion of the disk luminosity (5910 L\({}_{\odot}\)) such that \(S_{\star,\rm disk}=9.4\times 10^{47}\) photons s\({}^{-1}\) (with this emission present at all times), alongside the SSS emission (see the yellow points in Figure 19). Again, we assumed a \(1.396\,\mathrm{M_{\odot}}\) WD, \(\dot{M}=10^{-7}\,\mathrm{M_{\odot}}\,\mathrm{yr^{-1}}\), and a spectrum of \(15\,\mathrm{eV}\) photons to estimate the disk luminosity using \(\mathrm{L_{disk}}=(G\,\dot{M}\mathrm{M_{WD}})/\mathrm{R_{WD}}\). We also considered the contribution of ionising photons from shocks, by computing the shock emission within XSPEC when \(P_{\mathrm{rec}}=1\,\mathrm{yr}\), yielding \(S_{\star,\mathrm{shock}}=2.9\times 10^{41}\) photons s\({}^{-1}\). But this is many orders of magnitude less than \(S_{\star,\mathrm{SSS}}\) and \(S_{\star,\mathrm{disk}}\) and so was not considered further. With the two luminosities we do consider, the widths of the ionisation regions (SSS or SSS+disk) produced can be found and are shown in Figure 19 with orange (SSS) and yellow (SSS+disk) points. For all of the NSRs grown in Runs 1-7, the emission from the nova system (eruptions and disk) cannot ionise the NSR shell, and so the ionisation regions are fully contained within the remnant shell. This suggests that observations of NSRs should exhibit emission at the inner edge of the NSR shell.

Figure 19: Top: Low ISM density interpolation of Run 1–7 photoionisation regions at \(P_{\rm rec}=1\) yr. The purple and green lines indicate the interpolated fits to the inner and outer radii (purple and green points) of the NSR dynamical shells, and the horizontal blue line is the observed outer radius of the 12a NSR emission. The outer edge of any photoionised region created by the nova emission or the combined nova and accretion emission is indicated by the orange and yellow points. The interpolations fitted to these points are shown with orange and yellow lines, respectively. The dashed vertical line indicates the ISM density at which the extrapolated outer radius fitting would equal the outer radius of the observed NSR around M 31N 2008-12a.

Figure 18: As Figure 17 but for the simulated H\(\alpha\) flux.

To test this, we created a synthetic sky image (with the inclusion of seeing) to directly compare with observations of the NSR surrounding M 31N 2008-12a. Utilising the outer edge fitting presented in Figure 19, we determined the ISM density required to grow a NSR with the same radial extent as the observed NSR around M 31N 2008-12a (67 pc) to be \(n=1.278\). With this, we ran another simulation (Run 22) with the same system parameters as Run 1 but with an ISM density of \(n=1.278\). The outer edge of the NSR grown within Run 22 extends to \(\sim\)67.4 pc (as expected) and exhibits a shell thickness of 1.1%. The boundary between the cavity and ejecta pile-up region is located at \(\sim\)9 pc and the inner edge of the NSR shell is at \(\sim\)66.6 pc, with a density of \(2.6\times 10^{-22}\) g cm\({}^{-3}\) (\(n\simeq 122\)). Using the same technique as described in Sections 5.4 and 6.2.4, we took the Run 22 NSR at the epoch when \(P_{\mathrm{rec}}=1\,\mathrm{yr}\) and predicted its H\(\alpha\) emission profile. We then generated a synthetic sky image of H\(\alpha\) emission for Run 22 by integrating this H\(\alpha\) emission radial profile over the volume of a sphere, collapsing this sphere along one axis into a two-dimensional image, before convolving this with a Gaussian with a width of 1 arcsecond to represent the typical seeing at the Liverpool Telescope (LT; Steele et al., 2004). A wedge of this spherical NSR is shown in Figure 20.
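An illustrative sketch of this synthetic-image step is given below: a 1-D emissivity profile is mapped onto a spherically symmetric volume, integrated along one axis onto the sky plane, and blurred with a Gaussian to mimic 1 arcsecond seeing. The profile, grid resolution, and pixel scale are toy assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def project_profile(radii, emissivity, npix=201):
    """Collapse a spherical emissivity(r) profile into a 2-D sky image."""
    half = radii.max()
    ax = np.linspace(-half, half, npix)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij", sparse=True)
    r = np.sqrt(x**2 + y**2 + z**2)
    cube = np.interp(r, radii, emissivity, right=0.0)
    return cube.sum(axis=2)          # integrate along the line of sight

r = np.linspace(0.0, 70.0, 700)      # pc
emis = np.where((r > 66.6) & (r < 67.4), 1.0, 1e-6)   # thin bright shell
image = project_profile(r, emis)
seeing_pix = 4.0                     # 1 arcsec expressed in pixels (toy value)
image = gaussian_filter(image, sigma=seeing_pix / 2.355)  # FWHM -> sigma
print(image.shape, image.max())
```

The projected thin shell naturally produces a bright limb-brightened ring, the kind of structure discussed next.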
As can be seen in Figure 20, the structure does resemble the structure of the observed remnant around M 31N 2008-12a, as seen from the ground with the LT (see Figure 8 in Darnley et al., 2015). Specifically, we can see a negligible measure near the origin of the NSR (light grey) and a very low measure at the transitionary ejecta pile-up region (same light grey section), mimicking the LT observations. Then, at the inner edge of the shell, we see a dramatic increase in the emission measure (dark grey band) as the ejecta that traversed the pile-up region collide with the extremely high density remnant shell. There is, however, a geometrical difference between the full synthetic sky image, which uses a spherically symmetric model, and the observed remnant around 12a, which is elliptical, likely arising from an inclined torus or barrel-like structure. As well as replicating the 12a NSR on the sky, this is the type of structure we would also expect to observe around other novae hosting NSRs, using ground-based facilities. Based on our full suite of simulations of NSRs, we find that detectable remnants can form around novae with very different system parameters, and so NSRs should actively be searched for around all types of novae, not just those with very short recurrence periods.

## 7 Conclusions

We have presented a suite of hydrodynamical simulations of recurrent nova eruptions to determine how system parameters such as accretion rate, ISM density, WD temperature and initial WD mass affect the growth of a nova super-remnant. We follow the evolution of the WD from its formation mass up to either the Chandrasekhar mass (for high accretion rate systems) or the mass at a temporal upper limit (for lower accretion rate systems), and evolve the eruption properties as the mass changes. We utilised these simulations to predict the observational signatures associated with NSRs, such as X-ray and H\(\alpha\) emission, before comparing our simulations with the NSR observed around 12a, including the generation of a synthetic sky image. Here, we summarise the key results:

1. Dynamic nova super-remnants (NSR) should be found around all RNe, including those with long recurrence periods and lengthy evolutionary times, as the nova eruptions naturally drive their creation.

2. Unlike the DHO19 study, we find that radiative cooling plays a key part in the formation of dynamic NSRs, and significantly alters the density and thickness of the outer dynamic shell.

3. The creation of a dynamic NSR occurs whether the WD mass is increasing or decreasing, indicating that NSRs also exist around old novae with low mass WDs.

4. The evolving eruptions create NSRs many parsecs in radius comprising a very low density cavity, bordered by a very hot pile-up region, and surrounded by a cool, thin, high density shell.

5. A high density ISM restricts the NSR size, as does a high accretion rate; these parameters have the largest effect on NSR size.

6. The temperature of the WD and initial WD mass may have much less impact on NSR size; however, NSRs grown from ONe WDs (\(>\)1.1 \(\mathrm{M_{\odot}}\)) are significantly reduced.

7. The simulated NSRs can replicate the size of the 12a NSR and can reproduce the associated structure of H\(\alpha\) emission.

8. Only NSRs grown from systems with high accretion rates will currently be observable.

NSR structures may have been overlooked within the Milky Way as they will extend across large regions of sky, far beyond their central RN. Ultimately though, the discovery of a second NSR surrounding another RN would provide strong evidence for an association between RNe and NSRs. NSRs also offer an opportunity to find unknown/unconfirmed RNe, and have the potential to point to 'extinct' novae where the donor has been exhausted (Darnley, 2021). Additionally, with the WD in a proportion of these systems being close to \(\mathrm{M_{Ch}}\), with the real possibility of exploding as a SN Ia, these phenomena can also provide "a clear and _persistent signpost_ to the progenitor-type of that SN Ia" (Darnley, 2021), and provide a mechanism for the removal of hydrogen from the immediate vicinity of a single-degenerate SN Ia (removing \(\sim\)10\({}^{6}\) M\({}_{\odot}\) of gas tens of parsecs from the central system; Harvey et al., 2016; Darnley, 2021).
Figure 20: Synthetic image (at 1 arcsecond seeing) showing a portion of the predicted H\(\alpha\) emission from Run 22 at the epoch when \(P_{\mathrm{rec}}=1\,\mathrm{yr}\). The grey scale shows linear changes in H\(\alpha\) flux. ## Acknowledgements The authors gratefully thank our reviewer, Michael Shara, for his insightful suggestions that helped to improve our study and strengthened our conclusions. MWH-K acknowledges a PDRA position funded by the UK Science and Technology Facilities Council (STFC). MWH-K, MJD, EJH and PAJ receive funding from STFC grant number ST/S505559/1. This work made use of the high performance computing facilities at Liverpool John Moores University, partly funded by LJMU's Faculty of Engineering and Technology and by the Royal Society. ## Data Availability The data in this study can be shared on reasonable request to the corresponding author. This work was conducted with the Morpheus (Vaytet et al., 2007) program and analysed using the Python libraries: Numpy (Harris et al., 2020) and Matplotlib (Hunter, 2007).
2305.04414
Untrained Neural Network based Bayesian Detector for OTFS Modulation Systems
The orthogonal time frequency space (OTFS) symbol detector design for high mobility communication scenarios has received considerable attention lately. Current state-of-the-art OTFS detectors can broadly be divided into two categories: iterative detectors and training-based deep neural network (DNN) detectors. Many practical iterative detectors rely on a minimum-mean-square-error (MMSE) denoiser to obtain the initial symbol estimates. However, their computational complexity increases exponentially with the number of detected symbols. Training-based DNN detectors typically suffer from dependency on the availability of large computation resources and the fidelity of synthetic datasets for the training phase, which are both costly. In this paper, we propose an untrained DNN based on the deep image prior (DIP) and decoder architecture, referred to as D-DIP, that replaces the MMSE denoiser in the iterative detector. DIP is a type of DNN that requires no training, which makes it beneficial in OTFS detector design. Then we propose to combine the D-DIP denoiser with the Bayesian parallel interference cancellation (BPIC) detector to perform iterative symbol detection, referred to as D-DIP-BPIC. Our simulation results show that the symbol error rate (SER) performance of the proposed D-DIP-BPIC detector outperforms practical state-of-the-art detectors by 0.5 dB and retains low computational complexity.
Hao Chang, Alva Kosasih, Wibowo Hardjawana, Xinwei Qu, Branka Vucetic
2023-05-08T01:47:02Z
http://arxiv.org/abs/2305.04414v1
# Untrained Neural Network based Bayesian Detector for OTFS Modulation Systems ###### Abstract The orthogonal time frequency space (OTFS) symbol detector design for high mobility communication scenarios has received considerable attention lately. Current state-of-the-art OTFS detectors can broadly be divided into two categories: iterative detectors and training-based deep neural network (DNN) detectors. Many practical iterative detectors rely on a minimum-mean-square-error (MMSE) denoiser to obtain the initial symbol estimates. However, their computational complexity increases exponentially with the number of detected symbols. Training-based DNN detectors typically suffer from dependency on the availability of large computation resources and the fidelity of synthetic datasets for the training phase, which are both costly. In this paper, we propose an untrained DNN based on the deep image prior (DIP) and decoder architecture, referred to as D-DIP, that replaces the MMSE denoiser in the iterative detector. DIP is a type of DNN that requires no training, which makes it beneficial in OTFS detector design. Then we propose to combine the D-DIP denoiser with the Bayesian parallel interference cancellation (BPIC) detector to perform iterative symbol detection, referred to as D-DIP-BPIC. Our simulation results show that the symbol error rate (SER) performance of the proposed D-DIP-BPIC detector outperforms practical state-of-the-art detectors by 0.5 dB and retains low computational complexity. OTFS, symbol detection, deep image prior, Bayesian parallel interference cancellation, mobile cellular networks. ## I Introduction Future mobile systems will support various high-mobility scenarios (e.g., unmanned aerial vehicles and autonomous cars) with strict mobility requirements [1]. However, current orthogonal frequency division multiplexing (OFDM) [2] is not suitable for these scenarios due to the high inter-carrier interference (ICI) caused by a large number of high-mobility moving reflectors. The orthogonal time frequency space (OTFS) modulation was proposed in [1] to address this issue because it allows the tracking of ICI during the symbol estimation process. Multiple OTFS symbol detectors [3, 4, 5, 6, 7, 8, 9, 10] have been investigated in the current literature. Several iterative detectors have been proposed in OTFS systems, e.g., message passing (MP) [3], approximate message passing (AMP) [4], Bayesian parallel interference cancellation (BPIC) that uses a minimum-mean-square-error (MMSE) denoiser [5], unitary approximate message passing (UAMP) [6], and expectation propagation (EP) [7] detectors. These detectors provide a significant symbol error rate (SER) performance gain compared to that of the classical MMSE detector [8]. Unfortunately, when a large number of moving reflectors exist, MP and AMP suffer from performance degradation due to high ICI [5]. The UAMP detector addresses this issue by performing a singular value decomposition (SVD) that exploits the structure of the OTFS channel prior to executing AMP. Performance similar to that of the UAMP detector, in terms of reliability and complexity, has also been achieved by our proposed iterative MMSE-BPIC detector in [5]. We combined an MMSE denoiser, the Bayesian concept, and parallel interference cancellation (PIC) to perform iterative symbol detection. Unfortunately, its performance is still suboptimal in comparison with the EP OTFS detector [7].
EP uses the Bayesian concept and multivariate Gaussian distributions to approximate the mean and variance of posterior detected symbols iteratively from the observed received signals. The superior performance of the EP detector comes at the cost of high computational complexity in performing iterative matrix inversion operations. In addition to those iterative detectors, deep neural network (DNN) based approaches are widely used in symbol detector design. They can be divided into two categories: 1) training-based DNN and 2) untrained DNN. The training-based DNN requires a large dataset to train the symbol detector prior to deployment. Recent examples in the training-based DNN category are a 2-D convolutional neural network (CNN) based OTFS detector in [9] and also our recently proposed BPICNet OTFS detector in [10], which integrates the MMSE denoiser, BPIC and a DNN, whereby the modified BPIC parameters are trained by using the DNN. The training-based DNN approach has two major disadvantages: 1) it depends on the availability of large computational resources, which entails substantial energy and CO2 consumption and a high cost for the training phase [11]; 2) synthetic training data, generated artificially because of the high cost of acquiring real datasets, may have limited fidelity to the real environment [12]. For example, a high-fidelity training dataset implies that the distribution functions for all possible velocities of mobile reflectors are known beforehand, which is impossible. The second category, untrained DNN, avoids the need for training datasets. Deep image prior (DIP), proposed in [13], has been widely used in image restoration as an untrained DNN approach. The encoder-decoder architecture used in the original DIP shows excellent performance in image restoration tasks, but its use of up to millions of trainable parameters results in high latency, and thus it still cannot be used for an OTFS detector, which requires close to real-time processing. Recently, the authors in [14] showed that the decoder-only DIP offers performance similar to an encoder-decoder DIP architecture when applied to magnetic resonance imaging (MRI). The complexity of the decoder-only DIP is significantly lower than the original encoder-decoder DIP, thus enhancing its potential use as a real-time OTFS detector. To date, no study has been conducted on untrained DNN based OTFS detectors. In this paper, we propose to use an untrained DNN with BPIC to perform iterative symbol detection. Specifically, we use DIP with a decoder-only architecture, referred to as D-DIP, to act as a denoiser and to provide the initial symbol estimates for the BPIC detector. We choose BPIC here in order to keep the computational complexity of the OTFS receiver low. We first describe a single-input single-output (SISO) OTFS system model consisting of the transmitter, channel and receiver. We then provide a review of the MMSE-BPIC detector in [5, 15] that uses the MMSE denoiser to obtain the initial symbol estimates. Instead of using MMSE, we propose a high-performance D-DIP denoiser to calculate the initial symbol estimates fed to the BPIC. We then explain our proposed D-DIP in detail and also provide computational complexity and performance comparisons to other schemes. Simulation results indicate an average SER gain of approximately 0.5 dB over other practical schemes in the literature. The main contribution of this paper is that it is the first to propose a combination of a decoder-only DIP denoiser and the BPIC OTFS detector.
The proposed denoiser 1) provides better initial symbol estimates for the BPIC detector and 2) has lower computational complexity than the MMSE denoiser. This gives the proposed scheme the closest SER performance to the EP scheme among the compared detectors, achieved with much lower computational complexity (approximately 15 times less complex than EP). **Notations**: \(a\), \(\mathbf{a}\) and \(\mathbf{A}\) denote scalar, vector, and matrix respectively. \(\mathbb{C}^{M\times N}\) denotes the set of \(M\times N\) dimensional complex matrices. We use \(\mathbf{I}_{N}\), \(\mathbf{F}_{N}\), and \(\mathbf{F}_{N}^{\mathbf{H}}\) to represent an \(N\)-dimensional identity matrix, \(N\)-points discrete Fourier Transform (DFT) matrix, and \(N\)-points inverse discrete Fourier transform (IDFT) matrix. \((\cdot)^{T}\) represents the transpose operation. We define \(\mathbf{a}=\mathsf{vec}(\mathbf{A})\) as the column-wise vectorization of matrix \(\mathbf{A}\) and \(\mathbf{A}=\mathsf{vec}^{-1}(\mathbf{a})\) denotes the vector elements folded back into a matrix. The Kronecker product is denoted as \(\otimes\). \(\lfloor\frac{a}{b}\rfloor\) represents the floor operation, and \([\cdot]_{M}\) represents the mod-\(M\) operation. The Euclidean distance of vector \(\mathbf{x}\) is denoted as \(\|\mathbf{x}\|\). We use \(\mathcal{N}(\mathbf{x}:\boldsymbol{\mu},\boldsymbol{\Sigma})\) to express the multivariate Gaussian distribution of a vector \(\mathbf{x}\) where \(\boldsymbol{\mu}\) is the mean and \(\boldsymbol{\Sigma}\) is the covariance matrix. ## II OTFS System Model We consider an OTFS system, as illustrated in Fig. 1. In the following, we explain the details of the OTFS transmitter, channel and receiver. ### _OTFS Transmitter_ On the transmitter side, \(MN\) information symbols \(\mathbf{X}_{\mathrm{DD}}\in\mathbb{C}^{M\times N}\) from a modulation alphabet \(\mathbb{A}=\{a_{1},\cdots,a_{Q}\}\) of size \(Q\) are allocated to an \(M\times N\) grid in the delay-Doppler (DD) domain, where \(M\) and \(N\) represent the number of subcarriers and time slots used, respectively. As illustrated in Fig. 1, the DD domain symbols are transformed into the time-frequency (TF) domain by using the inverse symplectic finite Fourier transform (ISFFT) [1]. Here, the TF domain is discretized into an \(M\) by \(N\) grid with uniform intervals \(\Delta f\) (Hz) and \(T_{s}=1/\Delta f\) (seconds), respectively. Therefore, the sampling time is \(T_{s}/M\). The TF domain samples \(\mathbf{X}_{\mathrm{TF}}\in\mathbb{C}^{M\times N}\) form an OTFS frame, which occupies a bandwidth of \(M\Delta f\) and a duration of \(NT_{s}\), and is given as \[\mathbf{X}_{\mathrm{TF}}=\mathbf{F}_{M}\mathbf{X}_{\mathrm{DD}}\mathbf{F}_{N}^ {\mathbf{H}}, \tag{1}\] where \(\mathbf{F}_{M}\in\mathbb{C}^{M\times M}\) and \(\mathbf{F}_{N}^{\mathbf{H}}\in\mathbb{C}^{N\times N}\) are the \(M\)-points DFT and \(N\)-points IDFT matrices, and the \((p,q)\)-th entries of them are \((\frac{1}{\sqrt{M}}e^{-j2\pi pq/M})_{p,q=0,\cdots,M-1}\) and \((\frac{1}{\sqrt{N}}e^{j2\pi pq/N})_{p,q=0,\cdots,N-1}\), respectively. The \((m,n)\)-th entry \(X_{\mathrm{TF}}[m,n]\) of \(\mathbf{X}_{\mathrm{TF}}\) is written as \[X_{\mathrm{TF}}[m,n]=\frac{1}{\sqrt{MN}}\sum_{k=0}^{N-1}\sum_{l=0}^{M-1}X_{ \mathrm{DD}}[k,l]e^{j2\pi(\frac{nk}{N}-\frac{ml}{M})}, \tag{2}\] where \(X_{\mathrm{DD}}[k,l]\) represents the \((k,l)\)-th entry of \(\mathbf{X}_{\mathrm{DD}}\) for \(k=0,\cdots,N-1\), \(l=0,\cdots,M-1\).
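As a concrete illustration of Eq. (1), the following is a short NumPy sketch of ours (not the authors' code); the frame size and the random 4-QAM symbols are arbitrary choices for the example.

```python
# Map DD-domain symbols to the TF domain via the ISFFT, Eq. (1).
import numpy as np

def dft_matrix(n):
    p, q = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    return np.exp(-2j*np.pi*p*q/n)/np.sqrt(n)   # unitary n-point DFT

M, N = 12, 7                                    # subcarriers, time slots
F_M, F_N = dft_matrix(M), dft_matrix(N)

rng = np.random.default_rng(0)
# Random 4-QAM symbols on the M x N delay-Doppler grid.
X_dd = (rng.choice([-1, 1], (M, N)) + 1j*rng.choice([-1, 1], (M, N)))/np.sqrt(2)

X_tf = F_M @ X_dd @ F_N.conj().T                # Eq. (1): ISFFT
```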
The (discrete) Heisenberg transform [1] is then applied to generate the time-domain transmitted signal. By using (1) and the Kronecker product rule (Footnote 1: A matrix multiplication is often expressed by using vectorization with the Kronecker product, that is, \(\mathsf{vec}(\mathbf{A}\mathbf{B}\mathbf{C})=(\mathbf{C}^{T}\otimes\mathbf{A})\mathsf{vec}(\mathbf{B})\)), the vector form of the transmitted signal can be written as \[\mathbf{s}=\mathsf{vec}(\mathbf{G}_{\mathrm{tx}}\mathbf{F}_{M}^{\mathbf{H}} \mathbf{X}_{\mathrm{TF}})=(\mathbf{F}_{N}^{\mathbf{H}}\otimes\mathbf{G}_{ \mathrm{tx}})\mathbf{x}_{\mathrm{DD}}, \tag{3}\] where \(\mathbf{G}_{\mathrm{tx}}\) is the pulse-shaping waveform, and we consider the rectangular waveform with a duration of \(T_{s}\) that leads to \(\mathbf{G}_{\mathrm{tx}}=\mathbf{I}_{M}\)[16], \(\mathbf{x}_{\mathrm{DD}}=\mathsf{vec}(\mathbf{X}_{\mathrm{DD}})\), and \(\mathbf{x}_{\mathrm{DD}}=[x_{\mathrm{DD}}(0),\cdots,x_{\mathrm{DD}}(MN-1)]^{T}\). \(\mathbf{s}\in\mathbb{C}^{MN\times 1}\) is the vector form of the transmitted signal, \(\mathbf{s}=[s(0),\cdots,s(n),\cdots,s(MN-1)]^{T}\), \(n=0,\cdots,MN-1\), and \(s(n)\) can be written as \[s(n)=\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}e^{j2\pi[\frac{1}{M}]k/N}x_{\mathrm{DD} }([n]_{M}+kM). \tag{4}\] We insert a cyclic prefix (CP) at the beginning of each OTFS frame; the CP length equals the maximum delay index \(l_{max}\). Thus, the time duration after adding the CP is \(NT_{s}+N_{\mathrm{cp}}\frac{T_{s}}{M}\), where \(N_{\mathrm{cp}}=l_{max}\). After adding the CP, \(\mathbf{s}=[s(MN-N_{\mathrm{cp}}+1),s(MN-N_{\mathrm{cp}}+2),\cdots,s(MN-1),s(0 ),\cdots,s(n),\cdots,s(MN-1)]^{T}\), and \(\mathbf{s}\) is transmitted through a time-varying channel. ### _OTFS Wireless Channel_ The OTFS wireless channel is a time-varying multipath channel, represented by the impulse responses in the DD domain, \[h(\tau,v)=\sum_{i=1}^{P}h_{i}\delta(\tau-\tau_{i})\delta(v-v_{i}) \tag{5}\] where \(\delta(\cdot)\) is the Dirac delta function, \(h_{i}\sim\mathcal{N}(0,1/P)\) denotes the \(i\)-th path gain, and \(P\) is the total number of paths. Each of the paths represents a channel between a moving reflector/transmitter and a receiver with different delay \((\tau_{i})\) and/or Doppler \((v_{i})\) characteristics. The delay and Doppler shifts are given as \(\tau_{i}=l_{i}\frac{T_{s}}{M}\) and \(v_{i}=k_{i}\frac{\Delta f}{N}\), respectively. The ICI depends on the delay and Doppler of the channel as illustrated in [16]. Here, for every path, the randomly selected integers \(l_{i}\in[0,l_{max}]\) and \(k_{i}\in[-k_{max},k_{max}]\) denote the indices of the delay and Doppler shifts, where \(l_{max}\) and \(k_{max}\) are the indices of the maximum delay and maximum Doppler shifts among all channel paths. Note that the combination \((l_{i},k_{i})\) is different for every path. For our wireless channel, we assume \(l_{max}\leq M-1\) and \(k_{max}\leq\lfloor\frac{N}{2}\rfloor\), implying maximum channel delay and Doppler shifts of less than \(T_{s}\) seconds and \(\Delta f\) Hz, respectively. ### _OTFS Receiver_ At the receiver side, the time domain received signal \(r(t)\) is shown as [1] \[r(t)=\int\int h(\tau,v)s(t-\tau)e^{j2\pi v(t-\tau)}d\tau dv+w(t), \tag{6}\] where \(s(t)\) is the time-domain transmitted signal \(\mathbf{s}\), while \(h(\tau,v)\) is the DD domain channel shown in (5). The received signal \(r(t)\) is then sampled at \(t=\frac{n}{M\Delta f}\), where \(n=0,\cdots,MN-1\).
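Continuing the transmitter sketch above (reusing `M`, `N`, `F_N`, `X_dd`, `rng`), the following hedged snippet generates random channel taps per Eq. (5) and applies the per-path delay-and-Doppler action to the samples, using the discrete input-output relation written out in Eq. (7) just below; `P`, `l_max` and `k_max` are illustrative values.

```python
# Vectorized Heisenberg transform, Eq. (3), with G_tx = I_M.
x_dd = X_dd.reshape(M*N, order='F')             # column-wise vec(X_DD)
s = np.kron(F_N.conj().T, np.eye(M)) @ x_dd     # Eq. (3)

# Random DD channel taps: complex gains with variance 1/P, integer shifts.
P, l_max, k_max = 6, M - 1, 3
h = (rng.standard_normal(P) + 1j*rng.standard_normal(P))/np.sqrt(2*P)
l = rng.integers(0, l_max + 1, P)               # delay indices
k = rng.integers(-k_max, k_max + 1, P)          # Doppler indices

# Channel action of Eq. (7); the cyclic shift assumes the CP was removed.
n = np.arange(M*N)
r = np.zeros(M*N, dtype=complex)
for hi, li, ki in zip(h, l, k):
    r += hi*np.exp(2j*np.pi*ki*(n - li)/(M*N))*s[(n - li) % (M*N)]
```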
After discarding the CP, the discrete received signal \(r(n)\) is obtained from (5) and (6), written as \[r(n)=\sum_{i=1}^{P}h_{i}e^{j2\pi\frac{k_{i}(n-l_{i})}{MN}}s([n-l_{i}]_{MN})+w( n). \tag{7}\] We then write (7) in the vector form as \[\mathbf{r}=\mathbf{H}\mathbf{s}+\mathbf{w}, \tag{8}\] where \(\mathbf{w}\) is the complex independent and identically distributed (i.i.d.) white Gaussian noise that follows \(\mathcal{N}(\mathbf{0},\sigma_{\mathrm{c}}^{2}\mathbf{I})\), and \(\sigma_{\mathrm{c}}^{2}\) is the variance of the noise. \(\mathbf{H}=\sum_{i=1}^{P}h_{i}\mathbf{I}_{MN}(l_{i})\mathbf{\Delta}(k_{i})\), where \(\mathbf{I}_{MN}(l_{i})\) denotes a \(MN\times MN\) matrix obtained by circularly left shifting the columns of the identity matrix by \(l_{i}\). \(\mathbf{\Delta}\) is the \(MN\times MN\) Doppler shift diagonal matrix, \(\mathbf{\Delta}(k_{i})=\text{diag}\left[e^{\frac{j2\pi k_{i}(0)}{MN}},e^{ \frac{j2\pi k_{i}(1)}{MN}},\cdots,e^{\frac{j2\pi k_{i}(MN-1)}{MN}}\right]\), and \(\text{diag}(\cdot)\) denotes a diagonalization operation on a vector. Note that the matrices \(\mathbf{I}_{MN}(l_{i})\) and \(\mathbf{\Delta}(k_{i})\) model the delay and Doppler shifts in (5), respectively. As shown in Fig. 1, the TF domain received signal \(\mathbf{Y}_{\mathrm{TF}}\in\mathbb{C}^{M\times N}\) is obtained by applying the Wigner transform [16], shown as, \[\mathbf{Y}_{\mathrm{TF}}=\mathbf{F}_{M}\mathbf{G}_{\mathrm{rx}}\mathbf{R}, \tag{9}\] where \(\mathbf{R}=\mathsf{vec}^{-1}(\mathbf{r})\), \(\mathbf{G}_{\mathrm{rx}}\) is the rectangular waveform with a duration \(T_{s}\) in the receiver, and \(\mathbf{G}_{\mathrm{rx}}=\mathbf{I}_{M}\). Then the DD domain received signal \(\mathbf{Y}_{\mathrm{DD}}\in\mathbb{C}^{M\times N}\) is obtained by using the symplectic finite Fourier transform (SFFT), which is \[\mathbf{Y}_{\mathrm{DD}}=\mathbf{F}_{M}^{\mathbf{H}}\mathbf{Y}_{\mathrm{TF}} \mathbf{F}_{N}=\mathbf{F}_{M}^{\mathbf{H}}\mathbf{F}_{M}\mathbf{G}_{\mathrm{rx} }\mathbf{R}\mathbf{F}_{N}=\mathbf{G}_{\mathrm{rx}}\mathbf{R}\mathbf{F}_{N}. \tag{10}\] By following the vectorization with Kronecker product rule, we can rewrite (10) as \[\mathbf{y}_{\mathrm{DD}}=\mathsf{vec}(\mathbf{Y}_{\mathrm{DD}})=\mathsf{vec}( \mathbf{G}_{\mathrm{rx}}\mathbf{R}\mathbf{F}_{N})=(\mathbf{F}_{N}\otimes \mathbf{G}_{\mathrm{rx}})\mathbf{r}. \tag{11}\] By substituting (3) into (8) and (11) we obtain \[\mathbf{y}_{\mathrm{DD}}=\mathbf{H}_{\mathrm{DD}}\mathbf{x}_{\mathrm{DD}}+ \tilde{\mathbf{w}}, \tag{12}\] where \(\mathbf{H}_{\mathrm{DD}}=(\mathbf{F}_{N}\otimes\mathbf{G}_{\mathrm{rx}}) \mathbf{H}(\mathbf{F}_{N}^{\mathbf{H}}\otimes\mathbf{G}_{\mathrm{rx}})\) and \(\tilde{\mathbf{w}}=(\mathbf{F}_{N}\otimes\mathbf{G}_{\mathrm{rx}})\mathbf{w}\) denote the effective channel and noise in the DD domain, respectively. Here, \(\tilde{\mathbf{w}}\) is an i.i.d. Gaussian noise, since \(\mathbf{F}_{N}\otimes\mathbf{G}_{\mathrm{rx}}\) is a unitary orthogonal matrix [1, 16]. For convenience, we transform the complex-valued model in (12) into a real-valued model. Accordingly, \(\mathbf{x}=\left[\Re(\mathbf{x}_{\mathrm{DD}})\ \Im(\mathbf{x}_{\mathrm{DD}})\right]^{T}\), \(\mathbf{y}=\left[\Re(\mathbf{y}_{\mathrm{DD}})\ \Im(\mathbf{y}_{\mathrm{DD}})\right]^{T}\), \(\mathbf{n}=\left[\Re(\tilde{\mathbf{w}})\ \Im(\tilde{\mathbf{w}})\right]^{T}\), \[\mathbf{H}_{\mathrm{eff}}=\begin{bmatrix}\Re(\mathbf{H}_{\mathrm{DD}})&-\Im( \mathbf{H}_{\mathrm{DD}})\\ \Im(\mathbf{H}_{\mathrm{DD}})&\Re(\mathbf{H}_{\mathrm{DD}})\end{bmatrix},\] where \(\Re(\cdot)\) and \(\Im(\cdot)\) are the real and imaginary parts, respectively.
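For concreteness, this standard real-valued lifting can be written as the following minimal sketch of ours; `H_dd` and `y_dd` stand for the complex effective DD channel and received vector.

```python
# Lift the complex model of Eq. (12) into the real-valued model of Eq. (13).
import numpy as np

def to_real_model(H_dd, y_dd):
    H_eff = np.block([[H_dd.real, -H_dd.imag],
                      [H_dd.imag,  H_dd.real]])
    y = np.concatenate([y_dd.real, y_dd.imag])
    return H_eff, y
```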
Thus, the variance of \(\mathbf{n}\) is \(\sigma^{2}=\sigma_{\mathrm{c}}^{2}/2\), and \(\mathbf{x},\mathbf{y},\mathbf{n}\) are vectors of size \(2MN\) while \(\mathbf{H}_{\mathrm{eff}}\) is a matrix of size \(2MN\times 2MN\). Then, we can rewrite (12) as \[\mathbf{y}=\mathbf{H}_{\mathrm{eff}}\mathbf{x}+\mathbf{n}. \tag{13}\] We assume \(\mathbf{H}_{\mathrm{eff}}\) is known at the detector side. For notational simplicity, we omit the subscript of \(\mathbf{H}_{\mathrm{eff}}\) in (13) and just write it as \(\mathbf{H}\) in all subsequent sections. The signal-to-noise ratio (SNR) of the system is defined as \(\mathrm{SNR}=10\log_{10}(\frac{1}{\sigma_{\mathrm{c}}^{2}})\,\mathrm{dB}\). ## III MMSE-BPIC Detector In this section, we briefly describe the BPIC detector that employs the MMSE denoiser, recently proposed in [15]. The structure of the BPIC detector is shown in Fig. 2. It consists of four modules: Denoiser, Bayesian symbol observation (BSO), Bayesian symbol estimation (BSE), and decision statistics combining (DSC).
Fig. 1: The system model of the OTFS modulation scheme.
In the Denoiser module, the MMSE scheme is used to obtain the initial symbol estimates \(\hat{\mathbf{x}}^{(0)}\) in the first BPIC iteration [15] as shown in Fig. 2. The MMSE denoiser can be expressed as \[\hat{\mathbf{x}}^{(0)}=\left(\mathbf{H}^{T}\mathbf{H}+\sigma^{2}\mathbf{I} \right)^{-1}\mathbf{H}^{T}\mathbf{y}. \tag{14}\] In the BSO module, the matched filter based PIC scheme is used to detect the transmitted symbols, shown as \[\mu_{q}^{(t)}=\hat{x}_{q}^{(t-1)}+\frac{\mathbf{h}_{q}^{T}\left(\mathbf{y}- \mathbf{H}\hat{\mathbf{x}}^{(t-1)}\right)}{\|\mathbf{h}_{q}\|^{2}}, \tag{15}\] where \(\mu_{q}^{(t)}\) is the soft estimate of the \(q\)-th symbol \(x_{q}\) in iteration \(t\), and \(\mathbf{h}_{q}\) is the \(q\)-th column of matrix \(\mathbf{H}\). \(\hat{\mathbf{x}}^{(t-1)}=[\hat{x}_{1}^{(t-1)},\cdots,\hat{x}_{q}^{(t-1)}, \cdots,\hat{x}_{2MN}^{(t-1)}]^{T}\) is the vector of the estimated symbols. The variance \(\Sigma_{q}^{(t)}\) of the \(q\)-th symbol estimate is derived in [15] as \[\Sigma_{q}^{(t)}=\frac{1}{(\mathbf{h}_{q}^{T}\mathbf{h}_{q})^{2}}\left(\sum_{ \begin{subarray}{c}j=1\\ j\neq q\end{subarray}}^{2MN}(\mathbf{h}_{q}^{T}\mathbf{h}_{j})^{2}v_{j}^{(t-1) }+(\mathbf{h}_{q}^{T}\mathbf{h}_{q})\sigma^{2}\right), \tag{16}\] where \(v_{j}^{(t-1)}\) is the \(j\)-th element in a vector of symbol estimate variances \(\mathbf{v}^{(t-1)}\) in iteration \(t-1\) and \(\mathbf{v}^{(t-1)}=[v_{1}^{(t-1)},\cdots,v_{q}^{(t-1)},\cdots,v_{2MN}^{(t-1)}]^{T}\); we set \(\mathbf{v}^{(0)}=\mathbf{0}\) because we have no prior knowledge of the variance at the beginning. Then the estimated symbols \(\boldsymbol{\mu}^{(t)}=[\mu_{1}^{(t)},\cdots,\mu_{q}^{(t)},\cdots,\mu_{2MN}^{(t )}]^{T}\) and variances \(\boldsymbol{\Sigma}^{(t)}=[\Sigma_{1}^{(t)},\cdots,\Sigma_{q}^{(t)},\cdots, \Sigma_{2MN}^{(t)}]^{T}\) are forwarded to the BSE module, as shown in Fig. 2. In the BSE module, we compute the Bayesian symbol estimate and variance of the \(q\)-th symbol from the quantities obtained in the BSO module,
given as \[\hat{x}_{q}^{(t)}=\mathbb{E}\left[x_{q}\middle|\mu_{q}^{(t)},\Sigma_{q}^{(t)} \right]=\sum_{a\in\Omega}a\,\hat{p}^{(t)}(x_{q}=a|\mathbf{y})\,, \tag{17}\] \[v_{q}^{(t)}=\mathbb{E}\left[\left|x_{q}-\mathbb{E}\left[x_{q}\middle|\mu_{q}^{( t)},\Sigma_{q}^{(t)}\right]\right|^{2}\right], \tag{18}\] where \(\hat{p}^{(t)}\left(x_{q}|\mathbf{y}\right)=\mathcal{N}(x_{q}:\mu_{q}^{(t)}, \Sigma_{q}^{(t)})\) is obtained from the BSO module and it is normalized so that \(\sum_{a\in\Omega}\hat{p}^{(t)}\left(x_{q}=a|\mathbf{y}\right)=1\). The outputs of the BSE module, \(\hat{x}_{q}^{(t)}\) and \(v_{q}^{(t)}\), are then sent to the following DSC module. The DSC module performs a linear combination of the symbol estimates in two consecutive iterations, shown as \[\hat{x}_{q}^{(t)}=\left(1-\rho_{q}^{(t)}\right)\hat{x}_{q}^{(t-1)}+\rho_{q}^{( t)}\hat{x}_{q}^{(t)} \tag{19}\] \[v_{q}^{(t)}=\left(1-\rho_{q}^{(t)}\right)v_{q}^{(t-1)}+\rho_{q}^{(t)}v_{q}^{(t )}. \tag{20}\] The weighting coefficient is determined by maximizing the signal-to-interference-plus-noise ratio (SINR), given as \[\rho_{q}^{(t)}=\frac{e_{q}^{(t-1)}}{e_{q}^{(t)}+e_{q}^{(t-1)}}, \tag{21}\] where \(e_{q}^{(t)}\) is defined as the instantaneous square error of the \(q\)-th symbol estimate, computed by using the maximum ratio combining (MRC) filter, \[e_{q}^{(t)}=\left\|\frac{\mathbf{h}_{q}^{T}}{\|\mathbf{h}_{q}\|^{2}}\left( \mathbf{y}-\mathbf{H}\hat{\mathbf{x}}^{(t)}\right)\right\|^{2}. \tag{22}\] The weighted symbol estimates \(\hat{\mathbf{x}}^{(t)}\) and their variances \(\mathbf{v}^{(t)}\) are then returned to the BSO module to continue the iteration. After \(T\) iterations, \(\hat{\mathbf{x}}^{(T)}\) is taken as the vector of final symbol estimates. ## IV D-DIP Denoiser for Symbol Estimation In this section, we propose D-DIP to improve the initial symbol estimates of the BPIC detector; the whole iterative process of D-DIP is shown in Fig. 3. The DNN used in D-DIP is a fully connected decoder network consisting of \(L=5\) layers. Those layers can be broken down into an input layer, an output layer and three hidden layers with \(p_{1}=4\), \(p_{2}=8\), \(p_{3}=16\), \(p_{4}=32\), and \(p_{5}=2MN\) neurons, respectively. We use a random vector \(\mathbf{z}_{0}\) drawn from a normal distribution \(\mathcal{N}(\mathbf{0},\mathbf{1})\) of size \(4\times 1\) as the input of the first DNN layer (i.e., input layer). \(\mathbf{z}_{0}\) is fixed during the D-DIP iterative process. The DNN output at iteration \(i\), \(\mathbf{x}_{\mathrm{D-DIP}}^{(i)}\), is obtained by passing \(\mathbf{z}_{0}\) through the 5 layers, shown as \[\mathbf{x}_{\mathrm{D-DIP}}^{(i)}=cf_{L}^{(i)}(f_{L-1}^{(i)}(\cdots f_{2}^{(i) }(\mathbf{z}_{0}))), \tag{23}\] where \(c\) is a constant used to control the output range of the DNN and \(f_{l}^{(i)}\) is the output of layer \(l\) at iteration \(i\), \[f_{l}^{(i)}=\mathrm{Tanh}(\mathbf{W}_{l}^{(i)}f_{l-1}^{(i)}+\mathbf{b}_{l}^{(i) }),l=2,\ldots,L \tag{24}\] where \(f_{1}^{(i)}=\mathbf{z}_{0}\), \(\mathbf{W}_{l}^{(i)}\) represents the weight matrix between layers \(l\) and \(l-1\) at iteration \(i\), and \(\mathbf{b}_{l}^{(i)}\) is the bias vector in layer \(l\) at iteration \(i\). In the beginning, each entry of \(\mathbf{W}_{l}^{(0)}\) and \(\mathbf{b}_{l}^{(0)}\) is initialized randomly following a uniform distribution with a range of \((\frac{-1}{\sqrt{p_{l}}},\frac{1}{\sqrt{p_{l}}})\)[17], where \(p_{l}\) represents the number of neurons in layer \(l\). \(\mathrm{Tanh}\) is an activation function used after each layer.
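To make the architecture concrete, here is a minimal PyTorch sketch of our reading of Eqs. (23)-(24), together with the MSE loss and variance-based stopping rule introduced in the next paragraph (Eqs. (25)-(26)). It is an illustration under stated assumptions, not the authors' implementation: `max_iter` is our own cap, the window statistic is a per-component average of Eq. (25), and `lr=0.01`, `W=30`, `eps=1e-3`, `c=1/sqrt(2)` follow the simulation settings quoted in Section VI. (PyTorch's default `Linear` initialization is uniform in \((-1/\sqrt{p},1/\sqrt{p})\), which matches the initialization described above up to the fan-in convention.)

```python
# Decoder-only DIP denoiser producing initial symbol estimates for BPIC.
# H_eff (2MN x 2MN) and y (2MN,) are real-valued torch tensors, Eq. (13).
import torch

def d_dip(H_eff, y, c=1/2**0.5, lr=0.01, max_iter=500, W=30, eps=1e-3):
    n_sym = H_eff.shape[1]                       # = 2MN
    widths = [4, 8, 16, 32, n_sym]               # p_1 ... p_5
    layers = []
    for a, b in zip(widths[:-1], widths[1:]):
        layers += [torch.nn.Linear(a, b), torch.nn.Tanh()]
    net = torch.nn.Sequential(*layers)
    z0 = torch.randn(4)                          # fixed random input z_0
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    outs = []
    for i in range(max_iter):
        x_hat = c*net(z0)                        # Eq. (23)
        loss = torch.mean((H_eff @ x_hat - y)**2)    # MSE loss, Eq. (26)
        opt.zero_grad(); loss.backward(); opt.step()
        outs.append(x_hat.detach())
        if len(outs) > W:                        # stopping rule, cf. Eq. (25)
            window = torch.stack(outs[-W:])
            if window.var(dim=0).mean() < eps:
                break
    return outs[-1]                              # x_hat^(0) fed to BPIC
```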
After that, we use the stopping scheme of [18] to control the iterative process of D-DIP and avoid the overfitting problem due to the over-parameterization feature of the DIP. The stopping scheme is based on calculating the variance of the DNN output, given as \[\varsigma^{(i)}=\frac{1}{W}\sum_{j=i-W}^{i}\|\mathbf{x}_{\mathrm{D-DIP}}^{(j)}- \frac{1}{W}\sum_{j^{\prime}=i-W}^{i}\mathbf{x}_{\mathrm{D-DIP}}^{(j^{\prime})} \|^{2},i\geq W, \tag{25}\] where \(\varsigma^{(i)}\) is the variance value at iteration \(i\). When \(i<W\), the variance calculation is inactive. \(W\) is a constant determined experimentally; it should be smaller than the number of iterations needed for D-DIP to converge. As shown in Fig. 3, we compare \(\varsigma^{(i)}\) with a threshold \(\epsilon\). If \(\varsigma^{(i)}<\epsilon\), the iterative process of D-DIP stops, and the output of D-DIP, \(\mathbf{x}_{\mathrm{D-DIP}}^{(I)}\), is then forwarded to BPIC as the initial symbol estimates, i.e., \(\hat{\mathbf{x}}^{(0)}=\mathbf{x}_{\mathrm{D-DIP}}^{(I)}\), where \(I\) is the number of the last D-DIP iteration. Otherwise, the mean square error (MSE) is used to calculate the loss, given as \[\mathcal{L}^{(i)}=\frac{1}{2MN}\|\mathbf{H}\mathbf{x}_{\mathrm{D-DIP}}^{(i)}- \mathbf{y}\|^{2}. \tag{26}\] The DNN parameters, consisting of the weights \(\mathbf{W}_{l}^{(i)}\) and biases \(\mathbf{b}_{l}^{(i)}\), are then optimized by using the Adam optimizer [19] and the calculated loss in (26). The process is then repeated as shown in Fig. 3.
Fig. 2: BPIC detector architecture.
## V Complexity Analysis In this section, we analyze the computational complexity of the proposed D-DIP-BPIC detector. The computational complexity of D-DIP is dominated by the matrix-vector multiplications of its fully connected layers, with a cost of \(\mathcal{O}(M^{2}N^{2}I)\), where \(I\) denotes the number of iterations needed for D-DIP. The computational complexity for different detection algorithms is shown in Table I, where \(T\) represents the number of iterations needed for the BPIC, UAMP, EP and BPICNet detectors. For instance, for \(M=12,N=7,T=10,I=50\), the complexity of D-DIP-BPIC is approximately 1.5 times lower than MMSE-BPIC, UAMP and BPICNet, and approximately 15 times lower than EP. Thus, our proposed detector has the lowest complexity among the above high-performance detectors. Note that BPICNet incurs extra complexity due to its training requirements: it uses a large dataset for training prior to deployment. For example, \(b=5.12\times 10^{6}\) is used in [10]. Fig. 4 shows the cumulative distribution function (CDF) of \(I\) (i.e., the number of D-DIP iterations needed to satisfy the stopping scheme (25)) for \(M=12,24,36,48\), \(N=7\), \(l_{max}=M-1\), \(k_{max}=3\), and \(\mathrm{SNR}=15\,\mathrm{dB}\). The figure shows that the number of iterations required for D-DIP to converge, \(I\), is not sensitive to the OTFS frame size (i.e., \(M\) and \(N\)), which is a significant advantage. ## VI Numerical results In this section, we evaluate the performance of our proposed detector by comparing its SER performance with those of MMSE-BPIC [5], UAMP [20], EP [7] and BPICNet [10]. Here we use the UAMP in [20] instead, because the UAMP proposed in [6] is not suitable for our system model, as shown in [5]. For the simulations, we set \(N=7\), \(l_{max}=M-1\), and \(\Delta f=15\,\mathrm{kHz}\). The carrier frequency is set to \(f_{c}=10\,\mathrm{GHz}\).
The \(4\)-QAM modulation is employed for the simulations, and we set \(c=1/\sqrt{2}\), corresponding to the normalized power of the constellation, to normalize the DNN output. The same DNN parameters described in Section IV (e.g., number of layers and number of neurons in each layer) are used in the DNN for all simulations. We use the Adam optimizer with a learning rate of \(0.01\) to optimize the DNN parameters. The stopping criterion parameter for (25), \(W\), is set to 30, and the threshold \(\epsilon\) is set to 0.001. The number of iterations for the BPIC, UAMP, EP and BPICNet is set to \(T=10\) to ensure convergence. For the training setting of BPICNet, we use the same setting as in [10], where \(M=12,N=7,l_{max}=11,k_{max}=3\) and 500 epochs are used during the training process; in each epoch, 40 batches of 256 samples were generated. \(P\in\{6,\ldots,12\}\) is randomly chosen and the values of SNR are uniformly distributed in a certain range; more details are given in [10].
\begin{table} \begin{tabular}{|c|c|c|} \hline Detector & Complexity order (Training) & Complexity order (Deployment) \\ \hline MMSE-BPIC [5] & Not required & \(\mathcal{O}(M^{3}N^{3}+M^{2}N^{2}T)\) \\ \hline UAMP [20] & Not required & \(\mathcal{O}(M^{3}N^{3}+M^{2}N^{2}T)\) \\ \hline EP [7] & Not required & \(\mathcal{O}(M^{3}N^{3}T)\) \\ \hline BPICNet [10] & \(\mathcal{O}(b(M^{3}N^{3}+MN+M^{2}N^{2}T))\) & \(\mathcal{O}(M^{3}N^{3}+MN+M^{2}N^{2}T)\) \\ \hline D-DIP-BPIC & Not required & \(\mathcal{O}(M^{2}N^{2}I+M^{2}N^{2}T)\) \\ \hline \end{tabular} \end{table} Table I: Computational complexity comparison
Fig. 3: D-DIP structure.
Fig. 4: CDF of \(I\).
Fig. 5(a) demonstrates that the proposed D-DIP-BPIC detector achieves a performance gain of around 0.5 dB over MMSE-BPIC and UAMP. In fact, its SER performance is very close to BPICNet and EP. Fig. 5(b) evaluates the scalability of our proposed D-DIP-BPIC detector. As we increase the OTFS frame size (i.e., the number of subcarriers), D-DIP-BPIC retains its advantage over MMSE-BPIC and UAMP and achieves performance close to that of BPICNet and EP. Fig. 5(c) shows that when the number of paths (e.g., mobile reflectors) increases, the D-DIP-BPIC detector can still achieve performance close to BPICNet and EP and outperforms the others. As shown in Fig. 5(d), the performance of the BPICNet detector degrades for \(k_{max}=1\) as compared to \(k_{max}=2,3\), as the fidelity of the training data is compromised, while our D-DIP-BPIC retains its benefit. ## VII Conclusion We proposed an untrained neural network based OTFS detector that can achieve excellent performance compared to state-of-the-art OTFS detectors. Our simulation results showed that the proposed D-DIP-BPIC detector achieves a 0.5 dB SER performance improvement over MMSE-BPIC, and achieves SER performance close to that of EP with much lower complexity.
2308.14823
Gravitational radiation from a particle plunging into a Schwarzschild black hole: frequency-domain and semirelativistic analyses
We revisit the classic problem of gravitational wave emission by a test particle plunging into a Schwarzschild black hole both in the frequency-domain Regge-Wheeler-Zerilli formalism and in the semirelativistic approximation. We use, and generalize, a transformation due to Nakamura, Sasaki, and Shibata to improve the falloff of the source term of the Zerilli function. The faster decay improves the numerical convergence of quantities of interest, such as the energy radiated at spatial infinity through gravitational waves. As a test of the method, we study the gravitational radiation produced by test particles that plunge into the black hole with impact parameters close to the threshold for scattering. We recover and expand upon previous results that were obtained using the Sasaki-Nakamura equation. In particular, we study the relative contributions to the total energy radiated due to waves of axial and polar parity, and uncover a universal behavior in the waveforms at late times. We complement our study with a semirelativistic analysis of the problem, and we compare the two approaches. The generalized Nakamura-Sasaki-Shibata transformation presented here is a simple and practical alternative for the analysis of gravitational-wave emission by unbound orbits in the Schwarzschild spacetime using the frequency-domain Regge-Wheeler-Zerilli formalism.
Hector O. Silva, Giovanni Tambalo, Kostas Glampedakis, Kent Yagi
2023-08-28T18:20:50Z
http://arxiv.org/abs/2308.14823v2
Gravitational radiation from a particle plunging into a Schwarzschild black hole: frequency-domain and semi-relativistic analyses ###### Abstract We revisit the classic problem of gravitational wave emission by a test particle plunging into a Schwarzschild black hole both in the frequency-domain Regge-Wheeler-Zerilli formalism and in the semi-relativistic approximation. We use, and generalize, a transformation due to Nakamura, Sasaki and Shibata to improve the fall-off of the source term of the Zerilli function. The faster decay improves the numerical convergence of quantities of interest, such as the energy radiated at spatial infinity through gravitational waves. As a test of the method, we study the gravitational radiation produced by test particles that plunge into the black hole with impact parameters close to the threshold for scattering. We recover and expand upon previous results that were obtained using the Sasaki-Nakamura equation. In particular, we study the relative contributions to the total energy radiated due to waves of axial and polar parity, and uncover a universal behavior in the waveforms at late times. We complement our study with a semi-relativistic analysis of the problem, and we compare the two approaches. The generalized Nakamura-Sasaki-Shibata transformation presented here is a simple and practical alternative for the analysis of gravitational-wave emission by unbound orbits in the Schwarzschild spacetime using the frequency-domain Regge-Wheeler-Zerilli formalism. ## I Introduction The study of gravitational radiation produced by test particles in black-hole spacetimes has a long history dating back to the early 1970s, and played a central role in the early development of the understanding of potential gravitational-wave sources [1; 2]. In the framework of black-hole perturbation theory, developed by Regge, Wheeler, and Zerilli [3; 4], the pioneering works on this problem were done by Zerilli [5], Davis et al. [6; 7; 8], Chung [9], and Ruffini [10]. These works assumed particles in unbound trajectories that start from spatial infinity and plunge into a Schwarzschild black hole. Critical to these calculations is the asymptotic behavior of the source term that encapsulates how the test particle excites the gravitational perturbations. As an extreme example, the source term of the Teukolsky equation, which describes the gravitational perturbations of a Kerr black hole [11], diverges at spatial infinity for unbound geodesics. In principle, this jeopardizes the calculation of physical quantities of interest, such as gravitational waveforms or the energy carried away to infinity by the waves. To circumvent this problem, one can either develop a regularization scheme [12; 13; 14; 15], or rewrite the Teukolsky equation to tame the source's asymptotic behavior. Pursuing the latter approach, Sasaki and Nakamura [16; 17; 18] found their eponymous equation, widely used in the study of unbound geodesics both in Schwarzschild and Kerr spacetimes; see, e.g., Refs. [19; 20; 21] and [22; 23; 24] respectively for early work. A somewhat similar situation also happens when the perturbations of a Schwarzschild black hole are described in terms of the Cunningham-Price-Moncrief [25] and Zerilli-Moncrief [26] functions. Less dramatically, the source term of the Zerilli equation has a slow fall-off at spatial infinity. See Hopper [27] for a detailed discussion. Serendipitously, in the course of a related investigation, we learned of a work by Shibata and Nakamura that presents a simple transformation of the Zerilli function to improve the fall-off of the source term of the Zerilli equation, and that was applied to the radial infall case only [28].1 Here we revisit this method and extend its application range to problems involving particles plunging with nonzero angular momentum. This allows us to reexamine some aspects of the gravitational radiation produced by particles that plunge with angular momenta near the threshold for scattering. This situation is relevant in the context of understanding ultrarelativistic binary black hole collisions, and was until now only studied in detail using the Sasaki-Nakamura equation; see Berti et al. [29] and references therein. We complement our study of this problem with a calculation of the energy radiated within the semi-relativistic approximation of Ruffini and Sasaki [30]. Footnote 1: Ref. [28] cites an unpublished work by Nakamura and Sasaki when introducing this transformation, which we will refer to as the Nakamura-Sasaki-Shibata transformation. This paper is organized as follows. In Sec. II, we review the motion of test particles plunging into a Schwarzschild black hole. In Sec. III, we provide a summary of the Regge-Wheeler-Zerilli formalism and identify the main issue we want to resolve. In Sec. IV, we review and generalize the method of Ref. [28]. In Sec. V, we describe our numerical methods and present our numerical results. In Sec. VI, we compare our results against an analysis in the semi-relativistic approximation. We summarize our findings in Sec. VII. We use the mostly-plus metric signature and use geometrical units with \(c=G=1\), unless stated otherwise. ## II Geodesic motion We consider a particle of mass \(\mu\) in geodesic motion in the spacetime of a Schwarzschild black hole of mass \(M\), with \(\mu/M\ll 1\). We use Schwarzschild-Droste coordinates \(x^{\mu}=\{t,r,\theta,\phi\}\) in which the spacetime's line element is \[\mathrm{d}s^{2}=-f(r)\,\mathrm{d}t^{2}+f^{-1}(r)\,\mathrm{d}r^{2}+r^{2}( \mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2})\,, \tag{1}\] where \(f=1-2M/r\) and \(r=2M\) is the location of the event horizon. We assume that the particle starts from rest at infinity with (conserved) energy \(\mathcal{E}=E/\mu=1\) and angular momentum \(\mathcal{L}=L/\mu\) per unit mass. If we choose, without loss of generality, that the particle's motion happens in the equatorial plane \(\theta=\pi/2\), we can parametrize the particle's worldline in terms of the proper time \(\tau\) as \(z^{\mu}(\tau)=\{t_{p},r_{p},\pi/2,\phi_{p}\}\), and, from the geodesic equation and the timelike constraint \(g_{ab}u^{a}u^{b}=-1\), with \(u^{a}=\mathrm{d}z^{a}/\mathrm{d}\tau\), we obtain: \[\dot{t}_{p}=\mathcal{E}/f_{p}\,,\quad\dot{\phi}_{p}=\mathcal{L}/r_{p}^{2}\,, \quad\dot{r}_{p}^{2}=\mathcal{E}^{2}-U(r_{p},\mathcal{L})\,, \tag{2}\] where \(\dot{}=\mathrm{d}/\mathrm{d}\tau\) and \[U=f\,(1+\mathcal{L}^{2}/r^{2})\,, \tag{3}\] is the effective radial potential. The particle's trajectories are classified according to the number of real roots of \(\mathcal{E}^{2}-U\). For the special case \(\mathcal{E}=1\) the analysis is simple, and we find the roots to be, \[r_{\pm}=\frac{\mathcal{L}}{4M}\left[\mathcal{L}\pm(\mathcal{L}^{2}-16M^{2})^{ 1/2}\right].
In a serendipitous event, in the course of a related investigation, we learned of a work by Shibata and Nakamura that presents a simple transformation of the Zerilli function to improve the fall-off of the source term of the Zerilli equation, and that was applied to the radial infall case only [28].1 Here we revisit this method and extend its application range to problems involving particles plunging with nonzero angular momentum. This allows us to reexamine some aspects of the gravitational radiation produced by particles that plunge with angular momenta near the threshold for scattering. This situation is relevant in the context of understanding ultrarelativistic binary black hole collisions, and that was until now only studied in detail using the Sasaki-Nakamura equation; see Berti et al. [29] and references therein. We complement our study of this problem with a calculation of the energy radiated within the semi-relativistic approximation of Ruffini and Sasaki [30]. Footnote 1: Ref. [28] cites an unpublished work by Nakamura and Sasaki when introducing this transformation, that we will refer to as the Nakamura-Sasaki-Shibata transformation. This paper is organized as follows. In Sec. II, we review the motion of test particles plunging into a Schwarzschild black hole. In Sec. III, we provide a summary of Regge-Wheeler-Zerilli formalism and identify the main issue we want to resolve. In Sec. IV, we review and generalize the method of Ref. [28]. In Sec. V, we describe our numerical methods and present our numerical results. In Sec. VI, we compare our results against an analysis in the semi-relativistic approximation. We summarize our findings in Sec. VII. We use the mostly-plus metric signature and use geometrical units with \(c=G=1\), unless stated otherwise. Geodesic motion We consider a particle of mass \(\mu\) in geodesic motion the spacetime of a Schwarzschild black hole of mass \(M\), with \(\mu/M\ll 1\). We use Schwarzschild-Droste coordinates \(x^{\mu}=\{t,r,\theta,\phi\}\) in which the spacetime's line element is \[\mathrm{d}s^{2}=-f(r)\,\mathrm{d}t^{2}+f^{-1}(r)\,\mathrm{d}r^{2}+r^{2}( \mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2})\,, \tag{1}\] where \(f=1-2M/r\) and \(r=2M\) is the location of the event horizon. We assume that the particle starts from rest at infinity with (conserved) energy \(\mathcal{E}=E/\mu=1\) and angular momentum \(\mathcal{L}=L/\mu\) per unit mass. If we chose, without loss of generality, that the particle's motion happens in the equatorial plane \(\theta=\pi/2\), we can parametrize the particle's worldline in terms of the proper time \(\tau\) as \(z^{\mu}(\tau)=\{t_{p},r_{p},\pi/2,\phi_{p}\}\), and, from the geodesic equation and the timelike constraint \(g_{ab}u^{a}u^{b}=-1\), with \(u^{a}=\mathrm{d}z^{a}/\mathrm{d}t\), we obtain: \[\dot{t}_{p}=\mathcal{E}/f_{p}\,,\quad\dot{\phi}_{p}=\mathcal{L}/r_{p}^{2}\,, \quad\dot{r}_{p}^{2}=\mathcal{E}^{2}-U(r_{p},\mathcal{L})\,, \tag{2}\] where \(\dot{}=\mathrm{d}/\mathrm{d}\tau\) and \[U=f\,(1+\mathcal{L}^{2}/r^{2})\,, \tag{3}\] is the effective radial potential. The particle's trajectories are classified according to the number of real roots of \(\mathcal{E}^{2}-U\). For the special case \(\mathcal{E}=1\) the analysis is simple, and we find the roots to be, \[r_{\pm}=\frac{\mathcal{L}}{4M}\left[\mathcal{L}\pm(\mathcal{L}^{2}-16M^{2})^{ 1/2}\right]. 
\tag{4}\] Hence, a plunging orbit from infinity requires that \(\mathcal{L}<4M\), for otherwise the turning points are real and positive (i.e., the particle is scattered.) We will define this special value of the angular momentum as \(\mathcal{L}_{\mathrm{crit}}=4M\). In Fig. 1 we show the effective potential (3) for a range of values \(\mathcal{L}/M\in\{3.25,\,4.25\}\) (light curves). The thicker line corresponds to \(U(r,\mathcal{L}_{\mathrm{crit}})\), which peaks at \(r=4M\) with value of one. Hence, a particle falling from rest and with angular momentum \(\mathcal{L}=\mathcal{L}_{\mathrm{crit}}\) will be captured in a marginally stable circular orbit. A particle with \(\mathcal{L}\) larger (smaller) than \(\mathcal{L}_{\mathrm{crit}}\) will scatter (plunge). It is convenient to rewrite Eqs. (2) as first-order in \(r\) equations, \[\mathrm{d}t_{p}/\mathrm{d}r_{p} =-(\mathcal{E}/f_{p})\,(\mathcal{E}^{2}-U_{p})^{-1/2}\,, \tag{5a}\] \[\mathrm{d}\phi_{p}/\mathrm{d}r_{p} =-(\mathcal{L}/r_{p}^{2})\,(\mathcal{E}^{2}-U_{p})^{-1/2}\,, \tag{5b}\] where we have taken \(\dot{r}_{p}<0\). We integrate Eqs. (5) with initial conditions \(t_{p}(r_{\mathrm{max}})=0\) and \(\phi_{p}(r_{\mathrm{max}})=0\) at some arbitrarily large \(r_{p}=r_{\mathrm{max}}\) down to the horizon, \(r_{p}=2M\). In Fig. 2 we show a sequence of trajectories starting from \(\mathcal{L}/M=3.25\) and up to \(\mathcal{L}/M=3.9996\) (i.e., with \(99.99\%\) of \(\mathcal{L}_{\mathrm{crit}}\)). We translate from Schwarzschild-Droste to Cartesian coordinates using \[x_{p}=r_{p}\cos\phi_{p}\,,\quad y_{p}=r_{p}\sin\phi_{p}\,,\quad z_{p}=0\,. \tag{6}\] As the ratio \(\mathcal{L}/\mathcal{L}_{\mathrm{crit}}\) approaches one, the particle executes an increasing fractional number of orbits \(\phi_{p}/(2\pi)\), given approximately by \(-(\sqrt{2}\pi)^{-1}\,\log(1-\mathcal{L}/\mathcal{L}_{\mathrm{crit}})\)[31]. Figure 2: Timelike geodesics with angular momentum per unit mass \(\mathcal{L}/M\in\{3.25,\,3.9996\}\) plunging into a Schwarzschild black hole (black disk) starting from rest at spatial infinity, \(\mathcal{E}=1\). The dot-dashed line represents to the innermost stable circular orbit (\(r=6M\)) and the dashed line corresponds to the location of the marginally stable circular orbit (\(r=4M\)). In the limit \(\mathcal{L}/\mathcal{L}_{\mathrm{crit}}\to 1\), the particle executes an increasing fractional number of orbits \(\phi/(2\pi)\), as seen in the red curve that corresponds to \(\mathcal{L}=3.9996M\), or \(99.99\%\) of \(\mathcal{L}_{\mathrm{crit}}\). ## III Black hole perturbations in the Regge-Wheeler-Zerilli gauge We are interested in calculating the gravitational waves produced by a particle plunging in a Schwarzschild black hole. The standard treatment of this problem, in the metric-perturbation formalism, is due to Regge and Wheeler [3] and Zerilli [4; 5]. The problem reduces to solving two equations in the time domain, \[\left[-\frac{\partial^{2}}{\partial t^{2}}+\frac{\partial^{2}}{\partial x^{2}}- V_{\ell}^{(\pm)}(r)\right]X_{\ell m}^{(\pm)}(t,r)=S_{\ell m}^{(\pm)}(t,r)\,, \tag{7}\] or, by going to the Fourier domain through \[X_{\ell m}^{(\pm)}(t,r)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{d}\omega\, e^{-i\omega t}\,X_{\ell m\omega}^{(\pm)}(r)\,, \tag{8}\] we have alternatively \[\left[\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+\omega^{2}-V_{\ell}^{(\pm)}(r) \right]X_{\ell m\omega}^{(\pm)}(r)=S_{\ell m\omega}^{(\pm)}(r)\,. 
\tag{9}\] In these equations, \(x\) is the tortoise coordinate \[x=r+2M\log[r/(2M)-1]\,, \tag{10}\] that maps the region \(2M<r<\infty\) to \(-\infty<x<\infty\). The superscript \((\pm)\) denotes variables associated to metric perturbations of polar \((+)\) or axial \((-)\) parity, and \(V_{\ell}^{(\pm)}\) is an effective potential. Perturbations of each parity are described by a single master function, known as the Zerilli \(X^{(+)}\) and Regge-Wheeler \(X^{(-)}\) functions, respectively. The effective potentials associated to each of these perturbations bear the same respective names and are given by \[V_{\ell}^{(+)} =\frac{f}{r^{2}\Lambda^{2}}\left[2\lambda^{2}\left(\Lambda+1 \right)+\frac{18M^{2}}{r^{2}}\left(\lambda+\frac{M}{r}\right)\right], \tag{11a}\] \[V_{\ell}^{(-)} =\frac{f}{r^{2}}\left[\ell(\ell+1)+\frac{6M}{r}\right], \tag{11b}\] where we defined \[\lambda=(\ell+2)(\ell-1)/2\,,\quad\text{and}\quad\Lambda=\lambda+3M/r\,. \tag{12}\] The function \(S_{\ell m\omega}^{(\pm)}\) is the source term, responsible for the excitation of the gravitational perturbations. A detailed derivation of this source term in the Regge-Wheeler-Zerilli formalism can be found e.g., in Refs. [32; 33], whose results we quote in the Appendix A. Because the potential \(V_{\ell}^{(\pm)}\) vanishes both at the horizon and at infinity, and provided that the source vanishes sufficiently fast at both boundaries, the solutions to Eq. (7) can be written as plane waves for \(x\to\pm\infty\). We will impose that \(X_{\ell m\omega}^{(\pm)}\) is purely ingoing at the event horizon and purely outgoing at spatial infinity, that is, \[X_{\ell m\omega}^{(\pm)}\simeq\left\{\begin{array}{ll}C_{\ell m\omega}^{(\pm ),\,\mathrm{in}}&e^{-i\omega x}\\ C_{\ell m\omega}^{(\pm),\,\mathrm{out}}&e^{+i\omega x}\end{array}\right.\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, This is the main quantity we must numerically calculate to obtain, e.g., the time-domain gravitational waveform or the energy radiated to infinity. More specifically, we will be interested in the spectral energy distribution in each \((\ell,m)\) mode \[\frac{\mathrm{d}E_{\ell m}}{\mathrm{d}\omega}=\frac{\omega^{2}}{64 \pi^{2}}\frac{(\ell+2)!}{(\ell-2)!}\left[|C^{(+),\,\mathrm{out}}_{\ell m\omega}| ^{2}+\frac{4}{\omega^{2}}|C^{(-),\,\mathrm{out}}_{\ell m\omega}|^{2}\right], \tag{21}\] and in the time-domain Regge-Wheeler and Zerilli mode functions \[X^{(\pm)}_{\ell m}(t,x)=\frac{1}{2\pi}\int_{-\infty}^{+\infty} \mathrm{d}\omega\,C^{(\pm),\,\mathrm{out}}_{\ell m\omega}\,e^{-i\omega(t-x)}\,. \tag{22}\] In practice, we note that as \(x\to\infty\), the Wronskian becomes, \[W^{(\pm)}_{\ell m\omega}=2i\omega A^{(\pm),\,\mathrm{in}}_{\ell m \omega}\,, \tag{23}\] and that we can also rewrite Eq. (20) as \[C^{(\pm),\,\mathrm{out}}_{\ell m\omega}=\frac{1}{W^{(\pm)}_{\ell m \omega}}\int_{2M}^{+\infty}\frac{\mathrm{d}r^{\prime}}{f}\,X^{(\pm),\,\mathrm{ in}}_{\ell m\omega}\,S^{(\pm)}_{\ell m\omega}\,. \tag{24}\] Because \(f\simeq 1\) and \(X^{(\pm),\,\mathrm{in}}_{\ell m\omega}\simeq\exp(\pm i\omega x)\) as \(x\to\infty\), we see that the convergence of this integral depends critically on the asymptotic properties of \(S^{(\pm)}_{\ell m\omega}\). ## IV Asymptotic behavior of the frequency domain sources We now review the properties of \(S^{(+)}_{\ell m\omega}\) in two cases of interest. We will start by reviewing the case in which the particle falls radially, \(\mathcal{L}=0\), into the black hole and how Ref. 
[28] improved the asymptotic behavior of \(S^{(+)}_{\ell m\omega}\). We will then show one way in which the Nakamura-Sasaki-Shibata transformation can be generalized to general plunging trajectories, \(\mathcal{L}\neq 0\), and discuss the asymptotic properties of this new source term. ### The case of radial infall When \(\mathcal{L}=0\), \(S^{(-)}_{\ell m\omega}\) vanishes and \(S^{(+)}_{\ell m\omega}\) simplifies to [6; 7] \[S^{(+)}_{\ell m\omega}=-\,8\pi\mu\mathcal{A}_{\ell m}\frac{f}{r \Lambda}\left[\sqrt{\frac{r}{2M}}-\frac{2i}{\omega}\frac{\lambda}{r\Lambda} \right]e^{i\omega t_{p}(r)}\,, \tag{25}\] where \(\mathcal{A}_{\ell m}=Y^{*}_{\ell m}(\pi/2,\phi)\exp(im\phi)\), \(Y_{\ell m}\) are the spherical harmonics, where the asterisk indicates complex conjugation, and \(t_{p}\) is given by \[\frac{t_{p}}{2M}=-\frac{2}{3}\left(\frac{r}{2M}\right)^{\frac{3}{2}}-2\left( \frac{r}{2M}\right)^{\frac{1}{2}}+\log\left[\frac{\sqrt{r/(2M)}+1}{\sqrt{r/( 2M)}-1}\right]. \tag{26}\] We find that the near-horizon and spatial-infinity behaviors of \(S^{(+)}_{\ell m\omega}\) are: \[S^{(+)}_{\ell m\omega}\simeq\begin{cases}0&x\to-\infty\,,\\ x^{-1/2}&x\to+\infty\,.\end{cases} \tag{27}\] Hence the integral in Eq. (20) converges slowly at spatial infinity. To improve the convergence, Ref. [28] proposed the substitution,2 (Footnote 2: Our notation differs from that used in Ref. [28]. See also Ref. [13] for a similar substitution in the context of the Teukolsky equation.) \[X^{(+)}_{\ell m\omega}=\tilde{X}^{(+)}_{\ell m\omega}+Q_{\ell m \omega}\,, \tag{28}\] where \[Q_{\ell m\omega}=-\frac{8\pi\mu\mathcal{A}_{\ell m}}{\omega^{2}} \frac{f}{r\Lambda}\sqrt{\frac{2M}{r}}e^{i\omega t_{p}}\,. \tag{29}\] The function \(Q_{\ell m\omega}\) vanishes at the event horizon \(x\to-\infty\) and decays as \(x^{-3/2}\) for \(x\to\infty\). Thus, \(X^{(+)}_{\ell m\omega}\simeq\tilde{X}^{(+)}_{\ell m\omega}\) in the latter limit. We then insert Eq. (28) in Eq. (9) to find, \[\left[\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+\omega^{2}-V^{(+)}_{ \ell}\right]\tilde{X}^{(+)}_{\ell m\omega}=\tilde{S}^{(+)}_{\ell m\omega}\,, \tag{30}\] where \(\tilde{S}^{(+)}_{\ell m\omega}\) is a new source term given by, \[\tilde{S}^{(+)}_{\ell m\omega}=S^{(+)}_{\ell m\omega}-\left[ \frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+\omega^{2}-V^{(+)}_{\ell}\right]Q_{ \ell m\omega}\,. \tag{31}\] The asymptotic behaviors of the new source term are, \[\tilde{S}^{(+)}_{\ell m\omega}\simeq\begin{cases}0&x\to-\infty\,,\\ x^{-3/2}&x\to+\infty\,,\end{cases} \tag{32}\] making the integral in Eq. (20) converge faster. One can verify that it is the second \(x\)-derivative of \(Q_{\ell m\omega}\) in Eq. (31) that yields a term that decays as \(r^{-1/2}\) as \(x\to\infty\), and that cancels the leading-order asymptotic term in the large-\(x\) expansion of the original source term. We note that both \(X^{(+)}_{\ell m\omega}\) and \(\tilde{X}^{(+)}_{\ell m\omega}\) satisfy the same homogeneous equation. Hence, we can calculate \(C^{(\pm),\,\mathrm{out}}_{\ell m\omega}\) with the mere replacement \(S^{(+)}_{\ell m\omega}\to\tilde{S}^{(+)}_{\ell m\omega}\) in Eq. (24). In Fig. 3, we show both the original (25) and the Nakamura-Sasaki-Shibata-transformed (31) source terms of the Zerilli equation for a particle falling radially into the black hole, for \(\ell=m=2\) and \(M\omega=0.3\). In the left panel we show their absolute values. In the right panels we show the real (solid line) and imaginary (dashed line) parts of the original (top panel) and new (bottom panel) source terms.
The faster decay of Eq. (31) accelerates the convergence of Eq. (20). The value \(M\omega=0.3\) is approximately the real part of the fundamental quasinormal mode frequency of a Schwarzschild black hole [34], and dominates the emission of energy in the form of gravitational waves [12]. The source terms are largest near \(r\simeq 3M\), which corresponds approximately to the location of the peak of the Zerilli potential \(r_{\ell=2,\,\text{peak}}^{(+)}\simeq 3.1M\), and of the light ring \(r=3M\). The success of the Nakamura-Sasaki-Shibata transformation opens two questions. First, can we, even for a radially infalling particle, make the source term decay more rapidly? Second, can we generalize Eqs. (28) and (29) to the case in which the particle plunges with a nonzero angular momentum? In the next section we give positive answers to both questions. ### The case of general plunging trajectories We seek a minimal extension of Eqs. (28) and (29) for the case of a plunging particle with general angular momentum \(\mathcal{L}\neq 0\). We focus only on the source term in the Zerilli equation, because the source term in the Regge-Wheeler equation already has a fast fall-off: \[S_{\ell m\omega}^{\,(-)}\simeq x^{-2}\,,\quad x\to\infty\,. \tag{33}\] We would like to make the source term in the Zerilli equation decay at least as rapidly as the source term for the Regge-Wheeler equation. Our strategy is to retain Eq. (28), but generalize \(Q_{\ell m\omega}\) to the form: \[Q_{\ell m\omega}=-\frac{8\pi\mu\mathcal{A}_{\ell m}}{\omega^{2}}\frac{f}{ \Lambda r}\left[\,\sum_{n=0}^{N}a_{n}\,\left(2M/r\right)^{(n+1)/2}\,\right]e^{ i(\omega t_{p}-m\phi_{p})}\,. \tag{34}\] Proceeding as in the previous section, we arrive at the same Eq. (31), where \(S_{\ell m\omega}^{\,(+)}\) is now given by a more complicated formula, whose form can be found in Eq. (17) of Appendix A. We then expand Eq. (31) in a power series in \(r\) for \(r\to\infty\), and collect powers in \(r\). Next, we fix the coefficients \(a_{n}\) in such a way as to cancel each successive power of \(r\) in this power series. This yields \[a_{0}=1\,,\quad a_{1}=\frac{im}{\ell(\ell+1)}\frac{\mathcal{L}}{M}\,,\quad a_ {2}=1-\frac{\mathcal{L}^{2}}{8M^{2}}\,, \tag{35}\] by working up to \(N=2\). Higher-order terms can be obtained by following the same procedure just outlined. We notice there is no dependence on \(\mathcal{L}\) at order \(N=0\) and that we recover Eq. (29) in this case. The next correction to the radial infall case occurs at \(n=2\). In Fig. 4 we show the original source term of the Zerilli equation \(S_{\ell m\omega}^{\,(+)}\) and its transformed version \(\tilde{S}_{\ell m\omega}^{\,(+),\,(N)}\) (for \(N=0\), \(1\), and \(2\)) for \(\ell=m=2\), \(M\omega=0.3\) and a near-critical geodesic with \(\mathcal{L}=0.9999\,\mathcal{L}_{\text{crit}}\). We chose this value of \(\mathcal{L}\) as a "stress test" for the method, due to the large amplitude of the source term around the location of the marginally stable circular orbit, \(r=4M\). This peak dominates over the peak at the light ring \(r=3M\) present in Fig. 3.
Figure 3: The \(\ell=m=2\) source term for the Zerilli equation for a radially plunging particle starting from rest at spatial infinity, for \(M\omega=0.3\). Left panel: the absolute values of the original Zerilli source term, \(S^{(+)}\), and its Nakamura-Sasaki-Shibata-transformed version, \(\tilde{S}^{\,(+)}\).
Right panels: the real (“Re”) and imaginary (“Im”) parts of \(S^{(+)}\) (top) and \(\tilde{S}^{\,(+)}\) (bottom). Both sources vanish at the event horizon \(r=2M\), and the faster asymptotic decay of \(\tilde{S}^{\,(+)}\) improves the convergence of Eq. (20). The source terms are largest near \(r\simeq 3M\), which corresponds approximately to the location of the peak of the Zerilli potential \(r_{\ell=2,\,\text{peak}}^{(+)}\simeq 3.1M\) and of the light ring \(r=3M\).
The left panel of Fig. 4 shows the absolute values of the various source terms, whereas in the right panels we show their real (solid lines) and imaginary parts (dashed lines). We see that the original Nakamura-Sasaki-Shibata transformation (\(N=0\)) results in a more slowly decaying source term, \(r^{-1}\), when applied to the case \(\mathcal{L}\neq 0\), compared to the radial infall case, \(r^{-3/2}\). This can be understood by examining the asymptotic behavior of \(S^{(+)}_{\ell m\omega}\): \[S^{(+)}_{\ell m\omega} =-\frac{4\pi\mu\mathcal{A}_{\ell m}}{\lambda M}\left[\sqrt{\frac{2 M}{r}}+\frac{im}{\ell(\ell+1)}\frac{\mathcal{L}}{M}\frac{2M}{r}\right.\] \[\left.+\left(\frac{\mathcal{L}^{2}}{8M^{2}}+\frac{3+2\lambda}{2 \lambda}\right)\left(\frac{2M}{r}\right)^{3/2}\right]+\mathcal{O}(r^{-2})\,. \tag{36}\] This expansion shows that the first term containing the angular momentum \(\mathcal{L}\) appears at order \(r^{-1}\), and it cannot be canceled with Eq. (29). ## V Numerical methods and results In this section we describe the numerical methods we use to compute Eq. (24), and show a few applications of solving this equation using the generalized Nakamura-Sasaki-Shibata transformation (34). ### Numerical methods To evaluate our main quantity of interest, namely the amplitude \(C^{(\pm),\,\mathrm{out}}_{\ell m\omega}\), we proceed as follows.

1. Choose a value of \(\ell\), \(|m|\leq\ell\) and \(M\omega\).
2. Choose a value of the angular momentum \(\mathcal{L}\), and solve the geodesic equations (5) to obtain \(t_{p}(r)\) and \(\phi_{p}(r)\). We use initial conditions \(t_{p}(r_{\mathrm{min}})=0\) and \(\phi_{p}(r_{\mathrm{min}})=0\), and integrate from \(r_{\mathrm{min}}=2\,(1+10^{-4})M\) up to \(r_{\mathrm{max}}=500/\omega\).
3. If \(\ell+m\) is even, integrate the homogeneous Zerilli equation. If \(\ell+m\) is odd, integrate the homogeneous Regge-Wheeler equation; see Eqs. (14) and (11).
4. Integrate the equation from the previous step with "in" boundary conditions, from \(r_{\mathrm{min}}=2\,(1+10^{-4})M\) to \(r=r_{\mathrm{max}}\). From \(X^{(\pm)}_{\ell m\omega}\) and \(\mathrm{d}X^{(\pm)}_{\ell m\omega}/\mathrm{d}r\) at \(r_{\mathrm{max}}\), calculate \(A^{(\pm),\,\mathrm{in},\,\mathrm{out}}_{\ell m\omega}\); see Appendix C.
5. Calculate the Wronskian (23) and the source \(S^{(\pm)}_{\ell m\omega}\) to evaluate the integral in Eq. (24).
6. Repeat steps 1 through 5, scanning the range \(M\omega\in[5\times 10^{-3},\,1.5]\) in steps of size \(\Delta(M\omega)=5\times 10^{-3}\), for \(\ell=2\) to 6, covering all \(|m|\leq\ell\) in between.

Our code is written in Mathematica. In step 3, we validated our integration of the homogeneous equations (14) by comparison against the integrators available in the Black Hole Perturbation Toolkit [35]. In step 5, we calculate the Wronskian as explained in Appendix C, and it is useful to rewrite Eq. (24) as a differential equation [36; 37] \[\frac{\mathrm{d}C^{(\pm),\,\mathrm{out}}_{\ell m\omega}}{\mathrm{d}r}=\frac{1 }{W^{(\pm)}_{\ell m\omega}}\,f^{-1}\,X^{(\pm),\,\mathrm{in}}_{\ell m\omega}\,S ^{(\pm)}_{\ell m\omega}\,.
\tag{37}\]

We integrate Eq. (37), with initial condition \(C^{(\pm),\,\mathrm{out}}_{\ell m\omega}(r_{\mathrm{min}})=0\), up to \(r_{\mathrm{max}}\), using the Regge-Wheeler source (10) or the Nakamura-Sasaki-Shibata-transformed Zerilli source (11), depending on whether \(\ell+m\) is odd or even, respectively. In the context of Eq. (37), the new Zerilli source term reduces the value of \(r_{\mathrm{max}}\) at which \(\mathrm{d}C^{(\pm),\,\mathrm{out}}_{\ell m\omega}/\mathrm{d}r\) becomes zero to the desired accuracy.

Figure 4: The \((\ell,m)=(2,2)\) source term for the Zerilli equation for a plunging particle with \(\mathcal{L}=0.9999\,\mathcal{L}_{\mathrm{crit}}\) starting from rest at spatial infinity, for \(M\omega=0.3\). Left panel: the absolute values of the original Zerilli source term, \(S^{(+)}\), and its Nakamura-Sasaki-Shibata-transformed version, \(\tilde{S}^{(+),\,(N)}\), for \(N=0\), 1, and 2 in the expansion (34). Right panels: the real (“\(\mathrm{Re}\)”) and imaginary (“\(\mathrm{Im}\)”) parts of \(S^{(+)}\) (top) and \(\tilde{S}^{(+),\,(N)}\) (three remaining panels) for increasing values of \(N\). The source term is largest near \(r\simeq 4M\), which corresponds to the location of the marginally stable circular orbit, at which the particle executes an increasing number of revolutions in the limit \(\mathcal{L}/\mathcal{L}_{\mathrm{crit}}\to 1\).

Here only, in contrast with the rest of this work, we decided to integrate the geodesic equations from near the horizon outwards. In particular, when integrating from spatial infinity, \(t_{p}\) acquires a large value around \(r\approx 2M\). This causes the factor \(\exp(i\omega t_{p})\), which appears in the source, to become highly oscillatory, and thus sensitive to our choice of \(r_{\mathrm{min}}\) and \(\omega\). Consequently, the phase (but not the amplitude) of \(C^{(\pm),\,\mathrm{out}}_{\ell m\omega}\) failed to converge in our numerical calculations. We solved this issue by fixing the initial conditions at the horizon instead of spatial infinity. Finally, we performed steps 1 to 6 for the range of angular momentum values \(\mathcal{L}=\{0,\,0.25,\,0.5,\,0.75,\,0.9,\,0.99,\,0.999\}\,\mathcal{L}_{\mathrm{crit}}\).
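To make steps 2 and 5 concrete, the following minimal Python sketch (ours, not the Mathematica implementation used in this work) integrates the \(\mathcal{E}=1\) geodesic equations outward from near the horizon and then accumulates Eq. (37). The callables `X_in` and `S` and the constant `W` are placeholders for the homogeneous "in" solution, the (transformed) source, and the Wronskian (23); the signs in the geodesic right-hand sides are our assumption, chosen so that \(t_{p}\) and \(\phi_{p}\) vanish at \(r_{\mathrm{min}}\) and grow toward the horizon.

```python
# Schematic sketch only, in units with M = 1 and for E = 1 plunges.
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0
R_MIN = 2.0 * (1.0 + 1e-4) * M           # near-horizon starting radius

def geodesic(L, r_max):
    """t_p(r) and phi_p(r) for a plunge from rest at infinity (step 2)."""
    def rhs(r, y):
        f = 1.0 - 2.0 * M / r
        v = np.sqrt(2.0 * M / r - f * L**2 / r**2)   # sqrt(E^2 - U) for E = 1
        return [-1.0 / (f * v), -L / (r**2 * v)]     # [dt/dr, dphi/dr]
    return solve_ivp(rhs, (R_MIN, r_max), [0.0, 0.0],
                     rtol=1e-10, atol=1e-12, dense_output=True)

def C_out(X_in, S, W, r_max):
    """Outgoing amplitude from Eq. (37), with C(r_min) = 0 (step 5)."""
    def rhs(r, C):
        f = 1.0 - 2.0 * M / r
        return [X_in(r) * S(r) / (W * f)]
    sol = solve_ivp(rhs, (R_MIN, r_max), [0.0 + 0.0j], rtol=1e-8)
    return sol.y[0, -1]
```

In practice, \(r_{\mathrm{max}}=500/\omega\) as in step 2, and the integration of `C_out` is stopped once its right-hand side is negligible, which is exactly where the faster-decaying transformed source pays off.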
### Energy spectrum

We first consider the energy spectrum for a particle in the near-critical limit \(\mathcal{L}\approx\mathcal{L}_{\mathrm{crit}}\). To our knowledge, this situation was studied only in the frequency domain, using the Sasaki-Nakamura equation [29] or using the Zerilli-Moncrief [26] and Cunningham-Price-Moncrief [25] master functions in Ref. [38]. To validate our calculations, we focus on two cases, \(\mathcal{L}/\mathcal{L}_{\mathrm{crit}}=0.9\) and \(0.9999\), that were examined in detail by Berti et al. [29].

In Fig. 5, we show the energy spectrum (21) for \(\mathcal{L}=0.9\,\mathcal{L}_{\mathrm{crit}}\) (left panels) and \(\mathcal{L}=0.9999\,\mathcal{L}_{\mathrm{crit}}\) (right panels). The top panels show the spectral energy for multipoles \(\ell=m\) ranging from 2 to 6 (different line colors) and their sum (dashed line). The bottom panels show the quadrupole mode \(\ell=2\) and all azimuthal contributions \(|m|\leq 2\) (different line styles). Our results are in excellent agreement with Ref. [29]; cf. Figs. 10 and 11 therein.

Figure 5: The energy spectra for a particle plunging with angular momentum \(\mathcal{L}=0.9\,\mathcal{L}_{\rm crit}\) (left panels) and \(\mathcal{L}=0.9999\,\mathcal{L}_{\rm crit}\) (right panels) into a Schwarzschild black hole. Top panels: the energy spectra from the multipoles \(\ell=m\) from 2 to 6 (colored lines), and their sum (dashed line). Bottom panels: the energy spectra for the quadrupole perturbation \(\ell=2\) and all \(|m|\leq 2\). Note the different ranges in the ordinates across the panels. Our results agree with Ref. [29], which solved the Sasaki-Nakamura equation instead.

For \(\mathcal{L}=0.9\,\mathcal{L}_{\mathrm{crit}}\), the energy radiated in each multipole \(\ell=m\) has a single maximum. The locations \(M\omega\) of these peaks coincide approximately with the real part of the fundamental (\(n=0\)) quasinormal mode frequency of a Schwarzschild black hole, \(M\omega_{\ell m0}\), as first observed by Detweiler and Szedenits [12]. For reference, these values are \(\mathrm{Re}[M\omega_{\ell m0}]\simeq\{0.373,\,0.599,\,0.809,\,1.012,\,1.212\}\) for \(\ell=2,\ldots,6\) [39; 40; 41]. (Due to the spherical symmetry of the Schwarzschild solution, all quasinormal modes with \(|m|\leq\ell\), at fixed \(\ell\geq 2\), are degenerate.)

In the near-critical limit \(\mathcal{L}=0.9999\,\mathcal{L}_{\mathrm{crit}}\), the spectral energy distribution has a peak at \(M\omega<\mathrm{Re}[M\omega_{\ell m0}]\). This peak corresponds to \(m\) times the particle's orbital frequency \(M\Omega_{\mathrm{orb}}=1/8\) at the marginally stable circular orbit at \(r=4M\) (cf. Fig. 2), that is: \[M\omega=m\,M\,\Omega_{\mathrm{orb}}=m/8\,. \tag{38}\] Therefore, the energy emitted at these frequencies is dominated by the particle's geodesic motion. This is not surprising, given that the maximum amplitude of the source term of the Zerilli equation is located at \(r\simeq 4M\), as we discussed in Fig. 4. For moderate values of \(\ell\), the contributions to the energy driven by the particle's orbital motion and the quasinormal mode excitation overlap, while for \(\ell\gg 1\) their contributions separate sufficiently to result in two peaks in the energy spectra; see the \(\ell=m=5\) and 6 spectra in Fig. 5.

We can estimate this separation as follows. First, we use the geometrical-optics (eikonal) limit [42; 43; 44] to approximate the real part of the quasinormal mode frequency as \[\mathrm{Re}[M\omega_{\mathrm{eik}}]\simeq\ell\,\Omega_{\mathrm{LR}}=\ell/(3\sqrt{3}), \tag{39}\] where \(M\Omega_{\mathrm{LR}}=1/(3\sqrt{3})\) is the orbital frequency of a null geodesic at the light ring. Equation (39) can also be obtained from the \(\ell\gg 1\) limit of a WKB approximation to the calculation of black-hole quasinormal modes [45; 46]. We can then estimate the separation between the "quasinormal-mode" and "geodesic" peaks by taking the difference of Eqs. (38) and (39): \[(\Delta M\omega)_{\mathrm{peak}}\simeq\frac{1}{3\sqrt{3}}\left(\ell-\frac{3\sqrt{3}}{8}m\right), \tag{40}\] which is valid for \(\ell\gg 1\), \(\mathcal{L}/\mathcal{L}_{\mathrm{crit}}\simeq 1\), and \(\mathcal{E}=1\). For \(\ell=m=6\), we find \((\Delta M\omega)_{\mathrm{peak}}\simeq 0.404\), in fair agreement with the numerical result \(\simeq 0.362\) obtained from Fig. 5.

Although not studied here, it is interesting to analyze the ultrarelativistic limit, in which the particle plunges with an energy \(\mathcal{E}\to\infty\). In this limit, the "geodesic" peak moves rightwards, eventually overlapping with the "quasinormal-mode" peak; see Ref. [29], Fig. 10. This occurs because as \(\mathcal{E}\to\infty\), the particle's marginally stable circular orbit coincides with the light ring, hence \(\Omega_{\mathrm{orb}}\simeq\Omega_{\mathrm{LR}}\) [31]. In this limit, we then have \[(\Delta M\omega)_{\mathrm{peak}}=(\ell-m)/(3\sqrt{3})\,, \tag{41}\] which is valid for \(\ell\gg 1\), \(\mathcal{L}/\mathcal{L}_{\mathrm{crit}}\simeq 1\), and \(\mathcal{E}\to\infty\). The peak separation vanishes when \(\ell=m\), reproducing the results of Ref. [29].
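As a quick sanity check of Eq. (40), the quoted estimate for \(\ell=m=6\) can be reproduced directly:

```python
# Evaluate the peak-separation estimate of Eq. (40).
import math

def peak_separation(ell, m):
    return (ell - 3.0 * math.sqrt(3.0) / 8.0 * m) / (3.0 * math.sqrt(3.0))

print(peak_separation(6, 6))   # ~0.4047 (quoted as ~0.404), vs ~0.362 from Fig. 5
```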
### Total energy

From the energy spectrum, we can compute the total energy \(\Delta E\) emitted in the form of gravitational waves by integrating the spectral density and summing the contributions from all multipoles: \[\Delta E=\sum_{\ell=2}^{\infty}\,\Delta E_{\ell}=\sum_{\ell=2}^{\infty}\sum_{m=-\ell}^{\ell}\,\int_{0}^{+\infty}\mathrm{d}\omega\,\frac{\mathrm{d}E_{\ell m}}{\mathrm{d}\omega}\,. \tag{42}\] To understand the relative contribution due to perturbations of each parity, we also define \(\Delta E^{(\pm)}\), where the superscript (\(+\)) means we add only the contributions to the energy coming from multipoles for which \(\ell+m\) is even, and (\(-\)) when \(\ell+m\) is odd.

For particles that plunge from infinity initially at rest, Oohara and Nakamura [20] observed the relation \[\left(M/\mu^{2}\right)\Delta E_{\ell}=a\,\exp(-b\,\ell\,) \tag{43}\] between the radiated energy and the multipole \(\ell\). We fit Eq. (43) to the outcome of computing \(\Delta E_{\ell}\), truncating the \(\ell\)-sum at \(\ell_{\rm max}=6\). Table 1 shows the fitting coefficients and the total energy emitted, including the individual polar and axial contributions. The results for the total energy agree with those of Ref. [29], Table 2, with less than 1% difference, when comparison is possible. As observed by Oohara and Nakamura, the value of \(a\) increases while that of \(b\) decreases as \(\mathcal{L}\) approaches \(\mathcal{L}_{\rm crit}\). In other words, the fraction of the total radiated energy contained in each \(\ell\)-pole tends to "even out" as \(\mathcal{L}\) increases.
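A minimal sketch of this fit follows. The `delta_E` values below are synthetic placeholders generated from the model itself, purely to illustrate the call; in practice they are the computed \((M/\mu^{2})\,\Delta E_{\ell}\) values.

```python
# Fit the Oohara-Nakamura relation, Eq. (43), to the multipole energies.
import numpy as np
from scipy.optimize import curve_fit

def oohara_nakamura(ell, a, b):
    return a * np.exp(-b * ell)

ells = np.arange(2, 7)                           # ell = 2, ..., ell_max = 6
delta_E = oohara_nakamura(ells, 1.6e-2, 2.1)     # placeholder data only
(a_fit, b_fit), _ = curve_fit(oohara_nakamura, ells, delta_E, p0=(1e-2, 1.0))
print(a_fit, b_fit)
```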
Figure 6 illustrates these observations. For example, we see in the leftmost panel that \(\Delta E_{2}\) varies from being about four orders of magnitude larger than \(\Delta E_{6}\) to being just over ten times larger as \(\mathcal{L}\) approaches criticality. The individual contributions of the polar and axial perturbations to the energy are shown in the middle and rightmost panels, respectively. We see that the energy emitted through the "axial radiative channel" is always subdominant relative to the polar one, even for near-critical values of \(\mathcal{L}\); the extreme case is, of course, \(\mathcal{L}=0\), where \(\Delta E^{\,(-)}\) vanishes.

Figure 6: Energy radiated to infinity in the plunge process. From left to right, the panels show the total energy radiated, only through the parity-even (“Zerilli”) perturbations, and only through the parity-odd (“Regge-Wheeler”) perturbations. The markers correspond to Eq. (42), whereas the straight lines correspond to the fit (43) suggested by Oohara and Nakamura [20]. The lines starting from the bottom correspond to \(\mathcal{L}=0\) and increase to \(\mathcal{L}=0.9999\,\mathcal{L}_{\rm crit}\) as one moves upwards. The energy emitted in the dominant quadrupole mode in the axial-parity radiative channel is always smaller than that emitted through the polar one, at fixed \(\mathcal{L}\). When \(\mathcal{L}=0\), there is no energy radiated in the axial-parity channel.

Interestingly, the axial and polar contributions to the total energy \(\Delta E\) are nonmonotonic with respect to the angular momentum. In Fig. 7, we show the ratio \(\Delta E\,^{(\pm)}/\Delta E\) as a function of the logarithm of \(1-\mathcal{L}/\mathcal{L}_{\rm crit}\). As we increase the particle's angular momentum (i.e., moving right to left along the abscissa), we see that \(\Delta E\,^{(-)}/\Delta E\) has a local maximum at \(\mathcal{L}\approx 0.75\,\mathcal{L}_{\rm crit}\), yet with only approximately \(14\%\) of the total energy budget.

Figure 7: Fraction of the total energy radiated by the polar and axial perturbations in the plunge process. We show both the polar [(+), left ordinate] and axial [(-), right ordinate] total energies as functions of the particle’s angular momentum. The fraction of the total energy radiated via each radiative channel is nonmonotonic in \(\mathcal{L}\), with the axial channel having a peak at \(\mathcal{L}/\mathcal{L}_{\rm crit}\approx 0.75\). The polar contribution is at least \(\approx 85\%\) of the total energy budget for all values of \(\mathcal{L}\) considered.

We interpret this result as follows. When \(\mathcal{L}=0\), by symmetry, all energy must be radiated through the polar channel. As we increase \(\mathcal{L}\), axial perturbations become increasingly excited, and a nonzero (albeit small) percentage of the energy is emitted through them. As \(\mathcal{L}\) approaches \(\mathcal{L}_{\rm crit}\), the particle orbits the black hole an increasing number of times around the _circular_ orbit \(r=4M\). Hence, again by symmetry, we expect the fraction of the total energy emitted through the polar channel to increase, at the expense of the fraction emitted through the axial channel.

### Time-domain waveforms

We also computed the time-domain Regge-Wheeler and Zerilli mode functions using Eq. (22). In Fig. 8, we show the dominant modes for the Regge-Wheeler and Zerilli functions, \((\ell,m)=(2,1)\) and \((2,2)\), respectively, as a function of the retarded time \((t-x)/M\). The top panel corresponds to a particle that plunges with \(\mathcal{L}=0.9\,\mathcal{L}_{\rm crit}\), while the bottom panel to \(\mathcal{L}=0.9999\,\mathcal{L}_{\rm crit}\). In the former case, we see that the waveform has the characteristic "precursor", "sharp burst" and "ringing tail" phases, as first observed for radially infalling particles by Davis et al. [8] and for wave scattering by Vishveshwara [47]. As \(\mathcal{L}\to\mathcal{L}_{\rm crit}\), the large values of \(\mathcal{L}\) cause the Zerilli (but not the Regge-Wheeler) function to have an intermediate quasimonochromatic phase, related to the particle's sweep around \(r=4M\). This behavior is qualitatively the same as that seen in Refs. [48; 49] for test particles plunging from the innermost stable circular orbit (ISCO) \(r=6M\). We see that the amplitude of the Regge-Wheeler function is always smaller than Zerilli's, even for the most nearly critical angular-momentum values we studied.

In Fig. 9, we take a closer look at the waveforms in the limit \(\mathcal{L}\to\mathcal{L}_{\rm crit}\), for the same multipole moments. In this limit, the waveforms become increasingly similar around their peak amplitudes, \(t-x\simeq-40\,M\), although differences are clearly visible in the "precursor" phase. Similar results hold for the other multipoles we examined. This result suggests that a quasiuniversal description of the plunge from the marginally stable circular orbit \(r=4M\) exists. Such a treatment exists for particles plunging from the ISCO [50].

## VI The semi-relativistic approximation

A complementary way of studying the gravitational wave emission by an infalling particle is via the so-called "semi-relativistic" approximation [30] (often used in the "kludge" approximation [51; 52; 53]). In this approach the particle is assumed to move along a fully relativistic geodesic
trajectory of the black-hole spacetime, while the gravitational wave emission itself is treated approximately, using the weak-gravity quadrupole formula. Despite its inherent inconsistency, the semi-relativistic approximation is known to perform surprisingly well when compared against more rigorous results obtained from black-hole perturbation theory. The price one has to pay for the conceptual and technical simplicity of this approach is that its accuracy deteriorates as soon as the particle's trajectory enters the near-horizon, strong-field regime, where radiation backscattering by the spacetime curvature becomes an important factor. Unfortunately, this condition is clearly met by a plunging particle, so we expect the semi-relativistic method to provide accurate results only for the early-time portion of the trajectory (i.e., the low-frequency part of the gravitational wave spectrum). Nevertheless, the less accurate \(\omega M\gtrsim 1\) part of the spectrum is of some interest in its own right, as it allows us to understand (and separate) the effects due to the particle's motion and due to backscattering.

The quadrupole-order gravitational-wave formalism underpinning the semi-relativistic approximation can be found in many general relativity textbooks; here we follow and expand the analysis of Ref. [54], Sec. 4.3.1, for radially infalling particles. We start by recalling that the appropriately averaged gravitational-wave luminosity is given by \[L=\tfrac{1}{5}\,\langle\,\dddot{M}_{ij}\dddot{M}^{ij}-\tfrac{1}{3}\dddot{M}^{2}\,\rangle, \tag{44}\] where the (mass) quadrupole moment \(M_{ij}\) is defined as \[M^{ij}(t)=\int\mathrm{d}^{3}x\,\rho(t,\mathbf{x})\,x^{i}x^{j}\,, \tag{45}\] for a mass density \(\rho(t,\mathbf{x})\), and with trace \(M=M^{i}{}_{i}\), which can be distinguished from the black hole's mass \(M\) by context. The total energy emitted in gravitational waves is given by the integral \[\Delta E=\int_{-\infty}^{t_{\text{max}}}\mathrm{d}t\,L(t)\,, \tag{46}\] where the luminosity is to be evaluated without any averaging (also, we can set \(t_{\text{max}}=\infty\), since the particle crosses the horizon only as \(t\to\infty\)). The same quantity can be evaluated in the frequency domain \[\Delta E=\int_{0}^{+\infty}\mathrm{d}\omega\,\frac{\mathrm{d}E}{\mathrm{d}\omega}\,, \tag{47}\] where \(\mathrm{d}E/\mathrm{d}\omega\) is the spectral energy distribution. For a point particle moving along a trajectory \(\mathbf{x}_{p}(t)\) we have \(\rho(t,\mathbf{x})=\mu\,\delta^{(3)}[\mathbf{x}-\mathbf{x}_{p}(t)]\), and we find \[M^{ij}(t)=\mu\,x_{p}^{i}(t)\,x_{p}^{j}(t)\,. \tag{48}\]

Figure 8: Time-domain Zerilli (solid lines) and Regge-Wheeler (dashed lines) mode functions excited by a test particle plunging with \(\mathcal{L}=0.9\,\mathcal{L}_{\text{crit}}\) (top panel) and \(\mathcal{L}=0.9999\,\mathcal{L}_{\text{crit}}\) (bottom panel) into a Schwarzschild black hole.
We focus on the dominant quadrupole multipole associated with the perturbations of each parity: \(m=2\) and \(m=1\) for the Zerilli and Regge-Wheeler modes, respectively. The former is always larger in amplitude than the latter.

Figure 9: Time-domain Zerilli (top panel) and Regge-Wheeler (bottom panel) mode functions excited by a test particle plunging with \(\mathcal{L}=0.99\,\mathcal{L}_{\text{crit}}\) (solid lines), \(\mathcal{L}=0.999\,\mathcal{L}_{\text{crit}}\) (dashed lines) and \(\mathcal{L}=0.9999\,\mathcal{L}_{\text{crit}}\) (dot-dashed lines) into a Schwarzschild black hole. As in Fig. 8, we consider the dominant quadrupolar Zerilli and Regge-Wheeler modes. In the limit \(\mathcal{L}\to\mathcal{L}_{\text{crit}}\), the solutions become quasiuniversal for \(t-x\gtrsim-40\,M\).

As in Sec. II, the geodesics under consideration can be taken to lie in the equatorial \(x\)-\(y\) plane, and the Cartesian coordinates can be related to the Schwarzschild-Droste coordinates (1) through Eqs. (6). In this setup, the only nonvanishing components of the quadrupole moment (45) are \(M_{11}\), \(M_{22}\), \(M_{12}\), and the trace is \(M=M_{11}+M_{22}\). Then, a short calculation leads to \[\Delta E =\frac{2}{15}\int_{-\infty}^{+\infty}\mathrm{d}t\,[\,\dddot{M}_{11}^{2}+\dddot{M}_{22}^{2}-\dddot{M}_{11}\dddot{M}_{22}+3\dddot{M}_{12}^{2}\,]\,,\] \[=\frac{2}{15\pi}\int_{0}^{+\infty}\mathrm{d}\omega\,\omega^{6}\,[\,|\tilde{M}_{11}|^{2}+|\tilde{M}_{22}|^{2}+3|\tilde{M}_{12}|^{2}-\,\mathrm{Re}[\tilde{M}_{11}\tilde{M}_{22}^{*}]\,]\,, \tag{49}\] where the Fourier transform of \(M_{ij}\) is defined as3

Footnote 3: For clarity, we adopt a slightly different notation for frequency-domain quantities in this section, mirroring Ref. [54].

\[\tilde{M}_{ij}(\omega)=\int_{-\infty}^{+\infty}\mathrm{d}t\,e^{i\omega t}\,M_{ij}(t)\,. \tag{50}\] The fact that \(M_{ij}\) is real implies the useful property \[\tilde{M}_{ij}^{*}(\omega)=\tilde{M}_{ij}(-\omega)\,. \tag{51}\] As discussed in Ref. [54], the integral (50) is divergent at \(t\to-\infty\), since \(x_{p}\to+\infty\). Therefore, some regularization procedure is required. This is achieved by working with the Fourier transform \(\ddot{\tilde{M}}_{ij}\) of the second time derivative, which is well behaved at spatial infinity. In fact, this procedure is equivalent to the regularization of \(\tilde{M}_{ij}(\omega)\) via integrations by parts, where the resulting divergent boundary terms are discarded (see, e.g., Detweiler and Szedenits [12] for a similar regularization of the solution of the Teukolsky equation sourced by a plunging particle). The outcome of this exercise is the regularized quadrupole moment \[\tilde{M}_{ij}^{\mathrm{reg}}=-\ddot{\tilde{M}}_{ij}/\omega^{2}\,, \tag{52}\] and from Eq. (49) we read off the regularized formula: \[\frac{\mathrm{d}E}{\mathrm{d}\omega}=\frac{2\,\omega^{2}}{15\pi}[\,|\ddot{\tilde{M}}_{11}|^{2}+|\ddot{\tilde{M}}_{22}|^{2}+3|\ddot{\tilde{M}}_{12}|^{2}-\mathrm{Re}[\ddot{\tilde{M}}_{11}\ddot{\tilde{M}}_{22}^{*}]\,]\,. \tag{53}\] For the actual numerical evaluation of \(\ddot{\tilde{M}}_{ij}\) (and the subsequent one of \(\mathrm{d}E/\mathrm{d}\omega\)), it is advantageous to convert the time integral into a radial integral using the geodesic equations (2), leading to \[\ddot{\tilde{M}}_{ij}=\int_{2M}^{r_{\mathrm{max}}}\frac{\mathrm{d}r}{f}\left[\frac{2M}{r}-f\,\frac{\mathcal{L}^{2}}{r^{2}}\right]^{-\frac{1}{2}}\ddot{M}_{ij}\,e^{i\omega t_{p}(r)}\,. \tag{54}\]
For the problem at hand, the individual \(\ddot{M}_{ij}\) components required for the calculation of Eq. (52) are \[\ddot{M}_{11} =2\mu\left(x_{p}\ddot{x}_{p}+\dot{x}_{p}^{2}\right), \tag{55a}\] \[\ddot{M}_{22} =2\mu\left(y_{p}\ddot{y}_{p}+\dot{y}_{p}^{2}\right), \tag{55b}\] \[\ddot{M}_{12} =\mu\left(2\dot{x}_{p}\dot{y}_{p}+x_{p}\ddot{y}_{p}+y_{p}\ddot{x}_{p}\right). \tag{55c}\] With the help of Eqs. (2), both the accelerations \(\ddot{x}_{p}\), \(\ddot{y}_{p}\) and the velocities \(\dot{x}_{p}\), \(\dot{y}_{p}\) can be rewritten solely as functions of \(r\), \(\phi_{p}(r)\) and the orbital constant \(\mathcal{L}\). At the same time, \(t_{p}(r)\) appears explicitly in the exponential term. Both \(\phi_{p}(r)\) and \(t_{p}(r)\) are obtained numerically as in Sec. II. We chose trajectories with \(\phi_{p}(0)=0\) at some initial radius \(r_{\mathrm{max}}=r(0)\). The same radius serves as the "infinity" upper limit in the \(\ddot{\tilde{M}}_{ij}\) integral. The initial time \(t=0\) can be chosen arbitrarily, because the addition of a constant to \(t_{p}(r)\) does not affect the numerical value of Eq. (54).

For any value \(\mathcal{L}<\mathcal{L}_{\mathrm{crit}}\), the integral in the \(\ddot{\tilde{M}}_{11}\) component is the slowest converging one at large \(r\) (the absolute value of the integrand decays as \(\sim r^{-1/2}\)). In order to improve the convergence of this component and reduce the associated numerical error, we employ a procedure similar to that in Sec. IV.2: namely, we subtract the slowly-decaying part prior to the numerical integration and then add back the (analytically obtained) asymptotic contribution from that part.

The semi-relativistic energy spectrum, calculated from Eqs. (53) and (54), is shown in Fig. 10, and has a characteristic single-hump profile.

Figure 10: The energy spectra for a particle plunging with angular momentum \(\mathcal{L}=0\), \(\mathcal{L}=0.75\,\mathcal{L}_{\mathrm{crit}}\) and \(\mathcal{L}=0.9999\,\mathcal{L}_{\mathrm{crit}}\) into a Schwarzschild black hole in the semi-relativistic approximation (“SR”, dashed lines) and black-hole perturbation theory, for \(\ell=m=2\) (“BHPT”, solid lines). Both calculations agree at low frequencies \(M\omega\ll 1\), where the radiation is due to the particle’s quasi-Newtonian motion. For \(\mathcal{L}=0\) and \(\mathcal{L}=0.75\,\mathcal{L}_{\mathrm{crit}}\), the kludge calculation underestimates the location of the spectrum’s peak, which is related to the hole’s fundamental quasinormal mode frequency [12]. As \(\mathcal{L}\to\mathcal{L}_{\mathrm{crit}}\), the peak of the spectrum is dominated by the particle’s orbit around \(r=4M\), and the two calculations agree qualitatively with each other up to \(M\omega\lesssim 0.25\). The semi-relativistic approximation predicts a slowly decaying tail for the spectrum.
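A compact sketch of this pipeline, Eqs. (53) and (54), is given below. The `Mddot` entries and `t_p` are hypothetical callables of \(r\), standing for the quantities in Eq. (55) and the numerically integrated geodesic; the Jacobian uses \(|\mathrm{d}t/\mathrm{d}r|=f^{-1}(2M/r-f\mathcal{L}^{2}/r^{2})^{-1/2}\) for \(\mathcal{E}=1\), matching Eq. (54), and the quadrature choices are illustrative only.

```python
# Semi-relativistic spectrum, Eqs. (53)-(54), in units with M = 1 and E = 1.
import numpy as np

def mddot_tilde(Mddot_ij, t_p, omega, L, r_max=1.0e4, n=200000):
    """Radial quadrature for the Fourier transform of Mddot_ij, Eq. (54)."""
    r = np.linspace(2.0 * (1.0 + 1e-4), r_max, n)
    f = 1.0 - 2.0 / r
    jac = 1.0 / (f * np.sqrt(2.0 / r - f * L**2 / r**2))   # |dt/dr| for E = 1
    return np.trapz(jac * Mddot_ij(r) * np.exp(1j * omega * t_p(r)), r)

def dE_domega(Mddot, t_p, omega, L):
    """Spectral energy distribution, Eq. (53); Mddot is a dict of callables."""
    T11 = mddot_tilde(Mddot['11'], t_p, omega, L)
    T22 = mddot_tilde(Mddot['22'], t_p, omega, L)
    T12 = mddot_tilde(Mddot['12'], t_p, omega, L)
    return (2.0 * omega**2 / (15.0 * np.pi)) * (
        abs(T11)**2 + abs(T22)**2 + 3.0 * abs(T12)**2
        - (T11 * np.conj(T22)).real)
```

In a production code, the slowly decaying part of the \(\ddot{\tilde{M}}_{11}\) integrand would additionally be subtracted and restored analytically, as described above.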
When comparing against the full black-hole perturbation theory result for \(\ell=m=2\) shown in Fig. 5, we notice a moderate disagreement in the location of the emission peak. This is not surprising, because the exact location of this peak depends on the properties of the BH potential near \(3M\); as pointed out earlier, this information is missing altogether in the semi-relativistic approximation. As expected, the two spectra are in good agreement in the low-frequency, \(\omega M\ll 1\), end of the spectrum, which is the part that corresponds to quasi-Newtonian motion. Interestingly, the agreement improves to some degree as we approach the critical value \(\mathcal{L}_{\text{crit}}\) for scattering. In this case, the particle spends a large amount of time orbiting around \(r=4\,M\) and the bulk of gravitational-wave emission is generated there, as we have discussed in Sec. V.

The opposite, high-frequency end of the spectrum, \(\omega M\gg 1\), also has some interest of its own. In the fully relativistic calculation this part of the spectrum appears to be independent of the angular momentum \(\mathcal{L}\) and the modal numbers \(\ell\) and \(m\). We can approximate the high-frequency "tail" of \(\mathrm{d}E_{\ell m}/\mathrm{d}\omega\) in the special case of radial infall, taking into account that this part of the spectrum corresponds to the near-horizon region of integration in Eq. (24). Under these circumstances we can recast the integrand into a single exponential, and expand the exponent up to linear order in \(r/(2M)-1\). The integral can then be computed analytically, and it is found to be dominated by the lower limit of integration. Moreover, in the same high-frequency limit the Wronskian can be approximated as \(W_{\ell m\omega}\simeq 2i\omega\). These manipulations lead to a scaling \[\frac{\mathrm{d}E}{\mathrm{d}\omega}\propto\frac{e^{-8\pi M\omega}}{M\omega}\,,\quad\text{(perturbation theory)}\,. \tag{56}\] The same procedure applied to the semi-relativistic spectrum leads to a more slowly decaying tail \[\frac{\mathrm{d}E}{\mathrm{d}\omega}\propto\frac{e^{-4\pi M\omega}}{M\omega}\,,\quad\text{(semi-relativistic)}\,. \tag{57}\] The difference can be traced back to the additional highly oscillatory function \(X^{(\pm),\,\mathrm{in}}_{\ell m\omega}\simeq\exp(i\omega x)\) in the full black-hole perturbation theory expression, which suppresses the integral at large \(\omega\). This analysis thus explains the difference in the high-frequency tails shown in Fig. 10.

## VII Conclusions

We reviewed, and generalized, a transformation by Nakamura, Sasaki and Shibata [28] that makes the source term of the Zerilli function have a faster fall-off at spatial infinity, thereby improving the numerical convergence of the convolution integral that arises in the calculation of gravitational radiation by particles in unbound geodesic motion in the Schwarzschild spacetime. As an application, we studied the gravitational radiation produced by test particles that plunge from rest and with angular momentum \(\mathcal{L}\) from spatial infinity into a Schwarzschild black hole. In particular, we focused on the limit in which \(\mathcal{L}\) approaches from below the critical value \(\mathcal{L}_{\text{crit}}\) for scattering. To our knowledge, this is the first time this calculation has been done using the original Regge-Wheeler and Zerilli master functions. Our results are in agreement with the work by Berti et al. [29], which used the Sasaki-Nakamura equation. We studied in detail the relative contributions to the energy radiated in gravitational waves due to perturbations of polar and axial parity. We found that the former always dominates. We also observed a quasiuniversal late-time behavior of the waveforms in the limit in which \(\mathcal{L}\) approaches the critical value for scattering, \(\mathcal{L}_{\text{crit}}=4M\).

The main merit of the Nakamura-Sasaki-Shibata transformation is that it only requires minimal modifications to the source term of the Zerilli function [cf. Eqs. (29) and (31)]. The new source term can be easily computed analytically with any symbolic algebra software.
In contrast, the Sasaki-Nakamura formalism requires the numerical integration of an auxiliary second-order differential equation for the calculation of the source term [16; 17; 18]. However, an advantage of the Sasaki-Nakamura formalism is that it applies to the Kerr spacetime, while the method presented here does not. Our work is an alternative to that of Hopper [38], which addresses the sources of the Zerilli-Moncrief and Cunningham-Price-Moncrief master functions.

We complemented our calculations in black-hole perturbation theory with an analysis of the same plunging-particle problem in the semi-relativistic approximation. We found that the two energy spectra agree best in the limit \(\mathcal{L}\to\mathcal{L}_{\text{crit}}\), when the energy is dominated by the particle's motion around the marginally stable circular orbit. We also studied the high-frequency limit of the spectrum, expanding upon the discussion in Ref. [54]. The method presented here can be used in other problems involving unbound motion in the Schwarzschild spacetime, including ultrarelativistic plunges [55; 10], scattering [27], or in the spacetime of relativistic stars [56; 57; 58; 59] using the Regge-Wheeler-Zerilli formalism.

###### Acknowledgements.

We thank Emanuele Berti, Benjamin Leather, Caio F. B. Macedo, Raj Patil, Masaru Shibata, Jan Steinhoff and Helvi Witek for discussions. HOS acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) - Project No.: 386119226. KG acknowledges support from research grant PID2020-1149GB-I00 of the Spanish Ministerio de Ciencia e Innovacion. K.Y. acknowledges support from NSF Grant PHY-2207349, PHY-2309066, a Sloan Foundation Research Fellowship and the Owens Family Foundation. This work makes use of the Black Hole Perturbation Toolkit [35]. Some of our calculations were performed in the Hypatia cluster at the Max Planck Institute for Gravitational Physics.

## Appendix A The sources of the Regge-Wheeler and Zerilli equations

In this Appendix we present the sources for the Regge-Wheeler and Zerilli equations (7), quoting the formulas presented in Refs. [32; 33]; the original derivation is due to Zerilli [5]. The source \(S^{(+)}_{\ell m\omega}\), which excites the perturbations of polar parity, is given by \[S^{(+)}_{\ell m\omega} =-if\frac{\mathrm{d}}{\mathrm{d}r}\left[\frac{f^{2}}{\Lambda}\left(\frac{ir}{f}\,\tilde{C}_{1\ell m\omega}+\tilde{C}_{2\ell m\omega}\right)\right]\] \[\quad+i\frac{f}{r\Lambda}\left[i\frac{\lambda r^{2}-3\lambda Mr-3M^{2}}{rf}\,\tilde{C}_{1\ell m\omega}\right.\] \[\quad\left.+\frac{\lambda(\lambda+1)r^{2}+3\lambda Mr+6M^{2}}{r^{2}}\,\tilde{C}_{2\ell m\omega}\right], \tag{10}\] where4

Footnote 4: We note that in the equation for \(\tilde{B}_{\ell m\omega}\), the term proportional to \(A^{(1)}_{\ell m\omega}\) in Eq. (A42) of Ref. [32] has a typo, which is corrected in Eq. (4.12) of Ref. [33].
\[\tilde{B}_{\ell m\omega} =\frac{8\pi r^{2}f}{\Lambda}\left[A_{\ell m\omega}+\frac{1}{ \sqrt{\ell(\ell+1)/2}}\,B_{\ell m\omega}\right]\] \[\quad-4\pi\frac{\sqrt{2}}{\Lambda}\frac{M}{\omega}A^{(1)}_{\ell m \omega}\,, \tag{11}\] \[\tilde{C}_{1\ell m\omega} =\frac{8\pi}{\sqrt{2}\omega}A^{(1)}_{\ell m\omega}+\frac{1}{r} \tilde{B}_{\ell m\omega}\] \[\quad-16\pi r\left[\frac{1}{2}\frac{(\ell+2)!}{(\ell-2)!}\right] ^{\frac{1}{2}}\,F_{\ell m\omega}\,,\] (12) \[\tilde{C}_{2\ell m\omega} =\frac{8\pi i}{\omega\sqrt{\ell(\ell+1)/2}}\frac{r}{f}\,B^{(0)}_{ \ell m\omega}-\frac{i}{f}\,\tilde{B}_{\ell m\omega}\] \[\quad+\frac{16\pi ir^{2}}{f}\left[\frac{1}{2}\frac{(\ell+2)!}{( \ell-2)!}\right]^{-\frac{1}{2}}F_{\ell m\omega}\,. \tag{13}\] Here, \(f=1-2M/r\), and \(A_{\ell m\omega}\), \(A_{\ell m\omega}^{(1)}\), \(B_{\ell m\omega}^{(0)}\), \(B_{\ell m\omega}\), \(\tilde{B}_{\ell m\omega}\) and \(F_{\ell m\omega}\) are the Fourier-domain projections of the particle's energy-momentum tensor onto the tensor harmonic basis; cf. Ref. [32], Table 1, for their expressions in time-domain and generic orbit. These functions encode information about the particle's geodesic motion, and when particularized for plunging geodesics they read \[A_{\ell m\omega} =\mu\,\frac{V}{r^{2}f^{2}}\,Y^{*}_{\ell m}\,e^{i\omega t_{p}}\,, \tag{14a}\] \[A_{\ell m\omega}^{(1)} =-i\sqrt{2}\,\mu\,\frac{\mathcal{E}}{r^{2}f}\,Y^{*}_{\ell m}\,e^{ i\omega t_{p}}\,,\] (14b) \[B_{\ell m\omega}^{(0)} =i\mu\,\frac{\mathcal{E}\,\mathcal{L}}{Vr^{3}}\frac{1}{\sqrt{\ell (\ell+1)/2}}\partial_{\phi}Y^{*}_{\ell m}\,e^{i\omega t_{p}}\,,\] (14c) \[B_{\ell m\omega}^{(1)} =-\mu\,\frac{\mathcal{L}}{r^{3}f}\frac{1}{\sqrt{\ell(\ell+1)/2}} \,\partial_{\phi}Y^{*}_{\ell m}\,e^{i\omega t_{p}}\,,\] (14d) \[F_{\ell m\omega} =\mu\,\frac{\mathcal{L}^{2}}{Vr^{4}}\left[\frac{1}{2}\frac{(\ell +2)!}{(\ell-2)!}\right]^{-\frac{1}{2}}\partial_{\phi\phi}Y^{*}_{\ell m}\,e^{ i\omega t_{p}}, \tag{14e}\] where, for brevity, we wrote \(Y^{*}_{\ell m}=Y^{*}_{\ell m}(\pi/2,\phi_{p})\) and defined \(V=\sqrt{\mathcal{E}^{2}-U}\), where \(U\) is given by Eq. (3). Here, \(\mu\) is the particle's mass and \(t_{p}\) and \(\phi_{p}\) are functions of \(r\), obtained by integrating the geodesic equations (5). The source \(S^{(-)}_{\ell m\omega}\), that excites the perturbations of axial parity, is given by \[S^{(-)}_{\ell m\omega} =\frac{8\pi ir}{r}\left[\frac{1}{2}\frac{(\ell+2)!}{(\ell-2)!} \right]^{-\frac{1}{2}}\left[-r^{2}\frac{\mathrm{d}}{\mathrm{d}r}(fD_{\ell m\omega})\right.\] \[\quad\left.+\,\sqrt{2\lambda}\,rfQ_{\ell m\omega}\right], \tag{14}\] where, analogously to the polar case, \(D_{\ell m\omega}\) and \(Q_{\ell m\omega}\) are the Fourier-domain projections of the particle's energy-momentum tensor onto the tensor harmonic basis; cf. Ref. [32], Table 1, for their time-domain, general orbit forms. 
When particularized for plunging trajectories, \(D_{\ell m\omega}\) and \(Q_{\ell m\omega}\) become: \[D_{\ell m\omega} =i\mu\,\frac{\mathcal{L}^{2}}{Vr^{4}}\left[\frac{1}{2}\frac{(\ell+2)!}{(\ell-2)!}\right]^{-\frac{1}{2}}X^{*}_{\ell m}\,e^{i\omega t_{p}}\,, \tag{15a}\] \[Q_{\ell m\omega} =-i\mu\,\frac{\mathcal{L}}{fr^{3}}\frac{1}{\sqrt{\ell(\ell+1)/2}}\,\partial_{\theta}Y^{*}_{\ell m}\,e^{i\omega t_{p}}\,, \tag{15b}\] where we introduced the shorthand notation \[X_{\ell m}=2\partial_{\phi}(\partial_{\theta}-\cot\theta)Y_{\ell m}\,.\] In the case of radial infall, \(\mathcal{L}=0\), both \(D_{\ell m\omega}\) and \(Q_{\ell m\omega}\) are zero, and consequently \(S^{(-)}_{\ell m\omega}\) vanishes.

## Appendix B The near-horizon and far-field expansions of the Regge-Wheeler and Zerilli functions

When integrating numerically the homogeneous Regge-Wheeler and Zerilli equations, we use series representations of the solutions to these equations in the near-horizon (\(r/(2M)-1\ll 1\), or \(x\to-\infty\)) and asymptotic (\(r\gg 2M\), or \(x\to\infty\)) regions of the Schwarzschild spacetime. The coefficients in these series satisfy certain recursion relations that we summarize here.

We first consider the near-horizon limit. We assume that the Regge-Wheeler and Zerilli mode functions \(X^{(\pm)}_{\ell m\omega}\) can be factorized as \[X^{(\pm)}_{\ell m\omega}=e^{-i\omega x}\sum_{n=0}^{\infty}c_{n}^{(\pm)}\,z(r)^{n}\,,\quad z=r/(2M)-1\,. \tag{16}\] We substitute Eq. (16) in the homogeneous equation (14) and derive a recursion relation between the coefficients \(c_{n}^{(\pm)}\). For the Zerilli equation this relation is \[\mathfrak{a}\,c_{n}^{(+)} =\mathfrak{b}\,c_{n-1}^{(+)}+\mathfrak{c}\,c_{n-2}^{(+)}+\mathfrak{d}\,c_{n-3}^{(+)}+\mathfrak{e}\,c_{n-4}^{(+)}+\mathfrak{f}\,c_{n-5}^{(+)}\,, \tag{101}\] where \[\mathfrak{a} =n(3+2\lambda)^{2}(n-4i\sigma)\,,\] \[\mathfrak{b} =4\lambda^{2}+i(40\lambda+36)(n-1)\sigma+\lambda[2(9-4n)n-6]-3(n-2)(2n-1)\,,\] \[\mathfrak{c} =24\lambda^{3}-\lambda^{2}(24n^{2}-108n+72)-\lambda(36n^{2}-168n+174)-9n^{2}+45n-54+i(160\lambda^{2}+288\lambda+108)(n-2)\sigma\,,\] \[\mathfrak{d} =24\lambda^{3}-\lambda^{2}(16n^{2}-108n+144)-\lambda(12n^{2}-84n+144)+i(160\lambda^{2}+192\lambda+36)(n-3)\sigma\,,\] \[\mathfrak{e} =i\lambda\,(80\lambda+48)\,(n-4)\sigma-4\lambda^{2}(n-6)(n-3)+8\lambda^{3}\,,\] \[\mathfrak{f} =16i\lambda^{2}(n-5)\,, \tag{102}\] and \(\sigma=M\omega\) [39]. For the Regge-Wheeler equation we obtain instead \[n(n+4i\sigma)\,c_{n}^{(-)} =-[2n^{2}-12i(n-1)\sigma-5n+\ell(\ell+1)-6]\,c_{n-1}^{(-)}-[(n-\ell-3)(n+\ell-2)-12i(n-2)\sigma]\,c_{n-2}^{(-)}+4i(n-3)\sigma\,c_{n-3}^{(-)}\,. \tag{103}\] In Eqs. (101) and (103), \(c_{n}^{(\pm)}=0\) for negative values of \(n\).
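For concreteness, a short Python transcription of the Regge-Wheeler recursion (103) follows; the overall normalization \(c_{0}=1\) is our choice, since only the ratios of the coefficients are fixed by the recursion.

```python
# Near-horizon series coefficients c_n of Eq. (16) for the Regge-Wheeler
# equation, from the recursion (103); sigma = M*omega, and c_0 = 1 by choice.
import numpy as np

def rw_near_horizon_coeffs(ell, sigma, n_max):
    c = np.zeros(n_max + 1, dtype=complex)
    c[0] = 1.0
    for n in range(1, n_max + 1):
        rhs = -(2*n**2 - 12j*(n - 1)*sigma - 5*n + ell*(ell + 1) - 6) * c[n - 1]
        if n >= 2:
            rhs -= ((n - ell - 3)*(n + ell - 2) - 12j*(n - 2)*sigma) * c[n - 2]
        if n >= 3:
            rhs += 4j*(n - 3)*sigma * c[n - 3]
        c[n] = rhs / (n * (n + 4j*sigma))
    return c
```

The Zerilli recursion (101) is implemented in the same way, with the fraktur coefficients (102) in place of the bracketed factors.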
We now consider the asymptotic expansion of \(X_{\ell m\omega}^{(\pm)}\). We assume they can be factorized as \[X_{\ell m\omega}^{(\pm)}=J_{\ell m\omega}^{(\pm)}(r)\,e^{+i\omega x}\,, \tag{104}\] where the "Jost function" \(J_{\ell m\omega}^{(\pm)}\) approaches a constant as \(x\to\infty\), in order to recover a purely outgoing behavior. We then substitute Eq. (104) in the homogeneous equation (14). This results in a differential equation for \(J_{\ell m\omega}^{(\pm)}\), \[\left[f\frac{\mathrm{d}^{2}}{\mathrm{d}r^{2}}+\left(\frac{2M}{r^{2}}+2i\omega\right)\frac{\mathrm{d}}{\mathrm{d}r}-\frac{V_{\ell}^{(\pm)}}{f}\right]J_{\ell m\omega}^{(\pm)}=0\,, \tag{105}\] which we solve with Frobenius' method. We assume a series expansion of the form \[J_{\ell m\omega}^{(\pm)}=\sum_{n=0}^{\infty}a_{n}^{(\pm)}/(\omega r)^{n}\,. \tag{106}\] We then substitute this expansion in Eq. (105), and derive a recursion relation between the coefficients \(a_{n}^{(\pm)}\). When using the Zerilli potential (11a), we obtain \[2i\lambda^{2}na_{n}^{(+)} =\lambda[\lambda(n-1)n-12i\sigma(n-1)-2\lambda(\lambda+1)]\,a_{n-1}^{(+)}+2\sigma[\lambda(3-\lambda)(n-2)(n-1)-(\lambda^{2}+9i\sigma)(n-2)-3\lambda^{2}]\,a_{n-2}^{(+)}+3\sigma^{2}[(3-4\lambda)(n-3)(n-2)-4\lambda(n-3)-6\lambda]\,a_{n-3}^{(+)}-18\sigma^{3}(n-3)^{2}\,a_{n-4}^{(+)}\,, \tag{107}\] while using the Regge-Wheeler potential (11b) we find \[2ina_{n}^{(-)} =-2\sigma[(n+1)(n-3)]\,a_{n-1}^{(-)}-\left[\ell(\ell+1)-n(n-1)\right]a_{n-2}^{(-)}\,, \tag{108}\] where \(a_{n}^{(\pm)}=0\) for negative values of \(n\) [34; 60]. The derivations we just made also apply to modes that behave as \(\simeq\exp(-i\omega x)\) at spatial infinity. In practice, we can replace \(\sigma\to-\sigma\) in Eqs. (107) and (108).

## Appendix C Calculation of the Wronskian

To calculate the Wronskian (23), we first need to calculate the mode amplitude \(A_{\ell m\omega}^{(\pm)\,\mathrm{in}}\). We do this calculation following the same strategy outlined in Ref. [29], Appendix A1, in the context of the Sasaki-Nakamura formalism. We first integrate \(X_{\ell m\omega}^{(\pm),\,\mathrm{in}}\) from near the horizon [making use of the recursion relations (101) and (103)] out to some large \(r_{\mathrm{max}}\). This gives us two constants, \(X_{\ell m\omega}^{(\pm),\,\mathrm{in}}(r_{\mathrm{max}})\) and \(\mathrm{d}X_{\ell m\omega}^{(\pm),\,\mathrm{in}}(r_{\mathrm{max}})/\mathrm{d}r\). For large \(r\), the mode function is approximately given by \[X_{\ell m\omega}^{(\pm),\,\mathrm{in}} \simeq A_{\ell m\omega}^{(\pm),\,\mathrm{in}}\,e^{-i\omega x}+A_{\ell m\omega}^{(\pm),\,\mathrm{out}}\,e^{+i\omega x}=A_{\ell m\omega}^{(\pm),\,\mathrm{in}}\,J_{\ell m\omega}^{(\pm),\,\mathrm{in}}\,e^{-i\omega x}+A_{\ell m\omega}^{(\pm),\,\mathrm{out}}\,J_{\ell m\omega}^{(\pm),\,\mathrm{out}}\,e^{+i\omega x}\,, \tag{109}\] where we used the factorization (104) in the second equality. We evaluate the Jost functions \(J_{\ell m\omega}^{(\pm)}\) using the recursion relations (107) and (108), with \(a_{0}^{(\pm)}=1\), adding terms in the series until a sufficient level of accuracy is reached. Equation (109) and its derivative with respect to \(r\) provide two equations that depend on \(X_{\ell m\omega}^{(\pm),\,\mathrm{in}}\) (and its derivative) at \(r_{\mathrm{max}}\), the outcomes of the numerical integration of \(X_{\ell m\omega}^{(\pm),\,\mathrm{in}}\). We use these two equations to solve for the two amplitudes \(A_{\ell m\omega}^{(\pm),\,\mathrm{in}}\) and \(A_{\ell m\omega}^{(\pm),\,\mathrm{out}}\). The former is then used to calculate the Wronskian (23).
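This matching step amounts to a \(2\times 2\) linear solve. A minimal sketch follows; the Jost functions and their derivatives at \(r_{\mathrm{max}}\) are assumed to have been evaluated from the truncated series (106) (with \(\sigma\to-\sigma\) for the ingoing mode, as noted at the end of Appendix B), and for simplicity we work with tortoise-coordinate derivatives, which differ from \(\mathrm{d}/\mathrm{d}r\) by a factor of \(f\).

```python
# Solve Eq. (109) and its x-derivative for the amplitudes A_in and A_out,
# given the numerically integrated X and dX/dx at r_max and the Jost
# functions J_in, J_out (with their x-derivatives) from the asymptotic series.
import numpy as np

def extract_amplitudes(X, dX_dx, J_in, dJin_dx, J_out, dJout_dx, omega, x):
    e_m, e_p = np.exp(-1j*omega*x), np.exp(+1j*omega*x)
    u_in, u_out = J_in * e_m, J_out * e_p                  # basis solutions
    du_in = (dJin_dx - 1j*omega*J_in) * e_m                # d(u_in)/dx
    du_out = (dJout_dx + 1j*omega*J_out) * e_p             # d(u_out)/dx
    mat = np.array([[u_in, u_out], [du_in, du_out]], dtype=complex)
    A_in, A_out = np.linalg.solve(mat, np.array([X, dX_dx], dtype=complex))
    return A_in, A_out
```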
2306.01014
Functional Ghobber-Jaming Uncertainty Principle
Let $(\{f_j\}_{j=1}^n, \{\tau_j\}_{j=1}^n)$ and $(\{g_k\}_{k=1}^n, \{\omega_k\}_{k=1}^n)$ be two p-orthonormal bases for a finite dimensional Banach space $\mathcal{X}$. Let $M,N\subseteq \{1, \dots, n\}$ be such that \begin{align*} o(M)^\frac{1}{q}o(N)^\frac{1}{p}< \frac{1}{\displaystyle \max_{1\leq j,k\leq n}|g_k(\tau_j) |}, \end{align*} where $q$ is the conjugate index of $p$. Then for all $x \in \mathcal{X}$, we show that \begin{align}\label{FGJU} (1) \quad \quad \quad \quad \|x\|\leq \left(1+\frac{1}{1-o(M)^\frac{1}{q}o(N)^\frac{1}{p}\displaystyle\max_{1\leq j,k\leq n}|g_k(\tau_j)|}\right)\left[\left(\sum_{j\in M^c}|f_j(x)|^p\right)^\frac{1}{p}+\left(\sum_{k\in N^c}|g_k(x) |^p\right)^\frac{1}{p}\right]. \end{align} We call Inequality (1) as \textbf{Functional Ghobber-Jaming Uncertainty Principle}. Inequality (1) improves the uncertainty principle obtained by Ghobber and Jaming \textit{[Linear Algebra Appl., 2011]}.
K. Mahesh Krishna
2023-06-01T04:17:41Z
http://arxiv.org/abs/2306.01014v1
**FUNCTIONAL GHOBBER-JAMING UNCERTAINTY PRINCIPLE**

**K. MAHESH KRISHNA**

Post Doctoral Fellow Statistics and Mathematics Unit Indian Statistical Institute, Bangalore Centre Karnataka 560 059, India Email: [email protected] Date: June 5, 2023

**Abstract**: Let \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) and \((\{g_{k}\}_{k=1}^{n},\{\omega_{k}\}_{k=1}^{n})\) be two p-orthonormal bases for a finite dimensional Banach space \(\mathcal{X}\). Let \(M,N\subseteq\{1,\ldots,n\}\) be such that \[o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}<\frac{1}{\max\limits_{1\leq j,k\leq n}|g_{k}(\tau_{j})|},\] where \(q\) is the conjugate index of \(p\). Then for all \(x\in\mathcal{X}\), we show that \[\|x\|\leq\left(1+\frac{1}{1-o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}\max\limits_{1\leq j,k\leq n}|g_{k}(\tau_{j})|}\right)\left[\left(\sum\limits_{j\in M^{c}}|f_{j}(x)|^{p}\right)^{\frac{1}{p}}+\left(\sum\limits_{k\in N^{c}}|g_{k}(x)|^{p}\right)^{\frac{1}{p}}\right]. \tag{1}\] We call Inequality (1) the **Functional Ghobber-Jaming Uncertainty Principle**. Inequality (1) improves the uncertainty principle obtained by Ghobber and Jaming _[Linear Algebra Appl., 2011]_.

**Keywords**: Uncertainty Principle, Orthonormal Basis, Hilbert space, Banach space.

**Mathematics Subject Classification (2020)**: 42C15, 46B03, 46B04.

###### Contents

* 1 Introduction
* 2 Functional Ghobber-Jaming Uncertainty Principle

## 1. Introduction

Let \(d\in\mathbb{N}\) and \(\widehat{\ \ \ }:\mathcal{L}^{2}(\mathbb{R}^{d})\to\mathcal{L}^{2}(\mathbb{R}^{d})\) be the unitary Fourier transform obtained by extending uniquely the bounded linear operator \[\widehat{\ \ }:\mathcal{L}^{1}(\mathbb{R}^{d})\cap\mathcal{L}^{2}(\mathbb{R}^{d})\ni f\mapsto\widehat{f}\in C_{0}(\mathbb{R}^{d});\quad\widehat{f}:\mathbb{R}^{d}\ni\xi\mapsto\widehat{f}(\xi)\coloneqq\int_{\mathbb{R}^{d}}f(x)e^{-2\pi i\langle x,\xi\rangle}\,dx\ \in\mathbb{C}.\] In 2007, Jaming [4] extended the uncertainty principle obtained by Nazarov for \(\mathbb{R}\) in 1993 [8] (cf. [3]). In the following theorem, the Lebesgue measure on \(\mathbb{R}^{d}\) is denoted by \(m\), and the mean width of a measurable subset \(E\) of \(\mathbb{R}^{d}\) having finite measure is denoted by \(w(E)\).

**Theorem 1.1**.: _[_4, 8_]_ _(**Nazarov-Jaming Uncertainty Principle**) For each \(d\in\mathbb{N}\), there exists a universal constant \(C_{d}\) (depending upon \(d\)) satisfying the following: If \(E,F\subseteq\mathbb{R}^{d}\) are measurable subsets having finite measure, then for all \(f\in\mathcal{L}^{2}(\mathbb{R}^{d})\),_ \[\int_{\mathbb{R}^{d}}|f(x)|^{2}\,dx\leq C_{d}e^{C_{d}\min\{m(E)m(F),m(E)^{\frac{1}{d}}w(F),m(F)^{\frac{1}{d}}w(E)\}}\quad\left[\int_{E^{c}}|f(x)|^{2}\,dx+\int_{F^{c}}|\widehat{f}(\xi)|^{2}\,d\xi\right]. \tag{2}\] _In particular, if \(f\) is supported on \(E\) and \(\widehat{f}\) is supported on \(F\), then \(f=0\)._

Theorem 1.1 and the milestone paper [1] of Donoho and Stark, which derived finite dimensional uncertainty principles, motivated Ghobber and Jaming [2] to ask what the exact finite dimensional analogue of Theorem 1.1 is. Ghobber and Jaming were able to derive the following beautiful theorem. Given a subset \(M\subseteq\{1,\ldots,n\}\), the number of elements in \(M\) is denoted by \(o(M)\).

**Theorem 1.2**.: _[_2_]_ _(**Ghobber-Jaming Uncertainty Principle**) Let \(\{\tau_{j}\}_{j=1}^{n}\) and \(\{\omega_{j}\}_{j=1}^{n}\) be orthonormal bases for the Hilbert space \(\mathbb{C}^{n}\).
If \(M,N\subseteq\{1,\ldots,n\}\) are such that_ \[o(M)o(N)<\frac{1}{\max_{1\leq j,k\leq n}|\langle\tau_{j},\omega_{k}\rangle|^{2}}, \tag{3}\] _then for all \(h\in\mathbb{C}^{n}\),_ \[\|h\|\leq\left(1+\frac{1}{1-\sqrt{o(M)o(N)}\max_{1\leq j,k\leq n}|\langle\tau_{j},\omega_{k}\rangle|}\right)\left[\left(\sum_{j\in M^{c}}|\langle h,\tau_{j}\rangle|^{2}\right)^{\frac{1}{2}}+\left(\sum_{k\in N^{c}}|\langle h,\omega_{k}\rangle|^{2}\right)^{\frac{1}{2}}\right].\] _In particular, if \(h\) is supported on \(M\) in the expansion using the basis \(\{\tau_{j}\}_{j=1}^{n}\) and \(h\) is supported on \(N\) in the expansion using the basis \(\{\omega_{j}\}_{j=1}^{n}\), then \(h=0\)._

It is reasonable to ask whether there is a Banach space version of the Ghobber-Jaming Uncertainty Principle which, when restricted to Hilbert spaces, reduces to Theorem 1.2. We answer this question in this paper.

## 2. Functional Ghobber-Jaming Uncertainty Principle

In this paper, \(\mathbb{K}\) denotes \(\mathbb{C}\) or \(\mathbb{R}\) and \(\mathcal{X}\) denotes a finite dimensional Banach space over \(\mathbb{K}\). The identity operator on \(\mathcal{X}\) is denoted by \(I_{\mathcal{X}}\), and the dual of \(\mathcal{X}\) is denoted by \(\mathcal{X}^{*}\). Whenever \(1<p<\infty\), \(q\) denotes the conjugate index of \(p\). For \(d\in\mathbb{N}\), the standard finite dimensional Banach space \(\mathbb{K}^{d}\) over \(\mathbb{K}\) equipped with the standard \(\|\cdot\|_{p}\) norm is denoted by \(\ell^{p}([d])\). The canonical basis for \(\mathbb{K}^{d}\) is denoted by \(\{\delta_{j}\}_{j=1}^{d}\), and \(\{\zeta_{j}\}_{j=1}^{d}\) denotes the coordinate functionals associated with \(\{\delta_{j}\}_{j=1}^{d}\). Motivated by the properties of orthonormal bases for Hilbert spaces, we set the following notion of p-orthonormal bases, which is also motivated by the notions of p-approximate Schauder frames [7] and p-unconditional Schauder frames [6].

**Definition 2.1**.: _Let \(\mathcal{X}\) be a finite dimensional Banach space over \(\mathbb{K}\). Let \(\{\tau_{j}\}_{j=1}^{n}\) be a basis for \(\mathcal{X}\) and let \(\{f_{j}\}_{j=1}^{n}\) be the coordinate functionals associated with \(\{\tau_{j}\}_{j=1}^{n}\). The pair \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) is said to be a **p-orthonormal basis** (\(1<p<\infty\)) for \(\mathcal{X}\) if the following conditions hold._

1. \(\|f_{j}\|=\|\tau_{j}\|=1\) _for all_ \(1\leq j\leq n\)_._
2. _For every_ \((a_{j})_{j=1}^{n}\in\mathbb{K}^{n}\)_,_ \[\left\|\sum_{j=1}^{n}a_{j}\tau_{j}\right\|=\left(\sum_{j=1}^{n}|a_{j}|^{p}\right)^{\frac{1}{p}}.\]

Given a p-orthonormal basis \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\), we easily see from Definition 2.1 that \[\|x\|=\left\|\sum_{j=1}^{n}f_{j}(x)\tau_{j}\right\|=\left(\sum_{j=1}^{n}|f_{j}(x)|^{p}\right)^{\frac{1}{p}},\quad\forall x\in\mathcal{X}.\]

**Example 2.2**.: _The pair \((\{\zeta_{j}\}_{j=1}^{d},\{\delta_{j}\}_{j=1}^{d})\) is a \(p\)-orthonormal basis for \(\ell^{p}([d])\)._

Like orthonormal bases for Hilbert spaces, the following theorem characterizes all p-orthonormal bases.

**Theorem 2.3**.: _Let \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) be a p-orthonormal basis for \(\mathcal{X}\).
Then a pair \((\{g_{j}\}_{j=1}^{n},\{\omega_{j}\}_{j=1}^{n})\) is a p-orthonormal basis for \(\mathcal{X}\) if and only if there is an invertible linear isometry \(V:\mathcal{X}\to\mathcal{X}\) such that_ \[g_{j}=f_{j}V^{-1},\ \omega_{j}=V\tau_{j},\quad\forall 1\leq j\leq n.\]

Proof.: (\(\Rightarrow\)) Define \(V:\mathcal{X}\ni x\mapsto\sum_{j=1}^{n}f_{j}(x)\omega_{j}\in\mathcal{X}\). Since \(\{\omega_{j}\}_{j=1}^{n}\) is a basis for \(\mathcal{X}\), \(V\) is invertible with inverse \(V^{-1}:\mathcal{X}\ni x\mapsto\sum_{j=1}^{n}g_{j}(x)\tau_{j}\in\mathcal{X}\). For \(x\in\mathcal{X}\), \[\|Vx\|=\left\|\sum_{j=1}^{n}f_{j}(x)\omega_{j}\right\|=\left(\sum_{j=1}^{n}|f_{j}(x)|^{p}\right)^{\frac{1}{p}}=\left\|\sum_{j=1}^{n}f_{j}(x)\tau_{j}\right\|=\|x\|.\] Therefore \(V\) is an isometry. Note that we clearly have \(\omega_{j}=V\tau_{j}\) for all \(1\leq j\leq n\). Now let \(1\leq j\leq n\). Then \[f_{j}(V^{-1}x)=f_{j}\left(\sum_{k=1}^{n}g_{k}(x)\tau_{k}\right)=\sum_{k=1}^{n}g_{k}(x)f_{j}(\tau_{k})=g_{j}(x),\quad\forall x\in\mathcal{X}.\]

(\(\Leftarrow\)) Since \(V\) is invertible, \(\{\omega_{j}\}_{j=1}^{n}\) is a basis for \(\mathcal{X}\). Now we see that \(g_{j}(\omega_{k})=f_{j}(V^{-1}V\tau_{k})=f_{j}(\tau_{k})=\delta_{j,k}\) for all \(1\leq j,k\leq n\). Therefore \(\{g_{j}\}_{j=1}^{n}\) is the family of coordinate functionals associated with \(\{\omega_{j}\}_{j=1}^{n}\). Since \(V\) is an isometry, we have \(\|\omega_{j}\|=1\) for all \(1\leq j\leq n\). Since \(V\) is also invertible, we have \[\|g_{j}\|=\sup_{x\in\mathcal{X},\|x\|\leq 1}|g_{j}(x)|=\sup_{x\in\mathcal{X},\|x\|\leq 1}|f_{j}(V^{-1}x)|=\sup_{y\in\mathcal{X},\|Vy\|\leq 1}|f_{j}(y)|=\sup_{y\in\mathcal{X},\|y\|\leq 1}|f_{j}(y)|=\|f_{j}\|=1,\quad\forall 1\leq j\leq n.\] Finally, for every \((a_{j})_{j=1}^{n}\in\mathbb{K}^{n}\), \[\left\|\sum_{j=1}^{n}a_{j}\omega_{j}\right\|=\left\|\sum_{j=1}^{n}a_{j}V\tau_{j}\right\|=\left\|V\left(\sum_{j=1}^{n}a_{j}\tau_{j}\right)\right\|=\left\|\sum_{j=1}^{n}a_{j}\tau_{j}\right\|=\left(\sum_{j=1}^{n}|a_{j}|^{p}\right)^{\frac{1}{p}}.\]

In the next result we show that Example 2.2 is prototypical as long as we consider p-orthonormal bases.

**Theorem 2.4**.: _If \(\mathcal{X}\) has a p-orthonormal basis \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\), then \(\mathcal{X}\) is isometrically isomorphic to \(\ell^{p}([n])\)._

Proof.: Define \(V:\mathcal{X}\ni x\mapsto\sum_{j=1}^{n}f_{j}(x)\delta_{j}\in\ell^{p}([n])\). By a calculation similar to the one in the direct part of the proof of Theorem 2.3, we see that \(V\) is an invertible isometry.
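A small numerical illustration of Definition 2.1 and Theorem 2.3 in \(\ell^{p}([n])\) (ours, not from the paper): a signed permutation matrix is an invertible linear isometry of \(\ell^{p}\), so it transports the canonical p-orthonormal basis of Example 2.2 to another p-orthonormal basis.

```python
# Illustration only: check the isometry property of a signed-permutation
# matrix V on l^p([n]) and the biorthogonality g_j(omega_k) = delta_{jk},
# where omega_j = V delta_j and g_j(x) = f_j(V^{-1} x) as in Theorem 2.3.
import numpy as np

n, p = 4, 3.0
rng = np.random.default_rng(0)
V = np.zeros((n, n))
V[rng.permutation(n), np.arange(n)] = rng.choice([-1.0, 1.0], size=n)

x = rng.standard_normal(n)
assert np.isclose(np.linalg.norm(V @ x, ord=p), np.linalg.norm(x, ord=p))

G = np.linalg.inv(V)     # row j evaluates g_j(x) = f_j(V^{-1} x)
omegas = V               # column j is omega_j = V delta_j
assert np.allclose(G @ omegas, np.eye(n))
```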
Now we derive the main result of this paper.

**Theorem 2.5**.: _(**Functional Ghobber-Jaming Uncertainty Principle**) Let \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) and \((\{g_{k}\}_{k=1}^{n},\{\omega_{k}\}_{k=1}^{n})\) be p-orthonormal bases for \(\mathcal{X}\). If \(M,N\subseteq\{1,\ldots,n\}\) are such that_ \[o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}<\frac{1}{\max\limits_{1\leq j,k\leq n}|g_{k}(\tau_{j})|},\] _then for all \(x\in\mathcal{X}\),_ \[\|x\|\leq\left(1+\frac{1}{1-o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}\max\limits_{1\leq j,k\leq n}|g_{k}(\tau_{j})|}\right)\left[\left(\sum_{j\in M^{c}}|f_{j}(x)|^{p}\right)^{\frac{1}{p}}+\left(\sum_{k\in N^{c}}|g_{k}(x)|^{p}\right)^{\frac{1}{p}}\right]. \tag{4}\] _In particular, if \(x\) is supported on \(M\) in the expansion using the basis \(\{\tau_{j}\}_{j=1}^{n}\) and \(x\) is supported on \(N\) in the expansion using the basis \(\{\omega_{k}\}_{k=1}^{n}\), then \(x=0\)._

Proof.: Given \(S\subseteq\{1,\ldots,n\}\), define \[P_{S}x\coloneqq\sum_{j\in S}f_{j}(x)\tau_{j},\quad\forall x\in\mathcal{X},\quad\|x\|_{S,f}\coloneqq\left(\sum_{j\in S}|f_{j}(x)|^{p}\right)^{\frac{1}{p}},\quad\|x\|_{S,g}\coloneqq\left(\sum_{j\in S}|g_{j}(x)|^{p}\right)^{\frac{1}{p}}.\] Also define \(V:\mathcal{X}\ni x\mapsto\sum_{k=1}^{n}g_{k}(x)\tau_{k}\in\mathcal{X}\). Then \(V\) is an invertible isometry. Using \(V\) we make the following important calculations: \[\|P_{S}x\|=\left\|\sum_{j\in S}f_{j}(x)\tau_{j}\right\|=\left(\sum_{j\in S}|f_{j}(x)|^{p}\right)^{\frac{1}{p}}=\|x\|_{S,f},\quad\forall x\in\mathcal{X}\] and \[\|P_{S}Vx\| =\left\|\sum_{j\in S}f_{j}(Vx)\tau_{j}\right\|=\left\|\sum_{j\in S}f_{j}\left(\sum_{k=1}^{n}g_{k}(x)\tau_{k}\right)\tau_{j}\right\|=\left\|\sum_{j\in S}\sum_{k=1}^{n}g_{k}(x)f_{j}(\tau_{k})\tau_{j}\right\|=\left\|\sum_{j\in S}g_{j}(x)\tau_{j}\right\|=\left(\sum_{j\in S}|g_{j}(x)|^{p}\right)^{\frac{1}{p}}=\|x\|_{S,g},\quad\forall x\in\mathcal{X}.\] Now let \(y\in\mathcal{X}\) be such that \(\{j\in\{1,\ldots,n\}:f_{j}(y)\neq 0\}\subseteq M.\) Then \(\|P_{N}Vy\|=\|P_{N}VP_{M}y\|\leq\|P_{N}VP_{M}\|\|y\|\) and \[\|y\|_{N^{c},g}=\|P_{N^{c}}Vy\|=\|Vy-P_{N}Vy\|\geq\|Vy\|-\|P_{N}Vy\|=\|y\|-\|P_{N}Vy\|\geq\|y\|-\|P_{N}VP_{M}\|\|y\|.\] Therefore \[\|y\|_{N^{c},g}\geq(1-\|P_{N}VP_{M}\|)\|y\|. \tag{5}\]

Let \(x\in\mathcal{X}\). Note that \(P_{M}x\) satisfies \(\{j\in\{1,\ldots,n\}:f_{j}(P_{M}x)\neq 0\}\subseteq M.\) Now using (5) we get \[\|x\| =\|P_{M}x+P_{M^{c}}x\|\leq\|P_{M}x\|+\|P_{M^{c}}x\|\leq\frac{1}{1-\|P_{N}VP_{M}\|}\|P_{M}x\|_{N^{c},g}+\|P_{M^{c}}x\|\] \[=\frac{1}{1-\|P_{N}VP_{M}\|}\|P_{N^{c}}VP_{M}x\|+\|P_{M^{c}}x\|=\frac{1}{1-\|P_{N}VP_{M}\|}\|P_{N^{c}}V(x-P_{M^{c}}x)\|+\|P_{M^{c}}x\|\] \[\leq\frac{1}{1-\|P_{N}VP_{M}\|}\|P_{N^{c}}Vx\|+\frac{1}{1-\|P_{N}VP_{M}\|}\|P_{N^{c}}VP_{M^{c}}x\|+\|P_{M^{c}}x\|\] \[\leq\frac{1}{1-\|P_{N}VP_{M}\|}\|P_{N^{c}}Vx\|+\frac{1}{1-\|P_{N}VP_{M}\|}\|P_{M^{c}}x\|+\|P_{M^{c}}x\|\] \[=\frac{1}{1-\|P_{N}VP_{M}\|}\|P_{N^{c}}Vx\|+\left(1+\frac{1}{1-\|P_{N}VP_{M}\|}\right)\|P_{M^{c}}x\|\] \[\leq\|P_{N^{c}}Vx\|+\frac{1}{1-\|P_{N}VP_{M}\|}\|P_{N^{c}}Vx\|+\left(1+\frac{1}{1-\|P_{N}VP_{M}\|}\right)\|P_{M^{c}}x\|\] \[=\left(1+\frac{1}{1-\|P_{N}VP_{M}\|}\right)\left[\|P_{N^{c}}Vx\|+\left\|P_{M^{c}}x\right\|\right]=\left(1+\frac{1}{1-\|P_{N}VP_{M}\|}\right)\left[\|x\|_{N^{c},g}+\|P_{M^{c}}x\|\right]\] \[=\left(1+\frac{1}{1-\|P_{N}VP_{M}\|}\right)\left[\left(\sum_{j\in M^{c}}|f_{j}(x)|^{p}\right)^{\frac{1}{p}}+\left(\sum_{k\in N^{c}}|g_{k}(x)|^{p}\right)^{\frac{1}{p}}\right].\]

For \(x\in\mathcal{X}\), we now find \[\|P_{N}VP_{M}x\|^{p}=\left\|\sum_{k\in N}f_{k}(VP_{M}x)\tau_{k}\right\|^{p}=\sum_{k\in N}|f_{k}(VP_{M}x)|^{p}=\sum_{k\in N}\left|(f_{k}V)\left(\sum_{j\in M}f_{j}(x)\tau_{j}\right)\right|^{p}\] \[=\sum_{k\in N}\left|\sum_{j\in M}f_{j}(x)f_{k}(V\tau_{j})\right|^{p}=\sum_{k\in N}\left|\sum_{j\in M}f_{j}(x)f_{k}\left(\sum_{r=1}^{n}g_{r}(\tau_{j})\tau_{r}\right)\right|^{p}=\sum_{k\in N}\left|\sum_{j\in M}f_{j}(x)\sum_{r=1}^{n}g_{r}(\tau_{j})f_{k}(\tau_{r})\right|^{p}\] \[=\sum_{k\in N}\left|\sum_{j\in M}f_{j}(x)g_{k}(\tau_{j})\right|^{p}\leq\sum_{k\in N}\left(\sum_{j\in M}|f_{j}(x)g_{k}(\tau_{j})|\right)^{p}
\leq\left(\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|\right)^{p}\sum_{k\in N}\left(\sum_{j\in M}|f_{j}(x)|\right)^{p}\] \[=\left(\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|\right)^{p}o(N)\left(\sum_{j\in M}|f_{j}(x)|\right)^{p}\leq\left(\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|\right)^{p}o(N)\left(\sum_{j\in M}|f_{j}(x)|^{p}\right)\left(\sum_{j\in M}1^{q}\right)^{\frac{p}{q}}\] \[\leq\left(\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|\right)^{p}o(N)\left(\sum_{j=1}^{n}|f_{j}(x)|^{p}\right)\left(\sum_{j\in M}1^{q}\right)^{\frac{p}{q}}=\left(\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|\right)^{p}o(N)\|x\|^{p}o(M)^{\frac{p}{q}},\] where we used Hölder's inequality in the second step. Therefore \[\|P_{N}VP_{M}\|\leq\max_{1\leq j,k\leq n}|g_{k}(\tau_{j})|\,o(N)^{\frac{1}{p}}o(M)^{\frac{1}{q}},\] which gives the theorem.

**Corollary 2.6**.: _Theorem 1.2 follows from Theorem 2.5._

Proof.: Let \(\{\tau_{j}\}_{j=1}^{n}\), \(\{\omega_{j}\}_{j=1}^{n}\) be two orthonormal bases for a finite dimensional Hilbert space \(\mathcal{H}\). Define \[f_{j}:\mathcal{H}\ni h\mapsto\langle h,\tau_{j}\rangle\in\mathbb{K};\quad g_{j}:\mathcal{H}\ni h\mapsto\langle h,\omega_{j}\rangle\in\mathbb{K},\quad\forall 1\leq j\leq n.\] Then \(p=q=2\) and \(|f_{j}(\omega_{k})|=|\langle\omega_{k},\tau_{j}\rangle|\) for all \(1\leq j,k\leq n\).

By interchanging the p-orthonormal bases in Theorem 2.5 we get the following theorem.

**Theorem 2.7**.: _(**Functional Ghobber-Jaming Uncertainty Principle**) Let \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) and \((\{g_{k}\}_{k=1}^{n},\{\omega_{k}\}_{k=1}^{n})\) be p-orthonormal bases for \(\mathcal{X}\). If \(M,N\subseteq\{1,\ldots,n\}\) are such that_ \[o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}<\frac{1}{\max_{1\leq j,k\leq n}|f_{j}(\omega_{k})|},\] _then for all \(x\in\mathcal{X}\),_ \[\|x\|\leq\left(1+\frac{1}{1-o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}\max_{1\leq j,k\leq n}|f_{j}(\omega_{k})|}\right)\left[\left(\sum_{k\in M^{c}}|g_{k}(x)|^{p}\right)^{\frac{1}{p}}+\left(\sum_{j\in N^{c}}|f_{j}(x)|^{p}\right)^{\frac{1}{p}}\right].\] _In particular, if \(x\) is supported on \(M\) in the expansion using the basis \(\{\omega_{k}\}_{k=1}^{n}\) and \(x\) is supported on \(N\) in the expansion using the basis \(\{\tau_{j}\}_{j=1}^{n}\), then \(x=0\)._

Observe that the constant \[C_{d}e^{C_{d}\min\{m(E)m(F),m(E)^{\frac{1}{d}}w(F),m(F)^{\frac{1}{d}}w(E)\}}\] in Inequality (2) depends upon the subsets \(E\), \(F\) and not on the entire domain \(\mathbb{R}^{d}\) of the functions \(f\), \(\widehat{f}\). Thus it is natural to ask whether there is a sharper constant in Inequality (4), depending upon the subsets \(M\), \(N\) and not on \(\{1,\ldots,n\}\). A careful observation of the proof of Theorem 2.5 gives the following result.

**Theorem 2.8**.: _Let \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) and \((\{g_{k}\}_{k=1}^{n},\{\omega_{k}\}_{k=1}^{n})\) be p-orthonormal bases for \(\mathcal{X}\). If \(M,N\subseteq\{1,\ldots,n\}\) are such that_ \[o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}<\frac{1}{\max_{j\in M,k\in N}|g_{k}(\tau_{j})|},\] _then for all \(x\in\mathcal{X}\),_ \[\|x\|\leq\left(1+\frac{1}{1-o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}\max_{j\in M,k\in N}|g_{k}(\tau_{j})|}\right)\left[\left(\sum_{j\in M^{c}}|f_{j}(x)|^{p}\right)^{\frac{1}{p}}+\left(\sum_{k\in N^{c}}|g_{k}(x)|^{p}\right)^{\frac{1}{p}}\right].\]

Similarly we have the following result from Theorem 2.7.

**Theorem 2.9**.: _Let \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\) and \((\{g_{k}\}_{k=1}^{n},\{\omega_{k}\}_{k=1}^{n})\) be p-orthonormal bases for \(\mathcal{X}\).
If \(M,N\subseteq\{1,\ldots,n\}\) are such that_ \[o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}<\frac{1}{\max_{j\in N,k\in M}|f_{j}(\omega_{k})|},\] _then for all \(x\in\mathcal{X}\),_ \[\|x\|\leq\left(1+\frac{1}{1-o(M)^{\frac{1}{q}}o(N)^{\frac{1}{p}}\max_{j\in N,k\in M}|f_{j}(\omega_{k})|}\right)\left[\left(\sum_{k\in M^{c}}|g_{k}(x)|^{p}\right)^{\frac{1}{p}}+\left(\sum_{j\in N^{c}}|f_{j}(x)|^{p}\right)^{\frac{1}{p}}\right].\] Theorem 2.5 raises the following question. **Question 2.10**.: _Given \(p\) and a Banach space \(\mathcal{X}\) of dimension \(n\), for which subsets \(M,N\subseteq\{1,\ldots,n\}\) and pairs of p-orthonormal bases \((\{f_{j}\}_{j=1}^{n},\{\tau_{j}\}_{j=1}^{n})\), \((\{g_{k}\}_{k=1}^{n},\{\omega_{k}\}_{k=1}^{n})\) for \(\mathcal{X}\) do we have equality in Inequality (4)?_ It is clear that we used \(1<p<\infty\) in the proof of Theorem 2.5. However, Definition 2.1 can easily be extended to include the cases \(p=1\) and \(p=\infty\). This therefore leads to the following question. **Question 2.11**.: _Are there Functional Ghobber-Jaming Uncertainty Principles (versions of Theorem 2.5) for 1-orthonormal bases and \(\infty\)-orthonormal bases?_ We end by mentioning that the Donoho-Stark-Elad-Bruckstein-Ricaud-Torresani Uncertainty Principle for finite dimensional Banach spaces is derived in [5] (in fact, [5] derives the functional uncertainty principle for p-Schauder frames, which are more general than p-orthonormal bases). It would thus be worthwhile to derive Theorem 2.5, or a variation of it, for p-Schauder frames; we have been unable to do so.
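As a numerical sanity check of the Hilbert-space case (\(p=q=2\), cf. Corollary 2.6 and Theorem 2.8), the following sketch (ours, not from the paper) verifies the inequality for the standard and discrete Fourier bases of \(\mathbb{C}^{n}\), where every \(|g_{k}(\tau_{j})|\) equals \(1/\sqrt{n}\):

```python
import numpy as np

n, M, N = 8, [0, 1], [0, 1]               # sqrt(o(M) * o(N)) = 2 < sqrt(8)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)    # rows form the unitary Fourier basis
rng = np.random.default_rng(0)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

f = x                                     # coefficients in the standard basis
g = F.conj() @ x                          # coefficients in the Fourier basis
c = 1 / np.sqrt(n)                        # max_{j,k} |g_k(tau_j)| for this pair
const = 1 + 1 / (1 - np.sqrt(len(M) * len(N)) * c)
lhs = np.linalg.norm(x)
rhs = const * (np.linalg.norm(np.delete(f, M)) + np.linalg.norm(np.delete(g, N)))
assert lhs <= rhs + 1e-9                  # Inequality (4) holds for this x
```

The hypothesis of the theorem is satisfied here since \(o(M)^{1/2}o(N)^{1/2}=2<\sqrt{8}=1/\max|g_{k}(\tau_{j})|\).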
2305.18015
On the Correspondence Between Monotonic Max-Sum GNNs and Datalog
Although there has been significant interest in applying machine learning techniques to structured data, the expressivity (i.e., a description of what can be learned) of such techniques is still poorly understood. In this paper, we study data transformations based on graph neural networks (GNNs). First, we note that the choice of how a dataset is encoded into a numeric form processable by a GNN can obscure the characterisation of a model's expressivity, and we argue that a canonical encoding provides an appropriate basis. Second, we study the expressivity of monotonic max-sum GNNs, which cover a subclass of GNNs with max and sum aggregation functions. We show that, for each such GNN, one can compute a Datalog program such that applying the GNN to any dataset produces the same facts as a single round of application of the program's rules to the dataset. Monotonic max-sum GNNs can sum an unbounded number of feature vectors which can result in arbitrarily large feature values, whereas rule application requires only a bounded number of constants. Hence, our result shows that the unbounded summation of monotonic max-sum GNNs does not increase their expressive power. Third, we sharpen our result to the subclass of monotonic max GNNs, which use only the max aggregation function, and identify a corresponding class of Datalog programs.
David Tena Cucala, Bernardo Cuenca Grau, Boris Motik, Egor V. Kostylev
2023-05-29T11:13:38Z
http://arxiv.org/abs/2305.18015v3
# On the Correspondence Between Monotonic Max-Sum GNNs and Datalog ###### Abstract Although there has been significant interest in applying machine learning techniques to structured data, the _expressivity_ (i.e., a description of what can be learned) of such techniques is still poorly understood. In this paper, we study data transformations based on _graph neural networks_ (GNNs). First, we note that the choice of how a dataset is encoded into a numeric form processable by a GNN can obscure the characterisation of a model's expressivity, and we argue that a _canonical_ encoding provides an appropriate basis. Second, we study the expressivity of _monotonic max-sum_ GNNs, which cover a subclass of GNNs with max and sum aggregation functions. We show that, for each such GNN, one can compute a Datalog program such that applying the GNN to any dataset produces the same facts as a single round of application of the program's rules to the dataset. Monotonic max-sum GNNs can sum an unbounded number of feature vectors which can result in arbitrarily large feature values, whereas rule application requires only a bounded number of constants. Hence, our result shows that the unbounded summation of monotonic max-sum GNNs does not increase their expressive power. Third, we sharpen our result to the subclass of _monotonic max_ GNNs, which use only the max aggregation function, and identify a corresponding class of Datalog programs. ## 1 Introduction Data management tasks such as query answering or logical reasoning can be abstractly seen as transforming an input dataset into an output dataset. A key aspect of such transformations is their _expressivity_, which is often established by identifying a logic-based language that realises the same class of transformations. For example, core aspects of the SQL and SPARQL query languages have been characterised using fragments of first-order logic (Abitebouhl, Hull, and Vianu, 1995; Perez, Arenas, and Gutierrez, 2009), and logical deduction over RDF datasets has been described using the rule-based language _Datalog_(Motik et al., 2012). Such correspondences enable rigorous understanding and comparison of different data management languages. Recently, there has been an increasing interest in applying machine learning techniques to data management tasks. A key benefit is that the desired transformation between datasets can be induced from examples, rather than specified explicitly. Many models have been proposed for this purpose, such as recurrent (Holldobler, Kalinke, and Storr, 1999), fibring (Bader, d'Avila Garcez, and Hitzler, 2005), and feed-forward networks (Bader et al., 2007), architectures that simulate forward (Dong et al., 2019; Campero et al., 2018) and backward chaining (Rocktaschel and Riedel, 2017), and architectures for rule learning (Yang, Yang, and Cohen, 2017; Sadeghian et al., 2019). _Graph neural networks_ (GNNs) have proved particularly popular since they can express graph transformations and have been widely applied to link prediction and node classification tasks in structured datasets (Schlichtkrull et al., 2018; Pflueger, Tena Cucala, and Kostylev, 2022; Liu et al., 2021; Ioannidis, Marques, and Giannakis, 2019; Qu, Bengio, and Tang, 2019; Yang, Cohen, and Salakhutdinov, 2016; Kipf and Welling, 2017; Zhang and Chen, 2018; Teru, Denis, and Hamilton, 2020). 
Characterising the expressivity of ML models for data management has thus steadily gained importance, and computational logic provides a well-established methodology: we can describe conditions under which ML-induced models become equivalent to logical formalisms in the sense that applying the ML model to an arbitrary dataset produces the same result as applying a specific logical formula. In a pioneering study, Barcelo et al. (2020) showed that each GNN-induced transformation expressible in first-order logic is equivalent to a concept query of the _\(\mathcal{ALCQ}\) description logic_(Baader et al., 2007)--a popular KR formalism. Huang et al. (2023) proved an analogous result for a class of GNNs with a dedicated vertex and colour. Morris et al. (2019) showed that GNNs can express certain types of graph isomorphism tests. Sourek, Zelezny, and Kuzelka (2021) characterised the expressivity of GNNs using a hybrid language where each Datalog rule is annotated with a tensor. Tena Cucala et al. (2022) characterised the expressivity of _monotonic GNNs_ (MGNNs), which use the max aggregation function and require all weights in the matrices to be nonnegative, in terms of a class of Datalog programs. Finally, Tena Cucala, Cuenca Grau, and Motik (2022) characterised the expressivity of the Neural-LP model of rule learning. In this paper, we take a next step in the study of the expressivity of GNN-based transformations of structured data. A key technical challenge can be summarised as follows. GNNs typically use summation to aggregate feature vectors of all vertices adjacent to a given vertex in the input graph. The number of adjacent vertices in the input is unbounded (i.e., there is no a priori limit on the number of neighbours a vertex can have), and so the summation result can be unbounded as well; hence, it appears that arbitrarily many vertices can influence whether a fact is derived. This seems fundamentally different to reasoning in fragments of first-order logic such as Datalog: the number of constants that need to be jointly considered in an application of a Datalog rule is determined by the number of rule variables, and _not_ by the structure of the input dataset. Thus, at first glance, one might expect GNNs with summation to be fundamentally different from Datalog rules. To shed light on this issue, we present several novel contributions. In Section 3 we focus on a key obstacle: to apply a GNN to a dataset, the latter must be encoded as a graph where each vertex is assigned a numeric feature vector; but then, the expressivity of the transformation inevitably depends on the details of the encoding, which obscures the contribution of the GNN itself. To overcome this, we adopt a _canonical_ encoding, variants of which have already been considered by Schlichtkrull et al. (2018), Barcelo et al. (2020), and Pflueger, Tena Cucala, and Kostylev (2022). We define a GNN to be _equivalent_ to a Datalog program if applying the GNN to any dataset while using the canonical encoding produces the same facts as applying the program's rules to the dataset _once_ (i.e., without fixpoint iteration). Finally, we observe that noncanonical encodings by Tena Cucala et al. (2022), Morris et al. (2019), or Liu et al. (2021) can be described using well-known extensions of Datalog, and so the expressivity of transformations based on such encodings can be characterised by composing all relevant programs. In Section 4 we present our main technical contribution. First, we introduce a class of _monotonic max-sum_ GNNs. 
Similarly to the MGNNs by Tena Cucala et al. (2022), monotonic max-sum GNNs require matrix weights to be nonnegative; however, they allow for the max or sum aggregation functions in each network layer, and they place certain restrictions on the activation and classification functions (ReLU and threshold functions are allowed). Tena Cucala et al. (2022) showed that the performance of such GNNs with just max aggregation on tasks such as knowledge graph completion is on a par with that of other recent approaches. Hence, monotonic max-sum GNNs are practically relevant, but they also allow their predictions to be explained using logical proofs. Second, we prove that each monotonic max-sum GNN is equivalent to a Datalog program of a certain shape possibly containing inequalities in rule bodies. Strictly speaking, such a program can be recursive in the sense that the same predicate can occur in both rule bodies and heads; however, our notion of equivalence does not involve fixpoint iteration (i.e., the program's rules are applied just once). Thus, monotonic max-sum GNNs can derive facts with predicates from the input, but they cannot express true recursive properties such as reachability; moreover, the ability to produce unbounded feature values does not lead to a fundamental increase in expressivity. Our equivalence proof is quite different from the analogous result for MGNNs: when aggregation is limited to just max, the value of each feature of a vertex clearly depends on only a fixed number of neighbours of the vertex. Third, we prove that the equivalent Datalog program can be computed from the GNN itself. This result is interesting because it requires enumerating potentially infinite sets of real-valued candidate feature values in a way that guarantees termination. This provides a starting point for future development of practical techniques for extracting Datalog programs from monotonic max-sum GNNs. Finally, in Section 5 we sharpen our results to _monotonic max_ GNNs, which allow only for max aggregation. We show that, analogously to MGNNs, each monotonic max GNN is equivalent to a positive Datalog program; however, we also present a converse result: we identify a class of Datalog programs such that, for each program in the class, there exists an equivalent monotonic max GNN. In this way, we obtain an exact characterisation of an interesting class of GNN-based transformations using logical formalisms. The proofs of all theorems are given in full in Appendices A and B. ## 2 Preliminaries We next recapitulate the basics of Datalog and GNNs. **Datasets and Datalog.** We fix a signature consisting of countably infinite, disjoint sets of _predicates_ and _constants_. Each predicate is associated with a nonnegative integer arity. We also consider a countably infinite set of _variables_ that is disjoint with the sets of predicates and constants. A _term_ is a variable or a constant. An _atom_ is of the form \(P(t_{1},\ldots,t_{n})\) where \(P\) is a predicate of arity \(n\) and \(t_{1},\ldots,t_{n}\) are terms. An _inequality_ is an expression of the form \(t_{1}\not\approx t_{2}\) where \(t_{1}\) and \(t_{2}\) are terms. A _literal_ is an atom or an inequality. A term or a literal is _ground_ if it is variable-free. A _fact_ is a ground atom and a _dataset_ is a finite set of facts; thus, datasets cannot contain inequalities. A conjunction \(\alpha\) of facts is true in a dataset \(D\), written \(D\models\alpha\), if \(A\in D\) for each fact \(A\) in \(\alpha\).
A ground inequality \(s\not\approx t\) is true if \(s\not=t\); for uniformity with facts, we often write \(D\models s\not\approx t\) even though the truth of \(s\not\approx t\) does not depend on \(D\). A (Datalog) _rule_ is of the form (1) where \(n\geq 0\), \(B_{1},\ldots,B_{n}\) are _body_ literals, and \(H\) is the _head_ atom: \[B_{1}\wedge\cdots\wedge B_{n}\to H. \tag{1}\] A (Datalog) _program_ is a finite set of rules. A _substitution_ \(\nu\) is a mapping of finitely many variables to ground terms; for \(\alpha\) a literal, \(\alpha\nu\) is the result of replacing in \(\alpha\) each variable \(x\) with \(\nu(x)\) provided the latter is defined. Each rule \(r\) of form (1) defines an _immediate consequence_ operator \(T_{r}\) on datasets: for \(D\) a dataset, \(T_{r}(D)\) is the dataset that contains the fact \(H\nu\) for each substitution \(\nu\) mapping all variables of \(r\) to terms occurring in \(D\) such that \(D\models B_{i}\nu\) for each \(1\leq i\leq n\). For \(\mathcal{P}\) a program, \(T_{\mathcal{P}}(D)=\bigcup_{r\in\mathcal{P}}T_{r}(D)\). To simplify the formal treatment, we do not make the usual _safety_ requirement where each variable in a rule must occur in a body atom; in fact, the body can be empty, which we denote by \(\top\). For example, rule \(r=\top\to R(x,y)\) is syntactically valid; moreover, the definition of \(T_{r}\) ensures that \(T_{r}(D)\) contains exactly each fact \(R(s,t)\) for all (not necessarily distinct) terms \(s\) and \(t\) occurring in \(D\). Conjunctions \(\alpha\) and \(\beta\) of literals are _equal up to variable renaming_ if there exists a bijective mapping \(\nu\) from the set of all variables of \(\alpha\) to the set of all variables of \(\beta\) such that \(\alpha\nu\) and \(\beta\) contain exactly the same conjuncts; this notion is extended to rules in the obvious way. A set \(S\) _contains_ a conjunction \(\alpha\) of literals _up to variable renaming_ if there exists \(\beta\in S\) such that \(\alpha\) and \(\beta\) are equal up to variable renaming. **Graph Neural Networks.** We use \(\mathbb{R}\) and \(\mathbb{R}_{0}^{+}\) for the sets of real and nonnegative real numbers, respectively. Also, we use \(\mathbb{N}\) for the set of natural numbers, and \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). A function \(\sigma:\mathbb{R}\to\mathbb{R}\) is _monotonically increasing_ if \(x<y\) implies \(\sigma(x)\leq\sigma(y)\). Function \(\sigma\) is _Boolean_ if its range is \(\{0,1\}\). Finally, \(\sigma\) is _unbounded_ if, for each \(y\in\mathbb{R}\), there exists \(x\in\mathbb{R}\) such that \(\sigma(x)>y\). A real _multiset_ is a function \(S:\mathbb{R}\to\mathbb{N}_{0}\) that assigns to each \(x\in\mathbb{R}\) the number of occurrences \(S(x)\). Such \(S\) is _finite_ if \(S(x)>0\) for finitely many \(x\in\mathbb{R}\); the _cardinality_ of such \(S\) is \(|S|=\sum_{x\in\mathbb{R}}S(x)\); and \(\mathcal{F}(\mathbb{R})\) is the set of all finite real multisets. We often write a finite \(S\) as a list of possibly repeated real numbers in double-braces; for example, \(\{\!\{2,2,1\}\!\}\) is the multiset containing \(2\) twice and \(1\) once. An _aggregation function_ maps each finite real multiset to a real number; we apply it to a finite multiset of vectors of equal dimension componentwise.

For \(\mathsf{Col}\) a finite set of _colours_ and \(\delta\in\mathbb{N}\), a \((\mathsf{Col},\delta)\)-graph is a tuple \(\mathcal{G}=\langle\mathcal{V},\{\mathcal{E}^{c}\}_{c\in\mathsf{Col}},\lambda\rangle\), where \(\mathcal{V}\) is a finite set of vertices, each \(\mathcal{E}^{c}\subseteq\mathcal{V}\times\mathcal{V}\) is a set of directed edges of colour \(c\), and \(\lambda\) labels each vertex \(v\in\mathcal{V}\) with a vector \(\mathbf{v}\in\mathbb{R}^{\delta}\). Graph \(\mathcal{G}\) is _Boolean_ if every such \(\mathbf{v}\) contains only \(0\) and \(1\).

**Definition 1**.: _A \((\mathsf{Col},\delta)\)-GNN \(\mathcal{N}\) with \(L\geq 1\) layers is of the form_ \[\mathcal{N}=\langle\{\mathbf{A}_{\ell}\}_{\ell=1}^{L},\{\mathbf{B}_{\ell}^{c}\}_{\ell=1,c\in\mathsf{Col}}^{L},\{\mathbf{b}_{\ell}\}_{\ell=1}^{L},\{\mathsf{agg}_{\ell}\}_{\ell=1}^{L},\sigma,\mathsf{cls}\rangle, \tag{2}\] _where, for dimensions \(\delta_{0},\ldots,\delta_{L}\) with \(\delta_{0}=\delta_{L}=\delta\), each \(\mathbf{A}_{\ell}\) and each \(\mathbf{B}_{\ell}^{c}\) is a \(\delta_{\ell}\times\delta_{\ell-1}\) matrix, each \(\mathbf{b}_{\ell}\) is a vector of dimension \(\delta_{\ell}\), each \(\mathsf{agg}_{\ell}\) is an aggregation function, \(\sigma\) is an activation function, and \(\mathsf{cls}\) is a Boolean classification function._

_When applied to a \((\mathsf{Col},\delta)\)-graph \(\mathcal{G}\), GNN \(\mathcal{N}\) labels each vertex \(v\) of \(\mathcal{G}\) with vectors \(\mathbf{v}_{0},\ldots,\mathbf{v}_{L}\), where \(\mathbf{v}_{0}=\lambda(v)\) and, for each \(1\leq\ell\leq L\),_ \[\mathbf{v}_{\ell}=\sigma\Big{(}\mathbf{A}_{\ell}\mathbf{v}_{\ell-1}+\sum_{c\in\mathsf{Col}}\mathbf{B}_{\ell}^{c}\,\mathsf{agg}_{\ell}\big{(}\{\!\{\mathbf{u}_{\ell-1}\mid\langle v,u\rangle\in\mathcal{E}^{c}\}\!\}\big{)}+\mathbf{b}_{\ell}\Big{)}. \tag{3}\] _The result \(\mathcal{N}(\mathcal{G})\) is the Boolean \((\mathsf{Col},\delta)\)-graph obtained from \(\mathcal{G}\) by labelling each vertex \(v\) with the vector obtained by applying \(\mathsf{cls}\) to each component of \(\mathbf{v}_{L}\)._
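To make the layer computation in equation (3) concrete, here is a minimal sketch of a single layer; the dictionary-based graph representation and the names `gnn_layer` and `agg` are our own illustrative choices, not notation from the paper.

```python
import numpy as np

def gnn_layer(A, B, b, agg, sigma, vectors, edges):
    """One application of equation (3).

    vectors: dict vertex -> numpy feature vector (the labels v_{l-1}).
    edges:   dict colour -> set of directed edges (v, u).
    A, b:    matrix A_l and bias b_l; B: dict colour -> matrix B_l^c.
    agg:     aggregation on (possibly empty) lists of reals, e.g. sum,
             or lambda s: max(s, default=0) for max aggregation.
    sigma:   activation applied componentwise, e.g. lambda z: np.maximum(z, 0).
    """
    out = {}
    for v, x in vectors.items():
        z = A @ x + b
        for c, Bc in B.items():
            nbrs = [vectors[u] for (s, u) in edges.get(c, set()) if s == v]
            # componentwise aggregation over the multiset of neighbour labels
            agg_vec = np.array([agg([y[i] for y in nbrs]) for i in range(len(x))])
            z = z + Bc @ agg_vec
        out[v] = sigma(z)
    return out
```

Iterating this for \(\ell=1,\ldots,L\) and thresholding the final vectors with \(\mathsf{cls}\) reproduces \(\mathcal{N}(\mathcal{G})\); returning \(0\) on an empty neighbourhood matches the convention of the aggregation functions considered later in the paper.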
## 3 Transforming Datasets Using GNNs

A \((\mathsf{Col},\delta)\)-dataset is a dataset containing only facts of the form \(U_{i}(t)\) with \(1\leq i\leq\delta\) and \(E^{c}(t,s)\) with \(c\in\mathsf{Col}\), where the \(U_{i}\) are unary and the \(E^{c}\) are binary predicates.

**Definition 2**.: _The canonical encoding \(\mathsf{enc}(D)\) of a \((\mathsf{Col},\delta)\)-dataset \(D\) is the \((\mathsf{Col},\delta)\)-graph \(\langle\mathcal{V},\{\mathcal{E}^{c}\}_{c\in\mathsf{Col}},\lambda\rangle\) where \(\mathcal{V}\) contains a vertex \(v_{t}\) for each term \(t\) occurring in \(D\), and_
* \(\langle v_{t},v_{s}\rangle\in\mathcal{E}^{c}\) _if_ \(E^{c}(t,s)\in D\) _for each_ \(c\in\mathsf{Col}\)_; and_
* \((\mathbf{v}_{t})_{i}=1\) _if_ \(U_{i}(t)\in D\)_, and_ \((\mathbf{v}_{t})_{i}=0\) _otherwise._

_The canonical decoding \(\mathsf{dec}(\mathcal{G})\) of a Boolean \((\mathsf{Col},\delta)\)-graph \(\mathcal{G}=\langle\mathcal{V},\{\mathcal{E}^{c}\}_{c\in\mathsf{Col}},\lambda\rangle\) is the dataset that contains_
* _the fact_ \(E^{c}(t,s)\) _for each_ \(\langle v_{t},v_{s}\rangle\in\mathcal{E}^{c}\) _and_ \(c\in\mathsf{Col}\)_, and_
* _the fact_ \(U_{i}(t)\) _for each_ \(v_{t}\in\mathcal{V}\) _and_ \(i\in\{1,\ldots,\delta\}\) _such that_ \((\mathbf{v}_{t})_{i}=1\)_._

_Each \((\mathsf{Col},\delta)\)-GNN \(\mathcal{N}\) induces the canonical transformation \(T_{\mathcal{N}}\) on \((\mathsf{Col},\delta)\)-datasets where \(T_{\mathcal{N}}(D)=\mathsf{dec}(\mathcal{N}(\mathsf{enc}(D)))\) for each \((\mathsf{Col},\delta)\)-dataset \(D\)._

This encoding neither introduces nor omits any information from the input dataset, so a \((\mathsf{Col},\delta)\)-dataset \(D\) and its canonical encoding \(\mathsf{enc}(D)\) straightforwardly correspond to one another. Since binary facts in datasets are directed, \((\mathsf{Col},\delta)\)-graphs must be directed as well to minimise the discrepancy between the two representations. The canonical decoding is analogous to the encoding, and the two are inverse operations on graphs that are regular as per Definition 3. **Definition 3**.: _A \((\mathsf{Col},\delta)\)-graph \(\mathcal{G}=\langle\mathcal{V},\{\mathcal{E}^{c}\}_{c\in\mathsf{Col}},\lambda\rangle\) is regular if \(\mathcal{G}\) is Boolean and each vertex \(v\in\mathcal{V}\) either occurs in \(\mathcal{E}^{c}\) for some \(c\in\mathsf{Col}\), or \((\mathbf{v})_{i}=1\) for some \(i\in\{1,\ldots,\delta\}\)._ Our canonical encoding produces only regular graphs, and there is a one-to-one correspondence between \((\mathsf{Col},\delta)\)-datasets and regular \((\mathsf{Col},\delta)\)-graphs. Our results from the following sections can be equivalently framed as characterising expressivity of GNN transformations of regular graphs in terms of Datalog programs. Graphs that are not Boolean do not correspond to encodings of datasets, so we do not see a natural way to view GNN transformations over such graphs in terms of logical formalisms. Finally, a \((\mathsf{Col},\delta)\)-graph \(\mathcal{G}\) that is Boolean but not regular contains 'isolated' vertices that are not connected to any other vertex and are labelled by zeros only. When such \(\mathcal{G}\) is decoded into a \((\mathsf{Col},\delta)\)-dataset, such 'isolated' vertices do not produce any facts in \(\mathsf{dec}(\mathcal{G})\) and thus several non-regular Boolean graphs can produce the same \((\mathsf{Col},\delta)\)-dataset. Note, however, that each 'isolated' zero-labelled vertex is transformed by a GNN in the same way--that is, the vector labelling the vertex in the GNN's output does not depend on any other vertices but only on the matrices of the GNN. Consequently, such vertices are not interesting for our study of GNN expressivity. We are now ready to formalise our central notion of equivalence between a GNN and a Datalog program.
**Definition 4**.: _A \((\mathsf{Col},\delta)\)-GNN \(\mathcal{N}\) captures a rule or a Datalog program \(\alpha\) if \(T_{\alpha}(D)\subseteq T_{\mathcal{N}}(D)\) for each \((\mathsf{Col},\delta)\)-dataset \(D\). Moreover, \(\mathcal{N}\) and \(\alpha\) are equivalent if \(T_{\mathcal{N}}(D)=T_{\alpha}(D)\) for each \((\mathsf{Col},\delta)\)-dataset \(D\)._ The key question we address in Sections 4 and 5 is the following: under what conditions is a given \((\mathsf{Col},\delta)\)-GNN \(\mathcal{N}\) equivalent to a Datalog program, and can this program (at least in principle) be computed from \(\mathcal{N}\)? ### Noncanonical Encoding/Decoding Schemes For each \((\mathsf{Col},\delta)\)-dataset \(D\), the binary facts of \(D\) and \(T_{\mathcal{N}}(D)\) coincide, and so applying \(T_{\mathcal{N}}\) to \(D\) cannot derive any binary facts. To overcome this limitation, more complex, noncanonical encodings have been proposed (Tena Cucala et al., 2022; Morris et al., 2019; Liu et al., 2021). These introduce vertices representing combinations of several constants so that facts of higher arity can be encoded in appropriate feature vectors, but there is no obvious canonical way to achieve this. Expressivity results based on such encodings are less transparent because it is not obvious which aspects of expressivity are due to the encoding/decoding scheme and which are immanent to the GNN itself. We argue that noncanonical encoding/decoding schemes can often be described by a pair of programs \(\mathcal{P}_{\mathsf{enc}}\) and \(\mathcal{P}_{\mathsf{dec}}\), possibly expressed in a well-known extension of Datalog, which convert an input dataset into a \((\mathsf{Col},\delta)\)-dataset and vice versa. Thus, given an arbitrary dataset \(D\), the result of applying the end-to-end transformation that uses a GNN \(\mathcal{N}\) and the respective encoding/decoding scheme is \(T_{\mathcal{P}_{\mathsf{dec}}}(T_{\mathcal{N}}(T_{\mathcal{P}_{\mathsf{enc}} }(D)))\). Furthermore, if \(\mathcal{N}\) is equivalent to a Datalog program \(\mathcal{P}_{\mathcal{N}}\), then the composition of \(\mathcal{P}_{\mathsf{enc}}\), \(\mathcal{P}_{\mathcal{N}}\), and \(\mathcal{P}_{\mathsf{dec}}\) characterises the end-to-end transformation. This allows us to clearly separate the contribution of the GNN from the contributions of the encoding and decoding. **Tena Cucala et al. (2022)** recently presented a dataset transformation based on a class of _monotonic_ GNNs (MGNNs). Their approach is applicable to a dataset \(D\) that uses unary predicates \(A_{1},\ldots,A_{\epsilon}\) and binary predicates \(R_{\epsilon+1},\ldots,R_{\delta}\), and \(D\) is encoded into a symmetric \((\mathsf{Col},\delta)\)-graph over the set of colours \(\mathsf{Col}=\{c_{1},c_{2},c_{3},c_{4}\}\). The encoding introduces a vertex \(v_{a}\) for each constant \(a\) in \(D\) as well as vertices \(v_{a,b}\) and \(v_{b,a}\) for each pair of constants \(a,b\) occurring together in a binary fact in \(D\). Predicates are assigned fixed positions in vectors so that the value of a component of a vector labelling a vertex indicates the presence or absence of a specific fact in \(D\). For example, if \(A_{i}(a)\in D\), then \((\mathbf{v}_{a})_{i}\) is set to \(1\); analogously, if \(R_{j}(a,b)\not\in D\) but \(a\) and \(b\) occur in \(D\) in a binary fact, then \((\mathbf{v}_{a,b})_{j}\) is set to \(0\). 
Moreover, the edges of the coloured graph indicate different types of 'connections' between constants; for example, vertices \(v_{a}\) and \(v_{a,b}\) are connected by an edge of colour \(c_{1}\) to indicate that constant \(a\) occurs first in the constant pair \((a,b)\). A variant of this approach was also proposed by Liu et al. (2021) in the context of knowledge graph completion. We next show how to capture this encoding using rules. Note that the encoder introduces vertices of the form \(v_{a,b}\) for pairs of constants \(a\) and \(b\), so the encoding program \(\mathcal{P}_{\mathsf{enc}}\) requires value invention. This can be conveniently realised using functional terms. For example, we can represent vertex \(v_{a,b}\) using term \(g(a,b)\), and we can represent each vertex of the form \(v_{a}\) using a term \(f(a)\) for uniformity. Applying the encoding program \(\mathcal{P}_{\mathsf{enc}}\) to a dataset thus produces a \((\mathsf{Col},\delta)\)-dataset with functional terms, which should be processed by the GNN as if they were constants; for example, the canonical encoding should transform \(g(a,b)\) into vertex \(v_{g(a,b)}\). Based on this idea, the encoding program \(\mathcal{P}_{\mathsf{enc}}\) contains rule (4) instantiated for each \(i\in\{1,\ldots,\epsilon\}\), and rules (5)-(13) instantiated for each \(j\in\{\epsilon+1,\ldots,\delta\}\). \[A_{i}(x) \to U_{i}(f(x)) \tag{4}\] \[R_{j}(x,y) \to U_{j}(g(x,y)) \tag{5}\] \[R_{j}(x,y) \to E^{c_{1}}(f(x),g(x,y)) \tag{6}\] \[R_{j}(x,y) \to E^{c_{1}}(g(x,y),f(x)) \tag{7}\] \[R_{j}(x,y) \to E^{c_{2}}(f(y),g(x,y)) \tag{8}\] \[R_{j}(x,y) \to E^{c_{2}}(g(x,y),f(y)) \tag{9}\] \[R_{j}(x,y) \to E^{c_{3}}(g(x,y),g(y,x)) \tag{10}\] \[R_{j}(x,y) \to E^{c_{3}}(g(y,x),g(x,y)) \tag{11}\] \[R_{j}(x,y) \to E^{c_{4}}(f(x),f(y)) \tag{12}\] \[R_{j}(x,y) \to E^{c_{4}}(f(y),f(x)) \tag{13}\] Rules (4) and (5) ensure that all unary and binary facts in the input dataset are encoded as facts of the form \(U_{i}(f(a))\) and \(U_{j}(g(a,b))\); thus, when these are further transformed into a \((\mathsf{Col},\delta)\)-graph, the vectors labelling vertices \(v_{f(a)}\) and \(v_{g(a,b)}\) encode all input facts of the form \(A_{i}(a)\) and \(R_{j}(a,b)\) for \(i\in\{1,\ldots,\epsilon\}\) and \(j\in\{\epsilon+1,\ldots,\delta\}\). In addition, rules (6)-(13) encode the adjacency relationships between terms: colour \(c_{1}\) connects terms \(g(a,b)\) and \(f(a)\), colour \(c_{2}\) connects \(g(a,b)\) and \(f(b)\), colour \(c_{3}\) connects \(g(a,b)\) and \(g(b,a)\), and colour \(c_{4}\) connects terms \(f(a)\) and \(f(b)\) provided that \(a\) and \(b\) occur jointly in a binary fact. Program \(\mathcal{P}_{\mathsf{dec}}\) capturing the decoder contains rule (14) instantiated for each \(i\in\{1,\ldots,\epsilon\}\), as well as rule (15) instantiated for each \(j\in\{\epsilon+1,\ldots,\delta\}\). \[U_{i}(f(x)) \to A_{i}(x) \tag{14}\] \[U_{j}(g(x,y)) \to R_{j}(x,y) \tag{15}\] Intuitively, these rules just 'read off' the facts from the labels of vertices such as \(v_{f(a)}\) and \(v_{g(a,b)}\). The composition of these three programs is a (function-free) Datalog program. It is straightforward to show that, for each dataset \(D\), the graph obtained by applying the encoder by Tena Cucala et al. (2022) is isomorphic to the graph obtained by applying the canonical encoding from Definition 2 to \(T_{\mathcal{P}_{\mathsf{enc}}}(D)\), and thus program \(\mathcal{P}_{\mathsf{enc}}\) correctly captures their encoder.
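As a sanity check of rules (4)-(13), the following sketch applies them directly to a dataset; the tuple-based representation of facts and function terms is our own illustrative choice, not the authors' implementation.

```python
def encode(unary_facts, binary_facts):
    """Apply rules (4)-(13): unary_facts contains pairs (i, a) for A_i(a),
    binary_facts contains triples (j, a, b) for R_j(a, b). Function terms
    f(a) and g(a, b) are represented as tuples ('f', a) and ('g', a, b)."""
    U, E = set(), set()
    for i, a in unary_facts:
        U.add((i, ('f', a)))                        # rule (4)
    for j, a, b in binary_facts:
        fa, fb, gab, gba = ('f', a), ('f', b), ('g', a, b), ('g', b, a)
        U.add((j, gab))                             # rule (5)
        E |= {('c1', fa, gab), ('c1', gab, fa)}     # rules (6) and (7)
        E |= {('c2', fb, gab), ('c2', gab, fb)}     # rules (8) and (9)
        E |= {('c3', gab, gba), ('c3', gba, gab)}   # rules (10) and (11)
        E |= {('c4', fa, fb), ('c4', fb, fa)}       # rules (12) and (13)
    return U, E    # the facts U_i(t) and E^c(t, s) of the encoded dataset
```

The decoder of rules (14)-(15) simply inverts this: each \(U_{i}(f(a))\) is read back as \(A_{i}(a)\) and each \(U_{j}(g(a,b))\) as \(R_{j}(a,b)\).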
A limitation of this encoding is that the transformation's output can contain a fact of the form \(R(a,b)\) only if the input dataset contains a fact of the form \(S(a,b)\) or \(S(b,a)\). Intuitively, the presence of \(S(a,b)\) or \(S(b,a)\) in the input ensures that the resulting \((\mathsf{Col},\delta)\)-graph contains a vertex \(v_{g(a,b)}\) for representing binary facts of the form \(R(a,b)\). An obvious way to overcome this limitation is to introduce terms \(g(a,b)\) for all constants \(a\) and \(b\) occurring in the input, without requiring \(a\) and \(b\) to occur jointly in a binary fact. While this increases the expressivity of the end-to-end transformation, the increase is due to the encoding step, rather than the GNN. Our framework makes this point clear. For example, we can extend \(\mathcal{P}_{\mathsf{enc}}\) with rules such as (16)-(19) and so on for all other combinations of unary and binary predicates and colours. The chaining of \(\mathcal{P}_{\mathsf{enc}}\), \(\mathcal{P}_{\mathcal{N}}\), and \(\mathcal{P}_{\mathsf{dec}}\) can now capture different transformations even if \(\mathcal{P}_{\mathcal{N}}\) remains the same. \[A_{i}(x)\wedge A_{j}(y) \to E^{c_{1}}(f(x),g(x,y)) \tag{16}\] \[A_{i}(x)\wedge A_{j}(y) \to E^{c_{1}}(g(x,y),f(x))\] (17) \[R_{i}(x,z)\wedge A_{j}(y) \to E^{c_{1}}(g(x,y),f(x))\] (18) \[R_{i}(z,x)\wedge A_{j}(y) \to E^{c_{1}}(g(x,y),f(x)) \tag{19}\] **Morris et al. (2019)** introduced \(k\)-GNNs and showed them to be more expressive than standard GNNs. The input to a \(k\)-GNN is a symmetric \((\mathsf{Col},\delta_{1})\)-graph \(\mathcal{G}_{1}\) without self-loops where \(\mathsf{Col}\) contains a single colour \(c\) and, for each vertex \(v\) of \(\mathcal{G}_{1}\), \((\mathbf{v})_{i}=1\) for exactly one \(1\leq i\leq\delta_{1}\). To apply a \(k\)-GNN to \(\mathcal{G}_{1}\), the latter is transformed into another \((\mathsf{Col},\delta_{2})\)-graph \(\mathcal{G}_{2}\) that contains one vertex for each set of \(k\) distinct vertices of \(\mathcal{G}_{1}\), and then a standard \((\mathsf{Col},\delta_{2})\)-GNN is applied to \(\mathcal{G}_{2}\). We next show that the transformation of \(\mathcal{G}_{1}\) into \(\mathcal{G}_{2}\) can be captured by a program \(\mathcal{P}_{\mathsf{enc}}\) that transforms a \((\mathsf{Col},\delta_{1})\)-dataset over unary predicates \(A_{1},\ldots,A_{\delta_{1}}\) and a binary predicate \(R\) into a \((\mathsf{Col},\delta_{2})\)-dataset. Thus, the increase in expressivity of \(k\)-GNNs does not come from the GNN model itself, but rather from the encoding implicit in their approach. For readability, we make several simplifying assumptions. First, while Morris et al. (2019) consider sets of \(k\) distinct vertices in order to ensure practical scalability, we consider \(k\)-tuples instead and limit our presentation to just \(k=2\). Second, we consider just the _local neighbourhood_ approach to connecting vertices in \(\mathcal{G}_{2}\). Finally, our encoding requires extending Datalog not only with function symbols, but also with stratified negation-as-failure not (Dantsin et al., 2001). Program \(\mathcal{P}_{\mathsf{enc}}\) consists of rules (20)-(23) instantiated for all \(i,j,k,\ell\in\{1,\ldots,\delta_{1}\}\). 
\[A_{i}(x)\wedge A_{j}(y)\wedge x\not\approx y\wedge A_{k}(x)\wedge A_{\ell}(z)\wedge x\not\approx z\wedge R(y,z)\wedge y\not\approx z\to E^{c}(g(x,y),g(x,z)) \tag{20}\]
\[A_{i}(y)\wedge A_{j}(x)\wedge y\not\approx x\wedge A_{k}(z)\wedge A_{\ell}(x)\wedge z\not\approx x\wedge R(y,z)\wedge y\not\approx z\to E^{c}(g(y,x),g(z,x)) \tag{21}\]
\[A_{i}(x)\wedge A_{j}(y)\wedge x\not\approx y\wedge\mathsf{not}\,R(x,y)\to U_{i,j,0}(g(x,y)) \tag{22}\]
\[A_{i}(x)\wedge A_{j}(y)\wedge x\not\approx y\wedge R(x,y)\to U_{i,j,1}(g(x,y)) \tag{23}\]
Conjunctions of the form \(A_{i}(x)\wedge A_{j}(y)\wedge x\not\approx y\) in these rules identify pairs of distinct constants \(a\) and \(b\) (corresponding to the vertices of \(\mathcal{G}_{1}\)) in the input dataset, and, for each such pair, \(g(x,y)\) introduces a term \(g(a,b)\) (corresponding to a vertex of \(\mathcal{G}_{2}\)). Rules (20) and (21) encode the _local neighbourhood_ approach: terms \(g(a,b)\) and \(g(d,e)\) are connected in \(\mathcal{G}_{2}\) if either \(a=d\) and \(b\neq e\), or \(b=e\) and \(a\neq d\), and additionally the two differing constants are connected in \(\mathcal{G}_{1}\). Finally, rules (22) and (23) identify the type of the subgraph of \(\mathcal{G}_{1}\) that \(a\) and \(b\) participate in. Specifically, a fact of the form \(U_{i,j,0}(g(a,b))\) says that \(a\) and \(b\) are labelled in \(\mathcal{G}_{1}\) by \(A_{i}\) and \(A_{j}\) respectively, but they are not connected in \(\mathcal{G}_{1}\). A fact of the form \(U_{i,j,1}(g(a,b))\) is analogous, but with the difference that \(a\) and \(b\) are connected in \(\mathcal{G}_{1}\).

## 4 GNNs with Max-Sum Aggregation

In this section, we introduce monotonic max-sum GNNs and prove that each such GNN corresponds to a Datalog program (possibly with inequalities in the rule bodies) that can be computed from the GNN's definition. Monotonic max-sum GNNs can use the following aggregation function in all layers, which generalises both max and sum.

**Definition 5**.: _For \(k\in\mathbb{N}_{0}\cup\{\infty\}\), a finite real multiset \(S\in\mathcal{F}(\mathbb{R})\), and \(\ell=\min\left(k,|S|\right)\), let_ \[\max\text{-}k\text{-}\mathrm{sum}(S)=\begin{cases}0&\text{if }\ell=0,\\ \sum_{i=1}^{\ell}s_{i}&\text{where }s_{1},\ldots,s_{\ell}\text{ are the }\ell\text{ largest numbers of }S.\end{cases}\]

Each occurrence of a number is counted separately; for example, \(\max\text{-}3\text{-}\mathrm{sum}(\{\!\{2,2,1,0\}\!\})=2+2+1=5\). Note that \(\max\text{-}1\text{-}\mathrm{sum}\) coincides with max on nonempty multisets and \(\max\text{-}\infty\text{-}\mathrm{sum}\) is ordinary summation, so \(\max\text{-}k\text{-}\mathrm{sum}\) indeed generalises both.
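A direct transcription of Definition 5 (a minimal sketch; the function name is ours):

```python
def max_k_sum(multiset, k):
    """max-k-sum of Definition 5: sum the l = min(k, |S|) largest elements
    of the multiset, counting repeated occurrences separately; 0 if l = 0."""
    l = min(k, len(multiset))          # k may be float('inf') for plain sum
    return sum(sorted(multiset, reverse=True)[:int(l)])

assert max_k_sum([2, 2, 1, 0], 3) == 5                # the two 2s and the 1
assert max_k_sum([2, 2, 1, 0], 1) == 2                # max-1-sum = max
assert max_k_sum([2, 2, 1, 0], float('inf')) == 5     # max-inf-sum = sum
assert max_k_sum([], 3) == 0                          # empty multiset
```

Passing `lambda s: max_k_sum(s, k)` as the aggregation function in the earlier `gnn_layer` sketch yields a max-\(k\)-sum layer.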
If \(\mathbf{A}_{L}\) and all \(\mathbf{B}_{L}^{c}\) contain only zeros, or if all \(\mathcal{X}_{L,i}\) contain only zeros, then \(L=\ell_{\mathsf{st}}\); no neighbours of \(v\) are needed so we can set all \(C_{\ell}\) to \(0\) and the equality above holds. Otherwise, \(\mathsf{cls}\) is a threshold function, so \((\mathbf{v}_{\lambda_{L}})_{i}\geq\alpha_{L}\) holds for \(\alpha_{L}\) the threshold of \(\mathsf{cls}\), and so the argument to the activation function when computing \((\mathbf{v}_{\lambda_{L}})_{i}\) is at least \(\beta_{L}\). Moreover, \((\mathbf{v}_{\lambda_{L}})_{i}\) is produced from \((\mathbf{v}_{\lambda_{L-1}})_{i}\) and the values of \((\mathbf{u}_{\lambda_{L-1}})_{j}\) where \(u\) ranges over the neighbours of \(v\). If we assume that \((\mathbf{v}_{\lambda_{L-1}})_{i}=0\) and that \(\ell_{\ell}\) is the least nonzero value that each \(u\) can contribute to \((\mathbf{v}_{\lambda_{L}})_{i}\), it suffices to have at least \(\lceil\frac{\beta_{\ell}-b_{\ell}}{w_{\ell}\cdot\epsilon_{\ell}}\rceil\) nonzero neighbours to reach \(\beta_{L}\). Thus, we can replace \(k_{\ell}\) with this number whenever this number is smaller than \(k_{\ell}\); in contrast, if \(k_{\ell}\) is smaller, we need to keep \(k_{\ell}\) so that \(\mathcal{N}^{\prime}\) does not derive any new consequences. Finally, \(\alpha_{L-1}\) is the value of \((\mathbf{v}_{\lambda_{L-1}})_{i}\) in layer \(L-1\) to which we can apply analogous reasoning. ### Equivalence with Datalog Programs We next show that there exists a Datalog program \(\mathcal{P}_{\mathcal{N}}\) that is equivalent to \(\mathcal{N}\) in the sense described in Definition 4. Towards this goal, in Definition 11 we capture the syntactic structure of the rules in \(\mathcal{P}_{\mathcal{N}}\) as rules of form (25) where \(\varphi\) is a _tree-like_ formula for \(x\). To understand the intuition, assume that we construct from \(\varphi\) a graph whose vertices are the variables in \(\varphi\), and where a directed edge from \(x\) to \(y\) is introduced for each \(E^{c}(x,y)\) in \(\varphi\); then, such graph must be a directed tree. Moreover, if variable \(x\) has children \(y_{1}\) and \(y_{2}\) in this graph, then \(\varphi\) is allowed to contain inequalities of the form \(y_{1}\not\approx y_{2}\), which provide \(\varphi\) with a limited capability for counting; for example, formula \(E^{c}(x,y_{1})\wedge E^{c}(x,y_{2})\wedge y_{1}\not\approx y_{2}\) is true precisely for those values of \(x\) that are connected via the \(E^{c}\) predicate to at least two distinct constants. We also introduce intuitive notions of a _fan-out_ (i.e., the number of children) and _depth_ of a variable. Tree-like formulas contain all concepts of the \(\mathcal{ALCQ}\) description logic (Baader et al., 2007) constructed from \(\top\), atomic concepts, and concepts of the form \(\geq nR.C\) and \(C_{1}\sqcap C_{2}\); however, our definition also allows for formulas such as \(E^{c}(x,y_{1})\wedge E^{c}(x,y_{2})\wedge U(y_{1})\wedge y_{1}\not\approx y_{2}\), which do not correspond to the translation of \(\mathcal{ALCQ}\) concepts. 
**Definition 11**.: _A tree-like formula for a variable is defined inductively as follows._ * _For each variable_ \(x\)_, formula_ \(\top\) _is tree-like for_ \(x\)_._ * _For each variable_ \(x\) _and each unary predicate_ \(U\)_, atom_ \(U(x)\) _is tree-like for_ \(x\)_._ * _For each variable_ \(x\) _and all tree-like formulas_ \(\varphi_{1}\) _and_ \(\varphi_{2}\) _for_ \(x\) _that share no variables other than_ \(x\)_, formula_ \(\varphi_{1}\wedge\varphi_{2}\) _is tree-like for_ \(x\)_._ * _For each variable_ \(x\)_, each binary predicate_ \(E^{c}\)_, and all tree-like formulas_ \(\varphi_{1},\ldots,\varphi_{n}\) _for distinct variables_ \(y_{1},\ldots,y_{n}\) _where no_ \(\varphi_{i}\) _contains_ \(x\) _and no_ \(\varphi_{i}\) _and_ \(\varphi_{j}\) _with_ \(i\neq j\) _share a variable, formula (_24_) is tree-like for_ \(x\)_._ \[\bigwedge_{i=1}^{n}\left(E^{c}(x,y_{i})\wedge\varphi_{i}\right)\wedge\bigwedge_ {1\leq i<j\leq n}y_{i}\not\approx y_{j}\] (24) _Let \(\varphi\) be a tree-like formula and let \(x\) be a variable in \(\varphi\). The fan-out of \(x\) in \(\varphi\) is the number of distinct variables \(y_{i}\) for which \(E^{c}(x,y_{i})\) is a conjunct of \(\varphi\). The depth of \(x\) is the maximal \(n\) for which there exist variables \(x_{0},\ldots,x_{n}\) and predicates \(E^{c_{1}},\ldots,E^{c_{n}}\) such that \(x_{n}=x\) and \(E^{c_{i}}(x_{i-1},x_{i})\) is a conjunct of \(\varphi\) for each \(1\leq i\leq n\). The depth of \(\varphi\) is the maximum depth of a variable in \(\varphi\)._ _For \(d\) and \(f\) natural numbers, a tree-like formula \(\varphi\) is \((d,f)\)-tree-like if, for each variable \(x\) in \(\varphi\), the depth \(i\) of \(x\) is at most \(d\) and the fan-out of \(x\) is at most \(f(d-i)\). Moreover, a Datalog rule is \((d,f)\)-tree-like if it is of form (25), where \(\varphi\) is a \((d,f)\)-tree-like formula for \(x\)._ \[\varphi\to U(x) \tag{25}\] Note that \(\varphi\) is allowed to be \(\top\) in a rule of form (25); for example, \(\top\to U(x)\) is a valid \((0,0)\)-tree-like rule. As explained in Section 2, when applied to a dataset \(D\), such a rule derives \(U(t)\) for each term \(t\) occurring in \(D\). Now let \(\delta_{\mathcal{N}}=\max(\delta_{0},\ldots,\delta_{L})\). To construct \(\mathcal{P}_{\mathcal{N}}\), we proceed as follows: we compute \(f=|\mathsf{Col}|\cdot\delta_{\mathcal{N}}\cdot C_{\mathcal{N}}\), we enumerate all \((L,f)\)-tree-like rules (up to variable renaming), and we add to \(\mathcal{P}_{\mathcal{N}}\) each such rule that is captured by \(\mathcal{N}\). Lemma 12 shows that this latter test can, at least in principle, be operationalised. In particular, to test whether a rule \(\varphi\to U(x)\) with \(n\) variables is captured by \(\mathcal{N}\), we consider each possible dataset \(D\) obtained from the atoms of \(\varphi\) by substituting the variables with up to \(n\) distinct constants, and we check whether applying \(\mathcal{N}\) to \(D\) derives the analogously instantiated rule head; if this is the case for all such \(D\), then the rule is captured by \(\mathcal{N}\). Tena Cucala et al. (2022) used a similar test for MGNNs, but their approach was simpler since it did not need to support inequalities. Theorem 13 then shows that program \(\mathcal{P}_{\mathcal{N}}\) is indeed equivalent to \(\mathcal{N}\). **Lemma 12**.: _Let \(r\) be a constant-free Datalog rule with head \(H\), let \(V\) be the set of variables in \(r\), and let \(A\) be the set of body atoms of \(r\). 
Then, \(\mathcal{N}\) captures \(r\) if and only if \(H\nu\in T_{\mathcal{N}}(A\nu)\) for each substitution \(\nu:V\to S\) such that \(H\nu\in T_{r}(A\nu)\), where \(S\) is a set of \(|V|\) distinct constants._

**Theorem 13**.: _Let \(\mathcal{P}_{\mathcal{N}}\) be the Datalog program containing, up to variable renaming, each \((L,|\mathsf{Col}|\cdot\delta_{\mathcal{N}}\cdot C_{\mathcal{N}})\)-tree-like rule captured by \(\mathcal{N}\), where \(\delta_{\mathcal{N}}=\max(\delta_{0},\ldots,\delta_{L})\). Then, \(\mathcal{N}\) and \(\mathcal{P}_{\mathcal{N}}\) are equivalent._

To understand this result intuitively, assume that \(\mathcal{N}\) is applied to a dataset \(D\). The fact that all rules of \(\mathcal{P}_{\mathcal{N}}\) are captured by \(\mathcal{N}\) clearly implies \(T_{\mathcal{P}_{\mathcal{N}}}(D)\subseteq T_{\mathcal{N}}(D)\). Furthermore, by equation (3), the value of \((\mathbf{v}_{L})_{i}\) for some \(i\) is computed from the values of \((\mathbf{v}_{L-1})_{i}\) and \((\mathbf{u}_{L-1})_{j}\) for \(k\leq C_{L}\) distinct neighbours \(u\) of \(v\) per colour and position; but then, if \(t\) and \(s\) are terms represented by \(v\) and \(u\), respectively, the canonical encoding ensures \(E^{c}(t,s)\in D\) for some \(c\in\mathsf{Col}\). Also, \((\mathbf{u}_{L-1})_{j}\) are computed using the neighbours of \(u\) and so on. Hence, each term \(w\) in \(D\) that can possibly influence \(\mathbf{v}_{L}\) must be connected in \(D\) to \(t\) by at most \(L\) such facts, so all relevant neighbours of \(t\) can be selected by a \((d,f)\)-tree-like formula. The inequalities can be used to check for the existence of at least \(k\) distinct neighbours of \(t\) in \(D\). Now let \(D^{\prime}\) be the subset of \(D\) containing precisely the facts that contribute to the value of \((\mathbf{v}_{L})_{i}\). We can unfold \(D^{\prime}\) into another tree-like dataset \(D^{\prime\prime}\) that corresponds to the body of an instantiated tree-like rule \(r\). Since the elements of all \(\mathbf{A}_{\ell}\) and \(\mathbf{B}_{\ell}^{c}\) are nonnegative, applying \(\mathcal{N}\) to \(D\) and \(D^{\prime\prime}\) derives the same value for \(\mathsf{cls}((\mathbf{v}_{L})_{i})\). If this value is \(1\), then applying the rule \(r\) to \(D\) produces the same fact as \(\mathcal{N}\). Furthermore, by definition, \(\mathcal{N}\) captures \(r\) and so \(r\in\mathcal{P}_{\mathcal{N}}\). Thus, \(T_{\mathcal{P}_{\mathcal{N}}}(D)\) contains all facts derived by \(\mathcal{N}\) on \(D\).
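Lemma 12 yields a direct, if naive, decision procedure for the capture test; the sketch below is our own rendering, with atoms as tuples such as `('U1', 'x')` or `('Ec', 'x', 'y')`, inequalities as `('!=', s, t)`, and `apply_gnn` standing for a (hypothetical) function computing \(T_{\mathcal{N}}\) via the canonical encoding.

```python
from itertools import product

def captures(body_literals, head, variables, apply_gnn):
    """Test of Lemma 12: N captures the rule iff, for every substitution of
    the rule's variables by |V| fixed distinct constants under which the
    rule fires, the instantiated head is derived by the GNN from the
    instantiated body atoms."""
    constants = [f"a{i}" for i in range(len(variables))]
    for values in product(constants, repeat=len(variables)):
        nu = dict(zip(variables, values))

        def ground(literal):
            return (literal[0],) + tuple(nu.get(t, t) for t in literal[1:])

        # the rule fires only if all body inequalities hold under nu
        if any(ground(l)[1] == ground(l)[2]
               for l in body_literals if l[0] == '!='):
            continue
        dataset = {ground(l) for l in body_literals if l[0] != '!='}
        if ground(head) not in apply_gnn(dataset):
            return False
    return True
```

Since the substitutions range over a fixed finite set of constants, the test involves finitely many GNN evaluations per candidate rule.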
### Enumerating Sets \(\mathcal{X}_{\ell,i}\)

The results we presented thus far show that program \(\mathcal{P}_{\mathcal{N}}\) exists, but it is not yet clear that \(\mathcal{P}_{\mathcal{N}}\) is computable: the definition of \(C_{\ell}\) in Algorithm 1 uses sets \(\mathcal{X}_{\ell,i}\), which can be infinite. We next show that each \(\mathcal{X}_{\ell,i}\) can be enumerated algorithmically using function \(\mathsf{Next}(\ell,i,\alpha)\) from Algorithm 2 as follows: for \(\alpha\) a special symbol \(\rhd\), function \(\mathsf{Next}(\ell,i,\rhd)\) returns the smallest element of \(\mathcal{X}_{\ell,i}\); moreover, for \(\alpha\in\mathbb{R}\), function \(\mathsf{Next}(\ell,i,\alpha)\) returns the smallest element of \(\mathcal{X}_{\ell,i}^{>\alpha}\) if \(\mathcal{X}_{\ell,i}^{>\alpha}\neq\emptyset\), or \(\lhd\) otherwise. For example, \(\mathsf{Next}(\ell,i,0)\) returns the smallest nonzero element of \(\mathcal{X}_{\ell,i}\), if one exists.

In the presentation of Algorithm 2, we use the following notation: for \(\mathbf{x}\) a vector, \(j\) an index, and \(v\) a real number, \(\mathbf{x}[j\gets v]\) is the vector obtained from \(\mathbf{x}\) by replacing its \(j\)-th component with \(v\). The algorithm is based on the observation that, since \(\mathbf{A}_{\ell}\) and \(\mathbf{B}_{\ell}^{c}\) contain only nonnegative elements, and the activation function is monotonically increasing, we can enumerate the values computed by equation (3) in some \(\mathbf{v}_{\ell}\) in a monotonically increasing fashion. To achieve this, the algorithm maintains a _frontier_ \(F\) of triples \(\langle\mathbf{x},\mathbf{Y},z\rangle\), each describing one way to compute a value of \((\mathbf{v}_{\ell})_{i}\): vector \(\mathbf{x}\) reflects the values of \((\mathbf{v}_{\ell-1})_{i}\), the \((\mathsf{Col},\ell-1)\)-multiset family \(\mathbf{Y}\) describes multisets \(\mathbf{Y}^{c}\) reflecting the values of \((\mathbf{u}_{\ell-1})_{i}\), and \(z\) is \(\mathsf{Val}(\ell,i,\mathbf{x},\mathbf{Y})\)--that is, the argument to the activation function when computing \((\mathbf{v}_{\ell})_{i}\). The starting point for the exploration (line 8) is provided by \(\mathsf{Start}(\ell)\), which returns \(\mathbf{v}_{\ell}\) for a vertex \(v\) with no neighbours. To enumerate all candidate values for \((\mathbf{v}_{\ell})_{i}\) in an increasing order, the algorithm selects a triple in the frontier with the smallest \(z\) (line 10), and considers ways to modify \(\mathbf{x}\) or \(\mathbf{Y}\) that increase \(z\); each such combination is added to the frontier (lines 14, 19, and 27). Modifications involve replacing some component of \(\mathbf{x}\) with the next component (lines 12-14), choosing some \(\mathbf{y}\in\mathbf{Y}^{c}\) for some \(c\in\mathsf{Col}\) and replacing some component of \(\mathbf{y}\) with the next component (lines 16-19), or expanding some \(\mathbf{Y}^{c}\) with an additional vector (lines 20-27). In the latter case, if \(\mathsf{Start}(\ell)\) contains just zeros, then adding \(\mathsf{Start}(\ell)\) to \(\mathbf{Y}^{c}\) is not going to change the computed value of \(z\), so the algorithm considers vectors obtained by expanding \(\mathsf{Start}(\ell)\) in order to allow \(z\) to increase. This process produces values of \(z\) in an increasing order and it guarantees that \(\sigma(z)\in\mathcal{X}_{\ell,i}\). If \(\alpha=\rhd\), the algorithm stops when the first such value is produced (line 7). For \(\alpha\in\mathbb{R}\), Theorem 8 guarantees that set \(\mathcal{X}_{\ell,i}\setminus\mathcal{X}_{\ell,i}^{>\alpha}\) is finite; since \(F\) is extended only if the value of \(z\) increases, either \(F\) eventually becomes empty or \(\sigma(z)\) exceeds \(\alpha\), so the algorithm terminates (line 11 or 28). Theorem 14 captures the formal properties of the algorithm.
```
 1: if \(\ell=0\) then
 2:   if \(\alpha=\rhd\) or \(\alpha<0\) then return \(0\)
 3:   else if \(\alpha<1\) then return \(1\)
 4:   else return \(\lhd\)
 5: let \(\mathbf{Y}_{\emptyset}\) be such that \(\mathbf{Y}_{\emptyset}^{c}=\emptyset\) for each \(c\in\mathsf{Col}\)
 6: \(z:=\mathsf{Val}(\ell,i,\mathsf{Start}(\ell),\mathbf{Y}_{\emptyset})\)
 7: if \(\alpha=\rhd\) then return \(\sigma(z)\)
 8: \(F:=\{\langle\mathsf{Start}(\ell),\mathbf{Y}_{\emptyset},z\rangle\}\)
 9: while \(F\neq\emptyset\) do
10:   choose and remove \(\langle\mathbf{x},\mathbf{Y},z\rangle\) in \(F\) with least \(z\)
11:   if \(\sigma(z)>\alpha\) then return \(\sigma(z)\)
12:   for \(\mathbf{x}^{\prime}\in\mathsf{Expand}(\ell,\mathbf{x})\) do
13:     \(z^{\prime}:=\mathsf{Val}(\ell,i,\mathbf{x}^{\prime},\mathbf{Y})\)
14:     if \(z^{\prime}>z\) then add \(\langle\mathbf{x}^{\prime},\mathbf{Y},z^{\prime}\rangle\) to \(F\)
15:   for \(c\in\mathsf{Col}\) do
16:     for \(\mathbf{y}\in\mathbf{Y}^{c}\) and \(\mathbf{y}^{\prime}\in\mathsf{Expand}(\ell,\mathbf{y})\) do
17:       \(\mathbf{Y}^{\prime}:=\mathbf{Y}\) and \(\mathbf{Y}^{\prime c}:=(\mathbf{Y}^{\prime c}\setminus\{\mathbf{y}\})\cup\{\mathbf{y}^{\prime}\}\)
18:       \(z^{\prime}:=\mathsf{Val}(\ell,i,\mathbf{x},\mathbf{Y}^{\prime})\)
19:       if \(z^{\prime}>z\) then add \(\langle\mathbf{x},\mathbf{Y}^{\prime},z^{\prime}\rangle\) to \(F\)
20:     if \(\mathsf{Start}(\ell)\) contains a nonzero then
21:       \(V:=\{\mathsf{Start}(\ell)\}\)
22:     else
23:       \(V:=\mathsf{Expand}(\ell,\mathsf{Start}(\ell))\)
24:     for \(\mathbf{y}^{\prime}\in V\) do
25:       \(\mathbf{Y}^{\prime}:=\mathbf{Y}\) and \(\mathbf{Y}^{\prime c}:=\mathbf{Y}^{\prime c}\cup\{\mathbf{y}^{\prime}\}\)
26:       \(z^{\prime}:=\mathsf{Val}(\ell,i,\mathbf{x},\mathbf{Y}^{\prime})\)
27:       if \(z^{\prime}>z\) then add \(\langle\mathbf{x},\mathbf{Y}^{\prime},z^{\prime}\rangle\) to \(F\)
28: return \(\lhd\)
29: function \(\mathsf{Start}(\ell)\)
30:   return the vector \(\mathbf{x}\) of dimension \(\delta_{\ell-1}\) where \((\mathbf{x})_{j}=\mathsf{Next}(\ell-1,j,\rhd)\) for \(1\leq j\leq\delta_{\ell-1}\)
31: function \(\mathsf{Expand}(\ell,\mathbf{v})\)
32:   \(V:=\emptyset\)
33:   for \(1\leq j\leq\delta_{\ell-1}\) do
34:     \(v^{\prime}:=\mathsf{Next}(\ell-1,j,(\mathbf{v})_{j})\)
35:     if \(v^{\prime}\neq\lhd\) then \(V:=V\cup\{\mathbf{v}[j\gets v^{\prime}]\}\)
36:   return \(V\)
```
**Algorithm 2** \(\mathsf{Next}(\ell,i,\alpha)\)

**Theorem 14**.: _For all \(1\leq\ell\leq L\) and \(1\leq i\leq\delta_{\ell}\), function \(\mathsf{Next}\) terminates, and moreover:_
* \(\mathsf{Next}(\ell,i,\rhd)\) _returns the smallest element of_ \(\mathcal{X}_{\ell,i}\)_;_
* _for each_ \(\alpha\in\mathbb{R}\)_,_ \(\mathsf{Next}(\ell,i,\alpha)\) _returns_ \(\lhd\) _if_ \(\mathcal{X}_{\ell,i}^{>\alpha}=\emptyset\)_, and otherwise it returns the smallest element of_ \(\mathcal{X}_{\ell,i}^{>\alpha}\)_._

The complexity of Algorithm 2 depends on the number of recursive calls to \(\mathsf{Next}\), which in turn depends on the matrices of \(\mathcal{N}\). We leave investigating this issue to future work.
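The following toy sketch (ours, not the paper's algorithm) conveys why the per-layer value space becomes manageable once at most \(k\) neighbour contributions matter: with nonnegative weights and max-\(k\)-sum aggregation, every value a single feature can take at one layer arises from a multiset of at most \(k\) candidate neighbour values.

```python
from itertools import combinations_with_replacement

def layer_values(prev_values, w_self, w_nbr, bias, k, sigma):
    """All values sigma(w_self*x + w_nbr*max_k_sum(S) + bias) of a single
    feature, with x ranging over prev_values and S over multisets drawn
    from prev_values; multisets larger than k add nothing new because
    only the k largest elements contribute to the sum."""
    sums = {sum(combo)
            for r in range(k + 1)
            for combo in combinations_with_replacement(prev_values, r)}
    return sorted({sigma(w_self * x + w_nbr * s + bias)
                   for x in prev_values for s in sums})

relu = lambda z: max(z, 0.0)
print(layer_values([0.0, 1.0], 1.0, 0.5, 0.0, 2, relu))
# [0.0, 0.5, 1.0, 1.5, 2.0]
```

Algorithm 2 achieves a similar effect lazily and across layers: rather than materialising the whole (possibly infinite) set \(\mathcal{X}_{\ell,i}\), it produces its elements one by one in increasing order.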
## 5 Limiting Aggregation to Max

In this section we study the expressivity of _monotonic max GNNs_, which follow the same restrictions as monotonic max-sum GNNs but additionally allow only for the max aggregation function. Theorem 16 shows that each such GNN corresponds to a Datalog program without inequalities. Consequently, monotonic max GNNs cannot count the connections of a constant in a dataset.

**Definition 15**.: _A monotonic max \((\mathsf{Col},\delta)\)-GNN is a monotonic max-sum GNN that uses the max-\(1\)-sum aggregation function in all layers._

**Theorem 16**.: _For each monotonic max \((\mathsf{Col},\delta)\)-GNN \(\mathcal{N}\) with \(L\) layers, let \(\delta_{\mathcal{N}}=\max(\delta_{0},\ldots,\delta_{L})\), and let \(\mathcal{P}_{\mathcal{N}}\) be the Datalog program containing, up to variable renaming, each \((L,|\mathsf{Col}|\cdot\delta_{\mathcal{N}})\)-tree-like rule without inequalities captured by \(\mathcal{N}\). Then, \(\mathcal{N}\) and \(\mathcal{P}_{\mathcal{N}}\) are equivalent._

Tena Cucala et al. (2022) presented a closely related characterisation for MGNNs, and the main difference is that we use the canonical encoding. The latter allows us to describe the target Datalog class more precisely, which in turn allows us to prove the converse: each Datalog program with only tree-like rules and without inequalities is equivalent to a monotonic max GNN. In what follows, we fix a program \(\mathcal{P}\) consisting of \((d,f)\)-tree-like rules without inequalities. Recall that the signature of \(\mathcal{P}\) consists of unary predicates \(U_{1},\ldots,U_{\delta}\) and binary predicates \(E^{c}\) for \(c\in\mathsf{Col}\). Now let \(\tau_{1},\ldots,\tau_{n}\) be a sequence containing, up to variable renaming, each \((d,f)\)-tree-like formula for variable \(x\) without inequalities, ordered by increasing depth--that is, for all \(i<j\), the depth of \(\tau_{i}\) is less than or equal to the depth of \(\tau_{j}\). Each \(\tau_{i}\) can be written as \[\tau_{i}=\varphi_{i,0}\wedge\bigwedge_{k=1}^{m_{i}}\Big{(}E^{c_{k}}(x,y_{k})\wedge\varphi_{i,k}\Big{)}, \tag{26}\] where \(\varphi_{i,0}\) is a conjunction of unary atoms using only variable \(x\), each \(\varphi_{i,k}\) with \(1\leq k\leq m_{i}\) is a \((d-1,f)\)-tree-like formula for \(y_{k}\), and, for all \(1\leq k<k^{\prime}\leq m_{i}\), formulas \(\varphi_{i,k}\) and \(\varphi_{i,k^{\prime}}\) do not have variables in common. Note that formulas \(\varphi_{i,k}\) can be \(\top\), and that colours \(c_{k}\) need not be distinct. We define \(\mathcal{N}_{\mathcal{P}}\) as the monotonic max \((\mathsf{Col},\delta)\)-GNN of form (2) satisfying the following conditions. The number of layers is \(L=d+2\), the activation function is ReLU, and the classification function \(\mathsf{cls}\) is the step function with threshold \(1\). For \(1\leq\ell<L\), dimension \(\delta_{\ell}\) is defined as the number of formulas in the above sequence of depth at most \(\ell-1\). The elements of \(\mathbf{A}_{\ell}\), \(\mathbf{B}_{\ell}^{c}\), and \(\mathbf{b}_{\ell}\) are defined as follows, for \(c\in\mathsf{Col}\), \(1\leq\ell\leq L\), \(1\leq i\leq\delta_{\ell}\), and \(1\leq j\leq\delta_{\ell-1}\).
\[(\mathbf{A}_{\ell})_{i,j}=\begin{cases}1&\text{if }\ell=1\text{ and }\tau_{i}\text{ contains }U_{j}(x);\text{ or}\\ &2\leq\ell<L\text{ and either }1\leq i\leq\delta_{\ell-1}\text{ and }i=j\text{, or }\delta_{\ell-1}<i\leq\delta_{\ell}\text{ and }\varphi_{i,0}=\tau_{j};\text{ or}\\ &\ell=L\text{ and }\mathcal{P}\text{ contains }\tau_{j}\to U_{i}(x)\text{ up to variable renaming};\\ 0&\text{otherwise}.\end{cases}\]
\[(\mathbf{B}_{\ell}^{c})_{i,j}=\begin{cases}1&\text{if }2\leq\ell<L\text{ and there exists }1\leq k\leq m_{i}\text{ such that }c=c_{k}\text{ and }\varphi_{i,k}\text{ and }\tau_{j}\text{ are equal up to variable renaming};\\ 0&\text{otherwise}.\end{cases}\]
\[(\mathbf{b}_{\ell})_{i}=\begin{cases}1&\text{if }\ell=1\text{, or }1\leq\ell<L\text{ and }\delta_{\ell-1}<i\leq\delta_{\ell};\\ 0&\text{otherwise}.\end{cases}\]
To understand the intuition behind the construction of \(\mathcal{N}_{\mathcal{P}}\), assume that \(\mathcal{N}_{\mathcal{P}}\) is applied to a dataset \(D\), and consider a vector \(\mathbf{v}_{\ell}\) labelling in layer \(\ell\) a vertex corresponding to some term \(t\) of \(D\). Then, the \(i\)-th component of \(\mathbf{v}_{\ell}\) is paired with formula \(\tau_{i}\) from the above enumeration, and it indicates whether it is possible to evaluate \(\tau_{i}\) over \(D\) by mapping variable \(x\) to \(t\). This is formally captured by Lemma 17. To ensure that \(\mathcal{N}_{\mathcal{P}}\) and \(\mathcal{P}\) are equivalent, layer \(L\) of \(\mathcal{N}_{\mathcal{P}}\) simply realises a disjunction over all rules in the program.

**Lemma 17**.: _For each \((\mathsf{Col},\delta)\)-dataset \(D\), layer \(1\leq\ell<L\) of \(\mathcal{N}_{\mathcal{P}}\), position \(1\leq i\leq\delta_{\ell}\), and term \(t\) in \(D\), and for \(\mathbf{v}_{\ell}\) the labelling of the vertex corresponding to \(t\) when \(\mathcal{N}_{\mathcal{P}}\) is applied to the canonical encoding of \(D\),_
* \((\mathbf{v}_{\ell})_{i}=1\) _if there exists a substitution_ \(\nu\) _mapping_ \(x\) _to_ \(t\) _such that_ \(D\models\tau_{i}\nu\)_, and_
* \((\mathbf{v}_{\ell})_{i}=0\) _otherwise._

Note that each \(\delta_{\ell}\) with \(1\leq\ell<L\) is determined by the number of \((d,f)\)-tree-like formulas of depth \(\ell-1\), and that \(\delta_{L-1}\) is the largest such number. We next determine an upper bound on \(\delta_{L-1}\). By Definition 11, the fan-out of a variable of depth \(i\) is at most \(f(d-i)\). The number of variables of depth \(i\) is at most the number of variables of depth \(i-1\) times the fan-out of each variable, which is \(f^{i}\cdot d\cdots(d-i+1)\) and is bounded by \(f^{i}\cdot d!\). By adding up the contribution for each depth, there are at most \(f^{d}\cdot(d+1)!\) variables. Each variable is labelled by one of the \(2^{\delta}\) conjunctions of depth zero, and each non-root variable is connected by one of the \(|\mathsf{Col}|\) predicates to its parent. Hence, there are at most \((|\mathsf{Col}|\cdot 2^{\delta})^{f^{d}\cdot(d+1)!}\) tree-like formulas.

**Theorem 18**.: _Program \(\mathcal{P}\) and GNN \(\mathcal{N}_{\mathcal{P}}\) are equivalent, and moreover \(\delta_{L-1}\leq(|\mathsf{Col}|\cdot 2^{\delta})^{f^{d}\cdot(d+1)!}\)._
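For a feel of how quickly this bound grows, here is a small computation with hypothetical parameter values:

```python
from math import factorial

def delta_bound(num_colours, delta, f, d):
    """Upper bound (|Col| * 2**delta) ** (f**d * (d+1)!) from Theorem 18."""
    return (num_colours * 2**delta) ** (f**d * factorial(d + 1))

print(delta_bound(2, 2, 1, 1))   # (2*4)**(1*2) = 64
print(delta_bound(2, 2, 2, 2))   # (2*4)**(4*6) = 8**24, about 4.7e21
```

So, while the construction of \(\mathcal{N}_{\mathcal{P}}\) is effective, the GNN it produces can be astronomically large even for modest programs.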
## 6 Conclusion

We have shown that each monotonic max-sum GNN (i.e., a GNN that uses max and sum aggregation functions and satisfies certain properties) is equivalent to a Datalog program with inequalities, in the sense that applying the GNN or a single round of the rules of the program to any dataset produces the same result. We have also sharpened this result to monotonic max GNNs and shown the converse: each tree-like Datalog program without inequalities is equivalent to a monotonic max GNN. We see many avenues for future work. First, we aim to completely characterise monotonic max-sum GNNs. Second, we intend to implement rule extraction. Third, we shall investigate the empirical performance of monotonic max-sum GNNs on tasks other than link prediction, such as node classification.

## Acknowledgements

This work was supported by the SIRIUS Centre for Scalable Data Access (Research Council of Norway, project number 237889), and the EPSRC projects ConCur (EP/V050869/1), UK FIRES (EP/S019111/1), and AnaLOG (EP/P025943/1). For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
2307.07994
Facilitating Multi-turn Emotional Support Conversation with Positive Emotion Elicitation: A Reinforcement Learning Approach
Emotional support conversation (ESC) aims to provide emotional support (ES) to improve one's mental state. Existing works stay at fitting grounded responses and responding strategies (e.g., question), which ignore the effect on ES and lack explicit goals to guide the positive transition of emotion. To this end, we introduce a new paradigm to formalize multi-turn ESC as a process of positive emotion elicitation. Addressing this task requires finely adjusting the elicitation intensity in ES as the conversation progresses while maintaining conversational goals like coherence. In this paper, we propose Supporter, a mixture-of-experts-based reinforcement learning model, and carefully design ES and dialogue coherence rewards to guide the policy's learning for responding. Experiments verify the superiority of Supporter in achieving positive emotion elicitation during responding while maintaining conversational goals including coherence.
Jinfeng Zhou, Zhuang Chen, Bo Wang, Minlie Huang
2023-07-16T09:58:44Z
http://arxiv.org/abs/2307.07994v1
Facilitating Multi-turn Emotional Support Conversation with Positive Emotion Elicitation: A Reinforcement Learning Approach

###### Abstract

Emotional support conversation (ESC) aims to provide emotional support (ES) to improve one's mental state. Existing works stay at fitting grounded responses and responding strategies (e.g., _question_), which ignore the effect on ES and lack explicit goals to guide the positive transition of emotion. To this end, we introduce a new paradigm to formalize multi-turn ESC as a process of positive emotion elicitation. Addressing this task requires finely adjusting the elicitation intensity in ES as the conversation progresses while maintaining conversational goals like coherence. In this paper, we propose Supporter, a mixture-of-experts-based reinforcement learning model, and carefully design ES and dialogue coherence rewards to guide the policy's learning for responding. Experiments verify the superiority of Supporter in achieving positive emotion elicitation during responding while maintaining conversational goals including coherence.

## 1 Introduction

Emotional support (ES) aims to reassure a person to recover from emotional distress and improve one's mental state (Burleson, 2003). It is a manifestation of emotional intelligence in social interactions (Heaney and Israel, 2008; Atoum and Al-Shoboul, 2018). Endowing social dialogue systems with ES to build helpful and trustful agents is an emerging trend (Huang et al., 2020; Rains et al., 2020). To achieve this goal, a typical practice is modeling empathy, which aims to perceive and understand the situation and feelings of others (Keskin, 2014). Yet, empathetic conversation (Rashkin et al., 2019) is inherently deficient in providing ES due to: (1) lack of consideration of multi-turn conversation. Just making empathetic responses in each single dialogue turn leads to ignoring the user's feedback and mental state changes in multi-turn interaction. (2) Lack of awareness of emotional elicitation. Only emanating emotional resonance fails to help users jump out of negative mental states. Although Liu et al. (2021) designed the emotional support conversation (ESC) task, which promises to remedy these deficiencies, existing works (Tu et al., 2022; Cheng et al., 2022; Peng et al., 2022) stay at fitting grounded responses and responding strategies (e.g., _question_) while ignoring the effects of such efforts on ES. They do not fully model the essential working mechanism of ESC and lack explicit goals to guide a user's emotion to a positive transition in the multi-turn process. Thus, they are still insufficient to lay out an entire ESC process and cannot effectively improve one's mental state. To this end, we introduce multi-turn ESC with positive emotion elicitation, a new paradigm that aims to progressively empathize with and elicit users to reach a better mental state through multi-turn conversation.

Figure 1: A simplified multi-turn ESC example between the user (_left_) and agent (_right_). The agent progressively adjusts the intensity of _empathy_ and _elicitation_ to achieve the goal of improving the user's mental state.

Addressing this task is challenging (an example is in Figure 1): **First**, in a realistic multi-turn ESC, the user's emotions often transit towards positive (e.g., the user's emotion starts with negative and ends with positive, i.e., "_My school was closed_"
\(\rightarrow\) "_I feel better now_") with fluctuation (e.g., the user's negative emotions in the first two turns gradually deepen, i.e., "_My school was closed_" \(\rightarrow\) "_I don't even know_"), which requires the agent to equip with the mechanism dealing with complex situations to respond satisfactorily (Shibata et al., 2014; Yoshino and Kawahara, 2015). **Second**, for ES, the ES response requires a delicate balance between empathy and elicitation. Only empathizing without eliciting falls into a negative emotional cycle, while the opposite setting brings a sense of distance in communication. They need to be progressively and purposefully adjusted in ongoing interactions, e.g., the agent expresses empathy of varying emotional polarity (_negative_\(\rightarrow\)_negative_\(\rightarrow\)_positive_) and carefully increase the intensity of elicitation (_only empathy_\(\rightarrow\)_weak elicitation_\(\rightarrow\)_strong elicitation_). **Third**, for language expression, the ES response purposefully elicits positive emotions but should not undermine general conversational goals like coherence. Making an eliciting response that is out of the dialogue context, e.g., replacing "_I understand you. I would... happened to me._" with "_Come on! I believe... find a solution!_", may cause users to resent and block useful feedback. In this paper, we propose **Supporter1** to facilitate multi-turn emotional **Support** conversation with positive emotion **E**licitation using a mixture-of-expert(MoE) based **R**einforcement learning(RL). MoE designs heuristic experts associated with specific tasks to learn diverse semantics by characterizing dialogue context, where: (1) To cope with the user's emotional fluctuation in the ongoing conversation, experts are devised as positive and negative experts as a whole; (2) To inspire ES of responding, the emotion experts of MoE are designed to predict the user's emotional states that are possibly transited to; (3) To inspire the expression of responding, the keyword experts of MoE are designed to predict the keywords that maintain the dialogue coherence. With experts as candidates, our RL agent learns conversational semantic encoding policy and purposefully selects experts with expert selection policy for response generation. To achieve the goal of positive emotion elicitation during responding while maintaining conversational goals like coherence, we optimize policy by carefully constructing the rewards: (1) ES rewards consider the conversation progress to dynamically adjust the elicitation intensity of positive emotion; (2) Dialogue coherence rewards involve keyword-level and sentence-level guides to finely maintain coherence. Footnote 1: The project repository is available at [https://github.com/jfzhouyoo/Supporter](https://github.com/jfzhouyoo/Supporter) Our contributions are summarized as follows: (1) We introduce a new paradigm by carefully dissecting the challenges of formalizing multi-turn ESC as a process of positive emotion elicitation. (2) We propose **Supporter**, an MoE-based RL model with carefully constructed ES and dialogue coherence rewards, elicits positive emotion during responding while maintaining dialogue coherence. (3) Extensive experiments show the superiority of **Supporter** with automatic, interactive human, and novel ES and dialogue coherence evaluations. ## 2 Related Work Empathetic ConversationTo construct a warm dialogue system, a milestone is to endow it with empathy (Rashkin et al., 2019). 
Considering affective empathy (Lin et al., 2019; Majumder et al., 2020; Li et al., 2020, 2022), i.e., perceiving the user's emotion, and cognitive empathy (Zheng et al., 2021; Sabour et al., 2022; Zhou et al., 2022), i.e., understanding the user's situation, puts the psychological theory of empathy into practice. Limited by the focus on single-turn empathy and the lack of emotional induction, it is difficult to achieve the higher goal of improving the user's mental state, due to the failure to help one jump out of the negative situation.

**Emotional Support Conversation.** To remedy the above deficiencies, Liu et al. (2021) design ESC for providing ES in interactions. Our work is related to existing works on ESC but differs in task definition, as we focus on enhancing the elicitation effect of positive emotion of responses instead of responding strategy prediction (e.g., _question_) and grounded response generation. Although fusing knowledge (Tu et al., 2022; Peng et al., 2022) and planning strategy (Cheng et al., 2022) are beneficial for word-overlap metrics (e.g., _Bleu_), we argue that whether these gains serve ES is opaque and less convincing due to the lack of corresponding evaluation mechanisms.

**Positive Emotion Elicitation Conversation.** To free users from emotional distress and advance the conversation towards an optimistic state, positive emotion elicitation is an intuitive solution (Mishara et al., 2007; Jiang et al., 2021). Previous works (Hasegawa et al., 2013; Lubis et al., 2018, 2019, 2019) posit the emotional elicitation process as an ideal single-turn dialogue with linear emotional changes (Wang et al., 2022). However, realistic scenarios often involve multi-turn interactions with complex emotional fluctuations. To weaken the previous strong hypothesis, we extend positive emotion elicitation to ESC by carefully defining its challenges, and take it as a real-world application of the solution.

## 3 Preliminaries

At the \(t\)-th turn of dialogue, given dialogue context \(C_{t}=\{x_{1},y_{1},\dots,x_{t-1},y_{t-1},x_{t}\}\), our goal is to generate the response \(y_{t}\) which serves to improve the user's mental state. To equip the model with this ability, the response generation process should achieve specific goals related to ES and language expression.

**ES for Positive Emotion Elicitation.** Providing effective elicitation during multi-turn ESC suffers from two issues: First, the elicitation intensity of positive emotion needs to be adjusted progressively as the conversation progresses. Maintaining weak elicitation (e.g., "_I understand you_") or strong elicitation (e.g., "_Come on_") may fail to shake one's mental state. Second, the elicitation effect of positive emotion needs to be indirectly verified by the feedback from the user's next turn utterance. It means the elicitation intensity should consider the future fluctuation of the user's emotional states. In this work, we construct conversation-level and turn-level ES rewards to guide the model's learning of the elicitation policy and conduct corresponding automatic and interactive human evaluations for measuring the ES performance of responding.

**Language Expression for Dialogue Coherence.** The pursuit of stronger elicitation in the generative process induces two attendant issues: First, generating without proper controls may lead to greedily pursuing the goal of elicitation while discarding contextual coherence, e.g., "_Come on_" with strong elicitation as a response in the context of the user continuing to express negative emotions.
Second, whether the response meets the user's expectations needs feedback from the user's future utterance. It means maintaining coherence with the future dialogue is also crucial. In this work, we construct contextual and future dialogue coherence rewards to guide the model's learning of bi-coherent expressions and perform automatic and interactive human evaluations of conversational goals including coherence.

## 4 Methodology

In Figure 2, our Supporter takes the dialogue context as input to construct a state sequence, which is encoded by a dialogue encoder as the conversational semantic encoding policy. The mixture-of-experts associated with emotion and keyword prediction tasks characterizes state semantics to yield action candidates for the expert selection policy, which are purposefully selected to induce state updates. We use the updated state to generate the response and further optimize the policy by measuring how well the response reaches the goals of ES and dialogue coherence with the well-designed parallel rewards.

Figure 2: The architecture of the proposed Supporter model. _DC_ is an abbreviation for _Dialogue Coherence_.

### Multi-task Mixture-of-Expert

As a key component of Supporter, we first introduce the structure of the multi-task mixture-of-expert.

**Dialogue Encoder.** Following Liu et al. (2021), the dialogue encoder is implemented with BlenderBot (Roller et al., 2021). Given an input sequence \(X\), we concatenate all input tokens and prepend a \([CLS]\) token, e.g., for the dialogue context, getting \([CLS]\oplus x_{1}\oplus y_{1}\ldots\oplus x_{t-1}\). The sequence is fed into the dialogue encoder to obtain the hidden state \(\mathbf{H}_{X}\). We denote the sequence representation derived from \([CLS]\) as \(\mathbf{h}_{X}\).

**Emotion Experts.** To track possible transitions of the user's emotional states, emotion experts are associated with contextual and future user emotion predictions. We extract \(M\) fine-grained emotional reactions for each utterance in the corpus, which are inferred from COMET Bosselut et al. (2019) using the "_xReact_" relation. Since emotional reactions are often emotional words (e.g., _happy_, _sad_), we use VAD Mohammad (2018) to identify the emotional polarity of each word according to its valence as a positive or negative emotional category. The high-frequency categories are finally retained as supervised labels for the emotion prediction task.
We divide contextual emotion experts into positive and negative emotion experts, which are two MLPs transforming \(\mathbf{H}_{X}\) into \(\mathbf{H}_{X,pos}\) and \(\mathbf{H}_{X,neg}\):

\[\begin{split}&\mathbf{H}_{X,pos}=MLP_{pos}\left(\mathbf{H}_{X}\right),\\ &\mathbf{H}_{X,neg}=MLP_{neg}\left(\mathbf{H}_{X}\right).\end{split} \tag{1}\]

We project the \([CLS]\) representations \(\mathbf{h}_{X,pos}\) and \(\mathbf{h}_{X,neg}\) of the positive and negative experts to predict positive and negative emotions, respectively:

\[\begin{split}& P_{pos}=\operatorname{softmax}\left(\mathbf{W}_{pos}\mathbf{h}_{X,pos}\right),\\ & P_{neg}=\operatorname{softmax}\left(\mathbf{W}_{neg}\mathbf{h}_{X,neg}\right),\end{split} \tag{2}\]

which is supervised by the positive and negative emotions collected in the \(e^{*}_{pos}\) and \(e^{*}_{neg}\) sets of the user's last utterance in the dialogue context using cross-entropy loss:

\[\begin{split}& L^{ctx-emo}_{pos}=-\frac{1}{\left|e^{*}_{pos}\right|}\sum_{i=1}^{\left|e^{*}_{pos}\right|}\log P_{pos}\left(e^{*}_{i}\right),\\ & L^{ctx-emo}_{neg}=-\frac{1}{\left|e^{*}_{neg}\right|}\sum_{i=1}^{\left|e^{*}_{neg}\right|}\log P_{neg}\left(e^{*}_{i}\right).\end{split} \tag{3}\]

Note that an utterance may be mapped to emotions with different polarities due to cognitive differences Westbrook et al. (2011); Zhou et al. (2022). For future emotion experts, we adopt the above method to get the \(L^{ftr-emo}_{pos}\) and \(L^{ftr-emo}_{neg}\) losses and train them to predict the positive and negative emotions of the user's future utterance (i.e., next turn utterance). In this way, emotion experts can learn various emotion-level features by the \(L_{emo}\) loss: \(L_{emo}=L^{ctx-emo}_{pos}+L^{ctx-emo}_{neg}+L^{ftr-emo}_{pos}+L^{ftr-emo}_{neg}\).

**Keyword Experts.** To meet the need for dialogue coherence, keyword experts are associated with keyword predictions that act on maintaining coherence with contextual and future utterances. Here, a bidirectional emotion keyword graph \(\mathcal{G}\) is constructed, which is also used in designing the coherence rewards (a construction example is in Appendix A). We extract the salient keywords of each utterance in the corpus as vertices using a rule-based approach Tang et al. (2019), and employ VAD to identify the emotional polarity of each keyword. The pointwise mutual information (PMI) Church and Hanks (1989) is adopted to construct bidirectional edges by characterizing the association between keyword pairs, where the _forward_ edge depicts the keyword pairs extracted from the context and response, and the _backward_ edge depicts those from the future utterance and response. We further construct _positive_ edges to describe the keywords with positive tail vertices, and _negative_ edges for negative ones. Finally, each head vertex selects the tail vertices with the top PMI scores for building connections. The vertices of \(\mathcal{G}\) serve as supervised labels for the keyword prediction task. Contextual keyword experts are transformed similarly to emotion experts, and their \([CLS]\) representations \(\mathbf{h}_{X,pos}^{ctx-kws}\) and \(\mathbf{h}_{X,neg}^{ctx-kws}\) can be obtained from the positive and negative keyword experts \(\mathbf{H}_{X,pos}^{ctx-kws}\) and \(\mathbf{H}_{X,neg}^{ctx-kws}\), respectively. We infer the one-hop neighbors of contextual keywords from the "_forward-positive_" and "_forward-negative_" relations respectively in \(\mathcal{G}\) to enhance the perception of the target keywords in the golden response.
Specifically, we use attention Bahdanau et al. (2015) to obtain the fused embeddings \(\mathbf{e}_{pos}^{ctx-kws}\) and \(\mathbf{e}_{neg}^{ctx-kws}\):

\[\begin{split}&\mathbf{e}_{pos}^{ctx-kws}=\operatorname{Attention}(\mathbf{h}_{X,pos}^{ctx-kws},\mathbf{E}_{pos}^{ctx-kws}),\\ &\mathbf{e}_{neg}^{ctx-kws}=\operatorname{Attention}(\mathbf{h}_{X,neg}^{ctx-kws},\mathbf{E}_{neg}^{ctx-kws}),\end{split} \tag{4}\]

where \(\mathbf{E}_{pos}^{ctx-kws}\) and \(\mathbf{E}_{neg}^{ctx-kws}\) are positive and negative neighbor embedding matrices that share parameters with the dialogue encoder. We then concatenate \(\mathbf{e}_{pos}^{ctx-kws}\) and \(\mathbf{e}_{neg}^{ctx-kws}\) with \(\mathbf{H}_{X,pos}^{ctx-kws}\) and \(\mathbf{H}_{X,neg}^{ctx-kws}\) respectively at the token level, and use an MLP layer to fuse them to obtain the keyword-enhanced experts \(\mathbf{H}_{X,pos-kws}^{ctx-kws}\) and \(\mathbf{H}_{X,neg-kws}^{ctx-kws}\):

\[\begin{split}&\mathbf{H}_{X,pos-kws}^{ctx-kws}[i]=\operatorname{MLP}(\mathbf{H}_{X,pos}^{ctx-kws}[i]\oplus\mathbf{e}_{pos}^{ctx-kws}),\\ &\mathbf{H}_{X,neg-kws}^{ctx-kws}[i]=\operatorname{MLP}(\mathbf{H}_{X,neg}^{ctx-kws}[i]\oplus\mathbf{e}_{neg}^{ctx-kws}).\end{split} \tag{5}\]

Further, we take the positive and negative keywords in the golden response as supervision to optimize the \(L_{pos}^{ctx-kws}\) and \(L_{neg}^{ctx-kws}\) losses adopting cross-entropy (this process can refer to the above emotion prediction task). Similarly, multi-hop reasoning on \(\mathcal{G}\), i.e., "_forward_\(\rightarrow\)_forward_\(\rightarrow\)_backward-positive_" and "_forward_\(\rightarrow\)_forward_\(\rightarrow\)_backward-negative_" (clarified in Appendix A), is performed to obtain keywords coherent with the future utterance. Taking the positive and negative keywords in the future utterance as the prediction target, the keyword-enhanced future keyword experts can be optimized by the \(L_{pos}^{ftr-kws}\) and \(L_{neg}^{ftr-kws}\) losses. In this way, keyword experts can learn various expression-level features by the \(L_{kws}\) loss: \(L_{kws}=L_{pos}^{ctx-kws}+L_{neg}^{ctx-kws}+L_{pos}^{ftr-kws}+L_{neg}^{ftr-kws}\).

**Multi-task Training.** To make the experts retain the primitive semantics without hindering their respective diversity, we give them a minor constraint. Specifically, we average the representations of emotion and keyword experts to get \(\mathbf{h}_{X,exp}\), and make it close to the sequence representation \(\mathbf{h}_{X}\) by optimizing the MSE loss with a minor hyperparameter \(\alpha\):

\[L_{mse}=\frac{\alpha}{d_{h}}\sum_{i=1}^{d_{h}}\left(\mathbf{h}_{X}[i]-\mathbf{h}_{X,exp}[i]\right)^{2}, \tag{6}\]

where \(d_{h}\) is the dimension of \(\mathbf{h}_{X}\). Then, we jointly train the multi-task MoE by optimizing the \(L_{exp}\) loss:

\[L_{exp}=L_{emo}+L_{kws}+L_{mse}. \tag{7}\]

### MoE-based Reinforcement Learning

We use the standard reinforcement learning framework (Sutton and Barto, 2018) as the backbone.

**State.** We concatenate the dialogue context and the extracted keywords as the initial state \(s_{1}\in\mathcal{S}\), i.e., \(s_{1}=\{C,C_{kws}\}\) (we omit the subscript \(t\) of dialogue context \(C_{t}\) for simplicity). At each step, the prompt token sequence \(\mathcal{E}\) generated by the policy-determined expert (i.e., action) triggers an update of the state. We record the observed state \(s_{k}\in\mathcal{S}\) at the \(k\)-th step, i.e., \(s_{k}=\{C,\mathcal{E}_{1},\dots,\mathcal{E}_{k-1}\}\), which is encoded by the dialogue encoder to get \(\mathbf{H}_{S,k}\) and \(\mathbf{h}_{S,k}\).
We concatenate sequence representations of historical states to obtain the current state embedding \(\mathbf{s}_{k}=\mathbf{h}_{S,1}\oplus\dots\oplus\mathbf{h}_{S,k}\). If \(k\) is smaller than the set maximum number of iteration steps \(K\), we pad \(\mathbf{s}_{k}\) with zeros to fix its dimension. Note that when \(k>1\), we discard the keywords \(C_{kws}\) because: (1) they have already acted in the first iteration; (2) the input sequence length is limited due to the constraint of the pre-trained model (i.e., BlenderBot).

**Action.** The action space \(\mathcal{A}_{k}\) at the \(k\)-th step is defined as the multi-task associated experts transformed by state \(s_{k}\). At state \(s_{k}\), our agent learns to choose an expert in \(\mathcal{A}_{k}\) as the expert action \(a_{k}\). We utilize a BlenderBot-based dialogue decoder to generate the expert prompt \(\mathcal{E}_{k}\) of \(a_{k}\).

**Policy.** Besides the above dialogue encoder as the semantic encoding policy network, we design an expert selection policy network using REINFORCE with baseline (Sutton and Barto, 2018), which includes an actor network and a value network. The actor learns an expert-finding policy \(\pi_{\varphi}\left(a_{k},s_{k},\mathcal{A}_{k}\right)\) which selects the appropriate expert action \(a_{k}\) based on the current state \(s_{k}\) and action space \(\mathcal{A}_{k}\) by emitting the probability distribution of actions in \(\mathcal{A}_{k}\). The value network measures the value \(Q_{\delta}\left(s_{k}\right)\) of state \(s_{k}\) as the baseline in REINFORCE. Their network structures are defined as:

\[\begin{split}\mathbf{o}_{k}=\eta\left(\eta\left(\mathbf{s}_{k}\mathbf{W}_{1}\right)\mathbf{W}_{2}\right),\\ \pi_{\varphi}\left(a_{k},s_{k},\mathcal{A}_{k}\right)=\phi\left(\mathbf{A}_{k}\odot\mathbf{o}_{k}\mathbf{W}_{\varphi}\right),\\ Q_{\delta}\left(s_{k}\right)=\mathbf{o}_{k}\mathbf{W}_{\delta},\end{split} \tag{8}\]

where \(\eta(\cdot)\) is an ELU activation function with a dropout layer, \(\odot\) is the Hadamard product, and \(\phi(\cdot)\) is the softmax function. \(\mathbf{A}_{k}\) is a binarized vector for pruning the action space, and we set it as a full-one vector due to the small number of experts.

**Rewards.** To guide policy learning, we reward the decision made at each step by measuring how well the response generated from the updated state \(s_{k+1}\) provides ES and maintains dialogue coherence.

(1) Conversation-level ES Reward: aims to dynamically adjust the elicitation intensity of positive emotion as the conversation progresses, defined as:

\[\begin{split} PED_{cES}=f_{ES}(y)-f_{ES}\left(c_{t}\right),\\ r_{cES}=\sum_{t=1}^{T}\cos(\frac{\pi}{2}\cdot\frac{t}{MT})\cdot PED_{cES}.\end{split} \tag{9}\]

Here, \(f_{ES}(\cdot)\) measures the positive emotion level of an utterance using the emotion classification model developed by Hartmann (2022). The model is trained on six datasets containing diverse text types and achieves 66% accuracy for emotion classification. Positive emotion scores are collected as the positive level. We encourage the positive emotion distance \(PED_{cES}\) between the generated response \(y\) and the contextual user's post \(c_{t}\) to: (a) be non-negative, i.e., expressing empathy (equal to 0) or elicitation (greater than 0) is the underlying requirement; (b) increase synchronously with the dialogue turn \(t\), i.e., the early stage of the conversation is dominated by empathy, and the latter by elicitation. \(MT\) is the maximum turn of conversation, and \(T\) is the current turn.
(2) Turn-level ES Reward: aims to capture the feedback of the user's next turn emotion, defined as:

\[\begin{split} PED_{tES}=\left|f_{ES}(y)-f_{ES}\left(c_{f}\right)\right|,\\ r_{tES}=\cos(\frac{\pi}{2}\cdot\frac{T}{MT})\cdot\cos(\frac{\pi}{2}\cdot PED_{tES}).\end{split} \tag{10}\]

Here, \(PED_{tES}\) measures the relative positive emotion distance between the generated response \(y\) and the user's future (i.e., next turn) utterance \(c_{f}\). We encourage \(PED_{tES}\) to shrink as the current turn \(T\) approaches \(MT\), i.e., supervising smooth elicitation in the latter stage and improving tolerance to emotional fluctuations.

(3) Contextual Dialogue Coherence Reward: aims to constrain the generated response \(y\) to maintain coherence with the context \(C\) by measuring their coherence at the keyword level and sentence level. First, we reconstruct a dataset Liu et al. (2021) containing coherent and incoherent context-response pairs, where the response of the incoherent pairs is an utterance randomly sampled from the dataset. Next, a BERT-based Devlin et al. (2019) text classification model \(f_{cDC}\) is trained by feeding sentence-keyword pairs and achieves 85% accuracy. We take the coherence probability as the coherence score, and the reward is defined as:

\[r_{cDC}=f_{cDC}\left(C\oplus C_{kws},y\oplus y_{kws}\right)\cdot e^{\frac{N_{c,kws}}{|y_{kws}|}-1}, \tag{11}\]

where \(y_{kws}\) is the keyword set of \(y\) and \(N_{c,kws}\) is the number of keywords in \(y_{kws}\) that are the _forward_ neighbors of contextual keywords in \(\mathcal{G}\).

(4) Future Dialogue Coherence Reward: aims to introduce the consideration of coherence with the user's future utterance \(c_{f}\). Similarly, we reconstruct a dataset Liu et al. (2021) containing coherent and incoherent future utterance-response pairs and train another text classification model \(f_{fDC}\), which achieves 77% accuracy. The reward is defined as:

\[r_{fDC}=f_{fDC}\left(c_{f}\oplus c_{f_{kws}},y\oplus y_{kws}\right)\cdot e^{\frac{N_{f,kws}}{|y_{kws}|}-1}, \tag{12}\]

where \(N_{f,kws}\) is the number of keywords in \(y_{kws}\) that have a _backward_ relation with the keywords \(c_{f_{kws}}\) of \(c_{f}\) in \(\mathcal{G}\).

(5) Total Reward: the total reward is \(r=w_{cES}*r_{cES}+w_{tES}*r_{tES}+w_{cDC}*r_{cDC}+w_{fDC}*r_{fDC}\).

### Optimization

We set \(K\)-step iterations, and the goal of agent learning is to maximize the expected cumulative reward: \(J_{\theta}=\mathbb{E}_{\pi}\left[\sum_{k=1}^{K}\gamma^{k}r_{k+1}\right]\), where \(\theta\) is the learned parameter and \(\gamma\) is the discount coefficient. The agent is optimized by the \(L_{agent}\) loss and its policy gradient is defined as:

\[\nabla_{\theta}J_{\theta}=\mathbb{E}_{\pi}[\nabla_{\theta}\log\pi_{\varphi}(a_{k},s_{k},\mathcal{A}_{k})(G-Q_{\delta}(s_{k}))], \tag{13}\]

where \(G\) is the discounted cumulative reward from the initial state to the terminal state. Finally, we take the hidden state \(\boldsymbol{H}_{S,K+1}\) of the state \(s_{K+1}\) to generate the response, where the decoder is optimized by the \(L_{gen}\) loss:

\[L_{gen}=-\sum_{m=1}^{M}\log P(y_{m}\mid\boldsymbol{H}_{S,K+1},y_{<m}). \tag{14}\]

**Warm Start.** We use the pretrained small version of BlenderBot to initialize our model. The initial state is used as input to fine-tune the model for a warm start by optimizing \(L_{warm}=L_{exp}+L_{gen}\).
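To make the ES reward shaping concrete, the following is a minimal sketch of the two ES rewards in Eqs. (9) and (10). It assumes `f_es` is a callable returning the positive emotion score of an utterance in \([0,1]\) (e.g., the positive-class probability from Hartmann (2022)'s classifier); the function names are illustrative, and computing the distance per turn inside the sum follows the definition of \(PED_{cES}\) above.

```python
import math

def conversation_level_es_reward(f_es, response, user_posts, mt=10):
    # Eq. (9): r_cES = sum_{t=1..T} cos(pi/2 * t/MT) * (f_ES(y) - f_ES(c_t)),
    # where c_t is the user's post at turn t and T = len(user_posts).
    y_score = f_es(response)
    return sum(
        math.cos(math.pi / 2 * t / mt) * (y_score - f_es(c_t))
        for t, c_t in enumerate(user_posts, start=1)
    )

def turn_level_es_reward(f_es, response, next_user_utterance, turn, mt=10):
    # Eq. (10): r_tES = cos(pi/2 * T/MT) * cos(pi/2 * |f_ES(y) - f_ES(c_f)|);
    # the second factor is largest when the response's positive level is
    # close to that of the user's next-turn utterance c_f.
    ped = abs(f_es(response) - f_es(next_user_utterance))
    return math.cos(math.pi / 2 * turn / mt) * math.cos(math.pi / 2 * ped)
```

At training time, these two terms are combined with the coherence rewards of Eqs. (11) and (12) using the weights \(w_{cES},w_{tES},w_{cDC},w_{fDC}\) to form the total reward \(r\).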
**Joint Training.** Our model is finally jointly trained by optimizing the \(L_{joint}\) loss:

\[L_{joint}=L_{agent}+L_{gen}+\frac{1}{K+1}\sum_{k=1}^{K+1}L_{exp,k} \tag{15}\]

## 5 Experiments

### Experimental Setup

**Dataset.** Our experiments are conducted on the widely used ESConv Liu et al. (2021), a multi-turn conversation dataset for ES. In a conversation, the user confides a personal negative situation, and the supporter provides comfort and support to improve the user's mental state. The statistics of ESConv and the graph \(\mathcal{G}\) after preprocessing are in Table 1.

\begin{table} \begin{tabular}{c c c} \hline \hline & \multicolumn{2}{c}{\#Dialogues} & 1,053 \\ Corpus & \begin{tabular}{c} \#Utterances \\ Avg. length of dialogues \\ Avg. length of utterances \\ \#Split Ratio \\ \end{tabular} & 31,410 \\ \begin{tabular}{c} Info. \\ \end{tabular} & \begin{tabular}{c} Avg. length of dialogues \\ Avg. length of utterances \\ \#Split Ratio \\ \end{tabular} & 29.8 \\ \hline \multirow{4}{*}{\begin{tabular}{c} Graph \(\mathcal{G}\) \\ Info. \\ \end{tabular} } & \begin{tabular}{c} \#Keywords \\ Avg. forward neighbors \\ Avg. backward neighbors \\ Avg. positive neighbors \\ Avg. negative neighbors \\ \end{tabular} & 24.33 \\ \cline{1-1} & \begin{tabular}{c} Avg. forward neighbors \\ Avg. backward neighbors \\ Avg. positive neighbors \\ Avg. negative neighbors \\ \end{tabular} & 21.17 \\ \cline{1-1} & \begin{tabular}{c} Avg. positive neighbors \\ Avg. positive neighbors \\ Avg. negative neighbors \\ \end{tabular} & 33.94 \\ \cline{1-1} & \begin{tabular}{c} Avg. negative neighbors \\ Avg. negative neighbors \\ \end{tabular} & 8.46 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of our dataset after preprocessing.

**Baselines.** (1) _MoEL_ Lin et al. (2019): an empathetic conversation model that uses multiple decoders to capture possible user emotions for generating. (2) _MIME_ Majumder et al. (2020): an empathetic conversation model that mimics the user's emotions during responding. (3) _BlenderBot-Joint_ Liu et al. (2021): an ESC model that prepends a predicted strategy token on the backbone of BlenderBot. (4) _MISC_ Tu et al. (2022): an ESC model that fuses commonsense. (5) _GLHG_ Peng et al. (2022): a commonsense-based ESC model that designs a global-to-local graph. (6) We design _Bart-Joint_ by replacing the backbone of BlenderBot-Joint with Bart Lewis et al. (2020). We use it as a replacement for _MultiESC_ Cheng et al. (2022), to which it achieves comparable performance, since MultiESC's code is unavailable.

**Implementation Details.** We implement all models with PyTorch, and all pretrained models (i.e., BlenderBot, Bart) use small versions. We set the number of steps \(K=2\) and the reward weights \(w_{cES}=w_{cDC}=0.1,w_{tES}=w_{fDC}=1.0\) (selected using a grid-search approach with two values {0.1, 1.0} for each hyperparameter). We extract \(M=10\) emotional reactions for each utterance. The maximum number of conversation turns \(MT\) is set to 10. The discount factor \(\gamma\) is 0.99, the hyperparameter \(\alpha\) is 1e-5, and the batch size is 16. We use the Adam optimizer Kingma and Ba (2015) with an initial learning rate of 2e-5 and a linear warmup of 120 steps for training on a GPU-V100 machine. The warm start stage is trained for 5 epochs, and the joint training stage is set to 3 epochs. The decoding settings are consistent with Liu et al. (2021). For a fair comparison, all baselines with available codes are reproduced under the same setting.
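As a reference for the policy of Eq. (8) and the update of Eq. (13), below is a minimal PyTorch sketch of the expert selection policy network. The hidden sizes are illustrative, and the MSE term fitting the value head to \(G\) is a standard choice for training a REINFORCE baseline rather than a detail specified in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpertSelectionPolicy(nn.Module):
    """Actor and value heads of Eq. (8): o_k = eta(eta(s_k W1) W2),
    pi = softmax(A_k ⊙ (o_k W_phi)), Q = o_k W_delta."""
    def __init__(self, state_dim, hidden_dim, n_experts, dropout=0.1):
        super().__init__()
        self.w1 = nn.Linear(state_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, hidden_dim)
        self.eta = nn.Sequential(nn.ELU(), nn.Dropout(dropout))  # eta(.) = ELU + dropout
        self.w_phi = nn.Linear(hidden_dim, n_experts)  # actor head
        self.w_delta = nn.Linear(hidden_dim, 1)        # value head (REINFORCE baseline)

    def forward(self, s_k, a_mask):
        o_k = self.eta(self.w2(self.eta(self.w1(s_k))))
        pi = F.softmax(a_mask * self.w_phi(o_k), dim=-1)  # A_k ⊙ (o_k W_phi)
        q = self.w_delta(o_k).squeeze(-1)                 # Q_delta(s_k)
        return pi, q

def agent_loss(pi, action, g, q):
    # Policy gradient of Eq. (13): -log pi(a_k) * (G - Q_delta(s_k)); the MSE
    # term fitting Q to G is an assumed, standard way to train the baseline.
    log_prob = torch.log(pi.gather(-1, action.unsqueeze(-1)).squeeze(-1))
    return -(log_prob * (g - q.detach())).mean() + F.mse_loss(q, g)
```

With the full-one mask used in the paper, the Hadamard product leaves the logits unchanged; the mask is kept only as a hook for pruning larger action spaces.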
### Automatic Evaluation

We adopt Perplexity (PPL), Bleu (B-\(n\)), and Distinct (D-\(n\)) to evaluate the general generation quality and diversity of the models. To measure how well the generated responses achieve the goals, we define: (1) ES scores, containing conversation-level (_cES_) and turn-level (_tES_) scores, i.e., \(r_{cES}\) and \(r_{tES}\), which measure the elicitation intensity of positive emotion involving the conversation progress and the perceived intensity of the user's next turn emotion; (2) dialogue coherence scores, containing contextual (_cDC_) and future (_fDC_) scores, i.e., \(r_{cDC}\) and \(r_{fDC}\), which measure the coherence with the context and the user's future utterance.

| Models | PPL\(\downarrow\) | B-1\(\uparrow\) | B-2\(\uparrow\) | B-3\(\uparrow\) | D-1\(\uparrow\) | D-2\(\uparrow\) | D-3\(\uparrow\) | _cES_\(\uparrow\) | _tES_\(\uparrow\) | _cDC_\(\uparrow\) | _fDC_\(\uparrow\) | Len |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MoEL | 112.34 | 18.14 | 6.77 | 3.22 | 2.43 | 17.03 | 38.08 | 0.658 | 0.390 | 0.391 | 0.384 | 20.36 |
| MIME | 68.49 | 15.89 | 6.58 | 3.27 | 2.02 | 10.51 | 22.60 | 0.598 | 0.370 | 0.450 | 0.412 | 19.44 |
| BlenderBot-Joint | **14.78** | 17.97 | 7.17 | 3.31 | 4.56 | 24.65 | 49.71 | 0.611 | 0.398 | 0.710 | 0.459 | 17.69 |
| MISC | 16.16 | - | 7.31 | - | 4.41 | 19.71 | - | - | - | - | - | - |
| GLHG | 15.67 | 19.66 | 7.57 | 3.74 | 3.50 | 21.61 | - | - | - | - | - | - |
| Bart-Joint | 16.05 | **19.99** | **7.92** | **3.93** | 4.24 | 21.98 | 43.33 | 0.635 | 0.402 | **0.723** | **0.475** | 18.85 |
| **Supporter** | 15.37 | 19.50 | 7.49 | 3.58 | **4.93** | **27.73** | **53.78** | **0.743** | **0.409** | 0.681 | 0.472 | 18.37 |
| w/o EmoExperts | 15.35 | 18.32 | 7.12 | 3.38 | 4.79 | 27.20 | 53.01 | 0.711 | 0.392 | 0.679 | 0.460 | 18.14 |
| w/o KwsExperts | 15.54 | 17.76 | 6.74 | 3.19 | 4.69 | 26.16 | 50.92 | 0.728 | 0.394 | 0.636 | 0.443 | 17.72 |
| w/o Multi-Task | 15.49 | 16.79 | 6.54 | 3.18 | 4.78 | 27.17 | 53.45 | 0.651 | 0.399 | 0.651 | 0.450 | 16.48 |
| w/o ESRewards | 15.46 | 18.49 | 7.10 | 3.36 | 4.69 | 26.92 | 52.49 | 0.664 | 0.391 | 0.660 | 0.457 | 18.41 |
| w/o DCRewards | 15.43 | 17.28 | 6.80 | 3.25 | 4.80 | 27.45 | 53.04 | 0.707 | 0.401 | 0.652 | 0.448 | 17.12 |
| w/o ExpertPolicy | 15.54 | 18.30 | 7.23 | 3.54 | 4.75 | 27.23 | 52.85 | 0.683 | 0.395 | 0.657 | 0.454 | 18.54 |
| Warm-Start Only | 15.03 | 17.42 | 6.74 | 3.21 | 4.67 | 26.24 | 51.82 | 0.629 | 0.402 | 0.644 | 0.444 | 17.35 |
| w/o Warm-Start | 15.01 | 17.98 | 6.86 | 3.18 | 4.55 | 26.06 | 51.62 | 0.673 | 0.403 | 0.638 | 0.453 | 18.26 |

Table 2: Automatic evaluation results. "Len" indicates the average length of the generated responses.

**Overall Performance.** In Table 2, compared with all baselines, our Supporter achieves the most diverse expressions and the highest ES (12.9% over the second-best MoEL on _cES_) while maintaining competitive dialogue quality (_PPL_, _Bleu_) and coherence (_cDC_, _fDC_). Supportive responses generated by MoEL are often accompanied by low diversity and low coherence due to the retelling of generic responses (e.g., "_I am glad I could help you_", which carries high positive emotion) found in its outputs. Bart-based models benefit from robust sequence modeling Lewis et al. (2020) with inherent advantages in coherence and Bleu, but perform poorly in ES and diversity. The contextual coherence (_cDC_) of our Supporter is inferior to BlenderBot-Joint, which is acceptable, as ES for positive emotion elicitation needs to sacrifice a little coherence to jump out of negative topics.

**Ablation Study.** In Table 2: **First**, we remove the emotion experts (w/o EmoExperts), the keyword experts (w/o KwsExperts), and the multi-task associated with the experts (w/o Multi-Task), respectively. Emotion experts mainly act on ES, including _cES_ and _tES_. Keyword experts contribute significantly to dialogue coherence, including _cDC_ and _fDC_. Multi-task training endows experts with specific abilities and thus has an impressive impact on overall performance. **Second**, we remove the ES rewards (w/o ESRewards) and the dialogue coherence rewards (w/o DCRewards), respectively. The former improves positive support, and the latter maintains grounded expression. Therefore, besides achieving their own goals, they also benefit dialogue diversity and quality, respectively. Moreover, we replace the expert selection policy network with random sampling (w/o ExpertPolicy). Random experts lead to uncertainty in decision-making and thus damage overall performance, especially on ES and coherence. **Third**, we test using only the warm start without joint training (Warm-Start Only), as well as joint training without the warm start (w/o Warm-Start). The former reaches comparable or even worse results than the baselines, and the latter greedily achieves the goal of maximizing the rewards, resulting in low dialogue quality.

### Interactive Human Evaluation

We recruited three crowdsourcing workers and exposed them to 100 negative situations randomly sampled from the test set. They were asked to engage in multi-turn conversation with the models to simulate the process of seeking ES and to choose the better one (Win) from a model pair by considering five aspects, respectively: (1) Fluency: which bot's response is more fluent and understandable? (2) Informativeness: which bot's response is more diverse and specific, and contains more information? (3) Coherence: which bot's response is more coherent with the context in a multi-turn conversation? (4) Supportiveness: which bot provides more effective ES, i.e., is more likely to elicit users to change their emotions from negative to positive? (5) Overall: generally, which bot is more preferred?

| Supporter vs. | BlenderBot-Joint (Win/Lose/Tie) | Bart-Joint (Win/Lose/Tie) | w/o EmoExperts (Win/Lose/Tie) | w/o ExpertPolicy (Win/Lose/Tie) |
|---|---|---|---|---|
| Fluency | **67.5**‡ / 23.7 / 8.8 | **66.5**‡ / 26.5 / 7.0 | **44.5**† / 40.0 / 15.5 | **42.9**† / 37.5 / 19.6 |
| Informativeness | **55.2**‡ / 40.7 / 4.1 | **56.7**‡ / 38.8 / 4.5 | **48.6**‡ / 36.8 / 14.6 | **38.5** / 35.9 / 25.6 |
| Coherence | **53.8**‡ / 31.8 / 14.4 | **45.4** / 43.8 / 10.8 | **53.7**‡ / 35.7 / 10.6 | **55.1**‡ / 32.4 / 12.5 |
| Supportiveness | **59.2**‡ / 34.1 / 6.7 | **51.4**‡ / 37.6 / 11.0 | **54.5**‡ / 33.4 / 12.1 | **51.4**‡ / 34.3 / 14.3 |
| Overall | **56.5**‡ / 30.4 / 13.1 | **48.6**‡ / 37.1 / 14.3 | **50.0**‡ / 34.3 / 15.7 | **49.6**‡ / 32.1 / 18.3 |

Table 3: Results of interactive human evaluation (%). †/‡ denote \(p\)-value \(<\) 0.1/0.05 (statistical significance test).

As shown in Table 3, from the comparison with the baselines, we found that a single incoherent response (_cDC_ in Table 2) has less impact on the coherence of the overall multi-turn conversation. Comparisons with variants of Supporter demonstrate that key components of our model, i.e., the emotion experts and the expert selection policy, lead to significant advantages in overall performance.

### Qualitative Analysis

**Specificity of Experts.** To analyze the quality of the experts, we show the specificity of the experts learned by Supporter. As shown in Figure 3, we visualize the latent space of the experts using \(t\)-SNE on 200 conversation samples. The latent space distributions of the multi-task-associated experts are clearly separated and clustered in specific regions. Some overlap is also intuitive due to the similarity between experts with the same polarity, e.g., the contextual and future positive emotion experts. This verifies that our MoE has diverse and specific semantics, and it shows the superiority of multi-task learning.

Figure 3: Latent space visualization of experts. Separate clusters show MoE has diverse and specific semantics.
**Adjustability of Elicitation.** To further explore the adjustability of the elicitation intensity of positive emotion in multi-turn conversation, we analyze the trend of the positive emotion distance over the dialogue turns, i.e., \(PED=f_{ES}(y)-\frac{1}{T}\sum_{t=1}^{T}f_{ES}\left(c_{t}\right)\). As shown in Figure 4, the PED score of all models tends to rise first and then fall. In the early stage of the conversation (turn\(<\)6), Supporter keeps the same trend as the empathy models (i.e., MoEL, MIME) and gradually increases the intensity of elicitation. This is attributed to our encouragement that it should progressively transform the conversation from empathy-dominated to elicitation-dominated. In the later stage of the conversation (turn\(>\)6), Supporter still maintains a higher level of elicitation than the baselines and shows robust adjustment ability.

Figure 4: Supporter progressively enhances the elicitation intensity and exhibits robust adjustment ability in the later stage of the conversation.

### Parameter Analysis

We further analyze the impact of the number of iteration steps \(K\). In Table 4, with the increase of steps, diversity and _tES_ show an upward trend, while the other metrics show a downward one. This happens possibly because the informativeness of the generated responses increases with the selected experts, making it possible to lose focus and thus leading to poor dialogue quality. Furthermore, Supporter outperforms the best baselines in most cases, confirming its effectiveness.

| Models | D-1 | B-2 | _cES_ | _tES_ | _cDC_ | _fDC_ |
|---|---|---|---|---|---|---|
| Supporter\({}_{K=1}\) | 4.40 | 7.55 | 0.801 | 0.382 | 0.668 | 0.466 |
| Supporter\({}_{K=2}\) | 4.93 | 7.49 | 0.743 | 0.409 | 0.681 | 0.472 |
| Supporter\({}_{K=3}\) | 5.22 | 6.71 | 0.699 | 0.405 | 0.657 | 0.459 |
| Supporter\({}_{K=4}\) | 5.05 | 6.10 | 0.673 | 0.413 | 0.594 | 0.431 |

Table 4: Parameter analysis for iteration steps \(K\). Supporter outperforms the best baselines in most settings.

## 6 Conclusions

In this paper, we introduce a new paradigm to formalize multi-turn ESC as a process of positive emotion elicitation and propose an MoE-based reinforcement learning model, Supporter, with well-designed ES and dialogue coherence rewards. Extensive experiments verify the superiority of our model in providing effective ES for positive emotion elicitation while maintaining conversational goals including coherence. Our work will facilitate future work to develop ESC with positive emotion elicitation for improving users' mental state.

## Limitations

We discuss three limitations of this work as follows. The first one is the instability of reinforcement learning.
Reward-driven policy learning is an essential advantage of this work because it is better equipped for the positive-emotion-driven process of ESC than existing works and can model flexible ESC expression beyond the training data. However, this flexibility also suffers from instability, which calls for additional knowledge or strategies to refine the learning process. The second one is the need for further reference to psychological theory. An advantage of our work is to learn posterior ESC patterns integrating the dialogue context and future feedback in the form of rewards. However, there is still other valuable prior knowledge to be drawn from psychology studies, e.g., CBT (cognitive-behavioral therapy) methods. This kind of prior knowledge can be used as additional knowledge to refine the learning process, as mentioned in the first limitation. The third one is that the reward design can be further optimized. The ideal case is to construct a high-quality dataset with human-feedback labels for training a reward model (e.g., as exemplified by ChatGPT). At the same time, the larger the reward model, the more conducive it is to learning a robust policy and avoiding overfitting to the reward function. However, such optimizations need a trade-off with cost.

## Ethical Considerations

In this paper, the ESConv dataset used in our experiments is a publicly-available benchmark for emotional support conversation, which does not contain sensitive and personal information as well as unethical language. Our work builds on this dataset to study positive emotion elicitation to improve the user's mental state. Therefore, we focus on constructing a dialogue system to provide emotional support from families and friends in the daily scenarios limited by this dataset, rather than professional psychological counseling or psychological treatment. For risky non-daily scenarios such as self-harm or suicide-related conversations, we do not claim that the dialogue system we built has a treatment or improvement effect on them. Additionally, we also ensure the anonymity of our interactive human evaluation. We believe our work meets ACL's Code of Ethics.

## Acknowledgements

This work was supported by the National Science Foundation for Distinguished Young Scholars (with No. 62125604). This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2020GQG0005. This work was also supported by Tsinghua Precision Medicine Foundation. This work was also supported by the National Natural Science Foundation of China (with No. 62272340, 61876128, 62276187).
2306.16568
Early warning signals for predicting cryptomarket vendor success using dark net forum networks
In this work we focus on identifying key players in dark net cryptomarkets that facilitate online trade of illegal goods. Law enforcement aims to disrupt criminal activity conducted through these markets by targeting key players vital to the market's existence and success. We particularly focus on detecting successful vendors responsible for the majority of illegal trade. Our methodology aims to uncover whether the task of key player identification should center around plainly measuring user and forum activity, or whether it requires leveraging specific patterns of user communication. We focus on a large-scale dataset from the Evolution cryptomarket, which we model as an evolving communication network. Results indicate that user and forum activity, measured through topic engagement, is best able to identify successful vendors. Interestingly, considering users with higher betweenness centrality in the communication network further improves performance, also identifying successful vendors with moderate activity on the forum. But more importantly, analyzing the forum data over time, we find evidence that attaining a high betweenness score comes before vendor success. This suggests that the proposed network-driven approach of modelling user communication might prove useful as an early warning signal for key player identification.
Hanjo D. Boekhout, Arjan A. J. Blokland, Frank W. Takes
2023-06-28T21:21:39Z
http://arxiv.org/abs/2306.16568v3
# Early warning signals for predicting cryptomarket vendor success using dark net forum networks

###### Abstract

In this work we focus on identifying key players in dark net cryptomarkets. Law enforcement aims to disrupt criminal activity conducted through these markets by targeting key players vital to the market's existence and success. We particularly focus on detecting successful vendors responsible for the majority of illegal trade. Our methodology aims to uncover whether the task of key player identification should center around plainly measuring user and forum activity, or whether it requires leveraging specific patterns of user communication. We focus on a large-scale dataset from the Evolution cryptomarket, which we model as an evolving communication network. While user and forum activity measures are useful for identifying the most successful vendors, we find that betweenness centrality additionally identifies those with lesser activity. But more importantly, analyzing the forum data over time, we find evidence that attaining a high betweenness score comes before vendor success. This suggests that the proposed network-driven approach of modelling user communication might prove useful as an early warning signal for key player identification.

## Introduction

The dark net, a part of the internet that requires specific software or authorization to access [1], hosts a myriad of online fora that are increasingly a hotbed for criminal behavior and radicalisation [2, 3]. Dark net fora can, both theoretically and empirically, be split into those functioning as meeting places for the exchange of criminal information and those where criminal goods and services are traded, i.e., criminal marketplaces. These fora and marketplaces can serve up to hundreds of thousands of users. They are often moderated and organized in a professional manner, with cryptocurrencies, such as Bitcoin, serving as currency, and are therefore referred to as _cryptomarkets_[4, 5]. To efficiently coordinate its activities disrupting these cryptomarkets, law enforcement aims to target key players that are vital to these markets' existence and success [5, 6]. Key players include the administrators and moderators responsible for the existence and proper functioning of the cryptomarket, but also the more successful vendors that are responsible for the majority of the trade conducted on the cryptomarket. Identifying which users function as administrators can often be as easy as looking at the titles assigned to them on the cryptomarkets' forums. Similarly, if sales statistics were shared on the cryptomarket, currently successful vendors would be easily identifiable. However, many cryptomarkets do not record sales information and at best provide a label to vendors independent of their success. Furthermore, it is nearly impossible to identify those vendors whose success is yet to come. Yet, if law enforcement wishes to disrupt future sales, it is exactly these future successful vendors that they would need to identify. Therefore, in this paper we focus on identifying key players in the form of both current and future successful vendors. We do so by studying a cryptomarket at points in time, i.e., we look at various _snapshots_. By doing so we simulate law enforcement investigating the state of the cryptomarket at those specific points in time, while subsequent, i.e., future, data shows how the cryptomarket would progress without intervention.
Consequently, we propose a methodology with the potential to serve as an _early warning signal_ for future vendor success on cryptomarkets. Existing research studying the workings of cryptomarkets and aimed at assisting law enforcement in identifying key players often uses methods such as topic modelling or sentiment analysis [7, 8, 9, 10, 11]. These methods rely on (combinations of) commonly used words and sentence structures in the forum message contents. However, the rise of the use of message encryption in criminal communication calls for the development of methods not reliant on knowledge of message content. In this work, we aim to develop a method to identify key players based on the temporal structure of their communication network alone, thus ignoring message content entirely. _Communication networks_ model the interaction between entities within communication systems, such as mobile phone [12, 13, 14], face-to-face[15], and social media communication[16, 17]; but also communication through online fora[6, 18]. Online fora, including those associated with cryptomarkets, usually consist of _topics_, which may be grouped by subject. Each topic is started by one user with a first message, also called a _post_, and allows the set of users with access to respond by placing their own posts. This activity can be considered a form of indirect communication from the posters to those users who placed posts on the same topic before them. We can model this indirect communication using what we call a user-to-user communication network that directly connects users that posted in the same topic. At the very least, a link in such a network represents a shared interest in the same topic as well as a level of familiarity with one another due to the likelihood of having seen each other's posts. At best, a link can signify direct communication between two users that are, by means of forum posts, responding to one another. Thus, links represent potential social ties formed on a dark net forum. In this work, we leverage the structure of these communication networks without relying on knowledge of message content, with the goal of identifying and predicting successful vendors. To find important users in a (criminal) network, one of the most commonly used approaches is to apply network centrality measures, which rank users based on their position in the network[6, 13, 14, 15, 16, 19]. Different network centrality measures often imply different roles a given user plays within a network. In this paper, we explore four different measures: degree, harmonic closeness centrality, betweenness centrality, and PageRank. This allows us to get a better grasp of what type of role may be more suited to the task of identifying key players in cryptomarkets. The nuances of the interpretation of centrality measures can vary depending on whether we account for edge weights, i.e., the strength of social ties, and edge directions, i.e., who responds to whom. Therefore, we consider for each measure whether the direction and strength of social ties matter for identifying (successful) vendors in law enforcement applications. Besides network measures, several intuitive, straightforward measures can be obtained directly from the forum data. We consider three such measures: post activity, topics started, and (started) topic engagement. We henceforth refer to these measures as forum _activity indicators_.
The rationale behind these three activity indicators relies on vendors' tendency to start topics to promote their listings[11, 20] and on the concept of _name recognition_. Name recognition, also called brand awareness in a market context, has been linked to improved trust[21] and market outcomes[22] (e.g., more sales). Furthermore, Duxbury & Haynie[23] concluded that trustworthiness is a better predictor of vendor selection than product diversity or affordability. To wrap up, in this paper we investigate to what extent network measures computed on user-to-user communication networks are useful in identifying both current and future successful vendors on cryptomarkets. We look at three law enforcement applications, each increasingly more useful to law enforcement practitioners. We investigate (1) whether network measures can be used to distinguish vendors and their level of success; (2) whether rankings induced by network measures can narrow down the user base to a significantly smaller set of potentially relevant users for law enforcement to investigate; and (3) to what extent the top ranked users include successful vendors and other key players. The Evolution cryptomarket is the main dataset studied in this paper and features over 500 thousand posts and over four thousand vendors. Applying our methods, we find that both the activity indicators and the network centrality measures assign higher values to (successful) vendors on average, distinguishing them from non-vendors and less successful vendors alike. Additionally, betweenness centrality and topic engagement provide the best starting points for identifying (future) successful vendors, with betweenness centrality including the largest share of successful vendors not included by any activity indicator. Finally, betweenness centrality and topic engagement perform reliably for future success and have the potential to provide law enforcement with early warning signals for vendor success. The remainder of this paper is structured as follows. In the Results section we shortly describe the dataset and measures used before reporting on our results. The results and their implications for law enforcement are discussed in the Discussion section. Finally, the Methods section provides more in-depth descriptions of the dataset and network extraction as well as the activity indicators and network measures used in this work. ## Results In this section we first discuss our dataset and the (network) measures for identifying key players that we consider in this work. Next, we report and interpret results for the task of distinguishing vendors from non-vendors and predicting their levels of success. Then, we explore to what extent the rankings induced by (network) measures can reduce the set of users for law enforcement to investigate, while still including the greatest share of successful vendors. Finally, we look at the set of top ranked users for the most promising network centrality measure and activity indicator at a specific point in time. We do so to establish how well represented key players are among these top ranked users. ### Data In this study we focus on the cryptomarket _Evolution_. Evolution was active from January 2014 until March 2015, when it closed due to an exit scam. At the time, it was one of the most popular cryptomarkets[5]. It formed a combination of a carding forum, where card information (e.g., credit/debit/ID/etc.) is traded, and an underground drug market[24].
We obtained raw data of the Evolution marketplace and forum from the dark net market archives [25]. From this, we extracted a structured dataset, established a method of linking the market and forum data, and subsequently extracted communication network(s). The extraction process, the resulting dataset, and various statistics on the dataset and its completeness are presented in Boekhout et al. [26]. The same extraction procedure and parameters (\(\delta_{o}=10\), \(\delta_{t}=1\) month, \(\omega_{lower}=0.2\), \(t_{lim}=7\) days, and \(\omega_{first}=0.5\)) were used for the communication network(s) studied in this work. The parameters control respectively the bounds on when two posts constitute a social tie (\(\delta_{o}\) and \(\delta_{t}\)) and the strength of the social tie (\(\omega_{lower}\), \(t_{lim}\), and \(\omega_{first}\)). We demonstrate the robustness of our findings for each of these parameters in the Supplementary Material.

The cryptomarket Evolution saw two notable changes in user and post activity. In the initial months up to May 2014, the cryptomarket underwent steady growth in terms of both post activity and the number of active users. However, monthly post activity stabilised from May until October (see Figure 3). Notably, May saw a change in the vendor ranking system, which assigns textual labels to vendors that are visible on the marketplace to potential customers and imply a level of success and trustworthiness. Obtaining a label representing greater success and trustworthiness as a vendor now required sufficient positive feedback, but most importantly for us, the new ranking system also reported the exact total number of sales a vendor had made up to that point. The second major change to the cryptomarket came in early November 2014, as a by-product of the closure of six cryptomarkets following the joint international law enforcement operation dubbed "Onymous" [5]. After this disruption, Evolution showed a significant spike in overall activity until its closure.

Both the communication networks and current & future sales counts were extracted on a monthly basis using data up to the end of each month. As such, we obtained 15 network snapshots (from January 2014 up to March 2015). Details on the network extraction process and the computation of monthly sales statistics are provided in the Methods section.

### Network measures & activity indicators

Each considered network measure captures a different role a user may play within the network. To cover a wide range of user roles that may be important to vendor success, we report on four centrality measures: (1) in-degree; (2) bidirectional harmonic closeness centrality; (3) directed weighted betweenness centrality; and (4) directed weighted PageRank. The _in-degree_ of a user indicates the number of different users that posted (shortly) after them on the same topic(s). Thus, it can serve as a proxy for how many users have seen one or more of their posts and, to some extent, for their level of name recognition. The bidirectional _harmonic closeness centrality_[27] is a measure of a user's ability to reach the entirety of the network, following paths regardless of link direction. High harmonic closeness centrality indicates that a user should be relatively easy to reach and therefore potentially visible to the entire user base.
The directed weighted _betweenness centrality_[28, 29] computes how often a user lies on shortest paths connecting other nodes, taking into account both the direction and strength of social ties. High-betweenness nodes often lie 'between' communities. As such, it may be a good measure of how well a (potential) vendor reaches different, otherwise separated, communities of customers. Finally, the directed weighted _PageRank_[30] computes the probability that a random walker that infinitely traverses a network ends up at a given node, taking into account both the direction and strength of social ties. High PageRank centrality is often an indicator of being well connected to other important users. Duxbury & Haynie [23] found that buyers were more likely to continue ordering with vendors within the same community. As such, a close connection with other key players, as indicated by a high PageRank value, can be indicative of a high perceived trust, positively affecting sales.

To evaluate the network measures we compare them against three activity indicators. These activity indicators can be computed directly from the forum data, i.e., without the aforementioned communication network extraction; they are intuitively meaningful in the context of cryptomarket vendor success and likewise do not require knowledge of message content. We consider: (1) post activity; (2) topics started; and (3) topic engagement. _Post activity_ refers to the number of posts a user has placed on the forum. It relies on the idea that greater activity means greater visibility, which in turn leads to greater name recognition. _Topics started_ determines the number of topics a user started, and _topic engagement_ subsequently computes the sum of all posts placed within those topics, regardless of who posted them. These measures rely on the fact that the more topics a user has started and the more engagement those topics received, the greater the likelihood that they are a (successful) vendor. Again, the increased visibility through these started topics also boosts their name recognition. Further details on the computation and interpretation of the measures are provided in the Methods section.
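To make these four measures concrete, the following minimal sketch shows how they might be computed for one monthly network snapshot. The Methods section states that all network measures were computed with the igraph package[31]; the Python bindings, toy graph, variable names, and exact method signatures below are our own illustrative assumptions, with inverse edge weights passed to the shortest-path-based measure so that stronger ties act as shorter distances, as described in the Methods.

```python
# Minimal sketch (assumed python-igraph API); not the authors' code.
import igraph as ig

# Toy monthly snapshot: edges point from the later poster to the earlier
# poster; weights are the inferred social-tie strengths after merging.
edges = [(1, 0), (2, 0), (2, 1), (3, 0), (3, 2)]
g = ig.Graph(n=4, edges=edges, directed=True)
g.es["weight"] = [0.9, 0.5, 0.7, 1.0, 0.3]

in_degree = g.degree(mode="in")               # (1) unweighted in-degree
harmonic = g.harmonic_centrality(mode="all")  # (2) bidirectional, unweighted

# (3)-(4) directed weighted variants; for betweenness, stronger ties should
# act as shorter distances, hence the inverse weights.
inv_w = [1.0 / w for w in g.es["weight"]]
betweenness = g.betweenness(directed=True, weights=inv_w)
pagerank = g.pagerank(directed=True, weights="weight")

print(in_degree, harmonic, betweenness, pagerank, sep="\n")
```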
### Distinguishing vendors and their level of success

To predict vendor success, we must determine if it is possible to distinguish between vendors and non-vendors, as well as between various levels of success. We look at the average network centralities and activity indicators for groups of users, in an attempt to distinguish groups with greater success. To this end, we divided, for each month, all active vendors, i.e., all users that are or will become vendors and have at least one post at that time, into five groups of success percentiles, each including respectively the top 0-20%, 20-40%, etc. of vendors in terms of sales. We refer to these groups as _vendor percentiles_. Separate vendor percentiles are formed for current and future success. We refer to the most and second most successful percentiles as the _top_ and _sub-top percentile_, respectively. The group of _non-vendors_ consists of regular forum users and those vendors with no recorded sales at all.

First, we computed for each month the mean normalized value of each measure for the groups of all vendors and all non-vendors. Normalization, to the range \([0,1]\), was performed separately for each month. We compute a _relative difference score_ between vendors and non-vendors as the difference between the former and the latter group's mean value, divided by the latter group's mean value. For the _absolute difference score_, only the subtraction is performed. Both are reported as a percentage. The monthly difference scores between vendors and non-vendors for the four network measures and three activity indicators are depicted in Figures 1a and 1d. In these figures, lines give a third-degree polynomial approximation of the trend based on the monthly centralities and activity indicators. A third-degree polynomial is used to account for the two aforementioned changes in activity observed for the Evolution cryptomarket [26]. Dashed lines are used for the network measures and dotted lines for the activity indicators.

Figures 1a and 1d show that, for all measures, vendors have higher network centralities and activity indicators than non-vendors. Furthermore, they show that although the relative difference score for betweenness centrality of vendors over non-vendors is quite significant (600-1000%), the corresponding absolute difference score is the smallest of all these measures. This indicates that betweenness has relatively small values overall with some extremely high outliers. On the contrary, harmonic closeness centrality has low relative difference scores but comparatively large absolute difference scores. Since these effects are expected to disappear when inducing a ranking from the actual values, it is less the size of the difference scores than the fact that they are positive that indicates (useful) predictive power. After all, the ranking induced by the centralities and baselines is more useful to law enforcement practitioners than the actual values. Thus, the exclusively positive difference scores in Figure 1 are the key observation.

Figure 1: The (relative or absolute) difference score between vendors and non-vendors and between the top percentiles and all vendors or the sub-top percentiles. The difference score computes the difference in the average normalized value of one group of users over another. In this figure, positive difference scores indicate that the more "successful" group achieves higher network centralities or activity indicators on average.

Next, we investigate whether these measures can also distinguish between vendors' levels of success. To assess this, we looked at the relative difference scores between the top percentile and all vendors (Figures 1b and 1c) and between the top and sub-top percentile (Figures 1e and 1f) for both current and future success. Figure 1b shows that for all measures the currently most successful vendors have on average higher network centralities and activity indicators. After the first month, and with the exception of July and August 2014 for betweenness centrality, Figure 1e demonstrates that this also holds when comparing the top with the sub-top percentile. Interestingly, trend changes for most measures follow cryptomarket developments. For example, up until May the difference score increases monthly, similar to how the level of activity on the cryptomarket increased during this period. The following period, up to the "Onymous" disruption [5], shows stable but slightly decreasing difference scores for most measures. Finally, after this disruption, we see a small increase in difference scores again. When we consider future success, Figure 1c again shows positive difference scores between the top vendor percentile and all vendors. However, they are noticeably lower than for current success. Similarly, Figure 1f shows mostly positive difference scores when comparing with the sub-top percentile, but with lower scores. Thus, for both current and future success the network centralities and activity indicators show the potential to distinguish vendors' level of success.
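For reference, the difference scores introduced above can be sketched in a few lines of code; the min-max normalization and the NumPy-based helper below are our own illustrative assumptions, not the authors' implementation.

```python
# Illustrative monthly difference scores; our own sketch, not the authors' code.
import numpy as np

def difference_scores(values, group_a, group_b):
    """Relative and absolute difference score (in %) of group A over group B,
    computed on values min-max normalized to [0, 1] within the month."""
    v = np.asarray(values, dtype=float)
    v_norm = (v - v.min()) / (v.max() - v.min())
    mean_a, mean_b = v_norm[group_a].mean(), v_norm[group_b].mean()
    relative = (mean_a - mean_b) / mean_b * 100.0
    absolute = (mean_a - mean_b) * 100.0
    return relative, absolute

# Example: one month of betweenness values, vendors marked by a boolean mask.
betweenness = np.array([0.0, 0.1, 3.5, 0.0, 7.2, 0.2])
vendors = np.array([False, False, True, False, True, False])
print(difference_scores(betweenness, vendors, ~vendors))
```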
Notably, betweenness centrality shows trends that differ from all other measures. Specifically, for current success we see clearly higher difference scores in the last months. On the contrary, for future success the final months show lower difference scores than before. This behaviour is likely due to the delay between successful vendors establishing themselves in the network and reaping the benefits in terms of sales. In other words, high betweenness centrality is expected to be more a prelude to than a consequence of vendor success. Thus, these results show the potential of betweenness centrality as an early warning signal for future vendor success.

In short, for all measures under consideration, (successful) vendors show positive difference scores over non-vendors and less successful vendors. Thus, the rankings induced by these measures are expected to rank successful vendors (relatively) higher. Therefore, the induced rankings have the potential to assist law enforcement by allowing them to focus on the higher ranked users. Furthermore, betweenness centrality was shown to have potential as an early warning signal, as high betweenness appears to precede vendor success. Finally, among the remaining network and baseline measures, topic engagement consistently showed the highest difference scores. This suggests that topic engagement may provide the best predictions of vendor success.

### Detecting vendors in the user base

In their efforts to disrupt cryptomarkets, law enforcement has access to limited personnel and resources. One method employed by law enforcement to deal with this limitation is to reduce the set of users to investigate based on a ranking induced by some measure. Rankings that after such a reduction still include many users of interest are of course preferable. In the previous section, we established the predictive potential of the network measures and activity indicators for predicting (successful) vendors. Now, we want to explore how this predictive potential translates to the task of reducing the set of users to investigate. To do this, we consider what we call the _vendor recall_. The vendor recall computes what percentage of users among the top vendor percentile (the top 20% of vendors) is also among the top percentile of all users, i.e., among the top 20% of all users when ranked on a given network or baseline measure. Thus, for a random ranking, we would expect a vendor recall of 20%. Monthly vendor recall, including trend approximations, is plotted in Figures 2a and 2d for current and future success, respectively.

Figures 2a and 2d show that, for both current and future success, degree and closeness centrality generally have a worse vendor recall than any of our activity indicators. From May onwards, PageRank outperforms post activity and performs on par with topics started. Meanwhile, from July onwards, betweenness centrality consistently outperforms both the post activity and topics started activity indicators and performs (nearly) on par with topic engagement. Overall, the topic engagement indicator most consistently achieves high performance in terms of vendor recall. These observations tell us two things. First, network centrality measures require the communication network to have developed and stabilised sufficiently before achieving reliable vendor recall.
After all, during the initial months the communication network and its structure are still undergoing significant changes. Consequently, we also see large fluctuations in vendor recall for the network measures between these months. Second, network measures do not strictly improve on our best activity indicator(s) in terms of vendor recall. Despite achieving the best vendor recall, topic engagement is only able to detect up to two thirds of the most successful vendors for current success and even fewer for future success. Thus, there may still be a significant number of successful vendors that are not detected by the activity indicators but that may be included by network measures. Therefore, we also analyse the overlap of detected vendors between the network measures and activity indicators.

Table 1 shows the average monthly overlap of each network measure with each individual activity indicator and with the union of detected vendors by all activity indicators. We see that PageRank and betweenness centrality detect the greatest share of vendors also found by the activity indicators, detecting on average approximately 80% of all current vendors and 75% of all future vendors found. However, respectively nearly 99% and 97% of all vendors detected by PageRank are also found by the activity indicators. As such, PageRank is not able to identify many new vendors. On the contrary, the activity indicators find respectively only 94% and 90% of the vendors included by betweenness centrality. Notably, individual indicators find far fewer. Thus, betweenness centrality is able to detect the largest share of successful vendors not included by any of the activity indicators. Therefore, reducing the set of users for law enforcement to investigate using betweenness centrality may provide a fresh perspective.

\begin{table}
\begin{tabular}{|c||c c c|c||c c c|c|}
\hline
 & \multicolumn{4}{c||}{Current success} & \multicolumn{4}{c|}{Future success} \\
\hline
 & pa & ts & te & pa \(\cup\) te \(\cup\) ts & pa & ts & te & pa \(\cup\) te \(\cup\) ts \\
\hline\hline
\multicolumn{9}{|c|}{\% of indicator-detected vendors also found by centrality (higher is better)} \\
\hline
In-degree & 90.3 & 77.9 & 78.4 & 72.3 \(\pm\)4.49 & 83.8 & 70.4 & 72.1 & 66.0 \(\pm\)4.60 \\
Bidirectional harmonic closeness centrality & 88.9 & 77.1 & 75.5 & 70.3 \(\pm\)4.04 & 83.9 & 72.1 & 70.6 & 66.2 \(\pm\)7.09 \\
Weighted directed betweenness centrality & 89.8 & 83.1 & 85.4 & **80.4** \(\pm\)5.23 & 82.5 & 77.4 & 79.5 & 74.1 \(\pm\)4.33 \\
Weighted directed PageRank & 95.7 & 83.2 & 86.5 & 79.6 \(\pm\)2.78 & 90.9 & 76.4 & 82.4 & **74.6** \(\pm\)4.14 \\
\hline
\multicolumn{9}{|c|}{\% of centrality-detected vendors also found by indicator (lower is better)} \\
\hline
In-degree & 94.1 & 88.1 & 94.8 & 98.7 \(\pm\)2.04 & 93.6 & 83.1 & 91.9 & 97.8 \(\pm\)4.04 \\
Bidirectional harmonic closeness centrality & 92.5 & 87.1 & 91.2 & 95.9 \(\pm\)2.27 & 91.3 & 82.9 & 87.7 & 95.4 \(\pm\)3.90 \\
Weighted directed betweenness centrality & 80.4 & 80.7 & 89.0 & **94.4** \(\pm\)2.25 & 75.9 & 75.6 & 83.8 & **90.6** \(\pm\)2.88 \\
Weighted directed PageRank & 90.6 & 85.4 & 95.0 & 98.8 \(\pm\)1.14 & 89.0 & 79.1 & 92.3 & 97.0 \(\pm\)2.02 \\
\hline
\end{tabular}
\end{table}
Table 1: Mean (and standard deviation) of the monthly overlap between network-centrality-based and activity-indicator-based detected vendors for the top vendor percentile (top 0-20% of vendors in terms of sales) as shown in Figure 2. (Abbreviations of activity indicators: pa = post activity, ts = topics started, and te = topic engagement.)

Despite finding additional vendors, the union of all successful vendors detected by betweenness centrality and all activity indicators only covers around 75% and 65% of the top percentile for current and future success, respectively. This means there is still a significant segment of the most successful vendors that would not be found using any of these measures. One possible explanation for scoring low on all of these measures is simply low posting activity. To assess whether this holds for the successful vendors that do not score high enough to be detected, we look at what we call the _post activity recall_ of the top vendor percentile in Figures 2b and 2e. The post activity recall is the percentage of the top vendor percentile's total post activity, for a given month, that is associated with the vendors counted in the vendor recall. Figures 2b and 2e show that, for both current and future success, the vast majority of post activity is associated with the vendors with high network centralities and activity indicators. As such, low post activity can be considered the main reason for the relatively low vendor recalls we observe. After all, the over 30% of successful vendors that are not found are responsible for less than 10% of the post activity of the entire group (in most cases even less). Furthermore, vendors with low post activity are also much less likely to be found using other methodologies. Therefore, applying this methodology is unlikely to miss vendors that other methodologies would have found. Thus, the relatively low vendor recall achieved by betweenness centrality and topic engagement should not discourage law enforcement practitioners from using this methodology.
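The recall metrics used here can be stated compactly in code. The sketch below is our own minimal illustration under assumed inputs: `score` holds one measure's values for all users in a month, `is_top_vendor` marks the top vendor percentile, and `posts` counts each user's posts; the sales recall considered next follows the same pattern with sales counts in place of post counts.

```python
# Illustrative vendor recall and post activity recall; not the authors' code.
import numpy as np

def recalls(score, is_top_vendor, posts, fraction=0.2):
    """Share of top-percentile vendors ranked in the top `fraction` of all
    users by `score`, and the share of the percentile's posts they placed."""
    score = np.asarray(score, dtype=float)
    is_top_vendor = np.asarray(is_top_vendor, dtype=bool)
    posts = np.asarray(posts, dtype=float)

    k = int(np.ceil(fraction * len(score)))   # size of the top-ranked user set
    detected = np.zeros(len(score), dtype=bool)
    detected[np.argsort(-score)[:k]] = True   # users in the top `fraction`
    detected &= is_top_vendor                 # ... that are top-percentile vendors

    vendor_recall = detected.sum() / is_top_vendor.sum() * 100.0
    post_recall = posts[detected].sum() / posts[is_top_vendor].sum() * 100.0
    return vendor_recall, post_recall
```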
Finally, we consider the _sales recall_, which measures what percentage of the entire top percentile's sales the detected vendors are responsible for. We plot the monthly current and future sales recall in Figures 2c and 2f. We see that for current success most of our observations for vendor recall hold up. Perhaps the most significant change is that the differences between PageRank and topics started and between betweenness centrality and topic engagement are more prominent. Similarly, for future success PageRank now outperforms the topics started baseline more consistently. For both current and future success, we observe that the sales recall is generally 10-20% higher than the corresponding vendor recall. This indicates that the detected vendors are, on average, the more successful vendors among the top percentile.

To summarise, topic engagement provides the best single-measure recall performance. Meanwhile, betweenness centrality identifies the greatest share of vendors that do not score high for any of the activity indicators. Additionally, betweenness centrality detects the most vendors of all network measures. As such, betweenness centrality is the network measure most likely to be of use to law enforcement for detecting vendors in the user base. Furthermore, betweenness centrality performs, relative to the activity indicators, better for future vendor success, further demonstrating its potential as an early warning signal.

Figure 2: Vendor recall of the top vendor percentile (top 0-20% of vendors in terms of sales) among the top 20% of all users based on the network measures and activity indicators. Plots cover recall in terms of vendors and those recalled vendors' post activity and sales w.r.t. the entire top percentile's activity and sales, for both current (top row) and future success (bottom row). Higher vendor recall indicates a greater portion of the top vendor percentile was found. Higher post activity recall indicates that the recalled vendors placed a relatively larger share of the top vendor percentile's total post activity. Higher sales recall indicates a greater portion of the top vendor percentile's total sales was attributed to the recalled vendors.

### Key player identification

In the previous section we determined that betweenness centrality and topic engagement are the measures with the greatest vendor recall performance. That is to say, they are likely to have the most successful vendors among the top ranked users when ranked on these measures. Here we look at the top scoring users to investigate to what extent they are indeed key players in the cryptomarket. To this end, we report in Table 2 the top 25 users for these measures, their member titles, and their current and future sales for September 2014. We see that among the top 25 users for betweenness centrality and topic engagement there are ten (i.e., 40%) that occur in both rankings. Furthermore, we observe that for both measures over half of the top 25 users have current and/or future sales (56% and 64%, respectively). The probabilities of this happening randomly are \(3.47\times 10^{-7}\) and \(3.44\times 10^{-9}\), respectively. Note that not all users with sales also have the corresponding "Vendor" member title. The reason for this is twofold: first, more important titles such as "Administrator" and "Moderator" supersede the "Vendor" title; and second, the "Vendor" title did not exist before September, leading to some older vendors with few future sales not being labelled as such.

\begin{table}
\begin{tabular}{r|l l r r||l l r r}
 & \multicolumn{4}{c||}{Weighted directed betweenness centrality} & \multicolumn{4}{c}{Topic engagement} \\
 & & & \multicolumn{2}{c||}{\#sales} & & & \multicolumn{2}{c}{\#sales} \\
 & username & title & current & future & username & title & current & future \\
\hline
1 & penissmith & Troll & 45 & 5 & Yasuo & Vendor & 2420 & 1790 \\
2 & wefinance & Banned & 44 & 2 & themostseekrit & Moderator & 0 & 0 \\
3 & themostseekrit & Moderator & 0 & 0 & Grandeur & Vendor & 1735 & 622 \\
4 & FRIM & Vendor & 703 & 381 & First & Banned & 119 & 1 \\
5 & scruffre & Member & 0 & 0 & SingularLee & Member & 0 & 0 \\
6 & Yasuo & Vendor & 2420 & 1790 & penissmith & Troll & 45 & 5 \\
7 & Scattermind & Public Relations & 400 & 0 & kalashnikov & Vendor & 3029 & 2214 \\
8 & LudoTilMortem & Market Moderator & 274 & 0 & wefinance & Banned & 44 & 2 \\
9 & Kimble & Administrator & 0 & 76 & FRIM & Vendor & 703 & 381 \\
10 & leon-trotsky & Troll & 0 & 0 & moka & Vendor & 0 & 0 \\
11 & ScoobyJew & Moderator & 0 & 0 & JoeBloggs & Member & 17 & 0 \\
12 & elmachico777 & Vendor & 49 & 240 & highasakite & Vendor & 158 & 3 \\
13 & Cypher & Vendor & 123 & 3 & ucard & Vendor & 910 & 177 \\
14 & evilsmile & Banned & 0 & 0 & mountainhigh9 & Vendor & 0 & 0 \\
15 & Trippyy & Moderator & 8 & 464 & Scattermind & Public Relations & 400 & 0 \\
16 & Grandeur & Vendor & 1735 & 622 & misterbitcoin & Vendor & 0 & 0 \\
17 & nerotic & Member & 0 & 0 & kesh & Vendor & 16 & 4 \\
18 & sinordos & Member & 0 & 0 & alphawolf89 & Vendor & 378 & 94 \\
19 & d33poutside & Administrator & 43 & 2 & SkypeMan & Vendor & 585 & 5078 \\
20 & moka & Vendor & 0 & 0 & fbgduck55 & Troll & 0 & 0 \\
21 & johnjones & Member & 0 & 0 & DonaldTrump & Member & 0 & 0 \\
22 & misterbitcoin & Vendor & 0 & 0 & IronHeart & Vendor & 144 & 148 \\
23 & Gold & Vendor & 21 & 216 & funfnu & Member & 0 & 0 \\
24 & Sportlife & Vendor & 649 & 268 & Verto & Administrator & 1163 & 338 \\
25 & maaadcity & Member & 0 & 0 & ScoobyJew & Moderator & 0 & 0 \\
\hline
\end{tabular}
\end{table}
Table 2: Top 25 users by weighted directed betweenness centrality and topic engagement for September 2014. Titles are determined as the most significant observed over the entire dataset (Administrator > Market Moderator > Moderator > Public Relations > Vendor > Banned > Troll > Member).
This also illustrates a potential pitfall of relying too much on forum member titles for key player identification. Of the users with sales, twelve are among the top percentile for current sales and eight are among the top percentile for future sales. Respectively, three of them (_kalashnikov_, _Yasuo_, and _Grandeur_) are in fact in the top 10 for current sales and one (_SkypeMan_) in the top 10 for future sales. This suggests that these two measures are suitable for predicting successful vendors. Notably, _Trippyy_, who is included in the top 25 for betweenness centrality, is the only user that is a member of the top percentile for future sales but not of the top percentile for current sales. Note that _Trippyy_'s member title in September was still "Vendor". Finally, compared to topic engagement, the top 25 of betweenness centrality includes a greater proportion of vendors for whom the majority of their sales are yet to come. Thus, this is another indicator that betweenness centrality can potentially serve as an early warning signal for future vendor success.

In addition to vendors, we also find users with other important positions on the forum, such as "Administrator" and "Moderator", among the top 25 for both measures. In fact, betweenness centrality and topic engagement combined include three out of the four users to have held the title "Administrator" among their top users. Furthermore, the only missing administrator became inactive within a month of the founding of the cryptomarket. Thus, we can say that all active administrators were found. Additionally, betweenness centrality identifies five out of nine users to have held the title of "Moderator" and who registered before the end of September 2014 (four out of seven if we exclude users who obtained the title after September, including _d33poutside_). The probability of this happening randomly is \(2.07\times 10^{-11}\) (\(2.27\times 10^{-9}\)). On the other hand, topic engagement includes two out of nine (two out of seven), with a probability of \(3.10\times 10^{-4}\) (\(1.82\times 10^{-4}\)). Thus, these measures are suited to predicting key players beyond just successful vendors. Though neither measure perfectly identifies only key players, they provide an excellent way of identifying individuals to investigate further manually.

## Discussion

The identification of key players in cryptomarkets, such as successful vendors and administrators, is a vital step in law enforcement interventions. Whereas it can be easy to identify administrators due to the titles given to these users, it may be harder to identify successful vendors. It is especially difficult to identify those vendors whose success is yet to come. These tasks might be further complicated when encryption is used for message contents.
The results presented in this work showed that network measures computed on the communication network and three forum activity indicators, which are intuitively linked to vendor success but not reliant on knowledge of message content, are useful in predicting (future) successful vendors. Our results showed that, on average, it is possible to distinguish between vendors and non-vendors using both the network centralities and the activity indicators. Additionally, we found that more successful vendors have on average higher centralities and activity indicators than less successful vendors. This holds for both current and future success, though to a lesser degree for the latter. However, it is important to remember that these findings concern the average case; perfect delineations cannot be made. Even so, they indicate that the rankings induced by the measures have predictive potential for vendor success and may be useful to law enforcement activities.

To reduce the workload for law enforcement, it can be beneficial to reduce the set of users that need to be manually investigated. We found that the measures of betweenness centrality and topic engagement included the greatest proportion of successful vendors when applying such a reduction (up to two thirds of the successful vendors when reducing to 20% of the users). Additionally, results showed that the vast majority (up to 98%) of the post activity of the most successful vendors was covered by those included, and that those included were the relatively more successful vendors. As such, most successful vendors that are not retained by these measures are simply almost inactive on the forum.
We note that the network centrality measures appear to require the communication network to have sufficiently developed and stabilised for good predictive performance. We found that betweenness centrality was the only network measure able to detect a substantial set of successful vendors that were not found by any of the activity indicators. Thus, there are vendors that may not be the most active, start the most topics, or get the most engagement on their topics, but that are able to establish themselves in the structure of the communication network such that they connect communities of customers. Therefore, betweenness centrality could be beneficial to law enforcement activities for reducing the set of users to investigate. The results highlight that the same measures are almost as effective at recognizing those that will do well in the future. This can partly be explained by those that are already quite successful and will simply continue to do well.
However, results indicate that the top ranked users by betweenness centrality and topic engagement in fact include several vendors whose majority of sales is yet to come. Furthermore, evidence suggests that high betweenness centrality may (often) precede sales success. As such, beyond predicting current success, the proposed approach can provide early warning signals for future success.

## Methods

In this section we discuss our dataset, followed by a description of how the communication networks were extracted. Next, we discuss the rationale behind and the computation of our activity indicators and the four network measures employed, in the context of finding key players in cryptomarkets.

### Dataset

As previously discussed in the Data section, we use the data presented in Boekhout et al. [26]. This dataset consists of data on the forum and the market, as well as data that links forum users to market users, i.e., vendors. For the forum data, we rely almost exclusively on the post and user data, ignoring more general information about topics and fora. For the market data, we rely exclusively on the vendor data, which includes sales statistics at specific moments in time. However, in most cases, these moments are not conveniently at the end of each month. As such, the _current sales_ of a vendor at the end of a given month were estimated based on their average daily growth in the number of sales between the most recent sales information available before and after the change of month. For the months after the last available sales information, the final sales total is used. _Future sales_ of a vendor were then determined as the difference between their current sales, for a given month, and the last available sales information. Figure 3 shows the total and monthly post activity and the number of active users and vendors. Here, _active_ users and vendors are those with at least one post up to and including the given month, where for the monthly active users we require at least one post that month. Throughout our results we relied on the total sets of active users and vendors for each month.

### Network extraction

Along with the dataset, we also utilise the communication network extraction method proposed in Boekhout et al. [26]. This extraction method creates nodes for all active users and adds an edge connecting the nodes of any pair of users whose posts appear in the same topic and adhere to certain parameters. The direction of these edges is from the user who placed the later post to the user who placed the earlier post. Additionally, edges are formed from every user who placed a post in a topic to the user who placed the first post of that topic. All edges are weighted to indicate the strength of the social tie implied by the edge. As mentioned in the Data section, we used the following parameters for network extraction: \(\delta_{o}=10\), \(\delta_{t}=1\) month, \(\omega_{lower}=0.2\), \(t_{lim}=7\) days, and \(\omega_{first}=0.5\). The first two parameters, i.e., \(\delta_{o}=10\) and \(\delta_{t}=1\) month, set limitations on the existence of an edge. Specifically, they prohibit any edge from being formed between posts that are more than ten posts apart or that were placed more than one month apart. The parameters \(\omega_{lower}=0.2\) and \(t_{lim}=7\) days determine the scope and decay of the exponential weighting function applied to "regular" edges, i.e., they determine the strength of the implied social tie.
Specifically, \(\omega_{lower}\) sets the minimum weight at \(0.2\), while \(t_{lim}\) determines that this minimum weight applies to all pairs of posts at least seven days apart. The resulting exponential weighting function is shown in Figure 4. Thus, \(\omega_{lower}\) and \(t_{lim}\) determine the likelihood that a post was placed in response to, or after having at least seen, a specific earlier post, while \(\delta_{o}\) and \(\delta_{t}\) determine at what point we consider this likelihood too low to imply a social tie. The final parameter, \(\omega_{first}=0.5\), sets the weight for all other edges, i.e., edges formed by linking posts to the initial post. The robustness of our results with respect to these parameters is investigated in the Supplementary Material.

Monthly communication networks were extracted based on all posts up to the end of the given month, thus including posts from previous months. Additionally, we simplify the networks by merging all parallel edges, i.e., all edges connecting the same two nodes in the same direction, into single edges. The weights of the resulting edges are the sums of the weights of the parallel edges that were merged. In other words, the resulting weights represent the combined likelihood of a meaningful social tie connecting two users. As a result, we obtain fifteen simplified monthly weighted directed networks \(G=(V,E)\), where each node \(u\in V\) represents an active user and each weighted edge \((u,v)\in E\) represents the inferred weight of the social tie from user \(u\in V\) to user \(v\in V\). It is on these monthly weighted directed networks that the network measures were computed.
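The exact weighting function is specified in Boekhout et al. [26]; as a minimal illustration, the sketch below assumes a simple exponential decay calibrated to the two stated endpoints, i.e., weight 1 for posts placed at the same time, decaying to \(\omega_{lower}=0.2\) at \(t_{lim}=7\) days and constant thereafter. The functional form is our own assumption, not the published definition.

```python
# Illustrative edge weighting; the true function is defined in Boekhout et al. [26].
import math

OMEGA_LOWER = 0.2   # minimum tie strength (omega_lower)
T_LIM_DAYS = 7.0    # gap at which the minimum is reached (t_lim)

def edge_weight(gap_days: float) -> float:
    """Assumed exponential decay from 1.0 at no gap to OMEGA_LOWER at T_LIM_DAYS."""
    if gap_days >= T_LIM_DAYS:
        return OMEGA_LOWER
    rate = -math.log(OMEGA_LOWER) / T_LIM_DAYS  # so weight(T_LIM_DAYS) == OMEGA_LOWER
    return math.exp(-rate * gap_days)

print(edge_weight(0.0), edge_weight(3.5), edge_weight(10.0))  # 1.0, ~0.447, 0.2
```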
### Activity indicators

To evaluate the performance of predicting vendor success using network measures, we compare against some activity indicators that can be directly computed from the forum data. Similar to the rationale for our use of network measures, these activity indicators must also adhere to the requirement that we lack knowledge of message content. We considered three activity indicators in this paper: post activity, topics started and topic engagement. Below, we discuss why we believe these are appropriate indicators and how they are computed.

#### Post activity

Post activity refers to the number of posts a user has posted on the forum up to a given moment in time. A straightforward link can be made between a user's visibility on a forum and their post activity. After all, the more often someone posts, the more likely it is that another user will come across one of their posts. This increased visibility leads to greater name recognition, which has been linked to improved trust[21] and market outcomes[22] (e.g., more sales); and trustworthiness has been shown to be a better predictor of vendor selection than product diversity or affordability[23]. Therefore, post activity can be used as an indicator of the likelihood of vendor success. Furthermore, since post activity is simply the number of posts placed by a given user, it can be determined at no computational cost and without knowledge of message content. Thus, post activity is well suited to provide baseline performance against which to evaluate the network centrality measures.

#### Topics started

Forums that accompany cryptomarkets are intended to allow vendors and their customers to interact. As such, it is common practice for vendors to promote their products listed for sale by starting a topic promoting their listings[11]. The number of topics a user has started is therefore a potential indicator of being a vendor and hence our second baseline measure. As a greater number of topics started may lead to greater visibility, greater name recognition, and simply a greater reach, it may also lead to increased success for vendors[20]. Furthermore, the number of started topics is also easy to compute and is not reliant on knowledge of message content.

#### Topic engagement

Topic engagement is the total number of responses to all topics started by a user combined. It can be computed with little computational cost and independently of any knowledge of message content. Topic engagement combines the fact that starting topics is a good indicator of being a vendor with the fact that topics that receive a lot of engagement are naturally also more visible. Additionally, engagement in any topic about a specific listing is likely to be associated with that listing or the vendor. For example, a post may concern feedback on the particular listing or on the vendor themselves. Either way, engagement on these topics is also highly likely to be associated with actual sales. As such, where the topics started baseline is more likely to be a good indicator of being a vendor or not, topic engagement is more likely to be a good indicator of the success of any such vendor.

### Network centrality measures

In this subsection we discuss the various network measures utilised in this paper. We discuss their computation and interpret their meaning within the context of cryptomarket communication networks. All network measures were computed using the igraph package[31].

#### Degree

The degree of a node is a measure of the number of distinct neighbors connected to that node. While the degree captures this regardless of edge directions, the in- and out-degree count only the neighbors connected through incoming and outgoing edges, respectively. Furthermore, the weighted degree variants sum the weights of the connections with the neighbors. The degree can be interpreted as the number of different users that a (potential) vendor responds to or receives responses from. The weighted variant also takes into account how strong the relations to these users are. Thus, a high in-degree in our networks indicates many different users responding within a relatively short time frame. Since it is likely that those that respond shortly after a post have seen that post, a high in-degree implies visibility to many different users, thereby improving the aforementioned brand awareness. As brand awareness promotes trust and sales [21, 22] and trust is a good predictor of vendor selection [23], a high in-degree might serve as a good predictor of vendor success. Unlike the incoming edges used for the in-degree, outgoing edges do not imply visibility of the user to the neighbors these edges connect to, since those neighbors posted before the user. For this reason we focus on the in-degree instead of the degree or out-degree. We report results for the unweighted in-degree, as we believe the number of neighbors, i.e., the number of potential customers, to be a better predictor of vendor success than the combined strength of the social ties to these neighbors. The weighted in-degree showed similar results, but with slightly fewer detected vendors that were not found by the activity indicators.

#### Harmonic closeness centrality

Closeness centrality [27] is a measure of how easily a node can reach every other node in the network.
Essentially, it computes the shortest distances, i.e., shortest paths, to every other node. In other words, where degree is a measure of how well someone is connected locally, closeness is a measure of how well connected a node is globally, i.e., to the entire network. Harmonic closeness centrality behaves essentially the same as standard closeness centrality and extends properly to directed and disconnected networks, i.e., networks with node pairs that are not connected by any (directed) path [32], such as ours. Let \(d_{G}(u,v)\) be the shortest distance connecting nodes \(u,v\in V\), where \(d_{G}(u,v)=\infty\) if no path exists. Using the convention \(\frac{1}{\infty}=0\), we can define the harmonic closeness centrality as:

\[hcc_{G}(u)=\sum_{v\in V,v\neq u}\frac{1}{d_{G}(u,v)}. \tag{1}\]

For bidirectional harmonic closeness centrality, the shortest paths can be determined following edges regardless of their direction. However, for incoming and outgoing harmonic closeness centrality the paths may follow edges only in one direction, either following the direction of the edges (outgoing) or going against the direction of the edges (incoming). The weighted variants of these measures use the inverse of the edge weights during shortest distance computation, such that stronger connections equate to shorter distances. We report on the unweighted bidirectional harmonic closeness centrality in the Results section as, of all variants, it detected the largest share of vendors not found by any of the activity indicators. The interpretation of distances of more than a single edge with respect to vendor success in cryptomarket communication networks is not straightforward. To a certain extent, one can interpret a smaller distance as a greater likelihood that your posts are visible to the other user. Even so, it is unknown how the topics that are responsible for forming the edges that make up the connecting paths are related. They may originate from the same or a highly similar topic, increasing the odds of being visible, or they may differ greatly, making it unlikely that these connections truly form a meaningful path. As such, a high closeness centrality does not intuitively imply a successful vendor. Regardless, closeness centrality has often proven to capture users at important positions in a network and is therefore included in our analyses.
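As a small self-contained illustration of Eq. (1), the sketch below computes the unweighted bidirectional harmonic closeness of every node via breadth-first search; the toy graph and helper are our own, not the authors' implementation.

```python
# Toy implementation of Eq. (1); our own illustration, not the authors' code.
from collections import deque

def harmonic_closeness(adj):
    """Unweighted harmonic closeness for an undirected adjacency dict;
    unreachable nodes contribute 0 (the 1/infinity convention)."""
    scores = {}
    for u in adj:
        dist = {u: 0}
        queue = deque([u])
        while queue:                       # breadth-first search from u
            x = queue.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        scores[u] = sum(1.0 / d for v, d in dist.items() if v != u)
    return scores

# Bidirectional variant of a directed network: treat each edge as undirected.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(harmonic_closeness(adj))             # node 2 scores highest (3.0)
```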
#### Betweenness centrality

Betweenness centrality [28, 29] measures the extent to which a node is on shortest paths connecting pairs of nodes in the network. In other words, it measures how important a node is with respect to connecting various communities in the network. In the context of cryptomarkets, this makes it a good measure of how well a (potential) vendor reaches different communities of potential buyers. As such, a vendor with a high betweenness is more likely to have a larger pool of buyers, as they may be able to draw from more communities of buyers. Additionally, betweenness centrality has been shown to perform well in identifying key players in criminal networks [13, 19]. The betweenness centrality of node \(u\in V\) is determined by computing the sum of the fractions of shortest paths connecting nodes \(v,w\in V\) that pass through \(u\). Let \(\sigma_{vw}\) indicate the number of shortest paths connecting nodes \(v,w\in V\), and let \(\sigma_{vw}(u)\) indicate the number of those shortest paths that pass through node \(u\in V\). Then betweenness centrality can be defined as:

\[bc(u)=\sum_{v,w\in V,u\neq v\neq w}\frac{\sigma_{vw}(u)}{\sigma_{vw}}. \tag{2}\]

For directed betweenness centrality, paths must follow the direction of the edges, while undirected betweenness can follow edges in either direction. As for harmonic closeness centrality, the weighted variants use the inverse of the edge weights during shortest path computation, such that stronger connections equate to shorter distances. The variant taking both direction and weighting into account showed the best performance. Therefore, its results are reported in the Results section.

#### PageRank

The final measure we consider is PageRank[30]. PageRank computes the probability that a random walker that infinitely traverses a network ends up at a given node. Each step of these random walks consists of either following one of the available edges or jumping to a random node with a particular probability. For the directed variant the choice of edge is restricted to following the direction of the edges, and adding weights impacts the odds of following any given edge. As for betweenness centrality, we report the results for the variant taking both direction and weighting into account, as it showed the best performance. High PageRank values often follow from having paths/edges incoming from many and/or other important (i.e., high-value) nodes in the network. As such, we can interpret a high PageRank value as being closely connected to other key players. As previously stated, Duxbury & Haynie[23] found that buyers were more likely to continue ordering with vendors within the same community. This means that the close connection between users with high PageRank values can be indicative of a boost in their perceived trust and may stimulate their sales. Thus, a high PageRank value may be able to predict successful vendors.

## Data availability

The datasets and communication networks analysed during this study are available upon request. The dataset extraction and data quality resolution process, as well as the network extraction process, are discussed in Boekhout et al.[26].
2304.10760
Magnon squeezing by two-tone driving of a qubit in cavity-magnon-qubit systems
We propose a scheme for preparing magnon squeezed states in a hybrid cavity-magnon-qubit system. The system consists of a microwave cavity that simultaneously couples to a magnon mode of a macroscopic yttrium-iron-garnet (YIG) sphere via the magnetic-dipole interaction and to a transmon-type superconducting qubit via the electric-dipole interaction. By far detuning from the magnon-qubit system, the microwave cavity is adiabatically eliminated. The magnon mode and the qubit then get effectively coupled via the mediation of virtual photons of the microwave cavity. We show that by driving the qubit with two microwave fields and by appropriately choosing the drive frequencies and strengths, magnonic parametric amplification can be realized, which leads to magnon quadrature squeezing with the noise below vacuum fluctuation. We provide optimal conditions for achieving magnon squeezing, and moderate squeezing can be obtained using currently available parameters. The generated squeezed states are of a magnon mode involving more than $10^{18}$ spins and thus macroscopic quantum states. The work may find promising applications in quantum information processing and high-precision measurements based on magnons and in the study of macroscopic quantum states.
Qi Guo, Jiong Cheng, Huatang Tan, Jie Li
2023-04-21T06:09:13Z
http://arxiv.org/abs/2304.10760v4
# Magnon squeezing by two-tone driving of a qubit in cavity-magnon-qubit systems

###### Abstract

We propose a scheme for preparing magnon squeezed states in a hybrid cavity-magnon-qubit system. The system consists of a microwave cavity that simultaneously couples to a magnon mode of a macroscopic yttrium-iron-garnet (YIG) sphere via the magnetic-dipole interaction and to a transmon-type superconducting qubit via the electric-dipole interaction. By far detuning from the magnon-qubit system, the microwave cavity is adiabatically eliminated. The magnon mode and the qubit then get effectively coupled via the mediation of virtual photons of the microwave cavity. We show that by driving the qubit with two microwave fields and by appropriately choosing the drive frequencies and strengths, magnonic parametric amplification can be realized, which leads to magnon quadrature squeezing with the noise below vacuum fluctuation. We provide optimal conditions for achieving magnon squeezing, and moderate squeezing can be obtained using currently available parameters. The generated squeezed states are of a magnon mode involving more than \(10^{18}\) spins and thus macroscopic quantum states. The work may find promising applications in quantum information processing and high-precision measurements based on magnons and in the study of macroscopic quantum states.

## I Introduction

With the increasing improvement of experimental technology, the study of macroscopic quantum states has been attracting more and more attention since the Schrödinger cat state was proposed [1]. In particular, cavity optomechanics (COM), exploring the interaction between electromagnetic fields and mechanical motion via radiation pressure, provides an ideal platform to prepare macroscopic quantum states [2]. In the past decade, significant progress has been made in the field of COM in generating macroscopic quantum states of massive mechanical oscillators. These include the realization of the entangled states of a mechanical oscillator and an electromagnetic field [3], the entangled states of two mechanical oscillators [4; 5; 6], the squeezed states [7] and superposition states [8; 9] of mechanical motion, etc. In addition, nonclassical states, e.g., superposition states [10], Fock states [11], cat states [12] and entangled states [13; 14], of macroscopic mechanical resonators can also be generated by coupling to and controlling a superconducting qubit. In recent years, hybrid systems based on collective spin excitations (magnons) in macroscopic ferromagnetic crystals, such as yttrium-iron-garnet (YIG), have become a new platform to explore macroscopic quantum phenomena and develop novel quantum technologies [15; 16; 17]. It was first proposed in cavity magnomechanics [18; 19; 20; 21] that macroscopic entangled states of magnons, photons and phonons can be created exploiting the dispersive magnetostrictive interaction [19]. Such nonlinear magnomechanical coupling can also be used to entangle two magnon modes [22] or two mechanical modes [23], and to generate squeezed states of magnons and phonons [24]. It can also be exploited to achieve Einstein-Podolsky-Rosen steering between magnons, photons and phonons [25; 26], and quantum ground states of mechanical vibration [27; 28; 29]. Apart from utilizing the nonlinear magnetostriction, many other mechanisms have been put forward in cavity magnonics to prepare macroscopic quantum states.
Specifically, the nonlinear magnon-photon interaction in cavity optomagnonics is exploited to cool magnons [30], and prepare magnon Fock [31], cat [32] and path-entangled [33] states, as well as the entangled states of magnons and optical photons [34; 35]. Dissipative coupling between magnons and microwave photons is used to generate a magnon-photon Bell state [36]. Anisotropy, together with conditional measurements on microwave cavity photons, is utilized to prepare a magnon cat state [37]. Kerr-type nonlinearities are adopted to entangle two magnon modes [38; 39] and achieve one-way quantum steering between ferrimagnetic microspheres [40]. Another approach is to use external quantum drives, e.g., single-mode or two-mode squeezed vacuum fields, which are employed to entangle two magnon modes [41; 42] and mechanical modes [43], and control one-way quantum steering [44; 45; 46]. The effective coupling of magnons with superconducting qubits via the mediation of microwave cavity photons can also provide the necessary nonlinearity to prepare quantum states of magnons [15; 16; 47]. Due to the high controllability and scalability of superconducting circuits, the study of the hybrid cavity-magnon-superconducting-qubit system has been receiving increasing attention in recent years. Significant experimental progress has been made in this system. Specifically, strong coupling between a magnon and a superconducting qubit and magnon-vacuum-induced Rabi splitting were demonstrated [48]. Shortly afterwards, the quanta of a magnon mode in a millimeter-sized YIG sphere were resolved by using the magnon-qubit strong dispersive interaction [49]. Working in the same dispersive regime, high-sensitivity detection of a single magnon in a YIG sphere with a quantum efficiency of up to 0.71 was realized [50]. Very recently, the superposition state of a single magnon and vacuum was deterministically generated [51]. These successful experimental demonstrations have further stimulated the study of quantum states in such a hybrid system. A series of theoretical proposals have been provided to explore quantum effects in the system, such as magnon blockade [52; 53; 54; 55; 56; 57], continuous-variable [58; 59] and discrete-variable [60; 61; 62; 63; 64] magnon entanglement and steering, magnon cat states [65; 66], and so on. All of these indicate that the magnon-qubit system is a promising platform to prepare various magnonic quantum states via manipulating the qubit. Here, we show how to generate magnon squeezed states in such a cavity-magnon-qubit system. To date, only a few protocols have been offered in cavity magnonics to prepare magnon squeezed states. They can be achieved by exploiting the anisotropy of the ferromagnet [67], the mechanism of ponderomotive-like squeezing [68], the reservoir-engineering technique [69], or squeezed microwave drive fields [24]. Our approach differs from all the above mechanisms and is realized via two-tone driving of the superconducting qubit. It is akin to that used to produce squeezed light by two-tone driving of an atom [70]. The system operates in the regime where the microwave cavity is far detuned from the magnon-qubit system and can thus be adiabatically eliminated. The qubit is simultaneously driven by two microwave fields. We show that by properly choosing the drive frequencies and strengths, an effective parametric amplification Hamiltonian can be obtained for the magnon mode, which leads to a two-magnon process and thus to squeezing of the magnon mode.
The paper is organized as follows. In Sec. II, we describe the system and derive the effective Hamiltonian for the magnon mode, which gives rise to magnon quadrature squeezing. In Sec. III, we present the numerical results of the magnon squeezing, check the validity of our derived approximate model, provide the optimal drive conditions, and analyze the dissipation and thermal noise effects on the squeezing. Lastly, we draw the conclusions in Sec. IV.

## II The system and effective Hamiltonian

The hybrid cavity-magnon-superconducting-qubit system, as depicted in Fig. 1(a), consists of a YIG sphere (e.g., with a diameter of 1 mm [51]) and a transmon-type superconducting qubit that are placed inside a microwave cavity. The YIG sphere supports a magnon mode (the collective motion of a large number of spins), which couples to the microwave cavity via the magnetic-dipole interaction, and the latter further couples to the qubit via the electric-dipole interaction. The Hamiltonian of this tripartite system reads (\(\hbar=1\))

\[H=\omega_{0}a^{\dagger}a+\frac{1}{2}\omega_{q}\sigma_{z}+\omega_{m}m^{\dagger}m+g_{1}\left(a\sigma^{+}+a^{\dagger}\sigma^{-}\right)+g_{2}\left(am^{\dagger}+a^{\dagger}m\right), \tag{1}\]

where \(a\) (\(a^{\dagger}\)) and \(m\) (\(m^{\dagger}\)) are the annihilation (creation) operators of the microwave cavity and the magnon mode, respectively, and \(\omega_{0}\) and \(\omega_{m}\) are their resonance frequencies. We limit the subspace of the transmon-type qubit to the ground state \(|g\rangle\) and the first-excited state \(|e\rangle\); the Pauli matrix \(\sigma_{z}=|e\rangle\langle e|-|g\rangle\langle g|\), and \(\sigma^{-}=|g\rangle\langle e|\) and \(\sigma^{+}=|e\rangle\langle g|\) are the ladder operators of the qubit with transition frequency \(\omega_{q}\). The coupling strengths \(g_{1}\) and \(g_{2}\) are of the cavity-qubit and cavity-magnon systems, respectively. For simplicity, we consider the situation where the qubit and the magnon are resonant, \(\omega_{q}=\omega_{m}\equiv\omega\), and far-detuned from the microwave cavity, i.e., \(\Delta=\omega_{0}-\omega\gg g_{1},g_{2}\). This allows us to adiabatically eliminate the cavity mode and obtain the effective Jaynes-Cummings-type Hamiltonian of the magnon-qubit system [15], which is given by

\[H_{\rm eff}=\frac{1}{2}\omega_{Q}\sigma_{z}+\omega_{M}m^{\dagger}m+G\left(\sigma^{+}m+\sigma^{-}m^{\dagger}\right), \tag{2}\]

where \(\omega_{Q}=\omega+\frac{g_{1}^{2}}{\Delta}\) and \(\omega_{M}=\omega+\frac{g_{2}^{2}}{\Delta}\) correspond to the effective frequencies of the qubit and the magnon mode, respectively (cf. Fig. 1(b)), and \(G=\frac{g_{1}g_{2}}{\Delta}\) denotes the effective magnon-qubit coupling. Such an effective Hamiltonian has been adopted in the experiments [47; 48; 49; 50; 51].

Figure 1: (a) Schematic of the cavity-magnon-superconducting-qubit system. A microwave cavity couples to both a magnon mode of a macroscopic YIG sphere, which is placed in a uniform bias magnetic field \(B_{z}\) (\(z\) direction), and a superconducting qubit, which is driven by two microwave fields. The magnon mode and the qubit get effectively coupled via the mediation of the microwave cavity. (b) Frequency spectrum of the system. The cavity with frequency \(\omega_{0}\) is far-detuned from the magnon mode (\(\omega_{m}\)) and the qubit (\(\omega_{q}\)). The effective qubit transition frequency \(\omega_{Q}\) is resonant with the drive field at frequency \(\omega_{1}\), but is detuned by \(\delta_{1}\) and \(\delta_{2}\), respectively, from the effective magnon frequency \(\omega_{M}\) and the drive field at frequency \(\omega_{2}\).
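As a quick numerical sanity check of the effective model (our own illustration, not part of the paper), one can diagonalize the full Hamiltonian (1) in the single-excitation subspace and verify that the two cavity-vacuum-like eigenstates are split by approximately \(2G=2g_{1}g_{2}/\Delta\) when \(\omega_{q}=\omega_{m}\); all parameter values below are placeholders.

```python
# Our own numerical sanity check of Eq. (2); not part of the paper.
import numpy as np

omega, Delta = 5.0, 1.0        # omega_q = omega_m = omega; cavity detuned by Delta
omega0 = omega + Delta
g1, g2 = 0.02, 0.02            # g1, g2 << Delta

# Single-excitation basis: {|e,0_a,0_m>, |g,1_a,0_m>, |g,0_a,1_m>}.
H = np.array([[omega, g1,     0.0],
              [g1,    omega0, g2 ],
              [0.0,   g2,     omega]])
evals = np.linalg.eigvalsh(H)
print(evals[1] - evals[0], 2 * g1 * g2 / Delta)   # dressed-pair splitting vs 2G
```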
The drive frequencies are \(\omega_{1}=\omega_{Q}\) and \(\omega_{2}\), and the corresponding driving strengths are \(\Omega_{1}\) and \(\Omega_{2}\). The Hamiltonian, in the interaction picture with respect to \(\omega_{1}(\frac{1}{2}\sigma_{z}+m^{\dagger}m)\), can be written as \[H_{1}=-\delta_{1}m^{\dagger}m+\left(G\sigma^{+}m+\Omega_{1}\sigma^{+}+\Omega_{ 2}e^{i\delta_{2}t}\sigma^{+}+\text{H.c.}\right), \tag{3}\] where \(\delta_{1}=\omega_{1}-\omega_{M}\) and \(\delta_{2}=\omega_{1}-\omega_{2}\). Without loss of generality, \(\Omega_{1}\) and \(\Omega_{2}\) are assumed to be real. To express the physics more straightforwardly, we adopt the qubit representation dressed by the drive field (of frequency \(\omega_{1}\)). By diagonalizing the driving Hamiltonian \(V_{1}=\Omega_{1}(\sigma^{+}+\sigma^{-})\), the dressed states are expressed as \[\ket{+} = \frac{1}{\sqrt{2}}\left(\ket{e}+\ket{g}\right),\] \[\ket{-} = \frac{1}{\sqrt{2}}\left(\ket{e}-\ket{g}\right). \tag{4}\] Rewriting the Hamiltonian \(H_{1}\) in terms of the dressed states, we obtain \[H_{2}=-\delta_{1}m^{\dagger}m+\Omega_{1}\left(\sigma_{++}- \sigma_{--}\right)\] \[\qquad+\frac{1}{2}\Big{[}\left(Gm+\Omega_{2}e^{i\delta_{2}t} \right)\left(\sigma_{++}-\sigma_{+-}+\sigma_{-+}-\sigma_{--}\right)\] \[\qquad+\left(Gm^{\dagger}+\Omega_{2}e^{-i\delta_{2}t}\right) \left(\sigma_{++}-\sigma_{-+}+\sigma_{+-}-\sigma_{--}\right)\Big{]}, \tag{5}\] where we define \(\sigma_{jk}=\ket{j}\bra{k}\) (\(j,k=+,-\)). Working in the interaction picture with respect to \(-\delta_{1}m^{\dagger}m+\Omega_{1}\left(\sigma_{++}-\sigma_{--}\right)\) and taking the rotating-wave approximation under the conditions of \(\Omega_{1}=-\frac{1}{2}\delta_{2}\) and \(|\delta_{2}|\gg\frac{\Omega_{1}}{2},\frac{\Omega_{2}}{2},|\delta_{1}|\), we obtain the following Hamiltonian: \[H_{3}=\!\frac{1}{2}G\left(me^{i\delta_{1}t}\!+\!m^{\dagger}e^{-i\delta_{1}t} \right)\left(\sigma_{++}\!-\!\sigma_{--}\right)\!-\!\frac{1}{2}\Omega_{2}\left( \sigma_{+-}\!+\!\sigma_{-+}\right). \tag{6}\] The second term \(V_{2}=-\frac{1}{2}\Omega_{2}(\sigma_{+-}+\sigma_{-+})\) corresponds to the driving Hamiltonian associated with the second drive for the qubit. By diagonalizing \(V_{2}\), we find that its eigenstates \(\left(\ket{+}\pm\ket{-}\right)/\sqrt{2}\) are exactly the bare qubit states \(\ket{e}\) and \(\ket{g}\). Therefore, the Hamiltonian (6) can be expressed in the initial qubit-state basis \(\{\ket{e},\ket{g}\}\) as \[H_{4}=-\frac{1}{2}\Omega_{2}\sigma_{z}+\frac{1}{2}G\left(me^{i\delta_{1}t}+m^ {\dagger}e^{-i\delta_{1}t}\right)\left(\sigma^{+}+\sigma^{-}\right). \tag{7}\] Under the conditions \(|\delta_{1}|\ll\Omega_{2}\) and \(|\delta_{1}\pm\Omega_{2}|\gg\frac{G}{2}\), we derive the following effective Hamiltonian in the interaction picture with respect to \(-\frac{1}{2}\Omega_{2}\sigma_{z}\)[71]: \[H_{5}=\frac{G^{2}}{4}\bigg{[}\frac{1}{\delta_{1}\!-\!\Omega_{2} }\left(m^{\dagger}m\sigma_{z}\!+\!\sigma^{+}\sigma^{-}\right)\!+\!\frac{1}{ \delta_{1}\!+\!\Omega_{2}}\left(-m^{\dagger}m\sigma_{z}\!+\!\sigma^{-}\sigma^{ +}\right)\] \[\qquad+\frac{1}{\Omega_{2}}m^{2}\sigma_{z}e^{i2\delta_{1}t}+\frac {1}{\Omega_{2}}m^{\dagger 2}\sigma_{z}e^{-i2\delta_{1}t}\bigg{]}.
\tag{8}\] For the case of the qubit being initially prepared in the state \(\ket{e}\) (similarly, for the ground state \(\ket{g}\)), we obtain the parametric amplification Hamiltonian for the magnon mode in the interaction picture, i.e., \[H_{6}=\chi\left[m^{2}e^{i\left(2\delta_{1}-\frac{\Omega_{2}G^{2}}{2(\delta_{1 }^{2}-\Omega_{2}^{2})}\right)t}+m^{\dagger 2}e^{-i\left(2\delta_{1}-\frac{\Omega_{2}G^{2}}{2( \delta_{1}^{2}-\Omega_{2}^{2})}\right)t}\right], \tag{9}\] where \(\chi=G^{2}/(4\Omega_{2})\). This Hamiltonian describes a two-magnon process and can generate a magnon squeezed vacuum state. The squeezing direction in the phase space rotates due to the time dependence of the Hamiltonian. By appropriately choosing the parameters to have \(\delta_{1}=\frac{\Omega_{2}G^{2}}{4(\delta_{1}^{2}-\Omega_{2}^{2})}\), i.e., \(4\Delta\Omega_{2}=-g_{1}^{2}g_{2}^{2}/(g_{1}^{2}-g_{2}^{2})\), the Hamiltonian (9) becomes time-independent, which yields the normal parametric amplification Hamiltonian \(\chi(m^{2}+m^{\dagger 2})\). ## III Results of magnon quadrature squeezing In Sec. II, we proved analytically that our mechanism can generate squeezing of the magnon mode; the derivation was performed without considering any dissipation of the system. In this section, we present the numerical results of the magnon squeezing by including dissipations of the system and using experimentally feasible parameters. We calculate the magnon squeezing by using the effective Hamiltonian (9), and compare it with that obtained using the original (full) Hamiltonian (3), where no approximation is made. This allows us to check the validity of our model and determine the parameter regime where the effective Hamiltonian is a good approximation. Squeezing means that the variance of the general quadrature of the magnon mode, \(X=\cos\theta X_{1}+\sin\theta X_{2}\), falls below that of the vacuum noise, where \(X_{1}=(m+m^{\dagger})/\sqrt{2}\) and \(X_{2}=i(m^{\dagger}-m)/\sqrt{2}\) are the magnon amplitude and phase quadratures. In fact, the minimum variance of the quadrature \(X\), i.e., \(V_{\text{min}}(X)\), can be obtained analytically using the time-independent parametric amplification Hamiltonian (9) under precisely chosen parameters. Here, to be generic, we calculate the variance using the time-dependent Hamiltonian (9). The time dependence leads to a time-dependent optimal squeezing angle \(\theta_{\text{opt}}\), corresponding to the minimum variance and thus the maximum squeezing. \(V_{\text{min}}(X)\) can nevertheless be obtained by computing the minimum eigenvalue of the covariance matrix (CM) \(\sigma\) of the two magnon quadratures \(X_{1,2}\), i.e., \[V_{\text{min}}(X)=\min\left\{\text{eig}[\sigma]\right\}. \tag{10}\] The CM \(\sigma\) is defined as \[\sigma=\left(\begin{array}{cc}\sigma_{11}&\sigma_{12}\\ \sigma_{21}&\sigma_{22}\end{array}\right), \tag{11}\] where \(\sigma_{jk}=\text{Tr}[\rho(X_{j}X_{k}+X_{k}X_{j})/2]-\text{Tr}[\rho X_{j}]\, \text{Tr}[\rho X_{k}]\) (\(j,k=1,2\)), and \(\rho=\rho(t)\) is the density matrix of the system at time \(t\). The optimal squeezing angle \(\theta_{\text{opt}}\) can be obtained from the CM \(\sigma\), which is \(\theta_{\text{opt}}=\frac{1}{2}\arctan\frac{2\sigma_{12}}{\sigma_{11}-\sigma_{22}}- \frac{\pi}{2}\). In Fig. 2(a), we plot the minimum variance \(V_{\text{min}}(X)\) as a function of time \(t\), where the solid (dashed) line corresponds to the result obtained using the full (effective) Hamiltonian (3) ((9)).
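The variance extraction of Eqs. (10) and (11) is easy to reproduce. The following minimal sketch (our illustration, not the authors' code) evolves the magnon mode under the time-independent limit of Eq. (9), \(H=\chi(m^{2}+m^{\dagger 2})\), with a thermal Lindblad dissipator, and reads off \(V_{\text{min}}(X)\) as the smallest CM eigenvalue; the rate values and Fock truncation are assumed for illustration only:

```python
import numpy as np
import qutip as qt

# Illustrative parameters (not the paper's): chi = G^2/(4*Omega_2), etc.
N = 60                      # magnon Fock-space truncation
chi = 2*np.pi*0.05          # squeezing rate, in 2*pi*MHz (time below in us)
kappa = 2*np.pi*0.02        # magnon dissipation rate
nbar = 0.0                  # thermal occupation of the magnon bath

m = qt.destroy(N)
H = chi*(m**2 + m.dag()**2)                    # time-independent limit of Eq. (9)
c_ops = [np.sqrt(kappa*(nbar+1))*m,            # Lindblad dissipators
         np.sqrt(kappa*nbar)*m.dag()]

X1 = (m + m.dag())/np.sqrt(2)                  # amplitude quadrature
X2 = 1j*(m.dag() - m)/np.sqrt(2)               # phase quadrature

tlist = np.linspace(0.0, 1.5, 7)
states = qt.mesolve(H, qt.basis(N, 0), tlist, c_ops=c_ops).states
for t, rho in zip(tlist, states):
    e1, e2 = qt.expect(X1, rho), qt.expect(X2, rho)
    s11 = qt.expect(X1*X1, rho) - e1**2
    s22 = qt.expect(X2*X2, rho) - e2**2
    s12 = qt.expect((X1*X2 + X2*X1)/2, rho) - e1*e2
    Vmin = min(np.linalg.eigvalsh([[s11, s12], [s12, s22]]))  # Eq. (10)
    print(f"t = {t:.2f} us, V_min = {Vmin:.4f} (vacuum: 0.5)")
```

For \(\kappa\to 0\) the printed values approach the analytic squeezed-vacuum result \(V_{\text{min}}=\frac{1}{2}e^{-4\chi t}\).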
We use experimentally feasible parameters [48, 49, 50, 51]: \(\omega_{0}/2\pi=7.5\) GHz, \(\omega/2\pi=7.2\) GHz, \(g_{1}/2\pi=36\) MHz, \(g_{2}/2\pi=36.6\) MHz (corresponding to \(G=\frac{g_{1}g_{2}}{\Delta}=2\pi\times 4.4\) MHz), and \(\Omega_{1}=10\Omega_{2}=10^{2}G\). We assume that the qubit is initially in the excited state \(|e\rangle\) and the magnon mode is in the vacuum state, which is the case at low bath temperature, e.g., of tens of mK. Clearly, magnon squeezed states can be achieved and the two results (using the Hamiltonians (3) and (9)) agree well with each other, indicating that our derived effective Hamiltonian is a very good approximation. In Fig. 2(b), a smaller value of \(\Omega_{2}=5G\) is used, which just satisfies the condition \(\Omega_{2}\gg\frac{G}{2}\) for deriving the Hamiltonian (9). The deviation of the two curves becomes larger especially for longer evolution time. Nevertheless, the effective Hamiltonian is still a good approximation once the conditions listed in Sec. II are fulfilled. Figure 2 is obtained without considering any dissipation of the system. Therefore, the variance \(V_{\rm min}(X)\to 0\) when \(t\to\infty\). In what follows, we analyze the effect of the magnon and qubit dissipations on the degree of the squeezing. We adopt the Lindblad master equation [72] \[\frac{d}{dt}\rho = -i[H,\rho]+\kappa(\bar{n}_{m}+1)\mathcal{L}_{m}\rho+\kappa\bar{n} _{m}\mathcal{L}_{m^{\dagger}}\rho \tag{12}\] \[+\gamma(\bar{n}_{q}+1)\mathcal{L}_{\sigma^{-}}\rho+\gamma \bar{n}_{q}\mathcal{L}_{\sigma^{+}}\rho,\] where \[\mathcal{L}_{o}\rho=o\rho o^{\dagger}-\frac{1}{2}o^{\dagger}o\rho- \frac{1}{2}\rho o^{\dagger}o \tag{13}\] represents the Lindblad term for an arbitrary operator \(o\) (\(o=m,m^{\dagger},\sigma^{-},\sigma^{+}\)). \(\kappa\) (\(\gamma\)) is the dissipation rate of the magnon mode (the qubit), and \(\bar{n}_{m}\) (\(\bar{n}_{q}\)) is the mean thermal occupation number, with \(\bar{n}_{j}\simeq[\exp(\hbar\omega/k_{B}T)-1]^{-1}\) (\(j=m,q\)) and \(T\) being the bath temperature. In Fig. 3(a), \(V_{\rm min}(X)\) is plotted with the dissipation rates \(\kappa/2\pi=1\) MHz and \(\gamma/2\pi=20\) kHz and at temperature \(T=10\) mK [51] for two sets of drive conditions, which correspond to those used in Figs. 2(a) and 2(b), respectively. Compared with the no-dissipation case of Fig. 2, it is evident that the dissipations can significantly reduce the degree of the squeezing. Moreover, there is an optimal time for achieving the maximum squeezing, after which more noise enters the system through the dissipation channels and degrades the squeezing. To vividly show the magnon squeezing, we plot the Wigner function of the magnon mode in Fig. 3(b), corresponding to the point at \(t=300\) ns in the orange curve of Fig. 3(a) and the minimum variance of \(0.31\). We now analyze the optimal drive conditions for obtaining the magnon squeezing. Summarizing the conditions used for deriving the desired parametric amplification Hamiltonian (9), we have \(|\delta_{2}|=2\Omega_{1}\gg\frac{\Omega_{2}}{2}\gg\frac{G}{4}\). Once the frequency of the second drive is determined (i.e., \(\delta_{2}\) and \(\Omega_{1}=\frac{|\delta_{2}|}{2}\) are fixed), it puts an upper limit on the driving strength \(\Omega_{2}\) to get the optimal squeezing. A smaller \(\Omega_{2}\) is preferred since the degree of squeezing is proportional to \(\chi=\frac{G^{2}}{4\Omega_{2}}\).
However, \(\Omega_{2}\) cannot be too small because of the lower limit of \(\Omega_{2}\gg\frac{G}{2}\). This further sets an upper limit on the maximum squeezing that can be achieved in our protocol, since \(\chi\ll\frac{G}{2}\). The presence of an optimal \(\Omega_{2}\) is confirmed by Fig. 4. Figure 3: (a) Minimum variance \(V_{\rm min}(X)\) of the magnon quadrature versus \(t\) with \(\kappa/2\pi=1\) MHz and \(\gamma/2\pi=20\) kHz for \(\Omega_{1}=10\Omega_{2}=10^{2}G\) (blue dashed line) and for \(\Omega_{1}=10\Omega_{2}=50G\) (orange solid line). (b) Wigner function of the magnon mode corresponding to the orange line in (a) at \(t=300\) ns. The other parameters are the same as in Fig. 2. In the inset of Fig. 4, we plot the degree of squeezing versus \(\Omega_{1}\) for a fixed \(\Omega_{2}=3G\). It shows that there is also an optimal driving strength \(\Omega_{1}\). This is because, on the one hand, the driving strength must be strong enough to satisfy \(\Omega_{1}\gg\frac{\Omega_{2}}{4}\); while on the other hand, it cannot be too strong, as a large \(\Omega_{1}\) corresponds to a large detuning \(|\delta_{2}|=2\Omega_{1}\), which reduces the drive efficiency associated with the second drive and thus the degree of squeezing. It should be noted that the drive frequencies \(\omega_{1,2}\) are determined by \(\omega_{Q}\) and \(\Omega_{1}\), so according to Fig. 4, the optimal drive frequencies can also be determined. We evaluate the degree of squeezing in units of dB, which is defined as \(S=-10\log_{10}[V_{\text{min}}(X)/V_{\text{vac}}(X)]\), where \(V_{\text{vac}}(X)=\frac{1}{2}\) corresponds to the vacuum fluctuation. The squeezing is robust against dissipations of the system and bath temperature, as shown in Fig. 5. We plot in Fig. 5(a) the degree of squeezing \(S\) (dB) versus two dissipation rates \(\kappa\) and \(\gamma\) at low temperature \(T=10\) mK. Clearly, the squeezing is present for a wide range of both \(\kappa\) and \(\gamma\). In Fig. 5(b), we plot \(S\) versus \(T\) for \(\kappa/2\pi=1\) MHz and \(\gamma/2\pi=20\) kHz [51]. It shows that the squeezing is still present for temperatures up to \(\sim 330\) mK. ## IV Conclusions We present a scheme for preparing magnon squeezed states in a hybrid cavity-magnon-qubit system. The qubit is simultaneously driven by two microwave fields. By properly selecting the drive frequencies and strengths, an effective parametric amplification Hamiltonian is obtained for the magnon mode, which yields magnon quadrature squeezing. We provide the optimal drive conditions and analyze the validity of the model. The magnon squeezing is robust against dissipations and bath temperature, and the numerical results indicate that moderate squeezing can be achieved using fully realistic parameters from recent experiments [48; 49; 50; 51]. The squeezed state, with the noise below the vacuum fluctuation, is of a magnon mode consisting of more than \(10^{18}\) spins for a 1-mm-diameter YIG sphere and thus represents a macroscopic quantum state. The work may find potential applications in the study of macroscopic quantum phenomena, as well as in high-precision measurements based on magnons. ###### Acknowledgements. This work has been supported by National Key Research and Development Program of China (Grant No. 2022YFA1405200) and National Natural Science Foundation of China (Grant Nos. 12274274, 12174140, and 92265202).
2307.09761
Origin of Life Molecules in the Atmosphere After Big Impacts on the Early Earth
The origin of life on Earth would benefit from a prebiotic atmosphere that produced nitriles, like HCN, which enable ribonucleotide synthesis. However, geochemical evidence suggests that Hadean air was relatively oxidizing with negligible photochemical production of prebiotic molecules. These paradoxes are resolved by iron-rich asteroid impacts that transiently reduced the entire atmosphere, allowing nitriles to form in subsequent photochemistry. Here, we investigate impact-generated reducing atmospheres using new time-dependent, coupled atmospheric chemistry and climate models, which account for gas-phase reactions and surface-catalysis. The resulting H$_2$-, CH$_4$- and NH$_3$-rich atmospheres persist for millions of years, until hydrogen escapes to space. HCN and HCCCN production and rainout to the surface can reach $10^9$ molecules cm$^{-2}$ s$^{-1}$ in hazy atmospheres with a mole ratio of $\mathrm{CH_4} / \mathrm{CO_2} > 0.1$. Smaller $\mathrm{CH_4} / \mathrm{CO_2}$ ratios produce HCN rainout rates $< 10^5$ molecules cm$^{-2}$ s$^{-1}$, and negligible HCCCN. The minimum impactor mass that creates atmospheric $\mathrm{CH_4} / \mathrm{CO_2} > 0.1$ is $4 \times 10^{20}$ to $5 \times 10^{21}$ kg (570 to 1330 km diameter), depending on how efficiently iron reacts with a steam atmosphere, the extent of atmospheric equilibration with an impact-induced melt pond, and the surface area of nickel that catalyzes CH$_4$ production. Alternatively, if steam permeates and deeply oxidizes crust, impactors $\sim 10^{20}$ kg could be effective. Atmospheres with copious nitriles have $> 360$ K surface temperatures, perhaps posing a challenge for RNA longevity, although cloud albedo can produce cooler climates. Regardless, post-impact cyanide can be stockpiled and used in prebiotic schemes after hydrogen has escaped to space.
Nicholas F. Wogan, David C. Catling, Kevin J. Zahnle, Roxana Lupu
2023-07-19T05:44:10Z
http://arxiv.org/abs/2307.09761v1
# Origin of Life Molecules in the Atmosphere After Big Impacts on the Early Earth ###### Abstract The origin of life on Earth would benefit from a prebiotic atmosphere that produced nitriles, like HCN, which enable ribonucleotide synthesis. However, geochemical evidence suggests that Hadean air was relatively oxidizing with negligible photochemical production of prebiotic molecules. These paradoxes are resolved by iron-rich asteroid impacts that transiently reduced the entire atmosphere, allowing nitriles to form in subsequent photochemistry. Here, we investigate impact-generated reducing atmospheres using new time-dependent, coupled atmospheric chemistry and climate models, which account for gas-phase reactions and surface-catalysis. The resulting H\({}_{2}\)-, CH\({}_{4}\)- and NH\({}_{3}\)-rich atmospheres persist for millions of years, until hydrogen escapes to space. HCN and HCCCN production and rainout to the surface can reach \(10^{9}\) molecules cm\({}^{-2}\) s\({}^{-1}\) in hazy atmospheres with a mole ratio of CH\({}_{4}\)/CO\({}_{2}>0.1\). Smaller CH\({}_{4}\)/CO\({}_{2}\) ratios produce HCN rainout rates \(<10^{5}\) molecules cm\({}^{-2}\) s\({}^{-1}\), and negligible HCCCN. The minimum impactor mass that creates atmospheric CH\({}_{4}\)/CO\({}_{2}>0.1\) is \(4\times 10^{20}\) to \(5\times 10^{21}\) kg (570 to 1330 km diameter), depending on how efficiently iron reacts with a steam atmosphere, the extent of atmospheric equilibration with an impact-induced melt pond, and the surface area of nickel that catalyzes CH\({}_{4}\) production. Alternatively, if steam permeates and deeply oxidizes crust, impactors \(\sim 10^{20}\) kg could be effective. Atmospheres with copious nitriles have \(>360\) K surface temperatures, perhaps posing a challenge for RNA longevity, although cloud albedo can produce cooler climates. Regardless, post-impact cyanide can be stockpiled and used in prebiotic schemes after hydrogen has escaped to space. ## 1 Introduction Two essential aspects of life are a genome and catalytic reactions, so the presence of ribonucleotide molecular "fossils" in modern biochemistry (White, 1976; Goldman & Kacar, 2021) and the ability of RNAs to store genetic information and catalyze reactions have led to the hypothesis that RNA-based organisms originated early (Cech, 2012; Gilbert, 1986). This hypothesis proposes a stage of primitive life with RNA as a self-replicating genetic molecule that evolved by natural selection, which, at some point, became encapsulated in a cellular membrane and may have interacted with peptides from the beginning in the modified hypothesis of the RNA-Peptide World (e.g. Di Giulio, 1997; Muller et al., 2022). In any case, RNA must be produced abiotically on early Earth for such scenarios. Chemists have proposed several prebiotic schemes that require nitriles - hydrogen cyanide (HCN), cyanoacetylene (HCCCN), cyanamide (H\({}_{2}\)NCN), and cyanogen (NCCN) - to synthesize ribonucleobases, which are building blocks of RNA (Benner et al., 2020; Sutherland, 2016; Yadav et al., 2020). Abiotic synthesis of nitriles in nature is known to occur efficiently from photochemistry in reducing N\({}_{2}\)-CH\({}_{4}\) atmospheres (Zahnle, 1986; Tian et al., 2011). Indeed, Titan's atmosphere, composed of mostly N\({}_{2}\) and CH\({}_{4}\), makes HCN, HCCCN and NCCN (Strobel et al., 2009). Geochemical evidence does not favor a volcanic source for a CH\({}_{4}\)-rich prebiotic atmosphere. 
Redox proxies in old rocks indicate that Earth's mantle was only somewhat more reducing than it is today 4 billion years ago (Aulbach & Stagno, 2016; Nicklas, 2019). Therefore, volcanoes would have mostly produced relatively oxidized gases like H\({}_{2}\)O, CO\({}_{2}\) and N\({}_{2}\) instead of highly reduced equivalents, H\({}_{2}\), CH\({}_{4}\) and NH\({}_{3}\) (Holland, 1984; Catling & Kasting, 2017; Wogan et al., 2020). Thus, steady-state volcanism would have likely produced Hadean (4.56 - 4.0 Ga) air with CO\({}_{2}\) and N\({}_{2}\) as bulk constituents, whereas reducing gases, such as CH\({}_{4}\), would have been minor or very minor. However, Urey (1952) suggested that the prebiotic atmosphere was transiently reduced by large asteroid impacts. In more detail, Zahnle et al. (2020) argued that iron-rich impact ejecta could react with an impact-vaporized ocean to generate \(\rm H_{2}\) (\(\rm Fe+H_{2}O\leftrightarrow FeO+H_{2}\)). As the \(\rm H_{2}O\)- and \(\rm H_{2}\)-rich atmosphere cools, their chemical equilibrium modeling with parameterized quenching finds that \(\rm H_{2}\) can combine with atmospheric CO or \(\rm CO_{2}\) to generate \(\rm CH_{4}\). After several thousand years of cooling, the steam condenses to an ocean, leaving a \(\rm H_{2}\)-dominated atmosphere containing \(\rm CH_{4}\). Zahnle et al. (2020) used a photochemical box model to show that such a reducing atmosphere would have generated prebiotic molecules like HCN. The reducing atmospheric state terminates when \(\rm H_{2}\) escapes to space after millions of years. Model simplicity in Zahnle et al. (2020) left critical questions unanswered. Their model of a cooling steam post-impact atmosphere did not explicitly simulate chemical kinetics pertinent to Earth, which may inaccurately estimate the generated \(\rm CH_{4}\). Additionally, their photochemical box model did not include all relevant reactions or distinguish between different prebiotic nitriles (e.g. HCN and HCCCN). Finally, Zahnle et al. (2020) only crudely computed the climate of post-impact atmospheres, yet surface temperature is important for understanding the possible fate of prebiotic feedstock molecules. These molecules are needed to initiate prebiotic synthesis and must be available in the prebiotic environment. Here, we improve upon the calculations made in Zahnle et al. (2020) using more sophisticated and accurate models of post-impact atmospheres. We estimate post-impact \(\rm H_{2}\) production by considering reactions between the atmosphere and delivered iron, and equilibration between the atmosphere and impact-generated melt. Our model explicitly simulates the 0-D chemical kinetics of a cooling steam atmosphere, considering gas-phase reactions, as well as reactions occurring on nickel surfaces which catalyze \(\rm CH_{4}\) production, given that nickel is expected to be delivered by big impactors. After post-impact steam condenses to an ocean, we simulate the long-term evolution of a reducing atmosphere with a 1-D photochemical-climate model, quantifying HCN and HCCCN production and the climate in which they are deposited on Earth's surface. Additionally, we discuss the possible fate and preservation of prebiotic molecules in ponds or lakes on Hadean land. Finally, we discuss how "lucky" primitive life was if created by post-impact molecules, given a need to not be subsequently annihilated by further impactors. ## 2 Methods We organize our investigation of post-impact Hadean atmospheres in three phases of atmospheric evolution depicted in Figure 1.
Below, we briefly describe our numerical models for each phase; complete descriptions can be found in the Appendix. In Phase 1, an impactor collides with Earth, vaporizing the ocean, and \(\rm H_{2}\) is generated by reactions between the atmosphere and iron-rich impact ejecta, and atmospheric reactions with an impact-produced melt pond. Our model of this phase (Appendix A) accounts for \(\rm H_{2}\) generation from impactor iron by assuming each mole of iron delivered to the atmosphere removes one mole of oxygen. For example, \(\rm Fe\) can sequester O atoms from steam: \[\rm Fe+H_{2}O\to FeO+H_{2} \tag{1}\] Simulations that consider reactions between the atmosphere and impact-melted crust follow a similar procedure to the one described in Itcovitz et al. (2022). Our model requires that the atmosphere and melt have the same oxygen fugacity. The oxygen fugacity of the melt is governed by relative amounts of ferric and ferrous iron (Kress and Carmichael, 1991): \[0.5\rm O_{2}+2FeO\leftrightarrow Fe_{2}O_{3} \tag{2}\] We assume that oxygen atoms can flow from the atmosphere into the melt (or vice versa), and use an equilibrium constant for Reaction 2 from Kress and Carmichael (1991). Finally, we compute a chemical equilibrium state of the atmosphere (or atmosphere-melt system) at 1900 K using thermodynamic data from NIST for 96 gas-phase species (Appendix C.2). The result gives the estimated amount of \(\rm H_{2}\) generated by an impact. In Phase 2 of Figure 1, the steam atmosphere cools for thousands of years, generating \(\rm CH_{4}\) and \(\rm NH_{3}\), and eventually, the steam condenses to an ocean. We simulate these events with the 0-D kinetics-climate box model fully described in Appendix B. The gas-phase model tracks 96 species connected by 605 reversible reactions (Appendix C.2), but we do not account for photolysis. The model also optionally accounts for reactions that occur on nickel surfaces using the chemical network described in Schmider et al. (2021). As discussed later in Section 3.2, nickel is potentially delivered to Earth's surface by impacts and may catalyze methane production. In the model, atmospheric temperature changes as energy is radiated to space and is modulated by latent heat released from water condensation. We estimate the net energy radiated to space by using a parameterization of calculations performed with our radiative transfer code (Appendix D). During Phase 3, photochemistry generates HCN and other prebiotic molecules. Hydrogen in the H\({}_{2}\)-dominated atmosphere escapes to space over millions of years, ushering in the return of a CO\({}_{2}\) and N\({}_{2}\) atmosphere. We use our time-dependent photochemical-climate model, _Photochem_ (Appendix C), to simulate this phase of atmospheric evolution. The model solves a system of partial differential equations approximating molecular transport in the vertical direction and the effect of chemical reactions, photolysis, condensation, rainout in droplets of water, and hydrogen atmospheric escape. Specifically, the model rains out haze particles and HCN among a few other atmospheric species listed in Appendix C.2. We simulate diffusion-limited and hydrodynamic hydrogen escape using Equation (47) in Zahnle et al. (2020). Our reaction network (Appendix C.2) acceptably reproduces the steady-state composition of Earth and Titan (Appendix Figure A9). When reproducing the chemistry of Earth and Titan we fix the temperature profile to measured values, rather than self-consistently compute the climate.
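The quenching behavior that is central to the Phase 2 box model, i.e., abundances freezing once cooling outpaces chemistry, can be illustrated with a minimal toy sketch. This is our illustration with invented Arrhenius parameters and a prescribed cooling curve, not the paper's 96-species, 605-reaction network:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy quenching demo: a reversible reaction A <=> B with an Arrhenius forward
# rate, integrated while the gas cools. All numbers below are invented.
def T_of_t(t):
    return 2000.0 * np.exp(-t / 500.0) + 300.0   # prescribed cooling, K vs years

def rhs(t, y):
    A, B = y
    T = T_of_t(t)
    kf = 1e6 * np.exp(-25000.0 / T)   # forward rate (yr^-1), slows as T drops
    Keq = np.exp(4000.0 / T - 1.0)    # equilibrium increasingly favors B at low T
    kr = kf / Keq
    r = kf * A - kr * B
    return [-r, r]

# Stiff problem, so use a BDF integrator (the full model uses CVODE BDF)
sol = solve_ivp(rhs, (0.0, 5000.0), [1.0, 0.0], method="BDF", dense_output=True)
for t in [0, 250, 500, 1000, 5000]:
    A, B = sol.sol(t)
    print(f"t = {t:5d} yr, T = {T_of_t(t):7.1f} K, B fraction = {B:.3f}")
# Early on the composition tracks equilibrium; once the reaction timescale
# exceeds the cooling timescale, B freezes ("quenches") at its high-T value,
# even though low-T equilibrium would favor nearly complete conversion.
```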
We evolve the model equations accurately over time using the CVODE Backward Differentiation Formula (BDF) method (Hindmarsh et al., 2005). As the atmosphere evolves, we compute self-consistent temperature structures using the radiative transfer code described and validated in Appendix D. Unless otherwise noted in the text, our climate calculations use the opacities in Table 1, which is a subset of the opacities available in our radiative transfer code (Appendix Table A2). Climate calculations do not account for the radiative effects of clouds or hazes. However, our UV radiative transfer for computing photolysis rates does account for haze absorption and scattering. ## 3 Results The following sections simulate the three post-impact phases of atmospheric evolution shown in Figure 1 for impactor masses between \(10^{20}\) and \(10^{22}\) kg (360 to 1680 km diameter) under various modeling assumptions. ### Phase 1: Reducing the steam-generated atmosphere with impactor iron Within days, a massive asteroid impact would leave the Hadean Earth with a global \(\sim\)2000 K rock and iron vapor atmosphere, the iron derived from the impactor's core (Itcovitz et al., 2022). In the following months to years, energy radiated downward from the silicates would vaporize a large fraction of the ocean, adding steam to the atmosphere (Sleep et al., 1989). At this point, steam should rapidly react with iron to generate H\({}_{2}\). Eventually, the iron vapor and then rock would rain out, leaving behind a steam-dominated atmosphere containing H\({}_{2}\), as well as CO\({}_{2}\) and N\({}_{2}\) from the pre-impact atmosphere. The sequence of metal followed by silicate condensation with falling temperature is loosely analogous to that of the well-known condensation sequence of the solar nebula. Furthermore, the massive impact would generate a melt pool on Earth's surface inside the impact crater, which may contain reducing impact-derived iron. The atmosphere and melt pool could react to a redox-equilibrium state. This could add or sequester H\({}_{2}\) from the atmosphere, depending on whether the melt was more or less reducing than the atmosphere (Itcovitz et al., 2022). Recently, Itcovitz et al. (2022) used a smoothed-particle hydrodynamics (SPH) code with \(0.5\times 10^{6}\) - \(3\times 10^{6}\) particles of 150 - 250 km diameter to estimate the amount of H\({}_{2}\) generated as these processes unfold under several different impact scenarios on the Hadean Earth. In their fiducial case (i.e. their "Model 1A"), they assume that 100% of iron delivered by an impactor is available to react and reduce a post-impact steam atmosphere. In another scenario, they assume that only \(\sim\)15 - 30% of impactor iron reacts with the steam atmosphere based on their SPH simulations (their "Model 1B") (Citron and Stewart, 2022). For both cases, they also consider equilibration between the atmosphere and a melt pool (their "Model 2", "Model 3A" and "Model 3B").
In their simulations, the melt pool is extremely reducing or more oxidizing depending on whether they assume it contains a fraction of the impactor's iron, and they use SPH models to predict the amount of iron accreted to the melt pool (Citron and Stewart, 2022). \begin{table} \begin{tabular}{l|l|l} \hline \hline Line absorption & Continuum CIA absorption & Rayleigh Scattering \\ \hline H\({}_{2}\)O, CO\({}_{2}\), CH\({}_{4}\) & CO\({}_{2}\)-CO\({}_{2}\), N\({}_{2}\)-N\({}_{2}\), CH\({}_{4}\)-CH\({}_{4}\), H\({}_{2}\)-CH\({}_{4}\), H\({}_{2}\)-H\({}_{2}\), H\({}_{2}\)O-H\({}_{2}\)O, H\({}_{2}\)O-N\({}_{2}\) & \\ \hline \end{tabular} \end{table} Table 1: Opacities used in climate modeling Overall, they conclude that melt-atmosphere equilibration generates about as much H\({}_{2}\) as their fiducial case as long as the iron delivered to the melt-atmosphere system can equilibrate. However, if iron delivered to the melt pool sinks into Earth and cannot react with the atmosphere, then approximately 2 - 10 times less H\({}_{2}\) is produced compared to their fiducial scenario (see the erratum in Itcovitz et al. (2022)). Itcovitz et al. (2022) consider impactors between \(2\times 10^{21}\) and \(2\times 10^{22}\) kg, and assume the pre-impact Earth has 1.85 oceans of water, 100 bars CO\({}_{2}\) and 2 bars of N\({}_{2}\). However, we investigate impacts as small as \(10^{20}\) kg, and our nominal model (Table 2) assumes only 0.5 bars of pre-impact CO\({}_{2}\), motivated by models of the Hadean carbonate-silicate cycle (Kadoya et al., 2020) and assuming little mantle-hosted carbonate is vaporized. Therefore, we use a similar model (Appendix A) to the one described in Itcovitz et al. (2022) to predict the post-impact H\({}_{2}\) for our alternative model assumptions (Table 2) and impactor sizes. Figure 2 shows the results. Our calculations give two end-member scenarios for impact H\({}_{2}\) production which we consider for subsequent calculations in this article. The more optimistic case assumes that 100% of the impactor's iron reacts with an atmosphere that is chemically isolated from a melt pool ("Model 1A" in Figure 2). Following Zahnle et al. (2020), we adopt this scenario as our nominal model throughout the main text. This assumption produces a similar amount of H\({}_{2}\) as an atmosphere-melt system that retains most of the impactor's iron (e.g. "Model 2" in Figure 2), which is consistent with Itcovitz et al. (2022). The "Model 2" calculation assumes the melt pool has an initial oxygen fugacity of \(\Delta\)FMQ-2.3, which is appropriate for a peridotite melt (Itcovitz et al., 2022).1 However, our results are not sensitive to this assumption because, for "Model 2", initial melt oxygen fugacities between \(\Delta\)FMQ and \(\Delta\)FMQ-4 change the generated H\({}_{2}\) by a factor of at most \(\sim 1.3\). Footnote 1: FMQ is the fayalite-magnetite-quartz redox buffer. See Chapter 7 in Catling and Kasting (2017) for a discussion of redox buffers. The less-optimistic case for H\({}_{2}\) production is "Model 1B" in Figure 2, which assumes that only a fraction of the impactor iron reacts with an atmosphere (\(\sim 15\%\) to \(\sim 30\%\)), and that the latter does not react with a melt pool. We compute the fraction of available iron by extrapolating SPH simulations of impacts traveling at twice Earth's escape velocity and colliding with Earth at a \(45^{\circ}\) angle (Appendix A), which is the most probable angle (Citron and Stewart, 2022).
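The "Model 1A" stoichiometric upper bound is simple enough to sketch in a few lines (our back-of-envelope illustration; Earth's surface area is the only constant we introduce). One mole of delivered Fe yields at most one mole of H\({}_{2}\) via Reaction 1:

```python
# Upper-bound H2 from impactor iron ("Model 1A"): each mole of Fe delivered
# removes one mole of O from steam (Reaction 1), yielding one mole of H2.
M_imp = 1.0e20                 # impactor mass, kg
x_Fe = 0.33                    # iron mass fraction of the impactor (Table 2)
M_Fe = 55.85e-3                # molar mass of iron, kg/mol
A_earth = 5.1e18               # Earth's surface area, cm^2

mol_Fe = M_imp * x_Fe / M_Fe   # moles of iron delivered
N_H2 = mol_Fe / A_earth        # column of H2 produced, mol/cm^2
print(f"N_H2 ~ {N_H2:.1e} mol/cm^2")
```

For a \(10^{20}\) kg impactor this gives \(\sim 1.2\times 10^{2}\) mol cm\({}^{-2}\), matching the Model 1A value quoted in Section 3.2 below; the full model additionally equilibrates the 96-species gas mixture.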
Most simulations shown in the main text have a complementary figure in the Appendix that makes this alternative pessimistic assumption regarding post-impact H\({}_{2}\) generation. ### Phase 2: The cooling post-impact steam atmosphere Figure 1: The three phases of atmospheric evolution after a large asteroid impact on the Hadean Earth. In Phase 1, the impactor vaporizes the ocean and heats up the atmosphere. Iron delivered by the impactor reacts with hot steam to make H\({}_{2}\). H\({}_{2}\) is also modulated by equilibration between the atmosphere and an impact-generated melt pond. In Phase 2, as the steam-rich atmosphere cools for thousands of years, H\({}_{2}\) reacts with CO\({}_{2}\) to make atmospheric CH\({}_{4}\). Ultimately, the steam condenses to an ocean. Finally, in Phase 3, N\({}_{2}\) and CH\({}_{4}\) photochemistry generates HCN and other prebiotic nitriles. The H\({}_{2}\)-dominated atmosphere escapes to space over millions of years, causing the return of a more oxidizing N\({}_{2}\) and CO\({}_{2}\) atmosphere. After reactions between impact-derived iron and steam produce H\({}_{2}\), the atmosphere would radiate at a rate determined by the optical properties of water vapor (Zahnle et al., 2020). Chemical reactions would initially be rapid, forcing the whole atmosphere to chemical equilibrium. Methane is thermodynamically preferred at lower temperatures (e.g., more methane is preferred in a gas at 1000 K than in a gas at 1500 K), so it should become more abundant as the atmosphere cools. Eventually the atmosphere would reach a temperature where the reactions producing methane would be extremely sluggish compared to the rate of atmospheric cooling. At this point, the methane abundance would freeze, or quench. Ammonia would exhibit the same behavior as methane by initially rising in abundance then quenching when kinetics become slow. After several thousand years, water vapor condenses and rains out of the atmosphere to form an ocean. We use the 0-D kinetics-climate box model described in Appendix B to simulate these events. By simulating each elementary chemical reaction, the model automatically computes methane and ammonia quenching as the atmosphere cools and temperature-dependent reactions slow. We first consider gas-phase kinetics, and later we will also consider nickel-surface kinetics. Figure 3 shows our model applied to a \(1.58\times 10^{21}\) kg (\(\sim 900\) km diameter) impactor. As the steam cools, ammonia quenches when the atmosphere is \(\sim 1200\) K, followed by CH\({}_{4}\) quenching at \(\sim 950\) K. After quenching, nearly half of the total carbon in the atmosphere exists as CH\({}_{4}\). After 4200 years, the steam has largely rained out to form an ocean, leaving behind a H\({}_{2}\)-dominated atmosphere containing CH\({}_{4}\) and NH\({}_{3}\). NH\({}_{3}\) is soluble in water, so a fraction should be removed from the atmosphere by dissolution in the newly formed ocean; however, our simulations (e.g. Figure 3) do not account for this effect. Figure 4 shows predicted atmospheric composition at the end of the steam atmosphere (e.g. at 4200 years in Figure 3) as a function of impactor mass. The calculations use gas-phase reactions, and our nominal model parameters (Table 2), including the assumption that 100% of the iron delivered by the impactor reacts with the steam atmosphere to make H\({}_{2}\).
For example, a \(10^{20}\) kg impactor generates \(1.2\times 10^{2}\) H\({}_{2}\) moles cm\({}^{-2}\), which would have a partial pressure of 1.2 bars if the atmosphere did not contain water vapor. A \(10^{22}\) kg impactor generates \(1.1\times 10^{4}\) H\({}_{2}\) moles cm\({}^{-2}\), which would have a "dry" partial pressure of 23.8 bars.2 We find that most of the CO\({}_{2}\) in the atmosphere is converted to CH\({}_{4}\) for impactors larger than \(1.6\times 10^{21}\) kg (\(\sim 900\) km diameter), and that bigger impacts generate more NH\({}_{3}\), e.g., a \(10^{22}\) kg impactor makes 0.013 "dry" bars of NH\({}_{3}\). Reduced species like CH\({}_{4}\) and NH\({}_{3}\) are thermodynamically preferred in the thick H\({}_{2}\) atmospheres generated by bigger impacts. Large impacts generate large amounts of hydrogen because they deliver more iron, which more thoroughly reduces the atmosphere. Footnote 2: Partial pressures depend on the mean molecular weight of the atmosphere. The \(10^{22}\) kg simulation in Figure 4 has 65.0 bars H\({}_{2}\) before ocean vapor condenses, and would have 23.8 bars H\({}_{2}\) if there was no water vapor in the atmosphere. Both scenarios have the same number of H\({}_{2}\) molecules in the atmosphere, but have different partial pressures because of dissimilar mean molecular weights. To avoid ambiguity, we occasionally report partial pressures in "dry" bars, which is the partial pressure of a gas if the atmosphere had no water vapor. Figure 2: Post-impact H\({}_{2}\) generation as a function of impactor mass under different modeling assumptions. Models 1A, 1B, 2 and 3B are identical to those described in Figure 1 of Itcovitz et al. (2022). The simulation's pre-impact volatile inventories, impact angle, and impact velocity are listed in Table 2. In Model 1A, all iron delivered by an impact reacts with steam to produce H\({}_{2}\). The resulting atmosphere does not equilibrate with an impact-generated melt pool. Model 1B assumes that a fraction (\(\sim 15\%\) to \(\sim 30\%\)) of impactor iron reduces the steam atmosphere based on SPH simulations (Citron and Stewart, 2022), and that the atmosphere is chemically isolated from a melt pool. Model 2 is like Model 1A while also including post-impact equilibration with a melt pool with a redox state of \(\Delta\)FMQ-2.3 to represent peridotite (Itcovitz et al., 2022). Model 3B assumes that a fraction (\(\sim 15\%\) to \(\sim 30\%\)) of impactor iron reacts with the steam atmosphere based on SPH simulations, and includes melt-atmosphere redox equilibration with a magma pool initially at \(\Delta\)FMQ-2.3. As stated in Section 3.1, we nominally assume Model 1A throughout the main text calculations. We also include simulations in the appendix that instead adopt Model 1B, which we consider to be a plausible lower bound for post-impact H\({}_{2}\) generation. \begin{table} \begin{tabular}{l|l|l} \hline \hline Parameter & symbol & value \\ \hline Pre-impact ocean inventory & \(N_{\rm H_{2}O}\) & \(1.5\times 10^{4}\) mol cm\({}^{-2}\) (i.e. 1 ocean)\({}^{\rm a}\) \\ Pre-impact CO\({}_{2}\) inventory & \(N_{\rm CO_{2}}\) & 12.5 mol cm\({}^{-2}\) (i.e. "0.5 bars")\({}^{\rm b}\) \\ Pre-impact N\({}_{2}\) inventory & \(N_{\rm N_{2}}\) & 36 mol cm\({}^{-2}\) (i.e. "1 bar")\({}^{\rm c}\) \\ Impactor mass & \(M_{\rm imp}\) & 10\({}^{20}\) - 10\({}^{22}\) kg \\ Iron mass fraction of the impactor & \(m_{\rm Fe,imp}\) & 0.33 \\ Fraction of iron that reacts with atmosphere & \(X_{\rm Fe,atmos}\) & 1.0\({}^{\rm d}\) \\ Impact angle & - & 45\({}^{\circ}\) \\ Impact velocity relative to Earth & - & 20.7 km s\({}^{-1}\) \\ Eddy diffusion coefficient\({}^{\rm e}\) & \(K_{zz}\) & 10\({}^{6}\) cm\({}^{2}\) s\({}^{-1}\) \\ Aerosol particle radius\({}^{\rm e}\) & - & 0.1 \(\mu\)m \\ Troposphere relative humidity & \(\phi\) & 1 \\ Surface albedo & \(A_{s}\) & 0.2 \\ Temperature of the stratosphere & T\({}_{\rm strat}\) & 200 K \\ Rainfall rate & \(R_{\rm rain}\) & 1.1\(\times 10^{17}\) molecules cm\({}^{-2}\) s\({}^{-1}\) (Modern Earth's value) \\ HCN deposition velocity\({}^{\rm f}\) & \(v_{\rm dep,HCN}\) & \(7\times 10^{-3}\) cm s\({}^{-1}\) \\ HCCCN deposition velocity\({}^{\rm g}\) & \(v_{\rm dep,HCCCN}\) & \(7\times 10^{-3}\) cm s\({}^{-1}\) \\ \hline \end{tabular} \({}^{\rm a}\) The source and inventory of surface H\({}_{2}\)O throughout the Hadean is debated (Miyazaki & Korenaga, 2022; Korenaga, 2021; Johnson & Wing, 2020), as is even the amount of water present on the modern Earth (e.g., Lecuyer et al. (1998) estimates 0.3-3 oceans in Earth's mantle). Our nominal case of one modern ocean is one possibility among several. \({}^{\rm b}\) Based on Hadean carbon cycle modeling in Kadoya et al. (2020). \({}^{\rm c}\) Based on Figure 5 in Catling & Zahnle (2020). \({}^{\rm d}\) This is the "Model 1A" scenario for H\({}_{2}\) production described near the end of Section 3.1 and in Figure 2. \({}^{\rm e}\) Assumed to be constant as a function of altitude. \({}^{\rm f}\) Estimated based on the HCN hydrolysis rate in the ocean (Appendix C.4). \({}^{\rm g}\) Assumed to be the same as HCN. \end{table} Table 2: Nominal model assumptions Figure 3: A kinetics-climate simulation of a cooling steam atmosphere caused by a \(1.58\times 10^{21}\) kg impactor. The model uses the Table 2 nominal parameters. The top panel is surface temperature and the bottom panel shows atmospheric composition. The Figure 4 calculations might underestimate the CH\({}_{4}\) produced in the post-impact atmosphere because they ignore reactions occurring on nickel surfaces that can catalyze CH\({}_{4}\) generation. If the impactors that struck the Earth during the Hadean resembled enstatite chondrite or carbonaceous chondrite composition then they would have contained 1% - 2% nickel (Lewis, 1992, Table 15). This nickel would have coexisted with the rock and iron vapor atmosphere that lasted months to years following a massive impact (Phase 1 in Figure 1). Metals along with silicates would have rained out as spherules covering the entire planet (Genda et al., 2017). As the impact-generated steam cooled, chemical reactions catalyzing CH\({}_{4}\) production could have occurred on nickel surfaces in the bed of spherules (Schmider et al., 2021). These surface reactions could lower the quench temperature of CH\({}_{4}\), causing more of the gas to be produced. To estimate the effect of nickel catalysis on CH\({}_{4}\) production, we use our kinetics-climate box model (Appendix B) with the nickel-surface reaction network developed by Schmider et al. (2021). The network is based on quantum chemistry calculations and about a dozen experiments from the literature. Our micro-kinetics approach is distinct from the empirical one taken by, e.g. Kress and McKay (2004), because our model tries to capture each elementary step of catalysis, rather than use a parameterization that is specific to certain experimental conditions. Figure 5 shows the quenched methane abundance as a function of impactor mass predicted by our model that includes nickel catalysts.
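Footnote 2's "dry" partial-pressure bookkeeping can be made concrete with a short arithmetic sketch (our illustration; we neglect CH\({}_{4}\) production and minor species, so the numbers are only approximate):

```python
# Convert column inventories (mol/cm^2) to "dry" partial pressures:
# P_tot = (column mass) * g, and P_i = (mole fraction of i) * P_tot.
g = 981.0                                         # cm/s^2
cols = {"H2": 1.2e2, "N2": 36.0, "CO2": 12.5}     # 10^20 kg impact; Table 2
mu = {"H2": 2.016, "N2": 28.014, "CO2": 44.01}    # molar masses, g/mol

mass_col = sum(cols[s] * mu[s] for s in cols)     # g/cm^2
P_tot = mass_col * g / 1.0e6                      # bar (1 bar = 1e6 dyn/cm^2)
N_tot = sum(cols.values())
for s in cols:
    print(f"{s}: {cols[s]/N_tot * P_tot:.2f} bar")
```

H\({}_{2}\) comes out near 1.2-1.3 bar, close to the value quoted above, and the exercise shows why a partial pressure depends on the mean molecular weight of the whole mixture.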
The amount of CH\({}_{4}\) generated depends strongly on the amount of available nickel surface area. Nickel areas bigger than 0.1 cm\({}^{2}\) nickel / cm\({}^{2}\) Earth permit more CH\({}_{4}\) production compared to our gas-phase-only model. Assuming a nickel area of 1000 cm\({}^{2}\) nickel / cm\({}^{2}\) Earth, a Vesta-size impactor (\(2.6\times 10^{20}\) kg, 500 km diameter) could convert most CO\({}_{2}\) in the pre-impact atmosphere to CH\({}_{4}\). Unfortunately, a precise nickel surface area is hard to estimate. The correct value depends on how the rock, iron and nickel spherules mix and precipitate to the surface, and furthermore, how effectively the atmosphere can diffuse through and react on exposed nickel. We do not attempt to compute these effects here, and instead estimate possible upper bounds. Consider a \(2.6\times 10^{20}\) kg impactor (Vesta-sized) of enstatite chondrite composition, containing 2% by mass Ni (Lewis, 1992). If all this nickel is gathered into 1 mm spheres, a plausible droplet size according to Genda et al. (2017), then the total nickel surface area is \(3.4\times 10^{3}\) cm\({}^{2}\) nickel / cm\({}^{2}\) Earth. An impactor ten times more massive would deliver ten times more nickel, resulting in an upper-bound Ni area that is one order of magnitude larger. Significantly smaller nickel particles are conceivable. There is experimental support for the formation of ultra-fine \(<300\) nm particles in the wake of impacts colliding with an ocean (Furukawa et al., 2007). For a Vesta-sized impactor, collecting all nickel into 100 nm particles gives a nickel area six orders of magnitude larger than the 1 mm case - \(3.4\times 10^{9}\) cm\({}^{2}\) nickel / cm\({}^{2}\) Earth. Overall, the larger nickel areas shown in Figure 5 may be within the realm of possibility. Alternatively, nickel might be buried by rock and iron when these materials condense out of the post-impact atmosphere, so that \(<0.1\) cm\({}^{2}\) nickel / cm\({}^{2}\) Earth is available for catalysis. In this case, gas-phase kinetics would determine the conversion of CO\({}_{2}\) to CH\({}_{4}\). Figure 4: Predicted atmospheric composition as a function of impactor mass after steam has condensed to an ocean. We use our nominal modeling assumptions (Table 2), and also use gas-phase kinetics. Most CO\({}_{2}\) is converted to CH\({}_{4}\) for impactors larger than \(1.6\times 10^{21}\) kg. Figures 4 and 5 optimistically assume that all iron delivered by the impactor reacts with steam to make H\({}_{2}\), however, this may not be the case (see Section 3.1). Therefore, in Appendix Figures A2 and A3 we recalculate Figures 4 and 5, but assume that only a fraction of the impactor's iron reduces the steam atmosphere by extrapolating SPH simulations of impacts ("Model 1B" in Figure 2). The resulting H\({}_{2}\), CH\({}_{4}\), and NH\({}_{3}\) production appear similar, except shifted by a factor of \(\sim 5\) to larger impactors. The results are shifted by this amount because SPH simulations suggest approximately 1/5 of impactor iron is delivered to the atmosphere, while the rest is either embedded in Earth or ejected to space. We consider these supplementary calculations lower bounds for impactor-generated CH\({}_{4}\) and NH\({}_{3}\). ### Phase 3: Long-term photochemical-climate evolution Several thousand years after a massive impact, the steam-dominated atmosphere would condense to an ocean leaving behind a H\({}_{2}\)-dominated atmosphere containing CH\({}_{4}\) and NH\({}_{3}\) (e.g.
at 4200 years in Figure 3). The reducing atmospheric state should persist for millions of years until hydrogen escapes (Zahnle et al., 2020). We simulate the long-term evolution of this hydrogen-rich atmosphere using a coupled one-dimensional photochemical-climate model (Appendix C). Figure 6 shows our model applied to the atmosphere following a \(1.58\times 10^{21}\) kg (\(\sim 900\) km diameter) impactor. We assume a pre-impact atmosphere with 1 bar N\({}_{2}\) and 0.5 bars of CO\({}_{2}\), and simulate the cooling steam atmosphere with our kinetics-climate model (Section 3.2). Next, we use the end of the steam atmosphere simulation as initial conditions for our 1-D photochemical-climate model. We find that N\({}_{2}\) and CH\({}_{4}\) photochemistry generates HCN in a hazy Titan-like atmosphere for about one million years until it is halted by hydrogen escape to space. In this model, the dominant channel producing HCN is N + \({}^{3}\)CH\({}_{2}\)\(\rightarrow\) HCN + H, where \({}^{3}\)CH\({}_{2}\) is the ground (triplet) state of the methylene radical derived from methane photolysis. There are two other important paths. The first is \(\rm N+CH\rightarrow CN+H\) followed by \(\rm H_{2}+CN\rightarrow HCN+H\), and the second is \(\rm N+CH_{3}\rightarrow H+H_{2}CN\) followed by \(\rm H_{2}CN+H\rightarrow HCN+H_{2}\). In all pathways, hydrocarbon radicals (e.g., \({}^{3}\)CH\({}_{2}\) and CH\({}_{3}\)) are sourced from photolyzed CH\({}_{4}\) and atomic N is derived from photolyzed N\({}_{2}\), which both occur at high altitudes (\(p<10^{-5}\) bar, Appendix Figure A5). The largest chemical loss of HCN is photolysis followed by \(\rm N+CN\rightarrow N_{2}+C\). Other significant losses are paths that form HCCCN haze aerosols. HCN production and loss in our model are comparable to pathways discussed in similar studies (Zahnle, 1986; Tian et al., 2011; Rimmer and Rugheimer, 2019). Figure 5: The effect of nickel catalysts on post-impact methane production. The calculations use the Table 2 model parameters and the Schmider et al. (2021) surface reaction network. Ni areas larger than 0.1 cm\({}^{2}\) nickel / cm\({}^{2}\) Earth generate more methane than our model that uses gas-phase reactions, e.g., Figure 4. We determined
\[\mathrm{CH_{3}+H+M\to CH_{4}+M} \tag{3}\] Zahnle et al. (2020) did not account for Reaction 3. The lifetime of cyanide production is therefore instead determined by the timescale of hydrogen escape to space. Significant hydrogen escape permits the destruction of most atmospheric CH\({}_{4}\) because Reaction 3 becomes inefficient, which in turn ceases CH\({}_{4}\)-driven HCN production. In Figure 6, HCCCN is primarily destroyed by photolysis and produced by the following reaction from acetylene and the cyanide radical, \[\mathrm{C_{2}H_{2}+CN\to HCCCN+H} \tag{4}\] A fraction of produced HCCCN reacts to form aerosols via \(\mathrm{C_{4}H+HCCCN\to polymer}\) following Lavvas et al. (2008a). These polymers fall and mix toward the surface where they rainout in droplets of water at a rate of \(\sim 10^{8}\) molecules cm\({}^{-2}\) s\({}^{-1}\). Most gas-phase HCCCN is either destroyed by photolysis or incorporated into aerosols, causing vanishingly small surface HCCCN gas pressures (\(<10^{-16}\) bar). Figure 6: Simulated composition and climate of the Hadean atmosphere after a \(1.58\times 10^{21}\) kg impactor that produces 7.0 bars of \(\mathrm{H_{2}}\) once vaporized ocean water condenses. We use the Table 2 model parameters. The blue shaded region labeled “hot steam atmosphere”, also called Phase 2 in Figure 1, is simulated by the kinetics-climate model described in Appendix B. After this time-period, during Phase 3 of a post-impact atmosphere, we evolve the atmosphere with 1-D photochemical-climate model (Appendix C), which maintains 0.018 bar of CH\({}_{4}\) between \(4\times 10^{3}\) and \(\sim 10^{6}\) years. Dashed lines are referenced to the right-hand axis. “HCN rainout” is HCN molecules raining out in droplets of water. “HCCCN haze rainout” is the rainout rate of HCCCN incorporated into particles formed from the reaction \(\mathrm{C_{4}H+HCCCN\to polymer}\). CH\({}_{4}\) and N\({}_{2}\) photochemistry generates HCN and HCCCN for about one million years until \(\mathrm{H_{2}}\) escapes to space. Our model approximates haze formation with the following three reactions: \(\rm C_{2}H+C_{4}H_{2}\topolymer+H\), \(\rm H_{2}CN+HCN\to polymer\), and \(\rm C_{4}H+HCCCN\to polymer\). At 14,200 years in Figure 6, the first pathway dominates, forming \(\sim 1.8\times 10^{13}\) g haze yr\({}^{-1}\). At this same point in time the second and third pathways produce \(9\times 10^{7}\) g yr\({}^{-1}\) and \(2.8\times 10^{12}\) g yr\({}^{-1}\), respectively. The total haze production rate (\(2.1\times 10^{13}\) g yr\({}^{-1}\)) is comparable to values estimated by Trainer et al. (2006) for the early Earth based on laboratory experiments. Haze particles fall and rainout to the surface where they can hydrolyze and participate in prebiotic chemistry (Neish et al., 2010; Poch et al., 2012). In Figure 6, impact-generated ammonia persists for nearly \(10^{5}\) years. NH\({}_{3}\) is primarily destroyed by photolysis, but then recombines from reactions with hydrogen: \[\rm NH+H_{2}+M\to NH_{3}+M \tag{5}\] \[\rm NH_{2}+H+M\to NH_{3}+M \tag{6}\] Reactions 5 and 6 are relatively efficient in a hydrogen-rich atmosphere. Ammonia photolysis primarily occurs at the \(10^{-3}\) bar altitude, while haze is largely produced above the \(10^{-5}\) bar altitude. Therefore, haze particles partially shield ammonia from photolysis, extending the NH\({}_{3}\) lifetime (Sagan & Chyba, 1997). Our model assumes the haze particles are perfect spheres with optical properties governed by Mie theory. 
Observations of Titan's haze have revealed that hydrocarbon haze particles have a fractal structure which absorb and scatter UV more effectively than Mie spheres (Wolf & Toon, 2010). Therefore, our model likely overestimates NH\({}_{3}\) photolysis in post-impact atmospheres. Figure 6 assumes that all NH\({}_{3}\) is in the atmosphere and that it does not rainout, but the gas is highly soluble in water and should dissolve in the ocean where it hydrolyzes to ammonium, NH\({}_{4}^{+}\). Later in Section 4.4.2, we show that for an atmosphere with 0.3 mol cm\({}^{-2}\) NH\({}_{3}\) and a 371 K ocean at pH = 7, 4% of NH\({}_{3}\) would persist in the atmosphere, while the rest is dissolved in the ocean. For a hotter 505 K atmosphere with 6.8 mol cm\({}^{-2}\) NH\({}_{3}\), only 20% of ammonia dissolves in the ocean because solubility decreases with increasing temperature (Section 4.4.2). Ammonia dissolution in the ocean would protect it from photolysis perhaps lengthening the lifetime of ammonia in the atmosphere-ocean system. Overall, since our photochemical-climate model neglects NH\({}_{3}\) ocean dissolution and likely overestimates NH\({}_{3}\) photolysis, then we probably underestimate the lifetime of NH\({}_{3}\) in Figure 6. While HCN and HCCCN are produced in Figure 6, the surface temperature would be \(\sim 390\) K primarily caused by H\({}_{2}\)-H\({}_{2}\) collision-induced absorption (CIA), which has a significant greenhouse effect in thick H\({}_{2}\) atmospheres like this one of 8.5 bars total pressure. The atmosphere cools to \(\sim 300\) K after H\({}_{2}\) escapes to space. Figure 7 applies our model to various impact masses. The results show the Hadean atmosphere 10,000 years after the post-impact generated steam atmosphere has condensed to an ocean. We choose 10,000 years after ocean condensation because this is adequate time for the atmosphere to reach a quasi-photochemical steady-state that does not change significantly until hydrogen escapes (e.g. Figure 6). Figure 7d and 7e show a sharp increase in the HCN and HCCCN production for impactors larger than \(10^{21}\) kg (\(\sim 780\) km). Such large impacts generate CH\({}_{4}\)/CO\({}_{2}>0.1\) (Figure 4), which makes a thick Titan-like haze (Trainer et al., 2006). Haze shielding causes CH\({}_{4}\) photolysis to be higher in the atmosphere and closer to N\({}_{2}\) photolysis, therefore the photolysis products of both species can more efficiently combine to make cyanides (Appendix Figure A5). Additionally, HCCCN production requires acetylene (Reaction 4), which is a haze precursor that accumulates when CH\({}_{4}\)/CO\({}_{2}>0.1\). These Titan-like atmospheres have \(\sim 10^{-9}\) bar surface HCN, and HCN ocean deposition and rainout rates between \(10^{7}\) and \(10^{9}\) HCN molecules cm\({}^{-2}\) s\({}^{-1}\) persisting on hydrogen escape timescales (\(>1\) million years). HCCCN is incorporated into aerosols before raining out to the surface at a rate of up to \(10^{9}\) HCCCN molecules cm\({}^{-2}\) s\({}^{-1}\). In addition to photochemistry, lightning should also generate HCN (Chameides & Walker, 1981; Stribling & Miller, 1987). Appendix Figure A8 shows HCN production from lighting for the same time period as the Figure 7 simulation using methods described in Chameides & Walker (1981). Assuming the same lightning dissipation rate as modern Earth's, we find that lightning produces up to \(\sim 10^{4}\) HCN molecules cm\({}^{-2}\) s\({}^{-1}\). 
This value is small compared to the \(10^{7}\) - \(10^{9}\) HCN molecules cm\({}^{-2}\) s\({}^{-1}\) produced from photochemistry after \(>10^{21}\) kg impacts. Carbon cycle models suggest comparably modest CO\({}_{2}\) inventories for the Hadean atmosphere (Kadoya et al., 2020). However, these values might be unrealistically small because a large impact would warm surface rocks, possibly causing carbonates to degas and thereby increasing the atmospheric CO\({}_{2}\) reservoir. Up to \(\sim 80\) bars of CO\({}_{2}\) may potentially be liberated from surface carbonates (Krissansen-Totton et al., 2021).

Figure 8: The effect of the pre-impact CO\({}_{2}\) abundance on HCN and HCCCN production in post-impact atmospheres. All values are for the atmosphere 10,000 years after the steam condenses to an ocean, which is within Phase 3 of a post-impact atmosphere (Figure 1). The simulations assume the Table 1 parameters, except vary the pre-impact CO\({}_{2}\) inventory between 0.01 bar (triangles), 0.5 bar (circles), and 10 bars (squares). The calculations use gas-phase chemistry during the cooling steam atmosphere. Panels (a) - (d) show the surface HCN abundance and fluxes, while (e) - (g) show HCCCN production. Our model assumes that HCCCN is not soluble in water and does not rain out, therefore we omit a panel showing HCCCN rainout. Prebiotic nitrile production is directly correlated with the pre-impact CO\({}_{2}\) inventory.

Figure 7: The state of Hadean post-impact atmospheres 10,000 years after steam condenses to an ocean. This time period is within Phase 3 of a post-impact atmosphere indicated in Figure 1. The simulations assume the Table 2 parameters and gas-phase reactions during the cooling steam atmosphere. (a) The surface HCN pressure and timescale of H\({}_{2}\) escape, which can be interpreted as the approximate duration of HCN and HCCCN production. (b) The HCCCN surface pressure. (c) The surface temperature and pressure. (d) The HCN deposition rate in the ocean and the rate at which HCN leaves the atmosphere in rain drops. “HCN haze rainout” is the rainout rate of an aerosol created via the reaction \(\mathrm{H_{2}CN+HCN\to polymer}\). (e) The rainout rate of an aerosol formed from the reaction \(\mathrm{C_{4}H+HCCCN\to polymer}\). (f) The flux of \(<250\) nm photons hitting the surface, and the total hydrocarbon haze column abundance. Impactors larger than \(10^{21}\) kg produce haze-rich atmospheres and a stepwise increase in HCN and HCCCN production.

Figure 8 explores the effect of different pre-impact CO\({}_{2}\) abundances on HCN and HCCCN production in post-impact atmospheres. The simulations are snapshots of the atmosphere 10,000 years after the impact-vaporized steam has condensed to an ocean. Larger pre-impact CO\({}_{2}\) causes larger HCN and HCCCN production because it allows more CH\({}_{4}\) to form in the cooling steam atmosphere. As discussed previously, CH\({}_{4}\) is closely tied to photochemical cyanide generation. Regardless of the pre-impact CO\({}_{2}\) concentration, HCN and HCCCN production sharply increases for impactors larger than \(\sim 10^{21}\) kg due to more efficient haze production (Figure 7, and corresponding text). Figure 9 shows the state of the atmosphere after impacts of various sizes, assuming 10 cm\({}^{2}\) nickel / cm\({}^{2}\) Earth is present in the steam atmosphere to catalyze methane production. The nickel causes more efficient conversion of CO\({}_{2}\) to CH\({}_{4}\) compared to the gas-phase only scenario (Figure 7), permitting greater production of HCN and HCCCN for smaller impactors.
For example, a \(5\times 10^{20}\) kg (\(\sim 610\) km) impactor simulated with nickel catalysts (Figure 9) has HCN and HCCCN production comparable to that of a \(1.6\times 10^{21}\) kg (\(\sim 900\) km diameter) impactor simulated with no nickel catalysts in the cooling steam atmosphere (Figure 7). A critical assumption in this section is that 100% of the iron delivered by impactors reacts with steam to generate H\({}_{2}\). As discussed in Section 3.1, it is possible that the post-impact atmosphere is less thoroughly reduced by impactor iron. Appendix Figures A6 and A7 recalculate main text Figures 7 and 9, assuming that a fraction (approximately 15% to 30%) of impactor iron reduces the steam atmosphere based on SPH simulations ("Model 1B" in Figure 2). Under this alternative assumption, impactors \(\sim 5\) times more massive are required to generate a haze-rich post-impact atmosphere with copious HCN and HCCCN production. For example, in Figure 7, recall that there is a sharp increase in cyanide production for impactors larger than \(10^{21}\) kg (\(\sim 780\) km). Appendix Figure A6, which instead assumes only a fraction of iron reduces the steam atmosphere, finds that the sharp increase in cyanide production occurs for impacts larger than \(5\times 10^{21}\) kg (\(\sim 1330\) km). However, the presence of nickel catalysts may permit large prebiotic nitrile production for smaller impactors, even under pessimistic post-impact H\({}_{2}\) generation (Appendix Figure A7).

## 4 Discussion

### Comparison to previous work

Recently, Zahnle et al. (2020) performed calculations of post-impact atmospheres using simpler models than the ones used in this article. Our results differ in several important ways. First, we find that our purely gas-phase model of the post-impact steam atmosphere (Section 3.2) predicts less CH\({}_{4}\) generation than the model used in Zahnle et al. (2020). For example, Figure 4 predicts that most CO\({}_{2}\) is converted to CH\({}_{4}\) for impactors larger than \(1.6\times 10^{21}\) kg. Figure 2 (top panel) in Zahnle et al. (2020), which is a comparable scenario, suggests a \(5\times 10^{20}\) kg impactor is required to convert most of the atmospheric CO\({}_{2}\) to CH\({}_{4}\). The difference is likely caused by different approaches to computing CH\({}_{4}\) quenching, or freeze-out, as the atmosphere cools.

Figure 9: Identical to Figure 7, except simulations account for nickel-surface reactions which catalyze methane production as the steam atmosphere cools (Schmider et al., 2021). We assume a nickel surface area of 10 cm\({}^{2}\) nickel / cm\({}^{2}\) Earth (for context, see Figure 5). Nickel catalysts cause more efficient CH\({}_{4}\) generation, permitting larger HCN and HCCCN production for smaller impactors compared to the gas-phase only scenario (Figure 7).

Our kinetics-climate model automatically computes CH\({}_{4}\) quenching by tracking the elementary reactions producing and destroying CH\({}_{4}\) along with many other atmospheric species. In most of our simulations of cooling post-impact atmospheres, CH\({}_{4}\) quenches when the temperature is between 900 and 1000 K. Zahnle et al. (2020) instead used equilibrium chemistry modeling with a parameterization for CH\({}_{4}\) quenching derived from kinetics calculations of H\({}_{2}\)-dominated brown dwarf atmospheres (Zahnle and Marley, 2014). This parameterization predicts \(\sim 800\) K CH\({}_{4}\) quenching temperatures. The different quenching temperatures between our model and the Zahnle et al.
(2020) model suggest that the Zahnle et al. (2020) kinetics parameterization is likely not suitable for a cooling steam-rich atmosphere. The new photochemical model predicts longer post-impact CH\({}_{4}\) lifetimes than the Zahnle et al. (2020) model. As mentioned previously, Zahnle et al. (2020) included CH\({}_{4}\) photolysis, but neglected Reaction 3, which efficiently recombines photolysis products in hydrogen-rich atmospheres. In our model, these recombination reactions allow CH\({}_{4}\) to persist in most post-impact atmospheres until hydrogen escapes to space (\(\sim\) millions of years). Zahnle et al. (2020) instead finds that CH\({}_{4}\) is eradicated from the atmosphere before hydrogen escape. Finally, nitrile production and rainout in our new model depend strongly on the presence of haze and the CH\({}_{4}\)/CO\({}_{2}\) ratio, which was not the case in Zahnle et al. (2020). Our model finds that up to \(\sim 10^{9}\) molecules cm\({}^{-2}\) s\({}^{-1}\) of HCN and HCCCN are rained out in hazy post-impact atmospheres with CH\({}_{4}\)/CO\({}_{2}>0.1\) (Figure 7). When CH\({}_{4}\)/CO\({}_{2}<0.1\), there is little haze, HCN production is \(<\sim 10^{5}\) molecules cm\({}^{-2}\) s\({}^{-1}\), and HCCCN production is negligible. Haze causes CH\({}_{4}\) and N\({}_{2}\) photolysis products to be close in altitude so that they efficiently react to make cyanides (Appendix Figure A5). Additionally, HCCCN generation requires C\({}_{2}\)H\({}_{2}\) in our model (Reaction 4), which is only abundant in hazy atmospheres. In contrast, Zahnle et al. (2020) finds that the cyanide production rate in post-impact atmospheres is \(10^{8}\) to \(10^{10}\) molecules cm\({}^{-2}\) s\({}^{-1}\) regardless of the presence of haze and the CH\({}_{4}\)/CO\({}_{2}\) ratio. Our results differ largely because our model is 1-D (has vertical transport), while the Zahnle et al. (2020) model is a zero-dimensional box model. HCN production depends on the proximity of CH\({}_{4}\) and N\({}_{2}\) photolysis, but a box model cannot account for this 1-D effect. Furthermore, Zahnle et al. (2020) does not distinguish between different prebiotic nitriles (e.g. HCN and HCCCN), or determine their surface concentrations and rainout rates. Also, Zahnle et al. (2020) does not have a coupled climate model. Cometary and lightning sources of HCN are relatively small compared to our estimated photochemical production rates in haze-rich post-impact atmospheres. Todd and Oberg (2020) calculated that comets could deliver \(\sim 1.8\times 10^{5}\) HCN molecules cm\({}^{-2}\) s\({}^{-1}\) to the Hadean Earth, a value \(\sim 4\) orders of magnitude smaller than HCN from photochemistry in our most optimistic models. As discussed in Section 3.3, we find HCN production from lightning in post-impact atmospheres to be at most \(\sim 10^{4}\) HCN molecules cm\({}^{-2}\) s\({}^{-1}\), which is also small compared to UV photochemistry in a CH\({}_{4}\)-rich atmosphere. This result agrees with Pearce et al. (2022), who also finds that lightning-produced HCN is relatively insignificant. Rimmer and Shorttle (2019) suggested that localized ultra-reducing magma rich in carbon and nitrogen might outgas HCN and HCCCN. They imagine this gas interacting with subsurface water, causing high concentrations of dissolved prebiotic molecules, and therefore a setting for origin of life chemistry.
While this idea may have merit, their calculations do not account for graphite saturation in magma, which may inhibit outgassing of reduced carbon-bearing species, like HCN (Hirschmann and Withers, 2008; Wogan et al., 2020; Thompson et al., 2022). Additionally, Rimmer and Shorttle (2019) did not self-consistently account for the solubility of gases in magma, which has been hypothesized to prevent the outgassing of H-bearing gases, like CH\({}_{4}\) or HCN (Wogan et al., 2020). Therefore, we argue that a hypothesized volcanic source of HCN and HCCCN requires further modeling and experiments before it can be compared to a photochemical source, but, in general, seems challenging. Cerrillo (2022) recently used a climate model to predict the surface temperature of post-impact atmospheres with compositions predicted by Zahnle et al. (2020). They find surface temperatures \(>600\) K in some cases. However, the Cerrillo (2022) calculations do not include the effects of water vapor on the lapse rate in the troposphere. Latent heat from water condensation alters convection in the troposphere, greatly reducing the lapse rate compared to a dry lapse rate. The result is a much cooler surface. Our climate calculations include the effects of water vapor on the lapse rate, which is why we predict surface temperatures \(\lesssim 500\) K, even in the wake of a \(7.9\times 10^{21}\) kg impact in Figure 7. All our climate simulations of post-impact atmospheres allow surface liquid water. If a post-impact atmosphere of 2000 mol cm\({}^{-2}\) H\({}_{2}\) is entirely lost to space (a \(\sim 4\) bar pure H\({}_{2}\) atmosphere, or \(\sim 13\%\) of the H\({}_{2}\) in an ocean), hydrodynamic escape and Rayleigh fractionation would shift the D/H of the ocean heavier by 1.4% (following Equation (16) in Zahnle et al. (2019), using an escape fractionation factor of \(\sim 0.9\) appropriate for an atmosphere where H\({}_{2}\) dominates over CO\({}_{2}\)). This may be an underestimate of the D/H shift because the immediate post-impact oxidation of iron by steam probably produces H\({}_{2}\) with a lower concentration of D than the steam that condenses into an ocean. Experiments show isotopic fractionation in the reaction of iron powder with steam at low temperatures (Smith and Posey, 1957), but we are unaware of high-temperature experiments corresponding to post-impact conditions. In any case, cumulative big impacts during the Hadean that created highly reducing atmospheres would be expected to increase oceanic D/H additively, raising the ocean D/H from starting values that were ten (Piani et al., 2020) to tens of percent (Alexander et al., 2012) lighter than the modern ocean. Such an evolution, with intermittent hydrogen escape in the Hadean, is consistent both with D/H constraints and with the xenon isotope record (Avice et al., 2018), in which ionic xenon is dragged out to space by early hydrogen escape and the distribution of xenon isotopes becomes heavier (Zahnle et al., 2019).

### Origin of life setting and stockpiling of cyanides

The Hadean Earth may have had less land but was likely speckled with hot-spot volcanic islands similar to modern-day Hawaii (Bada and Korenaga, 2018), and possibly had continental land (Korenaga, 2021) where nitriles could accumulate.
The majority of the HCCCN and HCN produced in post-impact atmospheres would dissolve or rain out into the ocean, where they would be diluted and gradually removed by hydrolysis reactions (Miyakawa et al., 2002) or complexation with dissolved ferrous iron (Keefe and Miller, 1996). However, some of the nitriles would be deposited in lakes or ponds on land. We consider, first, equilibrium with atmospheric \(p_{\rm HCN}\) and, second, time-integrated deposition. Nitrile concentrations in waterbodies on land in equilibrium with the atmosphere according to Henry's law would be too small to participate in prebiotic schemes that form ribonucleotides. Our models predict HCN surface pressures up to \(10^{-9}\) bar (Figure 7). For a warm 373 K pond, Henry's law predicts a dissolved HCN concentration of \(4\times 10^{-11}\) mol L\({}^{-1}\). Yet, \(\sim 0.01\) mol L\({}^{-1}\) HCN is required for polymerization (Sanchez et al., 1967), and published prebiotic schemes can use 1 mol L\({}^{-1}\) HCN (Patel et al., 2015). Additionally, while nitriles are produced in post-impact atmospheres, waterbodies on land would likely be too warm for prebiotic chemistry. In the Figure 6 simulation, substantial HCN and HCCCN production occurs in the aftermath of big impacts when the surface temperature is \(\sim 390\) K, caused by a H\({}_{2}\)-H\({}_{2}\) CIA greenhouse. Nickel catalysts permit large HCN and HCCCN production for surface temperatures as low as \(\sim 360\) K (Figure 9). Nucleotide building blocks are fragile at such hot temperatures, and conditions may not be conducive to an RNA world (Bada and Lazcano, 2002). We propose that cyanides produced in hot post-impact atmospheres may instead be preserved, stockpiled, and concentrated, then used in prebiotic schemes at a later time when the climate is colder. Cyanide rainout and stockpiling could occur for millions of years until HCN production is halted by H\({}_{2}\) escape to space (Figure 6). For example, if HCN rains out at \(10^{9}\) molecules cm\({}^{-2}\) s\({}^{-1}\) over one million years (Figure 7), then \(\sim 1.4\) g cm\({}^{-2}\) HCN could be stockpiled, assuming all molecules are preserved. Once H\({}_{2}\) escapes, the surface temperature would drop to \(\sim 300\) K (Figure 6), and over longer timescales the carbonate-silicate cycle might settle on even colder climates because impact ejecta promotes CO\({}_{2}\) sequestration (Kadoya et al., 2020). In this cold climate, cyanide stockpiled into salts could be released as HCN or CN\({}^{-}\) into water bodies on land because of rehydration, volcanic or impact heating (Patel et al., 2015; Sasselov et al., 2020), or UV exposure (Todd et al., 2022). Liberation of cyanide could enable the prebiotic schemes that make RNA. Toner and Catling (2019) investigated a mechanism for stockpiling cyanides. Their thermodynamic calculations show that HCN can be preserved as ferrocyanide salts in evaporating carbonate-rich lakes. However, the Toner and Catling (2019) numerical experiments were at 273 K and 298 K, which are far colder than the \(>360\) K surface temperatures that coincide with large HCN production in post-impact atmospheres (Figure 9). Although Toner and Catling (2019) did not address stockpiling of HCCCN, cyanoacetylene can be captured by 4,5-dicyanoimidazole (DCI), a byproduct of adenine synthesis, to make crystalline CV-DCI (Ritson et al., 2022), and it is possible that other capture mechanisms are yet to be discovered.
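The pond-concentration and stockpiling estimates above reduce to short arithmetic, and a minimal Python sketch of both follows. The 373 K Henry's law constant of 0.04 mol L\({}^{-1}\) bar\({}^{-1}\) used here is inferred from the numbers quoted in this paragraph rather than taken from an independent source.

```python
# Minimal check of the pond-concentration and stockpiling arithmetic above.
AVOGADRO = 6.022e23        # molecules mol^-1
MU_HCN = 27.03             # g mol^-1
SECONDS_PER_MYR = 3.15e13  # seconds in one million years

# Henry's law: dissolved HCN in a warm pond in equilibrium with the atmosphere.
p_hcn = 1e-9      # bar, maximum modeled surface HCN pressure (Figure 7)
alpha_hcn = 0.04  # mol L^-1 bar^-1 at ~373 K (inferred value; assumption)
print(f"Equilibrium pond HCN: {alpha_hcn * p_hcn:.0e} mol/L")  # ~4e-11

# Time-integrated stockpile: rain out 1e9 molecules cm^-2 s^-1 for 1 Myr.
flux = 1e9  # molecules cm^-2 s^-1
column_mol = flux * SECONDS_PER_MYR / AVOGADRO  # mol cm^-2
print(f"Stockpiled HCN over 1 Myr: {column_mol * MU_HCN:.1f} g cm^-2")  # ~1.4
```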
Overall, the feasibility of stockpiling prebiotic nitriles in post-impact conditions requires further geochemical modeling and experiments.

### Impactor size and the likelihood of the origin of life

We hypothesize that CH\({}_{4}\)/CO\({}_{2}>0.1\) might be an important threshold required for a post-impact atmosphere to produce useful concentrations of nitriles for origin of life chemistry. Figure 10 shows HCN and HCCCN haze rainout as a function of the atmospheric CH\({}_{4}\)/CO\({}_{2}\) mole ratio for every post-impact simulation in this article. When CH\({}_{4}\)/CO\({}_{2}>0.1\), the atmosphere is hazy, and HCN and HCCCN are delivered to the surface at a rate of up to \(\sim 10^{9}\) molecules cm\({}^{-2}\) s\({}^{-1}\). In contrast, atmospheres with CH\({}_{4}\)/CO\({}_{2}<0.1\) rain out less than \(10^{5}\) HCN molecules cm\({}^{-2}\) s\({}^{-1}\) and have surface HCN concentrations less than \(10^{-13}\) bar (Figure 7). Such small HCN concentrations may be challenging to stockpile as ferrocyanides (Toner and Catling, 2019). Additionally, modeled atmospheres with CH\({}_{4}\)/CO\({}_{2}<0.1\) produce negligible HCCCN, yet the molecule is required in prebiotic schemes to synthesize pyrimidine (cytosine and uracil) nucleobase precursors to RNA (Powner et al., 2009; Okamura et al., 2019; Becker et al., 2019). The impactor mass required to generate an atmosphere with CH\({}_{4}\)/CO\({}_{2}>0.1\) is uncertain. Our optimistic model, which considers the effect of nickel-catalyzed methane production, requires a \(>4\times 10^{20}\) kg (\(>570\) km) impactor (Figure 9). The lunar cratering record and the abundance of highly siderophile elements in Earth's mantle imply that between 4 and 7 such impacts occurred during the Hadean (Marchi et al., 2014; Zahnle et al., 2020). Our least optimistic model needs a \(>5\times 10^{21}\) kg (\(>1330\) km) impact to create a post-impact atmosphere with CH\({}_{4}\)/CO\({}_{2}>0.1\) because it assumes only a fraction of the iron delivered to Earth reacts with the ocean to create atmospheric H\({}_{2}\) (Appendix Figure A6). The Hadean experienced only 0 to 2 collisions this large (Zahnle et al., 2020). The precise minimum impactor mass to make an atmosphere with CH\({}_{4}\)/CO\({}_{2}>0.1\) depends on the importance of atmospheric equilibration with a melt pond (Section 3.1, and Itcovitz et al. (2022)), the fraction of impactor iron that reduces the atmosphere, and the effect of nickel and other surface catalysts on CH\({}_{4}\) kinetics. An additional consideration is that any progress toward the origin of life caused by an impact could be erased by a subsequent impact that sterilizes the planet. For example, suppose a \(>500\) km impact that vaporizes the ocean sterilizes the globe (Citron and Stewart, 2022). With our most pessimistic calculations for post-impact CH\({}_{4}\) generation, a \(>1330\) km (\(>5\times 10^{21}\) kg) impact is required to create an atmosphere that generates significant HCN and HCCCN. In this scenario, the last \(>1330\) km impact favorable for prebiotic chemistry would likely be followed by a 500 to 1330 km impact that would destroy any primitive life without rekindling it. Alternatively, our optimistic model for post-impact CH\({}_{4}\) generation only requires a \(>4\times 10^{20}\) kg (\(>570\) km) impact to create an atmosphere with CH\({}_{4}\)/CO\({}_{2}>0.1\). In this case, the final \(>570\) km impact that might kickstart the origin of life is unlikely to be followed by a slightly smaller 500 km to 570 km sterilizing impact.
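The impactor mass and diameter pairings quoted in this section are mutually consistent for a spherical impactor of roughly chondritic density. A minimal sketch, assuming a bulk density of \(\sim 4100\) kg m\({}^{-3}\) (a value inferred here from the quoted pairs, not stated in the text):

```python
import numpy as np

RHO_IMP = 4100.0  # kg m^-3, inferred bulk impactor density (assumption)

def impactor_diameter_km(mass_kg, rho=RHO_IMP):
    """Diameter of a spherical impactor with the given mass and density."""
    volume = mass_kg / rho  # m^3
    return (6.0 * volume / np.pi) ** (1.0 / 3.0) / 1e3  # km

for m in [4e20, 5e20, 1e21, 1.6e21, 5e21]:
    print(f"{m:.1e} kg -> {impactor_diameter_km(m):.0f} km")
# Prints ~570, ~615, ~775, ~905, and ~1325 km, matching the quoted
# diameters (570, 610, 780, 900, and 1330 km) to within a few percent.
```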
Figure 10: HCN rainout and HCCCN haze rainout as a function of the CH\({}_{4}\)/CO\({}_{2}\) mole ratio in all simulated post-impact atmospheres. The figure considers simulations shown in Figures 7, 8, 9, A6 and A7. All values are for the atmosphere 10,000 years after steam has condensed to an ocean. The HCN and HCCCN haze production is significantly larger for atmospheres with CH\({}_{4}\)/CO\({}_{2}\gtrsim 0.1\).

A caveat to the reasoning in the previous paragraph is that ocean-vaporization may not have sterilized the planet, because microbes could possibly have survived in the deep subsurface (Sleep et al., 1989; Grimm and Marchi, 2018). In summary, we suggest that CH\({}_{4}\)/CO\({}_{2}>0.1\) may be an important threshold for post-impact atmospheres to be conducive to the origin of life because such atmospheres generate \(>4\) orders of magnitude larger surface HCN concentrations, and are the only modeled atmospheres capable of generating HCCCN. We find that the minimum impactor mass required to create a post-impact atmosphere with CH\({}_{4}\)/CO\({}_{2}>0.1\) is between \(4\times 10^{20}\) and \(5\times 10^{21}\) kg (570 to 1330 km). The value is uncertain because we do not know how effectively iron delivered by an impact reduces the atmosphere (Section 3.1) or the importance of atmospheric equilibration with a melt pond (Section 3.1), and because it is hard to estimate a realistic surface area of nickel catalysts available during the cooling steam atmosphere (Section 3.2).

### Model caveats and uncertainties

#### 4.4.1 Hydrogen from crust-atmosphere reactions

Perhaps the most significant caveat to the modeling effort described above is that we did not consider H\({}_{2}\) production from reactions between a hot post-impact atmosphere and solid, non-melted crust. Section 3.1 explores impact H\({}_{2}\) made by two mechanisms: (1) reduction of the atmosphere by impact-derived iron and (2) atmospheric equilibration with a melt pond made by the impact. However, it is also conceivable that while the atmosphere is hot and steam-rich in the \(\sim 10^{3}\) years following an impact (i.e. Phase 2), water vapor could permeate through and react with the solid crust to produce H\({}_{2}\) by a process like serpentinization. Specifically, H\({}_{2}\)O reduction by FeO in the solid crust could make H\({}_{2}\): \[\mathrm{H_{2}O}+3\mathrm{FeO}\rightarrow\mathrm{H_{2}}+\mathrm{Fe_{3}O_{4}} \tag{7}\] In our nominal model (Figures 4 and 7), we require that a post-impact atmosphere have \(>2\times 10^{3}\) mol cm\({}^{-2}\) of H\({}_{2}\) (i.e. the equivalent of converting 13% of Earth's ocean to H\({}_{2}\)) in order to reach CH\({}_{4}\)/CO\({}_{2}>0.1\) and large nitrile production rates. Assuming a crustal FeO content of 8 wt% (Takahashi, 1986), \(2\times 10^{3}\) mol cm\({}^{-2}\) of H\({}_{2}\) could be produced by reacting water with the FeO in the top \(\sim 16\) km of Earth's lithosphere. The feasibility of extensive water-rock H\({}_{2}\) generation depends on the permeability of the crust and the pressure gradients driving subsurface fluid circulation. For example, low-permeability rocks with slow water circulation may not permit serpentinization of the upper crust within \(\sim 10^{3}\) years while the atmosphere is hot and steam-rich. A comprehensive model is beyond the scope of this article, but, if attainable, significant water-rock reactions might produce a thick H\({}_{2}\) atmosphere after relatively small impacts (e.g.
\(10^{20}\) kg), which favors CH\({}_{4}\)/CO\({}_{2}>0.1\) and significant nitrile generation. Another possibility, which we do not investigate in detail, is that atmosphere-crust reactions occur in the immediate aftermath of a giant impact (i.e. Phase 1), rather than over \(\sim 10^{3}\) years as previously discussed. A large impact could produce a global ejecta blanket, several kilometers thick, of mixed hot water and rock. As water was vaporized to form a steam atmosphere, the water and rock slurry could chemically equilibrate, producing H\({}_{2}\). Zahnle et al. (2020) attempted to account for atmosphere-crust interaction by equilibrating the post-impact steam atmosphere (Phase 2) to a mineral redox buffer. For example, their Figure 5 assumes the atmosphere has a fixed oxygen fugacity set by the FMQ buffer at an assumed 650 K methane quench temperature. The calculation predicts most CO\({}_{2}\) is converted to CH\({}_{4}\) for impacts as small as \(\sim 5\times 10^{19}\) kg, but Zahnle et al. (2020) did not determine whether such significant atmosphere-crust interaction is physically plausible.

#### 4.4.2 Climate

A shortcoming of this work is that our climate model is relatively simple. Throughout the Results section, our climate code assumes an isothermal 200 K stratosphere and a saturated adiabatic troposphere (i.e. relative humidity \(\phi=1.0\)), and ignores clouds. However, many of our simulated post-impact atmospheres contain a hydrocarbon haze which should absorb sunlight and warm the stratosphere (Arney et al., 2016). Also, in a hydrogen-dominated atmosphere, water vapor has a larger molecular weight than the background gas, which could inhibit convection (Leconte et al., 2017) and perhaps cause low relative humidities. Furthermore, low-altitude clouds reflect sunlight and should cool a planet, while high clouds have a greenhouse warming effect (Goldblatt and Zahnle, 2011). Figure 11 attempts to show the uncertainty in our climate calculations as a function of three free parameters: stratosphere temperature, relative humidity, and low-altitude clouds, which we crudely approximate by varying the surface albedo. The calculation uses the composition of the atmosphere after a \(5\times 10^{20}\) kg impact in Figure 9 immediately after the steam atmosphere has condensed to an ocean. Our nominal climate parameters (\(T_{\mathrm{strat}}=200\) K, \(\phi=1\), \(A_{s}=0.2\)) predict a 361 K surface temperature. A warm stratosphere caused by hydrocarbon UV absorption and high-albedo low-altitude clouds might cause the surface to be \(\sim 30\) K colder than our nominal model, assuming water vapor is saturated. On the other hand, low relative humidities, which might be favored in convection-inhibited H\({}_{2}\)-dominated atmospheres, increase the troposphere lapse rate, which warms the surface (Leconte et al., 2017). While Figure 11 gives a sense for the possible uncertainty in our climate calculations, it does not self-consistently simulate haze, relative humidity, and cloud feedbacks. A more comprehensive model is required to resolve these nuances. A further caveat is that our climate calculations ignore greenhouse warming from NH\({}_{3}\) (Table 1). We choose to disregard the influence of NH\({}_{3}\) because a substantial fraction of the gas should dissolve in the ocean (Zahnle et al., 2020), a process that our coupled photochemical-climate model cannot self-consistently account for.
However, our climate model (Appendix D), when uncoupled from photochemistry, can partition gases between the atmosphere and ocean according to gas solubility and ocean chemistry. Below, we use this stand-alone climate model to determine the climate effects of NH\({}_{3}\) in a post-impact atmosphere. NH\({}_{3}\) should dissolve into an ocean by Henry's law, then hydrolyze to NH\({}_{4}^{+}\):

Figure 11: Surface temperature of a post-impact atmosphere as a function of stratosphere temperature, relative humidity, and surface clouds, which we crudely approximate with the surface albedo (\(A_{s}\)). The atmosphere has 5.9 mol cm\({}^{-2}\) CO\({}_{2}\), 5.6 mol cm\({}^{-2}\) CH\({}_{4}\), 35.8 mol cm\({}^{-2}\) N\({}_{2}\), 556 mol cm\({}^{-2}\) H\({}_{2}\), 0.01 mol cm\({}^{-2}\) CO, 0.3 mol cm\({}^{-2}\) NH\({}_{3}\), and a liquid water ocean at the surface. This is the same composition of the atmosphere after a \(5\times 10^{20}\) kg impact in Figure 9 once steam has condensed to an ocean. The gray shaded region labeled “no steady-state” has no steady-state climate solutions that balance incoming shortwave and outgoing longwave energy. Uncertainties in our assumed stratosphere temperature, relative humidity, and the effects of low-altitude clouds predict surface temperatures from \(\sim 330\) K to \(\sim 390\) K, with a nominal value of 361 K.

\[\mathrm{NH_{3}(g)}\leftrightarrow\mathrm{NH_{3}(aq)} \tag{8}\] \[\mathrm{NH_{3}(aq)}+\mathrm{H_{2}O}\leftrightarrow\mathrm{NH_{4}^{+}+OH^{-}} \tag{9}\] \[\mathrm{H_{2}O}\leftrightarrow\mathrm{OH^{-}+H^{+}} \tag{10}\] Therefore, the concentration of aqueous NH\({}_{3}\) (in mol kg\({}^{-1}\)) is given by \(m_{\mathrm{NH_{3}}}=p_{\mathrm{NH_{3}}}\alpha_{\mathrm{NH_{3}}}\), where \(p_{\mathrm{NH_{3}}}\) is the surface partial pressure of NH\({}_{3}\) in bars and \(\alpha_{\mathrm{NH_{3}}}\) is the Henry's law constant (mol kg\({}^{-1}\) bar\({}^{-1}\)). Reactions 9 and 10 give the ammonium concentration, \(m_{\mathrm{NH_{4}^{+}}}=(K_{9}/K_{10})m_{\mathrm{NH_{3}}}m_{\mathrm{H^{+}}}\), where \(K_{9}\) and \(K_{10}\) are the equilibrium constants for each reaction. The Henry's law constant for NH\({}_{3}\) (in mol kg\({}^{-1}\) bar\({}^{-1}\)) is \(\alpha_{\mathrm{NH_{3}}}=61\exp(4200(\frac{1}{T}-\frac{1}{298.15}))\) (Linstrom & Mallard, 1998). The equilibrium constants for Reactions 9 and 10 are approximately \(\log_{10}(K_{9})=-84.63\exp(-0.0161T)-4.05\) and \(\log_{10}(K_{10})=-39.96\exp(-0.00639T)-8.06\). We derived both of these parameterizations using the SUPCRT thermodynamic database (Johnson et al., 1992). We implement this ocean chemistry in our stand-alone climate model (Appendix D), and compute the surface temperature after the \(5\times 10^{20}\) kg impact in Figure 9 once the steam atmosphere has condensed to an ocean. The mol cm\({}^{-2}\) of each gas are given in the Figure 11 caption. We use our nominal climate parameters (\(T_{\mathrm{strat}}=200\) K, \(\phi=1\), \(A_{s}=0.2\)), assume the total H\({}_{2}\)O reservoir is 1 modern ocean (15,000 mol cm\({}^{-2}\)) with pH = 7, and account for the radiative effects of NH\({}_{3}\) in addition to the Table 1 opacities. The model predicts a 371 K surface temperature, which is 10 K warmer than calculations that do not include NH\({}_{3}\) opacities or ocean dissolution. 96% of the ammonia reservoir is dissolved in the ocean. Ammonia has a more substantial effect on climate after larger impactors.
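Before turning to larger impactors, the atmosphere-ocean partitioning just described can be sketched in a few lines of Python. The sketch neglects water vapor, uses a dry mean molecular weight of \(\sim 4.1\) g mol\({}^{-1}\) for the H\({}_{2}\)-rich atmosphere, and takes the ocean to be 270 kg cm\({}^{-2}\) of water (roughly one modern ocean); these simplifications are assumptions, and the result reproduces the quoted \(\sim\)96% dissolved fraction to within a few percent.

```python
import numpy as np

T = 371.0        # K, surface and ocean temperature
pH = 7.0
N_total = 0.3    # mol cm^-2, total NH3 inventory
M_ocean = 270.0  # kg cm^-2 of ocean water (~1 modern ocean; assumption)
g = 981.0        # cm s^-2
mu_bar = 4.1     # g mol^-1, dry mean molecular weight (assumption)

# Henry's law constant and equilibrium constants from the parameterizations above
alpha = 61.0 * np.exp(4200.0 * (1.0 / T - 1.0 / 298.15))  # mol kg^-1 bar^-1
K9 = 10.0 ** (-84.63 * np.exp(-0.0161 * T) - 4.05)
K10 = 10.0 ** (-39.96 * np.exp(-0.00639 * T) - 8.06)
m_H = 10.0 ** (-pH)  # mol kg^-1

# Per bar of NH3: moles dissolved (NH3(aq) + NH4+) and moles in the atmosphere
ocean_per_bar = alpha * (1.0 + (K9 / K10) * m_H) * M_ocean  # mol cm^-2 bar^-1
atmos_per_bar = 1e6 / (mu_bar * g)  # mol cm^-2 bar^-1, trace-gas column

p_NH3 = N_total / (ocean_per_bar + atmos_per_bar)  # bar
print(f"p_NH3 = {p_NH3:.1e} bar")
print(f"Dissolved fraction = {ocean_per_bar * p_NH3 / N_total:.0%}")  # ~95%
```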
Consider the atmosphere after a \(7.9\times 10^{21}\) kg impact in Figure 5 once steam has condensed to an ocean (0.008 mol cm\({}^{-2}\) CO\({}_{2}\), 11.5 mol cm\({}^{-2}\) CH\({}_{4}\), 32.6 mol cm\({}^{-2}\) N\({}_{2}\), 9122 mol cm\({}^{-2}\) H\({}_{2}\), 0.002 mol cm\({}^{-2}\) CO, and 6.8 mol cm\({}^{-2}\) NH\({}_{3}\)). Our climate model, which includes NH\({}_{3}\) opacities and ocean dissolution, predicts a 505 K surface temperature with only 20% of the NH\({}_{3}\) dissolved in the ocean, because solubility decreases with increasing temperature. This is 42 K hotter than our model that ignores NH\({}_{3}\) greenhouse contributions (Figure 7). Overall, our climate calculations throughout most of this article perhaps underestimate the greenhouse warming by 10 to \(\sim 40\) K by ignoring NH\({}_{3}\) opacities, but may instead overestimate the surface temperature because we do not account for the cooling effects of haze and low-altitude clouds (Figure 11). Additional warming from NH\({}_{3}\) would only be relevant for a fraction of a post-impact atmosphere's lifetime, before ammonia is destroyed by photolysis (Figure 6).

#### 4.4.3 Unknown chemical reactions and the effect of ions

While our chemical scheme for HCCCN successfully reproduces the HCCCN abundances in Titan's atmosphere (Appendix Figure A9), it may lack many reactions relevant to post-impact atmospheres. Our sparse HCCCN network is necessary because few kinetic measurements are currently published in the literature. Our photochemical model does not include ion chemistry, which is likely a reasonable simplification because ions are not important for HCN or HCCCN formation on Titan (Loison et al., 2015). Only some heavy hydrocarbons, like benzene (C\({}_{6}\)H\({}_{6}\)), rely on coupled neutral-ion chemistry to explain their observed abundances in Titan's atmosphere (Horst, 2017).

## 5 Conclusions

We use atmospheric models to investigate the production of prebiotic feedstock molecules in impact-generated reducing atmospheres on the Hadean Earth, updating simpler calculations made by Zahnle et al. (2020). We find that massive asteroid impacts can generate temporary H\({}_{2}\)-, CH\({}_{4}\)- and NH\({}_{3}\)-rich atmospheres, which photochemically generate HCN and HCCCN for the duration of hydrogen escape to space (\(10^{5}\) to \(10^{7}\) years). The production of nitriles increases dramatically for haze-rich atmospheres that have mole ratios of CH\({}_{4}\)/CO\({}_{2}>0.1\). In these cases, HCN can rain out onto land surfaces at a rate of \(\sim 10^{9}\) molecules cm\({}^{-2}\) s\({}^{-1}\), and HCCCN incorporated in haze rains out at a similar rate. Atmospheres with CH\({}_{4}\)/CO\({}_{2}<0.1\) produce 3 to 4 orders of magnitude less HCN, and generate negligible HCCCN. The impactor mass required to create an atmosphere with CH\({}_{4}\)/CO\({}_{2}>0.1\) is uncertain and depends on how efficiently atmosphere-iron, atmosphere-melt and atmosphere-crust reactions generate H\({}_{2}\), and on the surface area of nickel catalysts exposed to the cooling steam atmosphere. In an optimistic modeling scenario a \(>4\times 10^{20}\) kg (\(>570\) km) impactor is sufficient, while in our least optimistic scenario a \(>5\times 10^{21}\) kg (\(>1330\) km) impactor is required. We find that post-impact atmospheres that generate significant prebiotic molecules have \(>360\) K surface temperatures caused by a H\({}_{2}\)-H\({}_{2}\) greenhouse, which may be too hot for prebiotic chemistry, although the temperature may be cooler if reflective clouds occur.
An alternative is that HCN and HCCCN generated in post-impact atmospheres are stockpiled. Cyanide can plausibly be stockpiled and concentrated in ferrocyanide salts, and cyanoacetylene could be captured by byproducts of adenine synthesis into imidazole-based crystals (Ritson et al., 2022). HCN and HCCCN can be used to create nucleotide precursors to RNA millions of years after the impact, once the H\({}_{2}\) has escaped to space and the atmosphere has cooled to a more temperate state. Nominally, the Hadean Earth appears to have experienced several impacts that would have produced an atmosphere that made significant prebiotic feedstock molecules. Like Earth, all rocky exoplanets accreted from impacts. Consequently, impact-induced reducing atmospheres may be a common planetary process that provides windows of opportunity for the origin of exoplanet life.

## Acknowledgements

We thank Joshua Krissansen-Totton for numerous conversations that have improved the atmospheric models used in this article. Conversations with Maggie Thompson, Sandra Bastelberger, and Shawn Domagal-Goldman also helped us create the _Photochem_ and _Clima_ models. We also thank Eric Wolf for advice on computing reliable k-distributions for climate modeling. Additionally, we thank Paul Molliere for conversations that helped us build _Clima_. Finally, we thank our two anonymous reviewers for constructive feedback that improved this article. N.F.W. and D.C.C. were supported by the Simon's Collaboration on Origin of Life Grant 511570 (to D.C.C.). Also, N.F.W., D.C.C., and K.J.Z. were supported by NASA Astrobiology Program Grant 80NSSC18K0829 and benefited from participation in the NASA Nexus for Exoplanet Systems Science research coordination network. N.F.W. and D.C.C. also acknowledge support from Sloan Foundation Grant G-2021-14194. R.L. and K.J.Z. were supported by NASA Exobiology Grant 80NSSC18K1082. R.L. was additionally supported by NASA XRP 80NSSC22K0953.

## Appendix A H\({}_{2}\) Generation from iron and molten crust

Here, we describe our model for atmospheric H\({}_{2}\) generation in the days to months following a massive asteroid impact (Phase 1 in Figure 1). All our simulations assume a pre-impact atmosphere containing CO\({}_{2}\), N\({}_{2}\), and ocean water. First, we assume that half of the impactor's kinetic energy heats the atmosphere and ocean water to \(\sim 2000\) K. We assume the atmosphere is heated to \(\sim 2000\) K because this is roughly the evaporation temperature of silicates. For our assumed impact velocity of 20.7 km s\({}^{-1}\), all impactor masses that we consider in the main text (\(10^{20}\) to \(10^{22}\) kg) have kinetic energies \(>2\times 10^{28}\) joules, delivering \(>10^{28}\) joules to the atmosphere, which is larger than the \(5\times 10^{27}\) joules required to vaporize an ocean (Sleep et al., 1989). Next, our model assumes each mole of iron delivered reacts with the atmosphere and removes one mole of oxygen. The column of iron delivered to the atmosphere (in mol cm\({}^{-2}\)) is \[N_{\rm Fe,atmos}=\frac{X_{\rm Fe,atmos}X_{\rm Fe,imp}M_{\rm imp}}{\mu_{\rm Fe}A_{\oplus}}\] (A1) Here, \(M_{\rm imp}\) is the mass of the impactor in grams, \(X_{\rm Fe,imp}\) is the iron mass fraction of the impactor, \(X_{\rm Fe,atmos}\) is the fraction of the impactor iron that reacts with the atmosphere, \(\mu_{\rm Fe}\) is the molar weight of iron, and \(A_{\oplus}\) is the area of Earth in cm\({}^{2}\). Following Zahnle et al. (2020) we take \(X_{\rm Fe,imp}=0.33\).
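As a concrete illustration of Equation (A1), the sketch below evaluates the iron column for the \(1.58\times 10^{21}\) kg impactor of Figure 6, taking \(X_{\rm Fe,atmos}=1\) (the main-text choice discussed below). Because each mole of iron removes one mole of oxygen from steam, the H\({}_{2}\) column produced is at most \(N_{\rm Fe,atmos}\).

```python
A_EARTH = 5.1e18  # cm^2, surface area of Earth
MU_FE = 55.85     # g mol^-1, molar weight of iron
X_FE_IMP = 0.33   # iron mass fraction of the impactor (Zahnle et al. 2020)

def iron_column(m_imp_kg, x_fe_atmos=1.0):
    """Equation (A1): mol cm^-2 of impactor iron reacting with the atmosphere."""
    return x_fe_atmos * X_FE_IMP * m_imp_kg * 1e3 / (MU_FE * A_EARTH)

n_fe = iron_column(1.58e21)  # the Figure 6 impactor
print(f"N_Fe,atmos = {n_fe:.0f} mol cm^-2")  # ~1800
# One mole of Fe yields up to one mole of H2, so ~1.8e3 mol cm^-2 of H2,
# comparable to the ~2e3 mol cm^-2 threshold discussed in Section 4.4.1.
```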
In the main text, we assume \(X_{\rm Fe,atmos}=1\) (e.g. "Model 1A" in Figure 2), while the Appendix contains calculations with \(X_{\rm Fe,atmos}=0.15\) to 0.3 based on extrapolations of the Citron & Stewart (2022) SPH impact simulations for \(45^{\circ}\) impactors traveling at 20.7 km s\({}^{-1}\) (e.g. "Model 1B" in Figure 2). To approximate equilibration between the delivered iron and the atmosphere, we simply remove \(N_{\rm Fe,atmos}\) mol cm\({}^{-2}\) of oxygen atoms from the atmosphere.

Figure A4: The methane photochemical lifetime in post-impact atmospheres. The plot shows the CH\({}_{4}\) mixing ratio and its production and loss as a function of altitude 10,000 years after the steam atmosphere has condensed to an ocean following the \(1.58\times 10^{21}\) kg impact described in Figure 6. CH\({}_{4}\) is primarily destroyed by photolysis, but reforms efficiently in the H\({}_{2}\)-rich atmosphere from CH\({}_{3}+\)H\(+\)M \(\rightarrow\) CH\({}_{4}+\)M. The result is a 4.8 million year CH\({}_{4}\) photochemical lifetime. CH\({}_{4}\) only persists in the atmosphere for about one million years because H\({}_{2}\) escapes to space in this amount of time, which inhibits CH\({}_{4}\) recombination.

Our model also optionally considers reactions between the atmosphere and a melt pond generated by the impact. Our approach is similar to the one described in Itcovitz et al. (2022). We estimate the total mass of the melt pond (\(M_{\rm melt}\)) by interpolating SPH impact simulations from Citron and Stewart (2022) for a \(45^{\circ}\) impact angle. The smallest impact they consider is \(7\times 10^{21}\) kg, so we extrapolate their results down to \(10^{20}\) kg. We additionally take the melted crust to be basaltic in composition, except with variable initial amounts of ferric and ferrous iron. Effectively, this means that the initial oxygen fugacity of the melted crust is a free parameter because iron redox state is related to oxygen fugacity through the equilibrium reaction, \[0.5\rm O_{2}+2FeO\leftrightarrow Fe_{2}O_{3}\] (A2) We assume oxygen atoms can flow from the atmosphere into the melt (or vice-versa) in order to bring Reaction A2 to an equilibrium state defined by the Kress and Carmichael (1991) thermodynamic data. Our model also considers H\({}_{2}\)O gas dissolution in the melt using the Equation (19) solubility relation in Itcovitz et al. (2022). Finally, given a heated post-impact atmosphere that has been reduced by impactor iron and, optionally, in contact with a melt pool, we compute thermodynamic equilibrium of the atmosphere-melt system at 1900 K. We choose 1900 K because any impact-produced silicate vapors should have condensed and rained out of the atmosphere, and the melt pool should not yet have solidified (Itcovitz et al., 2022). To find an equilibrium state, we first compute an equilibrium composition for the atmosphere alone using the equilibrium solver in the Cantera chemical engineering package (Goodwin et al., 2022) with our thermodynamic data (Appendix C.2). Next, to equilibrate the atmosphere-melt system, we perform a zero-dimensional kinetics integration for 1000 years at constant temperature and pressure with our reaction network (Appendix C.2). All reactions in our network are thermodynamically reversible, therefore integrating the kinetics forward in time should ultimately reach a state of thermodynamic equilibrium. Our integration includes additional reactions representing Reaction A2 and H\({}_{2}\)O dissolution in the melt.
We arbitrarily choose forward reaction rates of \(10^{-10}\) s\({}^{-1}\) for both reactions, then reverse the rates using the Kress & Carmichael (1991) equilibrium constant and the Equation (19) solubility relation in Itcovitz et al. (2022). Overall, our approach finds a chemical equilibrium state between the atmosphere and the melt pond, and therefore an estimate of the amount of H\({}_{2}\) generated from atmosphere-iron and atmosphere-melt reactions. Our code for solving melt-atmosphere equilibrium is available at the following Zenodo link: [https://doi.org/10.5281/zenodo.7802966](https://doi.org/10.5281/zenodo.7802966).

## Appendix B Kinetics model of a cooling steam atmosphere

We simulate the chemistry of a cooling post-impact atmosphere using a zero-dimensional kinetics-climate model. We assume the atmosphere's composition, pressure, and temperature are homogeneous in all directions, and that the atmosphere has a vertical extent of one atmospheric scale height (\(H_{a}\)). Under these assumptions, the following system of ordinary differential equations governs our model: \[\frac{\partial N_{i}}{\partial t}=\frac{H_{a}}{N_{a}}(P_{i}-L_{i})+\frac{A_{c}}{N_{a}}(P_{i,\text{surf}}-L_{i,\text{surf}})\] (B3) \[\frac{\partial T_{s}}{\partial t}=-\frac{1}{\rho c_{p}}\left(\frac{F_{\text{net}}}{H_{a}}\right)-\frac{1}{\rho c_{p}}\left(\frac{dM_{\rm H_{2}O}}{dt}\frac{l_{\rm H_{2}O}}{A_{\oplus}H_{a}}\right)\] (B4) All variables and units are in Table A1. In Equation (B3), \(N_{i}\) is the column abundance of species \(i\) in mol cm\({}^{-2}\), which changes because of gas-phase chemical reactions (production rate \(P_{i}\) and loss rate \(L_{i}\)) and reactions occurring on surfaces (\(P_{i,\text{surf}}\) and \(L_{i,\text{surf}}\)). In Equation (B4), \(T_{s}\) is the surface temperature, which changes because of energy radiated to space (\(F_{\text{net}}\)) and because of latent heat from H\({}_{2}\)O condensation (\(\frac{dM_{\rm H_{2}O}}{dt}\), where \(M_{\rm H_{2}O}\) is the mass of H\({}_{2}\)O in the atmosphere). We approximate the energy radiated to space in ergs cm\({}^{-2}\) s\({}^{-1}\) from a steam-dominated atmosphere with the following parameterization: \[F_{\text{net}}=8.3\times 10^{4}+1000\max(T_{s}-1750,0)\] (B5) This parameterization fits calculations from our radiative transfer model (see Appendix D), which uses a solar spectrum at 4.0 Ga derived from methods described in Claire et al. (2012). We can rewrite Equation (B4), replacing \(\rho H_{a}\) using the ideal gas law and the definition of atmospheric scale height, \[\rho H_{a}=\frac{p\bar{\mu}}{N_{a}kT}\frac{N_{a}kT}{\bar{\mu}g}=\frac{p}{g}\] (B6) Here, \(p\) is the total atmospheric pressure in dynes cm\({}^{-2}\), \(g\) is gravitational acceleration in cm s\({}^{-2}\), \(k\) is the Boltzmann constant, \(\bar{\mu}\) is the mean molecular weight in g mol\({}^{-1}\), and \(N_{a}\) is Avogadro's number. Therefore, \[\frac{\partial T_{s}}{\partial t}=-\frac{g}{pc_{p}}F_{\rm net}-\frac{g}{pc_{p}}\left(\frac{dM_{\rm H_{2}O}}{dt}\frac{l_{\rm H_{2}O}}{A_{\oplus}}\right)\] (B7) Next, we must derive an expression for the steam condensation rate (\(dM_{\rm H_{2}O}/dt\)) in terms of known variables. Working in CGS units, the total pressure of the atmosphere is given by its gravitational force divided by Earth's surface area (\(5.1\times 10^{18}\) cm\({}^{2}\)): \[p=\frac{Mg}{A_{\oplus}}\] (B8) Here, \(M\) is the mass of the atmosphere in grams.
We are considering steam-dominated atmospheres; therefore, the mass and pressure in the above relation are approximately equal to the mass of atmospheric H\({}_{2}\)O and the H\({}_{2}\)O partial pressure. \[p_{\rm H_{2}O} \approx\frac{M_{\rm H_{2}O}g}{A_{\oplus}}\] (B9) \[M_{\rm H_{2}O} \approx\frac{p_{\rm H_{2}O}A_{\oplus}}{g}\] (B10) Taking a time derivative of Equation (B10) yields \[\frac{dM_{\rm H_{2}O}}{dt}\approx\frac{A_{\oplus}}{g}\frac{dp_{\rm H_{2}O}}{dt}\] (B11) We assume that the only process changing the H\({}_{2}\)O mass in the atmosphere is condensation, which occurs in our model when steam becomes saturated. We further assume that the H\({}_{2}\)O partial pressure is fixed at saturation once steam condensation begins. We approximate the saturation vapor pressure of H\({}_{2}\)O, \(p_{\rm H_{2}O}^{\rm sat}\), using the Clausius-Clapeyron equation, assuming a temperature-independent latent heat, \(l_{\rm H_{2}O}\), \[p_{\rm H_{2}O}^{\rm sat}=p_{0}\exp\left(\frac{l_{\rm H_{2}O}\mu_{\rm H_{2}O}}{R}\left(\frac{1}{T_{0}}-\frac{1}{T}\right)\right)\] (B12) \(p_{0}\) and \(T_{0}\) are reference pressures and temperatures, respectively. Taking a time derivative of Equation (B12) yields \[\frac{dp_{\rm H_{2}O}^{\rm sat}}{dt}=\left(\frac{l_{\rm H_{2}O}\mu_{\rm H_{2}O}}{RT^{2}}\right)\frac{dT_{s}}{dt}p_{\rm H_{2}O}^{\rm sat}\] (B13) Substituting Equation (B13) into Equation (B11) gives \[\frac{dM_{\rm H_{2}O}}{dt}=\frac{A_{\oplus}}{g}\left(\frac{l_{\rm H_{2}O}\mu_{\rm H_{2}O}}{RT^{2}}\right)\frac{dT_{s}}{dt}p_{\rm H_{2}O}^{\rm sat}\] (B14) Finally, we can substitute Equation (B14) into Equation (B7) and rearrange to solve for \(dT_{s}/dt\). The result below gives the rate of change of temperature when the steam is too hot to condense (\(p_{\rm H_{2}O}>p_{\rm H_{2}O}^{\rm sat}\)), and when the steam is condensing (\(p_{\rm H_{2}O}=p_{\rm H_{2}O}^{\rm sat}\)). \[\frac{dT_{s}}{dt}=\begin{cases}-\frac{g}{pc_{p}}F_{\rm net}&p_{\rm H_{2}O}>p_{\rm H_{2}O}^{\rm sat}\\ -\frac{g}{pc_{p}}F_{\rm net}\left(1+\frac{l_{\rm H_{2}O}^{2}\mu_{\rm H_{2}O}p_{\rm H_{2}O}^{\rm sat}}{pc_{p}RT^{2}}\right)^{-1}&p_{\rm H_{2}O}=p_{\rm H_{2}O}^{\rm sat}\end{cases}\] (B15) Equations (B3) and (B15) are a system of ordinary differential equations, which we approximately solve over time using the CVODE BDF method developed by Sundials Computing (Hindmarsh et al., 2005). Additionally, for both gas-phase and surface reactions, we make use of the Cantera software library (Goodwin et al., 2022) to compute chemical production and destruction rates. Our code for solving the equations derived in this section is available at the following Zenodo link: [https://doi.org/10.5281/zenodo.7802966](https://doi.org/10.5281/zenodo.7802966).

## Appendix C The _Photochem_ model

To simulate the photochemistry of post-impact reducing atmospheres, we developed a photochemical model called _Photochem_. The model is a re-written and vastly updated version of _PhotochemPy_ (Wogan et al., 2022). _Photochem_ is written in modern Fortran and C, with a Python interface made possible by Cython (Behnel et al., 2010). This article uses _Photochem_ version v0.3.14 archived in the following Zenodo repository: [https://doi.org/10.5281/zenodo.7802921](https://doi.org/10.5281/zenodo.7802921). The following sections briefly describe the fundamental model equations solved by _Photochem_ and our chemical network, and validate the model against observations of Earth and Titan.
### Model equations

We begin our derivation of the equations governing _Photochem_ with modified versions of Equations B.1, B.2 and B.29 in Catling and Kasting (2017): \[\frac{\partial n_{i}}{\partial t}=-\frac{\partial}{\partial z}\Phi_{i}+P_{i}-L_{i}-R_{i,\text{ rainout}}+Q_{i,\text{ cond}}\] (C16) \[\Phi_{i,\text{gas}}=-K_{zz}n\frac{\partial}{\partial z}\left(\frac{n_{i}}{n}\right)-n_{i}D_{i}\left(\frac{1}{n_{i}}\frac{\partial n_{i}}{\partial z}+\frac{1}{H_{i}}+\frac{1}{T}\frac{\partial T}{\partial z}+\frac{\alpha_{Ti}}{T}\frac{\partial T}{\partial z}\right)\] (C17) \[\Phi_{i,\text{particle}}=-K_{zz}n\frac{\partial}{\partial z}\left(\frac{n_{i}}{n}\right)-w_{i}n_{i}\] (C18) Table A1 explains the variables and their units. Equation (C16) states that molecule concentration (\(n_{i}\) in molecules cm\({}^{-3}\)) changes over time at a point in space because of vertical transport (\(\frac{\partial}{\partial z}\Phi_{i}\)) and because of chemical reactions, rainout, and condensation/evaporation (\(P_{i}\), \(L_{i}\), \(R_{i,\text{ rainout}}\), and \(Q_{i,\text{ cond}}\)). The equation is 1-D because it only considers vertical gas transport, and it differs from Equation B.1 in Catling and Kasting (2017) because we explicitly include rainout and condensation. Equation (C17) states that the flux of gases (\(\Phi_{i,\text{gas}}\)) is determined by eddy and molecular diffusion, and Equation (C18) assumes that the flux of particles (\(\Phi_{i,\text{particle}}\)) is given by eddy diffusion and the rate at which particles fall through the atmosphere. Many 1-D photochemical models further simplify Equation (C16) by assuming that the total number density does not change over time (\(\partial n/\partial t\approx 0\)). Using this assumption, Equation (C16) is recast in terms of evolving mixing ratios (\(f_{i}\)) rather than number densities (see Appendix B.1 in Catling and Kasting (2017) for a derivation). Such models assume a time-constant temperature profile. The surface pressure is also prescribed, and pressures above the surface are computed with the hydrostatic equation. In order to guarantee that all mixing ratios in the atmosphere sum to 1, models assume a background filler gas with a mixing ratio \(f_{\text{background}}=1-\sum_{i}f_{i}\). N\({}_{2}\), CO\({}_{2}\) or H\({}_{2}\) are common choices for the background gas, depending on the atmosphere under investigation. By definition, the background gas is not conserved. This approach is valid for steady-state photochemical calculations, and is also reasonable for atmospheric transitions that maintain approximately constant surface pressure and atmospheric temperature. The _Photochem_ code contains an implementation of this traditional approach to photochemical modeling. Unfortunately, solving a simplified version of Equation (C16) in terms of mixing ratios does not work well for post-impact atmospheric modeling. For example, a post-impact atmosphere can contain 10 bars of H\({}_{2}\) which escapes to space over millions of years, lowering the surface pressure to a 1 bar N\({}_{2}\)-dominated atmosphere (e.g. Figure 6). Traditional photochemical models fail to simulate this scenario because it is not reasonable to assume a single background gas and time-constant surface pressure. Additionally, most models fix the atmospheric temperature during any single model integration, but surface temperature should change significantly as impact-generated H\({}_{2}\) escapes to space.
Therefore, _Photochem_ implements a code that solves Equation (C16) in terms of number densities (\(n_{i}\)) without the assumption of a fixed surface pressure or a background gas. This approach requires slight modifications to Equations (C17) and (C18), which we describe below. Consider the hydrostatic equation and ideal gas law \[\frac{\partial p}{\partial z} =\frac{-gp\bar{\mu}}{N_{a}kT}\] (C19) \[p =nkT\] (C20) Substituting the ideal gas law into the hydrostatic equation yields \[\frac{\partial}{\partial z}(nT)=\frac{-gn\bar{\mu}}{N_{a}k}\] (C21) \[n\frac{\partial T}{\partial z}+T\frac{\partial n}{\partial z}=\frac{-gn\bar{\mu}}{N_{a}k}\] (C22) After rearrangement and substituting the definition of scale height, \[\frac{1}{n}\frac{\partial n}{\partial z}=-\frac{1}{H_{a}}-\frac{1}{T}\frac{\partial T}{\partial z}\] (C23) Now consider the following expansion using the quotient rule \[\frac{\partial}{\partial z}\left(\frac{n_{i}}{n}\right)=\frac{1}{n}\frac{\partial n_{i}}{\partial z}-\frac{n_{i}}{n^{2}}\frac{\partial n}{\partial z}\] (C24) Substituting Equation (C23) into Equation (C24) and rearranging gives \[n\frac{\partial}{\partial z}\left(\frac{n_{i}}{n}\right)=\frac{\partial n_{i}}{\partial z}+\frac{n_{i}}{H_{a}}+\frac{n_{i}}{T}\frac{\partial T}{\partial z}\] (C25) Finally, we can substitute Equation (C25) into Equations (C17) and (C18) to derive new equations for the flux of gases and particles \[\Phi_{i,\text{gas}}=-K_{zz}n_{i}\left(\frac{1}{n_{i}}\frac{\partial n_{i}}{\partial z}+\frac{1}{H_{a}}+\frac{1}{T}\frac{\partial T}{\partial z}\right)-n_{i}D_{i}\left(\frac{1}{n_{i}}\frac{\partial n_{i}}{\partial z}+\frac{1}{H_{i}}+\frac{1}{T}\frac{\partial T}{\partial z}+\frac{\alpha_{Ti}}{T}\frac{\partial T}{\partial z}\right)\] (C26) \[\Phi_{i,\text{particle}}=-K_{zz}n_{i}\left(\frac{1}{n_{i}}\frac{\partial n_{i}}{\partial z}+\frac{1}{H_{a}}+\frac{1}{T}\frac{\partial T}{\partial z}\right)-w_{i}n_{i}\] (C27) We then apply a finite-volume approximation to the Equation (C16) system of partial differential equations, using the fluxes for gases and particles given by Equations (C26) and (C27), which results in a system of ordinary differential equations. We use a second-order centered scheme for all spatial derivatives except falling particles, which use a first-order upwind scheme for stability. _Photochem_ evolves the finite-volume approximation forward in time using the CVODE BDF method developed by Sundials Computing (Hindmarsh et al., 2005). The model assumes no background gas, and the surface pressure can evolve over time as, for example, gases escape to space. Additionally, our model computes a self-consistent temperature structure within each time step using the _Clima_ radiative transfer code (Appendix D), assuming a pseudo-moist adiabatic troposphere connected to an isothermal upper atmosphere. An additional challenge of post-impact atmospheres is that the scale height changes by a factor of \(\sim 10\) or more when H\({}_{2}\) escapes, leaving behind a N\({}_{2}\)- or CO\({}_{2}\)-dominated atmosphere (Figure 6). Most relevant photochemistry occurs at pressures \(>10^{-7}\) bar, and so we choose a model domain which starts at the surface and extends to an altitude at approximately this pressure. However, suppose we choose a model domain extending to \(\sim 1000\) km (i.e. the \(10^{-7}\) bar level) appropriate for an H\({}_{2}\)-dominated atmosphere.
After H\({}_{2}\) escapes to space, all relevant photochemistry would occur below \(\sim 100\) km, in the bottom several grid cells of the model. Therefore, the important photochemistry would be poorly resolved and inaccurate, and the extremely small pressures at the top of the model domain would likely cause numerical instability. Our solution is to adaptively adjust the model domain so it is always appropriate for the atmosphere's scale height. We use the root-finding functionality in CVODE BDF to halt integration whenever the pressure at the top of the atmosphere falls below \(10^{-7}\) bar, and lower the top of the model domain before continuing integration. This procedure is done automatically tens to hundreds of times during each post-impact integration.

### Chemical network, photolysis cross sections and thermodynamic data

Our chemical reactions, photolysis cross sections, and thermodynamic data used for all gas-phase kinetics are archived in the following Zenodo repository: [https://doi.org/10.5281/zenodo.7802962](https://doi.org/10.5281/zenodo.7802962). Chemical reactions and thermodynamic data are in the file "reaction_mechanisms/zahnle_earth.yaml", and photolysis cross sections are in the folder "xsections/". All thermodynamic data is from the NIST Chemistry WebBook (Linstrom & Mallard, 1998). The chemical and photolysis reactions are an updated version of the rates presented in Zahnle et al. (2016). In this article, our model simulates rainout in droplets of water for the following species: particles, OH, CN, HCN, C\({}_{2}\)H\({}_{4}\), NO, HO\({}_{2}\), N\({}_{2}\)O, H\({}_{2}\)O\({}_{2}\), O\({}_{3}\), NO\({}_{2}\), NO\({}_{3}\), HNO\({}_{2}\), HNO\({}_{3}\), C\({}_{2}\)H\({}_{6}\), CH\({}_{3}\)OH, CH\({}_{3}\)CHO, C\({}_{3}\)H\({}_{6}\), CH\({}_{3}\)CN.

### Model validation

Figure 14 shows _Photochem_ applied to Earth and Titan compared to observations gathered from the literature. All boundary conditions and settings for each model are archived in the "ModernEarth" and "Titan" templates in the following Zenodo repository: [https://doi.org/10.5281/zenodo.7802921](https://doi.org/10.5281/zenodo.7802921). Our model of Titan fixes the surface CH\({}_{4}\) volume mixing ratio to 0.015, permits H\({}_{2}\) escape at the diffusion-limited rate, and allows aerosols to fall to Titan's surface, but otherwise has zero-flux boundary conditions. We ignore the effects of galactic cosmic rays, which causes our model to under-predict the nitrile haze production in the lower atmosphere (Lavvas et al., 2008). Additionally, we neglect ion chemistry, which is argued to be important for the formation of large hydrocarbons (e.g., C\({}_{6}\)H\({}_{6}\)), but inconsequential for smaller molecular weight species. Despite these omissions, _Photochem_ broadly reproduces the main cyanide chemistry on Titan.

### Deposition velocity of HCN

Our photochemical-climate simulations of post-impact atmospheres assume a HCN surface deposition velocity of \(7\times 10^{-3}\) cm s\({}^{-1}\). Here, we describe a simple model of HCN hydrolysis in the ocean which justifies this value. Motivated by Appendix 3 in Kharecha et al. (2005), we imagine a two-box ocean model with a surface ocean of depth \(\sim 100\) m and a deep ocean (\(\sim 4\) km). We assume HCN transport into the ocean is governed by a stagnant boundary layer model (see Figure 3 in Kharecha et al. (2005)), and that HCN is destroyed in the ocean by hydrolysis reactions.
HCN is mixed between the surface and deep ocean reservoirs by a turnover velocity, \(v_{\rm over}\), which we nominally take to be \(1.2\times 10^{-5}\) cm s\({}^{-1}\), a value appropriate for modern Earth. Under these circumstances, the following system of ordinary differential equations governs the concentration of HCN in the surface and deep ocean. \[\frac{dm_{\rm HCN,s}}{dt}=\frac{\Phi_{\rm HCN}}{Cz_{s}}-k_{\rm tot }m_{\rm HCN,s}-\left(\frac{v_{\rm over}}{z_{s}}\right)\left(m_{\rm HCN,s}-m_{ \rm HCN,d}\right)\] (C28) \[\frac{dm_{\rm HCN,d}}{dt}=-k_{\rm tot}m_{\rm HCN,s}+\left(\frac{ v_{\rm over}}{z_{d}}\right)\left(m_{\rm HCN,s}-m_{\rm HCN,d}\right)\] (C29) Here, \(m_{\rm HCN,s}\) and \(m_{\rm HCN,d}\) are the concentrations of HCN in the surface and deep ocean, respectively, in mol L\({}^{-1}\), \(C\) is a constant equal to \(6.022\times 10^{20}\) molecules mol\({}^{-1}\) L cm\({}^{-3}\), \(z_{s}\) is the depth of the surface ocean, and \(z_{d}\) is the depth of the deep ocean. We compute the temperature- and pH-dependent hydrolysis rate coefficient, \(k_{\rm tot}\), following Miyakawa et al. (2002). \(\Phi_{\rm HCN}\) is the HCN flux into the ocean in molecules cm\({}^{-2}\) s\({}^{-1}\), which is determined by a stagnant boundary layer model: \[\Phi_{\rm HCN}=v_{\rm p,HCN}(\alpha_{\rm HCN}10^{-6}p_{\rm HCN}-m_{\rm HCN,s})C\] (C30) We assume the piston velocity of HCN is \(5\times 10^{-3}\) cm s\({}^{-1}\), the same as the piston velocity of CO (Kharecha et al., 2005, Table 1). Also, \(\alpha_{\rm HCN}\) is the Henry's law coefficient for HCN. The flux of a gas can also be parameterized with a deposition velocity (\(v_{\rm d,HCN}\)): \[\Phi_{\rm HCN} =n_{\rm HCN}v_{\rm d,HCN}\] (C31) \[=\frac{p_{\rm HCN}}{kT}v_{\rm d,HCN}\] Assuming a steady state (\(dm_{\rm HCN,s}/dt=dm_{\rm HCN,d}/dt=0\)) and solving for \(v_{\rm d,HCN}\) in Equations (C28)-(C31) yields \[v_{\rm d,HCN}=10^{-6}kT\alpha_{\rm HCN}Ck_{\rm tot}v_{\rm p,HCN}\frac{k_{\rm tot }z_{d}z_{s}+v_{\rm over}(z_{d}+z_{s})}{k_{\rm tot}z_{d}(v_{\rm p,HCN}+k_{\rm tot }z_{s})+v_{\rm over}(v_{\rm p,HCN}+k_{\rm tot}(z_{d}+z_{s}))}\] (C32) Here, we assume that the temperature and pH of the ocean are uniform, and that the temperature of the surface air is the same as the temperature of the ocean. Figure A10 shows the deposition velocity of HCN computed using Equation (C32) over a wide range of ocean temperatures and pH. Kadoya et al. (2020) used a model of the geologic carbon cycle to argue that the Hadean ocean was moderately alkaline (pH \(\approx 8\)). Therefore, we choose a HCN deposition velocity of \(7\times 10^{-3}\) cm s\({}^{-1}\) for our nominal model because it is a reasonable approximation of the pH \(=8\) case over a wide range of temperatures. Additionally, we assume that HCCCN has the same deposition velocity as HCN, also set by hydrolysis reactions in the ocean. We have re-run our photochemical-climate simulations of post-impact atmospheres with order-of-magnitude larger and smaller HCN deposition velocities. The results are qualitatively unchanged. For example, assuming \(v_{\rm d,HCN}=7\times 10^{-4}\) cm s\({}^{-1}\) for a \(2\times 10^{21}\) kg impactor in Figure 7 causes one order of magnitude smaller HCN ocean deposition. However, the HCN rainout and HCN surface pressure are unchanged because HCN rainout dominates over the HCN ocean deposition.
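A minimal sketch evaluating Equation (C32) is given below. The Henry's law coefficient and hydrolysis rate in the example call are placeholder values, not results from this work; the actual model evaluates \(k_{\rm tot}\) as a function of temperature and pH following Miyakawa et al. (2002).

```python
# Evaluate Equation (C32) for the HCN deposition velocity.
# Units follow Equations (C28)-(C32): lengths in cm, velocities in cm s^-1,
# k_tot in s^-1, alpha_HCN in mol L^-1 bar^-1, and k_B*T in erg.

k_B    = 1.380649e-16  # Boltzmann constant, erg K^-1
C      = 6.022e20      # molecules mol^-1 L cm^-3
z_s    = 1.0e4         # surface ocean depth: 100 m in cm
z_d    = 4.0e5         # deep ocean depth: 4 km in cm
v_p    = 5e-3          # HCN piston velocity, cm s^-1
v_over = 1.2e-5        # ocean turnover velocity, cm s^-1

def v_dep_HCN(T, alpha_HCN, k_tot):
    """Deposition velocity (cm s^-1) from Equation (C32)."""
    num = k_tot * z_d * z_s + v_over * (z_d + z_s)
    den = (k_tot * z_d * (v_p + k_tot * z_s)
           + v_over * (v_p + k_tot * (z_d + z_s)))
    return 1e-6 * k_B * T * alpha_HCN * C * k_tot * v_p * num / den

# Placeholder solubility and hydrolysis rate, for illustration only
print(v_dep_HCN(T=298.0, alpha_HCN=10.0, k_tot=1e-8))
```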
## Appendix D The _Clima_ Radiative Transfer and Climate Model

To simulate the climate of post-impact atmospheres, we developed a new radiative transfer and climate code called _Clima_. We approximately solve the radiative transfer equation using standard two-stream methods (Toon et al., 1989). The code includes opacities for photolysis, Rayleigh scattering, and collision-induced absorption, and approximates line absorption with k-distributions. All available opacities and citations, except photolysis cross sections, are listed in Table A2. To account for line absorption by multiple species, we use the "random overlap with resorting and rebinning" method described in Amundsen et al. (2017). Figure A11 shows a thermal emission spectrum computed with _Clima_ for a two-bar pure CO\({}_{2}\) atmosphere on Mars with a 250 K surface temperature. This same benchmark has been computed by several other radiative transfer codes: SOCRATES (Wolf et al., 2022, Figure 2), ExoRT (Wolf et al., 2022, Figure 2), SMART (Figure A11), and the radiative transfer code used in Kopparapu et al. (2013) (their Figure 1). All codes estimate the total outgoing thermal energy to be between 86 and 94 W m\({}^{-2}\), which is comparable to the value computed by _Clima_ (92.9 W m\({}^{-2}\)).

The _Clima_ code also includes an adiabatic climate model which we use in Section 4.4.2. Given partial pressures of gases at the surface, the code draws a pseudo-adiabat temperature profile upward using Equation (1) in Graham et al. (2021) until the temperature reaches an assumed isothermal stratosphere. The code is general and can consider any number of condensing species, but H\({}_{2}\)O is the only relevant condensible for post-impact atmospheres. Finally, to find an equilibrium climate, we solve a nonlinear equation for the surface temperature that balances incoming solar and outgoing longwave radiation. Each iteration of the nonlinear solve involves drawing an adiabat upward and then computing the solar and infrared radiative fluxes. We have validated the stand-alone climate model in _Clima_ by reproducing the calculations in Wordsworth et al. (2017) of early Mars with CO\({}_{2}\) and H\({}_{2}\) atmospheres (left panel of Figure 2 in Wordsworth et al. (2017)). Furthermore, we predict the runaway greenhouse limit to be 291 W m\({}^{-2}\), which is in acceptable agreement with the literature (e.g., Kopparapu et al., 2013). Finally, we have also confirmed that our code for drawing pseudo-moist adiabats reproduces the code used in Graham et al. (2021). The version of _Clima_ used in this article (v0.3.7) is archived on Zenodo ([https://doi.org/10.5281/zenodo.8060772](https://doi.org/10.5281/zenodo.8060772)), while the most up-to-date version can be found on GitHub ([https://github.com/Nicholaswogan/clima](https://github.com/Nicholaswogan/clima)).
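As a schematic of this equilibrium step (not the _Clima_ implementation; the gray-atmosphere `olr` and constant `asr` functions below are stand-ins for the real radiative-flux calculations), one can solve for the surface temperature that zeroes the net top-of-atmosphere flux:

```python
from scipy.optimize import brentq

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def olr(T_surf):
    """Stand-in for outgoing longwave radiation (W m^-2). In the real code
    this follows from drawing a pseudo-moist adiabat from T_surf and running
    the two-stream infrared calculation; here, a gray approximation."""
    tau = 1.0  # placeholder gray infrared optical depth
    return SIGMA * T_surf**4 / (1.0 + 0.75 * tau)

def asr(T_surf):
    """Stand-in for absorbed solar radiation (W m^-2); taken constant here,
    though in general it depends on the atmospheric state."""
    return 240.0  # placeholder: roughly Earth's global-mean absorbed flux

# Root of the net flux gives the equilibrium surface temperature
T_eq = brentq(lambda T: asr(T) - olr(T), 150.0, 500.0)
print(f"Equilibrium surface temperature: {T_eq:.1f} K")
```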
2306.10119
Early Spectroscopy and Dense Circumstellar Medium Interaction in SN 2023ixf
We present the optical spectroscopic evolution of SN~2023ixf seen in sub-night cadence spectra from 1.18 to 14 days after explosion. We identify high-ionization emission features, signatures of interaction with material surrounding the progenitor star, that fade over the first 7 days, with rapid evolution between spectra observed within the same night. We compare the emission lines present and their relative strength to those of other supernovae with early interaction, finding a close match to SN~2020pni and SN~2017ahn in the first spectrum and SN~2014G at later epochs. To physically interpret our observations we compare them to CMFGEN models with confined, dense circumstellar material around a red supergiant progenitor from the literature. We find that very few models reproduce the blended N III/C III emission lines observed in the first few spectra and their rapid disappearance thereafter, making this a unique diagnostic. From the best models, we find a mass-loss rate of $10^{-3}-10^{-2}$ M$_\odot$ yr$^{-1}$, which far exceeds the mass-loss rate for any steady wind, especially for a red supergiant in the initial mass range of the detected progenitor. These mass-loss rates are, however, similar to rates inferred for other supernovae with early circumstellar interaction. Using the phase when the narrow emission features disappear, we calculate an outer dense radius of circumstellar material $R_\mathrm{CSM, out}\sim5\times10^{14}~\mathrm{cm}$ and a mean circumstellar material density of $\rho=5.6\times10^{-14}~\mathrm{g\,cm^{-3}}$. This is consistent with the lower limit on the outer radius of the circumstellar material we calculate from the peak H$\alpha$ emission flux, $R_\text{CSM, out}\gtrsim9\times10^{13}~\mathrm{cm}$.
K. Azalee Bostroem, Jeniveve Pearson, Manisha Shrestha, David J. Sand, Stefano Valenti, Saurabh W. Jha, Jennifer E. Andrews, Nathan Smith, Giacomo Terreran, Elizabeth Green, Yize Dong, Michael Lundquist, Joshua Haislip, Emily T. Hoang, Griffin Hosseinzadeh, Daryl Janzen, Jacob E. Jencson, Vladimir Kouprianov, Emmy Paraskeva, Nicolas E. Meza Retamal, Daniel E. Reichart, Iair Arcavi, Alceste Z. Bonanos, Michael W. Coughlin, Ross Dobson, Joseph Farah, Lluís Albany, Claudia Gutiérrez, Suzanne Hawley, Leslie Hebb, Daichi Hiramatsu, D. Andrew Howell, Takashi Iijima, Ilya Ilyin, Kiran Jhass, Curtis McCully, Sean Moran, Brett M. Morris, Alessandra C. Mura, Tomás Müller-Bravo, James Munday, Megan Newsome, Maria Th. Pabst, Paolo Ochner, Estefania Padilla Gonzalez, Andrea Pastorello, Craig Pellegrino, Lara Piscarreta, Aravind P. Ravi, Andrea Reguitti, Laura Salo, Jozsef Vinko, Kellie de Vos, J. C. Wheeler, G. Grant Williams, Samuel Wyatt
2023-06-16T18:04:24Z
http://arxiv.org/abs/2306.10119v2
# Early Spectroscopy and Dense Circumstellar Medium Interaction in SN 2023ixf

###### Abstract

We present the optical spectroscopic evolution of SN 2023ixf seen in sub-night cadence spectra from 1.18 to 14 days after explosion. We identify high-ionization emission features, signatures of interaction with material surrounding the progenitor star, that fade over the first 7 days, with rapid evolution between spectra observed within the same night. We compare the emission lines present and their relative strength to those of other supernovae with early interaction, finding a close match to SN 2020pni and SN 2017ahn in the first spectrum and SN 2014G at later epochs. To physically interpret our observations we compare them to CMFGEN models with confined, dense circumstellar material around a red supergiant progenitor from the literature. We find that very few models reproduce the blended N iii (\(\lambda\lambda 4634.0,4640.6\))/C iii (\(\lambda\lambda 4647.5,4650.0\)) emission lines observed in the first few spectra and their rapid disappearance thereafter, making this a unique diagnostic. From the best models, we find a mass-loss rate of \(10^{-3}-10^{-2}\) M\({}_{\odot}\) yr\({}^{-1}\), which far exceeds the mass-loss rate for any steady wind, especially for a red supergiant in the initial mass range of the detected progenitor. These mass-loss rates are, however, similar to rates inferred for other supernovae with early circumstellar interaction. Using the phase when the narrow emission features disappear, we calculate an outer dense radius of circumstellar material \(R_{\rm CSM,out}\sim 5\times 10^{14}\) cm and a mean circumstellar material density of \(\rho=5.6\times 10^{-14}\) g cm\({}^{-3}\). This is consistent with the lower limit on the outer radius of the circumstellar material we calculate from the peak H\(\alpha\) emission flux, \(R_{\rm CSM,\ out}\gtrsim 9\times 10^{13}\) cm.

Core-collapse supernovae (304), Type II supernovae (1731), Circumstellar matter (241), Stellar mass loss (1613), Red supergiant stars (1375)

+ Footnote †: LSSTC Catalyst Fellow

## 1 Introduction

Type II supernovae (SNe; hydrogen-rich, specifically Type IIP/L) are thought to come from red supergiant (RSG) progenitors, with masses of \(\sim\)8-25 M\({}_{\odot}\) (Smartt et al., 2009; Smartt, 2015). While there is a consensus that massive stars enrich their environments through mass loss, there is no model that quantitatively predicts observed RSG mass-loss rates (Kee et al., 2021, and references therein). Empirical mass-loss rates derived from direct observations span orders of magnitude (Mauron and Josselin, 2011). While the empirical prescription most often used in single-star evolutionary models is that of de Jager et al. (1988) (which spans \(10^{-7}\)-\(10^{-3.8}\) M\({}_{\odot}\) yr\({}^{-1}\) for RSG luminosities of log(\(L/L_{\odot}\)) = 3.9-5.8), recent analyses have found evidence of significantly higher (Ekstrom et al., 2012; Massey et al., 2023) and lower (Beasor et al., 2020) mass-loss rates. Recently, observations of both the early light curves and spectra of Type II SNe show evidence of dense circumstellar material (Khazov et al., 2016; Morozova et al., 2018; Bruch et al., 2022; Subrayan et al., 2023), indicating more extreme mass loss (e.g. eruptive mass loss or a superwind), processes known to occur in more massive stars (e.g. luminous blue variables, LBVs; Smith et al., 2011).
Regardless of the mechanism, all massive stars lose mass, and therefore we expect the photons and ejecta from all of their resultant SN explosions to interact with the circumstellar material (CSM) surrounding the progenitor at some level (see Smith, 2014, for a review). One of the signatures of CSM interaction in CCSNe is narrow emission lines corresponding to highly-ionized species in their early spectra. Narrow emission lines can first occur when photons from shock breakout ionize surrounding CSM (e.g. Yaron et al., 2017). As the shock passes through the CSM, the kinetic energy of the ejecta is converted to high-energy photons that can also ionize the CSM ahead of the photosphere (e.g. Leonard et al., 2000; Smith et al., 2015; Terreran et al., 2022). The recombination of the ionized gas leads to emission features, with the photons scattering off the ionized electrons to produce Lorentzian line wings. This produces high-ionization features, which are a function of the temperature, density, and composition of the CSM (e.g. O vi, O v, N v, N iv, C iv, He ii). As the CSM cools, higher ionization features give way to lower ionization features (e.g. N iii, O iii) and eventually all emission lines fade. At the same time, narrow P Cygni profiles can develop if the CSM is dense enough and sufficiently cool (e.g. Terreran et al., 2022; Benetti et al., 2016; Leonard et al., 2000). These profiles can develop into intermediate-width features as the CSM begins to be accelerated by the shock. Eventually, the ejecta sweep up or engulf the CSM and the spectrum begins to develop as a normal Type II SN with broad P Cygni profiles, often with a shallow absorption component. However, even at this phase the CSM interaction can contribute to the light curve (Dessart and Hillier, 2022; Andrews and Smith, 2018; Smith et al., 2015; Smith, 2017). Analyses of samples of CCSNe with early spectroscopic observations show that a significant fraction of nearby CCSNe display these features (Khazov et al., 2016; Bruch et al., 2021, 2022). Detailed modeling of these flash features can constrain the progenitor mass-loss rate just prior to explosion, the surface chemical composition, as well as the extent of the confined CSM (e.g. Dessart et al., 2017; Boian & Groh, 2019, 2020). There are now dozens of examples of early flash spectroscopy, and several cases where the observations have been modeled or been compared to models in some detail (e.g. Yaron et al., 2017; Boian & Groh, 2020; Tartaglia et al., 2021; Terreran et al., 2022; Jacobson-Galan et al., 2022), but the time evolution of the flash ionization lines has rarely been captured due to their ephemeral nature. SN 2023ixf was discovered in M101 (D=6.85 Mpc; Riess et al., 2022) by Koichi Itagaki on 2023-05-19 17:27:15 UTC (all times given in this paper are in UTC; MJD 60083.72) at a magnitude of 14.9 AB mag in a Clear filter (Itagaki, 2023). It was classified on 2023-05-19 23:35:34 (MJD 60083.98) as a Type II SN with flash ionization features (H, He, C, and N), using a spectrum taken a few hours after discovery (Perley & Gal-Yam, 2023). Over the first \(\sim\)5 days, it rapidly rose to a plateau brightness of \(V\approx 11.2\) mag, or \(M_{V}\approx-18.2\) at the distance to M101, a similar brightness to the well-studied Type IIP SN 2004et in NGC 6946 (e.g. Maguire et al., 2010). The early photometric evolution is detailed in a companion paper, Hosseinzadeh et al. (2023).
In this paper, we present the remarkable early spectroscopic evolution of SN 2023ixf with flash features observed in extraordinary detail. In Section 2, we describe our spectroscopic observations, while in Section 3 we present basic properties of SN 2023ixf relevant for our work. From there, in Section 4 we discuss the fast spectroscopic evolution of SN 2023ixf and compare it with existing observational data sets. In Section 5, we compare our unprecedented flash spectroscopic sequence to existing radiative transfer models to infer the mass-loss rate of the progenitor star, and in Section 6, we use the spectroscopic evolution to further characterize the CSM of SN 2023ixf and place it in the context of other interacting SNe. We summarize and conclude in Section 7.

## 2 Spectroscopic Observations

Immediately following the discovery announcement, we began a high-cadence, comprehensive campaign to observe the detailed evolution of SN 2023ixf with the Arizona Transient Exploration and Characterization (AZTEC) collaboration, Distance Less Than 40 Mpc (DLT40) collaboration, and the Global Supernova Project. We observed SN 2023ixf using the moderate-resolution optical spectrograph Hectospec (Fabricant et al., 2005) on the MMT on Mt. Hopkins, AZ from 2023-05-20 to 2023-05-26. The observed spectra were reduced using an IDL pipeline called HSRED1 and then flux calibrated using IRAF (Tody, 1986, 1993). Further optical spectroscopy was obtained using the FLOYDS spectrograph on Faulkes Telescope North (FTN) through the Global Supernova Project. Spectra were reduced with standard methods using a custom IRAF-based pipeline (Valenti et al., 2014). Adding to our high-cadence spectroscopic coverage, we observed SN 2023ixf with the Alhambra Faint Object Spectrograph (ALFOSC) on the Nordic Optical Telescope (NOT; Proposal 67-112, PI: Bonanos). Observations were reduced with standard reduction techniques using IRAF. We add publicly available ALFOSC observations from the NUTS2 collaboration2 (Stritzinger et al., 2023). In addition, we observed SN 2023ixf with the Multi-Object Double Spectrographs (Pogge et al., 2010) on the Large Binocular Telescope (LBT). Data were bias and flat-field corrected using the modsCCDred package (Pogge, 2019), then extracted and flux calibrated with IRAF. Spectroscopic observations were taken with the Boller and Chivens Spectrograph (B&C) on the University of Arizona's Bok 2.3 m telescope located at Kitt Peak Observatory. These observations were reduced using standard IRAF reduction techniques. An optical spectrum was also taken with the Low Resolution Spectrograph 2 (LRS2, Chonis et al., 2016) on the Hobby-Eberly Telescope (HET, Ramsey et al., 1998; Hill et al., 2021) at McDonald Observatory on 2023-06-02. The data from the red and blue arms (LRS2-R and LRS2-B) were combined into a single spectrum covering the spectral region from 3600 to 10500 Å. The Integral Field Unit (IFU) spectra were reduced by the Panacea pipeline3. We collected one 30-minute exposure with the Astrophysical Research Consortium Echelle Spectrograph (ARCES), with resolution \(R\sim 31,000\), on the ARC 3.5 m Telescope at Apache Point Observatory (APO). We reduced the spectra with IRAF and aesop (Morris & Dorn-Wallenstein, 2018). Further optical spectra were obtained with the 1.22-m Galileo telescope+B&C at the Asiago Astrophysical Observatory, Italy, which were reduced using an IRAF-based pipeline.
To this dataset we add publicly available reduced spectroscopic observations from the Liverpool Telescope (LT) archive (SPRAT; Perley, 2023; Steele et al., 2004; Piascik et al., 2014) and the Transient Name Server4 (HFOSC, SPRAT).

Footnote 1: [http://mingus.as.arizona.edu/~bjw/mmt/hecto_reduction.html](http://mingus.as.arizona.edu/~bjw/mmt/hecto_reduction.html)

Footnote 2: The Nordic Optical Telescope Unbiased Transient Survey 2; [https://nuts.sn.ie](https://nuts.sn.ie)

Footnote 3: [https://github.com/grzeimann/Panacea](https://github.com/grzeimann/Panacea)

Footnote 4: [https://www.wis-tns.org/object/2023ixf](https://www.wis-tns.org/object/2023ixf)
A complete list of spectroscopic observations is given in Appendix A and shown in Figure 1 and Figure 2. Spectra will be made available on Wiserep5 (Yaron and Gal-Yam, 2012).

Footnote 5: [https://www.wiserep.org](https://www.wiserep.org)

## 3 Fundamental Supernova Parameters

The nearby host galaxy of SN 2023ixf, M101, also hosted the Type Ia SN 2011fe. We adopt the distance modulus to M101, derived using the Leavitt Law, from Riess et al. (2022): \(\mu=29.178\pm 0.041\) mag (D=6.85 Mpc). For Milky Way extinction, we use the SN location and the dust maps of Schlafly and Finkbeiner (2011)6 to find \(E(B-V)=0.0077\pm 0.0002\) mag. Measuring the equivalent width of the Na i D lines in high-resolution observations and using the relationship of Poznanski et al. (2012), Smith et al. (2023) find an average host \(E(B-V)=0.031\) mag with \(\pm 30\%\) uncertainty from the relation between Na i D and extinction, a value which is consistent with other Na i D extinction measurements made with high-resolution data (Lundquist et al., 2023).

Footnote 6: [https://irsa.ipac.caltech.edu/applications/DUST/index.html](https://irsa.ipac.caltech.edu/applications/DUST/index.html)

M101 is a popular target for both amateur and professional observers, and images of the galaxy taken by amateur astronomers (Mao et al., 2023) prior to discovery provided the last deep non-detection (2023-05-18 15:50:24; MJD 60082.66) and first detection (2023-05-18 20:29; MJD 60082.85), with \(\lesssim\)5 hours separating them. Following Hosseinzadeh et al. (2023), we define the explosion epoch as halfway between the last non-detection and first detection: 2023-05-18 18:00:00 (MJD 60082.75 \(\pm\) 0.10), where the adopted uncertainty is the span between the explosion epoch and the last non-detection (or first detection).

## 4 Spectroscopic Evolution

Tracking the rapid evolution of SN 2023ixf, we observed it at least four times per night for the first 5 days and at least nightly thereafter. While high-cadence spectra have been obtained for a select few SNe over the first few days of evolution (Yaron et al., 2017; Terreran et al., 2022), SN 2023ixf is the first to have intra-night observations for the first week. Over the first two weeks, the spectra evolve from strong, narrow emission lines with broad wings, to a nearly featureless spectrum, and finally develop Balmer P Cygni profiles with shallow absorption, more typical of the early evolution of Type II SNe.
To identify spectral features, we use the second spectrum, taken 1.36 days after explosion, as it has higher signal-to-noise and resolution than the classification spectrum. From this spectrum, we identify H i, He i, He ii, C iii, C iv, O iii, N iii, and N iv lines. A full list of species is given in Appendix B. We see a rapid evolution in the first 0.5 days of our spectra (1.18-1.67 days), which is shown in detail in Figure 2. First, we turn to the complex of lines around 4700 Å in the top panel of the figure. Over this epoch, the blend of N iii (\(\lambda\lambda 4634.0,4640.6\))/C iii (\(\lambda\lambda 4647.5,4650.0\)) fades into the broad blueshifted wing of He ii (\(\lambda 4685.5\)) while the latter line increases in strength. Similarly, the He i lines visible in the first spectrum rapidly fade until they are no longer visible 0.5d later. Other lines such as O iii, C iii, and N iv (\(\lambda\) 5074 Å) that are marginally detected in the first few spectra also disappear on this time scale. At the same time, N iv (\(\lambda\lambda 7103,7109,\lambda 7122\)), He ii (\(\lambda 4685.5\)), and C iv (\(\lambda\lambda\) 5801.3, 5811.98 Å) increase in strength. The most persistent features through day 5 are H i (\(\lambda 4101.73\)) and possibly N iii (\(\lambda 4097.33\); although it is not possible to determine if two distinct lines exist at this resolution), He ii (\(\lambda 4685.5\)), and C iv (\(\lambda\lambda 5801.3,5811.98\)). Interestingly, N iii (\(\lambda\lambda 4634.0,4640.6\))/C iii (\(\lambda\lambda 4647.5,4650.0\)) become visible again as a broad shelf in the He ii (\(\lambda 4685.5\)) blue wing around day 4, as the He ii (\(\lambda 4685.5\)) line fades. Over this time, the strength of all these narrow features decreases to a nearly featureless spectrum 7 days after explosion. The one line still clearly present in the spectrum after day 6.5 is H\(\alpha\), which appears to develop an asymmetric emission profile, reminiscent of a P Cygni profile with shallow absorption. The intermediate width of this feature indicates the presence of CSM that has been accelerated by the shock. Details of its evolution are presented by Smith et al. (2023). Around day 12, clear broad P Cygni profiles from the SN ejecta are present in the high-order Balmer features, although the profile at H\(\alpha\) is significantly more complex with no clear absorption component.

### Comparison to other Flash SNe

A number of SNe have been observed to have narrow emission lines early in their evolution, often disappearing within the first week. In Figure 3 we compare the spectra of SN 2023ixf at day 1, 3, 7, and 14 to those of SN 1998S (Leonard et al., 2000), SN 2013fs (Yaron et al., 2017), SN 2013cu (Gal-Yam et al., 2014), SN 2014G (Terreran et al., 2016), SN 2017ahn (Tartaglia et al., 2021), and SN 2020pni (Terreran et al., 2022) at comparable phases. Among the SNe in this sample, SN 2013fs is the most distinct. It showed the highest ionization species (e.g. O v \(\lambda 5541\) and O vi \(\lambda\lambda 3811,3834\); Yaron et al. 2017) in the earliest spectra that are not present in SN 2023ixf or other SNe in this sample.

Figure 1.1: The evolution of the optical spectra of SN 2023ixf over the first \(\sim 5.2\) days, with the earliest epoch at the top of the figure. Spectra are color coded by the instrument used to observe them, normalized by a black body fit, and corrected for redshift. Emission lines are identified at their rest wavelengths with vertical lines and labeled at the top of the figure, while the most prominent telluric feature is marked with the shaded gray region. Throughout this sequence, the spectra evolve from showing strong, narrow emission lines from high-ionization species to intermediate-width features as the CSM is accelerated by the shock. The spectra evolve rapidly between 1.18–1.67d, with the He i and N iii (\(\lambda\lambda 4634.0,4640.6\))/C iii (\(\lambda\lambda 4647.5,4650.0\)) lines disappearing within the first 0.5d from the first spectrum while He ii (\(\lambda 4685.5\)), N iv, and C iv gain strength.
Figure 1.2: A continuation of Figure 1.1 showing the evolution of the optical spectra of SN 2023ixf from day 5.2 to 14.5, with the earliest epoch at the top of the figure. Over the first 7 days, the spectra evolve from showing strong, narrow emission lines from high-ionization species to a nearly featureless spectrum with an intermediate-width P Cygni profile in H\(\alpha\). In the subsequent 7 days, broad P Cygni profiles develop in the higher order Balmer features.

Figure 2: The evolution of SN 2023ixf from day 1.18 through day 3, colored by phase. Ions are labeled in each panel and the phase is given in the middle panel. Emission lines from the low-ionization levels, He i, N iii, C iii, O iii, and N iv (\(\lambda\) 5074 Å), disappear over the first 0.5 days of evolution while emission from high-ionization levels, N iv (\(\lambda\lambda 7103,7109,\lambda 7122\)), He ii (\(\lambda 4685.5\)), and C iv (\(\lambda\lambda\) 5801.3, 5811.98 Å), increases in strength. Spectra are fit with a blackbody function and normalized, corrected for redshift, and offset for readability. In the bottom panel, the B-band telluric feature is marked with a gray shaded region.

However, it is possible that these features were present at earlier times in SN 2023ixf and, by the first spectrum at day 1.18, had disappeared. While there are emission lines from some of the same ions in the day 1.5 spectra of SN 2013fs and SN 2023ixf, many are not present in the SN 2013fs spectrum, especially between 3800-4500 Å and 5000-6500 Å, and N v (\(\lambda\) 4604) is absent from the SN 2023ixf spectrum. Additionally, the emission in all lines is weaker in SN 2013fs and fades more rapidly. SN 2023ixf more closely resembles the other comparison objects in Figure 3, although it does not always match their evolution. The first spectrum closely resembles that of SNe 2020pni and 2017ahn. However, some features, e.g. N iii (\(\lambda\lambda 4634.0,4640.6\))/C iii (\(\lambda\lambda 4647.5,4650.0\)), are persistent at day 3 in SNe 2020pni and 2017ahn, but are not visible in SN 2023ixf. This likely indicates that the CSM of SN 2023ixf is cooling more slowly than that of SN 2020pni or SN 2017ahn, perhaps due to a lower density. At day 3, we also have spectra from SN 1998S, SN 2013cu, and SN 2014G. At this phase, SN 2023ixf is best matched to SN 2014G, a trend that continues throughout the two-week evolution, while SN 1998S and SN 2013cu more closely resemble SN 2020pni and SN 2017ahn. At day 7, most features have faded and H\(\alpha\) has been replaced by an intermediate-width P Cygni profile in SN 2023ixf, in contrast to the narrow P Cygni profiles in SNe 2020pni and 1998S. Finally, at day 14, SN 2020pni and SN 1998S still show narrow P Cygni profiles, while SN 2023ixf and SN 2014G are starting to develop broad P Cygni profiles.
The disappearance of these features can be interpreted as the ejecta enveloping the CSM, which would indicate that the radial extent of the CSM of SN 2023ixf is smaller than that of SN 2020pni and SN 1998S, or that the ejecta velocity is higher.

## 5 Comparison to Models

Variations in mass-loss rate, SN luminosity, surface abundance, and CSM density profile can all affect the characteristics of the narrow emission line spectrum and its evolution. In this section, we compare our spectroscopic dataset to two sets of publicly available model grids (Boian and Groh, 2020; Dessart et al., 2017), which vary different parameters, using the best-fit model to characterize the progenitor, CSM, and SN properties. We fit a blackbody to both the model and observed data and normalize by this before comparing them. This removes any temperature continuum effects, instead only examining flux relative to the continuum.

### Boian and Groh models

Boian and Groh (2019) propose that the narrow features produced by the CSM around Type II SNe can be used to constrain the abundances of this material and therefore the mass of the progenitor system. They predict different SN line diagnostics for low-mass red supergiant progenitors (8-15 M\({}_{\odot}\)); massive red supergiant, yellow hypergiant, and blue supergiant progenitors (15-30 M\({}_{\odot}\)); and stripped stars like LBV and N-rich Wolf-Rayet progenitors (15-30 M\({}_{\odot}\)) for observations 1 to a few days after explosion. These progenitors are then modeled for high-, medium-, and low-luminosity SNe using the radiation transport code CMFGEN (Hillier and Miller, 1998; Hillier and Dessart, 2012; Dessart et al., 2013; Hillier and Dessart, 2019). The primary difference between the low- and high-mass progenitors is in the abundances of the surface material. Low-mass progenitors should experience weak or no CNO processing. High-mass progenitors, on the other hand, are expected to have significant CNO-processed material. Finally, stripped stars should be He-rich. Table 1 gives the predicted spectroscopic signatures of low- and high-mass progenitors and different luminosity SNe. In our 1.36d spectrum we see no O vi (\(\lambda\lambda 3811,3834\)), which rules out low-mass, high-luminosity SNe. We also do not see C iii (\(\lambda 5697\)), which is one of three diagnostics of the low-mass, medium-luminosity system. We do identify N iv (\(\lambda 4058\), \(\lambda\lambda 7109,7122\)), N iii (\(\lambda\lambda 4634,4640\)), C iii (\(\lambda\lambda 4647,4650\)), and C iv (\(\lambda\lambda 5801,5811\)), which are features of the remaining progenitor mass-luminosity combinations. Thus, with these diagnostics we cannot point to a single likely progenitor class. We note that this diagnostic depends on uncertain physical parameters in single-star evolutionary models, such as mixing efficiency, mass-loss rates, and convective overshoot, which complicate the connection between surface abundance, as measured from CSM, and initial mass. We visually compare our 1.36d spectrum to the models of Boian and Groh (2019) for all mass-loss rates, luminosities, and progenitors. The clear presence of both N iii (\(\lambda 4634\)) and He ii (\(\lambda 4685.5\)), with He ii stronger than N iii, greatly limits the number of possible models to \(L=1.5\times 10^{9}\) L\({}_{\odot}\) (which they define as medium luminosity) and \(\dot{M}=3\times 10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\).
The wind velocity of these models is \(v_{w}=150\) km s\({}^{-1}\), which is consistent with the observed CSM velocity (Smith et al., 2023). In comparing to observed spectra, Boian and Groh (2020) first identify the temperature via the ionization level present. We also selected the best models based on continuum-normalized spectra, looking only at the relative ionization levels. In their model, the temperature and luminosity are related via the Stefan-Boltzmann law. Thus, after identifying the temperature, they scale to the observed luminosity, scaling the radius to maintain the derived temperature. They also derive a scale factor for the mass-loss rate, assuming that \(\dot{M}\propto L_{SN}^{3/4}\). We duplicate this analysis for SN 2023ixf for the three surface abundances (which do not produce significantly different results), concluding that the mass-loss rate of the progenitor of SN 2023ixf was \(\dot{M}\approx 4.5\times 10^{-3}\)\(\rm M_{\odot}\,yr^{-1}\) and the scaled luminosity for each abundance is \(L\approx 2.6\times 10^{9}\)\(\rm L_{\odot}\). From here we examine the solar abundance spectra, corresponding to the low-mass RSG scenario; the CNO-processed surface abundance, corresponding to the high-mass RSG, BSG, YSG scenario; and the He-rich abundance, corresponding to the LBV, WN, stripped star scenario. These are compared to an observed spectrum in Figure 4. We find that no one scenario matches the line strengths of the observation. While many of the lines are well matched in the solar abundance model, the C iv (\(\lambda\lambda 5801,5811\)) and H\(\alpha\) are greatly over-estimated by the model. The C iv (\(\lambda\lambda 5801,5811\)) is better represented in the CNO abundance model (a result of the suppression of C and O); however, the N in our observed spectrum is significantly weaker than the model, countering the expectation that these stars would be nitrogen enriched. Finally, in the He-rich model, the H i line is well-modeled and the He ii (\(\lambda 4685.5\)) line is stronger in the model than in the observation. Like the CNO abundance model, the N lines are too strong in the model. It is possible that, rather than indicating surface abundance, these discrepancies arise from a mismatch between the physical and model CSM density and/or the hardness of the radiation field.

Figure 3: A comparison of SN 2023ixf (black) to SN 1998S (teal; Leonard et al., 2000), SN 2013fs (blue; Yaron et al., 2017), SN 2013cu (green; Gal-Yam et al., 2014), SN 2014G (purple; Terreran et al., 2016), SN 2017ahn (mustard; Tartaglia et al., 2021), and SN 2020pni (pink; Terreran et al., 2022) \(\sim\)1, 3, 7, and 14 days after explosion. These SNe all have flash features that evolve over the first two weeks. The SN name and epoch are marked to the right of the plot. All spectra have been redshift and extinction corrected and then normalized by a blackbody fit to the continuum. Emission lines identified in the first spectrum are marked with vertical lines which are labeled at the top of the figure. In the first epoch, SN 2023ixf closely resembles SNe 2020pni and 2017ahn, although at later epochs it more closely follows SN 2014G (which does not have a spectrum at day 1.5). SN 2013fs is considerably different in its evolution throughout the first two weeks. At later epochs, SN 2020pni and SN 1998S develop narrow P Cygni profiles that are not present in the spectra of other SNe.

### Dessart & Hillier models

In another study, Dessart et al.
(2017) model the spectroscopic signature of CSM interaction with a variety of RSG mass-loss rates (\(\dot{M}=10^{-6}-10^{-2}\)\(\rm M_{\odot}\)\(\rm yr^{-1}\)) and atmospheric density scale heights (\(H_{\rho}=0.01,0.1,0.3R_{*}\)). The base of each RSG model is a 15 M\({}_{\odot}\) star onto which they add an atmosphere with a given density scale height, which transitions to wind mass loss when the density of the atmosphere equals the density of the wind. The wind is then extended to \(R_{\rm out,CSM}=5\times 10^{14}\) cm for all but one model, which is extended to \(R_{\rm out,CSM}=2\times 10^{14}\) cm. At this point all models transition to an \(\dot{M}=10^{-6}\)\(\rm M_{\odot}\,yr^{-1}\) wind. The parameters of each model are summarized in Table 2. Each of these models is evolved from shock breakout to over 10 days, and snapshots of the spectra and light curves are reported. Given the uncertainties in the explosion epoch for the observations and the challenges of defining explosion in the models (e.g. core collapse vs. shock breakout, varying shock breakout time scales in dense CSM), we visually identify the model that best matches our day 1.36 spectrum. This spectrum is not fully reproduced by any of the models. Particularly challenging are the blended N iii (\(\lambda\lambda 4634.0,4640.6\))/C iii (\(\lambda\lambda 4647.5,4650.0\)) complex and the He i lines, which are clearly visible in the observed spectrum at day 1.36 and quickly fade below detection by day 2.0. These lines are not present in most of the model spectra at any epoch. More broadly, these emission features in the model spectra fade much more rapidly than in the observed spectra, if they are present at all. This implies that the CSM in SN 2023ixf extends beyond that of the models or has a higher density (e.g. the r1w6 model is the only one with narrow emission lines that persist past 2 days). We compare our observed spectra with the full suite of models for all mass-loss rates and epochs. In the weak-wind models (r1w1, r1w1h, r2w1), the only spectrum in the time series with narrow emission features is the first spectrum at shock breakout. As our lines are clearly present throughout the first 5-7 days of evolution, we do not examine these models further. The remaining models (r1w4, r1w6, r1w5r, and r1w5h) show multiple epochs of narrow emission features. These features give way to narrow P Cygni profiles in the He ii (\(\lambda 4685.5\)) and H\(\alpha\) lines, which are eventually replaced by broader features originating from the bulk motion of the ejecta. The spectra of SN 2023ixf never show N v (\(\lambda 4610\)), which is present in the early spectra of all of the strong-wind models, although in some models this has faded by the phase of our first spectrum (e.g. r1w4 in Figure 5). Instead, we see N iii (\(\lambda 4636\)), which has a similar flux to He ii (\(\lambda 4686\)) in the classification spectrum and then fades over the subsequent 0.5d. Although the r1w4 and r1w6 models show this feature just prior to the development of the narrow P Cygni features, it is always significantly weaker than the He ii emission. Given the rapid evolution of this feature, this could be a function of the model sampling. We note that we do not see evidence of the rise of N v described by Jacobson-Galan et al. (2023) in our spectra. Rather, we find the asymmetric blue wings of He ii (\(\lambda 4685.5\)) to be more consistent with N iii (\(\lambda\lambda 4634.0,4640.6\))/C iii (\(\lambda\lambda 4647.5,4650.0\)).
We find the best agreement with the r1w4 and r1w6 models which, while not able to reproduce the observed line ratios, show N iii, He i (\(\lambda 7065\)), C iv (\(\lambda\lambda 5801,5811;\lambda 7110\)), and N iv (\(\lambda\)4057, \(\lambda\)7122) features. Although present, the C iv and N iv are significantly stronger in the first model spectra than in the observed spectra. Additionally, the strength of the Balmer emission lines is better matched in these models, as is the lack of O v (\(\lambda\)5597). Figure 5 shows the spectral evolution of SN 2023ixf compared to the r1w4 model. While the first spectrum matches well, the narrow lines disappear from the model by day 2 and narrow P Cygni profiles emerge. In the final spectrum, the model shows significantly more H\(\alpha\) emission than the observation. Interestingly, the mass-loss rate of this model is \(10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\), consistent with the conclusions from the comparison to the models of Boian & Groh (2019). The first r1w6 model spectrum on day 1.3 clearly shows N v. Additionally, the r1w6 model time series does not show the N iii shoulder on He ii until 1-2 days after it has disappeared from the observed spectra; however, the features are otherwise reasonably matched. The disappearance of the narrow emission lines and the evolution of H\(\alpha\) are better matched in the r1w6 model. This is consistent with the findings of Jacobson-Galan et al. (2023), who find that the best-fit model in their custom grid is the r1w6 model with a larger radius. From this we conclude that the mass-loss rate for SN 2023ixf was between \(10^{-3}\) and \(10^{-2}\) M\({}_{\odot}\) yr\({}^{-1}\). The spectra themselves are sensitive to density, which scales as \(\rho\propto\dot{M}/v_{w}\); thus a different mass-loss rate is inferred if the assumed velocity is different. The wind velocity in these models is \(v_{w}=50\) km s\({}^{-1}\), which is about a factor of 3 smaller than the measured velocity. However, given the order of magnitude range in mass-loss rate inferred from these models, we do not further modify the mass-loss rates.

## 6 Model-independent CSM constraints

The properties and evolution of the narrow emission lines allow us to compute order-of-magnitude estimates of the CSM characteristics. A lower limit on the outer CSM radius can be calculated from the brightest H\(\alpha\) flux (\(F_{\rm H\alpha}\); Yaron et al., 2017; Ofek et al., 2013). Briefly, the H\(\alpha\) luminosity (\(L_{\rm H\alpha}\)) is produced by the recombination of ionized H. Assuming a spherically symmetric CSM that is composed of hydrogen, all of which is ionized, we can use the H\(\alpha\) luminosity to calculate the total CSM mass. The total mass can also be calculated by integrating a constant velocity wind density profile over the radial extent of the CSM. Equating these two relations allows us to determine the radius of the CSM: \[r\gtrsim\frac{\kappa L_{\rm H\alpha}}{A} \tag{1}\] where \(\kappa=0.34\) cm\({}^{2}\) g\({}^{-1}\) is the electron scattering opacity of the CSM. We use the distance \(D\) to convert \(F_{\rm H\alpha}\) to \(L_{\rm H\alpha}\) via \(L_{\rm H\alpha}=4\pi D^{2}F_{\rm H\alpha}\).
\(A\) is defined as \[A=\frac{4\pi h\nu_{H}\alpha_{H}^{\rm eff}}{\mu_{p}m_{p}^{2}} \tag{2}\] where \(\nu_{H}=4.56\times 10^{14}\) Hz is the frequency of H\(\alpha\), \(\alpha_{H}^{\rm eff}=8.7\times 10^{-14}\) cm\({}^{3}\) s\({}^{-1}\) is the H recombination coefficient for case B recombination at \(T_{\rm eff}=10000\) K, \(\mu_{p}=0.5\) is the mean molecular weight, \(m_{p}\) is the proton mass, and \(h\) is Planck's constant. We treat Equation 1 as a lower limit on the outer CSM radius, as either the composition or the ionization assumption may not be true, which would lead to a larger CSM radius than calculated here. Additionally, we assume that the CSM above the emitting region is transparent to the H\(\alpha\) photons. If this is not true, it would also lead to a larger CSM radius. To measure the flux of H\(\alpha\), we fit a blackbody to the continuum and subtract it from the flux of our 1.36 day spectrum, which has the maximum H\(\alpha\) flux of our spectral series. To the continuum-subtracted flux, we simultaneously fit broad and narrow Lorentzian emission profiles. We integrate this fit from 6300-6800 Å to find \(F_{\rm H\alpha}=3.18\times 10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\).

\begin{table} \begin{tabular}{c c c} \hline \hline Luminosity & Low-mass Progenitor (8–15 M\({}_{\odot}\)) & High-mass Progenitor (15–30 M\({}_{\odot}\)) \\ \hline Low luminosity (\(\sim 1.9\times 10^{9}\) L\({}_{\odot}\)) & C iii (\(\lambda\lambda 4647,4650\)) & C iv (\(\lambda\lambda 5801,5811\)); lack of C iii (\(\lambda 5697\)) \\ \hline Medium luminosity (\(3.9\times 10^{8}-3.1\times 10^{9}\) L\({}_{\odot}\)) & C iii (\(\lambda 5697\)); C iv (\(\lambda\lambda 5801,5811\)); N iii (\(\lambda\lambda 4634,4640\)) & N iv (\(\lambda 4058\), \(\lambda\lambda 7109,7122\)); N iii (\(\lambda\lambda 4634,4640\)) \\ \hline High luminosity (\(>6.3\times 10^{9}\) L\({}_{\odot}\)) & O vi (\(\lambda\lambda 3811,3834\)) & Lack of O vi (\(\lambda\lambda 3811,3834\)) \\ \hline \end{tabular} \end{table} Table 1: Predicted Spectroscopic Signatures from Boian & Groh (2019)

\begin{table} \begin{tabular}{c c c c c} \hline \hline Model Name & Mass-loss Rate (M\({}_{\odot}\) yr\({}^{-1}\)) & Radius (R\({}_{\odot}\)) & Scale Height (\(R_{*}\)) & Transition Radius\({}^{*}\) (cm) \\ \hline r1w1 & \(10^{-6}\) & 501 & 0.01 & \(5\times 10^{14}\) \\ r1w2 & \(10^{-5}\) & 501 & 0.01 & \(5\times 10^{14}\) \\ r1w3 & \(10^{-4}\) & 501 & 0.01 & \(5\times 10^{14}\) \\ r1w4 & \(10^{-3}\) & 501 & 0.01 & \(5\times 10^{14}\) \\ r1w5 & \(5\times 10^{-3}\) & 501 & 0.01 & \(5\times 10^{14}\) \\ r1w6 & \(10^{-2}\) & 501 & 0.01 & \(5\times 10^{14}\) \\ r1w1h & power law\({}^{\dagger}\) & 501 & 0.3 & \(5\times 10^{14}\) \\ r1w5r & \(10^{-5}\) & 501 & 0.01 & \(2\times 10^{14}\) \\ r2w1 & \(10^{-6}\) & 1107 & 0.01 & \(5\times 10^{14}\) \\ \hline \end{tabular} Note. – \({}^{\dagger}\)Power-law density profile with an exponent of 12. Note. – \({}^{*}\)Radius at which the density transitions to \(\dot{M}=10^{-6}\) M\({}_{\odot}\) yr\({}^{-1}\). \end{table} Table 2: CSM Parameters for the Models of Dessart et al. (2017)

Figure 4: The day 1.36 spectrum (black) compared with three different surface abundance models from Boian and Groh (2019) with \(L=1.5\times 10^{9}\) L\({}_{\odot}\) and \(\dot{M}=3\times 10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\). The solar abundance represents a low-mass RSG (yellow), the CNO-processed abundance represents a high-mass RSG/BSG/YSG (blue), and the He-rich abundance represents an LBV/WN/stripped star (pink). Vertical dashed lines represent the ions that are identified in the observed spectra, which are labeled at the top of the plot. Vertical dotted lines in light gray show ions that are not detected but are part of the progenitor diagnostics detailed in Boian and Groh (2019). We convolve the model spectra with a Gaussian kernel to mimic the resolution of the observed spectra.
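A minimal sketch of the Lorentzian decomposition described above (illustrative only, not our reduction pipeline; the `wave` and `flux_cs` arrays are synthetic placeholders standing in for the continuum-subtracted day 1.36 spectrum):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(x, a_n, g_n, a_b, g_b, x0):
    """Narrow plus broad Lorentzian components sharing a center x0;
    g_n and g_b are half-widths at half maximum (Angstroms)."""
    narrow = a_n * g_n**2 / ((x - x0)**2 + g_n**2)
    broad = a_b * g_b**2 / ((x - x0)**2 + g_b**2)
    return narrow + broad

# Synthetic placeholder data: a narrow+broad H-alpha profile with noise
rng = np.random.default_rng(0)
wave = np.linspace(6300.0, 6800.0, 1000)               # Angstroms
truth = (1e-14, 2.0, 1e-15, 30.0, 6563.0)              # amp/width guesses
flux_cs = two_lorentzians(wave, *truth) + rng.normal(0.0, 1e-17, wave.size)

# Fit both components simultaneously, then integrate the best-fit model
popt, _ = curve_fit(two_lorentzians, wave, flux_cs, p0=truth)
grid = np.linspace(6300.0, 6800.0, 20001)              # dense grid resolves
dx = grid[1] - grid[0]                                 # the narrow component
F_Halpha = np.sum(two_lorentzians(grid, *popt)) * dx   # erg cm^-2 s^-1
print(F_Halpha)
```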
Using a distance of 6.85 Mpc, we find \(L_{\rm H\alpha}=1.78\times 10^{39}\) erg s\({}^{-1}\) and \(R_{\rm CSM,\ out}\gtrsim 8.7\times 10^{13}\) cm. Assuming a spherical wind with a constant velocity, mass-loss rate, and homogeneous density structure, the density can be calculated from the radius: \[\rho=\frac{1}{\kappa\,r} \tag{3}\] From this, we calculate a density of \(3.4\times 10^{-14}\) g cm\({}^{-3}\). Assuming a typical RSG wind of \(v_{w}\approx 10\) km s\({}^{-1}\), this mass loss event would have begun \(\gtrsim 3\) yr before explosion. However, using the wind velocity derived from high-resolution spectroscopy in Smith et al. (2023), \(v_{w}\approx 150\) km s\({}^{-1}\), we find a much smaller start time of \(\sim 2\) months prior to explosion. The narrow features from the CSM are only present when there is unshocked, photoionized CSM in front of the SN shock and ejecta. Therefore, we expect these features to disappear when the material producing them is swept up by the ejecta, and we can use this information to calculate the radius of the CSM. In SN 2023ixf, the unshocked, narrow features disappear from H\(\alpha\) 3-4 days after explosion. Smith et al. (2023) find this corresponds to a radius of \(R=(3-5)\times 10^{14}\) cm. However, at this phase, there is still material in front of the photosphere which produces intermediate-width lines and eventually an intermediate-width P Cygni profile in H\(\alpha\). These intermediate-width lines disappear around day \(\sim\)6-7. On day 13, clear broad absorption is visible in H\(\beta\) and from this we approximate an average ejecta velocity of \(\sim 9000\) km s\({}^{-1}\). Putting this together with the time that the lines disappear, we calculate a CSM radius of \(R_{\rm CSM,out}\sim 5.4\times 10^{14}\) cm. Interestingly, this is exactly where the models of Dessart et al. (2017) transition to \(\dot{M}=10^{-6}\) M\({}_{\odot}\) yr\({}^{-1}\). Again, assuming a constant RSG wind of \(v_{w}\approx 10\) km s\({}^{-1}\) (or \(v_{w}\approx 150\) km s\({}^{-1}\)), this implies that the event began \(\sim 17\) yr (1 yr) before explosion. Assuming a constant wind density profile, the density is related to the radius by: \[\rho=\frac{\dot{M}}{4\pi v_{w}R_{\rm CSM}^{2}} \tag{4}\]

Figure 5: A comparison of the spectral evolution of SN 2023ixf (black) to the r1w4 model of Dessart et al. (2017) (pink). Phases are shown to the right of the figure, in pink for the model and in black for the observed spectra. While the first observation at 1.2 days matches fairly well, the model spectra evolve much more rapidly, with P Cygni profiles developing at day 2 and all emission disappearing by day 4, while the observations show emission through day 6-7. The model spectrum at 0.5d shows a number of lines that have faded by 1d (e.g. N v).
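For reproducibility, a short sketch evaluating Equation (3) at the Equation (1) lower-limit radius, and Equation (4) and the wind-crossing times at the radius where the narrow lines disappear, using the representative mass-loss rate of \(5\times 10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\) adopted below:

```python
import numpy as np

YEAR = 3.156e7            # seconds per year
MSUN = 1.989e33           # solar mass, g

kappa = 0.34              # cm^2 g^-1, electron-scattering opacity
R_low = 8.7e13            # cm, Equation (1) lower limit on the CSM radius
R_csm = 5.4e14            # cm, radius where the narrow lines disappear

# Equation (3): density at the Equation (1) lower-limit radius
print(1.0 / (kappa * R_low))                 # ~3.4e-14 g cm^-3

# Equation (4): wind density at R_csm for Mdot = 5e-3 Msun/yr, v_w = 10 km/s
Mdot = 5e-3 * MSUN / YEAR                    # g s^-1
v_w = 10e5                                   # cm s^-1
print(Mdot / (4 * np.pi * v_w * R_csm**2))   # ~8.5e-14 g cm^-3

# Wind-crossing times: when the mass-loss episode must have begun
for v in (10e5, 150e5):                      # 10 and 150 km s^-1
    print(R_csm / v / YEAR)                  # ~17 yr and ~1.1 yr
```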
Using a representative mass-loss rate of \(5\times 10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\), we find a density of \(\rho=8.5\times 10^{-14}\) g cm\({}^{-3}\) (\(\rho=5.6\times 10^{-14}\) g cm\({}^{-3}\)). This is consistent with the lower limit calculated from the H\(\alpha\) luminosity. We compare the mass-loss rates derived in Section 5 to a population of Type II SNe with flash ionization features to identify how unusual (or normal) these values are. We use the sample of Boian and Groh (2020), who use CMFGEN to model a sample of 17 Type II SNe that show narrow emission features in their early spectra, indicating CSM interaction. In Section 5, we used a spectrum taken 1.36 days after explosion to determine the mass-loss rate and SN luminosity for SN 2023ixf. Given our extensive time sampling, we investigate whether the epoch of the spectrum used to do this would affect which model was selected. We tested this using spectra taken on days 2.26, 3.16, and 5.15. Given the uncertainty in the epoch of the models, we consider all model epochs for each observed spectrum. We find that, in all cases, regardless of the epoch used, we would select the models and thus mass-loss rates from Section 5. Additionally, we find that with the day 5.15 spectrum, we would include the r1w5h model, which has a mass-loss rate between the two models we selected (r1w4 and r1w6). This demonstrates that the differences between the models are robust to the observed epoch as long as narrow features are present. With this confirmation, we add SN 2023ixf to Figure 8 from Boian and Groh (2020), which shows the SN luminosity and mass-loss rate for all SNe in their sample (Figure 6). We find the mass-loss rate of SN 2023ixf to be fairly low when compared to the range of mass-loss rates for other SNe with early interaction, although it is in no way an outlier. This is consistent with a lower-density CSM causing the persistence of the higher-ionization lines of N iv and C iv. While in line with other interacting events, the mass-loss rates determined for the confined CSM in this paper are significantly higher than those of steady-state RSG winds. Using the progenitor luminosity \(\log(L/\mathrm{L}_{\odot})=4.94\) found by Jencson et al. (2023) and the de Jager et al. (1988) relation, the expected mass-loss rate would be \(\dot{M}\sim 10^{-5.7}\) M\({}_{\odot}\) yr\({}^{-1}\), and even lower for the progenitor luminosity identified in Kilpatrick et al. (2023). Using the relationship of Beasor et al. (2020) and the stellar parameters of Jencson et al. (2023), the expected mass-loss rate would be \(\dot{M}\sim 2\times 10^{-6}\) M\({}_{\odot}\) yr\({}^{-1}\). Even using the enhanced mass-loss rates of Ekstrom et al. (2012), the mass-loss rate would only be \(\dot{M}\sim 10^{-5.2}\) M\({}_{\odot}\) yr\({}^{-1}\). This implies that the mass loss event which led to this CSM was vastly greater than the nominal RSG mass loss. On the other hand, the largest mass-loss rate that we find is consistent with the lower range of mass-loss rates required by Morozova et al. (2018) (assuming a 10 km s\({}^{-1}\) wind) to fit the rapid light curve rises seen in Type II SNe. We make a final note on the first 0.5d evolution of the N iii (\(\lambda\lambda 4634.0,4640.6\))/C iii (\(\lambda\lambda 4647.5,4650.0\)) emission line. If the CSM were suddenly ionized by a high-energy flash from shock breakout photons, then one would expect that the CSM is almost instantaneously (modulo the light travel time) ionized. Thus, as time passes in the days after the initial flash ionization, the spectral evolution should proceed from high-ionization to low-ionization species as the initially highly ionized CSM recombines over a timescale determined by the density of the CSM. However, we observe the opposite.
Spectra of SN 2023ixf show that relatively low-ionization species are present in the earliest epochs (i.e., narrow N iii (\(\lambda\lambda 4634.0,4640.6\))/C iii (\(\lambda\lambda 4647.5,4650.0\)) and He i emission on days 1-2). Over the following day, these fade away as other higher-ionization features (such as He ii (\(\lambda\)4685.5), N iv, and C iv) strengthen. This is also discussed by Smith et al. (2023) and Jacobson-Galan et al. (2023). This implies that we are witnessing a gradual or delayed ionization of the CSM, an effect never observed before. This effect seems inconsistent with a sudden flash ionization from shock breakout alone, and more indicative of a slowly varying source of ionization (i.e., radiation from the CSM interaction shock itself).
## 7 Summary & Conclusions
The comprehensive spectroscopic data set presented in this paper provides sub-day cadence spectroscopic observations for the first week of evolution and at least daily cadence through day 14. To our knowledge, no other SN with flash ionization features has been studied with this cadence for this length of time. The unique combination of cadence and duration of these observations will enable the community to trace the CSM density profile, and therefore the mass loss history of the progenitor, in a way that has never before been possible. These spectra show narrow high-ionization emission features with Lorentzian wings. We identify emission from H i, He i, He ii, C iii, C iv, O iii, N iii, and N iv. He i, C iii, N iii, and O iii fade over our first 0.5d of monitoring (\(\sim\)1.5-2d since explosion). We find the spectrum at 1.36d most closely resembles those of SNe 2017ahn and 2020pni and looks significantly different from that of SN 2013fs. Over time, the evolution closely matches that of SN 2014G, as the high-ionization features fade and eventually so do He ii and H i. The differences in the evolution of these different SNe imply that SN 2023ixf has a lower-density CSM than SNe 2017ahn and 2020pni and a smaller radial extent. By day 13, broad P Cygni profiles have developed, indicating emission from the SN ejecta. We compare the same 1.36d spectrum with the models of Boian & Groh (2019), finding the spectrum most closely resembles the models with \(L\approx\)2.6\(\times 10^{9}\) L\({}_{\odot}\) and \(\dot{M}\approx\)4.5\(\times 10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\), although we do not find any model that reproduces the line ratios in our observed spectra and therefore cannot use these diagnostics to infer a progenitor mass. We also relate the full spectral evolution over the first two weeks to the models of Dessart et al. (2017), which examine different mass-loss rates and atmospheric scale heights. We find the spectra of SN 2023ixf are best represented by the r1w4 and r1w6 models, corresponding to mass-loss rates of \(\dot{\rm M}=10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\) and \(\dot{\rm M}=10^{-2}\) M\({}_{\odot}\) yr\({}^{-1}\), respectively. However, we note that in the r1w4 model, the narrow emission lines disappear much more rapidly than we observe, indicating that a larger radial extent is required in the model. We find that, despite the rapid evolution of the spectrum over the first five days, the spectra are most consistent with the same models, regardless of the epoch used to identify them. Finally, we use the narrow lines to calculate the properties of the CSM.
Using the maximum H\(\alpha\) flux and assuming a spherical geometry, we find \(R_{\rm CSM,~{}out}\gtrsim 8.7\times 10^{13}\) cm, implying the CSM ejection began at least 3 years before explosion if it is expanding at 10 km s\({}^{-1}\), and 2 months before if it is expanding at the observed velocity of 150 km s\({}^{-1}\). At this radius, we find a CSM density of \(3.4\times 10^{-14}\) g cm\({}^{-3}\). Using the epoch at which the narrow emission features disappear, we find a consistent radius of \(R_{\rm CSM}=5.4\times 10^{14}\) cm. With this radius and an RSG wind of \(v_{w}\approx 10\) km s\({}^{-1}\) (\(v_{w}\approx 150\) km s\({}^{-1}\)), the CSM ejection began 17 years (1 year) before explosion and the density is \(\rho=8.5\times 10^{-14}\) g cm\({}^{-3}\) (\(\rho=5.6\times 10^{-14}\) g cm\({}^{-3}\)). Comparing SN 2023ixf to a sample of 17 SNe with early CSM interaction, we find both the mass-loss rate and the luminosity in line with the lower end of the distribution. We note that this analysis assumes spherically symmetric CSM. Asymmetric CSM (such as that proposed by Smith et al. 2023) or clumped CSM (e.g., Dessart et al. 2018) would alter these conclusions, although the details of the effect will depend on the exact configuration. SN 2023ixf is an extraordinary SN, combining proximity with early detection and classification, and tight constraints on the explosion epoch.
Figure 6: The SN luminosity and mass-loss rate derived by Boian & Groh (2020) for a sample of 17 SNe compared to the best model of SN 2023ixf (black). Upper limits on mass loss are shown with semi-transparent markers and arrows, while determined values are solid. We determine a scaled mass-loss rate of \(\dot{M}\approx 4.5\times 10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\) and a scaled luminosity of \(L\approx 2.6\times 10^{9}\)L\({}_{\odot}\) from the models of Boian & Groh (2019) (depending on the surface abundance), which we plot as black stars. In practice, these are located at virtually the same location on this plot. We also include the mass-loss rates of the r1w4 and r1w6 models of Dessart et al. (2017) (\(\dot{M}=10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\) and \(\dot{M}=10^{-2}\) M\({}_{\odot}\) yr\({}^{-1}\), respectively), which are shown as black triangles. Dessart et al. (2017) do not explore variations in luminosity and we therefore use the luminosity derived from the Boian & Groh (2019) solar abundance models and do not scale the mass-loss rates. The mass-loss rate of SN 2023ixf is in line with the lower end of the range of mass-loss rates in this sample.
The immediate announcement of the discovery and classification allowed us to harness our resources and observe the detailed evolution of the CSM interaction over the first two weeks. With these observations we identify a significantly higher mass-loss rate than the nominal RSG mass-loss rate of \(\dot{\rm M}=10^{-6}\) M\({}_{\odot}\) yr\({}^{-1}\). This indicates either a superwind or a period of eruptive mass loss (but see also Kochanek, 2019). While the models of both Boian & Groh (2019) and Dessart et al. (2017) are unable to match the temporal evolution and the relative flux ratios, they reproduce the majority of the emission lines present, and we find the prospects of a custom model matching the observations encouraging. The spectroscopic data set presented in this paper can help guide future modeling efforts and be used to benchmark the evolution of flash features in any SN.
## Acknowledgments This publication was made possible through the support of an LSSTC Catalyst Fellowship to K.A.B., funded through Grant 62192 from the John Templeton Foundation to LSST Corporation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of LSSTC or the John Templeton Foundation. Time domain research by the University of Arizona team and D.J.S. is supported by NSF grants AST-1821987, 1813466, 1908972, & 2108032, and by the Heising-Simons Foundation under grant #20201864. The research by Y.D., S.V., N.M., and E.H. is supported by NSF grant AST-2008108. J.E.A. is supported by the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation, on behalf of the Gemini partnership of Argentina, Brazil, Canada, Chile, the Republic of Korea, and the United States of America. The research of J.C.W. and J.V. is supported by NSF AST-1813825. J.V. is also supported by OTKA grant K-142534 of the National Research, Development and Innovation Office, Hungary. L.S. and M.W.C. acknowledge support from the National Science Foundation with grant numbers PHY-2010970 and OAC-2117997. A.Z.B. acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 772086). A.R. acknowledges support from ANID BECAS/DOCTORADO NACIONAL 21202412. We thank the MMT director, G. Williams, for granting Director's Discretionary Time for the Hectospec spectral sequence. This paper made use of the modsCCDRed data reduction code developed in part with funds provided by NSF Grants AST-9987045 and AST-1108693. A.P. and P.O. acknowledge support of the PRIN-INAF 2022 project "Shedding light on the nature of gap transients: from the observations to the models". We thank David Bohlender, Dmitry Monin, and James Di Francesco for obtaining DAO spectra and the whole LBT team, especially Alexander Becker and Jennifer Power. Based on observations obtained with the Hobby-Eberly Telescope (HET), which is a joint project of the University of Texas at Austin, the Pennsylvania State University, Ludwig-Maximillians-Universitaet Muenchen, and Georg-August-Universitaet Goettingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly. The Low Resolution Spectrograph 2 (LRS2) was developed and funded by the University of Texas at Austin McDonald Observatory and Department of Astronomy, and by Pennsylvania State University. We thank the Leibniz-Institut fur Astrophysik Potsdam (AIP) and the Institut fur Astrophysik Goettingen (IAG) for their contributions to the construction of the integral field units. The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. The data presented here were obtained in part with ALFOSC, which is provided by the Instituto de Astrofisica de Andalucia (IAA) under a joint agreement with the University of Copenhagen and NOT.
Based on observations made with the Nordic Optical Telescope, owned in collaboration by the University of Turku and Aarhus University, and operated jointly by Aarhus University, the University of Turku and the University of Oslo, representing Denmark, Finland and Norway, the University of Iceland and Stockholm University at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101004719 (ORP: OPTICON RadioNet Pilot). Observations reported here were obtained at the MMT Observatory, a joint facility of the University of Arizona and the Smithsonian Institution. Based on observations made with the 1.22 m Galileo Galilei Telescope of the University of Padova at the Asiago site. Facilities: ADS, ARC (ARCES), Asiago:Galileo (B&C), Bok (B&C), HCT (HFOSC), HET (LRS2), LBT (MODS), LCOGT (FLOYDS), Liverpool:2m (SPRAT), MMT (Hectospec), NED, NOT (ALFOSC), TNS Software: aesop (Morris & Dorn-Wallenstein, 2018), astropy (Astropy Collaboration, 2013; 2018; 2022), CMFGEN (Hillier & Miller, 1998; Hillier & Dessart, 2012, 2019; Dessart et al., 2013), FLOYDS (Valenti et al., 2014), HSRED, IRAF (Tody, 1986, 1993), Light Curve Fitting (Hosseinzadeh et al., 2023), Matplotlib (Hunter, 2007), MODS pipeline (Pogge, 2019), NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020) ## Appendix A Spectroscopic Observations Table 3 lists the date, telescope, instrument, and resolving power for each spectroscopic observation used in this paper. \begin{table} \begin{tabular}{r r r r r r} \hline \hline \multicolumn{1}{c}{ Phase (d)} & \multicolumn{1}{c}{Time} & \multicolumn{1}{c}{MJD} & \multicolumn{1}{c}{Telescope} & \multicolumn{1}{c}{Instrument} & \multicolumn{1}{c}{R (\(\lambda/\Delta\lambda\))} \\ \hline 1.18 & 2023-05-19 22:23:45 & 60083.93 & LT & SPRAT & 350 \\ 1.36 & 2023-05-20 02:39:56 & 60084.11 & NOT & ALFOSC & 360 \\ 1.54 & 2023-05-20 07:03:24 & 60084.29 & Bok & B\&C & 700 \\ 1.67 & 2023-05-20 10:00:43 & 60084.42 & MMT & Hectospec & 1325 \\ 2.13 & 2023-05-20 21:04:19 & 60084.88 & HCT & HFOSC & 350 \\ 2.26 & 2023-05-21 00:14:57 & 60085.01 & NOT & ALFOSC & 360 \\ 2.43 & 2023-05-21 04:19:39 & 60085.18 & MMT & Hectospec & 1325 \\ 2.50 & 2023-05-21 06:07:11 & 60085.25 & FTN & FLOYDS & 500 \\ 2.53 & 2023-05-21 06:40:49 & 60085.28 & Bok & B\&C & 700 \\ 2.66 & 2023-05-21 09:50:44 & 60085.41 & MMT & Hectospec & 1325 \\ 2.76 & 2023-05-21 12:07:58 & 60085.51 & FTN & FLOYDS & 500 \\ 3.09 & 2023-05-21 20:03:50 & 60085.84 & Galileo & B\&C & 2762 \\ 3.15 & 2023-05-21 21:41:41 & 60085.90 & LT & SPRAT & 350 \\ 3.16 & 2023-05-21 21:48:37 & 60085.91 & NOT & ALFOSC & 360 \\ 3.39 & 2023-05-22 03:21:44 & 60086.14 & LBT & MODS & 2075 \\ 3.42 & 2023-05-22 03:57:46 & 60086.17 & Bok & B\&C & 700 \\ 3.54 & 2023-05-22 07:00:07 & 60086.29 & FTN & FLOYDS & 500 \\ 3.61 & 2023-05-22 08:41:45 & 60086.36 & Bok & B\&C & 700 \\ 3.64 & 2023-05-22 09:15:15 & 60086.39 & MMT & Hectospec & 1325 \\ 4.02 & 2023-05-22 18:31:41 & 60086.77 & HCT & HFOSC & 350 \\ 4.19 & 2023-05-22 22:33:56 & 60086.94 & NOT & ALFOSC & 300 \\ 4.24 & 2023-05-22 23:50:00 & 60086.99 & Other\({}^{\dagger}\) & Other\({}^{\dagger}\) & -\({}^{\dagger}\) \\ 4.41 & 2023-05-23 03:43:23 & 60087.16 & MMT & Hectospec & 1325 \\ 4.45 & 2023-05-23 04:47:09 & 60087.20 & Bok & B\&C & 700 \\ 5.10 &
2023-05-23 20:16:48 & 60087.85 & Galileo & B\&C & 983 \\ 5.15 & 2023-05-23 21:33:06 & 60087.90 & NOT & ALFOSC & 300 \\ 5.20 & 2023-05-23 22:48:32 & 60087.95 & LT & SPRAT & 350 \\ \hline \end{tabular} \end{table} Table 3: Log of Spectroscopic Observations ## Appendix B Line Identification Table 4 gives the ion and wavelength of the lines identified and shown throughout this paper.
2305.00882
More Ramsey theory for highly connected monochromatic subgraphs
An infinite graph is said to be highly connected if the induced subgraph on the complement of any set of vertices of smaller size is connected. We continue the study of weaker versions of Ramsey Theorem on uncountable cardinals asserting that if we color edges of the complete graph we can find a large highly connected monochromatic subgraph. In particular, several questions of Bergfalk, Hru\v{s}\'ak and Shelah are answered by showing that assuming the consistency of suitable large cardinals the following are relatively consistent with $\mathsf{ZFC}$: $\kappa\to_{hc} (\kappa)^2_\omega$ for every regular cardinal $\kappa\geq \aleph_2$ and $\neg\mathsf{CH}+ \aleph_2 \to_{hc} (\aleph_1)^2_\omega$. Building on a work of Lambie-Hanson, we also show that $\aleph_2 \to_{hc} [\aleph_2]^2_{\omega,2}$ is consistent with $\neg\mathsf{CH}$. To prove these results, we use the existence of ideals with strong combinatorial properties after collapsing suitable large cardinals.
Michael Hrušák, Saharon Shelah, Jing Zhang
2023-05-01T15:33:31Z
http://arxiv.org/abs/2305.00882v2
# More Ramsey theory for highly connected monochromatic subgraphs ###### Abstract. An infinite graph is said to be highly connected if the induced subgraph on the complement of any set of vertices of smaller size is connected. We continue the study of weaker versions of Ramsey Theorem on uncountable cardinals asserting that if we color edges of the complete graph we can find a large highly connected monochromatic subgraph. In particular, several questions of Bergfalk, Hrusak and Shelah [5] are answered by showing that assuming the consistency of suitable large cardinals the following are relatively consistent with ZFC: * \(\kappa\to_{hc}(\kappa)_{\omega}^{2}\) for every regular cardinal \(\kappa\geq\omega_{2}\), * \(\neg\mathsf{CH}+\aleph_{2}\to_{hc}(\aleph_{1})_{\omega}^{2}\). Building on a work of Lambie-Hanson [14], we also show that * \(\aleph_{2}\to_{hc}[\aleph_{2}]_{\omega,2}^{2}\) is consistent with \(\neg\mathsf{CH}\). To prove these results, we use the existence of ideals with strong combinatorial properties after collapsing suitable large cardinals. _2010 MSC._ 03E02,03E10 _Key words and phrases._ highly connected graph, saturated ideal, partition relations, forcing. Research of the first author was partially supported by a PAPIIT grant IN101323 and CONACyT grant A1-S-16164. Research of the second author was partially supported by the NSF grant DMS 1833363 and by the Israel Science Foundation (ISF) grant 1838/19. Research of the third author was supported by the European Research Council (grant agreement ERC-2018-StG 802756), NSERC grants RGPIN-2019-04311, RGPIN-2021-03549 and RGPIN-2016-06541. The paper appears as number 1242 on the second author's publications list.
## 1. Introduction
**Definition 1.2**.: Fix \(k\in\omega\). We let \[\kappa\rightarrow_{hc}[\kappa]_{\omega,k}^{2}\] abbreviate the following: for any \(c:[\kappa]^{2}\rightarrow\omega\), there exist \(H\in[\kappa]^{\kappa}\) and \(K\in[\omega]^{k}\) such that \((H,c^{-1}(K)\cap[H]^{2})\) is highly connected. The following is a more refined variation of the highly connected partition relations, conditioned on the lengths of the paths. **Definition 1.3**.: Fix \(n\in\omega\cup\{\omega\}\). Let \(\kappa\rightarrow_{hc,<n}(\kappa)_{\omega}^{2}\) abbreviate \(\kappa\rightarrow_{hc}(\kappa)_{\omega}^{2}\) via paths of length \(<n\). More precisely, it asserts: for any \(c:[\kappa]^{2}\rightarrow\omega\), there exist \(A\in[\kappa]^{\kappa}\) and \(i\in\omega\) such that for any \(C\in[A]^{<\kappa}\) and \(\alpha,\beta\in A-C\), there exist \(l<n\) and a path \(\langle\gamma_{k}:k<l+1\rangle\subset A-C\) with \(\gamma_{0}=\alpha\) and \(\gamma_{l}=\beta\) such that for all \(j<l\), \(c(\gamma_{j},\gamma_{j+1})=i\). The organization of the paper is as follows: 1. In Section 2, we establish the consistency of \(\aleph_{2}\rightarrow_{hc}(\aleph_{1})_{\omega}^{2}\) and \(\neg\mathsf{CH}\), 2. in Section 3, we isolate and investigate 2-precipitous ideals (see Definition 3.1) whose existence implies \(\kappa\rightarrow_{hc}[\kappa]_{\omega,2}^{2}\), 3. in Section 4, we demonstrate two methods of constructing 2-precipitous ideals and show that \(\aleph_{2}\rightarrow_{hc}[\aleph_{2}]_{\omega,2}^{2}+2^{\omega}\geq\omega_{2}\) is consistent, 4. in Section 5, we prove the consistency of \(\aleph_{2}\rightarrow_{hc}(\aleph_{2})_{\omega}^{2}\), 5. in Section 6, we sketch how to use large cardinals to establish the consistency of the ideal hypothesis used in Section 5, 6. in Section 7, we show we cannot improve the result in Section 5 by making the lengths of the paths required to connect vertices shorter, 7. finally, in Section 8, we finish with some open questions.
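The partition relations above concern uncountable cardinals, so no computation can capture them; still, a finite caricature may help fix the definitions. The sketch below is our illustration (not from the paper): it brute-forces whether, after deleting every vertex set below a given size bound, any two surviving vertices remain joined through edges whose colors lie in a prescribed set \(K\), so \(|K|=1\) loosely mirrors the monochromatic relation of Definition 1.3 and \(|K|=k\) the square-bracket relation of Definition 1.2.

```python
# Finite caricature (ours, not the paper's) of Definitions 1.2/1.3: check
# that any two vertices surviving the deletion of fewer than `bound`
# vertices stay connected through edges whose color lies in K.
from itertools import combinations
from collections import deque

def highly_connected(vertices, coloring, K, bound):
    """coloring maps frozenset({u, v}) -> color for u != v in vertices."""
    edges = {e for e, col in coloring.items() if col in K}
    def joined(u, v, removed):
        seen, queue = {u}, deque([u])
        while queue:
            x = queue.popleft()
            if x == v:
                return True
            for y in vertices:
                if (y not in seen and y not in removed
                        and frozenset({x, y}) in edges):
                    seen.add(y)
                    queue.append(y)
        return False
    return all(joined(u, v, set(removed))
               for k in range(bound)
               for removed in combinations(vertices, k)
               for u, v in combinations([w for w in vertices
                                         if w not in removed], 2))

# Color pairs from {0,...,5} by the parity of their sum.
V = list(range(6))
c = {frozenset({u, v}): (u + v) % 2 for u, v in combinations(V, 2)}
print(highly_connected(V, c, K={0}, bound=2))     # False: two parity cliques
print(highly_connected(V, c, K={0, 1}, bound=2))  # True: all edges available
```

In the toy run, no single color class of the parity coloring survives deletions, but allowing two colors restores connectivity; this is the same shape of phenomenon that separates the round-bracket from the square-bracket relations above.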
## 2. The consistency of \(\aleph_{2}\rightarrow_{hc}(\aleph_{1})_{\omega}^{2}+\neg\mathsf{CH}\)
We say that an ideal \(I\) on \(\omega_{1}\) is _\(\aleph_{1}\)-proper with respect to \(S\subset P_{\aleph_{2}}(H(\theta))\)_, where \(\theta\) is a large enough regular cardinal, if for any \(M\in S\) and \(X\in M\cap I^{+}\), there exists an extension \(Y\subset_{I}X\) such that \(Y\) is \((M,I^{+})\)-generic. Namely, for any dense \(D\subset I^{+}\) with \(D\in M\) and any \(Y^{\prime}\subset_{I}Y\), there exists \(X\in D\cap M\) such that \(Y^{\prime}\cap X\in I^{+}\). **Lemma 2.1**.: _If \(I\) is \(\aleph_{1}\)-proper with respect to \(\{M\}\) with \(M\prec H(\theta)\) of size \(\aleph_{1}\) containing \(I\), then whenever \(Y\in I^{+}\) is an \((M,I^{+})\)-generic condition, the following holds: for any \(E\subset I^{+}\) in \(M\), if there is some \(Y^{\prime}\in E\) such that \(Y\subset_{I}Y^{\prime}\), then there exists \(X\in E\cap M\) such that \(Y\cap X\in I^{+}\)._ Proof.: Define a dense subset \(D_{E}\subset I^{+}\) in \(M\) as follows: \(A\in D_{E}\) iff either there exists \(B\in E\) with \(A\subset_{I}B\), or for all \(B\in E\), \(A\cap B=_{I}\emptyset\). By the hypothesis, there exists \(X^{\prime}\in D_{E}\cap M\) such that \(Y\cap X^{\prime}\in I^{+}\). Note that there exists \(X\in E\) such that \(X^{\prime}\subset_{I}X\): since \(Y\cap X^{\prime}\in I^{+}\) and \(Y\subset_{I}Y^{\prime}\in E\), we have \(X^{\prime}\cap Y^{\prime}\neq_{I}\emptyset\), so \(X^{\prime}\) cannot be \(I\)-disjoint from every member of \(E\). The elementarity of \(M\) guarantees the existence of such \(X\in M\). Finally, \(Y\cap X\supset_{I}Y\cap X^{\prime}\in I^{+}\), as desired. If \(I\) is \(\aleph_{2}\)-saturated then \(I\) is \(\aleph_{1}\)-proper with respect to a closed unbounded subset of \(P_{\aleph_{2}}(H(\theta))\) for sufficiently large \(\theta\). There are many models where \(\omega_{1}\) carries a \(\sigma\)-complete \(\aleph_{2}\)-saturated ideal and CH fails. For example, they are both consequences of Martin's Maximum [9]. **Proposition 2.2**.: _If there exists a \(\sigma\)-complete \(\aleph_{1}\)-proper ideal on \(\omega_{1}\) with respect to a stationary subset of \(\{X\in P_{\aleph_{2}}(H(\theta)):\sup X\cap\omega_{2}\in\operatorname{cof}(\omega_{1})\}\) for some large enough \(\theta\), then \(\aleph_{2}\to_{hc}(\aleph_{1})_{\omega}^{2}\). 1_ Footnote 1: Originally we used the hypothesis that \(\omega_{1}\) carries a countably complete \(\aleph_{2}\)-Knaster ideal. Stevo Todorcevic pointed out that our proof should work from a weaker saturation hypothesis, such as \(\aleph_{2}\)-saturation. Proof.: Given \(c:[\omega_{2}]^{2}\to\omega\), we will find \(A\in[\omega_{1}]^{\aleph_{1}}\) and \(B\in[\omega_{2}-\omega_{1}]^{\aleph_{1}}\) satisfying the following properties: there exists \(k\in\omega\) such that 1. for any \(\alpha_{0},\alpha_{1}\in A\), there are uncountably many \(\beta\in B\) such that \(c(\alpha_{0},\beta)=k=c(\alpha_{1},\beta)\), and 2. for any \(\beta_{0}\in B\), there are uncountably many \(\alpha\in A\) such that \(c(\alpha,\beta_{0})=k\). **Claim 2.3**.: \(A\cup B\) is highly connected, witnessed by \(k\). Proof of the claim.: Let \(C\) be the countable set of vertices being removed. If \(\alpha_{0},\alpha_{1}\in A\), then it follows immediately from the first requirement that there is some \(\beta\in B-C\) such that \(c(\alpha_{0},\beta)=c(\alpha_{1},\beta)=k\). Let us check the case when \(\alpha\in A\) and \(\beta\in B\).
We can find some large enough \(\alpha^{\prime}\in A-C\) such that \(c(\alpha^{\prime},\beta)=k\). After that we find some \(\beta^{\prime}\in B-C\) such that \(c(\alpha^{\prime},\beta^{\prime})=k=c(\alpha,\beta^{\prime})\). Then \(\alpha\) is connected to \(\beta\) via the \(k\)-colored path \(\alpha,\beta^{\prime},\alpha^{\prime},\beta\). If \(\beta_{0},\beta_{1}\in B\), then we can easily reduce to the previous case by finding some large enough \(\alpha_{0}\in A\) such that \(c(\alpha_{0},\beta_{0})=k\). Apply the previous analysis to \(\alpha_{0}\in A,\beta_{1}\in B\). We proceed to find \(A,B,k\) as above. For each \(\alpha\in\omega_{2}-\omega_{1}\) and \(i\in\omega\), let \[X_{\alpha,i}=\{\gamma\in\omega_{1}:c(\gamma,\alpha)=i\}.\] Let \(M\prec H(\theta)\) of size \(\aleph_{1}\) contain \(I,c\) with \(\sup M\cap\omega_{2}\in\operatorname{cof}(\omega_{1})\) and let \(Y\in I^{+}\) be \((M,I^{+})\)-generic. Let \(\rho\in\omega_{2}-\sup M\cap\omega_{2}\). By the \(\sigma\)-completeness of \(I\), find some \(k\in\omega\) such that \(A=Y\cap X_{\rho,k}\in I^{+}\), which is still \((M,I^{+})\)-generic. Finally, let us define \(B\) recursively. Let \(\langle a^{i}=(a^{i}_{0},a^{i}_{1}):i<\omega_{1}\rangle\) enumerate \([A]^{2}\) with unbounded repetitions. Suppose we have defined \(\langle\beta_{j}:j<\alpha\rangle\) for some \(\alpha<\omega_{1}\) satisfying that for all \(j<\alpha\), 1. \(a^{j}\subset X_{\beta_{j},k}\) and 2. \(X_{\beta_{j},k}\cap A\in I^{+}\). It is clear that if the construction succeeds for all \(\alpha<\omega_{1}\), then \(A\), \(B=\{\beta_{j}:j<\omega_{1}\}\) and \(k\) are as desired. Suppose we are at the \(\alpha\)-th step of the construction and let us find \(\beta_{\alpha}\) maintaining the same requirements. Let \(\bar{\beta}=\min(M\cap\omega_{2}-\sup_{j<\alpha}\beta_{j})<\sup M\cap\omega_{2}\). Consider \(E=\{X_{\beta,k}\in I^{+}:a^{\alpha}_{0},a^{\alpha}_{1}\in X_{\beta,k},\beta>\bar{\beta}\}\). In particular, \(E\in M\) and \(A\subset_{I}X_{\rho,k}\in E\). By Lemma 2.1, there is \(X_{\beta_{\alpha},k}\in M\cap E\) such that \(A\cap X_{\beta_{\alpha},k}\in I^{+}\).
## 3. 2-precipitous ideals on \(\kappa\) and \(\kappa\rightarrow_{hc}[\kappa]^{2}_{\omega,2}\)
Chris Lambie-Hanson [14] showed that adding weakly compact many Cohen reals forces that \(2^{\omega}\rightarrow_{hc}[2^{\omega}]^{2}_{\omega,2}\), in contrast with the ZFC fact that \(2^{\omega}\not\rightarrow_{hc}(2^{\omega})^{2}_{\omega}\). He also demonstrated that such partition relations already have non-trivial consistency strength, by showing that \(\square(\kappa)\) implies \(\kappa\not\rightarrow_{hc}[\kappa]^{2}_{\omega,<\omega}\). In this section, we investigate the ideal hypothesis on \(\kappa\) that implies \(\kappa\rightarrow_{hc}[\kappa]^{2}_{\omega,2}\). In particular, such analysis enables us to have more consistent scenarios, such as a model where \(2^{\omega}\geq\omega_{2}\) and \(\aleph_{2}\rightarrow_{hc}[\aleph_{2}]^{2}_{\omega,2}\) both hold. **Definition 3.1**.: We say an ideal \(I\) on \(\kappa\) is _2-precipitous_ if Player Empty does not have a winning strategy in the following game \(G_{I}\) with perfect information: Player Empty and Nonempty take turns playing a \(\subset_{I}\)-decreasing sequence of pairs of \(I\)-positive sets \(\langle(A_{n},B_{n}):n\in\omega\rangle\) with Player Empty starting the game. Player Nonempty wins iff there exist \(\alpha<\beta\) with \(\alpha\in\bigcap_{n\in\omega}A_{n}\) and \(\beta\in\bigcap_{n\in\omega}B_{n}\).
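To make the rules of \(G_{I}\) concrete, here is a toy transcript, our illustration rather than anything from the paper, over the ideal \(I\) of finite subsets of \(\mathbb{N}\). Positive (i.e., infinite) sets are coded lazily as arithmetic progressions, and only finitely many rounds of one particular run are played, so this is a finite caricature of a length-\(\omega\) game.

```python
# Toy transcript of G_I (illustration only): I = the ideal of finite
# subsets of N; a positive set is coded as an arithmetic progression
# (m, r) = {x in N : x % m == r}, which is infinite, hence I-positive.

def refine(pair):
    """One move: halve both progressions, a genuine positive subset."""
    (ma, ra), (mb, rb) = pair
    return ((2 * ma, ra), (2 * mb, rb))

# Empty opens with (evens, odds); the players then alternate, here both
# shrinking by the same rule, producing a subset-decreasing sequence.
play = [((2, 0), (2, 1))]
for _ in range(9):
    play.append(refine(play[-1]))

# Points below 10**4 surviving every A_n (resp. every B_n) played so far:
surv_A = [x for x in range(10**4) if all(x % m == r for (m, r), _ in play)]
surv_B = [x for x in range(10**4) if all(x % m == r for _, (m, r) in play)]
print(surv_A[:3], surv_B[:3])   # [0, 1024, 2048] [1, 1025, 2049]
# In the full length-omega run only 0 survives every A_n and only 1
# survives every B_n, so Nonempty wins this play with alpha = 0 < beta = 1.
```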
**Lemma 3.2**.: _Fix a dense subset \(D\subset P(\kappa)/I\). Player Empty has a winning strategy in \(G_{I}\) iff Player Empty has a winning strategy \(\sigma\) in \(G_{I}\) such that \(range(\sigma)\subset\{A-M:A\in D,M\in I\}\times\{A-M:A\in D,M\in I\}\)._ Proof.: Let us show the nontrivial direction \((\rightarrow)\). Fix a winning strategy \(\sigma\) of Player Empty. The inputs of \(\sigma\) are elements of \((I^{+}\times I^{+})^{<\omega}\), corresponding to the sequence of pairs Player Nonempty has played so far. Let \(\pi:I^{+}\to I^{+}\) be a map such that \(\pi(B)=A-M\) where \((A,M)\in D\times I\) is least (with respect to some fixed well ordering) such that \(A-M\subset B\). Such \(\pi\) exists since \(D\) is dense in \(P(\kappa)/I\). Consider \(\sigma^{\prime}=\pi\circ\sigma\), applying \(\pi\) coordinatewise. Clearly, the range of \(\sigma^{\prime}\) is contained in \(\{A-M:A\in D,M\in I\}\times\{A-M:A\in D,M\in I\}\). To see that it is a winning strategy for Player Empty, suppose \(\langle(A_{n},B_{n}):n\in\omega\rangle\) is a play such that Player Empty plays according to \(\sigma^{\prime}\). Notice that \(\langle(A^{\prime}_{n},B^{\prime}_{n}):n\in\omega\rangle\), where \((A^{\prime}_{n},B^{\prime}_{n})=(A_{n},B_{n})\) when \(n\) is odd and \((A^{\prime}_{n},B^{\prime}_{n})=\sigma(\langle(A_{2k-1},B_{2k-1}):2k-1<n\rangle)\) when \(n\) is even, is a legal play where Player Empty is playing according to \(\sigma\). As a result, there do not exist \(\alpha<\beta\) such that \(\alpha\in\bigcap_{n\in\omega}A^{\prime}_{n}=\bigcap_{n\in\omega}A_{n}\) and \(\beta\in\bigcap_{n\in\omega}B^{\prime}_{n}=\bigcap_{n\in\omega}B_{n}\). Therefore, \(\sigma^{\prime}\) is a winning strategy for Player Empty. **Theorem 3.3**.: _If \(\kappa\) carries a uniform normal \(2\)-precipitous ideal, then \(\kappa\to_{hc}[\kappa]_{\omega,2}^{2}\)._ Proof.: Fix a normal uniform \(2\)-precipitous ideal \(I\) on \(\kappa\) and a coloring \(c:[\kappa]^{2}\to\omega\). We say a pair of \(I\)-positive sets \((B_{0},B_{1})\) is _\((i,j)\)-frequent_ if for any \(I\)-positive sets \(B^{\prime}_{0}\subset B_{0}\), \(B^{\prime}_{1}\subset B_{1}\), there are * \(\alpha<\beta\) with \(\alpha\in B^{\prime}_{0},\beta\in B^{\prime}_{1}\) such that \(c(\alpha,\beta)=i\) and * \(\beta^{\prime}<\alpha^{\prime}\) with \(\beta^{\prime}\in B^{\prime}_{1}\), \(\alpha^{\prime}\in B^{\prime}_{0}\) such that \(c(\beta^{\prime},\alpha^{\prime})=j\). **Claim 3.4**.: There exists a pair of \(I\)-positive sets \((B_{0},B_{1})\) and \(i,j\in\omega\) such that \((B_{0},B_{1})\) is \((i,j)\)-frequent. Proof of the Claim.: Starting with a positive pair \((A_{0},A_{1})\), we first find some \(i\in\omega\) and positive \((C_{0},C_{1})\subset(A_{0},A_{1})\) such that \((C_{0},C_{1})\) satisfies the first requirement of \((i,j)\)-frequentness, namely, for all positive sets \(C^{\prime}_{0}\subset C_{0},C^{\prime}_{1}\subset C_{1}\) there is \((\alpha,\beta)\in C^{\prime}_{0}\otimes C^{\prime}_{1}\) (where \(A\otimes B=\{(\alpha,\beta)\in A\times B:\alpha<\beta\}\)) such that \(c(\alpha,\beta)=i\). Suppose for the sake of contradiction that such \((C_{0},C_{1})\) and \(i\) do not exist. We define a strategy \(\sigma\) for Player Empty: they start by playing \((A^{0},B^{0})=_{def}(A_{0},A_{1})\). At stage \(2i\), with the game played so far being \(\langle(A^{k},B^{k}):k<2i\rangle\), by the hypothesis, there are positive \((A^{\prime},B^{\prime})\subset(A^{2i-1},B^{2i-1})\) such that no \((\alpha,\beta)\in A^{\prime}\otimes B^{\prime}\) satisfies \(c(\alpha,\beta)=i\). Player Empty then plays \((A^{2i},B^{2i})=(A^{\prime},B^{\prime})\). Since \(I\) is \(2\)-precipitous, Player Empty does not have a winning strategy.
Therefore, there is a play \(\langle(A^{n},B^{n}):n\in\omega\rangle\) where Player Empty plays according to the strategy \(\sigma\) but, in the end, there is \((\alpha,\beta)\in\bigcap_{n\in\omega}A^{n}\otimes\bigcap_{n\in\omega}B^{n}\). However, if \(c(\alpha,\beta)=k\), then at stage \(2k\), the strategy of Empty makes sure that \(k\not\in c^{\prime\prime}(A^{2k}\otimes B^{2k})\), which is a contradiction. Finally, we repeat the previous argument with input \((C_{1},C_{0})\) in place of \((A_{0},A_{1})\) to find positive \((B_{1},B_{0})\subset(C_{1},C_{0})\) and \(j\in\omega\) satisfying the second part of \((i,j)\)-frequentness, as desired. Fix \((B_{0},B_{1})\) that is \((i,j)\)-frequent. We strengthen the conclusion using the normality of \(I\). Recall that for any positive \(S\in I^{+}\), \(I^{*}\upharpoonright S\) denotes the dual filter of \(I\) restricted to \(S\). **Claim 3.5**.: For any \(I\)-positive \(B_{0}^{\prime}\subset B_{0},B_{1}^{\prime}\subset B_{1}\), * \(\{\alpha\in B_{0}:\{\beta\in B_{1}^{\prime}:c(\alpha,\beta)=i\}\in I^{+}\}\in I^{*}\upharpoonright B_{0}\), * \(\{\beta^{\prime}\in B_{1}:\{\alpha^{\prime}\in B_{0}^{\prime}:c(\beta^{\prime},\alpha^{\prime})=j\}\in I^{+}\}\in I^{*}\upharpoonright B_{1}\). Proof of the claim.: Let us just show the first part. The proof of the second part is identical. Suppose for the sake of contradiction that \(B^{0}=_{def}\{\alpha\in B_{0}:B_{\alpha}^{1}=_{def}\{\beta\in B_{1}^{\prime}:c(\alpha,\beta)=i\}\in I\}\in I^{+}\). Since \(I\) is normal, \(B^{1}=\bigtriangledown_{\alpha\in B^{0}}B_{\alpha}^{1}\in I\) (here \(\bigtriangledown_{\alpha\in B^{0}}B_{\alpha}^{1}=\{\beta<\kappa:\exists\alpha\in B^{0}\cap\beta\;(\beta\in B_{\alpha}^{1})\}\) denotes the diagonal union). Applying the assumption that \((B_{0},B_{1})\) is \((i,j)\)-frequent to \(B^{0}\) and \(B_{1}^{\prime}-B^{1}\), we get \((\alpha,\beta)\in B^{0}\otimes(B_{1}^{\prime}-B^{1})\) such that \(c(\alpha,\beta)=i\). However, by the definition of \(B^{1}\), \(\beta\in B^{1}\), which is a contradiction. Applying Claim 3.5, we find \(B_{0}^{*}\in I^{*}\upharpoonright B_{0},B_{1}^{*}\in I^{*}\upharpoonright B_{1}\) such that 1. for any \(\alpha\in B_{0}^{*}\), \(\{\beta\in B_{1}^{*}:c(\alpha,\beta)=i\}\in I^{+}\) and 2. for any \(\beta^{\prime}\in B_{1}^{*}\), \(\{\alpha^{\prime}\in B_{0}^{*}:c(\beta^{\prime},\alpha^{\prime})=j\}\in I^{+}\). Let us check that \((B_{0}^{*}\cup B_{1}^{*},c^{-1}(\{i,j\}))\) is a highly connected subgraph of size \(\kappa\). Given \(C\in[B_{0}^{*}\cup B_{1}^{*}]^{<\kappa}\), \(\alpha,\beta\in B_{0}^{*}\cup B_{1}^{*}-C\), we need to find an \((i,j)\)-valued path connecting them in \(B_{0}^{*}\cup B_{1}^{*}-C\). Consider the following cases. * \(\alpha\in B_{0}^{*},\beta\in B_{1}^{*}\): let \(A_{\alpha}=\{\gamma\in B_{1}^{*}:c(\alpha,\gamma)=i\}\in I^{+}\) and let \(B_{\beta}=\{\eta\in B_{0}^{*}:c(\beta,\eta)=j\}\in I^{+}\). Since \((B_{0}^{*},B_{1}^{*})\) is \((i,j)\)-frequent, we can find \((\gamma,\eta)\in(A_{\alpha}-C)\otimes(B_{\beta}-C)\) such that \(c(\gamma,\eta)=j\). Then the path \(\alpha,\gamma,\eta,\beta\) is as desired. * \(\alpha,\beta\in B_{0}^{*}\) or \(\alpha,\beta\in B_{1}^{*}\): we can reduce to the previous case by moving either \(\alpha\) or \(\beta\) to the other side using an edge of \(c\)-color either \(i\) or \(j\).
## 4. The consistency of the existence of a 2-precipitous ideal
In this section we discuss two forcing constructions for a 2-precipitous ideal on \(\kappa\). The first is cardinal preserving and the second involves collapsing cardinals. First let us record some characterizations of 2-precipitous ideals analogous to those of precipitous ideals in [12].
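Before doing so, here is a quick finite sanity check of the two-color path pattern at the end of the proof of Theorem 3.3 above. It is our illustration, not the paper's construction: a seeded random dense coloring stands in for the \(I\)-positivity assumptions, and for \(\alpha\in B_{0}^{*}\) and \(\beta\in B_{1}^{*}\) off a small deleted set we search for the path \(\alpha,\gamma,\eta,\beta\) with colors \(i,j,j\).

```python
# Finite caricature (ours, not the paper's) of the path search in the
# proof of Theorem 3.3.  With a dense random coloring at these sizes,
# a failure of the search is vanishingly unlikely.
import random
from itertools import product

rng = random.Random(0)
n = 40
B0, B1 = list(range(n)), list(range(n, 2 * n))
i, j = 0, 1
color = {frozenset({x, y}): rng.randrange(3)   # colors 0, 1, 2 at random
         for x in range(2 * n) for y in range(2 * n) if x < y}
c = lambda x, y: color[frozenset({x, y})]

def find_path(alpha, beta, C):
    # A_alpha and B_beta play the roles of the sets A_alpha, B_beta from
    # the proof; (i, j)-frequentness supplies the middle j-colored edge.
    A_alpha = [g for g in B1 if g not in C and g != beta and c(alpha, g) == i]
    B_beta = [e for e in B0 if e not in C and e != alpha and c(beta, e) == j]
    for gamma, eta in product(A_alpha, B_beta):
        if c(gamma, eta) == j:
            return [alpha, gamma, eta, beta]
    return None

C = {0, 7, n + 3}   # a "small" set of deleted vertices
print(all(find_path(a, b, C) for a in B0 for b in B1
          if a not in C and b not in C))   # True (with overwhelming probability)
```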
**Definition 4.1**.: _A tree \(T\) of maximal antichains of \(P(\kappa)/I\times P(\kappa)/I\) is a sequence of maximal antichains \(\langle\mathcal{A}_{n}:n\in\omega\rangle\) of \(P(\kappa)/I\times P(\kappa)/I\) such that \(\mathcal{A}_{n+1}\) refines \(\mathcal{A}_{n}\) for each \(n\in\omega\). A branch through \(T\) is a decreasing sequence of conditions \(\langle b_{n}:n\in\omega\rangle\) such that \(b_{n}\in\mathcal{A}_{n}\)._ The proof by Jech and Prikry [12] (see also [10, Proposition 2.7]) essentially gives the following. **Theorem 4.2** ([12]).: \(I\) _is 2-precipitous if for any pair of positive sets \((C_{0},C_{1})\) and a tree \(T\) of maximal antichains \(\langle\mathcal{A}_{n}:n\in\omega\rangle\) below \((C_{0},C_{1})\), there exists a sequence \(\langle(A_{n},B_{n}):n\in\omega\rangle\) such that_ 1. \(\langle(A_{n},B_{n}):n\in\omega\rangle\) _is a branch through the tree_ \(T\)_, and_ 2. _there exist_ \(\alpha<\beta\) _such that_ \(\alpha\in\bigcap_{n\in\omega}A_{n}\) _and_ \(\beta\in\bigcap_{n\in\omega}B_{n}\)_._ **Remark 4.3**.: _Suppose we are given a dense subset \(D\subset P(\kappa)/I\). By Lemma 3.2, it is without loss of generality to assume \(\{(C_{0},C_{1})\}\cup\bigcup_{n\in\omega}\mathcal{A}_{n}\subset\{A-M:A\in D,M\in I\}\times\{A-M:A\in D,M\in I\}\)._ For the rest of this section, we will apply Remark 4.3 liberally. Also, it turns out that suppressing the \(-M\) part (the sets in \(I\)) does not affect the reasoning. Therefore, to avoid cumbersome notations, we will further assume that \(\mathcal{A}_{n}\subset D\times D\) for all \(n\in\omega\). Given a partial order \(\mathbb{P}\), we denote the complete Boolean algebra generated by \(\mathbb{P}\) as \(\mathbb{B}(\mathbb{P})\). **Proposition 4.4**.: _If \(I\) is a \(\kappa\)-complete normal ideal on \(\kappa\) and \(P(\kappa)/I\simeq\mathbb{B}(Add(\omega,\lambda))\) for some \(\lambda\), then \(I\) is 2-precipitous._ Proof.: Let \(\pi:\mathbb{B}(Add(\omega,\lambda))\to P(\kappa)/I\) be an isomorphism. For each \(r\in Add(\omega,\lambda)\), let \(X_{r}=\pi(r)\). Here we identify \(Add(\omega,\lambda)\) with a dense subset of \(\mathbb{B}(Add(\omega,\lambda))\), and let \(D=\{X_{r}:r\in Add(\omega,\lambda)\}\). Suppose, for the sake of contradiction, that \(I\) is not 2-precipitous. By Theorem 4.2, we can find a pair of positive sets \((C_{0},C_{1})\) and a tree \(T\) of maximal antichains below \((C_{0},C_{1})\) through which no branch satisfies conditions (1) and (2). Note that since \(P(\kappa)/I\times P(\kappa)/I\) satisfies c.c.c., each \(\mathcal{A}_{n}\subset D\times D\) is countable. Find \(r_{0},r_{1}\in Add(\omega,\lambda)\) such that \(C_{i}=X_{r_{i}}\) for \(i<2\). Let \(G\subset P(\kappa)/I\) be generic containing \(C_{0}\). Then in \(V[G]\), there is a generic elementary embedding \(j:V\to M\) (with \(M\) well-founded, as \(I\) is \(\aleph_{1}\)-saturated) which can be taken to be the ultrapower embedding with respect to the added generic \(V\)-ultrafilter extending the dual filter of \(I\). Consider \(T^{\prime}_{n}=\{B^{*}:\exists(A^{*},B^{*})\in j(\mathcal{A}_{n}),\kappa\in A^{*}\}\). Note that \(\langle T^{\prime}_{n}:n\in\omega\rangle\in M\) since \(V[G]\models{}^{\omega}M\subset M\). Note that \(T^{\prime}_{n}\subset j^{\prime\prime}V\). This follows from the fact that each \(\mathcal{A}_{n}\) is countable, hence \(j(\mathcal{A}_{n})=j^{\prime\prime}\mathcal{A}_{n}\). **Claim 4.5**.: In \(M\), \(T^{\prime}_{n}\) is a maximal antichain below \(j(C_{1})\) for the poset \(j(P(\kappa)/I)\). Proof of the claim.: Suppose not.
By the product lemma, \(\mathcal{B}=\{B:\exists(A,B)\in\mathcal{A}_{n},A\in G\}\) is a maximal antichain for \((P(\kappa)/I)^{V}\simeq\mathbb{B}(Add(\omega,\lambda))\) below \(X_{r_{1}}\) in \(V[G]\). We can enumerate \[\mathcal{B}=\langle X_{p_{n}}:n\in\omega\rangle\quad\text{with each }p_{n}\in Add(\omega,\lambda).\] In particular, \(\langle p_{n}:n\in\omega\rangle\) is a maximal antichain for \(Add(\omega,\lambda)\) below \(r_{1}\) in \(V[G]\). If \(\langle j(X_{p_{n}}):n\in\omega\rangle\) is not a maximal antichain in \(j(P(\kappa)/I)\simeq j(\mathbb{B}(Add(\omega,\lambda)))\), then there exists a condition \(r\in Add(\omega,j(\lambda))\) such that \(X_{r}^{*}=_{\text{def}}j(\pi)(r)\) is incompatible with every condition in the set \(\{j(X_{p_{n}}):n\in\omega\}\). This means \(r\) is incompatible with every condition in \(\{j(p_{n}):n\in\omega\}\). Since \(j(p_{n})=j^{\prime\prime}p_{n}\), we may assume \(r\in Add(\omega,j^{\prime\prime}\lambda)\). Let \(r^{*}=j^{-1}(r)\). Then \(r^{*}\in Add(\omega,\lambda)\) extends \(r_{1}\) and is incompatible with every condition in \(\{p_{n}:n\in\omega\}\). This contradicts the fact that \(\langle p_{n}:n\in\omega\rangle\) is a maximal antichain of \(Add(\omega,\lambda)\) below \(r_{1}\) in \(V[G]\). Let \(H\subset j(P(\kappa)/I)\) be generic over \(V[G]\) containing \(j(C_{1})\). Since \(M\models\) "\(j(I)\) is a \(j(\kappa)\)-complete, \(\aleph_{1}\)-saturated ideal", \(H\) gives rise to a (well-founded) ultrapower embedding \(k:M\to N\) with critical point \(j(\kappa)\). Consider \(b=\{(A_{n},B_{n})\in\mathcal{A}_{n}:n\in\omega,(\kappa,j(\kappa))\in k(j(A_{n}))\times k(j(B_{n}))\}\). By Claim 4.5, \(H\) meets \(T_{n}^{\prime}\) for all \(n\in\omega\). As a result, \(k\circ j^{\prime\prime}b\) enumerates a branch \(\langle(A_{n}^{*},B_{n}^{*}):n\in\omega\rangle\) through \(k(j(T))\) in \(V[G*H]\) with \((\kappa,j(\kappa))\in\bigcap_{n\in\omega}A_{n}^{*}\otimes\bigcap_{n\in\omega}B_{n}^{*}\). Since \(N\) is well-founded, there is such a branch in \(N\). By the elementarity of \(k\circ j\), \(T\) has a branch \(\langle(A_{n},B_{n}):n\in\omega\rangle\) in \(V\) satisfying that there are \(\alpha<\beta\) such that \(\alpha\in\bigcap_{n\in\omega}A_{n}\) and \(\beta\in\bigcap_{n\in\omega}B_{n}\), contradicting the choice of \(T\). It is easy to see that if \(P(\kappa)/I\) has a \(\sigma\)-closed dense subset, then \(I\) is 2-precipitous. However, in this case, it is necessary that \(2^{\omega}<\kappa\). The second construction gives a scenario where \(\kappa\) is a small uncountable cardinal (like \(\aleph_{2}\)) while CH fails. In particular, such an ideal can be constructed using the Mitchell collapse [16]. 3 Footnote 3: We thank Spencer Unger for his suggestion on the relevance of the Mitchell collapse. Recall the Mitchell forcing from [16] (the representation of the forcing here is due to Abraham, see [1] and [7, Section 23]): \(\mathbb{M}(\omega,\lambda)\) consists of conditions of the form \((p,r)\) where \(p\in Add(\omega,\lambda)\) and \(r\) is a function on \(\lambda\) of countable support such that for any \(\alpha<\lambda\), \(\Vdash_{Add(\omega,\alpha)}r(\alpha)\) is a condition in \(Add(\omega_{1},1)\). The order is that \((p_{2},r_{2})\leq(p_{1},r_{1})\) iff \(p_{2}\supset p_{1}\), \(supp(r_{2})\supset supp(r_{1})\), and for any \(\alpha\in supp(r_{1})\), \(p_{2}\upharpoonright\alpha\Vdash_{Add(\omega,\alpha)}r_{2}(\alpha)\leq_{Add(\omega_{1},1)}r_{1}(\alpha)\).
Define \(R\) to be the poset consisting of countably supported functions \(r\) with domain \(\lambda\) such that for each \(\alpha\in supp(r)\), \(r(\alpha)\) is an \(Add(\omega,\alpha)\)-name for a condition in \(Add(\omega_{1},1)\). The order of \(R\) is the following: \(r_{1}\leq_{R}r_{2}\) iff \(supp(r_{1})\supset supp(r_{2})\) and for any \(\alpha\in supp(r_{2})\), \(\Vdash_{Add(\omega,\alpha)}r_{1}(\alpha)\leq_{Add(\omega_{1},1)}r_{2}(\alpha)\). The following are standard facts about this forcing (see [7]): 1. \(\mathbb{M}(\omega,\lambda)\) projects onto \(Add(\omega,\lambda)\) by projecting onto the first coordinate, 2. \(Add(\omega,\lambda)\times R\) projects onto \(\mathbb{M}(\omega,\lambda)\) by the identity map. **Remark 4.6**.: _Whenever \((p_{2},r_{2})\leq_{\mathbb{M}(\omega,\lambda)}(p_{1},r_{1})\), there exists \(r_{2}^{\prime}\in R\) with \(dom(r_{2}^{\prime})=dom(r_{2})\) such that \(r_{2}^{\prime}\leq_{R}r_{1}\) and \(p_{2}\upharpoonright\alpha\Vdash_{Add(\omega,\alpha)}r_{2}(\alpha)=r_{2}^{\prime}(\alpha)\) for any \(\alpha\in dom(r_{2})\). In other words, \((p_{2},r_{2})\) and \((p_{2},r_{2}^{\prime})\) are equivalent conditions. We will use this fact freely in the following proofs._ Note that the poset \(R\) has the property that any countable decreasing sequence has a greatest lower bound. **Proposition 4.7**.: _If \(P(\kappa)/I\simeq\mathbb{B}(\mathbb{M}(\omega,\lambda))\) for some \(\lambda\), then \(I\) is 2-precipitous._ Proof.: Let \(\pi:\mathbb{M}(\omega,\lambda)\to D\) be an isomorphism where \(D\subset P(\kappa)/I\) is a dense subset. For any \((p,r)\in\mathbb{M}(\omega,\lambda)\), let \(X_{p,r}\) denote \(\pi(p,r)\). Assume for the sake of contradiction that \(I\) is not 2-precipitous. Fix a winning strategy \(\sigma\) for Player Empty in the game \(G_{I}\). We may assume \(\sigma\) satisfies the conclusion of Lemma 3.2 applied to \(D\). To avoid cumbersome notations, we will assume for simplicity that \(\sigma\) outputs elements from \(D\times D\). See the paragraph after Remark 4.3. We will use \(\sigma\) to construct a tree of antichains \(T=\langle\mathcal{A}_{n}:n\in\omega\rangle\) below \(\sigma(\emptyset)=(A,B)=(X_{p_{a}^{-1},r_{a}^{-1}},X_{p_{b}^{-1},r_{b}^{-1}})=\mathcal{A}_{-1}\) satisfying the following additional properties: 1. \(\mathcal{A}_{n+1}\) refines \(\mathcal{A}_{n}\), 2. \(\mathcal{A}_{n}=\langle(a_{i}^{n},b_{i}^{n}):i<\gamma_{n}\rangle\subset D\times D\) is countable (note that it is _not maximal_ anymore), 3. for each \(n\in\omega,i\in\gamma_{n}\), there are unique \((p_{i,a}^{n},r_{i,a}^{n}),(p_{i,b}^{n},r_{i,b}^{n})\in\mathbb{M}(\omega,\lambda)\) such that \(X_{p_{i,a}^{n},r_{i,a}^{n}}=a_{i}^{n}\) and \(X_{p_{i,b}^{n},r_{i,b}^{n}}=b_{i}^{n}\), 4. for any \(n\) and \(i<j\in\gamma_{n}\), \((r_{j,a}^{n},r_{j,b}^{n})\leq_{R\times R}(r_{i,a}^{n},r_{i,b}^{n})\), 5. for any \((r_{0},r_{1})\) lower bound for \(\langle(r_{i,a}^{n},r_{i,b}^{n}):i\in\gamma_{n}\rangle\), the set \(\mathcal{A}_{n}\downarrow(r_{0},r_{1})=_{def}\{(X_{p_{i,a}^{n},r_{0}},X_{p_{i,b}^{n},r_{1}}):i\in\gamma_{n}\}\) is a maximal antichain below \((A\cap X_{\emptyset,r_{0}},B\cap X_{\emptyset,r_{1}})\), 6. for any \(n\) and \(i,j\), \((r_{j,a}^{n+1},r_{j,b}^{n+1})\leq_{R\times R}(r_{i,a}^{n},r_{i,b}^{n})\), 7. for any branch \(\langle(A_{n},B_{n}):n\in\omega\rangle\) through \(T\), there do not exist \(\alpha<\beta\) such that \(\alpha\in\bigcap_{n\in\omega}A_{n}\) and \(\beta\in\bigcap_{n\in\omega}B_{n}\).
Assuming that the construction of such \(T\) is possible, let us derive a contradiction. Let \((r_{a},r_{b})\) be the greatest lower bound in \(R\times R\) for \(\langle\langle(r_{i,a}^{n},r_{i,b}^{n}):i\in\gamma_{n}\rangle:n\in\omega\rangle\). By property (5), we know that for each \(n\), \(\mathcal{A}_{n}\downarrow(r_{a},r_{b})\) is a maximal antichain below \((X_{\emptyset,r_{a}}\cap A,X_{\emptyset,r_{b}}\cap B)\) in \(P(\kappa)/I\times P(\kappa)/I\). Force over \(V\) to get a generic \(G\subset P(\kappa)/I\) over \(V\) containing \(X_{\emptyset,r_{a}}\cap A\). Using \(G\), we find an elementary embedding \(j:V\to M\) with critical point \(\kappa\). In \(V[G]\), consider \(T^{\prime}\) consisting of \(\mathcal{A}^{\prime}_{n}=\{j(C):\kappa\in j(D),(D,C)\in\mathcal{A}_{n}\downarrow(r_{a},r_{b})\}\). Notice that by property (5) and the product lemma, \(\mathcal{A}^{*}_{n}=\{C:j(C)\in\mathcal{A}^{\prime}_{n}\}\) is a maximal antichain below \(X_{\emptyset,r_{b}}\cap B\). **Claim 4.8**.: For each \(n\in\omega\), \(\mathcal{A}^{\prime}_{n}\subset j(P(\kappa)/I)\) is a maximal antichain below \(j(B)\cap X_{\emptyset,j(r_{b})}\) in \(V[G]\) (or in \(M\), since \(V[G]\models{}^{\omega}M\subset M\)). Proof of the claim.: Otherwise, we can find \((p,r^{*})\in j(\mathbb{M}(\omega,\lambda))\) below \((\emptyset,j(r_{b}))\) such that \(X^{*}_{p,r^{*}}=_{\text{def}}j(\pi)((p,r^{*}))\subset j(B)\cap X_{\emptyset,j(r_{b})}\) and \(X^{*}_{p,r^{*}}\) is incompatible with every element in \(\mathcal{A}^{\prime}_{n}\). By changing to an equivalent condition if necessary, we may assume that \(r^{*}\leq_{j(R)}j(r_{b})\). As a result, \(p\perp_{j(Add(\omega,\lambda))}j(p^{n}_{k,b})\) for every \(k\) with \(j(X_{p^{n}_{k,b},r_{b}})\in\mathcal{A}^{\prime}_{n}\). Restricting \(p\) to its coordinates in \(j^{\prime\prime}\lambda\) if necessary (this preserves the incompatibilities, as each \(j(p^{n}_{k,b})\) is supported on \(j^{\prime\prime}\lambda\)), we may assume \(p\in Add(\omega,j^{\prime\prime}\lambda)\). Consider \(p^{\prime}=j^{-1}(p)\in Add(\omega,\lambda)\). Then \(p^{\prime}\perp_{Add(\omega,\lambda)}p^{n}_{k,b}\) for all such \(k\). As a result, \(X_{p^{\prime},r_{b}}\) is incompatible with each element in \(\mathcal{A}^{*}_{n}\), but \(X_{p^{\prime},r_{b}}\cap B\cap X_{\emptyset,r_{b}}\in I^{+}\), which is a contradiction to the fact that \(\mathcal{A}^{*}_{n}\) is a maximal antichain below \(X_{\emptyset,r_{b}}\cap B\). Let \(H\subset j(P(\kappa)/I)\) be generic over \(V[G]\) containing \(j(B\cap X_{\emptyset,r_{b}})\). Then in \(V[G*H]\), we can form an elementary embedding \(k:M\to N\) with critical point \(j(\kappa)\). Consider \(b=\{(A_{n},B_{n})\in\mathcal{A}_{n}:n\in\omega,(\kappa,j(\kappa))\in j(A_{n})\otimes k(j(B_{n}))\}\). By Claim 4.8, \(k\circ j^{\prime\prime}b\in V[G*H]\) is a branch through \(k(j(T))\) violating property (7), as witnessed by \((\kappa,j(\kappa))\). Since \(N\) is a well-founded inner model of \(V[G*H]\), there is such a branch in \(N\). By the elementarity of \(k\circ j\), there is such a branch in \(V\) through \(T\) violating property (7), which is a contradiction. Let us turn to the construction of \(T\). We will construct \(T\) recursively by levels. Let \(\mathcal{A}_{-1}=\sigma(\emptyset)=(A,B)=(X_{p^{-1}_{a},r^{-1}_{a}},X_{p^{-1}_{b},r^{-1}_{b}})\). To avoid excessive repetitions, we will assume that all the conditions from \(\mathbb{M}(\omega,\lambda)\times\mathbb{M}(\omega,\lambda)\) extend \(((p^{-1}_{a},r^{-1}_{a}),(p^{-1}_{b},r^{-1}_{b}))\). Let us first define \(T(0)=\mathcal{A}_{0}\). Recursively, suppose we have defined \(\mathcal{A}_{0,<\eta}=\langle(X_{p^{0}_{i,a},r^{0}_{i,a}},X_{p^{0}_{i,b},r^{0}_{i,b}}):i<\eta\rangle\) (partially) satisfying property (4).
Let \((t_{0},t_{1})\) be a lower bound for \(\langle(r^{0}_{i,a},r^{0}_{i,b}):i<\eta\rangle\) in \(R\times R\). If there exist \((q_{0},t^{\prime}_{0})\leq(p^{-1}_{a},t_{0})\) and \((q_{1},t^{\prime}_{1})\leq(p^{-1}_{b},t_{1})\) such that \((X_{q_{0},t^{\prime}_{0}},X_{q_{1},t^{\prime}_{1}})\) is incompatible with every element in \(\mathcal{A}_{0,<\eta}\), let \((Y^{0}_{\eta,a},Y^{0}_{\eta,b})\) be one such \((X_{q_{0},t^{\prime}_{0}},X_{q_{1},t^{\prime}_{1}})\). Then we define \((X_{p^{0}_{\eta,a},r^{0}_{\eta,a}},X_{p^{0}_{\eta,b},r^{0}_{\eta,b}})\) to be \(\sigma(\langle\emptyset,(Y^{0}_{\eta,a},Y^{0}_{\eta,b})\rangle)\). Notice that this process must stop at some countable stage \(\gamma_{0}<\omega_{1}\) since \(\{(p^{0}_{i,a},p^{0}_{i,b}):i<\gamma_{0}\}\) is an antichain in \(Add(\omega,\lambda)\times Add(\omega,\lambda)\) below \((p^{-1}_{a},p^{-1}_{b})\), which satisfies the countable chain condition. Let us verify that all the properties are satisfied. Properties (1), (2), (3), (4), (6) are satisfied by the construction. Property (7) is not relevant at this stage. Property (5) is satisfied since we only stop when the process described above cannot be continued, which is exactly the statement of property (5). In general, the definition of \(\mathcal{A}_{n+1}\) is very similar to the construction above. Basically, for each \((C_{0},C_{1})\in\mathcal{A}_{n}\), we repeat the process above with \((C_{0},C_{1})\) playing the role of \(\mathcal{A}_{-1}\). One difference, in order to satisfy property (6), is that at the beginning of the construction, we let \((h_{0},h_{1})\) be the lower bound in \(R\times R\) for \(\langle(r_{i,a}^{n},r_{i,b}^{n}):i<\gamma_{n}\rangle\) and work below \(((p_{a}^{-1},h_{0}),(p_{b}^{-1},h_{1}))\) in \(\mathbb{M}(\omega,\lambda)\times\mathbb{M}(\omega,\lambda)\). Finally, to see that property (7) is satisfied, notice that any branch \(b\) through \(T\) corresponds to a play of the game \(G_{I}\) where Player Empty is playing according to their winning strategy \(\sigma\). More precisely, \(b\) is the sequence of sets played by Player Empty according to \(\sigma\) in a play of the game \(G_{I}\). As a result, the winning condition of Player Empty ensures that (7) is satisfied. **Corollary 4.9**.: _It is consistent that \(\aleph_{2}\to_{hc}[\aleph_{2}]_{\omega,2}^{2}\) and \(2^{\omega}\geq\omega_{2}\)._ Proof.: Let \(\kappa\) be a measurable cardinal. Then in \(V^{\mathbb{M}(\omega,\kappa)}\), \(2^{\omega}\geq\omega_{2}\) and there is an ideal satisfying the hypothesis of Proposition 4.7 (see for example [7, Theorem 23.2]). Apply Proposition 4.7 and Theorem 3.3.
## 5. \(\sigma\)-closed ideals and monochromatic highly connected subgraphs
In this section, we prove the following theorem. **Theorem 5.1**.: _Suppose a regular cardinal \(\kappa\) carries a countably complete uniform ideal \(I\) such that there exists a dense collection \(H\subset I^{+}\) that is \(\sigma\)-closed. Then \(\kappa\to_{hc}(\kappa)_{\omega}^{2}\). Moreover, \(\kappa\to_{hc,<4}(\kappa)_{\omega}^{2}\) holds._ Fix an ideal \(I\) as in the hypothesis of Theorem 5.1. It is worth comparing this ideal hypothesis with the one from the previous section: * we do not insist that \(I\) is normal anymore, * we impose a stronger requirement that the ideal has a \(\sigma\)-closed dense subset (any such ideal is \(2\)-precipitous). Fix a coloring \(c:[\kappa]^{2}\to\omega\).
Given \(B_{0},B_{1}\in I^{+}\) and \(i\in\omega\), we say \((B_{0},B_{1})\) _is \(i\)-frequent_ if for any positive \(B_{0}^{\prime}\subset B_{0}\) and positive \(B_{1}^{\prime}\subset B_{1}\), it is the case that \(\{\alpha\in B_{0}^{\prime}:\{\beta\in B_{1}^{\prime}:c(\alpha,\beta)=i\}\in I^{+}\}\in I^{+}\). **Remark 5.2**.: _Equivalently, \((B_{0},B_{1})\) is \(i\)-frequent if for any positive \(B_{1}^{\prime}\subset B_{1}\), it is the case that \(\{\alpha\in B_{0}:\{\beta\in B_{1}^{\prime}:c(\alpha,\beta)=i\}\in I^{+}\}\in I^{*}\upharpoonright B_{0}\)._ **Claim 5.3**.: There exist a positive \(B\in I^{+}\) and \(i\in\omega\) such that for any positive \(B^{\prime}\subset B\), there are positive \(B_{0},B_{1}\subset B^{\prime}\) such that \((B_{0},B_{1})\) is \(i\)-frequent. Proof.: In the following proof, to avoid repetitions, whenever we mention a positive set, we implicitly assume that it belongs to the \(\sigma\)-closed dense collection \(H\). Suppose otherwise for the sake of contradiction. For \(A\in I^{+}\) and \(j\in\omega\), let \((*)_{A,j}\) abbreviate the assertion: there are positive sets \(B_{0},B_{1}\subset A\) such that \((B_{0},B_{1})\) is \(j\)-frequent. By the hypothesis, we can recursively define \(\langle B^{\prime}_{k}\in I^{+}:k\in\omega\rangle\) such that * \(B^{\prime}_{0}=\kappa\), * for any \(k\in\omega\), \(B^{\prime}_{k+1}\subset B^{\prime}_{k}\) and \(\neg(*)_{B^{\prime}_{k+1},k}\). Let \(B^{\prime}=\bigcap_{k\in\omega}B^{\prime}_{k}\). By the \(\sigma\)-closedness of \(H\) (together with the countable completeness of \(I\)), we have that \(B^{\prime}\in I^{+}\). Then for any \(i\in\omega\), there are no positive \(B_{0},B_{1}\subset B^{\prime}\) such that \((B_{0},B_{1})\) is \(i\)-frequent. Recursively construct an \(\omega\)-sequence of pairs of \(I\)-positive sets \(\langle(C_{k},D_{k}):k\in\omega\rangle\) as follows: start with \((B^{\prime},B^{\prime})=(C_{-1},D_{-1})\); since no pair of positive subsets of \(B^{\prime}\) is \(0\)-frequent, there are positive \((C_{0},D_{0})\subset(C_{-1},D_{-1})\) such that for all \(\alpha\in C_{0}\), \(\{\beta\in D_{0}:c(\alpha,\beta)=0\}\in I\). In general, as no pair of positive subsets of \((C_{i},D_{i})\) is \((i+1)\)-frequent, we can find \((C_{i+1},D_{i+1})\subset(C_{i},D_{i})\) such that for all \(\alpha\in C_{i+1}\), \(\{\beta\in D_{i+1}:c(\alpha,\beta)=i+1\}\in I\). Let \(C^{*}=\bigcap_{i\in\omega}C_{i}\) and \(D^{*}=\bigcap_{i\in\omega}D_{i}\). By the \(\sigma\)-closedness of \(H\) again, both \(C^{*}\) and \(D^{*}\) are in \(I^{+}\). By the \(\sigma\)-completeness of the ideal, we can find some \(i\) and \(\alpha\in C^{*}\) such that \(\{\beta\in D^{*}:c(\alpha,\beta)=i\}\in I^{+}\). However, this contradicts that \(\alpha\in C_{i}\). To finish the proof of Theorem 5.1, by repeatedly applying Claim 5.3 (and using that \(i\)-frequentness is inherited by positive subsets in either coordinate), we can find \(\langle B_{n}\in I^{+}:n\in\omega\rangle\) and \(i\in\omega\) such that for any \(n<k\), \((B_{n},B_{k})\) is \(i\)-frequent. Given \(n\in\omega\), let \(B^{*}_{n}\subset B_{n}\) be the collection of \(\alpha\in B_{n}\) satisfying that for any \(k>n\), \(\{\beta\in B_{k}:c(\alpha,\beta)=i\}\in I^{+}\). Notice that \(B^{*}_{n}=_{I}B_{n}\) by Remark 5.2 and the fact that \(I\) is \(\sigma\)-complete. We claim that \(B=\bigcup_{n\in\omega}B^{*}_{n}\) is highly connected witnessed by \(i\). Given \(\alpha<\beta\in B\) and \(C\in[B]^{<\kappa}\), there must be some \(n_{0},n_{1}\in\omega\) such that \(\alpha\in B^{*}_{n_{0}}\) and \(\beta\in B^{*}_{n_{1}}\). Find \(k>\max\{n_{0},n_{1}\}\). By the hypothesis, \(C_{0}=\{\gamma\in B^{*}_{k}:c(\alpha,\gamma)=i\}\in I^{+}\) and \(C_{1}=\{\gamma\in B^{*}_{k+1}:c(\beta,\gamma)=i\}\in I^{+}\).
As \((B_{k},B_{k+1})\) is \(i\)-frequent, we can find \(\gamma_{0}\in C_{0}-C\) and \(\gamma_{1}\in C_{1}-C\) such that \(c(\gamma_{0},\gamma_{1})=i\). As a result, \(\alpha,\gamma_{0},\gamma_{1},\beta\) is the required path of color \(i\).
## 6. Remarks on the consistency of the ideal hypothesis
For a regular cardinal \(\lambda\geq\kappa\), if \(\kappa\) is \(\lambda\)-supercompact, we show that in \(V^{Coll(\omega_{1},<\kappa)}\), \(\lambda\) carries a \(\kappa\)-complete uniform ideal which admits a dense and \(\sigma\)-closed collection of positive sets. The construction is due to Galvin, Jech, Magidor [11] and independently Laver [15]. We supply a proof for the sake of completeness. Let \(U\) be a fine normal ultrafilter on \(P_{\kappa}\lambda\). By a theorem of Solovay (see [18, Theorem 14] for a proof), there exists \(B\in U\) such that \(a\in B\mapsto\sup a\) is injective. Let \(j:V\to M\simeq Ult(V,U)\) be the ultrapower embedding. Let \(\delta=\sup j^{\prime\prime}\lambda\). Let \(G\subset Coll(\omega_{1},<\kappa)\) be generic over \(V\). It is well-known that if \(H\subset Coll(\omega_{1},[\kappa,j(\kappa)))\) is generic over \(V[G]\), then we can lift \(j\) to \(j^{+}:V[G]\to M[G*H]\) in \(V[G*H]\). In \(V[G]\), consider the ideal \[I=\{X\subset\lambda:\Vdash_{Coll(\omega_{1},[\kappa,j(\kappa)))}\delta\not\in j^{+}(X)\}.\] The fact that \(I\) is \(\kappa\)-complete and uniform is immediate. Let us show that there exists a dense \(\sigma\)-closed collection of positive sets. For each \(r\in Coll(\omega_{1},<j(\kappa))^{M}/G\), there exists a function \(f_{r}:B\to Coll(\omega_{1},<\kappa)\) such that \(j(f_{r})(j^{\prime\prime}\lambda)=r\). Define \(X_{r}=\{\sup a:a\in B,f_{r}(a)\in G\}\). **Claim 6.1**.: \(X_{r}\in I^{+}\)_._ Proof.: It suffices to check that \(r\Vdash\delta\in j^{+}(X_{r})\). Let \(H\subset Coll(\omega_{1},<j(\kappa))^{M}/G\) containing \(r\) be generic over \(V[G]\); then we can lift \(j\) to \(j^{+}:V[G]\to M[G*H]\). In particular, \(G*H=j^{+}(G)\). Since \(j(f_{r})(j^{\prime\prime}\lambda)=r\in j^{+}(G)\), we have that \(\delta\in j^{+}(X_{r})\). **Claim 6.2**.: \(X_{r}\subset_{I}X_{r^{\prime}}\) iff \(r\leq_{Coll(\omega_{1},<j(\kappa))^{M}/G}r^{\prime}\). Proof.: If \(r\leq r^{\prime}\), then it is clear that \(X_{r}\subset_{I}X_{r^{\prime}}\). For the other direction, suppose for the sake of contradiction that \(r\not\leq_{Coll(\omega_{1},<j(\kappa))^{M}/G}r^{\prime}\). In particular, there is some extension \(r^{*}\) of \(r\) that is incompatible with \(r^{\prime}\). Then \(r^{*}\Vdash\delta\in j^{+}(X_{r})-j^{+}(X_{r^{\prime}})\). Hence, \(X_{r}\not\subset_{I}X_{r^{\prime}}\). As a result, \[\{X_{r}:r\in Coll(\omega_{1},<j(\kappa))^{M}/G\}\] is \(\sigma\)-closed, since \(Coll(\omega_{1},<j(\kappa))^{M}/G\) is \(\sigma\)-closed in \(V[G]\). **Claim 6.3**.: For any \(X\in I^{+}\), there exists some \(r\) such that \(X_{r}\subset_{I}X\). Proof.: Let \(r\in Coll(\omega_{1},<j(\kappa))^{M}/G\) force that \(\delta\in j^{+}(X)\). We show that \(X_{r}\subset_{I}X\). Otherwise, there is some \(r^{\prime}\) forcing that \(\delta\in j^{+}(X_{r})-j^{+}(X)\). In particular, \(r^{\prime}\) forces that \(j(f_{r})(j^{\prime\prime}\lambda)=r\in j^{+}(G)\). By the separativity of the forcing, \(r^{\prime}\leq_{Coll(\omega_{1},<j(\kappa))^{M}/G}r\). This contradicts the fact that \(r\) forces \(\delta\in j^{+}(X)\). **Theorem 6.4**.: 1.
_If_ \(\kappa\) _is measurable, then in_ \(V^{Coll(\omega_{1},<\kappa)}\)_,_ \(\aleph_{2}\rightarrow_{hc}(\aleph_{2})_{\omega}^{2}\)_._ 2. _If_ \(\kappa\) _is supercompact, then in_ \(V^{Coll(\omega_{1},<\kappa)}\)_, for all regular_ \(\lambda\geq\aleph_{2}\)_,_ \(\lambda\rightarrow_{hc}(\lambda)_{\omega}^{2}\)_._ Some large cardinal assumption is necessary to establish the consistency of \(\kappa\rightarrow_{hc}(\kappa)_{\omega}^{2}\), as shown in [14]. ## 7. The lengths of the paths In Section 5, we have shown that if there exists a \(\sigma\)-complete uniform ideal on \(\omega_{2}\) admitting a \(\sigma\)-closed collection of dense positive sets, then \(\omega_{2}\rightarrow_{hc,<4}(\omega_{2})_{\omega}^{2}\). One natural question is whether we can improve the conclusion to \(\omega_{2}\rightarrow_{hc,<3}(\omega_{2})_{\omega}^{2}\). In this section, we show that the answer is no, at least not from the same hypothesis. **Remark 7.1**.: _If there is a \(\sigma\)-closed forcing \(P\) such that in \(V^{P}\), there is a transitive class \(M\) and an elementary embedding \(j:V\to M\) with critical point \(\kappa\), then \(\kappa\rightarrow_{hc}(\kappa)_{\omega}^{2}\) holds. Essentially the same proof from Section 5 works._ **Theorem 7.2**.: _It is consistent relative to the existence of a measurable cardinal that \(\aleph_{2}\rightarrow_{hc,<4}(\aleph_{2})_{\omega}^{2}\) but \(\aleph_{2}\not\rightarrow_{hc,<3}(\aleph_{2})_{\omega}^{2}\)._ Proof.: Let \(\kappa\) be a measurable cardinal. We will make use of a forcing poset \(\mathbb{P}_{\kappa}\) due to Komjath-Shelah [13, Theorem 7]. The final forcing will be \(Coll(\omega_{1},<\kappa)*\mathbb{P}_{\kappa}\). It follows from the work by Komjath-Shelah that \(\omega_{2}\not\rightarrow_{hc,<3}(\omega_{2})_{\omega}^{2}\) in the final model. Let \(G\subset Coll(\omega_{1},<\kappa)*\mathbb{P}_{\kappa}\) be generic. By Remark 7.1, it suffices to check that in a further countably closed forcing extension, there exists an elementary embedding \(j:V[G]\to M\) with critical point \(\omega_{2}^{V[G]}=\kappa\). Towards this, let us recall the definition of \(\mathbb{P}_{\kappa}\): conditions consist of \((S,f,\mathcal{H},h)\) such that * \(S\in[\kappa]^{\leq\aleph_{0}}\), * \(f:[S]^{2}\rightarrow[\omega]^{\omega}\), * \(\mathcal{H}\subset[S]^{\omega}\), \(|\mathcal{H}|\leq\aleph_{0}\), for each \(H\in\mathcal{H}\), \(otp(H)=\omega\) and for any \(H,H^{\prime}\in\mathcal{H}\), \(H\cap H^{\prime}\) is finite, * \(h:\mathcal{H}\rightarrow\omega\), * if \(\alpha\in S\), \(H\in\mathcal{H}\) with \(\min H<\alpha\), then \(|\{\beta\in H:h(H)\in f(\alpha,\beta)\}|\leq 1\). The order is: \((S^{\prime},f^{\prime},\mathcal{H}^{\prime},h^{\prime})\leq(S,f,\mathcal{H},h)\) iff \(S^{\prime}\supset S\), \(\mathcal{H}^{\prime}\supset\mathcal{H}\), \(f^{\prime}\upharpoonright[S]^{2}=f\), \(h^{\prime}\upharpoonright\mathcal{H}=h\) and for any \(H\in\mathcal{H}^{\prime}-\mathcal{H}\), \(H\not\subset S\). Note that \(\mathbb{P}_{\kappa}\) is a countably closed forcing of size \(\kappa\). Let \(j:V\to M\) witness that \(\kappa\) is measurable. Let \(G*H\subset Coll(\omega_{1},<\kappa)*\mathbb{P}_{\kappa}\) be generic over \(V\). Let \(G^{*}\subset Coll(\omega_{1},<j(\kappa))/G*H\) be generic over \(V[G*H]\). This is possible since \(Coll(\omega_{1},<\kappa)*\mathbb{P}_{\kappa}\) regularly embeds into \(Coll(\omega_{1},<j(\kappa))\) with a countably closed quotient (see [7, Theorem 14.3]). As a result, we can lift \(j:V[G]\to M[G^{*}]\). In order to lift further to \(V[G*H]\), we need to force \(j(\mathbb{P}_{\kappa})/H\) over \(V[G^{*}]\).
It suffices to show that in \(V[G^{*}]\), \(j(\mathbb{P}_{\kappa})/H\) is countably closed. Suppose \(\langle p_{n}=_{def}(S_{n},f_{n},\mathcal{H}_{n},h_{n}):n\in\omega\rangle\subset j (\mathbb{P}_{\kappa})/H\) is a decreasing sequence. We want to show that \(q=\bigcup_{n\in\omega}p_{n}\) is the desired lower bound. For this, we only need to verify that \(q\in j(\mathbb{P}_{\kappa})/H\). More explicitly, we need to verify that \(q\) is compatible with every \(h\in H\). For each \(p=(S_{p},f_{p},\mathcal{H}_{p},h_{p})\in j(\mathbb{P}_{\kappa})\), let \(p\upharpoonright\kappa\) denote the condition \((S_{p}\cap\kappa,f_{p}\upharpoonright[S_{p}\cap\kappa]^{2},\mathcal{H}_{p} \cap P(\kappa),h_{p}\upharpoonright(\mathcal{H}_{p}\cap P(\kappa)))\). It is not hard to check that \(p\upharpoonright\kappa\in\mathbb{P}_{\kappa}\) and \(p\leq_{j(\mathbb{P}_{\kappa})}p\upharpoonright\kappa\). **Claim 7.3**.: \(p\in j(\mathbb{P}_{\kappa})/H\) iff \(p\upharpoonright\kappa\in H\). Proof of the claim.: If \(p\upharpoonright\kappa\in H\), to see \(p\in j(\mathbb{P}_{\kappa})/H\), it suffices to see that for any \(r\in H\) with \(r\leq_{\mathbb{P}_{\kappa}}p\upharpoonright\kappa\), \(r\) is compatible with \(p\). To check that \(r\cup p\) can be extended to a condition, it suffices to check that for any \(B\in\mathcal{H}_{r}-\mathcal{H}_{p}\), \(B\not\subset S_{p}\), and for any \(B\in\mathcal{H}_{p}-\mathcal{H}_{r}\), \(B\not\subset S_{r}\). To see the former, note that if \(B\in\mathcal{H}_{r}-\mathcal{H}_{p}\), then \(B\in\mathcal{H}_{r}-\mathcal{H}_{p\upharpoonright\kappa}\); since \(r\leq p\upharpoonright\kappa\), \(B\not\subset S_{p}\cap\kappa\). Since \(B\subset\kappa\), we have \(B\not\subset S_{p}\). To see the latter, \(B\in\mathcal{H}_{p}-\mathcal{H}_{r}\) implies that \(B\in\mathcal{H}_{p}-\mathcal{H}_{p\upharpoonright\kappa}\). In particular, \(B\cap[\kappa,j(\kappa))\neq\emptyset\). Hence \(B\not\subset S_{r}\) since \(S_{r}\subset\kappa\). If \(p\in j(\mathbb{P}_{\kappa})/H\), then for any \(h\in H\), \(h\) and \(p\) are compatible. In particular, since \(p\leq p\upharpoonright\kappa\), we know that any \(h\in H\) is compatible with \(p\upharpoonright\kappa\). This implies that \(p\upharpoonright\kappa\in H\). To finish the proof: for each \(n\in\omega\), by Claim 7.3 and the fact that \(p_{n}\in j(\mathbb{P}_{\kappa})/H\), we have that \(p_{n}\upharpoonright\kappa\in H\). As a result, we must have \(q\upharpoonright\kappa\in H\). By Claim 7.3, we have \(q\in j(\mathbb{P}_{\kappa})/H\). ## 8. Open questions **Question 8.1**.: Starting from the existence of a weakly compact cardinal, can one force that \(\aleph_{2}\rightarrow_{hc}(\aleph_{2})_{\omega}^{2}\)? The proof in this paper can be adapted to the weaker assumption of the existence of a weakly Ramsey cardinal. Recall that \(\kappa\) is _weakly Ramsey_ if for any \(\kappa\)-model \(M\), there exists a \(\kappa\)-complete \(M\)-ultrafilter \(U\) that is weakly amenable to \(M\), namely, for any \(\mathcal{F}\in M\) with \(|\mathcal{F}|\leq\kappa\), we have \(\mathcal{F}\cap U\in M\). However, being a weakly Ramsey cardinal is much stronger than being a weakly compact cardinal. **Question 8.2**.: Is \(\aleph_{2}\rightarrow_{hc,<3}(\aleph_{2})_{\omega}^{2}\) consistent? **Question 8.3**.: Can one separate \(\aleph_{2}\to_{hc,<m}(\aleph_{2})_{\omega}^{2}\) from \(\aleph_{2}\to_{hc,<n}(\aleph_{2})_{\omega}^{2}\) where \(4\leq m<n\leq\omega\)? **Question 8.4**.: Is \(\aleph_{2}\not\to_{hc}(\aleph_{1})_{\omega}^{2}\) consistent?
The last question concerns the natural generalization of \(2\)-precipitous ideals. **Definition 8.5**.: Fix a regular uncountable cardinal \(\kappa\) and \(n\in\omega\) with \(n\geq 3\). We say an ideal \(I\) on \(\kappa\) is _\(n\)-precipitous_ if Player Empty does not have a winning strategy in the following game \(G_{I}^{n}\) with perfect information: Player Empty and Nonempty take turns playing a \(\subset_{I}\)-decreasing sequence of \(n\)-tuples of \(I\)-positive sets \(\langle(A_{m}^{i})_{i<n}:m\in\omega\rangle\), with Player Empty starting the game. Player Nonempty wins iff there exist \(\alpha_{0}<\alpha_{1}<\dots<\alpha_{n-1}\) such that \(\alpha_{i}\in\bigcap_{m\in\omega}A_{m}^{i}\) for all \(i<n\). We have shown that the existence of a uniform normal \(2\)-precipitous ideal on \(\kappa\) implies that \(\kappa\geq\omega_{2}\). A natural question is whether there is a similar phenomenon for larger \(n\): **Question 8.6**.: Fix \(n\geq 3\). Suppose \(\kappa\) carries a uniform normal \(n\)-precipitous ideal. Must \(\kappa\) be \(\geq\omega_{n}\)? ## 9. Acknowledgement We thank Stevo Todorcevic and Spencer Unger for helpful discussions and comments.
2306.08756
Recipes for Sequential Pre-training of Multilingual Encoder and Seq2Seq Models
Pre-trained encoder-only and sequence-to-sequence (seq2seq) models each have advantages, however training both model types from scratch is computationally expensive. We explore recipes to improve pre-training efficiency by initializing one model from the other. (1) Extracting the encoder from a seq2seq model, we show it under-performs a Masked Language Modeling (MLM) encoder, particularly on sequence labeling tasks. Variations of masking during seq2seq training, reducing the decoder size, and continuing with a small amount of MLM training do not close the gap. (2) Conversely, using an encoder to warm-start seq2seq training, we show that by unfreezing the encoder partway through training, we can match task performance of a from-scratch seq2seq model. Overall, this two-stage approach is an efficient recipe to obtain both a multilingual encoder and a seq2seq model, matching the performance of training each model from scratch while reducing the total compute cost by 27%.
Saleh Soltan, Andy Rosenbaum, Tobias Falke, Qin Lu, Anna Rumshisky, Wael Hamza
2023-06-14T21:41:52Z
http://arxiv.org/abs/2306.08756v1
# Recipes for Sequential Pre-training of Multilingual Encoder and Seq2Seq Models ###### Abstract Pre-trained encoder-only and sequence-to-sequence (seq2seq) models each have advantages, however training both model types from scratch is computationally expensive. We explore recipes to improve pre-training efficiency by initializing one model from the other. (1) Extracting the encoder from a seq2seq model, we show it under-performs a Masked Language Modeling (MLM) encoder, particularly on sequence labeling tasks. Variations of masking during seq2seq training, reducing the decoder size, and continuing with a small amount of MLM training do not close the gap. (2) Conversely, using an encoder to warm-start seq2seq training, we show that by unfreezing the encoder partway through training, we can match task performance of a from-scratch seq2seq model. Overall, this two-stage approach is an efficient recipe to obtain both a multilingual encoder and a seq2seq model, matching the performance of training each model from scratch while reducing the total compute cost by 27%. ## 1 Introduction and Related Work Transformer-based Pre-trained Language Models (PLMs) have become the main building blocks when creating models for most Natural Language Processing (NLP) tasks. PLMs come in three main architectures: decoder-only (e.g. GPT), sequence-to-sequence (seq2seq, e.g. BART, T5), and encoder-only (e.g. BERT). Multilingual models such as XLM-RoBERTa (encoder-only) and mBART/mT5 (seq2seq) are also common. Raffel et al. (2020) showed that seq2seq models can perform many NLP tasks on par with similarly-sized encoder-only models trained via Masked Language Modeling (MLM) by framing tasks such as sentence classification or sequence labeling as text generation. However, encoder models remain more efficient at inference for sequence labeling tasks like Named Entity Recognition (NER) and Part-of-Speech tagging (POS): an encoder can label all words in the sequence with a single forward pass, while a seq2seq model must generate each word's label autoregressively. Motivated by the need for both an encoder model for efficient sequence labeling and a seq2seq model for generative tasks like semantic parsing and summarization, we explore recipes to pre-train both models. Compared to training each model from scratch, we propose two sequential training recipes which reduce the total compute cost (Section 2.1.6). The first recipe is to extract the encoder of a seq2seq model as proposed in Ni et al. (2022). Although it performs well on classification tasks, we show that the encoder from seq2seq under-performs a from-scratch encoder on sequence labeling tasks. Variations of masking during seq2seq training and reducing the decoder size do not provide a consistent benefit to the encoder. We also explore continuing training the extracted encoder on MLM for a small number of updates. However, we show it cannot consistently close the gap in performance across different datasets.

Figure 1: Two-stage seq2seq pre-training. First (left), we train the encoder via Masked Language Modeling (MLM). Second (right), we attach a randomly initialized decoder to the pre-trained MLM encoder, and train on the same data with a de-noising objective. The encoder may remain frozen for part or all of the second stage.

The second recipe is to warm-start seq2seq pre-training with an encoder pre-trained via MLM (Figure 1). Rothe et al. (2020) proposed a similar idea for fine-tuning.
AlexaTM 20B and AlexaTM 5B applied this recipe for pre-training, by warm-starting with Alexa Teacher Model encoders Soltan et al. (2022); Rosenbaum et al. (2022); FitzGerald et al. (2022). We add the novelty of comparing to a seq2seq model pre-trained from scratch with the same data and codebase. First, we observe that if the encoder is frozen the whole time, the model under-performs a from-scratch seq2seq model on semantic parsing and summarization tasks. While cross-attention fusion across different layers of the encoder reduces the performance gap, we find that we can match performance of a from-scratch model by using standard cross-attention and unfreezing the encoder partway through training. Overall, the second recipe demonstrates a viable approach for efficient pre-training of both a multilingual encoder and a multilingual seq2seq model, matching the performance of training each model from scratch, while using 27% less total compute. See Appendix A for additional related work. ## 2 Pre-Training Setup We describe our pre-training objectives, models, datasets, two recipes for initializing one model type from the other, and compare compute costs. ### Models We pre-train ten models (Table 1): one from-scratch encoder, five from-scratch seq2seq models, one encoder from a seq2seq model with continued MLM training, and three two-stage seq2seq models warm-started with the from-scratch encoder. We report the pre-training Compute Cost for each, where "TU" (Training Units) is defined as 100k update steps for 12 model layers with hidden dimension 1024 and batch size 1M tokens (Appendix D, E). #### 2.1.1 Encoder Model From Scratch We train an encoder model ("roberta-12e" in Table 1) following a similar recipe to XLM-RoBERTa Conneau et al. (2020), using the MLM objective (Figure 1(a)) of randomly masking 15% of subword tokens, as introduced in BERT Devlin et al. (2019). We use a batch size of 1M tokens and train for 500k update steps. Notably, these settings match our seq2seq models. We use "PreLayerNorm" Xiong et al. (2020), moving the layer norms to inside residual blocks to improve training stability. #### 2.1.2 Seq2Seq Objectives Our seq2seq training follows the architecture and de-noising task of BART and mBART Lewis et al. (2020); Liu et al. (2020); the only architecture change we make is to again use PreLayerNorm. The de-noising objective selects 15% of the tokens in the input (spans of length \(\sim\) Poisson(3)), and either (i) simply drops them, or (ii) replaces each selected span with a single mask token. The model is trained to reconstruct the original input entirely. See Figures 1(b) and 1(c), respectively. We add a suffix "-mask" to the model names that use masking instead of dropping the tokens. Intuitively, adding an explicit mask token for de-noising makes the reconstruction task easier, as the decoder knows exactly where the missing tokens are needed. #### 2.1.3 Seq2Seq Models From Scratch All of our seq2seq models use 12 encoder layers ("12e"). The first five models are trained from scratch starting from randomly initialized weights. The models "bart-12e12d" and "bart-12e12d-mask" use 12-layer decoders (same number as encoder layers) using the seq2seq de-noising training objective without masking and with masking, respectively. The remaining three models use a smaller decoder of either 2 layers ("bart-12e2d" without masking, "bart-12e2d-mask" with masking) or 1 layer ("bart-12e1d-mask", with masking).
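To make the two de-noising variants of Section 2.1.2 concrete, here is a minimal sketch of how the span corruption could be implemented. This is our illustration rather than the paper's code: the helper name `corrupt`, the sentinel `MASK_ID`, and the overshoot-tolerant span sampling are all assumptions.

```python
import numpy as np

MASK_ID = -1  # hypothetical mask-token id; a real tokenizer defines its own

def corrupt(tokens, rng, noise_ratio=0.15, mean_span=3.0, use_mask=True):
    """BART-style de-noising corruption: select ~15% of tokens in spans of
    length ~ Poisson(3), then either drop them (use_mask=False) or replace
    each contiguous selected span with one MASK_ID (use_mask=True, the
    "-mask" models). The reconstruction target is the original sequence."""
    n = len(tokens)
    budget = int(n * noise_ratio)
    covered = set()
    while len(covered) < budget:            # may slightly overshoot the budget
        span = max(1, int(rng.poisson(mean_span)))
        start = int(rng.integers(n))
        covered.update(range(start, min(n, start + span)))
    corrupted, i = [], 0
    while i < n:
        if i in covered:
            if use_mask:
                corrupted.append(MASK_ID)   # one mask per contiguous span
            while i < n and i in covered:
                i += 1
        else:
            corrupted.append(tokens[i])
            i += 1
    return corrupted, list(tokens)          # (encoder input, decoder target)

# Example: corrupt a toy "sentence" of token ids.
rng = np.random.default_rng(0)
print(corrupt(list(range(20)), rng))
```

The `use_mask=True` branch leaves an explicit placeholder telling the decoder where tokens were removed, which matches the intuition above that masking makes reconstruction easier.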
We hypothesize that reducing the size of the decoder may strengthen the encoder when it is extracted and used on its own. #### 2.1.4 Recipe 1: Encoder of Seq2Seq + MLM We extract the encoder from the seq2seq model "bart-12e12d" and continue training via MLM for 100k updates ("bart-12e12d+mlm"). We initialize the MLM head from the input embedding and untie. #### 2.1.5 Recipe 2: Two-Stage Seq2Seq Models Finally, we train three seq2seq models following the two-stage setup (Figure 1). We initialize the encoder weights of the seq2seq model with the MLM encoder "roberta-12e" (Section 2.1.1) and train via seq2seq de-noising without masking. The first two models train for 500k updates with the encoder always frozen: "2stage-bart-12e12d" uses standard cross-attention, where the decoder attends to only the final encoder layer, and "2stage-bart-12e12d-attn-f" uses a novel application of **attention fusion** Cao et al. (2022) during cross-attention, where the decoder attends to all encoder layers. The last model, "2stage-bart-12e12d-unfrz" uses standard cross-attention and **unfreezes the encoder partway through training**, applying 200k update steps with the encoder frozen, then 150k update steps with the encoder unfrozen. In all cases, we initialize and tie the decoder embeddings from/to the encoder embeddings and keep them frozen as long as the encoder is frozen. The LM head is also initialized from the encoder embeddings, but it is untied from the embeddings and unfrozen from the beginning of the training. #### 2.1.6 Compute Cost Comparison The baseline of training both models from scratch has a compute cost of 15.0 TU: 5.0 TU for "roberta-12e" plus 10.0 TU for "bart-12e12d". Our proposed recipes reduce the total compute cost either by 17% (to 12.5 TU) or by 27% (to 11.0 TU). ### Pretraining Dataset We pre-train on a combination of Wikipedia and mC4 (Xue et al., 2021) data in 12 languages: Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, Portuguese, Spanish, Tamil, and Telugu. We pack sequences of tokens to produce sequences of approximately 512 subword units. We allow unrelated content to be packed together in the same sequence, separated with a special symbol "[DOC]". Maintaining a relatively constant number of subword sequences reduces padding and results in efficient compute. We up-sample data for different languages following Conneau et al. (2020). ## 3 Fine-Tuning Results We present the results on fine-tuning our pre-trained models. All runs are averaged over three random seeds and reported as mean \(\pm\) standard deviation. See Appendix C for hyperparameters. ### Encoder Model Results In Table 2, we compare the performance of our encoder models on four datasets: (1) XNLI (Conneau et al., 2018) sentence-pair classification, (2) mATIS++ (Xu et al., 2020) joint Intent Classification (IC) and Slot Labeling (SL), (3) WikiANN (Pan et al., 2017) token-level Named Entity Recognition (NER), and (4) UDPOS (Nivre et al., 2020) token-level Part-of-Speech tagging (POS) (XTREME (Hu et al., 2020) version). For each task, we follow the cross-lingual zero-shot setting: train and validate on English data only, then report on the test set in English ("en") and the average over the zero-shot languages ("avg-0s"). Appendix B shows results on each language. **We find that the MLM encoder performs best on all tasks** except for mATIS++ IC avg-0s setting. The encoder of seq2seq ("bart-12e12d") is only slightly behind on the sentence-level tasks, on en/avg-0s by 0.6/1.1 points on XNLI (83.9 vs.
84.5 / 74.7 vs. 75.8), and 1.0/1.0 points on mATIS++ IC (96.8 vs. 97.8 / 86.2 vs. 87.2). However, **the gap is much larger on the sequence labeling tasks**: on en/avg-0s, 3.2/17.3 points on mATIS++ SL (92.5 vs. 95.7 / 44.3 vs. 61.6), 6.4/9.0 points on WikiANN NER (76.6 vs. 83.0 / 52.1 vs. 61.1), and 1.5/12.0 on UDPOS (94.3 vs. 95.8 / 61.5 vs. 73.5).

\begin{table} \begin{tabular}{l c c|c|c|c} \hline \hline Model & \begin{tabular}{c} Encoder \\ Layers \\ \end{tabular} & \begin{tabular}{c} Decoder \\ Layers \\ \end{tabular} & \begin{tabular}{c} Encoder \\ Updates \\ \end{tabular} & \begin{tabular}{c} Decoder \\ Updates \\ \end{tabular} & \begin{tabular}{c} Compute \\ Cost (TU) \\ \end{tabular} \\ \hline \hline \multicolumn{5}{c}{Encoder Model From Scratch (MLM only)} \\ \hline roberta-12e & 12 & 0 & 500k & 0 & 5.0 \\ \hline \multicolumn{5}{c}{Seq2Seq Models From Scratch (de-noising only)} \\ \hline bart-12e12d & 12 & 12 & 500k & 500k & 10.0 \\ bart-12e12d-mask & 12 & 12 & 500k & 500k & 10.0 \\ \hline bart-12e2d & 12 & 2 & 500k & 500k & 5.8 \\ bart-12e2d-mask & 12 & 2 & 500k & 500k & 5.8 \\ bart-12e1d-mask & 12 & 1 & 500k & 500k & 5.4 \\ \hline \multicolumn{5}{c}{Recipe 1: Encoder of Seq2Seq + MLM} \\ \hline bart-12e12d+mlm & 12 & 12 & 500k (s2s) + 100k & 500k & 10.0 (s2s) + 1.0 = 11.0 \\ \hline \hline \multicolumn{5}{c}{Recipe 2: Two-Stage Seq2Seq Models (warm-start with MLM encoder)} \\ \hline 2stage-bart-12e12d & 12 & 12 & 500k (MLM) & 500k & 5.0 (MLM) + 7.5 = 12.5 \\ 2stage-bart-12e12d-attn-f & 12 & 12 & 500k (MLM) & 500k & 5.0 (MLM) + 7.5 = 12.5 \\ 2stage-bart-12e12d-unfrz & 12 & 12 & 500k (MLM) + 150k & 200k + 150k & 5.0 (MLM) + 6.0 = 11.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Model architecture details. All models use a batch size of 1M tokens with hidden dimension of 1024, feed-forward dimension of 4096 and 16 attention heads.

This suggests that seq2seq pre-training may give the encoder the knowledge to perform sentence-level tasks, while MLM pre-training may be particularly effective for sequence labeling tasks which use the token-level representations directly. With a 12-layer decoder, the explicit mask token during seq2seq pre-training does not seem to improve the encoder. However, when the decoder has only 2 layers, the mask token is crucial: "bart-12e2d-mask" out-performs "bart-12e2d" by a wide margin across tasks. We hypothesize that the mask token makes de-noising easier, by signaling where tokens should be filled in, and without this signal, the task is too challenging for a seq2seq model with just a 2-layer decoder. Reducing the decoder further to only 1 layer does not benefit the encoder. Continuing training the seq2seq-extracted encoder on MLM for 100k updates does not close the gap to the from-scratch encoder across datasets. Some tasks improve, while others degrade. ### Seq2Seq Model Results We evaluate the generation quality of our seq2seq models on two datasets: mTOP (Li et al., 2021) cross-lingual zero-shot semantic parsing, and XSUM (Narayan et al., 2018) English summarization. For mTOP, following CLASP (Rosenbaum et al., 2022), we use space-joined tokens as input, word sentinels, and SCIEM (Space- and Case-Insensitive Exact Match) metric. For both datasets, we generate outputs using beam search with k=3. As shown in Table 3, **the two-stage model with encoder unfrozen partway through training is on-par with the from-scratch seq2seq model**: compared to "bart-12e12d", "2stage-bart-12e12d-unfrz" is only 0.1 points behind on mTOP en (83.3 vs. 83.4) yet 2.5 points ahead on cross-lingual zero-shot (48.2 vs.
45.7). On XSUM, the two-stage model is on-par or slightly better than the from-scratch seq2seq models. Masking during seq2seq pre-training does not greatly impact generation quality. When the encoder is frozen ("2stage-bart-12e12d"), the results are slightly behind; attention fusion ("2stage-bart-12e12d-attn-f") does not provide a clear benefit. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \hline & \multicolumn{4}{c|}{Classification} & \multicolumn{4}{c}{Sequence Labeling} \\ \hline \multirow{3}{*}{Encoder} & \multicolumn{2}{c|}{XNLI (acc.)} & \multicolumn{2}{c|}{mATIS++ IC (acc.)} & \multicolumn{2}{c|}{mATIS++ SL (f1)} & \multicolumn{2}{c|}{WikiANN (f1)} & \multicolumn{2}{c}{UDPOS (f1)} \\ & en avg-0s & en avg-0s & en avg-0s & en avg-0s & en avg-0s & en avg-0s \\ \hline \hline \multicolumn{10}{c}{Encoder Model From Scratch (MLM only)} \\ \hline roberta-12e & **84.5\({}_{\pm 0.5}\)** & **75.8\({}_{\pm 0.2}\)** & **97.8\({}_{\pm 0.1}\)** & 87.2\({}_{\pm 4.1}\) & **95.7\({}_{\pm 0.1}\)** & **61.6\({}_{\pm 0.6}\)** & **83.0\({}_{\pm 0.1}\)** & **61.1\({}_{\pm 0.4}\)** & **95.8\({}_{\pm 0.0}\)** & **73.5\({}_{\pm 0.2}\)** \\ \hline \multicolumn{10}{c}{Encoder of Seq2Seq Models (de-noising only)} \\ \hline bart-12e12d & \(\underline{83.9}_{\pm 0.2}\) & 74.7\({}_{\pm 0.3}\) & 96.8\({}_{\pm 0.1}\) & 86.2\({}_{\pm 1.5}\) & 92.5\({}_{\pm 0.3}\) & 44.3\({}_{\pm 1.3}\) & 76.6\({}_{\pm 0.2}\) & 52.1\({}_{\pm 0.9}\) & 94.3\({}_{\pm 0.7}\) & 61.5\({}_{\pm 0.4}\) \\ bart-12e12d-mask & \(\underline{83.9}_{\pm 0.4}\) & 75.0\({}_{\pm 0.6}\) & 97.1\({}_{\pm 0.1}\) & 87.3\({}_{\pm 0.7}\) & 91.1\({}_{\pm 0.9}\) & 41.3\({}_{\pm 1.3}\) & 73.2\({}_{\pm 0.1}\) & 48.4\({}_{\pm 0.6}\) & 93.3\({}_{\pm 0.1}\) & 55.1\({}_{\pm 0.4}\) \\ \hline bart-12e2d & \(\underline{71.3}_{\pm 0.1}\) & 59.7\({}_{\pm 0.5}\) & 96.1\({}_{\pm 0.1}\) & 79.1\({}_{\pm 0.8}\) & 91.4\({}_{\pm 0.1}\) & 38.2\({}_{\pm 1.7}\) & 69.3\({}_{\pm 0.5}\) & 42.9\({}_{\pm 0.1}\) & 92.1\({}_{\pm 0.1}\) & 50.7\({}_{\pm 0.5}\) \\ bart-12e2d-mask & \(\underline{82.9}_{\pm 0.3}\) & 73.8\({}_{\pm 0.2}\) & 96.8\({}_{\pm 0.1}\) & **88.1\({}_{\pm 0.9}\)** & 92.3\({}_{\pm 0.3}\) & 48.0\({}_{\pm 1.4}\) & 76.5\({}_{\pm 0.2}\) & 54.0\({}_{\pm 0.6}\) & 93.3\({}_{\pm 0.1}\) & 54.0\({}_{\pm 0.6}\) \\ bart-12e1d-mask & \(\underline{82.4}_{\pm 0.2}\) & 72.7\({}_{\pm 0.1}\) & 97.0\({}_{\pm 0.1}\) & 87.6\({}_{\pm 0.5}\) & 92.8\({}_{\pm 0.5}\) & 49.3\({}_{\pm 1.2}\) & 74.6\({}_{\pm 0.5}\) & 48.5\({}_{\pm 0.3}\) & 92.4\({}_{\pm 0.1}\) & 46.3\({}_{\pm 1.7}\) \\ \hline \hline \multicolumn{10}{c}{Recipe 1: Encoder of Seq2Seq Model + MLM} \\ \hline bart-12e12d+mlm & 80.3\({}_{\pm 0.4}\) & 69.0\({}_{\pm 0.4}\) & 97.2\({}_{\pm 0.4}\) & 83.9\({}_{\pm 1.6}\) & 95.3\({}_{\pm 0.2}\) & 56.5\({}_{\pm 2.8}\) & 79.9\({}_{\pm 0.2}\) & 47.5\({}_{\pm 0.5}\) & 95.1\({}_{\pm 0.0}\) & 60.7\({}_{\pm 0.9}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Encoder results per task, English and avg. zero-shot. The best (second) mean result is bolded (underlined). \begin{table} \begin{tabular}{l|c c|c c} \hline \hline Seq2Seq Models & \multicolumn{2}{c|}{mTOP (acc.)} & \multicolumn{2}{c}{XSUM (ROUGE)} \\ & en avg-0s & R-1 & R-2 & R-L \\ \hline \multicolumn{4}{c}{Seq2Seq Models From Scratch (de-noising only)} \\ \hline bart-12e12d & **83.4\({}_{\pm 0.2}\)** & 45.7\({}_{\pm 1.1}\) & 40.37\({}_{\pm 0.07}\) & 17.37\({}_{\pm 0.06}\) & 32.46\({}_{\pm 0.06}\) \\ bart-12e12d-mask & 83.2\({}_{\pm 0.5}\) & 46.9\({}_{\pm 0.5}\) & **40.63\({}_{\pm 0.09}\)** & 17.48\({}_{\pm 0.10}\) & 32.63\({}_{\pm 0.06}\) \\ \hline \hline \multicolumn{4}{c}{Recipe 2: Two-Stage Seq2Seq Models (warm-start with MLM encoder)} \\ \hline 2stage-bart-12e12d & 82.0\({}_{\pm 1.1}\) & 46.8\({}_{\pm 1.1}\) & 40.12\({}_{\pm 0.06}\) & 17.13\({}_{\pm 0.03}\) & 32.16\({}_{\pm 0.01}\) \\ 2stage-bart-12e12d-attn-f & 80.6\({}_{\pm 1.3}\) & 46.4\({}_{\pm 0.5}\) & 40.13\({}_{\pm 0.06}\) & 17.24\({}_{\pm 0.07}\) & 32.28\({}_{\pm 0.03}\) \\ 2stage-bart-12e12d-unfrz & \(\underline{83.3}_{\pm 0.2}\) & **48.2\({}_{\pm 0.5}\)** & **40.63\({}_{\pm 0.11}\)** & **17.58\({}_{\pm 0.03}\)** & **32.65\({}_{\pm 0.05}\)** \\ \hline \hline \end{tabular} \end{table} Table 3: Seq2seq model results on mTOP and XSUM. The best (second) mean result is bolded (underlined).
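Read together with Table 1, the unfreezing schedule behind "2stage-bart-12e12d-unfrz" is simple to state in code. The sketch below is our illustration, not the authors' training code: the tiny layer sizes, the `set_frozen` helper, and the elided training step are assumptions (the real models are BART-style, hidden dimension 1024, with tied and initially frozen embeddings).

```python
import torch.nn as nn

def set_frozen(module: nn.Module, frozen: bool) -> None:
    for p in module.parameters():
        p.requires_grad = not frozen

# Stage 1 would train `encoder` with MLM (500k steps = 5.0 TU).
# Small dims here for illustration; the paper uses 1024-dim, 16 heads.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4), num_layers=12)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=64, nhead=4), num_layers=12)

# Stage 2: de-noising training, warm-started from the MLM encoder.
set_frozen(encoder, True)           # encoder starts frozen
UNFREEZE_AT, TOTAL_STEPS = 200_000, 350_000
for step in range(TOTAL_STEPS):
    if step == UNFREEZE_AT:
        set_frozen(encoder, False)  # unfreeze partway through training
    # ... forward/backward on a de-noised batch would go here ...

# Cost accounting from Table 1: 200k frozen steps (~3.0 TU, the forward-only
# encoder counted at roughly half cost) + 150k unfrozen steps (3.0 TU) = 6.0 TU,
# plus 5.0 TU of MLM = 11.0 TU, vs. 15.0 TU from scratch (about 27% less).
```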
\begin{table} \begin{tabular}{l|c c|c c} \hline \hline Seq2Seq Models & \multicolumn{2}{c|}{mTOP (acc.)} & \multicolumn{2}{c}{XSUM (ROUGE)} \\ & en avg-0s & R-1 & R-2 & R-L \\ \hline \multicolumn{4}{c}{Seq2Seq Models From Scratch (de-noising only)} \\ \hline bart-12e12d & **83.4\({}_{\pm 0.2}\)** & 45.7\({}_{\pm 1.1}\) & 40.37\({}_{\pm 0.07}\) & 17.37\({}_{\pm 0.06}\) & 32.46\({}_{\pm 0.06}\) \\ bart-12e12d-mask & 83.2\({}_{\pm 0.5}\) & 46.9\({}_{\pm 0.5}\) & **40.63\({}_{\pm 0.09}\)** & 17.48\({}_{\pm 0.10}\) & 32.63\({}_{\pm 0.06}\) \\ \hline \hline \multicolumn{4}{c}{Recipe 2: Two-Stage Seq2Seq Models (warm-start with MLM encoder)} \\ \hline 2stage-bart-12e12d & 82.0\({}_{\pm 1.1}\) & 46.8\({}_{\pm 1.1}\) & 40.12\({}_{\pm 0.06}\) & 17.13\({}_{\pm 0.03}\) & 32.16\({}_{\pm 0.01}\) \\ 2stage-bart-12e12d-attn-f & 80.6\({}_{\pm 1.3}\) & 46.4\({}_{\pm 0.5}\) & 40.13\({}_{\pm 0.06}\) & 17.24\({}_{\pm 0.07}\) & 32.28\({}_{\pm 0.03}\) \\ 2stage-bart-12e12d-unfrz & \(\underline{83.3}_{\pm 0.2}\) & **48.2\({}_{\pm 0.5}\)** & **40.63\({}_{\pm 0.11}\)** & **17.58\({}_{\pm 0.03}\)** & **32.65\({}_{\pm 0.05}\)** \\ \hline \hline \end{tab Overall, our proposed **two-stage seq2seq pre-training recipe provides both a multilingual encoder and a seq2seq model on-par with the two models trained from scratch, while reducing compute cost by 27%** (from 15.0 to 11.0 TU). ## 4 Conclusion and Future Work In this work, we studied recipes to efficiently pre-train both a multilingual encoder and a seq2seq model by re-using the weights from one model for the other. We found that the most effective recipe is to start training of a seq2seq model from a pre-trained encoder and unfreeze it partway through the training. Future work can explore even more efficient pre-training strategies such as jointly training on MLM and sequence-level de-noising objectives, and probe further why the encoders trained as part of a seq2seq model do not do well on sequence labeling tasks. ## 5 Limitations Our proposed two-stage training recipe is beneficial under the assumption that a pre-trained model is needed for generative as well as sequence labeling tasks. We believe that is typically the case, as one tries to offset the pre-training investment by using the model for as many tasks as possible, but this assumption might not apply in all cases. While we assess the effect of randomness on fine-tuning results by using multiple seeds, we have not done that for the pre-training itself. Even at our medium-size scale, it is already prohibitively expensive to do so. The evidence for the effectiveness of the two-stage approach is also limited by the number of tasks evaluated (2 sequence classification tasks, 2 sequence labeling tasks, 2 generation tasks), but we believe it is a reasonable trade-off between robust results and compute investment. ## 6 Acknowledgments We thank Kai-Wei Chang, Nicolas Guenon Des Mesnards, and the anonymous ACL reviewers for their helpful feedback on our work.
2310.04852
Balancing utility and cognitive cost in social representation
To successfully navigate its environment, an agent must construct and maintain representations of the other agents that it encounters. Such representations are useful for many tasks, but they are not without cost. As a result, agents must make decisions regarding how much information they choose to store about the agents in their environment. Using selective social learning as an example task, we motivate the problem of finding agent representations that optimally trade off between downstream utility and information cost, and illustrate two example approaches to resource-constrained social representation.
Max Taylor-Davies, Christopher G. Lucas
2023-10-07T15:27:01Z
http://arxiv.org/abs/2310.04852v2
# Balancing utility and cognitive cost in social representation ###### Abstract To successfully navigate its environment, an agent must construct and maintain representations of the other agents that it encounters. Such representations are useful for many tasks, but they are not without cost. As a result, agents must make decisions regarding how much information they choose to represent about the agents in their environment. Using selective imitation as an example task, we motivate the problem of finding agent representations that optimally trade off between downstream utility and information cost, and illustrate two example approaches to resource-constrained social representation. ## Representing agents under cost constraints It is generally accepted that, in order to produce adaptive behaviour, an agent must in some sense acquire and maintain an internal representation of its environment (Craik, 1943; Tolman, 1948; Wilson et al., 2014). Furthermore, unless it is condemned to an entirely solitary existence, we can expect that many environments encountered by a hypothetical agent will contain _other agents_. Much as our agent should represent the rest of the environment, we expect that it ought also to be able to represent these other agents. Humans do this--in fact, it seems we are intrinsically motivated to form mental representations of the other people we encounter (Dennett, 1987; Malle, 2008; Baker et al., 2017). We use these representations for a variety of different purposes: understanding the strengths and weaknesses of a colleague in order to effectively collaborate with them; determining whether an unfamiliar person should be treated as friend or foe; or predicting the plays of a chess opponent in order to beat them. Ideally, we would want to maintain representations that encode all possible available information about every agent in our environment. But real agents, whether biological or artificial, will inevitably have to contend with limits on their cognitive resources. Inspired by a growing body of work in computational cognitive science that models human cognition through the lens of resource rationality (Lieder and Griffiths, 2019; Bhui et al., 2021), we consider the problem of developing social representations that balance downstream utility against cognitive cost. ## Social decision tasks In order to consider the idea of using social representations for decision-making, we develop the general construct of a _social decision task_--which describes any decision task where the optimal strategy depends on having access to representations of some set of other agents. That is, the optimal policy or choice rule (in terms of maximising expected return) is conditional on the value of some social representation \(\chi\): \(\pi(\cdot)=\pi(\cdot|\chi)\). To consider the utility of a particular social representation \(\chi\), first let \(\Pi(\chi)\) be the class of possible policies that depend on \(\chi\) (i.e. may produce a different output depending on the value of \(\chi\)). We can then consider the utility of \(\chi\) as the expected return induced by the _best_ member of \(\Pi(\chi)\): \[U(\chi)=\max_{\pi\in\Pi(\chi)}\mathbb{E}[R\mid\text{behaviour}\sim\pi(\cdot|\chi)] \tag{1}\] This tells us, given access to social representation \(\chi\), how well do we expect to perform on a particular task, assuming that we're able to make the best possible use of the information in \(\chi\)? 
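To make Eq. 1 concrete, the sketch below estimates \(U(\chi)\) by Monte-Carlo rollouts over a finite policy class, keeping the best policy's mean return. Everything here (`policy_class`, `rollout_return`, the rollout count) is an illustrative assumption rather than anything specified in the paper.

```python
import numpy as np

def utility(chi, policy_class, rollout_return, n_rollouts=100, seed=0):
    """Monte-Carlo estimate of Eq. (1): the best expected return achievable,
    given representation `chi`, by any policy in a finite class Pi(chi).

    policy_class   : list of callables pi(chi) -> behaviour
    rollout_return : callable (behaviour, rng) -> sampled scalar return
    """
    rng = np.random.default_rng(seed)
    best = -np.inf
    for pi in policy_class:
        behaviour = pi(chi)  # how this policy uses the representation
        returns = [rollout_return(behaviour, rng) for _ in range(n_rollouts)]
        best = max(best, float(np.mean(returns)))
    return best
```

In the experiments below, \(\chi\) amounts to (possibly compressed) estimates of other agents' value functions, and the policy class reduces to a softmax choice rule over imitation targets.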
Given some function \(C:\chi\rightarrow\mathbb{R}\) that measures cognitive cost, we define the cost-adjusted utility of \(\chi\) as \[U^{\prime}(\chi)=(1-\lambda)U(\chi)-\lambda C(\chi) \tag{2}\] where \(\lambda\in[0,1]\) is a parameter that trades off between utility and cost (essentially telling us how much we _care_ about cost in the context of the current task/environment). A representation \(\chi^{*}\) is then considered optimal if it satisfies \(\chi^{*}=\arg\max_{\chi}U^{\prime}(\chi)\). This optimality criterion is similar to the objectives used within work on capacity-limited Bayesian decision-making and RL, such as Arumugam et al. (2023). The key difference (beyond our explicit focus on social representations) is that we are interested not so much in the cognitive cost of converting representations into behaviour, but in the cost of the representations themselves. In general, we expect this to be a combination of the cost involved in _acquiring_ a representation, and the cost involved in _storing_ it--for now we simply blanket both under entropy (for more details see Appendix A), with Eq. 2 then becoming \[U^{\prime}(\chi)=(1-\lambda)U(\chi)-\lambda H[\chi] \tag{3}\] This makes intuitive sense as an objective--essentially saying that we want to maximise expected utility while minimising the amount of information we store about other agents. ### Selective imitation as an example downstream task As a simple example of a social decision task, we consider _selective imitation_, a common component of human and animal social behaviour which involves the problem of identifying the 'best' (most rewarding) agent to imitate within a given environment (Rendell et al., 2011; Heyes, 2016). In the most general framing of this task, the imitating agent \(\alpha^{\text{ego}}\) is situated within some environment populated by a number of other agents, with the ability to observe their behaviour. At each trial, \(\alpha^{\text{ego}}\) must select an agent \(\alpha^{\text{target}}\) to imitate. All agents then execute some sequence of behaviour within the environment, visible to \(\alpha^{\text{ego}}\). \(\alpha^{\text{ego}}\) executes the exact same behaviour as the chosen agent \(\alpha^{\text{target}}\), and receives some reward corresponding to their own internal value function \(\mathbf{v}^{\text{ego}}\). A general policy for selecting imitation targets on the basis of representations is given by \[\text{Pr}(\alpha^{\text{target}}=\alpha^{(m)})\propto\exp\left(\frac{w( \alpha^{(m)};\chi)}{\beta^{\text{ego}}}\right) \tag{4}\] where \(w\) is some function that assigns agent weights based on the representations \(\chi\). Of course, the optimal choice of \(w\) will depend on the type of decision-making employed by the other agents.

Figure 1: We generate a number of agents, where each has a randomly sampled scalar value function \(\mathbf{v}^{(m)}\in[0,1]\), which controls the agent's relative preference for the two states labelled A and B. From each agent we sample a number of trajectories using different values of the decision noise parameter \(\beta^{(m)}\). We then plot average imitation reward as a function of \(\mathbf{v}^{(m)}\) and \(\beta^{(m)}\).

We will assume that each agent attempts to maximise its own return, governed by its own
Under this model, the imitation reward should be maximised by selecting an agent such that we maximise some measure of the _similarity_ between \(\mathbf{v}^{\text{target}}\) and \(\mathbf{v}^{\text{ego}}\) while minimising \(\beta^{(m)}\). We can demonstrate this through simulation using a simplified one-dimensional gridworld environment--Fig. 1 shows that imitation return indeed increases with increasing value function similarity and decreasing \(\beta^{(m)}\). For the remainder of this paper, we will make the assumption that all agents are equivalently rational, and thus set aside the decision noise parameter. If we were to consider only downstream utility and disregard cost, the optimal social representation for selective imitation would therefore consist simply of every agent's exact value function. More generally, we can consider representations of the form \(\chi^{(m)}=\hat{\mathbf{v}}^{(m)}\) where \(\hat{\mathbf{v}}^{(m)}\) is some approximation of \(\mathbf{v}^{(m)}\). To study strategies for selective imitation, we use a contextual bandit setting. At each trial, each agent \(m\) is placed at a random tile within a 2D grid, chooses an destination tile to travel to, and then receives a reward \(r=\mathbf{v}^{(m)}_{\text{dest}}-c\big{(}|x_{\text{dest}}-x_{\text{start}}|+|y_ {\text{dest}}-y_{\text{start}}|\big{)}\) where \(c\geq 0\) is a constant that determines the cost of taking one step on the grid. Agents are noisily rational in selecting destination tiles to maximise reward, following a Boltzmann policy with constant decision noise \(\beta\). Following the design of Wu et al. (2018) and Witt et al. (2023), we generate spatially correlated value functions (i.e. nearby tiles generally yield similar rewards) by sampling from a Gaussian process prior with a radial basis function kernel. The imitator selects targets according to Eq. 4, with weights given by \[w(\alpha^{(m)};\chi)=\text{sim}(\mathbf{v}^{\text{ego}},\hat{\mathbf{v}}^{(m)} )=\frac{\mathbf{v}^{\text{ego}}\cdot\hat{\mathbf{v}}^{(m)}}{|\hat{\mathbf{v}} ^{(m)}||\hat{\mathbf{v}}^{(m)}|} \tag{5}\] ## Experiment 1: state aggregation If our imitator \(\alpha^{\text{ego}}\) has to choose between only a small number of agents, or faces only a small state space, then it may be feasible just to represent value functions exactly (i.e. use \(\hat{\mathbf{v}}^{(m)}=\mathbf{v}^{(m)}\), even for \(\lambda>0\). But if the agent population or state space is large, then the optimal representation in terms of cost-adjusted utility (Eq. 3) will likely be an approximation that discards some information for the sake of lower entropy (see Appendix A). One way we might obtain such an approximation is simply through state aggregation--i.e. partition the state space into non-overlapping square 'patches', and then group all states within each patch under a single value (Sutton and Barto, 1998). Fig. 2 shows how the cost-balanced utility of aggregated value function representations under various values of the tradeoff parameter \(\lambda\). As expected, we observe that increasing the amount of aggregation leads to a decrease in both average return and representation cost, with the relative advantage or disadvantage determined by the tradeoff parameter: as \(\lambda\) increases, the optimal aggregation amount shifts to the right. ## Experiment 2: representing social groups So we have seen how naive compression of value function information can help an imitator navigate this tradeoff. 
But rather than merely taking individual representations and making them coarser, a more humanlike approach might involve using information about the agent population _as a whole_, and Figure 2: Left: illustration of state-aggregated estimates of an example value function (lighter grid squares indicate higher values). Middle: average return from selective imitation using state-aggregated value function estimates as agent representation (with indiscriminate imitation return as a baseline). Right: cost-adjusted return for aggregated representations at different values of \(\lambda\). Since we are interested only in the tradeoff between return and cost (entropy), rather than each quantity’s absolute values, both are individually normalised to lie in [0,1]. how individual agents within the population relate to one another. For instance, we often represent people that we're unfamiliar with in terms of their membership of certain social _groups_ or _categories_, using our knowledge of those groups to make inferences about the unknown properties of individuals within them (Rhodes and Baron, 2019; Liberman et al., 2017). To see how this can produce cheap agent representations, imagine that we have a known fixed number of groups \(K\), and a population of \(M\) agents that are partitioned among them such that each agent only belongs to one group (given by \(\mathbf{z}_{m}\)). Continuing our example of value-function-guided selective imitation, we will say that each group \(k\) has a corresponding distribution over value functions, \(p(\mathbf{v}^{(m)}|\mathbf{z}_{m}=k)\), with mean value \(\mu^{(k)}\). As an extreme, if we treat all members of group \(k\) has having \(\mathbf{v}=\mu^{(k)}\), then we need only represent each agent with a single scalar value \(\mathbf{z}_{m}\). Including the group means, this would yield an overall representation cost \[C(\chi_{\text{gpr}})=KH[\mu^{(k)}]+MH[\mathbf{z}_{m}] \tag{6}\] Intuitively, the extent to which this representation is useful will be primarily determined by the ratio between inter- and intragroup variation. When groups are very different, and agents within each group are very similar, then knowing an agent's group assignment tells you a lot; when groups are similar or agents vary a lot within each group, then group assignments are less informative. We can test this using the same selective imitation bandit setup as in the previous experiment. We first sample a number of group mean value functions \(\mu^{(k)}\) from the original GP prior; for each group we then sample a number of agents from a multivariate Gaussian distribution with mean \(\mu^{(k)}\) and covariance \(\Sigma=\rho\Sigma_{\text{GP}}\) where \(\rho>0\) is a scalar and \(\Sigma_{\text{GP}}\) is the covariance of the GP prior. Fig. 3 compares the cost-adjusted utility of the groups-only and individuals-only representation strategies for different values of \(\rho\) and \(\lambda\). We can see that, below a certain \(\lambda\) threshold, individual representations are always as good or better than group representations; above a certain \(\lambda\) threshold, the reverse is true. Otherwise, as predicted, the optimal representation depends on the group variance ratio, with group-only representations becoming less useful as \(\rho\) increases. ## Discussion In this paper, we have briefly laid out and motivated the problem of cost-constrained social representation; that is, representing other agents in such a way that optimally trades off between downstream utility and information cost. 
Using selective imitation as an example task, we explored two potential approaches to this problem: compressing individual representations, and representing agents in terms of group identities. While we hope the work we present here has value and interest, it is of course very much a preliminary step--in future work we hope to consider how agents can _learn_ good social representations by optimising directly for this tradeoff, as well as broadening our analysis to encompass additional example tasks and agent features. As hinted at earlier, we also hope to explore a more detailed conception of representation cost that considers both acquisition and storage of information.

Figure 3: Top: cost-adjusted return for group-only and individual representations (with indiscriminate imitation baseline) for different values of the group variance ratio \(\rho\) and the tradeoff parameter \(\lambda\). As before, both return and cost (entropy) are normalised to lie in [0,1]. Bottom: illustration of example value functions sampled for a 3-member group for three different values of the variance ratio \(\rho\).
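As a closing illustration of Eq. 6, the sketch below puts rough numbers on the group-based representation against exact individual storage. The bit-level accounting (a fixed 32 bits per stored value, and \(\log_{2}K\) per group label under uniform assignment) is our simplifying assumption, not the paper's exact cost measure.

```python
import numpy as np

def group_representation_cost(K, M, S, bits_per_value=32):
    """Rough cost (bits) of the group scheme of Eq. (6): K group-mean value
    functions over S states, plus one group label per agent. Treats H[mu]
    as S * bits_per_value and H[z] as log2(K) (uniform group assignment)."""
    return K * S * bits_per_value + M * np.log2(K)

def individual_representation_cost(M, S, bits_per_value=32):
    """Cost (bits) of storing every agent's full value function exactly."""
    return M * S * bits_per_value

# Example: 100 agents on a 10x10 grid, 5 groups.
M, S, K = 100, 100, 5
print(group_representation_cost(K, M, S))      # ~16,232 bits
print(individual_representation_cost(M, S))    # 320,000 bits
```

A gap of this size (roughly a factor of twenty in stored bits) is the kind of saving that makes group-only representations win at high \(\lambda\), as in Fig. 3.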
2310.15787
SequenceMatch: Revisiting the design of weak-strong augmentations for Semi-supervised learning
Semi-supervised learning (SSL) has become popular in recent years because it allows the training of a model using a large amount of unlabeled data. However, one issue that many SSL methods face is the confirmation bias, which occurs when the model is overfitted to the small labeled training dataset and produces overconfident, incorrect predictions. To address this issue, we propose SequenceMatch, an efficient SSL method that utilizes multiple data augmentations. The key element of SequenceMatch is the inclusion of a medium augmentation for unlabeled data. By taking advantage of different augmentations and the consistency constraints between each pair of augmented examples, SequenceMatch helps reduce the divergence between the prediction distribution of the model for weakly and strongly augmented examples. In addition, SequenceMatch defines two different consistency constraints for high and low-confidence predictions. As a result, SequenceMatch is more data-efficient than ReMixMatch, and more time-efficient than both ReMixMatch ($\times4$) and CoMatch ($\times2$) while having higher accuracy. Despite its simplicity, SequenceMatch consistently outperforms prior methods on standard benchmarks, such as CIFAR-10/100, SVHN, and STL-10. It also surpasses prior state-of-the-art methods by a large margin on large-scale datasets such as ImageNet, with a 38.46\% error rate. Code is available at https://github.com/beandkay/SequenceMatch.
Khanh-Binh Nguyen
2023-10-24T12:34:58Z
http://arxiv.org/abs/2310.15787v1
# SequenceMatch ###### Abstract Semi-supervised learning (SSL) has become popular in recent years because it allows the training of a model using a large amount of unlabeled data. However, one issue that many SSL methods face is the confirmation bias, which occurs when the model is overfitted to the small labeled training dataset and produces overconfident, incorrect predictions. To address this issue, we propose SequenceMatch, an efficient SSL method that utilizes multiple data augmentations. The key element of SequenceMatch is the inclusion of a medium augmentation for unlabeled data. By taking advantage of different augmentations and the consistency constraints between each pair of augmented examples, SequenceMatch helps reduce the divergence between the prediction distribution of the model for weakly and strongly augmented examples. In addition, SequenceMatch defines two different consistency constraints for high and low-confidence predictions. As a result, SequenceMatch is more data-efficient than ReMixMatch, and more time-efficient than both ReMixMatch (\(\times 4\)) and CoMatch (\(\times 2\)) while having higher accuracy. Despite its simplicity, SequenceMatch consistently outperforms prior methods on standard benchmarks, such as CIFAR-10/100, SVHN, and STL-10. It also surpasses prior state-of-the-art methods by a large margin on large-scale datasets such as ImageNet, with a 38.46% error rate. Code is available at [https://github.com/beandkay/SequenceMatch](https://github.com/beandkay/SequenceMatch). ## 1 Introduction Deep Neural Networks (DNNs) have made significant strides in recent years, achieving an extraordinary performance on many tasks such as image recognition [16], speech recognition [1], and natural language processing [43]. The state-of-the-art performance of DNNs is achieved through supervised learning, which requires labeled data. The empirical observation shows that training DNNs on larger labeled datasets produces a better performance [17, 18, 26, 34, 35]. However, the labeled data is limited in quantity and significantly costly due to the hand-labeling process which must be done by experts. An impressive approach for training models with a large amount of unlabeled data is semi-supervised learning (SSL). In recent years, SSL has received much attention due to its advantages in leveraging a large amount of unlabeled data. Since the unlabeled data can be obtained easily without the need for human labor, using SSL results in comparable performance to the supervised learning methods but with a lower cost. This success has led to the development of many SSL methods [3, 4, 21, 22, 46, 50]. There are two popular SSL methods which are pseudo-labeling [22] (also called self-training [38, 50]) and consistency regularization [2, 41, 21]. While the pseudo-labeling-based approaches use model predictions as labels to train the unlabeled data, the consistency-regularization-based approaches use loss functions such as mean squared error (MSE) or Kullback-Leibler divergence (KL divergence) to minimize the difference between the prediction distribution of different augmented inputs.

Figure 1: Example scheme where the prediction distributions of weakly and strongly augmented examples have high KL divergence. This high divergence happens when the model suffers from the confirmation bias issue.

However, they are still encountering the confirmation bias issue because of the small labeled training dataset.
Hence, during training, when a confirmation bias issue occurs, the performance stops improving and could become worse. Based on the finding from [5] that we could utilize KL divergence with multiple augmentations to increase model invariance and generalization, we propose a simple SSL pipeline, SequenceMatch. The idea of using multiple data augmentations for SSL is not new since it has been introduced by [3] and [24]. ReMixMatch [3] uses a technique called Augmentation Anchoring (AA). AA anchors a weak augmentation then makes \(\mathbf{K}\) strong augmentations and encourages each output to be close to the anchoring prediction. Similarly, CoMatch [24] generates two strongly augmented versions for each unlabeled sample to construct the embedding graph. However, we argue that using multiple strong augmentations can result in disparate predictions, and thus may not be a meaningful target. Particularly, ReMixMatch [3] found that using stronger augmentations in MixMatch resulted in high divergence, and the training would not converge if we replace the weak augmentation with a strong one, resulting in very poor performance. SequenceMatch also uses multiple data augmentations but in a different manner. Specifically, we introduce a medium augmentation, then minimize the KL divergence between prediction distributions for each pair of inputs, thus minimizing the discrepancy between the representation of weak and strong augmented predictions. Therefore, by minimizing these divergences, we assume that the learned representation of the strong augmentation would align with the one from the weak augmentation by using the medium augmentation as an anchor. The medium augmentation also works like a Teacher Assistant (TA) to distill the knowledge, similar to a TA in [29]. As a result, SequenceMatch encourages the similarity of the network outputs to produce more reliable pseudo-labels for unlabeled data during training, reduces overconfident pseudo-labels, and optimizes data utilization for the unlabeled dataset. The benefit of SequenceMatch is found across all datasets. For instance, on the STL-10 dataset, SequenceMatch achieves 15.45%, and 5.56% error rates when the label amount is 40, and 1000, respectively. Moreover, SequenceMatch shows its superiority on imbalanced datasets such as SVHN and ImageNet. On the SVHN dataset, SequenceMatch achieves 1.96%, 1.89%, and 1.79% error rate when the label amount is 40, 250, and 1000, respectively. For ImageNet, SequenceMatch achieves a 38.46% error rate, surpassing FlexMatch (41.85%), FixMatch (43.66%), CoMatch (42.17%), and FreeMatch (40.57%). In addition, SequenceMatch achieves high performance even though it does not introduce as many augmentations as ReMixMatch and does not need to store the embedded graph like CoMatch.

Figure 2: Differences between SequenceMatch and the ReMixMatch and CoMatch methods for multiple augmentations.

To sum up, this paper makes the following contributions: * We propose SequenceMatch, a SSL training pipeline that helps reduce the divergence between the prediction distributions of different augmented versions of the same input. Therefore, SequenceMatch helps reduce the overconfident predictions and the distribution discrepancy between weakly and strongly augmented predictions. * SequenceMatch leverages the whole unlabeled dataset, including high-confidence and low-confidence predictions, thus optimizing the data utilization. * We verify our hypothesis that reducing the confirmation bias issue of the trained model and reducing the divergence between the prediction distributions would yield better results. Hence, SequenceMatch significantly achieves state-of-the-art results on many datasets with different numbers of labels. ## 2 Analysis of high-confidence and low-confidence pseudo-labels In order to examine the importance of low-confidence predictions in the training process, we train FixMatch separately with "hard" and "soft" pseudo-labels. The "hard" pseudo-label training is the conventional FixMatch using high-confidence predictions, while for the "soft" pseudo-label training, the model is trained only on low-confidence predictions. Specifically, instead of choosing high-confidence predictions as the pseudo-label, we take the low-confidence predictions from weakly-augmented examples, sharpen them by temperature \(\mathbf{T}=\mathbf{0.5}\), and compute the KL divergence with the predictions from strongly-augmented examples. The experiment results from Table 1 show that using only low-confidence predictions to train the model can still
Hence, SequenceMatch significantly achieves state-of-the-art results on many datasets with different numbers of labels. ## 2 Analysis of high-confidence and low-confidence pseudo-label In order to examine the importance of low-confidence predictions in the training process, we train FixMatch separately with "hard" and "soft" pseudo-labels. The "hard" pseudo-label training is the conventional FixMatch using high-confidence predictions, while for the "soft" pseudo-label training, the model is trained only on low-confidence predictions. Specifically, instead of choosing high-confidence predictions as the pseudo-label, we take the low-confidence predictions from weakly-augmented examples, sharpen them by temperature \(\mathbf{T}=\mathbf{0.5}\) and compute the KL divergence with the predictions from strongly-augmented. The experiment results from Table 1 show that using only low-confidence predictions to train the model can still Figure 2: Differences between SequenceMatch versus ReMixMatch and CoMatch methods for multi augmentations. achieve a competitive performance with the one using high-confidence predictions on the CIFAR-10 dataset. This shows that the conventional approach of using a high threshold and discarding a large proportion of unlabeled data during training is inefficient and does not fully leverage the unlabeled data. Thus, instead of using only high-confidence predictions, in this work, we bridge the strengths of both high-confidence and low-confidence predictions. ## 3 Background We give a brief introduction to unsupervised data augmentation (UDA) [49] and FixMatch [44], which are mostly related to our work. Let \(B\) be the batch size of labeled data, \(\mu\) be the ratio of unlabeled data to labeled data, and \(p_{m}\) represent the output probability of the model. \(A_{w}\) and \(A_{s}\) are weakly and strongly augmentation functions, respectively. The unsupervised loss term in UDA is formulated as: \[\frac{1}{\mu B}\sum_{b=1}^{\mu B}\mathbbm{1}\left(\max\left(q_{b}\right)\geq \tau\right)H\left(q_{s},p_{m}\left(y\mid\mathcal{A}_{s}\left(u_{b}\right) \right)\right), \tag{1}\] where \(\tau\) is the constant pre-defined threshold, \(q_{s}=\frac{\exp\left(q_{b}/\mathbf{T}\right)}{\sum_{k}\exp\left(q_{k}/ \mathbf{T}\right)}\) is the sharpen predictions by temperature \(\mathbf{T}\), \(q_{b}=p_{m}\left(y\mid\mathcal{A}_{w}\left(u_{b}\right)\right)\) is the logit of label \(y\) for input \(\mathcal{A}_{w}\left(u_{b}\right)\). Unlike UDA, FixMatch leverages this consistency regularization with strong augmentation to achieve a competitive performance. The unsupervised loss term becomes: \[\frac{1}{\mu B}\sum_{b=1}^{\mu B}\mathbbm{1}\left(\max\left(q_{b}\right)\geq \tau\right)H\left(\hat{q}_{b},p_{m}\left(y\mid\mathcal{A}_{s}\left(u_{b} \right)\right)\right), \tag{2}\] where \(\hat{q}_{b}=\arg\max\left(q_{b}\right)\) is the pseudo-label of \(q_{b}\). Following FixMatch, FlexMatch uses the same loss term with a dynamic threshold \(\tau_{t}\) for each class, thus improving the per-class sampling rate and making the model learn equally. FixMatch shows that using a high-confidence threshold with "hard" labels can eliminate the noise pseudo-labels, thus enhancing the performance of the whole SSL framework. In addition, FixMatch also claims that using the high-confidence threshold with the "soft" pseudo-labels does not show a significant difference in performance. 
## 4 SequenceMatch We propose SequenceMatch, a simple SSL pipeline that aims at balancing the prediction distribution of unlabeled data. The distinction from FixMatch is that we consider both "hard" and "soft" pseudo-labels. The main novelty comes from the additional medium augmentations for unlabeled data. With the additional mediumly augmented examples, SequenceMatch helps reduce the divergence between the prediction distributions of the weakly and strongly augmented data. The intuition is to make the prediction distributions of weakly, mediumly, and strongly augmented examples similar to each other while maintaining the correct pseudo-labels, thus reducing the overfitting of the model on labeled data and reducing the confirmation bias issue. The medium augmentation is made up of weak augmentation, a transformation chosen at random from the list of strong augmentation transformations, and cutout [13]. This makes the mediumly augmented samples look different from the weakly augmented ones, but not as distorted as the strongly augmented ones, because the induced distortions could severely change the image structures, and thus the transformed images could not maintain the identity of the original instances. We visualize the differences between the three kinds of augmentation in Appendix 7. ### SequenceMatch Pipeline In this section, we present the pipeline of the SequenceMatch method, as shown in Figure 3. First, similar to other SSL methods, we train the model on the labeled data. \begin{table} \begin{tabular}{l c c} \hline \hline Dataset & high-confidence & low-confidence \\ \hline \hline CIFAR-10-40 & 7.47 & 28.88 \\ CIFAR-10-250 & 4.86 & 8.07 \\ CIFAR-10-4000 & 4.21 & 8.04 \\ \hline \hline \end{tabular} \end{table} Table 1: Error rate of FixMatch using high-confidence vs low-confidence predictions on CIFAR-10 with 40, 250, and 4000-label splits. Figure 3: SequenceMatch pipeline. Unlike other SSL methods that use only two types of augmented versions for unlabeled data, we propose a "mediumly augmented" version for unlabeled data. The blue and green arrows indicate high-confidence and low-confidence predictions, respectively. In addition, we measure the Kullback-Leibler divergence losses between the weakly, mediumly, and strongly augmented versions of the same input, then we minimize them during training. Then, for the unlabeled data, instead of using only weakly and strongly augmented examples, we create three versions of the augmented input: weakly, mediumly, and strongly augmented examples. Finally, for each pair of prediction distributions, namely weak-medium, medium-strong, and weak-strong, a Kullback-Leibler divergence loss function is used to measure the divergence of each pair. The KL divergence losses are optimized during the training process to minimize the divergence. ### Loss Function The loss function for SequenceMatch consists of two different loss terms. One is the supervised loss, which is a standard cross-entropy loss (\(\mathcal{L}_{s}^{\mathrm{CE}}\)) for the labeled data. The other is the unsupervised loss, comprising the Kullback-Leibler divergence (\(\mathcal{L}_{KL}\)) between the prediction distributions and the standard cross-entropy loss (\(\mathcal{L}_{u}^{\mathrm{CE}}\)) for strongly augmented data with pseudo-labels. For an \(L\)-class classification problem, let \(\mathcal{X}=\{(x_{b},y_{b}):b\in(1,\dots,B)\}\) be a batch of \(B\) labeled examples, where \(x_{b}\) are the training examples and \(y_{b}\) are the one-hot labels. 
Let \(\mathcal{U}=\{u_{b}:b\in(1,\dots,\mu B)\}\) be a batch of \(\mu B\) unlabeled examples, where \(\mu\) is a hyperparameter that determines the relative sizes of \(\mathcal{X}\) and \(\mathcal{U}\). Let \(p_{m}(y|x)\) be the predicted class distribution of the model for input \(x\), and let \(H(p,q)\) denote the cross-entropy between two probability distributions \(p\) and \(q\). The loss function for SSL is defined as: \[\mathcal{L}_{\mathrm{SSL}}=\mathcal{L}_{s}^{\mathrm{CE}}+\lambda_{u}\mathcal{ L}_{u}, \tag{3}\] where \(\lambda_{u}\) is the fixed weight for the unlabeled data loss. Specifically, \(\mathcal{L}_{s}^{\mathrm{CE}}\) is a standard cross-entropy loss on weakly augmented labeled data: \[\mathcal{L}_{s}^{\mathrm{CE}}=\frac{1}{B}\sum_{b=1}^{B}\mathrm{H}\left(y_{b},p _{m}\left(y\mid\mathcal{A}_{w}\left(x_{b}\right)\right)\right) \tag{4}\] Then, let \(\mathcal{A}_{w}\), \(\mathcal{A}_{m}\), \(\mathcal{A}_{s}\) be the weak, medium, and strong augmentation functions for unlabeled data. \(\mathcal{L}_{u}\) is defined as the sum of the standard cross-entropy loss (\(\mathcal{L}_{u}^{\mathrm{CE}}\)) and the Kullback-Leibler divergence losses (\(\mathcal{L}_{\mathrm{KL}}\)). \(\mathcal{L}_{u}^{\mathrm{CE}}\) has two parts: the first is the cross-entropy loss between the pseudo-labels and the strongly augmented predictions; the second is the cross-entropy loss between the sharpened predictions of weakly augmented samples and the predictions of strongly augmented samples. \(\mathcal{L}_{KL}\) is the KL divergence of the prediction distributions between each pair of augmented examples \(\mathcal{A}_{w}-\mathcal{A}_{m}\), \(\mathcal{A}_{w}-\mathcal{A}_{s}\), \(\mathcal{A}_{m}-\mathcal{A}_{s}\): \[\mathcal{L}_{u}=\mathcal{L}_{u}^{\mathrm{CE}}+\mathcal{L}_{\mathrm{KL}}^{w-m}+ \mathcal{L}_{\mathrm{KL}}^{m-s}+\mathcal{L}_{\mathrm{KL}}^{w-s} \tag{5}\] \[\mathcal{L}_{u}^{\mathrm{CE}}=\frac{1}{\mu B}\sum_{b=1}^{\mu B} \left(\mathbb{1}\left(\max\left(q_{b}^{w}\right)\geq\tau\right)\mathrm{H} \left(\hat{q}_{b},p_{m}\left(y\mid\mathcal{A}_{s}\left(u_{b}\right)\right) \right)\right. \tag{6}\] \[+ \left.\mathbb{1}\left(\max\left(q_{b}^{w}\right)<\tau\right) \mathrm{H}\left(q_{s}^{w},p_{m}\left(y\mid\mathcal{A}_{s}\left(u_{b}\right) \right)\right)\right)\] \[\mathcal{L}_{\mathrm{KL}}^{w-m}=\frac{1}{\mu B}\sum_{b=1}^{\mu B} \mathbb{1}\left(\max\left(q_{b}^{w}\right)\geq\tau\right)D_{\mathrm{KL}}\left(q_{s}^{ w}\,\|\,p_{m}\left(y\mid\mathcal{A}_{m}\left(u_{b}\right)\right)\right) \tag{7}\] \[\mathcal{L}_{\mathrm{KL}}^{m-s}=\frac{1}{\mu B}\sum_{b=1}^{\mu B} \mathbb{1}\left(\max\left(q_{b}^{w}\right)\geq\tau\right)D_{\mathrm{KL}}\left(q_{s}^{ m}\,\|\,p_{m}\left(y\mid\mathcal{A}_{s}\left(u_{b}\right)\right)\right)\] (8) \[\mathcal{L}_{\mathrm{KL}}^{w-s}=\frac{1}{\mu B}\sum_{b=1}^{\mu B} \mathbb{1}\left(\max\left(q_{b}^{w}\right)\geq\tau\right)D_{\mathrm{KL}}\left(q_{s}^{ w}\,\|\,p_{m}\left(y\mid\mathcal{A}_{s}\left(u_{b}\right)\right)\right) \tag{9}\] where \(\hat{q}_{b}=\arg\max\left(q_{b}\right)\) is the pseudo-label with \(q_{b}=p_{m}\left(y\mid\Omega\left(u_{b}\right)\right)\), \(\Omega\) being the corresponding augmentation function; \(q_{s}=\frac{\exp\left(q_{b}/\mathbf{T}\right)}{\sum_{k}\exp\left(q_{k}/\mathbf{ T}\right)}\) is the sharpened prediction; \(\tau\) is the fixed threshold for choosing pseudo-labels; \(D_{\mathrm{KL}}\) denotes the KL divergence; and \(\mathbf{T}\) is the temperature for sharpening. We use the fixed \(\tau\) for the high-confidence KL losses to reduce the divergence of overconfident predictions over the three augmentations. 
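Putting Eqs. (5)-(9) together, the unsupervised loss can be sketched as follows. This is a minimal sketch in our own PyTorch-style code, not the authors' implementation; detaching the target distributions is our assumption, and the low-confidence soft cross-entropy is applied only to the weak-strong pair, as stated in the text.

```python
import torch
import torch.nn.functional as F

def sharpen(logits, T=0.5):
    """q_s: temperature-sharpened prediction distribution."""
    return torch.softmax(logits / T, dim=-1)

def masked_kl(p_target, logits_pred, mask):
    """D_KL(p_target || softmax(logits_pred)), averaged with a sample mask."""
    log_p = torch.log_softmax(logits_pred, dim=-1)
    kl = F.kl_div(log_p, p_target, reduction="none").sum(dim=-1)
    return (mask * kl).mean()

def sequencematch_unsup_loss(logits_w, logits_m, logits_s, tau=0.95, T=0.5):
    probs_w = torch.softmax(logits_w.detach(), dim=-1)  # q_b^w
    conf, pseudo = probs_w.max(dim=-1)
    hi = (conf >= tau).float()   # high-confidence mask
    lo = 1.0 - hi                # low-confidence mask

    # L_u^CE (Eq. 6): hard CE for confident samples; soft CE against the
    # sharpened weak prediction otherwise (weak-strong pair only).
    ce_hard = F.cross_entropy(logits_s, pseudo, reduction="none")
    ce_soft = -(sharpen(logits_w.detach(), T)
                * torch.log_softmax(logits_s, dim=-1)).sum(dim=-1)
    l_ce = (hi * ce_hard + lo * ce_soft).mean()

    # KL terms (Eqs. 7-9), enforced only on high-confidence samples.
    l_kl = (masked_kl(sharpen(logits_w.detach(), T), logits_m, hi)    # w-m
            + masked_kl(sharpen(logits_m.detach(), T), logits_s, hi)  # m-s
            + masked_kl(sharpen(logits_w.detach(), T), logits_s, hi)) # w-s
    return l_ce + l_kl
```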
Notably, we only enforce the low-confidence consistency loss on the weak-strong prediction pair, since the low-confidence predictions from the medium and strong augmented views are unreliable. Compared with prior methods, this use of unlabeled data through the KL loss is more reasonable, as the KL loss does not introduce negative supervisory information from wrong predictions but simply emphasizes the distribution consistency between the weakly, mediumly, and strongly augmented images. ## 5 Experiments We evaluate SequenceMatch on common datasets: CIFAR-10/100 [20], SVHN [31], STL-10 [10], and ImageNet [12], and extensively investigate the performance under various labeled data amounts. We mainly compare our proposed method with SSL methods that do not use self-supervised pre-trained weights, such as UDA [49], FixMatch [44], FlexMatch [53], FreeMatch [47], etc., since they all include a pre-defined threshold and are currently the state-of-the-art in the field. To further contextualize the findings of SSL techniques, we additionally include a fully-supervised experiment for each dataset. We implement our proposed method and evaluate all methods using the USB framework1. Footnote 1: https://github.com/microsoft/Semi-supervised-learning/ For a fair comparison, we use the same hyperparameters as the UDA, FixMatch, and FlexMatch methods. Standard stochastic gradient descent (SGD) with a momentum of 0.9 is used as the optimizer in all experiments [33, 45]. For all datasets, we use an initial learning rate of 0.03 with a cosine annealing learning rate scheduler [25] for a total of \(2^{20}\) training iterations. We also apply an exponential moving average with a momentum of 0.999. The batch size of labeled data is 64, except for ImageNet. For CIFAR-10/100, SVHN, and STL-10, \(\mu\) is set to 7, and it is set to 1 for ImageNet. For UDA, \(\tau\) is set to 0.8, while it is set to 0.95 for FixMatch, FlexMatch, and SequenceMatch. These configurations follow the original papers [44, 49, 53]. The medium and strong augmentations in our experiments are RandAugment [11] with different numbers of transformations (1 for medium augmentation and 3 for strong augmentation; we study the choices for medium augmentation and visualize the differences in the Appendix). We use ResNet-50 [20] for the ImageNet dataset and Wide-ResNet (WRN) [52] for the other datasets. ### CIFAR-10/100, STL-10, SVHN We evaluate the best error rate of each method by averaging the results of five runs with distinct random seeds. The classification error rates on the CIFAR-10/100, STL-10, and SVHN datasets are recorded in Table 2. For the CIFAR-10 and SVHN datasets, we use Wide-ResNet-28-2 [52] as the backbone model, Wide-ResNet-28-8 for the CIFAR-100 dataset, and Wide-ResNet-37-2 for the STL-10 dataset. As shown in Table 2, the proposed method outperforms all other methods on most datasets with varying numbers of labels. According to the FlexMatch study [53], FlexMatch performs less favorably on imbalanced datasets such as SVHN. SequenceMatch, on the other hand, not only achieves high performance across all datasets but also performs well on the SVHN dataset. This shows that our proposed method has the effect of reducing overfitting, which usually appears when training on a small and imbalanced dataset. 
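As an illustration of the augmentation setup described in the experimental settings above, the three stacks could be composed as follows with torchvision. This is an assumption on our part: the exact transform lists, crop size, and the use of RandomErasing as a cutout stand-in are illustrative, not the authors' configuration.

```python
import torchvision.transforms as T

# Weak: flip-and-crop only (illustrative CIFAR-style settings).
weak = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4, padding_mode="reflect"),
])

# Medium: weak + one RandAugment transformation + cutout-like erasing.
medium = T.Compose([
    weak,
    T.RandAugment(num_ops=1),
    T.ToTensor(),
    T.RandomErasing(p=1.0),
])

# Strong: weak + three RandAugment transformations.
strong = T.Compose([
    weak,
    T.RandAugment(num_ops=3),
    T.ToTensor(),
])
```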
#### Precision, Recall, F1 Score and AUC results on CIFAR-10 To comprehensively evaluate the performance of all methods in a classification setting, we further report the precision, recall, F1-score, and AUC (area under the curve) results on the CIFAR-10 dataset. As shown in Table 3, in addition to the reduced error rates, SequenceMatch also has the best performance in precision, recall, F1 score, and AUC. These metrics, together with the error rates (or accuracy), show the strong performance of our proposed method. #### STL-10 Confusion Matrix The confusion matrices of FixMatch, FlexMatch, and SequenceMatch on the STL-10 dataset with a 40-label split are visualized in Figure 4. Compared with FlexMatch, SequenceMatch improves the performance for classes 2, 4, and 6. In addition, FixMatch is overfitted on class 1 and fails to recognize classes 3 and 7. #### Convergence Speed Our proposed SequenceMatch outperforms FlexMatch when the number of labels is limited. We visualize the validation loss and top-1 accuracy of both FlexMatch and SequenceMatch on the CIFAR-10 dataset with 40 labels within the first 200k iterations. As shown in Figure 5, SequenceMatch achieves over 80% accuracy within the first 25k iterations, while FlexMatch is only approaching 80%. After 200k iterations, SequenceMatch achieves up to 94.28% accuracy, while FlexMatch reaches only 93.72%. Moreover, the loss of our proposed SequenceMatch decreases as fast and as smoothly as that of FlexMatch, even though we add extra augmented data. \begin{table} \begin{tabular}{l|c c c|c c c|c c c|c c} \hline \hline Dataset & \multicolumn{3}{c|}{CIFAR-10} & \multicolumn{3}{c|}{CIFAR-100} & \multicolumn{3}{c|}{SVHN} & \multicolumn{2}{c}{STL-10} \\ \#Label & 40 & 250 & 4000 & 400 & 2500 & 10000 & 40 & 250 & 1000 & 40 & 1000 \\ \hline \hline \(\Pi\) Model [36] & 74.34\(\pm\)1.76 & 46.24\(\pm\)1.29 & 13.13\(\pm\)0.89 & 86.96\(\pm\)0.06 & 58.80\(\pm\)0.06 & 36.65\(\pm\)0.00 & 67.48\(\pm\)0.99 & 13.30\(\pm\)1.2 & 7.16\(\pm\)0.11 & 74.31\(\pm\)0.83 & 32.78\(\pm\)0.40 \\ Pseudo Label [21] & 74.61\(\pm\)0.26 & 46.49\(\pm\)1.26 & 15.08\(\pm\)0.19 & 87.45\(\pm\)0.05 & 57.44\(\pm\)1.26 & 36.55\(\pm\)0.14 & 64.61\(\pm\)1.56 & 15.99\(\pm\)0.52 & 9.40\(\pm\)0.33 & 74.68\(\pm\)0.99 & 32.64\(\pm\)0.17 \\ VAT [30] & 74.66\(\pm\)1.22 & 41.03\(\pm\)1.97 & 10.51\(\pm\)1.21 & 85.20\(\pm\)1.40 & 46.84\(\pm\)1.09 & 32.14\(\pm\)1.99 & 74.75\(\pm\)3.38 & 4.33\(\pm\)2.43 & 4.11\(\pm\)0.20 & 74.74\(\pm\)0.38 & 37.95\(\pm\)1.21 \\ MeanTeacher [46] & 70.09\(\pm\)1.69 & 37.46\(\pm\)3.30 & 8.10\(\pm\)0.21 & 81.11\(\pm\)1.44 & 45.17\(\pm\)1.06 & 31.75\(\pm\)0.13 & 36.09\(\pm\)3.99 & 3.45\(\pm\)0.03 & 3.27\(\pm\)0.05 & 71.72\(\pm\)1.45 & 33.90\(\pm\)1.37 \\ MixMatch [4] & 36.19\(\pm\)1.98 & 43.63\(\pm\)0.96 & 6.66\(\pm\)2.65 & 67.59\(\pm\)0.96 & 39.76\(\pm\)0.48 & 27.78\(\pm\)0.97 & 30.60\(\pm\)0.93 & 4.562\(\pm\)0.39 & 3.69\(\pm\)0.37 & 54.93\(\pm\)0.96 & 21.70\(\pm\)0.68 \\ ReMixMatch [3] & 9.88\(\pm\)1.03 & 6.30\(\pm\)0.84 & 44.02\(\pm\)5.75 & 11.60\(\pm\)0.33 & **20.08\(\pm\)0.27** & **24.04\(\pm\)0.13** & 6.36\(\pm\)0.22 & 5.16\(\pm\)0.31 & 32.12\(\pm\)1.66 & 6.74\(\pm\)0.14 \\ UDA [49] & 10.62\(\pm\)3.75 & 5.16\(\pm\)0.66 & 4.29\(\pm\)0.97 & 46.39\(\pm\)1.99 & 27.73\(\pm\)0.21 & 22.49\(\pm\)0.23 & 5.12\(\pm\)2.47 & 1.92\(\pm\)0.05 & 1.89\(\pm\)0.01 & 37.42\(\pm\)0.84 & 6.64\(\pm\)0.17 \\ FixMatch [44] & 7.47\(\pm\)0.28 & 4.86\(\pm\)0.25 & 4.21\(\pm\)0.86 & 46.42\(\pm\)2.86 & 28.03\(\pm\)1.66 & 22.20\(\pm\)1.31 & 8.11\(\pm\)2.02 & 6.20\(\pm\)1.96 
& 1.96\(\pm\)0.03 & 35.97\(\pm\)1.44 & 6.25\(\pm\)0.33 \\ Dash [51] & 8.93\(\pm\)1.11 & 5.16\(\pm\)0.23 & 4.361\(\pm\)0.82 & 46.42\(\pm\)2.86 & 27.10\(\pm\)0.21 & 21.88\(\pm\)0.07 & 2.19\(\pm\)0.04 & 1.97\(\pm\)0.04 & 34.52\(\pm\)0.36 & 6.39\(\pm\)0.56 \\ MPL [32] & 6.62\(\pm\)0.91 & 5.76\(\pm\)0.24 & 4.55\(\pm\)0.04 & 46.26\(\pm\)1.84 & 27.71\(\pm\)0.19 & 27.14\(\pm\)0.09 & 9.33\(\pm\)0.02 & 2.29\(\pm\)0.04 & 2.28\(\pm\)0.02 & 35.76\(\pm\)3.43 & 6.66\(\pm\)0.00 \\ FlexMatch [53] & 4.97\(\pm\)0.06 & 4.98\(\pm\)0.09 & 4.19\(\pm\)0.01 & 39.94\(\pm\)1.62 & 26.49\(\pm\)0.20 & 21.90\(\pm\)0.15 & 8.19\(\pm\)2.39 & 6.59\(\pm\)1.92 & 6.72\(\pm\)0.30 & 29.15\(\pm\)1.46 & 5.77\(\pm\)0.18 \\ FreeMatch [47] & 4.90\(\pm\)0.04 & 4.88\(\pm\)0.18 & **4.10\(\pm\)0.27** & 37.98\(\pm\)0.24 & 26.47\(\pm\)0.20 & 21.68\(\pm\)0.03 & 1.97\(\pm\)0.20 & 1.97\(\pm\)0.01 & 1.96\(\pm\)0.00 & 15.56\(\pm\)0.18 & 5.65\(\pm\)0.16 \\ **SequenceMatch** & **4.80\(\pm\)0.01** & **4.75\(\pm\)0.05** & **4.15\(\pm\)0.05** & **37.86\(\pm\)0.20** & **25.99\(\pm\)0.02** & **20.10\(\pm\)0.04** & **1.96\(\pm\)0.23** & **1.89\(\pm\)0.38** & **1.79\(\pm\)0.02** & **15.45\(\pm\)1.45** & **5.56\(\pm\)1.01** \\ \hline Fully-Supervised & \multicolumn{11}{c}{--} \\ \hline \hline \end{tabular} \end{table} Table 2: Error rates (%) on CIFAR-10/100, SVHN, and STL-10 with varying numbers of labels. We report a detailed comparison of class-wise accuracy in Table 4. Our proposed SequenceMatch not only retains a high accuracy on easy-to-learn classes but also improves the accuracy on hard-to-learn classes. The final class-wise accuracy of SequenceMatch is balanced between classes, including hard-to-learn classes (_e.g._ classes 2 and 3). Especially in the evaluation phase, the performance on hard-to-learn classes surpasses FixMatch by a large margin. The class-wise accuracy from the training phase, as shown in Figure 6, also indicates that SequenceMatch can help reduce the confirmation bias issue. It can be seen that the training-phase accuracy of SequenceMatch is not only higher than that of FixMatch and FlexMatch but also balanced between classes. SequenceMatch prevents the trained model from overfitting toward easy-to-learn classes. Figure 7 shows the accuracy of the pseudo-labels during training on the CIFAR-10 40-label split. We can see that SequenceMatch mitigates the overfitting and overconfidence issues, therefore achieving a higher pseudo-label accuracy. We also observe that the mask ratio of SequenceMatch fluctuates less than that of FixMatch and FlexMatch. Furthermore, the data utilization ratio of SequenceMatch surpasses that of FixMatch and FlexMatch by a large margin. ### ImageNet We further evaluate SequenceMatch on the ImageNet [12] dataset to verify its performance on a large and complex dataset. We compare the proposed SequenceMatch with FixMatch, FlexMatch, CoMatch, and SimMatch. All models are trained using 100k images of the training data as labeled examples. The rest of the data is treated as unlabeled data. Furthermore, because the ImageNet dataset is large and complex, we set the \(\tau\) threshold to 0.7 to improve the capture of samples with the correct pseudo-label. The batch size is set to 128 and the weight decay is set to 0.0003. As reported in Table 5, SequenceMatch achieves 38.46% top-1 and 17.38% top-5 error rates, outperforming FlexMatch. The top-1 error rate is 3.39% lower than FlexMatch and 5.20% lower than FixMatch. This result strongly indicates that when the task is complicated and the dataset is imbalanced (in the ImageNet dataset, the number of images per class ranges between 732 and 1300), our proposed SequenceMatch can help boost the performance. 
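For reference, the ImageNet training configuration described above can be summarized in a small config sketch. The dict layout and key names are our own, not the authors' code; all values are taken from the text.

```python
# ImageNet (100k labels) configuration quoted in the text.
imagenet_config = {
    "optimizer": "SGD",
    "momentum": 0.9,
    "lr": 0.03,                  # cosine-annealed
    "ema_momentum": 0.999,
    "total_iterations": 2 ** 20,
    "labeled_batch_size": 128,
    "unlabeled_ratio_mu": 1,
    "threshold_tau": 0.7,        # lowered from 0.95 for ImageNet
    "weight_decay": 3e-4,
    "backbone": "ResNet-50",
}
```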
We also compare SequenceMatch with CoMatch and SimMatch using their source code on 10% labeled data. SequenceMatch outperforms FixMatch both with and without self-supervised pre-trained weights. Compared with CoMatch and SimMatch, SequenceMatch achieves higher performance while having fewer parameters. ### Imbalanced datasets In Figures 9 and 10, we show the performance of FixMatch, FlexMatch, and SequenceMatch on the SVHN and ImageNet datasets. Our proposed SequenceMatch shows superiority over FixMatch and FlexMatch when dealing with imbalanced data, such as the SVHN and ImageNet datasets. According to Tables 2 and 5, our results are comparable to those of FlexMatch on the balanced benchmarks; however, FlexMatch fails on the SVHN dataset, since CPL may generate low final thresholds for the tail classes, allowing noisy pseudo-labeled samples to be trusted and learned. SequenceMatch solves this problem by maintaining the consistency of the model throughout the training process and mitigating the overfitting issue. Furthermore, the SequenceMatch results on the ImageNet dataset outperform FixMatch and FlexMatch without additional modules. \begin{table} \begin{tabular}{l|c c c c c c c c c c} \hline \hline Class Number & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline \hline FixMatch & 0.964 & 0.982 & 0.697 & 0.852 & **0.974** & 0.890 & 0.987 & 0.970 & 0.982 & **0.981** \\ FlexMatch & 0.967 & 0.980 & 0.921 & 0.866 & 0.957 & 0.883 & **0.988** & 0.975 & 0.982 & 0.968 \\ FreeMatch & 0.962 & 0.984 & **0.923** & 0.874 & 0.963 & **0.894** & 0.979 & **0.977** & 0.980 & 0.976 \\ **SequenceMatch** & **0.977** & **0.984** & 0.922 & **0.890** & 0.966 & 0.889 & 0.981 & 0.974 & **0.985** & 0.980 \\ \hline \hline \end{tabular} \end{table} Table 4: Class-wise accuracy comparison on the CIFAR-10 40-label split. Figure 10: Accuracy and loss comparison of FixMatch, FlexMatch, and SequenceMatch on the ImageNet dataset. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & Top-1 & Top-5 & Top-1 & Top-5 \\ \cline{2-5} & \multicolumn{2}{c|}{100k} & \multicolumn{2}{c}{10\%} \\ \hline \hline FixMatch[44] & 43.66 & 21.80 & 28.50 & 10.90 \\ FlexMatch[53] & 41.85 & 19.48 & - & - \\ CoMatch[24] & 42.17 & 19.64 & 26.30 & 8.60 \\ SimMatch[54] & - & - & 25.60 & 8.40 \\ FreeMatch[47] & 40.57 & 18.77 & - & - \\ **SequenceMatch** & **38.46** & **17.38** & **25.20** & **8.10** \\ \hline \hline \end{tabular} \end{table} Table 5: Error rate results on ImageNet. Figure 9: Accuracy comparison of (a) FixMatch vs SequenceMatch and (b) FlexMatch vs SequenceMatch for the first 150k iterations on the SVHN dataset with 40-label and 1000-label splits. ### Calibration of SSL [7] suggests addressing confirmation bias from the calibration perspective. We measure the calibration of FixMatch, FlexMatch, and SequenceMatch trained on the ImageNet dataset with 100k labels2. Several common calibration indicators are used: the Expected Calibration Error (ECE), confidence histograms, and reliability diagrams. As shown in Figure 11, even though FlexMatch has higher accuracy than FixMatch, its ECE value of \(20.55\) is larger than that of FixMatch, at \(20.14\), indicating poorer probability estimation. On the other hand, SequenceMatch achieves both higher accuracy and a lower ECE value of \(18.09\), which shows that it can reduce the confirmation bias and produce a well-calibrated model. 
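For reference, the ECE values quoted above follow the standard binned definition: the gap between accuracy and mean confidence, weighted by bin occupancy. A short sketch of our own (the 15-bin setup is an assumption, matching common practice):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE = sum_b (|B_b|/N) * |acc(B_b) - conf(B_b)| over confidence bins.
    confidences, correct: 1-D numpy arrays of the same length."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()          # accuracy within the bin
            avg_conf = confidences[in_bin].mean() # mean confidence in the bin
            ece += in_bin.mean() * abs(acc - avg_conf)
    return ece
```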
Footnote 2: https://github.com/hollance/reliability-diagrams ## 6 Related Work Self-training is a concept that has been around for decades [28, 42]. Self-training (i.e., utilizing a prediction distribution to generate pseudo-labels for unlabeled data) has been employed in a variety of areas, including natural language processing [27], object recognition [39], image classification [22, 50], domain adaptation [55], etc. Pseudo-labeling [22] is a pioneering SSL method that uses "hard" artificial labels converted from model predictions. Pseudo-labeling is frequently used together with confidence-based thresholding, which keeps unlabeled samples only when the predictions are sufficiently confident [39, 44, 49, 53]. [2] introduced consistency regularization, which was later popularized by [21, 41]. Consistency regularization utilizes unlabeled data by relying on the assumption that the model should output similar predictions when fed perturbed versions of the same image. Data augmentation [14], stochastic regularization [21, 40], and adversarial perturbations [30] have all been used to generate random perturbations. It has recently been demonstrated that applying strong data augmentation can improve outcomes [49]. FixMatch [44] proposed a combination of both pseudo-labeling and consistency regularization methods for SSL. FixMatch's thresholded pseudo-labeling produces a sharpening-like effect that encourages the model to deliver high-confidence predictions. FixMatch can be considered a combined version of UDA and ReMixMatch, in which two common strategies (pseudo-labeling and consistency regularization) are integrated while many components are removed (sharpening and training signal annealing from UDA, distribution alignment and the rotation loss from ReMixMatch, etc.). FlexMatch [53] introduces a Curriculum Pseudo Labeling (CPL) method, which enables conventional SSL to train with a dynamic threshold for each class. CPL can be considered a dynamic thresholding approach, since it dynamically adjusts the threshold for each class after each iteration, thus enabling higher performance for each class. FlexMatch outperforms most state-of-the-art SSL methods across a wide range of datasets. More recently, CoMatch [24] was introduced, which jointly learns class probabilities and contrastive representations on unlabeled data. However, CoMatch is extremely sensitive to hyperparameter settings and requires a large memory bank during training to store the embedded features. The recent work of [54] considers both semantic similarity and instance similarity during training. It shows that enforcing consistency on both the semantic and instance levels brings an improvement, achieving state-of-the-art results on several benchmarks. ## 7 Conclusion In this paper, we introduce SequenceMatch, an SSL pipeline that sequentially matches predictions to reduce the divergence between the predicted class distributions for different augmented versions of the same input. SequenceMatch introduces a medium augmentation for unlabeled data, which helps reduce the divergence between the prediction distributions while maintaining the correct pseudo-label. Furthermore, SequenceMatch also helps reduce the overfitting phenomenon that most SSL methods face. SequenceMatch achieves state-of-the-art performance on a variety of SSL benchmarks and works well across all datasets. Figure 11: Reliability diagrams (top) and confidence histograms (bottom) for the ImageNet dataset.
2308.11973
A study of Pt, Rh, Ni and Ir dispersion on anatase TiO2(101) and the role of water
Understanding how metal atoms are stabilized on metal oxide supports is important for predicting the stability of single-atom catalysts. In this study, we use scanning tunnelling microscopy (STM) and x-ray photoelectron spectroscopy (XPS) to investigate four catalytically active metals - Platinum, Rhodium, Nickel and Iridium - on the anatase TiO2(101) surface. The metals were vapor deposited at room temperature in ultrahigh vacuum (UHV) conditions, and also with a background water pressure of 2x10^-8 mbar. Pt and Ni exist as a mixture of adatoms and nanoparticles in UHV at low coverage, with the adatoms immobilized at defect sites. Water has no discernible effect on the Pt dispersion, but significantly increases the amount of Ni single atoms. Ir is highly dispersed, but sinters to nanoparticles in the water vapor background leading to the formation of large clusters at step edges. Rh forms clusters on the terrace of anatase TiO2(101) irrespective of the environment. We conclude that introducing defect sites into metal oxide supports could be a strategy to aid the dispersion of single atoms on metal-oxide surfaces, and that the presence of water should be taken into account in the modelling of single-atom catalysts.
Lena Puntscher, Kevin Daninger, Michael Schmid, Ulrike Diebold, Gareth S. Parkinson
2023-08-23T07:39:16Z
http://arxiv.org/abs/2308.11973v1
###### Abstract Understanding how metal atoms are stabilized on metal oxide supports is important for predicting the stability of "single-atom" catalysts. In this study, we use scanning tunnelling microscopy (STM) and x-ray photoelectron spectroscopy (XPS) to investigate four catalytically active metals - Platinum, Rhodium, Nickel and Iridium - on the anatase TiO\({}_{2}\)(101) surface. The metals were vapor deposited at room temperature in ultrahigh vacuum (UHV) conditions, and also with a background water pressure of 2x10\({}^{-8}\) mbar. Pt and Ni exist as a mixture of adatoms and nanoparticles in UHV at low coverage, with the adatoms immobilized at defect sites. Water has no discernible effect on the Pt dispersion, but significantly increases the amount of Ni single atoms. Ir is highly dispersed, but sinters to nanoparticles in the water vapor background leading to the formation of large clusters at step edges. Rh forms clusters on the terrace of anatase TiO\({}_{2}\)(101) irrespective of the environment. We conclude that introducing defect sites into metal oxide supports could be a strategy to aid the dispersion of single atoms on metal-oxide surfaces, and that the presence of water should be taken into account in the modelling of single-atom catalysts. **A study of Pt, Rh, Ni and Ir dispersion on anatase TiO\({}_{2}\)(101) and the role of water** Lena Puntscher, Kevin Daninger, Michael Schmid, Ulrike Diebold and Gareth S. Parkinson* Institute of Applied Physics, TU Wien, Vienna, Austria *[email protected] Keywords: STM, oxide surfaces, Single atom catalysis ## 1 Introduction Precious metal nanoparticles supported on metal oxides are used for chemical conversions in heterogeneous and electrocatalysis. Reducing the size of the particles increases the fraction of atoms present at the nanoparticles' surface, and thus the per-atom efficiency. When the particles enter the subnano regime, quantum size effects can affect the catalytic activity.[1, 2] In the ultimate limit, single metal atoms can be anchored directly onto the oxide support, and so-called single-atom catalysis (SAC) has emerged as a key strategy in heterogeneous and electrocatalysis in the last decade.[3-7] Unravelling how catalytically active metal atoms bind to the metal oxide support and interact with reactants is essential for understanding their properties. The local coordination environment has been shown to strongly influence the reactivity and stability of SACs,[8-11] but the structural details of the active sites are difficult to obtain from experiment. This is partly due to the structural inhomogeneity of powder supports, and partly due to the limitations of analytical techniques. In the absence of this critical information, the reaction mechanism is typically modelled computationally assuming an idealized low-index facet of the support material with the catalyst atom located at a high-symmetry site. Such models almost certainly do not represent the active catalyst, particularly for electrochemical applications, because the presence of water and/or hydroxyl groups is neglected. One approach to investigate the validity of the assumptions made in the computational modelling of SACs is to synthesize analogous systems experimentally. Such experimental modelling is achieved using single-crystalline metal-oxide supports where the atomic structure is well known. 
The metal of interest is evaporated directly onto the pristine surface in ultrahigh vacuum, which allows the most stable adsorption site to be determined. One can also selectively introduce molecules that might affect the stability of the system, and determine their individual impact unambiguously. For systems ultimately utilized in an aqueous solution, water is the obvious candidate. Recently, we demonstrated that Rh atoms sinter rapidly after deposition on a pristine \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\)(1\(\bar{1}\)02) model support in ultrahigh vacuum (UHV) at room temperature but are stabilized as "single atoms" when the same experiment is performed with 10\({}^{-8}\) mbar water in the background pressure. The enhanced dispersion occurs because the adatoms are stabilized by additional coordination to two OH ligands.[12] On the other hand, adsorbates can also induce sintering, as observed for Pd/Fe\({}_{3}\)O\({}_{4}\)(001) in the presence of CO.[13] In this work, we turn our attention to TiO\({}_{2}\) as a model support. The thermodynamically stable rutile phase of TiO\({}_{2}\), especially the (110) surface, has been widely investigated in surface science studies.[14, 15] In SAC, the anatase polymorph (a-TiO\({}_{2}\)) is of particular interest because it becomes more stable in the nanoparticle form typically used as a support [16]. The reactivity of a-TiO\({}_{2}\)-supported SAC systems has been heavily investigated in recent years [17-23]. DeRita et al. [24], for example, have convincingly demonstrated that Pt adatoms are active for CO oxidation. The possibility that agglomerates might be responsible for the observed activity was ruled out by using a very low Pt loading; each support particle hosted on average just one Pt atom. DFT-based calculations accompanying the experiments suggested that Pt atoms are probably not stable on the bare a-TiO\({}_{2}\)(101) surface. Moreover, a single Pt adatom on top of the bare a-TiO\({}_{2}\)(101) surface also failed to reproduce the experimentally observed binding energy and vibrational frequency of Pt-adsorbed CO molecules. These benchmark parameters were best reproduced when the Pt atoms were assumed to be coordinated to two additional oxygen atoms originating from hydroxyl groups on the surface.[24] Another study by the same group [25] reported that the coordination of Rh adatoms on a-TiO\({}_{2}\) is sensitive to the composition of the reducing gas that was used to activate the catalyst. When Rh was pre-treated in CO at 300 °C and further exposed to CO, Rh(CO)\({}_{2}\) species formed with Rh being bound to two O\({}_{2c}\) atoms from the lattice. (For a sketch of the a-TiO\({}_{2}\) surface structure and the adsorption site, see Fig. 1 below.) When the system was pre-treated with H\({}_{2}\) at 100 °C, hydroxyls formed. These hydroxyls coordinated to the Rh(CO)\({}_{2}\) species by adding an additional neighbouring surface OH group, which substantially changed the CO binding energy. It was concluded from CO FTIR-TPD and DFT that the presence of hydroxyl groups can alter the local metal coordination and molecular desorption significantly.[25] Here, we present a surface science study of four different metals - Pt, Rh, Ni and Ir - vapor-deposited directly onto an a-TiO\({}_{2}\)(101) single crystal support at room temperature in UHV. We find that Ir is the only metal that exhibits atomic dispersion under UHV conditions. However, the presence of water de-stabilizes the Ir adatoms, which leads to the formation of large clusters anchored at step edges. 
Pt, Ni, and Rh all form mostly clusters even at very low coverages, suggesting diffusion is facile on the regular terrace at room temperature. For Pt and Ni, small protrusions are observed in the STM images that we tentatively assign as isolated adatoms immobilized at defects. ## 2 Experimental Methods Room-temperature scanning tunnelling microscopy (STM) was performed in a two-vessel UHV chamber consisting of a preparation chamber (base pressure p \(<\) 10\({}^{-10}\) mbar) and an analysis chamber (p \(<\) 5x10\({}^{-11}\) mbar). The analysis chamber is equipped with a nonmonochromatic Al K\(\alpha\) X-ray source (VG) and a SPECS Phoibos 100 analyzer for XPS, and an Omicron \(\mu\)-STM. The STM measurements (positive sample bias, empty states) were conducted in constant current mode with an electrochemically etched W tip. The natural a-TiO\({}_{2}\)(101) single crystal was prepared in UHV by sputtering (Ar\({}^{+}\), 1 keV, 10 min) and annealing (610 °C, 20 min). Every fifth cycle the sample was annealed at 500 °C for 20 min in 5x10\({}^{-7}\) mbar of O\({}_{2}\) and then in UHV at 610 °C.[26] Pt, Rh, Ni and Ir were deposited using an e-beam evaporator (FOCUS), with the flux calibrated using a water-cooled quartz microbalance (QCM). One monolayer (ML) is defined as one metal atom per surface unit cell. (The areal density of unit cells is 5.15 \(\times\) 10\({}^{18}\) m\({}^{-2}\).) The STM images were corrected for distortion and creep of the piezo scanner as described in ref. [27]. The gray scale of each image is set individually to ensure that the possible adatoms and other small adsorbates are easily distinguishable. Furthermore, a background subtraction is done by setting the lattice of the support to an apparent height of zero. ## 3 Results ### The as-prepared anatase TiO2(101) surface Figure 1a shows an STM image of the a-TiO2(101) surface after several cleaning cycles. As is typical for this well-studied surface, a clean sample exhibits rows of bright, oval-shaped protrusions running in the [010] direction. These are attributed to the surface Ti\({}_{\text{5c}}\) and O\({}_{\text{2c}}\) atoms shown in Figures 1c, d. [28] The [10-1] direction cannot be easily determined from these images; it was inferred from the preferred step directions.[28] Isolated dark features (highlighted with a blue arrow in Figure 1a) between the rows are inhomogeneously distributed over the surface. These have previously been attributed to extrinsic Nb dopants, which are often present in natural anatase TiO2 samples.[29] Our XPS survey spectra did not show a peak that would allow us to confirm or refute this assignment, likely because their average coverage is too low (0.02-0.03 ML as measured by STM). For what follows, it is important to note that surface oxygen vacancies (V\({}_{\text{O}}\)s) are not present at the surface of a-TiO2(101). Even when formed artificially, they quickly diffuse to the subsurface at room temperature [30]. This is in stark contrast to rutile TiO2(110), where V\({}_{\text{O}}\) sites are prevalent and act as active sites for adsorption [14]. Figure 1b shows the surface after 2 hours of exposure to the residual gas of the preparation chamber at room temperature (with the evaporator turned on but the shutter closed). Bright protrusions are observed; these appear identical to those observed after water adsorption in low-temperature studies [31]. Since water is known to desorb from regular a-TiO2(101) surface sites below 250 K [32], we conclude these water molecules are adsorbed at surface defects. 
The concentration agrees with that of the dark defects highlighted in Figure 1a. Interestingly, the water molecules are mobile at room temperature (Figure 2), but do not leave a visible defect behind when diffusing. This suggests that the water molecule and the defect probably diffuse together, which makes it unlikely that the defect is a cation substituting Ti in the anatase lattice. It could conceivably be an interstitial lattice species, or perhaps a surface site above a subsurface defect such as an oxygen vacancy. The images also exhibit a low concentration of molecular O2[33] species, which are present in the residual gas as a left-over from the oxidation step during sample preparation. This species is also most likely bound at defects, and a few examples are highlighted in orange and marked as (O2)\({}_{\text{ext}}\) in Figure 1b, consistent with the labelling in ref. [29]. These (O2)\({}_{\text{ext}}\) are also mobile at room temperature, and in a rare case we observed one of them hop onto a dark defect, without leaving a similar defect behind (Figure 3). This suggests that there may be defect sites that are not visible in STM images, where adsorbates can bind more strongly than at regular surface sites. Overall, these data show that the regular anatase surface is inert at room temperature, but that defects (both visible and invisible in STM) can act as binding sites for molecular adsorbates. In what follows, we will show that these defects can also stabilize metal adatoms. Figure 1: The as-prepared a-TiO\({}_{2}\)(101) surface. STM images (a) of the a-TiO\({}_{2}\)(101) surface acquired shortly after preparation (\(V_{sample}\) = +1 V, \(I_{tunnel}\) = 0.3 nA), and (b) after keeping the sample in the preparation chamber for 2 hours (\(V_{sample}\) = +1 V, \(I_{tunnel}\) = 0.3 nA). In (a), dark defects (previously attributed to Nb dopants [29]) are highlighted in light blue. After exposing the surface to the residual gas in the preparation chamber for 2 hours (b), water molecules (highlighted in yellow) as well as O\({}_{2}\) molecules (highlighted in orange) adsorb at the surface, likely at defect sites [31]. Panels (c) and (d) show a model of the a-TiO\({}_{2}\)(101) surface in which the O atoms are coloured red and the Ti atoms are coloured blue. The most stable site for Pt and Rh adatom adsorption computed by prior studies is located between two surface O\({}_{2c}\) atoms (marked by the grey circle).[22, 34-36] Figure 2: Diffusion of water on the a-TiO\({}_{2}\)(101) surface. Water was adsorbed from the residual gas in the preparation chamber over the course of two hours at room temperature. Sequential STM images (\(V_{sample}\) = +1 V, \(I_{tunnel}\) = 0.3 nA, \(\approx\) 300 nm/s) show that the water molecules can move along and across the [010] direction. Movements across the [010] direction are shown in green, movements along the [010] direction in cyan. The full circles mark the current position of the water molecule, and the dashed circles mark the positions after or before the movement, respectively. ### Pt/anatase TiO2(101) A previous STM study of the Pt/a-TiO2(101) system [34] revealed that small clusters form predominantly on the terrace, with some species tentatively assigned to adatoms. Our data (Fig. 4a for a coverage of 0.05 ML) are similar to those presented in ref. [34], and we also observe the coexistence of larger clusters and smaller features that have a uniform apparent height of 150-160 pm. 
At a lower coverage of 0.01 ML (Figure 4b), the density and size of the clusters are lower, and the 150-160 pm species are again observed. These smallest Pt species are easily distinguished from adsorbed water by their larger apparent height at our imaging conditions (60-80 pm for water, see Figure 4c for a comparison), and because they are immobile in room-temperature STM movies. Given their relatively small apparent height compared to the clusters, we tentatively assign these protrusions to single Pt species. From the experiment alone, we cannot discount that these species could be dimers (or trimers) if such species were significantly more stable. At the higher coverage (0.05 ML), \(\approx\)7 % of the deposited Pt (according to the QCM calibration) can be attributed to possible single atoms, whereas at 0.01 ML this increases to \(\approx\)17 %. Figure 5a shows a high-resolution image, in which orange dots mark the approximate positions of surface Ti\({}_{\text{5c}}\) atoms. Assuming that the substrate maxima imaged by STM are closer to the Ti\({}_{\text{5c}}\) than the O\({}_{2c}\), the Pt-related protrusion is close to the position predicted by DFT calculations in ref. [34] (in between two adjacent O\({}_{2c}\) atoms, grey circles in Fig. 1d). We also note that the Pt adsorption site is equivalent to the sites where the dark defects are seen in STM (Figure 5f), so it is possible that these defects help stabilize the Pt atoms. Adsorbed water and O\({}_{2}\) are also labelled in Figs. 4a and 4b for ease of comparison. Figure 4d shows an STM image of Pt deposited in a water vapor background of 2\(\times\)10\({}^{-8}\) mbar. Again, a mixture of clusters and possible adatoms is observed, and the ratio of clusters to possible single atoms is comparable to that obtained in UHV. We thus conclude that water has no significant effect on the dispersion of Pt on the a-TiO2(101) terraces, at least in this low-pressure regime. Figure 4: Pt on a-TiO2(101). (a) After deposition of 0.05 ML Pt (\(V_{sample}\) = +1.5 V, \(I_{tunnel}\) = 0.3 nA), (b) 0.01 ML Pt (\(V_{sample}\) = +1.5 V, \(I_{tunnel}\) = 0.3 nA). (c) A magnified section with a water molecule and a possible Pt adatom. (d) After depositing 0.025 ML Pt in a background of 2\(\times\)10\({}^{-8}\) mbar water vapor (\(V_{sample}\) = +2 V, \(I_{tunnel}\) = 0.3 nA). Possible Pt adatoms are highlighted in cyan. In (a) the cluster with the highest apparent height is marked. It measures 489 pm at its highest point. Figure 3: Sequential STM images acquired after keeping the sample in the preparation chamber for 2 hours at room temperature (\(V_{sample}\) = +1 V, \(I_{tunnel}\) = 0.3 nA). In the second image, an adsorbed O\({}_{2}\) molecule has moved to occupy a dark defect, leaving behind an apparently regular lattice site. Figure 5b shows that the possible Pt adatom adsorption site is the same, independent of whether deposition was done in a water vapor background or in UHV. ### Rh/anatase TiO2(101) Figure 6 shows the a-TiO2(101) surface after deposition of 0.02 ML Rh (a) in UHV and (b) in a water vapor background of 2\(\times\)10\({}^{-8}\) mbar. Unlike Pt, Rh forms exclusively small clusters on the surface despite the presence of the dark defects. We did not observe any features that we would attribute to single atoms, irrespective of the environment. We conclude that Rh1 species are not stable on the a-TiO2(101) surface at room temperature under our conditions. This is similar to our experience with _r_-TiO2(110) [37], where Rh1 species were found to sinter already at 150 K. 
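As a side note, the single-atom fractions quoted for Pt above follow from simple coverage bookkeeping. A short sketch of our own (the numbers are illustrative values derived from the text, not additional data):

```python
# 1 ML = one metal atom per surface unit cell of a-TiO2(101);
# the text quotes 5.15e18 unit cells per m^2.
UC_DENSITY = 5.15e18  # unit cells per m^2

def atoms_per_m2(coverage_ml):
    """Convert a coverage in ML to an areal atom density."""
    return coverage_ml * UC_DENSITY

def single_atom_fraction(adatom_coverage_ml, deposited_ml):
    """Fraction of the QCM-calibrated dose found as isolated adatoms."""
    return adatom_coverage_ml / deposited_ml

# Illustrative: ~0.0017 ML of immobile protrusions in a 0.01 ML Pt dose
# corresponds to the ~17 % single-atom fraction quoted in the text.
print(single_atom_fraction(0.0017, 0.01))  # -> 0.17
```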
Figure 5: Enlarged STM images of the adatoms, to determine the exact adsorption site of all metal atoms deposited in UHV (a, c, e) and in a background of 2\(\times\)10\({}^{-8}\) mbar water vapor (b, d), and of a dark defect (f). The orange dots mark the approximate positions of surface Ti\({}_{\rm 5c}\) atoms. ### Ni/anatase TiO2(101) Figure 7 shows the surface after deposition of 0.02 ML Ni in (a) UHV and (b) a water vapor background of 2\(\times\)10\({}^{-8}\) mbar. Like Pt, Ni forms a mixture of clusters and small, uniform features that could be attributed to adsorbed single atoms. The coverage of these small species is relatively high: assuming that they are Ni\({}_{1}\), they would account for \(\approx\)20 % of the deposited Ni, with the rest contained within larger clusters. The smallest species are easily distinguished from adsorbed water, partly by their apparent height (150-170 pm), and also because they are immobile on the a-TiO2(101) surface at room temperature. Thus, in analogy to Pt, we presume that the smallest Ni species are most likely trapped at defect sites. Figures 5c and d show that these species are adsorbed at a different location on the surface than the features attributed to Pt atoms. After deposition in a water background of 2\(\times\)10\({}^{-8}\) mbar, the concentration of the Ni\({}_{1}\) species doubles from 20 % of the deposited Ni to 40 %. There is no discernible difference between the protrusions in the two experiments, so it seems that water has a significant effect on the dispersion of Ni and may play a role in stabilizing Ni at defect sites. ### Ir/anatase TiO2(101) The last metal investigated in this study was Ir. Figure 8 shows 0.02 ML Ir deposited in UHV. Unlike Pt, Rh and Ni, Ir forms mostly uniform features with an apparent height of 130-160 pm. All these features occupy the same site on the surface and are immobile at room temperature. Like Pt, Ir is located approximately between two O\({}_{2c}\) surface atoms (Fig. 5). We assign these features to single Ir adatoms, which appear at a coverage of 0.011 ML on the surface. In addition to the single atoms, a small number of clusters can also be recognized. The apparent height of all features is depicted in a histogram in Figure 8b. A clear peak exists at 150 pm due to the features attributed to single atoms, with the shoulder at larger apparent heights originating from clusters. Figure 6: STM results of Rh on a-TiO2(101). (a) After deposition of 0.02 ML Rh in UHV (\(V_{sample}\) = +1.2 V, \(I_{tunnel}\) = 0.2 nA) and (b) 0.02 ML Rh in a water vapor background of 2\(\times\)10\({}^{-8}\) mbar (\(V_{sample}\) = +1.5 V, \(I_{tunnel}\) = 0.15 nA). In (a) the cluster with the highest apparent height is marked. It measures 444 pm at its highest point. Figure 7: STM results of Ni on a-TiO2(101). 0.02 ML Ni has been deposited (a) in UHV (\(V_{sample}\) = +1.2 V, \(I_{tunnel}\) = 0.15 nA) and (b) in a water vapor background of 2\(\times\)10\({}^{-8}\) mbar (\(V_{sample}\) = +1 V, \(I_{tunnel}\) = 0.1 nA). The density of possible Ni adatoms doubles in the presence of water. In (a) the cluster with the highest apparent height is marked. It measures 551 pm at its highest point. Considering that each cluster 
Increasing the Ir coverage to 0.05 ML (Figure 8c) increases the density of clusters but does not affect the density of adatoms. Figure 8d) shows the influence of water on the Ir/a-TiO\({}_{2}\)(101) system. Deposition in water at room temperature leads to complete sintering of the single Ir atoms and the formation of large clusters at the step edges. This de-stabilizing effect of water is different to all the other metals studied here. We also performed XPS measurements on the four metals deposited in UHV and in water vapor. Figure 9 shows an overview. For Pt, Rh and Ni, the peaks are shifted towards higher binding energy than those of the respective pure metals in the bulk. Water did not cause any drastic peak shifts, but intensity changes consistent with the propensity of dispersion/cluster formation observed in STM. The peak maxima are marked with a dotted line. Figure 8: STM results of Ir on a-TiO\({}_{2}\)(101). Frame (a) shows 0.02 ML Ir deposited in UHV (\(V_{sample}\) = +1.5 V, \(I_{tunnel}\) = 0.2 nA) and (b) the corresponding distribution of apparent heights. (c) 0.05 ML Ir deposited in UHV (\(V_{sample}\) = +1.5 V, \(I_{tunnel}\) = 0.2 nA) and (d) 0.02 ML Ir in water vapor background of 2\(\times\)10-8 mbar (\(V_{sample}\) = +2 V, \(I_{tunnel}\) = 0.2 nA). The highest cluster is marked in (d) and measures 625 pm at its highest point. ## 4 Discussion Overall, this study shows that Pt, Rh and Ni readily sinter after deposition onto the a-TiO\({}_{2}\)(101) surface in UHV conditions. Rh is particularly unstable, and forms small clusters even at the lowest coverage studied with no evidence of any adatoms. Pt and Ni exhibit a mixture of small clusters and small, uniform features, which we assign as single atoms. Ir, in contrast, is highly dispersed at low metal coverages, but clusters begin to form when the coverage is increased. Our analysis of the adsorption site suggests that the adatom-assigned protrusions are between two surface O\({}_{2c}\) atoms for Pt and Ir, which is consistent with the site predicted for Pt by several DFT studies [34; 35; 38]. If the metal atoms bind to O, it is clear that the behaviour of the different metals can be understood in terms of the different oxygen affinities. Campbell and co-workers [39] recently studied adsorption of several late transition metals on MgO(110) and CeO\({}_{2\cdot\cdot}\)(111), and reported the trend Ir > Ni > Pt > Rh for the oxygen affinity, which matches the relative stability observed for the UHV experiments here. One issue with the assignment of adsorption at a regular lattice site is that the diffusion barrier for Pt atoms along the [010] direction has been calculated to be 0.86 eV [38]. Such a value means that Pt atoms could diffuse at room temperature on the ideal surface, which is likely why the majority of Pt atoms form small clusters before our STM measurements are conducted. Consequently, we infer that the immobile adatoms we observe at room temperature must be trapped at defect sites. TiO\({}_{2}\) is sometimes considered synonymous with oxygen vacancies, because the behaviour of rutile TiO\({}_{2}\)(110) [37] in UHV is dominated by V\({}_{0}\) sites. DFT calculations suggest that Pt atoms would indeed be highly stable at surface V\({}_{0}\) sites (4.71 eV, compared to 2.20 eV on the pristine surface) [34], but it is known that V\({}_{0}\)s are preferentially accommodated in the subsurface layers. 
It is possible that such a large energy difference could cause a V\({}_{\mathrm{O}}\) to diffuse to the surface [40] in the presence of Pt atoms, but this is inconsistent with our STM results for Pt and Ir: In this case, the adatom would sit on an O\({}_{2c}\) site, not in between, as is consistently observed with Pt and Ir (Fig. 5). Consequently, we conclude that the immobile Pt and Ir species are stabilized by another defect type. Ni, on the other hand, is slightly shifted from the Pt and Ir adatoms and could therefore possibly be stabilized by a V\({}_{\mathrm{O}}\). The dark defects observed in STM are a primary candidate for the stabilization of metal adatoms, because the defect is also located between two O\({}_{2c}\) atoms (compare Figures 5a and 5f). However, the density of these defects is very inhomogeneous, which hinders any analysis of the number of defects covered by other species, so we cannot exclude that another defect type also plays a role. In any case, the nature of the dark defect is not clear. Figure 9: XPS spectra of Pt, Rh, Ni and Ir deposited in UHV and in a background of 2\(\times\)10\({}^{-8}\) mbar water vapor. The previous assignment to substitutional Nb dopants [29] seems unlikely given the diffusion behaviour observed in the presence of water (Figure 2). Similar logic leads us to exclude that the defect is linked to substitutional Fe cations, although we do observe a small Fe2p signal in XPS survey spectra, as this metal is a common contaminant in natural crystals. Nevertheless, Fe tends to be localized in patches on the surface, and its appearance differs significantly from the dark defects at standard imaging conditions.[26] Finally, we can also exclude that the dark defects are mistaken for an adsorbate, because the appearance of most candidate molecules present in the residual gas (water, O\({}_{2}\), CO, OH) has already been established on the a-TiO\({}_{2}\)(101) surface. [29; 31] We propose that the defect is most likely a dopant atom present in an interstitial site in the lattice. Hebenstreit et al. speculated that it could possibly be a Ti interstitial.[41] While we cannot positively identify the chemical nature of the dark defect at this stage, our results suggest that extrinsic doping of the oxide could be a strategy to provide stronger binding sites capable of immobilizing expensive metals on a-TiO\({}_{2}\)(101). Turning now to the effect of water, we first note the completely different behaviour of Rh on a-TiO\({}_{2}\)(101) compared to our previous study on \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\)(1\(\bar{1}\)02). In that work, depositing the metal in a background of water led to complete dispersion because Rh adatoms were stabilized by additional OH ligands [12]. One possible difference here is that water is already partially dissociated on \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\)(1\(\bar{1}\)02) at room temperature [42], so OH ligands are more readily available than on a-TiO\({}_{2}\)(101), where water adsorbs molecularly. Another difference is the surface geometry: On \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\)(1\(\bar{1}\)02), OH groups adsorbed on nearby surface Fe cations can create a square planar environment for the Rh\({}_{1}\) adatom [12], which is known to be energetically favourable.[43] On a-TiO\({}_{2}\)(101), this is not possible because OH groups adsorbed on surface Ti cations can at best create a threefold coordination, assuming the Rh\({}_{1}\) remains coordinated to two surface O atoms. 
The only case where the dispersion seems to be aided by water is Ni, and our data suggest that \(\approx\)40 % of the deposited metal can be stabilized as isolated adatoms. The apparent height and adsorption site are the same as in the absence of water, so it is possible that some of the Ni\({}_{1}\) were already partially stabilized by the water inadvertently present in the residual gas in the UHV experiments. The complete opposite effect was seen after the deposition of Ir in a water vapor background. The presence of water promotes a dramatic sintering of the dispersed species, leading to mobile clusters, which finally get trapped at the steps. Clearly, then, the effect of water is difficult to predict, and given that water and hydroxyl groups are always present on metal-oxide surfaces, its omission from computational modelling of SAC systems is likely a major oversimplification. It is also important to recognise that water can have a significant effect on the reactivity, and there is evidence that water can play a role in SAC reaction mechanisms [44; 45]. Finally, one of the goals of this study was to assess the suitability of the a-TiO\({}_{2}\)(101) surface as a model support for surface science studies of SAC mechanisms. While it would be possible to study adsorption at the adatoms by STM/nc-AFM, the ambiguity over the nature of the defect sites that stabilize the adatoms precludes reliable modelling of the system. In any case, the presence of clusters at low coverage will make it difficult to distinguish the reactivity of single atoms from that of clusters using area-averaging techniques. At present, it is difficult to recommend this system as a suitable model system for studies of single-atom catalysis. ## 5 Conclusions We have carried out room-temperature STM measurements of the a-TiO\({}_{2}\)(101) surface after deposition of Pt, Rh, Ni and Ir in UHV and in a water vapor background. Pt and Ni form a mixture of small clusters and, possibly, single atoms. Rh exclusively forms clusters, while Ir is highly dispersed at a low coverage. The influence of water strongly varies from metal to metal. No influence is discernible for Pt and Rh, but the dispersion of Ni is increased when deposition is performed in a water vapor background. The exact opposite effect occurs in the case of Ir, which rapidly sinters after deposition in a water vapor background. The adsorption site of the species attributed to Pt and Ir atoms is the same as that calculated for Pt\({}_{1}\) on the pristine surface; nevertheless, there is evidence that the single metal atoms are trapped at defect sites on the a-TiO\({}_{2}\)(101) surface. As such, we conclude that doping of oxide surfaces could be a viable strategy to provide strong adsorption sites for single metal atoms. Acknowledgement: LP, KD, and GSP acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. [864628], Consolidator Research Grant "E-SAC"). UD acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. [883395], Advanced Research Grant "WatFun").
2304.09231
Embedded Finite Models Beyond Restricted Quantifier Collapse
We revisit evaluation of logical formulas that allow both uninterpreted relations, constrained to be finite, as well as an interpreted vocabulary over an infinite domain. This formalism was denoted embedded finite model theory in the past. It is clear that the expressiveness and the complexity of evaluating formulas of this type depend heavily on the infinite structure. If we embed in a wild structure like the integers with additive and multiplicative arithmetic, the logic is extremely expressive and formulas are impossible to evaluate. On the other hand, for some well-known decidable structures, the expressiveness and complexity of evaluation are similar to the situation without the additional infinite structure. The latter phenomenon was formalized via the notion of ``Restricted Quantifier Collapse'': adding quantification over the infinite structure does not add expressiveness. Beyond these two extremes little was known. In this work we show that the possibilities for expressiveness and complexity are much wider. We show that we can get almost any possible complexity of evaluation while staying within a decidable structure. We also show that in some decidable structures, there is a disconnect between expressiveness of the logic and complexity, in that we cannot eliminate quantification over the structure, but this is not due to an ability to embed complex relational computation in the logic. We show failure of collapse for the theory of finite fields and the related theory of pseudo-finite fields, which will involve coding computation in the logic. As a by-product of this, we establish new lower bounds for the complexity of decision procedures for several decidable theories of fields, including the theory of finite fields. In the process of investigating this landscape, we investigate several weakenings of collapse.
Michael Benedikt, Ehud Hrushovski
2023-04-18T18:42:35Z
http://arxiv.org/abs/2304.09231v2
# Embedded finite models beyond restricted quantifier collapse ###### Abstract. We revisit evaluation of logical formulas that allow both uninterpreted relations, constrained to be finite, as well as interpreted vocabulary over an infinite domain: denoted in the past as _embedded finite model theory_. We extend the analysis of "collapse results": the ability to eliminate first-order quantifiers over the infinite domain in favor of quantification over the finite structure. We investigate several weakenings of collapse, one allowing higher-order quantification over the finite structure, another allowing expansion of the theory. We also provide results comparing collapse for unary signatures with general signatures, and new analyses of collapse for natural decidable theories. ## 1. Introduction This work concerns the setting where we have a theory \(T\) in language \(L\) and we consider formulas over the language \(L_{V}\) expanding \(L\) by predicates in a finite relational signature \(V\), with the intention that the interpretation of the \(V\) predicates ranges over finite subsets in a model of \(T\). When we talk about an \(L_{V}\) formula, we always mean a standard first-order formula in this signature. Based on the intended semantics mentioned above, two \(L_{V}\) formulas are said to be equivalent (modulo \(T\)) if they agree over all finite interpretations of \(V\) in a model of \(T\). In past work, such finite interpretations are referred to as _embedded finite models_ (for \(T\)): see [1, 2, 3]. In analogy, we refer to the case where \(V=\{P\}\), \(P\) a single unary predicate, as an _embedded finite subset_. A special kind of \(L_{V}\) formula is a first-order formula built up from \(L\)-formulas with quantifiers ranging over elements in \(V\): we call these _first-order restricted-quantifier formulas_ or 1-RQ formulas for short. Prior work has identified conditions in which all sentences of \(L_{V}\) are equivalent to 1-RQ ones, either over all embedded finite models for a given theory \(T\) [4, 2, 5, 6, 7] or over a particular class of such models [8]. For example, [1] shows such results for a class of theories containing the real field, [7] extends to a class containing Presburger arithmetic, while [5] gives results for a class that includes the complex field. These are sometimes referred to as _restricted-quantifier collapse_ (RQC) results in the literature. They show that the additional power of quantification over an infinite structure in \(L_{V}\) formulas can be "compiled away". In the presence of RQC, \(L_{V}\) sentences are, in some sense, no more expressive than traditional first-order logic over finite structures. We use our failure-of-collapse results for pseudo-finite fields to give lower bounds on quantifier-elimination for the theory. We hope this connection between collapse to restricted-quantifier formulas and lower bounds can be exploited for other examples. **Our techniques.** The prior results in embedded finite model theory [2, 7] use classical model-theoretic techniques. For example, results that unrestricted quantification can be eliminated over real closed fields rely on a very basic model-theoretic property of such fields, namely o-minimality. We hope that our work also offers some insight into the use of more recent technical tools.
In our results on the impact of the signature, we make use of recent results on NIP theories, an area that has developed rapidly over the last decade: [10, 11]. In investigating weaker notions of collapse one main tool is a construction of Henson [12]. In investigating the weakening of collapse via expansion, we make use of results on Ramsey expansions of theories [13]. **Organization.** Section 2 reviews the basic definitions around embedded finite model theory, and also overviews some older results on the topic. Section 3 studies higher order collapse. Section 4 presents our results on collapse in monadic relational signatures vs collapse over all relational signatures. Section 5 investigates whether the failure of collapse can be fixed by extending the signature. Section 6 focuses on the case of pseudofinite fields. We close with a discussion of our results and open questions in Section 7. Many proofs as well as supplementary material are deferred to the appendix. ## 2. Definitions and prior results Let \(T\) be a complete theory in a language \(L\). Fix a finite relational signature \(V\) disjoint from \(L\) and let \(L_{V}\) be the expansion of \(L\) by \(V\). We write an \(L_{V}\) structure as \((M,I)\). Given a \(V\) structure \(I\), the _active domain_ of \(I\), \(\mathsf{Adom}(I)\), is the union of the projections of the interpretations of symbols in \(V\). Thus if \(I\) interprets each relation in \(V\) by a finite set of tuples, \(\mathsf{Adom}(I)\) is a finite set. An \(L_{V}\) _formula_ always means an ordinary first-order formula in this signature. A _first-order restricted-quantifier formula_ (or just "1-RQ formula") is built up from first-order \(L\)-formulas and \(V\) atoms by quantifications of the form \(\exists x\in\mathsf{Adom}\ \phi\) or \(\forall x\in\mathsf{Adom}\ \phi\) where \(x\) is a variable. The semantics on an \(L_{V}\) structure \((M,I)\) with valuation \(\sigma\) is that \(\exists x\in\mathsf{Adom}\ \phi\) holds when there is \(x_{0}\) in \(\mathsf{Adom}(I)\) such that \(\phi\) holds on \(\sigma\) extended with \(x\mapsto x_{0}\). The semantics of \(\forall x\in\mathsf{Adom}\ \phi\) is given similarly, or by duality. It is easy to see that these formulas can be translated to special kinds of \(L_{V}\) formulas: we can expand out \(\exists x\in\mathsf{Adom}\ \phi\) into disjunctions of formulas having the form \(\exists x\vec{y}\ A(x,\vec{y})\wedge\phi\), where \(A\) is a \(V\) atom. We will also focus on the case \(V=\{P\}\), abbreviating \(L_{V}\) as \(L_{P}\). Here active domain quantification is just quantification over \(P\), and we write \(\exists x\in P\ \phi\) or \(\forall x\in P\ \phi\) for such quantification. We do not need atoms \(P(y)\) in the base case of the syntax, since they can be mimicked by quantifications \(\exists x\in P\ x=y\). **Example 1**.: _Let \(T\) be the theory of a dense linear order \(<\) without endpoints. If \(V\) contains two unary predicates \(P\) and \(Q\), then one \(L_{V}\) sentence states that there is an element above every member of \(Q\) and below every member of \(P\):_ \[\exists x\ (\forall q\in Q\ x>q)\wedge(\forall p\in P\ x<p)\] _This sentence is not \(1\)-RQ. But it is equivalent to the following \(1\)-RQ sentence:_ \[\forall p\in P\ \forall q\in Q\ p>q\] Note that in the prior literature many other names are used for these formulas: e.g. "active domain" or "restricted". In the literature, attention is often focused on theories with quantifier-elimination, and RQ formulas are built up only from _atomic \(L\) formulas_ rather than arbitrary \(L\) formulas.
In this work we will not assume quantifier-elimination in the theory, and thus use the more general definition. An _embedded finite model_ for theory \(T\) is a pair \((M,I)\) as above where \(M\models T\), and \(\mathsf{Adom}(I)\) is a finite subset of the domain of \(M\). An _embedded finite subset_ for theory \(T\) is just the special case where \(V=\{P\}\) with \(P\) unary: a pair \((M,P)\) where \(M\models T\) and \(P\) is a finite subset of the domain of \(M\). We say two formulas \(\phi(\vec{x})\) and \(\phi^{\prime}(\vec{x})\) of \(L_{V}\) are _equivalent over \(T\)_ if in every embedded finite model \((M,I)\) for \(T\), \((M,I)\models\forall\vec{x}\ [\phi\leftrightarrow\phi^{\prime}]\). We now come to our central definition: **Definition 1**.: _We say that a theory \(T\) has \(1\)-Restricted Quantifier Collapse ("is \(1\)-RQC") if: every \(L_{V}\) formula is equivalent to a \(1\)-RQ formula._ In the prior literature \(1\)-RQC is referred to just as RQC [2]: the reason for the prefix "\(1\)" will be clear when we introduce higher-order generalizations below. ### A little bit of model theory We briefly overview some of the model theoretic notions that are relevant to our work. **Indiscernible sequences.** Let \(J\) be a linearly ordered set and \(a_{i}:i\in J\) be a \(J\)-indexed sequence in an \(L\) structure \(M\): an injective function from \(J\) into \(M\). The sequence is _order indiscernible_ (or order \(L\)-indiscernible, if \(L\) is not clear from context) if for any \(k\), any two \(k\)-tuples \(a_{n_{1}}\ldots a_{n_{k}}\), \(a_{n^{\prime}_{1}}\ldots a_{n^{\prime}_{k}}\) with \(n_{1}<n_{2}\ldots<n_{k}\) and \(n^{\prime}_{1}<n^{\prime}_{2}\ldots<n^{\prime}_{k}\), and any \(L\) formula \(\phi(x_{1}\ldots x_{k})\), \(M\models\phi(a_{n_{1}}\ldots a_{n_{k}})\leftrightarrow\phi(a_{n^{\prime}_{1}}\ldots a_{n^{\prime}_{k}})\). That is, the ordering on the indices determines the formulas satisfied by elements in the sequence. We will often deal with the case where the elements are ordered \(a_{i}:i\in\mathbb{N}\), and refer to this as an _indiscernible sequence_. Every theory with an infinite model has one with an indiscernible sequence [14]. A _totally indiscernible set_ is an infinite set \(A\) in a model where \(M\models\phi(\vec{a})\leftrightarrow\phi(\vec{a}^{\prime})\) for each pair of \(k\)-tuples \(\vec{a},\vec{a}^{\prime}\) of distinct elements from \(A\) and each \(L\)-formula \(\phi(x_{1}\ldots x_{k})\). We often just talk about an indiscernible set, where the context makes clear whether it is order indiscernible for some ordering or totally indiscernible. If \(\vec{d}\) is a subset of an \(L\)-structure \(M\), a sequence is _indiscernible over \(\vec{d}\)_ if it is indiscernible in the model for the extension of \(L\) with constants interpreted by \(\vec{d}\). **NFCP theories.** A theory \(T\) is NFCP\({}^{1}\) if it satisfies a strong quantitative form of the compactness theorem: for every \(\phi(x_{1}\ldots x_{j},y_{1}\ldots y_{k})\) there is a number \(n\) such that for any finite set \(S\) of \(k\)-tuples in a model \(M\) of \(T\), if for every subset \(S_{0}\) of \(S\) of size at most \(n\), \(M\models\exists\vec{x}\ \bigwedge_{\vec{s}\in S_{0}}\phi(\vec{x},\vec{s})\), then \(M\models\exists\vec{x}\bigwedge_{\vec{s}\in S}\phi(\vec{x},\vec{s})\). Examples of NFCP theories include the theory of algebraically closed fields in each characteristic. Footnote 1: NFCP stands for “Not the Finite Cover Property”.
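To make the compactness condition concrete, here is a worked instance of its failure (our illustration, not taken from the paper); the theory used is the equivalence-relation theory that reappears as Example 2 below. Take \(\phi(x;y):=E(x,y)\wedge x\neq y\) and, for a given \(n\), let \(S\) be an equivalence class of size \(n+1\). Every \(S_{0}\subseteq S\) with \(|S_{0}|\leq n\) has a common solution (any point of the class outside \(S_{0}\)), while \(S\) itself has none:

\[\models\exists x\bigwedge_{s\in S_{0}}\big(E(x,s)\wedge x\neq s\big)\ \text{ for all }S_{0}\subseteq S\text{ with }|S_{0}|\leq n,\qquad\text{but}\qquad\not\models\exists x\bigwedge_{s\in S}\big(E(x,s)\wedge x\neq s\big).\]

Since \(n\) was arbitrary, no bound witnessing the NFCP condition exists for this \(\phi\), so that theory has the finite cover property.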
The reader without a background in model theory will probably find that expanding the acronyms of model-theoretic classes is not insightful, while for readers with such a background spelling out the acronym is unnecessary. We thus avoid spelling out the acronyms for the other classes (e.g. NIP) below. NFCP theories are inherently _unordered_: they do not admit a definable linear order. In fact, a much stronger statement is true: NFCP theories are _stable_, informally meaning that there is no formula \(\phi(\vec{x},\vec{y})\) such that \(\phi\) restricted to arbitrarily large finite sets of tuples defines a linear order. It is known that there are very basic stable theories that are not \(1\)-RQC. One of them is the canonical example of a theory without NFCP, which will be particularly relevant to our discussion. **Example 2**.: _Let \(L=\{E(x,y)\}\) and \(T\) be the \(L\)-theory stating that \(E\) is an equivalence relation with classes of each finite size. Consider the \(L_{P}\) formula \(\phi_{\subseteq}\) stating "some equivalence class is contained in \(P\)". It is easy to show that this is not equivalent to any \(1\)-RQ formula. Informally, with a \(1\)-RQ formula, all we can say about a finite set \(P\) that lies within a single equivalence class are Boolean combinations of cardinality bounds: \(|P|\geq k\) for fixed \(k\)._ **O-minimal theories.** We now turn to a model theoretic tameness property for _linearly ordered_ structures. Consider a theory \(T\) with a relation \(<(x,y)\) such that \(T\) implies that \(<\) is a linear order. Such a \(T\) is _o-minimal_ if for every \(\phi(x,\vec{y})\), every model \(M\) of \(T\), and any \(\vec{c}\) in \(M\), \(\{x|M\models\phi(x,\vec{c})\}\) is a finite union of intervals. The real ordered group, real ordered field, and the real exponential field are all o-minimal [15]. Since no NFCP theory has a definable linear order, NFCP and o-minimal are disjoint. **NIP theories.** The most important class we deal with here are NIP theories [16]. Given \(\phi(x_{1}\ldots x_{j},y_{1}\ldots y_{k})\), and a finite set of \(j\)-tuples \(S\) in a model \(M\), we say that \(S\) is _shattered_ by \(\phi_{\vec{y}}\) if for each subset \(S_{0}\) of \(S\) there is \(\vec{y}_{0}\) such that \(S_{0}=\{\vec{s}\in S|M\models\phi(\vec{s},\vec{y}_{0})\}\). A theory \(T\) is NIP if for each formula \(\phi\) and each partition of the free variables into \(\vec{x},\vec{y}\), there is a number that bounds the size of a set shattered by \(\phi_{\vec{y}}\) in a model of \(T\). NIP theories include both ordered and highly unordered structures. Specifically, they contain o-minimal structures, Presburger arithmetic, as well as all stable structures, and hence all NFCP structures. NIP can also be rephrased in terms of the well-known notion of VC dimension in learning theory. Every partitioned \(L\) formula \(\phi(\vec{x},\vec{y})\) gives a family of subsets of the \(j\)-tuples in a model, as we vary \(\vec{y}\). NIP can be restated as asserting that for every \(\phi\), the corresponding family has finite VC dimension [17]. **Sufficient conditions for \(1\)-RQC.** In the setting of "unordered" structures, the main known example of RQC comes from NFCP theories: **Theorem 1**.: _[_5_]_ _Every NFCP theory is \(1\)-RQC._ Note that this includes the case of pure equality, which was known very early [18].
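Since NIP and VC dimension recur throughout the paper, the following small Python sketch (ours, purely illustrative; the function names are not from any source) may help fix the shattering definition. It brute-forces the VC dimension of a finite family of sets; the family of half-lines \(\{x\mid x<b\}\), i.e. the sets cut out by the order formula with one parameter, has VC dimension \(1\), matching the fact that linear orders are NIP.

```python
from itertools import combinations

def shatters(family, S):
    """Return True if `family` shatters the finite set S: every subset
    of S must arise as S & F for some F in the family."""
    traces = {frozenset(S & F) for F in family}
    return all(frozenset(sub) in traces
               for r in range(len(S) + 1)
               for sub in combinations(S, r))

def vc_dimension(family, universe):
    """Largest size of a subset of `universe` shattered by `family`
    (exponential brute force, for illustration only)."""
    dim = 0
    for r in range(1, len(universe) + 1):
        if any(shatters(family, set(S)) for S in combinations(universe, r)):
            dim = r
    return dim

# Half-lines {x : x < b} on a finite grid, the subsets definable in a
# linear order with one parameter b.
universe = list(range(6))
half_lines = [{x for x in universe if x < b} for b in range(7)]
print(vc_dimension(half_lines, universe))  # prints 1
```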
On the side of "ordered structures", a model-theoretic sufficient condition concerning RQC involves o-minimal theories: **Theorem 2**.: _[_1_]_ _Every o-minimal theory is \(1\)-RQC._ [7] showed RQC for an even broader class, what today are known as _distal theories_. We will not need the definition here, but it subsumes \(o\)-minimal theories, Presburger arithmetic, and the theory of the infinite tree, while being contained in NIP. An easy observation is that NIP is necessary for RQC: **Proposition 1**.: _[_2_]_ _If \(T\) is \(1\)-RQC, then \(T\) is NIP._ We will strengthen this result in Theorem 6 below. We mentioned previously that Example 2 is stable, hence NIP. This is true for the theory of any equivalence relation. Since Example 2 is not \(1\)-RQC, we see that NIP theories do not necessarily have \(1\)-RQC. However, results from [8] show that in NIP theories \(T\) we have an infinite subset \(S_{0}\) of a model \(M\) of \(T\) such that all \(L_{V}\) sentences are equivalent to \(1\)-RQ sentences for embedded finite models based on \(M\) whose active domain lies in \(S_{0}\). We say that a theory is \(\exists\)\(1\)-RQC when this holds. **Theorem 3**.: _[_8_]_ _Every NIP theory is \(\exists 1\)-RQC._ \(\exists\)\(1\)-RQC limits what can be expressed by a sentence in \(L_{V}\) whose truth value depends only on the isomorphism type of the \(V\) structure - we call these _isomorphism-invariant \(L_{V}\) sentences_ for short. By shrinking the infinite set to be a set of indiscernibles for the model, we can replace all \(L\)-formulas by a linear order representing the order on indiscernibles. From this we can see that \(\exists\)\(1\)-RQC implies that isomorphism-invariant sentences are expressible in _order-invariant first-order logic_. Syntactically, this is first-order logic over a relational signature \(V\) augmented with an additional symbol \(<\), with the semantic requirement that the truth value of a sentence is the same whenever \(<\) is interpreted by a linear order on the domain of the finite structure, regardless of how the elements are ordered. For more on the expressiveness of order-invariant FO on finite models, see [19]. Thus Theorem 3 and known limitations of FO on finite structures imply the following: **Corollary 1**.: _In an NIP theory any isomorphism-invariant \(L_{V}\) sentence is expressible in order-invariant FO over the \(V\) vocabulary. Hence no sentence can express a property of \(|P|\) that holds on infinitely and co-infinitely many cardinalities. And no \(L\cup\{G\}\) sentence can express that \(G\) is connected._ ### Fraisse Limits We will make use of a well-known model-theoretic construction known as the Fraisse limit. We apply it to a class \(C\) of isomorphism types of finite structures in a finite relational signature that satisfies some closure properties: the Joint Embedding Property and the Hereditary Property. Fraisse's theorem states that for any such class \(C\) there is a countably infinite model \(M\) such that \(C\) is the class of (isomorphism types of) finite substructures of \(M\), and which is _homogeneous_: in our setting, this means that every isomorphism between finite substructures of the model extends to an automorphism of the whole model. Furthermore, if \(C\) satisfies another closure property - the Amalgamation Property - then there is a unique such structure. One canonical example is where \(C\) is just the class of graphs.
In this case, the Fraisse limit is the _random graph_, whose theory is axiomatized by the _extension axioms_ stating that for every finite graph \(G\) and every supergraph \(H\) of \(G\), any embedding of \(G\) in the model extends to an embedding of \(H\). Another example is where \(C\) is the class of finite linear orders, where the Fraisse limit is simply a countable dense linear order without endpoints. We will not need the precise definitions of the closure properties above, but see, e.g. [14]. ## 3. Higher order collapse We will consider a first natural weakening of RQC by loosening the notion of restricted-quantifier formula, allowing higher-order quantification over the active domain. We define higher-order sorts by starting with some base sort \(B\) and closing under tupling and power sets. The order of a sort is one more than the maximum number of nested powersets, so that the base sort has order \(1\): e.g. a sort \(\mathcal{P}(B)\) has order \(2\), and a sort \(\mathcal{P}(\mathcal{P}(B)\times B)\) has order \(3\); an element of the latter is a set of pairs whose first component is a set of elements of base sort and whose second component is an element of base sort. Given a finite set \(P_{0}\), and a variable of sort \(\tau\), the valid valuations of the variables are defined in terms of the hierarchy over \(P_{0}\): for a variable \(X\) of sort \(\mathcal{P}(\mathcal{P}(B)\times B)\), a valid valuation of \(X\) assigns to it sets of pairs, where the first component is a set of elements of \(P_{0}\) and the second is an element of \(P_{0}\). Given relational schema \(V\) disjoint from \(L\), the _higher-order restricted-quantifier formulas_ over \(L\cup V\) are built up from \(L\) formulas, atomic \(V\) formulas, and atoms \(X(u_{1}\dots u_{m})\) where \(X\) and each \(u_{i}\) are higher order variables. The inductive definition is formed by closing under the (implicitly) restricted quantifications \(\exists X\) and \(\forall X\) for \(X\) a higher order variable. The semantics on embedded finite model \((M,I)\) is in terms of valuations over \(\mathsf{Adom}(I)\). For \(x\) of sort \(B\) we write out \(\exists x\ \phi\) as \(\exists x\in\mathsf{Adom}\ \phi\), in order to make the bounding explicit in the syntax and to be consistent with our definition of \(1\)-RQ formulas. Similarly for universal quantifiers restricted to the active domain. For \(X\) of sort \(\mathcal{P}(B)\), we write \(\exists X\ \phi\) as \(\exists X\subseteq\mathsf{Adom}\ \phi\), again to make the bounding explicit. And similarly for universal quantification and at higher orders. In the case where \(V=\{P\}\), \(P\) a unary predicate, we can simplify the restricted existential second-order quantifier as \(\exists X\subseteq P\ \phi\). For a number \(k\), a _\(k\)-restricted-quantifier formula_ or just \(k\)-RQ formula is one where the variables are of order at most \(k\). **Example 3**.: _Let \(L=\{G(x,y)\}\) and \(V=\{P(x),Q(x)\}\). If \(S\) is a variable of sort \(\mathcal{P}(B)\) then \(\forall S\subseteq P\ \exists x\in Q\ \forall y\in P\ (G(x,y)\leftrightarrow S(y))\) is a \(2\)-RQ formula. Informally, it says that using \(G\) and elements of \(Q\), we can pick out any subset of \(P\)._ We are now ready to give our first weakening of \(1\)-RQC: **Definition 2**.: _A theory is \(k\)-RQC if for every finite relational signature \(V\), every \(L_{V}\) sentence is equivalent to a \(k\)-RQ one._ Higher-order RQC has not been explored in any depth. But there is one simple observation in the literature (see [3]):
**Proposition 2**.: _The random graph is \(2\)-RQC but not \(1\)-RQC._ We can weaken collapse further by dropping the requirement of a uniform bound. We say \(T\) is \(\omega\)-RQC if for every \(L_{P}\) formula \(\tau\) there is a formula \(\theta\) that is \(k\)-RQ for some \(k\) such that \(\theta\) is equivalent to \(\tau\) over all embedded finite subsets. It is easy to see that in considering \(k\)-RQC for any fixed \(k\), or \(\omega\)-RQC for a theory, we can restrict attention to any model of the theory. Although \(\omega\)-RQC has not been studied explicitly, it is easy to see that certain examples that are not \(1\)-RQC are also not \(\omega\)-RQC: **Proposition 3**.: _The theory of full integer arithmetic \((N,+,\cdot)\) is not \(\omega\)-RQC._ ### Separating RQC levels and Higher-order Spectra The goal is to show that for any \(k\) there is a \(T\) that is \((k+2)\)-RQC but not \(k\)-RQC. We will also show that there are decidable theories that are not even \(\omega\)-RQC. \(k^{th}\)-order logic is the logic built up using variables of order \(1\) to \(k\), with atomic formulas being equalities as well as \(X(x)\), where \(X\) has order \(j+1\) and \(x\) has order \(j\), closed under Boolean connectives and quantifiers \(\exists X\). We will consider "pure equality higher-order sentences": \(k^{th}\) order logic sentences with no relational constants, only quantified variables. We can interpret such sentences over a structure consisting only of a domain - a pure set. We call a set \(J\) of numbers a _pure \(k^{th}\) order spectrum_ if there is a sentence \(\phi\) of the above form such that \(J=\{|P|\ \text{finite}\ \mid P\models\phi\}\). The following is a variation of results in [20]: **Proposition 4**.: _For any \(k\), there is a set \(J_{k}\) of numbers that is a pure \(k+2\) order spectrum but not a pure \(k\) order spectrum._ We show how to generate theories \(T_{J}\) from a set of numbers \(J\), such that \(T_{J}\) is \(k\)-RQC if and only if \(J\) is a pure \(k\)-order spectrum. Thus separation of RQC levels corresponds to separation of higher-order definability in finite model theory. As a consequence we will have: **Theorem 4**.: _For each \(k\) there are decidable complete theories \(T_{k}\) that are \((k+2)\)-RQC but not \(k\)-RQC._ The recipe for obtaining \(T_{J}\) from \(J\) can be used to reduce other separation statements (e.g. about the variation of \(k\)-RQC where only Monadic higher order quantification is allowed) to separations in finite model theory. We use a construction due to Henson [12], first used to show that there are uncountably many homogeneous directed graphs. See [21] for more background on these constructions. A graph is _decomposable_ if there is a non-trivial set \(S\) of vertices with the property that for each pair \(x,y\in S\) and every vertex \(v\) outside of \(S\), there is an edge from \(x\) to \(v\) if and only if there is an edge from \(y\) to \(v\), and similarly for edges from \(v\) to \(x\). That is, elements of \(S\) behave the same way with respect to elements outside of \(S\). A graph is _indecomposable_ if it is not decomposable. What is achieved by the construction of Henson can be stated as follows: **Theorem 5**.: _[_12_]_ _For each natural number \(n\) there is a finite indecomposable directed graph \(B(n)\) such that for \(n\neq n^{\prime}\), \(B(n)\) does not embed as an induced substructure of \(B(n^{\prime})\). 
Furthermore, there is a first-order interpretation \(\delta\) whose input language is a single binary relation \(<\), such that applying the interpretation to a linear order of size \(n\) gives \(B(n)\)._ A first-order interpretation of dimension \(d\) [14] is a function from structures to structures, specified by a formula \(\phi_{\mathsf{Dom}}(x_{1}\ldots x_{d})\) in the input vocabulary describing the domain of the output, along with, for each \(k\)-ary relation symbol in the output vocabulary, a \((k\cdot d)\)-ary formula in the input vocabulary. An interpretation allows a formula also for equality - meaning that the domain of the output is actually a quotient of the \(d\)-tuples satisfying \(\phi_{\mathsf{Dom}}\). But such a quotient is not needed in the interpretations of Theorem 5. Let a _cone_ be a digraph whose edge set has the form \(\{d\}\times D\) or \(D\times\{d\}\). One can verify that the construction of [12] has the property that no \(B(n)\) is an induced substructure of a cone. We consider a signature consisting of ternary relations \(R\) and \(Z\). Given a structure \(M\) for such a signature and \(a\) in the domain of \(M\) we let \(R_{a}\) be the binary relation such that \(R_{a}(b,c)\) holds exactly when \(R(a,b,c)\) holds. Let \(Z_{a}\) be a binary relation defined analogously. For a set of numbers \(J\), let \(K_{J}\) be the set of finite structures for \(R,Z\) such that \(R(x,y,z)\) implies \(x,y,z\) are all distinct and for no \(a\) in the domain does some \(B(n)\) with \(n\in J\) embed as an induced substructure of \(R_{a}\). This set of structures is easily seen to have the hereditary, amalgamation, and joint embedding properties. Thus there is a Fraisse limit (see Section 2) which we denote as \(M_{J}\). Let \(T_{J}\) be the complete theory of \(M_{J}\). Since there was no restriction on \(Z\), in \(M_{J}\) the relation \(Z\) will simply be a random ternary relation. Whenever \(J\) is a decidable set, the class of finite structures \(K_{J}\) is effective, and thus the theory \(T_{J}\) is computably axiomatizable via the usual "extension axioms", stating that for every tuple \(\vec{t}_{1}\) in the model and isomorphism \(h_{1}\) of a structure \(s_{1}\) in \(K_{J}\) to the substructure of the model induced on \(\vec{t}_{1}\), for every structure \(s_{2}\) in \(K_{J}\) such that \(s_{1}\) is an induced substructure of \(s_{2}\), there is a tuple \(\vec{t}_{2}\) extending \(\vec{t}_{1}\) and an isomorphism of \(s_{2}\) onto the substructure induced on \(\vec{t}_{2}\) that extends \(h_{1}\). Since \(T_{J}\) is a complete theory with a computable axiomatization, it is decidable. The key property of \(T_{J}\) is that for any finite linear order \(L\) over a domain \(D_{L}\) and any \(a\) in a model \(M\) of \(T_{J}\), \(\delta(L)\) can be embedded in \(R_{a}\) if and only if \(|D_{L}|\not\in J\). **Proposition 5**.: _For \(k\geq 2\), if \(T_{J}\) is \(k\)-RQC then \(J\) is a pure \(k\) order spectrum._ Proof.: Consider the sentence \(\psi(P)\) in \(L_{P}\) stating that: For some \(a\) and some \(b\), \(Z_{b}\) induces a linear order \(<_{P}\) on \(P\), and \(\delta\) applied to \(<_{P}\) is an induced substructure of \(R_{a}\). The properties of the structure guarantee that such a sentence is true if and only if \(|P|\not\in J\). Suppose \(T_{J}\) is \(k\)-RQC. The sentence \(\psi\) is cardinality-invariant. Consider embedded finite subsets living within a totally indiscernible set \(I\). Then there is a \(k\)-RQ sentence that defines \(\psi\), and indiscernibility allows the references to the ambient structure to be eliminated from \(\psi\). 
From this we see that \(J\) is the spectrum of a \(k^{th}\) order logic sentence. **Proposition 6**.: _For \(k\geq 2\), if \(J\) is a pure \(k^{th}\) order logic spectrum, then \(T_{J}\) is \(k\)-RQC._ Proof.: Suppose \(J\) is the spectrum of \(\tau\). For \(G\subseteq P^{2}\), we write \(R_{x,1,P}=G\) to abbreviate the formula: \(\forall y,z\in P\ R(x,y,z)\leftrightarrow G(y,z)\). We let \(R_{x,2,P}=G\) abbreviate \(\forall y,z\in P\ R(y,x,z)\leftrightarrow G(y,z)\), and define \(R_{x,3,P}=G\), \(Z_{x,i,P}=G\) for \(i=1,2,3\) analogously. Inductively, it suffices to show that a formula \(\gamma(x,\vec{y})\) of the form \(\exists x\ \rho(x,\vec{y})\), with \(\rho\) \(k\)-RQ, is equivalent to a \(k\)-RQ formula. We can assume that \(x\) occurs in \(\rho\) only in \(L\) atoms. Since \(T_{J}\) enforces that \(R(x,y,z)\) implies \(x,y,z\) distinct, we can also assume \(x\) occurs at most once in each \(R\) atom. We then rewrite \(\rho\) as \[\exists\,G_{1},G_{2},G_{3},Z_{1},Z_{2},Z_{3}\subseteq P^{2}\ \ R_{x,1,P}=G_{1}\wedge R_{x,2,P}=G_{2}\wedge R_{x,3,P}=G_{3}\wedge\ldots\wedge\rho^{\prime}\] Here \(\ldots\) indicates additional equalities with \(Z_{x,i,P}\). And \(\rho^{\prime}\) is obtained from \(\rho\) by replacing each \(R\) atom containing \(x\) with the appropriate \(G_{i}\) atom and similarly for \(Z\). It will suffice to simplify the formula \[\exists x\ R_{x,1,P}=G_{1}\wedge R_{x,2,P}=G_{2}\wedge R_{x,3,P}=G_{3}\wedge\ldots\] where \(G_{1},G_{2},G_{3}\) are constrained to be in \(P^{2}\). We claim that for \(G_{1},G_{2},G_{3}\subseteq P^{2}\) and \(P\) finite, there is \(x\) such that \(R_{x,1,P}=G_{1}\), \(R_{x,2,P}=G_{2}\) and \(R_{x,3,P}=G_{3}\) and the \(Z_{x,i,P}\) equalities hold exactly when no \(B(n)\) for \(n\in J\) embeds as an induced substructure of \(G_{1}\). This is easily verified: the fact that \(G_{2}\) and \(G_{3}\) play no role uses the assumption that no \(B(n)\) is contained in a cone. The equalities involving \(Z\) can always be achieved. Note that if \(G_{1}\) contains some \(B(n)\) then necessarily \(n\leq|P|\), since \(G_{1}\) is contained in \(P^{2}\). Thus the condition that \(G_{1}\) does not contain any \(B(n)\) can be expressed as: for every subset \(P^{\prime}\) of \(P\) and every linear order \(<\) on \(P^{\prime}\), if \(P^{\prime}\) satisfies \(\tau\) then \(\delta(<)\) does not embed as an induced substructure of \(G_{1}\). Since \(\tau\) is a \(k\)-RQ formula in \(P\), this is a \(k\)-RQ formula on \(G_{1},P\). Thus we have reduced separating levels of RQC to the corresponding separations in finite model theory. By Proposition 4 we can choose \(J\) to be the spectrum of a \((k+2)\)-order sentence but not of a \(k\)-order logic sentence, and thus get theories that are \((k+2)\)-RQC but not \(k\)-RQC. By adjusting the definability of the set of natural numbers \(J\), we can also show: **Corollary 2**.: _There are decidable theories that are not \(\omega\)-RQC._ Proof.: Choose \(J\) such that \(J\) is decidable but is not a pure \(k\)-order spectrum for any \(k\). Recall from Section 2 that there is a strong connection between various classical model-theoretic dividing lines and 1-RQC: for example, 1-RQC implies NIP. The result above can be seen as a negative one about similar connections between higher-order collapse and classical model-theoretic dividing lines: Note that model-theoretically the different theories \(T_{J}\) are quite similar. But they can be adjusted to get any level of RQC or none at all, depending on \(J\). 
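For intuition about the spectra driving these separations, here is a small worked example at the lowest level (our illustration, not from the paper). The set of even numbers is a pure second-order spectrum: a finite set \(P\) has even cardinality exactly when it carries a fixed-point-free involution, expressible by a pure-equality sentence with a single variable \(F\) of sort \(\mathcal{P}(B\times B)\):

\[\exists F\ \Big[\forall x\,\exists y\,\big(F(x,y)\wedge\forall y^{\prime}\,(F(x,y^{\prime})\to y^{\prime}=y)\big)\ \wedge\ \forall x\,\forall y\,\big(F(x,y)\to(F(y,x)\wedge x\neq y)\big)\Big]\]

The sentence says that \(F\) is the graph of a total function that is symmetric and has no fixed points, i.e. a perfect matching on \(P\). By contrast, classical limitations of first-order logic on finite sets (compare Corollary 1) show that evenness is not a pure first-order spectrum, so already at this level higher-order quantification yields extra spectra.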
The construction does not give theories that are NIP. We can show that it is possible to separate 2-RQC from 1-RQC with an NIP theory: see the appendix. But the problem of separating \(k+2\) from \(k\) with an NIP theory for \(k>0\) is open. **Conversion of isomorphism-invariant sentences to higher-order restricted-quantifier form and the complexity of theories.** The previous family of examples was constructed so that there is a tight connection between the level of RQC of general \(L_{V}\) formulas and the descriptive complexity of the underlying set of integers \(J\), which also controls the complexity of the theory. Is there a more general connection between higher-order RQC for a theory \(T\) and the complexity of decision procedures for \(T\)? For an arbitrary theory there may be no connection between the complexity of the theory and the ability to convert \(L_{V}\) sentences to RQ ones: this is the case, for example, in Example 2. But the following result shows that if we focus on isomorphism-invariant sentences, then collapse can follow from reasonably weak assumptions on the complexity of a decision procedure for the theory. We tailor the result for isomorphism-invariant \(\omega\)-RQC, but variations hold for \(k\)-RQC. **Proposition 7**.: _Consider a complete theory \(T\) in a language \(L\) and let \(C\) be an infinite set of constant symbols disjoint from \(L\). Assume that valid sentences of \(L\cup C\) can be efficiently enumerated and also that for any number \(v\), there is an elementary time algorithm that decides whether a first-order sentence \(\phi\) in \(L\cup C\) with at most \(v\) bound variables is consistent with \(T\). Then every isomorphism-invariant sentence in \(L_{V}\) is equivalent to a \(k\)-RQ one for some \(k\)._ Proof.: Consider the following algorithm for determining if \(\phi\) holds on an embedded finite model: given a finite interpretation \(I\) of relations in \(V\) whose active domain has \(n\) elements, we form an object coding the \(L\cup C\) sentence \[\gamma_{I}=\bigwedge_{i<j\leq n}c_{i}\neq c_{j}\wedge\phi_{I}\] where \(\phi_{I}\) is formed from \(\phi\) by replacing every atom \(R(\vec{x})\) by \(\bigvee_{\vec{c}\in R^{I}}\bigwedge_{k}x_{k}=c_{k}\). \(\gamma_{I}\) is of size polynomial in \(I\), and \(\gamma_{I}\) can thus be coded by suitable sequences over the domain of \(I\). By assumption, there is an elementary time algorithm deciding whether \(\gamma_{I}\) is consistent with \(T\). The run of such an algorithm on \(\gamma_{I}\) can be captured again by a suitable higher order object. Since \(\phi\) is isomorphism-invariant and \(T\) is complete, \(\gamma_{I}\) is consistent with \(T\) exactly when \(\phi\) is true on the given embedded finite model. The entire procedure (coding \(\gamma_{I}\) and simulating the run of the elementary time algorithm) can thus be expressed using quantification over higher-order objects built on the active domain, yielding a \(k\)-RQ formula for \(k\) determined by the elementary bound. However, note that the model-theoretic sufficient conditions given in Section 2 imply that RQC can hold for theories that are not even decidable. ## 4. Embedded models vs embedded subsets We now consider another weakening of RQC, where we restrict based on the signature for uninterpreted relations, focusing on sentences dealing with embedded finite subsets rather than general embedded finite models. **Definition 3**.: _We say a theory is \(1\),Monadic-RQC if every \(L_{P}\) formula is equivalent to a \(1\)-RQ formula._ We begin by extending Proposition 1. 
**Theorem 6**.: _If \(T\) is \(1\),Monadic-RQC, then \(T\) is NIP._ Proof.: If \(T\) is not NIP, then by [10], we can find an \(L\) formula \(R(x,y,\vec{p})\) such that in every model there is \(\vec{c}\) such that there are arbitrarily large sets shattered by the family of sets \(\{x\mid R(x,b,\vec{c})\}\), as \(b\) ranges over the model. For short, "\(R\) has IP". Thus, we can find an infinite order indiscernible sequence such that every subset of the sequence is of the form \(\{x\mid R(x,b,\vec{c})\}\) for some \(b\). We find a sequence \(\langle a_{i},b_{i}\rangle_{i\in\mathbb{N}}\) with each \(a_{i}\neq b_{i}\), \(\langle a_{i},b_{i}\rangle\) indiscernible over \(\vec{c}\), and for each \(i,j\in\mathbb{N}\), \(R(a_{i},b_{j},\vec{c})\) if and only if \(i<j\). We can do this by starting with \(\langle a_{i}\rangle\) indiscernible over \(\vec{c}\), then building the \(b_{j}\) inductively: the induction step uses that every subset of the sequence is \(\{x\mid R(x,b,\vec{c})\}\) for some \(b\). Then we take a subsequence of \(\langle a_{i},b_{i}\rangle\) that is indiscernible over \(\vec{c}\). We now consider several cases. Case 1: There exists some \(b^{*}\) such that \(R(a_{i},b^{*},\vec{c})\) for all \(i\), and for infinitely many \(i\), \(\neg R(b_{i},b^{*},\vec{c})\). Then, by refining, we can arrange that \(\neg R(b_{i},b^{*},\vec{c})\) for all \(i\) and \(\langle a_{i},b_{i}\rangle\) is indiscernible over \(\vec{c}\cup b^{*}\). We can arrange that \(\vec{c}\cup b^{*}\) has cardinality \(0\) mod \(4\) via padding. Now we consider the class \(\mathcal{C}\) of embedded finite subsets where \(P\) is interpreted by \(P_{i}=A_{i}\cup B_{i}\cup\vec{c}\cup b^{*}\), where \(A_{i}=\{a_{1},\ldots,a_{i}\}\) and \(B_{i}=\{b_{1},\ldots,b_{i}\}\). We define the \(L_{P}\) sentence \(\phi\) to hold if there exist \(\vec{c},b^{*}\) consisting of distinct values, such that, letting: \[A_{\vec{c},b^{*}} =\{x\in P\mid\bigwedge_{i}x\neq c_{i}\wedge x\neq b^{*}\wedge R(x,b^{*},\vec{c})\}\] \[B_{\vec{c},b^{*}} =P\setminus(A_{\vec{c},b^{*}}\cup\{b^{*}\}\cup\vec{c})\] \[<^{A}_{\vec{c},b^{*}} =\{\langle x,y\rangle\mid x\neq y\wedge x,y\in A_{\vec{c},b^{*}}\wedge\] \[\forall z\in B_{\vec{c},b^{*}}\ (R(x,z,\vec{c})\to R(y,z,\vec{c}))\}\] \[<^{B}_{\vec{c},b^{*}}\text{ defined similarly as above but with }\] \[B_{\vec{c},b^{*}}\text{ and }A_{\vec{c},b^{*}}\text{ swapped }\] \[Biject_{\vec{c},b^{*}} =\{\langle x,y\rangle\mid x\in A_{\vec{c},b^{*}}\wedge\] \[y\in B_{\vec{c},b^{*}}\wedge y\text{ is }<^{B}_{\vec{c},b^{*}}\text{-maximal such that }R(x,y,\vec{c})\}\] then: * \(<^{A}_{\vec{c},b^{*}}\) is a linear order on \(A_{\vec{c},b^{*}}\) and similarly for \(<^{B}_{\vec{c},b^{*}}\). * \(Biject_{\vec{c},b^{*}}\) is a bijection from \(A_{\vec{c},b^{*}}\) to \(B_{\vec{c},b^{*}}\) * for some \(b\), \(\{x\in A_{\vec{c},b^{*}}\mid R(x,b,\vec{c})\}\) includes exactly one of any element of \(A_{\vec{c},b^{*}}\) and its \(<^{A}_{\vec{c},b^{*}}\) successor, while including the first element and excluding the last element according to \(<^{A}_{\vec{c},b^{*}}\). The important points about \(\phi\) are: * \(\phi\) is in \(L_{P}\) * \(\phi\) implies, over any embedded finite subset, that the cardinality of \(P\) is \(0\) mod \(4\). This follows directly from the definitions and the assumption that \(\vec{c}\cup b^{*}\) has cardinality a multiple of \(4\). In particular this implication holds for subsets in \(\mathcal{C}\). * Conversely, for any \(P_{i}\in\mathcal{C}\), if \(P_{i}\) has cardinality \(0\) mod \(4\) then \(|A_{i}|\) and \(|B_{i}|\) are even, and by choosing \(\vec{c}\) and \(b^{*}\) correctly -- that is, as above -- we see that \(\phi\) holds. 
But \(\phi\) cannot be definable by a \(1\)-RQ formula for embedded finite subsets in class \(\mathcal{C}\). Over this class, every \(L\)-formula is equivalent (modulo expansion) to a formula using \(Biject_{\vec{c},b^{*}}\), \(<^{A}_{\vec{c},b^{*}}\), \(<^{B}_{\vec{c},b^{*}}\) for the correct \(\vec{c},b^{*}\), where these are formulas in \(L\) extended with constants. So we have a formula in this language over a structure consisting of two linearly ordered sets with a bijective correspondence. By a standard Ehrenfeucht-Fraisse argument it is clear that the cardinality of the universe modulo \(4\) cannot be defined in such a family. This completes the argument for Case 1. Case 2: there exists some \(b^{*}\) such that \(\neg R(a_{i},b^{*},\vec{c})\) holds for all \(i\), and for infinitely many \(i\), \(R(b_{i},b^{*},\vec{c})\). This is argued symmetrically with Case 1 above. Case 3. None of the above. If \(R(b_{i},b_{j},\vec{c})\) and \(\neg R(b_{j},b_{i},\vec{c})\) holds for all \(i<j\) (or dually) then \(R(x,y,\vec{c})\) defines an ordering on the \(b_{i}\). We let \(\mathcal{C}\) be defined as above but without the \(a_{i}\) or \(b^{*}\), and conclude analogously to Case 1. Otherwise, by indiscernibility of the \(b_{i}\) over \(\vec{c}\), either \(R(b_{i},b_{j},\vec{c})\) holds for all \(i\neq j\), or \(\neg R(b_{i},b_{j},\vec{c})\) holds for all \(i\neq j\). Say the latter. Extend the indiscernible sequence \(\langle a_{i},b_{i}\rangle_{i\in\mathbb{N}}\) by adding one more element \((a_{\omega},b_{\omega})\) maximal in the ordering. So \(R(a_{i},b_{\omega},\vec{c})\) holds for all \(i\in\mathbb{N}\), using indiscernibility, since \(b_{\omega}\) is above \(b_{i}\) for all standard \(i\). But \(\neg R(b_{i},b_{\omega},\vec{c})\) holds for all \(i\), again by indiscernibility and \(b_{\omega}\) being above \(b_{i}\) for all \(i\in\mathbb{N}\). Hence we are in Case 1, contradicting the case assumption. We now show that for \(1\)-RQC, there is no difference between looking at embedded finite subsets and embedded finite models. That is, our weakening does not make any difference at the level of theories: **Theorem 7**.: _A theory \(T\) is \(1\),Monadic-RQC iff \(T\) is \(1\)-RQC._ Proof.: The interesting direction is assuming \(1\),Monadic-RQC and proving \(1\)-RQC. Inductively it suffices to convert a formula \(\phi\) of the form: \[\exists x\ Q_{1}(u_{1})\dots Q_{n}(u_{n})\ \Gamma(x,\vec{u},\vec{y})\] to an active domain formula, where the \(Q_{i}\) are active domain quantifiers and \(\Gamma\) is a Boolean combination of \(V\) formulas and \(L\) formulas. We can assume that \(x\) only occurs in \(L\) formulas. Let \(\Gamma_{L}(x,\vec{u},\vec{y})\) be the vector of \(L\) subformulas of \(\Gamma\). By Theorem 6, \(T\) is NIP. By [11], "local types are uniformly definable over finite sets": for every \(L\) formula \(\phi(\vec{x};\vec{y})\), there is an \(L\) formula \(\delta(\vec{y};\vec{p})\) such that: for any finite set \(S\), for any \(\vec{x}_{0}\), there is a tuple \(\vec{p}_{0}\) from \(S\) such that \(\forall\vec{y}\in S\ \phi(\vec{x}_{0},\vec{y})\leftrightarrow\delta(\vec{y},\vec{p}_{0})\). 
This means, in particular, that for each \(\gamma_{i}\in\Gamma_{L}\) there is \(\delta_{i}(\vec{u},\vec{p})\) such that for every finite set \(P\) and every \(x_{0},\vec{y}_{0}\) in a model of \(T\) there is \(\vec{p}^{\,i}\) in \(P\) such that \[\forall\vec{u}\in P\ \gamma_{i}(x_{0},\vec{y}_{0},\vec{u})\leftrightarrow\delta_{i}(\vec{u},\vec{p}^{\,i})\] This holds over all finite \(P\), so in particular for the active domain of an embedded finite model. We can thus replace \(\phi\) with \[\exists\vec{p}^{\,1}\in\mathsf{Adom}\ \dots\ \exists\vec{p}^{\,n}\in\mathsf{Adom}\ \big(\vec{p}^{\,1}\dots\vec{p}^{\,n}\ \text{are definers for some }x\text{ for }\vec{y}\big)\ \wedge\ Q_{1}(u_{1})\dots Q_{n}(u_{n})\ \Gamma^{\prime}(\vec{u},\vec{y},\vec{p}^{\,1}\dots\vec{p}^{\,n})\] Here \(\Gamma^{\prime}\) is obtained from \(\Gamma\) by replacing each \(\gamma_{i}\) with the corresponding \(\delta_{i}\). The first conjunct inside the existential has the obvious meaning, that the iff above holds for some \(x\) in the role of \(x_{0}\), with \(\vec{y}\) in the role of \(\vec{y}_{0}\). The first conjunct does not mention the additional relational signature \(V\), and thus can be transformed into a 1-RQ formula via 1,Monadic-RQC. **Embedded subsets vs embedded models for isomorphism-invariant sentences.** We contrast Theorem 7 with the situation when we restrict to _isomorphism-invariant_ sentences. Consider the random graph. The theory is not NIP, so by Theorem 6 it is not 1-RQC, and indeed not even 1, Monadic-RQC. And we have mentioned before that it is 2-RQC. In fact, every \(L_{V}\) sentence can be converted to one that uses only Monadic Second Order quantification over the active domain of the \(V\) predicates. Thus, informally, first-order quantification over the model gives you the power of Monadic Second Order active domain quantification over the relational signature \(V\). We can also see that there are isomorphism-invariant \(L\cup\{G(x,y)\}\) sentences \(\gamma\) that are not equivalent to 1-RQ ones: by using unrestricted existential quantification over elements of the model, which code subsets of the nodes of \(G\) via adjacency, we can express that \(G\) has a non-trivial connected component. But consider an isomorphism-invariant sentence in \(L_{P}\). By the observation above, applicable to arbitrary \(L_{P}\) sentences, these can all be expressed by a restricted-quantifier Monadic Second Order sentence \(\phi^{\prime}\) quantifying only over subsets of the unary predicate \(P\). By isomorphism-invariance, such a sentence is determined by its truth value on \(P\) lying within an order-indiscernible set, and thus we can rewrite such a sentence to \(\phi^{\prime\prime}\) that does not mention the graph predicate of \(L\) at all: only equality atoms occur in it. Such a sentence can only define the set of finite \(P\)'s whose cardinality is in a finite or co-finite set. And such a set of finite \(P\)'s can also be defined in first-order logic over \(P\). Thus there can be a difference between considering unary signatures and higher-arity signatures for isomorphism-invariant sentences. ## 5. Weakening RQC by allowing Expansion of the signature Like quantifier-elimination, RQC is sensitive to the signature. It is easy to construct theories that are "badly behaved" - not even \(\omega\)-RQC - but which become 1-RQC when the theory is expanded. **Example 4**.: _Let us return to Example 2, an equivalence relation with classes of each finite size. We first argue that the \(L_{P}\) formula \(\phi_{\subseteq}\) is not equivalent to any \(k\)-RQ formula. 
Fix the standard model of this theory, call it \(M\). We show that for each \(k\), there are \(P_{k},P_{k}^{\prime}\) and a function \(f_{k}\) taking \(P_{k}\) to \(P_{k}^{\prime}\) such that: \((M,P_{k})\models\phi_{\subseteq}\), \((M,P_{k}^{\prime})\models\neg\phi_{\subseteq}\), but \(f_{k}\) preserves all formulas with at most \(k\) variables. We say that the formula is "not finitely redeemable" (NFR), and it is easy to see that this implies that \(\phi_{\subseteq}\) cannot be converted to a higher-order restricted-quantifier sentence. We choose \(P_{k}\) to be a set of \(k\) elements that exhausts an equivalence class of size \(k\), and \(P^{\prime}_{k}\) to be a set of \(k\) elements inside an equivalence class of size larger than \(k\). We can then choose \(f_{k}\) to be an arbitrary bijection between \(P_{k}\) and \(P^{\prime}_{k}\)._ _Let \(T^{+}\) expand \(T\) to \(L^{+}=\{E,<(x,y)\}\), stating that \(<\) is a linear order for which each \(E\) equivalence class is an interval with endpoints. Then \(T^{+}\) can be shown to be \(1\)-RQC. For example, the sentence \(\phi_{\subseteq}\) now becomes equivalent to a restricted-quantifier one: intuitively, we say that \(P\) contains the interval between two equivalent elements \(a\) and \(b\) which form the endpoints of a class. The full argument is included in the appendix._ We thus define another weakening of RQC, allowing expansion of the interpreted signature: **Definition 4**.: _Say that an \(L_{V}\) sentence \(\phi\) is potentially \(k\)-RQC (for \(T\)) if there is some expansion of \(T\) where \(\phi\) is equivalent to a \(k\)-RQ sentence. We say that a theory is potentially \(k\)-RQC if there is an expansion of \(T\) that is \(k\)-RQC, and define potentially \(\omega\)-RQC analogously. A theory that is not potentially \(\omega\)-RQC is said to be persistently unrestricted._ Thus the persistently unrestricted theories are ones in which this weakening does not help us obtain RQC. Notice that the "potentially RQ" sentence \(\phi_{\subseteq}\) from Example 2 is not isomorphism-invariant. This is not a coincidence: **Proposition 8**.: _If theory \(T\) has an isomorphism-invariant \(L_{V}\) sentence \(\phi\) not equivalent to a \(k\)-RQ sentence relative to \(T\), then the same is true in any expansion of \(T\)._ The proposition tells us that weakening RQC in this manner cannot allow us to use unrestricted quantification to do any new "pure relational computation". In particular, really badly-behaved theories like full arithmetic cannot become \(k\)-RQC via augmenting the signature. One open question is: _Is every NIP theory potentially \(\omega\)-RQC?_ One cannot use isomorphism-invariant sentences to get a counterexample, since by Theorem 3 from [8], isomorphism-invariant formulas in NIP theories are always equivalent to \(1\)-RQ ones that use an additional order, and thus in particular are equivalent to \(2\)-RQ ones. Although we do not have an NIP theory that is persistently unrestricted, the main result of this section is that there are persistently unrestricted theories that are reasonably well-behaved. We show that even when isomorphism-invariant formulas are well-behaved, and the complexity of deciding the theory is elementary (see [22]), the theory may be persistently unrestricted: **Theorem 8**.: _The theory of atomless Boolean Algebras is \(\exists\)\(2\)-RQC: there is an infinite set such that every formula is equivalent to a \(2\)-RQ one for embedded finite models coming from this set. 
Thus every isomorphism-invariant \(L_{V}\) sentence is equivalent to a \(2\)-RQ one. It also is \(\exists\)\(1\)-Monadic RQC, hence isomorphism-invariant \(L_{P}\) sentences are equivalent to \(1\)-RQ ones. But the theory is persistently unrestricted._ We will first sketch an argument that there is an infinite set \(A\) such that, for embedded finite structures with the active domain of the \(V\) relations in \(A\), every \(L_{V}\) sentence is equivalent to a restricted-quantifier Monadic Second Order formula. In fact we will describe such a set where the elements are totally indiscernible: the \(L\) formulas \(\phi(\vec{a})\) satisfied by tuples \(\vec{a}\) of distinct elements are independent of the choice of \(\vec{a}\). Hence for \(P\) unary, isomorphism-invariant \(L_{P}\) sentences are equivalent to first-order sentences quantifying over \(P\). Fix an \(L_{V}\) sentence \(\phi\) that depends only on the isomorphism type of the \(V\) structure. Let \(\mathcal{C}\) be the class of embedded finite models where the active domain is a subalgebra of the underlying structure. We observe: **Claim 1**.: _Every \(L_{V}\) sentence \(\phi\) is equivalent, for embedded finite models in \(\mathcal{C}\), to a \(1\)-RQ formula._ Proof.: Consider a finite subalgebra \(P_{0}\). Note that the full Boolean Algebra factors as \(P_{0}\times B\), where \(B\) is just a copy of the full algebra. Thus by the Feferman-Vaught theorem for products [14], a first-order sentence over the algebra can be decomposed into a Boolean combination of sentences over \(P_{0}\) and over \(B\), uniformly in \(P_{0}\). The sentences over \(P_{0}\) are just \(1\)-RQ sentences, while the sentences over \(B\) are independent of \(P_{0}\), hence they can be replaced with true or false. An alternative inductive transformation proving the claim is given in the appendix. Now take \(A\) to be an infinite antichain in a model. We argue that \(A\) has the required property. The antichain is definable within the subalgebra that it generates by a \(1\)-RQ formula: it is just the set of atoms of that algebra. We transform \(\phi\) to \(\phi^{\prime}\) as follows. We add to the \(V\) signature a unary predicate \(P\), and add a conjunct \(\phi_{\mathsf{sub}}\) saying that the domain of \(P\) is a subalgebra, and that its atoms are exactly the active domain of \(V\). We replace references to a \(V\) relation symbol \(U\) in \(\phi\) with references to the restriction of \(U\) to elements that are atoms of the active domain of \(P\). For example, if \(\phi=\exists xy\neg U(x,y)\), then \(\phi^{\prime}\) would be \(\phi_{\mathsf{sub}}\wedge\exists xy\ \neg(x\text{ and }y\text{ are atoms of the domain of }P\text{ that satisfy }U)\). It is easy to verify that \(\phi^{\prime}\) holds on an interpretation of the relational vocabulary \(V\cup\{P\}\), where \(P\) is the subalgebra generated by the active domain of the \(V\) relations, exactly when \(\phi\) is true on the original interpretation. \(\phi^{\prime}\) can be converted to a \(1\)-RQ formula \(\phi^{\prime\prime}\) over \(\mathcal{C}\). Quantifications in \(\phi^{\prime\prime}\), which range over the subalgebra generated by the active domain, can then be replaced by quantifications over subsets of the atoms, giving a restricted-quantifier Monadic Second Order formula. Summarizing, we have shown each claim in Theorem 8 except the last one. We now sketch the argument that the theory is persistently unrestricted. We start by considering one particular extension \(\mathsf{ALessBA}^{<}\), which we describe as the complete theory of a particular model. 
In this model we have a distinguished partition of \(1\) into countably many elements ordered by a discrete linear order \(<\). We look at the Boolean Algebra generated by these sets, which we can identify with infinite bit strings over \(\mathbb{N}\); we order two elements via the lexicographic ordering on the corresponding strings. We next claim that \(\mathsf{ALessBA}^{<}\) is NFR (see Example 4), which witnesses the failure of \(\omega\)-RQC: that is, there is a family of pairs of embedded finite subsets \(P_{n},P^{\prime}_{n}\), distinguished by some fixed \(L_{P}\) formula, along with a mapping \(f_{n}\) from \(P_{n}\) to \(P^{\prime}_{n}\) such that \(f_{n}\) preserves all \(L\)-formulas with at most \(n\) free variables. It is easy to see that \(\mathsf{ALessBA}^{<}\) admits such a family. Here "all \(L\) formulas" can be replaced by "all atomic \(L\) formulas", using quantifier elimination in \(\mathsf{ALessBA}\). We next make use of a result from [13]. For a tuple \(\vec{x}_{0}\) in an \(L^{\prime}\) structure \(M^{\prime}\) and \(L\) a subset of \(L^{\prime}\), the \(L\)-type of \(\vec{x}_{0}\) is the set of formulas \(\phi(\vec{x})\) with vocabulary in \(L\) that are satisfied by \(\vec{x}_{0}\). Informally, a theory \(T\) is an _everywhere Ramsey_ theory if for every model \(M\) of \(T\), for every expansion \(M_{e}\) of \(M\), there is an elementary extension \(M_{e}^{*}\) of \(M_{e}\) and a copy \(M^{\prime}\) of \(M\) inside \(M_{e}^{*}\) such that for every \(k\), the \(L(M)\)-type of a \(k\)-tuple in \(M^{\prime}\) determines its type in \(M_{e}^{*}\). Then [13] shows: **Proposition 9**.: \(\mathsf{ALessBA}^{<}\) _is an everywhere Ramsey theory._ Now consider an arbitrary expansion \(M^{*}\) of a model of \(\mathsf{ALessBA}\). We can further expand \(M^{*}\) to a model of \(\mathsf{ALessBA}^{<}\) by choosing the ordering appropriately. Let \(M,P_{n},P^{\prime}_{n},f_{n}\) be the witness finite models for \(\mathsf{ALessBA}^{<}\) above. Since \(\mathsf{ALessBA}^{<}\) is everywhere Ramsey, we find a copy of \(M\) inside an elementary extension of \(M^{*}\). The copies of \(P_{n},P^{\prime}_{n},f_{n}\) will witness that \(M^{*}\) is also not \(\omega\)-RQC. ## 6. Failure of weaker forms of collapse for pseudo-finite fields and connections to complexity of decision procedures The previous sections investigated weaker notions of collapse, and Subsection 3.1 provided us with examples of theories that have different levels of higher-order collapse. It is natural to ask about decidable theories arising naturally that fail weak notions of collapse - e.g. are not \(\omega\)-RQC. In previous work it was noted that many decidable examples (e.g. real-closed fields) satisfy broad model-theoretic properties like o-minimality, known to imply 1-RQC. The random graph is 2-RQC. Another well-known source of decidable theories are the _pseudo-finite fields_. These are infinite fields that satisfy all the sentences holding in every finite field. There are several alternative definitions. For example, they are the ultraproducts of finite fields [9, 23]: a pseudo-finite field of characteristic \(p\) is an ultraproduct of finite fields of characteristic \(p\), while a pseudo-finite field of characteristic \(0\) is an ultraproduct of finite fields with unbounded characteristic. Pseudo-finiteness can be axiomatized, giving a decidable incomplete theory [9]: the assertions we make below will apply to all pseudo-finite fields - equivalently, all completions of the axiomatization.
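To give a sense of this axiomatization, we recall Ax's classical characterization (this recap is ours, included for the reader's convenience; it originates in the work cited as [9]): a field \(K\) is pseudo-finite if and only if

* \(K\) is perfect;
* \(K\) has exactly one extension of each finite degree inside a fixed algebraic closure (equivalently, its absolute Galois group is \(\widehat{\mathbb{Z}}\));
* \(K\) is pseudo-algebraically closed: every absolutely irreducible variety defined over \(K\) has a \(K\)-rational point.

Each condition is expressible by first-order sentences, the last by an axiom scheme with one sentence per shape of variety, which is what yields the recursive axiomatization mentioned above.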
We will focus on positive characteristic, letting \(PF_{p}\) be the theory of pseudo-finite fields in characteristic \(p\). We begin by showing that they are not \(k\)-RQC for any \(k\), even for isomorphism-invariant formulas. This negative result shows that natural decidable theories can fail to have even weak forms of RQC. It is arguably surprising given that pseudo-finite fields have some commonalities with the 2-RQC random graph. They are another canonical example of simple but unstable (hence not NIP) theories [24]. In fact, we will show that in pseudo-finite fields we can interpret the \(n^{th}\) iterated powerset over \(P\), for all \(n\), uniformly in the field. From there it is easy to show, since we can code \(k^{th}\)-order logic for any \(k\), that such fields cannot be \(k\)-RQC for any \(k\). We will use this result to derive new lower bounds on decision procedures for pseudo-finite fields. By a \(k^{th}\)-order finite set theory structure over a predicate \(P\), we mean a structure with language \((P_{1},\ldots,P_{k},E_{1},\ldots,E_{k})\), with \(P_{1}=P\), \(P_{i}\) a unary predicate, \(E_{i}\subset P_{i}\times P_{i+1}\) a binary predicate for a membership relation, such that \(E_{i}\) satisfies extensionality and \(P_{i+1}\) can be viewed as a set of subsets of \(P_{i}\) closed under flipping membership of one element, i.e. such that:

\[\forall x\in P_{i}\ \forall y\in P_{i+1}\ \exists y^{\prime}\in P_{i+1}\ \big((xE_{i}y^{\prime}\leftrightarrow\neg xE_{i}y)\wedge(\forall z\neq x\in P_{i}\ (zE_{i}y^{\prime}\leftrightarrow zE_{i}y))\big)\]

In case the interpretation of \(P\) is finite, this implies that \((P_{i},P_{i+1},E_{i})\) is isomorphic to the membership relation between \(P_{i}\) and the set of all subsets of \(P_{i}\). Interpreting such a structure is equivalent to interpreting the \(k^{th}\) iterated powerset of \(P\).

**Theorem 9**.: _For any \(k\), there exist \(L_{P}\) formulas_

\[\phi_{P_{1}},\cdots,\phi_{P_{k}},\phi_{E_{1}},\ldots,\phi_{E_{k}}\]

_with the following properties._

* _each_ \(\phi_{P_{i}}\) _has one distinguished free variable, and each_ \(\phi_{E_{i}}\) _has two distinguished free variables. Each formula may optionally have additional "parameter variables"._
* _for any_ \(M\models PF_{p}\)_, and any interpretation of_ \(P\) _as a finite subset of_ \(M\)_, for some elements_ \((c_{1},\ldots,c_{j})\) _of_ \(M\) _substituted for the parameter variables, the_ \(\phi_{P_{i}},\phi_{E_{i}}\) _define the full_ \(k^{th}\)_-order set theory structure over_ \(P\)_._

_Further, there is an \(L_{P}\) formula \(good\) such that the parameters can be taken to satisfy \(good\), and any parameters satisfying \(good\) have the properties above._

Proof.: Consider the additive group \(A\) as an \(\mathbb{F}_{p}\)-vector space. We make use of the fact that there is a definable non-degenerate bilinear form \(b\) into \(\mathbb{F}_{p}\). For example, one can use the trace product (see Lemma 3.9 of [25]). Now consider \(X_{0}\) a finite subset of \(A\), of size \(n_{0}\). Define \(X_{i},Y_{i}\subset A\) inductively. We will use as parameters elements \(t_{1},t_{2},\cdots\), taken to be algebraically independent over the field generated by \(X_{0}\). We will always have \(Y_{i}\) be the subspace generated by \(X_{i}\):

\[y\in Y_{i}\iff(\forall x)[(\forall u\in X_{i}\ b(x,u)=0)\to b(x,y)=0]\]

Inductively we set

\[X_{i+1}=\{\frac{1}{t_{i}-y}:y\in Y_{i}\}\]

Then \(X_{i+1}\) is linearly independent, and we can generate \(Y_{i+1}\) as above.
Note that for a non-degenerate bilinear form, the linear span of a finite set \(P\) is just the orthogonal complement of the orthogonal complement: the set of elements \(x\) such that \(b(x,a)=0\) whenever \(b(a,p)=0\) for each \(p\in P\). Thus there is an \(L_{P}\) open formula that defines the linear span of a finite set \(P\). Given a finite subset \(J\) of \(X_{i+1}\) with its sum \(e_{J}\), we can recover \(J\) from \(e_{J}\) as the set of \(j\in X_{i+1}\) such that \(e_{J}\) is not in the span of \(X_{i+1}\smallsetminus\{j\}\): this definition is correct because \(X_{i+1}\) is linearly independent. Composing with the inverse of the map \(y\mapsto\frac{1}{t_{i}-y}\), we can see that \(X_{i+1}\) codes subsets of \(X_{i}\). Thus \(i^{th}\)-order monadic logic over \(X_{0}\) is interpretable in \(X_{i}\), and by considering \(k^{\prime}\) sufficiently higher than \(k\) we can interpret full \(k^{th}\)-order logic. 

We now discuss how to capture the parameters by a formula \(good\) as required in the last part of the theorem. Let \(T_{i}(P)\) be the theory of \(i^{th}\)-order set theory over a predicate \(P\) interpreted by a finite set. \(T_{i}(P)\) is finitely axiomatizable. For example for \(i=1\) we have a predicate \(\epsilon(x,y)\) and we need to say that there is \(y\) with no elements, and for each \(y\) and each \(p\in P\) there is a \(y^{\prime}\) such that \(y^{\prime}\) agrees with \(y\) on \(P\setminus\{p\}\) and disagrees with \(y\) on \(p\). Let \(good_{i}(\vec{t})\) be an \(L_{P}\) formula expressing that the formulas \(\phi_{1}(\vec{t})\ldots\) with parameters \(\vec{t}\) defined above (for \(i\)) give an interpretation satisfying \(T_{i}(P)\). Since \(T_{i}(P)\) is finitely axiomatizable, this is a first-order sentence in \(L_{P}\). 

The result above shows that the theory of pseudo-finite fields entails \(\exists\vec{t}\ good_{i}(\vec{t})\), for each \(i\). Note that above we talked about an interpretation with equality interpreted in the standard way, but where the formulas in the interpretation can include parameters. The standard notion of interpretation [14] allows the output model to be defined as a quotient by a definable equivalence relation. That is, the interpretation can interpret equality. We can remove the notion of "good parameter" by introducing such an equivalence relation, thus talking about an interpretation without parameters. In either case, the consequence is:

**Corollary 3**.: _Any \(k\)-RQ sentence \(\gamma\) over \(P\) can be expressed using an \(L_{P}\) sentence: by existentially quantifying over \(\vec{t}\) satisfying \(good_{k}\) and checking \(\gamma\) in the resulting interpretation._

**Corollary 4**.: _For every number \(k\), any completion of the theory of pseudo-finite fields in positive characteristic cannot be \(k\)-RQC (even for isomorphism-invariant sentences)._

Proof.: By Proposition 4, for each \(k\) there is a set of integers that is the spectrum of a \(k+2\) order logic sentence \(\phi_{k}\) over the empty vocabulary, which is not the spectrum of a \(k\) order logic sentence over the empty vocabulary. By the result above, \(\phi_{k}\) can be described in \(L_{P}\). But clearly \(\phi_{k}\) cannot be described by a \(k\)-RQ sentence of \(L_{P}\), since over an indiscernible set the \(L\) atoms could be eliminated.

### Embedded finite model theory and lower bounds

Recall that Proposition 7 indicated that if \(\omega\)-RQC fails for an isomorphism-invariant sentence, then the complexity of deciding the theory must be very high.
We show that efficiently interpreting iterated powersets can also suffice to give lower bounds, illustrating this in the case of pseudo-finite fields. We doubt that one can get an isomorphism-invariant counterexample to \(\omega\)-RQC for pseudo-finite fields: see discussion in the next subsection. A primitive recursive decision procedure for the theory is obtained in Fried-Sacerdote [26]. But to our knowledge it has not been improved to give concrete bounds. This contrasts with theories such as real closed fields, where doubly exponential bounds are known [27]. From our construction we immediately obtain the following lower bound:

**Proposition 10**.: _There is no decision procedure for \(PF_{p}\) in positive characteristic \(p\) that works in elementary complexity._

Proof.: We have shown that the theory can interpret the \(k^{th}\) iterated powerset for any \(k\). This allows us to polynomially many-one reduce the model-checking problem for \(k^{th}\)-order logic to satisfiability of sentences in the theory, uniformly in \(k\). Since the model checking problem for \(k^{th}\)-order logic is known to be hard for an exponential tower of height \(O(k)\) (see, e.g. [20]), we can conclude. 

We can also make a conclusion about quantifier elimination. It is convenient to use a language with a sort \(k\) for the field, as well as a sort \(k_{n}\) for the unique field extension of degree \(n\); and the natural inclusion maps among the \(k_{n}\). 2 It is known that any formula \(\phi\) of \(PF_{p}\) with variables \(\vec{x}=(x_{1},\ldots,x_{m})\) is equivalent to a formula of the form

\[\exists y\ \bigwedge_{i}F_{i}(\vec{x},y)=0\]

Footnote 2: In many treatments in the literature, coefficients of an irreducible polynomial of degree \(n\), for each \(n\), are added as distinguished constants; the virtue of this is precisely that \(k_{n}\) becomes interpretable.

where the \(F_{i}\) are polynomials which may have additional parameters from the model, but only one existentially quantified variable. We call such formulas _basic_ below. See, for example, Section 2 of [28] for further details. Similar normal forms are known in [9].

**Proposition 11**.: _The complexity of quantifier-elimination turning a \(PF_{p}\) formula into a basic formula cannot be bounded by a stack of exponentials of bounded height; this is already true for \(PF_{p}\) for any fixed prime \(p\). In fact for any \(k\) there exist formulas \(\phi_{n}\) with a bounded number of variables - and hence with a bounded quantifier rank, independent of \(n\) - of length \(O(n)\), such that any basic formula \(\psi_{n}\) which is \(PF_{p}\)-equivalent to \(\phi_{n}\) must have size at least \(p^{p^{\cdot^{\cdot^{p^{n}}}}}\) (an exponential tower of height \(O(k)\))._

Proof.: Fix \(k\), and towards a contradiction consider a \(k^{\prime}\) sufficiently larger than \(k\). As in Theorem 9, we can write a formula \(\phi_{k^{\prime}}(x;P)\) of \(L_{P}\), such that whenever \(P\) is interpreted by a set of \(n\) elements, \(\phi_{k^{\prime}}(x;P)\) defines a set of size precisely \(n^{n^{\cdot^{\cdot^{n}}}}\), where the ellipses denote an exponential tower of height \(k^{\prime}\). For \(q\) a power of \(p\), let \(\phi_{k^{\prime},q}(x)=\phi_{k^{\prime}}(x;x^{q}=x)\) be the formula obtained from \(\phi_{k^{\prime}}\) by replacing \(P\) by \(x^{q}=x\). Note that \(x^{q}=x\) has precisely \(q\) solutions, and so \(\phi_{k^{\prime},q}\) has \(q^{q^{\cdot^{\cdot^{q}}}}\) solutions, where again the ellipses denote an exponential tower of height \(k^{\prime}\).
Note that these formulas have bounded quantifier rank. Consider a basic formula \(\psi_{q}\) produced by quantifier elimination from \(\phi_{k^{\prime},q}\). It suffices to show that a basic formula can define at most a number of elements elementary in its size. The projection \((x,y)\mapsto x\) maps the solutions of the quantifier-free part \(\theta(x,y)\) in the basic representation \(\exists y\,\theta(x,y)\) of \(\psi_{q}\) onto those of \(\psi_{q}\), and is at most \(l\) to \(1\), where \(l\) is the degree of the polynomials in the representation. So it suffices to bound the number of solutions of quantifier-free formulas of the form of \(\theta\). We will argue the following:

**Theorem 10**.: _There is an elementary function \(F\) such that in any pseudo-finite field if \(\theta(\vec{u},\vec{v})\) is a basic formula and for a given \(\vec{u}\) the set \(\theta_{\vec{u}}\) of \(\vec{v}\) such that \(\theta(\vec{u},\vec{v})\) holds is finite, then \(|\theta_{\vec{u}}|\) has size at most \(F(size(\theta))\). Here the size of \(\theta\) takes into account the degrees of the polynomials in addition to the number of terms._

Theorem 10 can be thought of as an analog of Bezout's theorem bounding the number of joint solutions of polynomial equalities. The subtlety is that while Bezout's theorem works over algebraically closed fields, here we are working in pseudo-finite fields, which are never algebraically closed. The above result follows from [29], Prop. 2.2.1: see the appendix.

### \(\omega\)-RQC and Pseudo-finite fields

We continue with the theory \(PF_{p}\) of pseudo-finite fields of positive characteristic \(p>0\). Our next goal is:

**Theorem 11**.: _The theory of a pseudo-finite field of positive characteristic \(p\) is not \(\omega\)-RQC._

We show that there is an \(L_{P}\)-sentence \(\psi\) such that for arbitrarily large \(n\), there are embedded finite subsets \((M,P_{n})\), \((M,P^{\prime}_{n})\) with \(M\) a pseudo-finite field and a function \(h_{n}\) from \(P_{n}\) to \(P^{\prime}_{n}\) that preserves all atomic formulas among \(n\) elements. Rephrased, we show that we can find _isomorphic_ finite substructures of some pseudo-finite field of characteristic \(p\) such that \(\psi\) holds in one but not the other. We will assume \(p>2\); but an analogous construction works if \(p=2\), replacing the use of square roots in the argument below by Artin-Schreier roots (solutions \(y\) of \(y^{2}+y=x\)).

**Lemma 1**.: _Let \(k\) be a field of char. \(p\neq 2\). Let \(K=k(x_{1},\ldots,x_{n})\) be the rational function field over \(k\) in \(n\) variables. We can consider each \(x_{i}\) as a member of the field, and let \(x=\sum_{i=1}^{n}x_{i}\). Let \(K^{alg}\) be an algebraic closure of \(K\). Within \(K^{alg}\), let \(K_{i}\) be the algebraic closure of \(k_{i}:=k(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n})\). Then \(x\) is not a square in \(K_{1}K_{2}\cdots K_{n}\)._

Proof.: For any irreducible \(f\in k[X_{1},\ldots,X_{n}]\), we have a valuation \(v_{f}:K\to\mathbb{Z}\), defined by:

\[v_{f}(g)=j\iff g=f^{j}\cdot h\]

where \(h\) is a polynomial relatively prime to \(f\). The existence and uniqueness of \(j\) follows from unique factorization in \(k[X_{1},\ldots,X_{n}]\), and the fact that \(f\) is irreducible. It is clear that \(v_{f}(uv)=v_{f}(u)+v_{f}(v)\), \(v_{f}\) is \(0\) on \(k\), and \(v_{f}(u+v)\geq\min(v_{f}(u),v_{f}(v))\). Extend \(v_{x_{1}+\cdots+x_{n}}\) to a valuation on \(K^{alg}\), denoted by \(v\). The key property is that \(v\) is \(0\) on \(k\) and \(1\) on \(x\).
An automorphism \(\sigma\) of a Galois extension \(L\) of \(K\) is said to _fix the residue field_ if whenever \(v(u)=0\), we have \(v(u-\sigma(u))>0\). I.e. \(\sigma\) induces the identity on the residue field. Let \(Aut(L/K,\operatorname{res}(L))\) denote the group of automorphisms of \(L/K\) fixing the residue field. The set of elements fixed by every member of \(Aut(L/K,\operatorname{res}(L))\) is called the _maximal unramified subextension of \(L/K\)_. If it equals \(L\), we say \(L/K\) is unramified or \(L\) is unramified over \(K\), and otherwise we say that \(L\) is ramified over \(K\). Now \(v\) restricts to the trivial valuation on each field \(k_{i}\): every non-zero element maps to \(0\). Using basic valuation theory (applying the definition of a valuation to minimal polynomials), \(v\) is trivial on each \(K_{i}\). Hence the only automorphism of \(K_{i}\) over \(k_{i}\) that fixes the residue field is the identity, and thus the same statement holds for automorphisms of \(K_{1}\cdots K_{n}\) fixing the residue field. So \(K_{1}\cdots K_{n}\) is unramified over \(K\). To prove that \(\sqrt{x}\notin K_{1}\cdots K_{n}\), it suffices to show that \(K(\sqrt{x})\) is ramified over \(K\), which we argue next. Since \(x\) is not a square in \(K\) (again using unique factorization), \(K(\sqrt{x})\) is a Galois extension of \(K\) of degree \(2\), and admits the automorphism \(\tau\) fixing \(K\) and with \(\tau(\sqrt{x})=-\sqrt{x}\). Since \(p\neq 2\), \(\tau\) is not the identity. We will show that \(\tau\) fixes the residue field, thus \(\sqrt{x}\) witnesses that \(K(\sqrt{x})\) is ramified over \(K\). An element \(c\) of \(K(\sqrt{x})\) has the form \(c=a+b\sqrt{x}\) with \(a,b\in K\). If \(b=0\) then \(v(c-\tau(c))=v(0)=\infty\) and in particular \(v(c-\tau(c))>0\). Thus for the purposes of showing that \(\tau\) fixes the residue field, we assume \(b\neq 0\). We have \(v(a)\in\mathbb{Z}\), and \(v(b\sqrt{x})=1/2+v(b)\notin\mathbb{Z}\). So \(v(a)\neq v(b\sqrt{x})\). Thus \(v(c)=\min\{v(a),v(b)+1/2\}\). Towards arguing that \(\tau\) fixes the residue field, we are interested in \(c\) with \(v(c)=0\). \(v(b)+1/2\) cannot be \(0\), so if \(v(c)=0\), we know that \(v(a)=0\) and \(v(b)\geq-1/2\). Since \(v(b)\) is an integer, we conclude that \(v(b)\geq 0\). We have \(v(c-\tau(c))=v(a+b\sqrt{x}-(a-b\sqrt{x}))=v(2b\sqrt{x})\geq v(b)+1/2\). Since \(v(b)\geq 0\), we deduce that \(v(c-\tau(c))>0\) as required. This shows that \(\tau\) fixes the residue field, and \(K(\sqrt{x})\) is ramified over \(K\), finishing the proof of the lemma. 

**Lemma 2**.: _Let \(F\) be a pseudo-finite field of char. \(>2\), and \(a_{1},\ldots,a_{n}\in F\) algebraically independent elements. Then there exists a pseudo-finite field \(F^{\prime}\) and \(a^{\prime}_{1},\ldots,a^{\prime}_{n}\in F^{\prime}\) such that \((F,a_{1},\ldots,a_{i-1},a_{i+1},\ldots,a_{n})\equiv(F^{\prime},a^{\prime}_{1}, \ldots,a^{\prime}_{i-1},a^{\prime}_{i+1},\ldots,a^{\prime}_{n})\) for each \(i\), while also \(F\models(\exists y)(y^{2}=\sum a_{i})\) iff \(F^{\prime}\models\neg(\exists y)(y^{2}=\sum a^{\prime}_{i})\)._

Proof.: Since \(F\) is pseudo-finite, it has one extension of each fixed degree, and thus the Galois group for any fixed degree is cyclic: see, for example [30].
We can thus create an automorphism of each fixed degree extension where the set of elements fixed is exactly \(F\), and since the full algebraic closure is the union of these extensions, \(F=Fix(\sigma)\) for some automorphism \(\sigma\) of the algebraic closure \(F^{alg}\). Let \(k\) be the algebraic closure of the prime field, \(K=k(a_{1},\ldots,a_{n})\), \(k_{i}=k(a_{1},\ldots,a_{i-1},a_{i+1},\ldots,a_{n})\), \(K_{i}=k_{i}^{alg}\). By Lemma 1, there exists an automorphism \(\tau\) of \(F^{alg}\) fixing \(K_{1},\cdots,K_{n}\) but with \(\tau(\sqrt{a})=-\sqrt{a}\), where \(a=\sum_{i=1}^{n}a_{i}\). Let \(\sigma^{\prime}=\tau\sigma\). Let \(E\) be \(Fix(\sigma^{\prime})\). Thus \(E\) is a subfield of \(F^{alg}\). Further \(E\) has exactly one extension of every degree. We now use the following result: for \(F\) a pseudo-finite field and \(K\) a subfield of \(F^{alg}\) having at most one extension of each degree, there is a pseudo-finite field \(K^{\prime}\) with a relatively algebraically closed subfield isomorphic to \(K\). By taking an isomorphic copy, we get that \(K\) is a subfield of \(K^{\prime}\) and \(K\) is relatively algebraically closed in \(K^{\prime}\). The claim above is an extension of Proposition 7 in [9], which states that for every subfield \(K\) of the algebraic numbers such that \(K\) has at most one extension of each degree, there is a pseudo-finite field \(K^{\prime}\) such that the algebraic numbers in \(K^{\prime}\) are isomorphic to \(K\). Thus there exists a pseudo-finite field \(F^{\prime}\) containing \(E=Fix(\sigma^{\prime})\) and such that \(Fix(\sigma^{\prime})\) is relatively algebraically closed in \(F^{\prime}\). For each \(i\) set \(a^{\prime}_{i}=a_{i}\). Then for any \(x\) in \(E\) we have \(F^{\prime}\models(\exists y)(y^{2}=x)\) iff \(Fix(\sigma^{\prime})\models(\exists y)(y^{2}=x)\). In the case \(x=a\), this holds iff \(\tau\sigma(\sqrt{a})=\sqrt{a}\) iff \(-\sigma(\sqrt{a})=\sqrt{a}\) iff \(\sqrt{a}\notin F\) iff \(F\models\neg(\exists y)(y^{2}=a)\). To see the right to left direction, note that if \(F\models\neg(\exists y)(y^{2}=a)\), then \(\sigma\) cannot fix \(\sqrt{a}\), and hence it must map \(\sqrt{a}\) to \(-\sqrt{a}\), the only other root of \(a\) in the algebraic closure. \(\tau\) would need to send \(-\sqrt{a}\) to \(\sqrt{a}\), since otherwise it would fix \(-\sqrt{a}\), and hence \(\sqrt{a}\) as well, contrary to the assumption on \(\tau\). The left-to-right direction is argued similarly. Again by results of Ax, the theory of \((F,a_{1},\ldots,a_{n-1})\) is determined by \(k(a_{1},\ldots,a_{n-1})^{alg}\cap F\); this is identical for \(F^{\prime}\), so the theories are equal; and likewise for any other \((n-1)\)-tuple. This completes the proof of Lemma 2. 

Note that above we have dealt with embedded finite subsets over distinct pseudo-finite fields. We can get the same result over a single field: \(F,F^{\prime}\) are elementarily equivalent; so one can find \(F^{\prime\prime}\) and elementary embeddings of \(F\) and \(F^{\prime}\) into \(F^{\prime\prime}\); of course the images of the \(a_{i}\) will be two different \(n\)-tuples in \(F^{\prime\prime}\), with the same properties as above. 

We now prove Theorem 11. Recall that, due to the presence of a nondegenerate bilinear form, we have \(L_{P}\) formulas that assert:

* \(P\) is linearly independent
* \(y\) is the sum of the elements of \(P\): in particular, \(y\) depends on \(P\) and for \(u\in P\), \(y-u\) depends on \(P\smallsetminus\{u\}\).
The bilinear form is used to state that \(y\) is the sum of elements of \(P\). Let \(\phi(y)\) be this formula. We also have an \(L_{P}\) sentence \(\psi\) asserting that the sum of elements of \(P\) is a square. By Lemma 2, for any sublanguage \(L^{\prime}\) of \(L\) generated by formulas of arity \(n-1\) or less, there exist two \(L^{\prime}\)-isomorphic choices of \(P\), one satisfying \(\psi\) and the other not. From this we can easily conclude that \(\psi\) cannot be equivalent to a \(k\)-RQ formula for any \(k\). Therefore the theory is not \(\omega\)-RQC. Notice that our example contradicting \(\omega\)-RQC is not isomorphism-invariant. We believe this is not a coincidence. By Proposition 7, if there were an isomorphism-invariant \(L_{V}\) sentence that is not equivalent to a \(k\)-RQ one for any \(k\), then there could be no elementary time algorithm for deciding sentences with a fixed number of variables, for any completion of the theory of pseudo-finite fields. We conjecture that there are completions that are "fixed-variable elementary", which would disprove the existence of such a sentence.

## 7. Discussion and open issues

One of the main goals of this work was to revisit embedded finite model theory, a topic which has not seen much activity in the last decades. We investigated some natural ways to extend the phenomenon of restricted quantifier collapse studied in the 90's and early 2000's. The results from earlier decades showed close connections to traditional dividing lines in model theory like NIP. Some of our results on extensions can be seen as negative, in that we do not get tight connections between these extended notions (e.g. \(k\)-RQC) and existing model-theoretic classes. This leaves open the question of whether a finer investigation of model-theoretic structure can be carried out for these new classes. Further, our proofs hint at some additional connections between collapse and questions in descriptive complexity theory. In each aspect of the paper, we have left open many basic questions. In terms of higher-order collapse, although we have separated \((k+2)\)-RQC from \(k\)-RQC, we do not have the corresponding separation for \(k+1\) and \(k\). Our construction from Subsection 3.1 shows that this would follow from the corresponding separation of pure spectra in finite model theory. We believe one can construct examples \(T_{k}\) where the \(L_{P}\) theory is _bi-interpretable_ with \(k^{th}\)-order logic over \(P\), which would provide an equivalence between separations of RQC levels and expressiveness separations in higher-order logic. On the issue of the impact of the signature, the main question is whether \(k\)-RQC for \(V=\{P\}\), with \(P\) a unary predicate, implies \(k\)-RQC for general \(V\). On the issue of persistent unrestrictedness, one major question is whether any model-theoretic criteria can enforce potential RQC. In particular, we do not know if NIP theories can always be expanded to become RQC. It is known that there are NIP theories (even stable theories) that cannot be expanded to fall into the known existing classes that imply 1-RQC [31]. We have shown that Atomless Boolean Algebras are persistently unrestricted. We suspect the same argument applies to another well-known decidable theory, Büchi Arithmetic, but we have not verified this. Our discussion of algebraic examples focused on pseudo-finite fields, and (in the appendix) on the related topic of vector spaces with a bilinear form into a finite field.
Even for pseudo-finite fields, we do not deal with characteristic \(0\). The results on pseudo-finite fields relate to the broader question of the connection between "tame definability properties" of a structure, such as \(k\)-RQC, and the complexity of the underlying theory. In Proposition 7, we noted one simple connection: roughly speaking, high expressiveness of isomorphism-invariant \(L_{V}\) sentences implies lower bounds for deciding \(L\) formulas. In Section 6 we showed that we can sometimes convert high expressiveness of isomorphism-invariant sentences into lower bounds on quantifier elimination in the theory, even when the number of variables is not fixed. We know that, in general, properties like \(k\)-RQC do not even imply decidability of the theory, much less complexity bounds. This follows from the fact that purely model-theoretic properties - e.g. NFCP, o-minimality, see Section 2 - with no effectiveness requirement suffice to get 1-RQC. Conversely, Example 2 can be used to show that theories with modest complexity of satisfiability (even for a fixed number of variables) may not even be \(\omega\)-RQC. Intuitively, a Turing Machine can decide the theory by accessing additional parts of the structure, while restricted-quantifier formulas have no means to access this structure. One might hope that adding function symbols to the signature might close this gap, as it does in Example 4. But Theorem 8 shows that there are examples where this tactic cannot help.

**Acknowledgments.** We thank the referees of LICS for many helpful comments. This research was funded in part by EPSRC grant EP/T022124/1. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
2310.05594
Observational evidence for parametrized emergent dark energy models
Recent cosmological observations show a statistically significant tension in the estimated values of the cosmological parameters within the standard $\Lambda$CDM framework. In a recent study, Li and Shafieloo introduced a simple Phenomenological Emergent Dark Energy (PEDE) model, which possesses the same number of parameters as that of the $\Lambda$CDM model. Their research highlighted this model as a viable alternative to $\Lambda$CDM, capable of alleviating the Hubble tension and explaining the late-time cosmic acceleration. Following this, we consider a series of PEDE-type models where a new parameter $b$, which distinguishes one model from another, is introduced in the dark energy expression; these are designated as bPEDE models. The PEDE and $\Lambda$CDM models are special cases of the bPEDE model. In contrast to the PEDE model, the bPEDE model demonstrates the presence of dark energy in the past while indicating its absence in the asymptotic future. Confronting these models with the observational Hubble data (OHD) shows that a series of bPEDE models fit the data better than the PEDE model and the standard $\Lambda$CDM model. Notably, the Hubble constant ($H_0$) value computed using the best-fit bPEDE models closely aligns with the CMBR prediction. It significantly deviates from the local measurement at a significance level of approximately $3.4\sigma$ for the model-independent OHD data combination. The outcome suggests reconsidering the systematic uncertainties associated with the local measurement. The best-fit bPEDE models predict the deceleration to acceleration transition at a redshift $z_T \sim 0.78$, which is in close agreement with the $\Lambda$CDM prediction. The age of the universe predicted by the bPEDE model is $\sim 14$ Gyr, slightly higher than the age predicted by the $\Lambda$CDM model. The statefinder trajectory reveals a quintessence nature of dark energy.
Sarath Nelleri, Navaneeth Poonthottathil
2023-10-09T10:28:41Z
http://arxiv.org/abs/2310.05594v1
# Observational evidence for parametrized emergent dark energy models

###### Abstract

Recent cosmological observations show a statistically significant tension in the estimated values of the cosmological parameters within the standard \(\Lambda\)CDM framework. In a recent study, Li and Shafieloo introduced a simple Phenomenological Emergent Dark Energy (PEDE) model, which possesses the same number of parameters as that of the \(\Lambda\)CDM model. Their research highlighted this model as a viable alternative to \(\Lambda\)CDM, capable of alleviating the Hubble tension and explaining the late-time cosmic acceleration. Following this, we consider a series of PEDE-type models where a new parameter \(b\), which distinguishes one model from another, is introduced in the dark energy expression; these are designated as bPEDE models. The PEDE and \(\Lambda\)CDM models are special cases of the bPEDE model. In contrast to the PEDE model, the bPEDE model demonstrates the presence of dark energy in the past while indicating its absence in the asymptotic future. Confronting these models with the observational Hubble data (OHD) shows that a series of bPEDE models fit the data better than the PEDE model and the standard \(\Lambda\)CDM model. Notably, the Hubble constant (\(H_{0}\)) value computed using the best-fit bPEDE models closely aligns with the CMBR prediction. It significantly deviates from the local measurement at a significance level of approximately \(3.4\sigma\) for the model-independent OHD data combination. The outcome suggests reconsidering the systematic uncertainties associated with the local measurement. The best-fit bPEDE models predict the deceleration to acceleration transition at a redshift \(z_{T}\sim 0.78\), which is in close agreement with the \(\Lambda\)CDM prediction. The age of the universe predicted by the bPEDE model is \(\sim 14\) Gyr, slightly higher than the age predicted by the \(\Lambda\)CDM model. The statefinder trajectory reveals a quintessence nature of dark energy. Our analysis shows that the best-fit bPEDE model outperforms the PEDE and \(\Lambda\)CDM models and hints towards a static future of the universe.

## Introduction

Observations on Type Ia supernovae indicated that the present evolution of the universe is accelerating[1, 2, 3]. It is well-supported by comprehensive observational probes such as Cosmic Microwave Background Radiation (CMBR)[4, 5, 6, 7, 8, 9], the Baryon Acoustic Oscillation (BAO)[10, 11], the Large Scale Structure (LSS)[12, 13, 14, 10] and Observational Hubble Data (OHD)[15, 16, 17]. The accelerated expansion of the universe during the late phase can be modelled by combining the cosmological constant that produces negative pressure with the cold dark matter, known as the \(\Lambda\)CDM model[18, 19, 20, 21]. As a result of its remarkable success in explaining the observational data, the \(\Lambda\)CDM model is considered to be the standard model of cosmology. Investigation into Cosmic Microwave Background Radiation (CMBR) and local measurements of the Hubble parameter with respect to redshift highlight a statistically significant tension regarding the present value of the cosmological parameters[22, 23, 24, 25, 26]. For instance, the estimation of \(H_{0}\) by the Planck collaboration 2018 yielded a value of \(67.4\pm 0.5\) km s\({}^{-1}\) Mpc\({}^{-1}\)[9] whereas the observation on Cepheids by the SHOES collaboration resulted in \(74.03\pm 1.42\) km s\({}^{-1}\) Mpc\({}^{-1}\)[27, 28].
This discrepancy showcases the tension in the precise measurement of the Hubble constant, where the tension is about \(4.2\sigma\). In this context, the Planck result is acquired under the assumption of the \(\Lambda\)CDM model. Conversely, the SHOES result is obtained in a model-independent manner. The measurements obtained from the Cosmic Microwave Background (CMB) exhibit high precision and align effectively with the predictions of the standard cosmological model. However, the researchers mostly posit that the discrepancy lies within the \(\Lambda\)CDM model, refuting the possibility of systematic errors in the local measurements. Furthermore, the parameter that quantifies the matter fluctuation amplitude (\(S_{8}\)) obtained from CMBR data is \(S_{8}=0.832\pm 0.013\)[9] and that obtained from galaxy clustering and weak lensing is \(S_{8}=0.762^{+0.025}_{-0.024}\)[29]; the discrepancy is at a level of \(\sim 2\sigma\). These discrepancies may be the most compelling evidence of physics beyond the standard cosmology, and many attempts have been made to resolve them. For instance, ref. [30] presents early dark energy that behaves like a cosmological constant in the early universe, alleviating the Hubble tension. Other notable examples comprise phantom dynamical dark energy models[31], negative dark energy models[32, 33], the dissipative axion particle model[34], baryon inhomogeneities resulting from primordial magnetic fields[35], modified gravity models[36] and many more. Nevertheless, certain models exhibit dynamic instability, while support for other models in contrast to \(\Lambda\)CDM, considering observational data, is limited. Recent studies show that the cosmological tension problem entails a redshift evolution of the cosmological parameters[33]. The Phenomenological Emergent Dark Energy (PEDE) model[37] has been proposed as a potential alternative to the \(\Lambda\)CDM model with a motivation of alleviating the Hubble tension. As described within the framework of this model, dark energy has no effective presence in the past and only emerges as time advances. In this model, the dark energy density is assumed to have the form

\[\Omega_{D}=\Omega_{D_{0}}[1-\tanh(\log_{10}(1+z))], \tag{1}\]

where \(\Omega_{D_{0}}\) is the dark energy density at present and \(z\) is the redshift. In Eq. (1), the authors set the base of the logarithm to ten. Even if one assumes that the evolution of the dark energy follows a hyperbolic tangent of a logarithmic function, there is no theoretical reason to set the base of the logarithm to ten. Such an assumption leaves open the possibility that a model with a different base provides a better fit to the data than the PEDE model and the standard \(\Lambda\)CDM model. As described in ref. [37], the analysis relying on the data sets encompassing SNIa, BAO, Ly\(\alpha\) BAO and CMBR prefers the PEDE model substantially over the \(\Lambda\)CDM model. The authors employed a hard-cut \(2\sigma\) lower bound prior for \(H_{0}\) in the analysis to get this better evidence, a choice which remarkably influenced their findings. Under this assumption, their analysis demonstrated a substantial reduction of the Hubble tension, assuming the reliability of the local measurement and effectively refuting the possibility of systematic errors in the local measurement.
Their analysis shows that, in the absence of any \(H_{0}\) prior, the \(\Lambda\)CDM model is preferred over the PEDE model. Notably, the \(H_{0}\) value obtained from the \(\Lambda\)CDM model closely aligns with the value obtained by the Planck measurement when considering the Pantheon+BAO data combinations. Despite these challenges, the PEDE-type evolution of dark energy presents a promising candidate to explain the late-phase acceleration of the universe and gain deeper insights into the tension problem within cosmology. Attempts have been made to generalize the PEDE model by including extra parameters; the resulting models are known as Generalized Emergent Dark Energy (GEDE) models[16, 38, 39, 40, 41]. Indeed, these models posit a more generalized form of dark energy, \(\Omega_{D}(z)=\Omega_{D_{0}}\times\frac{F(z)}{F(z=0)}\) with \(F(z)=1-\tanh([\log_{10}(1+z)-\log_{10}(1+z_{t})])\), where \(z_{t}\) is the transition redshift. Investigation of the GEDE models utilizing observational probes such as CMBR, BAO, Type Ia supernovae and \(H_{0}\) from the Hubble Space Telescope (HST) indicates a preference for these models over the \(\Lambda\)CDM model. However, this preference is observed at a relatively modest statistical significance of \(2\sigma\)[39]. Furthermore, Bayesian inference of the PEDE model based on CMBR data demonstrated that the \(\Lambda\)CDM model provides stronger evidence compared to the PEDE model [40]. Recently, Benaoum et al. [41] analyzed a modified version of the PEDE model, named the Modified Emergent Dark Energy (MEDE) model, where a new parameter \(\alpha\) is introduced and consequently the dark energy density has the form \(\Omega_{D}=\Omega_{D_{0}}[1-\tanh(\log_{10}(1+z)^{\alpha})]\). The analysis shows that the Hubble tension problem is not completely resolved but somewhat alleviated. In this study, we focus on PEDE-type dark energy models designated as parametrized emergent dark energy (bPEDE) models, which possess the same minimal number of parameters as the \(\Lambda\)CDM model. We explore the possibility of a parametrized emergent dark energy model, whose logarithm base can be adjusted to an arbitrary positive number, that fits the observational data better than the PEDE model. Following the comprehensive comparative analysis involving the generalized emergent dark energy model, the PEDE model and the \(\Lambda\)CDM model based on the observational Hubble data as presented in ref.[16], we proceed to employ the observational Hubble data for our analysis. In this study, we are primarily interested in estimating the Hubble constant value assuming the best-fit bPEDE model and assessing whether the model prefers the Planck collaboration's measurement of the CMBR or SHOES's local measurement. In addition, we test the performance of the bPEDE model against the standard \(\Lambda\)CDM model and the PEDE model adopting Bayesian statistics to determine which model is preferred by the OHD data. We further analyze the background evolution of the universe within the framework of the bPEDE models. This paper is organized as follows. In section 2, we discuss the bPEDE-type models. In section 3, we perform the parameter inference and model selection using the observational Hubble data. In section 4, we study the evolution of cosmographic parameters. We conclude in section 5.
## Parametrized emergent dark energy (bPEDE) models

Observations suggest that the phenomenological emergent dark energy model (PEDE) is a potential alternative explanation for the accelerated expansion of the universe and resolves the Hubble tension problem. According to the PEDE model, dark energy is a dynamical quantity, and its dynamical behaviour is described in Eq. (1). Our approach extends the PEDE model to allow any positive real number to be the base of the logarithmic function. In this context, we may refer to these models as bPEDE models, where 'b' represents the base of the logarithmic function. The functional form of the dark energy density is the same, while the base 'b' differentiates one model from another, so that all the models under consideration are close to the PEDE model. The evolution of the dark energy density in the bPEDE model has the form given by,

\[\Omega_{D}=\Omega_{D_{0}}[1-\tanh(\log_{b}(1+z))], \tag{2}\]

where \(\Omega_{D_{0}}\) is the present value of the dark energy density, and \(z\) is the redshift related to the scale factor (\(a\)) as \(1+z=1/a\). The matter density and dark energy density satisfy individual conservation equations,

\[\dot{\rho_{m}}+3H\rho_{m} =0, \tag{3}\]
\[\dot{\rho_{D}}+3H(1+\omega_{D})\rho_{D} =0, \tag{4}\]

where \(\omega_{D}\) is the equation of state of dark energy density. The progress of \(\omega_{D}\) with redshift is obtained from Eq. (4) and can be expressed as

\[\omega_{D}(z)=\frac{1}{3}\frac{d\ln\Omega_{D}}{dz}(1+z)-1, \tag{5}\]

where \(\Omega_{D}\) is the dark energy density normalized over the present value of the critical density, \(\rho_{c_{0}}=3H_{0}^{2}/8\pi G\). Substituting Eq. (2) in (5), and using the identity \(\mathrm{sech}^{2}u=(1-\tanh u)(1+\tanh u)\) to simplify the logarithmic derivative, the evolution of \(\omega_{D}\) is obtained as

\[\omega_{D}(z)=-\frac{1}{3\ln b}\left(1+\tanh[\log_{b}(1+z)]\right)-1. \tag{6}\]

Eq. (6) shows that the nature of dark energy depends on the base 'b'. For \(0<b<\frac{1}{e}\), the model is a quintessence-type dark energy model, while for \(1<b<\infty\), the model resembles a phantom dark energy model. At present, \(\omega_{D}(z=0)=\frac{-1}{3\ln b}-1\), which shows that the present value of the equation of state of dark energy depends on the value of 'b'. The evolution of the matter density is obtained by solving Eq. (3), expressed as \(\rho_{m}=\rho_{m_{0}}(1+z)^{3}\), where \(\rho_{m_{0}}\) is the matter density at present. The evolution of matter density is the same as that of the PEDE and \(\Lambda\)CDM models. The evolution of the Hubble parameter within a flat Friedmann-Lemaître-Robertson-Walker (FLRW) metric[42] is

\[H^{2}(z)=H_{0}^{2}\left[\Omega_{m_{0}}(1+z)^{3}+\Omega_{D_{0}}[1-\tanh(\log_{b}(1+z))]\right] \tag{7}\]

At present, \(z=0\), the Hubble parameter (\(H\)) reduces to \(H_{0}\), where \(H_{0}\) is the present value of the Hubble parameter. In the far past, \(z\rightarrow\infty\), the evolution of dark energy density depends on the value of the parameter 'b'. In the asymptotic future, \(z\rightarrow-1\), the matter density \(\Omega_{m}\to 0\). In contrast, the value of \(\Omega_{D}\) depends on the value of 'b', consequently influencing the asymptotic evolution of the Hubble parameter.
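As a minimal numerical sketch of the background quantities above (our illustration, not code from the paper), the fragment below evaluates Eqs. (2), (6) and (7); flatness is assumed so that \(\Omega_{D_{0}}=1-\Omega_{m_{0}}\), and the parameter values are only indicative.

```python
import numpy as np

def log_base(b, x):
    """log_b(x), valid for any positive base b != 1."""
    return np.log(x) / np.log(b)

def omega_D(z, b, omega_D0):
    """Dark energy density of Eq. (2), normalized by rho_c0."""
    return omega_D0 * (1.0 - np.tanh(log_base(b, 1.0 + z)))

def w_D(z, b):
    """Dark energy equation of state, Eq. (6)."""
    return -(1.0 + np.tanh(log_base(b, 1.0 + z))) / (3.0 * np.log(b)) - 1.0

def hubble(z, b, H0, omega_m0):
    """Hubble parameter of Eq. (7); flatness gives Omega_D0 = 1 - Omega_m0."""
    E2 = omega_m0 * (1.0 + z) ** 3 + omega_D(z, b, 1.0 - omega_m0)
    return H0 * np.sqrt(E2)

b = 10.0 ** (-0.7)                         # a quintessence-type base (0 < b < 1/e)
print(w_D(0.0, b))                         # ~ -0.79: between -1 and -1/3
print(hubble(0.0, b, 67.0, 0.27))          # reduces to H0 at z = 0
print(hubble(-1.0 + 1e-9, b, 67.0, 0.27))  # ~ 0 in the asymptotic future
```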
## Parameter inference and model selection

The parameter estimation for the Phenomenological Emergent Dark Energy (PEDE) model, as presented in ref. [37], indicates that the observational data, including the Type Ia supernovae, the Cosmic Microwave Background Radiation (CMBR), and the Baryon Acoustic Oscillations (BAO), without the assumption of a prior for the Hubble constant (\(H_{0}\)), tend to favour a value of \(H_{0}\) closely aligned with the measurement obtained by the SHOES collaboration, \(\sim 74\) km s\({}^{-1}\) Mpc\({}^{-1}\). However, it is important to note, based on their analysis without assuming any hard-cut prior on \(H_{0}\), that both the \(\Lambda\)CDM model and the Chevallier-Polarski-Linder (CPL) parameterization model [43, 44] show a better Deviance Information Criterion (DIC) [45, 46] than the PEDE model. Intriguingly, these models yield a value of \(H_{0}\) in close agreement with the predicted CMBR value, \(\sim 67\) km s\({}^{-1}\) Mpc\({}^{-1}\), for data combinations involving the Pantheon supernova compilation along with BAO, and either Lyman-\(\alpha\) or CMB data. Notably, the \(\Lambda\)CDM and CPL parameterization models predict a value of \(H_{0}\) close to the CMBR prediction, regardless of whether the data set includes CMBR data or not. The authors observed that the value of \(H_{0}\) aligns closely with the local measurement value when assuming \(1\sigma\) or \(2\sigma\) priors for \(H_{0}\) taken from the SHOES result. Under these conditions, the PEDE model shows better evidence than the \(\Lambda\)CDM model[37]. In this context, this study explores the bPEDE models presented in Eq. (7), aiming to achieve a better fit than the PEDE and \(\Lambda\)CDM models and to examine the predicted value of \(H_{0}\) within the bPEDE model. In this study, as presented in ref. [39], we adopt observational Hubble data (OHD) to perform both parameter inference and model selection. The OHD dataset comprises 51 Hubble parameter values observed within the redshift range of \(0.07<z<2.36\). Among these, 31 data points are obtained model-independently from the differential age (DA) technique [47, 48]. The remaining 20 data points are obtained through Baryon Acoustic Oscillations (BAO) measurements, where assumptions based on standard cosmology (\(\Lambda\)CDM) are used to estimate the sound horizon at the drag epoch [49]. Hence, these data points may introduce some biased constraints for the model parameters. To avoid such potential biases, homogenized and model-independent OHD data are presented in ref. [49], obtained by employing the sound horizon at the drag epoch from Planck collaboration 2016 data. Within the scope of this study, we consider three distinct data combinations: Data1: OHD (DA) + OHD (homogeneous from BAO), Data2: OHD (DA) + OHD (non-homogeneous from BAO) and Data3: OHD (DA) for our analyses. We adopt Bayesian statistics for the parameter inference and model selection. Bayesian statistics is based on Bayes' theorem, which gives the posterior distribution of the model parameters (\(\theta\)) for a given set of data (D) and model (M)[50, 51]. According to Bayes' theorem,

\[P(\theta|D,M)=\frac{P(D|\theta,M)P(\theta|M)}{P(D|M)}, \tag{8}\]

where \(P(\theta|D,M)\) is the posterior distribution of the model parameters, \(P(D|\theta,M)\) is the likelihood, \(P(\theta|M)\) is the prior and \(P(D|M)\) is just a normalization factor that represents the evidence of the model. The evidence is irrelevant for parameter estimation.
However, it is the central quantity of interest when we do the model selection. The prior probability encapsulates any information available regarding the model parameters before acquiring the data[52, 53]. The choice of prior is contingent upon any information we have regarding the model and depends on the quality of judgment[54]. Nonetheless, once the prior is established, successive application of Bayes' theorem results in convergence towards a common posterior. The likelihood is defined as

\[P(D|\theta,M)\equiv\exp(-\chi^{2}(\theta)/2), \tag{9}\]

where the \(\chi^{2}\) is defined as

\[\chi^{2}(\theta)=\sum_{k}\left[\frac{H_{k}-H_{k}(\theta)}{\sigma_{k}}\right]^{2} \tag{10}\]

Here, \(H_{k}\) is the Hubble parameter value corresponding to the redshift value \(z_{k}\) given in the OHD data, \(H_{k}(\theta)\) is the corresponding theoretical value obtained from the model and \(\sigma_{k}\) is the standard deviation in the measured values. Marginalizing over all other parameters, we obtain the posterior distribution of the parameter of interest[51, 55, 40]. For instance, if the model has the parameter space \(\theta=(\theta_{1},\theta_{2},\ldots,\theta_{n})\), the marginal probability of \(\theta_{1}\) can be expressed as

\[p(\theta_{1}|D,M)=\int p(\theta|D,M)d\theta_{2}...d\theta_{n}, \tag{11}\]

which represents a one-dimensional posterior distribution of the model parameter \(\theta_{1}\). A two-dimensional posterior can also be defined similarly. We use the Markov Chain Monte Carlo (MCMC) method for the numerical simulation. The Bayesian evidence \(p(D|M)\) plays a central role in the model selection. It is obtained by taking the average of the likelihood over the prior for a particular model of choice, which can be expressed as

\[p(D|M)=\int d\theta p(D|\theta,M)p(\theta|M) \tag{12}\]

The evidence of one model (\(M_{0}\)) over the other (\(M_{1}\)) is quantified using the Bayes factor (\(B_{01}\)), which is defined as the ratio of the Bayesian evidences of the models, expressed as

\[B_{01}\equiv\frac{p(D|M_{0})}{p(D|M_{1})}. \tag{13}\]

The empirical scale for quantifying the strength of evidence is called Jeffrey's scale, presented in Tab. 1[51]. We also use information criteria, which are frequently used in cosmology for model selection, such as the Akaike Information Criterion (AIC)[56, 57, 55] and the Bayesian Information Criterion (BIC)[58, 59]. The Akaike Information Criterion (AIC), which is essentially a frequentist criterion that includes a penalty term equal to twice the number of parameters present in the model (\(k\)), is defined as

\[AIC\equiv-2\ln\mathcal{L}_{max}+2k, \tag{14}\]

where \(\mathcal{L}_{max}\equiv p(D|\theta_{max},M)\) is the maximum likelihood value. The Bayesian Information Criterion (BIC), which is also known as the Schwarz Information Criterion, follows from a Gaussian approximation to the Bayesian evidence in the limit of large sample size and is defined as

\[BIC\equiv-2\ln\mathcal{L}_{max}+k\ln N, \tag{15}\]

where \(N\) is the number of data points. The model that minimizes AIC and BIC is considered the best model. Initially, we are interested in seeing how the AIC and BIC change according to the change in base (\(b\)) of the logarithmic function presented in Eq. (2). We consider the range of \(\log_{10}b\) between \(-10\) and \(+10\). The variation of AIC and BIC with respect to \(\log_{10}b\) is presented in Figs. 2 and 3, respectively.
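The following fragment sketches this pipeline: it minimizes the \(\chi^{2}\) of Eq. (10) at fixed \(\log_{10}b\) and evaluates Eqs. (14)-(15) with \(\ln\mathcal{L}_{max}=-\chi^{2}_{min}/2\). The three data points are placeholders standing in for the 51-point OHD compilation; everything here is an illustrative sketch, not the paper's actual analysis code.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder data (z, H, sigma) standing in for the OHD compilation.
z_obs = np.array([0.07, 0.48, 1.30])
H_obs = np.array([69.0, 97.0, 168.0])
sigma = np.array([19.6, 62.0, 17.0])

def hubble(z, log10b, H0, om0):
    """Eq. (7) with Omega_D0 = 1 - Omega_m0 (flatness)."""
    u = np.log(1.0 + z) / (np.log(10.0) * log10b)  # log_b(1+z)
    return H0 * np.sqrt(om0 * (1.0 + z) ** 3 + (1.0 - om0) * (1.0 - np.tanh(u)))

def chi2(theta, log10b):
    """Eq. (10) for parameters theta = (H0, Omega_m0)."""
    H0, om0 = theta
    return np.sum(((H_obs - hubble(z_obs, log10b, H0, om0)) / sigma) ** 2)

k, N = 2, len(z_obs)  # number of free parameters and of data points
for log10b in (-1.0, -0.7, -0.3, 0.3, 0.7, 1.0):  # scan the base, as in Figs. 2-3
    fit = minimize(chi2, x0=[70.0, 0.3], args=(log10b,), method="Nelder-Mead")
    aic = fit.fun + 2 * k            # Eq. (14), since -2 ln L_max = chi2_min
    bic = fit.fun + k * np.log(N)    # Eq. (15)
    print(f"log10(b)={log10b:+.1f}  chi2={fit.fun:.2f}  AIC={aic:.2f}  BIC={bic:.2f}")
```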
It is evident from the figures that there is a series of bPEDE models that gives a better fit to all the data combinations compared to the PEDE model (\(\log_{10}b=1\)). The model that gives minimum AIC and BIC is expected to be in the range \(-1<\log_{10}b<0\) for Data1 and Data2, while \(0<\log_{10}b<1\) for Data3. We computed the AIC and BIC for the bPEDE model with base \(-1<\log_{10}b<0\) for Data1 and Data2 and \(0<\log_{10}b<1\) for Data3, and found that the best-fit model corresponds to \(\log_{10}b=-0.7\), \(-0.6\) and \(0.7\) for Data1, Data2 and Data3, respectively. A comparison between the \(\chi^{2}_{min}\), AIC, BIC and the model parameters of the bPEDE model, PEDE model and the standard \(\Lambda\)CDM model is presented in Tab. 2. Also, the 1-D and 2-D posterior distributions of the model parameters are presented in Fig. 1.

\begin{table} \begin{tabular}{|c|c|c|} \hline \(|\ln B_{01}|\) & Probability & Strength of evidence \\ \hline \(<1.0\) & \(<0.750\) & Inconclusive \\ \hline 1.0 & 0.750 & Weak evidence \\ \hline 2.5 & 0.923 & Moderate evidence \\ \hline 5.0 & 0.993 & Strong evidence \\ \hline \end{tabular} \end{table} Table 1: Jeffrey’s scale

Figure 1: The 1-D and 2-D marginal likelihood of the model parameters of the bPEDE model for the different OHD data combinations.

Figure 2: Variation of AIC against \(\log_{10}b\) for bPEDE models for the different OHD data combinations. The star (blue) represents the AIC corresponding to the PEDE model.

Figure 3: Variation of BIC against \(\log_{10}b\) for the bPEDE model for the different OHD data combinations. The star (blue) represents the BIC corresponding to the PEDE model.

Tab. 2 shows that the bPEDE model gives lower AIC and BIC values than the PEDE and the \(\Lambda\)CDM models. It indicates that the observational Hubble data prefer the bPEDE model over the PEDE and \(\Lambda\)CDM models. More interestingly, the bPEDE model that best fits these data combinations predicts the value of \(H_{0}\) close to the value obtained from CMBR data. None of the best-fit bPEDE models prefers a value of \(H_{0}\) close to the one obtained by the SHOES collaboration. The PEDE model gives values of \(H_{0}\) close to the value obtained by the SHOES collaboration for all the data combinations. However, it should be noted that it is possible to construct a model having PEDE-like behaviour that gives a better fit than the PEDE model and that gives a value of \(H_{0}\) close to the CMBR predicted value. The \(\Lambda\)CDM model gives a better fit to Data1 and Data2 than the PEDE model, and it gives a value of \(H_{0}\) in between the CMBR predicted value and the one obtained by the SHOES collaboration. The best-fit bPEDE, PEDE and \(\Lambda\)CDM models give almost similar fits to the OHD (DA) data, and suggest the CMBR predicted value of \(H_{0}\) for the OHD (DA) dataset. Further, we have computed the Bayesian evidence of the best-fit bPEDE, PEDE and \(\Lambda\)CDM models using Eq. (12). The Bayes factor that quantifies the relative evidence is obtained using Eq. (13). The Bayes factors obtained for the bPEDE model against the PEDE model and the \(\Lambda\)CDM model are presented in Tab. 3. It shows that the bPEDE model is preferred over the PEDE model for the OHD (DA) + OHD (Homogenous) and OHD (DA) + OHD (Non-homogenous) data combinations with \(>75\%\) and \(>92.3\%\) probabilities, while it is preferred over the \(\Lambda\)CDM model with \(\sim 75\%\) probability. All three models give almost the same fit to the OHD (DA) data set.
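For completeness, here is a sketch of how the Bayes factors of Tab. 3 can be approximated from Eqs. (12)-(13): with a flat prior over a common \((H_{0},\Omega_{m_{0}})\) box, the evidence is just the average of the likelihood over the box, and the prior normalization cancels in the ratio. The data arrays are again placeholders, not the actual OHD compilation.

```python
import numpy as np

# Placeholder data, as in the previous sketch.
z = np.array([0.07, 0.48, 1.30])
H = np.array([69.0, 97.0, 168.0])
s = np.array([19.6, 62.0, 17.0])

def chi2(H0, om0, log10b):
    u = np.log(1.0 + z) / (np.log(10.0) * log10b)          # log_b(1+z)
    model = H0 * np.sqrt(om0 * (1.0 + z) ** 3 + (1.0 - om0) * (1.0 - np.tanh(u)))
    return np.sum(((H - model) / s) ** 2)

def evidence(log10b, n=200):
    """Eq. (12) with a flat prior on a fixed (H0, Omega_m0) box."""
    H0s = np.linspace(50.0, 90.0, n)
    oms = np.linspace(0.1, 0.5, n)
    like = [[np.exp(-0.5 * chi2(h, om, log10b)) for om in oms] for h in H0s]
    return np.mean(like)

# Eq. (13): Bayes factor of a bPEDE base against the PEDE base (log10 b = 1).
print("ln B =", np.log(evidence(-0.7) / evidence(1.0)))
```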
We can conclude that the improved data set prefers the bPEDE model over the PEDE model; the whole analysis shows moderate evidence for the bPEDE model over the PEDE model, while the evidence against the \(\Lambda\)CDM model is not strong. Recently, Almada et al.[16] did a comparative study of the PEDE model and the generalized version of the PEDE model, the Generalized Emergent Dark Energy (GEDE) model, in the light of the observational Hubble data set presented in this work. Our analysis based on the AIC and BIC criteria shows that the bPEDE model is preferred over the GEDE model presented in ref.[16]. The present analysis based on the observational Hubble data shows that there exists a possibility of the bPEDE model having a specific \(b\) value that fits better than the PEDE, GEDE and \(\Lambda\)CDM models. Also, this best-fit model predicts a value of \(H_{0}\) very close to the \(H_{0}\) value obtained from CMBR assuming the standard \(\Lambda\)CDM model.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Model & Data & \(\log_{10}b\) & \(\chi^{2}_{min}\) & AIC & BIC & \(H_{0}\) (km s\({}^{-1}\) Mpc\({}^{-1}\)) & \(\Omega_{m_{0}}\) \\ \hline \multirow{3}{*}{bPEDE} & Data1 & \(-0.7\) & \(20.90\) & \(-41.47\) & \(-37.61\) & \(67.13\pm 1.42\) & \(0.2728\pm 0.0025\) \\ \cline{2-8} & Data2 & \(-0.6\) & \(25.16\) & \(-32.02\) & \(-28.16\) & \(65.97\pm 1.01\) & \(0.2800\pm 0.0196\) \\ \cline{2-8} & Data3 & \(0.7\) & \(14.40\) & \(-19.76\) & \(-16.89\) & \(69.92\pm 3.41\) & \(0.3340\pm 0.0572\) \\ \hline \multirow{3}{*}{PEDE} & Data1 & \(1\) & \(24.48\) & \(-33.41\) & \(-29.55\) & \(73.81\pm 1.83\) & \(0.2541\pm 0.0229\) \\ \cline{2-8} & Data2 & \(1\) & \(32.06\) & \(-19.67\) & \(-15.80\) & \(73.92\pm 1.37\) & \(0.2496\pm 0.0170\) \\ \cline{2-8} & Data3 & \(1\) & \(14.41\) & \(-19.74\) & \(-16.87\) & \(69.24\pm 3.32\) & \(0.3330\pm 0.0586\) \\ \hline \multirow{3}{*}{\(\Lambda\)CDM} & Data1 & \(\infty\) & \(22.00\) & \(-38.89\) & \(-35.03\) & \(70.88\pm 1.65\) & \(0.2603\pm 0.0240\) \\ \cline{2-8} & Data2 & \(\infty\) & \(27.45\) & \(-27.58\) & \(-23.72\) & \(70.65\pm 1.22\) & \(0.2589\pm 0.0181\) \\ \cline{2-8} & Data3 & \(\infty\) & \(14.52\) & \(-19.50\) & \(-16.04\) & \(67.76\pm 3.09\) & \(0.3271\pm 0.0609\) \\ \hline \end{tabular} \end{table} Table 2: Comparison between \(\chi^{2}\), AIC, BIC and the model parameters predicted by the bPEDE, PEDE and \(\Lambda\)CDM models.

## Cosmological parameters

From the analysis presented in the last section, we have seen that the bPEDE model fits the observational Hubble data better than the standard \(\Lambda\)CDM and PEDE models. In this section, we present a comparative study of the evolution of the various cosmographic parameters for the bPEDE, PEDE and \(\Lambda\)CDM models. The rate of expansion of the universe is encoded in the Hubble parameter, which is given by Eq. (7). The evolution of the Hubble parameter against redshift for the best-fit bPEDE, PEDE and \(\Lambda\)CDM models is shown in Fig. 4. The Hubble parameters of the best-fit bPEDE models with \(\log_{10}(b)=-0.7\) and \(-0.6\) show similar behaviour, being ever-decreasing functions of the scale factor. Interestingly, the Hubble parameter tends to zero in the asymptotic future when \(z\rightarrow-1\), or equivalently \(a\rightarrow\infty\), indicating the possibility of a static universe in the asymptotic future.
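This limiting behaviour is easy to verify numerically; the sketch below (ours) evaluates Eq. (7) near \(z=-1\) with the best-fit values of Tab. 2.

```python
import numpy as np

def hubble(z, log10b, H0, om0):
    """Eq. (7) with Omega_D0 = 1 - Omega_m0 (flatness)."""
    u = np.log(1.0 + z) / (np.log(10.0) * log10b)  # log_b(1+z)
    return H0 * np.sqrt(om0 * (1.0 + z) ** 3 + (1.0 - om0) * (1.0 - np.tanh(u)))

for z in (0.0, -0.9, -0.999, -0.999999):
    # Quintessence-type best fit (Data1): H decreases towards zero.
    # PEDE-like base: H approaches the constant H0*sqrt(2*(1 - om0)).
    print(f"z={z:+.6f}  bPEDE: {hubble(z, -0.7, 67.13, 0.2728):8.3f}"
          f"  PEDE: {hubble(z, 1.0, 73.81, 0.2541):8.3f}")
```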
In the asymptotic future, as \(z\) approaches \(-1\), the Hubble parameter stabilizes to a constant value within the \(\Lambda\)CDM model, manifesting a de Sitter-type evolution of the universe. In the context of the PEDE and bPEDE model with \(\log_{10}(b)=0.7\), the Hubble parameter shows a decreasing trend in the past, followed by an increase in the future, finally converging to a constant value as \(z\rightarrow-1\). Notably, the asymptotic value in these models marginally exceeds the asymptotic Hubble parameter value of the \(\Lambda\)CDM model.

\begin{table} \begin{tabular}{|c|c|c|} \hline Data & Bayes factor (\(B_{ij}\)) & \(|\ln B_{ij}|\) \\ \hline \hline OHD (DA) + OHD (Homogenous) & \(B_{01}=6.17\) & \(1.82\) \\ \cline{2-3} & \(B_{02}=1.75\) & \(0.55\) \\ \hline OHD (DA) + OHD (Non-homogenous) & \(B_{01}=32.63\) & \(3.48\) \\ \cline{2-3} & \(B_{02}=3.12\) & \(1.13\) \\ \hline OHD (DA) & \(B_{01}=0.99\) & \(0.01\) \\ \cline{2-3} & \(B_{02}=1.02\) & \(0.01\) \\ \hline \end{tabular} \end{table} Table 3: Bayes factors of the bPEDE model against the PEDE model (\(B_{01}\)) and the \(\Lambda\)CDM model (\(B_{02}\)).

Figure 4: Evolution of Hubble parameter (\(H\)) against redshift (\(z\)) is plotted for the best-fit bPEDE, PEDE and \(\Lambda\)CDM models.

The expression to compute the present age of the universe that follows from the definition of the Hubble parameter is

\[t_{0}-t_{B}=\int_{0}^{1}\frac{1}{aH(a)}da, \tag{16}\]

where \(t_{0}\) is the present age of the universe and \(t_{B}\) is the time of the Big Bang, which is set to zero. The ages of the universe computed for the bPEDE, PEDE and \(\Lambda\)CDM models are presented in Tab. 4. The universe's age computed for the bPEDE models with \(\log_{10}(b)=-0.7,-0.6\) is slightly higher than the age computed with the \(\Lambda\)CDM model. In comparison, the bPEDE model with \(\log_{10}(b)=0.7\) and the PEDE model predict a slightly lower age than the standard model, with the PEDE prediction closer to the \(\Lambda\)CDM value. The evolution of the matter density has the same form for all the models, i.e. \(\Omega_{m}=\Omega_{m_{0}}(1+z)^{3}\). The matter density decreases with the increase in scale factor and approaches zero in the asymptotic future \(a\rightarrow\infty\).

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Model & \(t_{0}\) (Gyr) & \(\omega_{D_{0}}\) & \(q_{0}\) & \(z_{T}\) \\ \hline bPEDE (\(\log_{10}(b)=-0.7\)) & 13.96 & -0.79 & -0.36 & 0.79 \\ \hline bPEDE (\(\log_{10}(b)=-0.6\)) & 14.05 & -0.76 & -0.32 & 0.78 \\ \hline bPEDE (\(\log_{10}(b)=0.7\)) & 13.44 & -1.2 & -0.70 & 0.59 \\ \hline PEDE & 13.71 & -1.1 & -0.78 & 0.78 \\ \hline \(\Lambda\)CDM & 13.82 & -1 & -0.60 & 0.78 \\ \hline \end{tabular} \end{table} Table 4: Age (\(t_{0}\), in Gyr), present value of the equation of state parameter (\(\omega_{D_{0}}\)), present value of the deceleration parameter (\(q_{0}\)) and transition redshift (\(z_{T}\)) of the universe computed for the best-fit bPEDE, PEDE and \(\Lambda\)CDM models.

Figure 5: Evolution of age of the universe against redshift (z) is plotted for the best-fit bPEDE, PEDE and \(\Lambda\)CDM models. The present age of the universe is the age at \(z=0\).

The evolution of the dark energy density normalized over the present value of the critical density (\(\rho_{c_{0}}\)) for all the models under consideration is shown in Fig. 6. The dark energy density has had no effective presence in the past for the PEDE and bPEDE (\(\log_{10}(b)=0.7\)) models, and the densities increase with an increase in scale factor and asymptotically reach a value close to \(2\Omega_{D_{0}}\).
The evolution of the dark energy density normalized over the present value of the critical density (\(\rho_{c_{0}}\)) for all the models under consideration is shown in Fig. 6. The dark energy density has had no effective presence in the past for the PEDE and bPEDE (\(\log_{10}(b)=0.7\)) models, and the densities increase with an increase in scale factor and asymptotically reach a value close to \(2\Omega_{D_{0}}\). On the other hand, the dark energy densities within the bPEDE models (\(\log_{10}(b)=-0.7,-0.6\)) show distinct behaviour, where the dark energy density has the value \(2\Omega_{D_{0}}\) in the past and asymptotically tends to zero in the far future. However, the dark energy density dominates over the matter density at present for all the models, indicating that all the models successfully explain the late-phase accelerated expansion of the universe.

Figure 6: Evolution of the dark energy density normalized over the present value of the critical density against redshift (\(z\)) is plotted for the best-fit bPEDE, PEDE and \(\Lambda\)CDM models.

The evolution of the equation of state parameter of the dark energy with redshift for the best-fit bPEDE models, the PEDE model and the \(\Lambda\)CDM model is shown in Fig. 7. From Fig. 7, it is evident that the value of \(\omega_{D}\) lies between \(-1/3\) and \(-1\) throughout the evolution of the universe for the bPEDE (\(\log_{10}(b)=-0.7,-0.6\)) models, resembling the quintessence nature of dark energy, whereas \(\omega_{D}<-1\) throughout the evolution of the universe for the bPEDE (\(\log_{10}(b)=0.7\)) and PEDE models, resembling the phantom nature of dark energy. The \(\omega_{D}\) tends to \(-1\) in the far past for the quintessence-type bPEDE models, while it tends to \(-1\) in the asymptotic future for the phantom-type bPEDE and PEDE models. The present values of the equation of state of dark energy, \(\omega_{D_{0}}\), for all the models are given in Tab. 4. The present \(\omega_{D}\) is slightly higher than \(-1\) for the quintessence-type bPEDE models, while it is slightly less than \(-1\) for the phantom-type bPEDE and PEDE models.

The evolution of the deceleration parameter (\(q\)) of the universe with respect to redshift (\(z\)) for all the models is shown in Fig. 8. The present values of the deceleration parameter and the transition redshift (\(z_{T}\)) for all the models are summarized in Tab. 4. From Fig. 8, it is clear that all the models under consideration predict the decelerating to accelerating transition, and the present value of the deceleration parameter is negative for all the models, showing that the present universe is undergoing an accelerating expansion. The decelerating to accelerating phase transition occurred at a redshift \(z_{T}\sim 0.78\) for the quintessence-type bPEDE, PEDE and \(\Lambda\)CDM models, while \(z_{T}\) is slightly lower for the phantom-type bPEDE model.

The Statefinder, originally proposed by Sahni et al. [60], is a geometric diagnostic tool that distinguishes between dark energy models. In statefinder analysis, the geometric pair of the jerk parameter (\(r\)) and the snap parameter (\(s\)) characterizes the dark energy models. The jerk parameter is defined as

\[r=\frac{1}{aH^{3}}\frac{d^{3}a}{dt^{3}},\hskip 28.452756pts=\frac{r-1}{3(q-1/2)}, \tag{17}\]

where \(a\) is the scale factor, \(H\) is the Hubble parameter and \(q\) is the deceleration parameter. It is convenient to express \(r\) and \(s\) in terms of derivatives with respect to the parameter \(x\), where \(x=\ln a\). Then \(r\) and \(s\) can be expressed as

\[r=\frac{1}{2h^{2}}\frac{d^{2}h^{2}}{dx^{2}}+\frac{3}{2h^{2}}\frac{dh^{2}}{dx}+1, \tag{18}\]

\[s=-\left(\frac{\frac{1}{2h^{2}}\frac{d^{2}h^{2}}{dx^{2}}+\frac{3}{2h^{2}}\frac{dh^{2}}{dx}}{\frac{3}{2h^{2}}\frac{dh^{2}}{dx}+\frac{9}{2}}\right). \tag{19}\]
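Since Eqs. (18) and (19) involve only \(h^{2}\) and its first two derivatives with respect to \(x=\ln a\), the statefinder pair can be checked numerically before any analytic substitution. A minimal sketch using central finite differences, again reusing the hypothetical `hubble_bpede()` defined earlier:

```python
import numpy as np

def statefinder(h2_of_x, x, dx=1e-4):
    """Numerical r and s from Eqs. (18)-(19), given h^2(x) with x = ln a.

    Central finite differences stand in for the analytic derivatives.
    """
    h2 = h2_of_x(x)
    d1 = (h2_of_x(x + dx) - h2_of_x(x - dx)) / (2 * dx)
    d2 = (h2_of_x(x + dx) - 2 * h2 + h2_of_x(x - dx)) / dx**2
    num = d2 / (2 * h2) + 3 * d1 / (2 * h2)
    r = num + 1
    s = -num / (3 * d1 / (2 * h2) + 4.5)
    return r, s

# h^2(x) for the best-fit bPEDE model, with z = exp(-x) - 1:
h2 = lambda x: (hubble_bpede(np.exp(-x) - 1.0) / 67.13) ** 2
print(statefinder(h2, x=0.0))  # (r0, s0) at the present epoch
```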
Substituting the expression for \(h=H/H_{0}\) from Eq. (7) in Eqs. (18) and (19), we obtain the evolution of the \(r\) and \(s\) parameters for the bPEDE models as

\[r=\frac{\Omega_{D_{0}}\,\text{sech}^{2}(\log_{b}(1+z))}{(\ln b)^{2}h^{2}}\left[\tanh(\log_{b}(1+z))+\frac{3}{2}\ln b\right]+1, \tag{20}\]

\[s=-\left[\frac{\Omega_{D_{0}}\,\text{sech}^{2}(\log_{b}(1+z))\left[\tanh(\log_{b}(1+z))+\frac{3}{2}\ln b\right]}{\frac{3}{2}(\ln b)^{2}\left(-3\Omega_{m_{0}}(1+z)^{3}+\frac{\Omega_{D_{0}}}{\ln b}\,\text{sech}^{2}(\log_{b}(1+z))+3h^{2}\right)}\right]. \tag{21}\]

Figure 7: Evolution of the equation of state parameter of dark energy against redshift (\(z\)) is plotted for the best-fit bPEDE models, the PEDE model and the \(\Lambda\)CDM model. The present \(\omega_{D}\) corresponds to the value at \(z=0\).

Figure 8: Evolution of the deceleration parameter of the universe against redshift (\(z\)) is plotted for the best-fit bPEDE, PEDE and \(\Lambda\)CDM models.

Figure 9: The r-s trajectory is plotted for the best-fit bPEDE, PEDE and \(\Lambda\)CDM models. The point \(r=1\), \(s=0\) (blue star) is a fixed point for the standard \(\Lambda\)CDM model.

We also obtain the \(r\) and \(s\) evolution of the PEDE and \(\Lambda\)CDM models using the respective Hubble parameter evolutions. The r-s trajectories of all the studied models are presented in Fig. 9. The point \((r,s)=(1,0)\) is a fixed point for the \(\Lambda\)CDM model. We find \(r>1\) and \(s<0\) for the PEDE and bPEDE (\(\log_{10}(b)=0.7\)) models, depicting the phantom nature of the dark energy density; the \(r\) and \(s\) values of these models reach the \(\Lambda\)CDM fixed point in the far future. In contrast, \(r<1\) and \(s>0\) for the bPEDE (\(\log_{10}(b)=-0.7,-0.6\)) models, depicting the quintessence nature of dark energy; the \(r\) and \(s\) values of these models approach the \(\Lambda\)CDM fixed point in the far past. In conclusion, the models that better fit the observational data are of the quintessence type. Interestingly, the best-fit bPEDE models predict a present value of the Hubble parameter that closely aligns with the CMBR data while being inconsistent with the local measurements.

## Conclusion

Motivated by the investigation of the Hubble tension problem within the framework of Emergent Dark Energy (EDE) models as outlined in refs. [38, 39, 16, 54], we explored the possibility of Phenomenological Emergent Dark Energy (PEDE)-type models in light of observational Hubble data. Our particular interest lay in identifying the PEDE-type models that yield a better fit to the observational Hubble data as compared to the PEDE and \(\Lambda\)CDM models, and also in investigating whether the resulting \(H_{0}\) value aligns with the CMBR measurement or with the value obtained by the local measurement. We designated this model as the bPEDE model. In the bPEDE model, the matter and dark energy are considered separately conserved. Consequently, the matter density adheres to the same evolutionary behaviour observed in the \(\Lambda\)CDM model. However, the dark energy density is assumed to follow a specific form reminiscent of the PEDE model, \(\Omega_{D}\propto\tanh(\log_{b}(1+z))\). In the present analysis, we considered three distinct datasets: OHD (DA), OHD (DA) + OHD (Homogeneous) and OHD (DA) + OHD (Non-homogeneous).
Employing a comprehensive analysis based on the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian evidence shows that the bPEDE models parameterized with \(\log_{10}(b)=-0.7,-0.6\) are preferred over both the PEDE model and the \(\Lambda\)CDM model, specifically for the OHD (DA) + OHD (Homogeneous) and OHD (DA) + OHD (Non-homogeneous) dataset combinations, respectively. Intriguingly, the OHD (DA) data displayed no clear preference for either model. Notably, the best-fit bPEDE models are preferred over the PEDE model with a \(\Delta\)AIC or \(\Delta\)BIC of approximately \(-8.1\) for the OHD (DA) + OHD (Homogeneous) dataset and even more prominently of approximately \(-12.4\) for the OHD (DA) + OHD (Non-homogeneous) dataset. The same trend is also encoded in the Bayesian evidence. Interestingly, the best-fit bPEDE models predict a value of \(H_{0}\) that closely aligns with the value obtained from the CMBR measurement. Indeed, it is essential to compare the predictions of the bPEDE model with the Cosmic Microwave Background Radiation (CMBR) data to determine whether the model can solve the Hubble tension problem. Furthermore, a comprehensive analysis incorporating diverse observational probes such as Type Ia supernovae, Baryon Acoustic Oscillations (BAO) and the Large-scale structure (LSS) in the distribution of galaxies is essential to ascertain whether these datasets collectively converge towards a value of \(H_{0}\) in agreement with the CMBR-predicted value. If they do, it would strongly suggest reconsidering the systematic uncertainties associated with the local measurement, which will be the prime focus of future work.

Further, we analyzed the evolution of the cosmological parameters. The best-fit bPEDE models (\(\log_{10}(b)=-0.7,-0.6\)) show decreasing Hubble parameters, approaching zero as \(z\rightarrow-1\), or equivalently \(a\rightarrow\infty\), hinting at a possible static future universe. The value of \(\omega_{D}\) remains between \(-1/3\) and \(-1\) for both bPEDE models (\(\log_{10}(b)=-0.7,-0.6\)), akin to quintessence dark energy. Conversely, for the bPEDE (\(\log_{10}(b)=0.7\)) and PEDE models, \(\omega_{D}\) consistently stays below \(-1\), resembling phantom dark energy. For the phantom dark energy models considered here, dark energy has had a negligible presence in the past; as the scale factor increases, the density approaches a value near \(2\Omega_{D_{0}}\). Conversely, the quintessence-type bPEDE models display a distinct pattern, with the density starting at \(2\Omega_{D_{0}}\) in the past and asymptotically approaching zero in the far future. The age of the universe computed for the quintessence-type bPEDE models is slightly greater than that computed using the \(\Lambda\)CDM model. Conversely, the phantom-type bPEDE model and the PEDE model predict a slightly younger age than the \(\Lambda\)CDM model, with the age predicted by the PEDE model being closer to the \(\Lambda\)CDM prediction. All models analyzed here predict a transition from decelerated to accelerated expansion, with the present value of the deceleration parameter being negative, confirming the universe's ongoing accelerating expansion. This transition occurred at a redshift \(z_{T}\sim 0.78\) for the quintessence-type bPEDE models, the PEDE model and the \(\Lambda\)CDM model. However, for the phantom-type bPEDE model, the transition occurred at a slightly lower value.
The statefinder analysis revealed that \(r>1\) and \(s<0\) for the PEDE and bPEDE (\(\log_{10}(b)=0.7\)) models, depicting the phantom nature of the dark energy density, whereas \(r<1\) and \(s>0\) for the bPEDE models (\(\log_{10}(b)=-0.7,-0.6\)), depicting the quintessence nature of dark energy. In summary, our analysis based on observational Hubble data indicates a substantial preference for the quintessence-type bPEDE model over both the PEDE model and the standard \(\Lambda\)CDM model. Remarkably, the best-fit quintessence-type bPEDE model predicts a value of \(H_{0}\) that closely aligns with the value obtained from the CMBR measurements assuming the \(\Lambda\)CDM model. Furthermore, in contrast to the \(\Lambda\)CDM and PEDE models, the quintessence-type bPEDE models predict a static future for the universe.
2305.01544
Analytical Fitting of Gamma-ray Photopeaks in Germanium Cross Strip Detectors
In an ideal germanium detector, fully-absorbed monoenergetic gamma-rays will appear in the measured spectrum as a narrow peak, broadened into a Gaussian of width determined only by the statistical properties of charge cloud generation and the electronic noise of the readout electronics. Multielectrode detectors complicate this picture. Broadening of the charge clouds as they drift through the detector will lead to charge sharing between neighboring electrodes and, inevitably, low-energy tails on the photopeak spectra. We simulate charge sharing in our germanium cross strip detectors in order to reproduce the low-energy tails due to charge sharing. Our goal is to utilize these simulated spectra to develop an analytical fit (shape function) for the spectral lines that provides a robust and high-quality fit to the spectral profile, reliably reproduces the interaction energy, noise width, and the number of counts in both the true photopeak and the low-energy tail, and minimizes the number of additional parameters. Accurate modeling of the detailed line profiles is crucial for both calibration of the detectors as well as scientific interpretation of measured spectra.
Steven E. Boggs, Sean N. Pike
2023-05-02T15:50:12Z
http://arxiv.org/abs/2305.01544v2
# Analytical Fitting of \(\gamma\)-ray Photopeaks in Germanium Cross Strip Detectors ###### Abstract In an ideal germanium detector, fully-absorbed monoenergetic \(\gamma\)-rays will appear in the measured spectrum as a narrow peak, broadened into a Gaussian of width determined only by the statistical properties of charge cloud generation and the electronic noise of the readout electronics. Multielectrode detectors complicate this picture. Broadening of the charge clouds as they drift through the detector will lead to charge sharing between neighboring electrodes and, inevitably, low-energy tails on the photopeak spectra. We simulate charge sharing in our germanium cross strip detectors in order to reproduce the low-energy tails due to charge sharing. Our goal is to utilize these simulated spectra to develop an analytical fit (shape function) for the spectral lines that provides a robust and high-quality fit to the spectral profile, reliably reproduces the interaction energy, noise width, and the number of counts in both the true photopeak and the low-energy tail, and minimizes the number of additional parameters. Accurate modeling of the detailed line profiles is crucial for both calibration of the detectors as well as scientific interpretation of measured spectra. keywords: Germanium semiconductor detectors, Charge sharing, \(\gamma\)-ray spectroscopy, \(\gamma\)-ray line profiles + ## 1 Introduction The Compton Spectrometer and Imager (COSI) is a soft \(\gamma\)-ray survey telescope (0.2-5 MeV) designed to probe the origins of Galactic positrons, reveal sites of ongoing element formation in the Galaxy, use \(\gamma\)-ray polarimetry to gain insight into extreme environments, and explore the physics of multi-messenger events [1; 2; 3]. The COSI detectors are custom, large-volume (54 cm\({}^{2}\) area, 1.5 cm thick) cross-strip germanium detectors utilizing amorphous contact technologies [4]. Cross-strip electrodes on the opposite faces, combined with signal timing, provide full 3D position resolution for interactions within the detector. In this work we are focused on our original 2.0-mm strip pitch germanium detectors that flew on the COSI balloon payload [1; 2]. When a \(\gamma\)-ray photon interacts in the germanium, either by photoabsorption or Compton scattering, a fast recoil electron is produced which knocks more electrons from the valence band to the conduction band, leaving holes behind. The number of electron-hole (e-h) pairs is directly proportional to the energy deposited, 2.96 eV per e-h pair in germanium. In an applied electric field (+1500 V bias) these charge clouds will separate and drift in opposite directions, electrons toward the cathode and holes toward the anode. The \(\gamma\)-ray interaction energy is measured on both the cathode (electron signal) and the anode (hole signal) strips independently by measuring the integrated charge induced on each electrode by the charge clouds drifting towards their respective electrodes. As these charge clouds drift in the detector, their charge density profiles broaden due to both thermal diffusion and mutual electrostatic repulsion. The finite size of the charge clouds will result in some interactions having their charge collected on multiple electrodes. Such interactions lead to either charge sharing between strips, or low-energy tailing on spectral lines if the charge shared on the neighboring strip falls below the detection threshold for that electrode. 
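As a quick illustration of the scale involved, the mean number of e-h pairs follows directly from the 2.96 eV pair-creation energy quoted above; the helper name is ours:

```python
PAIR_ENERGY_EV = 2.96  # eV per electron-hole pair in germanium

def n_pairs(E_keV):
    """Mean number of e-h pairs for an energy deposit of E_keV."""
    return E_keV * 1e3 / PAIR_ENERGY_EV

print(f"{n_pairs(661.66):.3e}")  # ~2.2e5 pairs for a 137Cs photopeak
```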
Optimizing the spectral performance of these high-resolution germanium detectors requires detailed knowledge of the photopeak and low-energy tail profiles. Inadequate modeling of the shapes of the spectral lines can affect the overall spectral calibration of the instrument, as well as scientific analysis of observed spectral features. We present a novel shape function that reliably reproduces both the line profiles and the underlying physical parameters for our spectra created with charge sharing simulations, with minimal additional fit parameters. In Section 2, we describe how we create simulated photopeak and low-energy tail spectra utilizing a novel charge density profile model. In Section 3 we review how germanium spectral lines with low-energy tails have been fit in the past. Section 4 presents the shape function utilized in this work. In Section 5 we show how constraining some of the shape function parameters leads to more robust estimates of the underlying physical parameters. Section 6 extends these fits to higher and lower interaction energies. In Section 7 we demonstrate how the shape function varies for line profiles that include the effects of extended initial charge clouds (created by the recoil electron) and present fit parameters accounting for effects of the recoil electrons. We conclude with a discussion of applications and future directions.

## 2 Model Spectra

We utilize the analytical charge cloud profiles derived in Boggs [5] to simulate charge sharing within our germanium cross strip detectors. The analytical approximations in that paper allow us to model the 1-D projected charge density profiles for electron and hole clouds across the collection electrodes as a function of their drift time (\(\tau\)) in the detector, which maps directly to interaction depth within the detector. These charge profiles include the effects of thermal diffusion and mutual electrostatic repulsion in broadening the charge clouds. In this work we initially assume that the recoil electron deposits all of its energy at a single point, but we include the effects of finite initial charge cloud distributions in Section 7. In order to turn these charge density profiles for fixed drift times into spectra, we sampled \(10^{5}\) initial interaction locations across the primary strip electrode for each drift time (Fig. 1), numerically integrating the charge density profile to determine the charge (energy) deposited on the primary strip (\(E_{1}\)) and neighboring strip (\(E_{2}\)).

Figure 1: Diagram of the process utilized to create the simulated spectra, including the true photopeak, low-energy tail, and “measured” total. For a given drift time, \(\tau\), we sampled \(10^{5}\) initial interaction positions across the primary strip. The resulting charge cloud profiles, \(\lambda(x,\tau)\), were numerically integrated to determine the charge (energy), \(E_{1}\), collected on the primary strip as well as that collected on the neighbor, \(E_{2}\).

Figure 2: Example model spectra for 661.66 keV photopeak interactions, showing the individual true photopeak and low-energy tail components of the model as well as the combined full spectra. (Top Left) Electron signals, \(\tau=50\,ns\) drift time, (Top Right) hole signals, \(\tau=50\,ns\), (Bottom Left) electron signals, \(\tau=250\,ns\), (Bottom Right) hole signals, \(\tau=250\,ns\). Events with \(\tau=50\,ns\) occur near the signal collection electrode, while those with \(\tau=250\,ns\) occur far from the collection electrode.
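The following is a minimal sketch of this sampling step. The true projected profiles \(\lambda(x,\tau)\) of Boggs [5] are not reproduced here, so a Gaussian of assumed width stands in for them, and the spill onto the two neighbors is lumped into a single shared energy \(E_{2}\); the width `sigma_cloud`, the function name, and the random seed are all illustrative choices of ours.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

def simulate_split(E0=661.66, pitch=2.0, sigma_cloud=0.3, n=100_000):
    """Monte Carlo of the charge split between the primary strip and its
    neighbors, following the procedure sketched in Fig. 1.

    A 1-D Gaussian of width sigma_cloud (mm) stands in for the projected
    charge profile lambda(x, tau); interaction positions are uniform
    across the 2.0-mm primary strip.
    """
    x0 = rng.uniform(-pitch / 2, pitch / 2, size=n)   # interaction positions
    arg_r = (pitch / 2 - x0) / (np.sqrt(2) * sigma_cloud)
    arg_l = (pitch / 2 + x0) / (np.sqrt(2) * sigma_cloud)
    spill = 0.5 * (1 - erf(arg_r)) + 0.5 * (1 - erf(arg_l))
    E2 = E0 * spill          # charge (energy) collected off the primary strip
    E1 = E0 - E2             # charge (energy) collected on the primary strip
    return E1, E2
```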
Once \(E_{1}\) was determined for each sampled location we added random noise to \(E_{1}\) based on our measured resolution \(\sigma_{m}(E)\), which is given approximately by:

\[\sigma_{m}(E)=[2.17+0.65*\sqrt{E/1000}]/(2.35)\:keV;[E]=keV \tag{1}\]

Each of these simulated interactions was classified into one of three categories based on the charge (energy) deposited on the neighboring strip. Events where \(E_{2}=0\) (no charge sharing) were classified as "true photopeak" events and contribute to the narrow Gaussian photopeak centered at the initial interaction energy, \(E_{0}\). Events where enough charge was shared on the neighboring strip to exceed the trigger threshold of the readout electronics on that strip (\(E_{2}\geq E_{th}\)) were classified as "triggered shared" events. We are focused on analytical fitting of the single-strip photopeak spectra in this work and hence are not considering these triggered shared events any further here. The last classification is for events where charge (energy) was collected on the neighboring strip, but not enough to trigger the strip (\(0<E_{2}<E_{th}\)). These events were classified as "untriggered shared" events and contribute to the low-energy tail on the spectral peak. (The COSI readout electronics that flew on the balloon payload, which we are modeling in this work, were not designed to read out neighboring strips unless the interaction on the neighbor exceeded the trigger threshold.) We then proceed to bin the "measured" energy \(E_{1}\) into one of two spectra, the true photopeak spectrum and the low-energy tail spectrum. Separating these two spectral components through the modeling allows us to know the exact number of counts in the true photopeak (\(N_{p}\)) and the low-energy tail (\(N_{t}\)) separately, as well as study the shape of the low-energy tail independently of the true photopeak. Adding these two spectra together creates our "measured" full spectral line for a given drift time. In Figure 2 we show example electron-signal and hole-signal spectra for two different drift times: \(\tau=50\:ns\), which represents interactions near the collection electrode and hence minimal charge sharing with neighbor strips, and \(\tau=250\:ns\), which represents interactions far from the collection electrode and hence maximum charge sharing with neighbor strips. These spectra demonstrate how the untriggered shared events create the low-energy tails on the photopeaks in our simulated spectra. (For clarity, electron and hole spectra at the same drift time as shown here do not correspond to the same interactions, as longer electron drift times would correspond to shorter hole drift times for the same interaction, and vice versa.)

\begin{table} \begin{tabular}{c c c c c} \hline \multicolumn{5}{c}{Electron signals} \\ \hline \(\tau\) [ns] & \(E_{0}\) [\(keV\)] & \(\sigma\) [\(keV\)] & \(N[fit/sim]\) & \(\chi^{2}_{R}\) \\ \hline 10 & 661.62 & 1.18 & 0.97 & 21.22 \\ 50 & 661.56 & 1.21 & 0.94 & 40.94 \\ 100 & 661.51 & 1.23 & 0.90 & 55.60 \\ 150 & 661.46 & 1.26 & 0.88 & 66.34 \\ 200 & 661.41 & 1.28 & 0.85 & 75.19 \\ 250 & 661.34 & 1.30 & 0.83 & 82.52 \\ \hline \multicolumn{5}{c}{Hole signals} \\ \hline \(\tau\) [ns] & \(E_{0}\) [\(keV\)] & \(\sigma\) [\(keV\)] & \(N[fit/sim]\) & \(\chi^{2}_{R}\) \\ \hline 10 & 661.61 & 1.18 & 0.97 & 20.90 \\ 50 & 661.55 & 1.22 & 0.93 & 43.55 \\ 100 & 661.49 & 1.25 & 0.90 & 59.44 \\ 150 & 661.42 & 1.27 & 0.87 & 71.38 \\ 200 & 661.36 & 1.30 & 0.84 & 80.38 \\ 250 & 661.28 & 1.33 & 0.81 & 88.02 \\ \hline \end{tabular} \end{table} Table 1: Gaussian fit parameters for full spectra. The simulated spectral model assumes \(E_{0}=661.66\,keV\) and \(\sigma=1.15\,keV\). The quality of the fits is poor (\(\chi^{2}_{R}\gg 1\)), even for the shortest drift times corresponding to minimal tailing. At longer drift times, the quality of fit degrades, and the fit values for \(E_{0}\), \(\sigma\), and the number of events in the peak (\(N[fit/sim]\)) become less accurate.
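Continuing the sketch above, the Eq. (1) noise and the three-way classification can be written as follows; using \(\sigma_{m}(E_{0})\) for every event, and a small epsilon in place of an exact \(E_{2}=0\), are simplifications of this sketch.

```python
def classify(E1, E2, E0=661.66, E_th=18.0, eps=1e-6):
    """Eq. (1) noise plus the three-way event classification described
    above; builds on simulate_split(), rng, and np from the previous
    sketch."""
    sigma_m = (2.17 + 0.65 * np.sqrt(E0 / 1000.0)) / 2.35   # Eq. (1), keV
    E1_meas = E1 + rng.normal(0.0, sigma_m, size=E1.shape)
    photopeak = E1_meas[E2 < eps]                    # no charge sharing
    tail      = E1_meas[(E2 >= eps) & (E2 < E_th)]   # untriggered shared
    triggered = E1_meas[E2 >= E_th]                  # triggered shared
    return photopeak, tail, triggered

E1, E2 = simulate_split()
peak, tail, _ = classify(E1, E2)
# Histogramming peak, tail, and their sum reproduces the three curves
# of the "measured" spectra in Fig. 2.
```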
To motivate our need to derive more complicated fitting functions, we have fit our combined full spectra with a simple Gaussian shape function. As is evident in Fig. 3, a simple Gaussian is not an adequate fit to the data, and specifically does not reproduce the underlying parameters we are trying to ascertain. Table 1 gives the best-fit parameters to \(661.66\,keV\) lines generated by our simulations for a range of drift times characteristic of our germanium detectors. Several trends in this table are worth noting for later comparison. First, the high \(\chi^{2}_{R}\) values (\(\gg 1\)) indicate poor quality fits, and get worse for larger drift times (i.e., more charge sharing). The fit value of \(E_{0}\) is shifted to lower energies and \(\sigma\) is broader than the actual noise value due to the Gaussian shape function trying to account for the low-energy tail. Utilizing this simple Gaussian shape function in spectral analyses can lead to erroneous gain calibrations as well as inaccurate measurement of peak energies and potential Doppler shifts and broadening for measured \(\gamma\)-ray lines. In addition, the simple Gaussian fit significantly overestimates the number of counts in the true photopeak (\(N_{p}\)) and leaves no characterization of the number of counts in the tail (\(N_{t}\)).

Figure 3: Gaussian-only fits to the full spectra, 250 ns drift time. (Left) Electron signals, both the fitted (and expected) values, \(E_{0}=661.34\) keV (661.66), \(\sigma=1.30\,keV\) (1.15). (Right) Hole signals, \(E_{0}=661.28\) keV (661.66), \(\sigma=1.33\,keV\) (1.15). The individual true photopeak and low-energy tail components of the model spectra are shown for comparison. More details are presented in Table 1.

## 3 Previous Analytical Fits

A wide variety of shape functions have been proposed to facilitate analytical peak-fitting techniques for narrow \(\gamma\)-ray spectral lines in germanium detectors. These shape functions universally share the fundamental feature that the photopeak for totally-absorbed \(\gamma\)-rays where the resulting charge clouds are fully collected on the electrode(s) is modeled by a Gaussian peak of width \(\sigma\), the center of which reflects the incident photon energy, \(E_{0}\), in a properly calibrated system. The width of the Gaussian is determined by the statistical fluctuations in the initial e-h charge cloud produced by the recoil electron combined with the electronic readout noise [6]. Measured spectral lines in germanium detectors always exhibit asymmetries. There is inevitably an excess of counts on the lower-energy side of the peak, low-energy tails, that have traditionally been attributed to a number of physical processes in the detector including charge trapping, inactive regions in the detector, and escaped bremsstrahlung photons [7].

Figure 4: Example fits to tail-only spectra, 661.66 keV, 250 \(ns\) drift time. The low-energy tail spectra were fit with the tail components of the shape function only, with all parameters unconstrained. Here we show only electron signal spectra, but the hole signal spectra and resulting fits are very similar. (Top left) Exponential tail only fit (\(\chi^{2}_{R}\) = 13.37). (Top right) Exponential + step tail (\(\chi^{2}_{R}\) = 1.98). (Bottom left) Short + long exponentials (\(\chi^{2}_{R}\) = 1.27). (Bottom right) Exponential + linear tail (\(\chi^{2}_{R}\) = 1.23). Both of the latter two shapes produce high quality fits and reproduce the photopeak energy and the number of counts in the tail; however, the last shape, exponential + linear tail, provides a much more robust fitting as we vary the interaction energy and drift time. The true photopeak and full spectra are shown only for comparison.
With the advent of multi-electrode detectors such as the COSI cross-strip detectors, charge sharing between multiple electrodes can be added to this list as a dominating contributor to low-energy tails. Occasionally spectral peaks exhibit excess events on the higher-energy side of the peak. Such high-energy tails are primarily due to electronic pile-up [7] or cross-talk between neighboring electrode electronics [2]. We will not consider high-energy tails further in this work. While charge trapping is present in our germanium detectors for both electron and hole signals, we are not including the effects of trapping on our simulated line profiles. We will return to a discussion of charge trapping in Section 8. The wide variety of proposed shape functions for germanium detectors has been reviewed by multiple authors, e.g. [8, 9]. In general, the shape functions that best fit experimental data combine the Gaussian peak with an exponential low-energy tail, plus an additional component extending to lower energies that is usually represented by either a step function or a second longer exponential tail (or both) [10, 11]. Given the simplicity and historical success of this general shape function, we adopt this as our baseline approach. These previous investigations into optimal shape functions were primarily modeling the response of monolithic, single-electrode germanium detectors, where the low-energy tails would extend indefinitely below the Gaussian peak. The low-energy tails we are simulating in this work are due solely to untriggered charge sharing on neighboring electrode strips, hence these low-energy tails only extend below the peak (\(E_{0}\)) to energies \(E_{0}\)-\(E_{th}\), where \(E_{th}\) is the trigger threshold energy for the neighboring strip. For our germanium cross strip detectors this threshold is at relatively low energies (\(E_{th}=18\,keV\)). Hence we introduce our first modification to any adopted shape function by requiring a low-energy cutoff at \(E_{0}\)-\(E_{th}\). We explored four (4) tailing shape functions in detail, keeping in mind our simultaneous goals of finding a shape function that provides a robust and high quality fit to the simulated spectra, reliably reproduces \(E_{0}\), \(\sigma\), \(N_{p}\), and \(N_{t}\), and minimizes the number of fit parameters. Here we define "robust" fits as ones where the parameters do not vary dramatically as we vary the interaction energy and the drift times. The four models we have explored are shown in Fig. 4 and summarized here.

1. Exponential low-energy tail (2 parameters). Before adding any additional components to the tail shape function we first looked at the single-component exponential low-energy tail model. As can be seen in Fig. 4 (top left), this single-component model does not provide a quality fit of the tail, justifying the need to look for an additional component to extend the tail to lower energies.
2. Exponential + step tail (3 parameters). The simplest additional component we can add to this tail model is a step function that extends the tail shape function to lower energies. The form of this shape function is the same as used in [12], but with the addition of the low-energy cutoff. An example fit utilizing this tail shape is shown in Fig. 4 (top right). This function does a much better job qualitatively of modeling the complex tail shape, but the quality of the fits (\(\chi^{2}_{R}\)) indicates there is room for further improvement, and close inspection shows that the fit does not adequately capture the slope of the extended tail at the lowest energies. Surprisingly, however, the fits with this tail shape are very robust and do an excellent job of reproducing the underlying physical parameters (\(E_{0}\), \(\sigma\), \(N_{p}\), \(N_{t}\)), with minimal parameters. We keep this quality in mind when investigating the next two models.

3. Short + long exponential tail (4 parameters). The next modification to the shape function we pursued replaces the step function with a second, longer exponential component to the tail [11]. An example fit utilizing this double exponential is shown in Fig. 4 (bottom left). This shape function does an excellent job of producing high quality fits (\(\chi^{2}_{R}\sim 1\)) to the tail and overall full spectrum. That boded well for this model. However, we find that this tail shape function produces less robust fits (in terms of variation of the fitting parameters) than the exponential + step function, and also does an inferior job of accurately reproducing \(E_{0}\), \(\sigma\), \(N_{p}\), and \(N_{t}\).

4. Exponential + linear tail (4 parameters). The fourth option we explored is not as well represented in previous literature. We combined the short exponential tail with a linear function extending to lower energies. A fit utilizing this function is shown in Fig. 4 (bottom right). This shape function does an excellent job in fitting the extended tails, with \(\chi^{2}_{R}\sim 1\), comparable to the double exponential function. However, this shape function also provides robust fits and reliably reproduces the underlying physical parameters (\(E_{0}\), \(\sigma\), \(N_{p}\), \(N_{t}\)). Hence, we have selected this shape function as the most promising to pursue in greater detail.

## 4 Empirical Tailing Model

We arrive at a shape function that includes three core components: a Gaussian peak (3 parameters: \(A\), \(E_{0}\), \(\sigma\)), a short exponential low-energy tail (2 parameters: \(B\), \(\Gamma\)), and a long linear low-energy tail (2 parameters: \(C\), \(D\)). The latter two components need to be cut off at higher energies (\(E_{0}\)) and lower energies (\(E_{0}-E_{th}\)), as well as effectively broadened by a noise term (1 parameter, \(\sigma_{t}\)). This latter noise term for the tail component, \(\sigma_{t}\), is often assumed equal to the noise term in the Gaussian peak, \(\sigma\), but not always [8; 10]. We have chosen to keep this as a free parameter for now and check whether consistency between these fitted parameters justifies setting them equal or not. Technically, the low-energy cutoff introduces an additional parameter to these fits, but this is a known parameter for our detectors and is fixed in these fits (\(E_{th}=18\,keV\)).
The shape function we have selected that reflects all of these components is given by the equation (8 parameters):

\[f(E)=Ae^{\frac{-(E-E_{0})^{2}}{2\sigma^{2}}}+[Be^{\Gamma(E-E_{0})}+C(1+D(E-E_{0}))]*[1-erf(\frac{E-E_{0}}{\sqrt{2}\sigma_{t}})]*[1+erf(\frac{E-E_{0}+E_{th}}{\sqrt{2}\sigma_{t}})] \tag{2}\]

The last two terms in brackets represent the high- and low-energy cutoffs to the two tail components. In Fig. 4 (bottom right) we show just the tail components of this shape function (short exponential tail and longer linear tail) fit to the low-energy tail model spectra, keeping all of the fit parameters free. The tail shape function does an excellent job of recreating the profile of the simulated tail. It also does an excellent job of reproducing the true physical parameters that we are trying to uncover: the initial interaction energy, \(E_{0}\), and the number of counts in the low-energy tail, \(N_{t}\). However, we encounter a challenge when we try to fit the full shape function (Eqn. 2) to the full spectral lines. As can be seen in Fig. 5, this fit with eight unconstrained parameters does not adequately distinguish between the low-energy tail events and the true photopeak events. The result is that the fitted Gaussian peak is broader than the true photopeak, with the fitted peak energy shifted to lower energies. The number of counts in the true photopeak (\(N_{p}\)) is overestimated, while the number of counts in the low-energy tail (\(N_{t}\)) is underestimated. Effectively this full shape function fit is utilizing the eight (8) unconstrained fitting parameters to maximize the quality of the empirical fit (i.e., minimize \(\chi^{2}_{R}\)) at the expense of producing inaccurate physical numbers. While the quality of fit is promising, this model as implemented with eight unconstrained parameters does not meet our requirement that the shape function reliably reproduce the underlying physical parameters when fit to the full spectrum.

## 5 Constrained Tailing Model

To address this challenge, we returned to our simulated low-energy tail spectra and the tail shape function to see if there are modifications we can make to the tail shape function to more robustly reproduce the actual tail parameters when doing the full shape function fit to the full spectrum. Here is where our ability to model the low-energy tail spectra separately from the true photopeak spectra becomes particularly powerful. In Table 2 we show the best-fit parameters for the tail model fit to the low-energy tail spectra for a range of drift times characteristic of our germanium detectors. For these fits we held the fit parameter \(E_{0}\) fixed at the known photon interaction energy (661.66 keV), since this parameter is primarily driven by the Gaussian peak in the full shape function fits.

Figure 5: Example, \(\tau=250\,ns\), \(E_{0}=661.66\,keV\), fits of the full shape function to full spectra (8 unconstrained parameters). The overall quality of fit is excellent: (Left) electron signals (\(\chi^{2}_{R}=1.218\)), (Right) hole signals (\(\chi^{2}_{R}=1.104\)). But as can be seen in both plots, the full shape function fit shifts and broadens the Gaussian photopeak while overestimating the number of counts in the true photopeak and underestimating the number of counts in the low-energy tail. The true photopeak and low-energy tail spectral components are shown for comparison with the relevant components of the fitted shape function (dotted lines) to illustrate how the unconstrained full shape function can incorrectly reflect the underlying components of the spectra.
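For reference, a direct Python transcription of Eq. (2), with the 18 keV trigger threshold of these detectors fixed, might look as follows (the function name is ours):

```python
import numpy as np
from scipy.special import erf

def shape(E, A, E0, sigma, B, Gamma, C, D, sigma_t, E_th=18.0):
    """Eq. (2): Gaussian photopeak plus exponential + linear low-energy
    tail, with error-function cutoffs at E0 (high side) and E0 - E_th
    (low side). All energies in keV."""
    gauss = A * np.exp(-((E - E0) ** 2) / (2.0 * sigma ** 2))
    tail = B * np.exp(Gamma * (E - E0)) + C * (1.0 + D * (E - E0))
    hi_cut = 1.0 - erf((E - E0) / (np.sqrt(2.0) * sigma_t))
    lo_cut = 1.0 + erf((E - E0 + E_th) / (np.sqrt(2.0) * sigma_t))
    return gauss + tail * hi_cut * lo_cut
```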
When we look closely at these fit parameters, four significant trends pop out. First, the parameter \(\Gamma\) in the fit for the exponential component of the low-energy tail does not vary significantly over the full range of drift times, nor between electron signals versus hole signals. The limited range of these best-fit values suggests that we can help stabilize the full shape function fits by fixing \(\Gamma\) for our tail shape function. We chose a value of \(\Gamma\) averaged over both the electron signals and the hole signals as well as the range of drift times, weighted by the number of counts in the low-energy tails. This averaging resulted in us fixing the parameter \(\Gamma\equiv 0.50\,keV^{-1}\) (at \(E_{0}=661.66\,keV\)). The second trend that we can see in Table 2 is that the ratio of the amplitudes of the linear tail to the exponential tail, \(C/B\), also remains nearly consistent over the range of drift times for both the electron signals and the hole signals. This is our second clue to stabilizing the tail shape function fits by fixing the ratio \(C/B\). Based again on the weighted average, we fixed the ratio \(C/B\equiv 0.13\) (at \(E_{0}=661.66\,keV\)). The third trend that we can see in Table 2 is that the slope of the linear component of the tail, \(D\), also remains nearly consistent over the range of drift times for both the electron signals and the hole signals. This is our third clue to stabilizing the tail shape function fits by fixing the slope \(D\).

\begin{table} \begin{tabular}{c c c c c c c} \hline \multicolumn{7}{c}{Electron signals} \\ \hline \(\tau\) [ns] & \(\Gamma\) [\(keV^{-1}\)] & \(C/B\) & \(D\) [\(keV^{-1}\)] & \(\sigma_{t}/\sigma\) & \(N_{t}[fit/sim]\) & \(\chi^{2}_{R}\) \\ \hline 10 & 0.53 & 0.14 & 0.030 & 0.81 & 0.98 & 0.96 \\ 50 & 0.50 & 0.13 & 0.027 & 0.85 & 0.99 & 0.92 \\ 100 & 0.50 & 0.13 & 0.027 & 0.86 & 0.99 & 1.05 \\ 150 & 0.50 & 0.13 & 0.028 & 0.86 & 0.99 & 1.40 \\ 200 & 0.51 & 0.13 & 0.029 & 0.85 & 0.99 & 1.23 \\ 250 & 0.51 & 0.13 & 0.029 & 0.85 & 0.99 & 1.20 \\ \hline Ave & 0.50 & 0.13 & 0.029 & 0.85 & & \\ \hline \multicolumn{7}{c}{Hole signals} \\ \hline \(\tau\) [ns] & \(\Gamma\) [\(keV^{-1}\)] & \(C/B\) & \(D\) [\(keV^{-1}\)] & \(\sigma_{t}/\sigma\) & \(N_{t}[fit/sim]\) & \(\chi^{2}_{R}\) \\ \hline 10 & 0.51 & 0.15 & 0.030 & 0.85 & 0.98 & 0.87 \\ 50 & 0.49 & 0.13 & 0.028 & 0.86 & 0.99 & 1.51 \\ 100 & 0.50 & 0.13 & 0.029 & 0.85 & 0.99 & 1.45 \\ 150 & 0.51 & 0.14 & 0.030 & 0.85 & 0.99 & 1.31 \\ 200 & 0.50 & 0.13 & 0.029 & 0.85 & 0.99 & 1.29 \\ 250 & 0.51 & 0.13 & 0.030 & 0.84 & 0.99 & 1.33 \\ \hline Ave & 0.50 & 0.13 & 0.029 & 0.85 & & \\ \hline \end{tabular} \end{table} Table 2: Best-fit parameters to the tail shape function holding \(E_{0}\) fixed at 661.66 keV but allowing all the other parameters to vary.

Figure 6: Constrained fits to the full spectra, for three different drift times. (Left) Electron signals, (Right) hole signals. The quality of fits and corresponding fit parameters are listed in Table 3. The individual true photopeak and low-energy tail components of both the model spectra and the fitted shape function are shown for comparison but were not used in the fitting.
Based again on the weighted average, we fixed the slope \(D\equiv 0.029\,keV^{-1}\) (at \(E_{0}=661.66\,keV\)). Finally, the fourth trend that we can see in Table 2 is that the ratio of the noise terms, \(\sigma_{t}/\sigma\), also remains nearly constant over the range of drift times for both the electron signals and the hole signals. This provides our fourth clue for stabilizing the tail shape function fits by fixing the ratio of these noise terms. Based again on the weighted average, we fixed the ratio \(\sigma_{t}/\sigma\equiv 0.85\). Notably, this ratio is not unity, which justifies defining \(\sigma_{t}\) as a separate parameter from \(\sigma\). By fixing the parameters \(\Gamma\), \(C/B\), \(D\), and \(\sigma_{t}/\sigma\), we have effectively reduced the number of free parameters in our shape function from eight (8) to four (4), just one additional parameter (\(B\)) over the pure Gaussian peak. This additional parameter is effectively the amplitude of the low-energy tail component. In Fig. 6 we show simulated spectra (\(E_{0}=661.66\,keV\)) fit to the refined shape function, Eqn. 2 with \(\Gamma\), \(C/B\), \(D\), and \(\sigma_{t}/\sigma\) held fixed. The constrained shape function still does an excellent job in reproducing the overall shape of the full spectrum. It also does a better job of characterizing the underlying true photopeak and low-energy tail spectra. Most importantly, this constrained shape function reliably reproduces the underlying physical parameters as documented in Table 3. The quality of fits (\(\chi^{2}_{R}\)) remains good for the full range of drift times. The true photopeak parameters \(E_{0}\) and \(\sigma\) are consistently reproduced in the fits, the number of counts within the true photopeak (\(N_{p}\)) is reliably reproduced with \(\leq 2\%\) systematic error, and the number of counts within the low-energy tail (\(N_{t}\)) with \(\leq 3\%\) systematic error. So far, as verified at \(E_{0}=661.66\,keV\) at least, this constrained tail fit has met our goals for the shape function (quality of fit, reliable parameter estimates, minimal parameters).

\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multicolumn{6}{c}{Electron signals} \\ \hline \(\tau\) [ns] & \(E_{0}\) [\(keV\)] & \(\sigma\) [\(keV\)] & \(N_{p}[fit/sim]\) & \(N_{t}[fit/sim]\) & \(\chi^{2}_{R}\) \\ \hline 10 & 661.66 & 1.15 & 1.00 & 1.00 & 1.20 \\ 50 & 661.66 & 1.15 & 1.00 & 0.99 & 0.98 \\ 100 & 661.66 & 1.15 & 1.00 & 0.98 & 1.33 \\ 150 & 661.65 & 1.15 & 1.01 & 0.97 & 1.24 \\ 200 & 661.65 & 1.15 & 1.01 & 0.97 & 1.31 \\ 250 & 661.64 & 1.16 & 1.02 & 0.97 & 1.39 \\ \hline \hline \multicolumn{6}{c}{Hole signals} \\ \hline \(\tau\) [ns] & \(E_{0}\) [\(keV\)] & \(\sigma\) [\(keV\)] & \(N_{p}[fit/sim]\) & \(N_{t}[fit/sim]\) & \(\chi^{2}_{R}\) \\ \hline 10 & 661.66 & 1.15 & 1.00 & 0.99 & 0.92 \\ 50 & 661.66 & 1.15 & 1.00 & 0.99 & 0.96 \\ 100 & 661.65 & 1.15 & 1.00 & 0.99 & 0.86 \\ 150 & 661.65 & 1.15 & 1.00 & 0.98 & 1.02 \\ 200 & 661.64 & 1.16 & 1.01 & 0.98 & 1.17 \\ 250 & 661.63 & 1.16 & 1.02 & 0.97 & 1.18 \\ \hline \hline \end{tabular} \end{table} Table 3: Best-fit parameters for constrained fits to the full spectra. The simulated spectra assume \(E_{0}=661.66\,keV\) and \(\sigma=1.15\,keV\). The full shape function with constrained parameters (\(\Gamma\), \(C/B\), \(D\), \(\sigma_{t}/\sigma\) held fixed) still produces high-quality fits with \(\chi^{2}_{R}\) comparable to the unconstrained version, but reliably reproduces \(E_{0}\), \(\sigma\), \(N_{p}\), and \(N_{t}\).
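In practice the constrained model can be wrapped so that only the four free parameters are exposed to the fitter. A sketch building on the `shape()` transcription above; the `curve_fit` call and its starting values are illustrative, not a prescription:

```python
from scipy.optimize import curve_fit

def constrained_shape(E, A, E0, sigma, B):
    """Eq. (2) with the Section 5 constraints at 661.66 keV:
    Gamma = 0.50 keV^-1, C/B = 0.13, D = 0.029 keV^-1, and
    sigma_t = 0.85 * sigma. Only A, E0, sigma, and the tail
    amplitude B remain free."""
    return shape(E, A, E0, sigma, B,
                 Gamma=0.50, C=0.13 * B, D=0.029, sigma_t=0.85 * sigma)

# With binned counts vs. bin centers (hypothetical arrays):
# popt, pcov = curve_fit(constrained_shape, centers, counts,
#                        p0=[counts.max(), 661.66, 1.2, 0.05 * counts.max()])
```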
\begin{table} \begin{tabular}{c c c c c c} \hline \(E_{0}\) [keV] & \(<\Gamma>\) [\(keV^{-1}\)] & \(<C/B>\) & \(<D>\) [\(keV^{-1}\)] & \(<\sigma_{t}/\sigma>\) & \(<\chi^{2}_{R}>\) \\ \hline 59.54 & 0.55 & 0.15 & 0.025 & 0.85 & 1.21 \\ 122.06 & 0.55 & 0.14 & 0.028 & 0.85 & 1.17 \\ 356.02 & 0.52 & 0.14 & 0.029 & 0.85 & 1.17 \\ 511.00 & 0.51 & 0.13 & 0.029 & 0.85 & 1.25 \\ 661.66 & 0.50 & 0.13 & 0.029 & 0.85 & 1.26 \\ 898.04 & 0.50 & 0.13 & 0.028 & 0.84 & 1.21 \\ 1173.24 & 0.49 & 0.13 & 0.028 & 0.84 & 1.19 \\ 1274.53 & 0.48 & 0.13 & 0.028 & 0.85 & 1.16 \\ 1332.50 & 0.48 & 0.13 & 0.027 & 0.85 & 1.11 \\ 1674.73 & 0.47 & 0.13 & 0.027 & 0.85 & 1.16 \\ 1836.06 & 0.46 & 0.12 & 0.027 & 0.85 & 1.19 \\ \hline \end{tabular} \end{table} Table 4: Best-fit parameters at various interaction energies \(E_{0}\) to the tail shape function, averaged over electrons and holes as well as drift times, holding \(E_{0}\) fixed but allowing all the other parameters to vary. These spectral simulations assume a point-like initial interaction (see Section 7). These parameters are also plotted in Fig. 8.

## 6 Generalization to Other Energies

Our constrained shape function works well for fitting the spectrum of monoenergetic interactions at 661.66 keV (\({}^{137}Cs\)). The immediate question is whether the constrained shape function can adequately meet our requirements for fitting at other \(\gamma\)-ray energies. To answer this question we reproduced the analysis above for a range of additional energies, representing common laboratory calibration source monoenergetic line energies: 59.54 keV (\({}^{241}Am\)), 122.06 keV (\({}^{57}Co\)), 356.02 keV (\({}^{133}Ba\)), 511.00 keV (\({}^{22}Na\)), 898.04 keV (\({}^{88}Y\)), 1173.24 keV (\({}^{60}Co\)), 1274.54 keV (\({}^{22}Na\)), 1332.50 keV (\({}^{60}Co\)), 1674.73 keV (\({}^{58}Co\)), and 1836.06 keV (\({}^{88}Y\)). The primary factors that vary in the detector response for these energies are that the measured noise \(\sigma_{m}\) increases with energy (Eqn. 1), and the effects of repulsion on the charge cloud profile are larger at higher energies [5]. Table 4 documents the average best-fit parameters over this range of energies. The quality of fit (\(<\chi_{R}^{2}>\)) remains very good over the full range of energies. As at 661.66 \(keV\), \(\Gamma\), \(C/B\), and \(D\) do not vary significantly for a given interaction energy between electron signals and hole signals and as we varied the drift times, _but the average values of these parameters do vary with photon energy itself._ The ratio \(<\sigma_{t}/\sigma>\) varies very little with energy, remaining nearly constant at \(<\sigma_{t}/\sigma>\sim 0.85\). In Fig. 7 we show example fits for 59.54 keV and 1173.23 keV model spectra, for drift times of 250 ns (maximum tailing), utilizing the energy-specific fixed parameters from Table 4. The quality of fits remains excellent despite the wide range in energy.

## 7 Extended Initial Charge Cloud

Now we turn to the impact of extended charge distributions of electron-hole pairs created by the recoil electron following the initial \(\gamma\)-ray interaction. Extended initial charge cloud distributions will become increasingly important for higher interaction energies, such as those we are measuring in \(\gamma\)-ray applications of our germanium detectors.
Figure 7: Example constrained shape function fits. (Top Left) 59.54 keV, electron signals (\(\chi^{2}_{R}=1.21\)). (Top Right) 59.54 keV, hole signals (\(\chi^{2}_{R}=1.03\)). (Bottom Left) 1173.23 keV, electron signals (\(\chi^{2}_{R}=1.05\)). (Bottom Right) 1173.23 keV, hole signals (\(\chi^{2}_{R}=1.20\)). \(E_{0}\), \(\sigma\), \(N_{p}\), and \(N_{t}\) remain well reproduced by the fits.

In Boggs [5], we discussed a simple approach to modeling the effects of initially extended charge clouds by assuming that the initial charge cloud can be approximated at \(t=0\) as a sphere of uniform charge density and finite radius \(R_{0}\). The size of this initial charge cloud can be estimated using the practical electron range, \(D_{p}\), in germanium as a function of recoil electron energy, which is given approximately by the following formula [13]:
The energy dependence (\([E]=keV\)) of the parameter \(\Gamma(E)\) is fit by the function: \[\Gamma(E)=0.547-(5.39\times 10^{-5})E;[\Gamma]=keV^{-1} \tag{4}\] The ratio \(C/B(E)\) is fit by the function: \[C/B(E)=0.131+\frac{1.44}{E} \tag{5}\] The parameter \(D(E)\) is fit by the function: \[D(E)=0.0312-\frac{0.336}{E}-(3.13\times 10^{-6})E;[D]=keV^{-1} \tag{6}\] We can effectively fix \(\sigma_{t}/\sigma\equiv 0.85\) for all energies. Characterization of these trends enable us to utilize the constrained tail shape function at arbitrary photopeak energies with some confidence that we are accounting for the finite range of the initial recoil electron. ## 8 Discussion In order to optimize the spectral performance of high resolution germanium detectors, very careful energy calibrations need to be performed using monoenergetic \(\gamma\)-ray line sources of known energies. As seen with the simple Gaussian fits to the true asymmetric line profiles presented in Section 2, utilizing ill-fitting shape functions to fit the asymmetric line profiles will lead \begin{table} \begin{tabular}{c c c c c c} \hline \(E_{0}\) [keV] & \(<\Gamma>\) [\(keV^{-1}\)] & \(<C/B>\) & \(<D>\) [\(keV^{-1}\)] & \(<\sigma_{t}/\sigma>\) & \(<\chi_{R}^{2}>\) \\ \hline 59.54 & 0.56 & 0.16 & 0.026 & 0.85 & 1.25 \\ 122.06 & 0.55 & 0.14 & 0.028 & 0.85 & 1.18 \\ 356.02 & 0.52 & 0.14 & 0.029 & 0.85 & 1.23 \\ 511.00 & 0.51 & 0.13 & 0.029 & 0.85 & 1.23 \\ 661.66 & 0.50 & 0.13 & 0.028 & 0.85 & 1.26 \\ 898.04 & 0.50 & 0.13 & 0.028 & 0.84 & 1.20 \\ 1173.24 & 0.48 & 0.13 & 0.027 & 0.85 & 1.11 \\ 1274.53 & 0.48 & 0.14 & 0.027 & 0.85 & 1.21 \\ 1332.50 & 0.47 & 0.14 & 0.026 & 0.85 & 1.11 \\ 1674.73 & 0.45 & 0.14 & 0.025 & 0.86 & 1.30 \\ 1836.06 & 0.44 & 0.15 & 0.024 & 0.85 & 1.13 \\ \hline \end{tabular} \end{table} Table 5: Extended initial charge cloud. Best-fit parameters at various interaction energies \(E_{0}\) to the tail shape function, averaged over electrons and holes as well as drift times, holding \(E_{0}\) fixed but allowing all the other parameters to vary. These spectral simulations assume a spherical extended initial charge cloud. These parameters are also plotted in Fig. 8. Figure 8: The best-fit average tail shape function parameters \(<\Gamma>\), \(<C/B>\), \(<D>\), and \(<\sigma_{t}/\sigma>\) for our range of interaction energies, showing the parameters derived for point-like initial interactions (crosses) and extended initial charge clouds (diamonds). Also shown are the best-fit curves (dotted lines) to \(\Gamma(E)\) (Eqn. 4), \(C/B(E)\) (Eqn. 5), and \(D(E)\) (Eqn. 6) for the average of these two extreme cases. to incorrect determination of the true photoabsorption peak, and hence skew the subsequent energy calibrations for the detector. Conversely, detailed analysis of scientific spectral data requires a detailed understanding of the instrumental line profiles to accurately identify line energies, as well as potential Doppler broadening and shifts - important factors in our astrophysical program. In this work we have considered the effects of charge sharing as the dominant factor in creating low-energy tails in our multi-electrode germanium detectors. We have not included charge trapping in these spectral simulations. While charge trapping can be a significant factor in producing low-energy tails in germanium detectors, our ability to measure the full 3-D position of photon interactions within our detector volume allows us to largely correct the effects of trapping on the collected spectra. 
## 8 Discussion

In order to optimize the spectral performance of high resolution germanium detectors, very careful energy calibrations need to be performed using monoenergetic \(\gamma\)-ray line sources of known energies. As seen with the simple Gaussian fits to the true asymmetric line profiles presented in Section 2, utilizing ill-fitting shape functions to fit the asymmetric line profiles will lead to incorrect determination of the true photoabsorption peak, and hence skew the subsequent energy calibrations for the detector. Conversely, detailed analysis of scientific spectral data requires a detailed understanding of the instrumental line profiles to accurately identify line energies, as well as potential Doppler broadening and shifts - important factors in our astrophysical program. In this work we have considered the effects of charge sharing as the dominant factor in creating low-energy tails in our multi-electrode germanium detectors. We have not included charge trapping in these spectral simulations. While charge trapping can be a significant factor in producing low-energy tails in germanium detectors, our ability to measure the full 3-D position of photon interactions within our detector volume allows us to largely correct the effects of trapping on the collected spectra. The details of the charge trapping correction are beyond the scope of this current work, but suffice it to say that our work to correct the effects of charge trapping largely led to our in-depth analysis of the spectral profiles presented in this paper. One of the advantages of the work presented here is the simplicity of the simulations used to generate the charge-sharing spectra, as well as the simplicity of the shape function used to characterize the resulting photopeaks and low-energy tails. While the shape function utilized in this work produces complicated line profiles, these profiles reliably reproduce the underlying physical parameters of the simulated spectra with only one additional parameter over a simple Gaussian peak. The shape function developed in this work has multiple future applications to the COSI program. The line profile will enable us to accurately perform the energy calibration of the instrument, including characterizing and correcting the effects of charge trapping in the detectors. The profiles themselves provide a useful tool for simulating the expected spectral performance of the instrument. Finally, the shape function will be a critical component in the scientific analysis and interpretation of the astrophysical data.

## 9 Acknowledgements

This work was supported by the NASA Astrophysics Research and Analysis (APRA) program, grant 80NSSC21K1815. Thanks to J. Tomsick for feedback on this work.
2307.00023
Special relativity and the twins: a review and a new approach
It is sometimes claimed that the twin "paradox" requires general relativity for a resolution. This paper presents a simple, exact resolution using only special relativity and the equivalence principle. Two earlier approximate solutions are considered, along with some background review to render the article self-contained. It is hoped that this material will be suitable for classroom instruction.
David Derbes
2023-06-28T15:35:55Z
http://arxiv.org/abs/2307.00023v1
# Special relativity and the twins: a review and a new approach

###### Abstract

It is sometimes claimed that the twin "paradox" requires general relativity for a resolution. This paper presents a simple, exact resolution using only special relativity and the equivalence principle. Two earlier approximate solutions are considered, along with some background review to render the article self-contained. It is hoped that this material will be suitable for classroom instruction.

## 0 Prolegomena and apologia

Full disclosure: Shorter versions of this article have been rejected four times over thirty-five years. The objections (after the first, in 1988) were broadly two: that there was nothing new in it, but more forcefully, that articles on the twins were harmful. Not in themselves, but in the second order: these articles frequently induce a cloud of cranks who swamp journals in the misbegotten hope of disproving relativity and Einstein, thereby obliging conscientious editors to waste valuable time and energy refuting nonsense. So why write another article? Why read one? The selfish motive is to publish what this author thinks, _pace_ his referees, is a new approach to this old puzzle.1 Less selfish motives are pedagogic, to clear up a number of misunderstandings related to the twins. Some hold that special relativity can provide only an approximate resolution to the puzzle; exact reconciliation of the twins' times requires the general theory. Occasionally one reads that special relativity applies only to inertial reference frames and cannot handle accelerated frames. As will be shown, each of these claims is mistaken. Additionally, many students have at best a murky understanding of how acceleration affects clock rates, which this paper may clear up. The author also hopes to make better known the use of Møller coordinates and the encyclopedic writings of H. Arzeliès. Perhaps this elementary paper will provide something useful to those who teach relativity.

Footnote 1: Careful review of the literature years ago failed to find an _exact_ special relativistic solution. The author recently discovered the paper by J. Gamboa, F. Mendez, M. B. Paranjape and Benoit Sirois, “The twin paradox: the role of acceleration”, _Can. Jour. Phys._**97** (2019) 1049, arXiv:gr-qc/1807.02148v1, which _is_ exact and special relativistic. While the results and some of the calculations are very similar to those of this paper, the approach is very different.

## 1 Introduction

On January 16, 1911, Einstein gave a lecture to the Zurich Society of Natural Sciences, entitled _The Theory of Relativity.2_ After discussing the usual time dilation, he added that it was "at its strangest" if one imagined a clock given a uniform velocity in one direction for some distance, then returning with the reverse velocity, coming to rest at its original position. He went on:

Footnote 2: A. Einstein, “Die Relativitätstheorie,” _Vierteljahrsschrift der Naturforschenden Gesellschaft in Zürich_**56**, 1-14 (1911). English translation in _The Collected Papers of Albert Einstein, Vol.3: The Swiss Years: Writings 1909-1911_, (Princeton, U. P., 1994), document 17, 340–350. This quote is from pp. 348–349. Einstein’s papers are freely available online in the original language and in English: [https://einsteinpapers.press.princeton.edu/](https://einsteinpapers.press.princeton.edu/).
Were we, for example, to place a living organism in a box and make it perform the same to-and-fro motion as the clock discussed above, it would be possible to have this organism return to its original starting point after an arbitrarily long flight having undergone an arbitrarily small change, while identically constituted organisms that remained at rest at the point of origin have long since given way to new generations.
2305.15531
Twists of Gr(3,n) Cluster Variables as Double and Triple Dimer Partition Functions
We give a combinatorial interpretation for certain cluster variables in Grassmannian cluster algebras in terms of double and triple dimer configurations. More specifically, we examine several Gr(3,n) cluster variables that may be written as degree two or degree three polynomials in terms of Pl\"ucker coordinates, and give generating functions for their images under the twist map - a cluster algebra automorphism introduced in work of Berenstein-Fomin-Zelevinsky. The generating functions range over certain double or triple dimer configurations on an associated plabic graph, which we describe using particular non-crossing matchings or webs (as defined by Kuperberg), respectively. These connections shed light on a recent conjecture of Cheung et al., extend the concept of web duality introduced in a paper of Fraser-Lam-Le, and more broadly make headway on understanding Grassmannian cluster algebras for Gr(3,n).
Moriah Elkin, Gregg Musiker, Kayla Wright
2023-05-24T19:36:43Z
http://arxiv.org/abs/2305.15531v2
# Twists of \(\operatorname{Gr}(3,n)\) cluster variables as double and triple dimer partition functions ###### Abstract. We give a combinatorial interpretation for certain cluster variables in Grassmannian cluster algebras in terms of double and triple dimer configurations. More specifically, we examine several \(\operatorname{Gr}(3,n)\) cluster variables that may be written as degree two or degree three polynomials in terms of Plucker coordinates, and give generating functions for their images under the twist map - a cluster algebra automorphism introduced in [1]. The generating functions range over certain double or triple dimer configurations on an associated plabic graph, which we describe using particular non-crossing matchings or webs (as in [21]), respectively. These connections shed light on a conjecture appearing in [13], extend the concept of web duality introduced in [14], and more broadly make headway on understanding Grassmannian cluster algebras for \(\operatorname{Gr}(3,n)\). ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 The Grassmannian and its Cluster Structure * 2.2 Plabic Graphs * 2.3 Quadratic and Cubic Differences of Plucker Coordinates * 2.4 Dimer Configurations * 2.5 The Twist Map * 3 Dimer Face Weights * 4 Double Dimer Configurations for Quadratic Differences * 5 Triple Dimer Configurations for Cubic Differences * 5.1 Webs * 5.2 Triple Dimer Configurations As Webs * 5.3 Enumeration of Non-Elliptic Webs * 5.4 Webs for Cubic Differences * 6 Comparison to Web Duality * 6.1 Young Tableaux * 7 Construction of \(C\) * 8 Appendix: Proofs of Lemmas * 9 Appendix: Computations of Twists * 9.1 Computing \(\mathscr{T}^{*}(\sigma^{2}(A))\) * 9.2 Computing \(\mathscr{T}^{*}(\sigma^{7}(B))\) * 9.3 Computing Twists Algebraically ## 1. Introduction Cluster algebras are a well-loved object of study in algebraic combinatorics because of their deep connection to a myriad of mathematical fields. Many mathematicians are interested in finding combinatorial models for the generators of these algebras, which are otherwise only recursively defined. This paper addresses that question for certain cluster algebras coming from the Grassmannian, which is denoted \(\operatorname{Gr}(k,n)\) and refers to the set of \(k\)-dimensional subspaces of \(\mathbb{C}^{n}\). Focusing on the case of \(k=3\), we discuss a connection between Grassmannian cluster algebras, \(m\)-fold dimer configurations and non-elliptic webs. In particular, we establish a dimer-theoretic model for certain generators of Grassmannian cluster algebras. Scott in [16] was the first to describe the cluster structure on \(\operatorname{Gr}(k,n)\), and Postnikov pioneered the study of the combinatorics of these cluster algebras in [15]. In particular, Postnikov introduced _plabic graphs_, which became the main combinatorial tool for studying these cluster structures. In the same paper, he defined a _boundary measurement map_ that linked _Plucker coordinates_, coordinates that parameterize the Grassmannian as a projective variety and are always generators for the associated cluster algebras, to _dimers_ (almost perfect matchings) on plabic graphs. This map was later made more explicit by Talaska in [12]. Recently, [13] and [14] used the boundary measurement map to give Laurent expansions for the images of Plucker coordinates under a certain famous automorphism called the _twist map_, defined originally in [1]. As shown in [14, Prop. 8.10], up to multiplication by frozen variables, the twist map on the Grassmannian sends cluster variables to cluster variables. 
For \(k=2\), every Grassmannian cluster variable is a Plucker coordinate, and thus every Plucker coordinate is the twist (up to frozens) of another Plucker coordinate. However, in Grassmannian cluster algebras with \(k\geq 3\) and \(n\geq 6\), some cluster variables are more complicated polynomials in Plucker coordinates, and some Plucker coordinates only appear as factors in the twists of these cluster variables. This paper will focus on certain quadratic and cubic polynomials that first appear as cluster variables in \(\operatorname{Gr}(3,6)\), \(\operatorname{Gr}(3,8)\), and \(\operatorname{Gr}(3,9)\), and will describe their twists combinatorially via the boundary measurement map, thus providing Laurent expansions for a larger set of cluster variables than simply twists of Plucker coordinates. To do so, we examine the connection between products of \(m\) Plucker coordinates and \(m\)_-fold dimer configurations_ (i.e. superimpositions of \(m\) single dimer configurations) elucidated in [11] for \(m=2\) and \(3\). \(2\)-fold or double dimer configurations may be described as _non-crossing matchings_, and Theorem 4.1 describes the matchings associated to the twists of the quadratic cluster variables \(X\) and \(Y\) defined by Scott. To describe the appropriate \(3\)-fold or triple dimer configurations for cubic cluster variables, we require another combinatorial object called a _web_, introduced in [10]. Synthesizing novel graph-theoretic reasoning with versions of the results of [14] and [14] yields our main theorems, Theorems 5.10, 5.11, 5.12, and 5.13, which describe the twists of several cubic expressions in Plucker coordinates as corresponding to certain basis webs. We note that the matchings referenced in Theorem 4.1 and the webs referenced in Theorems 5.12 and 5.13 first appeared in [10], but without the level of justification or the application to Laurent expansions provided here. This paper is organized as follows. In Section 2 we give background: we recall the cluster algebra structure of the Grassmannian in Section 2.1 and the terminology of plabic graphs in Section 2.2, describe the cluster variables we will model combinatorially in Section 2.3, define the combinatorial models (dimer configurations) in Section 2.4, and define the twist map as a cluster algebra automorphism in Section 2.5. In Section 3, we describe a weighting scheme on dimer configurations, and translate results of [14] and [14] into our setting for use in proving our main theorems. Then, in Sections 4 and 5, we give our double and triple dimer partition functions for twists of quadratic and cubic differences of Plucker coordinates; Section 5 begins with exposition about webs and their connection to triple dimers, enumerates several relevant classes of webs, and culminates with our main theorems. In Section 6, we describe these theorems using the language of web duality introduced in [10], and provide some explicit computations with standard Young tableaux. Finally, in Section 7, we justify a novel expression for a \(\operatorname{Gr}(3,9)\) cluster variable introduced in Section 2.3 and referenced throughout. We conclude with two appendices. The first, Section 8, contains the lengthy, pictorial proofs of vital lemmas posed in Section 5. The second, Section 9, gives explicit computations of Laurent polynomials using our results in Section 5; it also provides values of the twist map on many cluster variables in \(\operatorname{Gr}(3,7)\) and \(\operatorname{Gr}(3,8)\). 
**Acknowledgements:** We would like to give special thanks to Chris Fraser for assisting with the derivations in Subsection 2.5, as well as many helpful conversations. We would also like to thank Pavlo Pylyavskyy for sharing his expertise. The first author was initially supported by an Undergraduate Research Opportunities Program award, and subsequently by the University of Minnesota, as her work on this paper partially fulfills the requirements for the degree of Master of Science. The second and third authors were supported by NSF Grants DMS-1745638 and DMS-1854162. ## 2. Preliminaries We will use the following notation throughout the paper: let \([n]=\{1,2,\ldots,n\}\), and for \(1\leq k\leq n\), let \(\binom{[n]}{k}\) denote the set of all \(k\)-element subsets of \([n]\). ### The Grassmannian and its Cluster Structure In this section, we briefly review the cluster algebra structure on the Grassmannian; see [11] for a more thorough exposition. The **Grassmannian**, denoted \(\operatorname{Gr}(k,n)\), is the space of \(k\)-dimensional linear subspaces of \(n\)-dimensional complex space. Equivalently, it is the space of full rank \(k\)-by-\(n\) matrices written in row-reduced echelon form, where the corresponding linear subspace is given by the row span of the corresponding matrix. We will consider the **Plucker embedding** of \(\operatorname{Gr}(k,n)\) into projective space of dimension \(\binom{n}{k}-1\), defined as follows: for any \(J\in\binom{[n]}{k}\), we have a corresponding projective **Plucker coordinate** \(\Delta_{J}\), and at a given \(M\in\operatorname{Mat}_{k\times n}(\mathbb{C})\) we define the value \(\Delta_{J}(M)\) to be the maximal minor of \(M\) using the column set \(J\). For any \(k,n\), the corresponding set of Plucker coordinates will satisfy certain quadratic **Plucker relations**. **Example 2.1**.: Consider a generic element \(M\in\operatorname{Gr}(2,4)\), represented by the following row-reduced matrix: \[M=\begin{pmatrix}1&0&a&b\\ 0&1&c&d\end{pmatrix}\ \ \text{for $a,b,c,d\in\mathbb{C}$}\] The set of Plucker coordinates is given by: \[\Delta_{12}=\det\begin{pmatrix}1&0\\ 0&1\end{pmatrix}=1\,\ \Delta_{13}=\det\begin{pmatrix}1&a\\ 0&c\end{pmatrix}=c\,\ \Delta_{14}=\det\begin{pmatrix}1&b\\ 0&d\end{pmatrix}=d\] \[\Delta_{23}=\det\begin{pmatrix}0&a\\ 1&c\end{pmatrix}=-a\,\ \Delta_{24}=\det\begin{pmatrix}0&b\\ 1&d\end{pmatrix}=-b\,\ \Delta_{34}=\det\begin{pmatrix}a&b\\ c&d\end{pmatrix}=ad-bc\] These coordinates satisfy the algebraic relation \((-b)\cdot c=1\cdot(ad-bc)+(-a)\cdot d\), i.e. \[\Delta_{24}\cdot\Delta_{13}=\Delta_{12}\cdot\Delta_{34}+\Delta_{23}\cdot\Delta_{14}.\] This equation is in fact the Ptolemy relation from ancient Greek geometry. To see the connection, draw a quadrilateral inscribed in a circle with vertices cyclically labeled \(1,2,3,4\), and write \(\Delta_{ij}\) for the distance between vertices \(i\) and \(j\). Then the above relation among the lengths of the sides and diagonals of the quadrilateral will always hold.
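As a quick illustration (a sketch of ours, not code from the paper), one can recompute Example 2.1 and check the displayed relation symbolically; the encoding below, using sympy and 0-indexed column sets, is our own choice.

```python
# Recompute Example 2.1: Plucker coordinates of a point of Gr(2,4) and the
# three-term Plucker relation  D_24 * D_13 = D_12 * D_34 + D_23 * D_14.
from itertools import combinations
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
M = sp.Matrix([[1, 0, a, b],
               [0, 1, c, d]])

# Delta_J = maximal minor on the column set J (0-indexed here).
plucker = {J: M[:, list(J)].det() for J in combinations(range(4), 2)}

lhs = plucker[(1, 3)] * plucker[(0, 2)]                      # D_24 * D_13
rhs = plucker[(0, 1)] * plucker[(2, 3)] + plucker[(1, 2)] * plucker[(0, 3)]
assert sp.simplify(lhs - rhs) == 0
```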
We will consider the homogeneous coordinate ring of \(\operatorname{Gr}(k,n)\), denoted \(\mathbb{C}[\widehat{\operatorname{Gr}}(k,n)]\), where \(\widehat{\operatorname{Gr}}(k,n)\) is the affine cone over \(\operatorname{Gr}(k,n)\), taken in the Plucker embedding. Scott showed in [10] that \(\mathbb{C}[\widehat{\operatorname{Gr}}(k,n)]\) is a **cluster algebra** in the sense of [11]. We refer the reader to [11] for an in-depth exposition of cluster algebras, but will briefly summarize here. In essence, a cluster algebra is a commutative ring generated by **cluster variables**; cluster variables are grouped into families called **clusters**, and are produced recursively from an initial cluster through a process called **mutation**. Mutation relations are encoded by a **quiver**, as follows. **Definition 2.2**.: Let \(Q\) be a quiver with vertex set \(Q_{0}\) and arrow set \(Q_{1}\). For each mutable vertex \(r\in Q_{0}\), define the **mutation in direction r** of \(Q\), denoted \(\mu_{r}(Q)\), as another quiver on vertices \((Q_{0}\setminus\{r\})\cup\{r^{\prime}\}\) obtained by the following three-step process: * For any arrow \(s\to r\), draw an arrow \(r^{\prime}\to s\), and for any arrow \(r\to t\), draw an arrow \(t\to r^{\prime}\); * For any path \(s\to r\to t\), draw an arrow \(s\to t\); * Delete any created 2-cycles. We label the new vertex \(r^{\prime}\) according to the following relation: \[r^{\prime}r=\prod_{(s\to r)\in Q_{1}}s+\prod_{(r\to t)\in Q_{1}}t.\] **Example 2.3**.: The quivers in Figure 1 illustrate the cluster structure in \(\mathbb{C}[\widehat{\operatorname{Gr}}(2,5)]\). In the left quiver, the mutable vertices are \(\Delta_{13}\) and \(\Delta_{35}\); all other vertices are not mutable or **frozen**, which is indicated by drawing them in boxes or "ice cubes." The quiver on the right arises from mutation in direction \(\Delta_{13}\), and the new vertex label is \[\frac{\Delta_{12}\Delta_{35}+\Delta_{23}\Delta_{15}}{\Delta_{13}}=\Delta_{25}.\] For any \(k<n\), the "rectangles seed" defined in [11] provides an initial quiver and set of cluster variables that generate \(\mathbb{C}[\widehat{\operatorname{Gr}}(k,n)]\) as a cluster algebra. These initial cluster variables are a subset of the Plucker coordinates, and all Plucker coordinates appear as cluster variables. However, for \(k\geq 3\) and \(n\geq 6\), certain cluster variables are homogeneous polynomial functions of degree greater than 1 in the Plucker coordinates. We study several types of non-Plucker cluster variables throughout this paper. While lower-dimensional cells of the Grassmannian also admit cluster algebra structures (see for instance [14], which describes the twist map for general positroid varieties), in this paper we focus on seeds associated to the _top cell_ of the Grassmannian. In particular, this restriction causes our cluster algebra structure to come equipped with frozen cluster variables, which correspond to determinants of circularly consecutive subsets of columns, i.e. Pluckers of the form \(\Delta_{i,i+1,\ldots,i+k-1}\) where indices are taken modulo \(n\).
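Definition 2.2 can equivalently be phrased as exchange-matrix mutation; the following minimal sketch (our encoding, not from the paper) records a quiver as a skew-symmetric integer matrix \(B\) with \(B[i][j]\) equal to the number of arrows \(i\to j\) minus the number of arrows \(j\to i\).

```python
# Quiver mutation in matrix form: mu_r reverses arrows at r, composes paths
# through r, and cancels 2-cycles, as in the three steps of Definition 2.2.
def mutate(B, r):
    n = len(B)
    return [[-B[i][j] if r in (i, j)
             else B[i][j] + (abs(B[i][r]) * B[r][j] + B[i][r] * abs(B[r][j])) // 2
             for j in range(n)] for i in range(n)]

# e.g. the quiver 1 -> 2 -> 3, mutated at the middle vertex (index 1):
B = [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]
print(mutate(B, 1))  # [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
```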
### Plabic Graphs In this section, we introduce plabic graphs and their connection to the cluster algebra structure on \(\mathbb{C}[\widehat{\operatorname{Gr}}(k,n)]\); this study was pioneered by Postnikov in [14]. See [13] for a more detailed exposition. **Definition 2.4**.: A **plabic graph** \(G\) is a planar bicolored graph embedded in a disk, with \(n\) boundary vertices of degree \(1\), labeled \(1,\ldots,n\) clockwise. The embedding must be proper, i.e. the edges of \(G\) must not cross, and each internal vertex of \(G\) must be connected by a path to some boundary vertex of \(G\). All of our plabic graphs will in fact be bipartite, and all of our boundary vertices will be colored black. We will refer to the set of vertices of \(G\) as \(V(G)\), the set of edges of \(G\) as \(E(G)\), and the set of faces of \(G\) as \(F(G)\). We next define an important method of labeling the faces of a _reduced_ plabic graph. **Definition 2.5**.: A **trip** (also called a **Postnikov strand** or **zigzag path**) \(i\to j\) in a plabic graph is a directed path in \(G\) that 1. either connects two boundary vertices or is a closed cycle containing no boundary vertices 2. obeys the _rules of the road_ by turning maximally right at black internal vertices and maximally left at white internal vertices. We may label any face \(f\in G\) with the set \[I_{f}:=\{i\ |\ f\text{ lies to the left of the trip ending at vertex }i\}.\] As shown in [14], the above definition will produce a labeling on the faces of the plabic graph such that each face is labeled by the same number of indices and no two faces have the same label. Note that these conventions for trips and face labels coincide with the conventions in Muller and Speyer's work [15]. One may construct a reduced plabic graph from certain initial seeds for \(\operatorname{Gr}(k,n)\) (such as the rectangles seed of [13]) by reversing the operation described in [13, Definition 7.1.4]: in particular, by taking the planar dual of the quiver, identifying frozen vertices with boundary faces, and checking that the face labels defined above agree with the vertex labels of the original quiver. Figure 2 depicts two examples of the result of such a construction, arising from the quivers in Figure 1. We note that because we are working in the top cell of \(\operatorname{Gr}(k,n)\), all trips as in Definition 2.5 are of the form \(i\to i+k\mod n\), and boundary faces are labeled by circularly consecutive Plucker coordinates. Mutations at a vertex of degree \(4\) in a quiver correspond to **square moves** in a plabic graph. **Definition 2.6**.: Let \(G\) be a plabic graph, and let \(F\) be a square face of \(G\) such that each vertex bordering \(F\) is trivalent.1 Suppose that \(F\) is bounded by strands with sinks \(a,b,c,d\) such that \(F\) is labeled by the \(k\)-element subset \(Sbd:=S\cup\{b,d\}\) for some \(S\in\binom{[n]}{k-2}\) and \(b,d\in[n]\setminus S\). Define the **square move** at face \(Sbd\) to be the local move on \(G\) that swaps the colors of all vertices bordering \(Sbd\), and updates the label of \(Sbd\) to \(Sac\). See Figure 3. Footnote 1: In general, performing a square move also requires that the vertices bordering \(F\) alternate in color, but this is immediate from the fact that all plabic graphs we consider are bipartite. We note that if one begins with a reduced plabic graph associated to a Plucker seed and applies a sequence of square moves, the resulting plabic graph will again correspond to a Plucker seed. Our work concerns non-Plucker cluster variables, which arise from quiver mutations at vertices with valence greater than \(4\). Figure 2. Two plabic graphs for \(\operatorname{Gr}(2,5)\) (with face labeling), arising from the quivers in Figure 1. Figure 3. A square move, the plabic graph analogue of quiver mutation at a degree 4 vertex. Note that the new face label satisfies the Plücker relation \(\Delta_{Sac}=\frac{\Delta_{Sab}\Delta_{Scd}+\Delta_{Sad}\Delta_{Sbc}}{\Delta_{Sbd}}\).
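The exchange relation in the caption of Figure 3 is an instance of the three-term Plucker relation; as a sanity check (ours, not from the paper), it can be tested numerically on a random matrix.

```python
# Check  D_{Sac} D_{Sbd} = D_{Sab} D_{Scd} + D_{Sad} D_{Sbc}  for a random
# 3 x 6 matrix, with S = {1} and (a, b, c, d) = (2, 3, 5, 6) as an example.
import numpy as np

M = np.random.default_rng(2).standard_normal((3, 6))
D = lambda J: np.linalg.det(M[:, [j - 1 for j in sorted(J)]])

S, (a, b, c, d) = [1], (2, 3, 5, 6)
assert np.isclose(D(S + [a, c]) * D(S + [b, d]),
                  D(S + [a, b]) * D(S + [c, d]) + D(S + [a, d]) * D(S + [b, c]))
```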
### Quadratic and Cubic Differences of Plucker Coordinates This paper will focus on Grassmannian cluster algebras in the case \(k=3\); in this section, we introduce notation for several classes of degree 2 and 3 cluster variables that appear in this setting, as well as another distinguished degree 3 polynomial in Plucker coordinates. When working in \(\mathbb{C}[\widehat{\mathrm{Gr}}(3,6)]\), a cluster algebra of finite type \(D_{4}\), we adopt the conventions of [10], and write \(X\) to refer to the compound determinant \[\det\bigg{(}v_{1}\times v_{2}\quad v_{3}\times v_{4}\quad v_{5}\times v_{6}\bigg{)}.\] Here \(v_{i}\) denotes the \(i\)th column of \(M\in\mathrm{Gr}(3,6)\), treated as a vector in \(\mathbb{R}^{3}\), and \(\times\) denotes the usual three-dimensional cross-product (taking two \(\mathbb{R}^{3}\)-vectors as input and outputting an \(\mathbb{R}^{3}\)-vector). Using cross-product identities (2) and (3) from [10], i.e. \[u\cdot(v\times w)=(u\times v)\cdot w=\det(u\quad v\quad w)\text{ and }(u\times v)\cdot(w\times z)=\det\begin{pmatrix}u\cdot w&u\cdot z\\ v\cdot w&v\cdot z\end{pmatrix},\] we can re-express \(X\) as the quadratic difference \(\Delta_{134}\Delta_{256}-\Delta_{156}\Delta_{234}\), which may also be written as \(\Delta_{124}\Delta_{356}-\Delta_{123}\Delta_{456}\) or \(\Delta_{125}\Delta_{346}-\Delta_{126}\Delta_{345}\). Analogously, we write \(Y\) to refer to the compound determinant \[\det\bigg{(}v_{6}\times v_{1}\quad v_{2}\times v_{3}\quad v_{4}\times v_{5}\bigg{)},\] which can be re-expressed as a quadratic difference as \(\Delta_{145}\Delta_{236}-\Delta_{123}\Delta_{456}\), \(\Delta_{146}\Delta_{235}-\Delta_{156}\Delta_{234}\), or \(\Delta_{136}\Delta_{245}-\Delta_{126}\Delta_{345}\). Scott observes in [10] that \(X\) and \(Y\) appear as cluster variables for \(\mathbb{C}[\widehat{\mathrm{Gr}}(3,6)]\). When working in \(\mathbb{C}[\widehat{\mathrm{Gr}}(3,8)]\), of finite type \(E_{8}\), Scott also shows that all dihedral translates of the following cubics appear as cluster variables: \[A=\Delta_{134}\Delta_{258}\Delta_{167}-\Delta_{134}\Delta_{678}\Delta_{125}-\Delta_{158}\Delta_{234}\Delta_{167}\] and \[B=\Delta_{258}\Delta_{134}\Delta_{267}-\Delta_{234}\Delta_{128}\Delta_{567}-\Delta_{234}\Delta_{258}\Delta_{167}.\] We note that when the dihedral group \(D_{8}\) acts on the indices of the cubic function \(A\) in \(\mathrm{Gr}(3,8)\), the image is only of size \(8\), i.e. only the cyclic translates. For example, if we apply the reflection \(\rho:i\to 9-i\) to each entry, we get \(\rho(A)=\Delta_{568}\Delta_{147}\Delta_{238}-\Delta_{568}\Delta_{123}\Delta_{478}-\Delta_{148}\Delta_{567}\Delta_{238}=\sigma^{7}(A)\) where \(\sigma:i\to i+1\mod 8\). On the other hand, the image of \(D_{8}\)'s action on the cubic function \(B\) is indeed of size \(16\). For \(n=9\), the corresponding cluster algebra \(\mathbb{C}[\widehat{\mathrm{Gr}}(3,9)]\) is of affine type. We will prove in Section 7 that all dihedral translates of the following expression appear as cluster variables: \[C=\Delta_{124}\Delta_{357}\Delta_{689}+\Delta_{123}\Delta_{456}\Delta_{789}-\Delta_{124}\Delta_{356}\Delta_{789}-\Delta_{123}\Delta_{457}\Delta_{689}.\] Note that the image of the action of the dihedral group \(D_{9}\) on \(C\) has size \(9\), since \(C\) is invariant under the reflection \(\rho:i\to 10-i\), i.e. \(\rho(C)=C\). 
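The first re-expression of \(X\) above is easy to confirm numerically; the following sketch (ours, not from the paper) compares the compound determinant against \(\Delta_{134}\Delta_{256}-\Delta_{156}\Delta_{234}\) on a random \(3\times 6\) matrix.

```python
# Verify det(v1 x v2, v3 x v4, v5 x v6) = D_134 D_256 - D_156 D_234.
import numpy as np

M = np.random.default_rng(0).standard_normal((3, 6))
v = [M[:, i] for i in range(6)]
D = lambda *cols: np.linalg.det(M[:, [c - 1 for c in cols]])

X = np.linalg.det(np.column_stack([np.cross(v[0], v[1]),
                                   np.cross(v[2], v[3]),
                                   np.cross(v[4], v[5])]))
assert np.isclose(X, D(1, 3, 4) * D(2, 5, 6) - D(1, 5, 6) * D(2, 3, 4))
```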
We will also consider the expression \[Z=\Delta_{145}\Delta_{278}\Delta_{369}-\Delta_{245}\Delta_{178}\Delta_{369}-\Delta_{123}\Delta_{456}\Delta_{789}-\Delta_{129}\Delta_{345}\Delta_{678}\in\mathbb{C}[\widehat{\mathrm{Gr}}(3,9)],\] which appears in Example 8.1 of [12]. Its significance arises from the fact that every cluster monomial arises from a standard Young tableau, but not every standard Young tableau arises from a cluster monomial. The ones that do arise from cluster monomials are called **real** tableaux, due to their manifestation in quantum affine algebras. In finite type, every Young tableau is real, but since \(\mathrm{Gr}(3,9)\) is of affine type, some non-real tableaux appear. \(Z\) is the lowest-degree element of \(\mathbb{C}[\widehat{\mathrm{Gr}}(3,9)]\) that arises from a non-real tableau. We note that \(\sigma^{3}(Z)=Z\) and \(\rho(Z)=\sigma^{8}(Z)\), so the image of \(D_{9}\) acting on \(Z\) is simply of size \(3\). To extend these classes of cluster variables, we increase \(n\). Given \(n\geq n^{\prime}\geq 3\) and a subset \(I\subseteq[n]\) with \(|I|=n^{\prime}\), we define a projection \(\pi_{n,I}:\mathrm{Gr}(3,n)\to\mathrm{Gr}(3,n^{\prime})\) that retains exactly the columns indexed by \(I\) of any matrix \(M\in\mathrm{Gr}(3,n)\). We have from [10] that in the case of \(n^{\prime}=6\), for any \(n\geq 6\) and \(I\subseteq[n]\) with \(|I|=6\), the expressions \(X^{I}:=X\circ\pi_{n,I}\) and \(Y^{I}:=Y\circ\pi_{n,I}\) appear as cluster variables of \(\mathrm{Gr}(3,n)\). Theorem 8.8 of [11] implies the analogous statement for expressions that project to \(A\), \(B\), and \(C\). The following widely expected conjecture, a reformulation of Conjecture 3.2 of [12] in the case of \(\mathrm{Gr}(3,n)\), generalizes this fact. **Conjecture 2.7**.: Given \(n\geq n^{\prime}\geq 3\) and a subset \(I\subseteq[n]\) with \(|I|=n^{\prime}\), we have that \(x\) is a cluster variable in \(\mathbb{C}[\widehat{\mathrm{Gr}}(3,n^{\prime})]\) if and only if \(x\circ\pi_{n,I}\) is a cluster variable in \(\mathbb{C}[\widehat{\mathrm{Gr}}(3,n)]\). We note that the above expressions for \(A\), \(B\), and \(C\) shed light on Conjecture 3.1 of [12], which in part posits that there are \(24\binom{n}{8}+9\binom{n}{9}\) degree \(3\) cluster variables in \(\mathrm{Gr}(3,n)\). Indeed, for any \(n\) we have described exactly \(24\binom{n}{8}+9\binom{n}{9}\) such cluster variables: \(8\binom{n}{8}\) dihedral translates and projections of \(A\), \(16\binom{n}{8}\) dihedral translates and projections of \(B\), and \(9\binom{n}{9}\) dihedral translates and projections of \(C\). It remains to show that there are no other degree \(3\) cluster variables.
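The orbit sizes quoted above for \(A\), \(B\), and \(C\) can be reproduced by brute force, acting on indices and re-sorting each triple exactly as in the computation of \(\rho(A)\) above; the encoding below is our own, not the paper's. (We omit \(Z\), whose invariance \(\sigma^{3}(Z)=Z\) appears to hold as an identity of ring elements rather than term by term, so a purely syntactic orbit count does not apply to it.)

```python
# Orbit sizes of A, B (n = 8) and C (n = 9) under the dihedral action on
# column indices.  A signed term is stored as (sign, frozenset of triples).
def expr(*terms):
    return frozenset((s, frozenset(frozenset(t) for t in ts)) for s, ts in terms)

def orbit_size(e, n):
    def act(f):
        return frozenset((s, frozenset(frozenset((f(i) - 1) % n + 1 for i in t)
                                       for t in term)) for s, term in e)
    maps = [lambda i, j=j: i + j for j in range(n)]             # rotations sigma^j
    maps += [lambda i, j=j: (n + 1 - i) + j for j in range(n)]  # reflections
    return len({act(f) for f in maps})

A = expr((+1, ((1,3,4), (2,5,8), (1,6,7))),
         (-1, ((1,3,4), (6,7,8), (1,2,5))),
         (-1, ((1,5,8), (2,3,4), (1,6,7))))
B = expr((+1, ((2,5,8), (1,3,4), (2,6,7))),
         (-1, ((2,3,4), (1,2,8), (5,6,7))),
         (-1, ((2,3,4), (2,5,8), (1,6,7))))
C = expr((+1, ((1,2,4), (3,5,7), (6,8,9))), (+1, ((1,2,3), (4,5,6), (7,8,9))),
         (-1, ((1,2,4), (3,5,6), (7,8,9))), (-1, ((1,2,3), (4,5,7), (6,8,9))))
print(orbit_size(A, 8), orbit_size(B, 8), orbit_size(C, 9))  # expected: 8 16 9
```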
### Dimer Configurations We discuss the combinatorial model that dimers on plabic graphs provide for Plucker coordinates. This connection was discovered by Postnikov in [14] and developed by Talaska in [15]. We also define \(m\)-fold dimers; in Sections 4 and 5, we will describe how these objects extend the model to products of \(m\) Plucker coordinates. **Definition 2.8**.: A **dimer configuration** (also called an **almost perfect matching**) \(D\) on a plabic graph \(G\) is a subset of edges of \(G\) such that 1. each interior vertex of \(G\) is adjacent to exactly one edge in \(D\), and 2. each boundary vertex of \(G\) is adjacent to either no edges or one edge in \(D\). Let \(\partial(D)\) be the set of boundary vertices adjacent to one edge in \(D\); we call \(\partial(D)\) the **boundary condition** for \(D\). Also let \(\mathcal{D}(G)\) denote the set of dimer configurations on \(G\), and define \[\mathcal{D}_{J}(G):=\{D\in\mathcal{D}(G)\ |\ \partial(D)=J\}.\] Given a plabic graph \(G\) for \(\operatorname{Gr}(k,n)\), we may assign nonnegative real weights to its edges, and define the **edge weight** of any dimer \(D\) to be the product of the weights of the edges it contains: \[\operatorname{wt}_{e}(D)=\prod_{e\in D}\operatorname{wt}(e).\] The following theorem, stated concisely in [1] with references to other work, relates dimer configurations to points in the Grassmannian. **Theorem 2.9** ([16],[17],[18],[19]).: _Let \(G\) be a plabic graph with black boundary vertices \(1,2,\dots,n\), and let \(k\) be the number of internal white vertices minus the number of internal black vertices in \(G\). Also let \(\operatorname{wt}:E(G)\longrightarrow\mathbb{R}_{\geq 0}\) be any weight function on the edges of \(G\). Then there exists some \(\tilde{M}\) in the affine cone \(\widehat{\operatorname{Gr}}(k,n)\) over the Grassmannian such that for all \(J\in\binom{[n]}{k}\),_ \[\Delta_{J}(\tilde{M})=\sum_{D\in\mathcal{D}_{J}(G)}\operatorname{wt}_{e}(D).\] Here the affine cone arises since Plucker coordinates embed the Grassmannian into projective space, so the value of an individual \(\Delta_{J}(M)\) is not well-defined in \(\mathbb{R}\). To instead arrive at a point in the Grassmannian, we may identify edge weight functions that yield the same sets of Pluckers up to scaling, or alternatively take the equivalence class of any \(\tilde{M}\). **Example 2.10**.: Consider the weighted plabic graph in Figure 4. The edges highlighted in red form a dimer \(D\) with \(\partial(D)=\{1,4\}\) and edge weight \(\operatorname{wt}_{e}(D)=achnk\). To model products of Plucker coordinates, we define the notion of higher dimer configurations. **Definition 2.11**.: An \(m\)**-fold dimer configuration** \(D\) of a plabic graph \(G\) is a multiset of the edges of \(G\) such that each vertex is contained in exactly \(m\) edges in \(D\). In other words, it is a superimposition of \(m\) single dimer configurations of \(G\). When \(m=2\), we call these **double dimer configurations**, and when \(m=3\) we call them **triple dimer configurations**. We refer to the set of \(m\)-fold dimer configurations of \(G\) as \(\mathcal{D}^{m}(G)\). Figure 4. An example of a dimer \(D\) with \(\partial D=\{1,4\}\) on the plabic graph at the right of Figure 2. **Example 2.12**.: In Figure 5, the red edges, blue edges, and green edges are individually single dimer configurations of the plabic graph with boundary conditions \(\{4,5,6\}\), \(\{2,3,4\}\) and \(\{1,7,8\}\) respectively. Forgetting the distinctions between these colors yields a corresponding triple dimer configuration. Figure 5. Three overlaid single dimer configurations (left) and the associated triple dimer (right).
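For small examples, the partition functions of Theorem 2.9 can be evaluated by exhaustive search; the sketch below (our encoding of a graph as weighted edge triples, not the paper's code) collects the edge weights of all dimer configurations, grouped by boundary condition.

```python
# Brute-force boundary measurement: for each subset of edges matching every
# interior vertex exactly once and each boundary vertex at most once, add its
# edge weight to the total for its boundary condition J.
from itertools import combinations
from collections import Counter

def dimer_sums(edges, boundary):
    """edges: list of (vertex, vertex, weight); boundary: boundary vertices."""
    interior = {v for e in edges for v in e[:2]} - set(boundary)
    sums = Counter()
    for r in range(1, len(edges) + 1):
        for D in combinations(edges, r):
            deg = Counter(v for e in D for v in e[:2])
            if all(deg[v] == 1 for v in interior) and all(deg[b] <= 1 for b in boundary):
                w = 1.0
                for _, _, wt in D:
                    w *= wt
                sums[frozenset(b for b in boundary if deg[b] == 1)] += w
    return sums  # sums[J] = sum of wt_e(D) over D in D_J(G)
```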
### The Twist Map In this section, we define an important cluster algebra automorphism called the _twist map_. This map was first introduced in [1]; Marsh and Scott linked it to dimer partition functions in [13], and Muller and Speyer showed in [13] that it provides an inverse to the famous boundary measurement map introduced by Postnikov [14]. Each paper uses a slightly different set of conventions, and we will use yet another, but we will clarify the relationships between our twist and those in [13] and [13]. We first give exposition following [13, Sec. 2]. Given a matrix \(M\) representing an element of \(\operatorname{Gr}(k,n)\), with column vectors \(v_{1},v_{2},\ldots v_{n}\in\mathbb{R}^{k}\) (in order), we define the **generalized cross-product** \(v=v_{1}\times v_{2}\times\ldots\times v_{k-1}\) to be the unique vector in \(\mathbb{R}^{k}\) satisfying \(v\cdot w=\det(v_{1}\quad v_{2}\ldots v_{k-1}\quad w)\) for all \(w\in\mathbb{R}^{k}\). Then, the **(left) twist** of \(M\), denoted as \(\mathscr{T}(M)\), is defined to be the \(k\)-by-\(n\) matrix whose \(i\)th column vector is given by \(\mathscr{T}(M)_{i}=\varepsilon_{i}\cdot v_{i-k+1}\times v_{i-k+2}\times\ldots\times v_{i-1}\), where \[\varepsilon_{i}=\begin{cases}(-1)^{i(k-i)}&i\leq(k-1)\\ 1&i\geq k\end{cases}\] and the subscripts are taken modulo \(n\) with signs introduced when wrapping around. Explicitly, \[\mathscr{T}(M)_{i}=\begin{cases}(-1)^{k-i}v_{1}\times v_{2}\times\ldots\times v_{i-1}\times v_{i-k+1+n}\times\ldots\times v_{n}&\text{ if }i\leq k-1\\ v_{i-k+1}\times v_{i-k+2}\times\ldots\times v_{i-1}&\text{ if }i\geq k\end{cases}.\] In the special case of this paper where \(k=3\), by construction we obtain \[\mathscr{T}(M)=[v_{n-1}\times v_{n}\quad v_{n}\times v_{1}\quad v_{1}\times v_{2}\ldots v_{n-2}\times v_{n-1}]=[v_{n-1}\times v_{n}\quad-v_{1}\times v_{n}\quad v_{1}\times v_{2}\quad\ldots\quad v_{n-2}\times v_{n-1}]\] where only the usual cross-product is required. To compare the value of Plucker coordinates before and after the twist, let \(\Delta_{J}\) denote the determinant of the submatrix \(M_{J}\) (given by the columns indexed by set \(J\)), and let \(\mathscr{T}(\Delta_{J})\) denote the determinant of the corresponding submatrix of the twisted matrix, i.e. \(\det\left(\mathscr{T}(M)_{J}\right)\). When \(J=\{a,b,c\}\), we have \[\mathscr{T}(\Delta_{abc})=\det\bigg{(}v_{a-2}\times v_{a-1}\quad v_{b-2}\times v_{b-1}\quad v_{c-2}\times v_{c-1}\bigg{)},\] where indices are taken modulo \(n\). In [13], Muller and Speyer define a different version of the left twist, which we denote \(\mathscr{T}_{MuSp}\). They also define a **right twist** analogously, which is its inverse. We define a right twist \(\mathscr{T}^{*}\) analogously to the left twist of [13] above, via \(\mathscr{T}^{*}(M)_{i}=\varepsilon_{i}^{\prime}\cdot v_{i+1}\times\ldots\times v_{i+k-1}\), where \[\varepsilon_{i}^{\prime}=\begin{cases}(-1)^{(k-1)(n-i+1)}&i\geq(n-k+2)\\ 1&i\leq(n-k+1)\end{cases}\] again with reduction modulo \(n\) and appropriate signs. Explicitly, \[\mathscr{T}^{*}(M)_{i}=\begin{cases}(-1)^{k-n+i-1}v_{1}\times v_{2}\times\ldots\times v_{i-n+k-1}\times v_{i+1}\times\ldots\times v_{n}&\text{if }i\geq n-k+2\\ v_{i+1}\times v_{i+2}\times\ldots\times v_{i+k-1}&\text{if }i\leq n-k+1\end{cases}.\] When \(k=3\) and \(J=\{a,b,c\}\), we have \[\mathscr{T}^{*}(\Delta_{abc})=\det\bigg{(}v_{a+1}\times v_{a+2}\quad v_{b+1}\times v_{b+2}\quad v_{c+1}\times v_{c+2}\bigg{)},\] where indices are taken modulo \(n\). This \(\mathscr{T}^{*}\) is the twist we will use for the rest of the paper. We now recover a version of [14, Proposition 3.5] for our right twist in the special case of \(k=3\), using the cross-product identities of [13] mentioned in Section 2.3. If \(J=\{a,a+1,a+2\}\), i.e. 
\(\Delta_{J}\) is a frozen variable, then \[\mathscr{T}^{*}(\Delta_{J})=\det\bigg{(}v_{a+1}\times v_{a+2}\quad v_{a+2}\times v_{a+3}\quad v_{a+3}\times v_{a+4}\bigg{)}\] \[=\det(v_{a+1}\ v_{a+2}\ v_{a+3})\det(v_{a+2}\ v_{a+3}\ v_{a+4})-\det(v_{a+1}\ v_{a+3}\ v_{a+4})\det(v_{a+2}\ v_{a+2}\ v_{a+3})\] \[=\Delta_{a+1,a+2,a+3}\Delta_{a+2,a+3,a+4},\] since \(\det(v_{a+2}\ v_{a+2}\ v_{a+3})=0\), having a repeated column. Similarly, if \(J=\{a,a+1,b\}\) where \(b\neq a-1,a+2\), then \[\mathscr{T}^{*}(\Delta_{J})=\det\bigg{(}v_{a+1}\times v_{a+2}\quad v_{a+2}\times v_{a+3}\quad v_{b+1}\times v_{b+2}\bigg{)}\] \[=\det(v_{a+1}\ v_{a+2}\ v_{a+3})\det(v_{a+2}\ v_{b+1}\ v_{b+2})-\det(v_{a+1}\ v_{b+1}\ v_{b+2})\det(v_{a+2}\ v_{a+2}\ v_{a+3})\] \[=\Delta_{a+1,a+2,a+3}\Delta_{a+2,b+1,b+2}.\] Note that since the sign of a 3-cycle is \(+1\), we may reorder the indices of the resulting Plucker coordinates to be increasing modulo \(n\) without concern for signs. When \(J=\{a,b,c\}\) where none of \(a,b,c\) are adjacent, none of the Plucker coordinates appearing in the quadratic differences vanish, and hence we recover the expressions \[\mathscr{T}^{*}(\Delta_{J})=\det\bigg{(}v_{a+1}\times v_{a+2}\quad v_{b+1}\times v_{b+2}\quad v_{c+1}\times v_{c+2}\bigg{)}=\begin{cases}X^{a+1,\ a+2,\ b+1,\ b+2,\ c+1,\ c+2}&a,b,c\neq n-1\\ Y^{a+1,\ a+2,\ b+1,\ b+2,\ c+1,\ c+2}&\text{otherwise}\end{cases}.\] Recall that order is disregarded for the superscripts, so we may always write them in increasing order modulo \(n\). We conclude by noting that, as stated in [14, Remark 6.3], our right twist \(\mathscr{T}^{*}\) agrees with the right twist \(\mathscr{T}^{*}{}_{MuSp}\) up to rescaling. In the case of \(\mathscr{T}^{*}(\Delta_{J})\) where \(J=\{a_{1},a_{2},\ldots,a_{k}\}\), we get \[\mathscr{T}^{*}(\Delta_{J})=\Delta_{I_{[a_{1}]}}\Delta_{I_{[a_{2}]}}\cdots\Delta_{I_{[a_{k}]}}\cdot\mathscr{T}^{*}{}_{MuSp}(\Delta_{J}), \tag{2.1}\] where the notation \(\Delta_{I_{[a_{j}]}}\) is shorthand for the Plucker coordinate for the cyclically connected subset \(\{a_{j},a_{j}+1,a_{j}+2,\ldots,a_{j}+k-1\}\) where indices are taken modulo \(n\).
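The \(k=3\) right twist is straightforward to implement: every \(\varepsilon^{\prime}_{i}\) equals \(1\) since \(k-1\) is even, and the explicit formula above shows that plain index wraparound introduces no extra sign. The sketch below (ours, not from the paper) builds \(\mathscr{T}^{*}(M)\) and confirms the frozen-variable computation on a random point of \(\operatorname{Gr}(3,7)\).

```python
# Column i (1-indexed) of the k = 3 right twist is v_{i+1} x v_{i+2}, mod n.
import numpy as np

def right_twist_k3(M):
    n = M.shape[1]
    return np.column_stack([np.cross(M[:, (i + 1) % n], M[:, (i + 2) % n])
                            for i in range(n)])

n, a = 7, 3
M = np.random.default_rng(1).standard_normal((3, n))
T = right_twist_k3(M)
D = lambda N, J: np.linalg.det(N[:, [(j - 1) % n for j in J]])

# T*(D_{a,a+1,a+2}) = D_{a+1,a+2,a+3} * D_{a+2,a+3,a+4}:
assert np.isclose(D(T, [a, a + 1, a + 2]),
                  D(M, [a + 1, a + 2, a + 3]) * D(M, [a + 2, a + 3, a + 4]))
```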
## 3. Dimer Face Weights In this section, we describe a method of weighting dimers using the face labels that arise from strands of a plabic graph, rather than the arbitrary real edge weights described in Section 2.4. Work of Marsh and Scott [14] and later Muller and Speyer [14] connects a given boundary set to a sum of face weights of its corresponding dimers via the twist map described in Section 2.5, though Marsh and Scott use different conventions than ours. We show that our face weights coincide with face weights in [14] up to scaling, and conclude that they describe our version of the right twist. We also describe a translation between edge and face weights similar to that in [14]. We begin by establishing the notation that we will use to define our version of face weights. Given a plabic graph \(G\) and a face \(f\in F(G)\), labeled using strands as described in Subsection 2.2, we define the following quantities: \[I_{f}:=\text{ the face label of }f\qquad\text{ and }\qquad W_{f}:=\#\{\text{white vertices bordering }f\}.\] We say \(f\) is an **inner face** if it is not adjacent to the boundary of the circle, and **outer** otherwise. Given a dimer \(D\), we define the following quantities based on whether a given face is inner or outer. \[D_{f}:=\begin{cases}\#\{\text{edges of $D$ that border $f$}\}&\text{ if $f$ is inner}\\ \#\{\text{edges of $D$ not adjacent to boundary vertices that border $f$}\}&\text{ if $f$ is outer}\end{cases}.\] **Example 3.1**.: Figure 6 depicts the four single dimer configurations on a certain plabic graph for \(\operatorname{Gr}(3,7)\) that have the boundary condition \(\{3,4,6\}\). Let \(D_{1}\), \(D_{2}\), \(D_{3}\), and \(D_{4}\) be the single dimer configurations shown left to right in colors red, orange, blue and purple respectively. * For the inner face \(\Delta_{367}\), \(W_{f}=2\); we have \((D_{1})_{f}=(D_{2})_{f}=2\) and \((D_{3})_{f}=(D_{4})_{f}=1\). * For the outer face \(\Delta_{127}\), \(W_{f}=3\); we have \((D_{1})_{f}=(D_{2})_{f}=(D_{3})_{f}=2\) and \((D_{4})_{f}=1\). We are now ready to define dimer face weights. **Definition 3.2**.: Given an \(m\)-fold dimer configuration \(D\) on a plabic graph \(G\), we define the **face weight of \(D\)** to be \[\operatorname{wt}_{f}(D)=\prod_{f\in F(G)}I_{f}^{mW_{f}-D_{f}-m}.\] **Example 3.3**.: Again, consider the dimers \(D_{1}\), \(D_{2}\), \(D_{3}\), and \(D_{4}\) on the plabic graph for \(\operatorname{Gr}(3,7)\) shown in Figure 6. There are four possible single dimer configurations with respect to the boundary condition \(\{3,4,6\}\). The weights of each of these single dimer configurations are as follows: \[\operatorname{wt}_{f}(D_{1})=\Delta_{456}\frac{\Delta_{167}\Delta_{237}\Delta_{567}}{\Delta_{267}\Delta_{367}}\ ;\ \operatorname{wt}_{f}(D_{2})=\Delta_{456}\frac{\Delta_{167}\Delta_{347}\Delta_{567}}{\Delta_{367}\Delta_{467}}\] \[\operatorname{wt}_{f}(D_{3})=\Delta_{456}\frac{\Delta_{167}\Delta_{457}}{\Delta_{467}}\ ;\ \operatorname{wt}_{f}(D_{4})=\Delta_{456}\frac{\Delta_{127}\Delta_{567}}{\Delta_{267}}.\] The following theorem is central to our main results; we will extend it to \(m\)-fold dimers in future sections. **Theorem 3.4**.: _Let \(G\) be a plabic graph with black boundary vertices \(1,2,\dots,n\), let \(k\) be the number of internal white vertices minus the number of internal black vertices in \(G\), and let \(J\) be a \(k\)-element subset of \([n]\). Then, with \(wt_{f}(D)=\prod_{f\in F(G)}I_{f}^{W_{f}-D_{f}-1}\) as in Definition 3.2 (the case \(m=1\)),_ \[\mathscr{T}^{*}(\Delta_{J})=\sum_{D\in\mathcal{D}_{J}(G)}wt_{f}(D).\] Figure 6. All single dimer configurations with boundary condition \(\{3,4,6\}\) on a certain plabic graph for \(\operatorname{Gr}(3,7)\). Proof.: We will prove this theorem by relating it to Remark 7.11 of [13], which provides the following formula for their variant \(\mathscr{T}^{*}{}_{MuSp}\) of the right twist of a Plucker coordinate: \[\mathscr{T}^{*}{}_{MuSp}(\Delta_{J})=\sum_{D\in\mathcal{D}_{J}(G)}\widetilde{\operatorname{wt}_{f}(D)},\] where \(\widetilde{\operatorname{wt}_{f}(D)}\) is defined via \[\prod_{f\in F(G)}I_{f}^{(\tilde{B}_{f}-1)-\#\{e\in D:\tilde{\partial}_{fe}=1\}},\] and \(\tilde{B}_{f}\) and \(\tilde{\partial}_{fe}\) are defined in terms of the number of edges \(e\) such that face \(f\) lies **directly upstream** of \(e\).2 Footnote 2: As in [13, Remark 5.8], such a weighting appeared previously in [11] but via different exposition. 
We have from Equation (2.1) that for \(J=\{a_{1},a_{2},\dots,a_{k}\}\), \[\mathscr{T}^{*}{}_{MuSp}(\Delta_{J})=\frac{1}{\Delta_{I_{[a_{1}]}}\Delta_{I_{[a_{2}]}}\cdots\Delta_{I_{[a_{k}]}}}\mathscr{T}^{*}(\Delta_{J}),\] where the notation \(\Delta_{I_{[a_{j}]}}\) is shorthand for the Plucker coordinate for the cyclically connected subset \(\{a_{j},a_{j}+1,a_{j}+2,\dots,a_{j}+k-1\}\) where indices are taken modulo \(n\). It will therefore suffice to show that \[\sum_{D\in\mathcal{D}_{J}(G)}\widetilde{\operatorname{wt}_{f}(D)}=\frac{1}{\Delta_{I_{[a_{1}]}}\Delta_{I_{[a_{2}]}}\cdots\Delta_{I_{[a_{k}]}}}\sum_{D\in\mathcal{D}_{J}(G)}\operatorname{wt}_{f}(D). \tag{3.1}\] Note that since our plabic graphs are bipartite, all inner faces are bordered by an even number of edges, and similarly for all outer faces since all boundary vertices are black (and we do not count the "edges" between boundary vertices). Thus \(W_{f}\), the number of white vertices adjacent to a face \(f\), is exactly half the number of edges bordering \(f\); we now have from [13, Section 5.1] that \(W_{f}=\tilde{B}_{f}\). If a given face \(f\) is inner, we immediately have that \(\#\{e\in D:\tilde{\partial}_{fe}=1\}=D_{f}\). If \(f\) is outer, the equality holds unless \(f\) lies immediately counter-clockwise to a boundary edge \(e\) that is included in the dimer cover \(D\), in which case we have \(\#\{e\in D:\tilde{\partial}_{fe}=1\}=D_{f}+1\). The boundary edges included in \(D\) are exactly those adjacent to boundary vertices \(j\in J\). Therefore, by the construction of face labels, when \(J=\{a_{1},a_{2},\dots,a_{k}\}\), the \(k\) outer faces where \(\#\{e\in D:\tilde{\partial}_{fe}=1\}=D_{f}+1\) are precisely those labeled \(I_{[a_{1}]}\), \(I_{[a_{2}]}\), \(\dots\), \(I_{[a_{k}]}\). Equation (3.1) now follows from a comparison of definitions. _Remark 3.5_.: For \(k=3\), the twist map \(\mathscr{T}^{*}\) doubles degree, while the twist map \(\mathscr{T}^{*}{}_{MuSp}\) is of degree \(-1\). This is consistent with the claims made in our proof, which assert in this case that images of \(\mathscr{T}^{*}{}_{MuSp}(\Delta_{J})\) and \(\mathscr{T}^{*}(\Delta_{J})\) agree up to a quotient by three frozen variables. The following definition and proposition provide a translation between the face weights of Definition 3.2 and the edge weights used in the statements of Subsection 2.4, creating a streamlined comparison between our results and those of other authors that are phrased in terms of edge weights. In particular, in Section 6, we will use this proposition to describe our main theorems using the language of web duality introduced in [10]. **Definition 3.6**.: Given a plabic graph \(G\) with face labels \(I_{f}\), we define the weight of an edge \(e\), denoted \(wt(e)\), via \[\operatorname{wt}(e)=\frac{\Delta_{I_{1}}\cdot\Delta_{I_{2}}\cdots\Delta_{I_{d}}}{\Delta_{I_{e(1)}}\cdot\Delta_{I_{e(2)}}},\] where \(I_{1}\) through \(I_{d}\) label the \(d\) faces bordering the black endpoint of edge \(e\), and \(I_{e(1)}\) and \(I_{e(2)}\) label the two faces that border edge \(e\) itself. See Figure 7. _Remark 3.7_.: This definition is analogous to [13, Definition 7.1], except that they consider white vertices where we consider black vertices. The significance of this switch is that by our definition, all edges adjacent to boundary vertices will have weight \(1\), since all boundary vertices of our plabic graphs are black. 
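In code, Definition 3.2 is pure bookkeeping of exponents; the sketch below (our encoding, not the paper's code) stores, for each face label, the pair \((W_{f},D_{f})\), and the consistency check uses the counts of Example 3.1.

```python
# Face-weight exponents of an m-fold dimer D: face f contributes
# I_f^(m*W_f - D_f - m), as in Definition 3.2.
def face_weight_exponents(faces, m):
    """faces: dict mapping a face label to the pair (W_f, D_f)."""
    return {label: m * W - Df - m for label, (W, Df) in faces.items()}

# From Example 3.1, the inner face 367 has W_f = 2 and (D_3)_f = 1, so its
# exponent for the single dimer D_3 is 0 -- matching the absence of
# Delta_367 from wt_f(D_3) in Example 3.3.
assert face_weight_exponents({'367': (2, 1)}, m=1) == {'367': 0}
```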
**Proposition 3.8**.: _Given an \(m\)-fold dimer configuration \(D\) on a plabic graph \(G\), consider the edge-weighting \(\operatorname{wt}(e)\) of Definition 3.6, and let \(wt_{e}(D)=\prod_{e\in D}\operatorname{wt}(e)\) as in Section 2.4. Also recall the face-weighting \(wt_{f}(D)\) of Definition 3.2. Then_ \[wt_{e}(D)\bigg/\left(\prod_{\text{inner }f\in F(G)}\Delta_{I_{f}}\right)^{m}=wt_{f}(D).\] Proof.: First, note that any \(m\)-fold dimer \(D\) is an overlay of \(m\) single dimers \(D_{1},\ldots,D_{m}\), and from the definitions we have \[\frac{\operatorname{wt}_{e}(D)}{\left(\prod_{\text{inner }f\in F(G)}\Delta_{I_{f}}\right)^{m}}=\prod_{i=1}^{m}\frac{\operatorname{wt}_{e}(D_{i})}{\prod_{\text{inner }f\in F(G)}\Delta_{I_{f}}}\qquad\text{and}\qquad\operatorname{wt}_{f}(D)=\prod_{i=1}^{m}\operatorname{wt}_{f}(D_{i}).\] It therefore suffices to prove the case where \(m=1\). For any edge \(e\in E(G)\), let \(e(1)\) and \(e(2)\) be the faces adjacent to \(e\), as in Figure 7. Also, let \(V_{D}(G)\) be the set of vertices of \(G\) that are adjacent to some edge of \(D\); note that \(V_{D}(G)\) is exactly the set of interior vertices of \(G\) together with \(\partial D\). Then by definition, since each black vertex of \(V_{D}(G)\) is covered by exactly one edge of \(D\), \[\frac{\operatorname{wt}_{e}(D)}{\prod_{\text{inner }f\in F(G)}\Delta_{I_{f}}}=\frac{\prod_{\text{black }v\in V_{D}(G)}\ \prod_{f\text{ incident to }v}\Delta_{I_{f}}}{\prod_{e\in D}\Delta_{I_{e(1)}}\Delta_{I_{e(2)}}}\cdot\frac{1}{\prod_{\text{inner }f\in F(G)}\Delta_{I_{f}}}=\frac{\prod_{\text{inner }f\in F(G)}\Delta_{I_{f}}^{\#\{\text{black }v\in V_{D}(G)\text{ incident to }f\}-1}\cdot\prod_{\text{outer }f\in F(G)}\Delta_{I_{f}}^{\#\{\text{black }v\in V_{D}(G)\text{ incident to }f\}}}{\prod_{e\in D}\Delta_{I_{e(1)}}\Delta_{I_{e(2)}}}.\] For a given \(f\in F(G)\), we extract the power of \(\Delta_{I_{f}}\) in the above quotient. If \(f\) is inner, every vertex adjacent to \(f\) is included in \(D\), and there are the same number of white vertices adjacent to \(f\) as black vertices. 
We therefore get \[\Delta_{I_{f}}^{\#\{\text{black }v\in V_{D}(G)\text{ incident to }f\}-\#\{\text{edges }e\in D\text{ bordering }f\}-1}=\Delta_{I_{f}}^{\#\{\text{white }v\in V(G)\text{ incident to }f\}-\#\{\text{edges }e\in D\text{ bordering }f\}-1}=\Delta_{I_{f}}^{W_{f}-D_{f}-1}.\] If \(f\) is outer, we get \[\Delta_{I_{f}}^{\#\{\text{black interior }v\text{ incident to }f\}+\#\{\text{boundary }v\in\partial D\text{ incident to }f\}-\#\{\text{edges }e\in D\text{ bordering }f\}}=\Delta_{I_{f}}^{\left(\#\{\text{white interior }v\text{ incident to }f\}-1\right)-\#\{\text{edges }e\in D\text{ not adjacent to the boundary bordering }f\}}=\Delta_{I_{f}}^{W_{f}-D_{f}-1},\] where the second-to-last equality follows from the fact that all boundary vertices of \(G\) are black. The product of these powers of face weights is \(\operatorname{wt}_{f}(D)\) by definition. **Example 3.9**.: Since \(\{3,4,6\}=\{a,a+1,b\}\), we have from Subsection 2.5 that \(\mathscr{T}^{*}(\Delta_{346})=\Delta_{456}\Delta_{157}.\) Taking the sum of the weights of the single dimer configurations with boundary condition \(\{3,4,6\}\), as computed in Example 3.3, yields \[\Delta_{456}\cdot\frac{\Delta_{167}\Delta_{237}\Delta_{467}\Delta_{567}+\Delta_{167}\Delta_{267}\Delta_{347}\Delta_{567}+\Delta_{167}\Delta_{267}\Delta_{367}\Delta_{457}+\Delta_{127}\Delta_{367}\Delta_{467}\Delta_{567}}{\Delta_{267}\Delta_{367}\Delta_{467}}.\] This is indeed the Laurent expansion for \(\Delta_{456}\Delta_{157}\), which is consistent with Theorem 3.4. We also note using Equation (2.1) that \(\mathscr{T}^{*}{}_{MuSp}(\Delta_{346})=\frac{\Delta_{456}\Delta_{157}}{\Delta_{345}\Delta_{456}\Delta_{167}}=\frac{\Delta_{157}}{\Delta_{345}\Delta_{167}}\), which is consistent with this map being degree \(-1\) as in Remark 3.5. ## 4. Double Dimer Configurations for Quadratic Differences In this section, we give a combinatorial interpretation via double dimer face weights for the twists of the quadratic cluster variables \(X\) and \(Y\) in \(\mathrm{Gr}(3,6)\), and of analogous expressions \(X^{S}\) and \(Y^{S}\) for any \(S\subset[n]\) with \(|S|=6\leq n\). **Theorem 4.1**.: _Given a plabic graph \(G\) for \(\mathrm{Gr}(3,n)\) and a set \(S=\{s_{1}<\cdots<s_{6}\}\subseteq[n]\), let \(\mathcal{D}^{2}_{X}(G)\) be the set of double dimers on \(G\) with paths connecting vertices in pairs \(\{s_{1},s_{6}\},\{s_{2},s_{3}\},\) and \(\{s_{4},s_{5}\}\), and let \(\mathcal{D}^{2}_{Y}(G)\) be the set of double dimers with paths connecting vertices in pairs \(\{s_{1},s_{2}\},\{s_{3},s_{4}\},\) and \(\{s_{5},s_{6}\},\) where the double dimers in each set possibly include internal doubled edges and cycles, but no additional edges adjacent to boundary vertices. 
Then_ * \(\mathscr{T}^{*}(X^{S})=\sum_{D\in\mathcal{D}^{2}_{X}(G)}2^{\#(\text{cycles in }D)}wt_{f}(D)\) * \(\mathscr{T}^{*}(Y^{S})=\sum_{D\in\mathcal{D}^{2}_{Y}(G)}2^{\#(\text{cycles in }D)}wt_{f}(D).\) Proof.: It follows from Theorem 3.4 that for any plabic graph \(G\) for \(\mathrm{Gr}(k,n)\), and for any \(I,J\subset[n]\) with \(|I|=|J|=k\), \[\mathscr{T}^{*}(\Delta_{I}\Delta_{J})=\mathscr{T}^{*}(\Delta_{I})\mathscr{T}^{*}(\Delta_{J})=\left(\sum_{D\in\mathcal{D}_{I}(G)}\mathrm{wt}_{f}(D)\right)\left(\sum_{D\in\mathcal{D}_{J}(G)}\mathrm{wt}_{f}(D)\right)=\sum_{D\in\mathcal{D}^{2}_{I,J}(G)}M_{D}\mathrm{wt}_{f}(D),\] where \(\mathcal{D}^{2}_{I,J}(G)\subset\mathcal{D}^{2}(G)\) is the set of double dimer covers \(D\) of \(G\) formed by overlaying a single dimer with boundary condition \(I\) and a single dimer with boundary condition \(J\), and the multiplicity \(M_{D}\) is the number of pairs of single dimers that become \(D\) when overlaid. To characterize \(\mathcal{D}^{2}_{I,J}(G)\), we note that the edges contained in any double dimer may be viewed as a union of connected components, each of which is a path between boundary vertices, a doubled edge, or an internal cycle; see [10]. It follows from [1, Theorem 3.1] that for any \(I,J\in\binom{[n]}{k}\), \(\mathcal{D}^{2}_{I,J}(G)\) contains exactly those double dimers consisting of \(k\) non-crossing paths, each connecting a vertex with label in \(I\) to a vertex with label in \(J\), as well as possibly some internal doubled edges and cycles; and that for any double dimer \(D\), \[M_{D}=2^{\#(\text{cycles in }D)}.\] Now in the case of \(X\), where \(k=3\), we have \[\mathscr{T}^{*}(X^{S})=\mathscr{T}^{*}(\Delta_{134})\mathscr{T}^{*}(\Delta_{256})-\mathscr{T}^{*}(\Delta_{156})\mathscr{T}^{*}(\Delta_{234})=\sum_{D\in\mathcal{D}^{2}_{134,256}(G)}2^{\#(\text{cycles in }D)}wt_{f}(D)-\sum_{D\in\mathcal{D}^{2}_{156,234}(G)}2^{\#(\text{cycles in }D)}wt_{f}(D).\] The first sum contains exactly those double dimers with paths connecting 1 to 2, 3 to 6, and 4 to 5; or 1 to 6, 2 to 3, and 4 to 5. These are the only possible non-crossing matchings. The second, negative sum contains exactly those double dimers with paths connecting 1 to 2, 3 to 6, and 4 to 5. Therefore the terms that remain after cancellation are the weights of all double dimers that connect 1 to 6, 2 to 3, and 4 to 5, which are exactly those included in \(\mathcal{D}^{2}_{X}(G)\). Similarly, in the case of \(Y\), we have \[\mathscr{T}^{*}(Y)=\mathscr{T}^{*}(\Delta_{145})\mathscr{T}^{*}(\Delta_{236})-\mathscr{T}^{*}(\Delta_{123})\mathscr{T}^{*}(\Delta_{456})=\sum_{D\in\mathcal{D}^{2}_{145,236}(G)}2^{\#(\text{cycles in }D)}wt_{f}(D)-\sum_{D\in\mathcal{D}^{2}_{123,456}(G)}2^{\#(\text{cycles in }D)}wt_{f}(D).\] The first sum contains exactly those double dimers with paths connecting 1 to 6, 2 to 5, and 3 to 4; or 1 to 2, 3 to 4, and 5 to 6. The second, negative sum contains exactly those double dimers with paths connecting 1 to 6, 2 to 5, and 3 to 4. Therefore the terms that remain after cancellation are the weights of all double dimers that connect 1 to 2, 3 to 4, and 5 to 6, which are exactly those included in \(\mathcal{D}^{2}_{Y}(G)\). This completes the proof for \(S=\{1,2,3,4,5,6\}\); the argument is identical for general \(S\).
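The cancellations in this proof reduce to enumerating non-crossing pairings between two boundary sets; the brute-force sketch below (ours, not from the paper) reproduces the two matchings named in the \(X\) case.

```python
# Enumerate matchings of I to J whose chords on the circle are non-crossing.
from itertools import permutations

def crossing(c1, c2):
    (a, b), (c, d) = sorted(c1), sorted(c2)
    return a < c < b < d or c < a < d < b

def noncrossing_matchings(I, J):
    out = []
    for perm in permutations(J):
        chords = list(zip(I, perm))
        if not any(crossing(p, q) for p in chords for q in chords if p != q):
            out.append(sorted(tuple(sorted(c)) for c in chords))
    return out

print(noncrossing_matchings([1, 3, 4], [2, 5, 6]))
# [[(1, 2), (3, 6), (4, 5)], [(1, 6), (2, 3), (4, 5)]]
```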
**Example 4.2**.: Figure 8 depicts all double dimer configurations in \(\mathcal{D}^{2}_{X^{123567}}(G)\), which is defined in Theorem 4.1 to be the set of double dimers with paths connecting vertex 1 to 7, 2 to 3, and 5 to 6. One may streamline the construction of these dimers by first finding "forced edges" that must necessarily be included a certain number of times in any dimer with the desired connectivity. For example, there must be a single dimer edge adjacent to every boundary vertex except vertex 4, which cannot be adjacent to any dimer edges. Computing the twist of \(X^{123567}\) using the methods described in Section 2.5 yields \(\mathscr{T}^{*}(X^{123567})=(127)(234)(X^{134567})\). Theorem 4.1 asserts that the Laurent expansion for \(\mathscr{T}^{*}(X^{123567})\) should be the sum of the face weights of the double dimers in Figure 8, and indeed we have \[(127)(234)(X^{134567})=(127)(234)\frac{(167)(237)(345)(467)}{(267)(347)}+(127)(234)\frac{(127)(234)(367)^{2}(457)}{(237)(267)(347)}\] \[+(127)(234)\frac{(127)(345)(367)(467)}{(267)(347)}+(127)(234)\frac{(167)(234)(367)(457)}{(267)(347)}+(127)(234)\frac{(123)(367)(457)}{(237)}\] \[=wt_{f}(D_{1})+wt_{f}(D_{2})+wt_{f}(D_{3})+wt_{f}(D_{4})+wt_{f}(D_{5})\] where \(D_{1},D_{2},D_{3},D_{4},D_{5}\) are the red, orange, blue, purple and brown double dimer configurations in Figure 8, respectively. Figure 8. Double dimer configurations satisfying the connectivity pattern for \(X^{123567}\). ## 5. Triple Dimer Configurations for Cubic Differences ### Webs In order to classify the triple dimer configurations that give expressions for \(A\) and \(B\), we associate each triple dimer configuration to a **web**. Webs were first introduced by Kuperberg in [13]. **Definition 5.1**.: A **web** \(W\) is a planar bipartite graph embedded in the disc such that all internal vertices are trivalent. Within the disc, \(W\) may also include some vertex-less directed cycles, as well as some directed edges from a black boundary vertex to a white boundary vertex, which we call "paths." We will require all boundary vertices of our webs to be either univalent or isolated (0-valent); by an abuse of notation, we will treat this condition as intrinsic to the definition for the remainder of the paper. Given a web \(W\), we may consider its **web interior** \(\hat{W}\), which only consists of the internal vertices and edges of \(W\). A **connected component** of \(W\) is a connected component of \(\hat{W}\) along with the boundary vertices it attaches to; namely, we do not consider the boundary of the disc as an edge that connects all boundary vertices. A **non-elliptic** web \(W\) is a web containing no interior faces bounded by four or fewer edges; i.e., it contains no contractible cycles, bigons, or squares. Every web may be expressed as a sum of nonelliptic webs via the reduction moves in Figure 9; see [10] and [11]. Lastly, we introduce terminology for some common web components. We will call an internal white vertex incident to three black boundary vertices a **tripod**; and we will call a component with one black internal vertex adjacent to three white internal vertices, each of which is adjacent to two black boundary vertices, a **hexapod**. See Figure 10. ### Triple Dimer Configurations As Webs Given a triple dimer configuration \(D\) on a plabic graph \(G\), let \(G_{D}\) be the subgraph of \(G\) containing all edges included at least once in \(D\). We create a web \(W(D)\) corresponding to \(D\) as follows: 1. For each boundary vertex in \(G\), create a corresponding boundary vertex in \(W(D)\). Color each boundary vertex white if it is adjacent to a doubled edge in \(D\), and black otherwise. 2. 
For each interior cycle in \(G_{D}\) consisting entirely of bivalent vertices, add a vertexless loop to \(W(D)\), oriented arbitrarily. 3. For each chain of bivalent vertices connecting two boundary vertices \(v,v^{\prime}\in G_{D}\), construct a path between the corresponding boundary vertices in \(W(D)\). To orient this path, note that the chain must correspond to a path alternating between singled and doubled edges in \(D\), and since all boundary vertices of \(G\) are black, that path must have even length. Thus exactly one of \(v\) and \(v^{\prime}\) must be adjacent to a doubled edge in \(D\), and therefore colored white in \(W(D)\). Orient the path in \(W(D)\) towards the white vertex. 4. For each connected component of \(G_{D}\) containing at least one trivalent vertex, include a corresponding component in \(W(D)\) with all bivalent vertices removed, merging each pair of edges that was adjacent to a deleted vertex. Retain the color of all interior trivalent vertices. (Note that all trivalent vertices in \(G_{D}\) must be adjacent to single edges in \(D\), so the graph will remain bipartite, again because chains of bivalent vertices in \(G_{D}\) correspond to paths alternating between singled and doubled edges in \(D\).) Note that this construction of \(W(D)\) is equivalent to the construction of a web from a triple dimer via "weblike subgraphs" given in [1]. Figure 9. Planar skein relations for web reduction. Figure 10. A web composed of a hexapod (on vertices 1, 2, 3, 7, 8, and 9) and a tripod (on vertices 4, 5, and 6). **Example 5.2**.: Consider the triple dimer configuration \(D\) on the plabic graph \(G\) in Figure 11. We create the corresponding subgraph \(G_{D}\) by removing duplicate edges of \(D\). In \(W(D)\), we include an oriented edge from vertex \(2\) to vertex \(1\), since vertex \(1\) is adjacent to a doubled edge in \(D\), and color vertex \(1\) white, while all other boundary vertices remain black. We also include two connected components corresponding to the components of \(G_{D}\) that contain trivalent vertices, with bivalent vertices removed. Note that we ignore isolated edges in \(G_{D}\), which come from tripled edges in \(D\). Figure 11. From a triple dimer configuration \(D\) (left), to the corresponding graph \(G_{D}\) (middle), to a web \(W(D)\) (right). ### Enumeration of Non-Elliptic Webs The proofs of our main theorems will rely on the enumeration of several classes of non-elliptic webs in Lemmas 5.6, 5.7, 5.8, and 5.9. Propositions 5.3 and 5.5 place significant bounds on this enumeration, in particular confirming that the classes are finite. Lemma 5.4 is used to prove Proposition 5.5. **Proposition 5.3**.: _Let \(W\) be a nonelliptic web with \(n\) boundary vertices. Let \(c\) be the number of cycles in \(W\), let \(k\) be the number of connected components of \(W\), and let \(|V_{int}|\) be the number of internal vertices in \(W\). Then \(|V_{int}|=n+2c-2k\)._ Proof.: By planarity of \(W\) and the Euler characteristic, we have that \(|E|=|V|-k+c\), where \(E\) is the set of edges of \(W\) and \(V\) is the set of all vertices in \(W\). Moreover, we have \(\sum_{v\in V}\deg(v)=2|E|\). Since all internal vertices in the web are trivalent and we have \(n\) boundary vertices of degree \(1\), we see \[n+3(|V|-n)=\sum_{v\in V}\deg(v)=2|E|=2|V|-2k+2c,\] which implies that \(|V|-2n=-2k+2c\). Since \(|V|=|V_{int}|+n\), it follows that \(|V_{int}|=n+2c-2k\).
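As a quick sanity check of Proposition 5.3 (ours), consider the web of Figure 10: the hexapod contributes four internal vertices and the tripod one, with \(n=9\) boundary vertices, \(k=2\) components, and \(c=0\) cycles.

```python
# |V_int| = n + 2c - 2k for the tripod-plus-hexapod web of Figure 10.
n, c, k = 9, 0, 2
assert n + 2 * c - 2 * k == 4 + 1  # 4 hexapod vertices + 1 tripod vertex
```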
Then there exist two adjacent vertices in \(\hat{W}\) that are bivalent in \(\hat{W}\)._

Proof.: Given a connected nonelliptic web interior \(\hat{W}\) that consists entirely of cycles, we may construct \(\hat{W}\) by beginning with a central \(2m\)-gon and adding \(2m\)-gons exterior to it one by one, so that every intermediate step remains a valid web interior \(\hat{V}\) of a different web \(V\). Since webs are trivalent, all vertices with degree less than three in \(\hat{V}\) must be adjacent to the boundary in \(V\); and since webs are planar, only vertices not enclosed by other edges in \(V\) ("exterior vertices") may possibly be adjacent to the boundary in \(V\). Therefore, at any step of the process of building \(\hat{W}\), the additional \(2m\)-gon cannot share more than one edge with any one old \(2m\)-gon (see the left of Figure 12), and it must also share only one set of adjacent edges along the boundary of the web interior (see the right of Figure 12): otherwise, in both cases, we would have interior bivalent vertices in \(\hat{V}\).

Additionally, note that at any step, every exterior vertex of \(\hat{V}\) must be either bivalent or trivalent, since \(\hat{V}\) consists only of cycles. Let \(\mathcal{BT}_{\hat{V}}\) be the number of bivalent exterior vertices minus the number of trivalent exterior vertices of \(\hat{V}\). When we build \(\hat{W}\) starting from a central \(2m\)-gon \(\hat{W_{0}}\), this central \(2m\)-gon has at least \(6\) bivalent vertices and \(0\) trivalent vertices, so \(\mathcal{BT}_{\hat{W_{0}}}\geq 6\). Figure 13 shows the possible effects of adding a hexagon; none of these change \(\mathcal{BT}\). The effects of adding a larger \(2m\)-gon would be analogous, with possibly more trivalent vertices being replaced by possibly more bivalent vertices, so adding a larger \(2m\)-gon would only increase \(\mathcal{BT}\). Therefore \(\hat{W}\) has \(\mathcal{BT}_{\hat{W}}\geq 6\) also, i.e. \(\hat{W}\) has more exterior bivalent vertices than exterior trivalent vertices. The exterior vertices of \(\hat{W}\) (each of which is either bivalent or trivalent) form a cycle, and since the discrepancy \(\mathcal{BT}_{\hat{V}}\) is positive for each such intermediate web interior \(\hat{V}\), at least two exterior bivalent vertices must be adjacent.

Figure 11. From a triple dimer configuration \(D\) (left), to the corresponding graph \(G_{D}\) (middle), to a web \(W(D)\) (right).

**Proposition 5.5**.: _Let \(W\) be a nonelliptic web, let \(c\) be the number of cycles in \(W\), and let \(|V_{int}|\) be the number of internal vertices in \(W\) (equivalently, \(|V_{int}|\) is the number of vertices in \(\hat{W}\)). Then_

* _if_ \(c\geq 1\)_,_ \(|V_{int}|\geq 2c+4\)_,_
* _if_ \(c\geq 2\)_,_ \(|V_{int}|\geq 2c+6\)_,_
* _if_ \(c\geq 3\)_,_ \(|V_{int}|\geq 2c+7\)_,_
* _and if_ \(c\geq 4\)_,_ \(|V_{int}|\geq 2c+8\)_._

Proof.: Let \(a_{c}\) be the minimal number of vertices in a nonelliptic web interior with \(c\) cycles, so \(|V_{int}|\geq a_{c}\). We first show that \(a_{c}\) is also the minimum number of vertices in a connected nonelliptic web interior with \(c\) cycles that is composed entirely of cycles. To accomplish this, given a nonelliptic web interior \(\hat{W}\) with \(c\) cycles and \(v\) vertices, we demonstrate that if \(\hat{W}\) is disconnected or contains a vertex or edge that is not part of a cycle, then \(v>a_{c}\).
If \(\hat{W}\) contains a vertex that is not part of a cycle, then we may delete it to arrive at a web interior with \(c\) cycles and \(v-1\) vertices, so by minimality, \(v>a_{c}\). If \(\hat{W}\) contains edges that are not part of cycles, we may delete them to create a disconnected web interior that has \(v\) vertices and is composed entirely of cycles, so the only remaining case is when \(\hat{W}\) is disconnected and composed entirely of cycles. In this case, by Lemma 5.4, there exist two adjacent bivalent vertices in each connected component; identifying two such pairs of vertices from separate components yields a web interior with \(c\) cycles and \(v-2\) vertices, so \(v>a_{c}\) in this case also. Therefore \(a_{c}\) is the minimum number of vertices in a connected nonelliptic web interior with \(c\) cycles that is composed entirely of cycles.

Let \(\hat{W}\) be such a web interior with \(c\) cycles and \(a_{c}\) vertices. Then from Lemma 5.4, there exist two adjacent bivalent vertices in \(\hat{W}\); these vertices must both be part of only one

Figure 12. Ways to add a hexagon to a web interior that do not produce another valid web interior.

Figure 13. The five ways to add a hexagon to a web interior: the boundary initially consists of the blue and green vertices (where the green vertices are bivalent), and after the hexagon is added, it consists of the red and green vertices (where the green vertices are trivalent). Note that in each case, adding the hexagon does not change \(\mathcal{BT}\). For instance, adding a hexagon as in the top leftmost figure replaces two bivalent vertices with four bivalent vertices and two trivalent vertices.

cycle. Therefore removing both vertices creates a web interior with \(c-1\) cycles and \(a_{c}-2\) vertices, so \(a_{c-1}\leq a_{c}-2\), i.e. \(a_{c}\geq a_{c-1}+2\). It follows that, for any \(i<c\),

\[|V_{int}|\geq a_{c}\geq a_{i}+2(c-i)=2c+(a_{i}-2i).\]

Clearly \(a_{1}=6\), the minimal number of vertices in a nonelliptic web interior with one cycle. The only way to construct a connected nonelliptic web interior consisting entirely of two cycles is to overlap the cycles at one edge; the number of vertices is minimized when both cycles are hexagons, in which case there are ten vertices, so \(a_{2}=10\). Next, we must be able to construct any connected nonelliptic web interior consisting entirely of three cycles by adding a cycle to a connected nonelliptic web interior consisting entirely of two cycles. There is one place to add the new cycle that creates only three new vertices, and no way to create fewer; therefore \(a_{3}=13\). The same is true when adding a fourth cycle, so \(a_{4}=16\). Now, applying the above formula for \(i=1,2,3,4\) yields the four desired statements. See [http://oeis.org/A121149](http://oeis.org/A121149) for more details.

We now begin our enumeration of particular non-elliptic webs with 8 boundary vertices, which will assist in the proofs of Theorems 5.10 and 5.11.

**Lemma 5.6**.: _All non-elliptic webs with seven black boundary vertices and one white boundary vertex that do not have paths between boundary vertices are listed in Figure 14._

Proof.: Proposition 5.3 gives \(|V_{int}|=8+2c-2k\) for the number of internal vertices in such a web \(W\) that has \(k\) connected components and \(c\) cycles. If \(k=1\), \(|V_{int}|=2c+6\), so from Proposition 5.5, \(c<3\) in this case.
If \(c=2\), \(|V_{int}|=10\); all of these vertices are required to construct the two cycles, but due to the colors of the boundary vertices, it is impossible to connect all boundary vertices to the interior hexagons without adding more vertices. Therefore \(c\) cannot be 2. If \(c=1\), \(|V_{int}|=8\). Six of these internal vertices must be used to create the hexagon; and since there is only one white boundary vertex, the other two internal vertices must be white and adjacent to two of the black vertices in the hexagon. The resulting web is possible to complete, as shown in the top left web in Figure 14. Finally, if \(c=0\), \(|V_{int}|=6\). The only bipartite tree webs with 7 black leaves, 1 white leaf, and 6 internal trivalent vertices are shown in Figure 14; there are two.

If \(k=2\), \(|V_{int}|=2c+4\), so from Proposition 5.5, \(c<2\) in this case. If \(c=1\), \(|V_{int}|=6\); all of these vertices are required to construct the cycle, and again due to the colors of the boundary vertices, it is impossible to complete the web without adding more vertices. Therefore \(c=0\). The only bipartite forest webs with 7 black leaves, 1 white leaf, 4 internal trivalent vertices, and 2 connected components are shown in Figure 14; there are five.

Figure 14. The only non-elliptic webs with seven black boundary vertices and one white boundary vertex that do not contain paths between boundary vertices.

If \(k\geq 3\), we have from Proposition 5.5 that \(c=0\), implying that \(|V_{int}|\leq 2\). It is impossible to connect all boundary vertices with this few internal vertices. Therefore Figure 14 enumerates all pathless non-elliptic webs with seven black boundary vertices and one white boundary vertex.

**Lemma 5.7**.: _All non-elliptic webs with a path between two adjacent boundary vertices and six other black boundary vertices are listed in Figure 15, up to reflection._

Proof.: Proposition 5.3 gives \(|V_{int}|=6+2c-2k\) for the number of internal vertices in such a web \(W\) with \(k\) connected components and \(c\) cycles. If \(k=1\), then \(|V_{int}|=2c+4\), so from Proposition 5.5, \(c<2\) in this case. If \(c=1\), \(|V_{int}|=6\), and we must use all six internal vertices to make a hexagon. However, the three black internal vertices cannot be connected to the boundary since the web must be bipartite, so it is impossible to complete a valid web where \(c=1\). If \(c=0\), the web must be composed of a path \(2\to 1\) and a bipartite tree with six black leaves and four trivalent internal vertices. Suppose that we have a white vertex adjacent to a black leaf. We claim that this white vertex must be adjacent to exactly one other leaf. If it were attached to at least two other leaves, that would force us to finish a connected component without including all the vertices. If it were attached to no other leaves, this would force the creation of two other internal black vertices, which would in turn force the creation of four white vertices, two for each new black vertex. This would yield a total of seven internal vertices, which is too many. Therefore, the white vertex must be adjacent to exactly one other black leaf, along with one new black internal vertex adjacent to two more white internal vertices. This tree appears alongside the requisite path in the two webs shown in the top row of Figure 15.

If \(k=2\), then \(|V_{int}|=2c+2\), so from Proposition 5.5, \(c=0\).
There is only one way to construct a bipartite forest with two connected components, six black leaves, and three trivalent internal vertices; the possible configurations of these trees are shown alongside the requisite path in the bottom three webs in Figure 15. Finally, if \(k\geq 3\), then \(|V_{int}|\leq 2c\), but \(c=0\) by Proposition 5.5 and there is no way to complete a web with \(0\) internal vertices. Therefore Figure 15 enumerates all non-elliptic webs with a path between adjacent boundary vertices and six other black boundary vertices, up to reflection.

**Lemma 5.8**.: _The only non-elliptic web with a path between non-adjacent boundary vertices and six other black boundary vertices is depicted in Figure 16._

Proof.: For ease of reference, we label the target of the path vertex \(1\), and continue labeling vertices clockwise. We first show that the only possible path between non-adjacent boundary vertices must be \(5\to 1\). If we had a path \(3\to 1\) or \(7\to 1\), then since webs are planar, one boundary vertex (\(2\) or \(8\) respectively) would be isolated, unable to be in the same connected component as any other boundary vertex. Then Proposition 5.3 gives \(|V_{int}|=1+2c-2k\) for this portion of the web, and by Proposition 5.5, it cannot have any cycles. Therefore this portion of the web must have \(-1\) internal vertices, which is impossible, so there is no way to complete the web. Similarly, if we had a path \(4\to 1\) or \(6\to 1\), then two vertices (\(2\) and \(3\) or \(7\) and \(8\)

Figure 16. The unique web that connects non-adjacent boundary vertices.

Figure 15. All possible nonelliptic webs with a path between two adjacent boundary vertices and six other black boundary vertices.

respectively) would be isolated. Then Proposition 5.3 gives \(|V_{int}|=2+2c-2k\) for this portion of the web, and by Proposition 5.5, it cannot have any cycles (since it has at least one connected component). Therefore this portion of the web must have zero internal vertices and only one connected component, which is impossible, so there is no way to complete the web.

We now show that Figure 16 depicts the only web with a path \(5\to 1\). This path isolates the vertices \(2,3,4\) and \(6,7,8\), so Proposition 5.3 gives \(|V_{int}|=3+2c-2k\) for the portions of the web on either side of the path. Neither side can have any cycles by Proposition 5.5, so the web must have one internal vertex adjacent to each of these boundary vertices. This produces the web in Figure 16.

The following lemma will be of use in the proofs of Theorems 5.12 and 5.13.

**Lemma 5.9**.: _All non-elliptic webs with nine black boundary vertices are the dihedral translates of those listed in Figure 17._

Proof.: Proposition 5.3 gives \(|V_{int}|=9+2c-2k\) for the number of internal vertices in such a web \(W\) that has \(k\) connected components and \(c\) cycles. Note that since there are no white boundary vertices, this web cannot contain any directed paths. If \(k=1\), \(|V_{int}|=2c+7\), so from Proposition 5.5, \(c<4\) in this case. If \(c=3\), \(|V_{int}|=13\); all of these vertices are required to construct the three cycles, but due to the colors of the boundary vertices, it is impossible to connect all boundary vertices to the interior hexagons without adding more vertices. Thus, \(c\) cannot be 3.
If \(c=2\), \(|V_{int}|=11\); all but one of these vertices are required to construct the two cycles, but again due to the colors of the boundary vertices, it is impossible to connect all boundary vertices to the interior hexagons without adding more vertices. Therefore \(c\) cannot be 2. If \(c=1\), \(|V_{int}|=9\). Six of these internal vertices must be used to create the hexagon; and since there are only black boundary vertices, the other three internal vertices must be white and adjacent to two of the black vertices in the hexagon. The resulting web is possible to complete, as shown in Figure 17. Finally, if \(c=0\), \(|V_{int}|=7\). The only bipartite tree with 9 black leaves and 7 internal trivalent vertices is shown.

If \(k=2\), then \(|V_{int}|=2c+5\), which implies that \(c<2\). If \(c=1\), then there are seven internal vertices, six of which must comprise a cycle (a hexagon). However, then there are not enough internal vertices left to connect to nine degree one black vertices as two connected components. On the other hand, if \(c=0\), there are five internal vertices, and it is possible to construct configurations consisting of a hexapod and a tripod as the two connected components.

If \(k=3\), then \(|V_{int}|=2c+3\), which implies that \(c=0\), and there are simply three internal vertices. Consequently, the only allowable configurations in this case are composed of three tripods. Lastly, we observe that if \(k\geq 4\), we again have \(c=0\) and hence only one internal vertex; it is impossible to connect all boundary vertices with this few internal vertices, so no configurations exist in this case.

Figure 17. All nonelliptic webs with nine black boundary vertices.

### Webs for Cubic Differences

Our main theorems give non-elliptic webs that describe triple dimers for twists of \(A\), \(B\), \(C\), \(Z\), and their dihedral translates. We work through examples of their use in Section 9.

We begin by establishing notation. Given \(n\geq 8\) and a plabic graph \(G\) for \(\operatorname{Gr}(3,n)\), recall that \(\mathcal{D}^{3}(G)\) is the set of triple dimers on \(G\), and let \(\mathcal{W}\) denote the set of nonelliptic webs. For any \(D\in\mathcal{D}^{3}(G)\), we may write its corresponding web \(W(D)\) as a sum of nonelliptic summands:

\[W(D)=\sum_{W\in\mathcal{W}}C_{W}^{D}W,\]

where \(C_{W}^{D}\in\mathbb{Z}_{\geq 0}\) is the coefficient of the nonelliptic web \(W\) in \(W(D)\). Additionally, given any web \(W\) with \(n^{\prime}\) boundary vertices, \(n\geq n^{\prime}\), and \(S=\{s_{1}<\cdots<s_{n^{\prime}}\}\subseteq[n]\), let \(W^{S}\) denote the web with \(n\) boundary vertices and the following properties:

* All boundary vertices with labels in \([n]\setminus S\) are isolated, i.e. they are not adjacent to edges.
Additionally, for any \(n\geq 8\), \(\sigma\in D_{8}\), and \(S=\{s_{1}<\cdots<s_{8}\}\subseteq[n]\), we have that \(\mathscr{T}^{*}(\sigma(A)^{S})=\sum_{D\in\mathcal{D}^{3}}C^{D}_{\sigma(batwing)^{S }}wt_{f}(D)\)._ **Theorem 5.11**.: _In Gr\((3,8)\), write \(B=B_{1}-B_{2}-B_{3}\)_ \[=\Delta_{258}\Delta_{134}\Delta_{267}-\Delta_{234}\Delta_{128}\Delta_{567}- \Delta_{234}\Delta_{258}\Delta_{167}.\] _Then_ \[\mathscr{T}^{*}(B)=\sum_{D\in\mathcal{D}^{3}(G)}C^{D}_{octopus}wt_{f}(D),\] _where the "octopus" is the nonelliptic web pictured in Figure 18(ii). Additionally, for any \(n\geq 8\), \(\sigma\in D_{8}\), and \(S=\{s_{1}<\cdots<s_{8}\}\subseteq[n]\), we have that \(\mathscr{T}^{*}(\sigma(B)^{S})=\sum_{D\in\mathcal{D}^{3}}C^{D}_{\sigma(octopus )^{S}}wt_{f}(D)\)._ **Theorem 5.12**.: _In Gr\((3,9)\), write \(C=C_{1}+C_{2}-C_{3}-C_{4}\)_ \[=\Delta_{124}\Delta_{357}\Delta_{689}+\Delta_{123}\Delta_{456}\Delta_{789}- \Delta_{124}\Delta_{356}\Delta_{789}-\Delta_{123}\Delta_{457}\Delta_{689}.\] _Then_ \[\mathscr{T}^{*}(C)=\sum_{D\in\mathcal{D}^{3}(G)}C^{D}_{\text{hexa-crab}}wt_{ f}(D),\] _where the "hexa-crab" is the nonelliptic web pictured in Figure 18(iii). Additionally, for any \(n\geq 9\), \(\sigma\in D_{9}\), and \(S=\{s_{1}<\cdots<s_{9}\}\subseteq[n]\), we have that \(\mathscr{T}^{*}(\sigma(C)^{S})=\sum_{D\in\mathcal{D}^{3}}C^{D}_{\sigma(hexa-crab)^{S }}wt_{f}(D)\)._ **Theorem 5.13**.: _In Gr\((3,9)\), write \(Z=Z_{1}-Z_{2}-Z_{3}-Z_{4}\)_ \[=\Delta_{145}\Delta_{278}\Delta_{369}-\Delta_{245}\Delta_{178}\Delta_{369}- \Delta_{123}\Delta_{456}\Delta_{789}-\Delta_{129}\Delta_{345}\Delta_{678}.\] _Then_ \[\mathscr{T}^{*}(Z)=\sum_{D\in\mathcal{D}^{3}(G)}C^{D}_{tri-crab}wt_{f}(D),\] _where the "tri-crab" is the nonelliptic web pictured in Figure 18(iv)._ We will present an example of the application of Theorem 5.10 in Section 9.1, and an example of the application of Theorem 5.11 in Section 9.2. In order to prove Theorems 5.10 through 5.13, we define a notion of _compatibility_ between webs and triple products of Plucker coordinates. We enumerate nonelliptic webs that are compatible with \(A_{1}\), and then show that the set of nonelliptic webs compatible with \(A_{2}\) or \(A_{3}\) is exactly the set of nonelliptic webs compatible Figure 18. Non-elliptic webs corresponding to cubic differences in Gr\((3,8)\) and Gr\((3,9)\). (i) is the batwing corresponding to \(A\), (ii) is the octopus corresponding to \(B\), (iii) is the hexa-crab corresponding to \(C\), and (iv) is the tri-crab corresponding to \(Z\). with \(A_{1}\) that are not the batwing. It follows intuitively that the batwing should be the only web compatible with \(A_{1}-A_{2}-A_{3}\). Rigorously, we show that the twist of \(A_{1}\) is the sum of the face weights of triple dimers that correspond to webs that have as a summand one of the webs with which \(A_{1}\) is compatible. Proving the analogous statement for \(A_{2}\) and \(A_{3}\) allows cancellation to yield the given formula for the twist of \(A\). A similar cancellation occurs with Theorem 5.11 for \(B\) and the octopus, Theorem 5.12 for \(C\) and the hexa-crab, and Theorem 5.13 for \(Z\) and the tri-crab. **Definition 5.14**.: Let \(W\) be a non-elliptic web with \(n\) boundary vertices \(v_{1},\ldots,v_{n}\). Also let \(I,J,K\) be subsets of \([n]\) with \(|I|=|J|=|K|=3\); we associate them to the Plucker coordinates \(\Delta_{I}\), \(\Delta_{J}\), and \(\Delta_{K}\). 
By Lemma 4.11 of [1], \(W\) has an edge coloring in the usual sense (that no incident edges have the same color) using the three colors red, blue, and green. We write that \(W\) is **compatible** with the product \(\Delta_{I}\Delta_{J}\Delta_{K}\) if there exists such an edge coloring of \(W\) that satisfies the following conditions for all \(i\in[n]\):

* If \(i\in I\setminus J\setminus K\) (resp. \(J\setminus I\setminus K\), \(K\setminus I\setminus J\)), then \(v_{i}\) is black and adjacent to a red (resp. blue, green) edge.
* If \(i\in I\cap J\setminus K\) (resp. \(I\cap K\setminus J\), \(J\cap K\setminus I\)), then \(v_{i}\) is white and adjacent to a green (resp. blue, red) edge.
* Otherwise, \(v_{i}\) is adjacent to no edges in \(W\).

We write that \(W\) is **uniquely compatible** with \(\Delta_{I}\Delta_{J}\Delta_{K}\) if there exists a unique such edge coloring.

_Remark 5.15_.: This definition is a special case of the notion of _consistent labeling_ defined in [1, Section 4.5]. In Lam's notation, \(a(I,J,K;W)\) counts the number of edge colorings that satisfy the above conditions; thus we write that \(W\) is compatible with \(\Delta_{I}\Delta_{J}\Delta_{K}\) if \(a(I,J,K;W)>0\), and uniquely compatible with \(\Delta_{I}\Delta_{J}\Delta_{K}\) if \(a(I,J,K;W)=1\).

_Remark 5.16_.: The definition of compatibility is best understood through the lens of dimers. \(W\) is compatible with \(\Delta_{I}\Delta_{J}\Delta_{K}\) exactly when it is possible to construct a triple dimer (on some plabic graph) that corresponds to \(W\) by overlaying a red dimer with boundary condition \(I\), a blue dimer with boundary condition \(J\), and a green dimer with boundary condition \(K\).

We next present lemmas enumerating the non-elliptic webs corresponding to each term of \(A\), \(B\), \(C\), and \(Z\). Each proof involves examining the webs enumerated in Lemmas 5.6, 5.7, 5.8, and 5.9, in an attempt to find an edge coloring that satisfies the compatibility conditions. Even after some immediate reductions, the lists of webs to be tested are cumulatively quite long, so we defer these proofs to Section 8.

**Lemma 5.17**.: _The non-elliptic webs compatible with \(A_{1}=\Delta_{134}\Delta_{258}\Delta_{167}\) are pictured in Figure 19. Web (ii) of Figure 19 is the only non-elliptic web compatible with \(A_{2}=\Delta_{134}\Delta_{125}\Delta_{678}\), and web (iii) of Figure 19 is the only non-elliptic web compatible with \(A_{3}=\Delta_{158}\Delta_{234}\Delta_{167}\). Additionally, each compatibility is unique._

**Lemma 5.18**.: _The non-elliptic webs compatible with \(B_{1}=\Delta_{258}\Delta_{134}\Delta_{267}\) are pictured in Figure 20. Web (iii) of Figure 20 is the only non-elliptic web compatible with \(B_{2}=\Delta_{234}\Delta_{128}\Delta_{567}\), and webs (ii) and (iv) of Figure 20 are the only non-elliptic webs compatible with \(B_{3}=\Delta_{234}\Delta_{258}\Delta_{167}\). Additionally, each compatibility is unique._

Figure 19. All non-elliptic webs compatible with \(A_{1}\). Web (i) is the batwing, web (ii) is the only non-elliptic web compatible with \(A_{2}\), and web (iii) is the only non-elliptic web compatible with \(A_{3}\).

**Lemma 5.19**.: _Figure 21 depicts all non-elliptic webs compatible with \(C_{1}=\Delta_{124}\Delta_{357}\Delta_{689}\).
Web (ii) is the only non-elliptic web compatible with \(C_{2}=\Delta_{123}\Delta_{456}\Delta_{789}\), webs (ii) and (iii) are the only non-elliptic webs compatible with \(C_{3}=\Delta_{124}\Delta_{356}\Delta_{789}\), and webs (ii) and (iv) are the only non-elliptic webs compatible with \(C_{4}=\Delta_{123}\Delta_{457}\Delta_{689}\). Additionally, each compatibility is unique._

**Lemma 5.20**.: _Figure 22 depicts all non-elliptic webs compatible with \(Z_{1}=\Delta_{145}\Delta_{278}\Delta_{369}\). Webs (ii) through (v) are the only non-elliptic webs compatible with \(Z_{2}=\Delta_{245}\Delta_{178}\Delta_{369}\), web (vi) is the only non-elliptic web compatible with \(Z_{3}=\Delta_{123}\Delta_{456}\Delta_{789}\), and web (vii) is the only non-elliptic web compatible with \(Z_{4}=\Delta_{129}\Delta_{345}\Delta_{678}\). Additionally, each compatibility is unique._

Figure 21. Webs corresponding to the triple products in \(C\).

Figure 20. All non-elliptic webs compatible with \(B_{1}\). Web (iii) is the only non-elliptic web compatible with \(B_{2}\), and webs (ii) and (iv) are the only non-elliptic webs compatible with \(B_{3}\).

At last, we complete the proofs of our formulas for the twists of \(A\), \(B\), \(C\), and \(Z\), as well as their projections and dihedral translates. The arguments are similar at this stage, so we combine them.

Proof of Theorems 5.10, 5.11, 5.12, and 5.13.: It follows from Theorem 3.4 that for any plabic graph \(G\) for \(\operatorname{Gr}(3,n)\), and for any \(I,J,K\subset[n]\) of size \(3\),

\[\mathscr{T}^{*}(\Delta_{I}\Delta_{J}\Delta_{K})=\mathscr{T}^{*}(\Delta_{I})\mathscr{T}^{*}(\Delta_{J})\mathscr{T}^{*}(\Delta_{K})\]
\[=\left(\sum_{D\in\mathcal{D}_{I}(G)}\operatorname{wt}_{f}(D)\right)\left(\sum_{D\in\mathcal{D}_{J}(G)}\operatorname{wt}_{f}(D)\right)\left(\sum_{D\in\mathcal{D}_{K}(G)}\operatorname{wt}_{f}(D)\right)\]
\[=\sum_{D\in\mathcal{D}_{I,J,K}^{3}(G)}M_{D}\operatorname{wt}_{f}(D),\]

where \(\mathcal{D}_{I,J,K}^{3}(G)\subset\mathcal{D}^{3}(G)\) is the set of triple dimer configurations \(D\) on \(G\) formed by overlaying three single dimers with boundary conditions \(I\), \(J\), and \(K\) respectively, and where the multiplicity \(M_{D}\) is the number of triples of single dimers that become \(D\) when overlaid. It follows from [1, Lemma 4.12] that \(\mathcal{D}_{I,J,K}^{3}(G)\) contains all dimers \(D\) such that \(W(D)\) is compatible with \(\Delta_{I}\Delta_{J}\Delta_{K}\), each with multiplicity

\[M_{D}=\sum_{W\in\mathcal{W}}C_{W}^{D}a(I,J,K;W),\]

where \(a(I,J,K;W)\) is the number of edge colorings of \(W\) compatible with \(\Delta_{I}\Delta_{J}\Delta_{K}\), as in Remark 5.15. Lemma 5.17 asserts that \(a(A_{1};W)\) is \(0\) for all nonelliptic webs except those shown in Figure 19, for which it is \(1\); that \(a(A_{2};W)\) is \(0\) for all nonelliptic webs except the one shown in Figure 19(ii), for which it is \(1\); and that \(a(A_{3};W)\) is \(0\) for all nonelliptic webs except the one shown in Figure 19(iii), for which it is \(1\).
It follows that

\[\mathscr{T}^{*}(A)=\mathscr{T}^{*}(A_{1})-\mathscr{T}^{*}(A_{2})-\mathscr{T}^{*}(A_{3})=\sum_{D\in\mathcal{D}_{A_{1}}^{3}(G)}\operatorname{wt}_{f}(D)-\sum_{D\in\mathcal{D}_{A_{2}}^{3}(G)}\operatorname{wt}_{f}(D)-\sum_{D\in\mathcal{D}_{A_{3}}^{3}(G)}\operatorname{wt}_{f}(D)\]
\[=\sum_{D\in\mathcal{D}^{3}(G)}\left(\sum_{\substack{W\in\mathcal{W}\\ \text{in Figure 19}}}C_{W}^{D}-C_{(ii)}^{D}-C_{(iii)}^{D}\right)\operatorname{wt}_{f}(D)=\sum_{D\in\mathcal{D}^{3}(G)}C_{batwing}^{D}\operatorname{wt}_{f}(D),\]

where \(C_{(ii)}^{D}\) and \(C_{(iii)}^{D}\) denote the coefficients in \(W(D)\) of webs (ii) and (iii) of Figure 19; these contributions cancel, leaving exactly the coefficient of the batwing of Figure 19(i), which proves the formula for \(\mathscr{T}^{*}(A)\). The same cancellation applied to Lemmas 5.18, 5.19, and 5.20 yields the formulas for \(B\), \(C\), and \(Z\), and the statements for projections and dihedral translates follow by relabeling the boundary vertices.

## 6. Duality and Immanants

Following the notation of [11], for an \(r\)-dimensional vector space \(U\) and a boundary condition \(\lambda\in\{0,1,\ldots,r\}^{n}\), let \(\mathcal{W}_{\lambda}(U)\) denote the space spanned by \(\mathrm{SL}_{r}\)-webs with \(n\) boundary vertices, where \(\lambda_{i}\) records the multiplicity of boundary vertex \(v_{i}\):

* If \(r=1\), then \(\mathrm{SL}_{1}\)-webs are sets of boundary vertices, and \(v_{i}\) should be included in the set if and only if \(\lambda_{i}=1\).
* If \(r=2\), then \(\mathrm{SL}_{2}\)-webs are spanned by non-crossing matchings, and \(v_{i}\) should be included in the matching if and only if \(\lambda_{i}=1\).
* If \(r=3\), then \(\mathrm{SL}_{3}\)-webs are those defined in Section 5.1. We color \(v_{i}\) black if \(\lambda_{i}=1\) and white if \(\lambda_{i}=2\), and we do not include \(v_{i}\) in the web if \(\lambda_{i}\in\{0,3\}\).

We also have a \(\mathbb{Z}^{n}\)-grading of \(\mathbb{C}[\widehat{\mathrm{Gr}}(k,n)]\) given by the number of times each column appears in a given product of Plucker coordinates. In particular, \(\mathbb{C}[\widehat{\mathrm{Gr}}(k,n)]_{\lambda}\) consists of linear combinations of \(r\)-fold products of Plucker coordinates such that in each product, column \(i\) appears \(\lambda_{i}\) times. Note that in [11], a Plucker coordinate is shorthand for the sum of dimer weights given by the boundary measurement map of Theorem 2.9. We may identify elements of \(\mathbb{C}[\widehat{\mathrm{Gr}}(k,n)]_{\lambda}\) with webs as in [10].

Given a plabic graph \(G\), the notation \(\mathrm{Web}_{r}(G;\lambda)\) refers to the weighted sum of \(r\)-weblike subgraphs of \(G\) (equivalently, of \(r\)-fold dimers on \(G\)) that satisfy the boundary conditions given by \(\lambda\). For instance, \(\mathrm{Web}_{3}(G;(2,1,1,1,1,1,1,1))\) would be the weighted sum of all triple dimers \(D\) on \(G\) such that boundary vertex \(1\) of \(W(D)\) is white and \(W(D)\) has \(7\) black boundary vertices.

Finally, we define the **immanant map**

\[\mathrm{Imm}:\mathcal{W}_{\lambda}(U)^{*}\to\mathbb{C}[\widehat{\mathrm{Gr}}(k,n)]_{\lambda}\]

via

\[\mathrm{Imm}(\varphi)(\tilde{X}(G))=\varphi(\mathrm{Web}_{r}(G;\lambda))\]

for any edge-weighted plabic graph \(\tilde{X}(G)\). Effectively, the weight of an \(r\)-fold dimer \(D\) in \(\mathrm{Web}_{r}(G;\lambda)\) is included in \(\mathrm{Imm}(\varphi)(\tilde{X}(G))\) with multiplicity equal to the value of \(\varphi\) on \(W(D)\). We say that an \(\mathrm{SL}_{r}\)-web \(W\) and an \(\mathrm{SL}_{k}\)-web \(W^{\prime}\) (viewed as an element of \(\mathbb{C}[\widehat{\mathrm{Gr}}(k,n)]\)) are **dual** if the functional \(\varphi\) that is \(1\) on \(W\) and \(0\) on all other independent webs has \(\mathrm{Imm}(\varphi)(\tilde{X}(G))=W^{\prime}\).

We may now restate Theorems 4.1, 5.10, 5.11, 5.12, and 5.13. For clarity, the following theorem lacks full generality with respect to projections and dihedral translates, but the generalizations should be clear from the original theorem statements. We rely on Proposition 3.8 to transition between our face weights and the edge weights of [11].
**Theorem 6.1**.: _Let \(G\) be a plabic graph with face weights as in Definition 3.2, and assign it edge weights as in Definition 3.6._

1. _Let_ \(U\) _be a 2-dimensional vector space, and define_ \(\varphi_{X}\in\mathcal{W}_{[1^{6}]}(U)^{*}\) _to be 1 on the_ \(\mathrm{SL}_{2}\) _basis web with paths connecting six boundary vertices in pairs_ \(\{1,6\},\{2,3\},\) _and_ \(\{4,5\}\)_. Then_ \(\text{Imm}(\varphi_{X})(\tilde{X}(G))=\mathscr{T}^{*}(X)\)_. Similarly, define_ \(\varphi_{Y}\in\mathcal{W}_{[1^{6}]}(U)^{*}\) _to be 1 on the_ \(\mathrm{SL}_{2}\) _basis web with paths connecting vertices in pairs_ \(\{1,2\},\{3,4\},\) _and_ \(\{5,6\}\)_. Then_ \(\text{Imm}(\varphi_{Y})(\tilde{X}(G))=\mathscr{T}^{*}(Y)\)_._
2. _Let_ \(U\) _be a 3-dimensional vector space, and define_ \(\varphi_{A}\in\mathcal{W}_{[2,1^{7}]}(U)^{*}\) _to be 1 on the batwing as in Figure_ 18_. Then_ \(\text{Imm}(\varphi_{A})(\tilde{X}(G))=\mathscr{T}^{*}(A)\)_._
3. _Let_ \(U\) _be a 3-dimensional vector space, and define_ \(\varphi_{B}\in\mathcal{W}_{[2,1^{7}]}(U)^{*}\) _to be 1 on the octopus as in Figure_ 18_. Then_ \(\text{Imm}(\varphi_{B})(\tilde{X}(G))=\mathscr{T}^{*}(B)\)_._
4. _Let_ \(U\) _be a 3-dimensional vector space, and define_ \(\varphi_{C}\in\mathcal{W}_{[1^{9}]}(U)^{*}\) _to be 1 on the hexa-crab as in Figure_ 18_. Then_ \(\text{Imm}(\varphi_{C})(\tilde{X}(G))=\mathscr{T}^{*}(C)\)_._
5. _Let_ \(U\) _be a 3-dimensional vector space, and define_ \(\varphi_{Z}\in\mathcal{W}_{[1^{9}]}(U)^{*}\) _to be 1 on the tri-crab as in Figure_ 18_. Then_ \(\text{Imm}(\varphi_{Z})(\tilde{X}(G))=\mathscr{T}^{*}(Z)\)_._

Note that in particular, removing the stated choice of edge weights and considering \(X\), \(Y\), \(C\), and \(Z\) only as arising abstractly from the boundary measurement map recovers several of the dualities in the top and bottom left of Figure 3 of [11] (reproduced in the top left and bottom of Figure 23). Our work presents the twist map as concretely realizing this duality. It also adds an interpretation of duality for \(A\) and \(B\) in \(\mathrm{Gr}(3,8)\), which appears pictorially on the top right of Figure 23. We reference [10, Figure 22] for the tensor diagrams corresponding to \(A\) and \(B\), the duals of the batwing and octopus, respectively. Figure 23 displays elements of \(\mathcal{W}_{\lambda}(U)^{*}\) on the left, and their dual tensor diagrams in \(\mathbb{C}[\widehat{\mathrm{Gr}}(k,n)]\) on the right.3 Unlike our setting, Fomin and Pylyavskyy [10] notably allow tensor diagrams with boundary vertices of valence higher than one.

Footnote 3: We only study one direction of duality for \(\mathrm{Gr}(3,8)\), since diagrams with white boundary vertices represent more general \(SL_{3}\)-invariants than elements of the coordinate ring of the Grassmannian.

Note that an analogue of Observation 8.2 of [11] continues to hold in the listed examples for \(\mathrm{Gr}(3,8)\): the dual of a non-elliptic web arises from clasping boundary vertices of another non-elliptic web. It would be interesting to check this statement for the other non-elliptic webs depicted in Lemmas 5.6, 5.7, and 5.8 using the methods we will describe in Section 8.
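The pairing underlying the immanant map can be illustrated concretely: \(\mathrm{Imm}(\varphi)\) weights each \(r\)-fold dimer by the value of \(\varphi\) on its web. Below is a minimal sketch in Python, with purely hypothetical toy data (the dimer weights and web coefficients are made up for illustration and do not come from any particular plabic graph):

```python
from fractions import Fraction

# Hypothetical toy data: each r-fold dimer D is recorded by its face weight
# and the coefficients C_W^D of the nonelliptic webs W appearing in W(D).
dimers = [
    {"weight": Fraction(3, 2), "web_coeffs": {"batwing": 1, "other": 2}},
    {"weight": Fraction(5, 1), "web_coeffs": {"other": 1}},
]

def immanant(phi, dimers):
    """Compute Imm(phi) = sum_D ( sum_W C_W^D * phi(W) ) * wt_f(D)."""
    return sum(
        sum(c * phi.get(web, 0) for web, c in D["web_coeffs"].items()) * D["weight"]
        for D in dimers
    )

# The functional that is 1 on the batwing and 0 on all other webs picks out
# exactly the dimers whose webs contain the batwing as a nonelliptic summand,
# as in Theorem 5.10:
print(immanant({"batwing": 1}, dimers))  # -> 3/2
```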
### Young Tableaux

We provide some explicit computations with tableaux associated to [11, Observation 8.3], which notes that in small cases, the duality map aligns with a combination of the transpose map applied to a rectangular standard Young tableau and the Khovanov-Kuperberg bijection [10] between two-row or three-row standard Young tableaux and non-crossing matchings or non-elliptic webs, respectively.

We begin with \(\operatorname{Gr}(3,6)\). In what follows, for every standard Young tableau of rectangular shape \([3,3]\), we demonstrate the effect of the Khovanov-Kuperberg bijection: we read the top row from left to right, and match each entry \(j\) to the largest unmatched value \(i\) on the row below such that \(i<j\).

\[\begin{array}{|c|c|c|}\hline 4&5&6\\ \hline 1&2&3\\ \hline\end{array}\leftrightarrow(3,4),(2,5),(1,6),\]
\[\begin{array}{|c|c|c|}\hline 2&5&6\\ \hline 1&3&4\\ \hline\end{array}\leftrightarrow(1,2),(4,5),(3,6),\]
\[\begin{array}{|c|c|c|}\hline 3&4&6\\ \hline 1&2&5\\ \hline\end{array}\leftrightarrow(2,3),(1,4),(5,6),\]

Figure 23. Duality between webs in small cases. Note that the top-left and bottom of the figure depicts webs shown in [11, Figure 3]; the top-right is new.

\[\begin{array}{|c|c|c|}\hline 3&5&6\\ \hline 1&2&4\\ \hline\end{array}\leftrightarrow(2,3),(4,5),(1,6),\quad\text{and}\]
\[\begin{array}{|c|c|c|}\hline 2&4&6\\ \hline 1&3&5\\ \hline\end{array}\leftrightarrow(1,2),(3,4),(5,6).\]

Additionally, as computed in [13], the transposes of these five standard Young tableaux (of shape \([2,2,2]\)) biject to non-elliptic webs, with \begin{tabular}{|c|c|} \hline 3 & 6 \\ \hline 2 & 5 \\ \hline 1 & 4 \\ \hline \end{tabular} corresponding to the union of the two tripods \([1,2,3]\) and \([4,5,6]\), \begin{tabular}{|c|c|} \hline 4 & 6 \\ \hline 3 & 5 \\ \hline 1 & 2 \\ \hline \end{tabular} corresponding to the union of the two tripods \([1,5,6]\) and \([2,3,4]\), \begin{tabular}{|c|c|} \hline 5 & 6 \\ \hline 2 & 4 \\ \hline 1 & 3 \\ \hline \end{tabular} corresponding to the union of the two tripods \([1,2,6]\) and \([3,4,5]\), and \begin{tabular}{|c|c|} \hline 4 & 6 \\ \hline 2 & 5 \\ \hline 1 & 3 \\ \hline \end{tabular} and \begin{tabular}{|c|c|} \hline 5 & 6 \\ \hline 3 & 4 \\ \hline 1 & 2 \\ \hline \end{tabular} corresponding to two rotations of a hexapod (the second entry in the right column at the top left of Figure 23), which encode the compound determinants \(X=\det\left(v_{1}\times v_{2}\quad v_{3}\times v_{4}\quad v_{5}\times v_{6}\right)\) and \(Y=\det\left(v_{6}\times v_{1}\quad v_{2}\times v_{3}\quad v_{4}\times v_{5}\right)\), respectively.

We now observe that, as shown in Figure 23, duality links a non-crossing matching to the non-elliptic web corresponding to the transpose of its standard Young tableau:

\[(3,4),(2,5),(1,6)\leftrightarrow\text{the union of the two tripods }[1,2,3]\text{ and }[4,5,6]\]
\[(1,2),(4,5),(3,6)\leftrightarrow\text{the union of the two tripods }[1,5,6]\text{ and }[2,3,4]\]
\[(2,3),(1,4),(5,6)\leftrightarrow\text{the union of the two tripods }[1,2,6]\text{ and }[3,4,5]\]
\[(2,3),(4,5),(1,6)\leftrightarrow\text{the hexapod corresponding to }X\]
\[(1,2),(3,4),(5,6)\leftrightarrow\text{the hexapod corresponding to }Y.\]

We perform the analogous computations for the dualities in \(\operatorname{Gr}(3,9)\) given by twisting the degree three cluster algebra elements \(C\) and \(Z\).
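The matching rule above is a small algorithm, and the five bijections just listed can be checked mechanically. A minimal sketch (the helper name is ours, not from any library):

```python
def kk_matching(bottom, top):
    """Khovanov-Kuperberg rule for a 2-row rectangular standard Young tableau:
    read the top row left to right, matching each entry j to the largest
    unmatched entry i of the bottom row with i < j."""
    unmatched = sorted(bottom)
    pairs = []
    for j in top:
        i = max(x for x in unmatched if x < j)
        unmatched.remove(i)
        pairs.append((i, j))
    return pairs

# The five standard Young tableaux of shape [3,3] listed above:
assert kk_matching([1, 2, 3], [4, 5, 6]) == [(3, 4), (2, 5), (1, 6)]
assert kk_matching([1, 3, 4], [2, 5, 6]) == [(1, 2), (4, 5), (3, 6)]
assert kk_matching([1, 2, 5], [3, 4, 6]) == [(2, 3), (1, 4), (5, 6)]
assert kk_matching([1, 2, 4], [3, 5, 6]) == [(2, 3), (4, 5), (1, 6)]
assert kk_matching([1, 3, 5], [2, 4, 6]) == [(1, 2), (3, 4), (5, 6)]
```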
Under the Khovanov-Kuperberg bijection, the tensor diagram for \(C\) (the second entry in the right column at the bottom of Figure 23) corresponds to the standard Young tableau

\[T_{C}=\begin{array}{|c|c|c|}\hline 6&8&9\\ \hline 3&5&7\\ \hline 1&2&4\\ \hline\end{array}\]

The web corresponding to its transpose

\[T_{C}^{*}=\begin{array}{|c|c|c|}\hline 4&7&9\\ \hline 2&5&8\\ \hline 1&3&6\\ \hline\end{array}\]

is the hexa-crab depicted in Figure 18, which aligns with the duality depicted in the middle row in the bottom of Figure 23.

Additionally, Example 8.1 and Remark 8.2 of [14] verify that the tensor diagram for \(Z\) (the second entry in the right column at the bottom left of Figure 23) corresponds to the standard Young tableau

\[T_{Z}=\begin{array}{|c|c|c|}\hline 5&8&9\\ \hline 2&6&7\\ \hline 1&3&4\\ \hline\end{array}\]

The web corresponding to its transpose

\[T_{Z}^{*}=\begin{array}{|c|c|c|}\hline 4&7&9\\ \hline 3&6&8\\ \hline 1&2&5\\ \hline\end{array}\]

is in fact the three tripods \((1,8,9),(2,3,4),(5,6,7)\), i.e. the fourth web illustrated in Figure 17, the tri-crab. Again, this result is consistent with the duality depicted in the top row in the bottom left quadrant of [11, Figure 3].

It would be interesting to extend these computations to our additional dualities in \(\operatorname{Gr}(3,8)\), depicted in Figure 23. We expect that they relate to **semi-standard Young tableaux** of shape \([3,3,3]\), using entries only involving \(1,2,\ldots,8\); conjecturally, these would be the 24 tableaux provided in [3, Section 3.1] for \(\operatorname{Gr}(3,8)\).

## 7. Construction of \(C\)

In this section, we justify our claim from Section 2.3 that the element

\[C=\Delta_{124}\Delta_{357}\Delta_{689}+\Delta_{123}\Delta_{456}\Delta_{789}-\Delta_{124}\Delta_{356}\Delta_{789}-\Delta_{123}\Delta_{457}\Delta_{689}\]

of \(\mathbb{C}[\operatorname{Gr}(3,9)]\) is a cluster variable. It will suffice to show that the above expression corresponds to the tensor diagram on the right of the middle row in the bottom left of Figure 23, since that diagram is a planar tree, and therefore corresponds to a cluster variable by [11, Corollary 8.10].

We explained in Section 6 that under the Khovanov-Kuperberg bijection, the cluster algebra element \(C\) and its corresponding web invariant \([W_{C}]\) correspond to a particular standard Young tableau, namely

\[T_{C}=\begin{array}{|c|c|c|}\hline 6&8&9\\ \hline 3&5&7\\ \hline 1&2&4\\ \hline\end{array}\]

We algebraically express \([W_{C}]\) as a polynomial in Plucker coordinates by producing a tensor diagram from the rows of \(T_{C}\) and then resolving crossings to create a summation in terms of planar webs. This process is illustrated in Figure 25: we consider the triple product \(\Delta_{124}\Delta_{357}\Delta_{689}\) formed from the rows of \(T_{C}\), construct a corresponding tensor diagram by superimposing the three associated tripods, and apply the \(SL_{3}\)-web Skein relation shown in Figure 24 to resolve two of the crossings that appear.

Figure 24. Additional Skein relation for tensor diagrams, which are not necessarily planar.

To translate the tensor diagrams in Figure 25 into an expression in Plucker coordinates, we note that the tensor diagram on the left side of the equation consists of three tripods, and therefore corresponds to a product of three Plucker cluster variables, in particular \(\Delta_{124}\Delta_{357}\Delta_{689}\). The first summand in the bottom line of Figure 25 similarly corresponds to \(\Delta_{124}\Delta_{356}\Delta_{789}\).
The next summand has two components: a tripod, which corresponds to the Plucker coordinate \(\Delta_{123}\); and a hexapod, which corresponds to the compound determinant \(\det(v_{4}\times v_{5},v_{6}\times v_{7},v_{8}\times v_{9})\), i.e. the quadratic difference \(X^{456789}=\Delta_{457}\Delta_{689}-\Delta_{456}\Delta_{789}\). Therefore, we may express this summand as \(\Delta_{123}(\Delta_{457}\Delta_{689}-\Delta_{456}\Delta_{789})\). Finally, the rightmost summand at the bottom of Figure 25 is the desired web \([W_{C}]\). We have therefore recovered the relation

\[\Delta_{124}\Delta_{357}\Delta_{689}=\Delta_{124}\Delta_{356}\Delta_{789}+\Delta_{123}(\Delta_{457}\Delta_{689}-\Delta_{456}\Delta_{789})+C,\]

and our expression for \(C\) follows. We note that similar tensor diagram manipulations are sufficient to calculate expansions of \(A\), \(B\), and \(Z\); however, such expansions are already present in the literature.

## 8. Appendix: Proofs of Lemmas

In this Appendix, we prove the lemmas from Section 5, restating them for convenience.

**Lemma 5.17**.: _The non-elliptic webs compatible with \(A_{1}=\Delta_{134}\Delta_{258}\Delta_{167}\) are pictured in Figure 19. Web (ii) of Figure 19 is the only non-elliptic web compatible with \(A_{2}=\Delta_{134}\Delta_{125}\Delta_{678}\), and web (iii) of Figure 19 is the only non-elliptic web compatible with \(A_{3}=\Delta_{158}\Delta_{234}\Delta_{167}\). Additionally, each compatibility is unique._

Proof of Lemma 5.17.: In Figures 26, 27, and 28, we enumerate all nonelliptic webs with boundary vertices \(\{v_{1},\ldots,v_{n}\}\) such that \(v_{1}\) is colored white and \(v_{2},\ldots,v_{n}\) are colored black. Those compatible with \(A_{1}\), \(A_{2}\), or \(A_{3}\) respectively are fully colored and boxed; for all others, the impossibility of a proper edge coloring with the corresponding boundary edge colors is demonstrated. Note that no web compatible with \(A_{2}\) can contain a path \(v_{2}\to v_{1}\) or \(v_{5}\to v_{1}\), since the edges incident to \(v_{2}\) and \(v_{5}\) cannot be the same color as the edge incident to \(v_{1}\); similarly, no web compatible with \(A_{3}\) can contain a path \(v_{8}\to v_{1}\) or \(v_{5}\to v_{1}\). We therefore omit such webs in the corresponding figures. Also note that any completed proper coloring is unique given the compatibility conditions.

**Lemma 5.18**.: _The non-elliptic webs compatible with \(B_{1}=\Delta_{258}\Delta_{134}\Delta_{267}\) are pictured in Figure 20. Web (iii) of Figure 20 is the only non-elliptic web compatible with \(B_{2}=\Delta_{234}\Delta_{128}\Delta_{567}\), and webs (ii) and (iv) of Figure 20 are the only non-elliptic webs compatible with \(B_{3}=\Delta_{234}\Delta_{258}\Delta_{167}\). Additionally, each compatibility is unique._

Proof of Lemma 5.18.: Similarly to the previous proof, for each term of \(B\), we enumerate all nonelliptic webs with boundary vertices \(\{v_{1},\ldots,v_{n}\}\) such that \(v_{2}\) is colored white and \(v_{1},v_{3},\ldots,v_{n}\) are colored black. Note that no web compatible with \(B_{1}\) can contain a path \(v_{6}\to v_{2}\), no web compatible with \(B_{2}\) can contain any path, and no web compatible with \(B_{3}\) can contain a path \(v_{3}\to v_{2}\), so we omit these webs in the corresponding figures. Also note that any completed proper coloring is unique given the compatibility conditions.

Figure 25. Applying the Skein relation of Figure 24 to the tensor diagram for \(\Delta_{124}\Delta_{357}\Delta_{689}\) to obtain the tensor diagram for \(C\).

Figure 27. Enumeration of non-elliptic webs compatible (highlighted in yellow for emphasis) and incompatible with \(A_{2}=(134)(125)(678)\).

Figure 26. Enumeration of non-elliptic webs compatible (highlighted in yellow for emphasis) and incompatible with \(A_{1}=(134)(258)(167)\).
Figure 29. Enumeration of non-elliptic webs compatible (highlighted in yellow for emphasis) and incompatible with \(B_{1}=(258)(134)(267)\).

Figure 28. Enumeration of non-elliptic webs compatible (highlighted in yellow for emphasis) and incompatible with \(A_{3}=(158)(234)(167)\).

Figure 31. Enumeration of non-elliptic webs compatible (highlighted in yellow for emphasis) and incompatible with \(B_{3}=(234)(258)(167)\). Note that the middle web of the third row is incompatible due to a collision on an internal vertex of the hexagon, where two green edges are forced to meet. The rightmost two webs of the third row give the only webs compatible with the boundary condition \(B_{3}\).

Figure 30. Enumeration of non-elliptic webs compatible (highlighted in yellow for emphasis) and incompatible with \(B_{2}=(234)(128)(567)\).

We use the following lemma to limit our lists of webs for terms of \(C\).

**Lemma 8.1**.: _Let \(I=\{i-1,i,i+1\}\) mod 9, and \(J,K\in\binom{[9]\setminus I}{3}\). If a non-elliptic web is compatible with \(\Delta_{I}\Delta_{J}\Delta_{K}\), it must be one of the webs pictured in Figure 32._

Proof.: In Figure 33, we enumerate all nonelliptic webs listed in Lemma 5.9, drawing vertex \(i\) at the top of each web without loss of generality, and eliminate any that do not admit a proper coloring such that the edges adjacent to vertices \(i-1\), \(i\), and \(i+1\) are the same color.

We now prove Lemmas 5.19 and 5.20.

**Lemma 5.19**.: _Figure 21 depicts all non-elliptic webs compatible with \(C_{1}=\Delta_{124}\Delta_{357}\Delta_{689}\). Web (ii) is the only non-elliptic web compatible with \(C_{2}=\Delta_{123}\Delta_{456}\Delta_{789}\), webs (ii) and (iii) are the only non-elliptic webs compatible with \(C_{3}=\Delta_{124}\Delta_{356}\Delta_{789}\), and webs (ii) and (iv) are the only non-elliptic webs compatible with \(C_{4}=\Delta_{123}\Delta_{457}\Delta_{689}\). Additionally, each compatibility is unique._

Proof of Lemma 5.19.: For each term of \(C\), we enumerate non-elliptic webs with nine black boundary vertices. Since every term of \(C\) besides \(C_{1}\) has a factor of \(\Delta_{I}\) where \(I=\{i-1,i,i+1\}\) mod 9, for these terms we only check the appropriate rotations of the webs listed in Lemma 8.1; see Figure 35. For \(C_{1}\), in Figure 34 we check all webs listed in Lemma 5.9. Note that any completed proper coloring is unique given the compatibility conditions.

**Lemma 5.20**.: _Figure 22 depicts all non-elliptic webs compatible with \(Z_{1}=\Delta_{145}\Delta_{278}\Delta_{369}\). Webs (ii) through (v) are the only non-elliptic webs compatible with \(Z_{2}=\Delta_{245}\Delta_{178}\Delta_{369}\), web (vi) is the only non-elliptic web compatible with \(Z_{3}=\Delta_{123}\Delta_{456}\Delta_{789}\), and web (vii) is the only non-elliptic web compatible with \(Z_{4}=\Delta_{129}\Delta_{345}\Delta_{678}\).
Additionally, each compatibility is unique._

Proof of Lemma 5.20.: Similarly to the previous proof, in Figures 36 and 37 we enumerate non-elliptic webs with nine black boundary vertices for each of \(Z_{1}\) and \(Z_{2}\). Note that \(Z_{3}=C_{2}=(123)(456)(789)\), and \(Z_{4}\) is a cyclic shift of \(Z_{3}\) corresponding to a counterclockwise rotation by one vertex, so we may refer to Lemma 5.19 for the webs compatible with those triple products. Also note that any completed proper coloring is unique given the compatibility conditions.

Figure 33. Enumeration of non-elliptic webs compatible (highlighted in yellow for emphasis) and incompatible with some \(\Delta_{I}\Delta_{J}\Delta_{K}\), where \(I=\{i-1,i,i+1\}\) mod 9 with vertex \(i\) drawn at the top of each web, and \(J,K\in\binom{[9]\setminus I}{3}\).

Figure 34. Enumeration of non-elliptic webs compatible (highlighted in yellow for emphasis) and incompatible with \(C_{1}\).

Figure 35. Enumeration of non-elliptic webs compatible (highlighted in yellow for emphasis) and incompatible with \(C_{2},C_{3}\), and \(C_{4}\).

Figure 36. Enumeration of non-elliptic webs compatible (highlighted in yellow for emphasis) and incompatible with \(Z_{1}\).

Figure 37. Enumeration of non-elliptic webs compatible (highlighted in yellow for emphasis) and incompatible with \(Z_{2}\).

## 9. Appendix: Computations of Twists

In this Appendix, we demonstrate computations using Theorems 5.10 and 5.11. Since this section contains many long expressions in Plucker coordinates, in what follows we will use the shorthand \((I)\) to denote the Plucker coordinate \(\Delta_{I}\). We note that we chose to compute \(\mathscr{T}^{*}(\sigma^{2}(A))\) and \(\mathscr{T}^{*}(\sigma^{7}(B))\) (where \(\sigma\in D_{8}\) represents clockwise rotation by one vertex) in particular because, for the initial seed we will consider, the numbers of terms in their Laurent expansions are relatively small compared to the corresponding numbers for other dihedral images of \(A\) and \(B\).

To check the Laurent expressions arising from our computations, we used SageMath [10] and Pavel Galashin's applet [11] to explore the full set of Plucker cluster variables in a particular quiver for \(\operatorname{Gr}(3,8)\), via sequences of square moves on the corresponding plabic graph. This allowed us to identify the resulting cluster variables as Plucker coordinates, as well as record a sample mutation sequence to get to each such variable. We then used SageMath, including the ClusterSeed and ClusterAlgebra packages (thanks to the second author and Christian Stump, and to Dylan Rupel and Salvatore Stella) to arrive at Laurent polynomial expressions for all such cluster variables. Computations in SageMath subsequently allowed us to compute Laurent polynomial expressions for the entire mutation class of 128 cluster variables. With this list, we were able to identify the Laurent expansions for the remaining 56 non-Plucker cluster variables as versions of \(X\), \(Y\), \(A\), and \(B\). We will refer to these expansions in the following two examples.

For reference, we list our computations for the twists of \(A\) and \(B\) as in [11] here:

\[\mathscr{T}^{*}(A) =(123)(234)(456)(178)[(267)(358)-(235)(678)]\]
\[=(123)(234)(456)(178)Y^{235678} \tag{9.1}\]
\[\mathscr{T}^{*}(B) =(178)(456)(234)[(348)(367)(125)-(348)(567)(123)-(345)(367)(128)]\]
\[=(178)(456)(234)\sigma^{4}\rho(B) \tag{9.2}\]

where \(\rho\) reflects indices via \(i\to 9-i\).
We justify these computations algebraically in Section 9.3, along with providing a table of the twists of all \(\operatorname{Gr}(3,7)\) cluster variables.

### Computing \(\mathscr{T}^{*}(\sigma^{2}(A))\)

We give a complete description of the triple dimer partition function given in Theorem 5.10 for

\[\sigma^{2}(A)=(356)(247)(138)-(356)(347)(128)-(237)(456)(138),\]

the cyclic rotation of \(A\) clockwise by two vertices. In what follows, we will refer to the rotation of the batwing connectivity pattern clockwise by two vertices as the batwing\({}^{2}\). From the theorem, the only triple dimers \(D\) whose weights contribute to \(\mathscr{T}^{*}(\sigma^{2}(A))\) are those such that \(W(D)\) contains the batwing\({}^{2}\) as a nonelliptic summand. We were able to list these by first drawing all triple dimers such that \(W(D)\) is the batwing\({}^{2}\), and then adding squares, bigons, and internal cycles wherever possible. Tables 2 and 3 contain a complete list of these triple dimers, and their associated weights; note that only one triple dimer has a bigon in its corresponding web, causing its weight to have a coefficient of \(C^{D}_{batwing^{2}}=2\) in \(\mathscr{T}^{*}(\sigma^{2}(A))\).

According to Theorem 5.10, summing the weights listed in Tables 2 and 3 should yield \(\mathscr{T}^{*}(\sigma^{2}(A))\). Indeed, it follows from (9.1) that \(\mathscr{T}^{*}(\sigma^{2}(A))=(123)(345)(456)(678)Y^{124578}\), and our code yielded the following Laurent polynomial expression for this product of cluster variables:

\[\mathscr{T}^{*}(\sigma^{2}(A)) =\frac{(123)(345)(456)(678)}{(246)(568)(268)^{2}(168)}[(248)^{2}(256)(268)(168)^{2}(567)\]
\[+(248)(245)(268)^{2}(168)^{2}(567)+(248)(246)(568)(268)(168)(128)(567)\]
\[+(248)^{2}(256)^{2}(168)^{2}(567)+(248)(246)(568)(268)(168)(128)(567)\]
\[+2\cdot(248)(246)(256)(568)(168)(128)(678)+(245)(246)(568)(268)(168)(128)(678)\]
\[+(246)^{2}(568)^{2}(128)^{2}(678)+(248)(246)(256)(568)(268)(168)(178)\]
\[+(246)^{2}(568)^{2}(268)(128)(178)].\]

The terms of this Laurent expansion corresponding to each triple dimer are also listed in Tables 2 and 3, confirming that the sums agree.

### Computing \(\mathscr{T}^{*}(\sigma^{7}(B))\)

We similarly give a complete description of the triple dimer partition function given in Theorem 5.11 for

\[\sigma^{7}(B)=(147)(156)(238)-(123)(178)(456)-(123)(147)(568),\]

the cyclic rotation of \(B\) clockwise by seven vertices. From the theorem, the only triple dimers \(D\) whose weights contribute to \(\mathscr{T}^{*}(\sigma^{7}(B))\) are those such that \(W(D)\) contains the appropriate rotation of the octopus as a nonelliptic summand. We were able to list these using the same method as in the previous section; Tables 4, 5, and 6 contain a complete list of these triple dimers and their associated weights and coefficients.

To check that the sum of these weights (with multiplicity) indeed yields \(\mathscr{T}^{*}(\sigma^{7}(B))\), we have from the expression (9.2) for \(\mathscr{T}^{*}(B)\) that

\[\mathscr{T}^{*}(\sigma^{7}(B)) =(123)(345)(678)\bigg{[}(256)[(247)(138)-(347)(128)]-(237)(456)(128)\bigg{]}\]
\[=(123)(345)(678)\left(\sigma^{3}\rho(B)\right)\]

where \(\rho\) reflects indices via \(i\to 9-i\) and \(\sigma\) follows this by clockwise rotation.
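The dihedral bookkeeping in the last equality can be checked term by term; below is a minimal sketch (the helper names are ours), applying \(\rho\colon i\mapsto 9-i\) and then three clockwise rotations to the index triples of \(B\) as written in Section 9.3:

```python
def rho(triple):
    # reflection i -> 9 - i on column indices 1..8
    return frozenset(9 - i for i in triple)

def sigma(triple):
    # clockwise rotation by one vertex: i -> i + 1, with 8 wrapping to 1
    return frozenset((i % 8) + 1 for i in triple)

def sigma3_rho(term):
    # apply sigma^3 after rho to each index triple of a monomial
    out = []
    for t in term:
        s = rho(t)
        for _ in range(3):
            s = sigma(s)
        out.append(s)
    return sorted(out, key=sorted)

# B = (258)(134)(267) - (234)(158)(267) - (234)(125)(678), term by term:
B_terms = [[{2, 5, 8}, {1, 3, 4}, {2, 6, 7}],
           [{2, 3, 4}, {1, 5, 8}, {2, 6, 7}],
           [{2, 3, 4}, {1, 2, 5}, {6, 7, 8}]]
# Monomials of (256)[(247)(138) - (347)(128)] - (237)(456)(128), term by term:
expected = [[{2, 4, 7}, {1, 3, 8}, {2, 5, 6}],
            [{3, 4, 7}, {1, 2, 8}, {2, 5, 6}],
            [{2, 3, 7}, {4, 5, 6}, {1, 2, 8}]]
for term, exp in zip(B_terms, expected):
    assert sigma3_rho(term) == sorted(map(frozenset, exp), key=sorted)
```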
Our code yielded the following Laurent polynomial for this expression:

\[\mathscr{T}^{*}(\sigma^{7}(B)) =\frac{(123)(345)(678)}{(248)(124)(568)(268)^{2}(168)}[(248)^{3}(256)(268)(168)^{2}(123)(567)\]
\[+(248)^{2}(246)(568)(268)(168)(128)(123)(567)+(248)^{2}(256)(268)(168)^{2}(128)(234)(567)\]
\[+(248)(246)(568)(268)(168)(128)^{2}(234)(567)+(248)^{3}(256)^{2}(168)^{2}(123)(678)\]
\[+2\cdot(248)^{2}(246)(256)(568)(168)(128)(123)(678)+(248)(246)^{2}(568)^{2}(128)^{2}(123)(678)\]
\[+(248)(124)(256)(568)(268)(168)(128)(234)(678)+(248)^{2}(256)^{2}(168)^{2}(128)(234)(678)\]
\[+(124)(246)(568)^{2}(268)(128)^{2}(234)(678)+2\cdot(248)(246)(256)(568)(168)(128)^{2}(234)(678)\]
\[+(246)^{2}(568)^{2}(128)^{3}(234)(678)+(248)^{2}(246)(256)(568)(168)(123)(178)\]
\[+(248)(246)^{2}(568)^{2}(268)(128)(123)(178)+(124)(246)(568)^{2}(268)^{2}(128)(234)(178)\]
\[+(248)(246)(256)(568)(268)(168)(128)(234)(178)+(246)^{2}(568)^{2}(268)(128)^{2}(234)(178)]\]

The terms of this Laurent expansion corresponding to each triple dimer are also listed in Tables 4, 5, and 6, confirming that the sums agree.

### Computing Twists Algebraically

In this section, we list the twists of all cluster variables in \(\operatorname{Gr}(3,7)\), and algebraically justify our computations for the twists of \(A\) and \(B\) in \(\operatorname{Gr}(3,8)\). We utilize the following results from Section 2.5 (where indices are taken in increasing order modulo \(n\)):

* We have \(\mathscr{T}^{*}(\Delta_{a,a+1,a+2})=\Delta_{a+1,a+2,a+3}\Delta_{a+2,a+3,a+4}\).
* When \(b\neq a-1,a+2\), we have \(\mathscr{T}^{*}(\Delta_{a,a+1,b})=\Delta_{a+1,a+2,a+3}\Delta_{a+2,b+1,b+2}\).
* When \(J=\{a,b,c\}\) where none of \(a,b,c\) are adjacent, we have
\[\mathscr{T}^{*}(\Delta_{J}) =\det\left(v_{a+1}\times v_{a+2}\quad v_{b+1}\times v_{b+2}\quad v_{c+1}\times v_{c+2}\right)\]
\[=\begin{cases}X^{a+1,\ a+2,\ b+1,\ b+2,\ c+1,\ c+2}&a,b,c\neq n-1\\ Y^{a+1,\ a+2,\ b+1,\ b+2,\ c+1,\ c+2}&\text{otherwise}\end{cases}.\]

Table 1 lists twists in \(\operatorname{Gr}(3,7)\). As an example of the calculations required to fill the table, we use the fact that the twist map \(\mathscr{T}^{*}\) is a homomorphism to compute the twist of \(X^{123456}\) as follows:

\[\mathscr{T}^{*}(X^{123456}) = \mathscr{T}^{*}(\Delta_{134})\mathscr{T}^{*}(\Delta_{256})-\mathscr{T}^{*}(\Delta_{156})\mathscr{T}^{*}(\Delta_{234})\]
\[= [\Delta_{456}\Delta_{235}][\Delta_{167}\Delta_{347}]-[\Delta_{167}\Delta_{237}][\Delta_{345}\Delta_{456}]\]
\[= \Delta_{167}\Delta_{456}[\Delta_{235}\Delta_{347}-\Delta_{237}\Delta_{345}]\]
\[= \Delta_{167}\Delta_{456}\Delta_{234}[\Delta_{357}].\]

We note that the non-crossing matching corresponding to a given quadratic difference provides a convenient heuristic for computing frozen factors of twists of quadratic differences. In particular, these frozen factors are indexed by all face labels appearing between two boundary vertices that are included in the corresponding matching, but not connected to each other. We also observe overall that in \(\operatorname{Gr}(3,7)\), we either have

\[\mathscr{T}^{*}(X^{s_{1},s_{2},s_{3},s_{4},s_{5},s_{6}})=\Delta_{s_{2},s_{2}+1,s_{2}+2}\Delta_{s_{4},s_{4}+1,s_{4}+2}\Delta_{s_{6},s_{6}+1,s_{6}+2}\Delta_{s_{2}+1,s_{4}+1,s_{6}+1}\]

or that \(\mathscr{T}^{*}(X^{s_{1},s_{2},s_{3},s_{4},s_{5},s_{6}})\) is the product of two frozen variables and a quadratic difference.

We now compute the twists of \(A\) and \(B\) in \(\operatorname{Gr}(3,8)\).

\begin{table}
\begin{tabular}{|c|c||c|c|}
\hline **Variable** & **Twist** & **Variable** & **Twist** \\
\hline \(\Delta_{124}\) & \(\Delta_{234}\Delta_{356}\) & \(\Delta_{126}\) & \(\Delta_{234}\Delta_{137}\) \\
\hline \(\Delta_{235}\) & \(\Delta_{345}\Delta_{467}\) & \(\Delta_{237}\) & \(\Delta_{345}\Delta_{124}\) \\
\hline \(\Delta_{346}\) & \(\Delta_{456}\Delta_{157}\) & \(\Delta_{134}\) & \(\Delta_{456}\Delta_{235}\) \\
\hline \(\Delta_{457}\) & \(\Delta_{567}\Delta_{126}\) & \(\Delta_{245}\) & \(\Delta_{567}\Delta_{346}\) \\
\hline \(\Delta_{156}\) & \(\Delta_{167}\Delta_{237}\) & \(\Delta_{356}\) & \(\Delta_{167}\Delta_{457}\) \\
\hline \(\Delta_{267}\) & \(\Delta_{127}\Delta_{134}\) & \(\Delta_{467}\) & \(\Delta_{127}\Delta_{156}\) \\
\hline \(\Delta_{137}\) & \(\Delta_{123}\Delta_{245}\) & \(\Delta_{157}\) & \(\Delta_{123}\Delta_{267}\) \\
\hline \hline \(\Delta_{125}\) & \(\Delta_{234}\Delta_{367}\) & \(\Delta_{135}\) & \(X^{234567}\) \\
\hline \(\Delta_{236}\) & \(\Delta_{345}\Delta_{147}\) & \(\Delta_{246}\) & \(Y^{134567}\) \\
\hline \(\Delta_{347}\) & \(\Delta_{456}\Delta_{125}\) & \(\Delta_{357}\) & \(X^{124567}\) \\
\hline \(\Delta_{145}\) & \(\Delta_{567}\Delta_{236}\) & \(\Delta_{146}\) & \(Y^{123567}\) \\
\hline \(\Delta_{256}\) & \(\Delta_{167}\Delta_{347}\) & \(\Delta_{257}\) & \(X^{123467}\) \\
\hline \(\Delta_{367}\) & \(\Delta_{127}\Delta_{145}\) & \(\Delta_{136}\) & \(Y^{123457}\) \\
\hline \(\Delta_{147}\) & \(\Delta_{123}\Delta_{256}\) & \(\Delta_{247}\) & \(X^{123456}\) \\
\hline \hline \(X^{123456}\) & \(\Delta_{167}\Delta_{234}\Delta_{456}\Delta_{357}\) & \(Y^{123456}\) & \(\Delta_{345}\Delta_{567}Y^{123467}\) \\
\hline \hline \(X^{234567}\) & \(\Delta_{127}\Delta_{146}\Delta_{345}\Delta_{567}\) & \(Y^{123457}\) & \(\Delta_{123}\Delta_{345}\Delta_{567}\Delta_{246}\) \\
\hline
\end{tabular}
\end{table}
Table 1. Twists of \(\operatorname{Gr}(3,7)\) cluster variables, arranged by cyclic orbits.
Applying the twist to the expression \[A=(134)(258)(167)-(134)(125)(678)-(158)(234)(167)\] yields \[\mathscr{T}^{*}(A) =(235)(456)[(347)(126)-(346)(127)](238)(178)\] \[-(235)(456)(234)(367)(178)(128)-(123)(267)(345)(456)(238)(178)\] \[=(178)(456)[(235)(347)(126)(238)-(235)(346)(127)(238)\] \[-(235)(234)(367)(128)-(123)(267)(345)(238)].\] We use the Plücker relation \((123)(467)-(124)(367)+(126)(347)-(127)(346)=0\) to arrive at the expression \[\mathscr{T}^{*}(A) =(178)(456)[(235)(238)(124)(367)-(235)(238)(123)(467)\] \[-(235)(234)(367)(128)-(123)(267)(345)(238)],\] and further simplify using the Plücker relation \((124)(238)-(234)(128)=(123)(248)\) to arrive at \[\mathscr{T}^{*}(A)=(178)(456)(123)[(235)(367)(248)-(235)(238)(467)-(267)(238)(345)].\] The Plücker relation \((267)(348)-(367)(248)+(467)(238)-(678)(234)=0\) implies that \((235)(367)(248)-(235)(238)(467)=(235)[(267)(348)-(234)(678)]\) and \((267)(238)(345)=(267)[(235)(348)-(234)(358)]\). Substituting yields \[\mathscr{T}^{*}(A) =(178)(456)(123)[(267)(234)(358)-(234)(235)(678)]\] \[=(123)(234)(456)(178)[(267)(358)-(235)(678)]\] \[=(123)(234)(456)(178)Y^{235678}.\] \begin{table} \begin{tabular}{|c|c||c|c|} \hline **Variable** & **Twist** & **Variable** & **Twist** \\ \hline \(\Delta_{124}\) & \(\Delta_{234}\Delta_{356}\) & \(\Delta_{126}\) & \(\Delta_{234}\Delta_{137}\) \\ \hline \(\Delta_{235}\) & \(\Delta_{345}\Delta_{467}\) & \(\Delta_{237}\) & \(\Delta_{345}\Delta_{124}\) \\ \hline \(\Delta_{346}\) & \(\Delta_{456}\Delta_{157}\) & \(\Delta_{134}\) & \(\Delta_{456}\Delta_{235}\) \\ \hline \(\Delta_{457}\) & \(\Delta_{567}\Delta_{126}\) & \(\Delta_{245}\) & \(\Delta_{567}\Delta_{346}\) \\ \hline \(\Delta_{156}\) & \(\Delta_{167}\Delta_{237}\) & \(\Delta_{356}\) & \(\Delta_{167}\Delta_{457}\) \\ \hline \(\Delta_{267}\) & \(\Delta_{127}\Delta_{134}\) & \(\Delta_{467}\) & \(\Delta_{127}\Delta_{156}\) \\ \hline \(\Delta_{137}\) & \(\Delta_{123}\Delta_{245}\) & \(\Delta_{157}\) & \(\Delta_{123}\Delta_{267}\) \\ \hline \hline \(\Delta_{125}\) & \(\Delta_{234}\Delta_{367}\) & \(\Delta_{135}\) & \(X^{234567}\) \\ \hline \(\Delta_{236}\) & \(\Delta_{345}\Delta_{147}\) & \(\Delta_{246}\) & \(Y^{134567}\) \\ \hline \(\Delta_{347}\) & \(\Delta_{456}\Delta_{125}\) & \(\Delta_{357}\) & \(X^{124567}\) \\ \hline \(\Delta_{145}\) & \(\Delta_{567}\Delta_{236}\) & \(\Delta_{146}\) & \(Y^{123567}\) \\ \hline \(\Delta_{256}\) & \(\Delta_{167}\Delta_{347}\) & \(\Delta_{257}\) & \(X^{123567}\) \\ \hline \(\Delta_{367}\) & \(\Delta_{127}\Delta_{145}\) & \(\Delta_{136}\) & \(Y^{123457}\) \\ \hline \(\Delta_{147}\) & \(\Delta_{123}\Delta_{256}\) & \(\Delta_{247}\) & \(X^{123456}\) \\ \hline \hline \(X^{123456}\) & \(\Delta_{167}\Delta_{234}\Delta_{456}\Delta_{357}\) & \(Y^{1234567}\) & \(\Delta_{345}\Delta_{567}\) \\ \hline \hline \(X^{1234567}\) & \(\Delta_{127}\Delta_{345}\Delta_{567}\) & \(Y^{123457}\) & \(\Delta_{123}\Delta_{345}\Delta_{567}\Delta_{246}\) \\ \hline \end{tabular} \end{table} Table 1. Twists of \(\operatorname{Gr}(3,7)\) cluster variables, arranged by cyclic orbits.
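As a sanity check, the three Plücker relations used above can be verified numerically on the maximal minors of a random \(3\times 8\) matrix, i.e., at a generic point of \(\operatorname{Gr}(3,8)\). The short script below is a minimal sketch assuming only numpy; the random seed and tolerance are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((3, 8))  # columns v_1, ..., v_8: a generic point of Gr(3,8)

def d(*cols):
    """Pluecker coordinate Delta_{ijk}: the 3x3 minor on 1-indexed columns."""
    return np.linalg.det(V[:, [c - 1 for c in cols]])

# Four-term relation (123)(467) - (124)(367) + (126)(347) - (127)(346) = 0
assert abs(d(1,2,3)*d(4,6,7) - d(1,2,4)*d(3,6,7)
           + d(1,2,6)*d(3,4,7) - d(1,2,7)*d(3,4,6)) < 1e-9
# Three-term relation (124)(238) - (234)(128) = (123)(248)
assert abs(d(1,2,4)*d(2,3,8) - d(2,3,4)*d(1,2,8) - d(1,2,3)*d(2,4,8)) < 1e-9
# Four-term relation (267)(348) - (367)(248) + (467)(238) - (678)(234) = 0
assert abs(d(2,6,7)*d(3,4,8) - d(3,6,7)*d(2,4,8)
           + d(4,6,7)*d(2,3,8) - d(6,7,8)*d(2,3,4)) < 1e-9
print("all three Pluecker relations verified on a random 3x8 matrix")
```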
We now apply the twist to the expression \[B=(258)(134)(267)-(234)(158)(267)-(234)(125)(678),\] which yields \[\mathscr{T}^{*}(B) =[(134)(267)-(234)(167)](235)(456)(348)(178)\] \[-(345)(456)(267)(123)(348)(178)-(345)(456)(234)(367)(178)(128)\] \[=(456)(178)[(134)(267)(235)(348)-(234)(167)(235)(348)\] \[-(345)(267)(123)(348)-(345)(234)(367)(128)].\] Using the Plücker relation \((134)(235)=(123)(345)+(135)(234)\), we arrive at the expression \[\mathscr{T}^{*}(B) =(178)(456)(234)[(135)(348)(267)-(235)(167)(348)-(345)(367)(128)],\] and using the Plücker relation \((167)(235)-(267)(135)+(367)(125)-(567)(123)=0\) yields \[\mathscr{T}^{*}(B) =(178)(456)(234)[(348)(367)(125)-(348)(567)(123)-(345)(367)(128)]\] \[=(178)(456)(234)\sigma^{5}\rho(B),\] where \(\rho\) reflects indices via \(i\to 9-i\) and \(\sigma\) follows this by clockwise rotation.

Table 2. Triple Dimers for \(\mathscr{T}^{*}(\sigma^{2}(A))\). [Columns: Triple Dimer Configuration, Weight of Dimer, Term in Laurent Expansion. The dimer drawings and all but one row were corrupted in extraction; the surviving row is the bigon dimer, with weight \(2\cdot\frac{(123)(345)(456)(678)^{2}(128)(248)(256)}{(268)^{2}}\) and Laurent term \(2\cdot(248)(246)(256)(568)(168)(128)(678)\).]
Table 3. Triple Dimers for \(\mathscr{T}^{*}(\sigma^{2}(A))\), continued. [The dimer drawings and rows were corrupted in extraction and are omitted.]

Tables 4, 5, and 6. Triple Dimers for \(\mathscr{T}^{*}(\sigma^{7}(B))\), with columns Triple Dimer Configuration, Weight of Dimer, and Term in Laurent Expansion. [The dimer drawings and rows were corrupted in extraction and are omitted.]
2302.00825
Chiral restoration of nucleons in neutron star matter: studies based on a parity doublet model
We review the chiral variant and invariant components of nucleon masses and its consequence on the chiral restoration in extreme conditions, neutron star matter in particular. We consider a model of linear realization of chiral symmetry with the nucleon parity doublet structure that permits the chiral invariant mass, $m_0$, for positive and negative parity nucleons. Nuclear matter is constructed with the parity doublet nucleon model coupled to scalar fields $\sigma$, vector fields $(\omega, \rho)$, and to mesons with strangeness through the U(1)$_A$ anomaly. In models with large $m_0$, the nucleon mass is insensitive to the medium, and the nuclear saturation properties can be reproduced without demanding strong couplings of nucleons to scalar fields $\sigma$ and vector fields $\omega$. We confront the resulting nuclear equations of state with nuclear constraints and neutron star observations, and delineate the chiral invariant mass and effective interactions. To further examine nuclear equations of state beyond the saturation density, we supplement quark models to set the boundary conditions from the high density side. The quark models are constrained by the two-solar mass conditions, and such constraints are transferred to nuclear models through the causality and thermodynamic stability conditions. We also calculate various condensates and matter composition from nuclear to quark matter in a unified matter, by constructing a generating functional that interpolates nuclear and quark matter with external fields. Two types of chiral restoration are discussed; the one due to the positive scalar charges of nucleons, and the other triggered by the evolution of the Dirac sea. We found the U(1)$_A$ anomaly softens equations of state from low to high density.
Takuya Minamikawa, Bikai Gao, Toru Kojo, Masayasu Harada
2023-02-02T02:15:27Z
http://arxiv.org/abs/2302.00825v1
# Chiral restoration of nucleons in neutron star matter: studies based on a parity doublet model ###### Abstract We review the chiral variant and invariant components of nucleon masses and its consequence on the chiral restoration in extreme conditions, neutron star matter in particular. We consider a model of linear realization of chiral symmetry with the nucleon parity doublet structure that permits the chiral invariant mass, \(m_{0}\), for positive and negative parity nucleons. Nuclear matter is constructed with the parity doublet nucleon model coupled to scalar fields \(\sigma\), vector fields \((\omega,\rho)\), and to mesons with strangeness through the U(1)\({}_{A}\) anomaly. In models with large \(m_{0}\), the nucleon mass is insensitive to the medium, and the nuclear saturation properties can be reproduced without demanding strong couplings of nucleons to scalar fields \(\sigma\) and vector fields \(\omega\). We confront the resulting nuclear equations of state with nuclear constraints and neutron star observations, and delineate the chiral invariant mass and effective interactions. To further examine nuclear equations of state beyond the saturation density, we supplement quark models to set the boundary conditions from the high density side. The quark models are constrained by the two-solar mass conditions, and such constraints are transferred to nuclear models through the causality and thermodynamic stability conditions. We also calculate various condensates and matter composition from nuclear to quark matter in a unified matter, by constructing a generating functional that interpolates nuclear and quark matter with external fields. Two types of chiral restoration are discussed; the one due to the positive scalar charges of nucleons, and the other triggered by the evolution of the Dirac sea. We found the U(1)\({}_{A}\) anomaly softens equations of state from low to high density. Chiral invariant mass; neutron star matter; U(1)\({}_{A}\) anomaly; quark-hadron-crossover in pion momenta, and greatly systematize the construction of effective Lagrangians [4]. The \(\sigma\) field does not manifestly appear as a dynamical degree of freedom, and is not necessary to make the Lagrangian chiral invariant. In fact, we can allow a chiral invariant mass term of the form \(\sim\bar{N}M_{\rm inv}U_{5}(\vec{\pi})N=M_{\rm inv}\bar{\mathcal{N}}\mathcal{N}\), where \(\mathcal{N}\equiv U_{5}^{1/2}N\) denotes nucleons (with "pion cloud") in the non-linear realization. If we start with a linear \(\sigma\) model, the chiral invariant mass \(M_{\rm inv}\) appears as \(\sim(\langle\sigma^{2}+\vec{\pi}^{2}\rangle)^{1/2}\), but models of non-linear realization do not necessarily require such identification; this draws our attention to dynamical mechanisms, not necessarily related to the \(S\chi SB\), for the origin of \(M_{\rm inv}\). While the non-linear realization allows a more general construction of nucleonic models than the linear realization, the descriptions without \(\sigma\) fields, in practice, have difficulties in the extension to the domain of the chiral symmetry restoration; there \(\vec{\pi}\) and \(\sigma\) should together form a chiral multiplet, since physical states in the symmetry-unbroken vacuum must belong to irreducible representations of the chiral symmetry.
We note here that it is not trivial that such mesonic excitations exist in the chiral symmetric phase, but it would be useful to include the \(\sigma\) in an effective model to approach the restoration point from the broken phase. Furthermore, if the chiral restoration is not a first order phase transition, one may observe the consequences of the symmetry restoration even before reaching the complete restoration. For this purpose the linear realization with \(\sigma\) has an advantage over the non-linear realization (where \(\sigma\) must be generated dynamically from the pion dynamics). Such chiral restoration may happen at high temperature and at high density, and has phenomenological impacts on descriptions of the physics of relativistic heavy-ion collisions and neutron stars (NSs) [5]. A model of linear realization may be improved by supplementing the concept of Weinberg's mended symmetry [6; 7]. The mended symmetry states that, even in a spontaneously broken vacuum, superposing the linear representations of the original symmetry may be used to describe the physical spectra. Based on this picture, Weinberg described low-lying mesons \((\sigma,\pi,\rho,a_{1})\) as superpositions of chiral multiplets, and then obtained reasonable mass relations and decay widths for these states. This success encourages us to consider models of linear realization for nucleons including several chiral multiplets. In this review we consider a parity doublet model (PDM) of nucleons as a model of linear realization, and examine its features through the phenomenology of dense QCD, especially neutron star matter. The PDM includes two nucleon fields, \(N_{1}\) and \(N_{2}\), whose left- and right-handed components (defined through the \((1\pm\gamma_{5})/2\) projections) transform differently as \(N_{1R/L}\to g_{R/L}N_{1R/L}\) and \(N_{2R/L}\to g_{L/R}N_{2R/L}\) under the U(\(N_{f}\))\({}_{L}\otimes\) U(\(N_{f}\))\({}_{R}\) chiral transformations (mirror assignment). The mass term of \(\sim m_{0}(\bar{N}_{1R}N_{2L}+\bar{N}_{1L}N_{2R})\) is now possible without breaking U(\(N_{f}\))\({}_{L}\otimes\) U(\(N_{f}\))\({}_{R}\) symmetry, and the mass \(m_{0}\) is chiral invariant. This chiral invariant mass term and the conventional Yukawa coupling term are diagonalized together, yielding spectra of positive and negative parity nucleons. For a sufficiently large \(m_{0}\), the overall magnitude of the physical nucleon masses is primarily set by \(m_{0}\), while the chiral variant mass \(\propto\langle\sigma\rangle\) is mainly responsible for the mass splitting between positive and negative parity nucleons. Such a model was first constructed by DeTar and Kunihiro [8], where \(N(939)\) and \(N^{*}(1535)\) are regarded as partners. The size of \(m_{0}\) is of great concern for predicting the properties of nucleons near the chiral restoration. In a minimal PDM, the decay width of \(N^{*}(1535)\) is used to set the constraint \(m_{0}\lesssim 500\) MeV [9]. However, as in the standard \(\sigma\) model, such estimates can be easily affected by \(\sim 30\%\) if we permit non-renormalizable terms of dimension 5, and a larger value of \(m_{0}\) is possible (see, e.g., Ref. [10]). Further evidence of large \(m_{0}\) comes from a lattice QCD study at finite temperature for a nucleon and its parity partner [11].
The mass gap between \(N(939)\) and \(N^{*}(1535)\) is reduced together with the reduction of chiral condensates, while the substantial mass of \(N(939)\) can remain; this suggests that \(m_{0}\) may be as large as the mass of \(N(939)\) itself. A nucleon mass relatively insensitive to the chiral restoration has important consequences for dense nuclear matter at densities relevant for neutron star (NS) phenomenology. In the past \(\sim 20\) years there has been dramatic progress in measurements of NS mass-radius (\(M\)-\(R\)) relations, which are in one-to-one correspondence with the QCD equation of state (EOS). The key question is whether the EOS is stiff or soft; a stiffer EOS has a larger pressure at a given energy density and prevents a star from gravitational collapse to a black hole. The relevant NS constraints are the existence of \(2M_{\odot}\) NS [12; 13; 14; 15; 16; 17], and the radii of \(1.4M_{\odot}\) NS [18; 19; 20] and \(\simeq 2.1M_{\odot}\) NS [21; 22]. In short, the NS EOS is relatively soft at baryon density \(n_{B}\) around \(1\)-\(2n_{0}\) (\(n_{0}\simeq 0.16\,\text{fm}^{-3}\): nuclear saturation density), but evolves into a very stiff EOS at \(\sim 5n_{0}\). The density \(\simeq 1\)-\(2n_{0}\) is usually regarded as the domain of nuclear theories, while the domain at \(\gtrsim 5n_{0}\), where nucleons of radii \(\sim 0.5\)-\(0.8\) fm begin to overlap, likely demands quark matter descriptions. The EOS constraints at \(1\)-\(2n_{0}\) obviously give important information on the chiral invariant mass, but the EOS constraints at \(\gtrsim 5n_{0}\) also impose indirect but powerful constraints on the nuclear territory through the causality condition that the sound velocity, \(c_{s}=(\partial P/\partial\epsilon)^{1/2}\) (\(P\): pressure, \(\epsilon\): energy density), is less than the light velocity (\(c=1\) in our units), see, e.g., Ref. [23]. In order to describe the domain between nuclear and quark matter in a way consistent with the observed soft-to-stiff evolution of the EOS, the simplest scenario is the quark-hadron-crossover (QHC) [24; 25; 26; 27; 28]. Unlike models with first order phase transitions, gradual quark matter formation does not accompany strong softening of the EOS, but can even lead to stiffening [29; 30; 31; 32; 33]. Based on this picture, we build unified equations of state which utilize nuclear models at \(n_{B}\lesssim 2n_{0}\), quark models at \(n_{B}\gtrsim 5n_{0}\), and interpolate them for the EOS at \(2n_{0}\lesssim n_{B}\lesssim 5n_{0}\). We confront the unified EOS with \(M\)-\(R\) relations constrained by observations, and also calculate chiral condensates and matter composition. All these quantities are examined from the nuclear to the quark matter domain, and the correlation between low and high densities gives us global insights into the chiral properties of nucleons. For the construction of the nuclear EOS, we implement a PDM into the Walecka type mean field model with \(\sigma\), \(\omega\), and \(\rho\) [34; 35; 36]. The strangeness is included at the level of the \(U(1)_{A}\) anomaly, where scalar mesons with strangeness, \(\sigma_{s}\), couple to the \(\sigma\) made of up- and down-quarks. For neutron star EOS based on the PDM, see, e.g., Refs. [37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62]. The most notable feature of the PDM is the density dependence of the nucleon mass. The chiral invariant mass allows nucleons to stay massive during the reduction of \(\langle\sigma\rangle\).
In the dilute regime, \(\langle\sigma\rangle\) decreases linearly as a function of \(n_{B}\), and so does the nucleon mass \(\propto\langle\sigma\rangle\) if \(m_{0}\) is absent; the nucleon mass at \(n_{0}\) is \(\simeq 30\)-\(50\%\) smaller than the vacuum mass. With a larger \(m_{0}\), the mass reduction becomes more modest. In addition, nucleon fields need not couple to \(\sigma\) very strongly to reproduce the nucleon mass \(m_{N}\simeq 939\) MeV; in the Walecka type model, this results in a weaker coupling between nucleons and \(\omega\) fields, because such models have been arranged to balance the attractive \(\sigma\) and repulsive \(\omega\) contributions to reproduce the nuclear matter properties at \(n_{0}\). Beyond \(n_{0}\), the attractive \(\sigma\) contributions decrease while the repulsive \(\omega\) contributions keep growing. Thus, a greater \(m_{0}\) makes the overall magnitude of the \(\sigma\) and \(\omega\) contributions smaller, and the resulting softer \(\omega\) repulsion improves the consistency with the radius constraints on \(1.4M_{\odot}\) NS, for which the EOS at \(n_{B}=1\)-\(2n_{0}\) is most important. The PDM as a hadronic model does not describe the chiral restoration at the quark level, such as the modification of the quark Dirac sea. In order to supply such a qualitative trend, the quark matter EOS plays the role of a high density boundary condition. For the quark matter, a three flavor Nambu-Jona-Lasinio (NJL)-type model, which leads to the color-flavor locked (CFL) color-superconducting matter (for a review, see Ref. [63]), is adopted. Effective interactions are examined to fulfill the two-solar-mass (\(2M_{\odot}\)) constraint [27; 28; 64; 65]. In Refs. [58; 61], the authors construct an effective model combining a PDM and an NJL-type model with two flavors, assuming no color superconductivity. This article is mostly a review of our works, Refs. [66], [67], and [68], but also presents improved analyses of Refs. [66] and [68] with the up-to-date version of our PDM. In Ref. [66], we used the PDM without the \(U(1)_{A}\) anomaly to construct a unified EOS, and obtained the constraint \(600\,\mathrm{MeV}\lesssim m_{0}\lesssim 900\,\mathrm{MeV}\). The lower bound is primarily determined by the tidal deformability constraint from GW170817, a detection of gravitational waves from an NS merger event. Later, in Ref. [67], we updated the PDM by adding the \(U(1)_{A}\) anomaly effects, the Kobayashi-Maskawa-'t Hooft (KMT) interactions [69], to the meson sector. Even though we use the PDM only up to \(\lesssim 2n_{0}\), before hyperons appear, the strangeness does affect the chiral condensates in the up- and down-sectors through the KMT interactions. The \(U(1)_{A}\) effects enhance the energy difference between chiral symmetric and broken vacua, leading to stronger softening of the EOS when the chiral symmetry is restored. This is found to be true for both hadronic and quark matter. Especially, the chiral restoration with the \(U(1)_{A}\) anomaly makes the EOS at 1-2\(n_{0}\) softer and leads to small radii for 1.4\(M_{\odot}\) NS. In effect, the lower bound \(m_{0}\gtrsim 600\) MeV given in Ref. [66] is relaxed to \(m_{0}\gtrsim 400\) MeV. While seminal works [24; 25; 27; 64; 66; 70; 71; 72] utilize the interpolation to construct unified EOS, microscopic quantities have not been calculated in a unified way. To utilize the full potential of the interpolation framework, in Ref.
[68] three of the present authors (TM, TK, MH) extended the interpolation to unified generating functionals with external fields coupled to the quantities of interest, and differentiated the functionals to extract chiral and diquark condensates as well as the matter composition. The condensates in the interpolated domain are affected by the physics of both hadronic and quark matter through the boundary conditions for the interpolation; for \(m_{0}\gtrsim 500\,\mathrm{MeV}\) a significant chiral condensate remains up to \(n_{B}\sim 2\)-\(3n_{0}\), and smoothly approaches the condensate in quark matter at \(n_{B}\gtrsim 5n_{0}\). In this review, we update these analyses including the effects of the \(U(1)_{A}\) anomaly. This review is structured as follows. In Sec. 2, we first review the PDM with the mesonic potentials of Ref. [67], and show how to constrain the model parameters to satisfy the hadron properties in vacuum and the saturation properties of nuclear matter. Sec. 3 reviews the construction of quark matter. With these hadronic and quark matter models, in Sec. 4 we construct unified generating functionals as introduced in Ref. [68], and calculate various condensates. Sec. 6 is devoted to a summary. ## 2 Hadronic matter from a parity doublet model In this section, we review the construction of the PDM in Ref. [67]. The fields appearing in the Lagrangian are in the linear realization of chiral symmetry, classified by the chiral representation as \((\mathrm{SU}(3)_{L},\,\mathrm{SU}(3)_{R})_{\mathrm{U(1)}_{A}}\). We determine the model parameters to reproduce hadronic properties in vacuum and the saturation properties of nuclear matter. ### Scalar and pseudoscalar mesons We introduce a \(3\times 3\) matrix field \(\Phi\) for scalar and pseudoscalar mesons which belong to \((\mathbf{3},\bar{\mathbf{3}})_{-2}\) under \((\mathrm{SU}(3)_{L},\,\mathrm{SU}(3)_{R})_{\mathrm{U(1)}_{A}}\) symmetry. The Lagrangian is given by \[\mathcal{L}_{M}^{\mathrm{scalar}}=\frac{1}{4}\,\mathrm{tr}\Big{[}\partial_{\mu}\Phi\partial^{\mu}\Phi^{\dagger}\Big{]}-V_{M}-V_{\mathrm{SB}}-V_{\mathrm{Anom}}, \tag{1}\] where \[V_{M}= -\frac{1}{4}\bar{\mu}^{2}\operatorname{tr}\Bigl{[}\Phi\Phi^{\dagger}\Bigr{]}+\frac{1}{8}\lambda_{4}\operatorname{tr}\Bigl{[}\Bigl{(}\Phi\Phi^{\dagger}\Bigr{)}^{2}\Bigr{]}-\frac{1}{12}\lambda_{6}\operatorname{tr}\Bigl{[}\Bigl{(}\Phi\Phi^{\dagger}\Bigr{)}^{3}\Bigr{]}\] \[+\lambda_{8}\operatorname{tr}\Bigl{[}\Bigl{(}\Phi\Phi^{\dagger}\Bigr{)}^{4}\Bigr{]}+\lambda_{10}\operatorname{tr}\Bigl{[}\Bigl{(}\Phi\Phi^{\dagger}\Bigr{)}^{5}\Bigr{]}, \tag{2}\] \[V_{\text{SB}}= -\frac{1}{2}c\operatorname{tr}\Bigl{[}\hat{m}_{q}^{\dagger}\Phi+\hat{m}_{q}\Phi^{\dagger}\Bigr{]},\] (3) \[V_{\text{Anom}}= -B\Bigl{[}\det(\Phi)+\det\Bigl{(}\Phi^{\dagger}\Bigr{)}\Bigr{]}\,, \tag{4}\] with \(B\) and \(c\) being the coefficients for the axial anomaly term and the explicit chiral symmetry breaking term, respectively, and \(\hat{m}_{q}=\text{diag}\{m_{u},m_{d},m_{s}\}\). In the above, we include only terms with one trace in \(V_{M}\), since they are of leading order in the \(1/N_{c}\) expansion. The present hadronic model is used up to \(2n_{0}\), assuming no appearance of hyperons. In the mean field approximation, the \(\Phi\) field can be reduced to \[\Phi\to\begin{pmatrix}M&0\\ 0&\sigma_{s}\end{pmatrix}_{3\times 3}\,, \tag{5}\] where \(M\) is a \(2\times 2\) matrix field transforming as \(M\to g_{L}Mg_{R}^{\dagger}\) with \(g_{L,R}\in\text{SU}(2)_{L,R}\).
While we apply the mean field, here we keep a matrix representation for the two-flavor part to clarify the symmetry of the two-flavor part. The field \(\sigma_{s}\) corresponds to the scalar condensate made of a strange and an anti-strange quark, \(\langle\bar{s}s\rangle\). The reduced Lagrangian is now given by \[\mathcal{L}_{M}^{\text{scalar}}=\frac{1}{4}\biggl{(}\operatorname{tr}\Bigl{[}\partial_{\mu}M\partial^{\mu}M^{\dagger}\Bigr{]}+(\partial_{\mu}\sigma_{s}\partial^{\mu}\sigma_{s})\biggr{)}-V_{M}-V_{SB}-V_{\text{Anom}}\,, \tag{6}\] where \[V_{M}= -\frac{1}{4}\bar{\mu}^{2}\Bigl{(}\operatorname{tr}\Bigl{[}MM^{\dagger}\Bigr{]}+\sigma_{s}^{2}\Bigr{)}+\frac{1}{8}\lambda_{4}\Bigl{(}\operatorname{tr}\Bigl{[}(MM^{\dagger})^{2}\Bigr{]}+\sigma_{s}^{4}\Bigr{)}\] \[-\frac{1}{12}\lambda_{6}\Bigl{(}\operatorname{tr}\Bigl{[}(MM^{\dagger})^{3}\Bigr{]}+\sigma_{s}^{6}\Bigr{)}+\lambda_{8}\Bigl{(}\operatorname{tr}\Bigl{[}(MM^{\dagger})^{4}\Bigr{]}+\sigma_{s}^{8}\Bigr{)}\] \[+\lambda_{10}\Bigl{(}\operatorname{tr}\Bigl{[}(MM^{\dagger})^{5}\Bigr{]}+\sigma_{s}^{10}\Bigr{)}, \tag{7}\] \[V_{\text{SB}}= -\frac{c}{2}\biggl{[}\operatorname{tr}\Bigl{[}\hat{m}_{2\times 2}(M+M^{\dagger})\Bigr{]}+2m_{s}\sigma_{s}\biggr{]},\] (8) \[V_{\text{Anom}}= -B\sigma_{s}\Bigl{[}\det(M)+\det(M^{\dagger})\Bigr{]}\,, \tag{9}\] with \(\hat{m}_{2\times 2}=\text{diag}\{m_{u},m_{d}\}\). In the mean field treatment, the two-flavor matrix field \(M\) is reduced to \(\text{diag}(\sigma,\sigma)\). Here one might wonder why the \((\Phi\Phi^{\dagger})\) terms are included up to the fifth power. In fact, the potential of the two-flavor model in the analyses [49, 59, 66] is not bounded from below at a very large value of \((\Phi\Phi^{\dagger})\), but has only a local minimum. There, very large values of \((\Phi\Phi^{\dagger})\) are simply discarded because they are not supposed to be within the domain of applicability of the model. For three-flavor models with the KMT interactions, however, it turns out that reasonable local minima do not exist, as seen from the black curve in Fig. 1. We add higher order terms to stabilize the potential and fine tune the models to reproduce the nuclear saturation properties. Note that these higher order terms do not modify the potentials at small \(\sigma_{s}\). ### Nucleon parity doublet and vector mesons In the analyses done in Refs. [66; 67; 68], hadronic models are used only up to \(2n_{0}\), assuming that hyperons are not populated. So, although the mesonic sector includes three flavors, we include only nucleons in the baryon sector. The nucleons and the chiral partners belong to the \((\mathbf{2},\mathbf{1})_{+1}\) and \((\mathbf{1},\mathbf{2})_{-1}\) representations under \((\mathrm{SU(2)}_{L}\,,\,\mathrm{SU(2)}_{R})_{\mathrm{U(1)}_{A}}\): \[\psi_{1}^{L}:(\mathbf{2},\mathbf{1})_{-1},\quad\psi_{1}^{R}:(\mathbf{1},\mathbf{2})_{+1},\quad\psi_{2}^{L}:(\mathbf{1},\mathbf{2})_{+1},\quad\psi_{2}^{R}:(\mathbf{2},\mathbf{1})_{-1}. \tag{10}\] We note that \(\psi_{1}\) and \(\psi_{2}\) carry the positive and negative parities, respectively: \[\psi_{1}\underset{P}{\rightarrow}\gamma_{0}\psi_{1}\;,\quad\psi_{2}\underset{P}{\rightarrow}-\gamma_{0}\psi_{2}\;. \tag{11}\]
The relevant Lagrangian for the nucleons and their Yukawa interactions with the field \(M\) is given by \[\mathcal{L}_{N}^{\mathrm{scalar}} =\sum_{i=1,2}\bar{\psi}_{i}i\gamma^{\mu}D_{\mu}\psi_{i}-m_{0}\Big{(}\bar{\psi}_{1}^{L}\psi_{2}^{R}-\bar{\psi}_{1}^{R}\psi_{2}^{L}-\bar{\psi}_{2}^{L}\psi_{1}^{R}+\bar{\psi}_{2}^{R}\psi_{1}^{L}\Big{)}\] \[\quad-g_{1}\Big{(}\bar{\psi}_{1}^{L}\tau^{2}(M^{\dagger})^{\rm T}\tau^{2}\psi_{1}^{R}+\bar{\psi}_{1}^{R}\tau^{2}M^{\rm T}\tau^{2}\psi_{1}^{L}\Big{)}\] \[\quad-g_{2}\Big{(}\bar{\psi}_{2}^{L}\tau^{2}M^{\rm T}\tau^{2}\psi_{2}^{R}+\bar{\psi}_{2}^{R}\tau^{2}(M^{\dagger})^{\rm T}\tau^{2}\psi_{2}^{L}\Big{)}\;, \tag{12}\] where the covariant derivatives on the nucleon fields are defined as \[D_{\mu}\psi_{1,2}=(\partial_{\mu}-iV_{\mu})\psi_{1,2}\,, \tag{13}\] with \[V_{\mu}=\begin{pmatrix}\mu_{B}+\mu_{Q}&0\\ 0&\mu_{B}\end{pmatrix}\delta_{\mu}^{0}\;. \tag{14}\] Following Ref. [49], the vector mesons \(\omega\) and \(\rho\) are included based on the framework of the hidden local symmetry (HLS) [73; 74]. Here, instead of showing the forms manifestly invariant under the HLS, we just show the relevant interaction terms among baryons and vector mesons: \[\mathcal{L}_{N}^{\mathrm{vector}}=-\sum_{i=1,2}\bar{\psi}_{i}\,\gamma^{\mu}\Big{(}g_{\omega NN}\omega_{\mu}+g_{\rho NN}\,\frac{\vec{\tau}}{2}\,\vec{\rho}_{\mu}\Big{)}\psi_{i}\;, \tag{15}\] where \(\vec{\tau}\) is the Pauli matrix for isospin symmetry. The relevant potential terms for the vector mesons are expressed as \[\mathcal{L}_{V}= \frac{1}{2}\,m_{\omega}^{2}\,\omega_{\mu}\omega^{\mu}+\frac{1}{2}\,m_{\rho}^{2}\,\vec{\rho}_{\mu}\cdot\vec{\rho}^{\,\mu}+\lambda_{\omega\rho}g_{\omega}^{2}g_{\rho}^{2}\,\omega_{\mu}\omega^{\mu}\,\vec{\rho}_{\nu}\cdot\vec{\rho}^{\,\nu}\,. \tag{16}\] In the presence of \(\omega\), the attractive \(\omega^{2}\)-\(\rho^{2}\) coupling with \(\lambda_{\omega\rho}>0\) assists the appearance of \(\rho\) fields, reducing the symmetry energy associated with the isospin asymmetry, as discussed below. (Note that \(\lambda_{\omega\rho}>0\) is needed so that the VEVs of the \(\omega\) and \(\rho\) fields vanish in vacuum.) In the mean field approximation, the meson fields take \[\langle\sigma\rangle=\sigma\,,\quad\langle\omega^{\mu}\rangle=\omega\delta_{0}^{\mu}\,,\quad\langle\rho^{\mu}\rangle=\rho\,\frac{\tau_{3}}{2}\,\delta_{0}^{\mu}\,, \tag{17}\] where each mean field is assumed to be independent of the spatial coordinates. The thermodynamic potential in the hadronic matter is calculated as [49] \[\Omega_{\rm PDM}= V(\sigma,\sigma_{\rm s})-V(\sigma_{0},\sigma_{\rm s0})-\frac{1}{2}m_{\omega}^{2}\omega^{2}-\frac{1}{2}m_{\rho}^{2}\rho^{2}\] \[-\lambda_{\omega\rho}(g_{\omega}\omega)^{2}\big{(}g_{\rho}\rho\big{)}^{2}-2\sum_{i=\pm}\sum_{\alpha=p,n}\int_{\mathbf{p}}^{k_{F}^{\alpha,i}}(\mu_{\alpha}^{*}-E_{\mathbf{p}}^{i})\,, \tag{18}\] where \(i=+\) and \(-\) label the ordinary nucleon \(N(939)\) and the excited nucleon \(N^{*}(1535)\), respectively. The energies of these nucleons are \(E_{\mathbf{p}}^{i}=\sqrt{\mathbf{p}^{2}+m_{i}^{2}}\), with the momenta \(\mathbf{p}\) and the masses obtained by diagonalizing the Lagrangian (12), \[m_{\pm}=\sqrt{m_{0}^{2}+\left(\frac{g_{1}+g_{2}}{2}\,\sigma\right)^{2}}\mp\frac{g_{2}-g_{1}}{2}\,\sigma\,, \tag{19}\] where \(g_{2}>g_{1}\) is assumed so that \(m_{+}<m_{-}\).
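For orientation, the following minimal numerical sketch (Python with numpy; the script is illustrative and not part of the original analysis) inverts Eq. (19) at \(\sigma=f_{\pi}\) to fix \(g_{1,2}\) from the vacuum masses and then evaluates \(m_{\pm}\) as \(\sigma\) is reduced, mimicking partial chiral restoration. For \(m_{0}=700\) MeV it reproduces \(g_{1}=7.81\) and \(g_{2}=14.26\) listed in Table 3 below.

```python
import numpy as np

f_pi = 92.4                        # pion decay constant [MeV], Table 1
m_plus, m_minus = 939.0, 1535.0    # N(939) and N*(1535) vacuum masses [MeV]

def couplings(m0):
    """Solve Eq. (19) at sigma = f_pi for the Yukawa couplings (g1, g2)."""
    # m_- - m_+ = (g2 - g1) sigma;  m_+ + m_- = 2 sqrt(m0^2 + ((g1+g2)/2 sigma)^2)
    g_diff = (m_minus - m_plus) / f_pi
    g_sum = 2.0 * np.sqrt(((m_plus + m_minus) / 2.0)**2 - m0**2) / f_pi
    return (g_sum - g_diff) / 2.0, (g_sum + g_diff) / 2.0

def masses(sigma, m0, g1, g2):
    """Eq. (19): in-medium masses of the parity partners."""
    avg = np.sqrt(m0**2 + (0.5 * (g1 + g2) * sigma)**2)
    return avg - 0.5 * (g2 - g1) * sigma, avg + 0.5 * (g2 - g1) * sigma

g1, g2 = couplings(700.0)
print(f"g1 = {g1:.2f}, g2 = {g2:.2f}")        # ~7.81 and ~14.26, cf. Table 3
for s in (f_pi, 0.5 * f_pi, 0.0):             # sigma -> 0 mimics chiral restoration
    mp, mm = masses(s, 700.0, g1, g2)
    print(f"sigma = {s:6.1f} MeV:  m+ = {mp:7.1f} MeV,  m- = {mm:7.1f} MeV")
```

As \(\sigma\to 0\) both masses converge to \(m_{0}\), which is the degeneracy of the parity partners at chiral restoration discussed above.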
The effective chemical potentials \(\mu_{p}^{*}\) and \(\mu_{n}^{*}\) are defined as \[\mu_{p}^{*}=\mu_{B}+\mu_{Q}-g_{\omega NN}\,\omega-\frac{1}{2}g_{\rho NN}\,\rho\,,\quad\mu_{n}^{*}=\mu_{B}-g_{\omega NN}\,\omega+\frac{1}{2}g_{\rho NN}\,\rho\,. \tag{20}\] In the integration above, the integral region is restricted as \(|\mathbf{p}|<k_{F}^{\alpha,i}\), where \(k_{F}^{\alpha,i}=\sqrt{(\mu_{\alpha}^{*})^{2}-m_{i}^{2}}\) is the Fermi momentum of nucleon \(i\) with isospin \(\alpha\). In the above expression, we implicitly used the so-called no-sea approximation, assuming that the structure of the Dirac sea remains the same in vacuum and in medium for \(n_{B}\lesssim 2n_{0}\). \(V(\sigma,\sigma_{\rm s})\) is the potential of the scalar mean fields given by \[V(\sigma,\sigma_{\rm s})= -\frac{1}{2}\bar{\mu}^{2}\bigg{(}\sigma^{2}+\frac{1}{2}\sigma_{\rm s}^{2}\bigg{)}+\frac{1}{4}\lambda_{4}\bigg{(}\sigma^{4}+\frac{1}{2}\sigma_{\rm s}^{4}\bigg{)}-\frac{1}{6}\lambda_{6}\bigg{(}\sigma^{6}+\frac{1}{2}\sigma_{\rm s}^{6}\bigg{)}\] \[+\lambda_{8}\Big{(}2\sigma^{8}+\sigma_{\rm s}^{8}\Big{)}+\lambda_{10}\Big{(}2\sigma^{10}+\sigma_{\rm s}^{10}\Big{)}-2B\sigma^{2}\sigma_{\rm s}-(2cm_{u}\sigma+cm_{\rm s}\sigma_{\rm s})\,. \tag{21}\] In Eq. (18) we subtracted the potential in vacuum, \(V(\sigma_{0},\sigma_{\rm s0})\), with which the total potential in vacuum is zero. Here \(\sigma_{0}\) and \(\sigma_{\rm s0}\) are related to the decay constants \(f_{\pi}\) and \(f_{K}\) as \[\sigma_{0}=f_{\pi}\,,\quad\sigma_{\rm s0}=2f_{K}-f_{\pi}\,. \tag{22}\] Finally, we include leptons for the charge neutrality realized in NSs. The total thermodynamic potential of hadronic matter for NSs takes the form \[\Omega_{\rm H}=\Omega_{\rm PDM}+\sum_{l=e,\mu}\Omega_{l}\,, \tag{23}\] where \(\Omega_{l}\) (\(l=e,\mu\)) are the thermodynamic potentials for leptons given by \[\Omega_{l}=-2\int_{\mathbf{p}}^{k_{F}^{l}}(\mu_{l}-E_{\mathbf{p}}^{l})\,,\qquad k_{F}^{l}=\sqrt{\mu_{l}^{2}-m_{l}^{2}}\,. \tag{24}\] Here, the mean fields are determined by the following stationary conditions: \[0=\frac{\partial\Omega_{\rm H}}{\partial\sigma}=\frac{\partial\Omega_{\rm H}}{\partial\omega}=\frac{\partial\Omega_{\rm H}}{\partial\rho}\,. \tag{25}\] In neutron stars, we impose the beta equilibrium and the charge neutrality conditions represented as \[\mu_{e} =\mu_{\mu}=-\mu_{Q}\,, \tag{26}\] \[\frac{\partial\Omega_{\rm H}}{\partial\mu_{Q}} =n_{p}-n_{l}=0. \tag{27}\] The mean fields and the charge chemical potential are determined as functions of \(\mu_{B}\). After substituting these values into \(\Omega_{\rm H}\), we obtain the pressure of the hadronic matter as a function of \(\mu_{B}\), \[P_{\rm H}(\mu_{B})=-\Omega_{\rm H}(\mu_{B}). \tag{28}\]
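The Fermi integrals appearing in Eqs. (18) and (24), with the shorthand \(\int_{\mathbf{p}}\equiv\int d^{3}p/(2\pi)^{3}\), admit a standard closed form for a free relativistic Fermi gas at zero temperature. The sketch below (assuming numpy/scipy; the electron-like test point is arbitrary) checks that closed form against direct quadrature.

```python
import numpy as np
from scipy.integrate import quad

def omega_quad(mu, m):
    """Omega = -2 int_{|p|<kF} d^3p/(2pi)^3 (mu - E_p), as in Eq. (24), by quadrature."""
    if mu <= m:
        return 0.0
    kF = np.sqrt(mu**2 - m**2)
    val, _ = quad(lambda p: p**2 * (mu - np.sqrt(p**2 + m**2)), 0.0, kF)
    return -val / np.pi**2           # -2 * 4pi/(2pi)^3 = -1/pi^2

def omega_closed(mu, m):
    """Same integral in closed form (degenerate free Fermi gas)."""
    if mu <= m:
        return 0.0
    kF = np.sqrt(mu**2 - m**2)
    EF = np.sqrt(kF**2 + m**2)       # = mu
    I_E = (kF * EF**3 / 4 - m**2 * kF * EF / 8 - m**4 / 8 * np.log((kF + EF) / m))
    return -(mu * kF**3 / 3 - I_E) / np.pi**2

mu_l, m_l = 120.0, 0.511             # test point [MeV]: electron-like, arbitrary
print(omega_quad(mu_l, m_l), omega_closed(mu_l, m_l))   # the two should agree
```

The same integral, with \(\mu_{\alpha}^{*}\) and \(m_{i}\) in place of \(\mu_{l}\) and \(m_{l}\), gives each nucleon contribution in Eq. (18).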
### Determination of model parameters In this subsection, we determine the parameters in the PDM to reproduce the masses and decay constants in vacuum and the saturation properties of nuclear matter. In nuclear matter, the energy per nucleon \(\varepsilon\) is given as a function of the baryon number density \(n_{B}\) and the isospin asymmetry \(\delta=\frac{n_{p}-n_{n}}{n_{p}+n_{n}}\). It is expanded around the normal nuclear density \(n_{0}=0.16\,{\rm fm}^{-3}\) and symmetric matter \(\delta=0\) as \[\varepsilon=-B_{0}+\frac{K_{0}}{2}\bigg{(}\frac{n_{B}-n_{0}}{3n_{0}}\,\bigg{)}^{2}+\delta^{2}\bigg{(}S_{0}+L_{0}\,\frac{n_{B}-n_{0}}{3n_{0}}\bigg{)}+{\rm higher\ order}, \tag{29}\] where \(B_{0}\), \(K_{0}\), \(S_{0}\), and \(L_{0}\) are called the binding energy, incompressibility, symmetry energy, and slope parameter, respectively, as illustrated in Fig. 2. The parameter \(K_{0}\) measures the curvature of the energy at the normal nuclear density: \[K_{0}=9n_{0}^{2}\,\frac{\partial^{2}\varepsilon}{\partial n_{B}^{2}}\bigg{|}_{n_{B}=n_{0},\delta=0}\,. \tag{30}\] The symmetry energy \(S_{0}\) is calculated as \[S_{0}=\frac{1}{2}\,\frac{\partial^{2}\varepsilon}{\partial\delta^{2}}\bigg{|}_{n_{B}=n_{0},\delta=0}\,. \tag{31}\] The parameter \(L_{0}\) characterizes the slope of the symmetry energy at the normal nuclear density: \[L_{0}=\frac{3n_{0}}{2}\left.\frac{\partial^{3}\varepsilon}{\partial n_{B}\,\partial\delta^{2}}\right|_{n_{B}=n_{0},\,\delta=0}\,. \tag{32}\] We summarize the input values used in this review in Tables 1 and 2. We first use the masses of the \(\omega\) and \(\rho\) mesons to fix \(m_{\omega}\) and \(m_{\rho}\) in Eq. (16). The parameters \(cm_{u}=cm_{d}\) and \(cm_{s}\) are fixed from \(m_{\pi}\), \(f_{\pi}\), \(m_{K}\), and \(f_{K}\) as \[2cm_{u}=m_{\pi}^{2}f_{\pi}\,,\qquad c(m_{u}+m_{s})=m_{K}^{2}f_{K}\,. \tag{33}\] The values of \(g_{1}\) and \(g_{2}\) are determined from the masses of the nucleons in vacuum through Eq. (19) with \(\sigma\) replaced by \(f_{\pi}\). There are still 9 parameters to be determined: \[\bar{\mu}^{2},\;\lambda_{4},\;\lambda_{6},\;\lambda_{8},\;\lambda_{10},\;B,\;\lambda_{\omega\rho}\,,\;g_{\omega NN},\;g_{\rho NN}\,. \tag{34}\] These parameters are tuned to reproduce the saturation properties listed in Table 2. It turns out that there is some degeneracy among the parameters \(\lambda_{8}\), \(\lambda_{10}\), and \(B\). In Fig. 3, we show the range of \(\lambda_{8}\) and \(\lambda_{10}\) needed to satisfy the saturation properties. Finally, we fit the parameter \(B\) to reproduce the masses of the \(\eta\) and \(\eta^{\prime}\) mesons. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \(m_{\pi}\) & \(m_{K}\) & \(f_{\pi}\) & \(f_{K}\) & \(m_{\omega}\) & \(m_{\rho}\) & \(m_{+}\) & \(m_{-}\) \\ \hline 140 & 494 & 92.4 & 109 & 783 & 776 & 939 & 1535 \\ \hline \hline \end{tabular} \end{table} Table 1: Physical inputs in vacuum in units of MeV. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \(n_{0}\) [fm\({}^{-3}\)] & \(B_{0}\) [MeV] & \(K_{0}\) [MeV] & \(S_{0}\) [MeV] & \(L_{0}\) [MeV] \\ \hline 0.16 & 16 & 240 & 31 & 57.7 \\ \hline \hline \end{tabular} \end{table} Table 2: Saturation properties used to determine the model parameters: the saturation density \(n_{0}\), the binding energy \(B_{0}\), the incompressibility \(K_{0}\), the symmetry energy \(S_{0}\), and the slope parameter \(L_{0}\). Figure 2: Density dependence of energy per nucleon for the symmetric matter (indicated by \(\delta=0\)) and the pure neutron matter (\(\delta=1\)). Here we omit the details and show the determined values of the model parameters only for \(m_{0}=700\,\mathrm{MeV}\), as a typical example, in Table 3. We refer to Ref. [67] for the details of the determination and the values of the model parameters for other choices of \(m_{0}\).
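As a cross-check of these definitions, the sketch below extracts \(K_{0}\), \(S_{0}\), and \(L_{0}\) from any callable \(\varepsilon(n_{B},\delta)\) by central finite differences of Eqs. (30)-(32). The quadratic energy functional here is a stand-in with exactly the form of Eq. (29) and the values of Table 2, not the actual PDM mean-field result.

```python
n0 = 0.16   # saturation density [fm^-3]

def toy_eps(nB, delta):
    """Stand-in energy per nucleon [MeV] with the form of Eq. (29);
    any mean-field epsilon(nB, delta) could be plugged in instead."""
    x = (nB - n0) / (3.0 * n0)
    return -16.0 + 0.5 * 240.0 * x**2 + delta**2 * (31.0 + 57.7 * x)

def saturation_properties(eps, h=1e-3):
    """K0, S0, L0 via central finite differences of Eqs. (30)-(32)."""
    d2_n = (eps(n0 + h, 0) - 2 * eps(n0, 0) + eps(n0 - h, 0)) / h**2
    K0 = 9.0 * n0**2 * d2_n                                   # Eq. (30)
    d2_d = (eps(n0, h) - 2 * eps(n0, 0) + eps(n0, -h)) / h**2
    S0 = 0.5 * d2_d                                           # Eq. (31)
    # mixed derivative d^3 eps / (dnB ddelta^2) for Eq. (32)
    d3 = ((eps(n0 + h, h) - 2 * eps(n0 + h, 0) + eps(n0 + h, -h))
          - (eps(n0 - h, h) - 2 * eps(n0 - h, 0) + eps(n0 - h, -h))) / (2 * h**3)
    L0 = 1.5 * n0 * d3
    return K0, S0, L0

print(saturation_properties(toy_eps))   # ~ (240, 31, 57.7), cf. Table 2
```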
### Softening of the EOS by the Effect of Anomaly Here, we briefly explain the mechanism by which the anomaly softens the EOS in the PDM; we refer to Ref. [67] for details. One of the key features is that both of the condensates \(\sigma\) and \(\sigma_{s}\) are enhanced when the effect of the anomaly is included: their vacuum values increase with increasing \(B\). Since the mass of the \(\sigma\) meson, \(m_{\sigma}\), is proportional to \(\sigma\), the mass increases as well, as shown in Fig. 4. In the potential picture for nucleons, the \(\sigma\) meson mediates the attractive force among nucleons in matter, so that a larger \(m_{\sigma}\) leads to a shorter effective range of the attraction and a weaker overall strength. The repulsive \(\omega\) interaction should then be weaker to balance the weaker attraction. As a result, the weaker repulsion for a larger \(B\) makes the EOS softer at densities above the normal nuclear density. ## 3 Quark matter from an NJL-type model Following Ref. [27], we construct quark matter from an NJL-type effective model of quarks with four-Fermi interactions which drive the color superconductivity as well as the spontaneous chiral symmetry breaking. The Lagrangian is given by \[\mathcal{L}_{\rm CSC}=\mathcal{L}_{0}+\mathcal{L}_{\sigma}+\mathcal{L}_{\rm d}+\mathcal{L}_{\rm KMT}+\mathcal{L}_{\rm vec}\,, \tag{35}\] where
For coupling constants \(G\) and \(K\) as well as the cutoff \(\Lambda\), we use \(G\Lambda^{2}=1.835\) \begin{table} \begin{tabular}{c|c|c|c} \hline \hline & \(m_{0}=700\) [MeV] & \(\lambda_{8}^{\prime}=0\) & \(\lambda_{8}^{\prime}=2.677\) \\ \hline & \(g_{1}\) & 7.81 & 7.81 \\ & \(\bar{g}_{2}\) & 14.26 & 14.26 \\ & \(\bar{\mu}^{2}/f_{\pi}^{2}\) & 23.21 & 41.35 \\ & \(\lambda_{4}\) & 133.4 & 194.7 \\ \(B=600\) [MeV] & \(\lambda_{6}f_{\pi}^{2}\) & 82.71 & 160.1 \\ & \(\lambda_{\omega_{p}}\) & 0.3047 & 0.3636 \\ & \(\lambda_{10}f_{\pi}^{6}\) & 0.5221 & 0.09091 \\ & \(g_{\omega NN}\) & 5.437 & 5.142 \\ & \(g_{\rho NN}\) & 9.577 & 9.541 \\ \hline \hline & \(g_{1}\) & 7.81 & 7.81 \\ & \(\bar{g}_{2}\) & 14.26 & 14.26 \\ & \(\bar{g}^{2}/f_{\pi}^{2}\) & 39.98 & 55.24 \\ & \(\lambda_{4}\) & 94.02 & 149.3 \\ \(B=0\) [MeV] & \(\lambda_{6}f_{\pi}^{2}\) & 62.23 & 136.4 \\ & \(\lambda_{\omega_{p}}\) & 0.2442 & 0.2988 \\ & \(\lambda_{10}f_{\pi}^{6}\) & 0.5221 & 0.09091 \\ & \(g_{\omega NN}\) & 6.287 & 5.957 \\ & \(g_{\rho NN}\) & 10.19 & 10.21 \\ \hline \hline \end{tabular} \end{table} Table 3: Model parameters determined from the saturation properties. When \(B=600\) MeV, solutions satisfying the saturation properties can be found only in the range: \(0\leq\lambda_{8}^{{}^{\prime}}\leq 2.677\). Here we list the boundary values as typical examples; \(\lambda_{8}^{\prime}=0\) is the minimum boundary and \(\lambda_{8}^{\prime}=2.677\) is the maximum boundary. \(K\Lambda^{5}=9.29\) and \(\Lambda=631.4\) MeV, which successfully reproduce the hadron phenomenology at low energy [26; 75]. The mean fields are introduced as \[\sigma_{f} =\left\langle\tilde{q}_{f}q_{f}\right\rangle\,,\quad(f=u,d,s)\;, \tag{42}\] \[d_{j} =\left\langle q^{\dagger}C\gamma_{5}R_{j}q_{f}\right\rangle\,, \quad(j=1,2,3)\;,\] (43) \[n_{q} =\sum_{f=u,d,s}\left\langle q_{f}^{\dagger}q_{f}\right\rangle\,, \tag{44}\] where \((R_{1},R_{2},R_{3})=(\tau_{7}\lambda_{7},\tau_{5}\lambda_{5},\tau_{2}\lambda_{ 2})\). The resultant thermodynamic potential is calculated as \[\Omega_{\rm CSC}=\Omega_{s}-\Omega_{s}[\sigma_{f}=\sigma_{f}^{0},d_{j}=0,\mu_ {q}=0]+\Omega_{c}-\Omega_{c}[\sigma_{f}=\sigma_{f}^{0},d_{j}=0]\;, \tag{45}\] where \[\Omega_{s} =-2\sum_{\alpha=1}^{18}\int_{\bf P}^{\Lambda}\frac{\varepsilon_{ \alpha}}{2}\;, \tag{46}\] \[\Omega_{c} =\sum_{f=u,d,s}2G\sigma_{i}^{2}+\sum_{j=1,2,3}Hd_{j}^{2}-4K\sigma _{u}\sigma_{d}\sigma_{s}-g_{V}n_{q}^{2}\;. \tag{47}\] In Eq. (46), \(\varepsilon_{\alpha}\) are energy eigenvalues of the inverse propagator in Nambu-Gor'kov basis given by \[S^{-1}(k)=\begin{pmatrix}\gamma_{\mu}k^{\mu}-\tilde{M}+\gamma^{0}\hat{\mu}& \gamma_{5}\sum_{i}\Delta_{i}R_{i}\\ -\gamma_{5}\sum_{i}\Delta_{i}^{*}R_{i}&\gamma_{\mu}k^{\mu}-\tilde{M}-\gamma^{ 0}\hat{\mu}\end{pmatrix}\,, \tag{48}\] where \[M_{i} =m_{i}-4G\sigma_{i}+K\sum_{j,k=u,d,s}|\epsilon_{ijk}|\sigma_{j} \epsilon_{k}\;,\quad(i=u,d,s)\;, \tag{49}\] \[\Delta_{j} =-2Hd_{j}\,,(j=1,2,3)\;,\] (50) \[\hat{\mu} =\mu_{q}-2g_{V}n_{q}+\mu_{3}\lambda_{3}+\mu_{8}\lambda_{8}+\mu_{Q }Q\;. \tag{51}\] The inverse propagator \(S^{-1}(k)\) in Eq. (48) is \(72\times 72\) matrix in terms of the color, flavor, spin and Nambu-Gorkov basis and has 72 eigenvalues. \(M_{u,d,s}\) are the constituent masses of the \(u,d,s\)-quarks and \(\Delta_{1,2,3}\) are the color-superconducting gap energies. In the high density region, \(n_{B}\gtrsim 5n_{0}\), their ranges are \(M_{u,d}\approx 50\)-100 MeV, \(M_{s}\approx 250\)-300 MeV and \(\Delta_{1,2,3}\approx 200\)-250 MeV [26]. 
We note that the inverse propagator matrix does not depend on the spin, and that the charge conjugation invariance relates two eigenvalues. Then, there are 18 independent eigenvalues at most. The entire thermodynamic potential is constructed by adding the lepton contribution in Eq. (24) as \[\Omega_{\rm Q}=\Omega_{\rm CSC}+\sum_{l=e,\mu}\Omega_{l}\;. \tag{52}\] The chiral condensates \(\sigma_{i}\,(i=u,d,s)\) and the diquark condensates \(d_{j}\,(j=1,2,3)\) are determined from the gap equations: \[\frac{\partial\Omega_{\rm Q}}{\partial\sigma_{i}}=0\;,\quad\frac{\partial \Omega_{\rm Q}}{\partial d_{j}}=0\;. \tag{53}\] The relevant chemical potentials other than the baryon number density are determined from the beta equilibrium condition in Eq. (26) combined with the conditions for electromagnetic charge neutrality and color charge neutrality expressed as \[n_{j}=-\frac{\partial\Omega_{\rm Q}}{\partial\mu_{j}}=0\,,\quad(j=3,8,Q). \tag{54}\] The baryon number density \(n_{B}\) is equal to three times of quark number density given by \[n_{q}=-\frac{\partial\Omega_{\rm Q}}{\partial\mu_{q}}\, \tag{55}\] where \(\mu_{q}\) is the quark number chemical potential which is \(1/3\) of the baryon number chemical potential. Substituting the above conditions, we obtain the pressure of the system as \[P_{\rm Q}=-\Omega_{\rm Q}. \tag{56}\] ### Softening of the EOS by the Effect of Anomaly in NJL-type model Here we briefly explain how the anomaly softens the EOS in the NJL-type quark model. For simplicity we set \(H=0\) and omit the effects of diquarks. In the KMT interaction in Eq. (39), the coefficient \(K\) represents the strength of U(1)\({}_{A}\) anomaly. The anomaly assists the chiral symmetry breaking and lowers the ground state energy in vacuum; a larger \(K\) leads to chiral condensates greater in magnitude as shown in the left panel of Fig.5. With the chiral restoration, the system loses the energetic benefit of having the chiral condensates. Such release of the energy is more radical with the anomaly than without it. As one can see from the thermodynamic relation \(P=-\varepsilon+\mu_{q}n_{q}\), the larger energy \(\varepsilon\) with a stronger anomaly leads to the smaller pressure, i.e., the softening. In other words, with the anomaly we have to add a larger "bag constant" to the energy density but must subtract it from the pressure. We show the resulting EOSs for \(H/G=0\) and \(g_{V}/G=0.1\) in the right panel of Fig. 5. ## 4 Interpolated EOSs and \(M\)-\(R\) relations of NSs ### Interpolation of EOSs In this subsections, we briefly explain how to interpolate the EOS for hadronic matter to that for quark matter constructed in previous sections. Following Ref. [26], we assume that hadronic matter is realized in the low density region \(n_{B}<2n_{0}\), and use the pressure constructed in Eq. (28). In the high density region \(n_{B}>5n_{0}\), the pressure given in Eq. (56) of Figure 5: \(K\) dependence of chiral condensates (left panel) and the energy dependence of pressure for \(H/G=0\) and \(g_{V}/G=0.1\) (right panel). quark matter is used. In the intermediate region \(2n_{0}<n_{B}<5n_{0}\), we assume that the pressure is expressed by a fifth order polynomial of \(\mu_{B}\) as1 Footnote 1: It is important to make interpolation for a correct set of variables, either \(P(\mu_{B})\) or \(\varepsilon(n_{B})\), from which one can deduce all the thermodynamic quantities by taking derivatives [26]. 
### Softening of the EOS by the Effect of Anomaly in the NJL-type model Here we briefly explain how the anomaly softens the EOS in the NJL-type quark model. For simplicity we set \(H=0\) and omit the effects of diquarks. In the KMT interaction in Eq. (39), the coefficient \(K\) represents the strength of the U(1)\({}_{A}\) anomaly. The anomaly assists the chiral symmetry breaking and lowers the ground state energy in vacuum; a larger \(K\) leads to chiral condensates greater in magnitude, as shown in the left panel of Fig. 5. With the chiral restoration, the system loses the energetic benefit of having the chiral condensates. Such a release of energy is more dramatic with the anomaly than without it. As one can see from the thermodynamic relation \(P=-\varepsilon+\mu_{q}n_{q}\), the larger energy \(\varepsilon\) with a stronger anomaly leads to a smaller pressure, i.e., the softening. In other words, with the anomaly we have to add a larger "bag constant" to the energy density but must subtract it from the pressure. We show the resulting EOSs for \(H/G=0\) and \(g_{V}/G=0.1\) in the right panel of Fig. 5.

Figure 5: \(K\) dependence of chiral condensates (left panel) and the energy dependence of pressure for \(H/G=0\) and \(g_{V}/G=0.1\) (right panel).

## 4 Interpolated EOSs and \(M\)-\(R\) relations of NSs ### Interpolation of EOSs In this subsection, we briefly explain how to interpolate between the EOS for hadronic matter and that for quark matter constructed in the previous sections. Following Ref. [26], we assume that hadronic matter is realized in the low density region \(n_{B}<2n_{0}\), and use the pressure constructed in Eq. (28). In the high density region \(n_{B}>5n_{0}\), the quark matter pressure given in Eq. (56) is used. In the intermediate region \(2n_{0}<n_{B}<5n_{0}\), we assume that the pressure is expressed by a fifth order polynomial of \(\mu_{B}\):1 Footnote 1: It is important to make the interpolation for a correct set of variables, either \(P(\mu_{B})\) or \(\varepsilon(n_{B})\), from which one can deduce all the thermodynamic quantities by taking derivatives [26]. Other combinations, e.g., \(P(\varepsilon)\), cannot be used to derive \(n_{B}\) and hence would miss some constraints. \[P_{\rm I}(\mu_{B})=\sum_{i=0}^{5}C_{i}\mu_{B}^{i}. \tag{57}\] Following the quark-hadron continuity scenario, we demand the interpolating EOS to match the quark and hadronic EOS up to the second derivatives (otherwise we would have first or second order phase transitions at the boundaries). The six parameters \(C_{i}\) (\(i=0,\ldots,5\)) are determined from the boundary conditions given by \[\left.\frac{{\rm d}^{n}P_{\rm I}}{({\rm d}\mu_{B})^{n}}\right|_{\mu_{BL}}=\left.\frac{{\rm d}^{n}P_{\rm H}}{({\rm d}\mu_{B})^{n}}\right|_{\mu_{BL}}\,,\quad\left.\frac{{\rm d}^{n}P_{\rm I}}{({\rm d}\mu_{B})^{n}}\right|_{\mu_{BU}}=\left.\frac{{\rm d}^{n}P_{\rm Q}}{({\rm d}\mu_{B})^{n}}\right|_{\mu_{BU}}\,,\quad(n=0,1,2)\,, \tag{58}\] where \(\mu_{BL}\) is the chemical potential corresponding to \(n_{B}=2n_{0}\) and \(\mu_{BU}\) to \(n_{B}=5n_{0}\). In addition to these boundary conditions, the interpolated pressure must obey the causality constraint: the sound velocity, \[c_{s}^{2}=\frac{{\rm d}P}{{\rm d}\varepsilon}=\frac{n_{B}}{\mu_{B}\chi_{B}}\,, \tag{59}\] where \(n_{B}=\frac{{\rm d}P}{{\rm d}\mu_{B}}\) and \(\chi_{B}=\frac{{\rm d}^{2}P}{{\rm d}\mu_{B}^{2}}\), must be less than the light velocity. This condition is more difficult to satisfy for a combination of a softer nuclear EOS and a stiffer quark matter EOS, since such a soft-to-stiff combination requires a larger slope in \(P(\varepsilon)\). We show an example of the interpolated pressure in Fig. 6 with the parameter set \(\lambda_{8}^{\prime}=2.677\), \(\lambda_{10}^{\prime}=0.09091\) for \(m_{0}=700\,\)MeV and \(B=600\,\)MeV for the PDM, and two parameter sets \((H/G,g_{V}/G)=(1.45,0.4)\) and \((1.45,0.5)\) for quark matter. Both plots 6(a) and 6(b) in Fig. 6 are smoothly connected by construction, but the set \((H/G,g_{V}/G)=(1.45,0.4)\) violates causality, as seen in Fig. 7, and therefore must be excluded. The \(c_{s}^{2}\) exceeding the conformal value, \(c_{s}^{2}=1/3\), and its subsequent reduction within the interval \(2\)-\(5n_{0}\) is the characteristic feature of the crossover models [24; 25; 26; 27; 28]. In the nuclear domain, the sound velocity is small, \(c_{s}^{2}\sim 0.1\), while the natural size is \(c_{s}^{2}\sim 1/3\) in quark matter. In the intermediate region \(c_{s}^{2}\) makes a peak. How to approach the conformal limit is a subject under intensive discussion, see Refs. [76; 77; 78; 79; 80].

Figure 6: Pressure \(P(\mu_{B})\) of the PDM and the unified equations of state. For the PDM we chose \(\lambda_{8}^{\prime}=2.677\), \(\lambda_{10}^{\prime}=0.09091\) for \(m_{0}=700\,\text{MeV}\) and \(B=600\,\text{MeV}\) as a typical parameter set, and for quark models we used \((H/G,g_{V}/G)=(1.45,0.4)\) and \((1.45,0.5)\). The thick curves in the unified equations of state mark the pure hadronic and quark parts.

Figure 7: Squared speed of sound \(c_{s}^{2}\) for \((H/G,g_{V}/G)=(1.45,0.4)\) and \((1.45,0.5)\). Curves are the same as in Fig. 6.
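A minimal sketch of this construction (assuming numpy; the boundary pressures below are illustrative stand-ins tuned to neutron-star-like magnitudes, not the actual PDM and NJL results) solves the \(6\times 6\) linear system of Eq. (58) for the \(C_{i}\) and scans Eq. (59) over the window to decide whether the pair must be excluded.

```python
import numpy as np

# Illustrative stand-ins for the boundary pressures of Eqs. (28) and (56);
# all numbers are assumptions, in MeV units (P in MeV^4, mu_B in MeV).
muL, muU = 980.0, 1500.0                       # mu_B at 2n0 and 5n0, assumed
PH   = lambda mu: 6.28e3 * (mu - 784.0)**2     # hadronic-like branch
dPH  = lambda mu: 2 * 6.28e3 * (mu - 784.0)
d2PH = lambda mu: 2 * 6.28e3
PQ   = lambda mu: 4.56e-4 * mu**4 - 1.16e9     # quark-like branch (c_s^2 = 1/3)
dPQ  = lambda mu: 4 * 4.56e-4 * mu**3
d2PQ = lambda mu: 12 * 4.56e-4 * mu**2

s = 1000.0                 # rescale mu for a well-conditioned linear system
i = np.arange(6)

def rows(mu):
    """Values of x^i, d/dmu x^i, d^2/dmu^2 x^i at mu, with x = mu/s."""
    x = mu / s
    return np.vstack([x**i,
                      i * x**np.maximum(i - 1, 0) / s,
                      i * (i - 1) * x**np.maximum(i - 2, 0) / s**2])

A = np.vstack([rows(muL), rows(muU)])          # the six conditions of Eq. (58)
b = np.array([PH(muL), dPH(muL), d2PH(muL), PQ(muU), dPQ(muU), d2PQ(muU)])
C = np.linalg.solve(A, b)                      # coefficients of Eq. (57) in x

mu = np.linspace(muL, muU, 400)
x = mu[:, None] / s
n_B   = (C * i * x**np.maximum(i - 1, 0)).sum(axis=1) / s
chi_B = (C * i * (i - 1) * x**np.maximum(i - 2, 0)).sum(axis=1) / s**2
cs2 = n_B / (mu * chi_B)                       # Eq. (59)
ok = np.all(chi_B > 0) and np.all((cs2 > 0) & (cs2 < 1))
print(f"c_s^2 range [{cs2.min():.3f}, {cs2.max():.3f}]:",
      "causal and stable" if ok else "excluded (acausal or unstable)")
```

Replacing the stand-ins by the actual \(P_{\rm H}\) and \(P_{\rm Q}\) reproduces the selection logic used for Figs. 6-8.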
Correspondingly, the parameter \(g_{V}\), which makes the quark matter EOS stiff, should not be too large for causal interpolations; for a larger \(m_{0}\), the acceptable \(g_{V}\) tends to appear at lower values. Typical values of \((H,g_{V})\) are greater than expected from the Fierz transformation, for which \((H,g_{V})=(0.5,0.5)G\) (see e.g. Ref. [81]). Such choices were used in hybrid quark-hadron matter EOS with first order phase transitions, but they tend to lead to predictions incompatible with the \(2M_{\odot}\) constraints. Figure 6: Pressure \(P(\mu_{B})\) of the PDM and the unified equations of state. For the PDM we chose \(\lambda_{8}^{\prime}=2.677,\lambda_{10}^{\prime}=0.09091\) for \(m_{0}=700\,\text{MeV}\) and \(B=600\,\text{MeV}\) as a typical parameter set and for quark models we used \((H/G,g_{V}/G)=(1.45,0.4)\) and \((1.45,0.5)\). The thick curves in the unified equations of state are used to mark the pure hadronic and quark parts. Figure 7: Squared speed of sound \(c_{s}^{2}\) for \((H/G,g_{V}/G)=(1.45,0.4)\) and \((1.45,0.5)\). Curves are the same as in Fig. 6. Figure 8: Allowed combinations of \((H,g_{V})\) for \(m_{0}=400\)–\(800\,\mathrm{MeV}\). The color of the circle shows the maximum mass of neutron stars obtained from the corresponding parameters, as indicated by a vertical bar at the right side of each figure. ### M-R relations of NSs With the unified EOS explained so far, we now calculate \(M\)-\(R\) relations of NSs by solving the Tolman-Oppenheimer-Volkoff (TOV) equation [82; 83], \[\begin{split}\frac{\mathrm{d}P}{\mathrm{d}r}&=-G\frac {(\varepsilon+P)(m+4\pi r^{3}P)}{r^{2}-2Gmr}\,\\ \frac{\mathrm{d}m}{\mathrm{d}r}&=4\pi r^{2} \varepsilon\,\end{split} \tag{60}\] where \(G\) is the Newton constant, \(r\) is the distance from the center of a neutron star, and \(P\), \(m\) and \(\varepsilon\) are the pressure, mass, and energy density as functions of \(r\): \[P=P(r)\,\quad m=m(r)\,\quad\varepsilon=\varepsilon(r). \tag{61}\] The radius \(R\) is determined by the condition \(P(R)=0\) and the mass \(M\) by \(M=m(R)\). To estimate radii of NSs with an accuracy better than \(\sim 0.5\) km, we need to include the crust EOS. We use the BPS EOS [84] for the outer and inner crust parts.2 Footnote 2: The BPS EOS is usually referred to as the EOS for the outer crust, but it also contains the BBP EOS [84] for the inner crust at \(n_{B}\leq 0.1\,\mathrm{fm}^{-3}\); at \(n_{B}\geq 0.1\,\mathrm{fm}^{-3}\) we use our unified EOS from nuclear liquid to quark matter. For a given central density, we obtain the corresponding \(M\)-\(R\) point, and the sequence of such points forms the \(M\)-\(R\) curve. In order to study the relation between microscopic parameters and \(M\)-\(R\) relations, below we examine the impacts of the PDM EOS, the dependence on the \(\omega^{2}\rho^{2}\) coupling (\(\lambda_{\omega\rho}\)), the chiral invariant mass \(m_{0}\), and the anomaly strength \(B\) for a given set of quark matter parameters (\(H,g_{V}\)). We first study the effect of the \(\omega^{2}\rho^{2}\) interaction. We fix \(m_{0}=500\,\mathrm{MeV}\) and \(B=600\,\mathrm{MeV}\), and vary \(\lambda_{\omega\rho}\), which leads to changes in the slope parameter \(L_{0}\) of the symmetry energy. We examine the cases with \(L_{0}=40,57.7\), and \(80\) MeV since the value of \(L_{0}\) still has uncertainty which is being intensively studied [85]. The resultant \(M\)-\(R\) relation is shown in Fig. 9. 
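To make concrete how each point of such a curve is obtained, here is a minimal sketch of the TOV integration of Eq. (60). It is an illustration only: the polytropic stand-in for the tabulated unified EOS, the step size, and all numerical values are assumptions of ours, not the actual setup of this review.

```python
import numpy as np

# Units: r in km, m in solar masses, P and eps in Msun/km^3; c = 1.
G = 1.4766  # G*Msun/c^2 in km

def toy_eps_of_P(P, K=100.0, Gamma=2.0):
    """Illustrative polytrope standing in for the unified EOS table:
    P = K * eps**Gamma  =>  eps = (P/K)**(1/Gamma)."""
    return (P / K) ** (1.0 / Gamma)

def solve_tov(P_c, eps_of_P=toy_eps_of_P, dr=1e-3):
    """Euler integration of Eq. (60) from the center to the surface P(R) = 0."""
    r, P = dr, P_c
    m = 4.0 / 3.0 * np.pi * r**3 * eps_of_P(P_c)
    while P > 1e-10 * P_c:                  # stop (numerically) at the surface
        eps = eps_of_P(P)
        dP = -G * (eps + P) * (m + 4.0*np.pi*r**3*P) / (r * (r - 2.0*G*m)) * dr
        m += 4.0 * np.pi * r**2 * eps * dr
        P += dP
        r += dr
    return r, m                             # (R [km], M [Msun])

# A scan over central pressures traces out one M-R curve:
# curve = [solve_tov(P_c) for P_c in np.logspace(-6.0, -3.5, 40)]
```

In the actual calculation the crust EOS is attached at low density and the unified EOS table replaces the toy polytrope, but the integration loop is of this form.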
The \(M\)-\(R\) relations with the core density smaller than \(2n_{0}\) (larger than \(5n_{0}\)) are emphasized by thick curves in the low (high) mass region. A positive \(\lambda_{\omega\rho}\) corresponds to attractive correlations that reduce \(L_{0}\) and soften the EOS in the nuclear domain. For \(L_{0}=40,57.7\), and \(80\) MeV, the radii of a \(1.4M_{\odot}\) NS are \(\simeq 11.05\) km, \(\simeq 11.2\) km, and \(\simeq 12.1\) km, respectively. A precise determination of the slope parameter in the future will help us to further constrain the NS properties, especially the radii. In the following analysis, we fix the value \(L_{0}=57.7\,\mathrm{MeV}\) and the parameter \(\lambda_{8}^{\prime}=0\). The value of \(\lambda_{10}^{\prime}\) is determined as explained in sec. 2.3. For example, \(\lambda_{10}^{\prime}=0.5221\) is obtained for \(m_{0}=700\,\mathrm{MeV}\) below. Then, we examine the effects of the U(1)\({}_{A}\) anomaly on the \(M\)-\(R\) relation. In Fig. 10(a), we show \(M\)-\(R\) curves for several values of the anomaly strength \(B\), with the NJL parameters \((H,g_{V})\) leading to the largest and second largest maximum masses for a given set of the PDM parameters. This shows that, due to the softening effect of the anomaly as explained in sec. 2.4, even the stiffest connection for \(m_{0}=800\) MeV with \(B=600\,\mathrm{MeV}\) is unable to satisfy the maximum mass constraints. The effect of the anomaly in general softens the EOS from low to high densities, and increasing \(B\) from 0 to 600 MeV (while retuning the other parameters to reproduce nuclear saturation properties) reduces each of \(M\) and \(R\) by a few percent. In Fig. 10(b), we set \(B=600\) MeV to fit the \(\eta^{\prime}\) mass and examine several values of \(m_{0}\). These results should be regarded as the representative results of the present review. Figure 10: Mass-radius relations for different \(m_{0}\) in different parameter settings. (a) \(B=0,600\) MeV for \(m_{0}=500,800\) MeV; (b) \(B=600\) MeV for different \(m_{0}\). NJL parameters \((H,g_{V})/G\) are chosen to be \((1.45,1.3)\) for \(m_{0}=400\) MeV, \((1.6,1.3)\) for \(m_{0}=500\) MeV, \((1.6,1.3)\) for \(m_{0}=600\) MeV, and \((1.6,1.2)\) for \(m_{0}=700\) MeV. Figure 9: Dependence of \(M\)-\(R\) relations for \(m_{0}=500\) MeV on the slope parameter. Red curves are connected to the NJL parameters \((H,g_{V})/G=(1.55,1.0)\), (1.50, 0.9); blue curves to (1.55, 0.9), (1.50, 0.8); black curves to (1.55, 0.8), (1.50, 0.7). In this review, the mass of the millisecond pulsar PSR J0740+6620 [16] \[M_{\rm TOV}^{\rm lowest}=2.08^{+0.07}_{-0.07}\,M_{\odot}\,, \tag{62}\] is regarded as the lower bound for the maximum mass, which is shown by the upper red-shaded areas in Figs. 9 and 10. Actually, the lower bound may be even significantly higher; the recent analyses of the black-widow binary pulsar PSR J0952-0607 suggest a maximum mass of \(2.35\pm 0.17M_{\odot}\) [86]. Meanwhile, there are constraints, \(M_{\rm max}\lesssim 2.16^{+0.17}_{-0.15}M_{\odot}\), from the gamma-ray burst GRB170817A associated with the GW170817 event (under the assumption that the post-merger remnant of GW170817 is a hypermassive NS). If the maximum mass is indeed \(\sim 2.3M_{\odot}\) or higher, we will need to allow a much stiffer low density EOS, with which a much stiffer quark EOS becomes possible. The analyses based on another criterion will be presented elsewhere. Another important constraint comes from NS radii. 
We show the constraints on the radii obtained from LIGO-Virgo [87; 88; 89] by the green shaded areas on the middle left3 and from NICER in Ref. [19] by the red shaded areas on the middle right. The inner contour of each area contains 68% of the posterior probability (\(1\sigma\)), and the outer one contains 95% (\(2\sigma\)). These values (plus another NICER result in Ref. [20]) are summarized in Table 4. From all the constraints, we restrict the chiral invariant mass as Footnote 3: More precisely, LIGO-Virgo constrains the tidal deformability \(\tilde{\Lambda}\), which is a function of the tidal deformability of each neutron star (\(\Lambda_{1}\) and \(\Lambda_{2}\)) and the mass ratio \(q=M_{2}/\,M_{1}\). But for EOSs which do not lead to a large variation of radii for \(M\gtrsim 1M_{\odot}\), \(\tilde{\Lambda}\) is insensitive to \(q\). In fact the radii of neutron stars and \(\tilde{\Lambda}\) can be strongly correlated (for more details, see Refs. [90; 91]), and for our purposes it is sufficient to directly use the estimates on the radii given in Ref. [89], rather than \(\tilde{\Lambda}\). \[400\,{\rm MeV}\lesssim m_{0}\lesssim 700\,{\rm MeV}\,, \tag{63}\] which is updated from that in the original work, Ref. [66], \(600\,\,{\rm MeV}\lesssim m_{0}\lesssim 900\,\,{\rm MeV}\), corresponding to the set \(\lambda_{8}^{\prime}=\lambda_{10}^{\prime}=0\) and \(B=0\) in the present model. ## 5 Chiral condensates in crossover The method of interpolation can be used not only to construct a unified EOS but also to calculate microscopic quantities such as condensates and matter composition. In the hadronic and quark matter domains we consider the generating functional with external fields coupled to quantities of interest, and then interpolate the two functionals. The microscopic quantities are then extracted by differentiating the unified generating functional. We first review the computations in the hadronic and quark matter domains, and then turn to computations in the crossover region. \begin{table} \begin{tabular}{c|c|c} \hline \hline & radius [km] & mass [\(M_{\odot}\)] \\ \hline GW170817 (primary) & \(11.9^{+1.4}_{-1.4}\) & \(1.46^{+0.12}_{-0.10}\) \\ GW170817 (second) & \(11.9^{+1.4}_{-1.4}\) & \(1.27^{+0.09}_{-0.09}\) \\ J0030+0451 (NICER [19]) & \(13.02^{+1.24}_{-1.06}\) & \(1.44^{+0.15}_{-0.14}\) \\ J0030+0451 (NICER [20]) & \(12.71^{+1.14}_{-1.19}\) & \(1.34^{+0.15}_{-0.16}\) \\ PSR J0740+6620 (NICER [21]) & \(12.35^{+0.75}_{-0.75}\) & \(2.08^{+0.07}_{-0.07}\) \\ PSR J0740+6620 (NICER [22]) & \(12.39^{+1.30}_{-0.98}\) & \(2.08^{+0.07}_{-0.07}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Radius constraints for neutron stars for \(\simeq 1.4M_{\odot}\) and \(\simeq 2.1M_{\odot}\) NSs. ### Chiral condensates in the PDM The chiral condensate in the PDM can be calculated by differentiating the thermodynamic potential with respect to the current quark mass. In the present model the explicit chiral symmetry breaking enters only through the \(V_{SB}\) term in Eq. (8), which leads to \(-(2cm_{u}\sigma+cm_{s}\sigma_{\rm s})\) as in Eq. (21). There may be mass dependence in the other coupling constants in front of higher powers of meson fields, but such couplings exist already at \(m_{q}=0\) and the finite \(m_{q}\) is supposed to give only minor corrections. Hence we neglect the \(m_{q}\) dependence except for the terms in \(V_{SB}\). 
Using the Gell-Mann-Oakes-Renner relation, the explicit symmetry breaking term can be written as \[\Omega_{\rm ESB}=-(2cm_{u}\sigma+cm_{s}\sigma_{\rm s})=m_{q}\langle(\bar{u}u+\bar{d}d)\rangle_{0}\frac{\sigma}{f_{\pi}}+m_{s}\langle\bar{s}s\rangle_{0}\frac{\sigma_{\rm s}}{\sigma_{s0}}\,, \tag{64}\] where \(\langle(\bar{u}u+\bar{d}d)\rangle_{0}\) and \(\langle\bar{s}s\rangle_{0}\) are the chiral condensates in vacuum. The in-medium chiral condensates are obtained as \[\langle(\bar{u}u+\bar{d}d)\rangle\equiv\frac{\partial\Omega_{\rm ESB }}{\partial m_{q}}=\langle(\bar{u}u+\bar{d}d)\rangle_{0}\frac{\sigma}{f_{\pi} }\,, \tag{65}\] \[\langle\bar{s}s\rangle\equiv\frac{\partial\Omega_{\rm ESB}}{ \partial m_{s}}=\langle\bar{s}s\rangle_{0}\frac{\sigma_{\rm s}}{\sigma_{s0}}\,, \tag{66}\] where we neglected the \(m_{q}\) and \(m_{s}\) dependences of \(\langle(\bar{u}u+\bar{d}d)\rangle_{0}\) and \(\langle\bar{s}s\rangle_{0}\), which are of higher orders in \(m_{q}/M_{q}\) and \(m_{s}/M_{s}\). In the following sub-subsection 5.1.1, we examine how \(\sigma\) varies as baryon density increases, and we study the in-medium \(\langle(\bar{u}u+\bar{d}d)\rangle\) condensate in sub-subsection 5.1.2. We postpone discussions on the strange quark condensate \(\langle\bar{s}s\rangle\) to subsection 5.3, since changes in \(\langle\bar{s}s\rangle\) at \(n_{B}\leq 2n_{0}\), which are induced only through the anomaly, are very small in the hadronic region. #### 5.1.1 Chiral scalar density in a nucleon To set up the baseline for the estimate of in-medium chiral condensates, we consider the scalar charge, \(N_{\sigma}\), of a nucleon in vacuum. It is defined as \[N_{\sigma}=\int_{\bf x}\langle N|(\bar{u}u+\bar{d}d)(x)|N\rangle=\langle N| \frac{\partial H_{\rm QCD}}{\partial m_{q}}\,|N\rangle=\frac{\partial m_{N}^ {\rm vac}}{\partial m_{q}}\,, \tag{67}\] where \(H_{\rm QCD}\) is the QCD Hamiltonian. In the last step we used the Hellmann-Feynman theorem [92]. In the PDM, the current quark masses affect nucleon masses only through the modification of \(\sigma\). The nucleon's chiral scalar charge in vacuum is given as \[N_{\sigma}\equiv\frac{\partial m_{N}^{\rm vac}}{\partial m_{q}}=\frac{ \partial\sigma_{0}}{\partial m_{q}}\left(\frac{\partial m_{N}}{\partial \sigma}\right)_{\sigma=\sigma_{0}}. \tag{68}\] The mass derivative of \(\sigma_{0}\) is related to the chiral susceptibility, which is given by the (connected) scalar correlator at zero momentum, \[\frac{\partial\langle\bar{q}q(x)\rangle}{\partial m_{q}} \sim \int{\cal D}\bar{q}\,{\cal D}q\,\left[\bar{q}q(x)\right]\frac{\partial}{ \partial m_{q}}\left({\rm e}^{-\int_{x^{\prime}}m_{q}\bar{q}q(x^{\prime})+...} /Z\right) \sim \int_{x^{\prime}}\left\langle[\bar{q}q(x)][\bar{q}q(x^{\prime}) ]\right\rangle_{\rm conn.}\sim\lim_{q\to 0}\frac{1}{q^{2}+m_{\sigma}^{2}}\,. \tag{69}\] Then, a smaller scalar meson mass enhances \(N_{\sigma}\). Multiplying the scalar charge by \(m_{q}\) leads to the so-called nucleon sigma term: \[\Sigma_{N}\equiv m_{q}N_{\sigma}=\int_{\mathbf{x}}\langle N|m_{q}(\bar{u}u+\bar{ d}d)|N\rangle\,, \tag{70}\] which is renormalization group invariant and directly related to experimental quantities. The traditional estimate [93] gives \(\Sigma_{N}\simeq 45\) MeV. But the precise determination is difficult, and the possible range is 40-70 MeV, according to lattice QCD analyses or combined analyses of lattice QCD and chiral perturbation theory. (See Ref. [92] for a review and the references therein.) 
Here we take \(m_{q}\simeq 5\) MeV, which leads to \(N_{\sigma}\simeq 8\)-14, and the scalar density is given by \[\frac{N_{\sigma}}{\frac{4}{3}\pi R_{N}^{3}}=\left(0.24\mbox{--}0.30\,\mbox{GeV }\times\frac{1\,\mbox{fm}}{R_{N}}\right)^{3}\,, \tag{71}\] where \(R_{N}\sim 1\) fm is the size of a nucleon. (Note that the scalar isoscalar radius is estimated as \(\langle r_{s}^{2}\rangle\simeq(0.7\mbox{--}1.2\,\mbox{fm})^{2}\) [94].) Note that the magnitude is roughly of the same order as the vacuum one, but the sign is opposite. The nucleon scalar charges therefore tend to cancel the vacuum contribution and reduce the net value of \(\sigma\); the appearance of nucleons inevitably reduces the magnitude of the chiral condensates. Table 5 summarizes the \(\sigma\)-dependence of the nucleon mass (\(\partial m_{N}/\partial\sigma\)), the scalar meson mass (\(m_{\sigma}\)), and the nucleon sigma term (\(\Sigma_{N}\)) predicted by the PDM for several choices of \(m_{0}\). The estimates of \(m_{\sigma}\) and \(\Sigma_{N}\) are reasonably consistent with hadron phenomenology; \(m_{\sigma}\) is consistent with the mass of the scalar meson \(f_{0}(500)\) (with the width \(\sim 500\) MeV)4, and the estimates of the nucleon sigma term, \(\Sigma_{N}\simeq 40\)-70 MeV, are within the ball park of several theoretical estimates. Footnote 4: It is not a trivial issue whether one can identify \(\sigma\) in mean field models with the physical scalar meson. #### 5.1.2 Dilute regime In the dilute regime (Fig. 11), nucleons are widely separated. In good approximation, the in-medium scalar density is simply the sum of the negative scalar charge from the chiral condensates and the positive scalar charges from nucleons (linear density approximation, LDA), \[\langle(\bar{u}u+\bar{d}d)\rangle\simeq\langle(\bar{u}u+\bar{d}d)\rangle_{0}+n _{B}N_{\sigma}\,, \tag{72}\] which can be rewritten as \[\sigma\simeq f_{\pi}\bigg{(}1+n_{B}\frac{N_{\sigma}}{\langle(\bar{u}u+\bar{d} d)\rangle_{0}}\bigg{)}\,. \tag{73}\] In this LDA, \(\sigma\) decreases linearly as a function of \(n_{B}\). \begin{table} \begin{tabular}{c|c c c c} \hline \hline \(m_{0}\) [MeV] & 400 & 500 & 600 & 700 \\ \hline \((\partial m_{N}/\partial\sigma)_{\rm vac.}\) & 8.79 & 7.97 & 7.01 & 5.87 \\ \(m_{\sigma}\) [MeV] & 607 & 664 & 688 & 599 \\ \(\Sigma_{N}\) [MeV] & 51.12 & 48.71 & 51.39 & 62.01 \\ \hline \hline \end{tabular} \end{table} Table 5: \(\sigma\)-dependence of the nucleon mass, the scalar meson mass, and the nucleon sigma term predicted by the PDM in vacuum. The linear density approximation is violated when density increases and nonlinear effects set in. Shown in Fig. 12 is the ratio of the quark condensate, \(\left\langle\bar{u}u\right\rangle/\left\langle\bar{u}u\right\rangle_{0}= \sigma/f_{\pi}\), as a function of the neutron number density \(n_{n}\) in pure neutron matter. The result of the linear density approximation is also shown for comparison. Our mean field results are consistent with the linear density approximation with \(\Sigma_{N}=45\,\mathrm{MeV}\) in the low-density region. Our predictions start to deviate from the LDA around \(n_{B}=0.5n_{0}\), signaling the importance of higher powers of \(n_{B}\). We stress that, in the PDM, while the chiral restoration or reduction of \(\sigma\) occurs rather quickly with increasing density, such changes do not immediately imply structural changes in nucleons nor in the nucleon or quark Dirac sea. The density dependence of the nucleon mass in the PDM is relatively modest (Fig. 
13), and this feature is welcome for the commonly used no-sea approximation for the thermodynamic potential (see Eq. (18)), which is justified only when modifications in the Dirac sea are small. Another hint on the chiral condensates and hadron structures comes from the high temperature transition where a hadron resonance gas (HRG) transforms into a quark gluon plasma (QGP). There, the chiral condensates begin to drop before the temperature reaches the critical temperature, but the HRG model with the _vacuum_ hadron masses remains valid in reproducing the lattice data even after the chiral condensates are substantially reduced [95; 96]. The chiral restoration beyond cancellations of negative and positive scalar charges will be discussed in the next section for quark matter models. Figure 11: Schematic picture of the chiral condensates in the dilute regime. The chiral scalar charge is negative where the vacuum chiral condensate dominates, while nucleons contribute positive scalar charges that cancel the vacuum contributions. Figure 12: Dependence of the quark condensate in the PDM, \(\left\langle\bar{u}u\right\rangle/\left\langle\bar{u}u\right\rangle_{0}= \sigma/f_{\pi}\), on the neutron number density \(n_{n}\) for \(m_{0}=400\), \(500\), \(600\), and \(700\) MeV. Here the condensate is normalized by its vacuum counterpart. ### Chiral condensates in the CFL quark matter In terms of quarks, the chiral condensates are triggered by the attractive quark-antiquark pairing. At high density, such pairing is disfavored by the presence of the quark Fermi sea; as shown in Fig. 14, creating an antiquark costs about the quark Fermi energy since it is necessary to bring a particle in the Dirac sea to the domain beyond the Fermi sea. Therefore, the chiral condensates made of quarks and antiquarks naturally dissociate as density increases. Instead, the particle-particle [63] or particle-hole pairings [97; 98; 99] near the Fermi surface do not have such energetic disadvantages. The method of computations is given in Sec. 3. We note that, unlike the chiral restoration in dilute nuclear matter as a mere consequence of cancellations of positive and negative charges, in quark matter the magnitude of each contribution is reduced, together with the chiral restoration in the quark Dirac sea. This extra energy from the Dirac sea modification is important in the quark matter EOS and must be taken into account. The softening of the quark EOS due to the U(1)\({}_{A}\) anomaly is related to the Dirac sea modifications associated with the chiral restoration5. Figure 14: Chiral symmetry breaking by condensation of quark-antiquark pairs; (upper) in vacuum; (lower) in medium. In the latter the pairing is blocked by the quark Fermi sea. Figure 13: Dependence of the nucleon masses in the PDM on the neutron number density \(n_{n}\), for \(m_{0}=400\), \(500\), \(600\), and \(700\) MeV. ### Condensates in a unified EOS In this subsection, we review the interpolation method for generating functionals introduced in Ref. [68]. We use it to calculate the chiral and diquark condensates from nuclear to quark matter, and also to examine the composition of matter with \((u,d,s)\)-quarks and charged leptons (electrons and muons, \(e,\mu\)). #### 5.3.1 Unified generating functional For computations of condensates \(\phi\), we first construct a generating functional \(P(\mu_{B};J)\) with external fields \(J\) coupled to \(\phi\). 
A condensate \(\phi\) at a given \(\mu_{B}\) is obtained by differentiating \(P(\mu_{B};J)\) with respect to \(J\) and then setting \(J=0\), \[\phi=-\frac{\partial P}{\partial J}\bigg{|}_{J=0}\,. \tag{74}\] The generating functional for the nuclear domain, \(n_{B}\leq 2n_{0}\), is given by the PDM, and that for the quark matter domain, \(n_{B}\geq 5n_{0}\), by the NJL-type model. We interpolate these functionals with the constraints that the interpolating curves match up to the second derivatives at each boundary, \(2n_{0}\) and \(5n_{0}\). For the interpolating function, we adopt a polynomial function of \(\mu_{B}\) with six coefficients \(a_{n}(J)\), \[P_{\rm I}(\mu_{B};J)=\sum_{n=0}^{5}a_{n}(J)\mu_{B}^{n}\,. \tag{75}\] We determine the chemical potentials at the boundaries, \(\mu_{B}^{L}\) and \(\mu_{B}^{U}\), as \[n_{B}(\mu_{B}^{L};J)=2n_{0}\,,\quad n_{B}(\mu_{B}^{U};J)=5n_{0}\,. \tag{76}\] The resulting \(\mu_{B}^{L}\) and \(\mu_{B}^{U}\) depend on \(J\). The six boundary conditions \[\frac{\partial^{k}P_{\rm I}}{(\partial\mu_{B})^{k}}\bigg{|}_{\mu_{B}^{L}(\mu_ {B}^{U})}=\frac{\partial^{k}P_{\rm PDM(NJL)}}{(\partial\mu_{B})^{k}}\bigg{|}_ {\mu_{B}^{L}(\mu_{B}^{U})}\,, \tag{77}\] with \(k=0,1,2\) uniquely fix the \(a_{n}\)'s. As in the EOS construction, the generating functional must satisfy the causality condition. Such constraints are transferred to the evaluation of condensates; condensates in the crossover domain are correlated with those in nuclear and quark matter. #### 5.3.2 An efficient method for computations of many condensates While the generating functional in the previous section is general, the calculations become cumbersome when we need to compute many condensates. Each condensate requires the corresponding external field and generating functional. Fortunately, for the interpolating function in Eq. (75), we can use a more efficient method from Ref. [68] which does not demand the construction of \(P(\mu_{B};J)\) but utilizes only the \(\mu_{B}\)-dependence of the condensate at \(J=0\) at each interpolating boundary. In the interpolated domain the condensate \(\phi\) can be expressed as \[\phi_{\rm I}=-\frac{\partial P_{\rm I}}{\partial J}\bigg{|}_{J=0}=-\sum_{n=0} ^{5}\frac{\partial a_{n}}{\partial J}\bigg{|}_{J=0}\mu_{B}^{n}\,. \tag{78}\] This implies the equivalence between the determination of \(\phi_{\rm I}\) and that of the six constants \(\partial a_{n}/\partial J\big{|}_{J=0}\). Taking the \(J\)-derivatives of Eq. (77), we obtain \[\frac{\partial}{\partial J}\left(\left.\frac{\partial^{k}P_{\rm I}}{(\partial \mu_{B})^{k}}\right|_{\mu_{B}^{L}(\mu_{B}^{U})}\right)=\frac{\partial}{ \partial J}\left(\left.\frac{\partial^{k}P_{\rm PDM(NJL)}}{(\partial\mu_{B})^{ k}}\right|_{\mu_{B}^{L}(\mu_{B}^{U})}\right), \tag{79}\] where \(k=0,1,2\). Only quantities at a given \(\mu_{B}\) and \(J=0\) are necessary to construct all these derivatives. Hence this method speeds up our analyses considerably. #### 5.3.3 Numerical results Using the method explained above, we calculate the light quark chiral condensate \(\big{\langle}(\bar{u}u+\bar{d}d)\big{\rangle}\), the strange quark condensate \(\langle\bar{s}s\rangle\), the diquark gaps \(\Delta_{j}\) (\(j=1,2,3\)), and the quark number densities \(n_{f}\) (\(f=u,d,s\)), from the nuclear to the quark matter domain. Below, we adopt three values of the chiral invariant mass (\(m_{0}=500\), 600, 700 MeV) as samples, fixing the anomaly coefficient \(B\) to 600 MeV and the NJL parameters \((H/G,g_{V}/G)\) to \((1.45,0.5)\). 
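The efficient method of Eqs. (78)-(79) reduces to the same \(6\times 6\) linear solve as the pressure interpolation of Eq. (58), now fed with \(J\)-differentiated boundary data. Below is a minimal numpy sketch of this step; it is our own illustration, and for brevity it ignores the \(J\)-dependence of the boundary chemical potentials in Eq. (76), which the full method keeps track of.

```python
import numpy as np
from math import factorial

def phi_interpolated(mu, mu_L, mu_U, dbc_L, dbc_U):
    """Sketch of Eqs. (78)-(79): dbc_L, dbc_U hold d/dJ|_{J=0} of
    (P, dP/dmu, d2P/dmu2) at mu_L and mu_U, computed from the PDM and NJL
    functionals; the solve returns the six constants da_n/dJ|_{J=0}, and
    phi_I(mu) = -sum_n (da_n/dJ) mu^n."""
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for row, (mu_b, dbc) in enumerate([(mu_L, dbc_L), (mu_U, dbc_U)]):
        for k in range(3):
            for n in range(k, 6):
                A[3*row + k, n] = factorial(n) / factorial(n - k) * mu_b**(n - k)
            b[3*row + k] = dbc[k]
    da = np.linalg.solve(A, b)                      # da_n/dJ at J = 0
    return -np.polyval(da[::-1], np.asarray(mu))    # phi_I on the mu grid
```

Since only \(J=0\) quantities enter, each additional condensate costs one more small linear solve rather than a new generating functional, which is what makes the method fast.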
The presence of the anomaly term in the PDM is the difference between the results in this review and those in Ref. [68]; its impact is just a few percent in magnitude. The EOS for these parameter sets satisfies \(0\leq c_{s}^{2}\leq 1\). For comparison, the extrapolation of the PDM results is shown by black dotted curves. Figure 15 shows the density dependence of the in-medium chiral condensate normalized by the vacuum value, \(\big{\langle}(\bar{u}u+\bar{d}d)\big{\rangle}/\big{\langle}(\bar{u}u+\bar{d}d )\big{\rangle}_{0}\). Clearly, the condensate at the boundaries affects the condensate in the crossover region. The condensates in the hadronic matter strongly depend on the choice of \(m_{0}\): for \(m_{0}=500\) MeV, the nucleon mass \(m_{N}=939\) MeV gains a large contribution from the chiral condensate, and the Yukawa coupling of nucleons to \(\sigma\) is large; accordingly the chiral condensate drops quickly as baryon density increases. For a larger \(m_{0}\), the nucleons have less impact on the chiral condensates, and the chiral restoration takes place more slowly. As mentioned in Sec. 5.1, the PDM may underestimate chiral restoration effects as it does not describe the chiral restoration at the quark level. Putting quark matter constraints at high density and using the causality constraints for the interpolated domain, we can gain qualitative insights into how the chiral restoration should occur toward high density. Taking into account nuclear and quark matter effects, the interpolation offers a reasonable description of the crossover domain. Figure 15: Density dependence of the chiral condensates normalized by the vacuum counterpart. The parameters are chosen as \(B=600\) MeV and \((H/G,g_{V}/G)=(1.45,0.5)\). #### Strange chiral condensates The density dependence of the strange quark condensate is shown in Fig. 16. In the present PDM model, the \(\sigma_{\rm s}\) field corresponding to the strange quark condensate does not directly couple to nucleons, but only through the anomaly term in the meson potential, Eq. (9). As a result, the density dependence is mild in the hadronic matter (\(n_{B}\leq 2n_{0}\)). In the interpolated region, the condensate starts to decrease rapidly toward the one in quark matter, which is about 40% of the vacuum value at \(n_{B}=5n_{0}\). There are at least two effects responsible for this chiral restoration. The first is the reduction of the anomaly contribution, \(\sim\langle\bar{u}u\rangle\langle\bar{d}d\rangle\langle\bar{s}s\rangle\), which is due to the chiral restoration in the light quark sectors. The other is due to the evolution of the strange quark Fermi sea. In our unified model the strangeness sector significantly deviates from the prediction of the PDM at \(n_{B}\simeq 3n_{0}\), due to the constraints from the quark matter boundary conditions. #### Diquark gaps and number density Shown in Figure 17 are the diquark gaps in the \(ud\)-pairing channel (left panel) and the \(ds\)-pairing channel (right panel) at various densities. We set the diquark condensates to zero at \(n_{B}\leq 2n_{0}\). Figure 16: Density dependence of the strange quark condensate normalized by the vacuum counterpart. The parameters \(B\) and \((H/G,g_{V}/G)\) are as in Fig. 15. Meanwhile, the isospin symmetry holds in the CFL quark matter, so \(\Delta_{ds}\simeq\Delta_{us}\) holds to good accuracy in the whole region. Next we study the density dependence of the quark number densities (Fig. 18). 
The quark densities in the nuclear domain are calculated as \(n_{u}=2n_{p}+n_{n}\), \(n_{d}=n_{p}+2n_{n}\), and \(n_{s}=0\). As seen from Figs. 17 and 18, there are clear correlations between the growth of the diquark condensates and that of the quark number densities. These two quantities assist each other: more diquark pairs are possible for a larger quark Fermi sea, while the resulting energy reduction in turn enhances the quark density. The flavor composition is also affected by these correlations: the substantial \(u,d\)-quark Fermi sea and the pairing to strange quarks favor the formation of the strange quark Fermi sea, even before the quark chemical potential reaches the threshold of the vacuum strange quark mass. The lepton fractions are one of the distinct features of our unified model. Beyond \(5n_{0}\), in the CFL quark matter the charge neutrality is satisfied by quarks, and no charged leptons are necessary. ## 6 A summary In this review article, we summarized the main points of Refs. [66; 67; 68] and updated some of the analyses, including the U(1)\({}_{A}\) anomaly effects. In Sec. 2, we explained how to construct the EOS in hadronic matter for \(n_{B}\leq 2n_{0}\) using an effective hadron model based on the parity doublet structure. In the analysis, we focused on the effect of the U(1)\({}_{A}\) axial anomaly, included as a KMT-like interaction among scalar and pseudoscalar mesons, and showed that the effect makes the EOS softer. In Sec. 3, following Ref. [27], we briefly reviewed how to construct a quark matter EOS for \(n_{B}\geq 5n_{0}\) using an NJL-type model. Then in Sec. 4, we built up a unified EOS in the density region of \(2n_{0}\leq n_{B}\leq 5n_{0}\) by interpolating the hadronic and quark EOSs. For given microscopic parameters, we calculated \(M\)-\(R\) relations of NSs, confronted them with the observational constraints, and then obtained constraints on the chiral invariant mass and quark model parameters. In Sec. 5 we determined the density dependence of the chiral condensate in the interpolated region using a method proposed in Ref. [68]. The boundary conditions from the hadronic and quark matter affect condensates in the intermediate region and give a balanced description. We would like to stress that our method provides a connection from microscopic physical quantities, such as the chiral invariant mass, the chiral condensates and diquark gaps, to macroscopic observables such as masses and radii of NSs. Actually, our analysis implies that a rapid decrease of the nucleon mass even near the normal nuclear density, which can occur when the chiral invariant mass \(m_{0}\) is very small, provides an EOS too soft to satisfy the radius constraint of NSs with a mass of about \(1.4M_{\odot}\). In other words, the radius constraint on NSs obtained from recent observations indicates that the nucleon mass should include a certain amount of chiral invariant mass, so that the nucleon keeps a large portion of its mass even in the high density region where the chiral symmetry restoration is expected to occur. Our density dependence of the chiral condensate in the low density region is consistent with the linear density approximation. We should note that the reduction of the chiral condensate there is achieved by the contribution of the positive scalar charges of nucleons, without changing the nucleon properties drastically. 
This is due to our construction of hadronic matter in the PDM: we adopted the so-called "no-sea approximation", where we neglect the effect of the nucleon Dirac sea and use fixed nucleon-meson couplings for \(n_{B}\lesssim 2n_{0}\). In the present treatment, the intrinsic properties of nucleons start to change drastically at \(n_{B}\gtrsim 2n_{0}\), where quark exchanges among baryons become frequent; since baryons are made of quarks, the quark exchanges are supposed to change the baryon structure. Such intrinsic dependence could be included through the introduction of density- (and/or temperature-) dependent coupling constants in effective hadronic models, as done in, e.g., Refs. [101; 102]. This is a reflection of partially released quarks which are affected by the medium. The inclusion of such effects into coupling constants is, however, very difficult. Our interpolation scheme provides a practical way to implement some restrictions through the quark matter constraints at high density. In the present model for hadronic matter, we did not explicitly include hyperons, assuming that they are not populated in the low-density region \(n_{B}\lesssim 2n_{0}\). Hyperons may enter matter around \(n_{B}\sim 2\)-\(3n_{0}\), which is not far from the present choice for the hadronic boundary. It would be interesting to make an analysis explicitly including hyperons based on the parity doublet structure (see, e.g., Refs. [103; 104]). In the present analysis we assumed that the anomaly has a stronger impact in the mesonic sector than in the baryonic sector and included the anomaly \(B\) term only in the mesonic sector. It would be interesting to include some Yukawa interactions which also break the U(1)\({}_{A}\) symmetry. ###### Acknowledgements. We would like to thank the organizers of the University of Chicago for their hospitality. The work of T.M., B.G., and M.H. was supported in part by JSPS KAKENHI Grant No. 20K03927. T.M. was also supported by JST SPRING, Grant No. JPMJSP2125; T.K. by the Graduate Program on Physics for the Universe (GPPU) at Tohoku University.
2306.03603
Trial matching: capturing variability with data-constrained spiking neural networks
Simultaneous behavioral and electrophysiological recordings call for new methods to reveal the interactions between neural activity and behavior. A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials. Here, we model a mouse cortical sensory-motor pathway in a tactile detection task reported by licking with a large recurrent spiking neural network (RSNN), fitted to the recordings via gradient-based optimization. We focus specifically on the difficulty to match the trial-to-trial variability in the data. Our solution relies on optimal transport to define a distance between the distributions of generated and recorded trials. The technique is applied to artificial data and neural recordings covering six cortical areas. We find that the resulting RSNN can generate realistic cortical activity and predict jaw movements across the main modes of trial-to-trial variability. Our analysis also identifies an unexpected mode of variability in the data corresponding to task-irrelevant movements of the mouse.
Christos Sourmpis, Carl Petersen, Wulfram Gerstner, Guillaume Bellec
2023-06-06T11:46:31Z
http://arxiv.org/abs/2306.03603v2
# Trial matching: capturing variability with data-constrained spiking neural networks ###### Abstract Simultaneous behavioral and electrophysiological recordings call for new methods to reveal the interactions between neural activity and behavior. A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials. Here, we model a cortical sensory-motor pathway in a tactile detection task with a large recurrent spiking neural network (RSNN), fitted to the recordings via gradient-based optimization. We focus specifically on the difficulty to match the trial-to-trial variability in the data. Our solution relies on optimal transport to define a distance between the distributions of generated and recorded trials. The technique is applied to artificial data and neural recordings covering six cortical areas. We find that the resulting RSNN can generate realistic cortical activity and predict jaw movements across the main modes of trial-to-trial variability. Our analysis also identifies an unexpected mode of variability in the data corresponding to task-irrelevant movements of the mouse. ## 1 Introduction Over the past decades, there has been a remarkable advancement in neural recording technology. Today, we can simultaneously record hundreds, even thousands, of neurons with millisecond time precision. Coupled with behavior measurements, modern experiments enable us to better understand how brain activity and behavior are intertwined [1]. In these experiments, it is often observed that even well-trained animals respond to the same stimuli with considerable variability. For example, mice trained on a simple tactile detection task occasionally miss the water reward [2], possibly because of satiation, lack of attention or neural noise. It is also clear that there is additional uncontrolled variability in the recorded neural activity [3; 4; 5] induced for instance by a wide range of task-irrelevant movements. Our goal is to reconstruct a simulation of the sensory-motor circuitry driving the variability of neural activity and behavior. To understand the generated activity at a circuit level, we develop a generative model which is biologically interpretable: all the spikes are generated by a recurrent spiking neural network (RSNN) with hard-biological constraints (i.e. the voltage and spiking dynamics are simulated with millisecond precision, neurons are either inhibitory or excitatory, spike transmission delay takes \(2-4\) ms). First contribution, we make a significant advance in the simulation methods for data-constrained RSNNs. While most prior works [6; 7; 8] were limited to single recording sessions, our model is constrained to spike recordings from \(28\) sessions covering six cortical areas. The resulting spike-based model enables a data-constrained simulation of a cortical sensory-motor pathway (from somatosensory to motor cortices responsible for the whisker, jaw and tongue movements). As far as we know, our model is the first RSNN model constrained to multi-session recordings with automatic differentiation methods for spiking neural networks [8; 9; 10]. Second contribution, using this model we aim to pinpoint the circuitry that induces variability in behavior (asking for instance what circuit triggers a loss of attention). Towards this goal, we identify an unsolved problem: "how do we enforce the generation of a realistic distribution of neural activity and behavior?" 
To do this, the model is fitted jointly to the recordings of spiking activity and movements to generate a realistic trial-to-trial co-variability between them. Our technical innovation is to define a supervised learning loss function to match the recorded and generated variability. Concretely, the _trial matching_ loss function is the distance between modeled and recorded distributions of neural activity and movements. It relies on recent advances in the field of optimal transport [11; 12; 13], which provides notions of distance between distributions. In our data-constrained RSNN, _trial matching_ enables the recovery of the main modes of trial-to-trial variability, which include the neural activity related to instructed behavior (e.g. miss versus hit trials) and uninstructed behavior like spontaneous movements. **Related work** While there is a long tradition of data fitting using the leaky integrate and fire (LIF) model, spike response models [14] or generalized linear models (GLM) [6], most of these models were used to simulate single neuron dynamics [15; 16] or small networks with dozens of neurons recorded in the retina and other brain areas [6; 8; 7; 17]. A major drawback of those fitting algorithms was the limitation to a single recording session. Beyond this, researchers have shown that FORCE methods [18] could be used to fit up to \(13\) sessions with a large RSNN [17; 19; 20]. But in contrast with back-propagation through time (BPTT) in RSNNs [9; 10; 21], FORCE is tied to the theory of recursive least squares, making it harder to combine with deep learning technology or arbitrary loss functions. We know only one other study where BPTT is used to constrain an RSNN to spike-train recordings [8], but this study was limited to a single recording session. Regarding generative models capturing trial-to-trial variability in neural data, many methods rely on trial-specific latent variables [22; 23; 24; 25; 26]. This is often formalized by abstracting away the physical interpretation of these latent variables using deep neural networks (e.g. see LFADS [22] or spike-GAN [27]), but our goal here is to model the interpretable mechanisms that can generate the recorded data. There are hypothetical implementations of latent variables in RSNNs; most notably, latent variables can be represented as the activity of mesoscopic populations of neurons [25], or linear combinations of the neurons' activity [28; 26; 29]. These two models assume respectively an implicit grouping of the neurons [25] or a low-rank connectivity matrix [28; 26; 29]. Here, we want to avoid making any structural hypothesis of this type a priori. We assume instead that the variability is sourced by unstructured noise (Gaussian current or Poisson inputs) and optimize the network parameters to transform it into a structured trial-to-trial variability (e.g. a multi-modal distribution of hit versus miss trials). The optimization therefore decides which network mechanism best explains the trial-to-trial variability observed in the data. This hypothesis-free approach is made possible by the _trial matching_ method presented here. This method is complementary to previous optimization methods for generative models in neuroscience. Many studies targeted solely trial-averaged statistics and ignored single-trial activity, for instance methods using the FORCE algorithm [30; 31; 17; 20; 19], RSNN methods using back-propagation through time [8] and multiple techniques using (non-interpretable) deep generative models [32]. 
There exist other objective functions which can constrain the trial-to-trial variability in the data, namely: the maximum likelihood principle [6; 15] or spike-GANs [27; 33]. We illustrate, however, in the discussion section why these two alternatives are not a straightforward replacement for the _trial matching_ loss function with our interpretable RSNN generator. ## 2 Large data-constrained Recurrent Spiking Neural Network (RSNN) This paper aims to model the large-scale electrophysiology recordings from [2], where 4415 units were recorded from 12 areas across 22 mice. All animals in this dataset were trained to perform the whisker tactile detection task described in Figure 1: in 50% of the trials (the Go trials), a whisker is deflected and after a \(1\) s delay period an auditory cue indicates water availability if the mouse licks, whereas in the other 50% of trials (the No-Go trials), there is no whisker deflection and licking after the auditory cue is not rewarded. Throughout the paper we attempt to create a data-constrained model of the six areas that we consider to play a major role in this behavioral task: the primary and secondary whisker somatosensory cortices (wS1, wS2), motor cortices (wM1, wM2), the primary tongue-jaw motor cortex (tjM1) and the anterior lateral motor cortex (ALM), also known as tjM2 (see Figure 1A and 3A). While we focus on this dataset, the method described below aims to be broadly applicable to most contemporary large-scale electrophysiological recordings. We built a spiking data-constrained model that explicitly simulates a cortical neural network at multiple scales. At the single-cell level, each neuron is either excitatory or inhibitory (its output weights have only positive or negative signs, respectively), follows leaky integrate-and-fire (LIF) dynamics, and transmits information in the form of spikes with synaptic delays ranging from \(2\) to \(4\) ms. At a cortical level, we model six brain areas of the sensory-motor pathway where each area consists of \(250\) recurrently connected neurons (\(200\) excitatory and \(50\) inhibitory) as shown in Figure 3A, such that only excitatory neurons project to other areas. Since the jaw movement defines the behavioral output in this task, we also model how the tongue-jaw motor cortices (tjM1, ALM) drive the jaw movements. Mathematically, we model the spikes \(z_{j,k}^{t}\) of the neuron \(j\) at time \(t\) in the trial \(k\) as a binary number. The spiking dynamics are driven by the integration of the somatic currents \(I_{j,k}^{t}\) into the membrane voltage \(v_{j,k}^{t}\), integrating LIF dynamics with a discrete time step \(\delta t=2\) ms. The jaw movement \(y_{k}^{t}\) is simulated with a leaky integrator driven by the activity of tjM1 and ALM neurons, followed by an exponential non-linearity. This can be summarized with the following equations; the trial index \(k\) is omitted for simplicity: \[v_{j}^{t} = \alpha_{j}v_{j}^{t-1}+(1-\alpha_{j})I_{j}^{t}-v_{\mathrm{thr},j} z_{j}^{t-1}+\xi_{j}^{t} \tag{1}\] \[I_{j}^{t} = \sum_{d,i}W_{ij}^{d}z_{i}^{t-d}+\sum_{d,i}W_{ij}^{\mathrm{in},d}x _{i}^{t-d}\] (2) \[\tilde{y}^{t} = \alpha_{jaw}\tilde{y}^{t-1}+(1-\alpha_{jaw})\sum_{i}W_{i}^{ \mathrm{jaw}}z_{i}^{t}\] (3) \[y^{t} = \exp(\tilde{y}^{t})+b \tag{4}\] Figure 1: **Modeling trial-variability in electrophysiological recordings.** **A**. During a delayed whisker detection task, the mouse should report the sensation of a whisker stimulation by licking to obtain a water reward. 
Neural activity and behavior of the mouse are recorded simultaneously. **B**. A recurrent spiking neural network (RSNN) of the sensorimotor pathway receives synaptic input modeling the sensory stimulation and produces the jaw movement as a behavioral output. **C**. The stimuli and the licking action of the mouse organize the trials into four types (hit, miss, false alarm, and correct rejection). Our goal is to build a model with realistic neural and behavioral variability. Panels A and C are adapted from [34]. Here \(W_{ij}^{d}\), \(W_{ij}^{\text{in},d}\), \(W_{i}^{\text{jaw}}\), \(v_{\text{thr},j}\), and \(b\) are model parameters. The membrane time constants, \(\tau_{m}=30\) ms for excitatory and \(\tau_{m}=10\) ms for inhibitory neurons, define \(\alpha_{j}=\exp\left(-\frac{\delta t}{\tau_{m,j}}\right)\); similarly, \(\tau_{jaw}=50\) ms defines \(\alpha_{jaw}\). These constants control the speed of integration of the membrane voltage and the jaw movement. To implement a soft threshold-crossing condition, the spikes inside the recurrent network are sampled from a Bernoulli distribution \(z_{j}^{t}\sim\mathcal{B}(\sigma(\frac{v_{j}^{t}-v_{\text{thr},j}}{v_{0}}))\), where \(v_{0}\) is the temperature of the sigmoid (\(\sigma\)). The spike trains \(x_{i}^{t}\) model the thalamic inputs as simple Poisson neurons producing spikes randomly with a firing rate of \(5\) Hz and increasing their firing rate when a whisker stimulation is present (see Appendix). The last noise source, \(\xi_{j}^{t}\), is an instantaneous Gaussian noise of standard deviation \(\beta v_{\text{thr}}\sqrt{\delta t}\) modeling random inputs from other areas (\(\beta\) is a model parameter that is kept constant over time). **Session stitching** An important aspect of our fitting method is to leverage a dataset of electrophysiological recordings with many sessions. To constrain the neurons in the model to the data, we uniquely assign each neuron in the model to a single neuron from the recordings, as illustrated in Figures 2A and 3A. Since our model has 1500 neurons, we select randomly 1500 neurons from the recordings (\(250\) in each area; we ignore the other recorded neurons to have the same number of excitatory and inhibitory neurons in each area). This bijective mapping between neurons in the data and the model is fixed throughout the analysis and defines the area and cell type of the neurons in the model. The area is inferred from the location of the corresponding neuron in the dataset and the cell type is inferred from the action potential waveform of this cell (for simplicity, fast-spiking neurons are considered inhibitory and regular-spiking neurons excitatory). Given this assignment, we denote \(\mathbf{z}_{j}^{\mathcal{D}}\) as the spike train of neuron \(j\) in the dataset and \(\mathbf{z}_{j}\) as the spike train of the corresponding neuron in the model; in general, a superscript \(\mathcal{D}\) always refers to the recorded data. A consequence is that two neurons \(i\) and \(j\) might be synaptically connected in the model although they correspond to neurons recorded in separate sessions. This choice is intended to model network sizes beyond what can be recorded during a single session. Our network is therefore a "collage" of multiple sessions stitched together, as illustrated in Figures 2A and 3A. This network is then constrained to the recorded data by optimizing the parameters to minimize the loss functions defined in the following section. 
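To make the dynamics of Eqs. (1)-(4) concrete, here is a minimal numpy sketch of one forward simulation. It is an illustration, not the fitted model: the network size, weight statistics, and the single 2 ms transmission delay are stand-ins we chose (the actual model uses 1500 neurons, learned parameters, delays of 2-4 ms, and is trained with surrogate gradients).

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, n, n_in = 2.0, 1000, 300, 100        # step [ms], steps, neurons, inputs (toy)
exc = np.arange(n) < int(0.8 * n)           # 80% excitatory, 20% inhibitory
alpha = np.exp(-dt / np.where(exc, 30.0, 10.0))   # membrane decay of Eq. (1)
alpha_jaw = np.exp(-dt / 50.0)
v_thr, v0, beta, b = 1.0, 0.2, 0.1, 0.0

sign = np.where(exc, 1.0, -1.0)[:, None]    # Dale's law on outgoing weights
W = sign * np.abs(rng.normal(0.0, 0.3, (n, n))) / np.sqrt(n)
W_in = np.abs(rng.normal(0.0, 0.3, (n_in, n))) / np.sqrt(n_in)
W_jaw = rng.normal(0.0, 0.1, n) / np.sqrt(n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v, z, y_tilde = np.zeros(n), np.zeros(n), 0.0
spikes, jaw = [], []
for t in range(T):
    x = (rng.random(n_in) < 5e-3 * dt).astype(float)   # ~5 Hz Poisson inputs
    I = z @ W + x @ W_in                               # Eq. (2), one-step delay
    xi = rng.normal(0.0, beta * v_thr * np.sqrt(dt), n)
    v = alpha * v + (1 - alpha) * I - v_thr * z + xi   # Eq. (1), soft reset
    z = (rng.random(n) < sigmoid((v - v_thr) / v0)).astype(float)  # Bernoulli
    y_tilde = alpha_jaw * y_tilde + (1 - alpha_jaw) * (z @ W_jaw)  # Eq. (3)
    jaw.append(np.exp(y_tilde) + b)                    # Eq. (4)
    spikes.append(z)
```

Each run of this loop is one stochastic trial; in training, a batch of such trials is generated and compared to the recordings through the loss functions below.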
Altogether, when modeling the dataset from Esmaeili and colleagues [2], the network consists of \(1500\) neurons where each neuron is assigned to one neuron recorded in one of the \(28\) different recording sessions. Since different sessions typically come from different animals, we model a "template mouse brain", which is not meant to reflect subject-to-subject differences. ## 3 Fitting single-trial variability with the trial matching loss function We fit the network to the recordings with gradient descent and we rely on surrogate gradients to extend back-propagation to RSNNs [9; 10]. At each iteration until convergence, we simulate a batch of \(K=150\) statistically independent trials. We measure some trial-average and single-trial statistics of the simulated and recorded activity, calculate a loss function, and minimize it with respect to all the trainable parameters of the model via gradient descent and automatic differentiation. This protocol is sometimes referred to as a sample-and-measure method [8], as opposed to the likelihood optimization in GLMs where the network trajectory is clamped to the recorded data during optimization [6]. The full optimization lasts for approximately one to three days on an A100-SXM4-40GB GPU. **Trial-average loss** We consider the trial-averaged activity over time of each neuron from every session \(\mathcal{T}_{\text{neuron}}\), also referred to as the neuron peristimulus time histogram (PSTH). This is defined by \(\mathcal{T}_{\text{neuron}}(\mathbf{z}_{j})=\frac{1}{K}\sum_{k}\mathbf{z}_{j,k}*f\), where \(f\) is a rolling average filter with a window of \(12\) ms, and \(K\) is the number of trials in a batch of spike trains \(\mathbf{z}\). The statistics \(\mathcal{T}_{\text{neuron}}(\mathbf{z}_{j}^{\mathcal{D}})\) are computed similarly on the \(K^{\mathcal{D}}\) trials recorded during the session corresponding to neuron \(j\). We denote the statistics \(\mathcal{T}^{\prime}_{\text{neuron}}\) after normalizing each neuron's trial-averaged activity, and we define the trial-averaged loss function as follows: \[\mathcal{L}_{\text{neuron}}=\sum_{j}\|\mathcal{T}^{\prime}_{\text{neuron}}( \mathbf{z}_{j})-\mathcal{T}^{\prime}_{\text{neuron}}(\mathbf{z}_{j}^{\mathcal{D}})\|^ {2}\;. \tag{5}\] It is expected from [8] that minimizing this loss function alone generates realistic trial-averaged statistics like the average neuron firing rate. **Trial matching loss: fitting trial-to-trial variability** Going beyond trial-averaged statistics, we now describe the _trial matching_ loss function to capture the main modes of trial-specific activity. From the previous neuroscience study [2], it appears that population activity in well-chosen areas is characteristic of the trial-specific variability. For instance, intense jaw movements are preceded by increased activity in the tongue-jaw motor cortices, and hit trials are characterized by a secondary transient appearing in the sensory cortices a hundred milliseconds after a whisker stimulation. To define single-trial statistics which can capture these features, we denote the population-averaged firing rate of an area \(A\) as \(\mathcal{T}_{A}(\mathbf{z}_{k})=\frac{1}{|A|}\sum_{j\in A}(\mathbf{z}_{j,k}*f)\), where \(|A|\) is the number of neurons in area \(A\); the smoothing filter \(f\) has a window size of \(48\) ms and the resulting signal is downsampled to avoid unnecessary redundancy. 
We write \(\mathcal{T^{\prime}}_{A}\) when each time bin is normalized to mean \(0\) and standard deviation \(1\) using the recorded trials, and we use \(\mathcal{T^{\prime}}_{A}\) as feature vectors to characterize the trial-to-trial variability in area \(A\). To construct a single feature vector encapsulating the joint activity dynamics in all areas and the jaw movements in a session, we concatenate all these feature vectors together into \(\mathcal{T^{\prime}}_{\mathrm{trial}}=(\mathcal{T^{\prime}}_{A1},\mathcal{T^{ \prime}}_{A2},\mathbf{y}_{k}*f)\), where \(A1\) and \(A2\) are the areas recorded in this session. The challenging part is now to define the distance between the recorded statistics \(\mathcal{T}_{\mathrm{trial}}(\mathbf{z}^{\mathcal{D}})\) and the generated ones \(\mathcal{T}_{\mathrm{trial}}(\mathbf{z})\). Common choices of distances like the mean square error are not appropriate to compare distributions. This is because the order of trials in a batch of generated/recorded trials has no meaning a priori: there is no reason for the random noise of the first generated trial to correspond to the first recorded trial; rather, we want to compare unordered sets of trials and penalize if any generated trial is very far from any recorded trial. Formalizing this mathematically, we consider a distance between distributions inspired by the optimal transport literature. Since the plain mean-squared error cannot be used, we use the mean-squared error of the optimal assignment between pairs of recorded and generated trials: we select randomly \(K^{\prime}=\min(K,K^{\mathcal{D}})\) generated and recorded trials (\(K\) and \(K^{\mathcal{D}}\) are respectively the number of generated and recorded trials in one session), and this optimal assignment is formalized by the integer permutation \(\pi:\{1,\dots K^{\prime}\}\rightarrow\{1,\dots K^{\prime}\}\). Then, using the feature vector \(\mathcal{T}_{\mathrm{trial}}\) for any trial \(k\), we define the hard _trial matching_ loss function as follows: \[\mathcal{L}_{\mathrm{trial}}=\min_{\pi}\sum_{k}||\mathcal{T^{\prime}}_{ \mathrm{trial}}(\mathbf{z}_{k})-\mathcal{T^{\prime}}_{\mathrm{trial}}(\mathbf{z}^{ \mathcal{D}}_{\pi(k)})||^{2}\;. \tag{6}\] We compute this loss function identically on all the recorded sessions and take the averaged gradients to update the parameters. Each evaluation of this loss function involves the computation of the optimal trial assignment \(\pi\), which can be computed with the Hungarian algorithm [35] (see linear_sum_assignment for an implementation in scipy). This is not the only way to define a distance between distributions of statistics \(\mathcal{T^{\prime}}_{\mathrm{trial}}\). In fact, this choice poses a potential problem because the optimization over \(\pi\) is a discrete optimization problem, so we have to assume that \(\pi\) is a constant with respect to the parameters when computing the loss gradients. We also tested alternative choices relying on a relaxation of the hard assignment into a smooth and differentiable bi-stochastic matrix. This results in the soft _trial matching_ loss function, which replaces the optimization over \(\pi\) with the Sinkhorn divergence [12; 13] (see the geomloss package for an implementation in pytorch [13]). 
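A minimal numpy/scipy sketch of the hard version of Eq. (6) is given below; the function name is ours, and the feature matrices are assumed to hold one \(\mathcal{T^{\prime}}_{\mathrm{trial}}\) vector per row.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hard_trial_matching_loss(feat_model, feat_data):
    """Hard trial-matching loss of Eq. (6).
    feat_model, feat_data: arrays of shape (K', d) with the per-trial
    feature vectors T'_trial of generated and recorded trials."""
    # squared Euclidean distance between every generated/recorded trial pair
    cost = ((feat_model[:, None, :] - feat_data[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm: optimal pi
    return cost[rows, cols].sum()
```

In the differentiable setting described above, the assignment would be computed on detached feature tensors and then treated as a constant while back-propagating the matched mean-squared error; the soft variant replaces the assignment step with a Sinkhorn divergence from geomloss.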
In practice, to minimize both \(\mathcal{L}_{\mathrm{trial}}\) (either the soft or hard version) and \(\mathcal{L}_{\mathrm{neuron}}\) simultaneously, we optimize them in an additive fashion with a parameter-free multi-task method from deep learning which re-weights the two loss functions to ensure that their gradients have comparable scales (see [36] for a similar implementation). ## 4 Simulation results **Validation using an artificial dataset** We generated an artificial dataset with two distinct areas of \(250\) neurons each to showcase the effect of trial variability. In this dataset A1 (representing a sensory area) always responds to the stimulus while A2 (representing a motor area) responds to the stimulus in only \(80\%\) of the trials (the firing rates of neurons in the artificial dataset are shown in Figure 2B with light shades). This is a toy representation of the variability that is observed in the real data recorded in mice, so we construct the artificial data so that a recording resembles a hit trial ("hit-like") if the transient activity in A2 is higher than \(30\) Hz (otherwise it's a "miss-like" trial). From the results of our simulations in Figure 2B-C, we can observe that the models that use trial matching (either soft _trial matching_ or hard _trial matching_) can faithfully re-generate this bimodal response distribution ("hit-like" and "miss-like") in A2. In this dataset we saw little difference between the solutions of soft and hard _trial matching_; if any, soft _trial matching_ reached its optimal performance with fewer iterations (see Appendix). As expected, when the model is only trained to minimize the neuron loss for trial-averaged statistics, it cannot generate stochastically this bimodal distribution and consistently generates instead a noisy average response. **Delayed whisker tactile detection dataset** We then apply our modeling approach to the real large-scale electrophysiology recordings from [2]. After optimization, we verify quantitatively that our model generates activity that is similar to the recordings in terms of trial-averaged statistics. First, we see that the 1500 neurons in the network exhibit a realistic diversity of averaged firing rates: the distribution of neuron firing rates is log-normal and matches closely the distribution extracted from the data in Figure 3B. Second, the single-neuron PSTHs of our model are a close match to the PSTHs from the recordings. This can be quantified by the Pearson trial-averaged correlation between the generated and held-out test trials, which we did not use for parameter fitting. We obtain an averaged Pearson correlation of \(0.30\pm 0.01\), which is very close to the Pearson correlation obtained when comparing the training and testing sets (\(0.31\pm 0.01\)). Figure 3C shows how the trial-averaged correlation is distributed over neurons. As expected, this trial-averaged metric is not affected if we do not use _trial matching_ (\(0.30\pm 0.01\)). To quantify how the models capture the trial-to-trial variability, we then measure how consistent the distributions of neural activity and jaw movement are between data and model. Since the trial statistics \(T^{\prime}_{\text{trial}}\) form unordered sets of trials, we define a _trial-matched Pearson correlation_: we compute the optimal assignment \(\pi\) between trial pairs from the model and the recordings, and we report the averaged Pearson correlation over all matched trial pairs. 
Between the data and the model, we measure a _trial-matched Pearson correlation_ of \(0.48\pm 0.01\), with a performance ceiling of \(0.52\pm 0.01\) obtained by comparing the training and testing sets directly (see Figure 3C for details). For reference, the model without _trial matching_ has a lower _trial-matched Pearson correlation_ of \(0.28\pm 0.003\).

Figure 2: **Artificial Dataset** **A.** Session stitching: every neuron from the recordings is uniquely mapped to a neuron from our model. For example, an excitatory neuron from our model that belongs in the putative A1 is mapped to an excitatory neuron "recorded" in A1. In our network, we constrain the connectivity so that only excitatory neurons can project across different brain regions. **B.** The first area (A1) responds equally in a hit-like and a miss-like trial, while the second area (A2) responds only in hit-like trials. A model that does not use trial matching cannot capture the bimodal distribution of A2. **C.** Distribution of the max firing rate of the population average of A2 from each trial. Only the trial matching algorithms retrieve the bimodal behavior of A2.

**Successful recovery of trial type distribution** While the neuronal activity is recorded, the behavioral response of the animal is also variable. When mice receive a stimulation, they perform correctly with a \(66\%\) hit rate, while in the absence of a stimulus, mice still falsely lick with a \(20\%\) false alarm rate. Even in correct trials, the neural activity reflects variability which is correlated with uninstructed jaw and tongue movements [2]. We evaluate the distribution of trial types (hit, miss, correct rejection, and false alarm) from our fitted network model. Indeed, the 95% confidence intervals of the estimated trial type frequencies always overlap between the model and the data (see Figure 4A). In this figure, we classify the trial type with a nearest-neighbor-like classifier using only the neural activity (see Appendix). In contrast, a model without _trial matching_ would fail completely because it always produces averaged trajectories instead of capturing the multi-modal variability of the data, as seen in Figure 4A. With _trial matching_ it is even possible to classify trial types using the jaw movement. To define equivalent trial types in the model, we rely on the presence or absence of the stimulation and on a classifier that identifies a lick action given the jaw movements. This classifier is a multi-layer perceptron trained to predict a lick action on the water dispenser given the recorded jaw movements (as in the data, it occurs that the model "moves" the jaw without inducing a lick action). After optimization with _trial matching_, since the jaw movement \(y^{t}\) is contained in the fitted statistics \(\mathcal{T}_{\text{trial}}\), the distribution of jaw movements is similar in the fitted model, and the resulting trial type distribution remains consistent with the data. In Figure 4B we show population-averaged activity traces where the jaw is used to determine the trial type.

Figure 3: **Large-scale electrophysiology recordings dataset** **A.** Session stitching: every neuron from the recordings across sessions is uniquely mapped to a neuron from our model. For example, an excitatory neuron from our model that belongs in the putative area iM1 is mapped to an excitatory neuron recorded from iM1. In the pink box are the areas from which we decode the jaw trace. **B.** Baseline firing rate histogram, \(200\) ms before the whisker stimulus, from each neuron of our model and the recordings. **C.** Left: Pearson correlation of the PSTH; the violin plots represent the Pearson correlations across neurons. Right: _trial-matched Pearson correlation_ of \(\mathcal{T}^{\prime}_{trial}\); the violin plots represent the distribution over \(200\) generated and recorded trial pairs.

**Unsupervised discovery of modes of variability** So far we have analyzed whether the variability among the four main trial types was expressed in the model, but the existence of these four trial types is not enforced explicitly in the loss function. Rather, the _trial matching_ loss function aims to match the overall statistics of the distributions, and it has discovered these four main modes of trial variability without explicit supervision. A consequence is that our model has possibly generated other modes of variability which are needed to explain the full distribution of recorded data. To display the full distribution of generated trials, we represent the neural activity of \(400\) generated trials in 2D in Figure 4C. Formally, we apply UMAP to the sub-selection of \(\mathcal{T}_{\text{trial}}\) which excludes the jaw components: \((\mathcal{T}_{wS1},\ldots\mathcal{T}_{ALM})\). Importantly, the representation of the trial distribution in a 2D projection is only possible with a generative model like ours. Otherwise, it would be nontrivial to define feature vectors for individual recorded trials because of the missing data: in each session, only a couple of areas are recorded simultaneously. However, to confirm that the generated distribution is consistent with the data, we display template vectors for each trial condition \(c\) that are calculated from the recorded data. These templates are drawn with stars in Figure 4C and are computed as follows: the coefficient \(\mathcal{T}_{\text{A,c}}^{\mathcal{D}}\) of a template vector is computed by averaging the population activity of area \(A\) in all recorded trials of condition \(c\) from all sessions (see Appendix for details); these averaged vectors are then concatenated and projected into the 2D UMAP space. The emerging distribution in this visualization is grouped in clusters. We observe that the template vectors representing the correct rejection, miss, and false alarm trials are located at the center of the corresponding cluster of generated trials. More surprisingly, the generated hit trials are split into two clusters (see the two boxed clusters in Figure 4C).

Figure 4: **Emergent trial types** **A.** Trial type distribution from the recordings and in the models. The whiskers show the \(95\%\) confidence interval of the trial type frequency. In this panel, trial types are determined with the template matching method to avoid disadvantaging models without _trial matching_, which have almost no variability in the jaw movement; see Appendix. **B.** Population- and trial-averaged neuronal activity per area and per hit and miss trial type from the \(400\) simulated trials of the model against the averaged recordings from the testing set. **C.** Two-dimensional UMAP representation of the \(\mathcal{T}_{trial}\) of \(400\) simulated trials. The jaw movement is not used in this representation. **D.** For the model, we separated the active hit and quiet hit trials based on their location in the UMAP. For the data, we separated the active hit and quiet hit trials based on the jaw movement as in [2].
This can be explained by a simple feature: 85% of the generated hit trials in the left-hand cluster of panel 4C have intense jaw movements during the delay period (\(\max_{t}|y^{t}-y^{t-1}|>4\delta\), where \(\delta\) is the standard deviation of \(|y^{t}-y^{t-1}|\) in the \(200\) ms before whisker stimulation). In fact, a similar criterion had been used in [2] to separate the hit trials in the recorded data, so we also refer to them as the "active hit" and "quiet hit" trials and show the population activity in Figure 4D. This shows that our algorithm has captured, without supervision, the same partition of trial types that neuroscientists have used to describe this dataset. We conclude that our modeling approach can be used for a hypothesis-free identification of modes of trial-to-trial variability, even when they reflect task-irrelevant behavior. ## 5 Discussion We introduced a generative modeling approach where a data-constrained RSNN is fitted to multi-session electrophysiology data. The two major innovations of this paper are (1) the technical progress towards multi-session RSNNs fitted with automatic differentiation, and (2) a _trial matching_ loss function to match the trial-to-trial variability of recorded and generated data. **Interpretable mechanistic model of activity and behavior** Our model has a radically different objective in comparison with other deep-learning models: our RSNN is designed to be biophysically interpretable. In the long term, we hope that this method will be able to capture biological mechanisms (e.g. predicting network structure, causal interaction between areas, and anatomical connectivity), but in this paper, we have focused on numerical and methodological questions which bring us one step closer to this long-term objective. **Mechanisms of latent dynamics** A long-standing debate in neuroscience is whether the brain computes with low-dimensional latent representations and how that is implemented in a neural circuit. Deep auto-encoders of neural activity like LFADS [22] can indeed generate trial-to-trial variability from low-dimensional latent representations. By construction, the variability is sourced by the latent variable, which contains all the trial-specific information. This is in stark contrast with our approach, where we see the emergence of structured manifolds in the trial-to-trial variability of the RSNN (see the UMAP representation of Figure 4C), although we did not enforce the presence of low-dimensional latent dynamics. Structure in the trial-to-trial variability emerges because the RSNN is capable of transforming the unstructured noise sources (stochastic spikes and Gaussian input current) into a low-dimensional trial-to-trial variability; a typical variational auto-encoder setting would not achieve this. Note, however, that it is also possible to add a random low-dimensional latent variable as a source of low-dimensional variability, as in LFADS. In the Appendix, we reproduce our results on the multi-session dataset from [2] while assuming that all voltages \(v^{t}_{i,k}\) have a trial-specific excitability offset \(\xi_{i,k}\), using a 5-dimensional Gaussian noise \(\mathbf{\psi}_{k}\) and a one-hidden-layer perceptron \(F_{\theta}\) such that \(\xi_{i,k}=F_{\theta,i}(\mathbf{\psi}_{k})\). We observe that this latent noise model drastically accelerates the optimization, probably because \(\xi_{i,k}\) is an ideal noise source for minimizing \(\mathcal{L}_{\mathrm{trial}}\). However, the final solution achieves similar fitting performance metrics, so our method demonstrates that the extra assumption of a low-dimensional input is not necessary to generate realistic variability.
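A minimal pytorch sketch of this trial-specific excitability offset; the hidden-layer width is our assumption, as it is not stated in the text:

```python
import torch
import torch.nn as nn

class LatentExcitability(nn.Module):
    """One-hidden-layer perceptron F_theta mapping a 5-dimensional
    Gaussian trial noise psi_k to per-neuron voltage offsets xi_{i,k}.
    The hidden width (64) is an assumption, not taken from the paper."""
    def __init__(self, n_neurons, latent_dim=5, hidden=64):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_neurons),
        )

    def forward(self, n_trials):
        psi = torch.randn(n_trials, self.latent_dim)  # one draw per trial
        return self.net(psi)  # xi of shape (n_trials, n_neurons)
```

The sampled offsets would simply be added to the membrane voltages \(v^{t}_{i,k}\) of each trial at every time step.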
Arguably, providing this low-dimensional input might even be counterproductive if the end goal is to identify the mechanism by which the circuit produces the low-dimensional dynamics. **Alternative loss functions to capture variability** The main alternative methods to constrain the trial-to-trial variability would be likelihood-based approaches [6; 15] or spike-GANs [27; 33]. These methods are appealing as they do not depend on the choice of trial statistics \(\mathcal{T}_{\mathrm{trial}}\). Since these methods were never applied with a multi-session data-constrained RSNN, we explored how to extend them to our setting and compared the results. We tested these alternatives on the artificial dataset in the Appendix. The likelihood of the recorded spike trains [6; 15] cannot be defined with multiple sessions because we cannot clamp neurons that are not recorded (see [8] for details). The closest implementation that we could consider was to let the network simulate the data "freely", which therefore requires an optimal assignment between recorded and generated data, so it is a form of _trial-matched likelihood_. With this loss function, we could not retrieve the bimodal hit-versus-miss trial type distribution unless it was optimized jointly with \(\mathcal{L}_{\mathrm{trial}}\). We also tested the implementation of a spike-GAN discriminator. In GANs, the min-max optimization is notoriously hard to tune, and we were unable to train our generator with a generic spike-GAN discriminator from scratch (probably because the biological constraints of our generator affect the robustness of the optimization). In our hands, it only worked when the GAN discriminator was fed directly with the trial statistics \(\mathcal{T}_{\text{trial}}\) and the network was jointly fitted to the trial-averaged loss \(\mathcal{L}_{\text{neuron}}\). This shows that a GAN objective and the _trial matching_ loss function hold a similar role. We conclude that both of these clamping-free methods are promising for fitting data-constrained RSNNs. What differs between them, however, is that _trial matching_ replaces the discriminator with the optimal assignment \(\pi\) and the statistics \(\mathcal{T}\), which are parameter-free, making it easy to use and numerically robust. It is conceivable that, in future work, the best results will be obtained by combining _trial matching_ with other GAN-like generative methods. ## Acknowledgments and Disclosure of Funding This research was supported by the Swiss National Science Foundation (no. 31003A_182010, TMAG-3_209271, 200020_207426), and Sinergia Project CRSII5_198612. Many thanks to Lenaic Chizat, James Isbister, Shuqi Wang, and Vahid Esmaeili for their helpful discussions.
2310.10827
Deep Policy Iteration for High-Dimensional Mean Field Games
This paper introduces Deep Policy Iteration (DPI), a novel approach that integrates the strengths of Neural Networks with the stability and convergence advantages of Policy Iteration (PI) to address high-dimensional stochastic Mean Field Games (MFG). DPI overcomes the limitations of PI, which is constrained by the curse of dimensionality to low-dimensional problems, by iteratively training three neural networks to solve PI equations and satisfy forward-backward conditions. Our findings indicate that DPI achieves comparable convergence levels to the Mean Field Deep Galerkin Method (MFDGM), with additional advantages. Furthermore, deep learning techniques show promise in handling separable Hamiltonian cases where PI alone is less effective. DPI effectively manages high-dimensional problems, extending the applicability of PI to both separable and non-separable Hamiltonians.
Mouhcine Assouli, Badr Missaoui
2023-10-16T21:03:13Z
http://arxiv.org/abs/2310.10827v4
# Deep Policy Iteration for High-Dimensional Mean Field Games ###### Abstract This paper introduces Deep Policy Iteration (DPI), a novel approach that combines the MFDGM [1] method and the Policy Iteration method [2; 3] to address high-dimensional stochastic Mean Field Games. Deep Policy Iteration employs three neural networks to approximate the solutions of the equations. These networks are trained to satisfy each equation and its corresponding forward-backward conditions. Unlike existing approaches that are limited to separable Hamiltonians and lower dimensions, DPI extends its capabilities to effectively solve high-dimensional MFG systems, encompassing both separable and non-separable Hamiltonians. To evaluate the reliability and efficacy of DPI, a series of numerical experiments is conducted. The results obtained using DPI are compared with those obtained using the MFDGM method and the Policy Iteration method. This comparative analysis provides insights into the performance of DPI and its advantages over existing methods. keywords: Mean Field Games, Deep Learning, Policy Iteration, Non-Separable Hamiltonian + Footnote †: journal: Journal of Computational Physics ## 1 Introduction Mean Field Games (MFG) theory, introduced by Lasry and Lions [4], provides a framework for analyzing Nash equilibria in differential games involving a large number of agents. This theory is characterized by a mathematical formulation consisting of a system of coupled partial differential equations (PDEs). Specifically, the system comprises a forward-time Fokker-Planck equation (FP) and a backward-time Hamilton-Jacobi-Bellman equation (HJB), which govern the evolution of the population density and the value function, respectively. MFGs have garnered significant attention and have been extensively studied in various fields, such as autonomous vehicles [5; 6], finance [7; 8], and economics [9; 10; 11]. In the general case, the MFG system is described as \[\left\{\begin{array}{rl}-\partial_{t}\phi-\nu\Delta\phi+H(x,\rho,\nabla\phi) =0,\ in&E,\\ \partial_{t}\rho-\nu\Delta\rho-\mbox{div}\left(\rho\nabla_{p}H(x,\rho,\nabla \phi)\right)=0,\ in&E,\\ \rho(0,x)=\rho_{0}(x),\ \ \phi(T,x)=g(x,\rho(T,x)),\ in&\Omega,\end{array}\right. \tag{1}\] where \(E=[0,T]\times\Omega\), \(\Omega\) is a bounded subset of \(\mathbb{R}^{d}\), and \(g\) denotes the terminal cost. A Hamiltonian \(H\) with separable structure is defined as \[H(x,\rho,p)=\inf_{v}\{-p\cdot v+L_{0}(x,v)\}-f_{0}(x,\rho)=H_{0}(x,p)-f_{0}(x,\rho). \tag{2}\] The solution of (1) exists and is unique under the standard assumptions of convexity of \(H\) in the second variable and monotonicity of \(f\) and \(g\) [12]; see also refs. [13; 4] for more details. For non-separable Hamiltonians, where the Hamiltonian of the MFG depends jointly on \(\rho\) and \(p\), the existence and uniqueness of the solution for MFGs of congestion type has been investigated by Achdou and Porretta in [14] and Gomes et al. in [15]. The numerical solution of (1) holds significant importance in the practical application of MFG theory. However, due to the strong coupling and forward-backward structure of the two equations in (1), they cannot be solved independently or jointly using simple forward-in-time methods. Extensive research has been conducted in this area, leading to the proposal of various methods with successful applications, as seen in [16; 4] and the references therein.
Nevertheless, these methods often face challenges in terms of computational complexity, particularly when dealing with high-dimensional problems [17; 18]. To address this challenge, deep learning methods, such as Generative Adversarial Networks (GANs) [12; 19], have been utilized to reformulate MFGs as primal-dual problems. However, these existing methods require the Hamiltonian \(H\) to be separable in terms of \(\rho\) and \(p\). Unfortunately, none of these methods adequately covers the case of a non-separable Hamiltonian with a generic structure. To the best of our knowledge, two promising numerical approaches have recently emerged to overcome this limitation. The first approach is the Policy Iteration (PI) method (in the context of MFG) [3], which is a numerical method based on finite difference techniques. PI can be seen as a modification of the fixed-point procedure, where, at each iteration, the HJB equation is solved for a fixed control. The control is then updated separately after the value function is updated. The second approach is the MFDGM method [1], which is a deep learning method based on the Deep Galerkin Method. This method offers a promising solution for MFGs with non-separable Hamiltonians. Following the work in [2; 3], we introduce the policy iteration algorithms for (1) with periodic boundary conditions. We first define the Lagrangian as the Legendre transform of \(H\): \[L(\rho,q)=\sup_{p\in\mathbb{R}^{d}}\{p\cdot q-H(\rho,p)\}\] **Policy Iteration Algorithm:** Given \(R>0\) and given a bounded, measurable vector field \(q^{(0)}:\mathbb{T}^{d}\times[0,T]\rightarrow\mathbb{R}^{d}\) with \(\left|q^{(0)}\right|\leq R\) and \(\left\|\operatorname{div}q^{(0)}\right\|_{L^{r}(E)}\leq R\), iterate: (i) Solve \[\begin{cases}\partial_{t}\rho^{(n)}-\epsilon\Delta\rho^{(n)}- \operatorname{div}\left(\rho^{(n)}q^{(n)}\right)=0,&\text{ in }E,\\ \rho^{(n)}(x,0)=\rho_{0}(x)&\text{ in }\Omega.\end{cases} \tag{3}\] (ii) Solve \[\begin{cases}-\partial_{t}\phi^{(n)}-\epsilon\Delta\phi^{(n)}+q^{(n)}\cdot D\phi^{( n)}-L\left(\rho^{(n)},q^{(n)}\right)=0&\text{ in }E,\\ \phi^{(n)}(x,T)=\phi_{T}(x)&\text{ in }\Omega,\end{cases} \tag{4}\] (iii) Update the policy \[q^{(n+1)}(x,t)=\arg\max_{\left|q\right|\leq R}\left\{q\cdot D\phi^{(n)}(x,t)-L \left(\rho^{(n)},q\right)\right\}\quad\text{ in }E. \tag{5}\] _Contributions_ This paper contributes by introducing a novel approach, called Deep Policy Iteration (DPI), which combines elements of the MFDGM method and the PI method to solve high-dimensional stochastic MFG. Inspired by [20; 21], we employ three neural networks to approximate the unknown solutions of equations (3)-(5), trained to satisfy each equation and the associated forward-backward conditions. While most existing methods are restricted to problems with separable Hamiltonians and lower dimensions, DPI overcomes these limitations and can effectively solve high-dimensional MFG systems, encompassing both separable and non-separable Hamiltonians. To assess the reliability and effectiveness of DPI, we conducted a series of numerical experiments. In these experiments, we compare the results obtained using DPI with those obtained using the MFDGM method and the PI method. _Contents_ The subsequent sections of this paper are organized as follows: Section 2 provides an introduction to our approach, outlining its key aspects. In Section 3, we delve into a comprehensive examination of the PI and MFDGM methods. Moving on to Section 4, we explore the numerical performance of our proposed algorithms.
To evaluate the efficacy of our method, we employ a straightforward analytical solution in Section 4.1. In Section 4.2, we put our method to the test using two high-dimensional examples. Furthermore, we employ a well-established one-dimensional traffic flow problem [5], renowned for its non-separable Hamiltonian, to compare DPI and PI in Section 4.3. ## 2 Methodology The proposed methodology consists of two main steps. In the first step, the algorithm computes the density and cost of a given policy. This is achieved by solving a set of coupled partial differential equations (PDEs) that describe the dynamics of the system. In the second step, the algorithm updates the policy established in the first step and computes a new policy that minimizes the expected cost given the density and value function. This involves solving a separate optimization problem for each agent, which can be done using deep learning techniques. Here, we proceed by using a distinct neural network for each of the three variables: the density function, the value function, and the policy. To assess the accuracy of these approximations, we use a loss function based on the residual of each equation to update the parameters of the neural networks. These neural networks are trained following the Deep Galerkin Method [20]. The DPI method repeats the two steps until convergence. At each iteration, the algorithm computes a better policy by taking into account the expected behavior of the other agents in the population. The resulting policy is a solution to the MFG that describes the optimal behavior of the agents in the population, given the interactions between them. We initialize the neural networks to represent the unknowns of our system and define: \[q_{\tau}(t,x)=N_{\tau}(t,x),\quad\phi_{\omega}(t,x)=N_{\omega}(t,x),\quad\rho_{ \theta}(t,x)=N_{\theta}(t,x). \tag{6}\] Our training strategy starts by solving (3). We compute the loss (7) at randomly sampled points \(\{(t_{b},x_{b})\}_{b=1}^{B}\) from \(E\), and \(\{x_{s}\}_{s=1}^{S}\) from \(\Omega\): \[\text{Loss}_{total}^{(3)}=\text{Loss}^{(3)}+\text{Loss}_{cond}^{(3)}, \tag{7}\] where \[\text{Loss}^{(3)}=\frac{1}{B}\sum_{b=1}^{B}\Big{|}\partial_{t}\rho_{\theta}(t_{b},x_{b})-\nu\Delta\rho_{\theta}(t_{b},x_{b})-\text{div}\left(\rho_{\theta}(t_{b},x_{b})q_{\tau}(t_{b},x_{b})\right)\Big{|}^{2},\] and \[\text{Loss}_{cond}^{(3)}=\frac{1}{S}\sum_{s=1}^{S}\Big{|}\rho_{\theta}(0,x_{s })-\rho_{0}(x_{s})\Big{|}^{2}.\] We then update the weights of \(\rho_{\theta}\) by backpropagating the loss (7). We do the same for (4) with the given \(\rho_{\theta}\): we compute (8) at randomly sampled points \(\{(t_{b},x_{b})\}_{b=1}^{B}\) from \(E\), and \(\{x_{s}\}_{s=1}^{S}\) from \(\Omega\), \[\text{Loss}_{total}^{(4)}=\text{Loss}^{(4)}+\text{Loss}_{cond}^{(4)}, \tag{8}\] where \[\text{Loss}^{(4)}=\frac{1}{B}\sum_{b=1}^{B}\Big{|}\partial_{t}\phi_{\omega}(t_{b},x_{b})+\nu\Delta\phi_{\omega}(t_{b},x_{b})-q_{\tau}(t_{b},x_{b})\cdot\nabla\phi_{\omega}(t_{b},x_{b})+L(\rho_{\theta}(t_{b},x_{b}),q_{\tau}(t_{b},x_{b}))\Big{|}^{2},\] consistent with the residual of (4), and \[\text{Loss}_{cond}^{(4)}=\frac{1}{S}\sum_{s=1}^{S}\Big{|}\phi_{\omega}(T,x_{s })-g(x_{s},\rho_{\theta}(T,x_{s}))\Big{|}^{2}.\] We then update the weights of \(\phi_{\omega}\) by backpropagating the loss (8).
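For illustration, a minimal sketch of the residual part of the loss (7) for the FP equation in one spatial dimension, assuming pytorch networks that map \((t,x)\) to a scalar (the initial-condition term is omitted):

```python
import torch

def fp_residual_loss(rho_net, q_net, t, x, nu):
    """Monte-Carlo residual of the 1-D Fokker-Planck equation (3):
    d_t rho - nu * d_xx rho - d_x(rho * q) = 0 at sampled points (t, x).
    rho_net and q_net are assumed callables mapping (t, x) -> scalar."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    grad = lambda f, v: torch.autograd.grad(f.sum(), v, create_graph=True)[0]
    rho = rho_net(t, x)
    rho_t = grad(rho, t)                     # d_t rho
    rho_x = grad(rho, x)                     # d_x rho
    rho_xx = grad(rho_x, x)                  # Laplacian in 1-D
    div_flux = grad(rho * q_net(t, x), x)    # div(rho q) via autograd
    return ((rho_t - nu * rho_xx - div_flux) ** 2).mean()
```

The HJB residual in (8), and the policy loss introduced next, can be assembled analogously from the corresponding terms.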
Finally, we update \(q_{\tau}\) by computing the loss (9) at randomly sampled points \(\{(t_{b},x_{b})\}_{b=1}^{B}\) from \(E\): \[\text{Loss}_{total}^{(5)}=\text{Loss}^{(5)}+\text{Loss}_{cond}^{(5)}, \tag{9}\] where \[\text{Loss}^{(5)}=-\frac{1}{B}\sum_{b=1}^{B}\Big{[}q_{\tau}(t_{b},x_{b})\cdot\nabla\phi_{ \omega}(t_{b},x_{b})-L(\rho_{\theta}(t_{b},x_{b}),q_{\tau}(t_{b},x_{b}))\Big{]},\] (the minus sign makes minimizing \(\text{Loss}^{(5)}\) equivalent to the maximization in (5)), and \[\text{Loss}_{cond}^{(5)}=\frac{1}{B}\sum_{b=1}^{B}\Big{|}q_{\tau}(t_{b},x_{b}) -\nabla_{p}H(\rho_{\theta}(t_{b},x_{b}),\nabla\phi_{\omega}(t_{b},x_{b})) \Big{|}^{2}.\] To help readers better understand our methodology, we have summarized it as Algorithm 1, which serves as a visual representation of our steps and allows others to replicate our methodology and validate our results.

```
Algorithm 1: Deep Policy Iteration (DPI)
Require: L, \nu diffusion parameter, g terminal cost
Initialize neural networks N_{\tau_0}, N_{\omega_0} and N_{\theta_0}
for n = 0, 1, 2, ..., K-1 do
    (i)   Solve (3): sample batches {(t_b, x_b)} from E and {x_s} from \Omega,
          compute Loss^{(3)}_{total} and backpropagate it to obtain \theta_{n+1}
    (ii)  Solve (4): sample batches {(t_b, x_b)} from E and {x_s} from \Omega,
          compute Loss^{(4)}_{total} using \rho_{\theta_{n+1}} and backpropagate it to obtain \omega_{n+1}
    (iii) Solve (5): sample a batch {(t_b, x_b)} from E,
          compute Loss^{(5)}_{total} using \rho_{\theta_{n+1}} and \phi_{\omega_{n+1}} and backpropagate it to obtain \tau_{n+1}
end for
return \theta_K, \omega_K, \tau_K
```

The convergence of the neural network approximation was previously analyzed in [20; 1]. Furthermore, the convergence of policy iteration was examined in [2; 3] using the Banach fixed point method. It is important to note that experimenting with diverse network structures and training approaches can enhance the performance and robustness of the neural networks utilized in the model. Hence, selecting the best possible combination of architecture and hyperparameters for the neural networks is crucial for achieving the desired outcomes. ## 3 Related Works In this section, we present a literature review that pertains to our study. Specifically, we concentrate on two pertinent sources that provide insight into our research. It is important to note that previous literature has primarily focused on the separable case, already presented in [19; 12]. However, our study seeks to broaden this scope by exploring general MFG problems. To accomplish this objective, we conduct a thorough analysis of the two identified sources and draw inspiration from them. This allows us to gain a more comprehensive understanding of how decision-making criteria interact in intricate situations. Additionally, this review illuminates potential difficulties that may arise when examining the non-separable case and demonstrates how our approach can contribute to addressing these issues.
**Policy Iteration method:** In [3], the authors introduced the Policy Iteration method, which proved to be the first successful approach for solving systems of mean-field game partial differential equations with non-separable Hamiltonians. The method proposed two algorithms based on policy iteration, which iteratively update the population distribution, value function, and control. Since the control is fixed while the PDEs are solved, these algorithms only require the solution of two decoupled, linear PDEs at each iteration, which reduces the complexity of the equations; refer to [2], where the authors presented a revised version of the policy iteration algorithm in which the control is updated during each update of the population distribution and value function. However, due to the computational complexity of the method, it was limited to low-dimensional problems. Another major limitation of this method is that it may not work in the separable case. To address these issues, our study proposes a new approach using neural networks. We use the same policy iteration scheme, but instead of solving the two decoupled, linear PDEs with the finite difference method, we solve them with neural networks inspired by the DGM. This allows us to overcome the computational challenges of the original method and extend its applicability to the separable case. With this approach, we expect to significantly improve the computational complexity of the method, which will allow us to apply it to a wider range of high-dimensional problems. Overall, our study seeks to extend the applicability of the PI method to more complex and higher-dimensional problems by incorporating neural networks into its framework. We believe that this approach has the potential to make a significant contribution to the field of mean-field game theory and could have practical implications in fields such as finance, economics, and engineering. **MFDGM:** The methodology proposed in [1] involves the use of two neural networks to approximate the population distribution and the value function. The accuracy of these approximations is optimized using a loss function based on the residual of the first equation (HJB) to update the parameters of the neural networks. The process is then repeated using the second equation (FP) and the new parameters to further improve the accuracy of the approximations. In contrast, methods based on Generative Adversarial Networks (GANs) [19; 12] cannot solve MFGs with non-separable Hamiltonians. This methodology significantly improves the computational complexity of the approach by utilizing neural networks trained simultaneously with a single hidden layer and optimizing the approximations through a loss function.
This makes the method more efficient and scalable, enabling its application to a wider range of high-dimensional problems in general mean-field games, including separable or non-separable Hamiltonians and deterministic or stochastic models. Additionally, comparisons to previous methods demonstrate the efficiency of this approach, even with multilayer neural networks. Our research builds upon these prior investigations and offers substantial advancements in methodologies for enhancing the efficiency and precision of solving general mean-field games. This is achieved through the integration of PI, which reduces the complexity of the equations, and neural networks, resulting in improved computational performance and accuracy. ## 4 Numerical Experiments To assess the efficacy of the proposed Algorithm 1, we conducted experiments on two distinct problems. First, we utilized the example problem provided in [12; 1], which has an explicitly defined solution structure that facilitates straightforward numerical comparisons. Second, we evaluated the algorithm's performance on a well-studied one-dimensional traffic flow problem [5], which is known for its non-separable Hamiltonian. To ascertain the reliability of our approach, we compared the performance of three different algorithms, namely PI, DPI, and MFDGM, on the same problems. Through this evaluation, we aimed to determine the effectiveness of our proposed algorithm in comparison to existing state-of-the-art methods. Furthermore, we extended the application of our method to more intricate problems involving high-dimensional cases, further substantiating its reliability. ### Analytic Comparison To test the effectiveness of the DPI method, we compare its performance on a simple example with an analytic solution, characterized by a separable Hamiltonian. In order to simplify the comparison, we select the spatial domain \(\Omega=[-2,2]^{d}\) and set the final time to \(T=1\). This allows us to easily analyze and compare the results obtained from both methods. For \[\begin{array}{c}H_{0}(x,p)=\frac{||p||^{2}}{2}-\beta\frac{||x||^{2}}{2},\quad f _{0}(x,\rho)=\gamma\ln(\rho),\\ g(x)=\alpha\frac{||x||^{2}}{2}-(\nu d\alpha+\gamma\frac{d}{2}\ln\frac{\alpha}{2 \pi\nu}),\end{array} \tag{10}\] and \(\nu=\beta=1\), where \[\alpha=\frac{-\gamma+\sqrt{\gamma^{2}+4\nu^{2}\beta}}{2\nu}=\frac{-\gamma+ \sqrt{\gamma^{2}+4}}{2},\] the corresponding MFG system is: \[\left\{\begin{array}{c}-\partial_{t}\phi-\Delta\phi+\frac{||\nabla\phi||^{2 }}{2}-\frac{||x||^{2}}{2}=\gamma\ln(\rho),\\ \partial_{t}\rho-\Delta\rho-\mbox{div}\left(\rho\nabla\phi\right)=0,\\ \rho(0,x)=(\frac{1}{2\pi})^{\frac{d}{2}}e^{-\frac{||x||^{2}}{2}},\\ \phi(T,x)=\alpha\frac{||x||^{2}}{2}-(\alpha d+\gamma\frac{d}{2}\ln\frac{ \alpha}{2\pi}),\end{array}\right. \tag{11}\] and the explicit solution is given by \[\begin{array}{c}\phi(t,x)=\alpha\frac{||x||^{2}}{2}-(\alpha d+\gamma\frac{d }{2}\ln\frac{\alpha}{2\pi})t,\\ \rho(t,x)=(\frac{1}{2\pi})^{\frac{d}{2}}e^{-\frac{||x||^{2}}{2}}.\end{array} \tag{12}\] **Test 1:** Assuming a congestion-free scenario (\(\gamma=0\)) in the one-dimensional system of partial differential equations (11), we apply Algorithm 1 with a minibatch size of 50 samples at each iteration. Our approach employs neural networks with one hidden layer of 100 neurons each, with a Softplus activation function for \(N_{\omega}\) and a Tanh activation function for \(N_{\theta}\) and \(N_{\tau}\).
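A minimal pytorch sketch of the network just described (one hidden layer of 100 neurons with a weighted skip connection); the exact placement of the skip connection is our assumption:

```python
import torch
import torch.nn as nn

class DGMNet(nn.Module):
    """Single-hidden-layer network with a weighted skip connection,
    e.g. act=nn.Softplus() for N_omega, act=nn.Tanh() for N_theta, N_tau."""
    def __init__(self, dim_in, width=100, act=nn.Tanh(), skip=0.5):
        super().__init__()
        self.inp = nn.Linear(dim_in, width)
        self.hid = nn.Linear(width, width)
        self.out = nn.Linear(width, 1)
        self.act, self.skip = act, skip

    def forward(self, t, x):
        h = self.act(self.inp(torch.cat([t, x], dim=-1)))
        # ResNet-style update with skip connection weight 0.5.
        h = self.skip * h + (1 - self.skip) * self.act(self.hid(h))
        return self.out(h)
```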
To train the neural networks, we use ADAM with a learning rate of \(10^{-4}\) and a weight decay of \(10^{-3}\). We adopt ResNet as the architecture of the neural networks, with a skip connection weight of 0.5. We keep the same parameters for the second method, MFDGM. The numerical results are presented in Figure 1, where we compare the approximate solutions obtained by the two methods to the exact solutions at different times. To assess the performance of the methods, we compute the relative error between the model predictions and the exact solutions on a \(100\times 100\) grid within the domain \([0,1]\times[-2,2]\); see Figure 2. Furthermore, we monitor the convergence of our approach by plotting the two residual losses, as defined in Algorithm 1 and the MFDGM algorithm, in Figure 3.

Figure 2: Comparison of the relative error for \(\phi\) and \(\rho\) using the two methods with \(\gamma=0\).

Figure 3: Comparison of the losses for \(\phi\) and \(\rho\) using the two methods with \(\gamma=0\).

Optimal selection of the architecture and hyperparameters of the neural networks plays a crucial role in attaining the desired outcomes for both methods; see the experiments in [1]. **Test 2:** In order to investigate the impact of congestion on the previous case, we repeat the same experiment with the same parameters, but this time we consider a non-zero congestion parameter (\(\gamma=0.1\)). We present the numerical results in Figures 4, 5, and 6.

Figure 4: Comparison of the exact solution and the predictions of DPI and MFDGM in one dimension at t=(0.25, 0.5, 0.75) with \(\gamma=0.1\).

Figure 5: Comparison of the relative error for \(\phi\) and \(\rho\) using the two methods with \(\gamma=0.1\).

Figure 6: Comparison of the losses for \(\phi\) and \(\rho\) using the two methods with \(\gamma=0.1\).

In comparison to the MFDGM algorithm, DPI demonstrates greater effectiveness, particularly when applied to congestion scenarios. This superiority can be attributed to DPI's ability to reduce the complexity of the PDEs through the incorporation of a policy neural network. Furthermore, to mitigate boundary errors such as those observed with MFDGM, we enforce the training of DPI within the domain boundaries. This additional measure ensures improved accuracy in the training process. By leveraging the policy network, DPI offers a more suitable approach for handling congestion-related tests, surpassing the capabilities of the MFDGM algorithm. **Test 3:** In this test, we aim to solve (11) using PI. To enable the application of this method, we impose periodic boundary conditions on the system (11). In their research, the authors of [2; 3] utilized policy iteration methods to solve the PDE system through a finite-difference approximation. They adopted a uniform grid \(\mathcal{G}\) and centered second-order finite differences for the discrete Laplacian, while computing the Hamiltonian and the divergence term in the FP equation using the Engquist-Osher numerical flux for conservation laws. The symbol \(\sharp\) represents the linear differential operators at the grid nodes. Specifically, in one dimension, they implemented a uniform discretization of \(E\) with \(I\) nodes \(x_{i}=ih\), where \(h=1/I\) is the space step. To discretize time, they employed an implicit Euler method for both the time-forward FP equation and the time-backward HJB equation, with a uniform grid on the interval \([0,T]\) with \(N+1\) nodes \(t_{n}=n\,dt\), where \(dt=T/N\) is the time step. To prevent any confusion, we adhere to the previously established notation.
Therefore, we use \(\phi\), \(\rho\), and the policy \(q\) to represent the vectors that approximate the solution on \(\mathcal{G}\), and \(\phi_{n},\rho_{n}\), and \(q_{n}\) to denote the vectors on \(\mathcal{G}\) approximating the solution and the policy at time \(t_{n}\). The algorithm we use for the fully discretized system (11) is as follows: we start with an initial guess \(q_{n}^{(0)}:\mathcal{G}\rightarrow\mathbb{R}^{2d}\) for \(n=0,...,N,\) and initial and final data \(\rho_{0},\phi_{N}:\mathcal{G}\rightarrow\mathbb{R}\). We then iterate on \(k\geq 0\) until convergence: (i) Solve for \(n=0,\ldots,N-1\) on \(\mathcal{G}\) \[\left\{\begin{array}{l}\rho_{n+1}^{(k)}-dt\left(\Delta_{\sharp}\rho_{n+1}^{ (k)}+\mathrm{div}_{\sharp}\left(\rho_{n+1}^{(k)}q_{n+1}^{(k)}\right)\right)= \rho_{n}^{(k)}\\ \rho_{0}^{(k)}=\rho_{0}\end{array}\right.\] (ii) Solve for \(n=N-1,\ldots,0\) on \(\mathcal{G}\) \[\begin{cases}\phi_{n}^{(k)}-dt\left(\Delta_{\sharp}\phi_{n}^{(k)}-q_{n,\pm}^{(k) }\cdot D_{\sharp}\phi_{n}^{(k)}\right)\\ \qquad=\phi_{n+1}+dt\left(\frac{1}{2}\left|q_{n+1,\pm}^{(k)}\right|^{2}+x_{i}^ {2}+\gamma\ln\left(\rho_{n+1}^{(k)}\right)\right)\\ \phi_{N}^{(k)}=\phi_{N}\end{cases}\] (iii) Update the policy \(q_{n}^{(k+1)}=D_{\sharp}\phi_{n}^{(k)}\) on \(\mathcal{G}\) for \(n=0,\ldots,N\), and set \(k\gets k+1\). In the following test, we choose \(\gamma=0\), \(T=1\) for the final time, and \(K=50\). The grid consists of \(I=200\) nodes in space and \(N=200\) nodes in time. The initial policy is initialized as \(q_{n}^{(0)}\equiv(0,0)\) on \(\mathcal{G}\) for all \(n\).

Figure 7: Comparison of the exact solution and the PI prediction in one dimension at t=(0.25, 0.5, 0.75) with \(\gamma=0\).

The results in Figure 7 show that policy iteration is not effective in solving MFG problems in separable cases. However, the use of deep learning instead of finite difference methods has exhibited promise in extending the applicability of this approach to MFG in the separable case. In future examinations, we will conduct a thorough assessment to explore the potential of DPI in addressing more intricate MFG problems.
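For illustration, a sketch of one implicit Euler step of the discrete FP equation (step (i)) in one dimension with periodic boundary conditions; we keep the diffusion coefficient explicit and use centered differences instead of the Engquist-Osher flux, so this is a simplification of the scheme described above:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def fp_implicit_step(rho_n, q, dt, h, nu):
    """One implicit Euler step of the discrete FP equation:
    (Id - dt*(nu*Lap + Div(q * .))) rho_{n+1} = rho_n, 1-D, periodic BCs."""
    I = len(rho_n)
    # Discrete Laplacian with periodic wrap-around.
    lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(I, I), format="lil")
    lap[0, -1] = lap[-1, 0] = 1.0
    # Centered first-difference operator for div(rho * q).
    div = sp.diags([-1.0, 1.0], [-1, 1], shape=(I, I), format="lil")
    div[0, -1], div[-1, 0] = -1.0, 1.0
    A = sp.identity(I) - dt * (nu * lap / h**2
                               + (div / (2 * h)) @ sp.diags(q))
    return spsolve(A.tocsc(), rho_n)
```

Step (ii) leads to a similar sparse linear solve backward in time, and step (iii) is a pointwise update of the policy.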
**Test 4:** The effectiveness of our approach in high-dimensional cases is tested in this experiment and in the next section. We solve the MFG system (11) for dimensions 2, 50, and 100 and present the results in Figure 8. To train the neural networks, we use minibatches of 100, 500, and 1000 samples for \(d=2\), 50, and 100, respectively. The neural networks have a single hidden layer consisting of 100, 200, and 256 neurons, respectively. We utilize the Softplus activation function for \(N_{\omega}\) and the Tanh activation function for \(N_{\tau}\) and \(N_{\theta}\). We use ADAM with a learning rate of \(10^{-4}\) and a weight decay of \(10^{-4}\), and employ ResNet as the architecture with a skip connection weight of 0.5. The results were obtained after applying a Savgol filter to enhance the clarity of the curves.

Figure 8: The relative error for \(\phi\) and \(\rho\) with the DPI method for d=2, 50, and 100.

### High-Dimensional In this section, our goal is to further assess the effectiveness of our approach in high-dimensional contexts by examining two specific examples characterized by non-separable Hamiltonians, as previously introduced in [3]. **Example 1:** We consider the following problem in the stochastic case with \(\nu=0.3\). The problem is defined on the domain \(\Omega=[0,1]^{d}\), with a fixed final time of \(T=1\). In this context, the terminal cost is specified as \(g=0\), and the initial density is a Gaussian distribution. The corresponding MFG system is \[\left\{\begin{array}{c}-\phi_{t}-\nu\Delta\phi+\frac{1}{2(1+4\rho)}||\phi_{x }||^{2}=0,\\ \rho_{t}-\nu\Delta\rho-\mbox{div}(\rho\frac{\phi_{x}}{1+4\rho})=0,\\ \rho(0,x)=\left(\frac{1}{2\pi}\right)^{d/2}e^{-\frac{||x-0.25||^{2}}{2}},\\ \phi(x,T)=0.\end{array}\right. \tag{13}\] We address the MFG system (13) across three different dimensions: 2, 10, and 50. The results of this analysis are presented in Figure 9. For the training of the neural networks, we adopted minibatch sizes of 100, 500, and 1000 samples, corresponding to \(d=2\), 10, and 50. These neural networks consist of a single hidden layer with 100 neurons. We applied the Softplus activation function for \(N_{\omega}\) and the Tanh activation function for \(N_{\tau}\) and \(N_{\theta}\). Our optimization used ADAM with a learning rate of \(10^{-4}\) and a weight decay of \(10^{-4}\). We employed the ResNet architecture with a skip connection weight of 0.5. To enhance the smoothness of the curves, the results underwent a Savgol filter. **Example 2:** We now consider the following problem with a terminal cost that motivates the agents to direct their movements toward particular subregions of the domain. The corresponding MFG system is \[\left\{\begin{array}{c}-\phi_{t}-\nu\Delta\phi+\frac{1}{2\rho^{1/2}}||\phi_{x} ||^{2}=0,\\ \rho_{t}-\nu\Delta\rho-\mbox{div}(\rho\frac{\phi_{x}}{\rho^{1/2}})=0,\\ \rho(0,x)=\left(\frac{1}{2\pi}\right)^{d/2}e^{-\frac{||x-0.25||^{2}}{2}},\\ \phi(x,T)=\sum_{i=1}^{d}\cos(2\pi x_{i}).\end{array}\right. \tag{14}\] We perform the same process as previously described in Example 1 for the MFG system (14), and the results of this analysis are presented in Figure 10. ### Traffic Flow To assess the DPI method, we conduct experiments on a traffic flow problem for autonomous vehicles that was also used to test the effectiveness of MFDGM. We specifically chose this problem because it is characterized by a non-separable Hamiltonian, making it a more complex and challenging problem. In our experiments, we applied the PI and DPI methods to the problem and evaluated their performance in the stochastic case \(\nu=0.1\). We consider the traffic flow problem on the spatial domain \(\Omega=[0,1]\) with dimension \(d=1\) and final time \(T=1\). The terminal cost \(g\) is set to zero and the initial density \(\rho_{0}\) is given by \(\rho_{0}(x)=0.05-0.9\exp\left(\frac{-1}{2}\left(\frac{x-0.5}{0.1}\right)^{2}\right)\). The corresponding MFG system is \[\left\{\begin{array}{c}\phi_{t}+\nu\Delta\phi-\frac{1}{2}||\phi_{x}||^{2}+(1- \rho)\phi_{x}=0,\\ \rho_{t}-\nu\Delta\rho-\mbox{div}((\phi_{x}-(1-\rho))\rho)=0,\\ \rho(x,0)=0.05-0.9\exp(\frac{-1}{2}(\frac{x-0.5}{0.1})^{2}),\\ \phi(x,T)=0.\end{array}\right. \tag{15}\] **Test 1:** In this test, we aim to solve (15) with periodic boundary conditions. We utilize Algorithm 1 with a minibatch size of 50 samples at each iteration. Our approach involves neural networks with different hidden layers, each consisting of 100 neurons. Specifically, for \(N_{\theta}\), we employ three hidden layers with the Gelu activation function, while for \(N_{\omega}\) and \(N_{\tau}\), we use a single hidden layer with the Sin activation function. During training, we employ the ADAM optimizer with a learning rate of \(10^{-4}\) and a weight decay of \(10^{-3}\).
To construct the neural networks, we adopt the ResNet architecture with a skip connection weight of 0.5. To assess the performance of the methods, we analyze the numerical results presented in Figure 11. This figure provides visual representations of the density solution as well as the \(L^{\infty}\) distance between \(\rho^{(k)}\), \(\phi^{(k)}\), and \(q^{(k)}\) from DPI and the final solution \(\rho^{*}\), \(\phi^{*}\), and \(q^{*}\) from the fixed-point algorithm. The \(L^{\infty}\) distance values were obtained after applying a Savgol filter to enhance the clarity of the curves.

Figure 11: The density solution obtained with DPI, and the \(L^{\infty}\) distance between \(\rho^{(k)}\), \(\phi^{(k)}\), and \(q^{(k)}\) from DPI and the final solution \(\rho^{*}\), \(\phi^{*}\), and \(q^{*}\) from the fixed-point algorithm.

**Test 2:** The purpose of this test is to utilize the PI method to solve (15) with periodic boundary conditions. Similar to Test 3, we employ the PI algorithm to solve the fully discretized (15) as follows: we start with an initial guess \(q_{n}^{(0)}:\mathcal{G}\rightarrow\mathbb{R}^{2d}\) for \(n=0,...,N,\) and initial and final data \(\rho_{0},\phi_{N}:\mathcal{G}\rightarrow\mathbb{R}\). We then iterate on \(k\geq 0\) until convergence: (i) Solve for \(n=0,\ldots,N-1\) on \(\mathcal{G}\) \[\left\{\begin{array}{c}\rho_{n+1}^{(k)}-dt\left(\Delta_{\sharp}\rho_{n+1}^{ (k)}+\mbox{div}_{\sharp}\left(\rho_{n+1}^{(k)}q_{n+1}^{(k)}\right)\right)= \rho_{n}^{(k)}\\ \rho_{0}^{(k)}=\rho_{0}\end{array}\right.\] (ii) Solve for \(n=N-1,\ldots,0\) on \(\mathcal{G}\) \[\left\{\begin{aligned} \phi_{n}^{(k)}&-dt\left(\Delta_{ \sharp}\phi_{n}^{(k)}-q_{n,\pm}^{(k)}\cdot D_{\sharp}\phi_{n}^{(k)}\right)\\ &=\phi_{n+1}+\frac{dt}{2}\left(\left|q_{n+1,\pm}^{(k)}\right|^{2 }+\left|1-\rho_{n+1}^{(k)}\right|^{2}+\left(1-\rho_{n+1}^{(k)}\right)q_{n+1,\pm }^{(k)}\right)\\ \phi_{N}^{(k)}&=\phi_{N}\end{aligned}\right.\] (iii) Update the policy \(q_{n}^{(k+1)}=D_{\sharp}\phi_{n}^{(k)}\) on \(\mathcal{G}\) for \(n=0,\ldots,N\), and set \(k\gets k+1\). In this test, we choose \(T=1\) for the final time and \(K=50\). The grid consists of \(I=200\) nodes in space and \(N=200\) nodes in time. The initial policy is initialized as \(q_{n}^{(0)}\equiv(0,0)\) on \(\mathcal{G}\) for all \(n\). Figure 12 presents the numerical results, offering visual depictions of both the density solution and the \(L^{\infty}\) distance between \(\rho^{(k)}\), \(\phi^{(k)}\), and \(q^{(k)}\) from policy iteration and the final solution \(\rho^{*}\), \(\phi^{*}\), and \(q^{*}\) from the fixed-point algorithm. The findings of this study reveal that, on this problem, policy iteration outperforms deep policy iteration in terms of effectiveness and speed. This explains why traditional methods are commonly employed to address such problems. However, traditional methods have limitations, such as the curse of dimensionality. Hence, there is a growing interest in utilizing deep learning approaches as an alternative solution. ## 5 Conclusion We have presented a novel approach that combines the MFDGM method and the PI method to tackle high-dimensional stochastic MFG. Our approach exhibits a higher level of effectiveness, particularly in congestion scenarios, compared to the MFDGM algorithm.
Additionally, we have observed that while our DPI approach is effective for general MFG, the traditional PI method outperforms DPI in terms of effectiveness and speed when dealing with non-separable Hamiltonians in low dimensions. Traditional methods are thus commonly employed to address such problems in low-dimensional settings, but DPI extends their capabilities to high-dimensional scenarios.
2304.12177
Π-ML: A dimensional analysis-based machine learning parameterization of optical turbulence in the atmospheric surface layer
Turbulent fluctuations of the atmospheric refraction index, so-called optical turbulence, can significantly distort propagating laser beams. Therefore, modeling the strength of these fluctuations ($C_n^2$) is highly relevant for the successful development and deployment of future free-space optical communication links. In this letter, we propose a physics-informed machine learning (ML) methodology, $\Pi$-ML, based on dimensional analysis and gradient boosting to estimate $C_n^2$. Through a systematic feature importance analysis, we identify the normalized variance of potential temperature as the dominating feature for predicting $C_n^2$. For statistical robustness, we train an ensemble of models which yields high performance on the out-of-sample data of $R^2=0.958\pm0.001$.
Maximilian Pierzyna, Rudolf Saathof, Sukanta Basu
2023-04-24T15:38:22Z
http://arxiv.org/abs/2304.12177v2
# \(\Pi\)-ML: A dimensional analysis-based machine learning parameterization of optical turbulence in the atmospheric surface layer ###### Abstract Turbulent fluctuations of the atmospheric refraction index, so-called optical turbulence, can significantly distort propagating laser beams. Therefore, modeling the strength of these fluctuations (\(C_{n}^{2}\)) is highly relevant for the successful development and deployment of future free-space optical communication links. In this letter, we propose a physics-informed machine learning (ML) methodology, \(\Pi\)-ML, based on dimensional analysis and gradient boosting to estimate \(C_{n}^{2}\). Through a systematic feature importance analysis, we identify the normalized variance of potential temperature as the dominating feature for predicting turbulence strength. For statistical robustness, we train an ensemble of models which yields high performance on the out-of-sample data of \(R^{2}=0.958\pm 0.001\). Free-space optical communication (FSOC) between satellites and the ground or between ground-based terminals is among emerging applications in which an optical beam propagates through the atmosphere. FSOC can have a major societal impact, increasing data throughput, data security, and global internet coverage while potentially reducing the cost per bit per second [1]. However, some challenges need to be faced; besides precipitation, clouds, fog, and aerosol scattering, turbulent fluctuations of the atmospheric refractive index form a major source of disturbance for optical beams [2]. The strength of these fluctuations - called optical turbulence - is quantified by the refractive index structure parameter \(C_{n}^{2}\). Good knowledge about the behavior of \(C_{n}^{2}\) in diverse locations and meteorological conditions is required to design and deploy reliable future FSOC links. However, measuring \(C_{n}^{2}\) is difficult and typically requires elaborate post-processing of high-frequency observations [3]. As a result, a wide range of empirical \(C_{n}^{2}\) models and parameterizations have emerged, which aim to relate \(C_{n}^{2}\) to more easily obtainable variables [4]. While well-known _models_ such as the Hufnagel-Valley model yield vertical profiles \(C_{n}^{2}(h)\) with little dependency on local meteorological conditions [4], the goal of \(C_{n}^{2}\) _parameterizations_ is to estimate \(C_{n}^{2}\) directly from in-situ measurements. Conventional physics-based parameterizations typically make use of Monin-Obukhov similarity theory (MOST) [5] and associated empirically determined similarity relationships. One of the earliest parameterizations was proposed by [3] and utilizes turbulent fluxes to estimate \(C_{n}^{2}\). Several other competing formulations exist (refer to [6] for a comprehensive review). Recently, multiple studies [7; 8; 9; 10; 11] showed that machine learning (ML) models can be used to parameterize \(C_{n}^{2}\) based on routinely-available meteorological inputs. These ML approaches are useful from an operational standpoint and can be viewed as sophisticated regression models, but they barely include any physical knowledge. In this letter, we propose an alternative physics-inspired ML framework. We present \(\Pi\)-ML, a dimensional analysis-based ML framework, which strives to improve conventional MOST-based parameterizations with the power of ML. We utilize dimensional analysis (DA) constrained with domain knowledge to expand the set of traditional MOST variables and an ensemble of gradient-boosting ML regression models to learn similarity relationships from observations.
In DA, the relevant dimensional variables of a physical process are combined into non-dimensional groups that describe that process equally well [12]. DA is compelling to use in practice because the non-dimensional variables enable us to combine observational data from different field campaigns around the world. More importantly, when using ML, DA can change the extrapolation problem in dimensional variables to an interpolation problem in non-dimensional variables [13]. To investigate the strengths and weaknesses of the proposed methodology, we use measurements collected during a seeing study at the Mauna Loa Observatory (MLO) on the island of Hawai'i. The MLO study was conducted by the National Center for Atmospheric Research (NCAR) from 9 June 2006 until 8 August 2006 (\(\sim\)8 weeks). The publicly available dataset contains measurements of mean meteorological quantities, turbulent fluxes, and turbulent variances obtained from three sonic anemometers deployed at ca. 6m, 15m, and 25m height above ground. \(C_{n}^{2}\) values were estimated by NCAR using the approach of [14]. We compute two gradients from the mean horizontal wind components \(\overline{u}\) and \(\overline{v}\) and the mean potential temperature \(\overline{\theta}\): the mean wind shear \(S=\sqrt{\left(\partial\overline{u}/\partial z\right)^{2}+\left(\partial\overline{ v}/\partial z\right)^{2}}\) and the mean potential temperature gradient \(\Gamma=\partial\overline{\theta}/\partial z\). Atmospheric turbulence is generated through buoyancy and wind shear, which are captured by the sensible heat flux \(\overline{w^{\prime}\theta^{\prime}}\) and the friction velocity \(u_{*}=\left(\overline{u^{\prime}w^{\prime}}^{2}+\overline{v^{\prime}w^{\prime}}^{2}\right)^{1/4}\), respectively, where \(\overline{u^{\prime}w^{\prime}}\) and \(\overline{v^{\prime}w^{\prime}}\) are the momentum flux components. Additionally, we incorporate the variances of potential temperature and horizontal wind magnitude, \(\sigma_{\theta}^{2}\) and \(\sigma_{M}^{2}=\sigma_{u}^{2}+\sigma_{v}^{2}\). All relevant variables forming the input for our \(\Pi\)-ML methodology are summarized in the table of figure 1a with their respective fundamental dimensions. For completeness, Earth's gravitational acceleration \(g=9.81\)m s\({}^{-2}\) is included because it is required for the atmospheric force balance. Given the dry atmospheric conditions at the MLO site, moisture variables were ignored in the present study but can be included for more humid locations. To later assess the \(C_{n}^{2}\) estimation performance of the trained model, the first two weeks of July 2006 are set aside as test data. It is crucial to take the test data out of the middle so that the ML models can capture the seasonal change of meteorology from June to August. The two key components of our proposed \(\Pi\)-ML methodology are illustrated in figure 1: the DA constrained with domain knowledge (a) and the ensemble of gradient-boosting ML models, which perform regression on the stacked, non-dimensionalized observations (b). The starting point of the DA is the table of variables and dimensions in figure 1a. The Buckingham \(\Pi\) theorem [12], popular in DA, states that our \(k=10\) dimensional variables with their \(l=3\) fundamental dimensions (length, time, temperature) can be expressed as a set of \((k-l)=7\) independent non-dimensional \(\Pi\) groups. Multiple options to form the sets exist, so we employ the \(\Pi\) theorem implementation of [15], which generates 71 different sets with 7 \(\Pi\) groups each.
Using domain knowledge, we conceive the following three constraints to reduce the number of sets from 71 to 14: First, each set can only contain one dependent \(\Pi\) group that is a function of \(C_{n}^{2}\) (cf. pink highlights in figure 1a). All other \(\Pi\) groups should only be functions of the independent dimensional variables \(\mathbf{X}\). Second, \(C_{n}^{2}\) and its normalized variant \(\Pi_{y}\) vary over multiple orders of magnitude, so the ML models are trained on \(\log_{10}\Pi_{y}\). Since the logarithm is not defined for negative arguments, only \(\Pi\) sets where \(\Pi_{y}\) is strictly positive are valid. Third, the dimensional variables \(\Gamma\) and \(\overline{w^{\prime}\theta^{\prime}}\) can be positive and negative, so raising them to fractional or even-integer powers can result in complex values or a loss of sign. Therefore, valid \(\Pi\) sets cannot contain such expressions. Each of the 14 constrained \(\Pi\) sets is used to scale and non-dimensionalize the dimensional observations \(\mathbf{X}\) and \(y=C_{n}^{2}\) to yield \(\mathbf{\Pi}_{X}\) and \(\Pi_{y}\), respectively, as illustrated in figure 1b. In their non-dimensional form, the observations from all three levels can be stacked into a combined dataset. From that, ML learns the non-dimensional black-box similarity relationship \(f(\mathbf{\Pi}_{X})\approx\log_{10}\Pi_{y}\). For each \(\Pi\) set, we train one ensemble of \(n=25\) member models to make robust \(C_{n}^{2}\) predictions with uncertainty estimates. Each member sees a different 4-week subset of the 6-week training data. The subsets are generated by randomly removing two non-overlapping sets of seven consecutive days from the training data so that each subset covers slightly different meteorological conditions. This is called Monte-Carlo cross-validation. As depicted in figure 1b, each of the \(n\) resulting ensemble members produces a prediction that is robustly aggregated into an ensemble prediction using the median. The 5th and 95th percentiles of the prediction distribution correspond to 90% confidence bounds. Figure 1: Our \(\Pi\)-ML methodology consists of two components. (a) The dimensional analysis based on the Buckingham \(\Pi\) theorem combines observed dimensional variables into \(\Pi\) sets of normalized non-dimensional variables. (b) These sets are used to transform the observed data into a stacked non-dimensional dataset to train an ensemble of XGBoost regression models. Each member model is trained separately on its respective 4-week training data subset using the gradient boosting algorithm XGBoost (XGB) and the AutoML library FLAML [16]. FLAML performs time-constrained hyperparameter tuning of the XGB models using 5-fold cross-validation. For this study, for each member, FLAML was given a 10-minute time budget on 8 cores of a 3 GHz Intel Xeon E5-6248R CPU. Such a time-constrained optimization is crucial to keep the computational costs of training one 25-member ensemble reasonable (\(\sim\)34 core hours). The prediction accuracy and model complexity of each trained \(\Pi\)-ML ensemble are assessed to decide which \(\Pi\) set is best suited for our ML-based optical turbulence similarity theory.
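A minimal sketch of this ensemble construction, assuming a pandas DataFrame `train_df` with a per-sample `day` index, a list `pi_features` of non-dimensional input columns, and a `Pi_y` target column (all names are ours); the FLAML hyperparameter search is replaced by default XGBoost settings for brevity:

```python
import numpy as np
import xgboost as xgb

def mc_subsets(df, n_members=25, seed=0):
    """Monte-Carlo cross-validation: from the 6-week training frame, drop two
    randomly placed, non-overlapping blocks of 7 consecutive days each,
    yielding one 4-week subset per ensemble member."""
    rng = np.random.default_rng(seed)
    days = np.sort(df["day"].unique())
    for _ in range(n_members):
        while True:
            s1, s2 = rng.choice(len(days) - 6, size=2)
            if abs(int(s1) - int(s2)) >= 7:   # blocks must not overlap
                break
        drop = set(days[s1:s1 + 7]) | set(days[s2:s2 + 7])
        yield df[~df["day"].isin(drop)]

members = []
for subset in mc_subsets(train_df):
    model = xgb.XGBRegressor()                # paper: tuned per member by FLAML
    model.fit(subset[pi_features], np.log10(subset["Pi_y"]))
    members.append(model)

preds = np.stack([m.predict(test_X) for m in members])  # shape (25, n_test)
median = np.median(preds, axis=0)                       # ensemble prediction
lo, hi = np.percentile(preds, [5, 95], axis=0)          # 90% confidence band
```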
The root-mean-squared error (RMSE) \(\epsilon=\sqrt{\left\langle\left(y-\hat{y}\right)^{2}\right\rangle}\) in the _dimensional_ log-space is utilized to quantify accuracy as the deviation between the observed \(\text{log}_{10}\,C_{n}^{2}=y\) from the test set (July 1 - 14) and the corresponding _dimensional_ \(\Pi\)-ML prediction \(\hat{y}=\text{log}_{10}\,\hat{C}_{n}^{2}\). We also evaluate the complexity of the \(\Pi\) sets and their trained ML ensembles. That is essential because ML models should only be as complex as necessary to increase their ability to perform well on new, unseen data [17]. One \(\Pi\) set is considered simpler than another set if its \(\Pi\) groups are constructed from fewer dimensional variables. Similarly, one trained ensemble is considered simpler than another one if fewer \(\Pi\) groups are important for the ML prediction, i.e. the modeled \(C_{n}^{2}\) is sensitive to fewer input features. The feature importance of the trained \(\Pi\)-ML models is obtained with the permutation feature importance technique (PFI) [18; 19]. For each feature \(\Pi_{i}\), PFI yields a ratio \((\epsilon_{i}^{\prime}-\epsilon)/\epsilon\) which describes how the RMSE \(\epsilon_{i}^{\prime}\) of a trained model increases when the model gets decorrelated data for \(\Pi_{i}\) compared to the baseline RMSE \(\epsilon\) where the correlation is intact. That means a highly important feature results in a large error magnification. The interested reader is referred to [19] for a more detailed description of PFI. The performance and complexity of the 14 \(\Pi\)-ML ensembles are shown in figure 2. The boxplots in panel (a) display the \(\epsilon\) distributions for each ensemble as illustrated in the sketch above the plots. While all ensembles show median RMSEs of the same magnitude, \(\Pi\) sets 9 to 14 outperform the others. Panel (b) visualizes complexity through the number of dimensional variables constituting each \(\Pi\) group (left) together with the sum per set (right). This plot reveals that sets 9 to 12 of the well-performing ensembles are the only ones consisting of \(\Pi\) groups formed from no more than three dimensional variables. The winning ensemble out of these four low-error low-complexity candidates is selected based on the PFI score distributions displayed in figure 3. Remember that the DA yields different functional expressions for \(\Pi_{i}\) for each set, which is why each set shows different PFI distributions. The boxplots reveal that \(\Pi\) sets 9 and 11 result in more complex \(\Pi\)-ML ensembles compared to 10 and 12 because they significantly rely on two \(\Pi\) groups (\(\Pi_{2}\) and \(\Pi_{4}\), see inset) for \(C_{n}^{2}\) estimation instead of one (\(\Pi_{2}\)). Consequently, only sets 10 and 12 remain candidates for our ML-based similarity theory of optical turbulence. Because of the lower \(\epsilon\) spread in figure 2a, we select \(\Pi\)-ML ensemble 10 with \(\Pi_{1}=\sigma_{M}^{2}/u_{*}^{2}\), \(\Pi_{2}=\overline{\theta}/\sqrt{\sigma_{\theta}^{2}}\), \(\Pi_{3}=(S\,z)/u_{*}\), \(\Pi_{4}=\overline{w^{\prime}\theta^{\prime}}/(u_{*}\sigma_{\theta})\), \(\Pi_{5}=(g\,z)/u_{*}^{2}\), \(\Pi_{6}=(\Gamma\,z)/\sigma_{\theta}\), and \(\Pi_{y}=(C_{n}^{2})^{3/2}\,z\). The expressions for the other 13 \(\Pi\) sets can be found in appendix A.
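For reference, the winning \(\Pi\) set 10 can be evaluated directly from the dimensional observations; a sketch using the expressions quoted above (argument names are ours, mirroring the variables of figure 1a):

```python
import numpy as np

def pi_set_10(z, S, Gamma, u_star, wT, var_theta, var_M, g, theta_mean, Cn2):
    """Non-dimensionalize one observation with Pi set 10; returns the six
    input groups and the log-scaled target the ensemble is trained on."""
    sigma_theta = np.sqrt(var_theta)
    features = {
        "Pi_1": var_M / u_star**2,
        "Pi_2": theta_mean / sigma_theta,
        "Pi_3": S * z / u_star,
        "Pi_4": wT / (u_star * sigma_theta),
        "Pi_5": g * z / u_star**2,
        "Pi_6": Gamma * z / sigma_theta,
    }
    target = np.log10(Cn2**1.5 * z)   # log10 of Pi_y = (Cn^2)^(3/2) z
    return features, target
```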
The observation that \(\Pi_{2}\) - the inverse of the normalized potential temperature variance - is the only dominating feature of our parameterization has important practical implications: First, temperature variances can be measured with thermocouples [20], which are cheaper than sonic anemometers. Second, the low relevance of the gradients (\(\Pi_{3}\) and \(\Pi_{6}\)) indicates that even single-level measurements might be sufficient to estimate \(C_{n}^{2}\) accurately. Therefore, our approach might lead to simpler \(C_{n}^{2}\) measurement setups. Figure 3: Importance of non-dimensional \(\Pi\) groups for \(\Pi\)-ML ensembles 9, 10, 11, and 12 (left to right) based on the permutation feature importance strategy. The best performing ensemble 10 is marked in green (hatched). Figure 2: Comparison of (a) ensemble performance and (b) \(\Pi\) set complexity for our 14 different \(\Pi\) sets, where winning set 10 (green/hatched) balances performance and complexity well. The performance of the final \(\Pi\)-ML ensemble is illuminated in more detail in figures 4 and 5. The observed (red) and the predicted median evolutions of \(C_{n}^{2}\) (black) for the test data are shown in figure 4. The evolutions are plotted for the three original sonic heights individually for visualization. The agreement between prediction and observation is high for all levels, although the level-specific \(\epsilon\) slightly increases with height. It is assumed that processes higher in the atmosphere force the turbulence underneath, which would be missed by the sonics and, thus, our ensemble. Notable errors on all levels mostly occur during neutral atmospheric conditions - shortly after sunrise and sunset - where our ensemble overestimates \(C_{n}^{2}\) as the observed values drop to \(10^{-15}\) and lower. This behavior is also visible in the 2D correlation histogram of figure 5a and the quantile-quantile (QQ) plot in 5b. Panel (a) directly compares observed \(C_{n}^{2}\) samples to their ML-estimated counterpart, while panel (b) plots the cumulative density functions of observed and estimated \(C_{n}^{2}\) against each other. The overestimation of neutral conditions is visible in both panels as the deviation of the histogram/curve from the ideal 1:1 line (dashed) for \(C_{n}^{2}<10^{-15}\). Simultaneously, the grey 90% confidence band in (b) grows, which indicates increasing disagreement between the predictions of the ensemble members. However, less than 8% of \(C_{n}^{2}\) measurements are smaller than \(10^{-15}\), so the regularization of the ML training results in models that favor the center of the \(C_{n}^{2}\) distribution, not its tails. Also, the lower signal-to-noise ratio of the sonic anemometers in weak turbulence conditions increases the measurement uncertainty. Since very low turbulence conditions are also not critical for optical applications such as optical links or astronomy, we argue that little emphasis should be put on these deviations. The regularization mentioned above also explains the minor underestimation visible in panel (b) for observations with \(C_{n}^{2}>10^{-12.5}\), which make up less than 3.5% of the data. Leaving the tails of the distributions aside, both panels of figure 5 show excellent performance of our ensemble for most data. Most points in (a) the histogram and (b) the QQ plot are close to the ideal 1:1 line, as quantified by the coefficient of determination of \(R^{2}=0.958\) computed on all test data, including the deviating tails.
The spread of the correlation distribution around the 1:1 line is symmetric for \(C_{n}^{2}>10^{-15}\). That means the ensemble predictions are well-balanced and not biased towards over- or underestimation for most of the \(C_{n}^{2}\) range. A brief comparison of \(\Pi\)-ML with two conventional MOST-based \(C_{n}^{2}\) parameterizations (W71 [3] and TG92 [21]) in figure 5b illustrates the potential of improvement by utilizing ML. While W71 and TG92 have the operational advantage of being formulated as analytical equations, they lack the flexibility to capture non-linear behavior where ML excels. This results in the larger over- and underestimations shown in the QQ plots. In summary, we demonstrated how dimensional analysis constrained with domain knowledge yields non-dimensional scaling expressions, which enable us to train accurate XGBoost regression models. While the methodology was applied in this letter to optical turbulence, it can be applied to model physical processes of all kinds. From an optics perspective, our approach has two advantages over \(C_{n}^{2}\) parameterizations from literature: First, the final ensemble produced highly accurate predictions during day and night, while previous models are often limited to one or the other [4]. Second, the non-dimensional formulation allows making predictions with a pre-trained ensemble for new sonics set up at different heights or locations as long as the climatology is similar. That is not possible with previous ML-based models. Figure 4: Median predictions of \(\log_{10}C_{n}^{2}\) based on test data (black) using the selected \(\Pi\) set 10 ensemble. The observed values (red) are shown for reference. Figure 5: Correlation histogram and quantile-quantile plot for \(\Pi\) set 10 ensemble showing (a) high correlation (\(R^{2}=0.958\pm 0.001\)) and (b) well-captured \(C_{n}^{2}\) distributions compared to traditional models from literature (blue, orange). Our final \(\Pi\)-ML ensemble was shown to perform well, regardless of the complex meteorology of Hawai'i and the limited measurement duration of only two months. While the complexity and data sparsity of the MLO campaign limits the applicability of the trained ensemble to other sites, the good performance leads us to assume that our \(\Pi\)-ML methodology works equally well or better in more favorable setups. Additionally, we observed a strong dependency of \(C_{n}^{2}\) on \(\sigma_{\theta}^{2}\) (\(\Pi_{2}\)), suggesting that cheaper single-level variance measurements might be sufficient for accurate \(C_{n}^{2}\) estimation. In conclusion, we presented a powerful, statistically robust physics-informed machine learning methodology (\(\Pi\)-ML) to estimate \(C_{n}^{2}\) from turbulence measurements. ###### Acknowledgements. MP is financed by the FREE project (P19-13) of the TTW-Perspectief research program partially financed by the Dutch Research Council (NWO). ## Appendix A Full list of non-dimensional \(\Pi\) set expressions Table 1 provides a full list of the non-dimensional expressions of the 14 \(\Pi\) sets as shown in figure 2. The \(\Pi_{i}\) groups are sorted by complexity just like in figure 2b, and the normalized \(C_{n}^{2}\) target is denoted \(\Pi_{y}\).
2307.04245
A Novel Pipeline for Improving Optical Character Recognition through Post-processing Using Natural Language Processing
Optical Character Recognition (OCR) technology finds applications in digitizing books and unstructured documents, along with applications in other domains such as mobility statistics, law enforcement, traffic, security systems, etc. The state-of-the-art methods work well with the OCR with printed text on license plates, shop names, etc. However, applications such as printed textbooks and handwritten texts have limited accuracy with existing techniques. The reason may be attributed to similar-looking characters and variations in handwritten characters. Since these issues are challenging to address with OCR technologies exclusively, we propose a post-processing approach using Natural Language Processing (NLP) tools. This work presents an end-to-end pipeline that first performs OCR on the handwritten or printed text and then improves its accuracy using NLP.
Aishik Rakshit, Samyak Mehta, Anirban Dasgupta
2023-07-09T18:51:17Z
http://arxiv.org/abs/2307.04245v1
A Novel Pipeline for Improving Optical Character Recognition through Post-processing Using Natural Language Processing ###### Abstract Optical Character Recognition (OCR) technology finds applications in digitizing books and unstructured documents, along with applications in other domains such as mobility statistics, law enforcement, traffic, security systems, etc. The state-of-the-art methods work well with the OCR with printed text on license plates, shop names, etc. However, applications such as printed textbooks and handwritten texts have limited accuracy with existing techniques. The reason may be attributed to similar-looking characters and variations in handwritten characters. Since these issues are challenging to address with OCR technologies exclusively, we propose a post-processing approach using Natural Language Processing (NLP) tools. This work presents an end-to-end pipeline that first performs OCR on the handwritten or printed text and then improves its accuracy using NLP. OCR, NLP, Handwritten Text, Transformer, Paddle-Paddle ## I Introduction Optical Character Recognition (OCR) is a technology for extracting texts from images containing text information [1]. Such images occur from photos containing text information, scanned documents, scene photos, subtitle text superimposed on an image, etc. OCR is useful as images consume more memory space than text files. Moreover, text information is easier to copy and edit and helpful in many artificial intelligence (AI) tools, particularly for Natural Language Processing (NLP) problems. Some general applications include self-service utility meter reading, intelligent traffic surveillance and parking system, license plate recognition, contactless check-in at private and public transportation stations, intelligent security systems, digitizing old books, etc. [2]. As such, OCR helps to reduce crime, increase police efficiency, and improve safety [2]. The OCR methods recognize characters in the image independently by image segmentation considering only the shape and structure of the characters. Significant research on OCR has been reported on recognizing texts from scanned documents, and number plates, with sufficient performance. Even OCR on handwritten texts in different languages has received much attention, however, with limited accuracy. Hence, there is scope for improvement in the efficiency of OCR of handwritten text. Even the OCR of printed text is yet to be perfect. The prime challenges for inaccurate or missing text are as follows: * variations in font style and size, * case sensitivity, * similar character shapes, such as 'o' and '0', * varying orientations. These OCR mistakes negatively impact several NLP applications, including text summarizing, part-of-speech (POS) tagging, sentence boundary detection, topic modeling, named entity recognition (NER), and text classification. The ability of NER tools to detect and identify proper nouns and classify them into the person, place, and organization categories significantly deteriorates when the error rate (ER) of OCR output rises. Post-processing OCR outputs can significantly help correct these mistakes and increase the accuracy of the outputs. Hence, the objective is to develop an end-to-end pipeline that first performs OCR on the single-line handwritten or printed text and then improves its accuracy by post-processing the OCR output using NLP. 
### _Prior Art_ Current OCR approaches use Convolutional Neural Network (CNN)-based encoders for picture interpretation and Recurrent Neural Network (RNN)-based decoders for text generation. The two most popular OCR models are the Transformer-based OCR (Tr-OCR) model [3] and the Paddle OCR (PP-OCR) model [4]. The Tr-OCR model uses the Transformer architecture for wordpiece-level text generation and image understanding. TrOCR has a pre-trained image Transformer as an encoder with the decoder as a pre-trained text Transformer. This model has been trained on the IAM handwritten dataset. The PP-OCR model consists of text detection, text recognition, and detected-box rectification using a convolutional recurrent neural network (CRNN) as a text recognizer at the back end. The CRNN has convolutional layers for feature extraction followed by recurrence for sequence modeling. These architectures produce efficient results if trained on a specific type of data. However, generalizing is difficult on unconstrained datasets due to the large variability. In the domain of OCR output correction, the prior algorithms mainly operate on the standard pipeline of deletes, followed by transposes, followed by replaces, and finally, inserts. This method, used in implementing TextBlob's spelling correction, has taken Peter Norvig's "How to Write a Spelling Corrector" [5] as ground truth for training. This approach is improved using Symspellpy [6]. The symmetric delete spelling correction algorithm lowers the complexity of edit candidate generation and dictionary lookup for a specific Damerau-Levenshtein distance. It is language-independent and is about six times faster than the traditional approach (a brief usage sketch is given below). ## II Materials and Methods This work first evaluates two OCR models, _viz._, Tr-OCR and PP-OCR, on various handwritten and printed datasets. It then chooses the better-fitting model for recognizing single-line handwritten text. A line segmentation module for segmenting a multi-line document into single lines and a classifier that classifies each of these single lines into printed or handwritten text are also implemented. The output of the OCR model is then fed to our post-processing model, which improves the accuracy of the OCR output. The OCR output post-processing task aims to identify the sequence of words \(X=x_{1}x_{2}...x_{m}\) present in the original hardcopy document given a sequence of \(n\) OCR-degraded tokens \(Y=y_{1}y_{2}...y_{n}\). It should be noted that \(n\) and \(m\) are not always equal because segmentation errors could result in OCR sub-sequences that are not correct word sequences. We divide our work into two modules. The first consists of the segmentation unit, the classification unit, and the OCR model unit. The OCR models are evaluated on various real-life datasets. We then select the better-fit model as input to the second module, i.e., NLP-based post-processing. This module takes in the outputs of the OCR model and post-processes them using NLP techniques to minimize error. ### _Module-A: OCR Engine_ Module A consists of the first half of the pipeline, which is to first perform line segmentation on a multi-line document, then classify each line into printed or handwritten text using a classifier, and then perform OCR on it using a suitable OCR model. Evaluation has been performed on two existing popular OCR models on various datasets with different fonts, handwritten text, and images with occlusion, background color, and noise.
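As an illustration of the symmetric delete approach from the prior art (referenced above), a minimal symspellpy sketch; it is not part of the proposed pipeline, and we assume the English frequency dictionary bundled with the package:

```python
import pkg_resources
from symspellpy import SymSpell, Verbosity

sym = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)
dict_path = pkg_resources.resource_filename(
    "symspellpy", "frequency_dictionary_en_82_765.txt")
sym.load_dictionary(dict_path, term_index=0, count_index=1)

# Single-token correction within Damerau-Levenshtein distance 2.
for s in sym.lookup("recieve", Verbosity.CLOSEST, max_edit_distance=2):
    print(s.term, s.distance, s.count)

# Whole-sentence correction, closer to the OCR post-processing use case.
print(sym.lookup_compound("0ptical charactr rec0gnition",
                          max_edit_distance=2)[0].term)
```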
#### II-A1 Segmentation The aim here is to segment lines in documents using the A* path planning algorithm [7] (a code sketch of steps 1-4 is given after the dataset list below). The method proceeds as follows: 1. We first input a non-skewed document of either handwritten or printed text into this model and convert the input image to a 2D grayscale image. 2. We use a Sobel filter to detect the text edges in the image. The image is convolved with two 3\(\times\)3 kernels (horizontal and vertical) to calculate the image derivatives. 3. We then find the horizontal projection profile (HPP) of the edge-detected image. The HPP is the array of the sums of elements in each row, so peaks appear at the rows that contain text, whereas blank areas do not peak in the HPP graph. 4. We then detect peaks, for which we take the threshold as one-fourth of the difference between the maximum and minimum HPP values. This helps in separating the potential line segment regions from the text. 5. We then make a cut in places where upper-line text connects with the lower-line text. 6. We then use the A* path planning along the segmentation region and record the paths. This helps in segmenting the document into single lines. #### II-A2 Classification Convolutional neural networks (CNNs) are used to classify text lines as either printed or handwritten; however, it is actually the collection and preparation of the data that presents the biggest challenges. Presenting enough samples to an artificial neural network (ANN) is sufficient to achieve a decent level of accuracy for a wide range of tasks. In fact, current ANNs are already capable of handling extremely complicated data (such as ImageNet, which includes 90 different dog breeds to discriminate). The system we created for this work is a DenseNet-121 that has been modified for the binary classification of handwritten and printed text. It is wrapped in some utility classes. DenseNet-121 is a convolutional neural network with 121 layers, the majority of which are densely connected in 4 blocks. Compared to designs with more parameters, it has a comparatively low number of parameters for a network of its size and so requires less training data. More information on the classifier used can be found in [8]. #### II-A3 Datasets The specific datasets that we have used for this purpose are: * Born-Digital Images Dataset [9]: This dataset contains images made digitally employing a desktop scanner, a camera, and screen capture software. It has 3564 images of words clipped from the actual images, and a text file containing the ground truth transcription of all images is provided. * Incidental Scene Text Dataset [10]: This dataset consists of 4468 cut-out word images corresponding to the axis-oriented bounding boxes of the words provided and a single text file with the ground truth. * License Plate Dataset [11]: This dataset has 209 cropped license plates using the original bounding boxes and has all the single characters labeled, creating a total of 2026 character bounding boxes. Every image comes with a .xml annotation file. * Single Line Handwritten Text Dataset [12]: This dataset contains images of handwritten single-line English texts whose labels are similar to the IAM dataset. There are around 400 images along with their labels. * Bing Images of Short Quotes: This dataset contains about 215 images of short quotes with different background styles. This dataset is unlabelled as its primary purpose is to see the improvements in the outputs after post-processing using NLP.
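The code sketch referenced in Sec. II-A1, covering steps 1-4 (grayscale conversion, Sobel edge detection, the horizontal projection profile, and peak thresholding); the cut placement and A* path planning of steps 5-6 are omitted:

```python
import cv2
import numpy as np

def text_row_mask(image_path, peak_frac=0.25):
    """Return a boolean mask over image rows, True where text is detected."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)   # step 1
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)       # step 2: 3x3 kernels
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = np.sqrt(gx**2 + gy**2)
    hpp = edges.sum(axis=1)                               # step 3: HPP per row
    # Step 4: threshold at one-fourth of the HPP range.
    thr = hpp.min() + peak_frac * (hpp.max() - hpp.min())
    return hpp > thr
```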
#### II-A4 Performance Metrics The performance is evaluated using the character error rate (CER) and word error rate (WER) metrics. The CER is the fraction of characters, including spaces, that are incorrectly recognized with respect to the ground truth text. The WER is the corresponding fraction of incorrectly output words. ### _Module-B: NLP Engine_ The models we consider are as follows: #### II-B1 ByT5 The Google AI team debuted T5 [13], also known as the Text-To-Text Transfer Transformer, in 2020. The encoder-decoder structure of the T5 transformer model is identical to that of conventional transformer models. It contains 12 pairs of encoder-decoder blocks. Self-attention, a feed-forward network, and optional encoder-decoder attention are all present in each block. ByT5 [14] proposes a model that can directly process raw text, i.e., it is token-free. The benefits are as follows: * They can process text in any language. Tokenizers tailored to specific languages are not necessary. * They reduce the trouble of having complicated text preparation pipelines and are noise-resistant. * Since only 256 embeddings are needed for a byte-level model, a large vocabulary matrix is no longer required. #### II-B2 BART The Bidirectional and Auto-Regressive Transformer (BART) [15] is a pretraining denoising autoencoder for sequence-to-sequence models. To train the BART model, the text is first corrupted using a random noise function, and then a model is learned to recreate the original text. It employs a typical Transformer-based neural machine translation architecture that, despite its simplicity, generalizes several more modern pretraining approaches, including GPT with its left-to-right decoder and BERT (owing to the bidirectional encoder). The dataset used to train the models in a supervised manner was generated synthetically from the OSCAR Corpus. #### II-B3 Alpaca-LoRA The Alpaca model was obtained by fine-tuning Meta's LLaMA 7B model through supervised learning on a set of 52K instruction-following demonstrations generated from OpenAI's text-davinci-003. The process of generating the dataset resulted in 52K distinct instructions and corresponding outputs, and was accomplished at a cost of less than $500 by utilizing the OpenAI API. Hugging Face's training framework was used to fine-tune the LLaMA models, with techniques such as Fully Sharded Data Parallel and mixed-precision training being employed. The fine-tuning of a 7B LLaMA model was accomplished in 3 hours using eight 80 GB A100s. We used the Alpaca model in a zero-shot manner, and it was run in 8-bit precision using the bitsandbytes library. We tried multiple prompts with the Alpaca-LoRA 7B model, and the one that worked best for us was "Fix all the errors in the sentence: text". #### II-B4 Synthetic Dataset Generation OCR-degraded text is generated for training our ByT5 Transformer model using the **nlpaug** [16] library. Its character-level OCR augmenter is used to generate character-level errors in the text of the OSCAR [17] Corpus (a usage sketch is given after Sec. II-B5 below). Fig. 1: Synthetic Dataset Example #### II-B5 Preprocessing Inputs To prevent any discrepancies between the lengths of the original text and the text generated by the model with respect to the ground truth, we chunk the texts into lengths of 128 words; as subword tokenization is being used, we set the maximum length to 256 and replace all the padding tokens with -100 to prevent loss calculation for them.
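The usage sketch referenced in Sec. II-B4: corrupting clean text with nlpaug's character-level OCR augmenter to build (noisy, clean) training pairs; the printed output is illustrative only:

```python
import nlpaug.augmenter.char as nac

# OCR-style character noise such as 'o' -> '0' or 'l' -> '1'.
aug = nac.OcrAug()
clean = "The quick brown fox jumps over the lazy dog."
noisy = aug.augment(clean)   # recent nlpaug versions return a list
print(noisy)
```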
#### II-B6 Post-Processing Model Outputs Correct spacing is inserted into the output from the model using the word distribution of a text corpus. Given a text corpus, we assume that all words are distributed independently. All that is then required is the relative frequency of each term. It is reasonable to assume that they adhere to Zipf's law [18], which states that the probability of a word having rank \(n\) in a list of \(N\) words is approximately \(\frac{1}{n\log N}\). After the model is fixed, we can utilize dynamic programming to determine the spaces' locations. The sentence that maximizes the product of the probabilities of each individual word is the most likely one, and dynamic programming makes it simple to calculate. Rather than utilizing the probability itself, we use a cost defined as the logarithm of the probability's inverse to prevent overflows. This has been done using the wordninja [19] library. ## III Results ### _OCR model evaluation_ We first discuss the results of the two OCR systems (PP-OCR and Tr-OCR) on the various datasets discussed above, without any post-processing. We then proceed to show the results of the segmentation and classification sub-modules. #### III-A1 Dataset 1: Born-Digital Images Dataset The outputs of some sample images in Fig. 2 are shown in Table II. The ultra-lightweight PP-OCR model, pre-trained in English and Chinese, resulted in a CER of 0.44, while the Tr-OCR model, fine-tuned on the SROIE printed-text dataset, resulted in a CER of 0.3. Hence, Tr-OCR performed better than PP-OCR on this dataset. Fig. 2: Sample Born-Digital Images #### III-A2 Dataset 2: Incidental Scene Text Dataset The outputs of some sample images in Fig. 3 are shown in Table III. The ultra-lightweight PP-OCR model, pre-trained in English and Chinese, resulted in a CER of 0.65, while the Tr-OCR model, fine-tuned on the SROIE dataset (printed text), resulted in a CER of 0.41. Hence, Tr-OCR performed better than PP-OCR on this dataset. Fig. 3: Sample Incidental Scene Text Images #### III-A3 Dataset 3: License Plate Dataset The outputs of some sample images of this dataset (described in Sec. II-A3; see Fig. 4) are shown in Table IV. The ultra-lightweight PP-OCR model, pre-trained in English and Chinese, resulted in a CER of 0.18, while the Tr-OCR model, fine-tuned on the SROIE printed-text dataset, resulted in a CER of 0.24. Hence, PP-OCR performed better than Tr-OCR on this dataset. Fig. 4: Sample License Plate Images #### III-A4 Dataset 4: Single Line Handwritten Text Dataset For this dataset of around 400 labeled handwritten single-line images (described in Sec. II-A3; see Fig. 5 and Fig. 6), the ultra-lightweight PP-OCR model, pre-trained in English and Chinese, resulted in a CER of 0.53 and a WER of 0.8, while the Tr-OCR model, pre-trained on the IAM dataset of handwritten text, resulted in a CER of 0.09 and a WER of 0.24. Hence, Tr-OCR performed better than PP-OCR on this dataset. The outputs of some sample images in Fig. 5 and Fig. 6 are shown in Table V and Table VI, respectively. Fig. 5: Sample Single Line Handwritten text Image 1 Fig. 6: Sample Single Line Handwritten text Image 2
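The CER and WER values reported above follow from edit distances normalized by the reference length; a minimal reference implementation (ours, not the authors' evaluation code):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution
    return dp[-1]

def cer(ref, hyp):
    """Character error rate, spaces included."""
    return edit_distance(ref, hyp) / max(len(ref), 1)

def wer(ref, hyp):
    """Word error rate over whitespace-split tokens."""
    return edit_distance(ref.split(), hyp.split()) / max(len(ref.split()), 1)
```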
### _Classification_ The model to classify the text into handwritten and printed text was tested on two datasets, i.e., the Bing Images of Short Quotes (discussed earlier) and a self-made handwritten dataset of around 30 images. In the handwritten document dataset, it classified 30 out of 32 images correctly as handwritten text and 2 incorrectly as printed text. In the printed quotes dataset, it classified 191 out of 198 images correctly as printed text and 7 incorrectly as handwritten text. Overall, the classification model has an accuracy of about \(96\%\). ### _Module-A pipeline Results_ A multi-line document is first fed to the segmentation module, which breaks the document down into single-line texts; each line is then fed to the classification model, which classifies it as handwritten or printed text. If it is handwritten text, the TrOCR model trained on handwritten text is used to perform OCR on it, and if it is classified as printed text, then the TrOCR model trained on printed text is used to perform OCR on it. The OCR output for each line is then clubbed, and the output corresponding to the input document is obtained. Figure 7 is an example of a handwritten document. After segmenting it into individual lines, we get Figure 8. The classification model classifies each line correctly as handwritten text, as shown in Table VII. We then perform OCR using the TrOCR model pre-trained on handwritten text. Results obtained are shown in Table VIII. The CER for this example was 0.079 and the WER was 0.2. ## IV Conclusion The evaluation of the two OCR models, _viz_. PP-OCR and TrOCR, over different datasets showed that TrOCR outperforms PP-OCR on all the datasets except the license plate dataset. Fine-tuning TrOCR on the license plate dataset is required to provide improved results, which can be considered as future work. Tr-OCR can be used for OCR of printed and handwritten texts as it gives better results in both cases. The line segmentation module works well for non-skewed documents. For skewed documents, another segmentation algorithm has to be developed, which can be considered as further future work. Similarly, our OCR output post-processing pipeline effectively reduces the errors in the OCR-degraded text. This observation can be seen in our results, where for the first synthetically generated dataset, the WER of the OCR output came down from 0.455 to 0.045, and the CER came down from 0.124 to 0.005. Similarly, on the Kaggle Single Line Dataset, the CER decreased from 0.169 to 0.023 and the WER from 0.363 to 0.135. ## Acknowledgement The authors would like to acknowledge the funds received from the IITG Startup grant (xEEESUGIITG01349ANRD001) for the research.
2306.13410
Explainable Lifelong Stream Learning Based on "Glocal" Pairwise Fusion
Real-time on-device continual learning applications are used on mobile phones, consumer robots, and smart appliances. Such devices have limited processing and memory storage capabilities, whereas continual learning acquires data over a long period of time. By necessity, lifelong learning algorithms have to be able to operate under such constraints while delivering good performance. This study presents the Explainable Lifelong Learning (ExLL) model, which incorporates several important traits: 1) learning to learn, in a single pass, from streaming data with scarce examples and resources; 2) a self-organizing prototype-based architecture that expands as needed and clusters streaming data into separable groups by similarity and preserves data against catastrophic forgetting; 3) an interpretable architecture to convert the clusters into explainable IF-THEN rules as well as to justify model predictions in terms of what is similar and dissimilar to the inference; and 4) inferences at the global and local level using a pairwise decision fusion process to enhance the accuracy of the inference, hence ``Glocal Pairwise Fusion.'' We compare ExLL against contemporary online learning algorithms for image recognition, using OpenLoris, F-SIOL-310, and Places datasets to evaluate several continual learning scenarios for video streams, low-sample learning, ability to scale, and imbalanced data streams. The algorithms are evaluated for their performance in accuracy, number of parameters, and experiment runtime requirements. ExLL outperforms all algorithms for accuracy in the majority of the tested scenarios.
Chu Kiong Loo, Wei Shiung Liew, Stefan Wermter
2023-06-23T09:54:48Z
http://arxiv.org/abs/2306.13410v1
# Explainable Lifelong Stream Learning Based on "Glocal" Pairwise Fusion ###### Abstract Real-time on-device continual learning applications are used on mobile phones, consumer robots, and smart appliances. Such devices have limited processing and memory storage capabilities, whereas continual learning acquires data over a long period of time. By necessity, lifelong learning algorithms have to be able to operate under such constraints while delivering good performance. This study presents the Explainable Lifelong Learning (ExLL) model, which incorporates several important traits: 1) learning to learn, in a single pass, from streaming data with scarce examples and resources; 2) a self-organizing prototype-based architecture that expands as needed and clusters streaming data into separable groups by similarity and preserves data against catastrophic forgetting; 3) an interpretable architecture to convert the clusters into explainable IF-THEN rules as well as to justify model predictions in terms of what is similar and dissimilar to the inference; and 4) inferences at the global and local level using a pairwise decision fusion process to enhance the accuracy of the inference, hence "Glocal Pairwise Fusion." We compare ExLL against contemporary online learning algorithms for image recognition, using OpenLoris, F-SIOL-310, and Places datasets to evaluate several continual learning scenarios for video streams, low-sample learning, ability to scale, and imbalanced data streams. The algorithms are evaluated for their performance in accuracy, number of parameters, and experiment runtime requirements. ExLL outperforms all algorithms for accuracy in the majority of the tested scenarios. Explainable AI Interpretability Prototype-Based Models Lifelong Learning Streaming Learning Transfer Learning Knowledge Engineering Self-Organizing Neural Networks ## 1 Introduction In most real-world applications, data arrives continuously in real-time and is often non-repeating unless it is memorized. From this phenomenon, two paradigms are coined: continuous learning and streaming learning. Continuous learning, also known as lifelong learning [1] refers to the ability to acquire knowledge continuously over a long period of time while retaining previously-learned knowledge. Streaming learning [2] on the other hand is the ability to acquire knowledge from sequential and continuously-arriving data streams. The former encompasses machine learning techniques to adapt and reconcile old and new knowledge while minimizing loss of information and the latter prioritizes quick and efficient knowledge acquisition from high-velocity data streams. When developing machine learning applications for use in embedded systems such as portable digital devices, robots, autonomous vehicles, and smart appliances, not only it is necessary for the applications to have both continuous and streaming learning capabilities, but also the ability to operate in resource-limited environments. Portable devices prioritize compactness which limits how much hardware can be installed on-board the device, thus limiting its processing power, memory storage, and energy storage capabilities. Example applications include portable medical devices which use continuous learning to personalize the diagnosis based on long-term monitoring of a patient's vital signs [3]. Personalized action recognition systems adapt to individual variances in body movements [4]. 
On-device learning is preferable to ensure greater customization based on the consumer's needs, as opposed to cloud-based learning, where a consumer's personalized data may be considered an insignificant detail among many other consumers' data. There are several other benefits to continual on-device learning, such as decreased bandwidth requirements, better control over the consumer's privacy, and less dependence on big data. Conventional learning strategies minimize empirical risk by assuming a given dataset consists of independent and identically-distributed (iid) samples and shuffling them before training. In continuous learning, however, this may sometimes cause catastrophic forgetting, whereby learning new knowledge causes older learned knowledge to be forgotten [5]. While there have been many research studies to address catastrophic forgetting, not all are suitable for embedded applications. Recent research also stresses the need for interpretability or explainability, especially for machine learning algorithms used in critical applications that directly affect human well-being. The main criterion of an explainable learning model is being able to show its thought-processes step-by-step from the input to the final decision, improving human trust in the system [6][7] and making it possible to debug potentially problematic decisions [8]. The current generation of continual learning systems lacks the ability to self-diagnose their decisions. A common problem involving self-supervised or unsupervised learning systems is when the data stream consists of undetected bias or garbage data, which would negatively impact the model. By implementing explainability in continual learning models, it would be possible to debug the learning process and identify problematic data before use. As of the time of writing this paper, state-of-the-art continual learning architectures such as Streaming Linear Discriminant Analysis (SLDA) [9] did not have explainability capabilities, while explainable learning architectures such as the eXplainable Deep Neural Networks (xDNN) [10] have been tested with several continuous learning scenarios but not under streaming learning conditions [11]. We argue the need for the following capabilities in streaming, explainable, continually learning architectures: 1. Learn from a continuous data stream in a single pass in environments where computational resources and data storage are highly constrained. 2. Acquire knowledge from data in any order while maintaining resilience against loss of previously learned information. 3. Learn efficiently and generalize well with minimal labeled examples. 4. Explain model decisions at the intermediate and final stages of the decision-making process. This study investigates explainable continual and streaming learning specifically for embedded devices. The paper presents several research contributions in this field. 1. We propose a modified SLDA architecture utilizing a prototype-based architecture to address the issues of catastrophic forgetting and the stability-plasticity dilemma, i.e., the balance between the network's ability to retain and to integrate knowledge. 2. We introduce a collective inference strategy to enhance classification accuracy by combining inferences from two levels: local inferences at the prototype level (i.e. "Among the examples in Class A, which example is the closest match to the input?") and global inferences at the class level (i.e. "Among all the classes, which is the closest match to the input?"). 3.
We formulate an explainable lifelong stream learning model with single-pass learning. 4. We conduct a series of benchmark tests and observe how the proposed model performed relative to other continual learning models using established datasets and continual learning scenarios. ## 2 Problem Definition Integrating explainability with online continual learning applications is challenging due to a number of factors. In online continual learning, examples are only presented once and may not be repeated unless they are stored in memory. Online continual learning models receive limited data and have a short time to learn from them. Given the scarcity of training data and learning time, it is difficult for most explainable algorithms to accurately model the concept enough to generate adequate explanations [12]. In addition, the data obtained from the continuous learning process is constantly changing. This means that the information learned by the models may also change over time and lose relevance. While deep learning models are capable of achieving high accuracy, their opaque nature makes it a challenge to generate explanations for their predictions [13][14]. Another major challenge is implementing online continual learning algorithms on embedded devices with limited memory capacity and processing power. This restriction makes it difficult to deploy complex algorithms or algorithms that take up a lot of storage space [15][16]. Models that provide understandable explanations typically compromise on the accuracy of the results. Balancing between explainability and accuracy is a challenge when employing explainable methods in online continual learning applications [17]. ## 3 Related Work ### Streaming Learning In streaming learning scenarios, machine learning algorithms are required to learn from a continuous stream of non-repeating training samples in a single pass. The algorithms must also be capable of being evaluated at any point during the stream and prior training samples are not stored for retraining. In real-life applications, contextual information may not always be available. Several prototype-based classifiers such as ARTMAPs [18] were developed to learn from non-stationary data. However, the presentation order of training data significantly affects the performance of ARTMAPs. Various methods were developed to optimize ARTMAP performance [19; 20; 21] but they were computationally intensive and therefore unsuitable for real-time applications. Streaming Linear Discriminant Analysis (SLDA) [9] extends the conventional Linear Discriminant Analysis (LDA) architecture to support incremental learning from data streams. SLDA stores a running mean for each unique class and a shared covariance matrix. During inference, SLDA classifies a given input to the most likely class using the class means and covariance matrix. The softmax methods used with conventional neural networks are analogous to the LDA's estimated posterior distribution [22]. Deep-SLDA [23] pairs SLDA with a convolutional neural network (CNN) acting as a feature extractor for high-dimensional inputs such as images. The performance of the model surpasses that of state-of-the-art streaming learning and incremental batch learning algorithms. ### Continual Embedded / On-Device Learning Although streaming learning algorithms have been developed to reduce catastrophic forgetting, they don't meet certain requirements for embedded applications. 
Disqualifying criteria include the high storage and computation requirements of batch learning techniques, and needing task labels during inference [23][24]. Another requirement for continual embedded learning is the ability to generalize from a very small number of training samples. Algorithms with this capability are commonly known as "low-shot" continual learning algorithms [25; 26; 27; 28]. Several CNNs were made to meet the need for on-device learning, balancing accuracy of classification with speed of processing. Networks with efficient computation and reduced memory requirements include MobileNet [29], SqueezeNet [30], ShuffleNet [31] and CondenseNet [32]. Other methods to reduce memory requirements include deep network pruning [33; 34; 35; 36], quantization [37; 38; 39; 40; 41; 42; 43], and model compression or network distillation [44][45]. A comprehensive study was performed to compare several continual learning algorithms and CNNs as feature extractors in multiple scenarios [11]. The models were tested on their robustness to scale, on imbalanced class distribution, and on temporally correlated video streams. The models were then evaluated on the basis of classification accuracy, number of parameters, and experiment runtime. We use the same experiment protocols to evaluate the performance of our proposed continual learning model against other algorithms. ### Explainable Prototype-Based Learning Models The architecture of CNNs is designed to maximize predictive accuracy through a series of convolutional steps. CNNs are considered "black box" models due to how difficult it is to explain how they arrive at a specific classification decision for a given input. CNNs are typically interpreted post hoc: the model's decisions are obtained first before backtracking and generating justifications [46]. A popular explainable technique uses class activation mappings (CAMs) and gradient-weighted CAMs (Grad-CAMs) [47][48] to highlight discriminative features on input images. Such post hoc interpretability techniques are usually approximations as opposed to in-depth explanations of the cause-and-effect relations and reasoning. Prototype-based classifiers such as ARTMAPs [18] and self-organizing networks [49] group training samples according to their proximity in the feature space [50]. Each group or cluster of training samples can be represented by the closest centroid or prototype [51]. xDNN [10] is a prototype-based classifier with the ability to generate explanations for deep neural networks. The prototypes in the architecture are used to generate linguistic IF-THEN rules. xDNN employs empirically derived probability distribution functions based on local densities and global multivariate generative distributions [52]. The prototype-based architecture and algorithm are suitable for transfer learning and continuous learning without retraining. xDNN outperforms state-of-the-art approaches in accuracy and computational simplicity in benchmark tests [10][53]. To summarize, xDNN is an explainable feed-forward neural network with an incremental learning algorithm adding new prototypes to reflect the dynamic data stream [54]. ## 4 Methodology The proposed Explainable Lifelong Learning (ExLL) model is a feed-forward neural network with an incremental learning algorithm and a self-organizing topology. Inputs to the network are typically images passed through a convolutional neural network to extract both abstract and discriminative features from the fully-connected layer. 
The architecture of ExLL enhances the functionality of the xDNN [10] with a few modifications for implementing a variant of SLDA [9], namely MegaCloud-based global inference and prototype-based local inference. A pairwise fusion method [55] is used to combine the global and local inferences into a "glocal" inference. ### Training the Explainable Continual Learning Model Figure 1 shows the ExLL model's layers. CNN weights are pre-trained with image datasets such as ImageNet. Images are passed through the CNN, and the activations of the last hidden fully-connected layer in the CNN are taken as the discriminative feature vectors to be learned by the ExLL. Figure 1: Explainable Lifelong Learning architecture. Training images produce feature vectors in the fully-connected layer of a pre-trained CNN. Density of the feature vectors are then computed to determine if the training images should be assigned to an existing prototype or to initialize a new prototype. All prototypes belonging to one class label are assigned to one MegaCloud. Inference is performed once at the local level, another at the global level. Local inference matches the inferenced image to the most similar prototype while global inference matches the image to the most similar MegaCloud. Both inference decisions are then combined using a glocal pairwise fusion matrix to obtain the final model decision. Similar to xDNN, the two main components of the proposed model are the Prototypes Layer and the MegaClouds Layer [10]. Input feature vectors are represented as data points in a multi-dimensional topology. Data points that are close to each other can be considered as a "data cloud" encompassing an area of influence between the data points. A data cloud can be represented by a composite feature vector known as a centroid or "prototype", calculated as the average of all points in the data cloud. A prototype is typically assigned a class label based on the labels of the majority points in the data cloud. Each prototype is independent and distinct from each other, representing the local peaks of the data distribution sharing the same class label. Furthermore, adjacent data clouds sharing the same class label can be grouped into a larger structure known as a "MegaCloud". Where a prototype represents an instance-based prototype of a class label, a MegaCloud is a category-based prototype of the class label. Training the ExLL takes place as follows: 1. An image \(I_{i}\) at time stamp \(i\) and belonging to class \(k\) is passed through a CNN. The subsequent feature vector \(\tilde{x}_{i}\) is obtained from the fully-connected layer, and then normalized: \[x_{i}=\frac{\tilde{x}_{i}}{\|\tilde{x}_{i}\|}\] (1) where \(\|\cdot\|\) is the vector norm. 2. The ExLL's global meta-parameters are updated: \[\hat{\mu}_{i}=\frac{i-1}{i}\hat{\mu}_{i-1}+\frac{1}{i}x_{i}\] (2) \[\hat{\mu}_{1}=x_{1}\] \[\hat{\sigma}_{i}=\frac{i-1}{i}\hat{\sigma}_{i-1}+\frac{1}{i}\|x_{i }\|^{2}\] (3) \[\hat{\sigma}_{1}=\|x_{1}\|^{2}=1\] \[\hat{\xi}_{i}=\frac{i-1}{i}\hat{\xi}_{i-1}+\frac{1}{i}(x_{i}- \hat{\mu}_{i})(x_{i}-\hat{\mu}_{i})^{T}\] \[\hat{\xi}_{1}=(x_{1})(x_{1})^{T}\] (4) where \(\hat{\mu}\) is the global average of all training samples, \(\hat{\sigma}\) is the global variance, and \(\hat{\xi}\) is the inter-class global covariance matrix. 3. While the global meta-parameters represent the cross-class topology of the ExLL, local meta-parameters represent the within-class topologies for each class. 
If \(x_{i}\) has a novel class, the number of unique class labels is incremented, \(k\gets k+1\), and the local meta-parameters for the new class \(k\) are initialized as follows: \[i_{k}\gets 1\] \[g_{k}\gets 1\] \[\mu_{k,1}\gets x_{i}\] (5) \[\sigma_{k,1}\leftarrow\|x_{i}\|^{2}\] \[E_{k,1,1}=0\] Here \(i_{k}\) denotes the number of inputs where class \(k\) was observed during training, \(g_{k}\) counts the prototypes in class \(k\), \(\mu_{k,1}\) is the class mean, \(\sigma_{k,1}\) is the class scalar product, and \(E_{k,1,1}\) is a topological map of edge connections between within-class prototypes. Additionally, the novel class is used to initialize the first prototype of a new MegaCloud: \[p_{k,1}\gets x_{i}\] \[\text{S}_{k,1}\gets 1\] \[r_{k,1}\gets r^{*}\] (6) \[\hat{I}_{k,1}\gets I_{i}\] where \(p_{k,1}\) is the first prototype for class \(k\), \(\text{S}_{k,1}\) is the number of training samples associated with the prototype, \(r_{k,1}\) is the prototype's radius of influence initialized to a default value \(r^{*}=\sqrt{2-2\cos 30^{\circ}}\) [52], and \(\hat{I}_{k,1}\) keeps a record of all input images associated with this prototype, without actually storing the images themselves. The network then waits for the next input. However, if the input presents a known class, then the prototype layer is updated in response. The local meta-parameters are updated similarly to the global meta-parameters as follows: \[\mu_{k,i_{k}}=\frac{i_{k}-1}{i_{k}}\mu_{k,i_{k}-1}+\frac{1}{i_{k}}x_{i}\] (7) \[\sigma_{k,i_{k}}=\frac{i_{k}-1}{i_{k}}\sigma_{k,i_{k}-1}+\frac{1}{i_{k}}\|x_{i}\|^{2}\] (8) Class \(k\)'s mean \(\mu_{k,i_{k}}\) and scalar product \(\sigma_{k,i_{k}}\) are updated online. The input \(x_{i}\) is then passed to the density layer. 4. **Density layer**. This layer defines the mutual proximity of the training images relative to the data space defined by the feature vectors. The density of input \(x_{i}\) relative to class \(k\), \(D(k,x_{i})\), can be computed online [56]: \[D(k,x_{i})=\frac{1}{1+\|x_{i}-\mu_{k,i_{k}}\|^{2}+\sigma_{k,i_{k}}-\|\mu_{k,i_{k}}\|^{2}}\] (9) 5. **Prototype** layer. When an input \(x_{i}\) is presented, the nearest and second-nearest within-class prototypes, \(b_{1}\) and \(b_{2}\), are identified using the Mahalanobis distance [57]: \[b_{1}=\operatorname*{argmin}_{j=1,\dots,g_{k}}(x_{i}-p_{j})^{T}\hat{\xi}^{-1}(x_{i}-p_{j})\] (10) \[b_{2}=\operatorname*{argmin}_{j=1,\dots,g_{k};j\neq b_{1}}(x_{i}-p_{j})^{T}\hat{\xi}^{-1}(x_{i}-p_{j})\] (11) A density condition then tests if \(x_{i}\) is inside the distribution of existing prototypes: \[\text{IF }D(k,x_{i})>\max_{j=1,\dots,g_{k}}D(k,p_{j})\] \[\text{OR }D(k,x_{i})<\min_{j=1,\dots,g_{k}}D(k,p_{j})\] (12) THEN add a new data cloud \((g_{k}\gets g_{k}+1)\) If Condition 12 is met, then the input \(x_{i}\) is considered outside the influence radius of the current prototypes and is sufficiently novel.
\(x_{i}\) is then used to initialize a new data cloud: \[\begin{split} g_{k}\gets g_{k}+1\\ \text{S}_{k,g_{k}}\gets 1\\ p_{k,g_{k}}\gets x_{i}\\ r_{k,g_{k}}\gets r_{0}\\ \hat{I}_{k,g_{k}}\gets I_{i}\\ E_{k,g_{k},b_{1}}\gets 1;E_{k,b_{1},g_{k}}\gets 1; \end{split}\] (13) Otherwise, if Condition 12 is not met, the parameters are updated for the closest matching prototype \(b_{1}\): \[\text{S}_{k,b_{1}}\leftarrow\text{S}_{k,b_{1}}+1;\] \[p_{k,b_{1}}\leftarrow\frac{\text{S}_{k,b_{1}}-1}{\text{S}_{k,b_{1}}}p_{k,b_{1 }}+\frac{1}{\text{S}_{k,b_{1}}}x_{i};\] \[r_{k,b_{1}}\leftarrow\sqrt{\frac{r_{k,b_{1}}^{2}+(1-\|p_{k,b_{1}}\|^{2})}{2}}\] (14) \[\hat{I}_{k,b_{1}}\leftarrow\hat{I}_{k,b_{1}}+I_{i}\] \[E_{k,b_{1},b_{2}}\gets E_{k,b_{1},b_{2}}+1\] \[E_{k,b_{2},b_{1}}\gets E_{k,b_{2},b_{1}}+1\] \(E_{k}\) is a square matrix sized \(g_{k}\) for encoding the edges between local prototypes. Whenever a training sample activates the closest prototype \(b_{1}\) and second-closest prototype \(b_{2}\), \(E_{k,b_{1},b_{2}}\) and \(E_{k,b_{2},b_{1}}\) are incremented. The map \(E_{k}\) can then be used for visual evaluation of the spatial relationship between prototypes or for encoding frame of reference transformations [58] with each prototype representing one frame of reference. The prototype layer is the basis of local explainability of the ExLL model. Each prototype is an independent and distinct centroid shaped by associated training inputs. As each prototype records the associated training images in \(\hat{I}\), a set of linguistic IF-THEN rules are formulated as: \[R_{c}:\text{ IF }(I{\sim}\hat{I}_{k,1})\text{ OR }...\text{ OR }(I{\sim}\hat{I}_{k,g_{k}}) \tag{15}\] \[\text{ THEN }(\text{class is }k)\] where \(\sim\) indicates similarity or fuzzy degree of membership to a prototype, \(k=\{1,...,C\}\) is the class, and \(I_{i}\) denotes an input image. 6. **MegaClouds** layer. This layer is the basis of global explainability of the ExLL model. Each MegaCloud is used to facilitate explainability at the class level. Explainable rules generated from MegaClouds have the following format: \[R_{k}:\text{ IF }(x_{i}{\sim}\text{MC}_{k})\text{ THEN }(\text{class is }k) \tag{16}\] where MC\({}_{k}\) is the MegaCloud for the class \(k\). ### Inferring the Explainable Continual Learning Model Figure 2 illustrates the process where a given image is inferred. Prototype-based inference (PrInf) considers the local discriminative ability between individual prototypes while MegaCloud-based inference (McInf) globally discriminates between classes. Both types of inference have their strengths and weaknesses which adapt depending on the class distribution of the used dataset. Pairwise fusion (PF) is used as a method for combining local and global inferences to achieve better performance than either technique alone, hence the term "glocal pairwise fusion". A PF matrix encodes the association between PrInf and McInf during training without prior knowledge of the performance of either technique or the distribution of the dataset. 1. Shrinkage regularization is used to compute the precision matrix from the covariance matrix \(\hat{\xi}\): \[\Lambda=[(1-\epsilon)\hat{\xi}+(\epsilon)I]^{-1}\] (17) Figure 2: The process of image inference for the proposed ExLL. where \(I\) is an identity matrix and \(\epsilon=1e^{-4}\) regulates shrinkage. 2. 
**Prototype-based inference** assembles the prototypes of a class \(k\), \(\tilde{p}_{k}=\{p_{k,1},...,p_{k,g_{k}}\}\), and \(\Lambda\) to construct local weights \(\tilde{W}_{k}\) and local bias \(\tilde{b}_{k}\): \[\tilde{W}_{k} =\Lambda\tilde{p}_{k}\] (18) \[\tilde{b}_{k} =-\frac{1}{2}(\tilde{p}_{k}\cdot\tilde{W}_{k})\] Subsequently, the posterior distribution \(\tilde{P}\) and label prediction \(\tilde{l}\) are formulated as follows: \[\tilde{P}(y=k|x_{i}) =\frac{\exp(\tilde{W}_{k}^{T}x_{i}+\tilde{b}_{k}^{T})}{\Sigma_{k=1}^{C}\exp(\tilde{W}_{k}^{T}x_{i}+\tilde{b}_{k}^{T})}\] (19) \[\tilde{l} =\operatorname*{argmax}_{k=1,...,C}\tilde{P}(y=k|x_{i})\] 3. **MegaCloud-based inference** assembles all class mean vectors \(\hat{\mu}=\{\mu_{1},...,\mu_{C}\}\), and \(\Lambda\) to construct global weights \(\hat{W}\) and bias \(\hat{b}\): \[\hat{W} =\Lambda\hat{\mu}\] (20) \[\hat{b} =-\frac{1}{2}(\hat{\mu}\cdot\hat{W})\] and the subsequent posterior distribution \(\hat{P}\) and label prediction \(\hat{l}\) are formulated as follows: \[\hat{P}(y=k|x_{i}) =\frac{\exp(\hat{W}_{k}^{T}x_{i}+\hat{b}_{k}^{T})}{\Sigma_{k=1}^{C}\exp(\hat{W}_{k}^{T}x_{i}+\hat{b}_{k}^{T})}\] (21) \[\hat{l} =\operatorname*{argmax}_{k=1,...,C}\hat{P}(y=k|x_{i})\] 4. **Glocal pairwise fusion** [55] is used for combining the two inferences. During training, given an input vector's class \(k_{i}\), the local class prediction \(\tilde{l}\), and the global class prediction \(\hat{l}\), pairwise fusion encodes the relationship as: \[\Phi(k_{i},\hat{l},\tilde{l})\leftarrow\Phi(k_{i},\hat{l},\tilde{l})+1\] (22) where \(\Phi\) is a 3-dimensional matrix encoding the cumulative interactions between the actual label and the predicted labels from the local PrInf predictions and from the global McInf predictions. \(\Phi\) is updated online as additional training inputs are presented. Other rules for updating the matrix, such as using confidence-based increments [55], can be applied instead of simplified increments. When performing inference on an object \(x_{i}\) with an unknown class, the global inference \(\hat{l}\), the local inference \(\tilde{l}\), and \(\Phi\) are used for estimating the glocal class probabilities \(P(y=k|x_{i},\hat{l},\tilde{l})\) and the glocal label prediction \(L\): \[P(y=k|x_{i},\hat{l},\tilde{l}) =\frac{\Phi(k,\hat{l},\tilde{l})}{\Sigma_{k=1}^{C}\Phi(k,\hat{l},\tilde{l})}\] (23) \[L =\operatorname*{argmax}_{k=1,...,C}(P(y=k|x_{i},\hat{l},\tilde{l}))\] ### Explainability: Inference and Rule Generation ExLL incorporates the element of explainability at the inference stage, so that label predictions can be explained in terms of "Hits", "Near Hits" and "Near Misses" [59]. Given an image with a known label \(k\) to be classified, Equation 23 produces the best-matching label \(L_{1}\) and second-best matching label \(L_{2}\). Going back to Equations 10 and 11, the predicted class labels \(L_{1}\) and \(L_{2}\) each have a best-matching prototype (\(b_{L1,1}\) and \(b_{L2,1}\)) as well as a second-best matching prototype (\(b_{L1,2}\) and \(b_{L2,2}\)). As explained in Equations 13 and 14, each prototype is updated with a record of all associated training images: \(\hat{I}_{g_{k}}\). When the winning prototype \(b_{L1,1}\) is selected for the winning label \(L_{1}\) during inference, \(\hat{I}_{b_{L1,1}}\) is referenced to retrieve the images used to train the prototype. The retrieved images are then shown as a visual explanation, i.e.
"Hit", as to why the inferenced image is assigned to the best-matching prototype. Since the best-matching prototype is selected based on spatial proximity, lay users can inspect the retrieved images for a visual comparison against the inferenced image. A similar comparison is made, "Near Hit", by showing the training images associated with the second-best matching prototype, \(\hat{I}_{b_{L1,2}}\). Lastly, "Near Miss" shows the training images associated with the winning prototype \(b_{L2,1}\) for the next-best label \(L_{2}\): \(\hat{I}_{b_{L2,1}}\). The visual explanations provided by the "Near Hits" and "Near Misses" describe the decision boundary of the ExLL's prediction. In edge cases where the predictions are ambiguous, the visual comparison of "Hits", "Near Hits", and "Near Misses" informs the user of possible alternatives. Figure 3 demonstrates an example of a correct prediction and two wrong predictions. The top row illustrates the explanation for a True Positive prediction. An image of a shampoo bottle is correctly classified, and the training images shown under "Hits" justify the selection of the best-matching prototype due to their visual similarity. The training image from the second-best matching prototype, shown under "Near Hits", also explains why that prototype is not selected, due to its visual dissimilarity to the inferenced image. Lastly, "Near Misses" show why the test image is almost incorrectly classified as "Lotion" by showing the associated training images of the best prototype from the next-best class label. Given an incorrect prediction such as the False Negative result in the middle row and the False Positive result in the bottom row, the training images shown for "Hits", "Near Hits", and "Near Misses" explain why the ExLL made the mistake. Figure 4: Explainable rules extracted from prototypes for the classes “Aqueduct” and “Arch” from Places-365. Each rule is made up of training images associated with the corresponding prototype. Figure 3: Example of “Hits”, “Near Hits”, and “Near Misses” for the F-SIOL-310 dataset. The test image in the top row is an example of a True Positive result, the test image in the middle row is a False Negative result, and the test image in the bottom row is a False Positive result. The records \(\hat{I}_{g_{k}}\) are used for visualizing explainable rules. One rule is generated from one prototype. The visualization of explainable rules reveals hidden information in each clustered prototype, as shown in Figure 4. For example, the prototype associated with Rule 1 consists of images of aqueducts with clear blue skies in the background. In comparison, the training images associated with the prototype for Rule 2 do not have a visible background. Similarly, the prototype used for generating Rule 3 consists of images of arches over long hallways while the prototype for Rule 4 mainly contains images of arches with people in them. This information is not immediately visible to users since the images have been converted into feature vectors, but can be shown when the images are retrieved after training the model. ## 5 Experiment Setup Given a continuous stream of images where \(X_{t}\) is an image at time \(t\), a neural network classifier \(F(\cdot)\) is trained incrementally using supervised online continual learning, producing a predicted label \(\hat{y}_{t}=F(G(X_{t}))\). The backbone CNN \(G(\cdot)\) is pre-trained on large image datasets such as the ImageNet-1k dataset [60].
Subsequently, feature vectors are obtained from the last hidden fully-connected layer in response to training images fed to \(G(\cdot)\), which are then passed to \(F(\cdot)\) for learning. The intermediate layers in \(G(\cdot)\) are frozen after pre-training to prevent knowledge drift, i.e. the situation where the backbone's representations change and the representations already learned by \(F(\cdot)\) are no longer up-to-date. With this configuration, eight online continual learning strategies were studied for \(F(\cdot)\) and five backbone architectures were studied for \(G(\cdot)\). These studies are detailed in the following subsections. ### Backbone Architectures Three backbone CNN families were selected for comparison for their compact size, effectiveness, and classification accuracy when trained and tested on the ImageNet dataset. **MobileNet-v3**[61] is the successor of two previous architectures created for mobile and embedded applications [29][62]. The CNN incorporates several strategies for efficient and accurate inference under real-time and resource-constrained scenarios. Depth-wise separable convolutions are used in conjunction with linear bottleneck layers to reduce computational cost without negatively impacting performance. Two versions are compared in this study. MobileNet-v3 Small (**MNet-S**) is more efficient but displays worse classification performance compared to MobileNet-v3 Large (**MNet-L**), which is more resource-intensive but shows better classification performance. **EfficientNet**[63] utilizes neural architecture search (NAS) to automate the selection of an optimal architecture to achieve a good tradeoff between performance and model complexity. Like MobileNet-v3, EfficientNet utilizes depth-wise separable convolutions and linear bottleneck layers to reduce computational cost, making it suitable for usage in embedded and mobile applications with limited computing resources. EfficientNet refers to a family of models with varying complexity. The two models with the least complexity are compared in this study: EfficientNet-B0 (**ENet-B0**) and EfficientNet-B1 (**ENet-B1**). **ResNet**[64] makes use of residual blocks allowing information to skip one or more convolutional layers and remains effective even in very deep networks. During training, residual representations measure the differences between the actual output from each block and the desired output. Learning is performed by updating the convolutional weights to make the residual representations more accurate. ResNet includes several types of models with varying complexity. The smallest model, ResNet-18 (**RN-18**), is selected for this study as the most suitable ResNet candidate to be used in embedded systems and has been extensively tested in continual learning studies [24][65; 66; 67; 68]. ### Online Continual Learning Models We assess how well the proposed model performs when compared to seven other online continual learning techniques for training the classifier \(F(\cdot)\) using the image feature vectors extracted using \(G(\cdot)\). The techniques were selected for their low memory and computation requirements and their ability to learn incrementally, continuously, and in a single pass. **Fine-Tune** incrementally adjusts a CNN's fully-connected layer. A stochastic gradient descent optimization strategy is used and progress is measured using the cross-entropy loss of the CNN's predictions. **Nearest Class Mean** (NCM) keeps a cumulative average vector for every unique class it encounters during training.
Each class mean vector is considered a prototype representing a single class. During inference, NCM compares the input feature vector to the class mean vectors using a similarity metric such as Euclidean distance. The input is assigned to the class with the most similar feature mean vector. **Streaming One-vs-Rest** (SOvR) maintains a series of binary classifiers, one for each unique class it encounters during training. As each new feature vector continuously arrives in a streaming scenario, the classifier for the relevant class of the current input is updated incrementally. During inference, each classifier outputs a confidence score on whether the inferenced feature vector belongs to that class. The final predicted class is selected from the classifier with the best confidence score. **Streaming Linear Discriminant Analysis** (SLDA) [23] is an extension of Linear Discriminant Analysis designed to support learning from streaming data. The data distribution is modeled using class mean vectors and covariance matrices. A discriminant function is used to find a linear projection of the input data that maximizes the separation between classes. Both the data distribution and the discriminant function are updated incrementally as new feature vectors arrive from the data stream. During inference, SLDA uses the discriminant function to compute the probabilities of the inferenced vector belonging to each of the known classes. The final predicted class is selected from the class with the best probability score. **Streaming Gaussian Naive Bayes** is an extension of the Gaussian Naive Bayes algorithm designed to support learning from streaming data. The model makes use of class-conditional probability distributions to determine if a feature vector belongs to a specific class. The distribution of each feature is represented by a mean and variance. The probability distributions are updated incrementally by observing the feature values of the incoming feature vectors from the data stream. During inference, Bayes' theorem is applied to obtain the posterior probability of each class based on the observed input features. The predicted class is the one with the highest probability score. **Online Perceptron** keeps a class vector for every unique class it encounters during training. When a feature vector is received, prediction is performed by taking the dot product of the input and the stored class vectors. The final predicted class is selected from the class with the best score. During training, no action is performed if the prediction matches the actual class. However, if the prediction is a mismatch, the vector of the actual class is adjusted towards the input while the vector of the mismatched class is adjusted away from the input. This process is repeated continuously as the model receives additional feature vectors from the data stream. **Replay** is a technique to reduce catastrophic forgetting by storing some of the previous training feature vectors in a memory replay buffer. During training, the model samples from incoming feature vectors equally alongside the stored feature vectors. Training examples can be selected from the buffer at random or by using specific strategies to mitigate issues such as imbalanced class representation. By incorporating past knowledge, the replay model balances the learning process to reduce catastrophic forgetting while giving equal attention to new knowledge.
As training progresses, the memory buffer can be updated by replacing randomly-selected feature vectors with the current input, or by using specific strategies to retain important feature vectors. Replay can be memory intensive depending on how much storage is allocated for the memory buffer. **Explainable Lifelong Learning** (ExLL) is the proposed model of this study. Three variations of the model were tested. **ExLL-P** uses Prototype-based Inference as per Equation 19, where label predictions are based on the closest prototype mean. **ExLL-M** uses MegaCloud-based Inference as per Equation 21, where label predictions are based on the closest class mean. Lastly, **ExLL-F** uses pairwise fusion to combine the label predictions from ExLL-M and ExLL-P, as per Equation 23. ### Datasets Online continual learners are evaluated using the following image classification datasets. **OpenLORIS**[69] consists of videos of 40 different household items recorded from varying angles and distances, with 121 object instances across all items. Each object instance is recorded under one of the following environmental conditions: the object is surrounded by clutter; the object is illuminated by several light sources; the object is partially occluded; and the object is nearer to or further away from the camera. A total of 9 sessions are recorded for each condition and object instance. This dataset is suitable for testing a model's ability to learn and recognize objects from dynamic and sequential image streams as well as to apply its acquired knowledge to recognize known objects under different environments. **Places-365**[70] consists of 1.8 million images of locations divided into 365 categories. The dataset is segmented into a training and validation set. This dataset is suitable for evaluating a model's ability to learn from a large number of classes and diverse images per class. **Places-Long-Tail** (Places-LT) is a subset of Places-365 with a skewed distribution of images across all classes, designed to evaluate a model's ability to generalize from highly-imbalanced data distributions. Each of the 365 classes may consist of anywhere between 5 and several thousand images, while the validation set is identical to the validation set used by Places-365. **Few-Shot Incremental Object Learning** (F-SIOL-310) [26] consists of static images of 22 household items. There are multiple instances of each item, totaling 310 object instances and 620 static images. This dataset uses two learning scenarios. The 5-shot learning scenario trains a model using only five images per class, selected at random, and tests with all other images. Likewise for the 10-shot learning scenario, only ten randomly-chosen images are used for training the model while all others are reserved as the testing set. Typically, this dataset is used with multiple permutations of class orders and training images. This dataset is suitable for evaluating a model's ability to learn from few training samples. ### Experiment Protocol One of the factors impacting the performance of an online continual learner is the order in which training data is presented. This study uses several different orderings of each dataset and observes the effects on the learners. Two variants of instance data orderings are used for the OpenLORIS dataset [71]. **Instance ordering** shuffles object instances before presenting all training videos to the learner for training.
On the other hand, **low-shot instance ordering** presents only one training video from each object instance and category to the learner during training. Having learned from the object instances, the learners are then tested on all testing videos of known objects. The low-shot ordering method evaluates how well the learner generalizes from a limited labeled dataset to identify known objects under various environmental conditions. For the two Places datasets, two data ordering methods are used. **Independent and identical distribution** (IID) shuffles the order of the images. **Class-IID**, on the other hand, organizes all the images by class; the order of the images is shuffled within each class, as well as the order of the classes. Class-IID is designed to test the learner's ability to handle catastrophic forgetting, and is commonly used as a continual learning metric [65][66][68][72]. Some learners perform poorly with Class-IID ordering if they do not have catastrophic forgetting mitigation, but are still able to perform well using IID ordering. Lastly, F-SIOL-310 is run using Class-IID ordering for each low-shot learning scenario. The experiment is run using three permutations of class orders and the averaged results are reported over all permutations. ### Performance Metric Online continual learners are evaluated on three axes: classification accuracy, number of parameters, and experiment runtime. The performance of an online learner \(\mathcal{M}\) is computed as a modified NetScore metric [73] combining all three metrics into one score as follows: \[\Omega(\mathcal{M})=s\log(\frac{a(\mathcal{M})^{\alpha}}{p(\mathcal{M})^{\beta}c(\mathcal{M})^{\gamma}}) \tag{24}\] where \(a(\mathcal{M})\) is the learner's testing accuracy, \(p(\mathcal{M})\) is the learner's size, \(c(\mathcal{M})\) is the time taken to complete the experiment from start to finish, and \(\alpha,\beta,\gamma\) are user-defined constants for controlling the contributions of accuracy, number of parameters, and the experiment runtime towards computing the NetScore \(\Omega\). The NetScore parameters follow the original parameter settings as suggested by Hayes et al. [11]: \(s=20\) and \(\alpha=2\) to prioritize classification accuracy, and \(\beta=\gamma=0.25\) to moderate the large values of \(p(\mathcal{M})\) and \(c(\mathcal{M})\)[73]. Higher NetScores indicate better performance. ## 6 Results For OpenLORIS and Places-LT, the results are reported as the average performance across three permutations for each data ordering technique. On the other hand, Places-365 is run only once for each ordering due to the long time needed to complete the experiment. Meanwhile, classifiers such as SOvR and NCM are relatively unaffected by data ordering permutations due to the usage of running class mean vectors. As for the Replay method, two buffer sizes were compared: one storing 20 training samples per class (20pc) and the other storing 2 training samples per class (2pc). ### Results on OpenLORIS Online continual learners are evaluated on OpenLORIS using two data ordering methods. **Instance ordering** trains learners on all object instances while **low-shot instance ordering** trains learners on one object instance from each object class. Performance is evaluated using the top-1 accuracy score of each learner. The scores are then averaged across all CNN architectures to compare how the orderings affect learner performance, as shown in Figure 5.
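Before turning to the NetScore comparisons in the next subsection, Eq. (24) can be transcribed directly into code. A minimal sketch follows; the function name is ours, and the base-10 logarithm is an assumption following the original NetScore formulation [73]:

```python
import math

def netscore(accuracy, num_params, runtime,
             s=20.0, alpha=2.0, beta=0.25, gamma=0.25):
    """Modified NetScore (Eq. 24); higher is better.

    accuracy is in (0, 1]; num_params and runtime are positive values
    whose influence is moderated by beta and gamma, respectively.
    """
    return s * math.log10(accuracy ** alpha
                          / (num_params ** beta * runtime ** gamma))
```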
All models displayed lower accuracy when using low-shot instance ordering. Perceptron and Fine-Tune in particular showed a much bigger drop in accuracy for the low-shot instance ordering, relative to other continual learning models. These models generalized poorly when tested against images from domains not encountered during training. In comparison, Naive Bayes, SOvR, and NCM were less accurate than Perceptron and Fine-Tune for the full instance ordering, but outperformed them for the low-shot condition. The ExLL models showed the best balance between the two ordering methods, while ExLL-F outperformed all other models for both orderings. #### 6.1.1 NetScore Performance In Table 1, NetScores were used for evaluating continual learning methods in terms of performance as well as memory and computational requirements. The NetScore values were obtained by evaluating all methods on the same hardware for consistency. Higher NetScore values are better. The top three performing online continual learners are NCM, Replay 20pc, and SLDA. The NCM algorithm is the most efficient in terms of memory and computation requirements since it only stores and updates the class mean vectors. Replay 20pc needed additional computation and memory for replaying stored samples, while SLDA needed additional computation and memory for the covariance matrix. Meanwhile, ExLL was placed fifth among the eight algorithms. While ExLL is nominally similar to SLDA, ExLL required significantly more memory to store prototype mean vectors in addition to class mean vectors. In addition, ExLL stores training records to be able to recall the information for explaining inferences. #### 6.1.2 Backbone CNN Comparisons Table 2 presents the best accuracy scores of online continual learning models across different CNN backbones when trained using instance ordering. The EfficientNet architectures showed the best results overall while ResNet-18 showed the worst results in eight of the ten cases. \begin{table} \begin{tabular}{c|c c c c c|c} \hline Method & MNet-S & MNet-L & ENet-B0 & ENet-B1 & RN-18 & Mean \\ \hline Perceptron & -115.9 & -106.5 & -91.0 & -96.5 & -147.8 & -111.5 \\ Fine-Tune & -149.0 & -142.9 & -96.3 & -103.7 & -187.8 & -135.9 \\ Naive Bayes & -83.7 & -77.3 & -75.8 & -84.2 & -204.5 & -105.1 \\ SOvR & -80.0 & -83.9 & -74.0 & -78.7 & -111.4 & -85.6 \\ NCM & **-55.5** & **-64.7** & **-65.3** & **-72.5** & **-78.7** & **-67.3** \\ Replay (20pc) & -58.9 & **-66.5** & **-66.6** & **-72.8** & **-80.5** & **-69.1** \\ SLDA & **-58.3** & -69.1 & -72.3 & -79.3 & -80.8 & -72.0 \\ \hline **ExLL** & -75.3 & -91.4 & -99.5 & -106.8 & -108.1 & -96.2 \\ \hline \end{tabular} \end{table} Table 1: NetScores on OpenLORIS with the **low-shot instance ordering**. **Higher** values are better. Results are highlighted as follows for the **first**, **second**, and **third** best results. Figure 5: Accuracy results averaged across CNN architectures comparing online continual learners’ performance on OpenLORIS with **instance ordering** and **low-shot instance ordering**. Table 3 presents the performance of the models across different CNN architectures for low-shot instance ordering. Compared to the previous table, classification accuracy was significantly lower due to fewer training samples. The EfficientNet backbone CNNs again outperformed the other backbone CNNs. ### Results on Places-365 and Places-LT In this section, online continual learners were compared in terms of performance, regardless of which CNN architecture was used.
Tables 4 and 5 show the average top-1 accuracy for Places-365 and Places-LT, respectively, for all online continual learners across all CNNs. In almost every case, SLDA outperformed ExLL-M and ExLL-P. However, when pairwise fusion was used for combining the results from the two ExLL methods, ExLL-F was able to outperform SLDA, ExLL-M, and ExLL-P by a significant margin. This suggests that the local and global inferences in ExLL-F contain complementary information and were able to address each other's weaknesses when pairwise fusion is used. Perceptron and Fine-Tune showed a significant drop in accuracy in Class-IID compared to IID, due to catastrophic forgetting. When training using Class-IID, known classes are not revisited and are thus negatively impacted when new classes are introduced. Other online continual learning models, including ExLL, are not as affected by catastrophic forgetting. Both Places datasets use the same set of images for testing but with different training sets. While Places-365 tests generalization for 365 location-based classes with 1.8 million images, Places-LT tests how well models perform with severe imbalance, with classes consisting of anywhere between 5 and 4,980 training images. Therefore, comparing the performance of the models for the two datasets is one way to observe their robustness against dataset imbalance. Of the three ExLL methods, MegaCloud-based inference was the least affected by dataset imbalance, while prototype-based inference and pairwise fusion showed a 7.4% and 12.1% loss in performance, respectively, when trained with Places-LT. ExLL-F in particular degraded more than either ExLL-M or ExLL-P, demonstrating a significant vulnerability to imbalance. A visualization of the topology of prototypes is provided as supplementary material (Figure 7). \begin{table} \begin{tabular}{c|c c c c c|c} \hline Method & MNet-S & MNet-L & ENet-B0 & ENet-B1 & RN-18 & Mean \\ \hline Perceptron & 0.793 & 0.880 & 0.935 & 0.942 & 0.796 & 0.869 \\ Fine-Tune & 0.835 & 0.915 & 0.958 & 0.963 & 0.809 & 0.896 \\ Naive Bayes & 0.311 & 0.526 & 0.780 & 0.787 & 0.015 & 0.483 \\ SOvR & 0.374 & 0.477 & 0.739 & 0.723 & 0.346 & 0.531 \\ NCM & 0.729 & 0.789 & 0.859 & 0.867 & 0.797 & 0.808 \\ Replay (20pc) & 0.921 & 0.956 & 0.977 & 0.978 & 0.929 & 0.952 \\ SLDA & **0.956** & **0.982** & **0.988** & **0.988** & 0.950 & **0.972** \\ \hline **ExLL-M** & 0.944 & 0.973 & 0.982 & 0.982 & 0.940 & 0.964 \\ **ExLL-P** & 0.951 & 0.968 & 0.982 & 0.983 & **0.961** & 0.969 \\ **ExLL-F** & **0.987** & **0.993** & **0.996** & **0.996** & **0.988** & **0.992** \\ \hline \end{tabular} \end{table} Table 2: Accuracy results on OpenLORIS with the **instance ordering**. Results are highlighted as follows for the **first**, **second**, and **third** best results.
\begin{table} \begin{tabular}{c|c c c c c|c} \hline Method & MNet-S & MNet-L & ENet-B0 & ENet-B1 & RN-18 & Mean \\ \hline Perceptron & 0.098 & 0.167 & 0.272 & 0.283 & 0.082 & 0.180 \\ Fine-Tune & 0.043 & 0.066 & 0.238 & 0.232 & 0.030 & 0.121 \\ Naive Bayes & 0.232 & 0.366 & 0.421 & 0.399 & 0.021 & 0.287 \\ SOvR & 0.259 & 0.323 & 0.449 & 0.459 & 0.224 & 0.342 \\ NCM & 0.442 & 0.474 & **0.516** & **0.514** & 0.463 & 0.481 \\ Replay (20pc) & 0.453 & 0.480 & **0.529** & **0.532** & 0.446 & **0.488** \\ SLDA & 0.445 & 0.454 & 0.472 & 0.460 & 0.442 & 0.454 \\ \hline **ExLL-M** & 0.463 & 0.493 & 0.504 & 0.487 & 0.440 & 0.477 \\ **ExLL-P** & **0.470** & **0.501** & 0.500 & 0.482 & **0.475** & 0.485 \\ **ExLL-F** & **0.481** & **0.511** & 0.511 & 0.495 & **0.492** & **0.498** \\ \hline \end{tabular} \end{table} Table 3: Accuracy results on OpenLORIS with the **low-shot instance ordering**. Results are highlighted as follows for the **first**, **second**, and **third** best results. \begin{table} \begin{tabular}{c|c c c c c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c|}{IID} & \multirow{2}{*}{Mean} \\ & MNet-S & MNet-L & ENet-B0 & ENet-B1 & RN-18 & \\ \hline Perceptron & 0.303 & 0.344 & 0.352 & 0.340 & 0.294 & 0.326 \\ Fine-Tune & 0.214 & 0.252 & 0.293 & 0.280 & 0.217 & 0.251 \\ Naive Bayes & 0.028 & 0.093 & 0.250 & 0.249 & 0.003 & 0.124 \\ NCM & 0.285 & 0.332 & 0.361 & 0.356 & 0.322 & 0.331 \\ Replay (20pc) & 0.289 & 0.323 & 0.354 & 0.348 & 0.261 & 0.315 \\ SLDA & 0.362 & **0.397** & **0.412** & **0.405** & **0.362** & **0.387** \\ \hline **ExLL-M** & **0.375** & 0.347 & 0.392 & 0.336 & 0.312 & 0.352 \\ **ExLL-P** & 0.354 & 0.380 & 0.381 & 0.370 & 0.343 & 0.365 \\ **ExLL-F** & **0.444** & **0.478** & **0.488** & **0.476** & **0.440** & **0.465** \\ \hline \hline \end{tabular} \begin{tabular}{c|c c c c c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c|}{Class-IID} & \multirow{2}{*}{Mean} \\ & MNet-S & MNet-L & ENet-B0 & ENet-B1 & RN-18 & \\ \hline Perceptron & 0.004 & 0.003 & 0.012 & 0.013 & 0.005 & 0.007 \\ Fine-Tune & 0.003 & 0.003 & 0.006 & 0.006 & 0.003 & 0.004 \\ Naive Bayes & 0.028 & 0.093 & 0.250 & 0.249 & 0.003 & 0.124 \\ NCM & 0.265 & 0.309 & 0.336 & 0.329 & 0.300 & 0.307 \\ Replay (20pc) & 0.251 & 0.279 & 0.297 & 0.295 & 0.235 & 0.271 \\ SLDA & **0.362** & **0.397** & **0.412** & **0.405** & **0.362** & **0.387** \\ \hline **ExLL-M** & 0.347 & 0.362 & 0.381 & 0.367 & 0.349 & 0.361 \\ **ExLL-P** & 0.352 & 0.378 & 0.381 & 0.373 & 0.343 & 0.365 \\ **ExLL-F** & **0.444** & **0.473** & **0.486** & **0.479** & **0.439** & **0.464** \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy results on Places-365 for two data ordering methods, **IID** and **Class-IID**. Results are highlighted as follows for the **first**, **second**, and **third** best results.
\begin{table} \begin{tabular}{c|c c c c c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c|}{IID} & \multirow{2}{*}{Mean} \\ & MNet-S & MNet-L & ENet-B0 & ENet-B1 & RN-18 & \\ \hline Perceptron & 0.152 & 0.185 & 0.213 & 0.206 & 0.149 & 0.181 \\ Fine-Tune & 0.136 & 0.163 & 0.197 & 0.191 & 0.141 & 0.165 \\ Naive Bayes & 0.015 & 0.050 & 0.199 & 0.213 & 0.100 & 0.115 \\ SOvR & 0.089 & 0.149 & 0.262 & 0.245 & 0.146 & 0.178 \\ NCM & 0.265 & 0.309 & 0.336 & 0.329 & 0.300 & 0.306 \\ Replay (20pc) & 0.239 & 0.267 & 0.290 & 0.282 & 0.223 & 0.260 \\ SLDA & 0.290 & 0.318 & 0.338 & 0.328 & 0.300 & 0.315 \\ \hline **ExLL-M** & **0.356** & **0.392** & **0.407** & **0.400** & **0.360** & **0.383** \\ **ExLL-P** & 0.265 & 0.297 & 0.315 & 0.305 & 0.277 & 0.292 \\ **ExLL-F** & **0.324** & **0.349** & **0.362** & **0.351** & **0.331** & **0.343** \\ \hline \hline \end{tabular} \end{table} Table 5: Accuracy results on Places-LT for two data ordering methods, **IID** and **Class-IID**. Accuracy scores are averaged over three runs with different data permutations. Results are highlighted as follows for the **first**, **second**, and **third** best results. ### Results on F-SIOL-310 F-SIOL-310 was selected to observe how the online continual learning methods perform in low-shot continual learning applications. Table 6 presents the performance for all continual learning methods, backbone CNNs, and learning scenarios. For the 5-shot scenario, ExLL-F is slightly outperformed by SLDA (0.884 vs. 0.889, respectively). On the other hand, for the 10-shot scenario, ExLL-F significantly outperformed the next-best methods, ExLL-M and SLDA (0.958 vs. 0.936 and 0.931, respectively). A visualization of the topology of prototypes is provided as supplementary material (Figure 8). ### Overall Results Spider plots were generated to visualize the performance metrics of online continual learners in terms of several factors: (**NetScore**), an index representing the learner's accuracy and memory and runtime requirements; (**Video**), the learner's ability to learn from sequential images or videos, evaluated based on its performance for the instance-ordered OpenLORIS dataset; (**Low-Shot**), the learner's ability to learn from a very small set of training inputs, evaluated using low-shot instance-ordered OpenLORIS; (**Scale**), its scalability to large-scale data, evaluated from Places-365; and (**Imbal.**), the learner's performance on imbalanced datasets, evaluated using Places-LT. To construct the plots, the performance metrics of learners were averaged for all backbone architectures and then normalized by assigning 0 to the worst score and 1 to the best score. Figure 6 illustrates the generated spider plots. The online continual learner's name is presented at the top of each plot along with the averaged score for all five metrics. ExLL-F (0.91) showed the best overall performance. Replay 20pc (0.88) and SLDA (0.88) outperformed the second-best ExLL model, ExLL-P (0.84). The worst-performing ExLL model, ExLL-M (0.78), is also outperformed by NCM (0.81). These ExLL variants ranked lower due to their low NetScores, despite having better scores in the other four metrics.
While sharing some similarities with SLDA, ExLL is less efficient with respect to computation and memory requirements. In addition to class vector means, ExLL models store prototype vector means as well as records of training samples to facilitate post hoc explainability during inference. \begin{table} \begin{tabular}{c|c c c c c c} \hline \hline Method & \multicolumn{6}{c}{5-Shot} \\ & MNet-S & MNet-L & ENet-B0 & ENet-B1 & RN-18 & Mean \\ \hline Perceptron & 0.181 & 0.177 & 0.406 & 0.454 & 0.049 & 0.253 \\ Fine-Tune & 0.183 & 0.205 & 0.416 & 0.460 & 0.090 & 0.270 \\ Naive Bayes & 0.344 & 0.554 & 0.816 & 0.828 & 0.035 & 0.515 \\ SOvR & 0.592 & 0.666 & 0.679 & 0.693 & 0.428 & 0.611 \\ CBCL & 0.853 & 0.878 & 0.886 & 0.838 & 0.848 & 0.860 \\ NCM & 0.853 & 0.871 & 0.886 & **0.885** & **0.885** & 0.876 \\ Replay (20pc) & 0.541 & 0.632 & 0.594 & 0.612 & 0.624 & 0.600 \\ SLDA & **0.880** & **0.899** & **0.912** & **0.903** & 0.854 & **0.889** \\ \hline **ExLL-M** & 0.842 & 0.873 & 0.863 & 0.847 & 0.851 & 0.855 \\ **ExLL-P** & 0.827 & 0.832 & 0.799 & 0.755 & 0.803 & 0.803 \\ **ExLL-F** & **0.889** & **0.905** & **0.887** & 0.854 & **0.885** & **0.884** \\ \hline \hline \end{tabular} \begin{tabular}{c|c c c c c c} \hline \hline Method & \multicolumn{6}{c}{10-Shot} \\ & MNet-S & MNet-L & ENet-B0 & ENet-B1 & RN-18 & Mean \\ \hline Perceptron & 0.158 & 0.223 & 0.354 & 0.458 & 0.051 & 0.248 \\ Fine-Tune & 0.127 & 0.199 & 0.389 & 0.453 & 0.090 & 0.251 \\ Naive Bayes & 0.320 & 0.537 & 0.806 & 0.854 & 0.015 & 0.506 \\ SOvR & 0.561 & 0.702 & 0.650 & 0.752 & 0.504 & 0.633 \\ CBCL & 0.883 & 0.906 & 0.888 & 0.892 & 0.869 & 0.887 \\ NCM & 0.883 & 0.906 & 0.893 & 0.913 & 0.896 & 0.898 \\ Replay (20pc) & 0.625 & 0.694 & 0.714 & 0.722 & 0.731 & 0.697 \\ SLDA & 0.924 & **0.948** & **0.938** & **0.936** & 0.910 & 0.931 \\ \hline **ExLL-M** & 0.926 & 0.942 & **0.938** & 0.928 & **0.948** & **0.936** \\ **ExLL-P** & **0.927** & 0.934 & 0.897 & 0.879 & 0.930 & 0.913 \\ **ExLL-F** & **0.961** & **0.966** & **0.952** & **0.943** & **0.968** & **0.958** \\ \hline \hline \end{tabular} \end{table} Table 6: Accuracy results on F-SIOL-310 using **Class-IID data ordering** for **5-shot** learning and **10-shot** learning scenarios. Accuracy scores are averaged over three runs with different data permutations. Results are highlighted as follows for the **first**, **second**, and **third** best results. ## 7 Conclusion We propose an explainable neural network architecture suitable for online and continual learning applications on embedded devices. The Explainable Lifelong Learning (ExLL) model is a prototype-based classifier inspired by SLDA that is robust against catastrophic forgetting and mitigates the stability-plasticity dilemma. ExLL was designed to facilitate single-pass learning from a continuous data stream. The design of the architecture also makes it easy to generate IF-THEN rules and to justify the classifier's decisions with highly interpretable explanations. A collective inference strategy was implemented to combine the global MegaCloud inference with the local prototype-based inference using glocal pairwise decision fusion to enhance predictive accuracy. The classifier's performance was benchmarked against state-of-the-art online learning models using several different CNN backbones, object recognition datasets, and evaluation metrics. In terms of video classification accuracy, low-shot learning, scalability, and imbalanced data learning, ExLL outperformed other online learning models in nearly every scenario.
However, in terms of metrics that quantify the model's storage and computational requirements, ExLL did not rank as high as Replay 20pc, SLDA, and NCM. One factor is that these methods maintain only one class mean per class, while ExLL maintains a small topology of centroids per class as well as additional memory storage to facilitate explainability. Overall, ExLL showed state-of-the-art classification accuracy in continual learning scenarios. As for suitability for embedded applications, ExLL outperformed Perceptron, Fine-Tune, and Naive Bayes, but was ranked below SOvR, NCM, Replay, and SLDA. Figure 6: The normalized performance metrics of online continual learners for accuracy and memory and computation requirements (**NetScore**); learning from temporally correlated videos (**Video**); generalizing from few data samples (**Low-Shot**); scalability (**Scale**); and learning from imbalanced data distributions (**Imbal.**). The learner’s average performance across all metrics is shown at the top of each plot. **Higher** values are better. This reflects the trade-off between parameter and runtime requirements on the one hand, and the need for a prototype-based architecture for explainability on the other. There are several strategies that can be considered to improve ExLL's efficiency. Pruning strategies may help identify low-utility prototypes that can be removed without significant catastrophic forgetting, thus reducing the parameter requirements of the model [74, 75, 76]. Other explainability techniques can be applied to enhance interpretability, including the use of gradient class activation maps to visualize discriminative image features [47][77]. Combined with selective feature weighting to ignore redundant features [78], this may help reduce the dimensionality and computation required by the model. In conclusion, our research has shown that the proposed ExLL model achieved very good performance when tested under diverse continual learning scenarios, even when compared against state-of-the-art continual learning models. Introducing the ability to explain and justify model predictions is a necessary and important contribution for online continual learning algorithms, and we have shown the merits of doing so. ## Acknowledgments The authors acknowledge the support from the German Research Foundation (Deutsche Forschungsgemeinschaft/DFG) under project CML (TRR 169), the TRAnsparent, InterpretabLe Robots (TRAIL) EU project, and from the BMWK under project VeriKAS.
2307.12427
Augmented Box Replay: Overcoming Foreground Shift for Incremental Object Detection
In incremental learning, replaying stored samples from previous tasks together with current task samples is one of the most efficient approaches to address catastrophic forgetting. However, unlike incremental classification, image replay has not been successfully applied to incremental object detection (IOD). In this paper, we identify the overlooked problem of foreground shift as the main reason for this. Foreground shift only occurs when replaying images of previous tasks and refers to the fact that their background might contain foreground objects of the current task. To overcome this problem, a novel and efficient Augmented Box Replay (ABR) method is developed that only stores and replays foreground objects and thereby circumvents the foreground shift problem. In addition, we propose an innovative Attentive RoI Distillation loss that uses spatial attention from region-of-interest (RoI) features to constrain current model to focus on the most important information from old model. ABR significantly reduces forgetting of previous classes while maintaining high plasticity in current classes. Moreover, it considerably reduces the storage requirements when compared to standard image replay. Comprehensive experiments on Pascal-VOC and COCO datasets support the state-of-the-art performance of our model.
Liu Yuyang, Cong Yang, Goswami Dipam, Liu Xialei, Joost van de Weijer
2023-07-23T20:47:03Z
http://arxiv.org/abs/2307.12427v1
# Augmented Box Replay: Overcoming Foreground Shift for Incremental Object Detection ###### Abstract In incremental learning, replaying stored samples from previous tasks together with current task samples is one of the most efficient approaches to address catastrophic forgetting. However, unlike incremental classification, image replay has not been successfully applied to incremental object detection (IOD). In this paper, we identify the overlooked problem of foreground shift as the main reason for this. Foreground shift only occurs when replaying images of previous tasks and refers to the fact that their background might contain foreground objects of the current task. To overcome this problem, a novel and efficient Augmented Box Replay (ABR) method is developed that only stores and replays foreground objects and thereby circumvents the foreground shift problem. In addition, we propose an innovative Attentive RoI Distillation loss that uses spatial attention from region-of-interest (RoI) features to constrain the current model to focus on the most important information from the old model. ABR significantly reduces forgetting of previous classes while maintaining high plasticity in current classes. Moreover, it considerably reduces the storage requirements when compared to standard image replay. Comprehensive experiments on the Pascal-VOC and COCO datasets support the state-of-the-art performance of our model 1. Footnote 1: Code is available at [https://github.com/YuyangSunshine/ABR_IOD.git](https://github.com/YuyangSunshine/ABR_IOD.git) ## 1 Introduction The field of deep learning has witnessed remarkable progress recently, and state-of-the-art object detection models [51, 52, 23, 2, 19] have been developed that perform exceptionally well on benchmark datasets. However, these models are typically designed to learn from data in a static manner, assuming that all object classes are available at once during training. In real-world scenarios, new object classes may emerge over time, making it necessary to update the model with new data. The inability to learn incrementally is a significant limitation for object detectors, particularly in cases of limited data storage capacity or data privacy concerns [46, 10]. Therefore, developing incremental object detection (IOD) methods has become an essential and challenging task in real-world applications. Figure 1: _Background Shift_ and _Foreground Shift_ for image replay settings. For each task, only the new classes are annotated while the other objects are considered as background (bkg). Moving from task \(t-1\) to task \(t\), the definition of the bkg changes, referred to as _background shift_ [8]. When current task samples are trained with exemplars from previous tasks, another critical problem, _Foreground Shift_, occurs due to varying annotations of _new classes_ between new samples (person as foreground) and exemplars (person as bkg) in the same task. Our augmented box replay method resolves these problems by mixing previous objects into the bkg of new images or fusing them together for training. SOTA object detectors experience a phenomenon known as catastrophic forgetting [47], where their performance on previous classes degrades after learning new classes. This issue is commonly observed in incremental settings [10] and can be mitigated by balancing model stability (retaining previous knowledge) and plasticity (learning new information).
While most studies in incremental learning are based on image classification [2, 31, 34, 50], it has recently been studied in the context of object detection [7, 9, 22, 48, 55] and semantic segmentation [13, 20, 64]. A critical aspect in IOD is the background shift, also known as missing annotations [7, 48], which occurs due to the presence of objects of multiple classes in an image. Objects belonging to previous or future tasks in incremental object detection are often not annotated and are assigned to the background class, as annotations are only available for classes in the current task. One of the most efficient approaches in incremental classification is the rehearsal-based strategy of storing images [6, 50]. However, directly applying image replay to IOD will cause the unlabelled objects of current classes in the replayed images to be treated as background by the model. Consequently, the new objects will be background in the replayed images, while being regarded as foreground in the new images. This leads to a contradiction between the foreground annotations in the exemplars and the current images, as illustrated in Fig. 1. We refer to this problem as foreground shift, which affects the plasticity of the current model. To overcome the foreground shift for image replay in IOD, we propose a novel method called Augmented Box Replay (ABR). ABR uses mixup and mosaic box augmentation strategies to replay previous objects as an alternative to image replay for training in the current task. Compared to storing images in memory, ABR stores approximately four times as many object instances with the same storage requirements. To more effectively address catastrophic forgetting, we introduce a novel Attentive RoI Distillation loss that utilizes spatial attention from region-of-interest (RoI) features to align the most informative features of the previous and new models and correct the anchor position deviations of proposal pairs. The proposed method is experimentally evaluated on the Pascal-VOC and COCO datasets, and significantly outperforms SOTA methods in multiple settings. Our main contributions are three-fold: * This paper is the first to identify the critical foreground shift issue which has hampered the usage of replay methods for IOD. We propose Augmented Box Replay as a solution that reduces the memory requirements, eliminates the foreground shift, and improves the model stability and plasticity. * We propose an innovative Attentive RoI Distillation loss to focus the current model on important location and feature information from the previous model and further reduce catastrophic forgetting. * Our method outperforms state-of-the-art methods across multiple datasets and settings, showcasing its practicality and effectiveness. In particular, on the more challenging longer task sequences and in the difficult scenario with a small initial task, our method obtains significant performance gains (see Fig. 2). ## 2 Related Work **Object Detection:** Detector networks can be categorized into one-stage [5, 36, 57, 39, 51] and two-stage [52, 19, 23, 35] detectors. One-stage detectors, which directly predict the output objects, are comparatively faster, while two-stage detectors are generally superior in performance. The two-stage methods first extract regions of interest (RoIs) using a network [52] and then obtain the final classification and regression outputs using a multi-layer network on the RoIs.
Since these architectures perform poorly in incremental settings, we extend the two-stage Faster R-CNN [52] network such that it can learn new object classes over time. **Incremental Learning:** Class-incremental learning [10, 46] and catastrophic forgetting [47] have been explored extensively for image classification [6, 34, 50] problems. The previous works can be categorized into rehearsal-based, parameter-isolation and regularization-based methods. Rehearsal-based methods store training samples [6, 33, 50] from previous tasks or generate training data [54, 59, 30]. Parameter-isolation methods [41, 44, 45, 61] modify the initial network to accommodate new classes. Prior-focused regularization methods constrain learning on new classes by penalizing updates to weights [2, 31] or gradients [43], while data regularization methods perform distillation [25] between the intermediate features [14, 15, 27, 34] or attention maps [12] of the teacher model and the current student model to reduce forgetting. Other methods use embedding networks [62] or classifier drift correction [3] to address the changing class distributions. In our work, we focus on rehearsal-based and regularization-based methods. **Incremental Object Detection:** Most of the recent works on incremental object detection use the Faster R-CNN [52] architecture and perform distillation on the intermediate features [7, 22, 40, 48, 60, 66], the region proposal network [7, 48, 66] and head layers [17]. Relatively few works [53, 52, 32] used one-stage architectures for incremental learning. Although the background shift issue was partially addressed in [66] by preventing previous-class regions from being sampled as background, it was only recently highlighted in [7, 48]. [7] proposed an unbiased classifier training loss and a classifier distillation loss to explicitly tackle the background shift. EWC [31] has been adapted by [38] for object detection. While some methods replay images for finetuning [21, 28] after training and for meta-learning [29], very few methods replay whole images [53] or stored feature representations [1] during training. For instance segmentation, [18] explored copying random instances from one image to another. Our work deals with bounding box replay methods to better address the challenges of IOD. Figure 2: Our ABR method performs especially well on the challenging longer sequences (10-1) and when starting with a small initial task (5-5). We compare here with the state-of-the-art methods FILOD [48] and MMA [7]. ## 3 Proposed Method ### Problem Formulation and Overview Object detection is primarily concerned with accurately identifying and localizing objects of interest within an image. Given a set of data \(D=\{(I_{n},Y_{n})\}_{n=1}^{N}\), an ideal object detector \(f_{\theta}(I_{n})\) can predict a series of boxes \(\hat{Y}_{n}\) corresponding to the groundtruth \(Y_{n}\), where \(Y_{n}=\{y_{g}=(u_{g},v_{g},w_{g},h_{g},c_{g})\}_{g=1}^{G_{n}}\), with \((u_{g},v_{g})\) denoting the top-left corner coordinates of the bounding box, \((w_{g},h_{g})\) the width and height of the bounding box, and \(c_{g}\) the class, for each of the \(G_{n}\) bounding boxes. Therefore, \(D\) has \(K=\sum_{n=1}^{N}G_{n}\) groundtruth boxes in total. This work focuses on two-stage detectors from the R-CNN family [52, 19, 23] that typically consist of a CNN-based feature extractor, a Region Proposal Network (RPN), and a class-level classification and bounding box regression network (RCN).
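As an aside, the notation above can be mirrored in code form. The following is an illustrative Python sketch; the type and function names are ours, not from the paper's implementation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Box:
    """One groundtruth box y_g = (u_g, v_g, w_g, h_g, c_g)."""
    u: float  # top-left x coordinate
    v: float  # top-left y coordinate
    w: float  # box width
    h: float  # box height
    c: int    # class label

@dataclass
class Sample:
    """One pair (I_n, Y_n): an image and its G_n annotated boxes."""
    image_path: str
    boxes: List[Box]

def total_boxes(dataset: List[Sample]) -> int:
    """K = sum_n G_n, the total number of groundtruth boxes in D."""
    return sum(len(sample.boxes) for sample in dataset)
```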
Incremental object detection aims to learn to detect objects in a sequence of \(T\) tasks, where each task \(D^{t}=\{(I_{n}^{t},Y_{n}^{t})\}_{n=1}^{N^{t}}\) corresponds to a new set of classes \(\mathcal{C}^{t}\). The model should be able to detect objects in the new classes \(\mathcal{C}^{t}\) while retaining the ability to detect objects in the previously seen classes \(\mathcal{C}^{1:t-1}\) without catastrophic forgetting. However, unlike in classification tasks where each input has a single label, \(I_{n}^{t}\) may contain objects from both \(\mathcal{C}^{t}\) and \(\mathcal{C}^{1:t-1}\), and the annotations \(Y_{n}^{t}\) only include the bounding boxes and class labels for \(\mathcal{C}^{t}\). Therefore, in IOD, \(G_{n}^{t}\) is less than or equal to the number of objects actually present in the image. The presence of unlabeled previous objects can lead to _Background Shift_ during training, where the attention of the detector is biased towards \(\mathcal{C}^{t}\) and it fails to differentiate between the objects from \(\mathcal{C}^{t}\) and \(\mathcal{C}^{1:t-1}\). Moreover, misassociations can propagate over time, exacerbating catastrophic forgetting of previous classes. A straightforward step towards a solution is using the original images from \(D^{1:t-1}\), as shown in Fig. 1, which provide certain information about \(\mathcal{C}^{1:t-1}\). However, the image replay method involves replaying original images from the previous training set during the current one, which can cause _Foreground Shift_ due to the replay of unlabeled objects from \(\mathcal{C}^{t}\). Thus, the new classes, i.e. the foreground in the current images, are considered as background in the replayed images, which results in the model failing to generalize to new contexts. Additionally, storing the original images can result in significant memory overhead, since they include a lot of redundant information. Figure 3: Illustration of our proposed framework, which highlights the key novelties of Augmented Box Replay (ABR) and Attentive RoI Distillation. ABR fuses prototype object \(b\) from Box Rehearsal \(B^{t-1}\) into the current image \(I_{n}^{t}\) using mixup or mosaic. Attentive RoI Distillation uses pooled attention \(A_{i}\) and masked features \(F_{i}\cdot A_{i}^{t-1}\) to constrain the model to focus on important information from the previous model. Inclusive Distillation Loss overcomes catastrophic forgetting based on ABR. ### Augmented Box Replay To mitigate the foreground shift problem, we propose an Augmented Box Replay (ABR) strategy that selects a subset of informative and representative box images from the
Since box images are smaller than images, the storage cost is reduced, making it scalable to large datasets and complex models. See supplementary material for more details. To leverage prototype boxes \(B^{t-1}\) accumulated from the previous tasks in the current task \(t\), we have designed two types of replay strategies: mixup box replay and mosaic box replay, inspired by [4, 65]. These strategies allow us to effectively transfer knowledge from past tasks to the current one and enhance the performance of the model. **MixUp box replay.** This method replays the box images of the previous class in the current data, placed in such a way that the previous box objects blend into the image more naturally, while ensuring that they have minimal overlap with the groundtruth bounding boxes of the new class. It involves assigning a random location in the current image \(I_{n}^{t}\) to each box image \(b\in B^{t-1}\) with size \((w_{b},h_{b})\), and then mixing it with \(I_{n}^{t}\) to create a new image \(\hat{I}_{n}^{t}\). More specifically, \(\hat{I}_{n}^{t}\) is obtained by overlaying \(b\) onto \(I_{n}^{t}\) at a location with a mixing coefficient \(\lambda\). For each pixel location in \(\hat{I}_{n}^{t}\), if \((u,v)\) is not inside the box, then the original pixel value of \(I_{n}^{t}\) is retained. If \((u,v)\) is inside the box, the mixed pixel value is computed by: \[\hat{I}_{n}^{t}(u,v)=\begin{cases}\lambda I_{n}^{t}(u,v)&\text{if }\max_{g\in G_{n}^{t}}y_{g}\cup b\leq th\\ +(1-\lambda)b(\hat{u},\hat{v}),&\text{otherwise}\end{cases}, \tag{1}\] where \(\lambda\) is values with the [0, 1] range and is sampled from the Beta distribution [65], \(b(\hat{u},\hat{v})\) is the pixel value of the box image \(b\) at location \((\hat{u}=u-w_{b},\hat{v}=v-h_{b})\), \(y_{g}\cup b\) is the intersection over union (IOU) between each groundtruth annotation \(y_{g},\forall g\in G_{n}^{t}\) and the box image \(b\), and \(th\) is a threshold value. If the maximum IOU over union between the groundtruth annotations and the box image \(b\leq th\), then the pixel value at \((u,v)\) in the new image \(\hat{I}_{n}^{t}\) is a mixture of the original pixel value \(I_{n}^{t}(u,v)\) and the corresponding pixel value in the box image \(b\). Otherwise, the original pixel value \(I_{n}^{t}(u,v)\) is retained. Note that at most two boxes are mixed up in a single image \(I_{n}^{t}\) since the boxes are selected randomly and the overlap condition limits the number of boxes that can be mixed up in a single image. **Mosaic box replay.** This method involves dividing \(I_{n}^{t}\) into a grid and randomly selecting a subset of cells. Each cell is then replaced with a box image \(b\) from \(B^{t-1}\), and the resulting image \(\hat{I}_{n}^{t}\) is used for rehearsal. In the mosaic box replay strategy, a composite image is formed by combining four box images into a single mosaic image. To create the composite image, a random location is first selected as the center point of the mosaic image. Then, each of the four boxes is resized to a new size that is proportional to the size of the mosaic image, with the scaling factor \(\mu\) randomly sampled from the range of [0.4, 0.6]. The resized boxes are arranged in the four quadrants of the mosaic image, and the remaining areas are filled with a fixed color value. 
In summary, the Augmented Box Replay offers several advantages for incremental learning in object detection:
1) **Information richness:** ABR selects the most informative and representative boxes for rehearsal, which preserves the accuracy and diversity of the learned model.
2) **Enhanced generalization:** ABR serves as an augmentation method that provides different background contexts to both previous and new classes and thus improves the generalization of the model.
3) **Memory efficiency:** ABR replays only a small set of representative box images instead of entire images, which significantly reduces the memory requirement.
4) **Adaptability:** ABR can easily be integrated with different object detection models to improve their performance.

### Attentive RoI Distillation

Distillation-based methods [7, 48, 55] are commonly used in IOD, aiming to transfer the knowledge of a model trained on a previous task (teacher) to a current model (student) while simultaneously learning the new task. To further explore the impact of the distillation operation on the forgetting of each detector component, we conduct an ablation study on the Faster-ILOD model [48], as shown in Table 1. We find that the feature extractor has a minimal effect on forgetting when either freezing the backbone or applying the feature distillation operation, and the presence or absence of the RPN component only has a 0.1% effect on forgetting. However, removing the distillation operation of the prediction head (RCN) results in a 26.2% drop in performance.

\begin{table} \begin{tabular}{c c c c|c c c} **Frozen** & **Feature** & **RPN** & **RCN** & \multicolumn{3}{c}{**VOC (10-10)**} \\ **Backbone** & **Distil.** & **Distil.** & **Distil.** & **1-10** & **11-20** & **1-20** \\ \hline \hline & ✓ & ✓ & ✓ & 70.3 & 53.0 & 61.7 \\ \hline ✓ & & ✓ & ✓ & 70.7 & 53.3 & 62.0 \\ & & ✓ & ✓ & 70.6 & 53.7 & 62.2 \\ \hline & ✓ & & ✓ & 69.8 & 53.3 & 61.6 \\ \hline & ✓ & ✓ & & 8.2 & 62.7 & 35.5 \\ \end{tabular} \end{table} Table 1: Influence of different detector components in Faster-ILOD [48] on the VOC 10-10 setting.

Our analysis, together with [58], suggests that forgetting mainly occurs at the classification head. However, a limitation of RPN distillation is that it focuses solely on the RPN modules, which provide region proposals without considering the features within each proposal. Consequently, the distilled model may overlook informative features within the proposals, leading to sub-optimal performance. To address this, we propose the Attentive RoI Distillation (ARD) loss, which allows the student model to selectively focus on the most important features from the teacher model by aligning the spatial attention maps and masked features of each proposal. Moreover, ARD supports more inclusive RoI features for the final prediction and helps to overcome the forgetting problem in the classification head. To enable the model to focus on the most informative parts of an image, we calculate a spatial attention map \(A_{i}^{t}\) for each \(F_{i}^{t}\), \(i=1,\dots,P_{n}^{t}\), where \(P_{n}^{t}\) is the number of proposals.
The spatial attention map is obtained by raising the absolute value of each feature plane \(F_{i,d}^{t}\) to a power \(p\) (in the experiments, \(p=2\)), following [63], and summing over the channels:

\[A_{i}^{t}=\sum\limits_{d=1}^{C}\left|F_{i,d}^{t}\right|^{p},\quad p>0. \tag{2}\]

Our method employs spatial attention maps from the previous and current models to emphasize the most informative features and suppress the less informative ones. More specifically, the pooled attention distillation (PAD) loss is:

\[\mathcal{L}_{PAD}=\left\|A_{i}^{t-1}-A_{i}^{t}\right\|, \tag{3}\]

where \(A_{i}^{t-1}\) and \(A_{i}^{t}\) are the spatial attention maps for the \(i^{th}\) proposal in the previous and current models, respectively. PAD transfers knowledge from a previously trained model to a new one in a progressive learning setting. The key difference with existing distillation methods in IOD is that here we explicitly distill the knowledge of the location of the relevant features, which is encoded in the attention map. Furthermore, our method applies the attentive distillation to the aligned bounding boxes, which contain highly relevant location and feature information. Specifically, the Attentive RoI Feature Distillation (AFD) loss is employed:

\[\mathcal{L}_{AFD}=\frac{1}{P_{n}^{t}}\sum\limits_{i=1}^{P_{n}^{t}}\left(F_{i}^{t-1}-F_{i}^{t}\right)^{2}A_{i}^{t-1}, \tag{4}\]

where \(P_{n}^{t}\) is the number of proposals for \(I_{n}^{t}\), and \(F_{i}^{t-1}\) and \(F_{i}^{t}\) are the features extracted from the previous and new models, respectively. The squared difference \((F_{i}^{t-1}-F_{i}^{t})^{2}\) penalizes larger deviations between the previous and new features, which further encourages the new model to reproduce informative features from the previous model. By using the attention maps to weight the MSE term, AFD ensures that the new model focuses on reproducing the most important features from the previous model, while allowing some flexibility in reproducing the less informative features. The overall ARD loss function is defined as:

\[\mathcal{L}_{ARD}=\mathcal{L}_{AFD}+\gamma\mathcal{L}_{PAD}, \tag{5}\]

where \(\gamma\) is a hyperparameter that controls the strength of the regularization. The ARD loss not only aligns the features of each proposal but also affects the position deviation of each anchor point. This spatial attention feature alignment reduces the impact of the background shift caused by the imbalance between new and previous classes and promotes knowledge transfer from the previous model to the new model.
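A minimal PyTorch sketch of Eqs. (2)-(5) could look as follows; the reduction choices (Frobenius norm for PAD, mean for AFD) are assumptions where the text leaves them unspecified.

```python
import torch

def spatial_attention(feats, p=2):
    """Eq. (2): sum over channels of |F|^p -> one (H, W) map per proposal."""
    return feats.abs().pow(p).sum(dim=1)

def ard_loss(f_old, f_new, gamma=1.0, p=2):
    """Attentive RoI Distillation, Eqs. (3)-(5), sketched.

    f_old, f_new: (P, C, H, W) RoI features from the previous/current model;
    in practice f_old would be detached (no gradient into the teacher).
    """
    a_old = spatial_attention(f_old, p)              # A^{t-1}, shape (P, H, W)
    a_new = spatial_attention(f_new, p)              # A^{t},   shape (P, H, W)
    l_pad = (a_old - a_new).norm()                   # Eq. (3)
    # Eq. (4): squared feature difference, weighted by teacher attention.
    l_afd = ((f_old - f_new).pow(2) * a_old.unsqueeze(1)).mean()
    return l_afd + gamma * l_pad                     # Eq. (5)
```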
### Inclusive Loss with Background Constraint

To avoid forgetting in the classification head, we follow the unbiased classification and distillation losses proposed by [7, 8]. However, due to our Augmented Box Replay strategy, the input image \(\hat{I}_{n}^{t}\) contains many annotations of previous objects. This means that using the unbiased losses directly is not feasible, as it would ignore the positive influence of \(B^{t-1}\) on the previous categories during the training phase. Therefore, we introduce an Inclusive Loss with Background Constraint that adapts the unbiased classification and distillation losses to ABR. In detail, the Inclusive Classification (IC) Loss is defined as follows:

\[\mathcal{L}_{IC}=-\frac{1}{P_{n}^{t}}\sum\limits_{i=1}^{P_{n}^{t}}\begin{cases}\log\Big(p_{i}^{b}+\sum\limits_{c\in\mathcal{C}^{1:t-1}}p_{i}^{c}\Big),&c_{i}=\mathcal{C}^{b}\\ \log p_{i}^{c_{i}},&c_{i}\in\mathcal{C}^{1:t}\end{cases} \tag{6}\]

where \(c_{i}\) is the label of proposal \(i\), \(p_{i}^{b}\) is the predicted probability of the background class \(\mathcal{C}^{b}\), and \(p_{i}^{c}\) is the predicted probability of class \(c\). For positive RoIs of \(\mathcal{C}^{1:t}\) in ABR, the standard RCN loss based on cross-entropy is maintained. However, for negative RoIs, the sum of the probabilities of \(\mathcal{C}^{1:t-1}\) is treated as part of \(\mathcal{C}^{b}\), ensuring that the model is not penalized for predicting \(\mathcal{C}^{1:t-1}\) on unlabeled objects. Moreover, the inclusive distillation (ID) loss maintains the performance of task \(t-1\) by aligning the probability of the previous model for the background class with the probabilities of the new model for both \(\mathcal{C}^{b}\) and \(\mathcal{C}^{t}\). The training data for ABR includes groundtruth annotations from box rehearsal, and the teacher model can detect previous objects. Therefore, we only need to focus on each proposal of \(\mathcal{C}^{t}\):

\[\mathcal{L}_{ID}=-\frac{1}{\Omega}\begin{cases}p_{i}^{b,t-1}\log\Big(p_{i}^{b,t}+\sum\limits_{c\in\mathcal{C}^{t}}p_{i}^{c,t}\Big),&c_{i}=\mathcal{C}^{b}\\ \sum\limits_{c\in\mathcal{C}^{1:t-1}}p_{i}^{c,t-1}\log p_{i}^{c,t},&c_{i}\in\mathcal{C}^{1:t}\end{cases} \tag{7}\]

where \(\Omega=|\mathcal{C}^{1:t-1}|+1\) is the number of previous and background classes, \(p_{i}^{b,t-1}\) and \(p_{i}^{c,t-1}\) are the classification probabilities of the background class and the previous classes predicted by the task \(t-1\) model for proposal \(i\), and \(p_{i}^{b,t}\) and \(p_{i}^{c,t}\) are the corresponding probabilities of the background, previous and new classes predicted by the current model in task \(t\).
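As a concrete illustration, a minimal PyTorch sketch of the inclusive classification loss in Eq. (6) is given below; the class-index layout (background at index 0, previous classes next) is an assumption for the sketch.

```python
import torch
import torch.nn.functional as F

def inclusive_classification_loss(logits, labels, num_prev):
    """Eq. (6), sketched. Index 0 is background, 1..num_prev are old classes.

    logits: (P, 1 + num_prev + num_new) RoI classification scores.
    labels: (P,) long tensor; 0 marks negative (background) RoIs.
    """
    probs = logits.softmax(dim=-1)
    bg = labels == 0
    # Negative RoIs: the probability mass of previous classes is folded
    # into the background term, so unlabeled old objects are not punished.
    p_bg = probs[bg, 0] + probs[bg, 1:1 + num_prev].sum(dim=-1)
    loss_bg = -torch.log(p_bg + 1e-9).sum()
    # Positive RoIs of any seen class keep the standard cross-entropy.
    loss_fg = F.cross_entropy(logits[~bg], labels[~bg], reduction="sum")
    return (loss_bg + loss_fg) / logits.shape[0]
```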
## 4 Experiments

### Experimental Settings

**Datasets:** We evaluate the proposed method on two publicly available datasets, namely PASCAL VOC 2007 [16] and MS COCO 2017 [37]. PASCAL VOC 2007 contains 20 object classes and 9,963 images, 50% of which are used for training and validation and the remaining 50% for testing, following [16]. MS COCO 2017, a challenging dataset, has 80 different object classes and provides 83,000 images for training, 40,000 for validation and 41,000 for testing.

**IOD Protocols:** Following previous works on this topic [7, 29, 55], we adopt the same experimental protocols. Each training task contains all images which have at least one bounding box from a new class. The annotations are available only for the new classes, while the previous and future classes are not annotated. This setting is practical and can also have repetitions of images across tasks.

**Implementation Details:** Similar to [7, 29, 40, 48, 55, 60, 66], we use the Faster R-CNN [52] architecture with a ResNet-50 [24] backbone pretrained on ImageNet [11]. We train the network with the SGD optimizer, momentum of 0.9 and weight decay of \(10^{-4}\). We use a learning rate of \(5\times 10^{-3}\) for the initial task and \(2\times 10^{-3}\) for the subsequent tasks. We use 15K iterations for 5 or 10 class increments in a task and 5K iterations when adding 1 or 2 new classes. We set the memory size to 2,000 for all experiments on PASCAL VOC 2007, and to 10,000 for the 70-10 and 5,000 for the 40-40 settings on MS COCO 2017, respectively. Our method uses a stack to store boxes, which are randomly selected and placed (while respecting the overlap criteria) during each iteration. To balance the number of old and new objects, we set a 1:1:2 ratio for mixup, mosaic, and new images based on comparisons across different settings.

**Evaluation:** We evaluate the methods in terms of mean average precision at a 0.5 IoU threshold for PASCAL VOC 2007. For MS COCO 2017, we also report the mAP at IoU thresholds ranging from 0.5 to 0.95 (mAP@[50:95]), at 0.50 IoU (mAP@50) and at 0.75 IoU (mAP@75).

### Quantitative Evaluation

Following previous works [7, 29, 55, 60, 66], we evaluate our method on settings with different numbers of initial classes and one or more incremental tasks. We compare our method with two baselines: Fine-tuning, where the model is trained on the data incrementally without any regularization or data replay, and Joint Training, where the model is trained on the entire dataset with all annotations. All results are obtained after training on the last task.

#### 4.2.1 PASCAL VOC 2007

For PASCAL VOC 2007, we perform experiments on the 19-1, 15-5, 10-10 and 5-15 single incremental task settings, adding 1, 5, 10 and 15 classes respectively. For multi-step incremental settings, we evaluate on the 10-5, 5-5, 10-2, 15-1 and 10-1 settings, where we add 5, 5, 2, 1 and 1 classes respectively at every step until all 20 classes are seen.

**Single-step increments:** We benchmark our ABR method against the existing methods in Table 2. We notice that Fine-tuning suffers from catastrophic forgetting across all settings. ABR outperforms all other methods across all settings, significantly improving over MMA on the new classes by 4.5 mAP on 15-5, 8.9 mAP on 10-10 and 9.8 mAP on 5-15. We argue that the enhanced stability and plasticity are due to the augmented box replay of previous classes and our effective attention distillation. Our improvements over the methods storing exemplars [21, 28, 29] confirm the importance of box replay for IOD.

**Multi-step increments:** Catastrophic forgetting and the background shift problem are more severe in the longer incremental settings, as seen from the performance in Table 3. Fine-tuning suffers from almost complete forgetting of the initial classes. ABR improves over the closest competitor MMA by 3.9 mAP on 10-5, 3.5 mAP on 10-2, 3.4 mAP on 15-1 and 7.7 mAP on the longest and most challenging setting, 10-1. It is interesting to observe that most methods struggle on the 5-5 setting with only 5 initial classes, while ABR improves over MMA by 19.5 mAP. This implies that the existing methods require more classes in the initial task to achieve better generalization and thus fail to adapt to new classes when the first task has fewer classes, as in the 5-5 setting. On the most difficult setting of 10-1 with 10 increments, ABR outperforms MMA by 4.1 mAP on the previous classes and 11.1 mAP on the new classes. Note that for multiple-increment settings, the improvement in the performance of the incremental classes is not only due to better learning of the new classes but also due to less forgetting of the intermediate task classes after moving to new tasks.

#### 4.2.2 MS COCO 2017

For MS COCO 2017, we perform experiments on the 40-40 and 70-10 settings, adding 40 and 10 classes respectively.
As shown in Table 4, Fine-tuning suffers from catastrophic forgetting on both settings. While Faster ILOD and MMA have improved over Fine-tuning, our method improves the average mAP@[50:95] over MMA by 1.5 on the 40-40 setting and by 0.9 on the 70-10 setting. These results signify less forgetting and better adaptation to new classes with our method.

### Analysis and Ablation Study

We investigate the role of the network components, replay selection strategies and augmentation types in Table 5 on the VOC 10-10 and 10-5 settings. We take as baseline the model with the RCN classification and distillation loss proposed by [7]. We show that our attentive RoI distillation improves over the RPN distillation used by [7, 48], owing to better exploitation of the location and feature information of the RoIs. For the replay selection strategies, we implemented the herding strategy [50] for selecting boxes to replay. Our method improves by 1%~1.5% mAP over the herding strategy. We observe that our proposed prototype box selection can better capture representative prototype samples for the previous classes. Further, we add mixup and mosaic replay individually and observe that both strategies improve the performance on previous and new classes. The best performance is achieved when both mixup and mosaic replay are performed together with the new images. We also investigate the role of the memory size and train ABR with different memory sizes for the previous class boxes. Fig. 4 plots the mAP@50 results with increasing memory size. The performance increases with increasing memory size, i.e., with the replay of more previous objects, and after the memory size exceeds 2,000, the growth rate of the mAP becomes more stable. Therefore, in the main experiments, we use a memory size of 2,000. Table 6 presents a comparison between image replay and our proposed ABR method. Using the same number of objects ensures that the original information about the previous categories stored in the memory buffer is consistent, while using the same storage space controls for practicality in real-world applications. As shown in Table 6, despite having the same number of objects, image replay performs worse than augmented box replay in recognizing new classes. This confirms that replaying original images can lead to foreground shift and limit the adaptation to new classes. On the other hand, our memory buffer contains about 4 times as many original objects for previous classes as image replay.

### Visualization

Fig. 5 shows some examples of images generated by mixup replay in the VOC 10-10 setting.

\begin{table} \begin{tabular}{l||c c c|c c c} & \multicolumn{3}{c|}{**40-40 mAP@**} & \multicolumn{3}{c}{**70-10 mAP@**} \\ **\#Method** & \(\mathbf{[50:95]}\) & \(\mathbf{50}\) & \(\mathbf{75}\) & \(\mathbf{[50:95]}\) & \(\mathbf{50}\) & \(\mathbf{75}\) \\ \hline \hline Joint Training & 35.9 & 60.5 & 38.0 & 35.9 & 60.5 & 38.0 \\ Fine-tuning & 19.0 & 31.2 & 20.4 & 5.6 & 8.6 & 6.2 \\ \hline Faster ILOD [48] & 20.6 & 40.1 & - & 21.3 & 39.9 & - \\ MMA [7] & 33.0 & 56.6 & 34.6 & 30.2 & 52.1 & 31.5 \\ \hline **ABR (Ours)** & **34.5** & **57.8** & **35.2** & **31.1** & **52.9** & **32.7** \\ \end{tabular} \end{table} Table 4: mAP results on MS COCO 2017 at different IoU thresholds, where the best among columns is in **bold**.
\begin{table} \begin{tabular}{l||c c c|c c c|c c c|c c c} & \multicolumn{3}{c|}{**19-1**} & \multicolumn{3}{c|}{**15-5**} & \multicolumn{3}{c|}{**10-10**} & \multicolumn{3}{c}{**5-15**} \\ **\#Method** & **1-19** & **20** & **1-20** & **1-15** & **16-20** & **1-20** & **1-10** & **11-20** & **1-20** & **1-5** & **6-20** & **1-20** \\ \hline \hline Joint Training & 70.1 & 75.7 & 74.3 & 76.4 & 67.8 & 74.3 & 75.5 & 73.0 & 74.3 & 70.1 & 75.7 & 74.3 \\ Fine-tuning & 11.8 & 64.7 & 14.4 & 15.9 & 54.2 & 25.5 & 2.6 & 63.4 & 32.9 & 6.9 & 63.1 & 49.1 \\ \hline ILOD (FasterRCNN)† [55] & 69.8 & 64.5 & 69.6 & 72.5 & 58.5 & 68.9 & 69.8 & 53.7 & 61.7 & 61.0 & 37.3 & 43.2 \\ Faster ILOD† [48] & 70.9 & 63.2 & 70.6 & **73.1** & 57.3 & 69.2 & 70.3 & 53.0 & 61.7 & 62.0 & 37.1 & 43.3 \\ PPAS [66] & 70.5 & 53.0 & 69.2 & - & - & - & 63.5 & 60.0 & 61.8 & - & - & - \\ MVC [60] & 70.2 & 60.6 & 69.7 & 69.4 & 57.9 & 66.5 & 66.2 & 66.0 & 66.1 & - & - & - \\ MMA† [7] & 70.9 & 62.9 & 70.5 & 72.7 & 60.6 & 69.7 & 69.8 & 63.9 & 66.8 & **66.8** & 57.2 & 59.6 \\ \hline ORE* [28] & 69.4 & 60.1 & 68.9 & 71.8 & 58.7 & 68.5 & 60.4 & 68.8 & 64.6 & - & - & - \\ OW-DETR* [21] & 70.2 & 62.0 & 69.8 & 72.2 & 59.8 & 69.1 & 63.5 & 67.9 & 65.7 & - & - & - \\ Meta-ILOD* [29] & 70.9 & 57.6 & 70.2 & 71.7 & 55.9 & 67.8 & 68.4 & 64.3 & 66.3 & - & - & - \\ \hline **ABR (Ours)** & **71.0** & **69.7** & **70.9** & 73.0 & **65.1** & **71.0** & **71.2** & **72.8** & **72.0** & 64.7 & **71.0** & **69.4** \\ \end{tabular} \end{table} Table 2: mAP@0.5 (%) results on settings with single increments on Pascal-VOC 2007. Best among columns in **bold** and second best among columns are underlined. Methods with * use exemplars. †: results from re-implementation.

\begin{table} \begin{tabular}{l||c c c|c c c|c c c|c c c|c c c} & \multicolumn{3}{c|}{**10-5 (3 tasks)**} & \multicolumn{3}{c|}{**5-5 (4 tasks)**} & \multicolumn{3}{c|}{**10-2 (6 tasks)**} & \multicolumn{3}{c|}{**15-1 (6 tasks)**} & \multicolumn{3}{c}{**10-1 (11 tasks)**} \\ **\#Method** & **1-10** & **11-20** & **1-20** & **1-5** & **6-20** & **1-20** & **1-10** & **11-20** & **1-20** & **1-15** & **16-20** & **1-20** & **1-10** & **11-20** & **1-20** \\ \hline \hline Joint Training & 75.5 & 73.0 & 74.3 & 70.1 & 75.7 & 74.3 & 75.5 & 73.0 & 74.3 & 76.4 & 67.8 & 74.3 & 75.5 & 73.0 & 74.3 \\ Fine-tuning & 5.3 & 30.6 & 18.0 & 0.5 & 18.3 & 13.8 & 3.79 & 13.6 & 8.7 & 0.0 & 10.47 & 5.3 & 0.0 & 5.1 & 2.55 \\ \hline ILOD (FasterRCNN)† [55] & 67.2 & 59.4 & 63.3 & 58.5 & 15.6 & 26.3 & 62.1 & 49.8 & 55.9 & 65.6 & 47.6 & 60.2 & 52.9 & 41.5 & 47.2 \\ Faster ILOD† [48] & 68.2 & 57.9 & 63.1 & 55.7 & 16.0 & 25.9 & 64.2 & 48.6 & 56.4 & 66.9 & 44.5 & 61.3 & 53.5 & 41.0 & 47.3 \\ MMA† [7] & 67.4 & 60.5 & 64.0 & 62.3 & 31.2 & 38.9 & 65.7 & 52.5 & 59.1 & 67.2 & 47.8 & 62.3 & 57.9 & 44.6 & 51.2 \\ \hline **ABR (Ours)** & **68.7** & **67.1** & **67.9** & **64.7** & **56.4** & **58.4** & **67.0** & **58.1** & **62.6** & **68.7** & **56.7** & **65.7** & **62.0** & **55.7** & **58.9** \\ \end{tabular} \end{table} Table 3: mAP@0.5 (%) results on settings with multiple increments on Pascal-VOC 2007. Best among columns in **bold** and second best among columns are underlined. †: results from re-implementation.

It can be seen intuitively that the mixup strategy integrates the boxes reasonably into the new images and minimizes the occlusion with the new objects. In addition, the background information surrounding the previous objects is greatly enriched. The inference results are available in the supplementary material.
## 5 Conclusion

In this paper, we studied the experience replay method for the incremental object detection problem and identified the critical issue of foreground shift during old-image replay. We hypothesize that the foreground shift is the reason that replay methods, which are dominant in incremental learning for image classification, have been little studied for IOD. To tackle this problem, our proposed method ABR stores bounding boxes from old classes and replays them with new images using mixup and mosaic augmentation strategies. This avoids the foreground shift, since only the old classes are stored and replayed, and not the unlabeled new classes from old images. In addition to box replay, the proposed attentive RoI distillation uses both the location and feature information of the RoIs extracted from the RPN and enables the retention of meaningful knowledge of old classes. Further, our method reduces the memory overhead significantly. We demonstrate that ABR outperforms existing methods across all settings on representative datasets. This work lays the foundation for bounding box replay instead of the traditional image or feature replay methods for object detection tasks. Future research should explore the implications of the foreground shift in incremental semantic segmentation and extend our approach to popular transformer methods [42].

**Acknowledgement.** This work is supported by the National Natural Science Foundation of China (Grant No. 62127807, 62206135). We acknowledge projects TED2021-132513B-I00 and PID2022-143257NB-I00, financed by MCIN/AEI/10.13039/501100011033 and FSE+, and the Generalitat de Catalunya CERCA Program.

\begin{table} \begin{tabular}{c|c|c|c|c c c} \multirow{2}{*}{**Type**} & \multirow{2}{*}{**Buffer Size**} & \multirow{2}{*}{**Objects**} & \multirow{2}{*}{**Memory\(\downarrow\)**} & \multicolumn{3}{c}{**VOC (10-10)**} \\ & & & & **1-10** & **11-20** & **1-20** \\ \hline - & - & - & - & 47.9 & 76.2 & 62.0 \\ Image & 182 & 455 & 15.5Mb & 70.2 & 62.2 & 66.2 \\ Image & 800 & 2000 & 68Mb & 71.6 & 57.9 & 64.7 \\ ABR & 2000 & 2000 & 15.5Mb & 71.2 & 72.3 & 72.0 \\ \end{tabular} \end{table} Table 6: Rehearsal alternatives on Pascal VOC 2007 in mAP@50. All experiments are done with our proposed method using image replay or augmented box replay (ABR).

Figure 4: The average mAP@50 of previous, current and total classes for different memory sizes in the PASCAL VOC 2007 15-5 setting.

Figure 5: Examples of images generated by mixup augmentation for the 10-10 setting on PASCAL VOC 2007. Blue boxes represent previous classes which are replayed in the background of new images. Orange boxes represent the ground truth annotations of the current classes.
\begin{table} \begin{tabular}{c c c c c|c c c|c c c c} \multirow{2}{*}{**RCN**} & \multirow{2}{*}{**RPN**} & \multirow{2}{*}{**RoI**} & \multirow{2}{*}{**Selection**} & \multirow{2}{*}{**Augmented Type**} & \multicolumn{3}{c|}{**VOC (10-10)**} & \multicolumn{4}{c}{**VOC (10-5)**} \\ & & & & & **1-10** & **11-20** & **1-20** & **1-10** & **11-15** & **16-20** & **1-20** \\ \hline \hline ✓ & & & & & & & & & & 43.5 & 75.9 & 59.4 & 65.1 & 31.3 & 59.8 & 55.3 \\ ✓ & & ✓ & & & & & & & 45.2 & 75.6 & 60.4 & 67.1 & 30.5 & 59.3 & 55.9 \\ ✓ & & & ✓ & & & & & & 47.9 & **76.2** & 62.0 & 67.0 & 35.6 & 58.4 & 57.0 \\ \hline ✓ & & & & ✓ & & & & & & 68.9 & 72.6 & 70.7 & 67.4 & 72.8 & **63.5** & 67.7 \\ ✓ & & & ✓ & ✓ & & & & & & 70.6 & 71.2 & 70.9 & 67.0 & 70.7 & 61.8 & 66.6 \\ \hline ✓ & & ✓ & & & ✓ & & & ✓ & & & 69.7 & 72.4 & 71.0 & 67.4 & **72.9** & 61.1 & 67.2 \\ ✓ & & & ✓ & & & & & ✓ & & 68.7 & 71.5 & 70.1 & 67.0 & 71.2 & 62.8 & 67.0 \\ ✓ & & & ✓ & & ✓ & & & ✓ & & 69.4 & 71.6 & 70.5 & 67.4 & 72.3 & 61.1 & 67.2 \\ ✓ & & & ✓ & ✓ & & & ✓ & ✓ & & **71.2** & 72.8 & **72.0** & **68.7** & 71.5 & 62.8 & **67.9** \\ \end{tabular} \end{table} Table 5: Ablation study highlighting the contribution of different components, where the best among columns is in **bold**.
2304.04715
An unusual bifurcation scenario in a stably stratified, valley-shaped enclosure heated from below
We delineate the structure of steady laminar flows within a stably stratified, valley-shaped triangular cavity heated from below through linear stability analysis and Navier-Stokes simulations. We derive an exact solution to the quiescent conduction state, and characterize the flow via the stratification perturbation parameter, $\Pi_s$, which is a measure of the strength of the surface heat flux relative to the background stable stratification. Beyond a threshold value of $\Pi_s$, two unstable eigenmodes appear, one marked by a dominant central circulation, and the other one exhibiting dual circulations of equal strength. Through Navier-Stokes simulations, we confirm that the central-circulation eigenmode generates a pair of asymmetric steady states, whereas the dual-circulation eigenmode leads to distinct upslope and downslope symmetric steady states. Linear stability analysis and Navier-Stokes simulations jointly confirm the instability of the two symmetric steady states, both of which transition to the asymmetric steady state under a perturbation. Thus, for a given set of dimensionless parameters, the Navier-Stokes equations admit at least five possible steady-state solutions. Two of these solutions, namely the quiescent, pure conduction state and the counter-intuitive symmetric downslope state, have previously been overlooked in heated, stably stratified, valley-shaped enclosures. These five flow solutions reveal an intriguing bifurcation structure, including both a perfect pitchfork bifurcation and a nested bifurcation that gives rise to two distinct states. The inner bifurcation, while resembling a pitchfork in some respects, does not break any symmetry of the valley due to the lack of any possible horizontal axis of symmetry. The categorization of this inner bifurcation remains an unresolved matter, as it does not conform to any established descriptions of canonical bifurcations.
Patrick J. Stofanak, Cheng-Nian Xiao, Inanc Senocak
2023-04-10T17:20:31Z
http://arxiv.org/abs/2304.04715v4
# Asymmetric nested pitchfork bifurcation in stratified anabatic flows in idealized valleys

###### Abstract

We characterize the full structure of steady laminar anabatic flows in a stably stratified V-shaped valley using a dynamical systems approach. Our approach is based on the discovery of a quiescent conduction state from which a unique asymmetric nested pitchfork bifurcation emerges. We characterize the flow via the stratification perturbation parameter, \(\Pi_{s}\), which is a measure of the surface heat flux relative to the strength of the background stable stratification. At very low \(\Pi_{s}\) values, the pure conduction state remains stable. Beyond a threshold \(\Pi_{s}\) value, it bifurcates into asymmetric and symmetric circulation patterns, with the critical value for the asymmetric state being slightly lower than that of the symmetric state. The asymmetric instability manifests as a perfect mirror image of a clockwise and counterclockwise circulation in the valley. The symmetric instability gives rise to upslope and downslope convection patterns which are not mirror images of each other. Linear modal analysis and numerical simulations show that these two symmetric states are linearly unstable and will transition to the asymmetric state under the slightest perturbation.

stratified flows, convective instability, pitchfork bifurcation, slope flows

## 1 Introduction

During the evening transition in the atmospheric boundary layer, surface cooling causes downslope, or katabatic, flows. In complex terrain such as a valley, these katabatic flows lead to the formation of a stably stratified cold pool, which is sustained throughout the night. During the morning transition, surface heating then causes upslope, or anabatic, flows against the stably stratified cold pool, leading to its eventual breakup. Numerical weather prediction models are known to struggle with stably stratified flows in complex terrain (Holtslag _et al._, 2013) and with transition periods (Angevine _et al._, 2020), which can negatively affect predictions of morning fog formation and pollutant transport (Boutle _et al._, 2018; Salmond & McKendry, 2005). Thus, we aim to better understand stably stratified anabatic flows in a V-shaped valley with idealized flow conditions. This unique setup provides parallels with the prior experimental work of Princevac & Fernando (2008) and enables future experiments on flow regimes and instabilities. Thermal convection in attic-shaped triangular cavities with isothermal conditions on sloped walls, without any stratification effects, has been studied (Saha & Khan, 2011). In such configurations, a symmetric convection pattern prevails at low Grashof numbers, and a subcritical pitchfork bifurcation occurs at larger parameter values, leading to a steady asymmetric state (Ridouane & Campo, 2006; Omri _et al._, 2007), which has been shown to agree with experiments (Holtzman _et al._, 2000) as well. In contrast to attic-shaped cavities, relatively little attention has been paid to convection in V-shaped triangular geometries with stratification effects. Princevac & Fernando (2008) conducted experiments with stratified saline water in a V-shaped tank heated with a constant heat flux on both bottom walls, and observed the eventual breakup of the stratification. They introduced the dimensionless breakup parameter \(B\), along with the slope angle of the valley walls, to characterize the flow patterns that form along the sloping walls.
In a series of works, Bhowmick _et al._ used two-dimensional (2D) Navier-Stokes (N-S) simulations to investigate flow dynamics in triangular cavities heated from below with an initially stratified fluid and adiabatic conditions on the top boundary (Bhowmick _et al._, 2018), and without any stratification effects but cooled from the top boundary with isothermal conditions (Bhowmick _et al._, 2019, 2022). A common dynamic observed in these 2D N-S simulations as a function of increasing Rayleigh number is the establishment of a steady symmetric circulation that transitions to a steady asymmetric circulation through a pitchfork bifurcation, followed by the emergence of a periodic state through a Hopf bifurcation. The present study analyzes the instabilities and steady-state convection patterns in an idealized V-shaped valley with heating on the bottom surfaces, under a set of hitherto unexplored conditions, using linear stability analysis (LSA) and three-dimensional (3D) simulations of the Navier-Stokes equations. We establish an expanded dimensionless parameter space for the proposed configuration and investigate the transitions that occur between multiple possible flow states in a multi-stable configuration. We impose a constant positive heat flux on both bottom walls of the valley that permits a pure conduction state, which parallels the experimental conditions in Princevac & Fernando (2008). The advantage of using the motionless steady state as the starting point in our studies is the fact that linear stability agrees with nonlinear energy stability for such cases (Shir & Joseph, 1968); hence, the exact bifurcation of the flow at the first critical stability threshold can be fully captured with linear stability analysis, which we also validate with 3D N-S simulations. Furthermore, we consider a constant background stable stratification independent of the thermal forcing at the surface, following the Prandtl slope flow model (Prandtl, 1942; Xiao & Senocak, 2019, 2022).

## 2 Technical formulation

A schematic of the computational domain is shown in Figure 1, where \(H\) is the height of the domain and \(\alpha\) is the slope angle of the two valley walls. The 2D valley geometry lies in the \(x-y\) plane, with the homogeneous \(z\) direction into the page. Thus \(u\) represents the horizontal velocity in the \(x\) direction, \(v\) represents the vertical velocity in the \(y\) direction, and \(w\) represents the spanwise velocity in the \(z\) direction. In this paper, we consider only 2D instabilities arising from LSA, and thus the velocity in the homogeneous \(z\) direction is zero for all cases shown here. The buoyancy is given by \(b=g\left(\Theta-\Theta_{e}\right)/\Theta_{r}\), where \(\Theta_{e}\) is the surrounding environment potential temperature, and \(\Theta_{r}\) is a reference potential temperature. A constant background stratification is imposed through the buoyancy frequency, or Brunt-Väisälä frequency, given by \(N=\sqrt{\left(g/\Theta_{r}\right)\partial\Theta_{e}/\partial y}\).
The continuity, momentum, and buoyancy equations, with the Boussinesq approximation, can be written as follows:

\[\frac{\partial u_{i}}{\partial x_{i}}=0, \tag{1}\]

\[\frac{\partial u_{i}}{\partial t}+\frac{\partial u_{i}u_{j}}{\partial x_{j}}=-\frac{1}{\rho}\frac{\partial p}{\partial x_{i}}+g_{i}b+\frac{\partial}{\partial x_{j}}\left(\nu\frac{\partial u_{i}}{\partial x_{j}}\right), \tag{2}\]

\[\frac{\partial b}{\partial t}+\frac{\partial u_{j}b}{\partial x_{j}}=\frac{\partial}{\partial x_{j}}\left(\beta\frac{\partial b}{\partial x_{j}}\right)-N^{2}g_{j}u_{j}, \tag{3}\]

where \(\nu\) is the kinematic viscosity, \(\beta\) is the thermal diffusivity, and \(g_{i}\) represents the normalized gravity vector, \(g_{i}=[0,1,0]\), acting only in the \(y\) direction. The boundary conditions for buoyancy include a constant, positive buoyancy flux on the two bottom walls, defined as \(B_{s}=\beta\partial b/\partial n\), where \(n\) is the direction normal to the sloping bottom boundaries, and where a positive \(\partial b/\partial n\) corresponds to heating of the fluid. On the top boundary, a constant \(b=0\) is imposed. For velocity, a no-slip condition is imposed on the two bottom walls, and a free-slip condition is imposed on the top boundary. Under these conditions, an analytical solution of Equations 2 and 3 with zero velocity is given by the following \(p\) and \(b\) profiles:

\[p(y)=\frac{-\rho}{2}\frac{B_{s}}{\beta\cos\alpha}\left(y-H\right)^{2},\qquad b(y)=\frac{B_{s}}{\beta\cos\alpha}\left(H-y\right). \tag{4}\]

This motionless steady state within the heated valley is only possible due to the constant buoyancy imposed at the horizontal boundary combined with the constant heat flux at the sloped surfaces, which admits a linear buoyancy and quadratic pressure profile as a solution. The current configuration parallels the pure conduction state of Rayleigh–Bénard convection at low values of the Rayleigh number. For 3D nonlinear simulations, all variables are periodic in the homogeneous \(z\) direction. The flow in the idealized valley with stable stratification is controlled by the following dimensionless parameters:

\[\Pi_{s}=\frac{|B_{s}|}{\beta N^{2}},\quad\Pi_{h}=\frac{NH^{2}}{\beta},\quad Pr=\frac{\nu}{\beta},\quad\alpha, \tag{5}\]

where \(Pr\) is the Prandtl number and \(\alpha\) is the slope angle. The stratification perturbation parameter, \(\Pi_{s}\), first introduced to characterize the stability of Prandtl slope flows (Xiao & Senocak, 2019), is key to the present investigation. It represents the ratio between the imposed surface buoyancy gradient and the stabilizing background stratification. The buoyancy number, \(\Pi_{h}\), which has been applied previously to cases of stratified flow with an imposed length scale (Grayer _et al._, 2020), represents the ratio between the diffusive and stratification time scales. We note that the set of dimensionless parameters given in Eq. 5 is larger than the set adopted in previous studies of flows in idealized valleys. For example, Princevac & Fernando (2008) introduce the dimensionless breakup parameter \(B=N^{3}H^{2}/B_{s}\), along with \(Pr\) and \(\alpha\), whereas the Rayleigh number was used in Bhowmick _et al._ (2018). In light of the expanded parameter space given in Eq. 5, we observe that \(B\) is a combination of two independent dimensionless parameters, \(B=\Pi_{h}/\Pi_{s}\). In this way, we can say that four dimensionless parameters are needed to fully describe the flow dynamics.

Figure 1: Schematic of the computational domain with key external parameters.
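To make the parameter definitions concrete, the short script below evaluates \(\Pi_{s}\), \(\Pi_{h}\), \(Pr\), the derived breakup parameter \(B=\Pi_{h}/\Pi_{s}\), and the conduction buoyancy profile of Eq. 4; the physical input values are illustrative only and not taken from the paper.

```python
import numpy as np

# Illustrative physical inputs (not values from the paper).
B_s   = 1.0e-7   # surface buoyancy flux  [m^2 s^-3]
beta  = 1.4e-7   # thermal diffusivity    [m^2 s^-1]
nu    = 1.0e-6   # kinematic viscosity    [m^2 s^-1]
N     = 1.0      # buoyancy frequency     [s^-1]
H     = 0.01     # valley height          [m]
alpha = np.deg2rad(30.0)

Pi_s = abs(B_s) / (beta * N**2)   # stratification perturbation parameter
Pi_h = N * H**2 / beta            # buoyancy number
Pr   = nu / beta                  # Prandtl number
B    = Pi_h / Pi_s                # breakup parameter of Princevac & Fernando

# Conduction-state buoyancy profile of Eq. (4) along the vertical coordinate y.
y = np.linspace(0.0, H, 5)
b = B_s / (beta * np.cos(alpha)) * (H - y)   # linear buoyancy profile

print(f"Pi_s={Pi_s:.3f}, Pi_h={Pi_h:.1f}, Pr={Pr:.2f}, B={B:.1f}")
```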
We use the following scales to normalize dimensional quantities:

\[l_{0}=H,\quad u_{0}=\sqrt{\frac{|B_{s}|}{N\sin\alpha}},\quad b_{0}=\frac{H}{N^{2}},\quad p_{0}=\rho_{0}\frac{H^{2}}{N^{2}}. \tag{6}\]

Additionally, a timescale can be defined as \(t_{0}:=l_{0}/u_{0}\).

### Linear stability analysis

We linearize Equations 1-3 around an arbitrary base flow defined by \((U_{i},\bar{p},\bar{b})\), and assume disturbances take the form of waves given by

\[\hat{\mathbf{q}}(x,y,t)=\left[\hat{u}(x,y),\hat{v}(x,y),\hat{w}(x,y),\hat{p}(x,y),\hat{b}(x,y)\right]\exp\left(\omega t\right), \tag{7}\]

where \(\hat{\mathbf{q}}\) represents the vector of 2D disturbance quantities, and \(\omega\) represents the temporal growth rate. Substitution of the above disturbance quantities into the linearized Navier-Stokes equations leads to the following generalized eigenvalue problem:

\[\mathbf{A}\hat{\mathbf{q}}(x,y)=\omega\mathbf{B}\hat{\mathbf{q}}(x,y). \tag{8}\]

By solving the eigenvalue problem, we can determine the global linear stability behavior of the given base flow for a 2D valley. The real part of the growth rate, \(\mathrm{Re}(\omega)\), indicates whether an infinitesimal disturbance will exponentially grow, when \(\mathrm{Re}(\omega)>0\), or decay, when \(\mathrm{Re}(\omega)<0\), while the imaginary part, \(\mathrm{Im}(\omega)\), indicates the temporal frequency of the resulting mode of instability. All simulations were carried out using the spectral/hp element code _Nektar++_ (Moxey _et al._, 2020). Numerical integration of the full 3D Navier-Stokes equations was performed to produce steady-state profiles, and to obtain and validate secondary states arising from the primary instabilities.

## 3 Results

### Primary linear stability analysis

We first perform 2D LSA of the V-shaped valley with a zero base flow and the pressure and buoyancy profiles given by Eq. 4. In all simulations, the slope angle \(\alpha\) and \(\Pi_{h}\) are fixed at \(30^{\circ}\) and \(1500\), respectively. \(Pr\) is set to \(7.0\) to parallel the experimental study of Princevac & Fernando (2008). We only vary \(\Pi_{s}\) throughout the study. For small values of \(\Pi_{s}\), meaning the perturbation caused by the surface buoyancy flux is small compared to the stabilizing background stratification, the quiescent, pure conduction state is stable. As we increase \(\Pi_{s}\), this base state becomes linearly unstable. Specifically, LSA reveals two modes of instability at critical \(\Pi_{s}\) values: an asymmetric and a symmetric mode. The eigenfunctions of both modes are shown in Figure 2. The imaginary parts of the eigenvalues of both modes are zero, indicating that they are non-oscillatory. The asymmetric mode's velocity profile, as displayed in Figure 2a, shows that it consists of one large circulation in the center alongside smaller, counter-rotating corner vortices. Specifically, five distinct circulations are seen in Figure 2a. The magnitude of the center circulation is dominant, with the maximum vorticity magnitude of each of the smaller circulations being approximately an order of magnitude smaller than the next largest. The buoyancy profile of the asymmetric mode, shown in Figure 2b, depicts how the central circulation advects heat away from the hot bottom walls and how the colder fluid near the top wall recirculates down towards the surface. The symmetric mode, shown in Figures 2c and 2d, has two identical main circulations on each side of the valley center.
Its velocity profile shows only four circulations in total, rather than the five seen in the asymmetric mode. The buoyancy profile of the symmetric mode, shown in Figure 2d, shows that the direction of the two central circulations is downslope, which can be seen through the high temperature in the center of the valley. Though a downslope state with heated, sloping walls is counter-intuitive, this suggests that the symmetric state may exist in both upslope and downslope configurations. The reason for the onset of such instabilities can be explained from a consideration of the dimensionless \(\Pi_{s}\) parameter. For very small \(\Pi_{s}\) values, the zero-flow state can remain stable due to the conduction of heat through the fluid, as well as the stabilizing effect of the background stratification, but as the surface heating increases, the stabilizing effect of the stratification is overcome, and convection begins to dissipate the additional heat. This provides a strong parallel to the classic Rayleigh–Bénard problem, and in this sense, the initial instabilities can be viewed as an analogue of the convection cells in Rayleigh–Bénard convection. The growth rates of the symmetric and asymmetric modes are plotted against \(\Pi_{s}\) in Figure 3. Each exhibits a roughly linear trend in growth rate in the unstable regime. A line is fit to these unstable points to give an estimate of the critical \(\Pi_{s}\) value for each mode. We find the critical value of the symmetric mode to be approximately \(0.875\), whereas for the asymmetric mode it is approximately \(0.872\). The slope of the linear trend of growth rate against \(\Pi_{s}\) is found to be \(1.91\) for the symmetric mode and \(1.95\) for the asymmetric mode. Thus, the asymmetric mode has a lower critical value than the symmetric mode, and the growth rate of the asymmetric mode grows faster with \(\Pi_{s}\) in comparison to the symmetric mode. Both of these findings indicate that the asymmetric mode is the most unstable mode in a perfectly symmetric external configuration.

### Steady-state Navier-Stokes solutions

Next, we perform time-integration of the 3D Navier-Stokes equations, Equations 1-3, to obtain the steady-state solutions arising from the symmetric and asymmetric instabilities. The initial conditions for each simulation were defined by the analytical base flow, Eq. 4, plus a small multiple of the eigenvector for each of the unstable modes. In this way, we can observe the initial exponential growth of the disturbance and compare it to the growth rate predicted by LSA.

Figure 2: Visualization of the eigenmodes resulting from linear stability analysis of the analytical base flow for \(\Pi_{s}=0.9\): (a) perturbation velocity and (b) buoyancy of the asymmetric mode, and (c) perturbation velocity and (d) buoyancy of the symmetric mode.

From the eigenvectors shown in Figure 2, we observe that the asymmetric mode shows a clockwise main circulation, and the symmetric mode shows two main circulations that travel down the sloping valley walls. These two modes represent four possible steady-state velocity profiles: the asymmetric state can be either clockwise or counterclockwise, and the symmetric state can be either upslope or downslope. Through manipulations of the initial eigenvector disturbance in our simulations, we can obtain steady-state profiles for each of these four states, as shown in Figure 4.
Focusing first on the asymmetric states shown in Figures 4a and 4b, we can see that each is an exact reflection of the other about the \(y\) axis, which can be explained by the symmetry of the valley geometry and the symmetry of the Navier-Stokes equations about the \(y\) axis. The steady-state profiles of the asymmetric state differ significantly from the corresponding eigenvector, shown in Figure 2a. While there is still one dominant circulation, it is no longer in the center of the valley, instead being attracted to one of the strong upslope flows on either wall of the valley. Additionally, the second circulation is much stronger than the secondary circulations in the eigenvector profile. In comparison, the two symmetric states, shown in Figures 4c and 4d, are not exactly alike. While each of the states is symmetric about the \(y\) axis, they do not exhibit exactly the same flow pattern because of the opposite direction of the main circulations. This is due to the fact that the V-shaped valley geometry does not allow symmetry with respect to the \(x\) axis, and thus the opposite directions of circulation lead to distinct final states. Further, the existence of the downslope flow state seems to go against intuition; with heated sloping walls, we would only expect to see upslope flow. While in nature this is the case, mathematically the downslope flow state exists as a solution and can be achieved as a steady state in simulations through careful initial conditions, and for sufficiently low \(\Pi_{s}\) values. The initial exponential growth of the asymmetric and downslope symmetric disturbances in the nonlinear simulations is compared to the growth rate predicted by LSA in Figure 5a. The initial conditions of both states consist of the analytical buoyancy and pressure base flow plus a small multiple of the eigenvector for the corresponding instability. It is seen that the growth rates from the simulations match closely with what we expect from LSA. Additionally, the difference in slope between the two lines confirms the larger growth rate of the asymmetric instability.

Figure 3: Growth rate of the symmetric and asymmetric instabilities versus \(\Pi_{s}\). Critical values are estimated based on the linear trend of the positive growth rates.

Using the steady-state profiles obtained for the asymmetric and symmetric states, we now perform secondary linear stability analysis. First, our analysis shows that the asymmetric steady-state profiles are linearly stable at all \(\Pi_{s}\) investigated here. However, LSA with the symmetric base flow results in an unstable mode with an asymmetric eigenvector, similar to the primary asymmetric mode shown in Figure 2a. This was found to be true for all unstable symmetric states. When this asymmetric eigenvector is added to the symmetric state in Navier-Stokes simulations, the flow transitions from the symmetric state to the same asymmetric state as seen previously. The initial exponential evolution is shown in Figure 5b for both the upslope and downslope symmetric cases, and is compared to the growth rates predicted by LSA. Because the upslope and downslope symmetric states are distinct states, we observe different growth rates for each of the secondary instabilities, with the downslope state exhibiting a much larger growth rate than the upslope state. This makes sense since, as stated previously, the state of downslope flow in a valley heated from the base is inherently unstable.
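The growth-rate comparison above amounts to fitting an exponential to a velocity probe during the linear-growth window; a minimal sketch with synthetic data is given below (the window selection and probe values are illustrative).

```python
import numpy as np

def growth_rate(t, u):
    """Estimate the exponential growth rate omega from a velocity probe.

    Fits log|u| = omega * t + const over the linear-growth window,
    mirroring the comparison against the LSA eigenvalue Re(omega).
    """
    slope, _ = np.polyfit(t, np.log(np.abs(u)), 1)
    return slope

# Synthetic example: a disturbance growing at omega = 0.05 (in t_0 units).
t = np.linspace(0.0, 50.0, 200)
u = 1e-6 * np.exp(0.05 * t)
print(growth_rate(t, u))   # ~0.05
```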
Compared to the primary instabilities, both secondary instabilities exhibit smaller growth rates, as can be seen from a comparison of the growth rates shown in Figures 5a and 5b.

### Bifurcation diagram

From our simulations, we now characterize the change in the possible steady-state profiles with a change in our \(\Pi_{s}\) parameter. We observe two primary instabilities, each of which leads to two possible steady-state profiles. Therefore, each of these instabilities represents a supercritical pitchfork bifurcation, and we draw a bifurcation diagram, shown in Figure 6, where we plot the maximum vorticity in the \(z\) direction as a function of \(\Pi_{s}\). This gives a unique structure to the bifurcation diagram, in which we have two supercritical pitchfork bifurcations, one nested inside the other, with the inner, nested pitchfork bifurcation resulting in only unstable states. The outer branches, representing the two asymmetric states, are perfectly symmetric with respect to the \(y\) axis, and thus have equal and opposite maximum vorticity. The inner branches, representing the upslope and downslope symmetric states, are unique in their asymmetry with respect to the flow profile.

Figure 4: Visualization of the four possible steady-state circulations resulting from the asymmetric and symmetric instabilities at \(\Pi_{s}=0.9\): (a) asymmetric, clockwise; (b) asymmetric, counterclockwise; (c) symmetric, upslope; (d) symmetric, downslope.

As shown earlier, the upslope and downslope flow profiles are not perfect reflections of each other, and this is reflected in the bifurcation diagram. One of the first consequences of this bifurcation diagram is the existence of four possible convective states at a given value of \(\Pi_{s}\). The existence of multiple steady states has been shown in the past for Rayleigh–Bénard convection (Gelfgat _et al._, 1999) and other confined convective flows (Erenburg _et al._, 2003), but this has not yet been shown for flows in triangular cavities. Prior studies of flows in idealized V-shaped valleys suggest that the symmetric state is the primary state, which appears at lower parameter values than the asymmetric state (Bhowmick _et al._, 2018, 2019). However, when considering the purely conductive base flow state, the asymmetric state is revealed as the primary instability, and the symmetric state is never stable for any value of \(\Pi_{s}\). This suggests that for a random perturbation to the base flow, we would only expect to obtain the asymmetric state, and not the symmetric state, and we confirmed this observation through 3D Navier-Stokes simulations. By analogy, the bifurcations described here can be compared to the first two bifurcations seen in Rayleigh–Bénard convection, the first representing the one-roll state and the second representing the two-roll state (Venturi _et al._, 2010). This creates a similar bifurcation diagram, but with a number of important differences. First, the symmetric state, analogous to the two-roll state of Rayleigh–Bénard convection, remains unstable at all values of \(\Pi_{s}\), whereas the two-roll state becomes stable at a certain Rayleigh number. Additionally, both bifurcations in Rayleigh–Bénard convection result in symmetric branches, which is not the case in our problem. Hence our emphasis on the asymmetric nested bifurcation in the title of our work.
From Figure 6, we can observe that the first bifurcation, leading to the asymmetric state, has symmetric branches, but the inner bifurcation, leading to the upslope and downslope symmetric states, is not symmetric. In fact, the downslope circulation pattern is counter-intuitive considering the heating at the sloping walls. These are two unique aspects of the bifurcation diagram shown in Figure 6. Venturi _et al._ (2010) showed that for stochastic initial conditions, 2D Rayleigh–Bénard convection most often prefers the one-roll state due to its greater kinetic energy and contribution to heat transfer when compared to the two-roll state. We observe a similar preference in our valley geometry for the asymmetric over the symmetric state. For random initial perturbations on top of the analytical base flow, be it a symmetric or an asymmetric perturbation, the flow converges to an asymmetric state. In contrast, we only observe the unstable symmetric states when the base flow is perturbed by a symmetric eigenmode.

Figure 5: Comparison of the initial growth of the normalized \(u\) velocity in the simulations to the growth rate predicted by LSA for (a) the zero state to the downslope symmetric state at point \((0.36,0.36,0)\) and the zero state to the asymmetric state at point \((0,0.9,0)\), and (b) the upslope symmetric state to the asymmetric state and the downslope symmetric state to the asymmetric state, both at point \((0,0.9,0)\). All cases at \(\Pi_{s}=0.9\).

## 4 Conclusion

We investigated stably stratified anabatic laminar flows, characterized by the dimensionless stratification parameter \(\Pi_{s}\), in an idealized V-shaped valley with the help of dynamical systems theory and confirmed our findings via 3D direct numerical simulations. At very small values of \(\Pi_{s}\), equivalent to weak surface heating, a quiescent conduction state remains stable. At larger \(\Pi_{s}\), two primary instabilities emerge, an asymmetric and a symmetric instability, each leading to two possible steady-state profiles, with the asymmetric state having a slightly lower critical value than the symmetric state. The two states arising from the symmetric instability manifest as an upslope flow state and a downslope flow state, neither of which is a perfect reflection of the other. Secondary stability analysis shows that both symmetric steady-state profiles are further unstable to the asymmetric state. Overall, this represents a unique asymmetric nested pitchfork bifurcation, in which the inner branches, representing the symmetric states, are unstable and not symmetric to each other. This structure is a direct consequence of the missing reflection symmetry of the valley geometry. To the best of our knowledge, a pitchfork bifurcation of this kind has not been observed previously. The results of our study have shown that the dynamics of steady laminar anabatic flows in a stably stratified valley can be efficiently and accurately analyzed with simple tools from dynamical systems theory. We established that while both symmetric and asymmetric convection patterns are possible, the asymmetric state is more dynamically stable and hence more likely to be observed in nature. When taking into account the natural asymmetry and heterogeneity of real valleys, this effect should only be amplified.

Figure 6: Pitchfork bifurcation diagram for increasing \(\Pi_{s}\). The quantity on the y-axis is the maximum normalized vorticity \(\omega_{z}\) in the \(z\) direction obtained from 3D N-S simulations.
(a) shows the transition from the zero state to the asymmetric state and the symmetric state, (b) shows the same bifurcation plot zoomed into the critical value, along with critical values predicted by LSA shown as vertical lines. ## Declaration of Interests: The authors report no conflict of interest.
2303.01812
Unified Keyword Spotting and Audio Tagging on Mobile Devices with Transformers
Keyword spotting (KWS) is a core human-machine-interaction front-end task for most modern intelligent assistants. Recently, a unified (UniKW-AT) framework has been proposed that adds additional capabilities in the form of audio tagging (AT) to a KWS model. However, previous work did not consider the real-world deployment of a UniKW-AT model, where factors such as model size and inference speed are more important than performance alone. This work introduces three mobile-device deployable models named Unified Transformers (UiT). Our best model achieves an mAP of 34.09 on Audioset, and an accuracy of 97.76 on the public Google Speech Commands V1 dataset. Further, we benchmark our proposed approaches on four mobile platforms, revealing that the proposed UiT models can achieve a speedup of 2 - 6 times against a competitive MobileNetV2.
Heinrich Dinkel, Yongqing Wang, Zhiyong Yan, Junbo Zhang, Yujun Wang
2023-03-03T09:38:53Z
http://arxiv.org/abs/2303.01812v1
# Unified Keyword Spotting and Audio Tagging on Mobile Devices With Transformers

###### Abstract

Keyword spotting (KWS) is a core human-machine-interaction front-end task for most modern intelligent assistants. Recently, a unified (UniKW-AT) framework has been proposed that adds additional capabilities in the form of audio tagging (AT) to a KWS model. However, previous work did not consider the real-world deployment of a UniKW-AT model, where factors such as model size and inference speed are more important than performance alone. This work introduces three mobile-device deployable models named Unified Transformers (UiT). Our best model achieves an mAP of 34.09 on Audioset, and an accuracy of 97.76 on the public Google Speech Commands V1 dataset. Further, we benchmark our proposed approaches on four mobile platforms, revealing that the proposed UiT models can achieve a speedup of 2 - 6 times against a competitive MobileNetV2.

Heinrich Dinkel†, Yongqing Wang†, Zhiyong Yan†, Junbo Zhang and Yujun Wang

Xiaomi Corporation, Beijing, China

Keyword spotting, Audio tagging, Vision Transformers, weakly supervised learning.

Footnote †: equal contribution.

## 1 Introduction

Keyword spotting (KWS) is currently a crucial front-end task for most intelligent voice assistants, which triggers the start of an interaction with the voice assistant if a user utters a specific keyphrase. Further, audio tagging (AT) is a task that aims to label specific audio content with sound event classes, e.g., the sound of a baby crying. In previous work [1], the authors have shown that modelling both tasks via a unified framework (UniKW-AT) is possible, significantly improving noise robustness without sacrificing KWS accuracy. However, if UniKW-AT models were deployed in real-world scenarios, they would need to fulfil the same requirements as KWS models. First, KWS models are situated on-device, and their size, i.e., the number of parameters, is limited. Second, KWS models require a fast inference speed and a small floating point operations (FLOPs) footprint due to being "always on". Third, the delay of a KWS model needs to be as low as possible, such that an assistant's wakeup is immediate. While these requirements have already been researched thoroughly within the KWS community, to the best of our knowledge, no previous study has focused on lightweight and on-device computation of AT models. This work aims to bridge the gap and introduce a multitude of transformer-based models for unified keyword spotting and audio tagging (UniKW-AT), which can satisfy the requirements mentioned above. The benefit of such a UniKW-AT model is that outputs from the AT branch can be passed down further into the automatic speech recognition (ASR) pipeline, possibly enhancing robustness against noise. At the very least, such a UniKW-AT model can act as a voice activity detector [2, 3, 4].

### Previous work

Due to the practical importance of KWS, previous work has focused on decreasing a model's parameter size [5], increasing its inference speed [6] and reducing its false-acceptance rate [7]. In terms of architecture, convolutional neural networks (CNNs) have been well researched within the community [8, 9, 5], while more recently transformer-based models [10, 11, 12, 13] and multi-layer perceptron (MLP) mixers [14, 15] have also been studied. On the contrary, within the field of AT, most work focuses on improving the state-of-the-art performance on the well-known Audioset benchmark.
Works such as [16] popularized CNNs, while [17] utilized transformers. However, the majority of research within AT is solely focused on improving performance without consideration for the real-world deployment of these models. ## 2 Unified Keyword-Spotting and Audio Tagging Transformers This paper contributes a variety of transformer-based networks, further called _unified transformers_ (UiT), which aim to provide fast inference speed and reduce the model-parameter size and computational overhead while preserving KWS and AT performance. Unified Keyword Spotting and Audio Tagging: UniKW-AT has been proposed in [1] and is modelled as follows. Given the target KWS labelset \(\mathbb{L}_{\text{KWS}}\) with \(K\) keywords and an AT labelset \(\mathbb{L}_{\text{AT}}\) with \(C\) sound events, UniKW-AT merges both labelsets obtaining \(\mathbb{L}=\mathbb{L}_{\text{KWS}}\cup\mathbb{L}_{\text{AT}}\). Training samples from both KWS and AT datasets are randomly cropped to some target duration \(t\) and the framework is optimized via the binary cross entropy (BCE) loss. The entire training framework can be seen in Figure 1. Vision Transformers: Transformers were first proposed for machine translation in [18] and quickly became the state-of-the-art approach within the field of natural language processing (NLP). Later in [19], the _Vision Transformer_ (ViT) was proposed as an adaptation of transformers to the field of computer vision. Then, ViT-based transformers were used in AT, where images were replaced with two-dimensional spectrograms [17, 20]. The core idea of the ViT framework is the "patchification" operation, where an input image (here a spectrogram) is first split into \(N\) non-overlapping patches. Each patch is of size \(P=P_{\text{T}}\times P_{\text{F}}\) (time and frequency) and extracted via a convolution operation. Then these patches are fed into a Transformer model consisting of \(L\) identical blocks of a multi-head attention (MHA) layer followed by a multi-layer perceptron (MLP) layer. An MHA layer computes: \[\mathbf{A}=\text{softmax}(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{D}})\mathbf{V}, \tag{1}\] where \(\mathbf{K}\in\mathbb{R}^{N\times D},\mathbf{Q}\in\mathbb{R}^{N\times D},\mathbf{V}\in\mathbb{R}^{N\times D}\) are the key, query and value matrices obtained by a linear transformation \(\mathbf{W}_{j}\in\mathbb{R}^{D\times D},j\in\{K,Q,V\}\) of an input \(\mathbf{X}\in\mathbb{R}^{N\times D}\). The complexity of a single block is \(\mathcal{O}(N^{2}D+D^{2}N)\), i.e., quadratic in both the model dimension \(D\) and the number of patches \(N\). ### Proposed Model We identify the embedding dimension \(D\) and the number of patches \(N\) as the primary reasons for a large computational overhead and propose patch-reduction and bottleneck attention to reduce the complexity of our ViT model. Patch-reduction and Subsampling: Common transformer models on Audioset are trained on a 10-s scale and utilize up to \(N=N_{\text{T}}\times N_{\text{F}}=1212\) patches [20, 17], leading to a high memory and computational burden for the transformer model. Since time-frequency information is crucial for AT, we focus on reducing the number of available time-patches \(N_{\text{T}}\) and fix the number of frequency patches \(N_{\text{F}}\) by feeding the model crops of length \(t\), i.e., 1 s. Limiting the context harms AT performance on Audioset since our model's training (1 s) and evaluation (10 s) durations are mismatched.
However, pseudo strong labels (PSL) [21] can partially alleviate the performance degradation by predicting fine-scale labels from a pretrained teacher (MobileNetV2) model. The first layer within most CNN models, commonly known as stem, maps a \(1\times T\times F\) spectrogram input to \(C\times T/2\times F/2\), where \(C\geq 16\) [22], meaning that the overall memory required will be expanded by a factor of \(\geq 4\). Our approach uses a subsampling stem, directly reducing the memory requirement by mapping an input patch of size \(P\) to a low-dimensional space \(D\), where \(D<P\). Bottleneck attention: Bottleneck attention (BN-A) focuses on reducing the dimension \(D\) during the self-attention stage of Equation (1). Our intuition is that each respective patch-embedding within a spectrogram contains large amounts of redundant information. Therefore, we propose using a BN-A approach, which reduces the dimension \(D\) during self-attention to a lower space \(U\), \(U<D\). We set \(U=\frac{D}{4}\) for all our models. Architecture: The proposed model architectures can be seen in Table 1. For all architectures, we use \(16\times 16\) patch-sizes, which sets the delay of our models to \(16\) frames, and we use two heads for each self-attention operation. This leads to \(N_{\text{F}}=4\) patches along the frequency axis and to \(N_{\text{T}}=6\) patches for each input audio second. We use ReLU as the default activation function due to its faster inference speed and lower computational complexity. Similar to [11], we use a lower embedding dimension of \(3D\) within the MLP, reducing the memory footprint.

Figure 1: Depiction of the proposed ViT framework for training. We sample from an AT (Audioset) dataset and a KWS dataset (GSCV1). Then we randomly crop the Audioset sample to match the KWS target sample length \(t\). Afterwards, a pre-trained model (MobileNetV2) is used to estimate labels for these Audioset samples (pseudo strong labels). During training, each batch is created using 50% of AT and 50% of KWS data. Training is done by optimizing the binary cross entropy (BCE) criterion.

## 3 Experiments ### Datasets This work mainly uses the Google Speech Commands V1 (GSCV1) [23] and Audioset [24] datasets. We use the common 11 class subset of GSCV1 (V1-11), where the original 30 classes have been reduced to 10 common keywords: "Yes", "No", "Up", "Down", "Left", "Right", "On", "Off", "Stop", "Go", while the other 20 keywords are labeled as the AS label "Speech"; each sample is 1 s long. We use the official training/validation/testing split containing 51,088/6,798/6,835 utterances, respectively. As for AT training, we use the 60 h long balanced subset of AS containing 21,292 audio clips with a duration of at most 10 s per clip. Evaluation: Evaluation is split between the KWS and AT subtasks. AT evaluation uses the common evaluation subset of AS, containing 18,229 audio clips with an overall duration of 50 h. KWS analysis primarily focuses on the GSCV1 dataset, which provides 2,567 target keyword samples and 4,268 non-target samples. Note that for clips longer than the target length \(t\) (e.g., 10 s Audioset clips), we split the input into chunks of length \(t\) (i.e., 1 s), then feed these chunks into the model and average all output scores (see the sketch below). ### Setup Regarding front-end feature extraction, we use log-Mel spectrograms (LMS) with 64 bins extracted every 10 ms with a window of 32 ms and a 16 kHz sampling rate. Our UiT transformer models use time- and frequency-independent learnable position embeddings.
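For concreteness, here is a minimal sketch of the chunk-and-average evaluation described in the Datasets subsection above. The model interface (a callable mapping a 1 s log-Mel chunk to per-class logits), the 100-frame chunk length (1 s at the 10 ms hop) and the zero-padding of the last chunk are our assumptions:

```python
import torch

def clip_score(model, spectrogram, chunk_frames=100):
    """Split a long log-Mel spectrogram (time x mel) into 1 s chunks,
    run the model on each chunk and average the per-class scores."""
    scores = []
    for chunk in spectrogram.split(chunk_frames, dim=0):
        if chunk.shape[0] < chunk_frames:  # zero-pad the last, shorter chunk
            pad = chunk_frames - chunk.shape[0]
            chunk = torch.nn.functional.pad(chunk, (0, 0, 0, pad))
        with torch.no_grad():
            scores.append(torch.sigmoid(model(chunk.unsqueeze(0))))
    return torch.stack(scores).mean(dim=0).squeeze(0)  # (n_classes,)
```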
We use random shifting, volume gain, and polarity inversion as augmentation methods in the waveform domain. Having obtained a spectrogram, we augment the data using Specaugment [25]. Training runs with a batch size of 64 for at most 800 epochs using AdamW optimization [26] with a linear warmup of 20 epochs to a starting learning rate of 0.001, which is then gradually decreased using cosine annealing. We use mean average precision (mAP) during training as our primary validation metric. Computing KWS accuracy for UniKW-AT requires post-processing since the model can predict multiple labels simultaneously, i.e., "Keyword + Speech". Here we use a threshold of \(\gamma=0.2\), indicating the presence of a keyword. The top-4 models achieving the highest mAP on the joint held-out validation dataset (GSCV1-valid and AS-balanced) are submitted for evaluation. The neural network back-end is implemented in Pytorch [27]. To speed up the training procedure, we pre-train a single UiT-XS on the full Audioset using the masked autoencoder approach [20] and initialize all relevant layers for each model from this single checkpoint. The source code is publicly available1. Footnote 1: www.github.com/Richermans/UiT_Mobile ## 4 Results ### Main results The core results of our work on GSCV1 and Audioset can be seen in Table 2. To illustrate the difficulty of training a UniKW-AT model, we also ran baseline experiments using a common TC-ResNet8 [5] model. The results show that even though TC-ResNet8 can achieve excellent performance on GSCV1 (96.72 Acc), it fails to provide meaningful performance on AT (8.67 mAP). Note that TC-ResNet8's performance on GSCV1 improves against the publicly reported result (\(96.10\to 96.72\)) due to UniKW-AT training, where 60 h of "noise" samples from Audioset enhance the model's robustness to false alarms. As we can see, our proposed UiT-XS achieves competitive results compared to the previous MobileNetV2 (MBv2) based method for UniKW-AT, as well as other works in the literature regarding KWS and AT performance. ### Inference latency on mobile hardware Here we measure the inference speed of our models using the PyTorch Mobile Benchmark Tool2. In Table 3, we display measured inference speed on four different mobile devices: two high-end Qualcomm Snapdragon chips, a Snapdragon 865 (SD865) and a Snapdragon 888 (SD888), and two mid-range chipsets, a MediaTek Helio G90T (G90T) and a MediaTek Dimensity 700 (MT700). The results are compared to a TC-ResNet8 and an MBv2. The MBv2 can be viewed as a baseline, representing a previous UniKW-AT approach, while TC-ResNet8 represents the speed requirement of a modern KWS system.

\begin{table} \begin{tabular}{l||c c c c c c} \hline Model & \(L\) & \(D\) & MLP & \#Params & MFLops & \(M_{pk}\) \\ \hline UiT-XS & 12 & 128 & 384 & 1.5 M & 34 & 7.59 \\ UiT-2XS & 6 & 128 & 384 & 0.8 M & 18 & 4.10 \\ UiT-3XS & 4 & 128 & 384 & 574 k & 13 & 3.15 \\ \hline \end{tabular} \end{table} Table 1: Proposed UiT-based model architectures. The number of MFLops and the peak memory usage \(M_{pk}\) (in MB) are calculated over 1 s.

As the latency results demonstrate, our proposed approach can achieve a speedup of over 2 times (\(8.0\to 3.4\) ms) against an MBv2 when using UiT-XS while achieving a similar performance (see Table 2). Even though UiT-2XS and UiT-3XS are slower than the baseline TC-ResNet8, they excel at AT (32.21/30.97 vs. 8.67 mAP, see Table 2).
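The on-device numbers in Table 3 were obtained with the PyTorch Mobile benchmark tool; the following host-side sketch only mimics the stated protocol (ten warmup iterations, 1000 timed trials, 1 s input) and assumes a model consuming a (batch, channel, time, mel) tensor:

```python
import time
import torch

@torch.no_grad()
def latency_ms(model, input_shape=(1, 1, 100, 64), warmup=10, trials=1000):
    """Average latency in ms for a 1 s input (100 frames x 64 mel bins)."""
    model.eval()
    x = torch.randn(*input_shape)
    for _ in range(warmup):        # warm up caches and lazy initialisation
        model(x)
    start = time.perf_counter()
    for _ in range(trials):
        model(x)
    return (time.perf_counter() - start) / trials * 1e3
```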
Another important factor worth noting is that the baseline MBv2 has a delay of 320 ms, while our proposed models react within 160 ms. ### Ablation Here we present ablation studies focused on the proposed BN-A mechanism and on the choice of ReLU as activation function over the more common Gaussian Error Linear Unit (GeLU) [32]. For simplicity, we only use the SD865 chipset for these tests. The results can be seen in Table 4. We observe that the proposed BN-A mechanism speeds up the inference time by at least 20% (\(4.1\to 3.4\) ms) against standard self-attention without noticeable performance differences. Moreover, while standard self-attention with GeLU can provide marginal performance boosts compared to the proposed BN-A + ReLU approach, it also significantly slows down the inference speed (\(3.4\to 5.7\) ms for UiT-XS) and increases the peak memory usage (\(7.59\to 10.28\) MB for UiT-XS), limiting its potential real-world use. ## 5 Conclusion This paper proposes ViT-based transformer models (UiT) optimized for mobile device deployment of UniKW-AT. Three lightweight UiT models are introduced, providing excellent performance for UniKW-AT coupled with fast inference speed, low peak memory usage and a small parameter footprint. Our best model (UiT-XS) achieves an accuracy of 97.76 on the GSCV1 dataset and an mAP of 34.09 on Audioset, outperforming a competitive MBv2 baseline while having half the parameter footprint and a markedly faster inference speed. \begin{table} \begin{tabular}{l l||c c|c c} \hline \hline Model & Ablation & GSCV1 & AS & Speed & \(M_{pk}\) \\ \hline \multirow{3}{*}{UiT-XS} & Proposed & **97.76** & 34.09 & **3.4** & **7.59** \\ & w/o BN-A & 97.75 & 33.76 & 4.1 & 10.28 \\ & ReLU \(\rightarrow\) GeLU & 97.69 & **34.12** & 5.7 & 10.28 \\ \hline \multirow{3}{*}{UiT-2XS} & Proposed & **97.31** & 32.21 & **1.7** & **4.10** \\ & w/o BN-A & 96.94 & 32.30 & 2.2 & 5.44 \\ & ReLU \(\rightarrow\) GeLU & 97.27 & **32.41** & 2.9 & 5.44 \\ \hline \multirow{3}{*}{UiT-3XS} & Proposed & **97.18** & 30.97 & **1.2** & **3.15** \\ & w/o BN-A & 96.71 & 30.87 & 1.6 & 3.90 \\ \cline{1-1} & ReLU \(\rightarrow\) GeLU & 96.91 & **30.99** & 2.1 & 3.90 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation studies of the proposed model, where "Speed" represents measured ms and \(M_{pk}\) is the peak memory requirement in MB. 'w/o BN-A' represents the use of a standard self-attention mechanism, whereas 'ReLU \(\rightarrow\) GeLU' utilizes GeLU with standard self-attention. Results are evaluated on an input of 1 s. Best in bold.
\begin{table} \begin{tabular}{l l||c c} \hline \hline Approach & \#Params (M) & GSCV1 & AS \\ \hline TC-ResNet8 [5] & 0.06 & 96.10 & - \\ NAS2 [6] & 0.88 & 97.22 & - \\ MEGA [28] & 0.3 & 96.92 & - \\ MatchBoxNet [29] & 0.5 & 96.83 & - \\ KWT-1 [10] & 0.6 & 97.05 & - \\ LETR-128 [11] & 0.6 & 97.61 & - \\ LETR-256 [11] & 1.1 & 97.85 & - \\ KWT-2 [10] & 2.4 & 97.36 & - \\ KWT-3 [10] & 5.3 & 97.24 & - \\ Wav2KWS [30] & 225 & 97.90 & - \\ \hline MBv2 [31] & 2.9 & - & 26.50 \\ Eff-B0 [31] & 5.3 & - & 33.50 \\ Eff-B2 [31] & 13.6 & - & 34.06 \\ ResNet-50 [31] & 25.6 & - & 31.80 \\ CNN14 [16] & 76 & - & 27.80 \\ AudioMAE [20] & 80 & - & 37.10 \\ \hline TC-ResNet8 & 0.1 & 96.72 & 8.67 \\ MBv2 [1] & 2.9 & 97.53 & 33.42 \\ MBv2\({}^{\bigstar}\) [1] & 2.9 & 97.53 & 32.51 \\ \hline UiT-XS\({}^{\bigstar}\) & 1.5 & 97.76 & 34.09 \\ UiT-2XS\({}^{\bigstar}\) & 0.8 & 97.31 & 32.21 \\ UiT-3XS\({}^{\bigstar}\) & 0.6 & 97.18 & 30.97 \\ \hline \hline \end{tabular} \end{table} Table 2: A comparison between our proposed UiT approaches and other works in the literature. Results for GSCV1 use accuracy, while AS uses mAP. Approaches denoted with \({}^{\bigstar}\) evaluate on 1 s chunks, influencing Audioset performance. Entries with "-" are not available.

\begin{table} \begin{tabular}{l||c c|c c} \hline \hline Model & SD865 & SD888 & G90T & MT700 \\ \hline TC-ResNet8 & 0.4 & 0.4 & 1.1 & 1.1 \\ MBv2 & 8.0 & 6.2 & 13.1 & 11.6 \\ \hline UiT-XS & 3.4 & 3.4 & 7.3 & 7.1 \\ UiT-2XS & 1.7 & 1.5 & 2.8 & 3.2 \\ UiT-3XS & 1.2 & 1.1 & 2.2 & 2.2 \\ \hline \hline \end{tabular} \end{table} Table 3: Inference speed comparison on mobile system-on-a-chip (SoC) platforms, run on the central processing unit (CPU) with float32 precision and measured in ms. Each speed evaluation is assessed by first warming up the chip with ten warmup iterations followed by 1000 test trials for an input of length 1 s.
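To illustrate the bottleneck-attention idea of Section 2.1, here is a minimal PyTorch sketch that computes Equation (1) in a reduced space \(U=D/4\); the layer names, the output projection back to \(D\) and the per-head scaling are our assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class BottleneckAttention(nn.Module):
    """BN-A sketch: self-attention computed in a bottleneck space U = D/4."""

    def __init__(self, dim: int, num_heads: int = 2, reduction: int = 4):
        super().__init__()
        self.u = dim // reduction               # bottleneck width U
        self.num_heads = num_heads
        self.head_dim = self.u // num_heads
        self.qkv = nn.Linear(dim, 3 * self.u)   # D -> 3U instead of D -> 3D
        self.proj = nn.Linear(self.u, dim)      # back to D for the residual path

    def forward(self, x):                       # x: (batch, N, D)
        b, n, _ = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)    # each: (b, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        attn = attn.softmax(dim=-1)             # softmax(QK^T / sqrt(d)), cf. Eq. (1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, self.u)
        return self.proj(out)

x = torch.randn(1, 24, 128)                     # N = 6*4 patches/s, D = 128 (UiT-XS)
print(BottleneckAttention(128)(x).shape)        # torch.Size([1, 24, 128])
```

With \(D=128\) and two heads (as in UiT-XS), the query/key/value projections map \(D\to 3U=96\) instead of \(D\to 3D=384\), which is consistent with the speed and peak-memory gains reported in Table 4.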
2307.02634
Radio WISSH: tuning on the most luminous quasars in the Universe
In the past years, the results obtained by the WISSH quasar project provided a novel general picture of the distinctive multi-band properties of hyper-luminous ($L_{bol}>10^{47}$ erg/s) quasars at high redshift (z$\sim$2-4), unveiling interesting relations among active galactic nuclei, winds and interstellar medium in these powerful sources at cosmic noon. Since 2022, we have been performing a systematic and statistically-significant VLA study of the radio properties of WISSH. We carried out high-resolution VLA observations aiming at: 1) identifying young radio sources from the broad-band spectral shape of these objects; 2) sampling an unexplored high redshift/high luminosity regime, tracking possible evolutionary effects on the radio-loud/radio-quiet dichotomy; 3) quantifying orientation effects on the observed winds/outflows properties.
Gabriele Bruni, Javier Moldón, Enrico Piconcelli, Francesca Panessa, Miguel Pérez-Torres, Manuela Bischetti, Chiara Feruglio, Giustina Vietri, Cristian Vignali, Luca Zappacosta, Ivano Saccheo
2023-07-05T20:08:12Z
http://arxiv.org/abs/2307.02634v1
# Radio WISSH: tuning on the most luminous quasars in the Universe

###### Abstract In the past years, the results obtained by the WISSH quasar project provided a novel general picture of the distinctive multi-band properties of hyper-luminous (\(L_{bol}>10^{47}\) erg/s) quasars at high redshift (z\(\sim\)2-4), unveiling interesting relations among active galactic nuclei, winds and interstellar medium in these powerful sources at cosmic noon. Since 2022, we have been performing a systematic and statistically-significant VLA study of the radio properties of WISSH. We carried out high-resolution VLA observations aiming at: 1) identifying young radio sources from the broad-band spectral shape of these objects; 2) sampling an unexplored high redshift/high luminosity regime, tracking possible evolutionary effects on the radio-loud/radio-quiet dichotomy; 3) quantifying orientation effects on the observed winds/outflows properties.

Gabriele Bruni\({}^{1}\), Javier Moldon\({}^{2}\), Enrico Piconcelli\({}^{3}\), Francesca Panessa\({}^{1}\), Miguel Perez-Torres\({}^{2}\), Manuela Bischetti\({}^{4}\), Chiara Feruglio\({}^{4}\), Giustina Vietri\({}^{5}\), Cristian Vignali\({}^{6}\), Luca Zappacosta\({}^{3}\) and Ivano Saccheo\({}^{3}\)

## 1 Introduction Hyper-luminous QSOs (HyLQSOs, i.e. \(L_{bol}>10^{47}\) erg/s), powered by the most massive, highly-accreting supermassive black holes (SMBHs, i.e. \(M_{BH}>10^{9}\)\(M_{\odot}\)), are ideal targets to probe the assembly of giant galaxies (and, likely, the cradles of proto-clusters). Following the current consensus view on SMBH-host galaxy co-evolution, the huge amount of energy released by highly-accreting SMBHs in HyLQSOs is able to strongly affect the evolution of the host galaxy by heating and expelling the interstellar medium (ISM) (the so-called "AGN feedback" mechanism, see e.g. Fabian 2012, Morganti 2017 for a review). The systematic study of HyLQSOs has faced significant challenges due to their low number density and low fluxes resulting from their distance. A significant improvement in our understanding of the properties of the accreting SMBH (\(M_{\rm BH}\), \(\lambda_{\rm Edd}\)), the nuclear thermal and non-thermal emission components, multiphase winds and multiphase ISM (and their interplay) in HyLQSOs can only be achieved by investigating all these aspects in a large sample of HyLQSOs. Such an approach is markedly different from traditional observing programs, carried out in a specific frequency band and focused on sparse sources to study a particular aspect of the HyLQSO phenomenon. This highlights the necessity of building large samples of HyLQSOs with extensive multi-band coverage from radio to X-rays. The additional information coming from the radio band can provide fundamental inputs on the presence of jets, their interplay with winds, and in general on the presence of a possible young radio phase at cosmic noon, and on the long-standing questions about the radio-loud/radio-quiet dichotomy across cosmic epochs. ### Probing the brightest end of the AGN luminosity function with WISSH The WISSH quasar project can be regarded as a multi-band effort in the study of HyLQSOs, as demonstrated by the number of publications since 2017 dealing with the central engine, the outflows/feedback and the host galaxy properties, e.g. Bischetti et al.
(2017), Duras et al. (2017), Martocchia et al. (2017), Bischetti et al. (2018), Vietri et al. (2018), Bruni et al. (2019), Travascio et al. (2020), Zappacosta et al. (2020), Bischetti et al. (2021), Vietri et al. (2022). The aim of the WISSH project is to establish a reference sample of HyLQSOs at cosmic noon to investigate their nuclear properties and the AGN feedback mechanism on a sound statistical basis. The sample consists of 85 broad-line Type 1, radio-quiet AGN at \(z\sim\) 2-4.5 from SDSS-DR7, selected from the WISE All Sky Survey with flux \(F_{22\mu m}>\) 3 mJy (see Saccheo et al. 2023 and references therein). Accordingly, the WISSH quasars turn out to be among the most luminous AGN known in the Universe, with \(L_{\rm Bol}>\) 2 \(\times\) 10\({}^{47}\) erg s\({}^{-1}\). In this contribution, we briefly summarize the first results of our radio campaign on WISSH QSOs, aiming at characterizing the radio emission of objects at cosmic noon. ## 2 A radio characterization of the WISSH sample During 2022, we carried out a deep, high-resolution VLA survey of the WISSH sample in A configuration in the 2-8 GHz range. We covered \(\sim\)90% of the sample at 2-4 GHz, and \(\sim\)75% at 4-8 GHz, probing physical scales between 2 and 5 kpc. Our strategy was to reach a sensitivity threshold of \(\sim\)50 \(\mu\)Jy, well below past or current radio surveys. Indeed, a first radio approach to the WISSH sample carried out by Bruni et al. (2019) showed that, cross-correlating with the FIRST survey at 1.4 GHz (Becker et al. 1995), only \(\sim\)20% of the objects show a detection at a \(\sim\)500 \(\mu\)Jy sensitivity threshold, and a compact morphology (\(<\)40 kpc at the median redshift of the sample). The recent release of the first epoch of the VLASS survey at 3 GHz - at a sensitivity similar to the FIRST one (RMS\(\sim\)120 \(\mu\)Jy/beam) - allowed us to confirm this detection rate. The estimated radio loudness (\(R=f_{6cm}/f_{4400\AA}\)) is lower than 10 for all except two objects, for which R=47 and 290. Given these premises, going deeper in terms of physical scale and sensitivity appeared to be key to unveil the radio properties of the sample. The main goals of our VLA campaign are the following: * Identify young radio sources from their broad-band spectral shape: we complemented our VLA data with low-frequency survey data at 0.15 GHz - where, among the 34 sources in the footprint, 32 were detected - allowing us to extend the frequency coverage down to 0.15 GHz. The overall radio spectrum will allow us to quantify the fraction of peaked sources in the 0.15-12 GHz range, possibly confirming the fraction of young radio sources estimated from the \(L_{1.4\rm GHz}\) vs LS diagram. * Probe an unexplored high redshift/high luminosity regime of active galaxies: at the median redshift of WISSH (\(z=3.33\)), and with an RMS of \(\sim\)10 \(\mu\)Jy/beam at 3 GHz, it is possible to probe radio powers down to \(\sim 7\times 10^{23}\) W/Hz at 3-sigma significance (see the sketch at the end of this contribution). Thanks to the availability of optical and X-ray (Chandra program ongoing) luminosities, a distribution of radio-loudness can be obtained and compared to those of other surveys in order to test a possible evolution scenario of the radio loudness (Ballo et al., 2012). This can provide clues on long-standing questions about the radio-loud/radio-quiet dichotomy.
* Test orientation effects on the observed winds/outflows properties: in quasars, the spectral index of the optically-thin part of the radio jet spectrum can be used as an indicator of the jet orientation (Orr & Browne, 1982), suggesting a near-to-polar line of sight for values \(>\)-0.5 and an equatorial one for values \(<\)-0.5 (\(S_{\nu}\propto\nu^{\alpha}\)). This information can be compared with the outflow orientation estimates from Vietri et al. (2018), and with the presence of nuclear winds (BAL) from Bruni et al. (2019). ### First results: detection rates and morphologies At an RMS of \(\sim\)10 \(\mu\)Jy/beam, about 80% of the observed sample was detected at 3 GHz - reaching a redshift of 4.3 - implying an estimated radio power \(>10^{23}\) W/Hz (the classical threshold between radio-quiet and radio-loud AGN in the local Universe, Condon, 1992). This suggests that, at cosmic noon, most of the hyper-luminous AGN like the WISSH ones could host jets. The implications of this result are wide-reaching, from the evolutionary effects on the radio-quiet/radio-loud dichotomy, to the contribution of jets to the QSO feedback budget at large redshifts, and to jet launching in this luminosity regime. The very fact that most of the sources lie below the VLASS detection threshold (see Fig. 1, left panel) highlights how important deep observations are to perform population studies at this high redshift, allowing us to reach the completeness needed to draw conclusions on the radio phase evolution in AGN. Three objects showed a resolved morphology at 3 GHz, and more could arise from higher frequency observations. They show a symmetric morphology centered on the optical position of the host, with a projected linear extension of about 30 kpc (see Fig. 1, right panel). This could suggest that these sources are radio galaxies at cosmic noon, but more analysis, including spectral index estimates, is necessary to claim this. Observations are still ongoing, and will be concluded at the end of the current VLA semester, completing the multi-frequency radio view of the WISSH sample. ### The radio phase across cosmic epochs Recently, Patil et al. (2020) performed a VLA study of heavily obscured quasars. The sample was selected in the ultra-luminous regime (\(L_{bol}\sim 10^{11.7-14.2}L_{\odot}\)) at \(z\sim 0.4-3\), extremely red in the WISE mid-infrared/optical band, and with a detection of bright, unresolved radio emission from the NVSS. Thanks to high-resolution VLA observations at 10 GHz, they found radio luminosities and linear extents similar to those of young radio sources (Gigahertz Peaked Spectrum, GPS, and Compact Steep Spectrum, CSS, sources). In a subsequent paper (Patil et al., 2022), they built the radio spectra by adding data from surveys, and confirmed the presence of a high fraction of young radio sources. Although both the Patil et al. (2020) and WISSH quasar samples are selected to allow the direct observation of AGN feedback in action, they are complementary in terms of quasar evolutionary stages. Indeed, according to Hopkins et al. (2008), the obscured quasars in Patil et al. (2020) represent the initial heavily dust-enshrouded phase associated with rapid SMBH growth and star formation triggered by multiple galaxy encounters, while optically-bright objects like the WISSH ones are undergoing the "blow-out" phase, which is characterized by powerful QSO-driven outflows blowing away the nuclear dust cocoon and part of the cold gas reservoir in the host galaxy. The same kind of study performed by Patil et al.
(2020) and Patil et al. (2022), once carried out on the WISSH sample of hyper-luminous broad-line quasars, will not only provide unprecedented information on the radio phase at cosmic noon, but will also shed light on the possible link between the launching mechanism of nuclear winds and radio jets.
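As an illustration of the two estimates used above - the K-corrected radio power at the survey sensitivity, and the spectral-index-based orientation proxy - here is a minimal Python sketch. The cosmology (Planck18) and the optically-thin index \(\alpha=-0.7\) adopted for the K-correction are our assumptions, so the resulting power only approximates the \(\sim 7\times 10^{23}\) W/Hz quoted in the text:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

def radio_power(flux_density, z, alpha=-0.7):
    """Rest-frame monochromatic power at the observing frequency,
    K-corrected assuming S_nu ~ nu^alpha."""
    d_l = Planck18.luminosity_distance(z)
    p = 4 * np.pi * d_l**2 * flux_density / (1 + z) ** (1 + alpha)
    return p.to(u.W / u.Hz)

def jet_orientation(s1, nu1, s2, nu2):
    """Spectral index between two frequencies; values > -0.5 suggest a
    near-to-polar line of sight, < -0.5 an equatorial one (Orr & Browne 1982)."""
    alpha = np.log(s2 / s1) / np.log(nu2 / nu1)
    return alpha, ("polar" if alpha > -0.5 else "equatorial")

# A 3-sigma detection at the survey RMS (~10 uJy/beam) and median redshift:
print(radio_power(30 * u.uJy, z=3.33))        # of order 1e24 W/Hz
print(jet_orientation(1.0, 3.0, 0.6, 6.0))    # alpha ~ -0.74 -> "equatorial"
```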
2305.03353
MindGames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic
Theory of Mind (ToM) is a critical component of intelligence but its assessment remains the subject of heated debates. Prior research applied human ToM assessments to natural language processing models using either human-created standardized tests or rule-based templates. However, these methods primarily focus on simplistic reasoning and require further validation. Here, we leverage dynamic epistemic logic to isolate a particular component of ToM and to generate controlled problems. We also introduce new verbalization techniques to express these problems in English natural language. Our findings indicate that some language model scaling (from 70M to 6B and 350M to 174B) does not consistently yield results better than random chance. While GPT-4 demonstrates superior epistemic reasoning capabilities, there is still room for improvement. Our code and datasets are publicly available (https://huggingface.co/datasets/sileod/mindgames , https://github.com/sileod/llm-theory-of-mind )
Damien Sileo, Antoine Lernould
2023-05-05T08:14:48Z
http://arxiv.org/abs/2305.03353v2
# MindGames: Targeting Theory of Mind in Large Language Models ###### Abstract Theory of Mind (ToM) is a critical component of intelligence, yet accurately measuring it continues to be a subject of debate. Prior research has attempted to apply human ToM assessments to natural language processing models using either human-created standardized tests or rule-based templates. However, these methods primarily focus on simplistic reasoning and require further validation. In this study, we utilize dynamic epistemic logic, which has established overlaps with ToM, to generate more intricate problems. We also introduce novel verbalization techniques to express these problems using natural language. Our findings indicate that some language model scaling (from 70M to 6B and 350M to 174B) does not consistently yield results better than random chance. While GPT-4 demonstrates superior epistemic reasoning capabilities, there is still room for improvement. Our code and datasets are publicly available1. Footnote 1: [code: GitHub] [data: HF datasets] ## 1 Introduction Theory of Mind (ToM) is the cognitive ability to attribute mental states, such as beliefs, desires, and intentions, to oneself and others, allowing individuals to understand and predict behavior based on these inferred mental states. It is an important requirement for general text understanding, or general artificial intelligence (Navarro et al., 2020). Animal studies have highlighted that claiming Theory of Mind (ToM), or the absence of it, can be methodologically problematic and prone to being biased by human expectations (de Waal, 2016). Kosinski (2023) recently sparked debate by showing that scaling large language models (LLMs) improves performance at standardized tests designed to measure theory of mind. However, such standardized tests were widely discussed in academic research and might have leaked into the training corpora of LLMs. Other previous work generated synthetic examples instead, extending the bAbi (Weston et al., 2016) framework. Nematzadeh et al. (2018) proposed a dataset of fixed templates based on the _Sally-Anne_ problem (Baron-Cohen et al., 1985): _Sally puts a marble in a box while Anne is with her. Sally leaves for a moment and Anne puts the marble in a basket. Where will Sally look for the marble?_ [Answer=Box] Le et al. (2019) deem these problems simplistic and extend them to track second-order beliefs (e.g. the belief of Sally about Anne's beliefs). In this study, we create dynamic epistemic logic (DEL) problems and develop verbalizations to transform them into natural language inference problems. Dynamic epistemic logic is a type of modal logic that facilitates reasoning about agents' knowledge of facts or other agents' knowledge. This logic also enables reasoning about the impact of consecutive public announcements: _Alice and Bob have mud on their heads. Their father says that at least one of them is muddy, and asks them if they are muddy. Do they know that they are muddy?_ [Answer=No] _They answer that they don't know. Do they now know that they are muddy?_ [Answer=Yes] DEL serves as one method to formalize certain ToM problems, making it a valuable perspective for ToM assessment. The problems we create can necessitate tracking multiple agents' beliefs and reasoning about higher-order beliefs2. Our dataset encompasses numerous variations of the _Muddy Children_ and _Drinking Logicians_ problems (van Eijck, 2014).
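The muddy-children dialogue above can be checked mechanically with a possible-worlds model. The following minimal sketch (not the SMCDEL-based pipeline used later in the paper) hard-codes the forehead observability and evaluates each announcement in the current model:

```python
from itertools import product

def possible_worlds(n):
    # a world assigns True ("muddy") or False to each of the n children
    return set(product([False, True], repeat=n))

def indistinguishable(w1, w2, agent):
    # forehead observability: an agent sees every forehead except their own
    return all(w1[j] == w2[j] for j in range(len(w1)) if j != agent)

def knows_own_status(model, w, agent):
    # true iff the agent's status is constant over the worlds
    # they cannot tell apart from the actual world w
    return len({v[agent] for v in model if indistinguishable(w, v, agent)}) == 1

def announce(model, holds):
    # public announcement: keep only the worlds where the statement holds
    return {w for w in model if holds(w)}

n, actual = 2, (True, True)           # Alice and Bob are both muddy
model = possible_worlds(n)

model = announce(model, any)          # "at least one of you is muddy"
print([knows_own_status(model, actual, a) for a in range(n)])   # [False, False]

before = model                        # "we don't know" is evaluated in the current model
model = announce(model, lambda w: not any(knows_own_status(before, w, a) for a in range(n)))
print([knows_own_status(model, actual, a) for a in range(n)])   # [True, True]
```

The second announcement eliminates the worlds with a single muddy child (where that child would already know), after which both agents can infer their own status - exactly the [Answer=No] then [Answer=Yes] pattern above.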
This controlled test bench offers deeper insights into language model scaling and presents the first dataset with adequate complexity to challenge supervised learning models. The dataset and the scripts to generate them are publicly available1. Footnote 2: For example, Anne's belief about Sally's belief about Anne's belief about Mary's belief. ## 2 Related Work Logical Reasoning in NLP: Logic shares profound connections with NLP. Early systems were built around logic, and more recent approaches incorporate logical reasoning into neural networks (Hamilton et al., 2022). Another line of research investigates the logical capabilities of NLP text encoders using textual datasets. RuleTaker (Clark et al., 2020) explores this area with propositional logic, while LogicNLI addresses first-order logic (Tian et al., 2021). Richardson and Sabharwal (2022) examine the satisfiability problem in natural language. Sileo and Moens (2022) target probabilistic logic. Our study is the first to focus on modal logic, specifically epistemic logic, in natural language. Theory of Mind in NLP: To measure ToM capabilities of NLP models, Nematzadeh et al. (2018) created examples using Sally-Anne templates, and Le et al. (2019) added complexity to the data by incorporating second-order knowledge. Both studies framed their examples as question-answering tasks. Kosinski (2023) employed handcrafted tests to evaluate language models' next-word prediction capabilities. The Social-IQA dataset (Sap et al., 2019) addresses social commonsense in general, which involves some theory of mind aspects along with other challenges such as knowledge of desires and emotions. Cohen (2021) investigated whether natural language inference models captured veridicality with epistemic verbs like _know_ and _think_, using handcrafted patterns. This task was incorporated into the BIG-Bench framework (Srivastava et al., 2022) as the _epistemic-reasoning_ task, but it targets only one shallow aspect of epistemic reasoning. Epistemic Logic and ToM: Van Ditmarsch and Labuschagne (2007) examined the connections between DEL and ToM, while Dissing and Bolander (2020) demonstrated DEL's applicability in robotics. Van De Pol et al. (2018) explored the feasibility of using epistemic logic to explain theory of mind by investigating its theoretical computational tractability. ## 3 Dynamic Epistemic Logic Problem Generation and Verbalization ### Problem definition Our objective is to simultaneously create dynamic epistemic logic problems and their corresponding textual representations, allowing us to develop natural language problems in a (Premise, Hypothesis, Label) format. An epistemic logic problem can be constructed using the following components: Agents: A set of \(N\) individuals, each assigned an arbitrary name. Predicates: A set of boolean predicates. In this case, we use \(N\) predicates, one corresponding to each agent (e.g., _Alice has mud on her head_). Observabilities: These describe each agent's initial knowledge of predicate values. We represent observabilities with an \(N{\times}N\) matrix, \(\mathcal{O}\). If \(\mathcal{O}_{i,j}{=}1\), it means that agent \(i\) initially knows whether predicate \(j\) is true. Announcements: A list of expressions (predicates or agent knowledge about predicates) that are shared with all agents. Announcements are made sequentially, and each new announcement can change what the agents know, even if the same announcement is repeated twice.
Hypothesis: An expression that may contain predicates and knowledge of agents about particular expressions after the announcements, given the agents, observabilities, and announcements combined into a premise. ### Setups: connecting predicates and observabilities The choice of predicates dictates the observabilities structure. For example, the predicate _"Alice has mud on her head"_ is observable by agents other than Alice, but _"Alice has mud on her hand"_ could be observable by everyone. We combine predicates and observabilities into what we call _setups_ to generate textual descriptions. We define the following setups: **Forehead-mud setup** Predicate\({}_{i}\): _<Agent\({}_{i}\)>'s forehead is muddy._ \(\mathcal{O}:\textsc{Ones}(N)-\textsc{Identity}(N)\) **Forehead-mud-mirror setup** Predicate\({}_{i}\): _<Agent\({}_{i}\)>'s forehead is muddy._ \(\mathcal{O}:\textsc{Ones}(N)\) Observation: _There is a mirror in the room._ **Thirst setup** Predicate\({}_{i}\): _<Agent\({}_{i}\)> is thirsty._ \(\mathcal{O}:\textsc{Identity}(N)\) **Explicit setup** Predicate\({}_{i}\): _<Agent\({}_{i}\)> picked a red card._ \(\mathcal{O}:\textsc{Randbool}(N,N),\ \mathbb{E}(\mathrm{sum}(\mathcal{O}))=N\) Observation: _Each person draws a card, face unrevealed (red or black)._ <_<Agent\({}_{j}\)>'s card is revealed to <Agent\({}_{i}\)>._ for all \(i,j\) where \(\mathcal{O}_{i,j}{=}1\)> ### Problem verbalization We then construct a problem for a setup with the following natural language template: **[Premise]** _There are <N> persons. Everyone is visible to others._ <Setup-Observation> _It is publicly announced that someone_ <Setup-Predicate> <\([0-N]\) Announcements> **[Hypothesis]** <\([1-K]^{th}\) Order Belief> We restrict announcements to first-order beliefs. A first-order belief has the following structure: <Agent> (_can know whether | can know that | cannot know that | cannot know whether_) (<Setup-Predicate>|<Negated-Setup-Predicate>), e.g. _Mary cannot know whether Paul is muddy_. We use the _can_ verb to account for the fact that sometimes an agent _can_ theoretically infer an expression, but the reasoning might not be obvious enough for any agent. A \(K^{th}\) order belief is a first-order belief about a \((K{-}1)^{th}\) order belief. We consider _everyone_, _not everyone_, and _nobody_ as possible subjects of setup predicates. Subjects are uniformly sampled among these quantifiers and individual agents. We transform abstract problem representations into natural language and into code that can be fed to a model checker to determine whether a hypothesis is entailed by the premise. We utilize SMCDEL (Benthem et al., 2018), a model checker for public announcement logic based on the S5 modal logic. This implementation is the most cited publicly available epistemic logic model checker as of April 2023. We discard examples where the premise contains a contradiction3. To generate diverse and gender-balanced random English names, we use CensusName4 (Qian et al., 2022). Footnote 3: We identify contradictions by examining whether an unused predicate is entailed or not by the premise. Footnote 4: [https://pypi.org/project/censusname/](https://pypi.org/project/censusname/) ## 4 Experiments ### Problem generation parameters We randomly sample \(N{\in}\{2,3\}\) agents, as we observed that problems were sufficiently challenging with only three agents, and we use \(K{=}2\) for the same reason.
We apply negations 80\(\%\) of the time in announcements, to encourage richer inferences (as the fact that an agent does not know something conveys information to others), and 50\(\%\) of the time otherwise. We undersample the results to generate 400 problems with a perfect balance of True/False labels per setup. We refer to the resulting dataset as MindGames.

Figure 1: Accuracy of Pythia language models on MindGames setups. Figure 2: Accuracy of GPT-3 family (ada, babbage, curie, davinci) language models on MindGames setups.

User: [Premise] There are three persons. Everyone is visible to others. It is publicly announced that someone's forehead is muddy. It is publicly announced that Herbert knows whether someone's forehead is muddy. [Hypothesis] Herbert can now know that Herbert's forehead is muddy. Is the hypothesis true or false? GPT-3.5: It is not possible to determine the truth value of the hypothesis based on the given premise. While it is publicly announced that someone's forehead is muddy, it is not clear who the muddy forehead belongs to. Therefore, it is not possible to determine whether Herbert's forehead is muddy or not based on this information alone. However, it is stated that Herbert knows whether someone's forehead is muddy, so Herbert may know whether their own forehead is muddy or not based on additional information not provided in the premise.

### Scaling experiments We conduct zero-shot and few-shot experiments with a range of language models. We use the lm-eval-harness software (Gao et al., 2021) to measure whether a language model's perplexity favors the correct reasoning in a multiple-choice setting, with a natural language inference prompt from Brown et al. (2020): <Premise> _Question:_ <Hypothesis> _True or False?_ with two possible continuation choices, _True_ and _False_. We evaluate two families of language models: Pythia language models: We select the Pythia (Biderman et al., 2023) language models for our open-source scaling experiments. We use the checkpoints trained on the deduplicated corpus (deduped) with checkpoint sizes of 70M, 160M, 410M, 1B, 1.4B, 2.8B, and 6.9B. OpenAI API: We utilize the OpenAI GPT-3 (Brown et al., 2020) models, specifically the ada, babbage, curie, and davinci checkpoints, through the public API. We assume that their model sizes are respectively 350M, 1.3B, 6.7B, and 174B. Figure 1 displays the results for various Pythia model sizes. We observe that scaling improves 5-shot reasoning, but it has no impact on zero-shot reasoning. In contrast to the emergence results reported by Kosinski (2023), Figure 2 does not show a clear scaling trend for GPT-3 models on MindGames data, which suggests that the emergent behavior they observed was not due to robust epistemic logic capabilities. ### Qualitative analysis with ChatGPT We also run brief qualitative analyses with GPT-3.5 and GPT-4 (OpenAI, 2023), as of May 2023. On 20 randomly sampled problems, we found that GPT-3.5 was \(60\%\) correct and GPT-4 more than \(70\%\) correct. We show a brief qualitative analysis of the respective models. As shown in Figure 3, GPT-3.5 tends to answer that there is not enough information and performs correct inferences only when very shallow reasoning is required. GPT-4 can solve this particular example. However, some problems are still challenging, as shown in Figure 4. GPT-4 rarely answers that there is not enough information; its reasoning has the surface form of epistemic reasoning, but occasionally contains glaring mistakes.
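Our experiments rely on lm-eval-harness; the following sketch reproduces the same multiple-choice scoring idea for a single example with a Hugging Face checkpoint. The continuation strings " True"/" False" with a leading space are an assumption, and BPE edge cases at the prompt/continuation boundary are ignored:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def choice_logprob(model, tok, prompt, choice):
    """Sum of token log-probabilities of `choice` given `prompt`."""
    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    ids = tok(prompt + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = model(ids).logits.log_softmax(-1)
    targets = ids[0, n_prompt:]              # continuation tokens
    rows = logprobs[0, n_prompt - 1 : -1]    # token i is predicted at position i-1
    return rows.gather(-1, targets[:, None]).sum().item()

tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m-deduped")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped")

premise = "There are two persons. Everyone is visible to others. ..."
hypothesis = "Alice can now know that Bob's forehead is muddy."
prompt = f"{premise} Question: {hypothesis} True or False?"
answer = max([" True", " False"], key=lambda c: choice_logprob(model, tok, prompt, c))
print(answer)
```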
## 5 Conclusion We have developed a novel dataset aimed at evaluating epistemic logic reasoning, addressing a particular aspect of Theory of Mind (ToM). Our results reveal that this task continues to pose challenges for contemporary large-scale language models. When future models can solve MindGames for 2-3 agents, the difficulty of the task can easily be scaled up with more agents. Future studies could explore human performance on our dataset, taking into account factors such as age and educational background. Additionally, further investigation can examine the impact of fine-tuning on other downstream tasks and assess how well Transformer circuits model Kripke structures that represent modal logic problems.
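For reference, the observability matrices of the four setups (Section 3.2) and the opening of the premise template (Section 3.3) are straightforward to generate; the function names below are illustrative and announcements are omitted:

```python
import numpy as np

def observability(setup, n, seed=0):
    """Observability matrices O for the four setups of Section 3.2."""
    if setup == "forehead-mud":         # everyone sees all foreheads but their own
        return np.ones((n, n), dtype=int) - np.eye(n, dtype=int)
    if setup == "forehead-mud-mirror":  # a mirror makes every forehead visible
        return np.ones((n, n), dtype=int)
    if setup == "thirst":               # each agent only knows their own thirst
        return np.eye(n, dtype=int)
    if setup == "explicit":             # random reveals; n ones in expectation
        rng = np.random.default_rng(seed)
        return (rng.random((n, n)) < 1 / n).astype(int)
    raise ValueError(setup)

def premise(n_agents, setup_predicate, observation=""):
    """Premise opening from the Section 3.3 template (announcements omitted)."""
    return (f"There are {n_agents} persons. Everyone is visible to others. "
            f"{observation}It is publicly announced that someone{setup_predicate}")

print(observability("forehead-mud", 3))
print(premise(2, "'s forehead is muddy."))
```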
2307.15348
The curse of isotropy: from principal components to principal subspaces
This paper raises an important issue about the interpretation of principal component analysis. The curse of isotropy states that a covariance matrix with repeated eigenvalues yields rotation-invariant eigenvectors. In other words, principal components associated with equal eigenvalues show large intersample variability and are arbitrary combinations of potentially more interpretable components. However, empirical eigenvalues are never exactly equal in practice due to sampling errors. Therefore, most users overlook the problem. In this paper, we propose to identify datasets that are likely to suffer from the curse of isotropy by introducing a generative Gaussian model with repeated eigenvalues and comparing it to traditional models via the principle of parsimony. This yields an explicit criterion to detect the curse of isotropy in practice. We notably argue that in a dataset with 1000 samples, all the eigenvalue pairs with a relative eigengap lower than 21% should be assumed equal. This demonstrates that the curse of isotropy cannot be overlooked. In this context, we propose to transition from fuzzy principal components to much-more-interpretable principal subspaces. The final methodology (principal subspace analysis) is extremely simple and shows promising results on a variety of datasets from different fields.
Tom Szwagier, Xavier Pennec
2023-07-28T06:54:48Z
http://arxiv.org/abs/2307.15348v3
# Stratified Principal Component Analysis ###### Abstract This paper investigates a general family of models that stratifies the space of covariance matrices by eigenvalue multiplicity. This family, coined Stratified Principal Component Analysis (SPCA), includes in particular Probabilistic PCA (PPCA) models, where the noise component is assumed to be isotropic. We provide an explicit maximum likelihood and a geometric characterization relying on flag manifolds. A key outcome of this analysis is that PPCA's parsimony--with respect to the full covariance model--is due to the eigenvalue-equality constraint in the noise space and the subsequent inference of a multidimensional eigenspace. The sequential nature of flag manifolds enables to extend this constraint to the signal space and bring more parsimonious models. Moreover, the stratification and the induced partial order on SPCA yield efficient model selection heuristics. Experiments on simulated and real datasets substantiate the interest of equalising adjacent sample eigenvalues when the gaps are small and the number of samples is limited. They notably demonstrate that SPCA models achieve a better complexity/goodness-of-fit tradeoff than PPCA. Probabilistic principal component analysis; Parsimony; Eigenvalue multiplicity; Flag manifolds; Stratified space ## 1 Introduction _Principal Component Analysis (PCA)_(Pearson, 1901) is a well-known dimension reduction method that is based on the eigenvalue decomposition of the sample covariance matrix. Usually, after the decomposition, one plots the eigenvalue profile in decreasing order and decomposes it into two parts: the signal on the left and the noise on the right. The position of the separation relates to the so-called _intrinsic dimension_ of the dataset (Shepard, 1962). Such a decomposition can be done with simple rules relying on the shape of the profile, like the elbow method (Thorndike, 1953) or the percentage of explained variance. However, those heuristics lack statistical foundation and do not depend on the size of the dataset. Some more statistically grounded dimension selection methods rely on a generative modelling formulation of PCA, called _Probabilistic PCA (PPCA)_(Tipping and Bishop, 1999). PPCA can be seen as a covariance model where the lowest eigenvalues, representing the noise, are all equal. The choice of the cutoff dimension is then rather based on the _principle of parsimony_: the selected model is the one that has the lowest number of parameters, while still well representing the data distribution. Such a tradeoff can be achieved with model selection criteria such as the _Bayesian Information Criterion (BIC)_(Schwarz, 1978), which depends on the dataset size and favors _low-complexity_ over _goodness-of-fit_ when the number of available samples is limited (_small-data_ regime). As the eigenvalues of full-rank sample covariance matrices are almost surely all distinct (see discussion in Appendix A.2), PPCA makes an error when modelling the sample covariance matrix with equal noise eigenvalues, but this error is balanced by the complexity drop. One may wonder however if such a complexity drop is enough, especially in the small-data regime. The eigenvalue-equalisation principle could indeed naturally be extended to the signal space by equalising adjacent sample eigenvalues with small gaps, achieving a better complexity/goodness-of-fit tradeoff. 
This motivates us to investigate a more general family of covariance models with repeated eigenvalues that is stratified by eigenvalue multiplicity (Arnold, 1995). Those models, coined _Stratified PCA (SPCA)_, enjoy an explicit maximum likelihood estimate and a unifying geometric characterization relying on flag manifolds. SPCA enables us to answer a first key question on the identifiability of two adjacent sample eigenvalues. Among the outcomes, we find that a pair of adjacent eigenvalues with a relative gap lower than 21% needs at least 1000 data points to be distinguished, which is rarely satisfied by real datasets. To extend this result to more than two eigenvalues, we must perform model selection among the whole family of SPCA models, which contains PPCA. As the number of candidate models grows exponentially with the data dimension, we are encouraged to design non-greedy model selection heuristics. Fortunately, the stratification of the family of SPCA models and the induced partial order on the sequence of eigenvalue multiplicities enable the design of computationally efficient model selection heuristics, whose asymptotic consistency is moreover proven. The application of our model to synthetic and real datasets successfully shows that equalising groups of adjacent eigenvalues with small gaps is indeed justified when the number of available samples is limited. The experiments notably show that SPCA models achieve a better complexity/goodness-of-fit tradeoff than PPCA. The paper is organized in the following way. In Section 2, we present the PPCA model, its maximum likelihood estimate and number of free parameters, as well as a parsimonious version of PPCA called _Isotropic PPCA (IPPCA)_ (Bouveyron et al., 2011). In Section 3, we introduce the SPCA model. We derive an explicit maximum likelihood estimate that boils down to an eigenvalue decomposition of the sample covariance matrix followed by a block-averaging of groups of adjacent eigenvalues. We show that SPCA extends PPCA and IPPCA and comes with an insightful geometric interpretation relying on flag manifolds. This enables the accurate computation of the number of free parameters. In Section 4, we develop a model selection framework for SPCA. This framework first enables us to answer a key question on the distinguishability of two adjacent sample eigenvalues, and second to go beyond two eigenvalues using heuristics based on the structure of the SPCA family. In Section 5, we compare PPCA and SPCA models on synthetic and real datasets and show the improvement brought by equalising adjacent eigenvalues with small gaps. ## 2 Probabilistic Principal Component Analysis _Principal Component Analysis (PCA)_ is a ubiquitous tool in statistics, which however long lacked a statistical model formulation. Tipping and Bishop (1999) circumvented this issue by introducing _Probabilistic PCA (PPCA)_, which we describe in this section. ### Model Let \(\left(\boldsymbol{x}_{i}\right)_{i=1}^{n}\) be a \(p\)-dimensional dataset and \(q\in\left[0\ldots p-1\right]\) a lower dimension. In PPCA, the observed data is assumed to stem from a \(q\)-dimensional latent variable via a linear-Gaussian model \[\boldsymbol{x}=\mathit{W}\boldsymbol{z}+\boldsymbol{\mu}+\boldsymbol{\epsilon} \tag{1}\] with \(\boldsymbol{z}\sim\mathcal{N}\left(0,I_{q}\right)\), \(\mathit{W}\in\mathbb{R}^{p\times q}\), \(\boldsymbol{\mu}\in\mathbb{R}^{p}\), \(\boldsymbol{\epsilon}\sim\mathcal{N}\left(0,\sigma^{2}I_{p}\right)\). An illustration of the generative model is provided in Figure 1.
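A minimal simulation of the generative model (1), useful to check Equation (2) empirically; the dimensions and noise level below are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, n, sigma2 = 5, 2, 10000, 0.1

W = rng.standard_normal((p, q))          # ground-truth linear map
mu = rng.standard_normal(p)              # ground-truth shift

# x = W z + mu + eps, with z ~ N(0, I_q) and eps ~ N(0, sigma^2 I_p)
Z = rng.standard_normal((n, q))
E = np.sqrt(sigma2) * rng.standard_normal((n, p))
X = Z @ W.T + mu + E

# Empirical check of Eq. (2): Cov(x) ~ W W^T + sigma^2 I_p
S = np.cov(X, rowvar=False)
print(np.abs(S - (W @ W.T + sigma2 * np.eye(p))).max())   # small for large n
```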
Figure 1: PPCA generative model (1), assuming that the observed data was first sampled from a lower dimensional normal latent variable, then affinely mapped to the ambient space and finally added an isotropic Gaussian noise. _Left:_ Latent variable. _Middle:_ Linear map to data space. _Right:_ Additive shift and noise.

Through classical probability theory, one can show that the observed data is modeled as following a multivariate Gaussian distribution: \[\boldsymbol{x}\sim\mathcal{N}\left(\boldsymbol{\mu},\mathit{W}\mathit{W}^{\top}+\sigma^{2}I_{p}\right). \tag{2}\] An analysis of the covariance matrix reveals that the distribution is actually multivariate on the first \(q\) dimensions and isotropic on the remaining \(p-q\) ones. Hence there is an implicit constraint on the covariance model of the data, namely an equality constraint on the lowest \(p-q\) eigenvalues. ### Maximum Likelihood The PPCA model parameters are the shift \(\boldsymbol{\mu}\), the linear map \(\mathit{W}\) and the noise factor \(\sigma^{2}\). Given some observed data \(\left(\boldsymbol{x}_{i}\right)_{i=1}^{n}\), \(\overline{\boldsymbol{x}}:=\frac{1}{n}\sum_{i=1}^{n}\boldsymbol{x}_{i}\) its mean and \(S:=\sum_{j=1}^{p}\lambda_{j}\boldsymbol{v}_{j}\boldsymbol{v}_{j}^{\top}\) its sample covariance matrix, with \(\lambda_{1}\geq\cdots\geq\lambda_{p}\geq 0\) its eigenvalues and \(\boldsymbol{v}_{1}\perp\cdots\perp\boldsymbol{v}_{p}\) some associated eigenvectors, we can explicitly infer the parameters that are the most likely to have generated these data using maximum likelihood estimation. The most likely shift is the empirical mean; the most likely linear map is the composition of a scaling by the \(q\) highest eigenvalues \(\Lambda_{q}:=\operatorname{diag}\left(\lambda_{1},\ldots,\lambda_{q}\right)\) (up to the noise) and an orthogonal transformation by the associated \(q\) eigenvectors \(V_{q}:=[\boldsymbol{v}_{1}|\ldots|\boldsymbol{v}_{q}]\); the most likely noise factor is the average of the \(p-q\) discarded eigenvalues. \[\hat{\boldsymbol{\mu}}=\overline{\boldsymbol{x}},\qquad\qquad\hat{W}=\,V_{q}\left(\Lambda_{q}-\hat{\sigma}^{2}I_{q}\right)^{\frac{1}{2}},\qquad\qquad\hat{\sigma}^{2}=\frac{1}{p-q}\sum_{j=q+1}^{p}\lambda_{j}. \tag{3}\] One can then easily express the maximum log-likelihood \[\ln\hat{\mathcal{L}}:=-\frac{n}{2}\left(p\ln(2\pi)+\sum_{j=1}^{q}\ln\lambda_{j}+(p-q)\ln\left(\frac{1}{p-q}\sum_{j=q+1}^{p}\lambda_{j}\right)+p\right). \tag{4}\] ### Parsimony and model selection The previously described PPCA is already a somewhat parsimonious statistical model. Indeed, it not only makes the assumption that the observed data follows a multivariate Gaussian distribution, which is the entropy-maximizing distribution at a fixed mean and covariance, but it also reduces the number of covariance parameters by constraining the last \(p-q\) eigenvalues to be equal. The covariance matrix \(\Sigma:=\mathit{W}\!W^{\top}+\sigma^{2}I_{p}\) is parameterized by \(\mathit{W}\in\mathbb{R}^{p\times q}\) and \(\sigma^{2}\). It is shown in Tipping and Bishop (1999) to have \(\kappa:=pq-\frac{q(q-1)}{2}+1\) free parameters, the removal of \(\frac{q(q-1)}{2}\) parameters being due to the invariance of the latent variable distribution to a rotation. Although not evident at first sight with this expression of \(\kappa\), we have a drop of complexity--with respect to the full covariance model which is of dimension \(\frac{p(p+1)}{2}\)--due to the equality constraint on the low eigenvalues, and the number of parameters decreases along with \(q\). As shown in Subsection 3.4, we can give a more insightful geometric interpretation to the number of free parameters in the PPCA model using Stiefel manifolds (Edelman et al., 1998). For a given data dimension \(p\), a PPCA model is indexed by its latent variable dimension \(q\in[0\ldots p-1]\).
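The closed-form estimates (3) and the log-likelihood (4) translate directly into a few lines of NumPy; this sketch assumes \(\lambda_{q}>\hat{\sigma}^{2}\) so that the square root in \(\hat{W}\) is well defined:

```python
import numpy as np

def ppca_mle(X, q):
    """Closed-form PPCA maximum likelihood, Eq. (3)-(4)."""
    n, p = X.shape
    mu = X.mean(axis=0)
    lam, V = np.linalg.eigh(np.cov(X, rowvar=False, bias=True))
    lam, V = lam[::-1], V[:, ::-1]            # descending eigenvalues
    sigma2 = lam[q:].mean()                   # average of the p-q discarded eigenvalues
    W = V[:, :q] * np.sqrt(lam[:q] - sigma2)  # V_q (Lambda_q - sigma^2 I_q)^(1/2)
    ll = -n / 2 * (p * np.log(2 * np.pi) + np.log(lam[:q]).sum()
                   + (p - q) * np.log(sigma2) + p)
    return mu, W, sigma2, ll

X = np.random.default_rng(0).standard_normal((500, 5)) * [3.0, 2.0, 1.0, 1.0, 1.0]
mu, W, s2, ll = ppca_mle(X, q=2)
print(round(s2, 2))   # close to 1.0, the average of the three smallest eigenvalues
```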
Although not evident at first sight with this expression of \(\kappa\), we have a drop of complexity--with respect to the full covariance model which is of dimension \(\frac{p(p+1)}{2}\)--due to the equality constraint on the low eigenvalues, and the number of parameters decreases along with \(q\). As shown in Subsection 3.4, we can give a more insightful geometric interpretation to the number of free parameters in the PPCA model using Stiefel manifolds (Edelman et al., 1998). For a given data dimension \(p\), a PPCA model is indexed by its latent variable dimension \(q\in[0\,\mathp ## 3 Stratified Principal Component Analysis Inspired by the complexity drop induced by the isotropy in the noise space in PPCA, we aim at investigating more general isotropy constraints on the full data space. In this section, we introduce _Stratified PCA (SPCA)_, a covariance model with a general constraint on the sequence of eigenvalue multiplicities. SPCA generalizes PPCA and IPPCA and unifies them in a new family of models parameterized with flag manifolds (Monk, 1959). Flag manifolds are themselves generalizations of Stiefel manifolds and Grassmannians (Edelman et al., 1998), hence the link between PPCA, IPPCA and SPCA that is detailed in this section. ### Model We recall that in combinatorics, a _composition_ of an integer \(p\) is an ordered sequence of positive integers that sums up to \(p\). It has to be distinguished from a _partition_ of an integer, which doesn't take into account the ordering of the parts. Let \(\boldsymbol{\gamma}:=(\gamma_{1},\gamma_{2},\ldots,\gamma_{d})\in\mathcal{C}(p)\) be a composition of a positive integer \(p\). We define the _Stratified PCA_ model of _type_\(\boldsymbol{\gamma}\), noted \(\boldsymbol{\gamma}\)-SPCA as \[\boldsymbol{x}=\sum_{k=1}^{d-1}\sigma_{k}\,U_{k}\boldsymbol{z}_{k}+\boldsymbol {\mu}+\boldsymbol{\epsilon}. \tag{6}\] In this formula, \(\sigma_{1}>\cdots>\sigma_{d-1}>0\) are decreasing scaling factors, \(U_{k}\in\mathrm{St}\left(p,\gamma_{k}\right)\) are mutually orthogonal frames (belonging to Stiefel manifolds) and \(\boldsymbol{z}_{k}\sim\mathcal{N}\left(0_{\gamma_{k}},I_{\gamma_{k}}\right)\) are independent latent variables. \(\boldsymbol{\mu}\in\mathbb{R}^{p}\), \(\sigma^{2}>0\) and \(\boldsymbol{\epsilon}\sim\mathcal{N}\left(0_{p},\sigma^{2}I_{p}\right)\) are the classical shift and isotropic noise present in PPCA. An illustration is provided in Figure 2. Similarly as for PPCA, we can compute the population density \[\boldsymbol{x}\sim\mathcal{N}\left(\boldsymbol{\mu},\,\,\sum_{k=1}^{d-1} \sigma_{k}^{2}\,U_{k}\,U_{k}^{\top}+\sigma^{2}I_{p}\right). \tag{7}\] The expression of the covariance matrix \(\Sigma:=\sum_{k}\sigma_{k}^{2}\,U_{k}\,U_{k}^{\top}+\sigma^{2}I_{p}\in\mathbb{ R}^{p\times p}\) can be simplified by gathering all the orthonormal frames into one orthogonal matrix \(Q:=[\,U_{1}|\ldots|\,U_{d-1}|\,U_{d}]\in\mathcal{O}(p)\) where \(U_{d}\in\mathrm{St}\left(p,\gamma_{d}\right)\) is an orthogonal completion of the previous frames. Writing \(L:=\mathrm{diag}\left(\ell_{1}I_{\gamma_{1}},\ldots,\ell_{d}I_{\gamma_{d}}\right)\), with \(\ell_{k}:=\sigma_{k}^{2}+\sigma^{2}\) for \(k\in[1\,\mathpunct{\ldotp}d-1]\) and \(\ell_{d}:=\sigma^{2}\), one gets \[\Sigma=QLQ^{\top}. \tag{8}\] Hence, the fitted density of \(\boldsymbol{\gamma}\)-SPCA is a multivariate Gaussian with repeated eigenvalues \(\ell_{1}>\cdots>\ell_{d}>0\) of respective multiplicity \(\gamma_{1},\ldots,\gamma_{d}\). 
Therefore, PPCA and IPPCA can be seen as SPCA models, with respective types \(\boldsymbol{\gamma}=(1,\ldots,1,p-q)\) and \(\boldsymbol{\gamma}=(q,p-q)\). From a geometric point of view, the fitted density is isotropic on the eigenspaces of \(\Sigma\), which constitute a sequence of mutually orthogonal subspaces of respective dimension \(\gamma_{1},\ldots,\gamma_{d}\), whose direct sum generates the data space. Such a sequence is called a _flag_ of linear subspaces of _type_ \(\boldsymbol{\gamma}\) (Monk, 1959). Hence flags are natural objects to geometrically interpret SPCA, and so a fortiori PPCA and IPPCA. We detail this point later in Subsection 3.4.

### Type

Just like the latent variable dimension \(q\in[0\ldots p-1]\) is a central notion in PPCA, the type \(\boldsymbol{\gamma}\in\mathcal{C}(p)\) is a central notion in SPCA. In this subsection, we introduce the concepts of _refinement_ and _\(\boldsymbol{\gamma}\)-composition_ to make its analysis more convenient.

Let \(\boldsymbol{\gamma}:=(\gamma_{1},\gamma_{2},\ldots,\gamma_{d})\in\mathcal{C}(p)\). We say that \(\boldsymbol{\gamma}^{\prime}\in\mathcal{C}(p)\) is a _refinement_ of \(\boldsymbol{\gamma}\), and write \(\boldsymbol{\gamma}\preceq\boldsymbol{\gamma}^{\prime}\), if we can write \(\boldsymbol{\gamma}^{\prime}:=(\boldsymbol{\gamma}_{1}^{\prime},\boldsymbol{\gamma}_{2}^{\prime},\ldots,\boldsymbol{\gamma}_{d}^{\prime})\), with \(\boldsymbol{\gamma}_{k}^{\prime}\in\mathcal{C}(\gamma_{k}),\forall k\in[1\ldots d]\). For instance, one has \((2,3)\preceq(1,1,2,1)\).

Let \(\boldsymbol{\gamma}:=(\gamma_{1},\gamma_{2},\ldots,\gamma_{d})\in\mathcal{C}(p)\). Then each integer between \(1\) and \(p\) can be uniquely assigned a _part_ of the composition, indexed between \(1\) and \(d\). We define the _\(\boldsymbol{\gamma}\)-composition function_ \(\phi_{\boldsymbol{\gamma}}\colon[1\ldots p]\to[1\ldots d]\) to be this surjective map, such that \(\phi_{\boldsymbol{\gamma}}(j)\) is the index \(k\) of the part the integer \(j\) belongs to. For instance, one has \(\phi_{(2,3)}(1)=\phi_{(2,3)}(2)=1\) and \(\phi_{(2,3)}(3)=\phi_{(2,3)}(4)=\phi_{(2,3)}(5)=2\).

Then, intuitively and with slight abuse of notation, each object of size \(p\) can be partitioned into \(d\) sub-objects of respective size \(\gamma_{k}\), for \(k\in[1\ldots d]\). We will call it the _\(\boldsymbol{\gamma}\)-composition_ of an object. We give two examples. Let \(Q\in\mathcal{O}(p)\). The _\(\boldsymbol{\gamma}\)-composition of \(Q\)_ is the sequence \(Q^{\boldsymbol{\gamma}}:=(\,Q_{1},\ldots,Q_{d})\) such that \(Q_{k}\in\mathbb{R}^{p\times\gamma_{k}},\forall k\in[1\ldots d]\) and \(Q=[Q_{1}|\ldots|Q_{d}]\). Let \(\boldsymbol{\lambda}:=(\lambda_{1},\ldots,\lambda_{p})\) be a sequence of decreasing eigenvalues. The _\(\boldsymbol{\gamma}\)-composition of \(\boldsymbol{\lambda}\)_ is the sequence \(\boldsymbol{\lambda}^{\boldsymbol{\gamma}}:=\left(\boldsymbol{\lambda}^{1},\ldots,\boldsymbol{\lambda}^{d}\right)\) such that \(\boldsymbol{\lambda}^{k}\in\mathbb{R}^{\gamma_{k}},\ \forall k\in[1\ldots d]\) and \(\boldsymbol{\lambda}=\left[\boldsymbol{\lambda}^{1}|\ldots|\boldsymbol{\lambda}^{d}\right]\). We will call the _\(\boldsymbol{\gamma}\)-averaging_ of \(\boldsymbol{\lambda}\) the sequence \(\overline{\boldsymbol{\lambda}^{\boldsymbol{\gamma}}}:=\left(\overline{\boldsymbol{\lambda}^{1}},\ldots,\overline{\boldsymbol{\lambda}^{d}}\right)\in\mathbb{R}^{d}\) of average eigenvalues in \(\boldsymbol{\lambda}^{\boldsymbol{\gamma}}\).
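These bookkeeping notions are straightforward to implement. The following is a minimal sketch (the helper names `phi` and `gamma_average` are ours); it reproduces the \(\phi_{(2,3)}\) example above and a \(\boldsymbol{\gamma}\)-averaging.

```python
import numpy as np

def phi(gamma, j):
    """gamma-composition function: 1-based index of the part of gamma
    that the integer j in [1..p] belongs to."""
    bounds = np.cumsum(gamma)
    return int(np.searchsorted(bounds, j)) + 1

def gamma_average(lam, gamma):
    """gamma-averaging of a descending eigenvalue sequence lam:
    the mean of the eigenvalues within each part of gamma."""
    parts = np.split(np.asarray(lam), np.cumsum(gamma)[:-1])
    return [part.mean() for part in parts]

print(phi((2, 3), 2), phi((2, 3), 3))                 # 1 2
print(gamma_average([5., 3., 2., 2., 2.], (2, 3)))    # [4.0, 2.0]
```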
Figure 2: \(\boldsymbol{\gamma}\)-SPCA generative model (6), assuming that the observed data was first sampled from a sequence of independent lower dimensional normal latent variables, then linearly mapped to mutually orthogonal subspaces and finally shifted and added an isotropic Gaussian noise. The resulting density is a multivariate Gaussian with repeated eigenvalues, whose multiplicities are given by the type \(\boldsymbol{\gamma}\).

### Maximum Likelihood

As for PPCA, the log-likelihood of the model is easily computed: \[\ln\mathcal{L}\left(\boldsymbol{\mu},\Sigma\right)=-\frac{n}{2}\left(p\ln(2\pi)+\ln|\Sigma|+\operatorname{tr}\left(\Sigma^{-1}\,C\right)\right), \tag{9}\] with \(C=\frac{1}{n}\sum_{i=1}^{n}(\boldsymbol{x}_{i}-\boldsymbol{\mu})(\boldsymbol{x}_{i}-\boldsymbol{\mu})^{\top}\). We will now show that the maximum likelihood estimate for \(\boldsymbol{\gamma}\)-SPCA consists of the eigenvalue decomposition of the sample covariance matrix followed by a block-averaging of adjacent eigenvalues such that the imposed type \(\boldsymbol{\gamma}\) is respected; in other words, a \(\boldsymbol{\gamma}\)-averaging of the eigenvalues. Beforehand, we naturally extend the notion of _type_ to symmetric matrices, as the sequence of multiplicities of their eigenvalues sorted in descending order.

**Theorem 1.** _Let \(\left(\boldsymbol{x}_{i}\right)_{i=1}^{n}\) be a \(p\)-dimensional dataset, \(\overline{\boldsymbol{x}}:=\frac{1}{n}\sum_{i=1}^{n}\boldsymbol{x}_{i}\) its mean and \(S:=\sum_{j=1}^{p}\lambda_{j}\boldsymbol{v}_{j}\boldsymbol{v}_{j}^{\top}\) its sample covariance matrix, with \(\lambda_{1}\geq\dots\geq\lambda_{p}\geq 0\) its eigenvalues and \([\boldsymbol{v}_{1}|\dots|\boldsymbol{v}_{p}]:=\,V\in\mathcal{O}(p)\) some associated eigenvectors. The maximum likelihood parameters of \(\boldsymbol{\gamma}\)-SPCA are_ \[\hat{\boldsymbol{\mu}}=\overline{\boldsymbol{x}},\qquad\qquad\hat{Q}=\,V,\qquad\qquad\left(\hat{\ell}_{1},\dots,\hat{\ell}_{d}\right)=\overline{\boldsymbol{\lambda}^{\boldsymbol{\gamma}}}. \tag{10}\] _The parameters \(\hat{\boldsymbol{\mu}}\) and \(\hat{\ell}_{1},\dots,\hat{\ell}_{d}\) are unique. \(\hat{Q}\) is not unique but the flag of linear subspaces generated by its \(\boldsymbol{\gamma}\)-composition almost surely is--more precisely, the flag is unique if and only if the type of \(S\) is a refinement of \(\boldsymbol{\gamma}\), which is almost sure._

Proof. The proof is given in Appendix A. It relies on optimization and linear algebra. We emphasize that the almost-sure uniqueness of the solution comes from the null Lebesgue measure of the set of symmetric matrices with repeated eigenvalues.

One can then easily express the maximum log-likelihood of \(\boldsymbol{\gamma}\)-SPCA: \[\ln\hat{\mathcal{L}}=-\frac{n}{2}\left(p\ln(2\pi)+\sum_{k=1}^{d}\gamma_{k}\ln\overline{\boldsymbol{\lambda}^{k}}+p\right). \tag{11}\]

### Geometric interpretation with flag manifolds

As intuited in Subsection 3.1 and then proven in Theorem 1, the appropriate parameter space for \(Q\) in \(\boldsymbol{\gamma}\)-SPCA is the space of flags of type \(\boldsymbol{\gamma}\), denoted \(\operatorname{Flag}(\boldsymbol{\gamma})\). The geometry of such a set is well known (Monk, 1959). \(\operatorname{Flag}(\boldsymbol{\gamma})\) is a smooth quotient manifold, consisting of equivalence classes of orthogonal matrices \[\operatorname{Flag}(\boldsymbol{\gamma})\cong\mathcal{O}(p)/\left(\mathcal{O}(\gamma_{1})\times\dots\times\mathcal{O}(\gamma_{d})\right).
\tag{12}\] This result enables an exact count of the number of free parameters in SPCA. Let us first note that the other parameters are \(\boldsymbol{\mu}\in\mathbb{R}^{p}\) and \(L\in\operatorname{D}(\boldsymbol{\gamma}):=\left\{\operatorname{diag}\left(\ell_{1}I_{\gamma_{1}},\dots,\ell_{d}I_{\gamma_{d}}\right)\in\mathbb{R}^{p\times p}\colon\ell_{1}>\dots>\ell_{d}>0\right\}\), which can be seen as a convex cone of \(\mathbb{R}^{d}\).

**Proposition 2.** _The number of free parameters in \(\boldsymbol{\gamma}\)-SPCA is_ \[\kappa:=p+d+\frac{p(p-1)}{2}-\sum_{k=1}^{d}\frac{\gamma_{k}(\gamma_{k}-1)}{2}. \tag{13}\]

This geometric interpretation sheds light on PPCA, which--we recall--is a special case of SPCA with \(\boldsymbol{\gamma}=(1,\ldots,1,p-q)\). First, as flags of type \((1,\ldots,1,p-q)\) are nothing but Stiefel manifolds (up to changes of signs), we can naturally parameterize PPCA models with those spaces, which is already commonly done in the literature. Second, we can now see PPCA as removing \((p-q-1)+\frac{(p-q)(p-q-1)}{2}\) parameters with respect to the full covariance model by imposing an isotropy constraint on the noise space. SPCA then goes beyond the noise space and results in even more parsimonious models. We can extend this analysis to the IPPCA model, which--we recall--is a special case of SPCA with \(\boldsymbol{\gamma}=(q,p-q)\). Hence we can parameterize it with flags of type \((q,p-q)\), which are nothing but Grassmannians.

## 4 Model selection

As discussed in Appendix A.2, sample covariance matrices almost surely have distinct eigenvalues. This makes the full covariance model the most likely to have generated some observed data. However, it does not mean that all the parameters--that is, the eigenvectors and the eigenvalues--can be accurately identified, especially in the small-data regime. Hence, one can wonder if a covariance model with repeated eigenvalues and multidimensional eigenspaces would not be more robust. The results of the previous section enable us to provide a possible answer, through SPCA model selection. First, we study the identifiability of two adjacent sample eigenvalues and deduce that one rarely has enough samples to distinguish them. We conclude that when the eigenvalue gap is small and the number of samples is limited, one should rather equalise the eigenvalues and gather the associated eigenvectors in a multidimensional eigenspace. Second, to extend this result to more than two eigenvalues, we develop a general model selection framework based on the stratified structure of SPCA.

### Bayesian Information Criterion

In this work, we focus on one simple model selection criterion to set up the ideas. The _Bayesian Information Criterion (BIC)_ is defined as \[\mathrm{BIC}:=\kappa\ln n-2\ln\hat{\mathcal{L}}, \tag{14}\] where \(\kappa\) is the number of free parameters--computed in Proposition 2--and \(\ln\hat{\mathcal{L}}\) is the maximum log-likelihood (11). By dropping the terms that are constant across models (those depending only on \(p\) and \(n\)) and dividing by \(n\), we get the following proposition.

**Proposition 3.** _The SPCA model minimizing the BIC is_ \[\hat{\boldsymbol{\gamma}}=\operatorname*{arg\,min}_{\boldsymbol{\gamma}\in\mathcal{C}(p)}\left(d-\sum_{k=1}^{d}\frac{\gamma_{k}(\gamma_{k}-1)}{2}\right)\frac{\ln n}{n}+\sum_{k=1}^{d}\gamma_{k}\ln\overline{\boldsymbol{\lambda}^{k}}.
\tag{15}\]

From now on, we remove the shift parameter \(\boldsymbol{\mu}\in\mathbb{R}^{p}\) because it has the same complexity across models, and rather consider SPCA as a covariance model, as done in Tipping and Bishop (1999).

### Eigenvalue equalisation

To better understand the dynamics of SPCA model selection, we quantify the variation of the BIC induced by the equalisation of two adjacent eigenvalues. More precisely and without loss of generality, we compare the BIC of a _full covariance model_ \(\boldsymbol{\gamma}=(1,\dots,1)\) to the one of an _equalised covariance model_ \(\boldsymbol{\gamma}^{\prime}=(1,\dots,1,2,1,\dots,1)\).

**Proposition 4.** _Let \(\left(\boldsymbol{x}_{i}\right)_{i=1}^{n}\) be a \(p\)-dimensional dataset with \(n\) samples, \(\lambda_{j}\geq\lambda_{j+1}\) two adjacent sample eigenvalues and \(\delta_{j}:=\frac{\lambda_{j}-\lambda_{j+1}}{\lambda_{j}}\) their relative eigengap. If_ \[\delta_{j}<2-2e^{2\frac{\ln n}{n}}+2\sqrt{e^{4\frac{\ln n}{n}}-e^{2\frac{\ln n}{n}}}:=\delta(n), \tag{16}\] _then the equalised covariance model has a lower BIC than the full one._

Proof. The proof is given in Appendix B.1.

A few values of the _threshold function_ \(\delta(n)\) are reported in Figure 3. The figure can be read in the following way: if a pair of sample eigenvalues has a relative eigengap lower than \(21\%\), then we need at least \(1000\) data points to statistically distinguish them. This is an important result, as many real datasets do not fulfill this condition, as we will see in the next section.

Figure 3: Plot of the threshold function \(\delta\) (Proposition 4), corresponding to the minimal number of samples needed to distinguish two adjacent eigenvalues, separated by a given relative eigengap. Real datasets that fulfill this condition for any pair of adjacent eigenvalues are pretty scarce.

To our knowledge, this is the first study of the parsimony induced by the equalisation of two adjacent sample eigenvalues. It is enabled by the very design of SPCA and the geometric interpretation of its parameter space, involving flag manifolds. We could extend this study to the equalisation of more than two eigenvalues, but it would not necessarily yield a condition as simple as the one of Proposition 4. Hence, in the following, we establish a general framework for SPCA model selection. We study the structure of the family of models and design efficient model selection heuristics.

### Structure of the Stratified PCA family

Given a dimension \(p\), PPCA has \(p\) models, ranging from the isotropic Gaussian (\(q=0\)) to the full covariance model (\(q=p-1\)). We can naturally equip the set of PPCA models with the _less-than-or-equal_ relation \(\leq\) on the latent variable dimension \(q\), which makes it a totally ordered set. The complexity of the model then increases with \(q\) (cf. Subsection 3.4). The characterization of the SPCA family structure is a bit more technical, as it requires studying the hierarchy of types, involving the concept of integer composition. Fortunately, the structure of such sets has already been well studied in combinatorics (Bergeron et al., 1995). Moreover, several works have shown and exploited the stratification of symmetric matrices by eigenvalue multiplicity (Arnold, 1995; Groisser et al., 2017; Breiding et al., 2018). Hence, without proof, we can state the following result.
**Proposition 5.** _The family of \(p\)-dimensional SPCA models induces a stratification of the space of full-rank \(p\times p\) covariance matrices by eigenvalue multiplicity. The refinement relation \(\preceq\) (3.2) makes it a partially ordered set of cardinality \(2^{p-1}\)._

Hence the set of SPCA models at a given data dimension can be represented using a Hasse diagram, as done in Figure 4. We can see that SPCA contains PPCA, IPPCA, and many new models. SPCA therefore has the advantage of possibly providing better-adapted models than PPCA and IPPCA, but also the drawback of requiring more comparisons for model selection. In high dimension this quickly becomes computationally heavy, so we need to define heuristics for selecting only a small number of models to compare. The previously derived partial order \(\preceq\) on the set of SPCA models allows simple non-greedy heuristics for model selection.

Figure 4: Hasse diagram of 5-dimensional SPCA models. Each node represents a model. The associated label and color represent respectively the model type and its number of free parameters. The family contains 16 models: the isotropic Gaussian is the bottom node, the full covariance model is the top node, the five PPCA models are on the right part and the four IPPCA models are on the first floor.

### Heuristics

In this subsection, we develop two simple heuristics for model selection. Their common idea is to a priori choose a subfamily of candidate models based on the shape of the eigenvalue profile, and then restrict the model selection process to this smaller subset.

#### 4.4.1 Hierarchical clustering of eigenvalues

In this heuristic, the subset of candidate models is generated by the _hierarchical clustering_ (Ward, 1963) of the sample eigenvalues. The general principle of hierarchical clustering is to agglomerate one by one the eigenvalues into clusters, thanks to a so-called _cluster-linkage criterion_, which is a measure of dissimilarity between clusters. Here, given two clusters of sample eigenvalues \(A\), \(B\) and any continuous distance \(\Delta\) (such as the relative eigengap defined in Proposition 4), we take as a cluster-linkage criterion the distance between the average eigenvalue in each cluster, \(\Delta\left(\overline{A},\overline{B}\right)\). The method is detailed in Algorithm 1 and illustrated in Figure 5; a minimal code sketch is also given below.

The hierarchical clustering heuristic creates a _trajectory_ in the Hasse diagram of SPCA types \((\boldsymbol{\gamma}^{t})_{t=1}^{p}\). The sequence starts from \(\boldsymbol{\gamma}^{1}=(1,\ldots,1)\), the full covariance model, in which each eigenvalue is in its own cluster. Then, one by one, the eigenvalues that are the closest in terms of distance \(\Delta\) are agglomerated, and the inter-cluster distances are updated. The algorithm ends when we reach the isotropic covariance model, \(\boldsymbol{\gamma}^{p}=(p)\), in which all the eigenvalues are in the same cluster. The hierarchical clustering heuristic hence generates a subfamily of \(p\) models that can then be compared within a classical model selection framework. In order to assess the quality of such a heuristic, we show the following consistency result.

**Proposition 6.** _If the true generative model belongs to SPCA, then the hierarchical clustering heuristic (4.4.1) will asymptotically consistently select it._

Proof. The proof is given in Appendix B.2.
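Since Algorithm 1 itself is not reproduced in this excerpt, here is a minimal sketch of the heuristic as described above (the function name `clustering_trajectory` and the default absolute-gap distance are our own choices; any continuous distance \(\Delta\), such as the relative eigengap, could be substituted). Because the eigenvalues are sorted, only adjacent clusters ever need to be compared, so each merged cluster stays contiguous and the cluster sizes form a valid composition.

```python
import numpy as np

def clustering_trajectory(lam, dist=lambda a, b: abs(a - b)):
    """Agglomerate sorted sample eigenvalues: starting from the full
    covariance type (1, ..., 1), repeatedly merge the two adjacent clusters
    whose average eigenvalues are closest, until the isotropic type (p,)."""
    clusters = [[x] for x in lam]                 # one eigenvalue per cluster
    trajectory = [tuple(len(c) for c in clusters)]
    while len(clusters) > 1:
        means = [np.mean(c) for c in clusters]
        k = min(range(len(means) - 1),
                key=lambda i: dist(means[i], means[i + 1]))
        clusters[k:k + 2] = [clusters[k] + clusters[k + 1]]   # merge neighbours
        trajectory.append(tuple(len(c) for c in clusters))
    return trajectory

print(clustering_trajectory([6.0, 5.9, 3.0, 1.0, 0.9]))
# [(1, 1, 1, 1, 1), (2, 1, 1, 1), (2, 1, 2), (2, 3), (5,)]
```

The returned trajectory contains exactly \(p\) types, which is the reduced subfamily on which model selection is then performed.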
Hence, the hierarchical clustering heuristic generates a hierarchical family of models of different complexities, and provided enough data, the true model will be included. Using an asymptotic model selection criterion that trades off goodness-of-fit against complexity, like the BIC, then allows the true model to be selected. We now propose a second heuristic that is not hierarchical but instead makes a prior assumption on the model complexity and then selects the one that has the maximum likelihood among all the candidates.

#### 4.4.2 Prior on the length of the type

In this heuristic, we perform model selection at a given floor of the Hasse diagram (cf. Figure 4). More precisely, we consider for selection only the models that have a given type length \(d\), as done in IPPCA with \(d=2\). As with the hierarchical clustering heuristic, the type-length prior heuristic drastically reduces the search space, this time to \(\binom{p-1}{d-1}\) models.

Figure 5: Hierarchical clustering of sample eigenvalues, using the Euclidean distance. The successive steps in the hierarchical clustering generate a subfamily of SPCA models, of cardinality \(p\). _Left:_ sample eigenvalues at a given step of the hierarchical clustering, \(\boldsymbol{\gamma}^{t}=(2,1,1,4,1,3,2,1)\). The colors correspond to the parts of \(\boldsymbol{\gamma}^{t}\). _Middle:_ hierarchical clustering dendrogram. _Right:_ conceptual view of the hierarchical clustering trajectory on the SPCA Hasse diagram.

As in the hierarchical clustering heuristic (4.4.1), we could then use the BIC to choose the best model among this reduced family. We provide an additional criterion that is nothing but the maximum likelihood itself. We indeed manage to extend to SPCA the surprising result from Bouveyron et al. (2011) that the maximum likelihood criterion alone asymptotically consistently finds the true intrinsic dimension within the IPPCA setting. As this criterion empirically yields competitive results with respect to other classical model selection criteria in the large-sample, low-signal-to-noise-ratio regime, we expect it to be of interest in SPCA as well.

**Proposition 7.** _If the true generative model belongs to SPCA, then the maximum likelihood criterion alone will asymptotically consistently select it within the type-length prior heuristic (4.4.2)._

Proof. The proof is given in Appendix B.3. We emphasize the use of Jensen's inequality, which elegantly generalizes the proof of Bouveyron et al. (2011).

Hence we derived two simple heuristics for model selection, taking into account the structure of the SPCA model family. We now have all the tools needed for inference and model selection using SPCA.

## 5 Experiments

As seen in the previous sections, given a dataset and its sample covariance matrix, SPCA equalises the eigenvalues and gives rise to new multidimensional eigenspaces. This causes an additional drop of complexity with respect to PPCA which, according to Figure 3, seems justified when the eigenvalue gaps are small in view of the number of available samples. In this section, we confirm this hypothesis experimentally on some synthetic and real datasets.

### Simpler models for all sample sizes

A key result in the previous section is that we rarely have enough available samples to confidently assert that two adjacent sample eigenvalues are distinct. Consequently, PPCA models could be made more parsimonious by equalising the adjacent sample eigenvalues with small gaps in the signal space as well.
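The closed-form threshold (16) of Proposition 4 makes this easy to check numerically. A minimal sketch (the helper name `delta` is ours), which reproduces the roughly \(21\%\) threshold quoted after Proposition 4 for \(n=1000\):

```python
import numpy as np

def delta(n):
    """Threshold on the relative eigengap below which equalising two
    adjacent sample eigenvalues lowers the BIC (Proposition 4, eq. (16))."""
    a = np.exp(2 * np.log(n) / n)
    return 2 - 2 * a + 2 * np.sqrt(a**2 - a)

for n in (10, 100, 1000, 10000):
    print(n, round(float(delta(n)), 3))   # n = 1000 gives about 0.21
```

Any pair of adjacent sample eigenvalues whose relative eigengap falls below `delta(n)` is better modelled as a repeated eigenvalue according to the BIC.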
less complex models along the whole trajectory. Moreover, interestingly, we note the consistent increase of model complexity with the number of samples: as the sample size increases, SPCA can more confidently distinguish the sample eigenvalues. Third, on the Hasse diagram, we can see that SPCA follows a trajectory, going up with the
number of available samples, which recalls the kind of subfamily generated by the hierarchical clustering heuristic (cf. Figure 5). To conclude, we see on this synthetic example that SPCA achieves a better complexity/goodness-of-fit tradeoff than PPCA in a wide range of sample sizes by equalising the highest eigenvalues.

### Parsimony on real data

As the previous experiment was synthetic, we naturally wonder whether the same conclusions carry over to real data. Indeed, as real datasets follow rather non-linear and multimodal distributions, the application of a simple linear-Gaussian model like SPCA to real datasets seems limited. However, PPCA has the same limits and remains widely used as a simple representation. In this experiment, we compare PPCA to SPCA on several classical real datasets extracted from the open source UCI Machine Learning Repository: _Glass Identification_, _Ionosphere_, _Wine_ and _Breast Cancer Wisconsin (WDBC)_. Due to the high dimensionality of some datasets, we cannot compare all SPCA models exhaustively, therefore we use the hierarchical clustering heuristic introduced in Subsection 4.4. As those datasets are made for classification problems, we keep only one class in order to make the data distribution more unimodal. For each dataset, we compare the best SPCA model to the best PPCA model (in terms of BIC). The results are reported in Table 1. We see that SPCA again achieves a better complexity/goodness-of-fit tradeoff than PPCA by equalising some eigenvalues with small gaps. For conciseness, we do not report the sample eigenvalue profiles of those datasets, but we can check that none of them satisfies the relative eigengap condition of Proposition 4. Hence, the use of SPCA to model real datasets is justified.

In addition to the previous experiment, we also perform a floor-by-floor model comparison on the Glass dataset. More precisely, for each type length, we compare the unique associated PPCA model to the best SPCA one using the type-length prior heuristic introduced in Subsection 4.4. The results are reported in Table 2. We can see that the rich family of SPCA models with a prespecified number of distinct eigenvalues \(d\in[1\ldots p]\), which is of cardinality \(\binom{p-1}{d-1}\), substantially increases the modelling power of PPCA, which only contains one model for each \(d\).

\begin{table} \begin{tabular}{|c c c|c c|c c|} \hline \multicolumn{3}{|c|}{**Dataset**} & \multicolumn{2}{c|}{**PPCA**} & \multicolumn{2}{c|}{**SPCA**} \\ Name & \(n\) & \(p\) & \(\boldsymbol{\gamma}\) & BIC & \(\boldsymbol{\gamma}\) & BIC \\ \hline \hline Glass & 17 & 9 & \((1^{9})\) & \(-16.77\) & \((1,2,3,1^{3})\) & \(-17.49\) \\ Ion & 224 & 32 & \((1^{30},2)\) & \(-26.59\) & \((1^{5},2,13,6,4,2)\) & \(-28.50\) \\ Wine & 48 & 13 & \((1^{3},10)\) & \(+36.35\) & \((8,5)\) & \(+35.57\) \\ WDBC & 357 & 30 & \((1^{30})\) & \(+25.12\) & \((2,1,2,1,2,5,1,2,1,3^{2},4,1^{3})\) & \(+24.72\) \\ \hline \end{tabular} \end{table} Table 1: Comparison of PPCA and SPCA best models on several real datasets.
We can see that for any dataset, SPCA finds new models that have a lower BIC. For instance, on the Wine dataset, PPCA finds a principal subspace of dimension \(3\) with distinct eigenvalues, while SPCA finds a principal subspace of dimension \(8\) with isotropic variability. To shrink long types, we use the power notation to indicate repetition of elements; for instance \((1,1,1,2,2,3):=(1^{3},2^{2},3)\).

\begin{table} \begin{tabular}{|c c|c c|} \hline \multicolumn{2}{|c|}{**PPCA**} & \multicolumn{2}{|c|}{**SPCA**} \\ \(\boldsymbol{\gamma}\) & BIC & BIC & \(\boldsymbol{\gamma}\) \\ \hline \hline \((9,)\) & \(+4.20\) & \(+4.20\) & \((9,)\) \\ \((1,8)\) & \(-0.78\) & \(-8.21\) & \((8,1)\) \\ \((1,1,7)\) & \(-3.45\) & \(-15.92\) & \((3,5,1)\) \\ \((1,1,1,6)\) & \(-5.97\) & \(-16.93\) & \((3,3,2,1)\) \\ \((1,1,1,1,5)\) & \(-6.36\) & \(-17.38\) & \((1,2,3,2,1)\) \\ \((1,1,1,1,1,4)\) & \(-6.55\) & \(-17.49\) & \((1,2,3,1,1,1)\) \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \((1,\ldots,1)\) & \(-16.77\) & \(-16.77\) & \((1,\ldots,1)\) \\ \hline \end{tabular} \end{table} Table 2: Comparison of PPCA and SPCA best models in the fixed-type-length setting (4.4.2) on the Glass dataset. For a given \(d\in[1\ldots p]\), PPCA contains only one model (\(q=d-1\)), while SPCA contains \(\binom{p-1}{d-1}\), which increases the modelling power.

We can see for instance that the SPCA model of type \((8,1)\) outperforms the PPCA model of type \((1,8)\) on this dataset; therefore, assuming a principal subspace of dimension \(8\) with isotropic variability is more likely than assuming a principal subspace of dimension \(1\).

## 6 Discussion

We introduced in this paper a generative covariance model with repeated eigenvalues called _Stratified PCA (SPCA)_, which generalizes _Probabilistic PCA (PPCA)_ (Tipping and Bishop, 1999) and _Isotropic PPCA (IPPCA)_ (Bouveyron et al., 2011). The geometric interpretation of its parameter space shed light on the parsimony of PPCA and raised the natural question of extending its eigenvalue-equalisation principle to the signal space. We indeed argued that assuming all the eigenvalues and eigenvectors in the signal space to be identifiable is not justified in many settings. Hence, SPCA could circumvent this issue by equalising the adjacent eigenvalues with small gaps and gathering the associated eigenvectors into a multidimensional eigenspace. We confirmed our expectations on synthetic and real datasets, showing how SPCA models achieve a better complexity/goodness-of-fit tradeoff than PPCA.

SPCA is at an early stage of research and its development has required several limiting choices that could be relaxed and improved in future works. A first limit is the choice of the BIC for model selection. Indeed, the BIC is known to favor under-parameterized models and not work very well in the small-data regime. However, this does not prevent it from being widely used due to its simplicity. Therefore, it provides an elementary way to highlight the interest of SPCA, just as Tipping and Bishop (1999) used a simple model selection criterion when introducing PPCA. One could later investigate extensions of Minka (2000) and Drton and Plummer (2017) to SPCA models. A second limit is the linear-Gaussian nature of SPCA, which is not well suited to real data. Some nonlinear and non-Gaussian extensions could therefore be considered in the future. The probable lack of analytic solution would involve optimization on flag manifolds (Ye et al., 2021). Due to the cost of inference for each model, we might need to replace discrete model selection with a global optimization scheme on the space of all SPCA models. The latter being stratified by eigenvalue multiplicities, we could benefit from recent works on stratified optimization (Leygonie et al., 2023; Olikier et al., 2023).

SPCA also comes with several exciting perspectives. First, it unleashes a whole new family of parsimonious linear-Gaussian models interpolating between the isotropic model and the full covariance one. Hence when a PPCA model overfits and the associated IPPCA model underfits, the perfect model might lie in the SPCA family. Second, the multidimensional eigenspaces obtained by gathering eigenvectors associated with distinct sample eigenvalues could provide robust, invariant and interpretable feature subspaces (Hyvarinen and Hoyer, 2000). Indeed, just like the first eigenvectors can be interpreted as modes of variation (Castro et al., 1986), the eigenspaces inferred from SPCA could be interpreted as multidimensional attributes, and the norms of projection onto them as their level of expressiveness. Third, SPCA brings a statistical framework to the flag-based multiscale modeling of datasets. Indeed, several works use flags to represent datasets, be it in an independent (Nishimori et al., 2006) or principal (Ma et al., 2021) component analysis context, enriching the already well developed literature on Grassmannians and Stiefel manifolds for dimension reduction (Edelman et al., 1998). In this paper, by introducing a generative model whose maximum likelihood estimate coincides with the minimizer of the _accumulated unexplained variance_ criterion (Pennec, 2018), we enrich the previous works and enable, for instance, flag-type selection.

## Acknowledgements

This work was supported by the ERC grant #786854 G-Statistics from the European Research Council under the European Union's Horizon 2020 research and innovation program and by the French government through the 3IA Cote d'Azur Investments ANR-19-P3IA-0002 managed by the National Research Agency.

## Appendix A Proof of Theorem 1 (Maximum likelihood of SPCA)

We successively find the optimal \(\hat{\boldsymbol{\mu}}\in\mathbb{R}^{p},\;\hat{Q}\in\mathcal{O}(p)\) and \(\hat{\ell}_{k}\in\mathbb{R}\).

### Expression of \(\hat{\boldsymbol{\mu}}\)

The log-likelihood, as a function of \(\boldsymbol{\mu}\in\mathbb{R}^{p}\), reads \[\ln\mathcal{L}(\boldsymbol{\mu})=-\frac{n}{2}\operatorname{tr}\left(\Sigma^{-1}C\right)+\text{constant} \tag{17}\] with \(C=\frac{1}{n}\sum_{i=1}^{n}(\boldsymbol{x}_{i}-\boldsymbol{\mu})(\boldsymbol{x}_{i}-\boldsymbol{\mu})^{\top}\). The optimal shift \(\hat{\boldsymbol{\mu}}\) is thus \[\hat{\boldsymbol{\mu}}=\operatorname*{arg\,min}_{\boldsymbol{\mu}\in\mathbb{R}^{p}}\sum_{i=1}^{n}(\boldsymbol{x}_{i}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\boldsymbol{x}_{i}-\boldsymbol{\mu}):=f(\boldsymbol{\mu}). \tag{18}\] The gradient of \(\boldsymbol{\mu}\mapsto(\boldsymbol{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\boldsymbol{x}-\boldsymbol{\mu})\) is \(\boldsymbol{\mu}\mapsto-2\Sigma^{-1}(\boldsymbol{x}-\boldsymbol{\mu})\). Hence, setting the gradient of \(f\) to \(0\) at \(\hat{\boldsymbol{\mu}}\), one gets \(\sum_{i}2\Sigma^{-1}(\boldsymbol{x}_{i}-\hat{\boldsymbol{\mu}})=0\), whose solution is \(\hat{\boldsymbol{\mu}}=\bar{\boldsymbol{x}}\).
Hence \(C\) evaluated at \(\hat{\boldsymbol{\mu}}\) is actually the sample covariance matrix of the dataset, which will be denoted \(S\) (as in the theorem statement) from now on.

### Expression of \(\hat{Q}\)

The log-likelihood, as a function of \(Q\), reads \[\ln\mathcal{L}(Q)=-\frac{n}{2}\left(\ln|\Sigma|+\operatorname{tr}\left(\Sigma^{-1}S\right)\right)+\text{constant} \tag{19}\] with \(\Sigma=QLQ^{\top}\). Hence \(|\Sigma|\) is independent of \(Q\) and the optimal orthogonal transformation \(\hat{Q}\) is \[\hat{Q}=\operatorname*{arg\,min}_{Q\in\mathcal{O}(p)}\operatorname{tr}\left(\Sigma^{-1}S\right)=\operatorname*{arg\,min}_{Q\in\mathcal{O}(p)}\operatorname{tr}\left(QL^{-1}Q^{\top}S\right):=g(Q). \tag{20}\]

As \(g\) is a smooth function on \(\mathcal{O}(p)\), which is a compact manifold, \(\hat{Q}\) exists and \(dg_{\hat{Q}}\colon\mathcal{T}_{\hat{Q}}(\mathcal{O}(p))\ni\delta\mapsto\operatorname{tr}\left(\left(\delta L^{-1}\,\hat{Q}^{\top}+\,\hat{Q}L^{-1}\delta^{\top}\right)S\right)\in\mathbb{R}\) vanishes. It is known that \(\mathcal{T}_{\hat{Q}}(\mathcal{O}(p))=\operatorname{Skew}_{p}\hat{Q}\), therefore one has for all \(A\in\operatorname{Skew}_{p}\) \[dg_{\hat{Q}}(A\,\hat{Q})=\operatorname{tr}\left(\left((A\,\hat{Q})L^{-1}\,\hat{Q}^{\top}+\,\hat{Q}L^{-1}(A\,\hat{Q})^{\top}\right)S\right)=\operatorname{tr}\left(A(\Sigma^{-1}S-S\Sigma^{-1})\right)=0. \tag{21}\] Therefore \(\Sigma^{-1}S-S\Sigma^{-1}=0\). Hence, \(S\) and \(\Sigma^{-1}\) are two symmetric matrices that commute, so they must be simultaneously diagonalizable in an orthonormal basis. Since the trace is basis-invariant, \(g\) simply rewrites as a function of the eigenvalues \[g(Q)=\sum_{k=1}^{d}\ell_{k}^{-1}\left(\sum_{j\in\phi_{\boldsymbol{\gamma}}^{-1}(\{k\})}\lambda_{\psi(j)}\right), \tag{22}\] where \(\psi\in S_{p}\) is a permutation and \(\phi_{\boldsymbol{\gamma}}^{-1}(\{k\})\) is the set of indexes in the \(k\)-th part of the composition \(\boldsymbol{\gamma}\) (cf. Subsection 3.2).

We now need to find the permutation \(\hat{\psi}\in S_{p}\) that minimizes \(g\). First, since \(\ell_{1}>\dots>\ell_{d}>0\) by assumption, \(\left(\ell_{k}^{-1}\right)_{k=1}^{d}\) is an increasing sequence. Therefore, \(\left(\lambda_{\hat{\psi}\left(\phi_{\boldsymbol{\gamma}}^{-1}\{k\}\right)}\right)_{k=1}^{d}\) must be a non-increasing sequence, in the sense that for \(k_{1}<k_{2}\), the eigenvalues in the \(k_{1}\)-th part of \(\boldsymbol{\gamma}\) must be greater than or equal to the eigenvalues in the \(k_{2}\)-th part. Indeed, for \(\ell<\ell^{\prime}\), if \(\lambda<\lambda^{\prime}\), then \(\ell\lambda^{\prime}+\ell^{\prime}\lambda<\ell\lambda+\ell^{\prime}\lambda^{\prime}\). Second, for such a \(\hat{\psi}\) sorting the eigenvalues in non-increasing order in between parts, we can easily check that the inequality between eigenvalues of distinct parts is strict if and only if the type of \(S\) is a refinement of \(\boldsymbol{\gamma}\). If so, the minimizing \(\hat{\psi}\) is unique up to permutations within each part of \(\boldsymbol{\gamma}\). Therefore, it is not \(\hat{Q}\) itself but the sequence of subspaces generated by its \(\boldsymbol{\gamma}\)-composition (cf. Subsection 3.2) that is unique, and we have \(\left(\operatorname{Im}(\hat{Q}_{1}),\dots,\operatorname{Im}(\hat{Q}_{d})\right)=\left(\operatorname{Im}(V_{1}),\dots,\operatorname{Im}(V_{d})\right)\). Hence, the appropriate space to describe the parameter \(\hat{Q}\) is actually the space of flags of type \(\boldsymbol{\gamma}\). An important remark is that the uniqueness condition will almost surely be met.
Indeed, the set of \(p\times p\) symmetric matrices with repeated eigenvalues has null Lebesgue measure (it is a consequence of Sard's theorem applied to the discriminant polynomial function (Breiding et al., 2018)). Therefore, for \(n\geq p\) and any density with respect to the Lebesgue measure on the set of sample covariance matrices, a randomly drawn matrix \(S\) almost surely has distinct eigenvalues. Consequently, its type is \((1,\dots,1)\), which is a refinement of any possible type in \(\mathcal{C}(p)\).

### Expression of \(\hat{L}\)

The log-likelihood, as a function of \(L\), reads \[\ln\mathcal{L}(L)=-\frac{n}{2}\left(\ln|\Sigma|+\operatorname{tr}\left(\Sigma^{-1}S\right)\right)+\text{constant} \tag{23}\] with \(\Sigma=\hat{Q}L\hat{Q}^{\top}\). First, one has \(\ln|\Sigma|=\sum_{k=1}^{d}\gamma_{k}\ln\ell_{k}\). Second, according to the previous results, one has \(\operatorname{tr}\left(\Sigma^{-1}S\right)=\sum_{k=1}^{d}\ell_{k}^{-1}\left(\sum_{j\in\phi_{\boldsymbol{\gamma}}^{-1}\{k\}}\lambda_{j}\right)\). The optimal eigenvalues \(\left(\hat{\ell}_{1},\ldots,\hat{\ell}_{d}\right)\) are thus \[\left(\hat{\ell}_{1},\ldots,\hat{\ell}_{d}\right)=\operatorname*{arg\,min}_{\ell_{1},\ldots,\ell_{d}>0}\sum_{k=1}^{d}\gamma_{k}\ln\ell_{k}+\ell_{k}^{-1}\left(\sum_{j\in\phi_{\boldsymbol{\gamma}}^{-1}\{k\}}\lambda_{j}\right):=h(\ell_{1},\ldots,\ell_{d}). \tag{24}\] As \(\frac{\partial h}{\partial\ell_{k}}=\frac{\gamma_{k}}{\ell_{k}}-\ell_{k}^{-2}\left(\sum_{j\in\phi_{\boldsymbol{\gamma}}^{-1}\{k\}}\lambda_{j}\right)\), we get \(\hat{\ell}_{k}=\frac{1}{\gamma_{k}}\sum_{j\in\phi_{\boldsymbol{\gamma}}^{-1}\{k\}}\lambda_{j}\).

## Appendix B Other proofs

### Proof of Proposition 4 (Eigenvalue equalisation)

We compare the BIC of the full covariance model \(\boldsymbol{\gamma}=(1,\dots,1)\) to the one of the equalised covariance model \(\boldsymbol{\gamma}^{\prime}=(1,\dots,1,2,1,\dots,1)\) where the \(j\)-th eigenvalue has been equalised with the \((j+1)\)-th. This boils down to studying the sign of the function \(\Delta\,\mathrm{BIC}:=\mathrm{BIC}(\boldsymbol{\gamma})-\mathrm{BIC}(\boldsymbol{\gamma}^{\prime})\). Using the reduced expression of Proposition 3 (i.e., up to the positive factor \(n\)), one gets \[\Delta\,\mathrm{BIC}=p\frac{\ln n}{n}+\sum_{k=1}^{p}\ln\lambda_{k}-(p-2)\,\frac{\ln n}{n}-\sum_{k\not\in\{j,j+1\}}\ln\lambda_{k}-2\ln\left(\frac{\lambda_{j}+\lambda_{j+1}}{2}\right) \tag{25}\] \[=2\frac{\ln n}{n}+\ln\lambda_{j}+\ln\lambda_{j+1}-2\ln\left(\frac{\lambda_{j}+\lambda_{j+1}}{2}\right) \tag{26}\] \[=2\frac{\ln n}{n}+\ln\lambda_{j}+\ln\left(\lambda_{j}\left(1-\delta_{j}\right)\right)-2\ln\left(\frac{\lambda_{j}\left(2-\delta_{j}\right)}{2}\right) \tag{27}\] \[=2\frac{\ln n}{n}+\ln\left(1-\delta_{j}\right)-2\ln\left(1-\frac{\delta_{j}}{2}\right) \tag{28}\] \[=2\frac{\ln n}{n}-\ln\left(\frac{\left(1-\frac{\delta_{j}}{2}\right)^{2}}{1-\delta_{j}}\right). \tag{29}\] Hence, one has \[\Delta\,\mathrm{BIC}=0\iff e^{2\frac{\ln n}{n}}=\frac{\left(1-\frac{\delta_{j}}{2}\right)^{2}}{1-\delta_{j}}\iff\frac{\delta_{j}^{2}}{4}-\left(1-e^{2\frac{\ln n}{n}}\right)\delta_{j}+1-e^{2\frac{\ln n}{n}}=0.\] It is a polynomial equation whose unique positive solution for \(n\geq 1\) is \[\delta(n):=2-2e^{2\frac{\ln n}{n}}+2\sqrt{e^{4\frac{\ln n}{n}}-e^{2\frac{\ln n}{n}}}. \tag{30}\]

### Proof of Proposition 6 (Asymptotic consistency of the hierarchical clustering)

Let us assume that the true generative model is stratified with type \(\boldsymbol{\gamma}\in\mathcal{C}(p)\).
We can then write the population covariance matrix as \(\Sigma=\sum_{k=1}^{d}\ell_{k}\,Q_{k}\,Q_{k}^{\top}\) with \(\ell_{1}>\cdots>\ell_{d}>0\) and \(Q:=[Q_{1}|\ldots|Q_{d}]\in\mathcal{O}(p)\). Let \(n\) be the number of independent samples and \(S_{n}:=\sum_{j=1}^{p}\lambda_{j}(S_{n})\boldsymbol{v}_{j}(S_{n})\boldsymbol{v}_{j}(S_{n})^{\top}\) the associated sample covariance matrix, with \(\lambda_{1}(S_{n})\geq\cdots\geq\lambda_{p}(S_{n})\) and \(V:=[\boldsymbol{v}_{1}|\ldots|\boldsymbol{v}_{p}]\in\mathcal{O}(p)\). According to (Bouveyron et al., 2011, Proposition 1) and (Tyler, 1981, Lemma 2.1 (i)), one then has almost surely, as \(n\) goes to infinity, \(\lambda_{j}(S_{n})\to\ell_{\phi_{\boldsymbol{\gamma}}(j)}\), where \(\phi_{\boldsymbol{\gamma}}\) is the \(\boldsymbol{\gamma}\)-composition function (cf. Subsection 3.2). Hence for \(n\) large enough, the gaps between eigenvalues in the same part of the \(\boldsymbol{\gamma}\)-composition will be arbitrarily close to \(0\), while the others will be arbitrarily close to the true values \(\left\{\Delta\left(\ell_{k},\ell_{k+1}\right),k\in[1\ldots d-1]\right\}\), which are all positive. Hence the hierarchical clustering method will first agglomerate the eigenvalues that are in the same part of \(\boldsymbol{\gamma}\), and second the distinct blocks, by increasing order of pairwise distance. The last model of the first phase will be exactly the true model. Asymptotic criteria like the BIC will thus consistently choose the true model among this reduced subfamily of cardinality \(p\).

### Proof of Proposition 7 (Asymptotic consistency of the type-length prior)

Let us assume that the true generative model is stratified with type \(\boldsymbol{\gamma}^{*}:=(\gamma_{1}^{*},\ldots,\gamma_{d}^{*})\), of length \(d\), and let \(\ell_{1}>\cdots>\ell_{d}>0\) be the eigenvalues of the associated population covariance matrix. Then, as in the previous proof, the sample covariance eigenvalues almost surely converge to those of the population covariance matrix. Hence, for any SPCA model of type \(\boldsymbol{\gamma}:=(\gamma_{1},\ldots,\gamma_{d})\), the maximum log-likelihood asymptotically reads \[\ln\hat{\mathcal{L}}\sim-\frac{n}{2}\left(p\ln 2\pi+\sum_{k=1}^{d}\gamma_{k}\ln\left(\frac{1}{\gamma_{k}}\sum_{j\in\phi_{\boldsymbol{\gamma}}^{-1}\{k\}}\ell_{\phi_{\boldsymbol{\gamma}^{*}}(j)}\right)\right). \tag{31}\] As \(n\) and \(p\) are fixed when we compare the models, they play no role in the model selection. Hence, the search for the optimal model in terms of maximum likelihood boils down to the following problem \[\operatorname*{arg\,min}_{\begin{subarray}{c}\boldsymbol{\gamma}\in\mathcal{C}(p)\\ \#\boldsymbol{\gamma}=d\end{subarray}}\sum_{k=1}^{d}\gamma_{k}\ln\left(\frac{1}{\gamma_{k}}\sum_{j\in\phi_{\boldsymbol{\gamma}}^{-1}\{k\}}\ell_{\phi_{\boldsymbol{\gamma}^{*}}(j)}\right):=f(\boldsymbol{\gamma}), \tag{32}\] where \(\#\boldsymbol{\gamma}\) denotes the length of \(\boldsymbol{\gamma}\). One has \(f(\boldsymbol{\gamma})=\sum_{k=1}^{d}\gamma_{k}\ln\left(\frac{1}{\gamma_{k}}\sum_{k^{\prime}=1}^{d}c_{kk^{\prime}}\ell_{k^{\prime}}\right)\), where \(c_{kk^{\prime}}\) is the cardinality of the intersection of the \(k\)-th part of \(\boldsymbol{\gamma}\) with the \(k^{\prime}\)-th part of \(\boldsymbol{\gamma}^{*}\). Then, by definition, one has \(\sum_{k^{\prime}=1}^{d}c_{kk^{\prime}}=\gamma_{k}\) and \(\sum_{k=1}^{d}c_{kk^{\prime}}=\gamma_{k^{\prime}}^{*}\).
Hence, using Jensen's inequality, \[f(\boldsymbol{\gamma})\geq\sum_{k=1}^{d}\gamma_{k}\left(\sum_{k^{\prime}=1}^{d}\frac{c_{kk^{\prime}}}{\gamma_{k}}\ln\ell_{k^{\prime}}\right)=\sum_{k,k^{\prime}=1}^{d}c_{kk^{\prime}}\ln\ell_{k^{\prime}}=\sum_{k^{\prime}=1}^{d}\gamma_{k^{\prime}}^{*}\ln\ell_{k^{\prime}}=f(\boldsymbol{\gamma}^{*}). \tag{33}\] To conclude, asymptotically, \(\boldsymbol{\gamma}^{*}\)-SPCA is the most likely model. Hence, the maximum likelihood criterion alone finds the true model among the family of SPCA models with the same type length.
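As a toy numerical illustration of this Jensen argument (our own sketch, not part of the proof; the names `compositions` and `f` are illustrative), one can enumerate all compositions of \(p\) with length \(d\) and check that \(\boldsymbol{\gamma}^{*}\) minimises \(f\):

```python
import numpy as np
from itertools import combinations

def compositions(p, d):
    """All compositions of p into d positive parts."""
    for cuts in combinations(range(1, p), d - 1):
        bounds = (0,) + cuts + (p,)
        yield tuple(bounds[i + 1] - bounds[i] for i in range(d))

def f(gamma, ell_pop):
    """Objective of Eqn. 32; ell_pop[j] is the population eigenvalue of
    coordinate j, i.e. ell_{phi_{gamma*}(j)} for j = 1..p."""
    val, start = 0.0, 0
    for g in gamma:
        val += g * np.log(np.mean(ell_pop[start:start + g]))
        start += g
    return val

ell_pop = np.array([4.0, 4.0, 2.0, 1.0])   # true type gamma* = (2, 1, 1)
best = min(compositions(4, 3), key=lambda g: f(g, ell_pop))
print(best)                                # (2, 1, 1): gamma* minimises f
```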
2304.14506
Coherent control of the translational and point group symmetries of crystals with light
We use theory and first-principles calculations to explore mechanisms for control of the translational and point group symmetries of crystals in ultrafast optical experiments. We focus in particular on mechanisms that exploit anharmonic (biquadratic) lattice couplings between a driven infrared-active phonon mode and other modes at arbitrary wave vector, which are always allowed by symmetry in any space group. We use Floquet theory to develop a general phase diagram depicting the various dynamical regimes accessible to materials, with simulated dynamics to illustrate how the biquadratic coupling changes materials structure depending on both extrinsic factors (light pulse characteristics) and intrinsic materials parameters (phonon frequencies, phonon coupling strengths). We use our phase diagram, in conjunction with density functional theory calculations, both to suggest experiments to reveal hidden structural order in perovskite KTaO$_3$, and to provide additional insights into recently reported experiments on SrTiO$_3$ and LiNbO$_3$.
Guru Khalsa, Jeffrey Z. Kaaret, Nicole A. Benedek
2023-04-27T20:00:04Z
http://arxiv.org/abs/2304.14506v1
# Coherent control of the translational and point group symmetries of crystals with light

###### Abstract

We use theory and first-principles calculations to explore mechanisms for control of the translational and point group symmetries of crystals in ultrafast optical experiments. We focus in particular on mechanisms that exploit anharmonic (biquadratic) lattice couplings between a driven infrared-active phonon mode and other modes at arbitrary wave vector, which are always allowed by symmetry in any space group. We use Floquet theory to develop a general phase diagram depicting the various dynamical regimes accessible to materials, with simulated dynamics to illustrate how the biquadratic coupling changes materials structure depending on both extrinsic factors (light pulse characteristics) and intrinsic materials parameters (phonon frequencies, phonon coupling strengths). We use our phase diagram, in conjunction with density functional theory calculations, both to suggest experiments to reveal hidden structural order in perovskite KTaO\({}_{3}\), and to provide additional insights into recently reported experiments on SrTiO\({}_{3}\) and LiNbO\({}_{3}\).

## I Introduction

Phase transitions in crystals are often characterized in terms of symmetry changes. For example, structural phase transitions, which involve some change to the symmetry of the lattice, are ubiquitous in some classes of materials, complex oxides in particular, and have been studied extensively for many decades [1; 2; 3; 4]. Phase transitions involving broken time-reversal symmetry give rise to magnetic materials and have been similarly well-studied [2; 5]. In recent years, attention has turned to more exotic phase transitions. Broken rotational symmetries of the electronic states (but not seen in the lattice) give rise to strong correlations in electronic nematic systems [6; 7]. The relative twist between layers of stacked two-dimensional materials, such as graphene, controls the overall point group and translational symmetry of the system and gives rise to significant changes in the density of states and corresponding electronic properties, depending on the twist angle [8]. In each of these cases, detailed study of the relevant phase transitions has revealed new physical insights and advanced our fundamental understanding of condensed matter. In addition to their fundamental importance, materials that undergo phase transitions are of great practical interest because they provide opportunities for the control of functional properties with external fields. Control of the polarization with electric fields in ferroelectric materials is exploited in certain types of random access memory [9]. The key component of many sensors and actuators is a piezoelectric material [10], which exhibits large changes in its electrical polarization in response to external stress [11] (and _vice versa_ [12]). The challenge for condensed matter and materials scientists is that the property of interest may not always couple directly with an external field. For example, the magnetic and electronic properties of many ABO\({}_{3}\) perovskite oxides (where A and B are cations and O is oxygen) are associated with lattice (phonon) modes that do not couple directly to external fields. It is possible to identify mechanisms that couple these modes by symmetry to others that are, say, electric field-controllable, thereby giving indirect control of properties with an external field [13; 14; 15; 16; 17].
However, not all materials exhibit the required crystallographic symmetries for such mechanisms. The development of bright mid-infrared and THz laser sources, capable of resonantly exciting phonons in crystals, has created new opportunities for the control of functional properties with external fields, namely light. One such mechanism involves resonantly exciting an infrared (IR)-active phonon mode of the crystal to large amplitude (Q\({}_{IR}\)), which induces quasi-static displacements of some Raman-active modes (Q\({}_{R}\)) _via_ anharmonic coupling of the form Q\({}_{IR}^{2}\)Q\({}_{R}\). In these so-called _nonlinear phononics_ experiments, the light pulse can induce a transition to a metastable phase with properties that are either difficult or impossible to access in the equilibrium structure at a given temperature, for example, metal-insulator phase transitions [18], superconductivity [19; 20], and changes in orbital ordering [21]. In most of the experiments that have been reported so far, the optically excited IR mode couples to Raman modes that are also at the Brillouin zone center, such that the induced metastable phase has the same translational symmetry as the ground-state structure. Is it possible to identify other anharmonic lattice couplings, involving phonon modes at non-zero wave vector, which would allow us to dynamically stabilize phases with a translational symmetry different to that of the parent ground-state structure?

Figure 1: Dynamical regimes accessible through biquadratic coupling between an optically pumped IR-active phonon, \(Q_{IR}\), and a mode at arbitrary wave vector, \(Q_{\vec{q}}\). The Floquet phase diagram (center) is shown as a function of the frequency ratio of the biquadratically coupled mode (\(\omega_{\vec{q}}\)) to the excited IR mode (\(\omega_{IR}\)) versus the driving strength \(\epsilon\propto D_{IR,\vec{q}}Q_{IR}^{2}\) (see full definition below Eqn. 6). The simulated dynamics, structural changes, and schematic mechanisms are shown in the color-coordinated panels. When \(Q_{\vec{q}}\) has a positive force constant (real frequency) at equilibrium (top-half, \(\omega_{\vec{q}}/\omega_{IR}>0\)), trivially damped motion is expected in much of the phase diagram (I – white). For negative biquadratic coupling (\(D_{IR,\vec{q}}<0\)), \(Q_{\vec{q}}\) can be frozen in by optical excitation of an IR phonon, leading to novel transient structural phases (II – gray). Near \(\omega_{\vec{q}}/\omega_{IR}=1\), parametric oscillation of \(Q_{\vec{q}}\) is possible (III – blue). In the purple region of the phase diagram (Region IV) we imagine a high-symmetry reference structure for the material of interest, which may be virtual, located at the saddle point (see inset). \(Q_{\vec{q}}\) has a negative force constant in this phase and drives a non-transient structural transition to a lower-symmetry phase. In the low-symmetry phase \(Q_{\vec{q}}\) can be displaced quasi-statically by optical excitation of \(Q_{IR}\); this is the conventional nonlinear phononics effect. For positive coupling between \(Q_{\vec{q}}\) and the IR mode (\(D_{IR,\vec{q}}>0\)), the symmetry broken by \(Q_{\vec{q}}\) can be (re)introduced into the structure (V – green), beyond which \(Q_{\vec{q}}\) again parametrically oscillates.

In this work, we use theory and first-principles calculations to explore and elucidate anharmonic lattice coupling pathways involving a zone-center IR-active mode and modes at arbitrary wave vector, \(\mathrm{Q}_{\vec{q}}\).
We show that biquadratic coupling between these modes, \(\mathrm{Q}_{IR}^{2}\mathrm{Q}_{\vec{q}}^{2}\), which is _always_ allowed by symmetry in _any_ space group, can be exploited in ultrafast optical experiments such that resonant excitation of an IR-active mode at the zone center can dynamically induce phases with translational symmetry different to that of the ground state through coupling to and condensation of modes at arbitrary (non-zero) wave vector. This biquadratic coupling potentially offers a new pathway through which the functional properties of materials can be dynamically controlled. We use classical Floquet theory to map out a general phase diagram showing the various dynamical regimes that may be accessed with this biquadratic coupling in modern ultrafast optical experiments; this phase diagram is our key result and is shown in Fig. 1. Our approach highlights how the nonlinear phononics mechanism can be viewed as a specific example of a more general parametric amplification process and reveals dynamical regimes that may be missed in the time-averaged theories commonly used to study these processes. We show how different parts of the phase diagram can be accessed by tuning the light polarization, frequency and peak electric field (extrinsic experimental parameters), and the relative frequencies and strength of the coupling between the pumped IR and coupled modes (determined by intrinsic materials parameters, such as the crystal structure and chemical composition). The phase diagram thus functions as a tool that can be used to interpret existing experiments (such as recent work demonstrating transient - that is, not long-lasting or deterministic - polarization switching in LiNbO\({}_{3}\) [22]) and to design future experiments. We use our phase diagram, in combination with first-principles DFT calculations, to both interpret the results of a recently reported experiment on SrTiO\({}_{3}\) showing light control of translational symmetry [23], and to suggest future experiments that can access hidden structural order in perovskite KTaO\({}_{3}\). Our work goes further than previously published studies by exploring a much greater range of dynamical regimes within a single coherent framework.

## II Theoretical model

### Simple model of biquadratic coupling

We start by writing down a general equation for describing the response of a centrosymmetric crystal to resonant excitation of an IR-active phonon mode by a short, intense mid-IR pulse. We consider the case where the dominant dynamics induced are those of the excited IR mode, \(Q_{IR}\), and another mode coupled to it at arbitrary wave vector \(\vec{q}\), \(Q_{\vec{q}}\). The lattice potential energy is assumed to be: \[U=\frac{1}{2}K_{IR}Q_{IR}^{2}+\frac{1}{2}K_{\vec{q}}Q_{\vec{q}}^{2}+D_{IR,\vec{q}}Q_{IR}^{2}Q_{\vec{q}}^{2}+\frac{1}{4}D_{\vec{q}}Q_{\vec{q}}^{4}-\Delta\vec{P}\cdot\vec{E}, \tag{1}\] where \(K_{IR}\) and \(K_{\vec{q}}\) are the force constants of the relevant phonon modes at harmonic order, \(D_{\vec{q}}\) is the fourth-order force constant for \(Q_{\vec{q}}\), and \(D_{IR,\vec{q}}\) is the biquadratic coupling coefficient coupling \(Q_{IR}\) and \(Q_{\vec{q}}\). The polarization change in the crystal due to the electric field \(\vec{E}\) of the light pulse is given to linear order by \(\Delta\vec{P}=\vec{Z}^{*}Q_{IR}\), where \(\vec{Z}^{*}\) is the mode-effective charge of the excited IR mode (as defined in [24]).
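For concreteness, Eqn. 1 transcribes directly into a short Python function (a sketch of ours, not the paper's code; the polarization term is written for the field component along the IR mode's effective charge, and all arguments are placeholders):

```python
def lattice_potential(Q_ir, Q_q, K_ir, K_q, D_biq, D_q, Zstar=0.0, E=0.0):
    """Lattice potential energy of Eqn. 1. The -dP.E term is written as
    -Zstar*Q_ir*E, i.e. the field projected onto the mode-effective
    charge of the pumped IR mode."""
    return (0.5 * K_ir * Q_ir**2 + 0.5 * K_q * Q_q**2
            + D_biq * Q_ir**2 * Q_q**2 + 0.25 * D_q * Q_q**4
            - Zstar * Q_ir * E)
```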
In previous work, we showed that higher-order terms in the polarization are important for understanding changes in the optical properties of materials due to optical excitation [25]; we ignore those terms in this work to simplify the analysis, and because here we are focused on understanding _structural_ changes due to optical excitation. Also note that we ignore explicit electron-phonon interactions - the electrons adiabatically follow the excited phonon coordinates and the equations of motion are classical. Recent work [26] on the quantum theory of lattice dynamics in the presence of external driving fields has validated this classical approach; the framework developed here could also be adapted to incorporate the quantum theory of Ref. [26]. As mentioned above, much of the previous work on nonlinear phononics has focused on systems where the excited IR mode is coupled to Raman-active modes at the zone center through a term of the form \(Q_{IR}^{2}Q_{R}\). Whether or not such a term is an allowed invariant in the lattice potential is determined by crystallographic symmetry and hence only some materials are candidates for nonlinear phononics experiments that exploit this type of cubic anharmonic coupling. In addition, since the term \(Q_{IR}^{2}Q_{R}\) involves modes at the zone center only, there is no change in the translational symmetry of the crystal upon optical excitation. In contrast, the biquadratic term \(Q_{IR}^{2}Q_{\vec{q}}^{2}\) shown in Eqn. 1 is _always_ allowed by symmetry between _all_ modes and in _all_ space groups (biquadratic coupling between \(Q_{IR}\) and a phonon at \(\vec{q}=\vec{0}\) has been explored in previous work [27]). How can this biquadratic coupling be exploited for light control of properties? By rearranging terms in Eqn. 1, we see that the effective force constant of \(Q_{\vec{q}}\) (\(\tilde{K}_{\vec{q}}\)) is renormalized by the motion of the IR mode: \[\tilde{K}_{\vec{q}}=K_{\vec{q}}+2D_{IR,\vec{q}}Q_{IR}^{2}. \tag{2}\] Displacing the IR mode with \(D_{IR,\vec{q}}>0\) increases the force constant \(\tilde{K}_{\vec{q}}\), thereby stiffening this mode. In contrast, when \(D_{IR,\vec{q}}<0\) the effective force constant \(\tilde{K}_{\vec{q}}\) decreases. If the IR mode displacement and the magnitude of the biquadratic coupling are sufficiently large compared to the equilibrium force constant \(K_{\vec{q}}\) (that is, if \(2|D_{IR,\vec{q}}|Q_{IR}^{2}>K_{\vec{q}}\)), then \(\tilde{K}_{\vec{q}}\) becomes negative and \(Q_{\vec{q}}\) can freeze into the structure at nonzero amplitude, transiently changing the crystal symmetry.

### Floquet Theory

Consider the equations of motion that are derived from Eqn. 1 by taking derivatives with respect to the mode coordinates, \(Q_{IR}\) and \(Q_{\vec{q}}\). We find, \[\begin{split}\ddot{Q}_{IR}+2\gamma_{IR}\dot{Q}_{IR}+\frac{1}{M_{IR}}\left(K_{IR}+2D_{IR,\vec{q}}Q_{\vec{q}}^{2}\right)Q_{IR}=\frac{\tilde{Z}^{*}}{M_{IR}}E,\\ \ddot{Q}_{\vec{q}}+2\gamma_{\vec{q}}\dot{Q}_{\vec{q}}+\frac{1}{M_{\vec{q}}}\left(K_{\vec{q}}+2D_{IR,\vec{q}}Q_{IR}^{2}\right)Q_{\vec{q}}+\frac{D_{\vec{q}}}{M_{\vec{q}}}Q_{\vec{q}}^{3}=0.\end{split} \tag{3}\] To simplify the analysis we ignore all terms that are higher than harmonic order except for the biquadratic coupling and reintroduce the \(D_{\vec{q}}Q_{\vec{q}}^{4}\) term only when necessary to guarantee a finite minimum in \(U\) with respect to \(Q_{\vec{q}}\). Additionally, we assume that the pumped IR mode is undamped (\(\gamma_{IR}=0\)), thereby allowing for the exploration of the long timescale dynamics.
We justify this assumption _a posteriori_ by numerical exploration of the dynamics, finding that our simulated short-timescale dynamics of \(Q_{\vec{q}}\) are comparable to the Floquet results (which assume continuous periodic driving). After resonant excitation of the IR mode with a Gaussian pulse we find sinusoidal periodic motion for \(Q_{IR}\) of the form \[Q_{IR}\left(t\right)=2\frac{\tilde{Z}^{*}\tilde{E}}{K_{IR}}\cos\left(\omega_{IR}t+\phi\right). \tag{4}\] Here \(\tilde{E}\) is defined as \(\eta E_{0}\tau f_{IR}\), where \(E_{0}\) is the peak electric field, \(\tau\) is the full-width at half-maximum of the electric field, \(f_{IR}\) is the (linear) IR frequency, and \(\eta\) characterizes the shape of the pulse (for a Gaussian pulse \(\eta=\sqrt{\frac{(\pi/2)^{3}}{2\ln(2)}}\approx 1.67\)). Then \(\eta\tau f_{IR}\) measures the number of cycles the IR mode is driven by the electric field of the light pulse and the peak IR displacement is \(Q_{IR,0}=2\frac{\tilde{Z}^{*}\tilde{E}}{K_{IR}}\). We neglect \(\phi\) in what follows. \(Q_{IR}\) is therefore a periodic driver for the general lattice motion. This can be contrasted with other recent Floquet approaches, where off-resonant periodic excitation of the _electronic_ states motivates the Floquet approach and enables access to novel nonequilibrium phases [28; 29; 30]. Eqn. 4 assumes that the biquadratic coupling between \(Q_{IR}\) and \(Q_{\vec{q}}\) is zero (\(D_{IR,\vec{q}}=0\)). This assumption can be justified from a perturbation theory perspective, since \(Q_{\vec{q}}\) is initially characterized by small fluctuations about zero amplitude. However, it obviously will not hold when \(Q_{\vec{q}}\) is displaced to large amplitudes. In this scenario, the force constant of the IR mode can also be renormalized by \(Q_{\vec{q}}\), \(\Delta K_{IR}\approx 2D_{IR,\vec{q}}Q_{\vec{q}}^{2}\), such that the IR mode also freezes in. Using Eqn. 4 for \(Q_{IR}\) in the second line of Eqn. 3, rescaling time to \(\theta=\omega_{IR}t\), and collecting terms, we find the following form, \[\frac{d}{d\theta}x(\theta)=A(\theta)x(\theta), \tag{5}\] where we have defined the vector \(x(\theta)=\left(Q_{\vec{q}},\frac{d}{d\theta}Q_{\vec{q}}\right)\) and the matrix \[A(\theta)=\left(\begin{array}{cc}0&1\\ -\left(\delta+\epsilon+\epsilon\cos\left(2\theta\right)\right)&-\nu\end{array}\right). \tag{6}\] The dimensionless parameters \(\delta=\frac{1}{\omega_{IR}^{2}}\frac{K_{\vec{q}}}{M_{\vec{q}}}=\frac{\omega_{\vec{q}}^{2}}{\omega_{IR}^{2}}\), \(\epsilon=D_{IR,\vec{q}}Q_{IR,0}^{2}/M_{\vec{q}}\omega_{IR}^{2}\) (see Appendix A for unit conversions), and \(\nu=\frac{2\gamma_{\vec{q}}}{\omega_{IR}}\) measure the square-frequency ratio, the driving strength, and effective damping for the driven mode \(Q_{\vec{q}}\) due to the motion of \(Q_{IR}\), respectively. The matrix \(A(\theta)\) is periodic with period \(\pi\) (that is, \(A(\theta+\pi)=A(\theta)\)), so Eqn. 5 is a Hill equation. The solutions to this equation (which may not be analytic), as well as how they depend on the parameters \(\delta\), \(\epsilon\) and \(\nu\), can be obtained using the standard techniques of Floquet theory [31, 32]. Our primary interest is to find solutions to Eqn. 5 that predict exponential growth in \(Q_{\vec{q}}\). These represent dynamical regimes in which optical pumping of an IR phonon induces large-amplitude changes in \(Q_{\vec{q}}\), thereby driving a transient structural phase transition that could be resolved experimentally.
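These growth regimes can be located numerically from Eqn. 5 alone. Below is a minimal sketch (ours, not the paper's code; SciPy is assumed for the integrator) that builds the monodromy matrix of the Hill equation by propagating two independent initial conditions over one period \(\pi\); Floquet multipliers with modulus greater than one signal exponential growth of \(Q_{\vec{q}}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_multipliers(delta, eps, nu):
    """Eigenvalues of the monodromy matrix of Eqn. 5 over one period
    (pi in the rescaled time theta)."""
    def rhs(theta, x):
        q, dq = x
        return [dq, -(delta + eps + eps * np.cos(2 * theta)) * q - nu * dq]
    cols = []
    for x0 in ([1.0, 0.0], [0.0, 1.0]):   # two independent solutions
        sol = solve_ivp(rhs, (0.0, np.pi), x0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.linalg.eigvals(np.column_stack(cols))

# Illustrative point near the first parametric resonance:
# sqrt(delta) ~ 0.91, moderate positive drive, no damping.
print(np.abs(floquet_multipliers(delta=0.83, eps=0.23, nu=0.0)))
# a modulus above 1 flags an unstable (exponentially growing) regime
```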
We use the main results of our Floquet analysis to create a phase diagram, shown in the central panel of Fig. 1, from Eqn. 5 depicting these dynamical regimes (see Appendix B for more details).

### First-Principles Calculations and Dynamical Simulation

To connect the Floquet theory to real materials and experiments we find parameters for Eqns. 3 & 6 using density functional theory (DFT). Calculations were performed using VASP 6.2.0 [33, 34, 35], using the projector augmented-wave (PAW) method in the local-density approximation (LDA) [36]. The following states were included in the valence of the relevant PAW potentials: 3s\({}^{2}\)3p\({}^{6}\)4s\({}^{1}\) for K, 5p\({}^{6}\)5d\({}^{4}\)6s\({}^{1}\) for Ta, 4s\({}^{2}\)4p\({}^{6}\)5s\({}^{2}\) for Sr, 3s\({}^{2}\)3p\({}^{6}\)3d\({}^{3}\)4s\({}^{1}\) for Ti, and 2s\({}^{2}\)2p\({}^{4}\) for O. A force convergence tolerance of 10\({}^{-3}\) eV/Å was used for all calculations with a 4\(\times\)4\(\times\)4 Monkhorst-Pack **k**-point grid (for a 2\(\times\)2\(\times\)2 formula unit supercell of the primitive cubic perovskite unit cell) and plane-wave energy cutoffs of 600 eV (KTaO\({}_{3}\)) and 700 eV (SrTiO\({}_{3}\)). These values were chosen to converge phonon frequencies, calculated with density functional perturbation theory (DFPT) [37], to within 5 cm\({}^{-1}\) when compared to incrementing the k-point grid to 8\(\times\)8\(\times\)8, and the energy cutoff to 800 eV for both KTaO\({}_{3}\) and SrTiO\({}_{3}\). Our converged cubic lattice constants for KTaO\({}_{3}\) (3.959 Å) and SrTiO\({}_{3}\) (3.859 Å) are underestimated compared to the experimental lattice constants [38, 39, 40] of 3.988 Å and 3.905 Å, respectively, as expected in LDA. We focus on pairs of IR modes and modes characterized by arbitrary wave vector that exhibit strong coupling at biquadratic order. To find these strongly coupled mode pairs, we calculated a series of phonon dispersion curves for structures in which an IR-active mode has been 'frozen in' at varying amplitude. Modes that exhibit large changes in their frequencies as a function of IR mode amplitude were selected for further study. Quadratic, biquadratic, and quartic coupling coefficients were calculated via a series of symmetry-constrained frozen-phonon calculations. Mode-effective charges were calculated as defined in Ref. [24], which were found to be consistent with values obtained from modern theory of polarization calculations [41, 42, 43] conducted on \(\pm\)10 pm meshes of the IR-active phonons. Non-equilibrium phonon dispersion curves were calculated with Quantum Espresso (plane-wave energy cutoff of 70 Ry, 8\(\times\)8\(\times\)8 Monkhorst-Pack **k**-point grid in the 5-atom cubic unit cell of KTaO\({}_{3}\)), using Garrity-Bennett-Rabe-Vanderbilt pseudopotentials [44, 45], which confirmed the qualitative features of the induced dynamical instabilities, i.e. which modes develop imaginary frequencies when the IR mode amplitude is increased along a fixed polarization direction beyond a critical amplitude. These dispersion curves (Fig. 2) were calculated on a regular 10\(\times\)10\(\times\)10 **q**-point grid, with the inclusion of a simple acoustic sum rule. Symmetry assignment of structural phases and irreducible representations of phonon modes were generated with the ISOTROPY Software Suite [46, 47]. To explore the transient dynamics of the regions found in the Floquet analysis with numerical simulation of Eqns.
3 & 6, we use a Runge-Kutta 5(4) method for numerical integration, as implemented in NumPy [48]. Simulations were performed with Gaussian electric field pulses with duration \(\tau=500\) fs, varied peak electric fields (\(E_{0}\)), IR phonon frequency, and peak field time set at \(t=0\). First-principles calculations of the damping parameters \(\gamma_{IR/\vec{q}}\) are computationally expensive. As a result, we have explored the dynamics as a function of damping parameter, ranging from 0 THz to 1 THz, showing only selected dynamics to highlight the response in different regions of the phase diagram. For initial conditions, \(Q_{IR}\) is taken to be at rest long before the Gaussian pulse is present, and \(Q_{\vec{q}}\) is given an amplitude of 0.1 pm to simulate weak fluctuation about the average equilibrium value. Note that all parameters are included when simulating Eqn. 3, in contrast to the numerical solution of Eqn. 6 where approximations have been made (see discussion in Sec. II.2).

## III Results and Discussion

### Floquet phase diagram and dynamics

The phase diagram (center-panel of Fig. 1) shows different dynamical regimes for \(Q_{\vec{q}}\). The vertical axis is the ratio of frequencies \(\sqrt{\delta}=\omega_{\vec{q}}/\omega_{IR}\), while the horizontal axis is \(\epsilon\), the driving strength parameter \(\left(D_{IR,\vec{q}}Q_{IR,0}^{2}/M_{\vec{q}}\omega_{IR}^{2}\right)\). The sign of the anharmonic coupling \(D_{IR,\vec{q}}\) is an intrinsic material property that dictates which side of the phase diagram is accessible - positive coupling increases \(\tilde{K}_{\vec{q}}\) (right) and negative coupling decreases \(\tilde{K}_{\vec{q}}\) (left). While the Floquet analysis informs us of regions of exponential growth or decay of \(Q_{\vec{q}}\), it does not provide compact analytic results of the phase boundaries or dynamics in a given region that might inform future ultrafast experiments. We anticipate this need and include approximate analytic results, expressed in terms of microscopic materials parameters, in what follows and in Appendix C. In the subsections below we discuss the basic physical mechanism in each region, followed by a materials example, and finally show the simulated dynamics of \(Q_{\vec{q}}\). This follows the layout of the color-coded panels extending from the phase diagram in Fig. 1. Where appropriate we discuss previous theoretical and experimental work. We start with the top half of the phase diagram where the high-symmetry equilibrium structure is stable (\(\sqrt{\delta}>0\)).

#### iii.1.1 Region I - Trivially damped motion

For most of the upper-half of the phase diagram, trivial exponential decay is expected, suggesting that optically excited IR modes will not induce large amplitude dynamical behavior in \(Q_{\vec{q}}\). Due to the exponentially decaying response of \(Q_{IR}\) and \(Q_{\vec{q}}\), a long-lived change in crystal symmetry is not expected. Recall that the biquadratic coupling pathway is allowed for all modes at all wave vectors. For weak driving of an IR phonon, our expectation is that most, or all of the coupled modes will be in this region. As can be seen from Fig. 1, a combination of external and intrinsic materials parameters (\(\epsilon\)) is needed to move one, or several, modes into another region of the phase diagram. We now focus on negative biquadratic coupling (\(\epsilon<0\)) and small \(\sqrt{\delta}\) where, as mentioned in Sec.
II.1, a strong enough drive can decrease \(\tilde{K}_{\vec{q}}\) such that \(Q_{\vec{q}}\) freezes into the structure with a nonzero amplitude (Region II).

#### iii.1.2 Region II - Access to new phases: Novel exponential growth and symmetry control

Region II is a regime in which we expect \(Q_{\vec{q}}\) to grow exponentially, thereby inducing a transient structural phase transition. Recent experimental work [23] has shown that optical pumping of an IR phonon in the cubic phase of the perovskite SrTiO\({}_{3}\) at 135 K can induce a phonon mode that involves rotations of the TiO\({}_{6}\) octahedra, doubling the size of the unit cell and changing the translational symmetry. SrTiO\({}_{3}\) undergoes this structural phase transition with temperature at about 110 K - optical excitation of an IR phonon effectively increases the transition temperature. As far as we are aware, this is the first experimental realization of this dynamical region. We show that in addition to changing the transition temperatures for structural phase transitions, it is also possible to induce structural phases that are not present _at all_ in the equilibrium phase diagram with temperature. Taking \(Q_{\vec{q}}(\theta)\propto e^{\mu\theta}\), with \(\mu\) as a dimensionless exponential growth parameter (\(\mu\omega_{IR}\) is the dimensional growth parameter), inserting this functional form in Eqn. 5, and neglecting the \(\cos(2\theta)\) term (which is equivalent to time averaging), we find the following relation \[\mu^{2}+\nu\mu+(\delta+\epsilon)=0, \tag{7}\] which has the exponential growth solution \(\mu_{II,+}=\frac{1}{2}\left(-\nu+\sqrt{\nu^{2}-4\left(\delta-|\epsilon|\right)}\right)\) in Region II. This shows that exponential growth of \(Q_{\vec{q}}\) in Region II is affected by damping, as well as the pulse characteristics through \(\epsilon\). The phase boundary between Region I and Region II is defined by \(\mu=0\), which is solved when \(|\epsilon|=\delta\). This result is surprisingly independent of the damping \(\nu\), suggesting that although damping influences the growth rate of \(Q_{\vec{q}}\) once in Region II it does not influence the location of the phase boundary. Comparing this analytic result with that from the Floquet analysis, we find that \(|\epsilon|=\delta\) is a good approximation for the phase boundary. The analytic result works well for small \(\delta\), _i.e._ when \(\omega_{\vec{q}}\ll\omega_{IR}\), but may slightly underestimate the value of \(\epsilon\) needed to reach this regime (Supplementary Fig. S1). This is attributed to neglecting the oscillatory component of the IR motion in the development of Eqn. 7. To find the growth rate of \(Q_{\vec{q}}\) in terms of microscopic parameters we need to rescale \(\mu_{II,+}\) by \(\omega_{IR}\) to account for the definition of dimensionless time \(\theta=\omega_{IR}t\). The growth rate then becomes, \[\mu_{II,+}\omega_{IR}=-\gamma_{\vec{q}}+\sqrt{\gamma_{\vec{q}}^{2}-\left(\omega_{\vec{q}}^{2}-\frac{|D_{IR,\vec{q}}|}{M_{\vec{q}}}Q_{IR,0}^{2}\right)}. \tag{8}\] Notice that on the phase boundary the terms in the parentheses of Eqn. 8 cancel; as a result, \(\mu_{II,+}=0\). We can express the phase boundary in terms of microscopic parameters and pulse characteristics to derive a threshold field, which takes the form \[E_{0}\tau\geq\frac{1}{2}\frac{K_{IR}}{\eta f_{IR}\tilde{Z}^{*}}\sqrt{\left|\frac{K_{\vec{q}}}{D_{IR,\vec{q}}}\right|}.
\tag{9}\] In this analysis, if \(E_{0}\tau\) satisfies this inequality, \(Q_{\vec{q}}\) grows exponentially, inducing a transient structural phase that changes the point group and translational symmetry of the crystal. Beyond the threshold electric field, we can approximate the induced amplitude of \(Q_{\vec{q}}\) and its renormalized frequency by time-averaging Eqn. 1. From this point of view, the curvature of the average potential energy landscape is decreasing and becoming negative with respect to \(Q_{\vec{q}}\). That is, beyond the threshold electric field a minimum with \(Q_{\vec{q}}\neq 0\) develops in the potential energy landscape. This new minimum survives the oscillatory \(2\omega_{IR}\) component of the IR motion (Fig. 1, Region II). The motion of the IR phonon renormalizes both the amplitude of \(Q_{\vec{q}}\), \[Q_{\vec{q},\pm}^{*}\left(Q_{IR}\right)=\pm\sqrt{\left|K_{\vec{q}}+2D_{IR,\vec{q}}\left\langle Q_{IR}^{2}\right\rangle\right|/D_{\vec{q}}}, \tag{10}\] and its frequency, \[\omega^{*}\left(Q_{IR}\right)=\sqrt{2\left|K_{\vec{q}}+2D_{IR,\vec{q}}\left\langle Q_{IR}^{2}\right\rangle\right|/M_{\vec{q}}}, \tag{11}\] where we have explicitly shown the dependence of the new minimum and frequency on the IR phonon amplitude. The condensation of \(Q_{\vec{q}}\) to nonzero amplitude modulates the existing crystal structure with a lengthscale derived from the wave vector \(\vec{q}\). If \(\vec{q}=\zeta_{1}^{-1}\vec{b}_{1}+\zeta_{2}^{-1}\vec{b}_{2}+\zeta_{3}^{-1}\vec{b}_{3}\) where \(\vec{b}_{i}\) define the reciprocal lattice vectors of a lattice defined by \(\vec{a}_{i}\) so that \(\vec{a}_{i}\cdot\vec{b}_{j}=2\pi\delta_{ij}\), the condensation of \(Q_{\vec{q}}\) transiently induces a new periodicity to the crystal with modulation vector \(\vec{\lambda}=\zeta_{1}\vec{a}_{1}+\zeta_{2}\vec{a}_{2}+\zeta_{3}\vec{a}_{3}\), where each \(\zeta_{i}\) may lead to (in)commensurability with its equilibrium lattice vector \(\vec{a}_{i}\). We now turn to the example of KTaO\({}_{3}\), one of the few perovskites that remains cubic (\(Pm\bar{3}m\) space group) at all temperatures; the 0 K phonon dispersion curve in Fig. 2 shows all modes with real frequencies. KTaO\({}_{3}\) features three sets of triply degenerate IR-active optical phonons and no Raman-active phonons. As a result, new structural phases cannot be dynamically induced by relying on the conventional \(Q_{IR}^{2}Q_{R}\) coupling. In contrast, every phonon at every wave vector is accessible through the biquadratic coupling shown in Eqn. 1. The challenge now is to find which modes couple most strongly to an IR-active phonon with a frequency that is accessible in a modern ultrafast optical experiment. We explore this by calculating nonequilibrium phonon dispersion curves as a function of all IR phonons polarized along the [100], [110], and [111] crystallographic directions. The nonequilibrium phonon dispersion curves for the highest frequency (16.7 THz) IR-active phonon displaced along the [111] direction, which we will use as a representative example, are shown in Fig. 2. As the amplitude of the IR phonon increases, we see a branch split at the R-point, with one mode softening and becoming unstable, that is, its frequency becomes imaginary (or its force constant becomes negative). Inspection of the displacement pattern and symmetry of this mode shows that it is an out-of-phase octahedral tilt (transforming as the irreducible representation R\({}_{4}^{+}\)) about the [111] axis (Region II of Fig. 1).
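As a numerical cross-check of Eqn. 9, the following short script (our own sketch, with standard SI conversions; parameters are taken from the Region II column of Table 1 below) reproduces, to within rounding, the threshold field quoted in the next paragraph:

```python
import numpy as np

eV, Ang, e = 1.602176634e-19, 1e-10, 1.602176634e-19  # SI conversions

K_ir  = 18.36 * eV / Ang**2    # IR-mode force constant (Table 1)
K_q   = 1.04  * eV / Ang**2    # R-point tilt-mode force constant
D_biq = 1.12  * eV / Ang**4    # |D_IR,q| biquadratic coupling
Zstar = 13.03 * e              # mode-effective charge
f_ir  = 16.73e12               # IR frequency [Hz]
eta   = np.sqrt((np.pi / 2)**3 / (2 * np.log(2)))  # Gaussian pulse factor

# Eqn. 9: E0*tau >= (K_ir / (2 eta f_ir Zstar)) * sqrt(|K_q / D_biq|)
E0_tau = 0.5 * K_ir / (eta * f_ir * Zstar) * np.sqrt(K_q / D_biq)
tau = 500e-15                  # 500 fs pulse
print(E0_tau / tau / 1e8)      # threshold E0 in MV/cm (~4.9; cf. 4.8 in the text)
```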
Octahedral tilts are common structural distortions in perovskites [1; 49] but are not present in KTaO\({}_{3}\) at any temperature. Here we show that a transient structural phase of KTaO\({}_{3}\) involving octahedral tilts can be induced by biquadratic coupling to an optically excited IR-active phonon. Since the mode that freezes in is at the Brillouin zone edge and has wave vector \(\mathbf{q}_{R}=\frac{2\pi}{a}\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\) (where \(a\) is the cubic lattice parameter) the unit cell of the cubic phase is doubled to accommodate the octahedral tilt pattern. The transient unit cell lattice parameters (\(\vec{a}_{i}^{\prime}\)) are given by \(\vec{a}_{1}^{\prime}=\vec{a}_{2}+\vec{a}_{3}\), \(\vec{a}_{2}^{\prime}=\vec{a}_{1}+\vec{a}_{3}\), and \(\vec{a}_{3}^{\prime}=\vec{a}_{1}+\vec{a}_{2}\), where the unprimed quantities denote the equilibrium lattice parameters and \(|\vec{a}_{1}|=|\vec{a}_{2}|=|\vec{a}_{3}|\) since KTaO\({}_{3}\) is cubic at equilibrium. This shows that light-induced structural phases can change the translational symmetry of crystals on ultrafast timescales. In Sec. III.2 we show that the changes to point group and translational symmetry are polarization dependent, allowing for unprecedented control of crystalline symmetry. Using the DFT-derived parameters in Table 1 we can estimate the critical electric field (Eqn. 9) needed to enter Region II of the phase diagram and induce a new structural phase in cubic KTaO\({}_{3}\). For a 500 fs Gaussian electric field pulse with 16.7 THz carrier frequency, we find a threshold field of 4.8 MV/cm. We note that this value ignores Fresnel reflection, higher-order anharmonicities, and damping. We explore the dynamics for the excitation of the 16.7 THz IR-active phonon polarized along the [111] direction as a function of peak electric field in the Region II panel inset of Fig. 1. The frequency ratio (\(\sqrt{\delta}=0.24\)) of the two phonons involved places us on the gray arrow originating on the vertical axis in the phase diagram and pointing into the gray region.

Figure 2: Phonon dispersion curves from our DFT calculations for KTaO\({}_{3}\) in the cubic perovskite structure at equilibrium (left) and with the 16.7 THz IR phonon displaced along [111] with an amplitude of 25 pm in the 5 atom cell (right). The IR phonon displacement splits and softens a branch at the R-point, leading to a driven instability of this mode. Modes with imaginary frequencies are depicted as having negative frequencies.

A peak electric field of 3.7 MV/cm (\(\epsilon=-0.033\)) is not strong enough to overcome the intrinsic restoring force such that any transient motion imparted on \(Q_{\vec{q}}\) is damped away, hence the amplitude of \(Q_{\vec{q}}\) is zero. That is, the threshold from Region I to Region II is not crossed. By increasing the peak electric field to 5.3 MV/cm (\(\epsilon=-0.066\)), just beyond the critical field, a new energy minimum develops for \(Q_{\vec{q}}\) at \(Q_{\vec{q},+}^{*}\) (Eq. 10). Because the critical condition has just been reached, the effective potential is shallow and the new resonant frequency \(\omega^{*}\) (Eq. 11) is small. Increasing the peak electric field to 6.4 MV/cm (\(\epsilon=-0.099\)) establishes the Region II behavior more fully. The renormalized frequency \(\omega^{*}\) has increased as expected, the location of the new energy minimum (\(Q_{\vec{q},+}^{*}\)) is at a larger amplitude, and the enhanced growth rate (\(\mu_{II,+}\)) towards the new phase is visually apparent.
This is a consequence of the field-dependent growth rate in Eqn. 8, giving access to the new structural phase \(\approx 0.5\) ps earlier compared to the 5.3 MV/cm peak electric field. We expect the threshold values of \(E_{0}\tau\) (Eqn. 9) to be accessible with current mid-IR laser sources at 16.7 THz [50; 51].

#### iii.1.3 Region III - Parametric Oscillation

In the previous section, driving an IR-active phonon condensed a single phonon, \(Q_{\vec{q}}\), transiently altering the crystal symmetry. In complex crystals there are many modes at arbitrary wave vector, all of which are accessible to the biquadratic coupling. As a result, we expect the dynamical response of the crystal to resonant IR-phonon drive to be rich with complex dynamical motion on many length scales involving many phonons. In this section we show that parametrically driven oscillatory motion of other \(Q_{\vec{q}}\)'s is also expected, adding a complexity of _unexplored_ time scales and microscopic motion to the dynamical response of the crystal. We highlight the parametric oscillation of \(Q_{\vec{q}}\) through the optical excitation of an IR-active phonon in the upper right of the phase diagram (positive \(\epsilon\), blue region). Near \(\epsilon=0\), this region grows from \(\delta=1\), where the oscillatory component of the driver is at twice the fundamental frequency of \(Q_{\vec{q}}\) (cosine term in Eqn. 6). This suggests that Region III describes parametric oscillation of \(Q_{\vec{q}}\). We therefore expect rapidly growing oscillatory motion of \(Q_{\vec{q}}\) in this region. Derivation of an approximate phase boundary, exponential growth rate, and the effect of intrinsic damping are included in Appendix C, following standard approaches to solutions in dynamical systems [52]. To give a quantitative example of the parametric oscillation process, we return to \(\mathrm{KTaO}_{3}\), and look for positive biquadratic coupling. We identify strong positive biquadratic coupling between the 16.7 THz IR phonon, now polarized along the [100] direction, and a phonon at the \(M\)-point of the Brillouin zone that transforms like the irreducible representation \(M_{3}^{-}\) with frequency 15.2 THz (Fig. 1, Region III Structural Changes); the parameters we used for our dynamical simulations, calculated from first principles, are shown in Table 1. The frequency ratio (\(\sqrt{\delta}=0.91\)) of the two phonons places us on the blue arrow originating from the vertical axis of the phase diagram, near the phase boundary. At this frequency ratio, parametric oscillation of the M\({}_{3}^{-}\) phonon is expected when \(0.11<\epsilon<0.34\) (see Appendix C), corresponding to peak electric fields between 2.3 MV/cm \(<E_{0}<4.0\) MV/cm. This is consistent with the simulated dynamics in Fig. 1, Region III, which we will now discuss.
\begin{table}
\begin{tabular}{l c c c c c}
\hline\hline
\multicolumn{3}{l}{Quantity} & \(Q_{IR}\) & \multicolumn{2}{c}{\(Q_{\vec{q}}\)} \\
\cline{5-6}
 & & & & Reg II & Reg III \\
\hline
Frequency & \(f\) & [THz] & 16.73 & 3.98 & 15.19 \\
Force constant & \(K\) & [eV/Å\({}^{2}\)] & 18.36 & 1.04 & 15.16 \\
Reduced mass & \(M\) & [u] & 16.03 & 16.00 & 16.06 \\
Mode-effective charge & \(\tilde{Z}^{*}\) & [e\({}^{-}\)] & 13.03 & - & - \\
Biquadratic coupling & \(D_{IR,\vec{q}}\) & [eV/Å\({}^{4}\)] & - & -1.12 & 10.00 \\
\hline\hline
\end{tabular}
\end{table}
Table 1: Characteristics of the high-frequency IR mode (\(Q_{IR}\)) and the biquadratically coupled zone-boundary modes (\(Q_{\vec{q}}\)) in the cubic phase of \(\mathrm{KTaO}_{3}\) from our DFT calculations needed to realize dynamical regimes in Region II and Region III. The IR mode is polarized along [111] in Region II and along [100] in Region III. In Region III, the excited IR mode and \(Q_{\vec{q}}\) are very strongly coupled and \(D_{IR,\vec{q}}\) varies from 10-20 eV/Å\({}^{4}\), depending on the precise amplitudes of \(Q_{IR}\) and \(Q_{\vec{q}}\) used to calculate it. We used \(D_{IR,\vec{q}}=10\) eV/Å\({}^{4}\) for our dynamical simulations. The units of reduced mass are atomic mass units, u.

For \(E_{0}=2.2\) MV/cm (\(\epsilon=0.10\)) the amplitude of \(Q_{\vec{q}}\) appears identical to the horizontal axis in the structural dynamics panel of Fig. 1, Region III. That is, the dynamics are those of Region I. For \(E_{0}=3.3\) MV/cm (\(\epsilon=0.23\)) we see large amplification of the \(M_{3}^{-}\) phonon which grows over \(\approx 1\) ps. The large amplification of \(Q_{\vec{q}}\) is due to the oscillation of its effective potential through the motion of \(Q_{IR}\). This parametric oscillation mechanism is shown schematically in Fig. 1, Region III.
The effective potential oscillates at a frequency of \(2\omega_{IR}\) as a result of the IR phonon drive, stiffening the potential through the positive \(D_{IR,\vec{q}}\) (negative \(D_{IR,\vec{q}}\) would soften the potential). A half-cycle of \(Q_{\vec{q}}\) motion is shown in four steps with \(Q_{\vec{q}}\) starting at the apex of a cycle. During the first step (1), \(Q_{\vec{q}}\) falls toward the static equilibrium point as the potential softens to its time-averaged value (Eqn. 2). In the second step (2), \(Q_{\vec{q}}\)'s kinetic energy allows it to move beyond the equilibrium point as the potential continues to soften to its undriven value. In the third step (3), \(Q_{\vec{q}}\) begins losing kinetic energy while the potential stiffens back to the time-averaged value. In the final step (4), \(Q_{\vec{q}}\) reaches the height of its half-cycle motion as the potential stiffens back to its apex. \(Q_{IR}\)'s effect on the potential increases the amplitude of \(Q_{\vec{q}}\) with each half-cycle of motion, as expected in a parametric oscillation process. The growth in \(Q_{\vec{q}}\) then decays back to zero amplitude due to the "back action" on \(Q_{IR}\). That is, \(Q_{IR}\) is transiently driven with a finite amount of energy; as \(Q_{\vec{q}}\) grows in amplitude it must gain energy from \(Q_{IR}\), thereby decreasing the overall amplitude of the IR mode. Additionally, as the amplitude of \(Q_{\vec{q}}\) grows, the frequency of \(Q_{IR}\) will be modified via the biquadratic coupling, tuning the frequency ratio \(\sqrt{\delta}\) away from Region III (see the \(2D_{IR,\vec{q}}Q_{\vec{q}}^{2}\) term in Eqn. 3). This suggests that other changes in the IR phonon frequency or amplitude, _e.g._ through other anharmonic couplings, have a similar detrimental effect on the parametric oscillation process. For \(E_{0}=4.4\) MV/cm (\(\epsilon=0.41\)) we are again in a trivial region of the phase diagram where excitation of \(Q_{IR}\) has negligible effect on the amplitude of \(Q_{\vec{q}}\). This is because the effective frequency of \(Q_{\vec{q}}\) (due to \(Q_{IR}\)) is driven out of sync with the oscillation in its potential energy landscape. This conversely explains why amplification was not seen for the \(E_{0}=2.2\) MV/cm case: the effective frequency of \(Q_{\vec{q}}\) was not driven high enough to sync up with \(Q_{IR}\). Since the parametric oscillation process is general, we expect _many_ modes to parametrically oscillate for a large enough drive, the effects of which we expect will be seen in structure factor analysis of diffuse scattering following IR excitation. That being said, parametric oscillation processes and the phase boundary between Region I and Region III are sensitive to damping (Appendix C). As a result, the experimental observation of parametric oscillation between the modes used in this illustrative example, even with the large coupling between them, may be hindered by damping. We are unaware of any experimental work showing parametric oscillation through the anharmonic lattice potential in the nonlinear phononics literature. A recent work proposed parametric oscillation of an IR-active phonon through nonlinear (in \(Q_{IR}\)) contributions to the polarizability as an explanation for IR-resonant enhancement of the reflectivity in the reststrahlen band in SiC [53]; we have ignored the nonlinear polarizability contribution to the dynamics in this work, although this would certainly be an interesting path to pursue for future work.
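The back action can be seen in a minimal dimensionless reduction of Eqn. 3 (our own sketch: time in units of \(1/\omega_{IR}\), both amplitudes in units of the peak IR displacement, equal reduced masses assumed, and the pulse envelope and quartic term omitted):

```python
import numpy as np
from scipy.integrate import solve_ivp

delta, eps, nu = 0.83, 0.23, 0.002    # illustrative Region III values

def rhs(theta, s):
    x, dx, y, dy = s                  # x ~ Q_IR, y ~ Q_q
    return [dx, -(1.0 + 2.0 * eps * y**2) * x,           # back action on Q_IR
            dy, -(delta + 2.0 * eps * x**2) * y - nu * dy]

# IR mode launched at full amplitude; Q_q seeded by a tiny fluctuation.
sol = solve_ivp(rhs, (0.0, 400.0), [1.0, 0.0, 1e-4, 0.0],
                max_step=0.05, rtol=1e-9)
# y first grows parametrically, then its growth saturates as energy
# flows back to, and detunes, the IR mode.
print(np.max(np.abs(sol.y[2])))
```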
What happens in crystals where a structural mode is already present in the equilibrium phase? We now consider the lower half of the phase diagram (imaginary \(\sqrt{\delta}\)), and look at Region IV to show that the conventional nonlinear phononics response may be activated by the condensation of a mode responsible for the structural phase transition.

#### iii.1.4 Region IV - Conventional Nonlinear Phononics

We now briefly consider Region IV in the lower half of the phase diagram in Fig. 1. This region corresponds to the conventional nonlinear phononics effect, which has been the focus of many previous studies [54, 55, 56, 18, 20, 57]. For this region, and in the lower part of the phase diagram generally, it is helpful to consider some (possibly virtual) high-symmetry reference phase of the material of interest. In this phase, \(Q_{\vec{q}}\) has a negative force constant (\(K_{\vec{q}}<0\)) and \(\sqrt{\delta}\) is imaginary. The Floquet analysis predicts exponential growth of \(Q_{\vec{q}}\) even in the absence of an excited IR phonon (\(\epsilon=0\)) because \(Q_{\vec{q}}\) is independently 'unstable' and induces a non-transient structural phase transition. The symmetry of the reference phase is lowered by \(Q_{\vec{q}}\) such that \(Q_{\vec{q}}\) becomes fully symmetric (transforms like the identical representation) in the new structural phase. It is therefore Raman-active about its new minimum and described by a new mode \(Q_{R}\) (that is \(Q_{\vec{q}}\to Q_{\vec{q},\pm}^{*}\left(0\right)+Q_{R}\)). The biquadratic coupling term between an IR-active mode and \(Q_{\vec{q}}\) in the high-symmetry phase transforms into two lower-order terms describing a renormalization of the IR force constant of the form \(K_{IR}\to K_{IR}+2D_{IR,\vec{q}}Q_{\vec{q},\pm}^{*}\left(0\right)^{2}\), and a new linear-quadratic coupling is created of the form \(AQ_{IR}^{2}Q_{R}\) (where \(A=2D_{IR,\vec{q}}Q_{\vec{q},\pm}^{*}(0)\)). When an IR phonon is excited in the low-symmetry phase it will exert a unidirectional force on other modes coupled to it through the \(AQ_{IR}^{2}Q_{R}\) term and the \(Q_{R}\) modes will be unidirectionally displaced from their equilibrium amplitudes. The sign of the biquadratic coupling parameter \(D_{IR,\vec{q}}\) dictates the direction in which \(Q_{R}\) will be displaced: for \(D_{IR,\vec{q}}<0\) (Fig. 1, left side of Region IV), \(Q_{R}\) is pushed away from the saddle point (the amplitude of \(Q_{R}\) increases compared to its equilibrium value), whereas for \(D_{IR,\vec{q}}>0\), \(Q_{R}\) is pushed towards the saddle point (the amplitude of \(Q_{R}\)_decreases_ relative to its equilibrium value). Since \(Q_{R}\) is fully symmetric as mentioned above, unidirectional displacements in this region do not change the crystal point group or translational symmetry [58]. We will return to this point in Region V, where symmetry can be restored by driving \(Q_{IR}\). We focus our theoretical development of this region on SrTiO\({}_{3}\), which has been studied in several recent nonlinear phononics experiments [59; 60]. Our development here is intended to demonstrate conceptual features of the Floquet phase diagram, with a detailed description saved for a future publication. To explore the dynamics and connect with the theoretical development of Sections II.1 and II.2, we first calculate parameters for Eqn. 1 by considering an IR-active phonon at 5.15 THz in the cubic phase of SrTiO\({}_{3}\), which we take as our high-symmetry reference phase (see Table 2).
We again consider coupling to a mode with \(R_{4}^{+}\) symmetry, which corresponds to an out-of-phase tilt of the TiO\({}_{6}\) octahedra [61]. As mentioned above, this mode drives a structural phase transition in SrTiO\({}_{3}\) at about 110 K to a phase with \(I4/mcm\) symmetry. Hence, in the cubic phase at 0 K this octahedral tilt mode has an imaginary frequency, \(2.68i\) THz. For this particular mode pairing, \(\sqrt{\delta}=0.52i\), which places us in a region represented by the purple arrow originating from the vertical axis in Fig. 1, Region IV. We perform our simulations in a structural phase in which the octahedral tilt mode has frozen into the cubic structure to produce the low-symmetry (low temperature) tetragonal \(I4/mcm\) phase. In this phase, the octahedral tilt mode has an equilibrium amplitude of -72 pm from our DFT calculations and we denote it as \(Q_{R}\) (the negative amplitude signifies the left minimum of the double-well potential in Fig. 1 Region IV). The triply degenerate IR phonon at 5.15 THz in the cubic phase is split in the tetragonal phase into a mode polarized along the axis about which the octahedra rotate (5.54 THz from our calculations), and a doubly degenerate mode polarized along the in-plane direction, perpendicular to the axis about which the octahedra rotate (5.04 THz). In this section we focus on this latter set of modes. Excitation of an IR phonon polarized along the in-plane direction quasi-statically displaces \(Q_{R}\) away from the saddle point shown in the Region IV inset in Figure 1; that is, the amplitude of \(Q_{R}\) transiently increases. As the peak electric field increases from 5 MV/cm to 7 MV/cm and 9 MV/cm, the amplitude increases to -85 pm, -97 pm and -112 pm (corresponding to an \(\epsilon\) of -0.11, -0.22, -0.37). In this section, we have shown that the conventional nonlinear phononics effect can be incorporated into a more general parametric amplification framework that is enabled by biquadratic coupling between modes in a proximal higher-symmetry parent phase that is either real or virtual. In the next section, we go one step further and show that higher-symmetry phases may be stabilized through IR-active phonon drive.

#### iii.1.5 Region V - (Re)Introducing Symmetry

In this final section, we show that crystal phases of proximal high-symmetry parent phases can be stabilized in the transient response to IR-phonon drive, (re)introducing symmetry elements into the transient crystal structure. For this region it is again helpful to consider some high-symmetry reference phase of the material of interest, as was done in the discussion of Region IV. In this reference phase, \(Q_{\vec{q}}\) has a negative force constant (\(K_{\vec{q}}<0\)) and \(\sqrt{\delta}\) is imaginary - the high-symmetry reference structure is a saddle-point of the energy. With \(D_{IR,\vec{q}}>0\) and a sufficient drive \(\epsilon\), Floquet theory predicts exponential decay of \(Q_{\vec{q}}\). That is, Floquet theory predicts the high-symmetry reference phase is stable for large enough IR-active phonon drive, essentially stabilizing a saddle-point of the energy landscape. The negative \(K_{\vec{q}}\) is overcome and made positive by the driven oscillating potential through the large amplitude IR-active phonon motion and biquadratic coupling. This is the phonon counterpart to the classic rigid pendulum problem where driving its pivot point periodically can stabilize the inverted solution [62; 63]. Mathematically, the results are identical to those found in Sec.
III.1.2, but both \(\delta\) and \(\epsilon\) have changed sign. Rather than making \(\tilde{K}_{\vec{q}}\) negative by driving the IR mode, \(\tilde{K}_{\vec{q}}\) starts negative and is driven to a positive value, collapsing the time-averaged double-well back to a stable single-well. The phase boundary is therefore defined by, in analogy to the discussion below Eqn. 7, \(\epsilon=|\delta|\). In this way the minima associated with \(Q_{\vec{q},\pm}^{*}\left(0\right)\) have moved to zero, restoring a higher-symmetry crystal configuration by reintroducing the symmetry elements of \(Q_{\vec{q}}\). To illustrate this point, we focus on the out-of-plane polarized IR mode in SrTiO\({}_{3}\) at 5.54 THz mentioned in Sec. III.1.4. In contrast to the in-plane excitation, the sign of the biquadratic coupling is positive and therefore \(\epsilon>0\) (Table 2). Exciting this phonon pushes the octahedral tilts (\(Q_{R}\)) from their starting amplitude of -72 pm towards zero amplitude. For \(\epsilon=0.21\) (\(E_{0}=4.0\) MV/cm), the response is still of the conventional nonlinear phononics type. That is, the amplitude of \(Q_{R}\) changes but the point group and translation symmetry of the crystal is preserved. For \(\epsilon=0.65\) (\(E_{0}=7.0\) MV/cm), we cross over the phase boundary from Region IV to Region V, and the point group and translational symmetry elements are reintroduced so that the average transient structure appears cubic (\(Pm\bar{3}m\), space group #221). In the transient response, as \(Q_{IR}\) dissipates energy it eventually becomes unable to sustain \(Q_{R}\) about the cubic structure. When this happens \(Q_{R}\) will fall back into either of the double-well minima (Fig. 1, Region V). We expect that in experiments this process will depend sensitively on details of the pulse characteristics, damping, initial conditions, and the boundary of the illuminated region of the crystal [64; 22; 65]. This suggests that with these approximations _deterministic_ switching from one double-well minimum to another is not possible by this pathway. This was pointed out in a recent study where a multi-pulse sequence was needed to switch the polarization of KNbO\({}_{3}\) [66], and may partly explain the lack of switching and transient recovery of polarization seen in an IR phonon pumped experiment in LiNbO\({}_{3}\) [22] (as pointed out in [64; 65]). For large enough drive, another phase boundary is crossed to the parametric oscillation regime (Region III). In the transient response, in order to cross to the parametric oscillation regime, the trajectory must pass through Region V. That is, the transient response will first be stabilized in the high-symmetry structure before the mode parametrically oscillates. This is shown in the simulated dynamics of Fig. 1, Region V for \(\epsilon=0.93\) and \(E_{0}=8.4\) MV/cm. Increasing \(\epsilon\) further will traverse an alternating series of symmetry-reintroducing and parametric oscillation regimes, though we expect this to be largely inaccessible in experiment due to the presence of other anharmonicities or destruction/melting of the crystal.

### Mode- and polarization-selective response

In the development of the work presented so far, it is implicit that the response is frequency dependent. That is, the choice of IR phonon to excite, along with its polarization, may strongly affect the resulting transient structural phase transition.
In this section, we illustrate the mode- and polarization-selective features of the lattice response by focusing on Region II of the phase diagram for KTaO\({}_{3}\) and SrTiO\({}_{3}\). For KTaO\({}_{3}\) we can estimate the critical fields needed to induce new structural phases since the cubic phase is stable in DFT. For SrTiO\({}_{3}\), since the cubic phase is unstable in DFT (there are phonon modes with imaginary frequencies), we report only the new structural phases induced, which we expect to be relevant to experiments just above the structural phase transition temperature of 110 K. We note that a general exploration of the polarization dependence for all regions is possible with _ab initio_ techniques, but is quite computationally expensive due to the enormous number of anharmonic pathways allowed. This is particularly true for Region III, where phonons at all frequencies and all wave vectors may be involved. In both KTaO\({}_{3}\) and SrTiO\({}_{3}\), and in perovskite materials in general, the most common structural instabilities are associated with the R\({}_{4}^{+}\) and M\({}_{3}^{+}\) zone-edge phonons; that is, these modes often appear with imaginary frequencies in the cubic phase (readily calculated using DFT) and often drive structural phase transitions. The displacement patterns for these phonons represent out-of-phase (R\({}_{4}^{+}\)) and in-phase (M\({}_{3}^{+}\)) octahedral tilts of the TaO\({}_{6}\) or TiO\({}_{6}\) octahedra about the cubic crystallographic axes. It is convenient to introduce a notation favored in the complex oxide community, after Glazer [67; 68], which describes the in-phase (+) and out-of-phase (-) octahedral tilts as a list about the cubic-crystallographic axes. In the example described in Fig. 1 for Region II, the equilibrium state of KTaO\({}_{3}\) is described by the label \(a^{0}a^{0}a^{0}\) indicating that the \(a\), \(b\), and \(c\) lattice constants are all equal and that there are no octahedral tilts about any crystallographic axis. The octahedral tilt pattern induced in Region II (associated with the \(R_{4}^{+}\)[111] phonon of the cubic phase) following excitation of an IR phonon polarized along the [111]-direction is labeled \(a^{-}a^{-}a^{-}\), signifying that the octahedral tilts about each crystallographic axis are all out-of-phase with respect to each other but are of the same amplitude. The octahedral tilt patterns in KTaO\({}_{3}\) associated with transient structural phase transitions following excitation of various IR-active phonons polarized along the [100], [110], and [111] directions are shown in Table 3 (Table 4 for SrTiO\({}_{3}\)). The entries shown represent the first modes that develop negative force constants (imaginary frequencies) with respect to an increase in amplitude of a given IR mode (see Fig. 2). We expect that polarization directions between the principal crystallographic axes will give octahedral tilt patterns between those listed in Table 3, _e.g._ for a 16.7 THz pulse polarized between [111] and [110] we expect an octahedral tilt pattern of \(a^{-}a^{-}b^{-}\), which corresponds to the \(C2/c\) (\(C_{2h}\)) space group. Note that for the [100] direction, the two directions of \(Q_{\vec{q}}\) shown in Tables 3 & 4 have the same energy decrease in DFT, so either direction, or both, may be seen in experiment.
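For readers less familiar with Glazer notation, the correspondence between the simple tilt patterns and the space groups they generate alone (as used in Tables 3 & 4 and in the \(a^{-}a^{-}b^{-}\) example above) can be captured in a small lookup helper. This is an illustrative sketch of the standard crystallographic assignments, not code from this work; the pattern strings and the function name are our own.

```python
# Illustrative lookup: space groups generated by common Glazer tilt patterns
# alone (standard crystallographic results, as quoted in Tables 3 & 4).
GLAZER_TO_SPACE_GROUP = {
    "a0a0a0": ("Pm-3m",  "O_h"),   # no tilts (cubic parent)
    "a0a0b-": ("I4/mcm", "D_4h"),  # single out-of-phase tilt
    "a0a0b+": ("P4/mbm", "D_4h"),  # single in-phase tilt
    "a0b-b-": ("Imma",   "D_2h"),  # two equal out-of-phase tilts
    "a-a-a-": ("R-3c",   "D_3d"),  # equal out-of-phase tilts about all axes
    "a-a-b-": ("C2/c",   "C_2h"),  # intermediate-polarization example above
}

def tilt_space_group(pattern: str) -> str:
    """Return the space group (and point group) generated by a tilt pattern."""
    sg, pg = GLAZER_TO_SPACE_GROUP[pattern]
    return f"{pattern}: space group {sg} (point group {pg})"

for p in ("a0a0b-", "a0a0b+", "a-a-a-", "a-a-b-"):
    print(tilt_space_group(p))
```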
Depending on the polarization direction, although the octahedral tilts dominate the induced \(Q_{\vec{q}}\) structural change, other structural distortions may be present, including strain (Supplementary Sec. S2). We find that other structural distortions associated with A-site and B-site motion and deformation of the oxygen octahedra tend to be small compared to the octahedral tilt components of the motion (Supplementary Sec. S3). We speculate that the strong coupling between IR phonons and octahedral tilt modes is a consequence of the geometric network of bonds in both KTaO\({}_{3}\) and SrTiO\({}_{3}\), and is likely general to perovskites. To illustrate this, consider the 16.7 THz IR phonon polarized along the [100] direction in KTaO\({}_{3}\), where at the critical field, the shortest time-averaged Ta-O bond is along the polarization direction and its length has decreased by \(\approx 10\%\) (\(\approx 20\) pm). This shortest Ta-O bond is energetically unfavorable. Rotating the octahedra by displacing the \(A_{5}\) symmetry \(Q_{\vec{q}}\) mode (Table 3) accommodates this unfavorable condition by increasing the length of this Ta-O bond towards its equilibrium value (Supplementary Fig. S4).

## IV Summary and conclusions

We have used a combination of first-principles DFT calculations and Floquet theory to develop a phase diagram depicting the various dynamical regimes accessible to materials given ultrafast optical excitation of an IR-active phonon biquadratically coupled to another mode at arbitrary wave vector. We have shown that crystal point group and translational symmetries may be introduced or removed via various mechanisms, depending on which dynamical regime is accessed in a given experiment. Our phase diagram is intended to frame theoretical and experimental work in the nonlinear phononics field where the transient response of the crystal may approach the Floquet regime, as justified by our dynamical simulations with parameters derived from first-principles calculations. Although we have ignored damping of the IR phonon in this work, we note that the inclusion of damping will quantitatively alter some of our results. That is, for a given IR-active phonon, larger peak electric fields and/or pulse durations may be needed to observe the desired effect.

| Frequency [THz] | Polarization direction | Space group induced by \(Q_{IR}\) | Irrep of \(Q_{\vec{q}}\) in space group induced by \(Q_{IR}\) | Space group induced by \(Q_{IR}+Q_{\vec{q}}\) | Octahedral tilt pattern | Space group induced by octahedral tilt |
| --- | --- | --- | --- | --- | --- | --- |
| 16.8, 5.2, 1.9 | [100] | P4mm (C\({}_{4v}\)) | A\({}_{5}\)(a,0) or A\({}_{5}\)(a,a) | Ima2 (C\({}_{2v}\)) or Fmm2 (C\({}_{2v}\)) | \(a^{0}b^{-}b^{-}\) or \(a^{0}a^{0}b^{-}\) | Imma (D\({}_{2h}\)) or I4/mcm (D\({}_{4h}\)) |
| 16.8, 5.2, 1.9 | [110] | Amm2 (C\({}_{2v}\)) | T\({}_{4}\) | Ima2 (C\({}_{2v}\)) | \(a^{0}a^{0}b^{-}\) | I4/mcm (D\({}_{4h}\)) |
| 16.8, 5.2, 1.9 | [111] | R3m (C\({}_{3v}\)) | T\({}_{2}\) | R3c (C\({}_{3v}\)) | \(a^{-}a^{-}a^{-}\) | R\(\bar{3}\)c (D\({}_{3d}\)) |

Table 4: Transiently induced structural phases (Region II) in SrTiO\({}_{3}\) for resonant excitation of IR-active phonons polarized along different directions. Since the cubic phase of SrTiO\({}_{3}\) is dynamically unstable at 0 K in DFT, we only report the predicted structural phases, assuming that the critical field can be controlled by proximity to the 110 K phase transition temperature. Within a half cycle of the IR-active phonon, the space group symmetry is lowered (third column). As the excited IR-active phonon rings, \(Q_{\vec{q}}\) freezes into the crystal, further lowering the space group symmetry (fifth column), altering the point group and translational symmetry of the crystal. \(Q_{\vec{q}}\), labeled by the irreducible representation (Irrep) in the space group induced by the IR-active phonon excitation (fourth column), is primarily associated with TiO\({}_{6}\) octahedral tilt patterns (sixth column), but may also include other subtle distortions (Supplementary Sec. S3). The space group induced by the octahedral tilts _alone_ is given in the seventh column.

| Frequency [THz] | Polarization direction | Space group induced by \(Q_{IR}\) | Irrep of \(Q_{\vec{q}}\) in space group induced by \(Q_{IR}\) | Space group induced by \(Q_{IR}+Q_{\vec{q}}\) | Octahedral tilt pattern | Space group induced by octahedral tilt | Critical field [MV/cm] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 16.7 / 5.5 / 3.3 | [100] | P4mm (C\({}_{4v}\)) | A\({}_{5}\)(a,0) or A\({}_{5}\)(a,a) | Ima2 (C\({}_{2v}\)) or Fmm2 (C\({}_{2v}\)) | \(a^{0}b^{-}b^{-}\) or \(a^{0}a^{0}b^{-}\) | Imma (D\({}_{2h}\)) or I4/mcm (D\({}_{4h}\)) | 4.5 / 18.0 / 1.8 |
| 16.7 | [110] | Amm2 (C\({}_{2v}\)) | T\({}_{4}\) | Ima2 (C\({}_{2v}\)) | \(a^{0}a^{0}b^{-}\) | I4/mcm (D\({}_{4h}\)) | 4.6 |
| 5.5 | [110] | Amm2 (C\({}_{2v}\)) | Y\({}_{4}\) | Pmc2\({}_{1}\) (C\({}_{2v}\)) | \(a^{0}a^{0}b^{+}\) | P4/mbm (D\({}_{4h}\)) | 11.6 |
| 3.3 | [110] | Amm2 (C\({}_{2v}\)) | T\({}_{4}\) | Ima2 (C\({}_{2v}\)) | \(a^{0}a^{0}b^{-}\) | I4/mcm (D\({}_{4h}\)) | 2.3 |
| 16.7 / 5.5 / 3.3 | [111] | R3m (C\({}_{3v}\)) | T\({}_{2}\) | R3c (C\({}_{3v}\)) | \(a^{-}a^{-}a^{-}\) | R\(\bar{3}\)c (D\({}_{3d}\)) | 4.8 / 15.1 / 2.8 |

Table 3: Induced instabilities (Region II) in KTaO\({}_{3}\) for resonant excitation of IR-active phonons polarized along different directions from our DFT calculations. Within a half cycle of the IR-active phonon, the space group symmetry is lowered (third column). As the IR-active phonon rings, \(Q_{\vec{q}}\) condenses into the crystal, further lowering the space group symmetry (fifth column), altering the point group and translational symmetry of the crystal. \(Q_{\vec{q}}\), labeled by the irreducible representation (Irrep) in the space group induced by the IR-active phonon excitation (fourth column), is primarily associated with TaO\({}_{6}\) octahedral tilt patterns (sixth column), but may also include other subtle distortions (Supplementary Sec. S3). The space group induced by the octahedral tilts alone is given in the seventh column. The critical electric field needed to condense \(Q_{\vec{q}}\) for a 500 fs duration Gaussian electric field pulse is given in the last column. For the 5.5 THz IR phonon polarized along the [110] direction, our calculations suggest that higher-order lattice anharmonicity may be needed to describe the condensation of \(Q_{\vec{q}}\).
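As a rough illustration of how the critical fields in the last column of Table 3 can be estimated, the sketch below inverts the dimensionless drive \(\epsilon\) (written in terms of pulse and mode parameters in Appendix A, including the unit-conversion factor quoted there) at the Region II condition \(\epsilon=|\delta|\). The microscopic numbers below are placeholders rather than the DFT values of Table 2, and the pulse-shape factor \(\eta\) is left as a free parameter.

```python
# A rough illustration (placeholder parameters, not the paper's script):
# estimate the critical peak field at which eps(E0) reaches |delta|.
import math

SCALE = 0.9648533  # unit-conversion factor quoted in Appendix A

def eps_of_field(E0, D, Zeff, tau, M, K_ir, eta):
    """eps = (eta/pi)^2 D Z*^2 tau^2 E0^2 / (M K_IR^2); Appendix A units:
    D [eV/A^4], Z* [e], tau [ps], E0 [MV/cm], M [u], K_IR [eV/A^2]."""
    return SCALE * (eta / math.pi) ** 2 * D * Zeff**2 * tau**2 * E0**2 / (M * K_ir**2)

def critical_field(delta, **mode):
    """Peak field solving eps(E0) = |delta|; eps scales as E0^2."""
    return math.sqrt(abs(delta) / abs(eps_of_field(1.0, **mode)))

# hypothetical mode parameters, for illustration only
mode = dict(D=-0.03, Zeff=2.0, tau=0.5, M=16.0, K_ir=0.6, eta=1.0)
print(f"E_crit = {critical_field(delta=-0.27, **mode):.1f} MV/cm")
```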
As an example of point group and translational symmetry control, we have shown that in the perovskite KTaO\({}_{3}\), which is cubic at all temperatures at equilibrium, zone-edge octahedral tilts can be induced by coupling to an optically excited IR phonon, revealing a hidden structural phase. The explored polarization dependence of this phenomenon suggests that octahedral tilt modes (and potentially other kinds of structural distortion patterns) can be precisely controlled with light. We expect that the susceptibility of a given material to this kind of control is tied to the frequency of \(Q_{\vec{q}}\), with lower frequency \(\omega_{\vec{q}}\) modes more likely to induce transient structural phase transitions. Accompanying this lowering in symmetry, we expect other phonons to be amplified to large oscillatory motion through parametric oscillation. Resolving this motion experimentally will require measuring the structural response of the crystal on multiple timescales using, for example, ultrafast IR pump/diffuse X-ray scattering probes. Furthermore, our results suggest that continued development of tunable lasers spanning the 1 THz - 20 THz spectral range will enable further exploration of the structural and functional response of crystals to light. Finally, we note that in the development of the Floquet phase diagram in Fig. 1 we have focused on coupling between the driven IR-active phonon and other phonons; however, the form of Eqn. 1 is general. That is, replacing \(Q_{\vec{q}}\) with an arbitrary order parameter describing, for example, magnetism or orbital order, preserves the symmetries of the model. The approach here is therefore general and can be applied to understand _any_ IR-phonon-driven phase change.

###### Acknowledgements.

JZK and NAB were supported by the Department of Energy - Office of Basic Energy Sciences under award DE-SC0019414. GK was supported by the Cornell Center for Materials Research with funding from the NSF MRSEC program (Grant No. DMR-1719875). Computational resources were provided by the Cornell Center for Advanced Computing.

## Appendix A \(\epsilon\) Unit Conversion

Proper unit conversions are required for \(\epsilon\) to be a dimensionless quantity. When simplified and written in terms of pulse characteristics and microscopic parameters, \(\epsilon\) reads

\[\epsilon=\left(\frac{\eta}{\pi}\right)^{2}\frac{D_{IR,\vec{q}}\tilde{Z}^{*2}\tau^{2}E_{0}^{2}}{M_{\vec{q}}K_{IR}^{2}}, \tag{10}\]

where \(\left(\frac{\eta}{\pi}\right)^{2}\) is a dimensionless coefficient and \(\eta\) is defined by the pulse shape (see below Eqn. 4). Given the following choice of units for the other terms: \(D_{IR,\vec{q}}\) [eV/A\({}^{4}\)], \(\tilde{Z}^{*}\) [e\({}^{-}\)], \(\tau\) [ps], \(E_{0}\) [MV/cm], \(M_{\vec{q}}\) [u (atomic mass units)], and \(K_{IR}\) [eV/A\({}^{2}\)]; a scale factor of 0.9648533 is needed.

## Appendix B Constructing the Floquet Phase Diagram

In order to construct the phase diagram for the dynamical response described by Eqn. 5, we recall the main results of Floquet theory [31; 32]. The fundamental solutions of Eqn. 5 are of the form \(x_{i}(\theta)=\lambda_{i}^{\theta/\pi}p_{i}(\theta)\), where \(p_{i}(\theta)=p_{i}(\theta+\pi)\) shares the periodicity of \(A(\theta)\) and the \(\lambda_{i}\) are the so-called _characteristic multipliers_. This shows that solutions are generally periodic in time and may grow or decay exponentially. The \(\lambda_{i}\) are found numerically by integrating Eqn.
5 over one period given a set of linearly independent initial conditions \(X(\theta_{0})=(x_{1}(\theta_{0}),x_{2}(\theta_{0}),...,x_{N}(\theta_{0}))\). The transformation matrix \(B=X(\theta_{0})^{-1}X(\theta_{0}+\pi)\) is diagonalized to find its eigenvalues - the characteristic multipliers \(\lambda_{i}\). Three scenarios are possible: \(|\lambda_{i}|>1\) corresponding to exponential growth, \(|\lambda_{i}|<1\) corresponding to exponential decay, and \(|\lambda_{i}|=1\) corresponding to stability of the \(i^{th}\) solution. Therefore, \(|\lambda_{i}|=1\) defines the phase boundaries between regions of exponential growth and decay. We numerically integrate Eqn. 6 for \(\theta\in[0,\pi]\) over a mesh of \(\delta\), \(\epsilon\), and \(\nu\) with initial conditions \(x_{1}(0)=(1,0)\) and \(x_{2}(0)=(0,1)\). \(x_{1}(0)\) corresponds to a physical scenario where at \(\theta=0\), \(Q_{\vec{q}}\) is displaced but its velocity \(\dot{Q}_{\vec{q}}\) is zero. Conversely, \(x_{2}(0)\) corresponds to a scenario where at \(\theta=0\), \(Q_{\vec{q}}=0\) and the velocity is nonzero. The eigenvalues of the transformation matrix \(B\) define the characteristic multipliers, which are analyzed to construct the phase boundaries (\(|\lambda|=1\)) and regions of exponential growth (\(|\lambda|>1\)) and decay (\(|\lambda|<1\)) in \(Q_{\vec{q}}\).

## Appendix C Parametric Oscillation Derivation

In this Appendix, we derive expressions for the approximate phase boundary, an exponential growth rate, and the peak growth rate including the effects of damping for parametric oscillation in Region III. We anticipate that these expressions will be useful in future experimental work exploring this phenomenon. To accommodate the exponential growth predicted by the Floquet analysis and the expected periodic motion, we assume a solution of the form \(Q_{\vec{q}}=A(\tau)\cos(\tau)+B(\tau)\sin(\tau)\). Inserting this _ansatz_ into Eqn. 5, we find for the exponential growth parameters \(A\) and \(B\),

\[\frac{d}{d\tau}\left(\begin{array}{c}A\\ B\end{array}\right)=\frac{1}{2}\begin{bmatrix}0&-1+\left(\delta+\epsilon\right)-\epsilon/2\\ 1-\left(\delta+\epsilon\right)-\epsilon/2&0\end{bmatrix}\left(\begin{array}{c}A\\ B\end{array}\right) \tag{49}\]

where \(\ddot{A}\) and \(\ddot{B}\) have been neglected in the spirit of the slowly varying envelope approximation [69]. We have also neglected high-harmonic terms and damping to find this form. Eqn. 49 has exponential solutions, with their growth rate found by solving for the eigenvalues of the matrix on the right-hand side. We find \(\mu_{V,\pm}=\pm\frac{1}{2}\sqrt{\left(\epsilon/2\right)^{2}-\left(1-\left(\delta+\epsilon\right)\right)^{2}}\), where \(\mu_{V,+}\) gives rise to exponential growth. The phase boundary is again identified by setting \(\mu=0\). We find the following conditions for the phase boundary:

\[\begin{split}\left(\frac{\epsilon}{2}\right)^{2}\geq\left(1-\left(\delta+\epsilon\right)\right)^{2}\\ \frac{2}{3}\left(1-\delta\right)\leq\epsilon\leq 2\left(1-\delta\right)\end{split} \tag{50}\]

Maximum exponential growth is found at \(\epsilon^{*}=\frac{4}{3}\left(1-\delta\right)\) with a growth rate of \(\mu(\epsilon^{*})=\frac{\left|1-\delta\right|}{2\sqrt{3}}\). To account for the effect of damping we assume the standard result from damped oscillators, \(Q_{\vec{q}}\propto e^{-\frac{\nu}{2}\tau}\), so that exponential growth is only expected when \(\mu_{V,+}\) is greater than \(\frac{\nu}{2}\). This alters Eqn.
50 so that the phase boundary is defined by

\[\begin{split}\epsilon&\geq\frac{4}{3}\left(\left(1-\delta_{\nu}\right)-\frac{1}{2}\sqrt{\left(1-\delta_{\nu}\right)^{2}-3\nu^{2}}\right)\\ \epsilon&\leq\frac{4}{3}\left(\left(1-\delta_{\nu}\right)+\frac{1}{2}\sqrt{\left(1-\delta_{\nu}\right)^{2}-3\nu^{2}}\right)\end{split} \tag{51}\]

Here \(\delta_{\nu}=\delta-\nu^{2}/4\) accounts for the frequency shift imparted by the damping. This relation requires \(\left|1-\delta_{\nu}\right|\geq\sqrt{3}\nu\) and \(\left|\epsilon\right|\geq 2\nu\) for exponential growth. That is, there is a range of \(\omega_{\vec{q}}\) near \(\omega_{IR}\) that will not exhibit parametric oscillation, and a larger drive \(\epsilon\) is needed to overcome the damping for \(\omega_{\vec{q}}\) outside this range. The decrease in size of Region III is taken up by Region I and Region V (Fig. S2). The phase boundaries between Region V and Regions I & IV represent the only phase boundaries in Fig. 1 sensitive to damping of \(Q_{\vec{q}}\).

## References

* [1] V. M. Goldschmidt, "Die Gesetze der Krystallochemie," _Naturwissenschaften_, vol. 14, pp. 477-485, May 1926.
* [2] L. D. Landau, "On the theory of phase transitions," in _Collected Papers of L.D. Landau_ (D. ter Haar, ed.), pp. 193-216, Pergamon, 1965.
* [3] G. Shirane, "Neutron scattering studies of structural phase transitions at Brookhaven," _Rev. Mod. Phys._, vol. 46, pp. 437-449, July 1974.
* [4] W. Li, X. Qian, and J. Li, "Phase transitions in 2D materials," _Nature Reviews Materials_, vol. 6, pp. 829-846, Sept. 2021.
* [5] J. Toledano and P. Toledano, _The Landau Theory of Phase Transitions_. World Scientific Publishing Company, 1987.
* [6] R. A. Borzi, S. A. Grigera, J. Farrell, R. S. Perry, S. J. S. Lister, S. L. Lee, D. A. Tennant, Y. Maeno, and A. P. Mackenzie, "Formation of a Nematic Fluid at High Fields in Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\)," _Science_, vol. 315, pp. 214-217, Jan. 2007.
* [7] N. K. Gupta, C. McMahon, R. Sutarto, T. Shi, R. Gong, H. I. Wei, K. M. Shen, F. He, Q. Ma, M. Dragomir, B. D. Gaulin, and D. G. Hawthorn, "Vanishing nematic order beyond the pseudogap phase in overdoped cuprate superconductors," _Proceedings of the National Academy of Sciences_, vol. 118, p. e2106881118, Aug. 2021.
* [8] Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, "Unconventional superconductivity in magic-angle graphene superlattices," _Nature_, vol. 556, pp. 43-50, Mar. 2018.
* [9] X. Zhang, M. Takahashi, K. Takeuchi, and S. Sakai, "64 kbit Ferroelectric-Gate-Transistor-Integrated NAND Flash Memory with 7.5 V Program and Long Data Retention," _Japanese Journal of Applied Physics_, vol. 51, p. 04DD01, Apr. 2012.
* [10] P. Jiao, K.-J. I. Egbe, Y. Xie, A. M. Nazar, and A. H. Alavi, "Piezoelectric Sensing Techniques in Structural Health Monitoring: A State-of-the-Art Review," _Sensors_, vol. 20, July 2020.
* [11] P. Curie and J. Curie, "Développement par compression de l'électricité polaire dans les cristaux hémièdres à faces inclinées," _Bulletin de Minéralogie_, vol. 3, no. 4, pp. 90-93, 1880.
* [12] G. Lippmann, "Principe de la conservation de l'électricité, ou second principe de la théorie des phénomènes électriques," _Journal de Physique Théorique et Appliquée_, vol. 10, no. 1, pp. 381-394, 1881.
* [13] E. Bousquet, M. Dawber, N. Stucki, C. Lichtensteiger, P. Hermet, S. Gariglio, J.-M. Triscone, and P. Ghosez, "Improper ferroelectricity in perovskite oxide artificial superlattices," _Nature_, vol. 452, pp. 732-736, Apr. 2008.
* [14] T.
Fukushima, A. Stroppa, S. Picozzi, and J. M. Perez-Mato, "Large ferroelectric polarization in the new double perovskite NaLaMnWO\({}_{6}\) induced by non-polar instabilities," _Phys. Chem. Chem. Phys._, vol. 13, pp. 12186-12190, June 2011. * [15] N. A. Benedek and C. J. Fennie, "Hybrid improper ferroelectricity: A mechanism for controllable polarization-magnetization coupling," _Phys. Rev. Lett._, vol. 106, p. 107204, Mar. 2011. * [16] N. A. Benedek, A. T. Mulder, and C. J. Fennie, "Polar octahedral rotations: A path to new multifunctional materials," _Journal of Solid State Chemistry_, vol. 195, pp. 11-20, Nov. 2012. * [17] N. A. Benedek and M. A. Hayward, "Hybrid Improper Ferroelectricity: A Theoretical, Computational, and Synthetic Perspective," _Annual Review of Materials Research_, vol. 52, pp. 331-355, July 2022. * [18] M. Rini, R. Tobey, N. Dean, J. Itatani, Y. Tomioka, Y. Tokura, R. W. Schoenlein, and A. Cavalleri, "Control of the electronic phase of a manganite by mode-selective vibrational excitation," _Nature_, vol. 449, pp. 72-74, Sept. 2007. * [19] D. Fausti, R. I. Tobey, N. Dean, S. Kaiser, A. Dienst, M. C. Hoffmann, S. Pyon, T. Takayama, H. Takagi, and A. Cavalleri, "Light-Induced Superconductivity in a Stripe-Ordered Cuprate," _Science_, vol. 331, pp. 189-191, Jan. 2011. * [20] R. Mankowsky, A. Subedi, M. Forst, S. O. Mariager, M. Chollet, H. T. Lemke, J. S. Robinson, J. M. Glownia, M. P. Minitti, A. Frano, M. Fechner, N. A. Spaldin, T. Loew, B. Keimer, A. Georges, and A. Cavalleri, "Nonlinear lattice dynamics as a basis for enhanced superconductivity in YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{6.5}\)," _Nature_, vol. 516, pp. 71-73, Dec. 2014. * [21] T. A. Miller, R. W. Chhajlany, L. Tagliacozzo, B. Green, S. Kovalev, D. Prabhakaran, M. Lewenstein, M. Gensch, and S. Wall, "Terahertz field control of in-plane orbital order in La\({}_{0.5}\)Sr\({}_{1.5}\)MnO\({}_{4}\)," _Nature Communications_, vol. 6, p. 8175, Sept. 2015. * [22] R. Mankowsky, A. von Hoegen, M. Forst, and A. Cavalleri, "Ultrafast Reversal of the Ferroelectric Polarization," _Phys. Rev. Lett._, vol. 118, p. 197601, May 2017. * [23] M. Fechner, M. Forst, G. Orenstein, V. Krapivin, A. S. Disa, M. Buzzi, A. von Hoegen, G. de la Pena, Q. L. Nguyen, R. Mankowsky, M. Sander, H. Lemke, Y. Deng, M. Trigo, and A. Cavalleri, "Quenched lattice fluctuations in optically driven SrTiO\({}_{3}\)," 2023. * [24] X. Gonze and C. Lee, "Dynamical matrices, Born effective charges, dielectric permittivity tensors, and interatomic force constants from density-functional perturbation theory," _Phys. Rev. B_, vol. 55, pp. 10355-10368, Apr. 1997. * [25] G. Khalsa, N. A. Benedek, and J. Moses, "Ultrafast Control of Material Optical Properties via the Infrared Resonant Raman Effect," _Phys. Rev. X_, vol. 11, p. 021067, June 2021. * [26] F. Caruso and M. Zacharias, "Quantum theory of light-driven coherent lattice dynamics," _Phys. Rev. B_, vol. 107, p. 054102, Feb. 2023. * [27] A. Subedi, A. Cavalleri, and A. Georges, "Theory of nonlinear phononics for coherent light control of solids," _Phys. Rev. B_, vol. 89, p. 220301, June 2014. * [28] M. Claassen, H.-C. Jiang, B. Moritz, and T. P. Devereaux, "Dynamical time-reversal symmetry breaking and photo-induced chiral spin liquids in frustrated Mott insulators," _Nature Communications_, vol. 8, p. 1192, Oct. 2017. * [29] T. Oka and S. Kitamura, "Floquet Engineering of Quantum Materials," _Annual Review of Condensed Matter Physics_, vol. 10, no. 1, pp. 387-408, 2019. * [30] A. de la Torre, D. M. 
Kennes, M. Claassen, S. Gerber, J. W. McIver, and M. A. Sentef, "Colloquium: Nonthermal pathways to ultrafast control in quantum materials," _Rev. Mod. Phys._, vol. 93, p. 041002, Oct. 2021. * [31] W. Magnus and S. Winkler, _Hill's Equation_. Dover Books on Mathematics Series, Dover Publications, 2004. * [32] I. Kovacic, R. Rand, and S. Mohamed Sah, "Mathieu's Equation and Its Generalizations: Overview of Stability Charts and Their Features," _Applied Mechanics Reviews_, vol. 70, Feb. 2018. 2020802. * [33] G. Kresse and J. Hafner, "_Ab initio_ molecular dynamics for liquid metals," _Phys. Rev. B_, vol. 47, pp. 558-561, Jan. 1993. * [34] G. Kresse and J. Furthmuller, "Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set," _Computational Materials Science_, vol. 6, pp. 15-50, July 1996. * [35] G. Kresse and J. Furthmuller, "Efficient iterative schemes for _ab initio_ total-energy calculations using a plane-wave basis set," _Phys. Rev. B_, vol. 54, pp. 11169-11186, Oct. 1996. * [36] G. Kresse and D. Joubert, "From ultrasoft pseudopotentials to the projector augmented-wave method," _Phys. Rev. B_, vol. 59, pp. 1758-1775, Jan. 1999. * [37] S. Baroni, S. de Gironcoli, A. Dal Corso, and P. Giannozzi, "Phonons and related crystal properties from density-functional perturbation theory," _Rev. Mod. Phys._, vol. 73, pp. 515-562, July 2001. * [38] P. Vousden, "A Study of the Unit-cell Dimensions and Symmetry of certain Ferroelectric Compounds of Niobium and Tantalum at Room Temperature," _Acta Crystallographica_, vol. 4, pp. 373-376, July 1951. * [39] T. Ishihara, N. S. Baik, N. Ono, H. Nishiguchi, and Y. Takita, "Effects of crystal structure on photolysis of H\({}_{2}\)O on K-Ta mixed oxide," _Journal of Photochemistry and Photobiology A: Chemistry_, vol. 167, pp. 149-157, Oct. 2004. * [40] M. Schmidbauer, A. Kwasniewski, and J. Schwarzkopf, "High-precision absolute lattice parameter determination of SrTiO\({}_{3}\), DyScO\({}_{3}\) and NdGaO\({}_{3}\) single crystals," _Acta Crystallographica Section B_, vol. 68, pp. 8-14, Feb. 2012. * [41] R. D. King-Smith and D. Vanderbilt, "Theory of polarization of crystalline solids," _Phys. Rev. B_, vol. 47, pp. 1651-1654, Jan. 1993. * [42] R. Resta, "Macroscopic Electric Polarization as a Geometric Quantum Phase," _Europhysics Letters_, vol. 22, p. 133, Apr. 1993. * [43] D. Vanderbilt and R. D. King-Smith, "Electric polarization as a bulk quantity and its relation to surface charge," _Phys. Rev. B_, vol. 48, pp. 4442-4455, Aug. 1993. * [44] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. D. Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, "QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials," _Journal of Physics: Condensed Matter_, vol. 21, p. 395502, Sept. 2009. * [45] K. F. Garrity, J. W. Bennett, K. M. Rabe, and D. Vanderbilt, "Pseudopotentials for high-throughput DFT calculations," _Computational Materials Science_, vol. 81, pp. 446-452, Jan. 2014. * [46] H. T. Stokes, D. M. Hatch, and B. J. Campbell, "ISOTROPY Software Suite." iso.byu.edu. * [47] B. J. Campbell, H. T. Stokes, D. E. Tanner, and D. M. 
Hatch, "_ISODISPLACE_: a web-based tool for exploring structural distortions," _Journal of Applied Crystallography_, vol. 39, pp. 607-614, Aug 2006. * [48] C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith, R. Kern, M. Picus, S. Hoyer, M. H. van Kerkwijk, M. Brett, A. Haldane, J. F. del Rio, M. Wiebe, P. Peterson, P. Gerard-Marchant, K. Sheppard, T. Reddy, W. Weckesser, H. Abbasi, C. Gohlke, and T. E. Oliphant, "Array programming with NumPy," _Nature_, vol. 585, pp. 357-362, Sept. 2020. * [49] M. W. Lufaso and P. M. Woodward, "Prediction of the crystal structures of perovskites using the software program _SPuDS_," _Acta Crystallographica Section B_, vol. 57, pp. 725-738, Dec 2001. * [50] A. Sell, A. Leitenstorfer, and R. Huber, "Phase-locked generation and field-resolved detection of widely tunable terahertz pulses with amplitudes exceeding 100 MV/cm," _Opt. Lett._, vol. 33, pp. 2767-2769, Dec. 2008. * [51] M. Knorr, J. Raab, M. Tauer, P. Merkl, D. Peller, E. Wittmann, E. Riedle, C. Lange, and R. Huber, "Phase-locked multi-terahertz electric fields exceeding 13 MV/cm at a 190 kHz repetition rate," _Opt. Lett._, vol. 42, pp. 4367-4370, Nov. 2017. * [52] J. V. Jose and E. J. Saletan, _Classical Dynamics: A Contemporary Approach_, p. 382-491. Cambridge University Press, 1998. * [53] A. Cartella, T. F. Nova, M. Fechner, R. Merlin, and A. Cavalleri, "Parametric amplification of optical phonons," _Proceedings of the National Academy of Sciences_, vol. 115, pp. 12148-12151, Nov. 2018. * [54] M. Forst, C. Manzoni, S. Kaiser, Y. Tomioka, Y. Tokura, R. Merlin, and A. Cavalleri, "Nonlinear phononics as an ultrafast route to lattice control," _Nature Physics_, vol. 7, pp. 854-856, Nov. 2011. * [55] M. Forst, R. Mankowsky, and A. Cavalleri, "Mode-Selective Control of the Crystal Lattice," _Accounts of Chemical Research_, vol. 48, pp. 380-387, Feb. 2015. * [56] G. Khalsa and N. A. Benedek, "Ultrafast optically induced ferromagnetic/anti-ferromagnetic phase transition in GdTiO\({}_{3}\) from first principles," _npj Quantum Materials_, vol. 3, p. 15, Mar. 2018. * [57] A. S. Disa, T. F. Nova, and A. Cavalleri, "Engineering crystal structures with light," _Nature Physics_, vol. 17, pp. 1087-1092, Oct. 2021. * [58] We note that the condensation of \(Q_{\vec{q}}\) can create other Raman-active phonons via additional anharmonic lattice energy terms which may be accessed in some crystals through the conventional nonlinear phononics effect [70]. For simplicity, we ignore these pathways in our analysis since, though their inclusion is straightforward, it can quickly become unnecessarily cumbersome for our purposes. * [59] T. F. Nova, A. S. Disa, M. Fechner, and A. Cavalleri, "Metastable ferroelectricity in optically strained SrTiO\({}_{3}\)," _Science_, vol. 364, pp. 1075-1079, June 2019. * [60] X. Li, T. Qiu, J. Zhang, E. Baldini, J. Lu, A. M. Rappe, and K. A. Nelson, "Terahertz field-induced ferroelectricity in quantum paraelectric SrTiO\({}_{3}\)," _Science_, vol. 364, pp. 1079-1082, June 2019. * [61] P. A. Fleury, J. F. Scott, and J. M. Worlock, "Soft Phonon Modes and the 110\({}^{\circ}\)K Phase Transition in SrTiO\({}_{3}\)," _Phys. Rev. Lett._, vol. 21, pp. 16-19, July 1968. * [62] A. Stephenson, "XX. On Induced Stability," _The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science_, vol. 15, pp. 233-236, Apr. 1908. * [63] P. L. Kapitsa, "Pendulum with vibrating suspension," _Usp. Phys. Sciences_, vol. 
44, pp. 7-20, May 1951.
* [64] T. Mertelj and V. V. Kabanov, "Comment on "Ultrafast Reversal of the Ferroelectric Polarization"," _Phys. Rev. Lett._, vol. 123, p. 129701, Sept. 2019.
* [65] V. A. Abalmasov, "Ultrafast reversal of the ferroelectric polarization by a midinfrared pulse," _Phys. Rev. B_, vol. 101, p. 014102, Jan. 2020.
* [66] P. Chen, C. Paillard, H. J. Zhao, J. Iniguez, and L. Bellaiche, "Deterministic control of ferroelectric polarization by ultrafast laser pulses," _Nature Communications_, vol. 13, p. 2566, May 2022.
* [67] A. M. Glazer, "The Classification of Tilted Octahedra in Perovskites," _Acta Crystallographica Section B_, vol. 28, pp. 3384-3392, Nov. 1972.
* [68] A. M. Glazer, "Simple Ways of Determining Perovskite Structures," _Acta Crystallographica Section A_, vol. 31, pp. 756-762, Nov. 1975.
* [69] F. Arecchi and R. Bonifacio, "Theory of Optical Maser Amplifiers," _IEEE Journal of Quantum Electronics_, vol. 1, no. 4, pp. 169-178, 1965.
* [70] D. M. Juraschek, M. Fechner, and N. A. Spaldin, "Ultrafast Structure Switching through Nonlinear Phononics," _Phys. Rev. Lett._, vol. 118, p. 054101, Jan. 2017.

**Supplementary Information: Coherent control of the translational and point group symmetries of crystals with light**

Guru Khalsa\({}^{1}\), Jeffrey Z. Kaaret\({}^{2}\), and Nicole A. Benedek\({}^{1}\)

1. Department of Materials Science and Engineering, Cornell University, Ithaca NY 14853, USA
2. School of Applied and Engineering Physics, Cornell University, Ithaca NY 14853, USA

## S1 Approximate solutions to the Floquet phase boundaries

Approximate analytic results for the phase boundaries are shown in Fig. S1. The comparison shows that the analytic results given in the main text are good approximations to the boundaries found with Floquet theory for the transition from trivial motion (white regions - Region I) to the novel dynamical responses discussed in the main text (shaded regions). Fig. S2 shows the effect of damping (\(\nu=2\gamma_{\vec{q}}/\omega_{IR}\)) on the phase boundary between trivial damped motion of \(Q_{\vec{q}}\) (white region - Region I) and exponentially growing parametric oscillation of \(Q_{\vec{q}}\) (blue region - Region III). The red lines in the figure show the threshold values of the driving strength (\(|\epsilon|\geq 2\nu\)) and the frequency ratio \(\sqrt{\delta}\) (\(|1-\delta_{\nu}|\geq\sqrt{3}\nu\), where \(\delta_{\nu}=\delta-\nu^{2}/4\)) that must be overcome to see the parametric oscillatory behavior (see Appendix C of the main text for further discussion).

## S2 The effect of strain and linear-quadratic coupling

Large strain states are expected to be observed on timescales comparable to the phonon-phonon scattering component of the lifetime [S1], that is, when the energy transfers from \(Q_{IR}\) and \(Q_{\vec{q}}\) to the acoustic branch. We note that a nearly instantaneous strain response (on the scale of the electronic scattering rate \(\sim 1\) ps) is also expected as the electrons accommodate the increased energy density from the optical field, but we expect this to be a small effect. Here, we discuss how this strain may impact the dynamics and the location of the phase boundary, focusing on Region II. Including strain components in Eqn.
1 gives

\[\begin{split} U&=\frac{1}{2}K_{IR}Q_{IR}^{2}+\frac{1}{2}K_{\vec{q}}Q_{\vec{q}}^{2}+\frac{1}{2}C_{\varepsilon}\varepsilon^{2}\\ &+\alpha_{IR}\,\varepsilon Q_{IR}^{2}+\alpha_{\vec{q}}\,\varepsilon Q_{\vec{q}}^{2}+D_{IR,\vec{q}}Q_{IR}^{2}Q_{\vec{q}}^{2}-\Delta\vec{P}\cdot\vec{E},\end{split}\] (S1)

where \(\varepsilon\) is a strain coordinate, \(C_{\varepsilon}\) is an elastic stiffness, and \(\alpha_{IR/\vec{q}}\) is the coupling between strain and \(Q_{IR/\vec{q}}\). In this equation, \(\varepsilon\) represents the strain induced by the excitation and will generally be a linear combination of the equilibrium strain tensor components. \(C_{\varepsilon}\) and \(\alpha_{IR/\vec{q}}\) are therefore effective elastic parameters associated with the induced strain response. Note that in the calculation of the strain coefficients, only the lattice constants are allowed to change. That is, the only atomic motion included is due to the IR-active phonon. Taking the same steps as in Sec. II.1, we define an effective force constant for \(Q_{\vec{q}}\):

\[\tilde{K}_{\vec{q}}(Q_{IR},\varepsilon)=K_{\vec{q}}+2\alpha_{\vec{q}}\,\varepsilon+2D_{IR,\vec{q}}Q_{IR}^{2}.\] (S2)

We find the peak strain amplitude \(\varepsilon^{*}\) by solving \(-\frac{\partial U}{\partial\varepsilon}=0\) while keeping \(Q_{\vec{q}}=0\), which gives \(\varepsilon^{*}=\alpha_{IR}Q_{IR}^{2}/C_{\varepsilon}\). Substituting this into Eqn. S2, we find

\[\tilde{K}_{\vec{q}}(Q_{IR})=K_{\vec{q}}+2\left(\frac{\alpha_{IR}\alpha_{\vec{q}}}{C_{\varepsilon}}+D_{IR,\vec{q}}\right)Q_{IR}^{2}.\] (S3)

This highlights the qualitative finding that if \(\frac{\alpha_{IR}\alpha_{\vec{q}}}{C_{\varepsilon}}\) has the same sign as \(D_{IR,\vec{q}}\), then strain and the biquadratic coupling work in concert to decrease the critical field. If they differ in sign, strain and the biquadratic coupling work in opposition to increase the threshold field. We are unaware of any _a priori_ strategy for anticipating the relative sign of the coupling and therefore expect theoretical modeling will generally be necessary for this level of detail in the definition of the phase boundary. We note that this framework for including the effect of other modes in the phase boundary applies to _any_ linear-quadratic third-order coupling term. Therefore, the conventional nonlinear phononics effect, with anharmonic coupling \(\propto Q_{IR}^{2}Q_{R}\), can also be treated on an equal footing with strain, having only a quantitative effect on the phase boundary. As an example of this, we focus on the Region II phase boundary for KTaO\({}_{3}\) for excitation of the 16.7 THz IR-active phonon polarized along the [100] direction. The phase boundary in the absence of strain is found to occur at 4.6 MV/cm for a 500 fs duration Gaussian pulse. We find that \(\alpha_{IR}\alpha_{\vec{q}}/C_{\varepsilon}\) is negative. That is, strain and the biquadratic coupling work in concert to condense \(Q_{\vec{q}}\). We find the phase boundary including strain to be 3.8 MV/cm.

## S3 Eigenmodes of the induced instability \(Q_{\vec{q}}\)

The induced structural instabilities due to IR-active phonon pumping are primarily octahedral tilts/rotations, as discussed in the main text, but involve smaller distortions that are also allowed by symmetry. An eigenmode decomposition of the \(Q_{\vec{q}}\)'s found in Table 3 of the main text, referenced to the parent Pm\(\bar{3}\)m structure, is given in Tables S1, S2, and S3.
The symmetry labels and directions were generated by the ISOTROPY Software Suite [S2, S3].

| Frequency [THz] | Irrep of \(Q_{\vec{q}}\) | \(R_{5}^{+}[101]\):K | \(R_{4}^{-}[101]\):Ta | \(R_{4}^{+}[101]\):O | \(R_{5}^{+}[101]\):O |
| --- | --- | --- | --- | --- | --- |
| 16.7 | A\({}_{5}\)(a,0) | - | 0.060726 | 0.978766 | 0.195523 |
| 5.5 | A\({}_{5}\)(a,0) | -0.318172 | -0.059278 | 0.945980 | 0.019344 |
| 3.3 | A\({}_{5}\)(a,0) | - | 0.019962 | 0.983722 | 0.178586 |

| Frequency [THz] | Irrep of \(Q_{\vec{q}}\) | \(R_{5}^{+}[100]\):K | \(R_{4}^{-}[001]\):Ta | \(R_{4}^{+}[100]\):O | \(R_{5}^{+}[100]\):O |
| --- | --- | --- | --- | --- | --- |
| 16.7 | A\({}_{5}\)(a,a) | - | -0.060726 | 0.978766 | 0.195523 |
| 5.5 | A\({}_{5}\)(a,a) | -0.318172 | 0.059278 | 0.945980 | 0.019344 |
| 3.3 | A\({}_{5}\)(a,a) | - | -0.019962 | 0.983722 | 0.178586 |

Table S1: Real-space eigendisplacements of the induced instabilities for IR phonon excitation along the crystallographic [100] direction in KTaO\({}_{3}\), found in Table 3 of the main text. The IR phonon lowers the symmetry of the equilibrium Pm\(\bar{3}\)m (\(O_{h}\)) structure to P4mm (\(C_{4v}\)). The irreducible representation (Irrep) of \(Q_{\vec{q}}\) in P4mm is A\({}_{5}\) (second column), which has two high-symmetry directions shown in the top and bottom blocks of the table. The decomposition of \(Q_{\vec{q}}\) in the Pm\(\bar{3}\)m reference is given in the last columns. The sum of the squares of the eigenmode components is unity.

## S4 Zone-edge phonon response to IR-active phonon excitation: bond-length changes

The strong coupling between the zone-edge phonon \(Q_{\vec{q}}\) and the IR-active phonon \(Q_{IR}\) leading to new translational and point group symmetries in KTaO\({}_{3}\) found in Sec. III B can be understood in terms of the geometric network of bonds (see the last two paragraphs of Sec. III B). The large changes in bond lengths due to the IR phonon excitation are energetically unfavorable for the crystal. Octahedral tilts associated with zone-edge phonons provide a pathway for the crystal to mitigate this situation by increasing the shortest bond length. This is shown in Fig. S4, where the shortest time-averaged Ta-O bond length is shown as a function of IR-active phonon displacement (blue), with the inclusion of the R\({}_{4}^{+}\) octahedral tilt (red) and the full \(A_{5}\) phonon (green). Once the threshold value of \(Q_{IR}\) is reached, both the octahedral tilts and the other subtle structural distortions contributing to the \(A_{5}\) mode increase the shortest time-averaged bond length, lowering the configuration energy of the crystal lattice.
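This geometric argument can be checked with a toy rigid-rotation estimate. The sketch below is our own illustration with made-up numbers, not the DFT data behind Fig. S4: it places an oxygen between two Ta atoms separated by the lattice constant, shortens the Ta-O bond along the polarization direction by an assumed time-averaged IR displacement, and then rigidly rotates the octahedron about [001]. The shortened bond lengthens monotonically with tilt angle, as described above.

```python
# Toy geometry (placeholder numbers): tilting lengthens the IR-shortened Ta-O bond.
import math

a = 4.0  # cubic lattice constant [Angstrom], placeholder
u = 0.2  # assumed time-averaged O displacement along [100] from the IR phonon

def short_bond(phi_deg):
    """Shortest Ta-O bond after rigidly rotating the octahedron by phi about [001]."""
    r, phi = a / 2 + u, math.radians(phi_deg)  # O sits at radius r from the rotating Ta
    return math.hypot(a - r * math.cos(phi), r * math.sin(phi))

for phi in (0, 4, 8, 12):
    print(f"tilt {phi:2d} deg: shortest Ta-O bond = {short_bond(phi):.3f} A")
```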
2305.14149
Search and Explore: Symbiotic Policy Synthesis in POMDPs
This paper marries two state-of-the-art controller synthesis methods for partially observable Markov decision processes (POMDPs), a prominent model in sequential decision making under uncertainty. A central issue is to find a POMDP controller - that solely decides based on the observations seen so far - to achieve a total expected reward objective. As finding optimal controllers is undecidable, we concentrate on synthesising good finite-state controllers (FSCs). We do so by tightly integrating two modern, orthogonal methods for POMDP controller synthesis: a belief-based and an inductive approach. The former method obtains an FSC from a finite fragment of the so-called belief MDP, an MDP that keeps track of the probabilities of equally observable POMDP states. The latter is an inductive search technique over a set of FSCs, e.g., controllers with a fixed memory size. The key result of this paper is a symbiotic anytime algorithm that tightly integrates both approaches such that each profits from the controllers constructed by the other. Experimental results indicate a substantial improvement in the value of the controllers while significantly reducing the synthesis time and memory footprint.
Roman Andriushchenko, Alexander Bork, Milan Češka, Sebastian Junges, Joost-Pieter Katoen, Filip Macák
2023-05-23T15:15:09Z
http://arxiv.org/abs/2305.14149v2
# Search and Explore: Symbiotic Policy Synthesis in POMDPs

###### Abstract

This paper marries two state-of-the-art controller synthesis methods for partially observable Markov decision processes (POMDPs), a prominent model in sequential decision making under uncertainty. A central issue is to find a POMDP controller--that solely decides based on the observations seen so far--to achieve a total expected reward objective. As finding optimal controllers is undecidable, we concentrate on synthesising good finite-state controllers (FSCs). We do so by tightly integrating two modern, orthogonal methods for POMDP controller synthesis: a belief-based and an inductive approach. The former method obtains an FSC from a finite fragment of the so-called belief MDP, an MDP that keeps track of the probabilities of equally observable POMDP states. The latter is an inductive search technique over a set of FSCs, e.g., controllers with a fixed memory size. The key result of this paper is a symbiotic anytime algorithm that tightly integrates both approaches such that each profits from the controllers constructed by the other. Experimental results indicate a substantial improvement in the value of the controllers while significantly reducing the synthesis time and memory footprint.

## 1 Introduction

A formidable synthesis challenge is to find a decision-making policy that satisfies temporal constraints even in the presence of stochastic noise. _Markov decision processes (MDPs)_[26] are a prominent model to reason about such policies under stochastic uncertainty. The underlying decision problems are efficiently solvable, and probabilistic model checkers such as PRISM [22] and Storm[13] are well-equipped to synthesise policies that provably (and optimally) satisfy a given specification. However, a major shortcoming of MDPs is the assumption that the policy can depend on the precise state of a system. This assumption is unrealistic whenever the state of the system is only observable via sensors. _Partially observable MDPs (POMDPs)_ overcome this shortcoming, but policy synthesis for POMDPs and specifications such as _the probability to reach the exit is larger than 50%_ requires solving undecidable problems [23]. Nevertheless, in recent years, a variety of approaches have been successfully applied to challenging benchmarks, but these approaches also fail somewhat spectacularly on seemingly tiny problem instances. From a user perspective, it is hard to pick the right approach without detailed knowledge of the underlying methods. This paper sets out to develop a framework in which conceptually orthogonal approaches symbiotically alleviate each other's weaknesses and find policies that maximise, e.g., the expected reward before a target is reached. We show empirically that the combined approach can find compact policies achieving a significantly higher reward than the policies that either individual approach constructs.

_Belief exploration._ Several approaches for solving POMDPs use the notion of _beliefs_ [27]. The key idea is that each sequence of observations and actions induces a belief--a distribution over POMDP states that reflects the probability to be in a state conditioned on the observations. POMDP policies can decide optimally solely based on the belief. The evolution of beliefs can be captured by a fully observable, yet possibly infinite _belief MDP_. A practical approach (see the lower part of Fig. 1) is to unfold a finite fragment of this belief MDP and make its frontier absorbing.
This finite fragment can be analysed with off-the-shelf MDP model checkers. Its accuracy can be improved by using an arbitrary but fixed cut-off policy from the frontier onwards. Crucially, the probability to reach the target under such a policy can be efficiently pre-computed for all beliefs. This paper considers the belief exploration method from [8] realised in Storm[13].

_Policy search._ An orthogonal approach searches a (finite) space of policies [14, 24] and evaluates these policies by verifying the induced Markov chain. To ensure scalability, sets of policies must be efficiently analysed. However, policy spaces explode whenever they require memory. The open challenge is to adequately define the space of policies to search in. In this paper, we consider the policy-search method from [5] as implemented in Paynt[6], which explores spaces of finite-state controllers (FSCs), represented as deterministic Mealy machines [2], using a combination of abstraction-refinement, counterexamples (to prune sets of policies), and increasing a controller's memory; see the upper part of Fig. 1.

Figure 1: Schematic depiction of the symbiotic approach

_Our symbiotic approach._ In essence, our idea relies on the fact that a policy found via one approach can boost the other approach. The key observation is that such a policy is beneficial even when it is sub-optimal in terms of the objective at hand. Fig. 1 sketches the symbiotic approach. The FSCs \(F_{\mathcal{I}}\) obtained by policy search are used to guide the partial belief MDP to the target. Vice versa, the FSCs \(F_{\mathcal{B}}\) obtained from belief exploration are used to shrink the set of policies and to steer the abstraction. Our experimental evaluation, using a large set of POMDP benchmarks, reveals that (a) belief exploration can yield better FSCs (sometimes also faster) using FSCs \(F_{\mathcal{I}}\) from Paynt--even if the latter FSCs are far from optimal, (b) policy search can find much better FSCs when using FSCs from belief exploration, and (c) the FSCs from the symbiotic approach are superior in value to the ones obtained by the standalone approaches.

_Beyond exploration and policy search._ In this work, we focus on two powerful orthogonal methods from the set of belief-based and search-based methods. Alternatives exist. Exploration can also be done using a fixed set of beliefs [25]. Prominently, HSVI [18] and SARSOP [20] are belief-based policy synthesis approaches typically used for discounted properties. They also support undiscounted properties, but represent policies with \(\alpha\)-vectors. Bounded policy synthesis [29] uses a combination of belief-exploration and inductive synthesis over paths and addresses finite horizon reachability. \(\alpha\)-vector policies lead to more complex analysis downstream: the resulting policies must track the belief and do floating-point computations to select actions. For policy search, prominent alternatives are to search for randomised controllers via gradient descent [17] or via convex optimization [1, 19, 12]. Alternatively, FSCs can be extracted via deep reinforcement learning [9]. However, randomised policies limit predictability, which hampers testing and explainability. The area of programmatic reinforcement learning [28] combines inductive synthesis ideas with RL. While our empirical evaluation is method-specific, the lessons carry over to integrating other methods.

_Contributions._ The key contribution of this paper is the symbiosis of belief exploration [8] and policy search [5].
Though this seems natural, various technical obstacles had to be addressed, e.g., obtaining \(F_{\mathcal{B}}\) from the finite fragment of the belief MDP and the policies for its frontier, and developing an interplay between the exploration and search phases that minimises the overhead. The benefits of the symbiotic algorithm are manifold, as we show by a thorough empirical evaluation. It can solve POMDPs that cannot be tackled with either of the two approaches alone. It outputs FSCs that are superior in value (with relative improvements of up to 40%) as well as FSCs that are more succinct (with reductions of up to two orders of magnitude) with only a small penalty in their values. Additionally, the integration reduces the memory footprint compared to belief exploration by a factor of 4. In conclusion, the proposed symbiosis offers a powerful push-button, anytime synthesis algorithm producing, in the given time, superior and/or more succinct FSCs compared to the state-of-the-art methods.

## 2 Motivating Examples

We give a sample POMDP that is hard for the belief exploration, a POMDP that challenges the policy search approach, and indicate why a symbiotic approach overcomes this. A third sample POMDP is shown to be unsolvable by either approach alone but can be treated by the symbiotic one.

**A challenging POMDP for belief-based exploration.** Consider POMDP \(\mathcal{M}_{a}\) in Fig. 2(a). The objective is to minimise the expected number of steps to the target \(T_{a}\). An optimal policy is to always take action \(\alpha\), yielding 4 expected steps. An FSC realising this policy can be found by a policy search in under 1s.

_Belief MDPs._ States in the _belief MDP_ \(\mathcal{M}_{a}^{\mathcal{B}}\) are _beliefs_, probability distributions over POMDP states with equal observations. The initial belief is \(\{S\mapsto 1\}\). By taking action \(\alpha\), 'yellow' is observed and the belief becomes \(\{L\mapsto\frac{1}{2},\,R\mapsto\frac{1}{2}\}\). Closer inspection shows that the set of reachable beliefs is infinite, rendering \(\mathcal{M}_{a}^{\mathcal{B}}\) infinite. Belief exploration constructs a finite fragment \(\overline{\mathcal{M}_{a}^{\mathcal{B}}}\) by exploring \(\mathcal{M}_{a}^{\mathcal{B}}\) up to some depth while _cutting off_ the frontier states. From cut-off states, a shortcut is taken directly to the target. These shortcuts are heuristic over-approximations of the true number of expected steps from the cut-off state to the target. The finite MDP \(\overline{\mathcal{M}_{a}^{\mathcal{B}}}\) can be analysed using off-the-shelf tools, yielding the minimising policy \(\sigma_{\mathcal{B}}\) assigning to each belief state the optimal action.

Figure 2: (a) and (b) contain two POMDPs. Colours encode observations. Unlabelled transitions have probability 1. Omitted actions (e.g. \(\gamma,\delta\) in state \(B_{2}\)) execute a self-loop. (c) Markov chain induced by the minimising policy \(\sigma_{\mathcal{B}}\) in the finite abstraction \(\overline{\mathcal{M}_{a}^{\mathcal{B}}}\) of the POMDP from (a). In the rightmost state, policy \(\overline{F}\) is applied (cut-off), allowing the target to be reached in \(\rho\) steps.

_Admissible heuristics._ A simple way to over-approximate the minimal expected number of steps to the target is to use an arbitrary controller \(\overline{F}\) and use the expected number of steps under \(\overline{F}\). The latter is cheap if \(\overline{F}\) is compact, as detailed in Sec. 4.2.
Fig. 2(c) shows a Markov chain induced by \(\sigma_{\mathcal{B}}\) in \(\overline{\mathcal{M}_{a}^{\mathcal{B}}}\), where the belief \(\{L\mapsto\frac{7}{8},R\mapsto\frac{1}{8}\}\) is cut off using \(\overline{F}\). The belief exploration in Storm[8] unfolds 1000 states of \(\mathcal{M}_{a}^{\mathcal{B}}\) and finds controller \(\overline{F}\) that uniformly randomises over all actions in the rightmost state. The resulting sub-optimal controller \(F_{\mathcal{B}}\) reaches the target in \(\approx\)4.1 steps. Exploring only a few states suffices when replacing \(\overline{F}\) by a (not necessarily optimal) FSC provided by a policy search.

**A challenging POMDP for policy search.** Consider POMDP \(\mathcal{M}_{b}\) in Fig. 2(b). The objective is to minimise the expected number of steps to \(T_{b}\). Its 9-state belief MDP \(\mathcal{M}_{b}^{\mathcal{B}}\) is trivial for the belief-based method. Its optimal controller \(\sigma_{\mathcal{B}}\) first picks action \(\gamma\); on observing 'yellow' it plays \(\beta\) twice, otherwise it always picks \(\alpha\). This is realised by an FSC with 3 memory states. The inductive policy search in Paynt[5] explores families of FSCs of increasing complexity, i.e., of increasing memory size. It finds the optimal FSC after consulting about 20 billion candidate policies. This requires 545 model-checking queries; the optimal one is found after 105 queries, while the remaining queries prove that no better 3-state FSC exists.

_Reference policies._ The policy search is guided by a reference policy, in this case the fully observable MDP policy that picks the (senseless) action \(\delta\) in \(B_{1}\) first. Using policy \(\sigma_{\mathcal{B}}\)--obtained by the belief method--instead, \(\delta\) is never taken. As \(\sigma_{\mathcal{B}}\) picks a different action in each 'blue' state, mimicking this requires at least three memory states. Using \(\sigma_{\mathcal{B}}\) reduces the total number of required model-checking queries by a factor of ten; the optimal 3-state FSC is found after 23 queries.

**The potential of symbiosis.** To further exemplify the limitations of the two approaches and the potential of their symbiosis, we consider a synthetic POMDP, called Lanes+, combining a Lane model with larger variants of the POMDPs in Fig. 2; see Tab. 2 for the model statistics and Appendix C of [3] for the model description. We consider minimisation of the expected number of steps and a 15-minute timeout. The belief-based approach by Storm yields the value 18870. The policy search method by Paynt finds an FSC with 2 memory states achieving the value 8223. This sub-optimal FSC significantly improves the belief MDP approximation and enables Storm to find an FSC with value 6471. The symbiotic synthesis loop finds the optimal FSC with value 4805.

## 3 Preliminaries and Problem Statement

A (discrete) _distribution_ over a countable set \(A\) is a function \(\mu\colon A\to[0,1]\) s.t. \(\sum_{a}\mu(a)=1\). The set \(\operatorname{supp}(\mu)\coloneqq\{a\in A\mid\mu(a)>0\}\) is the _support_ of \(\mu\). The set \(Distr(A)\) contains all distributions over \(A\). We use Iverson bracket notation, where \([x]=1\) if the Boolean expression \(x\) evaluates to true and \([x]=0\) otherwise.

Definition 1 (MDP): A _Markov decision process (MDP)_ is a tuple \(M=(S,s_{0},Act,\mathcal{P})\) with a countable set \(S\) of states, an initial state \(s_{0}\in S\), a finite set \(Act\) of actions, and a partial transition function \(\mathcal{P}\colon S\times Act\nrightarrow Distr(S)\).
\(Act(s)\coloneqq\{\alpha\in Act\mid\mathcal{P}(s,\alpha)\neq\bot\}\) denotes the set of actions available in state \(s\in S\). An MDP with \(|Act(s)|=1\) for each \(s\in S\) is a _Markov chain (MC)_. Unless stated otherwise, we assume \(Act(s)=Act\) for each \(s\in S\) for conciseness. We denote \(\mathcal{P}(s,\alpha,s^{\prime})\coloneqq\mathcal{P}(s,\alpha)(s^{\prime})\). A (finite) _path_ of an MDP \(M\) is a sequence \(\pi=s_{0}\alpha_{0}s_{1}\alpha_{1}\ldots s_{n}\) where \(\mathcal{P}(s_{i},\alpha_{i},s_{i+1})>0\) for \(0\leq i<n\). We use \(last(\pi)\) to denote the last state of path \(\pi\). Let \(Paths^{M}\) denote the set of all finite paths of \(M\). State \(s\) is absorbing if \(\mathrm{supp}(\mathcal{P}(s,\alpha))=\{s\}\) for all \(\alpha\in Act\).

Definition 2 (POMDP): A _partially observable MDP (POMDP)_ is a tuple \(\mathcal{M}=(M,Z,O)\), where \(M\) is the underlying MDP, \(Z\) is a finite set of observations and \(O\colon S\to Z\) is a (deterministic) observation function.

For POMDP \(\mathcal{M}\) with underlying MDP \(M\), an _observation trace_ of path \(\pi=s_{0}\alpha_{0}s_{1}\alpha_{1}\ldots s_{n}\) is a sequence \(O(\pi)\coloneqq O(s_{0})\alpha_{0}O(s_{1})\alpha_{1}\ldots O(s_{n})\). Every MDP can be interpreted as a POMDP with \(Z=S\) and \(O(s)=s\) for all \(s\in S\). A (deterministic) _policy_ is a function \(\sigma\colon Paths^{M}\to Act\). Policy \(\sigma\) is _memoryless_ if \(last(\pi)=last(\pi^{\prime})\Longrightarrow\sigma(\pi)=\sigma(\pi^{\prime})\) for all \(\pi,\pi^{\prime}\in Paths^{M}\). A memoryless policy \(\sigma\) maps a state \(s\in S\) to action \(\sigma(s)\). Policy \(\sigma\) is _observation-based_ if \(O(\pi)=O(\pi^{\prime})\Longrightarrow\sigma(\pi)=\sigma(\pi^{\prime})\) for all \(\pi,\pi^{\prime}\in Paths^{M}\). For POMDPs, we always consider observation-based policies. We denote by \(\Sigma_{obs}\) the set of all observation-based policies. A policy \(\sigma\in\Sigma_{obs}\) induces the MC \(\mathcal{M}^{\sigma}\). We consider indefinite-horizon reachability or expected total reward properties. Formally, let \(M=(S,s_{0},Act,\mathcal{P})\) be an MC, and let \(T\subseteq S\) be a set of _target states_. \(\mathbb{P}^{M}\left[s\models\Diamond T\right]\) denotes the probability of reaching \(T\) from state \(s\in S\). We use \(\mathbb{P}^{M}\left[\Diamond T\right]\) to denote \(\mathbb{P}^{M}\left[s_{0}\models\Diamond T\right]\) and omit the superscript if the MC is clear from context. Now assume POMDP \(\mathcal{M}\) with underlying MDP \(M=(S,s_{0},Act,\mathcal{P})\), and a set \(T\subseteq S\) of absorbing target states. Without loss of generality, we assume that the target states are associated with the unique observation \(z^{T}\in Z\), i.e. \(s\in T\) iff \(O(s)=z^{T}\). For a POMDP \(\mathcal{M}\) and \(T\subseteq S\), the _maximal reachability probability_ of \(T\) for state \(s\in S\) in \(\mathcal{M}\) is \(\mathbb{P}^{\mathcal{M}}_{\max}\left[s\models\Diamond T\right]\coloneqq\sup_{\sigma\in\Sigma_{obs}}\mathbb{P}^{\mathcal{M}^{\sigma}}[s\models\Diamond T]\). The minimal reachability probability \(\mathbb{P}^{\mathcal{M}}_{\min}\left[s\models\Diamond T\right]\) is defined analogously. Finite-state controllers are automata that compactly encode policies.

Definition 3 (FSC): A _finite-state controller (FSC)_ is a tuple \(F=(N,n_{0},\gamma,\delta)\), with a finite set \(N\) of _nodes_, the _initial node_ \(n_{0}\in N\), the _action function_ \(\gamma\colon N\times Z\to Act\) and the _update function_ \(\delta\colon N\times Z\times Z\to N\).
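To make Definition 3 concrete, the following sketch represents an FSC with explicit action and update tables and replays it against an observation trace. This is an illustration of the definition only; it is not tied to the Storm or Paynt APIs, and all identifiers are our own.

```python
# A minimal sketch of an FSC (Definition 3) with explicit gamma/delta tables.
from dataclasses import dataclass
from typing import Dict, Iterable, Iterator, Tuple

Node, Obs, Act = int, str, str

@dataclass
class FSC:
    n0: Node                                  # initial node
    gamma: Dict[Tuple[Node, Obs], Act]        # action function gamma(n, z)
    delta: Dict[Tuple[Node, Obs, Obs], Node]  # update function delta(n, z, z')

    def run(self, trace: Iterable[Obs]) -> Iterator[Act]:
        """Replay an observation trace, yielding the action taken per step."""
        trace = iter(trace)
        n, z = self.n0, next(trace)
        for z_next in trace:
            yield self.gamma[(n, z)]          # act on the prior observation
            n = self.delta[(n, z, z_next)]    # evolve on the posterior one
            z = z_next

# a 1-FSC, i.e. a memoryless policy, that always plays 'alpha':
obs = ("init", "yellow")
F = FSC(n0=0,
        gamma={(0, z): "alpha" for z in obs},
        delta={(0, z, z2): 0 for z in obs for z2 in obs})
print(list(F.run(["init", "yellow", "yellow"])))  # ['alpha', 'alpha']
```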
A _\(k\)-FSC_ is an FSC with \(|N|=k\). If \(k=1\), the FSC encodes a memoryless policy. We use \(\mathcal{F}^{\mathcal{M}}\) (\(\mathcal{F}^{\mathcal{M}}_{k}\)) to denote the family of all (\(k\)-)FSCs for POMDP \(\mathcal{M}\). For a POMDP in state \(s\), an agent receives observation \(z=O(s)\). An agent following an FSC \(F\) executes action \(\alpha=\gamma(n,z)\) associated with the current node \(n\) and the current (prior) observation \(z\). The POMDP state is updated accordingly to some \(s^{\prime}\) with \(\mathcal{P}(s,\alpha,s^{\prime})>0\). Based on the next (posterior) observation \(z^{\prime}=O(s^{\prime})\), the FSC evolves to node \(n^{\prime}=\delta(n,z,z^{\prime})\). The _induced MC_ for FSC \(F\) is \(\mathcal{M}^{F}=(S\times N,(s_{0},n_{0}),\{\alpha\},\mathcal{P}^{F})\), where for all \((s,n),(s^{\prime},n^{\prime})\in S\times N\) we have \[\mathcal{P}^{F}\left((s,n),\alpha,(s^{\prime},n^{\prime})\right)=[n^{\prime}=\delta\left(n,O(s),O(s^{\prime})\right)]\cdot\mathcal{P}(s,\gamma(n,O(s)),s^{\prime}).\] We emphasise that for MDPs with infinite state space and POMDPs, an FSC realising the maximal reachability probability generally does not exist. For FSC \(F\in\mathcal{F}^{\mathcal{M}}\) with the set \(N\) of memory nodes, let \(\mathbb{P}^{\mathcal{M}^{F}}[(s,n)\models\Diamond T]\coloneqq\mathbb{P}^{\mathcal{M}^{F}}\left[(s,n)\models\Diamond(T\times N)\right]\) denote the probability of reaching target states \(T\) from state \((s,n)\in S\times N\). Analogously, \(\mathbb{P}^{\mathcal{M}^{F}}[\Diamond T]\coloneqq\mathbb{P}^{\mathcal{M}^{F}}\left[\Diamond(T\times N)\right]\) denotes the probability of reaching target states \(T\) in the MC \(\mathcal{M}^{F}\) induced on \(\mathcal{M}\) by \(F\). Problem statement. The classical synthesis problem [23] for POMDPs asks: given POMDP \(\mathcal{M}\), a set \(T\) of targets, and a threshold \(\lambda\), find an FSC \(F\) such that \(\mathbb{P}^{\mathcal{M}^{F}}[\Diamond T]\geq\lambda\), if one exists. We take a more practical stance and aim instead to optimise the value \(\mathbb{P}^{\mathcal{M}^{F}}[\Diamond T]\) in an anytime fashion: the faster we can find FSCs with a high value, the better. Remark 1: Variants of the maximising synthesis problem for the expected total reward and minimisation are defined analogously. For conciseness, in this paper, we always assume that we want to maximise the value. In addition to the value of the FSC \(F\), another key characteristic of the controller is its _size_, which we treat as a secondary objective and discuss in detail in Sec. 6. ## 4 FSCs for and from Belief Exploration We consider _belief exploration_ as described in [8]. A schematic overview is given in the lower part of Fig. 1. We recap the key concepts of belief exploration. This section explains two contributions: we discuss how arbitrary FSCs are included and present an approach to export the associated POMDP policies as FSCs. ### 4.1 Belief Exploration With Explicit FSC Construction Finite-state controllers for a POMDP can be obtained by analysing the (fully observable) _belief MDP_[27]. The state space of this MDP consists of _beliefs_: probability distributions over states of the POMDP \(\mathcal{M}\) having the same observation. Let \(S_{z}\coloneqq\{s\in S\mid O(s)=z\}\) denote the set of all states of \(\mathcal{M}\) with observation \(z\in Z\).
Let the set of all beliefs \(\mathcal{B}_{\mathcal{M}}\coloneqq\bigcup_{z\in Z}Distr(S_{z})\) and denote for \(b\in\mathcal{B}_{\mathcal{M}}\) by \(O(b)\in Z\) the unique observation \(O(s)\) of any \(s\in\operatorname{supp}(b)\). In a belief \(b\), taking action \(\alpha\) yields an updated belief as follows: let \(\mathcal{P}(b,\alpha,z^{\prime})\coloneqq\sum_{s\in S_{O(b)}}b(s)\cdot\sum_{s^{\prime}\in S_{z^{\prime}}}\mathcal{P}(s,\alpha,s^{\prime})\) denote the probability of observing \(z^{\prime}\in Z\) upon taking action \(\alpha\in Act\) in belief \(b\in\mathcal{B}_{\mathcal{M}}\). If \(\mathcal{P}(b,\alpha,z^{\prime})>0\), the corresponding successor belief \(b^{\prime}=\llbracket b|\alpha,z^{\prime}\rrbracket\) with \(O(b^{\prime})=z^{\prime}\) is defined component-wise as \[\llbracket b|\alpha,z^{\prime}\rrbracket(s^{\prime})\coloneqq\frac{\sum_{s\in S_{O(b)}}b(s)\cdot\mathcal{P}(s,\alpha,s^{\prime})}{\mathcal{P}(b,\alpha,z^{\prime})}\] for all \(s^{\prime}\in S_{z^{\prime}}\). Otherwise, \(\llbracket b|\alpha,z^{\prime}\rrbracket\) is undefined. Definition 4 (Belief MDP): The _belief MDP_ of POMDP \(\mathcal{M}\) is the MDP \(\mathcal{M}^{\mathcal{B}}=(\mathcal{B}_{\mathcal{M}},b_{0},Act,\mathcal{P}^{\mathcal{B}})\), with initial belief \(b_{0}\coloneqq\{s_{0}\mapsto 1\}\) and transition function \(\mathcal{P}^{\mathcal{B}}(b,\alpha,b^{\prime})\coloneqq\llbracket b^{\prime}=\llbracket b|\alpha,z^{\prime}\rrbracket\rrbracket\cdot\mathcal{P}(b,\alpha,z^{\prime})\) where \(z^{\prime}=O(b^{\prime})\). The belief MDP captures the behaviour of its POMDP. It can be unfolded by starting in the initial belief and computing all successor beliefs. Deriving FSCs from finite belief MDPs. Let \(T^{\mathcal{B}}\coloneqq\left\{b\in\mathcal{B}_{\mathcal{M}}\mid O(b)=z^{T}\right\}\) denote the set of _target beliefs_. If the reachable state space of the belief MDP \(\mathcal{M}^{\mathcal{B}}\) is finite, e.g. because the POMDP is acyclic, standard model checking techniques can be applied to compute the memoryless policy \(\sigma_{\mathcal{B}}\colon\mathcal{B}_{\mathcal{M}}\to Act\) that selects in each belief state \(b\in\mathcal{B}_{\mathcal{M}}\) the action that maximises \(\mathbb{P}\left[b\models\Diamond T^{\mathcal{B}}\right]\)4. We can translate the deterministic, memoryless policy \(\sigma_{\mathcal{B}}\) into the corresponding FSC \(F_{\mathcal{B}}=(\mathcal{B}_{\mathcal{M}},b_{0},\gamma,\delta)\) with action function \(\gamma(b,z)=\sigma_{\mathcal{B}}(b)\) and update function \(\delta(b,z,z^{\prime})=\llbracket b|\sigma_{\mathcal{B}}(b),z^{\prime}\rrbracket\) for all \(z,z^{\prime}\in Z\).5 Footnote 4: Memoryless policies suffice to maximise the value in a fully observable MDP [26]. Footnote 5: The assignments of missing combinations where \(z\neq O(b)\) are irrelevant. Handling large and infinite belief MDPs. In case the reachable state space of the belief MDP \(\mathcal{M}^{\mathcal{B}}\) is infinite or too large for a complete unfolding, a finite approximation \(\overline{\mathcal{M}^{\mathcal{B}}}\) is used instead [8]. Assuming \(\mathcal{M}^{\mathcal{B}}\) is unfolded up to some depth, let \(\mathcal{E}\subset\mathcal{B}_{\mathcal{M}}\) denote the set of explored beliefs and let \(\mathcal{U}\subset\mathcal{B}_{\mathcal{M}}\setminus\mathcal{E}\) denote the _frontier_: the set of unexplored beliefs reachable from \(\mathcal{E}\) in one step. To complete the finite abstraction, we require handling of the frontier beliefs.
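Using the same toy POMDP representation as in the sketch after Definition 3, the belief update \(\llbracket b|\alpha,z^{\prime}\rrbracket\) can be written directly from the formula above. This is a minimal illustration of ours, not the unfolding engine of [8].

```python
def belief_update(pomdp, b, a, z_next):
    """Return (P(b, a, z'), successor belief [[b | a, z']]) for a dict belief `b`."""
    unnorm = {}
    for s, b_s in b.items():
        for s2, p in pomdp.P[s][a].items():
            if pomdp.O[s2] == z_next:
                unnorm[s2] = unnorm.get(s2, 0.0) + b_s * p
    prob = sum(unnorm.values())            # probability of observing z'
    if prob == 0.0:
        return 0.0, None                   # [[b | a, z']] is undefined
    return prob, {s2: w / prob for s2, w in unnorm.items()}
```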
The idea is to use for each \(b\in\mathcal{U}\) a _cut-off value_ \(\underline{V}(b)\): an under-approximation of the maximal reachability probability \(\mathbb{P}_{\max}^{\mathcal{M}^{\mathcal{B}}}\left[b\models\Diamond T^{\mathcal{B}}\right]\) for \(b\) in the belief MDP. We explain how to compute cut-off values systematically given an FSC in Sec. 4.2. Ultimately, we define a finite MDP \(\overline{\mathcal{M}^{\mathcal{B}}}=(\mathcal{E}\cup\mathcal{U}\cup\{b_{\top},b_{\bot}\},b_{0},Act,\overline{\mathcal{P}^{\mathcal{B}}})\) with the transition function: \(\overline{\mathcal{P}^{\mathcal{B}}}(b,\alpha)\coloneqq\mathcal{P}^{\mathcal{B}}(b,\alpha)\) for explored beliefs \(b\in\mathcal{E}\) and all \(\alpha\in Act\), and \(\overline{\mathcal{P}^{\mathcal{B}}}(b,\alpha)\coloneqq\{b_{\top}\mapsto\underline{V}(b),b_{\bot}\mapsto 1-\underline{V}(b)\}\) for frontier beliefs \(b\in\mathcal{U}\) and all \(\alpha\in Act\), where \(b_{\top}\) and \(b_{\bot}\) are fresh sink states, i.e. \(\overline{\mathcal{P}^{\mathcal{B}}}(b_{\top},\alpha)\coloneqq\{b_{\top}\mapsto 1\}\) and \(\overline{\mathcal{P}^{\mathcal{B}}}(b_{\bot},\alpha)\coloneqq\{b_{\bot}\mapsto 1\}\) for all \(\alpha\in Act\). The reachable state space of \(\overline{\mathcal{M}^{\mathcal{B}}}\) is finite, enabling its automated analysis; since our method to compute cut-off values emulates an FSC, a policy maximising \(\mathbb{P}_{\max}^{\overline{\mathcal{M}^{\mathcal{B}}}}\left[\Diamond(T^{\mathcal{B}}\cup\{b_{\top}\})\right]\) induces an FSC for the original POMDP \(\mathcal{M}\). We discuss how to obtain this FSC in Sec. 4.3. ### 4.2 Using FSCs for Cut-off Values A crucial aspect when applying the belief exploration with cut-offs is the choice of suitable cut-off values. The closer the cut-off value is to the actual optimum in a belief, the better the approximation we obtain. In particular, if the cut-off values coincide with the optimal value, cutting off the initial state is optimal. However, finding optimal values is as hard as solving the original POMDP. We consider _under-approximative value functions_ induced by applying _any_6 FSC to the POMDP and lifting the results to the belief MDP. The better the FSC, the better the cut-off value. We generalise belief exploration with cut-offs such that the approach supports arbitrary sets of FSCs with additional flexibility. Footnote 6: We remark that [8] considers memoryless FSCs only. Let \(F_{\mathcal{I}}\in\mathcal{F}^{\mathcal{M}}\) be an arbitrary, but fixed FSC for POMDP \(\mathcal{M}\). Let \(p_{s,n}\coloneqq\mathbb{P}^{\mathcal{M}^{F_{\mathcal{I}}}}\left[(s,n)\models\Diamond T\right]\) for state \((s,n)\in S\times N\) in the corresponding induced MC. For fixed \(n\in N\), \(V(b,n)\coloneqq\sum_{s\in S_{O(b)}}b(s)\cdot p_{s,n}\) denotes the cut-off value for belief \(b\) and memory node \(n\). It corresponds to the probability of reaching a target state in \(\mathcal{M}^{F_{\mathcal{I}}}\) when starting in memory node \(n\in N\) and state \(s\in S\) according to the probability distribution \(b\). We define the overall cut-off value for \(b\) induced by \(F_{\mathcal{I}}\) as \(\underline{V}(b)\coloneqq\max_{n\in N}V(b,n)\). It follows straightforwardly that \(\underline{V}(b)\leq\mathbb{P}_{\max}^{\mathcal{M}^{\mathcal{B}}}\left[b\models\Diamond T^{\mathcal{B}}\right]\). As values \(p_{s,n}\) only need to be computed once, computing \(\underline{V}(b)\) for a given belief \(b\) is relatively simple.
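In code, once the values \(p_{s,n}\) have been model-checked on the induced MC, the cut-off value of a belief amounts to a few dictionary lookups. The sketch below is our own illustration; `p` is assumed to map pairs \((s,n)\) to \(p_{s,n}\).

```python
def cutoff_value(b, nodes, p):
    """V_under(b) = max over memory nodes n of V(b, n) = sum_s b(s) * p[(s, n)]."""
    return max(sum(b_s * p[(s, n)] for s, b_s in b.items()) for n in nodes)

def best_node(b, nodes, p):
    """The memory node attaining the maximum; used by the update function in Def. 5."""
    return max(nodes, key=lambda n: sum(b_s * p[(s, n)] for s, b_s in b.items()))
```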
However, the complexity of the FSC-based cut-off approach depends on the size of the induced MC. Therefore, it is essential that the FSCs used to compute cut-off values are concise. ### 4.3 Extracting FSCs from Belief Exploration Model checking the finite approximation MDP \(\overline{\mathcal{M}^{\mathcal{B}}}\) with cut-off values induced by an FSC \(F_{\mathcal{I}}\) yields a maximising memoryless policy \(\sigma_{\mathcal{B}}\). Our goal is to represent this policy as an FSC \(F_{\mathcal{B}}\). We construct \(F_{\mathcal{B}}\) by considering both \(F_{\mathcal{I}}\) and the necessary memory nodes for each explored belief \(b\in\mathcal{E}\). Concretely, for each explored belief, we introduce a corresponding memory node. In each such node, the action \(\sigma_{\mathcal{B}}(b)\) is selected. For the memory update, we distinguish between two cases based on the next belief after executing \(\sigma_{\mathcal{B}}(b)\) in \(\overline{\mathcal{M}^{\mathcal{B}}}\). If for observation \(z^{\prime}\in Z\), the successor belief \(b^{\prime}=\llbracket b|\sigma_{\mathcal{B}}(b),z^{\prime}\rrbracket\in\mathcal{E}\), the memory is updated to the corresponding node. Otherwise, \(b^{\prime}\in\mathcal{U}\) holds, i.e., the successor is part of the frontier. The memory is then updated to the memory node \(n\) of FSC \(F_{\mathcal{I}}\) that maximises the cut-off value \(V(b^{\prime},n)\). This corresponds to the notion that if the frontier is encountered, we switch from acting according to policy \(\sigma_{\mathcal{B}}\) to following \(F_{\mathcal{I}}\) (initialised in the correct memory node). This is formalised as: Definition 5 (Belief-based FSC with cut-offs): Let \(F_{\mathcal{I}}=(N,n_{0},\gamma_{\mathcal{I}},\delta_{\mathcal{I}})\) and \(\overline{\mathcal{M}^{\mathcal{B}}}\) as before. The _belief-based FSC with cut-offs_ is \(F_{\mathcal{B}}=(\mathcal{E}\cup N,b_{0},\gamma,\delta)\) with action function \(\gamma(b,z)=\sigma_{\mathcal{B}}(b)\) for \(b\in\mathcal{E}\) and \(\gamma(n,z)=\gamma_{\mathcal{I}}(n,z)\) for \(n\in N\) and arbitrary \(z\in Z\). The update function \(\delta\) is defined for all \(z,z^{\prime}\in Z\) by \(\delta(n,z,z^{\prime})=\delta_{\mathcal{I}}(n,z,z^{\prime})\) if \(n\in N\), and for \(b\in\mathcal{E}\) with \(b^{\prime}=\llbracket b|\sigma_{\mathcal{B}}(b),z^{\prime}\rrbracket\) by: \[\delta(b,z,z^{\prime})=b^{\prime}\text{ if }b^{\prime}\in\mathcal{E},\text{ and }\delta(b,z,z^{\prime})=\operatorname*{argmax}_{n\in N}V(b^{\prime},n)\text{ otherwise.}\] ## 5 Accelerated Inductive Synthesis In this section, we consider inductive synthesis [5], an approach for finding controllers for POMDPs in a set of FSCs. We briefly recap the main idea, then explain how to use a reference policy. Finally, we introduce and discuss a novel search space for the controllers that we consider in this paper in detail. ### 5.1 Inductive Synthesis with \(k\)-FSCs In the scope of this paper, inductive synthesis [4] considers a finite family of FSCs \(\mathcal{F}_{k}^{\mathcal{M}}\) of \(k\)-FSCs with memory nodes \(N=\{n_{0},\ldots,n_{k-1}\}\), and the family \(\mathcal{M}^{\mathcal{F}_{k}^{\mathcal{M}}}\coloneqq\{\mathcal{M}^{F}\mid F\in\mathcal{F}_{k}^{\mathcal{M}}\}\) of associated induced MCs. The states for each MC are tuples \((s,n)\in S\times N\). For conciseness, we only discuss the abstraction-refinement framework [10] within the inductive synthesis loop. The overall image is as in Fig. 1.
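Before turning to the abstraction, a rough count shows why the family \(\mathcal{F}_{k}^{\mathcal{M}}\) can never be enumerated one member at a time: by Definition 3 there are \(|Act|^{k\cdot|Z|}\) possible action functions and \(k^{k\cdot|Z|^{2}}\) possible posterior-aware update functions. The helper below is a back-of-the-envelope illustration of ours, not a computation performed by the tools.

```python
def family_size(num_obs, num_act, k, posterior_aware=True):
    """Number of k-FSCs: |Act|^(k*|Z|) action functions times k^(k*|Z|^d) updates."""
    gamma_choices = num_act ** (k * num_obs)
    delta_domain = k * num_obs * (num_obs if posterior_aware else 1)
    return gamma_choices * k ** delta_domain

# Even a tiny POMDP with 5 observations and 4 actions yields an astronomically
# large family of 3-FSCs, which is why the search relies on abstraction instead:
print(f"{family_size(num_obs=5, num_act=4, k=3):.2e}")
```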
Informally, the _MDP abstraction_ of the family \(\mathcal{M}^{\mathcal{F}_{k}^{\mathcal{M}}}\) of MCs is an MDP \(\mathsf{MDP}(\mathcal{F}_{k}^{\mathcal{M}})\) with the set \(S\times N\) of states such that, if some MC \(M\in\mathcal{M}^{\mathcal{F}_{k}^{\mathcal{M}}}\) executes action \(\alpha\) in state \((s,n)\in S\times N\), then this action (with the same effect) is also enabled in state \((s,n)\) of \(\mathsf{MDP}(\mathcal{F}_{k}^{\mathcal{M}})\). Essentially, \(\mathsf{MDP}(\mathcal{F}_{k}^{\mathcal{M}})\) over-approximates the behaviour of all the MCs in the family \(\mathcal{M}^{\mathcal{F}_{k}^{\mathcal{M}}}\): it simulates an arbitrary family member in every step, but it may switch between steps.7 Footnote 7: The MDP is a game-based abstraction [21] of the all-in-one MC [11]. Definition 6: MDP abstraction for POMDP \(\mathcal{M}\) and family \(\mathcal{F}_{k}^{\mathcal{M}}=\{F_{1},\dots,F_{m}\}\) of \(k\)-FSCs is the MDP \(\mathsf{MDP}(\mathcal{F}_{k}^{\mathcal{M}})\coloneqq\big{(}S\times N,(s_{0},n_{0}),\{1,\dots,m\},\mathcal{P}^{\mathcal{F}_{k}^{\mathcal{M}}}\big{)}\) with \[\mathcal{P}^{\mathcal{F}_{k}^{\mathcal{M}}}((s,n),i)=\mathcal{P}^{F_{i}}((s,n),\alpha).\] While this MDP has \(m\) actions, practically, many actions coincide. Below, we see how to utilise the structure of the FSCs. Here, we finish by observing that the MDP is a proper abstraction: Lemma 1: _[_10_]_ _For all \(F\in\mathcal{F}_{k}^{\mathcal{M}}\), \(\mathbb{P}_{\min}^{\mathsf{MDP}(\mathcal{F}_{k}^{\mathcal{M}})}\left[\Diamond T\right]\leq\mathbb{P}^{\mathcal{M}^{F}}\left[\Diamond T\right]\leq\mathbb{P}_{\max}^{\mathsf{MDP}(\mathcal{F}_{k}^{\mathcal{M}})}\left[\Diamond T\right]\)._ With that result, we can naturally start with the set of all \(k\)-FSCs and search through this family by selecting suitable subsets [10]. Since the number \(k\) of memory nodes necessary is not known in advance, one can iteratively explore the sequence \(\mathcal{F}_{1}^{\mathcal{M}},\mathcal{F}_{2}^{\mathcal{M}},\dots\) of families of FSCs of increasing complexity. ### 5.2 Using Reference Policies to Accelerate Inductive Synthesis Consider the synthesis process of the optimal \(k\)-FSC \(F\in\mathcal{F}_{k}^{\mathcal{M}}\) for POMDP \(\mathcal{M}\). To accelerate the search for \(F\) within this family, we consider a reference policy, e.g., a policy \(\sigma_{\mathcal{B}}\) extracted from an (approximation of the) belief MDP, and shrink the FSC family. For each observation \(z\in Z\), we collect the set \(Act[\sigma_{\mathcal{B}}](z)\coloneqq\{\sigma_{\mathcal{B}}(b)\mid b\in\mathcal{B}_{\mathcal{M}},O(b)=z\}\) of actions that were selected by \(\sigma_{\mathcal{B}}\) in beliefs with observation \(z\). The set \(Act[\sigma_{\mathcal{B}}](z)\) contains the actions used by the reference policy when in observation \(z\). We focus the search on these actions by constructing a subset of FSCs \(\{(N,n_{0},\gamma,\delta)\in\mathcal{F}_{k}^{\mathcal{M}}\mid\forall n\in N,z\in Z.\gamma(n,z)\in Act[\sigma_{\mathcal{B}}](z)\}\). Restricting the action selection may exclude the optimal \(k\)-FSC. It also does not guarantee that the optimal FSC in the restricted family achieves the same value as the reference policy \(\sigma_{\mathcal{B}}\), since \(\sigma_{\mathcal{B}}\) may have more memory nodes. We first search the restricted space of FSCs before searching the complete space. This also accelerates the search: The earlier a good policy is found, the easier it is to discard other candidates (because they are provably not optimal).
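The restriction by a reference policy is equally simple to state in code. Given the explored beliefs and the policy \(\sigma_{\mathcal{B}}\), the sketch below (our own, with hypothetical helper names) collects the sets \(Act[\sigma_{\mathcal{B}}](z)\) and derives the memory bound \(k=\max_{z\in Z}|Act[\sigma_{\mathcal{B}}](z)|\) discussed in the next paragraph.

```python
def restricted_actions(explored_beliefs, sigma_B, obs_of):
    """Collect Act[sigma_B](z): the actions the reference policy uses per observation."""
    act = {}
    for b in explored_beliefs:
        act.setdefault(obs_of(b), set()).add(sigma_B(b))
    return act

def memory_lower_bound(act):
    """k = max_z |Act[sigma_B](z)|: enough memory to enable all reference actions."""
    return max(len(actions) for actions in act.values())
```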
Furthermore, in case the algorithm terminates earlier (notice the anytime aspect of our problem statement), we are more likely to have found a reasonable policy. Additionally, we could use sets \(Act[\sigma_{\mathcal{B}}]\) to determine with which \(k\) to search. If in some observation \(z\in Z\) the belief policy \(\sigma_{\mathcal{B}}\) uses \(|Act[\sigma_{\mathcal{B}}](z)|\) distinct actions, then in order to enable the use of all of these actions, we require at least \(k=\max_{z\in Z}|Act[\sigma_{\mathcal{B}}](z)|\) memory states. However, this may lead to families that are too large and thus we use a more refined view discussed below. ### 5.3 Inductive Synthesis with Adequate FSCs In this section, we discuss the set of candidate FSCs in more detail. In particular, we take a more refined look at the families that we consider. _More granular FSCs._ We consider memory models [5] that describe per-observation how much memory may be used: Definition 7 (\(\mu\)-Fsc): A _memory model_ for POMDP \(\mathcal{M}\) is a function \(\mu\colon Z\to\mathbb{N}\). Let \(k=\max_{z\in Z}\mu(z)\). The \(k\)-FSC \(F\in\mathcal{F}_{k}^{\mathcal{M}}\) with nodes \(N=\{n_{0},\ldots,n_{k-1}\}\) is a \(\mu\)-FSC iff for all \(z\in Z\) and for all \(i\geq\mu(z)\) it holds: \(\gamma(n_{i},z)=\gamma(n_{0},z)\) and \(\delta(n_{i},z,z^{\prime})=\delta(n_{0},z,z^{\prime})\) for any \(z^{\prime}\in Z\). \(\mathcal{F}_{\mu}^{\mathcal{M}}\) denotes the family of all \(\mu\)-FSCs. Essentially, memory model \(\mu\) dictates that for prior observation \(z\) only \(\mu(z)\) memory nodes are utilised, while the rest behave exactly as the default memory node \(n_{0}\). Using memory model \(\mu\) with \(\mu(z)<k\) for some observations \(z\in Z\) greatly reduces the number of candidate controllers. For example, if \(|S_{z}|=1\) for some \(z\in Z\), then upon reaching this state, the history becomes irrelevant. It is thus sufficient to set \(\mu(z)=1\) (for the specifications in this paper). It also significantly reduces the size of the abstraction, see Appendix A of [3]. _Posterior-aware or posterior-unaware._ The technique outlined in [5] considers _posterior-unaware FSCs_[2]. An FSC with update function \(\delta\) is posterior-unaware if the posterior observation is not taken into account when updating the memory node of the FSC, i.e. \(\delta(n,z,z^{\prime})=\delta(n,z,z^{\prime\prime})\) for all \(n\in N,z,z^{\prime},z^{\prime\prime}\in Z\). This restriction reduces the policy space and thus the MDP abstraction \(\mathsf{MDP}(\mathcal{F}_{k}^{\mathcal{M}})\). On the other hand, general (posterior-aware) FSCs can utilise information about the next observation to make an informed decision about the next memory node. Figure 3: (a) A POMDP where colours and capital letters encode observations; unlabelled transitions have probability \(1/2\); omitted actions (e.g. action \(\beta\) in the initial state) are self-loops; the objective is to minimise the expected number of steps to reach state \(G\). (b) The optimal posterior-aware 2-FSC. As a result, fewer memory nodes are needed to encode complex policies. Consider Fig. 3(a) which depicts a simple POMDP. First, notice that in yellow states \(Y_{i}\) we want to be able to execute two different actions, implying that we need at least two memory nodes to distinguish between the two states, and the same is true for the blue states \(B_{i}\).
Second, notice that in each state the visible action always leads to states having different observations, implying that the posterior observation \(z^{\prime}\) is crucial for the optimal decision making. If \(z^{\prime}\) is ignored, it is impossible to optimally update the memory node. Fig. 3(b) depicts the optimal posterior-aware 2-FSC, which reaches the target within 12 steps in expectation. The optimal posterior-unaware FSC has at least 4 memory nodes and the optimal posterior-unaware 2-FSC uses 14 steps. MDP abstraction. Def. 6 is overly simplified for the efficient and precise creation and analysis of MDP abstractions. In Appendix A of [3], we present the construction for general, posterior-aware FSCs including memory models. ## 6 Integrating Belief Exploration with Inductive Synthesis We clarify the symbiotic approach from Fig. 1 and review FSC sizes. **Symbiosis by closing the loop** Section 4 shows the potential to improve belief exploration using FSCs, e.g., obtained from an inductive synthesis loop, whereas Sec. 5 shows the potential to improve inductive synthesis using policies from, e.g., belief exploration. A natural next step is to use improved inductive synthesis for belief exploration and improved belief exploration for inductive synthesis, i.e., to alternate between both techniques. This section briefly clarifies the symbiotic approach from Fig. 1 using Alg. 1. We iterate until a global timeout \(t\): in each iteration, we make both controllers available to the user as soon as they are computed (Alg. 1, l. 13). We start in the inductive mode (l. 3-8), where we initially consider the 1-FSCs represented in \(\mathcal{F}_{\mu}^{\mathcal{M}}\). Method search (l. 8) investigates \(\mathcal{F}\) and outputs the new maximising FSC \(F_{\mathcal{I}}\) (if it exists). If the timeout \(t_{\mathcal{I}}\) interrupts the synthesis process, the method additionally returns yet unexplored parameter assignments. If \(\mathcal{F}\) is fully explored within the timeout \(t_{\mathcal{I}}\) (l. 4), we increase \(k\) and repeat the process. After the timeout \(t_{\mathcal{I}}\), we run belief exploration explore for \(t_{\mathcal{B}}\) seconds, where we use \(F_{\mathcal{I}}\) as backup controllers (l. 9). After the timeout \(t_{\mathcal{B}}\) (exploration will continue from a stored configuration in the next belief phase), we use \(F_{\mathcal{I}}\) to obtain cut-off values at unexplored states, compute the optimal policy \(\sigma^{\mathcal{M}^{\mathcal{B}}}\) (see Sec. 4) and extract the FSC \(F_{\mathcal{B}}\) which incorporates \(F_{\mathcal{I}}\). Before we continue the search, we check whether the belief-based FSC is better and whether that FSC gives any reason to update the memory model (l. 10). If so, we update \(\mu\) and reset \(\mathcal{F}\) (l. 11-12). **The size of an FSC** We have considered several sub-classes of FSCs and wish to compare the sizes of these controllers. For FSC \(F=(N,n_{0},\gamma,\delta)\), we define its size \(size(F)\coloneqq size(\gamma)+size(\delta)\) as the memory required to encode functions \(\gamma\) and \(\delta\). Encoding \(\gamma\colon N\times Z\to Act\) of a general \(k\)-FSC requires \(size(\gamma)=\sum_{n\in N}\sum_{z\in Z}1=k\cdot|Z|\) memory. Encoding \(\delta\colon N\times Z\times Z\to N\) requires \(k\cdot|Z|^{2}\) memory. However, it is uncommon that in each state-memory pair \((s,n)\) all posterior observations can be observed.
We therefore encode \(\delta(n,z,\cdot)\) as a sparse adjacency list, i.e., as a list of pairs \((z^{\prime},\delta(n,z,z^{\prime}))\). To define the size of such a list properly, consider the induced MC \(\mathcal{M}^{F}=(S\times N,(s_{0},n_{0}),\{\alpha\},\mathcal{P}^{F})\). Let \(post(n,z)\coloneqq\big{\{}O(s^{\prime})\mid\exists s\in S_{z}\colon(s^{\prime},\cdot)\in\operatorname{supp}(\mathcal{P}^{F}((s,n),\alpha))\big{\}}\) denote the set of posterior observations reachable when taking a transition in a state \((s,n)\) of \(\mathcal{M}^{F}\) with \(O(s)=z\). Table 1 summarises the resulting sizes of FSCs of various sub-classes. The derivation is included in Appendix B of [3]. Table 4 on p. 18 shows that we typically find much smaller \(\mu\)-FSCs (\(F_{\mathcal{I}}\)) than belief-based FSCs (\(F_{\mathcal{B}}\)). ## 7 Experiments Our evaluation focuses on the following three questions:

1. Q1: Can FSCs obtained by inductive synthesis provide better approximations of the belief MDP?
2. Q2: Can belief-based FSCs improve the inductive synthesis?
3. Q3: What are the practical benefits of the symbiotic approach?

**Selected benchmarks and setup** Our baselines are the recent belief exploration technique [8] implemented in Storm[13] and the inductive (policy) synthesis method [5] implemented in Paynt[6]. Paynt uses Storm for parsing and model checking of MDPs, but not for solving POMDPs. Our symbiotic framework (Alg. 1) has been implemented on top of Paynt and Storm. In the following, we use Storm and Paynt to refer to the implementation of belief exploration and inductive synthesis respectively, and Saynt to refer to the symbiotic framework. The implementation of Saynt and all benchmarks are publicly available8. Additionally, the implementation and the benchmarks in the form of an artifact are also available at [https://doi.org/10.5281/zenodo.7874513](https://doi.org/10.5281/zenodo.7874513). Footnote 8: [https://github.com/randriu/synthesis](https://github.com/randriu/synthesis) _Setup._ The experiments are run on a single core of a machine equipped with an Intel i5-12600KF @4.9GHz CPU and 64GB of RAM. Paynt searches for posterior-unaware FSCs using abstraction-refinement, as suggested by [5]. By default, Storm applies the cut-offs as presented in Sect. 4.1. Saynt uses the default settings for Paynt and Storm while \(t_{\mathcal{I}}=60s\) and \(t_{\mathcal{B}}=10s\) were taken for Alg. 1. Under Q3, we discuss the effect of changing these values. _Benchmarks._ We evaluate the methods on a selection of models from [7, 8, 5] supplemented by larger variants of these models (Drone-8-2 and Refuel-20), by one model from [16] (Milos-97) and by the synthetic model (Lanes+) described in Appendix C of [3]. We excluded benchmarks for which Paynt or Storm finds the (expected) optimal solution in a matter of seconds. The benchmarks were selected to illustrate advantages as well as drawbacks of all three synthesis approaches: belief exploration, inductive (policy) search, and the symbiotic technique. Table 2 lists for each POMDP the number \(|S|\) of states, the total number \(\sum Act:=\sum_{s}|Act(s)|\) of actions, the number \(|Z|\) of observations, the specification (either maximising or minimising a reachability probability \(P\) or expected reward \(R\)), and a known over-approximation on the optimal value computed using the technique from [7]. These over-approximations are solely used as rough estimates of the optimal values. Tab. 5 on p. 20 reports the quality of the resulting FSCs on a broader range of benchmarks and demonstrates the impact of the non-default settings. \begin{table} \begin{tabular}{|c|c c c|c|c||c c c|c|c|c|} \hline Model & \(|S|\) & \(\sum Act\) & \(|Z|\) & Spec. & \begin{tabular}{c} Over- \\ approx. \\ \end{tabular} & Model & \(|S|\) & \(\sum Act\) & \(|Z|\) & Spec. & \begin{tabular}{c} Over- \\ approx.
\\ \end{tabular} \\ \hline 4x3-95 & 22 & 82 & 9 & \(R_{\max}\) & \(\leq 2.24\) & Drone-4-2 & 1226 & 2954 & 761 & \(P_{\max}\) & \(\leq 0.98\) \\ 4x5x2-95 & 79 & 310 & 7 & \(R_{\max}\) & \(\leq 3.26\) & Drone-8-2 & 13k & 32k & 3195 & \(P_{\max}\) & \(\leq 0.99\) \\ Hallway & 61 & 301 & 23 & \(R_{\min}\) & \(\geq 11.5\) & Lanes+ & 2741 & 5285 & 11 & \(R_{\min}\) & \(\geq 4805\) \\ Milos-97 & 165 & 980 & 11 & \(R_{\max}\) & \(\leq 80\) & Netw-3-8-20 & 17k & 30k & 2205 & \(R_{\min}\) & \(\geq 4.31\) \\ Network & 19 & 70 & 5 & \(R_{\max}\) & \(\leq 359\) & Refuel-06 & 208 & 565 & 50 & \(P_{\max}\) & \(\leq 0.78\) \\ Query-s3 & 108 & 320 & 6 & \(R_{\max}\) & \(\leq 600\) & Refuel-20 & 6834 & 25k & 174 & \(P_{\max}\) & \(\leq 0.99\) \\ Tiger-95 & 14 & 50 & 7 & \(R_{\max}\) & \(\leq 159\) & Rocks-12 & 6553 & 32k & 1645 & \(R_{\min}\) & \(\geq 17.8\) \\ \hline \end{tabular} \end{table} Table 2: Information about the benchmark POMDPs. ### Q1: FSCs provide better approximations of the belief MDP In these experiments, Paynt is used to obtain a sub-optimal \(F_{\mathcal{I}}\) within 10s which is then used by Storm. Tab. 3 (left) lists the results. Our main finding is that _belief exploration can yield better FSCs (and sometimes faster) using FSCs from Paynt_ -- even if the latter FSCs are far from optimal. For instance, Storm with provided \(F_{\mathcal{I}}\) finds an FSC with value 0.97 for the Drone-4-2 benchmark within a total of 10s (1s+9s for obtaining \(F_{\mathcal{I}}\)), compared to obtaining an FSC of value 0.95 in 56s on its own. A value improvement is also obtained if Storm runs longer. For the Network model, the value improves by 37% (short-term) and 47% (long-term) respectively, at the expense of investing 3s to find \(F_{\mathcal{I}}\). For the other models, the relative improvement ranges from 3% to 25%. A further value improvement can be achieved when using better FSCs \(F_{\mathcal{I}}\) from Paynt; see Q3. Sometimes, belief exploration does not profit from \(F_{\mathcal{I}}\). For Hallway, the unexplored part of the belief MDP becomes insignificant rather quickly, and so does the impact of \(F_{\mathcal{I}}\). Clipping [8], a computationally expensive extension of cut-offs, is beneficial only for Rocks-12, rendering \(F_{\mathcal{I}}\) useless. Though even in this case, using \(F_{\mathcal{I}}\) significantly improves Short Storm, which did not have enough time to apply clipping. ### Q2: Belief-based FSCs improve inductive synthesis In this experiment, we run Storm for at most 1s, and use the result in Paynt. Tab. 3 (right) lists the results. Our main finding is that _inductive synthesis can find much better FSCs--and sometimes much faster--when using FSCs from belief exploration._ For instance, for the 4x5x2 benchmark, an FSC is obtained about six times faster while improving the value by 116%. On some larger models, Paynt alone struggles to find any good \(F_{\mathcal{I}}\) and using \(F_{\mathcal{B}}\) boosts this; e.g., the value for the Refuel-20 model is raised by a factor 20 at almost no run time penalty. For the Tiger benchmark, a value improvement of 860% is achieved (albeit not as good as \(F_{\mathcal{B}}\) itself) at the expense of doubling the run time. Thus: _even a shallow exploration of the belief MDP pays off in the inductive synthesis_. The inductive search typically profits even more when exploring the belief MDP further.
This is demonstrated, e.g., in the Rocks-12 model: using the FSC \(F_{\mathcal{B}}\) computed using clipping (see Table 3 (left)) enables Paynt to find FSC \(F_{\mathcal{I}}\) with the same (optimal) value 20 as \(F_{\mathcal{B}}\) within 1s. Similarly, for the Milos-97 model, running Storm for 45s (producing a more precise \(F_{\mathcal{B}}\)) enables Paynt to find an FSC \(F_{\mathcal{I}}\) achieving a better value than controllers found by Storm or Paynt alone within the timeout. (These results are not reported in the tables.) However, as opposed to Q1, where a better FSC \(F_{\mathcal{I}}\) naturally improves the belief MDP, exploring the belief MDP longer does not always yield a better \(F_{\mathcal{I}}\): a larger \(\overline{\mathcal{M}^{\mathcal{B}}}\) with a better \(F_{\mathcal{B}}\) may yield a larger memory model \(\mu\), thus inducing a significantly larger family where Paynt struggles to identify good FSCs. ### Q3: The practical benefits of the symbiotic approach The goals of these experiments are to investigate whether the symbiotic approach improves the run time (can FSCs of a certain value be obtained faster?), the memory footprint (how is the total memory consumption affected?), the controller's value (can better FSCs be obtained with the same computational resources?) and the controller's size (are more compact FSCs obtained?). Value of the synthesised FSCs. Figure 4 plots the value of the FSCs produced by Storm, Paynt, and Saynt versus the computation time. Note that for maximal objectives, the aim is to obtain a high value (the first 4 plots) whereas for minimal objectives a lower value prevails. From the plots, it follows that _the FSCs from the symbiotic approach are superior in value to the ones obtained by the standalone approaches._ The relative improvement of the value of the resulting FSCs differs across individual models, similar to the trends in Q1 and Q2. When comparing the best FSC found by Storm or Paynt alone with the best FSC found by Saynt, the improvement ranges from negligible (4x3-95) to around 3%-7% (Netw-3-8-20, Milos-97, Query-s3) and sometimes goes over 40% (Refuel-20, Lanes+). We note that the distance to the (unknown) optimal values remains unclear.
\begin{table} \begin{tabular}{|c||r||r|r||r|r|} \hline & Paynt & \multicolumn{2}{c||}{Short Storm} & \multicolumn{2}{c|}{Long Storm} \\ Model & \(F_{\mathcal{I}}\) & & \(+F_{\mathcal{I}}\) & & \(+F_{\mathcal{I}}\) \\ \hline \hline Drone-4-2 & 0.94 & 0.92 & 0.97 & 0.95 & 0.97 \\ \(P_{\max}\) & \(9s\) & \(1s\) & \(1s\) & \(56s\) & \(57s\) \\ \hline Network & 266.1 & 186.7 & 274.5 & 202.1 & 277.1 \\ \(R_{\max}\) & \(3s\) & \(<\)\(1s\) & \(<\)\(1s\) & \(26s\) & \(33s\) \\ \hline Drone-8-2 & 0.9 & 0.6 & 0.96 & 0.68 & 0.97 \\ \(P_{\max}\) & \(28s\) & \(3s\) & \(3s\) & \(101s\) & \(103s\) \\ \hline 4x3-95 & 1.66 & 1.62 & 1.82 & 1.84 & 1.88 \\ \(R_{\max}\) & \(7s\) & \(<\)\(1s\) & \(<\)\(1s\) & \(60s\) & \(72s\) \\ \hline Query-s3 & 425.2 & 417.4 & 430.0 & 419.6 & 432.0 \\ \(R_{\max}\) & \(7s\) & \(2s\) & \(2s\) & \(91s\) & \(94s\) \\ \hline Milos-97 & 31.56 & 37.15 & 39.15 & 38.35 & 40.64 \\ \(R_{\max}\) & \(3s\) & \(<\)\(1s\) & \(<\)\(1s\) & \(42s\) & \(42s\) \\ \hline Hallway & 16.05 & 13.07 & 12.63 & 12.55 & 12.55 \\ \(R_{\min}\) & \(9s\) & \(1s\) & \(1s\) & \(160s\) & \(167s\) \\ \hline Rocks-12 & 42 & 38 & 31.89 & 20* & 20* \\ \(R_{\min}\) & \(<\)\(1s\) & \(<\)\(1s\) & \(<\)\(1s\) & \(10s\) \\ \hline \end{tabular} \begin{tabular}{|c||r|r||r|r|} \hline & Storm & \multicolumn{2}{c|}{Paynt} \\ Model & \(F_{\mathcal{B}}\) & & \(+F_{\mathcal{B}}\) \\ \hline \hline 4x5x2-95 & 2.08 & 0.94 & 2.03 \\ \(R_{\max}\) & \(<\)\(1s\) & \(258s\) & \(38s\) \\ \hline Refuel-20 & 0.09 & \(<\)\(0.01\) & 0.19 \\ \(P_{\max}\) & \(1s\) & \(10s\) & \(11s\) \\ \hline Tiger-95 & 50.38 & 2.99 & 28.73 \\ \(R_{\max}\) & \(<\)\(1s\) & \(14s\) & \(23s\) \\ \hline 4x3-95 & 1.62 & 1.75 & 1.84 \\ \(R_{\max}\) & \(<\)\(1s\) & \(14s\) & \(238s\) \\ \hline Refuel-06 & 0.67 & 0.35 & 0.67 \\ \(P_{\max}\) & \(<\)\(1s\) & \(<\)\(1s\) & \(42s\) \\ \hline Milos-97 & 37.15 & 31.56 & 39.29 \\ \(R_{\max}\) & \(<\)\(1s\) & \(3s\) & \(215s\) \\ \hline Netw-3-8-20 & 11.93 & 11.07 & 10.95 \\ \(R_{\min}\) & \(1s\) & \(185s\) & \(271s\) \\ \hline Rocks-12 & 38 & 42 & 38 \\ \(R_{\min}\) & \(<\)\(1s\) & \(<\)\(1s\) & \(<\)\(1s\) \\ \hline \end{tabular} \end{table} Table 3: **Left (Q1)**: Experimental results on how a (quite sub-optimal) FSC \(F_{\mathcal{I}}\) computed by Paynt within 10s impacts Storm. (For Drone-8-2, the largest model in our benchmark, we use 30s). The “Paynt” column indicates the value of \(F_{\mathcal{I}}\) and its run time. The “Short Storm” column runs Storm for 1s and compares the value of FSC \(F_{\mathcal{B}}\) found by Storm alone to Storm using \(F_{\mathcal{I}}\). The “Long Storm” column is analogous, but with a 300s timeout for Storm. In the last row, * indicates that clipping was used. **Right (Q2)**: Experimental results on how an FSC \(F_{\mathcal{B}}\) obtained by a shallow exploration of the belief MDP impacts the inductive synthesis by Paynt. The “Storm” column reports the value of \(F_{\mathcal{B}}\) computed within 1s. The “Paynt” column compares the values of the FSCs \(F_{\mathcal{I}}\) obtained by Paynt itself to Paynt using the FSCs \(F_{\mathcal{B}}\) within a 300s timeout. The FSC value never decreases but sometimes also does not increase, as indicated by Hallway and Rocks-12 (see also Q2). Our experiments (see Tab. 5) also indicate that the improvement over the baseline algorithms is typically more significant in the larger variants of the models. Furthermore, the plots in Fig. 4 also include the FSC value achieved by the one-shot combination of Storm and Paynt.
We see that Saynt _can improve the FSC value over the one-shot combination_. This is illustrated in, e.g., the 4x3-95 and Lanes+ benchmarks, see the 1st and 3rd plots in Fig. 4 (left). Figure 4: Value of the generated FSCs over time. The last graph shows the average memory usage of Storm and Saynt. The lines ending before the timeout indicate that the 64GB memory limit was hit. \(\bullet\) indicates that Paynt and Saynt synthesised posterior-aware FSCs. \(\diamond\) indicates that Saynt ran with \(t_{\mathcal{I}}=90s\). Total synthesis time. Saynt initially needs some time for the first iteration (one inductive and one belief phase) in Alg. 1 and thus, during the beginning of the synthesis process, the standalone tools may provide FSCs of a certain value faster. After the first iteration, however, Saynt typically provides better FSCs in a shorter time. For instance, for the Refuel-20 benchmark Saynt swiftly overtakes Storm after the first iteration. The only exception is Rocks-12 (discussed before), where Saynt with the default settings needs significantly more time than Storm to obtain an FSC of the same value. Memory footprint. Belief exploration typically has a large memory footprint: Storm quickly hits the 64GB memory limit on exploring the belief MDP. Saynt _reduces the memory footprint of Storm alone by a factor 3 to 4_, see the bottom right plot of Fig. 4. The average memory footprint of running Paynt standalone quickly stabilises around 700MB. The memory footprint of Saynt is thus dominated by the restricted exploration of the belief MDP. The size of the synthesised FSCs. For selected models, Tab. 4 shows the trade-offs between the value and size of the resulting FSCs \(F_{\mathcal{I}}\) and \(F_{\mathcal{B}}\) found by Saynt. The experiments show that _the FSCs \(F_{\mathcal{I}}\) provided by inductive synthesis are typically about one to two orders of magnitude smaller than the belief-based FSCs \(F_{\mathcal{B}}\) with only a small penalty in their values._ There are models (e.g. Refuel-06) where a very small \(F_{\mathcal{B}}\), having even slightly smaller size than \(F_{\mathcal{I}}\), does exist. The integration mostly reduces the size of \(F_{\mathcal{B}}\), by up to a factor of two, due to the better approximation of the belief MDP. This reduction has a negligible effect on the size of \(F_{\mathcal{I}}\). This observation further strengthens the usefulness of Saynt, which jointly improves the value of \(F_{\mathcal{I}}\) and \(F_{\mathcal{B}}\). Hence, Saynt gives users a unique opportunity to run a single, time-efficient synthesis and select the FSC according to the trade-off between its value and size. Customising the Saynt setup. In contrast to the standalone approaches as well as to the one-way integrations presented in Q1 and Q2, Saynt _provides a single synthesis method that is efficient for a general class of models without tuning its parameters._ Naturally, adjusting the parameters to individual benchmarks can further improve the quality of the computed controllers: captions of Fig. 4 and Tab. 4 describe which non-default settings were used for selected models. The FSCs \(F_{\mathcal{I}}\) found by Saynt achieve better values than the controllers computed by Paynt; size-wise, these better FSCs of Saynt are similar or only slightly bigger. Meanwhile, for FSCs \(F_{\mathcal{B}}\) obtained by Saynt, we sometimes observe a significant size reduction while still improving the value compared to the FSCs produced by Storm.
Two models are notable: On Drone-8-2, Saynt obtains 50% smaller \(F_{\mathcal{B}}\) while having a 41% better value. On Network-3-8-20, the size of \(F_{\mathcal{B}}\) is reduced by 40% while again providing better value. In the following, we further discuss the impact of non-default settings for selected benchmarks, as presented in Tab. 5. For instance, using posterior-aware FSCs generally significantly slows down the synthesis process; however, for Network and 4x3-95, it helps to improve the value of the default posterior-unaware FSCs by 2% and 4%, respectively. For the former model, a better \(F_{\mathcal{I}}\) also improves \(F_{\mathcal{B}}\) by about a similar value. In some cases, e.g. for Query-s3, it is beneficial to increase the parameter \(t_{\mathcal{I}}\), giving Paynt enough time to search for a good FSC \(F_{\mathcal{I}}\) (the relative improvement is 6%), which also improves the value of the resulting FSC \(F_{\mathcal{B}}\) by about a similar value. Tuning \(t_{\mathcal{I}}\) and \(t_{\mathcal{B}}\) can also have an impact on the value-size trade-off, as seen in the Milos-97 model, where setting a longer timeout \(t_{\mathcal{I}}\) results in finding a 2% better \(F_{\mathcal{B}}\) with 130% size increase. A detailed analysis of the experimental results suggests that usually, it is more beneficial to invest time into searching for a good \(F_{\mathcal{I}}\) that is used to compute better cut-off values, rather than into deeper exploration of the belief MDP. However, the timeouts still need to allow for multiple subsequent iterations of the algorithm in order to utilise the full potential of the symbiosis. ## 8 Conclusion and Future Work We proposed Saynt, a symbiotic integration of the two main approaches for controller synthesis in POMDPs. Using a wide class of models, we demonstrated that Saynt substantially improves the value of the resulting controllers and provides an any-time, push-button synthesis algorithm allowing users to select the controller based on the trade-off between its value and size, and the synthesis time. In future work, we plan to explore if the inductive policy synthesis can also be successfully combined with point-based approximation methods, such as SARSOP, and with discounted reward properties. A preliminary comparison on discounting properties provides two interesting observations: 1) For models with large reachable belief space and discount factors (very) close to one, SARSOP typically fails to update its initial _alpha-vectors_ and thus produces low-quality controllers. In these cases, Saynt outperforms SARSOP. 2) For common discount factors, SARSOP beats Saynt on the majority of benchmarks. This is not surprising, as the MDP engine underlying Saynt does not natively support discounting and instead computes a much harder fixed point. See [15] for a recent discussion on the differences between discounting and not discounting. \begin{table} \begin{tabular}{|c c||c|c||c|c||c c c|c|c|c|c|} \hline Benchmark & \multicolumn{2}{c||}{Model Size} & \multicolumn{2}{c||}{Paynt} & \multicolumn{2}{c||}{Storm} & \multicolumn{2}{c|}{Saynt} \\ Model & Spec.
& \(S/\Sigma Act\) & \(Z\) & \(F_{\mathcal{I}}\) & Size & \(F_{\mathcal{B}}\) & Size & \(F_{\mathcal{B}}\) & Size & \(F_{\mathcal{I}}\) & Size \\ \hline \hline \multirow{4}{*}{4x3} & \multirow{4}{*}{\(R_{\max}\)} & 22 & \multirow{4}{*}{9} & 1.81 & \multirow{4}{*}{36} & 1.87 & \multirow{4}{*}{999} & **1.89\(\bullet\)** & 968 & 1.87\(\bullet\) & 126 \\ & & 82 & & 764\(s\) & & 414\(s\) & & 283\(s\) & & 120\(s\) & \\ \cline{5-10} & & & & & & & **1.89** & 869 & 1.79 & 36 \\ & & & & & & & 303\(s\) & & 678\(s\) & \\ \hline 4x5x2 & \multirow{4}{*}{\(R_{\max}\)} & 79 & 0.94 & 26 & 2.08 & 102 & 2.08 & 102 & 2.03 & 38 \\ 95 & & 310 & & 305\(s\) & & 3\(s\) & & 71\(s\) & 378\(s\) & \\ \hline \multirow{4}{*}{Drone 4-1} & \multirow{4}{*}{\(P_{\max}\)} & 1226 & \multirow{4}{*}{384} & 0.87 & \multirow{4}{*}{768} & 0.84 & **0.89\(\bullet\)** & 169k & 0.87\(\bullet\) & 2.5k \\ & & 3026 & & 665\(s\) & & 110\(s\) & & 390\(s\) & 453\(s\) & \\ \cline{5-10} & & & & & & & **0.89** & 176k & 0.79 & 922 \\ \cline{5-10} & & & & & & & 180\(s\) & & 458\(s\) & \\ \hline Drone & \multirow{4}{*}{\(P_{\max}\)} & 1226 & \multirow{4}{*}{761} & 0.95 & 1.5k & 0.95 & 135k & **0.97** & 140k & 0.94 & 1.5k \\ 4-2 & & 3026 & & 900\(s\) & & 110\(s\) & & 194\(s\) & & 1\(s\) \\ \hline Drone & \multirow{4}{*}{\(P_{\max}\)} & 13k & \multirow{4}{*}{3195} & 0.9 & 6.4k & 0.68 & 280k & **0.96** & 140k & 0.9 & 6.4k \\ 8-2 & & 32k & & 260\(s\) & & 98\(s\) & & 247\(s\) & & 30\(s\) & \\ \hline Hallway & \multirow{4}{*}{\(R_{\min}\)} & 61 & \multirow{4}{*}{23} & 15.54 & 66 & 12.55 & 1.9k & 12.55 & 1.8k & 15.46 & 86 \\ & & 301 & & 26\(s\) & & 916\(s\) & & 263\(s\) & & 293\(s\) & \\ \hline Lanes+ & \multirow{4}{*}{\(R_{\min}\)} & 2741 & \multirow{4}{*}{11} & 8223 & 42 & 18870 & 8.1k & **4805** & 8.1k & 6591 & 34 \\ & & 5289 & & 118\(s\) & & 376\(s\) & & 173\(s\) & & 114\(s\) & \\ \hline Milos-97 & \multirow{4}{*}{\(R_{\max}\)} & 165 & \multirow{4}{*}{11} & 31.56 & \multirow{4}{*}{40} & 39.03 & **41.99\(\diamond\)** & 692 & 35.82\(\diamond\) & 40 \\ & & 980 & & 4\(s\) & & 88\(s\) & & 370\(s\) & & 185\(s\) & \\ \cline{5-10} & & & & 4s & & **41.55** & 290 & 35.41 & 40 \\ & & & & & & 270\(s\) & & 114\(s\) & \\ \hline Network & \multirow{4}{*}{\(R_{\max}\)} & 19 & \multirow{4}{*}{5} & 280.33 & \multirow{4}{*}{22} & 209.71 & \multirow{4}{*}{2.4k} & **289.18\(\bullet\)** & 2k & 287.23\(\bullet\) & 54 \\ & & 70 & & 38\(s\) & & 110\(s\) & & 395\(s\) & & 106\(s\) & \\ \cline{5-10} & & & & 38\(s\) & & 110\(s\) & & **284.51** & 1.8k & 280.33 & 22 \\ \cline{5-10} & & & & & & 85\(s\) & & 41\(s\) & \\ \hline Netw & \multirow{4}{*}{\(R_{\min}\)} & 4589 & \multirow{4}{*}{1173} & 4.24 & 2.3k & 3.21 & 34k & **3.2** & 23k & 4.19 & 2.5k \\ 2-8-20 & & 6973 & & 914\(s\) & & 11\(s\) & & 71\(s\) & & 211\(s\) & \\ \hline Netw & \multirow{4}{*}{\(R_{\min}\)} & 17k & 2205 & 11.04 & 4.4k & 10.27 & 64k & **10** & 38k & 11.04 & 4.8k \\ 3-8-20 & & 30k & & 638\(s\) & & 238\(s\) & & 742\(s\) & & 379\(s\) & \\ \hline Query & \multirow{4}{*}{\(R_{\max}\)} & 108 & \multirow{4}{*}{6} & 502.3 & \multirow{4}{*}{28} & 420.11 & \multirow{4}{*}{12.9k} & **511.32\(\diamond\)** & 7.7k & 509.49\(\diamond\) & 26 \\ s3 & & 320 & & 931\(s\) & & 184\(s\) & & 566\(s\) & & 362\(s\) & \\ \cline{5-10} & & & & & & 184\(s\) & & 482.21 & 7.7k & 478.59 & 28 \\ \cline{5-10} & & & & & & & 700\(s\) & & 610\(s\) & \\ \hline Refuel & \multirow{4}{*}{\(P_{\max}\)} & 208 & \multirow{4}{*}{50} & 0.35 & 100 & 0.67 & 343 & 0.67 & 84 & 0.67 & 156 \\ 06 & & 565 & & \(<\)1\(s\) & & 182\(s\) & & 178\(s\) & & 84\(s\) & 
\\ \hline Refuel & \multirow{4}{*}{\(P_{\max}\)} & 470 & \multirow{4}{*}{66} & 0.32 & 132 & 0.44 & 534 & **0.45** & 140 & 0.3 & 142 \\ 08 & & 1431 & & 253\(s\) & & 96\(s\) & & 186\(s\) & & 84\(s\) & \\ \hline Refuel & \multirow{4}{*}{\(P_{\max}\)} & 6834 & \multirow{4}{*}{174} & 0.02 & 348 & 0.15 & 1.2k & **0.24** & 1.5k & 0.2 & 360 \\ 20 & & 24k & & 922\(s\) & & 468\(s\) & & 386\(s\) & & 173\(s\) & \\ \hline Rocks & \multirow{4}{*}{\(R_{\min}\)} & 6553 & \multirow{4}{*}{1645} & 42 & 3.3k & **20\(\
2310.10199
Impact of Data Synthesis Strategies for the Classification of Craniosynostosis
Introduction: Photogrammetric surface scans provide a radiation-free option to assess and classify craniosynostosis. Due to the low prevalence of craniosynostosis and high patient restrictions, clinical data is rare. Synthetic data could support or even replace clinical data for the classification of craniosynostosis, but this has never been studied systematically. Methods: We test the combinations of three different synthetic data sources: a statistical shape model (SSM), a generative adversarial network (GAN), and image-based principal component analysis for a convolutional neural network (CNN)-based classification of craniosynostosis. The CNN is trained only on synthetic data, but validated and tested on clinical data. Results: The combination of an SSM and a GAN achieved an accuracy of more than 0.96 and an F1-score of more than 0.95 on the unseen test set. The difference to training on clinical data was smaller than 0.01. Including a second image modality improved classification performance for all data sources. Conclusion: Without a single clinical training sample, a CNN was able to classify head deformities as accurately as if it had been trained on clinical data. Using multiple data sources was key for a good classification based on synthetic data alone. Synthetic data might play an important future role in the assessment of craniosynostosis.
Matthias Schaufelberger, Reinald Peter Kühle, Andreas Wachter, Frederic Weichel, Niclas Hagen, Friedemann Ringwald, Urs Eisenmann, Jürgen Hoffmann, Michael Engel, Christian Freudlsperger, Werner Nahm
2023-10-16T09:10:49Z
http://arxiv.org/abs/2310.10199v1
# Impact of Data Synthesis Strategies for the Classification of Craniosynostosis ###### Abstract _Introduction:_ Photogrammetric surface scans provide a radiation-free option to assess and classify craniosynostosis. Due to the low prevalence of craniosynostosis and high patient restrictions, clinical data is rare. Synthetic data could support or even replace clinical data for the classification of craniosynostosis, but this has never been studied systematically. _Methods:_ We test the combinations of three different synthetic data sources: a statistical shape model (SSM), a generative adversarial network (GAN), and image-based principal component analysis for a convolutional neural network (CNN)-based classification of craniosynostosis. The CNN is trained only on synthetic data, but validated and tested on clinical data. _Results:_ The combination of an SSM and a GAN achieved an accuracy of more than 0.96 and an F1-score of more than 0.95 on the unseen test set. The difference to training on clinical data was smaller than 0.01. Including a second image modality improved classification performance for all data sources. _Conclusion:_ Without a single clinical training sample, a CNN was able to classify head deformities as accurately as if it had been trained on clinical data. Using multiple data sources was key for a good classification based on synthetic data alone. Synthetic data might play an important future role in the assessment of craniosynostosis. ## 1 Introduction Craniosynostosis is a group of head deformities in infants involving the irregular closure of one or multiple head sutures; its prevalence is estimated to be between four and ten cases per 10,000 live births [1]. As described by Virchow's law [2], depending on the affected suture, distinct types of head deformities arise. Genetic mutations have been identified as one of the main causes of craniosynostosis [3, 4], which has been linked to increased intracranial pressure [5] and decreased brain development [6]. The most commonly performed therapy is surgical intervention consisting of resection of the suture and cranial remodeling of the skull. It has a high success rate [7] and is usually performed within the first two years of life. Early diagnosis is crucial and often involves palpation, cephalometric measurements, and medical imaging. Computed tomography (CT) imaging is the gold standard for diagnosis, but makes use of harmful ionizing radiation which should be avoided, especially for very young infants. Black-bone magnetic resonance imaging (MRI) [8] is sometimes performed, but requires sedation of the infants to prevent motion artifacts. 3D photogrammetric scanning enables the creation of 3D surface models of the child's head and face and is a radiation-free, cost-effective, and fast option to quantify head shape. It can be employed in a pediatrician's office and has potential to be used with smartphone-based scanning approaches [9]. Due to the low prevalence, craniosynostosis is included in the list of rare diseases by the American National Organization for Rare Disorders. Besides the scarcity of data, strict patient data regulations, and difficulties in anonymization (photogrammetric recordings show head and face), there are no clinical datasets of craniosynostosis patients publicly available online.
Synthetic data could potentially be used as a substitute to develop algorithms and approaches for the assessment of craniosynostosis, but only one synthetic dataset based on a statistical shape model (SSM) from our group [10] has been made publicly available so far. Scarce training data and high class imbalance due to the different prevalences of the different types of craniosynostosis [4] call for the usage of synthetic data to support or even replace clinical datasets as the primary resource for deep learning (DL)-based assessment and classification. The inclusion of synthetic data could facilitate training due to the reduction of class imbalance and increase the classifier's robustness and performance. Additionally, synthetic data may also be used as a cost-effective way to acquire the required training material for classification models without manually labeling and exporting a lot of clinical data. Using synthetic data for classification studies in a supporting manner or as a full replacement for clinical data has gained traction in several fields of biomedical engineering (e.g. [11, 12]), especially if clinical data is not abundant. While classification approaches of craniosynostosis on CT data [13], 2D images [14], and 3D photogrammetric surface scans [15, 16, 17] have been proposed, the dataset sizes were below 500 samples (e.g. [17], [15], and [13]) and contained a high class imbalance. The usage of synthetic data is a straightforward way to increase training size and stratify class distribution. However, although the need for synthetic data had been acknowledged [15], synthetic data generation for the classification of head deformities has not been systematically explored yet. With the scarce availability of clinical data and multiple options of synthetic data generation available, we aim to test the effectiveness of multiple data synthesis methods both individually and as multi-modal approaches for the classification of craniosynostosis. Using synthetic data as training material not only facilitates the development of larger and more robust classification approaches, but also makes data sharing easier and increases data availability. A popular approach for 3D data synthesis is statistical shape modeling. It models 3D geometric shape variations by means of statistical analysis. In the context of head deformities, SSMs have been employed to distinguish clinical head parameters [18], to evaluate head shape variations [19], to assess therapy outcome [20], and to classify craniosynostosis [16]. Although their value in the clinical assessment of craniosynostosis has been shown, the impact of SSM-based data augmentation for the classification of craniosynostosis has not been evaluated yet. With the conversion of the 3D head geometry into a 2D image, image-based convolutional neural network (CNN) classification [17] can be applied to low-resolution images. Generative adversarial networks (GANs) [21] have been suggested as a data augmentation tool [15] and have been able to increase classification performance for small datasets [22]. The goal of this work is to employ a classifier based on synthetic data, using three different types of data synthesis strategies: SSM, GAN, and image-based principal component analysis (PCA). The three modalities are systematically compared regarding their capability in the classification of craniosynostosis when trained only on synthetic data.
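As an illustration of the simplest of the three strategies, the sketch below fits an image-based PCA to flattened 28x28 distance maps of one class and draws new samples from a Gaussian in the component space. It is a minimal example under our own assumptions (component count, Gaussian sampling of coefficients) and is not the exact model used in this work.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_sampler(images, n_components=10, seed=0):
    """Fit PCA on flattened images (N x 784) and return a synthetic-image sampler."""
    rng = np.random.default_rng(seed)
    pca = PCA(n_components=n_components).fit(images)
    std = np.sqrt(pca.explained_variance_)        # std dev per principal component

    def sample(n):
        coeffs = rng.normal(scale=std, size=(n, std.size))
        return pca.inverse_transform(coeffs).reshape(n, 28, 28)

    return sample

# usage: sampler = fit_pca_sampler(train_maps.reshape(-1, 28 * 28))
#        synthetic_maps = sampler(1000)   # 1000 new 28x28 distance maps
```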
We will demonstrate that the classification of craniosynostosis is possible with a multi-modal synthetic dataset, with a performance similar to that of a classifier trained on clinical data. Additionally, we propose a GAN design tailored towards the creation of low-resolution images for the classification of craniosynostosis. The GAN, the different SSMs, and the PCA were made publicly available, along with all the 2D images from the synthetic training, validation, and test sets. ## 2 Methods ### Dataset and Preprocessing All data from this study was provided by the Department of Oral and Maxillofacial Surgery of the Heidelberg University Hospital, where patients with craniosynostosis are routinely recorded for therapy planning and documentation purposes. The recording device is a photogrammetric 3D scanner (Canfield VECTRA-360-nine-pod system, Canfield Science, Fairfield, NJ, USA). We used a standardized protocol which had been examined and approved by the Ethics Committee of the Medical Faculty of the University of Heidelberg (Ethics number S-237/2009). The study was carried out according to the Declaration of Helsinki, and written informed consent was obtained from the parents. Figure 1: Landmarks provided in the dataset, used for the alignment for statistical shape modeling and the coordinate system creation of the distance maps [17]. The three landmarks on the right exist for both the left and right part of the head. Figure 2: Pie chart of the class ratios in the clinical dataset (control \(56\,\%\), coronal \(5\,\%\), metopic \(14\,\%\), sagittal \(25\,\%\)). The legend in the center shows the absolute number of samples in the dataset (in total 496 samples). Figure 3: The four classes of the dataset with their distinct head shapes and their resulting distance map representations. Top row: frontal view, middle row: top view, bottom row: 2D distance maps. Each data sample was available as a 3D triangular surface mesh. We selected the 3D photogrammetric surface scans from all available years (2011-2021). If multiple scans of the same patient were available, we selected only the last preoperative scan to avoid duplicate samples of the same patient. All patient scans had been annotated by medical staff with their diagnosis and 10 cephalometric landmarks. Fig. 1 shows the available landmarks on the dataset. We retrieved patients with coronal suture fusion (brachycephaly and unilateral anterior plagiocephaly), sagittal suture fusion (scaphocephaly), and metopic suture fusion (trigonocephaly), as well as a control group, with the dataset distribution displayed in Fig. 2. Besides healthy subjects, the control group also contained patients suffering from mild positional plagiocephaly without suture fusion. Subjects with positional plagiocephaly in the control group were treated with helmet therapy or lying repositioning. In contrast, all patients suffering from craniosynostosis required surgical treatment and underwent remodeling of the neurocranium. The four resulting head shapes are visualized in Fig. 3. We used the open-source Python module pymeshlab [23] (version 2022.2) to automatically remove some recording artifacts such as duplicated vertices and isolated parts. We also closed holes resulting from incorrect scanning and removed irregular edge lengths by using isotropic explicit re-meshing [24] with a target edge length of 1 mm. In an earlier work [17], we defined a 2D encoding of the 3D head shape ("distance maps", displayed in Fig. 3, bottom row) which was also included in the pre-processing pipeline with the default parameters of [17].
### Data subdivision We did not use the full clinical dataset (validation and test set according to Fig. 4) as training data for the data generation models (GAN, SSM, and PCA), since the statistical information of the test set would otherwise be included in the synthetic data sources, leading to leakage (an overestimation of the model performance due to statistical information "leaking" into the test set). Instead, we chose the scheme displayed in Fig. 4. We used a stratified 50-50 split of the clinical data and used one half of the samples as the validation set and the other half as the test set. The test set was separated from the validation set, only to be used for the final evaluation of the classifier. Following this approach, the test set neither had any influence on the synthetic data, nor was it incorporated in the validation set; it should therefore be a true representation of unknown data to the classifier. The validation set was used to select the best network during training and for hyperparameter tuning, but not as training material. Additionally, it was used as the original (training) data on which we built the synthetic image generators. The synthetic training set was then created from the validation set according to the three data synthesis approaches described below: SSM, GAN, and PCA. The three approaches operated on different domains: while the SSM was applied directly to the 3D surface scans, the GAN and the PCA used the 2D distance map images. All images were created as 28\(\times\)28-sized craniosynostosis distance maps, which was sufficient for good classification in an earlier study [17]. Figure 4: Data subdivision for the synthetic-data-based classification and the creation of synthetic data. The test set was separated initially from the dataset, while the validation set was used to produce the synthetic samples on which the CNN was trained. Green: data, blue: 3D-2D image conversion, dark red: generative models. We describe each of the three individual approaches SSM, GAN, and PCA below. ### Data Synthesis #### 2.3.1 Statistical shape model The pipeline for the SSM creation (similar to [25]) consists of initial alignment, dense correspondence establishment, and statistical modeling to extract the mean shape and the principal components from the sample covariance matrix (see also Fig. 5). For correspondence establishment, we employed template morphing. We used the mean shape of our previously published SSM [10] as a template, which would be morphed onto each of the target scans. Procrustes analysis was employed on the ten cephalometric landmarks to obtain a transformation including translation, rotation, and isotropic scaling from the template to each target, using the cephalometric landmarks on the face and ears (a minimal sketch of this fit is given below). For correspondence establishment, we employed the Laplace-Beltrami regularized projection (LBRP) approach [26] to morph the template onto each of the targets. We used two iterations: a high stiffness fit (providing a now landmark-free transformation from the template to the target, also improving the alignment of the back of the head, which is not covered by the landmarks) and a low stiffness fit (allowing the template to deform very closely to the targets [27]). The deformed templates were then in dense correspondence, sharing the same point IDs across all scans, and were used for further processing.
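The landmark-based alignment step is a standard similarity Procrustes fit. The following NumPy sketch is our illustration of such a fit (Umeyama's method); the function name and interface are ours, not the authors' released code:

```python
import numpy as np

def similarity_procrustes(template_lm, target_lm):
    """Least-squares fit of scale s, rotation R, and translation t
    mapping template landmarks onto target landmarks (Umeyama's method).
    template_lm, target_lm: (N, 3) arrays of corresponding landmarks,
    e.g. the ten cephalometric landmarks on face and ears."""
    mu_a, mu_b = template_lm.mean(0), target_lm.mean(0)
    A, B = template_lm - mu_a, target_lm - mu_b        # centered point sets
    U, S, Vt = np.linalg.svd(A.T @ B)                  # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # no reflection
    R = Vt.T @ D @ U.T                                 # rotation template -> target
    s = (S * np.diag(D)).sum() / (A ** 2).sum()        # isotropic scale
    t = mu_b - s * R @ mu_a                            # translation
    return s, R, t

# Applying (s, R, t) to all template vertices V of shape (n, 3):
#   aligned = s * V @ R.T + t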
Generalized Procrustes analysis (GPA) was performed to remove both rotational and translational components from all the morphed templates, so that the mean shape could be determined and removed. The remaining zero-mean data matrix served as a basis for the principal component analysis. To counterbalance the higher point density in the facial regions, we used weighted PCA instead of ordinary PCA for the statistical modeling. The weights were assigned according to the surface area that each point encapsulated and were computed using the area of each triangle of the surface model. We created one SSM for each class, ensuring that the models were independent of each other and did not contain influences from the other classes. Figure 5: The statistical shape model pipeline employed in this study. The target scan is colored green, with the deforming template in white. We cut off the coefficient vectors after 95 % of the normalized variance to remove noise and to ensure that only the most important components were included in the SSMs. The synthesis of the model instances could then be performed as \[\mathrm{s}=\bar{\mathrm{s}}+\mathrm{V}\Lambda^{\frac{1}{2}}\alpha, \tag{1}\] with \(\bar{\mathrm{s}}\) denoting the mean shape, \(\mathrm{V}\) the principal components, \(\Lambda\) the diagonal matrix of eigenvalues of the sample covariance matrix, and \(\alpha\) the shape coefficient vector. We created 1000 random shapes of each class, drawing the shape coefficient vector from a Gaussian distribution, and created craniosynostosis distance maps for each sample. #### 2.3.2 Image-based principal component analysis We used ordinary PCA as the last modality to generate 2D image data. While the SSM also made use of PCA in the 3D domain, image-based PCA operated directly on the 2D images. This was a computationally inexpensive and less sophisticated alternative to both GANs and SSMs, since neither extensive model training and hyperparameter tuning, nor 3D morphing and correspondence establishment, was required. We employed ordinary PCA for each of the four classes separately and again created 1000 samples for each class. Since the SSM is itself based on PCA, the data synthesis could be performed analogously as \[\mathrm{i}=\bar{\mathrm{i}}+\mathrm{V}\Lambda^{\frac{1}{2}}\alpha, \tag{2}\] with \(\bar{\mathrm{i}}\) denoting the mean image in vectorized shape, \(\mathrm{V}\) again the principal components, \(\Lambda\) the diagonal matrix of eigenvalues of the sample covariance matrix, and \(\alpha\) the coefficient vector of the principal components. We again drew 1000 random coefficient vectors from a Gaussian distribution and transformed the resulting samples back into 2D image shape. #### 2.3.3 Generative adversarial network The GAN combines multiple suggestions from different GAN designs and was designed as a conditional [28] deep convolutional [29] Wasserstein [30] GAN with gradient penalty [31] (cDC-WGAN-GP). The design in terms of the intermediate image sizes is visualized in Fig. 6. For the full design including all layers, consult Appendix A. We opted for a design including a mixture of transposed convolutions, interpolation layers, and normal convolutional filter kernels, which prevented checkerboard artifacts and large patches. The combination of interpolation layers and transposed convolutional layers led to better images than either of the approaches alone (see also Fig. 12 in Appendix B), which had been used in our previous approach [32]. The conditioning of the GAN was implemented as an embedding vector controlling the image label that we wished to synthesize.
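Equations (1) and (2) share the same sampling scheme, which the following NumPy sketch makes concrete. It is a minimal illustration with our own interface, assuming plain (unweighted) PCA; the paper's SSM additionally uses surface-area weights, which are omitted here:

```python
import numpy as np

def fit_pca_generator(X, var_keep=0.95):
    """Fit a per-class generator to a data matrix X of shape (n_samples, d),
    where d is 3 * n_points for the SSM or 28 * 28 for the image PCA."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigvals = S ** 2 / (X.shape[0] - 1)          # eigenvalues of the covariance
    ratio = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(ratio, var_keep)) + 1    # keep 95 % of the variance
    return mean, Vt[:k].T, eigvals[:k]           # mean, V, diagonal of Lambda

def sample_instances(mean, V, eigvals, n=1000, seed=0):
    """Draw n samples  s = mean + V Lambda^(1/2) alpha,  alpha ~ N(0, I),
    mirroring Eqs. (1) and (2)."""
    alpha = np.random.default_rng(seed).standard_normal((n, len(eigvals)))
    return mean + (alpha * np.sqrt(eigvals)) @ V.T
```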
We trained the GAN for 1000 epochs using the Wasserstein distance [30], which is considered to stabilize training [33]. Instead of the originally proposed weight clipping, we used a gradient penalty [31] of \(\lambda=1\). We used 10 critic iterations before updating the generator and a learning rate of \(\alpha=3\cdot 10^{-5}\) for both networks. The loss \(L\) can be described as follows [31]: \[L=\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}\left[D(\tilde{x}|y)\right]-\mathbb{E}_{x\sim\mathbb{P}_{r}}\left[D(x|y)\right]+\lambda\left(||\nabla_{\hat{x}}D(\hat{x})||_{2}-1\right)^{2} \tag{3}\] with \(\tilde{x}\sim\mathbb{P}_{g}\) denoting the generator samples \(G(z|y)\), \(x\sim\mathbb{P}_{r}\) the real samples, and \(\hat{x}=\epsilon x+(1-\epsilon)\tilde{x}\) with \(\epsilon\) denoting a uniformly distributed random variable between 0 and 1 [31]. Figure 6: Visualization of the intermediate image sizes of the GAN model used. Left: generator, right: critic (discriminator). The filter kernel sizes are described in Appendix A. Figure 7: Image development of the GAN generator during different stages of training, visualized as a 2\(\times\)2 grid. ### Image assessment We used the structural similarity index measure to the closest clinical sample (SSIM\({}_{\rm cc}\)) as the basis for a metric to assess the similarity of the synthetic images to the clinical images, and defined the \(\text{SSIM}_{\text{cc}}\) for each _synthetic_ sample as the minimum SSIM with respect to the _clinical_ samples of the same class \(N\): \[\text{SSIM}_{\text{cc},i}=\min_{n\in N}\text{SSIM}(p_{i,\text{synthetic}},p_{n,\text{clinical}}) \tag{4}\] It has to be noted that the \(\text{SSIM}_{\text{cc}}\) itself did not assess the quality of the synthetic images, but was rather designed to evaluate the similarity to the clinical images. With this approach, we tried to quantify a "good" data generator: the data should not be very similar to the original data (because then we could simply use the original data), but also not too different (because then it might no longer be a true representation of the underlying class). "Good" images should thus have an \(\text{SSIM}_{\text{cc}}\) that is neither too close to 1 nor too low. ### CNN Training Resnet18 was used as the classifier since it showed the best performance on this type of distance map [17]. We used pytorch's [34] publicly available, pre-trained Resnet18 model and fine-tuned the weights during training. During training, all images were reshaped to a size of 224\(\times\)224 to match the input size of Resnet18. We performed a separate run of CNN training for each of the seven combinations of the synthetic data sources. The CNN was trained only on synthetic data (except for the clinical scenario, which was trained on clinical data for comparison). During training, we evaluated the model on both the (purely synthetic) training data and the (clinical) validation set (see also Fig. 8). The best-performing network was chosen according to the maximum F1-score on the validation set. The test set was never touched during training and only evaluated in a final run after training. When multiple data sources were used, the models had a different number of training samples (see Fig. 9), and all synthetically-trained models were trained for 50 epochs. Convergence was usually achieved already during the first ten epochs, indicating that there was sufficient training material for each model. We used the Adam optimizer, cross-entropy loss, a batch size of 32, a learning rate of \(1\cdot 10^{-4}\), and a weight decay of 0.63 applied after every 5 epochs.
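For concreteness, the gradient-penalty term of Eq. (3) above can be implemented in a few lines of PyTorch. The sketch below follows the standard WGAN-GP recipe [31] and assumes a conditional critic callable as `critic(images, labels)`; it is an illustration, not the authors' released code:

```python
import torch

def gradient_penalty(critic, real, fake, labels):
    """Gradient penalty of Eq. (3): (||grad_xhat D(xhat|y)||_2 - 1)^2,
    with xhat = eps*real + (1-eps)*fake and eps ~ U(0, 1) per sample."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat, labels)
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=x_hat, create_graph=True)
    grads = grads.flatten(start_dim=1)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Critic loss per Eq. (3), with 10 critic steps per generator step
# and lambda = 1 as in the paper:
#   loss_D = critic(fake, y).mean() - critic(real, y).mean() \
#            + 1.0 * gradient_penalty(critic, real, fake, y)
```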
To evaluate the synthetically-trained models against a clinically trained model, we additionally employed one CNN trained on clinical data, trained with the same parameters except for a higher learning rate of \(1\cdot 10^{-3}\). We used the following types of data augmentation during training: adding random pixel noise (with \(\sigma=1/255\)), adding a random intensity (with \(\sigma=5/255\)) across all pixels, horizontal flipping, and shifting images left or right (with \(\sigma=12.44\,\text{pixels}\)). All these types of data augmentation corresponded to real-world patient and scanning modifications: pixel noise corresponded to scanning and resolution errors, adding a constant intensity was equivalent to a re-scaling of the patient's head, horizontal flipping corresponded to mirroring the patient in real life, and shifting the image horizontally modeled an alignment error in which the patient effectively turns their head \(20^{\circ}\) left or right during recording. Figure 8: Classification training using the synthetic data, the validation data, and the test set. The CNN classifier using clinical data uses the validation data as a training set. Green: data, violet: classification models. Figure 9: Number of training samples in each classification scenario. The clinical scenario has fewer than 500 samples, while the synthetic scenarios have 4000, 8000, or 12000 samples. All the clinical 2D data, the GAN, and the statistical models were made publicly available1. We included a script to create synthetic samples for all three image modalities to allow users to create a large number of samples. The synthetic and clinical samples of this study are available on Zenodo [35]. Footnote 1: [https://github.com/KIT-IBT/craniosource-gan-pca-ssm](https://github.com/KIT-IBT/craniosource-gan-pca-ssm) ## 3 Results ### Image evaluation Fig. 10 shows images of each of the different data synthesis types compared with the clinical images. From a qualitative, visual examination, the synthetic images had color gradients, shapes, and intensities similar to those of the clinical images. Figure 10: Images of all three data modalities and clinical samples. From top to bottom, the image modalities: SSM, GAN, PCA, clinical. From left to right, the four classes: control, coronal, metopic, sagittal. GAN images appeared slightly noisier than the other images and did not show the left and right ear visible in the other images. From the quantitative comparison (see Fig. 11), ordinary PCA images were substantially and consistently more similar to the clinical images than the other two modalities (differences of the medians larger than 0.02), while SSM and GAN images were less similar, with the SSM images being the most dissimilar for the coronal class. ### Classification results All comparisons presented here were carried out on the untouched test set. According to the classification results for the synthetic training in Tab. 1, the SSM was the best single source of synthetic data, with an F1-score higher than 0.85. All combinations of synthetic models showed F1-scores higher than 0.8. The classifier on the clinical data scored an accuracy above 0.96, but was surpassed by the combination of GAN and SSM. The F1-score was highest for the clinical classification (0.9533), but the combination of SSM and GAN scored a very close F1-score (0.9518).
Including a second data source always improved the F1-score compared to a model with a single data source (adding PCA to GAN by 0.29, adding SSM to PCA by 0.16, adding SSM to GAN by 0.1). Figure 11: Boxplots of \(\mathrm{SSIM}_{\mathrm{cc}}\) (structural similarity index measure to closest clinical sample) of each class for each of the synthetic data generators. ## 4 Discussion Without being trained on a single clinical sample, the CNN trained on the combination of the SSM and the GAN was able to correctly classify 95 % of the data. Classification performance on the synthetic data proved to be equal to or even slightly better than training on the clinical data, at least for the data generated using the SSM and the GAN (and optionally also PCA). This suggests that certain combinations of synthetic data might indeed be sufficient for a classification algorithm to distinguish between types of craniosynostosis. Compared with classification results from other works, the purely synthetic-data-based classification performs in a similar range as, and sometimes even better than, other approaches on clinical data [15, 17, 16, 36, 13]. The SSM appeared to be the data source contributing the most to the improvement of the classifier: not only did it score highest among the individual data sources, but it was also present in the highest-scoring classification approaches. One reason for this might be that it was also the least similar data source for most of the classes. Due to the inherent modeling of the geometric shape in 3D, its 2D distance maps are always created from 3D samples, while PCA and the GAN could, in theory, create 2D images which do not correspond to a 3D shape. In contrast, the GAN-based classifiers only showed a good classification performance when combined with a different data modality, and the GAN's synthesized images seemed to show less pronounced visual features than the other two modalities. However, the SSIM\({}_{\text{cc}}\)-based metric shows no substantial difference between the GAN images and the other two modalities. One possible reason might be that the GAN learned features of multiple classes, so its images might still contain features derived from images of other classes. The PCA images were neither required nor detrimental for a good classification performance. \begin{table} \begin{tabular}{l c c} \hline \hline Synthetic data source & Accuracy & F1-score \\ \hline GAN & 0.4274 & 0.4930 \\ PCA & 0.7581 & 0.6997 \\ SSM & 0.9153 & 0.8547 \\ GAN-PCA & 0.8508 & 0.7823 \\ GAN-SSM & **0.9677** & **0.9518** \\ PCA-SSM & 0.9153 & 0.8595 \\ GAN-PCA-SSM & 0.9597 & 0.9445 \\ \hline Clinical & 0.9637 & 0.9533 \\ \hline \hline \end{tabular} \end{table} Table 1: CNN classification comparison on the test set, trained on different synthetic data sources. Boldface: best results among the data sources. According to the SSIM\({}_{\text{cc}}\), the PCA images were the most similar to their clinical counterparts. Overall, a combination of different data modalities seemed to be the key element for achieving a good classification performance. Both the SSM and PCA model the data according to a Gaussian distribution, while the GAN uses an unrestricted distribution model.
The different properties of modeling the underlying statistical distribution, with a Gaussian assumption (SSM and PCA) on the one hand and without an assumed distribution (GAN) on the other hand, might have led to a compensation of their respective disadvantages, increasing the overall performance of the combinations. One limitation of this study is the small dataset. As the clinical classification uses the same dataset for training and validation, this might make it prone to overfitting. However, the classification metrics achieved in this study were similar to those of a classification study on clinical data alone [17], which suggests that overfitting was not an issue. ## 5 Conclusion We showed that it is possible to train a classifier for different types of craniosynostosis based solely on artificial data synthesized by an SSM, PCA, and a GAN. Without having seen any clinical samples, a CNN was able to classify four types of head deformities with an F1-score higher than 0.95 and performed comparably to a classifier trained on clinical data. The key component in achieving good classification results was using multiple, distinct data generation models. Overall, the SSM was the data source contributing most to the classification performance. For the GAN, using a small image size and alternating between transposed convolutions and interpolations were identified as key elements for suitable image generation. The datasets and generators were made publicly available along with this work. We showed that clinical data is not required for the classification of craniosynostosis, paving the way for the cost-effective use of synthetic data in automated diagnosis systems.
2310.19803
ShanshuiDaDA: An Interactive, Generative System towards Chinese Shanshui Painting
Shanshui, which means mountain and water, is an East Asian traditional brush painting involving natural landscapes. This paper proposes an interactive and generative system based on a Generative Adversarial Network(GAN), which helps users draw Shanshui easily. We name this system and installation ShanshuiDaDA. ShanshuiDaDA is trained with CycleGAN and wrapped with a web-based interface. When participants scribble lines and sketch the landscape, the ShanshuiDaDA will assist them in generating and creating a Chinese "Shanshui" painting in real time.
Aven Le Zhou, Qiufeng Wang, Cheng-Hung Lo, Kaizhu Huang
2023-10-04T06:53:49Z
http://arxiv.org/abs/2310.19803v1
# ShanshuiDaDA: An Interactive, Generative System towards Chinese Shanshui Painting ###### Abstract "Shanshui" literally means "mountain and water"; also known as literati painting, it is an East Asian traditional type of brush painting of Chinese origin that uses ink and involves natural landscapes. In this paper, we propose an interactive and generative system based on a Generative Adversarial Network (GAN), which helps users draw "Shanshui" easily. We name this system and installation "ShanshuiDaDA". "ShanshuiDaDA" is trained with CycleGAN and wrapped with a web-based interface. When participants scribble lines and sketch a landscape, "ShanshuiDaDA" assists them in generating/creating a Chinese "Shanshui" painting in real time. ## 1 Introduction "Shanshui" literally means "mountain and water". Also known as literati painting, it is an East Asian type of brush painting of Chinese origin that uses ink and involves natural landscapes. As a key element of what the Chinese call the literati arts, or the amateur arts of the scholars, "Shanshui" was part of the scholars' education, and Chinese scholars were all trained in this form of fine art. Throughout its long history, this art form has been an essential part of the spiritual life of the entire community of ancient Chinese intellectuals [1]. But the tradition is vanishing. "DaDA" stands for "Design and Draw with AI", which explains the goal of exploring the possible role of artificial intelligence in (traditionally human-oriented) creative processes such as drawing and design. In this project, we choose "Shanshui", this unique Eastern traditional art form, to train "ShanshuiDaDA", which learns a mapping from hand sketches to "Shanshui". In addition, we deliver the project in an interactive flavor, aiming to give modern people, especially those who have grown up in Eastern culture, the ability to easily use "Shanshui" as an expressive medium and to enrich their spiritual life. Figure 1: "ShanshuiDaDA" System Overview **Generative:** Machine learning work on the sketch-to-Shanshui transition was not found within the scope of this project, and there are also very few available datasets. Thus, we collected and built the dataset ourselves. This included collecting "Shanshui" copies and pre-processing the scanned copies, as well as creating the hand sketches. All "Shanshui" paintings used in this project are high-resolution scans available on the open data platform of the National Palace Museum [2]. The sketch dataset was created through computer vision methods: after cropping off the frames, we applied a Canny filter to each painting and used the generated edges as hand-sketch data [3]. We then trained on the dataset with the CycleGAN algorithm and obtained the sketch-to-Shanshui model [4]. **Interactive:** In order to create a real-time, interactive experience, the hand sketch from the participant is fed into the pre-trained sketch-to-Shanshui model, which generates immediate feedback presented in the interface. The main architecture of this installation is a web-based client and server system. The "p5.js" web page works as the front-end interface and runs on an iPad (Figure 1.a). Participants draw on the canvas and submit the hand sketch to the back-end server, which runs in the cloud. The server handles the data communication and executes the command to run the pre-trained model in test mode. The generated painting is then posted back to the front-end interface. In the installation setup, the front-end interface (the iPad) is connected to a TV screen for better visuals (Figure 1.b).
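As an illustration of the sketch-data preparation described above, the following Python sketch derives an edge image from a cropped painting scan with OpenCV. The blur kernel, Canny thresholds, and file names are illustrative assumptions, not the exact values used in the project:

```python
import cv2

def painting_to_sketch(path, low=100, high=200):
    """Convert a cropped 'Shanshui' scan into an edge image that
    mimics a hand sketch, for use as CycleGAN training data."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)   # suppress paper/scan noise
    edges = cv2.Canny(img, low, high)        # Canny edge detector
    return 255 - edges                       # dark strokes on white canvas

cv2.imwrite("sketch.png", painting_to_sketch("shanshui_scan.png"))
```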
## 3 Result and Discussion In Figure 3, there are six selected pairs of hand sketches and the corresponding generated "Shanshui" paintings from participants. "ShanshuiDaDA" has clearly learned various different styles. Some can easily be mapped to existing ones; for example, the first column in Figure 3 closely resembles the "Qingbi Shanshui" style from before the Tang Dynasty. Others can hardly be categorized: the generated "Shanshui" in the third column, for instance, presents a strong ink-wash-painting style but rarely matches any previous style. In other words, "ShanshuiDaDA" has even created a brand-new style. The interactive, generative process in this project demonstrates a cooperative relationship between human and AI. The human creator not only trains the AI with artificial data but also benefits from the assistance of the AI. On the other side, the artificial intelligence not only learns from human-created data but also "teaches" the human creator, providing new approaches to creative goals. Figure 3: Selected Generated "Shanshui" Figure 2: Sample of Collected "Shanshui" Scans
2306.15707
Complexification of an infinite volume Coxeter tetrahedron
Let $T$ be an infinite volume Coxeter tetrahedron in three dimensional real hyperbolic space ${\bf H}^{3}_{\mathbb R}$ with two opposite right-angles and the other angles are all zeros. Let $G$ be the Coxeter group of $T$, so $$G=\left\langle \iota_1, \iota_2, \iota_3, \iota_4 \Bigg| \begin{array} {c} \iota_1^2= \iota_2^2 = \iota_3^2=\iota_4^2=id, \\ (\iota_1 \iota_3)^{2}=(\iota_2 \iota_4)^{2}=id \end{array}\right\rangle$$ as an abstract group. We study type-preserving representations $\rho: G \rightarrow \mathbf{PU}(3,1)$, where $\rho( \iota_{i})=I_{i}$ is a complex reflection fixing a complex hyperbolic plane in three dimensional complex hyperbolic space ${\bf H}^{3}_{\mathbb C}$ for $1 \leq i \leq 4$. The moduli space $\mathcal{M}$ of these representations is parameterized by $\theta \in [\frac{5 \pi}{6}, \pi]$. In particular, $\theta=\frac{5 \pi}{6}$ and $\theta=\pi$ degenerate to ${\bf H}^{2}_{\mathbb C}$-geometry and ${\bf H}^{3}_{\mathbb R}$-geometry respectively. Via Dirichlet domains, we show $\rho=\rho_{\theta}$ is a discrete and faithful representation of the group $G$ for all $\theta \in [\frac{5 \pi}{6}, \pi]$. This is the first nontrivial moduli space in three dimensional complex hyperbolic space that has been studied completely.
Jiming Ma
2023-06-27T06:28:39Z
http://arxiv.org/abs/2306.15707v1
# Complexification of an infinite volume Coxeter tetrahedron ###### Abstract. Let \(T\) be an infinite volume Coxeter tetrahedron in three dimensional real hyperbolic space \(\mathbf{H}_{\mathbb{R}}^{3}\) with two opposite right angles, while the other angles are all zero. Let \(G\) be the Coxeter group of \(T\), so \[G=\left\langle\iota_{1},\iota_{2},\iota_{3},\iota_{4}\;\middle|\;\begin{matrix}\iota_{1}^{2}=\iota_{2}^{2}=\iota_{3}^{2}=\iota_{4}^{2}=id,\\ (\iota_{1}\iota_{3})^{2}=(\iota_{2}\iota_{4})^{2}=id\end{matrix}\right\rangle\] as an abstract group. We study type-preserving representations \(\rho:G\to\mathbf{PU}(3,1)\), where \(\rho(\iota_{i})=I_{i}\) is a complex reflection fixing a complex hyperbolic plane in three dimensional complex hyperbolic space \(\mathbf{H}_{\mathbb{C}}^{3}\) for \(1\leq i\leq 4\). The moduli space \(\mathcal{M}\) of these representations is parameterized by \(\theta\in[\frac{5\pi}{6},\pi]\). In particular, \(\theta=\frac{5\pi}{6}\) and \(\theta=\pi\) degenerate to \(\mathbf{H}_{\mathbb{C}}^{2}\)-geometry and \(\mathbf{H}_{\mathbb{R}}^{3}\)-geometry respectively. Via Dirichlet domains, we show \(\rho=\rho_{\theta}\) is a discrete and faithful representation of the group \(G\) for all \(\theta\in[\frac{5\pi}{6},\pi]\). This is the first nontrivial moduli space in three dimensional complex hyperbolic space that has been studied completely. Key words and phrases: Complex hyperbolic geometry, Coxeter polytope, Dirichlet domain, complex reflection 2010 Mathematics Subject Classification: 20F55, 20H10, 57M60, 22E40, 51M10 Jiming Ma was partially supported by NSFC 12171092. ## 1. Introduction ### Motivation Hyperbolic \(n\)-space \(\mathbf{H}_{\mathbb{R}}^{n}\) is the unique complete simply connected Riemannian \(n\)-manifold with all sectional curvatures \(-1\). Complex hyperbolic \(n\)-space \(\mathbf{H}_{\mathbb{C}}^{n}\) is the unique complete simply connected Kahler \(n\)-manifold with all holomorphic sectional curvatures \(-1\). But the Riemannian sectional curvatures of a complex hyperbolic space are no longer constant; they are pinched between \(-1\) and \(-\frac{1}{4}\). This makes complex hyperbolic geometry much more difficult to study. The holomorphic isometry group of \(\mathbf{H}_{\mathbb{C}}^{n}\) is \(\mathbf{PU}(n,1)\), and the orientation preserving isometry group of \(\mathbf{H}_{\mathbb{R}}^{n}\) is \(\mathbf{PO}(n,1)\). Moreover, \(\mathbf{H}_{\mathbb{R}}^{n}\) is a totally geodesic submanifold of \(\mathbf{H}_{\mathbb{C}}^{n}\), and \(\mathbf{PO}(n,1)\) is a natural subgroup of \(\mathbf{PU}(n,1)\). Over the last sixty years the theory of Kleinian groups, that is, of deformations of groups into \(\mathbf{PO}(3,1)\), has flourished because of its close connections with low-dimensional topology and geometry. More precisely, building on work pioneered by Ahlfors and Bers in the 1960s, Thurston formulated a conjectural classification scheme for all hyperbolic \(3\)-manifolds with finitely generated fundamental groups in the late 1970s. The conjecture predicted that an infinite volume hyperbolic \(3\)-manifold with finitely generated fundamental group is uniquely determined by its topological type and its end invariants. Thurston's conjecture was settled by a series of works of many mathematicians, and this is one of the greatest breakthroughs in \(3\)-manifold theory. See Minsky's ICM talk [17] for related topics and the references. There are also some remarkable works on deformations of groups into \(\mathbf{PU}(2,1)\).
Let \(\Delta(p,q,r)\) be the abstract \((p,q,r)\) reflection triangle group with the presentation \[\Delta(p,q,r)=\langle\sigma_{1},\sigma_{2},\sigma_{3}|\sigma_{1}^{2}=\sigma_{2}^{2}=\sigma_{3}^{2}=(\sigma_{2}\sigma_{3})^{p}=(\sigma_{3}\sigma_{1})^{q}=(\sigma_{1}\sigma_{2})^{r}=id\rangle,\] where \(p,q,r\) are positive integers or \(\infty\) satisfying \[\frac{1}{p}+\frac{1}{q}+\frac{1}{r}<1.\] If \(p,q\) or \(r\) equals \(\infty\), then the corresponding relation does not appear. The ideal triangle group is the case that \(p=q=r=\infty\). A \((p,q,r)\) _complex hyperbolic triangle group_ is a representation \(\rho\) of \(\Delta(p,q,r)\) into \(\mathbf{PU}(2,1)\) where the generators fix complex lines; we denote \(\rho(\sigma_{i})\) by \(I_{i}\). Goldman and Parker initiated the study of the deformations of the ideal triangle group into \(\mathbf{PU}(2,1)\) in [13]. They gave an interval in the moduli space of complex hyperbolic ideal triangle groups such that for points in this interval the corresponding representations are discrete and faithful. They conjectured that a complex hyperbolic ideal triangle group \(\Gamma=\Delta_{\infty,\infty,\infty}=\langle I_{1},I_{2},I_{3}\rangle\) is discrete and faithful if and only if \(I_{1}I_{2}I_{3}\) is not elliptic. Schwartz proved Goldman-Parker's conjecture in [22, 23]. Richard Schwartz has also conjectured the necessary and sufficient condition for a general complex hyperbolic triangle group \(\Delta_{p,q,r}=\langle I_{1},I_{2},I_{3}\rangle<\mathbf{PU}(2,1)\) to be a discrete and faithful representation of \(\Delta(p,q,r)\). See Schwartz's ICM talk [24] for related topics and the references. Schwartz's conjecture has been proved in a few cases [19, 20]. From the above we know that one way to study discrete subgroups of \(\mathbf{PU}(n,1)\) is to deform a well-understood representation. Given a finitely presented abstract group \(G\) and a discrete, faithful representation \(\rho_{0}:G\to\mathbf{PU}(n,1)\), we may deform \(\rho_{0}\) to \(\rho_{1}:G\to\mathbf{PU}(n,1)\) along a path. We are interested in whether \(\rho_{1}\) is discrete and faithful. Moreover, even when \(\rho_{1}\) is not faithful, it may still be discrete. This case is very interesting, since if we are lucky, we have the opportunity to get a complex hyperbolic lattice at \(\rho_{1}\) [7, 8]. One of the most important questions in complex hyperbolic geometry is the existence of (infinitely many commensurable classes of) non-arithmetic complex hyperbolic lattices [16, 10]. This is notoriously difficult compared to its real hyperbolic counterpart [14]. As the result of forty years of hard work, only 22 commensurable classes of non-arithmetic complex hyperbolic lattices in \(\mathbf{PU}(2,1)\) [3, 7, 8] and 2 commensurable classes of non-arithmetic complex hyperbolic lattices in \(\mathbf{PU}(3,1)\) [3, 5] have been found. Both \(\mathbf{PO}(3,1)\) and \(\mathbf{PU}(2,1)\) are subgroups of \(\mathbf{PU}(3,1)\). It is reasonable to expect that deformations of some discrete groups in \(\mathbf{PO}(3,1)\) into the larger group \(\mathbf{PU}(3,1)\) may give some discrete, but not faithful, representations, which in turn have the opportunity to give some new \(\mathbf{H}_{\mathbb{C}}^{3}\)-lattices, as pioneered in [3, 7, 8]. The author remarks that, compared to results in \(\mathbf{H}_{\mathbb{C}}^{2}\)-geometry [25, 25, 6], any discrete deformation of a group \(G\) into \(\mathbf{PU}(3,1)\) with some accidental parabolic or elliptic element is also highly interesting.
For instance, at present we have no non-trivial example of a 5-manifold \(N\) which admits a uniformizable CR-structure. Here, by a nontrivial example we mean that \(N\) is neither diffeomorphic to an \(\mathbb{S}^{3}\)-bundle over a \(\mathbf{H}_{\mathbb{R}}^{2}\)-manifold \(F^{2}\), nor to the trivial \(\mathbb{S}^{2}\)-bundle over a \(\mathbf{H}_{\mathbb{R}}^{3}\)-manifold \(Y^{3}\), nor diffeomorphic to an \(\mathbb{S}^{1}\)-bundle over a \(\mathbf{H}_{\mathbb{C}}^{2}\)-manifold \(X^{4}\). In this paper, we study deformations of groups into \(\mathbf{PU}(3,1)\) via Dirichlet domains, which is much more difficult and richer than deformations of groups into \(\mathbf{PO}(3,1)\) and \(\mathbf{PU}(2,1)\). By this we mean: * It is well known that the space of discrete and faithful representations of a group into \(\mathbf{PO}(3,1)\) has fractal boundary in general. For example, the so-called Riley slice has a beautiful fractal boundary in \(\mathbb{C}\) (see Page VIII of [1]); * People tend to guess that the space of discrete and faithful representations of a group into \(\mathbf{PU}(2,1)\) has piecewise smooth boundary (at least when the deformation space has dimension two). For one of the tractable cases, the so-called complex Riley slice, which is \(2\)-dimensional, see [20]; * Moreover, there are very few results on the space of discrete and faithful representations of a group into \(\widehat{\mathbf{PU}(2,1)}\). Here \(\widehat{\mathbf{PU}(2,1)}\) is the full isometry group of \(\mathbf{H}_{\mathbb{C}}^{2}\). To the author's knowledge, the only complete classification result is in [9], where Falbel and Parker completed the study of the space of discrete and faithful representations of \(\mathbb{Z}_{2}*\mathbb{Z}_{3}\) into \(\widehat{\mathbf{PU}(2,1)}\) (with one additional parabolic element); the moduli space is \(1\)-dimensional. So deformations of groups into \(\mathbf{PU}(3,1)\) may have fractal boundaries in general (at least when the deformation spaces have large dimensions), but we are very far from understanding them. ### Main result of the paper In this article, we complexify a Coxeter tetrahedron in the real hyperbolic space \(\mathbf{H}_{\mathbb{R}}^{3}\) into the complex hyperbolic space \(\mathbf{H}_{\mathbb{C}}^{3}\). Using the upper half-space model of \(\mathbf{H}_{\mathbb{R}}^{3}\), a Coxeter tetrahedron in \(\mathbf{H}_{\mathbb{R}}^{3}\) is determined by four round circles \(C_{i}\) in \(\mathbb{C}\) for \(1\leq i\leq 4\). Here each circle \(C_{i}\) is the ideal boundary of a totally geodesic \(\mathbf{H}_{\mathbb{R}}^{2}\hookrightarrow\mathbf{H}_{\mathbb{R}}^{3}\), and \(\mathbb{C}\cup\infty\) is the ideal boundary of \(\mathbf{H}_{\mathbb{R}}^{3}\). In the right subfigure of Figure 1, there is a configuration of four round circles in \(\mathbb{C}\): * the red, purple, green and blue circles are \(C_{1}\), \(C_{2}\), \(C_{3}\) and \(C_{4}\) respectively; * \(C_{1}\) and \(C_{3}\) intersect perpendicularly at two points, and \(C_{2}\) and \(C_{4}\) also intersect perpendicularly at two points; * \(C_{i}\) is tangent to \(C_{i+1}\) for \(1\leq i\leq 4\) mod \(4\). This configuration of round circles in \(\mathbb{C}\) can be obtained as follows. We first take \(C_{1}\) and \(C_{3}\) with the same radius, intersecting perpendicularly at two points; see the red and green circles in the right subfigure of Figure 1.
We then take the original round circles \(C_{2}\) and \(C_{4}\) to be coincident, such that they are tangent to both \(C_{1}\) and \(C_{3}\) (the original \(C_{2}\) and \(C_{4}\) are both the dashed circle in Figure 1). We view \(C_{2}\) and \(C_{4}\) as having angle \(\pi\) in this configuration. Then we enlarge the radius of \(C_{2}\), moving its center to the left of the original one. We also enlarge \(C_{4}\) to the same radius as \(C_{2}\), with its center moved to the right of the original one. Moreover, each of \(C_{2}\) and \(C_{4}\) remains tangent to both \(C_{1}\) and \(C_{3}\). When the radii of \(C_{2}\) and \(C_{4}\) diverge to infinity, that is, in the limiting case where \(C_{2}\) and \(C_{4}\) are two vertical lines tangent to both \(C_{1}\) and \(C_{3}\), the angle between \(C_{2}\) and \(C_{4}\) converges to zero. So at an intermediate time, the angle between \(C_{2}\) and \(C_{4}\) is \(\frac{\pi}{2}\). It is well-known that there is a unique configuration of round circles in \(\mathbb{C}\) satisfying the above angle conditions, up to the \(\mathbf{PSL}(2,\mathbb{C})\)-action (equivalently, up to the \(\mathbf{PO}(3,1)\)-action); see [27]. So let \(T\) be the Coxeter tetrahedron in \(\mathbf{H}_{\mathbb{R}}^{3}\) which is determined by four totally geodesic hyperbolic planes in \(\mathbf{H}_{\mathbb{R}}^{3}\), such that the ideal boundaries of these hyperbolic planes form the given configuration of round circles in the right subfigure of Figure 1. Then \(T\) is an infinite volume Coxeter tetrahedron with two opposite angles \(\frac{\pi}{2}\), while the other angles are all zero; see the left subfigure of Figure 1. Let \(G\) be the Coxeter group of \(T\), that is, the reflection group whose four generators are the (real) reflections about the defining hyperbolic planes of \(T\). So \[G=\left\langle\iota_{1},\iota_{2},\iota_{3},\iota_{4}\;\middle|\;\begin{array}{c}\iota_{1}^{2}=\iota_{2}^{2}=\iota_{3}^{2}=\iota_{4}^{2}=id,\\ (\iota_{1}\iota_{3})^{2}=(\iota_{2}\iota_{4})^{2}=id\end{array}\right\rangle\] as an abstract group. The group \(G\) is abstractly isomorphic to \((\mathbb{Z}_{2}\oplus\mathbb{Z}_{2})*(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2})\). For technical reasons, we also need to consider a subgroup of \(G\), say \[K=\langle\iota_{1}\iota_{3},\iota_{2}\iota_{4},\iota_{1}\iota_{2}\rangle.\] \(K\) is an index two subgroup of \(G\), and \(K\) is isomorphic to \(\mathbb{Z}_{2}*\mathbb{Z}_{2}*\mathbb{Z}\). In this paper we study representations \(\rho:G\to\mathbf{PU}(3,1)\) such that \(\rho(\iota_{i})=I_{i}\) is a complex reflection fixing a complex hyperbolic plane in \(\mathbf{H}_{\mathbb{C}}^{3}\) for \(1\leq i\leq 4\), with the condition that \(I_{i}I_{i+1}\) is parabolic for \(1\leq i\leq 4\) mod \(4\). This is a natural complexification of \(T\), since we replace the real reflections by complex reflections. The moduli space \(\mathcal{M}\) is parameterized by \(\theta\in[\frac{5\pi}{6},\pi]\). In particular, \(\theta=\frac{5\pi}{6}\) and \(\theta=\pi\) degenerate to \(\mathbf{H}_{\mathbb{C}}^{2}\)-geometry and \(\mathbf{H}_{\mathbb{R}}^{3}\)-geometry respectively. By this we mean that the group \(\rho_{\frac{5\pi}{6}}(G)\) leaves a totally geodesic \(\mathbf{H}_{\mathbb{C}}^{2}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{3}\) invariant, and \(\rho_{\pi}(G)\) leaves a totally geodesic \(\mathbf{H}_{\mathbb{R}}^{3}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{3}\) invariant, respectively. See Section 3 for more details.
Using the Dirichlet domains of the \(\rho_{\frac{5\pi}{6}}(G)\)-action on \(\mathbf{H}_{\mathbb{C}}^{2}\) and the \(\rho_{\pi}(G)\)-action on \(\mathbf{H}_{\mathbb{R}}^{3}\) as guides, the main result of this paper is **Theorem 1.1**.: \(\rho_{\theta}\) _is a discrete and faithful representation of the group \(G\) into \(\mathbf{PU}(3,1)\) for any \(\theta\in[\frac{5\pi}{6},\pi]\)._ In fact there is a hidden and lucky \(\mathbb{Z}_{4}\)-symmetry of each representation \(\rho_{\theta}\). More precisely, there is an order-\(4\) regular elliptic element \(J\in\mathbf{PU}(3,1)\) such that \(I_{i}=JI_{i-1}J^{-1}\) for \(i=1,2,3,4\) mod \(4\). We denote \(A_{i}=I_{i}I_{i+1}\) for \(1\leq i\leq 4\). Then \[\rho_{\theta}(K)=\langle A_{1},A_{2},A_{3},A_{4}\rangle\,,\] such that \(A_{i}\) is parabolic and \(A_{i}A_{i+1}\) has order two for \(1\leq i\leq 4\). Figure 1. The infinite volume Coxeter tetrahedron \(T\) (left) in \(\mathbf{H}_{\mathbb{R}}^{3}\) and the ideal boundary of its defining hyperplane configuration (right). We consider the Dirichlet domain of \(\rho_{\theta}(K)\) with the fixed point of \(J\) as the center. The reason that we work with \(\rho_{\theta}(K)\) instead of \(\rho_{\theta}(G)\) is that there are subgroups of \(G\) which are infinite dihedral groups. Moreover, by Goldman-Parker [12], the Dirichlet domain of an infinite dihedral group tends to have infinitely many facets (depending on the chosen center of the Dirichlet domain). So it seems that the Dirichlet domain of \(\rho_{\theta}(G)\) is combinatorially much more difficult to study than that of \(\rho_{\theta}(K)\) for general \(\theta\). Let \[R=\{A_{1}^{\pm 1},\ A_{2}^{\pm 1},\ A_{3}^{\pm 1},\ A_{4}^{\pm 1},\ A_{1}A_{2},\ A_{2}A_{3}\}\] be a subset of \(\rho_{\theta}(K)\) consisting of ten elements. We will show that the partial Dirichlet domain \(D_{R}\) is in fact the Dirichlet domain of \(\rho_{\theta}(K)\), from which we obtain Theorem 1.1. Since the proof of Theorem 1.1 is rather involved, we prove it in the following steps: * We first consider the case \(\theta=\frac{5\pi}{6}\), that is, we consider \(\mathbf{H}_{\mathbb{C}}^{2}\)-geometry first. The proof for \(\theta=\frac{5\pi}{6}\) can also be viewed as a model of the proof of Theorem 1.1 for general \(\theta\). We remark that for technical reasons the proof for \(\theta\in(\frac{5\pi}{6},\pi]\) in Section 6 does not hold for \(\theta=\frac{5\pi}{6}\), so we must consider \(\theta=\frac{5\pi}{6}\) separately. Moreover, even though the discreteness and faithfulness of \(\rho_{\frac{5\pi}{6}}(K)<\mathbf{PU}(2,1)\) are much easier to establish than for a general \(\rho_{\theta}(K)<\mathbf{PU}(3,1)\), they are still highly nontrivial; * We then consider the case \(\theta=\pi\), that is, we consider \(\mathbf{H}_{\mathbb{R}}^{3}\)-geometry. We remind the reader that in the \(\mathbf{H}_{\mathbb{C}}^{2}\)-geometry and \(\mathbf{H}_{\mathbb{R}}^{3}\)-geometry of our groups, even though the combinatorics of the Dirichlet domains for \(\rho_{\frac{5\pi}{6}}(K)\) and \(\rho_{\pi}(K)\) are different (and they must be different, since one has dimension four and the other has dimension three), they are similar. Moreover, in both cases the words which contribute to the Dirichlet domains form the set \(R\). We also have that the intersection patterns of the Dirichlet domains of \(\rho_{\frac{5\pi}{6}}(K)\) and \(\rho_{\pi}(K)\) are the same. This can be seen by comparing Figure 6 and Figure 8 in Sections 4 and 5 respectively. This fact is also very lucky.
The part about \(\mathbf{H}_{\mathbb{R}}^{3}\)-geometry (Section 5) can logically be omitted for two reasons: it is simple, and the proof in Section 6 also covers the \(\mathbf{H}_{\mathbb{R}}^{3}\)-geometry of \(\rho_{\pi}(K)\). But the reader is encouraged to read this part before Section 6; * From the above, it is very reasonable to guess that for general \(\theta\in(\frac{5\pi}{6},\pi]\), the Dirichlet domain of \(\rho_{\theta}(K)\) is also given by the set \(R\). This is what we do in Section 6; that is, we prove the \(\mathbf{H}_{\mathbb{C}}^{3}\)-geometry part of Theorem 1.1 for general \(\theta\in(\frac{5\pi}{6},\pi]\). In the course of the proof of Theorem 1.1, we also find a method for parameterizing the intersection of three co-equidistant bisectors in \(\mathbf{H}_{\mathbb{C}}^{3}\). Moreover, for a group in Theorem 1.1, by the \(\mathbb{Z}_{4}\)-symmetry, there are two isometric types of 5-facets, say \(s_{12}\) and \(s_{13}\), and one isometric type of 4-facets, say \(s_{12}\cap s_{13}\cap s_{14}\). In particular, there is no quadruple intersection of bisectors. So we do not need the precise combinatorial structure of these facets. What we really need is that the above-mentioned facets are all non-empty, which is enough for the Poincare polyhedron theorem in our (lucky) case. See Section 6 for more details. But for more general subgroups of \(\mathbf{PU}(3,1)\) in future studies, quadruple intersections of bisectors are unavoidable. We pose the following question, which the author has found difficult to prove, but which we believe is fundamental in \(\mathbf{H}^{3}_{\mathbb{C}}\)-geometry. See Subsection 2.6 for the notations and background. **Question 1.2**.: Assume that any lifts of the four points \(q_{0}\), \(q_{1}\), \(q_{2}\) and \(q_{3}\) in \(\mathbf{H}^{3}_{\mathbb{C}}\) are linearly independent vectors in \(\mathbb{C}^{3,1}\), and that the triple intersection \[B(q_{0},q_{1})\cap B(q_{0},q_{2})\cap B(q_{0},q_{3})\] of the three bisectors \(B(q_{0},q_{i})\) in \(\mathbf{H}^{3}_{\mathbb{C}}\) for \(i=1,2,3\) is nonempty. Is the triple intersection then a \(3\)-ball? See Figure 11 for an example of the boundary of the triple intersection of bisectors, which seems to be a \(2\)-sphere. To our knowledge, Theorem 1.1 is the first moduli space in \(\mathbf{H}^{3}_{\mathbb{C}}\)-geometry that has been studied completely. There are in fact infinitely many Coxeter polytopes in \(\mathbf{H}^{3}_{\mathbb{R}}\). In this paper we only complexify the simplest one. In particular, finite volume Coxeter polytopes in \(\mathbf{H}^{3}_{\mathbb{R}}\) merit further complexification/deformation in \(\mathbf{H}^{3}_{\mathbb{C}}\)-geometry in the search for \(\mathbf{H}^{3}_{\mathbb{C}}\)-lattices. We hope this paper may attract more interest in this promising direction. **The paper is organized as follows.** In Section 2 we give well-known background material on complex hyperbolic geometry. In Section 3, we give the matrix representations of \(G\) into \(\mathbf{PU}(3,1)\) with complex reflection generators. Section 4 is devoted to the Dirichlet domain of \(\rho_{\frac{5\pi}{6}}(K)<\mathbf{PU}(2,1)\) acting on \(\mathbf{H}^{2}_{\mathbb{C}}\). The Dirichlet domain of \(\rho_{\pi}(K)<\mathbf{PO}(3,1)\) acting on \(\mathbf{H}^{3}_{\mathbb{R}}\) is studied in Section 5 (but omitting some details). With the warm-up in Sections 4 and 5, we prove Theorem 1.1 in Section 6. **Acknowledgement**: The author would like to thank his co-author Baohua Xie [15]; the author learned a lot from Baohua about complex hyperbolic geometry. ## 2. Background
The purpose of this section is to briefly introduce complex hyperbolic geometry. One can refer to Goldman's book [11] for more details. ### Complex hyperbolic space Let \(\mathbb{C}^{n,1}\) denote the vector space \(\mathbb{C}^{n+1}\) equipped with the Hermitian form of signature \((n,1)\): \[\langle\mathbf{z},\mathbf{w}\rangle=\mathbf{w}^{*}\cdot\mathbf{H}\cdot\mathbf{z},\] where \(\mathbf{w}^{*}\) is the Hermitian transpose of \(\mathbf{w}\), \[H=\begin{pmatrix}Id_{n}&0\\ 0&-1\end{pmatrix},\] and \(Id_{n}\) is the \(n\times n\) identity matrix. Then the Hermitian form divides \(\mathbb{C}^{n,1}\) into three parts \(V_{-},V_{0}\) and \(V_{+}\), which are \[V_{-} = \{\mathbf{z}\in\mathbb{C}^{n+1}-\{0\}:\langle\mathbf{z},\mathbf{z}\rangle<0\},\] \[V_{0} = \{\mathbf{z}\in\mathbb{C}^{n+1}-\{0\}:\langle\mathbf{z},\mathbf{z}\rangle=0\},\] \[V_{+} = \{\mathbf{z}\in\mathbb{C}^{n+1}-\{0\}:\langle\mathbf{z},\mathbf{z}\rangle>0\}.\] Let \[[\ ]:\mathbb{C}^{n+1}-\{0\}\longrightarrow\mathbb{C}\mathbf{P}^{n}\] be the canonical projection onto the complex projective space. Then the _complex hyperbolic space_ \(\mathbf{H}^{n}_{\mathbb{C}}\) is the image of \(V_{-}\) in \(\mathbb{C}\mathbf{P}^{n}\) by the map \([\ ]\). The _ideal boundary_
A \(\mathbb{C}^{1}\)-plane is also called a _complex geodesic_. The intersection of a \(\mathbb{C}^{k}\)-plane with \(\partial\mathbf{H}_{\mathbb{C}}^{n}=\mathbb{S}^{2n-1}\) is a smoothly embedded sphere \(\mathbb{S}^{2k-1}\), which is called a _\(\mathbb{C}^{k}\)-chain_. * Corresponding to the compatible real structures on \(\mathbb{C}^{n,1}\) are the real forms of \(\mathbf{H}_{\mathbb{C}}^{n}\). That is, the maximal totally real totally geodesic sub-spaces of \(\mathbf{H}_{\mathbb{C}}^{n}\), which have real dimension \(n\). A maximal totally real totally geodesic subspace of \(\mathbf{H}_{\mathbb{C}}^{n}\) is the fixed-point set of an anti-holomorphic isometry of \(\mathbf{H}_{\mathbb{C}}^{n}\). We have gave an example of anti-holomorphic isometry \(\iota\) in (2.2) of Subsection 2.1. For the usual real structure, this submanifold is the real hyperbolic \(n\)-space \(\mathbf{H}_{\mathbb{R}}^{n}\) with curvature \(-\frac{1}{4}\). Any totally geodesic subspace of a maximal totally real totally geodesic subspace is a totally real totally geodesic subspace, which is isometric to the real hyperbolic \(k\)-space \(\mathbf{H}_{\mathbb{R}}^{k}\) for some \(k\). Since the Riemannian sectional curvatures of the complex hyperbolic space are non-constant, there are no totally geodesic hyperplanes in \(\mathbf{H}_{\mathbb{C}}^{n}\) when \(n\geq 2\). Let \(L\) be a \((n-1)\)-dimensional complex plane in \(\mathbf{H}_{\mathbb{C}}^{n}\), a _polar vector_ of \(L\) is the unique vector (up to scaling) in \(\mathbb{C}^{n,1}\) perpendicular to this complex plane with respect to the Hermitian form. A polar vector of a \((n-1)\)-dimensional complex plane belongs to \(V_{+}\) and each vector in \(V_{+}\) corresponds to a \((n-1)\)-dimensional complex plane. Moreover, let \(L\) be a \((n-1)\)-dimensional complex plane with polar vector \(\mathbf{n}\in V_{+}\), then the _complex reflection_ fixing \(L\) with rotation angle \(\theta\) is given by \[I_{\mathbf{n},\theta}(\mathbf{z})=-\mathbf{z}+(1-\mathrm{e}^{\theta\mathrm{i} })\frac{\langle\mathbf{z},\mathbf{n}\rangle}{\langle\mathbf{n},\mathbf{n} \rangle}\mathbf{n}. \tag{2.3}\] The complex plane \(L\) is also called the _mirror_ of \(I_{\mathbf{n},\theta}\). In this paper, we only consider the case \(\theta=\pi\), then the complex reflection has order \(2\). ### The isometries The complex hyperbolic space is a Kahler manifold of constant holomorphic sectional curvature \(-1\). We denote by \(\mathbf{U}(n,1)\) the Lie group of the Hermitian form \(\langle\cdot,\cdot\rangle\) preserving complex linear transformations and by \(\mathbf{PU}(n,1)\) the group modulo scalar matrices. The group of holomorphic isometries of \(\mathbf{H}_{\mathbb{C}}^{n}\) is exactly \(\mathbf{PU}(n,1)\). It is sometimes convenient to work with \(\mathbf{SU}(n,1)\). The full isometry group of \(\mathbf{H}_{\mathbb{C}}^{n}\) is \[\widehat{\mathbf{PU}(n,1)}=\langle\mathbf{PU}(n,1),\iota\rangle,\] where \(\iota\) is the anti-holomorphic isometry in Subsection 2.1. Elements of \(\mathbf{SU}(n,1)\) fall into three types, according to the number and types of fixed points of the corresponding isometry. Namely, * an isometry is _loxodromic_ if it has exactly two fixed points on \(\partial\mathbf{H}_{\mathbb{C}}^{n}\); * an isometry is _parabolic_ if it has exactly one fixed point on \(\partial\mathbf{H}_{\mathbb{C}}^{n}\); * an isometry is _elliptic_ when it has (at least) one fixed point inside \(\mathbf{H}_{\mathbb{C}}^{n}\). 
An element \(A\in\mathbf{SU}(n,1)\) is called _regular_ whenever it has distinct eigenvalues; an elliptic \(A\in\mathbf{SU}(n,1)\) is called _special elliptic_ if it has a repeated eigenvalue. A complex reflection about a totally geodesic \(\mathbf{H}_{\mathbb{C}}^{n-1}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{n}\) is an example of a special elliptic element in \(\mathbf{SU}(n,1)\). ### Bisectors, spinal spheres and Dirichlet domain The Dirichlet domain (or Dirichlet polyhedron) is a fundamental tool for studying a discrete subgroup of \(\mathbf{SU}(n,1)\); it is defined in terms of (infinitely many) bisectors. **Definition 2.1**.: Given two distinct points \(q_{0}\) and \(q_{1}\) in \(\mathbf{H}_{\mathbb{C}}^{n}\) with the same norm (e.g. one could take lifts \(\mathbf{q_{0}},\mathbf{q_{1}}\) of them such that \(\langle\mathbf{q_{0}},\mathbf{q_{0}}\rangle=\langle\mathbf{q_{1}},\mathbf{q_{1}}\rangle=-1\)), the _bisector_ \(B(q_{0},q_{1})\) is the projectivization of the set of negative vectors \(\mathbf{x}\) in \(\mathbb{C}^{n,1}\) with \[|\langle\mathbf{x},\mathbf{q_{0}}\rangle|=|\langle\mathbf{x},\mathbf{q_{1}}\rangle|.\] The _spinal sphere_ of the bisector \(B(q_{0},q_{1})\) is the intersection of \(\partial\mathbf{H}_{\mathbb{C}}^{n}\) with the closure of \(B(q_{0},q_{1})\) in \(\overline{\mathbf{H}_{\mathbb{C}}^{n}}=\mathbf{H}_{\mathbb{C}}^{n}\cup\partial\mathbf{H}_{\mathbb{C}}^{n}\). The bisector \(B(q_{0},q_{1})\) is a topological \((2n-1)\)-ball, and its spinal sphere is a \((2n-2)\)-sphere. A bisector \(B(q_{0},q_{1})\) separates \(\mathbf{H}_{\mathbb{C}}^{n}\) into two half-spaces, one of them containing \(q_{0}\). **Definition 2.2**.: The _Dirichlet domain_ \(D_{\Gamma}\) for a discrete group \(\Gamma<\mathbf{PU}(n,1)\) centered at \(q_{0}\) is the intersection of the (closures of the) half-spaces containing \(q_{0}\) of all bisectors corresponding to elements in \(\Gamma\) not fixing \(q_{0}\). That is, \[D_{\Gamma}=\{p\in\mathbf{H}_{\mathbb{C}}^{n}\cup\partial\mathbf{H}_{\mathbb{C}}^{n}:|\langle\mathbf{p},\mathbf{q_{0}}\rangle|\leq|\langle\mathbf{p},g(\mathbf{q_{0}})\rangle|,\ \forall g\in\Gamma\text{ with }g(q_{0})\neq q_{0}\}.\] **Definition 2.3**.: For a subset \(R\subset\Gamma\), the _partial Dirichlet domain_ \(D_{R}\) for a discrete group \(\Gamma<\mathbf{PU}(n,1)\) centered at \(q_{0}\) is \[D_{R}=\{p\in\mathbf{H}_{\mathbb{C}}^{n}\cup\partial\mathbf{H}_{\mathbb{C}}^{n}:|\langle\mathbf{p},\mathbf{q_{0}}\rangle|\leq|\langle\mathbf{p},g(\mathbf{q_{0}})\rangle|,\ \forall g\in R\ \text{with}\ g(q_{0})\neq q_{0}\}.\] From the definitions, one can see that parts of bisectors form the boundary of a Dirichlet domain. For \(g\in\Gamma\), when the center \(q_{0}\) is clear from the context, we also denote \(B(q_{0},g(q_{0}))\) by \(B(g)\), and we will write \(s(g)=B(g)\cap D_{\Gamma}\), which is a \((2n-1)\)-facet of \(D_{\Gamma}\) in general. Facets of codimension one in \(D_{\Gamma}\) will also be called _sides_. In general, facets of codimension two in \(D_{\Gamma}\) will be called _ridges_. Facets of dimension one and zero in \(D_{\Gamma}\) will be called _edges_ and _vertices_ respectively. Moreover, a _bounded ridge_ is a ridge which does not intersect \(\partial\mathbf{H}_{\mathbb{C}}^{n}\), and if the intersection of a ridge \(r\) and \(\partial\mathbf{H}_{\mathbb{C}}^{n}\) is non-empty, then \(r\) is an _infinite ridge_.
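For a finite set \(R\), Definition 2.3 amounts to finitely many inequalities, so membership in a partial Dirichlet domain can be tested numerically. The following NumPy sketch (illustrative; the names and interface are ours) checks the defining condition, assuming normalized lifts and matrices preserving the Hermitian form:

```python
import numpy as np

H = np.diag([1.0, 1.0, 1.0, -1.0]).astype(complex)   # Hermitian form on C^{3,1}
herm = lambda z, w: np.conj(w) @ H @ z               # <z,w> = w* H z

def in_partial_dirichlet(p, q0, R, tol=1e-12):
    """Test the condition of Definition 2.3:
        |<p, q0>| <= |<p, g(q0)>|  for all g in the finite set R.
    p, q0: lifts of points of H^3_C (negative vectors in C^{3,1}),
    with q0 normalized so that <q0, q0> = -1; R: a list of 4x4 matrices
    preserving H, so each g @ q0 automatically has the same norm as q0."""
    assert herm(p, p).real < 0 and herm(q0, q0).real < 0
    d0 = abs(herm(p, q0))
    return all(d0 <= abs(herm(p, g @ q0)) + tol for g in R)
```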
Therefore a general method will be to guess that a partial Dirichlet domain \(D_{R}\) is in fact the Dirichlet domain \(D_{\Gamma}\), and then check it using the Poincare polyhedron theorem. The basic idea is that the sides of \(D_{\Gamma}\) should be paired by isometries, and the images of \(D_{\Gamma}\) under these so-called side-pairing maps should give a local tiling of \(\mathbf{H}_{\mathbb{C}}^{n}\). If they do (and if the quotient of \(D_{\Gamma}\) by the identification given by the side-pairing maps is complete), then the Poincare polyhedron theorem implies that the images of \(D_{\Gamma}\) actually give a global tiling of \(\mathbf{H}_{\mathbb{C}}^{n}\). So \(D_{\Gamma}\) is a fundamental domain of \(\Gamma\)-action on \(\mathbf{H}_{\mathbb{C}}^{n}\). Once a fundamental domain is obtained, one gets an explicit presentation of \(\Gamma\) in terms of the generators given by the side-pairing maps together with a generating set for the stabilizer \(p_{0}\). Where the relations correspond to so-called ridge cycles, which correspond to the local tilings bear each co-dimension two facet. For more on the Poincare polyhedron theorem, see [7, 20]. ### Hermitian cross product in \(\mathbf{H}_{\mathbb{C}}^{2}\) and double intersection of bisectors The intersections of bisectors in \(\mathbf{H}_{\mathbb{C}}^{2}\) are a little easier to describe than in higher dimensions. We show this in this subsection. For complex hyperbolic plane \(\mathbf{H}_{\mathbb{C}}^{2}\) with Hermitian form given by \[H=\begin{pmatrix}Id_{2}&0\\ 0&-1\end{pmatrix}.\] If \[p=\begin{bmatrix}p_{1}\\ p_{2}\\ p_{3}\end{bmatrix},\quad q=\begin{bmatrix}q_{1}\\ q_{2}\\ q_{3}\end{bmatrix}\] are two points in \(\mathbf{H}_{\mathbb{C}}^{2}\), then the _Hermitian cross product_ of \(p\) and \(q\) is a point in \(\mathbb{C}\mathbf{P}^{2}\) defined by \[p\boxtimes q=\left[\begin{array}{c}\overline{p}_{3}\overline{q}_{2}-\overline {p}_{2}\overline{q}_{3}\\ \overline{p}_{1}\overline{q}_{3}-\overline{p}_{3}\overline{q}_{1}\\ \overline{p}_{1}\overline{q}_{2}-\overline{p}_{2}\overline{q}_{1}\end{array} \right],\] see Page 45 of [11]. Any lift of \(p\boxtimes q\) is orthogonal to lifts of both \(p\) and \(q\) with respect to the Hermitian form \(\langle\cdot,\cdot\rangle\). It is a Hermitian version of the Euclidean cross product in \(\mathbb{R}^{3}\). In order to analyze \(2\)-faces of a Dirichlet polyhedron in \(\mathbf{H}_{\mathbb{C}}^{2}\), we must study the intersections of bisectors. From the detailed analysis in [11], we know that the intersection of two bisectors is usually not totally geodesic and it can be somewhat complicated. In this paper, we shall only consider the intersections of _co-equidistant bisectors_, i.e. bisectors equidistant from a common point. When \(p,q\) and \(r\) are not in a common complex line, that is, their lifts are linearly independent in \(\mathbb{C}^{2,1}\), then the locus \[B(p,q,r)=B(p,q)\cap B(p,r)\] of points in \(\mathbf{H}^{2}_{\mathbb{C}}\) equidistant to \(p\), \(q\) and \(r\) is a smooth disk that is not totally geodesic, and is often called a _Giraud disk_. The following property is crucial when studying fundamental domain. 
**Proposition 2.4** (Giraud).: _If \(p\), \(q\) and \(r\) in \(\mathbf{H}^{2}_{\mathbb{C}}\) are not in a common complex line, then the Giraud disk \(B(p,q,r)\) is contained in precisely three bisectors, namely \(B(p,q)\), \(B(q,r)\) and \(B(p,r)\)._ Note that checking whether an isometry maps a Giraud disk to another is equivalent to checking that corresponding triples of points are mapped to each other. In order to study Giraud disks, we will use _spinal coordinates_. The _complex spine_ of the bisector \(B(p,q)\) is the complex line through the two points \(p\) and \(q\). The _real spine_ of \(B(p,q)\) is the intersection of the complex spine with the bisector itself, which is a (real) geodesic. The real spine is the locus of points inside the complex spine which are equidistant from \(p\) and \(q\). Bisectors are not totally geodesic, but they have a very nice foliation by two different families of totally geodesic submanifolds. Mostow [18] showed that a bisector is the preimage of the real spine under the orthogonal projection onto the complex spine. The fibres of this projection are complex lines \(\mathbf{H}^{1}_{\mathbb{C}}\hookrightarrow\mathbf{H}^{2}_{\mathbb{C}}\) called the _complex slices_ of the bisector. Goldman [11] showed that a bisector is the union of all totally real totally geodesic planes containing the real spine. Such Lagrangian planes are called the _real slices_ or _meridians_ of the bisector. The complex slices of \(B(p,q)\) are given explicitly by choosing a lift \(\mathbf{p}\) (resp. \(\mathbf{q}\)) of \(p\) (resp. \(q\)). When \(p,q\in\mathbf{H}^{2}_{\mathbb{C}}\), we simply choose lifts such that \(\langle\mathbf{p},\mathbf{p}\rangle=\langle\mathbf{q},\mathbf{q}\rangle\). The complex slices of \(B(p,q)\) are obtained as the set of negative lines \((\overline{z}\mathbf{p}-\mathbf{q})^{\perp}\) in \(\mathbf{H}^{2}_{\mathbb{C}}\) for some arc of values of \(z\in\mathbb{S}^{1}\), which is determined by requiring that \(\langle\overline{z}\mathbf{p}-\mathbf{q},\overline{z}\mathbf{p}-\mathbf{q} \rangle>0\). Since a point of the bisector is on precisely one complex slice, we can parameterize the _Giraud torus_\(\hat{B}(p,q,r)\) in \(\mathbf{P}^{2}_{\mathbb{C}}\) by \((z_{1},z_{2})=(e^{it_{1}},e^{it_{2}})\in\mathbb{S}^{1}\times\mathbb{S}^{1}\) via \[V(z_{1},z_{2})=(\overline{z}_{1}\mathbf{p}-\mathbf{q})\boxtimes(\overline{z} _{2}\mathbf{p}-\mathbf{r})=\mathbf{q}\boxtimes\mathbf{r}+z_{1}\mathbf{r} \boxtimes\mathbf{p}+z_{2}\mathbf{p}\boxtimes\mathbf{q}. \tag{2.4}\] The Giraud disk \(B(p,q,r)\) corresponds to \((z_{1},z_{2})\in\mathbb{S}^{1}\times\mathbb{S}^{1}\) with \[\langle V(z_{1},z_{2}),V(z_{1},z_{2})\rangle<0.\] It is well known that this region is a topological disk if it is non empty [11]. The boundary at infinity \(\partial B(p,q,r)\) is a circle, given in spinal coordinates by the equation \[\langle V(z_{1},z_{2}),V(z_{1},z_{2})\rangle=0.\] Note that the choices of different lifts of \(p\), \(q\) and \(r\) affect the spinal coordinates by rotation on each of the \(\mathbb{S}^{1}\)-factors. A defining equation for the trace of another bisector \(B(u,v)\) on the Giraud disk \(B(p,q,r)\) can be written in the form \[|\langle V(z_{1},z_{2}),\mathbf{u}\rangle|=|\langle V(z_{1},z_{2}),\mathbf{v }\rangle|,\] provided that \(\mathbf{u}\) and \(\mathbf{v}\) are suitably chosen lifts. The expressions \(\langle V(z_{1},z_{2}),\mathbf{u}\rangle\) and \(\langle V(z_{1},z_{2}),\mathbf{v}\rangle\) are affine in \(z_{1}\) and \(z_{2}\). 
This triple bisector intersection can be parameterized fairly explicitly, because one can solve the equation \[|\langle V(z_{1},z_{2}),\mathbf{u}\rangle|^{2}=|\langle V(z_{1},z_{2}),\mathbf{v }\rangle|^{2}\] for one of the variables \(z_{1}\) or \(z_{2}\) simply by solving a quadratic equation. A detailed explanation of how this works can be found in [4, 6, 7]. Triple Hermitian cross product in \(\mathbf{H}^{3}_{\mathbb{C}}\) and triple intersection of bisectors Now we consider triple intersections of bisectors in \(\mathbf{H}^{3}_{\mathbb{C}}\). Let \(q_{0}\), \(q_{1}\), \(q_{2}\) and \(q_{3}\) be four points in \(\mathbf{H}^{3}_{\mathbb{C}}\). We take lifts \(\mathbf{q}_{i}\) of \(q_{i}\) such that \[\langle\mathbf{q_{0}},\mathbf{q_{0}}\rangle=\langle\mathbf{q_{1}},\mathbf{q_ {1}}\rangle=\langle\mathbf{q_{2}},\mathbf{q_{2}}\rangle=\langle\mathbf{q_{3}},\mathbf{q_{3}}\rangle.\] We also assume \(\{\mathbf{q_{0}},\mathbf{q_{1}},\mathbf{q_{2}},\mathbf{q_{3}}\}\) are linearly independent as vectors in \(\mathbb{C}^{3,1}\). We have three bisectors \(B(q_{0},q_{1})\), \(B(q_{0},q_{2})\) and \(B(q_{0},q_{3})\) in \(\mathbf{H}^{3}_{\mathbb{C}}\). We now parameterize the triple intersection \[B(q_{0},q_{1})\cap B(q_{0},q_{2})\cap B(q_{0},q_{3}).\] The bisector \(B(q_{0},q_{1})\) has a decomposition into a set of complex slices (each of them is a totally geodesic \(\mathbf{H}^{2}_{\mathbb{C}}\hookrightarrow\mathbf{H}^{3}_{\mathbb{C}}\)), these complex slices are obtained as the set of negative lines in \((w\mathbf{q_{0}}-\mathbf{q_{1}})^{\perp}\) for some arc of values \(w\in\mathbb{S}^{1}\), see [11, 4]. Similarly, \(B(q_{0},q_{2})\) and \(B(q_{0},q_{3})\) also have decompositions into a set of complex slices which are parameterized by \((w\mathbf{q_{0}}-\mathbf{q_{2}})^{\perp}\) and \((w\mathbf{q_{0}}-\mathbf{q_{3}})^{\perp}\) for some arc of values \(w\in\mathbb{S}^{1}\). Consider triple Hermitian cross product with respect to the Hermitian form \(H\). Recall that for three linearly independent vectors \[a=\left(a_{1},a_{2},a_{3},a_{4}\right)^{t},\ b=\left(b_{1},b_{2},b_{3},b_{4} \right)^{t},\ c=\left(c_{1},c_{2},c_{3},c_{4}\right)^{t} \tag{2.5}\] in \(\mathbb{R}^{4}\), the _generalized cross product_\(a\times b\times c\) of \(a,b,c\) is a vector \[d=\left(d_{1},d_{2},d_{3},d_{4}\right)^{t}.\] Where \(d_{i}\) is the coefficient of \(e_{i}\) in the determinant of the matrix \[\begin{pmatrix}a_{1}&b_{1}&c_{1}&e_{1}\\ a_{2}&b_{2}&c_{2}&e_{2}\\ a_{3}&b_{3}&c_{3}&e_{3}\\ a_{4}&b_{4}&c_{4}&e_{4}\end{pmatrix}.\] The vector \(d=a\times b\times c\) is perpendicular to \(a\), \(b\) and \(c\) with respect to the standard Euclidean metric in \(\mathbb{R}^{4}\). Now for three linearly independent vectors \(a\), \(b\), \(c\) as in (2.5) in \(\mathbb{C}^{3,1}\) with \(a_{i},b_{i},c_{i}\in\mathbb{C}\), the _triple Hermitian cross product_\(a\boxtimes b\boxtimes c\) of \(a\), \(b\), \(c\) is a vector \[d=\left(d_{1},d_{2},d_{3},d_{4}\right)^{t}.\] Where by definition \[d=\begin{pmatrix}-1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&1\end{pmatrix}\cdot(\bar{a}\times\bar{b}\times\bar{c}),\] here for example \[\bar{a}=\left(\overline{a_{1}},\overline{a_{2}},\overline{a_{3}},\overline{a_ {4}}\right)^{t}\] with \(\overline{a_{i}}\) the complex conjugate of \(a_{i}\). 
Then \(d=a\boxtimes b\boxtimes c\) is the vector \[\left(\det\begin{pmatrix}\overline{a_{2}}&\overline{b_{2}}&\overline{c_{2}}\\ \overline{a_{3}}&\overline{b_{3}}&\overline{c_{3}}\\ \overline{a_{4}}&\overline{b_{4}}&\overline{c_{4}}\end{pmatrix},-\det\begin{pmatrix} \overline{a_{1}}&\overline{b_{1}}&\overline{c_{1}}\\ \overline{a_{3}}&\overline{b_{3}}&\overline{c_{3}}\\ \overline{a_{4}}&\overline{b_{4}}&\overline{c_{4}}\end{pmatrix},\det\begin{pmatrix} \overline{a_{1}}&\overline{b_{1}}&\overline{c_{1}}\\ \overline{a_{2}}&\overline{b_{2}}&\overline{c_{2}}\\ \overline{a_{4}}&\overline{b_{4}}&\overline{c_{4}}\end{pmatrix},\det\begin{pmatrix} \overline{a_{1}}&\overline{b_{1}}&\overline{c_{1}}\\ \overline{a_{2}}&\overline{b_{2}}&\overline{c_{2}}\\ \overline{a_{3}}&\overline{b_{3}}&\overline{c_{3}}\end{pmatrix}\right)^{t}.\] By direct calculation, we have \[\langle a,a\boxtimes b\boxtimes c\rangle=\langle b,a\boxtimes b\boxtimes c \rangle=\langle c,a\boxtimes b\boxtimes c\rangle=0\] with respect to the Hermitioan form \(H\) on \(\mathbb{C}^{3,1}\). Since a point in a bisector \(B\) lies in precisely one complex slice of \(B\), we can parameterize \(B(q_{0},q_{1})\cap B(q_{0},q_{2})\cap B(q_{0},q_{3})\) by \[V(w_{1},w_{2},w_{3})=(\overline{w_{1}}\mathbf{q_{0}}-\mathbf{q_{1}})\boxtimes (\overline{w_{2}}\mathbf{q_{0}}-\mathbf{q_{2}})\boxtimes(\overline{w_{3}} \mathbf{q_{0}}-\mathbf{q_{3}})\] with \((w_{1},w_{2},w_{3})\in\mathbb{S}^{1}\times\mathbb{S}^{1}\times\mathbb{S}^{1}\). Up to sign and rewriting, we can parameterize \(B(q_{0},q_{1})\cap B(q_{0},q_{2})\cap B(q_{0},q_{3})\) by \[V(z_{1},z_{2},z_{3})=\mathbf{q_{1}}\boxtimes\mathbf{q_{2}}\boxtimes\mathbf{q _{3}}+z_{1}\cdot\mathbf{q_{0}}\boxtimes\mathbf{q_{2}}\boxtimes\mathbf{q_{3}}+ z_{2}\cdot\mathbf{q_{0}}\boxtimes\mathbf{q_{1}}\boxtimes\mathbf{q_{3}}+z_{3} \cdot\mathbf{q_{0}}\boxtimes\mathbf{q_{1}}\boxtimes\mathbf{q_{2}} \tag{2.6}\] with \((z_{1},z_{2},z_{3})\in\mathbb{S}^{1}\times\mathbb{S}^{1}\times\mathbb{S}^{1}\) such that \(\langle V(z_{1},z_{2},z_{3}),V(z_{1},z_{2},z_{3})\rangle\) is negative. We remark that it is reasonable to guess that for fixed \(\{q_{0},q_{1},q_{2},q_{3}\}\) in \(\mathbf{H}_{\mathbb{C}}^{3}\), if there is \((z_{1},z_{2},z_{3})\in\mathbb{S}^{1}\times\mathbb{S}^{1}\times\mathbb{S}^{1}\) such that \(V=V(z_{1},z_{2},z_{3})\) is negative in (2.6), then all \((z_{1},z_{2},z_{3})\in\mathbb{S}^{1}\times\mathbb{S}^{1}\times\mathbb{S}^{1}\) satisfying this condition should be a \(3\)-ball in \(\mathbb{S}^{1}\times\mathbb{S}^{1}\times\mathbb{S}^{1}\). But the author has difficult to show this. See Question 1.2. ## 3. The moduli space of representations of \(G\) into \(\mathbf{PU}(3,1)\) In this section, we give the matrix representations of \(G\) into \(\mathbf{PU}(3,1)\) with complex reflection generators and \(\iota_{i}\iota_{i+1}\) is mapped to a parabolic element for \(i=1,2,3,4\) mod \(4\). Let \(G\) be the abstract group with the presentation \[G=\left\langle\iota_{1},\iota_{2},\iota_{3},\iota_{4}\right|\left.\begin{array} []{l}\iota_{1}^{2}=\iota_{2}^{2}=\iota_{3}^{2}=\iota_{4}^{2}=id,\\ (\iota_{1}\iota_{3})^{2}=(\iota_{2}\iota_{4})^{2}=id\end{array}\right\rangle.\] \(G\) is isomorphic to \((\mathbb{Z}_{2}\oplus\mathbb{Z}_{2})*(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2})\) abstractly. Then \(K=\langle\iota_{1}\iota_{3},\iota_{2}\iota_{4},\iota_{1}\iota_{2}\rangle\) is an index two subgroup of \(G\), which is isomorphic to \(\mathbb{Z}_{2}*\mathbb{Z}_{2}*\mathbb{Z}\). 
The Gram matrices of four complex hyperbolic planes in \(\mathbf{H}_{\mathbb{C}}^{3}\) for the group \(G\) Recall that for two \(\mathbb{C}\)-planes \(\mathcal{P}\) and \(\mathcal{P}^{\prime}\) in \(\mathbf{H}_{\mathbb{C}}^{3}\) with polar vectors \(n\) and \(n^{\prime}\) such that \(\langle n,n\rangle=\langle n^{\prime},n^{\prime}\rangle=1\): * If \(\mathcal{P}\) and \(\mathcal{P}^{\prime}\) intersect in a \(\mathbb{C}\)-line in \(\mathbf{H}_{\mathbb{C}}^{3}\), then the angle \(\alpha\) between them has \(|\langle n,n^{\prime}\rangle|=\cos(\alpha)\); * If \(\mathcal{P}\) and \(\mathcal{P}^{\prime}\) are hyper-parallel in \(\mathbf{H}_{\mathbb{C}}^{3}\), then the distance \(d\) between them has \(|\langle n,n^{\prime}\rangle|=\cosh\frac{d}{2}\); * If \(\mathcal{P}\) and \(\mathcal{P}^{\prime}\) are asymptotic in \(\mathbf{H}_{\mathbb{C}}^{3}\), then \(|\langle n,n^{\prime}\rangle|=1\). We consider \(\mathbb{C}\)-planes \(\mathcal{P}_{i}\) for \(i=1,2,3,4\), so each \(\mathcal{P}_{i}\) is a totally geodesic \(\mathbf{H}_{\mathbb{C}}^{2}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{3}\). Let \(n^{\prime}_{i}\) be the polar vector of \(\mathcal{P}_{i}\) in \(\mathbb{C}\mathbf{P}^{3}-\overline{\mathbf{H}_{\mathbb{C}}^{3}}\). We assume * the angle between \(\mathcal{P}_{1}\) and \(\mathcal{P}_{3}\) is \(\frac{\pi}{2}\); * the angle between \(\mathcal{P}_{2}\) and \(\mathcal{P}_{4}\) is \(\frac{\pi}{2}\); * the planes \(\mathcal{P}_{i}\) and \(\mathcal{P}_{i+1}\) are asymptotic for \(i=1,2,3,4\) mod \(4\). Then we can normalize the Gram matrix of \(\{n^{\prime}_{i}\}_{i=1}^{4}\) into the following form \[\mathcal{G}^{\prime}=(\langle n^{\prime}_{i},n^{\prime}_{j}\rangle)_{1\leq i,j \leq 4}=\begin{pmatrix}1&1&0&\mathrm{e}^{-4\theta\mathrm{i}}\\ 1&1&1&0\\ 0&1&1&1\\ \mathrm{e}^{4\theta\mathrm{i}}&0&1&1\end{pmatrix}.\] Where up to anti-holomorphic isometry of \(\mathbf{H}^{3}_{\mathbb{C}}\), we may assume \(4\theta\in[a,a+\pi]\) for any \(a\in\mathbb{R}\), that is \(\theta\in[\frac{a}{4},\frac{a+\pi}{4}]\). Moreover \(4\theta=2k\pi\) for \(k\in\mathbb{Z}\) corresponds to the case of an infinite volume \(3\)-dimensional real hyperbolic Coxeter tetrahedron, see [27]. By [2], for a Gram matrix above, there is a unique configuration of four \(\mathbb{C}\)-planes \(\mathcal{P}_{i}\) in \(\mathbf{H}^{3}_{\mathbb{C}}\) for \(i=1,2,3,4\) up to \(\mathbf{PU}(3,1)\) realizing the Gram matrix. In fact, we can re-normalize \(n^{\prime}_{i}\) above into \[n_{1}=n^{\prime}_{1},\ n_{2}=\mathrm{e}^{-\theta\mathrm{i}}\cdot n^{\prime}_{ 2},\ n_{3}=\mathrm{e}^{-2\theta\mathrm{i}}\cdot n^{\prime}_{3},\ n_{4}= \mathrm{e}^{-3\theta\mathrm{i}}\cdot n^{\prime}_{4},\] Then we can re-normalize the Gram matrix to the following form \[\mathcal{G}=(\langle n_{i},n_{j}\rangle)_{1\leq i,j\leq 4}=\begin{pmatrix}1& \mathrm{e}^{\theta\mathrm{i}}&0&\mathrm{e}^{-\theta\mathrm{i}}\\ \mathrm{e}^{-\theta\mathrm{i}}&1&\mathrm{e}^{\theta\mathrm{i}}&0\\ 0&\mathrm{e}^{-\theta\mathrm{i}}&1&\mathrm{e}^{\theta\mathrm{i}}\\ \mathrm{e}^{\theta\mathrm{i}}&0&\mathrm{e}^{-\theta\mathrm{i}}&1\end{pmatrix}. \tag{3.1}\] From now on we may assume that \(\theta\in[\frac{3\pi}{4},\pi]\). 
It is easy to see \[\det(\mathcal{G})=-1-2\cos(4\theta),\] and the eigenvalues of \(\mathcal{G}\) are \[1\pm 2\cos(\theta),\ 1\pm 2\sin(\theta).\] We have the followings: * when \(\theta=\pi\), all entries of \(\mathcal{G}\) are reals, so it degenerates to \(\mathbf{H}^{3}_{\mathbb{R}}\)-geometry; * when \(\theta=\frac{5\pi}{6}\), \(\mathcal{G}\) has eigenvalues \[0,\ 2,\ 1+\sqrt{3},\ 1-\sqrt{3},\] so it degenerates to \(\mathbf{H}^{2}_{\mathbb{C}}\)-geometry; * when \(\theta\in(\frac{5\pi}{6},\pi]\), \(\mathcal{G}\) has signature \((3,1)\), we have \(\mathbf{H}^{3}_{\mathbb{C}}\)-geometry; * when \(\theta\in[\frac{3\pi}{4},\frac{5\pi}{6})\), \(\mathcal{G}\) has signature \((2,2)\). We will not study them in this paper. So our moduli space is \[\mathscr{M}=\left[\frac{5\pi}{6},\pi\right]. \tag{3.2}\] From the Gram matrix (3.1), there is a \(\mathbb{Z}_{4}\)-symmetry of the configurations of \(\mathbb{C}\)-planes above. Take \[J=\begin{pmatrix}-1&0&0&0\\ 0&\mathrm{i}&0&0\\ 0&0&-\mathrm{i}&0\\ 0&0&0&1\end{pmatrix}.\] Then \(J^{*}HJ=H\), \(J\) has order \(4\) with fixed point \[p_{0}=\begin{bmatrix}0,0,0,1\end{bmatrix}^{t} \tag{3.3}\] in \(\mathbf{H}^{3}_{\mathbb{C}}\). Note that \(\det(J)=-1\), so \(J\in\mathbf{U}(3,1)\), but \(J\notin\mathbf{PU}(3,1)\), and \(\mathrm{e}^{\frac{\pi i}{4}}\cdot\mathrm{J}\in\mathbf{PU}(3,1)\). But we can also use \(J\) to study the \(\mathbb{Z}_{4}\)-symmetry for simplicity of notations. We take \[n_{1}=\left[\begin{array}{c}\dfrac{\sqrt{1-2\cos(\theta)}}{2}\\ \dfrac{\sqrt{1-2\sin(\theta)}}{2}\\ \dfrac{\sqrt{1+2\sin(\theta)}}{2}\\ \dfrac{\sqrt{-1-2\cos(\theta)}}{2}\end{array}\right] \tag{3.4}\] in \(\mathbb{CP}^{3}-\overline{\mathbf{H}^{3}_{\mathbb{C}}}\). When \(\theta\in[\frac{5\pi}{6},\pi]\), all entries of \(n_{1}\) are non-negative reals. Let \(n_{i}\) be \(J^{i-1}(n_{1})\) for \(i=2,3,4\). Then \[(\langle n_{i},n_{j}\rangle)_{1\leq i,j\leq 4}\] is \(\mathcal{G}\) in (3.1). By [2], there is a one-to-one correspondence between the Gram matrix and configurations \(\mathbb{C}\)-planes, so there is a \(\mathbb{Z}_{4}\)-symmetry of the configurations \(\mathbb{C}\)-planes above. _Remark 3.1_.: The author notes that the \(\mathbb{Z}_{4}\)-symmetry above is lucky since it will simplify the calculations dramatically. The \(\mathbb{Z}_{4}\)-symmetry is also a little mysterious since the author can not explain it from geometric point of view. Now for any \(\theta\in\mathcal{M}\), let \(\rho_{\theta}:G\to\mathbf{PU}(3,1)\) be the representation with \(\rho_{\theta}(\iota_{i})=I_{i}\) the order two \(\mathbb{C}\)-reflection about \(\mathcal{P}_{i}\). We also denote by \[\Gamma=\Gamma_{\theta}=\langle I_{1},I_{2},I_{3},I_{4}\rangle.\] * When \(\theta=\frac{5\pi}{6}\), \(\Gamma\) preserves a totally geodesic \(\mathbf{H}^{2}_{\mathbb{C}}\hookrightarrow\mathbf{H}^{3}_{\mathbb{C}}\) invariant, so we will view the representation as degenerating to a representation into \(\mathbf{PU}(2,1)\). But the discreteness and faithfulness of this representation are still non-trivial, see Section 4 for more details. * When \(\theta=\pi\), \(\Gamma\) preserves a totally geodesic \(\mathbf{H}^{3}_{\mathbb{R}}\hookrightarrow\mathbf{H}^{3}_{\mathbb{C}}\) invariant. We have a 3-dimensional real hyperbolic Coxeter tetrahedron, so both the discreteness and faithfulness of this representation are trivial. The Dirichlet domain of this group representation in \(\mathbf{H}^{3}_{\mathbb{R}}\) is not difficult, see Section 5 for more details. 
* When we decrease \(\theta\) from \(\pi\) to \(\frac{5\pi}{6}\), we still have a representation of \(G\) into \(\mathbf{PU}(3,1)\), but the discreteness of the representation is highly non-trivial. This is what we will do in the paper. ### Matrices representations of the group \(G\) into \(\mathbf{PU}(3,1)\) For each \(\theta\in\mathcal{M}\), we now give the matrix presentation of \(I_{i}\), the order-two complex reflection with mirror \(\mathcal{P}_{i}\) for \(i=1,2,3,4\). From (2.3) and the vector \(n_{1}\) in (3.4), it is easy to see \[I_{1}=\left(\begin{array}{cccc}-\dfrac{1+2\cos(\theta)}{2}&a_{12}&a_{13}&a_{14 }\\ a_{12}&-\dfrac{1+2\sin(\theta)}{2}&a_{23}&a_{24}\\ a_{13}&a_{23}&\dfrac{-1+2\sin(\theta)}{2}&a_{34}\\ -a_{14}&-a_{24}&-a_{34}&\dfrac{-1+2\cos(\theta)}{2}\end{array}\right),\] where \[a_{12}=\dfrac{\sqrt{(1-2\sin(\theta))\cdot(1-2\cos(\theta))}}{2},\] \[a_{13}=\dfrac{\sqrt{(1+2\sin(\theta))\cdot(1-2\cos(\theta))}}{2},\] \[a_{14}=-\dfrac{\sqrt{(2\cos(\theta)-1)\cdot(1+2\cos(\theta))}}{2},\] \[a_{23}=\dfrac{\sqrt{(1+2\sin(\theta))\cdot(1-2\sin(\theta))}}{2},\] \[a_{24}=-\dfrac{\sqrt{(2\sin(\theta)-1)\cdot(1+2\cos(\theta))}}{2},\] and \[a_{34}=-\dfrac{\sqrt{(1+2\sin(\theta))\cdot(-1-2\cos(\theta))}}{2}.\] Since \(\theta\in[\frac{5\pi}{6},\pi]\), all the terms \(a_{ij}\) above are real. Then \(I_{1}\) is the order two \(\mathbb{C}\)-reflection with mirror \(\mathcal{P}_{1}\). Let \(I_{i}=JI_{i-1}J^{-1}\), then \(I_{i}\) is the order two \(\mathbb{C}\)-reflection with mirror \(\mathcal{P}_{i}\) for \(i=2,3,4\). By direct calculation we have \[\det(I_{1})=\det(I_{2})=\det(I_{3})=\det(I_{4})=-1,\] and \[I_{i}^{*}HI_{i}=H\] for \(i=1,2,3,4\), here \(I_{i}^{*}\) is the Hermitian transpose of \(I_{i}\). Moreover \[(I_{1}I_{3})^{2}=(I_{2}I_{4})^{2}=id,\] and \(I_{1}I_{2}\) is parabolic, so we get a representation \(\rho\) of \(G\) into \(\mathbf{PU}(3,1)\) (with additional condition that \(I_{i}I_{i+1}\) is parabolic for \(i=1,2,3,4\) mod \(4\)). Note that \(\det(I_{i})=-1\), so \(I_{i}\in\mathbf{U}(3,1)\), but \(I_{i}\notin\mathbf{PU}(3,1)\), and \(\mathrm{e}^{\frac{\pi i}{4}}\cdot I_{i}\in\mathbf{PU}(3,1)\). But we can also use the matrices \(I_{i}\) to study these representations for simplicity of notations. In the following, we also denote \(A_{i}=I_{i}I_{i+1}\) for \(i=1,2,3,4\) mod \(4\). Then \(A_{i}\) is parabolic, \(A_{1}A_{2}=A_{3}A_{4}\) and \(A_{2}A_{3}=A_{4}A_{1}\) have order \(2\). Then \[\rho_{\theta}(K)=\langle A_{1},A_{2},A_{3},A_{4}\rangle\] is an index two subgroup of \(\Gamma=\rho_{\theta}(G)\). We have \(A_{1}A_{2}=I_{1}I_{3}\) is \[\left(\begin{array}{cccc}2\cos(\theta)&0&0&-\sqrt{4\cos^{2}(\theta)-1}\\ 0&2\sin(\theta)&-\sqrt{1-4\sin^{2}(\theta)}&0\\ 0&-\sqrt{1-4\sin^{2}(\theta)}&-2\sin(\theta)&0\\ -\sqrt{4\cos^{2}(\theta)-1}&0&0&-2\cos(\theta)\end{array}\right).\] The matrix \(A_{1}\) is a little complicated, we omit it, but it is easy to get it from the matrices of \(I_{1}\) and \(J\). Dirichlet domain of \(\rho_{\frac{5\pi}{6}}(K)<\mathbf{PU}(2,1)\) in \(\mathbf{H}_{\mathbb{C}}^{2}\) In this section, we will prove Theorem 1.1 for \(\theta=\frac{5\pi}{6}\) via the Dirichlet domain of \(\rho_{\frac{5\pi}{6}}(K)\) with center \(p_{0}\). We note that the proof of Theorem 1.1 when \(\theta\in(\frac{5\pi}{6},\pi]\) in Section 6 does not work for \(\theta=\frac{5\pi}{6}\), since (the lifts of) four points \[\{p_{0},\ I_{1}I_{2}(p_{0}),\ I_{1}I_{3}(p_{0}),\ I_{1}I_{4}(p_{0})\}\] are linearly dependent when \(\theta=\frac{5\pi}{6}\) (see Lemma 6.2). 
In spite of this, the proof in this section is a model for the proof of Theorem 1.1 for general \(\theta\) in Section 6. ### Matrices in \(\mathbf{PU}(2,1)\) From (3.4) we have \[n_{1}=\left[\frac{\sqrt{1+\sqrt{3}}}{2},\ 0,\ \frac{\sqrt{2}}{2},\ \frac{\sqrt{ \sqrt{3}-1}}{2}\right]^{t} \tag{4.1}\] when \(\theta=\frac{5\pi}{6}\). So the second entry of each \(n_{i}\) is zero for \(i=1,2,3,4\). We take \[n=\left[0,1,0,0\right]^{t}, \tag{4.2}\] in \(\mathbb{C}\mathbf{P}^{3}-\overline{\mathbf{H}_{\mathbb{C}}^{3}}\), then \(\langle n,n_{i}\rangle=0\) for \(i=1,2,3,4\). The intersection of \(\mathbf{H}_{\mathbb{C}}^{3}\) and the dual of \(n\) in \(\mathbb{C}\mathbf{P}^{3}\) with respect to the Hermitian form is a copy of \(\mathbf{H}_{\mathbb{C}}^{2}\), which is \[\mathcal{P}=\left\{\left[\begin{array}{c}z_{1}\\ 0\\ z_{3}\\ 1\end{array}\right]\in\mathbf{H}_{\mathbb{C}}^{3}\right\}.\] Each \(I_{i}\) preserves this \(\mathbf{H}_{\mathbb{C}}^{2}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{3}\) invariant. So we delete the second column and the second row of the matrix \(I_{i}\) in Subsection 3.2, we get new matrices in \(\mathbf{U}(2,1)\). But we still denote them by \(I_{i}\) for the simplification of notations. We have \[I_{1}=\left(\begin{array}{ccc}\frac{\sqrt{3}-1}{2}&\frac{1}{\sqrt{\sqrt{3}- 1}}&-\frac{\sqrt{2}}{2}\\ \frac{1}{\sqrt{\sqrt{3}-1}}&0&-\frac{1}{\sqrt{\sqrt{3}+1}}\\ \frac{\sqrt{2}}{2}&\frac{1}{\sqrt{\sqrt{3}+1}}&-\frac{\sqrt{3}+1}{2}\end{array} \right),\] \[J=\begin{pmatrix}-1&0&0\\ 0&-{\rm i}&0\\ 0&0&1\end{pmatrix},\] and \[A_{1}=I_{1}I_{2}=\left(\begin{array}{ccc}\dfrac{3+{\rm i}+\sqrt{3}{\rm i}-\sqrt{ 3}}{2}&-{\rm i}\sqrt{\sqrt{3}-1}&\dfrac{\sqrt{2}(\sqrt{3}+{\rm i})}{2}\\ \sqrt{\sqrt{3}-1}&-\sqrt{3}{\rm i}&\sqrt{\sqrt{3}+1}\\ \dfrac{\sqrt{2}(\sqrt{3}+{\rm i})}{2}&-{\rm i}\sqrt{\sqrt{3}+1}&\dfrac{3-{\rm i }+\sqrt{3}{\rm i}+\sqrt{3}}{2}\end{array}\right),\] \(A_{i}=JA_{i-1}J^{-1}\) for \(i=2,3,4\) and \[A_{1}A_{2}=I_{1}I_{3}=\left(\begin{array}{ccc}-\sqrt{3}&0&\sqrt{2}\\ 0&-1&0\\ -\sqrt{2}&0&\sqrt{3}\end{array}\right).\] In fact the above \(I_{i}\) for \(i=1,2,3,4\) and \(J\) are in \({\bf U}(2,1)\), but not in \({\bf PU}(2,1)\) since the determinants of them are not \(1\). But we can also use these matrices to study the group action on \({\bf H}_{\mathbb{C}}^{2}\) for simplicity of notations. A partial Dirichlet domain \(D_{r}\) of \(\rho_{\frac{5\pi}{6}}(K)<{\bf PU}(2,1)\) in \({\bf H}_{\mathbb{C}}^{2}\) In this subsection, we will define a subset \(R\subset\rho_{\frac{5\pi}{6}}(K)\), from which we have the partial the partial Dirichlet domain \(D_{R}\) in \({\bf H}_{\mathbb{C}}^{2}\). First note that \(I_{1}I_{2}\) is parabolic with fixed point \[q_{12}=[\dfrac{-1+{\rm i}}{\sqrt{2}\sqrt{\sqrt{3}+1}},\ 1,\ \dfrac{1+{\rm i}}{ \sqrt{2}\sqrt{\sqrt{3}-1}}]^{t}\] in \(\partial{\bf H}_{\mathbb{C}}^{2}\). We denote by \(p_{ij}=I_{i}I_{j}(p_{0})\in{\bf H}_{\mathbb{C}}^{2}\) and \(B_{ij}=B(p_{0},p_{ij})\) the bisector with respect to the two points \(p_{0}\) and \(p_{ij}\) for certain \(i,j\in\{1,2,3,4\}\). 
Now * \(p_{0}=[0,0,1]^{t}\); * \(p_{12}=[\dfrac{\sqrt{2}(\sqrt{3}+{\rm i})}{2},\ \sqrt{\sqrt{3}+1},\ \dfrac{3-{\rm i}+\sqrt{3}+\sqrt{3}{\rm i}}{2}]^{t}\); * \(p_{21}=[-\dfrac{\sqrt{2}(\sqrt{3}-{\rm i})}{2},\ -{\rm i}\sqrt{\sqrt{3}+1},\ \dfrac{3+{\rm i}+\sqrt{3}-\sqrt{3}{\rm i}}{2}]^{t}\); * \(p_{23}=[-\dfrac{\sqrt{2}(\sqrt{3}+{\rm i})}{2},\ -{\rm i}\sqrt{\sqrt{3}+1},\ \dfrac{3-{\rm i}+\sqrt{3}+\sqrt{3}{\rm i}}{2}]^{t}\); * \(p_{32}=[\dfrac{\sqrt{2}(\sqrt{3}-{\rm i})}{2},\ -\sqrt{\sqrt{3}+1},\ \dfrac{3+{\rm i}+\sqrt{3}-\sqrt{3}{\rm i}}{2}]^{t}\); * \(p_{34}=[\dfrac{\sqrt{2}(\sqrt{3}-{\rm i})}{2},\ {\rm i}\sqrt{\sqrt{3}+1},\ \dfrac{3+{\rm i}+\sqrt{3}-\sqrt{3}{\rm i}}{2}]^{t}\); * \(p_{41}=[-\frac{\sqrt{2}(\sqrt{3}+\mathrm{i})}{2},\ \mathrm{i}\sqrt{\sqrt{3}+1},\ \frac{3-\mathrm{i}+\sqrt{3}+\sqrt{3}\mathrm{i}}{2}]^{t}\); * \(p_{14}=[\frac{\sqrt{2}(\sqrt{3}-\mathrm{i})}{2},\ \sqrt{\sqrt{3}+1},\ \frac{3+ \mathrm{i}+\sqrt{3}-\sqrt{3}\mathrm{i}}{2}]^{t}\); * \(p_{13}=[\sqrt{2},0,\sqrt{3}]^{t}\); * \(p_{24}=[-\sqrt{2},0,\sqrt{3}]^{t}\). We fix lifts \(\mathbf{p_{0}}\) and \(\mathbf{p_{ij}}\) of \(p_{0}\) and \(p_{ij}\) as vectors in \(\mathbb{C}^{2,1}\), such that the entries of \(\mathbf{p_{0}}\) and \(\mathbf{p_{ij}}\) are just the same as entries of \(p_{0}\) and \(p_{ij}\) above. We have \[\langle\mathbf{p_{0}},\mathbf{p_{0}}\rangle=\langle\mathbf{p_{ij}},\mathbf{p_ {ij}}\rangle=-1\] for \(\mathbf{p_{ij}}\) above. All of these lifts have the same norm will be very convenient later. Recall that \(A_{i}=I_{i}I_{i+1}\) for \(i=1,2,3,4\ \mathrm{mod}\ 4\). Let \(R\subset\rho_{\frac{5\pi}{6}}(K)\) be the set of ten words in \(\Gamma\): \[\left\{(I_{1}I_{2})^{\pm 1},\ (I_{2}I_{3})^{\pm 1},\ (I_{3}I_{4})^{\pm 1},\ (I_{4}I_{ 1})^{\pm 1},\ I_{1}I_{3},\ I_{2}I_{4}\right\}.\] We will show the partial Dirichlet domain \(D_{R}\) centered at the fixed point \(p_{0}\) of \(J\) is in fact the Dirichlet domain of \(\rho_{\frac{5\pi}{6}}(K)\). The main tool for our study is the Poincare polyhedron theorem, which gives sufficient conditions for \(D_{R}\) to be a fundamental domain of the group. We refer to [20] for the precise statement of this version of Poincare polyhedron theorem we need. The main technical result in this section is **Theorem 4.1**.: \(D_{R}\) _is the Dirichlet domain of \(\rho_{\frac{5\pi}{6}}(K)\) acting on \(\mathbf{H}_{\mathbb{C}}^{2}\) with center \(p_{0}\). Moreover, the group \(\rho_{\frac{5\pi}{6}}(K)=\langle A_{1},A_{2},A_{3},A_{4}\rangle\) has a presentation_ \[\langle A_{1},A_{2},A_{3},A_{4}:A_{1}A_{2}A_{3}A_{4}=id,(A_{1}A_{2})^{2}=(A_{ 2}A_{3})^{2}=(A_{3}A_{4})^{2}=(A_{4}A_{1})^{2}=id\rangle.\] _So \(\rho_{\frac{5\pi}{6}}:K\rightarrow\mathbf{PU}(2,1)\) is a discrete and faithful presentation of \(K\)._ ### Intersection patterns of the bisectors for \(D_{r}\) In this subsection, we will study the information on intersection patterns of the bisectors for \(D_{R}\). We summarize the intersections of them in Table 1 and we will show this carefully. Moreover, Table 1 should be compared with Figures 5 and 6. Where Figure 5 is a realistic view of the boundary of the partial Dirichlet domain \(D_{R}\) of \(\rho_{\frac{5\pi}{6}}(K)<\mathbf{PU}(2,1)\). For example, the sphere labeled by \(B_{41}\) is in fact the spinal sphere \(B_{41}\cap\partial\mathbf{H}_{\mathbb{C}}^{2}\). Unfortunately, since the union of these spinal spheres are "twisted" in \(\partial\mathbf{H}_{\mathbb{C}}^{2}\), it seems impossible to see all the spinal spheres from only one point of view. 
Figure 6 is an abstract picture of the boundary of the partial Dirichlet domain \(D_{R}\), which is much more transparent. We first consider the intersections of the bisectors \(B_{12}\) with other bisectors. **Proposition 4.2**.: _For the bisector \(B_{12}\) of \(I_{1}I_{2}\), we have_ 1. \(B_{12}\) _is tangent to_ \(B_{21}\)_;_ 2. \(B_{12}\) _does not intersect_ \(B_{23}\)_,_ \(B_{32}\)_,_ \(B_{43}\)_,_ \(B_{41}\) _and_ \(B_{24}\)_;_ 3. \(B_{12}\) _intersects_ \(B_{34}\) _in a non-empty Giraud disk_ \(B_{12}\cap B_{34}\)_. Moreover, the disk_ \(B_{12}\cap B_{34}\) _lies in the component of_ \(\mathbf{H}_{\mathbb{C}}^{2}-B_{13}\) _which does not contain the point_ \(p_{0}\)_. In particular,_ \(B_{12}\cap B_{34}\) _does not lie in the partial Dirichlet domain_ \(D_{R}\)_._ Proof.: The trace of \(I_{1}I_{2}\) is \(3\) (when we normalize them such that \(\det(I_{1}I_{1})=1\)), so \(I_{1}I_{2}\) is unipotent. By Theorem 6.1 of [21], the Dirichlet domain (with center \(p_{0}\)) of the infinite cyclic group \(\langle I_{1}I_{2}\rangle\) has two sides \(B_{12}\) and \(B_{21}\). These two bisectors intersect exactly in the fixed point \(q_{12}\) of \(I_{1}I_{2}\). This proves (1) of Proposition 4.2. We now consider \(B_{12}\cap B_{23}\). It is easy to see \(\mathbf{p_{12}}\), \(\mathbf{p_{23}}\) and \(\mathbf{p_{0}}\) are linearly independent in \(\mathbb{C}^{2,1}\). In Equation (2.4), we take \(\mathbf{q}=\mathbf{p_{12}}\), \(\mathbf{r}=\mathbf{p_{23}}\) and \(\mathbf{p}=\mathbf{p_{0}}\), then we can parameterize the intersection \(B_{12}\cap B_{23}\) of the bisectors \(B_{12}\) and \(B_{23}\) by \(V=V(z_{1},z_{2})\) with \(\langle V,V\rangle<0\). Where \[V=\left(\begin{array}{c}-\dfrac{(\mathrm{i}\mathrm{e}^{\mathrm{ri}}-\mathrm{ e}^{\mathrm{si}}+2-2\mathrm{i})\sqrt{3}+\mathrm{i}\mathrm{e}^{\mathrm{ri}}- \mathrm{i}\mathrm{e}^{\mathrm{si}}+2-4\mathrm{i}}{\sqrt{\sqrt{3}+1}}\\ \\ -\dfrac{(\mathrm{e}^{\mathrm{ri}}+\mathrm{e}^{\mathrm{si}}-2)\sqrt{3}-\mathrm{ i}\mathrm{e}^{\mathrm{ri}}-\mathrm{i}\mathrm{e}^{\mathrm{si}}-4+6\mathrm{i}}{\sqrt{2}} \\ \\ \dfrac{1-\mathrm{i}+(1+\mathrm{i})\sqrt{3}}{\sqrt{\sqrt{3}-1}}\end{array}\right), \tag{4.3}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{ S}^{1}\times\mathbb{S}^{1}\). Now \(\langle V,V\rangle=V^{*}\cdot H\cdot V\) is \[(8\sqrt{3}+4)\sin(s)-(8\sqrt{3}+20)\cos(r)-(8\sqrt{3}+16)\cos(s)\] \[+(2\sqrt{3}+2)\sin(r-s)+4\cos(r-s)-8\sin(r)+16\sqrt{3}+48. \tag{4.4}\] With Maple, the minimum of (4.4) is \(12.752\) numerically. In particular, any \(V\) in (4.3) is a positive vector. So \(B_{12}\cap B_{23}=\emptyset\). We then consider \(B_{12}\cap B_{32}\). In Equation (2.4), we take \(\mathbf{q}=\mathbf{p_{12}}\), \(\mathbf{r}=\mathbf{p_{32}}\) and \(\mathbf{p}=\mathbf{p_{0}}\), then we can parameterize the intersection \(B_{12}\cap B_{32}\) of the bisectors \(B_{12}\) \begin{table} \begin{tabular}{|c|l|} \hline \(B_{12}\cap B_{21}\), tangent & \(B_{12}\cap B_{41}=\emptyset\) \\ \(B_{12}\cap B_{23}=\emptyset\) & \(B_{12}\cap B_{14}\neq\emptyset\) \\ \(B_{12}\cap B_{32}=\emptyset\) & \(B_{12}\cap B_{13}\neq\emptyset\) \\ \(B_{12}\cap B_{34}\neq\emptyset\) & \(B_{12}\cap B_{24}=\emptyset\) \\ \(B_{12}\cap B_{43}=\emptyset\) & \(B_{13}\cap B_{24}=\emptyset\) \\ \hline \end{tabular} \end{table} Table 1. The intersections of bisectors we should be concerned with up to \(J\)-action. and \(B_{32}\) by \(V=V(z_{1},z_{2})\) with \(\langle V,V\rangle<0\). 
Where \[V=\left(\begin{array}{c}\frac{(\mathrm{e}^{\mathrm{ri}}+\mathrm{e}^{\mathrm{si} }-4)\sqrt{3}+\mathrm{e}^{\mathrm{ri}}+\mathrm{e}^{\mathrm{si}}-6}{\sqrt{\sqrt{ 3}+1}}\\ \frac{(\mathrm{e}^{\mathrm{ri}}-\mathrm{e}^{\mathrm{si}}-2\mathrm{i})\sqrt{3}+ \mathrm{i}\mathrm{e}^{\mathrm{ri}}+\mathrm{e}^{\mathrm{si}}}{\sqrt{2}}\\ -\sqrt{6}\sqrt{\sqrt{3}+1}\end{array}\right), \tag{4.5}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{S }^{1}\times\mathbb{S}^{1}\). Now \(\langle V,V\rangle=V^{*}\cdot H\cdot V\) is \[-(10\sqrt{3}+12)\cdot(\cos(r)+\cos(s))+2\sqrt{3}\cdot(\sin(r-s)+ \cos(r-s))\\ +14\sqrt{3}-6\sin(r)+6\sin(s)+36. \tag{4.6}\] With Maple, the minimum of (4.6) is 4.78 numerically. In particular, any \(V\) in (4.5) is a positive vector. So \(B_{12}\cap B_{32}=\emptyset\). For \(B_{12}\cap B_{43}\), in Equation (2.4), we take \(\mathbf{q}=\mathbf{p_{12}}\), \(\mathbf{r}=\mathbf{p_{43}}\) and \(\mathbf{p}=\mathbf{p_{0}}\). Then we can parameterize the intersection \(B_{12}\cap B_{43}\) of the bisectors \(B_{12}\) and \(B_{43}\) by \(V=V(z_{1},z_{2})\) with \(\langle V,V\rangle<0\). Where \[V=\left(\begin{array}{c}\frac{(\mathrm{e}^{\mathrm{si}}+\mathrm{i}\mathrm{e }^{\mathrm{ri}}-2-2\mathrm{i})\sqrt{3}+\mathrm{i}\mathrm{e}^{\mathrm{ri}}+ \mathrm{e}^{\mathrm{si}}-4-4\mathrm{i}}{\sqrt{\sqrt{3}+1}}\\ -\frac{(\mathrm{e}^{\mathrm{ri}}+\mathrm{e}^{\mathrm{si}}-4)\sqrt{3}+ \mathrm{i}\mathrm{e}^{\mathrm{ri}}-\mathrm{i}\mathrm{e}^{\mathrm{si}}-2}{ \sqrt{2}}\\ (1-\mathrm{i})\sqrt{\sqrt{3}-1}\end{array}\right), \tag{4.7}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{S }^{1}\times\mathbb{S}^{1}\). Now \(\langle V,V\rangle=V^{*}\cdot H\cdot V\) is \[2\cdot(\cos(r-s)-\sin(r-s))-(6\sqrt{3}+20)\cdot(\cos(r)+\cos(s)) \\ -4\sqrt{3}\sin(r-s)+(8\sqrt{3}+10)\cdot(\sin(r)-\sin(s))+20\sqrt{3}+54. \tag{4.8}\] With Maple, the minimum of (4.8) is 20.51 numerically. In particular, any \(V\) in (4.7) is a positive vector. So \(B_{12}\cap B_{43}=\emptyset\). Since \(J(B_{41}\cap B_{12})=B_{12}\cap B_{23}\), and we have proved that \(B_{12}\cap B_{23}\) is empty, then \(B_{41}\cap B_{12}\) is empty. For \(B_{12}\cap B_{24}\), in Equation (2.4), we take \(\mathbf{q}=\mathbf{p_{12}}\), \(\mathbf{r}=\mathbf{p_{24}}\) and \(\mathbf{p}=\mathbf{p_{0}}\). Then we can parameterize the intersection \(B_{12}\cap B_{24}\) of the bisectors \(B_{12}\) and \(B_{24}\) by \(V=V(z_{1},z_{2})\) with \(\langle V,V\rangle<0\). Where \[V=\left(\begin{array}{c}\frac{(\mathrm{e}^{\mathrm{si}}-1)\sqrt{3}+ \mathrm{e}^{\mathrm{si}}-3}{\sqrt{\sqrt{3}+1}}\\ -\frac{(\mathrm{e}^{\mathrm{si}}-1+2\mathrm{i})\sqrt{3}-\mathrm{i}\mathrm{e}^{ \mathrm{si}}+2\mathrm{e}^{\mathrm{ri}}-6-\mathrm{i}}{\sqrt{2}}\\ \sqrt{2}\sqrt{\sqrt{3}+1}\end{array}\right), \tag{4.9}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{S}^{1 }\times\mathbb{S}^{1}\). Now \(\langle V,V\rangle=V^{*}\cdot H\cdot V\) is \[2\sqrt{3}\cdot(\cos(r-s)-\sin(s))-2\sin(r-s)-(10\sqrt{3}+8)\cos(s)\] \[-(12+2\sqrt{3})\cos(r)+(-2+4\sqrt{3})\sin(r)+6\sqrt{3}+32. \tag{4.10}\] With Maple, the minimum of (4.10) is 4.58 numerically. In particular, any \(V\) in (4.9) is a positive vector. So \(B_{12}\cap B_{24}=\emptyset\). We end the proof of (2) of Proposition 4.2. For \(B_{12}\cap B_{34}\), in Equation (2.4), we take \(\mathbf{q}=\mathbf{p_{12}}\), \(\mathbf{r}=\mathbf{p_{34}}\) and \(\mathbf{p}=\mathbf{p_{0}}\). 
Then we can parameterize the intersection \(B_{12}\cap B_{34}\) of the bisectors \(B_{12}\) and \(B_{34}\) by \(V=V(z_{1},z_{2})\) with \(\langle V,V\rangle<0\). Where \[V=\left(\begin{array}{c}\frac{(\mathrm{e}^{\mathrm{ri}}+\mathrm{e}^{\mathrm{ si}}-4)\sqrt{3}+\mathrm{e}^{\mathrm{ri}}+\mathrm{e}^{\mathrm{si}}-6+2\mathrm{i}}{ \sqrt{\sqrt{3}+1}}\\ \\ \frac{(\mathrm{i}-\sqrt{3})(\mathrm{e}^{\mathrm{si}}-\mathrm{e}^{\mathrm{ri}}) }{\sqrt{2}}\\ \\ (\mathrm{i}-\sqrt{3})\sqrt{2}\sqrt{\sqrt{3}+1}\end{array}\right), \tag{4.11}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{S }^{1}\times\mathbb{S}^{1}\). Now \(\langle V,V\rangle=V^{*}\cdot H\cdot V\) is \[-(8\sqrt{3}+12)\cdot(\cos(r)+\cos(s))+(2\sqrt{3}-2)\cos(r-s)\] \[+4\cdot(\sin(r)+\sin(s))+14\sqrt{3}+26. \tag{4.12}\] There is \((r,s)\) such that (4.12) is negative, so \(V=V(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\) is a negative vector. A sample point is \((r,s)=(0,-\frac{\pi}{16})\). But for \(V\) in (4.11), \(|V^{*}\cdot H\cdot\mathbf{p_{0}}|^{2}\) is the constant \(8+8\sqrt{3}\), which is 21.85 numerically. For \(V\) in (4.11), \(|V^{*}\cdot H\cdot\mathbf{p_{13}}|^{2}\) is \[4(1+\sqrt{3})\cdot(\cos(r-s)-(\cos(r)+\cos(s))\sqrt{3}-\sin(r)-\sin(s)+3). \tag{4.13}\] Figure 2. \(B_{12}\cap B_{34}\) lies in the half space of \(\mathbf{H}_{\mathbb{C}}^{2}-B_{13}\) which does not contain the fixed point \(p_{0}\) of \(J\). The red circle is \(B_{12}\cap B_{34}\cap\partial\mathbf{H}_{\mathbb{C}}^{2}\) in coordinates of \((r,s)\in[-\pi,\pi]^{2}\). So the small disk bounded by the red circle is \(B_{12}\cap B_{34}\cap\mathbf{H}_{\mathbb{C}}^{2}\). The cyan region consists of points such that whose distance to \(p_{13}\) is smaller than whose distance to \(p_{0}\). This figure illustrates that the disk \(B_{12}\cap B_{34}\) lies in the component of \(\mathbf{H}_{\mathbb{C}}^{2}-B_{13}\) which does not contain the point \(p_{0}\). With Maple, the maximum of (4.13) with the condition \(V^{*}\cdot H\cdot V=0\) is 14.23 numerically, which is smaller than \(|V^{*}\cdot H\cdot\mathbf{p_{0}}|^{2}\). Moreover, the minimum of (4.13) with the condition \(V^{*}\cdot H\cdot V=0\) is 5.85. In particular, the ideal boundary of \(B_{12}\cap B_{34}\), say \(B_{12}\cap B_{34}\cap\partial\mathbf{H}_{\mathbb{C}}^{2}\), is disjoint from \(B_{13}\). Then we have \(B_{12}\cap B_{34}\) is disjoint from \(B_{13}\) by well-known properties of bisectors in \(\mathbf{H}_{\mathbb{C}}^{2}\), see [11]. For the sample point \(V=V(\mathrm{e}^{0\cdot\mathrm{i}},\mathrm{e}^{-\frac{\pi}{2}\mathrm{i}})\), it is easy to see its distance to \(p_{13}\) is smaller than its distance to \(p_{0}\). We have \(B_{12}\cap B_{34}\) lies in the half space of \(\mathbf{H}_{\mathbb{C}}^{2}-B_{13}\) which does not contain the fixed point \(p_{0}\) of \(J\). In particular, \(B_{12}\cap B_{34}\) does not lie in the partial Dirichlet domain \(D_{R}\). See also Figure 2 for an illustration of this fact. This ends the proof of (3) of Proposition 4.2. **Proposition 4.3**.: _For the bisector \(B_{13}\) of \(I_{1}I_{3}\), we have \(B_{13}\) does not intersect \(B_{21}\), \(B_{23}\), \(B_{43}\), \(B_{41}\) and \(B_{24}\)._ Proof.: We have proved \(B_{12}\cap B_{24}=\emptyset\). By the \(\langle J\rangle=\mathbb{Z}_{4}\) symmetry, we have \[B_{13}\cap B_{23}=\emptyset,\ B_{13}\cap B_{41}=\emptyset.\] The fact that \(B_{13}\cap B_{21}=\emptyset\) and \(B_{13}\cap B_{43}=\emptyset\) can be proved similarly to \(B_{12}\cap B_{24}=\emptyset\) in Proposition 4.2. 
For \(B_{13}\cap B_{24}\). It is easy to see \(p_{0}\), \(p_{13}\) and \(p_{24}\) lie in the \(\mathbb{C}\)-line \[l=\left\{\left[z_{1},\ 0,\ 1\right]^{t}\in\mathbf{H}_{\mathbb{C}}^{2}\right\}.\] Now it is easy to see \(B_{13}\cap B_{24}\cap l=\emptyset\). Then from the projection of \(\mathbf{H}_{\mathbb{C}}^{2}\) to \(l\), we get \(B_{13}\cap B_{24}=\emptyset\). **Proposition 4.4**.: _For the bisectors \(B_{12}\), \(B_{13}\) and \(B_{14}\), we have:_ 1. _Each of the intersections_ \(B_{12}\cap B_{13}\)_,_ \(B_{12}\cap B_{14}\) _and_ \(B_{13}\cap B_{14}\) _is a non-empty Giraud disk;_ 2. _The triple intersection_ \(B_{12}\cap B_{13}\cap B_{14}\) _is an arc in each of the Giraud disks_ \(B_{12}\cap B_{13}\)_,_ \(B_{12}\cap B_{14}\) _and_ \(B_{13}\cap B_{14}\)_._ Proof.: For \(B_{12}\cap B_{14}\), in Equation (2.4), we take \(\mathbf{q}=\mathbf{p_{12}}\), \(\mathbf{r}=\mathbf{p_{14}}\) and \(\mathbf{p}=\mathbf{p_{0}}\). It is easy to see these three vectors are linearly independent. Then we can parameterize the intersection \(B_{12}\cap B_{14}\) of the bisectors \(B_{12}\) and \(B_{14}\) by \(V=V(z_{1},z_{2})\) with \(\langle V,V\rangle<0\). Where \[V=\left(\begin{array}{c}\frac{(\mathrm{e}^{\mathrm{si}}-\mathrm{e}^{\mathrm{ ri}})\sqrt{3}-\mathrm{e}^{\mathrm{ri}}+\mathrm{e}^{\mathrm{si}}-2\mathrm{i}}{ \sqrt{\sqrt{3}+1}}\\ \frac{(\mathrm{e}^{\mathrm{ri}}-\mathrm{e}^{\mathrm{si}}-2\mathrm{i})\sqrt{3} +\mathrm{i}\mathrm{e}^{\mathrm{ri}}+\mathrm{i}\mathrm{e}^{\mathrm{si}}}{ \sqrt{2}}\\ -\mathrm{i}\sqrt{2}\sqrt{\sqrt{3}+1}\end{array}\right) \tag{4.14}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{S }^{1}\times\mathbb{S}^{1}\). Now \(\langle V,V\rangle=V^{*}\cdot H\cdot V\) is \[2\sqrt{3}\cdot(\sin(r-s)-\cos(r)-\cos(s)-\cos(r-s)+1)\\ -4\cos(r-s)-2\sin(r)+2\sin(s)+8. \tag{4.15}\] There are \((r,s)\) such that (4.15) is negative, that is \(V=V(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\) is a negative vector. For example when \((r,s)=(0,0)\). Then \(B_{12}\cap B_{14}\) is non empty, it is a Giraud disk. Now for \(V\) in (4.14), \(|V^{*}\cdot H\cdot\mathbf{p_{0}}|^{2}\) is \(2+2\sqrt{3}\), and \(|V^{*}\cdot H\cdot\mathbf{p_{13}}|^{2}\) is \[4\sqrt{3}(\sin(s)-\sin(r)-\cos(r-s))+4(\sin(s)-\sin(r)-\cos(r-s)) \\ +6\sqrt{3}+6. \tag{4.16}\] The solutions of \(V\) in (4.14) with the condition \[|V^{*}\cdot H\cdot\mathbf{p_{13}}|^{2}=2+2\sqrt{3}\] are \[\{s=-\frac{\pi}{2}\},\ \{r=\frac{\pi}{2}\},\ \{r=s\}.\] It is easy to see when \(s=-\frac{\pi}{2}\) or \(r=\frac{\pi}{2}\) then \(V\) is positive. But when \(r=s\) with \[r\in\left(-\arccos(\frac{\sqrt{3}}{3}),-\arccos(\frac{\sqrt{3}}{3})\right),\] \(V\) is a negative point in \(B_{12}\cap B_{14}\). So \(B_{12}\cap B_{14}\cap B_{13}\) is an arc in \(B_{12}\cap B_{14}\). See Figure 3 for \(B_{12}\cap B_{14}\) in coordinates \((r,s)\in[-\pi,\pi]^{2}\). For \(B_{12}\cap B_{13}\), we have proved that \(B_{12}\cap B_{13}\) is non-empty, so it is a Giraud disk. In Equation (2.4), we take \(\mathbf{q}=\mathbf{p_{12}}\), \(\mathbf{r}=\mathbf{p_{13}}\) and \(\mathbf{p}=\mathbf{p_{0}}\). It is easy to see these three points are linearly independent. Then we can parameterize the intersection Figure 3. The small disk bounded by the red circle is the Giraud disk \(B_{12}\cap B_{14}\). The three purple lines are points (in the Giraud torus \(\hat{B}(p_{0},p_{12},p_{14})\) such that its distances to \(p_{13}\) and \(p_{0}\) are the same. 
So the red half disk is the part of \(B_{12}\cap B_{14}\) which lies in the half space of \(\mathbf{H}_{\mathbb{C}}^{2}-B_{13}\) containing the fixed point of \(J\), and then it is the ridge \(s_{12}\cap s_{14}\). \(B_{12}\cap B_{13}\) of the bisectors \(B_{12}\) and \(B_{13}\) by \(V=V(z_{1},z_{2})\) with \(\langle V,V\rangle<0\). Where \[V=\left(\begin{array}{c}\frac{(\mathrm{e}^{\mathrm{si}}-1)\sqrt{3}+\mathrm{e} ^{\mathrm{si}}-3}{\sqrt{\sqrt{3}+1}}\\ \frac{(-\mathrm{e}^{\mathrm{si}}-1)\sqrt{3}+\mathrm{i}\mathrm{e}^{\mathrm{si} }+2\mathrm{e}^{\mathrm{ri}}-\mathrm{i}}{\sqrt{2}}\\ \\ -\sqrt{2}\sqrt{\sqrt{3}+1}\end{array}\right), \tag{4.17}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{S} ^{1}\times\mathbb{S}^{1}\). Now \(\langle V,V\rangle=V^{*}\cdot H\cdot V\) is \[\begin{split}& 2\sqrt{3}\cdot(\sin(s)-\cos(r)-\cos(s)-\cos(r-s)+1)\\ &+2\sin(r-s)-2\sin(r)-4\cos(s)+8.\end{split} \tag{4.18}\] Now for \(V\) in (4.17), \(|V^{*}\cdot H\cdot\mathbf{p_{0}}|^{2}\) is \(2+2\sqrt{3}\), and \(|V^{*}\cdot H\cdot\mathbf{p_{14}}|^{2}\) is \[\begin{split}& 4\sqrt{3}\cdot(\sin(r-s)-\sin(r)-\cos(s))+4 \cdot(\sin(r-s)-\sin(r)-\cos(s))\\ &+6\sqrt{3}+6.\end{split} \tag{4.19}\] The solutions of \(V\) in (4.17) with the condition \[|V^{*}\cdot H\cdot\mathbf{p_{14}}|^{2}=2+2\sqrt{3}\] are \[\{s=0\},\ \{r=\frac{\pi}{2}\},\ \{r=s-\frac{\pi}{2}\}.\] It is easy to see when \(r=\frac{\pi}{2}\) and \(r=s-\frac{\pi}{2}\) then \(V\) is positive. But when \(s=0\) with \[r\in\left(-\arccos(\frac{\sqrt{3}}{3}),-\arccos(\frac{\sqrt{3}}{3})\right),\] \(V\) is a negative point in \(B_{12}\cap B_{13}\). See Figure 4 for \(B_{12}\cap B_{13}\) in coordinates \((r,s)\in[-\pi,\pi]^{2}\). Figure 4. The small disk bounded by the purple circle is the Giraud disk \(B_{12}\cap B_{13}\). The three red lines are points (in the Giraud torus \(\hat{B}(p_{0},p_{12},p_{13})\) such that its distances to \(p_{14}\) and \(p_{0}\) are the same. So the part of \(B_{12}\cap B_{13}\) which lies in the half space of \(\mathbf{H}_{\mathbb{C}}^{2}-B_{14}\) containing the fixed point of \(J\) is the purple half disk, and then it is the ridge \(s_{12}\cap s_{13}\). Similarly, \(B_{13}\cap B_{14}\) is a non-empty Giraud disk. We omit it. From Propositions 4.2, 4.3 and 4.4, we have the following two propositions, which study the combinatorics of 3-sides and ridges of \(D_{R}\). They are crucial for the application of the Poincare polyhedron theorem. For each \(I_{i}I_{j}\in R\), the side \(s_{ij}\) by definition is \(B_{ij}\cap D_{R}\), which is a 3-dimensional object. The reader may compare to Figure 6, which illustrates the ideal boundary behaviors of these 3-sides \(s_{ij}\). 
**Proposition 4.5**.: _The side \(s_{12}=B_{12}\cap D_{R}\) is 3-ball in \(\mathbf{H}_{\mathbb{C}}^{2}\cup\partial\mathbf{H}_{\mathbb{C}}^{2}\):_ * \(s_{12}\cap\partial\mathbf{H}_{\mathbb{C}}^{2}\) _is a disk with the point_ \(q_{12}\) _in its interior;_ * _The frontier of_ \(s_{12}\cap\mathbf{H}_{\mathbb{C}}^{2}\) _is a disk consisting of two half disks_ \(s_{12}\cap s_{14}\) _and_ \(s_{12}\cap s_{13}\) _glued along the arc_ \[s_{12}\cap s_{13}\cap s_{14}=B_{12}\cap B_{13}\cap B_{14}.\] **Proposition 4.6**.: _The side \(s_{13}=B_{13}\cap D_{R}\) is 3-ball in \(\mathbf{H}_{\mathbb{C}}^{2}\cup\partial\mathbf{H}_{\mathbb{C}}^{2}\):_ * \(s_{13}\cap\partial\mathbf{H}_{\mathbb{C}}^{2}\) _is an annulus;_ * _The frontier of_ \(s_{13}\cap\mathbf{H}_{\mathbb{C}}^{2}\) _are two disks, each of them consists of two half disks: one component is the union of_ \(s_{13}\cap s_{12}\) _and_ \(s_{13}\cap s_{14}\) _glued along the arc_ \[s_{12}\cap s_{13}\cap s_{14}=B_{12}\cap B_{13}\cap B_{14},\] _and the other component is union of_ \(s_{13}\cap s_{32}\) _and_ \(s_{13}\cap s_{34}\) _glued along the arc_ \[s_{13}\cap s_{32}\cap s_{34}=B_{13}\cap B_{32}\cap B_{34}.\] We have similar properties for the 3-side \(s_{21}\). Since there is a \(\langle J\rangle=\mathbb{Z}_{4}\) symmetry, we omits the statements of the combinatorics of \(s_{23}\), \(s_{32}\), \(s_{34}\), \(s_{43}\), \(s_{41}\), \(s_{14}\) and \(s_{24}\). _Remark 4.7_.: From Figures 5 and 6, we strongly believe that \(\partial\mathbf{H}_{\mathbb{C}}^{2}\cap D_{R}\) is a genus three handlebody. In other words, if we denote by \(\mathcal{H}_{ij}\) the half space of \(\mathbf{H}_{\mathbb{C}}^{2}-B_{ij}\) which does not contain the point \(p_{0}\) for \(I_{i}I_{j}\in R\) (so \(\mathcal{H}_{ij}\) contains \(p_{ij}\)). \(\overline{\mathcal{H}}_{ij}\) is the closure of \(\mathcal{H}_{ij}\) in \(\overline{\mathbf{H}_{\mathbb{C}}^{2}}\). Then \[(\cup_{I_{ij}\in R}\overline{\mathcal{H}_{ij}})\cap\partial\mathbf{H}_{ \mathbb{C}}^{2}\] is a union of 3-balls in \(\partial\mathbf{H}_{\mathbb{C}}^{2}\) (some of them are tangent), so it is a handlebody. This handlebody is the union of all the 3-balls bounded by spheres in Figure 5. We have \(\partial\mathbf{H}_{\mathbb{C}}^{2}\cap D_{R}\) is the complement of \((\cup_{I_{ij}\in R}\overline{\mathcal{H}_{ij}})\cap\partial\mathbf{H}_{ \mathbb{C}}^{2}\) in \(\partial\mathbf{H}_{\mathbb{C}}^{2}\), which is the region outside all of the spheres in Figure 5. We guess that \((\cup_{I_{ij}\in R}\overline{\mathcal{H}_{ij}})\cap\partial\mathbf{H}_{ \mathbb{C}}^{2}\) is an unknotted genus three handlebody. In this paper we do not care it, and we do not show this rigorously. ### Using the Poincare polyhedron theorem With the preliminaries in Subsection 4.3, we have the side pairing maps of \(D_{R}\) as follows: * \(I_{2}I_{1}:s_{12}\to s_{21}\); * \(I_{3}I_{2}:s_{23}\to s_{32}\); * \(I_{4}I_{3}:s_{34}\to s_{43}\); * \(I_{1}I_{4}:s_{41}\to s_{14}\); * \(I_{1}I_{3}:s_{13}\to s_{13}\); * \(I_{2}I_{4}:s_{24}\to s_{24}\). **Proposition 4.8**.: \(I_{2}I_{1}\) _is a homeomorphism from \(s_{12}\) to \(s_{21}\):_ 1. \(I_{2}I_{1}\) _sends the ridge_ \(s_{12}\cap s_{14}\) _to the ridge_ \(s_{21}\cap s_{24}\)_;_ 2. 
\(I_{2}I_{1}\) _sends the ridge_ \(s_{12}\cap s_{13}\) _to the ridge_ \(s_{21}\cap s_{23}\)_._ Proof.: The ridge \(s_{12}\cap s_{14}\) is defined by the triple equality \[|\langle\mathbf{z},\mathbf{p_{0}}\rangle|=|\langle\mathbf{z},p_{12}\rangle|=| \langle\mathbf{z},\mathbf{p_{14}}\rangle|\] with \(\mathbf{z}\in\mathbb{C}^{3,1}\). From \(I_{2}I_{1}\)'s action on the set \[\{p_{0},p_{12},p_{14}\},\] we get the set \[\{p_{21},p_{0},p_{24}\}.\] So \(I_{2}I_{1}\) maps \(s_{12}\cap s_{14}\) to \(s_{21}\cap s_{24}\). The proof of (2) of Proposition 4.8 is similar. Similarly, we have Figure 5. A realistic view of the boundary of Dirichlet domain of \(\rho_{\frac{5\pi}{6}}(K)<\mathbf{PU}(2,1)\). For example, the sphere labeled by \(B_{41}\) is in fact the spinal sphere \(B_{41}\cap\partial\mathbf{H}_{\mathbb{C}}^{2}\). The other labels have similar meanings. In this figure, we can not see \(B_{21}\) and \(B_{12}\), and the black one is \(B_{24}\). The brown abd green spheres labeled by \(B_{43}\) and \(B_{34}\) are tangent at a point \(p_{34}\), which is disjoint from the purple sphere labeled by \(B_{13}\). **Proposition 4.9**.: _The side pairing map \(I_{1}I_{3}\) is a self-homeomorphism of \(s_{13}\):_ 1. \(I_{1}I_{3}\) _exchanges the ridges_ \(s_{13}\cap s_{12}\) _and_ \(s_{13}\cap s_{32}\)_;_ 2. \(I_{1}I_{3}\) _exchanges the ridges_ \(s_{13}\cap s_{14}\) _and_ \(s_{13}\cap s_{34}\)_._ **Proposition 4.10**.: \(I_{3}I_{2}\) _is a homeomorphism from \(s_{23}\) to \(s_{32}\):_ 1. \(I_{3}I_{2}\) _sends the ridge_ \(s_{23}\cap s_{24}\) _to the ridge_ \(s_{32}\cap s_{34}\)_;_ 2. \(I_{3}I_{2}\) _sends the ridge_ \(s_{21}\cap s_{23}\) _to the ridge_ \(s_{31}\cap s_{32}\)_._ **Proposition 4.11**.: \(I_{4}I_{3}\) _is a homeomorphism from \(s_{34}\) to \(s_{43}\):_ 1. \(I_{4}I_{3}\) _sends the ridge_ \(s_{34}\cap s_{32}\) _to the ridge_ \(s_{24}\cap s_{43}\)_;_ 2. \(I_{4}I_{3}\) _sends the ridge_ \(s_{34}\cap s_{13}\) _to the ridge_ \(s_{43}\cap s_{41}\)_._ **Proposition 4.12**.: \(I_{1}I_{4}\) _is a homeomorphism from \(s_{41}\) to \(s_{14}\):_ 1. \(I_{1}I_{4}\) _sends the ridge_ \(s_{41}\cap s_{24}\) _to the ridge_ \(s_{14}\cap s_{12}\)_;_ 2. \(I_{1}I_{4}\) _sends the ridge_ \(s_{41}\cap s_{43}\) _to the ridge_ \(s_{14}\cap s_{13}\)_._ **Proposition 4.13**.: _The side pairing map \(I_{2}I_{4}\) is a self-homeomorphism of \(s_{24}\):_ 1. \(I_{2}I_{4}\) _exchanges the ridges_ \(s_{24}\cap s_{21}\) _and_ \(s_{24}\cap s_{41}\)_;_ 2. \(I_{2}I_{4}\) _exchanges the ridges_ \(s_{24}\cap s_{23}\) _and_ \(s_{24}\cap s_{43}\)_._ **Proof of Theorem 4.1.** After above propositions, we prove the tessellation around the sides and ridges of the partial Dirichlet domain \(D_{R}\). First, \(I_{1}I_{3}\) is a self-homeomorphism of \(s_{13}\), and \(I_{1}I_{3}\) exchanges the two components of \(\mathbf{H}_{\mathbb{C}}^{2}-B_{13}\). Then \(D_{R}\) and \(I_{1}I_{3}(D_{R})\) have disjoint interiors, and they together cover a neighborhood of each point in the interior of the side \(s_{13}\). The cases of the other 3-sides are similar. Secondly, we consider tessellations about ridges. Recall that \(A_{1}=I_{1}I_{2}\), \(A_{2}=I_{2}I_{3}\), \(A_{3}=I_{3}I_{4}\) and \(A_{4}=I_{4}I_{1}\). (1). For the ridge \(s_{14}\cap s_{12}\), the ridge circle is \[s_{14}\cap s_{12}\xrightarrow{A_{1}^{-1}}s_{21}\cap s_{24}\xrightarrow{(A_{ 2}A_{3})^{-1}}s_{24}\cap s_{41}\xrightarrow{A_{4}^{-1}}s_{14}\cap s_{12}.\] Which gives the relation \(A_{4}^{-1}\cdot A_{3}^{-1}A_{2}^{-1}\cdot A_{1}^{-1}=id\). 
By a standard argument as in [20], we have \(D_{R}\cup A_{1}(D_{R})\cup A_{4}^{-1}(D_{R})\) covers a small neighborhood of \(s_{14}\cap s_{12}\). (2). For the ridge \(s_{13}\cap s_{14}\), the ridge circle is \[s_{13}\cap s_{14}\xrightarrow{A_{4}}s_{41}\cap s_{43}\xrightarrow{A_{3}}s_{ 34}\cap s_{31}\xrightarrow{A_{1}A_{2}}s_{13}\cap s_{14}.\] Which gives the relation \(A_{1}A_{2}\cdot A_{3}\cdot A_{4}=id\). By a standard argument as above \(D_{R}\cup A_{4}^{-1}(D_{R})\cup(A_{1}A_{2})^{-1}(D_{R})\) covers a small neighborhood of \(s_{13}\cap s_{14}\). (3). For the ridge \(s_{13}\cap s_{12}\), the ridge circle is \[s_{13}\cap s_{12}\xrightarrow{A_{1}^{-1}}s_{21}\cap s_{23}\xrightarrow{A_{2 }^{-1}}s_{32}\cap s_{31}\xrightarrow{A_{1}A_{2}}s_{13}\cap s_{12}.\] Which gives the relation \(A_{1}A_{2}\cdot A_{2}^{-1}\cdot A_{1}^{-1}=id\). We have \(D_{R}\cup A_{1}(D_{R})\cup A_{1}A_{2}(D_{R})\) covers a small neighborhood of \(s_{13}\cap s_{12}\). (4). For the ridge \(s_{24}\cap s_{23}\), the ridge circle is \[s_{24}\cap s_{23}\xrightarrow{A_{2}^{-1}}s_{32}\cap s_{34}\xrightarrow{A_{3 }^{-1}}s_{43}\cap s_{42}\xrightarrow{A_{2}A_{3}}s_{24}\cap s_{23}.\] Which gives the relation \(A_{2}A_{3}\cdot A_{3}^{-1}\cdot A_{2}^{-1}=id\). Then \(D_{R}\cup A_{2}(D_{R})\cup A_{2}A_{3}(D_{R})\) covers a small neighborhood of \(s_{13}\cap s_{12}\). In Figure 6, we take the three ridges in the same class as in \(s_{14}\cap s_{12}\) by cyan colors (more precisely, the intersections of these ridges with \(\partial\mathbf{H}_{\mathbb{C}}^{2}\), so we have a set of arcs). Similarly, we color other ridge classes by orange, red and green colors. Since the identification around \(q_{12}\in\partial\mathbf{H}_{\mathbb{C}}^{2}\) is given by \(A_{1}\), which is unipotent. So the identification space given by the side-pairing maps is complete at the ideal point \(q_{12}\). By \(\langle J\rangle\)-symmetry, the identification space of \(D_{R}\) by these side pairing maps is complete. By Poincare polyhedron theorem, the partial Dirichlet domain \(D_{R}\) is in fact the Dirichlet domain of \(\rho_{\frac{5\pi}{6}}(K)<\mathbf{PU}(2,1)\). Now we have the presentation \[\rho_{\frac{5\pi}{6}}(K)=\left\langle A_{1},A_{2},A_{3},A_{4}\middle|(A_{1}A_{ 2})^{2}=(A_{2}A_{3})^{2}=A_{1}A_{2}A_{3}A_{4}=id\right\rangle.\] Note that with the relations \[(A_{1}A_{2})^{2}=id,\ (A_{2}A_{3})^{2}=id,\ A_{1}A_{2}A_{3}A_{4}=id,\] it is trivial to get \[(A_{3}A_{4})^{2}=id,\ (A_{4}A_{1})^{2}=id.\] So the relations above are also \(\mathbb{Z}_{4}\)-invariant, we have a discrete and faithful representation of \(K\) into \(\mathbf{PU}(2,1)\). This ends the proof of Theorem 4.1, and so the proof of Theorem 1.1 when \(\theta=\frac{5\pi}{6}\). ## 5. Dirichlet domain of \(\rho_{\pi}(K)<\mathbf{PO}(3,1)\) in \(\mathbf{H}_{\mathbb{R}}^{3}\) In this section, we let \(\theta=\pi\). Then \(\rho_{\pi}(K)\) preserves a totally geodesic \(\mathbf{H}_{\mathbb{R}}^{3}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{3}\) invariant. Even through the discreteness of \(\rho_{\pi}(K)\) is trivial, but we consider the Dirichlet domain of \(\rho_{\pi}(K)<\mathbf{PO}(3,1)\) in \(\mathbf{H}_{\mathbb{R}}^{3}\), which is the baby case for Section 6. (\rho_{\pi}(K)\)-invariant totally geodesic \(\mathbf{H}_{\mathbb{R}}^{3}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{3}\) Since the \(\rho_{\pi}(K)\)-invariant totally geodesic \(\mathbf{H}_{\mathbb{R}}^{3}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{3}\) is not the obvious one in \(\mathbf{H}_{\mathbb{C}}^{3}\), we first describe it. 
When \(\theta=\pi\), with the notations of Subsection 3.1, we have \[n_{1}=\left[\tfrac{\sqrt{3}}{2},\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2}\right] ^{t},\ n_{2}=\left[-\tfrac{\sqrt{3}}{2},\tfrac{\mathrm{i}}{2},-\tfrac{ \mathrm{i}}{2},\tfrac{1}{2}\right]^{t}, \tag{5.1}\] and \[n_{3}=\left[\tfrac{\sqrt{3}}{2},-\tfrac{1}{2},-\tfrac{1}{2},\tfrac{1}{2} \right]^{t},\ n_{4}=\left[-\tfrac{\sqrt{3}}{2},-\tfrac{\mathrm{i}}{2},\tfrac{ \mathrm{i}}{2},\tfrac{1}{2}\right]^{t}. \tag{5.2}\] We fix lifts \(\mathbf{n_{i}}\) of \(n_{i}\) as vectors in \(\mathbb{C}^{3,1}\), such that the entries of \(\mathbf{n_{i}}\) are just the same as the entries of \(n_{i}\) above. Let \(\mathcal{L}\) be the intersection of the real span of \(\mathbf{n_{1}},\mathbf{n_{2}},\mathbf{n_{3}},\mathbf{n_{4}}\) and \(\mathbf{H}_{\mathbb{C}}^{3}\); that is, \(\mathcal{L}\) is the set \[\left\{[\mathbf{x}=x_{1}\mathbf{n_{1}}+x_{2}\mathbf{n_{2}}+x_{3}\mathbf{n_{3} }+x_{4}\mathbf{n_{4}}]\in\mathbf{H}_{\mathbb{C}}^{3}\ |\ x_{i}\in\mathbb{R}\right\}.\] Since \(J(n_{i})=n_{i+1}\) mod \(4\), for \[x=x_{1}n_{1}+x_{2}n_{2}+x_{3}n_{3}+x_{4}n_{4}\in\mathcal{L},\] we have \[J(x)=x_{4}n_{1}+x_{1}n_{2}+x_{2}n_{3}+x_{3}n_{4}\in\mathcal{L}.\] So \[J_{\mathcal{L}}=\begin{pmatrix}0&0&0&1\\ 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ \end{pmatrix} \tag{5.3}\] is the matrix representation of \(J\) on \(\mathcal{L}\) with basis \(\{n_{1},n_{2},n_{3},n_{4}\}\). The fixed point of \(J\) in \(\mathcal{L}\) is \[x_{0}=n_{1}+n_{2}+n_{3}+n_{4};\] up to scaling, we may take \(x_{1}=x_{2}=x_{3}=x_{4}=\frac{1}{2}\). Now \[I_{1}=\left(\begin{array}{cccc}\frac{1}{2}&\frac{\sqrt{3}}{2}&\frac{\sqrt{ 3}}{2}&-\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}&-\frac{1}{2}&\frac{1}{2}&-\frac{1}{2}\\ \frac{\sqrt{3}}{2}&\frac{1}{2}&-\frac{1}{2}&-\frac{1}{2}\\ \frac{\sqrt{3}}{2}&\frac{1}{2}&\frac{1}{2}&-\frac{3}{2}\\ \end{array}\right) \tag{5.4}\] as given in Subsection 3.1. Writing \[I_{1}(x_{1}n_{1}+x_{2}n_{2}+x_{3}n_{3}+x_{4}n_{4})=y_{1}n_{1}+y_{2}n_{2}+y_{3} n_{3}+y_{4}n_{4},\] the matrix representation of the \(I_{1}\)-action on \(\mathcal{L}\) with basis \(\{n_{1},n_{2},n_{3},n_{4}\}\) is \[I_{\mathcal{L},1}=\begin{pmatrix}1&-2&0&-2\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&-1\end{pmatrix}. \tag{5.5}\] Then \(I_{\mathcal{L},i}=J_{\mathcal{L}}\cdot I_{\mathcal{L},i-1}\cdot J_{\mathcal{L}} ^{-1}\) for \(i=2,3,4\) is the matrix representation of the \(I_{i}\)-action on \(\mathcal{L}\) with basis \(\{n_{1},n_{2},n_{3},n_{4}\}\). We now transform \(\mathcal{L}\) into the Klein model of \(\mathbf{H}_{\mathbb{R}}^{3}\); accordingly, we also transform \(J_{\mathcal{L}}\) and \(I_{\mathcal{L},i}\) into matrices in the Lie group \(\mathbf{PO}(3,1)\). Now the quadratic form with respect to the basis \(\{\mathbf{n_{1}},\mathbf{n_{2}},\mathbf{n_{3}},\mathbf{n_{4}}\}\) is given by \[H_{\mathcal{L}}=(\langle\mathbf{n_{i}},\mathbf{n_{j}}\rangle)_{1\leq i,j\leq 4 }=\begin{pmatrix}1&-1&0&-1\\ -1&1&-1&0\\ 0&-1&1&-1\\ -1&0&-1&1\end{pmatrix}. \tag{5.6}\] The quadratic form of the Klein model of \(\mathbf{H}_{\mathbb{R}}^{3}\) is given by \(H_{\mathcal{R}}=H=\mathrm{diag}\{1,1,1,-1\}\). Consider the matrix \[C=\begin{pmatrix}1&0&0&0\\ 0&0&1&0\\ -\frac{1}{\sqrt{3}}&-\frac{2}{\sqrt{3}}&-\frac{1}{\sqrt{3}}&\frac{1}{\sqrt{3} }\\ 1&1&1&0\end{pmatrix} \tag{5.7}\] with real entries. Then \(\det(H_{\mathcal{L}})=-3\), \(\det(C)=\frac{1}{\sqrt{3}}\), and \(C\cdot H_{\mathcal{L}}\cdot C^{t}=H\). We denote \[J_{k}=(C^{t})^{-1}\cdot J_{\mathcal{L}}\cdot C^{t}\] and \[I_{k,i}=(C^{t})^{-1}\cdot I_{\mathcal{L},i}\cdot C^{t}\] for \(i=1,2,3,4\). 
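Before using these matrices, the stated identities can be checked mechanically. The following is a minimal numerical sketch (Python with numpy; the matrices are transcribed from (5.3)-(5.7), and all helper names are ours, not the paper's):

```python
import numpy as np

# Matrices transcribed from (5.3), (5.5), (5.6), (5.7); H = diag(1,1,1,-1).
J_L = np.array([[0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]], float)
I_L1 = np.array([[1, -2, 0, -2], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]], float)
H_L = np.array([[1, -1, 0, -1], [-1, 1, -1, 0], [0, -1, 1, -1], [-1, 0, -1, 1]], float)
s3 = np.sqrt(3.0)
C = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
              [-1/s3, -2/s3, -1/s3, 1/s3], [1, 1, 1, 0]])
H = np.diag([1.0, 1.0, 1.0, -1.0])

# I_{L,i} = J_L . I_{L,i-1} . J_L^{-1}: the J-conjugation recursion.
I_L = [I_L1]
for _ in range(3):
    I_L.append(J_L @ I_L[-1] @ np.linalg.inv(J_L))

assert np.allclose(C @ H_L @ C.T, H)            # C . H_L . C^t = H
assert np.allclose(J_L.T @ H_L @ J_L, H_L)      # J_L preserves H_L
for M in I_L:
    assert np.allclose(M.T @ H_L @ M, H_L)      # each I_{L,i} is an H_L-isometry
    assert np.allclose(M @ M, np.eye(4))        # and an involution

# Nontrivial relations behind Proposition 5.1 below:
# (I_1 I_3)^2 = (I_2 I_4)^2 = id  (the relation A_1 A_2 A_3 A_4 = id telescopes).
M13, M24 = I_L[0] @ I_L[2], I_L[1] @ I_L[3]
assert np.allclose(M13 @ M13, np.eye(4)) and np.allclose(M24 @ M24, np.eye(4))

# The Klein-model representatives preserve H = diag(1,1,1,-1).
Ct_inv = np.linalg.inv(C.T)
J_k = Ct_inv @ J_L @ C.T
I_k = [Ct_inv @ M @ C.T for M in I_L]
for M in [J_k] + I_k:
    assert np.allclose(M.T @ H @ M, H)

# The J_k-fixed center of the partial Dirichlet domain considered below.
p0_k = Ct_inv @ np.full(4, 0.5)
assert np.allclose(p0_k, [-0.5, -0.5, s3/2, 1.5]) and np.isclose(p0_k @ H @ p0_k, -1)
```

All assertions pass, matching the identities \(J_{k}\cdot H\cdot J_{k}^{*}=H\) and \(I_{k,i}\cdot H\cdot I_{k,i}^{*}=H\) stated next.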
Then \[J_{k}\cdot H\cdot J_{k}^{*}=H\] and \[I_{k,i}\cdot H\cdot I_{k,i}^{*}=H\] for \(i=1,2,3,4\). So now \(J_{k}\) and \(I_{k,i}\) for \(i=1,2,3,4\) are the matrix representations, in the Klein model of \(\mathbf{H}_{\mathbb{R}}^{3}\), of \(J\) and \(I_{i}\) for \(i=1,2,3,4\) acting on \(\mathcal{L}\). It is simple to get the matrices of \(J_{k}\) and \(I_{k,i}\) for \(i=1,2,3,4\) from the matrices \(C\), \(J_{\mathcal{L}}\) and \(I_{\mathcal{L},i}\); we do not write them down explicitly here. ### The partial Dirichlet domain \(D_{R}\) in \(\mathbf{H}_{\mathbb{R}}^{3}\) Following the transformation in Subsection 5.1, in this subsection we always work in the Klein model of \(\mathbf{H}_{\mathbb{R}}^{3}\). Consider the partial Dirichlet domain \(D_{R}\) with center point \[p_{0}=(C^{t})^{-1}([\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}]^{t})=[- \frac{1}{2},-\frac{1}{2},\frac{\sqrt{3}}{2},\frac{3}{2}]^{t}\] in \(\mathbf{H}_{\mathbb{R}}^{3}\); here \(p_{0}\) is the fixed point of \(J_{k}\) in the Klein model of \(\mathbf{H}_{\mathbb{R}}^{3}\), and \[R=\left\{(I_{k,1}I_{k,2})^{\pm 1},(I_{k,2}I_{k,3})^{\pm 1},(I_{k,3}I_{k,4})^{\pm 1 },(I_{k,4}I_{k,1})^{\pm 1},I_{k,1}I_{k,3},I_{k,2}I_{k,4}\right\}\] is a subset of the group \[\langle I_{k,1}I_{k,2},I_{k,2}I_{k,3},I_{k,3}I_{k,4},I_{k,4}I_{k,1}\rangle.\] As usual, we denote by \(p_{ij}=I_{k,i}I_{k,j}(p_{0})\in\mathbf{H}_{\mathbb{R}}^{3}\) and \(B_{ij}=B(p_{0},p_{ij})\) the bisector in \(\mathbf{H}_{\mathbb{R}}^{3}\) with respect to the two points \(p_{0}\) and \(p_{ij}\) for certain \(i,j\in\{1,2,3,4\}\). Then [MISSING_PAGE_POST] We fix lifts \(\mathbf{p_{0}}\) and \(\mathbf{p_{ij}}\) of \(p_{0}\) and \(p_{ij}\) as vectors in \(\mathbb{R}^{3,1}\), such that the entries of \(\mathbf{p_{0}}\) and \(\mathbf{p_{ij}}\) are just the same as the entries of \(p_{0}\) and \(p_{ij}\) above. We have \[\langle\mathbf{p_{0}},\mathbf{p_{0}}\rangle=\langle\mathbf{p_{ij}},\mathbf{p_ {ij}}\rangle=-1\] for \(\mathbf{p_{ij}}\) above. Now for \(p=[x,y,z,1]^{t}\in\mathbf{H}_{\mathbb{R}}^{3}\), if \(p\) lies in the bisector \(B_{12}\) of \(I_{k,1}I_{k,2}\), that is, if the distances from \(p\) to \(p_{0}\) and to \(p_{12}\) are equal, then \[|\mathbf{p_{0}}^{*}\cdot H\cdot\mathbf{p}|=|\mathbf{p_{12}}^{*}\cdot H\cdot \mathbf{p}|.\] So the bisector \(B_{12}\) is given by all \([x,y,z,1]^{t}\) with the conditions \[|-\frac{1}{2}\cdot x-\frac{1}{2}\cdot y+\frac{\sqrt{3}}{2}\cdot z-\frac{3}{2} \cdot 1|=|\frac{3}{2}\cdot x-\frac{3}{2}\cdot y+\frac{\sqrt{3}}{2}\cdot z- \frac{5}{2}\cdot 1|\] and \[x^{2}+y^{2}+z^{2}<1.\] This is a round disk in the Klein model of \(\mathbf{H}_{\mathbb{R}}^{3}\). Similarly, we can give all the bisectors \(B_{21}\), \(B_{23}\), \(B_{32}\), \(B_{34}\), \(B_{43}\), \(B_{41}\), \(B_{14}\), \(B_{13}\) and \(B_{24}\). Let \(A_{k,i}=I_{k,i}I_{k,i+1}\) for \(i=1,2,3,4\), indices mod \(4\). It is not difficult to show the following directly (without the heavier machinery of Section 6); we omit the details. **Proposition 5.1**.: \(D_{R}\) _is the Dirichlet domain for \(\rho_{\pi}(K)<\mathbf{PO}(3,1)\) acting on \(\mathbf{H}_{\mathbb{R}}^{3}\) with center \(p_{0}\). 
Moreover, the group \(\rho_{\pi}(K)=\langle A_{k,1},A_{k,2},A_{k,3},A_{k,4}\rangle\) is discrete and has a presentation_ \[\rho_{\pi}(K)=\left\langle A_{k,1},A_{k,2},A_{k,3},A_{k,4}\right|\begin{array} {l}A_{k,1}A_{k,2}A_{k,3}A_{k,4}=id,\\ (A_{k,1}A_{k,2})^{2}=(A_{k,2}A_{k,3})^{2}=id,\\ (A_{k,3}A_{k,4})^{2}=(A_{k,4}A_{k,1})^{2}=id\end{array}\right\rangle\] See Figure 7 for the partial Dirichlet domain \(D_{R}\) of \(\rho_{\pi}(K)\) acting on \(\mathbf{H}_{\mathbb{R}}^{3}\). The color rule in Figure 7 is: * \(B_{12}\), gray; * \(B_{21}\), cyan; * \(B_{23}\), blue; * \(B_{32}\), yellow; * \(B_{34}\), green; * \(B_{43}\), brown; * \(B_{41}\), aquamarine; * \(B_{14}\), red; * \(B_{13}\), purple; * \(B_{24}\), black. Figure 7. Part of a realistic view of the Dirichlet domain of \(\rho_{\pi}(K)<\mathbf{PO}(3,1)\) from a viewpoint inside \(D_{R}\). The steelblue sphere is the ideal boundary of \(\mathbf{H}_{\mathbb{R}}^{3}\). For example, the brown rectangle in Figure 7 is \(B_{43}\cap D_{R}\), which is a part of the frontier of \(D_{R}\). The aquamarine and red rectangles are \(B_{41}\cap D_{R}\) and \(B_{14}\cap D_{R}\) respectively. The black hexagon is \(B_{24}\cap D_{R}\). But the other \(B_{ij}\cap D_{R}\) are not so clear from this realistic figure. See also Figure 8 for an abstract picture of the Dirichlet domain of \(\rho_{\pi}(K)<\mathbf{PO}(3,1)\). Figures 7 and 8 in \(\mathbf{H}^{3}_{\mathbb{R}}\)-geometry should be compared with Figures 5 and 6 in \(\mathbf{H}^{2}_{\mathbb{C}}\)-geometry of \(\rho_{\frac{5\pi}{6}}(K)\). In particular, both Dirichlet domains are given by the same set \(R\). Moreover, the intersection patterns of the sides of \(D_{R}\) are also the same. This suggests that the Dirichlet domain of \(\rho_{\theta}(K)\) for general \(\theta\in(\frac{5\pi}{6},\pi]\) is also given by the set \(R\). This is what we do in Section 6. ## 6. Dirichlet domain of \(\rho_{\theta}(K)\) in \(\mathbf{H}^{3}_{\mathbb{C}}\) In this section, we prove Theorem 1.1 for all \(\theta\in(\frac{5\pi}{6},\pi]\). Figure 8. The abstract picture of the boundary of the Dirichlet domain of \(\rho_{\pi}(K)<\mathbf{PO}(3,1)\), looking outward from \(p_{0}\), which is a \(2\)-sphere with labeled disks. For example, the disk labeled by \(B_{12}\) means \(B_{12}\cap D_{R}\). The outer disk is \(B_{13}\cap D_{R}\). Each of the four disks labeled by \(\partial\mathbf{H}^{3}_{\mathbb{R}}\) means a disk in \(\partial\mathbf{H}^{3}_{\mathbb{R}}\). ### Some preliminaries For any \(\theta\in(\frac{5\pi}{6},\pi]\) in the moduli space \(\mathcal{M}\), we denote by \(p_{ij}=I_{i}I_{j}(p_{0})\in\mathbf{H}^{3}_{\mathbb{C}}\), and \(B_{ij}=B(p_{0},p_{ij})\) the bisector in \(\mathbf{H}^{3}_{\mathbb{C}}\) with respect to the points \(p_{0}\) and \(p_{ij}\). We also consider the partial Dirichlet domain \(D_{R}\) for the same set \(R\) as in Sections 4 and 5. Recall that \(R\) is the following set of ten words in \(\rho_{\theta}(K)\): \[\left\{(I_{1}I_{2})^{\pm 1},\ (I_{2}I_{3})^{\pm 1},\ (I_{3}I_{4})^{\pm 1},\ (I_{4}I_{ 1})^{\pm 1},\ I_{1}I_{3},\ I_{2}I_{4}\right\}.\] _Remark 6.1_.: The main technical difference between the partial Dirichlet domains \(D_{R}\) when \(\theta\in(\frac{5\pi}{6},\pi]\) and when \(\theta=\frac{5\pi}{6}\) is: * \(B_{12}\cap B_{34}=\emptyset\) when \(\theta\) is near \(\pi\); * \(B_{12}\cap B_{34}\neq\emptyset\) when \(\theta\) is near \(\frac{5\pi}{6}\) (in particular, when \(\theta=\frac{5\pi}{6}\), \(B_{12}\cap B_{34}\) is non-empty as shown in Proposition 4.2). 
But in any case \(B_{12}\cap B_{34}\) does not lie in the partial Dirichlet domain \(D_{R}\); see Proposition 6.4. Now the fixed point of \(J\) is \(p_{0}=[0,0,0,1]^{t}\), and \[p_{12}=\left[\begin{array}{c}-\mathrm{e}^{-\theta\mathrm{i}}\sqrt{2\cos(2 \theta)+1}\\ \frac{(-2\mathrm{e}^{-\theta\mathrm{i}}+1+\mathrm{i})\sqrt{-1-2\cos(\theta)+2 \sin(\theta)+2\sin(2\theta)}}{2}\\ \frac{(-2\mathrm{e}^{-\theta\mathrm{i}}+1-\mathrm{i})\sqrt{-2\sin(2\theta)-2 \cos(\theta)-1-2\sin(\theta)}}{2}\\ \mathrm{e}^{-2\theta\mathrm{i}}-\mathrm{e}^{\theta\mathrm{i}}+1\end{array} \right], \tag{6.1}\] \[p_{21}=\left[\begin{array}{c}\mathrm{e}^{\theta\mathrm{i}}\sqrt{2\cos(2 \theta)+1}\\ \frac{(-2\mathrm{i}\mathrm{e}^{\theta\mathrm{i}}+1+\mathrm{i})\sqrt{-1-2\cos( \theta)+2\sin(\theta)+2\sin(2\theta)}}{2}\\ \frac{(2\mathrm{i}\mathrm{e}^{\theta\mathrm{i}}+1-\mathrm{i})\sqrt{-2\sin(2 \theta)-2\cos(\theta)-1-2\sin(\theta)}}{2}\\ \mathrm{e}^{2\theta\mathrm{i}}-\mathrm{e}^{-\theta\mathrm{i}}+1\end{array} \right], \tag{6.2}\] \[p_{13}=\left[\begin{array}{c}\sqrt{2\cos(2\theta)+1}\\ 0\\ 0\\ -2\cos(\theta)\end{array}\right]. \tag{6.3}\] From the coordinates of these points in \(\mathbf{H}^{3}_{\mathbb{C}}\), it is easy to get the coordinates of all \(p_{i,i\pm 1}\) for \(i=1,2,3,4\) mod \(4\) and of \(p_{24}\) by the \(J\)-action. We fix lifts \(\mathbf{p_{0}}\) and \(\mathbf{p_{ij}}\) of \(p_{0}\) and \(p_{ij}\) as vectors in \(\mathbb{C}^{3,1}\), such that the entries of \(\mathbf{p_{0}}\) and \(\mathbf{p_{ij}}\) are just the same as the entries of \(p_{0}\) and \(p_{ij}\) above. We have \[\langle\mathbf{p_{0}},\mathbf{p_{0}}\rangle=\langle\mathbf{p_{ij}},\mathbf{p_ {ij}}\rangle=-1\] for \(\mathbf{p_{ij}}\) above. First we have **Lemma 6.2**.: _For any \(\theta\in(\frac{5\pi}{6},\pi]\) in the moduli space \(\mathcal{M}\), the four vectors \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\), \(\mathbf{p_{13}}\) and \(\mathbf{p_{14}}\) are linearly independent in \(\mathbb{C}^{3,1}\)._ Proof.: This is proved by a direct calculation. Consider the matrix \(S\) whose four columns are the vectors \(p_{0}\), \(p_{12}\), \(p_{13}\) and \(p_{14}\) given by (6.1), (6.3) and the \(J\)-action respectively. Then \[\det(S)=(-2\mathrm{i}\cos(\theta)-2\mathrm{i}\cos(3\theta)-2\sin(3\theta)+ \mathrm{i})\sqrt{4\cos^{2}(2\theta)-1}.\] Now it is easy to see that \(\det(S)=0\) when \(\theta=\frac{5\pi}{6}\) and that \(\det(S)\) is non-zero when \(\theta\in(\frac{5\pi}{6},\pi]\). Lemma 6.2 also explains why we prove Theorem 1.1 for \(\theta=\frac{5\pi}{6}\) separately in Section 4: the proof in this section does not hold when the four vectors \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\), \(\mathbf{p_{13}}\) and \(\mathbf{p_{14}}\) span a three-dimensional subspace of \(\mathbb{C}^{3,1}\). **Lemma 6.3**.: _For any \(\theta\in(\frac{5\pi}{6},\pi]\) in the moduli space \(\mathcal{M}\), the four vectors \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\), \(\mathbf{p_{34}}\) and \(\mathbf{p_{13}}\) are co-planar in \(\mathbb{C}^{3,1}\)._ Proof.: This is also proved by a direct calculation. In fact, \[2\mathrm{e}^{\theta\mathrm{i}}\cdot\mathbf{p_{0}}+\mathbf{p_{12}}+\mathbf{p_{3 4}}+2\mathrm{e}^{-\theta\mathrm{i}}\cdot\mathbf{p_{13}}\] is the zero vector in \(\mathbb{C}^{3,1}\). ### Intersection patterns of the bisectors of the partial Dirichlet domain \(D_{R}\) In this subsection, we study the intersection patterns of the bisectors for \(D_{R}\). 
The reader may compare to Table 1 in Section 4.2 (there is only one difference, on the intersection \(B_{12}\cap B_{34}\), as noted in Remark 6.1). We first consider the intersections of the bisector \(B_{12}\) with the other bisectors; compared to Proposition 4.2 in Subsection 4.3, the proof here is much more involved. **Proposition 6.4**.: _For any \(\theta\in(\frac{5\pi}{6},\pi]\), the bisector \(B_{12}\) of \(I_{1}I_{2}\) has the following properties:_ 1. \(B_{12}\) _is tangent to_ \(B_{21}\)_;_ 2. \(B_{12}\) _does not intersect_ \(B_{23}\)_;_ 3. \(B_{12}\) _does not intersect_ \(B_{32}\)_;_ 4. \(B_{12}\) _does not intersect_ \(B_{43}\)_;_ 5. \(B_{12}\) _does not intersect_ \(B_{41}\)_;_ 6. \(B_{12}\) _does not intersect_ \(B_{24}\)_;_ 7. \(B_{12}\) _intersects_ \(B_{34}\) _in a non-empty Giraud disk_ \(B_{12}\cap B_{34}\) _in_ \(\mathbf{H}_{\mathbb{C}}^{3}\) _when_ \(\theta\in[\frac{5\pi}{6},\pi]\) _is near_ \(\frac{5\pi}{6}\)_._ \(B_{12}\cap B_{34}=\emptyset\) _when_ \(\theta\in[\frac{5\pi}{6},\pi]\) _is near_ \(\pi\)_. When_ \(B_{12}\cap B_{34}\) _is non-empty,_ \(B_{12}\cap B_{34}\) _lies in the component of_ \(\mathbf{H}_{\mathbb{C}}^{3}-B_{13}\) _which does not contain the point_ \(p_{0}\)_. In particular, in any case_ \(B_{12}\cap B_{34}\) _does not lie in the partial Dirichlet domain_ \(D_{R}\)_._ Proof.: The proof of (1) of Proposition 6.4 runs along the same lines as the proof of (1) of Proposition 4.2. For the proof of (2), we consider the intersection \(B_{12}\cap B_{23}\). It is easy to see that for any \(\theta\in(\frac{5\pi}{6},\pi]\), the span of \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\), \(\mathbf{p_{23}}\) is a 3-dimensional subspace of \(\mathbb{C}^{3,1}\). The intersection of \(\mathbf{H}_{\mathbb{C}}^{3}\subset\mathbf{P}_{\mathbb{C}}^{3}\) with the projection of this 3-dimensional subspace into \(\mathbf{P}_{\mathbb{C}}^{3}\) is denoted by \(L\); then \(L\) is a totally geodesic \(\mathbf{H}_{\mathbb{C}}^{2}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{3}\). We re-denote \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\) and \(\mathbf{p_{23}}\) by \(\mathbf{e_{1}}\), \(\mathbf{e_{2}}\) and \(\mathbf{e_{3}}\). Then we denote by \(H_{L}=H_{L}(p_{0},p_{12},p_{23})\) the matrix \((\mathbf{e_{i}}^{*}\cdot H\cdot\mathbf{e_{j}})_{1\leq i,j\leq 3}\); we have \[H_{L}=\begin{pmatrix}-1&-\mathrm{e}^{-2\theta\mathrm{i}}+\mathrm{e}^{\theta \mathrm{i}}-1&-\mathrm{e}^{-2\theta\mathrm{i}}+\mathrm{e}^{\theta\mathrm{i}}- 1\\ -\mathrm{e}^{2\theta\mathrm{i}}+\mathrm{e}^{-\theta\mathrm{i}}-1&-1&D\\ -\mathrm{e}^{2\theta\mathrm{i}}+\mathrm{e}^{-\theta\mathrm{i}}-1&\bar{D}&-1 \end{pmatrix}.\] Here \[D=2\mathrm{e}^{-3\theta\mathrm{i}}-2\mathrm{e}^{-2\theta\mathrm{i}}+2\mathrm{ e}^{\theta\mathrm{i}}-2\cos(2\theta)-4,\] and \(\bar{D}\) is the complex conjugate of \(D\). Now \(\det(H_{L})\) is \[(-4\cos(4\theta)-22\cos(2\theta)-1+16\cos(3\theta)+12\cos(\theta))\cdot(1+2 \cos(\theta))^{2},\] and one checks that this is negative for \(\theta\in(\frac{5\pi}{6},\pi]\). So \(H_{L}\) is a Hermitian form of signature \((2,1)\) on the subspace with basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) when \(\theta\in(\frac{5\pi}{6},\pi]\). 
The vector \(x_{1}\mathbf{e_{1}}+x_{2}\mathbf{e_{2}}+x_{3}\mathbf{e_{3}}\) is denoted by the vector \(\mathbf{x}\) in \(\mathbb{C}^{3}\), and the vector \(y_{1}\mathbf{e_{1}}+y_{2}\mathbf{e_{2}}+y_{3}\mathbf{e_{3}}\) is denoted by the vector \(\mathbf{y}\) in \(\mathbb{C}^{3}\), here \[\mathbf{x}=\left(\begin{array}{c}x_{1}\\ x_{2}\\ x_{3}\end{array}\right),\quad\mathbf{y}=\left(\begin{array}{c}y_{1}\\ y_{2}\\ y_{3}\end{array}\right)\] with \(x_{i},y_{i}\in\mathbb{C}\). So \[\mathbf{E_{1}}=\left(\begin{array}{c}1\\ 0\\ 0\end{array}\right),\quad\mathbf{E_{2}}=\left(\begin{array}{c}0\\ 1\\ 0\end{array}\right),\quad\mathbf{E_{3}}=\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right)\] in \(\mathbb{C}^{3}\) represent the vectors \(\mathbf{e_{1}}\), \(\mathbf{e_{2}}\) and \(\mathbf{e_{3}}\) in \(\mathbb{C}^{3,1}\). Following Subsection 2.5, we define the Hermitian cross-product \(\boxtimes_{L}\) on the subspace endowed with \(H_{L}\) (which is isometric to \(\mathbf{H}_{\mathbb{C}}^{2}\)) with respect to the basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) by \[\mathbf{x}\boxtimes_{L}\mathbf{y}=\begin{bmatrix}\mathbf{x}^{*}H_{L}(1,2) \cdot\mathbf{y}^{*}H_{L}(1,3)-\mathbf{y}^{*}H_{L}(1,2)\cdot\mathbf{x}^{*}H_{L }(1,3)\\ \mathbf{x}^{*}H_{L}(1,3)\cdot\mathbf{y}^{*}H_{L}(1,1)-\mathbf{y}^{*}H_{L}(1,3) \cdot\mathbf{x}^{*}H_{L}(1,1)\\ \mathbf{x}^{*}H_{L}(1,1)\cdot\mathbf{y}^{*}H_{L}(1,2)-\mathbf{y}^{*}H_{L}(1,1) \cdot\mathbf{x}^{*}H_{L}(1,2)\end{bmatrix}.\] Here \(\mathbf{x}^{*}H_{L}\) is a one-by-three matrix, and \(\mathbf{x}^{*}H_{L}(1,2)\) is the second entry of \(\mathbf{x}^{*}H_{L}\). The other terms have similar meanings. Then the intersection \(B_{12}\cap B_{23}\cap L\) is parameterized by \(V=V(z_{1},z_{2})\in\mathbb{C}^{3}\) with \(\langle V,V\rangle<0\) with respect to the Hermitian form \(H_{L}\), where \[V=\mathbf{E_{2}}\boxtimes_{L}\mathbf{E_{3}}+z_{1}\cdot\mathbf{E_{1}}\boxtimes _{L}\mathbf{E_{3}}+z_{2}\cdot\mathbf{E_{1}}\boxtimes_{L}\mathbf{E_{2}}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{S} ^{1}\times\mathbb{S}^{1}\). 
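Since \(W=\sum_{i}V(i,1)\cdot\mathbf{e_{i}}\), we have \(W^{*}\cdot H\cdot W=V^{*}\cdot H_{L}\cdot V\), so the positivity check below needs only \(H_{L}\). The following is a minimal numerical sketch of that check (Python with numpy; a coarse grid scan standing in for Maple's minimization, with our own helper names, and \(H_{L}\) transcribed from the display above):

```python
import numpy as np

def H_L_23(theta):
    """Gram matrix (e_i* . H . e_j) for (p0, p12, p23), as displayed above."""
    e = lambda t: np.exp(1j * t)
    a = -e(-2*theta) + e(theta) - 1
    D = 2*e(-3*theta) - 2*e(-2*theta) + 2*e(theta) - 2*np.cos(2*theta) - 4
    return np.array([[-1, a, a],
                     [np.conj(a), -1, D],
                     [np.conj(a), np.conj(D), -1]])

def boxtimes(x, y, HL):
    """Hermitian cross product x (boxtimes_L) y, via the formula above."""
    xH, yH = x.conj() @ HL, y.conj() @ HL   # the row vectors x* H_L and y* H_L
    return np.array([xH[1]*yH[2] - yH[1]*xH[2],
                     xH[2]*yH[0] - yH[2]*xH[0],
                     xH[0]*yH[1] - yH[0]*xH[1]])

def min_VV_on_grid(HL, n=80):
    """min over an (r, s)-grid of <W, W> = V* H_L V for V = V(e^{ir}, e^{is})."""
    E = np.eye(3, dtype=complex)
    b23 = boxtimes(E[1], E[2], HL)
    b13 = boxtimes(E[0], E[2], HL)
    b12 = boxtimes(E[0], E[1], HL)
    grid = np.linspace(-np.pi, np.pi, n)
    return min(((V.conj() @ HL @ V).real
                for r in grid for s in grid
                for V in [b23 + np.exp(1j*r)*b13 + np.exp(1j*s)*b12]))

for theta in np.linspace(5*np.pi/6, np.pi, 25):
    assert min_VV_on_grid(H_L_23(theta)) > 0   # no negative W: B12 ∩ B23 ∩ L = ∅
```

On such grids the minimum stays far above zero for all sampled \(\theta\), consistent with the Maple minimum \(734.88\) quoted below.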
We note that \[E_{1}\boxtimes_{L}E_{2}=\left[\begin{array}{c}D_{1}\\ 2\mathrm{i}\sin(\theta)-2\mathrm{i}\sin(3\theta)+2\cos(2\theta)+1+2\mathrm{i }\sin(2\theta)\\ 2\cos(3\theta)+2\cos(\theta)-2\cos(2\theta)-2\end{array}\right],\] where \[D_{1}=-5\mathrm{i}\sin(2\theta)+\mathrm{i}\sin(3\theta)-3\mathrm{i}\sin(4 \theta)+2\mathrm{i}\sin(5\theta)+4+11\cos(2\theta)\] \[-3\cos(3\theta)-2\cos(5\theta)-10\cos(\theta)+3\cos(4\theta).\] We also have \[E_{1}\boxtimes_{L}E_{3}=\left[\begin{array}{c}D_{2}\\ -2\cos(3\theta)-2\cos(\theta)+2\cos(2\theta)+2\\ 2{\rm i}\sin(\theta)-2{\rm i}\sin(3\theta)+2\cos(2\theta)+1+2{\rm i}\sin(2 \theta)\end{array}\right],\] where \[D_{2}={\rm i}\sin(2\theta)+3{\rm i}\sin(3\theta)-{\rm i}\sin(4 \theta)+2{\rm i}\sin(\theta)-8-7\cos(2\theta)+8\cos(\theta)\] \[+7\cos(3\theta)-3\cos(4\theta),\] and \[E_{2}\boxtimes_{L}E_{3}=\left[\begin{array}{c}4\cos(5\theta)+2 8\cos(3\theta)+32\cos(\theta)-14\cos(4\theta)-32\cos(2\theta)-33\\ D_{3}\\ D_{4}\end{array}\right],\] here \[D_{3}={\rm i}\sin(2\theta)+3{\rm i}\sin(3\theta)-{\rm i}\sin(4 \theta)+2{\rm i}\sin(\theta)+8+7\cos(2\theta)-8\cos(\theta)\] \[-7\cos(3\theta)+3\cos(4\theta)\] and \[D_{4}=5{\rm i}\sin(2\theta)-{\rm i}\sin(3\theta)+3{\rm i}\sin(4 \theta)-2{\rm i}\sin(5\theta)+4+11\cos(2\theta)-3\cos(3\theta)\] \[-2\cos(5\theta)-10\cos(\theta)+3\cos(4\theta).\] The vector \(V\in\mathbb{C}^{3}\) is a three-by-one matrix; we denote by \(V(i,1)\) the entry in the \(i\)-th row of \(V\). Then \[V(1,1)\cdot{\bf e_{1}}+V(2,1)\cdot{\bf e_{2}}+V(3,1)\cdot{\bf e_{3}}\] is a vector in \(\mathbb{C}^{3,1}\); we denote it by \(W\). The projection of \(W\) into \(\mathbb{C}{\bf P}^{3}\) is a point in \(B_{12}\cap B_{23}\cap L\) if \(W\) is a negative vector. Now \(W^{*}\cdot H\cdot W\) is a very complicated term. But with Maple, \(W^{*}\cdot H\cdot W\) has minimum \(734.88\) numerically when \(r,s\in[-\pi,\pi]\) and \(\theta\in[\frac{5\pi}{6},\pi]\), attained at a point with \(\theta=\frac{5\pi}{6}\). In particular, any \(W\) above is a positive vector in \(\mathbb{C}^{3,1}\), so \(B_{12}\cap B_{23}\cap L=\emptyset\). Then by the projection from \({\bf H}_{\mathbb{C}}^{3}\) to \(L\), we conclude that \(B_{12}\cap B_{23}=\emptyset\) in \({\bf H}_{\mathbb{C}}^{3}\). We remark that the minimum of \(W^{*}\cdot H\cdot W\) when \(\theta=\frac{5\pi}{6}\) is \(734.88\) here, which is different from the minimum \(12.752\) in Proposition 4.2. The reason is that even though the Hermitian form \(H_{L}\) is related to \(H\), it is not the same as \(H\). This ends the proof of (2). For the proof of (3), we consider \(B_{12}\cap B_{32}\). For any \(\theta\in(\frac{5\pi}{6},\pi]\), the span of \({\bf p_{0}}\), \({\bf p_{12}}\), \({\bf p_{32}}\) is a 3-dimensional subspace of \(\mathbb{C}^{3,1}\). The intersection of \({\bf H}_{\mathbb{C}}^{3}\subset{\bf P}_{\mathbb{C}}^{3}\) with the projection of this 3-dimensional subspace into \({\bf P}_{\mathbb{C}}^{3}\) is denoted by \(L\); then \(L\) is a totally geodesic \({\bf H}_{\mathbb{C}}^{2}\hookrightarrow{\bf H}_{\mathbb{C}}^{3}\). We re-denote \({\bf p_{0}}\), \({\bf p_{12}}\) and \({\bf p_{32}}\) by \({\bf e_{1}}\), \({\bf e_{2}}\) and \({\bf e_{3}}\). 
We denote by \(H_{L}=H_{L}(p_{0},p_{12},p_{32})\) the matrix \(({\bf e_{i}}^{*}\cdot H\cdot{\bf e_{j}})_{1\leq i,j\leq 3}\); then \[H_{L}=\begin{pmatrix}-1&-{\rm e}^{-2\theta{\rm i}}+{\rm e}^{\theta{\rm i}}-1&- {\rm e}^{2\theta{\rm i}}+{\rm e}^{-\theta{\rm i}}-1\\ -{\rm e}^{2\theta{\rm i}}+{\rm e}^{-\theta{\rm i}}-1&-1&6\cos(\theta)-4\cos(2 \theta)\\ -{\rm e}^{-2\theta{\rm i}}+{\rm e}^{\theta{\rm i}}-1&6\cos(\theta)-4\cos(2 \theta)&-1\end{pmatrix}.\] Now \(\det(H_{L})\) is \[-(4\cos(\theta)-3)\cdot(-1+2\cos(\theta))\cdot(4\cos(\theta)^{2}-2\cos(\theta )-3)\cdot(1+2\cos(\theta))^{2}.\] So \(H_{L}\) is the Hermitian form with signature \((2,1)\) on the subspace with the basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) when \(\theta\in(\frac{5\pi}{6},\pi]\). The vector \(x_{1}\mathbf{e_{1}}+x_{2}\mathbf{e_{2}}+x_{3}\mathbf{e_{3}}\) is denoted by the vector \(\mathbf{x}\) in \(\mathbb{C}^{3}\), and the vector \(y_{1}\mathbf{e_{1}}+y_{2}\mathbf{e_{2}}+y_{3}\mathbf{e_{3}}\) is denoted by the vector \(\mathbf{y}\) in \(\mathbb{C}^{3}\), here \[\mathbf{x}=\left(\begin{array}{c}x_{1}\\ x_{2}\\ x_{3}\end{array}\right),\quad\mathbf{y}=\left(\begin{array}{c}y_{1}\\ y_{2}\\ y_{3}\end{array}\right)\] with \(x_{i},y_{i}\in\mathbb{C}\). So \[\mathbf{E_{1}}=\left(\begin{array}{c}1\\ 0\\ 0\end{array}\right),\quad\mathbf{E_{2}}=\left(\begin{array}{c}0\\ 1\\ 0\end{array}\right),\quad\mathbf{E_{3}}=\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right)\] in \(\mathbb{C}^{3}\) represent the vectors \(\mathbf{e_{1}}\), \(\mathbf{e_{2}}\) and \(\mathbf{e_{3}}\) in \(\mathbb{C}^{3,1}\). Following Subsection 2.5, we define the Hermitian cross-product \(\boxtimes_{L}\) on the subspace endowed with \(H_{L}\) (a copy of \(\mathbf{H}_{\mathbb{C}}^{2}\)) with respect to the basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) by \[\mathbf{x}\boxtimes_{L}\mathbf{y}=\begin{bmatrix}\mathbf{x}^{*}H_{L}(1,2) \cdot\mathbf{y}^{*}H_{L}(1,3)-\mathbf{y}^{*}H_{L}(1,2)\cdot\mathbf{x}^{*}H_{L }(1,3)\\ \mathbf{x}^{*}H_{L}(1,3)\cdot\mathbf{y}^{*}H_{L}(1,1)-\mathbf{y}^{*}H_{L}(1,3) \cdot\mathbf{x}^{*}H_{L}(1,1)\\ \mathbf{x}^{*}H_{L}(1,1)\cdot\mathbf{y}^{*}H_{L}(1,2)-\mathbf{y}^{*}H_{L}(1,1 )\cdot\mathbf{x}^{*}H_{L}(1,2)\end{bmatrix}.\] Then the intersection \(B_{12}\cap B_{32}\cap L\) is parameterized by \(V=V(z_{1},z_{2})\in\mathbb{C}^{3}\) with \(\langle V,V\rangle<0\) with respect to the Hermitian form \(H_{L}\), where \[V=\mathbf{E_{2}}\boxtimes_{L}\mathbf{E_{3}}+z_{1}\cdot\mathbf{E_{1}}\boxtimes _{L}\mathbf{E_{3}}+z_{2}\cdot\mathbf{E_{1}}\boxtimes_{L}\mathbf{E_{2}}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{S }^{1}\times\mathbb{S}^{1}\). 
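The same scan applies verbatim to this case; a short sketch reusing `boxtimes` and `min_VV_on_grid` from the previous snippet (again our own names, with \(H_{L}\) transcribed from the display above):

```python
def H_L_32(theta):
    """Gram matrix (e_i* . H . e_j) for (p0, p12, p32), as displayed above."""
    a = -np.exp(-2j*theta) + np.exp(1j*theta) - 1
    d = 6*np.cos(theta) - 4*np.cos(2*theta)   # the real (2,3)-entry
    return np.array([[-1, a, np.conj(a)],
                     [np.conj(a), -1, d],
                     [a, d, -1]])

for theta in np.linspace(5*np.pi/6, np.pi, 25):
    assert min_VV_on_grid(H_L_32(theta)) > 0   # so B12 ∩ B32 ∩ L = ∅
```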
We note that \(E_{1}\boxtimes_{L}E_{2}\) is \[\left[\begin{array}{c}(1+2\cos(\theta))(8\cos^{2}(\theta)\mathrm{e}^{-\theta \mathrm{i}}+3\mathrm{i}\sin(2\theta)+3\mathrm{i}\sin(\theta)-14\cos(\theta)^{2} +5\cos(\theta))\\ 3+8\cos^{3}(\theta)\mathrm{e}^{\theta\mathrm{i}}-10\cos^{2}(\theta)-\mathrm{i} \sin(2\theta)+2\cos(\theta)\\ 4\cos(\theta)(\cos(\theta)-1)(1+2\cos(\theta))\end{array}\right],\] \(E_{1}\boxtimes_{L}E_{3}\) is \[\left[\begin{array}{c}(1+2\cos(\theta))(-8\cos^{2}(\theta)\mathrm{e}^{ \theta\mathrm{i}}+3\mathrm{i}\sin(2\theta)+3\mathrm{i}\sin(\theta)+14\cos( \theta)^{2}-5\cos(\theta))\\ 4\cos(\theta)(1-\cos(\theta))(1+2\cos(\theta))\\ -3-8\cos^{3}(\theta)\mathrm{e}^{-\theta\mathrm{i}}+10\cos^{2}(\theta)-\mathrm{i }\sin(2\theta)-2\cos(\theta)\end{array}\right],\] and \(E_{2}\boxtimes_{L}E_{3}\) is \[\left[\begin{array}{c}(5-4\cos(\theta))(1+2\cos(\theta))(8\cos^{2}(\theta)- 6\cos(\theta)-3)\\ (1+2\cos(\theta))(8\cos^{2}(\theta)\mathrm{e}^{-\theta\mathrm{i}}+3\mathrm{i} \sin(2\theta)+3\mathrm{i}\sin(\theta)-14\cos(\theta)^{2}+5\cos(\theta))\\ (1+2\cos(\theta))(8\cos^{2}(\theta)\mathrm{e}^{\theta\mathrm{i}}-3\mathrm{i} \sin(2\theta)-3\mathrm{i}\sin(\theta)-14\cos(\theta)^{2}+5\cos(\theta))\end{array} \right].\] The vector \(V\in\mathbb{C}^{3}\) is a three-by-one matrix, and \[V(1,1)\cdot\mathbf{e_{1}}+V(2,1)\cdot\mathbf{e_{2}}+V(3,1)\cdot\mathbf{e_{3}}\] is a vector in \(\mathbb{C}^{3,1}\); we denote it by \(W\). The projection of \(W\) into \(\mathbb{C}\mathbf{P}^{3}\) is a point in \(B_{12}\cap B_{32}\cap L\) if \(W\) is a negative vector. With Maple, \(W^{*}\cdot H\cdot W\) is \[\left(-256\cdot(\frac{1}{2}+\cos(\theta))^{3}\cdot(\cos(\theta)-\frac{3}{4}) \cdot(-\frac{1}{2}+\cos(\theta))\cdot(\cos^{2}(\theta)-\frac{\cos(\theta)}{2} -\frac{3}{4})\right)\cdot h, \tag{6.4}\] where \(h\) is \[21-8\cos(3\theta)+36\cos(2\theta)+4\cos(r-2\theta)+10\cos(r+2\theta )-50\cos(\theta)+14\cos(r)\] \[-14\cos(s)+2\cos(-2\theta+r-s)+2\cos(\theta+r-s)-4\cos(3\theta+r)+ 10\cos(-\theta+s)\] \[+12\cos(\theta+s)-10\cos(s-2\theta)-4\cos(s+2\theta)-2\cos(-3 \theta+r-s)\] \[-4\cos(r-s)-12\cos(-\theta+r)-10\cos(\theta+r)+4\cos(-3\theta+s).\] Note that the first term of (6.4) is positive when \(\theta\in[\frac{5\pi}{6},\pi]\). By Maple, \(h\) has minimum \(6.5305\) numerically when \(r,s\in[-\pi,\pi]\) and \(\theta\in[\frac{5\pi}{6},\pi]\), attained at a point with \(\theta=\frac{5\pi}{6}\). In particular, any \(W\) above is a positive vector in \(\mathbb{C}^{3,1}\), so \(B_{12}\cap B_{32}\cap L=\emptyset\). Then by the projection from \(\mathbf{H}_{\mathbb{C}}^{3}\) to \(L\), we conclude that \(B_{12}\cap B_{32}=\emptyset\) in \(\mathbf{H}_{\mathbb{C}}^{3}\). See Figure 9 for an illustration of this fact when \(\theta=\frac{5\pi}{6}\). This ends the proof of (3). For the proof of (4), we consider \(B_{12}\cap B_{43}\). For any \(\theta\in(\frac{5\pi}{6},\pi]\), the span of \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\), \(\mathbf{p_{43}}\) is a 3-dimensional subspace of \(\mathbb{C}^{3,1}\). The intersection of \(\mathbf{H}_{\mathbb{C}}^{3}\subset\mathbf{P}_{\mathbb{C}}^{3}\) with the projection of this 3-dimensional subspace into \(\mathbf{P}_{\mathbb{C}}^{3}\) is denoted by \(L\); then \(L\) is a totally geodesic \(\mathbf{H}_{\mathbb{C}}^{2}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{3}\). We re-denote \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\) and \(\mathbf{p_{43}}\) by \(\mathbf{e_{1}}\), \(\mathbf{e_{2}}\) and \(\mathbf{e_{3}}\). 
We denote by \(H_{L}=H_{L}(p_{0},p_{12},p_{43})\) the matrix \((\mathbf{e_{i}}^{*}\cdot H\cdot\mathbf{e_{j}})_{1\leq i,j\leq 3}\); then \[H_{L}=\begin{pmatrix}-1&-\mathrm{e}^{-2\theta\mathrm{i}}+\mathrm{e}^{\theta \mathrm{i}}-1&-\mathrm{e}^{2\theta\mathrm{i}}+\mathrm{e}^{-\theta\mathrm{i}}- 1\\ -\mathrm{e}^{2\theta\mathrm{i}}+\mathrm{e}^{-\theta\mathrm{i}}-1&-1&D_{5}\\ -\mathrm{e}^{-2\theta\mathrm{i}}+\mathrm{e}^{\theta\mathrm{i}}-1&\overline{D} _{5}&-1\end{pmatrix},\] where \[D_{5}=-32\cos^{3}(\theta)\mathrm{e}^{\theta\mathrm{i}}+6\mathrm{i}\sin(2\theta)+2 \mathrm{i}\sin(\theta)+24\cos^{2}(\theta)+4\cos(\theta)-3.\] Figure 9. The positivity of the term \(h\) in the proof of (3) of Proposition 6.4. In this figure, we take \(h=h(r,s)\). The blue surface is the graph of the function \(h\) when \(\theta=\frac{5\pi}{6}\). The red plane is \(h=0\). This elucidates (but does not prove) that \(h\) is positive when \(\theta=\frac{5\pi}{6}\). For general \(\theta\in(\frac{5\pi}{6},\pi]\), the graph of \(h=h(r,s)\) is a surface similar to the surface in the figure, and it is also disjoint from the plane \(h=0\). Now \(\det(H_{L})\) is \[-64\cos^{6}(\theta)+160\cos^{5}(\theta)+144\cos^{4}(\theta)-168\cos^{3}(\theta) -96\cos^{2}(\theta)+40\cos(\theta)+20.\] So \(H_{L}\) is the Hermitian form with signature \((2,1)\) on the subspace with the basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) when \(\theta\in(\frac{5\pi}{6},\pi]\). The vector \(x_{1}\mathbf{e_{1}}+x_{2}\mathbf{e_{2}}+x_{3}\mathbf{e_{3}}\) is denoted by the vector \(\mathbf{x}\) in \(\mathbb{C}^{3}\), and the vector \(y_{1}\mathbf{e_{1}}+y_{2}\mathbf{e_{2}}+y_{3}\mathbf{e_{3}}\) is denoted by the vector \(\mathbf{y}\) in \(\mathbb{C}^{3}\), here \[\mathbf{x}=\left(\begin{array}{c}x_{1}\\ x_{2}\\ x_{3}\end{array}\right),\quad\mathbf{y}=\left(\begin{array}{c}y_{1}\\ y_{2}\\ y_{3}\end{array}\right)\] with \(x_{i},y_{i}\in\mathbb{C}\). So \[\mathbf{E_{1}}=\left(\begin{array}{c}1\\ 0\\ 0\end{array}\right),\quad\mathbf{E_{2}}=\left(\begin{array}{c}0\\ 1\\ 0\end{array}\right),\quad\mathbf{E_{3}}=\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right)\] in \(\mathbb{C}^{3}\) represent the vectors \(\mathbf{e_{1}}\), \(\mathbf{e_{2}}\) and \(\mathbf{e_{3}}\) in \(\mathbb{C}^{3,1}\). Following Subsection 2.5, we define the Hermitian cross-product \(\boxtimes_{L}\) on the subspace endowed with \(H_{L}\) (isometric to \(\mathbf{H}_{\mathbb{C}}^{2}\)) with respect to the basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) by \[\mathbf{x}\boxtimes_{L}\mathbf{y}=\begin{bmatrix}\mathbf{x}^{*}H_{L}(1,2) \cdot\mathbf{y}^{*}H_{L}(1,3)-\mathbf{y}^{*}H_{L}(1,2)\cdot\mathbf{x}^{*}H_{ L}(1,3)\\ \mathbf{x}^{*}H_{L}(1,3)\cdot\mathbf{y}^{*}H_{L}(1,1)-\mathbf{y}^{*}H_{L}(1,3 )\cdot\mathbf{x}^{*}H_{L}(1,1)\\ \mathbf{x}^{*}H_{L}(1,1)\cdot\mathbf{y}^{*}H_{L}(1,2)-\mathbf{y}^{*}H_{L}(1,1 )\cdot\mathbf{x}^{*}H_{L}(1,2)\end{bmatrix}.\] Then the intersection \(B_{12}\cap B_{43}\cap L\) is parameterized by \(V=V(z_{1},z_{2})\in\mathbb{C}^{3}\) with \(\langle V,V\rangle<0\) with respect to the Hermitian form \(H_{L}\), where \[V=\mathbf{E_{2}}\boxtimes_{L}\mathbf{E_{3}}+z_{1}\cdot\mathbf{E_{1}}\boxtimes _{L}\mathbf{E_{3}}+z_{2}\cdot\mathbf{E_{1}}\boxtimes_{L}\mathbf{E_{2}}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{ S}^{1}\times\mathbb{S}^{1}\). 
The vector \(V\in\mathbb{C}^{3}\) is a three-by-one matrix, and \[V(1,1)\cdot\mathbf{e_{1}}+V(2,1)\cdot\mathbf{e_{2}}+V(3,1)\cdot\mathbf{e_{3}}\] is a vector in \(\mathbb{C}^{3,1}\); we denote it by \(W\). The projection of \(W\) into \(\mathbb{C}\mathbf{P}^{3}\) is a point in \(B_{12}\cap B_{43}\cap L\) if \(W\) is a negative vector. Now \(W^{*}\cdot H\cdot W\) is a very complicated term. But with Maple, \[W^{*}\cdot H\cdot W=-2(1+2\cos(\theta))^{3}\cdot h,\] where \(h\) is \[2213-1992\cos(3\theta)+718\cos(r)+3041\cos(2\theta)+...\] \[-3\cos(s-7\theta)-16\cos(s+7\theta)+\cos(r-s-6\theta).\] (We omit many terms of \(h\).) Note that the first term of \(W^{*}\cdot H\cdot W\) is positive when \(\theta\in[\frac{5\pi}{6},\pi]\). By Maple, \(h\) has minimum \(1521.583\) numerically when \(r,s\in[-\pi,\pi]\) and \(\theta\in[\frac{5\pi}{6},\pi]\), attained at a point with \(\theta=\frac{5\pi}{6}\). In particular, \(W\) above is a positive vector in \(\mathbb{C}^{3,1}\), so \(B_{12}\cap B_{43}\cap L=\emptyset\). Then by the projection from \(\mathbf{H}_{\mathbb{C}}^{3}\) to \(L\), we conclude that \(B_{12}\cap B_{43}=\emptyset\) in \(\mathbf{H}_{\mathbb{C}}^{3}\). This ends the proof of (4). Note that \(B_{12}\cap B_{41}=J^{-1}(B_{12}\cap B_{23})\). Since we have proved that \(B_{12}\cap B_{23}=\emptyset\), we have \(B_{12}\cap B_{41}=\emptyset\). This ends the proof of (5). For the proof of (6), we consider \(B_{12}\cap B_{24}\). For any \(\theta\in(\frac{5\pi}{6},\pi]\), the span of \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\), \(\mathbf{p_{24}}\) is a \(3\)-dimensional subspace of \(\mathbb{C}^{3,1}\). The intersection of \(\mathbf{H}_{\mathbb{C}}^{3}\subset\mathbf{P}_{\mathbb{C}}^{3}\) with the projection of this \(3\)-dimensional subspace into \(\mathbf{P}_{\mathbb{C}}^{3}\) is denoted by \(L\); then \(L\) is a totally geodesic \(\mathbf{H}_{\mathbb{C}}^{2}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{3}\). We re-denote \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\) and \(\mathbf{p_{24}}\) by \(\mathbf{e_{1}}\), \(\mathbf{e_{2}}\) and \(\mathbf{e_{3}}\). We denote by \(H_{L}=H_{L}(p_{0},p_{12},p_{24})\) the matrix \((\mathbf{e_{i}}^{*}\cdot H\cdot\mathbf{e_{j}})_{1\leq i,j\leq 3}\); then \[H_{L}=\begin{pmatrix}-1&-\mathrm{e}^{-2\theta\mathrm{i}}+\mathrm{e}^{\theta \mathrm{i}}-1&2\cos(\theta)\\ -\mathrm{e}^{2\theta\mathrm{i}}+\mathrm{e}^{-\theta\mathrm{i}}-1&-1&D_{6}\\ 2\cos(\theta)&\overline{D}_{6}&-1\end{pmatrix},\] where \[D_{6}=-8\cos^{2}(\theta)\mathrm{e}^{\theta\mathrm{i}}-\mathrm{e}^{\theta \mathrm{i}}-\mathrm{e}^{-2\theta\mathrm{i}}-1.\] Now \(\det(H_{L})\) is \[32\cos^{5}(\theta)-24\cos^{3}(\theta)-4\cos^{2}(\theta)+4\cos(\theta)+1.\] So \(H_{L}\) is the Hermitian form with signature \((2,1)\) on the subspace with the basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) when \(\theta\in(\frac{5\pi}{6},\pi]\). The vector \(x_{1}\mathbf{e_{1}}+x_{2}\mathbf{e_{2}}+x_{3}\mathbf{e_{3}}\) is denoted by the vector \(\mathbf{x}\in\mathbb{C}^{3}\), and the vector \(y_{1}\mathbf{e_{1}}+y_{2}\mathbf{e_{2}}+y_{3}\mathbf{e_{3}}\) is denoted by the vector \(\mathbf{y}\in\mathbb{C}^{3}\), here \[\mathbf{x}=\left(\begin{array}{c}x_{1}\\ x_{2}\\ x_{3}\end{array}\right),\quad\mathbf{y}=\left(\begin{array}{c}y_{1}\\ y_{2}\\ y_{3}\end{array}\right)\] with \(x_{i},y_{i}\in\mathbb{C}\). 
So \[\mathbf{E_{1}}=\left(\begin{array}{c}1\\ 0\\ 0\end{array}\right),\quad\mathbf{E_{2}}=\left(\begin{array}{c}0\\ 1\\ 0\end{array}\right),\quad\mathbf{E_{3}}=\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right)\] in \(\mathbb{C}^{3}\) represent the vectors \(\mathbf{e_{1}}\), \(\mathbf{e_{2}}\) and \(\mathbf{e_{3}}\) in \(\mathbb{C}^{3,1}\). Following Subsection 2.5, we define the Hermitian cross-product \(\boxtimes_{L}\) on the subspace endowed with \(H_{L}\) (which is isometric to \(\mathbf{H}_{\mathbb{C}}^{2}\)) with respect to the basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) by \[\mathbf{x}\boxtimes_{L}\mathbf{y}=\begin{bmatrix}\mathbf{x}^{*}H_{L}(1,2) \cdot\mathbf{y}^{*}H_{L}(1,3)-\mathbf{y}^{*}H_{L}(1,2)\cdot\mathbf{x}^{*}H_{L }(1,3)\\ \mathbf{x}^{*}H_{L}(1,3)\cdot\mathbf{y}^{*}H_{L}(1,1)-\mathbf{y}^{*}H_{L}(1,3) \cdot\mathbf{x}^{*}H_{L}(1,1)\\ \mathbf{x}^{*}H_{L}(1,1)\cdot\mathbf{y}^{*}H_{L}(1,2)-\mathbf{y}^{*}H_{L}(1,1 )\cdot\mathbf{x}^{*}H_{L}(1,2)\end{bmatrix}.\] Then the intersection \(B_{12}\cap B_{24}\cap L\) is parameterized by \(V=V(z_{1},z_{2})\in\mathbb{C}^{3}\) with \(\langle V,V\rangle<0\) with respect to the Hermitian form \(H_{L}\), where \[V=\mathbf{E_{2}}\boxtimes_{L}\mathbf{E_{3}}+z_{1}\cdot\mathbf{E_{1}}\boxtimes _{L}\mathbf{E_{3}}+z_{2}\cdot\mathbf{E_{1}}\boxtimes_{L}\mathbf{E_{2}}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{S }^{1}\times\mathbb{S}^{1}\). The vector \(V\in\mathbb{C}^{3}\) is a three-by-one matrix; we denote by \(V(i,1)\) the entry in the \(i\)-th row of \(V\). Then \[V(1,1)\cdot\mathbf{e_{1}}+V(2,1)\cdot\mathbf{e_{2}}+V(3,1)\cdot\mathbf{e_{3}}\] is a vector in \(\mathbb{C}^{3,1}\), which we again denote by \(W\). The projection of \(W\) into \(\mathbb{C}\mathbf{P}^{3}\) is a point in \(B_{12}\cap B_{24}\cap L\) if \(W\) is a negative vector. Now \(W^{*}\cdot H\cdot W\) is a very complicated term. But with Maple, \[W^{*}\cdot H\cdot W=-(1+2\cos(\theta))^{3}\cdot h,\] where \(h\) is \[799-798\cos(3\theta)+1180\cos(2\theta)-222\cos(5\theta)-24\cos(7 \theta)-86\cos(s-4\theta)...\] \[236\cos(r-2\theta)+110\cos(r+2\theta).\] (We omit many terms of \(h\).) Note that the first term of \(W^{*}\cdot H\cdot W\) is positive when \(\theta\in[\frac{5\pi}{6},\pi]\). By Maple, \(h\) has minimum \(275.152\) numerically when \(r,s\in[-\pi,\pi]\) and \(\theta\in[\frac{5\pi}{6},\pi]\), attained at a point with \(\theta=\frac{5\pi}{6}\). In particular, any \(W\) above is a positive vector in \(\mathbb{C}^{3,1}\), so \(B_{12}\cap B_{24}\cap L=\emptyset\). Then by the projection from \(\mathbf{H}_{\mathbb{C}}^{3}\) to \(L\), we conclude that \(B_{12}\cap B_{24}=\emptyset\) in \(\mathbf{H}_{\mathbb{C}}^{3}\). This ends the proof of (6). Now we consider (7). Recall that in Lemma 6.3 we proved that \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\), \(\mathbf{p_{34}}\) and \(\mathbf{p_{13}}\) are co-planar. So for any \(\theta\in(\frac{5\pi}{6},\pi]\), the span of \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\), \(\mathbf{p_{34}}\) and \(\mathbf{p_{13}}\) is a \(3\)-dimensional subspace of \(\mathbb{C}^{3,1}\). The intersection of \(\mathbf{H}_{\mathbb{C}}^{3}\subset\mathbf{P}_{\mathbb{C}}^{3}\) with the projection of this \(3\)-dimensional subspace into \(\mathbf{P}_{\mathbb{C}}^{3}\) is denoted by \(L\); then \(L\) is a totally geodesic \(\mathbf{H}_{\mathbb{C}}^{2}\hookrightarrow\mathbf{H}_{\mathbb{C}}^{3}\). By definition \(p_{0}\in L\). We will show that if the intersection \(B_{12}\cap B_{34}\cap L\) is non-empty, then it is a Giraud disk in \(L\). 
Moreover, in this case, \(B_{12}\cap B_{34}\cap L\) lies in the half-space of \(L-B_{13}\) which does not contain the fixed point \(p_{0}\) of \(J\). In particular, \(B_{12}\cap B_{34}\) does not lie in the partial Dirichlet domain \(D_{R}\). We re-denote \(\mathbf{p_{0}}\), \(\mathbf{p_{12}}\) and \(\mathbf{p_{34}}\) by \(\mathbf{e_{1}}\), \(\mathbf{e_{2}}\) and \(\mathbf{e_{3}}\). We denote by \(H_{L}=H_{L}(p_{0},p_{12},p_{34})\) the matrix \((\mathbf{e_{i}}^{*}\cdot H\cdot\mathbf{e_{j}})_{1\leq i,j\leq 3}\); then \[H_{L}=\begin{pmatrix}-1&-\mathrm{e}^{-2\theta\mathrm{i}}+\mathrm{e}^{\theta \mathrm{i}}-1&-\mathrm{e}^{-2\theta\mathrm{i}}+\mathrm{e}^{\theta\mathrm{i}}- 1\\ -\mathrm{e}^{2\theta\mathrm{i}}+\mathrm{e}^{-\theta\mathrm{i}}-1&-1&16\cos^{3}( \theta)-8\cos(\theta)-3\\ -\mathrm{e}^{2\theta\mathrm{i}}+\mathrm{e}^{-\theta\mathrm{i}}-1&16\cos^{3}( \theta)-8\cos(\theta)-3&-1\end{pmatrix}.\] Now \(\det(H_{L})\) is \[128\cos^{5}(\theta)-96\cos^{3}(\theta)-16\cos^{2}(\theta)+16\cos(\theta)+4.\] So \(H_{L}\) is the Hermitian form with signature \((2,1)\) on the subspace with the basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) when \(\theta\in(\frac{5\pi}{6},\pi]\). The vector \(x_{1}\mathbf{e_{1}}+x_{2}\mathbf{e_{2}}+x_{3}\mathbf{e_{3}}\) is denoted by the vector \(\mathbf{x}\), and the vector \(y_{1}\mathbf{e_{1}}+y_{2}\mathbf{e_{2}}+y_{3}\mathbf{e_{3}}\) is denoted by the vector \(\mathbf{y}\), here \[\mathbf{x}=\left(\begin{array}{c}x_{1}\\ x_{2}\\ x_{3}\end{array}\right),\quad\mathbf{y}=\left(\begin{array}{c}y_{1}\\ y_{2}\\ y_{3}\end{array}\right)\] with \(x_{i},y_{i}\in\mathbb{C}\). So \[\mathbf{E_{1}}=\left(\begin{array}{c}1\\ 0\\ 0\end{array}\right),\quad\mathbf{E_{2}}=\left(\begin{array}{c}0\\ 1\\ 0\end{array}\right),\quad\mathbf{E_{3}}=\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right)\] in \(\mathbb{C}^{3}\) represent the vectors \(\mathbf{e_{1}}\), \(\mathbf{e_{2}}\) and \(\mathbf{e_{3}}\) in \(\mathbb{C}^{3,1}\). Following Subsection 2.5, we define the Hermitian cross-product \(\boxtimes_{L}\) on the subspace endowed with \(H_{L}\) (which is isometric to \(\mathbf{H}_{\mathbb{C}}^{2}\)) with respect to the basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) by \[\mathbf{x}\boxtimes_{L}\mathbf{y}=\begin{bmatrix}\mathbf{x}^{*}H_{L}(1,2) \cdot\mathbf{y}^{*}H_{L}(1,3)-\mathbf{y}^{*}H_{L}(1,2)\cdot\mathbf{x}^{*}H_{L }(1,3)\\ \mathbf{x}^{*}H_{L}(1,3)\cdot\mathbf{y}^{*}H_{L}(1,1)-\mathbf{y}^{*}H_{L}(1,3) \cdot\mathbf{x}^{*}H_{L}(1,1)\\ \mathbf{x}^{*}H_{L}(1,1)\cdot\mathbf{y}^{*}H_{L}(1,2)-\mathbf{y}^{*}H_{L}(1,1) \cdot\mathbf{x}^{*}H_{L}(1,2)\end{bmatrix}.\] Then the intersection \(B_{12}\cap B_{34}\cap L\) is parameterized by \(V=V(z_{1},z_{2})\in\mathbb{C}^{3}\) with \(\langle V,V\rangle<0\) with respect to the Hermitian form \(H_{L}\), where \[V=\mathbf{E_{2}}\boxtimes_{L}\mathbf{E_{3}}+z_{1}\cdot\mathbf{E_{1}}\boxtimes_{ L}\mathbf{E_{3}}+z_{2}\cdot\mathbf{E_{1}}\boxtimes_{L}\mathbf{E_{2}}\] and \((z_{1},z_{2})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}})\in\mathbb{S }^{1}\times\mathbb{S}^{1}\). The vector \(V\in\mathbb{C}^{3}\) is a three-by-one matrix, and \[V(1,1)\cdot\mathbf{e_{1}}+V(2,1)\cdot\mathbf{e_{2}}+V(3,1)\cdot\mathbf{e_{3}}\] is a vector in \(\mathbb{C}^{3,1}\), which we again denote by \(W\). If \(W\) is a negative vector, we consider the distances of \([W]\in\mathbf{H}_{\mathbb{C}}^{3}\) to \(p_{0}\) and \(p_{13}\). That is, we should consider \(|W^{*}\cdot H\cdot\mathbf{p_{0}}|^{2}\) and \(|W^{*}\cdot H\cdot\mathbf{p_{13}}|^{2}\). 
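Both quantities can be evaluated inside \(L\): \(W^{*}\cdot H\cdot\mathbf{p_{0}}\) is the first entry of the row vector \(V^{*}H_{L}\) (as \(\mathbf{p_{0}}=\mathbf{e_{1}}\)), and Lemma 6.3 gives \(\mathbf{p_{13}}=-\mathrm{e}^{2\theta\mathrm{i}}\mathbf{e_{1}}-\frac{\mathrm{e}^{\theta\mathrm{i}}}{2}(\mathbf{e_{2}}+\mathbf{e_{3}})\) in the basis \(\{\mathbf{e_{1}},\mathbf{e_{2}},\mathbf{e_{3}}\}\) (a coordinate expression we derive here; it is not written out in the text). A sketch of a sample-point evaluation, reusing `boxtimes` from the earlier snippet:

```python
def H_L_34(theta):
    """Gram matrix (e_i* . H . e_j) for (p0, p12, p34), as displayed above."""
    a = -np.exp(-2j*theta) + np.exp(1j*theta) - 1
    b = 16*np.cos(theta)**3 - 8*np.cos(theta) - 3
    return np.array([[-1, a, a],
                     [np.conj(a), -1, b],
                     [np.conj(a), b, -1]])

def sample(r, s, theta):
    HL = H_L_34(theta)
    E = np.eye(3, dtype=complex)
    V = (boxtimes(E[1], E[2], HL) + np.exp(1j*r)*boxtimes(E[0], E[2], HL)
         + np.exp(1j*s)*boxtimes(E[0], E[1], HL))
    row = V.conj() @ HL                     # the row vector W* . H in the e-basis
    c13 = np.array([-np.exp(2j*theta), -np.exp(1j*theta)/2, -np.exp(1j*theta)/2])
    return (row @ V).real, abs(row[0])**2 - abs(row @ c13)**2

vv, diff = sample(np.pi, 0.0, 5*np.pi/6)
print(vv, diff)   # vv < 0 here, matching the sample point discussed below
```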
Now \(W^{*}\cdot H\cdot W\) is a very complicated term. But with Maple, the minimum of \[|W^{*}\cdot H\cdot\mathbf{p_{0}}|^{2}-|W^{*}\cdot H\cdot\mathbf{p_{13}}|^{2}\] with the condition \(W^{*}\cdot H\cdot W=0\) is \(99.42\) when \(r,s\in[-\pi,\pi]\) and \(\theta\in[\frac{5\pi}{6},\pi]\), attained at a point with \(\theta=\frac{5\pi}{6}\). And the maximum of \[|W^{*}\cdot H\cdot\mathbf{p_{0}}|^{2}-|W^{*}\cdot H\cdot\mathbf{p_{13}}|^{2}\] with the condition \(W^{*}\cdot H\cdot W=0\) is \(921.79\) when \(r,s\in[-\pi,\pi]\) and \(\theta\in[\frac{5\pi}{6},\pi]\), attained at a point with \(\theta=2.734\). This implies that \[B_{12}\cap B_{34}\cap L\cap\partial\mathbf{H}_{\mathbb{C}}^{3}\] does not intersect \(B_{13}\). Moreover, if the quadruple intersection above is non-empty, then it lies in the half-space of \(L-B_{13}\) which does not contain \(p_{0}\). Then by the well-known properties of intersections of bisectors in \(\mathbf{H}_{\mathbb{C}}^{2}\), we have \[B_{12}\cap B_{34}\cap B_{13}\cap L=\emptyset.\] Moreover, if the triple intersection \(B_{12}\cap B_{34}\cap L\) is non-empty, then it lies in the half-space of \(L-B_{13}\) which does not contain \(p_{0}\). Then by the projection from \(\mathbf{H}_{\mathbb{C}}^{3}\) to \(L\), we have \[B_{12}\cap B_{34}\cap B_{13}=\emptyset\] in \(\mathbf{H}_{\mathbb{C}}^{3}\). Moreover, if the intersection \(B_{12}\cap B_{34}\) is non-empty, then it lies in the half-space of \(\mathbf{H}_{\mathbb{C}}^{3}-B_{13}\) which does not contain \(p_{0}\). So \(B_{12}\cap B_{34}\) does not lie in the partial Dirichlet domain \(D_{R}\) even if it is non-empty. The author remarks that when \(\theta\) is near \(\frac{5\pi}{6}\), for instance at the sample point \[\{r=\pi,\ s=0,\ \theta=\frac{5\pi}{6}\},\] the term \(W^{*}\cdot H\cdot W\) is \(-32\sqrt{3}-32\), which is negative, and then \(B_{12}\cap B_{34}\cap L\) is non-empty. But when \(\theta=\pi\), it is not difficult to show \(B_{12}\cap B_{34}\cap L=\emptyset\). So there is a neighborhood of \(\pi\) in \([\frac{5\pi}{6},\pi]\) such that \(B_{12}\cap B_{34}\cap L=\emptyset\) when \(\theta\) lies in this neighborhood. It seems that when \(\theta\in[2.74,\pi]\), \(W^{*}\cdot H\cdot W\) is always positive. When \(\theta=2.71\), the graph of \(W^{*}\cdot H\cdot W=h\) as a function of \((r,s)\) is the blue surface in Figure 10. This ends the proof of (7). **Proposition 6.5**.: _For all \(\theta\in(\frac{5\pi}{6},\pi]\), the bisector \(B_{13}\) of \(I_{1}I_{3}\) does not intersect \(B_{21}\), \(B_{23}\), \(B_{43}\), \(B_{41}\) or \(B_{24}\)._ Proof.: We have proved \(B_{12}\cap B_{24}=\emptyset\); by the \(\langle J\rangle=\mathbb{Z}_{4}\) symmetry, we have \[B_{13}\cap B_{23}=\emptyset,\ B_{13}\cap B_{41}=\emptyset.\] The facts \[B_{13}\cap B_{21}=\emptyset,\ B_{13}\cap B_{43}=\emptyset\] can be proved similarly to the proof of \(B_{12}\cap B_{24}=\emptyset\) in Proposition 6.4. Now we consider \(B_{13}\cap B_{24}\). We have \[p_{13}=[\sqrt{2\cos(2\theta)+1},\ 0,\ 0,\ -2\cos(\theta)]^{t}\] and \[p_{24}=[-\sqrt{2\cos(2\theta)+1},\ 0,\ 0,-2\cos(\theta)]^{t},\] so \(p_{0}\), \(p_{13}\) and \(p_{24}\) lie in the \(\mathbb{C}\)-line \[l=\left\{[z_{1},\ 0,\ 0,\ 1]^{t}\in\mathbf{H}_{\mathbb{C}}^{3}\right\}.\] Now it is easy to see \(B_{13}\cap B_{24}\cap l=\emptyset\). Then from the projection of \(\mathbf{H}_{\mathbb{C}}^{3}\) to \(l\), we get \(B_{13}\cap B_{24}=\emptyset\). **Proposition 6.6**.: _For all \(\theta\in(\frac{5\pi}{6},\pi]\), we have:_ 1. 
_the triple intersection_ \(B_{12}\cap B_{13}\cap B_{14}\) _is a non-empty 3-dimensional object;_ 2. _the intersection_ \(B_{12}\cap B_{13}\) _is a Giraud disk, which is a 4-ball in_ \(\mathbf{H}_{\mathbb{C}}^{3}\)_;_ 3. _the intersection_ \(B_{12}\cap B_{14}\) _is a Giraud disk, which is a 4-ball in_ \(\mathbf{H}_{\mathbb{C}}^{3}\)_;_ 4. _the intersection_ \(B_{13}\cap B_{14}\) _is a Giraud disk, which is a 4-ball in_ \(\mathbf{H}_{\mathbb{C}}^{3}\)_._ Figure 10. An illustration that \(B_{12}\cap B_{34}\cap L=\emptyset\) when \(\theta\) is near \(\pi\). The blue surface is the graph of the function \(h(r,s)=W^{*}\cdot H\cdot W\) when \(\theta=2.71\). The red plane is \(h=0\). They intersect in a small circle near \((r,s)=(\pi,0)\). When increasing \(\theta\) from 2.71 to \(\pi\), the graph of the function \(W^{*}\cdot H\cdot W\) is also a similar blue surface, which becomes disjoint from the red plane \(h=0\). Proof.: In (2.6), we take \(\mathbf{p_{0}}=\mathbf{q_{0}}\), \(\mathbf{p_{12}}=\mathbf{q_{1}}\), \(\mathbf{p_{13}}=\mathbf{q_{2}}\) and \(\mathbf{p_{14}}=\mathbf{q_{3}}\), then we can parameterize \(B(p_{0},p_{12})\cap B(p_{0},p_{13})\cap B(p_{0},p_{14})\) by \[V(z_{1},z_{2},z_{3})=\mathbf{p_{12}}\boxtimes\mathbf{p_{13}}\boxtimes\mathbf{p_{14}}+z_{1}\cdot\mathbf{p_{0}}\boxtimes\mathbf{p_{13}}\boxtimes\mathbf{p_{14}}+z_{2}\cdot\mathbf{p_{0}}\boxtimes\mathbf{p_{12}}\boxtimes\mathbf{p_{14}}+z_{3}\cdot\mathbf{p_{0}}\boxtimes\mathbf{p_{12}}\boxtimes\mathbf{p_{13}} \tag{6.5}\] in \(\mathbb{C}^{3,1}\) with \((z_{1},z_{2},z_{3})=(\mathrm{e}^{\mathrm{ri}},\mathrm{e}^{\mathrm{si}}, \mathrm{e}^{\mathrm{ti}})\in\mathbb{S}^{1}\times\mathbb{S}^{1}\times\mathbb{S }^{1}\) such that \(\langle V(z_{1},z_{2},z_{3}),V(z_{1},z_{2},z_{3})\rangle\) is negative. Now \[\langle V(z_{1},z_{2},z_{3}),V(z_{1},z_{2},z_{3})\rangle=(16\cos(\theta)^{3}+ 8\cos(\theta)^{2}-4\cos(\theta)-2)\cdot h,\] with \[h=-2+2\cos(s)+4\cos(\theta)+2\cos(r-t)-\cos(-\theta+r-t)-\cos( \theta+r-t)\] \[-\cos(-\theta+s)-\cos(\theta+s)-\cos(-2\theta-s+r)+\cos(2\theta-s +r)-\cos(-2\theta+s-t)\] \[+\cos(2\theta+s-t)+\cos(-2\theta+r)-\cos(2\theta+r)-\cos(-2\theta +t)+\cos(2\theta+t)\] \[-4\cos(2\theta)+\cos(-s+r-\theta)-\cos(-s+r+\theta)+\cos(s-t- \theta)-\cos(s-t+\theta)\] \[-\cos(-\theta+r)+\cos(\theta+r)+\cos(-\theta+t)-\cos(\theta+t)+ \cos(-s+r-3\theta)\] \[+\cos(s-t-3\theta)+\cos(r+3\theta)+\cos(t-3\theta).\] The term \[16\cos(\theta)^{3}+8\cos(\theta)^{2}-4\cos(\theta)-2\] is always negative when \(\theta\in(\frac{5\pi}{6},\pi]\). When \(r=-\pi\), \(s=0\) and \(t=\pi\), the term \(h\) is \[-4\cos^{3}(\theta)-2\cos^{2}(\theta)+3\cos(\theta)+\frac{3}{2},\] which is always positive when \(\theta\in(\frac{5\pi}{6},\pi]\). This means that \(V=V(\mathrm{e}^{-\pi\mathrm{i}},\mathrm{e}^{0\mathrm{i}},\mathrm{e}^{\pi \mathrm{i}})\) is a point in \(\mathbf{H}_{\mathbb{C}}^{3}\) for any \(\theta\in(\frac{5\pi}{6},\pi]\). So \(B_{12}\cap B_{13}\cap B_{14}\) is non-empty for any \(\theta\). Then for any \((r,s,t)\) in a small neighborhood (depending on \(\theta\)) of \((-\pi,0,\pi)\) in \(\mathbb{R}^{3}\), the corresponding \(V\) is also a negative vector. This proves (1) of Proposition 6.6. The remaining statements of Proposition 6.6 are now immediate. See Figure 11 for the surface \(\langle V(z_{1},z_{2},z_{3}),V(z_{1},z_{2},z_{3})\rangle=0\) in the proof of Proposition 6.6 when \(\theta=2.7>\frac{5\pi}{6}\), which seems to be a 2-sphere (we do not need this fact in this paper); this provides evidence for Question 1.2 in Section 1. 
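The positivity claim for \(h\) at \((r,s,t)=(-\pi,0,\pi)\) can be checked symbolically; below is a sketch assuming sympy, substituting into \(h\) exactly as displayed above (in our transcription the substitution simplifies to \(2-4\cos(2\theta)-4\cos(3\theta)\), a positive multiple of the displayed cubic, so the positivity conclusion is the same either way):

```python
import numpy as np
import sympy as sp

r, s, t, th = sp.symbols('r s t theta', real=True)
c = sp.cos
h = (-2 + 2*c(s) + 4*c(th) + 2*c(r - t) - c(-th + r - t) - c(th + r - t)
     - c(-th + s) - c(th + s) - c(-2*th - s + r) + c(2*th - s + r)
     - c(-2*th + s - t) + c(2*th + s - t) + c(-2*th + r) - c(2*th + r)
     - c(-2*th + t) + c(2*th + t) - 4*c(2*th) + c(-s + r - th)
     - c(-s + r + th) + c(s - t - th) - c(s - t + th) - c(-th + r)
     + c(th + r) + c(-th + t) - c(th + t) + c(-s + r - 3*th)
     + c(s - t - 3*th) + c(r + 3*th) + c(t - 3*th))

h0 = sp.simplify(h.subs({r: -sp.pi, s: 0, t: sp.pi}))
print(h0)                      # a closed form in theta; it vanishes at theta = 5*pi/6

f = sp.lambdify(th, h0, 'numpy')
grid = np.linspace(5*np.pi/6 + 1e-6, np.pi, 2000)
assert (f(grid) > 0).all()     # h(-pi, 0, pi; theta) > 0 on (5*pi/6, pi]
```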
By the \(\langle J\rangle=\mathbb{Z}_{4}\) symmetry, we have similar properties for the other triple intersections of bisectors. From Propositions 6.4 and 6.5, there is no quadruple intersection of the bisectors \(B_{ij}\) for \(I_{i}I_{j}\in R\). We take the side \(s_{ij}=B_{ij}\cap D_{R}\) for \(j=i\pm 1\) or \(j=i+2\) mod 4 and \(i=1,2,3,4\). By taking sample points, it is easy to see that each \(s_{ij}\) is non-empty, so each \(s_{ij}\) is a non-empty 5-dimensional object in \(\mathbf{H}_{\mathbb{C}}^{3}\). Since \[I_{2}I_{1}(s_{12})=s_{21},\ J(s_{12})=s_{23},\] there are only two isometric types of 5-facets, say \(s_{12}\) and \(s_{13}\). Since \[J(s_{14}\cap s_{12})=s_{21}\cap s_{23},\ J(s_{12}\cap s_{13})=s_{23}\cap s_{24},\ I_{2}I_{1}(s_{12}\cap s_{13})=s_{21}\cap s_{23},\] there is only one isometric type of 4-facets, say \(s_{12}\cap s_{14}\). There is only one isometric type of 3-facets, say \(s_{12}\cap s_{13}\cap s_{14}\). In particular, there is no quadruple intersection of bisectors. So we do not need the precise combinatorial structure of the facets. What we really need is that the above-mentioned facets are all non-empty, which is enough for the Poincare polyhedron theorem in our (lucky) case. We have the following propositions. **Proposition 6.7**.: _For all \(\theta\in(\frac{5\pi}{6},\pi]\), the side \(s_{12}=B_{12}\cap D_{R}\) is a non-empty 5-dimensional object in \(\mathbf{H}^{3}_{\mathbb{C}}\cup\partial\mathbf{H}^{3}_{\mathbb{C}}\). Moreover,_ * _the frontier of_ \(s_{12}\cap\mathbf{H}^{3}_{\mathbb{C}}\) _consists of two non-empty 4-dimensional objects_ \(s_{12}\cap s_{14}\) _and_ \(s_{12}\cap s_{13}\)_;_ * _the intersection of_ \(s_{12}\cap s_{14}\) _and_ \(s_{12}\cap s_{13}\) _is the non-empty 3-dimensional object_ \(s_{12}\cap s_{13}\cap s_{14}\)_._ **Proposition 6.8**.: _For all \(\theta\in(\frac{5\pi}{6},\pi]\), the side \(s_{13}=B_{13}\cap D_{R}\) is a non-empty 5-dimensional object in \(\mathbf{H}^{3}_{\mathbb{C}}\cup\partial\mathbf{H}^{3}_{\mathbb{C}}\). Moreover,_ 1. _the frontier of_ \(s_{13}\cap\mathbf{H}^{3}_{\mathbb{C}}\) _consists of two disjoint 4-dimensional objects;_ 2. _each of the 4-dimensional objects in (_1_) consists of two parts:_ * _the union of_ \(s_{13}\cap s_{12}\) _and_ \(s_{13}\cap s_{14}\) _is a component of the frontier of_ \(s_{13}\cap\mathbf{H}^{3}_{\mathbb{C}}\)_. The intersection of_ \(s_{13}\cap s_{12}\) _and_ \(s_{13}\cap s_{14}\) _is the non-empty 3-dimensional object_ \(s_{12}\cap s_{13}\cap s_{14}\)_;_ * _the union of_ \(s_{13}\cap s_{32}\) _and_ \(s_{13}\cap s_{34}\) _is the other component of the frontier of_ \(s_{13}\cap\mathbf{H}^{3}_{\mathbb{C}}\)_. The intersection of_ \(s_{13}\cap s_{32}\) _and_ \(s_{13}\cap s_{34}\) _is the non-empty 3-dimensional object_ \(s_{13}\cap s_{32}\cap s_{34}\)_._
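Before the proof, the ridge-cycle bookkeeping used below can be sanity-checked in the \(\theta=\pi\) model of Section 5, where the matrices are explicit integer matrices; the cycle combinatorics are the same for all \(\theta\). A sketch reusing `J_L` and the list `I_L` from the Section 5 snippet (again our own names):

```python
import numpy as np

x0 = np.full(4, 0.5)                    # the J-fixed center, in the basis {n1,...,n4}
p = {(i, j): I_L[i-1] @ I_L[j-1] @ x0
     for i in range(1, 5) for j in range(1, 5) if i != j}
p[0] = x0
key = lambda *names: {tuple(np.round(p[n], 9)) for n in names}

def image(g, pts):
    """Apply g to a set of marked points, returned as hashable keys."""
    return {tuple(np.round(g @ np.array(q), 9)) for q in pts}

A = [I_L[i % 4] @ I_L[(i + 1) % 4] for i in range(4)]   # A1, A2, A3, A4

# ridge cycle: s14∩s12 --A1^{-1}--> s21∩s24 --(A2A3)^{-1}--> s24∩s41 --A4^{-1}--> s14∩s12
assert image(np.linalg.inv(A[0]), key(0, (1, 4), (1, 2))) == key(0, (2, 1), (2, 4))
assert image(np.linalg.inv(A[1] @ A[2]), key(0, (2, 1), (2, 4))) == key(0, (2, 4), (4, 1))
assert image(np.linalg.inv(A[3]), key(0, (2, 4), (4, 1))) == key(0, (1, 4), (1, 2))
```

The remaining ridge cycles can be checked the same way.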
For the ridge \(s_{14}\cap s_{12}\), we have proven that it is a non-empty 4-dimensional object. Its ridge circle is \[s_{14}\cap s_{12}\xrightarrow{A_{1}^{-1}}s_{21}\cap s_{24}\xrightarrow{(A_{2}A_{ 3})^{-1}}s_{24}\cap s_{41}\xrightarrow{A_{4}^{-1}}s_{14}\cap s_{12}.\] This gives the relation \(A_{4}^{-1}\cdot A_{3}^{-1}A_{2}^{-1}\cdot A_{1}^{-1}=id\). What we really need (and have proved) is that \(s_{14}\cap s_{12}\), \(s_{21}\cap s_{24}\) and \(s_{24}\cap s_{41}\) are all non-empty. Moreover, \[A_{1}^{-1}(s_{12}\cap s_{13}\cap s_{14})=s_{21}\cap s_{23}\cap s_{24}.\] Then the ridge circle relation is proved by checking only the group action on the points \(p_{0}\), \(p_{14}\), \(p_{12}\) and \(p_{24}\). A standard argument implies that \(D_{R}\cup A_{1}(D_{R})\cup A_{4}^{-1}(D_{R})\) covers a small neighborhood of \(s_{14}\cap s_{12}\). For the ridge \(s_{13}\cap s_{14}\), the ridge circle is \[s_{13}\cap s_{14}\xrightarrow{A_{4}}s_{41}\cap s_{43}\xrightarrow{A_{3}}s_{ 34}\cap s_{31}\xrightarrow{A_{1}A_{2}}s_{13}\cap s_{14}.\] This gives the relation \(A_{1}A_{2}\cdot A_{3}\cdot A_{4}=id\). We have also proved that \(s_{13}\cap s_{14}\), \(s_{41}\cap s_{43}\) and \(s_{34}\cap s_{31}\) are all non-empty. A standard argument implies that \(D_{R}\cup A_{4}^{-1}(D_{R})\cup(A_{1}A_{2})^{-1}(D_{R})\) covers a small neighborhood of \(s_{13}\cap s_{14}\). The other ridge circles can be treated similarly. As in the proof when \(\theta=\frac{5\pi}{6}\), the identification space of \(D_{R}\) by these side pairing maps is complete. By the Poincare polyhedron theorem, the partial Dirichlet domain \(D_{R}\) is in fact the Dirichlet domain of \(\rho_{\theta}(K)<\mathbf{PU}(3,1)\). Now we have the presentation \[\rho_{\theta}(K)=\left\langle A_{1},A_{2},A_{3},A_{4}\middle|(A_{1}A_{2})^{2} =(A_{2}A_{3})^{2}=A_{1}A_{2}A_{3}A_{4}=id\right\rangle\] when \(\theta\in(\frac{5\pi}{6},\pi]\). So we have a discrete and faithful representation of \(K\). This ends the proof of Theorem 1.1.
2305.03435
Advances on the classification of radio image cubes
Modern radio telescopes will daily generate data sets on the scale of exabytes for systems like the Square Kilometre Array (SKA). Massive data sets are a source of unknown and rare astrophysical phenomena that lead to discoveries. Nonetheless, this is only plausible with the exploitation of intensive machine intelligence to complement human-aided and traditional statistical techniques. Recently, there has been a surge in scientific publications focusing on the use of artificial intelligence in radio astronomy, addressing challenges such as source extraction, morphological classification, and anomaly detection. This study presents a succinct, but comprehensive review of the application of machine intelligence techniques on radio images with emphasis on the morphological classification of radio galaxies. It aims to present a detailed synthesis of the relevant papers summarizing the literature based on data complexity, data pre-processing, and methodological novelty in radio astronomy. The rapid advancement and application of computer intelligence in radio astronomy has resulted in a revolution and a new paradigm shift in the automation of daunting data processes. However, the optimal exploitation of artificial intelligence in radio astronomy, calls for continued collaborative efforts in the creation of annotated data sets. Additionally, in order to quickly locate radio galaxies with similar or dissimilar physical characteristics, it is necessary to index the identified radio sources. Nonetheless, this issue has not been adequately addressed in the literature, making it an open area for further study.
Steven Ndung'u, Trienko Grobler, Stefan J. Wijnholds, Dimka Karastoyanova, George Azzopardi
2023-05-05T11:15:37Z
http://arxiv.org/abs/2305.03435v1
# Advances on the classification of radio image cubes ###### Abstract Modern radio telescopes will daily generate data sets on the scale of exabytes for systems like the Square Kilometre Array (SKA). Massive data sets are a source of unknown and rare astrophysical phenomena that lead to discoveries. Nonetheless, this is only plausible with the exploitation of intensive machine intelligence to complement human-aided and traditional statistical techniques. Recently, there has been a surge in scientific publications focusing on the use of artificial intelligence in radio astronomy, addressing challenges such as source extraction, morphological classification, and anomaly detection. This study presents a succinct, but comprehensive review of the application of machine intelligence techniques on radio images with emphasis on the morphological classification of radio galaxies. It aims to present a detailed synthesis of the relevant papers summarizing the literature based on data complexity, data pre-processing, and methodological novelty in radio astronomy. The rapid advancement and application of computer intelligence in radio astronomy has resulted in a revolution and a new paradigm shift in the automation of daunting data processes. However, the optimal exploitation of artificial intelligence in radio astronomy, calls for continued collaborative efforts in the creation of annotated data sets. Additionally, in order to quickly locate radio galaxies with similar or dissimilar physical characteristics, it is necessary to index the identified radio sources. Nonetheless, this issue has not been adequately addressed in the literature, making it an open area for further study. Survey Image processing Machine learning Deep learning Source extraction Galaxiesactive ## 1 Introduction Radio astronomy has seen an accelerated and exponential data eruption in the last two decades. Future radio telescopes like the Square Kilometre Array (SKA) will generate data sets on the scale of Exabytes. This will be one of the largest known big data projects in the world (Farnes et al., 2018). The low-frequency instrument SKA-LOW will be located in Australia while the mid-frequency instrument SKA-MID will be located in South Africa. SKA-LOW will have a peak real-time data rate of 10 TB/s (Labate et al., 2022), while SKA-MID will have a peak real-time data rate of 19 TB/s (Swart et al., 2022). Other similar projects currently contributing to data-intensive research in astronomy that form the baseline/pathfinder to SKA include MeerKAT2, which generates raw data at 2.2 TB/s (Booth and Jonas, 2012), the Murchison Widefield Array (MWA)3 with a data rate of \(\sim\)300 GB/s (Lonsdale et al., 2009) and the LOw-Frequency ARray (LOFAR) generating raw data at the rate of 13 TB/s (Haarlem et al., 2013). Astronomy has thus become a very data-intensive field with multi-wavelength and multi-messenger capabilities (An, 2019). These high data rates necessitate the automatic processing of the data using computer intelligence. This motivates the need to assess the recent developments of computer intelligence applications within the field. 
Footnote 2: [https://www.sarao.ac.za/gallery/meerkat/](https://www.sarao.ac.za/gallery/meerkat/) Footnote 3: [https://www.mwatelescope.org](https://www.mwatelescope.org) With the Evolutionary Map of the Universe (EMU) generating up to \(\sim\)70 million radio sources (Norris et al., 2011) and with the SKA expected to discover more than 500 million radio sources (Norris et al., 2014), computer-aided applications are unavoidable. This has resulted in an increase in the number of scientific publications using machine/deep learning to detect and classify the radio sources. In the last five years, there has been a successful proliferation of machine intelligence applications, owing to the availability of highly curated and annotated data catalogs (Table 8). Interestingly, publications on morphological classification have been on the rise, introducing novel and diverse machine/deep learning techniques to the radio astronomy field. This, coupled with the above-mentioned progress and challenges, has been our motivation to write this survey devoted to exploring the recent advancement in the classification of radio image cubes. Furthermore, other applications like anomaly/outlier detection, source extraction, and image retrieval will be discussed. Morphological classification is a crucial aspect of radio astronomy, as it allows scientists to understand the physical properties and characteristics of celestial objects based on their form and structure. Additionally, automated morphological analysis of large radio images can be a source of rare astrophysical phenomena, leading to serendipitous discoveries (Ray, 2016). This review focuses on radio astronomy, which has played a very fundamental role in stimulating and spurring discoveries in the fields of cosmology, astrophysics, and telecommunications (Burke et al., 2019). Radio astronomy allows us to study celestial objects and phenomena at wavelengths that are not visible in the optical spectrum, providing unique insights into the universe. For instance, radio image cubes are supplemented by data obtained from other portions of the electromagnetic spectrum for cross-identification to help tackle fundamental scientific challenges. Fig. 1, obtained from the public LOFAR Galaxy Zoo: LOFAR project4, illustrates this cross-identification process on an optical and a radio image of the same celestial object. These studies can help us better understand the physical processes at work in the universe and the diverse objects it contains (Burke et al., 2019). Footnote 4: [https://www.zooniverse.org/projects/chrismrp/radio-galaxy-zoo-lofar](https://www.zooniverse.org/projects/chrismrp/radio-galaxy-zoo-lofar) ### Key challenges in radio astronomy In recent years, computer intelligence has been extensively applied to automate daunting manual and challenging tasks in radio astronomy. Some of the main areas that have experienced revolution and notable progress are telescope performance monitoring and the processing/transformation of visibility and image cube data. Figure 1: An astronomical image as obtained from an optical and a radio telescope: (a) the Legacy telescope (optical) \(R\)-band intensity, and (b) the LoTSS-DR2 Stokes \(I\) intensity. Source: Public LOFAR Galaxy Zoo: LOFAR. This is a typical example of a bent type galaxy. In modern telescopes, the demand for high-resolution observations and efficiency is very high, hence the necessity of spontaneous real-time system health checks.
To achieve this, machine learning algorithms are exploited (Hu et al., 2020). In Mesarcik et al. (2020), machine learning algorithms have demonstrated the capability to reliably detect, flag, and report system issues with above 95% accuracy. This substantially mitigates the risk of failures while at the same time maintaining the peak performance of the telescopes. During the data curation stage in the visibility domain, machine learning techniques are used to automate the process of detection and correction of errors occurring in recorded data, while simultaneously removing outliers in the data sets (Yatawatta and Avruch, 2021). Furthermore, they are applied in the identification and extraction of radio frequency interference (RFI) - unwanted noise (signals) - which is produced by telecommunication technologies and other man-made equipment (Sun et al., 2022). These kinds of signals and errors would degrade the quality of the data if not removed. In the image domain, the process of calibration relies heavily on the optimal fine-tuning of calibration parameters in the raw data processing pipelines. Reinforcement learning is applied to automate the process of selecting and updating calibration parameters (Yatawatta and Avruch, 2021). This process is a tedious task due to the high number of calibration parameters that must be tuned for telescopes with large fields of view (Wijnholds et al., 2010). Moreover, astronomy has experienced a proliferation in the application of artificial intelligence to astronomical radio images to explore and address fundamental scientific challenges. The major areas of research in radio astronomy include: extraction and finding of radio sources such as point-like sources and extended sources (Lukic et al., 2019, Pino et al., 2021); classification of celestial objects based on their morphological features (Lukic et al., 2018, Wu et al., 2018), spearheading the advancement in the discovery of rare celestial objects such as pulsars, supernovas, quasars, and galaxies with unique and extraordinary morphologies (Mostert et al., 2021); and the retrieval of galaxies with similar morphological characteristics (Aziz et al., 2017). Generally, computer-aided systems have resulted in a paradigm shift in the capacity, capability, and rate at which immense and complex astronomical data is exploited relative to traditional methods. This has been further boosted by improvements in computing power, software, and hardware - playing a critical role in the automation of the research processes in modern astronomy. Big data, however, still presents challenges due to its complexity, and the computational resources and execution times that are required by such data sets. The rest of the paper is structured as follows: Section 2 provides a brief background on radio astronomy. Section 3 presents the approach followed to retrieve the relevant papers for this review. Section 4 provides a detailed review of the adoption of machine/deep learning algorithms in morphological classification. Section 5 highlights the opportunities, challenges and future trends foreseen in the field of radio astronomy and finally, Section 6 presents a summary of the paper, highlighting the major insights from the review. ## 2 Background ### Radio telescopes Radio telescopes are specialised astronomical instruments that detect and receive very weak radio emissions radiated by extraterrestrial sources, for example, galaxies, planets, nebulae, stars, and quasars.
Radio telescopes can either be single parabolic dishes, such as the Five-hundred-meter Aperture Spherical Telescope (FAST) in China, or a number of inter-connected telescopes/antennas, such as the Giant Metrewave Radio Telescope (GMRT) in India and LOFAR in the Netherlands (Table 2 and Fig. 3). Angular resolution and sensitivity are fundamental aspects to consider in a telescope. While angular resolution refers to the ability of a telescope to clearly differentiate radio sources observed in the sky, sensitivity is the measure of the weakest radio source emissions detected over the random background noise (the flux density of celestial objects). Sensitivity is a product of several factors, namely signal coherence and processing efficiency, collecting aperture/dish area, along with receiver noise levels (Swart et al., 2022). With high resolution and sensitivity, astronomers are able to clearly resolve celestial objects and in doing so reveal more details of distant, faint stars and galaxies. The high angular resolution and sensitivity of radio telescopes have greatly boosted the acquisition of high resolution images through the next generation of wide-field radio surveys. For instance, LOFAR achieves a sensitivity of \(\sim\)100\(\upmu\)Jy/beam and a resolution of \(\sim\)6\({}^{\prime\prime}\), which enables it to detect faint sources with small angular scales at high resolution (Shimwell et al., 2022a). ### Radio galaxies Radio galaxies are extensive astrophysical sources of radio emission created by active supermassive black holes, which form extended structures called jets and lobes. Fanaroff and Riley (1974) proposed a seminal radio galaxy classification into two major families characterised by the distribution of luminosity of their extended radio emission. The first family is composed of centre-brightened (bright core) sources with one or two lobes. They have brightened cores extending to the lobes, exuding a decaying luminosity from the core. They are called Fanaroff & Riley I (FRI) galaxies. The second family is composed of edge-brightened lobes separated by a core at the center (the luminosity of the lobes decays as you move towards the center). They are referred to as Fanaroff & Riley II (FRII) galaxies (Fig. 4). Further examination of the morphological characteristics of FRI and FRII galaxies resulted in the identification of the narrow-angled tail (NAT) and wide-angled tail (WAT) (Rudnick and Owen, 1976) radio source populations with bent jets. In recent years, Fanaroff & Riley 0 (FR0) galaxies, which are compact point-like sources, were added to the radio galaxy classification (Baldi et al., 2015). They are approximately five times as numerous as the FRI and FRII sources combined and therefore constitute the largest population of radio galaxies (Baldi et al., 2018). Other rare and minority classes of sources include Ring-shaped, X-shaped, W-shaped, S-shaped or Z-shaped, Double-Double, Tri-axial, and other Hybrid morphologies (Proctor, 2011). Figure 3: Radio telescopes: a) Effelsberg radio telescope single parabolic dish, b) LOFAR antennas, and c) the Karl G. Jansky Very Large Array (VLA) telescope array. Figure 2: Types of major radio telescopes: parabolic dishes and aperture arrays.
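As a back-of-the-envelope illustration of the angular-resolution discussion in Section 2.1, the diffraction-limit approximation \(\theta\approx\lambda/B\) (for observing wavelength \(\lambda\) and maximum baseline \(B\)) can be evaluated directly; the chosen frequency and baseline below are illustrative assumptions, not official instrument specifications:

```python
import numpy as np

def angular_resolution_arcsec(freq_hz: float, baseline_m: float) -> float:
    """Diffraction-limited resolution theta ~ lambda / B, in arcseconds."""
    c = 299_792_458.0            # speed of light [m/s]
    wavelength = c / freq_hz     # observing wavelength [m]
    theta_rad = wavelength / baseline_m
    return np.degrees(theta_rad) * 3600.0

# LOFAR HBA at ~144 MHz with ~70 km (Dutch-array scale) baselines
print(f"{angular_resolution_arcsec(144e6, 70e3):.1f} arcsec")  # ~6 arcsec
```

The result is consistent with the \(\sim\)6\({}^{\prime\prime}\) LoTSS resolution quoted above.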
### Data management In data-centric fields such as astronomy, sound data management standards for archived data are essential conduits of knowledge discovery and innovation. They increase the rate of adoption of scientific discovery, knowledge integration and reuse in the wider community of researchers. The data management practices adopted must by design and implementation follow the FAIR (Findable, Accessible, Interoperable and Reusable) principles (Wilkinson et al., 2016). The system should allow easy data access, search, tagging, retrieval, and replication in an efficient and transparent way. This leads to seamless integration and will allow global collaborations with other projects with similar data programs/systems. Large radio astronomy facilities in the world store their data in either raw, calibrated/intermediate (for instance, VLA and LOFAR) or science-ready archives (for instance, ASKAP5 and MeerKAT) (Mireille et al., 2022). Some projects share their visibility data publicly via project-specific web interfaces6. Additionally, over the last few years, commendable progress in implementing FAIR principles in the field of astronomy has occurred due to the International Virtual Observatory Alliance (IVOA). It has been at the forefront of coordinating the integration of all the world's astronomy data into a federated system and has developed a standard set of protocols and specifications to be followed in astronomical data management (Mireille et al., 2022). IVOA enhances data interoperability across global astronomical data providers. Moreover, a case study conducted by the Australian All-Sky Virtual Observatory demonstrated that the implementation of the recommended IVOA standards and protocols results in _almost_ FAIR data [O'Toole and Tocknell, 2022]. Figure 4: A typical Fanaroff Riley I & II classification of radio galaxies. Figure 5: The main steps illustrating the process of characterization and source extraction using PyBDSF. #### 2.3.1 Data annotation Finding, extraction, and characterization of radio sources, which are typically galaxies containing an active galactic nucleus (AGN) or star-forming galaxies (SFGs), and other celestial objects form the basis of the exploitation of radio surveys for scientific purposes. The data annotation mainly entails recovering the radio sources' delineation, position, estimated size, and peak surface brightness, and providing labels and descriptions as per their morphological structure. The most reliable and accurate approach to annotating radio sources is a manual visual inspection of the images by radio astronomers. However, manual inspection by astronomers is limited due to the number of experienced astronomers dedicated to this task and also considering the size of the data. Inspecting and characterizing radio sources is a difficult, costly, and time-consuming process. This has led to extensive development of statistical rule-based algorithms and methodologies for source extraction based on Cartesian shapelets, computer vision, Bayesian, and Gaussian methods. It has resulted in tools such as the Python Blob Detector and Source-Finder (PyBDSF) [Mohan and Rafferty, 2015], BLOBCAT [Hales et al., 2012] and Aegean [Hancock et al., 2012]. PyBDSF, for instance, works based on the following algorithm, which is summarised in Fig. 5 and proceeds as follows:
i) perform image pre-processing procedures and obtain image statistics, ii) determine a threshold value that separates radio source pixels from background noise pixels in the image, iii) with the background root mean square and mean values of the images, neighbouring islands of radio source emission are identified, iv) the identified islands are fitted with multiple Cartesian shapelets or Gaussians to check if they are acceptable, and finally v) the Gaussians fitted within an identified/detected island are labeled and grouped into discrete sources (a minimal usage sketch is given at the end of this section). Additionally, Fig. 6 shows an example of a two-component extended source extracted using PyBDSF. The study in Hopkins et al. [2015] concludes that while these source finders are excellent for detecting compact sources, they suffer from insufficient robustness in the extraction of extended or diffuse sources. #### 2.3.2 Data formats The most widely adopted community standard data formats in the field of astronomy include FITS (Flexible Image Transport System) [Pence et al., 2010], Hierarchical Data Format (HDF5)7, Extensible N-Dimensional Data Format (NDF) [Smith et al., 2014], MeasurementSet (MS) [van Diepen, 2015], FITS-IDI [Greisen, 2011], and UVFITS [Greisen, 2012]. The various formats have different strengths and weaknesses when it comes to the different data processing tasks, namely recording, transferring and archiving. For example, the HDF5 format is excellent for data processing, transfer, and storage relative to other formats, as it supports parallel I/O, distributed access, data chunking, and data compression, which are very important in the era of big data [Price et al., 2014]. Figure 6: a) Original input image (with sources to be extracted) and b) two-component compact sources output as identified and extracted by the PyBDSF software. #### 2.3.3 Commonly used catalogs The compilation of annotated data catalogs that are publicly available and accessible is an important contribution to the promotion of the development of research in the morphological classification of radio galaxies. Catalogs were compiled with different objectives, such as detailed exploration, comparison and examination of a given population of galaxies (Baldi et al., 2018, Miraghaei and Best, 2017), provision of large and comprehensive labelled data sets for mining radio galaxy morphologies (Gendre et al., 2010, Proctor, 2011) and creating representative and balanced catalogs encompassing different classes of radio galaxies (Aniyan and Thorat, 2017, Ma et al., 2019a). Owing to the varied aims and different procedures of sample selection in developing the catalogs, the number of radio morphological classes per data set is different. For example, some catalogs contain a single class (Baldi et al., 2018, Capetti et al., 2017a,b), two classes (Best and Heckman, 2012, Gendre and Wall, 2008, Gendre et al., 2010), or more (Miraghaei and Best, 2017, Ma et al., 2019a, Proctor, 2011). Additionally, the catalogs are derived from various radio telescope surveys with different levels of luminosity. Table 8 summarises the commonly used data sets in machine/deep learning applications of radio astronomy.
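Returning to the source-extraction workflow of Section 2.3.1 (Fig. 5), a minimal PyBDSF invocation might look as follows; this is a sketch in which the file names are hypothetical and the threshold values are illustrative, not recommendations:

```python
import bdsf  # PyBDSF (Mohan & Rafferty, 2015)

# Steps i)-iii): read the image, estimate background rms/mean maps, and
# identify islands of emission above the pixel/island thresholds (in sigma).
img = bdsf.process_image(
    "mosaic.fits",        # hypothetical input image
    thresh_pix=5.0,       # peak threshold for island detection
    thresh_isl=3.0,       # boundary threshold for island growth
    rms_box=(60, 20),     # box/step size (pixels) for the background maps
)

# Steps iv)-v): Gaussians fitted inside each island are grouped into
# discrete sources and written out as a source catalog.
img.write_catalog(outfile="sources.fits", format="fits",
                  catalog_type="srl", clobber=True)
```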
## 3 Survey methodology The motivation of this survey paper is to give an account of the recent progress of computer intelligence in the morphological classification of radio image data, with a focus on the last five years, which have seen substantial progress in deep learning paradigms. Besides the core topic mentioned above, supplementary challenges like image annotation, data management, anomaly detection, and scalability are also considered to some extent. The Web of Science8 and NASA's Astrophysics Data System9 databases were used to retrieve relevant literature papers for the study, and the results were cross-checked against the Google Scholar10 database. These databases offer advanced search capabilities and comprehensive coverage of high-quality journal articles across various disciplines, particularly in the areas of Computer Science and Astronomy, which are the focus of our research. Footnote 8: [https://www.webofscience.com/](https://www.webofscience.com/) Footnote 9: [https://ui.adsabs.harvard.edu/](https://ui.adsabs.harvard.edu/) Footnote 10: [https://scholar.google.com/](https://scholar.google.com/) We aimed to achieve a fair and representative sample of papers from the large pool of published papers over the last five years. The search strategy protocol adopted is outlined in Fig. 7 (Wee and Banister, 2016). Furthermore, Fig. 9 illustrates the schematic study design of inclusion and exclusion criteria that were used. A total of 44 papers were retrieved from the initial query. Thereafter, an exclusion criterion was introduced to filter out papers in the field of remote sensing and those in the field of radio astronomy but covering RFI, pulsars, solar physics and microwaves, as we consider them beyond the scope of our review. After retrieving relevant papers using the refined queries in Table 1, we then applied the forward and backward snowballing technique to the obtained papers (Wohlin, 2014). This left us with a total of 30 papers. Notably, among the final selection of papers extracted, there was no review paper covering the scope of radio astronomy. The few available papers identified were in the wider field of astronomy, assessing the adoption and maturity of machine learning and deep learning in the field (Fluke and Jacobs, 2020, Wang et al., 2018). Table 11 presents a high-level summary of the surveyed papers. The papers provide a wide range of machine/deep learning-based methods applied in the field of radio astronomy. In the coxcomb chart (similar to a pie chart) shown in Fig. 10, the radius of each circle segment is proportional to the number of papers it represents. Therefore, the radius is determined by the frequency of the methodology in the papers surveyed. Figure 7: The protocol followed to identify relevant articles for this survey. N represents the number of papers selected after each selection stage. It can be observed that the majority of the methodologies used are based on shallow and deep convolutional neural networks (CNNs). Radio astronomy has indeed adopted and adapted the latest innovative and novel methodologies such as deep CNNs and Transformers from the larger science community. This has consequently led to the development of massive data-driven intelligent pipelines, which have automated the rather inefficient, historically manual process. Figure 8: Commonly used data sets for morphological and anomaly detection. Abbreviations are defined in the Appendix. ## 4 Adoption of computer intelligence in radio astronomy The adoption of artificial intelligence in radio astronomy has led to a plethora of machine and deep learning applications in classification and segmentation tasks.
This has been largely attributed to the resurgence of artificial intelligence and the resulting development of innovative and novel deep learning architectures such as CNNs (also known as ConvNets), driven by the exploitation of high-resolution images. ConvNets are to some extent inspired by the biological functionality of the human visual cortex. They have become the de facto choice for many computer vision tasks. A simple ConvNet is generally composed of a set of convolutional (multiple building blocks) and subsampling (pooling) layers followed by a fully connected layer, as shown in Fig. 12. In addition, various linear and non-linear mapping functions and regulatory units are embedded in the structure (e.g., activation functions, batch normalization, and dropout) to optimize its performance. CNN models are designed to automatically and adaptively learn spatial features during training. The convolution and subsampling layers are focused on feature extraction, while the fully connected layer maps the extracted features onto outputs. In the early layers of a CNN, simple features like edges are identified. Then, as the data progresses through the layers, more sophisticated features are determined. Notably, ConvNets classify images based on learned weights in the form of convolutional kernels obtained through the training process. In the next section, we delve into a synthesis of the papers listed in Table 11. ### Morphological classification The generation of science-ready survey catalogs requires the classification of processed calibrated radio images into various physical source categories such as galactic, extragalactic, AGN, and SF galaxies. The process of identifying and annotating such phenomena is very crucial in the preparation and release of science-ready products to the public for further scientific exploitation. Additionally, the process helps scientists to have a better comprehension of the Universe through exploring the fundamental laws of physics. Automating the process of visualizing and labeling sources based on their morphological features is therefore critical in astronomy. \begin{table} \begin{tabular}{l} \hline **Web of Science Query** \\ \hline Query = ((TS=("radio astronomy" OR "radio galaxy" OR "radio interferometry") AND TS=("radio" OR "anomaly" OR "outlier" OR "source extraction") AND TS=("machine learning*" OR "convolutional neural network*" OR "deep learning*" OR "transfer learning*" OR "artificial intelligence*") OR KP=("galaxies:active", "radio continuum:galaxies", "radio continuum:general", "galaxies:jets", "image processing", "surveys")) NOT TS=("solar" OR "rfi" OR "pulsar" OR "remote sensing" OR "synthetic aperture radar" OR "microwave") \\ \hline \end{tabular} \end{table} Table 1: Search query used in Web of Science for the retrieval of relevant review papers. TS = Topic Sentence and KP = Keywords Plus. Quotation marks are used for exact matching. Figure 9: A schematic study design process of exclusion and inclusion criteria adopted for the retrieval of the relevant articles considered in this survey.
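To ground the ConvNet anatomy described above (and sketched in Fig. 12), here is a minimal PyTorch sketch of a small radio-galaxy classifier; the single-channel 128x128 input cutouts and the four-class output are illustrative assumptions, not any surveyed paper's exact architecture:

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """Conv/pool feature extractor followed by a fully connected classifier."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 64 -> 32
        )
        self.classifier = nn.Linear(32 * 32 * 32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # learned spatial features
        return self.classifier(x.flatten(1))  # map features to class scores

logits = SmallConvNet()(torch.randn(8, 1, 128, 128))  # -> shape (8, 4)
```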
Broadly, morphological classification in radio astronomy entails grouping populations of Fanaroff-Riley (FR) radio galaxies into compact (point-like) and extended sources (FRI, FRII, WAT, NAT, XRG - X-shaped radio galaxies, RRG - ringlike radio galaxies, along with others); the extended sources contain complex morphological structures with two or more components in a galaxy. The developed FR classification approaches utilize either unsupervised, semi-supervised or supervised machine learning. Fig. 13 illustrates the general taxonomical categorization of the classification methods reviewed. Using supervised learning, Aniyan and Thorat (2017) developed the first ConvNet model based on the AlexNet CNN architecture (Toothless11). Their model was evaluated on the Toothless12 data set, achieving accuracies of 95%, 91% and 75% for Bent-tailed, FRI and FRII, respectively. Their work provided a baseline that clearly demonstrates the potential of deep learning in classifying radio galaxies. Besides, the VGG-16 architecture (Liu and Deng, 2015)13 was used in a semi-supervised way to classify radio galaxies, thereby leveraging the large unlabelled data sets that are available (Ma et al., 2019b). Footnote 11: [https://github.com/ratt-ru/toothless](https://github.com/ratt-ru/toothless) Footnote 12: Toothless is a three-class radio galaxy data set composed of selected well-resolved FRI (178 samples), FRII (284 samples), and Bent-tailed (254 samples) sources. Footnote 13: The symbol \({}^{*}\) is used on citations that are not part of the papers under review Unsupervised learning methodologies like self-organizing maps were used by Polsterer et al. (2016) to construct radio morphologies based on similar/dissimilar characteristics of the Radio Galaxy Zoo project data (Banfield et al., 2015). The authors proposed the Parallelized rotation and flipping INvariant Kohonen maps (PINK) approach, which does not require training data labels, and hence avoids any potential bias by inexperienced practitioners in the Radio Galaxy Zoo project (Banfield et al., 2015). It only required human inspection and profiling of the resulting prototypes into known FR galaxy sources accordingly. While deep learning methodologies are seen to be dominant in the classification task, as seen in Table 11, conventional machine learning techniques have also been explored in the classification of FR galaxies. Becker and Grobler (2019) compared the following methodologies: Nearest Neighbors (Peterson, 2009)\({}^{*}\), Support Vector Machine (SVM) (Cortes and Vapnik, 1995)\({}^{*}\), Radial Basis Function SVM (Ding et al., 2021)\({}^{*}\), Gaussian Process Regression (Banerjee et al., 2013)\({}^{*}\), AdaBoosted Decision Tree (Freund and Schapire, 1997)\({}^{*}\), Random Forest (Breiman, 2001)\({}^{*}\), Naive Bayes (Rish et al., 2001)\({}^{*}\), Multi-layered Perceptron (Piramuthu et al., 1994)\({}^{*}\) and Quadratic Discriminant Analysis (Bose et al., 2015)\({}^{*}\) in the classification of Fanaroff-Riley Radio Galaxies. Figure 10: A Coxcomb chart illustrating the top seven most commonly used machine learning methodologies in radio astronomy in recent years. The quantity of papers belonging to each of the seven categories is equal to the number of concentric circles that overlap the respective segment.
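A sketch of how such a comparison of classical classifiers over engineered features is typically set up with scikit-learn; the feature matrix and labels below are random placeholders, not radio data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))      # placeholder engineered features
y = rng.integers(0, 2, size=500)    # placeholder FRI/FRII labels

models = {
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF)": SVC(kernel="rbf"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```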
Becker and Grobler (2019) used the Toothless data set, excluding the bent-tailed radio sources, in their implementation. A comparative analysis was performed between the different conventional machine-learning algorithms on radio images. The Random Forest classifier was found to have the highest performance, with an accuracy of 94.66% [Becker and Grobler, 2019]. The study demonstrated that the morphological features derived from radio images are distinct and unique to radio galaxy classes. In order to comprehensively discuss the papers under review, we consider the data processing pipelines and model architectures used in the research papers. Specifically, the methodological applications covered in this review are categorized into three major groups: model-centric approaches, data-centric approaches, and weakly supervised approaches. This is motivated by the need to develop robust algorithms when limited annotated data is available or when massive amounts of unlabelled data can be utilized. ### Model-centric approach Research in computer intelligence predominantly dedicates resources and time to improving and optimizing machine learning algorithms. The development of novel model architectures has been witnessed in the space of deep learning. This has gradually been translated into the field of radio astronomy, given that it is a data-driven field. #### 4.2.1 CNN architectures Model architectures have been shown to play a significant role in improving and increasing the generalization of deep learning algorithms in classification problems. Therefore, we have seen progressive breakthroughs and applications of more complex architectures such as AlexNet [Krizhevsky et al., 2017]\({}^{*}\)[Aniyan and Thorat, 2017], VGG-16 [Ma et al., 2019, Wu et al., 2018], and DenseNet [Huang et al., 2017]\({}^{*}\) [Samudre et al., 2022] in radio astronomy. The depth of the CNN architecture models is varied across different applications, depending on the required complexity. For instance, Lukic et al. [2019] constructed four-layer (CONVNET4) and eight-layer (CONVNET8) convolutional networks, Becker et al. [2021] constructed eleven layers, Aniyan and Thorat [2017] constructed twelve layers, and Tang et al. (2019) constructed thirteen layers for the classification of radio galaxies. Figure 11: Summary of classification, source extraction and anomaly detection papers. Abbreviations are defined in the Appendix. According to a comparative analysis of a capsule network, CONVNET4 and CONVNET8 on the LoTSS DR1 data set, it was observed that CONVNET8 outperformed CONVNET4 and the capsule network, though with a marginal difference (Lukic et al., 2019). The eight- and four-layer CNNs and the capsule network attained average precision scores of 94.3%, 93.3% and 89.7%, respectively. The rationale behind increasing the depth of the convolutional layers is that it augments the number of nonlinear functions and introduces additional feature hierarchies that optimize the classification function. Consequently, deeper networks tend to achieve higher performance compared to more shallow networks (Tang et al., 2019).
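A small sketch of the depth trade-off just discussed: building otherwise identical PyTorch CNNs with a varying number of convolutional layers and counting their trainable parameters (the layer widths are illustrative, not those of CONVNET4/CONVNET8):

```python
import torch.nn as nn

def make_cnn(n_conv_layers: int, width: int = 32) -> nn.Sequential:
    """Stack n conv+ReLU blocks; each extra block adds a nonlinearity."""
    layers, in_ch = [], 1
    for _ in range(n_conv_layers):
        layers += [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU()]
        in_ch = width
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 2)]
    return nn.Sequential(*layers)

for depth in (4, 8):
    n_params = sum(p.numel() for p in make_cnn(depth).parameters())
    print(f"{depth}-layer CNN: {n_params:,} trainable parameters")
```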
#### 4.2.2 Regularization techniques Overfitting has been one of the central challenges affecting the robustness of radio galaxy classification models. The availability of only small labeled astronomical data sets for building the models remains a major contributor to the challenge. To address this, researchers have adopted regularization techniques during model building. This is aimed at allowing the models to maximally learn from the limited training data and achieve better generalization. One technique used is randomly dropping out weakly connected units (neurons) of the CNN during training (Tang et al., 2019; Tang et al., 2022). This approach is commonly referred to as dropout. Dropout helps to reduce parameter saturation during the training process, preventing excessive co-adaptation of the units. Moreover, to reduce covariate shift in the input data, the batch normalization technique is applied during model training (Tang et al., 2019; Tang et al., 2022). This involves standardizing the feature maps so that their values have zero mean and unit variance, which regularizes the network. These regularization approaches reduce the chances that the network will succumb to the vanishing gradient problem and reduce the time that the network requires to converge. #### 4.2.3 Specialized convolutional blocks A key thrust in the performance of ConvNets compared to other models is the continued construction and integration of innovative processing units and the embedding of newly designed novel convolutional blocks. In radio astronomy, there are several novel research efforts in this direction. Figure 12: The fundamental building blocks of a standard ConvNet. Figure 13: Computer intelligence methodologies applied in the classification of radio galaxies. Attention gates are convolutional blocks, analogous to the human visual system, that efficiently prioritize localized salient features in an object in order to contextualize and identify it. Bowles et al. (2021) implemented novel convolutional filters that localize salient features while suppressing irrelevant information in the provided images, thus resulting in predictions obtained directly from pertinent and contextualized feature maps. The attention-gate layers are integrated into the CNN architectural backbone. This approach was found to reduce the CNN model training parameters by 50% and to improve the interpretability of CNN models. It promotes explainable deep learning by using attention maps that can be investigated to trace the root cause of misclassification in a model. Despite the notable reduction in training parameters, the performance of the CNN architecture developed was equivalent to the state-of-the-art CNN applications in the literature. Group equivariant Convolutional Neural Networks (G-CNNs) embed specialised convolution kernel filters in the conventional CNN (Cohen and Welling, 2016). G-CNNs are aimed at supporting equivariance for a wider set of isometries (for example, rotations and reflections) beyond translation. By design, CNNs are constructed to be translation-equivariant in their feature maps, but this does not apply to other isometries such as rotation. This implies that G-CNNs allow the preservation of group equivariance on augmented data - a common data-centric approach in deep learning model building. Thus, data samples increased via rotational augmentations share the same kernels (weight sharing) as they pass through the convolutional layers. This approach has been demonstrated to improve CNN architecture performance in the galaxy classification task using the MiraBest data set (Scaife and Porter, 2021). Other innovative ideas introduced to the standard convolutional architectures in radio astronomy include multidomain multibranch CNNs, which allow the models to take multiple data inputs as opposed to single source images (Alger et al., 2018; Tang et al., 2022).
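To make the regularization techniques of Section 4.2.2 concrete, a convolutional block combining batch normalization and dropout might look like this in PyTorch; the placement of the layers and the dropout rate of 0.25 are illustrative choices:

```python
import torch.nn as nn

def regularized_conv_block(in_ch: int, out_ch: int, p_drop: float = 0.25):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),  # standardize feature maps (zero mean, unit variance)
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Dropout2d(p_drop),    # randomly zero whole feature maps during training
    )
```

Calling `model.train()` or `model.eval()` toggles the stochastic dropout behaviour between training and inference.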
### Data-centric approaches The quality and robustness of machine and deep learning algorithms are highly dependent on the quality of the data. Quality entails the consistency, accuracy, completeness, relevance, and timeliness of the data. Principally, in order to improve the performance of the algorithms, data-centered approaches are paramount. The data (radio images) must be free from RFI noise and artifacts before calibration and processing. The data should not be ambiguous, and each sample should belong to a definite radio galaxy class. Ideally, data must be highly curated. In addition, to circumvent overfitting and simultaneously achieve high generalization accuracies, adequate data diversity in the training data set is a prerequisite. This aids in avoiding poor model performance when tested with real-world out-of-distribution data or covariate-shifted data. #### 4.3.1 Data augmentation Data augmentation aims to increase the size and diversity of the training set. It is applied on the assumption that additional important information can be extracted from the insufficient data set available via augmentations. It has been widely adopted in radio galaxy classification to mitigate overfitting (Aniyan and Thorat, 2017; Alhassan et al., 2018; Lukic et al., 2018), to improve the performance of machine and deep learning models (Maslej-Kresnakova et al., 2021; Kummer et al., 2022; Lukic et al., 2018), to address rotational invariance (Becker et al., 2021), to increase the size and the diversity of the training data (Aniyan and Thorat, 2017; Alhassan et al., 2018; Becker et al., 2021; Ma et al., 2019), and to address the class imbalance, especially for the minority classes among the radio galaxy population groups in the training data (Lukic et al., 2018). There are different kinds of augmentation strategies. Two of these strategies are positional augmentation and color augmentation. Examples of the former include scaling, flipping, rotation, and affine transformation. Examples of the latter include brightness, contrast, and saturation (Best and Heckman, 2012; Becker et al., 2021; Scaife and Porter, 2021; Slijepcevic et al., 2022). Other augmentation approaches include up-sampling or oversampling of the minority class and generative adversarial networks (Kummer et al., 2022). The literature attests to the fact that data augmentation is a data-centered strategy that can significantly improve model performance and result in models with improved generalization ability (Maslej-Kresnakova et al., 2021). Maslej-Kresnakova et al. (2021) found that the improvement of model performance and the capacity to generalise on out-of-distribution data were highly dependent on the augmentation strategy employed. They found that brightness increases, vertical or horizontal flips, and rotations led to better performance, while zooms, shifts, and decreases in the brightness of the images degraded model performance. Therefore, the process of finding an optimal data augmentation strategy in a project is non-trivial. A downside of data augmentation is that any inherent bias or data errors will be inherited by the augmented data. Nevertheless, this does not rule out the fact that data augmentation is an important data-centric approach for both increasing minority data classes and improving model performance in the computer vision paradigm.
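A minimal torchvision sketch of the positional and color augmentations listed above; the specific ranges are illustrative and, per the findings cited, should be validated on the task rather than assumed to help:

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=180),   # positional: rotational invariance
    transforms.RandomHorizontalFlip(),        # positional: flips
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.2),   # color: mild brightness jitter
    transforms.ToTensor(),
])
# Applied on the fly, e.g. a Dataset.__getitem__ returns augment(pil_image),
# so each epoch sees a differently transformed copy of every training image.
```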
#### 4.3.2 Feature engineering Feature engineering is aimed at improving model accuracy in machine learning. It involves the careful, domain-knowledge-driven selection, extraction, creation, manipulation, and transformation of features of the training data. The engineered features are targeted at providing the 'precise physical properties' of the image data for model development. In radio galaxy classification, engineered morphological features include peak brightness, lobe size, number of lobes, and right ascension and declination (Becker and Grobler, 2019). Moreover, feature descriptors have been proposed that represent the texture of radio images via Haralick features14 (Ntwaetsile and Geach, 2021) or that use radial Zernike polynomials to extract image moments invariant to translation, rotation, and scale (Sadeghi et al., 2021). Footnote 14: Haralick features are a set of thirteen non-parametric measures which are derived from the radio images based on the Grey Level Co-occurrence Matrix. Machine learning algorithms are applied on the engineered features (compact representations of the radio images) for the classification of radio galaxies. In this case, either supervised or unsupervised approaches are used, for example, Hierarchical Density Based Spatial Clustering of Applications with Noise (HDBSCAN) (Ntwaetsile and Geach, 2021), Random Forest (RF) (Becker and Grobler, 2019) and SVM (Sadeghi et al., 2021). Feature engineering has been shown to provide machine learning algorithms with features of high importance, resulting in high performances, with accuracies above 95% (Sadeghi et al., 2021). However, the main drawback is that it requires domain expertise to design the feature descriptors. Therefore, they may not be able to capture all the relevant information in the data. ### Weak supervision approaches In radio astronomy, most publicly available catalogs contain on the order of \(10^{3}\) radio galaxies. Moreover, the cost of labeling sufficiently large (in deep learning terms) radio astronomical data sets is very high. On the contrary, unlabelled catalogs consist of petabytes of data (from a single survey). Hence, exploring algorithms and strategies with the capacity to leverage the massive unlabelled public catalogs and/or to exploit the small annotated data sets available is paramount. Three weakly supervised methods are discussed: transfer learning, semi-supervised learning, and N-shot learning. #### 4.4.1 Transfer learning Transfer learning is a paradigm that reuses knowledge gained from models pre-trained on massive data sets by fine-tuning them on other tasks, making it effective when training data is scarce. In the context of the classification of radio galaxies, transfer learning has been investigated and has contributed to improved accuracies compared to other methods, such as few-shot learning (Samudre et al., 2022). The pre-trained model's weights and biases provide the generic feature representations essential to the model for identifying low-level features (i.e., shapes and edges) of the objects. Then, the complementary complex features specific to the classification task at hand are learned by fine-tuning the last layers of the model using the available small labeled data set. The study by Tang et al. (2019) investigated whether it was possible to develop robust cross-survey identification machine learning algorithms that made use of the transfer learning paradigm. In their research, they used FIRST and NVSS survey data, which are characterized by high- and low-resolution images, respectively. They found that models pre-trained on high-resolution surveys (FIRST) can be effectively transferred, with high accuracies of about 94% (in a two-class case: FRI and FRII), to lower-resolution surveys (NVSS). However, the converse was observed not to be true.
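A typical fine-tuning recipe of the kind described in Section 4.4.1, sketched with a torchvision backbone; the choice of ResNet-18 and of freezing all but the final layer is illustrative, not the setup of any surveyed paper:

```python
import torch.nn as nn
from torchvision import models

# Start from weights pre-trained on a large generic data set.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the backbone: keep the generic low-level feature extractors.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head and fine-tune only its parameters
# on the small labeled radio-galaxy data set.
model.fc = nn.Linear(model.fc.in_features, 4)  # e.g. compact/FRI/FRII/bent
```

Single-channel radio cutouts would be replicated to three channels (or the first convolution replaced) to match such a backbone's expected input.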
Similarly, transfer learning on radio galaxy classification has been shown to achieve high performance even when extending the number of classes beyond two (FRI and FRII). Lukic et al. (2019) used the Inception-ResNet-v2 model (Szegedy et al., 2017) to classify three classes (FRI, FRII, and Unresolved) from the LoTSS-DR1 data. Inception-ResNet-v2 achieved an average accuracy of 96.8%, the best performance compared to the ConvNet-4, ConvNet-8 and Capsule Network model architectures that they experimented with on the same data set. Additionally, a transfer learning method based on the DenseNet architecture (Huang et al., 2017) was tested by Samudre et al. (2022). They obtained a precision of 91.9%, a recall of 91.8% and an F\({}_{1}\) score of 91.8% for the classification of compact, FRI, FRII, and Bent radio galaxies with less than 3000 test samples (Samudre et al., 2022). Notably, transfer learning was observed to converge faster compared to conventional CNN architectures. For instance, the model converged faster (10 fewer epochs on average) than other models such as ConvNet-4 (Lukic et al., 2019). #### 4.4.2 Semi-supervised learning Semi-supervised learning (SSL) lies between unsupervised and supervised learning, utilizing both annotated data samples and a large amount of unannotated data during training. Employing semi-supervised techniques for the radio galaxy morphological classification task has recently been gaining traction within the literature. The reason for this can be ascribed to the large publicly available unannotated data sets within the field of radio astronomy. Concerted efforts have been dedicated to investigating the possibility of exploiting these algorithms and conducting a comparative analysis of their performance against supervised machine learning [Ma et al., 2019b, a, Slijepcevic et al., 2022]. Ma et al. [2019b] trained a semi-supervised model where they constructed a radio galaxy morphology classifier (autoencoder) from the VGG-16 architecture. The autoencoder was pre-trained on a large unannotated data set of 18,000 radio galaxies from the BH12 catalog [Best and Heckman, 2012]. The pre-training of the modified VGG-16 architecture was aimed at updating its weight and bias parameters - allowing the model to learn the low-level morphological features of the radio galaxies (such as shapes and outlines). The pre-trained model was then fine-tuned with a small annotated data set of only about 600 radio galaxies. It was observed that the SSL strategy achieved high average precision and recall of 91% and 90%, respectively. Similarly, the MCRGNet classifier (an SSL model) was pre-trained on the unLRG (unlabelled radio galaxy) data set (14,245 samples) and fine-tuned on the LRG (labeled radio galaxy) data set (1442 samples) [Ma et al., 2019a]. The MCRGNet's average classification precision was 93%. This was a better precision compared to the competing methods at the time. Another methodological approach used in SSL for radio galaxy classification is presented by Slijepcevic et al. [2022], which used the FixMatch algorithm [Sohn et al., 2020]. In FixMatch's strategy, a weakly augmented (for instance, shift or flip data augmentation methods) unannotated image is first fed into a model and then used to generate a pseudo-label.
Then, in a concurrent fashion, the same unannotated image under strong augmentations (for instance, brightness, translation, or contrast) is fed into the model to generate a prediction. Thirdly, using cross-entropy or a distance measure, such as the Frechet inception distance, the model is trained to make the best prediction by matching the predictions of the pseudo-label15 with the ones generated for the strongly augmented image [Sohn et al., 2020, Slijepcevic et al., 2022]. Slijepcevic et al. [2022] used the Tang et al. network classifier in an SSL manner. They used the MiraBest data (labeled) and the Radio Galaxy Zoo data release 1 (unlabelled). It was shown that the SSL strategy was able to extract knowledge from the unlabelled data, thus achieving higher accuracy compared to the Tang classifier trained on the MiraBest data alone (baseline). Footnote 15: A label that is generated by a model’s prediction rather than being manually assigned by a human annotator. #### 4.4.3 N-shot learning N-shot learning algorithms are designed to leverage the limited supervised information available (labeled data set) to make accurate predictions while avoiding overfitting challenges. Types of N-shot learning include Few-Shot Learning (FSL), One-Shot Learning (OSL), and Zero-Shot Learning (ZSL). Samudre et al. [2022] applied an FSL approach based on a Siamese neural network [Koch et al., 2015]. The twin network model achieved an average precision of 74.2%, a recall of 74.0%, and an F\({}_{1}\) of 74.1% for the classification of compact, FRI, FRII, and Bent radio galaxies [Samudre et al., 2022]. In their experimentation, a sample size of 2708 radio galaxies was used. The samples were composed of selections from the FRICAT, FRIICAT, CoNFIG, and Proctor data catalogs. While this approach has shown excellent performance on standard benchmark data sets, the twin network was found to yield relatively lower performance compared to the state-of-the-art supervised machine learning approaches on real data sets.
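A minimal sketch of the FixMatch-style pseudo-labelling step described in Section 4.4.2; the confidence threshold of 0.95 and the augmentations are illustrative placeholders, with weak_aug and strong_aug standing for transform pipelines like those of Section 4.3.1:

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_unlabeled, weak_aug, strong_aug,
                            threshold: float = 0.95) -> torch.Tensor:
    # i) weak augmentation -> pseudo-label from the model's own prediction
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(x_unlabeled)), dim=1)
        conf, pseudo_labels = probs.max(dim=1)
    # ii) strong augmentation -> prediction to be matched
    logits_strong = model(strong_aug(x_unlabeled))
    # iii) cross-entropy between the two, kept only for confident pseudo-labels
    mask = (conf >= threshold).float()
    loss = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (mask * loss).mean()
```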
### Beyond classification #### 4.5.1 Anomaly detection In the context of astronomy, anomalies can be defined as undiscovered and serendipitous astrophysical objects and phenomena [Giles and Walkowicz, 2018, Lochner and Bassett, 2021] - peculiar objects having unexpected properties. With the large data sets generated by radio telescopes, such as the EMU generating \(\sim\)70 million radio sources [Norris et al., 2011], the SKA1 All-Sky continuum survey (SASS1), which is expected to generate \(\sim\)500 million radio sources, or the SKA2 All-Sky continuum Survey (SASS2), which is expected to increase this to \(\sim\)3500 million radio sources [Norris et al., 2014], the prospects of discovering unknown, unique objects are high. Machine learning continues to play a critical role in unlocking discoveries by unpacking deep patterns in massive data sets. Hence, such automatic processes supplement the manual inspection of objects to annotate new, interesting radio sources and separate them from artifacts and already-known sources. Anomaly detection is mainly an unsupervised task where no labelled data is required. In radio astronomy, there are few anomaly detection applications that can be referenced. Polsterer et al. [2016] and Mostert et al. [2021] investigated self-organizing maps to identify categories of radio galaxies using the Radio Galaxy Zoo Citizen project and LoTSS data, respectively. The identified objects that did not fall in any category of the known galaxies were annotated as outliers. In addition, Lochner and Bassett (2021) developed an active anomaly detection algorithm16 that uses the isolation forest and local outlier factor algorithms. In their paper, the anomaly detector is coupled with user feedback (based on interest). The algorithm detects and flags outliers and the user scores the results, which are then used to suppress dissimilar objects and display similar ones. Footnote 16: Active anomaly detection is an anomaly detection approach based on active learning. Active learning involves leveraging the expertise of a domain expert and the computational power of machine learning to improve the efficiency and effectiveness of the learning process. Anomaly detection is particularly challenging because some identified anomalies may be artifacts introduced during data recording, calibration, and reduction procedures. Further, as noted by Lochner and Bassett (2021), some flagged anomalies may not be of interest to the research objectives of the astronomer. Therefore, the identified anomalies largely depend on the focus area of the astronomer, and hence the relevance of the anomalies to a study may not be easily captured by machine/deep learning algorithms. Despite the progress achieved in the exploitation of machine intelligence, anomaly detection remains a challenging field of research. #### 4.5.2 Source extraction Automated source finding and parameterization are necessary for next-generation radio interferometric surveys to extract radio sources, as these sources often lack clear boundaries and exhibit diffuse luminosity that decays from the center, making them challenging to distinguish from noise in an image. The development of deep learning-based techniques to solve the challenge of extracting compact and diffuse sources alike has been on the rise. Different architectural designs and implementations of CNNs have been explored, such as the simple CNN in ConvoSource (Lukic et al., 2019), Mask R-CNN (He et al., 2017) in Astro R-CNN, and Tiramisu (Pino et al., 2021) - a recent semantic segmentation approach based on U-Net (Ronneberger et al., 2015). These methods have shown that the use of deep learning methodologies in the automatic detection and extraction of radio sources is robust and achieves high accuracies of above 90%. In addition, they have shown significant improvements in classifying extended sources; for instance, the Tiramisu semantic segmentation by Pino et al. (2021) achieves an accuracy of 97%, though with a small sample size of 2,348 sources (of which 320 sources are extended). In essence, the latest state-of-the-art deep learning methodologies are promising alternatives to the dominant tools like PyBDSF. However, the deep learning algorithms' performance is found to be limited when the images are noisy or the sources are faint or have a diffuse morphological structure.
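To illustrate the segmentation-style source extraction just described, here is a deliberately tiny encoder-decoder in the spirit of U-Net that maps an image to a per-pixel source/background score; this is a toy sketch, far shallower than Tiramisu or a real U-Net with skip connections:

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Encoder halves the resolution; decoder restores it and scores each pixel."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(16, 1, 3, padding=1))  # per-pixel source logit

    def forward(self, x):
        return self.decode(self.encode(x))

mask_logits = TinySegNet()(torch.randn(1, 1, 128, 128))  # -> (1, 1, 128, 128)
```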
## 5 Opportunities, challenges, and outlook Computer intelligence is having a remarkable impact on radio astronomy. A plethora of new, insightful scientific work is published every year, resulting in ever better and more accurate models that generalize well. As a result, there are now open opportunities to develop robust models that are capable of generating predictions across surveys from different yet related next-generation telescopes (such as LOFAR, MeerKAT, and SKA). Furthermore, these models would require little to no modification once a new data release is made available. This highlights the potential for further scientific progress in utilizing raw radio image cubes generated by modern telescopes, through the incorporation of computer intelligence. Despite the predominance of massive high-resolution data sets from modern telescopes, there is limited availability of annotated data sets. As a result, this hinders the ability to fully utilize and exploit the potential of artificial intelligence in this data-rich field. While there are developed strategies (such as data augmentation, semi-supervised learning and weakly supervised approaches) leveraging small data samples (Tang et al., 2019; Slijepcevic et al., 2022), such strategies cannot match the diverse and unique astrophysical phenomena embedded in the massive radio images. Therefore, this calls for continued collaborative efforts in the generation of annotated machine/deep learning-ready data sets while considering compute resources. Radio astronomy is a data-rich and compute-intensive field, hence the exploitation of scalable platforms and software is paramount. In order to train a model using techniques such as SOM (Galvin et al., 2019), SVM (Sadeghi et al., 2021) and DCNN (Sadeghi et al., 2021), a significant amount of computing resources is required. For instance, DCNNs typically require large amounts of images in order to learn the over a million parameters that characterize a model. Therefore, as the available data in astronomy increases exponentially, and more specialized machine/deep learning algorithms are developed, the demand for highly scalable computing performance is inevitable. High-performance computing (HPC), graphical processing units (GPUs) and distributed computing are often used to run such algorithms. In particular, big data (radio astronomical data) requires sophisticated methodologies to efficiently query and process large volumes of data. Despite the availability of numerous studies, as discussed in this review paper, there is still a wide gap in the utilization of scalable pipelines that allow for more efficient parallel and distributed machine/deep learning computations, i.e., pipelines that take advantage of the storage formats of radio astronomical survey data. For instance, LOFAR uses H5parm, a Hierarchical Data Format version 5 (HDF5) compliant file format, which provides an excellent basis for applying Apache Spark17, a big data processing ecosystem. Footnote 17: [https://spark.apache.org/](https://spark.apache.org/) Indexing of identified radio sources is a prerequisite for fast retrieval of radio galaxies of similar/dissimilar morphological attributes. However, as this topic is hardly addressed in the literature covered, it highlights an existing research gap in radio astronomy that needs to be filled. Image indexing and/or retrieval is the process of finding objects (images) that have similar characteristics with varied shapes and sizes. Having developed a database of known and unknown (anomalous) radio astronomical structures, it is of great importance to develop a system that would aid in the quick retrieval of galaxies with similar morphological characteristics (Aziz et al., 2017). Ideally, identified objects are indexed with a hashing function that minimizes the distances between perceptually similar objects and maximizes those of dissimilar objects. This is a paradigm that has seen a lot of progress in recent years with the development of deep hashing methods (Luo et al., 2020), a paradigm that to our knowledge is yet to be leveraged in radio astronomy.
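A minimal sketch of the hashing idea described above: random-projection locality-sensitive hashing maps feature vectors (e.g., CNN embeddings of radio sources) to short binary codes, so that similar sources tend to share nearby codes under the Hamming distance. This classical scheme stands in for the learned deep hashing methods cited; the embedding dimension and code length are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
planes = rng.normal(size=(64, 128))          # 64-bit codes for 128-d embeddings

def hash_code(embedding: np.ndarray) -> np.ndarray:
    """Sign of random projections -> binary code (0/1 per hyperplane side)."""
    return (planes @ embedding > 0).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))     # small distance ~ similar objects

# Retrieval: rank an indexed set of sources by Hamming distance to a query.
index = [hash_code(rng.normal(size=128)) for _ in range(1000)]
query = hash_code(rng.normal(size=128))
nearest = min(range(len(index)), key=lambda i: hamming(index[i], query))
```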
## 6 Conclusion Radio astronomy is in the era of Big Data, presenting ubiquitous opportunities that necessitate extensive automation of data processing, exploration, and scientific exploitation. Such automation will be essential if modern telescopes are to reach their scientific goals and help unravel the cosmos. In this regard, astronomers have taken full advantage of the deep neural network revolution in computer vision, with notable success. In this survey paper, we have presented a detailed literature overview of the data and algorithmic advances in data curation pipelines, data preprocessing strategies, and cutting-edge machine intelligence methods. New scientific works that involve the development of robust and accurate novel models have emerged in the field of radio astronomy. These models can capture the diverse and unique astrophysical phenomena found in large radio images through the use of techniques like data augmentation, semi-supervised learning, and weakly supervised approaches. This has opened up the possibility of creating models that can accurately predict the outcomes of surveys conducted with telescopes like LOFAR and SKA, without significant modification when new data becomes available. The survey revealed that there has been little exploration of image indexing and retrieval within the field of radio astronomy, even though it is an essential step for quickly retrieving radio images with similar or dissimilar morphological structures. This area of research offers considerable potential for future investigation. ## 7 Appendix Figure 14: The abbreviations are categorized in three sections, with the top section representing algorithm keywords, the middle section representing galaxies, and the bottom section representing astronomical surveys.
2305.13626
Prompting and Evaluating Large Language Models for Proactive Dialogues: Clarification, Target-guided, and Non-collaboration
Conversational systems based on Large Language Models (LLMs), such as ChatGPT, show exceptional proficiency in context understanding and response generation. However, despite their impressive capabilities, they still possess limitations, such as providing randomly-guessed answers to ambiguous queries or failing to refuse users' requests, both of which are considered aspects of a conversational agent's proactivity. This raises the question of whether LLM-based conversational systems are equipped to handle proactive dialogue problems. In this work, we conduct a comprehensive analysis of LLM-based conversational systems, specifically focusing on three aspects of proactive dialogue systems: clarification, target-guided, and non-collaborative dialogues. To trigger the proactivity of LLMs, we propose the Proactive Chain-of-Thought prompting scheme, which augments LLMs with the goal planning capability over descriptive reasoning chains. Empirical findings are discussed to promote future studies on LLM-based proactive dialogue systems.
Yang Deng, Lizi Liao, Liang Chen, Hongru Wang, Wenqiang Lei, Tat-Seng Chua
2023-05-23T02:49:35Z
http://arxiv.org/abs/2305.13626v2
Prompting and Evaluating Large Language Models for Proactive Dialogues: Clarification, Target-guided, and Non-collaboration

###### Abstract

Conversational systems based on Large Language Models (LLMs), such as ChatGPT, show exceptional proficiency in context understanding and response generation. However, despite their impressive capabilities, they still possess limitations, such as providing randomly-guessed answers to ambiguous queries or failing to refuse users' requests, both of which are considered aspects of a conversational agent's proactivity. This raises the question of whether LLM-based conversational systems are equipped to handle proactive dialogue problems. In this work, we conduct a comprehensive analysis of LLM-based conversational systems, specifically focusing on three aspects of proactive dialogue systems: clarification, target-guided, and non-collaborative dialogues. To trigger the proactivity of LLMs, we propose the Proactive Chain-of-Thought prompting scheme, which augments LLMs with the goal planning capability over descriptive reasoning chains. Empirical findings are discussed to promote future studies on LLM-based proactive dialogue systems.

## 1 Introduction

Conversational systems are envisioned to provide social support or functional service to human users via natural language interactions. Conversational research typically centers on a system's response capabilities, such as understanding the dialogue context Wu et al. (2020); Chen et al. (2022) and generating appropriate responses to user requests Zhang et al. (2020); Roller et al. (2021). The popularity of conversational systems has grown unprecedentedly with the advent of ChatGPT, which showcases exceptional proficiency in context understanding and response generation with large language models (LLMs). Recent studies observe that, compared with current fine-tuned methods, ChatGPT can still achieve competitive performance under the zero-shot setting on different dialogue problems, such as knowledge-grounded dialogues Bang et al. (2023), task-oriented dialogues Zhang et al. (2023), and emotion-aware dialogues Zhao et al. (2023).

Despite the power of ChatGPT, there are still several limitations1, such as providing randomly-guessed answers to ambiguous user queries or failing to refuse problematic user requests. These kinds of capabilities are typically regarded as the _proactivity_ of the conversational system Deng et al. (2023), where the system can create or control the conversation to achieve the conversational goals by taking initiative and anticipating impacts on itself or human users. Thus, it raises the question: _Are these LLM-based conversational systems equipped to manage proactive dialogue problems?_

In this work, we conduct the first comprehensive analysis of LLM-based conversational systems regarding three common aspects of proactive dialogue systems, including clarification Guo et al. (2021); Deng et al. (2022), target-guided Tang et al. (2019); Wu et al. (2019), and non-collaborative dialogues Zhan et al. (2022).

Footnote 1: as stated in its official blog: https://openai.com/blog/chatgpt/

Motivated by the emergent capabilities of LLMs Wei et al. (2022a,b) on reasoning over texts, some recent studies investigate in-context learning or chain-of-thought prompting schemes for planning Huang et al. (2022) or taking actions Yao et al. (2022) in interactive environments.
Similarly, strategy learning and goal planning are of great importance in proactive dialogue systems. In order to enhance the proactivity of LLM-based conversational systems, we design the proactive chain-of-thought prompting (ProCoT) scheme. As shown in Figure 1, with standard prompting, LLM-based systems directly provide a randomly-guessed answer to the ambiguous user question (left), and generate a general bargain response without any negotiation strategy (right). When providing the system with options to take different dialogue acts (proactive prompting), the generated responses are unaware of the conversational goal, producing, for example, under-specified clarification questions (left) and conservative negotiation responses (right). To this end, ProCoT first instructs the system to generate descriptive thoughts about intermediate steps of reasoning and planning for reaching the conversational goal, and then to make the decision on the next action to take. Finally, the system generates an appropriate response based on the decided action.

We conduct extensive experiments with two LLM-based conversational systems, ChatGPT and an open-sourced model, Vicuna (Chiang et al., 2023). With the aforementioned three types of prompting schemes, we compare these LLM-based conversational systems with fine-tuned SOTA dialogue models. The main contributions of this work can be summarized as follows:

* This work presents the first comprehensive evaluation of the proactivity of LLM-based dialogue systems2, including handling clarification, target-guided, and non-collaborative dialogues.

Footnote 2: https://github.com/dengyang17/LLM-Proactive

* We design the proactive chain-of-thought prompting scheme to endow LLM-based dialogue systems with the capability of planning and taking the initiative towards the conversational goal.

* The main findings of the evaluation of LLM-based dialogue systems include: 1) They barely ask clarification questions when encountering ambiguous queries. ProCoT largely overcomes this issue, but the performance is still unsatisfactory in domain-specific applications (§4.1). 2) They are proficient at performing topic shifting towards the designated target, but tend to make aggressive topic transitions. ProCoT further improves this capability by planning a smoother transition (§4.2). 3) They fail to make strategic decisions in non-collaborative dialogues, even with ProCoT prompting (§4.3).

## 2 Related Works

Proactive Dialogues. Recent years have witnessed many advanced designs for developing proactive dialogue systems Liao et al. (2023) for various applications. For example, target-guided dialogues aim to proactively lead the conversation to either a designated target topic Tang et al. (2019) or a pre-defined knowledge entity Wu et al. (2019). Existing studies typically adopt keyword transition Qin et al. (2020); Zhong et al. (2021) or knowledge graph reasoning Yang et al. (2022); Lei et al. (2022) techniques to proactively plan the topic thread towards the target. Besides, in information-seeking dialogues, proactive dialogue systems can ask clarification questions to resolve the ambiguity of the query or question in conversational search and recommendation Aliannejadi et al. (2021); Deng et al. (2021) and conversational question answering Guo et al. (2021); Deng et al. (2022).
In addition, under the non-collaborative setting, the system and the user have competing goals towards task completion, but the system aims to proactively reach an agreement favorable to itself (Zhou et al., 2020), such as negotiating a product price (He et al., 2018) or persuading users to make a donation (Wang et al., 2019).

Figure 1: Examples of three kinds of prompting schemes for proactive dialogues. In the example of non-collaborative dialogue, the system plays the role of “Buyer”, and the sale-to-list (SL) ratio shows the effectiveness of negotiation, calculated by (listed price \(-\) bargain price)\(/(\)listed price \(-\) buyer target price\()\). A higher ratio means the current bargain price is closer to the target.

Large Language Models for Dialogues. Previous dialogue systems, such as DialoGPT (Zhang et al., 2020), Meena (Adiwardana et al., 2020), BlenderBot (Roller et al., 2021), and LaMDA (Thoppilan et al., 2022), typically fine-tune pre-trained language models on public dialogue data. Inspired by the success of ChatGPT, recent practice builds dialogue systems by conducting supervised fine-tuning of open-source large language models, such as LLaMA (Touvron et al., 2023), on either constructed instruction-following examples or conversation data distilled from ChatGPT. For example, Alpaca (Taori et al., 2023) adopts Self-Instruct (Wang et al., 2022) to collect instruction-following examples from GPT-3.5 (Ouyang et al., 2022), while Dolly3 relies on large-scale human-annotated instruction-following examples. Vicuna (Chiang et al., 2023) and Baize (Xu et al., 2023) leverage user-shared or self-chat conversation data generated by ChatGPT. As all these LLM-based dialogue systems are trained to follow the user's instruction, the question remains whether these systems can take the initiative in handling proactive dialogues.

Footnote 3: https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm

Prompting in Dialogue Systems. To induce knowledge from PLMs, various prompting methods have been designed for zero-shot or few-shot learning in dialogue applications, such as task-oriented dialogues (Lee et al., 2021; Mi et al., 2022), knowledge-grounded dialogues (Shuster et al., 2022; Liu et al., 2022), and open-domain dialogues (Chen et al., 2023; Lee et al., 2023). Chen et al. (2023) propose to prompt LLMs for controllable response generation in emotional support and persuasion dialogues, conditioned on the ground-truth dialogue strategies. In this work, we aim at prompting LLMs to proactively interact with users.

## 3 Prompting LLMs to be Proactive

We describe the prompting schemes in a general form, including standard prompting, proactive prompting, and proactive chain-of-thought (ProCoT) prompting. Two specific examples are presented in Figure 1.

Standard Prompting. To instruct LLMs to perform specific dialogue tasks, the typical prompting scheme can be formulated as

\[p(r|\mathcal{D},\mathcal{C}). \tag{1}\]

Given the task background \(\mathcal{D}\) and the conversation history \(\mathcal{C}\), the LLM is instructed to generate the response \(r\). Specifically, the task background can be the grounded document in clarification dialogues or the target description in target-guided dialogues.
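As a concrete, purely illustrative rendering of this scheme, a standard prompt for the clarification task might be assembled as follows. The template wording and the `query_llm` helper are our assumptions, not the paper's exact prompts (those are listed in its Appendix D):

```python
def standard_prompt(background: str, history: str) -> str:
    # p(r | D, C): given background D and conversation history C,
    # instruct the LLM to respond directly, with no action choice.
    return (f"Document: {background}\n"
            f"Conversation: {history}\n"
            f"Generate a response to the last user question.")

# response = query_llm(standard_prompt(doc, history))  # hypothetical LLM call
```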
Proactive Prompting. Proactive prompting provides alternative options for LLMs to decide what kind of action should be taken in the response, instead of simply responding to the instruction. It can be formulated as:

\[p(a,r|\mathcal{D},\mathcal{C},\mathcal{A}). \tag{2}\]

Given the task background \(\mathcal{D}\), the conversation history \(\mathcal{C}\), and a set of possible dialogue acts \(\mathcal{A}\), the LLM is instructed to select the most appropriate dialogue act \(a\in\mathcal{A}\) and then generate the response \(r\). For example, the dialogue act can be _Ask a Clarification Question_ or _Directly Answer the Question_ in clarification dialogues. It can also be one of several negotiation strategies in non-collaborative dialogues or candidate conversation topics in target-guided dialogues.

Proactive Chain-of-Thought Prompting. In order to endow LLMs with the capability of planning and taking the initiative towards the ultimate goal, we develop the proactive chain-of-thought prompting scheme. ProCoT first analyzes the next action to take by performing dynamic reasoning and planning for reaching the conversational goal; the response is then generated based on the decided action. ProCoT can be formulated as:

\[p(t,a,r|\mathcal{D},\mathcal{C},\mathcal{A}), \tag{3}\]

where \(t\) is the thought description for the decision-making process of the next action. For example, \(t\) can be the ambiguity analysis of the user question at the current turn in clarification dialogues, or the topic transition analysis of the current topic in target-guided dialogues.

## 4 Evaluation

We evaluate the proactivity of LLM-based conversational systems from three perspectives: the capability of asking clarification questions (§4.1), guiding the conversation towards the designated target (§4.2), and strategically handling conflicting goals (§4.3).

### Clarification Dialogues

Clarification in information-seeking dialogues (Zamani et al., 2022) refers to the process of seeking further information or details to better understand the topic or question at hand. In this context, clarification is an important part of the dialogue, as it helps ensure that the information being shared is accurate and complete.

#### 4.1.1 Problem Definition

Following previous studies (Aliannejadi et al., 2021; Guo et al., 2021; Deng et al., 2022), the problem of asking clarification questions can be decomposed into two subtasks: 1) _Clarification Need Prediction_, identifying the necessity of clarification in the current turn, and 2) _Clarification Question Generation_, producing an appropriate clarifying question if needed. Given the grounded document \(d\) and the dialogue context \(\mathcal{C}=\{q_{1},a_{1},...,q_{t-1},a_{t-1},q_{t}\}\), the dialogue system aims to first predict the binary ambiguity label \(y\) on whether the current question \(q_{t}\) needs to be clarified. If so, a corresponding clarification question should be generated as the response \(a_{t}\) for clarifying the ambiguity.

#### 4.1.2 Experimental Setups

Datasets. Two datasets are adopted for evaluating the capability of asking clarification questions in LLM-based dialogue systems: 1) **Abg-CoQA** (Guo et al., 2021) and 2) **PACIFIC** (Deng et al., 2022). Detailed descriptions can be found in Appendix A.
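Extending the standard-prompt sketch above, the proactive and ProCoT schemes add the act set \(\mathcal{A}\) and, for ProCoT, a thought \(t\) before the action decision. Again, the template wording is our illustrative assumption rather than the paper's exact prompt (see its Appendix D):

```python
def proactive_prompt(background: str, history: str, acts: list[str]) -> str:
    # p(a, r | D, C, A): select a dialogue act a from A, then respond.
    return (f"Document: {background}\nConversation: {history}\n"
            f"Possible actions: {', '.join(acts)}.\n"
            f"Select the most appropriate action, then generate the response.")

def procot_prompt(background: str, history: str, acts: list[str]) -> str:
    # p(t, a, r | D, C, A): first produce a thought t analysing the situation,
    # then decide the act a, then generate the response r.
    return (f"Document: {background}\nConversation: {history}\n"
            f"Possible actions: {', '.join(acts)}.\n"
            f"First, analyse whether the last question is ambiguous and why. "
            f"Then select the most appropriate action and generate the response.")

acts = ["Directly Answer the Question", "Ask a Clarification Question"]
# response = query_llm(procot_prompt(doc, history, acts))  # hypothetical call
```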
Evaluation Metrics. Following previous studies (Guo et al., 2021; Deng et al., 2022), we use Precision, Recall, and F1 for the evaluation of _Clarification Need Prediction_, and BLEU-2 and ROUGE-2 (F1) for the evaluation of _Clarification Question Generation_. In addition, since automatic lexical matching metrics may fail to truly estimate the clarification capability of the generated clarifying questions (Guo et al., 2021), we also adopt human evaluation to score whether the generated question is helpful for clarifying the existing ambiguity.

Usage of LLMs. To facilitate reproducibility, we adopt a static version of ChatGPT, _i.e._, gpt-3.5-turbo-0301, and set the temperature to 0 to generate deterministic outputs for identical inputs. In addition, we adopt an open-source LLM-based conversational system, _i.e._, Vicuna-13B-delta-v1.1 (https://github.com/lm-sys/FastChat), for evaluation. The maximum number of new tokens is set to 128 for generation.

Prompting Schemes. We evaluate the three prompting schemes introduced in Section 3: standard prompting, proactive prompting, and proactive chain-of-thought prompting. In addition, we report their results under both zero-shot and few-shot settings. Due to the maximum sequence length limitation in Vicuna (2,048 tokens), we only apply one-shot in-context learning for comparisons. The complete prompts adopted for evaluation are presented in Appendix D.

#### 4.1.3 Experimental Results

\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & & & \multicolumn{5}{c}{Abg-CoQA\({}^{*}\)} & \multicolumn{5}{c}{PACIFIC\({}^{**}\)} \\ \cline{4-13} Method & Shot & Prompt & P & R & F1 & BLEU-1 & Human & P & R & F1 & ROUGE-2 & Human \\ \hline Baseline & - & - & 19.0 & 26.6 & 22.1 & 36.5 & 30.0 & 78.7 & 79.2 & 79.0 & 69.2 & 38.2 \\ SOTA & - & - & 30.0 & 19.5 & 23.6 & 38.2 & 56.0 & 87.4 & 86.6 & 86.9 & 90.7 & 80.1 \\ \hline \multirow{6}{*}{Vicuna-13B} & 0 & Standard & - & - & - & 11.3 & 0.0 & - & - & - & 1.2 & 0.0 \\ & 1 & Standard & - & - & - & 11.0 & 0.0 & - & - & - & 2.5 & 0.0 \\ & 0 & Proactive & 13.0 & 2.4 & 4.1 & 13.2 & 0.0 & 13.8 & 1.3 & 2.3 & 2.3 & 0.0 \\ & 1 & Proactive & **16.0** & 9.8 & 12.1 & 13.2 & 4.5 & 0.0 & 0.0 & 0.0 & 3.3 & 0.0 \\ & 0 & ProCoT & 6.7 & 0.8 & 1.4 & 21.3 & 9.1 & 26.8 & **5.9** & 9.7 & 3.8 & 10.5 \\ & 1 & ProCoT & 14.4 & **25.2** & **18.3** & **23.7** & **22.7** & 20.2 & **40.9** & **27.0** & **41.3** & **33.1** \\ \hline \multirow{6}{*}{ChatGPT} & 0 & Standard & - & - & - & 12.1 & 0.0 & - & - & - & 2.2 & 0.0 \\ & 1 & Standard & - & - & - & 12.3 & 0.0 & - & - & - & 2.0 & 0.0 \\ & 0 & Proactive & 15.1 & 50.7 & 22.0 & 13.7 & 17.6 & 18.2 & 20.9 & 19.4 & 2.9 & 0.0 \\ & 1 & Proactive & **27.4** & 16.3 & 20.4 & **23.4** & 23.5 & **19.1** & 16.6 & 17.7 & 14.0 & 12.5 \\ & 0 & ProCoT & 13.8 & **87.8** & 23.8 & 21.6 & 32.4 & 17.9 & **63.8** & **28.0** & **21.5** & 26.7 \\ & 1 & ProCoT & 17.6 & 66.7 & **27.9** & 18.4 & **45.9** & 18.7 & 54.1 & 27.7 & 16.2 & **35.8** \\ \hline \hline \end{tabular} \end{table} Table 1: Experimental results on Abg-CoQA and PACIFIC datasets. \({}^{*}\)Baseline and SOTA results are adopted from Guo et al. (2021). \({}^{**}\)The baseline method is fine-tuned T5, while the SOTA method is UniPCQA (Deng et al., 2022).

Table 1 summarizes the evaluation results on the Abg-CoQA and PACIFIC datasets. There are several
notable observations, as follows:

* Standard Prompting: 1) ChatGPT never asks clarification questions when encountering ambiguous queries. 2) One-shot in-context learning (ICL) cannot provide this ability either.

* Proactive Prompting: 1) Given the option of clarification, Vicuna still barely takes this action, while ChatGPT becomes capable of asking clarification questions. 2) One-shot ICL further improves the performance of proactive prompting.

* ProCoT Prompting: 1) ChatGPT achieves competitive performance with SOTA fine-tuned methods on the open-domain problem, _i.e._, Abg-CoQA. 2) The performance on the domain-specific task, _i.e._, PACIFIC (finance), is still far behind the fine-tuned method. 3) Zero-shot ProCoT does not work in Vicuna, but one-shot ICL can largely improve the performance.

Overall, **LLM-based conversational systems fail to ask clarification questions** if there is no related instruction, even with a demonstration. **ProCoT effectively endows LLM-based conversational systems with the capability of asking clarification questions**, so that they can achieve competitive performance with fine-tuned SOTA methods on the task in the general domain. However, **for the domain-specific problem, there is still a noticeable gap from the fine-tuned methods**.

#### 4.1.4 Error Analysis

In order to find out why LLM-based dialogue systems with ProCoT prompting fall short of handling domain-specific clarification dialogues, _i.e._, PACIFIC in the finance domain, we randomly sample 100 error cases from each dataset for analysis (all cases are generated by ChatGPT with one-shot ProCoT). We categorize these failure cases into five groups: _Wrong Clarification Need Prediction_, _Wrong Aspect_, _Under-specified Clarification_, _Over-specified Clarification_, and _Generation Error_. Details and examples can be found in Appendix B. The statistics of the error analysis are presented in Table 2. It can be observed that more failure cases are attributed to the wrong aspect and under-specified clarification in PACIFIC. This indicates that ChatGPT may lack some of the domain knowledge required for asking precise and specific clarification questions.

### Target-guided Dialogues

Instead of merely making consistent responses to user-oriented topics, a dialogue system for target-guided dialogues is required to proactively lead the conversation topics towards a designated target Tang et al. (2019). Depending on the application, the target can be topical keywords Zhong et al. (2021), knowledge entities Wu et al. (2019), items to be recommended Zhou et al. (2020), etc.

#### 4.2.1 Problem Definition

Given a target \(t\) that is only presented to the agent and is unknown to the user, the dialogue begins from an arbitrary initial topic, and the system needs to produce multiple turns of responses \(\{u_{n}\}\) to lead the conversation towards the target in the end. The produced responses should satisfy (i) **transition smoothness**: natural and appropriate content given the dialogue history, and (ii) **target achievement**: driving the conversation to reach the designated target. The problem is typically decomposed into two subtasks Tang et al. (2019); Zhong et al. (2021); Yang et al. (2022): next topic selection and transition response generation.

#### 4.2.2 Experimental Setups

Datasets. We first conduct a turn-level evaluation of the target-guided capability on the next-turn target-oriented dataset OTTers (Sevegnani et al., 2021),
which requires the dialogue system to proactively bridge the current conversation topic to approach the target. Furthermore, we adopt TGConv Yang et al. (2022) to test the ability to guide the user to the target topic in multi-turn conversations, as the dialogue-level evaluation. Detailed descriptions can be found in Appendix A.

Evaluation Metrics. Following previous studies Sevegnani et al. (2021); Yang et al. (2022), we adopt hits@\(k\) (\(k\in[1,3]\)) for evaluating next topic prediction, and ROUGE-L, METEOR, and CIDEr scores for the evaluation of response generation on the OTTers dataset. As for the dialogue-level evaluation on the TGConv dataset, we follow existing studies Yang et al. (2022); Wang et al. (2023) and simulate multi-turn conversations via self-play Tang et al. (2019), where the simulated user is unaware of the target topic. Three aspects are evaluated: 1) **Succ.** is the success rate of generating the target word within 8 turns of conversation; 2) **Turns** is the average number of turns over all dialogues that successfully reach the target word; and 3) **Coh.** is the contextual semantic similarity between the last utterance and the generated response, measured by MiniLM Wang et al. (2020).

\begin{table} \begin{tabular}{l c c} \hline \hline & Abg-CoQA & PACIFIC \\ \hline Wrong Clari. Need Pred. & 52\% & 40\% \\ Wrong Aspect & 10\% & 18\% \\ Under-spec. Clari. & 8\% & 14\% \\ Over-spec. Clari. & 7\% & 3\% \\ Generation Error & 23\% & 26\% \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics of error analysis.

Baselines. We report the results of several strong fine-tuned baseline methods for target-guided dialogues, including GPT-2 Radford et al. (2019), MultiGen Ji et al. (2020), DKRN Qin et al. (2020), CKC Zhong et al. (2021), TopKG Yang et al. (2022), and Color Wang et al. (2023).

#### 4.2.3 Turn-level Evaluation

Table 3 summarizes the turn-level evaluation results on the OTTers dataset. There are several notable observations, as follows:

* Standard Prompting: 1) For next-topic prediction, ChatGPT already achieves better performance than fine-tuned methods by a noticeable margin. 2) For transition response generation, automatic evaluation metrics indicate performance close to that of fine-tuned methods in lexical similarity with the reference response.

* Proactive Prompting: It is effective in the smaller-size LLM, _i.e._, Vicuna, but not in ChatGPT.

* ProCoT Prompting: 1) ProCoT is effective in both Vicuna and ChatGPT. 2) One-shot ICL further improves the performance of ProCoT prompting on target-guided topic shifting.

#### 4.2.4 Dialogue-level Evaluation

Table 4 summarizes the dialogue-level evaluation results on the TGConv dataset. Results show an overwhelming capability of ChatGPT for controllable response generation: the target topics are almost always achieved within two turns, which means that ChatGPT aggressively generates the response containing the target topic without planning a smooth multi-turn conversation. As for Vicuna, we draw the following observations:

* Standard Prompting: 1) LLM-based dialogue systems can achieve a high success rate of reaching the designated target using standard prompting. 2) Similar to ChatGPT, the target is reached on average in two turns, which means the system tends to directly generate the response with the target topic. 3) The coherence score is relatively low, indicating that the topic transition is aggressive.
* Proactive Prompting: Although it improves the coherence of the generated responses, the success rate is quite low, and the systems still tend to make a direct topic transition to the designated target.

* ProCoT Prompting: With ProCoT, one-shot Vicuna effectively outperforms all the fine-tuned SOTA methods in successfully guiding the conversation towards the designated target, with a smoother and more interactive conversation.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & & & \multicolumn{3}{c}{**Easy Target**} & \multicolumn{3}{c}{**Hard Target**} \\ \cline{4-9} Method & Shot & Prompt & Succ.(\%) & Turns & Coh. & Succ.(\%) & Turns & Coh. \\ \hline GPT2 & - & - & 22.3 & 2.86 & 0.23 & 17.3 & 2.94 & 0.21 \\ MultiGen & - & - & 26.7 & 2.55 & 0.21 & 19.6 & 7.31 & 0.24 \\ DKRN & - & - & 38.6 & 4.24 & 0.33 & 21.7 & 7.19 & 0.31 \\ CKC & - & - & 41.9 & 4.08 & 0.35 & 24.8 & 6.88 & 0.33 \\ TopKG & - & - & 48.9 & 3.95 & 0.31 & 27.3 & 4.96 & 0.33 \\ Color & - & - & 66.3 & - & 0.36 & 201 & - & 0.35 \\ \hline \multirow{6}{*}{Vicuna-13B} & 0 & Standard & 63.0 & **2.63** & 0.43 & 62.5 & **2.45** & 0.39 \\ & 1 & Standard & 62.7 & 2.83 & 0.45 & **65.0** & 2.90 & 0.43 \\ & 0 & Proactive & 37.8 & 2.71 & 0.48 & 35.6 & 2.56 & **0.55** \\ & 1 & Proactive & 48.3 & 2.71 & 0.50 & 34.6 & 2.95 & 0.51 \\ & 0 & ProCoT & 65.2 & 4.22 & 0.49 & 54.9 & 4.17 & 0.45 \\ & 1 & ProCoT & **72.3** & 3.55 & **0.52** & 59.8 & 3.81 & 0.48 \\ \hline \multirow{6}{*}{ChatGPT} & 0 & Standard & **97.5** & **2.26** & 0.38 & **96.3** & 2.30 & 0.41 \\ & 1 & Standard & 96.3 & 2.42 & 0.42 & 93.5 & **2.28** & 0.38 \\ & 0 & Proactive & 85.9 & 3.20 & **0.47** & 83.0 & 2.83 & **0.43** \\ & 1 & Proactive & 90.7 & 2.86 & 0.36 & 86.2 & 2.94 & 0.31 \\ & 0 & ProCoT & 96.3 & 2.47 & 0.41 & 92.0 & 2.29 & 0.34 \\ & 1 & ProCoT & 95.9 & 2.63 & 0.45 & 92.1 & 2.47 & 0.39 \\ \hline \hline \end{tabular} \end{table} Table 4: Dialogue-level evaluation results on target-guided open-domain dialogues. Note that for “Turns” and “Coh.”, higher/lower is not necessarily better.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & & & \multicolumn{3}{c}{**Response Generation**} & \multicolumn{2}{c}{**Next Topic Prediction**} \\ \cline{4-8} Method & Shot & Prompt & BLEU & METEOR & R-L & hits@1 & hits@3 \\ \hline GPT2 & - & - & 11.58 & 10.26 & 17.67 & 4.39 & 15.79 \\ MultiGen & - & - & 13.57 & 12.51 & 26.27 & 6.58 & 20.51 \\ DKRN & - & - & 12.86 & 11.90 & 21.52 & 4.91 & 17.72 \\ CKC & - & - & 13.34 & 11.65 & 24.77 & 6.87 & 21.89 \\ TopKG & - & - & 15.35 & 13.41 & 27.16 & 7.78 & 22.06 \\ \hline \multirow{6}{*}{Vicuna-13B} & 0 & Standard & 10.01 & 13.27 & 16.00 & 12.01 & 19.03 \\ & 1 & Standard & 10.63 & 14.81 & 17.53 & 12.10 & 16.13 \\ & 0 & Proactive & 1.41 & 18.45 & 15.45 & 9.41 & 19.89 \\ & 1 & Proactive & **13.87** & **20.96** & **21.36** & 12.90 & **22.31** \\ & 0 & ProCoT & 5.27 & 16.59 & 15.96 & 15.16 & 18.01 \\ & 1 & ProCoT & 13.38 & 19.70 & 20.62 & **15.05** & 20.70 \\ \hline \multirow{6}{*}{ChatGPT} & 0 & Standard & 11.34 & 20.62 & **18.26** & 13.44 & 27.69 \\ & 1 & Standard & 14.41 & 19.29 & 17.73 & 15.86 & 26.34 \\ & 0 & Proactive & 14.09 & **21.66** & 15.56 & 7.53 & 22.58 \\ & 1 & Proactive & **14.74** & 19.59 & 16.29 & 8.60 & 21.23 \\ & 0 & ProCoT & 10.20 & 19.57 & 15.97 & 12.63 & 23.92 \\ & 1 & ProCoT & 9.63 & 19.82 & 17.19 & **17.74** & **29.57** \\ \hline \hline \end{tabular} \end{table} Table 3: Turn-level evaluation results on Next Topic Prediction and Transition Response Generation.
Overall, **LLM-based dialogue systems are proficient at performing topic shifting towards the designated target**. However, when using standard prompting, these systems **tend to make aggressive topic transitions**, as they possess a powerful capability of controllable generation. **ProCoT prompting enables a smoother topic transition in target-guided dialogues** with LLMs.

### Non-collaborative Dialogues

Unlike collaborative dialogue settings, where the user and the system work together to reach a common goal (_e.g._, booking hotels), in non-collaborative dialogues the user and the system have a conflict of interest but aim to strategically communicate to reach an agreement (_e.g._, negotiation) (Zhan et al., 2022). Therefore, the system is required to leverage a series of proactive dialogue strategies to reach an agreement favorable to itself, instead of passively following the user's intents.

#### 4.3.1 Problem Definition

Given the dialogue history \(\mathcal{C}=\{u_{1},...,u_{t-1}\}\) and the dialogue background \(d\), the goal is to generate a response \(u_{t}\) with an appropriate dialogue strategy \(s_{t}\) that can lead to a consensus state between the system's and the user's goals. A set of dialogue strategies \(\mathcal{S}\) is pre-defined for prediction. Depending on the application, the dialogue strategy can consist of coarse dialogue act labels or fine-grained strategy labels. The dialogue background includes the system's goal and the related grounded information, such as item descriptions in bargain negotiation (He et al., 2018) or the user profile in persuasion dialogues (Wang et al., 2019).

#### 4.3.2 Experimental Setups

Datasets. We use the **CraigslistBargain** dataset (He et al., 2018) for evaluating the capability of strategically handling non-collaboration in LLM-based dialogue systems. The dataset was created under the bargain negotiation setting, where the buyer and the seller negotiate the price of an item on sale. Detailed descriptions can be found in Appendix A.

Evaluation Metrics. Following the previous study (Joshi et al., 2021), we conduct a comprehensive evaluation over three subtasks: negotiation strategy prediction, dialogue act prediction, and response generation. We report the F1 and ROC AUC scores for strategy prediction and dialogue act prediction, where the former is a multi-label prediction problem. For response generation, we adopt the BLEU score and BERTScore (Zhang et al., 2020) for evaluation.

Usage of LLMs & Prompting Schemes. The adopted LLMs are the same as before, but the maximum number of new tokens is set to 256, as more information needs to be generated, including negotiation strategies and dialogue acts.

Baselines. We compare several fine-tuned SOTA baselines for negotiation dialogues, including FeHED (Zhou et al., 2020), HED+RNN/TFM, and DialoGraph (Joshi et al., 2021).
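Since strategy prediction is scored as a multi-label problem with macro/micro/weighted F1 and ROC AUC, the following minimal scikit-learn sketch shows how such scores can be computed; the label matrix and prediction scores are toy assumptions, not the paper's data:

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

# Toy multi-label setup: 4 dialogue turns, 3 hypothetical negotiation strategies.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.7],
                    [0.1, 0.8, 0.3],
                    [0.6, 0.7, 0.2],
                    [0.3, 0.1, 0.9]])
y_pred = (y_score >= 0.5).astype(int)  # threshold scores into label decisions

for avg in ("macro", "micro", "weighted"):
    print(avg, "F1:     ", f1_score(y_true, y_pred, average=avg))
    print(avg, "ROC AUC:", roc_auc_score(y_true, y_score, average=avg))
```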
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c c} \hline \hline & & & \multicolumn{6}{c}{**Negotiation Strategies**} & \multicolumn{6}{c}{**Dialogue Acts**} & \multicolumn{6}{c}{**Response Generation**} \\ \cline{3-14} & & & \multicolumn{3}{c}{F1} & \multicolumn{3}{c}{ROC AUC} & \multicolumn{3}{c}{F1} & \multicolumn{3}{c}{ROC AUC} & \multicolumn{3}{c}{BERTTScore} \\ \cline{3-14} Method & Shot & Prompt & Macro & Micro & Weighted & Macro & Micro & Weighted & Macro & Micro & Weighted & Macro & Weighted & BLEU & P & R & F1 \\ \hline FeHED & - & - & 17.6 & 25.6 & 36.3 & 55.8 & 61.7 & 54.7 & 20.6 & 37.4 & 30.6 & 76.9 & 79.2 & 23.7 & 27.1 & 26.8 & 27.0 \\ HED+RNN & - & - & 23.2 & 26.7 & 42.4 & 65.3 & 65.3 & 60.4 & 33.0 & 46.2 & 42.8 & 83.1 & 84.2 & 22.5 & 22.9 & 22.7 & 22.8 \\ HED+TFM & - & - & 26.3 & 32.1 & 43.3 & 68.2 & 71.8 & 61.8 & 32.5 & 44.6 & 42.0 & 85.6 & 85.1 & 24.4 & 27.4 & 28.1 & 27.7 \\ DialoGraph & - & - & 26.1 & 34.1 & 43.5 & 68.1 & 72.0 & 61.8 & 33.4 & 45.8 & 43.7 & 85.6 & 85.4 & 24.7 & 22.8 & 28.3 & 28.1 \\ \hline \multirow{6}{*}{Vicuna-13B} & 0 & Standard & - & - & - & - & - & - & - & - & - & - & - & - & - & 1.7 & -28.9 & 1.7 & -14.0 \\ & 1 & Standard & - & - & - & - & - & - & - & - & - & - & - & 1.9 & **-3.1** & -2.0 & -2.8 \\ & 0 & Proactive & **20.6** & **25.2** & **39.6** & **51.1** & 48.2 & **49.8** & 4.2 & **18.5** & 8.4 & 50.3 & 49.8 & 2.3 & -6.1 & -7.0 & -7.0 \\ & 1 & Proactive & 15.2 & 21.0 & 26.0 & 50.0 & 48.8 & 49.5 & 6.7 & 12.0 & 11.4 & 50.8 & 51.3 & 2.6 & -10.3 & **8.9** & -0.9 \\ & 0 & ProCoT & 19.0 & 24.0 & 38.5 & 49.7 & 47.4 & 49.3 & 3.6 & 13.5 & 7.0 & 50.3 & 49.4 & 2.6 & -7.5 & -4.1 & -6.2 \\ & 1 & ProCoT & 17.8 & 23.8 & 31.9 & 48.9 & **50.0** & 49.0 & **7.7** & 14.0 & **13.9** & **52.5** & **52.2** & **2.6** & -9.0 & 7.6 & **-0.9** \\ \hline \multirow{6}{*}{ChaGPT} & 0 & Standard & - & - & - & - & - & - & - & - & - & - & - & 2.3 & -16.4 & 8.3 & -4.3 \\ & 1 & Standard & - & - & - & - & - & - & - & - & - & - & - & - & 3.1 & -3.4 & 6.9 & 0.7 \\ \cline{1-1} & 0 & Proactive & 12.8 & 19.2 & 19.6 & 51.3 & 49.3 & 50.3 & 13.3 & 29.7 & 19.5 & 56.3 & 60.0 & **4.2** & -4.3 & 7.3 & 1.3 \\ \cline{1-1} & 1 & Proactive & 13.7 & 22.4 & 20.8 & 50.9 & 51.6 & 51.2 & 12.0 & 26.1 & 17.6 & 54.9 & 58.0 & 3.9 & -4.3 & 10.4 & **2.9** \\ \cline{1-1} & 0 & ProCoT & 10.8 & 17.5 & 16.0 & 50.4 & 47.5 & 50.6 & 10.1 & 26.2 & 16.8 & 54.2 & 57.7 & 3.7 & **-0.2** & -0.9 & -0.9 \\ \cline{1-1} & 1 & ProCoT & **15.1** & **22.8** & **22.9** & **55.5** & **52.5** & **53.1** & **16.3** & **33.3** & **24.4** & **58.2** & **62.8** & 3.9 & -7.1 & **10.5** & 1.6 \\ \hline \hline \end{tabular} \end{table} Table 5: Evaluation results on Negotiation Strategy Prediction, Dialogue Act Prediction, and Response Generation. #### 4.3.3 Experimental Results Table 5 summarizes the experimental results on the CraigslistBargain dataset. Experimental results show that LLM-based dialogue systems fail to predict appropriate negotiation strategies and dialogue acts in non-collaborative dialogues, further resulting in a low performance of response generation. Chen et al. (2023b) empirically show that, given the optimal planned strategy, ChatGPT achieves strong performance on controllable response generation in strategy-based dialogues. Drawing upon these findings, **the key challenge of LLMs in handling non-collaborative dialogues is how to effectively optimize the strategy planning**. 
#### 4.3.4 Analysis of Strategy Learning

Figure 2 presents the analysis of the relationships between target and predicted dialogue acts by ChatGPT under the three types of prompting schemes. For standard prompting, we observe typical mistakes: 1) the system tends to propose the initial bargain price (init-price) instead of greetings (intro); 2) the system often directly accepts the buyer's offer (accept) when it is supposed to offer another price for negotiation (offer). On the other hand, Proactive and ProCoT prompting share similar patterns of mistakes, where ChatGPT tends to propose a counter price (counter-price) to negotiate with the buyer.

Figure 2: Heatmaps of the relationships between target and predicted dialogue acts. As no dialogue act is predicted in standard prompting, a dialogue act classifier is trained to identify the dialogue act of the generated response.

Figure 3 presents the analysis of the distribution of strategies selected by ChatGPT. In the reference responses, the seller often shows positive/negative sentiment to negotiate with the buyer. However, ChatGPT inclines towards conservative or concessionary strategies, such as using hedge words, showing gratitude, or proposing a counter price. Overall, we conclude that **ChatGPT tends to compromise with the buyer during the negotiation, rather than strategically taking actions to maximize its own benefit**.

Figure 3: Distribution of selected negotiation strategies. Similarly, a negotiation strategy classifier is trained to identify the negotiation strategies of the generated response in standard prompting.

## 5 Conclusion

In this work, we conduct the first comprehensive evaluation of the capability of LLM-based dialogue systems in handling proactive dialogues, including clarification, target-guided, and non-collaborative dialogues. To enhance the proactivity of LLM-based dialogue systems, we propose a proactive chain-of-thought prompting scheme that triggers the reasoning and planning capability of LLMs. The empirical analysis sheds light on the potential of LLM-based dialogue systems for proactive dialogues: 1) ProCoT largely enhances the originally poor performance of asking clarification questions, but is still limited in handling domain-specific applications. 2) LLM-based dialogue systems perform aggressive topic shifting towards the designated target, while ProCoT makes the topic planning smoother. 3) Despite their power in controllable response generation, strategy learning and planning remain the key challenge for LLM-based dialogue systems handling non-collaborative dialogues.
2305.05044
Some properties of affine $\mathcal C$-semigroups
Numerical semigroups have been extensively studied throughout the literature, and many of their invariants have been characterized. In this work, we generalize some of the most important results about symmetry, pseudo-symmetry, or fundamental gaps, to affine $\mathcal C$-semigroups. In addition, we give algorithms to compute the tree of irreducible $\mathcal C$-semigroups and $\mathcal C$-semigroups with a given Frobenius vector.
Juan Ignacio García-García, Daniel Marín-Aragón, Adrián Sánchez-Loureiro, Alberto Vigneron-Tenorio
2023-05-04T10:34:50Z
http://arxiv.org/abs/2305.05044v1
# Some properties of affine \(\mathcal{C}\)-semigroups

###### Abstract

Numerical semigroups have been extensively studied throughout the literature, and many of their invariants have been characterized. In this work, we generalize some of the most important results about symmetry, pseudo-symmetry, and fundamental gaps to affine \(\mathcal{C}\)-semigroups. In addition, we give algorithms to compute the tree of irreducible \(\mathcal{C}\)-semigroups and the \(\mathcal{C}\)-semigroups with a given Frobenius vector.

_Keywords:_ \(\mathcal{C}\)-semigroup, Frobenius element, fundamental gap, irreducible semigroup, pseudo-Frobenius element, pseudo-symmetric semigroup, special gap, symmetric semigroup.

_2020 Mathematics Subject Classification:_ 20M14 (Primary), 68R05 (Secondary).

## Introduction

A \(\mathcal{C}\)-semigroup \(S\) is a non-empty subset of \(\mathbb{N}^{p}\) (for some non-zero natural number \(p\)), containing \(0\) and closed under addition, such that \(\mathcal{C}\setminus S\) is finite, where \(\mathcal{C}\subset\mathbb{N}^{p}\) denotes the integer cone generated by \(S\). These semigroups are the natural generalization of numerical semigroups to higher dimensions. Moreover, some objects related to numerical semigroups can be generalized to \(\mathcal{C}\)-semigroups. For example, the elements in \(\mathcal{C}\setminus S\) are called _gaps_ of \(S\), and the cardinality of the gap set is called the _genus_ of \(S\); we denote this set by \(\mathcal{H}(S)\) and its cardinality by \(g(S)\). There are other objects whose generalization requires a total order on \(\mathbb{N}^{p}\). For example, the Frobenius number of a numerical semigroup is the maximum integer that does not belong to it, but its generalization to \(\mathcal{C}\)-semigroups is not unique unless we fix a total order. So, once a total order \(\prec\) on \(\mathbb{N}^{p}\) is fixed, the Frobenius element of \(S\) is \(\max_{\prec}(\mathcal{C}\setminus S)\).

Even though \(\mathcal{C}\)-semigroups frequently appear in semigroup theory, it was not until the publication of [10] that they became an object of study in their own right. This paper defines _generalized numerical semigroups_ as the \(\mathcal{C}\)-semigroups where the cone \(\mathcal{C}\) is \(\mathbb{N}^{p}\). Since 2016, several works have been devoted to studying different properties of \(\mathcal{C}\)-semigroups in general and generalized numerical semigroups in particular. For example, in [6], the authors show that any \(\mathbb{N}^{p}\)-semigroup has a unique minimal system of generators and provide an algorithm to compute its set of gaps from a set of generators of the \(\mathbb{N}^{p}\)-semigroup. In [12], an extension of Wilf's conjecture for numerical semigroups is given for \(\mathcal{C}\)-semigroups, and in [3], another one is introduced for \(\mathbb{N}^{p}\)-semigroups; this paper also studies the irreducibility of \(\mathbb{N}^{p}\)-semigroups. More recent papers about \(\mathbb{N}^{p}\)-semigroups are [1], [4], [5], [7], and [17]. For arbitrary \(\mathcal{C}\)-semigroups, the authors of [9] provide an algorithm to check whether an affine semigroup given by a generating set is a \(\mathcal{C}\)-semigroup and to compute its gap set. The main goal of this work is to generalize several results on numerical semigroups to \(\mathcal{C}\)-semigroups.
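Before the general affine setting, it may help to see these objects in the classical numerical case \(p=1\). The following is a minimal Python sketch (ours, not taken from the CommutativeMonoids library cited in this paper), assuming coprime generators so that the gap set is finite:

```python
def numerical_semigroup_elements(gens, bound):
    """Membership table for S = <gens> on {0, ..., bound}."""
    member = [False] * (bound + 1)
    member[0] = True
    for n in range(1, bound + 1):
        # n is in S iff n - g is in S for some generator g <= n
        member[n] = any(n >= g and member[n - g] for g in gens)
    return member

# Example: S = <3, 5>; gcd(3, 5) = 1, so the gap set N \ S is finite.
bound = 20  # any bound past the largest gap works here
member = numerical_semigroup_elements((3, 5), bound)
gaps = [n for n in range(bound + 1) if not member[n]]
print("H(S) =", gaps)        # [1, 2, 4, 7]
print("g(S) =", len(gaps))   # genus: 4
print("F(S) =", max(gaps))   # Frobenius number: 7
```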
A \(\mathcal{C}\)-semigroup is \(\mathcal{C}\)-reducible (reducible, for short) when it can be expressed as an intersection of two \(\mathcal{C}\)-semigroups properly containing it (see [13]); otherwise, \(S\) is \(\mathcal{C}\)-irreducible (irreducible, for short). In this work, we also characterize irreducible \(\mathcal{C}\)-semigroups in terms of their genus and their generalized Frobenius numbers. We also study when a subset of a cone \(\mathcal{C}\) is the gap set of a \(\mathcal{C}\)-semigroup, or determines it. These results are complemented by algorithms for checking the corresponding properties. Moreover, algorithms for computing several objects related to \(\mathcal{C}\)-semigroups are provided. In particular, we define a tree whose vertex set is the set of all irreducible \(\mathcal{C}\)-semigroups with a fixed Frobenius vector, and we introduce an algorithm to compute this tree. For any integer cone \(\mathcal{C}\) and any non-null element \(\mathbf{f}\in\mathcal{C}\), we give a procedure to obtain all \(\mathcal{C}\)-semigroups with Frobenius element equal to \(\mathbf{f}\).

The results of this work are illustrated with several examples. For this purpose, we have implemented all the algorithms shown in this work in our library _CommutativeMonoids_ (see [11]), dedicated to the study of numerical and affine semigroups and developed by the authors in Python [14] and C++. A notebook containing all the examples of this work can be found at https://github.com/D-marina/CommutativeMonoids/blob/master/CClassCSemigroups/SomePropertiesCSemigroup.ipynb.

The content of this work is organized as follows: Section 1 provides the reader with the necessary background for the correct understanding of the work. In Section 2, we introduce the concepts of symmetric and pseudo-symmetric \(\mathcal{C}\)-semigroups, and some characterizations of these concepts are given. In Section 3, we turn our attention to irreducible \(\mathcal{C}\)-semigroups; we prove that all such semigroups with a fixed Frobenius vector can be arranged in a tree, and we show an algorithm for computing them. Similarly, an algorithm for computing all the \(\mathcal{C}\)-semigroups with a fixed Frobenius vector is given in Section 5. Finally, in Section 4, we study the fundamental gaps of a \(\mathcal{C}\)-semigroup and, for any set \(X\subset\mathcal{C}\), we give conditions to determine whether \(\mathcal{C}\setminus X\) is a \(\mathcal{C}\)-semigroup.

## 1 Preliminaries

In this work, \(\mathbb{Q}\), \(\mathbb{Q}_{\geq}\), and \(\mathbb{N}\) denote the sets of rational numbers, non-negative rational numbers, and non-negative integer numbers, respectively. For any \(n\in\mathbb{N}\), \([n]\) denotes the set \(\{1,\ldots,n\}\). A non-degenerate rational cone in \(\mathbb{Q}_{\geq}^{p}\) is the convex hull of finitely many half-lines in \(\mathbb{Q}_{\geq}^{p}\) emanating from the origin. These cones can also be determined from their supporting hyperplanes. We consider that the integer points of a rational cone form an integer cone in \(\mathbb{N}^{p}\). It is well known that any integer cone \(\mathcal{C}\subset\mathbb{N}^{p}\) is finitely generated if and only if a rational point exists in each of its extremal rays. Moreover, any subsemigroup of \(\mathcal{C}\) is finitely generated if and only if there exists an element of the subsemigroup in each extremal ray of \(\mathcal{C}\).
Both results are proved in [2, Chapter 2], where an in-depth study on cones can also be found. We assume that any integer cone considered in this work is finitely generated. Throughout this work, we use some particular gaps in \(\mathcal{H}(S)\) whose definitions are the same for numerical semigroups [15]: * \(\mathbf{x}\in\mathcal{H}(S)\) is a _fundamental gap_ if \(2\mathbf{x},3\mathbf{x}\in S\). The set of these elements is denoted by \(\mathrm{FG}(S)\). * \(\mathbf{x}\in\mathcal{H}(S)\) is a _pseudo-Frobenius element_ if \(\mathbf{x}+(S\setminus\{0\})\subset S\), the set of pseudo-Frobenius elements of \(S\) is denoted by \(\mathrm{PF}(S)\), and its cardinality is known as the type of \(S\), \(t(S)\). * \(\mathbf{x}\in\mathcal{H}(S)\) is a _special gap_ of \(S\) if \(\mathbf{x}\in\mathrm{PF}(S)\) and \(2\mathbf{x}\in S\). We denote by \(\mathrm{SG}(S)\) the set of special gaps of \(S\). In this work, we consider different orders on some sets. On a non-empty set \(L\subset\mathbb{N}^{p}\) and \(\mathbf{x},\mathbf{y}\in\mathbb{N}^{p}\), consider the partial order \(\mathbf{x}\leq_{L}\mathbf{y}\) if \(\mathbf{y}-\mathbf{x}\in L\). Besides, we also fix \(\preceq\) a total order on \(\mathbb{N}^{p}\) determined by a monomial order. A monomial order is a total order on the set of all (monic) monomials in a given polynomial ring (see [8]). From the properties of a monomial order, the (induced) total order \(\preceq\) on \(\mathbb{N}^{p}\) satisfies: * if \(\mathbf{a}\preceq\mathbf{b}\) and \(\mathbf{c}\in\mathbb{N}^{p}\), then \(\mathbf{a}+\mathbf{c}\preceq\mathbf{b}+\mathbf{c}\), * if \(\mathbf{c}\in\mathbb{N}^{p}\), then \(0\preceq\mathbf{c}\). Every monomial order can be represented via matrices. For a nonsingular integer \((p\times p)\)-matrix \(M\) with rows \(M_{1},\ldots,M_{p}\), the \(M\)-ordering \(\prec\) is defined by \(\mathbf{a}\prec\mathbf{b}\) if and only if there exists an integer \(i\) belonging to \([p-1]\), such that \(M_{1}\mathbf{a}=M_{1}\mathbf{b},\ldots,M_{i}\mathbf{a}=M_{i}\mathbf{b}\) and \(M_{i+1}\mathbf{a}<M_{i+1}\mathbf{b}\). From the fixed total order on \(\mathbb{N}^{p}\), the Frobenius vector of \(S\), \(F(S)\), is the maximal element in \(\mathcal{H}(S)\) respect to \(\preceq\), and we set \(n(S)\) as the cardinality of \(\mathcal{N}(S)=\{\mathbf{x}\in S\mid\mathbf{x}\preceq F(S)\}\). The following lemma generalizes to \(\mathcal{C}\)-semigroups Proposition 2.26 in [15]. **Lemma 1**.: _Let \(S\) be a \(\mathcal{C}\)-semigroup. Then, \(g(S)\leq t(S)n(S)\)._ Proof.: Just as it occurs for numerical semigroups, for any \(\mathbf{x}\in\mathcal{H}(S)\), there exist \((\mathbf{f},\mathbf{s})\in\mathrm{PF}(S)\times S\) such that \(\mathbf{f}=\mathbf{x}+\mathbf{s}\), and \(\mathbf{f}_{\mathbf{x}}=\min_{\preceq}\{\mathbf{f}\in\mathrm{PF}(S)\mid \mathbf{f}-\mathbf{x}\in S\}\). Hence, the map \(\mathcal{H}(S)\to\mathrm{PF}(S)\times\mathcal{N}(S)\), defined by \(x\mapsto(\mathbf{f}_{\mathbf{x}},\mathbf{f}_{\mathbf{x}}-\mathbf{x})\) is injective, and thus \(g(S)\leq t(S)n(S)\). ## 2 Symmetric and pseudo-symmetric \(\mathcal{C}\)-semigroups Fix \(S\subset\mathbb{N}^{p}\) a \(\mathcal{C}\)-semigroup with genus \(g\). In this section, we characterize the symmetric and pseudo-symmetric \(\mathcal{C}\)-semigroups using their genus. We say that \(S\) is \(\mathcal{C}\)-irreducible when \(\mathrm{PF}(S)\) is equal to \(\{F(S)\}\) or \(\{F(S),F(S)/2\}\) (see [13]). 
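For the numerical case \(p=1\), this irreducibility test is easy to make concrete. The sketch below is ours (the paper's own implementation lives in CommutativeMonoids) and reuses the `numerical_semigroup_elements` helper from the earlier sketch; it computes \(\mathrm{PF}(S)\) by checking each gap against the generators, which suffices since \(S\) is closed under addition:

```python
def pseudo_frobenius(gens, bound):
    """PF(S) for S = <gens>: gaps x with x + s in S for all nonzero s in S.
    Checking x + g for the generators g suffices, because every nonzero s is
    a sum of generators. `bound` must exceed max(gaps) + max(gens)."""
    member = numerical_semigroup_elements(gens, bound)
    gaps = [n for n in range(bound + 1) if not member[n]]
    return [x for x in gaps if all(member[x + g] for g in gens)]

pf = pseudo_frobenius((3, 5), 30)
print("PF(S) =", pf)  # [7] = {F(S)}: <3, 5> is symmetric, hence irreducible
```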
If \(\mathrm{PF}(S)=\{F(S)\}\), we say that \(S\) is symmetric, and pseudo-symmetric when \(\mathrm{PF}(S)=\{F(S),F(S)/2\}\). For any element \(\mathbf{n}\) in \(\mathcal{C}\), let \(I_{S}(\mathbf{n})\) be the set \(\{\mathbf{s}\in S\mid\mathbf{s}\leq_{\mathcal{C}}\mathbf{n}\}\).

_Remark 2_.: Note that, for any \(\mathbf{s}\in S\), \(\mathbf{s}\in I_{S}(F(S))\) if and only if \(F(S)-\mathbf{s}\in\mathcal{H}(S)\). Thus, \(g\geq\sharp I_{S}(F(S))\).

We have the following characterizations of symmetric and pseudo-symmetric \(\mathcal{C}\)-semigroups.

**Proposition 3**.: _Let \(S\) be a \(\mathcal{C}\)-semigroup with genus \(g\). Then, \(S\) is symmetric if and only if \(g=\sharp I_{S}(F(S))\)._

Proof.: Assume that \(S\) is symmetric. Thus, \(F(S)\) is the unique pseudo-Frobenius element of \(S\). Furthermore, for any \(\mathbf{x}\in\mathcal{H}(S)\), there exists \(\mathbf{s}\in S\) such that \(\mathbf{x}+\mathbf{s}=F(S)\), that is, \(\mathbf{s}\in I_{S}(F(S))\), and then \(\sharp I_{S}(F(S))\geq g\). Since \(g\geq\sharp I_{S}(F(S))\), we conclude that \(g=\sharp I_{S}(F(S))\). Conversely, note that \(I_{S}(F(S))=\{\mathbf{s}\in S\mid F(S)-\mathbf{s}\in\mathcal{H}(S)\}\), and suppose that \(g=\sharp I_{S}(F(S))\). Hence, every \(\mathbf{x}\in\mathcal{H}(S)\setminus\{F(S)\}\) satisfies \(F(S)-\mathbf{x}\in S\), and then \(\mathbf{x}\) is not a pseudo-Frobenius element of \(S\).

**Proposition 4**.: _Let \(S\) be a \(\mathcal{C}\)-semigroup with genus \(g\). Then, \(S\) is pseudo-symmetric if and only if \(g=1+\sharp I_{S}(F(S))\) and \(F(S)/2\in\mathbb{N}^{p}\)._

Proof.: Assume that \(S\) is pseudo-symmetric; thus \(\mathrm{PF}(S)=\{F(S),F(S)/2\}\), and \(g>\sharp I_{S}(F(S))\). For all \(\mathbf{x}\in\mathcal{H}(S)\setminus\{F(S)/2\}\), there exists some \(\mathbf{s}\in S\) such that \(\mathbf{x}+\mathbf{s}=F(S)\) or \(\mathbf{x}+\mathbf{s}=F(S)/2\). In the first case, \(\mathbf{s}\in I_{S}(F(S))\). Otherwise, \(\mathbf{x}+\mathbf{s}+F(S)/2=F(S)\), and then \(\mathbf{s}+F(S)/2\) also belongs to \(I_{S}(F(S))\). Besides, \(F(S)/2+\mathbf{s}\neq F(S)\) for every \(\mathbf{s}\in S\). Hence, \(\sharp I_{S}(F(S))\geq g-1\). Conversely, suppose that \(g=\sharp I_{S}(F(S))+1\) and \(F(S)/2\in\mathbb{N}^{p}\). Hence, there exists only one \(\mathbf{x}\in\mathcal{H}(S)\setminus\{F(S)\}\) with \(\mathbf{x}+\mathbf{s}\neq F(S)\) for all \(\mathbf{s}\in S\), and thus \(\mathrm{PF}(S)=\{F(S),\mathbf{x}\}\). If \(\mathbf{x}\neq F(S)/2\), then there is \(\mathbf{s}\in S\) such that \(F(S)/2+\mathbf{s}=F(S)\), and so \(F(S)/2\in S\), which is not possible. So, \(\mathbf{x}=F(S)/2\).

Consider the Apery set of a \(\mathcal{C}\)-semigroup \(S\) relative to \(\mathbf{b}\in S\setminus\{0\}\), defined as \(\mathrm{Ap}(S,\mathbf{b})=\{\mathbf{a}\in S\mid\mathbf{a}-\mathbf{b}\in\mathcal{H}(S)\}\). The following proposition shows the relationship between the pseudo-Frobenius elements of \(S\) and its Apery set.

**Proposition 5**.: _[_13_, Proposition 16]_ _Let \(S\) be a \(\mathcal{C}\)-semigroup and \(\mathbf{b}\in S\setminus\{0\}\). Then,_

\[\mathrm{PF}(S)=\{\mathbf{a}-\mathbf{b}\mid\mathbf{a}\in\mathrm{maximals}_{\leq_{S}}\mathrm{Ap}(S,\mathbf{b})\}.\]

From this result, we can generalize Corollaries 4.12 and 4.19 in [15].

**Corollary 6**.: _Let \(S\) be a \(\mathcal{C}\)-semigroup and \(\mathbf{b}\in S\setminus\{0\}\)._
The semigroup \(S\) is symmetric if and only if \(\mathrm{maximals}_{\leq_{S}}\mathrm{Ap}(S,\mathbf{b})=\{F(S)+\mathbf{b}\}\)._ **Corollary 7**.: _Let \(S\) be a \(\mathcal{C}\)-semigroup and \(\mathbf{b}\in S\setminus\{0\}\). The semigroup \(S\) is pseudo-symmetric if and only if \(\mathrm{maximals}_{\leq_{S}}\mathrm{Ap}(S,\mathbf{b})=\{F(S)+\mathbf{b},F(S)/ 2+\mathbf{b}\}\)._ The Frobenius number of a numerical semigroup is the maximum non-negative integer that is not an element of the semigroup. We define the (generalized) Frobenius number of a \(\mathcal{C}\)-semigroup \(S\) as \(\mathcal{F}(S)=\sharp I_{S}(F(S))+g(S)\). We can easily rewrite the previous propositions 3 and 4 from this definition. **Corollary 8**.: _Let \(S\) be a \(\mathcal{C}\)-semigroup with genus \(g\). Then, \(S\) is symmetric if and only if \(2g=\mathcal{F}(S)\)._ **Corollary 9**.: _Let \(S\) be a \(\mathcal{C}\)-semigroup with genus \(g\). Then, \(S\) is pseudo-symmetric if and only if \(2g=1+\mathcal{F}(S)\) and \(F(S)/2\in\mathbb{N}^{p}\)._ These corollaries specialized to numerical semigroups or \(\mathbb{N}^{p}\)-semigroups are equivalent to Corollary 4.5 in [15], and the theorems 5.6 and 5.7 in [5], respectively. We illustrate the previous results with one easy example. _Example 10_.: Let \(\mathcal{C}\subset\mathbb{N}^{2}\) be the cone with extremal rays \(\overrightarrow{(7,3)}\) and \(\overrightarrow{(15,1)}\). The \(\mathcal{C}\)-semigroup \(S_{1}\) minimally generated by \[\Lambda_{S_{1}}=\{(3,1),(4,1),(5,1),(6,1),(7,1),(7,3),(8,1),(8,3),(9,1),(10,1),\\ (11,1),(12,1),(12,5),(13,1),(14,1),(15,1)\}\] is symmetric, while the \(S_{2}\) minimally generated by \[\Lambda_{S_{2}}=\{(3,1),(5,2),(6,1),(7,1),(7,2),(7,3),(8,1),(9,1),(10,1)\] \[(11,1),(12,1),(13,1),(14,1),(15,1)\}\] is pseudo-symmetric. Note that \(\mathrm{PF}(S_{1})=\mathrm{SG}(S_{1})=\mathcal{H}(S_{1})=\{(5,2)\}\), but \(\mathcal{H}(S_{2})=\{(4,1),(5,1),(8,2)\}\), \(\mathrm{PF}(S_{2})=\{(4,1),(8,2)\}\), and \(\mathrm{SG}(S_{2})=\{(8,2)\}\). ## 3 Trees of irreducible \(\mathcal{C}\)-semigroups This section describes a tree whose vertex set is the set of all irreducible \(\mathcal{C}\)-semigroups with a fixed Frobenius vector. Again, consider \(\mathcal{C}\subset\mathbb{N}^{p}\) an integer cone and \(\mathbf{f}\in\mathcal{C}\setminus\{0\}\). Consider a monomial order \(\preceq\) on \(\mathbb{N}^{p}\) and decompose the set \(I_{\mathcal{C}}(\mathbf{f})\) as \(I_{\mathcal{C}}(\mathbf{f})=I_{1}(\mathbf{f})\sqcup I_{2}(\mathbf{f})\) with \(I_{1}(\mathbf{f})=\{\mathbf{x}\in I_{\mathcal{C}}(\mathbf{f})\mid\mathbf{0} \neq\mathbf{x}\preceq\mathbf{f}/2\}\) and \(I_{2}(\mathbf{f})=\{\mathbf{x}\in I_{\mathcal{C}}(\mathbf{f})\mid\mathbf{x} \succ\mathbf{f}/2\}\) (when \(\mathbf{f}/2\notin\mathbb{N}^{p}\), consider \(\preceq\) as the monomial order extended to \(\mathbb{Q}_{\geq}^{p}\)). We define the \(\mathcal{C}\)-semigroup \(S(\mathbf{f})\) as \(\left(\mathcal{C}\setminus\{\mathbf{f}\}\right)\setminus I_{1}(\mathbf{f})\). This semigroup will be the root of our tree of irreducible \(\mathcal{C}\)-semigroups; this root depends on the fixed monomial order, as the following example shows. _Example 11_.: Let \(\mathcal{C}\subset\mathbb{N}^{2}\) be the integer cone with extremal rays \(\overrightarrow{(1,0)}\) and \(\overrightarrow{(1,2)}\), and \(\mathbf{f}=(4,2)\). 
Then, \(\mathbf{f}/2=(2,1)\) and \[I_{\mathcal{C}}(\mathbf{f})=\{(1,0),(1,1),(1,2),(2,0),(2,1),(2,2),(3,0),(3,1), (3,2),(4,2)\}.\] Let \(\prec_{1}\) and \(\prec_{2}\) be the orders defined by the matrices \(\left(\begin{array}{cc}1&1\\ 1&0\end{array}\right)\) and \(\left(\begin{array}{cc}1&1\\ 0&1\end{array}\right)\), respectively. In the first case, \(I_{1}(\mathbf{f})_{\prec_{1}}=\{(1,0),(1,1),(1,2),(2,0),(2,1)\}\) and \[S(\mathbf{f})_{\prec_{1}}=\langle(3,0),(4,0),(5,0),(3,1),(4,1),\\ (5,1),(2,2),(3,2),(2,3),(3,3),(4,3),(2,4),(3,4),(3,5),(3,6)\rangle.\] In the other one, \(I_{1}(\mathbf{f})_{\prec_{2}}=\{(1,0),(1,1),(2,0),(2,1),(3,0)\}\) and \[S(\mathbf{f})_{\prec_{2}}=\langle(4,0),(5,0),(6,0),(7,0),(3,1),\\ (4,1),(5,1),(6,1),(1,2),(2,2),(3,2),(2,3),(3,3)\rangle.\] The set \(S(\mathbf{f})\) satisfies interesting properties collected in the following lemma. **Lemma 12**.: _The \(\mathcal{C}\)-semigroup \(S(\mathbf{f})\) is irreducible. Moreover, \(\mathbf{f}\) is the Frobenius vector of \(S(\mathbf{f})\) for any monomial order, and \(S(\mathbf{f})\) is the unique irreducible \(\mathcal{C}\)-semigroup satisfying all its gaps belong to \(I_{1}(\mathbf{f})\cup\{\mathbf{f}\}\)._ Proof.: By definition of \(S(\mathbf{f})\), \(\mathbf{f}\) is the unique maximum in \(\mathcal{H}(S(\mathbf{f}))\) respect to \(\leq_{\mathcal{C}}\). So, it is also the unique maximum in \(\mathcal{H}(S(\mathbf{f}))\) respect to \(\leq_{\mathbb{N}^{p}}\). This fact implies that \(\mathbf{f}\) is the Frobenius vector of \(S(\mathbf{f})\) for any monomial order. Note that the set of gaps of \(S(\mathbf{f})\) is the set \(\mathcal{H}(S(\mathbf{f}))=I_{1}(\mathbf{f})\cup\{\mathbf{f}\}\), and \(I_{S(\mathbf{f})}(\mathbf{f})=I_{2}(\mathbf{f})\setminus\{\mathbf{f}\}\). Besides, for any \(\mathbf{x}\in\mathcal{H}(S(\mathbf{f}))\), the element \(\mathbf{f}-\mathbf{x}\) belongs to \(I_{S(\mathbf{f})}(\mathbf{f})\). In other case, \(\mathbf{f}=\mathbf{f}-\mathbf{x}+\mathbf{x}\prec\mathbf{f}/2+\mathbf{f}/2= \mathbf{f}\). Furthermore, since \(\mathbf{x}\in I_{S(\mathbf{f})}(\mathbf{f})\) if and only if \(\mathbf{f}-\mathbf{x}\in\mathcal{H}(S(\mathbf{f}))\), we have that the cardinality of \(\mathcal{H}(S(\mathbf{f}))\) is equal to \(1+\sharp I_{S(\mathbf{f})}(\mathbf{f})\) when \(\mathbf{f}\in 2\mathbb{N}^{p}\), or equal to \(\sharp I_{S(\mathbf{f})}(\mathbf{f})\) in the other case. By Proposition 4 or Proposition 3 (respectively), \(S(\mathbf{f})\) is an irreducible \(\mathcal{C}\)-semigroup. The uniqueness of \(S(\mathbf{f})\) is given by its definition. The following proposition gives us an irreducible \(\mathcal{C}\)-semigroups from an existing one, such that both have the same Frobenius vector. **Proposition 13**.: _Let \(S\) be a \(\mathcal{C}\)-semigroup irreducible with Frobenius vector \(\mathbf{f}\), and \(\mathbf{x}\in I_{S}(\mathbf{f})\) be one of its minimal generators such that:_ 1. \(2\mathbf{x}-\mathbf{f}\notin S\)_._ 2. \(3\mathbf{x}\neq 2\mathbf{f}\)_._ 3. \(4\mathbf{x}\neq 3\mathbf{f}\)_._ _Then, \(S^{\prime}=(S\setminus\{\mathbf{x}\})\cup\{\mathbf{f}-\mathbf{x}\}\) is a \(\mathcal{C}\)-semigroup irreducible with Frobenius vector \(\mathbf{f}\)._ Proof.: Note \(F(S^{\prime})=\mathbf{f}\). We prove that \(S^{\prime}\) is closed under addition. Since \(\mathbf{x}=(\mathbf{f}-\mathbf{x})+(2\mathbf{x}-\mathbf{f})\), \(2\mathbf{x}-\mathbf{f}\) can not belong to \(S\), that is, the second condition is necessary. Trivially, given two elements in \(S\setminus\{\mathbf{x}\}\), their addition belongs to the same set. 
Besides, \(\mathbf{f}-\mathbf{x}+\mathbf{s}\in S\setminus\{\mathbf{x}\}\) for any \(\mathbf{s}\in S\setminus\{\mathbf{x}\}\). In other case, \(\mathbf{f}-\mathbf{x}+\mathbf{s}=\mathbf{x}\) or \(\mathbf{f}-\mathbf{x}+\mathbf{s}\in\mathcal{H}(S)\), for some \(\mathbf{s}\in S\setminus\{\mathbf{x}\}\). If \(\mathbf{f}-\mathbf{x}+\mathbf{s}=\mathbf{x}\), then \(\mathbf{s}=2\mathbf{x}-\mathbf{f}\notin S\). If \(\mathbf{f}-\mathbf{x}+\mathbf{s}\in\mathcal{H}(S)\), then there exists \(\mathbf{s}^{\prime}\in S\) such that \(\mathbf{f}-\mathbf{x}+\mathbf{s}+\mathbf{s}^{\prime}\in\mathrm{PF}(S)\). When \(\mathbf{f}-\mathbf{x}+\mathbf{s}+\mathbf{s}^{\prime}=\mathbf{f}/2\), we have \(2(\mathbf{s}+\mathbf{s}^{\prime})=2\mathbf{x}-\mathbf{f}\notin S\), and when \(\mathbf{f}-\mathbf{x}+\mathbf{s}+\mathbf{s}^{\prime}=\mathbf{f}\), \(\mathbf{x}=\mathbf{s}+\mathbf{s}^{\prime}\). Both conclusions are not possible. To finish this proof, we show that \(2(\mathbf{f}-\mathbf{x})\in S\setminus\{\mathbf{x}\}\). Assume that \(2(\mathbf{f}-\mathbf{x})\notin S\setminus\{\mathbf{x}\}\), so \(2(\mathbf{f}-\mathbf{x})=\mathbf{x}\), or \(2(\mathbf{f}-\mathbf{x})+\mathbf{s}\in\mathrm{PF}(S)\) for some \(\mathbf{s}\in S\). Since \(3\mathbf{x}\neq 2\mathbf{f}\), \(2(\mathbf{f}-\mathbf{x})\neq\mathbf{x}\). The semigroup \(S\) to be irreducible implies that \(2(\mathbf{f}-\mathbf{x})+\mathbf{s}\in\{\mathbf{f},\mathbf{f}/2\}\), but \(2(\mathbf{f}-\mathbf{x})+\mathbf{s}\) is not equal to \(\mathbf{f}\) because of \(2\mathbf{x}-\mathbf{f}\notin S\). Hence, \(2(\mathbf{f}-\mathbf{x})+\mathbf{s}=\mathbf{f}/2\). Since \(4\mathbf{x}\neq 3\mathbf{f}\), \(\mathbf{s}\neq 0\), and from \(2\mathbf{x}-\mathbf{f}\notin S\), we obtain \(2(\mathbf{f}-\mathbf{x})+\mathbf{s}\neq\mathbf{f}/2\). We conclude \(2(\mathbf{f}-\mathbf{x})\in S\setminus\{\mathbf{x}\}\). Since \(\sharp I_{S}(\mathbf{f})=\sharp I_{S^{\prime}}(\mathbf{f})\) and \(\sharp\mathcal{H}(S)=\sharp\mathcal{H}(S^{\prime})\), \(S^{\prime}\) is irreducible by the propositions 3 and 4. From this point forward, we will use the notation \(m(S)\) to represent the minimum element (with respect to the partial order \(\preceq\)) in the minimal generating set of \(S\). This element is often referred to as the multiplicity of \(S\). We denote by \(\mathfrak{I}(\mathbf{f})\) the set of the irreducible \(\mathcal{C}\)-semigroups with Frobenius vector \(\mathbf{f}\). Given \(S\in\mathfrak{I}(\mathbf{f})\), consider \(S_{0}=S\), and \(S_{n}=(S_{n-1}\setminus\{m(S_{n-1})\})\cup\{\mathbf{f}-m(S_{n-1})\}\) when \(m(S_{n-1})\in I_{1}(\mathbf{f})\), or \(S_{n}=S_{n-1}\) in other case, for \(n>1\). Note that \(S_{n}=S_{n-1}\) if \(S_{n}=S(\mathbf{f})\). Since \(I_{1}(\mathbf{f})\) is a finite set, the set \(\{S_{0},S_{1},\ldots\}\) is also finite. Let \(G=(V,E)\) be the digraph given by the set of vertices \(V=\mathfrak{I}(\mathbf{f})\), and edge set \(E=\big{\{}(A,B)\in V\times V\mid m(A)\prec\mathbf{f}/2\text{ and }B=(A \setminus\{m(A)\})\cup\{\mathbf{f}-m(A)\}\big{\}}\). **Theorem 14**.: _Let \(\preceq\) be a monomial order on \(\mathbb{N}^{p}\), \(\mathcal{C}\subset\mathbb{N}^{p}\) be an integer cone, and \(\mathbf{f}\in\mathcal{C}\) be a non zero element. The digraph \(G\) is a rooted tree with root \(S(\mathbf{f})\)._ Proof.: Let \(S\) be an element belonging to \(\mathfrak{J}(\mathbf{f})\). If \(m(S)\notin I_{1}(\mathbf{f})\), then \(S=S(\mathbf{f})\). Assume that \(m(S)\in I_{1}(\mathbf{f})\). 
In that case, \(m(S)\prec\mathbf{f}/2\), that is, \(2m(S)-\mathbf{f}\notin S\), \(3m(S)\neq 2\mathbf{f}\), and \(4m(S)\neq 3\mathbf{f}\). By Proposition 13, \(S_{1}=(S\setminus\{m(S)\})\cup\{\mathbf{f}-m(S)\}\) is irreducible. That means \((S,S_{1})\in E\). Following this construction, \(G\) is a tree whose root is \(S(\mathbf{f})\). We obtain an algorithm from previous construction and results to compute a tree of all irreducible \(\mathcal{C}\)-semigroups with a given Frobenius vector and a fixed monomial order (Algorithm 1). ``` Input: A monomial order \(\preceq\) on \(\mathbb{N}^{p}\), an integer cone \(\mathcal{C}\) and \(\mathbf{f}\in\mathcal{C}\). Output: A tree of irreducible \(\mathcal{C}\)-semigroups with Frobenius vector \(\mathbf{f}\). begin \(X\leftarrow\{S(\mathbf{f})\}\); \(Y\leftarrow\emptyset\); while\(X\neq\emptyset\)do \(S\leftarrow\operatorname{First}(X)\); \(A\leftarrow\{x\in S\mid x\in I_{2}(\mathbf{f})\cap\Lambda_{S},2x-\mathbf{f} \notin S,3x\neq\mathbf{f},4x\neq 3\mathbf{f},\mathbf{f}-x\prec m(S)\}\); if\(A=\emptyset\)then \(Y\gets Y\cup\{S\}\); else for\(x\in A\)do \(H\leftarrow(\mathcal{H}(S)\setminus\{\mathbf{f}-x\})\cup\{x\}\); \(S^{\prime}\leftarrow\mathcal{C}\)-semigroup with \(\mathcal{H}(S^{\prime})=H\); \(X\gets X\cup\{S^{\prime}\}\); \(X\gets X\setminus\{S\}\); return\(Y\) ``` **Algorithm 1**Computing a tree of irreducible \(\mathcal{C}\)-semigroups with a given Frobenius vector. The following example shows how to apply Algorithm 1 using the semigroups of Example 11. _Example 15_.: Let \(S(\mathbf{f})_{\prec_{1}}\) be the semigroup spanned by \[\{(3,0),(4,0),(5,0),(3,1),(4,1),(5,1),(2,2),(3,2),(2,3),(3,3),(4,3),(2,4),\] \[(3,4),(3,5),(3,6)\}.\] Applying Algorithm 1, we have that \(I_{2\prec_{1}}(\mathbf{f})=\{(2,2),(3,0),(3,1),(3,2),(4,2)\}\). Hence, \(S(\mathbf{f})_{\prec_{1}}\) has three children: * \(\langle(4,0),(5,0),(6,0),(7,0),(3,1),(4,1),(5,1),(6,1),(1,2),(2,2),(3,2),(2,3), (3,3)\rangle\), * \(\langle(3,0),(4,0),\,(5,0),(1,1),(3,2),(2,3),(2,4),(3,6)\rangle\), * \(\langle(2,0),(3,0),(3,1),(4,1),(3,2),(2,3),(3,3),(2,4),(3,4),(3,5),(4,5),(3,6)\rangle\). After repeating this procedure, the tree in Figure 1 is obtained. Since the definition of \(S(\mathbf{f})\) depends on the monomial order, we get a new tree if we change it. For example, when we use the order \(\prec_{2}\), Figure 2 appears. ## 4 Fundamental gaps of \(\mathcal{C}\)-semigroups In this section, we generalize to \(\mathcal{C}\)-semigroups several results related to the fundamental gaps of a numerical semigroup (see [15, Chapter 4]). The first results allow us to check when \(\mathcal{C}\setminus X\) is a \(\mathcal{C}\)-semigroup for any finite subset \(X\subset\mathcal{C}\). Denote by \(D(X)\) the set \(\{\mathbf{a}\in\mathcal{C}\mid n\mathbf{a}\in X\text{ for some }n\in\mathbb{N}\}\). Figure 1: Tree of irreducible \(\mathcal{C}\)-semigroups with \(\prec_{1}\). Figure 2: Tree of irreducible \(\mathcal{C}\)-semigroups with \(\prec_{2}\). **Proposition 16**.: _Let \(\mathcal{C}\subset\mathbb{N}^{p}\) be an integer cone and \(X\) be a finite subset of \(\mathcal{C}\setminus\{0\}\). Then, \(\mathcal{C}\setminus X\) is a \(\mathcal{C}\)-semigroup if and only if \(\mathbf{x}-\mathbf{s}\in X\) for every \((\mathbf{x},\mathbf{s})\in(X,\mathcal{C}\setminus X)\) with \(\mathbf{s}\leq_{\mathcal{C}}\mathbf{x}\)._ Proof.: Let \(S\) be the set \(\mathcal{C}\setminus X\), and assume that \(S\) is a \(\mathcal{C}\)-semigroup. 
Set \((\mathbf{x},\mathbf{s})\in(X,S)\) with \(\mathbf{s}\leq_{\mathcal{C}}\mathbf{x}\). Since \(\mathbf{s}\leq_{\mathcal{C}}\mathbf{x}\), we have that \(\mathbf{x}-\mathbf{s}\in\mathcal{C}\). If \(\mathbf{x}-\mathbf{s}\notin X\), then \(\mathbf{x}=\mathbf{s}+\mathbf{s}^{\prime}\) for some \(\mathbf{s}^{\prime}\in S\), and \(S\) is not a semigroup. So, \(\mathbf{x}-\mathbf{s}\in X\) for any \((\mathbf{x},\mathbf{s})\in(X,\mathcal{C}\setminus X)\) with \(\mathbf{s}\leq_{\mathcal{C}}\mathbf{x}\). Conversely, since \(\mathbf{x}-\mathbf{s}\) belongs to \(X\) for every \((\mathbf{x},\mathbf{s})\in(X,S)\) with \(\mathbf{s}\leq_{\mathcal{C}}\mathbf{x}\), \(S\) is an additive submonoid of \(\mathbb{N}^{p}\) with finite complement in \(\mathcal{C}\), that is, \(S\) is a \(\mathcal{C}\)-semigroup. From above proposition, \(\mathcal{C}\setminus X\) to be a \(\mathcal{C}\)-semigroup implies that \(X=D(X)\); for example, if we consider \(\mathcal{C}\) the cone generated by \(\{(1,0),(1,1),(1,2)\}\) and \(X=\{(2,0),(2,1)\}\), \(\mathcal{C}\setminus X\) is not a semigroup because of \(D(X)=\{(2,0),(2,1),(1,0)\}\). We now provide an algorithm to determine if \(\mathcal{C}\setminus X\) is a \(\mathcal{C}\)-semigroup (Algorithm 2). ``` Input:\(\mathcal{C}\subset\mathbb{N}^{p}\) an integer cone, and \(X\) a finite subset of \(\mathcal{C}\setminus\{0\}\). Output: True if \(\mathcal{C}\setminus X\) is a \(\mathcal{C}\)-semigroup, and False in other case. begin if\(X\neq D(X)\)then return False while\(X\neq\emptyset\)do\(\mathbf{x}\leftarrow\mathrm{First}(X)\); \(A\leftarrow\{\mathbf{s}\in\mathcal{C}\setminus X\mid\mathbf{s}\leq_{\mathcal{C }}\mathbf{x}\}\); \(s\leftarrow\mathrm{First}(A)\); while\(A\neq\emptyset\)do if\(\mathbf{x}-\mathbf{s}\notin X\)then return False \(s\leftarrow\mathrm{First}(A\setminus\{\mathbf{s}\})\); \(X\gets X\setminus\{\mathbf{x}\}\); return True. ``` **Algorithm 2**Checking if \(\mathcal{C}\setminus X\) is a \(\mathcal{C}\)-semigroup. Since, for each \(\mathbf{x}\in X\), the set \(\{\mathbf{s}\in\mathcal{C}\setminus X\mid\mathbf{s}\leq_{\mathcal{C}}\mathbf{x}\}\) can be very very large, the condition \(\mathbf{x}-\mathbf{s}\notin X\) has to be checked many, many times in Algorithm 2, and many iterations are required for the worst cases. To improve the computational resolution of this problem, we provide an alternative algorithm (Algorithm 3) obtained from the following lemma and [12, Lemma 3]. **Lemma 17**.: _Fix a total order \(\preceq\) on \(\mathbb{N}^{p}\), and let \(X=\{\mathbf{x}_{1}\preceq\mathbf{x}_{2}\preceq\cdots\preceq\mathbf{x}_{t}\}\) be a subset of an integer cone \(S_{0}=\mathcal{C}\subset\mathbb{N}^{p}\). Assume that \(S_{t}=\mathcal{C}\setminus X\) is a \(\mathcal{C}\)-semigroup. Then, \(S_{i}=S_{i-1}\setminus\{\mathbf{x}_{i}\}\) is a \(\mathcal{C}\)-semigroup, and \(\mathbf{x}_{i}\) is a minimal generator of \(S_{i-1}\), for every \(i\in[t]\)._ Proof.: Note that \(\mathbf{x}_{i}\) is the Frobenius vector of \(S_{i}\) respect to \(\preceq\). Hence, \(S_{i-1}=S_{i}\cup\{\mathbf{x}_{i}\}\) is a \(\mathcal{C}\)-semigroup and \(\mathbf{x}_{i}\) is a minimal generator of \(S_{i-1}\), for every \(i\in[t]\). ``` Input: A total order \(\preceq\) on \(\mathbb{N}^{p}\), \(\Lambda_{\mathcal{C}}\) the minimal generating set of the integer cone \(\mathcal{C}\subset\mathbb{N}^{p}\), and \(X=\{\mathbf{x}_{1}\preceq\cdots\preceq\mathbf{x}_{t}\}\subset\mathcal{C} \setminus\{0\}\). 
Output: If \(\mathcal{C}\setminus X\) is a \(\mathcal{C}\)-semigroup, its minimal generating set, and the empty set in another case. begin if\(X\subset\Lambda_{\mathcal{C}}\)then return the minimal generating set of \(\mathcal{C}\setminus X\) if\(X\neq D(X)\)then return\(\{\}\) return\(\{\}\) \(\Lambda\leftarrow\Lambda_{\mathcal{C}}\); for\(1\leq i\leq t\)do if\(\mathbf{x}_{i}\notin\Lambda\)then return\(\{\}\) \(\Lambda\leftarrow\) the minimal generating set of \(\langle\Lambda\rangle\setminus\{\mathbf{x}_{i}\}\); \(X\gets X\setminus\{\mathbf{x}_{i}\}\); if\(X\subset\Lambda\)then return the minimal generating set of \(\langle\Lambda\rangle\setminus X\) ``` **Algorithm 3**Checking if \(\mathcal{C}\setminus X\) is a \(\mathcal{C}\)-semigroup. We illustrate this algorithm with the following example. _Example 18_.: Let \(\mathcal{C}\) be the cone generated by \(\Lambda_{\mathcal{C}}=\{(1,0),(1,1),(1,2)\}\) and \(X=\{(1,0),(1,1),(1,2),(2,0),(2,1),(2,2),(2,3),(2,4)\}\). Since \(X\not\subset\Lambda_{\mathcal{C}}\) and \(X=D(X)\), if we apply Algorithm 3, we obtain that: * \(t=0\), \(\Lambda=\{(2,0),(3,0),(1,1),(2,1),(1,2)\}\), * \(t=1\), \(\Lambda=\{(2,0),(3,0),(2,1),(3,1),(1,2),(2,2),(2,3)\}\), * \(t=2\), \(\Lambda=\{(2,0),(3,0),(2,1),(3,1),(2,2),(3,2),(2,3),(3,3),(2,4),\\ (3,4),(3,5),(3,6)\}\). Therefore, \[\mathcal{C}\setminus X=\big{\langle}(3,0),(4,0),(5,0),(3,1),(4, 1),(5,1),(3,2),(4,2),(5,2),(3,3),\\ (4,3),(5,3),(3,4),(4,4),(5,4),(3,5),(4,5),(5,5),(3,6),(4,6),\\ (5,6),(4,7),(5,7),(4,8),(5,8),(5,9),(5,10)\big{\rangle}.\] Fix \(S\subset\mathbb{N}^{p}\) a \(\mathcal{C}\)-semigroup minimally generated by \(\Lambda=\{\mathbf{s}_{1},\ldots,\mathbf{s}_{q},\mathbf{s}_{q+1},\ldots,\mathbf{ s}_{t}\}\), and consider \(\Lambda_{\mathcal{C}}=\{\mathbf{a}_{1},\ldots,\mathbf{a}_{q},\mathbf{a}_{q+1}, \ldots,\mathbf{a}_{m}\}\) the minimal generating set of \(\mathcal{C}\), with \(\mathbf{s}_{i},\mathbf{a}_{i}\in\tau_{i}\) for \(i=1,\ldots,q\) (we assume that the integer cone \(\mathcal{C}\) has \(q\) extremal rays \(\{\tau_{1},\ldots,\tau_{q}\}\)). Note that, the elements \(\mathbf{x}\) of \(\mathrm{SG}(S)\) are those elements in \(\mathcal{H}(S)\) such that \(S\cup\{\mathbf{x}\}\) is again a \(\mathcal{C}\)-semigroup. These gaps play an important role in decomposing a \(\mathcal{C}\)-semigroups into irreducible \(\mathcal{C}\)-semigroups ([9]). Similarly to numerical semigroups, given two \(\mathcal{C}\)-semigroups \(S\) and \(T\) with \(S\subsetneq T\), any \(\mathbf{x}\in\max_{\leq\mathcal{C}}(T\setminus S)\) belongs to \(\mathrm{SG}(S)\), that is to say \(S\cup\{\mathbf{x}\}\) is a \(\mathcal{C}\)-semigroup. Note that if \(\mathbf{x}\in\max_{\leq\mathcal{C}}(T\setminus S)\), then \(2\mathbf{x}\in S\). From this fact, we can prove the following proposition. **Proposition 19**.: _Let \(S\) be a \(\mathcal{C}\)-semigroup and \(G\) be a subset of \(\mathcal{H}(S)\). Then, \(S\in\max_{\subseteq}\{T\) is a \(\mathcal{C}\)-semigroup \(|\ G\subseteq\mathcal{H}(T)\}\) if and only if \(\mathrm{SG}(S)\subseteq G\)._ Proof.: We know that \(\mathbf{x}\in\mathrm{SG}(S)\) if and only if \(S\cup\{\mathbf{x}\}\) is a \(\mathcal{C}\)-semigroup. So, if \(S\in\max_{\subseteq}\{T\) is a \(\mathcal{C}\)-semigroup \(|\ G\subseteq\mathcal{H}(T)\}\), then \(\mathbf{x}\in G\). In other case, \(S\subsetneq S\cup\{\mathbf{x}\}\) and \(S\) is not maximal. 
Assume that \(S\) is not maximal but \(\mathrm{SG}(S)\subseteq G\), so there exists \(T\) a \(\mathcal{C}\)-semigroup such that \(S\subsetneq T\) and \(G\subseteq\mathcal{H}(T)\). Let \(\mathbf{x}\in\max_{\leq\mathcal{C}}(T\setminus S)\), thus \(\mathbf{x}\in\mathrm{SG}(S)\cap T\), but it is not possible (\(\mathrm{SG}(S)\subseteq G\subseteq T\)). There is another interesting subset related to the set of gaps of \(S\). A subset \(X\) of \(\mathcal{H}(S)\) is said to determine \(\mathcal{H}(S)\) if \(S=\max_{\subseteq}\{T\) is a \(\mathcal{C}\)-semigroup \(|\ X\subseteq\mathcal{H}(T)\}\). These subsets were introduced in [16] for numerical semigroups. **Proposition 20**.: _Let \(X\) be a finite subset of an integer cone \(\mathcal{C}\subset\mathbb{N}^{p}\). Then, \(X\) determines the set of gaps of a \(\mathcal{C}\)-semigroup if and only if \(\mathcal{C}\setminus D(X)\) is a \(\mathcal{C}\)-semigroup._ Proof.: Fix \(\Lambda_{\mathcal{C}}=\{\mathbf{a}_{1},\ldots,\mathbf{a}_{q},\mathbf{a}_{q+1}, \ldots,\mathbf{a}_{m}\}\) the minimal generating set of \(\mathcal{C}\subset\mathbb{N}^{p}\). Assume that \(X\) determines \(\mathcal{H}(S)\) for a \(\mathcal{C}\)-semigroup \(S\), so \(X\subset D(X)\subset\mathcal{H}(S)\) and \(S\subset\mathcal{C}\setminus D(X)\). Let \(S^{\prime}\) be the non-empty set \[\{0\}\cup\bigcup_{i=1}^{q}\Big{\{}h_{i}\mathbf{a}_{i}+\mathcal{C}\mid h_{i}= \min_{n\in\mathbb{N}}\{(n\mathbf{a}_{i}+\mathcal{C})\cap X=\emptyset\}\Big{\}}.\] Note that \(S^{\prime}\) is a \(\mathcal{C}\)-semigroup. Let \(\mathbf{a}\) and \(\mathbf{b}\) be two elements in \(S^{\prime}\), so \(\mathbf{a}=h_{i}\mathbf{a}_{i}+\sum_{k=1}^{m}\alpha_{k}\mathbf{a}_{k}\), and \(\mathbf{b}=h_{j}\mathbf{a}_{j}+\sum_{k=1}^{m}\beta_{k}\mathbf{a}_{k}\) for some \(h_{i},h_{j},j,i,\alpha_{k},\beta_{k}\in\mathbb{N}\) with \(i,j\in[q]\) and \(k\in[m]\). Hence, \(\mathbf{a}+\mathbf{b}=h_{i}\mathbf{a}_{i}+(h_{j}\mathbf{a}_{j}+\sum_{k=1}^{m}( \alpha_{k}+\beta_{k})\mathbf{a}_{k})\in h\mathbf{a}_{i}+\mathcal{C}\). Furthermore, \(\mathcal{C}\setminus S^{\prime}\) is finite. Get any \(\mathbf{a}\in\Lambda_{\mathcal{C}}\), then \(\mathbf{a}=\sum_{i=1}^{q}\alpha_{i}\mathbf{a}_{i}\) for some \(\alpha_{1},\ldots,\alpha_{q}\in\mathbb{Q}_{\geq}\), and hence \(k\mathbf{a}=\sum_{i=1}^{q}\beta_{i}\mathbf{a}_{i}\) for some \(\beta_{1},\ldots,\beta_{q},k\in\mathbb{N}\). We can assume that \(\beta_{i}\geq h_{i}\). In that case, \(\mathcal{C}\setminus S^{\prime}\) is a subset of the finite set \(\{\sum_{i=1}^{q}\gamma_{i}\mathbf{a}_{i}\mid 0\leq\gamma_{i}\leq\beta_{i}\}\). We obtain that \(S^{\prime}\) is a finitely generated \(\mathcal{C}\)-semigroup; let \(\Lambda_{S^{\prime}}\) its minimal generating set. The set \(X\) is a subset of \(\mathcal{H}(S^{\prime})\) by construction. For every \(\mathbf{a}\in\mathcal{C}\setminus D(X)\) and we can define \(S_{\mathbf{a}}\) as the semigroup generated by \(\{\mathbf{a}\}\cup\Lambda_{S^{\prime}}\). Since \(X\subset\mathcal{H}(S_{\mathbf{a}})\), and \(X\) determines \(\mathcal{H}(S)\), we have that \(S_{\mathbf{a}}\subset S\). Hence, \(\mathcal{C}\setminus D(X)\subset S\) and then \(\mathcal{C}\setminus D(X)\) is a \(\mathcal{C}\)-semigroup. Conversely, any \(\mathcal{C}\)-semigroup \(T\) such that \(X\subset\mathcal{H}(T)\) satisfies that \(D(X)\subset\mathcal{H}(T)\). Thus, \(X\) determines the set of gaps of the \(\mathcal{C}\)-semigroup \(\mathcal{C}\setminus D(X)\). The sets determining the set of gaps of a \(\mathcal{C}\)-semigroup are related to its set of fundamental gaps. 
**Lemma 21**.: _Let \(S\) be a \(\mathcal{C}\)-semigroup and \(X\) be a subset of \(\mathcal{H}(S)\). Then, \(X\) determines \(\mathcal{H}(S)\) if and only if \(\operatorname{FG}(S)\subseteq X\)._ Proof.: By Proposition 20, if \(X\) determines \(\mathcal{H}(S)\), then \(\mathcal{H}(S)=D(X)\). Thus, for all \(\mathbf{x}\in\mathcal{H}(S)\), \(h\mathbf{x}\in X\) for some \(h\in\mathbb{N}\). In particular, for every fundamental gap of \(S\), the integer \(h\) has to be one. Hence, \(\mathbf{x}\in X\). Conversely, since \(X\subset\mathcal{H}(S)\), we know that \(D(X)\subseteq\mathcal{H}(S)\). Let \(\mathbf{x}\in\mathcal{H}(S)\) and consider \(h=\max\{k\in\mathbb{N}\mid k\mathbf{x}\in\mathcal{H}(S)\}\). In that case, \(h\mathbf{x}\in\mathcal{H}(S)\), and \(2h\mathbf{x},3h\mathbf{x}\in S\). Therefore, \(h\mathbf{x}\in\operatorname{FG}(S)\subseteq X\), \(\mathbf{x}\in D(X)\), and \(\mathcal{H}(S)\subseteq D(X)\). Analogously to the case of numerical semigroups, it happens that \(\operatorname{FG}(S)\) is the smallest subset of \(\mathcal{H}(S)\) determining \(\mathcal{H}(S)\). Also, the relationship between the special and fundamental gaps of a \(\mathcal{C}\)-semigroup is equivalent to their relationship for numerical semigroups. **Lemma 22**.: _Let \(S\) be a \(\mathcal{C}\)-semigroup. Then, \(\operatorname{SG}(S)=\max_{\leq s}\operatorname{FG}(S)\)._ Proof.: Trivially, for any \(\mathbf{x}\in\operatorname{SG}(S)\), \(2\mathbf{x},3\mathbf{x}\in S\), and then \(\operatorname{SG}(S)\subseteq\operatorname{FG}(S)\). Assume that for a \(\mathbf{x}\in\operatorname{SG}(S)\), there exists some \(\mathbf{y}\in\operatorname{FG}(S)\) with \(\mathbf{x}\leq_{S}\mathbf{y}\). So, \(\mathbf{x}+\mathbf{s}=\mathbf{y}\) for some \(\mathbf{s}\in S\). Since \(\mathbf{x}\) is a pseudo-Frobenius element of \(S\), \(\mathbf{y}\in S\). It is not possible, then \(\mathbf{x}\in\max_{\leq_{S}}\operatorname{FG}(S)\). A \(\mathcal{C}\)-irreducible semigroup can also be characterized using its fundamental gaps using the above lemma. **Corollary 23**.: \(S\) _is a \(\mathcal{C}\)-irreducible semigroup if and only if the cardinality of \(\max_{\leq_{S}}\operatorname{FG}(S)\) is equal to one._ The next example illustrates many results appearing in this section. _Example 24_.: Let \(\mathcal{C}\) be the cone with extremal rays \(\tau_{1}=\langle(1,0)\rangle\) and \(\tau_{2}=\langle(1,1)\rangle\) and \(X=\{(1,1),(3,0),(3,1),(3,2),(5,1),(5,2)\}\). Since \(D(X)=\{(1,0),(1,1),(3,0),(3,1),(3,2),(5,1),(5,2)\}\), we have that \[\{(x,s)\in(D(X),\mathcal{C}\setminus D(X))\mid s\leq_{\mathcal{C}} D(X)\}=\\ \{((0,0),(1,1)),((0,0),(3,0)),((0,0),(3,1)),((0,0),(3,2)),\\ ((0,0),(5,1)),((0,0),(5,2)),((2,0),(3,0)),((2,0),(3,1)),((2,0),(5,1 )),\\ ((2,0),(5,2)),((2,1),(3,2)),((2,1),(5,1)),((2,1),(5,2)),\\ ((2,2),(3,2)),((2,2),(5,2)),((4,0),(5,1))\}\] Therefore, by Proposition 16, \(\mathcal{C}\setminus D(X)\) is a \(\mathcal{C}\)-semigroup and, by Proposition 20, \(X\) determines the set of gaps of a \(\mathcal{C}\)-semigroup. If we call this semigroup \(S\), we have that \(\mathcal{H}(S)=D(X)\). It is not difficult to check that \(S=\langle(2,0),(5,0),(2,1),(2,2),(3,3)\rangle\) and that, in this case, \(\operatorname{FG}(S)=X\). Moreover, we can compute the set of pseudo-Frobenius elements of \(S\), and we get \(\operatorname{PF}(S)=\{(5,1),(5,2)\}\), so \(\operatorname{SG}(S)=\{(5,1),(5,2)\}\). 
On the other hand, \(\operatorname{FG}(S)=\{(1,1),(3,0),(3,1),(3,2),(5,1),(5,2)\}\) and \(\max_{\leq_{S}}\operatorname{FG}(S)=\{(5,1),(5,2)\}\), as we knew by Lemma 22. ## 5 Computing all the \(\mathcal{C}\)-semigroups with a given Frobenius vector Let \(\mathcal{C}\subset\mathbb{N}^{p}\) be an integer cone, \(\preceq\) be a monomial order on \(\mathbb{N}^{p}\), and \(S\) be a \(\mathcal{C}\)-semigroup with Frobenius vector \(F(S)\in C\setminus\{0\}\). Note that \(F(S)\) is a minimal generator of \(S\cup\{F(S)\}\). Conversely to Lemma 17, we can consider the following sequence of \(\mathcal{C}\)-semigroups for some \(t\in\mathbb{N}\): \(S_{t}=S\), \(S_{i-1}=S_{i}\cup F(S_{i})\) for all \(i=1,\ldots,t\), and \(S_{0}=\mathcal{C}\). Such a sequence can be constructed for any \(\mathcal{C}\)-semigroup with Frobenius vector \(F(S)\). So, from a minimal system of generators of \(\mathcal{C}\), we obtain new \(\mathcal{C}\)-semigroups just by removing a minimal generator \(\mathbf{s}\) fulfilling that \(\mathbf{s}\preceq F\). Performing this process as many times as possible, we obtain all the \(\mathcal{C}\)-semigroups with Frobenius vector \(F\). Note that this process is finite due to the finitiness of the set \(\{\mathbf{s}\in\mathcal{C}\mid\mathbf{s}\preceq F\}\). This idea allows us to provide an algorithm for computing all the \(\mathcal{C}\)-semigroups with a fixed Frobenius vector (Algorithm 4). Moreover, this algorithm can be modified to obtain all the \(\mathcal{C}\)-semigroups with the Frobenius vector less than or equal to a fixed Frobenius vector. For any set of ordered pairs, \(A\), \(\pi_{1}(A)\) denotes the set of the first projection of its elements. _Example 25_.: Let \(\mathcal{C}\) be the cone generated by \(\{(1,0),(1,1),(1,2)\}\) and \(F=(2,1)\). Then, applying Algorithm 4, we get that the set of all \(\mathcal{C}\)-semigroups with Frobenius vector \((2,1)\) is \(\{\{(2,0),(3,0),(1,1),(1,2)\},\{(1,0),(3,1),(1,2),\\ (2,3)\},\{(3,0),(4,0),(5,0),(1,1),(3,1),(1,2),(3,2)\},\{(2,0),(3,0),(3,1),(4,1),\\ (1,2),(2,2),(2,3),(3,3)\},\{(3,0),(4,0),(5,0),(3,1),(4,1),(5,1),(1,2),(2,2),\\ (3,2),(2,3),(3,3)\},\{(3,0),(4,0),(5,0),(3,1),(4,1),(5,1),(2,2),(3,2),(4,2),\\ (2,3),(3,3),(4,3),(2,4),(3,4),(3,5),(3,6)\}\}\). These semigroups are shown in Table 1. ### Funding The first, second, and last authors were partially supported by Junta de Andalucia research group FQM-343, and by Consejeria de Universidad, Investigacion e Innovacion de la Junta de Andalucia project ProyExcel_00868. Proyecto de investigacion del Plan Propio - UCA 2022-2023 (PR2022-004) partially supported the second and last authors. Proyecto de investigacion del Plan Propio - UCA 2022-2023 (PR2022-011) also partially supported all the authors. This publication and research have been partially granted by INDESS (Research University Institute for Sustainable Social Development), Universidad de Cadiz, Spain. #### Author information J. I. Garcia-Garcia. Departamento de Matematicas/INDESS (Instituto Universitario para el Desarrollo Social Sostenible), Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. D. Marin-Aragon. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Sanchez-Loureiro. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Vigneron-Tenorio. 
Departamento de Matematicas/INDESS (Instituto Universitario para el Desarrollo Social Sostenible), Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. D. Marin-Aragon. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Sanchez-Loureiro. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Vigneron-Tenorio. Departamento de Matematicas/INDESS (Instituto Universitario para el Desarrollo Social Sostenible), Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. D. Marin-Aragon. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Sanchez-Loureiro. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Vigneron-Tenorio. Departamento de Matematicas/INDESS (Instituto Universitario para el Desarrollo Social Sostenible), Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. D. Marin-Aragon. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Sanchez-Loureiro. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Vigneron-Tenorio. Departamento de Matematicas/INDESS (Instituto Universitario para el Desarrollo Social Sostenible), Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. D. Marin-Aragon. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Sanchez-Loureiro. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Vigneron-Tenorio. Departamento de Matematicas/INDESS (Instituto Universitario para el Desarrollo Social Sostenible), Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Sanchez-Loureiro. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Sanchez-Loureiro. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Sanchez-Loureiro. Departamento de Matematicas, Universidad de Cadiz, E-11510 Puerto Real (Cadiz, Spain). E-mail: [email protected]. A. Vigneron-Tenorio. tuto Universitario para el Desarrollo Social Sostenible), Universidad de Cadiz, E-11406 Jerez de la Frontera (Cadiz, Spain). E-mail: [email protected].
2305.12538
Testing Multiband (G, GBP, GRP, B, V and TESS) Standard Bolometric Corrections by Recovering Luminosity and Radii of 341 Host Stars
Main-sequence bolometric corrections (BC) and a standard BC-Teff relation are produced for TESS wavelengths using published physical parameters and light ratios from SED models of 209 detached double-lined eclipsing binaries. This and previous five-band (Johnson B, V, Gaia G, GBP, GRP) standard BC-Teff relations are tested by recovering luminosity (L) of the most accurate 341 single host stars (281 MS, 40 subgiants, 19 giants and one PMS). The recovered L of photometry is compared to L from published R and Teff. A very high correlation ($R^2$ = 0.9983) is achieved for this mixed sample. Error histograms of recovered and calculated L show peaks at 2 and 4 per cent respectively. The recovered L and the published Teff} were then used in $L = 4 \pi R^2 \sigma Teff^4$ to predict the standard R of the host stars. Comparison between the predicted and published R of all luminosity classes are found successful with a negligible offset associated with the giants and subgiants. The peak of the predicted R errors is found at 2 per cent, which is equivalent to the peak of the published R errors. Thus, a main-sequence BC-Teff relation could be used in predicting both L and R of a single star at any luminosity class, but this does not mean BC-Teff relations of all luminosity classes are the same because luminosity information could be more constrained by star's apparent magnitude $\xi$ than its BC since $m_{Bol} = \xi + BC_\xi$.
Zeki Eker, Volkan Bakis
2023-05-21T18:45:48Z
http://arxiv.org/abs/2305.12538v1
Testing Multiband (\(G\), \(G_{\rm BP}\), \(G_{\rm RP}\), \(B\), \(V\) and \(Tess\)) Standard Bolometric Corrections by Recovering Luminosity and Radii of 341 Host Stars ###### Abstract Main-sequence bolometric corrections (\(BC\)) and a standard \(BC-T_{\rm eff}\) relation are produced for TESS wavelengths using published physical parameters and light ratios from SED models of 209 detached double-lined eclipsing binaries. This and previous five-band (Johnson \(B\), \(V\), Gaia \(G\), \(G_{\rm BP}\), \(G_{\rm RP}\)) standard \(BC-T_{\rm eff}\) relations are tested by recovering luminosity (\(L\)) of the most accurate 341 single host stars (281 MS, 40 subgiants, 19 giants and one PMS). Recovered \(L\) of photometry are compared to \(L\) from published \(R\) and \(T_{\rm eff}\). A very high correlation (\(R^{2}\) = 0.9983) is achieved for this mixed sample. Error histograms of recovered and calculated \(L\) show peaks at \(\sim\)2 and \(\sim\)4 per cent respectively. The recovered L and the published \(T_{\rm eff}\) were then used in \(L=4\pi R^{2}\sigma T_{\rm eff}^{4}\) to predict the standard \(R\) of the host stars. Comparison between the predicted and published R of all luminosity classes are found successful with a negligible offset associated with the giants and subgiants. The peak of the predicted \(R\) errors is found at 2 per cent, which is equivalent to the peak of the published \(R\) errors. Thus, a main-sequence \(BC-T_{\rm eff}\) relation could be used in predicting both \(L\) and \(R\) of a single star at any luminosity class, but this does not mean \(BC-T_{\rm eff}\) relations of all luminosity classes are the same because luminosity information could be more constrained by star's apparent magnitude \(\xi\) than its \(BC\) since \(m_{\rm Bol}=\xi+BC_{\xi}\). keywords: Stars: fundamental parameters, Stars: general, Stars: planetary systems ## 1 Introduction Luminosity (\(L\)) is not an observable parameter for a star. There are only one direct and two indirect methods of obtaining \(L\). The direct method uses the Stefan-Boltzmann law (\(L=4\pi R^{2}\sigma T_{\rm eff}^{4}\)) for calculating a luminosity directly from a given radius (\(R\)) and effective temperature (\(T_{\rm eff}\)) of a star. Therefore, the most accurate stellar \(L\) comes from observationally determined most accurate \(R\) and \(T_{\rm eff}\) values which would be available from Detached Double-lined Eclipsing Binaries (DDEBs; Andersen 1991; Torres et al. 2010; Eker et al. 2014, 2015, 2018). The typical accuracy of a calculated \(L\) is 8.2-12.2 per cent (Eker et al. 2021). The first of the two indirect methods requires mass and pre-determined classical mass-luminosity relation (MLR) in the form \(L\propto M^{\alpha}\), where the typical accuracy is 17.5-37.99 per cent (Eker et al. 2018, 2021). In early times, especially after the discovery of main-sequence MLR independently by Hertzsprung (1923) and Russell et al. (1923), the MLR was claimed to be one of the most prominent empirical laws of nature by Eddington (1926) and Gabovits (1938) and it was used either predicting \(L\) from \(M\) or \(M\) from \(L\) at least until the middle of the 20th century or perhaps until Andersen (1991) was objecting it. Those were the times the observational accuracy of \(L\) (or \(M\)) was not high enough to distinguish the true \(L\) (or \(M\)) from the average \(L\) (or \(M\)) for a given \(M\) (or \(L\)) of a main-sequence star. 
The second of the two indirect methods requires an absolute visual magnitude and a pre-estimated bolometric correction (\(BC\)) to compute the absolute bolometric magnitude of a star in the first step as \(M_{\rm Bol}=M_{\rm V}+BC_{\rm V}\) and then to obtain its \(L\) in SI units from \[M_{\rm Bol}=-2.5\times logL+71.197425..., \tag{1}\] where the typical accuracy of \(L\) is about to 10-13 per cent (Eker et al. 2021, 2022; Bakis & Eker 2022) which is equivalent to the uncertainty of \(BC_{\rm V}\) if the uncertainty of the absolute visual magnitude is negligible and the \(BC_{\rm V}\) is a standard \(BC_{\rm V}\). Otherwise, that is if the \(BC_{\rm V}\) is not a standard \(BC_{\rm V}\), an additional uncertainty 10 per cent or more (Eker et al. 2021, 2022) would arise due to the complexities caused by the three paradigms (the \(BC\) scale is arbitrary, \(BC\) values must always be negative, and the bolometric magnitude of a star ought to be brighter than its \(V\) magnitude) which are not valid since 2015 (Eker et al. 2022). It was Torres (2010) who first noticed inconsistencies due to improper usage of \(M_{Bol,\odot}\) and tabulated \(BC_{\rm V}\) values that may lead to errors of up to 10 per cent or more in the derived \(L\) equivalent to about 0.1 mag or more in the bolometric magnitudes. Disagreed bolometric corrections (\(BC_{\rm V}\)), which are primarily in tabulated form, had been used for about a century (Kuiper 1938; Popper 1959; Johnson 1966; Flower 1996; Bessell et al. 1998; Girardi et al. 2008; Sung et al. 2013). International Astronomical Union was aware of the problem, thus, issued a general assembly resolution1 hereafter IAU 2015 GAR B2, to solve the problems associated with the arbitrariness attributed to the zero point constants of the \(M_{\rm Bol}\) and \(BC\) scales. This revolutionary document (Eker et al., 2022) appears to be ignored or its full potential was not understood properly since some authors continued the old tradition (Casagrande & VandenBerg, 2018; Andrae et al., 2018; Chen et al., 2019; Eker et al., 2020) under the influence of the paradigms. The revolution has been noticed first by Eker et al. (2022) who revised the definition of standard bolometric correction according classical definition \(BC_{\rm V}=M_{\rm Bol}\) - \(M_{\rm V}\) where \(M_{\rm V}\) is calculated from an apparent magnitude, parallax, and interstellar extinction while \(M_{\rm Bol}\) is calculated according to IAU 2015 GAR B2 (Eq.1) or using Footnote 1: [https://www.iau.org/static/resolutions/IAU2015_English.pdf](https://www.iau.org/static/resolutions/IAU2015_English.pdf) \[M_{\rm Bol}=M_{bol,\odot}-2.5\times logL/L_{\odot} \tag{2}\] in which the nominal2 solar values \(M_{Bol,\odot}=4.74\) mag and \(L_{\odot}=3.828\times 10^{26}\) W should be used during pre-computation of a \(BC\) because the zero point constant of the bolometric magnitude scale \(C_{\rm Bol}=M_{Bol,\odot}+2.5logL_{\odot}=71.197425...\) has been fixed by the General Assembly of IAU in 2015 meeting. Footnote 2: Actual values are \(M_{Bol,\odot}=4.739997...\) mag and \(L_{\odot}=3.8275(\pm 0.0014)\times 10^{26}\) W A predicted \(L\) according to the third method using an absolute magnitude and a standard \(BC\) could be called standard \(L\) while a computed \(L\) according to the Stefan-Boltzmann law is standard by the definition. Investigating typical and limiting accuracies of the three methods of obtaining stellar \(L\) in the era after Gaia, Eker et al. 
(2021) claimed that it is possible to predict a standard \(L\) within a few per cent if the pre-required \(BC_{\rm V}\) is measured directly from a high signal to noise ratio spectrum according to the following definition of \(BC_{\rm V}\) \[BC_{\rm V}=2.5log\frac{f_{\rm V}}{f_{\rm Bol}}+C_{2}=2.5log\frac{\int_{0}^{ \infty}S_{A}(V)f_{A}d\lambda}{\int_{0}^{\infty}f_{A}d\lambda}+C_{2}, \tag{3}\] where \(S_{A}(V)\) is the sensitivity function of the \(V\) magnitude system, and \(f_{A}\) is the monochromatic flux from a star, and \(C_{2}=C_{\rm Bol}-C_{\rm V}\), in which \(C_{\rm V}\) is the zero point constant for the \(V\) system of magnitudes. This claim, however, was found inapplicable and speculative by Bakis & Eker (2022), who suggested an alternative way of increasing the accuracy of standard \(L\) by using multiband standard \(BCs\). Bakis & Eker (2022) first determined the main-sequence standard \(BC\) values for the photometric bands Gaia \(G\), \(G_{\rm BP}\), \(G_{\rm RP}\) and Johnson \(B\), \(V\) from the most accurate stellar parameters of 406 main-sequence stars which are the components of 209 DDEBs contained in the catalogue of Eker et al. (2018). Then, \(BC\) - \(T_{\rm eff}\) relations of main-sequence stars were established for the same five photometric (\(G\), \(G_{\rm BP}\), \(G_{\rm RP}\) and Johnson \(B\), \(V\)) passbands. After, five \(M_{\rm Bol}\) values of each star are estimated by \(M_{\rm Bol}(\xi)=M_{\xi}+BC_{\xi}\), the mean \(M_{\rm Bol}\) and its standard error is computed to represent the star in the list. At last, the standard \(L\) of each star is obtained according to Eq.1 from its mean \(M_{\rm Bol}\), while the standard error of \(M_{\rm Bol}\) is propagated to be the uncertainty of the predicted \(L\). Comparing predicted \(L\) (from photometry) to the calculated \(L\) (from SB) indicated a high degree of correlation (\(R^{2}\geq 0.999\)). The most important is that comparing histogram distributions of errors showed that uncertainties associated with the predicted \(L\) (peak at \(\approx 2.5\) per cent) are \(\sim\)3 times smaller than the uncertainties of \(L\) (peak at \(\approx 8\) per cent) by the Stefan-Boltzmann law. There was no method providing \(L\) more accurately than the direct method. Now, multiband \(BC\)s provides such an accuracy first in the history of astrophysics. This study, however, is motivated for further testing of the multiband (Gaia \(G\), \(G_{\rm BP}\), \(G_{\rm RP}\) and Johnson \(B\), \(V\)) main-sequence \(BC\) and \(BC\) - \(T_{\rm eff}\) relations by adding one more passband (TESS) in the series and applying them to the planet-hosting stars with most accurate \(T_{\rm eff}\) and \(R\). Not only by comparing predicted \(L\) to calculated \(L\) of the host stars, but also by comparing predicted \(R\) to the published \(R\). Choosing host stars sample not only among the main-sequence but also subgiants and giants, this study is also important for testing the claim of Flower (1996) who declared \(BC\) values are the same for all luminosity classes including main-sequence, subgiants and giants even for the pre-main-sequence stars. ## 2 Data The Transiting Exoplanet Survey Satellite (TESS) (Ricker et al., 2014, 2015), which was put into orbit in 2018, has led to important contributions to stellar and planetary astrophysics with its extremely sensitive photometric data (among hundreds of studies; Stassun et al., 2017; Fulton et al., 2017; Montalto, 2022). 
In addition to its numerous exoplanet discovery, which is the main mission of the satellite, it has also made important contributions to stellar astrophysics (among hundreds of studies; Gunther et al., 2020; Antoci et al., 2019; Bakis et al., 2022; Espinoza & Jordan, 2016, for different kind of stellar objects). In the case of eclipsing binaries, it has become possible to obtain relative radii and light contribution of components with a precision better than 0.1 per cent. On the way of estimating accurate absolute bolometric magnitudes and luminosity of the component stars of binaries using TESS light curves, interstellar dimming (\(A_{\rm TESS}\)) and the bolometric corrections (\(BC_{\rm TESS}\)) are two very critical parameters in addition to a sensitive Gaia distance (Gaia Collaboration et al., 2016). Despite there is a reliable source of obtaining these two parameters now exist at Johnson \(B\), \(V\) and Gaia \(G\), \(G_{\rm BP}\), \(G_{\rm RP}\) band passes (Bakis & Eker, 2022), there is no reliable common source satisfying numerous researchers who are interested in solving TESS light curves of eclipsing binaries and exoplanet transiting light curves of single stars. This study, therefore, is devoted first to producing an empirical \(E(B-V)\) - \(A_{\rm TESS}\) relation for estimating \(A_{\rm TESS}\) from the \(E(B-V)\) colour excess of the system and then to calibrate an empirical main-sequence \(BC\) - \(T_{\rm eff}\) relation for the TESS pass band for estimating \(BC_{\rm TESS}\) from the \(T_{\rm eff}\) of the system. Note that, \(A_{\rm TESS}\) is also needed in calculating \(M_{\rm TESS}\) before obtaining mean \(M_{\rm Bol}\) and its uncertainty for a star from six independent estimations of \(M_{\rm Bol}(\xi)=M_{\xi}+BC_{\xi}\) including the TESS band. The standard \(L\) of a star if computed from such a mean \(M_{\rm Bol}\) using Eq.1 has already been shown to produce more accurate stellar \(L\) than the classical direct method (Bakis & Eker, 2022) by a sample of DDEB components. Comparing L of the host star sample of this study will be an independent second test of multiband \(BC\) - \(T_{\rm eff}\) relations. At last, estimating the radii of host stars directly from this luminosity would be a second application tested in this study. ### Data for establishing \(Bc_{\rm TESS}\) - \(T_{\rm eff}\) relation The same data set of 209 DDEBs from Bakis & Eker (2022) are used in this study also for estimating component light ratios (\(I\)) and interstellar extinction (\(A_{\rm TESS}\)) first in the TESS magnitudes by the same method and software involving SED and SIMBAD data as described by Bakis & Eker (2022). After eliminating the stars out of the main-sequence limits, which are the theoretical ZAMS and TAMS lines from PARSEC evolutionary models by Bressan et al. (2012), the number of component stars left on the main-sequence is 406 (197 binaries, 9 primaries and 3 secondaries). Furthermore, not all of the 209 systems have TESS apparent magnitudes. Therefore, we have been able to use only 390 main-sequence stars to establish \(E(B-V)\) - \(A_{\rm TESS}\) relation shown in Fig.1. It can be used in estimating interstellar dimming in TESS magnitudes if \(E(B-V)\) colour excess is known. Once, the component light radios and the interstellar extinctions of 209 DDEB systems are known, and then it is straightforward to calculate \(BC_{\rm TESS}\) of 390 main-sequence stars as shown by Table 1, where the columns are self-explanatory. 
Order, name of the system and the component (primary or secondary) are given in the first three columns. Only the parameters of 390 stars, a sample selected for this study with TESS magnitudes, are listed thus the numbers in the first column (order) are the same as given by Bakys & Eker (2022) for the readers who are interested in looking for the references of the published observed parameters, which are not given in this study to save space; that is, reducing the number of columns in the table. Columns 4, 5, 6 and 7 give radius (\(R\)), the relative error of \(R\), effective temperature (\(T_{\rm eff}\)) and relative error of \(T_{\rm eff}\), from which the luminosity (\(L\)) of each component is computed according to the Stefan Boltzmann law (column 8) in solar units. It is translated to SI units in column 9. The relative uncertainty of the computed \(L\) in column 10 is the propagated uncertainty estimated from the random observational uncertainties of observed \(R\) and \(T_{\rm eff}\). Absolute bolometric magnitudes (\(M_{\rm Bol}\)) and corresponding uncertainties in columns 11 and 12 are calculated according to Eq. 1 from the computed \(L\). This completes the first step of obtaining \(BC_{\rm TESS}\) of the component stars of the DDEB sample (Table 1) according to the classical definition \(BC_{\rm TESS}=M_{\rm Bol}\) - \(M_{\rm TESS}\). The second step, obtaining \(M_{\rm TESS}\) requires apparent brightness of the system [\(m({\rm sys})_{\rm TESS}\)], a light ratio of components (\(t_{\rm TESS}\)), a parallax (\(\varpi\)) and an interstellar extinction (\(A_{\rm TESS}\)). Column 13 and 14 retain brightness of the system [\(m({\rm sys})_{\rm TESS}\)] and its associated uncertainty [\(\Delta m({\rm sys})_{\rm TESS}\)] from the TESS catalogue (Stassun et al., 2018). The light ratio of components (column 15) is calculated as described by Bakys & Eker (2022) using the TESS bandpass curve (Sullivan et al., 2015) in the form of component contributions (\(I_{pri}+I_{sec}=1\)) that is the total contribution of primary and secondary is equal to one. Observational uncertainties (\(\Delta m_{\rm TESS}\)) of component's magnitudes are assumed to be the same as systemic magnitudes [\(\Delta m({\rm sys})_{\rm TESS}\)] (column 14) as done by Bakys & Eker (2022), therefore, (\(m_{\rm TESS}\)) in column 16 are given without uncertainty in the table. Columns 17 and 18 copy the parallaxes and associated relative errors directly from Bakys & Eker (2022) mostly from EDR3, which are checked and confirmed to be the same as Gaia DR3 parallaxes. Column 19 presents interstellar extinctions for the TESS photometry (\(A_{\rm TESS}\)) predicted in this study with an estimated accuracy of 0.019 mag. Next, the absolute magnitude of a component (\(M_{\rm TESS}\)) and its propagated uncertainty are listed in columns 20 and 21. At last in the third step, the \(BC_{\rm TESS}\) of a component (column 22) is found by subtracting \(M_{\rm TESS}\) (column 20) from \(M_{\rm Bol}\) (column 11). The last column (column 23) gives the propagated uncertainty computed from the uncertainties of the absolute bolometric and TESS magnitudes (column 23). ### Data for expanding \(BC\) - \(T_{\rm eff}\) relations Empirical standard \(BC_{\rm TESS}\) from the published \(R\) and \(T_{\rm eff}\) of 390 DDEB main-sequence components are displayed in Fig.2 where the continuous line is the best fitting curve. 
Coefficients and associated uncertainties are listed in Table 2 together with other \(BC-T_{\rm eff}\) relations all in the forms of fourth-degree polynomials from Bakys & Eker (2022) representing the photometric bands of Johnson \(B\), \(V\), Gaia \(G\), \(G_{\rm BP}\) and \(G_{\rm BP}\), where the uncertainties of the coefficients are shown by \(\pm\) symbol. We have tried to fit both of the third and fourth-degree polynomials to the \(BC_{\rm TESS}\) of this study, nevertheless, the fit of the third-degree polynomial is found better with a smaller RMS and more meaningful errors associated with the coefficients while the curves of both functions are very similar. Using the values of \(a\), \(b\), \(c\), \(d\) and \(e\) from the table, one can calculate \[BC_{\xi}=a+bX+cX^{2}+dX^{3}+eX^{4}, \tag{4}\] of any photometric band, where \(X=log\,T_{\rm eff}\). The lower part of Table 2 is for comparing standard deviations (RMS), correlations (\(R^{2}\)) and the standard \(BC\) of a main sequence star having a \(T_{\rm eff}\) similar to the Sun (\(T_{\rm eff}\) = 5772 K). The smallest RMS, which is 0.1092 mag, Figure 1: The \(E(B-V)\) - \(A_{\rm TESS}\) relation derived from 209 DDEB. Figure 2: Distribution of empirical standard \(BC_{\rm TESS}\) from 209 DDEB. Filled and empty circles are primaries and secondaries, respectively. The best fitting line represents empirical standard \(BC_{\rm TESS}\) - \(T_{\rm eff}\) relation. The number of stars (\(N\)) and RMS are indicated. One sigma deviation from the best-fitting curve is shown below. occurs at Gaia \(B_{\rm RP}\) band, while the largest RMS, which is 0.1363 mag, occurs at Johnson \(B\) band. The RMS of the TESS band has a moderate value (0.1110 mag) in between these limits. Note that the RMS values of each band determine the limiting accuracy of \(M_{\rm Bol}\) since \(M_{\rm Bol}=M_{\xi}+BC_{\xi}\) under the condition the uncertainty of \(M_{\xi}\) is negligible. The Maximum \(BC\) values (\(BC_{\rm max}\)) occurring at effective temperature \(T_{\rm max}\) are given below the absolute and apparent magnitudes if the main-sequence star with \(T_{\rm eff}\) = 5772 K is shown by symbols \(M_{\odot}\) and \(m_{\odot}\) in the table. The lowest part of the table indicates the range of positive \(BC\) values if exist. Accordingly, the \(BC\) values of Johnson \(B\) and Gaia \(G_{\rm BP}\) bands are always negative. The \(BC\) values of Johnson \(V\) and Gaia \(G\) bands would produce positive \(BC\) values in the middle temperatures; for the \(V\) band if 5300 \(<\)\(T_{\rm eff}\) < 7830 K, the G band if 4565 \(<\)\(T_{\rm eff}\) < 4720 K. The \(BC\) values of Gaia \(G_{\rm RP}\) are positive if \(T_{\rm eff}\) < 8590 K, while the \(BC_{\rm TESS}\) are positive if \(T_{\rm eff}\) < 8425 K. The relations are set valid for the main-sequence stars within the temperature range 2900-38000 K as it is implied by the sample of DDEB used in the calibrations. Independently calibrated multimodal standard \(BC\) - \(T_{\rm eff}\) relations are plotted all together in Fig.3. Except \(TESS\) (thick solid) and \(G_{\rm RP}\) (dashed) curves, which are similar, all of the other curves deviate from each other, especially towards lowest temperatures while all curves cross each other at \(T_{\rm eff}\)\(\approx\)10000 K and appear not deviating from each other \(T_{\rm eff}\)\(>\)10000 K as much as lower temperatures. Each curve in Fig.3 could be used to estimate a \(BC\) of a star at a photometric band preferred if its effective temperature were known. 
Because \(BC\) - \(T_{\rm eff}\) relations are independently calibrated and because observations at different photometric bands are independent, the predicted values of \(M_{\rm Bol}\) of various bands would be independent. The six independent \(M_{\rm Bol}\) for a single star could then be combined by taking an average. At last, the standard \(L\) of a star is calculated from the averaged \(M_{\rm Bol}\). Otherwise, each \(M_{\rm Bol}\) providing a standard \(L\) would be less accurate than the limiting accuracy indicated by the RMS values in Table 2. ### Data for Testing \(BC\) - \(T_{\rm eff}\) Relations Five band (\(B\), \(V\), \(G\), \(G_{\rm RP}\) and \(G_{\rm BP}\)) \(BC\) and \(BC\) - \(T_{\rm eff}\) relations has already been tested by Bakis & Eker (2022) by a sample of DDEB components from which \(BC_{\xi}\) were computed and \(BC_{\xi}\) - \(T_{\rm eff}\) calibrated, where \(\xi\) is any of the five photometric bands. Simply because DDEB are known to provide the most accurate stellar parameters (Andersen, 1991; Torres, 2010; Eker et al., 2014, 2015, 2018) and are already ready for such a test. The next most accurate stellar parameters appear to be coming from single stars which are hosting one or more exoplanets discovered by radial velocity variations and/or transiting light curves. High resolution and high signal-to-noise ratio spectra provide reliable \(log\,g\) and \(T_{\rm eff}\) of the host stars, while relatively shallow transits detected by ultra-high signal to ration light curves like TESS, on the other hand, resemble eclipses of DDEB, from which stellar \(R\) and \(M\) could be estimated at about \(\sim\) 8 per cent and 30 per cent (Stassun et al., 2017) using the direct observables. Later, the method is revised to provide host star \(R\) and \(M\) at a level of 10 per cent and 25 per cent by Stassun et al. (2018). 
At last, with the improved parallaxes of DR2 at the time and using granulation based \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{Coffict} & \(BC_{\xi}\) & \(BC_{\xi}\) & \(BC_{\xi}\) & \(BC_{\xi}\) & \(BC_{\xi}\) & \(BC_{\xi}\) & \(BC_{\xi}\) & \(BC_{\xi}\) & \(BC_{\xi}\) & \(BC_{\xi}\) & \(BC_{\xi}\) & \(BC_{\xi}\) \\ \hline a & -1272.4 & -3767.98 & -1407.4 & -3425.58 & -1455.3 & -3185.33 & -1457.3 & -1385.33 & -1457.3 & -1385.33 & -1457.3 & -1457.3 & -1457.3 \\ b & -1075.3 & -3598.56 & -1305.08 & -258.9 & -1352.8 & -1352.8 & -228.96 & -145.2 & -145.4 & -1457.1 & -1457.0 & -1457.0 & -1457.0 & -1457.0 \\ c & -3573.1 & -1285.6 & -1545.59 & -1558.56 & -1458.52 & -1457.52 & -257.52 & -552.16 & -141.7 & -147.7 & -147.09 & -847.34 & -1462.4 & -1462.4 \\ d & 4808.24 & -204.76 & -1407.02 & -1283.37 & -279.02 & -1470.02 & -1470.02 & -1470.02 & -1470.02 & -1470.02 & -1470.02 & -1470.13 & -1470.02 & -1470.02 \\ e & -2453.4 & -182.32 & -145.14 & -168.18 & -148.02 & -149.55 & -149.05 & -149.05 & -149.42 & -149.03 & -149.00 & -1470.02 & -1470.02 & -1470.02 & -1470.02 \\ e & -24582.1 & -12.469 & -146.16 & -1203.2 & -116.99 & -1109.00 & -149.09 & -149.00 & -149.00 & -149.00 & -149.00 & -1470.02 & -1470.02 & -1470.02 \\ \hline mas & 0.13257.2 & 0.12007.1 & -101.088 & -102.577 & 0.1019.79 & -1001.79 & -10055.1 & -1007.97 & -1001.951 & -1005.1 & -1007.02 & -1470.02 & -1470.02 \\ R\({}^{2}\) & 0.9616.0 & 0.9789 & 0.9793 & 0.9739.73 & 0.9784 & 0.984 & 0.9847 & -100.5 & -1007.02 & -143.34 & -1567.0 & -1577.17 & -1470.02 \\ R\({}^{2}\) & -0.600 & -0.600 & -0.609 & -0.106 & -0.104 & -0.134 & 0.567 & -0.517 & -1007.02 & -1470.02 & -1470.02 & -1470.02 & -1470.02 \\ M\({}^{2}_{\rm s}\) & 5.340 & -4.671 & -4.634.87 & -4.582 & -4.713 & -4.232 & -1470.02 & -1470.02 & -1470.02 & -1470.02 & -1470.02 & -1470.02 & -1470.02 \\ m & -282.22 & -28.900 & -28.938 & -26.988 & -26.068 & -27.399 & -27.399 & -27.349 & -27.349 & -27.349 & -27.349 & -27.349 \\ \hline EC\({}_{\rm max}\) & -301 & 0.094 & 0.105 & -0.0622 & -0.079 & 0.664 & -100.64 & -10.69 & -100.64 & -100.64 & -10.64 & -101.04 \\ T\({}_{\rm max}\) (K) & 8222 & 6397 & 5715 & 6829 & 4345 & 4210 & -1470.02 & -1470.02 & -1470.02 & -1470.02 & -1470.02 & -1470.02 & -1470.02 & -1470.02 \\ \hline \hline \end{tabular} *: Bailos & Eker (2022). \end{table} Table 1: Input parameters and \(BC_{\rm TESS}\) values of DDEB stars with TESS apparent magnitudes. The full table is available online. \begin{table} \begin{tabular}{c c c c c c c \(log\,g\) via Fourier background modelling (Corsaro et al., 2017) with TESS, the accuracy reached to 3 and 10 per cent levels respectively, which appear ideal for testing existing multiband \(BC\) and \(BC\) - \(T_{\rm eff}\) relations by a different sample of stars other than DDEB components. Host star physical parameters and associated errors are listed in NASA Exoplanet Archive3 together with the physical and orbital parameters of the confirmed planets. There was 5081 confirmed exoplanet belonging to 3799 host stars at the time when we downloaded the planetary systems composite data, from which we have selected 306 host stars having the most accurate \(R\) and \(T_{\rm eff}\) both within 2 per cent of accuracy for our preliminary list. This way, host stars with the most accurate stellar \(L\) at all luminosity classes were collected intentionally in order to see how good existing main-sequence multiband \(BC\) - \(T_{\rm eff}\) relations are usable to other stages of evolution. 
Noting that the \(L\) of a star primarily depends on its mass and metallicity, which is critical to identify sub-giants and main-sequence stars on H-R diagram, the host stars with a published metallicity were preferred. Because of the note "Data may not be self-consistent if drawn from multiple sources" on the table of Planetary Systems Composite Data, we have studied each star from its original sources to make sure self-consistency. In the second step, where we had to replace some of the original choices of \(R\) and \(T_{\rm eff}\) with a new \(R\) and \(T_{\rm eff}\) together with their associated uncertainty for the sake of consistency. We preferred not to discard a host star if the uncertainty of the new entry is bigger than 2 per cent but less than 3 per cent. Additional host stars fitting the selection criteria were also added thus our final list is enlarged to have 350 host stars. Footnote 3: [https://exoplanetarchive.ipac.caltech.edu/](https://exoplanetarchive.ipac.caltech.edu/) However, during the process of estimating interstellar dimming of the selected host stars one by one from their un-reddened and reddened SED fitting, the following six systems KELT-21, WASP-118, HATS-37, Kepler-38, Kepler-1647 and Kepler-1661 were discarded because they have secondaries polluting their SED. The bright host \(\alpha\) Tau was also discarded because it does not have Gaia apparent magnitudes. We had no choice but to discard the host K2-374 because its observed spectrophotometric fluxes could not be reached and the host K2-127 was discarded because its observed spectrophotometric fluxes were found not to fit its SED. The sample containing 341 hosts selected for this study for testing multiband \(BC\) and \(BC\) - \(T_{\rm eff}\) relations are listed in Table 3. Order and the star name are in columns 1 and 2 while parallax, error and reference are in columns 3, 4, and 5. Columns 6, 7 and 8 indicate radius, radius error and reference while columns 9, 10, 11 and 12 show spectral type, \(T_{\rm eff}\), temperature error and reference. Column 13 is the luminosity class determined by us using ZAMS and TAMS lines of PARSEC models (Bressan et al., 2012) according to the published metallicity. The luminosity according to the Stefan-Boltzmann law from the observed \(R\) and \(T_{\rm eff}\) in solar and SI units are in columns 14 and 15 respectively while its propagated relative uncertainty (\(\Delta L/L\)) is in column 16. Corresponding \(M_{\rm Bol}\) and uncertainty in the magnitude scale according to Eq.1 are given in columns 17 and 18. Observational random errors of \(R\) and \(T_{\rm eff}\) are compared to the propagated errors of \(L\) in Fig.4, where a much wider distribution of \(L\) errors with a peak at 5 per cent caused by the powers of \(R\) and \(T_{\rm eff}\) is obvious. Fig.5 displays the positions of the sample host stars on the H-R diagram where the luminosity classes are indicated. Thus, the single host star sample contains 281 main-sequence (V), 40 sub-giants (IV), 19 giants and one pre-main-sequence (PMS) star YSES-2 (Bohn et al., 2021). ## 3 Calculations ### Estimating interstellar extinction using SED Assuming a star is a black body having a temperature \(T_{\rm eff}\) and if there is no interstellar extinction, a spherical star of size \(R\) would produce a continuum flux (SED) at a distance \(d\) from its centre. 
Putting this star at a distance of \(d\) parsec from the Earth, according to the notation of Bakis & Eker (2022), its monochromatic flux above the atmosphere is

\[f_{\lambda}=\frac{R^{2}}{d^{2}}\pi B_{\lambda}(T_{\rm eff}), \tag{5}\]

where \(\pi B_{\lambda}(T_{\rm eff})\) is the monochromatic surface flux and \(R^{2}/d^{2}\) is the dilution factor.

Figure 4: Error distributions of effective temperatures, radii and luminosity of 341 single host-stars in the present sample.

Figure 5: HR diagram of the host stars sample. The main-sequence limits are indicated by the ZAMS and TAMS of solar metallicity (Z=0.014) from PARSEC models (Bressan et al., 2012), while large empty circles are giants, medium empty circles are sub-giants, small filled circles are main-sequence stars and the empty square is a pre-main-sequence star.

Using the parallaxes (\(\varpi\)), radii (\(R\)) and effective temperatures (\(T_{\rm eff}\)) in Table 3, the unreddened SEDs of the host stars in this study are computed in units of W m\({}^{-2}\) Å\({}^{-1}\) and compared to their observed spectrophotometric flux data from the SIMBAD database (Wenger et al., 2000). For the nearby hosts with no interstellar extinction, like \(\sigma\) Boo in the _upper_ panel of Fig. 6, the SED is found to fit the observed spectrophotometric flux data perfectly at all wavelengths. But for the hosts with interstellar extinction, like TOI-4329, the observed data towards the short wavelengths are found to be off, while the fit towards the long wavelengths appears acceptable, confirming that the observed input parameters (\(\varpi\), \(R\) and \(T_{\rm eff}\)) are consistent. Inconsistency occurs if one or all of the observed parameters are determined wrongly and/or there is excess radiation in the system, which could be due to radiating circumstellar dust or flux from a companion star if the host is not a single star but a close binary. The reddened SED of each host star is modelled one by one by adjusting \(E(B-V)\) of the system until a best-fitting reddened SED is obtained using the reddening model of Fitzpatrick (1999). The unreddened SED and the best-fitting reddened SED of the host star TOI-4329 are shown together in the _lower_ panel of Fig. 6. Filter transmission profiles, \(S_{\lambda}(\xi)\), of the photometric bands Johnson \(B,V\), Gaia \(G\), \(G_{\rm BP}\), \(G_{\rm RP}\) and \(TESS\) are needed to calculate the interstellar extinctions, \(A_{\xi}\),

\[A_{\xi}=2.5\log\frac{\int_{0}^{\infty}S_{\lambda}(\xi)f_{\lambda}^{0}d\lambda}{\int_{0}^{\infty}S_{\lambda}(\xi)f_{\lambda}d\lambda}, \tag{6}\]

where it is clear that if the un-reddened \(f_{\lambda}^{0}\) and reddened \(f_{\lambda}\) are the same, \(E(B-V)\) and all \(A_{\xi}\) would be zero, which means no interstellar extinction. The filter profiles of the photometric bands are displayed in Fig. 7, where the transmission data for Johnson \(B\) and \(V\) are taken from Bessell (1990), the Gaia passbands from Evans et al. (2018) and TESS from Sullivan et al. (2015). The passband-based interstellar extinctions determined by equation 6 for the host stars in this study are given in Table 4 together with the apparent magnitudes.
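As a worked illustration of Eqs. 5 and 6, the following minimal sketch computes a band extinction \(A_{\xi}\) from a blackbody SED. It assumes a top-hat filter profile and a crude \(1/\lambda\) reddening curve as a stand-in for the Fitzpatrick (1999) model actually used here; all function names and parameter values are illustrative.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m s^-1]
KB = 1.380649e-23    # Boltzmann constant [J K^-1]

def surface_flux(lam, teff):
    """Monochromatic surface flux pi*B_lambda(T_eff) [W m^-2 m^-1]."""
    return np.pi * (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * teff))

def unreddened_sed(lam, teff, r, d):
    """Eq. 5: f_lambda^0 = (R^2/d^2) * pi * B_lambda(T_eff); R, d in metres."""
    return (r / d) ** 2 * surface_flux(lam, teff)

def toy_reddening(lam, ebv):
    """Crude A_lambda ~ 3.1*E(B-V)*(0.55 um / lambda); stand-in for Fitzpatrick (1999)."""
    return 3.1 * ebv * (0.55e-6 / lam)

def band_extinction(lam, s_lam, f0, ebv):
    """Eq. 6: A_xi = 2.5 log10( int S f0 dlam / int S f dlam )."""
    f_red = f0 * 10 ** (-0.4 * toy_reddening(lam, ebv))
    return 2.5 * np.log10(np.trapz(s_lam * f0, lam) / np.trapz(s_lam * f_red, lam))

# Example: a Sun-like host at 50 pc with E(B-V) = 0.05, seen through a
# top-hat "V"-like filter between 500 and 600 nm.
lam = np.linspace(450e-9, 650e-9, 500)
s_lam = ((lam > 500e-9) & (lam < 600e-9)).astype(float)
f0 = unreddened_sed(lam, teff=5800.0, r=6.957e8, d=50 * 3.0857e16)
print(band_extinction(lam, s_lam, f0, ebv=0.05))  # A_xi in magnitudes, ~3.1*E(B-V)
```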
### Calculating standard \(L\) by multiband \(BC\)

Using the apparent magnitudes and the interstellar extinctions from Table 4 and the parallaxes from Table 3, the multiband absolute magnitudes of the host stars are calculated by

\[M_{\xi}=\xi+5\log\varpi+5-A_{\xi} \tag{7}\]

where \(\xi\) and \(A_{\xi}\) are the apparent magnitude and interstellar extinction, in which the symbol \(\xi\) indicates one of the bands Johnson \(B,V\), Gaia \(G\), \(G_{\rm BP}\), \(G_{\rm RP}\) or \(TESS\), while \(\varpi\) is the parallax in arcseconds. The uncertainties of the multiband absolute magnitudes are calculated by

\[\Delta M_{\xi}=\sqrt{(\Delta m_{\xi})^{2}+\left(5\log e\,\frac{\sigma_{\varpi}}{\varpi}\right)^{2}+(\Delta A_{\xi})^{2}} \tag{8}\]

where the first term in the square root represents the uncertainty contribution of the apparent magnitude, while the second and third terms are the uncertainty contributions of the parallax and interstellar extinction.

[Table 3: Order, host name, parallax, radius, spectral type, \(T_{\rm eff}\), luminosity class, \(L\) and \(M_{\rm Bol}\) of the host stars; see Section 2 for the column descriptions.]

Computed multiband absolute magnitudes and associated uncertainties are listed in Table 5 together with the multiband \(BC\) values from the \(BC-T_{\rm eff}\) relations in Table 2. Next, the absolute bolometric magnitudes from multiband photometry are calculated by

\[M_{\rm Bol}(\xi)=M_{\xi}+BC_{\xi} \tag{9}\]

and listed in Table 6 together with their propagated errors from

\[\Delta M_{\rm Bol}(\xi)=\sqrt{(\Delta M_{\xi})^{2}+(\Delta BC_{\xi})^{2}} \tag{10}\]

where \(\Delta BC_{\xi}\) are the RMS values in Table 2, while \(\Delta M_{\xi}\) are from Table 5. Notice that the propagated errors are only slightly bigger than the RMS values in Table 2. This must be the result of the high accuracy of the apparent magnitudes and parallaxes of the host stars selected for this study.
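The magnitude bookkeeping of Eqs. 7-10 can be sketched as follows (the parallax is taken in arcseconds, as Eq. 7 requires, and the example input values are illustrative):

```python
import numpy as np

FIVE_LOG_E = 5 * np.log10(np.e)  # = 5/ln(10), the parallax error factor in Eq. 8

def absolute_mag(m_app, varpi, a_xi):
    """Eq. 7: M_xi = xi + 5 log10(varpi) + 5 - A_xi, with varpi in arcsec."""
    return m_app + 5 * np.log10(varpi) + 5 - a_xi

def absolute_mag_err(dm_app, varpi, sig_varpi, da_xi):
    """Eq. 8: quadrature sum of apparent-magnitude, parallax and extinction terms."""
    return np.sqrt(dm_app**2 + (FIVE_LOG_E * sig_varpi / varpi)**2 + da_xi**2)

def m_bol(m_xi, bc_xi):
    """Eq. 9: M_Bol(xi) = M_xi + BC_xi."""
    return m_xi + bc_xi

def m_bol_err(dm_xi, dbc_xi):
    """Eq. 10: Delta BC_xi is the RMS of the BC-T_eff relation for that band."""
    return np.sqrt(dm_xi**2 + dbc_xi**2)

# Example: V = 8.00 +/- 0.01 mag, varpi = 20 +/- 0.02 mas, A_V = 0.05 +/- 0.01,
# BC_V = -0.10 with an RMS of 0.12 mag
MV = absolute_mag(8.00, 20e-3, 0.05)
dMV = absolute_mag_err(0.01, 20e-3, 0.02e-3, 0.01)
print(m_bol(MV, -0.10), m_bol_err(dMV, 0.12))
```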
Next, the multiband absolute bolometric magnitudes are combined by a weighted mean using

\[M_{\rm Bol}=\frac{\sum_{i=1}^{N}w_{i}M_{{\rm Bol},i}}{\sum_{i=1}^{N}w_{i}} \tag{11}\]

where the weights \(w_{i}\) are set by the uncertainties of the individual \(M_{{\rm Bol},i}\) values.

In an earlier study, a main-sequence \(BC-T_{\rm eff}\) relation valid in the temperature range 3100-36000 K was calibrated using the published parameters of 290 DDEB systems having Gaia DR2 trigonometric parallaxes. Non-main-sequence empirical \(BC-T_{\rm eff}\) relations are still not calibrated to confirm the theoretically claimed differences of a \(BC\) at various luminosity classes, which appear in the tables generated theoretically using model atmospheres. The situation has not yet changed; thus, one way of testing the claim of Flower (1996) is to apply the most recently calibrated main-sequence empirical multiband \(BC-T_{\rm eff}\) relations of Bakis & Eker (2022) and the \(BC_{\rm TESS}-T_{\rm eff}\) relation of this study to stars with the most accurate \(R\) and \(T_{\rm eff}\) of mixed luminosity classes, and to see how successfully their \(L\) would be recovered from their multiband apparent magnitudes (Johnson \(B\), \(V\), Gaia \(G\), \(G_{\rm BP}\), \(G_{\rm RP}\) and \(TESS\)), \(BC\) values (Table 2), parallaxes (Table 3) and interstellar extinctions (Table 4). This study gives us the opportunity to compare the \(L\) recovered from multiband photometry and the \(L\) computed from the published \(R\) and \(T_{\rm eff}\) by the Stefan-Boltzmann law.

### Comparing \(L\) from multiband photometry and \(L\) from \(R\) and \(T_{\rm eff}\)

Figure 8a compares the standard \(L\) of the host stars (Table 6, column 18) recovered from their six-band (Johnson \(B\), \(V\), Gaia \(G\), \(G_{\rm BP}\), \(G_{\rm RP}\) and \(TESS\)) photometry and the \(L\) (Table 3, column 15) of the same sample calculated directly from their published \(R\) and \(T_{\rm eff}\) using the Stefan-Boltzmann law. Regardless of the luminosity classes of the host stars, an outstanding agreement between the predicted and the computed \(L\) values is clear. The accuracy of the recovered \(L\) is indicated in Figure 8b, where the histogram error distributions of the recovered \(L\) of the host stars are compared to the histogram error distributions of the computed \(L\). It is obvious from Figure 8b that the indirect method of obtaining \(L\), which requires a pre-computed \(BC\), produced standard \(L\) values twice as accurate as those of the direct method, because the peak of the error distribution of the predicted \(L\) is 2 per cent while the peak of the error distribution of the computed \(L\) is 4 per cent.
This result, however, comes from combining the independent photometric \(M_{\rm Bol}\) values (Table 6) by a weighted average, where the standard error of the weighted mean is propagated as the relative uncertainty (\(\Delta L/L\)) of the recovered \(L\). If single-band photometry is used to obtain a single \(M_{\rm Bol}\) from \(M_{\xi}+BC_{\xi}\), where \(\xi\) is one of the bands Johnson \(B\), \(V\), Gaia \(G\), \(G_{\rm BP}\), \(G_{\rm RP}\) and \(TESS\), the accuracy of the recovered \(L\) would have been worse; worse even than the errors of the computed \(L\), which peak at 4 per cent. Because of the very accurate apparent magnitudes and parallaxes, and the few per cent accuracies of the interstellar extinctions (Table 4), the errors of the absolute magnitudes in Table 5 contributed so little with respect to the error contributions of the \(BC\)s that the errors of \(M_{\rm Bol}(\xi)\) in Table 6 are just slightly bigger than the corresponding RMS values in Table 2. According to Table 6, the most accurate \(M_{\rm Bol}(\xi)\) would have a 0.111 mag uncertainty, which corresponds to a 10 per cent error in the recovered \(L\) (\(\Delta L/L\)) according to Eq. 12 if single-channel photometry is used. If non-standard \(BC\)s were used, an additional 10 per cent error further reducing this accuracy would have been inevitable (see Torres 2010; Eker et al. 2021a,b, 2022), which was the case in earlier applications with various tabulated \(BC\) values.

Figure 8 indicates that a main-sequence \(BC\) or \(BC-T_{\rm eff}\) relation could be used in predicting the \(L\) of giants, subgiants and even a PMS star, without ruling out the fact that the \(BC\) of a star also depends on its luminosity class (\(\log g\)), metal abundance and even its rotational speed, as displayed by many tabulated \(BC\)s produced theoretically from model atmospheres. This fact becomes more noticeable in the comparison of the predicted \(R\) and published \(R\) of the host stars in the next subsection.

### Comparing predicted and published \(R\)

A small but clear underestimation of the recovered \(R\) of the giant and subgiant hosts is noticeable in Figures 9a and 9b; although the claim of Flower (1996) appears to work in practice, such a small but clear difference of \(BC\) values among different luminosity classes actually disproves him. The small difference appears to be negligible, so that a user could be satisfied by the recovered \(R\) of a giant star using a main-sequence \(BC-T_{\rm eff}\) relation, but this does not mean that the \(BC-T_{\rm eff}\) relations of different luminosity classes are the same. The accuracy of the predicted \(R\) of the whole sample is displayed in Figure 9c, where the error histograms of the predicted and published \(R\) of the host stars are compared. It is obvious in Figure 9c that the peaks of both distributions are about the same, at 2 per cent; however, the error distribution of the predicted \(R\) has a gradually decreasing tail reaching out to 6 per cent, while the errors of the published \(R\) have a sharply ending tail at 2.5 per cent due to the selection criteria of the host stars for this study. The more accurate sides of the distributions are also different, so that almost half of the most accurate \(R\)s of the host stars appear to have moved towards the less accurate side because of the less accurate tail of the \(L\) errors (see Fig. 4). Such an error distribution of the predicted \(R\) is deducible from Eq. 14, noticing that the peak of the error distribution of the predicted \(L\) is at 2 per cent (see Fig. 8b) and the peak of the error distribution of the published \(T_{\rm eff}\) (see Fig. 4a) is about 1 per cent.
Plugging these values into Eq. 14, one obtains a typical error of the predicted \(R\) of \(\sim\)2 per cent (\(\sqrt{5}\)). It is obvious that the errors of the published \(T_{\rm eff}\) dominate. That is, using the multiband \(BC-T_{\rm eff}\) relations in this study, it is now possible to obtain the \(R\) of a single star with a relative error twice the relative error of its \(T_{\rm eff}\).

Figure 8: a) Comparing recovered \(L\) (from photometry) and calculated \(L\) (from \(R\) and \(T_{\rm eff}\)) of the sample stars. b) Histogram distribution of the uncertainties associated with the recovered (dark) \(L\) compared to the histogram distribution of the uncertainties associated with the calculated (grey) \(L\).

Figure 9: Comparing recovered and published \(R\) of giants (a), subgiants (b), main-sequence stars and a PMS star. Error histograms of recovered and published \(R\) (c).

### How does a main-sequence \(BC-T_{\rm eff}\) relation work at all luminosity classes?

The main-sequence \(BC-T_{\rm eff}\) relations from Table 2 are used in predicting the \(L\) and \(R\) of the host stars. Figures 8 & 9 indicate the predictions are successful within the error limits regardless of the luminosity classes (V, IV, III and a PMS) of the host stars. Therefore, one may think the claim of Flower (1996), "All luminosity classes appear to follow a unique \(BC-T_{\rm eff}\) relation", would be true. Consequently, a reader would have the question "How do main-sequence \(BC-T_{\rm eff}\) relations work in predicting the \(L\) and \(R\) of a star, which could be a main-sequence star, a subgiant, a giant or even a pre-main-sequence star?". This is because the information about the luminosity class of a star is contained mostly in its apparent magnitude rather than its \(BC\). Therefore, because the contributions of the \(BC\)s are secondary or negligible in the predictions of \(M_{\rm Bol}(\xi)\), which are \(M_{\xi}+BC_{\xi}\), one can apparently obtain the standard \(L\) and \(R\) of a single star, which may belong to any luminosity class (V, IV, III, or PMS), from its multiband photometry within the error limits set by the propagation of the observational random errors and the RMS deviations of the \(BC-T_{\rm eff}\) relations (Table 2), if the star has a reliable \(T_{\rm eff}\) and parallax.

## 5 Conclusions

The following main-sequence \(BC_{\rm TESS}-T_{\rm eff}\) relation, where \(X=\log T_{\rm eff}\),

\[BC_{\rm TESS}=-318.533+232.298X-55.2916X^{2}+4.27613X^{3} \tag{15}\]

is calibrated using the published \(R\) and \(T_{\rm eff}\) of 390 main-sequence stars which are the components of 202 DDEB with TESS apparent magnitudes selected from 209 DDEB. The other five-band (Johnson \(B,V\), Gaia \(G\), \(G_{\rm BP}\) and \(G_{\rm RP}\)) \(BC-T_{\rm eff}\) relations (Table 2), calibrated with the same DDEB sample, are taken from Bakis & Eker (2022). Being different from the previously calibrated relations, which are polynomials of the fourth degree, this newly calibrated relation is found to be a third-degree polynomial fitting best to the existing data. The uncertainties of the coefficients and other statistics, such as the RMS deviations and correlation coefficient (\(R^{2}\)), are given in Table 2 together with the statistics of the other five band relations. The five-band \(BC-T_{\rm eff}\) relations and \(BC\) coefficients were already tested by Bakis & Eker (2022) by a successful recovery of the \(L\) of the main-sequence stars from the same DDEB sample used in the calibrations of these relations.
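Eq. 15 is straightforward to apply; a minimal sketch with the coefficients exactly as printed above (the test temperature is illustrative):

```python
import numpy as np

def bc_tess(teff):
    """Eq. 15: BC_TESS as a third-degree polynomial in X = log10(T_eff)."""
    x = np.log10(teff)
    return -318.533 + 232.298 * x - 55.2916 * x**2 + 4.27613 * x**3

print(bc_tess(5772.0))  # BC_TESS for a Sun-like T_eff within the calibration range
```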
Consequently, Bakis & Eker (2022) concluded that one of the secondary methods of obtaining \(L\), which requires a pre-computed \(BC\), may provide a more accurate \(L\) than the classical method relying on the Stefan-Boltzmann law if the information provided by the multiband \(BC-T_{\rm eff}\) relations is combined. Briefly, various \(M_{\rm Bol}\) values are calculated first using the \(BC\) values from the multiband \(BC-T_{\rm eff}\) relations for the given \(T_{\rm eff}\) of a star. Then all \(M_{\rm Bol}\) values are combined into a mean \(M_{\rm Bol}\). Eq. 1 is used to obtain the \(L\) of the star from the mean \(M_{\rm Bol}\). Including the \(BC_{\rm TESS}-T_{\rm eff}\) relation produced in this study, the six-band (Johnson \(B,V\), Gaia \(G\), \(G_{\rm BP}\), \(G_{\rm RP}\) and \(TESS\)) \(BC-T_{\rm eff}\) relations, which are deduced from DDEB components, were tested similarly by recovering the \(L\) and \(R\) of the single host stars with the most accurate published \(T_{\rm eff}\) and \(R\). Both tests (recovering \(L\) and recovering \(R\)) are found successful, leading us to conclude that: 1) using the multiband \(BC-T_{\rm eff}\) relations and \(T_{\rm eff}\), one can obtain the \(L\) of a single star more accurately than with the classical direct method of obtaining \(L\) relying on the Stefan-Boltzmann law; 2) the accurately predicted \(L\) of a single star could be used in predicting its \(R\), a critical parameter for exoplanet studies, with reasonable accuracy. The \(T_{\rm eff}\) of the star is the only stellar parameter one needs to know to obtain its \(L\) and \(R\), in addition to its parallax, apparent magnitudes in the Johnson \(B,V\), Gaia \(G\), \(G_{\rm BP}\), \(G_{\rm RP}\) and \(TESS\) passbands, and interstellar extinctions. Since the present sample contains stars of various luminosity classes (281 main-sequence, 40 subgiants, 19 giants and 1 PMS), we can conclude that not only the \(L\) and \(R\) of main-sequence stars but also those of subgiant, giant and even pre-main-sequence stars could be recovered at about 2 per cent uncertainty, if the input \(T_{\rm eff}\), in the range from 2900 to 38000 K, is accurate to one per cent, using the six-band \(BC-T_{\rm eff}\) relations in Table 2, despite the fact that they were calibrated with main-sequence stars. This result implies that the claim of Flower (1996), "All luminosity classes appear to follow a unique \(BC-T_{\rm eff}\) relation", is true; however, it does not rule out the effect of the \(\log g\) of a star on its \(BC\). This is because there is a small but clear underestimation in the recovered \(R\) of the subgiant and giant hosts according to Figure 9, and perhaps a similar underestimation of the recovered \(L\) of the same subgiant and giant hosts also exists in Figure 8. These negligible differences may well be caused by small differences in the \(BC\) values due to different luminosity classes. Thus, we can also conclude that the information about the luminosity class of a star is mostly contained in its apparent magnitude rather than its \(BC\). Thus, a main-sequence \(BC\) or a main-sequence \(BC-T_{\rm eff}\) relation could be used in predicting the \(L\) and \(R\) of single stars in all luminosity classes.
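A compact sketch of this recipe, combining the per-band \(M_{\rm Bol}\) values, converting the mean to \(L\), and inverting the Stefan-Boltzmann law for \(R\). Inverse-variance weights and the standard IAU form of Eq. 1, \(M_{\rm Bol}=-2.5\log(L/L_{0})\) with \(L_{0}=3.0128\times 10^{28}\) W, are assumptions here, and the example values are illustrative:

```python
import numpy as np

L0 = 3.0128e28             # assumed IAU zero-point of Eq. 1 [W]
SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def mean_mbol(mbol, dmbol):
    """Weighted mean of per-band M_Bol (Eq. 11) and its standard error."""
    w = 1.0 / np.asarray(dmbol) ** 2
    return np.sum(w * mbol) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

def luminosity(mbol, dmbol):
    """Invert M_Bol = -2.5 log10(L/L0); dL/L = 0.4 ln(10) dM_Bol."""
    L = L0 * 10 ** (-0.4 * mbol)
    return L, 0.4 * np.log(10) * dmbol * L

def radius(L, dL, teff, dteff):
    """L = 4 pi R^2 sigma T^4  =>  (dR/R)^2 = (dL/2L)^2 + (2 dT/T)^2."""
    R = np.sqrt(L / (4 * np.pi * SIGMA_SB * teff**4))
    return R, R * np.hypot(0.5 * dL / L, 2 * dteff / teff)

# Six-band example (Johnson B, V, Gaia G, G_BP, G_RP, TESS); M_Bol values illustrative
mb, dmb = mean_mbol([4.41, 4.39, 4.43, 4.40, 4.42, 4.44],
                    [0.12, 0.11, 0.12, 0.13, 0.12, 0.12])
L, _ = luminosity(mb, dmb)
# The sqrt(5) example from the text: dL/L = 2 per cent, dT/T = 1 per cent
R, dR = radius(L, 0.02 * L, 5772.0, 0.01 * 5772.0)
print(L / 3.828e26, R / 6.957e8, dR / R)  # solar units; dR/R ~ 0.022
```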
We encourage researchers to calculate the \(BC\) values of a sufficient number of non-main-sequence stars and to calibrate empirical \(BC-T_{\rm eff}\) relations also for giants, subgiants and PMS stars, in order to see the difference and to confirm the theoretically produced tabulated \(BC\)s, where the \(BC\) values are usually presented as functions of luminosity class (\(\log g\)) (Johnson 1966; Cox 2000), metallicity [m/H] (Girardi et al. 2002, 2008; Masana et al. 2006; Pedersen et al. 2020) and even the speed of rotation (Chen et al. 2019).

## Acknowledgements

This work uses the VizieR catalogue access tool, CDS, Strasbourg, France, and the SIMBAD database, operated at CDS, Strasbourg, France. This work presents results from the European Space Agency (ESA) space mission, Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular, the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is [https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia). The Gaia archive website is [https://archives.esac.esa.int/gaia](https://archives.esac.esa.int/gaia). We thank Yasin Yalçın and Sezer Kayacı for their assistance during the data collection of the single host stars. Finally, we thank Oleg Malkov, who reviewed the paper with useful comments.

## Data Availability

The data underlying this article are available in the article and in its online supplementary material.
2306.14419
The Role of Magnetic Shear in Reconnection-Driven Flare Energy Release
Using observations from the Solar Dynamics Observatory's Atmospheric Imaging Assembly and the Ramaty High Energy Solar Spectroscopic Imager, we present novel measurements of the shear of post-reconnection flare loops (PRFLs) in SOL20141218T21:40 and study its evolution with respect to magnetic reconnection and flare emission. Two quasi-parallel ribbons form adjacent to the magnetic polarity inversion line (PIL), spreading in time first parallel to the PIL and then mostly in a perpendicular direction. We measure the magnetic reconnection rate from the ribbon evolution, and also the shear angle of a large number of PRFLs observed in extreme ultraviolet passbands ($\lesssim$1 MK). For the first time, the shear angle measurements are conducted using several complementary techniques allowing for a cross-validation of the results. In this flare, the total reconnection rate is much enhanced before a sharp increase of the hard X-ray emission, and the median shear decreases from 60$^\circ$-70$^\circ$ to 20$^\circ$, on a time scale of ten minutes. We find a correlation between the shear-modulated total reconnection rate and the non-thermal electron flux. These results confirm the strong-to-weak shear evolution suggested in previous observational studies and reproduced in numerical models, and also confirm that, in this flare, reconnection is not an efficient producer of energetic non-thermal electrons during the first ten minutes when the strongly sheared PRFLs are formed. We conclude that an intermediate shear angle, $\le 40^\circ$, is needed for efficient particle acceleration via reconnection, and we propose a theoretical interpretation.
J. Qiu, M. Alaoui, S. K. Antiochos, J. T. Dahlin, M. Swisdak, J. F. Drake, A. Robison, C. R. DeVore, V. M. Uritsky
2023-06-26T04:50:44Z
http://arxiv.org/abs/2306.14419v2
# The Role of Magnetic Shear in Reconnection-Driven Flare Energy Release

###### Abstract

Using observations from the Solar Dynamics Observatory's Atmospheric Imaging Assembly and the Ramaty High Energy Solar Spectroscopic Imager, we present novel measurements of the shear of post-reconnection flare loops (PRFLs) in SOL20141218T21:40 and study its evolution with respect to magnetic reconnection and flare emission. Two quasi-parallel ribbons form adjacent to the magnetic polarity inversion line (PIL), spreading in time first parallel to the PIL and then mostly in a perpendicular direction. We measure the magnetic reconnection rate from the ribbon evolution, and also the shear angle of a large number of PRFLs observed in extreme ultraviolet passbands (\(\lesssim\)1 MK). For the first time, the shear angle measurements are conducted using several complementary techniques allowing for a cross-validation of the results. In this flare, the total reconnection rate is much enhanced before a sharp increase of the hard X-ray emission, and the median shear decreases from 60\({}^{\circ}\)-70\({}^{\circ}\) to 20\({}^{\circ}\), on a time scale of ten minutes. We find a correlation between the shear-modulated total reconnection rate and the non-thermal electron flux. These results confirm the strong-to-weak shear evolution suggested in previous observational studies and reproduced in numerical models, and also confirm that, in this flare, reconnection is not an efficient producer of energetic non-thermal electrons during the first ten minutes when the strongly sheared PRFLs are formed. We conclude that an intermediate shear angle, \(\leq\) 40\({}^{\circ}\), is needed for efficient particle acceleration via reconnection, and we propose a theoretical interpretation.

## 1 Introduction

Magnetic reconnection in the solar corona is widely believed to be the energy release mechanism that drives solar flares. For eruptive two-ribbon flares, the Carmichael-Sturrock-Hirayama-Kopp-Pneuman (CSHKP) model (Carmichael, 1964; Sturrock, 1966; Hirayama, 1974; Kopp & Pneuman, 1976) provides the canonical description. An arcade of flare loops forms: at their foot-points, two flare ribbons spread apart and away from the magnetic polarity inversion line (PIL) as reconnection proceeds along a vertical current sheet in the corona. The leading edges of the ribbons map the feet of the reconnecting magnetic field lines (Svestka, 1980; Forbes & Priest, 1984). The model also schematically describes the evolution of energized particles and plasma, as well as the dynamics of the lower atmosphere in response to the flare energy deposition. The greatest challenge to understanding flare reconnection is that it occurs in the corona, where detailed, accurate measurements of the magnetic field are very rare. The standard flare model connecting the dynamics in the corona to the lower atmosphere response, however, provides a recipe for inferring reconnection properties by tracking the evolution of flare ribbons. For a typical Alfvén speed of order 1,000 km s\({}^{-1}\) and length scale of 10,000 km, the reconnection-released energy flux travels along the flare loops, i.e., the closed field lines formed by reconnection, to reach and heat the upper chromosphere in a matter of seconds. Therefore, signatures of impulsive brightening in the lower atmosphere may be tracked to derive the reconnected flux, \(\psi=\int B_{r}da\), where \(B_{r}\) is the photospheric radial magnetic flux density and \(da\) is the area of newly brightened flare ribbons.
Its time derivative \(\dot{\psi}\) gives the global reconnection rate. For strictly two-dimensional models such as CSHKP, the global reconnection rate is equivalent to a uniform reconnection electric field \(E_{rec}=\dot{\psi}/L=v_{r}B_{r}\), where \(L\) is the length of the macroscopic reconnection current sheet (RCS) running along the axis of the arcade and \(v_{r}\) is the apparent speed of the ribbons perpendicular to the PIL (Forbes & Priest 1984; Forbes & Lin 2000). The reconnection rate, in terms of \(\dot{\psi}\) or \(E_{rec}\), has been measured in this way for more than two decades (Poletto & Kopp 1986; Fletcher & Hudson 2001; Qiu et al. 2002, 2004; Isobe et al. 2002, 2005; Krucker et al. 2003; Jing et al. 2005; Saba et al. 2006; Temmer et al. 2007; Liu et al. 2009; Kazachenko et al. 2017; Hinterreiter et al. 2018). In essentially all models of flares, including CSHKP, the ultimate energy source for the event is the magnetic free energy stored in the strongly sheared field of a filament channel (e.g. Patsourakos et al. 2020). The basic scenario for eruptive two-ribbon flares is that the eruption ejects the shear, after which reconnection relaxes the field toward a potential state. Consequently, the flare reconnection is presumed to start between field lines that are not anti-parallel. An invariant component of the inflow magnetic field, often called the guide field or shear component \(B_{g}\), is expected to vary as the location of the reconnecting field, \(B_{rec}\), rises in altitude. Post-reconnection flare loops (PRFLs) are also expected to make an angle with the PIL, an angle that varies during the flare, as has been demonstrated in observations (e.g. Aschwanden & Alexander 2001). The shear variation is also manifest in the apparent motions of flare ribbons or kernels, observed in the optical, ultraviolet, and hard X-ray (HXR) emissions that map the feet of the PRFLs. Observations have often shown that flare ribbons or kernels at first move or spread along the PIL and then move away from it (Vorpahl 1976; Kawaguchi et al. 1982; Kitahara & Kurokawa 1990; Krucker et al. 2003; Fletcher et al. 2004; Bogachev et al. 2005; Lee & Gary 2008; Yang et al. 2009; Qiu 2009; Qiu et al. 2010, 2017). For the along-the-PIL motion, conjugate flare foot-points in magnetic fields of opposite polarities may move in the same direction (i.e., zipper motion) or in opposite directions, either approaching or receding from each other. The parallel-to-perpendicular evolution of this motion is sometimes related to changes in shear, the angle made by the line connecting conjugate foot-points with respect to the PIL. For two decades, observations have revealed strong-to-weak shear evolution in two-ribbon flares (Aschwanden & Alexander 2001; Su et al. 2006; Ji et al. 2006; Su et al. 2007; Liu & Wang 2009; Yang et al. 2009; Qiu et al. 2010, 2017; Qiu & Cheng 2022), suggesting that the relative guide field, defined by \(\mathcal{R}\equiv B_{g}/B_{rec}\), in the RCS decreases during flare reconnection. Note that strong-to-weak shear evolution is not necessarily present in all flares. Many flares exhibit irregular motions of the conjugate foot-points (e.g. Fletcher et al. 2004; Bogachev et al. 2005; Grigis & Benz 2005; Yang et al. 2009; Cheng et al., 2012; Inglis and Gilbert, 2013), reflecting the complex configurations or tempo-spatial structures of flare reconnection. 
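A schematic sketch of this flux counting, assuming a time series of co-aligned ribbon-brightness maps and a radial magnetogram on the same pixel grid; the threshold, pixel scale and first-brightening rule are illustrative stand-ins for the actual detection criteria (described in Section 3), and in practice \(\psi\) is accumulated separately in each magnetic polarity:

```python
import numpy as np

def cumulative_reconnection_flux(brightness, b_r, pix_area_cm2, thresh):
    """psi(t) = sum of |B_r|*dA over ribbon pixels, each counted once at its
    first brightening above `thresh`."""
    counted = np.zeros(brightness.shape[1:], dtype=bool)
    psi = np.zeros(brightness.shape[0])
    for t in range(brightness.shape[0]):
        new = (brightness[t] > thresh) & ~counted  # newly brightened pixels
        counted |= new
        psi[t] = (psi[t - 1] if t else 0.0) + np.abs(b_r[new]).sum() * pix_area_cm2
    return psi  # [Mx], since B_r is in G (= Mx cm^-2)

# Synthetic example; a real analysis would use co-aligned UV images and an
# HMI radial magnetogram.
rng = np.random.default_rng(1)
imgs = rng.random((100, 128, 128))          # brightness cube (t, y, x)
br = rng.normal(0.0, 300.0, (128, 128))     # radial field [G]
pix_cm = 0.6 * 7.25e7                       # ~0.6 arcsec pixel, 1 arcsec ~ 725 km
psi = cumulative_reconnection_flux(imgs, br, pix_cm**2, thresh=0.995)
psi_dot = np.gradient(psi, 24.0)            # global reconnection rate [Mx/s]
```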
Reconnection releases magnetic energy, a significant amount of which is transferred to non-thermal particles (Emslie et al., 2012; Aschwanden et al., 2019). Past observational studies have often shown that HXR (or microwave) emissions are temporally, and sometimes spatially, correlated with \(\dot{\psi}\) or \(E_{rec}\) (Qiu et al., 2002, 2004; Krucker et al., 2003; Fletcher et al., 2004; Lee et al., 2006; Temmer et al., 2007; Jing et al., 2007), albeit sometimes with delays in the HXRs on the order of 1-2 minutes (e.g. Miklenic et al., 2007; Yang et al., 2011; Naus et al., 2022; Vievering et al., 2023). On the other hand, most of these studies did not verify a one-to-one coincidence between significant HXR emission and an enhanced reconnection rate, however the latter is measured. In particular, flare emissions in UV, optical, or soft X-rays (SXRs) and the inferred reconnection rates may rise well before the occurrence of impulsive and significant non-thermal HXRs (see Warren and Warshall, 2001; Su et al., 2006; Krucker et al., 2011; Caspi et al., 2014; Naus et al., 2022, for some prominent examples), and it has not been clear what mechanisms govern the partition of flare energy during different stages of the flare evolution. Recent numerical simulations find that flare energetics depend critically on the reconnection rate as well as the guide field (Dahlin et al., 2015; Arnold et al., 2021). The models predict that the ratio of the guide field to the reconnecting component plays an important role in determining the efficiency of particle acceleration via magnetic islands (Dahlin et al., 2017; Dahlin, 2020; Arnold et al., 2021). Consequently, experimental determination of the guide field could provide stringent tests of the theoretical models. Furthermore, while the phenomenological relationship between the reconnection rate, \(\dot{\psi}\) or \(E_{rec}\), and flare emission has been intensively studied in observations, the role of \(B_{g}\) has not been considered because of the difficulty of quantifying this parameter. In this paper, we measure the shear of post-reconnection flare loops (PRFLs) as a proxy for the relative guide field \(\mathcal{R}=B_{g}/B_{rec}\) during the evolution of the M6.9 two-ribbon flare SOL20141218T21:40. We then investigate how the shear is related to the flare energetics, in particular the efficiency of converting free magnetic energy into kinetic energy of non-thermal electrons. We also relate the evolution of shear in the observed flare to the evolution of shear and reconnection guide field in a three-dimensional simulation of an eruptive solar flare (Dahlin et al., 2022). Our paper is organized as follows. Section 2 provides an overview of the flare observations. Section 3 discusses the evolution of the flare ribbons and loop tops, and Section 4 the shear evolution of the post-reconnection flare loops. The flare energetics are the focus of Section 5. Inferences from recent modeling work that yield insight into our results are developed in Section 6. Section 7 offers a summary of our findings and final conclusions.

## 2 Overview of the SOL20141218T21:40 M6.9 Two-Ribbon Flare

The M6.9 flare discussed here occurred in the active region NOAA AR 12241 and was accompanied by a coronal mass ejection (CME). Joshi et al. (2017) studied its evolution in great detail. They suggested that the CME flux rope was formed in-situ in the early phase of the flare by reconnection of a sheared arcade.
The rope erupted as soon as it formed, accelerating the reconnection and forming an arcade of post-reconnection flare loops (PRFLs) anchored to two parallel flare ribbons along the PIL (Figure 1a-h). This scenario resembles "tether-cutting" reconnection (Moore et al., 2001) in a modified standard model. Subsequently, the erupting rope interacted (reconnected) with high-lying flux, forming a remote circular ribbon (not shown in this paper; see Figures 10 and 11 of Joshi et al. 2017) before finally escaping the corona, indicating that AR 12241 was in a "breakout" reconnection configuration (Antiochos et al., 1999). Figure 1 presents an overview of the evolution of the flare adjacent to the PIL. The flare was observed by AIA with a time cadence of 24 s in the UV 1600 passband and 12 s in each of the seven EUV passbands. The flare ribbon development follows the elongation-to-expansion style, with ribbons rapidly spreading along the PIL in the first few minutes, followed by expansion away from and perpendicular to the PIL, first rapidly, then more gradually. Based on the flare ribbon morphology (Figure 1a-d), we track the flare evolution during the intervals marked in Figure 2a: 21:40-21:46 UT, 21:46-21:58 UT, and 21:58-22:20 UT, marked as phases I, II, and III, respectively. Such elongation-to-expansion development is often accompanied by the strong-to-weak shear evolution of PRFLs reported in many previous studies (Aschwanden & Alexander, 2001; Su et al., 2006, 2007; Liu et al., 2009; Qiu et al., 2010, 2017; Qiu & Cheng, 2022), and is also evident in this flare as shown in the EUV images in Figure 1e-h. Figure 2a illustrates the flare emission in the soft X-ray (SXR) 1-8 A band observed by GOES, its time derivative, the X-rays at photon energies 6-12 keV and 35-80 keV observed by RHESSI, and the total count rates in the UV 1600 A passband from AIA. The three stages of the flare ribbon evolution are marked, showing that the first stage, the ribbon _elongation_ stage, has little energetic consequence in terms of flare radiation. The second stage, the _fast expansion_ stage, is coincident with the impulsive phase of the flare when emissions peak; during this stage non-thermal hard X-rays (HXRs), represented by \(\geq 30\) keV emission, are most significant. Finally, the third stage, the _slow expansion_ stage, is coincident with the decay phase of the flare, when flare emissions have passed their peak and \(\geq\)30 keV HXRs have decreased. Note that the flare SXR light curves do not exhibit a simple smooth decay after the peak, suggesting additional episodes of energy release, possibly involving reconnection between the erupting flux rope and overlying flux systems (Joshi et al., 2017). In these later episodes, however, energetic HXR emissions beyond 30 keV are significantly diminished. X-ray images from RHESSI are constructed and displayed in Figure 2b-f. To generate these images, data from detectors 3 and 6 through 9 were reduced with the _clean_ algorithm using a beam width of 2 (Dennis & Pernak, 2009), making the angular resolution \(\approx\) 7\({}^{\prime\prime}\). Images of the SXR sources between 6-12 keV were taken from the RHESSI archive.1 The SXR emission below 20 keV is located close to the southern ribbon, but not at the same place as the 30-80 keV sources; the lower-energy X-ray emission likely comes from or above the top of newly formed flare loops. The 6-12 keV source exhibits apparent motion initially along the PIL. 
After 22:05 UT, the source moves beyond the PRFL system observed in EUV passbands, possibly due to production at higher altitude when the erupting flux rope interacts with high-lying flux systems. Footnote 1: [https://hesperia.gsfc.nasa.gov/rhessi_extras/flare_images/hsi_flare_image_archive.html](https://hesperia.gsfc.nasa.gov/rhessi_extras/flare_images/hsi_flare_image_archive.html). The images are constructed by applying the CLEAN algorithm to data from detectors 3, 6, 7, 8 and 9, and the integration time of each map varies from 16 s to 120 s. The HXR sources in 30-100 keV are constructed between 21:50 UT and 22:05 UT, with integration times of 80-240 s to obtain good count statistics. These sources are mostly located at the southern ribbon and cover nearly all of it. The presence of multiple HXR sources along the southern ribbon may be partially due to the dynamic range of the instrument. However, it is noteworthy that the locations of the sources coincide with co-temporal regions of increased flux in AIA 1600 A, indicating a higher energy deposition at discrete locations along the ribbon. We find very little thick-target HXR emission at the northern ribbon. Such an asymmetry in the HXR thick-target source has been often observed (Sakao, 1994; Yang et al., 2009) and likely reflects a magnetic mirroring effect where the weaker HXR source corresponds to the region of higher photospheric magnetic field strength (Melrose and White, 1981; Liu et al., 2009; Daou and Alexander, 2016). In support of the magnetic mirroring scenario, Figure 3 shows the photospheric radial magnetogram in the flare active region (panel a), obtained by the Helioseismic and Magnetic Imager (HMI; Schou et al., 2012) and the Spaceweather HMI Active Region Patch (SHARP) database. The distribution of magnetic field strength (flux density) on each of the flare UV ribbons is displayed in gray scale in panel b. The mean magnetic flux density (red curves in panel b) on the northern ribbon in the negative magnetic field is more than twice that on the southern ribbon located in the positive field, and at the peak time the total UV emission on the northern ribbon is one third that on the southern ribbon. This study focuses on the major phase of the flare development adjacent to the PIL, when and where flare emissions are most energetic. In the subsequent sections we derive properties of magnetic reconnection from the evolution of the flare ribbons and PRFLs and investigate how they are related to the flare energetics, in particular the non-thermal energetics reflected in the HXR emissions. ## 3 Evolution of the flare ribbons and X-ray loop tops The temporal and spatial evolution of flare emission signatures reflect the dynamics of reconnection energy release. Whereas the UV ribbon emission maps all energy release events on the chromosphere, the reconstructed X-ray sources likely only reflect the strongest events due to the limited dynamic range of the X-ray maps. In this section, we infer the total reconnection rate from the apparent motion of the UV ribbons and also measure the apparent motion of the centroids of the soft X-ray and/or UV sources to estimate the locations of prominent energy release events likely related to where non-thermal electrons are produced and deposited, respectively. Figure 4a shows the evolution of the newly brightened flare ribbons observed in the AIA UV 1600 A passband at a cadence of 24 s, with the color code indicating the time the ribbons start to brighten. 
To minimize saturation effects or transient unrelated brightenings, we detect the ribbon fronts when the brightness (in units of data counts per second) in the 1600 Å passband is at least six times the pre-flare quiescent background and stays bright for at least four minutes (for further discussion see Naus et al., 2022). Furthermore, we assume that the effects of reconnection only appear once at a given location, so that ribbon front pixels are found only at their first brightening.

Figure 1: Overview of the SOL20141218T21:40 M6.9 flare. (a-d) Evolution of flare ribbons observed in UV 1600 Å by AIA. (e-h) Post-reconnection flare loops (PRFLs) observed in the EUV 131 or 171 Å passbands by AIA. Images in all panels have been rotated to 21:00 UT and therefore co-aligned.

Figure 2: (a) Soft X-ray (1-8 Å) and its time derivative, hard X-ray (6-12 and 35-80 keV), and UV 1600 Å light curves of the flare obtained by GOES, RHESSI, and AIA, respectively. The solid vertical lines divide the flare evolution into three stages (see text). (b) RHESSI X-ray 6-12 keV 80% contours plotted over an AIA 131 Å image at 22:40:59 UT. The colors indicate the midpoint of the RHESSI integration time interval; see color bar above panel (a). The black arrows indicate the apparent motion of the HXR centroids over time. (c-f) HXR \(>\) 30 keV contours superimposed on AIA 1600 Å images. RHESSI contour colors correspond to the color bar above panel (a) to easily follow the time evolution.

It is evident that during the first stage, the stage of ribbon _elongation_, the evolution of the ribbon fronts is less like what is depicted in a 2D picture, where only expansion of the ribbons perpendicular to the PIL would be expected. The ribbons spread along the PIL at an apparent speed of about 10-40 km s\({}^{-1}\). The elongation parallel to the PIL halts by the end of the fast expansion stage. Both ribbons also expand in the direction perpendicular to the PIL, with a mean speed of 3-4 km s\({}^{-1}\) during the _fast expansion_ stage and then 1-2 km s\({}^{-1}\) in the _slow expansion_ stage. These mean speeds in the parallel and perpendicular directions are consistent with those reported for other two-ribbon flares (Qiu et al., 2017, and references therein). Note that the estimated speeds reflect the mean motion. At various locations, the ribbon front may expand much faster (Naus et al., 2022, and references therein).

The apparent motion of the flare ribbons is accompanied by motion of the X-ray sources shown in Figure 2. In particular, the X-ray emission at 6-12 keV is likely produced at or above the top of flare loops just formed by reconnection. Figure 4b shows the trajectory of the centroid of the 6-12 keV source, \((x_{xr},y_{xr})\), indicative of the apparent motion of the loop top. Since the cadence of the thick-target HXR maps at \(\geq 30\) keV is very low, we do not measure the centroid of these HXR sources. However, since the UV 1600 Å light curve closely follows that of the non-thermal HXR emission during the impulsive phase (Figure 2a), we track the centroid of the UV emission as a proxy for the location of prominent thick-target non-thermal HXR emissions.
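A minimal sketch of such a centroid measurement, assuming (as the \(f=0.7\)-\(0.95\) thresholds quoted in Figure 4 suggest) that the centroid is the intensity-weighted mean position of pixels above a fraction \(f\) of the frame's peak brightness; names are illustrative:

```python
import numpy as np

def emission_centroid(img, f):
    """Intensity-weighted centroid (x, y) of pixels brighter than f * max(img)."""
    w = np.where(img > f * img.max(), img, 0.0)
    ys, xs = np.indices(img.shape)
    return (w * xs).sum() / w.sum(), (w * ys).sum() / w.sum()

# e.g. track (x_uv, y_uv) through a UV image cube with f = 0.7:
# centroids = [emission_centroid(frame, 0.7) for frame in uv_cube]
```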
Figure 4b also displays the trajectory of the centroid \((x_{uv},y_{uv})\) of the UV emission for the positive and negative ribbons separately (at the cadence of 72 s).3 We note that, during the impulsive phase when UV (and X-ray) emissions are significant and less dispersed, the centroid measurements with different thresholds are more consistent and therefore more reliable.

Figure 3: (a) Photospheric radial magnetogram obtained from HMI/SHARP. Color contours indicate magnetic flux density of \(\pm 200\), 400, 800, 1600 Mx cm\({}^{-2}\). (b) 2D histogram of photospheric radial magnetic flux density (gray scale) on the southern (positive \(B_{r}\)) and northern (negative \(B_{r}\)) ribbons, respectively, integrated for every two minutes. Red curves give the mean magnetic field \(\langle B_{r}\rangle\) on the photosphere (solid) and the mean magnetic field extrapolated to 1 Mm above the photosphere (dashed) on the ribbons. These are compared with the total HXR count rates at 35-80 keV (blue; arbitrary units), and the total UV emissions (black) integrated on the positive and negative ribbons separately.

For a close look at the impulsive phase, the X-ray and UV centroids measured between 21:44-22:00 UT are further illustrated in Figure 4d (at the cadence of 24 s). The comparison between \((x_{xr},y_{xr})\) and \((x_{uv},y_{uv})\) suggests a meandering motion of the sources of prominent emissions at both the chromosphere and the corona in the early phase. Up to 21:53 UT, the UV centroid in the positive magnetic field exhibits an apparent back-and-forth motion along the PIL. The X-ray 6-12 keV source moves in the same manner with a similar range of distance, suggesting that energy release occurs along the RCS (e.g. Grigis & Benz, 2005; Krucker et al., 2005; Inglis & Gilbert, 2013). In addition, the UV centroid in the positive magnetic field also moves away from the PIL, with this perpendicular motion becoming faster after 21:53 UT when the 6-12 keV source speeds up as well.

Figure 4: (a) Evolution of the flare ribbon fronts, derived with the AIA 1600 Å images, superimposed on a radial photospheric magnetogram obtained from HMI/SHARP. (b) HXR 6-12 keV (RHESSI) centroids (diamonds; with \(f=0.85\), see text) and the UV 1600 Å (AIA) centroids (pluses; with \(f=0.7\), see text) superimposed on an EUV image in the AIA 171 Å passband. For clarity of display, we have reduced the cadence of the UV centroids to 72 s, or every third frame. (c) Total reconnection rate in terms of the flux change rate \(\dot{\psi}\) (black) measured in the positive field and negative field, respectively, the mean plane-of-sky motion velocity of the HXR 6-12 keV centroids, computed with the centroids measured with \(f=0.75,0.85,0.95\) (red), and the HXR count rates at 35-80 keV (blue). (d) Close-up view of the foot-point (UV centroid, with \(f=0.7\), and at full cadence 24 s) trajectory and the loop-top (X-ray 6-12 keV centroid, with \(f=0.85\)) trajectory between 21:44-22:00 UT. In panels (a), (b), and (d), the colors represent the times of the observed signatures as indicated by the color bar. Note that the color code in (a) and (b) is the same as in Figures 2, 5, and 6, but the color code in (d) is different. The curve in panels (a), (b), and (d) outlines the polarity inversion line of the radial magnetic field \(B_{r}\). All images in this figure have been rotated to 21:00 UT, and measurements using these images reflect the coordinates at this reference time.

The UV centroid
in the negative field exhibits similar meandering motions in the early phase, and becomes less regular later, perhaps due to the weaker and more dispersed emission on this ribbon (see Figure 3b). Overall, the apparent trajectory of the UV centroids suggests that the projected motion of the X-ray source at 6-12 keV may be partly due to the apparent motion along the PIL, particularly in the early phase, and partly due to the rise of the coronal source (and coincident with the separation of the two ribbons or UV centroids), which is more significant after 21:53 UT. The total reconnection rate, i.e., the flux change rate, can be measured from the ribbon front evolution. Figure 4c shows \(\dot{\psi}_{+}\) and \(\dot{\psi}_{-}\) measured in the southern (positive) and northern (negative) ribbons, respectively.4 The two flux change rates evolve similarly and are roughly balanced. (In principle, equal amounts of positive and negative fluxes should participate in reconnection.) At the peak, \(\dot{\psi}\) averaged between \(\dot{\psi}_{+}\) and \(\dot{\psi}_{-}\) is about 6\(\times\)10\({}^{18}\) Mx s\({}^{-1}\). The apparent speed of the X-ray 6-12 keV source, \(v_{top}\), is also measured and displayed in Figure 4c. This source is accelerated during the _fast-expansion_ stage at an average _projected_ speed of a few tens of km s\({}^{-1}\), with the peak speed approaching 100 km s\({}^{-1}\) between 21:53 and 21:56 UT at nearly the same time as the peak flux change rate. Observations therefore indicate the consistent evolution of the apparent motion in the corona and chromosphere, with both being indicative of reconnection dynamics. Footnote 4: The measurements in this study use the radial magnetic field \(B_{r}\) rather than the longitudinal component (as used in Qiu and Cheng (2022)). On the other hand, we do not correct for projection effects in calculating the areas of the newly brightened ribbon fronts and we do not extrapolate the chromospheric magnetic field (ribbons actually form in the upper chromosphere) from the photospheric magnetic field, as these two effects partially cancel each other. These uncertainties can offset the measured total reconnection flux by up to 30% in most flares (Qiu et al., 2007), but have lesser effects on the time evolution of the reconnection flux and the global reconnection rate. Significant flare emission, particularly non-thermal hard X-ray emission \(\mathcal{I}_{hxr}\) above photon energies of 30 keV, occurs in the fast expansion stage. However, Figure 4c shows that in this flare \(\dot{\psi}\) and \(v_{top}\) rise and peak 2-4 minutes before \(\mathcal{I}_{hxr}\). In particular, the reconnection rate derived from the ribbon evolution has been much enhanced before the prominent high-energy HXR emission. Such time lags have been reported in several prior studies that measured \(\dot{\psi}\) or \(\langle E_{rec}\rangle\) by tracking the ribbon fronts (Falchi et al., 1997; Miklenic et al., 2007; Qiu et al., 2010; Naus et al., 2022; Vievering et al., 2023). We note that many previous studies have compared \(\mathcal{I}_{hxr}\) and the reconnection electric field \(E_{rec}\) measured in a different way, by tracking the apparent motion of the brightest optical, UV, or HXR kernels and assuming \(E_{rec}\approx v_{k}B\), \(v_{k}\) and \(B\) being the apparent motion speed of the kernel and magnetic field at the kernel. 
Some of these studies have revealed a temporal correlation between the two for some times and/or at some locations (Qiu et al., 2002; Krucker et al., 2003; Qiu et al., 2004; Fletcher et al., 2004; Lee et al., 2006; Lee and Gary, 2008), whereas others do not find a temporal or spatial correlation, particularly with refined tempo-spatial scales (Grigis and Benz, 2005; Inglis and Gilbert, 2013). These discrepancies suggest that the reconnection dynamics can be complicated by the field configuration, which can be more 2D-like in some flares than others. In a 3D reconnection configuration, the reconnection rate \(E_{rec}\) is not related to \(\dot{\psi}\) in a simple way since the motion of the flare ribbons along the PIL can make a significant contribution. Furthermore, the reconnection rate might not be the only property governing flare energetics. It has been proposed that the reconnecting guide field plays a crucial role in energizing particles (Wang et al., 2016; Dahlin et al., 2017; Arnold et al., 2021). Information about the reconnection guide field may be gleaned from the observed shear of the PRFLs. In past studies, this shear angle, \(\theta_{rb}\), has been inferred from observations of flare ribbons or kernels. For example, we may assume that non-thermal electrons travel to the chromosphere along "loops" (which may or may not exist) connecting the UV centroids in the positive and negative ribbons. Figure 4d would then suggest that before 21:46 UT, such "loops" are very sheared (violet) and that in the fast expansion stage (blue, 21:46-21:52 UT), when 35-80 keV HXR is rising, the "loops" connecting the UV centroids become less sheared. Finally, toward the peak of the HXR emission (green, 21:52-21:58 UT), the "loops" are least sheared. This is consistent with the strong-to-weak shear evolution trend inferred with ribbon fronts (Qiu & Cheng, 2022, and references therein). The connectivity between the centroid pair, however, is an assumption. In the next section, instead, we will employ observations of PRFLs in the EUV passbands, and make direct measurements of the shear of a large number of PRFLs, which will provide substantially more information than inferred from the evolution of flare ribbons or kernels. ## 4 Shear evolution of post-reconnection flare loops (prfls) As the flare evolves from the elongation to the fast expansion stage, the PRFLs become notably less sheared, exhibiting the strong-to-weak shear evolution reported in studies of many other two-ribbon flares (Aschwanden & Alexander, 2001; Ji et al., 2006; Su et al., 2006, 2007; Liu et al., 2009; Yang et al., 2009; Qiu, 2009; Qiu et al., 2010, 2017). Since the shear of the PRFLs is likely a proxy for the guide component of the magnetic field flowing into the RCS (Dahlin et al., 2022), we attempt to characterize it here. In most previous studies (except Qiu et al., 2017), the shear of the PRFLs has been inferred using observations of flare ribbons or kernels exclusively. In some of these studies, the shear angle was measured between the PIL, approximated by a straight line, and another straight line connecting two dominant flare kernels in UV (Su et al., 2006), optical (Ji et al., 2006), or HXR (Liu et al., 2009; Yang et al., 2009) emissions, assuming that these are conjugate foot-points of PRFLs. 
The complement of this angle is defined as the shear angle \(\theta\): \(\theta\approx 0^{\circ}\) indicates the PRFL is perpendicular to the PIL and \(\theta\approx 90^{\circ}\) refers to very high shear where the PRFL almost parallels the PIL. In the left panels of Figure 5, the strong-to-weak shear evolution of the PRFLs is apparent. However, at any given time an arcade of PRFLs is formed with their foot-points outlined by a number of flare kernels aligned along the ribbon front. Therefore, it is not directly evident from ribbon observations which pairs of kernels in opposite magnetic fields are conjugate foot-points. Furthermore, the thick-target HXR emission is mapped to a few kernels almost exclusively on one ribbon, without clear signatures of their conjugates on the other ribbon. Due to these factors, the above-described method of estimating the shear from the foot-points is not easily applied to this flare. Instead, we will measure the shear angle \(\theta\) directly using PRFLs observed in the EUV images by AIA. ### Measuring the Shear of PRFLs To do so, we first track PRFLs in the time series of EUV images. PRFLs anchored to flare ribbons formed in the elongation stage are easily visible in the EUV 131 A passband (Figure 1e,f, Figure 5i-a) and then, when these loops have cooled down sufficiently, in the EUV 304 A passband (Figure 5i-b). PRFLs anchored to the flare ribbons formed later in the expansion stages are visible in the EUV 171 A passband (Figure 1g,h; images in the 171 A passband at earlier times are saturated and not usable), as well as the EUV 304 A passband (Figure 5i-b to i-e). These broadband EUV images can capture emission by plasmas in the temperature range \(\leq 1\) MK (O'Dwyer et al., 2010; Boerner et al., 2012). PRFLs visible in these passbands have cooled to these temperatures minutes after they are formed by reconnection. Therefore, the measured shear \(\theta\) is delayed by their cooling time to the passband at which they are observed. We have experimented on tracking PRFLs in three passbands, in EUV 304 A images that are least subject to saturation, and in the EUV 131 A and 171 A passbands when they are not saturated before or after the peak of the flare. To track PRFLs, we apply the algorithm of Aschwanden (2010) that identifies all curvilinear structures in a given image. As unwanted byproducts, the algorithm can also pick out active region loops and, sometimes, flare ribbons. Non-PRFLs are cleaned out with a semi-automated approach guided by the geometry of ribbons. Briefly, a PRFL has to be rooted at and confined between two flare ribbons. The method is applied to more than 200 images in the 304 A passband at the full cadence (12 s per image) between 21:55 and 22:45 UT, and has successfully identified close to 2,000 PRFLs. Figure 5: _Left_: Evolution of flare ribbon fronts (color symbols) derived from the UV 1600 Å images by AIA during the (a) elongation, (b-c) fast expansion, and (d-e) slow expansion phases. Superimposed are the EUV images from AIA that show post-reconnection flare loops (PRFLs) anchored at the ribbon fronts and the hard X-ray sources at \(\geq\) 30 keV (color contours) obtained from RHESSI. The colors of the ribbon fronts and HXR contours indicate the times given in the color bar at right. 
_Middle_: PRFLs identified from AIA 304 Å images, superimposed with the ribbon fronts (pink symbols) during (a) elongation, (b,c) fast-expansion, and (d,e) slow expansion, on a pre-flare magnetogram of the photospheric radial magnetic field from HMI. Colors of the PRFLs indicate the times the PRFLs are identified in the AIA 304 Å images minus 15 minutes, the nominal cooling time (see text in Section 4.2), which are the same colors used in Figure 6. _Right_: magnetic loops from the potential field extrapolation projected to the AIA image plane, superimposed on a pre-flare magnetogram of the photospheric radial magnetic field from HMI. Potential field loops are traced from ribbon fronts (pink symbols) during the different stages of the flare evolution. The colors of the potential field loops (right panels) indicate the times at which the ribbon fronts formed (left panels); see color bar at right.

The technique is also applied to about 90 images in the 171 Å passband at half cadence (24 s per image) between 22:03 and 22:45 UT, which yields about 900 PRFLs; images in this passband are saturated before 22:03 UT. The PRFLs found in these two different passbands are generally consistent. The middle panels in Figure 5 illustrate PRFLs tracked from a series of EUV 304 Å images, superimposed on the \(B_{r}\) map and the ribbon fronts (pink symbols) during different stages of the flare evolution, where the color code indicates the times the PRFLs are observed (minus 15 minutes, the nominal cooling time of the PRFLs; see Section 4.2 for more discussion). Qualitatively, it is evident from Figure 5 that PRFLs in the early stage are more sheared, i.e., more inclined toward the PIL, than those formed later. Strictly speaking, the shear of a PRFL is a 3D property that is not feasible to determine without a realistic model of the magnetic configuration of the reconnection current sheet. As an alternative, we compare the geometry of the observed PRFLs with the extrapolated potential field lines projected to the AIA image plane. These are traced from all 5,800 ribbon front pixels in both the positive and negative magnetic polarities. The right panels of Figure 5 show a subset of the potential field lines anchored at the ribbon fronts, traced from the northern (negative-polarity) and southern (positive-polarity) ribbons. Colors indicate the time when the ribbon pixels are brightened.

Figure 6: _Top_: the shear angle \(\theta_{lp}\) of the PRFLs (1,320 measurements) identified from AIA 304 Å images with respect to the vertical of the PIL of the photospheric radial magnetic field, measured where the PRFLs cross the PIL, versus the \(x\) position on the PIL (a) or the times they are identified (b). The colors indicate the times the PRFLs are identified in the AIA 304 Å images minus 15 minutes (see Section 4.2), which are the same as in the middle panels in Figure 5. The solid black curve in (b) presents the median \(\theta_{lp}\) every minute. For comparison, the dashed black curve shows the median \(\theta_{lp}\) every minute of the PRFLs identified in the AIA 171 Å images. _Bottom_: The shear angle \(\theta_{pot}\) of the potential field lines (769 measurements) rooted at the flare ribbon fronts and projected to the AIA image plane with respect to the vertical of the PIL, versus the \(x\) position along the PIL (c) or the time (d) of the ribbon fronts. The color coding is the same as in the right panels of Figure 5. The solid black curve shows the median \(\theta_{pot}\) every two minutes.
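Measuring the shear of a single loop reduces to simple plane geometry once the loop and the PIL are available as curves in the image plane. The following is a minimal sketch of such a measurement in Python; the two-column coordinate arrays, the 2″ crossing tolerance, and the simplified unsigned convention are our assumptions for illustration, not the exact implementation used here (the full analysis measures the crossing angle between 0 and 180 degrees, clockwise from the east, as described in the next paragraph).

```python
import numpy as np

def shear_angle(loop_xy, pil_xy, tol=2.0):
    """Shear angle (deg) of one loop where it crosses the PIL.

    loop_xy, pil_xy: (N, 2) arrays of image-plane coordinates (arcsec).
    Returns np.nan when the loop does not come within `tol` of the PIL.
    """
    # Closest pair of vertices defines the (approximate) crossing point.
    d = np.linalg.norm(loop_xy[:, None, :] - pil_xy[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    if d[i, j] > tol:
        return np.nan
    # Local tangent directions of the loop and of the PIL at the crossing.
    t_loop = loop_xy[min(i + 1, len(loop_xy) - 1)] - loop_xy[max(i - 1, 0)]
    t_pil = pil_xy[min(j + 1, len(pil_xy) - 1)] - pil_xy[max(j - 1, 0)]
    cosang = abs(t_loop @ t_pil) / (np.linalg.norm(t_loop) * np.linalg.norm(t_pil))
    crossing = np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))  # 0-90 deg
    return 90.0 - crossing  # complement: 0 = perpendicular, 90 = parallel to PIL
```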
The comparison of the observed PRFLs with the potential field indicates that the PRFLs deviate more from the potential field in the early phases of the flare, i.e., the elongation phase and the early expansion phase. Such a comparison can be quantified by measuring the angle made by a PRFL (or a potential field line projected in the AIA image plane) with the PIL where it crosses the PIL. By convention, this angle ranges between 0 and 180 degrees, measured clockwise from the east (the PIL roughly follows the east-west direction). We define the complement of this angle as the shear angle, \(\theta_{lp}\) for PRFLs and \(\theta_{pot}\) for the potential field. The shear angle \(\theta_{lp}\) is measured for all PRFLs, yielding more than 1,300 valid measurements (i.e., when the PRFL crosses the PIL). The angle \(\theta_{pot}\) is measured in one-fifth of all 5,800 potential field lines projected to the AIA image plane, yielding more than 700 valid measurements. Figure 6a-b shows the measured \(\theta_{lp}\) for about 1,300 PRFLs identified in the AIA 304 Å images, along the PIL (panel a) and during the flare evolution (panel b). Colors indicate the times the PRFLs are observed (minus 15 minutes) and are the same as in the middle panels of Figure 5. Initially \(\theta_{lp}\) is as high as 60-70\({}^{\circ}\), but over a period of 10 minutes, its median decreases to about 20\({}^{\circ}\) and then continues to decrease gradually as the flare evolves. In comparison, the shear of the potential field \(\theta_{pot}\) also exhibits a decreasing trend, but its median starts at 20\({}^{\circ}\) and then decreases to around \(0\pm 10^{\circ}\). We note the difference in the spatial distributions of the potential field loops and the observed PRFLs. For example, Figure 6a and 6c show that the observed PRFLs extend to the east of \(-210^{\prime\prime}\) during the slow expansion stage, whereas a larger number of potential field loops lie west of \(-160^{\prime\prime}\). However, a comparison of the shear evolution of a subset of modeled and observed loops crossing the PIL only between \(-210^{\prime\prime}\) and \(-160^{\prime\prime}\) finds that the trend of the shear evolution of the subsets is not changed significantly. This analysis supports the strong-to-weak shear evolution of PRFLs, which is also consistent with the trend inferred qualitatively from the apparent motion of the ribbon fronts or UV centroids.

### Cooling Times of PRFLs

The potential field is traced from the locations of the ribbon fronts, which are brightened at the times PRFLs are just formed by reconnection. The PRFLs then cool down to \(\leq\)1 MK, as necessary to produce prominent emission in the 304 Å (or 171 Å) passband. We can estimate this cooling time in several ways. First, Figure 7a compares the light curve of the total UV 1600 Å emission \(\mathcal{I}_{1600}\) from the ribbons with that of the total EUV 304 Å emission \(\mathcal{I}_{304}\) at the locations along the PIL that sample loops connecting the two ribbons. The peaks of \(\mathcal{I}_{304}\) lag those of \(\mathcal{I}_{1600}\) by \(\sim\)5 min. Figure 7b shows the rise time \(\tau_{rise}\) of the UV emission at each ribbon pixel, measured either as the time it takes for the UV emission to rise from six times the pre-flare quiescent brightness to its peak, or as the width of the half-Gaussian used to approximate the UV light curve from its rise to peak. Either way, the statistical analysis shows that, in this flare, \(\tau_{rise}\) of the UV 1600 Å emission in the majority (\(\geq\) 70%) of the 5,800 ribbon-front pixels is larger than 2 minutes, with the median \(\tau_{rise}\) being 4-5 min. Taking this rise time into account, it takes about 10 minutes, on average, for reconnection-formed PRFLs to produce prominent emission in the EUV 304 Å passband.
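For concreteness, the half-Gaussian estimate of \(\tau_{rise}\) can be written in a few lines. The sketch below assumes a single 1D light curve sampled at times `t` with a known pre-flare quiescent level; it follows the definition above, but it is not the exact pipeline applied to the 5,800 pixels.

```python
import numpy as np
from scipy.optimize import curve_fit

def rise_time(t, flux, quiet):
    """tau_rise of one ribbon-pixel UV light curve, from a half-Gaussian
    fit to the rise phase: from where the flux first exceeds 6x the
    pre-flare quiescent level up to the light-curve peak."""
    ipk = int(np.argmax(flux))
    above = np.where(flux[: ipk + 1] > 6.0 * quiet)[0]
    if len(above) < 3:
        return np.nan  # rise phase not resolved
    sl = slice(above[0], ipk + 1)
    # Half-Gaussian centered on the peak time, fit to the rise only.
    model = lambda x, a, tau: a * np.exp(-((x - t[ipk]) ** 2) / (2.0 * tau ** 2))
    p0 = (flux[ipk], max(t[ipk] - t[above[0]], 1e-3))
    (_, tau), _ = curve_fit(model, t[sl], flux[sl], p0=p0)
    return abs(tau)
```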
Next, we estimate the PRFL cooling times using the Ultraviolet Foot-point Calorimeter (UFC) method to model the evolution of the flare loops with heating rates inferred from the foot-point UV light curves (Qiu et al., 2012; Zhu et al., 2018; Qiu, 2021). As a first-order estimate, the lengths of these loops are computed using the potential field extrapolation. This way, the 5,800 (half-)loops, assumed to be anchored at the 5,800 ribbon pixels, are modeled, and the synthetic total X-ray and EUV emissions from these loops are compared against observations by GOES and AIA, which allows constraints to be placed on the few free parameters used in the model. Once reasonable agreement between the observed and synthetic total X-ray/EUV emissions has been achieved, we obtain the synthetic time profiles of the EUV emission in the AIA 304 Å passband from individual loops (again, assumed to be anchored at the ribbon pixels) and estimate the time lags \(\tau_{304}\) of the peak EUV emission in these loops with respect to the times when their feet are brightened in the UV 1600 Å passband. Figure 7c shows histograms of the cooling times \(\tau_{304}\) and \(\tau_{171}\). Statistically, \(\tau_{304}\) is found to lie between 5 and 30 minutes, with the mode at 11 minutes and the median at 16 minutes. The time lags can also be estimated as the difference between the peak UV 1600 Å emission at the ribbon front pixel and the peak synthetic EUV 304 Å emission in the (half-)loop anchored to the foot-point. The mode and median of these lags are 4 minutes and 9 minutes, respectively; recall that the median of the rise time, \(\tau_{rise}\), of the foot-point UV emission is 5 minutes. The time lags of the loop emission in 171 Å are similar, suggesting that PRFLs seen in these two passbands emit at similar temperatures. These time lags, or the cooling times of the PRFLs, are found to grow with loop length: shorter loops cool more quickly than longer loops. Estimated with these different approaches, the cooling time of the bulk of the observed PRFLs in the 304 Å and 171 Å passbands ranges between 5 and 15 minutes. Neglecting such variations, we take \(\langle\tau_{304}\rangle\approx 15\) minutes for all the PRFLs as a nominal cooling time, and shift the times of the PRFLs backward by 15 minutes in the middle panels in Figure 5 and in Figure 6a-b.
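The per-loop lag estimate is simple enough to state explicitly. A minimal sketch follows, with a toy pair of light curves standing in for a foot-point UV curve and the corresponding synthetic loop EUV curve (all numbers are illustrative only):

```python
import numpy as np

def peak_lag(t_uv, f_uv, t_euv, f_euv):
    """Cooling-time proxy: lag of the (synthetic) EUV 304 A loop peak
    behind the UV 1600 A peak at the ribbon pixel anchoring the loop."""
    return t_euv[np.argmax(f_euv)] - t_uv[np.argmax(f_uv)]

# Toy example: a UV burst peaking at t = 10 min and a broader EUV loop
# response peaking 9 min later.
t = np.linspace(0.0, 60.0, 601)                 # minutes
f_uv = np.exp(-((t - 10.0) / 3.0) ** 2)
f_euv = np.exp(-((t - 19.0) / 6.0) ** 2)
print(peak_lag(t, f_uv, t, f_euv))              # -> 9.0 minutes
```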
Finally, we compare the variation of the shear \(\theta_{lp}\) measured from 1,300 PRFLs observed in the EUV 304 Å passband with the shear \(\theta_{rb}\) that is inferred from the mean positions of the ribbon fronts (see details of the method in Qiu & Cheng, 2022).5

Figure 7: (a) The light curve of total UV 1600 Å emission from the flare ribbons (black), in comparison with that of the total emission of the EUV 304 Å (red) or the EUV 171 Å (blue) at locations along the PIL. (b) Histograms of the rise times of UV 1600 Å emission at 5,800 ribbon front pixels, measured either as the time it takes for the UV emission to rise from six times the pre-flare quiescent brightness to its peak (solid), or as the width of the half-Gaussian used to approximate the UV light curve from its rise to peak (dashed). (c) Histograms of the cooling times of PRFLs to the EUV 304 Å and 171 Å passbands estimated with the UFC model.

The brightening of the ribbon fronts essentially coincides with the time PRFLs are just formed by reconnection. Figure 8a shows the median of \(\theta_{lp}\) over every minute (red; shifted back by 15 minutes) in comparison with \(\theta_{rb}\) (orange). The two independent measurements show consistent strong-to-weak shear evolution, but over different time scales. This is due to the varying cooling times of the PRFLs to the EUV 304 Å passband. Specifically, in the early or impulsive phase of the flare, \(\tau_{304}\) is expected to be shorter, of order 5-10 minutes, than in the late phase.

Footnote 5: Qiu & Cheng (2022) measured the shear index \(\mathcal{S}\), which is equivalent to the tangent of the shear angle \(\theta_{rb}\) if the PIL is assumed or approximated to be a straight line. Also note that \(\theta_{rb}\approx\tan^{-1}(\mathcal{S})\) is a crude measurement of the shear of a “loop” assumed to connect the average position of the ribbon fronts in the positive field and that in the negative field.

Although it is difficult to establish a one-to-one association between PRFLs observed in EUV images and their foot-points observed in the UV 1600 Å images, the comparison of the observed shear of the PRFLs with that of the potential field loops anchored at the flare ribbon fronts provides quantitative evidence supporting the strong-to-weak shear evolution of PRFLs. A more accurate, one-to-one comparison can be achieved with improved magnetic and hydrodynamic models of the PRFLs, which will be pursued in future work.

## 5 Flare Energetics and Reconnection Properties

To understand the implication of the shear for flare energetics, we compare its evolution measured with the PRFLs against other properties. Figure 8a shows the median of \(\theta_{lp}\) over every minute (red), as well as \(\theta_{rb}\) inferred from the ribbon fronts (orange) and the flux change rate \(\dot{\psi}\) (both at the cadence of 24 s). The light curve of the HXR 35-80 keV counts is given in Figure 8b. As discussed in Section 3, \(\dot{\psi}\) rises and peaks ahead of the HXR emission; meanwhile, \(\langle\theta_{lp}\rangle\) or \(\theta_{rb}\) starts high and decreases, during which time the observed \(\geq 30\) keV HXR emission rises toward its peak. The flare HXR emission is a proxy for the flux carried by the non-thermal electrons, and the shear is a proxy for the relative guide field in the RCS. In this section, we derive properties of the non-thermal electrons from HXR spectral analysis and relate them to the observationally measured shear. We conduct spectroscopic analysis of the flare X-ray emissions observed by RHESSI to derive properties of non-thermal electrons.6 Panels (b-d) of Figure 8 show the non-thermal electron distribution parameters: the electron spectral index, the total non-thermal electron flux \(\mathcal{F}_{e}\), and the low-energy cutoff, respectively. The widths of the curves represent the \(1\sigma\) uncertainty on the respective fit parameters. At time intervals before 21:48 and after 22:06, the electron spectral index is fixed at the plotted values. These were adjusted to provide the best fits and to reduce the number of free parameters, because the counts above 30 keV are significantly reduced. Therefore, the value of the spectral index is plotted but no uncertainty is provided.
Figure 8e shows an example of the fit to the observed spectrum at the peak of the HXR emission. The fits to all the spectra, integrated with varying intervals depending on the counts, are provided in the supplemental movie. Footnote 6: HXR fluxes were detected by RHESSI up to \(\sim 100\) keV. A spatially-integrated spectral analysis using detector 6 was performed using a model with two thermal components, a single power law consistent with the collisional thick-target model, two physical spectral lines at 6.7 keV and 8 keV and an instrumental line around 10 keV that is needed to obtain a good spectral fit. Analyses using detectors 1 and 3 separately give similar results. In addition, the spectra were corrected for pulse pile-up and albedo effects assuming an isotropic distribution of electrons (the default parameters in OSPEX). The fitting procedure used here is different from that in Qiu & Cheng (2022), who only fitted the photon spectrum, not the electron spectrum, and used an iso-thermal model with one thermal component plus a broken power-law. The flux of accelerated electrons is coupled to the value of the low-energy cutoff. The spectral fits reveal a flattening at lower energies, i.e., below 30-50 keV, during the HXR peak time between 21:56 and 21:58 UT, whereas we deduce a single power-law without any significant flattening before and after this interval. While the electron spectral index at higher energies reflects that of the accelerated distribution, the spectral index at lower energies can have other sources, including propagation effects, for example, through deceleration of the non-thermal beam by the co-spatial return current electric field (e.g., Zharkova & Gordovskyy, 2006; Allred et al., 2020; Alaoui et al., 2021, and references therein) or non-uniform target ionization (Su et al., 2011), or even instrumental effects (see Holman et al., 2011; Kontar et al., 2011, for reviews on the low-energy cutoff and mechanisms affecting the HXR spectra). The accelerated distribution can include a double power law that would appear either as a gradually flattening spectrum toward lower energies or a low-energy cutoff value higher than the transition energy between the thermal and non-thermal portions of the X-ray spectrum. However, the interpretation adopted in this paper is an HXR spectrum that flattens as a consequence of a low-energy cutoff (e.g., Holman, 2003). Although it is known that a sharp low-energy cutoff is unstable to wave-particle interactions (e.g., Emslie, 2003; Hannah et al., 2009), its adoption is customary, both to simplify calculations of the non-thermal electron flux and because it is usually indistinguishable from a gradually flattening low-energy cutoff (Saint-Hilaire & Benz, 2005). During time intervals where the HXR spectrum is consistent with a single power-law (without a flattening at lower energies), only the maximum low-energy cutoff and minimum electron flux can be deduced from the spectra. This corresponds to all the intervals before and after the peak of the impulsive phase at 21:56:20-21:58:00. Conversely, as the cutoff is needed to explain the flattening, _under the assumption of an injected single power-law electron distribution in the collisional thick target model_, at the above-mentioned HXR peak times the value of the electron flux (and low-energy cutoff) is determined rather than its lower limit (and upper limit, respectively). 
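The coupling between the total electron flux and the low-energy cutoff follows from elementary integrals over the injected spectrum. A minimal sketch under the idealized assumption of a sharp cutoff on a single power law (the normalization and the numbers below are illustrative, not values fitted to this flare):

```python
import numpy as np

def total_electron_rate(A, delta, E_c):
    """Total injection rate [electrons/s] of F(E) = A * E**(-delta)
    [electrons/s/keV] above a sharp low-energy cutoff E_c [keV]."""
    assert delta > 1.0
    return A * E_c ** (1.0 - delta) / (delta - 1.0)

def total_electron_power(A, delta, E_c, keV_erg=1.602e-9):
    """Injected power [erg/s] carried by the same distribution."""
    assert delta > 2.0
    return keV_erg * A * E_c ** (2.0 - delta) / (delta - 2.0)

# Raising the cutoff from 20 to 40 keV at fixed A and delta = 5 lowers the
# inferred electron rate by a factor of 2**4 = 16:
print(total_electron_rate(1e35, 5.0, 20.0) / total_electron_rate(1e35, 5.0, 40.0))
```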
Note that the total non-thermal flux of electrons peaks ahead of the hard X-ray emission at 35-80 keV, possibly because of the higher deduced value of the low-energy cutoff during the HXR peak, similar to the findings of Warmuth et al. (2009). The peak of the magnetic flux change rate \(\dot{\psi}\) in Figure 8a is nearly coincident with the peak of the non-thermal electron flux \(\mathcal{F}_{e}\) in Figure 8c. On the other hand, \(\dot{\psi}\) has already been enhanced in the first 10 minutes of the flare, when the non-thermal flux is insignificant. This relationship will be explored in the following section.

## 6 Inferences from modeling

Neither the magnetic field nor the energy distribution of electrons in the reconnection current sheet of a solar flare can be measured directly. However, recent advances in theory and numerical modeling of eruptive flares and reconnecting current sheets may provide significant insights into the observed evolution of eruptive events, such as the flare studied in detail here. Even the most sophisticated such simulations are, of necessity, far simpler than any actual event occurring in nature. Nevertheless, just as the principles of the canonical CSHKP model provide a basic understanding of our observations, more recent investigations extend and deepen this understanding in important ways. The connections between the reconnection guide field and the PRFL shear on the one hand, and between the guide field and non-thermal electron acceleration on the other, are explored below.

### Relative Guide Field

The shear measured from PRFLs or inferred from the ribbons is a proxy for the relative guide field at the RCS. This quantity is not directly observable. However, detailed numerical models of the 3D reconnection configuration can be exploited to infer the relative reconnection guide field from the measured shear. For this purpose, we present results from a high-resolution three-dimensional magnetohydrodynamic calculation of an eruptive flare, described in detail by Dahlin et al. (2022). This simulation was performed with the Adaptively Refined Magnetohydrodynamics Solver (ARMS; DeVore and Antiochos, 2008) and employed an idealized magnetic configuration consisting of two sets of dipoles located just beneath the solar surface at the equator, forming an elongated polarity inversion line aligned with the equator.

Figure 8: Flare parameters versus time. (a): The flux change rate (black, 24 s cadence), median PRFL shear (\(\theta_{lp}\), red, 1-min cadence), and ribbon front shear (\(\theta_{rb}\), orange, 24 s cadence). The displayed flux change rate is the average of \(\dot{\psi}_{+}\) and \(\dot{\psi}_{-}\), with vertical bars indicating the range of the rate measured in the positive and negative fields. The red vertical bars in the \(\theta_{lp}\) plot indicate one-half of the standard deviation of the measured \(\theta_{lp}\) every minute. The orange vertical bars in the \(\theta_{rb}\) plot show the standard deviation of the measurements using varying thresholds to identify ribbon fronts (Qiu and Cheng, 2022). (b): The HXR 35-80 keV flux (blue). (b-d): The non-thermal electron distribution parameters (black) derived from fitting the hard X-ray spectra, including the electron spectral index, the total non-thermal electron flux, and the low-energy cutoff, respectively. The width of the curves represents the \(1\sigma\) uncertainty on the respective fit parameters. Fit time intervals are non-uniform.
(e): An example of the fit to the X-ray spectrum at the peak of the HXR emission, showing the X-ray light curves by GOES and RHESSI and the time interval of the fit (top), the observed spectrum and the best fit to it with fitting parameters (middle), and the normalized residuals of the fit and the reduced \(\chi^{2}\) (bottom). The fits to the spectra throughout the flare are displayed in the attached supplemental movie. The animation lasts 21 s and shows the results of the RHESSI spectral fits in the same format as panel (e). The complete figure set (19 images) is available in the online journal. Each image represents the time interval of successive spectral fits.

Shear flux was injected at this PIL using the STITCH method (STatistical InjecTion of Condensed Helicity; Dahlin et al., 2022, and references therein) to form a filament channel that eventually erupted via the breakout mechanism (Antiochos et al., 1999). To investigate the relationship between reconnection properties at the RCS and observables, namely PRFLs and ribbons, we traced field lines from a grid of \(901\times 226\) foot-points at the inner boundary of the simulation. Our criterion for identifying reconnection events was a shortening of the field-line length by 40% relative to its maximum value. We then measured the reconnection flux \(\psi\) underlying these foot-points of shortening field lines and computed the reconnection rate \(\dot{\psi}\). We also estimated the ratio of the guide field (the \(B_{\phi}\) or longitudinal component in our simulation coordinates) to the reconnected field (the \(B_{r}\) or radial component) upstream of the current sheet at zero longitude (the center of the configuration). The time evolutions of \(\dot{\psi}\) and the relative guide field \(\mathcal{R}\equiv B_{\phi}/B_{r}\) are plotted in Figure 9a, showing that the guide field ratio \(\mathcal{R}\gtrsim 0.75\) before the reconnection rate peaks and \(\mathcal{R}\lesssim 0.75\) afterward. We then calculated the shear angles \(\theta\) from the conjugate foot-points of the resulting flare loops, and generated figures that relate the guide field to the PRFL shear. The mean shear angle (averaged over the region \(|\phi|<2^{\circ}\)) is plotted against the guide field ratio at 10 s cadence. A parabolic curve fit for the range \(9^{\circ}<\theta<81^{\circ}\) is shown in Figure 9b. At a guide-field ratio of 0.75, the mean shear angle is about \(35^{\circ}\). For comparison, the observed M6.9 flare had an average PRFL shear of about \(20^{\circ}\) at the peak of the flare (Figure 8a). This corresponds to a guide-field ratio of about 0.40 in the simulation (Figure 9b). Finally, Figure 9c shows that for \(|B_{\phi}/B_{r}|\lesssim 2\) (or \(\theta\lesssim 60^{\circ}\)) the scaling \(|B_{\phi}/B_{r}|=\tan\theta\) holds. We emphasize that the relations above are derived from a model describing a symmetric configuration with a straight PIL and two ribbons parallel to that PIL. Detailed quantitative agreement with any particular observed flare cannot be expected. Nevertheless, the results provide a baseline reference for flares that have a relatively simple geometry, such as the M6.9 flare studied in this paper. Specifically, we find that the two values (0.40 and 0.75) of the guide-field ratio \(\mathcal{R}\) at the times of peak flux change rate agree within a factor of two. Both values are consistent with a guide field that is somewhat weaker than the reconnecting field components in the RCS.
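In practice, the simulation-derived relations give a direct dictionary from the observed (projected) shear to the relative guide field. A minimal sketch using the tangent scaling quoted above (a simplification on our part; the parabolic fit of Figure 9b would refine it at large angles):

```python
import numpy as np

def guide_ratio_from_shear(theta_deg):
    """Relative guide field R = |B_g/B_rec| from the shear angle, using the
    scaling R ~ tan(theta) found to hold for theta <~ 60 deg (Figure 9c)."""
    theta = np.asarray(theta_deg, dtype=float)
    return np.where(theta < 60.0, np.tan(np.radians(theta)), np.nan)

# ~0.36 at the observed 20 deg peak-time shear and ~0.70 at the model's
# 35 deg, consistent with the factor-of-two agreement quoted above.
print(guide_ratio_from_shear([20.0, 35.0]))
```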
The observed HXR flux peaks slightly later than the flux change rate, when the guide-field ratio is steady or slowly decreasing further. This finding is consistent with recent models for electron acceleration in reconnecting current sheets, as discussed below.

### Magnetic Shear and Non-Thermal Electron Production

Theoretical models of non-thermal electron production during magnetic reconnection suggest that a dominant control parameter is the magnetic shear upstream of the reconnection current sheet. An empirical fit to the results of the numerical simulations of Arnold et al. (2021) - see their Figure 4c - finds that the fraction of non-thermal electrons \(f_{nt}=n_{nt}/(n_{nt}+n_{t})\) scales as \(\mathrm{sech}^{2}(2.4B_{g}/B_{rec})\), where \(B_{g}\) is the guide field and is related to the shear by \(B_{g}/B_{rec}\approx\tan\theta\) (see Figure 9c of this paper). A rough scaling law for the rate of production of non-thermal electrons then follows by multiplying the total number of electrons injected into the current layer by this fraction:

\[\dot{n}_{nt}\approx f_{nt}n_{tot}V_{r}L^{2}\approx f_{nt}n_{tot}\dot{\psi}L/B_{rec}, \tag{1}\]

in which \(V_{r}\) is the characteristic reconnection inflow speed and \(L\) is the characteristic scale length of the flare current sheet. Thus, the modulation of the non-thermal electron production rate as the guide field changes during a flare can be written as \(\dot{n}_{nt}=\dot{\psi}_{mod}n_{tot}L/B_{rec}\), where the shear-modulated total reconnection rate is given by

\[\dot{\psi}_{mod}=\dot{\psi}\,\mathrm{sech}^{2}\left(2.4\frac{B_{g}}{B_{rec}}\right). \tag{2}\]

The above equation suggests that, although the other parameters \(n_{tot}\), \(L\), and \(B_{rec}\) may vary during the flare, the modulation due to the changing relative guide field has the largest impact on the production of non-thermal electrons. Figure 10a combines the electron flux \(\mathcal{F}_{e}\) determined from the RHESSI spectral fits with the modulated magnetic flux change rate calculated from Equation 2, using the magnetic shear measured in both of the ways discussed above. The modulated reconnection rate using \(\theta_{rb}\) has a similar time history to the electron fluxes up to the peak and for about 5 minutes following. Although the correlation diminishes after that point, so do the calculated RHESSI electron fluxes, suggesting that these times do not contribute significantly to the total non-thermal electron production. The modulated reconnection rate using \(\langle\theta_{lp}\rangle\), shifted back by a nominal cooling time of 15 minutes, is not as well correlated. Nevertheless, there are uncertainties in the cooling times of the PRFLs, and \(\tau_{304}\) of PRFLs formed in the impulsive phase is expected to be shorter than 15 min (see Section 4.2), which would bring \(\dot{\psi}_{mod}\) closer to \(\mathcal{F}_{e}\). Pursuit of an improved estimate of the time evolution of \(\dot{\psi}_{mod}\) - perhaps with improved estimates of the cooling times of observed PRFLs that will also permit the establishment of the spatial distribution of the shear with respect to energetic electrons - will be left to future work. In Figure 10b, we show \(\dot{\psi}_{mod}\) computed with \(\dot{\psi}\) and \(B_{\phi}/B_{r}\) from the model (Figure 9a), suggesting that peak particle acceleration would be delayed with respect to the peak reconnection rate, as seen in the observations.
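Equation 2 is straightforward to evaluate from the measured time series. A minimal sketch, assuming the shear is given in degrees and using the \(B_{g}/B_{rec}\approx\tan\theta\) conversion above:

```python
import numpy as np

def modulated_rate(psi_dot, theta_deg):
    """Shear-modulated reconnection rate of Equation 2:
    psi_dot * sech^2(2.4 * B_g/B_rec), with B_g/B_rec ~ tan(theta)."""
    x = 2.4 * np.tan(np.radians(np.asarray(theta_deg, dtype=float)))
    return np.asarray(psi_dot) / np.cosh(x) ** 2  # sech^2 = 1/cosh^2

# The modulation suppresses psi_dot by ~16% at 10 deg of shear but by
# nearly three orders of magnitude at 60 deg:
print(modulated_rate(1.0, [10.0, 20.0, 40.0, 60.0]))
```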
Figure 9: Guide field and shear angle evolution in the ARMS eruptive flare model. (a) Guide-field ratio (\(-B_{\phi}/B_{r}\)) calculated upstream of the reconnecting current sheet at \(\phi=0\) (black) and rate of total reconnected flux (red). Guide field ratio versus the mean shear angle (\(\theta\)) is shown in (b) and versus its tangent in (c). The mean shear angle is the angle between the foot-points of a newly reconnected flare loop and the direction normal to the PIL, averaged over the region \(|\phi|<2^{\circ}\). The guide field is calculated at \(\phi=0\), and the upstream is taken to be the location where the current density first attains 25% of its peak value when approaching the current sheet. The color indicates the time when the newly reconnected flare loops are identified and the corresponding guide field is calculated. The solid line in the center panel is a parabolic fit, and the dashed line in the right panel corresponds to \(-B_{\phi}/|B_{r}|=\tan(\theta)\).

## 7 Summary and Conclusions

We have analyzed the evolution of an M6.9 two-ribbon flare to study the properties of the triggering reconnection as well as the flare-accelerated non-thermal electrons. For the first time, the shear of post-reconnection flare loops has been measured using several independent techniques, enabling cross-validation of the obtained estimates. The results obtained by these complementary techniques are in reasonable quantitative and excellent qualitative agreement. Observational measurements of this M6.9 flare lead to the following findings.

* An enhanced reconnection rate leads prominent flare emissions, particularly the thick-target non-thermal HXR emission, by several minutes.
* The median shear of PRFLs decreases monotonically during the impulsive phase.
* The non-thermal electron flux \(\mathcal{F}_{e}\) peaks when \(\dot{\psi}\) is nearly maximal and the median shear of the PRFLs satisfies \(\langle\theta_{lp}\rangle\approx 20^{\circ}\).
* An MHD model of an eruptive flare confirms that the temporal variation of the shear is related to the change of the ratio \(\mathcal{R}\equiv B_{g}/B_{rec}\) in the RCS.
* Models of electron acceleration in a reconnecting current sheet indicate that acceleration becomes more efficient for \(\mathcal{R}\lesssim 1\) (Dahlin et al., 2017; Arnold et al., 2021).
* The observations and models are fully consistent with the HXR fluxes peaking later than the reconnection rate and, in this particular case at least, long after the initial onset of flare reconnection.

Our results confirm the strong-to-weak shear evolution reported in previous observational and numerical studies. The analysis shows that during the first ten minutes of forming strongly sheared PRFLs in this flare, magnetic reconnection is not an efficient producer of energetic non-thermal electrons.

Figure 10: _Left:_ Temporal evolution of the non-thermal electron flux \(\mathcal{F}_{e}\) as deduced from RHESSI spectral fits (black curve with gray uncertainties), and the modulated magnetic flux change rate using the shear deduced from the ribbon fronts (orange) and the shear deduced from the PRFLs (red), which is shifted backward by a nominal cooling time of 15 minutes. _Right:_ the modulated reconnection rate \(\dot{\psi}_{mod}\) (blue), or the model-predicted non-thermal electron flux, calculated from Equation 2, using the reconnection rate \(\dot{\psi}\) (solid black) and the relative guide field \(\mathcal{R}\) (dashed black) from the numerical simulation shown in Figure 9a.
Similarly, energetic electrons are not prevalent in the late phase, when the shear of the PRFLs is near zero yet the reconnection rate is low. These results suggest that intermediate shear, \(\theta\leq 40^{\circ}\), is needed for efficient particle acceleration via reconnection. Past observational studies (Qiu and Cheng, 2022, and references therein) have inferred the evolution of the magnetic shear of PRFLs by tracking the foot-points or ribbons with assumed connectivities between one or a few pairs of foot-points. This study takes advantage of the AIA observations of a multitude of PRFLs, and directly measures the angles made by the (projected) PRFLs with the PIL where they cross it. The high-cadence (12 s) and continuous AIA observations in multiple passbands make it possible to derive more than one thousand \(\theta_{lp}\) measurements, which is substantial progress, in both quality and quantity, over the \(\theta_{rb}\) measurements. The comparison between \(\theta_{rb}\) and the mean \(\theta_{lp}\), for this specific event, shows that measurements in different passbands and with two independent methods are consistent, thus validating the practice of inferring the shear by tracking the evolution of flare ribbons or foot-points. This study also demonstrates that PRFLs are not potential (Section 4). In future work, the three-dimensional magnetic structure of PRFLs may be reconstructed guided by the projected PRFLs identified from observations, and the spatial distribution of the magnetic shear and of flare radiation signatures (UV, EUV, and HXR) will be compared. These efforts will advance our understanding of three-dimensional magnetic reconnection and energy release. Such experiments will also be extended to more flares to test the general validity of the methods and conclusions based on this event. The PRFL shear is considered to be a proxy for the relative guide field \(\mathcal{R}\equiv B_{g}/B_{rec}\) in the current sheet. The observed shear evolution is indicative of the reconnection configuration and dynamics, and the phenomenological relation with the non-thermal emission suggests that the reconnection guide field plays a crucial role in flare energetics. This role can be further clarified in future studies combining sophisticated data analysis techniques with data-constrained numerical simulations. The physical explanation of the nonlinear relation between the shear angle and the non-thermal electron production proposed in our paper has been tested by idealized MHD and PIC models. In future investigations, it will be important to model the 3D structure of the RCS for real flaring events in order to infer \(\mathcal{R}\) from the observationally measured shear angles. Such data-constrained 3D modeling would also help resolve ambiguities associated with apparent motions of the flare ribbons and the X-ray loop-top emission indicating flare reconnection beyond the 2D geometry.

We thank the referee for constructive comments that have helped improve the clarity of the manuscript. We thank Drs. Judy Karpen and Dale Gary for discussions. The collaboration leading to these results was facilitated by the NASA Drive Science Center on Solar Flare Energy Release (SolFER), Grant No. 80NSSC20K0627. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Center for Climate Simulation (NCCS) at Goddard Space Flight Center. J.Q. was supported by NASA grants Nos. 80NSSC22K0519 and 80NSSC23K0414. M.A.
was supported by NASA grants 80NSSC23K0043 and 80NSSC20K1813. S.K.A. was supported by the Partnership for Heliophysics and Space Environmental Research between UMD and NASA/GSFC. J.T.D. was supported by NASA grants Nos. 80NSSC21K1313 and 80NSSC21K0816. C.R.D. was supported by NASA's H-ISFM program at GSFC. J.F.D. and M.S. were also supported by NSF Grants Nos. PHY1805829 and PHY2109083 and NASA Grant No. 80NSSC20K1813. A.R. was supported by the NSF's REU program at Montana State University. V.M.U. was partly supported through the Partnership for Heliophysics and Space Environment Research (NASA grant No. 80NSSC21M0180). _SDO_ is a mission of NASA's Living With a Star Program. The authors also thank Kim Tolbert for technical support and the _RHESSI_ Mission Archive for the data support.
2310.11287
Assessing the Causal Impact of Humanitarian Aid on Food Security
In the face of climate change-induced droughts, vulnerable regions encounter severe threats to food security, demanding urgent humanitarian assistance. This paper introduces a causal inference framework for the Horn of Africa, aiming to assess the impact of cash-based interventions on food crises. Our contributions include identifying causal relationships within the food security system, harmonizing a comprehensive database including socio-economic, weather and remote sensing data, and estimating the causal effect of humanitarian interventions on malnutrition. On a country level, our results revealed no significant effects, likely due to limited sample size, suboptimal data quality, and an imperfect causal graph resulting from our limited understanding of multidisciplinary systems like food security. Instead, on a district level, results revealed significant effects, further implying the context-specific nature of the system. This underscores the need to enhance data collection and refine causal models with domain experts for more effective future interventions and policies, improving transparency and accountability in humanitarian aid.
Jordi Cerdà-Bautista, José María Tárraga, Vasileios Sitokonstantinou, Gustau Camps-Valls
2023-10-17T14:09:45Z
http://arxiv.org/abs/2310.11287v3
# Evaluating the Impact of Humanitarian Aid on Food Security

###### Abstract

In the face of climate change-induced droughts, vulnerable regions encounter severe threats to food security, demanding urgent humanitarian assistance. This paper introduces a causal inference framework for the Horn of Africa, aiming to assess the impact of cash-based interventions on food crises. Our contributions encompass identifying causal relationships within the food security system, harmonizing a comprehensive database, and estimating the causal effect of humanitarian interventions on malnutrition. Our results revealed no significant effects, likely due to limited sample size, suboptimal data quality, and an imperfect causal graph resulting from our limited understanding of multidisciplinary systems like food security. This underscores the need to enhance data collection and refine causal models with domain experts for more effective future interventions and policies, improving transparency and accountability in humanitarian aid.

## 1 Introduction

In a world where climate change is rapidly accelerating, droughts are becoming more frequent and severe, posing a serious challenge to food security in the most vulnerable regions of our planet. In this context, communities that rely solely on rainfall for their livelihoods are especially at risk, often requiring immediate humanitarian assistance to survive [5; 27]. Failure to act or provide adequate aid can have immense consequences, including devastating economic losses, mass displacement of people, malnutrition in infants, and elevated mortality rates due to hunger and famine [16; 40; 12]. Humanitarian organizations are facing a significant challenge due to the widening gap between funding and the needs of the people affected by food crises [44; 17]. As a result, designing effective humanitarian interventions in resource-constrained situations has become a critical issue. Despite numerous comprehensive reviews, there is still a lack of solid evidence to identify the best strategies to help populations affected by crises [32]. Cash-based and voucher aid programs are considered effective in emergencies, but their cost-effectiveness varies by context [33]. Standardized methods for evaluating humanitarian interventions in food emergencies are lacking [32]. Our aim is to determine the impact of interventions using observational causal inference, in order to enhance intervention design, improve transparency in charitable giving, and improve humanitarian aid outcomes during extreme droughts. The Horn of Africa has witnessed a concerning rise in acute food insecurity, affecting 65 million people in 2022 [44]. Prolonged dry spells significantly contribute to this crisis [14], yet it is crucial to recognize that droughts are not the sole driver. Various factors, including hydrological conditions, food production capabilities, market access, insufficient humanitarian aid, conflicts, and displacement, contribute to the complex challenges households face [13; 2; 19; 3; 24]. Studying food security in this context is intricate, involving multiple variables, scales, and non-linear relationships, making it unsuitable for predictive machine learning [25; 26]. Instead, this paper focuses on causal inference, specifically assessing the impact of humanitarian interventions during the 2016, 2018, and 2022 Horn of Africa droughts. Our aim is to demonstrate the application of causal inference for evaluating the effectiveness of cash-based interventions in food crisis scenarios.
## 2 Related Work

In recent years, the surge in available data has enabled us to assess the impact of climate change on food insecurity. This data originates from diverse sources, encompassing Earth observation products [7] and systematic socioeconomic data collection programs [18; 43]. Leveraging this wealth of data, we can estimate causal effects from observations [25; 26]. This approach is particularly vital in domains where conducting controlled experiments is impractical, costly, or unethical, with food insecurity research being a prominent example. Observational data for causal inference has gained prominence across various disciplines, including ecology [31], agriculture [21; 39; 15], public policy [10; 11], and Earth sciences [28; 1]. While there have been subjective and technical assessments of humanitarian interventions in emergency contexts [32; 22], to the best of our knowledge, this is the first effort to apply modern observational causal inference methods to evaluate humanitarian policy in a food emergency context. It is also the first time such a broad database of driving factors has been used for this purpose. The contributions of our work are summarized as follows: i) identifying the overarching causal graph and the drivers of food insecurity in the Horn of Africa, ii) building a harmonized database with the best available data suitable to evaluate cash-based interventions, iii) estimating the causal effect of humanitarian interventions on malnutrition.

## 3 Data & Methods

**Notation and terminology.** In this paper, we assess the impact of cash interventions (treatment) on malnutrition (outcome) using a Directed Acyclic Graph (DAG) denoted as \(G\equiv(V,E)\) (see Figure 1). The set of vertices, labeled by \(V\), represents relevant variables, and directed edges in set \(E\) indicate causation from one variable to another [30]. We employ the \(do\)-operator to describe interventions. \(\mathbb{P}(Y=y|do(T=t))\) denotes the probability that \(Y=y\) when we intervene by setting the value of \(T\) to \(t\). Here, \(T\) is the treatment variable (cash interventions), and \(Y\) is the outcome variable (Global Acute Malnutrition, GAM).

**Data.** Food security is influenced by various climatic, economic, and social factors, as represented in our DAG (Figure 1), which reflects the dynamics of agropastoralist households in drought displacement situations [23]. We collect and harmonize data for the variables in the DAG from multiple sources (Appendix A.1). Our outcome variable is the food security index GAM [18], and the treatment variable is a proxy for cash interventions, reflecting the number of individuals who received money in the form of credit or remittances [18]. We also collect data on the El Niño Southern Oscillation (ENSO) to account for climate variability [34] and use the Standardized Precipitation Index (SPI) to characterize dry spells [37; 7]. Socio-economic data include monthly market prices of livestock, staple food, and water, and sorghum production [4]. We measure conflict levels using a proxy based on recorded fatalities [38] and incorporate data on drought-induced internal displacement [43]. All data are aggregated annually and by district.

**Problem Formulation.** To estimate the Average Treatment Effect (ATE), \(\text{ATE}=\mathbb{E}[Y|do(T=1)]-\mathbb{E}[Y|do(T=0)]\), we identify an adjustment set \(Z\subseteq V\). We apply the back-door criterion, which relies on a graphical test to determine whether adjusting for a set of graph nodes \(Z\subseteq V\) is sufficient for estimating \(\mathbb{P}(Y=y|do(T=t))\). We find the parent adjustment set that is sufficient for estimating the ATE, which is {Market Prices, Sorghum Production, Fatalities, Drought-induced internal displacements, Population}.

Figure 1: DAG representing the food security system in Somalia.

Utilizing the Potential Outcomes framework, our ATE estimation aims to capture the difference between the average GAM values under humanitarian aid exceeding a chosen threshold and the average value of the outcome when humanitarian aid falls below that threshold. To estimate the effect, we use several methods of varying complexity. Linear regression (LR) and distance matching (M) are selected as baseline estimation methods. The popular Inverse Propensity Score weighting (IPSW) is also used [36], as well as modern machine learning methods, the T-learner (T-L) and X-learner (X-L) [29]. Given the unavailability of observed ground-truth estimates, we resort to performing refutation tests, in line with recent research [35; 8], to assess the robustness of our models. We perform the following tests: i) Placebo treatment, where the treatment is randomly permuted and the estimated effect is expected to drop to 0; ii) Random Common Cause (RCC), where a random confounder is added to the dataset and the estimate is expected to remain unchanged; iii) Random Subset Removal (RSR), where a subset of the data is randomly removed and the effect is expected to remain the same.
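As a concrete illustration of the estimation-plus-refutation pipeline, the sketch below implements the IPSW estimator and the placebo test from scratch in Python with scikit-learn; the variable names and the clipping threshold are our choices, and the actual analysis may rely on dedicated causal inference tooling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(Z, T, Y):
    """ATE of a binary treatment T on outcome Y via inverse propensity
    weighting, adjusting for the confounder matrix Z (n_samples x n_vars)."""
    p = LogisticRegression(max_iter=1000).fit(Z, T).predict_proba(Z)[:, 1]
    p = np.clip(p, 0.01, 0.99)  # guard against extreme weights
    return np.mean(T * Y / p) - np.mean((1 - T) * Y / (1 - p))

def placebo_test(Z, T, Y, n_perm=200, seed=0):
    """Placebo refutation: the ATE re-estimated with randomly permuted
    treatments should be statistically indistinguishable from zero."""
    rng = np.random.default_rng(seed)
    return np.array([ipw_ate(Z, rng.permutation(T), Y) for _ in range(n_perm)])
```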
## 4 Results

From 2016 to 2022, we collected data spanning 57 districts in Somalia, resulting in a dataset of 378 samples. To address population differences between urban and agro-pastoral areas, we normalized the data by district population. We framed the problem as an ATE estimation task by converting the number of people receiving money into a binary variable using various thresholds, as outlined in Appendix A.3. The estimation represents the percentage of malnourished people that would have been affected if specific thresholds of people receiving money had been reached (Table 3). While all estimations show a reduction in the percentage of people with GAM as more individuals receive cash interventions, none are statistically significant at the 95% confidence level. This outcome is expected due to data scarcity and the complexity of the real problem. It is impossible to account for all system drivers, but ongoing efforts aim to improve our understanding and reduce bias by addressing unaccounted major drivers and acquiring more observational data. The humanitarian community has established data repositories, but there is a need for enhanced and broader data collection following the FAIR principles (Findability, Accessibility, Interoperability, and Reuse). Additionally, our country-level DAG may not fully capture context-specific relationships and localized impacts on the ground, including factors like past drought events, the political situation, poverty levels, and livelihood options, which significantly influence intervention effectiveness [23].
## 5 Conclusions and Future Work

Optimally distributing available resources and evaluating how, who, where, and when to intervene is crucial to mitigating climate change impacts. In this proposal, we presented a novel data-driven approach for assessing the effectiveness of humanitarian interventions in food emergencies through the lens of causal inference. We constructed a DAG to capture the dynamics of food insecurity under drought conditions and collected data characterizing the system. Our goal was to estimate the causal effects of cash-based interventions on reducing district-level food insecurity across Somalia. Preliminary results did not reach statistical significance, prompting further steps: i) identifying more suitable treatment variables, ii) refining the causal graph with domain experts, iii) gaining insights into the spatio-temporal heterogeneity of the impact of interventions through Conditional Average Treatment Effects (CATE) [20]. If the data allow it, causal inference can be used to assess the efficacy of interventions in specific locations, supporting targeted aid where on-ground surveys are not feasible. The proposed approach could promote greater accountability and transparency amongst humanitarian actors, encouraging individuals to contribute to impactful and traceable aid.
2302.11096
Entanglement entropy as an order parameter for strongly coupled nodal line semimetals
Topological semimetals are a class of many-body systems exhibiting novel macroscopic quantum phenomena at the interplay between high energy and condensed matter physics. They display a topological quantum phase transition (TQPT) which evades the standard Landau paradigm. In the case of Weyl semimetals, the anomalous Hall effect is a good non-local order parameter for the TQPT, as it is proportional to the separation between the Weyl nodes in momentum space. On the contrary, for nodal line semimetals (NLSM), the quest for an order parameter is still open. By taking advantage of a recently proposed holographic model for strongly-coupled NLSM, we explicitly show that entanglement entropy (EE) provides an optimal probe for nodal topology. We propose a generalized $c$-function, constructed from the EE, as an order parameter for the TQPT. Moreover, we find that the derivative of the renormalized EE with respect to the external coupling driving the TQPT diverges at the critical point, signaling the rise of non-local quantum correlations. Finally, we show that these quantum information quantities might be able to characterize not only the critical point but the whole quantum critical region at finite temperature.
Matteo Baggioli, Yan Liu, Xin-Meng Wu
2023-02-22T02:31:44Z
http://arxiv.org/abs/2302.11096v2
# Entanglement entropy as an order parameter

###### Abstract

Topological semimetals are a class of many-body systems exhibiting novel macroscopic quantum phenomena at the interplay between high energy and condensed matter physics. They display a topological quantum phase transition (TQPT) which evades the standard Landau paradigm. In the case of Weyl semimetals, the anomalous Hall effect is a good non-local order parameter for the TQPT, as it is proportional to the separation between the Weyl nodes in momentum space. On the contrary, for nodal line semimetals (NLSM), the quest for an order parameter is still open. By taking advantage of a recently proposed holographic model for strongly-coupled NLSM, we explicitly show that entanglement entropy (EE) provides an optimal probe for nodal topology. We propose a generalized \(c\)-function, constructed from the EE, as an order parameter for the TQPT. Moreover, we find that the derivative of the renormalized EE with respect to the external coupling driving the TQPT diverges at the critical point, signaling the rise of non-local quantum correlations. Finally, we show that these quantum information quantities might be able to characterize not only the critical point but the whole quantum critical region at finite temperature.

**Introduction** - Within the Landau paradigm (LP), phases of matter which exhibit different macroscopic properties are defined by their symmetries, and whether or not those are spontaneously broken [1]. The LP is not only a powerful classification tool based on the definition of a local order parameter (OP) but also an important ingredient for the identification of the low-energy degrees of freedom, and for the understanding of the critical dynamics across classical phase transitions [2; 3]. Nevertheless, Nature abounds with apparent exceptions to the LP; topological phases of matter [4] are the most famous example of this sort. On the other hand, quantum phase transitions defy the LP as well, since they cannot be described in terms of a standard (i.e., local) OP [5; 6]. One of the attempts to rationalize phases of matter and phase transitions beyond the LP is based on the notion of generalised symmetries (see [7] for a recent review). An alternative approach utilizes quantum information quantities, such as entanglement entropy [8], and generalized related concepts (e.g., topological entanglement entropy [9; 10]), to describe topological order [11] and quantum phase transitions [12; 13; 14; 15; 16]. Topological semimetals (TS) have emerged as a promising platform not only to reach a fundamental understanding of topological quantum many-body systems but also thanks to their incredible potential for applications [17]. In TS, the low energy electronic excitations are chiral fermions forming point-shaped (Weyl semimetals, or WSM) or line-shaped (nodal line semimetals, or NLSM) Fermi surfaces characterized by stable topological invariants. TS undergo quantum critical phase transitions to trivially gapped insulating states driven by non-thermal external parameters, such as uniaxial strain or chemical pressure [18; 19; 20; 21; 22]. Both their topological nature and the associated quantum phase transition elude the Landau paradigm. In addition to that, since the density of states vanishes along the nodal line, the Coulomb interaction between electrons is very weakly screened, leading to strong coupling.
Indeed, strong electronic correlations and signatures of hydrodynamic behavior have been observed respectively in ZrSiSe [23] and NbP [24], and further confirmed by an extremely low value of the viscosity to entropy ratio [25]. In a nutshell, the applicability of a weakly-coupled field theory description for certain TS is questionable (e.g., the breakdown of Fermi liquid theory in TS [26]). In this regard, holographic methods, which have already been successfully applied to strongly-coupled electronic phases of matter [27; 28; 29], present a viable alternative tool [30; 31; 32] (see [33] for a review on holographic semimetals). In Weyl semimetals, valence and conduction bands cross in single points, the Weyl nodes [34; 35]. In this case, the anomalous Hall effect, which appears as a consequence of the chiral anomaly [36], has been early recognized as a non-local order parameter for the topological quantum phase transition (TQPT) between WSM and insulating states. This result, which is based on the proportionality between this transport coefficient and the distance between the Weyl nodes in wave-vector space, has been derived both using weakly-coupled field theory techniques [37] and holographic methods at strong coupling [31]. In holographic WSM, quantum information quantities such as entanglement entropy [38] and the butterfly velocity [39] have been shown to be successful probes for the TQPT. Nodal line semimetals [40] (e.g., Ca\({}_{3}\)P\({}_{2}\) [41; 42], PbTaSe\({}_{2}\) [43], ZrSiS [44]), in which the conduction and the valence bands touch along a one-dimensional curve in the three-dimensional Brillouin zone, represent an even more difficult challenge. Because of the absence of topologically protected surface states, the identification of a DC transport coefficient as an order parameter for the TQPT is not possible. A robust probe for nodal topology and the quantum phase transition in NLSM has not been found yet. For weakly-coupled NLSM, the power-law scaling of the shear viscosity in the collisionless limit has been proposed as a signature of nodal topology [45]. More recently, using a holographic model for strongly coupled NLSM, Ref. [46] showed that the DC electrical conductivity at low temperature displays a structure reminiscent of a quantum critical fan, which may provide a probe for the TQPT. Finally, in CaAgAs [47], quantum oscillations have been proven to be sensitive to the topology of the Fermi surface.

**Weakly coupled nodal line semimetals** - A weakly-coupled field theory description for NLSM can be realized, in terms of a Dirac fermion \(\psi\), using a \((3+1)\)-dimensional Lorentz-violating Lagrangian [40; 48],

\[\mathcal{L}=\bar{\psi}\big{(}\gamma^{\mu}\partial_{\mu}-m-\gamma^{\mu\nu}b_{\mu\nu}+\gamma^{\mu\nu}\gamma^{5}b_{\mu\nu}^{5}\big{)}\psi\,. \tag{1}\]

Here \(\bar{\psi}=\psi^{\dagger}i\gamma^{0}\), \(\gamma^{\mu\nu}=\frac{i}{2}\left[\gamma^{\mu},\gamma^{\nu}\right]\), \(\gamma^{5}=i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\), and the anti-symmetric tensor operators obey a self-duality condition \(\bar{\psi}\gamma^{\mu\nu}\gamma^{5}\psi=-\frac{i}{2}\varepsilon^{\mu\nu}_{\ \ \alpha\beta}\bar{\psi}\gamma^{\alpha\beta}\psi\). As we turn on a background for the two-form field, \(b_{xy}=-b_{yx}\), \(b_{tz}^{5}=-b_{zt}^{5}=ib_{xy}\), a topological semimetal with a nodal line in the \(k_{z}=0\) plane is realized.
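The location of the band touching can be checked numerically. The sketch below assumes the closed-form dispersion \(E_{\pm}=\pm\sqrt{k_{z}^{2}+\big{(}\sqrt{k_{\perp}^{2}+m^{2}}\mp 4b_{xy}\big{)}^{2}}\) for the low-energy bands of Eq. (1); this closed form is our assumption, chosen so as to reproduce the nodal radius quoted in the next paragraph.

```python
import numpy as np

def gap(k_perp, k_z, m, b):
    """Lowest positive band, assuming E = sqrt(k_z**2 +
    (sqrt(k_perp**2 + m**2) - 4*b)**2) for the model of Eq. (1)."""
    return np.sqrt(k_z ** 2 + (np.sqrt(k_perp ** 2 + m ** 2) - 4.0 * b) ** 2)

m = 1.0
k = np.linspace(0.0, 3.0, 30001)

# Semimetal phase, 4|b| > |m|: the gap closes on a ring of radius k_F.
b = 0.5
k_ring = k[np.argmin(gap(k, 0.0, m, b))]
print(k_ring, np.sqrt(16.0 * b ** 2 - m ** 2))  # both ~1.732

# Trivial phase, 4|b| < |m|: the spectrum is gapped everywhere on k_z = 0.
print(gap(k, 0.0, m, 0.2).min())                # 0.2 > 0
```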
As long as \(4|b_{xy}|>|m|\), the valence and conduction bands touch along a Weyl circle with radius \(k_{F}\equiv\sqrt{k_{x}^{2}+k_{y}^{2}}=\sqrt{16b_{xy}^{2}-m^{2}}\). At the critical value, \(4|b_{xy}|=|m|\), the nodal circle collapses to a nodal point. The system at the quantum critical point is a Dirac semimetal. For \(4|b_{xy}|<|m|\), the nodal topology disappears via a topological quantum phase transition, and the system is a trivial insulator. These three phases display different topological properties in their electronic spectrum on the \(k_{z}=0\) plane. Fig.1 provides a cartoon of the phase diagram for weakly coupled NLSM.

Figure 1: A cartoon of the topological quantum phase transition between a nodal line semimetal and a trivial insulating state. For simplicity, we set \(k_{z}=0\) and we show only the lowest electronic bands. In the weakly-coupled theory, the quantum critical point is located at \(4|b_{xy}|=|m|\).

**Nodal entanglement entropy** - The entanglement entropy for weakly-coupled Fermi-surface systems has been discussed using a generalized Widom formula from entanglement Hamiltonians in [49]. The EE can be calculated by summing up the local thermal entropy densities \(s_{\text{th}}(T(z))\),

\[S_{ent}\propto k_{F}\int_{0}^{L}dx\int_{0}^{L}dy\int_{a}^{l}dz\;s_{\text{th}}\left(T(z)\right)\,, \tag{2}\]

where \(k_{F}\) is the radius of the nodal circle along the \(k_{z}=0\) plane, \(L^{2}\) is the area of the partition surface, \(a\) the lattice size, and \(l\) the strip width along the \(z\)-direction perpendicular to the partition surface. The local temperature is given by \(T(z)=1/(2\pi z)\), as defined from the reduced density matrix [49]. All in all, we find

\[S_{ent}\propto k_{F}L^{2}\int_{a}^{l}dz\frac{1}{z^{2}}=k_{F}L^{2}\left(\frac{1}{a}-\frac{1}{l}\right)\,. \tag{3}\]

Up to geometrical factors, the EE is linear in the radius of the nodal line \(k_{F}\) and inversely proportional to the strip width \(l\). This immediately implies that

\[l^{2}\,\frac{\partial S_{ent}(l)}{\partial l}\propto k_{F} \tag{4}\]

is independent of the UV cutoff, and an immediate probe for nodal topology. As expected, the EE is highly anisotropic and crucially depends on the orientation of the partition. Nevertheless, in the case of codimension-2 Fermi surfaces, it has been proven in [49] that the sum of the EE for the partitions in three orthogonal directions (see Supplementary Information (SI) for details) remains proportional to the size of the nodal line \(k_{F}\). As we will derive, in the holographic strongly coupled case, this extra complication will not be necessary, since the EE will be dominated by the contribution from the partition parallel to the anisotropic direction.

**Holographic setup** - The holographic model for a \((3+1)\)-dimensional strongly coupled NLSM [50] is described by the following gravitational action

\[\begin{split} S&=\int d^{5}x\sqrt{-g}\bigg{[}R+12-\frac{1}{4}\mathcal{F}^{2}-\frac{1}{4}F^{2}\\ &+\frac{\alpha}{3}\epsilon^{abcde}A_{a}\Big{(}3\mathcal{F}_{bc}\mathcal{F}_{de}+F_{bc}F_{de}\Big{)}-(D_{a}\Phi)^{*}(D^{a}\Phi)\\ &-V_{\Phi}-\frac{i}{6\eta}\epsilon^{abcde}\Big{(}B_{ab}H_{cde}^{*}-B_{ab}^{*}H_{cde}\Big{)}-V_{B}\\ &-\lambda|\Phi|^{2}B_{ab}^{*}B^{ab}\bigg{]}\,.\end{split} \tag{5}\]

The two gauge fields with strength \(\mathcal{F}=dV\) and \(F=dA\) correspond to vector and axial currents in the dual field theory. The bulk Chern-Simons term is chosen to ensure the correct anomalous Ward identity [33; 36].
The axially charged scalar field \(\Phi\) plays the role of the mass term for the fermion \(\psi\) in Eq.(1). The complex two-form field \(B_{ab}\) is dual to the rank-two operators described above. Its bulk field strength is defined as \(H_{abc}=\partial_{a}B_{bc}+\partial_{b}B_{ca}+\partial_{c}B_{ab}-iq_{2}A_{a}B_{bc}-iq_{2}A_{b}B_{ca}-iq_{2}A_{c}B_{ab}\). The Chern-Simons term for the two-form field is introduced in order to impose the self-duality constraint. The potentials are chosen to be \(V_{\Phi}=m_{1}^{2}|\Phi|^{2}+\frac{\lambda_{1}}{2}|\Phi|^{4}\) and \(V_{B}=m_{2}^{2}B_{ab}^{*}B^{ab}+\frac{\lambda_{2}}{2}(B_{ab}^{*}B^{ab})^{2}\). For more details about the holographic model, see [50] and the SI. In the main text, we choose \(m_{1}^{2}=-3,m_{2}^{2}=1,\eta=2,q_{1}=q_{2}=1\), \(\lambda_{1}=\lambda=2\) and \(\lambda_{2}=0\). For the zero temperature solutions, we use the following ansatz \[\begin{split}& ds^{2}=\frac{dr^{2}}{r^{2}}+u(-dt^{2}+dz^{2})+f(dx^{2}+dy^{2})\,,\\ &\Phi=\phi\,,\,B_{xy}=-B_{yx}=\mathcal{B}_{xy}\,,\,B_{tz}=-B_{zt}=i\mathcal{B}_{tz}\,,\end{split} \tag{6}\] where all the bulk fields \(u,f,\phi,\mathcal{B}_{xy},\mathcal{B}_{tz}\) are functions of the radial coordinate \(r\in[0,\infty]\). The behaviors of the matter fields near the asymptotically AdS\({}_{5}\) boundary, \(r\to\infty\), are given by \[\lim_{r\to\infty}\,\,r\phi=M\,,\,\,\,\lim_{r\to\infty}\,\,r^{-1}\mathcal{B}_{tz}=\lim_{r\to\infty}\,\,r^{-1}\mathcal{B}_{xy}=b\,, \tag{7}\] where \(M\) and \(b\) represent external sources for the scalar operator \(\bar{\psi}\psi\) and tensor operators \(\bar{\psi}\gamma^{\mu\nu}\psi\) and \(\bar{\psi}\gamma^{\mu\nu}\gamma^{5}\psi\) respectively. \(M\) and \(b\) play the role of the parameters \(m\) and \(b_{xy}\) in the weakly-coupled theory, Eq.(1). At zero temperature, there exist three different phases with distinct infrared (IR) (\(r\to 0\)) geometries, depending on the value of \(M/b\). For \(M/b<(M/b)_{c}\approx 1.17\), the bulk scalar field \(\phi\) vanishes in the IR. The near-horizon geometry enjoys an anisotropic scaling symmetry \(r^{-1}\to s^{2/\alpha}\,r^{-1}\,,(t,z)\to s^{\delta/\alpha}(t,z)\,,(x,y)\to s(x,y)\) with scaling exponent \(\mathbf{z}\equiv\delta/\alpha>1\). The dual many-body system is in a strongly coupled NLSM phase. For \(M/b=(M/b)_{c}\), both the scalar field and the two-form field are finite in the IR, and the system is quantum critical. When \(M/b>(M/b)_{c}\), the two-form field vanishes in the IR. The near-horizon geometry enjoys the same form of anisotropic scaling symmetry but with a different exponent. The dual field theory is in a topologically trivial phase. The location of the quantum critical point, \((M/b)_{c}\), depends on the couplings \(\lambda,\lambda_{1}\) and \(\lambda_{2}\) in the potentials. At finite temperature, the sharp quantum phase transition becomes a smooth crossover. For more details on the solutions, see SI. Based on the properties of the dual fermionic spectral function, it has been shown that in the NLSM phase there exist multiple nodal line Fermi surfaces which are topologically stable [50]. This confirms that the holographic model describes a topological quantum phase transition between a NLSM and a trivial phase (see also [46] for a recent analysis of the thermodynamics and transport properties of a similar holographic NLSM model).
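Before turning to the holographic entanglement computation, it may help to check the weakly-coupled benchmark, Eqs. (2)-(4), symbolically. The following minimal sketch (Python with sympy, chosen here only as a convenient computer algebra system) drops all prefactors and takes the integrand \(s_{\text{th}}(T(z))\propto 1/z^{2}\) read off from Eq. (3); it is an illustration, not part of the original derivation.

```python
import sympy as sp

z, a, l, kF, L = sp.symbols('z a l k_F L', positive=True)

# Local thermal entropy density from the entanglement Hamiltonian:
# with T(z) = 1/(2*pi*z), the integrand of Eq. (2) reduces to ~1/z^2
# (all prefactors dropped), as read off from Eq. (3).
s_th = 1 / z**2

# Eq. (3): S_ent ~ k_F * L^2 * (1/a - 1/l)
S_ent = kF * L**2 * sp.integrate(s_th, (z, a, l))
print(sp.simplify(S_ent))

# Eq. (4): l^2 * dS/dl equals k_F * L^2, independent of the cutoff a
probe = sp.simplify(l**2 * sp.diff(S_ent, l))
print(probe)
assert sp.diff(probe, a) == 0
```

The final assertion makes the cutoff-independence of the probe in Eq. (4) explicit: the lattice scale \(a\) drops out of \(l^{2}\partial_{l}S_{ent}\).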
**Order parameter for nodal topology** - Motivated by the weakly coupled results, we use physical quantities constructed from holographic EE [51] to characterize the topological phase transition and probe the nodal topology. The subsystems under consideration are strip geometries of width \(l_{i}\) (\(i=x,y,z\)) and length \(L\to\infty\). We compute the entanglement entropy \(S_{i}\) for these three configurations using the Ryu-Takayanagi (RT) prescription [52] (see SI for more details). Due to the isotropy in the \(x\)-\(y\) plane, we have \(S_{x}=S_{y}\). Moreover, we define \[c_{i}=4\,G\frac{l_{i}^{3}}{L^{2}}\frac{\partial S_{i}}{\partial l_{i}}\,, \tag{8}\] where \(G\) is the Newton constant. These quantities are known as \(c\)-functions and can be used to parameterize the number of degrees of freedom along the renormalization group flow. Notice the similarity between Eq.(8) and the quantity defined in Eq.(4). At zero temperature, Eq.(8) can be further simplified as \(c_{i}=\frac{1}{2}\mathfrak{C}_{i}l_{i}^{3}\) with \(\mathfrak{C}_{i}=f\sqrt{u}|_{r_{t}}\), where \(r_{t}\) is the turning point of the extremal RT surface corresponding to the strip with width \(l_{i}\). In the limit of \(l_{i}\to\infty\), the turning point approaches the location of the IR horizon, \(r=0\) (see SI). For isotropic systems, all \(c\)-functions are equal and obey the so-called \(c\)-theorem [53; 54]. For anisotropic systems, an analogous \(c\)-theorem has been proposed in [55], and utilized in the context of WSM in [38]. In order to characterize the quantum phase transition and probe the nodal topology, we define the following order parameter \[\mathcal{O}(\bar{M})\equiv\lim_{l_{z}\to\infty}\frac{c_{z}(\bar{M})}{c_{z}(\bar{M}=0)}\,, \tag{9}\] where \(\bar{M}=M/b\), and \(z\) is the direction along the anisotropy. The behavior of this quantity as a function of \(\bar{M}\) is shown in Fig.2.

Figure 2: The order parameter \(\mathcal{O}\), Eq.(9), as a function of \(\bar{M}\) across the topological quantum phase transition. The normalized ratio \(c_{z}(\bar{M})/c_{z}(\bar{M}=0)\) at \(T=0\) converges to the order parameter (solid line) as \(l_{z}\) increases. The inset shows the coefficient \(\varpi(\mathbf{z})\) defined in Eq.(10).

At zero temperature, \(\mathcal{O}\) is zero in the topologically trivial phase and becomes non-zero in the topological NLSM phase. Its behavior across the quantum critical point is continuous and follows a power-law scaling \(\mathcal{O}\propto(\bar{M}_{c}-\bar{M})^{\xi}\), with \(\xi\approx 0.39\). This scaling exponent is different from the weakly-coupled theory, Eq.(1), for which \(\xi=0.5\) (mean field behavior). Thermal effects and finite \(l_{i}\) corrections modify the sharp transition into a smooth crossover. At \(T=0\), we can analytically prove (see SI) that: \[\mathcal{O}(\bar{M})=\lim_{l_{z}\to\infty}\varpi(\mathbf{z})\,l_{z}^{\frac{2}{\mathbf{z}}}\mathcal{B}_{xy}(r_{t})\equiv\varpi(\mathbf{z})\beta_{xy}(\bar{M})\,. \tag{10}\] In this limit, the order parameter is independent of the geometry of the entanglement boundary region. \(\varpi(\mathbf{z})\) is an \(\bar{M}\)-independent function which is finite in the NLSM phase, zero in the trivial phase, and determined by the anisotropic IR exponent \(\mathbf{z}\) (see inset in Fig.2). The coefficient \(\varpi(\mathbf{z})\) in the topological phase can be computed analytically. Finally, \(\beta_{xy}(\bar{M})\propto r_{t}^{-\alpha}\mathcal{B}_{xy}(r_{t})\,\) in the IR limit \(r_{t}\to 0\) which is equivalent to \(l_{z}\to\infty\).
\(\alpha\) is a parameter related to the scaling properties of the topological IR phase. The factor \(l_{z}^{2/\mathbf{z}}\) in Eq.(10) cancels the \(r_{t}\) factor in \(\mathcal{B}_{xy}(r_{t})\) and leads to a finite result. Finally, \(\beta_{xy}\) corresponds to the IR value for the field theory source \(b_{xy}\). Eq.(10) shares strong similarities with the weakly-coupled result, Eq.(3). In particular, although we are not able to provide a formal proof, there is good evidence that the IR parameter \(\beta_{xy}\) is proportional to the nodal line length \(k_{F}\), as in the weakly-coupled picture. This can be seen from the fact that \(\mathcal{B}_{xy}=0\) in the IR corresponds to the trivial phase, with no nodal line in the fermionic spectral function [50, 56]. Additionally, \(\mathcal{B}_{xy}\neq 0\) implies the breaking of time reversal and charge conjugation symmetry, which is a distinctive feature of the NLSM phase with \(k_{F}\neq 0\). The different scaling of \(S\), and consequently of \(c_{i}\), with respect to the length-scales \(l_{i}\) is due to the anisotropic nature of the IR fixed point in the strongly-coupled holographic model. In the weakly-coupled case [49], in order to obtain the universal relation between the EE and the nodal line length, it is imperative to sum over the different directions. In our case, because of \(\mathbf{z}_{\text{nlsm}}>\mathbf{z}_{\text{trivial}}=1\), the EE related to the strip oriented along the anisotropic direction always represents the dominant contribution in the large \(l\) limit. For this reason, \(\sum_{i}c_{i}\approx c_{z}\), and our definition in Eq.(9) is equivalent to that in [49]. **Locating the quantum critical point** - In order to probe the quantum critical point further, we define a second quantity which is given by \(\partial s_{i}/\partial\bar{M}\). Here, \(s_{i}\) is the renormalized EE once the UV divergent terms in the EE \(S_{i}\) are removed (see SI). The \(s_{i}\) no longer depend on the UV properties of the dual quantum theory and are therefore sensitive to the IR properties of our many-body system, which carry the fingerprints of the TQPT. Conceptually, this definition shares many similarities with the proposal of [57]. There, it was shown that the derivative of the "entanglement of formation" \(C\) with respect to the external coupling \(\lambda\) shows a sharp dip at a quantum critical point, signaling the divergence of non-local correlations in the critical region. Here, we run a similar argument in terms of the renormalized EE. The results at zero temperature are shown in Fig.3. For large enough entanglement regions, \(l_{i}\gg 1\), both derivatives in the parallel (with respect to anisotropy) and perpendicular directions display a clear signature at the QCP. In the limit of \(l_{i}\to\infty\), in which the EE surface reaches the IR horizon of the geometry, the derivatives become divergent at the QCP. In analogy to the behavior of the correlation length in classical thermal phase transitions, it is tempting to associate this feature to the divergence of non-local quantum correlations at the QCP. In the SI, we show that this behavior persists even at finite but small temperature. This indicates that our quantum information inspired quantities might be good probes not only for the QCP but for the quantum critical region as well.
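Both diagnostics can be mimicked on synthetic data. The toy script below (Python; the data are generated, not the holographic results) builds an order parameter with the critical coupling \(\bar{M}_{c}\approx 1.17\) and exponent \(\xi\approx 0.39\) quoted above, recovers \(\xi\) from a log-log fit, and shows a finite-difference derivative growing without bound as \(\bar{M}\to\bar{M}_{c}\), a crude analogue of the derivative dips of Fig. 3.

```python
import numpy as np

# Toy order parameter O(Mbar) ~ (Mbar_c - Mbar)^xi using the values
# quoted in the text; synthetic data, NOT the holographic results.
Mbar_c, xi = 1.17, 0.39
Mbar = np.linspace(0.8, Mbar_c - 1e-4, 400)
O = (Mbar_c - Mbar) ** xi

# A log-log fit close to the critical point recovers the exponent.
mask = Mbar > 1.1
slope, _ = np.polyfit(np.log(Mbar_c - Mbar[mask]), np.log(O[mask]), 1)
print(f"fitted exponent xi = {slope:.3f}")   # ~0.39

# The derivative ~ (Mbar_c - Mbar)^(xi - 1) diverges at Mbar_c,
# the toy analogue of the divergent derivatives at the QCP.
dO = np.gradient(O, Mbar)
print("|dO/dMbar| near Mbar_c:", np.round(np.abs(dO[-3:]), 1))
```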
**Discussion** - Using a holographic model for strongly-coupled nodal line semimetals, we show that quantum information quantities related to entanglement entropy are efficient probes for nodal topology and for the topological quantum phase transition between NLSM and topologically trivial phases. Inspired by field theory studies [49], we propose an order parameter which displays critical and beyond-mean-field behavior across the topological phase transition.

Figure 3: The derivative of the renormalized EE \(s_{i}\) with respect to \(\bar{M}\) for \(l\) along the \(x\) (**top**) and \(z\) (**bottom**) directions. The vertical dashed line indicates the location of the QCP.

Moreover, in analogy with the findings of [57], we find that the derivative of the renormalized EE with respect to the external coupling driving the quantum phase transition diverges at the critical point, signaling the explosion of non-local quantum correlations. Interestingly, our findings are robust against thermal effects and indicate that quantum information observables might serve to outline and describe quantum critical regions in many-body systems. In other words, the full structure of the renormalization group flow from the UV to the IR fixed points might be retrieved in the scaling behavior of those quantities, providing useful insights away from the quantum critical point. ###### Acknowledgements. We thank Karl Landsteiner and Ya-Wen Sun for useful comments on a preliminary draft of this work. M.B. and X.-M.W. acknowledge the support of the Shanghai Municipal Science and Technology Major Project (Grant No.2019SHZDZX01). M.B. acknowledges the sponsorship from the Yangyang Development Fund. Y.L. is supported by the National Natural Science Foundation of China grant No.11875083.
2307.14730
Equivariant character bijections and the Alperin-McKay conjecture
In this paper we consider the inductive Alperin--McKay condition for isolated blocks of groups of Lie type $B$ and $C$. This finishes the verification of the inductive condition for groups of this type.
Julian Brough, Lucas Ruhstorfer
2023-07-27T09:36:06Z
http://arxiv.org/abs/2307.14730v1
# Equivariant character bijections and the inductive Alperin-McKay condition ###### Abstract. In this paper we consider the inductive Alperin-McKay condition for isolated blocks of groups of Lie type \(B\) and \(C\). This finishes the verification of the inductive condition for groups of this type. Key words and phrases: Alperin-McKay conjecture, inductive conditions, isolated blocks 2010 Mathematics Subject Classification: 20C33 In the representation theory of finite groups some of the most important open conjectures relate the representation theory of a finite group \(G\) to that of its \(\ell\)-local subgroups, for \(\ell\) a prime dividing the order of \(G\). One of these conjectures is the Alperin-McKay conjecture, which forms a blockwise generalisation of the McKay conjecture. For an \(\ell\)-block \(b\) of \(G\) we denote by \(\operatorname{Irr}_{0}(G,b)\) the subset of height zero characters of \(\operatorname{Irr}(G,b)\), the characters of the block \(b\). **Conjecture A** (Alperin-McKay).: _Let \(b\) be an \(\ell\)-block of \(G\) with defect group \(D\) and \(B\) its Brauer correspondent in \(\operatorname{N}_{G}(D)\). Then_ \[|\operatorname{Irr}_{0}(G,b)|=|\operatorname{Irr}_{0}(\operatorname{N}_{G}(D),B)|.\] In [10], the Alperin-McKay conjecture was reduced to the verification of the so-called _inductive Alperin-McKay condition_ (iAM) for all finite simple groups and primes \(\ell\). In previous papers [11] on simple groups of Lie type, the second author proved that it suffices to verify the iAM-condition for quasi-isolated blocks of groups of Lie type, and in many cases just for isolated blocks. Applying these techniques together with results from [10] yielded a verification of the iAM-condition for all finite simple groups of type \(A\) and primes \(\ell\geq 5\). The present paper is concerned with the iAM-condition for the remaining classical quasi-simple groups, i.e., for groups of Lie type \(B\), \(C\) and \(D\) defined over a field of characteristic \(p\neq\ell\). Due to the nature of the inductive proof using the iAM-condition, there is some flexibility in the choice of local subgroups that can be taken. When \(\ell\geq 5\) the defect groups of \(\ell\)-blocks of these groups have a Cabanes subgroup, i.e. they have a unique maximal normal abelian subgroup. With this in mind the first half of this paper is focused on the following result, which provides a method to verify the iAM-condition. **Theorem B**.: _Assume that \(\mathbf{G}\) is a simple algebraic group of simply connected type \(B_{n}\) or \(C_{n}\) (\(n\geq 2\)) with Frobenius endomorphism \(F:\mathbf{G}\to\mathbf{G}\) and let \(b\) be a quasi-isolated \(\ell\)-block of \(G:=\mathbf{G}^{F}\) with \(\ell\geq 5\). Assume that the block \(b\) satisfies Assumption 4.5. Then the block \(b\) is AM-good relative to the Cabanes subgroup of its defect group._ The block \(b\) of \(G\) is labeled by a pair \((\mathbf{L},\lambda)\) consisting of a \(d\)-split Levi subgroup \(\mathbf{L}\) of \((\mathbf{G},F)\) and a \(d\)-cuspidal character \(\lambda\in\operatorname{Irr}(\mathbf{L}^{F})\), where \(d\) is the order of \(q\) modulo \(\ell\), see 2.1. Assumption 4.5 concerns the Clifford theory associated to \(\mathbf{L}^{F}\lhd\operatorname{N}_{G}(\mathbf{L})\) taking into account the action arising from the automorphisms of \(G\) stabilizing \(\mathbf{L}\). Assumption 4.5 was verified in type \(C\) with respect to any \(d\)-cuspidal pair [1] and for blocks of maximal defect in type \(B\) [1].
In types \(B\) and \(C\), by [14, Theorem D] it suffices to validate the iAM-condition for all isolated \(\ell\)-blocks. The second half of this paper therefore focuses on verifying this assumption also for the isolated \(\ell\)-blocks in type \(B_{n}\). In contrast to the computation in [1] we make explicit use of the structure of the possible Levi subgroups that can occur for isolated blocks. On the one hand this reduces the computational effort of verifying Assumption 4.5, and on the other it helps to overcome technical difficulties related to the more complicated structure of the extended Weyl group (see e.g. [1, Remark 3.4]). By verifying Assumption 4.5 we can therefore finish the verification of the iAM-condition for the considered groups. **Theorem C**.: _Let \(G\) be a quasi-simple group of Lie type \(B_{n}\) or \(C_{n}\), with \(n\geq 2\), defined over the finite field \(\mathbb{F}_{q}\) for \(q\) a power of an odd prime, and let \(\ell\geq 5\) be a prime not dividing \(q\). Then every \(\ell\)-block of \(G\) satisfies the iAM-condition._ ### Structure of the paper In Section 1 we provide some of the basic notation that will be used throughout the paper. Sections 2 and 3 are dedicated to the parametrisation of the global, respectively local, height zero characters via the techniques developed by Enguehard together with \(d\)-Harish-Chandra theory. This will then be used in Section 4 to prove Theorem B. The remaining sections are then dedicated to verifying Assumption 4.5 in type \(B_{n}\). **Acknowledgment**.: The research of the first author is funded through the DFG (Project: BR 6142/1-1). The second author thanks Jay Taylor for helpful conversations. Moreover, he would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme "Groups, representations and applications: new perspectives" when work on this paper was undertaken. This work was supported by EPSRC grant number EP/R014604/1. This paper is a contribution via the second author to SFB TRR 195. The authors thank Britta Späth and Gunter Malle for carefully reading a previous version of this paper. ## 1. Notation ### Finite groups of Lie type Throughout, \(\mathbf{G}\) denotes a connected reductive group over an algebraic closure \(\mathbb{F}\) of \(\mathbb{F}_{p}\) for a prime number \(p\). Let \(F:\mathbf{G}\to\mathbf{G}\) be a Frobenius endomorphism defining an \(\mathbb{F}_{q}\)-structure on \(\mathbf{G}\) where \(q\) is an integral power of \(p\). We let \((\mathbf{G}^{*},F)\) be a pair in duality with \((\mathbf{G},F)\). If \(\mathbf{H}\) is an \(F\)-stable subgroup of \(\mathbf{G}\) (or of \(\mathbf{G}^{*}\)), then we denote \(H:=\mathbf{H}^{F}\). An \(F\)-stable torus \(\mathbf{T}\) of \(\mathbf{G}\) is called an _\(e\)-torus_ if it splits completely over \(\mathbb{F}_{q^{e}}\) but no non-trivial subtorus splits over any smaller field. In particular, \(|\mathbf{T}^{F}|\) is a power of \(\Phi_{e}(q)\), where \(\Phi_{e}\) denotes the \(e\)th cyclotomic polynomial. For an \(F\)-stable torus \(\mathbf{T}\) we denote by \(\mathbf{T}_{\Phi_{e}}\) its maximal \(e\)-split subtorus. The centralizers of \(e\)-tori of \(\mathbf{G}\) are called _\(e\)-split Levi subgroups_.
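As a purely computational aside (not part of the paper's argument), the factorization \(q^{n}-1=\prod_{e\mid n}\Phi_{e}(q)\), which governs the generic orders of \(e\)-tori such as the statement that \(|\mathbf{T}^{F}|\) is a power of \(\Phi_{e}(q)\), is easy to verify with a computer algebra system. A minimal sketch in Python with sympy:

```python
import sympy as sp

q = sp.symbols('q')

# q^n - 1 is the product of the cyclotomic polynomials Phi_e(q)
# over the divisors e of n; this underlies the generic orders of
# e-tori appearing throughout the paper.
n = 12
prod = sp.Mul(*[sp.cyclotomic_poly(e, q) for e in sp.divisors(n)])
assert sp.expand(prod - (q**n - 1)) == 0
print(sp.factor(q**n - 1))  # product of cyclotomic factors
```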
### Character theory For any \(F\)-stable Levi subgroup \(\mathbf{L}\) of a (not necessarily \(F\)-stable) parabolic subgroup \(\mathbf{P}\) of \(\mathbf{G}\), Lusztig defines linear maps \[R^{\mathbf{G}}_{\mathbf{L}\subset\mathbf{P}}:\mathbb{Z}\operatorname{Irr}(L)\longrightarrow\mathbb{Z}\operatorname{Irr}(G),\] \[{}^{*}R^{\mathbf{G}}_{\mathbf{L}\subset\mathbf{P}}:\mathbb{Z}\operatorname{Irr}(G)\longrightarrow\mathbb{Z}\operatorname{Irr}(L).\] By the Mackey formula (see [1, Theorem 3.3.7]), these maps do not depend, in the cases we consider, on the choice of parabolic subgroup, so we will write \({}^{(*)}R^{\mathbf{G}}_{\mathbf{L}}\) instead of \({}^{(*)}R^{\mathbf{G}}_{\mathbf{L}\subset\mathbf{P}}\). A character \(\chi\in\operatorname{Irr}(G)\) is called \(e\)_-cuspidal_ if \({}^{*}R^{\mathbf{G}}_{\mathbf{L}}(\chi)=0\) for every \(e\)-split proper Levi subgroup \(\mathbf{L}\) of \(\mathbf{G}\). A pair \((\mathbf{L},\lambda)\) consisting of an \(e\)-split Levi subgroup \(\mathbf{L}\) of \(\mathbf{G}\) and an \(e\)-cuspidal character \(\lambda\in\operatorname{Irr}(L)\) is then called an \(e\)_-cuspidal pair_. Given an \(e\)-cuspidal pair \((\mathbf{L},\lambda)\), we write \[\mathcal{E}(G,(\mathbf{L},\lambda)):=\{\chi\in\operatorname{Irr}(G)\mid\langle{}^{*}R^{\mathbf{G}}_{\mathbf{L}}(\chi),\lambda\rangle\neq 0\}\] for the set of constituents of \(R^{\mathbf{G}}_{\mathbf{L}}(\lambda)\). This is called the \(e\)_-Harish-Chandra series of \(G\) above \((\mathbf{L},\lambda)\)_. For a semisimple element \(s\in G^{*}\) we denote by \(\mathcal{E}(G,s)\) the rational Lusztig series associated to it. For \(z\in\operatorname{Z}(G^{*})\) there exists a linear character \(\hat{z}\in\operatorname{Irr}(G)\) and multiplication with \(\hat{z}\) induces a bijection \(\hat{z}:\mathcal{E}(G,1)\to\mathcal{E}(G,z)\), see [1, Equation 8.19]. Moreover, if the center \(\operatorname{Z}(\mathbf{G})\) is connected, \[\psi_{G,s}:\mathcal{E}(\operatorname{C}_{G^{*}}(s),1)\to\mathcal{E}(G,s)\] will denote the unique Jordan decomposition from Digne-Michel [1, Theorem 4.7.1]. In particular, we have a bijection \(\psi_{G,1}:\mathcal{E}(G^{*},1)\to\mathcal{E}(G,1)\) between the set of unipotent characters of \(G\) and \(G^{*}\). ### Blocks of groups of Lie type We let \(\ell\neq p\) be a fixed prime and denote by \(\mathcal{E}(G,\ell^{\prime})\) the union of Lusztig series \(\mathcal{E}(G,s)\) with \(s\in G^{*}\) a semisimple element of \(\ell^{\prime}\)-order. For such a semisimple element \(s\) of \(\ell^{\prime}\)-order, we denote by \(\operatorname{C}_{G^{*}}(s)_{\ell}\) the subset of \(\ell\)-power order elements of the centralizer of \(s\). By a fundamental result of Broué-Michel [1, Theorem 9.12], the set \[\mathcal{E}_{\ell}(G,s)=\bigcup_{t\in\operatorname{C}_{G^{*}}(s)_{\ell}}\mathcal{E}(G,st)\] is the set of characters associated to a sum of \(\ell\)-blocks. We denote by \(\operatorname{Bl}(G,s)\) the corresponding set of blocks. ## 2. Global characters ### Parametrizing height zero characters Throughout this section, we let \(\mathbf{G}\) be a connected reductive group with connected centre all of whose simple components are of classical type. Suppose that \(\ell\) is a prime with \(\ell\nmid q\) and \(\ell\geq 5\). We denote by \(d\) the order of \(q\) modulo \(\ell\).
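For orientation only (an illustrative aside, not part of the text): given concrete values of \(q\) and \(\ell\), the integer \(d\) is simply the multiplicative order of \(q\) modulo \(\ell\), and \(\ell\) then divides \(\Phi_{d}(q)\). A minimal sketch in Python with sympy, using arbitrarily chosen example values:

```python
from sympy import cyclotomic_poly
from sympy.ntheory import n_order

# d = multiplicative order of q modulo ell; then ell | Phi_d(q),
# since ell | q^d - 1 = prod_{e | d} Phi_e(q) while ell does not
# divide q^e - 1 for any proper divisor e of d.
for q, ell in [(3, 5), (7, 11), (4, 13)]:
    d = n_order(q, ell)
    assert cyclotomic_poly(d, q) % ell == 0
    print(f"q={q}, ell={ell}: d={d}, Phi_d(q)={cyclotomic_poly(d, q)}")
```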
According to [1, Theorem 4.1], to each \(d\)-cuspidal pair \((\mathbf{L},\lambda)\) with \(\lambda\in\mathcal{E}(L,\ell^{\prime})\), up to \(\mathbf{G}^{F}\)-conjugation, there exists a unique \(\ell\)-block \(b=b_{\mathbf{G}^{F}}(\mathbf{L},\lambda)\) of \(\mathbf{G}^{F}\) associated to it. Let \(s\in L^{*}\) be a semisimple element of \(\ell^{\prime}\)-order such that \(\lambda\in\mathcal{E}(L,s)\). Observe that since \((\mathbf{L},\lambda)\) is \(d\)-cuspidal, we have \(\operatorname{Z}^{\circ}(\operatorname{C}^{\circ}_{\mathbf{L}^{*}}(s))_{\Phi_{d}}=\operatorname{Z}^{\circ}(\mathbf{L}^{*})_{\Phi_{d}}\), see [1, Remark 2.2]. Since \(\mathbf{L}^{*}\) is \(d\)-split we have \(\mathbf{L}^{*}=\operatorname{C}_{\mathbf{G}^{*}}(\operatorname{Z}^{\circ}(\mathbf{L}^{*})_{\Phi_{d}})\) and so \(\operatorname{C}_{\mathbf{L}^{*}}(s)=\operatorname{C}_{\operatorname{C}_{\mathbf{G}^{*}}(s)}(\operatorname{Z}^{\circ}(\operatorname{C}^{\circ}_{\mathbf{L}^{*}}(s))_{\Phi_{d}})\) is a \(d\)-split Levi subgroup of \((\operatorname{C}_{\mathbf{G}^{*}}(s),F)\). **Lemma 2.1**.: _With the assumptions and notation from above, we have_ \[\operatorname{Irr}_{0}(G,b)\subseteq\bigcup_{t\in\operatorname{Z}(\operatorname{C}_{L^{*}}(s))_{\ell}}\mathcal{E}(G,st).\] Proof.: Let \(\mathbf{G}(s)\) be a connected reductive group in duality with \(\mathrm{C}_{\mathbf{G}^{*}}(s)\) (this group is connected by [1, Proposition 13.16] as \(\mathrm{Z}(\mathbf{G})\) is connected by assumption). According to [1, Theorem 1.6], there exists a bijection \[\mathcal{B}_{G,s}:\mathrm{Bl}(\mathbf{G}(s)^{F},1)\to\mathrm{Bl}(\mathbf{G}^{F},s)\] such that for \(t\in\mathrm{C}_{G^{*}}(s)_{\ell}\) we have \[\psi_{G,st}\circ\psi_{G(s),t}^{-1}(\mathrm{Irr}(G(s),c)\cap\mathcal{E}(G(s),t))=\mathrm{Irr}(G,b)\cap\mathcal{E}(G,st),\] where \(\mathcal{B}_{G,s}(c)=b\). By [1, Theorem 1.6] this bijection yields a height preserving bijection \(\mathrm{Irr}(G(s),c)\to\mathrm{Irr}(G,b)\). The block \(c\) is determined from \(b\) by [1, Theorem 1.6(B.1.a)] as follows: By Jordan decomposition \(\lambda\) corresponds to a unipotent character \(\lambda_{s}\in\mathcal{E}(\mathrm{C}_{L^{*}}(s),1)\). Therefore, there is a Levi subgroup \(\mathbf{L}(s)\) of \(\mathbf{G}(s)\) in duality with the \(d\)-split Levi subgroup \(\mathrm{C}_{\mathbf{L}^{*}}(s)\) of \(\mathrm{C}_{\mathbf{G}^{*}}(s)\). We let \(\lambda(s):=\psi_{L(s),1}(\lambda_{s})\). Then \(c\) is the block associated to the unipotent \(d\)-cuspidal pair \((L(s),\lambda(s))\) of \(G(s)\). By [1, Theorem 5.2] we therefore deduce that \[\mathrm{Irr}_{0}(G(s),c)\subseteq\bigcup_{t\in\mathrm{Z}(L(s)^{*})_{\ell}}\mathcal{E}(G(s),st).\] By definition \(L(s)^{*}=\mathrm{C}_{L^{*}}(s)\), so \(t\in\mathrm{Z}(\mathrm{C}_{L^{*}}(s))\). Using that the height preserving bijection \(\mathrm{Irr}(G(s),c)\to\mathrm{Irr}(G,b)\) preserves Lusztig series yields the claim. **Lemma 2.2**.: _Keep the notation and assumptions of Lemma 2.1 and let \(\chi\in\mathcal{E}(G,st)\cap\mathrm{Irr}_{0}(G,b)\) be a height zero character. Then \(\chi\) is a constituent of \(R^{\mathbf{G}}_{\mathbf{L}}(\theta)\), where \(\theta\) is the \(d\)-cuspidal character defined by \(\theta:=\psi_{L,st}(\psi_{L,s}^{-1}(\lambda))\in\mathcal{E}(L,st)\). Note that by Lemma 2.1 the character \(\theta\) is well-defined as \(\mathrm{C}_{L^{*}}(st)=\mathrm{C}_{L^{*}}(s)\)._ Proof.: Let \(\mathbf{K}\) be a Levi subgroup of \(\mathbf{G}\) in duality with the Levi subgroup \(\mathrm{C}_{\mathbf{G}^{*}}(t)\) of \(\mathbf{G}^{*}\).
Note that the latter is indeed a Levi subgroup of \(\mathbf{G}^{*}\) since \(\ell\geq 5\) (see [1, Theorem 13.16]). By [1, Proposition 2.2.4] there exists a \(d\)-cuspidal pair \((\mathbf{L}_{\mathbf{K}},\lambda_{\mathbf{K}})\) of \(\mathbf{K}\) such that the map \(R^{\mathbf{G}}_{\mathbf{K}}\hat{t}\) yields a bijection \[\mathrm{Irr}(b_{K}(\mathbf{L}_{\mathbf{K}},\lambda_{\mathbf{K}}))\cap\mathcal{ E}(K,s)\to\mathrm{Irr}(b_{G}(\mathbf{L},\lambda))\cap\mathcal{E}(G,st).\] Here, the \(d\)-cuspidal pair \((\mathbf{L}_{\mathbf{K}},\lambda_{\mathbf{K}})\) of \(\mathbf{K}\) is constructed as follows: Let \((\mathrm{C}_{\mathbf{L}^{*}}(s),\lambda_{s})\) be the unipotent \(d\)-cuspidal pair of \(\mathrm{C}_{\mathbf{G}^{*}}(s)\) corresponding under Jordan decomposition to the \(d\)-cuspidal pair \((\mathbf{L},\lambda)\) of \(G\). By [1, Definition and Proposition 1.4.7] there exists a unipotent \(d\)-cuspidal pair \((\mathrm{C}_{\mathbf{L}^{*}}(s)_{\mathrm{C}_{\mathbf{K}^{*}}(s)},(\lambda_{s} )_{\mathrm{C}_{\mathbf{K}^{*}}(s)})\) of \(\mathrm{C}_{\mathbf{K}^{*}}(s)\) such that \([\mathrm{C}_{\mathbf{L}^{*}}(s),\mathrm{C}_{\mathbf{L}^{*}}(s)]=[\mathrm{C}_{ \mathbf{L}^{*}}(s)_{\mathrm{C}_{\mathbf{K}^{*}}(s)},\mathrm{C}_{\mathbf{L}^{*} }(s)_{\mathrm{C}_{\mathbf{K}^{*}}(s)}]^{g}\) for some \(g\in\mathrm{C}_{G^{*}}(s)\), and the characters \(\lambda_{s}\) and \((\lambda_{s})_{\mathrm{C}_{\mathbf{K}^{*}}(s)}^{g}\) both restrict to the same character of \([\mathrm{C}_{\mathbf{L}^{*}}(s),\mathrm{C}_{\mathbf{L}^{*}}(s)]^{F}\). Then \((\mathbf{L}_{\mathbf{K}},\lambda_{\mathbf{K}})\) is defined as the \(d\)-cuspidal pair of \(\mathbf{K}\) with \(\lambda_{K}\in\mathcal{E}(\mathbf{L}_{\mathbf{K}}^{F},s)\) corresponding to \((\mathrm{C}_{\mathbf{L}^{*}}(s)_{\mathrm{C}_{\mathbf{K}^{*}}(s)},(\lambda_{s} )_{\mathrm{C}_{\mathbf{K}^{*}}(s)})\) under Jordan decomposition. Observe that by Lemma 2.1 we have \(t\in\mathrm{Z}(\mathrm{C}_{L^{*}}(s))\) and so \(\mathrm{C}_{\mathbf{G}^{*}}(t)\cap\mathrm{C}_{\mathbf{L}^{*}}(s)=\mathrm{C}_{ \mathbf{L}^{*}}(s)\). We therefore have \(\mathbf{K}(s)^{*}=\mathbf{L}(s)^{*}\) and so \((\mathbf{L}(s)_{\mathbf{K}(s)^{*}},(\lambda_{s})_{\mathbf{K}(s)^{*}})=(\mathbf{ L}(s)^{*},\lambda_{s})\). Thus, \(\mathbf{L}_{\mathbf{K}}\) is the Levi subgroup of \(\mathbf{K}\) in duality with the Levi subgroup \(\mathrm{C}_{\mathbf{L}^{*}}(t)\) of \(\mathrm{C}_{\mathbf{G}^{*}}(t)\) and \(\lambda_{\mathbf{K}}=\psi_{K,s}(\lambda_{s})\in\mathcal{E}(K,s)\). Moreover, \(\mathbf{L}_{\mathbf{K}}\) is a Levi subgroup of \(\mathbf{L}\). Now the center of the Levi subgroup \(\mathbf{K}\) is again connected by [1, Proposition 13.12]. By [1, Proposition 5.2] (and its proof) we find that any \(\psi\in\mathrm{Irr}(b_{K}(\mathbf{L}_{\mathbf{K}},\lambda_{\mathbf{K}}))\cap \mathcal{E}(K,s)\) appears in the \(d\)-Harish-Chandra series of \((\mathbf{L_{K}},\lambda_{\mathbf{K}})\). Hence, \(\psi\) is a constituent of \(R^{\mathbf{K}}_{\mathbf{L_{K}}}(\lambda_{\mathbf{K}})\) and thus \(\chi:=R^{\mathbf{G}}_{\mathbf{K}}(\hat{t}\psi)\) is a constituent of \[R^{\mathbf{G}}_{\mathbf{K}}(\hat{t}R^{\mathbf{K}}_{\mathbf{L_{K}}}(\lambda_{ \mathbf{K}}))=R^{\mathbf{G}}_{\mathbf{L_{K}}}(\lambda_{\mathbf{K}}\hat{t})=R^{ \mathbf{G}}_{\mathbf{L}}(R^{\mathbf{L}}_{\mathbf{L_{K}}}(\lambda_{\mathbf{K}} \hat{t})).\] Now \(R^{\mathbf{L}}_{\mathbf{L_{K}}}\circ\psi_{\mathbf{L_{K}},st}=\psi_{\mathbf{L}, st}\) by the properties of Jordan decomposition (see [10, Theorem 4.7.1]). 
By what we said before, \(st\) is central in \(\mathrm{C}_{L^{*}}(s)\). Hence, \(\chi\) is a constituent of \(R^{\mathbf{G}}_{\mathbf{L}}(\theta)\) with \(\theta:=R^{\mathbf{L}}_{\mathbf{L_{K}}}(\lambda_{\mathbf{K}}\hat{t})=\psi_{L,st}\psi_{L,s}^{-1}(\lambda)\). Finally, observe that \(\lambda_{\mathbf{K}}\) is a \(d\)-cuspidal character. Thus, \(\lambda_{\mathbf{K}}\hat{t}\) is also \(d\)-cuspidal and hence by the proof of [11, Proposition 4.1] the character \(\theta\) is \(d\)-cuspidal as well. Note that the proof of the previous lemma shows in particular that all irreducible constituents of \(R^{\mathbf{G}}_{\mathbf{L}}(\theta)\) lie in the block \(b\). A similar description of the (height zero) characters of blocks of abelian defect was already obtained by analogous arguments in [16, Theorem 2.9]. ### \(d\)-Harish-Chandra theory The previous lemma can now be used to give a parametrization of the global height zero characters in terms of \(d\)-Harish-Chandra theory. Recall that if \([\mathbf{G},\mathbf{G}]\) is simply connected, we have by [10, Equation 8.19] a bijection \[\mathrm{Z}(G^{*})\to\mathrm{Irr}(\mathbf{G}^{F}/[\mathbf{G},\mathbf{G}]^{F}),z\mapsto\hat{z}.\] **Proposition 2.3**.: _In addition to the assumptions from before we assume that \([\mathbf{G},\mathbf{G}]\) is simply connected. Let \((\mathbf{L},\theta)\), \(\theta\in\mathcal{E}(L,st)\), be a \(d\)-cuspidal pair as in Lemma 2.2. Let \(\mathrm{Aut}_{\mathbb{F}}(\mathbf{G}^{F})\) be defined as in [10, 2.4]. There exists an \((\mathrm{Irr}(\mathbf{G}^{F}/[\mathbf{G},\mathbf{G}]^{F})\rtimes\mathrm{Aut}_{\mathbb{F}}(\mathbf{G}^{F}))_{(\mathbf{L},\theta)}\)-equivariant bijection_ \[\mathrm{Irr}(W_{G}(\mathbf{L},\theta))\to\mathcal{E}(\mathbf{G}^{F},(\mathbf{L},\theta)),\eta\mapsto R^{\mathbf{G}}_{\mathbf{L}}(\theta)_{\eta}.\] _Here, \((\hat{z},\sigma)\in(\mathrm{Irr}(\mathbf{G}^{F}/[\mathbf{G},\mathbf{G}]^{F})\rtimes\mathrm{Aut}_{\mathbb{F}}(\mathbf{G}^{F}))_{(\mathbf{L},\theta)}\) acts by conjugation with \(\sigma\) on \(\mathrm{Irr}(W_{G}(\mathbf{L},\theta))\). In particular, we have \(R^{\mathbf{G}}_{\mathbf{L}}(\theta)_{\eta}\in\mathrm{Irr}_{0}(G,b)\) if and only if the following conditions are satisfied:_ 1. \(W_{G}(\mathbf{L},\theta)\) _contains a Sylow_ \(\ell\)_-subgroup of_ \(W_{G}(\mathbf{L},\lambda)\)_._ 2. \(\eta\in\mathrm{Irr}(W_{G}(\mathbf{L},\theta))\) _is an_ \(\ell^{\prime}\)_-character._ Proof.: We follow the general ideas of [11, Section 5.B]. Let \((\mathrm{C}_{\mathbf{L}^{*}}(st),\theta_{st})\) be the unipotent \(d\)-cuspidal pair in \(\mathrm{C}_{\mathbf{L}^{*}}(st)\) associated to \((\mathbf{L},\theta)\).
Then as observed in the proof of [11, Proposition 5.4] we have as in [10, Theorem 3.4] an \(\mathrm{Aut}_{\mathbb{F}}(\mathrm{C}_{\mathbf{G}^{*}}(st))_{(\mathrm{C}_{\mathbf{L}^{*}}(st),\theta_{st})}\)-equivariant bijection \[I^{\mathrm{C}_{\mathbf{G}^{*}}(st)}_{\mathrm{C}_{\mathbf{L}^{*}}(st),\theta_{st}}:\mathrm{Irr}(W_{\mathrm{C}_{G^{*}}(st)}(\mathrm{C}_{\mathbf{L}^{*}}(st),\theta_{st}))\to\mathcal{E}(\mathrm{C}_{\mathbf{G}^{*}}(st),(\mathrm{C}_{\mathbf{L}^{*}}(st),\theta_{st})).\] As in the proof of [11, Lemma 5.3] we obtain for \(\eta\in\mathrm{Irr}(W_{\mathrm{C}_{G^{*}}(st)}(\mathrm{C}_{\mathbf{L}^{*}}(st),\theta_{st}))\) the equality \[I^{\mathrm{C}_{\mathbf{G}^{*}}(st)}_{\mathrm{C}_{\mathbf{L}^{*}}(st),\theta_{st}}(\eta)(1)_{\ell}=|\mathrm{C}_{G^{*}}(st):\mathrm{N}_{\mathrm{C}_{G^{*}}(st)}(\mathrm{C}_{\mathbf{L}^{*}}(st))|_{\ell}\theta_{st}(1)_{\ell}\eta(1)_{\ell}.\] By the uniqueness properties of Jordan decomposition (see also the proof of [16, Theorem 2.9]) duality induces a natural isomorphism \[\Phi:W_{\mathrm{C}_{G^{*}}(st)}(\mathrm{C}_{\mathbf{L}^{*}}(st),\theta_{st})\to W_{G}(\mathbf{L},\theta).\] In particular, we also obtain in this case a bijection \[\mathrm{Irr}(W_{G}(\mathbf{L},\theta))\to\mathcal{E}(G,(\mathbf{L},\theta)),\,\eta\mapsto\psi_{G,st}\circ I^{\mathrm{C}_{\mathbf{G}^{*}}(st)}_{\mathrm{C}_{\mathbf{L}^{*}}(st),\theta_{st}}\circ(\eta\circ\Phi),\] and as in the Harish-Chandra case we denote by \(R^{\mathbf{G}}_{\mathbf{L}}(\theta)_{\eta}\) the character corresponding to \(\eta\in\operatorname{Irr}(W_{G}(\mathbf{L},\theta))\) under this bijection. As in [1, Lemma 5.3] the degree formula for the unipotent \(d\)-Harish-Chandra case yields that \[R^{\mathbf{G}}_{\mathbf{L}}(\theta)_{\eta}(1)_{\ell}=|G:\operatorname{N}_{C_{G^{*}}(st)}(\operatorname{C}_{\mathbf{L}^{*}}(st))|_{\ell}\theta_{st}(1)_{\ell}\eta(1)_{\ell}.\] By Jordan decomposition, we have \(\theta(1)_{\ell}=|L:\operatorname{C}_{L^{*}}(st)|_{\ell}\theta_{st}(1)_{\ell}\). Replacing this in the formula above yields \[R^{\mathbf{G}}_{\mathbf{L}}(\theta)_{\eta}(1)_{\ell}=\frac{|G:L|_{\ell}}{|W_{\operatorname{C}_{\mathbf{G}^{*}}(st)}(\operatorname{C}_{\mathbf{L}^{*}}(st))|_{\ell}}\theta(1)_{\ell}\eta(1)_{\ell}.\] As in the proof of [1, Theorem 5.6], we deduce that the minimum over all such \((t,\eta)\) is obtained for \((1,1)\); hence the block \(b\) has defect \[\frac{|\operatorname{C}_{G^{*}}(s)|_{\ell}}{\lambda_{s}(1)_{\ell}}.\] It remains to show that the so-obtained bijection is \((\operatorname{Irr}(\mathbf{G}^{F}/[\mathbf{G},\mathbf{G}]^{F})\rtimes\operatorname{Aut}_{\mathbb{F}}(\mathbf{G}^{F}))_{(\mathbf{L},\theta)}\)-equivariant. For this, let \(\sigma:\mathbf{G}\to\mathbf{G}\) be a bijective morphism commuting with the action of \(F\). If \(\sigma(\mathbf{L})=\mathbf{L}\), then there exists a dual morphism \(\sigma^{*}:\mathbf{G}^{*}\to\mathbf{G}^{*}\) commuting with \(F^{*}\) and such that \(\sigma|_{\mathbf{L}}\) and \(\sigma^{*}|_{\mathbf{L}^{*}}\) are in duality. If \(\hat{z}\sigma\) stabilizes \((\mathbf{L},\theta)\) for some \(z\in\operatorname{Z}(G^{*})\), then \(z\sigma^{*}(st)\) is \(L^{*}\)-conjugate to \(st\). Hence, we can assume (by possibly replacing \(\sigma^{*}\) by \(l\sigma^{*}\) for some suitable \(l\in L^{*}\)) that \(z\sigma^{*}(st)=st\). By the equivariance of Jordan decomposition [1, Theorem 3.1] it follows that \(\sigma^{*}\) stabilizes \((\operatorname{C}_{\mathbf{L}^{*}}(st),\theta_{st})\).
Using this construction, it follows that \(\Phi:W_{\operatorname{C}_{G^{*}}(st)}(\operatorname{C}_{\mathbf{L}^{*}}(st),\theta_{st})\to W_{G}(\mathbf{L},\theta)\) is \((\sigma,\sigma^{*})\)-equivariant. The equivariance of Jordan decomposition (see [1, Theorem 3.1]) and the equivariance properties of the parametrization of unipotent \(d\)-Harish-Chandra series recalled at the beginning of the proof show that the constructed bijection is \((\operatorname{Irr}(\mathbf{G}^{F}/[\mathbf{G},\mathbf{G}]^{F})\rtimes\operatorname{Aut}_{\mathbb{F}}(\mathbf{G}^{F}))_{(\mathbf{L},\theta)}\)-equivariant. ## 3. Local characters From now on we assume that \(\mathbf{G}\) is a simple, simply connected algebraic group of classical type \(B_{n}\), \(C_{n}\) or \(D_{n}\). In particular, the center of \(\mathbf{G}\) can be disconnected. Let \(\iota:\mathbf{G}\hookrightarrow\widetilde{\mathbf{G}}\) be a regular embedding, i.e. a closed embedding of algebraic groups with \(\operatorname{Z}(\widetilde{\mathbf{G}})\) connected and \(\widetilde{\mathbf{G}}=\operatorname{Z}(\widetilde{\mathbf{G}})\mathbf{G}\), see [1, Section 15.1]. Assume \(F:\widetilde{\mathbf{G}}\to\widetilde{\mathbf{G}}\) is a Frobenius endomorphism extending the one of \(\mathbf{G}\), as in [11, Section 2]. Recall that \(\Phi_{e}\) denotes the \(e\)th cyclotomic polynomial. Set \(E_{q,\ell}:=\{e:\ell\mid\Phi_{e}(q)\}\) and observe that \(E_{q,\ell}=\{d\ell^{i}\mid i\geq 0\}\) by the remarks before [1, Lemma 13.17], where \(d\) is the order of \(q\) modulo \(\ell\). We have the following elementary lemma: **Lemma 3.1**.: _Let \(k\) be a positive integer with \((k,\ell)=1\). Then \((q^{k}-1)_{\ell}\leq\Phi_{d}(q)_{\ell}\) (resp. \((q^{k}+1)_{\ell}\leq\Phi_{d}(q)_{\ell}\)) with equality if and only if \(d\mid k\) (resp. \(d\mid 2k\) but \(d\nmid k\))._ Proof.: We write \(q^{k}-1=\prod_{e\mid k}\Phi_{e}(q)\) resp. \(q^{k}+1=\prod_{e\mid 2k,e\nmid k}\Phi_{e}(q)\). The property follows by the above characterization of \(E_{q,\ell}\). **Remark 3.2**.: _Recall [1, Example 3.5.20]: If \(F\) defines an \(\mathbb{F}_{q}\)-structure on \(\mathbf{G}\), then \(\mathbf{G}^{F^{d}}\) inherits an \(\mathbb{F}_{q}\)-structure but can also be considered as a group over \(\mathbb{F}_{q^{d}}\). Under this identification, the \(d\)-split Levi subgroups of \((\mathbf{G},F)\) correspond to the \(1\)-split Levi subgroups of \((\mathbf{G},F^{d})\)._ _We claim that similarly for a positive integer \(k\), the \(d\)-split Levi subgroups of \((\mathbf{G},F)\) correspond to the \(d/(d,k)\)-split Levi subgroups of \((\mathbf{G},F^{k})\). For this it suffices to observe that if \((\mathbf{T},F)\) is a \(d\)-split torus, then \((\mathbf{T},F^{k})\) is a \(d/(d,k)\)-split torus. Indeed, \((\mathbf{T},F^{k})\) splits completely over \(\mathbb{F}_{q^{\mathrm{lcm}(d,k)}}\) and if any non-trivial subtorus splits over \(\mathbb{F}_{q^{ki}}\), then it also splits over \(\mathbb{F}_{q^{(d,ki)}}\) which forces \(d/(d,k)\mid i\)._ Recall that \(d\) is the order of \(q\) modulo \(\ell\). Additionally, we set \(d_{0}:=d\) if \(d\) is odd and \(d_{0}:=d/2\) if \(d\) is even. **Lemma 3.3**.: _Assume that \(s\) is quasi-isolated in \(\mathbf{G}^{*}\) and \((\mathbf{L},\lambda)\) a \(d\)-cuspidal pair of \(\mathbf{G}\) with \(\lambda\in\mathcal{E}(L,s)\) as before.
Then \(\mathrm{Z}(\mathrm{C}_{L^{*}}(s))_{\ell}=\mathrm{Z}(L^{*})_{\ell}\)._ Proof.: By the properties of \(d\)-cuspidal pairs, see [12, Remark 2.2], we have \(\mathrm{Z}^{\circ}(\mathrm{C}_{\mathbf{L}^{*}}^{\circ}(s))_{\Phi_{d}}=\mathrm{Z}^{\circ}(\mathbf{L}^{*})_{\Phi_{d}}\). For a torus \(\mathbf{T}\) defined over \(\mathbb{F}_{q}\) and \(E\) a set of integers we let \(\mathbf{T}_{\Phi_{E}}\) denote the unique maximal \(\Phi_{E}\)-subgroup, see [10, Definition 13.3, Proposition 13.5]. To prove the claim it is therefore by [10, Proposition 13.12] sufficient to show that \(\mathrm{Z}^{\circ}(\mathrm{C}_{\mathbf{L}^{*}}^{\circ}(s))_{\Phi_{d}}=\mathrm{Z}^{\circ}(\mathrm{C}_{\mathbf{L}^{*}}^{\circ}(s))_{\Phi_{E}}\), where \(E:=E_{q,\ell}\). Observe that \(\mathrm{Z}^{\circ}(\mathrm{C}_{\mathbf{L}^{*}}^{\circ}(s))_{\Phi_{d}}\) is contained in a Sylow \(d\)-torus \(\mathbf{S}\) of \(\mathrm{C}_{\mathbf{G}^{*}}^{\circ}(s)\). Hence, it is enough to show the statement with \(\mathrm{Z}^{\circ}(\mathrm{C}_{\mathbf{L}^{*}}^{\circ}(s))\) replaced by \(\mathrm{Z}^{\circ}(\mathrm{C}_{\mathrm{C}_{\mathbf{G}^{*}}^{\circ}(s)}(\mathbf{S}))\). For \(\mathbf{H}\) simple of classical type the connected centers of the minimal \(d\)-split Levi subgroups are (up to multiplication or division by a factor \(q\pm 1\)) of the form \((q^{d_{0}}-\varepsilon_{d})^{a(d)}\), where \(d_{0}\in\{2d,d,d/2\}\), \(a(d)\in\mathbb{N}\) and \(\varepsilon_{d}\in\{\pm 1\}\), see [11, Example 3.5.15]. Since \(\ell\neq 2\), the \(E_{q,\ell}\)-part of the center is equal to its \(d\)-part by Lemma 3.1. We can write the adjoint quotient \((\mathrm{C}_{\mathbf{G}^{*}}^{\circ}(s))_{\mathrm{ad}}\) of \(\mathrm{C}_{\mathbf{G}^{*}}^{\circ}(s)\) as \((\mathrm{C}_{\mathbf{G}^{*}}^{\circ}(s))_{\mathrm{ad}}=\mathbf{H}_{1}\times\mathbf{H}_{2}\times\mathbf{H}_{3}\) for simple (or trivial) algebraic groups \(\mathbf{H}_{1},\mathbf{H}_{2},\mathbf{H}_{3}\) with \((\mathrm{C}_{G^{*}}^{\circ}(s))_{\mathrm{ad}}=H_{1}\times H_{2}\times H_{3}\) or \((\mathrm{C}_{G^{*}}^{\circ}(s))_{\mathrm{ad}}\cong H_{1}^{F^{2}}\times H_{3}\), see [10, Table 2]. In particular, by Remark 3.2 the polynomial order of \(\mathbf{S}_{\mathrm{ad}}\), the image of \(\mathbf{S}\) in the adjoint quotient \((\mathrm{C}_{\mathbf{G}^{*}}^{\circ}(s))_{\mathrm{ad}}\), is a product of \(\Phi_{e}\)-polynomials with \(e\mid 4d\). Since \(\ell\neq 2\) it follows that \(\ell\nmid\Phi_{4d}(q)\) by Lemma 3.1. Since \(\Phi_{1},\Phi_{2}\) are the only cyclotomic polynomials that possibly divide the generic order of \(\mathrm{Z}^{\circ}(\mathrm{C}_{G^{*}}^{\circ}(s))\) by [10, Table 2] and \(\ell\neq 2\), the claim follows from [10, Proposition 13.7]. **Corollary 3.4**.: _Let \(b=b_{\mathbf{G}^{F}}(\mathbf{L},\lambda)\) be a quasi-isolated block associated to the \(d\)-cuspidal pair \((\mathbf{L},\lambda)\). Then the subgroup \(Q:=\mathrm{Z}(L)_{\ell}\) is a Cabanes subgroup of a defect group of \(b\)._ Proof.: Since \(\ell\nmid|\tilde{G}/G\,\mathrm{Z}(\tilde{G})|\), it suffices to prove the statement for a \(d\)-cuspidal pair \((\tilde{\mathbf{L}},\tilde{\lambda})\) of \((\tilde{\mathbf{G}},F)\) covering \((\mathbf{L},\lambda)\). The defect group of the block of \(\tilde{G}\) associated to this \(d\)-cuspidal pair is isomorphic to the defect group of the block of the associated unipotent \(d\)-cuspidal pair \((\tilde{L}(\tilde{s}),\tilde{\lambda}(\tilde{s}))\) of \(\tilde{G}(\tilde{s})\), see [10, Proposition 5.1].
We know that \(Z:=\mathrm{Z}(\tilde{L}(\tilde{s}))_{\ell}\) is a normal abelian subgroup of a defect group \(D\) of the associated unipotent block, see [10, Lemma 4.5]. Moreover, by the proof of [10, Lemma 4.5], we have \(D\cap\mathrm{C}_{G}(Z)\leq Z\). This implies that \(Z\) is a maximal abelian normal subgroup of \(D\) and thus the Cabanes subgroup of \(D\). We also observe that \((\mathrm{Z}(\tilde{L})_{\ell},b_{\tilde{L}}(\tilde{\lambda}))\) is a \(b_{\tilde{G}}(\tilde{\mathbf{L}},\tilde{\lambda})\)-subpair by [10, Theorem 2.5]. Since \(Z\) and \(\mathrm{Z}(\tilde{L})_{\ell}\) have the same order by Lemma 3.3, this shows the claim. Note that \(\mathbf{L}=\mathrm{C}_{\mathbf{G}}(Q)\) by [10, Proposition 13.19]. **Lemma 3.5**.: _In the situation of Corollary 3.4, the following hold._ 1. _The block_ \(b_{\mathbf{L}^{F}}(\mathbf{L},\lambda)\) _is of central defect and there exists a bijection_ \[\mathrm{Z}(\mathbf{L}^{*})_{\ell}^{F}=\mathrm{Z}(\mathrm{C}_{\mathbf{L}^{*}}(s))_{\ell}^{F}\to\mathrm{Irr}(b_{\mathbf{L}^{F}}(\mathbf{L},\lambda)),t\mapsto\hat{t}\otimes\lambda.\] 2. _Any height zero character_ \(\chi\in\mathrm{Irr}_{0}(G,b)\) _is a constituent of_ \(R_{\mathbf{L}}^{\mathbf{G}}(\lambda\hat{t})\) _with_ \(t\in\mathrm{Z}(\mathbf{L}^{*})_{\ell}^{F}\)_._ Proof.: By Lemma 3.3, \((Q,b_{L}(\lambda))\) is a self-centralizing \(b\)-subpair. In particular, \(b_{L}(\lambda)\) is of central defect. From this, part (a) follows from the remarks after [11, Definition 1.13] and [10, (8.19)]. Part (b) follows from Lemma 2.2 and the compatibility of Lusztig induction with regular embeddings. ### Properties of blocks associated to height zero characters The aim of this section is to give an alternative proof of Lemma 3.5(b) using slightly different techniques. **Lemma 3.6**.: _Let \(\mathbf{G}\) and \(F\) be as in Proposition 2.3. Suppose that \(b\) is an \(\ell\)-block of \(G\) associated to the semisimple element \(s\in G^{*}\). Then \(\mathcal{E}(G,s)\cap\operatorname{Irr}_{0}(G,b)\neq\emptyset\)._ Proof.: The block \(b\) is parametrized by the \(G\)-conjugacy class of a \(d\)-cuspidal pair \((\mathbf{L},\lambda)\). The defect group of \(b\) has order \(d(b):=|W_{\operatorname{C}_{G^{*}}(s)}(\operatorname{C}_{\mathbf{L}^{*}}(s))|_{\ell}|\operatorname{Z}(\operatorname{C}_{L^{*}}(s))|_{\ell}\) by [10, Lemma 4.16]. The characters in \(\mathcal{E}(G,s)\cap\operatorname{Irr}(G,b)\) are precisely the constituents of the \(d\)-Harish-Chandra series of \((\mathbf{L},\lambda)\), see [10, Theorem 4.1]. Via Jordan decomposition the \(d\)-Harish-Chandra series of \((\mathbf{L},\lambda)\) is mapped to the unipotent \(d\)-Harish-Chandra series of \((\operatorname{C}_{\mathbf{L}^{*}}(s),\lambda_{s})\), see [1, Theorem 4.7.2]. The degree formula following [1, Theorem 4.6.24] shows that the character \(\psi\in\operatorname{Irr}(\operatorname{C}_{G^{*}}(s),1)\) corresponding to the trivial character of the relative Weyl group satisfies \(\psi(1)_{\ell}=\lambda_{s}(1)_{\ell}|\operatorname{C}_{G^{*}}(s):\operatorname{C}_{L^{*}}(s)|_{\ell}|W_{\operatorname{C}_{G^{*}}(s)}(\operatorname{C}_{L^{*}}(s),\lambda_{s})|_{\ell}^{-1}\) and \(\lambda_{s}\) has \(\ell\)-central defect (remarks after [1, Corollary 4.6.16]), i.e. \(\lambda_{s}(1)_{\ell}=|\operatorname{C}_{L^{*}}(s):\operatorname{Z}(\operatorname{C}_{L^{*}}(s))|_{\ell}\).
Its Jordan correspondent therefore has degree \[\chi(1)_{\ell}=|G:\operatorname{C}_{G^{*}}(s)|_{\ell}|\operatorname{C}_{G^{*}}(s):\operatorname{Z}(\operatorname{C}_{L^{*}}(s))|_{\ell}|W_{G(s)}(\operatorname{C}_{L^{*}}(s),\lambda_{s})|_{\ell}^{-1}=|G|_{\ell}\,d(b)^{-1}.\] Hence, \(\chi\in\operatorname{Irr}(G,b)\cap\mathcal{E}(G,s)\) is of height zero. **Proposition 3.7**.: _Let \(\mathbf{G}\) and \(F\) be as in Lemma 3.6. Let \(b=b_{G}(\mathbf{L},\lambda)\) be an \(\ell\)-block of \(G\) which covers a quasi-isolated \(\ell\)-block of \([\mathbf{G},\mathbf{G}]^{F}\). Then we have \(\operatorname{Irr}_{0}(G,b)\subset\cup_{t\in\operatorname{Z}(\mathbf{L}^{*})_{\ell}}\mathcal{E}(G,st)\). Moreover, any \(\chi\in\operatorname{Irr}_{0}(G,b)\cap\mathcal{E}(G,st)\) is contained in the \(d\)-Harish-Chandra series of \((\mathbf{L},\hat{t}\lambda)\)._ Proof.: Assume that \(\operatorname{Irr}_{0}(G,b)\cap\mathcal{E}(G,st)\neq\emptyset\) and let \(\mathbf{K}:=\mathbf{G}(t)\). By [10, Theorem 9.16] the map \(\pm R_{\mathbf{K}}^{\mathbf{G}}\hat{t}:\mathcal{E}(K,s)\to\mathcal{E}(G,st)\) is a bijection. According to [10, Theorem 2.5], there exists a unique block \(c\) of \(K\) such that \(\pm R_{\mathbf{K}}^{\mathbf{G}}\hat{t}\) maps \(\operatorname{Irr}(K,c)\) to \(\operatorname{Irr}(G,b)\). In [11], the author characterizes the block \(c\) by a certain \(d\)-cuspidal pair which he obtains from (the Jordan correspondent of) the \(d\)-cuspidal pair \((\mathbf{L},\lambda)\), see also the proof of Lemma 2.2. In our situation we instead make use of the fact that \(\mathcal{E}(G,s)\) contains a height zero character in order to describe the block \(c\). By Lemma 3.6, it follows that the map \(\mathcal{E}(K,s)\to\mathcal{E}(G,st)\) from above is height preserving. In particular, the defect groups of \(b\) and \(c\) have the same order. Furthermore, by [10, Theorem 2.5] and [10, Proposition 13.19] since \(\mathbf{K}\) is \(E_{q,\ell}\)-split we obtain that \((\operatorname{Z}(K)_{\ell},c)\) is a \(b\)-Brauer pair. Let \((D,c_{D})\) be a maximal \(c\)-Brauer pair so that \(D\leq K\). Hence, \((1,b)\lhd(\operatorname{Z}(K)_{\ell},c)\lhd(D,c_{D})\). By transitivity of Brauer pairs, \((D,c_{D})\) is also a \(b\)-Brauer pair and since the defect groups of \(c\) and \(b\) have the same order it follows that \((D,c_{D})\) is a maximal \(b\)-Brauer pair. In particular, \(\operatorname{Z}(K)_{\ell}\) is a normal abelian subgroup of \(D\leq K\). Since \(\operatorname{Z}(L)_{\ell}=\operatorname{Z}(\operatorname{C}_{L^{*}}(s))_{\ell}\) by Lemma 3.3 is the unique maximal normal abelian subgroup of a defect group of \(b\) we may, after replacing \((\mathbf{L},\lambda)\) by a \(G\)-conjugate, assume that \((\operatorname{Z}(K)_{\ell},c)\leq(\operatorname{Z}(L)_{\ell},b_{L}(\lambda))\). This implies \(\mathbf{L}=\operatorname{C}_{\mathbf{G}}(\operatorname{Z}(L)_{\ell})\subset\operatorname{C}_{\mathbf{G}}(\operatorname{Z}(K)_{\ell})=\mathbf{K}\) since both \(\mathbf{L}\) and \(\mathbf{K}\) are \(E_{q,\ell}\)-split ([10, Proposition 13.19]). Dually this means \(\mathbf{L}^{*}\subset\mathbf{K}^{*}\) which implies \(\operatorname{Z}(\mathbf{K}^{*})\leq\operatorname{Z}(\mathbf{L}^{*})\) and therefore \(t\in\operatorname{Z}(\mathbf{L}^{*})\). Since \(\mathbf{L}\) is an \(e\)-split Levi subgroup of \(\mathbf{K}\) and \((\operatorname{Z}(L)_{\ell},b_{L}(\lambda))\) is a \(c\)-Brauer pair it follows from [10, Theorem 4.1] that \(c=R_{\mathbf{L}}^{\mathbf{K}}(b_{L}(\lambda))\) in the notation of [10, Theorem 4.1].
Hence, every character of \(\mathcal{E}(K,s)\) appears as a constituent of \(R^{\mathbf{K}}_{\mathbf{L}}(\lambda)\), see [10, Theorem 4.1]. By transitivity of Lusztig induction, every character of \(\mathcal{E}(G,st)\) therefore appears as a constituent of \(R^{\mathbf{G}}_{\mathbf{K}}\hat{t}R^{\mathbf{K}}_{\mathbf{L}}(\lambda)=R^{\mathbf{G}}_{\mathbf{L}}(\hat{t}\lambda)\). ## 4. Constructing an AM-bijection ### Conditions on the local block As before, we let \((\mathbf{L},\lambda)\) be a \(d\)-cuspidal pair of a simple, simply connected algebraic group \(\mathbf{G}\) of classical type \(B_{n}\), \(C_{n}\) or \(D_{n}\). Denote by \(\mathcal{B}\) the subgroup of \(\operatorname{Aut}_{\mathbb{F}}(\mathbf{G}^{F})\) generated by field and graph automorphisms as in [11, Section 2.A]. For simplicity, we denote \(N:=\operatorname{N}_{G}(\mathbf{L})\), \(\hat{N}:=\operatorname{N}_{G\mathcal{B}}(\mathbf{L})\) and \(\tilde{N}:=N\tilde{L}\). **Definition 4.1**.: _We say that a character \(\chi\in\operatorname{Irr}(G)\) (resp. \(\psi\in\operatorname{Irr}(N)\)) satisfies \(A^{\prime}(\infty)\), if \((\tilde{G}\mathcal{B})_{\chi}=\tilde{G}_{\chi}\mathcal{B}_{\chi}\) (resp. \((\tilde{N}\hat{N})_{\psi}=\tilde{N}_{\psi}\hat{N}_{\psi}\))._ **Proposition 4.2** (Enguehard).: _Let \((\mathbf{\tilde{L}},\tilde{\lambda})\) be a \(d\)-cuspidal pair of \((\mathbf{\tilde{G}},F)\) covering \((\mathbf{L},\lambda)\)._ 1. _There exists_ \(\tilde{\chi}\in\operatorname{Irr}(\tilde{G})\) _with_ \(\langle R^{\mathbf{\tilde{G}}}_{\mathbf{\tilde{L}}}(\tilde{\lambda}),\tilde{\chi}\rangle=\pm 1\) _and the degree of_ \(\tilde{\chi}\) _is different from the degrees of all other irreducible constituents of_ \(R^{\mathbf{\tilde{G}}}_{\mathbf{\tilde{L}}}(\tilde{\lambda})\)_._ 2. _We have_ \(\operatorname{N}_{G}(\mathbf{L},\operatorname{Res}^{\tilde{L}}_{L}(\tilde{\lambda}))=\operatorname{N}_{G}(\mathbf{L},\lambda)\)_._ Proof.: Part (a) is [1, 2.3.1] while part (b) is [1, Proposition 2.3.2]. **Corollary 4.3**.: _Let \((\mathbf{L},\lambda)\) be as in the previous proposition. Then we have:_ 1. \((N\tilde{L})_{\lambda}=N_{\lambda}\tilde{L}_{\lambda}\)_._ 2. _There exists some_ \(\tilde{L}\)_-conjugate_ \(\lambda_{0}\) _of_ \(\lambda\) _such that_ \[(\hat{N}\tilde{L})_{\lambda_{0}}=\hat{N}_{\lambda_{0}}\tilde{L}_{\lambda_{0}}.\] 3. _The stabilizers of_ \(b_{G}(\mathbf{L},\lambda)\) _and of_ \(\operatorname{Ind}^{N}_{L}(\lambda)\) _in_ \(\operatorname{N}_{\tilde{G}\mathcal{B}}(\mathbf{L})=\hat{N}\tilde{L}\) _coincide. In particular, the number of_ \(\tilde{G}\)_-conjugate blocks to_ \(b_{G}(\mathbf{L},\lambda)\) _is_ \(|\tilde{L}:\tilde{L}_{\lambda}|\)_._ Proof.: Part (a) is a reformulation of Proposition 4.2(b). For part (b), let \(\tilde{\chi}\in\operatorname{Irr}(\tilde{G})\) be as in Proposition 4.2(a). By the introduction of [1] there exists a character \(\chi\in\operatorname{Irr}(G\mid\tilde{\chi})\) which satisfies \(A^{\prime}(\infty)\). In particular, there exists some \(\tilde{L}\)-conjugate \(\lambda_{0}\) of \(\lambda\) such that \(\langle R^{\mathbf{G}}_{\mathbf{L}}(\lambda_{0}),\chi\rangle=\pm 1\), see the proof of [1, Prop. 2.3.2]. Now assume that \({}^{\tilde{l}\hat{n}}\lambda_{0}=\lambda_{0}\) for \(\tilde{l}\in\tilde{L}\) and \(\hat{n}\in\hat{N}\). Then \({}^{\hat{n}}\tilde{\lambda}=\tilde{\lambda}\hat{z}\) for some \(z\in\operatorname{Z}(\tilde{G}^{*})\).
From this it follows that \(\tilde{\chi}^{\hat{n}}\hat{z}^{-1}\) and \(\tilde{\chi}\) are both constituents of the same degree of \(R^{\mathbf{\tilde{G}}}_{\mathbf{\tilde{L}}}(\tilde{\lambda})\). Hence, \(\tilde{\chi}^{\hat{n}}\hat{z}^{-1}=\tilde{\chi}\) and so \(\chi^{\hat{n}}=\chi\) by the \(A^{\prime}(\infty)\) condition. This means that the \(d\)-Harish-Chandra series of \((\mathbf{L},\lambda)\) and \((\mathbf{L},{}^{\hat{n}}\lambda)\) have a non-trivial intersection, hence these pairs must be \(N\)-conjugate by [13, Theorem A]. In particular, \(\tilde{l}\) stabilizes the \(N\)-orbit of \(\lambda_{0}\) and thus \(\tilde{l}\in\tilde{L}_{\lambda_{0}}\) by part (a). Part (c) follows similarly from [13, Theorem A]. We expect that in many cases the properties of the \(d\)-cuspidal pair \((\mathbf{L},\lambda)\) (or more precisely the characters of \([\mathbf{L},\mathbf{L}]^{F}\) lying below \(\lambda\)) determine the properties of the block of \(N\) covering it. In view of the similarity to Definition 4.1, we make the following definition: **Definition 4.4**.: _We say that a \(d\)-cuspidal pair \((\mathbf{L},\lambda)\) of \(G\) satisfies \(A^{\prime}(\infty)\), if \((\hat{N}\tilde{L})_{\lambda}=\hat{N}_{\lambda}\tilde{L}_{\lambda}\)._ Define \(\hat{W}:=\operatorname{N}_{G\mathcal{B}}(\mathbf{L})/L=\hat{N}/L\) and \(W(\theta):=\operatorname{N}_{G}(\mathbf{L},\theta)/L\) for \(\theta\in\operatorname{Irr}(H)\) with \(L\leq H\leq\tilde{L}\). The definition of extension maps used in the following can be found for instance in [11, Section 2]. We will work with the following assumption. **Assumption 4.5**.: _We suppose that \((\mathbf{L},\lambda)\) is a \(d\)-cuspidal pair of \(G\) which satisfies \(A^{\prime}(\infty)\). Moreover, we assume the following:_ 1. _There exists an_ \(\hat{N}_{\lambda}\)_-equivariant extension map_ \(\Lambda\) _from_ \(L\) _to_ \(N\) _for_ \(\operatorname{Irr}(L,b_{L}(\lambda))\)_._ 2. _For_ \(\theta\in\operatorname{Irr}(L,b_{L}(\lambda))\)_,_ \(\widetilde{\theta}\in\operatorname{Irr}(\tilde{L}_{\theta}\mid\theta)\) _and_ \(\eta_{0}\in\operatorname{Irr}(W(\widetilde{\theta}))\) _there exists some_ \(\hat{W}(\theta)_{\eta_{0}}\)_-stable_ \(\eta\in\operatorname{Irr}(W(\theta)\mid\eta_{0})\) _with_ \(\langle\operatorname{Res}_{W(\widetilde{\theta})}^{W(\theta)}(\eta),\eta_{0}\rangle=1\)_._ **Remark 4.6**.: 1. _Note that condition (ii) is always satisfied when_ \(W(\theta)=W(\tilde{\theta})\)_._ 2. _The multiplicity one condition in (ii) is always satisfied whenever_ \(\mathbf{G}\) _is not of type_ \(D_{2n}\) _(proof of [10, Theorem 4.2]) or when_ \(d=1\) _by [11, Corollary 13.13]._ 3. _Assumption 4.5 was checked in [11] when_ \(d=1\) _with_ \(G=D_{n}(q)\)_._ ### Constructing an AM-bijection Recall from [1, Proposition 13.16,13.19] that \(\mathbf{L}=\operatorname{C}_{\mathbf{G}}(\operatorname{Z}(\mathbf{L})_{\ell}^{F})\). In particular, there exists by [25, Theorem 9.19,9.20] a unique block \(B\) of \(\operatorname{N}_{G}(\mathbf{L})\) covering \(b_{L}(\lambda)\). The following proposition can be seen as a blockwise version of [11, Proposition 1.12]. **Proposition 4.7**.: _Assume that the block \(b_{G}(\mathbf{L},\lambda)\) is a quasi-isolated block of \(G\). Moreover, suppose that the block \(b_{L}(\lambda)\) satisfies Assumption 4.5.
Then every character of \(B\) has a \(\tilde{L}_{b}\)-conjugate which satisfies \(A^{\prime}(\infty)\)._ Proof.: By Lemma 3.5 we have \(\operatorname{Irr}(L,b_{L}(\lambda))=\{\lambda\otimes\hat{t}\mid t\in\operatorname{Z}(L^{*})_{\ell}\}\). The character \(\lambda\) is the canonical character of the block \(b_{L}(\lambda)\), see [25, Theorem 9.12]. Together with Assumption 4.5 this implies \[(\hat{N}\tilde{L})_{\theta}\leq(\hat{N}\tilde{L})_{\lambda}=\hat{N}_{\lambda}\tilde{L}_{\lambda}\] for \(\theta:=\lambda\otimes\hat{t}\). Since \(\tilde{L}_{\lambda}=\tilde{L}_{\theta}\), we deduce that \((\hat{N}\tilde{L})_{\theta}=\hat{N}_{\theta}\tilde{L}_{\theta}\). The extension map \(\Lambda\) yields by Clifford theory a parametrization \[\operatorname{Irr}(W(\theta))\to\operatorname{Irr}(N\mid\theta),\eta\mapsto\operatorname{Ind}_{N_{\theta}}^{N}(\eta\Lambda(\theta)).\] For such a character \(\psi:=\operatorname{Ind}_{N_{\theta}}^{N}(\eta\Lambda(\theta))\) let \(\eta_{0}\) be a constituent of \(\operatorname{Res}_{W(\widetilde{\theta})}^{W(\theta)}(\eta)\). For \(t\in\tilde{L}_{\theta}\), there exists a unique character \(\nu_{t}\in\operatorname{Irr}(W(\theta)/W(\tilde{\theta}))\) such that \({}^{t}\Lambda(\theta)=\Lambda(\theta)\nu_{t}\). It follows that the map \[\tilde{L}_{\theta}/\tilde{L}_{\Lambda(\theta)}\to\operatorname{Irr}(W(\theta)/W(\tilde{\theta})),t\mapsto\nu_{t},\] is bijective, see the proof of [10, Theorem 4.3]. Hence, by replacing \(\psi\) by a \(\tilde{L}_{\theta}=\tilde{L}_{\lambda}\)-conjugate, we can assume that \(\eta\) satisfies Assumption 4.5(ii). Assume now that some \(\tilde{l}\hat{n}^{-1}\) with \(\tilde{l}\in\tilde{L}\) and \(\hat{n}\in\hat{N}\) stabilizes \(\psi\). In particular, \(\tilde{l}\hat{n}^{-1}\) stabilizes the \(N\)-orbit of \(\theta\). We deduce that \(\tilde{l}\in\tilde{L}_{\theta}\) and there exists some \(n\in N\) such that \(\hat{n}^{\prime}:=\hat{n}n\in\hat{N}_{\theta}\). We deduce that \[\operatorname{Ind}_{N_{\theta}}^{N}(\eta^{\hat{n}^{\prime}}\Lambda(\theta))=\psi^{\hat{n}^{\prime}}=\psi^{\tilde{l}}=\operatorname{Ind}_{N_{\theta}}^{N}(\eta\nu\Lambda(\theta)),\] for some linear character \(\nu\in\operatorname{Irr}(W(\theta)/W(\tilde{\theta}))\). From this, we deduce that \(\eta^{\hat{n}^{\prime}}=\eta\nu\) and as \(\eta\in\operatorname{Irr}(W(\theta))\) is \(W(\theta)\hat{W}(\theta)_{\eta_{0}}\)-stable, we have \(\nu=1\). In particular, \(\psi\in\operatorname{Irr}(N,B)\) satisfies \(A^{\prime}(\infty)\). Suppose that there exists an \(\hat{N}_{\lambda}\)-equivariant extension map \(\Lambda\) from \(L\) to \(N\) for \(\operatorname{Irr}(L,b_{L}(\lambda))\) as in Assumption 4.5. This then yields an \(\hat{N}_{\lambda}\)-equivariant extension map \(\tilde{\Lambda}\) for \(\tilde{L}\lhd\tilde{N}\) for \(\operatorname{Irr}(\tilde{L},b_{L}(\lambda))\), the set of characters of \(\tilde{L}\) lying over a character of \(\operatorname{Irr}(L,b_{L}(\lambda))\), which is compatible with the action of linear characters of \(\operatorname{Irr}(\tilde{N}/N)\), see the proof of [10, Theorem 4.2]. **Lemma 4.8**.: _Suppose that Assumption 4.5 holds and let \(B\) be as before. Then every character of \(\operatorname{Irr}(N,B)\) extends to its inertia group in \(\tilde{N}\)._ Proof.: Let \(\psi\in\operatorname{Irr}(N)\) and \(\theta\in\operatorname{Irr}(L\mid\psi)\). We can assume that \(\theta\) is \(\tilde{L}\)-stable since otherwise \(\tilde{N}_{\psi}/N\operatorname{Z}(\tilde{G})\) is cyclic and the claim trivially holds.
We can write \(\tilde{\psi}\in\operatorname{Irr}(\tilde{N}\mid\psi)\) as \(\tilde{\psi}=\operatorname{Ind}_{\tilde{N}_{\theta}}^{\tilde{N}}(\tilde{ \Lambda}(\tilde{\theta})\tilde{\eta})\) for \(\tilde{\theta}\in\operatorname{Irr}(\tilde{L}\mid\theta)\) and \(\tilde{\eta}\in\operatorname{Irr}(W_{\tilde{G}}(\tilde{\mathbf{L}},\tilde{ \theta}))\). By Mackey's formula, \[\operatorname{Res}_{N}^{\tilde{N}}(\tilde{\psi})=\operatorname{Ind}_{N_{ \tilde{\theta}}}^{N}\operatorname{Res}_{N_{\tilde{\theta}}}^{\tilde{N}_{ \tilde{\theta}}}(\tilde{\eta}\tilde{\Lambda}(\tilde{\theta})).\] Now, \(\operatorname{Ind}_{W(\tilde{\lambda})}^{W(\lambda)}(\tilde{\eta})\) is multiplicity free as well by Assumption 4.5(ii) and Clifford theory. By the construction of the extension map \(\tilde{\Lambda}\) in [10, Theorem 4.2] we observe that \[\operatorname{Ind}_{N_{\tilde{\theta}}}^{N_{\theta}}(\operatorname{Res}_{N_{ \tilde{\theta}}}^{\tilde{N}_{\tilde{\theta}}}(\tilde{\eta}\tilde{\Lambda}( \tilde{\theta})))=\sum_{\eta\in\operatorname{Irr}(W(\theta)|\tilde{\eta})} \eta\Lambda(\theta).\] By Clifford correspondence, \(\operatorname{Res}_{N}^{\tilde{N}}(\tilde{\psi})\) is therefore multiplicity free. Clifford theory now gives the claim. Note that since \(\ell\nmid|\tilde{G}/G\operatorname{Z}(\tilde{G})|\) every character of \(\tilde{G}\) covering a character in \(\operatorname{Irr}_{0}(G,b)\) is a height zero character of a block lying over \(b\). We denote by \(\operatorname{Irr}_{0}(\tilde{G},b)\) the set of these characters. Similarly, \(\operatorname{Irr}_{0}(\tilde{N},B)\) denotes the set of characters of \(\tilde{N}\) covering a character of \(\operatorname{Irr}_{0}(N,B)\). Recall the notion of AM-good relative to the Cabanes subgroup of its defect group from [12, Remark 9.6]. Applying the criterion from [1, Theorem 2.4] yields the following: **Theorem 4.9**.: _Let \(\mathbf{G}\) be of type \(B_{n}\) or \(C_{n}\), \(p\) an odd prime and \(\ell\geq 5\). Moreover, let \(b=b_{G}(\mathbf{L},\lambda)\) be a quasi-isolated block and \(B\) as above. Suppose that Assumption 4.5 is satisfied. Then there exists an automorphism equivariant bijection \(f:\operatorname{Irr}_{0}(G,b)\to\operatorname{Irr}_{0}(N,B)\). In particular, the block \(b\) is AM-good relative ot the Cabanes subgroup of its defect group._ Proof.: By Clifford theory, we have for \(\tilde{\theta}\in\operatorname{Irr}(\tilde{L},b_{L}(\lambda))\) a bijection \[\operatorname{Irr}(W_{\tilde{G}}(\tilde{\mathbf{L}},\tilde{\theta}))\to \operatorname{Irr}(\tilde{N}\mid\tilde{\theta}),\,\tilde{\eta}\mapsto \operatorname{Ind}_{\tilde{N}_{\tilde{\theta}}}^{\tilde{N}}(\tilde{\eta} \Lambda(\tilde{\theta})),\] where \(\tilde{\Lambda}\) is the extension map defined before Lemma 4.8. We compose this with the bijection \[\operatorname{Irr}(W_{\tilde{G}}(\tilde{\mathbf{L}},\tilde{\theta}))\to \mathcal{E}(\mathbf{G}^{F},(\tilde{\mathbf{L}},\tilde{\theta})),\,\tilde{ \eta}\mapsto R_{\tilde{\mathbf{L}}}^{\tilde{\mathbf{G}}}(\tilde{\theta})_{ \tilde{\eta}}\] from Proposition 2.3. Note that by the properties of both bijections, the tuples \((\tilde{\theta},\tilde{\eta})\) and \((\tilde{\theta}^{\prime},\tilde{\eta}^{\prime})\) give in both cases the same character if and only if they are \(N_{\lambda}\)-conjugate. Hence, the composition yields an \((\operatorname{N}_{\tilde{G}\mathcal{B}}(Q)\ltimes\operatorname{Irr}(\tilde{ G}/G))_{b}\)-equivariant bijection \(\tilde{f}:\operatorname{Irr}_{0}(\tilde{G},b)\to\operatorname{Irr}_{0}(\tilde{ N},B)\). 
According to Proposition 4.7 every character in \(B\) satisfies \(A^{\prime}(\infty)\). Moreover by the introduction of [11] every character of \(b\) satisfies \(A^{\prime}(\infty)\). The claim follows therefore from the proof of [1, Theorem 2.4]. ## 5. Isolated blocks for groups of type \(B\) Assume now that \(\mathbf{G}\) is simple, simply connected of type \(B\) defined over a field of odd characteristic \(p\). By the reduction in [12] it is enough for our purposes to consider isolated blocks. **Lemma 5.1**.: _For a given isolated element \(s\in G^{*}\) and \((\mathbf{L},\lambda)\) a \(d\)-cuspidal pair of \(\mathbf{G}\) with \(\lambda\in\mathcal{E}(L,s)\) the structure of the Levi subgroup is one of the following:_ Proof.: Let \((\mathbf{L},\lambda)\) be a \(d\)-cuspidal pair as in the statement of the lemma. According to [15, Remark 2.2], this \(d\)-cuspidal pair corresponds via Jordan decomposition to the \(\mathrm{C}_{G^{*}}(s)\)-orbit of a \(d\)-cuspidal unipotent pair \((\mathrm{C}_{\mathbf{L}^{*}}^{\circ}(s),\lambda_{s})\) of \(\mathrm{C}_{\mathbf{G}^{*}}^{\circ}(s)\) and \(\mathbf{L}^{*}=\mathrm{C}_{\mathbf{G}^{*}}(\mathrm{Z}^{\circ}(\mathrm{C}_{ \mathbf{L}^{*}}^{\circ}(s))_{\Phi_{d}})\). We therefore first determine the possible rational structures of the \(d\)-split Levi subgroups \(\mathrm{C}_{\mathbf{L}^{*}}^{\circ}(s)\) of \(\mathrm{C}_{\mathbf{G}^{*}}^{\circ}(s)\). If \(\mathrm{C}_{G^{*}}^{\circ}(s)\) is of type \(C_{k}(q)C_{n-k}(q)\) then the structure indicated in Table 1 follows directly from [14, Example 3.5.29]. Let's therefore consider the case when \(\mathrm{C}_{G^{*}}^{\circ}(s)\) is of type of \(C_{n/2}(q^{2})\). Recall that \(d\) is the order of \(q\) modulo \(\ell\) and \(d_{0}\) is the order of \(q^{2}\) modulo \(\ell\). We now apply [14, Example 3.5.29] together with Remark 3.2. We conclude that if \(d_{0}\) is odd, the various \(d\)-split Levi subgroups have the form \[C_{m/2}(q^{2})((q^{2})^{d_{0}}-1)^{n-m/2d_{0}}\] for some integer \(m\) while if \(d_{0}\) is even the Levi subgroups have rational form \[C_{m/2}(q^{2})((q^{2})^{d_{0}/2}+1)^{n-m/2d_{0}}=C_{m/2}(q^{2})(q^{d_{0}}+1)^{n -m/2d_{0}}\] for some \(m\). We now compute the structure of \((\mathbf{L},F)\) and we concentrate on case 2 since the computations in the other cases are similar. In [14, Example 3.5.15] the structure of a \(d\)-split Levi subgroup is described. In particular \(L\) has rational type \[A_{n_{1}}(\varepsilon q^{d_{0}})\dots A_{n_{x}}(\varepsilon q^{d_{0}})B_{y}(q )(q^{d_{0}}-\varepsilon)^{x}\] for some integers \(x,y,n_{1},\dots,n_{x}\) which satisfy \(d_{0}x+y+\sum_{i=1}^{x}d_{0}n_{i}=n\). Since \(\mathrm{Z}^{\circ}(L)\) has order \((q^{d}-1)^{x}\) and \(\mathrm{Z}^{\circ}(\mathrm{C}_{\mathbf{L}^{*}}^{\circ}(s))_{\Phi_{d}}=\mathrm{ Z}^{\circ}(\mathbf{L}^{*})_{\Phi_{d}}\), we must have \(x=\frac{n-m}{2d_{0}}\). The natural inclusion \([\mathbf{L},\mathbf{L}]\hookrightarrow\mathbf{L}\) induces a surjective dual map \(\mathbf{L}^{*}\twoheadrightarrow[\mathbf{L},\mathbf{L}]^{*}\) and we denote by \(s_{0}\) the image of \(s\) under this map. Then any character \(\lambda_{0}\in\mathrm{Irr}(L_{0}\mid\lambda)\) where \(L_{0}:=[\mathbf{L},\mathbf{L}]^{F}\) lies in the Lusztig series associated to \(s_{0}\) and is \(d\)-cuspidal. We now consider the projection of \(\lambda_{0}\) to the various rational components of \(L_{0}\). 
In particular, the projection of \(s_{0}\) onto the simple component of \(([\mathbf{L},\mathbf{L}]^{*})^{F}\) of type \(C_{y}(q)\) is an involution with centralizer containing \(C_{m/2}(q^{2})\). Hence, we must have \(y=m\), see [11, Table 4.3.1]. Note that a \(d\)-cuspidal character of \(\mathrm{SL}_{n_{i}+1}(\varepsilon q^{d_{0}})\) with respect to its \(\mathbb{F}_{q}\)-structure corresponds to a \(1\)-cuspidal character (resp. \(2\)-cuspidal if \(\varepsilon=-1\)) of \(\mathrm{SL}_{n_{i}+1}(\varepsilon q^{d_{0}})\) with respect to its \(\mathbb{F}_{q^{d_{0}}}\)-structure, see Remark 3.2. Now, the connected centralizer of the semisimple element associated to such a cuspidal (resp. \(2\)-cuspidal) character of \(\mathrm{SL}_{n_{i}+1}(\varepsilon q^{d_{0}})\) is a Coxeter torus, \begin{table} \begin{tabular}{|c|c|c c c|} \hline No. & \(\mathrm{C}_{\mathbf{G}^{*}}^{\circ}(s)^{F}\) & \(d_{0}\) & \(\mathbf{L}^{F}\) & \(\mathrm{C}_{\mathbf{L}^{*}}^{\circ}(s)^{F}\) \\ \hline \hline 1 & \(C_{k}(q)C_{n-k}(q)\) & & \(B_{m}(q)(q^{d_{0}}+\varepsilon)^{a}\) & \(C_{l}(q)C_{m-l}(q)(q^{d_{0}}+\varepsilon)^{a}\) \\ \hline 2 & \(C_{n/2}(q^{2})\) & \(d_{0}\) odd & \(B_{m}(q)A_{1}(q^{d_{0}})^{a/2}(q^{d_{0}}+\varepsilon)^{a/2}\) & \(C_{m/2}(q^{2})(q^{2d_{0}}-1)^{a/2}\) \\ \hline 3 & \(C_{n/2}(q^{2})\) & \(d_{0}\) even & \(B_{m}(q)(q^{d_{0}}+1)^{a}\) & \(C_{m/2}(q^{2})(q^{d_{0}}+1)^{a}\) \\ \hline \end{tabular} Here \(l,m\) are integers, \(a:=(n-m)/d_{0}\) and \(\varepsilon:=(-1)^{d}\). \end{table} Table 1. Isolated \(\ell\)-blocks in \(B_{n}(q)\) see the proof of [1, Proposition 2.7]. Moreover, since this semisimple element has order at most \(2\), by [1, Table 4.3.1] it follows that \(n_{i}\leq 1\). We can now compare this with the polynomial order of \(\mathrm{C}_{\mathbf{L}^{*}}(s)\) and it follows that \(n_{i}=1\) for all \(i\). In particular, \(L\) has the structure as described in Table 1. For \(d=1\), the results of the previous lemma were already obtained in [13, Table 4.1]. We observe that in cases \(1\) and \(3\), the appearing Levi subgroups were already considered in Cabanes-Spath [11], see Remark 9.11. In particular, we can in the following focus on case \(2\). ## 6. The local condition for groups of type \(B\) ### Root system and Weyl group Let \(\mathbf{T}\) be an \(F\)-stable maximal torus and \(\mathbf{B}\) be an \(F\)-stable Borel subgroup of \(\mathbf{G}\) with \(\mathbf{T}\subseteq\mathbf{B}\). Set \(\Phi\subseteq\mathrm{Hom}(\mathbf{T},\mathbb{F}^{\times})\) and \(\Phi^{+}\supseteq\Delta\) to be the corresponding set of roots, positive and simple roots associated with the choice of \(\mathbf{B}\). The Chevalley generators \(\mathbf{x}_{\alpha}(t)\), \(\mathbf{n}_{\alpha}(t^{\prime})\) and \(\mathbf{h}_{\alpha}(t^{\prime})\), for \(\alpha\in\Phi\), \(t,t^{\prime}\in\mathbb{F}\) with \(t^{\prime}\neq 0\), together with the Chevalley relations describe the group structure of \(\mathbf{G}\), see [1, Theorem 1.12.1]. In particular, \(\mathbf{n}_{\alpha}(t)=\mathbf{x}_{\alpha}(t)\mathbf{x}_{-\alpha}(-t^{-1}) \mathbf{x}_{\alpha}(t)\in\mathrm{N}_{\mathbf{G}}(\mathbf{T})\) and \(\mathbf{h}_{\alpha}(t)=\mathbf{n}_{\alpha}(t)\mathbf{n}_{\alpha}(1)^{-1}\in \mathbf{T}\) for \(t\in\mathbb{F}^{\times}\). We let \(F_{q}:\mathbf{G}\to\mathbf{G}\) be the Frobenius endomorphism with \(F_{q}(\mathbf{x}_{\alpha}(t))=\mathbf{x}_{\alpha}(t^{q})\) for \(\alpha\in\Delta\). 
Recall Tits' extended Weyl group \(\mathbf{V}:=\langle\mathbf{n}_{\alpha}(1)\mid\alpha\in\Phi\rangle\leq\mathrm{ N}_{\mathbf{G}}(\mathbf{T})\) and \(\mathbf{H}:=\mathbf{T}\cap\mathbf{V}\). The root system \(\Phi\) of \(\mathbf{G}\) and its system of simple roots \(\Delta=\{\alpha_{1},\dots,\alpha_{n}\}\) are given as follows (see for instance [1, 1.8.8] with a slightly different convention). Letting \((e_{1},\dots,e_{n})\) be the standard orthonormal basis of \(\mathbb{R}^{n}\) with its euclidean scalar product, one takes \[\alpha_{1}=e_{1},\alpha_{2}=e_{2}-e_{1},\dots,\alpha_{n}=e_{n}-e_{n-1}.\] We identify the Weyl group \(\mathbf{W}\) of \(\mathbf{G}\), a Coxeter group of type \(B_{n}\) with the subgroup of bijections \(\sigma\) on \(\{\pm 1,\dots\pm n\}\) such that \(\sigma(-i)=-\sigma(i)\) for all \(1\leq i\leq n\). This group is denoted by \(\mathfrak{S}_{\pm n}\). Via the natural identification of \(S_{n}\) with a quotient of \(\mathfrak{S}_{\pm n}\) and the identification of \(\mathfrak{S}_{\pm n}\) with \(\mathbf{W}\), the map \(\rho:\mathrm{N}_{\mathbf{G}}(\mathbf{T})\to\mathbf{W}\) induces an epimorphism \(\overline{\rho}\colon\mathbf{V}\longrightarrow\mathfrak{S}_{n}\). (see [1, Definition 4.1]). In \(\mathfrak{S}_{\pm n}\) the set \[\big{\{}\sigma\in\mathfrak{S}_{\pm n}\ \big{|}|\{1,\dots,n\}\cap\sigma^{-1}(\{-1, \dots,-n\})|\ \text{is even}\ \big{\}}\] forms a normal subgroup with index \(2\), naturally isomorphic to a Coxeter group of type \(D_{n}\). We denote by \(\mathbf{W}_{D}\) the associated subgroup of \(\mathbf{W}\) and \(\mathbf{N}_{c}:=\rho^{-1}(\mathbf{W}_{D})\). ### Sylow \(d\)-tori Take \(n^{\prime}\leq n\) to be maximal such that \(d_{0}\mid n^{\prime}\). Set \(a:=\frac{n^{\prime}}{d_{0}}\), \(v_{0}:=\mathbf{n}_{\alpha_{1}}(1)\cdots\mathbf{n}_{\alpha_{n^{\prime}}}(1)\in \mathbf{V}\) and \(v:=v_{0}^{\frac{2n^{\prime}}{d}}\). Additionally set \[w_{0}:=\rho(v_{0})=(1,2,\dots,n^{\prime},-1,\dots,-n^{\prime})\ \text{and}\ w=(w_{0})^{\frac{2n^{\prime}}{d}}.\] Finally, set \(v^{\prime}:=(v_{0})^{a}\) and \[w^{\prime}:=\rho(v^{\prime})=(w_{0})^{a}=\prod_{i=1}^{a}(i,a+i,2a+i,\dots,(d_{ 0}-1)a+i,-1,\dots).\] Note that \(\mathrm{C}_{\mathbf{W}}(w)=\mathrm{C}_{\mathbf{W}}(w^{\prime})\). For our computations it is sometimes convenient to work with \(w^{\prime}\) instead of \(w\). According to [1, Remark 3.2], the Sylow \(d\)-torus of \((\mathbf{T},vF_{q})\) is one of \((\mathbf{G},vF_{q})\). In particular, any \(d\)-split Levi subgroup of \((\mathbf{G},vF_{q})\) is up to \(\mathbf{G}^{vF_{q}}\)-conjugation the centralizer of an \(vF_{q}\)-stable subtorus of \(\mathbf{T}\). ### Structure of Levi subgroups Recall that we are only interested in \(d\)-split Levi subgroups whose fixed point subgroup under \(vF_{q}\) is of type \(A_{1}(q^{d_{0}})^{\frac{n-m}{2d_{0}}}B_{m}(q)\) for some \(m\) with \(2d_{0}\mid(n-m)\). Therefore, we first replace \(v\) by an alternate twist which is chosen to be "nice" with respect to the considered Levi subgroup. For this purpose set \[l:=n-m,\,t_{l}:=\frac{l}{2d_{0}}\text{ and }a_{l}:=2t_{l}.\] The following constructions will depend on the parameter \(l\). There is some \(h\in V\) such that \[((w^{\prime})^{\rho(h)})^{-1}\left(\prod_{i=1}^{a_{l}}(i,a_{l}+i,\dots,(d_{0}- 1)a_{l}+i,-i,\dots)\right)\in\mathfrak{S}_{\pm}(\{l+1,\dots,n\})\] and the Sylow \(d\)-torus of \((\mathbf{T},v^{h}F_{q})\) is one of \((\mathbf{G},v^{h}F_{q})\). In particular, \(\rho(\operatorname{C}_{\mathbf{V}}(v^{h}))=\operatorname{C}_{\mathbf{W}}(w^{ \rho(h)})\). 
Let \(\mathbf{L}\) be a \(d\)-split Levi subgroup for \((\mathbf{G},v^{h}F_{q})\) whose fixed point subgroup is of type \(A_{1}(q^{d_{0}})^{t_{l}}B_{m}(q)\). Then the root system \(\Phi_{\mathbf{L}}\) of \(\mathbf{L}\) must be \(w^{\rho(h)}\)-stable. Hence after suitable conjugation in \(\mathbf{G}^{v^{h}F_{q}}\), and by [1, Remark 3.12], it can be assumed that \[\Phi_{\mathbf{L}}=\big{\{}\pm e_{i},\pm e_{i}\pm e_{j}\mid l+1\leq i,j\leq n \big{\}}\sqcup\bigsqcup_{i=1}^{t_{l}}\big{\{}\pm(e_{2i}-e_{2i-1})\big{\}}.\] We denote \(\mathbf{V}_{l}:=\langle\mathbf{n}_{\alpha}(1)\mid\alpha\in\Phi\cap\langle e_{ i}\mid i\leq l+1\rangle\rangle\). One sees that \(\mathbf{V}_{l}\) is the extended Weyl group associated to the root system \(\Phi_{l}:=\Phi\cap\langle e_{i}\mid i\leq l+1\rangle\rangle\) of type \(B_{l}\). Set \(v_{l,0}:=\mathbf{n}_{\alpha_{1}}(1)\cdots\mathbf{n}_{\alpha_{l}}(1)\in \mathbf{V}_{l}\), whose image is \(w_{l,0}=(1,2,\dots,l,-1,\dots)\). Additionally, set \(v_{l}^{\prime}:=(v_{l,0})^{a_{l}}\), \(v_{l}:=(v_{l,0})^{\frac{2l}{d}}\), \(w_{l}^{\prime}:=\rho(v_{l}^{\prime})\) and \(w_{l}:=\rho(v_{l})\). In particular, \(w_{l}^{\prime}=\prod_{i=1}^{a_{l}}(i,a_{l}+i,\dots,(d_{0}-1)a_{l}+i,-i\dots)\). Then \(v_{l}^{-1}v^{h}\in\mathbf{L}\) and thus the Levi subgroups \(\mathbf{L}^{v^{h}F_{q}}\) are \(\mathbf{L}^{v_{l}F_{q}}\) isomorphic by [1, Lemma 4.2]. We will therefore in the following consider the twist \(F:=v_{l}F_{q}\). ### Blocks and diagonal automorphism We can now use the explicit description of the root system of the Levi subgroup \(\mathbf{L}\) to deduce some block-theoretic properties. **Lemma 6.1**.: _Suppose that we are in case 2 of Table 1. Then the blocks of \(\mathcal{E}_{\ell}(G,s)\) are not \(\check{G}\)-stable._ Proof.: Keep the notation of the proof of Lemma 5.1. By Corollary 4.3 it suffices to show that \((\mathbf{L},\lambda)\) is not \(\tilde{L}\)-stable. We first claim that \(\operatorname{Z}(\mathbf{L})\) is disconnected. For this observe that it suffices to prove similar to the proof of [1, Lemma 3.3] that \(X(\mathbf{T})/\mathbb{Z}\Phi_{\mathbf{L}}\) has non-trivial \(p^{\prime}\)-torsion. We observe that \[\mathbb{Z}\Phi_{\mathbf{L}}=\langle\alpha_{1},\dots,\alpha_{m},\alpha_{m+2}, \dots,\alpha_{n-2},\alpha_{n}\rangle=\langle e_{1},\dots,e_{m},e_{m+1}-e_{m+2},\dots,e_{n-3}-e_{n-2},e_{n-1}-e_{n}\rangle.\] By the proof of [1, Lemma 3.3] it follows that \(\frac{1}{2}(\sum_{i=1}^{m}e_{i}+\sum_{i=1}^{(n-m)/2}e_{m+2i-1}-e_{m+2i})\in X (\mathbf{T})\setminus\mathbb{Z}\Phi\) but \(\sum_{i=1}^{m}e_{i}+\sum_{i=1}^{(n-m)/2}e_{m+2i-1}-e_{m+2i}\in\mathbb{Z}\Phi_ {\mathbf{L}}\). Hence, \(\operatorname{Z}(\mathbf{L})\) is disconnected. Since \(\tilde{L}/L_{0}\) is abelian and \(\operatorname{Z}(\mathbf{L})\) is disconnected it suffices to show that any \(\lambda_{0}\in\operatorname{Irr}(L_{0}\mid\lambda)\) is not stable under any diagonal automorphism of \(L_{0}\). Now, \(L_{0}^{*}\cong B_{m}(q)\times\operatorname{PGL}_{2}(q^{d_{0}})^{a_{l}/2}\) while \(\operatorname{C}_{L_{0}^{*}}(s_{0})\cong C_{m/2}(q^{2})\times C_{q^{d_{0}}- \varepsilon}^{a_{l}/2}\). By [1, Table 4.3.1] this proves that the projection of \(s_{0}\) onto each factor of \(L_{0}^{*}\) is an involution with disconnected centralizer. By [16, Proposition 4.4] and Jordan decomposition [1, Theorem 2.6.22] we therefore deduce that the character \(\lambda_{0}\) is not invariant under any diagonal automorphism. 
**Remark 6.2**.: _Suppose that \((\mathbf{L},\lambda)\) is a \(d\)-cuspidal pair corresponding to a block in case 2 of Table 1. Then \(\tilde{L}_{\lambda}=L\operatorname{Z}(\tilde{G})\) by Lemma 6.1. By the proof of Proposition 4.7, the quotient \(\tilde{L}_{\theta}/\tilde{L}_{\Lambda(\theta)}\) is in bijection with \(\operatorname{Irr}(W(\theta)/W(\tilde{\theta}))\) and \(\tilde{L}_{\theta}\leq\tilde{L}_{\lambda}\) for every \(\theta\in\operatorname{Irr}(L,b_{L}(\lambda))\). We conclude that \(W(\theta)=W(\tilde{\theta})\) for every \(\theta\in\operatorname{Irr}(L,b_{L}(\lambda))\)._ ### The structure of \(N/l\) Recall that we work with the Frobenius endomorphism \(F:=v_{l}F_{q}\) and the \(d\)-split Levi subgroup \(\mathbf{L}\) of \((\mathbf{G},F)\) with \(w_{l}\)-stable root system \(\Phi_{\mathbf{L}}\) as given above. Recall that \[\mathbf{W}_{\mathbf{G}}(\mathbf{L}):=\operatorname{N}_{\mathbf{G}}(\mathbf{L })/\mathbf{L}\cong\operatorname{N}_{W_{\mathbf{G}}}(\mathbf{W_{L}})/\mathbf{W _{L}}\text{ and }\operatorname{N}_{\mathbf{G}}(\mathbf{L})^{v_{l}F_{q}}/ \mathbf{L}^{v_{l}F_{q}}\cong\operatorname{C}_{\mathbf{W}_{\mathbf{G}}( \mathbf{L})}(w_{l}\mathbf{W_{L}}).\] For \(1\leq i\leq t_{l}\) set \[w_{l,i}^{\prime}:=(2i-1,a_{l}+2i-1,\dots,(d_{0}-1)a_{l}+2i-1,-(2i+1),\dots)(2i,a_{l}+2i,\dots,(d_{0}-1)a_{l}+2i,-(2i),\dots)\] so that \(w_{l}^{\prime}=\prod_{i=1}^{t_{l}}w_{l,i}^{\prime}\). Additionally, for \(1\leq i\leq t_{l}-1\), set \[\tau_{i}:=\prod_{k=0}^{d_{0}-1}\big{(}(2i-1,2i+1)(2i,2i+2)(-(2i-1),-(2i+1))(-( 2i),-(2i+2))\big{)}^{(w_{l}^{\prime})^{k}}.\] Then \[W^{\prime}:=\big{(}\prod_{i=1}^{t_{l}}\langle w_{l,i}^{\prime}\rangle\big{)} \rtimes\langle\tau_{i}\mid 1\leq i\leq t_{l}-1\rangle\leq\operatorname{C}_{ \mathbf{W}_{l}}(w_{l})\] whose image in \(\mathbf{W}_{\mathbf{G}}(\mathbf{L})\), that is \(W^{\prime}\mathbf{W_{L}}/\mathbf{W_{L}}\), is \(\operatorname{C}_{\mathbf{W}_{\mathbf{G}}(\mathbf{L})}(w_{l}\mathbf{W_{L}}) \cong C_{2d_{0}}\wr\mathfrak{S}_{t_{l}}\) (as in [1, Section 4]). Our aim in the next sections is to construct a supplement \(V^{\prime}\leq\operatorname{N}_{\mathbf{G}}(\mathbf{L})\) with \(\rho(V^{\prime})=W^{\prime}\). By construction, the subgroup \(W^{\prime}\) is contained in \(\mathbf{W}_{l}\). Therefore, we will construct the subgroup \(V^{\prime}\) inside \(\mathbf{V}_{l}^{v_{l}F_{q}}\). Note that \(v_{l}\) is by construction a Sylow \(d\)-twist and \(d\) is a regular number for the root system \(B_{l}\), i.e. \(d\mid 2l\). Therefore, we are precisely in the situation already considered in [1, Section 5.A]. ## 7. The extended Weyl group and \(d\)-twists In this section, we recall the main construction from [1, Section 5] which we will then modify later. We keep the notation of the previous section. In particular, we let \(\mathbf{V}_{l}\) be the extended Weyl group associated to a root sytem \(\Phi_{l}\) of type \(B_{l}\). Let \(\rho:\mathbf{V}_{l}\to\mathbf{W}_{l}\) be the natural epimorphism to the Weyl group with kernel \(\mathbf{H}_{l}\). By construction in the last section, the integer \(d\) divides \(2l\) and thus \(a_{l}:=\frac{l}{d_{0}}\) is an integer. We let \(v_{l}\in\mathbf{V}_{l}\) be a Sylow \(d\)-twist as in 6.3 (see also [1, Section 5.A]) and denote \(H_{l}:=\mathbf{H}_{l}^{v_{l}F_{q}}\) and \(V_{l}:=\mathbf{V}_{l}^{v_{l}F_{q}}\). 
For a subset \(\mathcal{O}\subset\{1,\dots,l\}\) we define \(\Phi_{\mathcal{O}}:=\Phi_{l}\cap\langle\pm e_{i}\mid i\in\mathcal{O}\rangle\) and \(\mathbf{V}_{\mathcal{O}}:=\langle\mathbf{n}_{\alpha}(\pm 1)\mid\alpha\in\Phi_{ \mathcal{O}}\rangle\) and \(\mathbf{H}_{\mathcal{O}}:=\mathbf{V}_{\mathcal{O}}\cap\mathbf{H}\). ### The group \(H_{l}\) Set \(\overline{v}_{l}:=\overline{\rho}(v_{l})\) and for \(k\in\{1,\dots,l\}\) let \(\mathcal{O}_{k}\) be the \(\overline{v}_{l}\)-orbit on \(\{1,\dots,l\}\) containing \(k\). Let \(\varpi\in\mathbb{F}\) be a fourth root of unity and \[h_{0}:=\mathbf{h}_{e_{1}}(-1).\] For \(1\leq k\leq a_{l}\) let \[h_{k}:=\prod_{i\in\mathcal{O}_{k}}\mathbf{h}_{e_{i}}(\varpi).\] By the Chevalley relations the following equalities hold: \[h_{k}^{2} =\mathbf{h}_{e_{1}}(-1)^{|\mathcal{O}_{k}|}=h_{0}^{|\mathcal{O}_{k} |}\text{ and } \tag{2}\] \[h_{k}^{v} =\begin{cases}h_{k}&\text{if }2\nmid d\\ h_{0}h_{k}&\text{otherwise.}\end{cases} \tag{1}\] By [1, Section 5.B], we have \[H_{l}=\langle h_{0}\rangle\times\langle h_{1}h_{2}\rangle\times\cdots\times \langle h_{a_{l}-1}h_{a_{l}}\rangle,\] an elementary abelian \(2\)-group of rank \(a_{l}\). ### The group \(\mathbf{V}_{l}\) and some of its elements We define \(\overline{c}_{1}\in\mathrm{C}_{\mathbf{W}_{l}}(\rho(v_{l}))\) as in [1, Section 5.B]. If \(2\mid d\) let \[\overline{c}_{1}:=(1,a_{l}+1,\dots,a_{l}(d_{0}-1)+1,-1,-a_{l}-1,\dots,-a_{l}( d_{0}-1)-1)\in\mathrm{C}_{\mathbf{W}_{l}}(\rho(v_{l})).\] This is the cycle of \(\rho(v)\) containing \(1\). If \(2\nmid d\) let \[\overline{c}_{1}^{\prime}:= (\ 1,\qquad 2a_{l}+1,\ \ 4a_{l}+1,\qquad\dots,\,(d-1)a_{l}+1,-a_{l}-1, \dots,-(d-2)a_{l}-1)\] \[(-1,\ \ -2a_{l}-1,-4a_{l}-1,\ \dots,-(d-1)a_{l}-1, a_{l}+1,\qquad\dots,(d-2)a_{l}+1)\in\mathrm{C}_{\mathbf{W}_{l}}(\rho(v)),\] and set \(\overline{c}_{1}:=\overline{c}_{1}^{\prime}\prod_{i=0}^{d-1}(ia_{l}+1,-ia_{l}-1)\). Since \(\rho(\mathbf{V}_{l})=\mathbf{W}_{l}\) there exists some \(c_{1}\in\mathbf{V}_{l}\) with \(\rho(c_{1})=\overline{c}_{1}\). As in [1, Section 5.B] one can even choose \(c_{1}\) such that \(c_{1}\in(\mathbf{V}_{\mathcal{O}_{1}})^{v_{l}F_{q}}\) and \((\mathbf{V}_{\mathcal{O}_{1}}\cap V_{l})\,\mathrm{C}_{\mathbf{H}_{\mathcal{O} _{1}}}(v_{l})=\langle h_{1},h_{0},c_{1}\rangle\). As in [1, Section 5.B], let \(p_{k}:=\prod_{i=0}^{d_{0}-1}\mathbf{n}_{\alpha_{k+1}}(1)^{v_{l}^{i}}\) for \(1\leq k\leq a_{l}-1\), which satisfy \[\rho(p_{k})(\mathcal{O}_{k})=\mathcal{O}_{k+1}\text{ and }\rho(p_{k})(\mathcal{O} _{k+1})=\mathcal{O}_{k},\] as well as \(p_{k}\in V_{l}\) and \(p_{k}\in\mathbf{V}_{\mathcal{O}_{k}\cup\mathcal{O}_{k+1}}\). The elements \(p_{k}\) satisfy the type \(A\) braid relations and we have \(p_{k}^{2}=h_{k}h_{k+1}h_{0}^{|\mathcal{O}_{k}|}\in H_{l}\). For \(2\leq k\leq a_{l}\) let \(c_{k}:=(c_{1})^{p_{1}\cdots p_{k-1}}\). Since \(\rho(\mathbf{V}_{l})=\langle\overline{c}_{i}\mid 1\leq i\leq a_{l}\rangle \rtimes\mathfrak{S}_{a_{l}}\) we see that the elements \(c_{i}\) (\(1\leq i\leq a_{l}\)) together with the elements \(p_{k}\) (\(1\leq k<a_{l}\)) generate the group \(\mathbf{V}_{l}\). ## 8. A supplement of the relative Weyl group We will now modify the construction in the last section to construct a supplement of the relative Weyl group. 
First, there is a set theoretic map \[\begin{array}{cccc}\iota_{1}:\langle\mathbf{n}_{\alpha_{2}}(1),\dots, \mathbf{n}_{\alpha_{a_{l}}}(1)\rangle&\to&\langle p_{1},\dots,p_{a_{l}-1} \rangle,\\ n&\mapsto&\prod_{k=0}^{d_{0}-1}n^{v_{l}^{k}},\end{array}\] and by definition \(\iota_{1}(\mathbf{n}_{\alpha_{k}}(1))=p_{k-1}\) for \(2\leq k\leq a_{l}\). Next we define two elements \[g_{1}:=\prod_{i=1}^{t_{l}}p_{i}^{p_{i+1}\cdots p_{2i-2}},\quad\text{ and }\quad g_{2}:=\prod_{i=1}^{t_{l}}p_{i}^{p_{i+1}\cdots p_{2i-1}},\] which yields another map \[\begin{array}{cccc}\iota_{2}:\langle p_{1},\dots,p_{t_{l}-1}\rangle&\to& \langle p_{1},\dots,p_{a_{l}}\rangle,\\ p&\mapsto&p^{g_{1}}p^{g_{2}}.\end{array}\] **Lemma 8.1**.: _The maps \(\iota_{1}\) and \(\iota_{2}\) are injective group homomorphisms. Moreover, for \(1\leq i\leq t_{l}-1\), the elements \(p_{i}^{\prime}:=\iota_{2}(p_{i})\) satisfy the type \(A\) Braid relations and \((p_{i}^{\prime})^{2}=h_{2i-1}h_{2i}h_{2i+1}h_{2i+2}\)._ Proof.: To show that \(\iota_{1}\) is a homomorphism it suffices to check that \(\iota_{1}(\mathbf{n}_{\alpha_{i}}(1)\mathbf{n}_{\alpha_{j}}(1))=\iota_{1}( \mathbf{n}_{\alpha_{i}}(1))\iota_{1}(\mathbf{n}_{\alpha_{j}}(1))\) for \(2\leq i,j\leq l\). First note for \(2\leq i\leq a_{l}\) and \(0\leq k\leq d_{0}-1\) that \(\mathbf{n}_{\alpha_{i}}(1)^{(v_{l}^{\prime})^{k}}\in\langle\mathbf{n}_{e_{ka_ {l}+i+1}-e_{ka_{l}+i}}(1)\rangle\). As \(a_{l}\geq 2\) we have \(e_{ka_{l}+i+1}-e_{ka_{l}+i}\perp e_{k^{\prime}a_{l}+i+1}-e_{k^{\prime}a_{l}+i}\) for \(k\neq k^{\prime}\). Moreover, as both are long roots, it follows that \([\mathbf{n}_{e_{ka_{l}+i+1}-e_{ka_{l}+i}}(1),\mathbf{n}_{e_{k^{\prime}a_{l}+i+1 }-e_{k^{\prime}a_{l}+i}}(1)]=1\). In particular, we have \(p_{i}\in\prod_{k=0}^{d_{0}-1}\langle\mathbf{n}_{e_{ka_{l}+i+1}-e_{ka_{l}+i}} (1)\rangle\). For \(0\leq k<k^{\prime}\leq d_{0}-1\), then \(e_{ka_{l}+i+1}-e_{ka_{l}+i}\perp e_{k^{\prime}a_{l}+j+1}-e_{k^{\prime}a_{l}+j}\) and both are long roots. Hence as before \(\mathbf{n}_{\alpha_{i}}(1)^{(v_{l}^{\prime})^{k^{\prime}}}\mathbf{n}_{\alpha _{j}}(1)^{(v_{l}^{\prime})^{k}}=\mathbf{n}_{\alpha_{j}}(1)^{(v_{l}^{\prime})^{ k}}\mathbf{n}_{\alpha_{i}}(1)^{(v_{l}^{\prime})^{k^{\prime}}}\). Thus \(\iota_{1}\) is a homomorphism. Moreover, \(\iota_{1}\) is injective as \(\prod_{k=0}^{d_{0}-1}\langle\mathbf{n}_{e_{ka_{l}+i+1}-e_{ka_{l}+i}}(1)\rangle \cap\prod_{k=0}^{d_{0}-1}\langle\mathbf{n}_{e_{ka_{l}+j+1}-e_{ka_{l}+j}}(1)\rangle=1\) for \(i\neq j\). Observe by construction that \[\rho(g_{1})=\prod_{i=1}^{t_{l}}\left(\prod_{k=0}^{d_{0}-1}(ka_{l}+i,ka_{l}+2i- 1)\right),\quad\text{ and }\quad\rho(g_{2})=\prod_{i=1}^{t_{l}}\left(\prod_{k=0}^{d_{0}-1}(ka_{l}+i, ka_{l}+2i)\right)\] and so for \(1\leq i\leq t_{l}-1\), \[p_{i}^{g_{1}}\in\langle\mathbf{n}_{\alpha}(1)\mid\alpha=e_{ka_{l}+2i-1}-e_{ka _{l}+2i+1}\text{ with }0\leq k\leq d_{0}-1\rangle\] and \[p_{i}^{g_{2}}\in\langle\mathbf{n}_{\alpha}(1)\mid\alpha=e_{ka_{l}+2i}-e_{ka_{ l}+2i+2}\text{ with }0\leq k\leq d_{0}-1\rangle.\] As in the previous case, the long roots \(e_{ka_{l}+2i-1}-e_{ka_{l}+2i+1}\) and \(e_{k^{\prime}am+2j}-e_{k^{\prime}a_{l}+2j+2}\) orthogonal and it follows that \([p_{i}^{g_{1}},p_{j}^{g_{2}}]=1\). It follows from this that \(\iota_{2}\) is a homomorphism. To compute the formula for \(\iota_{2}(p_{i})^{2}\) we know that \(p_{i}^{2}=h_{i}h_{i+1}h_{0}^{d_{0}}\) by 7.2 and thus \(\iota_{2}(p_{i})^{2}=h_{2i-1}h_{2i}h_{2i+1}h_{2i+2}\). 
Since \(\iota_{2}\) is injective modulo \(H_{l}\) and \(H_{l}\cap\langle p_{1},\dots p_{t_{l}-1}\rangle=\langle p_{1}^{1},\dots p_{t_{l }-1}^{2}\rangle\) it follows from this formula that \(\iota_{2}\) itself must be injective. **Corollary 8.2**.: _For \(c_{1}\in(V_{\mathcal{O}_{1}})^{v_{l}F_{q}}\) as constructed above and \(V^{\prime}:=\langle c_{1}c_{1}^{p_{1}},p_{i}^{\prime}\mid 1\leq i\leq t_{l}-1\rangle\), we have \(\mathrm{N}_{G}(\mathbf{L})=LV^{\prime}\)._ Proof.: Note that by construction in [1, Section 5.B], the elements \(c_{1}\) and \(p_{i}\) with \(1\leq i\leq a_{l}-1\) all lie in \(V_{l}\). Moreover \(\rho(c_{1}c_{1}^{p_{1}})=w_{l,1}^{\prime}\) and \(\rho(p_{i}^{\prime})=\tau_{i}\) as defined in 6.5. Hence \(LV^{\prime}\leq\mathrm{N}_{G}(\mathbf{L})\) with \(LV^{\prime}/L=\mathrm{N}_{G}(\mathbf{L})/L\) showing the required equality. Mimicking the definition in [1, Section 5.B], for \(1\leq i\leq t_{l}\) we define \(c_{i}^{\prime}:=(c_{1}c_{1}^{p_{1}})^{p_{1}^{\prime}p_{2}^{\prime}\cdots p_{i-1}^ {\prime}}\), which satisfy \(\rho(c_{i}^{\prime})=w_{l,i}^{\prime}\). Additionally, set \(C^{\prime}:=\langle c_{i}^{\prime}\mid 1\leq i\leq t_{l}\rangle\) and \(P^{\prime}:=\langle p_{i}^{\prime}\mid 1\leq i\leq t_{l}-1\rangle\). Then \(c_{i}^{\prime}\in\langle\mathbf{n}_{\alpha}(1)\mid\alpha\in\Phi\cap\langle\pm e _{k}\mid k\in\mathcal{O}_{2i-1}\cup\mathcal{O}_{2i}\rangle\rangle=\mathbf{V}_ {\mathcal{O}_{2i-1}\cup\mathcal{O}_{2i}}\). We claim that \[(c_{i}^{\prime})^{p_{j}^{\prime}}=\left\{\begin{array}{ll}c_{i}^{\prime}& \text{ if }j\notin\{i-1,i\},\\ c_{i+1}^{\prime}&\text{ if }j=i,\\ c_{i-1}^{\prime}&\text{ if }j=i-1.\end{array}\right.\] The case \(j\notin\{i-1,i\}\) is immediate, while when \(j=i-1\) there is \(h\in H_{l}\) such that \(p_{1}^{\prime}\cdots p_{i-2}^{\prime}(p_{i-1}^{\prime})^{2}=hp_{1}^{\prime} \cdots p_{i-2}^{\prime}\) and by construction \([c_{1}^{\prime},H_{l}]=1\), hence \((c_{i}^{\prime})^{p_{i-1}^{\prime}}=(c_{1}^{\prime})^{p_{1}^{\prime}\cdots p_{i-2 }^{\prime}}=c_{i-1}^{\prime}\). **Lemma 8.3**.: _The group \(C^{\prime}\) is abelian and is the central product of the groups \(\langle c_{i}^{\prime}\rangle\), \(i=1,\dots,t_{l}\), over \(\langle h_{0}\rangle\), where \((c_{i}^{\prime})^{2d_{0}}=h_{0}\)._ Proof.: Recall from [13, Section 5.B] that \(c_{1}^{2d_{0}}\in\langle h_{0}\rangle\). As \([c_{1},c_{2}]=h_{0}\) by [13, Section 5.B] we see that \((c_{1}^{\prime})^{m}=h_{0}^{1+2+\cdots+m-1}c_{1}^{m}c_{2}^{m}\) for \(m\in\mathbb{N}\). Since \(d_{0}(2d_{0}-1)\) is odd (as \(d_{0}\) is assumed to be odd) it therefore follows that \((c_{1}^{\prime})^{2d_{0}}=h_{0}c_{1}^{2d_{0}}c_{2}^{2d_{0}}=h_{0}\). From this we obtain \((c_{i}^{\prime})^{2d_{0}}=h_{0}\) for all \(i\). To show \(C^{\prime}\) is abelian consider the commutator \([c_{1}^{\prime},c_{i}^{\prime}]\) for \(i>1\). Observe that \(c_{i}^{\prime}\in\langle c_{j}\mid 1\leq j\leq a_{l}\rangle\) and hence \([c_{1},c_{i}^{\prime}]\in\langle h_{0}\rangle\). Moreover, as \(i>1\) and thus \([p_{1},c_{i}^{\prime}]=1\), it follows that \[[c_{1}^{\prime},c_{i}^{\prime}]=[c_{1}c_{2},c_{i}^{\prime}]=[c_{1},c_{i}^{ \prime}]^{c_{2}}[c_{2},c_{i}^{\prime}]=[c_{1},c_{i}^{\prime}]^{c_{2}}[c_{1},c_ {i}^{\prime}]^{p_{1}}=1.\] Let \(i<j\). 
By applying the formula for \((c_{i}^{\prime})^{p_{j}^{\prime}}\) from above, we deduce that \[[c_{i}^{\prime},c_{j}^{\prime}]=[(c_{1}^{\prime})^{p_{1}^{\prime}\cdots p_{i-1 }^{\prime}},(c_{1}^{\prime})^{p_{1}^{\prime}\cdots p_{j-1}^{\prime}}]=[(c_{1}^ {\prime})^{p_{1}^{\prime}\cdots p_{i-1}^{\prime}},(c_{1}^{\prime})^{p_{1}^{ \prime}\cdots p_{i}^{\prime}}]^{p_{i+1}^{\prime}\cdots p_{j-1}^{\prime}}=[c_{1 }^{\prime},c_{i+1}^{\prime}]^{p_{1}^{\prime}\cdots p_{i-1}^{\prime}p_{i+1}^{ \prime}\cdots p_{j-1}^{\prime}}.\] Hence, \([c_{i}^{\prime},c_{j}^{\prime}]=1\). **Proposition 8.4**.: _The group \(V^{\prime}\) is a semidirect product \(V^{\prime}=C^{\prime}\rtimes P^{\prime}\). Moreover, the subgroup \(H^{\prime}:=\langle h_{0}\rangle\times\langle p_{k}^{\prime 2}\mid k=1, \ldots,t_{l}-1\rangle\) is equal to the intersection \(V^{\prime}\cap H\)._ Proof.: First consider \(H\cap C^{\prime}\). By construction \(C^{\prime}\lhd V^{\prime}\) with \(\rho(C^{\prime})=C_{2d_{0}}^{a_{l}/2}\). By Lemma 8.3, it follows that \(|C^{\prime}|\leq 2(2d_{0})^{a_{l}/2}\) and \(h_{0}\in C^{\prime}\cap H\). Hence \(|C^{\prime}|=2(2d_{0})^{a_{l}/2}\) and \(H^{\prime}\cap C^{\prime}=\langle h_{0}\rangle\). As \(P^{\prime}=\iota_{2}\circ\iota_{1}(\langle\mathbf{n}_{\alpha_{2}}(1),\ldots, \mathbf{n}_{\alpha_{a_{l}}}(1)\rangle)\), it follows that \(|P^{\prime}|=|\mathfrak{S}_{t_{l}}|2^{t_{l}-1}\). Moreover, \((p_{i}^{\prime})^{2}\in H\) and \(\rho(P^{\prime})=\mathfrak{S}_{t_{l}}\). Hence \(P^{\prime}\cap H=\langle p_{k}^{\prime 2}\mid k=1,\ldots,t_{l}\rangle\). It is immediate that \(H^{\prime}\leq V^{\prime}\cap H\) and so \[|V^{\prime}|\geq|V^{\prime}/V^{\prime}\cap H||H^{\prime}|=2^{t+1}|\mathfrak{S }_{t_{l}}||\langle(p_{i}^{\prime})^{2}\mid 1\leq i\leq t_{l}\rangle|=|C^{\prime}||P^{ \prime}|.\] Therefore \(H^{\prime}=V^{\prime}\cap H\) and \(C^{\prime}\cap P^{\prime}=1\). Note that for \(i\geq 1\), the element \(h_{i}h_{i+1}\) lies in the center of \(L\) precisely when \(i\) is odd. From this it follows that the intersection \(H^{\prime}\cap L\leq\mathrm{Z}(L)\) is contained in the center of \(L\). ## 9. Extension map for \(L\lhd N\) ### A criterion for an extension map The following proposition, see [14, Proposition 2.2], is useful for constructing extension maps. **Proposition 9.1**.: _Let \(K\lhd M\) be finite groups, let the group \(E\) act on \(M\), stabilizing \(K\) and let \(\mathbb{K}\subset\mathrm{Irr}(K)\) be \(ME\)-stable. Assume there exist \(E\)-stable subgroups \(K_{0}\) and \(\hat{V}\) of \(M\) such that_ 1. _the groups satisfy:_ 1. \(K=K_{0}(K\cap\hat{V})\) _and_ \(\hat{H}:=K\cap\hat{V}\leq\mathrm{Z}(K)\)_,_ 2. \(M=K\hat{V}\) _;_ 2. _for_ \(\mathbb{K}_{0}:=\cup_{\lambda\in\mathbb{K}}\mathrm{Irr}(\lambda|_{K_{0}})\) _there exist_ 1. \(a\) \(\hat{V}E\)_-equivariant extension map_ \(\Lambda_{0}\) _with respect to_ \(\hat{H}\lhd\hat{V}\) _; and_ 2. _an_ \(\varepsilon(\hat{V})E\)_-equivariant extension map_ \(\Lambda_{\varepsilon}\) _with respect to_ \(K_{0}\lhd K_{0}\rtimes\varepsilon(\hat{V})\) _for_ \(\mathbb{K}_{0}\)_, where_ \(\varepsilon:\hat{V}\to\hat{V}/\hat{H}\) _denotes the canonical epimorphism._ _Then there exists an \(ME\)-equivariant extension map with respect to \(K\lhd M\) for \(\mathbb{K}\)._ We wish to apply this proposition in the following situation. We set \(K_{0}:=[\mathbf{L},\mathbf{L}]^{F}\) and we let \(E:=\langle F_{p}\rangle\) be the group generated by field automorphisms in [13, Definition 4.1]. 
Moreover, \(K:=K_{0}H^{\prime}\), \(\hat{V}:=V^{\prime}\) and \(M:=KV^{\prime}=K_{0}V^{\prime}\). It follows from Proposition 8.4 and the remarks following it that the group theoretic requirements in part (a) are satisfied. We will now work towards showing that the character theoretic properties of part (b) are also satisfied. The computations in the previous section allow us to conclude that assumption (b)(i) in Proposition 9.1 is satisfied: **Proposition 9.2**.: _There exists a \(V^{\prime}E\)-equivariant extension map \(\Lambda\) with respect to \(H^{\prime}\lhd V^{\prime}\)._ Proof.: Observe that \(E\) centralizes \(V^{\prime}\). Hence, it is enough to construct for each character \(\lambda\in\operatorname{Irr}(H^{\prime})\) an extension to \(V^{\prime}_{\lambda}\). For a character \(\lambda\in\operatorname{Irr}(H^{\prime})\) its inertia group decomposes as \(V^{\prime}_{\lambda}=C^{\prime}\rtimes P^{\prime}_{\lambda}\) and \(\operatorname{Z}(G)=H^{\prime}\cap C^{\prime}\). The character \(\theta:=\lambda|_{\operatorname{Z}(G)}\) extends to a character of \(C^{\prime}\) with \(\hat{\theta}(c^{\prime}_{i})=\hat{\theta}(c^{\prime}_{1})\) for all \(i\). By the computation before Lemma 8.3, we see that the set \(\{c^{\prime}_{i}\mid i=1,\ldots,a_{l}\}\) is stable under the group action of \(P^{\prime}=\langle p^{\prime}_{i}\mid i=1,\ldots,a_{l}-1\rangle\). Therefore, the character \(\hat{\theta}\) is \(P^{\prime}\)-stable. Hence, there exists a unique character of \(C^{\prime}H^{\prime}\) extending both \(\lambda\) and \(\hat{\theta}\). In particular, the so-obtained character is \(P^{\prime}_{\lambda}\)-stable. According to Lemma 8.1, the map \(\iota_{2}:\langle p_{1},\ldots,p_{t_{l}-1}\rangle\to P^{\prime},p\mapsto p^{g_{ 1}}p^{g_{2}}\), is an isomorphism. Since \(H_{l}=\langle h_{0}\rangle(\langle p_{1},\ldots,p_{t_{l}-1}\rangle\cap H_{l})\) and \(h_{0}\in\operatorname{Z}(G)\) this isomorphism can be extended to an isomorphism \(\tilde{\iota}_{2}:H_{l}P\to H^{\prime}P^{\prime},p\mapsto p^{g_{1}}p^{g_{2}}\). According to [13, Theorem 5.5], there exists an extension map for \(H_{l}\lhd V_{l}\) and this gives via the isomorphism \(\tilde{\iota}_{2}\) an extension map from \(H^{\prime}\) to \(P^{\prime}\). Consequently, \(\lambda\) extends to \(P^{\prime}_{\lambda}\). Using the \(P^{\prime}_{\lambda}\)-stable extension of \(\lambda\) to \(C^{\prime}\) and the extension of \(\lambda\) to \(P^{\prime}_{\lambda}\) uniquely determines an extension to \(V^{\prime}_{\lambda}\). ### Structure of \(K_{0}\) The group \(K_{0}=[\mathbf{L},\mathbf{L}]^{F}\) is isomorphic to \(\operatorname{SL}_{2}(\varepsilon q^{d_{0}})^{t_{l}}\times B_{m}(q)\), where \(\varepsilon:=(-1)^{d+1}\). More concretely, set \(L_{1}:=\langle N_{F^{d_{0}}/F}(\mathbf{x}_{e_{2}-e_{1}}(1)),N_{F^{d_{0}}/F}( \mathbf{x}_{e_{1}-e_{2}}(1))\rangle\) and \(L_{i}:=L_{1}^{p_{1}^{\prime}\cdots p_{i-1}^{\prime}}\cong\operatorname{SL}_{2} (\varepsilon q^{d_{0}})\). Here, \(F=v_{l}F_{q}\) as in 6.5 and \(N_{F^{d_{0}}/F}\) denotes the norm map. Then \(K_{0}=L_{1}\times\cdots\times L_{t_{l}}\times B_{m}(q)\). **Lemma 9.3**.: _There exists a unique isomorphism_ \[\Theta:L_{1}\to\operatorname{SL}_{2}(\varepsilon q^{d_{0}})\] _with \(\Theta(N_{F^{d_{0}}/F}(\mathbf{x}_{\pm(e_{1}-e_{2})}(u)))=\mathbf{x}_{\pm(e_ {1}-e_{2})}(u)\)._ Proof.: We first claim that \(v_{l}^{d_{0}}\in\operatorname{Z}(\mathbf{V}_{l})\). Firstly, \(\rho(v_{l})^{d_{0}}\in\operatorname{Z}(\mathbf{W}_{l})\) and \(v_{l}^{d_{0}}=v_{l,0}^{l}\) or \(v_{l}^{d_{0}}=v_{l,0}^{2l}\). 
Hence, by [13, Section 5.B], we have \(\mathbf{H}_{l}\leq\operatorname{C}_{\mathbf{V}_{l}}(v_{l}^{d_{0}})\). By [13, Lemma 5.4] we deduce therefore that \(v_{l}^{d_{0}}\in\operatorname{Z}(\mathbf{V}_{l})\). Note that for \(u\in\mathbb{F}\) we have \(v_{l}^{d_{0}}\mathbf{x}_{e_{1}-e_{2}}(u)=\mathbf{x}_{\varepsilon(e_{1}-e_{2}) }(\delta u)\) for some \(\delta\in\{\pm 1\}\) by the Chevalley relations. Since \(v_{l}^{d_{0}}\in\operatorname{Z}(\mathbf{V}_{l})\) we have \(v_{l}^{d_{0}}\operatorname{\mathbf{n}}_{e_{1}-e_{2}}(1)=\mathbf{n}_{e_{1}-e_{ 2}}(1)\). This implies \(\delta=\varepsilon\) by the Chevalley relations [14, Satz 2.1.6]. Therefore, \(F^{d_{0}}(\mathbf{x}_{e_{1}-e_{2}}(u))=\mathbf{x}_{\varepsilon(e_{1}-e_{2}) }(\varepsilon u^{d_{0}})\). The claim of the lemma can now be deduced from this. **Lemma 9.4**.: _For \(1\leq i\neq j\leq t_{l}\), the commutator \([L_{i},c^{\prime}_{j}]=1\). Moreover, \([B_{m}(q),V^{\prime}]=1\)._ Proof.: Denote by \(\Phi_{A_{\mathcal{O}_{2i-1}}\sqcup\mathcal{O}_{2i}}\) the type \(A\) subsystem as in [10, Notation 3.1]. By construction \(L_{i}\leq\langle\mathbf{X}_{\alpha}\mid\alpha\in\Phi_{A_{\mathcal{O}_{2i-1}} \sqcup\mathcal{O}_{2i}}\rangle\) and \(c^{\prime}_{j}=x_{2j-1}x_{2j}\) for \(x_{i}\in\langle\mathbf{X}_{\alpha}\mid\alpha\in\Phi_{\mathcal{O}_{i}}\rangle\). Fix \(\alpha\in\Phi_{A_{\mathcal{O}_{2i-1}}\sqcup\mathcal{O}_{2i}}\) and assume \(i\neq j\). Then \(\alpha\perp\Phi_{\mathcal{O}_{i}}\). By applying [14, Remark 2.1.7], it follows that \(\mathbf{x}_{\alpha}(u)^{\mathbf{n}_{\beta}(1)}=\mathbf{x}_{\alpha}(u)\) for all \(\beta\in\Phi_{\mathcal{O}_{i}}\). Thus \([\mathbf{x}_{\alpha}(u),c^{\prime}_{j}]=1\) for \(\alpha\in\Phi_{A_{\mathcal{O}_{2i-1}}\sqcup\mathcal{O}_{2i}}\) whenever \(i\neq j\). The same argument also shows for \(1\leq i\leq a_{l}\) that \([B_{m}(q),p_{i}]=1\) which implies \([B_{m}(q),P^{\prime}]=1\). It remains to consider \([B_{m}(q),c_{1}^{\prime}]\). Let \(\alpha\in\Phi\cap\langle e_{i}\mid l+1\leq i\leq n\rangle\). Again applying [10, Remark 2.1.7], yields \(\mathbf{x}_{\alpha}(u)^{c_{1}}=\mathbf{x}_{\alpha}(\varepsilon u)\) with \(\varepsilon\in\{\pm 1\}\). Hence as \([B_{m}(q),p_{1}]=1\), it follows that \(\mathbf{x}_{\alpha}(u)^{c_{1}^{\prime}}=\mathbf{x}_{\alpha}(u)^{c_{1}p_{1}^{- 1}c_{1}p_{1}}=\mathbf{x}_{\alpha}(\varepsilon u)^{c_{1}p_{1}}=\mathbf{x}_{ \alpha}(u)\). It remains to understand the action of \(c_{i}^{\prime}\) on the factor \(L_{i}\). Note that it suffices to explain the action of \(c_{1}^{\prime}\) on \(L_{1}\) as \(c_{i}^{\prime}=(c_{1}^{\prime})^{p_{1}^{\prime}\cdots p_{i-1}^{\prime}}\) and \(L_{i}^{\prime}=(L_{1}^{\prime})^{p_{1}^{\prime}\cdots p_{i-1}^{\prime}}\). **Lemma 9.5**.: _The element \(c_{1}^{\prime}\) acts as \(v_{l}^{\prime}\) on \(L_{1}\) and \(c_{1}^{\prime}\) acts trivially on \(B_{m}(q)\)._ Proof.: We consider the element \(x:=v_{l}^{\prime}(c_{1}^{\prime}\cdots c_{t_{l}}^{\prime})^{-1}\). A computation in the Weyl group shows that \(x\in H_{l}\) and we claim that the image of \(x\) in \(H_{l}/\langle h_{0}\rangle\) gets centralized by \(P:=\langle p_{1},\ldots,p_{a_{l}-1}\rangle\). By [10, Section 5.B] the group \(C/\langle h_{0}\rangle\), where \(C:=\langle c_{1},\ldots c_{a_{l}}\rangle\), embedds into \(W_{l}\). In particular, since \(P\) centralizes the image of \(c_{1}^{\prime}\cdots c_{t_{l}}^{\prime}\) in \(W_{l}\) it follows that \(P\) centralizes the image of \(x\) in \(H_{l}/\langle h_{0}\rangle\). 
Using [10, Section 5.B], which describes the action of \(P\) on \(H_{l}\), it follows that \(x\in\langle h_{0},h_{1}\cdots h_{a_{l}}\rangle\cap H_{l}\). By the Chevalley relations we conclude that \(x\) centralizes \(\mathbf{X}_{\pm(e_{1}-e_{2})}\). In particular, since \(c_{2}^{\prime},\ldots,c_{t_{l}}^{\prime}\) centralize \(\mathbf{X}_{\pm(e_{1}-e_{2})}\) as well it follows that \(c_{1}^{\prime}\) acts as \(v_{l}^{\prime}\) on \(L_{1}\). **Lemma 9.6**.: _There exists an \(NE\)-stable \(\tilde{L}\)-transversal \(\mathbb{K}_{0}\subset\mathrm{Irr}(L_{0})\). Moreover, for the constructed set \(\mathbb{K}_{0}\), there exists an \(V^{\prime}/H^{\prime}\langle F_{p}\rangle\)-equivariant extension map \(L_{0}\lhd L_{0}\rtimes V^{\prime}/H^{\prime}\)._ Proof.: Observe that \(\mathfrak{S}_{t_{l}}\cong P^{\prime}/H^{\prime}\cap P^{\prime}\) permutes the set \(\{c_{i}^{\prime}\mid i=1,\ldots t_{l}\}\). That is we have \[L_{0}\rtimes V^{\prime}/H^{\prime}\cong(L_{1}\rtimes\langle c_{1}^{\prime}H^ {\prime}\rangle)\wr\mathfrak{S}_{t_{l}}\times B_{m}(q).\] The field automorphism \(F_{p}\in E\) acts diagonally on all factors in the decomposition of \(L_{0}\). Observe that the group \(\tilde{L}\) induces all diagonal automorphisms on \(L_{0}\). We can therefore choose a \(\tilde{L}\)-transversal \(\mathbb{K}_{0}\subset\mathrm{Irr}(L_{0})\) such that every \(\chi\in\mathbb{K}_{0}\) is of the form \(\chi=\chi_{1}\times\cdots\times\chi_{t_{l}}\times\chi^{\prime}\) with \(\chi_{i}\in\mathrm{Irr}(\mathrm{SL}_{2}(\varepsilon q^{d_{0}}))\), \(\chi^{\prime}\in\mathrm{Irr}(B_{m}(q))\), such that \(\chi_{i}=\chi_{j}\) are either equal or not in the same \(\mathrm{GL}_{2}(\varepsilon q^{d_{0}})\)-orbit. Note that \(c_{1}^{\prime}H^{\prime}\) has order \(2d_{0}\) and acts as \(v_{l}^{\prime}\) on \(L_{1}\) by Lemma 9.5. If \(d\) is odd then \(v_{l}^{\prime}\) is \(2d\)-regular. The proof of Lemma 9.3 therefore shows that \({}^{v_{l}^{d}}_{1}\mathbf{x}_{e_{1}-e_{2}}(u)=\mathbf{x}_{-(e_{1}-e_{2})}(-u)\) when \(d\) is odd. By Lemma 9.5 and the construction of the isomorphism \(\Theta:L_{1}\to\mathrm{SL}_{2}(\varepsilon q^{d_{0}})\) in Lemma 9.3, the automorphism \(c_{1}^{\prime}\) therefore acts as \(F_{q}\) or as \(\tau F_{q}\), where \(\tau\) is transpose-inverse, on \(\mathrm{SL}_{2}(\varepsilon q^{d_{0}})\). Note that transpose-inverse is an inner automorphism of \(\mathrm{SL}_{2}(\varepsilon q^{d_{0}})\). The characters \(\chi_{i},\chi^{\prime}\) satisfy \(A^{\prime}(\infty)\) by [10]. In particular, the set \(\mathbb{K}_{0}\) is \(NE\)-stable. Thus, there exists an \(\langle c_{1},F_{p}\rangle\)-equivariant extension map for \(L_{1}\lhd L_{1}\rtimes\langle c_{1}^{\prime}H^{\prime}\rangle\). The claim follows now from applying [10, Lemma 3.6]. We define \(\mathbb{K}:=\mathrm{Irr}(K\mid\mathbb{K}_{0})\) and \(\mathbb{T}:=\mathrm{Irr}(L\mid\mathbb{K})\). Since \(\tilde{L}/L_{0}\) is abelian, the sets \(\mathbb{K}\) and \(\mathbb{T}\) are again \(NE\)-stable \(\tilde{L}\)-transversals of \(\mathrm{Irr}(K)\) resp. \(\mathrm{Irr}(L)\). Note that \(K=K_{0}H^{\prime}=L_{0}H^{\prime}\) is a central product. Hence, \(\mathbb{K}\) consists of all extensions to \(K\) of characters in \(\mathbb{K}_{0}\). **Proposition 9.7**.: _There exists an \(ME\)-equivariant extension map with respect to \(K\lhd M\) for \(\mathbb{K}\)._ Proof.: We observe that by Lemma 9.6 and Lemma 9.2 the assumptions of Proposition 9.1(b) are satisfied. 
**Proposition 9.8**.: _There exists an \(NE\)-equivariant extension map from \(L\) to \(N\) for \(\mathbb{T}\)._ Proof.: The proof is similar to [12, p.28] using the properties we proved so far. For the proof it is sufficient to construct for every \(\theta\in\mathbb{T}=\operatorname{Irr}(L\mid\mathbb{K})\) some \((NE)_{\theta}\)-stable extension of \(\theta\) to \(N_{\theta}\). A character \(\theta\in\mathbb{T}\) lies above a unique \(\theta_{0}\in\mathbb{K}=\operatorname{Irr}(K\mid\mathbb{K}_{0})\). Moreover some extension \(\tilde{\theta}_{0}\in\operatorname{Irr}(L_{\theta_{0}})\) to \(L_{\theta_{0}}\) satisfies \(\operatorname{Ind}_{\mathbb{L}_{\theta_{0}}}^{L}(\tilde{\theta}_{0})=\theta\). By the properties of \(\mathbb{K}\) we see \(N_{\theta_{0}}=L_{\theta_{0}}M_{\theta_{0}}\). By Proposition 9.1 the character \(\theta_{0}\) has a \((V^{\prime}E)_{\theta_{0}}\)-stable extension to \(M_{\theta_{0}}\). According to [12, Lemma 4.1] this defines an extension \(\phi\) of \(\tilde{\theta}_{0}\) to \(N_{\tilde{\theta}_{0}}\) since \(N_{\tilde{\theta}_{0}}\leq L_{\theta_{0}}M_{\theta_{0}}\). By the construction we see that \(\operatorname{Ind}_{N_{\tilde{\theta}_{0}}}^{N_{\theta}}(\phi)\) is an extension of \(\theta\). As \(\mathbb{T}\) is an \(M\)-stable \(\tilde{L}\)-transversal \(\tilde{N}_{\theta_{0}}=\tilde{L}_{\theta_{0}}M_{\theta_{0}}\) and \((\tilde{N}E)_{\theta_{0}}=\tilde{L}_{\theta_{0}}(ME)_{\theta_{0}}\). Hence this extension of \(\theta_{0}\) defines an extension of \(\theta\) as required. In our situation, it is easily possible to extend the extension map from Proposition 9.8 to an \(NE\)-equivariant extension map on \(\operatorname{Irr}(L)\): **Corollary 9.9**.: _There exists an \(NE\)-equivariant extension map from \(L\) to \(N\)._ Proof.: Let \(\Lambda\) be the extension map for \(\mathbb{T}\) from Proposition 9.8. We extend \(\Lambda\) to \(\operatorname{Irr}(L)\) by extending it \(\tilde{L}\)-equivariantly, i.e. we define \(\Lambda(\tilde{l}\theta):=\tilde{l}\Lambda(\theta)\) for \(\theta\in\mathbb{T}\) and \(\tilde{l}\in\tilde{L}\). Note that this is well-defined (i.e. does not depend on the choice of \(\tilde{l}\)) since \(\tilde{L}_{\lambda}=\tilde{L}_{\Lambda(\lambda)}\) by Remark 6.2. It is now easily checked that the extended map is still \(NE\)-equivariant. **Remark 9.10**.: _The conclusion of Proposition 9.8 will also hold for any \(d\)-split Levi subgroup \(\mathbf{L}\) with root system \(\Phi_{\mathbf{L}}\cong\Phi_{B_{l}}\sqcup_{i}\Phi_{A_{2i-1}}^{s_{i}}\). In particular, for suitably chosen elements \(g_{1},\dots,g_{2i}\) a corresponding map \(\iota_{2}\) for the factor \(\Phi_{A_{2i-1}}^{s_{i}}\) can be constructed (as in [10, Section 4.C.2] for the analogous situation in type C)._ ### Verifying the inductive condition Before proving our main theorem, we show how one obtains Assumption 4.5 for cases 1 and 3 of Table 1 from [10]: **Remark 9.11**.: _We wish to construct a \(d\)-split Levi subgroup of \(B_{n}(q)\) of rational type \(B_{m}(q)(q^{d_{0}}+(-1)^{d+1})^{a}\) for some integers \(m\) and \(a\). For this we modify the construction in 6.2 by defining \(n^{\prime}:=n-m\) and by keeping the same definitions as there. It follows that the Levi subgroup \(\mathbf{L}\) with root system \(\Phi_{2}=\Phi\cap\langle e_{i}\mid n^{\prime}\leq i\leq n\rangle\) is \(vF_{q}\)-stable (where \(v=(\mathbf{n}_{\alpha_{1}}(1)\cdots\mathbf{n}_{\alpha_{n^{\prime}}}(1))^{\frac {2n^{\prime}}{d_{0}}}\)) and \(\mathbf{L}^{vF_{q}}\) has type \(B_{m}(q)(q^{d_{0}}+(-1)^{d+1})^{a}\). 
One observes that the considerations in [10, Section 5.D] apply verbatim to our situation. In particular, Assumption 4.5 follows from the proof of [10, Theorem 4.2]._ **Theorem 9.12**.: _Let \(G\) be a quasi-simple group of Lie type \(B_{n}\) or \(C_{n}\) defined over the finite field \(\mathbb{F}_{q}\) for \(q\) a prime power of an odd prime and let \(\ell\geq 5\) not dividing \(q\). Then every \(\ell\)-block of \(G\) satisfies the iAM-condition._ Proof.: According to the reduction theorem in [13, Theorem 12.6] in order to show the theorem it suffices to check the iAM-condition for isolated blocks relative to the normalizer of their Cabanes subgroup. As explained in the proof there, we may also assume that \(G\) has non-exceptional Schur multiplier. Let \(b\) be a isolated block associated to the \(d\)-cuspidal pair \((\mathbf{L},\lambda)\). According to Theorem 4.9 it suffices for this to check that Assumption 4.5 is satisfied. For groups of type \(C_{n}\) this criterion was verified in [10, Theorem 1.2]. If \(\mathbf{G}\) is of type \(B_{n}\), then \(\mathbf{L}\) is one of the Levi subgroups of Table 1. For the Levi subgroups in cases 1 and 3 Assumption 4.5 follows from Remark 9.11. For the Levi subgroups in case 2 the Assumption 4.5(i) follows from Corollary 9.9. Assumption 4.5(ii) on the other hand follows from Remark 4.6(i) and Remark 6.2.
2305.01222
SOS Construction of Compatible Control Lyapunov and Barrier Functions
We propose a novel approach to certify closed-loop stability and safety of a constrained polynomial system based on the combination of Control Lyapunov Functions (CLFs) and Control Barrier Functions (CBFs). For polynomial systems that are affine in the control input, both classes of functions can be constructed via Sum Of Squares (SOS) programming. Using two versions of the Positivstellensatz we derive an SOS formulation seeking a rational controller that - if feasible - results in compatible CLF and multiple CBFs.
Michael Schneeberger, Florian Dörfler, Silvia Mastellone
2023-05-02T06:26:56Z
http://arxiv.org/abs/2305.01222v1
# SOS Construction of Compatible Control Lyapunov and Barrier Functions ###### Abstract We propose a novel approach to certify closed-loop stability and safety of a constrained polynomial system based on the combination of Control Lyapunov Functions (CLFs) and Control Barrier Functions (CBFs). For polynomial systems that are affine in the control input, both classes of functions can be constructed via Sum of Squares (SOS) programming. Using two versions of the Positivstellensatz we derive an SOS formulation seeking a rational controller that -- if feasible -- results in compatible CLF and multiple CBFs. 1,2]Michael Schneeberger 1,2]Florian Dorfler ## 1 Introduction When dealing with systems that have state constraints, it is crucial to have a controller that ensures both stability and compliance with the constraints. In most cases, feedback control design focuses mainly on achieving stability, while protection functions only engage when constraints are violated. Unfortunately, this approach results in downtime and the need to investigate faults. By contrast, a controller that is both stable and safe can trade off control performance for the ability to prevent unsafe states. The focus of this paper is on constructing compatible CLFs and CBFs that can certify both stability and safety in control systems. A CLF establishes conditions for the existence of a stabilizing controller for a given control system. Similarly, a CBF guarantees the existence of a controller that can render the control system safe. According to Wieland and Allgower (2007), a system is considered safe if any state trajectory starting from a safe set of states remains within an allowable region defined by the state constraints. For systems that are affine in the input, a controller that meets the CLF and CBF conditions can be implemented by solving an online Quadratic Programming (QP), see Ames et al. (2019). However, for some states, these conditions may conflict with each other. For both to hold jointly for all states, the control-sharing property is additionally required, see Grammatico et al. (2013); Xu (2016). Finding such compatible CLF and CBF is generally difficult. For polynomial systems, however, this can be achieved by formulating SOS constraints and solving them via Semidefinite Programming (SDP). The contributions of this paper is twofold: first, we derive SOS constraints on compatible CLF and multiple CBFs using two versions of the Positivstellensatz, and second, an algorithm is developed that efficiently finds solutions to these SOS constraints by maximizing a surrogate of the volume of the safe set. The conditions on compatible CBFs are formulated in Isaly et al. (2022); Tan and Dimarogonas (2022) but without giving a method to construct them. A constructive approach described by Clark (2021) is based on the introduction of additional SOS constraints to enforce compatibility. In this paper, we reveal a correspondence between the SOS constraint derived from the CLF, resp. CBF, condition, and the existence of a rational controller that renders the closed-loop system stable, resp. safe. By restricting such controllers to be identical, we derive a new set of SOS constraints that guarantee compatibility between a CLF and multiple CBFs without the introduction of additional SOS constraints. These SOS constraints contain bilinear terms, and hence, cannot be directly converted to an SDP. 
We therefore present an alternating algorithm that searches simultaneously for a CLF and multiple CBFs by repeatedly solving two SDPs. In particular, the algorithm seeks to maximize the volume of the safe set, which is given by the intersection of the invariant sets defined by each CBF. Multiple CBFs offer additional flexibility to increase the volume of the safe set when a single CBF does not suffice. Similar approaches were presented in Anghel et al. (2013); Kundu et al. (2019); Wang et al. (2022) but they either only searched for a single CLF or a single CBF. Korda et al. (2014) proposed another intriguing approach for identifying a safe set using an infinite-dimensional linear programming problem. We demonstrate the utility of our approach with a power converter control example for which safety is of paramount importance. The paper is structured as follows: In Section 2, we introduce the notation adopted in the paper and review some preliminaries. Then, we recall the definition of a CLF and CBF and derive a rational controller in Section 3. In Section 4, we combine the CLF and CBF by introducing the control-sharing property. Section 5 defines the SOS program encoding the CLF and CBF conditions, and Section 6 presents the algorithm that solves the SOS program. Numerical simulation are given in Section 7. Finally, Section 8 is dedicated to concluding remarks and future work. 2 Preliminaries & Notation ### Notation Across this paper, we adopt the following notation. The shorthand \([t]:=\{1,2,...,t\}\) is used to denote a range of numbers. A scalar function \(V:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is positive definite w.r.t. \(x^{*}\), if \(V(x^{*})=0\) and \(V(x)>0\) for \(x\neq x^{*}\). \(R[x]\) refers to the set of scalar polynomials in variables \(x\in\mathbb{R}^{n}\), and \(\Sigma[x]\) refers to the set of scalar SOS polynomials in \(x\). A polynomial \(p(x)\) is an SOS polynomial if it can be written as \(p(x)=\sum_{i=1}^{k}g_{i}(x)^{2}\) for \(g_{i}(x)\in R[x]\) and \(i\in[k]\). If \(p(x)\in\Sigma[x]\), it can be expanded to \[p(x)=Z(x)^{T}QZ(x),\] where \(Z(x)\) is a vector of monomials in \(x\), and \(Q\) is a square positive semidefinite matrix. In the following, we consider a polynomial control system described as \[\dot{x}=\mathcal{F}_{c}(x,u)=f(x)+G(x)u_{c}, \tag{1}\] where \(f(x)\in\left(R[x]\right)^{n}\) and \(G(x)\in\left(R[x]\right)^{n\times m}\) are polynomial matrices, and \(u_{c}\in\mathbb{R}^{m}\) is the control input vector. System (1) with polynomial state feedback control policy \(u(x)\in\left(R[x]\right)^{m}\) results in a closed-loop polynomial system of the form \[\dot{x}=\mathcal{F}_{a}(x)=f(x)+G(x)u(x) \tag{2}\] The state \(x^{*}\) is called an equilibrium of (2), if \(\mathcal{F}_{a}(x^{*})=0\). A set \(\mathcal{X}\) is called forward invariant (Khalil, 2002) with respect to (2) if for every \(x_{0}\in\mathcal{X}\), \(x(t)\in\mathcal{X}\) for all \(t\in\mathbb{R}_{+}\). We say that a system (2) is safe (Wieland and Allgower, 2007) w.r.t. an allowable set of states \(\mathcal{X}_{a}\subseteq\mathbb{R}^{n}\) and the safe states \(\mathcal{X}_{s}\subseteq\mathbb{R}^{n}\), if \(\mathcal{X}_{s}\) is forward invariant and \(\mathcal{X}_{s}\subseteq\mathcal{X}_{a}\). 
### SOS Programming An SOS program minimizing a quadratic cost function subject to SOS constraints is defined as follows: \[\begin{array}{ll}\underset{v}{\text{minimize}}&v^{T}Pv+c^{T}v\\ \text{subject to}&p_{i}(x,v)\in\Sigma[x]\quad i=1,...,N.\end{array}\] Here \(P\in\mathbb{R}^{l\times l}\) is a positive semidefinite matrix, \(c\in\mathbb{R}^{l}\) is a vector, and \(p_{i}(x,v)\in\Sigma[x]\) is an SOS polynomial in \(x\in\mathbb{R}^{n}\) parameterized by \(v\in\mathbb{R}^{l}\), i.e. it can be expanded as \[p_{i}(x,v)=Z_{i}(x)^{T}Q_{i}(v)Z_{i}(x), \tag{3}\] where \(Q_{i}(v)=Q_{0,i}+v_{1}Q_{1,i}+...+v_{l}Q_{l,i}\) is a square positive semidefinite matrix linearly parameterized by \(v\) encoding the coefficients of the polynomial. When deriving the SOS constraints in the following subsections, the parameter \(v\) is omitted for simplicity, and the polynomial is simply written as \(p_{i}(x)\) instead. ### Positivstellensätze In the following, we present a version of the Positivstellensatz that can be regarded as a specialization of the weak Positivstellensatz. This version will be relevant for our analysis later on. **Theorem 1**: _Given polynomials \(f_{1}(x)\), \(f_{2}(x)\), \(g_{1}(x)\), \(h_{1}(x)\),..., \(h_{m+1}(x)\in R[x]\) such that_ \[\Big\{x\in\mathbb{R}^{n}\ \big|\ f_{1}(x)\geq 0,\ f_{2}(x)\geq 0,\ g_{1}(x)\neq 0,\ h_{1}(x)=0,...,h_{m+1}(x)=0\Big\}=\emptyset, \tag{4}\] _then there exist polynomials_ * \(f_{\text{cone}}(x)=s_{1}(x)f_{1}(x)+s_{2}(x)f_{2}(x)+s_{3}(x)f_{1}(x)f_{2}(x)\) * \(h_{\text{ideal}}(x)=p_{1}(x)h_{1}(x)+...+p_{m+1}(x)h_{m+1}(x)\) _such that_ \[-f_{\text{cone}}(x)-h_{\text{ideal}}(x)-g_{1}(x)^{2k}\in\Sigma[x] \tag{5}\] _for all \(x\in\mathbb{R}^{n}\), where \(p_{1}(x),...,p_{m+1}(x)\in R[x]\) are polynomials, \(s_{1}(x),s_{2}(x),s_{3}(x)\in\Sigma[x]\) are SOS polynomials, and \(k\in\mathbb{N}_{+}\)._ Using (Bochnak et al., 2013, Theorem 4.4.2), we note that the cone \(P\) generated by \(f_{1}(x)\) and \(f_{2}(x)\) is contained in \(\Big\{s_{0}(x)+s_{1}(x)f_{1}(x)+s_{2}(x)f_{2}(x)+s_{3}(x)f_{1}(x)f_{2}(x)\mid s_{0},s_{1},s_{2},s_{3}\in\Sigma[x]\Big\}\). If the set (4) is empty, the Positivstellensatz ensures that the polynomials \(s_{1}(x),s_{2}(x),...,p_{m+1}(x)\) exist without specifying their degree. Finding these polynomials computationally requires iteratively increasing their degree until a solution to (5) can be found. Increasing the degree of the polynomials, however, increases the computation time. For many practical examples of interest, low-degree polynomials suffice. The following theorem states a version of the Positivstellensatz that can be seen as a specialization of Putinar's Positivstellensatz.
**Theorem 2**: _Given polynomials \(f_{1}(x)\), \(f_{2}(x)\), \(h_{1}(x)\),..., \(h_{m+1}(x)\in R[x]\) such that the set \(\Big\{x\in\mathbb{R}^{n}\mid f_{2}(x)\geq 0\Big\}\) is compact and_ \[\Big\{x\in\mathbb{R}^{n}\ \big|\ f_{1}(x)\geq 0,\ f_{2}(x)\geq 0,\ h_{1}(x)=0,...,h_{m+1}(x)=0\Big\}=\emptyset, \tag{6}\] _then there exist polynomials_ * \(f_{\text{cone}}(x)=f_{1}(x)+s_{2}(x)f_{2}(x)\) * \(h_{\text{ideal}}(x)=p_{1}(x)h_{1}(x)+...+p_{m+1}(x)h_{m+1}(x)\) _such that_ \[-f_{\text{cone}}(x)-h_{\text{ideal}}(x)\in\Sigma[x] \tag{7}\] _for all \(x\in\mathbb{R}^{n}\), where \(p_{1}(x),...,p_{m+1}(x)\in R[x]\) are polynomials and \(s_{2}(x)\in\Sigma[x]\) is an SOS polynomial._ **Proof.** Consider the closed semialgebraic set \[K:=\big\{x\in\mathbb{R}^{n}\ \big|\ f_{2}(x)\geq 0,\ h_{k}(x)\geq 0,\ -h_{k}(x)\geq 0\quad k=1,...,m+1\big\},\] and the quadratic module \[M(f_{2}(x),h_{1}(x),-h_{1}(x),...,-h_{m+1}(x)).\] Then \(M\) is Archimedean according to (Laurent, 2009, Theorem 3.17), using the fact that \(\left\{x\in\mathbb{R}^{n}\mid f_{2}(x)\geq 0\right\}\) is compact. The empty set condition (6) is equivalent to the condition that \(-f_{1}(x)>0\) on \(K\) and, therefore, by (Laurent, 2009, Theorem 3.20), to the representation \[-f_{1}(x)=s_{0}(x)+s_{2}(x)f_{2}(x)+\sum_{k=1}^{m+1}\left(s_{h,k}^{p}(x)-s_{h,k}^{n}(x)\right)h_{k}(x), \tag{8}\] where \(s_{0}(x),s_{2}(x),s_{h,1}^{p}(x),...,s_{h,m+1}^{n}(x)\in\Sigma[x]\). We note that \(s_{h,k}^{p}(x)-s_{h,k}^{n}(x)\) can be replaced by a polynomial \(p_{k}(x)\). ## 3 Closed-Loop Stability and Safety In this section, we first review the concept of CLFs and CBFs. Given a polynomial system (1), we then propose an approach to construct a rational controller resulting from the SOS formulation of the CLF and CBF conditions. We prove global asymptotic stability and safety of the resulting closed-loop system (2) given such a controller. ### Control Lyapunov Function For a control system (1) to be stabilizable around the equilibrium point \(x^{*}\), we require the existence of a control input \(u_{c}\in\mathbb{R}^{m}\) at every state \(x\in\mathbb{R}^{n}\) that renders the sublevel sets of a scalar polynomial \(V(x)\in R[x]\) forward invariant. This motivates the following definition of a CLF: **Definition 1**: (Isidori (1995)) Consider a differentiable function \(V:\mathbb{R}^{n}\to\mathbb{R}\) such that \(V(x^{*})=0\) and \(V(x)>0\) for all \(x\neq x^{*}\). Such a scalar function is called a _Control Lyapunov Function_ (CLF) for the control system (1) if \[\nabla V(x)^{T}f(x)<0 \tag{9}\] for all \(x\in\left\{x\in\mathbb{R}^{n}\mid\nabla V(x)^{T}G(x)=0,x\neq x^{*}\right\}\). The inequality (9) can also be formulated as the empty set condition \[\left\{x\in\mathbb{R}^{n}\mid\nabla V(x)^{T}f(x)\geq 0,\ \nabla V(x)^{T}G(x)=0,\ x\neq x^{*}\right\}=\emptyset. \tag{10}\] Condition (10), when restricted to _polynomial_ CLFs, can be solved via SOS programming. Hence, we restrict our attention to polynomial scalar functions \(V\in R[x]\) for the rest of the paper. Next, we replace the inequality constraints \(x=(x_{1},...,x_{n})\neq x^{*}\) in (10) by a single inequality constraint \(l(x)\neq 0\) (cf. Tan and Packard (2004)), where \(l(x^{*})=0\) and \(l(x)\neq 0\) elsewhere. A single inequality constraint has the advantage that it translates into a simpler SOS constraint. The resulting empty set condition equivalent to (10) is then given by \[\left\{x\in\mathbb{R}^{n}\mid\nabla V(x)^{T}f(x)\geq 0,\ \nabla V(x)^{T}G(x)=0,\ l(x)\neq 0\right\}=\emptyset. \tag{11}\]
According to Theorem 1, by choosing \(k=1\), the empty set condition (11) becomes \[-s_{1}(x)\nabla V(x)^{T}f(x)-\nabla V(x)^{T}G(x)p(x)-l(x)^{2}\in\Sigma[x], \tag{12}\] where \(s_{1}(x)\in\Sigma[x]\) is an SOS polynomial, and \(p(x)=\left[p_{1}(x)\ ...\ p_{m}(x)\right]^{T}\in(R[x])^{m}\) is a vector of polynomials. If \(s_{1}(x)\) is strictly positive w.r.t. \(x^{*}\), the following controller -- in the form of a rational function -- naturally results from the SOS constraint (12): \[u_{CLF}(x):=p(x)/s_{1}(x). \tag{13}\] **Remark 1**: _Fixing the controller in (1) to \(u_{c}=u_{CLF}(x)\) results in the closed-loop system (2). Hence, stability is asserted by the existence of a Lyapunov function \(V(x)\). For the sake of readability, however, we keep denoting \(V(x)\) a CLF._ **Lemma 1**: _Given a CLF \(V(x)\) for the control system (1) with \(s_{1}(x)\) in (12) strictly positive, the closed-loop system (2) using the controller \(u(x)=u_{CLF}(x)\) defined in (13) is globally asymptotically stable (GAS)._ **Proof.** The polynomial CLF \(V(x)\) is strictly positive definite w.r.t. \(x^{*}\). Hence, \(V(x)\) is radially unbounded. From (12), we derive \(s_{1}(x)\nabla V(x)^{T}f(x)+\nabla V(x)^{T}G(x)p(x)<0\) for all \(x\neq x^{*}\). Since \(s_{1}(x)\) is strictly positive w.r.t. \(x^{*}\), dividing by \(s_{1}(x)\) yields \(\nabla V(x)^{T}(f(x)+G(x)u_{CLF}(x))<0\) for all \(x\neq x^{*}\). GAS follows from Theorem 4.2 in Khalil (2002). The strict positivity condition on \(s_{1}(x)\) can be enforced by the SOS constraint \[s_{1}(x)-\epsilon_{s1}\in\Sigma[x], \tag{14}\] for some \(\epsilon_{s1}>0\). ### Control Barrier Function Similar to the stability argument, safety can be asserted with the existence of a scalar function \(B_{1}:\mathbb{R}^{n}\to\mathbb{R}\). Specifically, the control system (1) is safe w.r.t. the set of safe states \[\mathcal{X}_{s,1}=\Big\{x\in\mathbb{R}^{n}\mid B_{1}(x)\leq 0\Big\}, \tag{15}\] if there exists a control input \(u_{c}\in\mathbb{R}^{m}\) for every state \(x\in\mathbb{R}^{n}\) such that \(\mathcal{X}_{s,1}\) is forward invariant. This motivates the following definition of a CBF (cf. Wang et al. (2022)): **Definition 2**: _Consider a differentiable function \(B_{1}:\mathbb{R}^{n}\to\mathbb{R}\) such that \(\mathcal{X}_{s,1}\) is non-empty. Such a scalar function is called a _Control Barrier Function_ (CBF) for the control system (1) if_ \[\nabla B_{1}(x)^{T}f(x)<0 \tag{16}\] _for all \(x\in\Big\{x\in\mathbb{R}^{n}\mid\nabla B_{1}(x)^{T}G(x)=0,B_{1}(x)=0\Big\}\)._ An alternative definition of a CBF (cf. Ames et al. (2019)) involves a supremum and a class \(\mathcal{K}\) function. This definition, however, is not well suited for polynomial optimization since the resulting functions cannot be directly translated to polynomial inequalities. Similarly to the CLF case, we restrict our focus to _polynomial_ CBFs \(B_{1}(x)\in R[x]\) for the rest of the paper. Inequality (16) can also be formulated as the empty set condition \[\Big\{x\in\mathbb{R}^{n}\ \big|\ \nabla B_{1}(x)^{T}f(x)\geq 0,\ \nabla B_{1}(x)^{T}G(x)=0,\ B_{1}(x)=0\Big\}=\emptyset. \tag{17}\] **Assumption 1**: _The set \(\big\{x\in\mathbb{R}^{n}\mid B_{1}(x)=0\big\}\) is compact._
According to Theorem 2 and Assumption 1, the empty set condition (17) is equivalent to the SOS constraint \[-s_{1}(x)\nabla B_{1}(x)^{T}f(x)-\nabla B_{1}(x)^{T}G(x)p(x)-p_{m+1}(x)B_{1}(x)\in\Sigma[x], \tag{18}\] where \(s_{1}(x)\in\Sigma[x]\) is an SOS polynomial, \(p(x)=\big[p_{1}(x)\ ...\ p_{m}(x)\big]^{T}\in(R[x])^{m}\) is a vector of polynomials, and \(p_{m+1}(x)\in R[x]\) is a scalar polynomial. If \(s_{1}(x)\) in (18) is strictly positive w.r.t. \(x^{*}\), there exists a controller defined by the rational function \[u_{\mathrm{CBF},1}(x):=p(x)/s_{1}(x). \tag{19}\] As in Remark 1, we keep denoting \(B_{1}(x)\) a CBF for both the control system and the closed-loop system. **Lemma 2**: _Given a CBF \(B_{1}(x)\) for the control system (1) with \(s_{1}(x)\) in (18) strictly positive, the set \(\mathcal{X}_{s,1}\) in (15) -- if compact (cf. Assumption 1) -- is forward invariant w.r.t. system (2) using the controller \(u(x)=u_{\mathrm{CBF},1}(x)\) defined in (19)._ **Proof.** A compact zero sublevel set of a scalar polynomial \(B_{1}(x)\) is forward invariant w.r.t. the closed-loop system (2) with control \(u(x)\) if \(\nabla B_{1}(x)^{T}\left(f(x)+G(x)u(x)\right)<0\) for all \(x\in\{x\in\mathbb{R}^{n}\mid B_{1}(x)=0\}\) (Blanchini, 1999, Nagumo's Theorem 3.1). From (18), we derive \(s_{1}(x)\nabla B_{1}(x)^{T}f(x)+\nabla B_{1}(x)^{T}G(x)p(x)<0\) for all \(x\in\{x\in\mathbb{R}^{n}\mid B_{1}(x)=0\}\). Since \(s_{1}(x)\) is strictly positive w.r.t. \(x^{*}\), dividing by \(s_{1}(x)\) yields \(\nabla B_{1}(x)^{T}\left(f(x)+G(x)u_{\mathrm{CBF},1}(x)\right)<0\) for all \(x\in\{x\in\mathbb{R}^{n}\mid B_{1}(x)=0\}\). ## 4 SOS construction of compatible CLF and multiple CBFs In this section, we are interested in finding a CLF and multiple CBFs that are compatible with each other. This is achieved by deriving a new set of SOS constraints that result from restricting the rational controllers (13) and (19) to be identical. Multiple CBFs provide additional flexibility to increase the volume of the safe set \(\mathcal{X}_{s}\), which is defined by the intersection of the zero sublevel sets of all the CBFs. Intuitively, this makes sense since the solution of a single CBF can be recovered by equating all CBFs. ### Multiple Control Barrier Functions Consider a set of CBFs \(\{B_{i}(x)\}_{i\in[t]}\) for some \(t\in\mathbb{N}_{+}\), each defining a forward invariant set \[\mathcal{X}_{s,i}=\Big\{x\in\mathbb{R}^{n}\mid B_{i}(x)\leq 0\Big\}. \tag{20}\] Multiple CBFs are compatible with each other if they have the control-sharing property (cf. Grammatico et al. (2013)). **Definition 3**: _Consider the closed-loop system (2) with feedback controller \(u(x)\). A set of CBFs \(\{B_{i}(x)\}_{i\in[t]}\) with associated controllers \(u_{\mathrm{CBF},i}(x)\) rendering (20) invariant has the control-sharing property if \(u(x)=u_{\mathrm{CBF},i}(x)\) for all \(i\in[t]\)._ A set of compatible CBFs \(\{B_{i}(x)\}_{i\in[t]}\) forms a new forward invariant set \(\mathcal{X}_{s}\), called the safe set, defined as the intersection of the invariant sets from each \(B_{i}(x)\): \[\mathcal{X}_{s}:=\bigcap_{i\in[t]}\mathcal{X}_{s,i}=\Big\{x\in\mathbb{R}^{n}\mid B_{i}(x)\leq 0\ \forall i\in[t]\Big\}. \tag{21}\] Consider a set of allowable states of the form \[\mathcal{X}_{a}=\Big\{x\in\mathbb{R}^{n}\mid w_{i}(x)\leq 0\ \forall i\in[t]\Big\}, \tag{22}\] where \(w_{1}(x),...,w_{t}(x)\in R[x]\) are polynomials. Note that to each polynomial \(w_{i}(x)\) defining the allowable set \(\mathcal{X}_{a}\), a corresponding CBF is assigned.
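As a concrete illustration of the set definitions (20)-(22), the following sketch (plain Python/NumPy; the quadratic \(B_{i}\) and \(w_{i}\) are hypothetical stand-ins, not taken from the paper) checks membership of a state in the safe set \(\mathcal{X}_{s}\) and the allowable set \(\mathcal{X}_{a}\), and empirically samples the containment \(\mathcal{X}_{s}\subseteq\mathcal{X}_{a}\) required below.

```python
import numpy as np

# Hypothetical CBFs and constraint polynomials in R^2 (illustrative only):
# each defines an ellipsoidal zero sublevel set.
def B1(x): return x[0]**2 + 4*x[1]**2 - 1.0       # CBF 1
def B2(x): return 4*x[0]**2 + x[1]**2 - 1.0       # CBF 2
def w1(x): return x[0]**2 + x[1]**2 - 2.0         # allowable-set polynomial 1
def w2(x): return (x[0] - 0.1)**2 + x[1]**2 - 2.0 # allowable-set polynomial 2

CBFS, WS = [B1, B2], [w1, w2]

def in_safe_set(x):       # X_s: intersection of all CBF sublevel sets, eq. (21)
    return all(B(x) <= 0 for B in CBFS)

def in_allowable_set(x):  # X_a, eq. (22)
    return all(w(x) <= 0 for w in WS)

# Empirical sanity check of X_s being contained in X_a (cf. condition (23)).
rng = np.random.default_rng(0)
for x in rng.uniform(-2.0, 2.0, size=(10_000, 2)):
    if in_safe_set(x):
        assert in_allowable_set(x)
```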
**Assumption 2**: _The sets \(\big\{x\in\mathbb{R}^{n}\mid w_{i}(x)\leq 0\big\}_{i\in[t]}\) are compact._ To conclude safety, the resulting invariant set \(\mathcal{X}_{s}\) needs to be contained in the allowable set, i.e. \(\mathcal{X}_{s}\subseteq\mathcal{X}_{a}\). This can be achieved by restricting the invariant set of each CBF to the zero sublevel set of the corresponding \(w_{i}(x)\), as stated by the following lemma. **Lemma 3**: _Consider a set of CBFs \(\{B_{i}(x)\}_{i\in[t]}\) that have the control-sharing property with input \(u(x)\). If for every \(i\in[t]\)_ \[\mathcal{X}_{s,i}\subseteq\Big\{x\in\mathbb{R}^{n}\mid w_{i}(x)\leq 0\Big\}, \tag{23}\] _then the closed-loop system (2) using controller \(u(x)\) is safe w.r.t. the safe set \(\mathcal{X}_{s}\) (21) and the allowable set \(\mathcal{X}_{a}\) (22)._ **Proof.** For a closed-loop system to be safe, we need to show that \(\mathcal{X}_{s}\) is (i) forward invariant and that (ii) \(\mathcal{X}_{s}\subseteq\mathcal{X}_{a}\). Note that the sets \(\mathcal{X}_{s,i}\), \(i\in[t]\), are compact due to (23). By Lemma 2, every \(\mathcal{X}_{s,i}\), \(i\in[t]\), is forward invariant w.r.t. the closed-loop system, and the intersection of forward invariant sets is also forward invariant. This proves condition (i). Condition (ii) follows from \[\mathcal{X}_{s}=\bigcap_{i\in[t]}\mathcal{X}_{s,i}\subseteq\bigcap_{i\in[t]}\Big\{x\in\mathbb{R}^{n}\mid w_{i}(x)\leq 0\Big\}=\mathcal{X}_{a}.\] Finally, we formalize (23) as the empty set condition \[\Big\{x\in\mathbb{R}^{n}\mid w_{i}(x)\geq 0,\ -B_{i}(x)\geq 0\Big\}=\emptyset \tag{24}\] for all \(i\in[t]\). By Theorem 2 and Assumption 2, condition (24) is equivalent to the following SOS condition: \[B_{i}(x)-s_{4,i}(x)w_{i}(x)\in\Sigma[x], \tag{25}\] where \(s_{4,i}\in\Sigma[x]\) is an SOS polynomial for all \(i\in[t]\). ### Compatible CLF and CBFs The notions of stability and safety can be unified by the existence of both a CLF and multiple CBFs that are compatible with each other via the following control-sharing property. **Definition 4**: _Consider a closed-loop system (2) with feedback controller \(u(x)\). A CLF with associated controller \(u_{\mathrm{CLF}}(x)\) stabilizing the system to \(x^{*}\) and a set of CBFs \(\{B_{i}(x)\}_{i\in[t]}\) with associated controllers \(u_{\mathrm{CBF},i}(x)\) rendering (20) invariant have the control-sharing property if \(u(x)=u_{\mathrm{CLF}}(x)\) and \(u(x)=u_{\mathrm{CBF},i}(x)\) for all \(i\in[t]\)._ With Definition 4 in place, we can state our main proposition: **Proposition 4**: _Let a CLF \(V(x)\) and a set of CBFs \(\{B_{i}(x)\}_{i\in[t]}\) have the control-sharing property with input \(u(x)\). If condition (23) holds for every \(i\in[t]\), then the closed-loop system (2) with input \(u(x)\) is asymptotically stable and safe w.r.t. the allowable set \(\mathcal{X}_{a}\) (22)._ **Proof.** Lemma 1 and Lemma 3 prove stability and safety for the closed-loop system using input \(u(x)\).
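To illustrate the control-sharing idea numerically, the toy sketch below (a scalar system with hand-picked \(V\), \(B_{1}\), \(p\), and \(s_{1}\); all illustrative, not from the paper) checks the CLF decrease condition and the CBF boundary condition for one shared rational controller \(u(x)=p(x)/s_{1}(x)\).

```python
import numpy as np

# Toy system dx/dt = f(x) + G(x) u with f(x) = x, G(x) = 1 (illustrative).
f = lambda x: x
G = lambda x: 1.0

# Hand-picked certificates and a shared rational controller u = p / s1.
V   = lambda x: x**2          # candidate CLF, x* = 0
dV  = lambda x: 2*x
B1  = lambda x: x**2 - 1.0    # candidate CBF, safe set [-1, 1]
dB1 = lambda x: 2*x
p   = lambda x: -2*x
s1  = lambda x: 1.0           # strictly positive, cf. (14)
u   = lambda x: p(x) / s1(x)

# CLF condition: dV/dt < 0 for all x != x* (sampled check).
for x in np.linspace(-2.0, 2.0, 401):
    if abs(x) > 1e-6:
        assert dV(x) * (f(x) + G(x) * u(x)) < 0

# CBF condition: dB1/dt < 0 on the boundary {B1(x) = 0} = {-1, +1}.
for x in (-1.0, 1.0):
    assert dB1(x) * (f(x) + G(x) * u(x)) < 0
```

Here one controller satisfies both conditions at once, which is exactly what restricting (13) and (19) to be identical enforces.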
Based on conditions (12), (14), (18) and (25), the stability and safety conditions in Proposition 4 can be restated as the SOS constraints: \[\begin{array}{ll}-\nabla V(x)^{T}\left(s_{1}(x)f(x)+G(x)p(x)\right)-l(x)^{2}\in\Sigma[x],&\\ -\nabla B_{i}(x)^{T}\left(s_{1}(x)f(x)+G(x)p(x)\right)-p_{m+1,i}(x)B_{i}(x)\in\Sigma[x],&\forall i\in[t]\\ B_{i}(x)-s_{4,i}(x)w_{i}(x)\in\Sigma[x],&\forall i\in[t]\\ s_{1}(x)-\epsilon_{s1}\in\Sigma[x],&\end{array} \tag{26}\] where \(s_{1}(x),s_{4,i}(x)\in\Sigma[x]\) are SOS polynomials, \(p(x)=\left[p_{1}(x)\ ...\ p_{m}(x)\right]^{T}\in(R[x])^{m}\) is a vector of polynomials, and \(p_{m+1,i}\in R[x]\) is a scalar polynomial. The resulting control input \(u(x)=p(x)/s_{1}(x)\) renders the system asymptotically stable on \(\mathcal{X}_{a}\) and safe w.r.t. the allowable set \(\mathcal{X}_{a}\) and the safe set \(\mathcal{X}_{s}\). ## 5 SOS Program In this section, we state the SOS program that finds a CLF and multiple CBFs for the control system (1). The SOS program maximizes a surrogate of the volume of \(\mathcal{X}_{s}\) subject to the SOS condition (26). ### Cost function Ideally, the SOS program would optimize the CLF and CBFs over the volume of the safe set \(\mathcal{X}_{s}\): \[\begin{array}{ll}\underset{v}{\text{minimize}}&-vol\left(\left\{x\in\mathbb{R}^{n}\ |\ B_{i}(x,v)\leq 0\text{ for }i\in[t]\right\}\right)\\ \text{subject to}&\text{SOS constraints (26)}\end{array} \tag{27}\] where \(B_{i}(x,v)=Z_{B,i}(x)^{T}Q_{B,i}(v)Z_{B,i}(x)\) (cf. (3)). To make (27) computationally tractable, the volume of \(\mathcal{X}_{s}\) is approximated by a surrogate: the traces of the matrices \(Q_{B,i}\) encoding the \(B_{i}(x)\). The cost then becomes: \[c_{1}(v)=\sum_{i=1}^{t}tr(Q_{B,i}(v)). \tag{28}\] Furthermore, we have empirically observed that the outcome is improved by restricting the CBF to a predefined center point. The justification for this lies in the fact that the CBF condition only applies where \(B(x)=0\) and, therefore, does not prevent the CBF from taking arbitrarily large values where \(B(x)\neq 0\). Let us define a center point \((x_{c,i},B_{c,i})\in(\mathbb{R}^{n},\mathbb{R})\) for each CBF \(B_{i}(x)\), \(i\in[t]\). Then, we define a cost function by the deviation from that point: \[c_{2}(v)=\sum_{i=1}^{t}(B_{i}(x_{c,i})-B_{c,i})^{2}. \tag{29}\] The cost function of the SOS program is set to \[c(v)=c_{1}(v)+c_{2}(v). \tag{30}\] ### SOS Program Given the cost (30) and the SOS constraints (26), we summarize the SOS program by: \[\begin{array}{ll}\text{find}&V,B_{i},s_{1},s_{4,i},p_{m+1,i}\in R[x],\ p\in(R[x])^{m}\\ \text{minimize}&\sum_{i=1}^{t}\left(tr(Q_{B,i})+(B_{i}(x_{c,i})-B_{c,i})^{2}\right)\\ \text{subject to}&-\nabla V^{T}\left(s_{1}f+Gp\right)-l^{2}\in\Sigma[x]\\ &-\nabla B_{i}^{T}\left(s_{1}f+Gp\right)-p_{m+1,i}B_{i}\in\Sigma[x]\quad\forall i\in[t]\\ &B_{i}-s_{4,i}w_{i}\in\Sigma[x]\quad\forall i\in[t]\\ &s_{1}-\epsilon_{s1}\in\Sigma[x]\\ &s_{4,i}\in\Sigma[x]\quad\forall i\in[t]\\ &V(x^{*})=0\end{array} \tag{31}\] where \((x_{c,i},B_{c,i})\in(\mathbb{R}^{n},\mathbb{R})\) are the predefined center points for each CBF. ## 6 Algorithm The SOS problem in Section 5.2 cannot be directly converted to an SDP due to bilinear terms in its decision variables. However, an alternating algorithm can find adequate solutions to this non-convex problem.
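Before turning to the algorithm, a minimal sketch of the surrogate cost (30) may help (NumPy; the monomial basis, Gram matrices, and center points below are illustrative placeholders, not the paper's):

```python
import numpy as np

# Monomial basis Z(x) = [1, x1, x2] for quadratic CBFs in R^2 (illustrative).
def Z(x):
    return np.array([1.0, x[0], x[1]])

def B(x, Q):
    # CBF value B_i(x) = Z(x)^T Q_{B,i} Z(x), cf. the Gram representation (3).
    return Z(x) @ Q @ Z(x)

def cost(Q_list, centers):
    """Surrogate cost c(v) = c1(v) + c2(v), eqs. (28)-(30).

    Q_list  -- Gram matrices Q_{B,i} encoding the CBFs
    centers -- list of (x_c_i, B_c_i) center points
    """
    c1 = sum(np.trace(Q) for Q in Q_list)                  # eq. (28)
    c2 = sum((B(xc, Q) - Bc) ** 2
             for Q, (xc, Bc) in zip(Q_list, centers))      # eq. (29)
    return c1 + c2

# Hypothetical data: two quadratic CBFs sharing one center point.
Q1 = np.diag([-1.0, 1.0, 4.0])   # B_1(x) = -1 + x1^2 + 4 x2^2
Q2 = np.diag([-1.0, 4.0, 1.0])   # B_2(x) = -1 + 4 x1^2 + x2^2
centers = [(np.array([-0.3, 0.0]), -10.0)] * 2
print(cost([Q1, Q2], centers))
```

In the actual SOS program the \(Q_{B,i}\) are decision variables parameterized linearly by \(v\), so both cost terms remain convex in \(v\).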
### Alternating Algorithm Consider an abstract optimization problem with \(n_{c}\) SOS inequality constraints defined by \(f_{1}(v_{1},v_{2})\in(R[x])^{n_{c}}\) that are bilinear in the decision variables \(v_{1}\in\mathbb{R}^{n_{1}}\) and \(v_{2}\in\mathbb{R}^{n_{2}}\), and a cost function \(f_{0}(v_{1})\in\mathbb{R}\) that only depends on \(v_{1}\): \[\begin{array}{ll}\underset{v_{1},v_{2}}{\text{minimize}}&f_{0}(v_{1})\\ \text{subject to}&f_{1}(v_{1},v_{2})\in(\Sigma[x])^{n_{c}}\end{array} \tag{32}\] The optimization problem (32) cannot be solved directly with an SOS program. Instead, we propose a method that alternates between searching over one variable while holding the other fixed. Starting from initial iterates \(v_{1}^{(0)}\) and \(v_{2}^{(0)}\), the algorithm at iteration \(k\) is defined by: * Step 1: Substitute \(v_{2}=v_{2}^{(k-1)}\) and solve for \(v_{1}\) \[\begin{array}{ll}\underset{v_{1}}{\text{minimize}}&f_{0}(v_{1})\\ \text{subject to}&f_{1}(v_{1},v_{2}^{(k-1)})\in(\Sigma[x])^{n_{c}}\end{array} \tag{33}\] * Step 2: Substitute \(v_{1}=v_{1}^{(k)}\) and solve for \(\epsilon\) and \(v_{2}\) \[\begin{array}{ll}\underset{v_{2},\epsilon}{\text{minimize}}&\epsilon\\ \text{subject to}&f_{1}(v_{1}^{(k)},v_{2})+s(v_{1}^{(k)})\epsilon\in(\Sigma[x])^{n_{c}}\end{array} \tag{34}\] The algorithm terminates when \(|f_{0}(v_{1}^{(k)})-f_{0}(v_{1}^{(k-1)})|\) is lower than a given threshold. **Lemma 5**: _If the initial pair \((v_{1}^{(0)},v_{2}^{(0)})\) is feasible for (32), then feasibility is maintained along the alternating algorithm and the cost \(f_{0}(v_{1}^{(k)})\) in (32) is non-increasing with the iteration \(k\)._ **Proof.** First, we show that if Step 1 (33) is feasible in iteration \(k\), then Step 2 (34) is feasible in iteration \(k\) as well. For feasibility, we only need to find a point \((v_{2},\epsilon)\) for which the constraint in Step 2 holds. From Step 1, we know that \(f_{1}(v_{1}^{(k)},v_{2}^{(k-1)})\) is a vector of SOS polynomials. But then \((v_{2},\epsilon)=(v_{2}^{(k-1)},0)\) is a feasible point of (34). As a consequence, the optimal \(\epsilon\) must be less than or equal to zero, i.e. \(\epsilon\leq 0\). Next, we show that the solution of iteration \(k\) is a feasible point of Step 1 in iteration \(k+1\). First, note that \(-s(v_{1}^{(k)})\epsilon\) is a vector of SOS polynomials since \(\epsilon\leq 0\). But then \(f_{1}(v_{1}^{(k)},v_{2}^{(k)})\) is a vector of SOS polynomials as well. Therefore, \((v_{1}^{(k)},v_{2}^{(k)})\) is a feasible point of Step 1 in iteration \(k+1\). Taken together, we showed that \((v_{1}^{(k)},v_{2}^{(k)})\) remains feasible for (32). Therefore, \(f_{0}(v_{1}^{(k+1)})\leq f_{0}(v_{1}^{(k)})\). ### Main algorithm The main algorithm improves feasibility by introducing an operating region \(\mathcal{X}_{op}\). Under Assumption 2, there exists a scalar polynomial \(r(x)\) whose zero sublevel set is compact and strictly contains the allowable set \(\mathcal{X}_{a}\), i.e. \[\mathcal{X}_{s}\subseteq\mathcal{X}_{a}\subseteq\mathcal{X}_{op}:=\Big\{x\in\mathbb{R}^{n}\mid r(x)\leq 0\Big\}. \tag{35}\] Note that all state trajectories of interest are contained in the operating region \(\mathcal{X}_{op}\). We will use this fact to slightly alter the empty set conditions and the corresponding SOS constraints developed in the previous sections.
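The alternating scheme of Section 6.1 can be summarized in code as follows. This is an illustrative skeleton: `solve_step1` and `solve_step2` are hypothetical placeholders standing in for the two SOS programs (33) and (34), e.g. assembled with an SOS/SDP toolbox of choice.

```python
# Skeleton of the alternating algorithm (Section 6.1); illustrative only.
def alternate(v1, v2, f0, solve_step1, solve_step2, tol=1e-6, max_iter=100):
    """Alternate between (33) and (34) until the cost f0 stagnates.

    v1, v2      -- initial (feasible) decision variables
    f0          -- cost function of the abstract problem (32)
    solve_step1 -- v2 fixed: returns argmin of f0 s.t. f1(., v2) is SOS
    solve_step2 -- v1 fixed: returns (v2, eps) minimizing the slack eps
    """
    prev_cost = float("inf")
    for _ in range(max_iter):
        v1 = solve_step1(v2)        # Step 1: convex in v1 for fixed v2
        v2, eps = solve_step2(v1)   # Step 2: convex in v2 for fixed v1
        cost = f0(v1)
        # By Lemma 5, eps <= 0 here and the cost is non-increasing,
        # so the stopping criterion below is eventually met.
        if abs(cost - prev_cost) < tol:
            break
        prev_cost = cost
    return v1, v2
```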
Given an allowable set \(\mathcal{X}_{a}\) defined by \(w_{i}(x)\) for \(i\in[t]\) (22), a positive definite SOS polynomial \(l(x)\in\Sigma[x]\), a positive scalar \(\epsilon_{s1}>0\), center points \((x_{c,i},B_{c,i})\in(\mathbb{R}^{n},\mathbb{R})\) for each \(i\in[t]\), a region of operation defined by \(r(x)\) (35), and initial values of \(s_{1}(x)\), \(p(x)\), and \(p_{m+1,i}(x)\) for \(i\in[t]\), the main SOS algorithm is summarized below. * Step 1: Given a controller \(u=p/s_{1}\), find a CLF and multiple CBFs: \[\begin{array}{ll}\text{find}&V,B_{i},s_{2},s_{3,i},s_{4,i}\in R[x]\\ \text{minimize}&\sum_{i=1}^{t}\left(tr(Q_{B,i})+(B_{i}(x_{c,i})-B_{c,i})^{2}\right)\\ \text{subject to}&-\nabla V^{T}\left(s_{1}f+Gp\right)-l^{2}+s_{2}r\in\Sigma[x]\\ &-\nabla B_{i}^{T}\left(s_{1}f+Gp\right)-p_{m+1,i}B_{i}+s_{3,i}r\in\Sigma[x]\quad\forall i\in[t]\\ &B_{i}-s_{4,i}w_{i}\in\Sigma[x]\quad\forall i\in[t]\\ &s_{2}\in\Sigma[x]\\ &s_{3,i}\in\Sigma[x]\quad\forall i\in[t]\\ &V\in\Sigma[x],\ V(x^{*})=0\end{array}\] * Step 2: Given a CLF and multiple CBFs, find a controller, i.e. \(s_{1},p_{m+1,i}\in R[x]\) and \(p\in(R[x])^{m}\), following Step 2 of the alternating algorithm in Section 6.1. ## 7 Simulation The main algorithm from Section 6.2 is implemented for a three-dimensional vector field realization of a dc/ac power converter model. A feedback controller that avoids unsafe states is of paramount importance for this application since all electrical variables (e.g. voltages and currents) need to be constrained at all times. With the dc voltage and ac currents as states, the control system (1) is defined by \[f(x)=\begin{bmatrix}-0.05x_{1}-57.9x_{2}+0.00919x_{3}\\ 1710x_{1}+314x_{3}\\ -0.271x_{1}-314x_{2}\end{bmatrix} \tag{36}\] and \[G(x)=\begin{bmatrix}0.05-57.9x_{2}&-57.9x_{3}\\ 1710+1710x_{1}&0\\ 0&1710+1710x_{1}\end{bmatrix}. \tag{37}\] The allowable set \(\mathcal{X}_{a}\) is defined by two polynomials \(w_{1}(x)\) and \(w_{2}(x)\) encoding the state constraints for current and voltage, respectively: \[\begin{array}{l}w_{1}(x)=(x_{1}+0.3)^{2}+(x_{2}/20)^{2}+(x_{3}/20)^{2}-0.5^{2}\\ w_{2}(x)=(x_{1}/20)^{2}+x_{2}^{2}+x_{3}^{2}-1.2^{2}.\end{array} \tag{38}\] An operating region \(\mathcal{X}_{op}\) in (35) can be parameterized by \[r(x)=(x_{1}/0.8)^{2}+(x_{2}/1.2)^{2}+(x_{3}/1.2)^{2}-1.8. \tag{39}\] An \(r(x)\) that fits the allowable set more tightly results in faster convergence of the alternating algorithm. For illustration purposes, however, we selected a suboptimal operating region. The center points of the CBFs are given by \(x_{c,i}=\left[-0.3\ 0\ 0\right]^{T}\) and \(B_{c,i}=-10\) for \(i\in[2]\). The resulting controller \(u=p/s_{1}\) renders the system safe w.r.t. a safe set \(\mathcal{X}_{s}\). By using two CBFs, the algorithm terminates with a safe set that tightly fits the allowable set \(\mathcal{X}_{a}\). The number of variables and constraints involved in the resulting SDPs is summarized as follows: \begin{tabular}{||c c c||} \hline & Step 1 & Step 2 \\ \hline \hline Variables & 337 & 372 \\ \hline Constraints & 5188 & 4892 \\ \hline \end{tabular} The algorithm completed in 90 seconds. For each iteration, the SDP solver took on average 33 inner iterations to solve Step 1 and on average 21 inner iterations to solve Step 2. ## 8 Conclusions and Future Work This paper presented a framework to combine stability and safety conditions using a CLF and multiple CBFs.
By employing two versions of the Positivstellensatz, we synthesized a controller used to prove compatibility between the CLF and the CBFs. We then formalized SOS constraints that encode compatible CLF and CBF conditions. Finally, we proposed an algorithm that solves the resulting SOS program by iteratively solving two SDPs. For future work, we plan to study the computational complexity of our algorithm and to explore a unified framework that proves stability and safety with weaker CLF and CBF conditions. Moreover, we intend to incorporate noise-robustness into our SOS formulation, as demonstrated by Kang et al. (2023).
2304.08891
Tailoring Domain Adaptation for Machine Translation Quality Estimation
While quality estimation (QE) can play an important role in the translation process, its effectiveness relies on the availability and quality of training data. For QE in particular, high-quality labeled data is often lacking due to the high cost and effort associated with labeling such data. Aside from the data scarcity challenge, QE models should also be generalizable, i.e., they should be able to handle data from different domains, both generic and specific. To alleviate these two main issues -- data scarcity and domain mismatch -- this paper combines domain adaptation and data augmentation within a robust QE system. Our method first trains a generic QE model and then fine-tunes it on a specific domain while retaining generic knowledge. Our results show a significant improvement for all the language pairs investigated, better cross-lingual inference, and a superior performance in zero-shot learning scenarios as compared to state-of-the-art baselines.
Javad Pourmostafa Roshan Sharami, Dimitar Shterionov, Frédéric Blain, Eva Vanmassenhove, Mirella De Sisto, Chris Emmery, Pieter Spronck
2023-04-18T10:36:50Z
http://arxiv.org/abs/2304.08891v2
# Tailoring Domain Adaptation for Machine Translation Quality Estimation ###### Abstract While quality estimation (QE) can play an important role in the translation process, its effectiveness relies on the availability and quality of training data. For QE in particular, high-quality labeled data is often lacking due to the high cost and effort associated with labeling such data. Aside from the _data scarcity_ challenge, QE models should also be generalizable; i.e., they should be able to _handle data from different domains_, both generic and specific. To alleviate these two main issues -- data scarcity and domain mismatch -- this paper combines domain adaptation and data augmentation in a robust QE system. Our method first trains a generic QE model and then fine-tunes it on a specific domain while retaining generic knowledge. Our results show a significant improvement for all the language pairs investigated, better cross-lingual inference, and a superior performance in zero-shot learning scenarios as compared to state-of-the-art baselines. + Footnote †: © 2023 The authors. This article is licensed under a Creative Commons 4.0 licence, no derivative works, attribution, CC-BY-ND. ## 1 Introduction Predicting the quality of machine translation (MT) output is crucial in translation workflows. Informing translation professionals about the quality of an MT system allows them to quickly assess the overall usefulness of the generated translations and gauge the amount of post-editing that will be required [13, 20]. Quality estimation (QE) is an approach that aims to reduce the human effort required to analyze the quality of an MT system by assessing the quality of its output without the need for reference translations. QE can be applied at the word, sentence, or document level. The goal of sentence-level QE, which is the focus of our work, is to predict a quality label based on a source sentence and its MT equivalent. This label (i.e., the quality estimate) can be expressed in various ways such as TER/HTER [21], BLEU [2] or any metric of interest to the user. Training a sentence-level QE system typically requires aligned data of the form: _source sentence_ (SRC), _target sentence_ (TRG), and _quality gold label_ (LBL). However, most quality labels are by-products of MT and post-editing -- a rather difficult and expensive process -- limiting the size of the available QE data [15, 23]. The WMT QE shared task [22, 20] has offered a platform to compare different QE systems and to share QE data. Despite efforts from initiatives like the QE shared task to publicly release QE datasets, such resources remain scarce across language pairs and, by extension, also have a limited coverage across domains [16, 17]. This can pose a challenge for all QE models, especially recent ones that utilize large pre-trained language models (LLMs) [18, 2], since fine-tuning pre-trained models with small datasets has been demonstrated to be quite unstable [19, 20]. Furthermore, QE models trained on specific data do not generalize well to other domains that are outside of the training domain [12]. _Domain mismatches_ lead to significant decreases in the performance of QE models (de Souza et al., 2014; Zouhar et al., 2023). To improve the generalizability of QE models, it is important to establish the right balance between domain-specific and generic training data. To date, only a few attempts have been made to address this challenge (de Souza et al., 2014; Rubino, 2020; Lee, 2020).
Thus, the majority of QE models have difficulty accurately estimating quality across different domains, whether generic or specific (Zouhar et al., 2023). In this work, we propose to tackle both the data scarcity and the domain mismatch challenge that LLM-based QE models face. _We propose a methodology whereby a small amount of domain-specific data is used to boost the overall QE prediction performance._ This approach is inspired by work on domain adaptation (DA) in the field of MT, where a large generic model is initially trained and then fine-tuned with domain-specific data (Chu and Wang, 2018; Pham et al., 2022). To assess the validity of the proposed approach in QE, we conducted experiments using small and large, authentic and synthetic data in bilingual, cross-lingual, and zero-shot settings. We experimented with publicly available language pairs from English (EN) into German (DE), Chinese (ZH), Italian (IT), Czech (CS), and Japanese (JA) and from Romanian (RO) and Russian (RU) into English (EN). We used the common test sets from the WMT 2021 QE shared tasks1. Footnote 1: [https://www.statmt.org/wmt21/quality-estimation-task.html](https://www.statmt.org/wmt21/quality-estimation-task.html) Our experiments show a statistically significant improvement in the performance of QE models. Our findings also indicate that our implementation leads not only to better multi-/cross-lingual QE models (where multi-/cross-lingual data is provided) but also to better zero-shot QE (where no data for the evaluated language pairs is provided at training). The main contributions of our research are: * A QE methodology that employs DA and data augmentation (DAG), along with a novel QE training pipeline that supports this methodology. * An empirical demonstration of the pipeline's effectiveness, which highlights improvements in QE performance and better cross-lingual inference. * A comparative analysis with state-of-the-art (SOTA) baseline methods that demonstrates the effectiveness of our approach in enhancing zero-shot learning (ZSL) for the task of QE. * Adaptable QE pipelines that can be tailored and implemented for other language pairs; i.e., highly generalizable QE pipelines. To the best of our knowledge, this is the first QE methodology to use DA and DAG. Furthermore, it is easily reusable and adaptable: (i) while we used XLM-R in our experiments, one can easily replace it with any preferred LLM as long as the input-output criteria are met; (ii) we built our tool around Hugging Face (HF) implementations of LLMs, meaning one can employ a certain generic model and apply it to any QE task by simply fine-tuning it on (newly-collected) QE data. ## 2 Domain adaptation for specialized QE In this section, we outline our methodology for training LLM-based QE models for a specific domain with limited available in-domain data. This involves: (i) a set of training steps that we found to be particularly effective, and (ii) DAG techniques to improve the QE models' specificity. Additionally, we provide details on two different training modes we implemented (with or without tags). ### Training steps We implement the "mixed fine-tuning + fine-tuning" DA technique that proved promising for MT (Chu et al., 2017). We tailor this methodology to suit our needs following the steps outlined below. A visualization of the steps involved can be found in Appendix A.1. Our technique involves leveraging both in-domain (ID) and out-of-domain (OOD) QE data (see Section 3.1 for details on the datasets).
**Step 1.** We train a QE model using OOD data until it converges. We employ the experimental framework described in Section 3.2, in which an LLM is fine-tuned to predict QE labels. The goal of this step is two-fold: (i) leveraging the LLM's cross-lingual reference capabilities and (ii) building a generic QE model. This way we ensure that the model can estimate the quality of a broad range of systems, but with limited accuracy on ID data. **Step 2.** The model's parameters are fine-tuned using a mix of OOD and ID data. We use different ID data, both authentic and synthetic, according to the DAG approaches in Section 2.2. The objective here is to ensure the model does not forget the generic-domain knowledge acquired during the first step while simultaneously improving its ability to perform QE on the domain-specific data. This mixing step is often referred to as "oversampling" in the DA literature, where a smaller subset of OOD data is concatenated with ID data to allow the model to assign equal attention to both datasets; it aims to further adapt the model to the specific domain of interest. **Step 3.** We continue to train the QE model on a specific ID dataset until convergence, resulting in a more domain-specific QE model than that obtained in Step 2. ### Data augmentation for DA in QE In our study, we explore two alternative approaches to oversampling to optimize the utilization of available ID resources and assess the potential benefits of incorporating synthetic ID data into the QE pipeline: **Approach 1: Concatenating all available authentic ID data across all languages.** The XLM-R model is multilingual, allowing us to apply it to different language pairs. When there is not enough data to fine-tune it for a specific language, one can use multilingual data. In our work, to increase the amount of authentic data (given the small volume of parallel data for two languages), we construct a multilingual ID dataset: we concatenate all available ID data, which includes different language pairs. The rationale behind this approach is to make use of all available authentic resources in order to improve the performance of the QE model by providing better cross-lingual references. **Approach 2: Generating synthetic ID data.** Given that all available ID resources have already been utilized in Approach 1, we propose to supplement the existing data with artificially generated additional ID data using a trained MT model for each language pair, inspired by the research conducted by Negri et al. (2018) and Lee (2020). This approach aims to tackle the data scarcity problem and further improve the QE model's accuracy. Let \(D_{lp}\) denote the publicly available parallel data (SRC, TRG) for a language pair \(lp\), as identified in Section 3.1. The approach consists of the following steps for each ID involved in the pipeline: 1. Randomly select \(N\) samples from \(D_{lp}\) to obtain a set \(S_{lp}\) of training samples. Divide \(S_{lp}\) into two equal sets \(S_{1}\) and \(S_{2}\). 2. Train a multilingual MT model \(M_{lp}\) on \(S_{1}\) (details of the model can be found in Section 3.2). 3. Use \(M_{lp}\) to translate the source side of \(S_{2}\) (or a portion of it), obtaining a set \(T_{lp}\) of translated samples. 4. Compute quality labels (e.g., TER/HTER) by comparing \(T_{lp}\) with the reference (\(TRG\)) text from \(S_{2}\). The resulting three-part output of this approach comprises the source side of \(S_{2}\), \(T_{lp}\), and the TER/HTER labels obtained in the fourth step.
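A minimal sketch of this labeling procedure is given below (Python; the `translate` function is a hypothetical placeholder wrapping the MT model \(M_{lp}\) trained on \(S_{1}\), e.g. a fine-tuned mBART-50, and the sample size is illustrative). The TER computation assumes the sacrebleu 2.x metrics API, and the splits use the same `random_state=8, shuffle=True` convention as the rest of the pipeline.

```python
from sacrebleu.metrics import TER                 # assuming sacrebleu >= 2.x
from sklearn.model_selection import train_test_split

def build_synthetic_qe_data(parallel_pairs, translate, n=14_000):
    """Approach 2 sketch: returns synthetic (SRC, MTT, TER) triplets."""
    # Step 1: sample N pairs and split them into two equal sets S1, S2.
    sample, _ = train_test_split(parallel_pairs, train_size=n,
                                 random_state=8, shuffle=True)
    S1, S2 = train_test_split(sample, test_size=0.5,
                              random_state=8, shuffle=True)

    # Step 2 is training M_lp on S1 (omitted here).
    # Step 3: translate the source side of S2 (or a portion of it).
    sources = [src for src, _ in S2]
    references = [trg for _, trg in S2]
    translations = translate(sources)   # hypothetical MT wrapper

    # Step 4: compute sentence-level TER labels against the references.
    ter = TER()
    labels = [ter.sentence_score(hyp, [ref]).score
              for hyp, ref in zip(translations, references)]

    return list(zip(sources, translations, labels))
```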
A visual representation of these steps can be found in Appendix A.3. ### Additional indication of domain In NMT, in order to handle multiple domains and reduce catastrophic forgetting, DA has been controlled using additional tags added at the beginning or at the end of the sentence (Sennrich et al., 2016; Chu and Dabre, 2019). Following these studies, we explore two training modes: (i) with tag ("TAG"), by appending either <OOD> or <ID> at the end of sentences based on the dataset domain type (i.e., OOD or ID). The input format in this mode is <s> SRC </s> TRG <Tag> </s>, where SRC and TRG represent the source and target of the QE triplet, and <s> and </s> are the beginning and separator tokens for the LLM used in the pipeline; (ii) without tag ("NO TAG"), where the training steps are the same as detailed in Section 2.1. ## 3 Experiments ### Data We conducted experiments on publicly available data in different languages: from EN into DE, ZH, IT, CS, and JA and from RO and RU into EN. We categorize the data into three groups according to their use in our pipeline: **Group 1: for building _ID_ and _OOD_ QE models.** The _ID_ data is collected from the WMT 2021 shared task on QE (Specia et al., 2021), Task 2, consisting of sentence-level post-editing efforts for four language pairs: EN-DE, EN-ZH, RU-EN and RO-EN. For each pair there are train, development (dev), and test sets of \(7K\), \(1K\), and \(1K\) samples, respectively. Additionally, as our _OOD_ data we used the eSCAPE (Negri et al., 2018) dataset with approximately 3.4\(M\) tokenized SRC, machine-translated text (MTT), and post-edited (PE) sentences. We used sacrebleu2(Post, 2018) to calculate TER (Snover et al., 2006) from MTT and PE pairs. We split the data into train, dev, and test sets via the scikit-learn package3(Pedregosa et al., 2011) with 98%, 1%, and 1% of the total data, respectively. To improve the generalization of our models and enable them to better adapt to specific QE through the ID dataset, we utilized a larger OOD dataset. This decision is in line with prior studies on DA, which are described in the related work section (Section 6). Footnote 2: signature:nrefs:1|case:lc|tok:tercom|punct:yes|version:2.3.1 Footnote 3: random state/seed=8, shuffle=True, used for all splits. **Group 2: for building MT systems as a component of _Approach 2_ in the proposed DAG (Section 2.2).** We collected parallel data -- SRC and reference translations (REF) -- from Opus (Tiedemann, 2012) for each language pair used in ID: EN-DE, EN-ZH, RO-EN, and RU-EN. Next, we trained MT models for Approach 2 of our methodology by selecting 4\(M\) samples and dividing them into two equal parts, each with 2\(M\) samples. We split one of the two parts into train, dev, and test sets. To save time during evaluation and inference, we set the size of the dev and test splits to be the same as the number of training samples in the ID datasets, which is 7\(K\). Moreover, we randomly selected a portion of the SRC (7\(K\) out of 2\(M\)) in the second split, which was not used for training. We passed this portion to the trained MT model to get MTT. Finally, we computed the TER using the MTT and the corresponding REF via sacrebleu. We set the portion size to 7\(K\), as the goal was to double the size of the initial ID data. **Group 3: for testing the zero-shot capabilities of the trained QE models in our proposed methodology.** We used two zero-shot test sets, namely English to Czech (EN-CS) and English to Japanese (EN-JA), which were provided by the WMT 2021 shared task on QE for Task 2.
Each test set contained 1\(K\) samples. ### Frameworks **Quality Estimation.** To train all QE models of our study, we developed a new QE framework with the ability to invoke multilingual models from the HF model repository. In all our experiments we chose to use XLM-RoBERTa4 (XLM-R) (Conneau et al., 2020) to derive cross-lingual embeddings, which has shown success in prior studies such as Ranasinghe et al. (2020). The framework is similar in architecture to "MonoTransQuest" (Ranasinghe et al., 2020), but adapted to the needs of our experiments. The differences with "MonoTransQuest" are the additional tokens (<OOD> and <ID>) added during the tokenization process, as well as the resizing of the model's token embeddings in order to support the added tags. Additionally, rather than computing the softmax, we directly used the logits to estimate the quality labels. Footnote 4: xlm-roberta-large **Training and evaluation details of QE models.** In Section 2.1 we describe our methodology for training and evaluating QE models. During Step 1, we trained and evaluated an OOD QE model every 1000 \(steps_{HF}\)5 using the train and dev sets from Group 1. In Step 2, we trained and evaluated QE mix models every 500 \(steps_{HF}\) using a mix of OOD and ID data from Group 1. For Step 3, we evaluated the final domain-specific QE model after 500 \(steps_{HF}\) using only an ID train and dev set. Throughout training, we used an early stopping mechanism to halt the training process if there was no improvement in the evaluation loss after 5 evaluations. We adjusted the default evaluation \(steps_{HF}\) from 500 to 1000 for Step 1 due to the larger number of training samples in that step. Footnote 5: \(steps_{HF}\) refers to the Hugging Face framework’s training or evaluation steps, which are different from the ones we described in Section 2.1. **Machine Translation.** Our approach to generating synthetic ID data (Approach 2, Section 2.2) differs from prior studies, such as Eo et al. (2021), which rely on a generic/common translation model (e.g., Google Translate). Instead, we first trained a separate NMT model on a subset of the original dataset. This approach ensures that the training data and the data used for translation have similar vocabularies and cover comparable topics, styles, and domains, which leads to higher quality translations. We used an in-house MT framework to train our models, based on pre-trained mBART-50 (Liu et al., 2020) from HF. We followed the Seq2SeqTrainingArguments recommended by HF and trained the model for Approach 2, stopping the training if the evaluation loss did not improve after 5 evaluations. We used the default hyperparameters recommended by HF for QE and MT, and our frameworks with modified hyperparameters are available at [https://github.com/JoyeBright/DA-QE-EAMT2023](https://github.com/JoyeBright/DA-QE-EAMT2023) to reproduce our results. ## 4 Results To assess the performance of our approach, we evaluate output from the trained QE models in comparison to the reference quality metric (HTER/TER) on the test sets described in data Groups 1 and 3. We use Pearson's correlation coefficient (\(\rho\in[-1,1]\), rescaled to \([-100,100]\) for clarity) to measure the correlation between our predictions and the test set labels. We use the BLEU score as a metric to evaluate the translation quality of our MT models. ### Baseline results To establish a baseline for our study, we fine-tuned XLM-R with the ID data for each language pair as provided by the WMT 2021 shared task (Group 1 of data).
This is a conventional approach employed in prior research, such as Ranasinghe et al. (2020), where pre-trained models are utilized to provide cross-lingual references for training QE models. We also attempted to compare our work with the models of Rubino (2020) and Lee (2020). The latter's experiments used the WMT 2020 test sets, while we used WMT 2021, which makes it difficult to compare our results to theirs directly. Furthermore, we could not replicate their models as no code was available at the time of writing this paper. Our baseline results are presented in Table 1. ### Main results In Table 1 we present our results using the DAG approaches and the two training modes (Tag and No Tag). Additional details on the statistical tests for each language pair are available in Appendix A.2. The results in Table 1 show that, in general, all of the proposed DA methods performed better than the baseline for each language pair, except for Approach 1 in the RO-EN language pair. For this language pair, the use of a domain tag led to reduced performance, and the improvement achieved without such a tag was not statistically significant. We also observe that the performance increase over the baseline for each language pair, shown as a percentage in the last column of Table 1, is substantial, except for RO-EN (only a 0.92% increase over the baseline). This is mainly due to the already high baseline performance (83.63), making it challenging to achieve significant improvements. Among the other language pairs, the EN-ZH pair had the largest increase in performance -- just over 25%. The RU-EN and EN-DE pairs had the second and third highest increases, with improvements of around 16% and 10% over their respective baselines. **Additional indication of domain results.** The results indicate that incorporating tags into the DA training pipeline was generally effective, although in some instances the improvement was not statistically significant compared to the models that were trained without tags. However, it was observed that at least one model outperformed the same language pair's models that were not trained with tags, when DAG techniques were used. Specifically, the EN-DE Approach 1 model trained with tags performed better than Approach 2 without tags, as did the EN-ZH Approach 1 model trained with tags relative to the same approach without tags. Finally, the RO-EN Approach 2 model trained with tags outperformed Approach 2 without tags, and the RU-EN Approach 1 model trained with tags exhibited better performance than Approach 1 without tags. ### Data Augmentation results Upon analyzing the integration of DAG techniques into the specialized QE pipeline, we observe that for most language pairs, both approaches showed better performance than their respective baselines. However, in situations where tags were not employed, Approach 2 showed statistically significant gains over Approach 1 only in the EN-ZH and RU-EN language pairs.
Moreover, when tags were used, Approach 2 led to statistically significant improvements only for EN-DE and EN-ZH. \begin{table} \begin{tabular}{l|c|c c|c c|c} \hline \multirow{2}{*}{Language pair} & \multirow{2}{*}{Baseline} & \multicolumn{2}{c|}{NO TAG} & \multicolumn{2}{c|}{TAG} & \multirow{2}{*}{Increase \%} \\ \cline{3-6} & & DAG 1 & DAG 2 & DAG 1 & DAG 2 & \\ \hline EN-DE & 47.17 & 49.93 & 49.54 & **51.90** & 51.25 & 10.03 \\ EN-ZH & 29.16 & 34.75 & 35.27 & 35.62 & **36.60** & 25.51 \\ RO-EN & 83.63 & 83.67 & 83.74 & 83.37 & **84.40** & 00.92 \\ RU-EN & 40.65 & 44.91 & 45.40 & **47.16** & 43.98 & 16.01 \\ \hline \end{tabular} \end{table} Table 1: **Pearson correlation scores for proposed QE models across 4 language pairs**: EN-DE, EN-ZH, RO-EN, and RU-EN. For each language pair, the bold result indicates the highest-performing method compared to the baseline. Results for the first and second DAG approaches are reported under DAG 1 and DAG 2, respectively. The column labeled “Increase %” shows the percentage improvement for the highest-performing model (in bold) compared to the baseline. These findings suggest that the choice of DAG approach and the use of tags should be carefully considered when applying DA in QE. Additionally, DAG was observed to be significant for EN-ZH in both cases -- with or without tags. ### Zero-shot results In order to evaluate the effectiveness of our QE models in the context of ZSL, we compared their performance with the baseline models for the EN-CS and EN-JA language pairs (test sets). The results of these tests are presented in Table 2. The findings show that, for the EN-CS test set, the QE model trained solely on the EN-DE dataset achieved the highest performance among all QE baselines, with a Pearson correlation score of 46.97. Additionally, we observe that our proposed DA pipeline performed even better than the highest-performing baseline for EN-CS, but only DAG Approaches 1 and 2 with tags were found to be statistically significant. Likewise, for the EN-JA test set, the highest-performing QE baseline was the one trained solely on the RU-EN dataset, with a Pearson correlation score of 20.32. In contrast to EN-CS, none of the models that were trained with our pipeline and with the RU-EN dataset outperformed the baselines. Nevertheless, we observed that three models trained with EN-ZH and using our pipeline (Approach 1 with and without tag, and Approach 2 with tag) performed better than the highest-performing baseline. Overall, these findings suggest that if a QE model is trained conventionally and evaluated on an unseen QE dataset, a certain degree of ZSL capability can be achieved due to the use of XLM-R. However, the proposed DA pipeline can significantly increase this degree, whether through models trained with the same dataset or other datasets used in the pipeline. Furthermore, we observed that training a QE model conventionally using certain language pairs may lead to decreased performance. For instance, a model trained exclusively with the EN-DE language pair showed a Pearson correlation of approximately 10. In such cases, the proposed pipeline may enhance performance even when using the same training data. ## 5 Additional observations ### Cross-lingual inference Table 3 shows that our proposed methodology has an overall advantage, in terms of cross-lingual inference, over the conventional training method of fine-tuning a pre-trained LLM with QE data (the baselines).
That is, the QE models trained with our proposed DA pipeline not only perform significantly better than the baselines on their target domain and language pair, but can also estimate the quality of other language pairs better, to some extent, than their corresponding baselines. By examining the data closely (bottom to top row of Table 3), we observe that XLM-R provides a limited level of cross-lingual inference, which is insufficient for estimating quality labels due to the absence of prior knowledge about them. However, using Step 1 of our pipeline, which utilizes little inference knowledge, the model already achieves an acceptable level of generalization across all language pairs. Specifically, the first step achieved an average Pearson correlation score of approximately 39, which is higher than all baseline scores, except for the RO-EN pair, which achieved around 42. Furthermore, the model trained using Step 1 of the pipeline achieved a Pearson correlation of around 70 when evaluated with the RO-EN test set. This result can be attributed to the training of the model with EN-IT data, which was used as OOD data. From a linguistic point of view, IT and RO belong to the same language family, i.e., the Romance languages (see Appendix A.5), which explains the high Pearson correlation score achieved by the model. As we move up the table, we can observe that the model built in Step 2 of our pipeline becomes more specific toward the task and the ID datasets. \begin{table} \begin{tabular}{l|l|c|c c|c c} \hline \multirow{2}{*}{Trained on} & \multirow{2}{*}{Test set} & \multirow{2}{*}{Baseline} & \multicolumn{2}{c|}{NO TAG} & \multicolumn{2}{c}{TAG} \\ \cline{4-7} & & & DAG 1 & DAG 2 & DAG 1 & DAG 2 \\ \hline \hline \multirow{2}{*}{EN-DE} & EN-CS & 46.97 & 48.77 & 48.07 & 47.78 & 47.82 \\ & EN-JA & 09.67 & 18.16 & 08.00 & 16.12 & 17.36 \\ \hline \hline \multirow{2}{*}{EN-ZH} & EN-CS & 35.56 & 49.33 & 48.54 & 47.98 & 46.83 \\ & EN-JA & 13.13 & 22.77 & 19.87 & 22.24 & 21.54 \\ \hline \hline \multirow{2}{*}{RO-EN} & EN-CS & 26.33 & 39.10 & 39.79 & 39.20 & 40.41 \\ & EN-JA & 18.88 & 20.34 & 18.55 & 20.11 & 21.22 \\ \hline \hline \multirow{2}{*}{RU-EN} & EN-CS & 28.42 & 45.58 & 44.85 & 46.43 & 45.22 \\ & EN-JA & 20.32 & 17.64 & 17.04 & 17.26 & 19.63 \\ \hline \end{tabular} \end{table} Table 2: Performance comparison of the proposed methods and the baseline model trained on the EN-DE, EN-ZH, RO-EN, and RU-EN datasets in the context of ZSL, with results presented for EN-CS and EN-JA test sets. Results for the first and second DAG approaches are reported under DAG 1 and DAG 2, respectively.
Consequently, there is an average improvement of around 3.5 Pearson correlation points (from 39.36 to 42.83) across the languages. This indicates that our DA pipeline is effective in improving domain-specific cross-lingual QE performance. Ultimately, fine-tuning Step 2 with any of the ID languages provides a highly domain-specific QE model that not only estimates the quality of its own language pair better, but also performs better cross-lingual inference than its baseline. \begin{table} \begin{tabular}{l|c c c c|c} \hline \multirow{2}{*}{Models} & \multicolumn{4}{c|}{Test Sets} & \multirow{2}{*}{AVG} \\ \cline{2-5} & EN-DE & EN-ZH & RO-EN & RU-EN & \\ \hline \hline Baseline & 47.17 & 19.67 & 44.96 & 32.91 & 36.17 \\ EN-DE & 49.93 & 22.66 & 78.97 & 39.55 & 47.77 \\ \(\Delta\) & 02.76 & 02.99 & 34.01 & 06.64 & **11.60** \\ \hline \hline Baseline & 30.34 & 29.16 & 47.55 & 36.87 & 35.98 \\ EN-ZH & 43.46 & 34.75 & 80.51 & 42.67 & 50.34 \\ \(\Delta\) & 13.12 & 05.59 & 32.96 & 05.80 & **14.36** \\ \hline \hline Baseline & 24.64 & 23.56 & 83.63 & 39.97 & 42.95 \\ RO-EN & 43.02 & 24.31 & 83.67 & 38.74 & 47.43 \\ \(\Delta\) & 18.38 & 00.75 & 00.04 & -01.23 & **04.48** \\ \hline \hline Baseline & 22.40 & 24.67 & 57.17 & 40.69 & 36.23 \\ RU-EN & 25.36 & 26.06 & 75.34 & 44.91 & 42.91 \\ \(\Delta\) & 02.96 & 01.39 & 18.17 & 04.22 & **06.68** \\ \hline \hline Step 2 & 38.29 & 24.72 & 76.96 & 31.35 & 42.83 \\ \hline Step 1 & 30.80 & 16.57 & 70.14 & 39.93 & 39.36 \\ \hline XLM-R & -02.74 & 07.30 & 02.97 & 03.12 & 02.66 \\ \hline \end{tabular} \end{table} Table 3: Performance comparison of proposed models and baselines across all test sets using Pearson correlation as the metric. \(\Delta\) represents the difference between them. The “AVG” column shows the overall difference for each model. Step 1: model trained with OOD data. Step 2: model trained with DAG Approach 1 and OOD data. Approach 2 in Step 2 had similar results, not included. XLM-R: model without any training. Models and baselines are color-coded for clarity, with bold numbers indicating the average \(\Delta\) across all language pairs, and underlined numbers representing each model’s performance on its respective test set. ### OOD Performance The main goals of DA are to quickly create an adapted system and to develop a system that performs well on ID test data while minimizing performance degradation on the general domain. In our study, we showed that models from Step 1 or Step 2 can be fine-tuned quickly using the user's data (achieving the first of these goals). Our main focus was on the assessment of ID QE. However, we also test the generalizability of our ID models on an OOD test set. Our results, summarized in Table 4, indicate that all ID models outperformed the corresponding baselines on the OOD test set, and we observe that incorporating ID data in Approaches 1 and 2 did not compromise the performance with respect to OOD. However, comparing the models' performance with that of models trained solely on OOD data, we see a small performance drop, which is inevitable and in most cases acceptable. \begin{table} \begin{tabular}{l|c c c c|c|c c} \hline \multirow{2}{*}{Trained with} & \multicolumn{7}{c}{QE Models} \\ \cline{2-8} & EN-DE & EN-ZH & RO-EN & RU-EN & OOD & DAG 1 & DAG 2 \\ \hline Baseline & 11.95 & 03.59 & 11.60 & 03.43 & & & \\ Our pipeline & 54.62 & 59.30 & 52.51 & 47.36 & 64.33 & 65.24 & 64.76 \\ \hline \(\Delta_{Baseline}\) & 42.67 & 55.71 & 40.91 & 43.93 & & & \\ \(\Delta_{OOD}\) & **-09.71** & **-05.03** & **-11.82** & **-16.97** & & & \\ \hline \end{tabular} \end{table} Table 4: Model comparison on the OOD test set using Pearson correlation as the metric. The \(\Delta_{Baseline}\) values indicate the performance difference relative to the corresponding baseline, while the \(\Delta_{OOD}\) values compare the models’ performance with the one trained solely with OOD data. ## 6 Related Work **Data Scarcity in QE.** The issue of data scarcity in MT QE has been explored in numerous previous studies. The work of Rubino and Sumita (2020) involves the use of pre-training sentence encoders and an intermediate self-supervised learning step to enhance QE performance at both the sentence and word levels. This approach aims to facilitate a smooth transition between pre-training and fine-tuning for the QE task. Similarly, Fomicheva et al. (2020) proposed an unsupervised method for QE that does not depend on additional resources and obtains valuable data from MT systems. Qiu et al. (2022) conducted a recent study on the impact of various types of parallel data in QE DAG, and put forward a classifier to differentiate the parallel corpus. Their research revealed a significant discrepancy between the parallel data and real QE data, as the most common QE DAG technique involves using the target side of parallel data as the reference translation (Baek et al., 2020; Qiu et al., 2022), followed by translation of the source side using an MT model, and ultimately generating pseudo QE labels (Freitag et al., 2021). However, our study diverges from this conventional approach and concentrates on straightforward yet effective DAG methods to mitigate this gap. Similarly, Kocyigit et al. (2022) proposed a negative DAG technique to improve the robustness of their QE models. They suggested training a sentence embedding model to decrease the search space and training it on QE data using a contrastive loss. **Domain Adaptation in QE.** To tackle the challenge of translating data that comes from diverse domains, researchers have extensively used DA in MT.
\begin{table} \begin{tabular}{l|c c c c c|c} \hline \multirow{2}{*}{Models} & \multicolumn{5}{c|}{Test Sets} & \multirow{2}{*}{AVG} \\ \cline{2-2} \cline{4-6} & EN-DE & EN-ZH & RO-EN & RU-EN & \\ \hline \hline Baseline & 47.17 & 19.67 & 44.96 & 32.91 & 36.17 \\ EN-DE & 49.93 & 22.66 & 78.97 & 39.55 & 47.77 \\ \(\Delta\) & 02.76 & 02.99 & 34.01 & 06.64 & **11.60** \\ \hline \hline Baseline & 30.34 & 29.16 & 47.55 & 36.87 & 35.98 \\ EN-ZH & 43.46 & 34.75 & 80.51 & 42.67 & 50.34 \\ \(\Delta\) & 13.12 & 05.59 & 32.96 & 05.80 & **14.36** \\ \hline \hline Baseline & 24.64 & 23.56 & 83.63 & 39.97 & 42.95 \\ RO-EN & 43.02 & 24.31 & 83.67 & 38.74 & 47.43 \\ \(\Delta\) & 18.38 & 00.75 & 00.04 & -01.23 & **04.48** \\ \hline \hline Baseline & 22.40 & 24.67 & 57.17 & 40.69 & 36.23 \\ RU-EN & 25.36 & 26.06 & 75.34 & 44.91 & 42.91 \\ \(\Delta\) & 02.96 & 01.39 & 18.17 & 04.22 & **06.68** \\ \hline \hline Step2 & 38.29 & 24.72 & 76.96 & 31.35 & 42.83 \\ \hline Step1 & 30.80 & 16.57 & 70.14 & 39.93 & 39.36 \\ \hline XLM-R & -02.74 & 07.30 & 02.97 & 03.12 & 02.66 \\ \hline \end{tabular} \end{table} Table 3: Performance comparison of proposed models and baselines across all test sets using Pearson correlation as the metric. \(\Delta\) represents the difference between them. “AVG” column shows the overall difference for each language model. Step 1: model trained with OOD. Step 2: model trained with DAG approach 1 and OOD. Approach 2 in Step 2 had similar results, not included. XLM-R: model not being trained. Models and baselines are color-coded for clarity, with bold numbers indicating the average \(\Delta\) across all language pairs, and underlined numbers representing each model’s performance on their respective test sets. parameters with domain-specific data (Chu and Wang, 2018; Saunders, 2021; Pourmostafa Roshan Sharami et al., 2021; Pham et al., 2022). In MT, one way to achieve DA is by appending tags to sentences to handle different domains (Sennrich et al., 2016; Vanmassenhove et al., 2018; Chu and Dabre, 2019) and reduce catastrophic forgetting. Despite being useful in MT, DA has not been widely used in QE according to our knowledge. Dongjun Lee (2020) proposed a two-step QE training process similar to our own, and Raphael Rubino (2020) pre-trained XLM and further adapted it to the target domain through intermediate training. Both studies demonstrated that adding a step before fine-tuning improves performance compared to fine-tuning alone. However, unlike our methodology, neither of them included sentence tags or conducted additional fine-tuning (such as Step 3 in our methodology). As a result, their QE models are not as specialized for the target domain as ours. A few researchers have made attempts to integrate aspects of DA into QE. For instance, in an effort to improve QE performance in domain-specific scenarios, Arda Tezcan (2022) included fuzzy matches into MonoTransQuest with the aid of XLM-RoBERTa model and data augmentation techniques. ## 7 Conclusion and future work This paper addresses two key challenges related to quality estimation (QE) of machine translation (MT): (i) the scarcity of available QE data and (ii) the difficulties in estimating translations across diverse domains. The primary aim of this study is to enhance the performance of QE models by addressing these challenges. To do so, we propose a solution that utilizes domain adaptation (DA) techniques adopted from MT. 
We adapt the "mixed fine-tuning + fine-tuning" approach (Chu et al., 2017) and extend it with data augmentation as an alternative to the traditional oversampling technique. We adopt a three-step training methodology: (i) we fine-tune XLM-R, a language model, with a large generic QE dataset, which enables the model to generalize; (ii) we fine-tune the model with a mix of out-of-domain (OOD) and in-domain (ID) data derived from two data augmentation (DAG) approaches; and (iii) we fine-tune the model with a small amount of domain-specific data, which leads to a more specific model. We evaluated the models' performance with and without domain tags appended to the sentences. Our experiments show significant improvements across all language pairs under consideration, indicating that our proposed solution has a beneficial impact in addressing the aforementioned challenges. Our study also demonstrates the effectiveness of both proposed DAG approaches and shows that using domain tags improves the performance of the models. Additionally, we find that our model outperforms the baseline in the context of zero-shot learning and in cross-lingual inference. Moving forward, there are several directions for future work based on our findings. First, it would be interesting to investigate the performance of our pipeline on low-resource language pairs, where there is limited ID data available. This is particularly relevant given the smaller coverage of QE datasets compared to parallel data in MT. Second, we only used one type of OOD data in our experiments (EN-IT); it would be useful to explore other OOD data over different language pairs for QE. Third, it would be valuable to study the performance of LLMs other than XLM-R. Fourth, since the choice of languages employed in the pipeline was based on availability, we would suggest exploring a more regulated approach for selecting the languages to be used in the proposed pipeline. Specifically, the optimal transfer languages can be selected based on their data-specific features, such as dataset size, word overlap, and subword overlap, or dataset-independent factors, such as genetic (see Appendix A.5) and syntactic distance (Lin et al., 2019).
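To make the three-step recipe concrete, the following is a minimal sketch of one tag-aware fine-tuning step, casting sentence-level QE as regression on top of XLM-R. This is our own illustrative code, not the authors' released implementation; the dataset fields, the domain tag vocabulary, and all hyperparameters are assumptions.

```python
# Sketch of one pipeline step (ours, not the paper's code). Steps 1-3 differ
# only in the data fed in (OOD, OOD + augmented ID, small ID), with each step
# warm-starting from the previous step's weights.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=1, problem_type="regression")

def encode(src, tgt, score, domain_tag=None):
    # Optionally prepend a domain tag to the source sentence, mirroring the
    # tag-based DA idea borrowed from MT. The tag string is illustrative.
    if domain_tag:
        src = f"{domain_tag} {src}"
    enc = tok(src, tgt, truncation=True, padding="max_length", max_length=128)
    enc = {k: torch.tensor(v) for k, v in enc.items()}
    enc["labels"] = torch.tensor(score, dtype=torch.float)
    return enc

train_data = [encode("a source sentence", "its machine translation", 0.85, "<ood>")]
trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="qe_step", num_train_epochs=1),
                  train_dataset=train_data)
trainer.train()
```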
2302.13859
Propagation Constant Measurement Based on a Single Transmission Line Standard Using a Two-port VNA
This study presents a new method for measuring the propagation constant of transmission lines using a single line standard and without prior calibration of a two-port vector network analyzer (VNA). The method provides accurate results by emulating multiple line standards of the multiline calibration method. Each line standard is realized by sweeping an unknown network along a transmission line. The network need not be symmetric or reciprocal, but must exhibit both transmission and reflection. We performed measurements using a slab coaxial airline and repeated the measurements on three different VNAs. The measured propagation constant of the slab coaxial airline from all VNAs is nearly identical. By avoiding disconnecting or moving the cables, the proposed method eliminates errors related to repeatability of connectors, resulting in improved broadband traceability to SI units.
Ziad Hatab, Arezoo Abdi, Gregor Steinbauer, Michael Ernst Gadringer, Wolfgang Bösch
2023-02-27T14:58:52Z
http://arxiv.org/abs/2302.13859v2
# Propagation Constant Measurement Based on a Single Transmission Line Standard Using a Two-port VNA ###### Abstract This study presents a new method for measuring the propagation constant of transmission lines using a single line standard and without prior calibration of a two-port vector network analyzer (VNA). The method provides accurate results by emulating multiple line standards of the multiline calibration method. Each line standard is realized by sweeping an unknown network along a transmission line. The network need not be symmetric or reciprocal, but must exhibit both transmission and reflection. We performed measurements using a slab coaxial airline and repeated the measurements on three different VNAs. The measured propagation constant of the slab coaxial airline from all VNAs is nearly identical. By avoiding disconnecting or moving the cables, the proposed method eliminates errors related to repeatability of connectors, resulting in improved broadband traceability to SI units. microwave measurement network analyzers propagation constant traceability ## 1 Introduction The propagation constant is a critical parameter in transmission line analysis, providing valuable information about the electrical properties of materials at different frequencies. The need for accurate measurement of the propagation constant arises in various applications, such as material characterization [1, 2, 3, 4], or estimation of the characteristic impedance of transmission lines, which allows impedance renormalization in various vector network analyzer (VNA) calibration methods [5]. Furthermore, knowledge of the propagation constant allows the analysis of losses along a transmission line, which is a critical aspect in signal integrity applications [6, 7]. In general, there are many reasons to measure the propagation constant of guided wave structures such as transmission lines. There are several methods to measure the propagation constant using a two-port VNA, but the most versatile method because of its broadband applicability is the multiline technique [8]. In this method, multiple lines of different lengths are measured to sample the traveling wave along the line standards in a broadband scheme. However, this approach has several drawbacks, including the need for multiple lines, the possibility of uncertainties in their geometry, and the requirement for accurate repeated connection or probing, all of which contribute to measurement uncertainties [9, 10, 11]. To address some of the problems of the multiline method, some techniques have been introduced, such as the multireflect method [12, 13] and the line-network-network method [14, 15, 16, 17]. The multireflect method uses multiple identical reflect standards with different offsets to provide broadband measurement of the propagation constant. However, because it requires multiple identical independent standards, it is susceptible to repeatability errors due to repeated connection or probing. In addition, the propagation constant must be solved using optimization techniques that could diverge if not well conditioned. The line-network-network method involves moving an unknown symmetric and reciprocal network along a transmission line and solving for the propagation constant using the derived similarity equations. This method has limitations, such as the restriction to three offsets, which limits the frequency range, and the requirement to use symmetric and reciprocal offset networks. 
Results of relative effective permittivity measurements using this method were presented in [18], highlighting the sensitivity and limitations of this solution. It is noteworthy that there is a significant amount of literature discussing the broadband measurement of the propagation constant using only two line standards of varying lengths, commonly known as the line-line method [19, 20, 21, 22]. Despite the different mathematical formulations used, all these methods are based on solving the characteristic polynomial of the eigenvalue problem associated with the thru-reflect-line (TRL) calibration [23]. However, because of the use of only two lines, which often have a significant length difference to cover lower frequencies, the result of the propagation constant exhibits multiple resonance peaks, caused by integer multiples of half-wavelength occurrences in the electrical length of the transmission lines. To mitigate this issue, some authors have proposed post-processing techniques to filter the resonance peaks [24, 25]. There are several indirect techniques for determining the propagation constant of transmission lines, which involve evaluating the permittivity of materials separately. These methods can be broadly classified into two categories: the resonant method and the transmission/reflection method. The resonant method, described in [26, 27], estimates the permittivity from S-parameters at resonant frequencies, resulting in measurements only at specific frequencies. In contrast, the transmission/reflection method estimates the permittivity of a sample placed between two waveguides from the measured transmission and reflection coefficients. This method can be implemented using various configurations such as free space, rectangular waveguide, and coaxial line, as discussed in [28, 29, 30]. In this paper, we present a different approach for measuring the propagation constant using a single transmission line standard, without the need for prior calibration of a two-port VNA. Our approach builds on the general idea introduced in [14] by shifting an unknown network along a transmission line. Unlike the approach in [14], our method is not constrained by the number of offsets one can use, nor does the unknown network need to be symmetric or reciprocal. To combine all offset measurements, we propose a weighted \(4\times 4\) eigenvalue problem, inspired by the modified multiline method introduced in [31]. One of the key advantages of our proposed method is that it only requires a single transmission line to generate equations similar to those of the multiline method. Furthermore, highly repeatable measurements are possible because cable reconnection is not required. The remaining uncertainty is mainly due to the dimensional motion of the unknown network and the intrinsic noise of the VNA. We demonstrate the effectiveness of our method on a commercial slab coaxial airline tuner, where the offset network is the sliding tuning element. We performed measurements with three different VNA brands. The measured propagation constant obtained by the different VNAs show overlapping agreement. Our proposed approach offers a promising alternative to existing methods for measuring the propagation constant. The remainder of this paper is structured as follows. In Section 2, we provide a detailed explanation of the mathematical derivation of the eigenvalue problem formulation that allows for the adaptation of the multiline method. 
Subsequently, in Section 3, we discuss the use of normalized eigenvectors to extract the complex exponential terms, which contain the propagation constant, and the utilization of least squares to derive an accurate estimate of the propagation constant. In Section 4, we describe the experimental setup, where we perform measurements using various VNAs and present the measured propagation constant of the slab coaxial airline, as well as a comparison with EM simulation. Finally, a conclusion is given in Section 5. ## 2 Formulating the eigenvalue problem The general idea of the measurement setup is to move an unknown network along a transmission line. For each movement of the network, either to the left or to the right, we create two offset elements that are complementary to each other. When the offset length is zero, the offset elements are reduced to a thru connection, which we refer to as the reference plane. An illustration of this concept is shown in Figure 1. Before proceeding with the mathematical derivation, we need to define the sign convention for the offset shift. In our analysis, we define that moving the network to the right results in a positive offset, while moving the network to the left results in a negative offset. This convention is shown in Figure 2 as modeled by the error box model of a two-port VNA [32]. With the definition of the offsets in Figure 2, the measured T-parameters of the offsetted network by the length \(l_{i}\) are given as follows: \[\mathbf{M}_{i}=\underbrace{k_{a}k_{b}}_{k}\underbrace{\begin{bmatrix}a_{11}&a_{12 }\\ a_{21}&1\end{bmatrix}}_{\mathbf{A}}\mathbf{L}_{i}\mathbf{N}\mathbf{L}_{i}^{-1}\underbrace{ \begin{bmatrix}b_{11}&b_{12}\\ b_{21}&1\end{bmatrix}}_{\mathbf{B}}, \tag{1}\] where \(k\), \(\mathbf{A}\), and \(\mathbf{B}\) are the error terms of an uncalibrated two-port VNA. The matrices \(\mathbf{L}_{i}\) and \(\mathbf{N}\) are given as follows: \[\mathbf{L}_{i}=\begin{bmatrix}e^{-\gamma l_{i}}&0\\ 0&e^{\gamma l_{i}}\end{bmatrix},\quad\mathbf{N}=\begin{bmatrix}\frac{-S_{11}S_{2 0}+S_{12}S_{21}}{S_{21}}&\frac{S_{11}}{S_{21}}\\ \frac{-S_{22}}{S_{21}}&\frac{1}{S_{21}}\end{bmatrix}. \tag{2}\] Here, \(\gamma\) represents the propagation constant of the transmission line and \(\{S_{11},S_{12},S_{21},S_{22}\}\) are the S-parameters of the network \(\mathbf{N}\). The S-parameters of the offset network are generally unknown, and the network can be asymmetric or non-reciprocal. However, the network must satisfy some basic criteria, which are listed below: 1. All S-parameters must be non-zero within the considered frequency range (\(|S_{ij}|>0\)). 2. The S-parameters of the network should not change as the network is moved. 3. The network should not lead to the generation of additional modes along the transmission line. Although the first condition is unique to our method's formulation, the remaining two conditions are also similar to the multiline method [8; 31], which requires single-mode propagation and repeated error boxes. Fortunately, it is not difficult to design a system that satisfies these requirements. We will show this later in a Section 4, where we used a commercial sliding tuner that was not designed for our application, but met our conditions. 
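Before moving on, the model of (1)-(2) is easy to experiment with numerically. A minimal NumPy sketch (ours, intended only for synthetic experiments; the error boxes, network, and scalar values are arbitrary placeholders):

```python
import numpy as np

def offset_measurement(A, B, N, k, gamma, l_i):
    """Synthesize the uncalibrated measurement M_i = k*A*L_i*N*L_i^{-1}*B
    of Eq. (1) for a network shifted by l_i (meters, signed as in Figure 2).
    A, B are 2x2 complex error boxes, N the 2x2 T-matrix of the unknown
    network, k the scalar error term, gamma the propagation constant (1/m)."""
    L = np.diag([np.exp(-gamma*l_i), np.exp(gamma*l_i)])        # Eq. (2)
    L_inv = np.diag([np.exp(gamma*l_i), np.exp(-gamma*l_i)])    # exact inverse
    return k * A @ L @ N @ L_inv @ B
```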
We now define the T-parameters of a new network by taking the difference in T-parameters of two offset networks of different lengths \(l_{i}\) and \(l_{j}\), which is given by \[\begin{split}\overline{\mathbf{N}}_{i,j}&=\mathbf{L}_{i}\mathbf{N} \mathbf{L}_{i}^{-1}-\mathbf{L}_{j}\mathbf{N}\mathbf{L}_{j}^{-1}\\ &=\nu_{i,j}\begin{bmatrix}0&\frac{S_{11}}{S_{21}}e^{-\gamma(l_{i}+ l_{j})}\\ \frac{S_{22}}{S_{21}}e^{\gamma(l_{i}+l_{j})}&0\end{bmatrix},\end{split} \tag{3}\] where \[\nu_{i,j}=e^{-\gamma(l_{i}-l_{j})}-e^{\gamma(l_{i}-l_{j})}. \tag{4}\] The expression in (3) is very similar to a line standard in multiline calibration, but now the line standard is described by an antidiagonal matrix and with additional multiplication factors. We define an equivalent measurement of a line standard by \[\overline{\mathbf{M}}_{i,j}=\mathbf{M}_{i}-\mathbf{M}_{j}=k\mathbf{A}\overline{\mathbf{N}}_{i,j} \mathbf{B}. \tag{5}\] Figure 1: Illustration of network offset on a transmission line. Figure 2: VNA two-port error box model of a network offsetted by a length \(l_{i}\) (positive or negative). Each offset results in two complementary offset boxes. All blocks are given by their T-parameters. Similar to the multiline calibration, we also need an equation that describes the inverse of the measurements. This is given by \[\widehat{\overline{\mathbf{M}}}_{i,j}=\mathbf{M}_{i}^{-1}-\mathbf{M}_{j}^{-1}=\frac{1}{k}\bm {B}^{-1}\widehat{\overline{\mathbf{N}}}_{i,j}\mathbf{A}^{-1}, \tag{6}\] where the matrix \(\widehat{\overline{\mathbf{N}}}_{i,j}\) is given by \[\begin{split}\widehat{\overline{\mathbf{N}}}_{i,j}&=\bm {L}_{i}^{-1}\mathbf{N}^{-1}\mathbf{L}_{i}-\mathbf{L}_{j}^{-1}\mathbf{N}^{-1}\mathbf{L}_{j}\\ &=-\nu_{i,j}\begin{bmatrix}0&\frac{S_{1}}{S_{12}}e^{-\gamma(l_{i} +l_{j})}\\ \frac{S_{22}}{S_{12}}e^{\gamma(l_{i}+l_{j})}&0\end{bmatrix}.\end{split} \tag{7}\] Given the expressions in (5) and (6), we can construct an eigenvalue problem in terms of \(\mathbf{A}\) as follows: \[\overline{\mathbf{M}}_{i,j}\widehat{\overline{\mathbf{M}}}_{n,m}=\mathbf{A}\overline{\mathbf{ N}}_{i,j}\widehat{\overline{\mathbf{N}}}_{n,m}\mathbf{A}^{-1}, \tag{8}\] where the matrix product \(\overline{\mathbf{N}}_{i,j}\widehat{\overline{\mathbf{N}}}_{n,m}\) is given by \[\overline{\mathbf{N}}_{i,j}\widehat{\overline{\mathbf{N}}}_{n,m}=-\kappa\nu_{i,j}\nu_ {n,m}\begin{bmatrix}e^{-\gamma(l_{i,j}^{+}-l_{n,m}^{+})}&0\\ 0&e^{\gamma(l_{i,j}^{+}-l_{n,m}^{+})}\end{bmatrix}, \tag{9}\] with \[\kappa=\frac{S_{11}S_{22}}{S_{21}S_{12}},\qquad l_{i,j}^{+}=l_{i}+l_{j},\qquad l _{n,m}^{+}=l_{n}+l_{m}. \tag{10}\] To have a valid eigenvalue problem, we need at least three unique offsets, where one of the offsets \(l_{n}\) or \(l_{m}\) can be equal to \(l_{i}\) or \(l_{j}\), but \(l_{i}\neq l_{j}\), or vice versa. However, with three offsets, we have three possible pairs of eigenvalue problems. In fact, for \(N\geq 3\) offsets, we have \(N(N-2)(N^{2}-1)/8\) possible pairs of eigenvalue problems. This is because for a set of \(N\) offsets, we have \(N(N-1)/2\) pairs, and when we create pairs from \(N(N-1)/2\) pairs, we substitute the equation into itself, resulting in \(N(N-2)(N^{2}-1)/8\) pairs of pairs. To address the issue of multiple eigenvalue problems, we refer to our previous work in [31, 33], where a similar problem was presented in the context of multiline calibration. 
This problem was solved by combining all measurements using a weighting matrix, reducing the problem to solving a single \(4\times 4\) eigenvalue problem, regardless of the number of lines. This method not only reduced the size of the problem, but also allowed us to express both error boxes \(\mathbf{A}\) and \(\mathbf{B}\) simultaneously in a single matrix using Kronecker product notation. By applying the techniques described in [31, 33], we obtain the following set of equations: \[\overline{\mathbf{M}} =k\mathbf{X}\overline{\mathbf{N}}, \tag{11a}\] \[\widehat{\overline{\mathbf{M}}}^{T}\mathbf{P} =\frac{1}{k}\widehat{\overline{\mathbf{N}}}^{T}\mathbf{P}\mathbf{X}^{-1}, \tag{11b}\] with, \[\mathbf{X}=\mathbf{B}^{T}\otimes\mathbf{A}, \tag{12a}\] \[\overline{\mathbf{M}} =\left[\mathrm{vec}\left(\overline{\mathbf{M}}_{1,2}\right)\quad \cdots\quad\mathrm{vec}\left(\overline{\mathbf{M}}_{i,j}\right)\right],\] (12b) \[\overline{\mathbf{N}} =\left[\mathrm{vec}\left(\overline{\mathbf{N}}_{1,2}\right)\quad \cdots\quad\mathrm{vec}\left(\overline{\mathbf{N}}_{i,j}\right)\right],\] (12c) \[\widehat{\overline{\mathbf{M}}} =\left[\mathrm{vec}\left(\widehat{\overline{\mathbf{M}}}_{1,2}\right) \quad\cdots\quad\mathrm{vec}\left(\widehat{\overline{\mathbf{M}}}_{i,j}\right) \right],\] (12d) \[\widehat{\overline{\mathbf{N}}} =\left[\mathrm{vec}\left(\widehat{\overline{\mathbf{N}}}_{1,2}\right) \quad\cdots\quad\mathrm{vec}\left(\widehat{\overline{\mathbf{N}}}_{i,j}\right) \right],\] (12e) \[\mathbf{P} =\begin{bmatrix}1&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&1\end{bmatrix},\quad\text{where, }\mathbf{P}=\mathbf{P}^{-1}=\mathbf{P}^{T}. \tag{12f}\] Details on the definition and properties of the Kronecker product (\(\otimes\)) and the matrix vectorization (\(\operatorname{vec}\left(\right)\)) can be found in the reference [34]. We now formulate the main eigenvalue problem by defining a new matrix \(\mathbf{W}\), which we multiply on the right side of (11a). We call this matrix the weighting matrix. In the next step, we construct the weighted eigenvalue problem by multiplying the new equation on the left side of (11b). This results in \[\underbrace{\mathbf{\overline{M}}\mathbf{W}\widehat{\mathbf{\overline{M}}}^{T}\mathbf{P}}_{\bm {F}}=\mathbf{X}\underbrace{\mathbf{\overline{N}}\mathbf{W}\widehat{\mathbf{\overline{N}}}^{T} \mathbf{P}}_{\mathbf{H}}\mathbf{X}^{-1}. \tag{13}\] The expression presented in (13) represents a similarity problem between the matrices \(\mathbf{F}\) and \(\mathbf{H}\), with \(\mathbf{X}\) as the transformation matrix. The purpose of introducing the weighting matrix \(\mathbf{W}\) is to transform this similarity problem into an eigenvalue problem by forcing \(\mathbf{H}\) into a diagonal form. It turns out that if \(\mathbf{W}\) is any non-zero skew-symmetric matrix, then \(\mathbf{H}\) takes a diagonal form [31]. However, we do not only want to diagonalize \(\mathbf{H}\), but also want to maximize the distance between the eigenvalues, which in turn minimizes the sensitivity in the eigenvectors [35]. For multiline calibration, the optimal form of \(\mathbf{W}\) was derived in [31], and since the formulation in (13) is similar to that discussed in [31], we use the same choice of \(\mathbf{W}\) with some scaling modifications. 
The optimal weighting matrix \(\mathbf{W}\) can be written as follows, taking into account the scaling factors: \[\mathbf{W}^{H}=-\kappa(\mathbf{z}\mathbf{y}^{T}-\mathbf{y}\mathbf{z}^{T}), \tag{14}\] where \[\mathbf{y}^{T}=\left[\nu_{1,2}e^{\gamma l_{1,2}^{+}}\quad\ldots\quad\nu_{i,j}e^{\gamma l_{i,j}^{+}}\right], \tag{15a}\] \[\mathbf{z}^{T}=\left[\nu_{1,2}e^{-\gamma l_{1,2}^{+}}\quad\ldots\quad\nu_{i,j}e^{-\gamma l_{i,j}^{+}}\right]. \tag{15b}\] As a result of choosing \(\mathbf{W}\) as defined in (14), the expression in (13) takes an eigendecomposition form, as given below: \[\mathbf{F}=\mathbf{X}\begin{bmatrix}0&0&0&0\\ 0&\lambda&0&0\\ 0&0&-\lambda&0\\ 0&0&0&0\end{bmatrix}\mathbf{X}^{-1}, \tag{16}\] where \(\lambda\) is real-valued and proportional to the squared Frobenius norm of the matrix \(\mathbf{W}\), given by \[\lambda=\frac{1}{2}\left\|\mathbf{W}\right\|_{F}^{2}=\frac{1}{2}\sum_{i,j}|w_{i,j}|^{2}. \tag{17}\] There are two ways to compute \(\mathbf{W}\): the first is the direct method, where we already know the propagation constant \(\gamma\) and the factor \(\kappa\) that describes the unknown network. Naturally, the first option is not practical, since both \(\gamma\) and \(\kappa\) are unknown. The better option is to apply a rank-2 Takagi decomposition to the left side of the following equation, as described in [33] for multiline calibration: \[\underbrace{\widehat{\overline{\mathbf{M}}}^{T}\mathbf{P}\overline{\mathbf{M}}}_{\text{measurement}}=\underbrace{\widehat{\overline{\mathbf{N}}}^{T}\mathbf{P}\overline{\mathbf{N}}}_{\text{model}}. \tag{18}\] Note that the left side of (18) contains only the measurement data, while the right side describes the model. Also, the error boxes are not present in (18). To determine \(\mathbf{W}\), we need to calculate the rank-2 Takagi decomposition. This is done in two steps. First, we compute the rank-2 approximation of (18) via singular value decomposition (SVD), and then we apply the Takagi decomposition to decompose the matrix into its symmetric basis [36]. This looks as follows: \[\widehat{\overline{\mathbf{N}}}^{T}\mathbf{P}\overline{\mathbf{N}}=\underbrace{s_{1}\mathbf{u}_{1}\mathbf{v}_{1}^{H}+s_{2}\mathbf{u}_{2}\mathbf{v}_{2}^{H}}_{\text{rank-2 SVD from measurement}}=\underbrace{\mathbf{G}\mathbf{G}^{T}}_{\text{Takagi}} \tag{19}\] Then, the weighting matrix is given by \[\mathbf{W}^{H}=\pm\mathbf{G}\begin{bmatrix}0&j\\ -j&0\end{bmatrix}\mathbf{G}^{T} \tag{20}\] The derivation process of the matrix \(\mathbf{W}\) is described in more detail in [33]. To resolve the sign ambiguity, one approach is to select the answer that has the smallest Euclidean distance to a known estimate. Such an estimate can be obtained from approximate knowledge of the material properties of the transmission line. The last step is the solution of the eigenvectors described by \(\mathbf{X}\) in (16). The solution of the eigenvectors has been discussed in [31]. It is worth noting that we cannot solve the matrix \(\mathbf{X}\) uniquely, but only up to a diagonal matrix multiplication. Therefore, to define a unique solution for \(\mathbf{X}\), we normalize its columns so that the diagonal elements are equal to one. This is written as follows: \[\widetilde{\mathbf{X}}=\mathbf{X}\mathrm{diag}(a_{11}b_{11},b_{11},a_{11},1)^{-1}, \tag{21}\] where \(a_{11}\) and \(b_{11}\) are part of the error boxes \(\mathbf{A}\) and \(\mathbf{B}\) (see (1) and (12a)).
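Because the sensitivity of the eigenvectors is governed by \(\lambda\), it is useful to check, before measuring, that a candidate set of offsets keeps \(\lambda\) away from zero across the band. A small NumPy sketch of (14)-(17) follows; this is our own code, and setting \(\kappa=1\) is an idealizing assumption:

```python
import numpy as np

def scaled_eigenvalue(offsets, gamma, kappa=1.0):
    """Evaluate lambda of Eqs. (16)-(17) for a candidate offset set, using
    the ideal weighting matrix of Eq. (14). offsets are in meters; gamma is
    an assumed propagation constant (1/m) at one frequency."""
    l = np.asarray(offsets, dtype=float)
    i, j = np.triu_indices(len(l), k=1)            # all unique pairs (i < j)
    nu = np.exp(-gamma*(l[i]-l[j])) - np.exp(gamma*(l[i]-l[j]))   # Eq. (4)
    lp = l[i] + l[j]                                              # Eq. (10)
    y = nu*np.exp(gamma*lp)                                       # Eq. (15a)
    z = nu*np.exp(-gamma*lp)                                      # Eq. (15b)
    W_H = -kappa*(np.outer(z, y) - np.outer(y, z))                # Eq. (14)
    # ||W||_F = ||W^H||_F, so Eq. (17) can be evaluated on W^H directly
    return 0.5*np.linalg.norm(W_H, 'fro')**2
```

Sweeping this function over frequency (through \(\gamma\)) exposes offset combinations whose eigenvalue crosses zero, which, as the measurements discussed later show, degrades the extracted propagation constant.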
## 3 Least squares solution for the propagation constant Knowing \(\widetilde{\mathbf{X}}\) from the eigenvector solutions, we can extract the complex exponential terms that contain the propagation constant. To do this, we first multiply the inverse of the normalized error terms to all vectorized measurements of the offset network. This is given by \[\mathbf{E}=\widetilde{\mathbf{X}}^{-1}\mathbf{M}=\mathrm{diag}(ka_{11}b_{11},kb_{11},ka_{ 11},1)\mathbf{N}^{\prime}, \tag{22}\] where \[\mathbf{M} =\left[\mathrm{vec}\left(\mathbf{M}_{1}\right)\quad\cdots\quad \mathrm{vec}\left(\mathbf{M}_{N}\right)\right], \tag{23a}\] \[\mathbf{N}^{\prime} =\left[\mathrm{vec}\left(\mathbf{L}_{1}\mathbf{N}\mathbf{L}_{1}^{-1}\right) \quad\cdots\quad\mathrm{vec}\left(\mathbf{L}_{N}\mathbf{N}\mathbf{L}_{N}^{-1}\right) \right]. \tag{23b}\] Since we do not know the remaining error terms \(k,a_{11},b_{11}\), as well as the S-parameters of the network \(\mathbf{N}\), we need to choose a reference offset to eliminate these unknown factors. For simplicity, we choose the first offset, which we define as zero, i.e., \(l_{1}=0\) (any other choice is also valid). As a result, the positive and negative complex exponential terms are given as follows, using indexing notation based on Python. \[\mathbf{E}[1,1:]/\mathbf{E}[1,0]= \left[e^{2\gamma l_{2}}\quad e^{2\gamma l_{3}}\quad\cdots\quad e^ {2\gamma l_{N}}\right], \tag{24a}\] \[\mathbf{E}[2,1:]/\mathbf{E}[2,0]= \left[e^{-2\gamma l_{2}}\quad e^{-2\gamma l_{3}}\quad\cdots\quad e ^{-2\gamma l_{N}}\right]. \tag{24b}\] Now that we have the complex exponential terms, we can extract the exponents using the complex logarithm function and determine \(\gamma\) using the least squares method, while taking care of any phase unwrapping. First, since we have both the positive and negative complex exponential terms, we can account for both by averaging them. This is done by defining a new vector \(\mathbf{\tau}\): \[\mathbf{\tau}=\left[\frac{e^{2\gamma l_{2}}+1/e^{-2\gamma l_{2}}}{2}\quad\cdots \quad\frac{e^{2\gamma l_{N}}+1/e^{-2\gamma l_{N}}}{2}\right]^{T}. \tag{25}\] The next step is to calculate the logarithm to extract the exponents, which is given by \[\mathbf{\phi}=\log\left(\mathbf{\tau}\right)+j2\pi\mathbf{n},\quad\text{where, }\mathbf{n} \in\mathbb{Z}^{N-1}. \tag{26}\] The phase unwrapping factor \(\mathbf{n}\) can be estimated by rounding the difference between \(\mathbf{\phi}\) and an estimated value. This is given by \[\mathbf{n}=\mathrm{round}\left(\frac{\mathrm{Im}\left(\mathbf{\phi}\right)-2\gamma_{ \mathrm{est}}\mathbf{l}}{2\pi}\right), \tag{27}\] where \(\gamma_{\mathrm{est}}\) is a known approximation for \(\gamma\), and \(\mathbf{l}\) is a vector containing all length offsets except the reference zero offset. The initial estimate for \(\gamma_{\mathrm{est}}\) can be derived from the material properties of the transmission line. Finally, we can determine \(\gamma\) through weighted least squares [37], \[\gamma=\frac{\mathbf{l}^{T}\mathbf{V}^{-1}\mathbf{\phi}}{\mathbf{l}^{T}\mathbf{V}^{-1}\mathbf{l}}, \tag{28}\] where \(\mathbf{V}^{-1}\) is given by \[\mathbf{V}^{-1}=\mathbf{I}_{(N-1)\times(N-1)}-\frac{1}{N}\mathbf{1}_{N-1}\mathbf{1}_{N-1}^{T}, \tag{29}\] The matrix \(\mathbf{I}\) is the identity matrix and \(\mathbf{1}\) is a vector of ones. The weighting matrix \(\mathbf{V}^{-1}\) is necessary because each measurement has a common reference, which is \(l_{1}\). 
Therefore, the correlation between the measurements has to be taken into account by the matrix \(\mathbf{V}^{-1}\) [37]. Figure 3 summarizes the mathematical derivation presented in this and the previous sections, and provides a visual representation of the steps taken to compute the propagation constant.

## 4 Experiment and Discussion
### Measurement setup
For demonstration purposes, we used the slide screw tuner 8045P from Maury Microwave as an implementation of the offset network, where the transmission line is a slab coaxial airline that supports frequencies up to 18 GHz. The tuner is depicted in Figure 4.

Figure 4: Maury Microwave 8045P tuner. The cross-section dimensions of the airline are given as follows: \(w=9.398\,\mathrm{mm}\), \(h=40.691\,\mathrm{mm}\), \(d=3.040\,\mathrm{mm}\), and \(p=2.778\,\mathrm{mm}\).
Figure 3: Block diagram summary of the proposed propagation constant measurement method. The matrix \(\mathbf{M}\) contains the T-parameter measurements of all offsets. The vector \(\mathbf{l}\) contains the relative length of the offsets with respect to the reference offset (i.e., the zero offset).

For our method to work, we require that the unknown network (i.e., the tuner element) be both reflective and transmissive, as the factor \(\kappa\) in (9) can explode to infinity if the network is only reflective, and can be zero if the network is only transmissive. Ideally, we want \(\kappa=1\) to minimize its effect on the eigenvalue problem. However, we also want to avoid scenarios where the network causes the generation of additional modes or resonances. Therefore, we adjusted the tuner with an already calibrated VNA to tune the network to a desired response, as shown in Figure 5. It should be noted that this step of tuning the tuner with an existing calibrated VNA is only necessary because the tuner is a commercial product designed for circuit matching applications and not for our purpose. If we were designing the network ourselves, we would not need to measure it with a calibrated VNA because we would have already designed it to meet our frequency specifications. Also, the S-parameters of the network are never explicitly used in the derivation of the propagation constant. As shown in Figure 5, we set the lower frequency to 3 GHz to avoid very low return loss and resonances. We then measured the airline using different uncalibrated VNAs. This was done to demonstrate that even if we changed the measurement system, we would still get consistent results, because the error boxes would not be affected by uncertainties caused by connector and cable movement. For the offset lengths, we chose \(\{0,21,66,81,84,93,117,123,171,192\}\,\mathrm{mm}\), which ensures that the eigenvalue \(\lambda\) in (16) does not reach zero in the target frequency range. The VNAs used for the measurements are: Anritsu VectorStar, R&S ZNA, and Keysight ENA. The ENA is limited to 14 GHz. All VNAs were placed in the same room to ensure the same ambient conditions. The power level and IF bandwidth for all VNAs were set to 0 dBm and 100 Hz, respectively. Due to the low loss of the airline, an average over 50 frequency sweeps was calculated to reduce noise. Pictures of the three instruments are shown in Figure 6.

### Results and discussion
All measurements of the different offsets were taken without prior calibration of the VNAs. The collected data is then read in Python using the _scikit-rf_ package [38].
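With the raw data loaded, the final extraction step of Section 3 reduces to a few lines. The following NumPy sketch is ours, not the authors' code; note that since \(\tau\approx e^{2\gamma l}\), the fitted least-squares slope corresponds to \(2\gamma\), so we make the halving explicit, and the unwrapping sign is chosen so that the phase tracks the a-priori estimate:

```python
import numpy as np

def solve_gamma(E, l, gamma_est):
    """Weighted least-squares estimate of gamma (a sketch of Eqs. (24)-(29)).
    E is the 4 x N de-embedded matrix of Eq. (22); l holds the N-1 offsets
    (meters) relative to the zero-offset reference; gamma_est is a rough
    a-priori value used only for phase unwrapping."""
    pos = E[1, 1:] / E[1, 0]                  # Eq. (24a): e^{+2*gamma*l}
    neg = E[2, 1:] / E[2, 0]                  # Eq. (24b): e^{-2*gamma*l}
    tau = (pos + 1/neg) / 2                   # Eq. (25)
    phi = np.log(tau)                         # principal branch of Eq. (26)
    # unwrap so that Im(phi) tracks 2*Im(gamma_est)*l, cf. Eq. (27)
    n = np.round((2*np.imag(gamma_est)*l - phi.imag) / (2*np.pi))
    phi = phi + 1j*2*np.pi*n
    N = len(l) + 1
    V_inv = np.eye(N-1) - np.ones((N-1, N-1))/N                   # Eq. (29)
    # tau ~ e^{2*gamma*l}, hence the division by two, cf. Eq. (28)
    return (l @ V_inv @ phi) / (l @ V_inv @ l) / 2
```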
In Figure 7 we show the measured magnitude response of \(S_{11}\) and \(S_{21}\) from all three VNAs for the offset \(123\,\mathrm{mm}\). From the figure, we can see that all three VNAs give different responses because the error boxes are different for each VNA. After collecting all raw measurements for all the offsets and from all the VNAs, the data were processed to extract the propagation constant according to the discussion in Sections 2 and 3. For easier and better interpretation of the extracted propagation constant, we have plotted in Figure 8 the real part of the relative effective permittivity and the loss per unit length of the slab coaxial airline from all three VNA measurements. The real part of the relative effective permittivity and the loss per unit length are calculated from the propagation constant as follows: \[\epsilon^{\prime}_{\mathrm{r,eff}}=-\operatorname{Re}\left(\left(\frac{c_{0}\gamma}{2\pi f}\right)^{2}\right)\quad(\text{unitless}),\qquad\mathrm{loss}=\frac{20\times 10^{-2}}{\ln 10}\operatorname{Re}\left(\gamma\right)\quad(\text{dB/cm}), \tag{30}\] where \(c_{0}\) is the speed of light in vacuum and \(f\) is the frequency.

Figure 5: Calibrated measurement of the tuner after tuning. The highlighted frequency range below 3 GHz is not usable due to small reflection and resonance.

The relative effective permittivity and loss per unit length results presented in Figure 8 show clear agreement between all VNA measurements, demonstrating the high repeatability of the proposed method even when using different VNA setups. We also performed an EM simulation with the dimensional parameters of the airline given in Figure 4. Unfortunately, we did not have information on the metal types of the inner and outer conductors. From the appearance of the inner conductor, we think it is made of some kind of brass. For the ground plates, we think they are made of aluminum because they have a black anodized coating, which is typical for aluminum components. The anodized layer is often based on aluminum oxide and typically has a relative permittivity of \(8.3\) [39]. Since the thickness of the oxide layer and the exact conductivity of brass are unknown, we swept a range of values for the thickness of the anodic layer and the conductivity of brass. We found that a coating thickness of \(15\,\mathrm{\mu m}\) and a relative conductivity of \(35\%\) IACS (International Annealed Copper Standard) overlap with the measurement shown in Figure 8. The value obtained for the thickness of the anodic layer is quite typical for obtaining a dark black coating [40]. The conductivity of the inner conductor of \(35\%\) IACS (=20.3 MS/m) is within the range of common brass types [41].

Figure 6: The VNAs used for the measurements. (a) Rohde & Schwarz ZNA, (b) Anritsu VectorStar, and (c) Keysight ENA.
Figure 7: Raw measurements from the three VNAs of the magnitude response of \(S_{11}\) and \(S_{21}\) of the 8045P tuner, at an offset location of \(123\,\mathrm{mm}\).

The purpose of the simulation is to show that the results obtained from the proposed method of measuring the propagation constant do indeed translate into realistic properties of the transmission line. In fact, with the proposed method, one could characterize materials in reverse, as in our case, the conductivity of the metal. Another aspect that may be of interest is how the quality of the extracted propagation constant varies with the length and number of offsets. In the results shown in Figure 8 we used 10 offsets ranging from 0 to 192 mm.
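For readers reproducing Figures 8 and 9, Eq. (30) maps directly to a small helper (the function name is ours; \(\gamma\) is assumed in 1/m and \(f\) in Hz):

```python
import numpy as np

C0 = 299792458.0  # speed of light in vacuum (m/s)

def gamma_to_plot_quantities(gamma, f):
    """Implement Eq. (30) for arrays gamma (1/m) and f (Hz) of equal length."""
    eps_r_eff = -np.real((C0*gamma/(2*np.pi*f))**2)        # unitless
    loss_db_per_cm = (20e-2/np.log(10))*np.real(gamma)     # dB/cm
    return eps_r_eff, loss_db_per_cm
```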
Now we consider different cases. These cases are listed in Table 1.

\begin{table} \begin{tabular}{c|c} \hline \hline Cases & Offset lengths (mm) \\ \hline Case 1 & \(0,21,81\) \\ Case 2 & \(0,21,192\) \\ Case 3 & \(0,21,66,117,192\) \\ Case 4 & \(0,21,81,93,117,123,192\) \\ All offsets & \(0,21,66,81,84,93,117,123,171,192\) \\ \hline \hline \end{tabular} \end{table} Table 1: Considered cases of different offset lengths.

Figure 8: Extracted measurement of relative effective permittivity and loss per unit length of the slab coaxial airline, as well as EM simulated results for an anodic coating of \(15\,\mathrm{\mu m}\) and inner conductor conductivity of \(35\%\) IACS.

In Figure 9 we show the results of the relative effective permittivity and the loss per unit length of the slab coaxial airline from the VectorStar VNA measurements for all the cases mentioned in Table 1. Cases 1 and 2 show the results when only three offsets are considered. Case 2 differs from Case 1 in that we have replaced the last offset with a much longer offset. The results of both cases 1 and 2 are poor and show multiple resonances. For case 2 we see more resonances than for case 1. This is the result of the eigenvalue crossing zero at multiple frequencies (see Figure 10). In case 3 we spread the offsets further to include 5 offsets. We can see a clear improvement over cases 1 and 2. We can further improve the accuracy of the extracted relative effective permittivity and loss per unit length by further spreading the offsets, as in case 4, where we use 7 offsets. In case 4, we obtain results of similar accuracy to the case of using all 10 offset lengths. The quality of the extracted propagation constant depends on the eigenvalue \(\lambda\) as defined in (16). As the eigenvalue approaches zero, the eigenvectors become more sensitive, which in turn affects the calculation of the extracted propagation constant. To visualize the differences between different scenarios, we present a scaled representation of the eigenvalue \(\lambda\) for each case. This scaled representation excludes the influence of the network through the common factor \(\kappa\), which is invariant over all offset lengths. Since \(|\kappa|>0\) was established earlier, variations in the eigenvalues can only be induced by the choice of offset lengths. Accordingly, we define the normalized version of the eigenvalue by dividing it by the squared magnitude of \(\kappa\), as shown below: \[\lambda=\frac{1}{2}\left\|\mathbf{W}\right\|_{F}^{2}=\frac{\left|\kappa\right|^{2}}{2}\left\|\mathbf{z}\mathbf{y}^{T}-\mathbf{y}\mathbf{z}^{T}\right\|_{F}^{2}\quad\Longrightarrow\quad\lambda^{\prime}=\frac{\lambda}{\left|\kappa\right|^{2}}=\frac{1}{2}\left\|\mathbf{z}\mathbf{y}^{T}-\mathbf{y}\mathbf{z}^{T}\right\|_{F}^{2} \tag{31}\] In Figure 10, we present a plot of the scaled eigenvalue normalized to its maximum value, which facilitates a consistent comparison as the number of offsets varies. As illustrated in the figure, for cases 1 and 2, the eigenvalue exhibits multiple zero crossings at various frequencies. Similarly, in case 3, the eigenvalue approaches zero at several instances,
although to a lesser extent than in cases 1 and 2. In contrast, in case 4, the eigenvalue never reaches zero, but attains values closer to zero at specific frequencies than when all 10 offsets are utilized. Ideally, a flat eigenvalue over frequency would be preferred, but this would necessitate employing even more offsets. This is not different from the multiline calibration approach proposed in [31], where a finer spacing between lines results in a flatter eigenvalue over frequency. Therefore, utilizing a broader range of offset lengths is highly advantageous for enhancing the accuracy of results across frequency. It is also noteworthy that the eigenvalue possesses a bandpass characteristic, whereby the lowest and highest frequency limits are bound by the largest and smallest relative offset, respectively. For comparison, it is worth noting that the multiline method necessitates the measurement of multiple line standards of different lengths, a process that can introduce errors due to connector repeatability. Achieving high repeatability in this context poses a significant mechanical challenge, especially concerning connectors, and automating this process represents an even greater hurdle. In contrast, our proposed method eliminates the need for physical contact between the sliding element and the transmission line. Furthermore, although the sliding process is performed manually in the example presented, it could be automated by employing a linear actuator, thus eliminating the need for any user interaction with the measurement system.

Figure 10: The scaled eigenvalue normalized to its maximum for the investigated offset length cases.
Figure 9: Extracted measurement of relative effective permittivity and loss per unit length of the slab coaxial airline for various combinations of offsets from the VectorStar VNA measurements.

## 5 Conclusion
We presented a new broadband method for measuring the propagation constant of transmission lines that does not require the prior calibration of a two-port VNA or the use of multiple line standards. This method provides accurate results by emulating the use of multiple line standards through sweeping an unknown network along a transmission line. The shifted network does not have to be symmetric or reciprocal, but it must exhibit both transmission and reflection properties and remain invariant when moved along the line. The experimental results obtained using different VNAs on a slab coaxial airline with a slider tuner showed consistent agreement with each other and with EM simulation. One of the significant advantages of the proposed method is that it uses the same eigenvalue formulation as multiline calibration, but without the need for disconnecting or moving the cables. As a result, it eliminates errors related to connector repeatability and provides improved broadband traceability to the SI units. Moreover, since the offsets are implemented by simply moving the unknown network laterally, the process can be easily automated using a linear actuator. Therefore, the proposed method can accurately measure the propagation constant without requiring any physical interaction from the user on the measurement system.

## Acknowledgment
The financial support by the Austrian Federal Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology, and Development is gratefully acknowledged. The authors also thank ebsCENTER for lending their Anritsu and Keysight VNAs, and Maury Microwave for their support in providing the airline cross-section dimensions of the 8045P tuner.
2304.11318
A Semi-Supervised Framework for Misinformation Detection
The spread of misinformation in social media outlets has become a prevalent societal problem and is the cause of many kinds of social unrest. Curtailing its prevalence is of great importance and machine learning has shown significant promise. However, there are two main challenges when applying machine learning to this problem. First, while much too prevalent in one respect, misinformation, actually, represents only a minor proportion of all the postings seen on social media. Second, labeling the massive amount of data necessary to train a useful classifier becomes impractical. Considering these challenges, we propose a simple semi-supervised learning framework in order to deal with extreme class imbalances that has the advantage, over other approaches, of using actual rather than simulated data to inflate the minority class. We tested our framework on two sets of Covid-related Twitter data and obtained significant improvement in F1-measure on extremely imbalanced scenarios, as compared to simple classical and deep-learning data generation methods such as SMOTE, ADASYN, or GAN-based data generation.
Yueyang Liu, Zois Boukouvalas, Nathalie Japkowicz
2023-04-22T05:20:58Z
http://arxiv.org/abs/2304.11318v1
# A Semi-Supervised Framework for Misinformation Detection ###### Abstract The spread of misinformation in social media outlets has become a prevalent societal problem and is the cause of many kinds of social unrest. Curtailing its prevalence is of great importance and machine learning has shown significant promise. However, there are two main challenges when applying machine learning to this problem. First, while much too prevalent in one respect, misinformation, actually, represents only a minor proportion of all the postings seen on social media. Second, labeling the massive amount of data necessary to train a useful classifier becomes impractical. Considering these challenges, we propose a simple semi-supervised learning framework in order to deal with extreme class imbalances that has the advantage, over other approaches, of using actual rather than simulated data to inflate the minority class. We tested our framework on two sets of Covid-related Twitter data and obtained significant improvement in F1-measure on extremely imbalanced scenarios, as compared to simple classical and deep-learning data generation methods such as SMOTE, ADASYN, or GAN-based data generation. Keywords:Semi-supervised learning Class imbalance Misinformation Detection. ## 1 Introduction The spread of misinformation in social media outlets has become a prevalent societal problem and is the cause of many kinds of social unrest. Curtailing its prevalence is of great importance and machine learning advances have shown significant promise for the detection of misinformation [12]. However, to build a reliable model a large data set of reliable posts as well as posts containing misinformation is needed. In practice, this is not feasible since detecting posts containing misinformation is inherently a class imbalanced problem: the majority of posts are reliable whereas a very small minority contains misinformation. For instance, according to The Verge, an American technology news website operated by Vox Media, Twitter removed 2,230 misleading tweets between March 16 and April 18, 20201. Given that, on average, 6,000 tweets are tweeted every second2, the class imbalance ratio is around 0.000014% for that month, or 1 unreliable Tweet for every 71,428 reliable ones, an extreme imbalance ratio. The class imbalance problem has been pervasive in the Machine Learning field for over two decades [14, 9, 3, 16, 15]. The class imbalance problem and issues related to it are, in part, responsible for questions of algorithmic bias and fairness that are very much on researcher's and the public's mind now that machine learning algorithms are routinely deployed in applications that directly affect people. Over the years, many techniques for dealing with class imbalances have been proposed including classical methods for inflating the minority class such as SMOTE [4] and ADASYN [8] and Deep-Learning based methods such as DEAGO [1] and GAMO [20], which use an autoencoder and a Generative Adversarial Network, respectively. One of the issues with previously proposed minority-class oversampling methods for the class imbalance problem is that either the data used to inflate the minority class is real but simply repeated from the existing minority class, as in random oversampling [14], or it is artificial as in SMOTE [4]. Random oversampling is not an acceptable solution given that it is known to cause overfitting [6]. Artificial oversampling, while not overfitting as much as random oversampling, generates artificial data. 
While this kind of data approximates real data fairly well in continuous domains such as computer vision, it is not as representative in non-continuous domains such as text [10]. This is the reason why, instead of proposing a text generation method to inflate the minority class, this paper proposes a semi-supervised method which, instead of generating new text artificially, relies on the available unlabeled data. A deep-learning method is used to label the data, but not to generate new text. Semi-Supervised Learning for text data is not new and was first proposed in the context of class imbalance in [18]. However, while the class imbalance was present in that study, it was not as extreme as it is in the case of misinformation detection since the authors use an undersampling of the majority class strategy to bring the size of the two classes closer to one another. In our case, we are dealing with such an extremely imbalanced data set that solutions of the type proposed in [18] would not apply. Semi-supervised learning in class-imbalanced setting is also not new. Authors in [11] review existing approaches and propose their own. However, they focus on algorithmic modifications rather than the simpler and more practical re-sampling strategy. Our framework is similar to standard approaches previously designed to tackle the class imbalance problem, but it differs from them in one important way. On the one hand, like methods such as SMOTE, GAMO and so on, it proposes to oversample the minority class, but on the other hand, unlike these approaches, instead of using generated samples it identifies candidates from the unlabeled data set to inflate the minority class with. Although the search for such candidates could be extremely costly, we show how the use of a K-D Tree makes it tractable. We evaluate our framework on two data sets related to Covid-19 misinformation in social media, the one collected and curated in-house, early in the pandemic [2], and a data set obtained from English COVID-19 Fake News and Hindi Hostile Posts data set[22]. Our framework takes two forms: the direct approach in which the labeled minority samples alone are used to search the unlabeled data set; and the indirect approach, designed to increase the diversity of the search, where artificial data are first generated from the minority class and these samples, along with the original minority samples, are used to search the unlabeled set. Different instantiations of these approaches are compared to traditional ways of overcoming the class imbalance problem and to the results obtained on the original imbalanced data set. The results show that the direct implementation of our framework is superior to the indirect approach, which in turn, is superior to the traditional approaches. All of them improve upon not attempting to counter the class imbalance problem. The remainder of the paper is organized as follows. In section 2, we discuss previous work on oversampling methods for class imbalances, semi-supervised learning, and discuss the functionality of K-D Trees. Section 3 introduces our framework and discusses its direct and indirect instantiations. The experimental set-up is discussed in Section 4, and the results of our experiments are presented in Section 5. Section 6 concludes the paper. ## 2 Related work This section reviews previous work related to this study. 
We first discuss the methods for inflating the minority class that were previously proposed in the context of the class imbalance problem, and we then move to a discussion of previous work in semi-supervised learning, especially for class-imbalanced data. We then describe the K-D Tree data structure along with the Nearest Neighbor Search algorithm associated with it and used in this paper.

### The class imbalance problem
The class imbalance problem corresponds to the problem where one or more classes are represented by a much smaller proportion of examples than the other classes. In such cases, classifiers tend to ignore the data from the minority class, causing systematic misclassification of these classes. The problem has been well documented for a number of years [14, 9, 3, 16, 15]. It is typically addressed in one of four ways: undersampling, oversampling, re-weighting the classes, and one-class classification. In this study, we focus on oversampling, which was shown, over the years, to be a reliable and simple approach to deal with the class imbalance problem. As discussed in [6], random oversampling is not effective as it causes overfitting of the minority class instances. Instead, it is important to generate instances that are closely related to the original instances, but not exact replicas. We review the three approaches used here to re-balance the minority class: SMOTE, ADASYN, and a Generative Adversarial Network (GAN) combined with a Variational Autoencoder (VAE).

**SMOTE and ADASYN.** The Synthetic Minority Oversampling Technique (SMOTE) [4] is an oversampling approach that generates minority class instances to balance data sets. It searches for the \(K\) closest minority neighbors of each sample point in the minority class using the Euclidean distance. For each minority class sample \(x_{i}\), the algorithm randomly chooses a number of samples from its \(K\) closest minority neighbors, denoted as \(x_{i}(nn)\). For each \(x_{i}\), we generate new samples using the following formula \[x_{i}^{new}=x_{i}+\alpha\left(x_{i}(nn)-x_{i}\right),\] where \(\alpha\) is a random number from \(0\) to \(1\). For the purpose of this work, we use the implementation found in the imbalanced-learn Python library, with \(K=2\). Adaptive Synthetic Sampling (ADASYN) is another oversampling method [8] which, instead of synthesizing the same number of samples for each minority sample as SMOTE does, uses a mechanism to automatically determine how many synthetic samples need to be generated for each minority sample. For each minority class sample \(x_{i}\), with its \(K\) nearest neighbors \(x_{i}(nn)\), it is possible to calculate the ratio \(r_{i}=\frac{x_{i}(nn)}{K}\), and then normalize this ratio to obtain the density distribution \(\Gamma_{i}=\frac{r_{i}}{\sum r_{i}}\). The number of synthetic samples for \(x_{i}\) is then obtained by \(g_{i}=\Gamma_{i}\times G\), where \(G\) is the discrepancy between the two classes. For the purpose of this work, we use the ADASYN package from the imbalanced-learn Python library, with \(K=2\).

#### 2.0.1 Generative Adversarial Networks (GANs)
A generative adversarial network (GAN) [7] consists of two neural networks: a generator \(G\) and a discriminator \(D\). These two networks are trained in opposition to one another. The generator \(G\) takes as input a random noise vector \(z\sim p(z)\) and outputs \(m\) samples \(\widetilde{x}^{i}=G\left(z^{i}\right)\).
The discriminator \(D\) receives as input the training samples \(x^{i}\) and \(\widetilde{x}^{i}\) and uses the loss function \[\check{V}_{max}=\frac{1}{m}\sum_{i=1}^{m}\log D\left(x^{i}\right)+\frac{1}{m}\sum_{i=1}^{m}\log\left(1-D\left(\widetilde{x}^{i}\right)\right)\] to update the discriminator \(D\)'s parameters \(\theta_{d}\). Then it uses another random noise vector \(z\sim p(z)\) and the loss function \[\check{V}_{min}=\frac{1}{m}\sum_{i=1}^{m}\log\left(1-D\left(G\left(z^{i}\right)\right)\right)\] to update the generator \(G\)'s parameters \(\theta_{g}\). A VAE-GAN is a Variational Autoencoder combined with a Generative Adversarial Network [17]. It uses a GAN discriminator in place of a Variational Autoencoder (VAE) decoder to learn the loss function. The VAE loss function equals the negative sum of the expected log-likelihood (the reconstruction error) and a prior regularization term, as well as a binary cross-entropy in the discriminator. This is what was used in this work.

### Semi-supervised learning
Semi-supervised learning is a learning paradigm in which unlabeled data are leveraged along with the labeled training set to help improve classification performance [23]. Semi-supervised learning is highly practical since labeling work is usually costly in terms of manpower and material resources [27]. There are two common methods used in semi-supervised learning [26]. The first one relies on the "clustering assumption", which assumes that the data follows a cluster structure and that samples in the same cluster belong to the same category. This is the approach we follow in our research, as will be discussed in the next section. Another method follows the "manifold assumption", which assumes that the data is distributed on a manifold structure and that adjacent samples on that structure should output similar values. In such methods, the degree of proximity is often used to describe the degree of similarity. The manifold hypothesis can be viewed as a generalization of the clustering hypothesis. Therefore, with no restriction on the format of the output value, the manifold assumption is more widely applicable than the clustering assumption as it can be used for a variety of learning tasks. Since we are working in the context of detection, a special case of classification, the "clustering assumption" is sufficient for our purposes. As discussed in the introduction, several works have looked at the question of semi-supervised learning in the context of the class imbalance problem [11, 18, 25]. While interesting, these works are not closely related to the work in this paper since they do not consider the approach that consists of inflating the minority class, nor do they look at the extremely imbalanced context.

### K-Dimensional tree and nearest neighbor search
The search for nearest neighbors that we propose to undertake to identify data close to the labeled minority class data is computationally expensive. K-D Trees (k-dimensional trees) are a kind of binary tree which divides the k-dimensional data space hierarchically and stores the points of the k-dimensional space so that its tree-shaped structure can be queried afterwards [21]. Using K-D Trees can reduce the search space compared to other clustering algorithms such as K-Nearest Neighbors, which has a time complexity of \(O(n\times m)\), where \(n\) is the size of the data set and \(m\) is its dimension.
Since our corpus embedding method generates a high-dimensional corpus feature matrix (\(n\times 1024\)), we used K-D Tree search rather than other search algorithms in order to reduce the search time complexity. Commonly, the K-D Tree can be constructed in \(O(n\log n)\), and the query algorithm has a running time of \(O(\sqrt{n}+k)\), where \(k\) is the number of nearest points reported.

## 3 Our Framework
We propose a data augmentation method which, instead of randomly sampling from the minority class or generating synthetic minority samples based on the existing minority samples, leverages the unlabeled data set. The method is geared at non-continuous feature spaces such as those emanating from text applications, which present particular difficulty for data generation processes. Our approach works on binary classification data and takes as input a labeled imbalanced data set \(LI\) and an unlabeled data set \(U\), drawn from the same population. It outputs a labeled balanced data set \(LB\) that is then used for classification. It works as follows:

**Step 1:** Pre-process \(LI\) and \(U\) using the same embedding process and separate the majority from the minority samples.
**Step 2 (optional):** Use the minority set as a sample to generate synthetic data resembling that data.
**Step 3:** Construct a K-D Tree from the minority samples of Step 1 or the augmented minority samples from Step 2.
**Step 4:** Conduct a Nearest Neighbor Search to identify points from \(U\) nearest to the K-D Tree.
**Step 5:** Add these points to the minority set, form a new labeled balanced data set \(LB\), and use \(LB\) to train a classifier.

We consider two instantiations of our framework: the direct approach and the indirect approach. The direct approach is illustrated in Figure 1. That approach skips Step 2. In other words, it constructs a K-D Tree from the labeled minority instances present in \(LI\). Because the minority class can contain a very small number of samples, we also propose the indirect approach, which implements Step 2. The indirect approach is illustrated in Figure 2. The rationale for the indirect approach is that the minority data set may be very small and not diverse enough to help direct the search for appropriate additional instances from \(U\). Generating synthetic samples, which will not be included in \(LB\) but which will help select actual instances from \(U\), can, we assume, enhance the method. We now describe each of the steps of our algorithm in detail.

Figure 1: The K-D Tree data generation method

Step 1: Pre-processing: In this step, we first conduct the corpus cleaning work. Since we are working with Twitter textual data, we remove all special symbols, white spaces and emoticon icons from the content of the tweets. This helps reduce the complexity of the text content. In addition, we remove all the stop words, which forces the model to pay more attention to vocabulary with practical meaning than to common terms. To minimize the effect of external factors such as the word embedding on the evaluation, we train a BERT (Bidirectional Encoder Representations from Transformers) model [5], starting from pre-trained checkpoints provided by the Digital Epidemiology Lab [19], to compute embeddings for our tweet corpora.

#### 3.2.1 Step 2: Synthetic Sample Generation
In this step, used by the indirect approach, we generate synthetic samples by using both classical and deep-learning means. In particular, we use SMOTE, ADASYN and a VAE-GAN; a sketch of how these can be invoked is shown below.
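As a minimal sketch of how Step 2 can be invoked through the imbalanced-learn library mentioned in Section 2 (our illustration; the data shapes are assumptions, and the neighbor counts mirror the \(K=2\) setting used in this work):

```python
import numpy as np
from imblearn.over_sampling import SMOTE, ADASYN

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 1024))    # hypothetical embedded samples
y = np.array([0] * 240 + [1] * 10)  # 240 majority and 10 minority labels

# k_neighbors / n_neighbors correspond to the K = 2 setting used in this work.
X_sm, y_sm = SMOTE(k_neighbors=2).fit_resample(X, y)
X_ad, y_ad = ADASYN(n_neighbors=2).fit_resample(X, y)
```

Both calls return a re-balanced feature matrix and label vector; in the indirect approach, only the synthetic minority rows would be kept to seed the K-D Tree of Step 3.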
The samples are generated according to the processes described in Section 2 for each of the approaches. Please note that we also experimented with both a DC-GAN and a VAE but, since the results were not better than those obtained with a VAE-GAN, we decided not to include them in our graphs in order not to clutter the presentation.

#### 3.2.2 Step 3: K-D Tree Construction and Nearest Neighbor Search
K-D Tree Construction: In this step, we construct the tree with a recursive rule that splits the data according to the dimension/feature with the highest variance. The dimension selected for splitting is set as the root node of the K-D Tree or subtree under consideration. This is done by finding the median for this dimension and using that value as a segmentation hyperplane, i.e., all the points whose value in that dimension is smaller than the median value are placed in the left child, and all the points with a greater value are placed in the right child. This procedure is followed recursively until nodes cannot be split anymore.

Figure 2: ReplaceGAN's structure

Nearest Neighbor Search: In the search query, the search starts from the root node and moves down the tree recursively. It goes to the left or right child of the node it is currently visiting, depending on its relation to the node value. Once the search reaches a leaf node, the algorithm sets it as the "best current result". It then searches the other side of the parent to find out whether a better solution is available there. If so, it continues its search there, looking for a closer point. If such a point does not exist, it moves up the tree by one level and repeats the process. The search is completed when the root node is reached.

#### 3.2.3 Step 4: Balanced Data Set Formation and Classification
To re-balance the data set, we first assume there are \(n_{max}\) instances of the majority class and \(n_{min}\) instances of the minority class in the data set. For each method, we augment the data using the following rules:

**K-D Tree:** For each minority sample \(x_{i}\), traverse the tree composed of unlabeled data and find its \(n_{aug_{i}}=\frac{n_{max}-n_{min}}{n_{min}}\) nearest neighbors. Add the resulting \(n_{aug}=\sum_{i=1}^{n_{min}}n_{aug_{i}}\) points to the data set after assigning them to the minority class.
**SMOTE, ADASYN, GAN:** Generate \(n_{aug}=(n_{max}-n_{min})\) artificial samples, label them as minority class, and add them to the data set.

After the data set is balanced, a logistic regression classifier is trained.

## 4 Experimental Evaluation
### Data sets
_Data Set 1:_ The first data set was collected for the study by [2], which initially randomly collected a sample of 282,201 Twitter users from Canada by using the Conditional Independence Coupling (CIC) method [24]. All tweets posted by these users between January 1, 2020 and March 13, 2020 were collected, and a random subset of 1,600 tweets was further analyzed through keyword search. A carefully curated and labeled sub data set was carved out from the random subset; it includes 280 reliable and 280 unreliable tweets and represents data set \(LI\). The remaining 1,040 samples are unlabeled and correspond to data set \(U_{1}\). We created a testing set \(Test_{1}\) by randomly selecting 40 reliable and 40 unreliable tweets from \(LI\). From the rest of the labeled data, we created several \(LI_{1n}\) data sets with all the 240 reliable tweets and different numbers, \(n\), of unreliable tweets, where \(n\) belongs to the set \(\{5,6,7,8,9,10,20,30,40,50,100,150\}\).
_Data Set 2:_ The second data set is the COVID-19 Fake News Data set from [22], which includes a manually annotated data set of 10,700 social media posts and articles of real and fake news on Covid-19. We randomly selected 6,000 of them, with 3,000 true news and 3,000 fake news items, for \(LI\). We randomly selected 100 reliable and 100 unreliable tweets from \(LI\) to create our testing set, \(Test_{2}\). To create training sets \(LI_{2n}\), we randomly selected 900 samples from the true news subset and different numbers, \(n\), from the fake news subset, where \(n\) belongs to the set \(\{5,6,7,8,9,10,20,30,40,50,100,150\}\). The samples that were not selected were stripped of their labels and constitute the unlabeled data set \(U_{2}\).

### Training and Testing method
Training: In our experiments, we trained the logistic regression classifier on the two data sets (Data Set 1 and Data Set 2), using the different data augmentation methods previously discussed to balance the training set. In more detail, we ran the following series of experiments on both Data Sets 1 and 2. Each experiment was repeated for a minority class of size \(n\), where \(n\) belongs to \(\{5,6,7,8,9,10,20,30,40,50,100,150\}\). Each of the generated data sets is called \(LI_{n}\).

* Train Logistic Regression on \(LI_{n}\). The results for this series of experiments are seen on the curve called "Original".
* Train Logistic Regression on \(LI_{n}\) augmented by SMOTE, ADASYN, or VAE-GAN. The SMOTE and ADASYN functions we used in our task come from the python package "imblearn". We implemented the VAE-GAN on our own. The results for this series of experiments are reported on the curves called "SMOTE", "ADASYN" and "VAE-GAN", respectively.
* Train Logistic Regression on \(LI_{n}\) augmented using the K-D Tree and Nearest Neighbor Search technique on the \(n\) instances of the minority class present in \(LI_{n}\). We recall that this technique selects data from \(U\), the unlabeled set, that most closely resembles the \(n\) samples of the minority class. This is the Direct implementation of our framework, which skips Step 2 in the Algorithm of Section 3. The results for this series of experiments are reported on the curve called "K-D Tree".
* Train Logistic Regression on \(LI_{n}\) augmented using the K-D Tree and nearest neighbor search technique on the \(n\) instances of the minority class and their augmentations through SMOTE, ADASYN or VAE-GAN. We recall that this technique selects data from \(U\), the unlabeled set, that most closely resembles the \(n\) samples of the minority class and the synthetic samples generated from them using one of the generation methods shown above. This is the indirect implementation of our framework, which uses Step 2 in the Algorithm of Section 3. The results for this series of experiments are reported on the curves called "SMOTE-KD", "ADASYN-KD", and "ReplaceGAN".

Testing Regimen: In total, we conducted 192 tests on 24 \(LI\) data sets with different numbers of minority class samples from the 2 data sets. The final results for each of these 192 experiments are reported based on the testing sets \(Test_{1}\) and \(Test_{2}\), respectively. Since both data sets only had very few labeled data samples to use for testing, we decided to use the Bootstrap error estimation technique to evaluate the performance of our method [13].4 We report the F1, Precision, and Recall values of all the classifiers tested on the test set.
Footnote 4: The Bootstrap technique is implemented by repeating the sampling and testing procedure previously described 100 times and using the results of these experiments to estimate the real F1, Precision and Recall values and evaluate their standard deviation.

## 5 Results
The results of our experiments appear in Figures 3 and 4. Since the focus of this work is, specifically, on what happens when the number of labeled minority instances is extremely small, the horizontal axis shows the results for 5, 6, ... 10 labeled minority samples, then jumps to 20, ..., 50, and then to 100 and 150, where the methods produce results much closer to each other than in the very sparse case. The vertical axis represents the F1-measure, the Precision or the Recall obtained by the classifiers. The standard deviations at each point are indicated by a bar, visible only when it is high enough. The graphs show that distinct differences in the results really happen when \(n\), the number of minority instances initially present, is small. As \(n\) increases, the differences between the methods become less and less visible. We also find that the results are similar for Data Set 1 and Data Set 2. In general, we find that "Original", where no correction for class imbalance is made, obtains the worst performance. This is followed closely by the three synthetic approaches (SMOTE, ADASYN and VAE-GAN), with a slight advantage for VAE-GAN in Data Set 1 and a slight advantage for SMOTE and ADASYN (which show identical performance in all experiments) in Data Set 2. As shown in all graphs, the advantage gained by these synthetic resampling methods is modest. Next, in terms of performance, come the three indirect methods of our framework: SMOTE-KD, ADASYN-KD, and ReplaceGAN. We recall that these are the methods that generate synthetic data but do not use them directly. Instead, they are used to identify appropriate unlabeled samples to add to the minority class. The results show that these approaches obtain noticeably higher F1, Precision and Recall results, with a distinct advantage for ReplaceGAN. This is true for both data sets, and suggests that the addition of real data through our semi-supervised scheme, rather than synthetically generated data, is a superior proposition. Finally, the results show that, in both domains, using the Direct implementation of our framework yields a better performance than the ReplaceGAN strategy. That difference is slight in Data Set 1, where the standard deviation bars indicate that ReplaceGAN, while slightly less accurate, is more stable than K-D Tree, but it is unmistakable in Data Set 2, where ReplaceGAN is noticeably less accurate than K-D Tree. This suggests that our hypothesis regarding the advantage of greater diversity as a starting point for the unlabeled data set search for minority sample candidates did not pan out, and that the indirect implementation of our framework is less desirable than its Direct implementation. While we commented on the results qualitatively, some quantitative remarks are in order. For \(n=5\), the difference between the F1 values of the methods is remarkable. In both domains, the results obtained by "Original" are below 0.4. They get near or reach 0.4 with the synthetic resampling methods. The semi-supervised indirect methods SMOTE-KD and ADASYN-KD yield F1 measures around 0.5 in both domains, while ReplaceGAN reaches an F1 measure above 0.7 for Domain 1 and between 0.6 and 0.7 for Domain 2. Finally, the semi-supervised Direct K-D Tree method obtains an F1-measure well over 0.7 in each domain.
Until \(n=10\), a similar trend is observed. By \(n=50\), however, the fluctuation of all the methods lies in a much smaller interval, since the F1 measures are all between slightly over 0.7 and slightly over 0.8 for Domain 1, and between slightly below 0.7 and slightly above 0.8 for Domain 2. For higher values of \(n\), all methods become equivalent. This shows that the impact of our framework is much more significant in extreme class imbalance cases than in more moderate ones. In terms of run time, we tested the K-D Tree's and ReplaceGAN's running time on Data Set 2. The results are shown in Figure 5 for 3,000 and 300 reliable samples. We use Python's built-in time function to calculate the running time for each epoch. We run each method with different numbers of minority samples and report the results as an average over 50 runs. Time is measured in seconds. As expected, we found that the K-D Tree has a much lower running time than ReplaceGAN. This is because the K-D Tree method conducts many fewer root-to-leaf search queries than the ReplaceGAN strategy does, due to the smaller number of query instances present. Interestingly, however, this discrepancy is less important and eventually vanishes in smaller data sets and with larger amounts of labeled minority samples, where the stability of ReplaceGAN is also greater.

Figure 5: The running time of K-D Tree and ReplaceGAN on Data Set 2 with 3,000 reliable instances (left) and 300 reliable instances (right).

## 6 Discussion
In this paper, we presented a semi-supervised framework for identifying appropriate unlabeled samples to inflate the minority class and create a more balanced data set to learn from. The framework was designed specifically for non-continuous domains such as text, and tested on two misinformation/fake news detection data sets, where it obtained remarkable results, especially in cases of extreme class imbalance. Two categories of approaches of the framework were tested: the direct and the indirect approach. The direct approach (K-D Tree) performed better than the indirect approach using a GAN (ReplaceGAN) but was not as stable in the smaller data set (Data Set 1). The direct approach is also more efficient than the indirect one, but the disparity is less noticeable in smaller data sets. The results obtained with our framework were significantly better than those obtained by methods that augment the data by synthetic generation, thus supporting the assumption that synthetic generation in non-continuous domains such as text is not particularly useful and that semi-supervised methods such as ours fare much better. In the future, we propose to investigate the utility of the ReplaceGAN indirect approach more carefully. We will also extend our framework to different domains (e.g., the genetic domain and images), including continuous and discrete ones where an unlabeled data set exists, and test other classifiers on our resulting augmented data sets. This will allow us to test whether the advantage we noticed in text data and with logistic regression carries over to other types of domains and classifiers as well. We will also apply our method to less extremely imbalanced data sets but use it in a finer-grained manner, using a decomposition of the classes into sub-classes prior to re-sampling from the unlabeled set. This, we believe, will allow us to counter the kind of biases and unfairness introduced by incomplete data sets.
More generally, we will also attempt to use our framework in the context of a data labeling tool having only a few seed labels to start from.

## Acknowledgement
Computing resources used for this work were provided by the American University Zorro High Performance Computing System. The pre-trained BERT model was provided by the Digital Epidemiology Lab at EPFL.
2307.08319
Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data
Label-noise or curated unlabeled data is used to compensate for the assumption of clean labeled data in training the conditional generative adversarial network; however, satisfying such an extended assumption is occasionally laborious or impractical. As a step towards generative modeling accessible to everyone, we introduce a novel conditional image generation framework that accepts noisy-labeled and uncurated unlabeled data during training: (i) closed-set and open-set label noise in labeled data and (ii) closed-set and open-set unlabeled data. To combat it, we propose soft curriculum learning, which assigns instance-wise weights for adversarial training while assigning new labels for unlabeled data and correcting wrong labels for labeled data. Unlike popular curriculum learning, which uses a threshold to pick the training samples, our soft curriculum controls the effect of each training instance by using the weights predicted by the auxiliary classifier, resulting in the preservation of useful samples while ignoring harmful ones. Our experiments show that our approach outperforms existing semi-supervised and label-noise robust methods in terms of both quantitative and qualitative performance. In particular, the proposed approach is able to match the performance of (semi-) supervised GANs even with less than half the labeled data.
Kai Katsumata, Duc Minh Vo, Tatsuya Harada, Hideki Nakayama
2023-07-17T08:31:59Z
http://arxiv.org/abs/2307.08319v1
# Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data ###### Abstract Label-noise or curated unlabeled data is used to compensate for the assumption of clean labeled data in training the conditional generative adversarial network; however, satisfying such an extended assumption is occasionally laborious or impractical. As a step towards generative modeling accessible to everyone, we introduce a novel conditional image generation framework that accepts noisy-labeled and uncurated unlabeled data during training: (i) closed-set and open-set label noise in labeled data and (ii) closed-set and open-set unlabeled data. To combat it, we propose soft curriculum learning, which assigns instance-wise weights for adversarial training while assigning new labels for unlabeled data and correcting wrong labels for labeled data. Unlike popular curriculum learning, which uses a threshold to pick the training samples, our soft curriculum controls the effect of each training instance by using the weights predicted by the auxiliary classifier, resulting in the preservation of useful samples while ignoring harmful ones. Our experiments show that our approach outperforms existing semi-supervised and label-noise robust methods in terms of both quantitative and qualitative performance. In particular, the proposed approach is able to match the performance of (semi-) supervised GANs even with less than half the labeled data. ## 1 Introduction Significant breakthroughs [19, 20, 36, 3, 22] in class-conditional image generation (cGANs) yield images with high fidelity and diversity; yet they are all trained in a supervised fashion where the training data consists of carefully labeled samples. However, the training data for supervised learning requires immense labor-cost, making it difficult to achieve a sophisticated performance. To deflate the labor-cost in collecting data, semi-supervised [17, 13] and label-noise robust [12, 29] approaches have been investigated. Despite substantial efforts of semi-supervised cGANs [17, 13] to reduce the amount of labeled data, a dataset with a high annotation cost is still required. In this work, to significantly reduce the data collection and annotation cost, we present a new framework for training cGANs (see Fig. 1), which utilizes unreliable labeled data and uncurated unlabeled data. Namely, in this study, we aim to unify the research directions for training conditional image generation on imperfect data: annotation quality [12, 29] and unannotated data [17, 13]. In our realistic data assumption, the dataset consists of two parts: noisy labeled data (_i.e_., labeled data with closed-set and open-set label noise) and uncurated unlabeled data (_i.e_., unlabeled data with closed-set and open-set samples). Here, closed-set and open-set label noise mean that the actual labels of samples with label noise are inside and outside the known category (label) set, respectively. Closed-set and open-set unlabeled samples also mean that the actual unknown labels are inside and outside the known category set, respectively. The objective of the new framework is to generate the images with the known categories. This setting generalizes (i) semi-supervised image generation [17, 13] where the labels are reliable, and (ii) label-noise image generation [12, 29] where labeled data contains only closed-set label noise, and unlabeled data are not available. 
Hence, this new data assumption enables the use of personal collection or user-annotated data in conditional image synthesis. To address the complex data, we propose soft curriculum learning, which makes clean and fully labeled data from noisy and partially labeled data while assigning weights to samples for adversarial training. It eliminates the harmful samples (_e.g_., samples failed to assign labels and samples far away from the training categories) while preserving the useful ones (_e.g_., samples with proper labels). Motivated by the aim, we jointly train cGAN and an auxiliary classifier that assigns clean or new labels to labeled or unlabeled samples, respectively, and confidences to all real samples. Our implicit sample selection mechanism addresses the shortcomings of curriculum learning techniques [34, 35, 7, 4], which potentially retain harmful samples and miss helpful ones because it explicitly uses a predetermined or adaptive threshold. Consequently, our approach allows curriculum learning to handle noisy labeled and uncurated unlabeled data naturally, resulting in maintaining the number of training samples while reducing the effects of adverse samples. Since our method is free of the hard selection procedure, we term it as _soft curriculum learning_. Our comprehensive experiments demonstrate that soft curriculum learning works well in challenging imperfect datasets containing label noise and unlabeled data. More precisely, we observe performance gains of our method over baselines in terms of Frechet Inception Distance (FID) [10], Inception Score (IS) [28], \(F_{1/8}\), \(F_{8}\)[27], and intra-FID (iFID). Qualitative results also indicate the effectiveness of our method in terms of image fidelity and diversity. In summary, our main contributions are as follows: 1. We introduce a new problem: conditional image generation trained on datasets that consists of labeled data with closed-set and open-set label noise and unlabeled data composed of closed-set and open-set samples. 2. We develop a soft curriculum technique for correcting wrong labels and assigning temporal labels while weighting importances of each instance by employing an additional classifier trained jointly. 3. We consistently demonstrate the effectiveness of our method in experiments on a variety of GAN architectures (_i.e_., projection- and classifier-based cGANs) and datasets. Note that recent attempts at limited data employ only a projection GAN. ## 2 Related work **Conditional image generation with imperfect data.** One of the prominent research directions in image generation is to build a training framework without requiring large and curated datasets. Semi-supervised learning approaches [5, 17, 13] explore cGANs in partially labeled data. Introducing an additional classifier enables a discriminator to train on labeled real data. OSSGAN [13] considers a more practical scenario where the labeled and unlabeled data do not share the label space, and it proposes entropy regularization to identify open-set samples smoothly. Robust learning for image generation [12, 29] aims to learn a clean conditional distribution even when labels are noisy by modeling a noise transition. In this study, we extend these directions to a real-world scenario. Our setting relaxes the assumption of label reliability in a semi-supervised fashion and allows robust learning to exploit open-set label noise and unlabeled data. 
**Semi-supervised and robust learning in image recognition.** Image recognition also remains the issue that supervised learning requires datasets, which are difficult and sometimes impossible to collect, _i.e_. cleanly labeled large-scale datasets. To address the issue, two popular frameworks (_i.e_., semi-supervised [23, 9] and label-noise robust learning [21]) have been explored in recent decades. Recent attempts address a more realistic scenario where the categories of samples are not bounded by the known categories. Open-set semi-supervised learning [26, 34, 18] involves unlabeled data containing samples with categories unseen in labeled data, aiming to classify closed-set samples precisely while rejecting open-set samples. Learning methods robust to closed-set and open-set label noise [1, 25, 33, 31] generalize methods that only consider closed-set noise [2, 21]. In this study, we attempt to unify these research directions that are independently addressed in conditional image synthesis. ## 3 Problem statement We present a novel training setting for data-efficient conditional image generation that leverages noisy labeled data and uncurated unlabeled data. For \(K\)-class conditional image generation, let \(\mathcal{D}_{l}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{n_{l}}\) be the noisy labeled training set consisting of \(n_{l}\) labeled samples, where a \(d\)-dimensional instance \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) and its corresponding noisy label \(\mathbf{y}_{i}\in\mathcal{Y}\) that are sampled from labeled data distribution \(p(\mathbf{x},\mathbf{y}):(\mathbf{x}_{i},\mathbf{y}_{i})\sim p(\mathbf{x},\mathbf{y})\). The noisy label space \(\mathcal{Y}=\{\mathbf{e}^{(1)},\ldots,\mathbf{e}^{(K-1)},\mathbf{e}^{(K)}\}\) consists of the standard basis vectors of the \(K\)-dimensional space. The clean label space \(\bar{\mathcal{Y}}=\mathcal{Y}\cup\{\text{open-set classes}\}\) is inaccessible. Let \(\mathcal{D}_{u}=\{\mathbf{u}_{i}\}_{i=1}^{n_{u}}\) be an uncurated unlabeled training set having \(n_{u}\) samples, where an instance \(\mathbf{u}_{i}\in\mathbb{R}^{d}\) is sampled from unlabeled data distribution \(p(\mathbf{u}):\mathbf{u}_{i}\sim p(\mathbf{u})\). Unlabeled data also includes both closed-set and open-set samples. The goal of the conditional image generation is to model the true distribution without label noise via a generator \(G\) and a discriminator \(D\). The generator \(G\) generates samples \(G(\mathbf{z},\mathbf{y})\) from a latent vector \(\mathbf{z}\in\mathbb{R}^{d_{z}}\) and a conditioning label \(\mathbf{y}\) drawn from a prior distribution \((\mathbf{z},\mathbf{y})\sim q(\mathbf{z},\mathbf{y})=q(\mathbf{z})q(\mathbf{y})\), where \(q(\mathbf{z})\) is typically the standard Gaussian distribution and \(q(\mathbf{y})\) is the uniform distribution over \(\mathcal{Y}\). The discriminator \(D\) aims to identify fake samples \((G(\mathbf{z},\mathbf{y}),\mathbf{y})\) from real samples \((\mathbf{x},\mathbf{y})\). Before formulating our method, we introduce a supervised cGAN model. 
The conditional GANs for a fully and cleanly labeled dataset optimize the losses \(\mathcal{L}_{D}\) and \(\mathcal{L}_{G}\) for the discriminator and the generator, respectively: \[\mathcal{L}_{D}=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim p(\mathbf{x},\mathbf{y})}[f_{D}(-D(\mathbf{x},\mathbf{y}))]+\mathbb{E}_{(\mathbf{z},\mathbf{y})\sim q(\mathbf{z},\mathbf{y})}[f_{D}(D(G(\mathbf{z},\mathbf{y}),\mathbf{y}))], \tag{1}\] \[\mathcal{L}_{G}=\mathbb{E}_{(\mathbf{z},\mathbf{y})\sim q(\mathbf{z},\mathbf{y})}[-D(G(\mathbf{z},\mathbf{y}),\mathbf{y})], \tag{2}\] where \(f_{D}(\cdot)=\max(0,1+\cdot)\) is a hinge loss [16, 30] for the discriminator. Alternately updating the generator and the discriminator parameters with \(\mathcal{L}_{G}\) and \(\mathcal{L}_{D}\) yields a generator that generates indistinguishable samples and a discriminator that distinguishes fake from real samples well. To present our method, we customize the cGANs (Eqs. (1) and (2)). Although SoTA cGANs achieve outstanding performance, the absence of a dataset with sufficient quantity and reliable labels leads to poor performance and training instability. The difficulties in training on a dataset with limited quantity and quality are how to improve the stability of the training and how to estimate appropriate labels for unlabeled data under noisy labels. To overcome these difficulties, we consider a technique that assigns labels while handling label noise, based on curriculum learning and robust learning.

## 4 Method
**Intuitive idea**. Curriculum learning [34, 35, 7, 4] filters out adverse samples from the dataset, aiming to train a model on only useful samples. However, since curriculum learning employs explicit thresholds, it does not leverage the features of ignored samples, resulting in shrinking training datasets. Furthermore, curriculum learning methods [34] for semi-supervised learning maintain label noise. To overcome these flaws, we consider a safer way of learning cGANs on noisy data, aiming to reduce the adverse effect of misclassification while maintaining the amount of training data. Therefore, we have to achieve three objectives: handling label noise containing open-set noise; handling unlabeled data including open-set samples; and eliminating samples causing negative effects from both labeled and unlabeled data. Our main idea is to make clean data from noisy labeled and uncurated unlabeled data and to control the effects of each instance tolerantly. Our method can train the discriminator on all samples via the instance-wise weight distribution, label correction, and label assignment (Fig. 2), unlike curriculum learning, which picks unlabeled samples and trains a model on all the labeled data and the selected unlabeled data. Our instance-wise weighting mechanism reduces the negative effects of label noise in labeled data by assigning small weights to samples that could not be corrected by the auxiliary classifier or that are open-set.

Figure 2: Overview of the proposed method. The auxiliary classifier is trained with the classification loss \(l_{\mathrm{GCE}}\) (Eq. (7)). It corrects wrong labels in labeled samples by \(C(\mathbf{x})\), assigns labels to unlabeled samples by \(C(\mathbf{u})\), and distributes confidences \(c\) for the discriminator optimization (Eq. (10)). The discriminator is trained with the adversarial loss for labeled data, unlabeled data, and fake data (Eqs. (5), (8), and (9)). Zoom in for best view.
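Before describing the overall concept, the supervised losses of Eqs. (1) and (2) can be made concrete with a minimal PyTorch-style sketch (our illustration; the function and variable names are assumptions, not the authors' code):

```python
import torch

def d_hinge_loss(d_real, d_fake):
    # Eq. (1): f_D(.) = max(0, 1 + .) applied to real and fake discriminator outputs.
    return torch.relu(1.0 - d_real).mean() + torch.relu(1.0 + d_fake).mean()

def g_loss(d_fake):
    # Eq. (2): the generator maximizes the discriminator score on fake samples.
    return (-d_fake).mean()
```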
**Overall concept**. In addition to a generator \(G:\mathbb{R}^{d_{z}}\times\mathcal{Y}\rightarrow\mathbb{R}^{d}\) and a discriminator \(D:\mathbb{R}^{d}\times\Delta^{K-1}\rightarrow\mathbb{R}\), we employ a classifier \(C:\mathbb{R}^{d}\rightarrow\Delta^{K-1}\), where \(\Delta^{K-1}\) is a probability simplex whose vertices are in \(\mathcal{Y}\). To extend the above loss functions (Eqs. (1) and (2)) to our setting, we introduce discriminator losses for noisy labeled data and uncurated unlabeled data, \(\mathcal{L}^{\text{lbl}}_{\text{adv}}\) and \(\mathcal{L}^{\text{unlbl}}_{\text{adv}}\), and an auxiliary classifier loss \(\mathcal{L}_{\text{cls}}\). Our approach can be divided into four key components: training a robust auxiliary classifier, assigning new labels to unlabeled data, correcting labels for labeled data, and weighting the loss for real data (_i.e_., both labeled and unlabeled data). For involving noisy labeled and unlabeled data, we optimize the loss functions \(\mathcal{L}_{D}\) and \(\mathcal{L}_{G}\): \[\mathcal{L}_{D}=\mathcal{L}^{\text{lbl}}_{\text{adv}}+\mathcal{L}^{\text{unlbl}}_{\text{adv}}+\mathcal{L}^{\text{fake}}_{\text{adv}}+\lambda\mathcal{L}_{\text{cls}}, \tag{3}\] \[\mathcal{L}_{G}=\mathbb{E}_{(\mathbf{z},\mathbf{y})\sim q(\mathbf{z},\mathbf{y})}[-D(G(\mathbf{z},\mathbf{y}),\mathbf{y})], \tag{4}\] where \(\lambda\) is a balancing parameter between the adversarial loss and the classification loss. We use the discriminator loss for fake data in the same way as in the supervised setting: \[\mathcal{L}^{\text{fake}}_{\text{adv}}=\mathbb{E}_{(\mathbf{z},\mathbf{y})\sim q(\mathbf{z},\mathbf{y})}[f_{D}(D(G(\mathbf{z},\mathbf{y}),\mathbf{y}))]. \tag{5}\] Soft curriculum is an instance-wise weighting framework for discriminator training, which aims to assign small weights to harmful or irrelevant samples (_e.g_., wrongly labeled closed-set samples and open-set samples) and large weights to helpful samples (_e.g_., correctly labeled samples).

**Robust training of auxiliary classifier**. We employ an auxiliary classifier for label assignment and correction (the details are given in a later paragraph). In training the classifier, besides real labeled data, we also use generated samples to increase the number of training samples. Incorporating generated samples into the training may prevent memorization of training samples (_i.e_., overfitting). The classification loss is given by: \[\mathcal{L}_{\text{cls}}=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim p(\mathbf{x},\mathbf{y})}[l_{\text{GCE}}(C(\mathbf{x}),\mathbf{y})]+\mathbb{E}_{(\mathbf{z},\mathbf{y})\sim q(\mathbf{z},\mathbf{y})}[l_{\text{GCE}}(C(G(\mathbf{z},\mathbf{y})),\mathbf{y})]. \tag{6}\] For robust classification with label noise, we use the generalized cross entropy [38], which is a generalization of the mean absolute error (MAE) [6] and the cross entropy. The generalized cross entropy loss is given by \[l_{\text{GCE}}(\mathbf{x},\mathbf{y})=\frac{1-(\mathbf{x}^{\text{T}}\mathbf{y})^{q}}{q}, \tag{7}\] where the hyperparameter \(q\in[0,1]\) controls the trade-off between optimization and noise robustness. When \(q=1\), it is equivalent to the MAE, which is robust to label noise but difficult to optimize. As \(q\rightarrow 0\), it approaches the cross entropy loss, which can be optimized easily. The discriminator and the classifier share the feature extractor to extract features efficiently. We use the classifier prediction for label assignment for unlabeled data and label correction for labeled data.
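As a small illustration of Eq. (7), the generalized cross entropy can be sketched as follows (our code, assuming softmax outputs and one-hot labels as inputs):

```python
import torch

def gce_loss(probs, targets, q=0.7):
    # Eq. (7): (1 - (x^T y)^q) / q; q -> 0 recovers cross entropy, q = 1 gives the MAE.
    p_y = (probs * targets).sum(dim=1)  # predicted probability of the given label
    return ((1.0 - p_y.pow(q)) / q).mean()
```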
**Label assignment for unlabeled data**. To assign new labels to unlabeled data, we take the classifier's softmax outputs \(\hat{\mathbf{y}}=C(\mathbf{u})\) as a condition in the discriminator inputs. We use soft labels (_i.e_., probability vectors) instead of hard labels for robustness to classification errors and open-set samples. Soft labels prevent the discriminator inputs from being corrupted by classifier mistakes: even when the classifier errs, a soft label still assigns a small probability to the correct class and avoids assigning a probability of 1 to the wrong class.

**Label correction for labeled data**. To correct noisy labels for labeled data, we take the interpolation between a given label and a predicted label, \((\mathbf{y}+\hat{\mathbf{y}})/2\), before feeding labels into the discriminator, where \(\hat{\mathbf{y}}=C(\mathbf{x})\). Since some samples have proper labels, depending on the label noise ratio, overwriting the given labels would lose helpful information about samples with correct labels. We use the simple average because an average weighted with confidence may amplify the negative effects of wrong predictions. While we use predicted labels in the discriminator inputs for real labeled and unlabeled samples, we maintain the labels for generated samples because their labels are already proper.

**Confidence assignment**. To focus on helpful samples, we quantify the sample-wise importance in the discriminator training via classifier predictions. The discriminator losses for labeled and unlabeled data are defined by \[\mathcal{L}^{\text{lbl}}_{\text{adv}}=\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim p(\mathbf{x},\mathbf{y})}[cf_{D}(-D(\mathbf{x},(\mathbf{y}+\hat{\mathbf{y}})/2))], \tag{8}\] \[\mathcal{L}^{\text{unlbl}}_{\text{adv}}=\mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}[cf_{D}(-D(\mathbf{u},\hat{\mathbf{y}}))], \tag{9}\] where \(\hat{\mathbf{y}}=C(\mathbf{x})\) in Eq. (8) and \(\hat{\mathbf{y}}=C(\mathbf{u})\) in Eq. (9) are the softmax outputs of the classifier, and the confidence in the soft curriculum, \(c\in[0,1]\), is one minus the normalized entropy of the classifier prediction: \[c=1-\frac{H(\hat{\mathbf{y}})}{\log K},\qquad H(\hat{\mathbf{y}})=-\sum_{\hat{y}_{i}\in\hat{\mathbf{y}}}\hat{y}_{i}\log\hat{y}_{i}. \tag{10}\] This assigns a large \(c\) to samples with high-confidence predictions and a small \(c\) to samples with low-confidence ones.

**Implementation details.** In the experiments on the Tiny ImageNet [14] dataset at \(64\times 64\) resolution, we use a mini-batch size of \(1024\), a latent dimension of \(100\), and learning rates of \(1\times 10^{-4}\) and \(4\times 10^{-4}\) for the generator and the discriminator, respectively. In the experiments on the ImageNet [24] and WebVision [15] datasets at \(128\times 128\) resolution, we use a mini-batch size of \(256\), a latent dimension of \(120\), and learning rates of \(5\times 10^{-5}\) and \(2\times 10^{-4}\) for the generator and the discriminator, respectively. We update the discriminator in two steps per iteration. We train the auxiliary classifier with the same learning rate as the discriminator. We selected the parameter \(\lambda\) in preliminary experiments on the 150-class TinyImageNet dataset and set it to \(0.1\) for all the experiments. The parameter \(q\) in the generalized cross entropy is \(0.7\), which is the default value in [38].

## 5 Experiments
**Datasets.** For a comprehensive evaluation, we perform experiments on the TinyImageNet [32], ImageNet [24], and WebVision [15] datasets. We construct partially labeled datasets consisting of noisy labeled and uncurated unlabeled samples to benchmark our method.
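Before detailing the dataset construction, the three label-handling steps above can be summarized in one short sketch (our illustration; the tensor names are assumptions):

```python
import math
import torch

def soft_curriculum(y_labeled, probs_labeled, probs_unlabeled, K):
    # Label correction (labeled data): average of the given and predicted labels.
    y_corrected = 0.5 * (y_labeled + probs_labeled)
    # Label assignment (unlabeled data): the classifier's soft prediction itself.
    y_assigned = probs_unlabeled

    def confidence(p):
        # Eq. (10): one minus the normalized entropy of the prediction.
        ent = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
        return 1.0 - ent / math.log(K)

    return y_corrected, y_assigned, confidence(probs_labeled), confidence(probs_unlabeled)
```

The returned confidences are the instance-wise weights \(c\) that multiply the hinge terms in Eqs. (8) and (9).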
We use four variables that control a dataset configuration: the ratio of label noise, the number of closed-set classes, the labeled sample ratio, and the usage ratio. For the WebVision dataset, we omit the procedure for injecting label noise since it already contains label noise. To introduce open-set label noise, we first shuffle the labels according to the ratio of label noise: we change a label to another label, chosen uniformly at random, with probability equal to the label noise ratio. The label transition is run among all the classes. Second, we divide the fully labeled dataset with flipped labels into a part of closed-set classes and a part of open-set classes. The remaining classes, i.e., the 1000 classes minus the number of closed-set classes, are considered open-set classes. Since label noise is injected before the separation into closed-set and open-set classes, the subset for closed-set classes contains both open-set and closed-set label noise. Then, we take a subset of the closed-set samples according to the labeled sample ratio as labeled data, and we take the remaining closed-set samples as unlabeled data. Finally, we extract unlabeled samples from the open-set class samples according to the usage ratio and concatenate them with the unlabeled samples that come from closed-set samples. We use a usage ratio of 100% unless otherwise specified.

**Compared methods.** We use CR-BigGAN [37] with DiffAugment [39] (DiffAug CR-GAN) as a base architecture, and we build all the compared methods on it. We compare the proposed method (**Ours**) with DiffAug CR-GAN [3], RandomGAN, SingleGAN, \(S^{3}\)GAN [17], OSSGAN [13], and CurriculumGAN. RandomGAN is a naive baseline that assigns labels to unlabeled samples by picking a label \(\mathbf{y}\in\mathcal{Y}\) with equal probability. SingleGAN is another simple baseline that assigns the constant label \([1/K,\dots,1/K]^{\mathsf{T}}\) to all unlabeled samples without considering their content. CurriculumGAN uses curriculum learning for semi-supervised learning, following [34], instead of our soft curriculum. For further comparison, we introduce two types of extended baselines (_i.e_., relabeling and rcGAN [12]). The extended relabeling baselines, denoted by the prefix 're', correct the labels of labeled samples by using Eq. (8) and the predicted labels \(\hat{\mathbf{y}}=C(\mathbf{x})\). The methods with the prefix 'rc' use rcGAN, a technique for robust learning with label noise. The details of the compared methods are given in the supplementary material.

**Evaluation metrics.** We use IS [28], FID [10], iFID, the \(F_{1/8}\) score [27], and the \(F_{8}\) score [27].
FID measures the distance between the generated and reference images in the feature space using overall data and iFID uses per-class data, but \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \(F_{8}\uparrow\) & \(F_{1/8}\uparrow\) & FID\(\downarrow\) & IS\(\uparrow\) & iFID\(\downarrow\) \\ \hline DiffAug CR-GAN [39] & 0.9341 \(\pm\).0103 & 0.9669 \(\pm\).0034 & 41.6848 \(\pm\) 1.0075 & 12.0270 \(\pm\) 0.3451 & 227.2077 \(\pm\) 3.3538 \\ RandomGAN & 0.6908 \(\pm\).0310 & 0.8061 \(\pm\).0492 & 84.2262 \(\pm\) 9.7936 & 7.6780 \(\pm\) 0.6785 & 312.8149 \(\pm\) 6.1245 \\ SingleGAN & 0.9374 \(\pm\).0009 & 0.9761 \(\pm\).0018 & 35.5989 \(\pm\) 1.5018 & 12.3043 \(\pm\) 0.2951 & 233.8048 \(\pm\) 4.3930 \\ \(S^{3}\)GAN [17] & 0.9287 \(\pm\).0027 & 0.9667 \(\pm\).0031 & 39.8652 \(\pm\) 1.2017 & 12.1443 \(\pm\) 0.2344 & 223.5165 \(\pm\) 0.5562 \\ OSSGAN [13] & 0.8954 \(\pm\).0119 & 0.9598 \(\pm\).0029 & 46.9769 \(\pm\) 3.0722 & 10.8745 \(\pm\) 0.4495 & 236.6557 \(\pm\) 5.0004 \\ CurriculumGAN & 0.9146 \(\pm\).0128 & 0.9388 \(\pm\).0144 & 34.4142 \(\pm\) 0.6545 & 13.3153 \(\pm\) 0.6545 & 217.9899 \(\pm\) 1.5723 \\ \hline \multirow{6}{*}{reRandomGAN} & \multirow{6}{*}{0.4890 \(\pm\).0396} & 0.7653 \(\pm\).0154 & 88.9622 \(\pm\) 3.7217 & 6.8242 \(\pm\) 0.5130 & 317.4159 \(\pm\) 2.2235 \\ & & & & & \\ \cline{1-1} reSingleGAN & 0.8969 \(\pm\).0047 & 0.9422 \(\pm\).0099 & 36.2851 \(\pm\) 1.3121 & 12.4421 \(\pm\) 0.3234 & 237.1689 \(\pm\) 2.2875 \\ \cline{1-1} re\(S^{3}\)GAN & 0.9089 \(\pm\).0070 & 0.9476 \(\pm\).0024 & 37.4676 \(\pm\) 0.7783 & 13.0772 \(\pm\) 0.2206 & 221.3113 \(\pm\) 0.6992 \\ \cline{1-1} reOSGAN & 0.8745 \(\pm\).0037 & 0.9320 \(\pm\).0044 & 40.1548 \(\pm\) 1.1753 & 12.1081 \(\pm\) 0.1531 & 229.1839 \(\pm\) 1.6075 \\ \cline{1-1} reDiffAugCRGAN & 0.9332 \(\pm\).0044 & 0.9617 \(\pm\).0078 & 43.5950 \(\pm\) 2.2703 & 18.1816 \(\pm\) 0.4097 & 226.1654 \(\pm\) 4.4462 \\ \cline{1-1} reRandomGAN & 0.7466 \(\pm\).0298 & 0.8801 \(\pm\).0312 & 69.7574 \(\pm\) 4.8421 & 7.5598 \(\pm\) 0.9622 & 293.7392 \(\pm\) 5.9841 \\ \cline{1-1} reSingleGAN & 0.9409 \(\pm\).0072 & 0.9743 \(\pm\).0026 & 34.1262 \(\pm\) 1.3978 & 12.9476 \(\pm\) 0.3931 & 223.1789 \(\pm\) 3.8244 \\ \cline{1-1} re\(S^{3}\)GAN & 0.9258 \(\pm\).0072 & 0.9661 \(\pm\).0056 & 42.0012 \(\pm\) 2.1783 & 12.0116 \(\pm\) 0.3488 & 228.4053 \(\pm\) 4.8632 \\ \cline{1-1} reOSGAN & 0.9281 \(\pm\).0082 & 0.9692 \(\pm\).0006 & 42.0705 \(\pm\) 1.1632 & 12.0458 \(\pm\) 0.2670 & 227.5382 \(\pm\) 2.3760 \\ \cline{1-1} **Ours** & **0.9581**\(\pm\).0063 & **0.9789**\(\pm\).0003 & **29.6607**\(\pm\) 0.4979 & **14.7235**\(\pm\) 0.3509 & **206.6937**\(\pm\) 2.1925 \\ \hline \hline \end{tabular} \end{table} Table 1: Average and standard deviation of \(F_{8}\), \(F_{1/8}\), FID, Inception score (IS), and iFID over three trials on TinyImageNet with 150 closed-set classes, 20% labeled samples, and 10% label noise. We compare our proposed method with 15 baselines. Our method yields better performance (_i.e_., the higher \(F_{8}\), \(F_{1/8}\), and IS and lower FID and iFID) and consistent performance (small standard deviation). The best results are highlighted in **bold**, and the second best results are \(\underline{underlined}\). it was not possible to separate the evaluated values into fidelity and diversity. On the contrary, \(F_{1/8}\) and \(F_{8}\) quantify the fidelity and diversity, respectively. We sample 10K generated images for all metrics and use the evaluation set as the reference distribution for FID, iFID, \(F_{1/8}\), and \(F_{8}\). 
**Comprehensive study.** We first conduct a quantitative study on the TinyImageNet dataset with 150 closed-set classes, 50 open-set classes, 20% labeled data, and 10% label noise. Namely, the dataset consists of 15K labeled \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{ADC-GAN [11]} & \multicolumn{2}{c}{TAC-GAN [8]} \\ \cline{2-5} & FID\(\downarrow\) & IS\(\uparrow\) & FID\(\downarrow\) & IS\(\uparrow\) \\ \hline Supervised & 66.5229 & 8.6387 & 50.4258 & 9.2594 \\ RandomGAN & 40.2519 & 10.5410 & 37.7453 & 10.9988 \\ SingleGAN & 43.6353 & 10.2666 & 38.5622 & 10.6992 \\ \(S^{3}\)GAN & 50.7904 & 10.0583 & 39.2887 & 10.5139 \\ OSSGAN & 113.1070 & 4.7492 & 41.7552 & 10.3462 \\ **Ours** & **37.0131** & **12.1424** & **37.4393** & **11.3654** \\ \hline \hline \end{tabular} \end{table} Table 6: Quantitative comparison of other cGAN models on TinyImageNet. In addition to a projection-based GAN, our method shows the performance gain over classifier-based cGAN models. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{3}{c}{30\% label noise} & \multicolumn{3}{c}{50\% label noise} \\ \cline{2-6} & \(F_{8}\uparrow\) & \(F_{1/8}\uparrow\) & FID\(\downarrow\) & IS\(\uparrow\) & iFID\(\downarrow\) \\ \hline AB1 & 0.8874 & 0.9615 & 36.2120 & 12.3104 & 232.8597 & 0.8910 & 0.9427 & 35.5164 & 12.2659 & 245.5752 \\ AB2 & 0.9092 & 0.9619 & 39.7125 & 11.6496 & 236.4422 & 0.9131 & 0.9671 & 40.9006 & 10.7329 & 253.2198 \\ AB3 & 0.9145 & 0.9625 & 31.2353 & 13.5164 & 222.1403 & 0.8322 & 0.9517 & 35.6693 & 11.4738 & 241.6102 \\ **Ours** & **0.9238** & **0.9664** & **30.5527** & **14.0052** & **221.6443** & **0.9492** & **0.9743** & **33.0788** & **12.3833** & **238.7180** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on Tiny ImageNet with 150 closed-set classes, 20% labeled samples, and 30%/50% labeled noise. AB1 is the method without generalized cross entropy. AB2 is the method without curriculum learning. AB3 is the method without curriculum for labeled data. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \(F_{8}\uparrow\) & \(F_{1/8}\uparrow\) & FID\(\downarrow\) & IS\(\uparrow\) & iFID\(\downarrow\) \\ \hline DiffAug CR-GAN & 0.8526 & 0.7430 & 82.8757 & 14.6339 & \(\underline{256.1464}\) \\ RandomGAN & 0.7479 & 0.8783 & 70.9336 & 15.0161 & 300.5579 \\ SingleGAN & 0.6599 & 0.8349 & 77.8994 & 14.0210 & 310.9954 \\ \(S^{3}\)GAN & 0.8429 & 0.8758 & 65.9445 & 16.1675 & 264.8367 \\ OSSGAN & 0.8959 & 0.8453 & 68.4343 & 17.4661 & 284.4511 \\ **Ours** & **0.9443** & **0.9430** & **57.1299** & **22.3548** & **219.5597** \\ \hline \hline \end{tabular} \end{table} Table 3: Quantitative comparison on ImageNet with closed-set 100 classes, 5% labeled data, 10% label-noise. Our method outperforms the baselines in terms of all metrics. Figure 3: Quantitative comparison over different label noise ratios. We report the results of the experiments on the TinyImageNet dataset with 150 classes, 20% labeled data, and label noise ratio of \(\{10\%,30\%,50\%,70\%,90\%\}\). We compare the methods over datasets with different label noise ratios. The blue lines indicate the results of the proposed method. Our method considerably outperforms baselines on difficult datasets (_i.e._, large noise ratio). 
\begin{table} \begin{tabular}{l c c c c c} \hline \hline & \(F_{8}\uparrow\) & \(F_{1/8}\uparrow\) & FID\(\downarrow\) & IS\(\uparrow\) & iFID\(\downarrow\) \\ \hline DiffAug CR-GAN & 0.8962 & 0.8171 & 56.7504 & 22.4951 & \(\underline{228.9962}\) \\ RandomGAN & 0.7620 & 0.9095 & 49.4013 & 17.3114 & 266.7480 \\ SingleGAN & 0.7434 & 0.8903 & 51.4632 & 17.3922 & 292.1951 \\ \(S^{3}\)GAN & 0.4078 & 0.5097 & 111.2998 & 8.7617 & 246.3401 \\ OSSGAN & 0.9245 & 0.8995 & 44.3262 & 23.0263 & 238.2692 \\ **Ours** & **0.9630** & **0.9433** & **29.6751** & **33.5418** & **183.1367** \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative comparison on ImageNet with closed-set 200 classes, 5% labeled data, 10% label-noise. samples and 85K unlabeled samples. Table 1 reports the average and standard deviation of FID, IS, \(F_{1/8}\), \(F_{8}\), and iFID over three trials. Our method achieves the best scores in terms of all metrics and achieves tight standard deviations, showing the consistent improvement over the baselines. On the contrary, the improvement by rcGAN is not the case. In relabeling baselines, only classifier-based GANs improve the performance from naive baselines, because reRandomGAN and reSingleGAN add extra noise strongly. We then investigate the robustness of the method to label noise in experiments with different label noise ratios. We show the performance of the methods on different label noise ratios, \(\{10\%,30\%,50\%,70\%,90\%\}\). Our method still outperforms compared methods even when the labels are considerably noisy (_e.g._, 90%), as shown in Fig. 3. CurriculumGAN easily fails in the experiments in difficult datasets (_e.g._, 70% or 90%). **Ablation study.** To evaluate the individual contribution of each component, we carried out an ablation study of our method. For this evaluation, we prepare three ablation models: AB1 AB2, and AB3. AB1 is equipped with cross entropy loss instead of generalized cross entropy, having lost the robustness to label noise. AB2 does not use curriculum learning, assigning equal weights to all samples. The method corrects wrong labels, assigns new labels, and distributes equal weights to all samples, and their classifier is trained on generalized cross entropy. AB3 does not correct the labels of the labeled data. The method assigns new labels to unlabeled data and distributes weights according to the classifier's confidences. It is close to ordinal curriculum learning. The results of the ablation study on two configurations are given in Tab. 2. With cross entropy, AB1 drops performance, showing the contribution of the robust classifier. Since correcting labels of labeled samples without soft curriculum may add extra label noises, AB2 records the worst performance in terms of FID, IS, and iFID in datasets with a large label noise ratio. AB3 shows a large degradation in the performance under highly noisy data by maintaining label noise. In both trials, the final model (**Ours**) enhances the performance of the ablation models by the combination of robust training and soft curriculum learning. **Evaluation on large datasets.** We evaluate the proposed method on more complex and challenging datasets to see its stability. Table 3 show the quantitative results of the ImageNet experiments. In the experiments, we observe the performance gains over baselines in terms of quantitative metrics. Figures 4 and 5 and Tab. 
4 show the experimental results on the ImageNet dataset with 200 closed-set classes, 5% labeled data, 10% label noise, and 10% usage ratio. Namely, the dataset has about 12K labeled samples and 345K unlabeled samples. Our method outperforms all baselines with the quantitative metrics as shown in Tab. 4. Figure 4 demonstrates the fidelity of the images generated by our method. Figure 5 shows in consistency with Tab. 4 that our method generates images with high fidelity and diversity. With our soft curriculum, we observe the performance gain over baselines on difficult datasets with limited labeled samples, as shown in Fig. 6. In particular, the proposed approach achieves a competitive performance to semi-supervised and supervised cGANs with 1/3 of the labeled data in terms of FID and IS (5% vs. 15%) and half of the labeled data in terms of \(F_{1/8}\) and \(F_{8}\) (5% vs. 10%). To demonstrate the effectiveness of our method on high resolution, we conduct experiments on ImageNet \(256\times 256\) with 200 closed-set classes, 4% labeled samples, 10% label-noise, and 10% usage ratio. Table 5 shows that the proposed method outperforms the baselines stably. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \(F_{8}\uparrow\) & \(F_{1/8}\uparrow\) & FID\(\downarrow\) & IS\(\uparrow\) & iFID\(\downarrow\) \\ \hline DiffAug CR-GAN & 0.7812 & 0.7725 & 74.3157 & 14.3693 & 249.0955 \\ RandomGAN & 0.7840 & 0.8627 & 54.8598 & 14.7182 & 246.9653 \\ SingleGAN & 0.7065 & 0.8276 & 64.8178 & 13.5105 & 280.1292 \\ \(S^{3}\)GAN & 0.8209 & 0.8680 & 63.4304 & 14.6397 & 238.7989 \\ OSSGAN & 0.7911 & 0.8294 & 66.7111 & 14.9287 & 242.6553 \\ **Ours** & **0.8465** & **0.8866** & **51.1604** & **18.0428** & **213.5669** \\ \hline \hline \end{tabular} \end{table} Table 7: Quantitative comparison on WebVision [15]. Figure 4: Visual comparison of class-conditional image synthesis results on ImageNet. Our method produces plausible images while respecting the given condition. **Evaluation on classifier-based cGANs.** Next, we evaluate our method on different cGAN models. In the above evaluations, we build the compared method by integrating semi-supervised methods into projection-based cGANs. To evaluate the applicability of our method to other cGAN models, we conduct experiments on additional base architectures of classifier-based cGANs (, ADC-GAN [11] and TAC-GAN [8]). Table 6 shows our method outperforms baselines in the ADC-GAN and TAC-GAN experiments. **Evaluation on real-world noise.** Finally, we test our method on WebVision [15] to assess the effectiveness on real-world noise. WebVision is a dataset built via web queries, and so it contains real-world noise. We use 200 classes as the closed-set classes, drop 98% labels from the closed-set class samples to make unlabeled data, and the usage ratio of 10%. Table 7 shows the results of the experiments on WebVision. We improve DiffAug CR-GAN and achieve an FID of \(51.1604\) with an IS of \(18.0428\) on the dataset with real-world noise. ## 6 Conclusion We presented a novel image generation training framework that allows the training dataset to be composed of noisy labeled and uncurated unlabeled data. We proposed soft curriculum learning for this new data setting that provides clean labeled data to the discriminator while eliminating the effects of useless samples by correcting noisy labels and assigning new labels. 
Concurrently, we use soft labels and the generalized cross entropy loss to deal with open-set samples, avoiding overconfidence in samples that do not belong to known classes. Our comprehensive experiments show that, even when the number of labeled samples is limited and noisy, the proposed method consistently outperforms baselines in both qualitative and quantitative evaluations. Our method reduces the amount of labeled data required to achieve equivalent performance in the training of conditional GANs. Furthermore, when tested with different GAN architectures, our method demonstrates stable performance. We believe that our proposed method expands the real-world applications of cGANs in a sustainable way by making it easier to create datasets for training cGANs.

Figure 5: Visual comparison of class-conditional image synthesis results on ImageNet. Our method constantly produces plausible images while respecting the given condition.

Figure 6: Quantitative comparison over different numbers of labeled samples. We report the results of the experiments on the ImageNet dataset with 200 classes, 10% label noise ratio, and labeled sample ratio of \(\{4\%,5\%,8\%,10\%,15\%,20\%\}\). Our method outperforms baselines in difficult datasets (blue line).

**Limitation**. Although our method improves over baselines on challenging datasets, no comparable improvement is observed on datasets with sufficient labeled samples. A deeper analysis of the relationship between labeled data size and cGAN performance will provide further insight into the use of our soft curriculum method.
2307.10556
The best multicore-parallelization refactoring you've never heard of
In this short paper, we explore a new way to refactor a simple but tricky-to-parallelize tree-traversal algorithm to harness multicore parallelism. Crucially, the refactoring draws from some classic techniques from programming-languages research, such as the continuation-passing-style transform and defunctionalization. The algorithm we consider faces a particularly acute granularity-control challenge, owing to the wide range of inputs it has to deal with. Our solution achieves efficiency from heartbeat scheduling, a recent approach to automatic granularity control. We present our solution in a series of individually simple refactoring steps, starting from a high-level, recursive specification of the algorithm. As such, our approach may prove useful as a teaching tool, and perhaps be used for one-off parallelizations, as the technique requires no special compiler support.
Mike Rainey
2023-07-20T03:42:50Z
http://arxiv.org/abs/2307.10556v1
# The best multicore-parallelization refactoring you've never heard of+

###### Abstract.

In this short paper, we explore a new way to refactor a simple but tricky-to-parallelize tree-traversal algorithm to harness multicore parallelism. Crucially, the refactoring draws from some classic techniques from programming-languages research, such as the continuation-passing-style transform and defunctionalization. The algorithm we consider faces a particularly acute granularity-control challenge, owing to the wide range of inputs it has to deal with. Our solution achieves efficiency from heartbeat scheduling, a recent approach to automatic granularity control. We present our solution in a series of individually simple refactoring steps, starting from a high-level, recursive specification of the algorithm. As such, our approach may prove useful as a teaching tool, and perhaps be used for one-off parallelizations, as the technique requires no special compiler support.

## 1. Challenge: traverse a pointer-based tree

We are to write a program that traverses a given binary tree and returns the sum of the numbers stored in the nodes, as exemplified by the following reference code, taking care to utilize parallelism when the input permits and to perform well in any case, even when parallelism is limited.

```
type node = { v : int, bs : node*[2] }

sum(node* n) int {
  if (n == null) return 0
  return sum(n.bs[0]) + sum(n.bs[1]) + n.v
}
```

We may use fork join to parallelize on nonempty input trees:

```
s = new int[2]
fork2join(λ() => { s[i] = sum(n.bs[i]) }  for i in {0,1})
return s[0] + s[1] + n.v
```

There are mature implementations of fork join (e.g., [7, 12]) based on work stealing [4, 5, 6], an algorithm well suited for irregular, input-dependent workloads like ours. However, suppose our program can assume nothing of its input: it can range from balanced and large, where parallelism is abundant, to, e.g., a chain or a small tree, where traversal is mostly serial. As such, possible workloads may be fine-grain and irregular, making granularity control acute [1, 3, 13]. Also, inputs, e.g., long chains, may cause call-stack overflow [9]. We address both problems by combining two known techniques: (1) heartbeat scheduling [2, 10] for granularity control and (2) defunctionalize the continuation [11] to replace recursion by iteration, for efficient, serial traversal. The latter is inspired by Koppel's presentation [8], adding to his list a new refactoring application: multicore parallelization. We proceed via a series of refactorings, using conventional features of C++.

## 2. Refactoring for parallel traversal

First, we replace the direct-style treatment that allocates function-activation records on the C call stack with a continuation-passing-style (CPS) one that allocates on the heap. To support CPS, we use a lower-level interface with task scheduling:

- new_task(f) takes a thunk f and returns a pointer to a new, heap-allocated task that, when executed by the scheduler, will run f() to completion;
- fork(c, k) takes a child task c and its continuation task k, registers a dependency edge from c to k, and marks c ready to run;
- join(j) marks an incoming dependency edge on the task j as resolved, and, when all of its dependencies are resolved, schedules j.

_Step 1: CPS convert the parallel algorithm._ We introduce a continuation parameter k and a _join task_ tj, which receives the results of the recursive calls and passes them to the return continuation.
```
sum(node* n, k : int -> void) -> void {
  if (n == null) { k(0); return }
  s  = new int[2]
  tj = new_task(λ() => k(s[0] + s[1] + n.v))
  for i in {0,1} {
    ti = new_task(λ() => sum(n.bs[i], λ(si) => { s[i] = si; join(tj) }))
    fork(ti, tj)
  }
}
```

_Step 2: defunctionalize the continuation._ We introduce one activation record to handle the final result, and another for the completion of branch i ∈ {0,1} (full code in appendix).

```
type kont =
  | KTerm of int*                                 /* final result */
  | KPBranch of (i : int, s : int*, tj : task*)   /* completion of branch i */
```

This refactoring delivers a highly parallel algorithm, but one with poor work efficiency, given that it performs little useful work per task.

## 3. Refactoring for serial traversal

Now, we obtain a work-efficient version by replacing recursion with iteration.

_Step 3: CPS convert & defunctionalize the continuations._ We CPS convert our reference algorithm, first by introducing two new continuations, and then defunctionalize them, giving us two new activation records, such that the first represents an in-flight recursive call for the first branch of a node, and the second for the second branch, with s0 storing the result obtained for the first branch.

```
type kont = ...
  | KSBranch_b of (n : node*, k : kont*)
  | KSBranch_a of {s0 : int, n : node*, k : kont*}
```

_Step 4: refactor for iterative, stack-based traversal._ We eliminate recursion by applying tail-call elimination and inlining to the **apply** function (introduced in Step 3), and tail-call elimination to our defunctionalized sum function.

## 4. Merging parallel and serial refactorings

The conceptual glue for merging our serial and parallel algorithms lies in heartbeat scheduling. With it, we make it so that our serial and parallel traversals alternate on a regular basis. Starting out, our program spends a certain amount of its time in serial traversal, specified by a heartbeat-rate parameter \(H\), after which it switches momentarily to parallel traversal. It then switches back to serial, and the alternation repeats until the traversal completes. By ensuring \(H\) serial traversal steps happen for each invocation of our parallel traversal, we amortize task-creation costs, and therefore, achieve granularity control for _all_ inputs. In our implementation, we found that it suffices to specify \(H\) to be the number of trips around the main loop of our serial traversal. On our test machine, we observed that, by experimenting with different settings of \(H\), we can bound task-creation costs such that the total amount of work is increased by a desired amount, e.g., 10%, compared to the serial refactoring.

_Step 5: give the serial traversal a heartbeat._ To track the number of steps, we introduce a helper function heartbeat that returns true every \(H\) times it is called. When it returns true, we inspect the current continuation to see if it is holding onto any latent parallelism. If so, we _promote_ that latent parallelism into an actual task, which may realize actual parallelism (if, e.g., the task is stolen).

_Step 6: implement promotion._ Promotion is initiated by calling try_promote(\(k\)), which looks for latent parallelism in \(k\) and, if present, spawns from it a task and returns a modified continuation \(k^{\prime}\). There is latent parallelism in \(k\) if there is an instance of KSBranch_b in \(k\). The reason is that such an instance represents a recursive call to the first branch of some tree node (the only opportunity for parallelism in a traversal).
However, there may be multiple instances of latent parallelism in a given \(k\), and, for performance reasons, heartbeat scheduling requires that the _outermost_ instance is the one targeted for promotion. Heartbeat scheduling targets outermost parallelism because doing so turns out to be crucial for achieving worst-case bounds on the loss of parallelism [2]. Implementing this behavior efficiently requires some care, as a naive implementation could repeatedly traverse the whole stack, leading to quadratic blowup. Fortunately, the blowup can be remedied by extending the continuation structure with a double-ended list, which marks promotion potentials [2, 10].

When it finds a KSBranch_b{n, k = \(k^{\prime}\)} activation record in \(k\), our promotion handler modifies \(k\) so that, thereafter, it is as if our (defunctionalized) parallel algorithm was invoked at that point instead of the serial version. This behavior is achieved by (1) allocating storage for the results of the branches, s = new int[2]; (2) replacing our KSBranch_b activation record with KPBranch(i=0, s=s, tj=tj), a task-parallel one; (3) spawning a new task corresponding to the second branch, i.e., n.bs[1], and giving that task the return continuation KPBranch(i=1, s=s, tj=tj); and (4) creating a join task tj for this new fork point, which is seeded with the continuation of our promotion point, \(k^{\prime}\). The pseudocode below gives a sketch of the main loop.

```
sum(node* n, k : kont*) -> void {
  while (true) {
    k = try_promote(k) if heartbeat() else k
    if (n == null) {
      sa = 0   // sum accumulator
      while (true) {
        k = try_promote(k) if heartbeat() else k
        match *k with   // all activation records in kont
        | KSBranch_b(n=n1, k=k1) -> { ... }
        | ...
      }
    } else {
      k = KSBranch_b(n=n, k=k); n = n.bs[0]
    }
  }
}
```

## 5. Performance study

Table 1 summarizes our performance study, for which we used a C++ implementation. From the _perfect_ tree, we see that our algorithm can achieve a speedup comparable to that of OpenCilk [12] with manually tuned granularity control, and a speedup almost twice that of OpenCilk without granularity control.

\begin{table} \begin{tabular}{c|c c c c} input & serial (s) & ours & cilk & cilk+granctrl \\ \hline perfect & 0.7 & 28.4x & 15.4x & 34.5x \\ random & 0.8 & 31.8x & 15.3x & 33.7x \\ chains & 2.5 & 11.5x & n/a & n/a \\ chain & 1.2 & 0.4x & n/a & n/a \\ \end{tabular} \end{table} Table 1: Performance results from an Intel Xeon system, using all 64 cores, showing speedup over the iterative, serial algorithm, with four inputs: (1) _perfect_ is a perfect binary tree of height 27 (2) _random_ is a tree built from a series of path-copying insertions targeting random leaves (3) _chains_ is a small initial tree of height 20 extended with 30 paths of length 1 million (4) _chain_ is a long chain.

For _random_, our algorithm outperforms vanilla OpenCilk, but not the granularity-controlled version. The reason relates to the data structure we used in our C++ implementation to store the activation records, an STL queue, which uses heap-allocated chunks internally, whereas OpenCilk uses the call stack, which is more efficient. However, our algorithm supports long chains, whereas OpenCilk crashes with stack overflow (indicated by cells with n/a). From _chains_, we see that our algorithm can obtain speedup even when parallelism is somewhat scarce. On _chain_, our algorithm is about 2.5x slower than serial.
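To make Steps 3–4 concrete, here is a minimal, self-contained Python sketch of the serial, defunctionalized traversal (the paper's own implementation is C++; the tuple tags `'b'` and `'a'` stand in for the KSBranch_b and KSBranch_a activation records, and the heartbeat/promotion machinery is omitted):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    v: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def tree_sum(root: "Optional[Node]") -> int:
    """Iterative tree sum with a defunctionalized continuation (explicit stack).

    Stack records play the role of the activation records above:
      ('b', n)     -- first branch of n in flight  (cf. KSBranch_b)
      ('a', s0, n) -- second branch of n in flight; s0 is the first-branch sum
    """
    kont = []   # explicit continuation stack, lives on the heap
    n = root
    while True:
        if n is not None:
            kont.append(('b', n))   # remember n; its second branch is latent parallelism
            n = n.left
            continue
        sa = 0                      # sum accumulator
        while True:
            if not kont:
                return sa
            rec = kont.pop()
            if rec[0] == 'b':       # first branch done: descend into the second
                _, n1 = rec
                kont.append(('a', sa, n1))
                n = n1.right
                break               # back to the descend loop
            _, s0, n1 = rec         # second branch done: combine results
            sa = s0 + sa + n1.v

# Tiny check: a three-node tree whose values sum to 6.
assert tree_sum(Node(1, Node(2), Node(3))) == 6
```

Promotion (Steps 5–6) would periodically scan this explicit `kont` stack for the outermost `'b'` record and convert it into a forked task; the loop above is what runs between heartbeats, and because the records live on the heap, long chains cannot overflow the call stack.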
2305.08560
A dual approach to ShEx visualization with complexity management
Shape Expressions (ShEx) are used in various fields of knowledge to define RDF graph structures. ShEx visualizations enable all kinds of users to better comprehend the underlying schemas and perceive its properties. Nevertheless, the only antecedent (RDFShape) suffers from limited scalability which impairs comprehension in large cases. In this work, a visual notation for ShEx is defined which is built upon operationalized principles for cognitively efficient design. Furthermore, two approaches to said notation with complexity management mechanisms are implemented: a 2D diagram (Shumlex) and a 3D Graph (3DShEx). A comparative user evaluation between both approaches and RDFShape was performed. Results show that Shumlex users were significantly faster than 3DShEx users in large schemas. Even though no significant differences were observed for success rates and precision, only Shumlex achieved a perfect score in both. Moreover, while users' ratings were mostly positive for all tools, their feedback was mostly favourable towards Shumlex. By contrast, RDFShape and 3DShEx's scalability is widely criticised. Given those results, it is concluded that Shumlex may have potential as a cognitively efficient visualization of ShEx. In contrast, the more intricate interaction with a 3D environment appears to hinder 3DShEx users.
Jorge Alvarez-Fidalgo, Jose Emilio Labra-Gayo
2023-05-15T11:41:50Z
http://arxiv.org/abs/2305.08560v1
# A dual approach to ShEx visualization with complexity management

###### Abstract

Shape Expressions (ShEx) are used in various fields of knowledge to define RDF graph structures. ShEx visualizations enable all kinds of users to better comprehend the underlying schemas and perceive its properties. Nevertheless, the only antecedent (RDFShape) suffers from limited scalability which impairs comprehension in large cases. In this work, a visual notation for ShEx is defined which is built upon operationalized principles for cognitively efficient design. Furthermore, two approaches to said notation with complexity management mechanisms are implemented: a 2D diagram (Shumlex) and a 3D Graph (3DShEx). A comparative user evaluation between both approaches and RDFShape was performed. Results show that Shumlex users were significantly faster than 3DShEx users in large schemas. Even though no significant differences were observed for success rates and precision, only Shumlex achieved a perfect score in both. Moreover, while users' ratings were mostly positive for all tools, their feedback was mostly favourable towards Shumlex. By contrast, RDFShape and 3DShEx's scalability is widely criticised. Given those results, it is concluded that Shumlex may have potential as a cognitively efficient visualization of ShEx. In contrast, the more intricate interaction with a 3D environment appears to hinder 3DShEx users.

Shape Expressions, Visual notation, UML, 3D, Cognitive load

## 1 Introduction

Shape Expressions (ShEx) [1] was proposed in 2014 as a language for RDF1 data validation. By allowing users to define RDF graph structures, it enables data producers and consumers to settle on common ground and avoid inconsistencies. Since RDF brings together users from various branches of human knowledge, ShEx is employed in a variety of different contexts. E.g., ShEx is used to validate the RDF representation of FHIR2, a standard for health care data exchange.

Footnote 1: [http://www.w3.org/RDF/](http://www.w3.org/RDF/)

Footnote 2: [https://www.hl7.org/fhir/](https://www.hl7.org/fhir/)

This implies that users are not necessarily familiar with textual programming languages, which results in a steep learning curve. One possible solution to such a problem is the use of **visualizations**. They enable users to comprehend sheer amounts of data in an efficient manner and allow for better perception of emergent properties, errors and patterns [2]. The only precedent as far as ShEx is concerned is RDFShape [3], which is capable of generating UML-like3 class diagrams for a subset of the language. Alas, it suffers from limited scalability as well as a degree of symbol overload which may affect its semantic transparency. Therefore, the information conveyed may be cognitively inefficient, particularly in larger use cases. Other visualisations in the Semantic Web ecosystem formulate different solutions to the problem of developing a comprehensible visual notation, with varying degrees of success. However, the aforementioned issue of scalability -also referred to as complexity management- is rarely addressed. At most, a few -such as WebVOWL4- provide automatic mechanisms for reducing the number of elements displayed, but no choice is given to the user about the specifics.
Footnote 4: [https://service.tib.eu/webvowl/](https://service.tib.eu/webvowl/)

Thus, the main **contribution** of this work lies in the proposal of a visual notation for ShEx which aims to be cognitively efficient -with an emphasis on complexity management-, analysing the perceptual implications of its materialisation in both a 2D plane and 3D space. The rest of the paper is structured as follows. A motivating example is provided in Section 2. In Section 3, background information about cognitive implications of visual design is provided, as well as the state of the art of visualization tools in the Semantic Web. The proposed visual notation and both approaches to it are exposed in Section 4. The implementation of the corresponding prototypes is discussed in Section 5. User evaluation methodology, results and discussion are provided in Section 6. Finally, conclusions and future work are discussed in Section 7.

## 2 Motivating example

The Wikidata GeneWiki [4] project aims to use Wikidata as a semantic framework to manage and disseminate biomedical data. To that end, it describes a knowledge graph of such entities and their relationships. Our ShEx motivating example5 defines this graph structure. Given its large number of elements, it poses a challenge for proper visualization.

Footnote 5: [https://github.com/weso/sparkwdsub/blob/master/examples/genewiki.shex](https://github.com/weso/sparkwdsub/blob/master/examples/genewiki.shex)

RDFShape's visual representation (DOT)6 generates a class diagram with 23 classes and over 70 relationships (see Fig. 1). Besides the cognitive implications of processing a large network -which will be discussed later-, common scalability issues may be observed. Key sections of the diagram become filled with relationships that are difficult to discern from each other. Therefore, it is a suitable ground for testing complexity management mechanisms and the cognitive efficiency of the visual notation.

Footnote 6: [https://rdfshape.weso.es/link/1652006264](https://rdfshape.weso.es/link/1652006264)

## 3 State of the Art

In this section, (i) cognitive implications of visual notation design and (ii) visualizations in the Semantic Web are discussed.

Figure 1: Genewiki ShEx visualization in RDFShape.

### Cognitive implications of visual notation design

**Cognitive load theory** "is concerned with the manner in which cognitive resources are focused and used during learning and problem solving" [5]. It describes the impairment of understanding that takes place when learning procedures lead to further cognitive processes. A distinction is made between intrinsic and extraneous cognitive load; the former is due to the inherent complexity of the information, while the latter is generated because of the manner in which such information is presented. Those phenomena are related in such a way that the consequences of extraneous cognitive load may only become noticeable when intrinsic cognitive load caused by high element interactivity is also present [6].

D. Moody describes in his **Physics of Notations** (PoN) theory a series of principles for designing cognitively effective visual notations [7]: _semiotic clarity, perceptual discriminability, semantic transparency, complexity management_ (reduction of extraneous cognitive load), _cognitive integration, visual expressiveness, dual coding, graphic economy and cognitive fit_. In the last decade, PoN has become a widely used standard for notation design to the detriment of competing approaches [8].
Various criticisms have been raised about PoN. The operationalization of said principles ranges from objective measures -semiotic clarity is a 1:1 correspondence- to subjective evaluations -the "suggestion of meaning" implied by semantic transparency may only be determined by empirical means- [9]. This implies a degree of user involvement usually lacking in its application [10]. Subsequent proposals were made in order to improve such operationalization, either partially [11] or completely [12]. On a different note, the impact of 3D visualizations on cognitive load may be closely related to spatial ability [13]. I.e., subjects with high spatial ability perceive their cognitive load as low and vice versa. Further research shows that this effect is exacerbated when dealing with static visualizations, with dynamic interactions providing a compensating effect for low spatial ability learners [14].

### Visualizations in the Semantic Web

As stated in the introduction, **RDFShape** provides the only visualization currently available for Shape Expressions7. It generates a bidimensional graph in which UML-like boxes symbolize shapes and directional arrows represent references to other shapes. No interactive actions nor complexity management mechanisms are provided.

Footnote 7: [https://rdfshape.weso.es/shexInfo](https://rdfshape.weso.es/shexInfo)

Further work has been carried out for the closely related Shapes Constraint Language (SHACL)8 in the form of visual editors. Arndt et al. implemented an OntoPad-based9 tool which allows for composing a SHACL visual data model [15]. Most of the interaction is done through a textual interface; new elements must be dragged into a canvas to become part of the visualization. Users may perform a few tasks on the visualization, such as linking properties.

Footnote 8: [https://www.w3.org/TR/shacl/](https://www.w3.org/TR/shacl/)

Footnote 9: [https://github.com/AKSW/OntoPad](https://github.com/AKSW/OntoPad)

Lieber et al. define both UML-based and VOWL-based visual notations to represent RDF constraints and implement them in **UnSHACLed**, a SHACL visual editor [16]. Empirical tests showed no significant difference in error rates between the approaches. Nevertheless, the majority of users did prefer the VOWL-based notation. The authors acknowledge the need for complexity management mechanisms, but it is considered out of their scope.

**VOWL** [17] is a visual notation for representing OWL10 ontologies, with two implementations available: WebVOWL and ProtegeVOWL. WebVOWL makes use of a force graph, allowing for user interaction with the positioning of elements. Moreover, it provides a complexity management tool: a collapsing feature which reduces the number of elements on screen, even though it leaves no choice to the user over the specifics. Nonetheless, this does not prevent the overlapping of a large number of relationships between two nodes.

Footnote 10: [https://www.w3.org/TR/owl-features/](https://www.w3.org/TR/owl-features/)

This overlapping issue has been a recurring problem in the history of visualizations in the Semantic Web. As far back as the year 2000, a possible solution emerged: to represent semantic graphs in three dimensions. To that end, tools such as **UNIVIT** [18] and **NV3D** were implemented; alas, their visual notations were too dependent on the arbitrary combination of visual variables (shape, color...) to efficiently convey complex data [19].
A decade later, **X3D-UML** was proposed as a 3D UML implementation, particularly focused on state machine diagrams [20]. It consists of a number of interconnected planes in a tridimensional space, each one displaying a 2D UML diagram. Thus, it is rather an intermediate solution.

## 4 Proposal

This proposal consists of a UML class diagram-like visual notation in order to graphically represent ShEx, with two alternative approaches: a 2D diagram (**Shumlex**) and a 3D directed graph (**3DShEx**).

### Visual notation

The proposed visual notation is displayed in Table 1. Its design rationale is structured according to PoN's principles, exposed hereunder.

#### 4.1.1 Semiotic Clarity

Semiotic clarity is sacrificed in favour of both semantic transparency and graphic economy. Firstly, the notation incurs a deliberate case of **symbol deficit**, since there exist a number of semantic constructs without a unique visual construct mapped to them. Node constraints are displayed textually inside shapes, much like attributes in UML classes. By doing so, it is expected to take advantage of the widely recognized UML class diagram notation to convey information to a broader audience. Secondly, some semantic constructs employ the same visual construct (**symbol overload**) with textual differentiation. Since those semantic constructs are conceptually similar (e.g. conjunction and disjunction), it is hoped to achieve graphic economy without disrupting clarity.

#### 4.1.2 Perceptual discriminability

In order to objectively ascertain the ease of discrimination between symbols, both a metric and a threshold of dissimilarity between two graphical symbols have to be defined [12]. To that end, the proposal from [11] is modified slightly in order to calculate the average of the following criteria: visual distance (VD), redundant coding (RC), perceptual pop-out (PPO) and textual differentiation (TD). These are normalized to an interval of [0, 1], in such a way that 0 denotes null discriminability and 1 compliance with all criteria. 0.5 is chosen as the threshold of dissimilarity. For brevity's sake, the details of the modifications and calculations are exposed in Appendix B. The values obtained were VD = 0.47, RC = 0.29, PPO = 1 and TD = 0.5. This results in an average value of 0.57, above the chosen threshold, therefore demonstrating positive perceptual discriminability.

#### 4.1.3 Semantic transparency

By shaping the notation to resemble a UML class diagram, the objective is to increase its semantic transparency, particularly for novice users. At least, semantic translucency is expected, so that the visual constructs provide a cue to their meaning by association. Nonetheless, this cannot be ascertained until a user evaluation is performed, given its aforementioned subjectivity.

#### 4.1.4 Complexity management

Complexity management is approached in different ways. Shumlex takes inspiration from the modularization utility implemented by GraphQL Voyager11, in which clicking on one of the classes highlights only that class and its relations with its neighbours, drastically reducing the others' visibility. A modification of this concept is proposed such that it is cumulative; that is, clicking on a second class does not change the focus to that class, but adds it to the highlighted set. As far as 3DShEx is concerned, an _intelligent zoom_ [21] will be implemented. This implies additional interactivity besides the common zooming. By default, nodes will display only the shape identifier.
When interacted with -opened-, nodes will expand to reveal the pertinent restrictions (_black boxing_). Further interaction -closing- will revert them to their initial state. As an additional complexity management tool, a simple collapsing function is proposed which on demand shows only the desired node and its neighbours. Both provide a layer of abstraction.

\begin{table} \begin{tabular}{|p{56.9pt}|p{142.3pt}|p{142.3pt}|p{142.3pt}|} \hline **Feature** & **\{PLACEHOLDER\}** & **Visual representation** & **Example** \\ \hline TripleConstraint & \textless{Property}\textgreater{} & & :User \{ \\ & \textless{NodeConstraint}\textgreater{} & & :name xsd:string?; \\ & \textless{Cardinality}\textgreater{}; & & :User \{ \\ & & & schema:name xsd:string; \\ & & & schema:age xsd:string; \\ \hline & \textless{Top-level} & nodeKind: \textless{NodeKind}\textgreater{} & :HomePage IRI \\ & & & schema:name xsd:string ; \\ \hline & \textless{Closed} & & :User CLOSED \{ \} \\ \hline & \textless{ShapeRef} & & :User \{ \\ & & & schema:worksFor @:Company ; \\ & & & \} \\ \hline & \textless{ShapeAnd} & AND & & :User \{ \\ & & & schema:name xsd:string ; \\ & & & \} \\ \hline & \textless{ShapeOr} & OR & & :User \{ \\ & & & schema:name xsd:string ; \\ & & & \} & :GivenName xsd:string ; \\ \hline & \textless{OneOf} & OneOf & & :User \{ \\ & & & :name xsd:string;! \\ & & & :givenName xsd:string +; \\ & & & :familyName xsd:string;?; \} \\ \hline & \textless{ShapeNot} & NOT & & :NoName Not \{ \\ & & & schema:name.. \\ \hline & \textless{Labeled} & Composed of & & :User \{ \\ & & & :name.; \\ & & & :email IRI; \\ & & & \} \\ \hline \end{tabular} \end{table} Table 1: Proposed visual notation.

#### 4.1.5 Cognitive integration

Given that multiple diagrams are not used to represent a dataset, this principle does not apply.

#### 4.1.6 Visual expressiveness

PoN builds upon Bertin's list of visual variables [22]: shape, texture, brightness, size, color, orientation and planar variables. The proposed notation makes use of the following: shape, texture and brightness. Hence, it lies in a middle ground between visual one-dimensionality and visual saturation. Moody claims that most diagrams in software engineering are visually one-dimensional [7]; therefore a higher degree of discriminability is achieved, as discussed earlier.

#### 4.1.7 Dual coding

Every visual representation is complemented by text which provides a cue to its meaning.

#### 4.1.8 Graphic economy

As previously stated, symbol deficit was introduced in order to reduce graphic complexity. Consequently, the number of graphical symbols is 4. This value is below the 7\(\pm\)2 processing-capacity limit proposed by Miller [23], suggested in [12] as a reference for this principle.

#### 4.1.9 Cognitive fit

The aim of the aforementioned binary approach is to maximize cognitive fit, with each approach prioritizing different needs. Shumlex aims to be closer to the UML class diagram spec, with the intention of being accessible to a wider audience not necessarily familiar with the technical details. Thus, all information is initially available as it would be in a common diagram, making for a more constraint-focused visualization in contrast to a relationship-focused 3DShEx. On the contrary, 3DShEx is of a more experimental nature, which by interactively presenting the same information in a tridimensional space aims to further analyse the cognitive implications in its audience and the potential benefits it may bring.
The details of shapes are concealed behind a layer of abstraction, thus giving greater importance to the diagram as a whole.

## 5 Implementation

In this section, the elaboration of the prototypes for both approaches to the visual notation is described.

### Shumlex

In order to build the visualization, Mermaid12, a JavaScript library for text-based generation of various diagrams, is used. Therefore, the architecture of the prototype is as follows:

Footnote 12: [https://mermaid-js.github.io/mermaid/](https://mermaid-js.github.io/mermaid/)

1. A **conversion** engine which receives a ShEx input and generates the Markdown-like syntax that Mermaid requires. Given that Mermaid does not accept a variety of symbols used in ShEx, it is necessary to use alternatives. For instance, the use of colons is not allowed; the prefixed term ":User" would have to be encoded as "_User". 2. A **visualization** module which invokes the library with the previous outcome in order to generate an SVG. Once displayed, the sanitized texts are substituted by the original ones. 3. A **post-processing** module which implements the complexity management functionality. It assigns to every class in the diagram an event which, on click, lowers the opacity of every element to a minimum, except that very class, its relationships and the targets of these. There are a couple of exceptions: a) it won't obscure the already highlighted elements and b) it will reverse the effect if it has already been applied to said class.

Furthermore, hovering any label will check the existence of the entity in Wikidata and display its meaning as a tooltip. The purpose of this is to increase comprehension of commonly used, Wikidata-related Shape Expressions, in which shapes and predicates are semantically opaque (e.g. wd:Q42944 refers to CERN). The application of Shumlex to the motivating example is shown in Fig. 2. Despite the fact that relationships are more spaced out than in RDFShape's visualization, areas with high concentrations of elements remain cognitively overloaded. As shown in Fig. 3, the complexity management mechanism allows for a limited display of the desired components. In the provided example, focus is on _:medication_, thus highlighting its relationships with other shapes. This prototype is freely available at [http://www.weso.es/shumlex/](http://www.weso.es/shumlex/).

Figure 2: Genewiki ShEx visualization in Shumlex.

Figure 3: Genewiki ShEx visualization with reduced complexity in Shumlex.

### 3DShEx

For the implementation of this prototype, 3D Force Graph13 (3DFG), a NodeJS library to represent graph data in a tridimensional space, is used. The architecture of the prototype is as follows:

Footnote 13: [https://github.com/vasturiano/3d-force-graph](https://github.com/vasturiano/3d-force-graph)

1. A **conversion** engine which receives a ShEx input and generates the JSON data that 3DFG requires. Besides the required parameters, additional information is included to facilitate the next phase. 1. _List of constraints of a node_. Information to be displayed on demand, equivalent to the class attributes in UML. 2. _Name and cardinality of a relationship_. 3. _Curvature of a link_. Curved links allow for distinct relationships between a pair of nodes, while straight links only enable one to be displayed clearly. Therefore, references between shapes (:User :works @:Company) require the former since there may be any number of them.
On the contrary, compositional relationships (ShapeAnd, ShapeOr, OneOf, ShapeNot and Labelled in Table 1) are unique to the source shape and can thus be represented by straight links. 4. _Arrow head_. As displayed in the notation, there are three possibilities: none, arrow or diamond. 5. _Rotation_. As previously mentioned, curved links allow for an arrangement free of the overlapping described in Section 3.2. However, by default, links are displayed in the same position. 3DFG allows for a rotation value -taking the node as the centre of a circumference- to be specified, but the calculations are left to the user. Hence, every link occurrence for each node pair is registered and the circumference is divided into equal parts. Moreover, the source of the link should be taken into account, since from the perspective of each circumference the angle will be different for a certain position (e.g. \(\pi\) in the source node corresponds to \(2\pi\) in the target). 2. A **visualization** module which makes use of the previous information to invoke the library. HTML objects are utilized to build the nodes, so their contents can be customized as well as dynamic behaviour assigned (constraints are hidden by default). The following functionalities are enabled: 1. _Highlight on hover_. Both links and nodes possess this property; in the case of the latter, its neighbours and the corresponding relationships are emphasized as well. In links, moving particles are shown to reinforce the direction. 2. _Details_. When clicking a node, all its constraints are displayed in an expanded box. Another click reverts it to its original state. 3. _Collapsing_. Right clicking a node displays a reduced graph, composed of such node and its neighbours. 4. _Wikidata tooltips_. As in Shumlex.

The application of 3DShEx to the motivating example is shown in Fig. 4. Even though static images do little for its comprehension -since it may be examined from any position- it is clear that clusters of highly interdependent shapes exceed working memory limits. The complexity management mechanism is thus applied to _:medication_ yet again. As shown in Fig. 5, a much smaller graph is displayed, composed of the desired element and its neighbours. This prototype is freely available at [http://www.weso.es/3dshex/](http://www.weso.es/3dshex/).

Figure 4: Genewiki ShEx visualization in 3DShEx.

## 6 Evaluation

In order to test the proposed notation, an experiment was carried out based on the one conducted in [24]. The methodology employed, the results obtained and their discussion are detailed in the following subsections. Datasets, questionnaires, manuals and anonymized results are freely available at [https://github.com/fidalgoLXXVI/shex-visualization-paper](https://github.com/fidalgoLXXVI/shex-visualization-paper).

### Methodology

This user study follows a between-subjects design, in which each participant is exposed to a single tool and asked to perform a few measured tasks. Both a quantitative and a qualitative analysis are conducted. Hereunder, the methodology of this experiment is discussed in greater detail.

#### 6.1.1 Procedure

The experiment is divided into the following steps: 1. **Preliminary questionnaire**. Subjects are inquired about background and self-assessment of relevant skills -such as knowledge of UML or spatial ability-. 2. **Tool description**. A brief manual is provided to participants, describing the operation and features of the corresponding tool.
The selected tools for the experiment were _RDFShape_, _Shumlex_ and _3DShEx_. Thus, the different approaches to the notation may be compared to each other as well as to the existing solution. 3. **Main questionnaire**. A series of tasks on the test cases is requested of the participants. Using the assigned tool, subjects must try to perform them while their interactions are timed. Each test case comprises the following tasks, which aim to ascertain the user's ability to navigate the diagram and comprehend the various semantic equivalences. 1. _Find a shape by name_. 2. _Find a shape with a specific constraint_. 3. _List within-node constraints of a shape_. 4. _Find a reference between two shapes_. 5. _Determine subject and object of a reference_. 6. _List all neighbours of a shape_. 4. **Follow-up questionnaire**. A number of questions based on the Likert scale are asked of the participants in order to perform the qualitative analysis. Those allow us to obtain a number of variables: _general satisfaction level, ease of use, learnability, semantic transparency, applicability, error proneness, scalability, complexity management, understanding of constraints and understanding of references_.

Figure 5: Genewiki ShEx visualization with reduced complexity in 3DShEx.

#### 6.1.2 Sample

The sample consisted of 13 students of the MSc in Web Engineering at University of Oviedo. This experiment took place on the last day of a course on the Semantic Web, where they were taught the basics of technologies such as RDF or ShEx. Most participants share a similar demographic as well as academic background, with a bachelor's degree in Computer Science. According to self-assessment results: a) 92.3% have either medium or high knowledge of UML, b) 84.6% have basic knowledge of RDF, c) 69.2% have basic knowledge of ShEx and 15.4% declare no knowledge on the subject and d) 69% have high spatial ability while the rest declare medium spatial ability.

#### 6.1.3 Test cases

Two test cases are used in this experiment. The first one is based on the WebIndex ShEx schema proposed in [25], "one of the earliest practical applications of ShEx". Modifications have been made in order to cover all the features reflected in the visual notation, thus including logical operations and composition. The _OneOf_ constraint is removed since RDFShape's current version hasn't implemented it yet. This schema features a few shape references, with greater focus on other node constraints. The second one is the Genewiki schema, as featured in Section 2. It has approximately three times the shapes of the former and over 70 shape references, while other constraints are a scarce occurrence. Hence, according to the cognitive implications laid out in Section 3.1, participants would be confronted with **distinct cognitive loads**. The first test case has a higher intrinsic load, given the greater inherent complexity of using complex semantic features such as conjunctions and composition while having few elements. On the contrary, the second test case has little implicit complexity -most are simple references to other shapes- but its large quantity of elements causes diagrammatic complexity upon display.

#### 6.1.4 Threats to validity

Taking as reference the list of threats to both internal and external validity proposed in [26], the following have been identified:

_Selection_. Participants may share certain characteristics which predispose them towards the same results, especially given the common background.
In order to address this, subjects are distributed randomly among the experimental groups so that those characteristics may be equally distributed.

_Testing_. Participants may become familiar with the test cases and remember responses for later tasks. In order to mitigate this, special care is taken to use different fragments of the schema and avoid repetitions.

_Interaction of selection and treatment_. Because of the limited variety of the participants, generalization to individuals of other contexts may not be possible. Hence, claims about the universality of the results must be restricted. However, given the highly specialized nature of the contribution, this issue is lessened.

#### 6.1.5 Analysis

Both quantitative and qualitative results were collected and anonymised. From those, the following variables are calculated for each test case: **elapsed time**, **success rate** and **precision**. Elapsed time (\(T_{c}\)) is the total time spent for a given test case. Success rate (\(S_{c}\)) is calculated as the number of correct answers divided by the number of questions. Precision (\(P_{c}\)) is calculated as the minimum elapsed time across all participants divided by the current student's elapsed time, multiplied by the success rate. This measure gives insight into the swiftness of participants while taking into account their effectiveness. Hence, given a test case \(c\) and a student _sn_:

\[P_{c\,sn}:=\frac{\min(\{T_{c\,s1},...,T_{c\,sn}\})}{T_{c\,sn}}\cdot S_{c\,sn}\]

R 4.2.0 is used for the statistical analysis. Comparisons between the three groups are made by means of a One-Way ANOVA whenever assumptions are met, removing outliers if necessary. Otherwise, Kruskal-Wallis is used.

### Results

Descriptive statistics of the quantitative results for the first test case are shown in Table 2. Shumlex mean scores are consistently better than RDFShape's, and those better than 3DShEx's. Nonetheless, those differences between the three groups are not statistically significant for any of the variables: _F(2,8)=1.1; p=0.377_, _F(2,10)=1.67; p=0.236_ and _F(2,9)=1.29; p=0.32_ for T, S and P respectively. Descriptive statistics of the quantitative results for the second test case are shown in Table 3. Mean score comparisons show the same relationship between groups as before. However, in this case there are significant differences between the three groups in elapsed times (_H(2)=6.05; p=0.048; \(\eta^{2}\)=0.405_). Dunn post-hoc determined significant differences in elapsed times between Shumlex and 3DShEx (_p=0.014_). While 75% of Shumlex users achieve lower times than every RDFShape user, overall differences are not significant (_p=0.242_). As far as success rate and precision are concerned, there are no significant differences between groups in the second test case (_H(2)=1.78; p=0.41_ and _F(2,10)=2.43; p=0.137_). Descriptive statistics of the qualitative results are shown in Table 4 in Appendix A. Overall, user ratings are positive for all tools: 71.1% of answers express either a high or very high level of approval. Tools score on average neutral or positive ratings for every measure, with the exception of Scalability, which obtains neutral or negative ratings on average. Statistical analysis showed no significant differences between the three groups for any measure.

### Discussion

Results for the first test case do not show any significant difference between groups for any of the metrics. This can be explained by the great variability in all groups -e.g.
elapsed time for Shumlex ranges from \(\sim\)1m to \(\sim\)5m- as well as little difference between means. Nonetheless, it should be noted that only one member of the Shumlex group achieved a perfect score in both success rate and precision. Overall, success rate is likely negatively influenced by the scarce theoretical knowledge of Shape Expressions that participants self-assessed. This would explain how a few simple tasks seem to cause general confusion.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline **Measure** & **Group** & \(\mathbf{\overline{x}}\) & **s** & **max** & **min** \\ \hline \multirow{3}{*}{Elapsed seconds} & 3DShEx & 256.2 & 66.55 & 355 & 210 \\ & RDFShape & 210.2 & 119.62 & 411 & 95 \\ & Shumlex & 196 & 95.63 & 302 & 73 \\ \multirow{3}{*}{Success rate} & 3DShEx & 0.667 & 0.136 & 0.833 & 0.5 \\ & RDFShape & 0.7 & 0.139 & 0.833 & 0.5 \\ & Shumlex & 0.833 & 0.136 & 1 & 0.667 \\ \multirow{3}{*}{Precision} & 3DShEx & 0.204 & 0.077 & 0.29 & 0.103 \\ & RDFShape & 0.311 & 0.197 & 0.64 & 0.118 \\ & Shumlex & 0.441 & 0.380 & 1 & 0.161 \\ \hline \hline \end{tabular} \end{table} Table 2: Descriptive statistics for test case 1 results.

\begin{table} \begin{tabular}{l l l l l l} \hline \hline **Measure** & **Group** & \(\mathbf{\overline{x}}\) & **s** & **max** & **min** \\ \hline \multirow{3}{*}{Elapsed seconds} & 3DShEx & 417.8 & 173.034 & 644 & 247 \\ & RDFShape & 265.6 & 106.746 & 456 & 204 \\ & Shumlex & 186.5 & 91.799 & 314 & 95 \\ \multirow{3}{*}{Success rate} & 3DShEx & 0.583 & 0.096 & 0.667 & 0.5 \\ & RDFShape & 0.7 & 0.14 & 0.833 & 0.5 \\ & Shumlex & 0.708 & 0.21 & 1 & 0.5 \\ \multirow{3}{*}{Precision} & 3DShEx & 0.154 & 0.0757 & 0.256 & 0.074 \\ & RDFShape & 0.268 & 0.074 & 0.357 & 0.174 \\ & Shumlex & 0.476 & 0.365 & 1 & 0.151 \\ \hline \hline \end{tabular} \end{table} Table 3: Descriptive statistics for test case 2 results.

Most notably, question 9, which involved pointing out the reference connecting two shapes, got no correct answers from either the 3DShEx or the RDFShape group, while most Shumlex users answered correctly. Given its uniqueness within the experiment, this particular difference in performance may be due to either an underlying cause or pure chance. Thus, it may only be stated with certainty that **in cases with low diagrammatic complexity, there is no evidence of difference in performances between tools**. Since only the Shumlex group managed to complete all the proposed tasks and performed adequately in error-prone tasks, there may be a need for further evaluation with larger samples to assess the potential influence of the tool on success rates and precision.

Regarding the second test case, **elapsed times show significant differences between groups** with a large effect size (\(\eta^{2}\)=0.405). Post-hoc results suggest that 3DShEx users require more time to perform tasks on large cases than Shumlex users. This may be explained by the combination of a novel navigable space with large diagrammatic complexity causing high cognitive load, thus exceeding working memory limits. This high cognitive load hypothesis is supported by the fact that the only 3DShEx user with lower spatial ability obtained the highest time in the experiment. As stated in Section 3.1, lower spatial ability may imply higher cognitive loads in 3D environments. Superior time performance for Shumlex users may be due to its closer resemblance to UML class diagrams, whose specification users claimed to be familiar with.
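For concreteness, the precision measure from Sec. 6.1.5 can be recomputed per participant as below. This is an illustrative Python sketch only; the numbers in the usage line are hypothetical and not taken from Tables 2–3:

```python
def precision(times: list[float], successes: list[float]) -> list[float]:
    """Per-participant precision for one test case: P = min(T) / T * S,
    following the formula in Sec. 6.1.5."""
    t_min = min(times)
    return [t_min / t * s for t, s in zip(times, successes)]

# Hypothetical example: three participants, times in seconds, success rates in [0, 1].
print(precision([95.0, 204.0, 247.0], [1.0, 0.833, 0.667]))
# The fastest fully correct participant scores 1.0; slower or less accurate ones score less.
```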
Finally, results show no significant difference between groups for any of the variables in the subjective evaluation performed by the students. Average ratings are mostly positive, or neutral otherwise. The sole exception in the qualitative analysis is the variable Scalability14, where the tools scored either neutrally or negatively. This suggests that users perceive the visualization tools to be of more use with small or medium schemas. Even though complexity management tools seem to be appreciated, extraneous cognitive load may still be excessive. This effect is probably exacerbated by their unfamiliarity with the language and the notation.

Footnote 14: "The tool is most useful in large use cases."

Oddly enough, while RDFShape scores a 3.8/5 in Complexity Management, when asked for feedback some of its users convey naught but dissatisfaction in this regard. _"As expected, the larger the use case, the more confusing the diagram"_ or _"In very large graphs it is complicated to see the arrows that link entities in the central regions"_. In spite of those -expected- statements, neither rates it negatively. In the light of such contradictions, there are several possible explanations. They might not have completely understood the statement to assess15, or they might have feared that too harsh an assessment would be detrimental to our interests (an RDFShape user even gives a perfect score to all variables but one). Mayhap it is merely a consequence of their lack of experience. Either way, it may seem like user feedback holds information of greater value to us.

Footnote 15: "The tool facilitates the understanding of complex areas."

Further analysis of users' comments unveils a similar perception of 3DShEx's complexity management. _"In small cases it is very useful, in large cases like the second one it is quite difficult to deal with"_ and _"In the second case it was impossible to follow the relationships and to find the texts of the relationships for each shape"_.

Figure 6: Precision values by tool in the second test case.

It cannot be concluded whether they actually used the complexity management mechanisms. However, if that is the case, those may be unintuitive to the users; either way, the tool fails to meet those needs. By comparison, there is a single comment related to Shumlex: _"In the second case, [...] it can be a little complicated to discern the name of the relationship. You can select the shape from which it comes out to differentiate [the name] but it would be nice to be able to do it by clicking on it or hovering over it"_. The contribution of the mechanism is appreciated while providing alternative solutions to that particular task. As a summary of the qualitative analysis, **complexity management perception seems to be more favourable to Shumlex**, while the remaining variables appear to have a similar impact throughout the tools.

## 7 Conclusions and future work

A UML-based visual notation for ShEx has been proposed, which is built upon broadly used and operationalized principles. Moreover, said notation has been implemented in both 2D and 3D prototypes, named respectively _Shumlex_ and _3DShEx_. Results of both qualitative and quantitative analysis lead to the following conclusions:

_Efficiency of Shumlex_. Even though Shumlex users mostly obtained better results than participants with other tools independently of the test case, the small sample size implies that those differences were not significant enough to generalize claims of efficiency.
Nonetheless, the above, together with receiving the most positive user feedback, makes us think that such universalization may be possible with further research. The absence of widespread complaints about complexity management -as occurs with the others- is likely to be the result of the mechanisms put in place.

_Cognitive overload in 3D environment_. 3DShEx users were significantly slower than Shumlex users in a large use case. Furthermore, 3DShEx obtains the worst average ratings in success rate, precision and most qualitative variables. While differences are not statistically significant in those, it is considered likely that further research may provide a basis for confirmation. Lastly, despite having at their disposal a complexity management mechanism similar to that of Shumlex, user complaints are directed towards scalability. Given that intrinsic cognitive load is the same as for Shumlex, it is concluded that interaction with the 3D environment is causing a greater extraneous cognitive load upon the user. The resulting cognitive overload frustrates the user to the detriment of their comprehension and proper use of the available features.

_UML-like visual notation_. By shaping the visual notation to resemble UML class diagrams, it was hoped to achieve an intuitive, transparent solution without forsaking efficiency. User evaluation of learning ease and semantic transparency is reasonably affirmative of such intent. On the other hand, its efficiency seems rather dependent on the manner in which the visual notation is presented.

_Future work_. It is considered that future efforts should be focused on Shumlex as the more promising approach. Analysis of user feedback suggests an extension of the complexity management capabilities so as to support more specialized tasks. E.g., being able to select a single shape reference. Moreover, the inclusion of a search engine could be of assistance to users when navigating large schemas.

## Appendix A Qualitative analysis results

## Appendix B Metric of similarity

Given a notation \(N\) with a set \(G_{N}\) of graphical symbols, the operationalization framework of Storrle et al. [11] proposes 4 criteria to ascertain perceptual discriminability: visual distance (VD), redundant coding (RC), perceptual pop-out (PPO) and textual differentiation (TD). These are normalized to an interval of [0, 1], in such a way that 0 denotes null discriminability and 1 compliance with all criteria. Finally, the average discriminability for the notation is calculated.

### Visual distance

Both the visual variable difference function _vvd(a,b)_ and the visual distance function _vd(g,h)_ are used as-is. The maximum value of _vd(g,h)_ is 1 and there are \(|G_{N}^{2}|\) possible combinations of _g,h_. Hence, the average visual distance function _VD(N)_ uses \(|G_{N}^{2}|\) as a denominator so that 1 is the highest value possible. This does not take into account that whenever \(g\) equals \(h\), _vd(g,h) = 0_, so the maximum value of the summation is \(|G_{N}^{2}|-|G_{N}|\). Thus, the denominator is modified and the subtraction of the unit removed so that the values are normalized in the specified range.

\[VD(N):=\frac{1}{|G_{N}^{2}|-|G_{N}|}\sum_{g,h\in G_{N}}\emph{vd}(g,h)\]

For instance, the visual distance between the graphical symbols _box (b)_ and _directed arrow (da)_ is as follows. As suggested, weights \(w\) are 7 for shape and 1 for the rest of the visual variables. Those visual variables not used have _vvd = 0_, thus only shape, brightness and texture may be computed.
In this particular case, shapes are in different main groups: lines and regions. Therefore, \(\emph{vvd}(v_{sh}(b),v_{sh}(da))=1\). The same holds for brightness, given that the colour of their main areas is completely opposite (black and white). However, they do have the same solid texture, hence \(\emph{vvd}(v_{tx}(b),v_{tx}(da))=0\).

\[\emph{vd}(b,da):=\frac{1}{||w||}\sum_{i=1}^{d}w_{i}\cdot\emph{vvd}(v_{i}(b),v_{i}(da)):=\frac{1}{14}(7\cdot 1+1\cdot 1+1\cdot 0):=0.57\]

This same procedure is repeated for all combinations of graphical symbols. The only new development is the comparison between shapes of the same basic group (i.e. arrows and lines), for which _vvd = 0.5_. Its results are shown in Table 5.

\begin{table} \begin{tabular}{l l l l l} \hline & Box & Directed arrow & Diamond arrow & Dashed line \\ \hline Box & 0 & 0.57 & 0.57 & 0.64 \\ Directed arrow & 0.57 & 0 & 0.32 & 0.32 \\ Diamond arrow & 0.57 & 0.32 & 0 & 0.39 \\ Dashed line & 0.64 & 0.32 & 0.39 & 0 \\ \hline \end{tabular} \end{table} Table 5: Visual distance for all graphical symbol pairs.

With all of the above, VD for our notation N may be finally calculated.

\[VD(N):=\frac{1}{16-4}(0.57\cdot 4+0.32\cdot 4+0.64\cdot 2+0.39\cdot 2):=0.47\]

### Redundant coding

Previous changes to the denominator apply to the RC(N) function as well. The _vr(g,h)_ function is used as-is. Results are displayed in Table 6.

\begin{table} \begin{tabular}{l l l l l} \hline & Box & Directed arrow & Diamond arrow & Dashed line \\ \hline Box & 0 & 0.25 & 0.25 & 0.38 \\ Directed arrow & 0.25 & 0 & 0.25 & 0.25 \\ Diamond arrow & 0.25 & 0.25 & 0 & 0.38 \\ Dashed line & 0.38 & 0.25 & 0.38 & 0 \\ \hline \end{tabular} \end{table} Table 6: Redundant coding for all graphical symbol pairs.

\[RC(N):=\frac{1}{16-4}(0.25\cdot 8+0.38\cdot 4):=0.29\]

### Perceptual pop-out

Each graphical symbol has at least a unique value in one visual variable. Taking into account the _shape_ variable alone fulfils this criterion. Once more, the subtraction is removed so that the best possible value is 1. Therefore, the function is as follows:

\[PPO(N):=\frac{1}{4}(1\cdot 4):=1\]

### Textual differentiation

As stated in Section 4.1.1, PoN does not consider visual constructs that make use of textual differentiation to convey distinct meanings as different graphical symbols (symbol overload). Therefore, a modification is made to this criterion so that it does not measure the proportion of graphical symbols which only differ by textual cues. The proportion of graphical symbols which convey several concepts by textual differentiation is evaluated instead.

\[TD(N):=1-\frac{2}{4}:=0.5\]
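As a quick cross-check of the Appendix B arithmetic, the following Python sketch recomputes the four criteria and their average from the pairwise tables (values transcribed from Tables 5 and 6; the 0.57 reported in Section 4.1.2 is the rounded mean):

```python
# Pairwise visual distances (Table 5) and redundant coding (Table 6) for the
# four symbols: box (b), directed arrow (da), diamond arrow (dia), dashed line (dl).
vd_pairs = {('b', 'da'): 0.57, ('b', 'dia'): 0.57, ('b', 'dl'): 0.64,
            ('da', 'dia'): 0.32, ('da', 'dl'): 0.32, ('dia', 'dl'): 0.39}
rc_pairs = {('b', 'da'): 0.25, ('b', 'dia'): 0.25, ('b', 'dl'): 0.38,
            ('da', 'dia'): 0.25, ('da', 'dl'): 0.25, ('dia', 'dl'): 0.38}

n = 4                # |G_N|
denom = n * n - n    # ordered pairs with g != h, as in the modified denominator

VD = sum(2 * v for v in vd_pairs.values()) / denom   # each unordered pair counts twice
RC = sum(2 * v for v in rc_pairs.values()) / denom
PPO = 1.0            # every symbol pops out on the shape variable alone
TD = 1 - 2 / 4

print(round(VD, 2), round(RC, 2), PPO, TD)     # 0.47 0.29 1.0 0.5
print(round((VD + RC + PPO + TD) / 4, 2))      # 0.57, above the 0.5 threshold
```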
2305.07653
Resonant elastic X-ray scattering of antiferromagnetic superstructures in EuPtSi$_{3}$
We report resonant elastic X-ray scattering (REXS) of long-range magnetic order in EuPtSi$_{\text{3}}$, combining different scattering geometries with full linear polarization analysis to unambiguously identify magnetic scattering contributions. At low temperatures, EuPtSi$_{\text{3}}$ stabilizes type A antiferromagnetism featuring various long-wavelength modulations. For magnetic fields applied in the hard magnetic basal plane, well-defined regimes of cycloidal, conical, and fan-like superstructures may be distinguished that encompass a pocket of commensurate type A order without superstructure. For magnetic field applied along the easy axis, the phase diagram comprises the cycloidal and conical superstructures only. Highlighting the power of polarized REXS, our results reveal a combination of magnetic phases that suggests a highly unusual competition between antiferromagnetic exchange interactions and Dzyaloshinsky--Moriya spin--orbit coupling of similar strength.
Wolfgang Simeth, Andreas Bauer, Christian Franz, Aisha Aqeel, Pablo J. Bereciartua Perez, Jennifer A. Sears, Sonia Francoual, Christian H. Back, Christian Pfleiderer
2023-05-12T17:59:59Z
http://arxiv.org/abs/2305.07653v1
# Resonant elastic X-ray scattering of antiferromagnetic superstructures in EuPtSi\({}_{3}\) ###### Abstract We report resonant elastic X-ray scattering (REXS) of long-range magnetic order in EuPtSi\({}_{3}\), combining different scattering geometries with full linear polarization analysis to unambiguously identify magnetic scattering contributions. At low temperatures, EuPtSi\({}_{3}\) stabilizes type A antiferromagnetism featuring various long-wavelength modulations. For magnetic fields applied in the hard magnetic basal plane, well-defined regimes of cycloidal, conical, and fan-like superstructures may be distinguished that encompass a pocket of commensurate type A order without superstructure. For magnetic field applied along the easy axis, the phase diagram comprises the cycloidal and conical superstructures only. Highlighting the power of polarized REXS, our results reveal a combination of magnetic phases that suggests a highly unusual competition between antiferromagnetic exchange interactions and Dzyaloshinsky-Moriya spin-orbit coupling of similar strength. In recent years great efforts have been made to identify magnetic superstructures in bulk materials, thin films, and nano-scaled systems [1; 2; 3; 4; 5]. In systems comprising ferromagnetic exchange with Dzyaloshinsky-Moriya (DM) spin-orbit coupling [6; 7], major discoveries include long-wavelength incommensurate modulations [8; 9; 10; 11], solitonic structures [12], and topologically nontrivial order such as skyrmion lattices [13; 14; 15; 16; 17; 18; 19]. While these modulated states under applied magnetic field may feature transitions between different superstructures, they collapse at a well-defined transition into a field-polarized state [20; 21; 22]. In comparison, less is known about materials comprising antiferromagnetic exchange with DM interactions, as the sheer number of possible modulated structures is much larger. Representing the perhaps most general condition, an unresolved question concerns possible magnetic order in the presence of antiferromagnetic exchange and DM interactions of similar strength. Focusing on magnetic ions such as Eu\({}^{2+}\) or Gd\({}^{3+}\), in which quenched orbital momentum gives way to almost unconstrained spin degrees of freedom, a rich variety of antiferromagnetic states has attracted great interest. Topical examples include incommensurate antiferromagnetism and a large topological Hall effect in EuGa\({}_{2}\)Al\({}_{2}\) and EuAl\({}_{4}\)[23; 24; 25; 26; 27; 28], complex antiferromagnetism in GdRh\({}_{2}\)Si\({}_{2}\), skyrmion lattice order in GdRu\({}_{2}\)Si\({}_{2}\)[29; 30] and Gd\({}_{2}\)PdSi\({}_{3}\)[31; 32], colossal magnetoresistance in antiferromagnetic Zintl compounds such as Eu\(X_{2}Y_{2}\) (\(X=\text{Cd, In}\), and \(Y=\text{As, Sb, P}\)) [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43], as well as magnetic order and superconductivity in Eu\(X_{2}\)As\({}_{2}\) (\(X=\text{Fe, Ni, Cr, Co}\)) and related compounds [44; 45; 46]. These systems, however, lack global DM interactions in their centrosymmetric crystal structures. This is contrasted by the observation of magnetic superstructures, superconductivity, and quantum criticality in Eu\(TX_{3}\), where \(T=\text{Pt, Pd, Ni, Rh, Co, Ir}\) and \(X=\text{Si, Ge, Sn, Ga}\), most of which lack inversion symmetry [47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59]. For our study, we selected EuPtSi\({}_{3}\), which crystallizes in the noncentrosymmetric tetragonal BaNiSn\({}_{3}\) structure (space group \(I4mm\)), shown in Fig.
1(a) [60]. Measurements of the bulk properties established the characteristics of antiferromagnetic order of localized Eu\({}^{2+}\) moments below a transition temperature \(T_{\text{N}}=17\,\text{K}\)[61]. Depending on field direction, up to four different phase pockets, denoted A through D, were identified, as shown in Fig. 1(b) for field parallel to \(\langle 110\rangle\). For the point group symmetry of EuPtSi\({}_{3}\), DM vectors \(\mathbf{D}_{ij}\) are permitted that support the formation of superlattice structures with Neel-type twisting including antiferromagnetic Neel skyrmions [5; 62]. Preliminary neutron scattering suggested some form of superlattice modulation with a wavelength of about 100 Å at zero field, however, without information on the nature of the underlying antiferromagnetism [61]. Experimentally, the unambiguous determination of complex antiferromagnetic spin structures is especially demanding, requiring scattering techniques with high momentum resolution at large momentum transfers, the possibility to obtain element-specific information, and to separate spin and orbital degrees of freedom. Moreover, in many cases only tiny sample volumes are available, e.g., in the form of thin films, nano-scale systems, or bulk samples of highest purity. While neutron scattering has become indispensable in studies of magnetic structures, it cannot meet these general requirements, not to mention prohibitively strong neutron absorption in elements such as Eu, Gd, or Cd. In contrast, seminal studies of comparatively simple magnetic structures in selected rare-earth compounds have demonstrated the capability of resonant elastic X-ray scattering (REXS) with full linear polarization analysis (FLPA) to overcome these challenges [63, 64, 65, 66, 38]. Using REXS, we determined the four antiferromagnetically ordered phases of EuPtSi\({}_{3}\). As a main result, we find that all antiferromagnetic phases represent variations of the same type A antiferromagnetism, where the tetragonal [001] axis is the easy magnetic direction. Combining different scattering geometries and FLPA, we identify long-wavelength cycloidal, conical, and fan-like superstructures, consistent with the DM vectors expected in space group \(I4mm\)[61]. At intermediate fields, the conical and fan-like superstructures encompass a phase pocket of pure type A antiferromagnetism without superstructure, reflecting antiferromagnetic exchange and DM interactions of similar strength. For field along [001], only the phases with cycloidal and conical superstructures are stabilized. For the REXS experiments, a polished single-crystal cube with an edge length of 2 mm, prepared from an ingot grown by means of the optical floating-zone technique [67, 68], was used. The same sample was also used for the study of the magnetization, ac susceptibility, and specific heat reported in Ref. [61]. REXS was carried out in the second experimental hutch EH2 of beamline P09 at the synchrotron source PETRA III [69]. Hard X-rays at an incident photon energy of 7.61 keV were used close to the L\({}_{\rm II}\) edge of europium, cf. Fig. 1(a), where the magnetic cross-section is dominated by electric dipole transitions [70]. The X-ray diffraction was carried out in a horizontal scattering geometry and the polarization of the scattered beam was analyzed using PG006 as the analyzer crystal [67, 71], permitting a clean polarization analysis.
Magnetic structure refinement by means of FLPA was carried out using a double phase-retarder [71] in combination with the analyzer. In this setup, the polarization plane of the incident beam was rotated rather than the sample, which avoids parasitic mixing due to slightly different beam spot positions on the sample as well as differences in the angular positions of the diffractometer. Further information on the sample alignment, the mathematical description of polarized REXS, and the magnetic structure determination may be found in the supplementary information [67]. The sample was cooled using a variable temperature insert. A cryomagnet was used to apply vertical magnetic fields of up to 14 T, where two field orientations were studied. In a first experiment, the magnetic field was applied in the tetragonal basal plane, enclosing an angle of 20 deg with the [\(\bar{1}10\)] axis. This way, the magnetic scattering of all domains could be studied in a single scattering channel, namely \(\pi\to\sigma^{\prime}\). In a second experiment, the magnetic field was applied parallel to the crystallographic [\(\bar{1}10\)] axis, for which the evolution of domain populations as a function of field made it possible to discriminate multi-\(\mathbf{k}\) from single-\(\mathbf{k}\) characteristics [61, 67]. The FLPA was carried out for this high-symmetry configuration. Data shown in Figs. 1 and 2 were recorded with the first configuration; data shown in Fig. 3 were recorded with the second configuration; further data are shown in the supplementary information [67]. For clarity, momentum transfers are given in reciprocal lattice units (r.l.u.), corresponding to \(\frac{2\pi}{a}\) along directions \(h\) and \(k\) or \(\frac{2\pi}{c}\) along \(l\).

Figure 1: Crystallographic and magnetic properties of EuPtSi\({}_{3}\). (a) Tetragonal unit cell of EuPtSi\({}_{3}\), space group \(I4mm\), and resonant enhancement of the magnetic Bragg intensity when tuning the incident photon energy across the L\({}_{\rm II}\) edge of europium, characteristic of magnetism predominantly carried by Eu\({}^{2+}\) moments. (b) Magnetic phase diagram for field parallel to [\(\bar{1}10\)], as inferred from the susceptibility, \(\mathrm{Re}\,\chi_{\mathrm{ac}}\)[61]. Antiferromagnetic phases with cycloidal (Cycl, green), conical (Con, red), commensurate (Com, orange), and fan-like (Fan, yellow) order as well as paramagnetic (PM) and field-polarized (FP) regimes may be distinguished. (c) Field dependence of the REXS intensities at Bragg positions characteristic of the different magnetic phases.

For all four antiferromagnetic phases, REXS intensity was recorded at specific positions \(\mathbf{Q}\) in the vicinity of the reciprocal-space position \((h,k,l)=(0,0,5)\), which is crystallographically forbidden. As shown in Fig. 1(c), the integrated scattering intensities as a function of field accurately reflect the phase boundaries of the magnetic phase diagram. The intensity distributions are depicted schematically in Fig. 2(a). Typical REXS data are presented in the form of two-dimensional maps inferred from scans at constant \(l\) in Fig. 2(b) and scans along \(l\) at fixed \(h\) and \(k\) in Fig. 2(c). The data presented below were measured under a rotation of the linear polarization by 90 deg, namely in the \(\pi\to\sigma^{\prime}\) channel, characteristic of magnetic scattering [70].
In phase A (\(H<H_{1}\)), eight magnetic satellites were observed at \((\pm\epsilon,\pm\epsilon,5\pm\delta_{1})\) with \(\epsilon=0.007(1)\) and \(\delta_{1}=0.077(6)\) (green spheres). This distribution implies superlattice modulations of the staggered magnetization with \(c/(2\delta_{1})\approx 64\) Å along \([001]\) and \(a/(2\sqrt{2}\epsilon)\approx 215\) Å in the basal plane along \(\langle 110\rangle\). As satellites antipodal to \((0,0,5)\) arise from the same domain of the incommensurate modulation axis, four crystallographically equivalent domains are distinguished that were populated equally after zero-field cooling. Maxima at reciprocal-space positions with \(l<5\) are labeled by an index enumerating the domains. Maxima at \(l>5\) attributed to the same domain are denoted by an asterisk. In the \(\pi\to\sigma^{\prime}\) polarization channel, the maxima at \(\mathbf{Q}_{\text{cycl,3}}\) and \(\mathbf{Q}_{\text{cycl,4}}\) are weak due to well understood polarization effects for the scattering geometry chosen here, although all domains are populated equally (see supplementary information [67]). The scattering intensity in phase B (\(H_{1}<H<H_{2}\)) is characteristic of domains with an in-plane modulation perpendicular to the field (red spheres). In this field range, satellites at \(\mathbf{Q}_{\text{cycl,3}}\) and \(\mathbf{Q}_{\text{cycl,4}}\) vanish, while the modulation lengths remain unchanged, \(c/(2\delta_{1})\approx 64\) Å along \([001]\) and \(a/(2\sqrt{2}\epsilon)\approx 215\) Å in the basal plane along \(\langle 110\rangle\). Accordingly, the in-plane modulation remains aligned with the crystallographic axes rather than following the low-symmetry field direction. The domain populations display hysteresis as a function of field, as illustrated in the supplementary information [67]. Other than in phases A and B, scattering intensity in phase C (\(H_{2}<H<H_{3}\)) was only observed at \((0,0,5)\) (orange sphere), characteristic of single-domain commensurate antiferromagnetic order without superlattice modulations. Entering phase D (\(H_{3}<H<H_{4}\)), weak magnetic intensity at \((0,0,5\pm\delta_{2})\) with \(\delta_{2}=0.114(6)\) is observed (yellow spheres), characteristic of single-domain incommensurate order with a modulation length of 43 Å of the staggered magnetization along \([001]\) and no superlattice modulation in the basal plane. For fields exceeding the highest critical fields observed in the bulk properties, i.e., \(H_{4}<H\), no scattering intensity was observed, as expected for the field-polarized state.
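As a quick plausibility check on these numbers (a back-of-the-envelope calculation of ours; the lattice constants are approximate literature values for EuPtSi\({}_{3}\) assumed here, not quoted in the text), the stated modulation lengths follow directly from the satellite offsets:

```python
# Back-of-the-envelope check of the superstructure modulation lengths.
# Assumed lattice constants (approximate literature values, not from the text).
a, c = 4.26, 9.85                           # angstroms
eps, delta1, delta2 = 0.007, 0.077, 0.114   # satellite offsets in r.l.u.

print(f"phases A/B, along [001]:    c/(2*delta1) = {c / (2 * delta1):.0f} A")            # ~64 A
print(f"phases A/B, in-plane <110>: a/(2*sqrt(2)*eps) = {a / (2 * 2**0.5 * eps):.0f} A")  # ~215 A
print(f"phase D, along [001]:       c/(2*delta2) = {c / (2 * delta2):.0f} A")            # ~43 A
```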
Figure 2: Resonant elastic X-ray scattering. (a) Schematic depiction of the REXS intensity around the reciprocal-space position \((h,k,l)=(0,0,5)\) in the four ordered phases established in bulk measurements [61]. Maxima at positions \(\mathbf{Q}\) are indexed with the name of the phase. Arabic numbers indicate crystallographically equivalent positions attributed to different magnetic domains. Positions \(\mathbf{Q}\) and \(\mathbf{Q}^{*}\) are mirrored with respect to \((0,0,5)\) and belong to the same domain. Magnetic field \(\mathbf{H}\) was applied along a nonsymmetry axis within the basal plane in order to discriminate single-domain and multi-domain states by means of their evolution under field, cf. supplementary information for data with field parallel to \(\langle 110\rangle\)[67]. (b) Intensity distributions recorded across planes of constant \(l\), marked by blue shading in (a). (c) Intensity when scanning \(l\) through characteristic magnetic Bragg peaks at constant \(h\) and \(k\). For clarity, in the conical phase data were mirrored at \(l=5\) (open symbols).

To determine the nature of the magnetic order unambiguously, FLPA was carried out in each magnetic phase for magnetic field parallel to \([\bar{1}10]\)[63; 64]. The experimental setup is schematically depicted in Fig. 3(a). The procedure is illustrated by means of data recorded in the commensurate phase shown in Figs. 3(b) to 3(d), cf. supplementary information for data recorded in the other phases [67]. For a given polarization angle \(\eta\) of the incident beam, the scattering intensity for a given orientation of the analyzer crystal, \(\nu^{\prime}\), was determined by integrating over a rocking scan of the analyzer crystal using a Gaussian fit [72] [Fig. 3(b)]. Such rocking scans were carried out for a series of analyzer orientations \(\nu^{\prime}\). Fitting the integrated intensities with the equation \(f(\nu^{\prime})\propto 1+P_{1}^{\prime}\cos 2\nu^{\prime}+P_{2}^{\prime}\sin 2\nu^{\prime}\) [Fig. 3(c)], the linear polarization of the scattered beam was determined in terms of its Poincare-Stokes parameters \(P_{1}^{\prime}\) and \(P_{2}^{\prime}\). This measurement protocol was repeated for different values of \(\eta\)[73]. Finally, starting from the irreducible representations, values of the Poincare-Stokes parameters \(P_{1}^{\prime\mathrm{calc}}\left(\eta_{i}\right)\) and \(P_{2}^{\prime\mathrm{calc}}\left(\eta_{i}\right)\) were calculated for each candidate magnetic structure and compared with the seven pairs of Poincare-Stokes parameters experimentally determined [Fig. 3(d)].
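For illustration, the two-parameter fit just described is straightforward to reproduce numerically. The sketch below is ours, with synthetic intensities standing in for measured rocking-scan integrals; it extracts \(P_{1}^{\prime}\) and \(P_{2}^{\prime}\) from intensities recorded at a series of analyzer angles \(\nu^{\prime}\):

```python
# Minimal sketch of the Poincare-Stokes extraction: integrated intensities
# measured at analyzer orientations nu' are fit to
#   f(nu') = A * (1 + P1*cos(2 nu') + P2*sin(2 nu')).
# The sample data below are synthetic placeholders, not measured values.
import numpy as np
from scipy.optimize import curve_fit

def stokes_model(nu_deg, amplitude, p1, p2):
    nu = np.radians(nu_deg)
    return amplitude * (1.0 + p1 * np.cos(2 * nu) + p2 * np.sin(2 * nu))

# Analyzer orientations (deg) and synthetic integrated intensities.
nu_prime = np.linspace(0.0, 180.0, 13)
rng = np.random.default_rng(0)
true_amplitude, true_p1, true_p2 = 100.0, -0.8, 0.3
intensity = stokes_model(nu_prime, true_amplitude, true_p1, true_p2)
intensity += rng.normal(scale=2.0, size=nu_prime.size)

popt, pcov = curve_fit(stokes_model, nu_prime, intensity,
                       p0=(intensity.mean(), 0.0, 0.0))
perr = np.sqrt(np.diag(pcov))
print(f"P1' = {popt[1]:+.3f} +/- {perr[1]:.3f}")
print(f"P2' = {popt[2]:+.3f} +/- {perr[2]:.3f}")
```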
Figure 3: Magnetic structure refinement by means of FLPA. (a) Schematic depiction of the setup used for FLPA. Variables without and with prime denote quantities before and after scattering. Polarization directions \(\pi\) and \(\sigma\) are in and perpendicular to the scattering plane, respectively. The direction of the X-ray polarization (red double-headed arrow) with respect to \(\sigma\) is denoted by the angle \(\eta\). The rotatable analyzer crystal selects a polarization direction enclosing an angle \(\nu^{\prime}\) with \(\sigma^{\prime}\). The crystal may be rocked by the angle \(\theta_{A}^{\prime}\). Phase plates determine the incident polarization [71]. (b) Rocking scan of the analyzer for a given incident (\(\pi\)) and scattered (\(\sigma^{\prime}\)) polarization channel. Integrated intensity is inferred from a Gaussian fit (solid line). Typical data for the commensurate phase are shown. (c) Integrated intensity for a given incident polarization (\(\pi\)) as a function of the analyzer orientation \(\nu^{\prime}\) and Poincaré–Stokes fit (solid line). Magnetic intensity in channels \(\pi\rightarrow\sigma^{\prime}\) and \(\pi\rightarrow\pi^{\prime}\) reflects magnetization components in and perpendicular to the scattering plane. (d) Poincaré–Stokes parameters \(P_{1}^{\prime}\) and \(P_{2}^{\prime}\) as a function of the incident polarization angle \(\eta\). Solid lines correspond to calculations based on commensurate antiferromagnetic order. Discrepancy from \(P_{1}^{\prime}=-1\) at \(\eta=0\), marked in green, is attributed to charge scattering. (e)–(h) Schematic real-space depictions of the magnetic structure in the different phases in the crystallographic (110) plane (orange shading). Blue and red colors indicate large and small components along [001]. The modulation length refers to the staggered magnetization.

Crucial for the refinement of the magnetic structure, the FLPA made it possible to single out the spin scattering contributions. Namely, in all magnetic phases scattering intensity was also observed under unchanged linear polarization, i.e., special care had to be taken to distinguish magnetic from nonmagnetic scattering contributions. In the \(\sigma\rightarrow\sigma^{\prime}\) channel, the scattering must be purely nonmagnetic, while it may be magnetic or nonmagnetic in the \(\pi\rightarrow\pi^{\prime}\) channel [74; 70]. For the magnetic structure refinement, it was assumed that the nonmagnetic scattering is due to charge scattering. For increasing magnetic field, going from phase A to phase D, inclusion of the charge scattering improved the goodness of fit dramatically, cf. supplementary information [67]. In addition, intensity maxima were observed in the cycloidal and conical phases in the \(\pi\rightarrow\pi^{\prime}\) channel at \((0,0,5\pm\delta_{1})\), independent of the magnetic satellites. This intensity may be characteristic of so-called truncation rods arising from finite penetration depth or a symmetry reduction due to structural modulations or charge-density wave order [27]. While further studies are needed to resolve the origin of the charge scattering, the determination of the magnetic structures pursued here turns out to be robust.

The magnetic structures inferred from REXS with FLPA, taking charge scattering into account, are depicted schematically for the crystallographic (110) plane in Figs. 3(e) to 3(h). Starting with phase A, shown in Fig. 3(e), type A antiferromagnetism is observed with an antiparallel coupling of the moments along the \(\langle 111\rangle\) directions and a long-wavelength cycloidal superstructure. The superstructure exhibits modulations along [001] and one of the \(\langle 110\rangle\) axes. This superstructure supports four equivalent domain populations in zero field. Fig. 3(e) depicts the domain associated with \(\mathbf{Q}_{\text{cycl,1}}\). Considering phase B, depicted in Fig. 3(f), the same type A antiferromagnetism persists with a superstructure that is closely related to the cycloid, supporting modulations along [001] and perpendicular to the field direction. The main difference with respect to phase A is the uniform magnetization along the field direction. Thus, with increasing field the opening angle of the conical structure decreases. Phase C, shown in Fig. 3(g), represents commensurate type A antiferromagnetism in which ferromagnetic layers of moments parallel and antiparallel to the [001] axis alternate along the same axis, superimposed with a uniform magnetization along the field direction. The resulting magnetic structure is noncollinear but coplanar, without additional twisting or scalar spin chiralities. Finally, as shown in Fig. 3(h), phase D corresponds to type A order with a long-wavelength amplitude-modulated superstructure of moments pointing along [001] and a uniform magnetization along the field direction. This modulation may be referred to as fan-like and differs distinctly from the cycloidal and conical modulations. Considering the DM vectors permitted by the crystal structure [61], Hamiltonian contributions of magnetic moments for the next-nearest neighbor bonds along \(\langle 111\rangle\) perpendicular to the field direction \([\bar{1}10]\) favor spin canting around \([\bar{1}10]\), such as in the cycloidal and conical state.
For next-nearest neighbor bonds along \(\langle 111\rangle\) perpendicular to [110], spin canting around [110] is instead favored, as observed in the fan-like phase. In combination with the Zeeman energy in applied fields, modulated states as different as the cycloidal and the fan-like state may be realized. We finally note that the critical field of the field-polarized state, of order 9 T, which sets the scale of the antiferromagnetic exchange interactions, exhibits comparatively small anisotropy [60; 61]. As the commensurate phase is encompassed by the phases supporting cycloidal, conical, and fan-like superstructures, the DM spin-orbit coupling must be comparable in strength to the antiferromagnetic exchange. Thus, building on the advantages offered by REXS with FLPA in studies of antiferromagnetic superstructures and materials not amenable to neutron scattering, we identify a highly unusual combination of interactions and magnetic phases which, to the best of our knowledge, has neither been reported experimentally nor addressed theoretically before. We wish to thank M. Azhar, M. Garst, F. Haslbeck, J. R. Linares Mardegan, S. Mayr, A. Senyshyn, S. Sorn, and M. Wilde for fruitful discussions and assistance with the experiments. This study has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under TRR80 (From Electronic Correlations to Functionality, Project No. 107745057, Project E1), SPP2137 (Skyrmionics, Project No. 403191981, Grant PF393/19), and the excellence cluster MCQST under Germany's Excellence Strategy EXC-2111 (Project No. 390814868). Financial support by the European Research Council (ERC) through Advanced Grants No. 291079 (TOPFIT) and No. 788031 (ExQuiSid) as well as through the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 884104 (PSI-FELLOW-III-3i) is gratefully acknowledged. We acknowledge DESY (Hamburg, Germany), a member of the Helmholtz Association HGF, for the provision of experimental facilities. Parts of this research were carried out at beamline P09 at PETRA III at DESY. Beamtimes were allocated for proposals I-20180440 and I-20200748.
2310.03683
A strong min-max property for level sets of Phase Transitions
We show that certain functions whose nodal sets lie near a fixed nondegenerate minimal hypersurface satisfy a strong min-max principle for the Allen--Cahn energy which is analogous to the strong min-max principle for non-degenerate minimal hypersurfaces first proved by Brian White.
Érico Melo Silva
2023-10-05T17:01:51Z
http://arxiv.org/abs/2310.03683v2
# A strong min-max property for level sets of phase transitions ###### Abstract. We show that certain functions whose nodal sets lie near a fixed nondegenerate minimal hypersurface satisfy a strong min-max principle for the Allen-Cahn energy which is analogous to the strong min-max principle for nondegenerate minimal hypersurfaces first proved by Brian White [14]. ## 1. Introduction Given a Riemannian manifold \((M^{n},g)\) a minimal hypersurface \(\Sigma^{n-1}\subset M\) is one which arises as a _critical point_ of the area functional. The variational properties of minimal hypersurfaces have been a rich source of study for geometric analysis. In particular, the properties of the _stability operator_, \(L_{\Sigma}\), of a minimal hypersurface often determine important aspects of geometric variational problems. In [14], B. White proved that any minimal submanifold which is nondegenerate (in the sense that it admits no non-zero Jacobi fields, i.e. the equation \(L_{\Sigma}f=0\) has no nontrivial solutions) and has finite Morse index is the solution of a strong min-max problem. The Allen-Cahn energy for functions \(u\in W^{1,2}(M)\) is given by \[E_{\varepsilon}(u)\doteq\int_{M}\frac{\varepsilon|\nabla u|^{2}}{2}+\frac{W(u)}{\varepsilon}, \tag{1.1}\] where \(W(t)=\frac{(1-t^{2})^{2}}{4}\) is the standard double-well potential. The Euler-Lagrange equation of 1.1 is known as the _Allen-Cahn equation_ and is given by \[\varepsilon^{2}\Delta_{g}u=W^{\prime}(u). \tag{1.2}\] The solutions to 1.2 have a well known correspondence with minimal hypersurfaces, beginning with Modica-Mortola's work (see e.g., [12], [13]) on the \(\Gamma\)-convergence of the energies 1.1 to the perimeter functional acting on sets of finite perimeter. A rich body of work has emerged in recent years utilizing min-max properties of the Allen-Cahn equation 1.2 to provide variational constructions of minimal hypersurfaces. Of particular note in this direction is the work of Guaraco [10], Guaraco-Gaspar [11], Chodosh-Mantoulidis [12], and many others. In the other direction, constructing solutions to the Allen-Cahn equation given a fixed minimal hypersurface, we have the following result of Pacard-Ritore [14]. **Theorem 1.1** (Theorem 1.1 [14]).: _Assume that \((M,g)\) is an \(n\)-dimensional closed Riemannian manifold and \(\Sigma^{n-1}\subseteq M^{n}\) is a \(2\)-sided, nondegenerate minimal hypersurface. Then there exists \(\varepsilon_{0}>0\) such that \(\forall\varepsilon<\varepsilon_{0}\), there exist solutions, \(u_{\varepsilon}\), to equation (1.2) such that \(u_{\varepsilon}\) converges to \(+1\) (resp. \(-1\)) on compact subsets of \((M^{+})^{o}\) (resp. \((M^{-})^{o}\)) and_ \[E_{\varepsilon}(u_{\varepsilon})\xrightarrow{\varepsilon\to 0}2\sigma_{0}\mathcal{A}(\Sigma)\] _where \(\mathcal{A}(\Sigma)\) is the \((n-1)\)-dimensional area of \(\Sigma\) and \(\sigma_{0}\) is a constant which depends on the choice of potential in 1.1._ In [15] the author, together with J. Marx-Kuo, gave a variant of the Allen-Cahn energy 1.1 suited to the study of the geometry of level sets of solutions. In this paper, we prove that our variant of the Allen-Cahn energy satisfies a strong min-max property analogous to that proved in White [21], provided that we restrict to surfaces which lie over the minimal hypersurface as small normal graphs. As a geometric application of our result, we give a new, purely variational proof of 1.1. _Remark 1.2_.: The first variational proof of 1.1 was given recently by Pigati-De Philippis [14].
In their work, the candidate solutions to the Allen-Cahn equation are constructed using gradient flow methods. In the present work, our construction relies on the variational properties of the associated surface energy first defined in [15], as well as a variant of the classical mountain pass theorem for solutions to elliptic semilinear equations. ### Acknowledgements The author is indebted to his advisor, Fernando Coda Marques, for his patience and essential conversations regarding the completion of the present work. The author would also like to thank Jared Marx-Kuo for innumerable mathematical conversations. ## 2. Preliminaries and Main Results ### Notation and conventions * \((M^{n},g)\) always denotes an orientable, closed Riemannian manifold. * Unless specified otherwise \(\Sigma\subset M\) denotes an embedded, closed hypersurface, i.e. a submanifold of \(M\) of codimension 1. * \(B^{k}\) is the unit ball in \(\mathbf{R^{k}}\). * If \(E\subset M\) is a Borel set, then \(\mathcal{C}(E)\) denotes the space of sets of finite perimeter (Caccioppoli sets) \(\Omega\subset E\). * We say a sequence of Caccioppoli sets \(\Omega_{n}\to\Omega\) if the measure of the symmetric difference tends to \(0\) (i.e. \(|\Omega_{n}\Delta\Omega|\to 0\)). * We denote by \(C_{\varepsilon}^{k,\alpha}(M)\) the Banach space of \(C^{k,\alpha}\) functions on \(M\) equipped with the \(\varepsilon\)-weighted Holder norms (2.1) \[\|f\|_{C_{\varepsilon}^{k,\alpha}}\doteq\sum_{|\beta|\leq k}\varepsilon^{|\beta|}\|D^{\beta}f\|_{C^{0}(M)}+\varepsilon^{k+\alpha}[D^{k}f]_{\alpha},\] where \(\beta\) are multiindices and \([\cdot]_{\alpha}\) is the usual \(\alpha\)-Holder seminorm. * We denote by \(W_{\varepsilon}^{1,2}(M)\) a Banach space we call the \(\varepsilon\)-weighted Sobolev space, with norm (2.2) \[\|f\|_{W_{\varepsilon}^{1,2}(M)}^{2}\doteq\varepsilon\|f\|_{L^{2}(M)}^{2}+\varepsilon^{3}\|\nabla f\|_{L^{2}(M)}^{2}.\] Similarly we denote by \(W_{0,\varepsilon}^{1,2}(M)\) those functions in \(W_{\varepsilon}^{1,2}(M)\) which have trace \(0\). * We will repeatedly use the properties of the one-dimensional solution \(\mathbf{H}(z)=\tanh(z/\sqrt{2})\) to the Allen-Cahn equation \(\partial_{z}^{2}u-W^{\prime}(u)=0\). We define four functions, \(\omega,\rho,\tau,\kappa\), as the unique solutions to the auxiliary ODEs (2.3) \[\omega^{\prime\prime}-W^{\prime\prime}(\mathbf{H})\omega=\partial_{z}\,\mathbf{H}^{\prime},\] (2.4) \[\rho^{\prime\prime}-W^{\prime\prime}(\mathbf{H})\rho=\omega^{\prime},\] (2.5) \[\tau^{\prime\prime}-W^{\prime\prime}(\mathbf{H})\tau=z\,\mathbf{H}^{\prime},\] (2.6) \[\kappa^{\prime\prime}-W^{\prime\prime}(\mathbf{H})\kappa=\mathbf{H}\,\omega,\] all with the boundary conditions \(f(0)=0\), \(\lim_{z\to\infty}f(z)=0\). * We define constants \[\sigma_{0}\doteq\int_{0}^{\infty}(\partial_{z}\,\mathbf{H}(z))^{2}dz=\frac{\sqrt{2}}{3},\] \[\sigma\doteq\mathbf{H}^{\prime}(0)=\frac{1}{\sqrt{2}}.\] * We write \(\ell_{\varepsilon,u}\doteq\Delta_{g}-W^{\prime\prime}(u)\) for the linearized Allen-Cahn operator at a solution \(u\). * For any function \(f:\mathbf{R}\to\mathbf{R}\) we define \(f_{\varepsilon}(z)\doteq f(z/\varepsilon)\). * We define a cut-off of the heteroclinic, \(\overline{\mathbf{H}}\), as follows. Let \(\ell>5\) and let \(\chi_{\varepsilon}:\mathbf{R}\to\mathbf{R}\) be a cutoff function with \[\begin{cases}\chi_{\varepsilon}(z)=1,&|z|\in[0,-\ell\log(\varepsilon)],\\ \chi_{\varepsilon}(z)=0,&|z|\not\in[0,-2\ell\log(\varepsilon)].\end{cases}\] Then \[\overline{\mathbf{H}}(z)\doteq\chi_{\varepsilon}(z)\,\mathbf{H}(z)+(1-\chi_{\varepsilon}(z)).\] Similarly, we can define the cut-off functions \(\overline{\omega}\), \(\overline{\rho}\), \(\overline{\tau}\) and \(\overline{\kappa}\), where these each cut-off to \(0\) instead of to \(1\), as the corresponding functions decay to \(0\) at infinity. We note the cut-offs satisfy \[\left(\frac{d^{2}}{dz^{2}}-W^{\prime\prime}(\overline{\mathbf{H}})\right)\overline{\mathbf{H}}^{\prime}=E,\] \[\left(\frac{d^{2}}{dz^{2}}-W^{\prime\prime}(\overline{\mathbf{H}})\right)\overline{\omega}^{\prime}=E+\overline{\mathbf{H}}^{\prime},\] with \(\|E\|_{W^{1,k}}\leq C\varepsilon^{\ell}\). * If \(\Sigma\) is a sufficiently smooth minimal hypersurface, we denote by \(H_{\Sigma}\) its mean curvature, and denote by \(A_{\Sigma}\) its second fundamental form. * We denote by \(\nu\) the outward pointing unit normal to \(\Sigma\). * We write \(N(\eta)\) for a tubular neighborhood of \(\Sigma\) of height \(\eta\). Provided \(\eta\) is sufficiently small, we may find a parametrization \(\Sigma\times(-\eta,\eta)\to N(\eta)\), by \[(s,z)\mapsto\exp_{s}(z\nu(s)).\] We call this parametrization _Fermi coordinates_ in the tubular neighborhood \(N(\eta)\). * If \(f\in C^{k,\alpha}(\Sigma)\) and \(f\) is sufficiently small, we denote by \(\Gamma(f)\) the _normal graph_ of \(f\) over \(\Sigma\). That is, in Fermi coordinates, \[\Gamma(f)\doteq\{x\in M\,|\,x=\exp_{s}(f(s)\nu(s))\}.\]
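As a consistency check on these conventions (the following computation is ours, not taken from the sources cited above), note that \(W^{\prime}(t)=t^{3}-t\) and that \(\mathbf{H}(z)=\tanh(z/\sqrt{2})\) satisfies \(\mathbf{H}^{\prime}=\frac{1}{\sqrt{2}}(1-\mathbf{H}^{2})\), whence \[\mathbf{H}^{\prime\prime}=-\sqrt{2}\,\mathbf{H}\,\mathbf{H}^{\prime}=-\mathbf{H}(1-\mathbf{H}^{2})=W^{\prime}(\mathbf{H}),\] so \(\mathbf{H}\) indeed solves \(\partial_{z}^{2}u-W^{\prime}(u)=0\). The same identity produces the stated values of the constants: \[\sigma_{0}=\int_{0}^{\infty}(\mathbf{H}^{\prime})^{2}\,dz=\int_{0}^{1}\frac{1-h^{2}}{\sqrt{2}}\,dh=\frac{1}{\sqrt{2}}\left(1-\frac{1}{3}\right)=\frac{\sqrt{2}}{3},\qquad\sigma=\mathbf{H}^{\prime}(0)=\frac{1}{\sqrt{2}}.\]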
In [1] Brezis-Oswald considered the solvability of the Dirichlet problem for the equation 1.2. **Theorem 2.1** ([1] Theorem 1).: _Suppose \(\Omega\subset M\) is an open domain with smooth boundary. If \(\varepsilon<\lambda_{1}(\Omega)^{-\frac{1}{2}}\), then_ \[\begin{cases}\Delta_{g}u=\frac{W^{\prime}(u)}{\varepsilon^{2}}&\text{ in }\Omega,\\ u>0&\text{ in }\Omega,\\ u=0&\text{ on }\partial\Omega,\end{cases} \tag{2.7}\] _has a unique solution. Moreover, the solution minimizes the energy 1.1 over all functions in \(W^{1,2}_{0}(\Omega)\)._ It follows from 2.1 that for a separating hypersurface \(\Sigma\subset M\), along which \(M\) splits into a disjoint union \(M=M^{+}\sqcup_{\Sigma}M^{-}\), there exists an \(\varepsilon_{0}\) sufficiently small so that for all \(\varepsilon<\varepsilon_{0}\) the problem 2.7 is uniquely solvable in both \(M^{+}\) and \(M^{-}\). This observation led the author, together with J. Marx-Kuo, in [13] to define the _balanced energy_ of \(\Sigma\). **Definition 2.2**.: Let \(\Sigma\subset M\) be a 2-sided, separating hypersurface splitting \(M\) into two disjoint open sets \(M^{+}\) and \(M^{-}\). Suppose \(\varepsilon_{0}\) is sufficiently small that the problem 2.7 admits a solution for both \(M^{+}\) and \(M^{-}\). Call these solutions \(u^{+}_{\Sigma,\varepsilon}\) and \(u^{-}_{\Sigma,\varepsilon}\), respectively.
Then we define \[\mathcal{B}_{\varepsilon}(\Sigma)\doteq\int_{M^{+}}\frac{\varepsilon|\nabla u^{+}_{\Sigma,\varepsilon}|^{2}}{2}+\frac{W(u^{+}_{\Sigma,\varepsilon})}{\varepsilon}+\int_{M^{-}}\frac{\varepsilon|\nabla u^{-}_{\Sigma,\varepsilon}|^{2}}{2}+\frac{W(u^{-}_{\Sigma,\varepsilon})}{\varepsilon} \tag{2.8}\] When \(u^{+}_{\Sigma,\varepsilon}\) and \(u^{-}_{\Sigma,\varepsilon}\) are simultaneously defined, we write \[u_{\Sigma,\varepsilon}\doteq\begin{cases}u^{+}_{\Sigma,\varepsilon}&\text{in }M^{+},\\ u^{-}_{\Sigma,\varepsilon}&\text{in }M^{-},\\ 0&\text{on }\Sigma,\end{cases}\] and we call \(u_{\Sigma,\varepsilon}\) the _broken \(\varepsilon\)-phase transition associated to \(\Sigma\)_. If \(\Sigma\) and \(\varepsilon\) are understood, we omit writing them. ### Preliminary results Suppose \(\Sigma\subset M\) is a smooth, closed, embedded minimal hypersurface. Suppose further that \(\Sigma\) is nondegenerate and has Morse index \(k\). In [10] B. White proved the following theorem. **Theorem 2.3** ([10], Theorem 4).: _With \(\Sigma\) as above, there exists a tubular neighborhood \(U\) of \(\Sigma\), and a smooth \(k\)-parameter family of hypersurfaces homologous to \(\Sigma\), \(\Sigma_{v},v\in B^{k}\), so that the following hold._ 1. \(\Sigma_{0}=\Sigma\)_._ 2. \(\sup_{v\in B^{k}}\mathcal{A}(\Sigma_{v})=\mathcal{A}(\Sigma)\)_._ 3. _If_ \(\tilde{\Sigma}_{v},v\in B^{k}\) _is any other smooth_ \(k\)_-parameter family of smooth, closed, embedded hypersurfaces in_ \(U\)_, homologous to_ \(\Sigma\)_, and_ \(\tilde{\Sigma}_{v}=\Sigma_{v}\) _for all_ \(v\in\partial B^{k}\)_, then_ (2.9) \[\sup_{v\in B^{k}}\mathcal{A}(\tilde{\Sigma}_{v})\geq\mathcal{A}(\Sigma),\] _with equality if and only if_ \(\tilde{\Sigma}_{v}=\Sigma\) _for some_ \(v\in B^{k}\)_._ In [11], J. Marx-Kuo computed an expansion of the solutions to 2.7 in \(\varepsilon\) to derive an asymptotic formula for solutions to the Dirichlet problem which makes clear the geometric dependency of solutions on the boundary of the domain. **Theorem 2.4** ([11], Theorem 1.5).: _Let \(u:\Omega\to\mathbf{R}\) be the solution to 2.7 afforded by 2.1, where \(\Sigma=\partial\Omega\). Then_ \[u(s,z)=\overline{\mathbf{H}}_{\varepsilon}(z)+\varepsilon H_{\Sigma}(s)\overline{\omega}_{\varepsilon}(z)+\varepsilon^{2}\bigg{(}(\mathrm{Ric}(\nu(s),\nu(s))+|A_{\Sigma}(s)|^{2})\overline{\tau}_{\varepsilon}(z)+H_{\Sigma}^{2}(s)\bigg{(}\overline{\rho}_{\varepsilon}(z)+\frac{1}{2}\overline{\kappa}_{\varepsilon}(z)\bigg{)}\bigg{)}+\phi_{0,\varepsilon}, \tag{2.10}\] _where the remainder term \(\phi_{0,\varepsilon}\) satisfies the estimate_ \[\|\phi_{0,\varepsilon}\|_{C^{2,\alpha}_{\varepsilon}(\Omega)}\leq C(\Sigma)\varepsilon^{3}. \tag{2.11}\] In [10], it was shown that the solutions \(u_{\Sigma}\) depend in a smooth way on the hypersurface \(\Sigma\). This allows one to take the first and second variations of the energies \(\mathcal{B}_{\varepsilon}\). We collect the following results.
**Theorem 2.5** ([10], Corollary 2.1).: _If \(\Sigma_{t}\) is a smooth \(1\)-parameter family of closed, \(2\)-sided, separating hypersurfaces for which \(\mathcal{B}_{\varepsilon}\) is defined, and the variational vector field \(X\) is given by \(f\nu\) for some \(f\in C^{\infty}(\Sigma_{0})\), then_ \[\frac{d}{dt}\bigg{|}_{t=0}\mathcal{B}_{\varepsilon}(\Sigma_{t})=\frac{\varepsilon}{2}\int_{\Sigma_{0}}f((u_{\nu}^{+})^{2}-(u_{\nu}^{-})^{2})=-2\sigma_{0}\int_{\Sigma_{0}}H_{\Sigma_{0}}f\,d\mathcal{H}^{n-1}+E(f), \tag{2.12}\] _where \(E(f)\leq C(\Sigma_{0})\varepsilon\)._ Since \(u_{t}\doteq u_{\Sigma_{t},\varepsilon}\) is a well defined family of functions in \(W^{1,2}(M)\), smooth in \(t\), we denote by \(\dot{u}_{t}\) its time derivative, which can be considered as a function in \(W^{1,2}(M)\) which, following [10], satisfies the boundary value problem \[\begin{cases}\ell_{\varepsilon,u_{t}}\dot{u}_{t}=0&\text{in }M_{t}^{+},\\ \dot{u}_{t}=-f\frac{\partial u_{t}}{\partial\nu_{t}}&\text{on }\Sigma_{t}.\end{cases} \tag{2.13}\] We have the following decomposition for the function \(\dot{u}_{t}\). **Proposition 2.6** ([10], Lemma 9.1).: _In Fermi coordinates adapted to \(\Sigma_{t}\), we may write_ \[\dot{u}_{t}^{\pm}(s,z)=-f(s)\frac{\partial u_{t}^{\pm}}{\partial\nu_{t}}\dot{\overline{\mathbf{H}}}_{\varepsilon}(z)+\phi_{1,\varepsilon}^{\pm}, \tag{2.14}\] _where \(\|\phi_{1,\varepsilon}^{\pm}\|_{W^{1,2}_{0,\varepsilon}(M^{\pm})}\leq C(\Sigma)\varepsilon\|f\|_{W^{1,2}(\Sigma)}\)._ For the second variation, the following was shown. **Theorem 2.7** ([13], Theorem 2.6).: _Let \(\Sigma\) be a critical point for \(\mathcal{B}_{\varepsilon}\). Then if \(\Sigma_{t}\) is any normal variation with variational vector field given by \(f\nu\) as above, then_ \[\left.\frac{d^{2}}{dt^{2}}\right|_{t=0}\mathcal{B}_{\varepsilon}(\Sigma_{t})=\varepsilon\int_{\Sigma}fu_{\nu}(\dot{u}_{\nu}^{+}-\dot{u}_{\nu}^{-})=2\sigma_{0}\int|\nabla f|^{2}-(\mathrm{Ric}(\nu,\nu)+|A_{\Sigma}|^{2})f^{2}\,d\mathcal{H}^{n-1}+\tilde{E}(f), \tag{2.15}\] _where \(\tilde{E}(f)\leq C(\Sigma)\varepsilon^{\frac{1}{2}}\|f\|_{H^{1}(\Sigma)}\)._ ### Main results We now state the main results of this paper. **Theorem 2.8**.: _Let \(\Sigma\subset(M,g)\) be a smooth, closed, embedded, \(2\)-sided, separating minimal hypersurface with Morse index equal to \(k\) and nullity equal to \(0\). Let \(\Sigma_{v},\,v\in B^{k}\) be the family afforded by 2.3, and write \(\mathcal{F}\doteq\{\tilde{\Sigma}_{v}\,|\,\tilde{\Sigma}_{v}=\Sigma_{v}\text{ for every }v\in\partial B^{k}\}\) for the set of all families of normal graphs over \(\Sigma\) which agree with the canonical family \(\Sigma_{v}\) on \(\partial B^{k}\). Then, for any tubular neighborhood of \(\Sigma\) contained in the one afforded by Theorem 2.3, and for any \(\delta>0\), there is \(\varepsilon_{0}>0\) so that for any \(\varepsilon<\varepsilon_{0}\),_ \[\inf_{\tilde{\Sigma}_{v}\in\mathcal{F}}\sup_{v\in B^{k}}\mathcal{B}_{\varepsilon}(\tilde{\Sigma}_{v})\geq\mathcal{B}_{\varepsilon}(\Sigma)-\delta.\] _Remark 2.9_.: Effectively, Theorem 2.8 says that any Allen-Cahn solution produced as a limit of broken phase transitions over surfaces which are small normal graphs over \(\Sigma\) must have Allen-Cahn energy which is controlled by the area of \(\Sigma\) from below. Arguing as in the proof of the classical mountain pass principle for solutions to semilinear PDEs, we can show a mountain pass principle for the \(\mathcal{B}_{\varepsilon}\) energies.
Let \(\mathcal{U}\) be a sufficiently small neighborhood of \(0\) in \(C^{k,\alpha}(\Sigma)\) (here we take \(k>5\)) so that the normal graph of any member of \(\mathcal{U}\) is contained in the tubular neighborhood \(U\) given by 2.8. **Theorem 2.10**.: _Consider a family of normal graphs parametrized by the boundary of the unit \(k\)-ball \(\{\Sigma_{v}\}_{v\in\partial B^{k}}\). Define \(\mathcal{P}^{L},c\) and \(d\) as follows._ \[\mathcal{P}^{L}\doteq\{p\in C(B^{k},\mathcal{U})\,|\text{ for every }v\in\partial B^{k},\Gamma(p(v))=\Sigma_{v},\mathcal{A}(\Gamma(p(v)))\leq L\}, \tag{2.16}\] \[c\doteq\max_{v\in\partial B^{k}}\mathcal{B}_{\varepsilon}(\Sigma_{v}), \tag{2.17}\] \[d\doteq\inf_{p\in\mathcal{P}^{L}}\sup_{v\in B^{k}}\mathcal{B}_{\varepsilon}(\Gamma(p(v))). \tag{2.18}\] _If \(d>c\), then there exists a smooth phase transition \(u\) with \(E_{\varepsilon}(u)=d\), and moreover the level set \(u^{-1}(0)\subset U\)._ By combining 2.3 and 2.10, we can then show **Corollary 2.11**.: _If \(\Sigma\) is a smooth, closed, embedded, separating, nondegenerate minimal hypersurface in a closed Riemannian manifold \(M\), then there exists a sequence of phase transitions \(u_{i}\) solving 1.2 for a sequence \(\varepsilon_{i}\to 0\) as \(i\to\infty\) with_ \[E_{\varepsilon_{i}}(u_{i})\to 2\sigma_{0}\mathcal{A}(\Sigma).\] _Moreover, \(u_{i}^{-1}(0)\) Hausdorff-converges to \(\Sigma\) as \(i\to\infty\)._ ## 3. First and second variations of energy The goal of this section is to compute the first and second variations of \(\mathcal{B}_{\varepsilon}\) at an arbitrary point, and to prove a result analogous to 2.15 for sufficiently small normal graphs over a minimal hypersurface. Let \(F_{t}:M\to M\) be a 1-parameter family of diffeomorphisms and set, * \(F_{0}=\operatorname{id}_{M}\), * \(\partial_{t}F_{t}\doteq X_{t}\). We write \(X\doteq X_{0}\) and call \(X\) the variational vector field of \(F_{t}\). * \(Z_{t}\doteq\nabla_{\partial_{t}}X_{t}\). We write \(Z\doteq Z_{0}\) and call \(Z_{t}\) the acceleration vector field for \(F_{t}\). _Remark 3.1_.: We can take \(X=f\nu\) for \(f\in C^{\infty}(\Sigma)\) and we can take \(Z_{t}\equiv 0\) for all sufficiently small \(t\) by assuming \(F_{t}\) is of the form \[F_{t}(s,z)=\exp_{s}(tf(s)\chi(z)\nu(s)), \tag{3.1}\] where \(\chi(z)\) is a suitably chosen cutoff function which is identically 1 in a neighborhood of \(z=0\). Write \(M_{t}^{\pm}\doteq F_{t}(M^{\pm})\) and \(\Sigma_{t}\doteq F_{t}(\Sigma)\). For small \(t\), we may compute the first and second variations of \(\mathcal{B}_{\varepsilon}(\Sigma_{t})\). For the sake of concision we omit the subscript \(t\). **Proposition 3.2**.: _Let \(u^{\pm}\) and \(\dot{u}^{\pm}\) be as above.
Then,_ \[\frac{d}{dt}\,\mathcal{B}_{\varepsilon}(\Sigma_{t})=-\frac{\varepsilon}{2}\int_{\Sigma_{t}}\left(\left(\frac{\partial u^{+}}{\partial\nu}\right)^{2}-\left(\frac{\partial u^{-}}{\partial\nu}\right)^{2}\right)\langle X,\nu\rangle d\mathcal{H}^{n-1}, \tag{3.2}\] _and_ \[\frac{d^{2}}{dt^{2}}\,\mathcal{B}_{\varepsilon}(\Sigma_{t})=-\varepsilon\int_{\Sigma_{t}}\left(\frac{\partial u^{+}}{\partial\nu}\frac{\partial\dot{u}^{+}}{\partial\nu}-\frac{\partial u^{-}}{\partial\nu}\frac{\partial\dot{u}^{-}}{\partial\nu}\right)\langle X,\nu\rangle+\left(\frac{\partial u^{+}}{\partial\nu}\frac{\partial^{2}u^{+}}{\partial\nu^{2}}-\frac{\partial u^{-}}{\partial\nu}\frac{\partial^{2}u^{-}}{\partial\nu^{2}}\right)\langle X,\nu\rangle^{2}-\frac{H_{\Sigma_{t}}}{2}\left(\left(\frac{\partial u^{+}}{\partial\nu}\right)^{2}-\left(\frac{\partial u^{-}}{\partial\nu}\right)^{2}\right)\langle X,\nu\rangle^{2}+\frac{1}{2}\left(\left(\frac{\partial u^{+}}{\partial\nu}\right)^{2}-\left(\frac{\partial u^{-}}{\partial\nu}\right)^{2}\right)\left(\langle Z,\nu\rangle+\langle X,\nabla_{\partial_{t}}\nu\rangle\right)d\mathcal{H}^{n-1}. \tag{3.3}\] Proof.: We first record that \[\frac{d}{dt}E_{\varepsilon}(u_{t})=\int_{\Sigma_{t}}\left(-\frac{\varepsilon}{2}\left(\frac{\partial u_{t}}{\partial\nu_{t}}\right)^{2}+\frac{1}{4\varepsilon}\right)\langle X_{t},\nu_{t}\rangle d\mathcal{H}^{n-1},\] where \(\nu_{t}\) is the outward normal of the domain in question; summing the contributions of \(M_{t}^{+}\) and \(M_{t}^{-}\), whose outward normals along \(\Sigma_{t}\) are opposite, the potential terms \(\frac{1}{4\varepsilon}=\frac{W(0)}{\varepsilon}\) cancel. And so, \[\frac{d}{dt}\,\mathcal{B}_{\varepsilon}(\Sigma_{t})=\frac{d}{dt}E_{\varepsilon}(u_{t}^{+};M_{t}^{+})+\frac{d}{dt}E_{\varepsilon}(u_{t}^{-};M_{t}^{-})=-\frac{\varepsilon}{2}\int_{\Sigma_{t}}\left(\left(\frac{\partial u_{t}^{+}}{\partial\nu_{t}}\right)^{2}-\left(\frac{\partial u_{t}^{-}}{\partial\nu_{t}}\right)^{2}\right)\langle X_{t},\nu_{t}\rangle d\mathcal{H}^{n-1}.\] To compute the second variation it suffices to compute \[I\doteq\frac{d}{dt}\int_{\Sigma_{t}}\left(\frac{\partial u_{t}^{+}}{\partial\nu_{t}}\right)^{2}\langle X_{t},\nu_{t}\rangle d\mathcal{H}^{n-1}.\] For the purposes of simplifying notation we write \(\Phi(x,t)=(\partial u_{t}^{+}/\partial\nu_{t})^{2}(x,t)\). We pull back by the variation \(F_{t}\) and compute. \[I=\int_{\Sigma}\left(\frac{d}{dt}\Phi(F_{t}(x),t)\right)\langle X_{t}(F_{t}(x)),\nu_{t}(F_{t}(x))\rangle F_{t}^{*}d\mathcal{H}^{n-1}(x)+\int_{\Sigma}\Phi(F_{t}(x),t)\left(\frac{d}{dt}\langle X_{t}(F_{t}(x)),\nu_{t}(F_{t}(x))\rangle\right)F_{t}^{*}d\mathcal{H}^{n-1}(x)+\int_{\Sigma}\Phi(F_{t}(x),t)\langle X_{t}(F_{t}(x)),\nu_{t}(F_{t}(x))\rangle\left(\frac{d}{dt}F_{t}^{*}d\mathcal{H}^{n-1}(x)\right)=I_{1}+I_{2}+I_{3}.\] The term \(I_{3}\) is the same as in the usual first variation of the area functional: \[I_{3}=-\int_{\Sigma_{t}}\Phi H_{\Sigma_{t}}\langle X_{t},\nu_{t}\rangle^{2}d\mathcal{H}^{n-1}.\] Dealing now with \(I_{1}\), we compute \[I_{1}=\int_{\Sigma_{t}}\left(\frac{\partial\Phi}{\partial t}+\langle\nabla^{M}\Phi,X_{t}\rangle\right)\langle X_{t},\nu_{t}\rangle d\mathcal{H}^{n-1}.\] Computing \(I_{2}\) gives \[I_{2}=\int_{\Sigma_{t}}\Phi\left(\langle\nabla_{\partial_{t}}X_{t},\nu_{t}\rangle+\langle X_{t},\nabla_{\partial_{t}}\nu_{t}\rangle\right)d\mathcal{H}^{n-1}=\int_{\Sigma_{t}}\Phi\left(\langle Z_{t},\nu_{t}\rangle+\langle X_{t},\nabla_{\partial_{t}}\nu_{t}\rangle\right)d\mathcal{H}^{n-1}.\] Putting all three parts together gives us \[I=I_{1}+I_{2}+I_{3}=\int_{\Sigma_{t}}\dot{\Phi}\langle X_{t},\nu_{t}\rangle+(\Phi_{\nu_{t}}-\Phi H_{\Sigma_{t}})\langle X_{t},\nu_{t}\rangle^{2}+\Phi(\langle Z_{t},\nu_{t}\rangle+\langle X_{t},\nabla_{\partial_{t}}\nu_{t}\rangle)d\mathcal{H}^{n-1};\] repeating the computation with \(u^{-}\) in place of \(u^{+}\) and combining with the prefactor \(-\frac{\varepsilon}{2}\) from 3.2, this is precisely 3.3. _Remark 3.3_.: By selecting our variation \(F_{t}\) as in 3.1, we see that the terms in the second variation involving \(Z\) and \(\dot{\nu}\) vanish for all small \(t\), and in this case we have simplified expressions for the first and second variation. **Corollary 3.4**.: _Let \(F_{t}\) be a variation such that \(X_{t}=f\nu_{t}\) and \(Z_{t}\equiv 0\) for \(t\) sufficiently small.
Then for such \(t\) we have,_ \[\frac{d}{dt}\,\mathcal{B}_{\varepsilon}(\Sigma_{t})=-\frac{\varepsilon}{2}\int_{\Sigma_{t}}f\left(\left(\frac{\partial u^{+}}{\partial\nu}\right)^{2}-\left(\frac{\partial u^{-}}{\partial\nu}\right)^{2}\right)d\mathcal{H}^{n-1}, \tag{3.4}\] _and_ \[\frac{d^{2}}{dt^{2}}\,\mathcal{B}_{\varepsilon}(\Sigma_{t})=-\varepsilon\int_{\Sigma}f\left(\frac{\partial u^{+}}{\partial\nu}\frac{\partial\dot{u}^{+}}{\partial\nu}-\frac{\partial u^{-}}{\partial\nu}\frac{\partial\dot{u}^{-}}{\partial\nu}\right)+f^{2}\left(\frac{\partial u^{+}}{\partial\nu}\frac{\partial^{2}u^{+}}{\partial\nu^{2}}-\frac{\partial u^{-}}{\partial\nu}\frac{\partial^{2}u^{-}}{\partial\nu^{2}}\right)-H_{\Sigma}f^{2}\frac{1}{2}\left(\left(\frac{\partial u^{+}}{\partial\nu}\right)^{2}-\left(\frac{\partial u^{-}}{\partial\nu}\right)^{2}\right)d\mathcal{H}^{n-1} \tag{3.5}\] ### Estimate for the second variation In this section, we use the asymptotic formulae 2.10, 2.14 for \(u^{\pm}\) and \(\dot{u}^{\pm}\) to expand the second variation formula 3.3. More precisely, we prove the following asymptotic formula for the second variation. **Theorem 3.5**.: _Let \(\Sigma_{t}\) be a \(1\)-parameter family of hypersurfaces as in Corollary 3.4. Then_ \[\frac{d^{2}}{dt^{2}}\,\mathcal{B}_{\varepsilon}(\Sigma_{t})=2\sigma_{0}\int_{\Sigma_{t}}|\nabla^{\Sigma_{t}}f|^{2}+(\mathrm{Ric}(\nu,\nu)+|A_{\Sigma_{t}}|^{2}-H_{\Sigma_{t}}^{2})f^{2}d\mathcal{H}^{n-1}+E(f) \tag{3.6}\] _where \(E(f)\leq C(\Sigma)\varepsilon\|f\|_{W^{1,2}(\Sigma)}\)._ Proof.: The proof is identical to the proof of Theorem 2.6 in [10] except in a few respects. The terms contributed by the variation of the volume form of \(\Sigma_{t}\) and the terms containing two normal derivatives of the Dirichlet phase transitions do not appear in the calculation of the second variation at a critical point, so we deal with them here. The term containing a normal derivative of \(\dot{u}\) is treated in exactly the same way, so we refer to the appendix of [10] for the proof. We work in Fermi coordinates \((s,z)\) adapted to \(\Sigma_{t}\). Recall, \[u_{\nu}^{+}(s,0)=\frac{1}{\varepsilon\sqrt{2}}-\frac{2}{3}H_{\Sigma}+O(\varepsilon), \tag{3.7}\] \[u_{\nu\nu}^{+}(s,0)=\frac{H_{\Sigma}}{\varepsilon\sqrt{2}}-\frac{2}{3}H_{\Sigma}^{2}+O(\varepsilon), \tag{3.8}\] \[\dot{u}_{\nu}^{+}(s,0)=\phi_{z}^{+}(s,0). \tag{3.9}\] Substituting 3.7, 3.8 and 3.9 into the first two terms in 3.5 gives the following integral terms. \[I_{1}\doteq-\varepsilon\int_{\Sigma_{t}}f\left(\frac{1}{\varepsilon\sqrt{2}}+O(1)\right)(\partial_{z}\phi^{+}-\partial_{z}\phi^{-})d\mathcal{H}^{n-1}, \tag{3.10}\] \[I_{2}\doteq-\varepsilon\int_{\Sigma_{t}}f^{2}\left(\frac{1}{\varepsilon\sqrt{2}}-\frac{2}{3}H_{\Sigma_{t}}+O(\varepsilon)\right)\left(\frac{H_{\Sigma_{t}}}{\varepsilon\sqrt{2}}-\frac{2}{3}H_{\Sigma_{t}}^{2}+O(\varepsilon)\right)+f^{2}\left(-\frac{1}{\varepsilon\sqrt{2}}+\frac{2}{3}H_{\Sigma_{t}}+O(\varepsilon)\right)\left(\frac{H_{\Sigma_{t}}}{\varepsilon\sqrt{2}}-\frac{2}{3}H_{\Sigma_{t}}^{2}+O(\varepsilon)\right)d\mathcal{H}^{n-1} \tag{3.11}\] The third term contributes \[\frac{\varepsilon}{2}\int_{\Sigma_{t}}H_{\Sigma_{t}}f^{2}\frac{1}{2}\left(\left(\frac{\partial u^{+}}{\partial\nu}\right)^{2}-\left(\frac{\partial u^{-}}{\partial\nu}\right)^{2}\right)d\mathcal{H}^{n-1}=2\sigma_{0}\int_{\Sigma_{t}}H_{\Sigma_{t}}^{2}f^{2}+C(\Sigma)\varepsilon\|f\|_{L^{\infty}}^{2}\] by 2.12.
The second summand in the integral \(I_{2}\) appears with a reversed sign owing to the fact that the expansion of \(u^{-}\) occurs with respect to the opposite normal vector, as in 2.10. It follows that this term contributes only an error term which is \(O(\varepsilon)\) as \(\varepsilon\to 0\). Finally, to deal with the term \(I_{1}\), we follow the proof of Theorem 2.6 in [10], only stating what differs in the current calculation. In the Fermi coordinates \((s,z)\), the Laplacian is \(\Delta_{g}=\partial_{z}^{2}-H_{z}\partial_{z}+\Delta_{z}\), where \(H_{z}\) and \(\Delta_{z}\) represent the mean curvature of the parallel hypersurface \(\{p\in M\,|\,\mathrm{d}(p,\Sigma_{t})=z\}\), \(\mathrm{d}\) the signed distance function, and the Laplacian along the parallel hypersurface of distance \(z\), respectively. Using the decomposition 2.14, it follows that \(\ell_{\varepsilon,u^{\pm}}(\phi^{\pm})=-\ell_{\varepsilon,u^{\pm}}(h\overline{\mathbf{H}}_{\varepsilon}^{\prime})\), where \(h=-f(s)u_{\nu}^{\pm}(s,0)\). Expanding this using Fermi coordinates gives \[\ell_{\varepsilon,u}\phi=-\varepsilon^{2}\Delta_{z}(h)\overline{\mathbf{H}}_{\varepsilon}^{\prime}+h\varepsilon H_{z}+(W^{\prime\prime}(u)-W^{\prime\prime}(\overline{\mathbf{H}}_{\varepsilon})+E)h\overline{\mathbf{H}}_{\varepsilon}^{\prime}\] Substituting this into the expression for each term of \(I_{1}\), \[\int_{\Sigma_{t}}f\phi_{z}^{+}=-\frac{\sqrt{2}}{\varepsilon^{2}}\int_{\Sigma_{t}}f\int_{0}^{-2\ell\log\varepsilon}\bigg{(}-\varepsilon^{2}\Delta_{z}(h)\overline{\mathbf{H}}_{\varepsilon}^{\prime 2}+\varepsilon H_{z}h\overline{\mathbf{H}}_{\varepsilon}^{\prime\prime}+(W^{\prime\prime}(u)-W^{\prime\prime}(\overline{\mathbf{H}}_{\varepsilon})+E)h\overline{\mathbf{H}}_{\varepsilon}^{\prime 2}-\sqrt{2}(\varepsilon^{2}\Delta_{z}(\phi^{+})-H_{z}\varepsilon^{2}\phi_{z}^{+}-(W^{\prime\prime}(u)-W^{\prime\prime}(\overline{\mathbf{H}}_{\varepsilon})+E)\phi^{+})\overline{\mathbf{H}}_{\varepsilon}^{\prime}\bigg{)}\sqrt{\det g(s,z)}\,dz\,ds \tag{3.12}\] That this term is equal to \[\int_{\Sigma_{t}}|\nabla f|^{2}-(\operatorname{Ric}(\nu,\nu)+|A_{\Sigma_{t}}|^{2})f^{2}d\mathcal{H}^{n-1}+E(f)\] where \(E(f)\leq C\varepsilon^{\frac{1}{2}}\|f\|_{W^{1,2}}\) is the subject of [13], Section 11.4. The only difference in our situation is that \(H_{z}\) is not necessarily small in \(\varepsilon\) and so cannot be absorbed into the remainder; instead we note that the terms which appear with \(H_{z}\) in the integral 3.12 appear with opposite contributions from \(\phi_{z}^{+}\) and \(\phi_{z}^{-}\), respectively. ## 4. The Strong min-max property We are ready to prove 2.8. Proof of Theorem 2.8.: We proceed by contradiction.
Suppose there exists a \(\delta>0\) so that for each \(i\) we can find a \(k\)-parameter family of surfaces \(\{\Sigma_{p}^{i}\,|\,p\in B^{k}\}\) in \(\mathcal{U}\) with \[\sup_{p\in B^{k}}\mathcal{B}_{\varepsilon_{i}}(\Sigma_{p}^{i})\leq\mathcal{B}_{\varepsilon_{i}}(\Sigma)-\delta.\] In particular, for each \(i\) we can find a function \(f_{i}\) so that the normal graph \(\Gamma(f_{i})\) satisfies \[\mathcal{B}_{\varepsilon_{i}}(\Gamma(f_{i}))\leq\mathcal{B}_{\varepsilon_{i}}(\Sigma)-\frac{\delta}{2}.\] Taylor expanding \(\mathcal{B}_{\varepsilon_{i}}\) around \(f_{i}\) and applying Corollary 2.1 in [13] gives \[\mathcal{B}_{\varepsilon_{i}}(\Gamma(f_{i}))=\mathcal{B}_{\varepsilon_{i}}(\Sigma)+R_{\varepsilon_{i}}+T_{\varepsilon_{i}}(f_{i}),\] where \(T_{\varepsilon_{i}}\) is the remainder term in the first order Taylor expansion and \[R_{\varepsilon_{i}}=-\int_{\Sigma}H_{\Sigma}f+E_{i}(f),\] with \(|E_{i}(f)|\leq C\varepsilon_{i}\), where \(C\) is a constant depending only on \(\Sigma\). Since \(\Sigma\) is a minimal surface, it follows that \(R_{\varepsilon_{i}}=O(\varepsilon_{i})\) as \(i\to\infty\). It follows from Taylor's theorem that \[T_{\varepsilon_{i}}(f_{i})=\frac{1}{2}\frac{d^{2}}{dt^{2}}\bigg{|}_{t=\xi_{i}}\,\mathcal{B}_{\varepsilon_{i}}(\Gamma(\xi_{i}f_{i})),\] where \(\xi_{i}\in(0,1)\). However, from Theorem 3.5, this is \[\sigma_{0}\frac{d^{2}}{dt^{2}}\bigg{|}_{t=\xi_{i}}\mathcal{A}(\Gamma(\xi_{i}f_{i}))+\tilde{R}_{\varepsilon_{i}}\] where \(\tilde{R}_{\varepsilon_{i}}\to 0\) as \(i\to\infty\). In particular, it follows that \(T_{\varepsilon_{i}}(f_{i})-2\sigma_{0}T(f_{i})\to 0\) as \(i\to\infty\), where \(T(f_{i})\) is the Taylor remainder term when expanding the area functional about \(f_{i}\). Therefore it follows that, for \(i\) sufficiently large, \[|\,\mathcal{B}_{\varepsilon_{i}}(\Gamma(f_{i}))-\mathcal{A}(\Gamma(f_{i}))|\leq\frac{\delta}{4}.\] So for any such \(i\) we have \[\sigma_{0}\sup_{p\in B^{k}}\mathcal{A}(\Sigma_{p}^{i})\leq\frac{\delta}{4}+\sup_{p\in B^{k}}\mathcal{B}_{\varepsilon_{i}}(\Sigma_{p}^{i})\leq\mathcal{B}_{\varepsilon_{i}}(\Sigma)-\frac{\delta}{4}.\] However, \(\mathcal{B}_{\varepsilon_{i}}(\Sigma)\to\mathcal{A}(\Sigma)\), so this implies that \[\sup_{p\in B^{k}}\mathcal{A}(\Sigma_{p}^{i})<\mathcal{A}(\Sigma),\] which is impossible. ## 5. A weak Palais-Smale condition Let \(\mathcal{S}^{k,\alpha}\) denote the space of all \(2\)-sided, closed, \(C^{k,\alpha}\)-hypersurfaces of \(M\) which are homologous to \(\Sigma\) (and moreover which are therefore boundaries). We wish to show the following Palais-Smale type compactness condition holds. **Proposition 5.1**.: _Let \(\{\Sigma_{n}\}\) be a sequence of hypersurfaces in \(\mathcal{S}^{k,\alpha}\) and suppose \(d>0\). Then if_ 1. \(\delta\,\mathcal{B}_{\varepsilon}(\Sigma_{n})\to 0\)_,_ 2. \(\mathcal{B}_{\varepsilon}(\Sigma_{n})\to d\)_,_ 3. \(\sup_{n}\mathrm{Area}(\Sigma_{n})<\infty\)_,_ _then, up to taking a subsequence, there exists a Caccioppoli set \(M^{+}\) for which \(M^{+}_{n}\to M^{+}\) and \(\Sigma_{\infty}\doteq\partial M^{+}\), and moreover there is a smooth phase transition \(u:M\to\mathbf{R}\) which vanishes on \(\Sigma_{\infty}\)._ _Remark 5.2_.: The boundary \(\Sigma_{\infty}\) need not be smooth, as there can be points along \(\Sigma_{\infty}\) where \(\nabla u=0\). However, the phase transition \(u\) will be smooth everywhere by standard elliptic regularity. Proof of Proposition 5.1.: Suppose \(u_{n}\) denotes the broken phase transition associated to the hypersurface \(\Sigma_{n}\).
Then it is easy to see that \(u_{n}\) satisfies \[E^{\prime}_{\varepsilon}(u_{n})\to 0, \tag{5.1}\] \[E_{\varepsilon}(u_{n})\to d. \tag{5.2}\] Therefore, by the classical Palais-Smale condition, there is a function \(u_{\infty}\) with \(u_{n}\xrightarrow[W^{1,2}]{}u_{\infty}\). It follows that \(u_{\infty}\) is a weak solution to the Allen-Cahn equation in \(W^{1,2}(M)\), and therefore by elliptic regularity it is a smooth phase transition. By 5.2, \(M^{+}\doteq\{u_{\infty}>0\}\) and \(M^{-}\doteq\{u_{\infty}<0\}\) are nonempty. We note that on \(M^{+}\), \(u_{\infty}\) is a positive Dirichlet minimizer for \[\begin{cases}\Delta_{g}u_{\infty}=\frac{W^{\prime}(u_{\infty})}{\varepsilon^{2}},&\text{in }M^{+},\\ u_{\infty}\equiv 0,&\text{on }\partial M^{+}.\end{cases}\] By the weak convergence of \(u_{n}\) to \(u_{\infty}\) in \(W^{1,2}\), it follows from Rellich's theorem that \(u_{n}\to u_{\infty}\) strongly in \(L^{2}\). Moreover, by compactness of \(M\), we may pass to a further subsequence with \(u_{n}\to u_{\infty}\) in \(L^{1}\).

_Claim 5.3_.: \(M^{+}_{n}\to M^{+}\).

Proof of Claim 5.3.: Suppose this is not the case. Then, possibly up to taking a subsequence, we may find a \(\delta>0\) so that \(|M^{+}_{n}\Delta M^{+}|>\delta\). However, the area bound on \(\Sigma_{n}\) implies that \(\{M^{+}_{n}\}\) is of uniformly bounded variation, so BV-compactness ([10], Theorem 12.26) implies that there is a measurable set \(M^{+}_{\infty}\) so that \[M^{+}_{n}\to M^{+}_{\infty}.\] It follows that \[|M^{+}_{\infty}\Delta M^{+}|\geq\delta.\] So there are a constant \(k_{0}<0\) and a set \(A\subset M^{+}_{\infty}\) of positive measure so that \[u_{\infty}\big{|}_{A}\leq k_{0}.\] However, \(u_{n}\geq 0\) along \(A\) by construction, and \(u_{n}\big{|}_{A}\underset{L^{1}}{\longrightarrow}u_{\infty}\big{|}_{A}\), which is a contradiction. The proof is now complete.

## 6. A mountain pass theorem

### A Deformation Lemma

In order to prove an appropriate mountain-pass type theorem, we need a variant of the usual deformation lemma [10]. The classical deformation lemma does not quite work because our Palais-Smale type condition requires a weaker notion of convergence, namely convergence as boundaries of Caccioppoli sets. For \(c\in\mathbf{R}\) we write \[A_{c}^{L}\doteq\{\Sigma\in\mathcal{S}^{k,\alpha}\,|\,\mathcal{B}_{\varepsilon}(\Sigma)\leq c,\ \mathcal{H}^{n-1}(\Sigma)\leq L\}, \tag{6.1}\] \[K_{c}\doteq\{\Omega\in\mathcal{C}(M)\,|\,\partial\Omega=u^{-1}(0),E_{\varepsilon}(u)=c,\delta E_{\varepsilon}(u)=0\}. \tag{6.2}\]

_Remark 6.1_.: Some care must be taken in the definition of the sets 6.1. As stated, \(\mathcal{B}_{\varepsilon}\) is defined only for those hypersurfaces which split \(M\) into two components for which the first eigenvalue satisfies the condition \(\lambda_{1}<\varepsilon^{2}\). We may still uniquely define \(\mathcal{B}_{\varepsilon}\) on all of \(\mathcal{S}^{k,\alpha}\) by taking the Dirichlet phase transition associated to a side of \(\tilde{\Sigma}\in\mathcal{S}^{k,\alpha}\) to be \(0\) if 2.1 admits no solution. This extended definition of \(\mathcal{B}_{\varepsilon}\) remains smooth on \(\mathcal{S}^{k,\alpha}\), but 3.5 is only true in the regime where the condition \(\lambda_{1}<\varepsilon^{2}\) holds on both sides of \(\tilde{\Sigma}\).

We wish to show that if \(K_{c}\) is empty, we can always deform \(A^{L}_{c+\eta}\) into \(A^{\theta L}_{c-\eta}\) for some appropriate \(\eta>0\) without running into a critical point.
**Lemma 6.2** (Deformation).: _Suppose \(c>0\) is such that \(K_{c}=\emptyset\). For each \(\eta>0\) taken sufficiently small, there are constants \(0<\bar{\eta}<\eta\) and \(\theta>0\) and a function \(F\in C([0,1]\times\mathcal{S}^{k,\alpha};\mathcal{S}^{k,\alpha})\) so that_

1. \(F(0,\Sigma)=\Sigma\)_,_
2. \(F(1,\Sigma)=\Sigma\) _for all_ \(\Sigma\not\in\mathcal{B}_{\varepsilon}^{-1}[c-\eta,c+\eta]\)_,_
3. \(\mathcal{B}_{\varepsilon}(F(t,\Sigma))\leq\mathcal{B}_{\varepsilon}(\Sigma)\)_,_
4. \(F(1,A_{c+\bar{\eta}}^{L})\subset A_{c-\bar{\eta}}^{\theta L}\)_._

Proof.: First we claim the following.

_Claim 6.3_.: There are constants \(0<\mu,\eta<1\) so that \[\|\delta\,\mathcal{B}_{\varepsilon}(\Sigma)\|\geq\mu\] whenever \(\Sigma\in A_{c+\eta}^{L}\backslash A_{c-\eta}^{L}\).

Proof of Claim 6.3.: Suppose this is not the case. Then there are sequences \(\mu_{k},\eta_{k}\to 0\) as \(k\to\infty\) and \(\Sigma_{k}\in A^{L}_{c+\eta_{k}}\setminus A^{L}_{c-\eta_{k}}\) with \[\|\delta\,\mathcal{B}_{\varepsilon}(\Sigma_{k})\|\leq\mu_{k}, \tag{6.3}\] \[\mathcal{B}_{\varepsilon}(\Sigma_{k})\to c. \tag{6.4}\] The Palais-Smale condition 5.1 then implies the existence of \(\Sigma_{\infty}\in K_{c}\), which contradicts our assumption that \(K_{c}=\emptyset\).

Now fix \(\bar{\eta}\) so that \[0<\bar{\eta}<\eta,\qquad 0<\bar{\eta}<\frac{\mu^{2}}{2}.\] We also set \[A\doteq\{\Sigma\in\mathcal{S}^{k,\alpha}\,|\,\mathcal{B}_{\varepsilon}(\Sigma)\leq c-\eta\text{ or }\mathcal{B}_{\varepsilon}(\Sigma)\geq c+\eta,\ \mathcal{H}^{n-1}(\Sigma)\leq L\},\] \[B\doteq\{\Sigma\in\mathcal{S}^{k,\alpha}\,|\,c-\bar{\eta}\leq\mathcal{B}_{\varepsilon}(\Sigma)\leq c+\bar{\eta},\ \mathcal{H}^{n-1}(\Sigma)\leq\theta L\}.\] We define the functions \(d:\mathcal{S}^{k,\alpha}\to\mathbf{R}\), \(h:\mathbf{R}^{+}\to\mathbf{R}\) by \[d(\Sigma)=\operatorname{dist}(\Sigma,A)(\operatorname{dist}(\Sigma,A)+\operatorname{dist}(\Sigma,B))^{-1},\] \[h(t)=\begin{cases}1,&0\leq t\leq 1,\\ 1/t,&t\geq 1.\end{cases}\] The function \(d\) satisfies \(0\leq d\leq 1\), \(d|_{A}\equiv 0\), \(d|_{B}\equiv 1\), and is readily seen to be Lipschitz continuous. Now we may define a vector field \(G:\mathcal{S}^{k,\alpha}\to T\mathcal{S}^{k,\alpha}\) by \[G(\Sigma)\doteq-d(\Sigma)h(\|\,\mathcal{B}_{\varepsilon}{}^{\prime}(\Sigma)\|)\,\mathcal{B}_{\varepsilon}{}^{\prime}(\Sigma).\] We observe that \(G\) is bounded and globally Lipschitz on \(\mathcal{S}^{k,\alpha}\). Therefore, by the general existence and uniqueness theory for ODEs on Banach spaces (see [10], Chapter IV), a unique solution exists for all \(t\geq 0\) to the equation \[\begin{cases}\dot{F}(t)=G(F(t)),\\ F(0)=\Sigma.\end{cases} \tag{6.5}\] Moreover, we may observe that, by the area formula, for any bounded subset of \(\mathcal{S}^{k,\alpha}\) there is a \(\theta>0\) so that if \(\Sigma\) satisfies \(\mathcal{H}^{n-1}(\Sigma)\leq L\) then \(\mathcal{H}^{n-1}(F(t,\Sigma))\leq\theta L\). We may now compute \[\frac{d}{dt}\,\mathcal{B}_{\varepsilon}(F(t,\Sigma))=\mathcal{B}_{\varepsilon}{}^{\prime}(F(t,\Sigma))\,\dot{F}(t,\Sigma)=-d(F(t,\Sigma))h(\|\,\mathcal{B}_{\varepsilon}{}^{\prime}(F(t,\Sigma))\|)\|\,\mathcal{B}_{\varepsilon}{}^{\prime}(F(t,\Sigma))\|^{2}\leq 0.\] Finally, suppose that \(\Sigma\in A^{L}_{c+\bar{\eta}}\). We wish to show that \(F(1,\Sigma)\in A^{\theta L}_{c-\bar{\eta}}\). Since we have just shown that deforming by \(F\) causes \(\mathcal{B}_{\varepsilon}\) to decrease, if \(F(t,\Sigma)\not\in B\) for some time \(t\) we are done, so we may assume that \(F(t,\Sigma)\in B\) for all \(t\in[0,1]\).
This implies that \(d(F(t,\Sigma))\equiv 1\) for \(0\leq t\leq 1\). As a result our previous calculation gives \[\frac{d}{dt}\,\mathcal{B}_{\varepsilon}(F(t,\Sigma))=-h(\|\,\mathcal{B}_{\varepsilon}{}^{\prime}(F(t,\Sigma))\|)\|\,\mathcal{B}_{\varepsilon}{}^{\prime}(F(t,\Sigma))\|^{2}.\] If \(\|\,\mathcal{B}_{\varepsilon}{}^{\prime}(F(t,\Sigma))\|\geq 1\) we consequently get \[\frac{d}{dt}\,\mathcal{B}_{\varepsilon}(F(t,\Sigma))=-\|\,\mathcal{B}_{\varepsilon}{}^{\prime}(F(t,\Sigma))\|\leq-\mu\leq-\mu^{2}.\] Otherwise, if \(\|\,\mathcal{B}_{\varepsilon}{}^{\prime}(F(t,\Sigma))\|\leq 1\), then by the definition of \(h\) we get \[\frac{d}{dt}\,\mathcal{B}_{\varepsilon}(F(t,\Sigma))\leq-\mu^{2}.\] Either way, integrating from \(t=0\) to \(t=1\) we find \[\mathcal{B}_{\varepsilon}(F(1,\Sigma))\leq\mathcal{B}_{\varepsilon}(\Sigma)-\mu^{2}\leq c+\bar{\eta}-\mu^{2}\leq c-\bar{\eta},\] where the last inequality uses \(\bar{\eta}<\mu^{2}/2\). This completes the proof.

### \(1\)-parameter mountain pass

With the deformation lemma in hand, we are now ready to prove a mountain pass type result. For simplicity, we present the \(1\)-parameter version.

**Theorem 6.4** (Mountain Pass Theorem).: _Let \(\Sigma_{0},\Sigma_{1}\in\mathcal{S}^{k,\alpha}\) be distinct. Define \(\mathcal{P}^{L},c,d\) by_

\[\mathcal{P}^{L}\doteq\{p\in C([0,1],\mathcal{U})\,|\,p(0)=\Sigma_{0},p(1)=\Sigma_{1},\mathcal{H}^{n-1}(p(t))\leq L\}, \tag{6.6}\] \[c\doteq\max\{\mathcal{B}_{\varepsilon}(\Sigma_{0}),\mathcal{B}_{\varepsilon}(\Sigma_{1})\}, \tag{6.7}\] \[d\doteq\inf_{p\in\mathcal{P}^{L}}\max_{t\in[0,1]}\mathcal{B}_{\varepsilon}(p(t)). \tag{6.8}\]

_Then, if \(d>c\), \(K_{d}\neq\emptyset\)._

Proof.: Suppose that \(K_{d}\) is empty. Choosing a sufficiently small number \(0<\eta<d-c\) we can apply our deformation lemma to find positive constants \(0<\bar{\eta}<\eta\), \(0<\theta<1\) and a homeomorphism \(F:\mathcal{U}\to\mathcal{U}\) satisfying \[F(A^{L}_{d+\bar{\eta}})\subset A^{\theta L}_{d-\bar{\eta}}, \tag{6.9}\] \[F(\Sigma)=\Sigma\quad\text{if }\Sigma\not\in\mathcal{B}_{\varepsilon}{}^{-1}[d-\eta,d+\eta]. \tag{6.10}\] Take some path \(a\in\mathcal{P}^{L}\) satisfying \[\max_{t\in[0,1]}\mathcal{B}_{\varepsilon}(a(t))\leq d+\bar{\eta}.\] Then \(\bar{a}\doteq F(a)\in\mathcal{P}^{\theta L}\subset\mathcal{P}^{L}\). The deformation lemma now implies that \[\max_{t\in[0,1]}\mathcal{B}_{\varepsilon}(\bar{a}(t))\leq d-\bar{\eta},\] a contradiction.

The proof of the \(k\)-parameter version, 2.10, is identical. We now prove Corollary 2.11 as a consequence of the mountain-pass construction.

Proof of Corollary 2.11.: Given a nondegenerate minimal hypersurface \(\Sigma\) with Morse index \(k\), we may write \[L_{\Sigma}=\Delta_{\Sigma}+(\operatorname{Ric}(\nu,\nu)+|A_{\Sigma}|^{2})\] for its stability operator. Since \(\Sigma\) is nondegenerate, the eigenvalues of \(L_{\Sigma}\) satisfy \[\lambda_{1}<\lambda_{2}\leq\dots\leq\lambda_{k}<0<\lambda_{k+1}\leq\dots.\] Let \(\varphi_{1},\dots,\varphi_{k}\) be the \(k\) eigenfunctions which correspond to the negative eigenvalues of \(L_{\Sigma}\). Let \(\eta>0\) be a constant chosen sufficiently small that each normal graph \(\Gamma(\eta\varphi_{j})\) is contained in the tubular neighborhood \(N(\frac{1}{n})\) of height \(\frac{1}{n}\) about \(\Sigma\). The smooth \(k\)-parameter family \(F:B^{k}\to\mathcal{S}^{k,\alpha}\) given by \(F:(v_{1},\dots,v_{k})\mapsto\Gamma(\eta(v_{1}\varphi_{1}+\dots+v_{k}\varphi_{k}))\) is precisely the family of hypersurfaces satisfying (2) in 2.3. As in the proof of Theorem 2.8, we have \(\mathcal{B}_{\varepsilon}(F(v))\to\mathcal{A}(F(v))\) uniformly in \(v\) as \(\varepsilon\to 0\). Let \(\delta<\frac{1}{n}\).
Then applying 2.8 to \(N(\frac{1}{n})\) we find an \(\tilde{\varepsilon}_{n}\) so that for all \(\varepsilon<\tilde{\varepsilon}_{n}\) \[\inf_{\tilde{\Sigma}_{v}}\sup_{v}\mathcal{B}_{\varepsilon}(\tilde{\Sigma}_{v})\geq\mathcal{B}_{\varepsilon}(\Sigma)-\frac{1}{n},\] where \(\tilde{\Sigma}_{v}\) ranges over all families which agree with the canonical family \(F(v)\) on \(\partial B^{k}\). Now by uniformity of the convergence as \(\varepsilon\to 0\) we may simply select \(\varepsilon_{n}<\tilde{\varepsilon}_{n}\) sufficiently small that \[\max_{v\in\partial B^{k}}\mathcal{B}_{\varepsilon}(F(v))<\mathcal{B}_{\varepsilon}(\Sigma)-\frac{1}{n}\] for every \(\varepsilon\leq\varepsilon_{n}\). Therefore by 2.10, there is a smooth phase transition \(u_{\varepsilon}\), \(\varepsilon<\varepsilon_{n}\), with \[E_{\varepsilon}(u_{\varepsilon})=\inf_{\tilde{\Sigma}_{v}}\sup_{v}\mathcal{B}_{\varepsilon}(\tilde{\Sigma}_{v}),\] and moreover \(u_{\varepsilon}^{-1}(0)\subset\overline{N(\frac{1}{n})}\). Now the result follows, since \(E_{\varepsilon}(u_{\varepsilon})\to 2\sigma_{0}\mathcal{A}(\Sigma)\) by construction.

## Appendix A Smooth dependence on the domain.

Let \(k>2\) be a given integer, and suppose that \(\Omega\subset(M,g)\) is a domain with \(C^{k,\alpha}\) boundary (i.e., the inclusion map \(i_{\Omega}\in C^{k,\alpha}(\Omega;M)\)) which supports a positive Dirichlet phase transition vanishing on the boundary. Take a neighborhood \(U\subset C^{k,\alpha}(\Omega;M)\) consisting only of \(C^{k,\alpha}\) embeddings of \(\Omega\). Suppose that \(F\in U\) and let \(u_{F}\) be the unique positive solution, afforded by [1], to the problem \[\begin{cases}\varepsilon^{2}\Delta_{g}u=W^{\prime}(u)&\text{in }F(\Omega),\\ u\equiv 0&\text{on }\partial F(\Omega).\end{cases}\] In [11] it was shown that the dependence \(F\mapsto u_{F}\) is smooth, provided the neighborhood \(U\) is taken sufficiently small. Let \(F_{t}\) be a smooth one-parameter family of diffeomorphisms with \(F_{0}=\operatorname{id}_{M}\), \(\partial_{t}F_{t}\doteq X_{t}\) and \(\nabla_{\partial_{t}}X_{t}=0\). Moreover, set \(X\doteq X_{0}\) for the variation vector field. It follows that \(t\mapsto u_{t}\doteq u_{F_{t}}\) is a smooth one-parameter family of maps. It is a straightforward computation to show that the time-derivative \(\dot{u}_{t}\) satisfies the boundary value problem: \[\begin{cases}\varepsilon^{2}\Delta_{g}\dot{u}_{t}=W^{\prime\prime}(u_{t})\dot{u}_{t}&\text{ in }\Omega_{t},\\ \dot{u}_{t}=-\langle X_{t},\nabla^{M}u_{t}\rangle&\text{ on }\partial\Omega_{t}.\end{cases} \tag{A.1}\]
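For the reader's convenience, the following is a short formal derivation of (A.1); this sketch is ours and only makes explicit the straightforward computation alluded to above, assuming enough regularity to differentiate in \(t\).

```latex
% Interior: differentiate the equation eps^2 \Delta_g u_t = W'(u_t) in t:
\varepsilon^{2}\Delta_{g}\dot{u}_{t}=W^{\prime\prime}(u_{t})\,\dot{u}_{t}
  \quad\text{in }\Omega_{t}.
% Boundary: u_t \circ F_t \equiv 0 on \partial\Omega, so the chain rule gives
0=\frac{d}{dt}\left(u_{t}\circ F_{t}\right)
 =\bigl(\dot{u}_{t}+\langle X_{t},\nabla^{M}u_{t}\rangle\bigr)\circ F_{t}
  \quad\text{on }\partial\Omega,
% i.e. \dot{u}_t = -\langle X_t, \nabla^M u_t \rangle on \partial\Omega_t, as in (A.1).
```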
2302.13681
The (ab)use of Open Source Code to Train Large Language Models
In recent years, Large Language Models (LLMs) have gained significant popularity due to their ability to generate human-like text and their potential applications in various fields, such as Software Engineering. LLMs for Code are commonly trained on large unsanitized corpora of source code scraped from the Internet. The content of these datasets is memorized and emitted by the models, often in a verbatim manner. In this work, we will discuss the security, privacy, and licensing implications of memorization. We argue why the use of copyleft code to train LLMs is a legal and ethical dilemma. Finally, we provide four actionable recommendations to address this issue.
Ali Al-Kaswan, Maliheh Izadi
2023-02-27T11:34:53Z
http://arxiv.org/abs/2302.13681v2
# The (ab)use of Open Source Code to Train Large Language Models

###### Abstract

In recent years, Large Language Models (LLMs) have gained significant popularity due to their ability to generate human-like text and their potential applications in various fields, such as Software Engineering. LLMs for Code are commonly trained on large unsanitized corpora of source code scraped from the Internet. The content of these datasets is memorized and emitted by the models, often in a verbatim manner. In this work, we will discuss the security, privacy, and licensing implications of memorization. We argue why the use of copyleft code to train LLMs is a legal and ethical dilemma. Finally, we provide four actionable recommendations to address this issue.

## I Language Models for Code

Large Language Models (LLMs) have gained significant attention in the field of Natural Language Processing (NLP) in recent years due to their ability to perform a wide range of NLP tasks with impressive accuracy. These models, trained on massive amounts of data, improve in accuracy as they grow from millions to billions of parameters. LLMs for code are trained on massive amounts of data and can learn the structure and syntax of programming languages, making them well-suited for tasks such as code summarization, generation, and completion [1, 2]. LLMs are even making their way into commercial products like GitHub's Copilot, Replit's Ghostwriter and Tabnine. Meanwhile, some have identified that LLMs can memorize large swaths of training data [3]. Memorization enables the extraction of the data using Data Extraction Attacks. Some attacks have even been able to extract addresses and other personal information from public models [3]. Memorization also impacts LLMs for code, with all its associated consequences. We will discuss these consequences in three categories: security, privacy and licensing.

## II Security Implications

Text memorization has strong security implications. Firstly, massively mined code datasets are not sanitized or manually curated; the datasets could therefore contain many biases,1 as well as instances of badly written, buggy, or insecure code. A recent study found that around 40% of GitHub Copilot's code generations for MITRE's top 25 Common Weakness Enumerations, a list of the most dangerous software weaknesses, were vulnerable [4]. If these models become more prevalent and trusted, they can introduce more vulnerable code into software.

Footnote 1: Does GPT-2 Know Your Phone Number?: [http://archive.is/LxsyA](http://archive.is/LxsyA)

## III Privacy Implications

Memorization enables adversaries to access training data, and everything contained within, simply by accessing the model. This has major privacy implications since code can contain private information. Think of credentials, API keys, directory structures, logged info, or in-code discussions by developers. Code can also contain personal information like emails or contact information. If personal data is published on the Internet, the data could be retracted and deleted from the source. But once it is mined and used to train an LLM, the information is forever embedded in a compact representation, which is queryable at scale. With query access to these models, an adversary can potentially extract this data [3] and threaten Internet users' privacy. There are many reasons why private information could end up publicly shared: (1) simply by accident, or (2) a malicious actor could share this information in a doxing campaign [3].
Even if the data is published willingly, the owner has a certain use and audience in mind and might not wish to share this information with the entire world. This is referred to as the re-purposed data problem.1

Footnote 2: Matrix Transpose: [http://archive.is/YU5BI](http://archive.is/YU5BI)

## IV Licensing

Publicly available source code is also subject to licences, some of which heavily regulate the use of the material. Initially, developers raised concerns about licensed code on social media. GitHub Copilot could be prompted to produce verbatim copies of copyrighted code, without providing the required attribution or licence terms.3 Similarly, Copilot was producing copyrighted code while attributing the wrong author and providing the wrong license.4 Later, a lawsuit was filed against GitHub, Microsoft and OpenAI, claiming that Copilot is violating the licence of open-source code.5

Footnote 3: Fast Inverse Square: [http://archive.is/HNiyg](http://archive.is/HNiyg)

Footnote 5: GitHub Complaint (p.26): [http://archive.is/3PFAs](http://archive.is/3PFAs)

Broadly, open-source code is licensed under two types of licences. **Permissive licenses** allow users to use, modify, and distribute the software for any purpose, without requiring that the user share their work. **Non-permissive licenses**, also known as "copyleft" licenses, require that users freely share their own software under the same licence if they distribute the software or any _derivatives_ of it. Creating closed or commercial software based on non-permissively licensed code is unethical and possibly even illegal [5]. But this does raise the following question: **Does training LLMs on copyleft code infringe on their license?**

Firstly, we must determine how many LLMs for code are trained on copyleft code. Looking at some of the most popular code models, we can observe that the vast majority are trained on open-source code. CodeBERT and CodeT5 are trained on CodeSearchNet, which contains copyleft code. We also found that CodeBERT, CodeGen, and CodeClippy make use of The Pile, a collection of \(22\) datasets, one of which is a GitHub dataset containing copyleft data. We found that only InCoder makes an effort to prevent training on copyleft code. InCoder does, however, make use of a dataset of StackOverflow answers, which are licensed under varying CC-BY-SA licences, all of which require attribution.5

Footnote 5: StackOverflow license: [http://archive.is/obaoy](http://archive.is/obaoy)

Despite the public attention, it is not completely clear whether Codex, the model behind Copilot, is trained on non-permissive code. Many imply that it is,6 citing the copying of copyleft code and the fact that the system has encountered a copy of the GPL licence many times during training. The training data for Codex is not publicly available, and neither OpenAI nor GitHub has provided any clarification.7

Footnote 6: Comment on Copilot and OSS: [http://archive.is/6gEOU](http://archive.is/6gEOU)

LLMs for code can be seen as derivatives of their training data. So unless the model is published under the same licence as the training data and includes the copyright notice, this would be a clear violation. Moreover, many licences are not inter-compatible, i.e., the inclusion of code licensed under them automatically constitutes an infringement, as the combined licence agreements contain irreconcilable conditions.
Some opponents of these claims,8 including OpenAI, argue that the use of public code is an instance of transformative fair use, which is a defence that allows the use of copyrighted works in new and unexpected ways and exists in many jurisdictions including the US.8 Yet, it is still unclear whether the fair use defence applies to ML-systems,4 as it has not yet been tested in court. Furthermore, the fair use argument is sometimes based on the assumption that models do not memorize and emit training data, which is false. Even if the fair use argument protects the use of the data, the verbatim outputting of training data might not be protected.

Footnote 8: If Software is My Copilot, Who Programmed My Software?: Link

Footnote 8: GitHub Copilot is not infringing your copyright: [http://archive.is/PYlm5](http://archive.is/PYlm5)

A moral argument can also be made on this issue. Training LLMs on copyleft code goes against the will of some open-source developers, who share their code for the betterment of society and who believe in the principle of free and open software so profoundly that they are willing to add a full legal clause to their work to perpetuate this ideal. The use of their work, without attribution, especially by commercial parties, is not what they had in mind. Finally, some researchers have also proposed a different approach to this issue by letting the authors of open-source code take matters into their own hands. Using data poisoning techniques, the authors can degrade the performance of models trained on their code and embed watermarks into the models [5].

## V Discussion and Recommendations

To conclude, we recommend the following:

* The ML community should carefully consider the licence of their training material, from both a legal and an ethical point of view. The authors of published LLMs should be transparent about the licences of their training material.
* More research should be conducted on the nature and proportionality of text memorization in LLMs for code and LLMs in general. Other topics include memorized text extraction and prevention.
* Lawmakers should clarify whether the use of copyleft code (and copyrighted materials in general) to train LLMs constitutes fair use, and under which conditions this defence applies.
* Finally, the software engineering community should clarify its stance on this issue. Developers could then make informed decisions and clearly denote whether their source code may be used to train AI models.

LLMs for code are likely to stay and bring new tools that change the way software is engineered. So the community needs to answer important questions on this matter. For instance, should open-source code be allowed for training these models? If so, should the developers be credited and compensated, and under which license should the models be released? Alternatively, do we need to revise current code licenses to clarify the community's stance?9

Footnote 9: Additional Reading Material: Link to our GitHub Repository
2305.18773
On a neural network approach for solving potential control problem of the semiclassical Schrödinger equation
Robust control design for quantum systems is a challenging and key task for practical technology. In this work, we apply neural networks to learn the control problem for the semiclassical Schr\"odinger equation, where the control variable is the potential given by an external field that may contain uncertainties. Inspired by a relevant work [29], we incorporate the sampling-based learning process into the training of networks, while combining with the fast time-splitting spectral method for the Schr\"odinger equation in the semiclassical regime. The numerical results have shown the efficiency and accuracy of our proposed deep learning approach.
Yating Wang, Liu Liu
2023-05-30T06:14:20Z
http://arxiv.org/abs/2305.18773v1
On a neural network approach for solving potential control problem of the semiclassical Schrodinger equation

###### Abstract

Robust control design for quantum systems is a challenging and key task for practical technology. In this work, we apply neural networks to learn the control problem for the semiclassical Schrodinger equation, where the control variable is the potential given by an external field that may contain uncertainties. Inspired by a relevant work [29], we incorporate the sampling-based learning process into the training of networks, while combining with the fast time-splitting spectral method for the Schrodinger equation in the semiclassical regime. The numerical results have shown the efficiency and accuracy of our proposed deep learning approach.

## 1 Introduction

Control of quantum phenomena has been an important scientific problem in the emerging quantum technology [16]. The control of quantum electronic states in physical systems has a variety of applications such as quantum computers [4], control of photochemical processes [38] and semiconductor lasers [18]. Detailed overviews of the quantum control field can be found in survey papers and monographs [15; 43]. A central issue in controllability theory [35] is to assess the ability to steer a quantum system from an arbitrary initial state to a targeted final state, under the influence of a control field such as a potential function, given possibly noisy observation data.

Uncertainty Quantification (UQ) has drawn much attention over the past decade. In simulating physical systems, which are often modeled by differential equations, there are inevitably modeling errors and imprecise measurements of the initial data or background coefficients, which may bring uncertainties to the models. In this project, we study the semiclassical Schrodinger equation with an external potential that may contain uncertainties and is treated as the control variable. Let \(\Omega\) be a bounded domain in \(\mathbb{R}\). The Schrodinger equation in the semiclassical regime is described by a wave function \(\psi:\Omega\times(0,T)\mapsto\mathbb{C}\),

\[\left\{\begin{array}{l}i\varepsilon\partial_{t}\psi^{\varepsilon}=-\frac{\varepsilon^{2}}{2}\Delta\psi^{\varepsilon}+V(x,\boldsymbol{z})\psi^{\varepsilon},\qquad(x,t)\in\Omega\times(0,T),\\ \psi|_{t=0}=\psi_{0}(x),\qquad x\in\Omega\subset\mathbb{R},\end{array}\right. \tag{1.1}\]

where \(0<\varepsilon\ll 1\) is the scaled Planck constant describing the microscopic and macroscopic scale ratio. Here the solution \(\psi=\psi(t,x,\boldsymbol{z})\) is the electron wave function with initial condition \(\psi_{0}(x)\);
The primary physical quantities of interests include position density, \[n^{\varepsilon}=|\psi^{\varepsilon}|^{2}, \tag{1.3}\] and current density \[J^{\varepsilon}=\varepsilon\operatorname{Im}\left(\overline{\psi^{\varepsilon }}\nabla\psi^{\varepsilon}\right)=\frac{1}{2i}\left(\overline{\psi^{ \varepsilon}}\nabla\psi^{\varepsilon}-\psi^{\varepsilon}\nabla\overline{\psi^ {\varepsilon}}\right). \tag{1.4}\] At each fixed \(\mathbf{z}\), with \(V\) being continuous and bounded, the Hamiltonian operator \(H^{\varepsilon}\) defined by \[H^{\varepsilon}\psi^{\varepsilon}=-\frac{\varepsilon^{2}}{2}\Delta\psi^{ \varepsilon}+V(x,\mathbf{z})\psi^{\varepsilon}\] maps functions in \(H^{2}(\mathbb{R}^{d})\) to \(L^{2}(\mathbb{R}^{d})\) and is self-adjoint. The operator \(\frac{1}{i\varepsilon}H^{\varepsilon}\) generates a unitary, strongly continuous semi-group on \(L^{2}(\mathbb{R}^{d})\), which guarantees a unique solution of the Schrodinger equation (1.1) that lie in the space [39]: \[W(0,T):=\left\{\phi\in L^{2}((0,T);H^{1}_{0}(\Omega;\mathbb{C}))\Big{|}\frac{ d\phi}{dt}\in L^{2}((0,T);H^{-1}(\Omega;\mathbb{C}))\right\}.\] As a literature review, we mention that there has been several work [2, 27] on boundary control for the Schrodinger equation (1.1), where the observation is taken from the Dirichlet or Neumann boundary data. In some references such as [7], the authors consider the quantum system with evolution of its state \(|\psi(t)\rangle\) described by the Schrodinger equation \(\frac{d}{dt}|\psi(t)\rangle=-iH(t)|\psi(t)\rangle\) with the initial condition \(|\psi(0)\rangle=|\psi_{0}\rangle\). The Hamiltonian \(H(t)\) there corresponds to a time-dependent control variable that contains random parameters. Their goal is to drive the quantum ensemble from an initial state \(|\psi_{0}\rangle\) to the target state \(|\psi_{\text{target}}\rangle\), by employing a gradient-based learning method to optimize the control field. In [5, Section 7.3], the control problem of a charged particle in a well potential was formulated, where in their setting the potential field is time-dependent. We mention some other relevant work on stability estimates and semiclassical limit of inverse problem for the Schrodinger equation [3, 8, 17, 26, 39]. We continue to mention several studies that are related to the inverse problems for the Schrodinger equation or other models. For relevant inverse boundary value problems on this topic, there are existing iterative methods applied to the Helmholtz equation [31], where one starts with an initial guess of the boundary condition, then adjusts it iteratively by minimizing functionals such as error norms between the calculated data and measured data. This could be extremely time-consuming since at each iteration step, a forward problem needs to be solved. In the partial boundary data situation, there has been research on studying the linearized inverse problem of recovering potential function for the time-independent Schrodinger equation [47]. Moreover, for inverse potential problems, well-posedness of the continuous regularized formulation was analyzed in both elliptic and parabolic problems, with conditional stability estimates and error analysis for the discrete scheme studied in [10, 22]. The desired control problem can be described as the following: To which extend can the wave solution \(\psi^{\varepsilon}\) of (1.1) be perturbed by the control field-in our case the potential function \(V\), in order to reach the desired target state at the final time \(T\)? 
The above question can be reformulated into an _optimal control_ problem. At the final time \(T\), given the target state \(\psi_{\text{target}}\), let \(V\) be approximated by a neural network parameterized by \(\boldsymbol{\theta}\), and let \(\lambda>0\) be a regularization coefficient; we aim to solve the following minimization problem:

\[\left\{\begin{array}{l}\min_{\boldsymbol{\theta}}J_{\lambda}(V(\boldsymbol{\theta}))=\min_{\boldsymbol{\theta}}||\psi^{\varepsilon}(x,T;\boldsymbol{\theta})-\psi_{\text{\it target}}||^{2}_{L^{2}(\Omega)}+\lambda\,||V(x;\boldsymbol{\theta})||^{2}_{L^{2}(\Omega)},\\ \text{such that }\ \ \ \ i\varepsilon\partial_{t}\psi^{\varepsilon}(x,t;\boldsymbol{\theta})=-\frac{\varepsilon^{2}}{2}\Delta\psi^{\varepsilon}(x,t;\boldsymbol{\theta})+V(x;\boldsymbol{\theta})\psi^{\varepsilon}(x,t;\boldsymbol{\theta}),\\ \psi^{\varepsilon}(x,t=0;\boldsymbol{\theta})=\psi_{0}(x),\end{array}\right. \tag{1.5}\]

if \(V\) is a deterministic potential, and

\[\left\{\begin{array}{l}\min_{\boldsymbol{\theta}}J_{\lambda}(V(\boldsymbol{\theta}))=\min_{\boldsymbol{\theta}}||\psi^{\varepsilon}(x,T;\boldsymbol{\theta},\boldsymbol{z})-\psi_{\text{\it target}}(\boldsymbol{z})||^{2}_{L^{2}(\Omega\times I_{\boldsymbol{z}})}+\lambda\,||V(x;\boldsymbol{\theta},\boldsymbol{z})||^{2}_{L^{2}(\Omega\times I_{\boldsymbol{z}})},\\ \text{such that }\ \ \ \ i\varepsilon\partial_{t}\psi^{\varepsilon}(x,t;\boldsymbol{\theta},\boldsymbol{z})=-\frac{\varepsilon^{2}}{2}\Delta\psi^{\varepsilon}(x,t;\boldsymbol{\theta},\boldsymbol{z})+V(x;\boldsymbol{\theta},\boldsymbol{z})\psi^{\varepsilon}(x,t;\boldsymbol{\theta},\boldsymbol{z}),\\ \psi^{\varepsilon}(x,t=0;\boldsymbol{\theta},\boldsymbol{z})=\psi_{0}(x;\boldsymbol{z}),\end{array}\right. \tag{1.6}\]

if the potential \(V\) contains uncertainty and the random variable is \(\boldsymbol{z}\). In each particular problem setting, the discretized form of the above loss function will be presented.

We now highlight the main contributions of our work:

1. We take advantage of the rising trend of machine learning and use neural networks to approximate the control variable, namely the potential field in the Schrodinger equation. Both deterministic and stochastic control functions are considered. A fully-connected neural network is used for the deterministic problem, and the DeepONet [30] is applied in the stochastic case.
2. During the training process, the Schrodinger equation in the semiclassical regime is solved using the fast time-splitting spectral method to improve the computational efficiency and accuracy of our algorithm.
3. We study and compare the cases when the observation data is noisy and when it is noise-free, and propose different training strategies. For data without noise, the popular stochastic gradient descent (SGD) method is used. For noisy data, we consider a Bayesian framework and adopt the stochastic gradient Markov chain Monte Carlo (MCMC) approach to obtain robust learning results.

The rest of the paper is organized as follows. In Section 2, we discuss the oscillatory behavior of the solution to the semiclassical Schrodinger equation in the random variable and mention the numerical challenges even for forward UQ problems. Our main methodology of using learning-based techniques to solve the optimization problem (1.6) is proposed in Section 3, with the numerical scheme for the forward problem introduced in subsection 3.1 and several neural network approaches described in subsection 3.2.
We conduct extensive numerical experiments for both the deterministic and stochastic potential control problems and present the results in Section 4. Conclusions and future work are addressed last.

## 2 Regularity of solution in the random space

The semiclassical Schrodinger equation is a family of dispersive wave equations parameterized by \(\varepsilon\ll 1\); it is well known that the wave function propagates \(O(\varepsilon)\)-scaled oscillations in space and time. However, for UQ problems it is not obvious whether the small parameter \(\varepsilon\) induces oscillations in the random variable \(\mathbf{z}\). We conduct a regularity analysis of \(\psi\), which enables us to study the oscillatory behavior of the solution in the random space. To investigate the regularity of the wave function in the \(\mathbf{z}\) variable, we check the following averaged norm \[||\psi||_{\Gamma}:=\left(\int_{I_{z}}\int_{\mathbb{R}^{3}}|\psi(t,\mathbf{x},\mathbf{z})|^{2}\ d\mathbf{x}\pi(\mathbf{z})d\mathbf{z}\right)^{1/2}. \tag{2.1}\] First, observe that \(\forall\,\mathbf{z}\in I_{\mathbf{z}}\), \[\frac{\partial}{\partial t}\|\psi^{\varepsilon}\|_{L^{2}_{\mathbf{x}}}^{2}(t,\mathbf{z})=0,\] thus \[\frac{d}{dt}\|\psi^{\varepsilon}\|_{\Gamma}^{2}=0,\] which indicates that the \(\Gamma\)-norm of the wave function \(\psi^{\varepsilon}\) is conserved in time: \(\|\psi^{\varepsilon}\|_{\Gamma}(t)=\|\psi^{\varepsilon}_{\mathrm{in}}\|_{\Gamma}\). Below we show that \(\psi^{\varepsilon}\) has \(\varepsilon\)-scaled oscillations in \(\mathbf{z}\). As an example, we analyze the first-order partial derivative of \(\psi^{\varepsilon}\) in \(z_{1}\) and denote \(\psi^{1}=\psi^{\varepsilon}_{z_{1}}\) and \(V^{1}=V_{z_{1}}\). By differentiating the semiclassical Schrodinger equation (1.1) with respect to \(z_{1}\), one gets \[i\varepsilon\psi^{1}_{t}=-\frac{\varepsilon^{2}}{2}\Delta_{\mathbf{x}}\psi^{1}+V^{1}\psi^{\varepsilon}+V\psi^{1}.\] Direct calculation leads to \[\frac{d}{dt}\|\psi^{1}\|_{\Gamma}^{2} =\int\bigl{(}\psi^{1}_{t}\bar{\psi}^{1}+\psi^{1}\bar{\psi}^{1}_{t}\bigr{)}\pi d\mathbf{x}d\mathbf{z}\] \[=\int\bigl{(}\frac{1}{i\varepsilon}V^{1}\psi^{\varepsilon}\bar{\psi}^{1}-\frac{1}{i\varepsilon}V^{1}\psi^{1}\bar{\psi}^{\varepsilon}\bigr{)}\pi d\mathbf{x}d\mathbf{z}\] \[\leq\frac{2}{\varepsilon}\|\psi^{1}\|_{\Gamma}\,\|V^{1}\psi^{\varepsilon}\|_{\Gamma}\,,\] where we use the Cauchy-Schwarz inequality and the Jensen inequality in the last step, namely \[\int V^{1}\psi^{\varepsilon}\bar{\psi}^{1}dx\leq\left(\int|V^{1}\psi^{\varepsilon}|^{2}dx\right)^{1/2}\left(\int|\psi^{1}|^{2}dx\right)^{1/2},\]
If one directly adopts the generalized polynomial chaos (gPC)-based Galerkin methods or stochastic collocation methods [44] to the semi-classical Schrodinger equation with random parameters, \(\varepsilon\)-dependent basis functions or quadrature points are needed to get an accurate approximation. There has been some work developed for this forward problem [11; 23], where in our inverse problem case shares the similar difficulty. In the future work, to more efficiently sample from the random space, we will adopt numerical solvers that can resolve the \(\varepsilon\)-oscillations in the random variable. For simplicity of notations, we will omit the superscript \(\varepsilon\) in \(\psi^{\varepsilon}\) and use \(\psi\) in the rest of the paper. ## 3 Optimal control using neural networks ### The time-splitting spectral method In the semiclassical regime where \(\varepsilon\ll 1\), the solution to the Schrodinger equation (1.1) is oscillatory both temporally and spatially, with an oscillation frequency of \(O(1/\varepsilon)\). This poses tremendous computational challenges since one needs to numerically resolve, both spatially and temporally, the small wave length of \(O(\varepsilon)\). The time-splitting spectral (TSSP) method, studied by Bao, Jin and Markowich in [1], is one of the most popular and highly accurate methods for such problems, where the meshing strategy \(\Delta t=O(\varepsilon)\) and \(\Delta x=O(\varepsilon)\) is required for moderate values of \(\varepsilon\). Moreover, in order to just compute accurately the physical observables (such as position density, flux, and energy), one still needs to resolve the spatial oscillations, but the time step \(\Delta t=o(1)\) is much more relaxed [1; 20; 24]. Recently a rigorous uniform in \(\varepsilon\) error estimate was obtained in [19], by using errors measured by a pseudo-metric in analogy to the Wasserstein distance between a quantum density operator and a classical density in phase space, with the regularity requirement for \(V\) being \(V\in C^{1,1}\). In this section, we review the first-order time-splitting spectral method studied in [1, Section 2]. Consider an one-dimensional spatial variable and a given potential \(V(x)\). We choose the spatial mesh size \(h=(b-a)/M\) for an even integer \(M\), and the time step \(k=\Delta t\), let the grid points and time step be \[x_{j}:=a+jh,\qquad t_{n}:=nk,\qquad j=0,1,\cdots,M,\quad n=0,1,2,\cdots.\] For the time discretization, from \(t=t_{n}\) to \(t=t_{n+1}\), the Schrodinger equation (1.1) is solved in the following two steps. First, one solves \[\varepsilon\psi_{t}-i\frac{\varepsilon^{2}}{2}\psi_{xx}=0, \tag{3.1}\] then \[\varepsilon\psi_{t}+iV(x)\psi=0, \tag{3.2}\] in the second step. We discretize (3.1) in space by the spectral method, then integrate in time _exactly_. Note that the ODE (3.2) can be solved exactly. Denote \(\Psi^{n}_{j}\) by the numerical approximation of the analytic solution \(\psi(t_{n},x_{j})\) to the Schrodinger equation (1.1). 
Then the discretized scheme is given by \[\begin{split}&\Psi^{*}_{j}=\frac{1}{M}\sum_{l=-M/2}^{M/2-1}e^{-i\varepsilon k\mu_{l}^{2}/2}\,\hat{\Psi}^{n}_{l}\,e^{i\mu_{l}(x_{j}-a)},\qquad j=0,1,2,\cdots,M-1,\\ &\Psi^{n+1}_{j}=e^{-iV(x_{j})k/\varepsilon}\Psi^{*}_{j},\end{split} \tag{3.3}\] where the Fourier coefficients of \(\Psi^{n}\) are defined as \[\hat{\Psi}^{n}_{l}=\sum_{j=0}^{M-1}\Psi^{n}_{j}\,e^{-i\mu_{l}(x_{j}-a)},\qquad\mu_{l}=\frac{2\pi l}{b-a},\quad l=-\frac{M}{2},\cdots,\frac{M}{2}-1,\] with \[\Psi^{0}_{j}=\psi(0,x_{j}),\quad j=0,1,2,\cdots,M.\]

We remark that instead of directly simulating the semiclassical Schrodinger equation, there are quite a few other methods which are valid in the limit \(\varepsilon\to 0\); see [25] for a general discussion. In particular, many wave-packet based methods have been introduced in the past few years, which reduce the full quantum dynamics to Gaussian wave-packet dynamics [21]. In this work, we simply adopt the TSSP method as our deterministic solver in the learning algorithm.

### 3.2 Learning method for the control problem

Thanks to their nonlinear structure, deep neural networks have shown great potential in approximating high-dimensional functions and overcoming the curse of dimensionality. In recent years, deep learning has gained great success in solving high-dimensional PDEs, in both forward and inverse problem settings [34; 45]. There have been studies that suggested learning-based methods for solving general control problems, such as [14; 41]. Recently, in [32] the authors proposed SympOCnet to solve high-dimensional optimal control problems with state constraints. The idea is to apply the Symplectic network, which can approximate arbitrary symplectic transformations, to perform a change of variables in the phase space and solve the forward Hamiltonian equation in the new coordinate system.

In our work, we consider the control problem for the semiclassical Schrodinger equation and adopt neural networks to approximate the control field \(V\) that may contain uncertainties. The neural-network-parameterized potential function is learned by minimizing the discrepancy between the state solution of the system driven by the network's potential and the observation of the target state. In this section, we will describe the neural network structures under two different problem settings: (i) the deterministic case, where the underlying target potential is fixed; (ii) the stochastic case, where the target potential is parameterized by some random variables. In both problems, we will validate the efficiency of our proposed method by using both clean and noisy training data.

#### 3.2.1 Deterministic problem

In the deterministic problem, our goal is to learn a single target function \(V(x)\) using the neural network. In this case, the input of the neural network is the spatial variable \(\{x_{k}\}\), while the output is the value of the potential function at \(x_{k}\), i.e., \(\{V(x_{k})\}\), \(k=1,\cdots,M\). We will use \(5\) fully connected layers with \(50\) neurons per layer to build up the network. For the data points, assume the spatial domain \(\Omega\subset\mathbb{R}\) and temporal domain \([0,T]\); \(N\) equally spaced points in \(\Omega\) (where \(N\ll M\)) are taken, and the measurement data are the corresponding numerical solutions of the wave function at time \(T\). This implies that the data pairs are chosen as \((x_{i},\psi_{\text{obs}}(x_{i}))\) for \(i=1,\cdots,N\), with \(\psi_{\text{obs}}(x_{i})\sim\mathcal{N}(\psi(x_{i}),\sigma^{2})\).
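As a concrete reference for the splitting scheme (3.3), which also underlies Algorithm 1 below, the following is a minimal NumPy sketch of one TSSP step; the function name and the FFT-based implementation are ours rather than taken from [1], but the update follows the scheme and notation above.

```python
import numpy as np

def tssp_step(psi, V, k, eps, a, b):
    """One first-order time-splitting spectral step, cf. (3.3).

    psi : complex array of length M, wave function at the grid x_j = a + j*h
    V   : real array of length M, potential evaluated at the same grid
    k   : time step; eps : scaled Planck constant; [a, b] : periodic domain
    """
    M = psi.size
    # Fourier modes mu_l = 2*pi*l/(b - a), l = -M/2, ..., M/2 - 1,
    # in the ordering used by numpy's FFT.
    mu = 2.0 * np.pi * np.fft.fftfreq(M, d=(b - a) / M)
    # Kinetic step (3.1): solved exactly in Fourier space.
    psi_star = np.fft.ifft(np.exp(-1j * eps * k * mu**2 / 2.0) * np.fft.fft(psi))
    # Potential step (3.2): an exact pointwise phase multiplication.
    return np.exp(-1j * V * k / eps) * psi_star
```

Evolving to a final time \(T\) amounts to iterating this map \(T/k\) times; since both substeps are integrated exactly in time, the only time-discretization error comes from the operator splitting.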
In our numerical examples, we set \(N=50\) and \(M=1000\). An illustration of the network for the deterministic problem is presented in Figure 1. As noticed from Figure 1, the input-output pairs for the fully connected neural network are \((x_{i},V(x_{i}))\). The output of the neural network, i.e., the potential function, is then used to solve the forward Schrodinger equation by adopting the time-splitting spectral method. The predicted solution obtained at the final time step, \(\psi(x;T)\), is then compared with the measurement data \(\psi_{\text{obs}}(x;T)\). The mismatch between the predicted solution and the measurement data forms the loss function. A pseudocode is presented in Algorithm 1.

Figure 1: Illustration of the network for the deterministic problem.

```
Require: Neural network input \(\{x_{i}\}_{i=1}^{M}\). Observation data \(\{\psi_{\mathrm{obs}}(x_{j},T)\}_{j=1}^{N}\). Initialization of neural network parameters \(\mathbf{\theta}_{0}\).
1: for \(k\gets 0:\#iterations\) do
2:   Get the output of the neural network \(\{V(x_{i};\mathbf{\theta}_{k})\}_{i=1}^{M}\).
3:   Given \(V(x;\mathbf{\theta}_{k})\), solve equation (1.1) by the time-splitting spectral method and get the solution \(\psi(x,T;\mathbf{\theta}_{k})\).
4:   Compute the mismatch between \(\psi_{\mathrm{obs}}(x,T)\) and \(\psi(x,T;\mathbf{\theta}_{k})\), and get the loss.
5:   Use an SGD-type or SGLD method to update the network parameters and get \(\mathbf{\theta}_{k+1}\).
6: end for
Ensure: The solution of (1.1), \(\psi(x_{j},t_{m})\), at all spatial locations and all time steps of interest.
```
**Algorithm 1** Deterministic case

#### 3.2.2 Stochastic problem

In the stochastic problem, our goal is to learn a set of functions described by a stochastic potential function \(V(x;z)\) containing a random parameter \(z\), by training the DNN. We will utilize the DeepONet architecture developed in [30]. First we give a brief overview of DeepONet, which is a powerful tool designed to learn continuous nonlinear operators. Denote by \(G\) an operator with input function \(u\); for any coordinate \(y\) in the domain of \(G(u)\), the output \(G(u)(y)\) is a number. DeepONet aims to approximate \(G\) with a neural network \(G_{\mathbf{\theta}}\) parameterized by \(\mathbf{\theta}\), which takes inputs \((u,y)\) and returns the output \(G(u)(y)\). The architecture of DeepONet is composed of a branch net and a trunk net. In the unstacked setting, the branch net encodes the discrete input function \(u\) into the features represented by \([b_{1},\cdots,b_{q}]\), and the trunk net takes the coordinate \(y\) as input and encodes it into the features represented by \([t_{1},\cdots,t_{q}]\). Then the dot product of \(\mathbf{b}\) and \(\mathbf{t}\) provides the final output of DeepONet, i.e., \[G_{\mathbf{\theta}}(u)(y)=\sum_{k=1}^{q}b_{k}(u(x_{1}),\cdots,u(x_{N}))t_{k}(y).\] The parameter \(\mathbf{\theta}\) consists of all weights and biases in the branch and trunk nets. In our setting, we aim to approximate the parameterized potentials \(V(x;z)\) using \(G_{\mathbf{\theta}}\), which takes the discrete data \([\psi_{\mathrm{obs}}(x_{1};z),\cdots,\psi_{\mathrm{obs}}(x_{N};z)]\) and the coordinate \(y_{k}\) as inputs. Here \(k=1,\cdots,M\). We note that for each \(z\), there are \(N\) sensors that provide the observation data \(\psi_{\mathrm{obs}}(\cdot,z)\); thus the dataset size is equal to the product of \(M\) and the number of \(z\) samples. The value of \(G_{\mathbf{\theta}}(\psi_{\mathrm{obs}}(\cdot;z))(y_{k})\) is a prediction of \(V(y_{k};z)\).
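To make the branch/trunk structure concrete, here is a minimal PyTorch sketch of an unstacked DeepONet; the layer widths, activations, and the packing of the observations into one input vector are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Unstacked DeepONet: G_theta(u)(y) = sum_k b_k(u) * t_k(y)."""

    def __init__(self, n_inputs, q=50, width=50):
        super().__init__()
        # Branch net: encodes the discretized input function u(x_1), ..., u(x_N)
        self.branch = nn.Sequential(
            nn.Linear(n_inputs, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, q),
        )
        # Trunk net: encodes the evaluation coordinate y
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, q),
        )

    def forward(self, u, y):
        # u: (batch, n_inputs), y: (batch, 1); output: (batch,) values G_theta(u)(y)
        return (self.branch(u) * self.trunk(y)).sum(dim=-1)
```

In the setting above, `u` would hold the observed values of \(\psi_{\mathrm{obs}}\) at the \(N\) sensors for a given \(z\) (e.g., real and imaginary parts stacked), `y` a spatial point \(y_{k}\), and the output a prediction of \(V(y_{k};z)\).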
Utilizing the predictions from the DeepONet, namely \(V(y_{k};z)\) (\(k=1,\cdots,M\)), the time-splitting spectral method is then applied to compute the values of the wave function \(\psi(y_{k},z)\). We aim to minimize the mismatch between the observations \(\psi_{\mathrm{obs}}(x_{j},z)\) and the numerical solutions \(\psi(x_{j},z)\) at all sensor locations \(x_{j}\) for all \(z\). An illustration of the network for the stochastic problem is presented in Figure 2. A pseudocode is presented in Algorithm 2.

Figure 2: Illustration of the network for the stochastic problem.

```
Require: Neural network input \(\{\psi_{\text{obs}}(x_{j},t_{m};z_{s})\}_{j=1}^{N}\) for some stochastic samples \(z_{s}\), at a few time instances \(t_{m}\), as well as spatial points \(\{y_{k}\}_{k=1}^{M}\). Observation data \(\{\psi_{\text{obs}}(x_{j},T;z_{s})\}_{j=1}^{N}\). Initialization of neural network parameters \(\mathbf{\theta}_{0}\).
1: for \(k\gets 0:\#iterations\) do
2:   Get the output of the neural network \(\{V(y_{k};z_{s};\mathbf{\theta}_{k})\}_{k=1}^{M}\).
3:   For each \(V(x;z_{s};\mathbf{\theta}_{k})\), solve equation (1.1) by the time-splitting spectral method and get the solutions \(\psi(x,t;\mathbf{\theta}_{k})\) at all spatial points and time instances.
4:   Compute the mismatch between \(\{\psi_{\text{obs}}(x_{j},T;z_{s})\}_{j=1}^{N}\) and \(\psi(x_{j},T;z_{s};\mathbf{\theta}_{k})\) (at the observational spatial and temporal points) over all samples of \(z_{s}\), and get the loss.
5:   Use the SGLD method to update the network parameters and get \(\mathbf{\theta}_{k+1}\).
6: end for
Ensure: For each \(z_{s}\), the solution of (1.1), \(\psi(x_{j},t_{m};z_{s})\), at all spatial locations and all time steps of interest.
```
**Algorithm 2** Stochastic case

#### 3.2.3 Training of the neural network

When dealing with large-scale problems, traditional Bayesian inference methods, e.g., Markov chain Monte Carlo (MCMC) [37], have shown disadvantages due to the extremely expensive computational cost of handling the whole dataset at each iteration. To tackle problems with large datasets, deep learning algorithms such as stochastic gradient descent (SGD) [36] are favorable and have been popularly used, since one only needs to employ a small subset of samples randomly selected from the whole dataset at each iteration. To bring together the advantages of these two types of methods, Welling and Teh [42] first proposed the stochastic gradient Langevin dynamics (SGLD) (also known as stochastic gradient MCMC) method. It adds a suitable amount of noise to the standard SGD and uses mini-batches to approximate the gradient of the loss function. With the help of a decreasing training step size \(\eta_{k}\), it has proven powerful and provides a transition between optimization and Bayesian posterior sampling [6].

We now briefly review the SGLD method. Let \(D=\{d_{i}\}_{i=1}^{N}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\) be a given dataset, where \(\mathbf{x}_{i}\) is the input and \(\mathbf{y}_{i}\) is the corresponding noisy output. We let \(\mathcal{NN}\) be a neural network parameterized by \(\boldsymbol{\theta}\); the goal of its training is to find suitable parameters \(\boldsymbol{\theta}\) such that \(F(\mathcal{NN}(\mathbf{x}_{i};\boldsymbol{\theta}))\approx\mathbf{y}_{i}\) (\(i=1,\cdots,N\)). Due to the noise in the measurement data, we assume the parameters are associated with uncertainties and obey a prior distribution \(p(\boldsymbol{\theta})\).
The uncertainties in the parameters \(\boldsymbol{\theta}\) can be captured through Bayesian inference to avoid overfitting. Let \(d^{j}\) be a mini-batch of data of size \(n\); the likelihood can be written as \[p(d^{j}|\boldsymbol{\theta})=\frac{1}{(2\pi\sigma^{2})^{n/2}}\exp\Big{\{}-\frac{\sum\limits_{\mathbf{x}_{i}^{j}\in d^{j}}(\mathbf{y}_{i}^{j}-F(\mathcal{NN}(\mathbf{x}_{i}^{j};\boldsymbol{\theta})))^{2}}{2\sigma^{2}}\Big{\}},\] where \(\sigma\) is the standard deviation of the Gaussian likelihood. In our case, for the dataset \(d^{j}=(\mathbf{x}_{i}^{j},\mathbf{y}_{i}^{j})\), \(\mathbf{x}_{i}^{j}\) corresponds to the input \([\psi_{\text{obs}}(x_{1};z),\cdots,\psi_{\text{obs}}(x_{N};z),y]\), \(\mathbf{y}_{i}^{j}\) corresponds to the labels \([\psi_{\text{obs}}(x_{1};z),\cdots,\psi_{\text{obs}}(x_{N};z)]\), and \(F\) maps the neural network output \(\mathcal{NN}(\mathbf{x}_{i};\boldsymbol{\theta})\), which approximates \(V(y,z)\), to the quantities of interest \(\psi(y;z;T)\), with \(T\) the final simulation time. According to Bayes' theorem, the posterior distribution of \(\boldsymbol{\theta}\), given the data \(D\), then follows \(p(\boldsymbol{\theta}|D)\propto p(\boldsymbol{\theta})\prod_{i=1}^{N}p(d_{i}|\boldsymbol{\theta})\). To sample from the posterior, one efficient proposal algorithm is to use the gradient of the target distribution. Let \(\eta_{k}\) be the learning rate at epoch \(k\) and \(\tau>0\) be the inverse temperature; the parameters are then updated by SGLD based on the following rule: \[\boldsymbol{\theta}_{k+1}=\boldsymbol{\theta}_{k}+\eta_{k}\nabla_{\boldsymbol{\theta}}\tilde{L}(\boldsymbol{\theta}_{k})+\mathcal{N}(0,2\eta_{k}\tau^{-1}).\] Here, for a subset of \(n\) data points \(d^{j}=\{d_{1}^{j},\cdots,d_{n}^{j}\}\), \[\nabla_{\boldsymbol{\theta}}\tilde{L}(\boldsymbol{\theta})=\nabla_{\boldsymbol{\theta}}\log p(\boldsymbol{\theta})+\frac{N}{n}\sum_{i=1}^{n}\nabla_{\boldsymbol{\theta}}\log p(d_{i}^{j}|\boldsymbol{\theta})\] is the stochastic gradient computed by using a minibatch, which approximates the true gradient of the loss function \(\nabla_{\boldsymbol{\theta}}L(\boldsymbol{\theta})\). However, if the components of the network parameters \(\boldsymbol{\theta}\) have different scales, the invariant probability distribution for the Langevin equation is not isotropic. If one still uses a uniform learning rate in each direction, this may lead to slow mixing [9, 12, 13, 28, 40, 46]. To incorporate the geometric information of the target posterior, stochastic gradient Riemannian Langevin dynamics (SGRLD) [33] generalizes SGLD to a Riemannian manifold. Consider the probability model on a Riemannian manifold with some metric tensor \(P^{-1}(\mathbf{\theta})\); in SGRLD, the parameter is updated at the \(k\)-th iteration by the following rule: \[\mathbf{\theta}_{k+1}=\mathbf{\theta}_{k}+\eta_{k}\left[P(\mathbf{\theta}_{k})\nabla_{\mathbf{\theta}}\tilde{L}(\mathbf{\theta}_{k})+\Gamma(\mathbf{\theta}_{k})\right]+\mathcal{N}(0,2\eta_{k}\tau^{-1}P(\mathbf{\theta}_{k})) \tag{3.4}\] where \(\Gamma_{i}(\mathbf{\theta}_{k})=\sum_{j}\dfrac{\partial P_{ij}(\mathbf{\theta}_{k})}{\partial\theta_{j}}\).
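For concreteness, the following is a minimal PyTorch sketch of the basic SGLD update above, i.e., (3.4) with \(P\equiv I\) and \(\Gamma\equiv 0\); the preconditioned variant described next additionally rescales the drift and the noise covariance by \(P(\boldsymbol{\theta}_{k})\). The function name and interface are illustrative.

```python
import torch

def sgld_step(params, grads_log_post, eta, tau=1.0):
    """One SGLD update: theta <- theta + eta * grad + N(0, 2 * eta / tau).

    grads_log_post : mini-batch estimate of
        grad log p(theta) + (N/n) * sum_i grad log p(d_i | theta)
    eta : (decreasing) step size; tau : inverse temperature
    """
    with torch.no_grad():
        for p, g in zip(params, grads_log_post):
            noise = torch.randn_like(p) * (2.0 * eta / tau) ** 0.5
            p.add_(eta * g + noise)
```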
One popular and computationally efficient approach to approximate \(P(\mathbf{\theta}_{k})\) is to use a diagonal preconditioning matrix [28; 40], that is, \[P(\mathbf{\theta}_{k})=diag^{-1}(\lambda+\sqrt{V(\mathbf{\theta}_{k})}), \tag{3.5}\] \[V(\mathbf{\theta}_{k})=(1-\omega_{k})V(\mathbf{\theta}_{k-1})+\omega_{k}g(\mathbf{\theta}_{k})\circ g(\mathbf{\theta}_{k}), \tag{3.6}\] where \(\lambda\) is a regularization constant, \(g(\mathbf{\theta}_{k})=\nabla_{\mathbf{\theta}}\tilde{L}(\mathbf{\theta}_{k})\) is the stochastic gradient, the operator \(\circ\) denotes elementwise multiplication, and \(\omega_{k}\in(0,1)\) is a weight parameter used in the moving average \(V(\mathbf{\theta}_{k})\). In our framework, we will use the preconditioned SGLD to train the network parameters.

## 4 Numerical results

In our numerical experiments, we consider two types of potential functions: the deterministic and the stochastic potential. In the deterministic case, the potential \(V\) is only spatially dependent. In the stochastic problem, the potential function \(V(\cdot,z)\) is assumed to depend on a random parameter characterized by \(z\). In particular, we consider a simple example with \(V(x,z)=(1+0.5z)x^{2}\), where \(z\) is a random variable following the uniform distribution on \([-1,1]\).

### 4.1 Test I: A Deterministic Potential

In the first problem setup, we take the potential function to be \(V(x)=x^{2}\). The network architecture introduced in Section 3.2.1 is adopted, and we train the network by using the standard SGD and the SGLD studied in Section 3.2.3. For the observation data, we take the electron wave function \(\psi\) obtained by solving the Schrodinger equation (1.1) at several spatial locations and time instances using the forward TSSP solver, given the reference potential function \(V\). We first consider the case where there is no noise in the observation data and apply both SGD and SGLD for training. The numerical results show that the wave function \(\psi\) obtained from the network under both training algorithms matches the observation data well, while it is also noticeable that SGLD gives a slightly better approximation of the potential function. When some noise is added to the observation data, one can apply SGLD to train the network in order to more accurately capture the uncertainties in the target potential function.

#### 4.1.1 \(V(x)=x^{2}\), no noise in the observation and by SGD

In this case, we let the reference potential function be \(V(x)=x^{2}\); the observation data is clean and without noise interference, and the SGD method is used in our training algorithm. In the forward solver, the spatial mesh size is \(\pi/250\) and the temporal mesh size is \(6.25\times 10^{-4}\). The learning rate is \(10^{-4}\) and the total number of training epochs is \(20000\). In Figure 3 (a), a comparison between the reference and the predicted potential function obtained from the neural network is shown. We observe that there is some mismatch in the region where \(x>0\), while the underlying reason remains to be investigated. In Figure 3 (b)-(c), a comparison of the reference with the predicted position density \(n^{\varepsilon}\) and the wave function \(\psi^{\varepsilon}\) (real and imaginary parts) is presented.
We conclude that the predicted wave and density functions at the final time \(T\), which are computed by solving the Schrodinger equation (1.1) under the neural network's predicted potential, provide good approximations to the solution quantities obtained by using the true potential \(V(x)=x^{2}\) in the TSSP solver. Figure 3: Test I case 1: \(V(x)=x^{2}\), without noise in the observation and by SGD. (a) True and predicted value of the potential function. (b) True and predicted value of the position density at time \(T\). (c) True and predicted value of the wave function at time \(T\). #### 4.1.2 \(V(x)=x^{2}\), no noise in the observation and by SGLD In the second case, the problem setup is the same as the previous case, while we apply the SGLD algorithm to train the neural network. In the forward solver, the spatial mesh size is \(\pi/1000\) and the temporal mesh size is \(3\times 10^{-3}\). The learning rate is \(10^{-5}\) and the total training epoch is 10000. A comparison between the reference and predicted potential function is shown in Figure 4 (a). According to the nature of SGLD, we collect samples of the neural network's parameters during the training process, then compute the mean and standard deviation of the output potential functions (at each spatial point) obtained by using those parameter samples. The blue dashed line represents the mean of the predicted potential \(V\), and the confidence interval is depicted by the shaded blue area in Figure 4 (a). Based on these two tests, we observe that SGLD provides more reliable results compared to the standard SGD, and the uncertainty in the prediction is negligible since the data is clean. In Figure 4 (b)-(c), we again present a comparison between the reference and the predicted wave function \(\psi^{\varepsilon}\) and position density \(n^{\varepsilon}\), computed by the TSSP solver using the predicted mean value of the potential. Similar to the previous test, the predicted wave and density provide quite good approximations to the true data, i.e., the numerical solution at final time \(T\) obtained by using the true potential \(V(x)=x^{2}\) in the TSSP solver. Figure 4: Test I case 2: \(V(x)=x^{2}\), without noise in the observation and by SGLD. (a) True and predictions of the potential function. (b) True and predictions of the position density at time \(T\). (c) True and predictions of the wave function at time \(T\). #### 4.1.3 \(V(x)=x^{2}\), noisy data and by SGLD In the third case, we consider some noise in the observation data and use SGLD to train the network. The mesh sizes in the forward solver, the learning rate, and the training epochs are the same as in the previous subsection. We let the noise be a random variable that follows the normal distribution with mean \(0\) and standard deviation \(0.05\). In Figure 5 (c), the yellow circles are the noisy values of \(\psi^{\varepsilon}\) at \(50\) equally spaced locations. A comparison between the reference, i.e., \(V(x)=x^{2}\), and the predicted mean of the potential function is shown in Figure 5 (a). One can observe that the predicted mean value is consistent with the reference potential, and the blue shaded area indicates that there are some uncertainties due to the noisy data, compared to the previous tests where there is no noise in the observation. 
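The mean and confidence band in these figures can be formed directly from the retained SGLD samples; a minimal sketch follows, where `predict_V` (evaluating the network's potential output for one parameter sample on a spatial grid) is a hypothetical helper.

```python
import torch

def posterior_potential_summary(param_samples, predict_V, x_grid):
    """Evaluate the predicted potential for each retained SGLD parameter
    sample and summarize the pointwise mean and standard deviation."""
    preds = torch.stack([predict_V(theta, x_grid) for theta in param_samples])
    return preds.mean(dim=0), preds.std(dim=0)

# Plotting mean(x) with a shaded band mean(x) +/- 2 * std(x) gives the
# kind of confidence region shown in Figures 4-5.
```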
Similarly, we can see from Figure 5 (b)-(c) that the predicted wave and density at final time \(T\), computed using the mean of the network's predicted potential \(V\), capture well the true solution obtained by using \(V(x)=x^{2}\) in the TSSP solver. Therefore, we conclude that SGLD can deal with noisy data and provide reliable results. Figure 5: Test I case 3: \(V(x)=x^{2}\), noisy data and by SGLD. (a) True and predictions (with confidence interval) of the potential function. (b) True and predictions of the position density at time \(T\). (c) True and predictions of the wave function at time \(T\). ### Test II: A Stochastic Potential In Test II, we consider a stochastic potential, \(V(x,z)=(1+0.5z)x^{2}\), where \(z\) follows the uniform distribution on \([-1,1]\). To generate the dataset, we first take eight Gauss-Legendre points for \(z\in[-1,1]\). For each \(z_{k}\) (\(k=1,\cdots,K\)), i.e., each specific potential \(V(x;z_{k})\), we have the corresponding noisy measurement data \(\psi_{\rm obs}(x;z_{k})\) at the final time instance \(T=0.6\). The observation \(\psi_{\rm obs}(x;z_{k})\sim\mathcal{N}(\psi(x;z_{k}),\sigma^{2})\), where \(\sigma\) is the standard deviation. The wave functions at the final time instance, \(\psi(x;z_{k})\), are computed using the time-splitting spectral method on a \(640\times 1000\) temporal-spatial grid. Then for each \(z_{k}\) we select \(N\) sensor locations to collect the measurement data; the sensors are uniformly located in the spatial domain \(\Omega=[-\pi/2,\pi/2]\). We take \(N=20,50\) in the numerical tests. In the forward solver, the spatial mesh size is \(\pi/1000\) and the temporal mesh size is \(6.25\times 10^{-4}\). The learning rate is \(10^{-5}\) and the total training epoch is \(10000\). The input of the network then consists of the spatial evaluation point \(x_{i}\), the real part \(\Re(\psi_{\rm obs}(x_{1};z_{k})),\cdots,\Re(\psi_{\rm obs}(x_{N};z_{k}))\), and the imaginary part \(\Im(\psi_{\rm obs}(x_{1};z_{k})),\cdots,\Im(\psi_{\rm obs}(x_{N};z_{k}))\) of the observation data. The output of the network is the value of the potential at \(x_{i}\), i.e., \(V(x_{i};z_{k})\). The number of training samples is equal to the product of \(M\) (the number of evaluation points \(x_{i}\)) and the number of \(z\) samples. We assume that the values of \(V(x,z)\) at the endpoints \(x=-\frac{\pi}{2}\) and \(x=\frac{\pi}{2}\) are known for the training samples. The loss function consists of three parts: (1) the mismatch between the observation data \(\psi_{\rm obs}(x;z_{k})\) and the \(\psi\) computed using the neural network's predicted potential function, (2) the mismatch between the true potential and the neural network's predicted potential at the endpoints of the spatial domain, and (3) a regularization term on the potential. After training, we obtain the full potential profile for different \(z\) samples. In the testing stage, we only have noisy observations of the wave function at the final time \(T\), without knowing any information about the true potential function. We feed a set of spatial locations \(x_{i}\) as well as the observation data into the neural network, and obtain the predictions of the potential evaluated at these points \(x_{i}\). We first show the predictions of \(V(x;z)\) for some training samples of \(z\) when there are 50 sensors and \(\psi_{\text{obs}}(x;z)\sim\mathcal{N}(\psi(x;z),0.05)\). 
The comparison of the predictions and references of \(V(x;z)=(1+0.5z)x^{2}\) for four different \(z\) values (\(z=[0.9603,0.7967,0.5255,0.1834]\)) is presented on the left of Figure 6. The expected value of \(V\) over the random variable \(z\) is computed using 8 Legendre quadrature points in the interval \(z\in[-1,1]\), and the comparison of the predicted mean and reference mean is shown on the right of Figure 6. With a large amount of observation data and a suitable amount of noise in the data, the neural network can provide reasonable approximations of the potential functions. The corresponding predictions of the wave function \(\psi\) (computed using the predicted potential functions) at the final time \(T=0.6\) with different values of \(z\) are shown in Figures 7 and 8. We observe good agreement between the predictions and the true values of the wave functions. A testing case for \(z=0.0976\) is shown in Figure 9. It shows that our trained neural network can generalize well to new samples of \(z\). We then show the predictions of \(V(x;z)\) when there are 20 sensors and \(\psi_{\text{obs}}(x;z)\sim\mathcal{N}(\psi(x;z),0.02)\), that is, the number of sensors is smaller and the noise in the observation data is also lower. In this case, the predictions of the potential function for \(z=[0.9603,0.7967,0.5255,0.1834]\) are shown in Figure 10. The corresponding predictions of the wave function \(\psi\) for \(z=[0.9603,-0.9603]\) are shown in Figures 11 and 12, respectively. In addition, a testing case for \(z=-0.57315\) is presented in Figure 13. We observe that the results are still quite satisfactory under this test setting. This indicates that our proposed network architecture and training algorithm can work well to learn the target stochastic potential when the observation data is corrupted with a reasonable amount of noise. Figure 10: Test II, 20 sensors, true and predicted value of the potential function \(V(x;z)=(1+0.5z)x^{2}\). Left: different values of \(z\); right: mean prediction with respect to \(z\). Figure 11: Test II, 20 sensors, true and predicted value of the wave function \(\psi\) at final time \(T=1.0\), for a training sample \(z=0.7967\). Figure 12: Test II, 20 sensors, true and predicted value of the wave function \(\psi\) at final time \(T=1.0\), for a training sample \(z=-0.9603\).

## 5 Conclusions

In this work, we study the potential control problem described by the Schrodinger equation. We then develop a learning-based optimal control strategy by training neural networks to learn the control variate, considering observation data with or without noise. Our numerical results show that more reliable predictions can be obtained by adopting the SGLD algorithm. We address the importance of our work by the following: (i) we investigate a _new_ problem that is barely studied in the scientific computing field; (ii) we introduce a _novel_ hybrid NN-TSSP method as a deep learning approach to study the potential control problem described by the Schrodinger equation; (iii) the TSSP method as the forward solver in the sampling process is crucial, as the small parameter in the Schrodinger equation brings numerical challenges. We mention some limitations of the current work and propose them as future directions below. In the loss function during the training process, one can try to minimize the variance of the solution for more robust control. Besides, we shall investigate higher-dimensional problems for the Schrodinger equation, where other efficient schemes, such as Gaussian wave packet based schemes, can be adapted. 
Finally, more complicated potential functions that depend on the temporal variable will be studied, in order to explore more general cases with practical applications for the quantum control problem.
2307.16127
Interactive Car-Following: Matters but NOT Always
Following a leading vehicle is a daily but challenging task because it requires adapting to various traffic conditions and the leading vehicle's behaviors. However, the question `Does the following vehicle always actively react to the leading vehicle?' remains open. To seek the answer, we propose a novel metric to quantify the interaction intensity within the car-following pairs. The quantified interaction intensity enables us to recognize interactive and non-interactive car-following scenarios and derive corresponding policies for each scenario. Then, we develop an interaction-aware switching control framework with interactive and non-interactive policies, achieving a human-level car-following performance. The extensive simulations demonstrate that our interaction-aware switching control framework achieves improved control performance and data efficiency compared to the unified control strategies. Moreover, the experimental results reveal that human drivers would not always keep reacting to their leading vehicle but occasionally take safety-critical or intentional actions -- interaction matters but not always.
Chengyuan Zhang, Rui Chen, Jiacheng Zhu, Wenshuo Wang, Changliu Liu, Lijun Sun
2023-07-30T04:35:00Z
http://arxiv.org/abs/2307.16127v1
# Interactive Car-Following: Matters but NOT Always ###### Abstract Following a leading vehicle is a daily but challenging task because it requires adapting to various traffic conditions and the leading vehicle's behaviors. However, the question _'Does the following vehicle always actively react to the leading vehicle?'_ remains open. To seek the answer, we propose a novel metric to quantify the interaction intensity within car-following pairs. The quantified interaction intensity enables us to recognize interactive and non-interactive car-following scenarios and derive corresponding policies for each scenario. Then, we develop an interaction-aware switching control framework with interactive and non-interactive policies, achieving a human-level car-following performance. Extensive simulations demonstrate that our interaction-aware switching control framework achieves improved control performance and data efficiency compared to unified control strategies. Moreover, the experimental results reveal that human drivers do not always keep reacting to their leading vehicle but occasionally take safety-critical or intentional actions -- interaction matters but not always. ## I Introduction Autonomous driving systems promise to revolutionize our transport networks by enhancing safety, efficiency, and convenience. One challenging task is to follow a leading vehicle (i.e., leader) like a human driver -- a seemingly simple yet intricate operation (as illustrated in Fig. 1). The complexity arises from the need to adapt to ever-changing traffic conditions and the diverse behaviors of the leader. In general, there are two types of car-following models used for autonomous vehicles: one is stimulus-response-based, and the other is learning-based. Most car-following models are developed under the assumption that the following vehicle (i.e., follower) _always_ actively responds to changes in the environment states (e.g., the leader's speed and position, as the stimulus), such as Newell's model [1], the optimal velocity model [2], and the intelligent driver model (IDM) [3]. These models already encode some prior knowledge and thus do not require much data for calibration [4, 5]. However, these models heavily rely on the stimulus-response assumption: the follower's reaction is sensitive to the leader's instantaneous action. Our driving experience indicates that in natural traffic settings, the follower's response is scenario-dependent -- drivers take strong reactions in interactive scenarios but weak reactions in non-interactive scenarios. This is why stimulus-response-based car-following models might fail to capture varying traffic environments. Many learning-based models trained with a large amount of data have been developed to capture driving behaviors in diverse driving environments, such as Gaussian mixture models (GMMs) [6], deep neural networks [7], and deep reinforcement learning [8]. With the power of big data, such methods can cover both interactive and non-interactive scenarios using a unified model. However, training a unified model often demands an intricate architecture and extensive training data to depict the car-following behavior in different traffic scenarios, posing significant challenges in practice. For instance, sufficiently representing the complex and stochastic driving environment requires a vast amount of real-world data to approximate the true distribution of the behaviors [9]. 
Moreover, the low proportion of interactive behaviors among all driving behaviors could lead to a biased model due to imbalanced data [10]. To overcome the limitations of stimulus-response and learning-based car-following models, we argue that the follower does not always react to the leading vehicle but occasionally takes a safety-critical or intentional response. To this end, we introduce and design a new metric to quantify the interactions, i.e., the intensity of interactions within car-following pairs. The quantified interaction enables us to distinguish between interactive and non-interactive scenarios, providing the basis for developing interactive and non-interactive policies. To verify the effectiveness of the interaction intensity metric, we propose an interaction-aware switching control framework, which allows the follower to adaptively switch between the two policies. Extensive simulations indicate that our interaction-aware switching control framework outperforms traditional unified car-following strategies regarding control performance and data efficiency. In summary, our contributions are as follows: 1. We introduce interaction intensity as a quantifiable metric to determine the intensity level of interaction within car-following pairs (Section II). 2. We develop an interaction-aware switching control framework by leveraging interaction intensity to decide when to switch between interactive and non-interactive policies (Section III). 3. We preliminarily demonstrate that the follower does not always actively react to the leader but occasionally takes safety-critical or intentional actions (Section IV). Fig. 1: Car-following problem illustration. ## II Interaction Quantification Quantifying the intensity of interaction is a critical cornerstone of our interaction-aware switching control, and a quantifiable definition of social interaction in traffic scenarios can be [11]: _'A dynamic sequence of acts that mutually consider the actions and reactions of individuals through an information exchange process between two or more agents to maximize benefits and minimize costs.'_ This definition implies that checking the influences of human drivers on each other can identify the absence and presence of human interactions. Here we assume that the follower's decisions can be formulated as a probability distribution. We also assume that the leader's action may have a direct, real-time impact on the follower's reaction, which is reflected by a shift in the follower's probability distribution. Therefore, we are interested in estimating the interaction intensity \(\mathcal{I}\) between the leader-follower pair by measuring to what extent the probability distribution shifts upon the leader's actions. ### _Interaction Influence Formulation_ We denote \(\mathbf{a}_{\mathrm{foll}}^{1:t}\) and \(\hat{\mathbf{a}}_{\mathrm{foll}}^{t+1:t+\Delta T}\) as the historical and future (distinguished by the hat symbol) action sequences of the follower, respectively. We then denote the state sequences \(\mathbf{s}=[\mathbf{v}_{\mathrm{foll}}^{1:t},\Delta\mathbf{v}^{1:t},\Delta\mathbf{x}^{1:t}]\) as a concatenation of the follower's speed and the relative speed and distance. For simplicity of notation, we will omit the superscripts in the following. There are mainly two parts of information conveyed by \(\mathbf{s}\): the leader's motion state \(\mathbf{s}_{\mathrm{lead}}=[\Delta\mathbf{v},\Delta\mathbf{x}]\) and the follower's motion state \(\mathbf{s}_{\mathrm{foll}}=\mathbf{v}_{\mathrm{foll}}\). 
The intuition behind designing the interaction metric \(\mathcal{I}\) is to investigate the influence of the leader's state \(\mathbf{s}_{\mathrm{lead}}\) on the follower's future action \(\hat{\mathbf{a}}_{\mathrm{foll}}\), \[\mathcal{I}(\mathbf{a}_{\mathrm{foll}},\mathbf{s})\coloneqq\mathcal{D}\big{(}\underbrace{p(\hat{\mathbf{a}}_{\mathrm{foll}}|\mathbf{s}_{\mathrm{foll}},\mathbf{s}_{\mathrm{lead}},\ast)}_{\text{conditional dist. }f}||\underbrace{p(\hat{\mathbf{a}}_{\mathrm{foll}}|\mathbf{s}_{\mathrm{foll}},\ast)}_{\text{marginal dist. }g}\big{)}, \tag{1}\] with a conditional behavior model \(p(\hat{\mathbf{a}}_{\mathrm{foll}}|\mathbf{s}_{\mathrm{foll}},\mathbf{s}_{\mathrm{lead}},\ast)\) and a marginalized conditional behavior model \(p(\hat{\mathbf{a}}_{\mathrm{foll}}|\mathbf{s}_{\mathrm{foll}},\ast)\), where \(\ast\) represents the conditions on the action history \(\mathbf{a}_{\mathrm{foll}}\) and the model parameters. We use a distance-based measure \(\mathcal{D}(\cdot||\cdot)\) to evaluate the distance between these two probability distributions, indicating the influence of the leader's states on the follower's future action. Computing the distance depends on the probabilistic formulation, and we will parameterize these two models using Gaussian mixture regression (GMR) since it allows for conditioning and marginalization. ### _Conditional and Marginal Behavior Models_ GMR is widely used for multivariate nonlinear regression modeling [12] and car-following behavior modeling [13, 14]. One fundamental step of GMR is modeling the generative process of the car-following data as a GMM parameterized by \(\mathbf{\theta}\), i.e., the joint distribution \(p_{\mathbf{\theta}}(\mathbf{a}_{\mathrm{foll}},\hat{\mathbf{a}}_{\mathrm{foll}},\mathbf{s}_{\mathrm{foll}},\mathbf{s}_{\mathrm{lead}})\) is formulated as a GMM, from which we can derive the conditional distribution (which is still a GMM) to approximate the nonlinear function \[f:(\mathbf{s}_{\mathrm{foll}},\mathbf{s}_{\mathrm{lead}},\mathbf{a}_{\mathrm{foll}})\mapsto\hat{\mathbf{a}}_{\mathrm{foll}} \tag{2}\] in regression tasks. By marginalization, one can derive \(g=p_{\mathbf{\theta}}(\hat{\mathbf{a}}_{\mathrm{foll}}|\mathbf{s}_{\mathrm{foll}},\mathbf{a}_{\mathrm{foll}})\) as another GMM. ### _Quantifying Decision Shifting_ Two popular methods for measuring the dissimilarity between two probability distributions are the Jensen-Shannon (JS) divergence and the Wasserstein distance. The above section indicates that both \(f\) and \(g\) are mixtures of \(K\) Gaussian components, \[f(\mathbf{x}) =\sum_{i=1}^{K}\pi_{i}^{f}\mathcal{N}(\mathbf{x}|\mathbf{\mu}_{i}^{f},\mathbf{\Sigma}_{i}^{f}) \tag{3a}\] \[g(\mathbf{x}) =\sum_{j=1}^{K}\pi_{j}^{g}\mathcal{N}(\mathbf{x}|\mathbf{\mu}_{j}^{g},\mathbf{\Sigma}_{j}^{g}) \tag{3b}\] where (\(\pi_{i}^{f}\), \(\pi_{j}^{g}\)), (\(\mathbf{\mu}_{i}^{f}\), \(\mathbf{\mu}_{j}^{g}\)), and (\(\mathbf{\Sigma}_{i}^{f}\), \(\mathbf{\Sigma}_{j}^{g}\)) are the weights, means, and covariances, respectively. To simplify notation, we denote the Gaussian distributions \(\mathcal{N}(\mathbf{x}|\mathbf{\mu}_{i}^{f},\mathbf{\Sigma}_{i}^{f})\) as \(\mathcal{N}_{i}^{f}\) and \(\mathcal{N}(\mathbf{x}|\mathbf{\mu}_{j}^{g},\mathbf{\Sigma}_{j}^{g})\) as \(\mathcal{N}_{j}^{g}\). In what follows, we introduce the JS divergence and Wasserstein distance between \(f\) and \(g\) for quantifying the interactions. 
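As an illustration of the two operations just described, the following is a minimal sketch of (i) conditioning a fitted joint GMM to obtain \(f\) (or, by leaving \(\mathbf{s}_{\mathrm{lead}}\) out of the observed dimensions, \(g\)) and (ii) a Monte Carlo estimate of the distance between them; the index arrays and helper names are assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def condition_gmm(weights, means, covs, obs_idx, out_idx, x_obs):
    """Condition a joint GMM on the observed dimensions obs_idx, returning
    the conditional GMM over out_idx (Gaussian conditioning per component)."""
    w_new, mu_new, cov_new = [], [], []
    for w, mu, S in zip(weights, means, covs):
        S_oo = S[np.ix_(obs_idx, obs_idx)]
        gain = S[np.ix_(out_idx, obs_idx)] @ np.linalg.inv(S_oo)
        mu_new.append(mu[out_idx] + gain @ (x_obs - mu[obs_idx]))
        cov_new.append(S[np.ix_(out_idx, out_idx)]
                       - gain @ S[np.ix_(obs_idx, out_idx)])
        # reweight by each component's responsibility for the observation
        w_new.append(w * multivariate_normal.pdf(x_obs, mu[obs_idx], S_oo))
    w_new = np.asarray(w_new)
    return w_new / w_new.sum(), mu_new, cov_new

def js_divergence_mc(sample_f, sample_g, logpdf_f, logpdf_g, n=2000):
    """Monte Carlo estimate of D_JS(f, g) with h = (f + g) / 2; the sampler
    and log-density callables are assumed to operate on batches."""
    xf, xg = sample_f(n), sample_g(n)
    log_h = lambda x: np.logaddexp(logpdf_f(x), logpdf_g(x)) - np.log(2.0)
    return 0.5 * (np.mean(logpdf_f(xf) - log_h(xf))
                  + np.mean(logpdf_g(xg) - log_h(xg)))
```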
#### Ii-C1 JS divergence The JS divergence between two probability distributions \(f\) and \(g\) is defined as \[\mathcal{D}_{\text{JS}}(f,g)=\frac{1}{2}\mathcal{D}_{\text{KL}}(f||h)+\frac{1}{2}\mathcal{D}_{\text{KL}}(g||h), \tag{4}\] where \(\mathcal{D}_{\text{KL}}(f||g)\) is the Kullback-Leibler (KL) divergence, and \(h(\mathbf{x})=\frac{1}{2}(f(\mathbf{x})+g(\mathbf{x}))\). Note that the KL divergence between two GMMs is generally intractable, and we use Monte Carlo sampling to estimate it approximately [15]. #### Ii-C2 Wasserstein distance The \(p\)-th order Wasserstein distance between two GMMs is expressed as \[W_{p}(f,g)=\left(\min_{\gamma\in\Gamma(f,g)}\sum_{i=1}^{K}\sum_{j=1}^{K}\gamma_{ij}d_{p}(\mathcal{N}_{i}^{f},\mathcal{N}_{j}^{g})\right)^{1/p}, \tag{5}\] where \(\Gamma(f,g)\) is the set of all couplings between the two distributions, \(\gamma_{ij}\) are the elements of the optimal coupling matrix, and \(d_{p}(\mathcal{N}_{i}^{f},\mathcal{N}_{j}^{g})\) is the \(p\)-th order distance between the Gaussian components \(\mathcal{N}_{i}^{f}\) and \(\mathcal{N}_{j}^{g}\). The \(p\)-th order Wasserstein distance for Gaussians can be calculated by \[d_{p}(\mathcal{N}_{i}^{f},\mathcal{N}_{j}^{g})=||\mu_{i}^{f}-\mu_{j}^{g}||^{p}+\mathcal{B}_{p}(\Sigma_{i}^{f},\Sigma_{j}^{g}), \tag{6}\] where \(\mathcal{B}_{p}(\Sigma_{i}^{f},\Sigma_{j}^{g})\) is a Bures-like distance between the covariance matrices. The Bures-like distance is generally not available in closed form, but there exist approximations and optimization techniques to compute it as well [16, 17]. ## III Interaction-Aware Switching Control-Based Car-Following Model Instead of directly training a unified car-following policy using all car-following behavior data, we propose to utilize several interaction-aware sub-policies and switch between them based on the interaction intensity, i.e., the value of \(\mathcal{I}\). To this end, we need to tackle two challenges: (a) construction of the sub-policies and (b) design of the switching mechanism. To acquire sub-policies, we classify car-following behaviors into _interactive_ and _non-interactive_ ones according to the value of the interaction intensity \(\mathcal{I}\), and train a separate model for each. This allows us to exploit the benefits of training _ad hoc_ models and use only minimal data to overcome the data imbalance problem. This aligns with the fact that intense interactive behaviors are rare in naturalistic data. The proposed switching mechanism is the core of this framework, and we elaborate on it explicitly in the following. Switching control consists of different control policies for the various operating modes (e.g., interactive or non-interactive) of the car-following behaviors. Depending on the current mode of the leader-follower pair, an appropriate control policy is selected and applied. For instance, given an interactive car-following policy \(\pi_{\text{int}}\) and a non-interactive policy \(\pi_{\text{non}}\), a high-level supervisory logic \(\psi\) is used to decide which one to apply at each moment. 
For the follower in a car-following pair, given the current state, we seek to select the control policy according to the interaction intensity \(\mathcal{I}\), with the switch logic function \[\psi(\mathcal{I})=\begin{cases}\text{select}\,\pi_{\text{int}},\;\text{if}\;\;\mathcal{I}>\mathcal{I}_{0},\\ \text{select}\,\pi_{\text{non}},\;\text{if}\;\;\mathcal{I}\leq\mathcal{I}_{0},\end{cases} \tag{7}\] where \(\mathcal{I}_{0}\) is an intensity threshold, above which the interaction is considered intense. However, one should note that switching between controllers can cause transient effects or stability issues, especially if the controllers are not designed with smooth transitions in mind. Therefore, we developed a soft switching scheme that models the supervisory logic \(\psi\) as a mixture of the two policies \[\pi_{\text{switch}}=\psi(\mathcal{I})\pi_{\text{int}}+(1-\psi(\mathcal{I}))\pi_{\text{non}}, \tag{8}\] where \(\psi(\mathcal{I})=\sigma\left(\frac{\mathcal{I}-\mathcal{I}_{0}}{\beta}\right)\), in which \(\sigma\) represents the sigmoid function and \(\beta\) is a scaling factor. The intuition behind this setting is to put more weight on the interactive policy \(\pi_{\text{int}}\) when encountering an intensely interactive situation, while maintaining a smooth transition between the two policies. ## IV Experiment Results and Analysis ### _Dataset and Experiment Settings_ We use the HighD dataset [18], a high-resolution trajectory dataset collected using drones. It has \(60\) video recordings, logged at a sampling frequency of \(25\) Hz on several German highway sections with a length of \(420\) m. To simplify our data, we downsample the original dataset to a smaller set with a sampling frequency of \(5\) Hz (i.e., the time step between consecutive data points is \(0.2\) sec). In each recording, the trajectories, velocities, and accelerations are measured and estimated. We follow the same data processing procedures as in [19] to transform the data into a new coordinate system. We extract informative car-following pairs according to [4]. ### _Learning Car-Following Policies_ #### Iv-B1 Quantification Results Here we set the historical time horizon \(t=1\) sec and the future prediction time horizon \(\Delta T=0.6\) sec, and train the GMR on \(200\) randomly selected car-following pairs. Then we randomly select another \(20\) pairs to test the control policy. Basically, given the observations of \((\textbf{{s}}_{\text{foll}},\textbf{{s}}_{\text{lead}},\textbf{{a}}_{\text{foll}})\), we evaluate \(\mathcal{D}_{\text{JS}}(f||g)\) and \(W_{2}(f,g)\) at every time step. The quantified interactions of two random car-following pairs are shown in Fig. 2. The bottom panels indicate that the interactions quantified by the JS divergence and the 2-Wasserstein (W2) distance have almost the same trends, except that their values have different scales. Therefore, in the remaining parts of this paper, we do not distinguish between the different quantification methods but use the JS divergence by default. For an individual's driving behavior, intense interaction indicates that the follower takes a strong reaction to the leader's action, for instance, when the leader brakes abruptly or stops pressing the gas pedal after a rapid acceleration. To better understand the car-following interaction from the population level, we visualize the histogram of the quantified interaction intensity over all of the available car-following pairs in Fig. 3. Notice that human drivers tend to drive without intense interaction most of the time. Fig. 2: The motion profiles and the quantified interaction intensity of two car-following pairs. Note that the second row shows the trajectories from the point of view of another 'observing' vehicle moving at a constant mean speed; therefore, the relative spacing of the leader at the beginning and the end is zero. Fig. 3: Histograms of the quantified interaction intensity.
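For concreteness, a minimal sketch of how the soft-switching rule (8) could be applied at each control step; the threshold, scale, and policy outputs are illustrative placeholders rather than the calibrated values.

```python
import numpy as np

def soft_switch_action(I, a_int, a_non, I0=0.5, beta=0.1):
    """Blend the interactive and non-interactive policy accelerations
    with the sigmoid weight psi(I) of Eq. (8)."""
    psi = 1.0 / (1.0 + np.exp(-(I - I0) / beta))  # sigma((I - I0) / beta)
    return psi * a_int + (1.0 - psi) * a_non
```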
#### Iv-B2 Interactive/Non-interactive Data Sampling Recall that we evaluate the interaction intensity \(\mathcal{I}\) at every time step in Fig. 2; it is therefore straightforward to sample interactive/non-interactive data based on the interaction intensity. We illustrate this intuition with Fig. 4, where \(3\%\), \(10\%\), and \(30\%\) of the data are sampled from the original trajectory. #### Iv-B3 Learning Interactive/Non-interactive Models To verify the performance under data-insufficient cases, we use only \(3\%\) interactive/non-interactive samples from a full trajectory (see Fig. 4) to obtain \(\pi_{\text{int}}\) and \(\pi_{\text{non}}\), respectively. Here we use the IDM [3] as the car-following policy and adopt the Bayesian calibration method proposed in [4] to identify the distribution of the IDM parameters, from which we can draw many sets of IDM parameters. In addition, another IDM \(\pi_{\text{rand}}\) is calibrated as the baseline with \(6\%\) randomly sampled data, which contains both interactive and non-interactive samples at random.
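As a reference for the sub-policy being calibrated, here is a minimal sketch of the IDM acceleration [3]; the default parameter values are common illustrative choices, not the calibrated posteriors drawn above.

```python
import numpy as np

def idm_acceleration(v, dv, dx, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """IDM acceleration for follower speed v, approaching rate dv = v - v_lead,
    and gap dx; (v0, T, a, b, s0) are the parameters being calibrated."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** 4 - (s_star / dx) ** 2)
```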
### _Simulations with Interaction-Aware Switching Control_ In this part, we evaluate and compare the performance of different control policies in simulation. Specifically, the follower takes actions according to a specific control policy to follow a human leader. We run the simulation with the same initial states several times, and the comparisons of \(\pi_{\text{int}}\), \(\pi_{\text{non}}\), \(\pi_{\text{rand}}\), and the hard-switching policy \(\pi_{\text{switch}}\) for two car-following pairs are illustrated in Fig. 5(a) and Fig. 5(b). The interactive policy \(\pi_{\text{int}}\) learns to take safety-critical actions in scenarios with intense interactions, such as collision avoidance, while the non-interactive policy \(\pi_{\text{non}}\) learns to follow the leader and reach the target speed. The results indicate that \(\pi_{\text{int}}\) behaves too conservatively; it tends to keep a low speed with a large space headway. \(\pi_{\text{non}}\) usually keeps a short space headway and actively follows the leader, and \(\pi_{\text{rand}}\) seems to be a compromise between the two strategies. In general, \(\pi_{\text{switch}}\) takes on the characteristics of both \(\pi_{\text{int}}\) and \(\pi_{\text{non}}\) by switching between the 'actively following' mode and the 'avoiding collision' mode according to the interaction intensity \(\mathcal{I}\). As a comparison, the results of \(\pi_{\text{rand}}\) indicate that even a parsimonious model cannot be well calibrated with such limited data. As mentioned previously, switching between controllers can cause transient effects or stability issues, especially if the controllers are not designed with smooth transitions; see the jumping interactive weights in the bottom parts of Fig. 5(a) and Fig. 5(b). Since the hard-switching mechanism is set as a step function in (7), the stability of the system is very sensitive to the switching points, and it requires careful tuning of the intensity threshold \(\mathcal{I}_{0}\) in application. Therefore, with a fixed \(\mathcal{I}_{0}\), although we can find some results under hard-switching control that replicate the human-driver trajectories quite well, the specific threshold apparently cannot fit all of the sets of IDM parameters drawn from the learned policies. A soft-switching control policy is therefore essential, and setting a sigmoid switching function instead of the step function can be an effective solution to this issue. To illustrate, we evaluate the soft-switching control policy based on (8). The results are demonstrated in Fig. 5(c) and Fig. 5(d). We quantitatively evaluate the simulated trajectories for 7 distinct car-following pairs in Table I. The performance metric used is the root-mean-square error (RMSE) of the spatial headway (\(\Delta x\)) and a safety measure. For the safety measure, we evaluate how much closer the simulated trajectories come to the leader than the human driver's trajectories; thus, a lower RMSE indicates a safer policy. Each cell in the table contains the mean and standard deviation of the RMSE over multiple simulation runs. The bold numbers indicate the lowest RMSE value for each car-following pair, which represents the best-performing policy for that scenario. Since \(\pi_{\text{int}}\) is too conservative and keeps a large spacing behind the leader, we did not evaluate its safety measure in Table I. Overall, the table demonstrates the superiority of the soft-switching policy \(\pi_{\text{switch}}\) in most scenarios, achieving a good balance between actively following the leader and ensuring safety, which is the ultimate goal of our proposed approach. ### _Discussions and Limitations_ Our interaction-aware switching control method has demonstrated promising results with a loose data requirement, improving the efficiency and performance of car-following control in autonomous vehicles. By quantifying the level of interaction required between vehicles, our approach enables a more adaptive and context-aware control strategy that can handle a wide range of driving scenarios. Fig. 4: The interactive and non-interactive samples. The blue dots represent interactive samples, the red stars denote non-interactive samples, and the black triangles are random samples. The percentage in parentheses represents the amount of samples. In addition, the quantification results in Fig. 3 revealed at the population level that intense interactions are rare events in the car-following task. The results in Fig. 5 further confirmed this point: the interactive policy \(\pi_{\text{int}}\) is only actively adopted for a small proportion of the whole time horizon. In general, the results validate our hypothesis that not all car-following scenarios require the follower to take interactive reactions with respect to the leader; safety-critical or intentional actions are only occasionally needed -- an interactive car-following policy matters, but not always. This interesting finding is consistent with the intuition that humans typically do complex tasks using simple actions [19, 20]. Our results shed some light on the possibility that the social interactions behind overwhelmingly complex human driving behaviors are not always complicated but governed by some simple rules. However, despite its promising results, our approach has several critical limitations. First, the proposed quantification method heavily relies on the performance of the car-following behavior model (i.e., GMR). 
The behavior model is crucial for the success of our approach, as it determines when to switch between the control policies. Our current method for quantifying interaction intensity may not be optimal or universally applicable, and it might require further refinement or adaptation to different driving environments and vehicle dynamics. Second, although our method has been shown to reduce transient effects when switching between control policies, ensuring smooth transitions remains a challenge. The design of the interactive and non-interactive policies must take into account the possibility of abrupt changes in control inputs to prevent undesirable effects on the vehicle's stability and passenger comfort. Fig. 5: The simulated trajectories and quantified interaction intensity of two followers controlled by hard/soft-switching policies, respectively. The upper rows illustrate the simulated trajectories. The lower rows represent the quantified interaction intensity (left axis) and the interactive weights \(\psi(\mathcal{I})\) (right axis), which correspond to one of the green lines. Third, although our approach has shown promising results in car-following scenarios, its generalization to other urban traffic conditions, vehicle types, and sensor configurations remains to be validated. Additional experiments and evaluations in diverse urban scenarios are needed to verify the robustness and reliability of our method. ## V Conclusions In this paper, we present a novel interaction-aware switching control method for car-following scenarios in autonomous driving systems. By introducing the concept of interaction intensity as a quantifiable metric, we develop an adaptive control strategy that switches between interactive and non-interactive policies based on the current driving situation. Through extensive simulations, we demonstrate the effectiveness of our interaction-aware switching control method in adapting to different driving scenarios and achieving superior performance compared to unified control strategies. Our results indicate that considering the varying interaction intensities in car-following scenarios can lead to more robust and efficient autonomous vehicle control. Furthermore, the experiments confirmed that human drivers do not always keep reacting to their leading vehicle but occasionally take safety-critical or intentional actions. Despite its promising results, our approach is preliminary in the choice of the interaction intensity metric, the transition smoothness between policies, and the generalization to other traffic conditions and vehicle types. Future research should focus on extensions in those directions and on further refining our method to enhance its robustness and applicability in complex urban traffic. On a broader scale, our framework also provides insights into designing efficient controllers for other robotics tasks, such as human-robot interaction (HRI), especially when a large amount of human data is expensive to collect. ## Acknowledgment C. Zhang would like to thank the McGill Engineering Doctoral Awards (MEDA), the Mitacs Globalink Research Award, Fonds de recherche du Quebec - Nature et technologies (FRQNT), and the Natural Sciences and Engineering Research Council (NSERC) of Canada for providing scholarships and funding to support this study.
2306.17104
Deep Ensemble for Rotorcraft Attitude Prediction
Historically, the rotorcraft community has experienced a higher fatal accident rate than other aviation segments, including commercial and general aviation. Recent advancements in artificial intelligence (AI) and the application of these technologies in different areas of our lives are both intriguing and encouraging. When developed appropriately for the aviation domain, AI techniques provide an opportunity to help design systems that can address rotorcraft safety challenges. Our recent work demonstrated that AI algorithms could use video data from onboard cameras and correctly identify different flight parameters from cockpit gauges, e.g., indicated airspeed. These AI-based techniques provide a potentially cost-effective solution, especially for small helicopter operators, to record the flight state information and perform post-flight analyses. We also showed that carefully designed and trained AI systems could accurately predict rotorcraft attitude (i.e., pitch and yaw) from outside scenes (images or video data). Ordinary off-the-shelf video cameras were installed inside the rotorcraft cockpit to record the outside scene, including the horizon. The AI algorithm could correctly identify rotorcraft attitude at an accuracy in the range of 80\%. In this work, we combined five different onboard camera viewpoints to improve attitude prediction accuracy to 94\%. In this paper, the five onboard camera views included the pilot windshield, co-pilot windshield, pilot Electronic Flight Instrument System (EFIS) display, co-pilot EFIS display, and the attitude indicator gauge. Using video data from each camera view, we trained various convolutional neural networks (CNNs), which achieved prediction accuracy in the range of 79\% to 90\%. We subsequently ensembled the learned knowledge from all CNNs and achieved an ensembled accuracy of 93.3\%.
Hikmat Khan, Nidhal Carla Bouaynaya, Ghulam Rasool, Tyler Travis, Lacey Thompson, Charles C. Johnson
2023-06-29T17:06:42Z
http://arxiv.org/abs/2306.17104v1
# Deep Ensemble for Rotorcraft Attitude Prediction ###### Abstract Historically, the rotorcraft community has experienced a higher fatal accident rate than other aviation segments, including commercial and general aviation. To date, traditional methods applied to reduce incident rates have not proven hugely successful for the rotorcraft community. Recent advancements in artificial intelligence (AI) and the application of these technologies in different areas of our lives are both intriguing and encouraging. When developed appropriately for the aviation domain, AI techniques may provide an opportunity to help design systems that can address rotorcraft safety challenges. Our recent work demonstrated that AI algorithms could use video data from onboard cameras and correctly identify different flight parameters from cockpit gauges, e.g., indicated airspeed. These AI-based techniques provide a potentially cost-effective solution, especially for small helicopter operators, to record the flight state information and perform post-flight analyses. We also showed that carefully designed and trained AI systems can accurately predict rotorcraft attitude (i.e., pitch and yaw) from outside scenes (images or video data). Ordinary off-the-shelf video cameras were installed inside the rotorcraft cockpit to record the outside scene, including the horizon. The AI algorithm was able to correctly identify rotorcraft attitude at an accuracy in the range of 80%. In this work, we combined five different onboard camera viewpoints to improve attitude prediction accuracy to 94%. Our current approach, referred to as ensembled prediction, significantly increased the reliability of the predicted attitude (i.e., pitch and yaw). For example, in some camera views, the horizon may be obstructed or not visible. The proposed ensemble method can combine visual details recorded from other cameras and predict the attitude with high reliability. In our setup, the five onboard camera views included the pilot windshield, co-pilot windshield, pilot Electronic Flight Instrument System (EFIS) display, co-pilot EFIS display, and the attitude indicator gauge. Using video data from each camera view, we trained a variety of convolutional neural networks (CNNs), which achieved prediction accuracy in the range of 79% to 90%. We subsequently ensembled the learned knowledge from all CNNs and achieved an ensembled accuracy of 93.3%. Our efforts could potentially provide a cost-effective means to supplement traditional Flight Data Recorders (FDR), a technology that to date has been challenging to incorporate into the fleets of most rotorcraft operators due to cost and resource constraints. Such cost-effective solutions can gradually increase the rotorcraft community's participation in various safety programs, enhancing safety and opening up helicopter flight data monitoring (HFDM) to historically underrepresented segments of the vertical flight community. ## 1 Introduction As the premier agency for promoting and ensuring aviation safety, the Federal Aviation Administration (FAA) continually strives to improve safety. The FAA recognizes the importance of participating in Helicopter Flight Data Monitoring (HFDM) programs and encourages their increased utilization to improve flight safety and operational efficiency. Indeed, rotorcraft safety was on the agency's top ten most wanted list of safety improvements in 2017-2018 and continues to be a high priority in 2021. 
Organizations including the FAA, the National Transportation Safety Board (NTSB), and the United States Helicopter Safety Team (USHST) are strong proponents of flight data recorders (FDRs). These organizations and other industry partners are working together to promote helicopter flight data monitoring (HFDM) programs as one possible mitigation strategy to reduce the rotorcraft fatal accident rate. However, despite all of these efforts by various safety organizations, barriers to the widespread implementation of FDRs and adoption of HFDM still exist. These include, but are not limited to, the technical skills required to certify, install, and maintain an FDR, the skilled resources needed to perform HFDM analyses, and the costs associated with the acquisition, certification, and installation of the FDR. Traditional FDRs require a Supplemental Type Certificate (STC) or Field Approval (FA) to install and operate under the Rotorcraft Flight Manual (RFM). On average, the initial acquisition cost of an FDR can range from $5,000 to $50,000. Given a range of factors, rotorcraft, in general, have a lower participation rate in FDM programs than other forms of aviation, including commercial fixed-wing or part 121 air carriers. Inexpensive, off-the-shelf video cameras mounted inside the cockpit may offer a potential alternative to traditional FDRs. Even small helicopter operators often have access to, or have the financial means to purchase, one or more off-the-shelf video cameras. Figure 1: A set of representative images from five different cameras mounted inside the helicopter cockpit is presented. In the cases of (a) and (b), we used the whole video frame as input for the CNNs, while in the cases of (c), (d), and (e), the recorded image was cropped. The input to the CNN included the areas inside the highlighted green rectangles. (best viewed in color) These cameras can potentially record all the data that traditional FDRs record. Moreover, onboard cameras may provide supplementary data that may not be available in some FDRs. The recorded video data from onboard cameras can be used for various analyses and inquiries. Examples include the estimation of flight parameters from instrument panel gauges, flight replay during post-accident investigations, estimation of rotorcraft attitude, and extraction of any other visual information from video data. In this work, we recorded video data from five different onboard cameras for the accurate prediction of rotorcraft attitude. The cameras included the pilot windshield, co-pilot windshield, pilot Electronic Flight Instrument System (EFIS) display, co-pilot EFIS display, and the attitude indicator gauge. Figure 1 shows representative images from all onboard cameras. We used in-flight videos from these cameras to build training datasets for our AI models and trained a variety of convolutional neural networks (CNNs) on them. Later, we combined the knowledge learned by all CNNs using the ensemble approach, which improved the attitude prediction accuracy to 93.3%. Our proposed ensemble approach improves attitude prediction accuracy especially when the horizon curve is obstructed or not visible due to bad weather or other conditions. Our results from this work and previous publications support the viability of an inexpensive onboard camera-based solution for flight parameter estimation and attitude prediction. 
Such camera-based solutions do not require any special modification to the helicopter's avionics, communications, or display systems and are more suitable for legacy helicopters. Importantly, the solution's cost-effective nature will encourage the rotorcraft community to participate in these voluntary safety programs. The paper is organized as follows. The _Related Work_ section presents a brief description of the current AI-based approaches proposed to increase rotorcraft safety. The _Methodology_ section explains the data acquisition methodology and describes our experimental setup. The _Results_ section presents the experimental results along with a discussion. Finally, we conclude the current research in the _Conclusion_ section. ## 2 Related Work Various AI models based on deep neural networks have obtained above human-level performance on many different tasks. Examples in the computer vision domain include image classification, object detection, and image segmentation [1, 2, 3]. Computer vision tasks are primarily approached using a well-known type of artificial neural network, the convolutional neural network (CNN). A great deal of research has recently focused on developing various kinds of CNNs, including EfficientNet, VGG-16, VGG-19, ResNet, Inception, and Xception [4, 5, 6, 7]. Figure 2: Representative images from different camera views. These images show that the prediction of attitude from one gauge can be challenging, and an ensemble approach may benefit from the availability of different types of data related to the attitude of the rotorcraft. (best viewed in color) These CNNs are also referred to as deep neural networks, as they are built using tens or hundreds of layers of artificial neurons. CNNs have demonstrated the ability to learn complicated features directly from image or video data in an increasingly complex hierarchy. Deep neural networks and CNNs are being proposed to tackle various challenging tasks in the aviation community [8, 9, 10, 11]. Khan _et al._ showed that CNNs could infer different flight parameters from video data recorded during flights [8]. The authors showed that their trained models could accurately estimate various flight parameters, including airspeed and engine torque, directly from instrument panel videos, with the core purpose of facilitating post-flight analysis. Alligier _et al._ used machine learning techniques to improve airspeed estimation during aircraft climbing [10]. In another work, the same authors used machine learning to estimate the mass of ground-based aircraft during climb prediction [11]. Kenneth _et al._ applied the natural language processing (NLP) technique of structural topic modeling to the Aviation Safety Reporting System (ASRS) corpus [12]. The authors identified subjects, patterns, and areas that needed further analysis in the ASRS corpus [12]. Gianazza _et al._ trained a variety of machine learning algorithms to predict the workload of air traffic controllers [13]. The closest research work to ours was performed by Shin _et al._ [14]. The authors proposed a conventional computer vision strategy using hand-engineered features to predict rotorcraft attitude from onboard cameras [14]. The authors manually analyzed each video frame and marked the natural horizon line. Later, the classical machine learning approach of DBSCAN clustering was employed to estimate the roll and bank angles [14]. 
However, the proposed method was computationally expensive and lacked scalability for the inference of multiple parameters in large video datasets. ## 3 Methodology We trained multiple deep learning models for each dataset to predict rotorcraft attitude (i.e., pitch and yaw). Each dataset was built using the video stream from a different camera. The five cameras installed in the cockpit collected different details of the horizon curve from their own viewpoints. For instance, the pilot and co-pilot windshield views directly record the horizon curve as seen through the windshield. The pilot and co-pilot EFIS displays and the attitude indicator gauge on the instrument panel present different visual representations of the same horizon curve. Figure 1 shows sample images from each of the five onboard cameras. ### Convolutional Neural Networks (CNNs) CNNs are inspired by the human visual system and have obtained state-of-the-art performance in various computer vision tasks [15, 16, 17]. CNNs can learn features from input images in a hierarchical fashion, taking advantage of the spatial coherence in images, and do not require domain-specific knowledge, i.e., feature engineering. The early layers of a CNN learn standard features, such as edges or lines, while subsequent layers learn complex, domain-dependent features. The convolutional layers of a CNN perform convolution operations between the input image and learnable parameters (also referred to as filters or kernels). A CNN may have multiple layers of convolutional kernels, depending upon the complexity of the task to be learned. The features extracted by a convolutional layer are passed through a nonlinear function, such as the rectified linear unit (ReLU). The nonlinear functions are also referred to as activation functions or simply activations. Figure 3: The pitch and roll values for the whole duration of the flight are presented in figures (a) and (b), respectively. The y-axis shows the values in degrees and the x-axis represents the flight duration. (best viewed in color) The activation functions are generally followed by a max-pooling layer, which reduces the dimensionality of the input. At the end of the last convolutional layer, the extracted features are flattened and densely connected to the next layer, referred to as the fully-connected layer. Finally, a softmax function is used to produce class scores or probabilities. #### 3.1.1 Weight Initialization: In deep neural networks, appropriate weight (or parameter) initialization can reduce the convergence time and computational cost. In our experiments, we initialized our weights with ImageNet-pretrained values [6]. #### 3.1.2 Activation Functions: The nonlinear activation functions are generally introduced into every layer of a neural network. Some commonly used activation functions include the sigmoid, hyperbolic tangent (tanh), ReLU, Leaky ReLU (LeakyReLU), and parametric ReLU [18]. We used the ReLU activation function in all of our experiments. #### 3.1.3 Max-Pooling Layers: Pooling is used to reduce the dimensionality of the input features by sub-sampling. The pooling operation makes the CNN invariant to small intensity and illumination changes as well as translations. Commonly used pooling operations include max-pooling and average-pooling [18]. The max-pooling operation selects the features with the maximum value in the pooling region, while average-pooling calculates the average of the features in the pooling region. We used the max-pooling operation in all experiments. 
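As a schematic illustration of the layer pattern described above (convolution, ReLU, max-pooling, a fully connected layer, and softmax), here is a minimal sketch; the channel counts and the assumed \(224\times 224\) input size are illustrative, not the architectures used in the experiments.

```python
import torch.nn as nn

num_classes = 9  # the attitude classes defined in a later subsection

# conv -> ReLU -> max-pool blocks, then flatten -> fully connected -> softmax
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learnable filters
    nn.ReLU(),                                    # nonlinear activation
    nn.MaxPool2d(2),                              # sub-sampling
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),                                 # vectorize the feature map
    nn.Linear(32 * 56 * 56, num_classes),         # class scores (224x224 input)
    nn.Softmax(dim=1),                            # class probabilities
)
```

In practice, the softmax is usually folded into the cross-entropy loss during training rather than kept as a separate layer.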
#### 3.1.4 Regularization: A common problem with deep neural networks is overfitting, i.e., a lack of generalization to unseen data. Several regularization schemes have been proposed to avoid overfitting, e.g., \(L_{1}\)-regularization, \(L_{2}\)-regularization, batch normalization, and dropout [18]. Batch normalization is performed by calculating a parameterized mean and standard deviation of the input and processed data at various layers of the CNN during the training phase. Dropout is a commonly used and effective technique for regularization [19], where a randomly selected set of neurons is turned off (forced to zero) at each forward pass. Dropout forces each neuron to learn and contribute independently to the overall output of the CNN. We used both batch normalization and dropout techniques in our experiments. #### 3.1.5 Fully Connected Layers and the Softmax Function: The output of the convolutional layers is called the feature map. The feature map is flattened (i.e., vectorized) and densely connected to the next layer, a fully connected layer. A CNN may have multiple fully connected layers. The output of the last fully connected layer is referred to as the class scores and becomes the input to the softmax function. The softmax function nonlinearly normalizes the input class scores to numbers between 0 and 1. The highest score is considered the classification decision, i.e., the class label predicted by the CNN. #### 3.1.6 Loss Function: The loss function measures the error between the class label predicted by the CNN and the ground truth label. The training process of a neural network aims to minimize the loss function by optimally adjusting the weights or parameters. We used the categorical cross-entropy loss function in all our experiments. ### Data Acquisition: We mounted five onboard cameras with various viewpoints inside the cockpit of an S-76 helicopter to record the instrument panel and the horizon. The pilot and co-pilot perspectives of the horizon were captured with pilot and co-pilot windshield cameras mounted above the pilot and co-pilot. Similarly, the pilot and co-pilot perspectives of the EFIS displays were recorded using two separate cameras. The fifth camera continuously recorded the different gauges on the instrument panel. Representative images can be seen in Figure 1. Table 1 presents the total duration of the available videos for each camera view. The rotorcraft was equipped with an onboard Helicopter Flight Data Recorder (HFDR). Both the HFDR and the cameras (i.e., each frame of flight video) were timestamped using a time server. The timestamps were used to annotate individual frames of flight videos with the corresponding HFDR recordings. ### Classes Definition: The HFDR-recorded readings for the attitude (i.e., pitch and yaw) are real numbers. Figure 3 presents the pitch and roll attitude values for one of the flights of the S-76 helicopter. We define nine classes for the attitude: class 0 - nose up (NU), class 1 - nose down (ND), class 2 - roll positive (RP), class 3 - roll negative (RN), class 4 - NU and RP, class 5 - NU and RN, class 6 - ND and RP, class 7 - ND and RN, and class 8 - level and steady-state (L). Table 2 presents the definition of the nine classes. The threshold \(\alpha\) takes a user-defined value and defines the boundaries among the nine classes. In our experiments we set \(\alpha=3\).
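A small sketch of the mapping from \((\text{pitch},\text{roll})\) readings to the nine class labels of Table 2, with \(\alpha=3\); the function name is illustrative.

```python
def attitude_class(pitch, roll, alpha=3.0):
    """Map (pitch, roll) readings in degrees to the nine attitude
    classes of Table 2 (threshold alpha = 3)."""
    nu, nd = pitch > alpha, pitch < -alpha
    rp, rn = roll > alpha, roll < -alpha
    if nu and rp: return 4   # NU & RP
    if nu and rn: return 5   # NU & RN
    if nd and rp: return 6   # ND & RP
    if nd and rn: return 7   # ND & RN
    if nu: return 0          # NU
    if nd: return 1          # ND
    if rp: return 2          # RP
    if rn: return 3          # RN
    return 8                 # L: level / steady-state
```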
CNN architectures that we considered for attitude prediction included EfficientNet, VGG16, VGG19, ResNet50, Xception, and InceptionV3 [4, 20, 5, 6, 7, 21]. The models were trained on the training set and then evaluated on a separate test set. In all experiments, we initialized the models with ImageNet weights and fine-tuned them to our dataset [22]. All experiments were performed using the Adam optimizer with a consistent batch size of 256 [23]. The remaining parameters of the Adam optimizer were initialized to default values, as discussed in [23]. ## 4 Results In Table 3, we present average accuracy values for all models trained on the five different datasets. Each row of the table represents a different CNN, and each column represents a dataset (i.e., camera view). The last column shows the performance of two ensemble approaches. The ensemble approaches combine the attitude predictions of all trained models (i.e., 20 models) on five camera views using a majority voting strategy. The first ensemble approach did not include the models trained on the attitude gauge dataset. The second approach included all the models. As evident from Table 3, the two proposed ensemble approaches obtained higher average accuracy (i.e., 92.5% and 93.3%) than the individual models trained on different camera views. The improvement in predictive performance using the ensemble approach supports our hypothesis that combining data from multiple cameras improves prediction accuracy. Figure 4 presents class-wise normalized confusion matrices of four models trained on the pilot windshield dataset. Figure 5 presents class-wise normalized confusion matrices of four models trained on the co-pilot windshield dataset. The x-axis of each confusion matrix shows the predicted attitude class, while the y-axis shows the true class. The diagonal values show the correct classification rates for the nine attitude classes; higher numbers on the diagonal of a confusion matrix reflect a better model. Figure 6 presents class-wise normalized confusion matrices of four models trained on the pilot EFIS display dataset. \begin{table} \begin{tabular}{l c} \hline \hline Camera view & Flight duration \\ \hline Pilot Windshield & 44:37:10 \\ Co-pilot Windshield & 55:38:34 \\ Pilot EFIS & 23:42:35 \\ Co-pilot EFIS & 34:09:50 \\ Artificial Attitude Indicator & 15:20:00 \\ \hline \hline \end{tabular} \end{table} Table 1: Total flight duration for the five different datasets, i.e., camera views. The flight duration is in the standard time format (i.e., hh:mm:ss). \begin{table} \begin{tabular}{l c c c} \hline \hline Class & Description & Pitch (P) & Roll (R) \\ \hline 0 & NU & P \(>\alpha\) & \(-\alpha\leq\) R \(\leq+\alpha\) \\ 1 & ND & P \(<-\alpha\) & \(-\alpha\leq\) R \(\leq+\alpha\) \\ 2 & RP & \(-\alpha\leq\) P \(\leq+\alpha\) & R \(>\alpha\) \\ 3 & RN & \(-\alpha\leq\) P \(\leq+\alpha\) & R \(<-\alpha\) \\ 4 & NU \& RP & P \(>\alpha\) & R \(>\alpha\) \\ 5 & NU \& RN & P \(>\alpha\) & R \(<-\alpha\) \\ 6 & ND \& RP & P \(<-\alpha\) & R \(>\alpha\) \\ 7 & ND \& RN & P \(<-\alpha\) & R \(<-\alpha\) \\ 8 & L & \(-\alpha\leq\) P \(\leq+\alpha\) & \(-\alpha\leq\) R \(\leq+\alpha\) \\ \hline \hline \end{tabular} \end{table} Table 2: Definition of classes for attitude. We used a threshold \(\alpha=3\) and defined 9 discrete classes. Abbreviations used: NU - nose up, ND - nose down, RP - roll positive, RN - roll negative, and L - level or steady-state.
Figure 7 presents class-wise normalized confusion matrices of four models trained on the co-pilot EFIS display dataset. Figure 8 presents class-wise normalized confusion matrices of four models trained on the artificial attitude indicator gauge dataset. Figure 9 presents the normalized confusion matrices for both ensemble approaches. The first ensemble approach considered the 16 models trained on the pilot windshield, co-pilot windshield, pilot EFIS, and co-pilot EFIS datasets; the second considered all 20 models. In Figure 9, we use green and red rectangles to show the positive (an increase in prediction accuracy) and negative (a decrease in prediction accuracy) effects on the class-wise predictions in the ensemble confusion matrices. We observed that increasing the number of models in the ensemble improved the prediction accuracy. It is also evident (red and green rectangles) from Figure 9 that an increased number of models in the ensemble results in an increase in prediction accuracy for most classes. It is important to highlight that the ensemble-based approach obtained consistent results on all nine attitude classes, whereas the models trained on specific camera views have comparatively lower classification accuracy on classes 6 and 7. Figure 4: Class-wise normalized confusion matrices for 4 CNNs (i.e., VGG16, VGG19, EfficientNet, and ResNet50) used for attitude prediction using the pilot windshield camera view. Figure 5: Class-wise normalized confusion matrices for 4 CNNs (i.e., EfficientNet, ResNet50, VGG16, and VGG19) used for attitude prediction using the co-pilot windshield camera view. Figure 6: Class-wise normalized confusion matrices for 4 CNN architectures (i.e., EfficientNet, ResNet50, VGG16, and VGG19) used for attitude prediction using the pilot EFIS display. Figure 7: Class-wise normalized confusion matrices for 4 CNN architectures (i.e., EfficientNet, Xception, VGG16, and InceptionV3) used for attitude prediction using the co-pilot EFIS display. Figure 8: Class-wise normalized confusion matrices for 4 CNN architectures (i.e., EfficientNet, VGG16, VGG19, and Xception) used for attitude prediction using the pilot attitude indicator gauge. Figure 9: Class-wise normalized confusion matrices of ensemble attitude prediction. We used the majority voting strategy to combine the different models trained on five different camera views. (a) Ensemble confusion matrix based on 16 models trained on the pilot windshield, co-pilot windshield, pilot EFIS, and co-pilot EFIS views. (b) Ensemble confusion matrix for all 20 models. The ensemble approach obtained better accuracy at the individual class level and enhanced overall attitude predictive accuracy. ## 7 Authors' Biographies **Hikmat Khan** is currently a PhD student at Rowan University. He is a research fellow supporting the Federal Aviation Administration (FAA) via a research grant/cooperative agreement by evaluating the feasibility of applying deep learning approaches to increase safety within the rotorcraft industry. His research interests include deep learning, continual learning, few-shot learning, and optimization. **Nidhal C. Bouaynaya** received her Ph.D. degree in Electrical and Computer Engineering and M.S. degree in Pure Mathematics from The University of Illinois at Chicago, in 2007. From 2007-2013, she was an Assistant then Associate Professor with the Department of Systems Engineering at the University of Arkansas at Little Rock.
In Fall 2013, she joined the Department of Electrical and Computer Engineering at Rowan University, where she is currently a Professor and the Associate Dean for Research and Graduate Studies. Dr. Bouaynaya has co-authored more than 100 refereed journal articles, book chapters, and conference proceedings. She has won numerous Best Paper Awards, most recently at the 2019 _IEEE International Workshop on Machine Learning for Signal Processing_. She is also the winner of the top-ranked algorithm at the 2016 Multimodal Brain Tumor Segmentation Challenge (BraTS). Her research interests are in Big Data Analytics, Machine Learning, Artificial Intelligence, and Mathematical Optimization. In 2017, she co-founded and is the Chief Executive Officer (CEO) of MRIMATH, LLC, a start-up company that uses artificial intelligence to improve patient oncology outcomes and treatment response. **Ghulam Rasool** is an Assistant Professor of Electrical and Computer Engineering at Rowan University. He received a B.S. in Mechanical Engineering from the National University of Sciences and Technology (NUST), Pakistan, in 2000, an M.S. in Computer Engineering from the Center for Advanced Studies in Engineering (CASE), Pakistan, in 2010, and a Ph.D. in Systems Engineering from the University of Arkansas at Little Rock in 2014. He was a postdoctoral fellow with the Rehabilitation Institute of Chicago and Northwestern University from 2014 to 2016. He joined Rowan University as an adjunct professor and later as a lecturer in 2018. Currently, he is the co-director of the Rowan AI Lab. His current research focuses on machine learning, artificial intelligence, data analytics, and signal, image, and video processing. His research is funded by the National Science Foundation (NSF), U.S. Department of Education, U.S. Department of Transportation (through the University Transportation Center (UTC), Rutgers University), Federal Aviation Administration (FAA), New Jersey Health Foundation (NJHF), and Lockheed Martin, Inc. His recent work on Bayesian machine learning won the Best Student Award at the 2019 IEEE Machine Learning for Signal Processing Workshop. **Charles C. Johnson** works as a research engineer, program manager, and technical expert for the Aviation Research Division at the FAA William J. Hughes Technical Center in Atlantic City, NJ. During his 10+ year career with the FAA, he has led several rotorcraft and unmanned aircraft systems (UAS) research and development simulation/flight test activities that seek to improve aviation safety. Cliff is qualified on the ScanEagle UAS platform. He holds a Bachelor's degree in Mechanical Engineering from Rowan University. He also holds a private pilot's license (single engine land/fixed-wing) and is pursuing his instrument, commercial, and helicopter add-on ratings. **Tyler Travis** serves as a Research Analyst at the Federal Aviation Administration's (FAA) William J. Hughes Technical Center in Atlantic City, NJ. She supports several research and development activities that seek to improve aviation safety. During her 5+ years at the FAA Technical Center, Tyler has worked on several UAS and Rotorcraft Human-In-The-Loop simulations involving new technologies and procedural changes impacting the National Airspace System. Prior to joining the FAA, Tyler completed her B.S. in Business Administration with a concentration in Management Information Systems at Drexel University in September 2012.
**Lacey Thompson** works as an Operations Research Analyst for the Unmanned Aircraft Systems (UAS) Engineering Branch at the Federal Aviation Administration's (FAA) William J. Hughes Technical Center in Atlantic City, NJ. During her 7.5 years with the FAA, Lacey has managed several UAS Human-In-the-Loop (HITL) simulations. In addition, she serves as an Aviation Science, Technology, and Mathematics (AvSTEM) Ambassador, creating the curriculum for the first ever module on UAS for the Aviation Monthly Mentoring Program. Lacey holds a Bachelor's degree in Physics from Northwestern State University of Louisiana and a Master's degree in Aeronautics from Embry-Riddle Aeronautical University Worldwide.
2308.08588
Entanglement and Topology in Su-Schrieffer-Heeger Cavity Quantum Electrodynamics
Cavity materials are a frontier to investigate the role of light-matter interactions on the properties of electronic phases of matter. In this work, we raise a fundamental question: can non-local interactions mediated by cavity photons destabilize a topological electronic phase? We investigate this question by characterizing entanglement, energy spectrum and correlation functions of the topological Su-Schrieffer-Heeger (SSH) chain interacting with an optical cavity mode. Employing density-matrix renormalization group (DMRG) and exact diagonalization (ED), we demonstrate the stability of the edge state and establish an area law scaling for the ground state entanglement entropy, despite long-range correlations induced by light-matter interactions. These features are linked to gauge invariance and the scaling of virtual photon excitations entangled with matter, effectively computed in a low-dimensional Krylov subspace of the full Hilbert space. This work provides a framework for characterizing novel equilibrium phenomena in topological cavity materials.
Daniel Shaffer, Martin Claassen, Ajit Srivastava, Luiz H. Santos
2023-08-16T18:00:00Z
http://arxiv.org/abs/2308.08588v1
# Entanglement and Topology in Su-Schrieffer-Heeger Cavity Quantum Electrodynamics ###### Abstract Cavity materials are a frontier to investigate the role of light-matter interactions on the properties of electronic phases of matter. In this work, we raise a fundamental question: can non-local interactions mediated by cavity photons destabilize a topological electronic phase? We investigate this question by characterizing entanglement, energy spectrum and correlation functions of the topological Su-Schrieffer-Heeger (SSH) chain interacting with an optical cavity mode. Employing density-matrix renormalization group (DMRG) and exact diagonalization (ED), we demonstrate the stability of the edge state and establish an area law scaling for the ground state entanglement entropy, despite long-range correlations induced by light-matter interactions. These features are linked to gauge invariance and the scaling of virtual photon excitations entangled with matter, effectively computed in a low-dimensional Krylov subspace of the full Hilbert space. This work provides a framework for characterizing novel equilibrium phenomena in topological cavity materials. _Introduction_- Ever since Purcell's seminal discovery Purcell (1947) that light-matter interactions (LMI) can be controlled by engineering electromagnetic vacuum, cavity quantum electrodynamics (cQED) Susskind (1993); Susskind (1993) has been a fruitful platform to create and manipulate light-matter hybrids. Notable experimental progress in the last decade has enabled ultra-strong coupling regimes where LMI is comparable to or even more significant than the bare cavity and matter excitation energy scales, Susskind (1993); Susskind and Girvin (2000); Susskind and Girvin (2000); Susskind (2001) opening a promising path to alter the equilibrium properties of quantum materials with quantum light. Susskind (1993); Susskind and Girvin (2000); Susskind (2001); Susskind and Girvin (2000); Susskind (2001) Quantum entanglement is inherently part of the description of strongly interacting light-matter systems, for LMI entangles photons and charged particles, resulting in hybrid many-body states containing virtual excitations.Susskind (1993) Nevertheless, the nature of quantum entanglement in the regime where light strongly interacts with many-body electronic systems remains to be harnessed. In particular, in contrast with the pivotal role played by entanglement as a universal marker of long-range entangled topological orderSusskind and Girvin (2000); Susskind (2001); Susskind and Girvin (2000); Susskind (2001) and short-range entangled symmetry-protected topological states Susskind and Girvin (2000); Susskind and Girvin (2000); Susskind and Girvin (2000); Susskind (2001); Susskind and Girvin (2000); Susskind (2001), the scaling regimes of entanglement in topological matter interacting with cavity fields remains poorly understood. This issue is central to a potential classification of hybrid light-matter phases, as well as a timely endeavor given the observed breakdown of topological protection in quantum Hall systems Zhang _et al._ (2015) strongly interacting with THz cavity modes. In this Letter, we investigate the one-dimensional (1D) Su-Schrieffer-Heeger (SSH) spinless fermionic chain Su _et al._ (2015) coupled to a single optical mode as a paradigm to address the effects of LMI onto equilibrium properties of topological fermionic matter. 
At half-filling, the SSH chain has a trivial and a topological gapped phase separated by a phase transition upon tuning the intra- and inter-unit cell hopping amplitudes, as shown in Fig. 1. While both phases display similar area law scaling of the ground state entanglement entropy (EE), the low eigenvalues of the entanglement spectrum are two-fold degenerate due to the presence of non-trivial edge states Susskind and Girvin (2000); Susskind (2001); Susskind and Girvin (2000). In the SSH-cQED system with electrons strongly interacting with a single photonic mode, a burning question is to characterize the role of photon-mediated non-local interactions on the system's short-range entanglement and topological properties. We address these issues through analytical and numerical methods that reveal a detailed account of the entanglement features, spectral properties, and edge states of SSH-cQED low energy states. Departing from previous mean-field studies Susskind (2001); Susskind and Girvin (2000), we employ density-matrix renormalization group (DMRG) as a non-perturbative method to extract the structure of entanglement between light and electrons of the SSH chain as a function of light-matter coupling and of system size. Our DMRG analysis shows that, while LMI induces an expected increase in entanglement entropy, this entanglement contribution _saturates_ with system size despite the non-locality of light-matter interactions. This behavior is associated with a many-body state characterized by dressed photon and electronic states which, nevertheless, preserves the area-law scaling of entanglement and the topological edge states, which are the central results of this work. Figure 1: Fermionic Su-Schrieffer-Heeger chain interacting with an optical cavity mode (purple). Intra- and inter-cell hopping amplitudes \(t_{e}\) and \(t_{o}\) are respectively represented by red and blue bonds. Our DMRG analysis establishes an interesting correlation between EE saturation and the diamagnetic response of the ground state. This diamagnetic response is a non-perturbative feature that highlights the importance of gauge invariance in describing the interaction of Bloch electrons with quantum light [28; 29]. While gauge invariant diamagnetic effects have been linked with stability against superradiant phase transitions [30; 31; 32; 33; 34], the link between diamagnetism and quantum entanglement is a new aspect of LMIs that this work uncovers. Furthermore, we corroborate the DMRG results by performing exact diagonalization (ED) and by identifying a closed Krylov subspace where light-matter entanglement can be efficiently described when the number of virtual photons in the ground state is small. The Krylov subspace is generated from the decoupled state of matter and photons by the action of a composite operator involving photon creation and annihilation operators and a many-body fermionic current operator. This subspace spans the many-body ground state characterizing the short-range entanglement of light and matter degrees of freedom observed in DMRG.
_Model -_ Adopting the Coulomb gauge [35], we consider a half-filled chain of spinless fermions with \(L=2N\) sites described by creation (annihilation) operators \(c_{j}^{\dagger}(c_{j})\) coupled to a single cavity transverse photonic mode of frequency \(\omega\) represented by canonical bosonic operators \(a^{\dagger}\) and \(a\), which is described by the Hamiltonian \[H=\sum_{j}t_{j}e^{i\,\frac{e\ell}{\hbar}\mathcal{A}_{0}\,(a+a^{\dagger})}c_{j}^{\dagger}\,c_{j+1}+\text{H.c.}+\hbar\,\omega\,a^{\dagger}\,a\,, \tag{1}\] where the LMI is encoded via the Peierls substitution, and interactions \(V_{ij}c_{i}^{\dagger}c_{i}c_{j}^{\dagger}c_{j}\) mediated by the longitudinal component of the gauge field are disregarded since the fermionic matter is weakly correlated. Nearest-neighbor intra- (inter-) unit cell real hopping amplitudes are, respectively, \(t_{2j}=t_{e}\) and \(t_{2j+1}=t_{o}\) (see Fig. 1), with \(t_{o}>t_{e}\) giving the topological SSH phase. The distance between neighboring sites is \(\ell\), \(e\) is the electron charge, and \(\hbar\) is Planck's constant divided by \(2\pi\). The vector potential, polarized along the chain direction, has an amplitude \(\mathcal{A}_{0}=\sqrt{\frac{\hbar}{2\omega V\epsilon}}\), where \(V\) is the cavity volume and \(\epsilon\) is the dielectric constant. We fix the cross-sectional area of the cavity and consider the same length \(L\) for the cavity and the chain (Fig. 1). As such, the Peierls phase in (1), \(\frac{e\ell\,\mathcal{A}_{0}}{\hbar}\equiv g/\sqrt{L}\), explicitly encodes variations of the chain size and the dimensionless strength \(g\) of the LMI, which shall be varied from weak- to ultra-strong coupling regimes. We measure length in units of \(\ell\) and regard \(L\) as dimensionless. _Numerical Analysis -_ Using TeNPy [36], we conducted a DMRG study of the model Eq. (1), varying the system size up to \(L=200\) and capping the number of photons at 100. The DMRG results presented here are for the quasi-resonance condition \(\hbar\omega=2t_{o}=2\), but we have verified that no qualitative changes occur upon varying \(\omega\). In this study, the dimensionless coupling \(g\) is varied over a wide range between the weak coupling (\(g\ll 1\)) and ultra-strong coupling (\(g\gg 1\)) regimes, with DMRG analysis for \(g\in[0.1,2.5]\) presented here, and additional data shown in the supplementary materials (SM) [37]. As shown in Figs. 2(a)-(b), obtained for the dimerized limit \(t_{o}=1\) and \(t_{e}=0\), both the ground state energy change due to light-matter coupling, \(\Delta E=E(g)-E(g=0)\), and the number of photons, \(N_{ph}=\langle a^{\dagger}a\rangle\), plateau to a constant value as system size \(L\) increases, with the value of the plateau increasing with increasing \(g\). Figure 2: DMRG simulation of the cavity SSH system for light-matter coupling \(g\in[0.1,2.5]\) (color coding legend on the right) for system sizes incremented in steps of \(4\) up to \(L=200\). (a) Change in ground state energy in the presence of light-matter coupling in the dimerized limit (\(t_{e}=0\)), \(\Delta E=E(g)-E(0)\). The diamagnetic response (\(\Delta E>0\)) saturates to a finite value for large \(L\). (b) Average number of photons \(N_{ph}=\langle a^{\dagger}a\rangle\) in the presence of light-matter coupling in the dimerized limit (\(t_{e}=0\)). \(N_{ph}\) also saturates to a finite value, much smaller than one for the considered values of \(g\).
(c) Electron density \(n_{j}=\langle c_{j}^{\dagger}c_{j}\rangle\) for \(L=200\), showing the robustness of topological edge states away from the dimerized limit (\(t_{e}=t_{o}/2=0.5\)), for a system at half-filling minus one electron. The states can be seen as a charge deficit at the two ends of the chain, which is insensitive to the strength of the light-matter coupling within numerical precision. The inset shows the ED spectral gap \(\Delta\) for \(L\leq 12\), exhibiting the expected exponential decay with system size. Notably, while the lowest-order term in the expansion of the Hamiltonian (1), \(\delta H_{1}=(g/\sqrt{L})(a+a^{\dagger})\,\sum_{j}i\,t_{j}\,c_{j}^{\dagger}c_{j+1}+\text{H.c.}\), yields a _negative_ Lamb shift [35], the \(\Delta E>0\) plateau in Fig. 2(a) highlights an important diamagnetic effect, which contributes to the suppression of the number of ground state virtual photons as displayed in Fig. 2(b). An important finding of this work is the stability of the topological edge states, despite the non-locality of the cavity mode. This is seen explicitly in the dimerized limit \(t_{e}=0\), where the edge fermion operators \(c_{0}\) and \(c_{L}\) remain _decoupled_ from the bulk owing to the gauge invariant form of the LMI. We explicitly confirmed the stability of the edge states away from the dimerized limit by studying the case \(t_{o}=1\) and \(t_{e}=0.5\); the resulting electron density \(\langle n_{j}\rangle=\langle c_{j}^{\dagger}c_{j}\rangle\) along a chain of length \(L=200\) for a system at half-filling minus one electron is shown in Fig. 2(c), where the edge states are clearly seen as a positive charge excess on both sides of the SSH chain. Note that the electron densities for different \(g\) are identical within numerical precision, indicating that the light-matter coupling does not affect the electron density at all. Furthermore, an ED analysis [see inset of Fig. 2(c)] confirms the existence of two quasi-degenerate lowest energy states separated by a gap \(\Delta\) that decreases exponentially with increasing system size. The robustness of the edge states in DMRG is verified deep in the ultra-strong coupling regime (\(g\sim 100\)) [37]. The same plateau behavior displayed in Fig. 2 is observed for \(\Delta E\) and \(N_{ph}\) away from the dimerized limit, though the plateaus are not reached as quickly as in the dimerized limit. Moreover, the topologically trivial chain (\(t_{o}<t_{e}\)) displays similar bulk behavior for \(\Delta E\), \(N_{\text{ph}}\) and \(n_{j}\), except for the edge states, which are not present in this case. The spectral features described above are consistent with the structure of entanglement in the many-body ground state found in the DMRG simulation. This can be seen in Figs. 3(a-b) for the dimerized limit, showing the system size scaling of the entanglement entropy (EE) between the photon and the chain of electrons, \(S_{ph}=-\text{Tr}\left[\rho_{ph}\ln\rho_{ph}\right]\), and the EE of half of the electron chain (with the entanglement cut across the strong bond) with the rest of the chain and the photon, \(S_{el}=-\text{Tr}\left[\rho_{el}\ln\rho_{el}\right]\), where \(\rho_{ph}\) and \(\rho_{el}\) are the corresponding reduced density matrices. The observed _area law_ scaling of EE in the presence of light-matter coupling is another key result of this work. The behavior of \(S_{ph}\) in Fig. 3(a) is similar to the saturation of the virtual photons \(N_{ph}\) in Fig. 2(b), as further discussed in Eq. (6).
Moreover, LMI generates an additional contribution to the electronic EE \(S_{el}\), beyond the \(\ln 2\) present at \(g=0\) in the dimerized limit \(t_{e}=0\) (assuming \(L\) is divisible by four such that a non-trivial bond is cut in the bipartition), signifying that electronic states are dressed by the photon while the system remains short-range entangled. The stability of the short-range entangled SPT phase of the SSH chain is further confirmed by the double degeneracy of the entanglement spectrum of \(\rho_{el}\) [18; 19; 20]. The additional EE indicates the presence of interaction-induced correlations in the system: although, as seen in Fig. 2(c), the excess electron density \(\langle\delta n_{j}\rangle=\langle n_{j}\rangle-1/2\) is unchanged by the light-matter coupling, we find that it induces charge fluctuations \(\langle\delta n_{i}\delta n_{j}\rangle\) (for \(i\neq j,j\pm 1\)) with a characteristic \(1/L\) decay while having an infinite correlation length for _fixed system size_, as seen in the constant value of \(\langle\delta n_{i}\delta n_{j}\rangle\) as a function of the separation between sites \(|i-j|\) at fixed system size \(L=200\) (the fluctuations change sign between even and odd values of \(|i-j|\); only even values are shown for clarity). This infinite correlation range persists away from the dimerized limit and at stronger LMI, and is an important signature of the LMI. Figure 3: Entanglement entropy and density correlations of the cavity SSH system in the dimerized limit (\(t_{e}=0\)) found in DMRG for the same parameters as in Fig. 2. (a-b) The entanglement entropies between the photon and the fermions, \(S_{ph}\), and between the right half of the chain and the rest of the system, \(S_{el}\), versus system size. Both exhibit area law behavior in the thermodynamic limit. In the absence of light-matter coupling, \(S_{el}=\ln 2\) due to the non-trivial topology of the SSH chain. (c) Charge density correlation function \(\langle\delta n_{i}\delta n_{j}\rangle\) for \(i\) and \(j\) not belonging to the same dimer; as shown in the inset, the charge correlation function is independent of \(i\) and \(j\) (however, the sign alternates as \(\langle\delta n_{i}\delta n_{j}\rangle\propto(-1)^{i+j}\); not shown in the figure for clarity). The correlation length is infinite for fixed \(L\), with a magnitude that decays as \(1/L\). In the dimerized limit, this behavior follows from a permutation symmetry of Hamiltonian (1) that exchanges pairs of dimers, resulting in many-body states where photons are entangled with a gas of delocalized dimers that mediates such long-range correlation functions. However, despite the constancy of these correlations for fixed system size, the \(1/L\) behavior indicates the absence of long-range order in the thermodynamic limit. _Physical Interpretation of Low Energy States -_ Physical insight into the numerical results can be gained by recasting Hamiltonian (1) as \[H=H_{0}\cos\left[\frac{g}{\sqrt{L}}(a+a^{\dagger})\right]+J\sin\left[\frac{g}{\sqrt{L}}(a+a^{\dagger})\right]+\hbar\omega a^{\dagger}a \tag{2}\] where \(H_{0}=\sum_{j}t_{j}c_{j}^{\dagger}c_{j+1}+\text{H.c.}\) and \(J=\sum_{j}it_{j}c_{j}^{\dagger}c_{j+1}+\text{H.c.}\) are the SSH Hamiltonian and electron current operators in the absence of LMI, respectively. Importantly, the hopping imbalance \(t_{o}\neq t_{e}\) of the SSH chain is responsible for quantum fluctuations (\([J,H_{0}]\neq 0\)) that manifest in the matter sector of the ground state, as follows.
In the dimerized limit, many-body eigenstates of \(H_{0}\) are tensor products of dimer states \(|\psi_{l\pm}\rangle=(|0_{2l-1},1_{2l}\rangle\pm|1_{2l-1},0_{2l}\rangle)/\sqrt{2}\) expressed in occupation number basis, where \(l=1,\ldots,N_{b}=(L-2)/2\) is a dimer index, \(N_{b}\) being the number of non-trivial bonds (recall that the \(j=0,L\) sites decouple from the Hamiltonian). Observe that \(J=\sum_{l}J_{l}\) with \(J_{l}=it_{o}c_{2l-1}^{\dagger}c_{2l}+\text{H.c.}\), where the \(J_{l}\) act as ladder operators on the dimer states: \(J_{l}|\psi_{l\pm}\rangle=\pm it_{o}|\psi_{l\mp}\rangle\). Let us therefore denote product states with dimers \(l_{1},\ldots,l_{n}\) in excited states \(|\psi_{l_{j}+}\rangle\) as \(|\Psi_{l_{1},\ldots,l_{n}}\rangle\). This allows us to identify the Krylov subspace of the ground state of \(H_{0}\), \(|\Psi^{(0)}\rangle=\bigotimes_{l}|\psi_{l-}\rangle\), by successive applications of \(J\). This subspace is spanned by \(|\Psi^{(0)}\rangle\) and the orthonormal states \(|\Psi^{(n)}\rangle\) describing _uniform_ superpositions of all \(\binom{N_{b}}{n}\) states with \(n\) excited dimers, similar to Dicke states [38]: \[\left|\Psi^{(n)}\right\rangle=\frac{1}{\sqrt{\binom{N_{b}}{n}}}\sum_{1\leq l_{1}<l_{2}<\cdots<l_{n}\leq N_{b}}|\Psi_{l_{1},l_{2},\ldots,l_{n}}\rangle\;, \tag{3}\] Importantly, the Hamiltonian can thus be brought into block-diagonal form with one of the blocks acting only on this \((N_{b}+1)\)-dimensional Krylov subspace. At weak coupling, the ground state wavefunction is in the Krylov subspace of \(|\Psi^{(0)}\rangle\) and can thus be expressed as \(|\Xi\rangle=\sum_{n}\xi_{n}|\Psi^{(n)}\rangle|\gamma_{n}\rangle\) where \(|\gamma_{n}\rangle\) are photon states. Furthermore, noting that \(N_{ph}\ll 1\) for small \(g\) as seen in Fig. 2(b), the ground state can be further approximated by capping the photon number to one, yielding an effective Hamiltonian \[H^{\prime}=H_{0}\cos\left(g/\sqrt{L}\right)+J\sin\left(g/\sqrt{L}\right)\sigma^{x}-\frac{\hbar\omega}{2}\left(\sigma^{z}-1\right) \tag{4}\] where the Pauli matrices \(\sigma^{j}\) act on the two-state truncated photon Hilbert space. Upon projecting the Hamiltonian Eq. (4) onto the Krylov subspace, we obtain \(|\gamma_{n}\rangle=|n\mod 2\rangle\) and the matrix equation \[\frac{E+(N_{b}-2n)t_{o}\cos\left(g/\sqrt{L}\right)+(1-(-1)^{n})\hbar\omega/2}{t_{o}\sin\left(g/\sqrt{L}\right)}\,\xi_{n}=i\sqrt{(n+1)(N_{b}-n)}\,\xi_{n+1}-i\sqrt{n(N_{b}-n+1)}\,\xi_{n-1} \tag{5}\] for \(E\) and \(\xi_{n}\), which can be efficiently solved numerically for large system sizes (see the sketch below). Remarkably, all observables including the energy, entanglement entropies and correlation functions obtained by solving (5) are _identical within numerical precision_ to those obtained in DMRG with photon number restricted to at most one, confirming that \(|\Xi\rangle\) is an exact ground state of \(H^{\prime}\). In particular, the number of photons is \(N_{ph}=\sum_{m}|\xi_{2m+1}|^{2}\) and the photon EE takes an intuitive Gibbs form \[S_{ph}=-(1-N_{ph})\ln(1-N_{ph})-N_{ph}\ln N_{ph}\;, \tag{6}\] since either a photon is created or not created in the weak coupling limit. Eq. (6) then relates the area law for the photon EE seen in Fig. 3(a) and the saturation of the photon number \(N_{ph}\) in Fig. 2(b). Analogous but lengthier expressions for \(S_{el}\) and \(\langle\delta n_{i}\delta n_{j}\rangle\) are given in the SM [37].
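Because Eq. (5) couples only neighboring \(\xi_{n}\), the projected Hamiltonian is tridiagonal in the Krylov basis and can be diagonalized directly. The following NumPy sketch is our own illustration of this step under the stated photon-number truncation, not the authors' code; the function name and defaults are assumptions.

```python
import numpy as np

def krylov_ground_state(L, g, t_o=1.0, hbar_omega=2.0):
    """Ground state of Eq. (5) in the (N_b + 1)-dimensional Krylov basis,
    assuming the dimerized limit t_e = 0 and at most one photon (Eq. (4))."""
    Nb = (L - 2) // 2                      # number of non-trivial bonds
    theta = g / np.sqrt(L)                 # dimensionless Peierls phase
    n = np.arange(Nb + 1)

    # Diagonal: dimer excitation energy plus hbar*omega on odd-n (one-photon) states.
    diag = -(Nb - 2 * n) * t_o * np.cos(theta) - (n % 2) * hbar_omega
    # Off-diagonal: current-operator matrix elements from Eq. (5).
    off = 1j * t_o * np.sin(theta) * np.sqrt((n[:-1] + 1) * (Nb - n[:-1]))

    H = np.diag(diag).astype(complex)
    H += np.diag(off, k=1) + np.diag(off.conj(), k=-1)   # Hermitian tridiagonal

    evals, evecs = np.linalg.eigh(H)
    gs = evecs[:, 0]
    N_ph = np.sum(np.abs(gs[1::2]) ** 2)   # weight of one-photon (odd-n) states
    return evals[0], N_ph

# Delta E and N_ph plateau as L grows (compare Fig. 2); E(g=0) = -N_b * t_o.
for L in (40, 100, 200):
    E0, N_ph = krylov_ground_state(L, g=1.0)
    print(L, E0 - (-((L - 2) // 2)), N_ph)
```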
We next perform perturbation theory within the Krylov subspace to leading order in \(g\) by taking \(\xi_{n}=0\) for \(n>1\). Note that the first-order perturbation theory for \(H\) and \(H^{\prime}\) is identical, as only zero- and one-photon states appear in both cases. This yields \[N_{ph}=|\xi_{1}|^{2}=\frac{g^{2}}{4}\frac{1-2/L}{(1+\hbar\omega/2t_{o})^{2}}\,, \tag{7a}\] \[\Delta E=\left(1+\frac{\hbar\omega}{2t_{o}}\right)\hbar\omega N_{ph}\,, \tag{7b}\] \[S_{el}=-(1-N_{ph})\ln\frac{1-N_{ph}}{2}-\frac{N_{ph}}{2}\ln\frac{N_{ph}}{8}\,, \tag{7c}\] \[\langle\delta n_{2l+1}\delta n_{2l^{\prime}+1}\rangle=\frac{N_{ph}}{L-2}\,, \tag{7d}\] with \(l\neq l^{\prime}\) in the last expression; for \(S_{el}\), we further assumed that \(L\) is large. We note that the charge fluctuations are related to current fluctuations in the space of dimer states, since \(\delta n_{2l+1}|\psi_{l\pm}\rangle=\mp iJ_{l}/(2t_{o})|\psi_{l\pm}\rangle\), and in particular \(\langle J_{l}J_{l^{\prime}}\rangle=4t_{o}^{2}\langle\delta n_{2l+1}\delta n_{2l^{\prime}+1}\rangle\) to leading order in perturbation theory. This is in agreement with the recent result in [39], which found that \(S_{ph}=0\) iff \(\langle J_{l}J_{l^{\prime}}\rangle=0\). However, the scaling analysis of the EE and the stability of the topological edge states, which are central results of this work, were not discussed in Ref. [39]. The excess in EE may therefore be measurable in transport experiments, which are sensitive to current fluctuations, for example in a setup proposed in [40]. Eqs. (6) and (7) capture all of the qualitative aspects of the DMRG results shown in Figs. 2 and 3: since \(\Delta E\), \(S_{ph}\) and \(S_{el}\) are determined by \(N_{ph}\), the saturation of \(N_{ph}\) in the thermodynamic limit dictates similar behavior for all other quantities. The EE in particular follows the area law. Because the photon couples to fermions via the total current \(J\), \(N_{ph}\) scales linearly with \(L\); however, it is also proportional to the square of the light-matter coupling strength, \(g^{2}/L\). It is the precise cancellation between these two factors that results in the saturation of \(N_{ph}\). Furthermore, DMRG simulations confirm that the qualitative features of the weak coupling regime described by (6) and (7) persist all the way to the ultra-strong coupling regime [37]. In particular, while more photons are virtually created in the ground state at stronger LMI, the correlation functions \(\langle\delta n_{i}\delta n_{j}\rangle\) in the dimerized limit display long-range behavior consistent with the ground state being spanned by a uniform superposition of dimers belonging to the Krylov subspace of \(|\Psi^{(0)}\rangle\). The identification of this subspace strongly suggests a remarkable connection between light-matter entanglement and Hilbert space fragmentation [41; 42; 43; 44; 45], a scenario worthy of further examination. In summary, we have characterized the effects of light-matter interaction on the SSH-cQED low energy states, employing numerical methods (DMRG, ED) and a low-dimensional Krylov subspace effective theory. We have established the stability of the topological edge states despite long-range correlations induced by the interaction of electrons with a uniformly extended cavity mode. This work highlights how gauge invariance, diamagnetic effects, and electron-photon entanglement give rise to an area law scaling of the entanglement entropy despite the non-locality of light-matter interactions. Extending this approach to higher dimensional topological phases in cavity material systems offers a promising path to classify novel light-matter hybrid states of matter.
We leave such matters for future investigation. ###### Acknowledgements. We thank Claudio Chamon, Raman Sohal, and the participants of the Quantum Science Gordon Research Conference "Many-Body Quantum Systems: From Quantum Computing and Simulation to Metrology and Coherent Light-Matter Hybrids" for useful discussions. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award DE-SC0023327 (D.S. and L.H.S.), and the National Science Foundation, under Award DMR-2132591 (M.C.). DMRG simulations were performed on the mobius cluster at the University of Pennsylvania.
2307.11884
Augmented Symbolic Execution for Information Flow in Hardware Designs
We present SEIF, a methodology that combines static analysis with symbolic execution to verify and explicate information flow paths in a hardware design. SEIF begins with a statically built model of the information flow through a design and uses guided symbolic execution to recognize and eliminate non-flows with high precision or to find corresponding paths through the design state for true flows. We evaluate SEIF on two open-source CPUs, an AES core, and the AKER access control module. SEIF can exhaustively explore 10-12 clock cycles deep in 4-6 seconds on average, and can automatically account for 86-90% of the paths in the statically built model. Additionally, SEIF can be used to find multiple violating paths for security properties, providing a new angle for security verification.
Kaki Ryan, Matthew Gregoire, Cynthia Sturton
2023-07-21T19:58:59Z
http://arxiv.org/abs/2307.11884v2
# Augmented Symbolic Execution for Information Flow in Hardware Designs ###### Abstract We present _SEIF_, a methodology that combines static analysis with symbolic execution to verify and explicate information flow paths in a hardware design. SEIF begins with a statically built model of the information flow through a design and uses guided symbolic execution to recognize and eliminate non-flows with high precision or to find corresponding paths through the design state for true flows. We evaluate SEIF on two open-source CPUs, an AES core, and the AKER access control module. SEIF can exhaustively explore 10-12 clock cycles deep in 4-6 seconds on average, and can automatically account for 86-90% of the paths in the statically built model. Additionally, SEIF can be used to find multiple violating paths for security properties, providing a new angle for security verification. ## 1 Introduction Analyzing how information flows through a hardware design is critical to verifying the security of the design [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. Unwanted flows of information to or from a signal in the design can violate desired security policies in the form of access authorization violations [19, 20], memory leakage vulnerabilities [21, 22], and possible privilege escalation vulnerabilities [23]. Recent work has shown that symbolic execution is a powerful tool for analyzing how information flows through a hardware design [21, 24, 25]. Symbolic analysis is precise and enables tracking both direct and indirect flows of information through each path of execution without instrumenting the design with added tracking logic. Unfortunately, symbolic execution infamously suffers from the path explosion problem. The number of paths through a design grows exponentially with the number of branch points in the design. Hardware designs have the added complexity of reasoning about paths over multiple clock cycles in order to realize complete flows of information from an input port (source) signal to an output port (sink) signal. Current solutions to the path explosion problem have been to consider small, but security critical designs [24, 25], or to constrain the hardware design space by analyzing how information flows for a particular software program [21, 26]. We take a different approach. We start with static analysis, using an existing tool [27] to build a graph that over-approximates how information flows through a design. Such a graph is useful for designers, allowing them to explore their design and find possible illegal or insecure flows. For our purposes, the graph provides useful information that can be used to guide symbolic execution: a sequence of landmark points in the hardware design that execution must reach in order to realize a given path of information flow. Using these landmarks as a guide, we use symbolic execution to improve the graph's efficacy: finding a realizable path through the design state, along with the inputs needed to take that path, corresponding to a path in the graph; recognizing and eliminating from the graph paths which are unrealizable in execution; and recognizing and eliminating from the graph paths which are realizable, but do not represent a true flow of information. This paper presents _SEIF_ (pronounced "safe"), a toolflow that combines symbolic execution with static analysis in the form of the information flow graph. SEIF takes as input the statically built information flow (IF) graph and the source signals of interest in the design. 
Three outcomes are possible: 1. SEIF finds that the path is unrealizable or does not represent a true flow of information, and it requires no further scrutiny from the security engineer, 2. SEIF returns a sequence of input values that will drive the design along the IF-graph path to realize the flow of information, or 3. the complexity of the search space leaves the IF-graph path unaccounted for. To find that a path is unrealizable or does not represent a true flow of information, SEIF uses two mechanisms: a check for mutually contradictory constraints and symbolic analysis. If the first mechanism reports that a path is unrealizable, then it is, regardless of the number of clock cycles the design is allowed to run. If the second mechanism reports that a path is unrealizable, then it is unrealizable within the clock cycle bound used by SEIF. In our experiments, this was the case for 5-7% of the paths. To find and return a sequence of input values that will drive execution along an IF-graph path, SEIF uses symbolic execution guided by the IF graph and heuristics we develop. The returned sequence of input values will drive execution along the IF-graph path, either starting from the design's reset state or from an intermediate state. In our evaluation (Section 7) we differentiate these two cases. In this paper, we develop SEIF, an algorithm and tool to search for and eliminate false paths of information flow from a static analysis of a hardware design and then to further explicate the paths that remain. We show that by using the static analysis as a guide, we can steer symbolic execution toward more probable paths and eliminate impossible paths early. Our contributions are: * Define _SEIF_, an augmented symbolic execution methodology for information flow analysis. * Implement the methodology and search heuristics on top of the symbolic execution engine discussed in [28]. * Evaluate the augmented symbolic execution strategy on four open-source designs. ## 2 Threat Model Information-flow analysis is a part of the security validation activities [29] that take place during the design phase of the hardware lifecycle [30]. The goal is to find weaknesses, vulnerabilities, and flaws in register transfer level (RTL) designs that may be exploitable post-deployment. Flaws that result from logic and physical synthesis tools, manufacturing, or the supply chain cannot be discovered by SEIF. We target flaws that arise from benign human error in the specification, design, or implementation phases. Our analysis may find maliciously inserted flaws, but they will have a lower chance of being uncovered than benign flaws, as the attacker will likely take steps to hide their work so that the security engineer does not recognize the malicious flow of information as dangerous. Flaws maliciously inserted after the security validation is complete, e.g., analog Trojans [31], cannot be discovered by SEIF. ## 3 Problem Statement We approach the problem of information-flow analysis by transforming it into a graph reachability problem over a labeled, directed graph representing signal connectivity, extracted from the Verilog RTL design. We use symbolic execution of the RTL to determine which paths through the labeled directed graph represent true flows of information through the design in execution. Given a hardware design and a particular input signal of interest, the goal is to return: 1. the set of realizable information flows through the design originating at that signal; and 2.
for each found information flow, return a sequence of input values to the design that will drive the information flow. ## 4 Preliminaries It is useful to keep in mind three models: the state diagram of the design showing machine states and transitions between them; the labeled, directed, signal-connectivity graph, which we call the _Information Flow (IF)_ graph; and the symbolic execution (SE) tree, showing execution paths through the RTL along with the associated (symbolic) states and path conditions. We describe these three in the following sections, but first we introduce a fragment of Verilog RTL as a toy example to help illustrate the three models. ### _Toy Example_ The code snippet of Figure 1 shows a flow from an input, secret, to an output, led. The flow is guarded by an internal, state-holding variable, and the secret will only flow to the LED output in the clock cycle after state\(=3\). Note that with non-blocking assignments ("\(<=\)") all right-hand side expressions are calculated at the same time and assignments take effect at the next clock cycle. Blocking assignments ("\(=\)") take effect immediately. ### _State Diagram_ We model a hardware design as a tuple, \(D=(S,s_{0},I,\delta,\omega)\), where * \(S\) is the set of states of the design; * \(s_{0}\in S\) is the initial state; * \(I\) is the finite set of input strings; * \(\delta:S\times I\to S\) is the transition function; * \(\omega:S\to O\) is the output function. A state \(s\in S\) is a vector of valuations to state-holding internal variables of the design, \(s=\langle v_{0},v_{1},\ldots,v_{|s|}\rangle\). We use \(v_{i}\) to indicate the variable and \(\langle v_{i}=x\rangle_{s_{j}}\) to indicate that the value of variable \(v_{i}\) is \(x\) in state \(s_{j}\). As shorthand, we sometimes use \(v_{i}\) to refer to both the variable and its value, when it is clear in the text what we mean. The design powers up in the initial state, \(s_{0}\). Many state-holding variables are reset to 0 in the initial state. An input string \(i\in I\) is a concatenation of values to input variables of the design. Inputs are provided on every clock cycle. Similarly to state-holding variables, we refer to the value of input variable \(v_{j}\) at any given clock cycle as \(\langle v_{j}=x\rangle\) or simply \(v_{j}\). The clk signal is a special input that synchronizes reading input values and state transitions, which happen on clock cycle edges. The output function is the identity function over a subset of the design's variables. Figure 1: Toy example. clk, enable, and secret are input **wires**. state, prev, and guard are state-holding **regs**. Not shown is the initialization, which sets state, prev, and guard to 0. led is an output wire. secret flows through guard to led after four clock cycles. For example, Figure 2 shows a sequence of state transitions for the toy example, starting with the initial state, in which information flows from secret to guard to output variable led. In this example, the initial state \(s_{0}=\langle\texttt{prev}=0,\texttt{state}=0,\texttt{guard}=0\rangle\) produces output \(\omega(s_{0})=\langle\texttt{led}=0\rangle\), and transitions to state \(s_{1}=\langle\texttt{prev}=0,\texttt{state}=1,\texttt{guard}=0\rangle\) when enable is high on a positive clock edge.
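To make this walk-through concrete, here is a small Python sketch of the toy design's clocked behavior. Figure 1's exact RTL is not reproduced in the text, so this reconstruction, including the update gating and the output expression, is inferred from the description and should be read as an assumption.

```python
def step(regs, enable, secret):
    """One positive clock edge of the toy design of Figure 1 (reconstructed):
    state advances and prev trails state when enable is high, and guard
    latches secret in the cycle where state == 3."""
    nxt = dict(regs)              # non-blocking: right-hand sides use old values
    if enable:
        nxt["state"] = regs["state"] + 1
        nxt["prev"] = regs["state"]
    if regs["state"] == 3:
        nxt["guard"] = secret     # line 8: secret flows into guard
    return nxt

regs = {"state": 0, "prev": 0, "guard": 0}       # initial state s0
for _ in range(4):                               # four clock cycles, enable high
    regs = step(regs, enable=1, secret=0xA5)
led = regs["guard"] if regs["prev"] == 3 else 0  # assign led = (prev == 3) ? guard : 0
print(hex(led))                                  # 0xa5: the flow is realized
```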
### _Information Flow (IF) Graph_ The Information Flow (IF) graph is a labeled, directed graph that captures signal connectivity and provides additional information, taken from the Verilog source, about the conditions under which two signals are connected [27]. Nodes represent the variables (wires and regs) of the design, and edges indicate a possible flow of information from one variable to another. An edge \((v_{1},v_{2})\) exists when either there is an assignment from \(v_{1}\) to \(v_{2}\) (e.g., \(v_{2}<=v_{1}\)) or \(v_{1}\) appears in a condition (e.g., \(\texttt{if}(v_{1})\)), and \(v_{2}\) appears on the left-hand side of an assignment in either branch. The edge is labeled with the line number of the relevant Verilog statement and lists the surrounding conditions in the code that must be true for the information flow to take place. For example, in Figure 3, which shows the IF graph for the code in Figure 1, the edge \((\texttt{secret},\texttt{guard})\) would be labeled with the condition that \(\texttt{state}==3\). Note that this graph inherently has no notion of timing or clock cycles. Each edge in the IF graph represents a viable 1-hop flow of information in the design. However, multi-hop paths through the IF graph may not correspond to viable information flows. In other words, if we view the IF graph as an information-flow relation, taking the transitive closure of the relation yields an over-approximation of information flow through the design. To demonstrate, consider the code in Figure 1, but with the last line replaced with the following: assign led = (prev == 2)? guard : 0; The IF graph would have the same nodes and edges, but the path from secret to guard to led does not correspond to any flow of information through the design. There are two reasons why a path through the IF graph may not correspond to a true information flow. The first is that, as in the example above, the sequence of conditions needed for each edge cannot be satisfied. The second is that a path through the IF graph from \(x\) to \(y\) may not correspond to a true flow of information in the sense that the value of \(y\) depends on the value of \(x\). A common example of this is the assignment \(y=x\oplus x\). ### _Symbolic Execution_ In symbolic execution, concrete input values are replaced with abstract symbols. The design is executed using the symbols in place of literals. When a branch point (e.g., if(enable)) is reached, both paths are separately explored. For each path, the branching condition that must be true for that path (e.g., enable == 1) is maintained in the _path condition_. At the end of a single path of symbolic execution, satisfying assignments to the constraints in the path condition can be used as concrete input values to drive concrete execution down that same path. Symbolic execution is modeled as a directed tree. Each node \(n\) in the tree is associated with a line of code in the design, and with a symbolic state, \(\sigma\), and path condition, \(\pi\). A node's children are the possible next lines of code to symbolically execute. A path from the root node to any leaf node represents a realizable path through the design. The number of paths to explore grows quickly. For example, the symbolic execution of the design in Figure 1 for the four clock cycles necessary to find the information-flow path from secret to led would yield the tree of nodes shown in Figure 4. A single path through the IF graph can correspond to many paths through the symbolic execution tree.
For example, enable can remain low for \(0,1,2,\ldots\) clock cycles between each update to state. Each of these options represents a separate path through the symbolic execution tree. This example, although simple, is not all that contrived. It may be that state in another module of the design can take varying time to compute an action before enable becomes high again. Two problems become apparent: 1. The choice of path in the current clock cycle can determine whether there exists a path in a future clock cycle that will allow the flow of information to continue. 2. Once exploration starts down one path, it is not clear at what point - after how many clock cycles - the current path should be abandoned as incorrect, and a new path should be tried. For example, there are infinitely long paths in which prev never gets to 3 and enable remains low. Figure 3: An IF graph for the code in Figure 1. Dashed lines represent implicit flows of information and solid lines represent explicit flows. Labels are omitted for space. Figure 2: State transitions of the toy example (Figure 1) in which information flows from secret to led. ## 5 Methodology: Symbolic Execution for Information Flow Given a design and an input signal of interest, src, our goal is to find how information can flow during execution from src through the design. Our approach is to first use the IF graph to enumerate all potential paths of information flow through the design from src. As this is a static analysis, complexity grows linearly with the number of variables in the design and the length of the RTL code. Then, for each enumerated path, SEIF uses symbolic execution to either find a corresponding information-flow path through the design, or determine that no such path exists. ### _Overview_ Once the IF graph is generated, the analysis proceeds in three main phases: pruning globally unrealizable paths, symbolically executing the design to find realizable paths through the design, and analyzing the semantics of each found path to find true paths of information flow. In the following sections, we describe each phase in more detail. If SEIF returns a path, it is a true path through the design corresponding to the path in the IF graph. Depending on the post-processing option used, this will either be a path starting at the design's reset state or an intermediate state. If SEIF does not return a path, there are three possibilities. First, the path in the IF graph has been identified as infeasible within a bounded number of clock cycles. Second, the path in the IF graph is feasible in the design, but does not represent an actual flow of information - this result is sound with one caveat discussed in Section 5.4. Third, the path in the IF graph cannot be accounted for. These options are discussed in Section 5.3.5 and evaluated in Section 7.2. ### _Pruning Globally Unrealizable Paths from the IF Graph_ In the first phase, our goal is to quickly and cheaply eliminate paths through the IF graph that are easily falsified before moving on to the next, more expensive phase. Consider the example code in Figure 5. The variable temp carries the input secret only when the input signal enable is high. The secret information is conditionally passed on to result and from there to led2. The corresponding IF graph is shown in Figure 6. While the IF graph appears to show a flow of information from secret to led2 via temp, the constraint for edges (secret, temp) and (temp, result) require enable to be high and low, respectively. 
Since both edges must occur in the same clock cycle, this flow cannot be realized. This analysis requires knowing where clock cycle boundaries are. In the IF graph, an edge corresponding to a nonblocking assignment (for example, result \(<=\) temp) denotes a clock cycle boundary. When state is updated in one clock cycle, the updated value can be read in the next clock cycle. At the start of this phase, the given path through the IF graph is divided into _segments_. One segment of an IF-graph path is a sequence of hops in the IF graph. These hops could be any implicit or explicit flows. However, the explicit non-blocking assignments are of particular interest in determining how we should break the IF path into segments. Each non-blocking assignment marks exactly where we reach a clock cycle boundary in the IF path, and thus we break off a new segment after that flow. If a path has \(n\) nonblocking assignments, it has \(n+1\) segments. Figure 4: Symbolic execution tree of the design in Figure 1 after four clock cycles. Figure 5: A toy example illustrating globally unrealizable paths. clk, enable, and secret are input wires and led2 is an output wire. result is a state-holding reg. secret cannot flow through temp to result and led2. Figure 6: The partial IF graph for the code shown in Figure 5, showing only the paths through temp. Although the graph shows a path from secret to led2, an SMT query finds that the constraints along the path will never be co-satisfiable. Let us take the following IF path in Figure 6 as an example: \(\langle(\texttt{secret},\texttt{temp}),(\texttt{temp},\texttt{result}),(\texttt{result},\texttt{led2})\rangle\). This path has two segments. The first segment is the two-hop sequence, \(\langle(\texttt{secret},\texttt{temp}),(\texttt{temp},\texttt{result})\rangle\), made up of a continuous assignment and a non-blocking assignment. The second segment, \(\langle(\texttt{result},\texttt{led2})\rangle\), is a single hop and a continuous assignment. For every segment in a given IF-graph path, the conditions involved in that segment are collected and checked for co-satisfiability. If the hops in any one segment have mutually contradictory constraints, that path is discarded. In Figure 6, the segment \(\langle(\texttt{secret},\texttt{temp}),(\texttt{temp},\texttt{result})\rangle\) has contradictory constraints, as the first hop requires that enable is high, while the second hop requires it to be low. This pruning analysis is sound--only unrealizable paths are discarded--as long as the co-satisfiability check considers only state-holding signals and input signals in the satisfiability query, as these signals do not change value in the middle of a clock cycle. A minimal sketch of such a query is given below.
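For the contradictory segment above, the co-satisfiability query can be posed to an off-the-shelf SMT solver. The following sketch uses Z3's Python bindings; it is our illustration of the check, not SEIF's implementation.

```python
from z3 import Bool, Not, Solver, unsat

enable = Bool("enable")  # input signal: cannot change within one clock cycle

# Conditions collected along the segment <(secret, temp), (temp, result)>:
# the first hop requires enable high, the second requires enable low.
segment_conditions = [enable, Not(enable)]

solver = Solver()
solver.add(*segment_conditions)

if solver.check() == unsat:
    print("segment constraints are contradictory -> discard this IF-graph path")
```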
In each clock cycle, the symbolic execution engine is restricted to following only those design paths which include the lines of code that must be executed for the current IF-path segment to be realized. For example, in Figure 1, the symbolic execution engine only considers paths which take the if branch at line 8, when \(\texttt{state}==3\). By doing so, the search space is significantly reduced. However, there may still be many possible paths through the design to consider, only some of which allow the complete IF path to be realized. Continuing with our example, Figure 7 shows the symbolic execution tree for one clock cycle of the code in Figure 1. Each node in the tree represents a line of code, or non-branching sequence of code (e.g., lines 3-4) to be executed. The path of interest, this time annotated with which line of code needs to execute for each hop to be realized, is \(\langle(\texttt{secret},\texttt{guard})_{\mathrm{line}~{}8},(\texttt{guard},\texttt{led})_{\mathrm{line}~{}13}\rangle\). Examining the symbolic execution tree in Figure 7, it would appear that two of the four possible paths achieve the desired flow. But annotations in the IF graph tell us that the sequence of conditions \((\texttt{state}==3)_{s3},(\texttt{prev}==3)_{s4}\) needs to be met. For that to happen, lines 3-4 need to execute in the first four clock cycles and lines 8, 13 need to execute in only the fourth clock cycle. While this is easy to see when examining the state transition diagram (Figure 2), there is nothing in the IF graph, or even the code itself, indicating that it will take four clock cycles to realize this flow. Finding the desired path through the multi-clock-cycle symbolic execution tree is a search problem. We discuss the search strategies we developed to guide search in SEIF in Section 5.3.4. #### 5.3.2 Pruning Unrealizable Paths at Clock Cycle Boundaries As a first strategy, the symbolic execution engine prunes unrealizable paths at each clock cycle boundary. At each clock cycle, the engine first checks the co-satisfiability of the conditions required in the current IF segment, similar to the check done to prune globally unrealizable paths (Section 5.2). However, this time the SMT query includes the current symbolic state along with the conditions required for the IF segment. As with the global pruning step, the check considers only the state-holding variables in the segment conditions, as the value of combinational logic variables may change during the course of a clock cycle. Continuing with our example from Figure 1, at the start of the initial clock cycle, the symbolic execution engine checks whether the condition required for the first hop in the IF graph (\(\texttt{state}==3\)) is mutually contradictory with the initial symbolic state (in which \(\texttt{state}==0\)). Indeed, it is, and the symbolic execution engine discards any paths that would include line 8, the line of code required for the first hop in the IF graph.1 At this point, SEIF recognizes that realizing the first segment of the IF graph at the current state (state \(s0\)) is infeasible. Footnote 1: Discarding these paths can be done prior to exploration of any paths in the current clock cycle, as the engine has information from the design's statically built control flow graph about which lines of code are included in which path.
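Following the footnote above, the static restriction itself needs no solver at all; a minimal sketch, with our own illustrative data encoding (not SEIF's internals):

```python
def candidate_paths(design_paths, required_lines):
    """Keep only design paths (each given as the set of line numbers it
    executes, taken from the statically built control flow graph) that
    cover every line the current IF-path segment requires."""
    required = set(required_lines)
    return [path for path in design_paths if required <= set(path)]

# For Figure 1, the first hop requires line 8, so only paths taking the
# if-branch at line 8 survive this filter.
paths = [{3, 4, 8, 13}, {3, 4, 11, 13}]
print(candidate_paths(paths, required_lines=[8]))  # [{3, 4, 8, 13}]
```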
#### 5.3.3 Stalling the IF-Graph Path to Advance to a New Machine State The second strategy used by SEIF is to pause the search for realizing a segment of the IF path in order to advance the design to a next-state when needed. In our example, the first segment of the IF graph cannot be realized from the initial reset state. SEIF symbolically executes the design for a single clock cycle, without considering the constraints required by the next IF path segment, to advance the design to a new state. SEIF then checks whether the IF graph segment can be realized from this new state. Figure 7: Symbolic Execution Tree of Paths. There are many possible next states and SEIF must find one that satisfies two criteria: 1. The next state advances the design toward a state in which the next IF segment can be realized, and 2. The next state does not undo any prior progress along the IF graph path that has already been made. We discuss search strategies for finding valuable next-states in the next section. The second criterion is trickier. During normal execution, it is likely that information written to a reg in one clock cycle gets overwritten in a subsequent clock cycle. For example, consider the code in Figure 8, which is similar to that of our first example (Figure 1), but made slightly more complex by the addition of two new registers: guard0 and clear. The IF path of interest is now from secret to guard0 to guard to led. To achieve the second flow segment, \(\langle(\mathtt{guard0},\mathtt{guard})\rangle\), SEIF needs to first advance the design to a state \(s^{\prime}=\langle\mathtt{state}==3\rangle\). However, it is important that while the design advances to state \(s^{\prime}\), the clear signal is never set, as a 0 written to guard0 would undo the information flow from secret to guard0 from the prior IF path segment. SEIF uses information from the IF graph to _stall_ the information flow while advancing the design to a next-state. We define stalling as symbolically executing the design for a single clock cycle, such that the design transitions to a next state, but the position along the IF path remains unchanged. To stall, SEIF prevents the symbolic execution engine from considering any paths of execution that will undo information flow from prior segments in the IF path. To do this, SEIF considers the node \(n\) in the IF path, in which information currently "resides." In our current example, this would be the node guard0. SEIF then uses the IF graph to find all edges incident to node \(n\), which represent flows of information from variables in the design to \(n\) and are associated with lines of code. Explicit flows need to be prevented during stalling, but implicit flows do not need to be prevented, as they do not cause the value in \(n\) to be overwritten. SEIF avoids exploration of any paths through the design which would execute a line of code in which \(n\) is written to. In this way, the information in \(n\) is not lost while stalling. There are two edge cases to consider. The first is self-loops. Direct flows from \(n\) to \(n\) (e.g., \(\mathtt{n}<=\mathtt{n}+1\)) are allowed, as the information in \(n\) stays in \(n\). The second is the case when \(n\) is assigned a constant (e.g., \(\mathtt{n}<=1\)). SEIF checks this corner case during symbolic execution and abandons any path in which it occurs. If the assignment by a constant happens regardless of the rest of the state, then stalling cannot occur at this point in the IF graph.
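A minimal sketch of how the blocked lines for stalling could be computed from the IF graph (the tuple encoding of edges is our own illustration, not the hyperflow-graph tool's actual schema):

```python
def stall_blocked_lines(edges, node):
    """Lines that must not execute while information is stalled at `node`.

    edges: (src, dst, kind, line) tuples from the IF graph. Explicit flows
    into `node` would overwrite the information residing there, so their
    lines are blocked; implicit flows and self-loops (e.g., n <= n + 1)
    are allowed. Constant assignments to `node` are caught separately
    during symbolic execution, as described above.
    """
    return {line for (src, dst, kind, line) in edges
            if dst == node and kind == "explicit" and src != node}

# Figure 8-style situation (line numbers hypothetical): both explicit
# writes to guard0 are blocked; the implicit flow from clear is not.
edges = [("secret", "guard0", "explicit", 5),
         ("zero", "guard0", "explicit", 9),   # guard0 <= 0 when clear is set
         ("clear", "guard0", "implicit", 9)]
print(stall_blocked_lines(edges, "guard0"))   # {5, 9}
```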
Figure 8: A design demonstrating the challenges of stalling. Because of stalling, the number of clock cycles needed to verify the information flow may exceed the length of the IF path. #### 5.3.4 Search Strategies The goal is to find a sequence of design states, and corresponding input values, that correspond to an IF path, or determine that no such sequence exists. The search space is large; an IF path with \(n\) segments requires at least \(n\) clock cycles through the design. When stalling is needed, the number of clock cycles required is unbounded (although finite). Information from the IF graph is used to prune the symbolic execution tree at each clock cycle, but a single IF hop can correspond to many paths through the symbolic execution tree. This is because a segment of the IF path requires only a small number (typically fewer than 5) of lines of code to be executed. The input space is partially constrained to ensure those few lines of code are executed, but most of the input space is unconstrained, and therefore there is freedom in how most of the design is explored at each clock cycle. We developed and implemented four search strategies: * Continue / Stall Only * Backtracking Only * Stalling with Backtracking * Stalling with UNSAT Core Heuristic **Baseline 1: Continue / Stall Only**. The key idea behind this strategy is that, for each segment, we can either symbolically execute until a design path is found in which the segment conditions are satisfied (termed a _continue_), or we can stall for some bounded number of cycles. For an IF path, we build and exhaustively search a list of all possible _continue_, _stall_ combinations. If SEIF is unable to complete the IF path for a given continue-stall pattern, it moves on to the next pattern. The list of continue-stall combinations is in truth-table order to allow the SEIF engine to explore as deeply as possible first, aiming to verify the shortest path possible with no stalls. In this context, depth equates to the number of IF-path segments successfully traversed, and for which SEIF has realized a partial path of execution. **Baseline 2: Backtracking Only**. In this search strategy, SEIF begins by symbolically executing for one clock cycle for the first segment. If the flow is found, SEIF moves to the next segment in the IF path. If at any segment the flow is not found in some bounded number of clock cycles, or there are no more design paths to try, SEIF returns (or _backtracks_) to an earlier segment to find a different path that satisfies the same segment conditions. **Stalling with Backtracking.** This strategy is a hybrid of baselines 1 and 2. For any given _continue_, _stall_ pattern, after successfully executing consecutive _continues_, and reaching a _stall_, SEIF stalls for a bounded number of clock cycles and attempts to find execution paths where SEIF can make forward progress in the next segments. If all symbolic execution paths are explored, or SEIF times out (according to some pre-determined bound), it backtracks. **Stalling with Heuristic**. This strategy builds on top of stalling with backtracking. Our heuristic relies on the _UNSAT core_, the subset of constraints in a SAT query for which no satisfying assignment exists. If SEIF stalls, it is searching for a new machine state that will satisfy the conditions of the next IF path segment. In this case, SEIF pushes the symbolic state and the constraints from the next segment to the SMT solver, which returns the UNSAT core.
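The core query can be sketched with z3's tracked assertions (a simplified illustration; SEIF's actual bookkeeping is more involved):

```python
from z3 import Solver, Bool, unsat

def unsat_core_size(symbolic_state, segment_conditions):
    """Size of the UNSAT core of the segment conditions against the
    current symbolic state, or 0 if they are co-satisfiable (i.e., the
    next segment is already realizable from this state)."""
    solver = Solver()
    solver.add(symbolic_state)  # hard constraints from the explored path
    for i, cond in enumerate(segment_conditions):
        solver.assert_and_track(cond, Bool(f"seg_{i}"))
    if solver.check() == unsat:
        return len(solver.unsat_core())
    return 0
```

While stalling, a shrinking core between candidate next-states is taken as evidence of progress toward realizing the segment.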
For each path explored while stalling, SEIF checks whether the UNSAT core has become smaller. If it has, SEIF continues searching for a new machine state along the path. If it grows, SEIF prioritizes the next candidate stall path. #### 5.3.5 Post Processing to Find Reset. SEIF begins exploration from a symbolic state, and therefore the design paths it generates inputs for may not start from the reset state. We mitigate this by checking whether the found design path has constraints that conflict with the design's reset state. If not, the path can start from reset. If so, the path starts from an intermediate state of the design, and SEIF cannot guarantee that it is a reachable state. Most often, SEIF finds paths that can start from reset, and we evaluate this in Section 7.2. ### Semantic Analysis to Identify True Information Flows Once per execution path, SEIF performs a semantic analysis check to prune flows that represent viable design paths, but not true flows of information. This can happen when a textual flow does not represent an information flow. For example, y <= x xor x would yield a path showing x flows to y even though there is no flow from x to y. SEIF prunes explicit textual flows which do not represent information flows. If there is an implicit textual flow that is not a true information flow, SEIF cannot eliminate that false positive. For example, if (x XOR x) y <= 0; else y <= 1; (Here, there is no path in which y is set to 0, and SEIF does recognize that.) In the case of reconvergent fan-out, SEIF may or may not find the flow. In the example of Figure 9, x is an input and blocks 2 and 3 represent different areas of the design (i.e., modules, always blocks). There are four cases to consider: 1. The writes to y and y' are both unconditional and there is no flow from x to z because \(\mathbf{z}=3\mathbf{x}-3\mathbf{x}\). SEIF performs the check and correctly detects no flow. 2. The writes to y and y' are conditional, and depend on the same conditions. SEIF detects there is no flow. 3. The write to y' is conditioned on something that is mutually UNSAT with the condition for y. In this case, there is always a flow from x into z, and SEIF detects it. 4. The write to y' is conditioned on something mutually satisfiable with the condition for y, where the condition for y is different. If SEIF follows a design path where both conditions are true at the same time, it detects no flow, while there may be other design paths through block 3 which would enable a flow, and vice-versa. Unless SEIF is able to exhaustively explore, it may report an incorrect result. ## 6 Implementation We implemented SEIF using the Sylvia symbolic execution engine [28] and using hyperflow graphs [27] as our IF graph engine.2 Both Sylvia and the hyperflow graph toolchain were built using python3. Sylvia implements Verilog semantics according to the IEEE 1364-2005 standard, using pyVerilog and the Z3 solver for SMT solving. SEIF also uses Z3 for preprocessing and path removal. Footnote 2: We contacted the authors of the hyperflow graph paper and they gave us closed-box access to the tool, providing the static analysis for the designs we gave them. The authors of the Sylvia symbolic execution engine gave us source-code access to their tool. When considering information flow paths that span multiple modules, enumerating all possible paths for even a single source/sink pair becomes too expensive. We manage this complexity by following the divide-and-conquer approach of Ryan et al. [28].
SEIF first finds the partial IF paths within a module, and then uses the segment conditions to find the next module to explore. SEIF uses the SMT solver to ensure that the path fragments can be stitched back together to form a valid information flow path from source to sink. This approach reduces repeated work within a module when exploring paths across multiple modules. Figure 9: Reconvergent flows. ## 7 Evaluation We evaluate SEIF over four open-source designs to study its viability as a means for accounting for information flows
Paths that start at the reset state are better for the engineer as they can be immediately replayed from the known reset state. ### _Evaluation of Search Strategies_ In the following, we evaluate the four search strategies discussed in Section 5.3.4. Figure 11 reports, for each design, the percentage of IF paths found by each of the four search strategies. These are paths for which SEIF found a corresponding path through the design. As expected, the heuristic-guided search outperforms the other strategies in all three designs, improving over the baselines by 26% on average and over bounded stalling with backtracking by 11% on average. We note that baseline 2, which does not include stalling, is the least successful at finding corresponding paths in the design. This highlights the value of SEIF: many IF paths give an incomplete picture of a path through the design and include points where the design must advance to a new state before the IF path can continue. Without SEIF, it would be up to the engineers to figure out how and whether to advance the design state. Figures 12 and 13 report on the performance of the four search strategies, both in terms of average time taken to find a corresponding design path and average number of clock cycles through the design for the found path. Again, the heuristic-guided search outperforms the other strategies, completing the search for each IF path in 3-6 seconds. Figure 14 shows that the amount of backtracking that is required is lowered when we incorporate bounded stalling. Adding the heuristic improves the efficacy of stalling and therefore decreases backtracking even further. Fig. 10: Accounting for IF Paths. Fig. 11: Finding Design Paths Corresponding to IF Paths. To better understand how SEIF is finding flows over time, we explore all IF paths from a single source signal, the program counter, in the MSP430. We track how many IF paths are found in the design after 1 clock cycle of search, 2 clock cycles of search, etc. The experiment was done with heuristic-guided stalling turned on. Figure 15 shows the results. There were a total of 19060 IF paths, and SEIF found design paths for 89.93% of them. The complete search took 16 clock cycles; however, most of the paths were found within the first 8 clock cycles. The experiment took 3.5 days to run. #### 7.3.1 Determining the Stall Bound For all the experiments in the previous sections, the number of stalls per IF path segment was set to be 5, 5, and 4 for baseline 1, bounded stalling with backtracking, and the UNSAT core heuristic, respectively. (As a reminder, baseline 2 is backtracking only, with no stalling.) We determined these numbers empirically by selecting at random 5 of the security-critical source signals from the OR1200, and for each of these source signals selecting at random 300 paths to evaluate, and then running the experiments with an increasing number of stalls allowed until we saw the number of IF paths found begin to flatten out. Finding the bound for the heuristic-guided stalling strategy is shown in Figure 16. The graphs for the other three search strategies are in the appendix. ### Eliminating Information Flow Paths We examine how IF paths that do not correspond to information-flow paths through the design are falsified in Figure 17. The experiment used the 300 randomly chosen paths for the 20 security-critical signals in the OR1200. The largest percentage of eliminated paths is found statically before symbolic execution begins.
This is good news, as that is the cheapest and quickest phase of the analysis. A non-trivial portion, 5% to 7%, is eliminated because the paths do not represent true flows of information through the design. SEIF's use of symbolic execution allows for this precise analysis, which taint tracking may not be able to provide. Figure 12: Time to Find Design Paths. Figure 13: Clock Cycles to Find Design Paths. Figure 14: Frequency of Backtracking. Figure 15: Finding design paths over time. Figure 16: Finding the stall bound. ### Case Study: Security Property Verification When starting with a property, such as is often done in security verification tasks, SEIF goes beyond producing a single counterexample. In traditional, assertion-based formal methods, once the formal or bit-level engine produces the first counterexample, it takes manual manipulation of the property or environment to generate subsequent violating traces. SEIF is able to find multiple realizable traces through the design that exhibit the vulnerable behavior and can guide the security engineer to other areas of the design they may be interested in exploring. We demonstrate the approach for two security-critical properties from the TrustHub Security Property/Rule Database [7, 35], one for the MSP430 and one for an AES implementation. The MSP430 property asserts that the program counter's value should not be readable from the debug access port during normal operation. The AES property verifies that the secret key material is not accessible to any unprivileged internal data registers [42]. SEIF generates all the paths from the source of interest to the security-critical sink automatically. In order to produce the violating paths, SEIF adds a constraint to the solver specifying the desired precondition. If we find a candidate violation of the security property, we ensure it is replayable from the reset state of the design. The results for the MSP430 and AES are presented in Tables 1 and 2, respectively. ## 8 Related Work **Symbolic Execution of HW Designs for Information Flow Analysis**. EISec uses netlist-level symbolic execution to verify information-flow safety and quantify confusion and diffusion in cryptographic modules [25]. Our work improves upon EISec by allowing analysis at the RT-level and enabling verification of a wider class of information-flow properties. Other tools use symbolic simulation (e.g., [43]) to verify particular binaries running on the hardware [21, 24]. **Symbolic Execution of SW for Information Flow Analysis**. The software community was perhaps the first to leverage symbolic execution to verify information flow. The approach has been used in combination with taint tracking [44], to find and mitigate side channels [45, 46, 47, 48], and to identify programs that are vulnerable to transient execution attacks [49]. **Symbolic Execution of SW or HW to Find Exploitable Flaws**. There is a long history of using symbolic execution in software to find exploitable security flaws (e.g., [50, 51, 52]). In hardware, symbolic execution has been used to find violations of and exploits for security-critical assertions [53] and to find and trigger trojans in the Verilog RTL [54]. As with SEIF, the main challenge is guiding search through the tree to find the salient paths. **Information Flow Tracking in HW**. The state of the art for information flow analysis in hardware is information flow tracking (IFT), which instruments a design with tracking logic [2].
Many tools operate at the netlist level, although some operate at the RTL level [11]. IFT has also been used in analog designs [14], and tools exist to synthesize designs that incorporate tracking logic [16, 18]. IFT can be used to check hyperproperties and has been used to verify the safety and security of many different systems [3, 4, 9, 12, 15, 19, 20, 55, 56, 57]. IFT has also been used to automatically generate information flow properties for use with formal verification engines [41, 58]. We used these properties in our evaluation. **Formal Analysis for Information Flow**. Proof-checking approaches have been used for detecting security vulnerabilities in hardware designs [10, 22]. These approaches are often less automated, more time-intensive, and tackle smaller designs, for stronger results that are both sound and complete. VeriCoq translated Verilog to Coq for proof-carrying designs [8]. Another approach is to use self-composition, or program products, to verify information-flow properties [1]. Security extensions in the hardware description language can enforce information flow policies at the language level [5, 6, 7, 13, 59, 60]. ## 9 Conclusion SEIF combines static analysis and symbolic execution to find information flows in hardware designs. SEIF improves over static analysis, eliminating false-positive flows and finding replayable paths through the design for true flows. In our experiments, SEIF accounts for 86-90% of statically identified flows in three open-source designs. SEIF also leverages static analysis to explore the designs for 10-12 clock cycles in 4-6 seconds on average. Additionally, SEIF can be used to find multiple violating paths for security properties, providing a new angle for security verification.

TABLE 1: Security Property Verification: Program Counter in MSP430

| Metric | Result |
| --- | --- |
| Total IF paths from source | 19060 |
| Total sinks reachable from source | 41 |
| Total IF paths violating security property | 58 |
| Avg. time to produce a counterexample (s) | 0.678 |
| Avg. no. of clock cycles explored | 8.13 |
| Total realizable paths violating security property | 46 |

TABLE 2: Security Property Verification: Secret Key in AES Implementation

| Metric | Result |
| --- | --- |
| Total IF paths from source | 61639 |
| Total IFs reachable from source | 39 |
| Total IF paths from source violating security property | 57 |
| Avg. time to produce a counterexample (s) | 0.505 |
| Avg. no. of clock cycles explored | 4.102 |
| Total realizable paths violating security property | 25 |

## 10 Acknowledgments This material is based upon work supported by the National Science Foundation under Grant No. CNS-2247754, and by a Meta Security Research Award.
2306.08024
Old Data, New Forensics: The First Second of SN 1987A Neutrino Emission
The next Milky Way supernova will be an epochal event in multi-messenger astronomy, critical to tests of supernovae, neutrinos, and new physics. Realizing this potential depends on having realistic simulations of core collapse. We investigate the neutrino predictions of nearly all modern models (1-, 2-, and 3-d) over the first $\simeq$1 s, making the first detailed comparisons of these models to each other and to the SN 1987A neutrino data. Even with different methods and inputs, the models generally agree with each other. However, even considering the low neutrino counts, the models generally disagree with data. What can cause this? We show that neither neutrino oscillations nor different progenitor masses appear to be a sufficient solution. We outline urgently needed work.
Shirley Weishi Li, John F. Beacom, Luke F. Roberts, Francesco Capozzi
2023-06-13T18:00:00Z
http://arxiv.org/abs/2306.08024v1
# Old Data, New Forensics: The First Second of SN 1987A Neutrino Emission ###### Abstract The next Milky Way supernova will be an epochal event in multi-messenger astronomy, critical to tests of supernovae, neutrinos, and new physics. Realizing this potential depends on having realistic simulations of core collapse. We investigate the neutrino predictions of nearly all modern models (1-, 2-, and 3-d) over the first \(\simeq\)1 s, making the first detailed comparisons of these models to each other and to the SN 1987A neutrino data. Even with different methods and inputs, _the models generally agree with each other_. However, even considering the low neutrino counts, _the models generally disagree with data_. What can cause this? We show that neither neutrino oscillations nor different progenitor masses appear to be a sufficient solution. We outline urgently needed work. + Footnote †: preprint: FERMILAB-PUB-23-087-PPD, UCI-HEP-TR-2023-02, LA-UR-23-22079 As spectacular as SN 1987A was for multi-messenger astronomy [1, 2, 3, 4, 5, 6] -- with detections across the electromagnetic spectrum, plus neutrinos -- the next Milky Way core-collapse supernova should be much more so [7, 8, 9, 10]. We will have dramatically better sensitivity to neutrinos, which are a key observable because they carry the dominant energy release and because they probe the dynamics of the inner core. And we will have dramatically better sensitivity across the electromagnetic spectrum and to gravitational waves. Because supernovae are rare -- \(\simeq\)(2\(\pm\)1)/century [11, 12, 13] -- we likely have just one chance over the next few decades to get this right. To interpret the data from the next Milky Way supernova, numerical simulations of core collapse will be essential. In the last decade, sophisticated approaches -- including 3-d, multi-energy group radiation-hydrodynamics models of successful explosions -- have become available [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. In addition to making predictions of properties of the explosions themselves (e.g., final energies and remnant masses), these models also predict the neutrino signals. State-of-the-art calculations provide this signal up to \(\simeq\)1 s after core bounce, which is crucial for assessing explodability and which includes a large fraction of the total neutrino emission. However, the readiness of these simulations for comparison to the next supernova has not been adequately assessed. Only one paper [18] compares many models to each other, but only for 1-d models with near-common inputs, finding good agreement among models. And there is little comparison of modern models to the SN 1987A data (Refs. [29, 30] do some), even though those \(\simeq\)19 events [1, 2, 3, 4] can have a decisive impact. Also, this is the only supernova neutrino data we have. In this _Letter_, we tackle both problems. Our first goal is to compare models to each other, which gives an estimate of the _modeling uncertainties_. Our second goal is to compare models to SN 1987A data, which gives an estimate of the _physical uncertainties_. While we simply use available models (not tuned to match SN 1987A), our results are an important start that we hope stimulates new work to prepare for the next Milky Way neutrino burst. In the following, we first consider a nominal case of a \(20M_{\odot}\) (initial mass) single-star progenitor with no neutrino oscillations.
This was initially thought to be appropriate for SN 1987A [5, 31] and, accordingly, gives us the largest set of supernova models. While neglecting neutrino oscillations is not realistic, it matches supernova simulation outputs and is well defined. We allow other aspects of the simulations, including the dimensionality (1-d, 2-d, and 3-d), to vary freely so that we can include all modern predictions [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. Then, to test the impact of changing two key theoretical inputs, we vary the neutrino-oscillation scenario and the progenitor mass. Last, we conclude and discuss actions needed. In Supplemental Material (S.M.), we provide supporting details. **Review of Supernova Models.--** In core-collapse supernovae (reviewed in Refs. [32, 33, 34, 35]), the white-dwarf-like iron core of the pre-supernova star collapses to a proto-neutron star (PNS) and releases nearly all of the gravitational binding energy difference, \(GM_{\rm PNS}^{2}/R_{\rm PNS}\simeq 3\times 10^{53}\,{\rm erg}\), in neutrinos of all flavors with comparable fluences. Neutrinos diffuse out of the warm, dense, neutron-rich material of the PNS, decoupling at the neutrinospheres, with average energies of \(\simeq\)10-15 MeV. Neutrinos may be critical to whether core collapse leads to a successful supernova. In the so-called neutrino mechanism [36, 37], after decoupling from the PNS, a few percent of the early-time neutrinos interact with the collapsing layers of the star outside the PNS, potentially reversing the infall and driving an explosion. About half of the total energy in neutrinos is emitted during the first \(\simeq\)1 s after core bounce, powered in a large part by accretion onto the PNS, while the other half is released over \(\gtrsim\)10 s, as the PNS cools and deleptonizes. To understand the detailed physics and astrophysics of core collapse, large-scale multi-dimensional simulations are necessary [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. Starting from a pre-explosion massive-star progenitor model and choices for the equation of state and neutrino opacities of dense matter, modern simulations evolve the equations of non-equilibrium neutrino transport, (magneto-)hydrodynamics, and gravity for as long as is computationally feasible. In the last decade, the simulation community has made significant progress towards showing the viability of the neutrino mechanism in multi-dimensional simulations and in predicting the observed properties of supernovae. Nevertheless, these models still have shortcomings, including the neglect of neutrino oscillations, significant uncertainties in the progenitor models, often under-resolved hydrodynamic flows, and simulation times of \(\lesssim\)1 s after bounce, which misses the PNS cooling phase [38; 39; 40; 41]. **Review of Supernova 1987A.--** Multi-messenger observations of SN 1987A confirmed that a Type-II supernova is driven by the collapse of the core of a massive star into a PNS, powering a neutrino burst from the core and a delayed optical burst from the envelope [5; 6]. The water-Cherenkov experiments Kamiokande-II (Kam-II) and Irvine-Michigan-Brookhaven (IMB) detected a total of \(\simeq\)19 \(\bar{\nu}_{e}\) events via the inverse beta decay process, \(\bar{\nu}_{e}+p\to e^{+}+n\), over \(\simeq\)10 s [1; 2; 3; 4]. Though only one flavor was clearly detected, the results were broadly consistent with basic expectations for the total energy, average neutrino energy, and duration of the neutrino pulse. 
Theoretical analyses included comparisons to the supernova models of the time [42; 43; 44], which were far less sophisticated than those available today. New work on understanding the neutrino signals is needed. Observations across the electromagnetic spectrum, at the time and since, have also been critical for understanding the explosion [5; 6; 45]. Initially, it was thought that the pre- and post-supernova observations were consistent with those expected for a \(20M_{\odot}\) single-star progenitor [46; 47; 48]. Later work claimed that a binary-merger scenario is favored [49], though there is no consensus on this. On the one hand, the binary-progenitor models of Ref. [50] suggest that the helium core mass may be substantially smaller -- and the envelope mass substantially larger -- than the values found for typical single-star progenitors. On the other hand, the binary-progenitor models of Refs. [51; 52] suggest that the pre-collapse structure of the merger remnant is not so different from that predicted for single-star \(20M_{\odot}\) progenitors. New work on understanding the electromagnetic signals is also needed. **Comparing models.--** We first consider the nominal case of a \(20M_{\odot}\) progenitor and no neutrino oscillations. For all modern 1-, 2-, and 3-d models [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28], we collect information on their neutrino fluxes and spectra. We seek to assess the full variation between models, though they are not completely distinct, e.g., many share progenitors [53]. These models vary significantly in their sophistication in various aspects, but we do not attempt to adjudicate between them. Most multi-d models lead to successful explosions, with explosion times ranging from 0.2-0.8 s. The model details are given in S.M. Figure 1 shows their time profiles of \(\bar{\nu}_{e}\) luminosity and root-mean-square (RMS) energy (other flavors are shown in S.M.). Figure 1: Neutrino (\(\bar{\nu}_{e}\); others shown in S.M.) luminosity and RMS energy profiles from supernova simulations [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. For the spectra, we assume a commonly used form, \(f_{\alpha}(E_{\nu})=\mathcal{N}\left(E_{\nu}/\langle E_{\nu}\rangle\right)^{\alpha-2}e^{-(\alpha+1)E_{\nu}/\langle E_{\nu}\rangle}\), where \(\langle E_{\nu}\rangle\) is the average energy and \(\alpha\) sets the spectrum shape [54]. Different groups characterize spectra differently, and we show how we correct for this in S.M. To model the detected spectra, we follow standard calculations (see, e.g., Refs. [55; 56; 57; 58; 59]) and give details in S.M. The dominant process is \(\bar{\nu}_{e}+p\to e^{+}+n\) with free (hydrogen) protons, for which we take the cross sections and kinematics from Refs. [60; 61]. To model the detectors, we take into account their fiducial masses, energy resolutions, and trigger efficiencies [1; 2; 3; 4]. Because of the different detector responses, the detected positron energies are expected to be significantly lower for Kam-II than IMB. For the distance of SN 1987A, we use 51.4 kpc [62]. We compare the predicted and observed SN 1987A neutrino data using simple, robust observables and statistical tests. We conduct goodness-of-fit tests (computing p-values) between pairs of models and between each model and the SN 1987A data. _Because we are testing goodness-of-fit, rather than doing parameter estimation, maximum likelihood is not a suitable method; see S.M._ The main panels of Figs.
2-4 show simple visual comparisons of the counts and average detected energies. For consistency, here we cut off all models at 0.5 s. Because we forward-model the theoretical predictions (taking into account properties of the individual detectors and Poisson fluctuations), the full error bars (see S.M. for details) are shown on the predictions. _The insets of Figs. 2-4 show our main statistical calculations._ Larger p-values indicate agreement; we define \(p<0.05\) as indicating inconsistency for a given model, _though our main focus is on what happens for the majority of models._ Here we allow each model to go to its full run time (typically 0.5-1.5 s). For these, we consider both the counts in the time profile and the shape of the energy spectrum. Given the short timescale and the low statistics, we treat these separately. For the counts tests, the p-values are the one-sided cumulative Poisson probabilities. For the spectrum tests, we use one-dimensional Kolmogorov-Smirnov statistics, following Monte Carlo modeling of the predicted data. We allow free time offsets between the predictions and the data for each detector, finding that these values are \(\simeq\)0.1 s for Kam-II and \(\simeq\)0.2 s for IMB, both small, so this freedom does not affect our results. **Results for the Nominal Case.--** Figure 2 shows the model-to-model comparisons for a \(20M_{\odot}\) progenitor with no neutrino oscillations. The p-values obtained by comparing pairs of models range over 0.06-0.52 for the counts (Kam-II and IMB combined) and 0.03-0.99 for the spectra (Kam-II only, as IMB has too few counts). A general _consistency_ in both the predicted counts and spectra is evident. Considering the range and complexity of the inputs and methods in supernova modeling, this agreement is encouraging, though it remains important to understand the residual differences. Figure 2 also shows the model-to-data comparisons. A general _inconsistency_ in both the counts and spectra is evident. The predicted counts are too high for Kam-II and mostly too high for IMB. The predicted average detected energies are too high for Kam-II and slightly too low for IMB (because IMB has just one detected event in this time range, we do not use the predicted spectrum in our statistical tests). Quantitatively, no model-to-data comparisons have both p-values larger than 0.05, and many are much worse. To confidently interpret data from the next Milky Way supernova, multiple simulations that can reproduce the neutrino and electromagnetic data will be a must. Confidence in this would greatly increase if the same were achieved for SN 1987A. **Possible Solutions: Neutrino Oscillations.--** The detected supernova neutrino data are affected by neutrino oscillations [63, 64, 65, 66, 9], with the effects depending upon differences in the initial neutrino luminosities and spectra, set by differences in their production processes and opacities. The details depend on the high densities of matter and other neutrinos, which are uncertain [9]. Generally, matter-induced effects occur in the stellar envelope [68], while neutrino-induced effects occur just outside the neutrinospheres [65, 66, 67]. We study the effects of oscillations with a few representative cases. 
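As a rough sketch of the flavor-swap bookkeeping behind such cases (a simplified treatment with our own parameter choices, not the paper's code; the precise prescriptions are in S.M.):

```python
import numpy as np

SIN2_THETA12 = 0.31  # approximate solar mixing angle

def nubar_e_flux_at_earth(f_nubar_e, f_nubar_x, scenario):
    """Observable anti-nu_e flux under three illustrative scenarios.

    f_nubar_e, f_nubar_x: unoscillated fluxes (arrays over energy or time).
    'NH' and 'IH' are the standard MSW results in the stellar envelope;
    'equil' assumes flavor equilibration within the antineutrino sector,
    one common simplification of neutrino-induced conversions.
    """
    f_nubar_e = np.asarray(f_nubar_e, dtype=float)
    f_nubar_x = np.asarray(f_nubar_x, dtype=float)
    if scenario == "NH":     # partial swap: survival prob. cos^2(theta_12)
        return (1.0 - SIN2_THETA12) * f_nubar_e + SIN2_THETA12 * f_nubar_x
    if scenario == "IH":     # nearly complete anti-nu_e <-> anti-nu_x swap
        return f_nubar_x
    if scenario == "equil":  # 1/3 anti-nu_e + 2/3 anti-nu_x
        return (f_nubar_e + 2.0 * f_nubar_x) / 3.0
    raise ValueError(f"unknown scenario: {scenario}")
```

Since \(\bar{\nu}_{x}\) typically has a lower flux but a higher average energy than \(\bar{\nu}_{e}\), each of these mappings tends to lower the predicted counts and raise the detected energies.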
Considering only matter-induced effects, if the neutrino mass ordering follows the inverted hierarchy (IH), then there can be a nearly complete exchange of the \(\bar{\nu}_{e}\) and \(\bar{\nu}_{x}\) (\(\bar{\nu}_{\mu}\) and \(\bar{\nu}_{\tau}\)) flavors, with almost no change for the \(\nu_{e}\) and \(\nu_{x}\) flavors [68]. In the normal hierarchy (NH), the opposite occurs. With neutrino-induced effects, it is possible to have nearly complete equilibration of all six flavors soon after decoupling, because of rapid flavor conversions induced by interactions of neutrinos among themselves [65, 66, 67]. Further details are given in S.M. Figure 3 shows the effects of example neutrino-oscillation scenarios (all for the Alcar 3-d [27] model and a \(20M_{\odot}\) progenitor, chosen because it is 3-d and has a long runtime). In general, oscillations decrease the predicted counts and increase the average energies, as \(\bar{\nu}_{x}\) has lower fluxes but higher energies than \(\bar{\nu}_{e}\). For most simulations (including this model), the reduction in flux is more significant. The correlated trend in how the spectrum p-value changes relative to the counts p-value is due to finite statistics and is explained in S.M. In effect, only one p-value matters, and that is the one for the spectrum. Last, as an overall trend, models with longer simulation times tend to disagree more with data, as shown via the grey symbols in the inset. Figure 2: Predicted counts and average energies of supernova models (colors as in Fig. 1), compared to each other and to SN 1987A data. Our main calculations are in the inset. **Possible Solutions: Supernova Progenitors.--** The detected supernova neutrino data are also affected by the choice of progenitor [29; 69]. The structure of the star at collapse determines the accretion rate onto the PNS, which strongly influences the neutrino emission. A key question is whether models with progenitors other than the \(20M_{\odot}\) single-star cases considered above would better fit the SN 1987A data. Figure 4 shows the effects of different choices of progenitor mass, using the suite of Fornax 2-d models in Ref. [35], chosen because of their wide range of progenitor masses (we use 16-30\(M_{\odot}\)) and long runtimes. None of the progenitors provides a good fit to the data. We have also carried out this analysis (see selected results in S.M.) for the suites of models from Refs.
[70; 71; 72; 73; 74; 22]. **Conclusions.--** The SN 1987A neutrino data are of critical importance to our understanding of core-collapse supernovae. It is commonly assumed that modern supernova models -- with 36 years of improvements -- also match the data. We revisit this assumption.
We show that most modern models (for a \(20M_{\odot}\) progenitor and no neutrino oscillations) disagree with 1987A neutrino data in the first \(\simeq\)0.5-1.5 s, where the highest-precision models end. Compared to the Kam-II data, these models predict higher counts and higher average energies. Compared to the IMB data, these models generally predict higher counts (as noted, we do not use their average energies, which means that our results do not depend on the well-known spectrum tension between Kam-II and IMB). When we include the effects of various neutrino-oscillation scenarios, the predicted counts become lower and the average energies become slightly higher, so that the tension with data remains. When we vary the progenitor mass, the trend is not monotonic, but the tension with data again remains. Finally, as can be seen from comparing Figs. 3 and 4, varying the oscillation scenario and the progenitor together would not resolve the tensions with data. We also show that most modern models are in good agreement with each other, which suggests that there may be a common solution to the disagreement with SN 1987A data, perhaps even one that also improves explosion energies in simulations. There is a range of possibilities, including that our implementation or even understanding of the physics in the simulations is incomplete, that not enough progenitor models have been considered, that the initial neutrino spectra are nonthermal (e.g., Refs. [81; 82]), or that neutrino oscillations need to be directly implemented in supernova simulations [83]. Separately, it would also be interesting to re-analyze the raw data from both detectors, using present detector-modeling and event-reconstruction techniques, both vastly improved over those from 36 years ago. To realize the full potential of observing and interpreting the signals from the next Milky Way supernova, the community will ultimately need a set of modern models that agree (for the same progenitor mass) with each other and the neutrino and electromagnetic data. An important step towards that is seeking the same for SN 1987A. Reaching these goals should be pursued with urgency. It is especially important that multi-d supernova simulations push their run times out to a few seconds, beyond which PNS cooling simulations may be adequate. **Acknowledgments.--** We are grateful for helpful discussions with Benoit Assi, Elias Bernreuther, Adam Burrows, Nikita Blinov, Basudeb Dasgupta, Sebastian Ellis, Ivan Esteban, Chris Fryer, Christopher Hirata, Josh Isaacson, Thomas Janka, Daniel Kresse, Gordan Krnjaic, Bryce Littlejohn, Sam McDermott, Alessandro Mirizzi, Masayuki Nakahata, Evan O'Connor, Ryan Plestid, Georg Raffelt, Prasanth Shyamsundar, Michael Smy, and especially Pedro Machado. We acknowledge the use of the KS2D code. S.W.L. was supported at FNAL by the Department of Energy under Contract No. DE-AC02-07CH11359 during the early stage of this work. J.F.B. was supported by NSF Grant No. PHY-2012955.
2305.11025
Dominant sets for model spaces in several variables
Let $I$ be an inner function in $\mathcal{D} = B_{n_1}\times B_{n_2}\cdots \times B_{n_k}$, where $B_n$ denotes the open unit ball of $\mathbb{C}^n$, $n\ge 1$. We construct dominant sets for the space $H^2 \ominus I H^2$, where $H^2 = H^2(\mathcal{D})$ denotes the standard Hardy space.
Aleksei B. Aleksandrov, Evgueni Doubtsov
2023-05-18T15:06:10Z
http://arxiv.org/abs/2305.11025v1
# Dominant sets for model spaces in several variables ###### Abstract. Let \(I\) be an inner function in \(\mathcal{D}=B_{n_{1}}\times B_{n_{2}}\cdots\times B_{n_{k}}\), where \(B_{n}\) denotes the open unit ball of \(\mathbb{C}^{n}\), \(n\geq 1\). We construct dominant sets for the space \(H^{2}\ominus IH^{2}\), where \(H^{2}=H^{2}(\mathcal{D})\) denotes the standard Hardy space. Key words and phrases: Dominant sets, Hardy spaces, large and small model spaces The research on Sections 1 and 3 was supported by Russian Science Foundation (grant No. 19-11-00058); the research on Sections 2 and 4 was supported by Russian Science Foundation (grant No. 23-11-00153). ### Inner functions Let \(\Sigma\) denote the normalized Lebesgue measure on the distinguished boundary \(\partial\mathcal{D}\). **Definition 1**.: A holomorphic function \(I:\mathcal{D}\to\mathbb{D}\) is called inner if \(|I(\zeta)|=1\) for \(\Sigma\)-almost all points \(\zeta\in\partial\mathcal{D}\). In the above definition, \(I(\zeta)\) stands, as usual, for \(\lim_{r\to 1-}I(r\zeta)\). It is well known that the corresponding limit exists for \(\Sigma\)-almost all points \(\zeta\in\partial\mathcal{D}\). Also, observe that by Definition 1, unimodular constants are not inner functions. ### Clark measures Let \(M(\partial\mathcal{D})\) denote the space of all complex Borel measures on \(\partial\mathcal{D}\). For \(\mu\in M(\partial\mathcal{D})\), the Poisson integral \(P[\mu]\) is defined by the formula \[P[\mu](z)=\int_{\partial\mathcal{D}}P(z,\zeta)\,d\mu(\zeta),\quad z\in\mathcal{D}.\] Given an \(\alpha\in\mathbb{T}\) and an inner function \(I:\mathcal{D}\to\mathbb{D}\), the expression \[\frac{1-|I(z)|^{2}}{|\alpha-I(z)|^{2}}=\operatorname{Re}\left(\frac{\alpha+I(z)}{\alpha-I(z)}\right),\quad z\in\mathcal{D},\] is positive and pluriharmonic. Thus, there exists a unique positive measure \(\sigma_{\alpha}=\sigma_{\alpha}[I]\in M(\partial\mathcal{D})\) such that \[P[\sigma_{\alpha}](z)=\operatorname{Re}\left(\frac{\alpha+I(z)}{\alpha-I(z)}\right),\quad z\in\mathcal{D}.\] Since \(I\) is an inner function, we have \[P[\sigma_{\alpha}](\zeta)=\frac{1-|I(\zeta)|^{2}}{|\alpha-I(\zeta)|^{2}}=0\quad\Sigma\text{-a.e.},\] hence, \(\sigma_{\alpha}=\sigma_{\alpha}[I]\) is a singular measure. Here and in what follows, this means that \(\sigma_{\alpha}\) is singular with respect to Lebesgue measure \(\Sigma\). ### Clark measures and model spaces Let \(\mathcal{H}ol(\mathcal{D})\) denote the space of all holomorphic functions in \(\mathcal{D}\). For \(0<p<\infty\), the standard Hardy space \(H^{p}=H^{p}(\mathcal{D})\) consists of \(f\in\mathcal{H}ol(\mathcal{D})\) such that \[\|f\|_{H^{p}}^{p}=\sup_{0<r<1}\int_{\partial\mathcal{D}}|f(r\zeta)|^{p}\,d\Sigma(\zeta)<\infty.\] As usual, the Hardy space \(H^{p}(\mathcal{D})\), \(p>0\), is identified with the space \(H^{p}(\partial\mathcal{D})\) of the corresponding boundary values. For an inner function \(\Theta\) on \(\mathbb{D}\), the classical model space \(K_{\Theta}\) is defined by the equality \[K_{\Theta}=H^{2}(\mathbb{D})\ominus\Theta H^{2}(\mathbb{D}),\] see, e.g., monograph [7]. Clark [6] established useful connections between Clark measures and model spaces. In particular, he introduced and studied a family of canonical unitary operators \(U_{\alpha}:K_{\Theta}\to L^{2}(\sigma_{\alpha})\), \(\alpha\in\mathbb{T}\). Further development of the Clark theory in the unit disk and its applications are described, in particular, in [8, 11].
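For orientation, we recall a standard one-variable example (a routine computation, included here only as an illustration). Let \(\mathcal{D}=\mathbb{D}\) and \(I(z)=z^{n}\), \(n\geq 1\). The elementary identity \[\frac{1}{n}\sum_{\omega^{n}=\alpha}\frac{\omega+z}{\omega-z}=\frac{\alpha+z^{n}}{\alpha-z^{n}},\quad z\in\mathbb{D},\ \alpha\in\mathbb{T},\] shows that \[\sigma_{\alpha}[z^{n}]=\frac{1}{n}\sum_{\omega^{n}=\alpha}\delta_{\omega},\] that is, each Clark measure of \(z^{n}\) is the normalized counting measure on the \(n\)-th roots of \(\alpha\). In particular, \(\sigma_{\alpha}[z^{n}]\) is singular with respect to Lebesgue measure, and \(I=\alpha\) on its support.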
Given an inner function \(I\) in \(\mathcal{D}\), consider the following natural analogs of the space \(K_{\Theta}\): \[I^{*}(H^{2}) =H^{2}\ominus IH^{2},\] \[I_{*}(H^{2}) =\{f\in H^{2}:\,I\overline{f}\in H_{0}^{2}\},\] where \(H_{0}^{2}=\{f\in H^{2}:f(0)=0\}\). It is clear that \[I_{*}(H^{2})\subset I^{*}(H^{2}),\] therefore, \(I^{*}(H^{2})\) is called a large model space and \(I_{*}(H^{2})\) is called a small model space. Certain basic results about the spaces \(I^{*}(H^{2})\) and \(I_{*}(H^{2})\) were obtained in the authors' paper [2]. For an inner function \(\Theta\) in \(\mathbb{D}\), we have \(\Theta^{*}(H^{2}(\mathbb{D}))=\Theta_{*}(H^{2}(\mathbb{D}))=K_{\Theta}\). ### Dominant sets for model spaces If \(\Theta\) is an inner function in the unit disk, then the concept of a dominant set for the model space \(K_{\Theta}\) was introduced in [4]. Such terminology is motivated by the notion of a dominant sequence for the classical space \(H^{\infty}\) (see [5]). In the present paper, this terminology is used for analogs of model spaces in \(\mathcal{D}\). **Definition 2**.: A Lebesgue measurable set \(E\subset\partial\mathcal{D}\) is said to be dominant for the large model space \(I^{*}(H^{2})\) if \(\Sigma(E)<1\) and \[\|f\|_{H^{2}}^{2}\leq C\int_{E}|f|^{2}\,d\Sigma \tag{1.1}\] for all \(f\in I^{*}(H^{2})\). On the one hand, the condition \(\Sigma(E)<1\) excludes the corresponding trivial examples of dominant sets. On the other hand, property (1.1) implies the inequality \(\Sigma(E)>0\). ### Existence of dominant sets For inner functions \(\Theta\) defined on \(\mathbb{D}\) and having additional properties, a number of examples of dominant sets were constructed in [4]. Thus, it is natural to ask whether there are dominant sets for an arbitrary model space. Kapustin (see [4, Theorem 5.14]) obtained a positive answer to this question in the case of the unit disk. The main result of the present paper is the following existence theorem related to the large model spaces in the multidimensional domain \(\mathcal{D}\). **Theorem 1**.: _Let \(I\) be an inner function in \(\mathcal{D}\). Then there exists a dominant set for the large model space \(I^{*}(H^{2})\)._ ### Organization of the paper Auxiliary results, in particular, a result on the disintegration of Lebesgue measure in terms of Clark measures, are collected in Section 2. Theorem 1 is proved in Section 3. The existence of radial limits almost everywhere with respect to Clark measures is discussed in the final Section 4. ## 2. Auxiliary results ### Supports of singular Clark measures Let \(\alpha\in\mathbb{T}\) and \(I\) be an inner function in \(\mathcal{D}\). Since \(\sigma_{\alpha}\) is a singular measure, well-known properties of Poisson integrals guarantee the following property: \[I(\zeta)=\alpha\quad\text{for $\sigma_{\alpha}$-almost all points $\zeta\in\partial\mathcal{D}$}. \tag{2.1}\] In particular, \(\sigma_{\alpha}\) and \(\sigma_{\beta}\) are mutually singular for \(\alpha\neq\beta\). ### Disintegration of Lebesgue measure Let \(m\) denote the normalized Lebesgue measure on the unit circle \(\mathbb{T}\). For \(\mathcal{D}=\mathbb{D}\), the following result was obtained in [1]. **Proposition 1**.: _Let \(I:\mathcal{D}\to\mathbb{D}\) be an inner function, \(\alpha\in\mathbb{T}\) and let \(\sigma_{\alpha}=\sigma_{\alpha}[I]\) be the corresponding Clark measure.
Then_ \[\int_{\mathbb{T}}\int_{\partial\mathcal{D}}g\,d\sigma_{\alpha}\,dm(\alpha)= \int_{\partial\mathcal{D}}g\,d\Sigma \tag{2.2}\] _for all \(g\in L^{1}(\partial\mathcal{D})\), where equality (2.2) is understood in the following weak sense: for \(m\)-almost all \(\alpha\in\mathbb{T}\), the function \(g\) is defined \(\sigma_{\alpha}\)-a.e. and is summable with respect to \(\sigma_{\alpha}\)._ Proof.: If \(g\in C(\partial\mathcal{D})\), then it suffices to repeat the argument used in [2, Theorem 3.3] for \(\mathcal{D}=B_{n}\). The special case \(g\in C(\partial\mathcal{D})\) implies the general one, since equality (2.2) is understood in the above weak sense. Proposition 1 easily implies the following assertion. **Corollary 1**.: _Let \(\{f_{n}\}_{n=1}^{\infty}\) be a functional sequence converging to zero in \(L^{2}(\Sigma)\). Then there exists a subsequence \(\{f_{n_{k}}\}_{k=1}^{\infty}\) such that \(\lim_{k\to\infty}\|f_{n_{k}}\|_{L^{2}(\sigma_{\alpha})}=0\) for \(m\)-almost all \(\alpha\in\mathbb{T}\)._ ### Cauchy integrals and Clark measures The following technical assertion is proved in [3, Proposition 2.2]. **Proposition 2**.: _Let \(I:\mathcal{D}\to\mathbb{D}\) be an inner function, \(\alpha\in\mathbb{T}\) and let \(\sigma_{\alpha}=\sigma_{\alpha}[I]\) be the corresponding Clark measure. Then_ \[\int_{\partial\mathcal{D}}C(z,\zeta)C(\zeta,w)\,d\sigma_{\alpha}(\zeta)= \frac{1-I(z)\overline{I(w)}}{(1-\overline{\alpha}I(z))(1-\alpha\overline{I(w )})}C(z,w)\] _for all \(\alpha\in\mathbb{T}\), \(z,w\in\mathcal{D}\)._ ### Reproducing kernels for the spaces \(I^{*}(H^{2})\) Let \(I\) be an inner function in \(\mathcal{D}\). Then \[k(z,\zeta)=k(I;z,\zeta)\stackrel{{\rm def}}{{=}}(1-I(z) \overline{I(\zeta)})C(z,\zeta)\] is the reproducing kernel for the large model space \(I^{*}(H^{2})\). Indeed, \(C(z,\zeta)\) is the reproducing kernel for \(H^{2}(\mathcal{D})\), hence, \(I(z)C(z,\zeta)\overline{I(\zeta)}\) is the reproducing kernel for \(IH^{2}(\mathcal{D})\). Therefore, the difference \(C(z,\zeta)-I(z)C(z,\zeta)\overline{I(\zeta)}\) is the reproducing kernel for the space \(H^{2}(\mathcal{D})\om IH^{2}(\mathcal{D})\). ## 3. Dominant sets for model spaces ### Preliminary results **Proposition 3**.: _Let \(I:\mathcal{D}\to\mathbb{D}\) be an inner function, \(\alpha\in\mathbb{T}\) and let \(\sigma_{\alpha}=\sigma_{\alpha}[I]\) be the corresponding Clark measure. Then, for any \(f,g\in I^{*}(H^{2})\), the equality_ \[(f,g)_{H^{2}}=\int_{\partial\mathcal{D}}f\overline{g}\,d\sigma_{\alpha} \tag{3.1}\] _holds for \(m\)-almost all \(\alpha\in\mathbb{T}\)._ Proof.: Put \(k_{z}(\zeta)=k(z,\zeta)\). Successively applying explicit formulas for the kernels \(k_{z}(\zeta)\) and \(k_{w}(\zeta)\), \(z,w\in\mathcal{D}\), property (2.1) and Proposition 2, we obtain \[\int_{\partial\mathcal{D}}k_{z}(\zeta)\overline{k_{w}(\zeta)}\,d \sigma_{\alpha}[I](\zeta)\] \[=\int_{\partial\mathcal{D}}\Big{(}1-\overline{I(z)}I(\zeta) \Big{)}\,C(\zeta,z)\,\Big{(}1-I(w)\overline{I(\zeta)}\Big{)}\,C(w,\zeta)\,d \sigma_{\alpha}[I](\zeta)\] \[=(1-\alpha\overline{I(z)})(1-\overline{\alpha}I(w))\int_{ \partial\mathcal{D}}C(\zeta,z)C(w,\zeta)\,d\sigma_{\alpha}[I](\zeta)\] \[=\Big{(}1-\overline{I(z)}I(w)\Big{)}\,C(w,z)\] \[=k(w,z)=(k_{z},k_{w})_{H^{2}}.\] Thus, for all \(\alpha\in\mathbb{T}\), equality (3.1) holds for all finite linear combinations of \(k_{z}\), \(z\in\mathcal{D}\). 
In particular, the equality \[\|f\|_{H^{2}}^{2}=\int_{\partial\mathcal{D}}|f|^{2}\,d\sigma_{\alpha} \tag{3.2}\] holds for \(m\)-almost all \(\alpha\in\mathbb{T}\) for any function \(f\) in the linear span of the family \(\{k_{z}\}_{z\in\mathcal{D}}\). Applying Corollary 1, we extend this equality to all functions \(f\) in the space \(I^{*}(H^{2})\). To deduce equality (3.1) from (3.2), it suffices to apply the standard formula expressing the scalar product in terms of the corresponding norms. Before formulating the next assertion, observe that the composition \(\Phi\circ I\) is well defined for any function \(\Phi\in L^{1}(\mathbb{T})=L^{1}(\mathbb{T},m)\). Indeed, if \(I(0)=0\), then \(\Sigma(I^{-1}(Q))=m(Q)\) for any measurable set \(Q\subset\mathbb{T}\), where \(I^{-1}(Q)=\{\zeta\in\partial\mathcal{D}:I(\zeta)\in Q\}\). If \(I\) is an arbitrary inner function, then we have \(\psi\circ I(0)=0\), where \[\psi(z)=\frac{I(0)-z}{1-\overline{I(0)}z},\quad z\in\mathbb{D}.\] Therefore, the properties \(\Sigma(I^{-1}(Q))=0\) and \(m(Q)=0\) are equivalent; hence, the composition \(\Phi\circ I\) is defined almost everywhere and is measurable. **Corollary 2**.: _Let \(\Phi\in L^{1}(\mathbb{T})\). Then_ \[\int_{\partial\mathcal{D}}(\Phi\circ I)f\overline{g}\,d\Sigma=\left(\int_{ \mathbb{T}}\Phi\,dm\right)(f,g)_{H^{2}}\] _for all \(f,g\in I^{*}(H^{2})\)._ Proof.: Applying Proposition 3, property (2.1) and Proposition 1, we obtain \[\left(\int_{\mathbb{T}}\Phi\,dm\right)(f,g)_{H^{2}} =\int_{\mathbb{T}}\Phi(\alpha)(f,g)_{H^{2}}\,dm(\alpha)\] \[=\int_{\mathbb{T}}\int_{\partial\mathcal{D}}\Phi(\alpha)f(\zeta) \overline{g}(\zeta)\,d\sigma_{\alpha}(\zeta)\,dm(\alpha)\] \[=\int_{\mathbb{T}}\int_{\partial\mathcal{D}}(\Phi\circ I)f \overline{g}\,d\sigma_{\alpha}\,dm(\alpha)\] \[=\int_{\partial\mathcal{D}}(\Phi\circ I)f\overline{g}\,d\Sigma,\] as required. ### Proof of Theorem 1 Fix a measurable set \(Q\subset\mathbb{T}\) such that \(0<m(Q)<1\). Consider the set \(I^{-1}(Q)\). Applying Corollary 2 with \(\Phi=\chi_{Q}\), we obtain \[m(Q)\|f\|_{H^{2}}^{2}=\int_{I^{-1}(Q)}|f|^{2}\,d\Sigma,\quad f\in I^{*}(H^{2}),\] in particular, estimate (1.1) holds for \(E=I^{-1}(Q)\). Next, recall that \(\Sigma(I^{-1}(Q))=m(Q)\) provided that \(I(0)=0\). Thus, for an arbitrary inner function \(I\), the condition \(m(Q)<1\) guarantees that \(\Sigma(I^{-1}(Q))<1\). Therefore, \(I^{-1}(Q)\) is a dominant set, and the proof of Theorem 1 is finished. ## 4. Radial behavior of functions from model spaces Let \(I:\mathcal{D}\to\mathbb{D}\) be an inner function. As indicated in Proposition 1, if \(g\in L^{1}(\partial\mathcal{D})\), then for \(m\)-almost all \(\alpha\in\mathbb{T}\), the function \(g\) is defined \(\sigma_{\alpha}\)-a.e. and is summable with respect to \(\sigma_{\alpha}=\sigma_{\alpha}[I]\). Given an \(f\in I^{*}(H^{2})\), one can draw similar conclusions on the existence of radial limits \(f(\zeta)=\lim_{r\to 1-}f(r\zeta)\). Indeed, the radial limit \(f(\zeta)\) is defined for \(\Sigma\)-almost all \(\zeta\in\partial\mathcal{D}\). Hence, properties of the family \(\{\sigma_{\alpha}\}_{\alpha\in\mathbb{T}}\) guarantee that for \(m\)-almost all \(\alpha\in\mathbb{T}\), the value \(f(\zeta)\) is defined for \(\sigma_{\alpha}\)-almost all points \(\zeta\in\partial\mathcal{D}\). Therefore, it is natural to ask whether the corresponding statement holds for _all_\(\alpha\in\mathbb{T}\). Poltoratski [9] obtained a positive answer to this question for \(\mathcal{D}=\mathbb{D}\). 
### Poltoratski's theorem for the space \(I_{*}(h^{2})\) In this section, we show that consideration of slice functions allows one to extend Poltoratski's theorem to the functions from the small model space \(I_{*}(H^{2})\). Let \(\alpha\in\mathbb{T}\) and \(\sigma_{\alpha}=\sigma_{\alpha}[I]\) be a Clark measure. Put \(u=P[\sigma_{\alpha}]\). For \(\xi\in\partial\mathcal{D}\), consider the slice function \(u_{\xi}(z):=u(z\xi)\), \(z\in\mathbb{D}\). Since \(u_{\xi}\) is a positive harmonic function on \(\mathbb{D}\), it is the one-dimensional Poisson integral of a positive measure defined on \(\mathbb{T}\). We use the notation \((\sigma_{\alpha})_{\xi}\) for the corresponding slice-measure. The following formula for integration on slices is well known: \[\int_{\partial\mathcal{D}}g\,d\Sigma=\int_{\partial\mathcal{D}}\int_{\mathbb{T }}g(\lambda\zeta)\,dm(\lambda)\,d\Sigma(\zeta),\quad g\in C(\partial\mathcal{ D}). \tag{4.1}\] The above formula guarantees (see, e.g., [2, Proposition 2.1] in the case, where \(\mathcal{D}=B_{n}\)) that \[\int_{\partial\mathcal{D}}g\,d\sigma_{\alpha}=\int_{\partial\mathcal{D}}\int_ {\mathbb{T}}g(\lambda\xi)\,d(\sigma_{\alpha})_{\xi}(\lambda)\,d\Sigma(\xi) \tag{4.2}\] for all \(g\in C(\partial\mathcal{D})\). Moreover, standard arguments show that equality (4.2) extends to all bounded Borel functions \(g\) defined on \(\partial\mathcal{D}\). **Proposition 4**.: _Let \(I\) be an inner function in \(\mathcal{D}\), \(f\in I_{*}(H^{2})\) and \(\alpha\in\mathbb{T}\). Then the radial limit \(\lim_{r\to 1-}f(r\zeta)\) exists for \(\sigma_{\alpha}\)-almost all points \(\zeta\in\partial\mathcal{D}\)._ Proof.: Let \(f\in I_{*}(H^{2})\). By (4.1), the slice functions \(I_{\xi}\) and \(f_{\xi}\) have the following properties for \(\Sigma\)-almost all \(\xi\in\partial\mathcal{D}\): * \(I_{\xi}\) is an inner function in \(\mathbb{D}\), * \(f_{\xi}\in(I_{\xi})_{*}(H^{2}(\mathbb{D}))\subset H^{2}(\mathbb{D})\). Let \(\alpha\in\mathbb{T}\). Assume that the above properties hold for \(\xi\in\partial\mathcal{D}\). Since \((I_{\xi})_{*}(H^{2}(\mathbb{D}))\) is a classical model space and \(f_{\xi}\in(I_{\xi})_{*}(H^{2}(\mathbb{D}))\), the radial limit \(\lim_{r\to 1-}f_{\xi}(r\lambda)\) exists for \(\sigma_{\alpha}[I_{\xi}]\)-almost all \(\lambda\in\mathbb{T}\) by Poltoratski's theorem [9]. Now, observe that the Clark measure \(\sigma_{\alpha}[I_{\xi}]\) coincides with the slice-measure \((\sigma_{\alpha}[I])_{\xi}\). Hence \((\sigma_{\alpha}[I])_{\xi}(E_{\xi})=0\), where \(E\) denotes the set of those points \(\zeta\in\partial\mathcal{D}\), for which the limit \(\lim_{r\to 1-}f(r\zeta)\) does not exist, and \(E_{\xi}=\{\lambda\in\mathbb{T}:\lambda\xi\in E\}\). Since the stated property holds for \(\Sigma\)-almost all points \(\xi\in\partial\mathcal{D}\), formula (4.2) with \(g=\chi_{E}\) guarantees that \(\sigma_{\alpha}(E)=0\), as required.
2302.02798
Accumulation of scale-free localized states induced by local non-Hermiticity
The bulk states of Hermitian systems are believed insensitive to local Hermitian impurities or perturbations except for a few impurity-induced bound states. Thus, it is important to ask whether \textit{local} non-Hermiticity can cause drastic changes to the original Hermitian systems. Here we address this issue affirmatively and present exact solutions for the double chain model with local non-Hermitian terms possessing parity-time ($\mathcal{PT}$) symmetry. Induced by the non-Hermiticity, the system undergoes a sequence of $\mathcal{PT}$-symmetry breakings, after which the eigenenergies appear in complex conjugate pairs. The associated extended bulk states then become scale-free localized and unidirectionally accumulated around the impurity. There exist mobility edges separating the residual extended states until a full scale-free localization of all eigenstates. Further increasing the non-Hermitity counter-intuitively brings the system to a $\mathcal{PT}$-restoration regime with fully real spectra except for a pair of complex bound states. We demonstrate that the local non-Hermiticity generated scale-free localization is a general phenomenon and can even survive the quasiperiodic disorder. Our results indicate that the bulk properties of the original Hermitian system can be globally reshaped by local non-Hermiticity.
Cui-Xian Guo, Xueliang Wang, Haiping Hu, Shu Chen
2023-02-06T14:17:52Z
http://arxiv.org/abs/2302.02798v3
# Accumulation of scale-free localized states induced by local non-Hermiticity ###### Abstract The bulk states of Hermitian systems are believed insensitive to local Hermitian impurities or perturbations except for a few impurity-induced bound states. Thus, it is important to ask whether _local_ non-Hermiticity can cause drastic changes to the original Hermitian systems. Here we address this issue affirmatively and present exact solutions for the double chain model with local non-Hermitian terms possessing parity-time (\(\mathcal{PT}\)) symmetry. Induced by the non-Hermiticity, the system undergoes a sequence of \(\mathcal{PT}\)-symmetry breakings, after which the eigenenergies appear in complex conjugate pairs. The associated extended bulk states then become scale-free localized and unidirectionally accumulated around the impurity. There exist mobility edges separating the residual extended states until a full scale-free localization of all eigenstates. Further increasing the non-Hermiticity counter-intuitively brings the system to a \(\mathcal{PT}\)-restoration regime with fully real spectra except for a pair of complex bound states. We demonstrate that the local non-Hermiticity generated scale-free localization is a general phenomenon and can even survive the quasiperiodic disorder. Our results indicate that the bulk properties of the original Hermitian system can be globally reshaped by local non-Hermiticity. ## I Introduction An essential subject of band theory is the study of the sensitivity of the energy spectrum and eigenstate to local perturbations, like impurities or defects and various boundary conditions. Generally speaking, a local impurity or domain wall would only induce a few bound states for Hermitian systems. The bulk energy spectra are insensitive to such local perturbations, with the eigenstates' localization properties staying unchanged. In topological phases of matter, nontrivial in-gap modes residing at the impurities/defects or system boundaries may appear, governed by certain bulk topological invariants. However, such an intuitive physical picture breaks down for some non-Hermitian systems. As a paradigm, the non-Hermitian skin effect (NHSE), i.e., the extreme sensitivity of energy spectra and eigenstates to the change of boundary conditions, has attracted intensive studies in the past few years [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. Without any Hermitian counterparts, it is featured by the entirely distinct energy spectra under different boundary conditions and the condensation of eigenstates at system boundaries [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23], domain walls [24; 25; 26; 27; 28; 29] or impurities [27; 28; 29]. The NHSE necessities the extension of band theory to its non-Bloch form by introducing the so-called generalized Brillouin zone [1; 2; 3; 4]. In this context, most previous studies focused on non-Hermitian systems with either global (i.e., the non-Hermitian terms have support on the whole lattice) nonreciprocal hoppings or gain/loss. A few exceptions include studies on the dynamical properties of quantum systems dissipatively coupled to baths at the boundary [30] or subject to local loss [31; 32; 33; 34; 35; 36; 37]. Two fundamental and interesting questions naturally arise: (1) Is there any paradigmatic and universal phenomenon akin to the NHSE emerge from local non-Hermiticity? 
(2) can a local non-Hermitian term (i.e., has support only on a few lattice sites) cause dramatic changes to the energy spectra and eigenstates for an otherwise Hermitian system? Addressing these issues would bridge a comprehensive understanding of both the Bloch and non-Bloch band theory and is also experimentally relevant thanks to the feasibility of local manipulations of non-Hermiticity (e.g., non-reciprocity and gain/loss) in various classical and quantum simulation platforms. In this work, we give affirmative answers to these questions by analytically solving a \(\mathcal{PT}\)-symmetric double chain model with a local gain/loss term (It equally describes a Su-Schrieffer-Heeger (SSH) lattice with a single asymmetrical hopping). We show that increasing the strength of gain/loss (\(\gamma\)) drives the system from the \(\mathcal{PT}\)-unbroken regime with entirely real spectra into a \(\mathcal{PT}\)-broken regime with the appearance of paired complex-conjugated eigenenergies. The \(\mathcal{PT}\)-transition is through a sequence of exceptional points accompanied by the formation of scale-free localized (SFL) eigenstates. These SFL states, unidirectionally accumulated near the impurity, have localization length of the order of system size [38]. Separated by mobility edges, the SFL states and residual extended states coexist until a full scale-free localization of all eigenstates occurs. Further increasing \(\gamma\), the complex eigenenergies gradually coalesce into real eigenenergies, with their associated eigenstates changing from SFL to extended. Last, the system enters into a \(\mathcal{PT}\)-restoration regime with entirely real spectra except for a pair of complex bound states. We demonstrate that the local non-Hermiticity-induced scale-free localization is a general phenomenon, regardless of the specific models, the underlying \(\mathcal{PT}\)-symmetry, the coalescence of eigenstates or even a priori Bloch-band description of the underlying Hermitian systems, as verified in the quasiperiodic Aubry-Andre (AA) model and a single-impurity chain. We note the key differences between the SFL states and the non-Hermitian skin modes. With its localization length proportional to system size, the emergence of SFL states only requires local non-Hermiticity and goes beyond both Bloch and non-Bloch band descriptions. The rest of the paper is organized as follows. In section II, we demonstrate in details how scale-free localization is induced by local non-Hermiticity by exactly solving the double chain model and SSH model with local non-Hermiticity. In section III, we unveil the generality of SFL states by studying the model of single-impurity chain with an imaginary on-site potential and the quasiperiodic AA model with local non-Hermiticity. Conclusions and discussions are given in the last section. ## II Scale-free localization induced by local non-Hermiticity ### Models and solutions We start from a closed double chain model with an on-site gain/loss term of strength \(\gamma\in\mathbb{R}\) residing on a single rung, as depicted in Fig. 1(a). The Hamiltonian is expressed as \[\hat{H}_{dc}= \hat{H}_{0}+i\gamma\hat{c}_{mA}^{\dagger}\hat{c}_{mA}-i\gamma \hat{c}_{mB}^{\dagger}\hat{c}_{mB}, \tag{1}\] where \(m\) is the position of the impurity rung. 
\(\hat{H}_{0}\) is the tight-binding Hermitian Hamiltonian described by \[\begin{split}\hat{H}_{0}=& t_{1}\sum_{n\neq m} \left(\hat{c}_{nA}^{\dagger}\hat{c}_{nB}+h.c.\right)+\delta\left(\hat{c}_{mA}^{ \dagger}\hat{c}_{mB}+h.c.\right)\\ &+\frac{t_{2}}{2}\sum_{n=1}^{N}\left(\hat{c}_{n+1,B}^{\dagger} \hat{c}_{nA}+\hat{c}_{n+1,A}^{\dagger}\hat{c}_{nB}+h.c.\right)\\ &+\frac{t_{2}}{2}\sum_{n=1}^{N}\left(i\hat{c}_{n+1,A}^{\dagger} \hat{c}_{nA}-i\hat{c}_{n+1,B}^{\dagger}\hat{c}_{nB}+h.c.\right).\end{split} \tag{2}\] Here \(\hat{c}_{n,A/B}^{\dagger}\) (\(\hat{c}_{n,A/B}\)) is the particle creation (annihilation) operator at the \(A/B\)-sublattice of the \(n\)th cell, and \(N\) is the number of unit cells. \(t_{1}\in\mathbb{R}\) is the intracell hopping strength except for the \(m\)th rung with \(\delta\in\mathbb{R}\). \(t_{2}\in\mathbb{R}\) is the intercell hopping. The non-Hermiticity is introduced solely through the local gain/loss on the \(m\)th rung. For convenience, we set \(\gamma>0,t_{2}>0,\delta>0\) without loss of generality and \(t_{1}=1\) as energy unit. The double chain model possesses \(\mathcal{PT}\)-symmetry [39]\((\mathcal{PT})\hat{H}_{dc}(\mathcal{PT})^{-1}=\hat{H}_{dc}\) with \(\mathcal{P}=\bigoplus_{n=1}^{N}\sigma_{n}^{x}\) and \(\mathcal{T}\) the complex conjugate. \(\hat{H}_{dc}\) also has a sublattice symmetry \(\Gamma\hat{H}_{dc}\Gamma^{-1}=-\hat{H}_{dc}\) with \(\Gamma=\bigoplus_{n=1}^{N}\sigma_{n}^{y}\). Here, \(\sigma_{n}^{x}\) and \(\sigma_{n}^{y}\) are Pauli matrices for the \(n\)th unit cell, \(\bigoplus\) is the direct sum. A similarity transformation \(S=\bigoplus_{n=1}^{N}S_{\sigma}^{n}\) with \(S_{\sigma}^{n}=e^{i\frac{\pi}{2}\sigma_{n}^{x}}\) brings the Hamiltonian to the more familiar SSH model as depicted in Fig. 1(b). The explicit form of the non-Hermitian SSH model is written as \[\begin{split}\hat{H}_{SSH}=& t_{1}\sum_{n\neq m} \left(\hat{c}_{nB}^{\dagger}\hat{c}_{nA}+h.c.\right)+\\ & t_{2}\sum_{n=1}^{N}\left(\hat{c}_{n+1,A}^{\dagger}\hat{c}_{nB}+ h.c.\right)+\\ &(\delta+\gamma)\hat{c}_{mA}^{\dagger}\hat{c}_{mB}+(\delta- \gamma)\hat{c}_{mB}^{\dagger}\hat{c}_{mA}.\end{split} \tag{3}\] Clearly, the local gain/loss term is transformed to the nonreciprocal hopping term inside the \(m\)th unit cell. The symmetries ensure that the eigenvalues of \(\hat{H}_{dc}\) and \(\hat{H}_{SSH}\) appear in quartet of \((E,-E,E^{*},-E^{*})\) (See Appendix A). In the following, we focus on the double chain model \(\hat{H}_{dc}\) and analyze its spectral properties. Using the method developed in Ref. [8] (See Appendix B), the eigenenergies can be expressed as \[E=\pm\sqrt{2t_{1}t_{2}\cos\theta+t_{1}^{2}+t_{2}^{2}}, \tag{4}\] with the complex variable \(\theta\) determined by the condition \[\sin[(N+1)\theta]+\eta_{3}\sin(N\theta)-\eta_{2}\sin[(N-1)\theta]-\eta_{1} \sin\theta=0. \tag{5}\] Here \(\eta_{1}=\frac{2\delta}{t_{1}}\), \(\eta_{2}=\frac{\delta^{2}-\gamma^{2}}{t_{1}^{2}}\), \(\eta_{3}=\frac{t_{1}^{2}-\delta^{2}+\gamma^{2}}{t_{1}t_{2}}\). 
The eigenfunctions are \(\Psi=S^{-1}(...,\overline{\psi}_{n,A},\overline{\psi}_{n,B},...)\), taking the superposition form: \[\begin{split}\overline{\psi}_{n,A}&=\overline{c}_{1} e^{i(N-\tilde{n})\theta}\overline{\phi}_{A}^{(1)}+\overline{c}_{2}e^{-i(N-\tilde{n}) \theta}\overline{\phi}_{A}^{(2)},\\ \overline{\psi}_{n,B}&=\overline{c}_{1}e^{i(N-\tilde{ n}+1)\theta}\overline{\phi}_{B}^{(1)}+\overline{c}_{2}e^{-i(N-\tilde{n}+1) \theta}\overline{\phi}_{B}^{(2)}.\end{split} \tag{6}\] Here \(\tilde{n}\) is the distance from the impurity from the left side [40]. It is clear that the imaginary part of \(\theta\) determines the localization properties of eigenfunctions. To grasp the main physics, we first consider the \(t_{2}=t_{1}=t\) case where the eigenvalues reduce to \[E=\pm 2t\cos\frac{\theta}{2}, \tag{7}\] and leave the discussions on generic cases to the Appendix C. Figure 1: (a) Sketch of the double chain model described in Eq. (1). The red and green circles represent the \(A\) and \(B\) sublattice, respectively. The gain and loss terms are added only in the \(m\)th unit cell. (b) SSH model with nonreciprocal hopping on a single bond. The two models are related by a similarity transformation. ### Sequential breaking of \(\mathcal{PT}\)-symmetry and spectral coalescence We investigate the evolution of energy spectra and the \(\mathcal{PT}\)-transition with respect to varying gain/loss strength \(\gamma\) by solving Eq. (5). The phase diagram is summarized in Fig. 2(A). There exist three distinct regimes, dubbed \(\mathcal{PT}\)-unbroken, \(\mathcal{PT}\)-broken, and \(\mathcal{PT}\)-restoration, respectively. Their boundaries are determined by \[\gamma_{c_{1}}=|\delta-t|,\ \ \ \ \gamma_{c_{2}}=\delta+t, \tag{8}\] as marked in gray lines in Fig. 2(A). As an example, Fig. 2(B) plots the spectra versus \(\gamma\) with fixed \(\delta=4t\). When \(\gamma<\gamma_{c_{1}}\), Eq. (5) has \((N-1)\) real roots corresponding to \(2(N-1)\) extended bulk states, and a purely imaginary root corresponding to a pair of real-energy bound states residing at the impurity rung [see Figs. 2(a1),(a2)]. In this regime, all eigenvalues are real, and the system is in the \(\mathcal{PT}\)-unbroken phase. Increasing \(\gamma\) to exceed \(\gamma_{c_{1}}\), \(\theta\) starts to take complex roots and the corresponding eigenvalues become complex. The system enters into the \(\mathcal{PT}\)-broken phase. The number of real roots of \(\theta\) shrinks first, and reaches its minimum at \(\gamma=\gamma_{a}\) with \(\gamma_{a}=\sqrt{\delta^{2}-t^{2}}\) [See the pink line in Fig. 2(A)] and then increases. The \(\mathcal{PT}\)-symmetry breakings start from the band center at \(\mathrm{Re}(E)=0\) to the band edges for \(\gamma_{c_{1}}<\gamma<\gamma_{a}\), through a sequence of exceptional points where two nearby real eigenvalues coalesce. For \(\gamma_{a}<\gamma<\gamma_{c_{2}}\), two complex eigenvalues coalesce again and their eigenstates restore the \(\mathcal{PT}\)-symmetry. When \(\gamma>\gamma_{c_{2}}\), we regain \((N-1)\) real roots and a complex root for \(\theta\). They correspond to \(2(N-1)\) extended bulk states of real eigenvalues and two bound states of purely imaginary eigenvalues, as shown in Figs. 2(e1),(e2). We dub this regime the \(\mathcal{PT}\)-restoration phase. In the \(\mathcal{PT}\)-unbroken regime with \(\delta<t\), no bound states exist as Eq. (5) has \(N\) real roots. 
Notably at \(\delta=t\), an arbitrarily small gain/loss or non-reciprocity induces the \(\mathcal{PT}\)-symmetry breakings and drastically changes all the eigenstates as will be discussed later. ### Scale-free localization We proceed to consider the spatial distributions of eigenstates in the \(\mathcal{PT}\)-broken regime \(\gamma_{c_{1}}<\gamma<\gamma_{c_{2}}\), where complex eigenvalues (corresponding to complex roots of \(\theta\)) emerge. We start from the \(\gamma=\gamma_{a}\) case. There are \(2(N-1)\) complex eigenvalues and two real eigenvalues forming an oval on the complex-energy plane [See Figure 2: (A) Phase diagram of the double chain model Eq. (1). The phase boundaries (in gray lines) between the three regimes are given by Eq. (8). The number of complex eigenenergies is coded in colors for lattice size \(2N=40\). At \(\gamma=\gamma_{a}=\sqrt{\delta^{2}-t^{2}}\) (pink line), there are \(N_{\mathrm{Im}}=2(N-1)\) complex eigenenergies and all eigenstates are scale-free localized (SFL). (B) Energy spectra with respect to \(\gamma\) for \(\delta=4t\) (the brown line in (A)). The real/imaginary parts of eigenenergies are marked in red/cyan. (a1-e1) Spectra on the complex-energy plane for \(\gamma=1,\ 3.2,\ \sqrt{15},\ 4.3,\ 6.5\), corresponding to dots ’a-e’ in (A), respectively. (a2-e2) Spatial profiles of all eigenstates for the same parameters as (a1-e1). The inset in (c2) plots the wave function for the related SSH model \(\hat{H}_{SSH}\). In (a1-e1, a2-e2), the extended/bound/SFL states are marked in magenta/blue/green, respectively. The impurity rung is set at \(m=11\). Fig. 2(c1)]. The \(\theta\)-solutions are \[\theta=\theta_{R}+i\theta_{I}=\frac{2l\pi}{N}-i\frac{\log\mu}{N}\] with \(\mu=\delta/t+\sqrt{(\delta/t)^{2}-1}\), \(l=0,1,\cdots,N-1\). Therefore, we have \[|\mathrm{Im}(E)|=(\mu^{1/(2N)}-\mu^{-1/(2N)})|\sin\theta_{R}|\approx\frac{\log \mu}{N}|\sin\theta_{R}|.\] Obviously, the local non-Hermitian term contributes a \(1/N\)-order correction to the imaginary part of the \(\theta\)-roots as well as the eigenenergies. Further, all eigenstates have the same spatial distributions. Formally, the moduli of all wave functions are \[|\overline{\psi}_{x}|=\left\{\begin{array}{ll}\mu^{\frac{x-x_{mA}}{2N}+1},&x \leq x_{mA};\\ \mu^{\frac{x-x_{mA}}{2N}},&x>x_{mA}.\end{array}\right. \tag{9}\] Here \(x_{mA}=2m-1\), \(x=2n-1\) or \(2n\) represent the A or B sublattice of the \(n\)th unit cell. The localization length of these wave functions is \[\xi=\frac{2N}{\log\mu}, \tag{10}\] which is proportional to the system size. As plotted in Fig. 2(c2), the spatial profiles of all eigenstates decay away from the impurity in a unidirectional way. The linear dependence of \(\xi\) on the system size suggests that such unidirectional accumulation is the scale-free localization [38; 41]. Note the difference from the usual non-Hermitian skin modes of finite localization length independent of \(N\). As a striking feature, the rescaled spatial profiles of SFL states (by the system size) stay intact varying system sizes, as depicted in Fig. 3(a). The emergence of SFL states is not limited to the special parameter \(\gamma=\gamma_{a}\). When \(\gamma\) deviates from \(\gamma_{a}\), the extended states associated with real eigenvalues and SFL states associated with complex eigenvalues coexist, as shown in Figs. 2(b1),(b2),(d1),(d2). Figure 3(b) further plots the rescaled profiles of a chosen complex-energy eigenstate for \(\gamma\neq\gamma_{a}\). 
The scale-free localization accompanied by the \(\mathcal{PT}\)-symmetry breaking can be understood from the dispersion relation Eq. (4). Heuristically, the local non-Hermitian term contributes a \(1/N\)-order correction to both the imaginary part of eigenenergies and roots of \(\theta\) in the wave functions, yielding localization length of the order of system size \(N\). In the \(\mathcal{PT}\)-broken regime, the extended and SFL states are separated by mobility edges. They can be distinguished by an _ad hoc_ quantity \[\chi=\frac{\sum_{x\in\mathrm{left}}|\overline{\psi}_{x}|^{2}}{\sum_{x\in \mathrm{right}}|\overline{\psi}_{x}|^{2}}, \tag{11}\] where \(x\in\mathrm{left}/\mathrm{right}\) labels lattice sites on the left/right half side of the impurity. The positions of mobility edges can be read out from the discontinuity of \(\chi\) (for extended states, \(\chi\approx 1\) and for SFL states \(\chi>1\)), as shown in Fig. 3(c). For the general case of \(t_{1}\neq t_{2}\), SFL states appear after the \(\mathcal{PT}\)-transition. A full scale-free localization of all eigenstates occurs when \(\gamma=\sqrt{\delta^{2}-t_{1}^{2}}\) (See Appendix C). ## III Generality of SFL states The above double-stranded or SSH model is for illustrative purposes. Roughly, the \(\mathcal{PT}\)-symmetry imposes a threshold of the strength of non-Hermiticity to induce scale-free localization. Yet, we emphasize that the local non-Hermiticity-induced scale-free localization is a general phenomenon. It exists in a much broader context, regardless of the \(\mathcal{PT}\)-symmetry and coalescence of extended eigenstates or even a priori Bloch-band description of the underlying Hermitian system. In this section, we demonstrate that the SFL states can be induced by a single lossy impurity and may survive even when incommensurate lattice potential is added. ### Scale-free localization induced by local on-site imaginary potential We consider the model of a closed chain with the local non-Hermiticity given by an imaginary on-site potential (a single lossy impurity). Explicitly, the Hamiltonian of the single-impurity model is given by \[\hat{H}=\sum_{n=1}^{L}\left[t(\hat{c}_{n+1}^{\dagger}\hat{c}_{n}+h.c.)+i\gamma \hat{c}_{m}^{\dagger}\hat{c}_{m}\right]. \tag{12}\] Figure 3: (a) Rescaled spatial distributions of all eigenstates for the SSH model in Fig. 1(b) with \(\gamma=\gamma_{a}\). (b) Rescaled spatial distribution of the eigenstate with the largest imaginary part of eigenvalue for various system sizes, \(\gamma=3.4\). The localization lengths divided by system sizes are equal. (c) Mobility edges (dashed lines) extracted from the quantity \(\chi\) for different eigenstates, \(\gamma=3.2\), \(2N=80\). For (a)-(c), \(\delta=4t\), and the impurity resides at \(m=N/2+1\) for various system sizes. This model can be analytically solved by following the same method in Refs. [8; 27]. (The detailed derivation is given in Appendix D.) The eigenvalues are given by \[E=2t\cos\theta, \tag{13}\] where \(\theta\) is determined by \[\sin(L\theta/2)\left[2t\sin\theta\sin(L\theta/2)+i\gamma\cos(L\theta/2)\right]=0. \tag{14}\] The wave function of the single-impurity model can be obtained as \(\Psi=(\psi_{1},\psi_{2},\cdots,\psi_{m},\cdots,\psi_{L-1},\psi_{L})^{T}\) with the superposition form: \[\psi_{n}=\left\{\begin{array}{l}c_{1}e^{i(L-m+n)\theta}+c_{2}e^{-i(L-m+n) \theta},1\leq n\leq j;\\ c_{1}e^{i(n-m)\theta}+c_{2}e^{-i(n-m)\theta},\quad j<n\leq L.\end{array}\right. 
\tag{15}\] Obviously, the localization properties of eigenfunctions are determined by the imaginary part of \(\theta\). There are two types of solutions for the Eq. (14). The first type is from \[\sin(L\theta/2)=0. \tag{16}\] The roots of Eq. (16) are \(\theta=\frac{2l\pi}{L}\) with \(l=1,2,\cdots,L/2-1\) for even \(L\), and \(l=1,2,\cdots,(L-1)/2\) for odd \(L\). Thus there are \(L/2-1\) real eigenenergies for \(L\in\) even, and \((L-1)/2\) real eigenenergies for \(L\in\) odd. Their corresponding eigenstates with odd-parity are all extended and unaffected by the local on-site imaginary potential. The other eigenstates come from the second type of solutions: \[2t\sin\theta\sin(L\theta/2)+i\gamma\cos(L\theta/2)=0, \tag{17}\] and the corresponding roots \(\theta\) are complex denoted as \(\theta=\theta_{R}+i\theta_{I}\). Equation (17) has a root with \(\theta_{I}\propto L^{0}\) only if \(\gamma>2t\). Explicitly, the solution is written as \[\theta=\frac{\pi}{2}+i\text{arcosh}\big{(}\frac{\gamma}{2t}\big{)}, \tag{18}\] which is associated with a bound state. In short, there is only a bound state with \(\theta=\frac{\pi}{2}+\text{arcosh}\big{(}\frac{\gamma}{2t}\big{)}\) when \(\gamma>2t\), as shown in Fig. 4. For the other complex roots of Eq. (17), their imaginary part satisfy \(\theta_{I}\propto\frac{1}{L}\). In another word, the localization length of these eigenstates (except for bound states) with complex roots is proportional to the system size \(\xi\propto L\), and these eigenstates are SFL states. In Fig. 4(A), we show the energy spectra of the single-impurity model as \(\gamma/t\) varies. A bound state appears when \(\gamma>2t\) as expected. The energy spectra and spatial profiles of the eigenstates with \(\gamma=1,\ 2,\ 2.05\) are plotted respectively in Figs. 4(a1)-(c1) and 4(a2)-(c2). Clearly, nearly half of all eigenstates are SFL or extended, consistent with our exact solutions. Almost all of the SFL states accumulate around the impurity, except for a pair of states with a very small imaginary part of eigenvalue, as displayed in the insets of Figs. 4(a2)-(c2). Therefore, for the single-impurity model, only half of all eigenstates determined by Eq. (17) are affected by the impurity, and the other half determined by Eq. (16) are irrelevant to the impurity strength due to their odd parity. Except for a bound state, the other even-parity states become SFL states. ### The non-Hermitian AA model with local nonreciprocal hopping Now we demonstrate that the SFL states can survive even when incommensurate lattice potential is added. To be explicit, we consider the 1D quasiperiodic lattice described by the non-Hermitian AA model with a local non-Hermitian term. The Hamiltonian is \[\hat{H}=\hat{H}_{AA}+\hat{H}_{NH}, \tag{19}\] Figure 4: (A) Energy spectra versus \(\gamma/t\) for the single-impurity model on a closed chain. Red/cyan curves lines represent real/imaginary parts of eigenenergies. The dotted brown line separates the two regions with or without a bound state. (a1)-(c1) Energy spectra on the complex energy plane for the single-impurity model with \(\gamma=1,\ 2,\ 2.05\), respectively. (a2)-(c2) Spatial distributions of the chosen eigenstates marked by green circles in (a1)-(c1). The insets in (a2)-(c2) plot the wave functions of the chosen eigenstates marked by black circles in (a1)-(c1). The parameters are chosen as \(L=40\), \(m=20\), \(t=1\). 
with \[\hat{H}_{AA}=\sum_{n=1}^{L}\left[t(\hat{c}_{n+1}^{\dagger}\hat{c}_{n}+h.c.)+2 \lambda\cos(2\pi\alpha n)\hat{c}_{n}^{\dagger}\hat{c}_{n}\right], \tag{20}\] here \(\alpha=(\sqrt{5}-1)/2\) is an irrational number. In the following, we discuss the case with different forms of the local non-Hermitian term separately by using examples such as nonreciprocal hopping or imaginary on-site potential. Here, we consider the non-Hermitian AA model of (19) with the local non-Hermiticity given by nonreciprocal hopping \[\hat{H}_{NH}=(\delta+\gamma-t)\hat{c}_{m}^{\dagger}\hat{c}_{m+1}+(\delta- \gamma-t)\hat{c}_{m+1}^{\dagger}\hat{c}_{m}. \tag{21}\] The case of \(\lambda=0\) has been studied in Fig. 2. Note that the AA model \(\hat{H}_{AA}\) undergoes a delocalization/localization phase transition at the critical strength of quasiperiodic potential \(\lambda=t\)[42]. We expect the SFL states to survive in the delocalization regime, for which all the unperturbed eigenstates of \(\hat{H}_{AA}\) are extended. In particular, when \(\gamma=\sqrt{\delta^{2}-t^{2}}\), a full scale-free localization persists for all eigenstates even in the presence of on-site incommensurate potential, as verified by our numerical results in Fig. 5. The eigenenergies and their associated SFL eigenstates which decay away from the impurity are shown in Figs. 5(a),(b) with \(\lambda=0.1\), and in Figs. 5(c),(d) with \(\lambda=0.2\). In Figs. 6(a),(b), we further plot the rescaled spatial profiles of the eigenstate with the largest imaginary part of eigenenergies for various system sizes. They coincide with each other, indicating their scale-free nature. In contrast, when \(\lambda\) lies in the localized regime, the eigenstates of the non-Hermitian AA model are insensitive to the non-Hermiticity, and no SFL state is observed. As shown in Figs. 6(c),(d) with a large incommensurate potential \(\lambda=1.4\), all eigenstates become localized. ### The non-Hermitian AA model with local on-site imaginary potential Next we consider the non-Hermitian AA model of (19) with the local non-Hermiticity described by the local on-site imaginary potential, i.e., \[\hat{H}_{NH}=i\gamma\hat{c}_{m}^{\dagger}\hat{c}_{m}. \tag{22}\] The case of \(\lambda=0\) reduces to the model of (12). For \(\lambda\neq 0\), we can numerically diagonalize the Hamiltonian. Our numerical results verify that the SFL states can survive for small incommensurate potential strength \(\lambda\). In Fig. 7, we present the numerical results for four typical \(\lambda\) (i.e., the strength of incommensurate potential) for the non-Hermitian AA model with imaginary on-site potential. We can clearly see that the SFL states survive a finite \(\lambda\). Then we show the rescaled spatial distributions in Fig. 8 for the specific eigenstate with the largest imaginary of eigenvalue. The spatial distributions for different system sizes almost coincide with each other, indicating that they are SFL states. Figure 6: (a)(b) Rescaled spatial distributions of the eigenstates with the largest imaginary part of eigenenergies for the non-Hermitian AA model with local nonreciprocal hopping. The system size takes \(L=20,40,80\), and the impurity is located at \(m=L/2+1\). (a) \(\lambda=0.05\); (b) \(\lambda=0.1\). (c)(d) Energy spectra and spatial distributions of several chosen eigenstates for the non-Hermitian AA model with local nonreciprocal hopping for \(\lambda=1.4\), \(L=34\), \(m=17\). Common parameters: \(t=1\), \(\delta=8\), \(\gamma=\sqrt{\delta^{2}-t^{2}}=\sqrt{63}\). 
Figure 5: (a)(b) Energy spectra and spatial distributions of all the eigenstates for the non-Hermitian AA model with local nonreciprocal hopping for \(\lambda=0.1\). (c)(d) Energy spectra and spatial distributions of all the eigenstates for the non-Hermitian AA model with local nonreciprocal hopping for \(\lambda=0.2\). Common parameters: \(L=34\), \(m=17\), \(t=1\), \(\delta=8\), \(\gamma=\sqrt{\delta^{2}-t^{2}}=\sqrt{63}\). ## IV Conclusions and Discussions To summarize, we have unveiled the emergence of local non-Hermiticity-induced SFL states by presenting exact solutions for the double chain model. The non-Hermitian term drives the system through a sequence of \(\mathcal{PT}\)-symmetry breakings, accompanied by the appearance of complex eigenenergies and SFL states. Mobility edges separate the residual extended states and SFL states till a full scale-free localization occurs. We have further demonstrated the generality and robustness of local non-Hermiticity-induced scale-free localization regardless of, e.g., the \(\mathcal{PT}\)-symmetry and the incommensurate lattice potential. The lattice Hamiltonian with local non-Hermiticity (including both the non-rejeciprocity and gain/loss) should be readily implemented in various classical/quantum simulation platforms like electric circuits [43; 44], optical [45; 46; 47] or acoustic cavities [48], quantum walks [26; 49; 50], and cold atoms [51; 52; 53; 54]. The induced scale-free localization could thus be identified through the spectral measurement and spatial distributions of eigenstates in these platforms. Our results indicate that local non-Hermiticity drastically alters the bulk spectral properties. The next step is to investigate its influence on the macroscopic observables, phase transitions, and dynamical properties. Other important issues include extending the studies to higher dimensions and continuum systems (without lattices), and exploring the intriguing interplay between local non-Hermiticity, long-ranged couplings, many-body interactions, and other localization mechanisms. ###### Acknowledgements. We thank L. Li for helpful discussions. This work is supported by National Key Research and Development Program of China (Grant No. 2021YFA1402104 and Grant No. 2022YFA1405800), the NSFC under Grants No. 11974413, No. 12174436 and No. T2121001, and the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB33000000. H. H. is also supported by the start-up grant of IOP, CAS. ## Appendix A Symmetry analysis of the non-Hermitian SSH model The non-Hermitian Su-Schrieffer-Heeger (SSH) model (3) is related to the double chain model through the similarity transformation \(S\hat{H}_{de}S^{-1}=\hat{H}_{SSH}\). The \(\mathcal{PT}\)-symmetry and sublattice symmetry of the double chain model correspond respectively to pseudo-Hermitian symmetry and sublattice symmetry of the non-Hermitian SSH model. The pseudo-Hermitian symmetry which guarantees that the eigenvalues appear in \((E,E^{*})\) pair takes \[\eta\hat{H}_{SSH}\eta^{-1}=\hat{H}_{SSH}^{\dagger}, \tag{10}\] where \[\eta=\bar{I}_{L_{1}\times L_{1}}\bigoplus\bar{I}_{L_{2}\times L_{2}}, \tag{11}\] and \(\bar{I}_{L_{1/2}\times L_{1/2}}\) denotes the \(L_{1/2}\times L_{1/2}\) matrix whose sub-diagonal entries are all \(1\) and other entries are \(0\). Explicitly, we have \(L_{1}=2(2m-1)\), \(L_{2}=2(N-2m+1)\) if \(m\leq\frac{N+1}{2}\) while \(L_{1}=2(2m-N-1)\), \(L_{2}=2(2(N-m)+1)\) for \(m>\frac{N+1}{2}\). 
The sublattice symmetry reads \[\overline{\Gamma}\hat{H}_{SSH}\overline{\Gamma}^{-1}=-\hat{H}_{SSH} \tag{12}\] Figure 8: Rescaled spatial distributions of the eigenstate with the largest imaginary part of eigenenergy for the non-Hermitian AA model with an on-site imaginary potential. The system sizes take \(L=60,80,100\). (a) \(\lambda=0\); (b) \(\lambda=0.05\); (c) \(\lambda=0.1\). Other parameters are \(m=L/2\), \(t=1\), \(\gamma=1\). Figure 7: (a1)-(d1) Energy spectra of the non-Hermitian AA model with local on-site imaginary potential for \(\lambda=0,\ 0.05,\ 0.1,\ 0.15\), respectively. (a2)-(d2) Spatial distributions of the ten chosen eigenstates with largest imaginary parts of eigenenergies (marked by green dots in (a1)-(d1)) for \(\lambda=0,\ 0.05,\ 0.1,\ 0.15\), respectively. Common parameters: \(L=40\), \(m=20\), \(t=1\), \(\gamma=1\). with \(\overline{\Gamma}=\bigoplus_{n=1}^{N}\sigma_{z}^{n}\). It ensures the eigenvalues appear in \((E,-E)\) pair. Therefore, the eigenvalues of \(\hat{H}_{SSH}\) and \(\hat{H}_{dc}\) appear in quartet of \((E,-E,E^{*},-E^{*})\). ## Appendix B Exact solutions of the double chain model ### Solutions for the generic case Here we detail the exact solutions of the two models \(\hat{H}_{dc}\) and \(\hat{H}_{SSH}\) depicted in Fig. 1. For the double chain model, the eigenvalue equation is \[\hat{H}_{dc}|\Psi\rangle=E|\Psi\rangle, \tag{10}\] with \[|\Psi\rangle=\sum_{n=1}^{N}(\psi_{n,A}\hat{c}_{nA}^{\dagger}+\psi_{n,B}\hat{c}_ {nB}^{\dagger})|0\rangle. \tag{11}\] In its components, \[\Psi=(\psi_{1A},\psi_{1B},\psi_{2A},\cdots,\psi_{mA},\psi_{mB},\cdots,\psi_{NA},\psi_{NB})^{T}. \tag{12}\] For the non-Hermitian SSH model, the eigenvalue equation is \[\hat{H}_{SSH}|\overline{\Psi}\rangle=\overline{E}|\overline{\Psi}\rangle, \tag{13}\] where \[\overline{\Psi}=(\overline{\psi}_{1A},\overline{\psi}_{1B},\overline{\psi}_{2 A},\cdots,\overline{\psi}_{mA},\overline{\psi}_{mB},\cdots,\overline{\psi}_{NA}, \overline{\psi}_{NB})^{T}. \tag{14}\] As mentioned in the main text, the two models are related by a similarity transformation. Hence they have the same energy spectra \(E=\overline{E}\), with their wave functions related by the similar transformation, i.e., \[|\Psi\rangle=S^{-1}|\overline{\Psi}\rangle. \tag{15}\] In the following, we focus on the non-Hermitian SSH model and obtain its solutions. Formally, for the bulk lattice sites, the eigenvalue equation takes \[t_{1}\overline{\psi}_{sA}-E\overline{\psi}_{sB}+t_{2}\overline{\psi}_{s+1,A}=0, \tag{16}\] with \(s=1,\cdots,m-1,m+1,\cdots,N\), and \[t_{2}\overline{\psi}_{sB}-E\overline{\psi}_{s+1,A}+t_{1}\overline{\psi}_{s+1, B}=0, \tag{17}\] with \(s=1,\cdots,m-2,m,\cdots,N\). For the impurity site, we have \[(\delta-\gamma)\overline{\psi}_{mA}-E\overline{\psi}_{mB}+t_{2} \overline{\psi}_{m+1,A} = 0, \tag{18}\] \[t_{2}\overline{\psi}_{m-1,B}-E\overline{\psi}_{mA}+(\delta+ \gamma)\overline{\psi}_{mB} = 0. \tag{19}\] We take an ansatz wave function satisfying the bulk Eqs. (16)(17) as follows: \[\overline{\psi}_{i}= (z_{i}^{N-m+1}\overline{\phi}_{A}^{(i)},z_{i}^{N-m+2}\overline{ \phi}_{B}^{(i)},z_{i}^{N-m+2}\overline{\phi}_{A}^{(i)},z_{i}^{N-m+3}\overline{ \phi}_{B}^{(i)}, \tag{20}\] \[\cdots,z_{i}^{N}\overline{\phi}_{A}^{(i)},z_{i}\overline{\phi}_ {B}^{(i)},\cdots,z_{i}^{N-m}\overline{\phi}_{A}^{(i)},z_{i}^{N-m+1}\overline{ \phi}_{B}^{(i)})^{T}.\] Inserting the ansatz into Eqs. 
(16),(17) yields the expression of eigenvalue in terms of \(z_{i}\): \[E=\pm\sqrt{\frac{t_{1}t_{2}}{z_{i}}+t_{1}t_{2}z_{i}+t_{1}^{2}+t_{2}^{2}}, \tag{21}\] and the relation between \(\overline{\phi}_{A}^{(i)}\) and \(\overline{\phi}_{B}^{(i)}\): \[\overline{\phi}_{B}^{(i)}=\frac{E}{(t_{2}+t_{1}z_{i})}\overline{\phi}_{A}^{(i )}=\frac{(t_{1}+t_{2}z_{i})}{Ez_{i}}\overline{\phi}_{A}^{(i)}. \tag{22}\] Obviously, there are two solutions \(z_{i}\) (denoted as \(z_{1},z_{2}\)) for a given \(E\) from Eq. (21) satisfying the constraint: \[z_{1}z_{2} = 1. \tag{23}\] The eigenfunction in general takes the form of the superposition: \[\overline{\Psi}= \overline{c}_{1}\overline{\Psi}_{1}+\overline{c}_{2}\overline{ \Psi}_{2} \tag{24}\] \[\equiv (\overline{\psi}_{1A},\overline{\psi}_{1B},\overline{\psi}_{2A}, \cdots,\overline{\psi}_{mA},\overline{\psi}_{mB},\cdots,\overline{\psi}_{NA}, \overline{\psi}_{NB})^{T},\] where \[\overline{\psi}_{n,A}=\left\{\begin{array}{l}\sum_{i=1}^{2}( \overline{c}_{i}z_{i}^{N-m+n}\overline{\phi}_{A}^{(i)}),1\leq n\leq m;\\ \sum_{i=1}^{2}(\overline{c}_{i}z_{i}^{n-m}\overline{\phi}_{A}^{(i)}),m<n\leq N ;\end{array}\right. \tag{25}\] \[\overline{\psi}_{n,B}=\left\{\begin{array}{l}\sum_{i=1}^{2}( \overline{c}_{i}z_{i}^{N-m+1+n}\overline{\phi}_{B}^{(i)}),1\leq n<m;\\ \sum_{i=1}^{2}(\overline{c}_{i}z_{i}^{n-m+1}\overline{\phi}_{B}^{(i)}),m\leq n \leq N.\end{array}\right.\] Further substituting Eq. (24) into the impurity conditions Eqs. (18),(19) and combining Eqs. (21),(22), we obtain the constraints on the superposition coefficients: \[\overline{H}_{B}\left(\begin{array}{c}\overline{c}_{1}\\ \overline{c}_{2}\end{array}\right)=0 \tag{26}\] with \[\overline{H}_{B}=\left(\begin{array}{cc}(t_{1}-(\delta-\gamma)z_{1}^{N}) \overline{\phi}_{A}^{(1)}&(t_{1}-(\delta-\gamma)z_{2}^{N})\overline{\phi}_{A}^ {(2)}\\ (t_{1}z_{1}^{N}-(\delta+\gamma))z_{1}\overline{\phi}_{B}^{(1)}&\left(t_{1}z_{2}^{N }-(\delta+\gamma)\right)z_{2}\overline{\phi}_{B}^{(2)}\end{array}\right). \tag{27}\] For nontrivial solutions of \((\overline{c}_{1},\overline{c}_{2})\), \(\det[\overline{H}_{B}]=0\) yields the following condition: \[\begin{split}&\eta_{1}(z_{1}-z_{2})+\eta_{2}(z_{1}^{N-1}-z_{2}^{N-1 })-\eta_{3}(z_{1}^{N}-z_{2}^{N})\\ =&(z_{1}^{N+1}-z_{2}^{N+1}),\end{split} \tag{28}\] where \(\eta_{1}=\frac{2\delta}{t_{1}}\), \(\eta_{2}=\frac{\delta^{2}-\gamma^{2}}{t_{1}^{2}}\), \(\eta_{3}=\frac{t_{2}^{2}-\delta^{2}+\gamma^{2}}{t_{1}t_{2}}\). Equation (28) together with Eq. (23) determines the solutions of \(z_{1}\) and \(z_{2}\). From Eq. (23), we set \(z_{1}=e^{i\theta}\), \(z_{2}=e^{-i\theta}\). The energy spectrum Eq. (21) then becomes \[E=\pm\sqrt{2t_{1}t_{2}\cos\theta+t_{1}^{2}+t_{2}^{2}}. \tag{29}\] And Eq. (149) reduces to \[\sin[(N+1)\theta]+\eta_{3}\sin(N\theta)-\eta_{2}\sin[(N-1)\theta]-\eta_{1}\sin[ \theta]=0. \tag{150}\] Depending on \(\eta_{1}\), \(\eta_{2}\) and \(\eta_{3}\), the solution of \(\theta\) of the above equation may take real or complex values. It is worth discussing the special case with \(\overline{c}_{2}=0\), i.e., the eigenfunction contains only the \(z_{1}\) solution. From the constraint Eq. (148), we have \[z_{1}^{N}=\frac{t_{1}}{(\delta-\gamma)},\ \ \ \ z_{1}^{N}=\frac{(\delta+\gamma)}{t_ {1}}. \tag{151}\] This condition can only be satisfied when \[\gamma^{2}=\delta^{2}-t_{1}^{2}, \tag{152}\] which is equal to \(\gamma=\gamma_{a}=\sqrt{\delta^{2}-t_{1}^{2}}\). 
The solution of \(z_{1}\) is then \[z_{1}=e^{i\theta}=\sqrt[N]{\mu}e^{i\frac{2\mu}{N}}, \tag{153}\] where \(\mu=\frac{\delta}{t_{1}}+\sqrt{(\frac{\delta}{t_{1}})^{2}-1}\), \(l=0,1,2,\cdots,N-1\), and \(\theta=\theta_{R}+i\theta_{I}=\frac{2\pi}{N}-i\frac{\log\mu}{N}\). The energy spectrum is given by \[E=\pm\sqrt{t_{1}t_{2}\bigg{(}\sqrt[N]{\mu}e^{i\theta_{R}}+\frac{1}{\sqrt[N]{ \mu}}e^{-i\theta_{R}}\bigg{)}+t_{1}^{2}+t_{2}^{2}}, \tag{154}\] with \(\theta_{R}=\frac{2l\pi}{N}\) (\(l=0,1,2,\cdots,N-1\)). All \(\theta\)-solutions are complex with \(\theta_{I}=-\frac{\log\mu}{N}\propto\frac{1}{N}\). The eigenvalues are complex except for \(\theta_{R}=0.\) Thus, there are \(2(N-1)\) complex eigenenergies and 2 real energies. The eigenstates can be expressed as \[\overline{\Psi}= [(\sqrt[N]{\mu}e^{i\theta_{R}})^{N-m+1}\overline{\phi}_{A}^{(i)},(\sqrt[N]{\mu}e^{i\theta_{R}})^{N-m+2}\overline{\phi}_{B}^{(i)},\cdots,\] \[(\sqrt[N]{\mu}e^{i\theta_{R}})^{N}\overline{\phi}_{A}^{(i)},\sqrt [N]{\mu}e^{i\theta_{R}}\overline{\phi}_{B}^{(i)},\cdots,\] \[(\sqrt[N]{\mu}e^{i\theta_{R}})^{N-m}\overline{\phi}_{A}^{(i)},( \sqrt[N]{\mu}e^{i\theta_{R}})^{N-m+1}\overline{\phi}_{B}^{(i)}]^{T}. \tag{155}\] As \(|z_{1}|=|\sqrt[N]{\mu}|\neq 1\), the spatial profiles of all eigenstates decay away from the impurity in a unidirectional way. ### Solutions for the case of \(t_{1}=t_{2}\) We specify the simple case with \(t_{1}=t_{2}=t\) in this subsection. Without loss of generality, we set \(t>0,\delta>0,\gamma>0\). For this case, \(\eta_{3}=1-\eta_{2}\). The eigenvalues can be reduced to \[E=\pm 2t\cos\big{(}\frac{\theta}{2}\big{)}. \tag{156}\] Equation (150) reduces to \[\sin[(N+\frac{1}{2})\theta]-\eta_{1}\sin(\frac{\theta}{2})-\eta_{2}\sin[(N- \frac{1}{2})\theta]=0, \tag{157}\] where \(\eta_{1}=\frac{2\delta}{t}\), \(\eta_{2}=\frac{\delta-\gamma^{2}}{t^{2}}\). The solution \(\theta=\theta_{R}+i\theta_{I}\) of Eq. (157) may take real or complex values depending on \(\eta_{1}\), and \(\eta_{2}\). If \(\theta\in\mathbb{R}\), we have \(E\in\mathbb{R}\) and \(|z_{1}|=|z_{2}|=1\), which indicates that the corresponding eigenstate is an extended state. If \(\theta\in\mathbb{C}\), we have \(E\in\mathbb{C}\) (except for \(\theta_{R}=0\)) and \(|z_{1}|\neq 1,\ |z_{2}|\neq 1\), which indicates that the corresponding eigenstate is not extended. By inserting Eq. (156) into Eq. (141), we have \[\overline{\phi}_{B}^{(i)}=\pm z_{i}^{-1/2}\overline{\phi}_{A}^{(i)}. \tag{158}\] Here the "\(\pm\)" sign is consistent with "\(\pm\)" in the expression of \(E\). The ansatz wave function \(\overline{\Psi}_{i}\) can be rewritten as \[\overline{\Psi}_{i}= \big{(}z_{i}^{N-m+1},\pm z_{i}^{N-m+3/2},z_{i}^{N-m+2},\pm z_{i}^{ N-m+5/2},\cdots,\] \[z_{i}^{N},\pm z_{i}^{1/2},\cdots,z_{i}^{N-m},\pm z_{i}^{N-m+1/2 }\big{)}^{T}\overline{\phi}_{A}^{(i)}. \tag{159}\] Obviously, the wave function is extended when \(|z_{i}|=1\). For the superimposed eigenstate described by Eq. (147), its spatial component can be rewritten as \[\overline{\psi}_{n,A}=\bigg{\{} \sum_{i=1}^{2}(c_{i}z_{i}^{N-m+n}),\ \ 1\leq n\leq m;\] \[\overline{\psi}_{n,B}=\bigg{\{} \pm\sum_{i=1}^{2}(c_{i}z_{i}^{N-m+n+1/2}),\ \ 1\leq n<m;\] \[\pm\sum_{i=1}^{2}(c_{i}z_{i}^{n-m+1/2}),\ \ \ \ \ m\leq n\leq N;\] with \(c_{i}=\overline{c}_{i}\overline{\phi}_{A}^{(i)}\). In the following, we analyze the solution of \(\theta\) in Eq. (157) as the non-Hermitian strength \(\gamma\) varies. 
We set \(f_{1}(\theta)=\sin[(N+\frac{1}{2})\theta]-\eta_{2}\sin[(N-\frac{1}{2})\theta]\) and \(f_{2}(\theta)=\eta_{1}\sin(\frac{\theta}{2})\), and Eq. (157) reduces to \[f_{1}(\theta)=f_{2}(\theta). \tag{160}\] The intersections of \(f_{1}\) and \(f_{2}\) determine the real solutions of \(\theta\), as exemplified in Fig. 9. For small \(\gamma\), there are \(N\) real roots in \(\theta\in(0,\pi)\) corresponding to extended bulk states when \(\delta<t\), as depicted in Fig. 9(a), while there are at most \((N-1)\) real roots of \(\theta\) in \(\theta\in(0,\pi)\) when \(\delta>t\), as depicted in Fig. 9(b). As \(\gamma\) increases, the number of intersections of \(f_{1}\) and \(f_{2}\), i.e., the real solutions of \(\theta\), will shrink first, reach its minimum and then increase. The first disappearance and the last reemergence of the intersections occurs at \(\theta=\pi\). Thus, the condition of \(N\) real roots (for \(\delta<t\)) and \((N-1)\) real roots (for \(\delta>t\)) is determined by \(|f_{1}(\theta=\pi)|>|f_{2}(\theta=\pi)|\), yielding \(|1+\eta_{2}|>\eta_{1}\). This condition is satisfied when \[\gamma<\gamma_{c_{1}}\ \ \text{and}\ \ \gamma>\gamma_{c_{2}}, \tag{101}\] with \(\gamma_{c_{1}}=|\delta-t|\) and \(\gamma_{c_{2}}=\delta+t\). As long as Eq. (101) is satisfied, there are at least \((N-1)\) real roots for Eq. (101). Further, we discuss the region with all complex \(\theta\)-solutions. As \(\gamma\) increases, the last disappearance and the first reemergence of the intersections occurs nearly \(\theta=0\). Thus, the condition of \(N\) complex roots is determined by \(|f_{1}^{\prime}(\theta=0)|<|f_{2}^{\prime}(\theta=0)|\) with \(f_{i}^{\prime}=\frac{\partial f_{i}}{\partial\theta}\), giving rise to \(|N(1-\eta_{2})+\frac{1}{2}(1+\eta_{2})|<\eta_{1}\). When \(\eta_{2}=1\) (i.e., \(\gamma=\gamma_{a}=\sqrt{\delta^{2}-t^{2}}\)), the condition always is satisfied independent of \(N\). In fact, this condition is satisfied in a narrow region near \(\gamma=\gamma_{a}\) for finite \(N\), and the narrow region shrinks to \(\gamma=\gamma_{a}\) in the thermodynamic limit. We proceed to study the \(\mathcal{PT}\)-transitions of the system as \(\gamma\) increase for fixed \(t\) and \(\delta\). There are three difference cases as listed below. (i) \(\mathcal{PT}\)**-unbroken regime, \(\gamma<\gamma_{c_{1}}\)**. (1) First, if \(\delta>t\) and \(\gamma=0\) (Hermitian limit), Eq. (101) has \((N-1)\) real roots corresponding to extended bulk states and a complex root with \(\theta_{R}=0\) corresponding to a pair of bound states residing at the impurity. As expected for the Hermitian impurity, all eigenenergies are real. There are \((2N-2)\) extended bulk states except for a pair of bound states. The scenario persists even for \(\gamma\neq 0\), provided that \(\gamma<\gamma_{c_{1}}\) is satisfied. (2) Second, if \(\delta<t\), as long as \(\gamma<\gamma_{c_{1}}\), Eq. (101) has \(N\) real roots corresponding to extended bulk states (no bound state exists), and all eigenvalues are real. Combining (1)(2), all eigenvalues are real when \(\gamma<\gamma_{c_{1}}\), the system is in the \(\mathcal{PT}\)-unbroken phase, with at least \(2(N-1)\) extended bulk states and at most \(2\) bound states. (ii) \(\mathcal{PT}\)**-broken regime, \(\gamma_{c_{1}}<\gamma<\gamma_{c_{2}}\)**. Increasing \(\gamma\) to enter this regime, the number of real roots shrinks first, reaches its minimum and then increases. 
The complex roots of \(\theta\) give rise to complex eigenenergies and the system is in the \(\mathcal{PT}\)-broken phase. The number of complex eigenenergies reaches its maximum \(2(N-2)\) at \(\gamma=\gamma_{a}\equiv\sqrt{\delta^{2}-t^{2}}\). Based on the discussions in the previous section, the solutions of \(z_{1}\) are \(z_{1}=e^{i\theta}=\sqrt[3]{\!\!\!\!/}\mu e^{i\frac{3\pi}{N}}\), \((l=0,1,2,\cdots,N-1)\), with \(\mu=\frac{\delta}{t}+\sqrt{(\frac{\delta}{t})^{2}-1}\), and \(\theta=\theta_{R}+i\theta_{I}=\frac{2l\pi}{N}-i\frac{\log\mu}{N}\). The eigenenergies are given by \(E=\pm 2t\cos[\frac{2l\pi}{N}-i\log\left(\sqrt[3]{\!\!\!\!/}\mu\right)]\), and we have \(|\text{Im}(E)|=(\mu^{1/(2N)}-\mu^{-1/(2N)})|\sin\theta_{R}|\approx\frac{\log \mu}{N}|\sin\theta_{R}|\). Obviously, the local non-Hermitian term contributes a \(1/N\)-order correction to the imaginary part of the \(\theta\)-roots as well as the eigenenergies (except for \(\theta_{R}=0\)). The associated wave function \(\overline{\Psi}\) for the non-Hermitian SSH model is \[\overline{\Psi}\sim\left(\begin{array}{c}\left(\sqrt[3]{\!\!\!\!/}\mu e^{i\theta_{R}}\right)^{N-m+1}\\ \pm(\sqrt[3]{\!\!\!\!/}\mu e^{i\theta_{R}})^{N-m+3/2}\\ \pm(\sqrt[3]{\!\!\!\!/}\mu e^{i\theta_{R}})^{N-m+5/2}\\ \vdots\\ \left(\sqrt[3]{\!\!\!\!/}\mu e^{i\theta_{R}}\right)^{N-1}\\ \pm(\sqrt[3]{\!\!\!\!/}\mu e^{i\theta_{R}})^{N-1/2}\\ \left(\sqrt[3]{\!\!\!\!/}\mu e^{i\theta_{R}}\right)^{N}\\ \pm(\sqrt[3]{\!\!\!\!/}\mu e^{i\theta_{R}})^{1/2}\\ \left(\sqrt[3]{\!\!\!\!/}\mu e^{i\theta_{R}}\right)^{3/2}\\ \vdots\\ \left(\sqrt[3]{\!\!\!\!/}\mu e^{i\theta_{R}}\right)^{N-m}\\ \pm(\sqrt[3]{\!\!\!\!/}\mu e^{i\theta_{R}})^{N-m+1/2}\end{array}\right). \tag{102}\] Thus all wave functions have the same spatial profiles: \[|\overline{\psi}_{x}|=\left\{\begin{array}{ll}\mu^{\frac{x-x_{m}A}{2N}+1},&x \leq x_{mA};\\ \mu^{\frac{x-x_{m}A}{2N}},&x>x_{mA};\end{array}\right. \tag{103}\] where \(x=1,\cdots,2N\), and \(x_{mA}=2m-1\). Denote \(\xi\) as the localization length of the eigenstate: \(|\overline{\psi}_{x}|\sim e^{\frac{x-x_{m}A}{2N}}\). It is easy to see \(\xi=\frac{2N}{\log\mu}\), which is proportional to the system size. These eigenstates are dubbed scale-free localized (SFL) states in the main text, which decay away from the impurity in a unidirectional way. They differ from the usual non-Hermitian skin modes that have finite localization length even when \(N\rightarrow\infty\). 
The eigenstates of the double chain model can then be obtained by the transformation \(|\Psi\rangle=S^{-1}|\overline{\Psi}\rangle\) as \[\Psi\sim\left(\begin{array}{c}\left(\sqrt[N]{\mu}\,e^{i\theta_{R}}\right)^{N-m+1}\mp i\left(\sqrt[N]{\mu}\,e^{i\theta_{R}}\right)^{N-m+3/2}\\ -i\left(\sqrt[N]{\mu}\,e^{i\theta_{R}}\right)^{N-m+1}\pm(\sqrt[N]{\mu}\,e^{i\theta_{R}})^{N-m+3/2}\\ \vdots\\ \left(\sqrt[N]{\mu}\,e^{i\theta_{R}}\right)^{N-1}\mp i(\sqrt[N]{\mu}\,e^{i\theta_{R}})^{N-1/2}\\ -i(\sqrt[N]{\mu}\,e^{i\theta_{R}})^{N-1}\pm(\sqrt[N]{\mu}\,e^{i\theta_{R}})^{N-1/2}\\ \left(\sqrt[N]{\mu}\,e^{i\theta_{R}}\right)^{N}\mp i(\sqrt[N]{\mu}\,e^{i\theta_{R}})^{1/2}\\ -(\sqrt[N]{\mu}\,e^{i\theta_{R}})\mp i(\sqrt[N]{\mu}\,e^{i\theta_{R}})^{3/2}\\ -i(\sqrt[N]{\mu}\,e^{i\theta_{R}})\pm(\sqrt[N]{\mu}\,e^{i\theta_{R}})^{3/2}\\ \vdots\\ \left(\sqrt[N]{\mu}\,e^{i\theta_{R}}\right)^{N-m}\mp i(\sqrt[N]{\mu}\,e^{i\theta_{R}})^{N-m+1/2}\\ -i(\sqrt[N]{\mu}\,e^{i\theta_{R}})^{N-m}\pm(\sqrt[N]{\mu}\,e^{i\theta_{R}})^{N-m+1/2}\end{array}\right). \tag{104}\] To conclude, in the \(\mathcal{PT}\)-broken regime, the number of complex eigenenergies ranges from \(2\) to \(2(N-1)\), with the associated eigenstates being SFL states. In particular, when \(\gamma=\gamma_{a}\) (in fact, in a narrow region near \(\gamma_{a}\) for finite \(N\)), the number of complex eigenenergies reaches its maximum \(2(N-1)\) and all eigenstates are SFL states. (iii) \(\mathcal{PT}\)**-restoration regime, \(\gamma>\gamma_{c_{2}}\)**. In this regime, the number of real roots of Eq. (160) recovers to \((N-1)\). Besides, there is a complex root with \(\theta_{R}=\pi\) corresponding to a pair of bound states with complex eigenenergies. Therefore, there are \(2(N-1)\) real eigenenergies corresponding to extended bulk states and 2 bound states with complex eigenenergies. ## Appendix C Emergence of SFL states of the double chain model for the \(t_{1}\neq t_{2}\) case In the main text, we demonstrated the phase diagram [see Fig. 1(a)] and \(\mathcal{PT}\)-symmetry breaking for \(t_{1}=t_{2}\). Here we turn to the generic case with \(t_{1}\neq t_{2}\) and show that \(\mathcal{PT}\)-symmetry breaking and the emergence of SFL states also occur. In Fig. 10, we display the phase diagram for the double chain model with \(t_{1}=1,\ t_{2}=2\). There also exist three distinct regimes, i.e., \(\mathcal{PT}\)-unbroken, \(\mathcal{PT}\)-broken, and \(\mathcal{PT}\)-restoration, with a small subtlety. Their boundaries are determined by \(\gamma_{c_{1}}=|\delta-t_{1}|,\ \gamma_{c_{2}}=\delta+t_{1}\), as marked by gray lines in Fig. 10(A). Figure 10(B) plots the spectrum versus \(\gamma\) for fixed \(\delta/t_{1}=4\). In the \(\mathcal{PT}\)-unbroken regime with \(\gamma<\gamma_{c_{1}}\), there are \(2(N-2)\) extended bulk states corresponding to \(N-2\) real roots of Eq. (44) and two pairs of bound states located at the impurity corresponding to 2 purely imaginary roots. All eigenenergies are real, as shown in Figs. 10(a1),(a2). The system is in the \(\mathcal{PT}\)-broken phase when \(\gamma_{c_{1}}<\gamma<\gamma_{c_{2}}\), where \(\theta\) has complex roots corresponding to complex eigenenergies. In this regime, the number of real roots of \(\theta\) first decreases, reaches its minimum at \(\gamma=\gamma_{a}=\sqrt{\delta^{2}-t_{1}^{2}}\), and then increases with increasing \(\gamma\).
At \(\gamma=\gamma_{a}\), the eigenfunction contains only the \(z_{1}\) solution due to \(\overline{c}_{2}=0\), and \(|z_{1}|=|\sqrt[N]{\mu}|=\sqrt[N]{\frac{\delta}{t_{1}}+\sqrt{(\frac{\delta}{t_{1}})^{2}-1}}>1\). Thus all eigenstates are SFL states, decaying away from the impurity in a unidirectional way, as depicted in Figs. 10(c1),(c2). For \(\gamma\neq\gamma_{a}\) in the \(\mathcal{PT}\)-broken regime, extended states with real eigenenergies and SFL states with complex eigenenergies coexist, as shown in Figs. 10(b1),(b2),(d1),(d2). In short, the number of complex eigenenergies is \(N_{\text{Im}}=4\sim 2(N-2)\), and there are \(4\sim 2N\) SFL states in the \(\mathcal{PT}\)-broken regime. In the \(\mathcal{PT}\)-restoration regime with \(\gamma>\gamma_{c_{2}}\), the number of real roots of Eq. (44) recovers to \((N-2)\). In addition, there are \(2\) complex roots of Eq. (44). Thus, there are \(2(N-2)\) extended bulk states with real eigenenergies and 4 bound states at the impurity with complex eigenenergies, as shown in Figs. 10(e1),(e2). In Fig. 11, we display the phase diagram for the double chain model with different \(t_{2}\). Except for the extreme case \(t_{2}=0\), there always exists a \(\mathcal{PT}\)-broken regime surrounding \(\gamma=\gamma_{a}\), accompanied by the emergence of SFL states, for the various \(t_{2}\) displayed in Fig. 11. In particular, when \(\gamma=\gamma_{a}\), all eigenstates are SFL states, whose spatial profiles decay away from the impurity in a unidirectional way. Explicitly, when \(t_{2}\geq t_{1}\), the phase boundaries are determined by \(\gamma_{c_{1}}=|\delta-t_{1}|,\ \gamma_{c_{2}}=\delta+t_{1}\). When \(t_{2}<t_{1}\), the \(\mathcal{PT}\)-broken region with SFL states gradually shrinks as \(t_{2}\) decreases, and this region closes in around \(\gamma=\gamma_{a}\). In the extreme situation with \(t_{2}=0\), there is no SFL state, as expected, because the local non-Hermitian part is not connected to the rest of the bulk. In addition, the emergence of SFL states requires that \(\delta\) not be too small when \(t_{2}<t_{1}\). Our results reveal that scale-free localization generated by local non-Hermiticity is a general phenomenon, even for multi-band systems. ## Appendix D Exact solutions of the single-impurity model The Hamiltonian of the single-impurity model is given by Eq. (12). The corresponding eigenproblem reads \[\hat{H}|\Psi\rangle=E|\Psi\rangle, \tag{45}\] with \(|\Psi\rangle=\sum\limits_{n=1}^{L}(\psi_{n}\hat{c}_{n}^{\dagger})|0\rangle\). In component form, \(\Psi=(\psi_{1},\psi_{2},\cdots,\psi_{m},\cdots,\psi_{L-1},\psi_{L})^{T}\). Equation (45) consists of a series of bulk equations and the impurity equations. The bulk equations are given by \[t\psi_{s-1}-E\psi_{s}+t\psi_{s+1}=0 \tag{46}\] with \(s=1,\cdots,m-1,m+2,\cdots,L\). The impurity equations are given by \[t\psi_{m-1}-(E-i\gamma)\psi_{m}+t\psi_{m+1} = 0, \tag{47}\] \[t\psi_{m}-E\psi_{m+1}+t\psi_{m+2} = 0. \tag{48}\] We take the ansatz wave function \(\Psi_{i}\) satisfying the bulk Eqs. (46) as follows: \[\Psi_{i}=(z_{i}^{L-m+1},z_{i}^{L-m+2},\cdots,z_{i}^{L-1},z_{i}^{L},z_{i},\cdots,z_{i}^{L-m-1},z_{i}^{L-m})^{T}. \tag{49}\] Inserting Eq. (49) into Eq. (46) yields the expression of the eigenvalue in terms of \(z_{i}\): \[E=t\left(z_{i}+\frac{1}{z_{i}}\right). \tag{50}\] For a given \(E\), there are two solutions of \(z_{i}\) (denoted as \(z_{1},z_{2}\)) and they fulfill the constraint: \[z_{1}z_{2}=1. \tag{51}\]
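Before working through the secular equation analytically, the spectrum can be obtained by brute force. The sketch below builds the \(L\times L\) matrix of Eqs. (46)-(48), assuming (as the wrap-around powers in the ansatz of Eq. (49) suggest) a ring geometry with periodic boundary conditions, and diagonalizes it; the parameter values are illustrative.

```python
import numpy as np

def impurity_ring(L, t, gamma, m):
    """Tight-binding ring with one on-site impurity i*gamma at site m."""
    H = np.zeros((L, L), dtype=complex)
    for n in range(L):
        H[n, (n + 1) % L] = t          # hopping with periodic boundaries
        H[(n + 1) % L, n] = t
    H[m - 1, m - 1] = 1j * gamma       # the non-Hermitian impurity, Eq. (47)
    return H

E = np.linalg.eigvals(impurity_ring(L=40, t=1.0, gamma=1.5, m=11))
print(np.max(np.abs(E.imag)))          # nonzero: complex eigenenergies appear
```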
The wave function should be the superposition: \[\Psi=c_{1}\Psi_{1}+c_{2}\Psi_{2}=(\psi_{1},\psi_{2},\cdots,\psi_{m},\cdots,\psi_{L-1},\psi_{L})^{T}, \tag{52}\] where \[\psi_{n}=\left\{\begin{array}{l}\sum_{i=1}^{2}(c_{i}z_{i}^{L-m+n}),\quad 1\leq n\leq m;\\ \sum_{i=1}^{2}(c_{i}z_{i}^{n-m}),\quad m<n\leq L.\end{array}\right. \tag{53}\] By inserting it into Eqs. (47) and (48) and combining Eq. (50), the impurity equations transform into \[H_{B}\left(\begin{array}{c}c_{1}\\ c_{2}\end{array}\right)=0 \tag{54}\] with \[H_{B}=\left(\begin{array}{cc}1-z_{1}^{L}&1-z_{2}^{L}\\ tz_{1}(1-z_{1}^{L})+i\gamma z_{1}^{L}&tz_{2}(1-z_{2}^{L})+i\gamma z_{2}^{L}\end{array}\right). \tag{45}\]

Figure 11: Phase diagram for the double chain model with different \(t_{2}\). Along the pink line \(\gamma=\gamma_{a}=\sqrt{\delta^{2}-t_{1}^{2}}\), all eigenstates are SFL states, whose spatial profiles decay away from the impurity in a unidirectional way. (a) \(t_{2}=10\); (b) \(t_{2}=2\); (c) \(t_{2}=1\); (d) \(t_{2}=0.8\); (e) \(t_{2}=0.5\); (f) \(t_{2}=0.1\). The common parameters: \(2N=40\), \(t_{1}=1\), and the impurity rung is set at \(m=N/2+1\).

Figure 10: (A) Phase diagram for the double chain model. Boundaries of different regimes are marked by gray lines. In the different phase regimes, the color-coded numbers represent the number of complex eigenenergies for a finite-size lattice with \(2N=40\). Along the pink line \(\gamma=\gamma_{a}=\sqrt{\delta^{2}-t_{1}^{2}}\), all eigenstates are SFL states. (B) Energy spectra for the double chain model versus \(\gamma\) with fixed \(\delta/t_{1}=4\) [see the brown line in (A)]. Red/cyan lines represent real/imaginary parts of eigenenergies. (a1-e1) Energy spectra on the complex-energy plane with \(\gamma=1,\ 3.2,\ \sqrt{15},\ 4.3,\ 6.5\) corresponding to dots ‘a-e’ in (A), respectively. (a2-e2) The associated spatial profiles of all eigenstates. The inset in (c2) plots the spatial profiles of eigenstates for the non-Hermitian SSH model with the same parameters. In (a1-e1, a2-e2), the blue/magenta/green data represent bound states/extended states/SFL states, respectively. Other parameters are \(2N=40\), \(t_{1}=1\), \(t_{2}=2\), and the impurity rung is set at \(m=N/2+1\).

The nontrivial solutions of \((c_{1},c_{2})\) are determined by \(\det[H_{B}]=0\), yielding \[t(2-z_{1}^{L}-z_{2}^{L})(z_{1}-z_{2})+i\gamma(z_{1}^{L}-z_{2}^{L})=0. \tag{46}\] Equation (46) and the constraint \(z_{1}z_{2}=1\) [Eq. (51)] together determine the solutions of \(z_{1}\) and \(z_{2}\). From the constraint, we set \(z_{1}=e^{i\theta}\), \(z_{2}=e^{-i\theta}\); then Eq. (46) becomes \[\sin\left(\frac{L\theta}{2}\right)\left[2t\sin\theta\sin\left(\frac{L\theta}{2}\right)+i\gamma\cos\left(\frac{L\theta}{2}\right)\right]=0. \tag{47}\] And the corresponding eigenenergies are expressed as \[E=2t\cos\theta. \tag{48}\] By inspecting Eq. (47), we have two types of solutions. The first type comes from \(\sin\left(\frac{L\theta}{2}\right)=0\). The roots are \(\theta=\frac{2l\pi}{L}\) with \(l=1,2,\cdots,L/2-1\) for even \(L\), and \(l=1,2,\cdots,(L-1)/2\) for odd \(L\). Thus there are \(L/2-1\) real eigenenergies for even \(L\), and \((L-1)/2\) real eigenenergies for odd \(L\). Their corresponding eigenstates are all extended and unaffected by the local on-site imaginary potential. The other eigenstates come from the second type of solutions: \[2t\sin\theta\sin\left(\frac{L\theta}{2}\right)+i\gamma\cos\left(\frac{L\theta}{2}\right)=0. \tag{49}\]
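The determinant condition of Eq. (46), together with the constraint \(z_{1}z_{2}=1\), can also be solved without any transcendental root search: multiplying Eq. (46) by \(z^{L+1}\) (with \(z\equiv z_{1}\), \(z_{2}=1/z\)) gives the polynomial \(t(2z^{L}-z^{2L}-1)(z^{2}-1)+i\gamma z(z^{2L}-1)=0\). A minimal sketch with illustrative parameters:

```python
import numpy as np

L, t, gamma = 12, 1.0, 1.5             # illustrative values
P = np.polynomial.Polynomial
zL = P([0] * L + [1])                  # the monomial z^L
poly = (t * (2 * zL - zL * zL - 1) * P([-1, 0, 1])
        + 1j * gamma * P([0, 1]) * (zL * zL - 1))
z = poly.roots()                       # all 2L + 2 roots at once
E = t * (z + 1.0 / z)                  # eigenenergies via Eq. (50)
print(np.round(sorted(E, key=np.real)[:4], 4))   # each E appears for z and 1/z
```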
The solutions \(\theta\) of Eq. (49) include complex roots, denoted as \(\theta=\theta_{R}+i\theta_{I}\). The number of complex eigenenergies is \(L/2+1\) for even \(L\) and \((L+1)/2\) for odd \(L\). To obtain them, let us first assume \(\theta_{I}\propto L^{0}\); then we have \[\sin\left(\frac{L\theta}{2}\right) \approx\frac{i}{2}\text{sgn}(\theta_{I})e^{-\frac{i}{2}\text{sgn}(\theta_{I})L\theta},\qquad\cos\left(\frac{L\theta}{2}\right) \approx\frac{1}{2}e^{-\frac{i}{2}\text{sgn}(\theta_{I})L\theta} \tag{50}\] for large \(L\). Inserting Eq. (50) into Eq. (49), we get \(2t\sin\theta\,\text{sgn}(\theta_{I})+\gamma=0\). It has a root with \(\theta_{I}<0\) only if \(\frac{\gamma}{2t}>1\). Explicitly, the solution is written as \[\theta=\frac{\pi}{2}-i\,\text{arcosh}\left(\frac{\gamma}{2t}\right), \tag{51}\] which satisfies \(\theta_{I}\propto L^{0}\). This solution is associated with a bound state. Let us then assume \(\theta_{I}\propto L^{-1}\); we then have \(\sin\theta\approx\sin\theta_{R}\) for large \(L\). The real and imaginary parts of Eq. (49) become \[\sin\left(\frac{L\theta_{R}}{2}\right)\cosh\left(\frac{L\theta_{I}}{2}\right)\left[2t\sin\theta_{R}+\gamma\tanh\left(\frac{L\theta_{I}}{2}\right)\right] =0, \tag{52}\] \[\cos\left(\frac{L\theta_{R}}{2}\right)\cosh\left(\frac{L\theta_{I}}{2}\right)\left[2t\sin\theta_{R}\tanh\left(\frac{L\theta_{I}}{2}\right)+\gamma\right] =0. \tag{53}\] Hence we have either \[\cos\left(\frac{L\theta_{R}}{2}\right)=0,\quad\tanh\left(\frac{L\theta_{I}}{2}\right)=-\frac{2t}{\gamma}\sin\theta_{R}, \tag{54}\] or \[\sin\left(\frac{L\theta_{R}}{2}\right)=0,\quad\tanh\left(\frac{L\theta_{I}}{2}\right)=-\frac{\gamma}{2t}(\sin\theta_{R})^{-1}. \tag{55}\] The solutions of Eqs. (54) and (55) are, respectively, \[\theta=\frac{(2l+1)\pi}{L}+i\frac{2}{L}\text{artanh}\left[-\frac{2t}{\gamma}\sin\frac{(2l+1)\pi}{L}\right]; \tag{56}\] \[\theta=\frac{2l\pi}{L}+i\frac{2}{L}\text{artanh}\left[-\frac{\gamma}{2t}\left(\sin\frac{2l\pi}{L}\right)^{-1}\right]. \tag{57}\] It is clear that the imaginary parts of these complex roots satisfy \(\theta_{I}\propto\frac{1}{L}\). In other words, the localization length of the eigenstates with complex roots (except for the bound states) is proportional to the system size, \(\xi\propto L\), and these eigenstates are SFL states.
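The bound-state root of Eq. (51) can be verified directly: at \(\theta=\pi/2-i\,\text{arcosh}(\gamma/2t)\) one has \(2t\sin\theta=\gamma\) exactly, so the residual of Eq. (49) reduces to \(i\gamma e^{-iL\theta/2}\) and decays exponentially with \(L\). A short numerical check with illustrative parameters:

```python
import numpy as np

t, gamma = 1.0, 3.0                          # requires gamma / (2 t) > 1
theta = np.pi / 2 - 1j * np.arccosh(gamma / (2 * t))
for L in (10, 20, 40):
    resid = 2 * t * np.sin(theta) * np.sin(L * theta / 2) \
            + 1j * gamma * np.cos(L * theta / 2)
    # Normalize by the exponentially large factor |sin(L theta / 2)|.
    print(L, abs(resid) / abs(np.sin(L * theta / 2)))   # decays like e^{-L|theta_I|}
```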
2307.10542
Photometric Survey of Neptune's Trojan Asteroids I: The Color Distribution
In 2018, Jewitt identified the "Trojan Color Conundrum", namely that Neptune's Trojan asteroids (NTs) had no ultra-red members, unlike the nearby Kuiper Belt. Since then, numerous ultra-red NTs have been discovered, seemingly resolving this conundrum (Lin et al. 2019; Bolin et al. 2023). However, it is still unclear whether or not the Kuiper Belt has a color distribution consistent with the NT population, as would be expected if it were the source population. In this work, we present a new photometric survey of 15 out of 31 NTs. We utilized the Sloan g'r'i'z' filters on the IMACS f/4 instrument, which is mounted on the 6.5m Baade telescope. In this survey, we identify four NTs as being ultra-red using a Principal Component Analysis (PCA). This result brings the ratio of red to ultra-red NTs to 7.75:1, more consistent with the corresponding Trans-Neptunian Object (TNO) ratio of 4-11:1. We also identify three targets as being blue (nearly Solar) in color. Such objects may be C-type surfaces, but we see more of these blue NTs than has been observed in the Kuiper Belt (Seccull et al. 2018). Finally, we show that there are hints of a color-absolute magnitude (H) correlation, with larger H (smaller size, lower albedo) tending to be more red, but more data are needed to confirm this result. The origin of such a correlation remains an open question which will be addressed by future observations of the surface composition of these targets and their rotational properties.
Larissa Markwardt, Hsing Wen Lin, David Gerdes, Fred C. Adams
2023-07-20T02:43:49Z
http://arxiv.org/abs/2307.10542v1
# Photometric Survey of Neptune's Trojan Asteroids I: The Color Distribution ###### Abstract In 2018, Jewitt identified the "Trojan Color Conundrum", namely that Neptune's Trojan asteroids (NTs) had no ultra-red members, unlike the nearby Kuiper Belt. Since then, numerous ultra-red NTs have been discovered, seemingly resolving this conundrum (Lin et al., 2019; Bolin et al., 2023). However, it is still unclear whether or not the Kuiper Belt has a color distribution consistent with the NT population, as would be expected if it were the source population. In this work, we present a new photometric survey of 15 out of 31 NTs. We utilized the Sloan \(g^{\prime}r^{\prime}i^{\prime}z^{\prime}\) filters on the IMACS f/4 instrument, which is mounted on the 6.5m Baade telescope. In this survey, we identify four NTs as being ultra-red using a Principal Component Analysis (PCA). This result brings the ratio of red to ultra-red NTs to 7.75:1, more consistent with the corresponding Trans-Neptunian Object (TNO) ratio of 4-11:1. We also identify three targets as being blue (nearly Solar) in color. Such objects may be C-type surfaces, but we see more of these blue NTs than has been observed in the Kuiper Belt (Seccull et al., 2018). Finally, we show that there are hints of a color-absolute magnitude (H) correlation, with larger H (smaller size, lower albedo) tending to be more red, but more data are needed to confirm this result. The origin of such a correlation remains an open question which will be addressed by future observations of the surface composition of these targets and their rotational properties. Neptune trojans (1097) - Multi-color photometry (1077) - CCD photometry (208) Larissa Markwardt, Hsing Wen Lin, David Gerdes, Fred C. Adams ## 1 Introduction Trojan asteroids are planetary companions that reside in the asymmetric 1:1 mean-motion resonance of planets; these asteroids librate at the planet-Sun L4 and L5 Lagrange points, meaning that they have the same orbit as the planet but librate about a point 60\({}^{\circ}\) ahead of (L4) or behind (L5) the planet. Numerical simulations show that orbits of Trojan asteroids can be quite stable, on the order of the age of the Solar System (Cuk et al., 2012; Gomes & Nesvorny, 2016; Lykawka et al., 2011). Therefore, the stable members of these populations are likely relatively undisturbed remnants of our primordial planetary disk. The physical properties of these populations can thus give us a window into the early Solar System. However, Neptune's Trojan asteroids are not thought to have formed _in-situ_. Rather, this population likely grew through capture of planetesimals during the epoch of planetary migration, during which the outer planets migrated from the location of their formation to their present-day locations (Fernandez & Ip, 1984; Malhotra, 1993, 1995; Hahn & Malhotra, 1999). Assuming Neptune migrated significantly in its early evolution, the Lagrange points must have also migrated with it (Kortenkamp et al., 2004). Therefore, the NT population can be used to constrain migratory models (Gomes & Nesvorny, 2016; Nesvorny et al., 2013, 2018; Pike et al., 2017). Such migration would have occurred in the first several hundred Myr of the history of the Solar System, so while these objects may not have formed _in-situ_, they are still remnants of the very early Solar System.
Such models show that primordial Jupiter Trojan populations do not survive this planetary migration, indicating that they must have originated elsewhere in the Solar System (Roig & Nesvorny, 2015). Similarly, since the dynamics of planetary migration likely dispersed any primordial NTs as well, from where did the current population of NTs originate? The most likely source is the nearby Kuiper Belt. If that were the case, one would expect these two populations to be similar in size and color (surface composition). Regarding the color of the KBOs, the bimodality of red (\(g-i<1.2\)) vs. ultra-red (\(g-i>1.2\)) members has been well established (Sheppard, 2010; Schwarz et al., 2011; Hainaut et al., 2012; Peixinho et al., 2012; Sheppard, 2012; Lacerda et al., 2014; Peixinho et al., 2015; Pike et al., 2017; Wong & Brown, 2017; Schwamb et al., 2019). Similarly, the centaur population, small bodies which orbit between Jupiter and Neptune, is thought to be fed by planetesimals escaping the NT region (Horner & Lykawka, 2010). These objects are also red/ultra-red in color (Peixinho et al., 2012, 2015). Through 2018, no ultra-red NTs had been found, making their color distribution distinctly different from that of their expected origins or offshoots. Termed the "Trojan Color Conundrum", this tension is not easy to resolve (Jewitt, 2018). One explanation is that some sort of resurfacing has happened to the NT population specifically that affected neither the centaurs nor the KBOs. Jupiter's Trojan population is also devoid of ultra-red members, which is thought to be due to thermal resurfacing (Luu & Jewitt, 1996; Jewitt, 2002). However, the temperatures at the distance of Neptune are too cold for such a scenario to be valid (Jewitt, 2018). Another potential explanation is collisional resurfacing, which could strip the ultra-red crust off the surfaces of these bodies, revealing a bluer surface underneath. One source of such collisions could be Plutinos, 3:2 resonators with Neptune, which have significant orbital overlap with the NT population (Almeida et al., 2009). Such collisions are expected to occur when Plutinos have high libration amplitudes, high eccentricities, and low inclinations; therefore, we would expect the color distribution of NTs to be inclination-dependent as well, where high-inclination NTs avoid these collisions and retain their ultra-red surfaces. Finally, this discrepancy could be due to a primordial boundary between red/ultra-red bodies that was subsequently mixed by Neptune's migration (DeMeo & Carry, 2014; Neveu & Vernazza, 2019). Depending on the exact nature of the epochs of radial mixing, mass removal, and planet migration, the resulting NT population could be devoid of ultra-red members while the Centaur population is not (Neveu & Vernazza, 2019), but specific simulations of these two populations have not been conducted. This hypothesis has been supported by the discovery of two Trans-Neptunian Object (TNO)-like (red) objects all the way in the asteroid belt (Hasegawa et al., 2021). In 2019, the first ultra-red NT, 2013 VX\({}_{30}\), was discovered (Lin et al., 2019), and additional ultra-red NTs have been discovered since then (Bolin et al., 2023). On the surface, these discoveries seem to resolve the conundrum. However, the color distribution of NTs still appears distinct from that of other TNO populations (Bolin et al., 2023). Further observations of the NT population are needed to determine whether or not these distributions are truly distinct.
The structure of this paper is as follows: Section 2 describes the design of our photometric survey. Section 3 outlines our data reduction process. Section 4 presents the results of our survey. Section 5 discusses the meaning of our results. Section 6 presents the conclusions drawn from these results. ## 2 Survey Design The goal of this paper is to measure the optical colors of currently known NTs in order to better understand the physical characteristics of their surfaces. The targets are listed in Table 1. All of our targets have been previously observed, but not by the same survey. All of our targets, except 2015 VU\({}_{207}\), were already known to be stable for \(\sim\)Gyr (Lin et al., 2021, 2022). Following the methods of Lin et al. (2022), we find that 2015 VU\({}_{207}\) is also stable for Gyr in our simulations. We used the IMACS f/4 instrument on the 6.5m Baade telescope at Las Campanas Observatory on 4 unique nights to observe this population. IMACS was most suitable for this task with its optical wavelength coverage (\(\sim\)400 - 900 nm) and large FOV to account for the positional uncertainty of the targets. The Sloan \(g^{\prime}r^{\prime}i^{\prime}z^{\prime}\) filters were used for our photometric measurements. In order to account for any variation due to a target's rotational period, we observed each target with "bounding" \(r^{\prime}\)-band observations (i.e., each observation in a different filter was preceded and followed by an observation in \(r^{\prime}\)). We chose \(r^{\prime}\) for the bounding observations since this filter reaches the highest SNR in the shortest amount of time. The fast readout mode with 2x2 binning was used. ### Calibration To calibrate the photometry of our IMACS observations, we cross-matched the in-frame background stars against PS1 sources (Magnier et al., 2013). We first converted the PS1 griz photometry to the SDSS system using the transformation equations in Tonry et al. (2012), and then selected the sources with \(g-r\) between 0.25 and 2.0 and \(r-i\) between 0.0 and 0.8 as the reference sources. By solving the equation below using the apparent magnitudes of the reference sources, we determined the photometric zeropoint of each frame: \[m_{sdss}=m_{ins}+2.5\log_{10}(\tau_{exp})+m_{0}, \tag{1}\] where \(m_{sdss}\) is the apparent magnitude of a specific band of the cross-matched reference sources, \(m_{ins}\) is the instrumental magnitude of that specific band measured from the IMACS image, \(\tau_{exp}\) is the exposure time, and \(m_{0}\) is the photometric zeropoint of that frame. After we determined the zeropoints of each frame, we used every cross-matched star in every frame to evaluate the linear color conversions between the IMACS and SDSS photometric systems by solving the following equation: \[m_{M}=m_{sdss}+a~{}(g-r)_{sdss}+b, \tag{2}\] where \(m_{M}\) and \(m_{sdss}\) are the IMACS and SDSS magnitudes, respectively, and a, b are the coefficients of the linear conversion. The results are: \[\begin{split} g_{M}&=g_{sdss}-0.078(g-r)_{sdss}+0.069\\ r_{M}&=r_{sdss}-0.024(g-r)_{sdss}+0.024\\ r_{M}&=r_{sdss}-0.038(r-i)_{sdss}+0.015\\ i_{M}&=i_{sdss}-0.188(r-i)_{sdss}+0.134\\ z_{M}&=z_{sdss}-0.026(g-r)_{sdss}+0.031\end{split} \tag{3}\] With the photometric zeropoints and the color conversion equations, we are able to measure the griz colors of targets in the SDSS photometric system.
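A minimal sketch of the two calibration fits in Eqs. (1)-(2), assuming arrays of cross-matched instrumental and PS1-converted SDSS magnitudes are already in hand (all variable names below are placeholders, not part of the actual pipeline):

```python
import numpy as np

def fit_zeropoint(m_sdss, m_ins, t_exp):
    """Eq. (1): m_sdss = m_ins + 2.5 log10(t_exp) + m0, solved for m0."""
    return np.mean(m_sdss - m_ins - 2.5 * np.log10(t_exp))

def fit_color_term(m_imacs, m_sdss, g_minus_r):
    """Eq. (2): m_imacs = m_sdss + a (g - r) + b, solved by least squares."""
    A = np.column_stack([g_minus_r, np.ones_like(g_minus_r)])
    (a, b), *_ = np.linalg.lstsq(A, m_imacs - m_sdss, rcond=None)
    return a, b

# Mock reference stars recovering a known color term (compare Eq. 3):
rng = np.random.default_rng(0)
gr = rng.uniform(0.25, 2.0, 500)
dm = -0.078 * gr + 0.069 + rng.normal(0.0, 0.01, gr.size)
print(fit_color_term(dm, np.zeros_like(gr), gr))   # approximately (-0.078, 0.069)
```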
\begin{table} \begin{tabular}{l c c c c c c c c c c} Name & L4/L5 & e & i & H & Date Observed & Ave. r & \(g-r\) & \(r-i\) & \(r-z\) & Color Class. \\ \hline 2006 \(\rm{R_{J103}}^{1,2}\) & L4 & 0.03 & 8.2 & 7.56 & 113021 & 21.97 & 0.59 \(\pm\) 0.045 & 0.16 \(\pm\) 0.035 & 0.17 \(\pm\) 0.058 & red \\ & & & & & & 120222 & 21.88 & — & — & 0.24 \(\pm\) 0.055 & indeterminate \\ \hline 2007 \(\rm{V_{L908}}^{1,2}\) & L4 & 0.07 & 28.1 & 8.51 & 113021 & 22.60 & 0.60 \(\pm\) 0.054 & 0.25 \(\pm\) 0.038 & -0.15 \(\pm\) 0.109 & red \\ & & & & & 120222 & 22.60 & — & — & — & 0.30 \(\pm\) 0.047 & indeterminate \\ \hline 2010 \(\rm{TS_{191}}^{3}\) & L4 & 0.05 & 6.6 & 8.07 & 113021 & 22.39 & 0.61 \(\pm\) 0.029 & 0.30 \(\pm\) 0.029 & 0.64 \(\pm\) 0.078 & red \\ \hline 2011 \(\rm{SO_{277}}^{3}\) & L4 & 0.01 & 9.6 & 7.76 & 113021 & 22.43 & 0.60 \(\pm\) 0.067 & — & — & indeterminate \\ & & & & & 120222 & 22.53 & — & 0.57 \(\pm\) 0.050 & 0.82 \(\pm\) 0.047 & ultra-red \\ \hline 2012 \(\rm{U_{188}}^{5}\) & L4 & 0.04 & 28.3 & 7.59 & 113021 & 22.32 & 0.61 \(\pm\) 0.033 & 0.37 \(\pm\) 0.045 & 0.12 \(\pm\) 0.081 & red \\ \hline 2012 \(\rm{V_{177}}^{5}\) & L4 & 0.07 & 20.8 & 9.28 & 113021 & 23.76 & 0.71 \(\pm\) 0.058 & 0.23 \(\pm\) 0.051 & — & red \\ \hline 2013 \(\rm{R_{L28}}^{5}\) & L4 & 0.03 & 10.1 & 8.83 & 113021 & 23.37 & 0.38 \(\pm\) 0.075 & 0.54 \(\pm\) 0.086 & 0.67 \(\pm\) 0.128 & red \\ \hline 2013 \(\rm{Y_{L28}}^{5}\) & L4 & 0.07 & 13.1 & 8.19 & 113021 & 23.27 & 0.90 \(\pm\) 0.053 & 0.30 \(\pm\) 0.057 & — & ultra-red \\ \hline 2013 \(\rm{V_{30}}^{4,5}\) & L4 & 0.09 & 31.2 & 8.31 & 113021 & 22.60 & 1.01 \(\pm\) 0.043 & 0.44 \(\pm\) 0.043 & 0.86 \(\pm\) 0.049 & ultra-red \\ & & & & & 091122 & 22.96 & 0.70 \(\pm\) 0.104 & 0.47 \(\pm\) 0.048 & 0.73 \(\pm\) 0.045 & ultra-red \\ \hline 2014 \(\rm{R_{J75}}^{5}\) & L4 & 0.05 & 29.5 & 8.39 & 120222 & 23.34 & 0.65 \(\pm\) 0.052 & 0.42 \(\pm\) 0.064 & 1.42 \(\pm\) 0.069 & ultra-red \\ \hline 2014 \(\rm{SO_{274}}^{5}\) & L4 & 0.10 & 33.7 & 8.18 & 113021 & 23.24 & 0.43 \(\pm\) 0.066 & 0.12 \(\pm\) 0.081 & — & blue \\ \hline 2014 \(\rm{Y_{B2}}^{5}\) & L4 & 0.10 & 30.8 & 8.62 & 091222 & 23.41 & 0.46 \(\pm\) 0.187 & 0.07 \(\pm\) 0.100 & 0.36 \(\pm\) 0.090 & blue \\ \hline 2015 \(\rm{V_{207}}^{5}\) & L4 & 0.03 & 38.9 & 7.28 & 080922 & 22.23 & 0.31 \(\pm\) 0.034 & 0.24 \(\pm\) 0.031 & 0.40 \(\pm\) 0.024 & red \\ & & & & & 091122 & 22.10 & 0.47 \(\pm\) 0.052 & 0.09 \(\pm\) 0.068 & 0.35 \(\pm\) 0.028 & blue \\ \hline 2015 \(\rm{V_{165}}^{5}\) & L4 & 0.09 & 16.8 & 9.02 & 113021 & 23.32 & 0.87 \(\pm\) 0.049 & 0.32 \(\pm\) 0.055 & — & ultra-red \\ \hline 2015 \(\rm{V_{165}}^{6}\) & L4 & 0.05 & 5.0 & 8.39 & 113021 & 22.89 & 0.45 \(\pm\) 0.032 & 0.36 \(\pm\) 0.048 & — & red \\ & & & & & 120222 & 22.93 & — & — & 0.61 \(\pm\) 0.060 & indeterminate \\ \hline \end{tabular} \end{table} Table 1: NT targets of this survey. Columns: (1) Object Designation; superscripts mark previous color measurements taken from 1: Sheppard (2012), 2: Parker et al. (2013), 3: Jewitt (2018), 4: Lin et al. (2019), 5: Bolin et al. (2023); (2) Lagrange Point; (3) Eccentricity; (4) Inclination (\({}^{\circ}\)); (5) Absolute Magnitude; (6) Dates observed; (7) Measured ave. SDSS r-band magnitude; (8) Measured SDSS g-r (mag); (9) Measured SDSS r-i (mag); (10) Measured SDSS r-z (mag); (11) Color classification determined based on the Principal Component Analysis (see Sec. 4.2). ### PSF Modeling To accurately measure the flux and apparent magnitude of NTs, we select stars around the target NT to model the local PSF. Several popular analytical functions are considered for modeling the PSF, such as Moffat (Moffat, 1969) and the sum of 2D Gaussians (Bendinelli et al., 1990).
Both functions can adequately model the "wing" of the PSF. However, considering that our PSF can be asymmetric (not round, see Figure 1), we model the PSF using the superposition of n asymmetric 2D Gaussians. The flux of the PSF at any point in the \((x^{\prime},y^{\prime})\) orthogonal coordinate system is: \[PSF(x^{\prime},y^{\prime})=b(x^{\prime},y^{\prime})+\sum_{i=1}^{n}\mathrm{A_{i}}\exp\left[-\left(\frac{x^{\prime 2}}{2\sigma_{x^{\prime}i}^{2}}+\frac{y^{\prime 2}}{2\sigma_{y^{\prime}i}^{2}}\right)\right], \tag{4}\] where \(b(x^{\prime},y^{\prime})\) is the background flux at that point, n is a small number, \(\mathrm{A_{i}}\) is the amplitude of the individual Gaussian, and \(\sigma_{x^{\prime}i}\) and \(\sigma_{y^{\prime}i}\) are its widths on the \(x^{\prime}\) and \(y^{\prime}\) axes, respectively. This equation can be rotated into the image reference frame \((x,y)\) with a position angle \(\theta\), translating the centroid to \((x_{0},y_{0})\), such that \[\begin{pmatrix}x^{\prime}\\ y^{\prime}\end{pmatrix}=\begin{bmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{bmatrix}\begin{pmatrix}x-x_{0}\\ y-y_{0}\end{pmatrix}.\] Therefore, the Gaussian components share the same center, position angle, and ellipticity but have unequal contributions and different widths. To properly choose n, the number of Gaussians we should use, we calculate the Bayesian information criterion (BIC) for each n. The BIC is defined as: \[\mathrm{BIC}=-2\ln(\hat{\mathcal{L}})+k\ln(\mathrm{m}), \tag{5}\] where \(\hat{\mathcal{L}}\) is the maximum likelihood of the model, \(k\) is the number of parameters estimated by the model, and m is the number of data points used to fit the model. Models with lower BIC values are generally preferred, and the criterion automatically penalizes models with larger \(k\). Since the multiple-Gaussian PSF model can be linearized by taking the logarithm, and assuming that the errors are normally distributed, maximizing \(\hat{\mathcal{L}}\) is equivalent to least-squares estimation. Thus, the BIC can be written as a function of the error variance \(\hat{\sigma_{e}^{2}}\): \[\mathrm{BIC}=m\ln(\hat{\sigma_{e}^{2}})+k\ln(\mathrm{m}). \tag{6}\] In other words, the model with lower residuals and fewer parameters is preferred. We find that the model with n = 1, a single 2D Gaussian, always has the highest BIC. On the other hand, the models with n = 2 and n = 3 generally have similar BICs; we therefore conclude that any model with n > 3 is redundant. Finally, we use the PSF model with n = 2 or 3, depending on which one has the lower BIC. Once all of the parameters are measured by modeling the stars, the target NT can be modeled by refitting the center and amplitude of the PSF. The flux is the sum of the final model. Figure 1 demonstrates that both the star and the NT can be properly subtracted with the PSF model. ### Rotation Curve Correction The observed magnitudes, and the resulting colors we are trying to measure, are subject to rotational variations on the surfaces of these objects. To approximately account for this, we use a model with a linear variation in source brightness (its \(r^{\prime}\)-band magnitude) and constant \(g^{\prime}-r^{\prime}\), \(r^{\prime}-i^{\prime}\), \(r^{\prime}-z^{\prime}\) colors (to convert each measurement to an \(r^{\prime}\)-band magnitude). This model was then fit using a least-squares approach (see Fig. 2). The resulting colors have been converted to SDSS magnitudes (\(griz\); see Eq. 3) and are reported in Table 1.
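A sketch of the model-selection step using Eq. (6); the PSF fit itself is omitted, and the mock residual arrays stand in for the pixel residuals of an n-Gaussian fit. The parameter count k = 3n + 3 below is our assumption (three shared parameters \(x_{0}\), \(y_{0}\), \(\theta\) plus \(A_{i}\), \(\sigma_{x^{\prime}i}\), \(\sigma_{y^{\prime}i}\) per component); the exact bookkeeping in the original pipeline may differ.

```python
import numpy as np

def bic(residuals, k):
    """Eq. (6): BIC = m ln(sigma_e^2) + k ln(m) for m residual pixels."""
    m = residuals.size
    return m * np.log(np.mean(residuals ** 2)) + k * np.log(m)

def best_n(residuals_by_n):
    """Pick the n (e.g. 2 or 3) whose fit minimizes the BIC."""
    scores = {n: bic(res, k=3 * n + 3) for n, res in residuals_by_n.items()}
    return min(scores, key=scores.get)

# Mock residuals: n = 2 matches n = 3 exactly but has fewer parameters.
rng = np.random.default_rng(1)
res2 = rng.normal(0.0, 0.05, 900)
res = {1: rng.normal(0.0, 0.20, 900), 2: res2, 3: res2}
print(best_n(res))   # 2: same residual variance as n = 3, smaller k penalty
```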
### Reddening Line Taken from Hainaut & Delsanti (2002), the reddening, or the spectral index, can be expressed as the percent of reddening per 100 nm: \[S(\lambda_{1},\lambda_{2})=100*\frac{R(\lambda_{2})-R(\lambda_{1})}{(\lambda_{2}-\lambda_{1})/100} \tag{7}\] where \(R(\lambda)\) is taken from Jewitt & Meech (1986): \[R(\lambda)=10^{-0.4(m(\lambda)-m_{\odot}(\lambda))} \tag{8}\] such that \(m(\lambda)\) and \(m_{\odot}(\lambda)\) are the magnitudes of the object and the Sun, respectively, at a particular wavelength, \(\lambda\). Setting the reddening line to pass through the color of the Sun (i.e., for \(S(\lambda_{1},\lambda_{2})=0\), \(m(\lambda_{1})-m(\lambda_{2})=m_{\odot}(\lambda_{1})-m_{\odot}(\lambda_{2})\)), we can derive the following equation, assuming \(m(\lambda_{1})=m_{\odot}(\lambda_{1})\): \[m(\lambda_{2})=-2.5log[1-10^{-4}S(\lambda_{1},\lambda_{2})(\lambda_{1}-\lambda_{2})]+m_{\odot}(\lambda_{2}) \tag{9}\] Assuming \(S(\lambda_{1},\lambda_{2})\) varies from -10% to 80%, we can plot the reddening line for \(g-r\) vs \(r-i\) and \(g-r\) vs \(r-z\) in Fig. 3 and Fig. 4, respectively. Note that our targets generally fall along the reddening line, as has previously been observed for small bodies in the outer Solar System (Hainaut and Delsanti, 2002). Objects that fall above/below the reddening line must exhibit emission/absorption features at those particular wavelengths, causing them to deviate from a flat spectral index. ## 4 Results ### Color-Color Results In Fig. 3, we show the \(g-r\) and \(r-i\) colors measured for our NT targets. Similar to the scattered TNOs, our targets exhibit a wide range in this color space; while most targets fall within the "red" zone (principal component \(<\)1.75; see Sec. 4.2), there are three firm and one potential NT in the "ultra-red" zone (principal component \(>\)1.75). Of these objects, we identified two new ultra-red NTs, 2013 TZ\({}_{187}\) and 2015 VV\({}_{165}\), which were also independently found and reported in Bolin et al. (2023). The potential "ultra-red" NT, 2011 SO\({}_{277}\), has varying results from different observations (Jewitt (2018), Lin et al. (2019), and this work); see more discussion of this object in Sec. 4.4. With the additional ultra-red NTs, the red to ultra-red ratio for our sample is 3.75:1, or 7.75:1 for the entire known population. This ratio is much more consistent with the dynamically excited KBO ratio of between 4-11 : 1 (Schwamb et al., 2019).

Figure 1: PSF modeling and subtraction. **top-left:** A star with the PSF model contour. **bottom-left:** The image of the NT. **middle:** The model of the star (top) and the NT (bottom). **right:** the images after subtraction of the model.

However, comparing these ratios is not sufficient to determine whether the NT and KBO populations come from the same source distribution (see Sec. 4.2). We also show the kernel density estimations (KDEs) of g-r and r-i color in Fig. 3. Unlike the results from previous works, which claimed that the NTs and JTs have very similar color distributions, our new results show that the KDEs of the NTs are closer to the KDEs of the scattered TNOs. Further analysis is presented in Sec. 4.2. In Fig. 4, we show the \(g-r\) and \(r-z\) colors measured for our NT targets. All of our targets are consistent with the scattered/hot TNO populations. This result is expected, as NTs are thought to have originated from scattered/unstable TNOs.
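Returning briefly to the reddening line: Eq. (9) is straightforward to evaluate. The sketch below computes the color offset from solar along the reddening line, using assumed effective wavelengths for g (about 475 nm) and r (about 622 nm) — illustrative values, not necessarily those adopted in the paper.

```python
import numpy as np

def color_offset_from_solar(S, lam1_nm, lam2_nm):
    """(m(lam1) - m(lam2)) minus the solar color, from Eq. (9) with R(lam1) = 1."""
    return 2.5 * np.log10(1.0 - 1e-4 * S * (lam1_nm - lam2_nm))

S = np.linspace(-10.0, 80.0, 7)                    # % reddening per 100 nm
print(color_offset_from_solar(S, 475.0, 622.0))    # g - r grows redder with S
```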
The physical cause of this \(z\)-band colorization of the cold TNO population is not currently clear, but it must be due to some absorption feature around 900 nm, based on the displacement from the reddening line. Spectroscopic information, such as will be taken with JWST (Markwardt et al., 2021), will shed further light on chemical links between these populations. ### Comparison to Previous Observations All of the targets in our sample have previous observations (though not all from the same survey). Therefore, we compare the differences between our measurements and those from the literature to our computed errors, shown in Fig. 5, to determine if there is any systematic offset in our observations. We find that the observed differences in g-r are mostly within our observational errors, meaning our observations are roughly consistent with the previous literature. However, previous observations are split between being slightly systematically larger in r-i and systematically lower than our measurements. Further investigation indicated that the larger group has a smaller offset, on the order of 0.05, and the lower group has a larger offset of about -0.15. We also find an instrument dependence in the groups; the smaller-offset samples were mostly measured with Gemini and the Dark Energy Survey, which both have proper photometric transformation equations to the SDSS system. On the other hand, the larger-offset samples were mostly measured using the R and I filters or without proper photometric transformation equations. Therefore, it is likely that the different photometry systems mostly contribute such systematic offsets. In every case, this did not change the results much for the following Principal Component Analysis (PCA), since the g-r axis is the dominant element of our Principal Component.

Figure 2: This figure shows our least-squares approach to fitting \(r^{\prime}\)-band lightcurves and colors for an example NT target, 2013 VX30. Each observation is shown as a colored point (blue downward triangle for \(g^{\prime}\), green square for \(r^{\prime}\), yellow diamond for \(i^{\prime}\), and orange sideways triangle for \(z^{\prime}\)).

### Comparison to Other Populations The ultimate goal of this work is to determine how similar the NT colors are to those of other populations in the Solar System. A simple statistical test to measure the likelihood that two distributions are drawn from the same underlying distribution is the Kolmogorov-Smirnov (KS) test (Darling, 1957). Although the KS test can be generalized to more than a single dimension, the interpretation becomes complicated. For simplicity, we reduce the dimension of our data and use the traditional statistical test. Specifically, we performed a Principal Component Analysis (PCA) of our data, using the scikit-learn python package (Pedregosa et al., 2011). Fig. 6 demonstrates that the PCA is able to successfully reduce the g-r vs. r-i color-color plot to a 1-D parameter that still distinguishes between the red and ultra-red populations of TNOs and the whole JT population (which is comprised of only red objects).

Figure 3: Measured \(g-r\) vs \(r-i\) of the NT population. Blue points are colors of scattered TNOs and orange triangles are JTs, both taken from the literature (Hainaut et al., 2012). Light blue x's are previously observed colors of NTs on which the "Trojan Color Conundrum" was based (Sheppard and Trujillo, 2006; Sheppard, 2012; Parker et al., 2013; Jewitt, 2018), while the blue plus signs are more recently observed NT colors which bring this conundrum into question (Lin et al., 2019; Bolin et al., 2023).
Targets observed in this paper are shown as green squares. Solar color and the reddening line (see Sec. 3.4) are depicted as a yellow star and an orange dotted line, respectively. Objects that have multiple observations in this paper are connected by a dot-dashed line. NTs that have been previously observed in the literature are connected by a dashed line. The yellow line marks values where the PCA yields values equal to our cutoff of 1.75 (see Fig. 6 and Sec. 4.2). Objects in the yellow region are above this cutoff and considered ultra-red in this paper. The blue line marks values where the PCA yields values equal to our cutoff of -1.25 (see Fig. 6 and Sec. 4.2). Objects in the blue region are below this cutoff and considered blue in this paper. The top and right inset plots show the kernel density estimation (KDE) of the g-r and r-i distributions, respectively, of the included sub-populations.

The principal component value (PC1) which separates these populations is 1.75 (shown as a dotted line in Fig. 6). We use this definition to classify our NT targets as red or ultra-red; the corresponding region in g-r vs r-i space is shown in Fig. 3 as a yellow shaded region. We then applied this PCA model to other populations in the Solar System, including JTs and previous observations of NTs, the results of which are shown in Fig. 7. By eye, the JT population is clearly unique in that it is nearly devoid of any ultra-red members (i.e., targets with a PC1 \(>\)1.75). Also of note, about 25% of the NT targets presented in this paper occupy a unique region of PC1 \(\sim-1\). This region corresponds to blue objects that are not frequently present in the outer Solar System populations (see Sec. 4.4 for a more in-depth discussion of these objects). We then ran a KS test for each combination of these Solar System populations to determine the likelihood that they came from the same underlying distribution; the results of these tests are recorded in Table 2. We conclude that the compared populations are from different distributions if they have a p-value of \(\leq\) 0.05, corresponding to a 95% confidence level to reject the null hypothesis. Therefore, we find that the population observed in this work is not consistent with being drawn from the same distribution as the JTs, but is instead more consistent with the TNO population. This result is the opposite of what was found pre-2019, where the NTs were more consistent with the JT population. The results from post-2019 data also show that the NT population is more consistent with the TNO population, but this work shores up this result significantly.
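A minimal sketch of this dimensionality reduction and two-sample comparison, using the scikit-learn and scipy interfaces named above; the mock arrays are placeholders standing in for the actual color tables.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.decomposition import PCA

def pc1_scores(train_colors, *populations):
    """Fit a 1-D PCA on (g-r, r-i) pairs and project each population onto PC1."""
    pca = PCA(n_components=1).fit(train_colors)
    return [pca.transform(p).ravel() for p in populations]

rng = np.random.default_rng(2)
nts = rng.normal([0.6, 0.3], 0.15, size=(30, 2))    # mock NT colors
jts = rng.normal([0.5, 0.2], 0.05, size=(200, 2))   # mock JT colors
nt_pc, jt_pc = pc1_scores(np.vstack([nts, jts]), nts, jts)
print(ks_2samp(nt_pc, jt_pc).pvalue)   # p <= 0.05 would reject a common origin
```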
Figure 4: Measured \(g-r\) vs \(r-z\) of the NT population. Navy upward triangles, green downward triangles, and blue circles are measurements taken from the literature of TNOs (scattered, cold, and hot, respectively) (Schwamb et al., 2019). Teal plus signs are colors of NTs taken from the literature (Lin et al., 2019). Targets observed in this paper are shown as orange squares. Solar color and the reddening line (see Sec. 3.4) are depicted as a yellow star and an orange dotted line, respectively. Objects that have observations taken in this paper and from the literature are connected with a dashed line. Objects that have multiple observations in this paper are connected by a dot-dashed line. The green ellipse demarcates the region of color-color space occupied only by cold TNOs. The top and right inset plots show the kernel density estimation (KDE) of the g-r and r-z distributions, respectively, of the included sub-populations.

Further observations of members of the NT population in particular could also increase the statistical significance of this result. However, we feel confident in claiming that our results show NTs and TNOs are consistent with coming from the same underlying distribution based on their optical colors, with the greatest confidence to date. ### Color-Absolute Magnitude Relations In Fig. 8, we plot the Principal Component for our targets as a function of absolute magnitude (H). We look for any significant clustering or correlations in these plots, which would indicate that the color classification of NTs is dependent on their size. To search for clustering in our datasets, we run a Mean Shift clustering algorithm (Pedregosa et al., 2011), which does not need the number of clusters as an input parameter (just a bandwidth, which can be initialized with the estimate_bandwidth function).

\begin{table} \begin{tabular}{c|c|c|c|c|c} KS Test P-value & NTs (This Work) & NTs (Pre-2019) & NTs (Post-2019) & TNOs & JTs \\ \hline NTs (This Work) & 1 & 0.020 & 0.61 & 0.56 & 0.003 \\ \hline NTs (Pre-2019) & 0.020 & 1 & 0.15 & 0.03 & 0.27 \\ \hline NTs (Post-2019) & 0.61 & 0.15 & 1 & 0.14 & 0.05 \\ \hline TNOs & 0.56 & 0.03 & 0.14 & 1 & 0.0002 \\ \hline JTs & 0.003 & 0.27 & 0.05 & 0.0002 & 1 \\ \hline \end{tabular} \end{table} Table 2: The resulting p-values of the KS Test on each combination of sub-populations considered in this work.

Figure 5: The differences in observed color between NTs in this paper and the literature as compared to the average error on our observations. The differences in g-r, r-i, and r-z observations are shown as blue, orange, and green histograms respectively. The average g-r, r-i, and r-z errors are shown as blue dotted, green dot-dashed, and orange dashed lines respectively.

To test the significance of clustering, we calculate the Cluster Index. The Cluster Index from the SigClust evaluation tool is defined as (Ferland et al., 2013): \[CI=\frac{\sum_{k=1}^{N}\sum_{i\in C_{k}}\parallel\mathbf{x_{i}}-\mathbf{\bar{x}}^{(k)}\parallel^{2}}{\sum_{i=1}^{n}\parallel\mathbf{x_{i}}-\mathbf{\bar{x}}\parallel^{2}} \tag{10}\] where \(\mathbf{\bar{x}}^{(k)}\) represents the mean of the kth cluster, for k = 1, 2, ..., N with N clusters, and \(\mathbf{\bar{x}}\) represents the overall mean. The CI provides a p-value for the significance of the separation between these clusters. To test whether our data were correlated, we used the Pearson Correlation Coefficient (Kirch, 2008), which is defined as: \[r=\frac{\sum(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sqrt{\sum(x_{i}-\bar{x})^{2}\sum(y_{i}-\bar{y})^{2}}} \tag{11}\] where \(x_{i}\) and \(y_{i}\) are the data points and \(\bar{x}\) and \(\bar{y}\) are the respective means. We calculated each of these values for all of the plots shown in Fig. 8. To determine whether or not these values could be obtained from random noise, we generated 1000 sets of points with the same number of objects as our observations within the same region of Principal Component vs H space and ran the same analysis on those sets. These results are shown in the inset histograms in Fig. 8. We found that the cluster is consistent with random noise and should not be considered significant.
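The clustering and significance machinery just described can be sketched compactly; the mock (PC1, H) data below are placeholders, the Cluster Index follows Eq. (10), the Pearson coefficient follows Eq. (11), and the null distribution mirrors the 1000 random point sets described above.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def cluster_index(X, labels):
    """Eq. (10): within-cluster scatter divided by total scatter."""
    within = sum(np.sum((X[labels == k] - X[labels == k].mean(axis=0)) ** 2)
                 for k in np.unique(labels))
    return within / np.sum((X - X.mean(axis=0)) ** 2)

def pearson_r(x, y):
    """Eq. (11)."""
    xd, yd = x - x.mean(), y - y.mean()
    return np.sum(xd * yd) / np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2))

rng = np.random.default_rng(3)
X = np.column_stack([rng.uniform(-2, 3, 31), rng.uniform(7.0, 9.5, 31)])
labels = MeanShift(bandwidth=estimate_bandwidth(X)).fit_predict(X)
# Null distribution: the same statistic on 1000 uniform random point sets.
null_r = [abs(pearson_r(rng.uniform(-2, 3, 31), rng.uniform(7.0, 9.5, 31)))
          for _ in range(1000)]
print(cluster_index(X, labels), np.percentile(null_r, 95))  # CI and |r| threshold
```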
This result also suggests that the colors of NTs are distributed continuously from blue to ultra-red rather than being clustered into distinct groups.

Figure 6: The results of running a Principal Component Analysis (PCA) with the g-r and r-i colors of certain Solar System populations. The green histogram corresponds to the JTs (taken from Hainaut et al. (2012)). The blue and orange histograms correspond to the red and ultra-red subpopulations of the scattered TNOs, taken from Hainaut et al. (2012); the classification of red vs ultra-red was determined by using a clustering algorithm (DBSCAN; Pedregosa et al. (2011)) which separated the TNOs into two sub-populations.

The slight correlation with size is intriguing and may point to primordial differences in objects of different sizes in the outer Solar System. However, H is not a direct correlate of size, as the object's albedo must be taken into account. Such observations do not currently exist for the NT population and will be necessary to establish a color-size correlation. Indeed, photometric observations of the rest of the NT population are necessary to confirm this slight correlation. ### Unique Targets While most of our targets are consistent with previous color measurements, one object, 2011 SO\({}_{277}\), is classified here as ultra-red while its previous observations place it firmly within the red zone. Based on our other observations, we consider our results to be roughly consistent with the previous literature (see Fig. 5), so this result is indeed unexpected. One explanation as to why this object has such different colors in independent observations is that its surface is not homogeneous. To test this hypothesis, a more in-depth study of the rotational properties of the surface of this object is necessary, which will be forthcoming in our next work on the lightcurves of NTs. Three of our targets, 2014 SC\({}_{375}\), 2014 YB\({}_{92}\), and 2015 UV\({}_{207}\), are much bluer, nearly solar in color, as compared to the other NTs or KBOs. Bolin et al. (2023) also reported that 2014 YB\({}_{92}\) and 2015 UV\({}_{207}\) have blue, near-solar colors. In fact, these objects are as blue as the blue B/C-type asteroids, such as 3200 Phaethon (Tabeshian et al., 2019; Lisse and Steckloff, 2022). A similarly blue TNO has been observed, which appears to be covered in ferric oxides and phyllosilicates (Seccull et al., 2018). This TNO has a highly eccentric and inclined orbit, suggesting it may have a common origin with C-type asteroids and has since been implanted into trans-Neptunian space. It is possible that these NTs originated elsewhere in the Solar System, but their current orbits are stable for \(>\) Gyr (see Sec. 2), implying that they were captured just after Neptune's migration.

Figure 7: Cumulative distributions of the Principal Component (see Sec. 4.2) values of populations in the Solar System. The cut-off between red and ultra-red as defined by this PCA is shown as a black dashed line (see Fig. 6). The cut-off between red and blue objects is similarly shown as a dot-dashed line. The JT and scattered TNO results are shown as orange and navy histograms, respectively. The NT observations from previous literature are shown as a blue histogram. The NT observations from this work are shown as a green histogram.
Figure 8: NT colors as a function of absolute magnitude. Grey points are taken from the literature (Sheppard and Trujillo, 2006; Sheppard, 2012; Parker et al., 2013; Jewitt, 2018; Lin et al., 2019; Schwamb et al., 2019). Colored squares were measured in this paper. Duplicate observations of the same object are connected by dashed lines. The inset plots contain histograms of the Cluster Indices and Pearson Correlation Coefficients of random distributions of colors and absolute magnitudes (see Sec. 4.1). Each grey dashed line in the inset plots shows the corresponding value calculated for the observed distribution.

However, based on these results, the blue ratio for NTs is currently much higher than that of the TNO population. This result may suggest that inner Solar System material is more efficiently transferred to NT orbits, which have a smaller perihelion than the Kuiper Belt. Future spectral observations would be necessary to reveal any compositional differences these targets may have as compared to the rest of the NT population. ## 5 Why Were the Ultra-Red NTs Rare Before 2019? Prior to 2019, ultra-red NTs were very rare; none of the 13 NT samples in Jewitt (2018) are ultra-red NTs, which led to the claim of a "Trojan Color Conundrum". Here we propose two possibilities to explain this inconsistency: 1. **Small number statistics:** Small number statistics could generate such a surprising result. If we assume a 7.75:1 apparent red to ultra-red ratio of NTs, the chance of randomly selecting 13 objects without picking up any ultra-red one is about 18%, which is quite likely. If we use a 3.75:1 apparent red to ultra-red ratio, the chance is now 0.5%. While it is not impossible, we may also consider alternative explanations. 2. **Selection effect:** Since bigger objects are easier to detect and obtain color measurements for, the 13 objects in Jewitt (2018) tend to be large; 10 of 13 have H \(\leqslant\) 8. Moreover, many NTs have been discovered by deeper (Lin et al., 2021) or wider (Bernardinelli et al., 2022, 2020; Lin et al., 2019) surveys since 2018, which included many high-inclination objects. Thus the Jewitt (2018) sample appears to be biased toward larger sizes and lower inclinations. In fact, 8 of the 13 NTs in the Jewitt (2018) sample have orbital inclination \(<\) 10\({}^{\circ}\); 9 of the 31 currently known NTs have inclination \(<\) 10\({}^{\circ}\), meaning that 8 of the 9 total low-inclination NTs were included in the Jewitt (2018) sample. Such objects have very similar red colors (see Figure 8). Therefore, the possible color-orbit-size correlation in the NT population could at least partially explain why the "Trojan Color Conundrum" was observed, especially when there were selection biases in that sample. ## 6 Conclusions In this paper, we measure the griz colors for 15 of the 31 known NTs. We used the IMACS f/4 instrument on the 6.5m Baade telescope with Sloan g'r'i'z' filters to conduct our photometric survey. We confirm that 2013 VX\({}_{30}\) is ultra-red in color, and identify three more NTs as ultra-red. This result brings the red to ultra-red ratio of NTs to 7.75:1, much more consistent with the corresponding TNO ratio, resolving the "Trojan Color Conundrum". Moreover, the color distribution of NTs is now indistinguishable from the scattered population of TNOs and different from that of the Jovian Trojans. We also find three targets which have solar color, the origin of which is unclear; the most likely explanation is that these objects originated in the inner Solar System. For the entire NT population, we find that the colors of NTs may be correlated with their absolute magnitude: objects with larger H tend to have redder colors.
The explanation behind this correlation remains an open question that is difficult to address with current data. More discoveries of NTs (especially around L5) are clearly needed. The L5 point has historically been difficult to study due to its overlap with the galactic plane, but the NT L5 region is moving away from this high-stellar-density region, making now the perfect time to start studying this population. The true degree of asymmetry between the L4 and L5 clouds will be important for distinguishing different formation scenarios for the NT population. Moreover, our ongoing work to measure the rotational periods and specific compositions of these small bodies directly will be vital to understanding the true origin of the NT population. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. This material is based upon work supported by the National Aeronautics and Space Administration under grant No. NNX17AF21G issued through the SSO Planetary Astronomy Program and by the National Science Foundation under grant No. AST-2009096. This research was supported in part through computational resources and services provided by Advanced Research Computing at the University of Michigan, Ann Arbor.
2305.08574
Controlling Biofilm Transport with Porous Metamaterials Designed with Bayesian Learning
Biofilm growth and transport in confined systems frequently occur in natural and engineered systems. Designing customizable engineered porous materials for controllable biofilm transportation properties could significantly improve the rapid utilization of biofilms as engineered living materials for applications in pollution alleviation, material self-healing, energy production, and many more. We combine Bayesian optimization (BO) and individual-based modeling to conduct design optimizations for maximizing different porous materials' (PM) biofilm transportation capability. We first characterize the acquisition function in BO for designing 2-dimensional porous membranes. We use the expected improvement acquisition function for designing lattice metamaterials (LM) and 3-dimensional porous media (3DPM). We find that BO is 92.89% more efficient than the uniform grid search method for LM and 223.04% more efficient for 3DPM. For all three types of structures, the selected characterization simulation tests are in good agreement with the design spaces approximated with Gaussian process regression. All the extracted optimal designs exhibit better biofilm growth and transportability than unconfined space without substrates. Our comparison study shows that PM stimulates biofilm growth by taking up volumetric space and pushing biofilms' upward growth, as evidenced by a 20% increase in bacteria cell numbers in unconfined space compared to porous materials, and 128% more bacteria cells in the target growth region for PM-induced biofilm growth compared with unconfined growth. Our work provides deeper insights into the design of substrates to tune biofilm growth, analyzing the optimization process and characterizing the design space, and understanding biophysical mechanisms governing the growth of biofilms.
Hanfeng Zhai, Jingjie Yeo
2023-05-15T12:01:50Z
http://arxiv.org/abs/2305.08574v2
# Bayesian Learning Designs and Characterizes Porous Metamaterials for Biofilm Transport and Control ###### Abstract Biofilm growth and transport in confined systems is a common phenomenon. While machine learning (ML) and optimization have been extensively applied in materials design, there is still a scarcity of thorough evaluations regarding the optimization process. We combined Bayesian optimization (BO) and individual-based modeling to conduct design optimizations for maximizing different porous materials' (PM) biofilm transportation capability. We first characterize the acquisition function in BO for designing 2-dimensional porous membranes. Results showed that the variance of the overall samples by the upper confidence bound (UCB) is 32.08% higher than that of the expected improvement (EI); the mean objective of the overall samples by the EI is 1.49% higher than that of the UCB. Given the predefined target region, the EI is 2.35% more efficient than the UCB compared with uniform grid search. We then use EI for designing lattice metamaterials (LM) and 3-dimensional porous media (3DPM). It is found that BO is 92.89% more efficient than the uniform grid search for LM and 223.04% more efficient for 3DPM. The selected characterization simulation tests match well with the Gaussian process regression approximated design spaces for all three cases. We found that all the extracted optimal designs exhibit better biofilm growth and transportability than unconfined vacuum space. Our comparison study shows that PM stimulates biofilm growth by taking up volumetric space and pushing biofilms' upward growth, as evidenced by a 20% increase in biofilms in vacuum space compared to porous materials. There are 128% more biofilms in the target growth region for the PM-induced biofilm growth compared with the vacuum-space growth. Our work provides deeper insights into bio-porous materials design, characterization of the ML and optimization process, and the extraction of new physical mechanisms from the optimizations. Individual-based modeling; biofilm; metamaterials; Bayesian optimization; porous media ## 1 Introduction Biofilms, commonly defined as surface-attached communities of microorganisms (i.e., groups of bacterial cells) embedded in a self-produced matrix of extracellular polymeric substances (EPS) Costerton et al. (1999), grow mostly in confined systems such as rock cracks, industrial pipelines, biological bodies, and many other artificial or natural microenvironments Friedlander et al. (2013). One of the prerequisites of biofilm growth is the existence of adhesive surfaces that allow bacteria to grow and cluster into "film-shaped" communities held together by EPS. This accounts for the observation that biofilms occur mostly in confined systems: a larger surface area allows more biofilm to attach and grow. From the engineering perspective, biofilms bring both benefits and drawbacks to human society. On the negative side, the formation and attachment of biofilms pose serious problems for marine engineering, as they cause fouling on the surfaces of marine vessels, equipment, and infrastructure, leading to reduced efficiency and increased maintenance costs Yebra et al. (2004), Dobretsov et al. (2006), and for biomedical treatment, as they form on medical devices, such as catheters and implants, leading to infections that are difficult to treat Costerton et al. (1999), Donlan and Costerton (2002).
On the positive side, biofilms can also be utilized as _engineered living materials_ (ELM): to create self-healing concrete by incorporating bacteria into the concrete mix Jonkers et al. (2010), to treat wastewater by removing pollutants and nutrients Chattopadhyay et al. (2022), and to 3D-bioprint functional soft materials Balasubramanian et al. (2019). Considering these pros and cons, understanding the mechanism of biofilm growth within confined systems is significant for human-desired biofilm control. To summarize and elaborate on this significance, there are three major points: (1) one can prevent undesired biofilm attachment and conduct efficient biofilm removal Zhai and Yeo (2022); (2) one may utilize the physics to promote efficient usage of biofilms as ELM, e.g., in clean energy applications Liu et al. (2022); (3) one can combine both the pros and cons to enable biofilm control for designing customized devices and sensors Mukherjee and Cao (2020). From these three points, we identify a major goal that may potentially bring about solutions for biofilm control: design porous structural materials that can control biofilm growth.

Following these points, two problems naturally arise for efficient biofilm control and utilization: (1) conducting experiments on biofilms is time-consuming, making timely characterization and benchmarking extremely difficult; (2) directly modifying the structures of porous materials to test the corresponding biofilm growth properties is not straightforward, making the investigation even more time-consuming. Hence, novel techniques that can bypass this "trial-and-error" approach are urgently needed.

To tackle the first problem, our solution is to use computational modeling, more specifically, individual-based modeling. Various computational modeling methods have been proposed in recent years to model biofilms. For example, one can use molecular dynamics simulations to model the biochemical properties of biofilms on the molecular scale Powell et al. (2018); use dissipative particle dynamics to model biofilm deformation under shear flow Xu et al. (2011) and coarse-grained molecular dynamics to study dewetting phenomena Brandani et al. (2015) on the mesoscale; and use finite element methods to simulate linearized growth Smith et al. (2007) on the continuum scale. Among all these different methods, we choose the individual-based modeling (IbM) method Li et al. (2019), which represents each bacterial cell as an individual particle, as it is capable of modeling the growth and dynamics of biofilms from the cellular to the clustered scale with relatively low computational resources. More specifically, there are three main reasons for choosing IbM:

* IbM is a general multiscale method, capable of capturing the scaling effects from cell to "film". Since each bacterial cell is treated as a particle in the simulation, "cell-cell", "film-materials", and "cell-materials" interactions are all captured and described. When studying the transport of biofilms within porous regions, where both individual and group dynamics play important roles, the ability to capture multiscale mechanics is essential to our problem, as is the flexibility of tuning parameters across scales Li et al. (2019).
Footnote 2: many cells make up the film; "cell-cell" refers to the dynamics of individuals within the film.

* The IbM method is physically realistic at the particular scale of interest. The scale we focus on is mostly the micrometer scale, where IbM offers extremely high representational power and accuracy. First, the biofilms observed at the natural "pore scale" mostly refer to the scale of \(10^{-5}\sim 10^{-3}\)m Kapellos et al. (2015), which is particularly suitable for IbM. Note that each bacterial cell is approximately 1\(\mu\)m, so the pore scale perfectly captures the local morphology of the biofilms. Second, the adhesion and other micromechanisms that govern the overall mechanical behavior of biofilms mainly originate at the micrometer scale Galy et al. (2012), where IbM provides suitable computational tools for understanding them. Third, our ultimate goal is to translate our theoretical predictions and understanding into experimental implementations of ELM that can be used for designing new materials and devices. Most recent work on ELM is at the micrometer scale Rodrigo-Navarro et al. (2021), where our computational efforts can have the most impact.
* Compared with other methods, IbM offers relatively high fidelity at a modest computational cost. On the molecular scale, simulating the growth of biofilms using methods like molecular dynamics (MD) simulation or Monte Carlo sampling incurs an extremely high computational burden, making characterization of the mechanism on the micrometer scale impossible. As a reference, it would require 6 months to run MD of a protein structure for 1 \(\mu\)s Li et al. (2021), making this method infeasible for our problem. On the continuum scale, biofilm simulation usually incorporates the extended finite element method (XFEM) and the level set method (LSM) Duddu et al. (2008), which is also extremely computationally burdensome. To elaborate, coupling FEM with LSM usually requires a moving mesh that resolves the phase boundary Zhai et al. (2022), which significantly increases the computational resources required.

Footnote 3: The goal is to combine simulation with optimization, where the simulation is treated as the evaluated function. Hence, the function evaluation time is important for efficient optimization.

To solve the second problem, our answer is to use approximation methods to solve the inverse problem of materials design. If one defines designing materials by perturbing their original structures to obtain target properties as a _forward problem_, one can then define obtaining the tailored materials' structures from predefined target properties as an _inverse problem_. The detailed inverse problem here is formulated as finding the optimal porous structure corresponding to the target biofilm transport properties (i.e., maximized biofilm growth), a class 2 inverse problem. Examining this defined problem in detail, there are two main difficulties: (A) the defined inverse problem is ill-posed Hadamard (1902): two or more different porous structures may yield the same biofilm transport properties, so if one yields back the material structure representation as the solution of the inverse problem, this solution may not be unique; (B) there is no analytical (or symbolic) form of the inverse map.
The biofilm simulation consists of iterative growth and updates of bacteria cells, making it almost impossible to obtain an analytical inverse of this coupled multiphysics system under changing parameters.

Footnote 4: The rigorous formulation follows _Hadamard's principles_, which we do not discuss in detail here.

Footnote 5: For details of the simulation algorithms, please refer to Refs. Li et al. (2019) & Zhai and Yeo (2022).

To solve problem (A), our proposed solution is to characterize the design space. We approximate a surrogate model of the design space and verify the approximated map by conducting verification simulations near the observed maximal solution and at randomly selected points. This allows us to verify the accuracy of the fitted surrogate so that further analysis is reliable. To solve problem (B), our approach is to avoid gradient-based optimization and use machine learning (ML) techniques, i.e., Gaussian process regression (GPR), which also allows us to do direct surrogate modeling of the design space and thereby solve the first problem simultaneously. Combining the two solutions we propose, Bayesian optimization (BO) Frazier (2018) is the natural fit: it mainly consists of a GPR to approximate the design space map and an acquisition function to update the solution search scheme. To summarize and elaborate in detail, there are three major reasons for choosing BO:

* The flexibility of handling complex problems. Compared with gradient-based methods, BO is flexible and can be adapted to solve complicated optimization problems without requiring the calculation of derivatives of the evaluated functions.
* It is less computationally burdensome compared with other ML methods. As a non-parametric method, GPR requires fewer computational resources compared with neural networks (NN) and is especially suitable for problems defined in the limited-data regime Fuhg and Bouklas (2022). Compared with the widely used deep reinforcement learning (DRL) Sutton and Barto (2018), BO does not require iterative training of deep NNs for each function evaluation, and hence is significantly less computationally burdensome.
* The approximation of the design space map allows direct characterization and analysis of the sampling process. Compared with metaheuristic methods such as genetic algorithms Mitchell (1998) or particle swarm optimization Kennedy and Eberhart (1995), in which the updates of the function evaluation are based on random perturbations of the input variables inspired by natural phenomena, learning the design space map from GPR allows us to do detailed characterization, and hence supports our proposed solution to problem (A). Moreover, BO does not rely heavily on populations, usually requiring one evaluation per iteration, making the characterization much easier.

In more detail, in this paper we combine IbM and BO to solve a focused problem: inversely design porous structural materials for biofilm transport and characterize the biomechanics from the optimization processes. By solving this problem, we hope to answer the following questions: (1) What are the optimal porous microstructures that can maximize the transportability of biofilms? (2) Are the approximated design spaces accurate, and how do we verify them? (3) What biomechanical mechanisms are discovered by conducting the optimization and characterizing the design space? We will answer these questions in the following sections.
This paper is organized as follows. In Section 2 we briefly introduce the methods we use, including our computational models of biofilm physics (Section 2.1) and the Bayesian optimization scheme (Section 2.2), comprising the surrogate modeling by GPR (Section 2.2.1) and the iterative update scheme by the acquisition function (Section 2.2.2), followed by the formulations of our three numerical experiments for different porous materials in Section 2.3. We then show our results in Section 3: we discuss our optimization processes and optimal structures for the different numerical experiments in Sections 3.1 & 3.2, verify the discovered new phenomena, and provide additional mechanistic explanations in Section 3.3. Finally, we conclude the paper in Section 4.

## 2 Methods

As elaborated in Section 1, we use computational methods to model the growth of biofilms and their mechanical interactions with the porous metamaterials in a predefined simulation box. We then parameterize the material representation of the porous structure according to our defined numerical experiments and couple the simulation framework with Bayesian optimization to iteratively search for porous structures with better biofilm transport properties. The general schematic of this study is represented in Figure 1: one begins with inspiration from a natural phenomenon, namely that biofilms mostly grow in confined systems Friedlander et al. (2013); to mimic this phenomenon, one can define a porous structure within which the biofilm grows (Figure 1**A**). One can then run the simulation initiated by the parameterized materials representation (Figure 1**B**) coupled with Bayesian optimization (Figure 1**C**). The coupling is enabled by "variable passing" between the simulation and the optimization: the simulation takes the materials' representation as input and outputs the biofilm transport property as the objective for the optimization, and the optimization algorithm outputs an updated materials' representation, forming an iterative loop. This iterative search eventually proposes an optimal structure (Figure 1**D**). By characterizing the design space obtained in the optimization (Figure 1**C**) and comparing the observations from these simulations, one can then propose explanations for the optimal structures and identify new mechanisms of biofilm transport physics (Figure 1**E**). In the following subsections, we first briefly introduce the basic formulation of our individual-based computational models and then the basic mathematical formulation of Bayesian optimization. Finally, we introduce the formulations of the designed problems for porous membranes, lattice metamaterials, and non-convex porous media, respectively.

### Computational Models

In this work, our IbM computational models are developed based on the Newcastle University Frontiers in Engineering Biology (NUFEB) framework Li et al. (2019), in which each bacterial cell is modeled as a spherical particle. Biofilms are formed by cell division and extrusion of EPS. Following our previous work on surface shape optimization Zhai and Yeo (2022), microbe growth and decay are governed by the following differential equation:

\[\frac{dm_{i}}{dt}=\xi_{i}m_{i} \tag{1}\]

where \(m_{i}\) is the biomass of the \(i^{\mathrm{th}}\) bacterial cell and \(\xi_{i}\) is its growth rate. The growth rate of the bacterial cells is \(\xi=0.00028\,\mathrm{s}^{-1}\).
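To make the growth law concrete, the following minimal Python sketch integrates Equation (1) together with the cell-division rule at the critical radius \(r^{\mathsf{C}}\) introduced later in this section; the explicit-Euler update, the biomass density, and the equal-mass split are illustrative assumptions rather than NUFEB's actual implementation:

```python
import numpy as np

# Illustrative sketch of Eq. (1): dm_i/dt = xi_i * m_i, with division at r_C.
# The forward-Euler update, uniform density RHO, and equal-mass split are
# simplifying assumptions for illustration, not NUFEB's implementation.
XI = 0.00028          # growth rate [1/s], value from the text
R_CRIT = 1.36e-6      # critical division radius [m], value from the text
RHO = 150.0           # assumed biomass density [kg/m^3] (placeholder)

def radius(m):
    """Radius of a spherical cell of mass m at density RHO."""
    return (3.0 * m / (4.0 * np.pi * RHO)) ** (1.0 / 3.0)

def grow_and_divide(masses, dt):
    """One forward-Euler growth step, then division of over-critical cells."""
    masses = masses * (1.0 + XI * dt)              # Eq. (1), forward Euler
    daughters = []
    for m in masses:
        if radius(m) >= R_CRIT:                    # split into two daughters
            daughters.extend([0.5 * m, 0.5 * m])
        else:
            daughters.append(m)
    return np.asarray(daughters)

cells = np.full(10, 1.0e-15)                       # ten initial cells [kg]
for _ in range(1000):                              # 1000 steps of dt = 10 s
    cells = grow_and_divide(cells, dt=10.0)
print(len(cells), "cells after growth")
```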
Figure 1: The overall schematic of this research. The formulation is inspired by the simulation of biofilm transport in porous materials in **(A)** (top), which inspires us to define a computational framework (bottom), where initial bacteria cells distributed at the bottom of the simulation grow through a porous medium, as indicated in the grey area. **(B)** The growth processes of the biofilms within the porous materials, i.e., the computational simulations, calculated by individual-based modeling. **(C)** The reconstructed design space for the porous material from the Bayesian optimization. **(D)** The extracted optimal design from the design space. **(E)** One may uncover new physical phenomena and mechanisms of bacteria transport in porous materials by comparing the optimal design and the designed benchmark cases.

To avoid overlap of the particles during the growth processes, the particles are mechanically relaxed using the individual-based approach, solved via Newton's equation

\[m_{i}\frac{d\mathbf{v}_{i}}{dt}=\mathbf{F}_{c,i}+\mathbf{F}_{a,i} \tag{2}\]

where \(\mathbf{v}_{i}\) is the particle's velocity. The contact force \(\mathbf{F}_{c,i}\) is a pair-wise force between particles that prevents overlapping, based on Hooke's law:

\[\mathbf{F}_{c,i}=\sum_{j=1}^{N_{i}}\left(K_{\mathbb{N}}\delta\mathbf{n}_{i,j}-m_{i,j}\gamma_{\mathbb{N}}\mathbf{v}_{i,j}\right) \tag{3}\]

where \(N_{i}\) is the total number of neighboring particles of \(i\), \(K_{\mathbb{N}}\) is the elastic constant for normal contact, and \(\delta\mathbf{n}_{i,j}\) is the overlap distance between the center of particle \(i\) and its neighbor particle \(j\). \(\gamma_{\mathbb{N}}\) is the viscoelastic damping constant for normal contact, and \(\mathbf{v}_{i,j}\) is the relative velocity of the two particles. The EPS adhesive force \(\mathbf{F}_{a,i}\) is a pair-wise interaction modeled as a van der Waals force:

\[\mathbf{F}_{a,i}=\sum_{j=1}^{N_{i}}\frac{H_{a}r_{i,j}}{12h_{min,i,j}^{2}}\mathbf{n}_{i,j} \tag{4}\]

where \(H_{a}\) is the Hamaker coefficient and \(r_{i,j}\) is the effective outer radius of the \(i^{\mathrm{th}}\) and \(j^{\mathrm{th}}\) particles. \(h_{min,i,j}\) is the minimum separation distance of the two particles, and \(\mathbf{n}_{i,j}\) is the unit vector from particle \(i\) to \(j\). Mechanical equilibrium is achieved when the average pressure of the microbial community reaches a plateau. The average pressure \(P\) of the system is calculated as

\[P=\frac{1}{3V}\left(\sum_{i=1}^{N}m_{i}\mathbf{v}_{i}\cdot\mathbf{v}_{i}+\sum_{i=1}^{N}\sum_{j>i}^{N}\mathbf{r}_{i,j}\cdot\mathbf{F}_{i,j}\right) \tag{5}\]

where \(V\) is the sum of the particles' volumes. The first term in the bracket is the contribution from the kinetic energy of each particle. The second term is the interaction energy, where \(\mathbf{r}_{i,j}\) and \(\mathbf{F}_{i,j}\) are the distance and force between two interacting particles \(i\) and \(j\), respectively. Here, we employ the Monod-based method Monod (1949) to model microbial growth, in which the growth rate is determined by the Monod kinetic equation driven by the local concentration of nutrients. The porous materials are modeled as fully rigid particles with neither growth nor decay. Under the Monod model formulation, each bacterial cell first grows with increasing radius, and once its radius reaches a critical value \(r^{\mathsf{C}}=1.36\times 10^{-6}\)m, the cell divides into two daughter cells (for details, see Ref. Li et al. (2019)).
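As an illustration of the mechanical model, the following is a minimal numpy sketch of the contact force (Equation (3)), the EPS adhesion force (Equation (4)), and the average pressure (Equation (5)); the parameter values, the neighbor criterion, and the sign conventions are simplified placeholder choices, not calibrated NUFEB parameters:

```python
import numpy as np

# Illustrative sketch of Eqs. (3)-(5) for a handful of spherical cells.
# K_N, GAMMA_N, H_A, and the example radii/masses are placeholders.
K_N, GAMMA_N, H_A = 1.0e-2, 1.0e-7, 1.0e-20

def pairwise_forces(pos, vel, rad, mass):
    """Per-particle contact + adhesion forces and the virial sum of Eq. (5)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    virial = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r_ij = pos[j] - pos[i]                 # n_ij points from i to j
            dist = np.linalg.norm(r_ij)
            n_ij = r_ij / dist
            overlap = rad[i] + rad[j] - dist
            if overlap > 0.0:                      # Eq. (3): Hookean contact
                m_eff = mass[i] * mass[j] / (mass[i] + mass[j])
                f = (-K_N * overlap * n_ij
                     - m_eff * GAMMA_N * (vel[i] - vel[j]))  # repel + damp
            else:                                  # Eq. (4): vdW-type adhesion
                h_min = max(-overlap, 1.0e-9)      # minimum surface separation
                r_eff = rad[i] * rad[j] / (rad[i] + rad[j])
                f = (H_A * r_eff / (12.0 * h_min**2)) * n_ij
            forces[i] += f
            forces[j] -= f
            virial += np.dot(pos[i] - pos[j], f)   # r_ij . F_ij term, Eq. (5)
    return forces, virial

def average_pressure(pos, vel, rad, mass):
    """Eq. (5): kinetic plus interaction contributions over total volume."""
    _, virial = pairwise_forces(pos, vel, rad, mass)
    kinetic = np.sum(mass * np.einsum("ij,ij->i", vel, vel))
    volume = np.sum(4.0 / 3.0 * np.pi * rad**3)
    return (kinetic + virial) / (3.0 * volume)

# Example: two slightly overlapping cells in contact.
pos = np.array([[0.0, 0.0, 0.0], [1.5e-6, 0.0, 0.0]])
vel = np.zeros((2, 3))
rad = np.array([1.0e-6, 1.0e-6])
mass = np.array([1.0e-15, 1.0e-15])
print(average_pressure(pos, vel, rad, mass))
```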
The EPS, also modeled as particles, are secreted by the bacteria cells during the growth process (for details, see Refs. Xavier et al. (2005), Jayathilake et al. (2017)). After a given number of iterations, the system contains a total number of bacteria cells and EPS particles, \(\mathcal{N}_{\mathrm{bio}}^{\mathrm{total}}\).

### Bayesian Optimization

The goal of the optimization is to minimize or maximize an objective function, which in our case is the bacteria cell number within a target design region, denoted \(\mathcal{N}_{\mathrm{bio}}\) for ease of notation (\(\mathcal{N}_{\mathrm{bio}}\subset\mathcal{N}_{\mathrm{bio}}^{\mathrm{total}}\in\mathbb{Z}\)). We use \(\mathcal{N}_{\mathrm{bio}}=\mathcal{M}_{\text{NUFEB}}(N_{\mathrm{unit}},\bar{\mathcal{D}};\mathbf{p})\) to denote the multivariate functional relation, in which \(N_{\mathrm{unit}}\) and \(\bar{\mathcal{D}}\), standing for the number of unit cells per simulation box side and the dimensionless structural parameter (or dimensionless variable), respectively, are the design variables, to be elaborated in detail in Section 2.3. For simplicity, we use \(\mathcal{D}\mathcal{V}=[N_{\mathrm{unit}},\bar{\mathcal{D}}]\) to denote the design variables. \(\mathbf{p}\) denotes the parameters involved in the numerical simulation, as presented in Equations (1\(\sim\)5). The optimization process can be summarized as:

\[\operatorname*{arg\,max}_{N_{\mathrm{unit}},\bar{\mathcal{D}}}\mathcal{N}_{\mathrm{bio}}=\mathcal{M}_{\text{NUFEB}}(N_{\mathrm{unit}},\bar{\mathcal{D}};\mathbf{p}), \tag{6}\]
\[\mathrm{subject\ to}\quad\bar{\mathcal{D}}_{\mathrm{LB}}\leq\bar{\mathcal{D}}\leq\bar{\mathcal{D}}_{\mathrm{UB}},\;1\leq N_{\mathrm{unit}}\leq 15\;(N_{\mathrm{unit}}\in\mathbb{Z})\]

Here, we define a target growth region in which \(\mathcal{N}_{\mathrm{bio}}\) is counted (Section 2.3), so that the optimization tailors the materials' microstructure to enhance growth toward the target region. Given the input design variables \(\mathcal{D}\mathcal{V}\), we represent the biofilm growth physics simulation model as a map, \(\mathcal{M}_{\text{NUFEB}}:N_{\mathrm{unit}},\bar{\mathcal{D}}\to\mathcal{N}_{\mathrm{bio}}\), where the simulation parameters \(\mathbf{p}=[\xi_{i},K_{\mathbb{N}},\gamma_{\mathbb{N}},H_{a},r^{\mathsf{C}},...]\) are incorporated in the IbM model (Section 2.1). \(\mathcal{M}_{\text{NUFEB}}(\cdot)\) stands for the numerical simulation from NUFEB that maps the design representation of the materials as input to the bacterial cell count as output. \(N_{\mathrm{unit}}\) is an integer between 1 and 15, as the number of unit cells changes along the BO iterations. The dimensionless structure parameter \(\bar{\mathcal{D}}\) is defined per case, and its lower and upper bounds \(\bar{\mathcal{D}}_{\mathrm{LB}}\) & \(\bar{\mathcal{D}}_{\mathrm{UB}}\) differ based on the simulation and materials basis settings, to be discussed in Section 2.3.

BO iteratively updates new evaluations from the computational models of Section 2.1 to search for optimal materials. By sampling multiple simulations and mapping the design variables to the defined objective, one can construct a surrogate of the direct map between the input (i.e., the design variables) and the output (i.e., the objective) from GPR. This GPR-reconstructed surrogate is then updated through the acquisition function of choice.
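Before detailing the two components, here is a minimal sketch of how the design problem of Equation (6) can be encoded in Python; `run_nufeb` is a hypothetical stand-in (with a toy synthetic response) for launching a NUFEB simulation and counting cells in the target region, and it is not part of NUFEB's API. The bounds follow Section 2.3:

```python
# Illustrative encoding of the design problem in Eq. (6). `run_nufeb` is a
# hypothetical placeholder for the real simulator, returning a smooth toy
# response only so the sketch runs end-to-end.
BOUNDS = {
    "porous_membrane": {"N_unit": (1, 15), "D_bar": (0.1, 0.9)},
    "lattice":         {"N_unit": (1, 15), "D_bar": (0.1, 0.5)},
    "porous_media":    {"N_unit": (1, 15), "D_bar": (0.5, 1.2)},
}

def run_nufeb(n_unit: int, d_bar: float) -> float:
    """Toy stand-in for M_NUFEB: NOT the real simulator."""
    return 25000.0 + 8000.0 * d_bar + 200.0 * n_unit

def objective(case: str, n_unit: float, d_bar: float) -> float:
    """N_bio in the target region, with the constraints of Eq. (6) enforced."""
    lo_n, hi_n = BOUNDS[case]["N_unit"]
    lo_d, hi_d = BOUNDS[case]["D_bar"]
    n_unit = int(round(min(max(n_unit, lo_n), hi_n)))  # integer constraint
    d_bar = min(max(d_bar, lo_d), hi_d)                # box constraint
    return run_nufeb(n_unit, d_bar)
```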
#### 2.2.1 Gaussian Process Regression

GPR is a Bayesian statistical approach to approximating and modeling functions. Considering our optimization problem, the function can be denoted as \(\mathcal{N}_{\mathrm{bio}}=\mathcal{M}_{\text{NUFEB}}(\mathcal{D}\mathcal{V};\mathbf{p})\), where \(\mathcal{N}_{\mathrm{bio}}\) is evaluated at a collection of different sets of points (or design variables): \(\mathcal{D}\mathcal{V}_{1},\mathcal{D}\mathcal{V}_{2},...,\mathcal{D}\mathcal{V}_{k}\in\mathbb{R}^{2}\). We can then obtain the vector \([\mathcal{M}_{\text{NUFEB}}(\mathcal{D}\mathcal{V}_{1}),...,\mathcal{M}_{\text{NUFEB}}(\mathcal{D}\mathcal{V}_{k})]\) to construct a surrogate model relating the design parameters to the correlated objectives. The vector is randomly drawn from a prior probability distribution, where GPR takes this prior distribution to be a multivariate normal with a particular mean vector and covariance matrix. Here, the mean vector and covariance matrix are constructed by evaluating the mean function \(\mu_{0}\) and the covariance function \(\Sigma_{0}\) at each pair of points \(\mathcal{D}\mathcal{V}_{i}\), \(\mathcal{D}\mathcal{V}_{j}\). The resulting prior distribution on the vector \([\mathcal{M}_{\text{NUFEB}}(\mathcal{D}\mathcal{V}_{1}),...,\mathcal{M}_{\text{NUFEB}}(\mathcal{D}\mathcal{V}_{k})]\) is represented in the form of a normal distribution to construct the surrogate model Frazier (2018):

\[\mathcal{N}_{\mathrm{bio}}(\mathcal{D}\mathcal{V}_{1:k})\sim\mathfrak{N}\left(\mu_{0}(\mathcal{D}\mathcal{V}_{1:k}),\Sigma_{0}(\mathcal{D}\mathcal{V}_{1:k},\mathcal{D}\mathcal{V}_{1:k})\right) \tag{7}\]

where \(\mathfrak{N}(\cdot)\) denotes the normal distribution. The collection of input points is represented in compact notation: \(1:k\) represents the range \(1,2,...,k\). The surrogate model \(\mathcal{M}_{\text{NUFEB}}(\mathcal{D}\mathcal{V})\) on \(1:k\) is represented as the probability distribution given in Equation (7). To update the model with new observations, such as after inferring the value of \(\mathcal{M}_{\text{NUFEB}}(\mathcal{D}\mathcal{V})\) at a new point \(\mathcal{D}\mathcal{V}\), we let \(k=l+1\) and \(\mathcal{D}\mathcal{V}_{k}=\mathcal{D}\mathcal{V}\). The conditional distribution of \(\mathcal{N}_{\mathrm{bio}}\) given observations at \(\mathcal{D}\mathcal{V}_{1:l}\), using Bayes' rule, is

\[\mathcal{N}_{\mathrm{bio}}(\mathcal{D}\mathcal{V})|\mathcal{N}_{\mathrm{bio}}(\mathcal{D}\mathcal{V}_{1:l})\sim\mathfrak{N}(\mu_{l}(\mathcal{D}\mathcal{V}),\sigma_{l}^{2}(\mathcal{D}\mathcal{V})) \tag{8}\]
\[\mu_{l}(\mathcal{D}\mathcal{V})=\Sigma_{0}(\mathcal{D}\mathcal{V},\mathcal{D}\mathcal{V}_{1:l})\Sigma_{0}(\mathcal{D}\mathcal{V}_{1:l},\mathcal{D}\mathcal{V}_{1:l})^{-1}\left(\mathcal{M}_{\text{NUFEB}}(\mathcal{D}\mathcal{V}_{1:l})-\mu_{0}(\mathcal{D}\mathcal{V}_{1:l})\right)+\mu_{0}(\mathcal{D}\mathcal{V})\]
\[\sigma_{l}^{2}=\Sigma_{0}(\mathcal{D}\mathcal{V},\mathcal{D}\mathcal{V})-\Sigma_{0}(\mathcal{D}\mathcal{V},\mathcal{D}\mathcal{V}_{1:l})\Sigma_{0}(\mathcal{D}\mathcal{V}_{1:l},\mathcal{D}\mathcal{V}_{1:l})^{-1}\Sigma_{0}(\mathcal{D}\mathcal{V}_{1:l},\mathcal{D}\mathcal{V})\]

where the posterior mean \(\mu_{l}(\mathcal{D}\mathcal{V})\) is a weighted average between the prior \(\mu_{0}(\mathcal{D}\mathcal{V})\) and the estimation from \(\mathcal{M}_{\text{NUFEB}}(\mathcal{D}\mathcal{V}_{1:l})\), where the weight applied depends on the kernel used. Here, we use the Gaussian kernel; hence the prior covariance is Biswas et al. (2021):
\[\Sigma_{0}(\mathcal{D}\mathcal{V}_{i},\mathcal{D}\mathcal{V}_{j})=\sigma^{2}R(\mathcal{D}\mathcal{V}_{i},\mathcal{D}\mathcal{V}_{j}), \tag{9}\]
\[R(\mathcal{D}\mathcal{V}_{i},\mathcal{D}\mathcal{V}_{j})=\texttt{exp}\left(-\frac{1}{2}\sum_{m=1}^{d}\frac{(\mathcal{D}\mathcal{V}_{i,m}-\mathcal{D}\mathcal{V}_{j,m})^{2}}{\theta_{m}^{2}}\right)\]
\[\boldsymbol{\theta}=(\theta_{1},\theta_{2},...,\theta_{d})\]

where \(\sigma^{2}\) is the overall variance parameter and \(\theta_{m}\) is the correlation length scale parameter in dimension \(m\) of the \(d\)-dimensional \(\mathcal{D}\mathcal{V}\); these are the hyperparameters of GPR. \(R(\mathcal{D}\mathcal{V}_{i},\mathcal{D}\mathcal{V}_{j})\) is the spatial correlation function. Our goal is to estimate the parameters \(\sigma\) and \(\theta_{m}\) that create the surrogate model given the training data \([(\mathcal{N}_{\mathrm{bio}})_{k},\ \mathcal{D}\mathcal{V}_{k}]\) at iteration \(k\). Here, we use \(\hat{\mathcal{M}}_{\text{GPR}}\) to denote the surrogate model constructed from GPR in the iterative updating process. The updated sampling scheme is achieved through the acquisition function described in the following section, which improves the accuracy of the updated surrogate so that the reconstructed design space approximates the theoretical continuous design space from NUFEB simulations, \(\hat{\mathcal{M}}_{\text{GPR}}\sim\mathcal{M}_{\text{NUFEB}}\).
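For concreteness, the following is a compact numpy sketch of Equations (7)-(9), assuming a zero prior mean and fixed placeholder hyperparameters; in practice \(\sigma\) and \(\theta_{m}\) are fit to the training data:

```python
import numpy as np

# Illustrative sketch of Eqs. (7)-(9): Gaussian kernel plus the posterior
# mean and variance of the GPR surrogate. sigma, theta, and the zero prior
# mean are placeholder choices, not fitted hyperparameters.
def gaussian_kernel(X1, X2, sigma=1.0, theta=(1.0, 0.1)):
    """Sigma_0(DV_i, DV_j) = sigma^2 exp(-0.5 * sum_m (d_m / theta_m)^2)."""
    theta = np.asarray(theta)
    d = (X1[:, None, :] - X2[None, :, :]) / theta   # scaled differences
    return sigma**2 * np.exp(-0.5 * np.sum(d**2, axis=-1))

def gpr_posterior(X_train, y_train, X_query, noise=1e-8):
    """Eq. (8): posterior mean mu_l and variance sigma_l^2 at query points."""
    K = gaussian_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = gaussian_kernel(X_query, X_train)
    K_inv = np.linalg.inv(K)
    mu = K_star @ K_inv @ y_train                    # zero prior mean assumed
    cov = gaussian_kernel(X_query, X_query) - K_star @ K_inv @ K_star.T
    return mu, np.diag(cov)

# Example: three (N_unit, D_bar) samples with their N_bio objectives.
X = np.array([[5, 0.3], [10, 0.6], [15, 0.9]], dtype=float)
y = np.array([25000.0, 29000.0, 32000.0])
mu, var = gpr_posterior(X, y, np.array([[12.0, 0.7]]))
```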
#### 2.2.2 Acquisition Function

Given the training data \([(\mathcal{N}_{\mathrm{bio}})_{k},\ \mathcal{D}\mathcal{V}_{k}]\), Equation (7) gives us the prior distribution \((\mathcal{N}_{\mathrm{bio}})_{l}\sim\mathfrak{N}(\mu_{0},\Sigma_{0})\) as the surrogate. This prior and the given dataset induce a posterior: the acquisition function, denoted \(\mathcal{A}:\mathcal{X}\rightarrow\mathbb{R}^{+}\), determines the point in \(\mathcal{X}\) to be evaluated through the proxy optimization \(\mathcal{D}\mathcal{V}_{\mathrm{best}}=\arg\max_{\mathcal{D}\mathcal{V}}\mathcal{A}(\mathcal{D}\mathcal{V})\). The acquisition function depends on the previous observations, which can be represented as \(\mathcal{A}=\mathcal{A}(\mathcal{D}\mathcal{V};(\mathcal{D}\mathcal{V}_{l},(\mathcal{N}_{\mathrm{bio}})_{l}),\theta)\), where \((\mathcal{D}\mathcal{V}_{l},(\mathcal{N}_{\mathrm{bio}})_{l})\) leads to the reconstructed \(\hat{\mathcal{M}}_{\text{GPR}}\). Using our previous notation, the new observation is probed through the acquisition Deshwal et al. (2021):

\[\mathcal{D}\mathcal{V}_{k}=\mathcal{D}\mathcal{V}_{l+1}=\operatorname*{arg\,max}_{\mathcal{D}\mathcal{V}}\mathcal{A}\left(\mathcal{D}\mathcal{V};(\hat{\mathcal{M}}_{\text{GPR}})_{l},\theta_{m}\right) \tag{10}\]

where the input space contains the evaluations of the design variables at \(l\) points: \((\mathcal{D}\mathcal{V}_{1},\mathcal{D}\mathcal{V}_{2},...,\mathcal{D}\mathcal{V}_{l})\). We compare and characterize two different acquisition functions, the Upper Confidence Bound (UCB) and the Expected Improvement (EI), as a benchmark study on the effect of acquisition updates. The UCB exploits the upper confidence bounds to construct the acquisition and minimize the regret. UCB takes the form Snoek et al. (2012):

\[\mathcal{A}_{\text{UCB}}\left(\mathcal{D}\mathcal{V};(\mathcal{D}\mathcal{V}_{l},(\mathcal{N}_{\mathrm{bio}})_{l}),\theta_{m}\right):=\mu_{l}\left(\mathcal{D}\mathcal{V};(\mathcal{D}\mathcal{V}_{l},(\mathcal{N}_{\mathrm{bio}})_{l}),\theta_{m}\right)+\kappa\sigma\left(\mathcal{D}\mathcal{V};(\mathcal{D}\mathcal{V}_{l},(\mathcal{N}_{\mathrm{bio}})_{l}),\theta_{m}\right) \tag{11}\]

where \(\kappa\) is a tunable parameter balancing exploitation and exploration when constructing the surrogate model. We take \(\kappa=2\) in our implementation. The EI acquisition function reads:

\[\mathcal{A}_{\text{EI}}\left(\mathcal{DV};\left(\mathcal{DV}_{l},(\mathcal{N}_{\text{bio}})_{l}\right),\theta_{m}\right):=\sigma_{l}\left(\mathcal{DV};\left(\mathcal{DV}_{l},(\mathcal{N}_{\text{bio}})_{l}\right),\theta_{m}\right)\left(\gamma(\mathcal{DV})\boldsymbol{\Phi}\left(\gamma(\mathcal{DV})\right)+\mathfrak{N}\left(\gamma(\mathcal{DV});0,1\right)\right) \tag{12}\]

where \(\gamma\) is computed as \(\gamma=\left(-\mathcal{M}_{\text{NUFEB}}(\mathcal{DV}_{\text{best}})+\mu(\mathcal{DV};\{\mathcal{DV}_{l},(\mathcal{N}_{\text{bio}})_{l}\}_{l},\theta)-\Xi\right)/\sigma\left(\mathcal{DV};\{\mathcal{DV}_{l},(\mathcal{N}_{\text{bio}})_{l}\}_{l},\theta\right)\), and \(\Xi\) is a damping factor in the code implementation, with \(\Xi=10^{-4}\) in our implementation. Note that \(\mathcal{A}_{\text{EI}}\) preserves a closed form under the GP evaluations. Combining GPR and the acquisition function, the surrogate model can approximate the design space's maximal value. In our case, such BO methods are applied to obtain optimal porous materials' structures with maximal bacterial cell numbers in the target transport region. The total number of function evaluations differs per case, as discussed in the following Section 2.3.
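The two acquisition rules above can be sketched directly on top of the GPR posterior; \(\kappa=2\) and \(\Xi=10^{-4}\) follow the values stated in the text, while everything else is a generic illustration rather than our exact implementation:

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch of the UCB (Eq. 11) and EI (Eq. 12) acquisitions
# evaluated on the GPR posterior (mu, sigma).
def ucb(mu, sigma, kappa=2.0):
    """Eq. (11): posterior mean plus kappa times posterior std."""
    return mu + kappa * sigma

def expected_improvement(mu, sigma, y_best, xi=1e-4):
    """Eq. (12): closed-form EI for maximization, with damping factor Xi."""
    sigma = np.maximum(sigma, 1e-12)           # guard against zero variance
    gamma = (mu - y_best - xi) / sigma
    return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))

def next_sample(candidates, mu, sigma, y_best, use_ei=True):
    """Proxy optimization of Eq. (10) over a dense candidate grid."""
    scores = (expected_improvement(mu, sigma, y_best)
              if use_ei else ucb(mu, sigma))
    return candidates[np.argmax(scores)]
```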
### Numerical Experiments

Here, we define three different simulation cases to simulate the process of biofilm growth constrained within porous materials, inspired by experimental setups, literature results, and natural phenomena. The general schematic representing the numerical experimental setup is illustrated in Figure 2. Recalling the overall optimization formulation in Equation (6), \(\mathcal{N}_{\mathrm{bio}}\) is the number of bacteria cells counted in the top quarter region, nominated as the _objective growth region_, i.e., \(L_{\mathrm{obj}}\times L_{X}\times L_{Y}\). The porous materials' microstructures are defined in the _materials region_, i.e., \(L_{\mathrm{mat}}\times L_{X}\times L_{Y}\). The initial bacteria cells are distributed in the _initial biomass region_, i.e., \(L_{\mathrm{bio}}\times L_{X}\times L_{Y}\). \(N_{\mathrm{unit}}\) is formulated differently based on the "dimension" of the problem: for the porous membrane (Figure 2**A**), \(N_{\mathrm{unit}}\) is only defined in the X-Y plane; for lattice metamaterials and non-convex porous media, it is defined in the X, Y, and Z directions. \(\bar{\mathcal{D}}\) is defined within the unit cells. Here, \(L_{X}=L_{Y}=50\mu\)m, \(L_{\mathrm{bio}}=L_{\mathrm{obj}}=12.5\mu\)m, and \(L_{\mathrm{mat}}=25\mu\)m.

* **Porous Membranes.** Biofilm growth and flow constrained in microchannels are widely applied and studied by the microfluidics community, with applications spanning energy, biosensing, and many others Pousti et al. (2019); Ye et al. (2021); many numerical Landa-Marban et al. (2019); Aspa et al. (2011) and theoretical Landa-Marban et al. (2020) studies have been developed to understand the mechanism of biofilm growth and flow in a microchannel. Here, our numerical implementation for channeled biofilm growth is mainly inspired by the simulation setup of Aspa et al. (2011), where cylinder-shaped convex pores are "drilled" into the solid material to create channels for the biofilm to grow within (Figure 2**A**). The morphology of the unit cell is shown in the right subfigure of Figure 2**A**: the radius of the hole (vacuum area) is denoted \(\mathcal{R}_{\text{vac}}\), and the length of the residual solid body (the volumetric part, equal to half the unit cell length minus \(\mathcal{R}_{\text{vac}}\)) is denoted \(\mathcal{R}_{\text{vol}}\). The dimensionless variable can then be computed as \(\bar{\mathcal{D}}=\frac{\mathcal{R}_{\text{vac}}}{\mathcal{R}_{\text{vac}}+\mathcal{R}_{\text{vol}}}\). In this scenario, the range of the dimensionless variable is defined as \(\bar{\mathcal{D}}\in[0.1,0.9]\) (\(\bar{\mathcal{D}}_{\text{LB}}\) and \(\bar{\mathcal{D}}_{\text{UB}}\) in Equation (6)). The significance of this problem is that optimization results from designing porous channels (or 2D porous membranes) could potentially provide solutions for biofilm transport and utilization as ELM, since this kind of topological formulation is easy to manufacture. Based on this material formulation, we also conduct a benchmarking study comparing the effect of the acquisition function in sampling the design space from BO (Section 2.2.2), in which we also characterize the design space from the sampling perspective; this could guide general materials design optimizations.
* **Lattice Metamaterials.** In the last five years, there has been huge growth in the study of the design Ma et al. (2022); Shaw et al. (2019) and properties Gu (2018); Portela et al. (2020) of mechanical metamaterials (or, synonymously, architectured materials). However, their potential applications in biomass storage and transport are rarely touched upon, with very few works concerning their potential use as biofilm carriers Ovelheiro (2020); He et al. (2021) and related properties Hall et al. (2021). Here, we hope to use our simulations to fill this gap and bring new insights into the possibilities of using lattice metamaterials for biofilm storage and transport. The unit cell of such metamaterials is shown in the right subfigure of Figure 2**B**: the half-length of the vacuum area is denoted \(\ell_{\text{vac}}\), and the edge length of the solid volumetric part is denoted \(\ell_{\text{vol}}\); the dimensionless variable is defined as \(\bar{\mathcal{D}}=\frac{\ell_{\text{vol}}}{\ell_{\text{vac}}+\ell_{\text{vol}}}\). The range of the dimensionless variable is defined as \([0.1,0.5]\).
* **Non-convex Porous Media.** Inspired by the fact that biofilms are mostly found in natural habitats where they are constrained by pseudo-spherical or spherical solid bodies Bhattacharjee and Datta (2019); Carrel et al. (2018); Coyte et al. (2016); Kurz et al. (2022), we propose a simulation scenario where biofilm grows among non-convex solid bodies, shown in Figure 2**C**. The simulations are mainly inspired by the studies of Dehkharghani et al. (2023) and Bhattacharjee and Datta (2019): we use BO as a tool to sample the scale effect studied in Dehkharghani et al. (2023) over a 3D porous packing of solid spherical bodies similar to that in Bhattacharjee and Datta (2019).
The dimensionless variable is defined as the radii ratio between the solid spheres and the overall unit cell lengths (right subfigure in Figure 2**C**): \(\bar{\mathcal{D}}=\frac{\mathcal{R}_{\mathrm{vol}}}{\mathcal{R}_{\mathrm{vac}}+\mathcal{R}_{\mathrm{vol}}}\). The range of the dimensionless variable is defined as \([0.5,1.2]\). Note that instead of simulating potentially manufacturable porous materials to inspire industrial applications, we hope to use this case, combined with the BO sampling, to investigate biofilm transport scenarios in nature.

Figure 2: The schematic illustration of the three different porous materials formulations. The porous materials are treated as repeated elements of unit cells, and the number of unit cells per length is \(N_{\rm unit}\) (marked in the middle sub-figures), which is defined as a design variable in the optimization. For every unit cell, the dimensionless structure parameter \(\bar{\mathcal{D}}\) is defined to quantify the vacuum-solid spatial ratio in a unit cell, as illustrated in the right sub-figures. **(A)** Two-dimensional porous membranes for biofilm transport. Note that "two-dimensional" means no repeated unit cells in the third dimension, i.e., the Z axis; the design variables hence do not perturb the geometries in the third dimension. Bacteria cells grow within the "micro-pipelines" of the membranes toward the top region. The dimensionless variable, \(\bar{\mathcal{D}}=\mathcal{R}_{\rm vac}/(\mathcal{R}_{\rm vac}+\mathcal{R}_{\rm vol})\), is defined as the radii ratio between the vacuum region and the overall region (vacuum + volumetric solid). **(B)** Lattice porous metamaterials for biofilm transport. Bacteria cells grow within the porous region of the lattice microstructures to reach the top. The unit cell dimensionless variable, \(\bar{\mathcal{D}}=\ell_{\rm vac}/(\ell_{\rm vac}+\ell_{\rm vol})\), is defined as the length ratio between the vacuum region and the overall region. **(C)** Non-convex three-dimensional porous media for biofilm transport. Bacteria cells grow within the porous region of the porous media to reach the top. The unit cell dimensionless variable, \(\bar{\mathcal{D}}=\mathcal{R}_{\rm vol}/(\mathcal{R}_{\rm vac}+\mathcal{R}_{\rm vol})\), is defined as the radii ratio between the volumetric region and the overall region.

We use the porous membrane case to first characterize the acquisition functions, applying BO for 500 iterations each. For the lattice metamaterials case, due to the high computational burden of the simulation, we apply BO for only 300 iterations with only the EI acquisition function. For the porous media case, we apply BO for 500 iterations with only the EI acquisition function. For all three cases, we conduct characterization simulations to examine the accuracy of the GPR-approximated design space, in which one set is based on observations toward the maximal point in the visualized reconstructed design space, and the other set is based on random tests selected in the design space.
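Putting the components of Section 2.2 together, the outer BO loop we run per design case can be sketched as follows, reusing the `BOUNDS`/`objective`, `gpr_posterior`, and `next_sample` sketches above; the random initialization and the dense candidate grid are illustrative choices, not the exact implementation used here:

```python
import numpy as np

# Illustrative outer BO loop combining the earlier sketches. In practice
# the objective values would be standardized and the GPR hyperparameters
# refit each iteration; both are omitted here for brevity.
def bayesian_optimize(case="porous_membrane", n_iter=500, n_init=5, seed=0):
    rng = np.random.default_rng(seed)
    lo_d, hi_d = BOUNDS[case]["D_bar"]
    # Candidate grid over (N_unit, D_bar) for the proxy optimization, Eq. (10).
    grid = np.array([[n, d] for n in range(1, 16)
                     for d in np.linspace(lo_d, hi_d, 50)])
    # Random initial evaluations of the (stand-in) simulator.
    X = np.column_stack([rng.integers(1, 16, n_init),
                         rng.uniform(lo_d, hi_d, n_init)]).astype(float)
    y = np.array([objective(case, *dv) for dv in X], dtype=float)
    for _ in range(n_iter):
        mu, var = gpr_posterior(X, y, grid)
        dv_next = next_sample(grid, mu, np.sqrt(np.maximum(var, 0.0)), y.max())
        X = np.vstack([X, dv_next])
        y = np.append(y, objective(case, *dv_next))
    best = np.argmax(y)
    return X[best], y[best], X, y
```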
## 3 Results & Discussion

### Porous Membranes

Figure 3 shows the whole optimization process updated by both the EI and UCB acquisition functions for the porous membrane design case. Figure 3**A** visualizes the change of the objective along the iterations, in which the red line stands for the optimization process updated by the EI acquisition function, and the blue line stands for that of the UCB acquisition function. It can be observed that UCB exhibits more evident fluctuations along the sampling process, and the objectives sampled by EI are more "clustered" toward the upper region. To be more rigorous, we generate Figure 3**B**, visualizing the overall statistical distribution of the objectives for the two acquisition functions. It can be qualitatively observed that the variance of EI is evidently smaller than that of UCB, and the mean objective value sampled by EI is higher. Quantitatively, the objective variances for the EI and UCB acquisition functions are \(2.62\times 10^{7}\) and \(3.46\times 10^{7}\), respectively; the variance of the UCB-sampled objectives is 32.08% higher than that of the EI acquisition. The mean objective values updated by the EI and UCB acquisition functions are \(\mathcal{N}_{\mathrm{bio}}^{\mathrm{EI}}=30502\) & \(\mathcal{N}_{\mathrm{bio}}^{\mathrm{UCB}}=30056\), respectively; the EI mean objective is 1.49% higher than that of the UCB acquisition function. Figure 3**C1** & **C2** visualize the trends of the normalized design variables along the sampling process by the EI and UCB acquisition functions, respectively. From both subfigures, judging by the value ranges visualized in the color bars, \(\bar{\mathcal{D}}\) is generally sampled toward higher values and \(N_{\mathrm{unit}}\) toward relatively lower values during the optimization processes.

Based on our qualitative observations from Figure 3, three questions naturally arise for further verification: (I) Observing the change of the objectives alone may not be comprehensive enough to estimate whether both acquisition functions are sampling in the "correct" directions, i.e., whether the sampling is moving toward higher objective values (the design goal). (II) Can we generally verify the accuracy of the GPR-approximated design space? (III) Concerning the observations in Figure 3**C1** & **C2**, what exact geometries are represented by the changing variables? Note that these three questions are fundamental to our following analysis of the different materials design cases. Here, to answer Question (I), we generate Figure 4 to visualize the sampling processes during the optimizations and characterize them with the overall sampling density. To answer Questions (II) & (III), we generate Figure 5 to characterize the approximated design space using simulations and visually show the general trends captured by the approximated models and simulation points. We then further visualize the geometries extracted from the characterization simulations.

Figure 4**A1** & **A2** visualize the overall reconstructed design spaces updated by the EI and UCB acquisition functions. Note that the dimensionless variable \(\bar{\mathcal{D}}\) is multiplied by 100 in the visualizations for ease of analysis. It can be observed that the two acquisition functions approximate the same trend: there is a large objective gradient emanating from the bottom-right corner. Physically, this indicates that when the pores' radii (\(\mathcal{R}_{\mathrm{vac}}\) in Figure 2**A**) are small and the unit cell numbers (\(N_{\mathrm{unit}}\)) are generally larger, the biofilm transport capability of the porous materials decreases. One also observes that the objective values are qualitatively higher with higher \(\bar{\mathcal{D}}\) values, i.e., \(\bar{\mathcal{D}}\gtrapprox 0.5\).
We hence visualize the "upper design space" in Figure 4**B1** & **B2**, in which the region \(\bar{\mathcal{D}}\in[0.5,0.9]\) is visualized. It can be observed that the objective values are higher in the "top-right" corner of the design space, where both the sampling point density and the normalized objective values are higher. To directly visualize the (normalized) sampling density, Figure 4**C** is created. We observe that the sampling density distribution largely overlaps with our observations of the design space: there are higher sampling densities toward the top-right corners (i.e., higher \(\bar{\mathcal{D}}\) and \(N_{\mathrm{unit}}\) values) for both acquisition functions. Combining Figure 4**A**, **B**, & **C**, one deduces from both the reconstructed design spaces and the sampling densities that porous membranes with larger \(\bar{\mathcal{D}}\) and \(N_{\rm unit}\) values have higher biofilm transportability, i.e., higher \(\mathcal{N}_{\rm bio}\). Here, the EI acquisition function samples 407 points in the "upper design space" (Figure 4**B1**), and the UCB acquisition function samples 373 points (Figure 4**B2**). If we define the design space in Figure 4**B** as the target region, the EI acquisition sampling is 9.12% more efficient than the UCB acquisition. If we only look at the last 100 iterations of the BO, the EI acquisition function samples 87 points in the target region, and the UCB acquisition function samples 85 points. Compared with a uniformly distributed grid search, the EI acquisition function is 74% more efficient and the UCB acquisition function is 70% more efficient. The EI acquisition function is 2.35% more efficient than the UCB acquisition, estimated from the last 100 design space samples in the target region.

To verify these observations from a more quantitative perspective and answer our Questions (II) & (III), we conduct design space characterization through additional simulations in Figure 5. Figure 5**A** & **B** show the general and zoomed views of the design space characterizations, comparing the selected characterization simulations (colored dots) and randomly selected simulations (grey dots) to verify the effect of the design variables (\(\bar{\mathcal{D}}\) & \(N_{\rm unit}\)) on the target bacteria cell numbers \(\mathcal{N}_{\rm bio}\). Here, the blue dots and grey dots in Figure 5**A** are extracted based on \(\bar{\mathcal{D}}=0.9\) and 0.2, respectively. The red dots and grey dots in Figure 5**B** are extracted based on \(N_{\rm unit}=15\) and 10, respectively. The \(\bar{\mathcal{D}}\) and \(N_{\rm unit}\) values for the blue and red dots are selected based on observations from Figure 4 as our guess for the porous materials' geometries with the highest objective value. The \(\bar{\mathcal{D}}\) and \(N_{\rm unit}\) values for the grey dots are randomly selected to compare with our observational guess. We then directly visualize the points from the characterization simulations on the GPR-reconstructed design space in Figure 5**D**: the black triangular dots correspond to the blue and red dots, and the grey triangular dots correspond to the grey dots, in the left subfigures (Figure 5**A** & **B**). It can be observed that the characterization simulation tests fit well with the GPR-approximated design space, as both the black and grey dots overlap well with the surface contours.
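As an aside, the normalized sampling-density maps (Figure 4**C**) and the grid-search efficiency figures quoted above can be computed from the list of sampled design variables along the lines of the following sketch; the histogram resolution is an illustrative choice, while the target-region bounds follow the membrane case in the text:

```python
import numpy as np

# Illustrative sketch of the normalized sampling-density map (Figure 4C)
# and the target-region efficiency metric used above. The bin counts are
# an illustrative choice.
def sampling_density(X, bins=(15, 20)):
    """Normalized 2D histogram of sampled (N_unit, D_bar) points."""
    hist, n_edges, d_edges = np.histogram2d(X[:, 0], X[:, 1], bins=bins)
    return hist / hist.max(), n_edges, d_edges

def target_region_efficiency(X, d_lo=0.5, d_hi=0.9, d_min=0.1, d_max=0.9):
    """Fraction of samples in the target region vs. a uniform grid search."""
    in_target = np.mean((X[:, 1] >= d_lo) & (X[:, 1] <= d_hi))
    uniform = (d_hi - d_lo) / (d_max - d_min)   # uniform-grid expectation
    return in_target / uniform - 1.0            # relative efficiency gain

# e.g., 87 of the last 100 EI samples falling in the target half of the
# D_bar range gives 87/50 - 1 = 74% over grid search, matching the text.
```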
Figure 3: Optimization results for the 2D porous membrane design. **(A)** The change of the objective value \(\mathcal{N}_{\rm bio}\) along the iteration process. The red dotted-dashed line represents the BO process using the Expected Improvement (EI) acquisition function; the blue dotted-dashed line represents the BO process using the Upper Confidence Bound (UCB) acquisition function. **(B)** The statistical distribution of the objective values along the optimization processes for the two different acquisition functions. **(C)** The normalized design variable change along the iteration process, corresponding to subfigure **A**, where subfigures **C1** & **C2** represent the BO updated by the EI and UCB acquisition functions, respectively.

We then pick a series of representative points from the characterization simulations and directly visualize them in Figure 5**C**, marked as red triangles in Figure 5**A** & **B** and nominated \(\mathbb{T}_{\alpha}\sim\mathbb{T}_{\gamma}\) & \(\mathbb{T}_{\rm a}\sim\mathbb{T}_{\rm c}\). It can be seen from the zoomed view in Figure 5**A** that the objective of \(\mathbb{T}_{\alpha}\) is evidently smaller than those of \(\mathbb{T}_{\beta}\) and \(\mathbb{T}_{\gamma}\), from which we can further deduce that a porous membrane with larger pores does not necessarily enhance the transportability of the porous material, which is counterintuitive. We propose that the reactive forces at the pore wall drive newly generated bacteria cells toward the upper region; when the radii of the pores are too large, this reactive force does not act on the bacteria cells as it does under smaller radii. Moreover, it can be observed from Figure 5**B** that for \(N_{\mathrm{unit}}=10\) & 15, the effects of the dimensionless variable \(\bar{\mathcal{D}}\) on the objective \(\mathcal{N}_{\mathrm{bio}}\) are similar, with sudden increases of the objective for \(\bar{\mathcal{D}}\in[0.2,0.4]\).

Based on our analysis, we deduce that the EI acquisition function outperforms the UCB acquisition function in our formulated porous materials design case, judging by the objective variance, the mean objective values, and the sampling improvements over the design space. We also observe from the design space analysis that with larger relative pore radii and more unit cells per side, the transportability of the porous material for biofilms is higher. We therefore adopt only the EI acquisition function in the further analysis of lattice and 3D non-convex porous media (Section 3.2).

### Lattice and Porous Materials

Figure 6 shows the reconstructed design spaces and the sampling processes along with the sampling densities, updated by the EI acquisition function, analogous to Figure 4, to answer our Question (I).

Figure 4: The design space reconstruction (visualized in normalized values) and sampling density maps by the two different acquisition functions for the 2D porous membrane design case. Here, **(A1\(\sim\)C1)** stand for the design space surrogate and sampling density map from the EI acquisition function, and **(A2\(\sim\)C2)** stand for those of the UCB acquisition function. Note that for subfigures **A**, the white dots are visualized in three batches: the first batch represents the first 300 iterations, visualized as small circular dots; the mid-100 iterations are visualized as square-shaped dots; and the last 100 iterations are visualized as large triangular dots, which are the easiest to identify.
For subfigures **B**, the visualization of the first two batches remains the same, whereas the last batch contains different evaluations and is still marked with triangular dots. For details please see the text. The main goal is to characterize the sampling density map through the morphology of the sampling dots in the reconstructed design space. **(A1)** The reconstructed design space by the EI acquisition function. **(B1)** Zoomed view toward the target design region from subfigure **A1**, where \(N_{\mathrm{unit}}\in[5,15]\) and \(\bar{\mathcal{D}}\times 100\in[50,100]\). **(C1)** The normalized sampling density map for the EI acquisition function, visualizing the density of the choices of the design variables in the optimization processes. **(A2)** The reconstructed design space by the UCB acquisition function. **(B2)** Zoomed view toward the target design region from subfigure **A2**, where \(N_{\mathrm{unit}}\in[5,15]\) and \(\bar{\mathcal{D}}\times 100\in[50,100]\). **(C2)** The normalized sampling density map for the UCB acquisition function, visualizing the design variables' densities in the optimization processes.

It can be observed from Figure 6**A1** that the reconstructed design space from 300 evaluations is much more non-convex compared with those of the 2D porous membrane (Figure 4**A**) and the porous media (Figure 6**A2**), but the sampling is more concentrated toward the mid-top region (normalized \(N_{\rm unit}\approx 0.5\) & \(\bar{\mathcal{D}}\in[0.4,0.5]\)). Figure 6**B1** is created to better visualize this region (\(N_{\rm unit}\in[1,10]\) & \(\bar{\mathcal{D}}\in[0.3,0.5]\)), in which, by qualitative estimation, one deduces that there are more sampling points around \(N_{\rm unit}=6\) and \(\bar{\mathcal{D}}=0.5\). Comparing the reconstructed design space and the sampling density (Figure 6**C1**), one observes that the general trends of the sampling density and the reconstructed design space overlap well; we hence pick \(N_{\rm unit}=6\) and \(\bar{\mathcal{D}}=0.45\) for further characterization simulations based on qualitative observations (Figure 7**1**). Figure 6**A2** shows that the reconstructed design space is shaped like a "tilted wave": the higher objective values are distributed along the "cross-split" across the design space coordinates. By observing both Figure 6**A2** & **C2**, we deduce that the sampling density is more centered toward the "upper design space". Hence, we only extract the zoomed view of the top-mid design space in Figure 6**B2** (\(N_{\rm unit}\in[1,10]\) & \(\bar{\mathcal{D}}\in[0.9,1.2]\)). From Figure 6**B2** we pick \(N_{\rm unit}=7\) and \(\bar{\mathcal{D}}=1.1\) to conduct characterization tests in Figure 7**2**.

To estimate the effect of the acquisition function on the sampling of the design space, we also estimate the spatial distribution of the last 100 iterations within the target design space (or target region), where the target regions are defined based on the zoomed design spaces in Figure 6**B** (\(N_{\rm unit}\in[1,10]\) & \(\bar{\mathcal{D}}\in[0.3,0.5]\) for lattice metamaterials in Figure 6**B1**; \(N_{\rm unit}\in[1,10]\) & \(\bar{\mathcal{D}}\in[0.9,1.2]\) for 3D porous media in Figure 6**B2**). For the lattice metamaterials, there are 62 points sampled in the target region, which is 92.89% higher than the uniform distribution of 100 points under an assumed grid search method (32.14 points in the target region). For the 3D porous media, there are 89 points sampled in the target region, which is 223.04% more efficient than the uniformly sampled 100 points (27.55 points in the target region).
In sum, BO exhibits an outstanding ability to sample toward the target design goal for both porous structure cases. Besides our selected characterization tests, and in the same manner as the porous membrane design space characterization (Figure 5), we also randomly pick two additional characterization tests for \(N_{\rm unit}\) and \(\bar{\mathcal{D}}\) for each porous materials design case. For designing the lattice metamaterials, we pick \(\bar{\mathcal{D}}=0.1\) (Figure 7**A1**) and \(N_{\rm unit}=15\) (Figure 7**B1**), and for the 3D porous media design, we pick \(\bar{\mathcal{D}}=0.5\) (Figure 7**A2**) and \(N_{\rm unit}=15\) (Figure 7**B2**).

Figure 5: Design space characterization for the Gaussian process regression (GPR) reconstructed design space and topology extraction from the characterization processes for the 2D porous membrane design case. **(A)** Characterization of the design variable \(N_{\rm unit}\) with different fixed values of \(\bar{\mathcal{D}}\). Note that the blue circular dots correspond to the black triangular dots, and the grey circular dots correspond to the grey triangular dots, in subfigure **D**. The blue and red circular dots are the characterization tests informed by qualitative observation of the GPR-reconstructed design space to approximate the optimal design (i.e., the maximal point), and the grey dots are random tests to benchmark our observation-informed characterization. The zoomed view describes the detailed differences between the two sets of characterization simulations, in which three sets of membrane topologies are selected and highlighted in red triangular plots, nominated \(\mathbb{T}_{\alpha}\), \(\mathbb{T}_{\beta}\), and \(\mathbb{T}_{\gamma}\), respectively. **(B)** Design variable characterization for \(\bar{\mathcal{D}}\) compared with random benchmark tests, marked in red and grey dots, respectively. The zoomed view describes the detailed differences between the two sets of characterization simulations, in which three sets of membrane topologies are selected and highlighted in red triangular plots, nominated \(\mathbb{T}_{\rm a}\), \(\mathbb{T}_{\rm b}\), and \(\mathbb{T}_{\rm c}\), respectively. **(C)** Extracted porous membranes' topologies (\(\mathbb{T}_{\alpha}\sim\mathbb{T}_{\gamma}\) & \(\mathbb{T}_{\rm a}\sim\mathbb{T}_{\rm c}\)) from characterizing both design variables \(N_{\rm unit}\) and \(\bar{\mathcal{D}}\), corresponding to the selections in subfigures **A** & **B**. **(D)** The characterization data matched against the GPR-reconstructed design spaces from both the EI and UCB acquisition functions. The black triangular dots are the characterizations informed by observation of the GPR-reconstructed design space toward the maximal value. The grey triangular dots are randomly selected test points to benchmark the observation-informed characterizations. For details please see the text.

Figure 6: The design space reconstruction (visualized in normalized values) and sampling density maps for lattice metamaterials **(A1\(\sim\)C1)** and 3D porous media **(A2\(\sim\)C2)**, updated by the EI acquisition function. The morphologies of the white dots are separated into three different batches. **(A1)** The reconstructed design space by the EI acquisition function. The first batch represents the first 100 iterations, visualized as small circular dots; the mid-100 iterations are visualized as square-shaped dots; and the last 100 iterations are visualized as large triangular dots.
**(B1)** Zoomed view toward the target design region from subfigure **A1**, where \(N_{\mathrm{unit}}\in[1,10]\) and \(\bar{\mathcal{D}}\times 100\in[30,50]\). The first batch represents the first 100 iterations, visualized as small circular dots; the mid-50 iterations are visualized as square-shaped dots; and the large triangular dots represent the remaining evaluations. For details please see the text. **(C1)** The normalized sampling density map for the EI acquisition function for the lattice metamaterials design case, visualizing the density of the choices of the design variables in the optimization processes. **(A2)** The reconstructed design space by the EI acquisition function. The first batch represents the first 300 iterations, visualized as small circular dots; the mid-100 iterations are visualized as square-shaped dots; and the last 100 iterations are visualized as large triangular dots. **(B2)** Zoomed view toward the target design region from subfigure **A2**, where \(N_{\mathrm{unit}}\in[1,10]\) and \(\bar{\mathcal{D}}\times 100\in[90,120]\). The first batch represents the first 300 iterations, visualized as small circular dots; the mid-50 iterations are visualized as square-shaped dots; and the large triangular dots represent the remaining evaluations. For details please see the text. **(C2)** The normalized sampling density map for the EI acquisition function for the 3D porous media design case, visualizing the design variables' densities in the optimization processes.

It can be observed from Figure 7**A** & **B** that the selected characterization tests generally capture the geometries with the highest objectives, where the blue and red dots exhibit higher values than the grey dots. Interestingly, for both porous materials cases, the topology corresponding to the highest objective value selected from the characterization tests for \(\bar{\mathcal{D}}\) (Figure 7**B**), \(\mathbb{T}_{\beta}\), is not the topology with the highest objective value when characterizing \(N_{\mathrm{unit}}\) (Figure 7**A**). This indicates that our observational guess toward the highest objective is not fully accurate; the characterization tests correct our initial guess and instead favor the porous structural topology \(\mathbb{T}_{\alpha}\). From Figure 7**D1** & **D2** we observe that the characterization tests generally match well with the GPR-approximated design space, indicating the effectiveness of the general data-driven design scheme. Nevertheless, comparing Figure 7**D1** and **D2**, the characterization tests match the GPR-approximated design space better for the lattice structures than for the non-convex porous materials. Figure 7**A**, **B**, & **D** together indicate the importance of additional qualitative characterization but also demonstrate the general accuracy of the GPR approximation.

Although the focus of this paper is on the examination of the optimization process for complex materials design cases, rather than simply proposing the designs from the optimizations, we still provide the eventual extracted optimal design for each case for reference. The objective values (\(\mathcal{N}_{\mathrm{bio}}\)), their corresponding design variables (\(N_{\mathrm{unit}}\) & \(\bar{\mathcal{D}}\)), and the transformed characteristic length \(\mathfrak{L}\) (in units of \(\mu\mathrm{m}\)) for all three cases, benchmarked against nonconfined pure biofilm growth in vacuum space, are shown in Table 1.
Interestingly and unexpectedly, it is observed that all the optimal designs extracted from porous-material-confined biofilm growth exhibit more bacteria cells in the target growth region than nonconfined biofilm growth in a vacuum space. The optimal designs of the 2D porous membrane, lattice metamaterials, and 3D porous media have 16%, 7%, and 11% more biofilms in the target growth region than the pure growth in the vacuum space, respectively. This confinement-induced biofilm growth may help us (1) better utilize biofilms as ELM and address the three points presented in the second paragraph in Section 1, and (2) potentially explain the natural phenomena described in the first paragraph in Section 1. We focus on this point in a further comparison study in the following Section 3.3. ### Biomechanics of Porous Transport Finally, we would like to answer Question (3) in Section 1. Figure 8 shows the benchmark study of biofilm growth in a porous membrane versus in vacuum space. We pick the case of a 2D porous membrane with \(N_{\mathrm{unit}}=6\) and a fixed \(\bar{\mathcal{D}}\) for comparison with biofilm growth in nonconfined vacuum space. Figure 8**A & B** visualize the snapshots of the biofilm growth simulations, where \(\tilde{\tau}\) stands for the iteration number (or time step), which can be converted to real-world time as \(t=10\times\tilde{\tau}\) [s]. Figure 8**C** visualizes the sliced view of the biofilm growth at \(\tilde{\tau}=12000\), to further explain confinement-induced biofilm growth. Figure 8**D** shows the change of the total bacteria cells \(\mathcal{N}_{\mathrm{bio}}^{\mathrm{total}}\) along the iterations \(\tilde{\tau}\), where the blue solid line stands for biofilm growth in nonconfined vacuum space and the red dashed line stands for biofilm growth in the porous membrane. We observe two key moments that distinguish the overall biofilm growth: the first moment is at \(\tilde{\tau}\approx 6000\), when the biofilm count in the vacuum space (blue solid line) exceeds that in the porous material (red dashed line); the second moment is at \(\tilde{\tau}\approx 13500\), when the biofilm count in the porous material (red dashed line) exceeds that in the vacuum space (blue solid line). The sliced views of the two moments (\(\tilde{\tau}=6000\) & \(\tilde{\tau}=13500\)) are visualized and indicated by shaded arrows. To quantitatively understand the mechanism of confinement-induced biofilm growth and transport, we compute the distribution of biofilm cell numbers along the Z-axis by counting through 100 slices at \(\tilde{\tau}=12000\) (detailed analysis can be found in the ESI of Ref. Zhai and Yeo (2022)) and visualize the results in Figure 8**E**, corresponding to Figure 8**C**. The blue bars indicate the accumulative bacteria counts for biofilm growth in vacuum space and the red bars indicate those of the porous materials. It can be observed from Figure 8**A & B** that the biofilms that grew through the porous materials are more densely compacted in the target growth region than those grown in the vacuum space. 
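The slice-based counting described above is straightforward to reproduce. Below is a minimal numpy sketch, assuming the bacteria cell Z-coordinates are available as an array (`z_coords` is a hypothetical name) and using the porous-layer bounds \(Z\in[12.5,37.5]~\mu\mathrm{m}\) quoted later in this section; the domain height of 75 \(\mu\mathrm{m}\) is an assumption for illustration only.

```python
import numpy as np

def z_distribution(z_coords, z_max=75.0, n_slices=100):
    """Count bacteria cells in n_slices equal-thickness slabs along Z.
    z_coords: 1D array of cell Z-positions in micrometers (hypothetical input)."""
    counts, edges = np.histogram(z_coords, bins=n_slices, range=(0.0, z_max))
    return counts, edges

def region_counts(z_coords, porous=(12.5, 37.5)):
    """Split the total count into the porous layer and the target growth
    region above it (region bounds are assumptions for illustration)."""
    z = np.asarray(z_coords)
    in_porous = int(np.sum((z >= porous[0]) & (z <= porous[1])))
    in_target = int(np.sum(z > porous[1]))
    return in_porous, in_target

# Example with toy stand-in data for two growth snapshots.
rng = np.random.default_rng(1)
z_vacuum = rng.normal(20.0, 10.0, 50000).clip(0, 75)
z_porous = rng.normal(30.0, 12.0, 50000).clip(0, 75)
print(region_counts(z_vacuum), region_counts(z_porous))
```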
From the sliced view in Figure 8**C**, we may hence propose a qualitative explanation for our observation: the existence of the porous material takes a certain \begin{table} \begin{tabular}{c|c c c c} & \(\mathcal{N}_{\mathrm{bio}}\) & \(N_{\mathrm{unit}}\) & \(\bar{\mathcal{D}}\) & \(\mathfrak{L}\) [\(\mu\mathrm{m}\)] \\ \hline 2D porous membrane & 32655 & 10 & 0.1 & 0.5 \\ & 32655 & 11 & 0.1 & 0.45 \\ Lattice metamaterials & 30096 & 1 & 0.5 & 25 \\ 3D porous media & 31152 & 7 & 1.1 & 0.71 \\ Vacuum space & 28086 & N/A & N/A & N/A \\ \end{tabular} \end{table} Table 1: The highest objective values and their corresponding design variables for the different porous materials design cases, with transformed characteristic lengths in units of \(\mu\mathrm{m}\). For the 2D porous membrane, the characteristic length is defined as \(\mathfrak{L}\equiv\mathcal{R}_{\mathrm{vac}}\). For lattice metamaterials, the characteristic length is defined as \(\mathfrak{L}\equiv\mathcal{R}_{\mathrm{vol}}\). For 3D porous media, the characteristic length is defined as \(\mathfrak{L}\equiv\mathcal{R}_{\mathrm{vol}}\). The “Vacuum space” row stands for the case where no porous material is defined on top of the initial bacteria cells, whose growth occurs in a nonconfined space. Figure 7: Design space characterization for the Gaussian process regression (GPR) reconstructed design space and topologies extraction from the characterization processes for both the lattice metamaterials and 3D porous media design optimization. **(A1)** Characterization of the design variable \(N_{\mathrm{unit}}\) with different fixed values of \(\bar{\mathcal{D}}\). Note that the blue circular dots correspond to the black triangular dots, and the grey circular dots correspond to the grey triangular dots, in subfigure **D1**. The blue and red circular dots are the characterization tests informed by qualitative observation of the GPR reconstructed design space to approximate the optimal design (i.e., the maximal point), and the grey dots are random tests used to benchmark our observation-informed characterization. The zoomed view shows the detailed differences between the two sets of characterization simulations, in which two sets of topologies are selected and highlighted in red triangular plots, denoted as \(\mathbb{T}_{\alpha}\) and \(\mathbb{T}_{\beta}\), respectively. **(B1)** Design variable characterization for \(\bar{\mathcal{D}}\) compared with random benchmark tests, marked in red and grey dots, respectively. The zoomed view shows the detailed differences between the two sets of characterization simulations, in which two sets of topologies are selected and highlighted in red triangular plots, denoted as \(\mathbb{T}_{\beta}\) and \(\mathbb{T}_{\gamma}\), respectively (\(\mathbb{T}_{\beta}\) is the same topology as in subfigure **A1**). **(C1)** Extracted porous structures’ topologies (\(\mathbb{T}_{\alpha}\sim\mathbb{T}_{\gamma}\)) from characterizing both the design variables \(N_{\mathrm{unit}}\) and \(\bar{\mathcal{D}}\), corresponding to the selections in subfigures **A1** & **B1**. **(D1)** The characterization data match the GPR reconstructed design space from the EI acquisition function. The black triangular dots are the characterizations informed by observation of the GPR reconstructed design space toward the maximal value. The grey triangular dots are randomly selected test points to benchmark the observation-informed characterizations. 
**(A2)** Characterization of the design variable \(N_{\mathrm{unit}}\) with different fixed values of \(\bar{\mathcal{D}}\). Visualization details are the same as in subfigure **A1**. **(B2)** Design variable characterization for \(\bar{\mathcal{D}}\) compared with random benchmark tests, marked in red and grey dots, respectively. Visualization details are the same as in subfigure **B1**, except there is no zoomed view since the objective \(\mathcal{N}_{\mathrm{bio}}\) already varies within a small range. **(C2)** Extracted porous structures’ topologies (\(\mathbb{T}_{\alpha}\sim\mathbb{T}_{\gamma}\)) from characterizing both the design variables \(N_{\mathrm{unit}}\) and \(\bar{\mathcal{D}}\), corresponding to the selections in subfigures **A2** & **B2**. **(D2)** The characterization data match the GPR reconstructed design space from the EI acquisition function. Visualization details are the same as in subfigure **D1**. For details, please see the text. Figure 8: Comparison study for a single 2D porous membrane against the vacuum biofilm growth case to unravel the biomechanics of porous-material-induced biofilm growth. **(A)** The snapshots of the simulation of biofilm growth in pure vacuum space, where \(\tilde{\tau}\) is the simulation iteration step and can be treated as pseudo-time. **(B)** The snapshots of the simulation of biofilm growth in the 2D porous membrane. **(C)** Sliced view of snapshot \(\tilde{\tau}=12000\) for both the 2D membrane and vacuum growth cases. **(D)** The accumulated bacteria cell numbers \(\mathcal{N}_{\rm bio}^{\rm total}\) along the iteration process, where the simulation snapshot at \(\tilde{\tau}=6000\) is indicated in the top left subfigure and \(\tilde{\tau}=13500\) is indicated in the bottom right subfigure. The solid blue line indicates the biofilm growth in vacuum space (without any porous materials) and the red dashed line indicates the biofilm growth in the 2D porous membrane for benchmarking. The zoomed view for \(\tilde{\tau}\in[12000,15000]\) is indicated in the right subfigure with a gradient-shaded background. **(E)** The bacteria cells’ spatial distribution along the perpendicular direction (Z axis) at \(\tilde{\tau}=12000\), where the cell numbers are counted based on 100 interval slices and visualized in bar plots. The blue bars indicate the vacuum space bacteria counts and the red bars indicate the bacteria counts in the 2D porous membrane. For details, see the text. amount of volume, which pushes the biofilm to grow upward to occupy more space. To break this process down in more detail, Figure 8**D** shows that after \(\tilde{\tau}\approx 6000\) the presence of the porous materials first suppresses the biofilm growth, as \(\mathcal{N}_{\rm bio}^{\rm total}\) for the vacuum space (solid blue line) initially increases nonlinearly to larger values than that of the porous materials (dashed red line). But after the biofilms are well grown into the target growth region (\(\tilde{\tau}\approx 13500\)), the pores in the porous materials can be treated as "channels" that enhance the growth and transport of biofilms. This finding is significant in the sense that the effects of porous materials on the overall growth of biofilms change across different stages of the growth process within the pores. 
Based on these comprehensive qualitative analyses, Figure 8**E** offers quantitative evidence that porous materials push the biofilms' upward growth by taking up volumetric space -- the biofilm accumulation within the porous material's spatial range (\(Z\in[12.5,37.5]~\mu\mathrm{m}\)) is evidently smaller for the porous materials (red bars) than for the vacuum space (blue bars). Based on the bacteria cell counts from 100 slices, the porous-region bacteria counts for the porous membrane and vacuum space are 48643 and 58482, respectively; that is, the vacuum space contains 20% more biofilms in this region than the growth constrained by the porous membrane. The target-growth-region bacteria counts for the porous membrane and vacuum space are 31404 and 13764, respectively; that is, the porous membrane case contains 128% more biofilms than the growth in the vacuum space. The data not only verify our qualitative explanation that the porous membrane facilitates biofilm growth by taking up volumetric space but also further explain how the porous membrane increases the overall biofilms -- the pores behave like channels that transport biofilms to the target region, so that the bacteria count in the target growth region for the porous membrane is significantly larger than that in the vacuum space. ## 4 Conclusions & Outlook In this paper, we present efforts to design different porous materials for enhanced biofilm transport and control from computational models using Bayesian optimization. We focus on characterizing the design optimization process, comprehensively analyzing the approximated design space, and further providing in-depth physical insights from the optimization. We formulate three different types of porous structural materials for design optimization, aiming to maximize the biofilms in the target growth region. For all three types of porous materials, the trends of the reconstructed design space match well with the sampling density. For the 2D porous membrane, the variance of the overall samples by the UCB acquisition function is 32.08% higher than that of the EI acquisition function; the mean objective of the overall samples by the EI acquisition function is 1.49% higher than that of the UCB acquisition function. Within the predefined target region of higher sampling density, estimated from the last 100 sampling points, the EI acquisition function is 2.35% more efficient than the UCB acquisition function relative to a uniformly distributed grid search. The GPR approximated design spaces match well with the selected characterization tests. Using only the EI acquisition function, we conduct the design space characterization for lattice metamaterials and porous media under the same procedure. For the lattice metamaterials, by looking at the last 100 samples in the predefined target design space, BO is 92.89% more efficient than the uniform grid search. For the 3D porous media, there are 223.04% more sampled points by BO than by the uniform grid search in the predefined target design space. We further provide the design variables of the selected optimal design for each porous materials formulation. Very interestingly, all the extracted optimal designs have more bacteria cells in the target growth region than pure biofilm growth in the vacuum space without any confinement. We conduct a comparison study to understand this phenomenon and find that, within the porous spatial region, there are 20% more biofilms for the vacuum space than for growth confined in the porous materials. 
Moreover, there are 128% more biofilms in the target growth region for the porous-material-confined biofilm growth compared with the vacuum space growth. We hence propose that the existence of porous materials stimulates the biofilms by taking up volumetric space and pushing the growth upward. Note that this has not been universally tested for all kinds of porous materials across all radius ranges, and testing the size effect of confinement-induced biofilm growth is left as future follow-up work. Our work is significant and innovative in three major aspects: (1) Implications and guidance for broad audiences. Our work could inspire theorists and programmers to develop new theories and algorithms for modeling biofilms and guide experimentalists to conduct new investigations. (2) Rigorous and comprehensive analysis of the optimization process and direct characterization of the design space. (3) Understanding the mechanism from both the optimization characterization and the computational modeling brings in new knowledge. From these three aspects, our work reaches a broad range of research areas spanning mechanics, materials, machine learning, biology, and the environmental sciences, among many other fields. This paper, to our knowledge, is the first work that utilizes ML as an optimization tool for characterizing the underlying mechanisms of confined biofilm dynamics using computational models. Our work is expected to unveil a new paradigm of conducting inverse design to inspire physics discovery by leveraging computational models, ML, and design optimization. ## Acknowledgement J.Y. acknowledges support from the US National Science Foundation (grant nos. 2038057 and 2223785) and the Cornell University faculty startup grant. The authors also acknowledge the computational resources provided by the XSEDE program under grants TG-MAT200004 and TG-BIO210063 and the computational resources provided by the G2 cluster from Cornell University. ## Appendix Supplementary Figure 1 visualizes the overall design processes for lattice metamaterials and 3D porous media, respectively (Figure 2 **B & C**). The upper subfigures (**1**) show the change of the objectives and the lower subfigures show the design variables' changes w.r.t. the iterations, similar to what has been shown in Figure 3. A converging process of the objective values is observed for the 3D porous media (**B1**), whereas the objectives fluctuate more for the lattice metamaterials (**A1**), which can be attributed to the nonconvex design space in Figure 6. For the lattice metamaterials, the design variables fluctuate along the iterations, where \(\bar{\mathcal{D}}\) is sampled toward higher values and \(N_{\mathrm{unit}}\) toward lower values (**A2**). For the 3D porous media, similar trends are observed; the difference is that the variables are initially sampled in a similar value range, and the discrepancy in the sampling trends begins to occur after approximately 300 iterations (**B2**).
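As a side note on the normalized sampling density maps referenced in Figure 6, such a map can be computed from the logged BO samples with a normalized 2D histogram. The sketch below is a minimal illustration under assumed variable bounds; `samples` is a hypothetical array of logged \((N_{\rm unit},\bar{\cal D})\) pairs, not the authors' data structure.

```python
import numpy as np

def sampling_density_map(samples, bounds=((1, 20), (0.1, 1.2)), bins=(20, 20)):
    """Normalized 2D histogram of BO samples over the design space.
    samples: (n, 2) array of (N_unit, D_bar) pairs logged during optimization."""
    hist, x_edges, y_edges = np.histogram2d(
        samples[:, 0], samples[:, 1], bins=bins, range=bounds)
    return hist / hist.max(), x_edges, y_edges  # normalize densities to [0, 1]

# Example with toy samples concentrated near a putative optimum.
rng = np.random.default_rng(2)
samples = np.column_stack([rng.normal(6, 3, 300).clip(1, 20),
                           rng.normal(0.3, 0.2, 300).clip(0.1, 1.2)])
density, _, _ = sampling_density_map(samples)
```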
2308.07863
StyleDiffusion: Controllable Disentangled Style Transfer via Diffusion Models
Content and style (C-S) disentanglement is a fundamental problem and critical challenge of style transfer. Existing approaches based on explicit definitions (e.g., Gram matrix) or implicit learning (e.g., GANs) are neither interpretable nor easy to control, resulting in entangled representations and less satisfying results. In this paper, we propose a new C-S disentangled framework for style transfer without using previous assumptions. The key insight is to explicitly extract the content information and implicitly learn the complementary style information, yielding interpretable and controllable C-S disentanglement and style transfer. A simple yet effective CLIP-based style disentanglement loss coordinated with a style reconstruction prior is introduced to disentangle C-S in the CLIP image space. By further leveraging the powerful style removal and generative ability of diffusion models, our framework achieves results superior to the state of the art, as well as flexible C-S disentanglement and trade-off control. Our work provides new insights into the C-S disentanglement in style transfer and demonstrates the potential of diffusion models for learning well-disentangled C-S characteristics.
Zhizhong Wang, Lei Zhao, Wei Xing
2023-08-15T16:30:49Z
http://arxiv.org/abs/2308.07863v1
# StyleDiffusion: Controllable Disentangled Style Transfer via Diffusion Models ###### Abstract Content and style (C-S) disentanglement is a fundamental problem and critical challenge of style transfer. Existing approaches based on explicit definitions (_e.g._, Gram matrix) or implicit learning (_e.g._, GANs) are neither interpretable nor easy to control, resulting in entangled representations and less satisfying results. In this paper, we propose a new C-S disentangled framework for style transfer without using previous assumptions. The key insight is to explicitly extract the content information and implicitly learn the complementary style information, yielding interpretable and controllable C-S disentanglement and style transfer. A simple yet effective CLIP-based style disentanglement loss coordinated with a style reconstruction prior is introduced to disentangle C-S in the CLIP image space. By further leveraging the powerful style removal and generative ability of diffusion models, our framework achieves results superior to the state of the art, as well as flexible C-S disentanglement and trade-off control. Our work provides new insights into the C-S disentanglement in style transfer and demonstrates the potential of diffusion models for learning well-disentangled C-S characteristics. ## 1 Introduction Given a reference style image, _e.g._, _Starry Night_ by Vincent Van Gogh, style transfer aims to transfer its artistic style, such as colors and brushstrokes, to an arbitrary content target. To achieve such a goal, it must first properly separate the style from the content and then transfer it to another content. This raises two fundamental challenges: (1) "how to disentangle content and style (C-S)" and (2) "how to transfer style to another content". To resolve these challenges, valuable efforts have been devoted. Gatys _et al_. [19] proposed _A Neural Algorithm of Artistic Style_ to achieve style transfer, which _explicitly_ defines the high-level features extracted from a pre-trained Convolutional Neural Network (CNN) (_e.g._, VGG [76]) as content, and the feature correlations (_i.e._, Gram matrix) as style. This approach achieves visually stunning results and has inspired a large number of successors [35, 30, 53, 1, 9]. Despite the successes, by diving into the essence of style transfer, we observed three problems with these approaches: (1) The C-S are not completely disentangled. Theoretically, the C-S representations are intertwined. For example, matching the content representation of an image may also match its Gram matrix, and vice versa. (2) What the CNN learns is a black box that is difficult to interpret [97], which makes the C-S definitions [19] uninterpretable and hard to control. (3) The transfer process is modeled as a separate optimization of content loss and style loss [19], so a deep understanding of the relationship between C-S is lacking. These problems usually lead to unbalanced stylizations and disharmonious artifacts [6], as will be shown in later Fig. 3. On the other hand, disentangled representation learning [27] provides other ideas to _implicitly_ disentangle C-S, either supervised [47, 37] or unsupervised [9, 98]. For style transfer, Kotovenko _et al_. [45] utilized a fixpoint triplet style loss and a disentanglement loss to enforce a GAN [21]-based framework to learn separate C-S representations in an unsupervised manner. 
Similarly, TPFR [79] learned to disentangle C-S in latent space via metric learning and two-stage peer-regularization, producing high-quality images even in the zero-shot setting. While these approaches successfully enforce properties "encouraged" by the corresponding losses, they still have three main problems: (1) Well-disentangled models seemingly cannot be identified without supervision [57, 70], which means the unsupervised learning [45, 79] may not achieve truly disentangled C-S, as will be shown in later Fig. 3. (2) These approaches are all based on GANs and thus often confined to the GAN pre-defined domains, _e.g._, a specific artist's style domain [75]. (3) The implicitly learned C-S representations are still black boxes that are hard to interpret and control [57]. Facing the challenges above, in this paper, we propose a new C-S disentangled framework for style transfer _without using previous assumptions_ such as Gram matrix [19] or GANs [45]. Our key insight stems from the fact that the definition of an image's style is much more complex than its content, _e.g_., we can easily identify the content of a painting by its structures, semantics, or shapes, but it is intractable to define the style [67, 22, 38, 87]. Therefore, we can bypass such a dilemma by _explicitly_ extracting the content information and _implicitly_ learning its _complementary_ style information. Since we strictly constrain style as the _complement_ of content, the C-S can be completely disentangled, and the control of disentanglement is thereby transformed into the control of content extraction. This achieves both controllability and interpretability. However, achieving plausible and controllable content extraction is also non-trivial because the contents extracted from the content images and style images should share the same content domain, and the details of the extracted contents should be easy to control. To this end, we resort to recently developed diffusion models [28, 78] and introduce a _diffusion-based style removal module_ to smoothly dispel the style information of the content and style images, extracting the domain-aligned content information. Moreover, owing to the strong generative capability of diffusion models, we also introduce a _diffusion-based style transfer module_ to better learn the disentangled style information of the style image and transfer it to the content image. The style disentanglement and transfer are encouraged via a simple yet effective _CLIP [68]-based style disentanglement loss_, which induces the transfer mapping of the content image's content to its stylization (_i.e_., the stylized result) to be aligned with that of the style image's content to its stylization (_i.e_., the style image itself) in the CLIP image space. By further coordinating with a _style reconstruction prior_, it achieves both generalized and faithful style transfer. We conduct comprehensive comparisons and an ablation study to demonstrate the effectiveness and superiority of our framework. With the well-disentangled C-S, it achieves very promising stylizations with fine style details, well-preserved contents, and a deep understanding of the relationship between C-S. In summary, our contributions are threefold: * We propose a novel C-S disentangled framework for style transfer, which achieves more interpretable and controllable C-S disentanglement and higher-quality stylized results. 
* We introduce diffusion models to our framework and demonstrate their effectiveness and superiority in controllable style removal and learning well-disentangled C-S characteristics. * A new CLIP-based style disentanglement loss coordinated with a style reconstruction prior is introduced to disentangle C-S in the CLIP image space. ## 2 Related Work **Neural Style Transfer (NST).** The pioneering work of Gatys [19] has opened the era of NST [34]. Since then, this task has experienced tremendous progress, including efficiency [35, 52, 90], quality [23, 89, 55, 10, 7, 1, 46, 83, 56, 6, 92, 32, 99, 12, 96, 84], generality [5, 30, 53, 65, 13, 33, 29, 85, 95, 59, 93], and diversity [80, 86, 88]. Despite these successes, the essence of these approaches is mostly based on the _explicitly_ defined C-S representations, such as the Gram matrix [19], which have several limitations as discussed in Sec. 1. In our work, we propose new disentangled C-S representations _explicitly_ extracted or _implicitly_ learned by diffusion models, achieving more effective style transfer and higher-quality results. **Disentangled Representation Learning (DRL).** The task of DRL [27] aims at modeling the factors of data variations [51]. Earlier works used labeled data to factorize representations in a supervised manner [37]. Recently, unsupervised settings have been widely explored [42], especially for disentangling style from content [98, 31, 51, 40, 91, 45, 66, 70, 8, 48]. However, due to the dependence on GANs [21], their C-S disentanglement is usually restricted to the GAN pre-defined domains (_e.g_., Van Gogh's style domain). Besides, disentanglement cannot be effectively achieved without providing sufficient data [57]. In contrast, our framework learns the disentangled style from a single style image, and the disentanglement can be easily achieved by providing only a few (\(\sim\)50) content images for training. **Diffusion Models.** Diffusion models [77] such as denoising diffusion probabilistic models (DDPMs) [28, 63] have recently shown great success in image generation [78, 14, 17], image manipulation [62, 2, 41], and text-conditional synthesis [64, 74, 69, 71, 24, 4, 54]. These works have demonstrated the power of diffusion models to achieve higher-quality results than other generative models like VAEs [81], auto-regressive models [16], flows [44], and GANs [39]. Inspired by them, we introduce a diffusion-based style removal module and a style transfer module in our framework. These modules can smoothly remove the style information of images and better learn to recover it, achieving higher-quality style transfer results. _To the best of our knowledge, our work is the first to introduce diffusion models to the field of neural style transfer_. ## 3 Background Denoising diffusion probabilistic models (DDPMs) [77, 28] are latent variable models that consist of two diffusion processes, _i.e_., a forward diffusion process and a reverse diffusion process. The forward process is a fixed Markov chain that sequentially produces a series of latents \(x_{1},...,x_{T}\) by gradually adding Gaussian noise at each timestep \(t\in[1,T]\): \[q(x_{t}|x_{t-1}):=\mathcal{N}(\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\mathbf{I}), \tag{1}\] where \(\beta_{t}\in(0,1)\) is a fixed variance schedule. 
An important property of the forward process is that given clean data \(x_{0}\), \(x_{t}\) can be directly sampled as: \[\begin{split} q(x_{t}|x_{0})&:=\mathcal{N}(\sqrt{\bar{ \alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})\mathbf{I}),\\ x_{t}&:=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{ \alpha}_{t}}\epsilon,\end{split} \tag{2}\] where \(\alpha_{t}:=1-\beta_{t}\) and \(\bar{\alpha}_{t}:=\prod_{s=0}^{t}\alpha_{s}\). Noise \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\) has the same dimensionality as data \(x_{0}\) and latent \(x_{t}\). The reverse process generates a reverse sequence by sampling the posteriors \(q(x_{t-1}|x_{t})\), starting from a Gaussian noise sample \(x_{T}\sim\mathcal{N}(0,\mathbf{I})\). However, since \(q(x_{t-1}|x_{t})\) is intractable, DDPMs learn parameterized Gaussian transitions \(p_{\theta}(x_{t-1}|x_{t})\) with a learned mean \(\mu_{\theta}(x_{t},t)\) and a fixed variance \(\sigma_{t}^{2}\mathbf{I}\)[28]: \[p_{\theta}(x_{t-1}|x_{t}):=\mathcal{N}(\mu_{\theta}(x_{t},t),\sigma_{t}^{2} \mathbf{I}), \tag{3}\] where \(\mu_{\theta}(x_{t},t)\) is a function of a noise approximator \(\epsilon_{\theta}(x_{t},t)\). Then, the reverse process can be expressed as: \[x_{t-1}:=\frac{1}{\sqrt{\alpha_{t}}}(x_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{ \alpha}_{t}}}\epsilon_{\theta}(x_{t},t))+\sigma_{t}\mathbf{z}, \tag{4}\] where \(\mathbf{z}\sim\mathcal{N}(0,\mathbf{I})\) is a standard Gaussian noise independent of \(x_{t}\). \(\epsilon_{\theta}(x_{t},t)\) is learned by a deep neural network [72] through optimizing the following loss: \[\min_{\theta}\parallel\epsilon_{\theta}(x_{t},t)-\epsilon\parallel^{2}. \tag{5}\] Later, instead of using the fixed variances, Nichol and Dhariwal [63] presented a strategy for learning the variances. Song _et al_. [78] proposed DDIM, which formulates an alternative non-Markovian noising process that has the same forward marginals as DDPM but allows a different reverse process: \[x_{t-1}:=\sqrt{\bar{\alpha}_{t-1}}f_{\theta}(x_{t},t)+\sqrt{1-\bar{\alpha}_{t -1}-\sigma_{t}^{2}}\epsilon_{\theta}(x_{t},t)+\sigma_{t}\mathbf{z}, \tag{6}\] where \(f_{\theta}(x_{t},t)\) is the predicted \(x_{0}\) at timestep \(t\) given \(x_{t}\) and \(\epsilon_{\theta}(x_{t},t)\): \[f_{\theta}(x_{t},t):=\frac{x_{t}-\sqrt{1-\bar{\alpha}_{t}}\epsilon_{\theta}(x _{t},t)}{\sqrt{\bar{\alpha}_{t}}}. \tag{7}\] Changing the choice of \(\sigma_{t}\) values in Eq. (6) can achieve different reverse processes. In particular, when \(\sigma_{t}=0\), which is called DDIM [78], the reverse process becomes a deterministic mapping from latents to images, which enables nearly perfect inversion [41]. Besides, it can also accelerate the reverse process with much fewer sampling steps [14, 41]. ## 4 Method Our task can be described as follows: given a style image \(I_{s}\) and an arbitrary content image \(I_{c}\), we want to first disentangle their content and style and then transfer the style of \(I_{s}\) to the content of \(I_{c}\). To do so, as stated in Sec. 1, our key idea is to explicitly extract the content information and then implicitly learn the _complementary_ style information. Since our framework is built upon diffusion models [28, 78], we dub it _StyleDiffusion_. Fig. 
1 shows the overview of our StyleDiffusion, which consists of three key ingredients: I) a diffusion-based style Figure 1: **Overview of our proposed StyleDiffusion.** The content image \(I_{c}\) and style image \(I_{s}\) are first fed into a diffusion-based style removal module to explicitly extract the domain-aligned content information. Then, the content of \(I_{c}\) is fed into a diffusion-based style transfer module to obtain the stylized result \(I_{cs}\). During training, we fine-tune the style transfer module via a CLIP-based style disentanglement loss \(\mathcal{L}_{SD}\) coordinated with a style reconstruction prior (see details in Sec. 4.3; we omit it here for brevity) to implicitly learn the disentangled style information of \(I_{s}\). removal module, II) a diffusion-based style transfer module, and III) a CLIP-based style disentanglement loss coordinated with a style reconstruction prior. In the following subsections, we will introduce each of them in detail. ### Style Removal Module The style removal module aims at removing the style information of the content and style images, explicitly extracting the domain-aligned content information. Any reasonable content extraction operation can be used, depending on how the users define the content. For instance, users may want to use the structural outline as the content, so they can extract the outlines [36, 94] here. However, as discussed in Sec. 1, one challenge is _controllability_ since the control of C-S disentanglement has been transformed into the control of content extraction. To this end, we introduce a diffusion-based style removal module to achieve both plausible and controllable content extraction. Given an input image, _e.g._, the style image \(I_{s}\), since the color is an integral part of style [50], our style removal module first removes its color by a commonly used ITU-R 601-2 luma transform [20]. The obtained grayscale image is denoted as \(I^{\prime}_{s}\). Then, we leverage a pre-trained diffusion model [14] \(\epsilon_{\theta}\) to remove the style details such as brushstrokes and textures of \(I^{\prime}_{s}\), extracting the content \(I^{c}_{s}\). The insight is that the pre-trained diffusion model can help eliminate the domain-specific characteristics of input images and align them to the pre-trained domain [11, 41]. We assume that images with different styles belong to different domains, but their contents should share the same domain. Therefore, we can pre-train the diffusion model on a surrogate domain, _e.g._, the photograph domain, and then use this domain to construct the contents of images. After pre-training, the diffusion model can convert the input images from diverse domains to the latents \(x\) via the forward process and then invert them to the photograph domain via the reverse process. In this way, the style characteristics can be ideally dispelled, leaving only the contents of the images. Specifically, in order to obtain the results with fewer sampling steps and ensure that the content structures of the input images can be well preserved, we adopt the deterministic DDIM [78] sampling as the reverse process (Eq. (8)), and the ODE approximation of its reversal [41] as the forward process (Eq. (9)): \[x_{t-1} =\sqrt{\bar{\alpha}_{t-1}}f_{\theta}(x_{t},t)+\sqrt{1-\bar{\alpha }_{t-1}}\epsilon_{\theta}(x_{t},t), \tag{8}\] \[x_{t+1} =\sqrt{\bar{\alpha}_{t+1}}f_{\theta}(x_{t},t)+\sqrt{1-\bar{\alpha }_{t+1}}\epsilon_{\theta}(x_{t},t), \tag{9}\] where \(f_{\theta}(x_{t},t)\) is defined in Eq. (7). 
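For concreteness, the deterministic DDIM reverse step (Eq. (8)) and the ODE approximation of its inversion (Eq. (9)) can be sketched as follows. This is a minimal PyTorch illustration, not the authors' released code: `eps_model` stands for the pre-trained noise predictor \(\epsilon_{\theta}\), and the linear \(\beta_t\) schedule is an assumed example.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # assumed variance schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative products of alpha_t

def predict_x0(eps_model, x_t, t):
    """Eq. (7): f_theta(x_t, t), the predicted clean image at timestep t."""
    eps = eps_model(x_t, t)
    a_t = alpha_bar[t]
    return (x_t - torch.sqrt(1.0 - a_t) * eps) / torch.sqrt(a_t), eps

def ddim_reverse_step(eps_model, x_t, t, t_prev):
    """Eq. (8): deterministic DDIM sampling step x_t -> x_{t_prev}."""
    f, eps = predict_x0(eps_model, x_t, t)
    a_prev = alpha_bar[t_prev]
    return torch.sqrt(a_prev) * f + torch.sqrt(1.0 - a_prev) * eps

def ddim_forward_step(eps_model, x_t, t, t_next):
    """Eq. (9): ODE approximation of the DDIM inversion x_t -> x_{t_next}."""
    f, eps = predict_x0(eps_model, x_t, t)
    a_next = alpha_bar[t_next]
    return torch.sqrt(a_next) * f + torch.sqrt(1.0 - a_next) * eps

def rgb_to_luma(img):
    """ITU-R 601-2 luma transform; img has shape (3, H, W) with values in [0, 1]."""
    l = 0.299 * img[0] + 0.587 * img[1] + 0.114 * img[2]
    return l.unsqueeze(0).repeat(3, 1, 1)  # keep 3 channels for the model input
```

In this sketch, style removal amounts to iterating `ddim_forward_step` up to the return step and `ddim_reverse_step` back down, so the degree of style removal is controlled by a single integer.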
The forward and reverse diffusion processes enable us to easily control the intensity of style removal by adjusting the return step \(T_{remov}\) (see details in later Sec. 5.1). With the increase of \(T_{remov}\), more style characteristics will be removed, and the main content structures are retained, as will be shown in later Sec. 5.3. Note that for content images that are photographs, the diffusion processes are optional1 since they are already within the pre-trained domain, and there is almost no style except the colors to be dispelled. The superiority of diffusion-based style removal over other operations, such as Auto-Encoder (AE) [53]-based style removal, can be found in _supplementary material (SM)_. Footnote 1: Unless otherwise specified, we do not use the diffusion processes for content images in order to better maintain the content structures. ### Style Transfer Module The style transfer module aims to learn the disentangled style information of the style image and transfer it to the content image. A common generative model such as an AE [30] could be used here. However, inspired by the recent great success of diffusion models [14, 41], we introduce a diffusion-based style transfer module, which can better learn the disentangled style information in our framework and achieve higher-quality and more flexible stylizations (see Sec. 5.3). Given a content image \(I_{c}\), let \(I^{c}_{c}\) denote the content of \(I_{c}\) extracted by the style removal module (Sec. 4.1). We first convert it to the latent \(x\) using a pre-trained diffusion model \(\epsilon_{\theta}\). Then, guided by a CLIP-based style disentanglement loss coordinated with a style reconstruction prior (Sec. 4.3), the _reverse process_ of the diffusion model is fine-tuned (\(\epsilon_{\theta}\rightarrow\epsilon_{\hat{\theta}}\)) to generate the stylized result \(I_{cs}\) referenced by the style image \(I_{s}\). Once the fine-tuning is completed, _any content image can be manipulated into the stylized result with the disentangled style of the style image \(I_{s}\)_. To make the training easier and more stable, we adopt the deterministic DDIM forward and reverse processes in Eq. (8) and Eq. (9) during the fine-tuning. However, at inference, the stochastic DDPM [28] forward process (Eq. (2)) can also be used directly to help obtain diverse results [86] (Sec. 5.3). ### Loss Functions and Fine-tuning Enforcing the style transfer module (Sec. 4.2) to learn and transfer the disentangled style information should address two key questions: (1) "how to regularize the learned style to be disentangled" and (2) "how to aptly transfer it to other contents". To answer these questions, we introduce a novel CLIP-based style disentanglement loss coordinated with a style reconstruction prior to train the networks. **CLIP-based Style Disentanglement Loss.** Let \(I^{c}_{c}\) and \(I^{c}_{s}\) be the respective contents of the content image \(I_{c}\) and the style image \(I_{s}\) extracted by the style removal module (Sec. 4.1). We aim to learn the disentangled style information of the style image \(I_{s}\)_complementary_ to its content \(I^{c}_{s}\). Therefore, a straightforward way to obtain the disentangled style information is a direct subtraction: \[D^{px}_{s}=I_{s}-I^{c}_{s}. \tag{10}\] However, the simple pixel differences do not contain meaningful semantic information and thus cannot achieve plausible results [19, 45]. 
To address this problem, we can formulate the disentanglement in a latent semantic space: \[D_{s}=E(I_{s})-E(I_{s}^{c}), \tag{11}\] where \(E\) is a well-pre-trained projector. Specifically, since \(I_{s}\) and \(I_{s}^{c}\) have similar contents but different styles, the projector \(E\) must have the ability to distinguish them in terms of the style characteristics. In other words, as we define that images with different styles belong to different domains, the projector \(E\) should be able to distinguish the domains of \(I_{s}\) and \(I_{s}^{c}\). Fortunately, inspired by the recent vision-language model CLIP [68] that encapsulates rich semantic knowledge of not only the photograph domain but also the artistic domain [18, 69, 49], we can use its image encoder as our projector \(E\) off the shelf. The open-domain CLIP space here serves as a good metric space to measure the "style distance" between content and its stylized result. This "style distance" thus can be interpreted as the disentangled style information. Note that here the style is implicitly defined as the _complement_ of content, which is fundamentally different from the Gram matrix [19] that is an explicit style definition independent of content (see comparisons in Sec. 5.3). The comparisons between CLIP space and other possible spaces can be found in _SM_. After obtaining the disentangled style information \(D_{s}\), the next question is how to properly transfer it to other contents. A possible solution is directly optimizing the L1 loss: \[\begin{split} D_{cs}&=E(I_{cs})-E(I_{c}^{c}),\\ \mathcal{L}_{SD}^{L1}&=\parallel D_{cs}-D_{s} \parallel,\end{split} \tag{12}\] where \(I_{cs}\) is the stylized result and \(D_{cs}\) is the disentangled style information of \(I_{cs}\). However, as illustrated in Fig. 2 (a) and further validated in later Sec. 5.3, minimizing the L1 loss cannot guarantee the stylized result \(I_{cs}\) is within the style domain of the style image \(I_{s}\). This is because the L1 loss only minimizes the absolute element-wise difference (_i.e._, Manhattan distance); thus, it may produce stylized images that satisfy the Manhattan distance but deviate from the target style domain in the transfer direction. Besides, it may also lead to a collapse problem where a single stylized output satisfies the same Manhattan distance for different contents in the latent space. To address these problems, we can further constrain the disentangled directions as follows: \[\mathcal{L}_{SD}^{dir}=1-\frac{D_{cs}\cdot D_{s}}{\parallel D_{cs}\parallel \parallel D_{s}\parallel}. \tag{13}\] This direction loss aligns the transfer direction of the content image's content to its stylization (_i.e_., the stylized result) with the direction of the style image's content to its stylization (_i.e_., the style image itself), as illustrated in Fig. 2 (b). Combined with this loss, the L1 loss \(\mathcal{L}_{SD}^{L1}\) can achieve accurate one-to-one mappings from contents in the content domain to their stylizations in the style domain, as illustrated in Fig. 2 (c). Finally, our style disentanglement loss is defined as a compound of \(\mathcal{L}_{SD}^{L1}\) and \(\mathcal{L}_{SD}^{dir}\): \[\mathcal{L}_{SD}=\lambda_{L1}\mathcal{L}_{SD}^{L1}+\lambda_{dir}\mathcal{L}_{ SD}^{dir}, \tag{14}\] where \(\lambda_{L1}\) and \(\lambda_{dir}\) are hyper-parameters set to 10 and 1 in our experiments. 
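A compact way to read Eqs. (11)-(14) is as the following PyTorch sketch. It is an illustration under assumptions, not the authors' implementation: `clip_encode` stands for a frozen CLIP image encoder (e.g., the `encode_image` method of an open-source CLIP model), and the mean reduction over the batch is a simplification chosen here.

```python
import torch
import torch.nn.functional as F

def style_disentanglement_loss(clip_encode, I_cs, I_c_content, I_s, I_s_content,
                               lambda_l1=10.0, lambda_dir=1.0):
    """CLIP-space style disentanglement loss, Eqs. (11)-(14).
    clip_encode: frozen CLIP image encoder mapping a batch of images to
    embedding vectors (an assumed callable)."""
    D_cs = clip_encode(I_cs) - clip_encode(I_c_content)  # Eq. (12): style of result
    D_s = clip_encode(I_s) - clip_encode(I_s_content)    # Eq. (11): style of reference
    loss_l1 = (D_cs - D_s).abs().mean()                  # L1 term of Eq. (12)
    loss_dir = 1.0 - F.cosine_similarity(D_cs, D_s, dim=-1).mean()  # Eq. (13)
    return lambda_l1 * loss_l1 + lambda_dir * loss_dir   # Eq. (14)
```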
Since our style information is induced by the difference between content and its stylized result, we can deeply understand the relationship between C-S through learning. As a result, the style can be naturally and harmoniously transferred to the content, leading to better stylized images, as will be shown in later Fig. 3. **Style Reconstruction Prior.** To fully use the prior information provided by the style image and further elevate the stylization effects, we integrate a style reconstruction prior into the fine-tuning of the style transfer module. Intuitively, given the content \(I_{s}^{c}\) of the style image \(I_{s}\), the style transfer module should be capable of recovering the original style image from it as much as possible. Therefore, we can define Figure 2: **Illustration of different loss functions** to transfer the disentangled style information. (a) L1 loss cannot guarantee the stylized results are within the style domain and may suffer from a collapse problem. (b) Direction loss aligns the disentangled directions but cannot realize accurate mappings. (c) Combining L1 loss and direction loss is able to achieve accurate one-to-one mappings from the content domain to the style domain. a style reconstruction loss as follows: \[\mathcal{L}_{SR}=\parallel I_{ss}-I_{s}\parallel, \tag{15}\] where \(I_{ss}\) is the stylized result given \(I_{s}^{c}\) as content. We optimize it separately before optimizing the style disentanglement loss \(\mathcal{L}_{SD}\). The detailed fine-tuning procedure can be found in _SM_. The style reconstruction prior helps our model recover the style information more sufficiently. It also provides a good initialization for the optimization of \(\mathcal{L}_{SD}\), which helps the latter give full play to its ability, thus producing higher-quality results (see later Sec. 5.3). ## 5 Experimental Results ### Implementation Details We use the ADM diffusion model [14] pre-trained on ImageNet [73] and adopt a fast sampling strategy [41]. Specifically, instead of sequentially conducting the diffusion processes until the last timestep \(T\) (_e.g_., 1000), we accelerate them by performing up to \(T_{\{\cdot\}}<T\) (called the return step), _i.e_., \(T_{remov}=601\) for style removal and \(T_{trans}=301\) for style transfer. Moreover, as suggested by [41], we further accelerate the forward and reverse processes with fewer discretization steps, _i.e_., \((S_{for},S_{rev})=(40,40)\) (\(S_{for}\) for the forward process and \(S_{rev}\) for the reverse process) for style removal, and \((S_{for},S_{rev})=(40,6)\) for style transfer. When fine-tuning or at inference, we can adjust \(T_{remov}\) or \(T_{trans}\) to flexibly control the degree of style removal and C-S disentanglement, as will be shown in Sec. 5.3. To fine-tune the model for a target style image, we randomly sample 50 images from ImageNet as the content images. We use the Adam optimizer [43] with an initial learning rate of 4e-6 and increase it linearly by 1.2 per epoch. All models are fine-tuned for 5 epochs. See more details in _SM_. ### Comparisons with Prior Arts We compare our StyleDiffusion against ten state-of-the-art (SOTA) methods [19, 95, 12, 1, 56, 6, 13, 79, 35, 55]. For fair comparisons, all these methods are fine-tuned or trained on the target styles similarly to our approach. **Qualitative Comparisons.** As can be observed in Fig. 3, due to the entangling of C-S representations, Gatys [19] Figure 3: **Qualitative comparisons** with state of the art. Zoom-in for better comparison. Please see more in _SM_. 
and EFDM [95] often produce unsatisfying results with distorted contents (_e.g_., rows 1-3) and messy textures (_e.g_., rows 4-8). StyTr\({}^{2}\)[12] and ArtFlow [1] improve the results by adopting more advanced networks [82, 44], but they may still produce inferior results with halo boundaries (_e.g_., rows 2-3) or dirty artifacts (_e.g_., rows 4-6). AdaAttN [56] performs per-point attentive normalization to preserve the content structures better, but the stylization effects may be degraded in some cases (_e.g_., rows 1, 2, 4, and 5). IECAST [6] utilizes contrastive learning and external learning for style transfer, so fine-tuning it on a single style image leads to degraded results. MAST [13] uses multi-adaptation networks to disentangle C-S. However, since it still relies on the C-S representations of [19], the results usually exhibit messy textures and conspicuous artifacts. TPFR [79] is a GAN-based framework that learns to disentangle C-S in latent space. As the results show, it cannot recover correct style details and often generates deviated stylizations, which signifies that it may not learn truly disentangled C-S representations [57]. Like our method, Johnson [35] and LapStyle [55] also train separate models for each style. However, due to the trade-off between the C-S losses of [19], they may produce less-stylized results or introduce unnatural patterns (_e.g_., rows 1-6). By contrast, our StyleDiffusion completely disentangles C-S based on diffusion models. Therefore, it can generate high-quality results with sufficient style details (_e.g_., rows 1-4) and well-preserved contents (_e.g_., rows 5-8). Compared with the previous methods that tend to produce mixed results of content and style, our approach can better consider the relationship between them. Thus, the stylizations are more natural and harmonious, especially for challenging styles such as cubism (_e.g_., row 2) and oil painting (_e.g_., rows 1, 3, 4, and 5). **Quantitative Comparisons.** We also resort to quantitative metrics to better evaluate our method, as shown in Tab. 1. We collect 32 content and 12 style images to synthesize 384 stylized results and compute the average Structural Similarity Index (SSIM) [1] to assess the content similarity. To evaluate the style similarity, we calculate the CLIP image similarity score [68] and Style Loss [19, 30] between the style images and the corresponding stylized results. As shown in Tab. 1, our method obtains the highest SSIM and CLIP Score, while the Style Loss is relatively higher than for other methods. This is because these methods are directly trained to optimize Style Loss. Nevertheless, the Style Loss achieved by our method is still comparable, and it is lower than that of the GAN-based TPFR [79]. Furthermore, it is noteworthy that our method can also incorporate Style Loss to enhance the performance in this regard (see later Sec. 5.3). **User Study.** As style transfer is highly subjective and CLIP Score and Style Loss are biased toward the training objective, we additionally resort to a user study to evaluate the style similarity and overall stylization quality. We randomly select 50 C-S pairs for each user. Given each C-S pair, we show the stylized results generated by our method and a randomly selected SOTA method side by side in random order. The users are asked to choose (1) which result transfers the style patterns better and (2) which result has overall better stylization effects. 
We obtain 1000 votes for each question from 20 users and show the percentage of votes in which existing methods are preferred to ours in Tab. 1. Lower numbers indicate that our method is preferred over the competitors. As the results show, our method is superior to others in both style consistency and overall quality. **Efficiency.** As shown in the bottom two rows of Tab. 1, our approach requires less training time than others as it is fine-tuned on only a few (\(\sim\)50) content images. When testing, our approach is faster than the optimization-based method Gatys [19], albeit slower than the remaining feed-forward methods due to the utilization of diffusion models. We discuss this in later Sec. 6, and more timing and resource details can be found in _SM_. ### Ablation Study **Control of C-S Disentanglement.** A prominent advantage of our StyleDiffusion is that we can flexibly control the C-S disentanglement by adjusting the content extraction of the style removal module (Sec. 4.1). Fig. 4 demonstrates the continuous control achieved by adjusting the return step \(T_{remov}\) of the style removal module. As shown in the top row, with the increase of \(T_{remov}\), more style characteristics are dispelled, and the main content structures are retained. Correspondingly, when more style is removed in the \begin{table} \begin{tabular}{c|c c c c c c c c c|c c c} & **Ours** & Gatys & EFDM & StyTr\({}^{2}\) & ArtFlow & AdaAttN & IECAST & MAST & TPFR & Johnson & LapStyle \\ \hline SSIM \(\uparrow\) & **0.672** & 0.311 & 0.316 & 0.537 & 0.501 & 0.542 & 0.365 & 0.392 & 0.536 & 0.634 & 0.657 \\ CLIP Score \(\uparrow\) & **0.741** & 0.677 & 0.607 & 0.531 & 0.546 & 0.577 & 0.646 & 0.590 & 0.644 & 0.537 & 0.595 \\ Style Loss \(\downarrow\) & 0.837 & **0.111** & 0.178 & 0.216 & 0.258 & 0.310 & 0.284 & 0.229 & 0.989 & 0.364 & 0.274 \\ \hline User & Style & - & 43.1\(\%\) & 41.2\(\%\) & 39.3\(\%\) & 36.4\(\%\) & 37.2\(\%\) & 33.8\(\%\) & 39.1\(\%\) & 14.5\(\%\) & 42.8\(\%\) & 47.3\(\%\) \\ Study & Overall & - & 26.0\(\%\) & 38.1\(\%\) & 44.0\(\%\) & 34.2\(\%\) & 43.9\(\%\) & 32.7\(\%\) & 32.2\(\%\) & 22.6\(\%\) & 43.4\(\%\) & 46.2\(\%\) \\ \hline Training Time/h & \(\sim\)0.4 & - & \(\sim\)3 & \(\sim\)4 & \(\sim\)3 & \(\sim\)3 & \(\sim\)3 & \(\sim\)3 & \(\sim\)10 & \(\sim\)1 & \(\sim\)3 \\ Testing Time/s & 5.612 & 10.165 & 0.028 & 0.168 & 0.204 & 0.076 & 0.034 & 0.066 & 0.302 & 0.015 & 0.008 \\ \hline \end{tabular} \end{table} Table 1: **Quantitative comparisons** with state of the art. The training/testing time is measured with an Nvidia Tesla A100 GPU, and the testing time is averaged on images of size 512\(\times\)512 pixels. \(\uparrow\): Higher is better. \(\downarrow\): Lower is better. top row, it will be aptly transferred to the stylized results in the bottom row, _e.g_., the twisted brushstrokes and the star patterns. It validates that our method successfully separates style from content in a controllable manner and properly transfers it to other contents. Moreover, the flexible C-S disentanglement also makes our StyleDiffusion versatile for other tasks, such as photo-realistic style transfer (see _SM_). **Superiority of Diffusion-based Style Transfer.** Although our style transfer module is not limited to the diffusion model, using it offers three main advantages: **(1)**_Flexible C-S trade-off control_. As shown in Fig. 
5, we can flexibly control the C-S trade-off at both the training stage (top row) and the testing stage (bottom row) by adjusting the return step \(T_{trans}\) of the diffusion model. With the increase of \(T_{trans}\), more style characteristics are transferred, yet the content structures may be ruined (_e.g_., the last column). When a proper \(T_{trans}\) is adopted, _e.g_., \(T_{trans}=301\), the sweet spot can be well achieved. Interestingly, as shown in the last two columns of the bottom row, though the model is trained with \(T_{trans}=301\), we can extrapolate the style by using a larger \(T_{trans}\) (_e.g_., 401) at the testing stage (but the results may be degraded when using a too-large \(T_{trans}\), _e.g_., 601). It provides a very flexible way for users to adjust the results according to their preferences. This property, however, cannot be simply achieved by using other models, _e.g_., the widely used AEs [30, 53], since our framework does not involve any feature transforms [30, 53] or C-S losses trade-off [3]. **(2)**_Higher-quality stylizations._ Owing to the strong generative ability of the diffusion model, it can achieve higher-quality stylizations than other models. For comparison, we use the pre-trained VGG-AE [30, 49] as the style transfer module and fine-tune its decoder network for each style. As shown in column (b) of Fig. 6, though the results are still acceptable, they may exhibit distorted contents and inferior textures, clearly worse than the results generated by the diffusion model in column (a). This is also validated by the bottom quantitative scores. It signifies that the diffusion model can better learn the disentangled content and style characteristics in our framework, helping produce better style transfer results. **(3)**_Diversified style transfer._ As mentioned in Sec. 4.2, during inference, we can directly adopt the stochastic DDPM [28] forward process (Eq. (2)) to obtain diverse results (see _SM_). The diverse results can give users endless choices to obtain more satisfactory results. However, this cannot be easily achieved when using other models such as AEs in our framework [86]. **Loss Analyses.** To verify the effectiveness of each loss term used for fine-tuning our StyleDiffusion, we present ablation study results in Fig. 7 (a-d). **(1)** Using the L1 loss \(\mathcal{L}_{SD}^{L1}\) successfully transfers the cubism style, such as the blocky patterns in the top row, but the colors stray from the style images, especially in the bottom row. This is consistent with our earlier analyses in Sec. 4.3 that the L1 loss is prone to produce implausible results outside the style domain. **(2)** Adding the direction loss \(\mathcal{L}_{SD}^{dir}\) helps pull the results closer to the style domain. The textures are enhanced in the top row, and the colors are more plausible in the top and bottom rows. **(3)** By further coordinating with the style reconstruction prior \(\mathcal{L}_{SR}\), the stylization effects are significantly elevated where the style information is recovered more su Figure 4: **Control of C-S disentanglement** by adjusting the return step \(T_{remov}\) of the _style removal module_. The top row shows the extracted contents of the style image. The bottom row shows the corresponding stylized results. * denotes our default setting. Zoom-in for better comparison. _See SM for quantitative analyses._ Figure 5: **Control of C-S trade-off** by adjusting the return step \(T_{trans}\) of the _style transfer module_. 
The top row shows adjusting \(T_{trans}\) at the **training** stage while fixing \(T_{trans}=301\) at the testing stage. The bottom row shows adjusting \(T_{trans}\) at the **testing** stage while fixing \(T_{trans}=301\) at the training stage. * denotes our default setting. Zoom-in for better comparison. _See SM for quantitative analyses._ Figure 6: **Diffusion-based vs. AE-based style transfer.** ficiently. It may be because it provides a good initialization for the optimization of \(\mathcal{L}_{SD}^{L1}\) and \(\mathcal{L}_{SD}^{dir}\), which helps them give full play to their abilities. As verified in Fig. 7 (d), using the style reconstruction alone cannot learn meaningful style patterns except for basic colors. All the above analyses are also supported by the bottom quantitative scores. **Comparison with Gram Loss.** To further verify the superiority of our proposed losses, we replace them with the widely used Gram Loss [19, 30] in Fig. 7 (e-f). As can be observed, Gram Loss destroys the content structures severely, _e.g_., the zebra head in the top row and the enlarged area in the bottom row. This is because it does not disentangle C-S and only matches the global statistics without considering the relationship between C-S. In contrast, our losses focus on learning the disentangled style information apart from the content, which is induced by the difference between content and its stylized result. Therefore, they can better understand the relationship between C-S, achieving more satisfactory results with fine style details and better-preserved contents, as validated by Fig. 7 (c) and the bottom quantitative scores. Furthermore, we also conduct comparisons between our proposed losses and Gram Loss [19, 30] on the AE baseline [30, 49] to eliminate the impact of diffusion models. As shown in Fig. 8 (a-b), our losses can achieve more satisfactory results than Gram Loss, which is consistent with the results in Fig. 7. Moreover, as shown in Fig. 8 (c), they can also be combined with Gram Loss to improve the performance on the Style Loss metric. However, it may affect the full disentanglement of C-S in our framework, which strays from our target and decreases the content preservation (see SSIM score in Fig. 8 (c)). Therefore, we do not incorporate Gram Loss in our framework by default. ## 6 Conclusion and Limitation In this work, we present a new framework for more interpretable and controllable C-S disentanglement and style transfer. Our framework, termed _StyleDiffusion_, leverages diffusion models to explicitly extract the content information and implicitly learn the complementary style information. A novel CLIP-based style disentanglement loss coordinated with a style reconstruction prior is also introduced to encourage the disentanglement and style transfer. Our method yields very encouraging stylizations, especially for challenging styles, and the experimental results verify its effectiveness and superiority over the state of the art. Currently, the framework still suffers from several limitations: (1) The model needs to be fine-tuned for each style, and arbitrary style transfer is left to our future work. (2) Efficiency is limited due to the use of diffusion models. Further research in accelerating diffusion sampling would be helpful. (3) There are some failure cases analyzed in _SM_, which may help inspire future improvements. 
Moreover, our framework may also be applied to other image translation [31] or manipulation [66] tasks, and we would like to explore them in our future work. **Acknowledgments.** We thank Zeyi Huang and Xiaoting Zhang for their insightful discussions and suggestions. This work was supported in part by the National Program of China (2020YFC1523201, 62172365, 19ZDA197), Zhejiang Elite Program (2022C01222), and Key Technologies and Product Research and Development Projects for Cultural Relics Protection and Trading Circulation. Figure 7: **Ablation study on loss functions.** * denotes our full model. Zoom-in for better comparison. Figure 8: **More loss function ablation study** on the AE baseline.
2306.11575
A Hunt for Magnetic Signatures of Hidden-Photon and Axion Dark Matter in the Wilderness
Earth can act as a transducer to convert ultralight bosonic dark matter (axions and hidden photons) into an oscillating magnetic field with a characteristic pattern across its surface. Here we describe the first results of a dedicated experiment, the Search for Non-Interacting Particles Experimental Hunt (SNIPE Hunt), that aims to detect such dark-matter-induced magnetic-field patterns by performing correlated measurements with a network of magnetometers in relatively quiet magnetic environments (in the wilderness far from human-generated magnetic noise). Our experiment constrains parameter space describing hidden-photon and axion dark matter with Compton frequencies in the 0.5-5.0 Hz range. Limits on the kinetic-mixing parameter for hidden-photon dark matter represent the best experimental bounds to date in this frequency range.
Ibrahim A. Sulai, Saarik Kalia, Ariel Arza, Itay M. Bloch, Eduardo Castro Muñoz, Christopher Fabian, Michael A. Fedderke, Madison Forseth, Brian Garthwaite, Peter W. Graham, Will Griffith, Erik Helgren, Andres Interiano-Alvarado, Brittany Karki, Abaz Kryemadhi, Andre Li, Ehsanullah Nikfar, Jason E. Stalnaker, Yicheng Wang, Derek F. Jackson Kimball
2023-06-20T14:43:43Z
http://arxiv.org/abs/2306.11575v2
# A Hunt for Magnetic Signatures of Hidden-Photon and Axion Dark Matter in the Wilderness ###### Abstract Earth can act as a transducer to convert ultralight bosonic dark matter (axions and hidden photons) into an oscillating magnetic field with a characteristic pattern across its surface. Here we describe the first results of a dedicated experiment, the Search for Non-Interacting Particles Experimental Hunt (SNIPE Hunt), that aims to detect such dark-matter-induced magnetic-field patterns by performing correlated measurements with a network of magnetometers in relatively quiet magnetic environments (in the wilderness far from human-generated magnetic noise). Our experiment constrains parameter space describing hidden-photon and axion dark matter with Compton frequencies in the 0.5-5.0 Hz range. Limits on the kinetic-mixing parameter for hidden-photon dark matter represent the best experimental bounds to date in this frequency range. ## I Introduction Understanding the nature of dark matter is of paramount importance to astrophysics, cosmology, and particle physics. A well-motivated hypothesis is that the dark matter consists of ultralight bosons (masses \(\ll 1\) eV/\(c^{2}\)) such as hidden photons, axions, or axion-like particles (ALPs) [1; 2; 3]. If ultralight bosons are the dark matter, under reasonable assumptions1 the ensemble of virialized bosons constituting the dark matter halo has extremely large mode-occupation numbers and can be well described as a stochastic classical field [8; 9; 10; 11; 12]. Footnote 1: Here we assume models where the self-interactions among the bosons are sufficiently feeble that they do not collapse into large composite structures (such as boson stars [4]). Therefore, the bosons can be treated as an ensemble of independent particles described by the standard halo model (SHM) of dark matter [5; 6; 7]. Ultralight bosonic fields can couple to Standard Model particles through various "portals" [13; 14], one of which is the interaction between the ultralight bosonic dark matter (UBDM) and the electromagnetic field. Several ongoing laboratory experiments employ sensitive magnetometers located within controlled magnetic environments to search for electromagnetic signatures of UBDM; see, for example, Refs. [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. As noted in Refs. [26; 27; 28], the conceptual framework for UBDM-to-photon conversion upon which these aforementioned laboratory searches are based also applies to Earth as a whole. For hidden-photon dark matter, the non-conducting atmosphere sandwiched between the conductive Earth interior and the ionosphere acts as a transducer to convert the hidden photon field into a real magnetic field, just as laboratory-scale shields act as transducers in lumped-element or resonant-cavity experiments [23; 24; 25]. For axion dark matter, Earth's geomagnetic field causes axion-to-photon conversion via the inverse Primakoff effect [29; 30], playing the role of the applied magnetic field in laboratory-scale axion haloscope experiments [15; 16; 17; 18; 19; 20; 21; 22]. Thus, unshielded magnetometers can be used to search for ambient oscillating magnetic fields generated by UBDM. In this paper we describe initial results of the "Search for Non-Interacting Particles Experimental Hunt" (SNIPE Hunt [31]): a campaign to search for axion and hidden-photon dark matter using magnetometers located in the "wilderness" (away from the high levels of magnetic noise associated with urban environments [32; 33]).
This work extends to higher axion/hidden-photon Compton frequencies (covering the range from 0.5-5 Hz) than earlier analyses of archival data from the SuperMAG network of magnetometers [34; 35; 36] published in Refs. [27; 28]. In this frequency range, the dominant magnetic field noise sources are anthropogenic [37], so we anticipate that the sensitivity to UBDM can be drastically enhanced by measuring in a remote location. The rest of this paper is structured as follows. Section II reviews the model developed in Refs. [26; 28] to predict the global magnetic field patterns induced by hidden-photon and axion dark matter and used to interpret our data. In Sec. III, we discuss the experimental setup for the magnetometers that measured the magnetic fields at three different locations in July 2022 as well as the time and frequency characteristics of the acquired data. In Sec. IV, the data analysis procedure is described, which is closely based on that presented in Refs. [27; 28]. Section IV is subdivided into one subsection on the hidden-photon dark-matter analysis and another on the axion dark-matter analysis; in both cases no evidence of a dark-matter-induced magnetic signal was discovered, so each subsection concludes by summarizing the constraints obtained on relevant parameters. In Sec. V, we summarize the next steps for the SNIPE Hunt research program, namely developing and carrying out an experiment for higher Compton frequencies with more sensitive magnetometers. Finally, in our conclusion we summarize results and compare them to other experiments and observational limits. ## II Dark-matter signal First, we review relevant features of the theory motivating our hidden-photon dark-matter search. The hidden photon is associated with an additional \(U(1)\) symmetry, beyond that corresponding to electromagnetism, which is a common feature of beyond-the-Standard-Model theories, such as string theory [38]. In our case, we are interested in hidden photons that kinetically mix with ordinary photons [39]. This allows hidden and ordinary photons to interconvert via a phenomenon akin to neutrino mixing [40]; i.e., the mass (propagation) and interaction eigenstates are misaligned. Hidden photons possess a non-zero mass \(m_{A^{\prime}}\) and can be generated in the early universe (see, for example, Refs. [41; 42; 43; 44]), which means that they have the right characteristics to be wave-like dark matter [45]. A useful way to understand the impact of the existence of hidden-photon dark matter on electrodynamics is to write the Lagrangian describing real and hidden photons in the "interaction" basis [26; 24]:3 Footnote 3: Throughout, we use natural units where \(\hbar=c=1\). \[\mathcal{L}\supset-\frac{1}{4}\Big{[}F_{\mu\nu}F^{\mu\nu}+\left(F^{\prime} \right)_{\mu\nu}\left(F^{\prime}\right)^{\mu\nu}\Big{]}+\frac{1}{2}m_{A^{ \prime}}^{2}\left(A^{\prime}\right)_{\mu}\left(A^{\prime}\right)^{\mu}+ \varepsilon m_{A^{\prime}}^{2}\left(A^{\prime}\right)^{\mu}A_{\mu}-J_{\rm EM}^ {\mu}A_{\mu}\;, \tag{1}\] where only terms up to first order in the kinetic mixing parameter \(\varepsilon\ll 1\) are retained. In Eq. 
(1), \(F_{\mu\nu}\) is the field-strength tensor for the "interacting" mode of the electromagnetic field that couples to charges, \(\left(F^{\prime}\right)_{\mu\nu}\) is the field-strength tensor for the "sterile" mode that does not interact with charges, \(A_{\mu}\) is the four-potential for the interacting mode, \(\left(A^{\prime}\right)_{\mu}\) is the four-potential for the sterile mode, and \(J_{\rm EM}^{\mu}\) is the electromagnetic four-current density. In our case of interest, the hidden-photon dark-matter field in the vicinity of Earth is a coherently oscillating vector field with random polarization:4 Footnote 4: In this work, we assume that both the hidden-photon phase and its polarization state randomize on the coherence timescale. It is also possible, depending on the production mechanism and subsequent structure-formation processing, that the hidden-photon polarization state could be fixed in inertial space; see, e.g., the discussions in Refs. [46; 47]. We do not explicitly consider this case in this work; a closely related, but different, analysis would need to be undertaken. However, absent accidental geometrical cancellations that are made unlikely by virtue of the length of the data-taking period compared to Earth’s sidereal rotational period and the widely separated geographical locations of the magnetic-field stations on which we report, limits in that case are expected to be of the same order of magnitude as those we obtain. \[\mathbf{A}^{\prime}(\mathbf{r},t)\approx\frac{\sqrt{2\rho_{\rm DM}}}{m_{A^{\prime}}}e ^{-im_{A^{\prime}}t}\sum_{i=1}^{3}\xi_{i}(\mathbf{r},t)\mathbf{\hat{n}}_{i}e^{i\phi_{ i}(\mathbf{r},t)}\;, \tag{2}\] where \(\mathbf{A}^{\prime}\) is the sterile vector potential, \(\rho_{\rm DM}\approx 0.3\) GeV/cm\({}^{3}\) is the local dark-matter density [48], \(\mathbf{\hat{n}}_{i}\) are a set of orthonormal unit vectors, \(\xi_{i}(\mathbf{r},t)\) are slowly varying \(\mathcal{O}(1)\) amplitudes, and \(\phi_{i}(\mathbf{r},t)\) are slowly varying random phases. Both the amplitudes \(\xi_{i}(\mathbf{r},t)\) and phases \(\phi_{i}(\mathbf{r},t)\) of the hidden-photon dark-matter field change stochastically on length scales given by the dark-matter coherence length, \[\ell_{\rm coh}\approx\frac{2\pi}{m_{A^{\prime}}v_{\rm DM}}\;, \tag{3}\] and time scales given by the coherence time of the field, \[\tau_{\rm coh}\approx\frac{\ell_{\rm coh}}{v_{\rm DM}}\approx\frac{2\pi}{m_{A^ {\prime}}v_{\rm DM}^{2}}\;, \tag{4}\] where \(v_{\rm DM}\sim 10^{-3}\) is the characteristic dispersion (virial) velocity of the dark matter in the vicinity of Earth [7, 49]. Note that the timelike component of the four-potential \(\left(A^{\prime}\right)^{\mu}\) is suppressed relative to the spacelike component (the vector potential \(\mathbf{A}^{\prime}\)) by \(\sim v_{\rm DM}\sim 10^{-3}\). From inspection of Eq. (1), it can be seen that the physical effects due to the hidden-photon dark-matter field \(\left(A^{\prime}\right)^{\mu}\) are to leading order the same as those generated by an effective current density \[\mathbf{J}_{A^{\prime}}=-\varepsilon m_{A^{\prime}}^{2}\mathbf{A}^{\prime}. \tag{5}\] Inside a good conductor, the interacting mode vanishes, \(F_{\mu\nu}=0\) and \(A_{\mu}=0\), whereas the sterile mode can propagate into a conducting region with essentially no perturbation. Outside a conducting region, the effective current density due to the sterile mode acts to generate a non-zero interacting mode. 
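For orientation, the coherence scales in Eqs. (3) and (4) can be evaluated at the edges of the frequency band probed in this work. The following is a rough numerical illustration (in Python, with factors of \(c\) restored and \(\mathcal{O}(2\pi)\) factors dropped, so the outputs are order-of-magnitude only):

```python
c = 3.0e8    # speed of light, m/s
v = 1.0e-3   # dark-matter virial velocity as a fraction of c

for f in (0.5, 5.0):                 # Compton frequency, Hz
    ell_coh = c / (f * v)            # coherence length ~ Eq. (3), m
    tau_coh = 1.0 / (f * v**2)       # coherence time ~ Eq. (4), s
    print(f"f = {f} Hz: ell_coh ~ {ell_coh:.1e} m, tau_coh ~ {tau_coh:.1e} s")
```

The coherence length (\(\sim 10^{11}\) m) vastly exceeds Earth's radius, so widely separated stations sample essentially the same field value, while the coherence time at the top of the band (\(\sim 2\times 10^{5}\) s) is roughly comparable to a few-day measurement campaign.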
These effects, where Earth's conducting interior and the conducting ionosphere provide relevant boundary conditions, give rise to the oscillating magnetic-field pattern we seek to measure in our experiment, as described in detail in Ref. [26]. The second theoretical scenario we consider is the hypothesis that the dark matter consists primarily of axions [50, 51, 52, 53, 54, 55]. Axions are pseudoscalar particles arising from spontaneous symmetry breaking at a high energy scale associated, for example, with grand unified theories (GUTs) or even the Planck scale [56]. Combined with explicit symmetry breaking at lower energy scales, such pseudoscalar particles acquire small masses (\(\ll 1\) eV) and couplings to Standard Model particles and fields [2]. Like hidden photons, axions are ubiquitous features of beyond-the-Standard-Model theories [57, 58, 59, 60], and have all the requisite characteristics to be the dark matter [1, 2, 3]. The focus of our experiment is the axion-to-photon coupling which is described by the Lagrangian: \[\mathcal{L}\supset-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}( \partial_{\mu}a)^{2}-\frac{1}{2}m_{a}^{2}a^{2}+\frac{1}{4}g_{a\gamma}aF_{\mu \nu}\tilde{F}^{\mu\nu}\, \tag{6}\] where \(a\) is the axion field, \(m_{a}\) is the axion mass, \(g_{a\gamma}\) parameterizes the axion-photon coupling, and \(\tilde{F}^{\mu\nu}\) is the dual field-strength tensor. The last term appearing in Eq. (6) describes the interaction between the axion and electromagnetic fields: \[\frac{1}{4}g_{a\gamma}aF_{\mu\nu}\tilde{F}^{\mu\nu}=-g_{a\gamma} a\mathbf{E}\cdot\mathbf{B}\, \tag{7}\] where \(\mathbf{E}\) and \(\mathbf{B}\) are the electric and magnetic fields. In the non-relativistic limit, the leading-order correction to Maxwell's equations arising from the existence of the axion-photon coupling described by Eq. (7) appears in the Ampere-Maxwell Law: \[\mathbf{\nabla}\times\mathbf{B}-\partial_{t}\mathbf{E}=\mathbf{J}-g_{a\gamma}( \partial_{t}a)\mathbf{B}. \tag{8}\] It follows that the physical effects of the axion-photon coupling in the presence of a magnetic field \(\mathbf{B}\), as in the case of hidden photons [Eq. (5)], manifest as an effective current: \[\mathbf{J}_{a}=-g_{a\gamma}(\partial_{t}a)\mathbf{B}=ig_{a\gamma}m_{a}a( \mathbf{r},t)\mathbf{B}\, \tag{9}\] where \[a(\mathbf{r},t)=a_{0}(\mathbf{r},t)e^{-im_{a}t} \tag{10}\] is the axion field with a stochastically (slowly) varying amplitude \(|a_{0}|\sim\sqrt{2\rho_{\rm DM}}/m_{a}\), with coherence length \(\ell_{\rm coh}\) and coherence time \(\tau_{\rm coh}\) analogous to those for hidden photons described by Eqs. (3) and (4), with the replacement \(m_{A^{\prime}}\to m_{a}\). The interaction of an axion dark-matter field with the geomagnetic field of Earth thus generates an oscillating magnetic-field pattern, which is discussed in detail in Ref. [28]. In this work, we aim to analyze the first dedicated measurements of the SNIPE Hunt experiment in the frequency range 0.5-5 Hz. The lower frequency bound of 0.5 Hz for our analysis was chosen for practical reasons: \(1/f\) noise begins to reduce our sensitivity below \(\approx 0.5\) Hz and there is ongoing analysis of SuperMAG data covering frequencies up to \(\approx 1\) Hz that is expected to surpass the sensitivity of this experiment. For the upper bound of 5 Hz, we have considered the fact that we do not have a robust prediction for Schumann resonances because of finite conductivity effects and also inhomogeneities in the ionosphere refractive index [61]. 
Indeed, the first Schumann resonance occurs at a frequency around 7.8 Hz with time-dependent fluctuations of the order of 0.5 Hz. Most importantly, its width is about 2 Hz, which makes \(f\leq 5\) Hz a region where the dark-matter-induced magnetic-field pattern can be reliably derived (see Sec. IV.3.1 for further discussion). The analyses carried out in Refs. [26, 28] considered a quasi-static limit valid only when the UBDM Compton wavelengths are much larger than Earth's radius \(R\): \(\lambda_{A^{\prime}}\approx 1/m_{A^{\prime}}\gg R\) and \(\lambda_{a}\approx 1/m_{a}\gg R\). This sets an upper limit on the hidden-photon mass \(m_{A^{\prime}}\) and axion mass \(m_{a}\) of \(\sim 3\times 10^{-14}\) eV and, correspondingly, for their Compton frequencies: \(f_{A^{\prime}}\) and \(f_{a}\) must be \(\ll 7\) Hz. As we are working at frequencies up to 5 Hz, the formulas used in Refs. [26; 28] are only marginally correct, and therefore more robust formulas are needed here. In the following we calculate a more general signal for dark-matter masses close to \(\sim 1/R\). We write the magnetic and electric fields in terms of vector spherical harmonics (VSH; see Appendix D of [26]) \(\mathbf{Y}_{\ell m}\), \(\mathbf{\Psi}_{\ell m}\), \(\mathbf{\Phi}_{\ell m}\) as \[\mathbf{B}(\mathbf{x},t) =e^{-i\omega t}\sum_{\ell,m}\left(B^{(r)}_{\ell m}(r)\mathbf{Y}_{\ell m }+B^{(1)}_{\ell m}(r)\mathbf{\Psi}_{\ell m}+B^{(2)}_{\ell m}(r)\mathbf{\Phi}_{\ell m}\right) \tag{11}\] \[\mathbf{E}(\mathbf{x},t) =e^{-i\omega t}\sum_{\ell,m}\left(E^{(r)}_{\ell m}(r)\mathbf{Y}_{\ell m }+E^{(1)}_{\ell m}(r)\mathbf{\Psi}_{\ell m}+E^{(2)}_{\ell m}(r)\mathbf{\Phi}_{\ell m} \right), \tag{12}\] where \(\omega\) is the oscillation angular frequency of the dark-matter effective current. For the dark-matter effective current \(\mathbf{J}\) which stands for both hidden photons and axion-like particles, we use the fact that it satisfies \(\mathbf{\nabla}\times\mathbf{J}=0\) to write \[\mathbf{J}(\mathbf{x},t)=e^{-i\omega t}\sum_{\ell,m}\left(J^{(r)}_{\ell m}(r)\mathbf{Y}_{ \ell m}+J^{(1)}_{\ell m}(r)\mathbf{\Psi}_{\ell m}\right). \tag{13}\] Inserting the above ansatz into Maxwell's equations, we get \[\left(\frac{1}{r^{2}}\frac{d}{dr}\left(r^{2}\frac{d}{dr}\right)+\omega^{2}- \frac{\ell(\ell+1)}{r^{2}}\right)\left(\begin{array}{c}B^{(2)}_{\ell m}\\ E^{(2)}_{\ell m}\end{array}\right)=0\, \tag{14}\] and the other components are determined by \[E^{(r)}_{\ell m} =\frac{1}{i\omega}\left(\frac{\ell(\ell+1)}{r}B^{(2)}_{\ell m}+J^ {(r)}_{\ell m}\right) \tag{15}\] \[E^{(1)}_{\ell m} =\frac{1}{i\omega}\left(\frac{1}{r}\frac{d}{dr}\left(rB^{(2)}_{ \ell m}\right)+J^{(1)}_{\ell m}\right)\] (16) \[B^{(r)}_{\ell m} =-\frac{1}{i\omega}\frac{\ell(\ell+1)}{r}E^{(2)}_{\ell m}\] (17) \[B^{(1)}_{\ell m} =-\frac{1}{i\omega}\frac{1}{r}\frac{d}{dr}\left(rE^{(2)}_{\ell m }\right). \tag{18}\] This system is solved with boundary conditions such that \(E^{(1)}_{\ell m}\) and \(E^{(2)}_{\ell m}\) vanish at both Earth's surface \(r=R\) and ionosphere \(r=R+h\), where \(h\) is the ionosphere height. Because we work in the regime \(\omega h\ll 1\), the boundary condition for \(E^{(2)}_{\ell m}\) implies immediately that it is zero everywhere; it follows that \(B^{(r)}_{\ell m}\) and \(B^{(1)}_{\ell m}\) also vanish identically. Writing \(B^{(2)}_{\ell m}=u_{\ell m}/r\), in the limit in which \(h\ll R\) we find \[u^{\prime\prime}_{\ell m}-\lambda_{\ell}^{2}u_{\ell m}=0, \tag{19}\] where \(\lambda_{\ell}^{2}=\ell(\ell+1)/R^{2}-\omega^{2}\). 
We write the solution for \(u_{\ell m}\) as \(u_{\ell m}=\alpha_{\ell m}\cosh(\lambda_{\ell}(r-R))+\beta_{\ell m}\sinh(\lambda_{\ell}(r-R))\). Notice that the magnetic field signal at Earth's surface (\(r=R\)) is simply given by \[\mathbf{B}=\sum_{\ell,m}\frac{\alpha_{\ell m}}{R}\mathbf{\Phi}_{\ell m}. \tag{20}\] From the boundary condition \(u^{\prime}_{\ell m}=-rJ^{(1)}_{\ell m}\) at \(r=R\) and \(r=R+h\), we find at zeroth order in \(h/R\) \[\alpha_{\ell m}=-\frac{J^{(1)}_{\ell m}(R)+RJ^{(1)\prime}_{\ell m}(R)}{\lambda_{\ell}^{2}}. \tag{21}\] ### Hidden-Photon Signal In terms of vector spherical harmonics, the hidden-photon effective current, given in Eq. (5), is written as \[\mathbf{J}_{A^{\prime}}=-\sqrt{\frac{4\pi}{3}}\varepsilon m_{A^{\prime}}^{2}\sum_{m=-1}^{1}A^{\prime}_{m}(\mathbf{Y}_{1m}+\mathbf{\Psi}_{1m})e^{-i\omega_{m}t}\enspace. \tag{22}\] Here \(\omega_{m}=m_{A^{\prime}}-2\pi f_{d}m\), where \(f_{d}\) is the frequency associated to the sidereal day,5 and the hidden-photon amplitudes \(A^{\prime}_{m}\) (for polarizations \(m=0,\pm 1\)) appearing in Eq. (22) are normalized via Footnote 5: The appearance of \(f_{d}\) here is due to the rotation of Earth. While the direction of the hidden photon is fixed in the inertial celestial frame, our measurements are performed by magnetometers which are fixed to the rotating Earth. Transforming the hidden-photon amplitude from the inertial to the co-rotating frame introduces an additional time dependence related to Earth’s rotational frequency. \[\frac{1}{2}m_{A^{\prime}}^{2}\langle|\mathbf{A}^{\prime}|^{2}\rangle=\rho_{\rm DM}, \tag{23}\] where \(\rho_{\rm DM}=0.3\,{\rm GeV/cm}^{3}\) is the local dark-matter density. Extracting \(J_{1m}^{(1)}\) from Eq. (22), we find \[\mathbf{B}_{A^{\prime}}=\sqrt{\frac{4\pi}{3}}\frac{\varepsilon\,m_{A^{\prime}}^{2}R}{2-m_{A^{\prime}}^{2}R^{2}}\sum_{m=-1}^{1}A^{\prime}_{m}\mathbf{\Phi}_{1m}e^{-i\omega_{m}t}. \tag{24}\] ### Axion Signal For axion dark matter, the orientation of the effective current is determined by Earth's dc magnetic field [see Eq. (9)]. As in Ref. [28], we utilize the IGRF-13 model [62], which parameterizes Earth's magnetic field \(\mathbf{B}_{\oplus}\) in terms of a scalar potential \(V_{0}\), such that \(\mathbf{B}_{\oplus}=-\nabla V_{0}\), where \(V_{0}\) is expanded as \[V_{0}=\sum_{\ell=1}^{\infty}\sum_{m=0}^{\ell}\frac{R^{\ell+2}}{r^{\ell+1}}(g_{\ell m}\cos(m\phi)+h_{\ell m}\sin(m\phi))P_{\ell}^{m}(\cos\theta), \tag{25}\] where \(P_{\ell}^{m}\) are the Schmidt-normalized associated Legendre polynomials. The Gauss coefficients \(g_{\ell m}\) and \(h_{\ell m}\) are specified by the IGRF model at five-year intervals (see Tab. 2 of Ref. [62]). The most recent of these coefficients correspond to the year 2020, with time derivatives provided for their subsequent evolution. In this work, we extrapolate the 2020 values (up to \(\ell=4\)) forward to July 23, 2022 using these time derivatives, and adopt the conventions \(g_{\ell,-m}=(-1)^{m}g_{\ell m}\) and \(h_{\ell,-m}=(-1)^{m+1}h_{\ell m}\) to extend to negative \(m\).
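As an aside, the expansion in Eq. (25) is straightforward to evaluate numerically, although the Schmidt normalization and phase conventions are easy to get wrong. The sketch below assumes scipy's associated Legendre routine and a dipole-only toy coefficient table (of the order of the published 2020 dipole term); it is illustrative only and not a substitute for a vetted IGRF-13 implementation:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def schmidt_plm(l, m, x):
    """Schmidt semi-normalized P_l^m(x) in the IGRF convention (no
    Condon-Shortley phase); scipy's lpmv includes that phase, so strip it."""
    norm = 1.0 if m == 0 else np.sqrt(2.0 * factorial(l - m) / factorial(l + m))
    return (-1.0) ** m * norm * lpmv(m, l, x)

def Br_surface(theta, phi, g, h, R=6371.2e3, r=6371.2e3):
    """Radial field B_r = -dV0/dr from Eq. (25), in the same units as the
    Gauss coefficients g[l][m], h[l][m] (nT)."""
    Br = 0.0
    for l in range(1, len(g)):
        for m in range(l + 1):
            Br += ((l + 1) * (R / r) ** (l + 2)
                   * (g[l][m] * np.cos(m * phi) + h[l][m] * np.sin(m * phi))
                   * schmidt_plm(l, m, np.cos(theta)))
    return Br

# Dipole-only toy coefficients, of the order of the published 2020 dipole term
g = [[0.0], [-29404.8, 0.0]]
h = [[0.0], [0.0, 0.0]]
print(Br_surface(np.pi / 4, 0.0, g, h))   # ~ -4.2e4 nT at 45 deg colatitude
```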
Once Earth's dc field has been parametrized in this way, the effective current that axion dark matter of mass \(m_{a}\) and axion-photon coupling \(g_{a\gamma}\) generates can be written as [28] \[\mathbf{J}_{a}=ig_{a\gamma}a_{0}m_{a}\sum_{\ell,m}C_{\ell m}\left(\frac{R}{r}\right)^{\ell+2}((\ell+1)\mathbf{Y}_{\ell m}-\mathbf{\Psi}_{\ell m})e^{-im_{a}t}, \tag{26}\] where \(a_{0}\) is the (complex) axion amplitude, normalized by \(\frac{1}{2}m_{a}^{2}\langle|a_{0}|^{2}\rangle=\rho_{\rm DM}\), and \[C_{\ell m}=(-1)^{m}\sqrt{\frac{4\pi(2-\delta_{m0})}{2\ell+1}}\frac{g_{\ell m}-ih_{\ell m}}{2}. \tag{27}\] Now, by identifying \(J^{(1)}(r)\) in Eq. (26), the magnetic-field signal from axion dark matter is found to be \[\mathbf{B}_{a}=-ig_{a\gamma}a_{0}m_{a}R\sum_{\ell,m}\frac{(\ell+1)C_{\ell m}}{\ell(\ell+1)-m_{a}^{2}R^{2}}\mathbf{\Phi}_{\ell m}e^{-im_{a}t}. \tag{28}\] ## III Experimental Details From 21 July 2022 to 24 July 2022, we conducted the first coordinated SNIPE Hunt science run. Measurements were made with battery-operated magnetometers located at three sites which were chosen to have minimal magnetic-field interference from power lines, traffic, and other anthropogenic sources. A block diagram of the experimental setup at an individual station is shown in Fig. 1. The magnetometers were Vector Magnetoresistive (VMR) sensors manufactured by Twinleaf LLC. The VMRs use three mutually perpendicular giant magnetoresistive (GMR) field sensors to measure all three components of the magnetic field. The sensitivity of the GMR sensors is specified to be 300 pT\(/\sqrt{\mathrm{Hz}}\) over a frequency range of 0.1-100 Hz. In addition to the magnetic field, the VMR also has a three-axis gyroscope, a three-axis accelerometer, a barometer, and a thermometer. The measurements from all of these sensors were recorded during the course of the science run on a laptop computer which also provided power to the VMR via a USB connection. The sample rate for the data acquisition was set to 160 samples/s. In order to limit the influence of magnetic noise from the laptop on the VMR, the laptop was located in a camping tent 9-12 m from the sensor, depending on the station. The laptops were powered by 50 A \(\cdot\) hr powerbanks, which were swapped with fully charged powerbanks every 6-10 hours and recharged using a solar generator. Fig. 3 shows the operation times for the three stations. The data were time stamped using the computer clocks, which were steered to GPS time using a receiver antenna and synchronization software. To account for the software lag present in the timing calibration, the timing offset correction was set prior to the science run using a time server from the National Institute of Standards and Technology. The accuracy of the timing was tested in the laboratory by applying magnetic-field signals that were triggered by an external GPS receiver before and after the science run. Based on these tests, we estimate the accuracy of the timing to be \(\lesssim 100\) ms. The locations of the three stations are shown in Table 1. The magnetometers were aligned so that the \(y\) axis of the magnetometers was vertical, relative to local gravity, and the \(z\) axis of the detectors was pointing to true north as determined by smart-phone compasses. We estimate the pointing accuracy of the detectors to be \(\lesssim 1^{\circ}\). An example of one of the mounts used for the alignment of the magnetometers is shown in Fig. 2.
The sensors and mounts were covered with a plastic container that was secured to the ground to guard against rain. \begin{table} \begin{tabular}{l|c|c c c} \hline \hline Station & Location & Latitude & Longitude & Elevation \\ & & (deg) & (deg) & (meters) \\ \hline Hayward & Auburn State Recreation Area & 39.1017 & -120.924 & 355.0 \\ Lewisburg & Penn Roosevelt State Park & 40.7404 & -77.7113 & 692.2 \\ Oberlin & Findley State Park & 41.1303 & -82.2069 & 277.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Locations of sensors used in the 2022 SNIPE Hunt. The stations are referred to by the location of the home institution for the groups in charge of each station. Figure 1: Block diagram of SNIPE station setup. A three-axis GMR magnetometer was connected via USB to a laptop located 9–12 m from the sensor. The data were recorded with a laptop and time stamped using the laptop computer time, which was steered to GPS time using a GPS timing receiver. The laptop was powered with battery power banks that were swapped out every 6–10 hours. Figure 2: Mount for the detector. The pitch, roll, and yaw can be adjusted. A smart phone fits onto the table that holds the sensor for alignment. The phone is removed during data collection. The mount was attached to the ground using heavy-duty plastic tent screws. Figure 3: Activity for the 2022 SNIPE science run. The horizontal bars indicate when the Hayward, Lewisburg, and Oberlin stations were operational. Two subsets of the data were analyzed independently: Scan-1 covering the interval shown as the light blue shaded region on the left, and Scan-2, the grey shaded region on the right. ### Noise Characteristics For the three sites, we show in Fig. 4 the amplitude spectral density for the East-West and North-South components of the magnetic field, the components relevant for this search. A couple of features are evident. The Hayward station had noticeably smaller power-line noise at 60 Hz than the Lewisburg and Oberlin stations. The Lewisburg station had a significant \(1/f\) pedestal in the 0.1 to 0.5 Hz band that was absent in the other two stations. Also, the Oberlin station had narrow peaks at 0.25, 0.5, and 0.75 Hz, suggesting a common origin as harmonics of some fundamental frequency. As the local magnetic environments are distinct, this difference in noise profile between the stations is expected, even though we have not identified the origins of the particular features noted above. However, for the three stations, the amplitude spectral density in most of the band of interest is flat and corresponds to approximately \(300\,\mathrm{pT}/\sqrt{\mathrm{Hz}}\), the noise floor of the sensors. In Fig. 5, we plot time series of the sensor temperature (shown as the blue dashed lines on the right), and of the temperature-corrected measurements of the magnetic field covering the first \(\sim 30\) hours of the observing run. The rows correspond to the different sites, and the columns to the North-South, East-West, and Vertical components of the field. We apply the temperature correction purely for plotting purposes, as we noticed a temperature-dependent drift in the sensor calibration at dc of up to 10 percent in the case of the Hayward station and about 2 percent for the other two stations. However, in the analysis band (0.5 to 5.0 Hz), we do not make any temperature correction. Instead, as we discuss in Sec. IV.3, we assign an uncertainty on the quoted HPDM and axion limits due to temperature drifts.
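The amplitude spectral densities shown in Fig. 4 are standard Welch estimates. A minimal sketch of such a computation, using a synthetic stand-in for a magnetometer channel with noise amplitudes chosen to mimic the 300 pT\(/\sqrt{\mathrm{Hz}}\) sensor floor and a low-frequency pedestal:

```python
import numpy as np
from scipy.signal import welch

fs = 160.0                                    # sample rate, samples/s
rng = np.random.default_rng(1)
n = int(3600 * fs)                            # one hour of synthetic data

# Stand-in for one magnetometer channel: white noise at the ~300 pT/sqrt(Hz)
# sensor floor plus a random-walk drift that mimics a low-frequency pedestal.
white = 300e-12 * np.sqrt(fs / 2) * rng.standard_normal(n)   # tesla
drift = 1e-11 * np.cumsum(rng.standard_normal(n))            # tesla
x = white + drift

f, psd = welch(x, fs=fs, nperseg=int(600 * fs))   # 10-minute Hann segments
asd = np.sqrt(psd)                                # amplitude spectral density
band = (f >= 0.5) & (f <= 5.0)
print(f"median ASD in band: {np.median(asd[band]):.2e} T/sqrt(Hz)")   # ~3e-10
```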
Between hours \(\sim 13\) and \(20\) of the time series, we observe increased fluctuations in the North and East components of the Lewisburg data, fluctuations which were not present in the other stations. This interval coincides with an overnight thunderstorm, during which mechanical agitation of the sensor or lightning occurring nearby may have led to the fluctuations. However, in the temporal window between hours \(\sim 25\) and \(32\) (shown enclosed in the red dashed boxes of Fig. 5), we notice features which are clearly correlated across all three stations, and which we believe are due to a geomagnetic storm associated with the eruption of sunspot AR3060. This produced a C5-class solar flare and a coronal mass ejection directed toward Earth [63, 64]. The storm led to the modulation of Earth's magnetic field which we detected. Including data from this window in the analysis presented below led to noticeable non-gaussianities in the test statistic used for setting limits on the HPDM and axion parameters. For this reason, we excluded the time interval containing the geomagnetic storm from the analysis and instead separated the data into two independently analyzed measurement periods: Scan-1 and Scan-2. These time periods are shown as shaded regions in Fig. 3. Figure 4: Amplitude spectral densities of the North-South and East-West components of the magnetic field measurements from the three measurement sites. The shaded band 0.5–5.0 Hz shows the range of frequencies probed in this work. In this band, the noise floor is limited by the instrumental sensitivity of \(\sim 300\,\mathrm{pT}/\sqrt{\mathrm{Hz}}\). ## IV Data analysis In this section, we outline how the SNIPE Hunt data are analyzed to search for both a hidden-photon dark-matter (HPDM) signal and an axion dark-matter signal. ### Hidden-Photon Analysis We begin with the HPDM signal. Our analysis follows a similar (but simplified) methodology to that described in Ref. [27]. In this search, our data consist of six time series, corresponding to the south-directed and east-directed magnetic field components measured at each of the three SNIPE Hunt measurement locations: \(B_{\theta}(\Omega_{1},t_{j})\), \(B_{\phi}(\Omega_{1},t_{j})\),
By the central limit theorem, it is thus distributed as a Gaussian variable. The signal in Eq. (24) indicates that all relevant information is contained at the frequencies \(f_{A^{\prime}}\) and \(f_{A^{\prime}}\pm f_{d}\). Thus we Fourier transform the six time series \(B_{\alpha}(\Omega_{i})\), and construct an 18-dimensional data vector8\(\vec{X}\) which contains all information which may be relevant to setting a bound at \(f_{A^{\prime}}\). Namely, \(\vec{X}\) consists of the six values \(\tilde{B}_{\alpha}\left(\Omega_{i},f_{A^{\prime}}-\hat{f}_{d}\right)\), followed by the six values \(\tilde{B}_{\alpha}\left(\Omega_{i},f_{A^{\prime}}\right)\), followed by the six values \(\tilde{B}_{\alpha}\left(\Omega_{i},f_{A^{\prime}}+\hat{f}_{d}\right)\). In our analysis, we compute bounds only at discrete Fourier transform (DFT) frequencies \(f_{A^{\prime}}=n/T\) (where \(T\) is the total duration of the time window in consideration). Note that \(f_{d}\) may not generically be a DFT frequency, and so we have instead used \(\hat{f}_{d}\), which we define as the nearest DFT frequency to \(f_{d}\). With these choices, \(\vec{X}\) can be computed via a fast Fourier transform (FFT). (This allows us to compute \(\vec{X}\) at all frequencies simultaneously, and perform the subsequent analysis for all frequencies in parallel.) The first step of our analysis is to characterize the statistics of \(\vec{X}\), namely its expectation and variance. Footnote 8: We use \(\vec{x}\) to denote a vector \(x\) with 18 components (or six components in Sec. IV.2), and \(\mathbf{y}\) to indicate a vector \(y\) with three components. First, let us compute the expectation of \(\vec{X}\). As mentioned above, we model our measurements as being Gaussian noise on top of the signal in Eq. (24). Since the expectation of the noise vanishes, the expectation of \(\vec{X}\) simply comes from Fourier transforming Eq. (24) and assembling its relevant components into a vector. To remove the normalization Figure 5: Time series of magnetic fields made at the Hayward, Lewisburg, and Oberlin measurement stations. The North-South, East-West, and Vertical (normal to Earth’s surface) directions are shown. Scan-1 begins at time \(t=0\), and covers the first 24 hours of the data shown. The red dashed boxes correspond to the occurrence of a geomagnetic storm. During that time, we noticed correlated low-frequency oscillations in all three stations. Data from this period were not included in Scan-1, as discussed in the main text. The blue dashed line shows the sensor temperature measured at the different locations. from the amplitudes \(A^{\prime}_{m}\), let us define \[c_{m}=\frac{\sqrt{2}\pi f_{A^{\prime}}A^{\prime}_{m}}{\sqrt{\rho_{ \mathrm{DM}}}}. \tag{29}\] These now have \(\sum_{m}\langle|c_{m}|^{2}\rangle=1\). In the case \(c_{\pm}=0\), (the real part of) Eq. 
(24) takes the simple form \[\mathbf{B}_{0}(\Omega,t)=-\frac{2\pi f_{A^{\prime}}R}{2-(2\pi f_{A^{\prime}}R)^{2}}\varepsilon\sqrt{2\rho_{\mathrm{DM}}}\sin\theta\cdot\mathrm{Re}\left[c_{0}e^{-2\pi if_{A^{\prime}}t}\right]\mathbf{\hat{\phi}}, \tag{30}\] and the only nonzero components of \(\langle\vec{X}\rangle\) are \[\langle X_{8}\rangle_{0}=\tilde{B}_{0,\phi}(\Omega_{1},f_{A^{\prime}})=-\frac{2\pi f_{A^{\prime}}R}{2-(2\pi f_{A^{\prime}}R)^{2}}c_{0}^{*}\varepsilon T\sqrt{\frac{\rho_{\mathrm{DM}}}{2}}\sin\theta_{1}\equiv c_{0}^{*}\varepsilon\mu_{0,8} \tag{31}\] \[\langle X_{10}\rangle_{0}=\tilde{B}_{0,\phi}(\Omega_{2},f_{A^{\prime}})=-\frac{2\pi f_{A^{\prime}}R}{2-(2\pi f_{A^{\prime}}R)^{2}}c_{0}^{*}\varepsilon T\sqrt{\frac{\rho_{\mathrm{DM}}}{2}}\sin\theta_{2}\equiv c_{0}^{*}\varepsilon\mu_{0,10} \tag{32}\] \[\langle X_{12}\rangle_{0}=\tilde{B}_{0,\phi}(\Omega_{3},f_{A^{\prime}})=-\frac{2\pi f_{A^{\prime}}R}{2-(2\pi f_{A^{\prime}}R)^{2}}c_{0}^{*}\varepsilon T\sqrt{\frac{\rho_{\mathrm{DM}}}{2}}\sin\theta_{3}\equiv c_{0}^{*}\varepsilon\mu_{0,12}. \tag{33}\] On the other hand, if \(c_{0}=c_{-}=0\), then the signal becomes \[\mathbf{B}_{+}(\Omega,t)=\frac{2\pi f_{A^{\prime}}R}{2-(2\pi f_{A^{\prime}}R)^{2}}\varepsilon\sqrt{\rho_{\mathrm{DM}}}\cdot\mathrm{Re}\left[c_{+}\left(i\mathbf{\hat{\theta}}-\cos\theta\mathbf{\hat{\phi}}\right)e^{-2\pi i(f_{A^{\prime}}-f_{d})t+i\phi}\right], \tag{34}\] and so the expectation of \(\vec{X}\) is \[\langle\vec{X}\rangle_{+}\approx-\frac{\pi f_{A^{\prime}}R}{2-(2\pi f_{A^{\prime}}R)^{2}}c_{+}^{*}\varepsilon\Delta t\sqrt{\rho_{\mathrm{DM}}}\begin{pmatrix}ie^{-i\phi_{1}}Q(f_{d}-\hat{f}_{d})\\ \cos\theta_{1}e^{-i\phi_{1}}Q(f_{d}-\hat{f}_{d})\\ ie^{-i\phi_{2}}Q(f_{d}-\hat{f}_{d})\\ \cos\theta_{2}e^{-i\phi_{2}}Q(f_{d}-\hat{f}_{d})\\ ie^{-i\phi_{3}}Q(f_{d}-\hat{f}_{d})\\ \cos\theta_{3}e^{-i\phi_{3}}Q(f_{d}-\hat{f}_{d})\\ ie^{-i\phi_{1}}Q(f_{d})\\ \cos\theta_{1}e^{-i\phi_{1}}Q(f_{d})\\ ie^{-i\phi_{2}}Q(f_{d})\\ \cos\theta_{2}e^{-i\phi_{2}}Q(f_{d})\\ ie^{-i\phi_{3}}Q(f_{d})\\ \cos\theta_{3}e^{-i\phi_{3}}Q(f_{d})\\ ie^{-i\phi_{1}}Q(f_{d}+\hat{f}_{d})\\ \cos\theta_{1}e^{-i\phi_{1}}Q(f_{d}+\hat{f}_{d})\\ ie^{-i\phi_{2}}Q(f_{d}+\hat{f}_{d})\\ \cos\theta_{2}e^{-i\phi_{2}}Q(f_{d}+\hat{f}_{d})\\ ie^{-i\phi_{3}}Q(f_{d}+\hat{f}_{d})\\ \cos\theta_{3}e^{-i\phi_{3}}Q(f_{d}+\hat{f}_{d})\end{pmatrix}\equiv c_{+}^{*}\varepsilon\vec{\mu}_{+}, \tag{35}\] where \[Q(f)=\frac{1-e^{-2\pi ifT}}{1-e^{-2\pi if\Delta t}}, \tag{36}\] and \(\Delta t=(1/160)\,\mathrm{s}\) is the time resolution. Note that, in principle, Eq. (35) should have an additional term proportional to \(c_{+}\), which contains factors of \(Q(2f_{A^{\prime}}-f_{d}-\hat{f}_{d})\), \(Q(2f_{A^{\prime}}-f_{d})\), and \(Q(2f_{A^{\prime}}-f_{d}+\hat{f}_{d})\). Since \(f_{d}\ll f_{A^{\prime}}\) and \(Q(f)\sim 1/f\), these will all be significantly smaller than the \(Q\) factors appearing in Eq. (35). Thus we are safe to neglect this additional term. Similarly, \(\langle\vec{X}\rangle_{-}\equiv c_{-}^{*}\varepsilon\vec{\mu}_{-}\) can be computed (for the case when \(c_{0}=c_{+}=0\)). Then generically, the full expectation of \(\vec{X}\) is \[\langle\vec{X}\rangle=\varepsilon(c_{+}^{*}\vec{\mu}_{+}+c_{0}^{*}\vec{\mu}_{0}+c_{-}^{*}\vec{\mu}_{-}). \tag{37}\] Now that we have computed the expectation of \(\vec{X}\), let us consider its variance. In this analysis, we consider the frequency range \(0.5\,\mathrm{Hz}\leq f_{A^{\prime}}\leq 5\,\mathrm{Hz}\), over which the noise is roughly frequency independent [see Fig. (4)]. Therefore, we may consider each instance of \(\vec{X}\) for different frequencies as independent realizations of the noise, and use these to estimate the noise.
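In code, this noise estimate reduces to a single frequency-averaged outer product [spelled out in Eq. (38) just below]; a sketch with a white-noise stand-in for the data vector:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000   # number of DFT frequencies in the 0.5-5 Hz analysis band

# White-noise stand-in for the 18-component data vector at each frequency.
Xf = (rng.standard_normal((18, N)) + 1j * rng.standard_normal((18, N))) / np.sqrt(2)

Sigma = (Xf @ Xf.conj().T) / N    # frequency-averaged covariance estimate
print(Sigma.shape, np.allclose(Sigma, Sigma.conj().T))   # (18, 18) True
```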
In particular, we can compute the covariance matrix for \(\vec{X}\) as \[\Sigma_{ij}\equiv\langle X_{i}X_{j}^{*}\rangle=\frac{1}{N}\sum_{k=1}^{N}X_{i}(f _{k})X_{j}(f_{k})^{*}, \tag{38}\] where \(f_{k}\) indexes the DFT frequencies between \(0.5\,\mathrm{Hz}\) and \(5\,\mathrm{Hz}\) (for \(k=1,\ldots,N\sim 10^{5}\)).9 Footnote 9: Note that since the first six elements, the middle six elements, and the final six elements of \(\vec{X}\) correspond to different frequencies, then covariances between elements from these different groups should vanish, i.e. \(\Sigma\) should be block diagonal. Moreover, the three diagonal blocks should be identical, since they correspond to the same averages in Eq. (38) (only with the frequency \(f_{k}\) shifted by \(\tilde{f}_{d}\)). Thus it suffices to only compute \(\Sigma_{ij}\) for \(7\leq i,j\leq 12\). Now that we understand the statistics of \(\vec{X}\), we can write down its likelihood \[-\ln\mathcal{L}\left(\varepsilon,\mathbf{c}|\vec{X}\right)=\left(\vec{X}- \varepsilon\sum_{m}c_{m}^{*}\vec{\mu}_{m}\right)^{\dagger}\Sigma^{-1}\left( \vec{X}-\varepsilon\sum_{m}c_{m}^{*}\vec{\mu}_{m}\right). \tag{39}\] From this likelihood, the computation of the bound on \(\varepsilon\) proceeds as in Sec. V.4 of Ref. [27], but we reproduce it here for completeness. Let us write \(\Sigma=LL^{\dagger}\) and then define \[\vec{Y} =L^{-1}\vec{X}, \tag{40}\] \[\vec{\nu}_{m} =L^{-1}\vec{\mu}_{m}. \tag{41}\] If we let \(N\) be the \(18\times 3\) matrix whose columns are \(\vec{\nu}_{m}\), then Eq. (39) becomes \[-\ln\mathcal{L}\left(\varepsilon,\mathbf{c}|\vec{Y}\right)=\left|\vec{Y}- \varepsilon N\mathbf{c}^{*}\right|^{2}. \tag{42}\] Now if we perform a singular value decomposition \(N=USV^{\dagger}\) (where \(U\) is a \(18\times 3\) matrix with orthonormal columns, \(S\) is a \(3\times 3\) diagonal matrix, and \(V\) is a \(3\times 3\) unitary matrix) and further define \[\mathbf{d} =V^{\dagger}\mathbf{c}^{*}, \tag{43}\] \[\mathbf{Z} =U^{\dagger}\vec{Y}, \tag{44}\] then the likelihood in Eq. (42) can be reduced to \[-\ln\mathcal{L}\left(\varepsilon,\mathbf{d}|\mathbf{Z}\right)=\left|\mathbf{Z}- \varepsilon S\mathbf{d}\right|^{2}. \tag{45}\] As mentioned earlier, the polarization amplitudes \(c_{m}\), and thus also the parameters \(d_{m}\), are nuisance parameters over which we need to marginalize. We take them to have a Gaussian likelihood \[\mathcal{L}(\mathbf{d})=\exp(-3|\mathbf{d}|^{2}). \tag{46}\] Marginalizing over \(\mathbf{d}\), the likelihood Eq. (45) reduces to \[\mathcal{L}\left(\varepsilon|\mathbf{Z}\right)\propto\prod_{m}\frac{1}{3+ \varepsilon^{2}s_{m}^{2}}\exp\left(-\frac{3|z_{m}|^{2}}{3+\varepsilon^{2}s_{m} ^{2}}\right), \tag{47}\] where \(z_{m}\) are the components of \(\mathbf{Z}\) and \(s_{m}\) are the diagonal entries of \(S\) [see Appendix D.1 of Ref. [27] for a derivation of Eq. (47)]. In order to turn this into a posterior on \(\varepsilon\), we must assume some prior. We take a Jeffreys prior \[p(\varepsilon)\propto\sqrt{\sum_{m}\frac{4\varepsilon^{2}s_{m}^{4}}{(3+ \varepsilon^{2}s_{m}^{2})^{2}}}; \tag{48}\] again see Appendix D.1 of Ref. [27]. The posterior for \(\varepsilon\) is thus \[p(\varepsilon|\mathbf{Z})=\mathcal{N}\sqrt{\sum_{m}\frac{4\varepsilon^{2}s_{m}^{4} }{(3+\varepsilon^{2}s_{m}^{2})^{2}}}\prod_{m}\frac{1}{3+\varepsilon^{2}s_{m}^ {2}}\exp\left(-\frac{3|z_{m}|^{2}}{3+\varepsilon^{2}s_{m}^{2}}\right), \tag{49}\] where \(\mathcal{N}\) must be calculated to normalize the integral of \(p(\varepsilon|\mathbf{Z})\) to 1. 
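In practice, the normalization \(\mathcal{N}\) and the credible limit derived from the posterior are computed numerically on a grid of \(\varepsilon\) values. A sketch with illustrative (not measured) inputs:

```python
import numpy as np

# Illustrative (not measured) analysis products: whitened template norms s_m
# and data projections z_m for the three polarizations.
s = np.array([3.0e3, 2.0e3, 1.5e3])
z = np.array([0.8 + 0.3j, -0.5 + 1.1j, 0.2 - 0.7j])

eps = np.logspace(-8, 0, 4000)                 # trial grid for epsilon
e2s2 = np.outer(eps**2, s**2)                  # epsilon^2 * s_m^2, shape (4000, 3)
prior = np.sqrt((4 * e2s2 * s**2 / (3 + e2s2) ** 2).sum(axis=1))              # Eq. (48)
like = (np.exp(-3 * np.abs(z) ** 2 / (3 + e2s2)) / (3 + e2s2)).prod(axis=1)  # Eq. (47)
post = prior * like
post /= np.trapz(post, eps)                    # fixes the normalization N in Eq. (49)

# 95% credible upper limit: smallest epsilon whose cumulative posterior is 0.95
cdf = np.concatenate(([0.0], np.cumsum(0.5 * (post[1:] + post[:-1]) * np.diff(eps))))
print(f"eps_hat = {eps[np.searchsorted(cdf, 0.95)]:.2e}")
```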
We then set a 95% credible upper limit \(\hat{\varepsilon}\) by solving \[\int_{0}^{\hat{\varepsilon}}d\varepsilon\,p(\varepsilon|\mathbf{Z})=0.95. \tag{50}\] By performing this analysis at all DFT frequencies between \(0.5\,\mathrm{Hz}\) and \(5\,\mathrm{Hz}\), we arrive at a bound over a range of HPDM masses. Fig. (6) shows the results of our analysis for both Scan-1 and Scan-2. Figure 6: 95% credible upper limit on \(\varepsilon\), the HPDM kinetic-mixing parameter. The top figure shows the results for Scan-1, and the bottom figure shows the results for Scan-2. The orange traces on both plots are smoothed versions of the limits obtained by averaging over 100 adjacent frequency bins. Following the methodology in Sec. VI of Ref. [27], we evaluate our data at each frequency for evidence of a significant dark-matter candidate. From Eq. (45), we see that under the null hypothesis of no dark matter signal (\(\varepsilon=0\)), the vector \(\mathbf{Z}\) should be distributed as a multivariate Gaussian of mean zero. Specifically, the statistic \[Q=2\sum_{m}\left|z_{m}\right|^{2} \tag{51}\] should follow a \(\chi^{2}\) distribution with six degrees of freedom. We may therefore compute the corresponding local \(p\)-value \[p_{0}=1-F_{\chi^{2}(6)}(Q), \tag{52}\] where \(F_{\chi^{2}(\nu)}\) denotes the cumulative distribution function for a \(\chi^{2}\)-distribution with \(\nu\) degrees of freedom. Fig. (7) shows the local \(p\)-values at each frequency \(f_{A^{\prime}}\) for both Scan-1 and Scan-2. We consider there to be evidence for a DM candidate at a given frequency (with 95% global significance) if its local \(p\)-value is below the threshold \(p_{\text{crit}}\) defined by \[(1-p_{\text{crit}})^{N}=0.95. \tag{53}\] This threshold is shown as a dotted line in Fig. (7). Scan-1 exhibits seven frequency bins which cross the threshold. Four of these are clustered around \(0.5\,\mathrm{Hz}\), while the other three are clustered around \(0.75\,\mathrm{Hz}\). Scan-2, likewise, exhibits three candidate frequencies clustered around \(0.5\,\mathrm{Hz}\), and one at \(0.75\,\mathrm{Hz}\). We expect these candidates are associated with the narrow peaks observed in the Oberlin station data. We have re-performed our analysis using only the Hayward and Lewisburg data, and find that these peaks do not cross the threshold for significance in either scan when restricting to these two stations [see Fig. (8)]. Since dark matter should be present in all locations at all times, this strongly suggests that these signal candidates do not correspond to dark matter. Moreover, we note that the width of a dark-matter signal is given by \(f_{A^{\prime}}v_{\mathrm{DM}}^{2}\), where \(v_{\mathrm{DM}}\) is the dark matter velocity dispersion. Since the frequency bin size for our analysis is roughly \(10^{-5}\,\mathrm{Hz}\), these signal candidates have widths of roughly \(10^{-5}f_{A^{\prime}}\), corresponding to a large velocity dispersion of \(v_{\mathrm{DM}}\sim 1000\,\mathrm{km/s}\) (which is far above the escape velocity of the Milky Way). We therefore rule out these dark-matter candidates and conclude that our analysis finds no evidence for HPDM in the \(0.5\,\mathrm{Hz}\leq f_{A^{\prime}}\leq 5\,\mathrm{Hz}\) range. ### Axion Analysis Now we move to the analysis for an axion dark-matter signal. This analysis proceeds similarly to the HPDM analysis, but is slightly simpler. As in the HPDM analysis, we construct a data vector \(\vec{X}\) consisting of Fourier transforms of the measured magnetic field at each location.
Since the axion signal in Eq. (28) contains no \(f_{d}\) dependence, however, the only relevant information is contained at frequency \(f_{a}\). Therefore in this analysis, we only take \(\vec{X}\) to be a six-dimensional vector, consisting of the measurements: \(\tilde{B}_{\theta}(\Omega_{1},f_{a})\), \(\tilde{B}_{\phi}(\Omega_{1},f_{a})\), \(\tilde{B}_{\theta}(\Omega_{2},f_{a})\), \(\tilde{B}_{\phi}(\Omega_{2},f_{a})\), \(\tilde{B}_{\theta}(\Omega_{3},f_{a})\), and \(\tilde{B}_{\phi}(\Omega_{3},f_{a})\). Figure 7: The local \(p_{0}\)-values for each of the \(N=414572\) frequency bins analyzed in Scan-1, shown in the top (blue) figure, and each of the \(N=340291\) bins searched in Scan-2, shown in the lower (grey) figure. The threshold value for declaring a dark-matter candidate at \(95\%\) global confidence is shown by the dotted line (after accounting for the trials factor given by the multiplicity of frequencies searched; see Eq. 53). The left panels show \(p_{0}\) as a function of frequency with candidates having \(p\)-values below the threshold. The right panels show histograms of \(p_{0}\) for the two different scans and candidates as outliers to the right of the threshold. The expectation of \(\vec{X}\) is now given by \[\langle\vec{X}\rangle=ic^{*}g_{a\gamma}RT\sqrt{\frac{\rho_{\rm DM}}{2}}\sum_{\ell m}\frac{(\ell+1)C_{\ell m}}{\ell(\ell+1)-(2\pi f_{a}R)^{2}}\begin{pmatrix}\Phi^{\theta}_{\ell m}(\Omega_{1})\\ \Phi^{\phi}_{\ell m}(\Omega_{1})\\ \Phi^{\theta}_{\ell m}(\Omega_{2})\\ \Phi^{\phi}_{\ell m}(\Omega_{2})\\ \Phi^{\theta}_{\ell m}(\Omega_{3})\\ \Phi^{\phi}_{\ell m}(\Omega_{3})\end{pmatrix}\equiv c^{*}g_{a\gamma}\vec{\mu}, \tag{54}\] where \(\Phi^{\theta}_{\ell m}\) and \(\Phi^{\phi}_{\ell m}\) denote the \(\mathbf{\hat{\theta}}\)- and \(\mathbf{\hat{\phi}}\)-components of the VSH \(\mathbf{\Phi}_{\ell m}\), and \[c=\frac{\sqrt{2}\pi f_{a}a_{0}}{\sqrt{\rho_{\rm DM}}}. \tag{55}\] The covariance matrix \(\Sigma\) of \(\vec{X}\) can again be determined by averaging over independent frequencies, as in Eq. (38) [except that \(\Sigma\) will now be a \(6\times 6\) matrix]. If we define \(\vec{Y}\) and \(\vec{\nu}\) as in Eqs. (40) and (41) [without the \(m\) index], and further define \[s=|\vec{\nu}|, \tag{56}\] \[z=\frac{\vec{\nu}^{\dagger}\vec{Y}}{s}, \tag{57}\] we can write the likelihood function for the axion signal as \[-\ln\mathcal{L}(g_{a\gamma},c|z)=|z-g_{a\gamma}c^{*}s|^{2}\,. \tag{58}\] Again marginalizing over \(c\) (which we take to have a Gaussian distribution with \(\langle|c|^{2}\rangle=1\)), and utilizing a Jeffreys prior for \(g_{a\gamma}\), we arrive at the posterior distribution \[p(g_{a\gamma}|z)=\frac{|z|^{2}}{1-e^{-|z|^{2}}}\cdot\frac{2g_{a\gamma}s^{2}}{(1+g_{a\gamma}^{2}s^{2})^{2}}\exp\left(-\frac{|z|^{2}}{1+g_{a\gamma}^{2}s^{2}}\right). \tag{59}\] Note that Eq. (59) is properly normalized, which is possible because its integral over \(g_{a\gamma}\) can be taken analytically. The 95% credible limit \(\hat{g}_{a\gamma}\) can then be defined, as in Eq. (50). In this case, we can solve for it analytically to find \[\hat{g}_{a\gamma}=\frac{1}{s}\sqrt{-\frac{|z|^{2}}{\log\left(0.95+0.05e^{-|z|^{2}}\right)}-1}. \tag{60}\] Fig. (9) shows the resulting limit as a function of frequency, for both Scan-1 and Scan-2. Note that the lower edge of the limit appears as a smooth curve. This is due to the fact that \(\hat{g}_{a\gamma}\to 4.36/s\) in the limit \(z\to 0\).
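This floor is easy to verify numerically from Eq. (60):

```python
import numpy as np

def ghat_times_s(z2):
    """Eq. (60) multiplied by s, as a function of |z|^2."""
    return np.sqrt(-z2 / np.log(0.95 + 0.05 * np.exp(-z2)) - 1.0)

for z2 in (1e-6, 0.1, 1.0):
    print(f"|z|^2 = {z2:g}: g_hat * s = {ghat_times_s(z2):.4f}")
print(f"z -> 0 limit: sqrt(19) = {np.sqrt(19.0):.4f}")
```

The \(z\to 0\) value follows analytically from expanding the logarithm: \(-|z|^{2}/\log(0.95+0.05e^{-|z|^{2}})\to 20\), so \(\hat{g}_{a\gamma}s\to\sqrt{19}\approx 4.36\).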
Therefore, even when the measured data at a particular frequency becomes arbitrarily small (compared to the estimated noise level), the limit on \(g_{a\gamma}\) asymptotes to a finite floor.10 Footnote 10: This floor exhibits a slight frequency dependence because of the \(f_{a}\)-dependence in Eq. (54). Figure 8: The local \(p_{0}\)-values for each frequency bin when only data from the Hayward and Lewisburg stations are considered. No beyond-threshold candidates appear in common in _both_ Scan-1 and Scan-2. Also, the peaks at 0.50 and 0.75 Hz evident in Fig. (7) are not present in this subset of stations. This indicates that those candidates were due to artefacts in the Oberlin data. As in the HPDM case, we evaluate our data at each frequency in order to determine whether there is evidence for a significant DM signal. We may compute the local \(p\)-value at a particular frequency under the null hypothesis (\(g_{a\gamma}=0\)) as \[p_{0}=1-F_{\chi^{2}(2)}(2|z|^{2}). \tag{61}\] (The \(\chi^{2}\)-distribution only has two degrees of freedom now, since the likelihood in Eq. (58) only has one \(z\) variable.) Fig. (10) shows these \(p\)-values as a function of frequency for both Scan-1 and Scan-2, along with the threshold value \(p_{\rm crit}\), as defined in Eq. (53). Neither scan shows any significant signal candidates, and so we again conclude that our data contain no evidence for axion dark matter in the \(0.5\,{\rm Hz}\leq f_{a}\leq 5\,{\rm Hz}\) range. ### Error Budget The results of this science run and analysis are summarized in Figs. 6 and 9. They show upper limits on \(\varepsilon\), the HPDM kinetic-mixing parameter, and on \(g_{a\gamma}\), the axion-photon coupling constant, respectively. Figure 9: 95% CL upper limit on \(g_{a\gamma}\) for Scan-1 and Scan-2. The orange traces on both plots show smoothed versions of the limits obtained by averaging over 100 adjacent frequency bins. Below, we discuss the impact of uncertainties in the signal model and experimental conditions on the quoted limits. #### iv.2.1 Signal model uncertainty The signals in Eqs. (24) and (28) assume a simplified model of Earth and the ionosphere, where both are treated as spherical perfect conductors. In Ref. [26], it is argued that this model holds to a high degree of accuracy in the frequency range relevant to this work. In particular, both Earth's crust [65] and the ionosphere [66, 67] achieve conductivities of at least \(10^{-4}\,\mathrm{S/m}\) at certain depths/heights, which translate to skin depths of \(\sim 50\,\mathrm{km}\) for frequencies \(f\sim 1\,\mathrm{Hz}\). Given that the only relevant length scale appearing in Eqs. (24) and (28) is the radius of Earth \(R\sim 6000\,\mathrm{km}\), finite-conductivity effects only modify the geometry of the system at the percent level. In the absence of resonances, we conclude that the signal should also only be affected at the percent level. Close examination of Eqs. (24) and (28), however, reveals that our model predicts resonances in the signal at \(mR=\sqrt{\ell(\ell+1)}\) (for \(\ell=1\) in the HPDM case, and \(\ell\geq 1\) in the axion case). These are the well-studied Schumann resonances of the Earth-ionosphere cavity [61, 68].
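In the idealized cavity, these resonances occur where \(\lambda_{\ell}\) in Eq. (19) vanishes, i.e., at \(\omega R=\sqrt{\ell(\ell+1)}\) in natural units. A quick numerical evaluation with factors of \(c\) restored:

```python
import numpy as np

R = 6.371e6    # Earth radius, m
c = 2.998e8    # speed of light, m/s

for ell in (1, 2, 3):
    # idealized-cavity resonance: lambda_ell = 0, i.e. omega * R / c = sqrt(l(l+1))
    f_res = np.sqrt(ell * (ell + 1)) * c / (2 * np.pi * R)
    print(f"l = {ell}: f_res ~ {f_res:.1f} Hz")   # ~10.6, 18.3, 25.9 Hz
```

The \(\ell=1\) value is the \(\sim 10\) Hz spherical-model prediction discussed next; its offset from the measured \(\sim 8\) Hz resonance indicates how strongly environmental effects shift the idealized result.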
Our simplified spherical model predicts the first of these resonances to occur at \(\sim 10\,\mathrm{Hz}\), but the central frequency of this resonance has been measured to be \(\sim 8\,\mathrm{Hz}\)[61], indicating that our spherical model does not accurately account for environmental effects on the Schumann resonances. Moreover, since the signal nominally diverges at the Schumann resonances, small deviations in their central frequency can have a large impact on the predicted signal. For this reason, we limit our analysis to \(f\leq 5\,\mathrm{Hz}\), in order to remain below the measured Schumann resonances. We note that the measured width of the Schumann resonances can, however, be quite large at certain times. In the summer, during the day, the first Schumann resonance can reach widths as large as \(\sim 4\,\mathrm{Hz}\)[68]. The upper end of our frequency range may therefore be mildly affected by the first Schumann resonance for certain portions of the runtime. Such an effect would result in a slight _enhancement_ of the signal, beyond what our model predicted. Therefore our exclusion limits are still conservative. In principle, the effect of the Schumann resonances may, however, invalidate our signal-candidate rejection procedure. This is because environmental effects could influence each station differently, meaning we cannot accurately characterize the spatial dependence of a true signal. Figure 10: The local \(p_{0}\)-values for each of the \(N=414572\) frequency bins analyzed in Scan-1 (top), and each of the \(N=340291\) frequency bins searched in Scan-2 (bottom). \(p_{\mathrm{crit}}\), the threshold value for declaring a candidate signal at 95% confidence, is shown as the dotted line on each of the plots. The right panel shows a histogram of all the \(p_{0}\)-values for each scan. Signal candidates would appear as outliers to the right of the threshold. To this point, we simply note that
In the the 0.5-5.0 Hz band, we estimate the impact of a drifting calibration on the upper limits of \(\varepsilon\) and \(g_{a\gamma}\) by running analyses where we scale the sensor readings by up to 10 percent of their values. We then determined the resulting limits, concluding that a drifting calibration of the magnitude we observed would change the limits on \(\varepsilon\) and \(g_{a\gamma}\) by \(\lesssim 3\%\). #### iv.2.4 Timing synchronization As discussed in Sec. III, the magnetic-field measurements were digitized at 160 samples per second. An on-sensor real-time clock ensured sample-to-sample timing to better than 1 ppm and a GPS-referenced computer clock provided the absolute time reference for the time stamps. The absolute timing accuracy between sensors was limited to \(\sim 100\,\mathrm{ms}\) due to latencies in the steering of the DAQ clock to GPS. This can be significantly improved. However, such an accuracy was adequate for an analysis covering the 0.5 to 5 Hz window. We estimate the systematic on the derived limits due to this error to be negible. ## V Future directions The current experiment is limited by the sensitivity of the magnetometers, rather than by the geomagnetic noise, and our model only accurately describes signals at frequencies below \(\approx 5\) Hz. In the next generation of the experiment, we plan to use more sensitive magnetometers to reach the limit imposed by geomagnetic noise. In addition, we propose to employ a novel experimental geometry to avoid model uncertainties in interpretation of our data. At frequencies \(\gtrsim 5\) Hz, the DM-induced magnetic field signal becomes sensitive to the details of Earth's atmosphere, which would require more careful modelling than that needed for the lower-frequency analysis presented in this paper. In order to be sensitive to higher-mass ALPs and hidden photons, we are investigating the prospect of measuring spatial derivatives of the magnetic field. By measuring components of the magnetic field across multiple stations which are positioned \(\lesssim 1\) km from one another, it is possible to compute the numerical derivatives of \(\mathbf{B}\), and particularly components of \(\nabla\times\mathbf{B}\). In the envisioned measurement scheme, we do not expect to have significant local electric currents, so the modified Ampere-Maxwell law describing the sought-after effect of DM fields is \[\nabla\times\mathbf{B}-\partial_{t}\mathbf{E}=\mathbf{J}_{\mathrm{eff}}, \tag{62}\] where \(\mathbf{J}_{\mathrm{eff}}\) encapsulates the effect of the dark matter [see Eqs. (5) and (9)]. Since \(\mathbf{E}\) is negligible in directions tangent to the ground, a measurement of \(\nabla\times\mathbf{B}\) in a tangent direction gives a direct measurement of the dark matter, which is insensitive to the atmospheric boundary conditions. Moreover, we expect this scheme to reduce sensitivity to geomagnetic noise, as physical geomagnetic fields in the lower atmosphere should have \((\nabla\times\mathbf{B})_{\parallel}=\mathbf{J}_{\parallel}=0\). However, it is important to note that, unlike the low-frequency measurements whose signal is enhanced by the full radius of Earth, the effective enhancement here would only be the separation between stations. SNIPE Hunt is currently carrying out an investigation of the expected background and signal, while simultaneously taking steps to perform a search based on this new methodology. 
## VI Conclusions

In this work, we reported on a search for axion and hidden-photon dark matter using a network of unshielded vector magnetoresistive (VMR) magnetometers located in relatively quiet magnetic environments, in wilderness areas far from anthropogenic magnetic noise. The magnetic signal pattern targeted by our search could, in principle, be generated by the interaction of axion or hidden photon dark matter with Earth, which can act as a transducer to convert the dark matter into oscillating magnetic fields as described in Refs. [26, 27, 28]. Analysis of the data acquired over the course of approximately three days in July 2022 revealed no evidence of a persistent oscillating magnetic field matching the expected characteristics of a dark-matter-induced signal. Consequently, we set upper limits on the kinetic-mixing parameter \(\varepsilon\) for hidden-photon dark matter and on the axion-photon coupling constant \(g_{a\gamma}\). Figure 11 displays constraints on \(\varepsilon\) as a function of hidden-photon mass \(m_{A^{\prime}}\) obtained in our experiment as well as those from other experiments [27, 70], derived from planetary science [71, 72], and based on astrophysical observations [73, 74, 75, 76, 77, 78]. We note that, in the studied frequency range, the results of the SNIPE Hunt experiment are the most stringent experimental bounds, and can be regarded as complementary to the more severe observational constraints.

Figure 11: Constraints on the hidden-photon kinetic-mixing parameter \(\varepsilon\) as a function of hidden-photon mass \(m_{A^{\prime}}\). The plot was created based on Refs. [69] and [47] and includes the SuperMAG limit [26, 27] and the recent measurement using a network of magnetometers in meter-scale shielded rooms [70], which we denote the “Synchronized Quantum Sensor Network” (SQSN). The results reported in Refs. [26, 27, 70] are the only other laboratory measurements in this mass range. In addition to the laboratory constraints, the plot also shows various astrophysical bounds, including the geomagnetic limit obtained from satellite measurements of the Earth’s magnetic field [71], the hidden-photon limits from magnetic-field measurements in Jupiter’s magnetosphere [72], limits from cold gas clouds at the Milky Way center [73], heating of the ionized interstellar medium in the galaxy from hidden photons [74], and the limit on heating/cooling due to DM in the Leo T dwarf galaxy [75]. Cosmological bounds on hidden photons from COBE/FIRAS data estimated from potential hidden-photon interactions with plasmas in the universe are from Refs. [76, 46, 77]. Finally, the figure also displays cosmological/astrophysical bounds on hidden photons from He II reionization [78].

Fig. 12 shows bounds on the axion-photon coupling constant \(g_{a\gamma}\) as a function of axion mass \(m_{a}\). We are actively pursuing further measurements based on this concept, but instead using induction-coil magnetometers [85, 86, 87]. We anticipate an improvement in sensitivity to dark-matter-induced magnetic signals of several orders of magnitude. Furthermore, as discussed in Sec. V, we will use local multi-sensor arrays to measure the curl of the local magnetic field at the various sites and thereby extend the frequency range probed up to about a kHz.

###### Acknowledgements.

The Oberlin group thanks Michael Miller for his work on the construction of the sensor mount for the Oberlin station. This work was supported by the U.S.
National Science Foundation under grants PHY-2110370, PHY-2110385, and PHY-2110388. S.K. and M.A.F. are supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359. M.A.F. is also supported by the Simons Investigator Award No. 827042. S.K. and M.A.F. thank the Aspen Center for Physics for hospitality during the final stages of this work, supported by NSF Grant No. PHY-2210452.

Figure 12: Constraints on the axion–photon coupling constant \(g_{a\gamma}\) as a function of axion mass \(m_{a}\). The plot was created based on Ref. [69], and includes the relevant experimental bounds based on the SuperMAG analysis [28] in maroon and the CAST result [79] in grey. Additionally, the plot displays astrophysical limits on the axion–photon interaction, represented in various shades of green, including diffuse SNe [80], Hydra A [81], super star clusters [82], M87 [83], and H1821+643 [84].
2307.06320
Bayesian analysis of a Unified Dark Matter model with transition: can it alleviate the $H_{0}$ tension?
We consider cosmological models in which Dark Matter (DM) and Dark Energy (DE) are described by a single component, dubbed Unified Dark Matter (UDM) models, in which the DE-like part can have an equation of state $<-1$ at late times without violating the null energy condition. In this paper, we investigate whether this feature can relieve the Hubble tension. We perform a Bayesian analysis of the model using SNIa data from Pantheon, the CMB distance prior from Planck, and the prior on the absolute magnitude $M$ of SNIa from SH0ES. Using the prior, the data suggests a smooth transition taking place at redshifts $z_{\rm t} \simeq 2.85$, which provides a value $H_0=69.64\pm 0.88$ for the Hubble constant, slightly alleviating the tension by $\sim 1.5 \sigma$. Without it, we obtain $H_0 = 67.6^{+1.3}_{-0.82}$ and a transition happening at $z_t=1.36$. We also discuss the importance of using the prior on $M$ for constraining this model.
Emmanuel Frion, David Camarena, Leonardo Giani, Tays Miranda, Daniele Bertacca, Valerio Marra, Oliver F. Piattella
2023-07-12T17:33:16Z
http://arxiv.org/abs/2307.06320v2
Bayesian analysis of Unified Dark Matter models with fast transition: can they alleviate the \(H_{0}\) tension?

###### Abstract

We consider cosmological models in which Dark Matter (DM) and Dark Energy (DE) are described by a single component, dubbed Unified Dark Matter (UDM) models, in which the DE-like part can have an equation of state \(<-1\) at late times without violating the null energy condition. In this paper, we investigate whether this feature can relieve the Hubble tension. We perform a Bayesian analysis of the model using SNIa data from Pantheon, the CMB distance prior from Planck, and the prior on the absolute magnitude \(M\) of SNIa from SH0ES. The data suggests a smooth transition taking place at redshifts \(z_{\rm t}\simeq 2.85\), which provides a value \(H_{0}=69.64\pm 0.88\) for the Hubble constant, slightly alleviating the tension by \(\sim 1.5\sigma\). We also discuss the importance of using the prior on \(M\) for constraining this model.

## I Introduction

The observed accelerated expansion of the Universe requires, within the framework of General Relativity, some form of Dark Energy (DE) to overcome the gravitational collapse of ordinary matter. A cosmological constant \(\Lambda\) seems to be the most natural candidate for DE, and together with Cold Dark Matter (CDM) they constitute the main ingredients of the standard model of cosmology, hereafter referred to as \(\Lambda\)CDM. Despite providing an extremely successful and (relatively) simple description of the expansion history of the Universe, the \(\Lambda\)CDM model has been recently challenged by the appearance of statistical tensions between the values of two cosmological parameters measured using late- and early-times probes. Specifically, there is a \(\sim 5\sigma\) tension concerning the value of the Hubble factor today \(H_{0}\), and a 2-3\(\sigma\) tension in the parameter combination \(S_{8}\equiv\sigma_{8}\,(\Omega_{\rm m0}/0.3)^{0.5}\), where \(\sigma_{8}\) is the averaged amplitude of the linear matter density fluctuations over spheres of radius \(8h^{-1}\) Mpc today and \(\Omega_{\rm m0}\) is the present-day matter density. Early-times probes seem to prefer lower values of \(H_{0}\) and higher values of \(S_{8}\) than late-times ones, see for example Refs. [1; 2; 3; 4; 5; 6] for a review of these problems. It is worth noticing that the \(H_{0}\) tension might also be interpreted as a tension on the absolute magnitude \(M\) of type Ia supernovae, since the calibration of the absolute magnitude is inferred from the luminosity-distance relation of supernovae at both high and low redshift, therefore introducing correlations between the value of \(M\) and the intrinsic properties of DE [7; 8; 9; 10; 11]. If not due to systematics,1 these observations will require new physics beyond \(\Lambda\)CDM to be properly addressed [19]. On the other hand, it is unclear which kind of new physics could successfully tackle both tensions _at the same time_ [2; 20]. Indeed, naive resolutions of one seem to worsen the other. For example, if one tries to solve the \(H_{0}\) tension at late times by increasing the present-day DE energy density \(\Omega_{\rm DE0}\), then the matter density decreases proportionally (\(\Omega_{\rm m0}\approx 1-\Omega_{\rm DE0}\) today), and consequently \(S_{8}\) decreases, exacerbating the \(S_{8}\) tension.

Footnote 1: In particular, the \(H_{0}\) tension might be related to systematics in supernova standardization [12] or in the Cepheid calibration of the cosmic ladder.
For example, the analysis of Ref. [13] uses Type Ia supernovae (SNIa) observations calibrated with the Tip of the Red Giant Branch [14; 15] rather than Cepheids, and results in \(H_{0}=69.8\pm 0.8~{}({\rm stat})\pm 1.8~{}({\rm sys})~{}{\rm km~{}s^{-1}Mpc^{-1}}\), compatible with the value inferred by the Planck collaboration, \(H_{0}=67.4\pm 0.5~{}{\rm km~{}s^{-1}Mpc^{-1}}\)[16]. Note, though, that systematics in the Cepheids alone are insufficient to solve the tension [17; 18]. Most of the attempts addressing the \(H_{0}\) tension can be classified into early- and late-time modifications of the \(\Lambda\)CDM expansion history [2]. Early-time modifications aim to modify the value of the sound horizon \(r_{\rm s}\)[21; 22; 23; 24; 25; 26; 27; 28; 29], which results in a different value of \(H_{0}\) inferred from the CMB.2 Late-time modifications instead try to obtain a higher \(H_{0}\) by modifying the expansion history at recent times, for example including interactions in the dark sector or through dynamical DE models [31; 32; 33; 34; 35]. Gravitational transition models, in which the effective gravitational coupling \(G\) undergoes a rapid transition at low redshift, have also been proposed as a resolution of the Hubble tension [36; 37; 38; 39] because they change the value inferred for the absolute magnitude of type Ia supernovae \(M\), therefore providing a better fit than smooth \(H(z)\) models [40]. Concerning late-time resolutions, the analysis of Refs. [41; 42] indicates that, in order not to worsen the \(S_{8}\) tension, a dynamical DE field is required with Equation of State (EoS) parameter evolving from \(w_{\rm DE}\geq-1\) to \(w_{\rm DE}<-1\). Perfect fluids satisfying the second inequality have been labelled _phantom_ since the seminal work [43]. They are considered unphysical for multiple reasons. Among them, we mention that their kinetic energy is negative, therefore introducing instabilities at high energy, and also that their energy density grows with the expansion of the Universe, consequently undermining the principle of energy conservation [44; 45].

Footnote 2: Though an increased \(H_{0}\) within this approach usually creates a tension with \(\Omega_{\rm m0}h^{2}\), with either BAO or weak lensing data [30].

In this work we consider a unified model for the dark sector of the Universe, called the Unified Dark Matter (UDM) model or _Quartessence_, where DM and DE are interpreted as different manifestations of the same dark component. In particular, we are interested in a class of UDM models in which the DE-like part can also mimic a _phantom_ fluid behavior. UDM models were investigated extensively in the past, see for example Refs. [46; 47; 48; 49; 50; 51; 52; 53; 54] on the generalized Chaplygin gas, Refs. [55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68] concerning scalar field models, or more general, non-adiabatic models, see _e.g._ [69]. More recent proposals were also given in Refs. [70; 71; 72; 73; 74]. The potential of Unified Dark Matter models in addressing the \(S_{8}\) tension was investigated in Ref. [75]. Inspired by these models, in this work we consider the possibility of addressing the Hubble tension with a \(w_{\rm DE}<-1\) of the DE-like part at late times, but evolving towards an asymptotic de Sitter attractor. The presence of the latter mitigates the stability issues by avoiding the appearance of a future big-rip singularity, see, for example, Refs.
[76; 77; 78], and is a key feature in many beyond-\(\Lambda\)CDM scenarios, see for example Refs. [78; 79; 80; 81; 82; 83]. We restrict our study to the background level for this purpose, so we do not consider structure formation at this time. To illustrate how this type of model can potentially address the Hubble tension, we will employ a very simple toy model proposed originally in Ref. [68], for which the UDM energy-momentum can be described as a perfect fluid with a fixed, time-dependent, analytical pressure profile chosen _a priori_. This can be done through a Lagrange multiplier, for example, or by fixing at the background level a suitable initial condition and a scalar field Lagrangian with a non-canonical kinetic term that can reproduce this pressure profile [60; 68; 84]. The structure of the paper is the following: in Sec. II, we review the UDM model proposed in [68], and discuss under which conditions a simple toy model can address the Hubble tension. Then, we perform a statistical analysis of the chosen model in Sec. III. In Sec. IV, we report our results and in Sec. V our conclusions.
## II A simple UDM toy model

Using the e-fold number \(N=\log a\) as time parameter (here we set \(a(t_{0})=1\)), the continuity equation of a UDM fluid in a FLRW background can be written as: \[\frac{d\rho(N)}{dN}+3\rho(N)=-3p(N)\;. \tag{1}\] Following [84], for a given pressure \(p(N)\), the formal solution of the latter equation is: \[\rho(N)=e^{-3N}\left[K-3\int_{-\infty}^{N}d\bar{N}\,e^{3\bar{N}}p(\bar{N})\right]\;, \tag{2}\] where \(K\) is an integration constant. As a result, we see that UDM models always contain a dust-like component which corresponds to the homogeneous solution of Eq. (1). The prescription above is very general and can be valid whether \(p(N)\) is an analytic function or not. We immediately notice that \(\Lambda\)CDM, at the background level, can be a sub-case of this class of UDM models. Indeed, if \(p={\rm const.}=-\Lambda\), we are left with a Universe filled with a cosmological constant \(\Lambda\) and a dust fluid with \(\rho(0)=K\) (see, _e.g._, [60]). Different choices of \(p\) will result in different behaviours, which makes UDM models suitable to mimic a wide range of DE candidates. Here, we adopt the ansatz proposed in [68], such that the pressure of the UDM fluid follows \[p_{\varphi}=-\frac{\rho_{\lambda}}{2}\left\{1+\tanh\left[\frac{\beta}{3}\left(a^{3}-a_{\rm t}^{3}\right)\right]\right\}\;, \tag{3}\] where \(\rho_{\lambda}\) is the energy density of an effective cosmological constant, such that \(\rho_{\lambda}\propto\Lambda\) with \(\Lambda\) being the cosmological constant. After integration of Eq. (1), this ansatz leads to the following density profile: \[\rho_{\varphi}=\frac{\rho_{\lambda}}{2}+\frac{3}{a^{3}\beta}\frac{\rho_{\lambda}}{2}\ln\left\{\cosh\left[\frac{\beta}{3}\left(a^{3}-a_{\rm t}^{3}\right)\right]\right\}+\frac{\rho_{\rm m0}}{a^{3}}\;, \tag{4}\] where \(\rho_{\rm m0}\) is the present-day energy density of the dust-like component. Since this matter component does not contribute to the pressure, we additionally define the EoS parameter of the DE-like component as \(w_{\rm DE}=p_{\varphi}/\rho_{\rm DE}\), with \(\rho_{\rm DE}=\rho_{\varphi}-\rho_{\rm m0}a^{-3}\). With the above profile, this UDM model typically transitions from a matter fluid to a fluid dominated by a (phantom) DE component. The moment at which the field starts behaving differently from either a pure matter component or a pure DE field occurs at \(a_{\rm t}\), and the speed of the transition is controlled by \(\beta\). Such behaviour is shown in the top panel of Fig. 1, where we plot \(w_{\rm DE}\) as a function of the redshift for different sets of values of \(\beta\) and \(a_{\rm t}\) (see also the figures in [68]). As we see, \(w_{\rm DE}(z)\) shows the transition from a matter-like fluid to an effective dark energy component that can attain the phantomic regime. This is particularly noticeable for intermediate values of \(\beta\), _i.e._, \(\beta\in[1,10]\). As we discuss later in this section, the smoothness of the transition and the approach to the \(\Lambda\)CDM regime are controlled by the combination of \(\beta\) and \(a_{\rm t}\). The division of the UDM fluid into the dark energy and matter components is justified from a phenomenological point of view, since an observer can associate the DM and DE components of this UDM model with the observed ones. Indeed, since the hyperbolic cosine function is bounded by 1 from below, the energy density \(\rho_{\rm DE}\) is a positive definite function that can be used to provide an effective interpretation of the dark energy sector.
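As a quick numerical illustration of the ansatz of Eqs. (3) and (4), the sketch below evaluates \(w_{\rm DE}(z)\) for one parameter choice. The numbers used (units with \(3\tilde{H}_{0}^{2}=1\), \(\rho_{\rm m0}=0.3\), \(\rho_{\lambda}=1.4\), \(\beta=1\), \(a_{\rm t}=1/11\)) are illustrative assumptions, not fit results.

```python
import numpy as np

# Illustrative parameters in units where 3*H0^2 = 1 (placeholders, not fits).
rho_m0, rho_lam = 0.3, 1.4    # slow-transition convention rho_lambda = 2*Lambda
beta, a_t = 1.0, 1.0 / 11.0   # transition sharpness and epoch

def p_phi(a):
    """UDM pressure ansatz, Eq. (3)."""
    return -0.5 * rho_lam * (1.0 + np.tanh(beta / 3.0 * (a**3 - a_t**3)))

def rho_phi(a):
    """UDM energy density, Eq. (4): Lambda-like + log-cosh + dust terms."""
    log_cosh = np.log(np.cosh(beta / 3.0 * (a**3 - a_t**3)))
    return 0.5 * rho_lam + 1.5 * rho_lam * log_cosh / (beta * a**3) + rho_m0 / a**3

z = np.array([0.0, 0.5, 1.0, 3.0, 10.0])
a = 1.0 / (1.0 + z)
w_de = p_phi(a) / (rho_phi(a) - rho_m0 / a**3)   # EoS of the DE-like part
for zi, wi in zip(z, w_de):
    print(f"z = {zi:5.1f}   w_DE = {wi:+.3f}")
```

For this choice the DE-like component is mildly phantom today (\(w_{\rm DE}\approx-1.1\)), in line with the behaviour described above.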
Consequently, \(\rho_{\varphi}\) is also positive definite and, notably, the UDM fluid satisfies the Weak Energy Condition (WEC) as long as \(\rho_{\varphi}+p_{\varphi}\geq 0\), or equivalently: \[(1+w_{\rm DE})\left[\frac{\rho_{\lambda}}{2}+\frac{3}{2\beta}\frac{\rho_{\lambda}}{a^{3}}\ln F(a)\right]+\frac{\rho_{\rm m0}}{a^{3}}\geq 0\,, \tag{5}\] where, for the sake of simplicity, we have defined \[F(a)\equiv\left\{\cosh\left[\frac{\beta}{3}\left(a^{3}-a_{t}^{3}\right)\right]\right\}\,.\] Therefore, the UDM fluid can provide a phantomic dark energy component, _i.e._ \(w_{\rm DE}<-1\), while remaining consistent with the WEC. The bottom panel of Fig. 1 shows the WEC, recast as \((1+w_{\varphi})\Omega_{\varphi}\geq 0\) with \(w_{\varphi}=p_{\varphi}/\rho_{\varphi}\), as a function of redshift for different values of \(a_{\rm t}\) and \(\beta\). The lightest blue and red lines (\(\beta=1\) and \(\beta=10\)) in Fig. 1 illustrate that the UDM fluid can behave as a phantomic DE component without violating the WEC. In a flat FLRW background, the corresponding Friedmann equations (in units \(8\pi G=1\)) at late times may be written as: \[3H^{2} = \frac{\rho_{\lambda}}{2}+\rho_{\rm m}+\rho_{\rm aux}\;, \tag{6}\] \[3\frac{\ddot{a}}{a} = \frac{\rho_{\lambda}}{2}-\frac{\rho_{\rm m}}{2}-\frac{1}{2}\left(\rho_{\rm aux}+3p_{\rm aux}\right)\;, \tag{7}\] where we have defined the convenient auxiliary quantities \(\rho_{\rm aux}\) and \(p_{\rm aux}\) as \[\rho_{\rm aux} := \frac{3}{a^{3}\beta}\frac{\rho_{\lambda}}{2}\ln\left\{\cosh\left[\frac{\beta}{3}\left(a^{3}-a_{t}^{3}\right)\right]\right\}, \tag{8}\] \[p_{\rm aux} := -\frac{\rho_{\lambda}}{2}\tanh\left[\frac{\beta}{3}\left(a^{3}-a_{t}^{3}\right)\right]. \tag{9}\] Written in this form, the UDM fluid recovers the standard \(\Lambda\)CDM phenomenology today in two scenarios:3

1. when the contributions of the auxiliary field are negligible, _i.e._ \(\rho_{\rm aux}\to 0\) and \(p_{\rm aux}\to 0\), and the effective cosmological constant is set to \(\rho_{\lambda}=2\Lambda\), or
2. when they tend towards \(\rho_{\rm aux}\to\rho_{\lambda}/2\) and \(p_{\rm aux}\to-\rho_{\lambda}/2\), with an effective cosmological constant \(\rho_{\lambda}=\Lambda\).

Footnote 3: In this work, we consider the evolution of the Universe only until today, so that \(0<a\leq 1\). However, UDM models in which \(a\to\infty\) can also recover the \(\Lambda\)CDM limit.

Figure 1: \(w_{\rm DE}\) (top panel) and the WEC condition (bottom panel) as functions of the redshift for different combinations of \(\beta\) and \(a_{t}\). Solid and dashed color lines represent the corresponding function evaluated at \(a_{t}=1/11\) (vertical solid black line) and \(a_{t}=0.4\) (vertical dashed black line), respectively. The lighter blue and red lines (\(\beta=1,10\)) show that the UDM fluid can introduce a phantomic dark energy component without violating the WEC. We maintain this color and line-style notation throughout the manuscript, unless otherwise specified.

Since one of our goals is to address the potential of this model for tackling the Hubble tension without spoiling the observational success of the \(\Lambda\)CDM model, in what follows we expand the discussion on the aforementioned \(\Lambda\)CDM limits. We explicitly demonstrate that such limits are achieved either by \(|\beta(a^{3}-a_{t}^{3})/3|\ll 1\) or \(|\beta(a^{3}-a_{t}^{3})/3|\gg 1\), where the former leads to \(\rho_{\rm aux}\to 0\) and the latter to \(\rho_{\rm aux}\to\rho_{\lambda}/2\).
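These two limits can also be checked numerically. The sketch below evaluates Eqs. (8) and (9) today for several values of \(\beta\), with \(\rho_{\lambda}\) normalized to one and an illustrative early transition; the overflow-safe log-cosh mirrors the expansion used for the fast-transition regime below.

```python
import numpy as np

rho_lam = 1.0                 # normalization; only ratios to rho_lambda matter

def log_cosh(x):
    """Overflow-safe ln cosh(x) = |x| - ln 2 + ln(1 + e^{-2|x|})."""
    x = np.abs(x)
    return x + np.log1p(np.exp(-2.0 * x)) - np.log(2.0)

def rho_aux(a, beta, a_t):    # Eq. (8)
    return 1.5 * rho_lam * log_cosh(beta / 3.0 * (a**3 - a_t**3)) / (beta * a**3)

def p_aux(a, beta, a_t):      # Eq. (9)
    return -0.5 * rho_lam * np.tanh(beta / 3.0 * (a**3 - a_t**3))

a, a_t = 1.0, 0.09            # today, well after an illustrative early transition
for beta in (1e-2, 1.0, 1e2, 1e4):
    print(f"beta = {beta:8.0e}:  rho_aux/rho_lam = {rho_aux(a, beta, a_t):.4f},"
          f"  p_aux/rho_lam = {p_aux(a, beta, a_t):+.4f}")
# Small beta: rho_aux -> 0 (scenario 1); large beta: rho_aux -> rho_lam/2 and
# p_aux -> -rho_lam/2 (scenario 2).
```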
We will show that, in these limits, the auxiliary field can be decomposed into components acting like either a cosmological constant, usual matter, or an exotic form of matter. This decomposition allows us to add the contributions of the auxiliary field like small deviations of the \(\Lambda\)CDM expansion. Even though we do not know precisely when this decomposition occurs, it gives us a convenient way to investigate the importance of the auxiliary term, particularly at late times. In order to lighten the notation, hereafter we define \(\alpha:=\beta(a^{3}-a_{t}^{3})/3\).

### Slow transition (\(|\alpha|\ll 1\))

Let us consider the Taylor expansion of Eqs. (8) and (9) for \(|\alpha|\ll 1\). Their leading order contributions are \[p_{\rm aux} \simeq -\frac{\alpha}{2}\rho_{\lambda}\;, \tag{10}\] \[\rho_{\rm aux} \simeq \frac{3\alpha^{2}}{4\beta}\frac{\rho_{\lambda}}{a^{3}}\,. \tag{11}\] These equations show that, in the slow transition limit, the contributions of the auxiliary field \(p_{\rm aux}\) and \(\rho_{\rm aux}\) are always negligible, and that the \(\Lambda\)CDM limit is reached if the effective cosmological constant is set to \(\rho_{\lambda}=2\Lambda\). Even though the pressure of the auxiliary field is almost zero at all times, \(\rho_{\rm aux}\) is not matter-like, as we can see from the expansion of Eq. (11), \[\rho_{\rm aux}=\frac{\beta}{3}\left(\frac{a^{3}}{2}-a_{t}^{3}+\frac{a_{t}^{6}}{2a^{3}}\right)\frac{\rho_{\lambda}}{2}\,. \tag{12}\] Eq. (12) shows that, well before the transition (\(a^{3}\ll a_{t}^{3}\)), the auxiliary field is dominated by the matter-like component \(\propto\beta a_{t}^{6}\rho_{\lambda}/a^{3}\). Well after the transition (\(a^{3}\gg a_{t}^{3}\)), the term \(\propto\beta a^{3}\) will prevail over the other contributions. Depending on the particular value of \(a_{t}\), this regime might not be reached today, though, since \(a>1\) would be necessary. Therefore, in scenarios with \(a_{t}\) close to unity, the energy density of the auxiliary field has a non-negligible \(\Lambda\)-like contribution \(\propto\beta a_{t}^{3}\) at \(a=1\). The darkest solid and dashed blue lines (\(\beta=0.01\) and \(\beta=0.1\)) in Fig. 2 display the slow transition regime. From the top panel, as anticipated, we can note that \(p_{\rm aux}/\rho_{\lambda}\) remains small throughout the history of the Universe. The middle panel shows that the quantity \(\rho_{\rm aux}/\rho_{\lambda}\) becomes negligible in the slow transition regime at late times, of the order of \(\alpha^{2}\). Complementary to this, the bottom panel confirms the matter-like behavior of the auxiliary field before the transition.

Figure 2: \(p_{\rm aux}/\rho_{\lambda}\) (top panel), \(\rho_{\rm aux}/\rho_{\lambda}\) (middle panel), and \(\rho_{\rm aux}/\rho_{\rm m}\) (bottom panel) as functions of the redshift for different combinations of \(\beta\) and \(a_{t}\). Solid and dashed color lines represent the corresponding functions evaluated at \(a_{t}=1/11\) and \(a_{t}=0.4\), respectively. The vertical solid and dashed black lines represent \(z_{t}=10\) and \(z_{t}=1.5\), respectively, while the horizontal dashed lines denote \(1/2\).

As discussed before, the matter contribution at this epoch is proportional to \(a_{t}^{6}\).
This is confirmed by comparing the solid (\(a_{t}=1/11\), which corresponds to a transition at redshift \(z_{t}=10\)) and dashed (for which we choose an arbitrary lower transition redshift, \(a_{t}=0.4\)) blue lines (\(\beta=0.01\)) in the bottom panel, where these values of \(a_{t}\) lead to a \(10^{4}\) discrepancy in \(\rho_{\rm aux}/\rho_{\rm m}\). Let us now briefly address which values of the parameters \(\beta\) and \(a_{t}\) are required to ease the Hubble tension in this scenario, _i.e._, for which values the auxiliary field could produce an expansion history very close to that of \(\Lambda\)CDM, but with a percent-level difference today. First, we rewrite the Friedmann equations in the slow transition regime by regrouping all contributions by powers of the scale factor, leading to \[3H^{2} \simeq\rho_{\lambda,{\rm S}}+\rho_{\rm m,S}+\rho_{\rm ph}\;, \tag{13}\] \[3\frac{\ddot{a}}{a} \simeq\rho_{\lambda,{\rm S}}-\frac{\rho_{\rm m,S}}{2}+\frac{5}{2}\rho_{\rm ph}\;, \tag{14}\] where \[\rho_{\lambda,{\rm S}} := \left(1-\frac{\beta}{3}a_{t}^{3}\right)\frac{\rho_{\lambda}}{2}\;, \tag{15}\] \[\rho_{\rm m,S} := \left(1+\frac{\rho_{\lambda}}{\rho_{\rm m0}}\frac{\beta}{12}a_{t}^{6}\right)\frac{\rho_{\rm m0}}{a^{3}}\;,\] (16) \[\rho_{\rm ph} := \frac{\beta}{12}a^{3}\rho_{\lambda}\;. \tag{17}\] With these definitions, the cosmological-constant-like \(\rho_{\lambda,{\rm S}}\) and matter-like \(\rho_{\rm m,S}\) terms encompass the contributions from the auxiliary field in this regime, and slightly deviate from \(\Lambda\)CDM. The \(\rho_{\rm ph}\) term includes contributions which act like neither a cosmological constant nor usual matter. It sources the phantomic contribution of the auxiliary fluid, hence the “ph” subscript, and will be used later in the paper to assess the capability of the UDM model to tackle the Hubble tension. Through the expansion of the auxiliary pressure, Eq. (10), we define the phantomic pressure \(p_{\rm ph}\simeq-\beta\rho_{\lambda}a^{3}/6\), which includes the only exotic term in the total pressure (_i.e._ the term which grows \(\propto a^{3}\)), and from which we derive the phantomic equation of state \(w_{\rm ph}:=p_{\rm ph}/\rho_{\rm ph}=-2\). Although this phantomic field may dominate \(\rho_{\rm aux}\) after the transition due to the small value of \(\alpha\), the EoS parameter of the effective dark energy component remains consistent with \(w_{\rm DE}=-1\). This is exemplified by the solid and dashed darker blue lines in the top panel of Fig. 1. Note that this phantomic component appears only in the slow transition approximation of our UDM model, so that one implies the other. Next, we choose the energy density of the effective dark energy component such that the \(\Lambda\)CDM limit is recovered for \(|\alpha|\ll 1\), _i.e._, we set \(\rho_{\lambda}=2\Lambda\). To avoid unnecessary modifications of the structure formation history, and given that the change in the matter energy density at early times is proportional to \(a_{t}^{6}\), let us assume a sufficiently small scale factor transition, \(a_{t}\approx 1/11\). By looking at Eqs. (15) and (16), we conclude that the cosmological constant and the matter energy density shift by roughly \(10^{-3}\beta\) and \(10^{-6}\beta\). Neglecting those shifts, the leading variation from the \(\Lambda\)CDM Hubble function today is due to the term \[\rho_{\rm ph}(a=1)=\frac{\beta}{6}\Lambda\,,\] where we used the assumption \(\rho_{\lambda}=2\Lambda\).
Therefore, in the slow transition regime, the first Friedmann equation becomes \[3H_{0}^{2}=3\tilde{H}_{0}^{2}+\rho_{\rm ph}\,,\] where \(\tilde{H}_{0}\) is the Hubble constant in the \(\Lambda\)CDM limit such that \(3\tilde{H}_{0}^{2}=\rho_{\rm m0}+\Lambda\). The shift in the Hubble constant is characterized by \[\frac{\delta H_{0}}{\tilde{H}_{0}}=\left(\sqrt{1+\frac{\beta}{6}\frac{\Lambda}{3\tilde{H}_{0}^{2}}}-1\right)\,. \tag{18}\] We compare in Fig. 3 the shifts predicted by the slow transition regime as a function of \(\beta\) (blue line) and the numerical solution of the Friedmann equation (6) (green lines). Overall, Eq. (18) offers a good approximation for values of \(\beta\ll 1\) and \(a_{t}\) sufficiently small, although it tends to slightly overestimate the cases with greater \(a_{t}\), see for instance the dashed green line in Fig. 3. Notably, the slow transition approximation holds even for values of \(\beta\approx 1\). This is explained by the fact that the next-to-leading order term in Eq. (11) scales as \(\alpha^{4}\), which is also negligible for such \(\beta\). Additionally, the shifts in the cosmological constant and matter energy densities are negligible too in this case.

Figure 3: Relative change in the Hubble constant as a function of \(\beta\) in the slow transition approximation (blue line) and from the numerical evaluation of the Friedmann equations (dashed and solid green lines). In the slow transition regime, the Hubble tension can be resolved with \(\beta\sim 1\). Note that the horizontal black line represents the change in \(H_{0}\) needed to alleviate the tension.

Finally, we note from Fig. 3 that the slow transition regime (or small deviations from it, _i.e._, \(\beta\gtrsim 1\)) could alleviate the tension. Interestingly, we see in the top panel of Fig. 1 that the limit \(\beta=1\) (lightest blue line) features a phantomic dark energy component \(w_{\rm DE}<-1\) without violating the WEC. This is not surprising, though, since, after the transition, \(\rho_{\rm aux}\) is dominated by the phantomic contribution \(\rho_{\rm ph}\) with an EoS parameter \(w_{\rm ph}=-2\).

### Fast transition (\(|\alpha|\gg 1\))

In the limit \(|\alpha|\gg 1\), we can approximate \(\ln\cosh\alpha\simeq|\alpha|-\ln 2+e^{-2|\alpha|}\) and \(\tanh\alpha\simeq{\rm sgn}(\alpha)(1-2e^{-2|\alpha|})\), so we can rewrite the auxiliary density (8) and pressure (9) as \[\rho_{\rm aux} = \frac{\rho_{\lambda}}{2}\left\{\left|1-\frac{a_{\rm t}^{3}}{a^{3}}\right|-\frac{3}{a^{3}\beta}\left(\log 2-e^{-2|\alpha|}\right)\right\}\, \tag{19}\] \[p_{\rm aux} = -{\rm sgn}(\alpha)\frac{\rho_{\lambda}}{2}\left(1-2e^{-2|\alpha|}\right)\, \tag{20}\] where, for the sake of illustration, we have kept the subdominant terms in \(\alpha\). From this set of equations, and given that \(\beta\) is assumed to be positive, we note that the behaviour of the UDM model depends on whether \(\alpha\) is positive or negative, _i.e._, it is different before and after the transition. We analyse these two limits in the remainder of this section.

#### ii.2.1 Before the transition: \(a<a_{\rm t}\)

Before the transition, the auxiliary fluid exhibits a positive pressure of \(p_{\rm aux}\approx\rho_{\lambda}/2\). This term cancels out the expected contribution of the \(\Lambda\)-like component in the total pressure of the UDM field, leading to \(p_{\varphi}\approx 0\).
Therefore, the UDM field is dominated by the matter component before the transition in the fast transition regime, and \(\rho_{\rm m}\) is the sole component behind the expansion history. This is particularly noticeable if we rewrite the Friedmann equations as \[3H^{2} = \rho_{\rm m,F}+\rho_{\rm mph}\, \tag{21}\] \[3\frac{\ddot{a}}{a} = -\frac{\rho_{\rm m,F}}{2}-\frac{\rho_{\rm mph}}{2}\left(1-2\beta a^{3}\right)\, \tag{22}\] where we have defined \[\rho_{\rm m,F} := \rho_{\rm m}+\left(a_{\rm t}^{3}-\frac{3\log 2}{\beta}\right)\frac{\rho_{\lambda}}{2a^{3}}\, \tag{23}\] \[\rho_{\rm mph} := \frac{3\rho_{\lambda}e^{2\alpha}}{2\beta a^{3}}\, \tag{24}\] where we use the subscript “mph” to emphasize that this exotic contribution will not behave as a phantomic component, as opposed to the exotic contribution in the slow transition regime. From the definition of \(\rho_{\rm m,F}\), it is clear that changes in the total energy budget will be dominated by the shift \(\propto\rho_{\lambda}a_{t}^{3}/a^{3}\) in the matter density, since the non-matter contribution \(\rho_{\rm mph}\) is negligible. In summary, a sufficiently large value of \(a_{t}\) significantly changes \(\rho_{\rm m}\) while introducing a non-phantomic component with negligible energy density.

Figure 4: Relative variation of the Hubble parameter produced by the contribution of \(\rho_{\rm aux}\) for two different values of the effective cosmological constant: \(\rho_{\lambda}=\Lambda\) (left panel) and \(\rho_{\lambda}=2\Lambda\) (right panel). The darkest dashed and solid red lines (\(\beta=1000\) and \(\beta=100\)) in the left panel show that the fast transition regime can, at most, recover the \(\Lambda\)CDM phenomenology for sufficiently small \(a_{t}\). On the other hand, the darkest dashed and solid blue lines (\(\beta=0.01\) and \(\beta=0.1\)) in the right panel represent the slow transition regime, where \(\rho_{\rm aux}\) is of order \(\alpha\). The lightest dashed and solid lines (\(\beta=1\)) are solutions alleviating the tension while keeping an overall expansion rate consistent with the \(\Lambda\)CDM dynamics. Both panels show that large values of \(a_{t}\) (dashed lines) significantly change \(H(z)\) before the transition.

#### ii.2.2 After the transition: \(a>a_{\rm t}\)

After the transition, the auxiliary pressure \(p_{\rm aux}\approx-\rho_{\lambda}/2\) behaves like a cosmological constant, accounting for half of the total pressure since \(p_{\varphi}\approx-\rho_{\lambda}\). The Friedmann equations then become \[3H^{2} = \rho_{\lambda}+\rho_{\rm m,F}+\rho_{\rm mph}\;, \tag{25}\] \[3\frac{\ddot{a}}{a} = \rho_{\lambda}-\frac{\rho_{\rm m,F}}{2}-\frac{\rho_{\rm mph}}{2}\left(1+2\beta a^{3}\right)\;, \tag{26}\] where we now define \[\rho_{\rm m,F} := \rho_{\rm m}-\left(a_{t}^{3}+\frac{1}{\beta}3\log 2\right)\frac{\rho_{\lambda}}{2a^{3}}\;, \tag{27}\] \[\rho_{\rm mph} := \frac{3\rho_{\lambda}}{2\beta a^{3}}e^{-2\alpha}\;. \tag{28}\] Similarly to the previous case, the leading order contributions of the auxiliary fluid in the fast transition regime are in the matter term, with an amplitude proportional to \(a_{t}^{3}/a^{3}\). Our analysis shows that, prior to the transition, there is no cosmological constant-like contribution from this UDM model. Instead, the UDM field acts like a dust-like component plus an additional auxiliary field, whose density decays exponentially with \(\alpha\). On the other hand, after the transition, it behaves as a cosmological constant plus a very similar auxiliary field.
In both cases, the auxiliary fluid is non-barotropic, with an EoS parameter \(w_{\rm mph}={\rm sgn}(\alpha)2\beta a^{3}/3\) and a negligible energy density. After the transition, we recover the \(\Lambda\)CDM paradigm whenever \(\beta\gg 1\), \(a_{t}^{3}\ll 1\), and \(\rho_{\lambda}=\Lambda\). The shift in the energy budget of matter and the contribution of the non-barotropic fluid both become negligible across the whole cosmological history. Unlike the slow transition regime, the fast transition regime can differ significantly from \(\Lambda\)CDM. For instance, if we assume \(a_{t}=0.4\) and \(\beta=1000\), we satisfy the fast transition condition and attain the \(\Lambda\)CDM limit today, though at the cost of a large shift in the matter component that can be seen from the darkest dashed red lines in the bottom panel of Fig. 2. This issue is dependent on the transition time. If the transition happens earlier, for instance at \(a_{t}=1/11\), the energy density of matter remains practically unaltered (darkest solid red lines in the bottom panel of Fig. 2). The darkest dashed and solid red lines in the top and middle panels of Fig. 2 show that, after the transition, the \(\Lambda\)CDM limit is recovered in the fast transition regime, _i.e._ \(\rho_{\rm aux}/\rho_{\lambda}=-p_{\rm aux}/\rho_{\lambda}=1/2\), regardless of the value of \(a_{t}\). The previous discussion clarifies the reason why the fast transition regime fails to resolve the Hubble tension and only recovers the \(\Lambda\)CDM phenomenology for fine-tuned values of \(\beta\) and \(a_{t}\). The failure of the fast transition regime is not surprising given previous results on fast-transitioning models [41; 42], which show that a phantomic behaviour is needed in order to increase the current value of the Hubble constant, and that fields with a non-phantom component are not suited to address the Hubble tension. For the sake of completeness, we plot in Fig. 4 the Hubble parameter produced by the UDM fluid when compared to the \(\Lambda\)CDM case under the assumption of \(\rho_{\lambda}=\Lambda\) (left panel) and \(\rho_{\lambda}=2\Lambda\) (right panel). In the fast transition regime, _i.e._, \(\rho_{\lambda}=\Lambda\) and \(\beta\gg 1\), we recover the \(\Lambda\)CDM behaviour nearly across all redshifts if \(a_{t}\) is sufficiently small (solid darker red line). A greater scale factor transition leads to a significant change in the Hubble rate, due to the change in the matter component before the transition (solid darkest red line). On the other hand, in the slow transition limit, _i.e._, \(\rho_{\lambda}=2\Lambda\) and \(\beta\lesssim 1\), the cosmic expansion is similar to the \(\Lambda\)CDM expansion with only a small deviation at late times, see the darkest blue lines in the right panel. However, for values of \(\beta\) outside this regime, the expansion of the Universe is greatly modified at high redshifts. Finally, we stress that the chosen values of \(\rho_{\lambda}\) in Fig. 4 are merely illustrations of the slow and fast transition regimes, and do not necessarily represent a realistic picture of the Universe. In order to present a more realistic scenario, we fix the values of \(\rho_{\lambda}\) and \(H_{0}\) in Fig. 5, such that the angular distance to the last-scattering surface \(D_{\rm A}(z_{\rm LSS})\) remains consistent with the value inferred from the \(\Lambda\)CDM constraints. We fix the time of last scattering at \(z_{\rm LSS}=1100\).
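To make this construction concrete, a minimal sketch of the matching procedure is given below: for one illustrative parameter choice, it rescales \(H_{0}\) so that the comoving distance to \(z_{\rm LSS}=1100\) (and hence \(D_{\rm A}\)) matches that of a reference \(\Lambda\)CDM model. The density values, the neglect of radiation, and the reference \(H_{0}=67.4\) km/s/Mpc are assumptions for illustration, not the inputs used to produce Fig. 5.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

Om = 0.3                      # illustrative matter fraction (both models)
beta, a_t = 1.0, 1.0 / 11.0   # illustrative UDM parameters

def log_cosh(x):
    x = abs(x)
    return x + np.log1p(np.exp(-2.0 * x)) - np.log(2.0)

def rho_aux(a, rho_lam):      # Eq. (8), in units 3*H0^2 = 1
    return 1.5 * rho_lam * log_cosh(beta / 3.0 * (a**3 - a_t**3)) / (beta * a**3)

# Choose rho_lambda so the UDM model is flat today: E(0) = 1.
rho_lam = brentq(lambda r: 0.5 * r + Om + rho_aux(1.0, r) - 1.0, 0.1, 3.0)

def E_lcdm(z):
    return np.sqrt(Om * (1 + z)**3 + 1 - Om)

def E_udm(z):                 # Eq. (6); radiation neglected for brevity
    a = 1.0 / (1 + z)
    return np.sqrt(0.5 * rho_lam + Om * (1 + z)**3 + rho_aux(a, rho_lam))

z_lss = 1100.0
chi_l = quad(lambda z: 1.0 / E_lcdm(z), 0.0, z_lss)[0]  # in units of c/H0 (LCDM)
chi_u = quad(lambda z: 1.0 / E_udm(z), 0.0, z_lss)[0]   # in units of c/H0 (UDM)

# Equal D_A(z_lss) requires chi_l / H0_lcdm = chi_u / H0_udm:
H0_lcdm = 67.4                                          # km/s/Mpc, assumed reference
H0_udm = H0_lcdm * chi_u / chi_l
print(f"H0(UDM) = {H0_udm:.2f} km/s/Mpc for beta = {beta}, a_t = {a_t:.3f}")
```

Because the DE-like component is mildly phantom for this parameter choice, \(E_{\rm UDM}(z)\) dips slightly below the \(\Lambda\)CDM rate at intermediate redshift, so matching the CMB distance pushes \(H_{0}\) upward, which is the mechanism exploited here.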
In agreement with the deviation of the Hubble rate in the slow transition approximation (18), Fig. 5 shows that the Hubble tension is alleviated for intermediate values of \(\beta\in[1,10]\), while presumably keeping a good agreement with the CMB observations.

Figure 5: Relative variation of the Hubble parameter for different values of \(\beta\) and \(a_{t}\) when the angular distance to the last scattering surface is fixed to match the \(\Lambda\)CDM scenario.

## III Statistical analysis

Following the results of the previous section, we restrict our analysis to a parameter space potentially solving the Hubble tension by imposing the flat priors \(\beta\in[0,10]\) and \(a_{t}\in[0,1]\). Although these ranges are chiefly justified from the point of view of the Hubble tension, they prevent large modifications of the matter component, see Figs. 4 and 5. This is crucial once we notice that large deviations in the matter field \(\rho_{\rm m}\) at early times significantly modify the evolution of cosmological perturbations. Since we do not address the evolution of the cosmological perturbations of the UDM field, we effectively treat the UDM model as a late-time modification of the \(\Lambda\)CDM model. Furthermore, a preliminary analysis showed that values of \(\beta>10\) are typically discarded by the data. We now use these assumptions to perform a Bayesian analysis of the UDM model considering cosmological probes of the background. Specifically, we consider SNIa data from the Pantheon catalog [85], a prior on their absolute magnitude \(M\) from Cepheids [9], and the CMB distance prior inferred from Planck data [86].

### CMB distance priors

At the background level, the positions of the CMB acoustic peaks constrain cosmological distances through the so-called CMB distance prior. Typically, such a prior is implemented via the baryon energy density \(\Omega_{\rm b0}h^{2}\), the spectral index \(n_{\rm s}\), the acoustic scale \(l_{\rm A}\), and the shift parameter \(R\): \[l_{\rm A} :=(1+z_{\star})\frac{\pi D_{\rm A}(z_{\star})}{r_{\rm s}(z_{\star})}\, \tag{29}\] \[R(z_{\star}) :=(1+z_{\star})\frac{D_{\rm A}(z_{\star})\sqrt{\Omega_{\rm m0}H_{0}^{2}}}{c}\, \tag{30}\] where \(z_{\star}\) is the decoupling redshift, \(D_{\rm A}\) is the angular diameter distance, and \(r_{\rm s}\) is the sound horizon.4 Here, we assume a flat FLRW background, therefore \(D_{\rm A}\) is \[D_{\rm A}(z)=\frac{c}{(1+z)H_{0}}\int_{0}^{z}\mathrm{d}z^{\prime}\frac{1}{E(z^{\prime})}\, \tag{31}\] where \(E(z)\equiv H(z)/H_{0}\) is the normalised Hubble rate.

Footnote 4: In the WMAP paper by Komatsu et al. [87], the authors argue that dark energy influences the distance scales and the growth of structures, though the sensitivity of the latter is limited. We show in Appendix C that the auxiliary fluid does not influence the CMB distance prior.

As mentioned before, we adopt the CMB distance prior inferred from Planck 2018. Specifically, we use the values and correlation matrix presented in Table 1 of Ref. [86] (\(w\)CDM model).5 As discussed in Ref. [86], the CMB distance prior should be used to constrain models that deviate from the \(\Lambda\)CDM model at late times, and which are expected to not significantly impact the peak structure of the CMB power spectrum.

Footnote 5: Note that although we do not deal with the matter power spectrum, the spectral index, \(n_{\rm s}\), is included to correctly account for correlations with \(R\), \(l_{\rm A}\), and \(\Omega_{\rm b0}h^{2}\).
In our case, this corresponds to a negligible contribution from \(\rho_{\rm aux}\) at early times, especially from those terms that are proportional to \((1+z)^{3}\). Although this is _a priori_ guaranteed for our choice of priors, in Appendix C we investigate whether the use of the CMB distance prior is consistent with the analysis. We address in particular the potential changes that \(\rho_{\rm aux}\) induces in the definition of the shift parameter.

### SNIa

In order to constrain late-time deviations from \(\Lambda\)CDM of the expansion rate when considering \(\beta\in[0,10]\), we use the cosmological distances provided by standard candles. In particular, we use the Pantheon SNIa compilation [85]. Standard candles measure the apparent magnitude \(m\), which constrains the background dynamics of the Universe through the relation \[m(z)=5\log\frac{D_{\rm L}(z)}{1\rm Mpc}+25+M\,, \tag{32}\] with \(D_{\rm L}\) the luminosity distance, and \(M\) the absolute magnitude of SNIa.

### Absolute Magnitude of SNIa

In order to offer a calibration of the apparent magnitude of SNIa, we use the Gaussian prior on \(M\) \[\chi^{2}=\frac{(M-M_{\rm R21})^{2}}{\sigma_{M_{\rm R21}}^{2}}\, \tag{33}\] whose use is equivalent to the local determination of \(H_{0}\) by SH0ES [88]. Indeed, as discussed in [9], the use of a prior on \(M\) instead of a prior on \(H_{0}\) provides several advantages. For instance, it accounts for the discrepancy between the absolute magnitude inferred from the CMB distances and that from the local Cepheids, and it also avoids the double counting of SNIa in the range \(0.023<z<0.15\). Lastly, it is important to note that our analysis does not include BAO data. Although standard rulers provided by BAO strongly constrain late-time modifications of the \(\Lambda\)CDM model, the interpretation of the clustering of matter and the formation of BAO is incomplete without understanding the evolution of cosmological perturbations. On the other hand, as shown in the following section, the combination of the CMB priors and the SNIa already provides stringent constraints on the parameter space of the UDM model. In order to understand the effect of the local determination of the Hubble parameter on the results, in the following we perform the Bayesian analysis considering two cases: one with and one without the prior on \(M\). We implement the background evolution of the UDM model in CLASS [89; 90], and we perform the MCMC sampling with MontePython [91; 92]. We produce most of the plots of this section using GetDist [93]. The modified CLASS version can be accessed at github.com/EFrion/class_public. The MCMC analysis uses the usual \(\Lambda\)CDM parameters \(\{\omega_{\rm b0},\omega_{\rm cdm0},n_{\rm s},h,M\}\) with improper flat priors, plus the two parameters \(\{\beta,a_{\rm t}\}\) of the UDM model, whose flat priors are shown in Table 1.

## IV Results

Given our assumptions, we are in a position to unambiguously constrain the UDM model. In the presentation of these results, we denote the density of the effective dark energy component today as \(\Omega_{\rm DE0}\), whether it comes from vacuum (in the \(\Lambda\)CDM case) or from the component \(\rho_{\rm DE}=\rho_{\varphi}-\rho_{\rm m0}a^{-3}\) (in the UDM case).
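Before presenting the constraints, the sketch below shows schematically how the ingredients of Eqs. (32) and (33) enter a \(\chi^{2}\). The stand-in background \(E(z)\), the toy supernova entries, and the calibration numbers \(M_{\rm R21}\approx-19.244\), \(\sigma_{M_{\rm R21}}\approx 0.037\) (values we recall from the literature) are assumptions for illustration, not the actual Pantheon likelihood used in the MontePython runs.

```python
import numpy as np
from scipy.integrate import quad

c = 299792.458                     # km/s
H0, Om, M = 70.0, 0.3, -19.3       # illustrative parameter point
M_R21, sig_M = -19.244, 0.037      # assumed calibration values (see lead-in)

def E(z):
    # Stand-in flat-LCDM background; the UDM E(z) from Eq. (6) would enter here.
    return np.sqrt(Om * (1 + z)**3 + 1 - Om)

def D_L(z):
    chi = quad(lambda zp: 1.0 / E(zp), 0.0, z)[0] * c / H0   # Mpc, flat universe
    return (1 + z) * chi

def m_model(z):
    return 5.0 * np.log10(D_L(z)) + 25.0 + M                 # Eq. (32)

# Toy supernova entries (placeholders, not Pantheon data):
z_sn = np.array([0.05, 0.2, 0.6])
m_obs = np.array([17.4, 20.7, 23.5])
sig_m = np.array([0.15, 0.15, 0.15])

chi2_sn = np.sum(((m_obs - [m_model(z) for z in z_sn]) / sig_m) ** 2)
chi2_M = ((M - M_R21) / sig_M) ** 2                          # Eq. (33)
print(f"chi2_SN = {chi2_sn:.2f},  chi2_M = {chi2_M:.2f}")
```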
**UDM vs \(\Lambda\)CDM** - In Fig. 6, we present the constraints for the set of variables \(\{H_{0},\Omega_{\rm DE0},\Omega_{\rm b0},M\}\) with a prior on the absolute magnitude \(M\).

\begin{table} \begin{tabular}{l l} Parameter & Prior \\ \hline \hline \(\mathbf{\beta_{udm}}\) & \(\left[10^{-3},10\right]\) \\ \(\mathbf{a_{\rm t}}\) & \(\left[0,1\right]\) \\ \hline \hline \end{tabular} \end{table} Table 1: UDM parameters used in the MCMC simulation.

Figure 6: Marginalized constraints (68% and 95% credible regions) of the UDM and \(\Lambda\)CDM models from the Planck 2018 CMB prior, Pantheon supernovae and the local prior on the supernova absolute magnitude \(M\).

\begin{table} \begin{tabular}{l c c c c} Parameter & UDM (68\%) & \(\Lambda\)CDM (68\%) & UDM no \(M\) (68\%) & \(\Lambda\)CDM no \(M\) (68\%) \\ \hline \hline \(\mathbf{\beta_{\mathrm{udm}}}\) & \(0.93^{+0.38}_{-0.62}\) & - & \(<0.862\) & - \\ \(\mathbf{a_{\mathrm{t}}}\) & \(0.26^{+0.12}_{-0.21}\) & - & \(0.42\pm 0.22\) & - \\ \(\mathbf{10^{-2}\omega_{\mathrm{b}}}\) & \(2.251\pm 0.015\) & \(2.259\pm 0.015\) & \(2.238\pm 0.015\) & \(2.241\pm 0.015\) \\ \(\mathbf{M}\) [mag] & \(-19.384\pm 0.018\) & \(-19.400\pm 0.015\) & \(-19.434^{+0.037}_{-0.014}\) & \(-19.427\pm 0.016\) \\ \(\mathbf{H_{0}}\) [km/s/Mpc] & \(69.64\pm 0.88\) & \(68.34\pm 0.57\) & \(67.6^{+1.3}_{-0.82}\) & \(67.35\pm 0.60\) \\ \(\mathbf{\Omega_{\mathrm{DE0}}}\) & \(0.7084\pm 0.0085\) & \(0.6999\pm 0.0074\) & \(0.684^{+0.019}_{-0.0067}\) & \(0.6868\pm 0.0083\) \\ \hline \hline \end{tabular} \end{table} Table 2: Marginalized constraints of the UDM and \(\Lambda\)CDM models.

Figure 7: Marginalized constraints (68% and 95% credible regions) of the UDM and \(\Lambda\)CDM models from the Planck 2018 CMB prior and Pantheon supernovae.

Figure 8: Marginalized constraints (68% and 95% credible regions) of the UDM model from the Planck 2018 CMB prior and Pantheon supernovae, with and without the local prior on the supernova magnitude \(M\).

In the UDM scenario, the dark energy contribution from the auxiliary fluid is typically greater than the vacuum energy density in \(\Lambda\)CDM, which is reflected in a slightly greater value for \(H_{0}\). From Table 2, we can read the constraint \(H_{0}=69.64\pm 0.88\) in the UDM case, which is indeed larger than in the \(\Lambda\)CDM case, for which it is \(H_{0}=68.3^{+1.1}_{-1.1}\), though the difference is modest. Additionally, Fig. 9 shows that the UDM model constraints lead to a cosmic expansion overall consistent with the \(\Lambda\)CDM model that only allows a 2% increase in the value of the Hubble constant.

**UDM vs \(\Lambda\)CDM (no \(M\))** - Fig. 7 shows that, in the absence of the prior on \(M\), the constraints on the UDM scenario loosen up while the constraints on \(\Lambda\)CDM are still tight. Since the SNIa are effectively calibrated by the CMB distance, the UDM model reproduces a cosmic expansion consistent with \(\Lambda\)CDM and the best-fit values of the UDM model parameters are very close to those of \(\Lambda\)CDM. Additionally, Table 2 shows that the constraints on the UDM scenario lead to an almost negligible increase in \(H_{0}\) together with a significant increase in the uncertainties. This exemplifies that the \(M\) prior helps in constraining more precisely the dark energy content in alternative scenarios to the standard cosmological model.
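A quick arithmetic check of the alleviation can be made from the \(H_{0}\) entries of Table 2. The SH0ES value \(H_{0}=73.04\pm 1.04\) km/s/Mpc is not quoted in this paper and is assumed here from Ref. [88]; treating the two measurements as independent Gaussians is itself a simplification.

```python
import numpy as np

H0_sh0es, sig_sh0es = 73.04, 1.04   # assumed SH0ES value (see lead-in)
models = {"LCDM": (68.34, 0.57), "UDM": (69.64, 0.88)}  # Table 2, with M prior

for name, (h0, sig) in models.items():
    n_sigma = (H0_sh0es - h0) / np.hypot(sig_sh0es, sig)
    print(f"{name}: H0 = {h0:.2f} +/- {sig:.2f}  ->  tension ~ {n_sigma:.1f} sigma")
# The gap shrinks from ~4.0 sigma (LCDM) to ~2.5 sigma (UDM), i.e. the
# ~1.5 sigma alleviation quoted below.
```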
**UDM vs UDM no \(M\)** - The effect of the prior on \(M\) is even more striking when comparing the Bayesian analysis for the UDM model with and without the prior on \(M\). Figure 8 gives clear visual evidence that the prior helps to constrain the UDM model, and in particular the two parameters \(\beta\) and \(a_{t}\) specific to the EoS transition. We see that the prior enhances the best-fit values of \(\beta\), \(H_{0}\), \(\Omega_{\rm DE0}\), and \(M\), while decreasing \(a_{t}\). In both cases, the value for \(\beta\) is quite small (\(<2\) in the \(1\sigma\) region), in favour of a smooth transition. The prior favours an earlier transition redshift, which explains the higher best-fit value of \(H_{0}\) (\(69.64\pm 0.88\) against \(67.6^{+1.3}_{-0.82}\) without the prior). Finally, both analyses lead to a \(\sim 1.5\sigma\) decrease in the Hubble tension, which is mainly driven by the increase in the uncertainties. Table 3 gives the best-fit values together with the \(\chi^{2}\) for each individual experiment and their combined sum. The relative difference in the Hubble rate with respect to \(\Lambda\)CDM and the equation of state of the dark energy-like component are shown in Appendix A. In Fig. 9, we compare the evolution of the Hubble rate extracted from the Bayesian analysis for the UDM and \(\Lambda\)CDM models. The bottom panel includes the prior on \(M\), the top panel does not. In both panels, the vertical dashed line indicates the preferred transition redshift, which is slightly higher with the prior on \(M\), \(z_{t}=2.88\) vs \(z_{t}=1.36\) without it. In Fig. 10, the top panels show how the Hubble rate varies when we keep the CMB priors \(R\) and \(l_{\rm A}\), as well as the physical baryon density, fixed. The panels are very similar to the theoretical prediction from Fig. 5, and confirm that \(\beta\in[1,10]\) is required to alleviate the tension. The Bayesian analyses in Figs. 6, 7, and 8 provide the stronger constraints \(\beta=0.93^{+0.38}_{-0.62}\) with the \(M\) prior and \(\beta<0.862\) without, which is explained when considering the expected difference in supernovae magnitude. The bottom panels show this for the two models. The green dots with error bars are Pantheon measurements. In these panels, the UDM model with \(\beta\approx 10\) (pink line) is inconsistent with observations, while the slow and fast transition regimes are consistent. We complement the previous results with two model selection criteria, namely the Akaike information criterion (AIC) [94] and the Bayesian information criterion (BIC) [95]. They are defined as \[\mathrm{AIC} =\chi^{2}_{\rm min}+2k\;, \tag{34}\] \[\mathrm{BIC} =\chi^{2}_{\rm min}+k\ln N\;, \tag{35}\] where \(k\) is the number of parameters of a model and \(N\) is the number of data points used to derive the probabilities of the parameters.
Summing the data from the Pantheon catalog (1048) and the CMB priors (3) results in \(\ln N=7\), regardless of the prior on \(M\). We compare the UDM and \(\Lambda\)CDM models with the differences [96] \[\Delta AIC =\Delta\chi^{2}+2\Delta k\;, \tag{36}\] \[\Delta BIC =\Delta\chi^{2}+\Delta k\ln N\;, \tag{37}\] in which a positive value means the \(\Lambda\)CDM model is favoured over the UDM model. The results reported in Table 4 are positive, both with and without the prior on \(M\). Therefore, we conclude that \(\Lambda\)CDM is favoured. According to the qualitative interpretations of the criteria found in Tables VI and VII of Ref. [96], the empirical support of \(\Lambda\)CDM is substantial (\(\Delta\)AIC \(<2\)), and the evidence against the UDM model is very strong (\(\Delta\)BIC \(>10\)).

\begin{table} \begin{tabular}{l c c c c} Parameter & UDM & \(\Lambda\)CDM & UDM no \(M\) & \(\Lambda\)CDM no \(M\) \\ \hline \hline \(\mathbf{\beta_{\rm udm}}\) & 0.80 & - & 0.339 & - \\ \(\mathbf{a_{\rm t}}\) & 0.14 & - & 0.417 & - \\ \(\mathbf{10^{-2}\omega_{\rm b}}\) & 2.251 & 2.258 & 2.239 & 2.242 \\ \(\mathbf{M}\) [mag] & \(-19.383\) & \(-19.40\) & \(-19.422\) & \(-19.427\) \\ \(\mathbf{H_{0}}\) [km/s/Mpc] & 69.60 & 68.26 & 67.82 & 67.38 \\ \(\mathbf{\Omega_{\rm DE0}}\) & 0.7088 & 0.6990 & 0.6900 & 0.6873 \\ \hline \hline \(\mathbf{\chi^{2}_{\rm Pant}}\) & 1028.2 & 1025.9 & 1025.6 & 1026.0 \\ \(\mathbf{\chi^{2}_{\rm cmb}}\) & \(9.9\times 10^{-1}\) & 2.9 & \(3.9\times 10^{-2}\) & \(9.5\times 10^{-2}\) \\ \(\mathbf{\chi^{2}_{\rm M}}\) & 14.5 & 18.0 & & \\ \(\mathbf{\chi^{2}_{\rm tot}}\) & 1043.6 & 1046.8 & 1025.7 & 1026.1 \\ \hline \hline \end{tabular} \end{table} Table 3: Best fit of the UDM and \(\Lambda\)CDM models. We display below their \(\chi^{2}\) for each individual experiment and their combined sum.

## V Discussion

We performed in this work an analysis of a UDM model that also acts as dark energy at late times. Two goals were achieved: 1) we analysed the pressure profile (3) chosen in Ref. [68] with a Bayesian approach using CLASS and MontePython for the first time, and 2) we assessed whether this profile is capable of alleviating the Hubble tension. This particular profile is similar to a late-time modification of the \(\Lambda\)CDM model in which the transition from the matter regime to the dark energy regime is parametrised by two variables, \(\beta\) and \(a_{t}\). In Sec. II, we argued that the product \(\beta(a^{3}-a_{t}^{3})\) controls the behaviour of the transition. If the product is very small, the transition happens smoothly and, conversely, it happens quickly if the product is large. We find that supernovae data constrain \(\beta\) in the range \(0<\beta<2\), though \(a_{t}\) is less constrained. The correlation of \(a_{t}\) with independent variables such as \(H_{0}\), the dark energy density today \(\Omega_{\rm DE0}\) or the supernovae absolute magnitude \(M\) gives \(a_{t}=0.42\pm 0.22\) at \(1\sigma\), or \(a_{t}=0.26^{+0.12}_{-0.21}\) when a prior on \(M\) is assumed. This implies that the transition has to occur at least at a redshift \(z>0.5\). The posteriors on \(a_{t}\) hint at a transition redshift \(z_{t}=1.38\) when no prior is assumed, and an earlier transition, at \(z_{t}=2.85\), with the prior. In light of the preferred values of \(\beta\) and \(a_{t}\), we conclude that the data favours a smooth transition (\(\beta(a^{3}-a_{t}^{3})<1\)) over a quick one.
Although the same analyses demonstrate that the UDM model can decrease the Hubble tension by \(\sim 1.5\sigma\), this decrease is mainly produced by an increase in the uncertainties caused by the correlations of the UDM parameters with \(H_{0}\), rather than by an increase in \(H_{0}\). Complementary to this, the AIC (BIC) information criterion penalizes the UDM model and points to substantial (strong) empirical support for the \(\Lambda\)CDM model. Thus, overall, even in the most promising case, _i.e., assuming a prior on \(M\)_, the UDM model does not constitute an advantageous solution to the Hubble tension. Our results partially agree with the claims made in Ref. [97], in which the authors argue that a DE model must possess two features in order to potentially solve the Hubble tension. First, the equation of state must cross the phantom line \(w_{\rm DE}<-1\), and, second, the integrated dark energy density must be smaller than that of a cosmological constant in \(\Lambda\)CDM. As we discuss in Appendix B, the UDM model satisfies both requirements; however, both lower and higher values of \(H_{0}\) are allowed, and the typical increase in \(H_{0}\) is not enough to explain away the tension. On the other hand, this does not necessarily mean that this UDM model is unable to solve the tension. Indeed, we only consider the background evolution of the model in this analysis, even though perturbations can also affect the equation of state of the auxiliary field and potentially have an impact on \(H_{0}\). We will investigate this possibility in a future paper.

Figure 9: Ratio between the Hubble parameter obtained through cosmological constraints and the expansion rate predicted by the \(\Lambda\)CDM baseline, from Table 2. The bottom panel shows that, when \(M\) is included in the analysis, only a slight deviation from the \(\Lambda\)CDM regime is allowed by the data, leading to a 2% increase in the Hubble constant. On the other hand, as shown by the top panel, when \(M\) is not assumed in the analysis, the UDM model displays a cosmic expansion consistent with the \(\Lambda\)CDM model and even smaller deviations are allowed by the data, although, as expected, uncertainties are larger.

###### Acknowledgements.

EF thanks the Helsinki Institute of Physics for their hospitality. The numerical analysis was done using the Puck cluster of the University of Jyvaskyla. DC thanks the Robert E. Young Origins of the Universe Chair fund for its generous support. DB acknowledges support from the COSMOS network (www.cosmosnet.it) through the ASI (Italian Space Agency) Grants 2016-24-H.0, 2016-24-H.1-2018 and 2020-9-HH.0. VM thanks CNPq (Brazil, 307969/2022-3) and FAPES (Brazil, TO 365/2022, 712/2022, 976/2022, 1020/2022, 1081/2022) for partial financial support. LG acknowledges support from the Australian Government through the Australian Research Council Laureate Fellowship grant FL180100168.

Figure 10: Top panels: relative variation of the Hubble rate when the quantities related to the CMB priors are assumed to be fixed. Bottom panels: theoretical (solid lines) and Pantheon measurements (green dots) of the difference in the apparent magnitude of SNIa. The UDM model for \(\beta=10\), the most promising case for solving the tension, is inconsistent with observations.
\begin{table} \begin{tabular}{c c c} Criterion & With prior & Without prior \\ \hline \hline \(\mathbf{\Delta AIC}\) & 0.8 & 3.6 \\ \(\mathbf{\Delta BIC}\) & 10.8 & 13.6 \\ \end{tabular} \end{table} Table 4: Difference of the Akaike (AIC) and Bayesian (BIC) information criteria between the UDM and \(\Lambda\)CDM models. The selection criteria favour the \(\Lambda\)CDM model, regardless of the prior on \(M\).
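The numbers in Table 4 follow directly from the \(\chi^{2}\) totals of Table 3. A minimal sketch in plain Python (the inputs are read off the tables above; \(\Delta k=2\) counts the extra UDM parameters \(\beta_{\rm udm}\) and \(a_{\rm t}\)):

```python
import math

# chi^2 totals from Table 3 (UDM vs. LambdaCDM, with and without a prior on M)
chi2 = {"with prior":    {"UDM": 1043.6, "LCDM": 1046.8},
        "without prior": {"UDM": 1025.7, "LCDM": 1026.1}}

dk  = 2            # UDM adds beta_udm and a_t
N   = 1048 + 3     # Pantheon supernovae + CMB priors
lnN = math.log(N)  # ~6.96, quoted as ln N = 7 in the text

for case, c in chi2.items():
    dchi2 = c["UDM"] - c["LCDM"]
    print(case, round(dchi2 + 2 * dk, 1), round(dchi2 + dk * lnN, 1))
# with prior    0.8 10.7   (Table 4: 0.8, 10.8 -- the paper rounds ln N to 7)
# without prior 3.6 13.5   (Table 4: 3.6, 13.6)
```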
2307.16133
Universal approach to deterministic spatial search via alternating quantum walks
Spatial search is an important problem in quantum computation, which aims to find a marked vertex on a graph. We propose a novel approach for designing deterministic quantum search algorithms on a variety of graphs via alternating quantum walks. Our approach is universal because it does not require an instance-specific analysis for different graphs. We highlight the flexibility of our approach by proving that for Johnson graphs, rook graphs, complete-square graphs and complete bipartite graphs, our quantum algorithms can find the marked vertex with $100\%$ success probability and achieve quadratic speedups over classical algorithms. This not only gives an alternative succinct way to prove the existing results, but also leads to new interesting findings on more general graphs.
Qingwen Wang, Ying Jiang, Shiguang Feng, Lvzhou Li
2023-07-30T05:14:19Z
http://arxiv.org/abs/2307.16133v4
# Universal approach to deterministic spatial search via alternating quantum walks ###### Abstract Spatial search is an important problem in quantum computation, which aims to find a marked vertex on a graph. We propose a novel approach for designing deterministic quantum search algorithms on a variety of graphs via alternating quantum walks. Our approach is universal because it does not require an instance-specific analysis for different graphs. We highlight the flexibility of our approach by proving that for Johnson graphs, rook graphs, complete-square graphs and complete bipartite graphs, our quantum algorithms can find the marked vertex with 100% success probability and achieve quadratic speedups over classical algorithms. This not only gives an alternative succinct way to prove the existing results, but also leads to new interesting findings on more general graphs. _Introduction_.-- The continuous-time quantum walk (CTQW) was introduced by Farhi and Gutmann [1] in 1998, and has been one of the key components in quantum computation [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. In 2004, Childs and Goldstone [14] presented an algorithmic framework using CTQW to solve the spatial search problem which aims to find an unknown marked vertex on an underlying graph of specified topology. They showed that the algorithm has \(O(\sqrt{N})\) searching time on complete graphs, hypercubes, and \(d\)-dimensional periodic lattices for \(d>4\), where \(N\) is the number of vertices in the graph. In this framework, a Hamiltonian \(H\) is constructed using the adjacency matrix of the graph and information about the location of the marked vertex. The algorithm is then to make a quantum system evolve \(T\) time from an initial state under the Hamiltonian \(H\), where \(T\) can be set arbitrarily. Since then, many kinds of graphs, e.g., strong regular graphs [15], complete bipartite graphs [16], balanced trees [8], and Johnson graphs [17], have been studied in this framework. All of these graphs can admit a quadratic quantum speedup by using CTQW. Specially, it is worth mentioning that an exponential algorithmic speedup can be achieved via CTQW for the welded tree problem [18]. Recently, an interesting framework called _alternating phase-walk_ for spatial search was proposed and applied on a variety of graphs [19; 20; 21]. In the framework, CTQWs and marked-vertex phase shifts are alternately performed. More specifically, two Hamiltonians are constructed: One uses the Laplacian matrix or adjacency matrix of the graph and the other uses the information of the marked vertex. Then the quantum system evolves alternately under the two Hamiltonians, which is similar to the quantum approximate optimization algorithm (QAOA). Marsh and Wang [19] first utilized alternating phase-walks to design a deterministic quantum algorithm for spatial search on the class of complete identity interdependent networks (CIINs), which achieves quadratic speedups over classical algorithms. Note that a CINN is equivalent to an \(n\times 2\) rook graph, and the general case of an \(n\times m\) rook graph was studied by Chakraborty et al. [22] in the context of the framework proposed by Childs and Goldstone [14]. Later, Marsh and Wang [20] presented a method based on alternating phase-walks for designing quantum spatial search algorithms on periodic graphs, and by applying this method they obtained quantum search algorithms with quadratic speedups on Johnson graphs \(J(n,2)\), rook graphs and complete-square graphs. 
However, it is worth pointing out that none of the quantum search algorithms given in [20] is deterministic; in other words, they all have a certain probability of failure. In fact, deterministic spatial search algorithms not only mean that we can improve the theoretical success probability to 100%, but also imply a kind of perfect state transfer between two vertices on graphs [23; 24]. Hence, this led Ref. [20] to propose the open problem "another compelling direction for future research is making the algorithm deterministic". Inspired by the above work, Qu et al. [21] proposed a deterministic quantum spatial search on star graphs via the alternating phase-walk framework, which achieves the well-known lower bound of Grover's search. Again, how to fully characterize the class of graphs that permit deterministic search was proposed as a topic of future study in [21]. In this article, we present a novel and universal approach based on alternating phase-walks to design deterministic quantum spatial search algorithms on a variety of graphs. Taking advantage of the approach, we obtain deterministic quantum search algorithms on Johnson graphs \(J(n,k)\) for any fixed \(k\), rook graphs, complete-square graphs, and complete bipartite graphs \(K(N_{1},N_{2})\), respectively. All of these algorithms can find the marked vertex with 100% success probability theoretically and achieve quadratic speedups over classical ones. Our approach is universal because it does not require an instance-specific analysis for different graphs. The results obtained in the paper not only subsume the consequences of [19; 20; 21] but also generalize to more graphs. On one hand, the authors of [19] only studied a subset of rook graphs, and the star graph considered in [21] is obviously a special case of complete bipartite graphs. Hence, their results can be obtained as direct conclusions of this work. On the other hand, in contrast to the algorithms with errors in [20], our algorithms are fully deterministic. Also, we significantly simplify the proof and generalize from Johnson graphs \(J(n,2)\) to \(J(n,k)\) for any fixed \(k\). The main contribution of this paper is that we introduce a more succinct and efficient formalism of alternating phase-walks, based on which a universal framework for deterministic quantum spatial search on a large family of graph classes is proposed. This contribution should be of wide interest. It not only solves the open problem stated in [20], "another compelling direction for future research is making the algorithm deterministic", but may also inspire techniques that are applicable to more graph classes. _Preliminaries._-- For quantum search in an unstructured database, one frequently-used way is to perform alternately the two unitary operators \[U_{1}(\alpha) =I-(1-e^{-i\alpha})|s\rangle\langle s|,\] \[U_{2}(\beta) =I-(1-e^{-i\beta})|m\rangle\langle m|,\] where \(\alpha\) and \(\beta\) are real numbers, on the initial state \(|s\rangle\) to find the marked element \(|m\rangle\). In [25; 26], the authors showed that if the value of \(|\langle m|s\rangle|\) is known, then we can choose \(\alpha=\pi\) and appropriate values for \(\beta\) to carry out the search deterministically.
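This existence claim is stated formally as Lemma 1 below. As a concrete, purely numerical illustration of it -- not the explicit construction of [25; 26] -- one can fix \(\alpha=\pi\) and search for phases \(\beta_{1},\ldots,\beta_{p}\) that drive \(|s\rangle\) exactly onto \(|m\rangle\); a minimal sketch, where the database size \(N\), the round number \(p\) and the optimizer settings are illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

N = 16
s = np.full(N, 1 / np.sqrt(N))             # uniform initial state |s>
U1 = np.eye(N) - 2 * np.outer(s, s)        # U1(pi) = I - 2|s><s|

def apply_U2(beta, psi):                   # U2(beta) = I - (1 - e^{-i beta})|m><m|,
    psi = psi.copy()                       # with the marked element at index 0
    psi[0] *= np.exp(-1j * beta)
    return psi

def failure(betas):                        # 1 - success probability after p rounds
    psi = s.astype(complex)
    for b in betas:
        psi = U1 @ apply_U2(b, psi)
    return 1 - abs(psi[0]) ** 2

p = 4                                      # O(1/|<m|s>|) = O(sqrt(N)) rounds
best = min((minimize(failure, np.random.uniform(0, 2 * np.pi, p), method="Nelder-Mead")
            for _ in range(20)), key=lambda r: r.fun)
print(best.fun)                            # should be ~0: success probability reaches 1
```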
**Lemma 1** ([25; 26]).: _Given two unitary operators \(U_{1}(\pi)\), \(U_{2}(\beta)\), and a positive number \(|\langle m|s\rangle|\), where \(U_{1}(\pi)=I-2|s\rangle\langle s|\), \(U_{2}(\beta)=I-(1-e^{-i\beta})|m\rangle\langle m|\), we can find an integer \(p\in O(\frac{1}{|\langle m|s\rangle|})\), and real numbers \(\gamma\), \(\beta_{1},\ldots,\beta_{p}\) such that_ \[|m\rangle=e^{-i\gamma}\prod_{k=1}^{p}U_{1}(\pi)U_{2}(\beta_{k})|s\rangle.\] Let \(G=(V,E)\) be a graph where \(V\) is the vertex set and \(E\) is the edge set. Denote \(\mathcal{H}=span\{|v\rangle:v\in V\}\). The continuous-time quantum walk on \(G\) starts from the initial state \(|\psi(0)\rangle\in\mathcal{H}\) and evolves by the following Schrodinger equation: \[i\cdot\frac{d\langle v|\psi(t)\rangle}{dt}=\sum_{u\in V}\langle v|H|u\rangle \langle u|\psi(t)\rangle, \tag{1}\] where \(|\psi(t)\rangle\) denotes the state at time \(t\), \(H\) is a Hamiltonian satisfying \(\langle v|H|u\rangle=0\) when \(v\) and \(u\) are not adjacent in \(G\). This equation means that at time \(t\), the change of the amplitude of \(|v\rangle\) is only related to the amplitudes of its adjacent vertices. From (1), we see that the continuous-time quantum walk over a graph \(G\) at time \(t\) can be defined by the unitary transformation \(U=e^{-iHt}\). One choice of \(H\) is the Laplacian matrix \(L=D-A\), where \(D\) is the degree matrix (a diagonal matrix with \(D_{jj}=deg(j)\)) and \(A\) is the adjacency matrix of \(G\). The other choice is to let \(H=A\). In this article, when we mention a graph, it always refers to a simple undirected connected graph, where "simple" means the graph has no loops and has no multiple edges between any two vertices. Below we give some properties of the Laplacian matrix \(L\). **Lemma 2**.: _Let \(G\) be a graph with Laplacian matrix \(L=D-A\), where \(D\) is the degree matrix and \(A\) is the adjacency matrix of \(G\). Then we have the follow properties. (i) \(0\) is a simple1 eigenvalue of \(L\), and the corresponding eigenvector is \(r=(1,1,\ldots,1)^{T}\). (ii) Given the spectral decomposition \(L=\sum_{i=1}^{N}\lambda_{i}|\eta_{i}\rangle\langle\eta_{i}|\), \(\langle v|\eta_{i}\rangle\) is a real number for each vertex \(v\) of \(G\) and each \(i\in\{1,2,\ldots,N\}\)._ Footnote 1: An eigenvalue is said to be simple if its algebraic multiplicity is \(1\). Proof.: The property (i) is a direct conclusion in [27]. Since \(D\) and \(A\) are both real and symmetric, \(L\) is real and symmetric and its eigenvectors must be real vectors. Thus, property (ii) holds. _Search framework._-- Here we will present a universal approach that can be used to design deterministic quantum search algorithms on a variety of graphs. Let \(G=(V,E)\) be a graph with Laplacian matrix \(L\) and the marked vertex \(m\). Our aim is to find the location of \(m\) via performing alternately the continuous-time quantum walk operator \(e^{-iLt}\) and the search operator \(e^{-i\theta|m\rangle\langle m|}\) on the initial state \(|s\rangle\). The idea behind our approach is to prove that there exist an integer \(p\) and real numbers \(\gamma\), \(\theta_{k}\), \(t_{k}\) (\(k\in\{1,2,\ldots,p\}\)) such that the following equation holds: \[|m\rangle=e^{-i\gamma}\prod_{k=1}^{p}e^{-i\theta_{k}|m\rangle\langle m|}e^{- iLt_{k}}|s\rangle.\] Let \(S\) be a finite set of integers. \(gcd(S)\) denotes the greatest common divisor of all nonzero elements in \(S\). If there is no nonzero element in \(S\), then we let \(gcd(S)=1\). 
**Definition 1**.: _Let \(M\) be an \(N\times N\) Hermitian matrix with spectral decomposition \(M=\sum_{i=1}^{N}\lambda_{i}|\eta_{i}\rangle\langle\eta_{i}|\), where \(\lambda_{1},\ldots,\lambda_{N}\) are integers, at least one of which is \(0\)._

_(i) Define \(\Lambda_{0}=\{\lambda_{1},\ldots,\lambda_{N}\}\). For \(k\geq 0\), we recursively define \(\Lambda_{k+1}\) and \(\overline{\Lambda}_{k+1}\) as follows_ \[\Lambda_{k+1}=\{\lambda\in\Lambda_{k}\mid e^{-i\lambda\frac{\pi}{gcd(\Lambda_{k})}}=1\},\] _and_ \[\overline{\Lambda}_{k+1}=\{\lambda\in\Lambda_{k}\mid e^{-i\lambda\frac{\pi}{gcd(\Lambda_{k})}}=-1\}.\] _We use \(d_{M}\) to denote the least \(k\) such that \(\Lambda_{k}\) contains only 0._

_(ii) Let \(|m\rangle=\sum_{i=1}^{N}\alpha_{i}|\eta_{i}\rangle\) be a vector where each \(\alpha_{i}\) is a real number such that_ \[(\sum_{\lambda_{i}\in\Lambda_{k}}\alpha_{i}^{2})(\sum_{\lambda_{i}\in\overline{\Lambda}_{k}}\alpha_{i}^{2})\neq 0\] _for any \(k\in\{1,\ldots,d_{M}\}\). We let \(|w_{0}\rangle=|m\rangle\), and for \(k\in\{1,\ldots,d_{M}\}\) define_ \[|w_{k}\rangle=\frac{1}{\sqrt{\sum_{\lambda_{i}\in\Lambda_{k}}\alpha_{i}^{2}}}\sum_{\lambda_{i}\in\Lambda_{k}}\alpha_{i}|\eta_{i}\rangle\] _and_ \[|\overline{w}_{k}\rangle=\frac{1}{\sqrt{\sum_{\lambda_{i}\in\overline{\Lambda}_{k}}\alpha_{i}^{2}}}\sum_{\lambda_{i}\in\overline{\Lambda}_{k}}\alpha_{i}|\eta_{i}\rangle.\]

**Example 1**.: _Given the following \(6\times 6\) Hermitian matrix_ \[\begin{split} M&=0|\eta_{1}\rangle\langle\eta_{1}|+1|\eta_{2}\rangle\langle\eta_{2}|+3|\eta_{3}\rangle\langle\eta_{3}|+6|\eta_{4}\rangle\langle\eta_{4}|\\ &+64|\eta_{5}\rangle\langle\eta_{5}|+64|\eta_{6}\rangle\langle\eta_{6}|,\end{split}\] _the process of computing \(d_{M}\), \(\Lambda_{0}\), \(|w_{0}\rangle\), and \(\Lambda_{k}\), \(\overline{\Lambda}_{k}\), \(|w_{k}\rangle\), \(|\overline{w}_{k}\rangle\) for \(k\in\{1,2,\ldots,d_{M}\}\) with \(|m\rangle=\sum_{i=1}^{6}\alpha_{i}|\eta_{i}\rangle\), where \(\alpha_{1},\ldots,\alpha_{6}\) are not zero, is shown in Fig. 1._

In the following, we consider a restricted class of graphs which are vertex transitive and whose Laplacian matrices have only integer eigenvalues. A graph \(G\) is said to be vertex transitive if for any two vertices \(v_{1}\) and \(v_{2}\) of \(G\), there is an automorphism \(f:G\to G\) such that \(f(v_{1})=v_{2}\). Now let \(G\) be such a graph with \(N\) vertices and Laplacian matrix \(L\). Assume the spectral decomposition is \(L=\sum_{i=1}^{N}\lambda_{i}|\eta_{i}\rangle\langle\eta_{i}|\). By Lemma 2, one can see that \(L\) satisfies the condition in Definition 1, and thus we can define \(d_{L}\), \(\Lambda_{0}\), and \(\Lambda_{k}\), \(\overline{\Lambda}_{k}\) (\(k\in\{1,2,\ldots,d_{L}\}\)) as in (i) of Definition 1. In the search space spanned by \(\{|v_{1}\rangle,\ldots,|v_{N}\rangle\}\), \(|\eta_{1}\rangle,\ldots,|\eta_{N}\rangle\) constitute an orthonormal basis, and the marked vertex \(|m\rangle\) can be represented as \(|m\rangle=\sum_{i=1}^{N}\alpha_{i}|\eta_{i}\rangle\), where each \(\alpha_{i}=\langle\eta_{i}|m\rangle\) is a real number.
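The recursion in Definition 1 is elementary integer arithmetic. A short sketch that reproduces Example 1 (for integer eigenvalues, the conditions \(e^{-i\lambda\pi/gcd(\Lambda_{k})}=\pm 1\) are equivalent to \(\lambda\equiv 0\) or \(\lambda\equiv gcd(\Lambda_{k})\) modulo \(2\,gcd(\Lambda_{k})\), which is what the code tests):

```python
from math import gcd
from functools import reduce

def gcd_nonzero(S):
    nz = [abs(x) for x in S if x != 0]
    return reduce(gcd, nz) if nz else 1

def lambda_chain(eigs):
    """Compute (Lambda_k, bar Lambda_k) for k = 1, ..., d_M as in Definition 1.
    Assumes the recursion terminates, as it does for the Laplacians used here."""
    Lam, chain = list(eigs), []
    while set(Lam) != {0}:
        g = gcd_nonzero(Lam)
        nxt = [l for l in Lam if l % (2 * g) == 0]   # e^{-i l pi / g} = +1
        bar = [l for l in Lam if l % (2 * g) == g]   # e^{-i l pi / g} = -1
        chain.append((nxt, bar))
        Lam = nxt
    return chain

chain = lambda_chain([0, 1, 3, 6, 64, 64])           # the matrix M of Example 1
print("d_M =", len(chain))                           # d_M = 3
for k, (lam, bar) in enumerate(chain, start=1):
    print(k, lam, bar)
# 1 [0, 6, 64, 64] [1, 3]
# 2 [0, 64, 64] [6]
# 3 [0] [64, 64]
```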
Since \(G\) is vertex transitive, as mentioned in [20], we have \[\begin{split}&\sqrt{\sum_{\lambda_{i}\in\Lambda_{k}}\alpha_{i}^{2}}=\sqrt{\frac{|\Lambda_{k}|}{N}},\\ &\sqrt{\sum_{\lambda_{i}\in\overline{\Lambda}_{k}}\alpha_{i}^{2}}=\sqrt{\frac{|\overline{\Lambda}_{k}|}{N}},\end{split} \tag{2}\] where \(|\Lambda_{k}|\) and \(|\overline{\Lambda}_{k}|\) are both positive.2 For \(|m\rangle=\sum_{i=1}^{N}\alpha_{i}|\eta_{i}\rangle\), we define \(|w_{0}\rangle\), \(|w_{k}\rangle\), \(|\overline{w}_{k}\rangle\) (\(k\in\{1,2,\ldots,d_{L}\}\)) as in (ii) of Definition 1. We can observe that \(\Lambda_{k}=\Lambda_{k+1}\cup\overline{\Lambda}_{k+1}\), \(|w_{k}\rangle\in span\{|w_{k+1}\rangle,|\overline{w}_{k+1}\rangle\}\) (\(k\in\{0,1,\ldots,d_{L}-1\}\)) and \(d_{L}\) is less than the number of distinct eigenvalues of \(L\). Footnote 2: For a set \(S\), \(|S|\) denotes its cardinality. Below we present one of our main results. **Theorem 1**.: _Given a vertex transitive graph \(G=(V,E)\) with \(N\) vertices and Laplacian matrix \(L\) such that each eigenvalue of \(L\) is an integer, we can find an integer \(p\in O(2^{d_{L}-1}\sqrt{N})\), and real numbers \(\gamma\), \(\theta_{k}\), \(t_{k}\) (\(k\in\{1,2,\ldots,p\}\)) such that \(|s\rangle=e^{-i\gamma}\prod_{k=1}^{p}e^{-i\theta_{k}|m\rangle\langle m|}e^{-iLt_{k}}|m\rangle\), where \(m\) is the marked vertex and \(|s\rangle=\frac{1}{\sqrt{N}}\sum_{v\in V}|v\rangle\)._ Proof.: The idea for the proof is to divide the search space into a series of subspaces \(span\{|w_{i}\rangle,|\overline{w}_{i}\rangle\}\) (\(i\in\{1,\cdots,d_{L}\}\)). In each subspace \(span\{|w_{i}\rangle,|\overline{w}_{i}\rangle\}\), we start from \(|w_{i-1}\rangle\) and use \(e^{-i\theta|m\rangle\langle m|}\) and \(e^{-iLt}\) to construct \(I-2|w_{i}\rangle\langle w_{i}|\) and \(I-(1-e^{-i\theta})|w_{i-1}\rangle\langle w_{i-1}|\) and perform deterministic algorithms to make the state evolve into \(|w_{i}\rangle\) as we do in Lemma 1. By repeating the above operation in order, we obtain a procedure which achieves the following state evolution: \[|m\rangle=|w_{0}\rangle\rightarrow|w_{1}\rangle\rightarrow\cdots\rightarrow|w_{d_{L}}\rangle=|s\rangle.\] Therefore, the reversed procedure is what we want. We start from the walk operator \(e^{-iLt}\). Let \(t=\frac{\pi}{gcd(\Lambda_{0})}\). Then \[\begin{split} e^{-iL\frac{\pi}{gcd(\Lambda_{0})}}&=\sum_{i=1}^{N}e^{-i\lambda_{i}\frac{\pi}{gcd(\Lambda_{0})}}|\eta_{i}\rangle\langle\eta_{i}|\\ &=\sum_{\lambda_{i}\in\Lambda_{1}}|\eta_{i}\rangle\langle\eta_{i}|-\sum_{\lambda_{i}\in\overline{\Lambda}_{1}}|\eta_{i}\rangle\langle\eta_{i}|.\end{split} \tag{3}\] Recall that \[|m\rangle=|w_{0}\rangle\in span\{|w_{1}\rangle,|\overline{w}_{1}\rangle\}\] and \(|w_{1}\rangle\) (\(|\overline{w}_{1}\rangle\)) are linear combinations of eigenvectors \(|\eta_{1}\rangle,\ldots,|\eta_{N}\rangle\) whose corresponding eigenvalues are in \(\Lambda_{1}\) (\(\overline{\Lambda}_{1}\)).
We can see that \[\begin{split}& e^{-iL\frac{\pi}{gcd(\Lambda_{0})}}|w_{1}\rangle=|w_{1}\rangle,\\ & e^{-iL\frac{\pi}{gcd(\Lambda_{0})}}|\overline{w}_{1}\rangle=-|\overline{w}_{1}\rangle.\end{split} \tag{4}\] Restricted to the subspace \(span\{|w_{1}\rangle,|\overline{w}_{1}\rangle\}\), we have \[\begin{split}& e^{-iL\frac{\pi}{gcd(\Lambda_{0})}}=2|w_{1}\rangle\langle w_{1}|-I=e^{i\pi}(I-2|w_{1}\rangle\langle w_{1}|),\\ & e^{-i\theta|m\rangle\langle m|}=I-(1-e^{-i\theta})|w_{0}\rangle\langle w_{0}|.\end{split} \tag{5}\] From (2), we have \[\langle w_{k}|w_{k+1}\rangle=\frac{\sqrt{\sum_{\lambda_{i}\in\Lambda_{k+1}}\alpha_{i}^{2}}}{\sqrt{\sum_{\lambda_{i}\in\Lambda_{k}}\alpha_{i}^{2}}}=\frac{\sqrt{|\Lambda_{k+1}|}}{\sqrt{|\Lambda_{k}|}}, \tag{6}\] which is a number independent of \(|m\rangle\). According to (5), (6) and Lemma 1, we can find parameters \(p\in O(\frac{1}{|\langle w_{1}|w_{0}\rangle|})\), \(\gamma\), \(t=\frac{\pi}{gcd(\Lambda_{0})}\) and \(\theta_{k}\) (\(k\in\{1,2,\ldots,p\}\)) such that \[|w_{1}\rangle=e^{-i\gamma}\Big{(}\prod_{k=1}^{p}e^{-i\theta_{k}|m\rangle\langle m|}e^{-iLt}\Big{)}|w_{0}\rangle. \tag{7}\] Next, we shall prove the following fact. **Lemma 3**.: _If there are parameters \(p\), \(\gamma\), and \(\theta_{k}\), \(t_{k}\) (\(k\in\{1,2,\ldots,p\}\)) satisfying_ \[|w_{i}\rangle=e^{-i\gamma}\prod_{k=1}^{p}e^{-i\theta_{k}|m\rangle\langle m|}e^{-iLt_{k}}|w_{0}\rangle, \tag{8}\] _where \(1\leq i\leq d_{L}-1\), then we can find parameters \(p^{\prime}\in O(\frac{2p}{|\langle w_{i}|w_{i+1}\rangle|})\), \(\gamma^{\prime}\), and \(\theta_{k}^{\prime}\), \(t_{k}^{\prime}\) (\(k\in\{1,2,\ldots,p^{\prime}\}\)) such that_ \[|w_{i+1}\rangle=e^{-i\gamma^{\prime}}\prod_{k=1}^{p^{\prime}}e^{-i\theta_{k}^{\prime}|m\rangle\langle m|}e^{-iLt_{k}^{\prime}}|w_{0}\rangle. \tag{9}\] Proof of Lemma 3.: Denote \(e^{-i\gamma}\prod_{k=1}^{p}e^{-i\theta_{k}|m\rangle\langle m|}e^{-iLt_{k}}\) in (8) by \(A_{i}\). Then we have \[\begin{split} A_{i}e^{-i\theta|m\rangle\langle m|}A_{i}^{\dagger}&=A_{i}(I-(1-e^{-i\theta})|w_{0}\rangle\langle w_{0}|)A_{i}^{\dagger}\\ &=I-(1-e^{-i\theta})|w_{i}\rangle\langle w_{i}|.\end{split}\] Recall that \(|w_{i}\rangle\in span\{|w_{i+1}\rangle,|\overline{w}_{i+1}\rangle\}\). Restricted to this subspace, we have \[e^{-iL\frac{\pi}{gcd(\Lambda_{i})}}=2|w_{i+1}\rangle\langle w_{i+1}|-I=e^{i\pi}(I-2|w_{i+1}\rangle\langle w_{i+1}|).\] Therefore, by Lemma 1 there are parameters \(p^{\prime\prime}\in O(\frac{1}{|\langle w_{i}|w_{i+1}\rangle|})\), \(\gamma^{\prime\prime}\), \(t=\frac{\pi}{gcd(\Lambda_{i})}\) and \(\theta_{k}^{\prime\prime}\) (\(k\in\{1,2,\ldots,p^{\prime\prime}\}\)) such that \[\begin{split}|w_{i+1}\rangle&=e^{-i\gamma^{\prime\prime}}\Big{(}\prod_{k=1}^{p^{\prime\prime}}A_{i}e^{-i\theta_{k}^{\prime\prime}|m\rangle\langle m|}A_{i}^{\dagger}e^{-iLt}\Big{)}|w_{i}\rangle\\ &=e^{-i\gamma^{\prime\prime}}\Big{(}\prod_{k=1}^{p^{\prime\prime}}A_{i}e^{-i\theta_{k}^{\prime\prime}|m\rangle\langle m|}A_{i}^{\dagger}e^{-iLt}\Big{)}A_{i}|w_{0}\rangle.\end{split} \tag{10}\] By replacing \(A_{i}\) with \(e^{-i\gamma}\prod_{k=1}^{p}e^{-i\theta_{k}|m\rangle\langle m|}e^{-iLt_{k}}\) in (10), we can get the parameters satisfying (9). This proves the fact.
By (7) and using Lemma 3 recursively, we can find parameters \(p\in O(\frac{1}{2}\prod_{k=0}^{d_{L}-1}\frac{2}{|\langle w_{k+1}|w_{k}\rangle|})\), \(\gamma\), and \(\theta_{k}\), \(t_{k}\) (\(k\in\{1,2,\ldots,p\}\)) such that \[\begin{split}|w_{d_{L}}\rangle&=e^{-i\gamma}\prod_{k=1}^{p}e^{-i\theta_{k}|m\rangle\langle m|}e^{-iLt_{k}}|w_{0}\rangle\\ &=e^{-i\gamma}\prod_{k=1}^{p}e^{-i\theta_{k}|m\rangle\langle m|}e^{-iLt_{k}}|m\rangle.\end{split} \tag{11}\] By Lemma 2, \(0\) is a simple eigenvalue of \(L\), and \(|s\rangle\) is the corresponding eigenvector. Thus we have \[\begin{split}|w_{d_{L}}\rangle&=\frac{1}{\sqrt{\sum_{\lambda_{i}\in\Lambda_{d_{L}}}\alpha_{i}^{2}}}\sum_{\lambda_{i}\in\Lambda_{d_{L}}}\alpha_{i}|\eta_{i}\rangle\\ &=\frac{1}{\sqrt{\sum_{\lambda_{i}=0}\alpha_{i}^{2}}}\sum_{\lambda_{i}=0}\alpha_{i}|\eta_{i}\rangle=e^{i\gamma_{0}}|s\rangle,\end{split}\] where \(e^{i\gamma_{0}}=1\) or \(-1\) and we can ignore it since it is a global phase. The upper bound on the number of times that we call the search operator is \[\begin{split}\frac{1}{2}\prod_{k=0}^{d_{L}-1}\frac{2}{|\langle w_{k+1}|w_{k}\rangle|}&=\frac{1}{2}\prod_{k=0}^{d_{L}-1}\frac{2\sqrt{\sum_{\lambda_{i}\in\Lambda_{k}}\alpha_{i}^{2}}}{\sqrt{\sum_{\lambda_{i}\in\Lambda_{k+1}}\alpha_{i}^{2}}}\\ &=\frac{2^{d_{L}-1}\sqrt{\sum_{\lambda_{i}\in\Lambda_{0}}\alpha_{i}^{2}}}{\sqrt{\sum_{\lambda_{i}\in\Lambda_{d_{L}}}\alpha_{i}^{2}}}\\ &=\frac{2^{d_{L}-1}\sqrt{|\Lambda_{0}|}}{\sqrt{|\Lambda_{d_{L}}|}}\\ &=2^{d_{L}-1}\sqrt{N}.\end{split}\] This completes the proof of Theorem 1. **Remark 1**.: _In Theorem 1, the Laplacian matrix \(L\) can be relaxed to have rational eigenvalues rather than restricted to integer eigenvalues. In fact, for rational numbers \(\lambda_{i}\) (\(i\in\{1,2,\ldots,N\}\)), we can always find a number \(q\) such that \(q\lambda_{i}\) (\(i\in\{1,2,\ldots,N\}\)) are all integers._ _Applications._-- It will be shown that by applying Theorem 1 we can not only obtain easily the results in [19, 20, 21], but also design deterministic quantum spatial search algorithms on some more general graphs. We first take Johnson graphs as an example. The Johnson graph \(J(n,k)\) has vertices given by the \(k\)-subsets of \(\{1,\cdots,n\}\), with two vertices connected when their intersection has size \(k-1\). It has many interesting properties and connections with many important problems. Marsh and Wang [20] showed that a quadratic speedup spatial search algorithm on \(J(n,2)\) can be designed using alternating quantum walks. We prove that, for any fixed positive integer \(k\), the quadratic speedup can be generalized to the Johnson graphs \(J(n,k)\). **Theorem 2**.: _Let \(k\) be a fixed positive integer. For any Johnson graph \(J(n,k)\) with Laplacian matrix \(L\) and \(N\) vertices in which there is a marked vertex \(m\), we can design a quantum search algorithm that deterministically finds the marked vertex using \(O(\sqrt{N})\) calls to the search operator \(e^{-i\theta|m\rangle\langle m|}\)._ Proof.: The Johnson graph \(J(n,k)\) is vertex transitive, and \(L\) has \(min(k,n-k)+1\) distinct eigenvalues that are all integers [28]. By Theorem 1, we can construct a quantum algorithm \(A\) satisfying \(A|m\rangle=|s\rangle\) that uses \(O(2^{d_{L}-1}\sqrt{N})\) calls to the search operator, where \(|s\rangle\) is the uniform superposition state over all vertices of the graph and \(d_{L}\) is defined as in Definition 1. Obviously, \(A^{\dagger}|s\rangle=|m\rangle\), which means that the algorithm can find the marked vertex from \(|s\rangle\).
Recall that \(d_{L}\) is less than the number of distinct eigenvalues of \(L\) and we have \(d_{L}<min(k,n-k)+1\leq k+1\). Since \(k\) is fixed, the number of queries for the algorithm \(A^{\dagger}\) to call the search operator is bounded by \(O(\sqrt{N})\). Therefore, the algorithm achieves the quadratic speedup. Rook graphs and complete-square graphs are both vertex transitive. An \(m\times n\) rook graph is the graph Cartesian product \(K_{m}\,\square\,K_{n}\) of complete graphs, having a total of \(N=mn\) vertices. The Laplacian matrix of a rook graph has at most four distinct eigenvalues which are all integers [20]. The \(K_{n}\,\square\,Q_{2}\) graph that is called complete-square graph is the graph Cartesian product of a complete graph \(K_{n}\) and a square graph \(Q_{2}\). The Laplacian matrix of a complete-square graph has at most six distinct eigenvalues which are all integers [20]. Using Theorem 1 on rook graphs and complete-square graphs as we do in Theorem 2, we can get similar quantum algorithms that deterministically find the marked vertex with quadratic speedups, which cover the results in [19; 20]. Next, we consider the complete bipartite graph \(K(N_{1},N_{2})\), which usually neither is vertex transitive nor has integer eigenvalues. So Theorem 1 does not apply to this situation directly. However, we can still design a deterministic quantum search algorithm in a similar way as done in Theorem 1. A complete bipartite graph \(K(N_{1},N_{2})\) is an undirected graph that has its vertex set partitioned into two subsets \(V_{1}\) of size \(N_{1}\) and \(V_{2}\) of size \(N_{2}\), such that there is an edge from every vertex in \(V_{1}\) to every vertex in \(V_{2}\). A special case of complete bipartite graphs is star graphs, where there is only one vertex in \(V_{1}\) or \(V_{2}\). For this case, Qu et al. [21] gave a quantum search algorithm using alternating quantum walks that has \(O(\sqrt{N})\) calls to the search operators. We will give a generalized algorithm for all complete bipartite graphs. **Theorem 3**.: _For any complete bipartite graph \(K(N_{1},N_{2})\) with a marked vertex \(m\), we can design a quantum search algorithm that deterministically finds the marked vertex using \(O(\sqrt{N_{1}+N_{2}})\) calls to the search operator \(e^{-i\theta|m\rangle\langle m|}\)._ Proof.: For a complete bipartite graph \(K(N_{1},N_{2})\), its adjacency matrix \(A\) has three distinct eigenvalues: \(0\), \(\sqrt{N_{1}N_{2}}\), \(-\sqrt{N_{1}N_{2}}\). The algebraic multiplicity of \(0\) is \(N_{1}+N_{2}-2\), and both \(\sqrt{N_{1}N_{2}}\) and \(-\sqrt{N_{1}N_{2}}\) are simple eigenvalues. 
Moreover, \(A\) has the following spectral decomposition \[A =\sqrt{N_{1}N_{2}}|\eta_{+}\rangle\langle\eta_{+}|-\sqrt{N_{1}N_{2}}|\eta_{-}\rangle\langle\eta_{-}|+\sum_{i=1}^{N_{1}+N_{2}-2}0|\eta_{i}\rangle\langle\eta_{i}|,\] where \[|\eta_{+}\rangle =\frac{1}{\sqrt{2N_{1}N_{2}}}(\underbrace{\sqrt{N_{2}},\ldots,\sqrt{N_{2}}}_{N_{1}},\underbrace{\sqrt{N_{1}},\ldots,\sqrt{N_{1}}}_{N_{2}})^{T},\] \[|\eta_{-}\rangle =\frac{1}{\sqrt{2N_{1}N_{2}}}(\underbrace{\sqrt{N_{2}},\ldots,\sqrt{N_{2}}}_{N_{1}},\underbrace{-\sqrt{N_{1}},\ldots,-\sqrt{N_{1}}}_{N_{2}})^{T}.\] We rewrite the marked vertex on this basis as \[|m\rangle=\sum_{i=1}^{N_{1}+N_{2}-2}\alpha_{i}|\eta_{i}\rangle+\alpha_{+}|\eta_{+}\rangle+\alpha_{-}|\eta_{-}\rangle.\] If the marked vertex is in \(V_{1}\), then \[\langle\eta_{+}|m\rangle=\langle\eta_{-}|m\rangle=\frac{1}{\sqrt{2N_{1}}},\] and \[|m\rangle=\sum_{i=1}^{N_{1}+N_{2}-2}\alpha_{i}|\eta_{i}\rangle+\frac{1}{\sqrt{2N_{1}}}|\eta_{+}\rangle+\frac{1}{\sqrt{2N_{1}}}|\eta_{-}\rangle.\] Define \[|\eta_{0}\rangle =\frac{1}{\sqrt{\sum_{i=1}^{N_{1}+N_{2}-2}\alpha_{i}^{2}}}\sum_{i=1}^{N_{1}+N_{2}-2}\alpha_{i}|\eta_{i}\rangle,\] \[|s\rangle =\frac{1}{\sqrt{2}}(|\eta_{+}\rangle+|\eta_{-}\rangle)=\frac{1}{\sqrt{N_{1}}}(1,1,\cdots,1,0,\cdots,0)^{T}.\] We can see \(|m\rangle\in span\{|\eta_{0}\rangle,|s\rangle\}\), and in this subspace, \[e^{-iA\frac{\pi}{\sqrt{N_{1}N_{2}}}}=I-2|s\rangle\langle s|.\] Obviously, \(\langle m|s\rangle=\frac{1}{\sqrt{N_{1}}}\). Thus, by Lemma 1 we can find parameters \(p\in O(\sqrt{N_{1}})\), \(\gamma\), \(\theta_{k}\) (\(k\in\{1,2,\ldots,p\}\)) such that \[|m\rangle=e^{-i\gamma}\prod_{k=1}^{p}e^{-iA\frac{\pi}{\sqrt{N_{1}N_{2}}}}e^{-i\theta_{k}|m\rangle\langle m|}|s\rangle.\] Hence, we get a deterministic quantum search algorithm that uses \(O(\sqrt{N_{1}})\) calls to the search operator and finds the marked vertex from the uniform superposition state over all vertices in \(V_{1}\). Similarly, if the marked vertex is in \(V_{2}\), we can construct a deterministic quantum search algorithm that finds the marked vertex from the uniform superposition state over all vertices in \(V_{2}\) and has \(O(\sqrt{N_{2}})\) calls to the search operator. Combining the two algorithms above, we can get a quantum algorithm that has \(O(\sqrt{N_{1}}+\sqrt{N_{2}})\) calls to the search operator and finds the marked vertex exactly. Since \(\sqrt{N_{1}}+\sqrt{N_{2}}\leq\sqrt{2N_{1}+2N_{2}}\), the algorithm has \(O(\sqrt{N_{1}+N_{2}})\) calls to the search operator and the theorem is proved completely. _Conclusion and Discussion.--_ In this article, we have presented a universal approach that can be used to design deterministic quantum algorithms for spatial search on a variety of graphs based on alternating phase-walks. Using this approach, we have obtained deterministic quantum search algorithms with quadratic speedups on Johnson graphs \(J(n,k)\) for any fixed \(k\), rook graphs, complete-square graphs, and complete bipartite graphs, which not only cover the previous results obtained in [19; 20; 21], but also result in some new and general results. Our algorithms are concise and easy to understand: they simply perform alternately the continuous-time quantum walk operator \(e^{-iHt}\) and the search operator \(e^{-i\theta|m\rangle\langle m|}\) on the initial state \(|s\rangle\). For future work, we will consider generalizing our approach to more graphs and designing algorithms for the case of multiple marked vertices.
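The spectral facts invoked in the proofs of Theorems 2 and 3 are easy to cross-check numerically. A small sketch (plain numpy; the sizes \(J(7,3)\) and \(K(3,5)\) are illustrative):

```python
import numpy as np
from itertools import combinations

def johnson_laplacian(n, k):
    """Laplacian of the Johnson graph J(n,k): vertices are k-subsets,
    adjacent when their intersection has size k-1."""
    V = list(combinations(range(n), k))
    A = np.array([[1.0 if len(set(u) & set(v)) == k - 1 else 0.0 for v in V]
                  for u in V])
    return np.diag(A.sum(axis=1)) - A

L = johnson_laplacian(7, 3)                 # N = C(7,3) = 35 vertices
print(sorted(set(np.round(np.linalg.eigvalsh(L), 8))))
# [0.0, 7.0, 12.0, 15.0]: min(k, n-k)+1 = 4 distinct integer eigenvalues

def bipartite_adjacency(N1, N2):
    A = np.zeros((N1 + N2, N1 + N2))
    A[:N1, N1:] = 1.0
    A[N1:, :N1] = 1.0
    return A

eigs = np.round(np.linalg.eigvalsh(bipartite_adjacency(3, 5)), 8)
print(sorted(eigs))   # +-sqrt(15) simple, 0 with multiplicity N1+N2-2 = 6
```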
2305.00128
An analogue to the pion decay constant in the multi-flavor Schwinger model
We study the Schwinger model with $N_{\rm f} \geq 2$ degenerate fermion flavors, by means of lattice simulations. We use dynamical Wilson fermions for $N_{\rm f} = 2$, and re-weighted quenched configurations for overlap-hypercube fermions with $N_{\rm f} \leq 6$. In this framework, we explore an analogue of the QCD pion decay constant $F_{\pi}$, which is dimensionless in $d=2$, and which has hardly been considered in the literature. We determine $F_{\pi}$ by three independent methods, with numerical and analytical ingredients. First, we consider the 2-dimensional version of the Gell-Mann--Oakes--Renner relation, where we insert both theoretical and numerical values for the quantities involved. Next we refer to the $\delta$-regime, {\it i.e.\ a small spatial volume, where we assume formulae from Chiral Perturbation Theory to apply even in the absence of Nambu-Goldstone bosons. We further postulate an effective relation between $N_{\rm f}$ and the number of relevant, light bosons, which we denote as "pions". Thus $F_{\pi}$ is obtained from the residual "pion" mass in the chiral limit, which is a finite-size effect. Finally, we address to the 2-dimensional Witten--Veneziano formula: it yields a value for $F_{\eta}$, which we identify with $F_{\pi}$, as in large-$N_{\rm c}$ QCD. All three approaches consistently lead to $F_{\pi} \simeq 1/\sqrt{2 \pi}$ at fermion mass $m=0$, which implies that this quantity is meaningful.
Jaime Fabián Nieto Castellanos, Ivan Hip, Wolfgang Bietenholz
2023-04-28T23:55:16Z
http://arxiv.org/abs/2305.00128v2
# An analogue to the pion decay constant ###### Abstract We study the Schwinger model with \(N_{\rm f}\geq 2\) degenerate fermion flavors, by means of lattice simulations. We use dynamical Wilson fermions for \(N_{\rm f}=2\), and re-weighted quenched configurations for overlap-hypercube fermions with \(N_{\rm f}\leq 6\). In this framework, we explore an analogue of the QCD pion decay constant \(F_{\pi}\), which is dimensionless in \(d=2\), and which has hardly been considered in the literature. We determine \(F_{\pi}\) by three independent methods, with numerical and analytical ingredients. First, we consider the 2-dimensional version of the Gell-Mann-Oakes-Renner relation, where we insert both theoretical and numerical values for the quantities involved. Next we refer to the \(\delta\)-regime, _i.e._ a small spatial volume, where we assume formulae from Chiral Perturbation Theory to apply even in the absence of Nambu-Goldstone bosons. We further postulate an effective relation between \(N_{\rm f}\) and the number of relevant, light bosons, which we denote as "pions". Thus \(F_{\pi}\) is obtained from the residual "pion" mass in the chiral limit, which is a finite-size effect. Finally, we address to the 2-dimensional Witten-Veneziano formula: it yields a value for \(F_{\eta}\), which we identify with \(F_{\pi}\), as in large-\(N_{\rm c}\) QCD. All three approaches consistently lead to \(F_{\pi}\simeq 1/\sqrt{2\pi}\) at fermion mass \(m=0\), which implies that this quantity is meaningful. ###### Contents * 1 Introduction * 2 2d Gell-Mann-Oakes-Renner relation * 3 \(F_{\pi}\) from the residual pion mass in the \(\delta\)-regime * 4 Witten-Veneziano formula in the Schwinger model * 5 Summary and conclusions * A The "pion" mass in the \(\epsilon\)-regime ## 1 Introduction The Schwinger model represents Quantum Electrodynamics in two space-time dimensions [1]. This model shares several fundamental features with 4-dimensional Quantum Chromodynamics (QCD), in particular confinement [2] as well as the division of the gauge configurations into topological sectors. This model has been solved exactly in the massless case, but not at finite fermion mass, \(m>0\). In that case, analytic approaches are usually based on bosonization and involve some assumptions and approximations. Here we consider the Schwinger model with \(N_{\rm f}\geq 2\) degenerate fermion flavors, in Euclidean space-time. Chiral Perturbation Theory is a systematic low-energy effective theory of QCD, in terms of light meson fields. Its Lagrangian includes a string of terms, which are Lorentz invariant and chirally symmetric (if we refer to the chiral limit, where the mesons are massless Nambu-Goldstone bosons). The number of these terms is infinite, but they can be hierarchically ordered in powers of the momenta, and truncated. Each term has a coefficient, known as a low-energy constant, which is a free parameter within Chiral Perturbation Theory. It can only be determined from QCD as the underlying, fundamental theory, or from experiment. To leading order, there is only one term, \[{\cal L}=\frac{F_{\pi}^{2}}{4}\,\partial^{\mu}\vec{\pi}(x)\cdot\partial_{\mu} \vec{\pi}(x)\, \tag{1.1}\] where \(\vec{\pi}\) is the pion field and the corresponding low-energy constant \(F_{\pi}\) is known as the _pion decay constant_. It appears in a variety of relations, which are not necessarily related to the pion decay. Some of these relations occur in an analogous form in the multi-flavor Schwinger model. 
Based on such analogies, we are going to discuss three independent formulations of \(F_{\pi}\) in the Schwinger model. It is dimensionless in \(d=2\), and the results obtained with these three approaches are all compatible with the value \[F_{\pi}\simeq 1/\sqrt{2\pi}=0.3989\ldots \tag{1.2}\] in the chiral limit. In addition, this result is in good agreement with the only previous determination that we are aware of: a study for \(N_{\rm f}=2\) by Harada _et al._ at strong coupling in a light-cone formulation [3], which considered the relation \[\langle 0|\partial^{\mu}J^{5}_{\mu}(0)|\pi(p)\rangle=M_{\pi}^{2}F_{\pi}\, \tag{1.3}\] where \(J^{5}_{\mu}\) is the axial current and \(M_{\pi}\) is the "pion" mass. In this manner, Ref. [3] obtained a mild dependence on the (degenerate) fermion mass \(m\), \[F_{\pi}(m)=0.394518(14)+0.040(1)m/g\, \tag{1.4}\] where \(g\) is the gauge coupling, and \(F_{\pi}(0)\) is close to our value in eq. (1.2). On the other hand, if one refers directly to the axial current, instead of its divergence, \[\langle 0|J^{5}_{\mu}(0)|\pi(p)\rangle={\rm i}p_{\mu}F_{\pi}\, \tag{1.5}\] one seems to arrive at \(F_{\pi}=0\), so the outcome does depend on the QCD relation to which one establishes an analogy. The QCD-inspired relations that we are going to refer to are the Gell-Mann-Oakes-Renner relation (Section 2), the residual pion mass in the \(\delta\)-regime (Section 3), and the Witten-Veneziano formula (Section 4). Finally we present our conclusions and an appendix about finite-size effects on \(M_{\pi}\). Preliminary results of this work were presented in a thesis [4] and two proceeding contributions [5]. ## 2 2d Gell-Mann-Oakes-Renner relation Back in 1992, Smilga derived the relation [6] \[m\Sigma=CM_{\pi}^{2}\, \tag{2.1}\] where \(\Sigma\) is the chiral condensate, which -- in terms of the fermion fields -- takes the usual form \(\Sigma=-\langle\bar{\Psi}\Psi\rangle\). In the effective Lagrangian for QCD at small but non-zero quark masses, \(F_{\pi}\) and \(\Sigma\) are the two leading low-energy constants. However, Smilga did not specify the constant \(C\). That was accomplished in Refs. [7, 8, 9]: the bosonized 2-flavor Schwinger model leads to a Schrodinger-type equation, and in this framework these works studied the interactions of (quasi) zero-modes due to the chiral anomaly and the fermion masses. This led to an interesting formula (eq. (37) in Ref. [7]), which -- in our notation and at zero vacuum angle -- reads \[\Sigma=\frac{M_{\pi}^{2}}{4\pi m}. \tag{2.2}\] This relation is explained in detail in Ref. [9]. In addition, Ref. [7] also derived expressions for \(M_{\pi}\) in terms of \(m\), \(g\) and the volume, in three different regimes. By inserting \(M_{\pi}\) into eq. (2.2), the authors obtained formulae for \(\Sigma\) in each of these regimes. However, that work did not relate eq. (2.2) to the "pion decay constant", which we are interested in. This can be achieved by invoking the Gell-Mann-Oakes-Renner relation [10], which is well-known in QCD, \[F_{\pi}^{2}(m)=\frac{2m}{M_{\pi}^{2}}\,\Sigma. \tag{2.3}\] If we postulate the same relation in the multi-flavor Schwinger model, and combine it with eq. (2.2), we arrive at \[F_{\pi}=\frac{1}{\sqrt{2\pi}}\, \tag{2.4}\] without any mass dependence. Alternatively -- without relying on the approximations in the bosonization approach -- we can numerically compute the quantities on the right-hand side of eq. (2.3) in order to derive results for \(F_{\pi}(m)\). 
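A minimal sketch of this numerical route (plain Python; the triple \((m,M_{\pi},\Sigma)\) below is an illustrative placeholder, not a measured value), together with the analytic cross-check that inserting eq. (2.2) into eq. (2.3) yields \(F_{\pi}=1/\sqrt{2\pi}\) for any \(m\):

```python
import math

def F_pi_gmor(m, M_pi, Sigma):
    """Gell-Mann--Oakes--Renner estimate, eq. (2.3): F_pi^2 = 2 m Sigma / M_pi^2."""
    return math.sqrt(2.0 * m * Sigma / M_pi**2)

m, M_pi = 0.2, 0.55                     # placeholder values in lattice units
Sigma = M_pi**2 / (4.0 * math.pi * m)   # bosonization result, eq. (2.2)
print(F_pi_gmor(m, M_pi, Sigma))        # 0.39894..., independently of m and M_pi
print(1.0 / math.sqrt(2.0 * math.pi))   # 0.39894... = 1/sqrt(2 pi)
```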
Such results are shown in Figure 1: they were obtained based on quenched configurations on a lattice of size \(V=24\times 24\), and re-weighting with the overlap-hypercube fermion determinant, for the cases of \(N_{\rm f}=2,\ldots,6\) degenerate fermion flavors. The overlap-hypercube Dirac operator is obtained by using the overlap formula [11], which solves the Ginsparg-Wilson relation [12]. This guarantees an exact, lattice-modified chiral symmetry [13]. However, for the kernel we do not insert the usual Wilson operator, but a truncated perfect hypercube fermion operator [14]. Compared to the standard overlap formulation, this improves the scaling behavior, the approximate rotation invariance and the level of locality, as demonstrated in quenched QCD [15]. The 2-dimensional version that we are using in the Schwinger model was proposed in Ref. [16], and applied also in Refs. [17, 18]. Thanks to the chiral symmetry of the overlap-hypercube operator, we can insert the bare fermion mass \(m\), and reliably calculate \(M_{\pi}\) even at small \(m\). \(\Sigma\) is computed from the spectrum of the Dirac operator, \[\Sigma(m)=\frac{1}{V}\left\langle\sum_{k}\frac{1}{\lambda_{k}+m}\right\rangle\, \tag{2.5}\] where the Dirac eigenvalues \(\lambda_{k}\) are mapped from the Ginsparg-Wilson circle (with center 1 and radius 1) to the imaginary axis (their location in the continuum limit) by means of a Mobius transform, \(\lambda_{k}\to\lambda_{k}/(1-\lambda_{k}/2)\). Figure 1 shows that the results for \(F_{\pi}(m)\) are consistently of the order of 0.4. With 30,000 configurations and for \(m\gtrsim 0.2\) (in lattice units), the numerical values are quite precise. This figure refers to the third regime in the case distinction of eq. (36) in Ref. [7], which is characterized (among other conditions) by \(L_{t}M_{\pi}\gg 1\) (in a volume \(L\times L_{t}\)). We also observe consistent agreement in the range \(N_{\rm f}=2,\ldots,6\), which indicates that the value of \(F_{\pi}(m)\) does not depend on the number of flavors. At smaller fermion mass we enter the second regime of eq. (36) in Ref. [7], where \(L_{t}M_{\pi}\ll 1\) (the spatial size \(L\) remains large, so there is no relevant residual pion mass due to finite-size effects). Here the errors increase visibly, and if \(m\) is too small, even the measured values are not reliable anymore. We still obtain good results for \(\Sigma\), however, as we see in Figure 2, which shows a comparison with predictions in Ref. [7]. This also implies that re-weighting works well, even down to tiny fermion masses, in agreement with earlier results in Ref. [19]. However, at tiny values of \(m\), the pion mass \(M_{\pi}\) suffers from significant finite-size effects, since the product \(LM_{\pi}\) is not large anymore. Moreover, in that regime there is a discrepancy between different ways to measure \(M_{\pi}\), as we will point out in the appendix. For all \(N_{\rm f}\) that we included, we observe in Figure 1 a slight maximum of \(F_{\pi}(m)\) around \(m\approx 0.14\). At even smaller fermion mass \(m\), \(F_{\pi}(m)\) decreases and the chiral extrapolation is again compatible with \(F_{\pi}(0)=1/\sqrt{2\pi}\), although at tiny \(m\) the finite-size effects on \(M_{\pi}\) and the large statistical errors prevent a precise chiral extrapolation. On the other hand, in some circumstances, the increase of \(M_{\pi}\) due to finite-size effects can also be used to extract physical information. This will be addressed in the next section.
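For completeness, a sketch of the spectral average in eq. (2.5), including the Mobius map of the Ginsparg-Wilson eigenvalues; the eigenvalue set below is a synthetic placeholder for the measured overlap-hypercube spectrum of a single configuration:

```python
import numpy as np

def condensate_one_config(eigs_gw, m, V):
    """Eq. (2.5) for one configuration: map the eigenvalues from the
    Ginsparg-Wilson circle (center 1, radius 1) to the imaginary axis,
    lam -> lam / (1 - lam/2), then average 1/(lam + m).  Sigma(m) is the
    ensemble average of this quantity over configurations."""
    lam = eigs_gw / (1.0 - eigs_gw / 2.0)
    return np.sum(1.0 / (lam + m)).real / V

# synthetic placeholder: conjugate pairs on the GW circle, lam = 1 - e^{i theta}
theta = np.array([0.1, -0.1, 0.4, -0.4, 1.0, -1.0])
eigs_gw = 1.0 - np.exp(1j * theta)
print(condensate_one_config(eigs_gw, m=0.1, V=24 * 24))
```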
Figure 1: _Values for \(F_{\pi}\) obtained from the Gell-Mann–Oakes–Renner relation (2.3) for \(N_{\rm f}=2,\ldots,6\) flavors, at fermion masses \(0.05\leq m\leq 0.4\). The data are obtained from quenched simulations and overlap-hypercube re-weighting, which works well, but for fermion mass \(m\lesssim 0.05\) the results are affected by finite-size effects on \(M_{\pi}\). We see convincing agreement for different \(N_{\rm f}\), and the extrapolations are compatible with \(F_{\pi}\simeq 0.4\)._

Figure 2: _The chiral condensate, measured for \(N_{\rm f}=2\), at \(\beta=5\) on a \(24\times 24\) lattice, based on the Dirac spectrum according to eq. (2.5). It is compared to an asymptotic formula for small \(m\) given in Ref. [7], where \(M_{\eta}\) represents the “\(\eta\)-meson” mass in the chiral limit, see eq. (4.4). We see that re-weighting works very well even for fermion masses down to \(m=0.001\), but the Gell-Mann–Oakes–Renner relation for \(F_{\pi}\), eq. (2.3), also involves \(M_{\pi}\), which is amplified by finite-size effects._

## 3 \(F_{\pi}\) from the residual pion mass in the \(\delta\)-regime

The approach of this section refers to Chiral Perturbation Theory, which is a systematic effective field theory for low-energy QCD, cf. Section 1. One writes a general Lagrangian -- with all terms allowed by the symmetries -- in terms of pseudo-Nambu-Goldstone boson fields. In 2-flavor QCD, the fields represent the pions, which pick up a small mass \(M_{\pi}\) through non-zero masses of the \(u\)- and \(d\)-quark, and by finite-size effects (if the volume is finite). The latter are negligible in the most commonly used setting, the \(p\)-regime of Chiral Perturbation Theory: here the space-time volume is large, in all directions, compared to the correlation length \(1/M_{\pi}\). From a theoretical perspective, it is also instructive to study the \(\epsilon\)-regime of a small space-time volume, and the \(\delta\)-regime, with a small spatial box \(L^{3}\) but a large extent \(L_{t}\) in (Euclidean) time, \(L_{t}\gg L={\cal O}(1/M_{\pi})\). In the \(\epsilon\)- and \(\delta\)-regime, finite-size effects give rise to a significant energy gap, hence the pions have a residual mass \(M_{\pi}^{\rm R}\) even in the chiral limit of massless quarks. Here we focus on the \(\delta\)_-regime:_ it represents a quasi-1-dimensional field theory, which formally corresponds to a quantum mechanical system. Leutwyler introduced this regime in Ref. [20]: he employed the picture of a quantum mechanical rotor with the energy gap \[M_{\pi}^{\rm R}=\frac{N_{\pi}}{2\Theta}\, \tag{3.1}\]
(The constant \(\alpha_{1/2}^{(d-1)}(1)\) is a shape coefficient; its numerical values are given for symmetric boxes in various dimensions in Ref. [22].) Since that work refers to \(d>2\), Nambu-Goldstone bosons are present, and there was no problem with the pole at \(d=2\). Later the NNLO was investigated in Refs. [23, 24]. At this order, the sub-leading low-energy constants \(l_{1},\ldots,l_{4}\) enter. Comparison of simulation data with the formulae of Ref. [23] yielded in particular a sensible value for the controversial coupling \(l_{3}\)[25]. Another numerical study explored the transitions from the \(\delta\)- to the \(\epsilon\)- and \(p\)-regime [26]. The current study refers to \(d=2\), with a volume \(L_{t}\times L\), \(L_{t}\gg L\). Here the Mermin-Wagner-Coleman Theorem excludes Nambu-Goldstone bosons in the strict sense, but it is known that the "pions" at small but finite fermion mass behave similarly to pseudo-Nambu-Goldstone bosons in higher dimensions. (At \(m=0\) they decouple, thus avoiding a contradiction with the Mermin-Wagner-Coleman Theorem [27].) For considerations about the applicability of Chiral Perturbation Theory in the Schwinger model, we refer to Ref. [28]. Due to the singularity at the NLO, we can only refer to the LO, so we start from the hypothesis \[M_{\pi}^{\rm R}=\frac{N_{\pi}}{2F_{\pi}^{2}L}. \tag{3.3}\] The basic prediction reduces to \(M_{\pi}^{\rm R}\propto 1/L\), which is plausible on dimensional grounds. If this is observed numerically, we have another way to determine \(F_{\pi}\), up to the question how \(N_{\pi}\) should be interpreted in this setting, with \(N_{\rm f}\) massless fermion flavors. Part of the literature, for instance Refs. [29, 30], assumes \(N_{\rm f}^{2}-1\) "pions". This matches the number of Nambu-Goldstone bosons in the spontaneous symmetry breaking \({\rm SU}(N_{\rm f})\otimes{\rm SU}(N_{\rm f})\to{\rm SU}(N_{\rm f})\) in \(d\geq 3\) dimensions according to the Goldstone Theorem. On the other hand, the literature which analyzes the multi-flavor Schwinger model with bosonization usually deals with \(N_{\rm f}-1\) "pions" [7, 8, 9, 31, 32]. In fact, in the case \(N_{\rm f}=2\) we obtain values for \(F_{\pi}\), which are consistent with the results based on the Gell-Mann-Oakes-Renner relation, if and only if we insert \(N_{\pi}=1\). When we proceed to \(N_{\rm f}>2\), however, we see that the bosonization formula \(N_{\rm f}-1\) does not work anymore. So we take a pragmatic point of view and adjust the number of "pionic" degrees of freedom, which are manifest in formula (3.3). We obtain consistent values for \(F_{\pi}\), to an impressively high accuracy, if we insert the effective formula \[N_{\pi}=\frac{2(N_{\rm f}-1)}{N_{\rm f}}\, \tag{3.4}\] although -- according to this formula -- \(N_{\pi}\) is non-integer for \(N_{\rm f}\geq 3\). Figure 3: _Simulation results for the “pion” mass \(M_{\pi}\) in the \(\delta\)-regime, \(L\ll L_{t}\) (with \(L_{t}=64\)), using dynamical Wilson fermions. For a small fermion mass \(m\) (determined by the PCAC relation) and a small spatial extent \(L\), significant errors occur, as expected for Wilson fermions. Still, the full range of fermion masses enables sensible extrapolations to the residual “pion” mass \(M_{\pi}^{\rm R}\) in the chiral limit \(m\to 0\)._ Let us substantiate this statement by presenting our simulation results. 
We first refer to \(N_{\rm f}=2\) flavors of dynamical Wilson fermions, which are convenient to simulate (we used 10,000 configurations), but which are plagued by additive mass renormalization. As usual for non-chiral lattice fermions, the renormalized fermion mass \(m\) is measured based on the PCAC relation. Figure 3 shows results for the "pion" mass \(M_{\pi}\) in the \(\delta\)-regime, with \(L_{t}=64\gg L\) (\(L=6\), 8, 10, 12), which is plotted against the dimensionless parameter \((m^{2}g)^{1/3}\), at \(\beta=1/g^{2}=5\) (still in lattice units). As a generic property, at decreasing, small values of \(m\) and \((m^{2}g)^{1/3}\), the statistical errors of \(M_{\pi}\) and in particular of \(m\) itself increase rapidly (at fixed statistics), but the complete set of results allows for smooth fits with sensible extrapolations to the chiral limit \(m=0\). Figure 4 shows these extrapolated values of \(M_{\pi}(m=0)\) as a function of the spatial size \(L\) over the range of \(L=6,\ldots,12\). A fit confirms the expected behavior \(M_{\pi}(m=0)\propto 1/L\) to high accuracy, in particular up to \(L=11\). The proportionality constant is a fitting parameter, which -- inserted in eq. Figure 4: _The residual “pion” masses \(M_{\pi}^{\rm R}\) in the \(\delta\)-regime, obtained from simulations of two flavors of dynamical Wilson fermions and extrapolated to the chiral limit according to Figure 3, in spatial volumes \(L=6,\ldots,12\). The data follow well a fit proportional to \(1/L\), and the coefficient corresponds to \(F_{\pi}=0.3923(6)\)._ (3.3) -- yields an \(F_{\pi}\)-value close to the one in eq. (1.2), \[F_{\pi}=0.3923(6). \tag{3.5}\] Next we proceed to results that we obtained with overlap-hypercube fermions, by using 10,000 gauge configurations that we generated quenched at \(\beta=4\), which were re-weighted again for the case of \(N_{\rm f}=2\) flavors. As we see in Figure 5, the exact, lattice modified chirality of the overlap-hypercube fermions strongly suppresses the statistical fluctuations at relatively small fermion mass \(m\), which -- in this case -- is directly taken from the Lagrangian. Thus in this approach the values for \(M_{\pi}(m=0)\) are quite precise. They represent the residual "pion" mass in the \(\delta\)-regime with \(L_{t}=32\) and \(L=4,\ldots,12\). Figure 6 shows that these safely extrapolated values again follow very well a behavior \(M_{\pi}(m=0)\propto 1/L\), at least for \(L<12\). Here the fitting constant leads to \[F_{\pi}=0.3988(1)\, \tag{3.6}\] Figure 5: _Like Figure 3, but here the “pion” mass \(M_{\pi}\) is measured with overlap-hypercube fermions, using quenched, re-weighted gauge configurations. In contrast to Figure 3, this yields small errors and smooth chiral extrapolations for all spatial sizes \(L=4,\ldots,12\) under consideration._ in remarkable proximity to the result obtained with Wilson fermions, and in perfect agreement with formula (1.2). Finally, we extend the study with quenched and overlap-hypercube re-weighted configurations up to \(N_{\rm f}=6\) degenerate flavors, as in Section 2. Figure 7 illustrates the residual "pion" masses against the spatial lattice size \(L=4,\ldots,12\). As \(N_{\rm f}\) increases, the behavior \(\propto 1/L\) is observed only up to \(L=6\); at somewhat larger \(L\), the residual "pion" mass stays below this proportionality relation. 
However, when we restrict the fit to the range where the relation \(M_{\pi}^{R}\propto 1/L\) is well approximated, we consistently obtain \(F_{\pi}=0.399(1)\) over this range of \(N_{\rm f}\), if we insert the effective formula (3.4). This underscores that the value \(F_{\pi}=1/\sqrt{2\pi}\) is meaningful, and that eq. (3.4) correctly captures the number of "pionic" degrees of freedom which are manifest in the \(\delta\)-regime (even if this number is non-integer).

Figure 6: _Like Figure 4, but now with data obtained from the extrapolation of overlap-hypercube fermion results, see Figure 5. Again the fit to the conjectured behavior \(M_{\pi}^{\rm R}\propto 1/L\) works very well for \(L<12\), and we extract \(F_{\pi}=0.3988(1)\). This value is well compatible with further results that we obtained for \(F_{\pi}\) by employing different methods, and in perfect agreement with formula (1.2)._

Figure 7: _Residual “pion” masses \(M_{\pi}^{\rm R}\) in the \(\delta\)-regime (\(L_{t}=32\)) for a variety of spatial sizes \(L\ll L_{t}\), and \(N_{\rm f}=2,\ldots,6\) flavors. We show chiral extrapolations of quenched, re-weighted results with overlap-hypercube fermions, at \(\beta=4\). The fits were performed in the range where they are successful, i.e. in the full range for \(N_{\rm f}=2\), and for \(L\leq 6\) for \(N_{\rm f}>2\). They lead to highly consistent values for \(F_{\pi}\), if we apply the effective formula (3.4)._

## 4 Witten-Veneziano formula in the Schwinger model

The Witten-Veneziano formula is well-known in the framework of QCD [34]: it refers to the 't Hooft large-\(N_{\rm c}\) limit, which keeps the product \(g_{\rm s}^{2}N_{\rm c}\) constant (\(g_{\rm s}\) is the strong gauge coupling and \(N_{\rm c}\) the number of colors). This limit overcomes the axial anomaly of chiral 3-flavor QCD. Hence the spontaneous symmetry breaking pattern takes the form \({\rm U(3)_{L}\otimes U(3)_{R}\to U(3)_{L=R}}\) (where the subscripts L and R denote the quark chiralities), and we obtain a nonet of Nambu-Goldstone bosons: they correspond to the pions, the kaons and the mesons \(\eta\) and \(\eta^{\prime}\), which are all massless in this limit. The Witten-Veneziano formula expresses the mass that the \(\eta^{\prime}\)-meson picks up due to the leading \(1/N_{\rm c}\)-corrections. For the more general case of \(N_{\rm f}\) massless quark flavors, this mass is given by \[M_{\eta^{\prime}}^{2}=\frac{2N_{\rm f}\chi_{\rm t}^{\rm q}}{F_{\eta^{\prime}}^{2}}\, \tag{4.1}\] where \(\chi_{\rm t}^{\rm q}\) is the quenched topological susceptibility, which can be measured by means of lattice simulations. In this particular case, the quenched value is relevant, because quark loops do not contribute to this order in the \(1/N_{\rm c}\)-expansion. Moreover, in this order the pion decay constant coincides with the \(\eta^{\prime}\)-decay constant, \[F_{\pi}=F_{\eta^{\prime}}. \tag{4.2}\] Inserting the experimental value of \(F_{\pi}\simeq 92.4\) MeV and simulation results for \(\chi_{\rm t}^{\rm q}\), see in particular Ref. [33], (roughly) confirms the observed mass \(M_{\eta^{\prime}}\simeq 958\,{\rm MeV}\). Thus the fact that \(\eta^{\prime}\) is far heavier than the light meson octet (and even a little heavier than a nucleon) is explained as a topological effect. This is the quantitative solution to the U(1) problem. According to Seiler and Stamatescu, the Witten-Veneziano relation is actually on more solid grounds in the framework of the Schwinger model with \(N_{\rm f}\geq 2\) massless fermion flavors [35].
In the chiral limit, it takes the form \[M_{\eta}^{2}=\frac{2N_{\rm f}}{F_{\eta}^{2}}\chi_{\rm t}^{\rm q}\, \tag{4.3}\] where the "\(\eta\)-meson" is the meson-type singlet state. Its mass has been computed analytically [31], \[M_{\eta}^{2}=\frac{1}{\pi}N_{\rm f}g^{2}. \tag{4.4}\] Ref. [35] further derived the following relation for the quenched topological susceptibility (in the continuum and infinite volume) \[\chi_{\rm t}^{\rm q}=\frac{g^{2}}{4\pi^{2}}. \tag{4.5}\] Figure 8 shows results for \(\chi_{\rm t}^{\rm q}/g^{2}\) obtained for two lattice formulations of the topological charge, \[Q_{\rm T}=\sum_{P}\theta_{\rm P}/2\pi\,\quad Q_{\rm S}=\sum_{P}\sin(\theta_{\rm P})/2\pi. \tag{4.6}\] The sums run over all plaquettes \(P\), and \(\theta_{\rm P}\) is the plaquette discretization of the topological density \(\epsilon_{\mu\nu}\partial_{\mu}A_{\nu}\). \(Q_{\rm T}\in\mathbb{Z}\) is the standard formulation, which can be numerically evaluated to high precision (see _e.g._ Ref. [36]). For the alternative formulation \(Q_{\rm S}\), the lattice topological charges are in general non-integer, but Ref. [37] derived an analytic formula at finite \(g\), _i.e._ at finite lattice spacing, in terms of Bessel functions, \(\beta\chi_{\rm t}^{\rm q}=I_{1}(\beta)/[4\pi^{2}I_{0}(\beta)]\). In both cases, we computed \(\chi_{\rm t}^{\rm q}\) at finite \(g\) also with Monte Carlo simulations. The results agree accurately, and the continuum limit smoothly leads to the value given in eq. (4.5), for both formulations, as Figure 8 shows. This result is also in agreement with Ref. [19].

Figure 8: _The quenched topological susceptibility \(\chi_{\rm t}^{\rm q}\), for two different lattice formulations of the topological charge density (standard plaquette term \(\theta_{\rm P}\) and \(\sin(\theta_{\rm P})\)). In the latter case, the topological charge \(Q_{\rm S}=\sum_{P}\sin(\theta_{\rm P})/2\pi\) can be computed analytically [37], while in case of \(Q_{\rm T}=\sum_{P}\theta_{\rm P}/2\pi\) the sum over the plaquettes can be computed numerically [36]. In both cases, the values are in excellent agreement with our simulation results. Both formulations consistently lead to the continuum limit with \(\chi_{\rm t}^{\rm q}/g^{2}=1/4\pi^{2}\), which was derived in Ref. [35]._

Inserting eqs. (4.4) and (4.5) into eq. (4.3), we obtain \[F_{\eta}=\frac{1}{\sqrt{2\pi}}. \tag{4.7}\] At this point, we push the analogy to large-\(N_{\rm c}\) QCD further and assume \(F_{\pi}=F_{\eta}\). We are not aware of a basic justification of this step, but it exactly confirms once more formula (1.2).

## 5 Summary and conclusions

In this work, we have attracted attention to a dimensionless constant, which plays a relevant role in the multi-flavor Schwinger model, but which has been ignored in most of the literature. The only exception is a study by Harada _et al._ [3] in the light-cone formulation, which led to the result that we quoted in eq. (1.4). By analogy to specific aspects of QCD, we denote this constant as \(F_{\pi}\), as it was done before in Ref. [3]. We derived its value by three further independent methods, which all provide consistent results. In particular, referring to the 2d Gell-Mann-Oakes-Renner relation and inserting formulae from bosonization approaches [6, 7, 8, 9] leads to \(F_{\pi}=1/\sqrt{2\pi}\), which is also compatible with simulation results.
The residual "pion" mass in the \(\delta\)-regime confirms this value to a good precision, if we rely on relations of Chiral Perturbation Theory even in the absence of Nambu-Goldstone bosons, and on our effective formula (3.4) for the number of light degrees of freedom. Finally, the Witten-Veneziano formula yields \(F_{\eta}=1/\sqrt{2\pi}\), and if we identify \(F_{\pi}=F_{\eta}\), as in large-\(N_{\rm c}\) QCD, we arrive once more at the same value for \(F_{\pi}\). The first method seems most robust. The latter two involve some _ad hoc_ assumptions, which are, however, motivated from analogies to QCD. The impressive agreement of the results for \(F_{\pi}\) cannot be by accident, so we conclude that these _ad hoc_ assumptions are -- in this context -- sensible. This concerns in particular our effective formula (3.4) for the "pionic" degrees of freedom, which are manifest in the \(\delta\)-regime, as well as the relation \(F_{\pi}=F_{\eta}\). It further implies that the constant \(F_{\pi}=1/\sqrt{2\pi}\) is indeed relevant for the multi-flavor Schwinger model, in particular for the case \(N_{\rm f}=2\). The underlying reason, as well as further appearances of \(F_{\pi}\) in the Schwinger model, remain to be explored. **Acknowledgments:** We thank Stephan Dürr, Christian Hoelbling and Satoshi Iso for helpful comments. The code was developed at the cluster Isabella of the Zagreb University Computing Centre (SRCE), and the production runs were performed at the cluster of the Instituto de Ciencias Nucleares, UNAM. This work was supported by the Faculty of Geotechnical Engineering of Zagreb University through the project "Change of the Eigenvalue Distribution at the Temperature Transition" (2186-73-13-19-11), by UNAM-DGAPA through PAPIIT project IG100219, "Exploración teórica y experimental del diagrama de fase de la cromodinámica cuántica", and by the Consejo Nacional de Ciencia y Tecnología (CONACYT). ## Appendix A The "pion" mass in the \(\epsilon\)-regime In Sections 2 and 4 we showed simulation results obtained on \(L\times L\) square lattices. In these cases, the measured "pion" mass \(M_{\pi}\) is close to its value in the thermodynamic limit (\(L\to\infty\)), since the condition \(L\gg 1/M_{\pi}\) is reasonably well approximated. Down to the corresponding values for the fermion mass \(m\), we also observed agreement of \(M_{\pi}\) calculated either with the correlation function of the density \(\bar{\psi}\sigma_{3}\psi\), or with \(\bar{\psi}\sigma_{1}\psi\). As usual, we refer to a Dirac operator in terms of \(\sigma_{1}\) and \(\sigma_{2}\), and both formulations have been used in the literature. The former is closer to the concept of the physical pion, but the latter is a valid alternative in the range of the plots in Sections 2 and 4. However, the situation changes when we proceed to even smaller values of \(m\). Here we enter the \(\epsilon\)-regime, where it is natural that \(M_{\pi}\) is significantly enhanced by finite-size effects. Moreover, we observed that these two formulations of \(M_{\pi}\) react very differently to the squeezing into a small physical volume, as we illustrate in Figure 9. For the formulation with \(\sigma_{3}\), one obtains a plateau with a residual "pion" mass \(M_{\pi}^{\sigma_{3}}\), similarly to the \(\delta\)-regime, which is the generic behavior. For the \(\sigma_{1}\)-formulation, however, \(M_{\pi}^{\sigma_{1}}\) approaches 0, closely following the relation \(M_{\pi}^{\sigma_{1}}\propto m\), which is an artifact due to the use of \(\sigma_{1}\).
In this sense the \(\sigma_{1}\)-formulation is a valid alternative only in large volumes. However, it is an amazing observation that \(M_{\pi}^{\sigma_{1}}\) at tiny \(m\lesssim 0.02\) accurately follows the prediction in eq. (36) of Ref. [7] in the second regime, where \(M_{\pi}L_{t}\ll 1\). We referred to it before in Section 2; in that prediction, there is no residual "pion mass" because Ref. [7] deals with a large spatial size \(L\). This is not the setting of our simulation, but \(M_{\pi}^{\sigma_{1}}\) follows this prediction to high accuracy. The reason for this observation remains to be explored. As the fermion mass increases to \(m\gtrsim 0.05\), the "pion masses" measured in both ways are close, \(M_{\pi}^{\sigma_{1}}\approx M_{\pi}^{\sigma_{3}}\), and they are now in the vicinity of the third regime of eq. (36) in Ref. [7], where \(M_{\pi}L\gg 1\) and \(M_{\pi}\propto m^{2/3}\). This is a technical observation, which could be of interest for future lattice studies, but it does not seem to be documented in the literature.
2306.01766
An interpretable wildfire spreading model for real-time predictions
Forest fires pose a natural threat with devastating social, environmental, and economic implications. The rapid and highly uncertain rate of spread of wildfires necessitates a trustworthy digital tool capable of providing real-time estimates of fire evolution and human interventions, while receiving continuous input from remote sensing. The current work aims at developing an interpretable, physics-based model that will serve as the core of such a tool. This model is constructed using easily understandable equations, incorporating a limited set of parameters that capture essential quantities and heat transport mechanisms. The simplicity of the model allows for effective utilization of data from sensory input, enabling optimal estimation of these parameters. In particular, simplified versions of combustion kinetics and mass/energy balances lead to a computationally inexpensive system of differential equations that provide the spatio-temporal evolution of temperature and flammables over a two-dimensional region. The model is validated by comparing its predictions and the effect of parameters such as flammable bulk density, moisture content, and wind speed, with benchmark results. Additionally, the model successfully captures the evolution of the firefront shape and its rate of spread in multiple directions.
Konstantinos Vogiatzoglou, Costas Papadimitriou, Konstantinos Ampountolas, Michail Chatzimanolakis, Petros Koumoutsakos, Vasilis Bontozoglou
2023-05-27T15:19:43Z
http://arxiv.org/abs/2306.01766v2
# An interpretable wildfire spreading model for real-time predictions

###### Abstract

Forest fires pose a natural threat with devastating social, environmental, and economic implications. The rapid and highly uncertain rate of spread of wildfires necessitates a trustworthy digital tool capable of providing real-time estimates of fire evolution and human interventions, while receiving continuous input from remote sensing. The current work aims at developing an interpretable, physics-based model that will serve as the core of such a tool. This model is constructed using easily understandable equations, incorporating a limited set of parameters that capture essential quantities and heat transport mechanisms. The simplicity of the model allows for effective utilization of data from sensory input, enabling optimal estimation of these parameters. In particular, simplified versions of combustion kinetics and mass/energy balances lead to a computationally inexpensive system of differential equations that provide the spatio-temporal evolution of temperature and flammables over a two-dimensional region. The model is validated by comparing its predictions and the effect of parameters such as flammable bulk density, moisture content, and wind speed, with benchmark results. Additionally, the model successfully captures the evolution of the firefront shape and its rate of spread in multiple directions.

## 1 Introduction

In recent years, the frequency and severity of natural disasters resulting from wildfires have witnessed an alarming rise. This trend is expected to persist in the future, primarily due to the combined effects of climate change and unfavorable human activities [25, 8]. Wildfires are destructive and rapidly spreading fires that engulf expansive areas covered in vegetation and constitute a major threat to communities at the wildland-urban interface (WUI) [34]. The primary concern revolves around the threat to human life, but the detrimental consequences of this natural hazard are evident across physical, social, and economic domains [19]. The most crucial information needed to protect lives and property and guide firefighter efforts is the rate of spread (ROS) of the firefront. Moreover, the fireline perimeter and area, along with firepower and intensity, represent essential quantities that are crucial to predict during situations of intense fire propagation [1]. Wildland fires are complex environmental phenomena that involve multiple spatial and temporal scales and physical processes, including the chemistry of fuel combustion and the physics of fluid flow and heat/mass transport [60, 57]. In particular, the fuel in forests and shrublands consists of particle-like materials (e.g., leaves, grass, twigs, and pine needles) of varying size, composition, and humidity. The heterogeneity of the biomass fuel in a forest area can be described as a random field, introducing a probabilistic character into the flame's direction. The heating of the fuel particles that initiates combustion is evidently provided by radiation and convection, though their relative contributions are under debate [24, 27]. Heat convection is influenced by airflow above the plant canopy, dictated by local meteorological conditions and terrain morphology [30, 31], and is in addition affected by convection currents triggered by flame instabilities [23].
Although numerous models have been proposed over the years to develop hypotheses on how fire grows and spreads, there are still many questions concerning the physics underlying these phenomena [54]. Models aiming at the prediction of wildland fire dynamics, and in particular the rate of spread, may be classified into three categories. At the simplest, most operation-oriented end are models that propose empirical functional relations between the ROS and parameters such as wind speed, fuel bulk density, and humidity [64, 63, 20, 40, 15, 47]. At the other end of the spectrum are three-dimensional computational fluid dynamics (CFD) simulations that aim at fully describing all physical phenomena over a large range of scales. Typical examples are the so-called multiphase models [57], which resolve lengthscales of tens of centimeters and provide detailed information on the combustion process [44]; the wildfire models at the scale of meters, such as FIRETEC [37, 6]; and the atmospheric boundary layer models, which discretize at the lengthscale of hundreds of meters and provide an overall view of the fire progression [38, 13]. The CFD models are computationally intensive, so their main contribution at present is in the elucidation of the governing physical mechanisms operative at the various spatial scales. Between empirical functional relations and CFD tools, a number of simple but interpretable models have been proposed that describe the essential physics while being computationally tractable, with the aim of providing real-time predictions [14, 39, 56, 4]. A key challenge for these models is the ability to include a multitude of parameters that affect the firefront progression, such as fuel density, composition, humidity, wind speed/direction and terrain morphology. More importantly, such models should be capable of reliable predictions subject to significant uncertainties, such as those imposed by canopy heterogeneity and the time-variation of weather conditions. Here, we propose a model that entails simplified forms of mass and energy balances that are averaged over the plantation height. The model is built around a small number of interpretable parameters that correspond to dominant physical quantities or transport mechanisms. The model is designed to be utilized in real-time during an active fire, continuously gathering feedback through remote sensing techniques such as infrared cameras, weather radar, sensors, and satellites positioned in orbit or geostationary satellites [41, 35, 48]. In this context, a sequence of real-time predictions and comparisons with actual fire evolution will permit optimum parameter estimation, taking into account all the aforementioned uncertainties. We argue that the model provides a useful framework to support risk-informed decisions in the management of wildfires to help safeguard communities and natural resources. In this paper we present model validation studies and compare model predictions with one- and two-dimensional benchmark problems for the propagation of the firefront. The paper is structured as follows: The model derivation is described in Section 2. Section 2.1 introduces the simplified mass balances and reaction kinetics that approximate the real combustion processes, while Section 2.2 formulates the simplified energy balance in terms of the defined dispersion and convection coefficients, Section 2.3 introduces the effect of wind, and Section 2.4 summarizes the model equations and clarifies the numerical implementation.
In Section 3 the model is validated and used to predict results, first for one-dimensional fire propagation (straight frontline), and then for two-dimensional propagation from a localized or an extended ignition site. Finally, conclusions are drawn and future refinements are outlined in Section 4. ## 2 Modeling ### Problem setup and mass balances The proposed model incorporates the main physical mechanisms that are known to affect the rate of spread of a fire front, including reaction kinetics for water evaporation and wood combustion, heat transfer by convection and dispersion due to the motion of the gaseous phase (air/flue gases) through the plantation, and heat loss to the ambient by free convection and radiation. We consider a field extending in the \(x-\) and \(y-\)direction, covered by a plantation of uniform height \(H\) ([=] m). Following standard convention [50], the bulk density of solid material, \(m_{s,0}=\alpha\rho_{s}\) ([=] kg/m\({}^{3}\)), is defined as the product of the volume fraction or packing ratio of solid, \(\alpha\) ([=] solid m\({}^{3}\)/total m\({}^{3}\)), and the material density, \(\rho_{s}\) ([=] kg/m\({}^{3}\)). For example, \(\rho_{s}=700\) kg/m\({}^{3}\) for Quercus coccifera (an oak native to the Mediterranean region) and 500 kg/m\({}^{3}\) for Brachypodium ramosum (a perennial grass native to Europe and Asia). The solid phase consists of two components: water and combustibles, the latter including non-aqueous volatiles (e.g., CO, CO\({}_{2}\), NO\({}_{x}\), VOC) and char. Thus, \(m_{s,0}=m_{s1,0}+m_{s2,0}\), where \(m_{s1,0}/m_{s,0}\) is the original water content of the wood and \(m_{s2,0}/m_{s,0}\) the initial content of combustibles. Equivalently, the fuel moisture content (FMC), defined traditionally on a dry basis, is \(\text{FMC}=100\,(m_{s1,0}/m_{s2,0})\). Water content is typically classified as the humidity of live and/or dead plants. While the former varies mainly with plant type and time of the year, the latter is a function of air humidity and fuel size (the smaller the fuel particle, the faster it equilibrates with air humidity). With decreasing moisture content, the fuel is more flammable, and the fire spreads faster [51, 43]. During the combustion process, the remaining mass of solid material is \(m_{s}(x,y,t)=m_{s1}(x,y,t)+m_{s2}(x,y,t)\). The rest of the field volume is the gaseous phase, whose mass, \(m_{g}\) ([=] kg/m\({}^{3}\)), increases with time due to the conversion of solid material into gaseous products. Thus, the conservation of total mass for a closed system, gives: \[m_{s}+m_{g}=\alpha\rho_{s}+(1-\alpha)\rho_{g}, \tag{1}\] where \(\rho_{g}\) ([=] kg/m\({}^{3}\)) is a representative gas density. Equation (1) may be written in dimensionless form by defining the remaining mass fraction of water, \(S_{1}=m_{s1}/m_{s,0}\) (endothermic fuel mass fraction), and combustibles, \(S_{2}=m_{s2}/m_{s,0}\) (exothermic fuel mass fraction), with \(S=S_{1}+S_{2}\) (solid total fuel mass fraction), and also \(S_{g}=m_{g}/m_{s,0}\). Thus: \[S+S_{g}=1+\frac{1-\alpha}{\alpha}\lambda, \tag{2}\] where \(\lambda\) is the density ratio \(\rho_{g}/\rho_{s}\). We remark that parameter \(\alpha\) is a small number, typically of order \(10^{-3}\)[49]. In general, flame progression represents constant combustion that is influenced mostly by characteristics of the biomass fuel, such as moisture, surface roughness, and geometry. 
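For concreteness, the composition bookkeeping of eqs. (1) and (2) can be evaluated with the representative values quoted in this paper (values as in table 2: \(\alpha=0.002\), \(\rho_{s}=700\) kg/m\({}^{3}\), \(\rho_{g}=1\) kg/m\({}^{3}\), FMC = 25%); a minimal Python sketch:

```python
# Fuel-composition bookkeeping, eqs. (1)-(2); parameter values are the
# representative ones quoted in the text (table 2).
alpha, rho_s, rho_g = 0.002, 700.0, 1.0
FMC = 25.0                            # fuel moisture content, dry basis (%)

m_s0 = alpha * rho_s                  # bulk density of solid, kg/m^3
m_s2_0 = m_s0 / (1.0 + FMC / 100.0)   # initial combustibles, kg/m^3
m_s1_0 = m_s0 - m_s2_0                # initial water, kg/m^3

S1_0 = m_s1_0 / m_s0                  # endothermic fuel mass fraction
S2_0 = m_s2_0 / m_s0                  # exothermic fuel mass fraction

lam = rho_g / rho_s                   # density ratio
Sg_0 = 1.0 + (1.0 - alpha) / alpha * lam - (S1_0 + S2_0)   # from eq. (2)

print(f"m_s0 = {m_s0:.2f} kg/m^3, S1_0 = {S1_0:.2f}, "
      f"S2_0 = {S2_0:.2f}, Sg_0 = {Sg_0:.3f}")
```

With these values, \(S_{1,0}=0.2\) and \(S_{2,0}=0.8\), consistent with \(\text{FMC}=100\,S_{1,0}/S_{2,0}=25\%\).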
However, not only the combustible material but also the weather and the rest of the environmental conditions inside and around the canopy can either enhance or reduce fire intensity and progression rate. As a simplified description of a very complex set of chemical and physical processes [59], combustion may be considered to initiate with fuel dehydration, followed by pyrolysis and charring reactions, and conclude with oxidation reactions. While dehydration is endothermic, the start of pyrolysis reactions (T > 550-600 K) marks the onset of the flammable exothermic part [53; 17]. However, it is mainly in the last stage that the environmental conditions (turbulence, buoyant instabilities) play a key role. They may allow or prevent oxygen delivery to the burning fuel (resulting for example in glowing or smoldering fire oxidation [52]), and transfer heat to adjacent fuel, thus spreading the fire. The wood dehydration and disintegration/combustion processes are modeled by two consecutive reactions, following first-order Arrhenius kinetics. Thus, the variation in water content with time is expressed as follows: \[\frac{\partial S_{1}}{\partial t}=-S_{1}\,r_{1}, \tag{3}\] where \[r_{1}=c_{s1}\,e^{-\frac{b_{1}}{T}}, \tag{4}\] with \(T\) ([=] K) representing the temperature of the fire layer, while parameters \(c_{s1}\) ([=] s\({}^{-1}\)) and \(b_{1}\) ([=] K) may be used to quantify differences in behavior between dead and live moisture content [51]. A similar expression, with different constants, is used for the rate, \(r_{2}\), of the burning process. However, in this case, the reaction rate is limited not only by the low temperature but also by a lack of oxygen [36]. Considering the two resistances in series, the final reaction rate is modeled as: \[r_{2t}=\frac{r_{2}r_{m}}{r_{2}+r_{m}}, \tag{5}\] where \(r_{m}\) ([=] s\({}^{-1}\)) is the rate of oxygen arrival to the burning solid. Finally, the temporal variation of combustibles is given by: \[\frac{\partial S_{2}}{\partial t}=-S_{2}\,r_{2t}=-S_{2}\,\frac{c_{s2}\,e^{-\frac{ b_{2}}{T}}r_{m}}{c_{s2}\,e^{-\frac{b_{2}}{T}}+r_{m}}. \tag{6}\] The rate \(r_{m}\) is empirically determined to take values in the range [\(10^{-3}\), \(10^{-2}\)] (very small values result in fire extinction, very large values in unrealistically high flame temperatures). It is further expected to be an increasing function of wind speed and is presently approximated as, \(r_{m}=r_{m,0}+0.004(\left\langle u\right\rangle-1)\), where \(r_{m,0}=0.002\) s\({}^{-1}\) is an initial rate of oxygen arrival without wind, and \(\left\langle\mathbf{u}\right\rangle\) ([=] m/s) is the local mean velocity across the plantation. ### The energy balance The energy balance is based on the assumption of local equilibrium [55], i.e., that the solid and gaseous phases have the same temperature. As a result, we may write the following equation for a volume of unit surface area and height \(H\): \[\left(m_{s}\,c_{ps}+m_{g}\,c_{pg}\right)\frac{\partial T}{\partial t}=-A_{1} \,m_{s1}\,r_{1}+A_{2}\,m_{s2}\,r_{2t}-m_{g}\,c_{pg}\left\langle\mathbf{u} \right\rangle\cdot\nabla T+m_{g}\,c_{pg}\nabla\cdot\left(\mathbf{D}_{\mathrm{ eff}}\cdot\nabla T\right)-\frac{U}{H}(T-T_{a}). \tag{7}\] Terms \(c_{ps}\) and \(c_{pg}\) ([=] J/kgK) are the heat capacities of the solid and gaseous phases, and \(A_{1}\), \(A_{2}\) ([=] J/kg) are the standard heats of the endothermic (water evaporation) and exothermic (pyrolysis and combustion) reactions. 
Term \(\left\langle\mathbf{u}\right\rangle\) is the mean velocity vector through the plantation, and the respective term is the convective contribution to the energy balance. Term \(\mathbf{D}_{\mathrm{eff}}\) ([=] m\({}^{2}\)/s) is a dispersion coefficient, and \(U\) ([=] W/m\({}^{2}\)K) is an overall heat transfer coefficient for thermal losses to the environment, with \(T_{a}\) representing the ambient temperature and \(\nabla\) being the gradient operator. The dispersion coefficient quantifies short-range heat transport by radiation, buoyant instabilities, and turbulent eddies. Radiation has long been considered the main transport mechanism [60, 56] for preheating to ignite the unburned fuel in front of the flame. However, this view was questioned [24], and it has been argued that instabilities caused by buoyant dynamics and unsteady convection (flame intermittency) dominate local transport [23] when the wind is high enough for the flame to be tilted. As \(\mathbf{D}_{\mathrm{eff}}\) depends on the local velocity and a characteristic length in each direction [16], the dispersion coefficient is actually a \(2\times 2\) diagonal matrix with components \(D_{\mathrm{eff,x}}\) and \(D_{\mathrm{eff,y}}\). In order to model all the above contributions, the following form is adopted for the components of the dispersion coefficient: \[D_{\mathrm{eff,x}}=D_{\mathrm{rb}}+A_{d}\left\langle u_{x}\right\rangle L_{x}\left(1-e^{-\gamma_{d}w_{x}}\right),\quad D_{\mathrm{eff,y}}=D_{\mathrm{rb}}+A_{d}\left\langle u_{y}\right\rangle L_{y}\left(1-e^{-\gamma_{d}w_{y}}\right), \tag{8}\] where \(D_{\rm rb}\) ([=] m\({}^{2}\)/s) is a representative constant that models buoyant dynamics and radiation, \(\left<u_{x}\right>\), \(\left<u_{y}\right>\) are the components of the mean velocity vector through the plantation, and \(A_{d}=0.125\) is a selected dispersion constant. The first term in eq. (8) is the contribution of radiation and buoyancy, and the second is the intensification of dispersion due to the wind. Two new length vectors, \(\mathbf{L}\) and \(\mathbf{w}\), are introduced in eq. (8), which represent global lengthscales of the burning field and are updated every time-step. The vector \(\mathbf{L}=(L_{x},\ L_{y})\) is the distance from the location of global maximum temperature over the field, \(T_{\rm max}\), to the points in the \(x-\) and \(y-\)direction where temperature has dropped down to \(T=0.1\ T_{max}+T_{a}\). Thus, it provides a measure of the width of the burning zone. The term in parenthesis on the right-hand side of eq. (8) expresses a mitigation of the effect of wind when the fire is spatially restricted, i.e., when the fireline is short [10]. It is known that the buoyant column of rising air partially obstructs the ambient air flow, and as a result pressure gradients develop that tend to redirect it. However, as has been shown by numerical simulations [6], a longer fireline will tend to allow more wind to push through, in contrast to a shorter fireline where the wind can be redirected more efficiently around the fire. In order to model this effect, the extent of the fireline in the \(x-\) and \(y-\)direction is expressed by the vector \(\mathbf{w}=(w_{x},\ w_{y})\). This vector is determined at every time-step, by considering the temperature maxima in one direction (say \(x\)) and their variation in the other direction (say \(y\)).
The resulting array, \(T_{\rm max}(x_{i},y_{i})\), is used to define the endpoints \((x_{A},y_{A})\) and \((x_{B},y_{B})\) of the fireline by the condition \(T_{\rm max}>550\ K\). Then, the vector \(\mathbf{w}\) is defined as \(w_{x}=|x_{A}-x_{B}|\), \(w_{y}=|y_{A}-y_{B}|\). Varying the parameter \(\gamma_{d}\) ([=] m\({}^{-1}\)) in eq. (8) increases or decreases the fireline length at which the asymptotic limit (of the ROS of a long fireline) is practically reached. Term \(U\) ([=] W/m\({}^{2}\)K) is an overall heat transfer coefficient that includes free convection and radiation, according to the expression: \[U=h_{nc}+\varepsilon\sigma_{b}\left(T^{2}+T_{a}^{2}\right)(T+T_{a})=A_{nc}(T-T_{a})^{1/3}+\varepsilon\sigma_{b}\left(T^{2}+T_{a}^{2}\right)(T+T_{a}), \tag{9}\] where \(A_{nc}\approx 0.2\) sums up terms from the correlation \(Nu_{nc}=0.15\ Gr^{1/3}Pr^{1/3}\), valid for a horizontal hot surface and \(Gr>10^{7}\): \[\frac{h_{nc}l_{nc}}{k}=0.15\left(\frac{g\beta}{\nu^{2}}\right)^{1/3}l_{nc}\ Pr^{1/3}(T-T_{a})^{1/3}\Leftrightarrow h_{nc}=\left[0.15\left(\frac{g\beta}{\nu^{2}}\right)^{1/3}\ Pr^{1/3}k\right](T-T_{a})^{1/3}. \tag{10}\] Here, \(h_{nc}\) ([=] W/m\({}^{2}\)K) is the natural convection coefficient, \(\varepsilon\) is the emissivity, \(\sigma_{b}\) ([=] W/m\({}^{2}\)K\({}^{4}\)) is the Stefan-Boltzmann constant, \(Nu_{nc}\) is the Nusselt number for natural convection, \(Gr\) is the Grashof number, \(Pr\) is the Prandtl number, \(g\) ([=] m/s\({}^{2}\)) is the acceleration of gravity, \(\beta\) ([=] K\({}^{-1}\)) is the thermal expansion coefficient, which is equal to \(1/T\) for ideal gases, \(\nu\) ([=] m\({}^{2}\)/s) is the kinematic viscosity, and \(k\) ([=] W/mK) is the thermal conductivity of the hot gaseous products. The characteristic lengthscale for natural convection, \(l_{nc}\), is usually taken as the ratio of cross-sectional area to perimeter of the hot surface, but it drops out in the high \(Gr\) limit. ### Gas flow through the canopy The transfer processes involved in the energy balance depend strongly on the motion of the gaseous phase through the canopy, which is expressed as a mean velocity, \(\langle u\rangle\), across the plantation height, \(H\). This velocity is determined by the wind speed above the plantation and the resistance to air motion exerted by the plantation. More specifically [33, 30], the velocity field of the atmospheric boundary layer exerts a shear \(\tau\) ([=] N/m\({}^{2}\)) at the top of the canopy and imparts momentum to the underlying air. However, this momentum is dissipated not on the ground but throughout the canopy. In the following, the wind is defined by the velocity, \(u_{10}\), at 10 m above the ground, as is standard for meteorological measurements. Following Inoue [33] (see fig. 1), the turbulent flow above the plantation is described by the following expression (law of the wall), and the nominal wind speed, \(u_{10}\), is recovered by setting the height \(z=10\) m: \[u_{v}(z)=\frac{u_{v*}}{\kappa}\,\ln\left(\frac{z-d}{z_{0}}\right). \tag{11}\] In eq. (11), \(\kappa=0.41\) is von Kármán's constant, \(z_{0}\) ([=] m) is the surface roughness at the top of the plantation [58] and \(u_{v*}\) ([=] m/s) is the friction velocity, which is evaluated by the substitution \(u_{v}(z=10\,{\rm m})=u_{10}\). The height \(d\) ([=] m) corresponds to the "nominal ground" as experienced empirically by the air flow above the plantation.
In other words, the tentative extension of the logarithmic velocity profile inside the plantation goes to zero at \(z=d+z_{0}\). This is also the location where the "nominal wall shear stress", \(\tau_{w}\), applies. The height \(d\) decreases with wind speed, i.e., with increasing \(u_{10}\) the effect of the wind penetrates deeper inside the plantation. This tendency is currently approximated as follows:

\[d=(H-z_{0})-\delta\,u_{10}, \tag{12}\]

where \(\delta\) ([=] s) and \(z_{0}\) are expected to vary with the thickness of the plantation (see table 1 for suggested representative values).

\begin{table} \begin{tabular}{c c c} \hline **Parameter** & **Sparse Canopy** & **Dense Canopy** \\ \hline \(z_{0}\) & 0.5 m & 0.25 m \\ \(\delta\) & 0.08 s & 0.04 s \\ \(\eta\) & 2 & 3 \\ \hline \end{tabular} \end{table} Table 1: Representative values of the parameters \(z_{0}\), \(\delta\) and \(\eta\) for a sparse and a dense canopy.

The actual velocity profile inside the plantation is determined by the following force balance over a differential section:

\[\frac{d\tau}{dz}=\rho_{g}C_{d}\,A_{pl}u^{2}, \tag{13}\]

where \(C_{d}\approx 0.25\) is an indicative value for the drag coefficient [28], and \(A_{pl}\) ([=] m\({}^{-1}\)) is the total surface area that exerts drag on the flow, per unit control volume (\(A_{pl}\) is usually expressed as the product \(A_{pl}=\alpha\,s_{pl}\), where \(s_{pl}\) is the area per solid volume ratio [50]). Invoking the mixing-length hypothesis as a simple turbulence closure model, we may write:

\[\tau=l_{m}^{2}\,\left(\frac{du}{dz}\right)^{2}, \tag{14}\]

where it has been argued [33, 30] that the fluid dynamics inside the plantation are satisfactorily captured by assuming a constant eddy size, \(l_{m}\) ([=] m). Combining eq. (13) with eq. (14) and setting \(u(H)=u_{H}\) for the velocity at the top of the plantation leads to the following exponential profile inside the canopy:

\[u_{v}(z)=u_{H}e^{-\eta(1-z/H)}, \tag{15}\]

where

\[\eta=H\left(\frac{C_{d}\,A_{pl}}{2\,l_{m}^{2}}\right)^{1/3}, \tag{16}\]

with typical values in the range \(\eta\in[2,3]\), see [33].

Figure 1: Wind profile inside and above the unburned plantation.

Finally, the mean velocity across the plantation height is calculated as:

\[\langle u_{v}\rangle=\frac{1}{H}\,\int_{0}^{H}u_{v}(z)dz=\frac{u_{H}}{\eta}\,(1-e^{-\eta})\,. \tag{17}\]

The mean velocity \(\langle u_{v}\rangle\) calculated above describes the air motion ahead of the fire front and into the unburned region and is thus relevant to the rate of spread of the fire. However, behind the front, the air flow meets less drag resistance, as foliage and branches have been to a large extent eliminated by the fire. This latter air flow must thus be described by a higher mean velocity value, \(\langle u_{b}\rangle\). Of course, the streamwise gradient of velocity thus imposed will generate pressure gradients that will drive the excess air flow around and over the fireline [6]. In order to express the aforementioned effect, we consider the logarithmic velocity profile over bare ground of roughness \(z_{0}\):

\[u_{b}(z)=\frac{u_{b*}}{\kappa}\,\ln\left(\frac{z}{z_{0}}\right), \tag{18}\]

and define the mean velocity, \(\langle u_{b}\rangle\), over the height \(H\) that corresponds to the unburned plantation ahead of the fire front. Thus:

\[\langle u_{b}\rangle=\frac{1}{H-z_{0}}\int_{z_{0}}^{H}u_{b}(z)dz=\frac{u_{b*}}{\kappa}\left[\frac{H}{H-z_{0}}\ln\left(\frac{H}{z_{0}}\right)-1\right], \tag{19}\]

where the friction velocity, \(u_{b*}\), is evaluated by the substitution \(u_{b}(z=10\,{\rm m})=u_{10}\). As \(\langle u_{v}\rangle\) is relevant to the intact, and \(\langle u_{b}\rangle\) to the totally burned-down canopy, we define the varying streamwise air velocity as a function of the remaining fraction of combustibles, \(x_{c}=S_{2}/S_{2,0}\). In particular, we presently use a simple linear approximation, and thus the mean local velocity to be applied in eq. (7) is calculated as:

\[\langle u\rangle=\langle u_{v}\rangle+\left(\langle u_{b}\rangle-\langle u_{v}\rangle\right)(1-x_{c}). \tag{20}\]

We note that the mean velocity \(\langle u\rangle\) is a vector quantity. However, in the case of horizontal topography (to which the present paper is restricted), the air velocity inside the burning area is always co-current with the wind. This is why no vector notation has been used in the above derivations.
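The chain of eqs. (11)-(20) reduces to a few closed-form evaluations; a minimal sketch, using the dense-canopy parameters of table 1 and assumed values for \(u_{10}\) and \(x_{c}\):

```python
import numpy as np

# Mean canopy velocity ahead of the front, eqs. (11)-(17), and behind it,
# eqs. (18)-(19); dense-canopy parameters from table 1 (z0, delta, eta).
kappa, H = 0.41, 2.0
z0, delta, eta = 0.25, 0.04, 3.0
u10 = 6.0                                  # assumed wind speed at 10 m, m/s

d = (H - z0) - delta * u10                 # eq. (12), nominal ground
u_vstar = kappa * u10 / np.log((10.0 - d) / z0)   # from u_v(10 m) = u10
u_H = (u_vstar / kappa) * np.log((H - d) / z0)    # eq. (11) at z = H
u_v = (u_H / eta) * (1.0 - np.exp(-eta))          # eq. (17)

u_bstar = kappa * u10 / np.log(10.0 / z0)         # from u_b(10 m) = u10
u_b = (u_bstar / kappa) * (H / (H - z0) * np.log(H / z0) - 1.0)  # eq. (19)

xc = 0.5                                   # assumed remaining combustibles
u_mean = u_v + (u_b - u_v) * (1.0 - xc)    # eq. (20)
print(f"u_v = {u_v:.2f} m/s, u_b = {u_b:.2f} m/s, u_mean = {u_mean:.2f} m/s")
```

As expected, the mean velocity behind the front exceeds the one ahead of it, and eq. (20) interpolates between the two as the fuel is consumed.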
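A stripped-down, one-dimensional version of this scheme can make the structure of eqs. (21)-(23) concrete. The sketch below substitutes simple forward-Euler stepping for Adams-Bashforth, assumes a constant mean velocity and dispersion coefficient, and wraps the boundaries periodically, so it illustrates the discretization rather than reproducing the production code:

```python
import numpy as np

# Representative constants from tables 1-2; u_mean and D_eff are
# assumed constant here (in the full model they follow eqs. (8)-(20)).
c_s1, b1, c_s2, b2 = 30.0, 4500.0, 40.0, 7000.0
r_m = 0.002                       # oxygen-arrival rate (no wind), s^-1
alpha, rho_s, rho_g = 0.002, 700.0, 1.0
c_ps, c_pg, A1, A2 = 1800.0, 1043.0, 22e5, 2e7
H, Ta, eps, sigma_b = 2.0, 300.0, 0.3, 5.67e-8
gamma, lam = c_pg / c_ps, rho_g / rho_s
lam1, lam2, lam3 = A1 / c_ps, A2 / c_ps, H * rho_s * c_ps
u_mean, D_eff = 1.0, 0.5          # assumed, m/s and m^2/s

nx, dx, dt = 600, 1.0, 0.01
x = np.arange(nx) * dx
T = Ta + 900.0 * np.exp(-(((x - 150.0) / 20.0) ** 2))   # Gaussian spike
S1, S2 = np.full(nx, 0.2), np.full(nx, 0.8)

def step(T, S1, S2):
    r1 = c_s1 * np.exp(-b1 / T)                   # eq. (4)
    r2 = c_s2 * np.exp(-b2 / T)
    r2t = r2 * r_m / (r2 + r_m)                   # eq. (5)
    S = S1 + S2
    c0 = alpha * S + (1 - alpha) * lam * gamma + alpha * gamma * (1 - S)
    c1 = c0 - alpha * S
    U = 0.2 * np.abs(T - Ta) ** (1 / 3) \
        + eps * sigma_b * (T**2 + Ta**2) * (T + Ta)      # eq. (9)
    Txx = np.gradient(np.gradient(T, dx), dx)     # central 2nd derivative
    Tx = (T - np.roll(T, 1)) / dx                 # 1st-order upwind (u > 0)
    dT = (c1 / c0) * (D_eff * Txx - u_mean * Tx) \
         - (alpha * lam1 / c0) * S1 * r1 \
         + (alpha * lam2 / c0) * S2 * r2t \
         - (U / (lam3 * c0)) * (T - Ta)           # eq. (23)
    return T + dt * dT, S1 * (1 - dt * r1), S2 * (1 - dt * r2t)

for _ in range(int(600 / dt)):                    # 600 s of spread
    T, S1, S2 = step(T, S1, S2)
print("firefront location (m):", x[np.argmax(T)])
```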
A localized temperature spike with \(T_{i,max}=1200\) K is used as the initial condition, described either by a rectangular step-function of width \(10-30\) m, or by a Gaussian spike with typical standard deviation \(\sigma=20\) m. As for the other two dependent variables, a uniform endothermic, \(S_{1,0}\), and exothermic, \(S_{2,0}\), fuel composition is considered across the domain. An open outflow boundary condition is implemented at the downstream boundaries, in order to allow the firefront to move smoothly out of the computational domain without significant backward influence [46, 18]. This condition essentially amounts to extending the validity of the discretized equations at the boundary nodes, using appropriate one-sided approximations for the spatial derivatives. The effectiveness of this condition is confirmed by the very good agreement with representative runs on a computational domain twice the regular size.

## 3 Model validation and results

### A representative one-dimensional simulation

Having developed a model for wildfire spreading, the next objective is to probe its behavior and compare its predictions to observations that are known from laboratory experiments and field studies. The one-dimensional (1D) version of the model is first used to investigate the effects of some fuel properties (such as bulk density, moisture content, and particle size) and of wind speed, and to demonstrate in a simple way the mechanisms that should be considered more carefully in a two-dimensional (2D) case. The 1D version corresponds to a very long fireline that moves uniformly in the longitudinal direction. As it is known that the rate of spread, ROS ([=] m/s), increases with the length \(w\) ([=] m) of the firefront [11, 37], the 1D model is expected to provide an upper limit to the actual ROS. The 2D version will be employed in the last subsection, in order to investigate the pattern of spread of the fire in various directions and the spatiotemporal evolution of the shape of the firefront. The results for a representative case are shown in fig. 2, which correspond to the parameter values listed in table 2 and the values for a dense canopy in table 1. Figure 2(a) depicts the spatial variation of temperature for (a\({}_{1}\)) \(u_{10}\) = 3 m/s and (a\({}_{2}\)) \(u_{10}\) = 10 m/s at time instants \(t=0,800,1600\) and \(2400\) s from the onset of the initial spikes (solid lines). The progression of the firefront with time is evident (dashed lines), and the ROS is readily calculated from the displacement of the temperature maximum and the corresponding time lag. Such a calculation indicates that sometimes the ROS increases gradually with time, a behavior reminiscent of fire's acceleration with size. Such a concept has been included in some prediction models, such as FARSITE or the Forestry Canada Fire Danger Group [50]. Aiming to overcome possible ambiguity, the ROS reported in the parametric investigations of this section is the mean value calculated from the crest progression between 1000 s and 1500 s from ignition. Figure 2(b) depicts the spatial distribution of \(S_{1}\) and \(S_{2}\), i.e., the dimensionless mass of water and combustibles remaining in the solid phase, for (b\({}_{1}\)) \(u_{10}\) = 3 m/s and (b\({}_{2}\)) \(u_{10}\) = 10 m/s at the same time instants \(t=0,800,1600\) and \(2400\) s from ignition. As expected, water has totally evaporated everywhere the flame has passed (blue lines) before the exothermic reaction takes place (red lines).
However, varying amounts of combustibles remain behind, depending on how fast the combustion proceeds and how efficiently the burned area is cooled down by the incoming wind. The increase in consumed combustibles with downstream distance explains the acceleration of the ROS that is sometimes observed: As fire moves ahead, it intensifies and burns more fuel during its passage, thus the firefront spreads faster. However, a steady-progressive state evidently establishes further downstream, which corresponds to an asymptotic limit \(S_{2}\to S_{2,min}\). As an alternative representation of the same process, fig. 2(c) plots the three variables \(T/T_{max}\), \(S_{1}\) and \(S_{2}\) as functions of time, for (c\({}_{1}\)) \(u_{10}=3\) m/s and (c\({}_{2}\)) \(u_{10}=10\) m/s with \(t\in[0,2400]\) s, at two different spatial locations, x = \(220,270\) m and x = \(400,700\) m respectively, from the ignition site (term \(T_{\max}\) is the maximum temperature achieved at each location over time). The onset of combustion with the arrival of the firefront is evident, as well as the temperature decline behind the front, depicted by a tail that may be longer or shorter depending mainly on the wind speed. The remaining amount of combustibles, \(S_{2}\), stabilizes to a constant value after some time, indicating extinction below the minimum ignition temperature. Again, it is evident that the remaining combustibles may decrease somewhat with downstream distance.

\begin{table} \begin{tabular}{l l|l l|l l|l l} \hline \hline **Parameter** & **Value** & **Parameter** & **Value** & **Parameter** & **Value** & **Parameter** & **Value** \\ \hline FMC & 25\% & \(c_{ps}\) (J/kgK) & 1800 & \(A_{1}\) (J/kg) & \(22\cdot 10^{5}\) & \(\sigma\) (m) & 20 \\ \(H\) (m) & 2 & \(c_{pg}\) (J/kgK) & 1043 & \(A_{2}\) (J/kg) & \(2\cdot 10^{7}\) & \(C_{d}\) & 0.25 \\ \(T_{a}\) (K) & 300 & \(c_{s1}\) (s\({}^{-1}\)) & 30 & \(\sigma_{b}\) (W/m\({}^{2}\)K\({}^{4}\)) & \(5.67\cdot 10^{-8}\) & \(A_{d}\) & 0.125 \\ \(T_{max}\) (K) & 1200 & \(c_{s2}\) (s\({}^{-1}\)) & 40 & \(D_{rb}\) (m\({}^{2}\)/s) & 0.1 & \(A_{nc}\) & 0.2 \\ \(\rho_{s}\) (kg/m\({}^{3}\)) & 700 & \(b_{1}\) (K) & 4500 & \(r_{m,0}\) (s\({}^{-1}\)) & 0.002 & \(\alpha\) & 0.002 \\ \(\rho_{g}\) (kg/m\({}^{3}\)) & 1 & \(b_{2}\) (K) & 7000 & \(\gamma_{d}\) (m\({}^{-1}\)) & 0.03 & \(\varepsilon\) & 0.3 \\ \hline \hline \end{tabular} \end{table} Table 2: Representative values of the model parameters.

Figure 2: (a) The spatial distribution of temperature at time instants \(t=0,800,1600\) and \(2400\) s, (b) the spatial distribution of \(S_{1}\) and \(S_{2}\) at the same time instants, (c) the temporal variation of dimensionless temperature \(T/T_{\max}\), \(S_{1}\) and \(S_{2}\). The subscript 1 (left column) is for \(u_{10}=3\) m/s and the subscript 2 (right column) for \(u_{10}=10\) m/s, while the localized ignition area is everywhere at \(x_{0}=150\) m from the origin.

### Fuel modeling flexibility: bulk density, moisture content and particle size

The model offers a number of parameters that may be combined to provide a realistic representation of the fuel's properties. The effect of some of these on the predicted ROS is considered next, and the predictions are set in perspective with experimental data and/or detailed, three-dimensional simulations in the literature. The bulk density of solid material, defined as \(m_{s,0}=\alpha\rho_{s}\), in terms of the volume fraction of solid, \(\alpha\), and the material density, \(\rho_{s}\), is known to have a systematic effect on the fire spread rate.
More specifically, it has been observed [62, 7, 65, 9] that increasing bulk density leads to slower ROS. This behavior has been mathematically expressed by an inverse power law, \(\text{ROS}\sim m_{s,0}^{-\zeta}\), which appears to satisfactorily describe both laboratory experiments and field observations [63, 20, 40, 62, 65, 9]. However, the proposed exponent varies widely among the above studies, moving in the range \([0.23,0.73]\). The present model confirms the central role of bulk density. In particular, keeping all other parameters constant as outlined in tables 1, 2, the ROS is found to vary only with \(m_{s,0}\), and not with \(\alpha\) or \(\rho_{s}\) independently. The inverse power law is also followed by the model, with the exponent, \(\zeta\), actually being an increasing function of the air velocity, \(u_{10}\), above the canopy. Indicative examples for \(u_{10}=1\), \(5\), \(10\) and \(12\) m/s, and \(m_{s,0}\) in the range \([1,6]\) kg/m\({}^{3}\), are shown in fig. 3. The exponent \(\zeta\) appears in the present model to follow the empirical fit, \(\zeta=0.04\,(1+u_{10})\), with very good accuracy.

Figure 3: The ROS as a function of the fuel bulk density \(m_{s,0}\) for wind speeds \(u_{10}=1,\,5,\,10\) and \(12\) m/s. Points are simulation results and lines are the best-fit inverse power law.

The increase in fuel moisture content is also known to result in slower ROS [40, 51] and to eventually lead to the extinction of the flame under sufficiently wet conditions. An exponential dependence of the form, ROS \(\sim e^{-\mu\left(FMC\right)}\), has been repeatedly proposed [11, 21, 40, 43], but the exponent \(\mu\) is found to vary widely. In particular, it is higher when the correlation is based on dead moisture content and lower when using the mean value of dead and live moisture. Predictions of the present model, which is based on the total moisture content, are satisfactorily correlated by the same functional form and values of the exponent \(\mu\) in the range \([0.013,0.017]\), or even smaller for very low air velocities. The FMC that results in fire extinction is predicted to increase with bulk density and decrease with air velocity, i.e., a fire on wet fuel is more persistent at high bulk density and low air speed. The effect of fuel particle size is also critical, as leaves and small-diameter sticks have larger surface-to-volume ratios and thus burn faster than thick branches. This behavior may be quantified by the appropriate choice of the constants in the combustion kinetics. With reference to eq. (6), a characteristic burning time may be defined as:

\[t_{c}=-\frac{\ln(S_{2,f}/S_{2,0})}{c_{s2}\,e^{-b_{2}/T_{c}}}, \tag{29}\]

in terms of the initial, \(S_{2,0}\), and the remaining, \(S_{2,f}\), combustible material after time \(t_{c}\). Taking this ratio as equal to \(0.1\) and using a characteristic burning temperature \(T_{c}\approx 1250\) K [61], gives the estimate \(t_{c}\approx 1000/c_{s2}\). Values of \(c_{s2}\) in the range \([5,200]\) s\({}^{-1}\) result in \(t_{c}\) in the range \([200,5]\) s. This range compares very favorably with the fuel residence times for a variety of fuels in the extensive database presented in [45].
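Both empirical trends of this subsection are straightforward to tabulate; a short sketch, using the fitted exponent \(\zeta=0.04\,(1+u_{10})\) and an assumed mid-range value \(\mu=0.015\) (an illustrative choice within the quoted range):

```python
import numpy as np

# Empirical trends reported above (representative constants assumed):
# bulk-density law ROS ~ m_s0**(-zeta) with zeta = 0.04*(1 + u10),
# moisture law ROS ~ exp(-mu * FMC) with mu in [0.013, 0.017].
def zeta(u10):
    return 0.04 * (1.0 + u10)

m = np.linspace(1.0, 6.0, 6)          # bulk density, kg/m^3
for u10 in (1, 5, 10, 12):
    rel = (m / m[0]) ** (-zeta(u10))  # ROS relative to m_s0 = 1 kg/m^3
    print(f"u10 = {u10:2d} m/s, relative ROS:", np.round(rel, 2))

mu = 0.015                            # assumed mid-range exponent
FMC = np.array([10, 25, 50, 100])     # moisture content, %
print("moisture factor exp(-mu*FMC):", np.round(np.exp(-mu * FMC), 2))
```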
### The role of wind

As evidenced already, the overlying wind speed is a key ingredient in determining the fire propagation speed and direction [64, 50, 55, 61, 3]. In the absence of wind, the fire is expected to spread symmetrically in all directions, with heat for ignition being transported by buoyant dynamics and radiation, as modeled by the term, \(D_{\rm rb}\), of the effective diffusivity. With increasing wind speed, fire propagation is accelerated in the direction of the wind and decelerated against it. Beyond a value of wind speed, propagation against the wind is arrested, and the backward front extinguishes [32]. In the present model, the effect of wind is introduced by a local mean velocity of the gaseous phase inside the plantation, as described in Section 2.3. Representative examples of front propagation are shown in fig. 4. Specifically, fig. 4(a) corresponds to zero wind speed, \(u_{10}=0\) m/s, and exhibits symmetric propagation in both directions, as evidenced by the temperature profiles for time instants \(t=0,1000\) and \(2000\) s. Introducing a small wind speed, \(u_{10}=1\) m/s, in fig. 4(b) results in the acceleration of the wavefront moving with the wind and the deceleration of the front moving against the wind, as evidenced by the temperature profiles, now for time instants \(t=0,600\) and \(1200\) s. Increasing the wind speed further enhances the difference between the spread rates of the two fronts. Eventually, beyond \(u_{10}=2.3\) m/s, the wind-opposed front blows off and the fire spreads only in the direction of the wind (see fig. 4(c) for \(u_{10}=2.5\) m/s and \(t=0,300\), and \(600\) s). The predicted dependence of the steady ROS on wind speed, \(u_{10}\), is depicted quantitatively in fig. 5. Apart from the variation of \(u_{10}\), all other parameter values are as listed in table 2. It is noted that the numerical results in fig. 5 follow very closely a parabolic curve, \(\text{ROS}=(\text{ROS})_{0}+\phi u_{10}^{2}\), with \((\text{ROS})_{0}\) the rate of spread at zero wind speed. The coefficient \(\phi\) is evidently expected to vary with the fuel properties, as outlined in the previous subsection.

Figure 5: The ROS as a function of the wind speed \(u_{10}\), with all other parameters as listed in table 2. Points are simulation results and the line is the best-fit parabolic curve.

### Wildfire development in two-dimensional lattice

Next, simulations of fire spreading over a two-dimensional field are presented. In all cases considered, the composition and properties of the plantation are assumed to be uniform, and the wind maintains a constant magnitude and direction. The main goal of this section is to compare predictions with field observations on the progression and shape of the fire front in various directions. In most simulations, the fire is ignited by a step-function over a narrow square section (localized ignition). However, high-temperature stripes with varying width in the direction normal to the wind are also considered as an initial condition, in order to investigate the effect of fireline width on its rate of spread. Figure 6 depicts simulation results for fires developing from a 10 m by 30 m ignition site subject to different wind velocities. The column on the left shows the fraction of solid material (including moisture) that remains on the field after \(t=900\) s of burning. Thus, it provides an overall view of the area affected by the fire. The column on the right plots the distribution of temperature, and thus it depicts the location and shape of the firefront after a lapse of \(t=900\) s.
The first row in fig. 6 corresponds to \(u_{10}=0\) m/s, and, as expected, the region affected by the fire is a perfect circle. With increasing wind speed (\(u_{10}=3,\,6\) and \(10\,\)m/s in the 2nd, 3rd, and 4th rows, respectively), the shape of the fire-affected region becomes gradually more restricted in the transverse direction and far more elongated in the direction of the wind [1]. A striking feature of the left column of fig. 6 is the difference in fuel consumption. Thus, with no wind, maximum consumption occurs at the ignition site, and relatively little fuel remains there as a consequence of the slow burning process. This trend rapidly diminishes with wind, and at high speeds, the maximum in fuel consumption progresses with the wave front. On the contrary, the ignition site remains rich in fuel. The change in the firefront location and shape with increasing wind speed is depicted on the right column of fig. 6. As expected, the front moves faster with a stronger wind. Also, the shape varies from symmetric to horseshoe and then to a parabola of increasing steepness. This tendency of the high-temperature zone to progress faster in the direction of the wind than in the transverse direction, which gives rise to the parabolic shape of the firefront, is strongly supported by laboratory and field studies [26, 2, 12]. The evolution of the firefront may be more clearly observed in fig. 7, which depicts the temperature profile for a wind speed of \(u_{10}=6\) m/s at three time instants, \(t=0,900\), and 1800 s from ignition. It shows that the front and the flanks progress with distinctly different speeds, which, in combination with the rapid cooling behind the front by the incoming cold wind, results in the pointed parabolic shape of the fireline. Of course, the semi-burned material left behind is totally dry and will easily ignite if heated again. The last set of simulations concerns a study of the effect of initial fireline length on the ROS. It is recalled that field studies and simulations converge on the prediction that the ROS is slower for a short fireline and increases asymptotically to a steady-state value with increasing fireline length [10, 6, 22]. It is also recalled that an effect of fireline length has been included in the dispersion coefficient, eq. (8), in an attempt to model this behavior.

Figure 6: The state of the burning field 900 s after ignition from a localized source, \(x\in[75,85]\) m and \(y\in[235,265]\) m. Left column: the remaining total mass fraction of solid. Right column: the spatial variation of temperature. The rows are (from top to bottom) for \(u_{10}=0\), \(3\), \(6\) and \(10\) m/s.

Results of the present model are presented in fig. 8. Three different wind speeds, \(u_{10}=3,6\), and 10 m/s, are considered, and the initial length of the fireline in the transverse direction (normal to the wind) is varied in the interval \(w\in[5,200]\) m. It is evident from fig. 8 that, for all cases, the ROS is accurately described by an equation of the form:

\[\text{ROS}=a_{1}(1-e^{-a_{2}w}), \tag{30}\]

with the best-fit values of the two coefficients given in table 3. Coefficient \(a_{1}\) ([=] m/s) denotes the asymptotic (quasi-steady-state) velocity of the combustion wave for a long fireline, \((\text{ROS})_{0}\). It is evidently determined by the strength of the wind, which remains the dominant influence [42]. Coefficient \(a_{2}\) ([=] m\({}^{-1}\)) affects the length, \(w_{0}\), of the fireline beyond which the asymptotic value \((\text{ROS})_{0}\) is practically reached.
According to table 3, \(a_{2}\) depends very weakly on \(u_{10}\), and a constant value \(a_{2}\approx 7.3\cdot 10^{-2}\,m^{-1}\) gives accurate results. By selecting a smaller value for parameter \(\gamma_{d}\) in eq. (8), \(a_{2}\) also decreases and the length \(w_{0}\) increases.

\begin{table} \begin{tabular}{c c c} \hline \hline **Wind speed \(u_{10}\) (m/s)** & \(a_{1}\) **(m/s)** & \(a_{2}\) **(m\({}^{-1}\))** \\ \hline 3 & 0.09 & 6.91\(\cdot\)10\({}^{-2}\) \\ 6 & 0.18 & 7.93\(\cdot\)10\({}^{-2}\) \\ 10 & 0.33 & 6.72\(\cdot\)10\({}^{-2}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Best-fit values for the parameters in eq. (30).

Figure 7: Evolution of the temperature profile from the same localized source at three time instants, \(t=0,900\) and 1800 s, for \(u_{10}=6\) m/s.

## 4 Conclusions and Outlook

A physics-based, interpretable model of fire propagation has been developed by combining simplified reaction kinetics with mass and energy balances. The latter includes a convective contribution by the mean gas velocity through the canopy, a dispersion term that accounts for short-scale heat transfer by turbulence, buoyant currents, and radiation, and a term representing losses to the ambient by free convection and radiation. The mean velocity through the canopy is a key quantity both for the convection and dispersion terms. It is estimated from the effect of ambient wind, quantified by \(u_{10}\), on the two extremes, (i) air flow through an intact canopy (wind-induced momentum dissipated as canopy drag) and (ii) air flow above totally burned ground (wind-induced momentum dissipated as rough-wall drag). In the present work, an off-line validation of the developed model is attempted by providing predictions for which benchmark data are available in the literature. Concerning fuel properties, it is shown that higher bulk density leads to slower ROS according to the inverse power expression \(\mathrm{ROS}\sim m_{s,0}^{-\zeta}\), as does fuel moisture according to the exponential expression \(\mathrm{ROS}\sim e^{-\mu\,(\mathrm{FMC})}\). Also, the effect of fuel size (leaves, branches, and trunks) may be quantified by an appropriate choice of the constants in the combustion kinetics. The effect of ambient wind is considered first for one-dimensional propagation, i.e., for a straight firefront normal to the wind that extends across the entire field. With increasing wind speed, backward propagation of the firefront is decelerated and then extinguished, while propagation along the wind is intensified, varying with the square of the speed, \(u_{10}\). Fire propagation in a two-dimensional field is considered next, ignited by a localized high-temperature spike, and the spatio-temporal evolution of the firefront and of the fire-affected region is investigated. In particular, the model correctly predicts the effect of wind on increasingly elongating the fire-affected region in the direction of wind and restricting it in the transverse direction. Also, the firefront is predicted to evolve from symmetric to horseshoe to parabolic as a result of the reduced rate of spread of the flanks. Last, the effect of the length of an initial ignition front is investigated, and it is shown that (as a result of the functional form chosen for the dispersion coefficient) the model correctly predicts reduced ROS for short firefronts and an asymptotic approach to the one-dimensional ROS for a long enough firefront.
Figure 8: The ROS as a function of the width, \(w\), of the fireline for wind speeds \(u_{10}=3\), \(6\) and \(10\) m/s. Points are simulation results and lines are the best fit to eq. (30).

The aforementioned model is envisioned as a component of a more general data-informed and quick-feedback simulation framework that will assist decisions in the management of wildfires. In particular, the uncertain and stochastic nature of physical features that influence wildfire spread (e.g., fuel properties and weather conditions) will be quantified by an optimal inference process of the model parameters. In future work, we will attempt to extend the model's applicability by including the effect of terrain topography on the local rate of spread.
2302.02940
Integrating Eye-Gaze Data into CXR DL Approaches: A Preliminary study
This paper proposes a novel multimodal DL architecture incorporating medical images and eye-tracking data for abnormality detection in chest x-rays. Our results show that applying eye gaze data directly to DL architectures does not show superior predictive performance in abnormality detection in chest X-rays. These results support other works in the literature and suggest that human-generated data, such as eye gaze, needs a more thorough investigation before being applied to DL architectures.
André Luís, Chihcheng Hsieh, Isabel Blanco Nobre, Sandra Costa Sousa, Anderson Maciel, Catarina Moreira, Joaquim Jorge
2023-02-06T17:14:59Z
http://arxiv.org/abs/2302.02940v1
# Integrating Eye-Gaze Data into CXR DL Approaches: A Preliminary study

###### Abstract

This paper proposes a novel multimodal DL architecture incorporating medical images and eye-tracking data for abnormality detection in chest x-rays. Our results show that applying eye gaze data directly to DL architectures does not show superior predictive performance in abnormality detection in chest X-rays. These results support other works in the literature and suggest that human-generated data, such as eye gaze, needs a more thorough investigation before being applied to DL architectures.

## 1 Introduction

Diagnostic imaging plays a crucial role in diagnosing and treating a wide range of diseases, including cancer, heart disease, and injuries. It enables medical professionals to make accurate and timely diagnoses and determine the most effective treatment plans. However, access to radiologists, especially in underserved and remote areas, remains a significant challenge. The shortage of radiologists worldwide and the difficulty of providing radiology services to remote locations contribute to this issue. This can lead to delays in diagnosis, increased medical costs, and poorer patient outcomes. Deep learning (DL) has emerged as a powerful solution to the challenges of diagnostic imaging in underserved and remote areas. DL algorithms are able to extract and learn complex features from medical images with high accuracy, enabling them to automatically analyze images and identify patterns and abnormalities that may be difficult for human radiologists to detect. This can help to reduce the dependence on radiologists and improve access to healthcare for patients in underserved areas. However, these systems are also subject to bias and lack transparency, which can lead to a decline in radiologists' trust and adoption in healthcare. Virtual reality (VR) technology has the potential to transform the way radiologists assess medical images, providing a game-changing solution to the challenges of radiologist shortages and access to healthcare [29]. By creating an immersive, virtual radiology reading room, VR enables radiologists to work remotely and provide accurate diagnoses even in the face of radiologist shortages. This is particularly significant during pandemics, when healthy radiologists may have to stay home and diagnoses may be delayed due to radiologist shortages. Furthermore, VR technology can also open new research paths for DL approaches by incorporating new data modalities, such as eye tracking. Eye tracking in VR has been shown to be able to estimate cognitive load [27, 31], fatigue [28] and detect true/false positives [19]. By providing valuable information about how radiologists interact with medical images [18], VR-based eye tracking can help improve the diagnostic accuracy of DL systems [21]. In this article, we present the initial findings of a cutting-edge multimodal deep learning (DL) architecture that integrates medical images and eye-tracking as a novel modality for detecting abnormalities in chest x-rays. The goal of this research is to evaluate the potential of this approach to enhance the diagnostic accuracy of DL systems. Eye-tracking, the process of measuring eye gaze, can provide valuable insights into the diagnostic process, such as where the radiologist is focusing and how they are interpreting the image. By incorporating this data into our DL architecture, we aim to reduce the dependence on radiologists and improve access to healthcare for patients in underserved areas.
Additionally, we aim to investigate whether incorporating eye-tracking can make DL systems more robust to biases and improve their diagnostic accuracy. ## 2 Related Work Diagnostic imaging is a well-established field with a long history of research aimed at improving the accuracy and accessibility of medical imaging. Although there has been a growing body of research on the use of DL for medical imaging diagnostics [34], these studies still suffer from the detection of spurious correlations that result in biased and erroneous diagnoses. This includes studies on the use of DL for detecting abnormalities in medical images such as X-ray, CT and MRI scans [30]. The potential solution to this problem involves incorporating multiple data modalities into DL architectures to reduce reliance on medical images alone in the learning process. Previous research has investigated the integration of eye-tracking data with medical images, specifically for training purposes [15]. More recent works have shifted the focus towards exploring the implications of eye-tracking in deep learning [19]. The Eye Gaze dataset [14] and REFLACX [2] are two examples of such datasets. The Eye Gaze dataset contains 1,083 frontal chest x-ray (CXR) image readings by one radiologist, as well as dictated reports. The images used were sourced from the MIMIC database [12], and image-level labels were extracted from MIMIC reports using natural language processing techniques. REFLACX, on the other hand, contains 3,032 synchronized sets of eye-tracking data and timestamped report transcriptions for images from the MIMIC-CXR dataset. In this case, CXRs were annotated by five radiologists, who identified bounding boxes and their labels on each image. These datasets provide valuable resources for investigating the use of eye-tracking data in combination with medical images to improve diagnostic accuracy. The public availability of these two eye-tracking datasets for medical images facilitated the investigation of new multimodal DL architectures that integrate information from radiologists' eye tracking data with the patients' medical images. Karargyris et al. [14] proposed two DL architectures to incorporate eye-tracking data in predicting image-level labels for chest x-ray (CXR) images. The first approach concatenates CXR images passed through a convolutional neural network (CNN) with temporal fixation heatmaps through a 1-layer bidirectional long short-term memory network with self-attention [9]. The results showed a 5% area under the curve (AUC) improvement using temporal fixation heatmaps, compared with the baseline model using only CXR image data as input. The second approach used static fixation heatmaps, i.e., aggregating all the temporal fixations in a single image. During training, the model jointly trains the static fixation heatmap and the image-level label. During testing, when a CXR image is given as input, the model outputs the label and a heatmap distribution of the most important locations of the condition. This approach showed similar results to the baseline model but with added interpretability. Gaze data in DL were also studied by Wang et al. [32], who collected gaze information from radiologists and modelled a gaze-guided attention network, which ensures the network can focus on the disease regions like the radiologists and outputs the disease label, as well as an attention map based on the annotated bounding boxes and the gaze information from radiologists. 
The results show that the use of the radiologist's gaze as supervision in the GA-net architecture outperformed state-of-the-art methods using only images, such as ResNet [33], or the Vision Transformer [5]. Also, the collection of gaze data proved to be faster than manually annotating bounding boxes and led to similar classification accuracy. Other works suggest that eye tracking can be useful since the first fixations that the radiologist makes usually coincide with regions containing some lesions [24]. Despite the potential benefits of incorporating eye-tracking data into DL models for diagnostic imaging, several challenges have been identified in the literature. Some studies have suggested that saliency maps from human fixations should not be used in DL, as there is evidence that these systems may use background context for object classification, leading to biases [23, 26]. Additionally, one of the first studies to use eye-tracking data in CXR analysis [3] distinguished three types of diagnostic errors using eye-tracking data: (1) search errors, which occur when the eyes never fixate on the target and it is missed; (2) recognition errors, which happen when the eyes fixate on the target, but the target is still missed; and (3) decision errors, which relate to the inability of the radiologist to report the findings. These errors can contaminate the collection of eye-tracking data and negatively affect the performance of DL models. Studies have reported that the proportion of these errors can be as high as 30% search errors, 25% recognition errors, and 45% decision errors [11, 16]. These challenges must be taken into consideration when designing and evaluating DL models that incorporate eye-tracking data to improve diagnostic accuracy. It is the purpose of this study to further examine whether DL technologies can benefit from eye-tracking data in medical diagnosis. ## 3 A Multimodal Deep Learning Architecture with Fixation Masks The proposed architecture is an extension of the Mask R-CNN network [6]. Mask R-CNN is a state-of-the-art algorithm for object detection and instance segmentation in images and video frames. It uses a region proposal network (RPN) to generate potential regions in an image that may contain objects and then uses a separate network to classify and segment the objects within those regions. The key innovation of Mask R-CNN is the addition of a fully convolutional network branch on top of the RPN and classification network, which generates a binary mask for each instance of an object in the image. This allows for more precise localization of objects within an image, as it can identify the specific pixels that belong to each object. The architecture of Mask R-CNN consists of three main components: _the backbone network_, which is used to extract feature maps from the input image (typically a pre-trained convolutional neural network (CNN) such as ResNet); _the RPN_ that generates object proposals from the feature maps; and _the detection network_ that classifies and segments the objects within the proposals. We extended this architecture by adding a new backbone based on the fixation maps of the radiologists' eye gaze patterns. We used Mobilenet as a backbone since it performs well on small datasets [10]. Figure 1 presents the proposed multimodal model architecture. 
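Before walking through the pipeline, the following is a minimal sketch of the image/fixation fusion step, assuming PyTorch; the module and variable names are illustrative and not taken from the paper's code. Since the text describes the fusion both as an element-wise sum and as an element-wise multiplication, the operator is kept configurable here.

```python
import torch
import torch.nn as nn

class FusedInput(nn.Module):
    """Fuses a CXR image with a radiologist fixation heatmap element-wise."""

    def __init__(self, mode: str = "mul"):
        super().__init__()
        self.mode = mode  # "mul" or "sum"; the text mentions both operators

    def forward(self, cxr: torch.Tensor, fixation: torch.Tensor) -> torch.Tensor:
        # cxr: (B, C, H, W) image tensor; fixation: (B, 1, H, W) heatmap,
        # broadcast across the image channels
        return cxr + fixation if self.mode == "sum" else cxr * fixation

# The fused tensor is what the backbone (e.g., a MobileNet) would consume
# before the RPN proposes candidate regions.
fuse = FusedInput(mode="mul")
cxr = torch.randn(2, 3, 512, 512)       # a toy batch of CXR images
fixation = torch.rand(2, 1, 512, 512)   # normalized fixation heatmaps
fused = fuse(cxr, fixation)             # shape: (2, 3, 512, 512)
```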
Figure 1: Proposed Multimodal DL architecture that combines fixation maps with chest x-ray images and performs abnormality detection for five classes: enlarged cardiac silhouette, atelectasis, pleural abnormality, consolidation, and pulmonary edema. First, the CXR image and fixation mask are fused using an element-wise sum. Then, the RPN outputs a set of rectangular object proposals, and the pooling layer aligns the regions of interest with the input into a single feature map. Finally, these are flattened and passed to the classifier, which outputs, for each candidate region, the bounding box coordinates, a class label, and the object mask. The inner workings and outputs of the multimodal approach are similar to the baseline technique. A key difference lies in the input. Instead of a single CXR image, both the image and the fixation mask, containing the heatmap of a radiologist's fixations during a reading, serve as input. The fusion between the two is achieved through element-wise multiplication, where the features of both image representations are multiplied together, element by element. \[L=L_{classification}+L_{bbox}+L_{mask} \tag{1}\] The loss function in the Mask R-CNN architecture for each sample's region of interest is represented in equation 1. The classification and bounding box losses are the same as in Faster R-CNN [6], which means that the classification loss corresponds to the binary cross-entropy (object vs. not object). In contrast, the regression loss (bounding box) is the smooth L1 norm. The mask loss is only defined for the ground truth class and is the average binary cross-entropy loss. During training, these losses were monitored throughout the epochs, where a decreasing pattern indicates that meaningful relations are being learned. During the evaluation, the Intersection over the detected B-Box area ratio (IoBB) threshold is used to determine the correctness of predictions. A prediction is considered correct (true positive) only if it intersects the ground truth box above the IoBB threshold. In order to observe the models' ability to localise, we also test their performance on different IoBB thresholds. In the medical field, the IoBB is preferred over the Intersection over Union ratio (IoU) since the ground truth boxes are often oversized in order to include all the scattered or large lesions. To evaluate the different models, absolute precision and recall were used, for a 50% intersection over bounding boxes. While precision refers to the proportion of lesion class predictions that belong to the lesion class, recall concerns the proportion of lesion class predictions made from all lesion examples in the dataset. Training these architectures does not start from scratch but rather from pre-trained models on publicly available datasets. Here, we tested two backbones, ResNet [8] and Mobilenet [10], both pre-trained on the Microsoft COCO dataset [17]. ## 4 Dataset REFLACX [2] is a dataset of reports and eye-tracking data for localization of abnormalities in 3,032 CXR images. It is one of the first publicly available datasets containing eye gaze location, pupil data, and dictations from radiologists while performing the CXR image readings. Besides, the abnormality bounding boxes were identified by five radiologists, acting as local labels that can be further used for object localization. The images shown to the radiologists in the data collection procedures were randomly sampled from the MIMIC-CXR dataset [12]. 
After sampling, outlier images were excluded, either due to missing parts or because they were flipped. A CXR reading refers to a single data collection session where one radiologist analyses one CXR image from one patient. In the session, the professional can identify zero, one or more abnormality location ellipses. Figure 2 shows an example of the interface used to collect the data [2] and the radiologist's eye gaze patterns. ## 5 Results and Discussion We compared the Mask R-CNN using only a CXR image (the baseline) with the proposed multimodal DL architecture that combines both CXRs and fixation maps. Table 1 presents the obtained results. The results indicate that the model using only images performs better than the multimodal approach using the radiologists' fixation masks. We believe that these poor results are due to the fact that fixation masks consist of the clustering of gaze data, which means that a CXR image will contain a small number of fixations. Those fixations mostly do not correlate with the ground-truth annotations, implying that more investigation is needed to extract regions of interest from eye gaze data that correlate with the lesion annotations in CXRs. Additionally, further analysis indicated that the REFLACX data contains a lot of noise: the eye gaze patterns that were recorded and made available to the public contained gaze data resulting from the interaction with the interface as well as from the CXR assessment. When looking at the performance of the models for each abnormal condition alone, Mask R-CNN shows a clear performance advantage. For this model, it is easy to identify _enlarged cardiac silhouette_ conditions (with an AP@[BBOX=0.5] = 43%). The reason is the condition's more localized nature around the heart. The remaining lesions show similarly low performances, probably because they mostly occur together in the lung region, making it difficult for the classifier to differentiate them. Although we could not demonstrate any performance advantage of using eye-tracking data in a multimodal DL architecture, this preliminary study enabled us to extract several insights for future research: 1. The current approaches in deep learning working on the REFLACX dataset tend to utilize raw eye gaze data without taking into account the intricacies of its human generation process. This results in a data stream that is highly noisy and does not effectively contribute to the supervised learning process. 2. The data collection process employed in REFLACX utilized traditional desktop eye-trackers, which are known to be highly susceptible to noise and do not permit extreme head movement during data collection. This can result in missing data points or inaccuracies in the recorded data. 3. Eye tracking data is a form of human-centric data. As such, it is essential to consider the human factors involved in its generation when processing and analyzing this data. This includes, but is not limited to, the examination of human search patterns. ## 6 Conclusion and Future Directions Eye-tracking data have been widely studied in medical imaging. More recently, researchers have started investigating the potential of combining eye-tracking data in DL approaches. The reason for this is to develop novel DL architectures that can promote more accurate and precise medical diagnoses based on the radiologists' behavioral patterns. These systems could be a plausible solution for the current shortage of radiologists worldwide. 
The few works that used DL to combine eye-tracking data with CXR images show mixed results: some report predictive advantages, while others do not. In this work, we investigated this divergence by proposing a multimodal DL architecture based on Mask R-CNN that combines radiologists' fixation maps with CXR images. Our results showed that incorporating fixation maps yielded no performance advantage. Figure 2: Example of the REFLACX data collection interface (left) and the recorded eye gaze data from the radiologist (right). For future research, we plan to extend this study to the analysis of the radiologists' pupil dilations while reading a CXR. Literature shows a strong correlation between pupil dilations and cognitive load [1], which can be a better metric to correlate with abnormal regions in the CXR than fixation maps. From this study, we also verified that the desktop eye trackers used to collect data are highly susceptible to noise and head movements. For future work, we also intend to replicate the REFLACX data collection process using VR glasses. In a VR setting, conditions of light and contrast can be easily controlled, the radiologist will not be bound to restricted head movements, and the eye gaze data collection is less noisy. ## Acknowledgments This work was partially supported by the UNESCO Chair on AI&XR and the Portuguese _Fundação para a Ciência e a Tecnologia (FCT)_ under grants no. 2022.09212.PTDC and no. UIDB/50021/2020.
2302.13695
Impact of shocks to economies on the efficiency and robustness of the international pesticide trade networks
Pesticides are important agricultural inputs to increase agricultural productivity and improve food security. The availability of pesticides is partially achieved through international trade. However, economies involved in the international trade of pesticides are impacted by internal and external shocks from time to time, which influence the redistribution efficiency of pesticides all over the world. In this work, we adopt simulations to quantify the efficiency and robustness of the international pesticide trade networks under shocks to economies. Shocks are simulated based on nine node metrics, and three strategies are utilized based on descending, random, and ascending node removal. It is found that the efficiency and robustness of the international trade networks of pesticides increased for all the node metrics except the clustering coefficient. Moreover, the international pesticide trade networks are more fragile when import-oriented economies are affected by shocks.
Jian-An Li, Li Wang, Wen-Jie Xie, Wei-Xing Zhou
2023-02-27T11:54:12Z
http://arxiv.org/abs/2302.13695v1
# Impact of shocks to economies on the efficiency and robustness of the international pesticide trade networks ###### Abstract Pesticides are important agricultural inputs to increase agricultural productivity and improve food security. The availability of pesticides is partially achieved through international trade. However, economies involved in the international trade of pesticides are impacted by internal and external shocks from time to time, which influence the redistribution efficiency of pesticides all over the world. In this work, we adopt simulations to quantify the efficiency and robustness of the international pesticide trade networks under shocks to economies. Shocks are simulated based on nine node metrics, and three strategies are utilized based on descending, random, and ascending node removal. It is found that the efficiency and robustness of the international trade networks of pesticides increased for all the node metrics except the clustering coefficient. Moreover, the international pesticide trade networks are more fragile when import-oriented economies are affected by shocks. + Footnote †: journal: Eur. Phys. J. B ## 1 Introduction Pesticides include insecticides, fungicides, herbicides, disinfectants, and rodenticides and other similar products, which are invented to protect agricultural crops from harm caused by insects, fungi, weeds, viruses, and rats. Therefore, the main functions of pesticides are to reduce yield losses, regulate plant growth, and increase agricultural productivity, which is essential to improving food security. The availability of pesticides in most economies is partially supported by international trade. However, there are internal and external shocks from time to time that affect the involved economies and their ability to trade pesticides internationally. Such shocks to economies influence the efficiency of pesticide redistribution all over the world. It is, thus, important to quantify the efficiency and robustness of the international pesticide trade networks (iPTNs). The structural properties of the iPTNs and their evolutionary behavior have been studied [1; 2]. Moreover, the structural robustness and efficiency-based robustness of the iPTNs have also been investigated when the trade relationships are affected by internal and external shocks [3]. When the economies involved in the international pesticide trade are influenced by shocks, the structural robustness of the iPTNs has been quantified [4]. In this work, we aim to complete the unfinished puzzle by quantifying the efficiency and efficiency-based robustness of the iPTNs under shocks to economies. We note that the efficiency-based robustness of complex networks is less studied [5; 6]. Concerning the structural robustness of most complex networks, accumulating evidence shows that complex networks are robust to internal shocks (or random failures) but may be fragile to external shocks (or intentional attacks) [7; 8; 9; 10]. Researchers have performed extensive analysis of the robustness and fragility of complex networks in various fields, such as international oil trade networks [11; 12; 13; 14; 15; 16], infrastructure networks [17; 18; 19; 20; 21; 22; 23; 24], and the networks of Cosa Nostra affiliates [25], to list a few. 
To carry out simulations for quantifying the efficiency and robustness of the iPTNs under shocks to economies, we utilize nine node metrics (clustering coefficient; betweenness; in-degree, PageRank, authority, and in-closeness; out-degree, hub, and out-closeness) and three node removal strategies (descending, random, and ascending). The remainder of this work is organized as follows. We briefly describe the data in Section 2. We quantify the network efficiency of iPTNs in Section 3 and the efficiency-based robustness of iPTNs in Section 4. We summarize our results in Section 5. ## 2 Data description The data sets for the international pesticide trade were retrieved from the UN Comtrade database (publicly available at [https://comtrade.un.org](https://comtrade.un.org)), which covers the period from 2007 to 2018 and contains the import and export economies and trading values of five categories, including insecticides (380891), fungicides (380892), herbicides (380893), disinfectants (380894), and rodenticides and other similar products (380899). Based on the data sets, for each year, we construct international pesticide trade networks for each category of pesticides, where involved economies are nodes and a directed link is drawn when two economies trade [13; 3]. ## 3 Network efficiency ### Shortest path length between economies In the international pesticide trade networks, pesticide products flow between economies. There are many pesticide-producing and pesticide-consuming economies in the trade network, and there are also some economies with large pesticide imports and exports. The network efficiency analysis in this paper is based on directed networks, and the relevant metric indicators are for asymmetric matrices, except for the clustering coefficient. Network efficiency is also based on directed shortest paths. In the international pesticide trade system, different economies play different roles and contribute their own strengths to the global pesticide trade and food production. It is difficult to distinguish between exporting and importing economies because the international pesticide trade networks include a large number of economies that both export and import. Network efficiency refers to the flow efficiency of information, energy, and matter in the network, such as the information transmission efficiency of the information network and the traffic efficiency of the traffic network [26; 27]. Generally speaking, trade network efficiency can be gauged by the path length between producer and consumer: the larger the path length, the lower the efficiency, and vice versa. We calculated the shortest path length between economies in the iPTNs. Figure 1 shows the evolution of occurrence frequencies \(P(d)\) for \(d=1\), \(d=2\), \(d=3\), \(d=4\), \(d=5\), and \(d>5\), where \(d\) is the distance between the economies in the pesticide trade network. Overall, the occurrences of shortest path length in the six plots are relatively similar, especially in plot (c) for fungicides, plot (d) for herbicides, plot (e) for disinfectants, and plot (f) for rodenticides and other similar products. In each of these four temporal iPTNs, the proportion \(P(d)\) with \(d>5\) is basically the largest (close to 0.5 in 2007), and most of these economy pairs in fact have an infinite shortest path length. This is because the iPTNs we study are directed, so there is no reachable path between some economies. 
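To make the tabulation of \(P(d)\) in Fig. 1 concrete, the following is a minimal sketch assuming networkx; the economy codes and links are purely illustrative and are not actual trade data. Pairs of economies without a directed path contribute an infinite distance.

```python
from collections import Counter
import networkx as nx

# A toy directed trade network; nodes are economies, links point from
# exporter to importer (codes are illustrative only).
G = nx.DiGraph()
G.add_edges_from([("CHN", "USA"), ("USA", "BRA"), ("DEU", "USA"),
                  ("CHN", "DEU"), ("BRA", "ARG")])

N = G.number_of_nodes()
total_pairs = N * (N - 1)          # ordered pairs in a directed network
counts = Counter()
reachable = 0
for source, lengths in nx.all_pairs_shortest_path_length(G):
    for target, d in lengths.items():
        if target != source:
            counts[d] += 1
            reachable += 1

for d in sorted(counts):
    print(f"P({d}) = {counts[d] / total_pairs:.3f}")
# unreachable ordered pairs have an infinite shortest path length
print(f"P(inf) = {(total_pairs - reachable) / total_pairs:.3f}")
```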
With the development of globalization, there are more and more trade relations, and the proportion of economy pairs with infinite shortest path lengths got smaller and smaller until around 2015. The main reason for the change around 2015 is that sales in the international pesticide market plunged 8.5% year on year, the steepest decline in more than a decade. In most years, we have \[P(2)>P(3)>P(1)>P(4)>P(5). \tag{1}\] This relation does not hold for the aggregated iPTN in Fig. 1(a) and the iPTN for insecticides in Fig. 1(b). The aggregated iPTN integrates the five pesticide products and has the largest number of links. Moreover, compared with the other four pesticide trade products, the insecticides trade network showed the largest trade volume and the most trade relationships. Figure 1: Distance distribution among economies in the international pesticide trade network. \(d\) is the network distance between economies in the directed pesticide trade network. \(P(d)\) represents the proportion of the shortest path with a distance of \(d\). The figure shows the time evolution of the proportion of the distance \(d=1\), \(2\), \(3\), \(4\), \(5\), and \(d>5\). The six plots correspond to the trade networks of pesticide products, including (a) aggregated, (b) insecticides, (c) fungicides, (d) herbicides, (e) disinfectants, and (f) rodenticides and other similar products. The proportion \(P(1)\) of distance \(d=1\) in Fig. 1 measures the density of the pesticide trade network. It can be seen from the figure that, with the evolution of time \(t\), \(P(1)\) becomes larger and larger, indicating that there are more and more pesticide trade relations, but after 2015, there is also a downward trend. The proportion \(P(2)\) of distance \(d=2\) measures the share of economy pairs that do not have direct trade relations but have indirect trade relations through intermediate economies. It can be seen from Fig. 1 that, with the evolution of time \(t\), \(P(2)\) also becomes larger and larger, indicating more and more such indirect trade relations, but after 2015, there is again a downward trend. In particular, the \(P(2)\) curves for the aggregated iPTN in Fig. 1(a), the iPTN of insecticides in Fig. 1(b), and the iPTN of fungicides in Fig. 1(c) showed obvious fluctuations in 2015. Generally speaking, the shorter the trade distance between economies in the international trade network, the higher the trade efficiency. However, the international trade network is affected by geographical location, political factors, resource endowment distribution, etc., so the shortest path lengths of the trade network cannot be arbitrarily small. Therefore, we introduced the index of network efficiency to measure the trade flow efficiency and the change rule of the trade structure of pesticide products in the international pesticide trade network. ### Network efficiency Network efficiency is defined as the average reciprocal path length between nodes in the network. The path length between nodes is defined differently between directed networks and undirected networks, as well as between weighted networks and unweighted networks. 
In an unweighted, directed network, the network efficiency \(E\) is defined as follows [26; 27], \[E=\frac{1}{N(N-1)}\sum_{i\neq j}e_{ij}, \tag{2}\] where \(e_{ij}\) represents the path efficiency between economy \(i\) and economy \(j\), measuring the transmission efficiency of material flow, information flow, or energy flow between two nodes, specifically defined as \[e_{ij}=\frac{1}{d_{ij}}, \tag{3}\] where \(d_{ij}\) represents the shortest path length between economy \(i\) and economy \(j\). Fig. 2 shows the evolution of the efficiency of the international pesticide trade networks. Each solid line corresponds to an international pesticide trade network, including aggregated, insecticides, fungicides, herbicides, disinfectants and rodenticides and other similar products. The five types of pesticide trade network efficiency have similar evolutionary patterns, which can be basically divided into three stages. The first stage is from 2007 to 2010, which can be called the rapid growth period of network efficiency. The period from 2010 to 2015 can be regarded as a stable period, in which the efficiency of the international pesticide trade network did not change significantly. After 2015, the efficiency of the international pesticide trade network declined. Figure 2: Evolution of the efficiency of the international pesticide trade networks. Each solid line corresponds to an international pesticide trade network, including aggregated, insecticides, fungicides, herbicides, disinfectants and rodenticides and other similar products. ## 4 Network robustness against shocks to economies ### Node metrics There are many node metrics capturing the local or global characteristics of nodes that can be used to measure the importance of nodes in different aspects [32; 33]. We consider here nine node metrics, including clustering coefficient, betweenness, PageRank, in-degree, out-degree, in-closeness, out-closeness, authorities, and hubs, as shown in Table 1. \begin{table} \begin{tabular}{l l l l} \hline Variable & Definition & Description & References \\ \hline In-degree & \(\sum_{j=1}^{N}a_{ji}\) & It is the count of exporting partners of economy \(i\). \(N\) is the number of nodes in the trade network and \(a_{ij}\) is the element of the adjacency matrix of the trade network. & [16] \\ Out-degree & \(\sum_{j=1}^{N}a_{ij}\) & It is the count of importing partners of economy \(i\). & [16] \\ In-closeness & \(\frac{N-1}{\sum_{j\neq i}l_{ji}}\) & It is defined as the reciprocal of the mean distance of all other nodes to \(i\). & [16] \\ Out-closeness & \(\frac{N-1}{\sum_{j\neq i}l_{ij}}\) & It is defined as the reciprocal of the mean distance of \(i\) to all other nodes. & [16] \\ Authorities/Hubs & – & The authority score and hub score of a node can be calculated in an iterative way. & [28] \\ PageRank & – & It is an important variant of eigenvector centrality. & [29] \\ Betweenness & \(\sum_{s\neq i\neq t}\frac{n_{st}^{i}}{g_{st}}\) & It is the ratio of the number of shortest paths that go through the investigated node to the number of shortest paths between all pairs of nodes in the network. & [31] \\ Clustering & \(\frac{2n_{i}}{k_{i}(k_{i}-1)}\) & It characterizes the connectance of its trade partners. \(n_{i}\) is the number of triangles adjacent to \(i\) and \(k_{i}\) is its degree. & [30] \\ \hline \end{tabular} \end{table} Table 1: The nine node metrics used in this work. These node metrics have different traits. The clustering coefficient is defined for undirected networks, while the other eight metrics are extracted from directed networks. 
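As a rough illustration of how the nine metrics in Table 1 can be computed for a directed trade network, here is a minimal sketch assuming networkx; the library calls are standard, but the wrapper itself is illustrative and not from the paper.

```python
import networkx as nx

def node_metrics(G: nx.DiGraph) -> dict:
    """Computes the nine node metrics of Table 1 for a directed network G."""
    hubs, authorities = nx.hits(G)  # iterative hub/authority scores
    return {
        "in_degree":     dict(G.in_degree()),
        "out_degree":    dict(G.out_degree()),
        # networkx closeness uses inward distances on directed graphs,
        # matching in-closeness; reversing the graph gives out-closeness
        "in_closeness":  nx.closeness_centrality(G),
        "out_closeness": nx.closeness_centrality(G.reverse()),
        "pagerank":      nx.pagerank(G),
        "authority":     authorities,
        "hub":           hubs,
        "betweenness":   nx.betweenness_centrality(G),
        # the clustering coefficient is defined on the undirected network
        "clustering":    nx.clustering(G.to_undirected()),
    }
```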
Concerning the other eight node metrics, betweenness does not distinguish between importing and exporting economies; in-degree, PageRank, authority, and in-closeness are calculated from the exporting side (thus the node under consideration is a target node); and out-degree, out-closeness, and hub are obtained from the importing side (thus the node under consideration is a source node). In this section, we quantify the importance of the economy based on these nine node metrics and further investigate their mutual correlations. ### Network efficiency under shocks to economies Complex network structures and network functions are closely related, and different network structures have different roles and functions for specific problems and backgrounds. In the analysis of network stability, indicators such as in-degree, authorities, and betweenness have a great impact on the network structure, but not all indicators are equally important for all network functions. Therefore, we analyzed the differences and impacts of important indicators in the study of network efficiency. Network efficiency describes the transmission efficiency of material flow, information flow, or energy flow in a complex network, which has important research and application value. To compare the changes in network efficiency after removing network nodes according to the indicators of different nodes, we take the network efficiency of the original network as the benchmark, recalculate the efficiency of the residual network after removing network nodes, and calculate the ratio between the two so that we can horizontally compare the changes in network efficiency of different pesticide trade networks. The formula of the ratio \(\beta^{l}\) is \[\beta^{l}(p)=\frac{E_{p}}{E}, \tag{4}\] where \(E\) represents the network efficiency of the original network, and \(E_{p}\) represents the network efficiency after removing the nodes with a proportion of \(p\) according to the given indicators. We consider three node removal rules. Based on the node metric index \(I\), we remove the economies from the pesticide trade network and recalculate the network efficiency for the remaining networks. We then compare the results of the three node removal strategies. The "descend" removal rule is to remove a proportion \(p\) of the economies with the largest \(I\) and recalculate the network efficiency \(E_{p}\) of the remaining network. The "ascend" removal rule is to remove a proportion \(p\) of the economies with the smallest \(I\). The "random" removal rule is to randomly remove a proportion \(p\) of the trading economies and then calculate the network efficiency \(E_{p}\) of the remaining network. In the process of removing network nodes, besides random removal, node removal is mostly performed in descending order of an indicator. We introduce removal in ascending order of the indicator mainly for comparison with the other two strategies. Not all indicators follow the rule that the higher the value, the greater the impact on network efficiency, and considering both ascending and descending removal makes it harder to miss key indicators. For example, the presence of weak ties during edge removal plays a key role in network connectivity. For the aggregated iPTN in 2018, we illustrate in Fig. 3 the \(\beta^{l}(p)\) curves for the nine indicators \(I\). 
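A minimal sketch of the simulation behind Eq. (4) and Fig. 3 is given below, assuming networkx; note that the paper computes \(\beta^{l}\) from the largest connected subnetwork after removal, whereas this sketch uses the whole residual network for brevity. The robustness score of Eq. (5) defined further below is then simply the average of \(\beta^{l}\) over the removal fractions.

```python
import random
import networkx as nx

def directed_efficiency(G: nx.DiGraph) -> float:
    """Network efficiency of Eq. (2): mean of 1/d_ij over ordered pairs."""
    N = G.number_of_nodes()
    if N < 2:
        return 0.0
    total = 0.0
    for source, lengths in nx.all_pairs_shortest_path_length(G):
        total += sum(1.0 / d for target, d in lengths.items() if target != source)
    return total / (N * (N - 1))

def beta_curve(G, scores, strategy="descend", steps=20):
    """Efficiency ratio beta(p_k) = E_{p_k} / E for p_k = k/steps, as in Eq. (4)."""
    nodes = sorted(G.nodes(), key=lambda v: scores[v],
                   reverse=(strategy == "descend"))
    if strategy == "random":
        random.shuffle(nodes)
    E0 = directed_efficiency(G)
    betas = []
    for k in range(steps + 1):
        keep = nodes[int(len(nodes) * k / steps):]  # drop the first fraction p_k
        betas.append(directed_efficiency(G.subgraph(keep)) / E0)
    return betas

def robustness(betas):
    """Efficiency-based robustness, Eq. (5): the average of the beta curve."""
    return sum(betas) / len(betas)
```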
The curves associated with "descend", "ascend", and "random" in the figure correspond to the priorities of removing nodes with the largest \(I\) values, the smallest \(I\) values, and random \(I\) values, respectively. We find that the results are similar for different node metrics, except for the clustering coefficient. For the clustering coefficient, the two \(\beta^{l}\) curves corresponding to the descending and ascending node removal strategies almost overlap, while the \(\beta^{l}\) curve corresponding to the random node removal strategy decreases faster than the other two curves. For the other eight node metrics, the corresponding \(\beta^{l}\) curves have similar patterns. Specifically, the curves obtained from the descending node removal strategy decrease the fastest, while those from the ascending node removal strategy decrease the slowest. For the descending node removal strategy, the \(\beta^{l}(p)\) value is almost nil when around 40% of the economies are affected. In comparison, the giant component still contains more than 40% of the nodes [4]. Figure 3: Analysis of network efficiency under shocks to the economies for the aggregated iPTN in 2018. The lines are the \(\beta^{l}\) curves calculated from the largest connected subnetworks after removing nodes based on indicator \(I\). The words "descend", "ascend", and "random" in the legend correspond to the cases where the nodes with the largest \(I\), smallest \(I\), and random \(I\) are removed preferentially. For the random node removal strategy, we repeated 20 runs and presented the averages. ### Network-efficiency-based robustness To compare the differences in network efficiency under different node removal strategies in a more detailed way, a more quantitative method is also used to analyze the network efficiency. There are many similar studies on the robustness and fragility of complex networks, such as Ref. [8]. The approach in Ref. [8] is based on percolation theory and analyzes the general condition for the critical fraction of nodes. Our approach is based on efficiency and robustness as presented in Ref. [34]. The area under a curve in Fig. 3 represents the impact of the corresponding indicator on the network efficiency: the larger the area, the more robust the network efficiency is under the corresponding shocks. This efficiency-based robustness can be expressed as follows [34], \[R_{\beta}^{I}=\frac{1}{n}\sum_{k=0}^{n}\beta_{\text{node}}^{I}\left(p_{k} \right), \tag{5}\] where \(p_{k}=k/n\) indicates the proportion of economies in the pesticide trade network to be deleted. Figure 4 shows the extent to which the efficiency of the international pesticide trade network is affected under the impact of shocks to the economies, where Fig. 4(a) corresponds to the aggregated trade network of all five categories of pesticide products, and Fig. 4(b-f) correspond to the trade networks of the five pesticide products. Different markers correspond to different node removal strategies. As can be seen from Fig. 4, the strategies that remove economies from the iPTNs in descending order according to the clustering coefficient show different behaviors. 
For the six networks, we have \[R_{\beta,\text{ascend}}^{\text{Clustering}}>R_{\beta,\text{descend}}^{\text{ Clustering}}>R_{\beta,\text{random}}^{\text{Clustering}}, \tag{6}\] showing that the random node removal strategy has the largest impact on network efficiency. The distinct behavior is mainly reflected in the descending node removal strategy. Figure 4: Network-efficiency-based robustness of the aggregated iPTN (a), the iPTN of insecticides (b), the iPTN of fungicides (c), the iPTN of herbicides (d), the iPTN of disinfectants (e), and the iPTN of rodenticides and other similar products (f) in 2018. The words "descend", "ascend", and "random" in the legend correspond to the cases where the nodes with the largest \(I\), smallest \(I\), and random \(I\) are removed preferentially. For the random node removal strategy, we repeated 20 runs and presented the averages. In contrast, for the other eight node metrics, we have \[R^{I}_{\beta,\text{ascend}}>R^{I}_{\beta,\text{random}}>R^{I}_{\beta,\text{descend }}, \tag{7}\] showing that the descending node removal strategy has the largest impact on network efficiency and the ascending node removal strategy has the least impact on network efficiency. We also observe that the strategies based on node metrics associated with import (authority, in-closeness, and in-degree) have a larger impact on network efficiency than those associated with export (hub, out-closeness, and out-degree). That is, the importing economies have a greater impact on network efficiency. This conclusion is consistent with many studies, such as the study of oil network efficiency and robustness, in which the impact of removing importing economies is greater than that of exporting economies [13; 16]. ### Evolution of network-efficiency-based robustness We now turn to investigate the evolution of the network-efficiency-based robustness of the aggregated iPTN and the five iPTNs of insecticides, fungicides, herbicides, disinfectants, and rodenticides and other similar products from 2007 to 2018 for the descending node removal strategies based on the nine node metrics. The results are presented in Fig. 5. Figure 5: Evolution of the network-efficiency-based robustness of the aggregated iPTN and the iPTNs of insecticides, fungicides, herbicides, disinfectants, and rodenticides and other similar products for the descending node removal strategies based on the nine node metrics. Each plot corresponds to a node metric. Based on this metric, we remove the nodes and then analyze the changes in network efficiency. By comparing the robustness of different networks, we find that the aggregated network is the most robust. From the definition of network efficiency, it is known that the higher the density of links in the network, the smaller the average path length between nodes, and the higher the network efficiency. Removing links has less impact on the connectivity and the average shortest path of a network with a high link density, so the corresponding network robustness is higher. Once more, we can see that while the results for the other eight node metrics are qualitatively similar, the results for the clustering coefficient are noticeably different. For the clustering coefficient, the six robustness curves exhibit an overall decreasing trend. More precisely, the robustness curves decreased in the early years and remained relatively stable with local fluctuations. 
In contrast, for the other eight node metrics, the robustness curves show an overall upward trend, increasing in the early years and then remaining stable. Among the six networks, the aggregated iPTN is the most robust. Among the other five networks, the iPTN of insecticides is the most robust. We also observed a sharp decrease in robustness in 2015, especially for the two iPTNs of herbicides and fungicides. We contend that international pesticide trade networks are becoming more robust in terms of network efficiency in the face of shocks to economies. ## 5 Summary We have investigated the efficiency of the international pesticide networks of insecticides (380891), fungicides (380892), herbicides (380893), disinfectants (380894), and rodenticides and other similar products (380899) from 2007 to 2018, as well as the corresponding aggregated networks of all the five categories of pesticide. We found that the network efficiency increased in the first four years and decreased in the last four years. In addition, for each year, the aggregated iPTN had the highest efficiency and the iPTN of insecticides had the second highest efficiency. These observations are by and large consistent with the time-varying pattern of the number of links [1]. There are, of course, other factors to explore. We further investigated the robustness of iPTN efficiency, or the efficiency-based robustness of the iPTNs, as adopted in Refs. [13; 3]. To simulate different types of shocks to economies, we utilized three strategies by removing nodes with descending, random, and ascending orders of nine node metrics. We found that the efficiency-based robustness of the international pesticide trade networks increased for all the node metrics except the clustering coefficient. Moreover, the international pesticide trade networks are more vulnerable when shocks hit import-oriented economies than export-oriented economies. We also found that the aggregated iPTN is the most robust against shocks, and the iPTN of insecticides is the second most robust. The robustness curves are mostly \(\cup\)-shaped, punctuated by a sharp drop in 2015 in some robustness curves, which was caused by the 8.5% plunge in market sales in 2015. ## Acknowledgements This work was supported by the National Natural Science Foundation of China (72171083), the Shanghai Outstanding Academic Leaders Plan, and the Fundamental Research Funds for the Central Universities. ## Author contributions Funding acquisition: W-XZ; investigation: J-AL, LW, W-JX and W-XZ; methodology: W-JX and W-XZ; supervision: W-JX and W-XZ; writing--original draft, J-AL and W-JX; writing--review and editing: W-XZ. ## Data availability statement This manuscript has no associated data or the data will not be deposited. [Authors' comment: The associated data in this manuscript can be retrieved from the UN Comtrade database at [https://comtrade.un.org](https://comtrade.un.org).]
2305.17454
Cloud Computing: Applications, Challenges and Open Issues
Cloud computing is one of the innovative computing paradigms, which deals with storing and accessing data and programs over the Internet [1]. It is the delivery of computing resources and services, such as storing of data on servers and databases, providing networking facilities and software development platforms over the Internet. It provides the flexibility of resources for everyone. These services are provided via data centers, which are located in various parts of the world [2, 3]. Cloud computing makes these resources accessible to everyone on a global scale at a very minimal cost and significantly higher speed. These servers provide services to the users that would have cost them a lot of computational power if they had to buy the resources themselves. The first mention of cloud computing was referenced in a Compaq internal document released in 1996 [4]. Cloud computing was then commercialized in 2006 when Amazon released elastic compute cloud (EC2). Furthermore, Google released Google app engine in 2008 and Microsoft Azure services were launched in October 2008, which increased the competition in the area of cloud computing. Since then these companies have done a lot of development in cloud computing.
Sahil Mishra, Sanjaya Kumar Panda
2023-05-27T11:52:48Z
http://arxiv.org/abs/2305.17454v1
# Cloud Computing: Applications, Challenges and Open Issues ###### Abstract Cloud computing is one of the innovative computing paradigms, which deals with storing and accessing data and programs over the Internet [1]. It is the delivery of computing resources and services, such as storing of data on servers and databases, providing networking facilities and software development platforms over the Internet. It provides the flexibility of resources for everyone. These services are provided via data centers, which are located in various parts of the world [2, 3]. Cloud computing makes these resources accessible to everyone on a global scale at a very minimal cost and significantly higher speed. These servers provide services to the users that would have cost them a lot of computational power if they had to buy the resources themselves. The first mention of cloud computing was referenced in a Compaq internal document released in 1996 [4]. Cloud computing was then commercialized in 2006 when Amazon released elastic compute cloud (EC2). Furthermore, Google released Google app engine in 2008 and Microsoft Azure services were launched in October 2008, which increased the competition in the area of cloud computing. Since then these companies have done a lot of development in cloud computing.
Private organizations often prefer using the private cloud, as it provides physical control of data and more security than keeping it in the public cloud. Cloud computing is very cost efficient for users as well as organizations: they can easily expand their processing capabilities without spending large amounts of money on hardware. However, the on-demand availability and scalability of cloud computing make it harder to predict the quantity of resources required in the future [5]. Even server failures due to lack of maintenance incur huge losses. Another big challenge cloud computing faces is the lack of resources such as data centers and cloud engineers. The number of cloud computing users is increasing at an exponential rate, which may result in failure of data centers. This puts a lot of pressure on service providers to secure enough resources for the huge number of users, and it also incurs a lot of cost to them. The power consumption of these data centers is also quite high; service providers like Microsoft are therefore trying to reduce it by putting data centers in the ocean, which reduces the cost of cooling them to a large extent. Cloud servers handle a lot of data and computations.
As a consequence, they need continuous monitoring and supervision. If any technical glitch occurs on the server, then a lot of users face its consequences, which results in loss of time, data and money to both users and service providers. Technical faults are also sometimes caused when consumers, especially big organizations, do not implement the technology properly; this not only costs them unnecessary overhead, but also forces the servers to handle vague data. Cloud computing has served as a boon to mankind. Users can store data in the cloud and access it anytime and anywhere, but doing so requires Internet connectivity. Accessing large amounts of data requires good connectivity, which is not the same in every part of the world, so various places refrain from using cloud services. Sometimes users and organizations tend to change service providers due to a variety of issues; therefore, ensuring resource portability is very necessary. Cloud technology must have the capability to transfer and integrate resources to other servers without unnecessary overhead. If a user deploys an application on a server, then migrating it to the server of another service provider requires the user to modify the application according to the requirements of the new server. The user also cannot share resources between the servers of different service providers.

## Conclusion

Cloud computing has opened the gates to large storage and computational power at a very minimal cost. Consumers do not need to buy expensive resources to carry out their daily jobs. But in spite of having a lot of pros, cloud computing also has a significant number of cons. These issues have always been prevalent in the field of cloud computing, and various attempts are being made by people from different domains to reduce these challenges to a certain level. Despite facing these challenges, cloud computing is still helping technology reach people at all the remote places across the globe.
2308.09696
Metric and strong metric dimension in inclusion ideal graphs of commutative rings
The inclusion ideal graph of a commutative unitary ring $R$ is the (undirected) graph $In(R)$ whose vertices are all non-trivial ideals of $R$ and two distinct vertices are adjacent if and only if one of them is a proper subset of the other one. In this paper, the metric dimension of $In(R)$ is discussed. Moreover, the structure of the resolving graph of $In(R)$ is characterized and as an application, we compute the strong metric dimension of $In(R)$.
E. Dodongeh, A. Moussavi, R. Nikandish
2023-08-18T17:45:08Z
http://arxiv.org/abs/2308.09696v1
# Metric and strong metric dimension in inclusion ideal graphs of commutative rings

###### Abstract

The inclusion ideal graph of a commutative unitary ring \(R\) is the (undirected) graph \(In(R)\) whose vertices are all non-trivial ideals of \(R\) and two distinct vertices are adjacent if and only if one of them is a proper subset of the other one. In this paper, the metric dimension of \(In(R)\) is discussed. Moreover, the structure of the resolving graph of \(In(R)\) is characterized and as an application, we compute the strong metric dimension of \(In(R)\).

+ Footnote †: _Key Words_: Metric dimension, Strong metric dimension, Inclusion ideal graph, Commutative ring.

## 1 Introduction

Metric and strong metric dimension in a graph are examples of NP-hard problems in discrete structures which have found several applications in computer science, mechanical engineering, optimization, chemistry, etc. Although many important works have been done by graph theorists in computing metric and strong metric dimension, they are still two of the most active research areas in graph theory; for the most recent studies in this field see [1, 5, 9, 10, 19]. In addition to a wide range of applications, the complexity of the computations has caused considerable interest in characterizing these invariants for graphs associated with algebraic structures. Some examples in this direction may be found in [3, 7, 8, 11, 13, 14, 16, 17, 23]. This paper has such a theme and aims to compute the metric and strong metric dimension in inclusion ideal graphs of commutative rings.

For graph theory terminology, we follow [21]. Let \(G=(V,E)\) be a graph with \(V=V(G)\) as the vertex set and \(E=E(G)\) as the edge set. A complete graph of order \(n\) is denoted by \(K_{n}\). Also, the distance between two distinct vertices \(x\) and \(y\) is denoted by \(d(x,y)\). By \(\mbox{diam}(G)\), we mean the diameter of \(G\). If a graph \(H\) is a subgraph of \(G\), then we write \(H\subseteq G\). Moreover, the subgraph induced by \(V_{0}\subseteq V\) is denoted by \(G[V_{0}]\). The open and closed neighborhoods of the vertex \(x\) are denoted by \(N(x)\) and \(N[x]\), respectively. The independence number and vertex cover number of the graph \(G\) are denoted by \(\beta(G)\) and \(\alpha(G)\), respectively. Let \(S=\{v_{1},v_{2},\ldots,v_{k}\}\) be an ordered subset of \(V\) and \(v\in V\setminus S\). Then the representation vector of \(v\) with respect to \(S\) is denoted by \(D(v|S)\), which is defined as follows: \(D(v|S)=(d(v,v_{1}),d(v,v_{2}),\ldots,d(v,v_{k}))\). An ordered subset \(S\subseteq V(G)\) is called _resolving_ provided that distinct vertices out of \(S\) have different representation vectors with respect to \(S\). Any resolving set of minimum cardinality is called a _metric basis for_ \(G\), and its cardinal number is called the _metric dimension of_ \(G\). We denote the metric dimension of \(G\) by \(dim_{M}(G)\). Two different vertices \(u,v\) _are mutually maximally distant_ if \(d(v,w)\leq d(u,v)\), for every \(w\in N(u)\), and \(d(u,w)\leq d(u,v)\), for every \(w\in N(v)\). For a graph \(G\), _the strong resolving graph of_ \(G\) is denoted by \(G_{SR}\) and its vertex and edge sets are defined as follows: \(V(G_{SR})=\{u\in V(G):\text{there exists }v\in V(G)\text{ such that }u,v\text{ are mutually maximally distant}\}\) and \(uv\in E(G_{SR})\) if and only if \(u\) and \(v\) are mutually maximally distant.
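Since these definitions are purely combinatorial, the small cases treated later in the paper can be checked by exhaustive search. The following sketch is our own illustration (not part of the paper) and assumes the Python package networkx; it computes \(dim_{M}(G)\) directly from the definition of a resolving set.

```python
# Brute-force metric dimension of a small graph: a set S is resolving iff the
# representation vectors D(v|S) are pairwise distinct.
import itertools
import networkx as nx

def metric_dimension(G):
    d = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G.nodes())
    for k in range(1, len(nodes) + 1):
        for S in itertools.combinations(nodes, k):
            vectors = {tuple(d[v][s] for s in S) for v in nodes}
            if len(vectors) == len(nodes):   # all D(v|S) are distinct
                return k
    return len(nodes)

print(metric_dimension(nx.cycle_graph(6)))   # the 6-cycle has metric dimension 2
```

Including the vertices of \(S\) themselves in the distinctness check is harmless, since each \(s\in S\) is the unique vertex at distance \(0\) from itself.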
Two vertices \(u\) and \(v\) are _strongly resolved_ by some vertex \(w\) if either \(d(w,u)\) is equal to \(d(w,v)+d(v,u)\) or \(d(w,v)\) is equal to \(d(w,u)+d(v,u)\). A set \(W\) of vertices is a _strong resolving set of_ \(G\) if every two distinct vertices of \(G\) are strongly resolved by some vertex of \(W\); a minimum strong resolving set is called a _strong metric basis_ and its cardinality is _the strong metric dimension of_ \(G\). We denote the strong metric dimension of \(G\) by \(sdim(G)\).

Throughout this paper, all rings are assumed to be commutative with identity. The set of all non-trivial ideals of \(R\) is denoted by \(I(R)\). The ring \(R\) is called _reduced_ if it has no nilpotent elements other than \(0_{R}\). For undefined notions in ring theory, we refer the reader to [4]. _The inclusion ideal graph of a ring \(R\)_, denoted by \(In(R)\), is a graph whose vertex set is \(I(R)\) and two distinct vertices are adjacent if and only if one of them is contained properly in the other one. This graph was first introduced and studied by Akbari et al. in [2], where several interesting properties of it were obtained. Afterward, this concept has been the subject of much research; see for instance [6, 15, 20]. In this paper, we characterize the metric dimension of \(In(R)\). Moreover, the structure of the strong resolving graph of \(In(R)\) is investigated and, as an application, \(sdim(In(R))\) is computed.

## 2 \(dim_{M}(In(R))\) and \(sdim(In(R))\), when \(R\) is reduced

In this section, it is first shown that \(dim_{M}(In(R))\) is finite if and only if \(|I(R)|<\infty\). Then we provide some metric and strong metric dimension formulas for \(dim_{M}(In(R))\) and \(sdim(In(R))\), when \(R\) is a reduced ring. We fix the following notations.

**Remark 2.1**: Let \(R\cong\prod_{i=1}^{n}R_{i}\), where \(R_{i}\) is a ring for every \(1\leq i\leq n\), and \(I=I_{1}\times\cdots\times I_{n}\in V(In(R))\). We adopt the following notations:

1) By \(I^{c}=I_{1}^{c}\times\cdots\times I_{n}^{c}\), we mean a vertex of \(In(R)\) such that \(I_{i}^{c}=R_{i}\) if and only if \(I_{i}=0\), for every \(1\leq i\leq n\).

2) If \(R_{i}\) is a field, then \(X_{i}=0\times\cdots\times 0\times R_{i}\times 0\times\cdots\times 0\), where the field \(R_{i}\) is in the \(i\)-th position (we call \(X_{i}\) the \(i\)-th minimal ideal, if every \(R_{i}\) is a field).

3) By \(M\), we mean the following subset of \(V(In(R))\):

\[M=\{I=I_{1}\times\cdots\times I_{n}\mid I_{i}=0\ \text{or}\ I_{i}=R_{i}\ \text{for every}\ 1\leq i\leq n\}.\]

**Proposition 2.1**: _Let \(R\) be a ring that is not a field. Then \(dim_{M}(In(R))<\infty\) if and only if \(|I(R)|<\infty\)._

**Proof.** First assume that \(dim_{M}(In(R))\) is finite and \(W=\{W_{1},\ldots,W_{n}\}\) is a metric basis for \(In(R)\), where \(n\) is a non-negative integer. By [2, Theorem 2.1], there are only \(3^{n}\) possibilities for \(D(X|W)\), for every \(X\in V(In(R))\setminus W\). Thus \(|V(In(R))|\leq 3^{n}+n\) and hence \(R\) has finitely many ideals. The converse implication is clear. \(\Box\)

By Proposition 2.1, to express metric and strong metric dimensions in some explicit formulas, it is enough to consider rings with finitely many ideals. Therefore, from now on, we suppose that all rings \(R\) have finitely many ideals. It is well-known that reduced Artinian rings are direct products of finitely many fields. Let \(n\geq 3\) be a positive integer. In the next theorem, \(dim_{M}(In(\prod_{i=1}^{n}\mathbb{F}_{i}))\) is determined. For this aim, the following lemma is needed.
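(A brief computational aside before the lemma: for \(R\cong\prod_{i=1}^{n}\mathbb{F}_{i}\), every ideal is a product whose \(i\)-th component is \(0\) or \(\mathbb{F}_{i}\), so the non-trivial ideals can be encoded as 0/1 vectors. The following hedged sketch, ours and not the authors', builds \(In(R)\) this way and, reusing `metric_dimension` from above, checks the small cases of the next theorem.)

```python
# In(F1 x ... x Fn): vertices are the 0/1 vectors that are neither all-0 nor
# all-1; two vertices are adjacent iff one properly contains the other,
# i.e. the vectors are componentwise comparable and distinct.
import itertools
import networkx as nx

def inclusion_ideal_graph_of_fields(n):
    ideals = [v for v in itertools.product((0, 1), repeat=n)
              if any(v) and not all(v)]                      # non-trivial ideals
    G = nx.Graph()
    G.add_nodes_from(ideals)
    for I, J in itertools.combinations(ideals, 2):
        if all(a <= b for a, b in zip(I, J)) or all(b <= a for a, b in zip(I, J)):
            G.add_edge(I, J)                                 # proper inclusion
    return G

for n in (3, 4):
    print(n, metric_dimension(inclusion_ideal_graph_of_fields(n)))
# expected (by Theorem 2.1 below): 3 -> 2 and 4 -> 3, i.e. n - 1 for n <= 4
```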
**Lemma 2.1**: _Let \(n\geq 3\) be a positive integer and \(R\cong\prod_{i=1}^{n}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq n\). Then \(diam(In(R))=3\)._

**Proof.** By [2, Theorem 2.1], \(In(R)\) is a connected graph and \(diam(In(R))\leq 3\). Now, let \(J=X_{1}^{c}\). Since \(X_{1}\nsim J\), \(d(X_{1},J)\geq 2\). If \(d(X_{1},J)=2\), then there exists a vertex \(T\) such that \(X_{1}\sim T\sim J\) is the shortest path between \(X_{1}\) and \(J\). Since \(X_{1}\sim T\), \(X_{1}\subset T\) or \(T\subset X_{1}\). If \(T\subset X_{1}\), then clearly \(T\nsim J\). Thus \(X_{1}\subset T\). A similar argument shows that \(J\subset T\), which means \(T=R\), a contradiction. Thus \(X_{1}\sim\mathbb{F}_{1}\times\mathbb{F}_{2}\times 0\times\cdots\times 0\sim X_{2}\sim J\) is the shortest path between \(X_{1}\) and \(J\), and so \(diam(In(R))=3\). \(\Box\)

**Theorem 2.1**: _Suppose that \(R\cong\prod_{i=1}^{n}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq n\) and \(n\geq 3\) is a positive integer. Then the following statements hold:_

\(1)\) _If \(n\leq 4\), then \(dim_{M}(In(R))=n-1\)._

\(2)\) _If \(n\geq 5\), then \(dim_{M}(In(R))=n\)._

**Proof.** (1) If \(n=3\), then \(In(R)=C_{6}\) and thus \(dim_{M}(In(R))=2\). If \(n=4\), then since \(diam(In(R))=3\) and \(|V(In(R))|=14\), we have \(dim_{M}(In(R))\geq 3\). Now we show that \(dim_{M}(In(R))\leq 3\). If we put \(W=\{X_{1},X_{2},X_{3}\}\), then we get

\[D(\mathbb{F}_{1}\times\mathbb{F}_{2}\times\mathbb{F}_{3}\times 0\,|\,W)=(1,1,1),\quad D(\mathbb{F}_{1}\times\mathbb{F}_{2}\times 0\times\mathbb{F}_{4}\,|\,W)=(1,1,3),\quad D(\mathbb{F}_{1}\times 0\times\mathbb{F}_{3}\times\mathbb{F}_{4}\,|\,W)=(1,3,1),\]
\[D(0\times\mathbb{F}_{2}\times\mathbb{F}_{3}\times\mathbb{F}_{4}\,|\,W)=(3,1,1),\quad D(\mathbb{F}_{1}\times\mathbb{F}_{2}\times 0\times 0\,|\,W)=(1,1,2),\quad D(\mathbb{F}_{1}\times 0\times\mathbb{F}_{3}\times 0\,|\,W)=(1,2,1),\]
\[D(0\times\mathbb{F}_{2}\times\mathbb{F}_{3}\times 0\,|\,W)=(2,1,1),\quad D(\mathbb{F}_{1}\times 0\times 0\times\mathbb{F}_{4}\,|\,W)=(1,2,2),\quad D(0\times\mathbb{F}_{2}\times 0\times\mathbb{F}_{4}\,|\,W)=(2,1,2),\]
\[D(0\times 0\times\mathbb{F}_{3}\times\mathbb{F}_{4}\,|\,W)=(2,2,1),\quad D(0\times 0\times 0\times\mathbb{F}_{4}\,|\,W)=(2,2,2).\]

This shows that \(W\) is a resolving set for \(In(R)\) and hence \(dim_{M}(In(R))\leq 3\).

Figure 1: \(In(R)\)

(2) We show that \(dim_{M}(In(R))=n\), for every \(n\geq 5\). Indeed, we have the following claims:

**Claim 1.** \(dim_{M}(In(R))\geq n\). Proposition 2.1 shows that \(dim_{M}(In(R))\) is finite. Let \(W=\{W_{1},\ldots,W_{k}\}\) be a metric basis for \(In(R)\), where \(k\) is a positive integer. By Lemma 2.1, \(d(I,I^{c})=3\), for every \(I\in V(In(R))\) (indeed, \(d(I,J)=3\) iff \(J=I^{c}\)). Hence there is at most one \(3\) in \(D(I|W)\) and for the other components there are \(2\) possibilities, for each \(I\in V(In(R))\). Thus if \(I^{c}\in W\), then there are \(2^{k-1}\) possibilities for \(D(I|W)\) and otherwise there are \(2^{k}\) possibilities, for every \(X\in V(In(R))\setminus W\). Since \(|V(In(R))|=2^{n}-2\) and \(|V(In(R))|-2k\leq 2^{k}\), we have \(2^{n}\leq 2^{k}+2k+2\). Since \(n\geq 5\), we conclude that \(k\geq n\). Therefore \(dim_{M}(In(R))\geq n\).

**Claim 2.** \(dim_{M}(In(R))\leq n\). Let \(W=\{X_{1},\ldots,X_{n}\}\), where \(X_{i}\) is the \(i\)-th minimal ideal, for every \(1\leq i\leq n\) (see Remark 2.1). We show that \(W\) is a resolving set for \(In(R)\). Let \(I,J\in V(In(R))\setminus W\) and \(I\neq J\). We show that \(D(I|W)\neq D(J|W)\).
Since \(diam(In(R))=3\), we consider the following cases:

\((a)\) \(I=X_{i}^{c}\), for some \(1\leq i\leq n\). In this case, each component of \(D(I|W)\) is \(1\) if and only if the corresponding component in \(I\) is a field. Moreover, the \(i\)-th component of \(D(I|W)\) is \(3\).

\((b)\) \(I\neq X_{i}^{c}\), for every \(1\leq i\leq n\). In this case each component of \(D(I|W)\) is \(1\) if and only if the corresponding component in \(I\) is a field, and each component of \(D(I|W)\) is \(2\) if and only if the corresponding component in \(I\) is zero.

By cases \((a)\) and \((b)\), if \(I\neq J\), then \(D(I|W)\neq D(J|W)\). Thus \(W\) is a resolving set for \(In(R)\). Therefore \(dim_{M}(In(R))\leq n\). By Claims 1, 2, \(dim_{M}(In(R))=n\), for \(n\geq 5\). \(\Box\)

The next goal of this section is to find \(sdim(In(R))\), where \(R\) is a direct product of finitely many fields. To this end, we need a series of lemmas.

**Lemma 2.2**: ([12, Theorem 2.1]) _For any connected graph \(G\), \(sdim(G)=\alpha(G_{SR})\)._

**Lemma 2.3**: (Gallai's theorem) _For any graph \(G\) of order \(n\), \(\alpha(G)+\beta(G)=n\)._

**Lemma 2.4**: _Let \(n\geq 3\) be a positive integer and \(R\cong\prod_{i=1}^{n}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq n\). Then the following statements hold._

\(1)\) \(V(In(R))=V(In(R)_{SR})\)_._

\(2)\) _Suppose that \(I,J\in V(In(R)_{SR})\); then \(IJ\in E(In(R)_{SR})\) if and only if \(I=J^{c}\) or \(IJ,IJ^{c}\notin E(In(R))\)._

**Proof.** 1) For every \(I=I_{1}\times\cdots\times I_{n}\in V(In(R))\), Lemma 2.1 implies that \(d(I,I^{c})=3=diam(In(R))\). Thus \(I,I^{c}\) are mutually maximally distant and so \(I\in V(In(R)_{SR})\), i.e., \(V(In(R))=V(In(R)_{SR})\).

2) First suppose that \(I=J^{c}\) or \(IJ,IJ^{c}\notin E(In(R))\). If \(I=J^{c}\), then obviously \(IJ\in E(In(R)_{SR})\). Hence one may suppose that \(I\neq J^{c}\) and \(IJ,IJ^{c}\notin E(In(R))\). Since \(IJ\notin E(In(R))\) and \(I\neq J^{c}\), \(d_{In(R)}(I,J)=2\). Also \(I\nsim J^{c}\) implies that \(d(V,J)\leq d(I,J)\), for every \(V\in N(I)\). Moreover, \(d(U,I)\leq d(I,J)\), for every \(U\in N(J)\). Therefore, \(I,J\) are mutually maximally distant, thus \(IJ\in E(In(R)_{SR})\). Conversely, suppose that \(IJ\in E(In(R)_{SR})\), for some \(I,J\in V(In(R)_{SR})\), and \(I\neq J^{c}\). Then clearly \(I\nsim J\), and if \(I\sim J^{c}\), then \(d_{In(R)}(J,J^{c})=3>d(I,J)\), and so \(I,J\) are not mutually maximally distant, a contradiction. This completes the proof. \(\Box\)

**Lemma 2.5**: _Let \(n\geq 3\) be a positive integer and \(R\cong\prod_{i=1}^{n}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq n\). Then \(In(R)_{SR}=H+\underbrace{K_{2}+\cdots+K_{2}}_{n\ \text{times}}\), where \(H\) is a connected graph._

**Proof.** By Lemma 2.4, \(V(In(R))=V(In(R)_{SR})\). Assume that \(I=I_{1}\times\cdots\times I_{n}\in V(In(R)_{SR})\) and let \(NZC(I)\) be the number of zero components in \(I\). Obviously, \(1\leq NZC(I)\leq n-1\). Assume that \(A_{i}=\{I\in V(In(R)_{SR})\mid NZC(I)=i\}\), for every \(1\leq i\leq n-1\). Let \(I\in A_{1}\). Then for every \(J\neq I^{c}\) with \(IJ\notin E(In(R))\) we have \(I^{c}\subset J\) and so, by Lemma 2.4, \(IJ\notin E(In(R)_{SR})\). Similarly, for every \(I\in A_{n-1}\) the only vertex maximally distant from \(I\) is \(I^{c}\) and vice versa, so \(I\) is only mutually maximally distant from \(I^{c}\).
Since \(|A_{1}|=|A_{n-1}|=n\), \(In(R)_{SR}\) contains \(n\) copies of \(K_{2}\). Now, we show that \(H=In(R)_{SR}[S]\), where \(S=V(In(R)_{SR})\setminus(A_{1}\cup A_{n-1})\), is a connected graph. For every two distinct vertices \(I,J\in S\), if \(I\nsim J\) and \(I\nsim J^{c}\) in \(In(R)\), then \(d_{H}(I,J)=1\); otherwise \(d_{H}(I,J)=2\). So \(H=In(R)_{SR}[S]\) is a connected graph. \(\Box\)

The next example explains Lemma 2.5 in case \(n=4\).

**Example 2.1**: Suppose that \(R\cong\prod_{i=1}^{4}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq 4\). Thus \(|V(In(R))|=14\). Let \(V_{1}=\mathbb{F}_{1}\times\mathbb{F}_{2}\times\mathbb{F}_{3}\times 0\), \(V_{2}=\mathbb{F}_{1}\times\mathbb{F}_{2}\times 0\times\mathbb{F}_{4}\), \(V_{3}=\mathbb{F}_{1}\times 0\times\mathbb{F}_{3}\times\mathbb{F}_{4}\), \(V_{4}=0\times\mathbb{F}_{2}\times\mathbb{F}_{3}\times\mathbb{F}_{4}\), \(V_{5}=\mathbb{F}_{1}\times\mathbb{F}_{2}\times 0\times 0\), \(V_{6}=\mathbb{F}_{1}\times 0\times\mathbb{F}_{3}\times 0\), \(V_{7}=0\times\mathbb{F}_{2}\times\mathbb{F}_{3}\times 0\), \(V_{8}=\mathbb{F}_{1}\times 0\times 0\times\mathbb{F}_{4}\), \(V_{9}=0\times\mathbb{F}_{2}\times 0\times\mathbb{F}_{4}\), \(V_{10}=0\times 0\times\mathbb{F}_{3}\times\mathbb{F}_{4}\), \(V_{11}=0\times 0\times 0\times\mathbb{F}_{4}\), \(V_{12}=0\times 0\times\mathbb{F}_{3}\times 0\), \(V_{13}=0\times\mathbb{F}_{2}\times 0\times 0\), \(V_{14}=\mathbb{F}_{1}\times 0\times 0\times 0\). Then \(In(R)\) and \(In(R)_{SR}\) are shown in Figure 2.

**Lemma 2.6**: _Let \(n\geq 3\) be a positive integer and \(R\cong\prod_{i=1}^{n}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq n\). Then \(\beta(In(R)_{SR})=2n-3\)._

**Proof.** By Lemma 2.5, \(In(R)_{SR}=H+\underbrace{K_{2}+\cdots+K_{2}}_{n\ \text{times}}\), so \(\beta(In(R)_{SR})=n+\beta(H)\). We show that \(\beta(H)=n-3\). By the proof of Lemma 2.5, \(V(H)=\cup_{i=2}^{n-2}A_{i}\), where \(A_{i}=\{I\in V(In(R)_{SR})\mid NZC(I)=i\}\). Take the following facts into observation:

**Fact 1.** Let \(I,J\in A_{i}\), \(I\neq J\) and \(2\leq i\leq n-2\). Then since \(NZC(I)=NZC(J)\), \(IJ\notin E(In(R))\).

**Fact 2.** Let \(I,J\in A_{i}\), for some \(2\leq i\leq n-2\). If \(I\) is not adjacent to \(J\) in \(In(R)_{SR}\), then by Fact 1 and Lemma 2.4, \(I\sim J^{c}\) in \(In(R)\).

**Fact 3.** Let \(i=\dfrac{n}{2}\), where \(n\) is even. Then \(In(R)_{SR}[A_{i}]\) is a complete graph, by Facts 1, 2.

**Fact 4.** Let \(2\leq i\leq[\dfrac{n}{2}]-1\), for even \(n\), and \(2\leq i\leq[\dfrac{n}{2}]\), otherwise. Let \(S_{i}\subseteq A_{i}\) be the largest subset of \(A_{i}\) such that \(IJ\notin E(In(R)_{SR})\), for every \(I,J\in S_{i}\) (indeed, \(S_{i}\) is the largest independent subset of \(A_{i}\) in \(In(R)_{SR}[A_{i}]\)). Then \(|S_{i}|=[\dfrac{n}{i}]\). Symmetrically, for every \([\dfrac{n}{2}]+1\leq i\leq n-2\), let \(S_{i}\subseteq A_{i}\) be the largest subset of \(A_{i}\) such that \(IJ\notin E(In(R)_{SR})\), for every \(I,J\in S_{i}\) (indeed, \(S_{i}\) is the largest independent subset of \(A_{i}\) in \(In(R)_{SR}[A_{i}]\)). Then \(|S_{i}|=[\dfrac{n}{n-i}]\).

For every \(I,J\in S_{i}\) (except for \(i=\dfrac{n}{2}\), with even \(n\)), where \(2\leq i\leq n-2\), we have \(I\sim J^{c}\) in \(In(R)\). Thus for every \(2\leq j\neq i\leq n-2\) and for every \(V\in S_{j}\), \(V\) is adjacent to some vertices contained in \(S_{i}\). In this case \(|S_{2}|=|S_{n-2}|=[\frac{n}{2}]\), and \(S_{2}\) is the largest such independent subset of \(V(H)\).
If some set \(S^{\prime}\) is the largest independent subset of \(V(H)\), then for every \(I,J\in S^{\prime}\), either \(I\sim J^{c}\) or \(I\sim J\) in \(In(R)\). First suppose that \(S^{\prime}\) is the largest subset of \(V(H)\) such that for every \(I,J\in S^{\prime}\), \(I\sim J^{c}\) in \(In(R)\); then it is not hard to check that \(|S^{\prime}|\leq|S_{2}|\). Now let \(W=\{I_{1}=\mathbb{F}_{1}\times\mathbb{F}_{2}\times 0\times\cdots\times 0,I_{2}=\mathbb{F}_{1}\times\mathbb{F}_{2}\times\mathbb{F}_{3}\times 0\times\cdots\times 0,\ldots,I_{n-3}=\mathbb{F}_{1}\times\cdots\times\mathbb{F}_{n-2}\times 0\times 0\}\). Then \(W\) is an independent subset of \(V(H)\). For every \(V\in V(H)\backslash W\), \(V\) is adjacent to \(I_{i}\) for some \(1\leq i\leq n-3\). Thus \(W\) is the largest independent subset of \(V(H)\). Since \(|W|=n-3\geq[\frac{n}{2}]\), \(\beta(H)=n-3\). Thus \(\beta(In(R)_{SR})=n+\beta(H)=2n-3\). \(\Box\)

Now, we are in a position to find \(sdim(In(R))\).

**Theorem 2.2**: _Suppose that \(R\cong\prod_{i=1}^{n}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq n\) and \(n\geq 3\) is a positive integer. Then \(sdim(In(R))=2^{n}-2n+1\)._

**Proof.** By Lemma 2.6, \(\beta(In(R)_{SR})=2n-3\). On the other hand, since \(|V(In(R)_{SR})|=2^{n}-2\), Gallai's theorem and Lemma 2.2 show that \(sdim(In(R))=|V(In(R)_{SR})|-\beta(In(R)_{SR})=2^{n}-2n+1\). \(\Box\)

## 3 \(dim_{M}(In(R))\) and \(sdim(In(R))\), when \(R\) is non-reduced

As mentioned in Section 2, we consider rings \(R\) with finitely many ideals. Then there exists a positive integer \(m\) such that \(R\cong R_{1}\times\cdots\times R_{m}\), where \((R_{i},m_{i})\) is a local Artinian ring, for all \(1\leq i\leq m\). If every \(m_{i}\) is principal, then by [4, Proposition 8.8], every \(R_{i}\) is a PIR with finitely many ideals. Moreover, the ideals of every \(R_{i}\) are totally ordered by inclusion. In this section, we compute \(dim_{M}(In(R))\) and \(sdim(In(R))\) for such rings \(R\).

**Theorem 3.1**: _Suppose that \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(m\geq 2\) is a positive integer. Then \(dim_{M}(In(R))=(\sum_{i=1}^{m}n_{i})+m-1\), where \(n_{i}=|I(R_{i})|\) for every \(1\leq i\leq m\)._

**Proof.** We show that \(dim_{M}(In(R))=(\sum_{i=1}^{m}n_{i})+m-1\), where \(n_{i}=|I(R_{i})|\) for every \(1\leq i\leq m\). Indeed, we have the following claims:

**Claim 1.** \(dim_{M}(In(R))\geq(\sum_{i=1}^{m}n_{i})+m-1\). Let \(W\) be a metric basis for \(In(R)\) and, for every \(1\leq j\leq m\), let \(\chi_{j}=\{0\times\cdots\times 0\times I_{ji}\times 0\times\cdots\times 0\mid 0\neq I_{ji}\trianglelefteq R_{j},\ 1\leq i\leq n_{j}+1\}\), where the ideal \(I_{ji}\) occupies the \(j\)-th position. Among the members of the above sets, at most one may fail to be contained in \(W\). Otherwise, assume that two vertices \(J_{1}\) and \(J_{2}\) of the above sets are not in \(W\). If \(J_{1}\) and \(J_{2}\) are both contained in some \(\chi_{i}\), then clearly \(D(J_{1}|W)=D(J_{2}|W)\). Thus, without loss of generality, assume that \(J_{1}\in\chi_{1}\) and \(J_{2}\in\chi_{2}\). In this case, there are \(1\leq i\leq n_{1}\) and \(1\leq j\leq n_{2}\) such that \(J_{1}=I_{1i}\times 0\times\cdots\times 0\) and \(J_{2}=0\times I_{2j}\times 0\times\cdots\times 0\). Suppose that \(V=R_{1}\times I_{2j}\times 0\times\cdots\times 0\); then \(D(J_{1}|W)=D(V|W)\).
Also, \(|\bigcup_{i=1}^{m}\chi_{i}|=n_{1}+n_{2}+\cdots+n_{m}+m\) implies that \(dim_{M}(In(R))\geq(\sum_{i=1}^{m}n_{i})+m-1\).

**Claim 2.** \(dim_{M}(In(R))\leq(\sum_{i=1}^{m}n_{i})+m-1\). Let \(W=\bigcup_{i=1}^{m}\chi_{i}\setminus\{I_{1i}\times 0\times\cdots\times 0\}\). We claim that \(W\) is a resolving set for \(In(R)\). For this, let \(I,J\notin W\) and \(I_{1}\times\cdots\times I_{m}=I\neq J=J_{1}\times\cdots\times J_{m}\). Hence, there exists \(1\leq j\leq m\) such that \(I_{j}\neq J_{j}\). Since \(R\) is a PIR, \(I_{j}\subset J_{j}\) or \(J_{j}\subset I_{j}\). Without loss of generality assume that \(I_{j}\subset J_{j}\). In this case, we have \(d(I,V)=2\neq 1=d(V,J)\), where \(V=0\times\cdots\times 0\times J_{j}\times 0\times\cdots\times 0\). Hence for every \(I\neq J\), \(D(I|W)\neq D(J|W)\). Thus \(dim_{M}(In(R))\leq(\sum_{i=1}^{m}n_{i})+m-1\).

By Claims 1, 2, \(dim_{M}(In(R))=(\sum_{i=1}^{m}n_{i})+m-1\). \(\Box\)

**Theorem 3.2**: _Let \(R\cong S\times T\) such that \(S=\prod_{i=1}^{m}R_{i}\), \(m\geq 1\), and \(T=\prod_{j=1}^{n}\mathbb{F}_{j}\), \(n\geq 1\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(\mathbb{F}_{j}\) is a field for every \(1\leq j\leq n\). Then:_

\(1)\) _If \(m=n=1\), then \(dim_{M}(In(R))=n+m+n_{1}-2=n_{1}\), where \(n_{1}=|I(R_{1})|\)._

\(2)\) _If \(m=1\) and \(n=2\), then \(dim_{M}(In(R))=n+m+n_{1}-1=n_{1}+2\), where \(n_{1}=|I(R_{1})|\)._

\(3)\) _If \(n\geq 3\), then \(dim_{M}(In(R))=(\sum_{i=1}^{m}n_{i})+m+n\), where \(n_{i}=|I(R_{i})|\)._

**Proof.** 1) We first show that \(dim_{M}(In(R))\geq n_{1}\). For this, let \(W\) be a metric basis for \(In(R)\). Let \(I_{i}\subset I_{i+1}\), where \(I_{i}\in I(R_{1})\), for every \(1\leq i\leq n_{1}-1\). Then \(d(I_{i}\times\mathbb{F},V)=d(I_{i+1}\times\mathbb{F},V)\), for every \(1\leq i\leq n_{1}-1\) and every \(V\in V(In(R))\setminus\{I_{2}\times 0,\ldots,I_{n_{1}}\times 0\}\). Thus \(\{W_{1}=I_{2}\times 0,W_{2}=I_{3}\times 0,\ldots,W_{n_{1}-1}=I_{n_{1}}\times 0\}\subseteq W\). Also, for every \(1\leq j\leq n_{1}-1\), we have \(d(I_{n_{1}}\times\mathbb{F},W_{j})=d(R_{1}\times 0,W_{j})\). Hence \(W_{n_{1}}=R_{1}\times 0\in W\). Therefore, \(dim_{M}(In(R))\geq n_{1}\).

Conversely, we show that \(dim_{M}(In(R))\leq n_{1}\). Let \(W=\{W_{1}=I_{2}\times 0,W_{2}=I_{3}\times 0,\ldots,W_{n_{1}-1}=I_{n_{1}}\times 0,W_{n_{1}}=R_{1}\times 0\}\). It is enough to show that \(W\) is a resolving set of \(In(R)\). Let \(I^{\prime}=I^{\prime}_{1}\times I^{\prime}_{2}\) and \(J=J_{1}\times J_{2}\) be two distinct vertices of \(V(In(R))\setminus W\). If \(I^{\prime}=I_{1}\times 0\) or \(J=I_{1}\times 0\), then obviously \(D(I^{\prime}|W)\neq D(J|W)\). Thus we may assume that \(I^{\prime}_{2}=J_{2}=\mathbb{F}\). Since \(I^{\prime}\neq J\), without loss of generality, we may assume that \(I^{\prime}\subset J\). In this case there exists \(1\leq i\leq n_{1}\) such that \(d(I^{\prime},W_{i})=2\neq 1=d(J,W_{i})\). Thus in both cases we have \(D(I^{\prime}|W)\neq D(J|W)\). Therefore, \(dim_{M}(In(R))\leq n_{1}\).

2) Let \(S=\{V_{1}=I_{11}\times 0\times 0,\ldots,V_{n_{1}}=I_{1n_{1}}\times 0\times 0,V_{n_{1}+1}=R_{1}\times 0\times 0,V_{n_{1}+2}=0\times\mathbb{F}_{1}\times 0,V_{n_{1}+3}=0\times 0\times\mathbb{F}_{2}\}\). At most one member of the set \(S\) may fail to be contained in the metric basis \(W\). For if not, assume that two vertices \(J_{1}\) and \(J_{2}\) of the above set are not contained in \(W\).
If \(J_{1},J_{2}\in\{V_{1},\ldots,V_{n_{1}}\}\), or \(J_{1}\in\{V_{1},\ldots,V_{n_{1}}\}\) and \(J_{2}=V_{n_{1}+1}\), or \(J_{1}=V_{n_{1}+2}\) and \(J_{2}=V_{n_{1}+3}\), then clearly \(D(J_{1}|W)=D(J_{2}|W)\). Now, without loss of generality, assume that \(J_{1}=V_{1}\) and \(J_{2}=V_{n_{1}+2}\); then \(D(I_{11}\times\mathbb{F}_{1}\times\mathbb{F}_{2}|W)=D(I_{11}\times 0\times\mathbb{F}_{2}|W)\). Finally, assume that \(J_{1}=V_{n_{1}+1}\) and \(J_{2}=V_{n_{1}+2}\); in this case, \(D(I_{11}\times\mathbb{F}_{1}\times\mathbb{F}_{2}|W)=D(I_{11}\times 0\times\mathbb{F}_{2}|W)\). Since \(|S|=n_{1}+3\), \(dim_{M}(In(R))\geq n_{1}+2\).

Conversely, let \(W=\{V_{2},\ldots,V_{n_{1}+3}\}\). We claim that \(W\) is a resolving set and consequently a metric basis for \(In(R)\). It is enough to show that for every two distinct vertices \(V^{\prime}=V^{\prime}_{1}\times V^{\prime}_{2}\times V^{\prime}_{3}\) and \(U=U_{1}\times U_{2}\times U_{3}\) of \(V(In(R))\setminus W\), \(D(V^{\prime}|W)\neq D(U|W)\). Since \(V^{\prime}\neq U\), we have the following cases:

**Case 1.** \(V^{\prime}_{j}=0\) and \(U_{j}=\mathbb{F}_{j}\), or \(V^{\prime}_{j}=\mathbb{F}_{j}\) and \(U_{j}=0\), for some \(2\leq j\leq 3\). Without loss of generality, we may assume that \(V^{\prime}_{j}=0\) and \(U_{j}=\mathbb{F}_{j}\). This clearly implies that \(d(V^{\prime},X_{j})\in\{2,3\}\neq 1=d(U,X_{j})\). Thus \(D(V^{\prime}|W)\neq D(U|W)\).

**Case 2.** \(V^{\prime}_{1}\neq U_{1}\). Since \(R\) is a PIR, \(V^{\prime}_{1}\subset U_{1}\) or \(U_{1}\subset V^{\prime}_{1}\). Without loss of generality, we may assume that \(V^{\prime}_{1}\subset U_{1}\). If \(U_{1}=I_{11}\), then \(d(U,I_{12}\times 0\times 0)=2\neq 3=d(V^{\prime},I_{12}\times 0\times 0)\); otherwise, \(d(U,U_{1}\times 0\times 0)=1\neq 2=d(V^{\prime},U_{1}\times 0\times 0)\). Thus in both cases we have \(D(V^{\prime}|W)\neq D(U|W)\). Therefore, \(dim_{M}(In(R))\leq n_{1}+2\).

3) We first show that \(dim_{M}(In(R))\geq(\sum_{i=1}^{m}n_{i})+m+n\). For every \(1\leq j\leq m\), let \(\chi_{j}=\{0\times\cdots\times 0\times I_{ji}\times 0\times\cdots\times 0\mid 0\neq I_{ji}\unlhd R_{j},\ 1\leq i\leq n_{j}+1\}\), where \(I_{ji}\) occupies the \(j\)-th position. Let \(S=\bigcup_{i=1}^{m}\chi_{i}\cup\{X_{m+1},\ldots,X_{m+n}\}\), where \(X_{j}=0\times\cdots\times 0\times\mathbb{F}_{j-m}\times 0\times\cdots\times 0\) (the field in the \(j\)-th position) for every \(m+1\leq j\leq m+n\). We claim that for every metric basis \(W\), we have \(S\subseteq W\). Otherwise, there exists \(J\in S\) such that \(J\notin W\). Since \(J\in S\), we have the following cases:

**Case 1.** \(J=0\times\cdots\times 0\times I_{ij}\times 0\times\cdots\times 0\), where \(1\leq i\leq m\) and \(1\leq j\leq n_{i}\). In this case, if \(j=1\), then \(D(R_{1}\times\cdots\times R_{i-1}\times I_{i1}\times R_{i+1}\times\cdots\times R_{m}\times\mathbb{F}_{m+1}\times\cdots\times\mathbb{F}_{m+n-1}\times 0|W)=D(R_{1}\times\cdots\times R_{i-1}\times 0\times R_{i+1}\times\cdots\times R_{m}\times\mathbb{F}_{m+1}\times\cdots\times\mathbb{F}_{m+n-1}\times 0|W)\). If \(j=n_{i}\), then \(D(0\times\cdots\times 0\times I_{in_{i}}\times 0\times\cdots\times 0\times\mathbb{F}_{m+1}\times 0\times\cdots\times 0|W)=D(0\times\cdots\times 0\times I_{in_{i}-1}\times 0\times\cdots\times 0\times\mathbb{F}_{m+1}\times 0\times\cdots\times 0|W)\).
Otherwise, \(D(0\times\cdots\times 0\times I_{in_{i}}\times 0\times\cdots\times 0\times\mathbb{F}_{m+1}\times 0\times\cdots\times 0|W)=D(0\times\cdots\times 0\times R_{i}\times 0\times\cdots\times 0\times\mathbb{F}_{m+1}\times 0\times\cdots\times 0|W)\).

**Case 2.** \(J=0\times\cdots\times 0\times R_{i}\times 0\times\cdots\times 0\), where \(1\leq i\leq m\). In this case, \(D(0\times\cdots\times 0\times I_{in_{i}}\times 0\times\cdots\times 0\times\mathbb{F}_{m+1}\times 0\times\cdots\times 0|W)=D(0\times\cdots\times 0\times R_{i}\times 0\times\cdots\times 0\times\mathbb{F}_{m+1}\times 0\times\cdots\times 0|W)\).

**Case 3.** \(J=X_{j}\) (see Remark 2.1), where \(m+1\leq j\leq m+n\). In this case, \(D(I_{11}\times R_{2}\times\cdots\times R_{m}\times\mathbb{F}_{m+1}\times\cdots\times\mathbb{F}_{j-1}\times 0\times\mathbb{F}_{j+1}\times\cdots\times\mathbb{F}_{m+n-1}\times 0|W)=D(I_{11}\times R_{2}\times\cdots\times R_{m}\times\mathbb{F}_{m+1}\times\cdots\times\mathbb{F}_{m+n-1}\times 0|W)\).

Thus \(S\subseteq W\). This implies that \(dim_{M}(In(R))=|W|\geq|S|=(\sum_{i=1}^{m}n_{i})+m+n\).

Conversely, we claim that \(W=S\) is a resolving set and consequently a metric basis for \(In(R)\). It is enough to show that for every two distinct vertices \(V^{\prime}=V_{1}^{\prime}\times\cdots\times V_{m+n}^{\prime}\) and \(U=U_{1}\times\cdots\times U_{m+n}\) of \(V(In(R))\setminus W\), \(D(V^{\prime}|W)\neq D(U|W)\). Since \(V^{\prime}\neq U\), we have the following cases:

**Case 1.** \(V_{j}^{\prime}=0\) and \(U_{j}=\mathbb{F}_{j-m}\), or \(V_{j}^{\prime}=\mathbb{F}_{j-m}\) and \(U_{j}=0\), for some \(m+1\leq j\leq m+n\). Without loss of generality, one may assume that \(V_{j}^{\prime}=0\) and \(U_{j}=\mathbb{F}_{j-m}\). This clearly implies that \(d(V^{\prime},X_{j})\in\{2,3\}\neq 1=d(U,X_{j})\). Thus \(D(V^{\prime}|W)\neq D(U|W)\).

**Case 2.** \(V_{i}^{\prime}\neq U_{i}\) for some \(1\leq i\leq m\). Since \(R\) is a PIR, \(V_{i}^{\prime}\subset U_{i}\) or \(U_{i}\subset V_{i}^{\prime}\). Without loss of generality, one may assume that \(V_{i}^{\prime}\subset U_{i}\). Then \(d(U,0\times\cdots\times 0\times U_{i}\times 0\times\cdots\times 0)=1\neq 2=d(V^{\prime},0\times\cdots\times 0\times U_{i}\times 0\times\cdots\times 0)\). Thus in both cases we have \(D(V^{\prime}|W)\neq D(U|W)\). Therefore, \(dim_{M}(In(R))\leq(\sum_{i=1}^{m}n_{i})+m+n\). \(\Box\)

Next, we study \(sdim(In(R))\). First, the case where no fields appear in the decomposition of \(R\) is investigated.

**Lemma 3.1**: _Let \(m\geq 2\) be a positive integer and \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR for every \(1\leq i\leq m\). For every \(I,J\in V(In(R))\), \(d(I,J)=3\) if and only if \(I,J\in M\) and \(I=J^{c}\)._

**Proof.** First let \(I,J\in M\) and \(I=J^{c}\). Since \(I\nsim J\), \(d(I,J)\geq 2\). Now let \(d(I,J)=2\). Thus there exists a vertex \(V\) such that \(I\sim V\sim J\) is the shortest path between \(I\) and \(J\). Since \(I\sim V\), \(I\subset V\) or \(V\subset I\). If \(V\subset I\), then clearly \(V\nsim J\). Thus \(I\subset V\). A similar argument shows that \(J\subset V\), which is impossible. Thus \(d(I,J)=3\).

Conversely, since \(d(I,J)=3\), if \(I_{i}\neq 0\), then \(J_{i}=0\), and if \(J_{i}\neq 0\), then \(I_{i}=0\), for every \(1\leq i\leq m\). Otherwise, \(I\sim V\sim J\), where \(V=0\times\cdots\times 0\times V_{i}\times 0\times\cdots\times 0\), with \(V_{i}=I_{i}\) if \(I_{i}\subset J_{i}\) and \(V_{i}=J_{i}\) otherwise. This implies that \(d(I,J)\leq 2\), a contradiction.
Now suppose to the contrary that \(I,J\notin M\). Then there exists \(1\leq i\leq m\) such that \(I_{i}\in I(R_{i})\). Thus \(I\sim V\sim J\), where \(V=R_{1}\times\cdots\times R_{i-1}\times I_{i}\times R_{i+1}\times\cdots\times R_{m}\), which implies that \(d(I,J)\leq 2\), a contradiction. Thus \(I,J\in M\) and \(I=J^{c}\). \(\Box\)

**Lemma 3.2**: _Let \(m\geq 2\) be a positive integer and \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\). Then \(IJ^{c}\in E(In(R))\) if and only if \(JI^{c}\in E(In(R))\), for every \(I,J\in M\)._

**Proof.** It is straightforward. \(\Box\)

**Lemma 3.3**: _Suppose that \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(m\geq 2\) is a positive integer. Then the following statements hold:_

1) \(V(In(R))=V(In(R)_{SR})\)_._

2) _Suppose that \(I,J\in M\subset V(In(R)_{SR})\); then \(IJ\in E(In(R)_{SR})\) if and only if \(I=J^{c}\) or \(IJ,IJ^{c}\notin E(In(R))\)._

3) _Suppose that \(I,J\in V(In(R)_{SR})\setminus M\); then \(IJ\in E(In(R)_{SR})\) if and only if \(IJ\notin E(In(R))\)._

4) _For every \(I\in V(In(R)_{SR})\setminus M\) and \(J\in M\), \(IJ\in E(In(R)_{SR})\) if and only if \(IJ,IJ^{c}\notin E(In(R))\)._

**Proof.** 1) For every \(I=I_{1}\times\cdots\times I_{m}\in M\), by Lemma 3.1, \(d(I,I^{c})=3=diam(In(R))\). Thus \(I,I^{c}\) are mutually maximally distant and so \(I\in V(In(R)_{SR})\). Also, for every \(I\in V(In(R))\setminus M\), there exists \(J\in V(In(R))\setminus M\) such that \(I,J\) are mutually maximally distant, and so \(I\in V(In(R)_{SR})\); i.e., \(V(In(R))=V(In(R)_{SR})\).

2) If \(I=J^{c}\) or \(IJ,IJ^{c}\notin E(In(R))\), then clearly \(I,J\) are mutually maximally distant and \(IJ\in E(In(R)_{SR})\). Now suppose that \(IJ\in E(In(R)_{SR})\) and \(I\neq J^{c}\). Since \(I\neq J^{c}\), \(d(I,J)\leq 2\). Also, \(I,J\) are mutually maximally distant, thus \(IJ,IJ^{c}\notin E(In(R))\).

3) If \(IJ\in E(In(R)_{SR})\), then \(I,J\) are mutually maximally distant, thus clearly \(IJ\notin E(In(R))\). Now suppose that \(IJ\notin E(In(R))\). Since \(d_{In(R)}(I,J)\leq 2\), for every \(I\in V(In(R)_{SR})\setminus M\) and for every \(J\in V(In(R)_{SR})\), we deduce that \(I,J\) are mutually maximally distant and \(IJ\in E(In(R)_{SR})\).

4) One side is clear. To prove the other side, assume that \(IJ,IJ^{c}\notin E(In(R))\). Since \(d(I,J)\leq 2\), \(I,J\) are mutually maximally distant and \(IJ\in E(In(R)_{SR})\). \(\Box\)

**Remark 3.1**: _Let \(R\cong R_{1}\times R_{2}\). If \(|I(R_{1})|=|I(R_{2})|=1\), then \(In(R)_{SR}=K_{3}+K_{2}+K_{2}\). Thus the condition \(|I(R_{i})|\geq 2\) is necessary for \(In(R)_{SR}\) to be connected. Therefore, we exclude this case from Lemma 3.4._

**Lemma 3.4**: _Suppose that \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(m\geq 2\) is a positive integer. Then \(In(R)_{SR}\) is a connected graph._

**Proof.** By Lemma 3.3, \(V(In(R))=V(In(R)_{SR})\). We show that for any two distinct vertices \(X^{\prime}=X^{\prime}_{1}\times\cdots\times X^{\prime}_{m}\) and \(Y^{\prime}=Y^{\prime}_{1}\times\cdots\times Y^{\prime}_{m}\) there is a path between them. For this we have the following cases:

**Case 1.** \(X^{\prime},Y^{\prime}\in M\). If \(X^{\prime}=Y^{\prime c}\) or \(X^{\prime}Y^{\prime},X^{\prime}Y^{\prime c}\notin E(In(R))\), then by Lemma 3.3, \(X^{\prime}\) and \(Y^{\prime}\) are adjacent in \(In(R)_{SR}\).
Thus suppose that \(X^{\prime}Y^{\prime}\notin E(In(R)_{SR})\); then either \(X^{\prime}\neq Y^{\prime c}\) and \(Y^{\prime}X^{\prime c}\in E(In(R))\), or \(X^{\prime}\neq Y^{\prime c}\) and \(X^{\prime}Y^{\prime}\in E(In(R))\). If \(X^{\prime}\neq Y^{\prime c}\) and \(X^{\prime}Y^{\prime}\in E(In(R))\), then \(X^{\prime}\subset Y^{\prime}\) or \(Y^{\prime}\subset X^{\prime}\). Without loss of generality, we may assume that \(X^{\prime}\subset Y^{\prime}\). Thus there are \(1\leq i\neq j\leq m\) such that \(X^{\prime}_{i}=Y^{\prime}_{i}=R_{i}\) and \(X^{\prime}_{j}=Y^{\prime}_{j}=0\). Let \(V=V_{1}\times\cdots\times V_{m}\), where \(V_{i}=I_{i1}\), \(V_{j}=I_{j1}\) and the other components are zero. Then \(X^{\prime}\sim V\sim Y^{\prime}\) is a path between \(X^{\prime}\) and \(Y^{\prime}\). Now let \(X^{\prime}\neq Y^{\prime c}\) and \(Y^{\prime}X^{\prime c}\in E(In(R))\). Since \(Y^{\prime}X^{\prime c}\in E(In(R))\), there are \(1\leq i\neq j\leq m\) such that \(X^{\prime}_{i}=0,Y^{\prime}_{i}=R_{i}\) and \(X^{\prime}_{j}=R_{j},Y^{\prime}_{j}=0\). Let \(V=V_{1}\times\cdots\times V_{m}\), where \(V_{i}=I_{i1}\), \(V_{j}=I_{j1}\) and the other components are the \(R_{k}\)'s. Then \(X^{\prime}\sim V\sim Y^{\prime}\) is a path between \(X^{\prime}\) and \(Y^{\prime}\). Thus there exists a path between \(X^{\prime}\) and \(Y^{\prime}\).

**Case 2.** \(X^{\prime},Y^{\prime}\in V(In(R)_{SR})\setminus M\). If \(X^{\prime}Y^{\prime}\notin E(In(R))\), then by Lemma 3.3, \(X^{\prime}Y^{\prime}\in E(In(R)_{SR})\). Thus suppose that \(X^{\prime}Y^{\prime}\in E(In(R))\), so \(X^{\prime}\subset Y^{\prime}\) or \(Y^{\prime}\subset X^{\prime}\). Without loss of generality, we may assume that \(X^{\prime}\subset Y^{\prime}\). If there exists \(1\leq i\leq m\) such that \(X^{\prime}_{i}=Y^{\prime}_{i}=0\), then \(X^{\prime}\sim 0\times\cdots\times 0\times I_{i1}\times 0\times\cdots\times 0\sim Y^{\prime}\) is a path between \(X^{\prime}\) and \(Y^{\prime}\). Thus suppose that \(Y^{\prime}_{i}\neq 0\) for every \(1\leq i\leq m\). Since \(Y^{\prime}\notin M\), there exists \(1\leq i\leq m\) such that \(Y^{\prime}_{i}\neq R_{i}\), so \(Y^{\prime}\sim V\), where \(V\) is the vertex obtained by replacing the first and the \(i\)-th components of \(Y^{\prime}\) with zero and \(R_{i}\), respectively. If the vertex \(V\) is adjacent to \(X^{\prime}\), then \(Y^{\prime}\sim V\sim X^{\prime}\) is a path between \(X^{\prime}\) and \(Y^{\prime}\). Otherwise, continue the same process to get a path between \(X^{\prime}\) and \(Y^{\prime}\).

**Case 3.** \(X^{\prime}\in V(In(R)_{SR})\setminus M\) and \(Y^{\prime}\in M\). If \(X^{\prime}Y^{\prime},X^{\prime}Y^{\prime c}\notin E(In(R))\), then by Lemma 3.3, \(X^{\prime}Y^{\prime}\in E(In(R)_{SR})\). Thus suppose that \(X^{\prime}Y^{\prime}\in E(In(R))\) or \(X^{\prime}Y^{\prime c}\in E(In(R))\). If \(X^{\prime}Y^{\prime}\in E(In(R))\), then \(X^{\prime}\subset Y^{\prime}\) or \(Y^{\prime}\subset X^{\prime}\). Without loss of generality, we may assume that \(X^{\prime}\subset Y^{\prime}\). Since \(Y^{\prime}\in M\), there exist \(1\leq i\neq j\leq m\) such that \(Y^{\prime}_{i}=0\) and \(Y^{\prime}_{j}\neq 0\). Then \(X^{\prime}\sim V\sim Y^{\prime}\), where \(V=0\times\cdots\times 0\times R_{i}\times 0\times\cdots\times 0\times I_{j1}\times 0\times\cdots\times 0\), is a path between \(X^{\prime}\) and \(Y^{\prime}\). If \(X^{\prime}Y^{\prime c}\in E(In(R))\), by a similar argument there exists a path between \(X^{\prime}\) and \(Y^{\prime}\). Thus \(In(R)_{SR}\) is a connected graph. \(\Box\)
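The \(In(R)_{SR}\) computations in these lemmas can also be checked mechanically on small examples. The following sketch is ours (not the authors'); it builds \(G_{SR}\) straight from the mutually-maximally-distant definition in the preliminaries and obtains \(sdim(G)\) via Lemmas 2.2 and 2.3, reusing `inclusion_ideal_graph_of_fields` from above.

```python
# Strong resolving graph G_SR and sdim(G) = |V(G_SR)| - beta(G_SR) (Gallai),
# by brute force; only suitable for tiny graphs.
import itertools
import networkx as nx

def strong_resolving_graph(G):
    d = dict(nx.all_pairs_shortest_path_length(G))
    def mmd(u, v):  # u and v mutually maximally distant
        return (all(d[v][w] <= d[u][v] for w in G.neighbors(u)) and
                all(d[u][w] <= d[u][v] for w in G.neighbors(v)))
    H = nx.Graph()
    H.add_edges_from((u, v) for u, v in itertools.combinations(G, 2) if mmd(u, v))
    return H

def sdim(G):
    H = strong_resolving_graph(G)
    nodes = list(H.nodes())
    beta = max(k for k in range(len(nodes) + 1)          # independence number
               for S in itertools.combinations(nodes, k)
               if all(not H.has_edge(u, v) for u, v in itertools.combinations(S, 2)))
    return H.number_of_nodes() - beta

# Theorem 2.2 predicts sdim(In(F1 x F2 x F3)) = 2^3 - 2*3 + 1 = 3.
print(sdim(inclusion_ideal_graph_of_fields(3)))   # -> 3
```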
**Lemma 3.5**: _Suppose that \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(m\geq 2\) is a positive integer. Then \(\beta(In(R)_{SR})=\Sigma_{i=1}^{m}n_{i}+m-1\)._

**Proof.** If \(m=2\) and \(|I(R_{1})|=|I(R_{2})|=1\), then \(In(R)_{SR}=K_{3}+K_{2}+K_{2}\), so \(\beta(In(R)_{SR})=3=n_{1}+n_{2}+m-1\). Otherwise, by Lemma 3.4, \(In(R)_{SR}\) is a connected graph. Moreover, by Lemma 3.3, \(V(In(R))=V(In(R)_{SR})\). Suppose that \(X,Y\in V(In(R)_{SR})\setminus M\); then by Lemma 3.3, \(XY\in E(In(R)_{SR})\) if and only if \(XY\notin E(In(R))\). So the vertex \(X\) is not adjacent to \(Y\) if and only if \(X\subset Y\) or \(Y\subset X\). Let \(S=\{0\times\cdots\times 0\times I_{m,1},\ldots,0\times\cdots\times 0\times I_{m,n_{m}},\ldots,0\times\cdots\times 0\times I_{m-1,1}\times R_{m},\ldots,0\times\cdots\times 0\times I_{m-1,n_{m-1}}\times R_{m},\ldots,I_{1,1}\times R_{2}\times\cdots\times R_{m},\ldots,I_{1,n_{1}}\times R_{2}\times\cdots\times R_{m}\}\). Then \(S\) is the largest subset of \(V(In(R)_{SR})\setminus M\) such that \(X\nsim Y\) for every \(X,Y\in S\), and \(|S|=\Sigma_{i=1}^{m}n_{i}\). Indeed, \(S\) is the largest independent subset of \(V(In(R)_{SR})\setminus M\) in \(In(R)_{SR}[V(In(R)_{SR})\setminus M]\). On the other hand, among the vertices contained in \(M\), only the vertices contained in \(S^{\prime}=\{0\times\cdots\times 0\times R_{m},0\times\cdots\times 0\times R_{m-1}\times R_{m},\ldots,0\times R_{2}\times\cdots\times R_{m}\}\) are not adjacent to vertices in \(S\) (we note that \(|S^{\prime}|=m-1\)). Let \(A=S\cup S^{\prime}\). Thus \(A\) is the largest independent subset of \(V(In(R)_{SR})\) and hence \(\beta(In(R)_{SR})=|A|=\Sigma_{i=1}^{m}n_{i}+m-1\). \(\Box\)

**Theorem 3.3**: _Suppose that \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(m\geq 2\) is a positive integer. Then \(sdim(In(R))=\Pi_{i=1}^{m}(n_{i}+2)-\Sigma_{i=1}^{m}n_{i}-m-1\)._

**Proof.** By Lemma 3.5, \(\beta(In(R)_{SR})=\Sigma_{i=1}^{m}n_{i}+m-1\). On the other hand, since \(|V(In(R)_{SR})|=\Pi_{i=1}^{m}(n_{i}+2)-2\), Gallai's theorem and Lemma 2.2 show that \(sdim(In(R))=|V(In(R)_{SR})|-\beta(In(R)_{SR})=\Pi_{i=1}^{m}(n_{i}+2)-\Sigma_{i=1}^{m}n_{i}-m-1\). \(\Box\)

Finally, we investigate \(sdim(In(R))\), where both fields and non-fields appear in the decomposition of \(R\).

**Lemma 3.6**: _Let \(R\cong S\times T\) such that \(S=\prod_{i=1}^{m}R_{i}\), \(m\geq 1\), and \(T=\prod_{j=1}^{n}\mathbb{F}_{j}\), \(n\geq 1\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(\mathbb{F}_{j}\) is a field for every \(1\leq j\leq n\). Then the following statements hold:_

1) _If \(m=1\), then \(V(In(R)_{SR})=V(In(R))\setminus\{I_{1}\times 0\times\cdots\times 0,I_{1}\times\mathbb{F}_{1}\times\cdots\times\mathbb{F}_{n}\}\); otherwise \(V(In(R)_{SR})=V(In(R))\)._

2) _Suppose that \(I,J\in M\subset V(In(R)_{SR})\); then \(IJ\in E(In(R)_{SR})\) if and only if \(I=J^{c}\) or \(IJ,IJ^{c}\notin E(In(R))\). In particular, for every \(V\in V(In(R)_{SR})\), if \(VX_{i}\in E(In(R)_{SR})\), then \(V=X_{i}^{c}\)._

3) _Suppose that \(I,J\in V(In(R)_{SR})\setminus M\); then \(IJ\in E(In(R)_{SR})\) if and only if \(IJ\notin E(In(R))\)._
\(4)\) _For every \(I\in V(In(R)_{SR})\setminus M\) and \(J\in M\), \(IJ\in E(In(R)_{SR})\) if and only if \(IJ,IJ^{c}\notin E(In(R))\)._

**Proof.** Let \(m=1\) and \(X=\{I_{1}\times 0\times\cdots\times 0,I_{1}\times\mathbb{F}_{1}\times\cdots\times\mathbb{F}_{n}\}\). It is not hard to check that for any \(U\in X\), there is no \(V\in V(In(R))\) such that \(U\) and \(V\) are mutually maximally distant. Thus \(V(In(R)_{SR})=V(In(R))\setminus X\). Now we show that for every \(V\in V(In(R)_{SR})\), if \(VX_{i}\in E(In(R)_{SR})\), then \(V=X_{i}^{c}\). Suppose to the contrary that \(V\neq X_{i}^{c}\). Then \(X_{i}\subset V\) or \(VX_{i}^{c}\in E(In(R))\). These two cases imply that \(VX_{i}\notin E(In(R)_{SR})\), a contradiction. To complete the proof, it is enough to apply a similar argument to that of Lemma 3.3. \(\Box\)

**Lemma 3.7**: _Let \(R\cong S\times T\) such that \(S=\prod_{i=1}^{m}R_{i}\), \(m\geq 1\), and \(T=\prod_{j=1}^{n}\mathbb{F}_{j}\), \(n\geq 1\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(\mathbb{F}_{j}\) is a field for every \(1\leq j\leq n\). Then \(In(R)_{SR}=H+\underbrace{K_{2}+\cdots+K_{2}}_{n\ \text{times}}\), where \(H\) is a connected graph._

**Proof.** Since for every \(I\in\{X_{m+1},\ldots,X_{m+n}\}\), \(I\) is only mutually maximally distant from \(I^{c}\) and vice versa, \(In(R)_{SR}\) contains \(n\) copies of \(K_{2}\). To complete the proof, it is enough to apply a similar argument to that of Lemma 3.4 and Lemma 2.5. \(\Box\)

**Lemma 3.8**: _Let \(R\cong S\times T\) such that \(S=\prod_{i=1}^{m}R_{i}\), \(m\geq 1\), and \(T=\prod_{j=1}^{n}\mathbb{F}_{j}\), \(n\geq 1\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(\mathbb{F}_{j}\) is a field for every \(1\leq j\leq n\). Then the following statements hold:_

\(1)\) _If \(m=1\), then \(\beta(In(R)_{SR})=2n+n_{1}-2\)._

\(2)\) _If \(m\geq 2\), then \(\beta(In(R)_{SR})=(\sum_{i=1}^{m}n_{i})+2n+m-1\)._

**Proof.** 1) By Lemma 3.7, \(In(R)_{SR}=H+\underbrace{K_{2}+\cdots+K_{2}}_{n\ \text{times}}\), so \(\beta(In(R)_{SR})=\beta(H)+n\). Also, by a similar argument to that of Lemma 3.5 and case (1) of Lemma 3.6, \(S=\{I_{1,1}\times 0\times\cdots\times 0,\ldots,I_{1,n_{1}}\times 0\times\cdots\times 0,I_{1,n_{1}}\times\mathbb{F}_{1}\times 0\times\cdots\times 0,\ldots,I_{1,n_{1}}\times\mathbb{F}_{1}\times\cdots\times\mathbb{F}_{n-1}\}\) is the largest independent subset of \(V(H)\) and \(|S|=n_{1}+n-2\). Hence \(\beta(In(R)_{SR})=|S|+n=n_{1}+2n-2\).

2) By Lemma 3.6, \(V(In(R)_{SR})=V(In(R))\) and by Lemma 3.7, \(In(R)_{SR}=H+\underbrace{K_{2}+\cdots+K_{2}}_{n\ \text{times}}\), so \(\beta(In(R)_{SR})=\beta(H)+n\).
Also, by a similar argument to that of Lemma 3.5 and Lemma 2.6, \(S=\{0\times\cdots\times 0\times I_{m,1}\times 0\times\cdots\times 0,\ldots,0\times\cdots\times 0\times I_{m,n_{m}}\times 0\times\cdots\times 0,\ 0\times\cdots\times 0\times R_{m}\times 0\times\cdots\times 0,\ 0\times\cdots\times 0\times I_{m-1,1}\times R_{m}\times 0\times\cdots\times 0,\ldots,0\times\cdots\times 0\times I_{m-1,n_{m-1}}\times R_{m}\times 0\times\cdots\times 0,\ldots,I_{1,1}\times R_{2}\times\cdots\times R_{m}\times 0\times\cdots\times 0,\ldots,I_{1,n_{1}}\times R_{2}\times\cdots\times R_{m}\times 0\times\cdots\times 0,\ I_{1,n_{1}}\times R_{2}\times\cdots\times R_{m}\times\mathbb{F}_{1}\times 0\times\cdots\times 0,\ldots,I_{1,n_{1}}\times R_{2}\times\cdots\times R_{m}\times\mathbb{F}_{1}\times\cdots\times\mathbb{F}_{n}\}\) is the largest independent subset of \(V(H)\) and \(|S|=(\sum_{i=1}^{m}n_{i})+n+m-1\). Hence \(\beta(In(R)_{SR})=|S|+n=(\sum_{i=1}^{m}n_{i})+2n+m-1\). \(\Box\)

We close this paper with the following result.

**Theorem 3.4**: _Let \(R\cong S\times T\) such that \(S=\prod_{i=1}^{m}R_{i}\), \(m\geq 1\), and \(T=\prod_{j=1}^{n}\mathbb{F}_{j}\), \(n\geq 1\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(\mathbb{F}_{j}\) is a field for every \(1\leq j\leq n\). Then:_

\(1)\) _If \(m=1\), then \(sdim(In(R))=(n_{1}+2)2^{n}-2n-n_{1}-2\)._

\(2)\) _If \(m\geq 2\), then \(sdim(In(R))=\prod_{i=1}^{m}(n_{i}+2)2^{n}-(\sum_{i=1}^{m}n_{i})-2n-m-1\)._

**Proof.** 1) By Lemma 3.8, \(\beta(In(R)_{SR})=2n+n_{1}-2\). On the other hand, since \(|V(In(R)_{SR})|=(n_{1}+2)2^{n}-4\), Gallai's theorem and Lemma 2.2 show that \(sdim(In(R))=|V(In(R)_{SR})|-\beta(In(R)_{SR})=(n_{1}+2)2^{n}-2n-n_{1}-2\).

2) By Lemma 3.8, \(\beta(In(R)_{SR})=(\sum_{i=1}^{m}n_{i})+2n+m-1\). Since \(|V(In(R)_{SR})|=\prod_{i=1}^{m}(n_{i}+2)2^{n}-2\), Gallai's theorem and Lemma 2.2 show that \(sdim(In(R))=|V(In(R)_{SR})|-\beta(In(R)_{SR})=\prod_{i=1}^{m}(n_{i}+2)2^{n}-(\sum_{i=1}^{m}n_{i})-2n-m-1\). \(\square\)
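As a quick sanity check of Theorem 3.4(1) (our own worked example, not taken from the paper), let \(R=\mathbb{Z}_{4}\times\mathbb{Z}_{2}\), so that \(m=n=1\) and \(n_{1}=|I(\mathbb{Z}_{4})|=1\). The non-trivial ideals of \(R\) are \(2\mathbb{Z}_{4}\times 0\), \(\mathbb{Z}_{4}\times 0\), \(0\times\mathbb{Z}_{2}\) and \(2\mathbb{Z}_{4}\times\mathbb{Z}_{2}\), and \(In(R)\) is the path \(\mathbb{Z}_{4}\times 0\sim 2\mathbb{Z}_{4}\times 0\sim 2\mathbb{Z}_{4}\times\mathbb{Z}_{2}\sim 0\times\mathbb{Z}_{2}\). By Lemma 3.6(1), \(V(In(R)_{SR})=\{\mathbb{Z}_{4}\times 0,\ 0\times\mathbb{Z}_{2}\}\), and these two vertices are mutually maximally distant, so \(In(R)_{SR}=K_{2}\). Hence \(\beta(In(R)_{SR})=1=2n+n_{1}-2\) and \(sdim(In(R))=(n_{1}+2)2^{n}-2n-n_{1}-2=6-2-1-2=1\), as the theorem predicts.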
2304.14274
When Do Graph Neural Networks Help with Node Classification? Investigating the Impact of Homophily Principle on Node Distinguishability
Homophily principle, i.e., nodes with the same labels are more likely to be connected, has been believed to be the main reason for the performance superiority of Graph Neural Networks (GNNs) over Neural Networks on node classification tasks. Recent research suggests that, even in the absence of homophily, the advantage of GNNs still exists as long as nodes from the same class share similar neighborhood patterns. However, this argument only considers intra-class Node Distinguishability (ND) but neglects inter-class ND, which provides incomplete understanding of homophily on GNNs. In this paper, we first demonstrate such deficiency with examples and argue that an ideal situation for ND is to have smaller intra-class ND than inter-class ND. To formulate this idea and study ND deeply, we propose Contextual Stochastic Block Model for Homophily (CSBM-H) and define two metrics, Probabilistic Bayes Error (PBE) and negative generalized Jeffreys divergence, to quantify ND. With the metrics, we visualize and analyze how graph filters, node degree distributions and class variances influence ND, and investigate the combined effect of intra- and inter-class ND. Besides, we discovered the mid-homophily pitfall, which occurs widely in graph datasets. Furthermore, we verified that, in real-world tasks, the superiority of GNNs is indeed closely related to both intra- and inter-class ND regardless of homophily levels. Grounded in this observation, we propose a new hypothesis-testing based performance metric beyond homophily, which is non-linear, feature-based and can provide a statistical threshold value for GNNs' superiority. Experiments indicate that it is significantly more effective than the existing homophily metrics on revealing the advantage and disadvantage of graph-aware models on both synthetic and benchmark real-world datasets.
Sitao Luan, Chenqing Hua, Minkai Xu, Qincheng Lu, Jiaqi Zhu, Xiao-Wen Chang, Jie Fu, Jure Leskovec, Doina Precup
2023-04-25T09:40:47Z
http://arxiv.org/abs/2304.14274v4
# When Do Graph Neural Networks Help with Node Classification: Investigating the Homophily Principle on Node Distinguishability

###### Abstract

Homophily principle, _i.e.,_ nodes with the same labels are more likely to be connected, has been believed to be the main reason for the performance superiority of Graph Neural Networks (GNNs) over node-based Neural Networks on Node Classification tasks. Recent research suggests that, even in the absence of homophily, the advantage of GNNs still exists as long as nodes from the same class share similar neighborhood patterns [34]. However, this argument only considers intra-class Node Distinguishability (ND) and neglects inter-class ND, which provides an incomplete understanding of homophily. In this paper, we first demonstrate the aforementioned insufficiency with examples and argue that an ideal situation for ND is to have smaller intra-class ND than inter-class ND. To formulate this idea, we propose Contextual Stochastic Block Model for Homophily (CSBM-H) and define two metrics, Probabilistic Bayes Error (PBE) and negative generalized Jeffreys divergence, to quantify ND, through which we can find how intra- and inter-class ND influence ND together. We visualize the results and give detailed analysis. Through experiments, we verified that the superiority of GNNs is indeed closely related to both intra- and inter-class ND regardless of homophily levels, based on which we propose a new performance metric beyond homophily, which is non-linear and feature-based. Experiments indicate that it is significantly more effective than the existing homophily metrics on revealing the advantage and disadvantage of GNNs on both synthetic and benchmark real-world datasets.

## 1 Introduction

Graph Neural Networks (GNNs) have gained popularity in recent years as a powerful tool for graph-based machine learning tasks. By combining graph signal processing and convolutional neural networks, various GNN architectures have been proposed [24; 17; 42; 32; 21], and have been shown to outperform traditional neural networks in tasks such as node classification (**NC**), graph classification, link prediction and graph generation. The success of GNNs is believed to be rooted in the homophily assumption [38], which states that connected nodes tend to have similar attributes [16], providing extra useful information to the aggregated features over the original node features. This relational inductive bias is thought to be a major contributor to the superior performance of GNNs over traditional neural networks in various tasks [4]. On the other hand, the lack of homophily, _i.e.,_ heterophily, is considered as the main cause of the inferiority of GNNs on heterophilic graphs, because nodes from different classes are connected and mixed, which can lead to indistinguishable node embeddings, making the classification task more difficult for GNNs [48; 47; 33]. Numerous models have been proposed to address the heterophily challenge lately [40; 48; 47; 33; 5; 28; 7; 46; 19; 30; 27; 43; 31]. Recently, both empirical and theoretical studies indicate that the relationship between homophily and GNN performance is more complicated than "homophily wins, heterophily loses" [34; 31]. For example, the authors in [34] stated that, as long as nodes within the same class share similar neighborhood patterns, their embeddings will be similar after aggregation. They provided experimental evidence and theoretical analysis, and concluded that homophily may not be necessary for GNNs to distinguish nodes.
The paper [31] studied homophily/heterophily from a post-aggregation node similarity perspective and found that heterophily is not always harmful, which is consistent with [34]. Besides, the authors proposed to use a high-pass filter to address some heterophily cases, which is adopted in [7; 5] as well. They also proposed aggregation homophily, a linear feature-independent performance metric that is verified to be better at revealing the performance advantages and disadvantages of GNNs than the existing homophily metrics [40; 48; 28]. Moreover, [6] investigated heterophily from a neighbor-identifiability perspective and stated that heterophily can be helpful for NC when the neighbor distributions of intra-class nodes are identifiable.

Although the current literature on the homophily principle provides profound insights, it is still deficient: 1. [34; 6] only consider intra-class node distinguishability (**ND**) but ignore inter-class ND; 2. [31] does not show when and how a high-pass filter can help with the heterophily problem; 3. there is a lack of a non-linear, feature-based performance metric which can leverage richer information to provide an **accurate threshold value** to indicate whether GNNs are really needed on a certain task or not. To address those issues, in this paper: 1. We show that, to comprehensively study the impact of homophily on ND, one needs to consider intra- and inter-class ND together, and an ideal case is to have smaller intra-class ND than inter-class ND; 2. To formulate this idea, we propose the Contextual Stochastic Block Model for Homophily (CSBM-H) as the graph generative model. It incorporates an explicit parameter to manage homophily, alongside class variance parameters to control intra-class ND, and node degree parameters, which are important [34; 46]; 3. To quantify the ND of CSBM-H, we propose the Probabilistic Bayes Error (**PBE**) and the Negative Generalized Jeffreys Divergence (\(D_{\text{NGJ}}\)), through which we can analytically study how intra- and inter-class ND impact ND together. We visualize the PBE and \(D_{\text{NGJ}}\) of original features, low-pass (**LP**) filtered features and high-pass (**HP**) filtered features at different homophily levels, and discuss how class variances and node degree influence ND in detail; 4. In practice, we verify that the performance superiority of GNNs is indeed related to whether intra-class ND is smaller than inter-class ND, regardless of homophily levels. Based on this, we propose the Classifier-based Performance Metric (CPM), a new non-linear feature-based metric that can provide a statistical threshold. Experiments show that CPM is significantly more effective than the existing homophily metrics at predicting the performance of GNNs versus NNs.

## 2 Preliminaries

We use **bold** font for vectors (_e.g._, \(\mathbf{v}\)) and define an undirected connected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the set of nodes with a total of \(N\) elements and \(\mathcal{E}\) is the set of edges without self-loops. \(A\) is the symmetric adjacency matrix with \(A_{i,j}=1\) if there is an edge between nodes \(i\) and \(j\), otherwise \(A_{i,j}=0\). We also define \(D\) as the diagonal degree matrix of the graph, with \(D_{i,i}=d_{i}=\sum_{j}A_{i,j}\). The neighborhood set of a node \(i\), denoted as \(\mathcal{N}_{i}\), is defined as \(\mathcal{N}_{i}=\{j:e_{ij}\in\mathcal{E}\}\). A graph signal is a vector in \(\mathbb{R}^{N}\), whose \(i\)-th entry is a feature of node \(i\).
Additionally, we use \(X\in\mathbb{R}^{N\times F}\) to denote the feature matrix, whose columns are graph signals and whose \(i\)-th row \(X_{i,:}=\mathbf{x}_{i}^{T}\) is the feature vector of node \(i\). The label encoding matrix \(Z\in\mathbb{R}^{N\times C}\), where \(C\) is the number of classes, has its \(i\)-th row \(Z_{i,:}\) as the one-hot encoding of the label of node \(i\). We denote \(z_{i}=\operatorname*{arg\,max}_{j}Z_{i,j}\in\{1,2,\ldots C\}\). The indicator function \(\mathbf{1}_{B}\) equals 1 when event \(B\) happens and 0 otherwise. For nodes \(i,j\in\mathcal{V}\), if \(z_{i}=z_{j}\), then they are considered _intra-class nodes_; if \(z_{i}\neq z_{j}\), then they are considered _inter-class nodes_. Similarly, an edge \(e_{i,j}\in\mathcal{E}\) is considered an _intra-class edge_ if \(z_{i}=z_{j}\), and an _inter-class edge_ if \(z_{i}\neq z_{j}\).

### Graph-aware Models and Graph-agnostic Models

A network that includes the feature aggregation step according to graph structure is called a graph-aware (**G-aware**) model, _e.g.,_ GCN [24], SGC-1 [45]; a network that does not use graph structure is called a graph-agnostic (**G-agnostic**) model, such as the Multi-Layer Perceptron with 2 layers (MLP-2) and MLP-1. A G-aware model is often coupled with a G-agnostic model because when we remove the aggregation step in a G-aware model, it becomes exactly the same as its coupled G-agnostic model, _e.g.,_ GCN is coupled with MLP-2 and SGC-1 is coupled with MLP-1, as shown below,

\[\text{GCN: }Y=\text{softmax}(\hat{A}_{\text{sym}}\text{ ReLU}(\hat{A}_{\text{sym}}XW_{0})\ W_{1}),\ \ \text{MLP-2: }Y=\text{softmax}(\text{ReLU}(XW_{0})\ W_{1}), \tag{1}\]
\[\text{SGC-1: }Y=\text{softmax}(\hat{A}_{\text{sym}}XW_{0}),\ \ \text{MLP-1: }Y=\text{softmax}(XW_{0}),\]

where \(\hat{A}_{\text{sym}}=\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}\), \(\tilde{A}\equiv A+I\) and \(\tilde{D}\equiv D+I\); \(W_{0}\in\mathbb{R}^{F_{0}\times F_{1}}\) and \(W_{1}\in\mathbb{R}^{F_{1}\times O}\) are learnable parameter matrices. For simplicity, we denote \(y_{i}=\operatorname*{arg\,max}_{j}Y_{i,j}\in\{1,2,\ldots C\}\).

The random walk renormalized matrix \(\hat{A}_{\text{rw}}=\tilde{D}^{-1}\tilde{A}\) can also be applied to GCN, which is essentially a mean aggregator commonly used in some spatial-based GNNs [17]. To bridge spectral and spatial methods, we use \(\hat{A}_{\text{rw}}\) in the theoretical analysis, but **self-loops are not added to the adjacency matrix** to maintain consistency with previous literature [34; 31]. To address the heterophily challenge, a high-pass (HP) filter [13], such as \(I-\hat{A}_{\text{rw}}\), is often used to replace the low-pass (LP) filter [35] \(\hat{A}_{\text{rw}}\) in GCN [5; 7; 31]. In this paper, we use \(\hat{A}_{\text{rw}}\) and \(I-\hat{A}_{\text{rw}}\) as the LP and HP operators, respectively. The LP and HP filtered feature matrices are represented as \(H=\hat{A}_{\text{rw}}X\) and \(H^{\text{HP}}=(I-\hat{A}_{\text{rw}})X\). For simplicity, we denote \(\mathbf{h}_{i}=(H_{i,:})^{T},\mathbf{h}_{i}^{\text{HP}}=(H_{i,:}^{\text{HP}})^{T}\). **To measure if G-aware models can outperform their coupled G-agnostic models without training**, many homophily metrics have been proposed, and we will introduce the most commonly used ones in the following subsection.
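To make the two operators concrete, the following is a minimal NumPy sketch of the LP and HP filtering steps. The toy adjacency matrix and random features are illustrative only; no self-loops are added, consistent with the convention above.

```python
import numpy as np

def lp_hp_filter(A, X):
    """Apply the LP operator A_rw = D^{-1} A and the HP operator I - A_rw
    to a feature matrix X (no self-loops added)."""
    d = A.sum(axis=1)            # node degrees
    A_rw = A / d[:, None]        # random-walk (mean) aggregation matrix
    H = A_rw @ X                 # low-pass filtered features
    H_hp = X - H                 # high-pass filtered features: (I - A_rw) X
    return H, H_hp

# Toy 4-node path graph with 2-dimensional features (illustrative values).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 2))
H, H_hp = lp_hp_filter(A, X)
```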
### Homophily Metrics

The homophily metric is a way to describe the relation between node labels and graph structure. We introduce five commonly used homophily metrics: edge homophily [1; 48], node homophily [40], class homophily [28], generalized edge homophily [23] and aggregation homophily [31] as follows:

\[\text{H}_{\text{edge}}(\mathcal{G})=\frac{\big{|}\{e_{uv}\mid e_{uv}\in\mathcal{E},\,Z_{u,:}=Z_{v,:}\}\big{|}}{|\mathcal{E}|},\quad\text{H}_{\text{node}}(\mathcal{G})=\frac{1}{|\mathcal{V}|}\sum_{v\in\mathcal{V}}\text{H}_{\text{node}}^{v}=\frac{1}{|\mathcal{V}|}\sum_{v\in\mathcal{V}}\frac{\big{|}\{u\mid u\in\mathcal{N}_{v},\,Z_{u,:}=Z_{v,:}\}\big{|}}{d_{v}},\]
\[\text{H}_{\text{class}}(\mathcal{G})=\frac{1}{C-1}\sum_{k=1}^{C}\left[h_{k}-\frac{\big{|}\{v\mid Z_{v,k}=1\}\big{|}}{N}\right]_{+},\ \text{where}\ h_{k}=\frac{\sum_{v\in\mathcal{V}}\big{|}\{u\mid Z_{v,k}=1,\,u\in\mathcal{N}_{v},\,Z_{u,:}=Z_{v,:}\}\big{|}}{\sum_{v\in\{v\mid Z_{v,k}=1\}}d_{v}},\]
\[\text{H}_{\text{GE}}(\mathcal{G})=\frac{\sum_{(i,j)\in\mathcal{E}}\cos(\mathbf{x}_{i},\mathbf{x}_{j})}{|\mathcal{E}|},\quad\text{H}_{\text{agg}}(\mathcal{G})=\frac{1}{|\mathcal{V}|}\times\Big{|}\big{\{}v\mid\operatorname{Mean}_{u}\big{(}\{S(\hat{A},Z)_{v,u}\mid Z_{u,:}=Z_{v,:}\}\big{)}\geq\operatorname{Mean}_{u}\big{(}\{S(\hat{A},Z)_{v,u}\mid Z_{u,:}\neq Z_{v,:}\}\big{)}\big{\}}\Big{|}, \tag{2}\]

where \(\text{H}_{\text{node}}^{v}\) is the local homophily value for node \(v\); \([a]_{+}=\max(0,a)\); \(h_{k}\) is the class-wise homophily metric [28]; \(\operatorname{Mean}_{u}\big{(}\{\cdot\}\big{)}\) takes the average over \(u\) of a given multiset of values or variables, and \(S(\hat{A},Z)=\hat{A}Z(\hat{A}Z)^{T}\) is the post-aggregation node similarity matrix. These metrics all fall within the range \([0,1]\); a value closer to \(1\) indicates strong homophily and implies that G-aware models are more likely to outperform their coupled G-agnostic models, and vice versa. However, the current homophily metrics are all linear, feature-independent metrics, which fail to give an accurate indication of the superiority of G-aware models and cannot provide a threshold value [31] for that superiority.
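As an illustration, here is a sketch of the two simplest metrics above, where `z` is the integer label vector; the class-wise, generalized-edge and aggregation variants can be computed in the same style from \(A\), \(X\) and \(Z\).

```python
import numpy as np

def edge_homophily(A, z):
    """H_edge: fraction of edges whose two endpoints share a label."""
    src, dst = np.nonzero(np.triu(A))          # count each undirected edge once
    return float(np.mean(z[src] == z[dst]))

def node_homophily(A, z):
    """H_node: average over nodes of the same-label neighbor fraction."""
    props = [np.mean(z[np.nonzero(A[v])[0]] == z[v]) for v in range(len(z))]
    return float(np.mean(props))
```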
## 3 Analysis of Homophily on Node Distinguishability (ND)

### Motivation

**The Problem in Current Literature** Recent research has shown that heterophily does not always negatively impact the embeddings of intra-class nodes, as long as their neighborhood patterns "corrupt in the same way" [34; 6]. For example, in Figure 1, nodes {1,2} are from class blue and both have the same heterophilic neighborhood patterns. As a result, their aggregated features will still be similar and they can be classified into the same class. However, this is only partially true for ND if we neglect inter-class ND, _e.g.,_ node 3 in Figure 1 is from class green and also has the same neighborhood pattern as nodes {1,2}, which means the inter-class ND will be lost after aggregation. This highlights the necessity of carefully considering both intra- and inter-class ND when evaluating the impact of homophily on the performance of GNNs; an ideal case for NC would be nodes {1,2,4}, where we have smaller intra-class "distance" than inter-class "distance". We will formulate the above idea in this section and verify whether it really relates to the performance of GNNs in Section 4.

Figure 1: Example of intra- and inter-class node distinguishability.

### CSBM-H and Optimal Bayes Classifier

In order to have more control over the assumptions made about the node embeddings, we consider the Contextual Stochastic Block Model (CSBM) [11]. It is a generative model that is commonly used to create graphs and node features, and it has been widely adopted to study the behavior of GNNs [41, 3, 44]. To investigate the impact of homophily on ND, the authors in [34] simplify CSBM to the two-normal setting, where the node features \(X\) are assumed to be sampled from two normal distributions and intra- and inter-class edges are generated according to two separate parameters. This simplification does not lose much information about CSBM, but 1. it does not include an explicit homophily parameter to study homophily directly and intuitively; 2. it does not include class variance parameters to study intra-class ND; 3. the authors do not rigorously quantify ND. In this section, we introduce the Contextual Stochastic Block Model for Homophily/Heterophily (CSBM-H), which is a variation of CSBM that incorporates an explicit homophily parameter \(h\) for the two-normal setting and also has class variance parameters \(\sigma_{0}^{2},\sigma_{1}^{2}\) to describe the intra-class ND. We then derive the optimal Bayes classifier (\(\text{CL}_{\text{Bayes}}\)) and the negative generalized Jeffreys divergence for CSBM-H, based on which we can quantify ND for CSBM-H.

**CSBM-H(\(\mathbf{\mu}_{0},\mathbf{\mu}_{1},\sigma_{0}^{2}I,\sigma_{1}^{2}I,d_{0},d_{1},h\))** The generated graph consists of two disjoint sets of nodes, \(i\in\mathcal{C}_{0}\) and \(j\in\mathcal{C}_{1}\), corresponding to the two classes. The features of each node are generated independently, with \(\mathbf{x}_{i}\) generated from \(N(\mathbf{\mu}_{0},\sigma_{0}^{2}I)\) and \(\mathbf{x}_{j}\) generated from \(N(\mathbf{\mu}_{1},\sigma_{1}^{2}I)\), where \(\mathbf{\mu}_{0},\mathbf{\mu}_{1}\in\mathbb{R}^{F_{h}}\) and \(F_{h}\) is the dimension of the embeddings. The degrees of nodes in \(\mathcal{C}_{0}\) and \(\mathcal{C}_{1}\) are \(d_{0},d_{1}\in\mathbb{N}\), respectively. For \(i\in\mathcal{C}_{0}\), its neighbors are generated by independently sampling \(h\cdot d_{0}\) intra-class nodes and \((1-h)\cdot d_{0}\) inter-class nodes. The neighbors of \(j\in\mathcal{C}_{1}\) are generated in the same way.
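Before turning to the statistics of the filtered features, a minimal sketch of this generative process may help; rounding \(h\cdot d_{c}\) to an integer neighbor count is our simplifying assumption.

```python
import numpy as np

def sample_csbm_h(mu0, mu1, sigma0, sigma1, d0, d1, h, n, rng=None):
    """Draw n nodes per class from CSBM-H; return raw features X,
    mean-aggregated (LP) features Hagg, and labels z."""
    rng = rng or np.random.default_rng(0)
    mu = [np.asarray(mu0, float), np.asarray(mu1, float)]
    sigma, deg = [sigma0, sigma1], [d0, d1]
    X, Hagg, z = [], [], []
    for c in (0, 1):
        n_intra = int(round(h * deg[c]))        # h*d_c intra-class neighbors
        nbr_labels = [c] * n_intra + [1 - c] * (deg[c] - n_intra)
        for _ in range(n):
            x = rng.normal(mu[c], sigma[c])     # feature ~ N(mu_c, sigma_c^2 I)
            nbrs = np.array([rng.normal(mu[l], sigma[l]) for l in nbr_labels])
            X.append(x); Hagg.append(nbrs.mean(axis=0)); z.append(c)
    return np.array(X), np.array(Hagg), np.array(z)
```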
As a result, the FP, LP and HP filtered features are generated as follows,

\[\begin{split}& i\in\mathcal{C}_{0}:\mathbf{x}_{i}\sim N(\mathbf{\mu}_{0},\sigma_{0}^{2}I);\ \mathbf{h}_{i}\sim N(\tilde{\mathbf{\mu}}_{0},\tilde{\sigma}_{0}^{2}I),\ \mathbf{h}_{i}^{\text{HP}}\sim N\left(\tilde{\mathbf{\mu}}_{0}^{\text{HP}},(\tilde{\sigma}_{0}^{\text{HP}})^{2}I\right),\\ & j\in\mathcal{C}_{1}:\mathbf{x}_{j}\sim N(\mathbf{\mu}_{1},\sigma_{1}^{2}I);\ \mathbf{h}_{j}\sim N(\tilde{\mathbf{\mu}}_{1},\tilde{\sigma}_{1}^{2}I),\ \mathbf{h}_{j}^{\text{HP}}\sim N\left(\tilde{\mathbf{\mu}}_{1}^{\text{HP}},(\tilde{\sigma}_{1}^{\text{HP}})^{2}I\right),\end{split} \tag{3}\]

where \(\tilde{\mathbf{\mu}}_{0}=h(\mathbf{\mu}_{0}-\mathbf{\mu}_{1})+\mathbf{\mu}_{1}\), \(\tilde{\mathbf{\mu}}_{1}=h(\mathbf{\mu}_{1}-\mathbf{\mu}_{0})+\mathbf{\mu}_{0}\), \(\tilde{\mathbf{\mu}}_{0}^{\text{HP}}=(1-h)(\mathbf{\mu}_{0}-\mathbf{\mu}_{1})\), \(\tilde{\mathbf{\mu}}_{1}^{\text{HP}}=(1-h)(\mathbf{\mu}_{1}-\mathbf{\mu}_{0})\), \(\tilde{\sigma}_{0}^{2}=\frac{h(\sigma_{0}^{2}-\sigma_{1}^{2})+\sigma_{1}^{2}}{d_{0}}\), \(\tilde{\sigma}_{1}^{2}=\frac{h(\sigma_{1}^{2}-\sigma_{0}^{2})+\sigma_{0}^{2}}{d_{1}}\), \((\tilde{\sigma}_{0}^{\text{HP}})^{2}=\sigma_{0}^{2}+\frac{h(\sigma_{0}^{2}-\sigma_{1}^{2})+\sigma_{1}^{2}}{d_{0}}\), \((\tilde{\sigma}_{1}^{\text{HP}})^{2}=\sigma_{1}^{2}+\frac{h(\sigma_{1}^{2}-\sigma_{0}^{2})+\sigma_{0}^{2}}{d_{1}}\). If \(\sigma_{0}^{2}<\sigma_{1}^{2}\), we refer to \(\mathcal{C}_{0}\) as the low-variation class and \(\mathcal{C}_{1}\) as the high-variation class. The variance of each class can reflect the intra-class ND. We abuse the notation \(\mathbf{x}_{i}\in\mathcal{C}_{0}\) for \(i\in\mathcal{C}_{0}\) and \(\mathbf{x}_{j}\in\mathcal{C}_{1}\) for \(j\in\mathcal{C}_{1}\).

To quantify the ND of CSBM-H, we first compute the optimal Bayes classifier in the following theorem. The theorem is about \(\mathbf{x}\), but the results are applicable to \(\mathbf{h}\) and \(\mathbf{h}^{\text{HP}}\) when the parameters are replaced according to Equation 3.

**Theorem 1**.: Suppose \(\sigma_{0}^{2}\neq\sigma_{1}^{2}\) and \(\sigma_{0}^{2},\sigma_{1}^{2}>0\), and the prior distribution for \(\mathbf{x}_{i}\) is \(\mathbb{P}(\mathbf{x}_{i}\in\mathcal{C}_{0})=\mathbb{P}(\mathbf{x}_{i}\in\mathcal{C}_{1})=1/2\). Then the optimal Bayes Classifier (\(\text{CL}_{\text{Bayes}}\)) for CSBM-H (\(\mathbf{\mu}_{0},\mathbf{\mu}_{1},\sigma_{0}^{2}I,\sigma_{1}^{2}I,d_{0},d_{1},h\)) is 1

Footnote 1: The Bayes classifier for multiple categories (\(>2\)) can be computed by stacking multiple expectation terms using similar methods as in [12, 14]. We do not discuss the more complicated settings in this paper.

\[\text{CL}_{\text{Bayes}}(\mathbf{x}_{i})=\begin{cases}1,\ \eta(\mathbf{x}_{i})\geq 0.5\\ 0,\ \eta(\mathbf{x}_{i})<0.5\end{cases},\ \ \text{and}\ \ \eta(\mathbf{x}_{i})=\mathbb{P}(z_{i}=1|\mathbf{x}_{i})=\frac{1}{1+\exp\left(Q(\mathbf{x}_{i})\right)},\]

where \(Q(\mathbf{x}_{i})=a\mathbf{x}_{i}^{T}\mathbf{x}_{i}+\mathbf{b}^{T}\mathbf{x}_{i}+c\), \(a=\frac{1}{2}\left(\frac{1}{\sigma_{1}^{2}}-\frac{1}{\sigma_{0}^{2}}\right)\), \(\mathbf{b}=\frac{\mathbf{\mu}_{0}}{\sigma_{0}^{2}}-\frac{\mathbf{\mu}_{1}}{\sigma_{1}^{2}}\), \(c=\frac{\mathbf{\mu}_{1}^{T}\mathbf{\mu}_{1}}{2\sigma_{1}^{2}}-\frac{\mathbf{\mu}_{0}^{T}\mathbf{\mu}_{0}}{2\sigma_{0}^{2}}+\ln\left(\frac{\sigma_{1}^{F_{h}}}{\sigma_{0}^{F_{h}}}\right)\).

Proof.: See Appendix A.
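A short sketch translating Equation 3 and Theorem 1 into code; the function and variable names are ours, and `s0`, `s1` denote the variances \(\sigma_0^2,\sigma_1^2\).

```python
import numpy as np

def filtered_params(mu0, mu1, s0, s1, d0, d1, h):
    """Means and variances of the LP and HP filtered features (Equation 3)."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    v0 = (h * (s0 - s1) + s1) / d0
    v1 = (h * (s1 - s0) + s0) / d1
    lp = (h * (mu0 - mu1) + mu1, h * (mu1 - mu0) + mu0, v0, v1)
    hp = ((1 - h) * (mu0 - mu1), (1 - h) * (mu1 - mu0), s0 + v0, s1 + v1)
    return lp, hp

def bayes_Q(x, mu0, mu1, s0, s1):
    """Discriminant Q(x) of Theorem 1. CL_Bayes predicts class 1 iff
    Q(x) <= 0, since eta >= 0.5 is equivalent to exp(Q) <= 1."""
    F = len(mu0)
    a = 0.5 * (1.0 / s1 - 1.0 / s0)
    b = mu0 / s0 - mu1 / s1
    c = (mu1 @ mu1) / (2 * s1) - (mu0 @ mu0) / (2 * s0) \
        + 0.5 * F * np.log(s1 / s0)       # ln(sigma_1^F / sigma_0^F)
    return a * (x @ x) + b @ x + c
```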
**Advantages of \(\text{CL}_{\text{Bayes}}\) Over the Fixed Linear Classifier in [34]** The classifier proposed in [34] is fixed and depends only on the two centers \(\mathbf{\mu}_{0},\mathbf{\mu}_{1}\). The data centers will shift as \(h\) changes. However, the fixed classifier cannot capture such distribution movement and is thus not qualified to measure ND for different \(h\). Besides, we cannot investigate how the variances \(\sigma_{0}^{2}\) and \(\sigma_{1}^{2}\) and node degrees \(d_{0}\) and \(d_{1}\) affect ND with the fixed classifier in [34]. In the following subsection, we define two methods to quantify the ND of CSBM-H: one is based on \(\text{CL}_{\text{Bayes}}\), which is a precise measure but hard to interpret; the other is based on KL-divergence, which can give us a more intuitive understanding of how intra- and inter-class ND impact ND at different homophily levels. These two measurements can also be used together to analyze ND.

### Measure Node Distinguishability of CSBM-H

The Bayes error rate (BE) is the probability of a node being mis-classified when the true class probabilities given the predictors are known [18]. It can be used to measure the distinguishability of node embeddings, and the BE for \(\text{CL}_{\text{Bayes}}\) is defined as follows,

**Definition 1** (Bayes Error Rate).: _The Bayes error rate [18] for \(\text{CL}_{\text{Bayes}}\) is defined as_

\[\text{BE}=\mathbb{E}_{\mathbf{x}}\left[\mathbb{P}\left(\text{CL}_{\text{Bayes}}(\mathbf{x})\neq z\,\middle|\,\mathbf{x}\right)\right]=\mathbb{E}_{\mathbf{x}}\left[1-\mathbb{P}\left(z=\text{CL}_{\text{Bayes}}(\mathbf{x})\,\middle|\,\mathbf{x}\right)\right].\]

Specifically, the BE for CSBM-H can be written as

\[\text{BE}=\mathbb{P}\left(\mathbf{x}\in\mathcal{C}_{0}\right)(1-\mathbb{P}(\text{CL}_{\text{Bayes}}(\mathbf{x})=0|\mathbf{x}\in\mathcal{C}_{0}))+\mathbb{P}(\mathbf{x}\in\mathcal{C}_{1})\left(1-\mathbb{P}(\text{CL}_{\text{Bayes}}(\mathbf{x})=1|\mathbf{x}\in\mathcal{C}_{1})\right). \tag{4}\]

In order to estimate the above value, we define the Probabilistic Bayes Error (PBE).

**Probabilistic Bayes Error (PBE)** The random variable in each dimension of \(\mathbf{x}_{i}\) is independently normally distributed. As a result, \(Q(\mathbf{x}_{i})\) defined in Theorem 1 follows a generalized \(\chi^{2}\) distribution [9, 10] (see the calculation in Appendix D). Specifically,

\[\text{For }\mathbf{x}_{i}\in\mathcal{C}_{0},\ Q(\mathbf{x}_{i})\sim\tilde{\chi}^{2}(w_{0},F_{h},\lambda_{0})+\xi;\ \mathbf{x}_{j}\in\mathcal{C}_{1},\ Q(\mathbf{x}_{j})\sim\tilde{\chi}^{2}(w_{1},F_{h},\lambda_{1})+\xi,\]

where \(w_{0}=a\sigma_{0}^{2},w_{1}=a\sigma_{1}^{2}\), the degree of freedom is \(F_{h}\), \(\lambda_{0}=(\frac{\mathbf{\mu}_{0}}{\sigma_{0}}+\frac{\mathbf{b}}{2a\sigma_{0}})^{T}(\frac{\mathbf{\mu}_{0}}{\sigma_{0}}+\frac{\mathbf{b}}{2a\sigma_{0}}),\ \lambda_{1}=(\frac{\mathbf{\mu}_{1}}{\sigma_{1}}+\frac{\mathbf{b}}{2a\sigma_{1}})^{T}(\frac{\mathbf{\mu}_{1}}{\sigma_{1}}+\frac{\mathbf{b}}{2a\sigma_{1}})\) and \(\xi=c-\frac{\mathbf{b}^{T}\mathbf{b}}{4a}\).
Then, by using the Cumulative Distribution Function (CDF) of \(\tilde{\chi}^{2}\), we can calculate the predicted probabilities directly as,

\[\mathbb{P}(\text{CL}_{\text{Bayes}}(\mathbf{x})=0|\mathbf{x}\in\mathcal{C}_{0})=1-\text{CDF}_{\tilde{\chi}^{2}(w_{0},F_{h},\lambda_{0})}(-\xi),\ \mathbb{P}(\text{CL}_{\text{Bayes}}(\mathbf{x})=1|\mathbf{x}\in\mathcal{C}_{1})=\text{CDF}_{\tilde{\chi}^{2}(w_{1},F_{h},\lambda_{1})}(-\xi).\]

Suppose we have a balanced prior distribution \(\mathbb{P}(\mathbf{x}\in\mathcal{C}_{0})=\mathbb{P}(\mathbf{x}\in\mathcal{C}_{1})=1/2\). Then, the PBE is computed as,

\[\frac{\text{CDF}_{\tilde{\chi}^{2}(w_{0},F_{h},\lambda_{0})}(-\xi)+\left(1-\text{CDF}_{\tilde{\chi}^{2}(w_{1},F_{h},\lambda_{1})}(-\xi)\right)}{2}.\]

To investigate the impact of homophily on the ND of LP filtered and HP filtered embeddings, we just need to replace \(\left(\mathbf{\mu}_{0},\sigma_{0}^{2},\mathbf{\mu}_{1},\sigma_{1}^{2}\right)\) with \(\left(\tilde{\mathbf{\mu}}_{0},\tilde{\sigma}_{0}^{2},\tilde{\mathbf{\mu}}_{1},\tilde{\sigma}_{1}^{2}\right)\) and \(\left(\tilde{\mathbf{\mu}}_{0}^{\text{HP}},(\tilde{\sigma}_{0}^{\text{HP}})^{2},\tilde{\mathbf{\mu}}_{1}^{\text{HP}},(\tilde{\sigma}_{1}^{\text{HP}})^{2}\right)\) according to Equation 3. PBE can be numerically calculated and visualized to show the relation between \(h\) and ND precisely. However, we do not have an analytic expression for PBE, which makes it less explainable and intuitive. To address this issue, we define another metric for ND in the following paragraphs.

**Generalized Jeffreys Divergence** The KL-divergence is a statistical measure of how a probability distribution \(P\) is different from another distribution \(Q\) [8]. It offers us a tool to define an explainable ND measure, the generalized Jeffreys divergence, as follows.

**Definition 2** (Generalized Jeffreys Divergence).: _For a random variable \(\mathbf{x}\) which has either the distribution \(P(\mathbf{x})\) or the distribution \(Q(\mathbf{x})\), the generalized Jeffreys divergence 2 is defined as_

Footnote 2: Jeffreys divergence [22] is defined as \(D_{\text{KL}}(P||Q)+D_{\text{KL}}(Q||P)\)

\[D_{\text{GJ}}(P,Q)=\mathbb{P}(\mathbf{x}\sim P)\mathbb{E}_{\mathbf{x}\sim P}\left[\ln\frac{P(\mathbf{x})}{Q(\mathbf{x})}\right]+\mathbb{P}(\mathbf{x}\sim Q)\mathbb{E}_{\mathbf{x}\sim Q}\left[\ln\frac{Q(\mathbf{x})}{P(\mathbf{x})}\right]\]

With \(\mathbb{P}(\mathbf{x}\sim P)=\mathbb{P}(\mathbf{x}\sim Q)=1/2\), the negative generalized Jeffreys divergence for the two-normal setting in CSBM-H can be computed by (see Appendix C for the calculation)

\[D_{\text{NGJ}}(\text{CSBM-H})=\underbrace{-d_{X}^{2}\left(\frac{1}{4\sigma_{1}^{2}}+\frac{1}{4\sigma_{0}^{2}}\right)}_{\text{Expected Negative Normalized Distance}}\ \underbrace{-\frac{F_{h}}{4}\left(\rho^{2}+\frac{1}{\rho^{2}}-2\right)}_{\text{Negative Variance Ratio}} \tag{5}\]

where \(d_{X}^{2}=(\mathbf{\mu}_{0}-\mathbf{\mu}_{1})^{T}(\mathbf{\mu}_{0}-\mathbf{\mu}_{1})\) is the squared Euclidean distance between the centers and \(\rho=\frac{\sigma_{0}}{\sigma_{1}}\); since we assume \(\sigma_{0}^{2}<\sigma_{1}^{2}\), we have \(0<\rho<1\). For \(\mathbf{h}\) and \(\mathbf{h}^{\text{HP}}\), we have \(d_{H}^{2}=(2h-1)^{2}d_{X}^{2},d_{\text{HP}}^{2}=4(1-h)^{2}d_{X}^{2}\). The smaller \(D_{\text{NGJ}}\) a CSBM-H has, the more distinguishable the node embeddings are.
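Both quantities are easy to evaluate numerically. The sketch below computes \(D_{\text{NGJ}}\) directly from Equation 5 and estimates PBE by Monte Carlo simulation under the balanced prior (sidestepping the generalized \(\chi^2\) CDF, which is not in standard SciPy); `s0`, `s1` are the variances \(\sigma_0^2,\sigma_1^2\), and the sample size is illustrative.

```python
import numpy as np

def d_ngj(mu0, mu1, s0, s1, F):
    """Negative generalized Jeffreys divergence (Equation 5)."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    dsq = (mu0 - mu1) @ (mu0 - mu1)          # d_X^2
    rho2 = s0 / s1                            # rho^2 = sigma_0^2 / sigma_1^2
    return -dsq * (1 / (4 * s1) + 1 / (4 * s0)) - F / 4 * (rho2 + 1 / rho2 - 2)

def pbe_monte_carlo(mu0, mu1, s0, s1, n=200_000, seed=0):
    """Monte-Carlo estimate of PBE: draw from both classes with a balanced
    prior and apply CL_Bayes (predict class 1 iff Q(x) <= 0)."""
    rng = np.random.default_rng(seed)
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    F = len(mu0)
    a = 0.5 * (1 / s1 - 1 / s0)
    b = mu0 / s0 - mu1 / s1
    c = (mu1 @ mu1) / (2 * s1) - (mu0 @ mu0) / (2 * s0) + 0.5 * F * np.log(s1 / s0)
    Q = lambda X: a * np.sum(X * X, axis=1) + X @ b + c
    x0 = rng.normal(mu0, np.sqrt(s0), size=(n, F))   # class C0 samples
    x1 = rng.normal(mu1, np.sqrt(s1), size=(n, F))   # class C1 samples
    err0 = np.mean(Q(x0) <= 0)    # C0 samples predicted as class 1
    err1 = np.mean(Q(x1) > 0)     # C1 samples predicted as class 0
    return 0.5 * (err0 + err1)
```

Feeding in the LP/HP parameters from Equation 3 for a grid of \(h\) values reproduces the homophily dependence discussed next.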
\(D_{\text{NGJ}}\) relies on two terms, the Expected Negative Normalized Distance (ENND) and the Negative Variance Ratio (NVR). ENND depends on how large the inter-class ND \(d_{X}^{2}\) is compared with the normalization term \(\frac{1}{4\sigma_{1}^{2}}+\frac{1}{4\sigma_{0}^{2}}\), which is determined by the intra-class ND (variances \(\sigma_{0},\sigma_{1}\)); NVR depends on how different the two intra-class NDs are, _i.e.,_ when the intra-class ND of the high-variation class is significantly larger than that of the low-variation class (\(\rho\) is close to 0), NVR is small, which means the nodes are more distinguishable, and vice versa.

Now, we can investigate the impact of homophily on ND through the lens of PBE and \(D_{\text{NGJ}}\). Specifically, in the standard CSBM-H setting as shown in Figure 2 with \(\mathbf{\mu}_{0}=[-1,0],\mathbf{\mu}_{1}=[0,1],\sigma_{0}^{2}=1,\sigma_{1}^{2}=2,d_{0}=5,d_{1}=5\), the PBE and \(D_{\text{NGJ}}\) curves for the LP filtered feature \(\mathbf{h}\) are bell-shaped 3, indicating that when the homophily value is extremely low or high, the aggregated node embeddings become more distinguishable than at medium levels of homophily. The PBE and \(D_{\text{NGJ}}\) curves for \(\mathbf{h}^{\text{HP}}\) are monotonically increasing, which means that the high-pass filter works better in heterophily areas than in homophily areas. Moreover, it is observed that \(\mathbf{x}\), \(\mathbf{h}\), and \(\mathbf{h}^{\text{HP}}\) get the lowest PBE and \(D_{\text{NGJ}}\) in different homophily intervals, which we refer to as the "FP zone _(black)_", "LP zone _(green)_", and "HP zone _(red)_". This indicates that the LP filter works better at very low and very high homophily intervals (the two ends), the HP filter works better in the low to medium homophily interval 4, and the original (_i.e.,_ full-pass or FP filtered) features work better in the medium to high homophily area.

Footnote 3: This is consistent with the empirical results found in [31] that the relation between GNN performance and homophily value is a U-shaped curve.

Footnote 4: This verifies the conjecture made in [31] saying that high-pass filter cannot address all kinds of heterophily and only works well for certain heterophily cases.

Researchers have always been interested in exploring how node degree relates to the effect of homophily [34, 46]. In the upcoming subsection, besides node degree, we will also take a deeper look at the impact of class variances via the homophily-ND curves and the FP, LP and HP zones.

### Ablation Study on CSBM-H

**Increase the Variance of High-variation Class (\(\sigma_{0}^{2}=1,\sigma_{1}^{2}=5\))** From Figure 3, it is observed that as the variance in \(\mathcal{C}_{1}\) increases and the variance between \(\mathcal{C}_{0}\) and \(\mathcal{C}_{1}\) becomes more imbalanced, the PBE and \(D_{\text{NGJ}}\) of the three curves all go up, which means the node embeddings become less distinguishable under the HP, LP and FP filters. The significant shrinkage of the HP zones and the expansion of the FP zone indicate that the original features are more robust to imbalanced variances, especially in the low heterophily area, which can be reflected by the NVR in Figure 3(d).

**Increase the Variance of Low-variation Class (\(\sigma_{0}^{2}=1.9,\sigma_{1}^{2}=2\))** As shown in Figure 9 in Appendix F, when the variance in \(\mathcal{C}_{0}\) increases and the variance between \(\mathcal{C}_{0}\) and \(\mathcal{C}_{1}\) becomes more balanced, the PBE and \(D_{\text{NGJ}}\) curves go up, which means the node embeddings become less distinguishable.
The LP, HP and FP zones stay almost the same because the magnitude of NVR becomes so small that it has almost no effect on ND, as shown in Figure 9(d). Interestingly, we find that the change of variances causes little difference in the 3 zones through ENND; the movement of the 3 zones mainly comes from NVR 5, and the HP filter is less sensitive to \(\rho\) changes in the low-homophily area. This insensitivity has a significant impact on the 3 zones when \(\rho\) is close to \(0\) and a trivial effect when \(\rho\) is close to \(1\), because the magnitude of NVR is too small.

Figure 3: Comparison of CSBM-H with \(\sigma_{0}^{2}=1,\sigma_{1}^{2}=5\).

**Increase the Node Degree of High-variation Class (\(d_{0}=5,d_{1}=25\))** From Figure 4, it can be observed that as the node degree of the high-variation class increases, the PBE and \(D_{\text{NGJ}}\) curves of the FP and HP filters almost stay the same, while the curves of the LP filter go down by a large margin. This leads to a substantial expansion of the LP zone and shrinkage of the FP and HP zones. This is mainly due to the decrease of the ENND of the LP filter; the decrease of its NVR in the low homophily area also plays an important role.

**Increase the Node Degree of Low-variation Class (\(d_{0}=25,d_{1}=5\))** From Figure 5, we have a similar observation as when we increase the node degree of the high-variation class. The difference is that the expansion of the LP zone and shrinkage of the FP and HP zones are not as significant as before. From \(\tilde{\sigma}_{0}^{2},~{}\tilde{\sigma}_{1}^{2}\) we can see that increasing node degree can help the LP filter reduce the variances of the features so that the ENND will decrease, especially for the high-variation class, while the HP filter is less sensitive to the change of variances and node degree.

Figure 4: Comparison of CSBM-H with \(d_{0}=5,d_{1}=25\).

Figure 5: Comparison of CSBM-H with \(d_{0}=25,d_{1}=5\).

### More General Theoretical Analysis

In this subsection, we aim to gain a deeper understanding of how LP and HP affect ND in a broader context beyond the two-normal settings. To be consistent with previous literature, we follow the assumptions outlined in [34], which are: 1. The features of node \(i\) are sampled from distribution \(\mathcal{F}_{z_{i}}\), _i.e._, \(\mathbf{x}_{i}\sim\mathcal{F}_{z_{i}}\), with mean \(\mathbf{\mu}_{z_{i}}\in\mathbb{R}^{F_{h}}\); 2. Dimensions of \(\mathbf{x}_{i}\) are independent of each other; 3. Each dimension in feature \(\mathbf{x}_{i}\) is bounded, _i.e._, \(a\leq\mathbf{x}_{i,k}\leq b\); 4. For node \(i\), the labels of its neighbors are independently sampled from the neighborhood distribution \(\mathcal{D}_{z_{i}}\), repeated \(d_{i}\) times. We refer to a graph that follows the above assumptions as \(\mathcal{G}=\left\{\mathcal{V},\mathcal{E},\left\{\mathcal{F}_{c},c\in\mathcal{C}\right\},\left\{\mathcal{D}_{c},c\in\mathcal{C}\right\}\right\},\mathcal{C}=\left\{1,\ldots,C\right\}\), and \((b-a)^{2}\) reflects how variable the features are. The authors in [34] analyze the distance between the aggregated node embedding and its expectation, _i.e._, \(\left\|\mathbf{h}_{i}-\mathbb{E}(\mathbf{h}_{i})\right\|_{2}\), which only considers the intra-class ND and has been shown to be inadequate for a comprehensive understanding of ND. Instead, we investigate **how significantly the intra-class embedding distance is smaller than the inter-class embedding distance** in the following theorem, which is a better way to understand ND.
**Theorem 2**.: Suppose a graph \(\mathcal{G}=\left\{\mathcal{V},\mathcal{E},\left\{\mathcal{F}_{c},c\in\mathcal{C}\right\},\left\{\mathcal{D}_{c},c\in\mathcal{C}\right\}\right\}\) meets all the above assumptions (1-4). For nodes \(i,j,v\in\mathcal{V}\), suppose \(z_{i}\neq z_{j}\) and \(z_{i}=z_{v}\). Then for constants \(t_{x},t_{h},t_{\text{HP}}\) that satisfy \(t_{x}\geq\sqrt{F_{h}}\,D_{x}(i,j)\), \(t_{h}\geq\sqrt{F_{h}}\,D_{h}(i,j)\), \(t_{\text{HP}}\geq\sqrt{F_{h}}\,D_{\text{HP}}(i,j)\), we have

\[\mathbb{P}\left(\left\|\mathbf{x}_{i}-\mathbf{x}_{j}\right\|_{2}\geq\left\|\mathbf{x}_{i}-\mathbf{x}_{v}\right\|_{2}+t_{x}\right)\leq 2F_{h}\exp\left(-\frac{(D_{x}(v,j)-\frac{t_{x}}{\sqrt{F_{h}}})^{2}}{V_{x}(v,j)}\right),\]
\[\mathbb{P}\left(\left\|\mathbf{h}_{i}-\mathbf{h}_{j}\right\|_{2}\geq\left\|\mathbf{h}_{i}-\mathbf{h}_{v}\right\|_{2}+t_{h}\right)\leq 2F_{h}\exp\left(-\frac{(D_{h}(v,j)-\frac{t_{h}}{\sqrt{F_{h}}})^{2}}{V_{h}(v,j)}\right), \tag{6}\]

where \(D_{x}(v,j)=\left\|\mathbf{\mu}_{z_{v}}-\mathbf{\mu}_{z_{j}}\right\|_{2}\), \(V_{x}(v,j)=(b-a)^{2}\), \(D_{h}(v,j)=\left\|\hat{\mathbf{\mu}}_{z_{v}}-\hat{\mathbf{\mu}}_{z_{j}}\right\|_{2}\), \(V_{h}(v,j)=\left(\frac{1}{2d_{v}}+\frac{1}{2d_{j}}\right)(b-a)^{2}\), \(D_{\text{HP}}(v,j)=\left\|\mathbf{\mu}_{z_{v}}-\hat{\mathbf{\mu}}_{z_{v}}-\left(\mathbf{\mu}_{z_{j}}-\hat{\mathbf{\mu}}_{z_{j}}\right)\right\|_{2}\), \(V_{\text{HP}}(v,j)=\left(1+\frac{1}{2d_{v}}+\frac{1}{2d_{j}}\right)(b-a)^{2}\), and \(\hat{\mathbf{\mu}}_{z_{v}}=\sum_{u\in\mathcal{N}(v)}\mathbb{E}_{z_{u}\sim\mathcal{D}_{z_{v}}}\left[\frac{1}{d_{v}}\mathbf{x}_{u}\right]\).

Proof.: See Appendix B.

We can see that the probability upper bound mainly depends on a distance term (inter-class ND) and a normalized variance term (intra-class ND). The normalized variance term of the HP filter is less sensitive to changes of node degree than that of the LP filter because there is an additional 1 in the constant term. Moreover, we show that the distance term of the HP filter actually depends on the **relative center distance**, which is a novel discovery. As shown in Figure 6, when homophily decreases, the aggregated centers will move away from the original centers, and the relative center distance (purple) will get larger, which means the embedding distance between nodes from different classes is more likely to be large. This explains how the HP filter works for some heterophily cases. Overall, in a more general setting with weaker assumptions, we can see that ND is also described by the intra- and inter-class ND terms rather than intra-class ND only, which is consistent with CSBM-H.

## 4 Empirical Study of Node Distinguishability

Besides theoretical analysis, in this section, we will conduct experiments to verify whether the effect of homophily on the performance of GNNs really relates to its effect on ND. If a strong relation can be verified, then it indicates that we can design new ND-based performance metrics, beyond homophily metrics, to evaluate the superiority and inferiority of G-aware models against their coupled G-agnostic models without training, which saves time and computational costs.

### Tests on Real-world Datasets

To test whether "intra-class embedding distance is smaller than the inter-class embedding distance" strongly relates to the superiority of G-aware models over their coupled G-agnostic models in practice, we conduct the following hypothesis testing 6.
Footnote 6: [29] also conduct hypothesis testing to find out when to use GNNs for node classification, but they test the differences between connected nodes and unconnected nodes instead of intra- and inter-class nodes.

**Experimental Setup** We first train the two G-aware models GCN and SGC-1 and their coupled G-agnostic models MLP-2 and MLP-1 with fine-tuned hyperparameters provided by [31]. For each trained model, we calculate the pairwise Euclidean distance of the node embeddings in the output layers. Next, we compute the proportion of nodes whose intra-class node distance is significantly smaller than their inter-class node distance 7, _e.g.,_ we obtain Prop(GCN) for GCN. We use Prop to quantify ND, and we train the models multiple times to obtain samples for the following hypothesis tests:

\[\text{H}_{0}:\text{Prop}(\text{G-aware model})=\text{Prop}(\text{G-agnostic model});\ \text{H}_{1}:\text{Prop}(\text{G-aware model})<\text{Prop}(\text{G-agnostic model})\]

Specifically, we compare GCN v.s. MLP-2 and SGC-1 v.s. MLP-1 on \(9\) widely used benchmark datasets with different homophily values, 100 times each. Each time, we randomly split the data into training/validation/test sets with a ratio of 60%/20%/20%. With the 100 samples, we conduct a _T-test for the means of two independent samples of scores_ and obtain the corresponding p-values. The test results and model performance comparisons are shown in Table 1 (see more experimental tests on state-of-the-art models in Appendix G).

Table 1: Homophily metrics, classifier-based p-values and performance comparisons of GCN v.s. MLP-2 and SGC-1 v.s. MLP-1 on Cornell, Wisconsin, Texas, Film, Chameleon, Squirrel, Cora, Citeseer and PubMed.

It is observed that, in most cases (except for GCN v.s. MLP-2 on _PubMed_), when \(\text{H}_{1}\) significantly holds, G-aware models underperform their coupled G-agnostic models, and vice versa. This supports our claim that the performance of G-aware models is closely related to "intra-class v.s. inter-class node embedding distances", no matter the homophily levels. It reminds us that the p-value can be a better performance metric for GNNs beyond homophily. Moreover, the p-value can provide a statistical threshold, such as \(p\leq 0.05\). This property is not present in existing homophily metrics. However, it is required to train and fine-tune the models to obtain the p-values, which makes this metric less practical because of computational costs. To overcome this issue, in the next subsection, we propose a classifier-based performance metric that can provide p-values without training.

### Beyond Homophily: Classifier-based Performance Metrics

A qualified classifier should not require iterative training. In this paper, we choose Gaussian Naive Bayes (GNB) [18] and Kernel Regression (KR) with the Neural Network Gaussian Process (NNGP) [26; 2; 15; 37] to capture the **feature-based linear or non-linear** information. To get the p-value, we first randomly sample 500 nodes from \(\mathcal{V}\) and split them into 60%/40% training and test sets. The original features \(X\) and aggregated features \(H\) of the sampled training and test nodes are calculated and then fed into a given classifier. The predicted results and prediction accuracy of the test nodes are computed directly. We repeat this process 100 times to get 100 samples of prediction accuracy for \(X\) and \(H\). Then, for the given classifier, we compute the p-value of the following hypothesis testing,

\[\text{H}_{0}:\text{Acc}(\text{Classifier}(H))=\text{Acc}(\text{Classifier}(X));\ \text{H}_{1}:\text{Acc}(\text{Classifier}(H))<\text{Acc}(\text{Classifier}(X)).\]

The p-values can provide a statistical threshold value, such as 0.05, to indicate whether \(H\) is significantly better than \(X\) for node classification.
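A minimal sketch of this procedure using GNB as the non-iterative classifier (scikit-learn's `GaussianNB`); the KR/NNGP variant follows the same recipe with a different predictor, and the one-sided t-test is performed with SciPy. Sampling and split sizes mirror the description above.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.naive_bayes import GaussianNB

def cpm_pvalue(X, H, z, n_rounds=100, n_sample=500, seed=0):
    """Classifier-based performance metric: p-value of the one-sided test
    H1: Acc(GNB(H)) < Acc(GNB(X)), over repeated random 60/40 splits."""
    rng = np.random.default_rng(seed)
    acc_x, acc_h = [], []
    for _ in range(n_rounds):
        idx = rng.choice(len(z), size=min(n_sample, len(z)), replace=False)
        cut = int(0.6 * len(idx))
        tr, te = idx[:cut], idx[cut:]
        for feats, accs in ((X, acc_x), (H, acc_h)):
            clf = GaussianNB().fit(feats[tr], z[tr])
            accs.append(clf.score(feats[te], z[te]))
    t, p_two = ttest_ind(acc_h, acc_x)
    return p_two / 2 if t < 0 else 1 - p_two / 2   # one-sided conversion
```

A small p-value (e.g., below 0.05) then indicates that aggregation significantly hurts, i.e., the G-agnostic model is preferable.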
As seen in Table 1, the KR and GNB based metrics significantly outperform the existing homophily metrics, reducing the number of errors from at least \(5\) down to just \(1\) out of 18 cases. Besides, we only need a small set of the labels to calculate the p-value, which makes the metric better suited to the sparse-label scenario. Table 2 summarizes its advantages over the existing metrics. (See Appendix G for more details on classifier-based performance metrics, experiments on synthetic datasets, more detailed comparisons on small-scale and large-scale datasets, results for the symmetric renormalized affinity matrix, and running time.)

\begin{table}
\begin{tabular}{l|c c c c} \hline \hline
Performance Metrics & Linear or Non-linear & Feature Dependency & Sparse Labels & Statistical Threshold \\ \hline
\(\text{H}_{\text{edge}}\) & linear & ✗ & ✗ & ✗ \\
\(\text{H}_{\text{node}}\) & linear & ✗ & ✗ & ✗ \\
\(\text{H}_{\text{class}}\) & linear & ✗ & ✗ & ✗ \\
\(\text{H}_{\text{agg}}\) & linear & ✗ & ✓ & ✗ \\
\(\text{H}_{\text{GE}}\) & linear & ✓ & ✓ & ✗ \\
Classifier & both & ✓ & ✓ & ✓ \\ \hline \hline
\end{tabular}
\end{table} Table 2: Property comparisons of performance metrics

## 5 Conclusions

In this paper, we provide a complete understanding of homophily by studying intra- and inter-class ND together. To theoretically investigate ND, we study the PBE and \(D_{\text{NGJ}}\) of the proposed CSBM-H and analyze how class variances and node degree influence the PBE and \(D_{\text{NGJ}}\) curves and the 3 zones of the original, LP and HP filtered features. Empirically, through hypothesis testing, we corroborate that the performance of GNNs versus NNs is closely related to whether the intra-class node embedding "distance" is smaller than the inter-class node embedding "distance". We find that the p-value is a much more effective performance metric beyond homophily metrics at revealing the advantage and disadvantage of GNNs. Based on this observation, we propose the classifier-based performance metric, which is a non-linear feature-based metric and can provide a statistical threshold value.
2307.12355
Simulations of Weakly Magnetized Turbulent Mixing Layers
Radiative turbulent mixing layers are expected to form pervasively at the phase boundaries in multiphase astrophysical systems. This inherently small scale structure is dynamically crucial because it directly regulates the mass, momentum and energy exchanges between adjacent phases. Previous studies on hydrodynamic turbulent mixing layers have revealed the interactions between cold and hot phases in the context of the circumgalactic medium, offering important insight into the fate of cold clouds traveling through hot galactic winds. However, the role of magnetic field has only been sparsely investigated. We perform a series of 3D magnetohydrodynamics (MHD) simulations of such mixing layers in the presence of weak to modest background magnetic field. We find that due to field amplification, even relatively weak background magnetic fields can significantly reduce the surface brightness and inflow velocity of the hot gas in the mixing layer. This reduction is attributed to a combination of magnetic pressure support and direct suppression of turbulent mixing, both of which alter the phase structures. Our results are largely independent of thermal conduction and converged with resolution, offering insights on the survival of cold gas in multiphase systems.
Xihui Zhao, Xue-Ning Bai
2023-07-23T15:39:20Z
http://arxiv.org/abs/2307.12355v2
# Simulations of Weakly Magnetized Turbulent Mixing Layers

###### Abstract

Radiative turbulent mixing layers are expected to form pervasively at the phase boundaries in multiphase astrophysical systems. This inherently small scale structure is dynamically crucial because it directly regulates the mass, momentum and energy exchanges between adjacent phases. Previous studies on hydrodynamic turbulent mixing layers have revealed the interactions between cold and hot phases in the context of the circumgalactic medium, offering important insight into the fate of cold clouds traveling through hot galactic winds. However, the role of magnetic field has only been sparsely investigated. We perform a series of 3D magnetohydrodynamics (MHD) simulations of such mixing layers in the presence of weak to modest background magnetic field. We find that due to field amplification, even relatively weak background magnetic fields can significantly reduce the surface brightness and inflow velocity of the hot gas in the mixing layer. This reduction is attributed to a combination of magnetic pressure support and direct suppression of turbulent mixing, both of which alter the phase structures. Our results are largely independent of thermal conduction and converged with resolution, offering insights on the survival of cold gas in multiphase systems.

keywords: hydrodynamics - MHD - turbulence - magnetic fields - instabilities - galaxies: haloes - galaxies: evolution

## 1 Introduction

Commonly found in astrophysical plasmas are baryons coexisting in various phases spanning a wide range of temperatures and densities. While the different phases are discrete and thermally stable in their own right, the boundaries separating them are not necessarily sharp discontinuities, but extended layers thickened by diffusive transport processes such as viscosity and thermal conduction (Borkowski et al., 1990; Gnat et al., 2010). Furthermore, turbulent motions possibly driven by Kelvin-Helmholtz instabilities (KHI) mix up different phases at the interfaces, giving rise to turbulent mixing layers (TMLs) (Begelman and Fabian, 1990). Usually, these TMLs at intermediate temperatures radiate more efficiently and thus cool more rapidly, and hence the TMLs can dominate the energetics and play an active role in shaping the phase structure. Examples include supernova remnants (Kim et al., 2017; Fielding et al., 2018; El-Badry et al., 2019), galactic winds (Gronke and Oh, 2020; Fielding and Bryan, 2022; Tan and Fielding, 2023) and cosmic filaments (Mandelker et al., 2020). TMLs exist on nearly all scales within and around galaxies. In a multiphase system, understanding TMLs is crucial because it is directly through these layers that mass, momentum and energy of different phases are transported, which regulates the evolution of the system. One important venue under intense investigation is the circumgalactic medium (CGM), the gas in hot halos surrounding galaxies outside their disks or interstellar medium (ISM), but inside their virial radii on \(\sim 100\) kpc scales (Tumlinson et al., 2017; Faucher-Giguere and Oh, 2023). As an interface between the intergalactic medium (IGM) and galaxies, the CGM is a pivot where all components of the galactic ecosystem connect, making it a new frontier for studying galaxy formation and evolution. Recent observations have revealed a variety of features in the CGM, particularly on its multiphase nature.
Absorption and emission-line analysis has identified that cold dense clouds (\(T\sim 10^{4}-10^{5}\)K) are scattered ubiquitously throughout the entire diffuse halo (Hennawi et al., 2015), traveling through hot ambient gas (\(T\gtrsim 10^{6}\)K) at a typical velocity of \(\sim 100\) km/s. Very importantly, ultraviolet absorption lines have well constrained the cool phase to have a total mass on the order of \(\sim 10^{9}-10^{10}\)\(M_{\odot}\) (Chen and Mulchaey, 2009; Chen et al., 2010; Prochaska et al., 2011; Werk et al., 2012; Stocke et al., 2013; Stern et al., 2016; Prochaska et al., 2017), indicating the CGM is a massive reservoir sustaining star formation. Therefore, a solid understanding of the origin, evolution and ultimate fate of cold clouds becomes a key element for understanding the life cycle of the galactic ecosystem. The TMLs play a crucial role here because these layers govern the energetics at cold/hot interfaces and therefore the growth or destruction of cold clouds (Gronke and Oh, 2018, 2020). Additionally, since the TMLs reside at intermediate temperatures with higher emissivity, they provide a set of important observational diagnostics. For example, TMLs are expected to explain the high ions (such as O\({}_{\rm VI}\)) observed in absorption spectra of high-velocity clouds around the Milky Way (Savage et al., 2014).

Over the past few years, the fate of cold gas in the CGM has been extensively studied in the form of "cloud-crushing" simulations, where the typical setup is to embed a single cold cloud (\(T_{\rm cold}\sim 10^{4}\)K) in a hot ambient wind (\(T_{\rm hot}\sim 10^{6}\)K) with a relative speed on the order of 100 km/s. Stemming from early hydrodynamic studies (e.g., Klein et al., 1994; Xu and Stone, 1995), recent cloud-crushing simulations have investigated the role of various additional physical ingredients, including thermal conduction (e.g., Bruggen and Scannapieco (2016); Armillotta et al. (2016)), radiative cooling (Scannapieco and Bruggen, 2015; Gronke and Oh, 2018), magnetic fields (Dursi and Pfrommer, 2008; McCourt et al., 2015; Gronke and Oh, 2020; Cottle et al., 2020) and cosmic rays (Wiener et al., 2019; Bruggen and Scannapieco, 2020). However, besides the greatly expanded parameter space, these studies overall lead to diverse outcomes depending on problem settings and, possibly, numerical resolution. As a result, the fate of the cold clouds, especially how various physical processes control their growth/destruction, remains elusive. We note that dynamically important TMLs are usually by necessity underresolved in cloud-scale simulations, with certain outcomes dependent upon numerical schemes (e.g., Bruggen et al., 2022), which also motivates further studies of the TMLs. As an intrinsically small-scale structure, TMLs can be considered local patches at the cold-hot interfaces in the cloud-crushing problem, and their study helps refine cloud-scale simulations and potentially provide the necessary sub-grid physics. The primary goal of studying the TMLs is to clarify the rate of local mass, momentum and energy exchanges between cold and hot phases. These rates are essentially encapsulated by the inflow velocity \(v_{\rm in}\) of hot gas to cold gas (because on a local scale, the mixing layer quickly cools down, generating more cold gas while consuming more hot gas), and are also effectively reflected in the cooling luminosity. Begelman and Fabian (1990) did early analytic work on TMLs.
They pointed out the existence of TMLs characterized by a temperature of \(T_{\rm mix}\sim\sqrt{T_{\rm cold}T_{\rm hot}}\) and width of \(l_{\rm mix}\sim v_{\rm turb}t_{\rm cool}\) at the cold/hot interfaces, mediating their interactions. Here \(v_{\rm turb}\) is the turbulent velocity and \(t_{\rm cool}\) is the cooling time scale for the mixing layer to cool down to the cold phase. Their theory thus implies that \(v_{\rm in}\sim l_{\rm mix}/t_{\rm cool}\sim v_{\rm turb}\). Recently, however, to investigate the abundance of high ions within TMLs, Ji et al. (2019) performed 3D plane parallel simulations of TMLs and found that \(l_{\rm mix}\propto t_{\rm cool}^{1/2}\), which is inconsistent with the early results. This \(l_{\rm mix}\propto t_{\rm cool}^{1/2}\) scaling leads to \(v_{\rm in}\propto t_{\rm cool}^{-1/2}\). Subsequently, Gronke and Oh (2020) ran cloud simulations to study the growth rate of cold clouds, but derived \(v_{\rm in}\propto t_{\rm cool}^{-1/4}\) from their results. This is also observed in local simulations of turbulent mixing by Mandelker et al. (2020) and Fielding et al. (2020). Later, by exploiting parallels with turbulent combustion theory, Tan et al. (2021) reconciled the discrepancy and bridged previous results. It turns out the competition between turbulent mixing and radiative cooling is responsible for the different scalings, and the dominant process largely sets the flow properties. Numerically, consistent results have also been obtained among different groups.

Thanks to the aforementioned studies of TMLs, a systematic understanding of the efficiency of turbulent mixing at small scales is being established, which offers subgrid physics for large-scale models such as galactic winds (e.g., Fielding and Bryan (2022); Tan and Fielding (2023)). However, most previous works on TMLs are hydrodynamic and neglected magnetic fields. In cloud-crushing simulations, magnetic fields have been shown to slow down the destruction of clouds (Dursi and Pfrommer, 2008; Gronnow et al., 2018), but at late times the presence of magnetic fields only makes a minor difference in the lifetime or mass growth of cold clouds (Li et al., 2020; Cottle et al., 2020; Gronke and Oh, 2020). Meanwhile, magnetic field geometry introduces additional complexity and influences the morphology and acceleration of cold clouds (McCourt et al., 2015; Banda-Barragan et al., 2018; Cottle et al., 2020). On the other hand, the resolution requirement for MHD cloud-crushing simulations is also uncertain, due to the complex interplay between magnetic field and turbulence in the TMLs that is not necessarily properly resolved. Therefore, we aim to quantify the role of magnetic fields by studying magnetized TMLs. In this work, we perform 3D MHD plane parallel simulations with both radiative cooling and anisotropic thermal conduction to investigate the properties of magnetized TMLs, especially in comparison with previous hydrodynamic results. We note that Ji et al. (2019) also ran a subset of 3D simulations with magnetic fields that are similar to ours, but mostly focused on the high ion abundances rather than \(v_{\rm in}\). We cover a large parameter space in magnetic field and radiative cooling strength, and inspect the flow properties, particularly the inflow velocity \(v_{\rm in}\) (or equivalently the surface brightness \(Q\), see Section 2.4), the morphology and the phase distributions of magnetized TMLs.
Our primary focus is on the regime with weak initial magnetization, as this branch has been sparsely investigated and is more applicable to the plane parallel model. We find that even weak initial magnetic fields (magnetic pressure \(\sim 500\) times smaller than thermal pressure) can be amplified to balance thermal pressure, and substantially change the conclusions drawn from hydrodynamic simulations. Furthermore, we briefly study the impacts of different field geometries and conductivity criteria.

This paper is structured as follows. We describe our numerical methods and implementations in Section 2. Then we present our results in Section 3, including an overview (3.1\(\sim\)3.2), detailed diagnostics and analysis (3.3\(\sim\)3.4), and a convergence study (3.5). In Section 4, we show additional results with different magnetic field geometry and conductivity. We discuss the connections of our results to previous works and to larger scale problems, and comment on caveats, in Section 5. We conclude in Section 6.

## 2 Methods

We use ATHENA++ (Stone et al., 2020) to solve the following three-dimensional MHD equations on a uniform, Cartesian grid with the HLLD Riemann solver:

\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v})=0 \tag{1}\]
\[\frac{\partial}{\partial t}(\rho\mathbf{v})+\nabla\cdot(\rho\mathbf{v}\mathbf{v}-\mathbf{B}\mathbf{B}+\mathbf{P}^{\star})=0 \tag{2}\]
\[\frac{\partial E}{\partial t}+\nabla\cdot[(E+P^{\star})\mathbf{v}-\mathbf{B}(\mathbf{B}\cdot\mathbf{v})]+\nabla\cdot\mathbf{q}=-\varepsilon_{\mathrm{cool}} \tag{3}\]
\[\frac{\partial\mathbf{B}}{\partial t}-\nabla\times(\mathbf{v}\times\mathbf{B})=0 \tag{4}\]

where \(\rho\), \(\mathbf{v}\) and \(E\) are the fluid density, velocity and total energy density, the latter defined as the sum of internal energy, kinetic energy and magnetic energy (i.e. \(E\equiv P_{\mathrm{therm}}/(\gamma-1)+\rho\mathbf{v}^{2}/2+B^{2}/2\), where \(\gamma\) is the adiabatic index). \(P^{\star}\) is the total pressure including thermal pressure \(P_{\mathrm{therm}}\) and magnetic pressure \(P_{\mathrm{mag}}\equiv B^{2}/2\), while \(\mathbf{P}^{\star}\) is the corresponding tensor. We implement thermal conduction through the term \(\nabla\cdot\mathbf{q}\), which is anisotropic due to the presence of the magnetic field \(\mathbf{B}\). The heat flux \(\mathbf{q}\) is parallel to \(\mathbf{b}\equiv\mathbf{B}/\left|\mathbf{B}\right|\), and \(\varepsilon_{\mathrm{cool}}\) is an external energy source representing optically thin cooling. Details about conductivity and radiative cooling are discussed below.

### Thermal Conduction

Conductive heat flux in magnetized plasma is mostly parallel to the magnetic field since charged particles are confined around field lines. Thus we adopt an anisotropic heat flux:

\[\mathbf{q}\equiv-\kappa_{\parallel}(\mathbf{b}\cdot\nabla T)\mathbf{b}, \tag{5}\]

where \(T\) is the temperature and \(\kappa_{\parallel}\) the parallel conductivity. We implement anisotropic thermal conduction following the prescription in Sharma & Hammett (2007), which preserves monotonicity in anisotropic diffusion. The canonical Spitzer conductivity for fully ionized plasmas has been given by Spitzer (1962):

\[\kappa_{\mathrm{Spitzer}}(T)=5.7\times 10^{-7}\ T^{5/2}\mathrm{erg\ cm^{-1}\ s^{-1}\ K^{-1}}. \tag{6}\]

In reality, the actual level of thermal conduction is uncertain due to small scale physics which may change the electron mean free path. Instead of using \(\kappa_{\mathrm{Spitzer}}\) above, we assume a constant conductivity \(\kappa_{\parallel}\) equivalent to the value of \(\kappa_{\mathrm{Spitzer}}(T)\) at \(T=0.8\times 10^{5}\) K:

\[\kappa_{\parallel}=10^{6}\mathrm{erg\ cm^{-1}\ s^{-1}\ K^{-1}}\, \tag{7}\]

which roughly corresponds to the conductivity in the mixing layers, as also adopted in Tan et al. (2021). We also discuss different choices of conductivity in Section 4.1 and show that the overall results are insensitive to our choices. For comparison purposes, we also run a set of 3D hydrodynamic simulations without magnetic fields, where conduction is isotropic with the conductivity \(\kappa_{\mathrm{iso}}=\kappa_{\parallel}\).
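To illustrate Equations 5 and 6, here is a plain centered-difference sketch on a 2D slab; this is not the monotonicity-preserving Sharma & Hammett (2007) scheme actually used in ATHENA++, and the grid spacing is assumed uniform.

```python
import numpy as np

def kappa_spitzer(T):
    """Spitzer parallel conductivity (Equation 6), cgs units."""
    return 5.7e-7 * T**2.5

def anisotropic_flux(T, bx, by, kappa, dx):
    """Heat flux q = -kappa (b . grad T) b (Equation 5) on a uniform 2D grid.
    bx, by are the components of the unit vector b = B/|B|."""
    dTdx = np.gradient(T, dx, axis=0)
    dTdy = np.gradient(T, dx, axis=1)
    b_dot_gradT = bx * dTdx + by * dTdy   # scalar field (b . grad T)
    return -kappa * b_dot_gradT * bx, -kappa * b_dot_gradT * by
```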
### Radiative Cooling

Radiative cooling is probably the most important non-ideal process in TMLs and can qualitatively alter the underlying physics (Gronke & Oh, 2018; Fielding & Bryan, 2022). In our MHD equations, we add an external energy source due to radiative cooling as

\[\frac{1}{\gamma-1}\left.\frac{dP}{dt}\right|_{\mathrm{cool}}=-\varepsilon_{\mathrm{cool}}\equiv-n^{2}\Lambda(T) \tag{8}\]

where \(n\) is the number density and \(\Lambda\) is the cooling curve as a function of temperature. Following Fielding et al. (2020), we specify our log-normal cooling function \(\Lambda(T)\) by (i) the maximum value \(\Lambda(T_{\mathrm{mix}})\), which is consistent with the cooling table calculated by Gnat & Sternberg (2007), and (ii) the width, which is arranged so that \(\Lambda(T_{\mathrm{mix}})=100\times\Lambda(T_{\mathrm{cold/hot}})\). In reality, the shape of the cooling curve is more complicated and depends on the metallicity of the environment. Although Tan & Oh (2021) acknowledged that the shape of the cooling curve can change the temperature distribution, we have tested the realistic cooling table calculated by Gnat & Sternberg (2007) and found that the overall dynamics is not sensitive to the specific shape of the cooling curve, as long as it reflects a bi-stable feature. The cooling energy loss described by equation 8 induces a cooling time scale:

\[t_{\mathrm{cool}}(T)\equiv\frac{P_{\mathrm{therm}}}{(\gamma-1)n^{2}\Lambda(T)} \tag{9}\]

Since it is the intense cooling within the mixing layer that dominates the evolution of the system, we refer to the cooling time of the mixed gas at \(T_{\mathrm{mix}}\equiv\sqrt{T_{\mathrm{cold}}T_{\mathrm{hot}}}=1\times 10^{5}\) K as \(t_{\mathrm{cool}}\) hereafter, which is \(t_{\mathrm{cool}}(T_{\mathrm{mix}})\approx 1\) Myr.

In the following numerical experiments, we adjust the strength of radiative cooling by multiplying the fiducial cooling function by a constant prefactor \(\Lambda_{0}\). The actual cooling function is thus given as

\[\Lambda(T)=\Lambda_{0}\Lambda_{\mathrm{fid}}(T) \tag{10}\]

where \(\Lambda_{\mathrm{fid}}\) is the log-normal cooling function described above. By definition we have \(t_{\mathrm{cool}}\propto\Lambda_{0}^{-1}\). This prefactor provides numerical convenience for studying the influence of different cooling time scales. Physically, tuning the cooling strength is equivalent to changing the ambient pressure, where increasing \(\Lambda_{0}\) corresponds to larger ambient pressure.
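A minimal sketch of this cooling prescription follows. The peak value `LAM_PEAK` is a placeholder for the Gnat & Sternberg (2007) value at \(T_{\rm mix}\) (an assumption here); since \(T_{\rm cold}\) and \(T_{\rm hot}\) each sit one dex from \(T_{\rm mix}\), the requirement \(\Lambda(T_{\rm mix})=100\,\Lambda(T_{\rm cold/hot})\) fixes the log-normal width via \((\ln 10)^2/(2w^2)=\ln 100\), i.e. \(w^2=\ln(10)/4\). For simplicity the cooling time assumes \(P_{\rm therm}=n k_B T\).

```python
import numpy as np

T_MIX, KB, GAMMA = 1.0e5, 1.380649e-16, 5.0 / 3.0
LAM_PEAK = 1.0e-21            # erg cm^3 s^-1; stand-in for Lambda(T_mix)
W2 = np.log(10.0) / 4.0       # log-normal width from the 1:100 contrast

def cooling_curve(T, lam0=1.0):
    """Log-normal Lambda(T) peaked at T_mix; lam0 is the prefactor Lambda_0."""
    return lam0 * LAM_PEAK * np.exp(-np.log(T / T_MIX) ** 2 / (2.0 * W2))

def t_cool(n, T, lam0=1.0):
    """Cooling time scale (Equation 9), with P_therm = n k_B T."""
    return n * KB * T / ((GAMMA - 1.0) * n ** 2 * cooling_curve(T, lam0))
```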
### Initialization

In order to generate a TML, we simulate a plane-parallel shear layer, and trigger the initial KHI by imposing a relative shear motion that feeds the turbulence mixing up the cold and hot gas. Physically, our setup imitates a local patch at the boundary of a cold cloud travelling through hot ambient gas.

We construct our 3D MHD plane-parallel models closely following Ji et al. (2019) and Tan et al. (2021). The simulation domain contains \(512\times 128\times 128\) cells, corresponding to \(400\times 100\times 100\) pc. This means a cell length of \(0.78\) pc, approximately resolving the Field length \(\lambda_{F}\equiv\sqrt{\kappa T/n^{2}\Lambda}\) in our simulations when thermal conduction is included (Begelman & McKee, 1990). We arrange our coordinate system such that the \(x\) axis is normal to the cold/hot interface, along which the turbulent mixing front propagates. The \(y\) axis is parallel to the shear flows (the initial relative motion that feeds the KHI), and the \(z\) axis is the remaining dimension. Outflow boundary conditions are applied in the \(x\) direction, and periodic boundaries in the other two. Physically, the coordinate ranges are \([-200,200]\) pc or \([-100,300]\) pc along the \(x\) axis, and \([0,100]\) pc along the \(y\) and \(z\) axes. We use \(L_{\mathrm{box}}\equiv 100\) pc to denote the simulation box size hereafter. We adjust the bounds in the \(x\) direction according to cooling strength to ensure that we can capture the entire mixing layer for a sufficiently long time, while keeping its evolution unaffected by the choice of boundary conditions. We fill the negative \(x\) region with cold gas (\(T_{\rm cold}=10^{4}\) K) and the positive \(x\) region with hot gas (\(T_{\rm hot}=10^{6}\) K), separated by a thin (\(\sim 6\) cells) initial mixing layer (\(T_{\rm mix}\equiv\sqrt{T_{\rm cold}T_{\rm hot}}=10^{5}\) K) centered at \(x=0\). The different phases are initially in pressure equilibrium, with number densities \(n_{\rm cold}=1.6\times 10^{-2}\)cm\({}^{-3}\) and \(n_{\rm hot}=1.6\times 10^{-4}\)cm\({}^{-3}\). We further impose a uniform magnetic field \(\mathbf{B}_{0}\) in the \(y\) direction, parallel to the initial shear flows. We choose this field orientation because, in spite of the uncertainty in the realistic field direction, the relative motion generally tends to align the field lines with the shear flows due to the frozen-in effect, so the \(\mathbf{B}_{y}\) component should dominate around the mixing layer. Effects of different initial field geometry will be discussed in Section 4.1. The initial field strength \(B_{0}\) is set through the initial plasma beta \(\beta_{0}\equiv P_{\rm therm,0}/P_{\rm mag,0}\), the ratio of thermal pressure to magnetic pressure. Although the value of \(\beta\) is typically uncertain in real astrophysical systems, we focus on the weak-field regime with \(50\leq\beta_{0}\leq 50000\), so that the background field does not prevent initial mixing, and we will show that even such weakly magnetized environments are enough to drive deviations from hydrodynamic results. Note that by our design, \(P_{\rm therm,0}\) is uniform and constant throughout all our simulations, while the initial total pressure \(P_{0}^{\star}=P_{\rm therm,0}\left(1+\frac{1}{\beta_{0}}\right)\) depends on \(\beta_{0}\). But since we only choose \(\beta_{0}\geq 50\), \(P_{0}^{\star}\) can be regarded as nearly constant. The initial sound speed in the hot gas is \(c_{\rm s,hot}=\sqrt{\gamma P_{\rm therm}/\rho_{\rm hot}}=150\) km/s. Due to the initial pressure equilibrium, we have \(c_{\rm s,cold}=0.1c_{\rm s,hot}\).
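A minimal sketch of this initial thermal and magnetic state is given below. The tanh transition in \(\log T\) (with a width equal to the shear profile width \(a=5\) pc used below) is our assumption for the \(\sim 6\)-cell interface, and \(B\) is expressed in code units where \(P_{\rm mag}=B^{2}/2\).

```python
import numpy as np

K_B = 1.380649e-16                # erg/K
T_COLD, T_HOT = 1e4, 1e6          # K
N_COLD = 1.6e-2                   # cm^-3
A_MIX = 5.0                       # pc; interface width, taken equal to the
                                  # shear profile width a below (assumption)

def initial_state(x_pc, beta0=500.0):
    """Initial thermal and magnetic state as a function of x [pc]."""
    # Smooth transition from T_cold (x<0) to T_hot (x>0), tanh in log T
    logT = 5.0 + np.tanh(x_pc / A_MIX)    # log10(T): 4 -> 6 across the layer
    T = 10.0 ** logT
    n = N_COLD * T_COLD / T               # pressure equilibrium: n*T = const
    P_therm = n * K_B * T                 # uniform by construction
    B_y = np.sqrt(2.0 * P_therm / beta0)  # code units with P_mag = B^2/2
    return T, n, B_y
```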
We drive the initial shear motion and seed the perturbation to induce the KHI using the following velocity profile: \[v_{y} =\frac{v_{\rm shear}}{2}{\rm tanh}\left(\frac{x}{a}\right) \tag{11}\] \[v_{x} =\delta v\,\exp\left(-\frac{x^{2}}{a^{2}}\right)\cos(k_{y}y) \cos(k_{z}z) \tag{12}\] where \(v_{\rm shear}\) is 100 km/s, slightly lower than \(c_{\rm s,hot}\). \(a=5\) pc is approximately the width of the initial mixing layer, and \(\delta v=0.01v_{\rm shear}\). The perturbation wavelength \(\lambda_{y,z}=2\pi/k_{y,z}\) equals \(L_{\rm box}\). We also apply white noise of amplitude \(\sim 10^{-4}\)\(v_{\rm shear}\) on top of this velocity profile.

### Surface brightness

In magnetized TMLs, the physical quantity of major interest is the inflow velocity \(v_{\rm in}\) of hot gas flowing into the cold phase, because it directly encapsulates the rates of mass and energy exchange between the two phases, and is a parameter that can be encoded in large-scale simulations or theories (Lancaster et al., 2021; Fielding and Bryan, 2022; Tan et al., 2022). However, the inflow velocity \(v_{\rm in}\), which describes the propagation of the entire turbulent mixing front, is not straightforward to measure because the simulation box can develop bulk velocities. We instead measure the surface brightness \(Q\) defined by the total cooling rate: \[Q\equiv\frac{1}{S}\int\varepsilon_{\rm cool}\,dV=\frac{1}{S}\int n^{2}\Lambda\,dV, \tag{13}\] where \(S=100\ \mathrm{pc}\times 100\ \mathrm{pc}\) is the cross-sectional area of our simulation domain. Measuring \(Q\) or \(v_{\rm in}\) is roughly equivalent, because in a quasi-steady state, the radiative cooling energy loss should be balanced by the enthalpy flux in a frame comoving with the mixing layer (Ji et al., 2019; Fielding et al., 2020; Tan et al., 2021), \[Q\approx\frac{5}{2}Pv_{\rm in}, \tag{14}\] where \(P\) is the ambient pressure surrounding the mixing layer (near the boundary at the hot end), which is approximately equal to \(P_{\rm therm,0}\), as will be shown in later sections. We caution that this relationship is not rigorous, especially in high-Mach-number regimes (Bustard and Gronke, 2022; Yang and Ji, 2023). In Section 3.4.1 we also measure \(v_{\rm in}\) and assess the relation (14), finding that the discrepancies are generally small. In the following sections, we visualize the local cooling emissivity \(\varepsilon_{\rm cool}=n^{2}\Lambda\), and normalize it by the initial value within the mixing layer: \[\varepsilon_{0}\equiv n_{\rm mix}^{2}\Lambda(T_{\rm mix}), \tag{15}\] which depends linearly on \(\Lambda_{0}\). To highlight the cooling radiation from the mixing layers, we subtract the background cooling emission, defined as \(n^{2}\Lambda(T_{\rm cold/hot})\), in our diagnostics below. We have checked that this has negligible influence on our main conclusions, given our design of the cooling curve. From this point onward, we use the temperature range \(2\times 10^{4}\)K \(<T<3\times 10^{5}\)K to define mixed gas.
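A minimal sketch of this diagnostic, assuming cell-centered arrays with the \(x\) axis first; the per-cell background-subtraction convention (removing the floor emissivity of the nearer pure phase) is our reading of the prescription above.

```python
import numpy as np

PC = 3.086e18  # cm per parsec

def surface_brightness(n, T, Lam, dx_pc, subtract_bg=True):
    """Q = (1/S) * Integral of n^2 Lambda(T) dV (equation 13), erg s^-1 cm^-2.

    n, T : 3D arrays (cm^-3, K); Lam : callable cooling curve.
    """
    dV = (dx_pc * PC) ** 3
    S = T.shape[1] * T.shape[2] * (dx_pc * PC) ** 2   # y-z cross-sectional area
    emis = n ** 2 * Lam(T)
    if subtract_bg:
        # Remove the floor emissivity the gas would have in the nearer pure
        # phase (our reading of the background-subtraction convention above)
        bg = n ** 2 * np.where(T < 1e5, Lam(1e4), Lam(1e6))
        emis = np.clip(emis - bg, 0.0, None)
    return emis.sum() * dV / S
```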
## 3 3D simulation results

### Overview of the simulation set

It is useful to start by taking an overview of our simulation set on magnetized TMLs. Figure 1 is a gallery displaying temperature slices of mixing layers seen edge on, taken from the majority of our simulations, which cover a parameter space with initial plasma \(\beta_{0}\) from 50000 to 50 (plus hydrodynamic runs) and radiative cooling strength \(\Lambda_{0}\) from 0.1 to 10.

After the initial KHI has developed, the resulting turbulence mixes up the different phases and eventually creates a quasi-steady TML. These snapshots of TMLs provide a general overview of how mixing shapes the phase distribution at the interfaces and how varying the magnetic field or cooling strength affects the evolution. We first introduce the results from hydrodynamic simulations (first row). When cooling is very inefficient (\(\Lambda_{0}=0.1\)), we see a rather smooth temperature (and hence density) transition from the cold phase to the hot phase, showing an extended layer mostly filled with intermediate-temperature gas. This is because the long \(t_{\rm cool}\) allows the different phases to fully mix well before the intermediate-temperature gas cools down, and the mixing layer is "single-phase" (Tan et al., 2021). However, as \(\Lambda_{0}\) increases, faster cooling shortens the lifetime of intermediate-temperature gas. Once \(t_{\rm cool}\) is shorter than the mixing time scale (\(t_{\rm mix}\sim L_{\rm box}/v_{\rm shear}\), which happens to occur at \(\Lambda_{0}\approx 1\)), the mixing layers lack extended intermediate-temperature regions, with an abrupt jump between the hot and cold phases, giving rise to "multi-phase" mixing layers in which most gas is either cold or hot. The interface where a small amount of intermediate-temperature gas resides becomes corrugated and shows fractal structures (Fielding et al., 2020; Tan et al., 2021).

After magnetic fields are incorporated, an immediate effect is to weaken the KHI, and hence turbulent mixing, through magnetic tension. We still observe an overall trend that as the cooling rate increases, the phase structure in the mixing layer transitions from "single-phase" to "multi-phase". On the other hand, the flow becomes more and more laminar as the magnetic field strengthens, especially when \(\beta_{0}\leq 500\). In the cases with \(\beta_{0}=50000\) (second row), the morphology of TMLs appears similar to the hydrodynamic simulations (except for being more fractal, presumably because anisotropic conduction is less diffusive; see also Fielding et al. (2020); Tan et al. (2021)). We checked that in such cases \(Q\) (or \(v_{\rm in}\)) indeed only slightly deviates from the hydrodynamic results (see Figure 8). Therefore, we mainly focus on \(50\leq\beta_{0}\leq 5000\) in the rest of this paper.

### General features of magnetized TMLs

We first take our highest resolution run and show in Figure 2 the typical features of a weakly magnetized TML. Different panels illustrate the temperature, emissivity, pressure deviation, magnetic field and velocity profiles of a single slice in the \(x-y\) plane. The corresponding simulation has fiducial cooling strength \(\Lambda_{0}=1\) and \(\beta_{0}=5000\), a choice for which magnetic fields exert a noticeable influence while still preserving the turbulent structure. This snapshot is taken at \(t=110\)\(t_{\rm shear}\), where \(t_{\rm shear}\equiv L_{\rm box}/v_{\rm shear}\) is the shear timescale across the simulation box, so turbulence seeded by the initial KHI is well developed at this moment. Through Figure 2, we introduce the main characteristics of the magnetized TML.

(1) _Phase structure_ The temperature panel shows that, instead of hosting a mixing layer that is mostly "single-phase" as in the hydrodynamic counterpart (e.g., the upper-middle panel in Figure 1), the mixing layer in Figure 2 is more "fractal", consisting more of discrete cold/hot phases interspersed within each other, while a small amount of mixed gas separates them.
This can be understood since the suppression of mixing by the magnetic field better preserves the temperatures of the cold/hot phases. In the meantime, from the emission (middle) panel we can see that most radiation takes place within the narrow corrugated boundary where intermediate-temperature gas resides, reinforcing the more fractal nature of the mixing layer.

Figure 1: Gallery of our main simulation set, which covers a parameter space with \(\beta_{0}\) from 50000 to 50 (along with the hydrodynamic runs), and \(\Lambda_{0}\) from 0.1 to 10. Each panel is a projected \(x-y\) slice illustrating the temperature field around the mixing layers. From left to right, radiative cooling efficiency grows. From top to bottom, the initial magnetic field strengthens. These snapshots are taken at late times during the evolution (\(t>120t_{\rm shear}\) for \(\beta_{0}\leq 5000\), while \(t>75t_{\rm shear}\) for \(\beta_{0}=50000\)), so that the systems have sufficiently developed and reached their quasi-steady states. In each panel, we have adjusted the \(x-\)coordinates so that the mixing layer is roughly located at the same position for all runs.

(2) _Magnetic field amplification_ During the evolution of TMLs, the initial field \(\mathbf{B_{0}}=B_{0}\mathbf{\hat{y}}\) gets entangled and amplified. With a weak initial field, the amplification results from the kinematic dynamo driven by the KHI turbulence, which is strongest in regions around the interface between the cold and hot gas where the shear is largest (bottom row of Figure 2). In this case with \(\beta_{0}=5000\), an amplification of \(\sim 40\) times in field strength can be achieved (middle row in Figure 2). The amplified field is highly turbulent and is dominated by the \(\hat{y}-\)component along the direction of shear, as expected, while the other two components also reach nearly comparable strength. This saturated field strength corresponds to a minimum \(\beta\sim 3\), suggesting \(P_{\rm mag}\) could be comparable to \(P_{\rm therm}\) at later stages. In addition, note that the location of the mixing layer migrates from the initial \(x=0\) toward larger \(x\) as cold gas grows due to cooling. We observe that the amplified fields leave an imprint in the cold phase after the mixing layer sweeps through.

(3) _Isobaric cooling?_ One useful diagnostic of the dynamics of the mixing layer concerns whether this layer is isobaric (i.e., at the same pressure as the cold/hot phases). Ji et al. (2019) found in their hydrodynamic simulations that thermal pressure has a dip in the mixing layer that constitutes \(\sim 8\%\) of the total pressure due to efficient cooling. This dip is compensated by turbulent pressure, reflecting that the hot gas is siphoned into the cold phase. Gronke and Oh (2020) also identified supportive signs in their cloud-scale MHD simulations. However, Fielding et al. (2020) pointed out that pressure dips could be a signature of inadequate resolution. They suggested the system should be isobaric as long as the cooling layers are properly resolved. In our MHD simulations, where the physical resolution is as high as that used in Fielding et al. (2020) (although our box size is smaller), we find that the answer likely depends mainly on the magnetization. In hydrodynamic cases or MHD cases with very weak \(B_{0}\) (such as the \(\Delta P^{\star}/\bar{P}^{\star}\) panel in Figure 2), there are pressure deficits, especially as \(\Lambda_{0}\) increases, in agreement with Ji et al. (2019).
But when \(B_{0}\) increases, \(P_{\rm mag}\) almost fully compensates the deficits of \(P_{\rm therm}\) (see Figure 9), with negligible contribution from turbulent pressure. We will show that when \(\beta_{0}=500\), the situation is quite strictly isobaric, and the results remain consistent in both our fiducial and high-resolution simulations.

Figure 2: Properties of our fiducial magnetized TML simulation with \(\beta_{0}=5000\), \(\Lambda_{0}=1\) at the highest resolution (\(\Delta x=L_{\rm box}/256\)), shown from a single slice in the \(x-y\) plane, taken from the snapshot at \(t=110t_{\rm shear}\). **Top row:** from left to right are temperature, cooling emission and total pressure deviation. Here \(P^{\star}=P_{\rm therm}+P_{\rm mag}\) and \(\overline{P}^{\star}\) is the average over the simulation box. **Middle row:** the three magnetic field components, normalized by the initial strength \(B_{0}\). **Bottom row:** the three velocity components, normalized by the initial shear velocity \(v_{\rm shear}=100\) km/s.

### Weakly Magnetized TMLs

From here we center our discussion around the surface brightness \(Q\) of magnetized TMLs, which is roughly equivalent to the hot gas inflow velocity \(v_{\rm in}\). Previous hydrodynamic simulations (Fielding et al., 2020; Tan et al., 2021) have shown that the value of \(Q\) in the quasi-steady state is eventually determined by the balance between radiative cooling and turbulent mixing. In the following we describe how magnetic fields take part in this balance. We start with the very weak field regime by fixing \(\beta_{0}=5000\), where the field is sufficiently weak to respond mostly passively to the gas turbulence, akin to dynamo action in the kinematic regime. We then solely adjust \(\Lambda_{0}\) to investigate the role of cooling. In this section we also lay out the main diagnostics used throughout the rest of this paper.

#### 3.3.1 Time evolution of surface brightness \(Q\)

The top panel in Figure 3 shows the time evolution of \(Q\) in different cooling regimes with \(\beta_{0}=5000\) (solid lines), in comparison with hydrodynamic results (dashed lines). Note that we cut off the red and blue dashed lines at \(t\approx 50\) Myr and \(t\approx 85\) Myr, because afterwards the hot gas is exhausted, leaving a box full of cold gas. At early times (\(t\lesssim 10\) Myr), \(Q\) grows rapidly in all cases, since turbulence fed by the initial KHI mixes up cold and hot gas, generating the intermediate phase with strong emission. However, after reaching the peak values, the two sets of simulations start to differ: instead of maintaining a steady level as in the hydrodynamic cases, \(Q\) in the MHD cases drops by a factor \(\gtrsim 2\). This distinct behavior can be intuitively understood through the bottom panel, where we show the evolution of the average magnetic energy density \(\varepsilon_{B,\rm mix}\) in the mixed regions. To calculate \(\varepsilon_{B,\rm mix}\), we average \(B^{2}/B_{0}^{2}\) within the mixed gas defined by the temperature range \(2\times 10^{4}\)K \(<T<3\times 10^{5}\)K (gas outside this temperature range contributes negligibly to \(Q\)). In the beginning, magnetic fields are quickly intensified by compression (i.e., as hot gas cools into cold gas) and/or turbulent mixing, initiating the rapid growth of \(\varepsilon_{B,\rm mix}\), which closely correlates with the initial increasing stage of \(Q\).
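The corresponding diagnostic is a short masked average; a sketch with illustrative array names:

```python
import numpy as np

def eps_B_mix(B, B0, T, T_lo=2e4, T_hi=3e5):
    """Average magnetic energy density in mixed gas, normalized by the
    initial value: <B^2/B0^2> over cells with T_lo < T < T_hi."""
    mixed = (T > T_lo) & (T < T_hi)     # mixed-gas temperature mask
    Bsq = (B ** 2).sum(axis=0)          # B has shape (3, nx, ny, nz)
    return (Bsq[mixed] / B0 ** 2).mean()
```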
After \(\varepsilon_{B,\rm mix}\) ends its rapid growth, it slowly increases/becomes steady as \(Q\) gradually decreases/maintains a steady value, which suggests that the amplified magnetic fields in turn suppress further mixing and hence the cooling emission, accompanied by reduced \(Q\) values. We also observe exponential growth followed by flattening (saturation) in the evolution of \(\overline{\varepsilon}_{B}\) (the magnetic energy density averaged over the region containing the TML during the first 30 Myr), as indicated in the inset of the bottom panel. This is indicative of a turbulent dynamo (e.g., Brandenburg & Subramanian (2005); Federrath (2016)) arising from the development of the KHI turbulence, though further study is needed to better characterize its properties. From the figure it is clear that more efficient cooling eventually leads to stronger magnetic field amplification. Altogether, we think the key process in magnetized TMLs is to establish a balance between magnetic field amplification, induced by cooling and/or turbulence, and the subsequent magnetic suppression of mixing and hence cooling emission. Therefore, even a weakly magnetized environment can make a major difference to the properties of the mixing layers when cooling is very strong.

#### 3.3.2 Profiles and morphology of mixing layers

Next we analyze the differences between magnetized and hydrodynamic TMLs in detail. In Figure 4 we draw the profiles of the transversely averaged temperature, volumetric fraction of cold gas and pressure at the late stages of each simulation. From the top row, we clearly see that the width of the mixing layers narrows as \(\Lambda_{0}\) increases. This is likely because faster cooling naturally inhibits the existence of intermediate-temperature gas. Besides the layer width, Tan et al. (2021) have pointed out that the average temperature of a transverse slice should follow the simple estimate \(\overline{T}\approx f_{\rm cold}T_{\rm cold}+(1-f_{\rm cold})T_{\rm hot}\) when mixing layers are fractal with little intermediate-temperature gas. This further suggests \(\overline{T}/T_{\rm hot}+f_{\rm cold}\approx 1\) since \(T_{\rm hot}\gg T_{\rm cold}\). Indeed, we see that when \(\Lambda_{0}=10\) the profile of \(\overline{T}/T_{\rm hot}\) closely tracks \(f_{\rm cold}\), while they deviate in the \(\Lambda_{0}=0.1\) case because there is abundant volume-filling intermediate-temperature gas. In the bottom panels, we see that the final level of \(P_{\rm mag}\) amplification increases with increasing cooling rate, in accordance with the findings in Figure 3.

Figure 3: Time evolution of surface brightness \(Q\) (top) and the average magnetic energy density \(\varepsilon_{B,\rm mix}\) in the mixed gas (bottom), where \(\varepsilon_{B,\rm mix}\) is normalized by the (uniform) initial magnetic energy density \(\varepsilon_{B,0}\). The three colors represent the weak cooling (orange), fiducial cooling (blue) and strong cooling (red) regimes, respectively. Solid lines are from MHD simulations with \(\beta_{0}=5000\), in comparison with dashed lines from hydrodynamic simulations. Red and blue dashed lines are cut off at \(t\approx 50\) Myr and 85 Myr as the hot gas gets fully exhausted. The inset in the bottom panel shows the corresponding evolution of the magnetic energy density \(\overline{\varepsilon}_{B}\) averaged in a region that contains the TML during the first 30 Myr. At the early stage \(\overline{\varepsilon}_{B}\) undergoes exponential growth, fitted by the dashed lines.
In spite of the same value of \(\beta_{0}\), \(P_{\rm mag}\) remains negligible during the entire evolution of weakly cooling TMLs, while fast cooling can eventually lead to \(P_{\rm mag}\sim 0.5P_{\rm therm}\). We also point out that the peaks of \(P_{\rm mag}\) are traced by cold gas adjacent to the mixing layers. This suggests the strongest magnetization is usually achieved in the cold phases which have undergone turbulent mixing, rather than in the mixed region where field amplification is currently taking place (also see the central panels in Figure 2). We additionally calculate the turbulent pressure \(P_{\rm turb}\equiv\left\langle\rho\,\delta v_{x}^{2}\right\rangle\), and find \(P_{\rm turb}\) is almost negligible even for such a large \(\beta_{0}\) (compare with the hydro case in Figure 9). Here the total pressure is largely flat (isobaric), though it still fluctuates. On the other hand, we will see that when \(\beta_{0}\lesssim 500\), the mixing layer is almost strictly isobaric.

In Figure 5 we visualize the transverse cross-sections of the TML at the positions where \(f_{\rm cold}=0.5\). At first glance, there is obviously less intermediate-temperature gas as \(\Lambda_{0}\) increases, which is also evident from the cooling emission panels (middle row). As cooling strengthens from left to right, besides the shrinking area of the bright regions, we see that when \(\Lambda_{0}=10\) there is hardly any bright emission even in gas at \(T=10^{5}\) K, where the cooling rate \(\Lambda(T)\) peaks. Since our normalization has already taken into account the prefactor \(\Lambda_{0}\), such a dark pattern implies a reduced contribution solely from the \(n^{2}\) term in \(\varepsilon_{\rm cool}\), which is consistent with the deficit in thermal pressure sustained by the amplified \(P_{\rm mag}\). The bottom row in Figure 5 indeed reveals a rising level of magnetization as cooling becomes more efficient, and stronger cooling makes the boundary separating weakly/strongly magnetized regions more distinct.

#### 3.3.3 Density and temperature distributions within the mixing layers

According to the definition of surface brightness \(Q\) (equation 13), its value mathematically depends on only two aspects:

1. \(n_{T}\), the average density of gas with temperature around \(T\);
2. \(V_{T}\), the volume occupied by gas with temperature around \(T\), or equivalently the probability density function (PDF) of temperature in our simulations.

With the above information at hand, we can directly estimate \[Q\approx\sum_{T}n_{T}^{2}\Lambda(T)V_{T} \tag{16}\] Physically, (1) and (2) correspond to magnetic pressure support and magnetic suppression of turbulent mixing, respectively. Note that in hydrodynamic simulations, (1) is a marginal factor because the deficits of \(P_{\rm therm}\), even if they exist, are very minor. Therefore, the isobaric relation \(n_{T}\propto T^{-1}\) roughly holds. In MHD, on the other hand, both (1) and (2) can significantly deviate from the hydrodynamic results.
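As a concrete illustration, the sketch below evaluates equation (16) from simulation arrays, restoring the \(1/S\) normalization of equation (13); the binning choice (60 logarithmic bins, as in Figure 6) follows the figure, while the function and variable names are ours.

```python
import numpy as np

def estimate_Q(n, T, Lam, dV, S, nbins=60):
    """Binned estimate Q ~ (1/S) sum_T n_T^2 Lambda(T) V_T (equation 16),
    using 60 logarithmic temperature bins as in Figure 6."""
    edges = np.logspace(4, 6, nbins + 1)
    idx = np.digitize(T.ravel(), edges)
    n_flat, Q = n.ravel(), 0.0
    for i in range(1, nbins + 1):
        sel = idx == i
        if not sel.any():
            continue
        n_T = n_flat[sel].mean()                 # aspect (1): mean density in bin
        V_T = sel.sum() * dV                     # aspect (2): volume of this bin
        T_c = np.sqrt(edges[i - 1] * edges[i])   # bin-center temperature
        Q += n_T ** 2 * Lam(T_c) * V_T
    return Q / S
```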
To isolate the magnetic influences on each aspect, we draw the density distributions as a function of temperature, \(\rho_{T}\), and the temperature PDFs in Figure 6. The top panel shows the averaged densities in individual temperature bins, together with the temperature PDFs in the bottom panel. Note that for all cases shown, an \(n_{T}\propto T^{-1}\) relation approximately holds, indicating \(P_{\rm therm}\) is largely constant (in equilibrium). For the three different cooling strengths, the MHD cases only show slight deviations from their hydrodynamic counterparts, by a factor of \(<2\). In fact, \(\rho_{T}\) (hence \(P_{\rm therm}\)) is even higher in the MHD simulations when \(\beta_{0}=5000\), likely because magnetic fields here are too weak to create a substantial \(P_{\rm therm}\) deficit in the mixing layer, but can suppress the overall cooling rate (also compare the middle column in Figure 4 with the left column in Figure 9).

Figure 4: Profiles of transversely averaged physical quantities along the \(x\) direction. All mixing layers are initially located at \(x=0\). **Top**: Temperature and volumetric fraction of cold gas (defined as \(T<5\times 10^{4}\) K). **Bottom**: Thermal, magnetic, turbulent pressure and their sum, normalized by the initial total pressure \(P_{0}^{\star}\) in each case.

Figure 5: Face-on views of mixing layers, taken at the position where the volumetric fraction of cold gas \(f_{\rm cold}=0.5\) in Figure 4. These \(z-y\) slices are from simulations with \(\beta_{0}=5000\), and \(\Lambda_{0}\) increases from left to right. **Top row**: temperature. **Middle row**: cooling emission normalized by the initial cooling energy loss rate \(\varepsilon_{0}\) within the mixing layers (which depends linearly on \(\Lambda_{0}\)). **Bottom row:** plasma \(\beta\), which illustrates that strong \(P_{\rm mag}\) mostly emerges in the cold phase.

In the bottom panel, the temperature PDFs peak at \(10^{4}\)K and \(10^{6}\)K by design, while their distributions at intermediate temperatures reflect the mixing efficiency. We see that for weak cooling (\(\Lambda_{0}=0.1\)), the temperature PDF in the MHD case is only modestly suppressed compared with the hydrodynamic counterpart, consistent with our earlier discussion. In the \(\Lambda_{0}=10\) case, however, it can be reduced by an order of magnitude (i.e., \(V_{T}\sim 0.1\) of that in the hydrodynamic case), suggesting that \(Q\) can be reduced by the significant suppression of turbulent mixing when cooling is strong, which is also in line with the stronger magnetic field amplification (Figure 3).

To summarize the results from weakly magnetized TMLs (\(\beta_{0}\sim 5000\)): their properties are largely degenerate with hydrodynamic TMLs when cooling is inefficient (\(\Lambda_{0}=0.1\)). As cooling intensifies, there is progressively stronger magnetic field amplification. Although \(P_{\rm mag}\) is yet to reach equipartition to offset \(P_{\rm therm}\), it effectively leads to suppression of mixing and reduction in \(Q\).

### Modestly magnetized TMLs

We now turn to cases where the initial magnetic fields are mildly stronger (\(\beta_{0}=500\) and \(50\)). We demonstrate that in such situations, magnetized and hydrodynamic TMLs differ in most aspects, including surface brightness \(Q\), mixing layer morphology, density distribution and temperature PDF.

#### 3.4.1 Reduced surface brightness \(Q\)

We again start from the time evolution of \(Q\) in magnetized TMLs. From Figure 7, we see an oscillatory but persistent decline of \(Q\) after peaking at \(t\sim 10\) Myr in most MHD cases, similar to the discussion of Figure 3, and there are generally larger fluctuations when stronger magnetic fields are involved. With stronger magnetization, \(Q\) is consistently reduced in all cooling regimes, and at late times the \(Q\) values become similar regardless of cooling strength. To illustrate how \(Q\) is affected by cooling strength and magnetization, we calculate the time-averaged \(Q\) in each of our simulation runs and plot its dependence on \(\Lambda_{0}\) (top left panel) and \(\beta_{0}\) (bottom left panel) in Figure 8.
Since the evolution of \(Q\) in MHD fluctuates much more than in the hydrodynamic case, and sometimes shows long-term trends instead of being steady, here we simply fix the time average to start at \(t_{\rm start}=50\) Myr and end when our simulations terminate (\(t_{\rm stop}\geq 200\) Myr), or when the hot gas is completely exhausted. This can be justified given that the typical timescale at global scales (e.g., the cloud-crushing timescale in the CGM) is on the order of a few tens of Myr (Scannapieco and Bruggen, 2015; Li et al., 2020). We quote \(1\sigma\) error bars reflecting the uncertainties from such fluctuations.

Parallel to \(Q\), the growth of the cold phase is directly related to \(v_{\rm in}\), which is expected to be proportional to \(Q\) through Equation (14), although corrections may apply due to magnetic fields and turbulence. We measure \(v_{\rm in}\) by first recording the positions of the TML as the \(x\)-coordinate average of the mixed gas (\(2\times 10^{4}\)K \(<T<3\times 10^{5}\)K), then calculating the propagation speed of the TML and subtracting from it the average \(v_{x}\) at the hot boundary. In the right column of Figure 8, we show the same scaling relation but for \(v_{\rm in}\). To facilitate comparison, we multiply \(v_{\rm in}\) by the constant \((5/2)P_{\rm therm,0}\) (the background thermal pressure, which is largely unchanged over time) so that the equivalence between \(Q\) and \(v_{\rm in}\) is directly assessed. We do observe some difference between \(Q\) and \((5/2)P_{\rm therm,0}v_{\rm in}\), where the latter tends to be systematically smaller. This is mainly due to additional contributions from turbulence and magnetic fields. On the other hand, the difference is within a factor of 2, and generally speaking, the scaling relation measured by \(v_{\rm in}\) remains identical to that measured in \(Q\).

The purple dots in the top panels of Figure 8 represent our hydrodynamic reproductions of Tan et al. (2021). The data well match the two-piece scaling \(Q\propto t_{\rm cool}^{-1/2}\) and \(Q\propto t_{\rm cool}^{-1/4}\) for weak and strong cooling, respectively. Tan et al. (2021) successfully explained these relations by exploiting the parallels between turbulent mixing layers and combustion theory, where the hot gas is the "fuel" that "burns" (i.e. cools radiatively). They borrowed the dimensionless Damköhler number (Damköhler, 1940): \[{\rm Da}\equiv\frac{L_{\rm box}}{u^{\prime}t_{\rm cool}} \tag{17}\] which is the ratio of the turbulent eddy turnover time scale to the cooling time scale. Here \(u^{\prime}\) is the turbulent velocity. If we estimate \(u^{\prime}\sim v_{\rm shear}=100\) km/s, and plug in \(L_{\rm box}=100\) pc and \(t_{\rm cool}\approx 1\) Myr (\(\Lambda_{0}=1\)), then \({\rm Da}\approx 0.98\) for our fiducial cooling case. According to their analogy, the mixing fronts are "single-phase" or "multi-phase" in the \({\rm Da}<1\) and \({\rm Da}>1\) regimes (called "laminar flame" and "turbulent flame" in the combustion literature), respectively, and are thus subject to different scaling relations.
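The Da estimate quoted above is a one-line unit conversion:

```python
# Da = L_box / (u' * t_cool)  (equation 17) for the fiducial case
PC_KM, MYR_S = 3.086e13, 3.156e13     # pc in km; Myr in s
L_box = 100.0 * PC_KM                 # km
u_prime = 100.0                       # km/s, estimated as ~v_shear
t_cool = 1.0 * MYR_S                  # s, for Lambda_0 = 1
print(L_box / (u_prime * t_cool))     # ~0.98, right at the regime transition
```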
Figure 6: **Top:** Mean gas densities averaged over 60 temperature bins, measured from both hydrodynamic (dashed lines) and MHD simulations with \(\beta_{0}=5000\) (solid lines) in different cooling regimes. The grey dotted line indicates the relation \(n_{T}\propto T^{-1}\), which should approximately hold if \(P_{\rm therm}\) is in equilibrium. **Bottom:** PDFs of temperature measured from the same set of simulations. The two panels share the same set of temperature bins. In making each line, we average over snapshots from \(t=70\) Myr to the end of our simulation, or the moment right before the hot gas is completely consumed.

Figure 8: **Top row:** Average surface brightness (left column) and inflow velocity of hot gas (right column) as functions of cooling strength \(\Lambda_{0}\). Numerically, tuning the cooling strength is equivalent to changing the pressure. **Bottom row:** The same data points, but drawn as functions of \(\beta_{0}\). For each data point, the time average starts at 70 Myr, and ends when the simulation terminates (\(\geq 200\) Myr) or the hot gas is about to be consumed. We also quote \(1\sigma\) error bars reflecting fluctuation levels. The two dashed lines in the top row indicate the two scalings \(\propto\Lambda_{0}^{1/2}\) and \(\propto\Lambda_{0}^{1/4}\), respectively.

Figure 7: Time evolution of surface brightness \(Q\) similar to Fig. 3, but with \(\beta_{0}\) ranging from 50 to 5000, also including higher resolution results. Titles give the initial plasma beta. Solid lines are from fiducial runs (\(\Delta x=L_{\rm box}/128\)) while dotted lines represent doubled resolution (\(\Delta x=L_{\rm box}/256\)). Some lines end halfway as the hot gas gets completely exhausted in those cases, and we only run 150 Myr for the doubled-resolution runs.

Note that Fielding et al. (2020) also explained \(Q\propto t_{\rm cool}^{-1/4}\) from the perspective of fractal dimensions, which we will discuss in Section 5.1. However, while the properties of hydrodynamic TMLs can be well described by theory, the presence of even weak magnetic fields immediately complicates the situation. We see relatively large fluctuations in \(Q\) in most cases with \(\beta_{0}\leq 500\), with no clear sign of any specific functional relation between \(Q\) and \(\Lambda_{0}\). The trend of an increasing \(Q\) with \(\Lambda_{0}\) is significantly attenuated under higher levels of magnetization, as also discussed earlier, and the specific role of Da = 1 in setting the regime transition is no longer applicable. In fact, reading from the bottom panels of Figure 8, it appears that the value of \(Q\) (and \(v_{\rm in}\)) converges towards stronger magnetization within the uncertainties, as long as \(\Lambda_{0}\geq 0.5\). The weak cooling branch (\(\Lambda_{0}=0.1\)), however, seems somewhat isolated from the other cases. This is another fact suggesting that some coupling between cooling and magnetic fields could be at play in magnetized TMLs. From the figure we can see \(Q\) can be suppressed by an order of magnitude when \(\beta_{0}\lesssim 500\). Therefore, the rate of local energy exchange between cold and hot phases is significantly restricted even in relatively weakly magnetized environments.

#### 3.4.2 Isobaric profile and elongated morphology of mixing layers

To examine how the initial field strength affects the states of magnetized TMLs, we first show in Figure 9 the transversely averaged profiles of temperature and pressure. We see that when \(\beta_{0}\leq 500\), \(P_{\rm mag}\) is more easily amplified to a value close to or exceeding \(P_{\rm therm}\), and magnetic tension then stabilizes the initial KHI and inhibits gas mixing. Therefore, the temperature gradients become steeper as \(\beta_{0}\) decreases. Probably due to the suppressed turbulence, the profiles of total pressure are very strictly flat, compared with the hydrodynamic results (left column) or the \(\beta_{0}=5000\) cases in Figure 4.
While it is debatable whether cooling is isobaric in the pure hydrodynamic case, we see that when \(\beta_{0}\lesssim 500\), turbulent pressure should be negligible and hot gas is unlikely to be siphoned by total pressure cavities. In addition, the strong \(P_{\rm mag}\) observed here could potentially explain the \(P_{\rm therm}\) imbalance identified in the CGM observed by the Hubble Space Telescope (Werk et al., 2014).

We also notice that a different morphology of mixing layers gradually emerges when \(\beta_{0}\lesssim 500\). In fact, from Figure 1, we have already seen the flows become quite laminar in the bottom two rows, presumably because of the constraint of stronger magnetic tension. Besides the laminar flows seen edge on, we report that the transverse slices of mixing layers in Figure 10 also demonstrate a much less turbulent pattern. From the top panels in Figure 10, we observe the formation of elongated stripes along the \(y\) direction (i.e., the background magnetic field direction) in the MHD runs, where the hot/cold gas are interspersed with each other. From the bottom panels, we see that strong magnetization (\(\beta\lesssim 1\)) is ubiquitously realized in the cold phases. Partly due to the enhanced magnetic pressure, the emissivity in these stripes becomes weaker as magnetization increases. We note that Ji et al. (2019) have run MHD plane-parallel simulations similar to ours, and they also found that shear-amplified magnetic fields stabilize the turbulence at the interfaces through magnetic tension, generating almost laminar flows (see their Figure 11).

#### 3.4.3 Density distributions and temperature PDFs

Following the discussion about equation 16 and the spirit of Section 3.3.3, we then isolate what exactly causes the decrease of surface brightness in each case: the density deficit sustained by \(P_{\rm mag}\), or the suppression of mixing that prevents the generation of intermediate-temperature gas? We again draw the density distributions as a function of temperature and the temperature PDFs in Figure 11 to separate the two parts. Recall that in the weak field limit (see Figure 6, where \(\beta_{0}=5000\)), \(Q\) is mostly depressed by the reduction of intermediate-temperature gas, rather than by density cavities sustained by \(P_{\rm mag}\). But as we investigate systems with larger initial \(B_{0}\), we see a different pattern in Figure 11. In the weak cooling regime (\(\Lambda_{0}=0.1\)), the situation is quite similar to Figure 6, where the density distribution barely deviates from the \(n_{T}\propto T^{-1}\) relation. As \(B_{0}\) increases, there is a higher level of suppression of the temperature PDF, and hence a stronger depression of \(Q\). However, in the strong cooling regime (\(\Lambda_{0}=10\)), although the MHD temperature PDFs are also substantially suppressed compared to the hydrodynamic result, they remarkably overlap with each other regardless of \(\beta_{0}\). On the other hand, the density contrasts among cases with different \(\beta_{0}\) are magnified. As \(B_{0}\) increases, regions with strong magnetization not only reside mainly in the cold phase, but also permeate towards the hot phase, sustaining significant density deficits in a broader temperature range. Therefore, when cooling is fast, the major effect of stronger magnetic fields is to increase \(P_{\rm mag}\) within the intermediate phase, rather than to further suppress turbulent mixing. These results suggest that there is a limit to which turbulent mixing can be suppressed (achieved with the fastest cooling), beyond which further reduction in \(Q\) occurs through magnetic field amplification.
Figure 9: Profiles of transversely averaged physical quantities along the \(x\) direction, similar to Fig. 4, but here we fix \(\Lambda_{0}\) while adjusting the magnetization. All mixing layers are initially located at \(x=0\). **Top**: Temperature and volumetric fraction of cold gas (defined as \(T<5\times 10^{4}\) K). **Bottom**: Thermal, magnetic, turbulent pressure and their sum, separately normalized by the initial total pressure \(P_{0}^{\star}\). When \(\beta_{0}\lesssim 500\), the total pressure is in good equilibrium while the turbulent pressure is negligible.

### Convergence

To assess the robustness of our simulation results, we further conduct convergence tests for our 3D MHD simulations by doubling the resolution (\(\Delta x=L_{\rm box}/256\)). We show the results for the evolution of \(Q\) in Figure 7 as dotted lines. Generally speaking, in terms of surface brightness \(Q\), our simulations reach convergence: the evolution of \(Q\) in the fiducial and high-resolution runs typically closely follow each other. We note that similar convergence behavior was found in the earlier hydrodynamic and MHD simulations of Ji et al. (2019) and the hydrodynamic simulations of Tan et al. (2021). Besides the overall dynamics, we find the density distributions and temperature PDFs are also well converged, and we show the relevant diagnostics in Appendix A to avoid distraction. Additionally, it is important to assess whether larger-scale results can faithfully reflect the local energy exchange between cold and hot gas, by running simulations with coarser resolutions. We again leave the results to Appendix A, but report here that at the later stage, \(Q\) appears largely converged when \(\Delta x=L_{\rm box}/32\) (a factor of 4 coarser than our fiducial resolution). Overall, this supports the reliability of simulation results at more global (e.g., cloud) scales. We also caution that magnetization is usually stronger in cloud simulations (usually \(\beta\lesssim 1\)), while our plane-parallel setup may suffer from unrealistic boundary conditions when adopting such strong magnetic fields.

## 4 Other freedom in parameter space

As an initial study, our parameter space simply covers two dimensions: initial magnetization \(\beta_{0}\) and cooling strength \(\Lambda_{0}\). However, many other physical properties may influence the state of radiative magnetized TMLs, such as magnetic field geometry and conductivity. In this section we show results from our tests adopting different initial field orientations and conductivity prescriptions. We leave detailed investigation and exploration of other effects to future works.

### Magnetic field geometry

Previous studies of the adiabatic MHD KHI have suggested the importance of the geometry and amplitude of the initial magnetic field \(\mathbf{B_{0}}\), both theoretically (Chandrasekhar, 1961; Miura and Pritchett, 1982) and numerically (Jones et al., 1997; Ryu et al., 2000; Ji et al., 2019). We thus investigate how different \(\mathbf{B_{0}}\) orientations affect \(Q\). Figure 12 displays the time evolution of \(Q\) in simulations with \(\mathbf{B_{0}}\) along three separate axes. In these runs, we fix \(\Lambda_{0}=1\) and examine two different initial field strengths, \(\beta_{0}=5000\) and \(500\). Our original results discussed in previous sections are shown as blue lines for benchmark, and below we discuss the results from the two remaining field orientations.
1. \(\mathbf{B_{0}}=B_{0}\mathbf{\hat{z}}\): The orange lines in Figure 12 stand for results from the simulations with \(\mathbf{B_{0}}\) along the \(\hat{z}\) direction, i.e., perpendicular to both the shear flow direction and the interface normal. The comparable values of the orange lines and the benchmark blue lines indicate similar influences on stabilizing the mixing layers, except that \(Q\) appears to have a weaker dependence on \(\beta_{0}\) in the \(\mathbf{B_{0}}\)-along-\(\hat{z}\) cases. On the other hand, this initial field direction is not expected to interfere with the development of the adiabatic KHI, and without cooling, the flow in such simulations is indeed essentially hydrodynamic (Ji et al., 2019). Once cooling is on, Ji et al. (2019) found that mixing is suppressed to the same level as in cases with \(\mathbf{B_{0}}\) parallel to the shear flows, in agreement with our results. This fact again stresses the importance of radiative cooling in the problem. We also checked the density distributions and temperature PDFs in this set of simulations, and found no substantial difference from Figure 11, further reinforcing the similarities between the \(\mathbf{B_{0}}=B_{0}\mathbf{\hat{y}}\) and \(\mathbf{B_{0}}=B_{0}\mathbf{\hat{z}}\) cases.

2. \(\mathbf{B_{0}}=B_{0}\mathbf{\hat{x}}\): The green lines in Figure 12 denote results from the simulations with \(\mathbf{B_{0}}\) along \(\hat{x}\), i.e., normal to the cold/hot interface. Compared with the other two field orientations, an obvious feature is the much stronger suppression of \(Q\). Since the initial field lines are both perpendicular to the shear flows and normal to the interface, they get continuously twisted by the shear flows into the \(\hat{y}\) direction (along the shear) from the very beginning, giving rise to quick field amplification and the suppression of the initial KHI. Consequently, there is no background equilibrium state, and there is only very limited mixing (Ji et al., 2019). The behavior is even more peculiar when \(\beta_{0}=500\), in which case we see a steep decrease of \(Q\) around \(t\sim 115\) Myr. We found the reason to be that the magnetic field gets shear-amplified so fast that a large \(P_{\rm mag}\) quickly builds up in the hot phase. The large pressure in the hot phase then pushes the cold gas away, leaving a box full of hot gas with little cooling radiation. This scenario probably implies a possibility that at global scales, the cold gas can be squeezed by the rapid shear amplification of the field in the hot phase. However, it is perhaps more likely that this result is an artifact of our local simulation setup, and this scenario should be better studied in a global simulation setting.

Figure 10: Transverse slices of mixing layers, taken at the position where the volumetric fraction of cold gas \(f_{\rm cold}=0.5\) in Figure 9. Slices are taken from simulations with \(\Lambda_{0}=1\), with \(\beta_{0}=\infty\) (hydro), \(500\) and \(50\) from left to right, respectively. **Top row:** temperature. **Middle row:** cooling emission normalized by the initial cooling energy loss rate \(\varepsilon_{0}\) within the mixing layers (which depends linearly on \(\Lambda_{0}\)). **Bottom row:** plasma \(\beta\), which illustrates that magnetization is strongest in the cold phases.

Our choices of field geometry are far from exhausting all possibilities. In particular, tangled initial magnetic field configurations have been considered in studying the global cloud-crushing problem
(e.g. McCourt et al. 2015; Banda-Barragan et al. 2018), which lead to different consequences for the cloud clumping factor, filament morphology and other related observable quantities. Given the localized nature of TMLs, our study may still be considered a part of the global problem, and offers a useful benchmark on the role of magnetic fields at local scales. In the meantime, we acknowledge that the role of more complex global field geometry on turbulent mixing deserves further study.

### Conductivity

Thermal conduction can hinder the onset of hydrodynamic instabilities in the cloud-crushing problem, and has been known to change cloud morphology (Bruggen and Scannapieco, 2016), accelerate the evaporation of small clouds (Cowie and McKee, 1977), but substantially prolong the lifetime of large clouds (Armillotta et al., 2016; Li et al., 2020). However, the effect of thermal conduction around the local mixing layers is as yet uncertain, especially in the presence of magnetic fields.

Figure 11: **Top:** Mean density distributions as a function of temperature. **Bottom:** Temperature PDFs. In making each line, we average over all snapshots from \(t\geq 70\) Myr to the end of the simulation (same as in Figure 8). From the left to right columns, \(\Lambda_{0}\) increases from weak cooling to strong cooling. Different colors represent different initial magnetizations. Grey dotted lines show the \(n_{T}\propto T^{-1}\) relation, which should hold when \(P_{\rm therm}\) is in equilibrium.

Previous hydrodynamic TML works with (isotropic) constant conductivity (e.g. Tan et al. (2021)) have shown that \(Q\) is insensitive to conduction, as long as turbulent diffusion dominates heat transport. On the other hand, Tan & Oh (2021) found that a temperature-dependent conductivity such as the Spitzer conductivity (Spitzer, 1962) can cause substantial differences in the temperature distribution and elemental column densities within the mixing layers. Here, we briefly assess the consequences of different conductivity prescriptions, which helps constrain their role in global-scale models (e.g., the cloud growth criterion (Gronke & Oh, 2020) and galactic winds (Fielding & Bryan, 2022; Tan & Fielding, 2023)).

All simulations presented above adopt a constant anisotropic conductivity (equation 7). In the following, we compare them with MHD simulations that employ \(\kappa_{\rm Spitzer}\) (equation 6), which is better physically motivated. Considering that electron transport may be hindered by micro-scale instabilities that are not well understood, the canonical Spitzer value is likely suppressed by a certain factor, often taken to be an order of magnitude (e.g., Roberg-Clark et al., 2016; Komarov et al., 2018; Drake et al., 2021; Meinecke et al., 2022). Therefore, we also test cases with the conductivity set to \(0.1\kappa_{\rm Spitzer}\).

We show in Figure 13 the evolution of \(Q\) in cases with different initial magnetization and conductivity prescriptions, while fixing \(\Lambda_{0}=1\). We see that the curves for different conductivity prescriptions closely follow each other, regardless of the initial field strength. This result suggests that different conductivity prescriptions do not strongly affect the energy exchange rate between cold and hot gas. The reason is likely the dominance of the \(y\)-component of the magnetic field (see Figure 2), which is perpendicular to the overall temperature gradient.
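For reference, a quick numerical comparison of the three conductivity prescriptions tested here (the loop and printout are purely illustrative):

```python
KAPPA_CONST = 1.0e6   # erg cm^-1 s^-1 K^-1 (equation 7)

def kappa_spitzer(T):
    """Canonical Spitzer conductivity (equation 6)."""
    return 5.7e-7 * T ** 2.5

for T in (1e4, 0.8e5, 1e6):
    print(f"T = {T:9.0f} K: Spitzer = {kappa_spitzer(T):.2e}, "
          f"0.1*Spitzer = {0.1 * kappa_spitzer(T):.2e}, const = {KAPPA_CONST:.0e}")
# At T = 0.8e5 K the Spitzer value ~1e6 matches the constant conductivity;
# in the hot phase (1e6 K) it exceeds it by nearly three orders of magnitude.
```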
In fact, we indeed observe (though not shown) that, using the constant conductivity, the mean heat flux across the TML in the MHD simulations is more than an order of magnitude smaller than that in the hydrodynamic simulations. Even using the Spitzer conductivity (which enhances heat transport), the resulting mean heat flux in the MHD simulations is no more than that in hydrodynamic simulations adopting the constant conductivity. Recently, Bruggen et al. (2023) also observed that magnetic fields strongly limit anisotropic thermal conduction in their cloud simulations, which is consistent with our picture here.

When adopting \(\kappa_{\rm Spitzer}\) in hydrodynamic TMLs, Tan & Oh (2021) find that the temperature PDF is clearly shifted to higher temperatures and the cold phases are heavily suppressed, compared with simulations employing a constant isotropic thermal conductivity. This is because \(\kappa_{\rm Spitzer}\) is strongly temperature-dependent and substantially raises the conductivity in the hot phase. In magnetized TMLs, we expect the temperature PDFs to be less sensitive to the anisotropic conductivity, since magnetic fields largely perpendicular to the overall temperature gradient should inhibit thermal conduction. Indeed, in Figure 14, we see that different conductivities only make a minor difference in the hot phases, which contribute modestly to the surface brightness \(Q\). The density distributions are barely affected, indicating \(P_{\rm mag}\) is not sensitive to conductivity either. Nevertheless, given the natural coupling of anisotropic conduction and magnetic field orientations, the role of conductivity should be further investigated with other field geometries.

Figure 12: Time evolution of surface brightness \(Q\) in simulations with different initial field orientations. Dashed lines are in the weak field limit (\(\beta_{0}=5000\)), while solid lines indicate modestly weak fields (\(\beta_{0}=500\)). We choose \(\Lambda_{0}=1\) for all the cases shown here. Results are similar when \(\mathbf{B}_{0}\) is parallel to the initial interface (blue and orange lines). However, once \(\mathbf{B}_{0}\) is normal to the mixing front (along \(\hat{x}\)), it strongly suppresses \(Q\) (green dashed line), or quickly shear-amplifies the field to push cold gas out of the domain (green solid line).

Figure 13: Time evolution of surface brightness \(Q\) in simulations with different conductivity prescriptions, with \(\Lambda_{0}=1\) for all the cases shown here. The three colors represent different initial levels of magnetization. Different types of conductivity are indicated by line style. Generally speaking, the evolution of \(Q\) is not sensitive to conductivity.

## 5 Discussion

In this section, we compare our work with previous results, and discuss potential implications for global-scale simulations.

### Compare with previous TML works: scaling relations of \(Q\)

The main results of earlier works on hydrodynamic TMLs (e.g., Fielding et al. (2020); Tan et al. (2021)) can be roughly summarized by the two-piece scaling relation \(Q\propto t_{\rm cool}^{-1/2}\) and \(Q\propto t_{\rm cool}^{-1/4}\), with the criterion \(\mathrm{Da}=1\) governing the transition (equation 17). While the former can be plainly elucidated by a laminar model, the latter requires more insight to understand, since the emerging fractal structure of the intermediate-temperature surface complicates the physics.
Borrowing wisdom from combustion theory, Tan et al. (2021) explained the \(Q\propto t_{\rm cool}^{-1/4}\) scaling, whose success suggests similarities in the essential physics behind the two problems. On the other hand, Fielding et al. (2020) also derived the same scaling from a fractal point of view. They measured the surface of the turbulent mixing front to have a fractal dimension \(D=5/2\), and directly estimated \(Q\) by definition (equation 13) at the scale where cooling is most efficient.

In spite of the good match between these theories and hydrodynamic TML simulations, we see in Figure 8 that they are no longer accurate even in relatively weakly magnetized environments (\(\beta_{0}\lesssim 1000\)). As has been discussed, magnetic fields can easily be amplified to reduce/suppress mixing, and can play a major role in the pressure balance. Therefore, they fundamentally alter the balance between radiative cooling and turbulent mixing, and break the analogy with combustion theory. On the other hand, the scaling predicted by the fractal theory is sensitive to the fractal dimension \(D\), which is obtained by fitting the effective surface area \(A_{\lambda}\) with the fastest cooling rate as a function of scale \(\lambda\); \(A_{\lambda}\propto\lambda^{-1/2}\) leads to \(D=5/2\). Since the morphology of mixing layers is substantially different in MHD (Figures 1 and 10), we anticipate that the fractal nature changes as well. In fact, we have measured \(D\) in our simulations following Fielding et al. (2020) and can approximately reproduce their result in pure hydrodynamic simulations without thermal conduction. However, we also find that the measurement could depend on the simulation setup, especially on the presence of thermal conduction, as well as on the algorithm that extracts \(A_{\lambda}\) (Cintosun et al., 2007), which may not lead to a well-defined \(D\) from fitting the \(A_{\lambda}-\lambda\) relation at our resolution. Nevertheless, conducting such measurements on our MHD simulations, we observe a clear flattening trend in the \(A_{\lambda}-\lambda\) relation, indicative of a smaller \(D\) (more laminar) towards higher magnetization. This is in line with our main findings.

### Links to global scale problems

One main purpose of studying magnetized TMLs is to find their applications at global scales, and our finding of a heavily reduced \(Q\) in mixing layers should imply inefficient energy transfer between cold/hot gas in magnetized environments. This is indeed observed by Gronnow et al. (2018) in their cloud-crushing simulations, where magnetic fields appreciably prevent the mass growth of cold clouds at early times. However, Gronke and Oh (2020) reported that at later times, neither the cloud growth criterion nor the final mass growth is sensitive to magnetic fields, which appears inconsistent with our results.

Figure 14: The mean density distributions as a function of temperature **(top)** and the temperature PDFs **(bottom)**, similar to Fig. 11. From the left to right columns, magnetization increases from \(\beta_{0}=5000\) to \(\beta_{0}=50\). Cooling is fiducial (\(\Lambda_{0}=1\)) for all simulations here. The three colors represent different prescriptions of the conductivity coefficients. Regarding the temperature PDFs, there are only small differences in the hot phases, which contribute little to the surface brightness \(Q\).

They raised two possibilities: (i) global simulations may not faithfully capture small-scale interactions across TMLs;
(ii) magnetic fields considerably change cloud morphology and increase the contact area between cold and hot gas, neutralizing the suppressed local interactions. Given the convergence tests we have performed, and the fact that magnetic fields reshape the contact surface both locally (Figure 10) and globally (Banda-Barragan et al., 2016; Cottle et al., 2020; Gronke and Oh, 2020), we expect the second explanation to be more likely to hold.

Very recently, we noticed that Das and Gronke (2023) simultaneously studied both magnetized TMLs and their connections to the growth of cold clouds. While they also found suppressed \(Q\) in magnetized TMLs, they confirmed a lack of difference in cloud growth rates between MHD and hydrodynamic simulations. They attributed this paradox to mixing being mostly governed by the intensity of turbulence: in TML simulations magnetic fields suppress turbulence by suppressing the KHI, which can be overtaken by turbulence driven at large scales. Besides the uncertainties in the nature of large-scale turbulence, we also note that in simulations with external turbulence, the relative velocity between the clouds and the background gas vanishes, representing a different regime from typical TML simulations. Li et al. (2020) also ran MHD cloud-crushing simulations, reporting that the role of magnetic fields is inconsequential. However, they acknowledged that the geometry of magnetic fields could make a difference, especially considering its coupling with thermal conduction.

Global-scale MHD simulations typically adopt magnetic fields much stronger than our initial conditions (McCourt et al., 2015; Gronnow et al., 2018; Cottle et al., 2020), so that magnetic fields can be dynamically important. However, there is as yet no consensus on the realistic \(\beta\) in the warm and hot CGM, while some estimates suggest \(\beta\sim 10^{2}-10^{9}\) (Su et al., 2017; Martin-Alvarez et al., 2018; Hopkins et al., 2020; Li et al., 2020). Our results therefore provide a supplement in the weak-field regime, and point out that even quite weak magnetization can change the local dynamics and phase distribution, especially when radiative cooling is very efficient. For strongly magnetized TMLs, our plane-parallel model in a limited simulation domain would be less appropriate, because strong magnetic fields can resist rolling by the KHI, and thus the overall dynamics is subject to the largely unknown global field configuration. In this situation, it has recently been suggested that the system can exhibit rich magnetothermal phase structures filled with plasmoids (Fielding et al., 2022), a scenario that deserves further investigation in a global setting.

In addition, our results could provide corrections to 1D effective models representing the contribution from TMLs. For example, Tan and Oh (2021) recently constructed such a hydrodynamic 1D model which enables sub-grid absorption and emission line predictions, and their predicted line ratios agree well with observations. However, in their model, assuming a Spitzer-value conductivity would lead to significant deviations from the observations. This issue may potentially be resolved by taking magnetic effects into account. As we discussed regarding Figure 14, even weak magnetic fields substantially inhibit the heat fluxes, and the temperature PDF only modestly shifts towards the hot phase when we use different conductivity prescriptions.
The reduced \(Q\) in magnetized TMLs also suggests that column densities should be lower than the predictions from 1D hydrodynamic models.

## 6 Summary

We have performed 3D MHD plane-parallel simulations to examine the development of weakly magnetized TMLs, in a setup designed to mimic phase boundaries in a multiphase system (such as the CGM), aiming to compare and contrast with existing studies of hydrodynamic TMLs. Imposing an initially uniform magnetic field along the shear flow direction (\(\mathbf{B_{0}}=B_{0}\mathbf{\hat{y}}\)), we have found that even a rather weakly magnetized environment (\(\beta_{0}\sim 1000\)) can lead to substantial differences from the hydrodynamic cases. We list our main conclusions below:

1. **Surface brightness \(Q\)** The surface brightness \(Q\), and hence the inflow velocity \(v_{\rm in}\) for the growth of cold gas, can be substantially suppressed in magnetized TMLs even with a weak initial field (e.g., by a factor of \(\sim 10\) for \(\beta_{0}=500\)). There is a lack of specific scaling relations for \(Q\) as a function of cooling strength \(\Lambda\), in contrast to the hydrodynamic case, and the time evolution of \(Q\) shows stronger fluctuations. Within the range of fluctuations of \(Q\), our results reach good convergence. The final level of \(Q\) should be determined by the coupling of radiative cooling, turbulent mixing and magnetic field amplification.
2. **Morphology of magnetized TML** We observe that weak initial fields (\(\beta_{0}\geq 50\)) can be substantially amplified in the TML, resulting in a highly magnetized (\(\beta\sim 1\)) cold phase in the vicinity of the mixing layer. The intensified magnetic fields make the mixing layer less "fractal", and eventually more laminar (for stronger initial fields and cooling), compared with hydrodynamic TMLs. The magnetic pressure \(P_{\rm mag}\) makes a major contribution in the TML, so that the total pressure \(P^{*}\equiv P_{\rm mag}+P_{\rm therm}\) is in good equilibrium across the magnetized TMLs.
3. **Two distinct magnetic influences** In magnetized TMLs, magnetic fields mainly reduce \(Q\) in two ways: by reducing gas pressure (and hence density and emissivity) through \(P_{\rm mag}\), and by directly suppressing turbulent mixing. With a very weak initial field (\(\beta_{0}\gtrsim 5000\)) and/or weak cooling (\(\Lambda_{0}\sim 0.1\)), \(Q\) is mostly reduced by the suppression of turbulent mixing. With stronger magnetic fields and cooling, this suppression tends to saturate, and further reduction in \(Q\) results from the TML being increasingly strongly magnetized, with \(P_{\rm mag}\) dominating over \(P_{\rm therm}\). This could potentially explain the density imbalance between phases in the CGM, as inferred from observations (Werk et al., 2014).
4. **Initial field geometry** We run simulations with initial magnetic fields \(\mathbf{B}_{0}\) separately along the three coordinate axes. When \(\mathbf{B}_{0}\) is parallel to the cold/hot interface, the properties of magnetized TMLs are generally similar, in agreement with Ji et al. (2019). Once \(\mathbf{B}_{0}\) is normal to the interface, \(Q\) is heavily suppressed due to continuous shear amplification of the initial field without establishing a pressure equilibrium.
5. **Conductivity** We compare three sets of simulations with, respectively, constant conductivity \(\kappa_{\parallel}\) (equation 7), Spitzer conductivity (equation 6) and reduced Spitzer conductivity (\(0.1\kappa_{\rm Spitzer}\)).
Given the anisotropy in thermal conduction and our choice of field geometry, different choices of conductivity hardly change the resulting \(Q\), and only make minor differences in the temperature PDFs.

Our simulations serve as an initial investigation incorporating one important physical ingredient, namely magnetic fields, into the dynamics of the multiphase ISM/CGM. While demonstrating its importance, the results are subject to a number of simplifications and caveats. The choice of our initial field geometry with uniform fields is somewhat artificial, and entangled magnetic field configurations deserve further investigation. Being local simulations with a limited domain size, our study can be subject to artificial truncation of large-scale magnetic fields. This hinders us from exploring more strongly magnetized environments where the global field geometry becomes more crucial. The fact that we find \(Q\) to be insensitive to resolution encourages migration towards cloud-scale simulations (e.g., Gronke & Oh, 2020; Fielding & Bryan, 2022; Tan & Fielding, 2023). Also, in this work, we only examined one typical choice of density contrast \(\chi\equiv\rho_{\rm cold}/\rho_{\rm hot}=100\) for cooling curves in typical CGM conditions. It remains to explore the parameter space corresponding to other environments, such as \(\chi\sim 4000\) in an ICM-like environment (Qiu et al., 2020), where hot gas entrainment into the TMLs is expected to be enhanced by the larger \(\chi\) (Fielding et al., 2020). There are also additional missing physical ingredients. We did not consider the role of viscosity, though our preliminary investigation suggests it is unlikely to be important, as also found in Li et al. (2020) in the context of the cloud crushing problem. A more important factor concerns the role of dynamically-important cosmic rays (CRs), which can substantially alter the phase structure and energetics of the multiphase ISM/CGM (e.g. Ji et al., 2020). Given that CRs and magnetic fields are inherently coupled, incorporating CRs represents another natural extension of our work towards better understanding the physics of the multiphase ISM/CGM.

## Acknowledgements

We thank Suoqing Ji, Drummond Fielding, Eve Ostriker, Yu Qiu and Haitao Xu for helpful discussions and advice. XZ is also grateful to his friends for their unwavering encouragement. This research was supported by NSFC grants 11873033 and 12042505. Numerical simulations were conducted on TianHe-1 (A) at the National Supercomputer Center in Tianjin, China, and on the Orion cluster at the Department of Astronomy, Tsinghua University.

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2308.13816
Homological Convolutional Neural Networks
Deep learning methods have demonstrated outstanding performances on classification and regression tasks on homogeneous data types (e.g., image, audio, and text data). However, tabular data still pose a challenge, with classic machine learning approaches often being computationally cheaper than, and equally effective as, increasingly complex deep learning architectures. The challenge arises from the fact that, in tabular data, the correlation among features is weaker than the one deriving from spatial or semantic relationships in images or natural language, and the dependency structures need to be modeled without any prior information. In this work, we propose a novel deep learning architecture that exploits the data structural organization through topologically constrained network representations to gain relational information from sparse tabular inputs. The resulting model leverages the power of convolution and is centered on a limited number of concepts from network topology to guarantee: (i) a data-centric and deterministic building pipeline; (ii) a high level of interpretability over the inference process; and (iii) adequate room for scalability. We test our model on 18 benchmark datasets against 5 classic machine learning and 3 deep learning models, demonstrating that our approach reaches state-of-the-art performances on these challenging datasets. The code to reproduce all our experiments is provided at https://github.com/FinancialComputingUCL/HomologicalCNN.
Antonio Briola, Yuanrong Wang, Silvia Bartolucci, Tomaso Aste
2023-08-26T08:48:51Z
http://arxiv.org/abs/2308.13816v2
# Homological Convolutional Neural Networks

###### Abstract

Deep learning methods have demonstrated outstanding performances on classification and regression tasks on homogeneous data types (e.g., image, audio, and text data). However, tabular data still pose a challenge, with classic machine learning approaches often being computationally cheaper than, and equally effective as, increasingly complex deep learning architectures. The challenge arises from the fact that, in tabular data, the correlation among features is weaker than the one deriving from spatial or semantic relationships in images or natural language, and the dependency structures need to be modeled without any prior information. In this work, we propose a novel deep-learning architecture that exploits the data structural organization through topologically constrained network representations to gain spatial information from sparse tabular data. The resulting model leverages the power of convolutions and is centered on a limited number of concepts from network topology to guarantee (i) a data-centric, deterministic building pipeline; (ii) a high level of interpretability over the inference process; and (iii) adequate room for scalability. We test our model on \(18\) benchmark datasets against \(5\) classic machine learning and \(3\) deep learning models, demonstrating that our approach reaches state-of-the-art performances on these challenging datasets. The code to reproduce all our experiments is provided at [https://github.com/FinancialComputingUCL/HomologicalCNN](https://github.com/FinancialComputingUCL/HomologicalCNN).

## 1 Introduction

We are experiencing tremendous and inexorable progress in the field of deep learning. Such progress has been catalyzed by the availability of increasing computational resources and ever larger datasets. The areas of success of deep learning are heterogeneous. However, the three application domains where superior performances have been achieved are the ones involving the usage of image [1; 2], audio [3; 4] and text [5; 6; 7] data. Despite their inherent diversity, these data types share a fundamental characteristic: they exhibit homogeneity, with notable inter-feature correlations and evident spatial or semantic relationships. On the contrary, tabular data represent the "unconquered castle" of deep neural network models [8]. Tabular data are heterogeneous and present a mixture of continuous, categorical, and ordinal values which can be either independent or correlated. They are characterized by the absence of any inherent positional information, and tabular models have to handle features from multiple discrete and continuous distributions. Tabular data are the most common data format and are ubiquitous in many crucial applications, such as medicine [9, 10, 11, 12], finance [13, 14, 15, 16], recommendation systems [17, 18, 19, 20], cybersecurity [21, 22], anomaly detection [23, 24, 25, 26] and so forth. During the last decade, traditional machine learning methods dominated tabular data modeling, and nowadays tree ensemble algorithms (e.g. XGBoost, LightGBM, CatBoost) are considered the recommended option to solve real-life problems of this kind [27, 28, 29]. In this paper, we introduce a novel deep learning architecture for tabular numerical data classification, and we name it "Homological Convolutional Neural Network" (HCNN).
The building process is entirely centered on the structural organization of input data obtained through network representations that allow gaining spatial information from tabular data. A network (or graph) represents components of a system as nodes (or vertices) and interactions among them as links (or edges). The number of nodes defines the size of the network, and the number of links determines the network's sparsity (or, conversely, density). Reversible interactions between components are represented through undirected links, while non-reversible interactions are represented as directed links [30]. In this research work, we exploit a class of information filtering networks [31], namely the Triangulated Maximally Filtered Graph [32], to model the inner sparsity of tabular data and obtain a geometrical organization of input features. The choice of the network representation is not binding, even if limited to the family of so-called simplicial complexes [33]. Simplicial complexes are generalized network structures that allow capturing many-body interactions between the constituents of complex systems [34]. They are formed by sets of simplices such as nodes, links, triangles, tetrahedra, and so on, glued to each other along their faces, forming higher-order graphs [33]. These graphs connect not only vertices (\(0\)-dimensional simplices) with edges (\(1\)-dimensional simplices) but also higher-order simplices (e.g. triangles, \(2\)-dimensional simplices, and tetrahedra, \(3\)-dimensional simplices). The study of networks in terms of the relationship between structures at different dimensionalities is a form of "homology", and HCNNs take into account higher-order interactions in the data dependency structure as homological priors. During the neural network's building process, given a proper network representation of input data, we isolate all the simplicial structures with dimension \(\geq 1\) and we process them at two granularity levels: (i) across each single representative of each simplicial structure (i.e. convolution over each edge, triangle, tetrahedron); and (ii) across all representatives of each simplicial structure (i.e. convolution over all the transformed edges, all the transformed triangles, and all the transformed tetrahedra). In doing so, we capture both the simplicial and the homological structure of input data, also searching for non-trivial structural data relationships. This methodology allows finding localities in tabular data and leverages the power of Convolutional Neural Networks (CNNs) to effectively model their sparsity. Compared to its state-of-the-art (SOTA) machine learning alternatives, our method (i) maintains an equivalent level of explainability; (ii) has a comparatively low level of computational complexity; and (iii) can be scaled to a larger number of learning tasks (e.g. time series forecasting) without structural changes. Compared to its SOTA deep-learning alternatives, our method (i) is data-centric (i.e. the architecture depends on the data describing the system under analysis); (ii) presents an algorithmic, data-driven building pipeline; and (iii) has a lower complexity, replacing complex architectural modules (e.g. attention-based mechanisms) with elementary computational units (e.g. convolutional layers). We provide a comparison between HCNNs, simple-to-advanced machine learning algorithms and SOTA deep tabular architectures using a heterogeneous battery of small-sized numerical benchmark datasets.
We observe that HCNN always ties SOTA performances on the proposed tasks, providing, at the same time, structural and computational advantages. The rest of the paper is organized as follows. In Section 2 we review the previous research on information filtering networks, sparsity handling in deep learning, and automated learning for tabular data. In Section 3.1 we discuss the data acquisition and transformation pipeline. In Section 3.2 we introduce the basic concepts about network science and information filtering networks. In Section 3.3 we provide the background for Homological Neural Networks. In Section 3.4 we present the working mechanism of Homological Convolutional Neural Networks. In Section 3.5, we provide the mathematical justification for the proposed methodology. In Section 4, we explore the effectiveness of Homological Convolutional Neural Networks compared to SOTA machine learning and deep learning models. Finally, in Section 5, we interpret our results and discuss future research lines in this area.

## 2 Related Work

**Information Filtering Networks**. The search for increasingly sophisticated sparse network representations of heterogeneous data types is an active area of research. During the past three decades, Information Filtering Networks (IFNs) [35; 36; 31; 32; 37] emerged as an effective tool in this research field. Their effectiveness has been demonstrated in many application domains, including but not limited to finance [38; 39; 40; 41; 42], psychology [43; 44], medicine [45; 46] and biology [47; 48]. However, in many cases, the power of IFNs has been limited to descriptive tasks. More recently, considerable efforts have been spent to make them active modeling tools. In this sense, the work by [49] suggests using IFNs to perform topological regularization in multivariate probabilistic modeling with both linear and non-linear multivariate probability distributions; the work by [50] proposes a new unsupervised feature selection algorithm entirely based on the study of the relative position of nodes inside the above-mentioned constrained network representations; while the work by [39] suggests a first integration of IFNs into articulated pipelines involving also complex deep learning architectures. The latest milestone is represented by the introduction of Homological Neural Networks (HNNs) [51], where the authors propose a pioneering methodology to extract a versatile computational unit directly from the IFNs' network representation. **Sparsity in Deep Learning**. Recent advances in many deep learning related fields [52; 53; 54; 55] came with an increasing demand for computational resources. The growing energy costs have driven the community to search for new models with reduced size, which heavily rely on selective pruning of redundant connections. Indeed, sparse neural networks have been found to generalize just as well as (and sometimes even better than) the original dense networks, while reducing the memory footprint and shortening training time [56]. Although large, the landscape of approaches to sparsify deep neural network models can be schematically organized into six main categories: (i) down-sizing models [57; 58; 59]; (ii) operator factorization [60; 61; 62]; (iii) value quantization [63; 64; 65]; (iv) value compression [66; 67]; (v) parameter sharing [68]; and (vi) sparsification [69; 70; 71; 72]. All these approaches intend sparsity as referring to the proportion of neural network weights that are zero-valued.
Higher sparsity corresponds to fewer weights, and smaller computational and storage requirements. Based on this, the weight-pruning phase can occur (i) at initialization [73]; (ii) after training [74]; or (iii) while training [75]. The current research work introduces a unique approach to neural network sparsification, which emphasizes the pruning of weak relationships during the data modeling stage. This approach involves constructing a lightweight neural network architecture that adapts its structure to a sparse representation of input data. In this sense, the sparsification process occurs before the initialization stage. The most similar solution to this is represented by Simplicial NNs [76] and Simplicial CNNs [77]. Indeed, these architectures constitute the very first attempt to exploit the topological properties of sparse graph representations to capture higher-order data relationships. Despite their novelty, the design of these neural network architectures limits them to pre-designed network data, without the possibility of easily scaling to more general data types (e.g., tabular data). **Tabular Learning**. Traditionally, the field of tabular data learning has been widely dominated by classic machine learning methods. Among them, ensembles of decision trees (DTs), such as GBDT (Gradient Boosting Decision Tree) [27; 78], represent the top choice for both practitioners and academics. The prominent strength of DTs is the efficient picking of global features with a high rate of statistical information gain [79], while their ensemble guarantees a generalized performance improvement by reducing variance [80]. GBDT is an algorithm in which new weak learners (i.e. decision trees) are created from previous models' residuals and then combined to make the final prediction. Several GBDT variations exist, including XGBoost [78], LightGBM [81] and CatBoost [28]. Extended studies demonstrated how, despite their differences, the performance of these algorithms on many tasks is statistically equivalent [28]. In the last decade, several studies proposed novel deep learning architectures explicitly designed to solve tabular problems [82; 83; 84; 85; 86; 87; 80; 88]. These models can be roughly categorized into five groups: (i) differentiable trees; (ii) attention-based models; (iii) explicit modeling of multiplicative interactions; (iv) regularization methods; and (v) convolutions-based approaches. Differentiable tree models leverage the power of classical decision trees by proposing smoother decision functions, which make them differentiable [83; 86; 87]. Attention-based models suggest exploiting the power of attention mechanisms [88; 89] by integrating them into tabular deep learning architectures [80; 84; 90; 91]. Methods that explicitly model multiplicative interactions try to incorporate feature products into Multilayer Perceptron models [92; 93; 94]. Regularization methods leverage large-scale hyper-parameter tuning schemes to learn a "regularization strength" for every neural weight [95; 8; 96]. Finally, convolutions-based approaches leverage the power of CNNs in tabular learning problems.
The two most significant attempts in this sense are the work by [97], where tabular data are reshaped directly into a multi-channel image format, letting the model learn the correct feature sorting through back-propagation, and the work by [98], where tabular data are transformed into images by minimizing the difference between the ranking of distances between features and the ranking of distances between their assigned pixels in the image. Despite these attempts, there is still an active debate over whether or not deep neural networks generally outperform gradient-boosted decision trees on tabular data, with multiple works arguing either for [80; 95; 87; 99] or against [11; 100; 101; 29] neural networks [102].

## 3 Data and Methods

### Data

To provide a fair comparison between HCNN and SOTA models, we use a collection of \(18\) tabular numerical datasets (see Appendix A) from the open-source "OpenML-CC18" benchmark suite [103]. Following the selection criteria in [104], all the datasets contain up to \(2000\) samples, \(100\) features, and \(10\) classes. A detailed overview of the properties of this first set of data is provided in Appendix A. A training/validation/test split is not provided. For all the datasets, \(50\%\) of the raw dataset is used as the training set, \(25\%\) as the validation set, and the remaining \(25\%\) as the test set. To assess the statistical significance of the results presented in the current research work, all the analyses are repeated on \(10\) different combinations of training/validation/test splits. The reproducibility of results is guaranteed by a rigorous usage of seeds (i.e. \([12,190,903,7687,8279,9433,12555,22443,67822,9822127]\)). Following [101], we focus on small datasets for two main reasons: (i) small datasets are often encountered in real-world applications [105]; and (ii) existing deep learning methods are limited in this domain. It is worth noting that, differently from other deep learning architectures (e.g. [104; 80]), HCNNs are not limited to small tabular data problems and can easily scale to medium-to-large problems. To provide evidence of this, we use a collection of 9 numerical tabular datasets (see Appendix A) from the "OpenML tabular benchmark numerical classification" suite [101]. All these datasets violate at least one of the selection criteria in [104] (i.e. they are characterized by a number of samples \(>2000\) or by a number of features \(>100\)). A more detailed overview of the properties of this second set of data is provided in Appendix A.

### Information Filtering Networks

The HCNN's building process is entirely centered on the structural organization of data emerging from the underlying network representation. The choice of the network representation is not binding, even if limited to the family of simplicial complexes [33]. In this paper, we exploit the power of a class of information filtering networks (IFNs) [35; 36; 31; 32; 37], namely the Triangulated Maximally Filtered Graph (TMFG) [32], to model the inner sparsity of tabular data and obtain a structural organization of input features. IFNs are an effective tool to represent and model dependency structures among variables characterizing complex systems while imposing topological constraints (e.g. being a tree or a planar graph) and optimizing specific global properties (e.g. the likelihood) [49].
Starting from a system characterized by \(n\) features and \(T\) samples, arranged in a matrix \(\mathbf{X}\), this methodology builds an \(n\times n\) similarity matrix \(\hat{\mathbf{C}}\), which is filtered to obtain a sparse adjacency matrix \(A\) retaining only the most structurally significant relationships among variables. The introduction of the TMFG is a milestone in the IFN research area. The building process of the TMFG (see Appendix B) is based on a simple topological move that preserves planarity (i.e. a graph is planar if it can be embedded on the surface of a sphere without edges crossing): it adds one node to the center of three-node cliques by using a score function that maximizes the sum of the weights of the three edges connecting the existing vertices. This addition transforms three-node cliques (i.e. triangles) into four-node cliques (i.e. tetrahedra) characterized by a chord (i.e. an edge that is not part of the enclosing cycle but connects two of its vertices), forming two triangles and generating a chordal network (a graph is said to be chordal if all cycles made of four or more vertices have a chord, reducing the cycle to a set of triangles [106]) [43]. As with all chordal graphs, the TMFG fulfills the independence assumptions of Markov and Bayesian networks [107; 43]. It has \(n\) nodes, where \(n\) is the cardinality of the set of input features, and \(3n-6\) edges. A nested hierarchy emerges from its cliques [108]: compared to the fully connected graph represented by \(\hat{\mathbf{C}}\), \(\boldsymbol{A}\)'s density is reduced in a deterministic manner while the global hierarchical structure of the original network is retained. The TMFG presents three main advantages: (i) it can be used to generate sparse probabilistic models as a form of topological regularization [49]; (ii) it is computationally efficient; and (iii) it allows finding maximal cliques in polynomial time, although the problem is NP-complete for general graphs. On the other hand, the two main limitations of chordal networks are that (i) they may add unnecessary edges to satisfy the property of chordality; and (ii) their building cost can vary based on the chosen optimization function. Working with numerical-only tabular data, in the current paper \(\hat{\mathbf{C}}\) corresponds to a matrix of squared correlation coefficients. It is worth noting that, while characterizing cross-correlations, one could face statistical uncertainty due to many reasons including, but not limited to, noise in the data and the intrinsic complexity of interactions among the variables of the system. Attempts to overcome these problems may require filtering out statistically reliable information from the correlation matrix. Spectral analysis [109; 110; 111], clustering [112] and graph theory [113] demonstrated to be fruitful approaches to efficiently handle this problem [114; 35; 115]. In line with the work by [116], in the current paper we use the bootstrapping approach [117; 118]. This technique requires building a number \(r\) of replicas \(X_{i}^{*}\), \(i\in 1,\ldots,r\), of the data matrix \(\mathbf{X}\). Each replica \(X_{i}^{*}\) is constructed by randomly selecting \(T\) rows from the matrix \(\mathbf{X}\), allowing for repetitions. For each replica \(X_{i}^{*}\), the correlation matrix \(\hat{\mathbf{C}}_{i}^{*}\) is then computed.
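As a minimal illustration, the bootstrap protocol just described, together with the two aggregation schemes introduced in the next paragraph (MeanSimMatrix and BootstrapNet), can be sketched in Python as follows. The function `build_tmfg`, the number of replicas and the threshold value are hypothetical placeholders, not the reference implementation:

```python
import numpy as np

def bootstrap_similarity(X, r=100, seed=12):
    """Build r bootstrap replicas of X (T x n) and return the stack of their
    squared-correlation matrices C*_i, with shape (r, n, n)."""
    rng = np.random.default_rng(seed)
    T, _ = X.shape
    mats = []
    for _ in range(r):
        rows = rng.integers(0, T, size=T)              # T rows, with repetition
        mats.append(np.corrcoef(X[rows], rowvar=False) ** 2)
    return np.stack(mats)

def mean_sim_matrix(replica_mats):
    """MeanSimMatrix: entry-wise mean of the replica correlation matrices."""
    return replica_mats.mean(axis=0)

def bootstrap_net(replica_mats, build_tmfg, threshold=0.8):
    """BootstrapNet: keep only the links appearing across the replica TMFGs
    with frequency above `threshold`. `build_tmfg` (hypothetical) maps an
    n x n similarity matrix to a boolean n x n adjacency matrix."""
    freq = np.mean([build_tmfg(C) for C in replica_mats], axis=0)
    return freq >= threshold
```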
We highlight that (i) the bootstrap approach does not require knowledge of the data distribution, and (ii) it is particularly useful to deal with high dimensional systems where it is difficult to infer the joint probability distribution from data. Once the replica-dependent correlation matrices are obtained, we treat them in two different ways:

* We compute \(\hat{\mathbf{C}}\) as the entry-wise mean of the correlation matrices \(\hat{\mathbf{C}}_{i\in 1,\ldots,r}^{*}\).
* Based on each replica-dependent correlation matrix \(\hat{\mathbf{C}}_{i}^{*}\), we compute a TMFG\({}_{i}^{*}\), and we obtain the final TMFG by keeping only the links that appear across the TMFGs with a frequency higher than a specified threshold.

In the rest of the paper, we refer to the first configuration as MeanSimMatrix and to the second one as BootstrapNet. These two approaches lead to widely different results. In the former case, the final TMFG will be a sparse, connected graph that necessarily maintains all the topological characterization of the family of IFNs it belongs to (i.e. planarity and chordality). In the latter case, instead, there will be no guarantee on the connectedness of the graph. Indeed, the chosen threshold could lead to disconnected components and to the removal of edges assuring the graph's chordality.

### Homological Neural Networks

The main idea behind IFNs is to explicitly model higher-order sub-structures, which are crucial for the representation of the underlying system's interactions. In the case of the TMFG, a simple higher-order representation can be obtained by adding triplets (triangles) and quadruplets (tetrahedra) to the set of nodes in the network. However, the associated higher-order graph is hard to handle both visually and computationally. As a solution to this problem, in the work by [51], the authors start from a layered representation (i.e. the Hasse diagram), which explicitly takes into account higher-order sub-structures and their interconnections, and show how to easily convert this representation into a stand-alone computational unit named Homological Neural Network (HNN). Specifically, to represent the complexity of a higher-order network (i.e. a TMFG), the authors propose to adopt a layered structure. As shown in Figure 1, nodes in layer \(d\) represent \(d\)-dimensional simplices (i.e. \(0\)-dimensional simplices are nodes, \(1\)-dimensional simplices are edges, \(2\)-dimensional simplices are triangles, \(3\)-dimensional simplices are tetrahedra). The structure starts with the vertices in layer \(0\); pairs of vertices connect to edges, which are represented in layer \(1\); edges connect to triangles, which are represented in layer \(2\); triangles connect to tetrahedra, which are represented in layer \(3\), and so on. The resulting deep neural network is a sparse Multilayer Perceptron (MLP) with a one-to-one correspondence with the original network representation, explicitly retaining the simplices and their interconnections in the structure. All information about the network at all dimensions is explicitly encoded in this representation, including elements such as maximal cliques, separators, and their multiplicity.

Figure 1: Pictorial representation of an HNN and its building pipeline. From left to right, (i) we start from a chordal graph representing the dependency structures of features in the underlying system, (ii) we re-arrange the network's representation to highlight the underlying simplicial complex structures (i.e. edges, triangles, tetrahedra), and (iii) we finally report a layered representation, which explicitly takes into account higher-order sub-structures and their interconnections, and can be easily converted into a computational unit (i.e. a sparse MLP).

### Homological Convolutional Neural Networks

Despite the undeniable advantages deriving from the sparse structure provided by HNNs, results in [51] suggest that the choice of the Multilayer Perceptron as the deep learning architecture to process the information encoded in the underlying network representation is sub-optimal (especially for tabular data problems).
In addition to this, HNNs impose the chordality of the underlying network, and the building process of the deep neural network architecture implies the usage of non-native components, inducing a substantial computational overhead. In this research work, we propose an alternative computational architecture that aims to solve these issues, and we name it "Homological Convolutional Neural Network" (HCNN). Given the adjacency matrix \(\mathbf{A}\) constructed using IFNs (see Section 3.2), to model the complexity embedded in the network representation, we isolate \(3\) different simplicial families: (i) maximal cliques with size \(4\) (i.e. the \(3\)-dimensional simplices or tetrahedra); (ii) maximal cliques with size \(3\) (i.e. the \(2\)-dimensional simplices or triangles); and (iii) maximal cliques with size \(2\) (i.e. the \(1\)-dimensional simplices or edges). When using the TMFG as network representation, these \(3\) structures are sufficient to capture all the higher-order dependency structures characterizing the underlying system. Each input of the novel deep learning architecture is hence represented by \(3\) different \(1\)-\(d\) vectors that we call \(H\) (i.e. realizations of the input features belonging to at least one tetrahedron), \(R\) (i.e. realizations of the input features belonging to at least one triangle), and \(E\) (i.e. realizations of the input features belonging to at least one edge). As a first step, in HCNN, we perform a \(1\)-\(d\) convolution across each set of features defining a realization of a simplicial family. We use a kernel size and a stride equal to \(d+1\) (i.e. the dimension of the simplicial structure plus one, namely its number of vertices), and a number of filters \(\zeta\in[4,8,12,16]\). This means that, given the three input vectors \(H\), \(R\) and \(E\) representing the three simplicial families characterizing a TMFG, we compute a \(1\)-\(d\) convolution with a kernel size and a stride of \(2\), \(3\) and \(4\), respectively, for edges, triangles, and tetrahedra. The usage of stride is necessary to prevent "parameter sharing". While this is generally considered an attractive property, as fewer parameters are estimated and overfitting is avoided, in our case parameter sharing would lead to inconsistencies. Indeed, geometrical structures belonging to the same simplicial family (i.e. edges, triangles, and tetrahedra) but independent in the hierarchical dependency structure of the system would share parameters, which is obviously wrong. After the \(1^{st}\)-level convolutions, which extract element-wise information from geometrical structures belonging to the same simplicial family, we apply \(2^{nd}\)-level convolutions extracting homological insights. Indeed, the convolution is applied to the output of the first layer, extracting information related to entities belonging to the same simplicial family, which are not necessarily related in the original network representation.
In this case, we use a kernel with a size equal to the cardinality of the simplicial family (i.e. \(|E|\), \(|R|\), \(|H|\) respectively) and a number of filters \(\xi\in[32-64]\), explored with a skip factor of \(4\). The final layer of the HCNN architecture is linear and maps the outputs from the \(2^{nd}\)-level convolutions to the output. It is worth noting that each level of convolution is followed by a regularization layer with a dropout rate equal to \(0.25\), and the non-linear activation function is the classic Rectified Linear Unit (ReLU).

Figure 2: Pictorial representation of an HCNN and its building pipeline. From left to right, (i) we start from a chordal graph representing the dependency structures of features in the underlying system (the choice of the network representation is not binding), (ii) we isolate the maximal cliques corresponding to \(1\)-, \(2\)- and \(3\)-dimensional simplices (i.e. edges, triangles, tetrahedra) and we group them into \(1\)-\(d\) vectors containing features' realizations, (iii) we compute a \(1\)-\(d\) convolution which extracts simplicial-wise non-linear relationships, (iv) we compute a \(2^{nd}\)-level convolution, which operates on the output of the previous level of convolution across all the representatives of each simplicial family, extracting a first class of non-trivial homological insights, and (v) we finally apply a linear map from the \(2^{nd}\)-level convolutions to the output, extracting a second class of cross-network homological insights.

Even though HNN and HCNN are both built on the concept of homology and exploit it in the construction of a data-centric neural network unit, it is worth noting that their designs aim to capture different data relationships. In the case of HNN, different aggregation layers aim to capture relationships between increasingly complex geometrical structures, which are linked together through at least one edge in the network representation of the system under analysis. If two geometrical structures are not linked, then any potential relationship is missed. This architectural philosophy is maintained in HCNN and is fully captured in the \(1^{st}\) level of convolution, where we model interactions embedded in unitary geometrical structures. In so doing, we capture the information contained in all the representatives of each simplicial family, since the convolution is iterated for each size of higher-order structures. This step is highly eased if the input network is chordal. Indeed, chordality guarantees increasingly complex structures containing all the possible substructures. The chordality property also has additional advantages: this first layer of the data-centric unit can be built in polynomial time. This is based on the fact that the maximal cliques of different sizes, and the maximum clique, of a chordal graph can be found in polynomial time, although the problem is NP-complete for general graphs [119]. In the second and third layers of aggregation, in HCNNs, we aim to capture homological relationships characterizing the underlying system. Specifically, they allow us to overcome the limits imposed by any network structure, by capturing potential hidden data dependency structures.
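To make the two-level convolutional scheme concrete, the following is a minimal PyTorch sketch of the forward pass, assuming the feature realizations have already been grouped into the flattened vectors \(E\), \(R\) and \(H\) described above. The filter counts, the conv–ReLU–dropout ordering and the shape handling follow our reading of the text and are illustrative assumptions, not the reference implementation (which is available at the repository linked in the abstract):

```python
import torch
import torch.nn as nn

class SimplicialBlock(nn.Module):
    """1st- and 2nd-level convolutions for one simplicial family of dimension d."""
    def __init__(self, n_simplices, d, zeta=8, xi=32):
        super().__init__()
        # 1st level: kernel = stride = d + 1, so each window covers exactly one
        # simplex and never straddles two independent simplices of the family.
        self.conv1 = nn.Conv1d(1, zeta, kernel_size=d + 1, stride=d + 1)
        # 2nd level: the kernel spans all simplices of the family, extracting
        # cross-simplex (homological) information.
        self.conv2 = nn.Conv1d(zeta, xi, kernel_size=n_simplices)
        self.act, self.drop = nn.ReLU(), nn.Dropout(0.25)

    def forward(self, x):                        # x: (batch, 1, n_simplices * (d + 1))
        h = self.drop(self.act(self.conv1(x)))   # -> (batch, zeta, n_simplices)
        h = self.drop(self.act(self.conv2(h)))   # -> (batch, xi, 1)
        return h.flatten(1)                      # -> (batch, xi)

class HCNN(nn.Module):
    def __init__(self, n_edges, n_triangles, n_tetrahedra, n_classes, zeta=8, xi=32):
        super().__init__()
        self.blocks = nn.ModuleList([
            SimplicialBlock(n_edges, 1, zeta, xi),       # E: 1-simplices
            SimplicialBlock(n_triangles, 2, zeta, xi),   # R: 2-simplices
            SimplicialBlock(n_tetrahedra, 3, zeta, xi),  # H: 3-simplices
        ])
        self.out = nn.Linear(3 * xi, n_classes)          # final linear map

    def forward(self, e, r, h):
        z = torch.cat([blk(v) for blk, v in zip(self.blocks, (e, r, h))], dim=1)
        return self.out(z)

# Example: a TMFG with 12 edges, 7 triangles and 4 tetrahedra, 3 classes.
model = HCNN(n_edges=12, n_triangles=7, n_tetrahedra=4, n_classes=3)
logits = model(torch.randn(32, 1, 24), torch.randn(32, 1, 21), torch.randn(32, 1, 16))
```

Note that setting the stride equal to the kernel size is what keeps each first-level window aligned with a single simplex, as discussed above.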
### On the learning process of the network's representation

In the problem setting described in Section 3.4, we are dealing with a computational system \(\mathcal{M}_{\mathcal{G}}\), the HCNN, which depends on a network representation \(\mathcal{G}\). To discover the best network representation, in principle, one needs to explore the ensemble of all possible networks and identify the one that makes the model perform best. This problem is known to be NP-hard [119]. However, one can restrict the search space and identify a priori the kind of optimal network by analysing the dependency structure of the features of the system under analysis. From an information-theoretic perspective, the general problem consists in finding the multivariate probability density function with representation structure \(\mathcal{G}\), \(\hat{f}(\mathbf{X}|\mathcal{G})\), that best describes the "true" underlying distribution \(f(\mathbf{X})\) (which is unknown). To quantify the distance between a model, \(\hat{f}(\mathbf{X}|\mathcal{G})\), and the true distribution, \(f(\mathbf{X})\), one can use the Kullback-Leibler divergence [120] \[D_{KL}(f\parallel\hat{f})=\mathbb{E}(\log f(\mathbf{X}))-\mathbb{E}(\log\hat{f}(\mathbf{X}|\mathcal{G})), \tag{1}\] which must be minimized. The first term of Equation 1 is independent of the model, and therefore its value is irrelevant to the purpose of discovering the representation network. The second term, \(-\mathbb{E}(\log\hat{f}(\mathbf{X}|\mathcal{G}))\) (note the minus sign), instead depends on \(\mathcal{G}\) and must be minimized. This term is the estimate of the entropy of the multivariate system of variables \(\mathbf{X}\) obtained by using the model \(\hat{f}(\mathbf{X}|\mathcal{G})\): \[\hat{H}(\mathbf{X}|\mathcal{G})=-\mathbb{E}(\log\hat{f}(\mathbf{X}|\mathcal{G})) \tag{2}\] and corresponds to the so-called cross-entropy. Given that the true underlying distribution is unknown, the expectation cannot be computed exactly; however, it can be estimated with arbitrary precision using the sample mean. Such a sample mean approximates the expected value of the negative log-likelihood of the model \(\hat{f}(\mathbf{X}|\mathcal{G})\). Therefore, the construction of the representation network must aim to maximize the likelihood of the model, which is indeed a typical quantity that is maximized when training a model. The network associated with the largest model likelihood can be constructed step-by-step by joining disconnected parts that share the largest mutual information. Indeed, in a graph, the gain achieved by joining two variables \(X_{a}\) and \(X_{b}\) is approximately given by the mutual information shared by the two variables, \(\simeq I(X_{a};X_{b})\). In turn, at the second-order approximation, the mutual information is approximated by the square of the correlation coefficient between the two variables. Therefore, the gain in the model's likelihood is \(I(X_{a};X_{b})\simeq\rho_{a,b}^{2}\) [121], and the TMFG construction with \(\rho^{2}\) weights yields a graph that aims to maximize the model's likelihood itself.

## 4 Experiments

In this section, we compare the performance of the HCNN classifier in its MeanSimMatrix and BootstrapNet configurations (see Section 3.2) against \(8\) machine learning and deep learning SOTA classifiers under homogeneous evaluation conditions. We consider LogisticRegression, RandomForest, XGBoost, LightGBM and CatBoost as representatives of machine learning classifiers, and MLP, TabNet and TabPFN as representatives of deep learning classifiers. For each of them, the inference process is structured into two different phases: (i) the hyper-parameters search stage; and (ii) the training/test stage with optimal hyper-parameters.
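A minimal sketch of this two-stage protocol, using Optuna's TPE sampler as a stand-in for the Tree-Parzen-Estimator-based sequential model-based optimization detailed in the next paragraph, might look as follows (the `make_model` factory and the search space shown are hypothetical; the F1 averaging mode is also an assumption):

```python
import optuna
from sklearn.metrics import f1_score

def tune_and_evaluate(make_model, X_tr, y_tr, X_val, y_val, X_te, y_te,
                      n_trials=500, seed=12):
    # Stage (i): TPE-based sequential model-based optimization,
    # maximizing the F1 score on the validation set.
    def objective(trial):
        params = {  # hypothetical search space (see Appendix C for the real ones)
            "lr": trial.suggest_float("lr", 1e-4, 1e-1, log=True),
            "zeta": trial.suggest_categorical("zeta", [4, 8, 12, 16]),
            "xi": trial.suggest_int("xi", 32, 64, step=4),
        }
        model = make_model(**params).fit(X_tr, y_tr)
        return f1_score(y_val, model.predict(X_val), average="weighted")

    study = optuna.create_study(direction="maximize",
                                sampler=optuna.samplers.TPESampler(seed=seed))
    study.optimize(objective, n_trials=n_trials)

    # Stage (ii): retrain with the optimal hyper-parameters and test.
    best = make_model(**study.best_params).fit(X_tr, y_tr)
    return f1_score(y_te, best.predict(X_te), average="weighted")
```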
Both stages are repeated \(10\) times with fixed seeds that guarantee full reproducibility of the results. For each run, we allow for a maximum of \(500\) hyper-parameters search iterations, allocating \(8\) CPUs, each with \(2\) GB of memory, and a time budget of \(48\) hours. Experiments are entirely run on the University College London HPC CS Cluster [122]. The hyper-parameters search phase consists of a Sequential Model-Based Optimization with the Tree Parzen Estimator [123], where we maximize the F1_score on each validation set. In Appendix C, we describe the hyper-parameters' search space for each classifier. We use three metrics to evaluate the classifiers on out-of-sample datasets: the F1_score, the Accuracy, and the Matthews Correlation Coefficient (MCC) [124; 125]. The results obtained are statistically validated using the Wilcoxon significance test, a standard metric for comparing classifiers across multiple datasets [126]. As a second stage of the analysis, we investigate the scalability of each model in tackling extensive numerical tabular classification tasks. In so doing, we use an ad-hoc suite of datasets (see Section 3.1), while maintaining the inference process described earlier in this section. A model converges (i.e. it is able to scale to larger classification problems) once it completes the learning task using the given computational resources within the allocated time budget for all the \(10\) seeds.

### Small tabular classification problems

Table 1 reports a cross-dataset, out-of-sample comparison of the classifiers previously listed in this section. For each model, we provide (i) the average and (ii) the best/worst ranking position considering three different evaluation metrics, (iii) the average value for each evaluation metric, and (iv) the time required for the hyper-parameters tuning and for the training/test run with optimal hyper-parameters. On average, the TabPFN model occupies a better ranking position than that of HCNN, both in its MeanSimMatrix and in its BootstrapNet configuration. However, it is worth noting that, when we evaluate models' performance through the F1_Score and the MCC (i.e. the two performance metrics that are less prone to bias induced by unbalanced datasets), the worst-case ranking position of the HCNN in the MeanSimMatrix configuration is better than that of its immediate competitor (i.e. \(7\) and \(6\) for HCNN MeanSimMatrix vs \(10\) and \(8\) for TabPFN). The same happens in the case of HCNN BootstrapNet with the F1_score. These findings highlight an evident robustness of the HCNN model, which is superior not only to the TabPFN model but also to all the other deep learning and machine learning alternatives. More generally, both TabPFN and HCNN show superior performance compared to the other two deep learning models (i.e. MLP and TabNet), which occupy average ranking positions equal to \(\sim 7\) and \(\sim 9\), respectively, on all three evaluation metrics. Among machine learning models, CatBoost achieves the highest performance with an average ranking position equal to \(\sim 4\) considering the F1_Score and the MCC, and equal to \(\sim 5\) considering the Accuracy (in this case position number \(4\) is occupied by LogisticRegression). All these findings can be visualized in Figure 3. Specifically, the higher robustness of the HCNN model in the MeanSimMatrix configuration compared to the TabPFN model can be observed in Figure 3(a) and Figure 3(c).
They represent the ranking position of each model on each dataset using the F1_Score and the MCC as performance metrics, respectively. In the first case, we notice that the ranking position of the worst performance by HCNN is \(7\), obtained on the dataset "climate-model-simulation-crashes" (OpenML ID \(40994\)), while the one occupied by TabPFN is \(10\), on the dataset "pc_1" (OpenML ID \(1068\)). In the second case, we notice that the worst performance by HCNN has ranking \(6\), obtained on the datasets "mfeat-karhunen" (OpenML ID \(16\)), "steel-plates-fault" (OpenML ID \(40982\)) and "climate-model-simulation-crashes" (OpenML ID \(40994\)), while the one occupied by TabPFN is \(8\), on the dataset "pc_1" (OpenML ID \(1068\)). Except for the "mfeat-karhunen" dataset (OpenML ID \(16\)), all the datasets listed before are strongly unbalanced. The models' numerical performances for each evaluation metric reinforce all the findings discussed above. It is, however, clear that the differences in performance are very small. This evidence suggests a potential statistical equivalence of the models, and this hypothesis is verified through a specific statistical test discussed later in this section. The final comparison to be performed is the one related to the models' running time. In this sense, machine learning models still represent the SOTA, with CatBoost being an exception.

Table 1: Cross-dataset, out-of-sample comparison of all the classifiers (LogisticRegression, RandomForest, XGBoost, LightGBM, CatBoost, MLP, TabNet, TabPFN, HCNN MeanSimMatrix, HCNN BootstrapNet): average and best/worst ranking positions, average values for each evaluation metric (F1_Score, Accuracy, MCC), and running times for the hyper-parameters tuning and for the training/test run with optimal hyper-parameters.

Among deep learning models, however, it is worth noting that HCNN has a running time that is comparable with the one of TabPFN and much lower than that of the other attention-based model, TabNet. This result is relevant since it legitimates the architecture proposed in this paper as a strong competitor of TabPFN. Indeed, the proposed architecture reaches comparable results without pre-training, with a higher level of explainability in the architectural building process, and with a much lower number of parameters. Also among deep learning models, there is an exception represented by the MLP: its SOTA running time heavily depends on the number of layers and on the number of neurons per layer emerging from the hyper-parameter search. Figure 4 reports the relationship between the number of features and the total number of parameters in the HCNN MeanSimMatrix configuration, the relationship between the number of features and the total number of parameters in the HCNN BootstrapNet configuration, and the relationship between the difference in the number of features and the difference in the total number of parameters in the two configurations.
Looking at Figure 4(a), it is possible to conclude that a strong linear relationship exists between the number of features and the total number of parameters of the HCNN model in the MeanSimMatrix configuration. This finding was expected, since the proposed model's architecture entirely depends on the complete homological structure of the underlying system. This means that each time a new feature is introduced, we could potentially observe an increase in the number of edges, triangles, and tetrahedra, which in turn determines a proportional increase in the number of parameters of the HCNN itself.

Figure 3: Out-of-sample model- and dataset-dependent average ranking considering the (a) F1_Score, (b) Accuracy and (c) MCC evaluation metrics. This representation allows one to clearly assess the higher robustness of the HCNN model to dataset unbalance over all its deep learning and machine learning competitors.

On this point, we need to underline that the magnitude of the slope of the regression line heavily depends on the optimal hyper-parameters describing the number of filters in the two convolutional layers. Looking at Figure 4(b), we again observe a relatively strong linear relationship between the number of features and the total number of parameters of the HCNN model in the BootstrapNet configuration. The difference in r_value between the two configurations is equal to \(0.19\) and depends on the fact that, in the second case, the optimal threshold value, which maximizes the model's performance, is different across datasets, does not depend on the number of input features, and determines an ablation of features that has no dependence on any other factor. More generally, in the BootstrapNet configuration, we observe a number of parameters that is, on average, one order of magnitude below the one in the HCNN MeanSimMatrix configuration. To better study this finding, in Figure 4(c) we report, on the \(x\)-axis, the difference in the number of features \(\Delta_{f}\) and, on the \(y\)-axis, the difference in the number of parameters \(\Delta_{p}\). As one can see, the linear relationship is strong only when the two deltas are low. For higher deltas, specifically for the three datasets "mfeat-fourier" (OpenML ID \(14\)), "mfeat-karhunen" (OpenML ID \(16\)), and "analcatdata_authorship" (OpenML ID \(458\)), even if the decrement is significant for both parameters, the relationship is not linear.

Figure 4: Study of the relationship between the number of total features (\(x\)-axis) and the total number of parameters (\(y\)-axis) for the (a) HCNN MeanSimMatrix and (b) HCNN BootstrapNet configurations. Panel (c) reports the relationship between the difference in the number of features (\(x\)-axis) and the difference in the number of total parameters (\(y\)-axis) when using the two above-mentioned configurations.

To assess the statistical significance of the difference in the models' performance, we use the Critical Difference (CD) diagram of the ranks based on the Wilcoxon significance test (with \(p\)-values below \(0.1\)), a standard metric for comparing classifiers across multiple datasets [126]. The overall empirical comparison of the methods is given in Figure 5. We notice that the performance of HCNN and TabPFN is not statistically different. This finding is coherent across the three different evaluation metrics. This result is particularly relevant because it makes these two deep learning architectures the only ones that are truly comparable with the SOTA machine learning ones.
Indeed, MLP and TabNet are statistically different from the other models in the majority of cases. These findings legitimate the methodology proposed in the current research work as a SOTA one, both in terms of performance and in terms of computational complexity (i.e. number of parameters). We cannot assert the same for TabPFN, which is among the SOTA models in terms of performance but is the worst model in terms of computational (and architectural) complexity.

Figure 5: Critical Difference plots on out-of-sample average ranks with a Wilcoxon significance analysis. In (a) the test is run considering the F1_Score, in (b) the test is run considering the Accuracy, and in (c) the test is run considering the MCC.

### Models' scalability to larger tabular numerical classification problems

All the models considered in the current research work are primarily designed to handle small tabular classification problems. As described in [104], a dataset is defined as "small" if it contains up to \(2000\) samples and \(100\) features. In this section, we explore the ability of the models to scale to larger problems. In so doing, we use benchmark datasets characterized, in turn, by a number of samples greater than \(2000\) or a number of features greater than \(100\). In Table 2, we mark success in solving the corresponding tabular classification task with a ✓ symbol, while a failure to solve the problem is denoted by a ✗ symbol. As one can notice, the proposed datasets are sufficient to underline the critical limitations of two models: the TabPFN model and the HCNN model in its MeanSimMatrix configuration. In the first case, the model is unable to scale to problems with a larger number of samples and features. This limitation was already pointed out in the original work by [104] and directly depends on the model's architecture, which strongly leverages the power of attention-based mechanisms. Indeed, the runtime and memory usage of the TabPFN architecture scale quadratically with the number of inputs (i.e. training samples passed), and the fitted model cannot work with datasets with a number of features \(>100\). The authors propose a potential solution to these problems by recommending the incorporation of attention mechanisms that exhibit linear scalability with the number of inputs [127; 128], while simultaneously maintaining satisfactory performance outcomes. However, no evidence is presented to support this suggestion. In the case of HCNN MeanSimMatrix, instead, the proposed architecture demonstrates a limit in handling problems characterised by a large number of features (but not samples). Also in this case, the reason for the failure lies in the model's architectural design choices. Indeed, as underlined in Figure 4(a), there is a strong linear relationship between the number of features and the number of parameters, meaning that, when the number of features is large, convolving across all the representatives of each simplicial complex family becomes computationally demanding. A solution to this problem can be found in employing the BootstrapNet configuration, which disrupts the linear relationship discussed earlier, resulting in a significant reduction in the number of parameters when dealing with a large number of features. While this approach demonstrates considerable efficacy, it remains reliant on a threshold parameter (see Section 3.2), suggesting the need for more advanced and parameter-free alternatives.
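Returning briefly to the statistical validation used throughout this section: the CD diagrams of Figure 5 rest on pairwise Wilcoxon tests over per-dataset scores, which can be sketched as follows (the input layout is assumed; the CD-plot rendering itself is omitted):

```python
from itertools import combinations
from scipy.stats import wilcoxon

def pairwise_wilcoxon(scores, alpha=0.1):
    """scores: dict mapping model name -> list of per-dataset metric values
    (datasets in the same order for every model). Returns the model pairs
    whose performance difference is significant at level alpha."""
    significant = []
    for a, b in combinations(scores, 2):
        stat, p = wilcoxon(scores[a], scores[b])
        if p < alpha:
            significant.append((a, b, p))
    return significant
```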
For the sake of completeness, in Appendix E we partially repeat the analyses presented in Section 4.1 on the newly introduced datasets. Because of the fragmentation caused by the increased size, we report only the dataset-dependent analyses, excluding the cross-dataset ones.

## 5 Conclusion

In this paper, we introduce the Homological Convolutional Neural Network (HCNN), a novel deep learning architecture that revisits the simpler Homological Neural Network (HNN) to gain abstraction, representation power, robustness, and scalability. The proposed architecture is data-centric and arises from a graph-based higher-order representation of dependency structures among multivariate input features. Compared to HNN, our model demonstrates a higher level of abstraction, since we have higher flexibility in choosing the initial network representation: we can choose from the universe of simplicial complexes and are not restricted to specific sub-families. Looking at geometrical structures at different granularity levels, we propose a clear-cut way to leverage the power of convolution on sparse data representations. This allows us to fully absorb the representation power of HNN in the very first level of HCNN, leaving room for additional data transformations at deeper levels of the architecture. Specifically, in the current research work we build the HCNN using a class of information filtering networks (i.e. the TMFG) that uses squared correlation coefficients to maximize the likelihood of the underlying system. We propose two alternative architectural solutions: (i) the MeanSimMatrix configuration and (ii) the BootstrapNet configuration. Both of them leverage the power of bootstrapping to gain robustness towards data noise and the intrinsic complexity of interactions among the underlying system's variables. We test these two modeling solutions on a set of tabular numerical classification problems (i.e. one of the most challenging tasks for deep learning models and the one where HNN demonstrates the poorest performances). We compare HCNN with different machine- and deep-learning architectures, always tying SOTA performances and demonstrating superior robustness to data unbalance. Specifically, we demonstrate that HCNN is able to compete with the latest transformer architectures (e.g. TabPFN) while using a considerably lower and easily controllable number of parameters (especially in the BootstrapNet configuration), guaranteeing a higher level of explainability in the neural network's building process, and exhibiting a comparable running time without the need for pre-training. We finally propose a study on the models' scalability to datasets of increasing size.
We underline the fragility of transformer models and we also demonstrate that HCNN in its MeanSimMatrix configuration is unable to manage datasets characterized by a large number of input features. On the other hand, we show that the design choice adopted for the BootstrapNet configuration offers a parametric solution to the problem. Despite the significant advances introduced by HCNNs, this class of neural networks remains in an embryonic phase. Further studies on underlying network representations should propose alternative metrics that replace squared correlation coefficients for mixed data-types (i.e. mixed categorical-numerical or purely categorical data), and further work is required to better understand low-level interactions captured by the proposed neural network model. This final point would certainly lead to a class of non-parametric, parsimonious HCNNs.

\begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline **OpenML ID** & **\# Samples** & **\# Features** & **LogisticRegression** & **RandomForest** & **XGBoost** & **LightGBM** & **MLP** & **TabNet** & **TabPFN** & **HCNN MeanSimMatrix** & **HCNN BootstrapNet** \\ \hline 361055 & 16714 & 10 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✓ \\ 361052 & 10825 & 26 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✓ \\ 361053 & 13488 & 16 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ 361055 & 13276 & 10 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ 361046 & 10578 & 27 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ 361275 & 13272 & 209 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ 361277 & 3664 & 8 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ 361278 & 10000 & 22 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 2: Study on models’ ability to scale to larger problems. Considered datasets belong to the OpenML benchmark suite “Tabular benchmark numerical classification” [101]. For each of them, we report the OpenML ID, the number of samples, and the number of features. We indicate the success in solving the corresponding tabular classification task with a ✓ symbol, while a failure to solve the problem is denoted by an ✗ symbol.
2307.05629
Characterization of AGM Belief Contraction in Terms of Conditionals
We provide a semantic characterization of AGM belief contraction based on frames consisting of a Kripke belief relation and a Stalnaker-Lewis selection function. The central idea is as follows. Let K be the initial belief set and K-A be the contraction of K by the formula A; then B belongs to the set K-A if and only if, at the actual state, the agent believes B and believes that if not-A is (were) the case then B is (would be) the case.
Giacomo Bonanno
2023-07-11T07:07:15Z
http://arxiv.org/abs/2307.05629v1
# Characterization of AGM Belief Contraction ###### Abstract We provide a semantic characterization of AGM belief contraction based on frames consisting of a Kripke belief relation and a Stalnaker-Lewis selection function. The central idea is as follows. Let \(K\) be the initial belief set and \(K\div\phi\) be the contraction of \(K\) by the formula \(\phi\); then \(\psi\in K\div\phi\) if and only if, at the actual state, the agent believes \(\psi\) and believes that if \(\neg\phi\) is (were) the case then \(\psi\) is (would be) the case. ## 1 Introduction Belief contraction is the operation of removing from the set \(K\) of initial beliefs a particular belief \(\phi\). One reason for doing so is, for example, the discovery that some previously trusted evidence supporting \(\phi\) was faulty. For instance, a prosecutor might form the belief that the defendant is guilty on the basis of his confession; if the prosecutor later discovers that the confession was extorted, she might abandon the belief of guilt, that is, become open minded about whether the defendant is guilty or not. In their seminal contribution to belief change, Alchourron, Gardenfors and Makinson ([2]) defined the notion of "rational and minimal" contraction by means of a set of eight properties, known as the AGM axioms or postulates. They did so within a syntactic approach where the initial belief set \(K\) is a consistent and deductively closed set of propositional formulas and the result of removing \(\phi\) from \(K\) is a new set of propositional formulas, denoted by \(K\div\phi\). We provide a new characterization of AGM belief contraction based on a so-far-unnoticed connection between the notion of belief contraction and the Stalnaker-Lewis theory of conditionals ([35, 22]). Stalnaker introduced the notion of a selection function \(f\) taking as input a possible world \(w\) and a set of worlds \(E\) (representing a proposition) and giving as output a world \(w^{\prime}=f(w,E)\in E\), interpreted as the closest \(E\)-world to \(w\) (an \(E\)-world is a world that belongs to \(E\)). Lewis generalized this by allowing \(f(w,E)\) to be a set of worlds. In the Stalnaker-Lewis theory the (indicative or subjunctive) conditional "if \(\phi\) is (were) the case then \(\psi\) is (would be) the case", denoted by \(\phi>\psi\), is declared to be true at a world \(w\) if and only if \(\psi\) is true at all the worlds in \(f(w,\|\phi\|)\) (\(\|\phi\|\) denotes the set of worlds at which \(\phi\) is true). We consider semantic frames consisting of a Kripke belief relation on a set of states \(S\), representing the agent's initial beliefs, and a Stalnaker-Lewis selection function on \(S\times 2^{S}\) representing conditionals. Adding a valuation to such a frame yields a model. Given a model, we define the initial belief set \(K\) as the set of formulas that the agent believes at the actual state and \(K\div\phi\) (the contraction of \(K\) by \(\phi\)) as the set of formulas that the agent believes initially and also on the supposition that \(\neg\phi\): \(\psi\in K\div\phi\) if and only if, at the actual state, the agent (1) believes \(\psi\) and (2) believes the conditional \(\neg\phi>\psi\). We show that, when the selection function satisfies some natural properties, the contraction operation so defined captures precisely the set of AGM belief contraction functions. ## 2 AGM contraction functions Let \(\mathtt{At}\) be a countable set of atomic formulas. 
We denote by \(\Phi_{0}\) the set of Boolean formulas constructed from \(\mathtt{At}\) as follows: \(\mathtt{At}\subset\Phi_{0}\) and if \(\phi,\psi\in\Phi_{0}\) then \(\neg\phi\) and \(\phi\vee\psi\) belong to \(\Phi_{0}\). Define \(\phi\to\psi\), \(\phi\wedge\psi\), and \(\phi\leftrightarrow\psi\) in terms of \(\neg\) and \(\vee\) in the usual way. Given a subset \(K\) of \(\Phi_{0}\), its deductive closure \(Cn(K)\subseteq\Phi_{0}\) is defined as follows: \(\psi\in Cn(K)\) if and only if there exist \(\phi_{1},...,\phi_{n}\in K\) (with \(n\geq 0\)) such that \((\phi_{1}\wedge...\wedge\phi_{n})\to\psi\) is a tautology. A set \(K\subseteq\Phi_{0}\) is _consistent_ if \(Cn(K)\neq\Phi_{0}\); it is _deductively closed_ if \(K=Cn(K)\). Given a set \(K\subseteq\Phi_{0}\) and a formula \(\phi\in\Phi_{0}\), the _expansion_ of \(K\) by \(\phi\), denoted by \(K+\phi\), is defined as follows: \(K+\phi=Cn\left(K\cup\{\phi\}\right)\). Let \(K\subseteq\Phi_{0}\) be a consistent and deductively closed set representing the agent's initial beliefs and let \(\Psi\subseteq\Phi_{0}\) be a set of formulas representing possible candidates for withdrawal. A _belief contraction function_ (based on \(K\) and \(\Psi\)) is a function \(\div_{\Psi}:\Psi\to 2^{\Phi_{0}}\) (where \(2^{\Phi_{0}}\) denotes the set of subsets of \(\Phi_{0}\)) that associates with every formula \(\phi\in\Psi\) a set \(K\div_{\Psi}\phi\subseteq\Phi_{0}\) (interpreted as the result of removing \(\phi\) from \(K\)). If \(\Psi\neq\Phi_{0}\) then \(\div_{\Psi}\) is called a _partial contraction function_, while if \(\Psi=\Phi_{0}\) then \(\div_{\Phi_{0}}\) is called a _full-domain_ contraction function; in this case we simplify the notation and omit the subscript \(\Phi_{0}\). **Definition 1**.: _Let \(\div_{\Psi}:\Psi\to 2^{\Phi_{0}}\) be a partial contraction function and \(\div^{\prime}:\Phi_{0}\to 2^{\Phi_{0}}\) a full-domain contraction function (both of them based on \(K\)). We say that \(\div^{\prime}\) is an extension of \(\div_{\Psi}\) if, for every \(\phi\in\Psi\), \(K\div^{\prime}\phi=K\div_{\Psi}\phi\)._ A _full-domain_ contraction function is called an _AGM contraction function_ if it satisfies the following properties, known as the AGM postulates: \[\begin{array}{rl}(K-1)&[\text{Closure}]\ K\div\phi=Cn(K\div\phi).\\ (K-2)&[\text{Inclusion}]\ K\div\phi\subseteq K.\\ (K-3)&[\text{Vacuity}]\ \text{If}\ \phi\notin K\ \text{then}\ K\subseteq K\div\phi.\\ (K-4)&[\text{Success}]\ \text{If}\ \phi\ \text{is not a tautology, then}\ \phi\notin K\div\phi.\\ (K-5)&[\text{Recovery}]\ \text{If}\ \phi\in K\ \text{then}\ K\subseteq(K\div\phi)+\phi.\\ (K-6)&[\text{Extensionality}]\ \text{If}\ \phi\leftrightarrow\psi\ \text{is a tautology, then}\ K\div\phi=K\div\psi.\\ (K-7)&[\text{Conjunctive overlap}]\ (K\div\phi)\cap(K\div\psi)\subseteq K\div(\phi\wedge\psi).\\ (K-8)&[\text{Conjunctive inclusion}]\ \text{If}\ \phi\notin K\div(\phi\wedge\psi),\ \text{then}\ K\div(\phi\wedge\psi)\subseteq K\div\phi.\end{array}\] \((K-1)\) requires the result of contracting \(K\) by \(\phi\) to be a deductively closed set. \((K-2)\) requires the contraction of \(K\) by \(\phi\) not to contain any beliefs that were not in \(K\). \((K-3)\) requires that if \(\phi\) is not in the initial belief set, then every belief in \(K\) should also be present in \(K\div\phi\) (thus, by \((K-2)\) and \((K-3)\), if \(\phi\notin K\) then the contraction of \(K\) by \(\phi\) coincides with \(K\)).
\((K-4)\) requires that \(\phi\) not be contained in \(K\div\phi\), unless \(\phi\) is a tautology (in which case, by \((K-1)\), it must be in \(K\div\phi\)). \((K-5)\) is a conservativity requirement: when \(\phi\in K\), contracting by \(\phi\) and then expanding the resulting set \(K\div\phi\) by \(\phi\) should involve no loss of beliefs relative to \(K\) (the converse inclusion \((K\div\phi)+\phi\subseteq K\) follows from \((K-2)\) and the hypothesis that \(K=Cn(K)\)). \((K-6)\) says that logically equivalent formulas should lead to the same result in terms of contraction. By \((K-7)\), if a formula \(\chi\in K\) is neither removed in the contraction of \(K\) by \(\phi\) nor in the contraction of \(K\) by \(\psi\), then \(\chi\) should not be removed in the contraction of \(K\) by the conjunction \(\phi\wedge\psi\). \((K-8)\), on the other hand, requires that if \(\phi\) is removed when we contract by \(\phi\wedge\psi\), then every formula that survives the contraction of \(K\) by \(\phi\wedge\psi\) survives also when \(K\) is contracted by \(\phi\) alone. For an extensive discussion of the above postulates see [11, 17, 6]. The notion of AGM belief contraction has been given alternative characterizations. One characterization is in terms of a binary relation \(\leqslant\) of "epistemic entrenchment" on \(K\), with the interpretation of \(\phi\leqslant\psi\) as "\(\phi\) is either less entrenched than, or as entrenched as, \(\psi\)". Gardenfors ([11, Theorem 4.30, p. 96]) shows that if the relation \(\leqslant\) satisfies five properties and a contraction function is defined by '\(\psi\in K\div\phi\) if and only if \(\psi\in K\) and either \(\phi\) is a tautology or \(\phi<(\phi\vee\psi)\)', then such contraction function is an AGM contraction function and, conversely, if an AGM contraction function is used to define the relation \(\leqslant\) by '\(\phi\leqslant\psi\) if and only if either \(\phi\notin K\div(\phi\wedge\psi)\) or \(\phi\wedge\psi\) is a tautology' then such relation satisfies those five properties. Another characterization makes use of the set \(W\) of possible worlds, where a possible world is defined as a maximally consistent set of formulas in \(\Phi_{0}\); within this approach, contraction has been characterized either in terms of systems of spheres ([13, 21]) or in terms of a plausibility relation on \(W\) or in terms of propositional selection functions (see [6, Chapter 4]). In this paper we provide an alternative characterization in terms of Stalnaker-Lewis conditionals. ## 3 An alternative semantic characterization of AGM contraction Given a binary relation \(R\subseteq S\times S\) on a set \(S\), for every \(s\in S\) we define \(R(s)=\{x\in S:(s,x)\in R\}\). **Definition 2**.: _A pointed frame is a quadruple \(\langle S,s_{\oplus},\mathcal{B},f\rangle\) where_ 1. \(S\) _is a set of_ states_; subsets of_ \(S\) _are called_ events_._ 2. \(s_{\oplus}\in S\) _is a distinguished element of_ \(S\) _interpreted as the_ actual state_._ 3. \(\mathcal{B}\subseteq S\times S\) _is a binary_ belief relation _on_ \(S\) _which is serial:_ \(\forall s\in S\)_,_ \(\mathcal{B}(s)\neq\varnothing\)_._ 4. 
\(f:\mathcal{B}(s_{\oplus})\times 2^{S}\setminus\varnothing\to 2^{S}\) _is a_ Stalnaker-Lewis selection function1 _that associates with every state-event pair_ \((s,E)\) _(with_ \(s\in\mathcal{B}(s_{\oplus})\) _and_ \(\varnothing\neq E\subseteq S\)_) a set of states_ \(f(s,E)\subseteq S\) _such that:_ Footnote 1: Note that, for the purpose of this paper, the domain of \(f\) can be taken to be \(\mathcal{B}(s_{\oplus})\times 2^{S}\setminus\varnothing\) rather than \(S\times 2^{S}\setminus\varnothing\). However, it can easily be extended to \(S\times 2^{S}\setminus\varnothing\) as follows: first, fix an arbitrary function \(g:S\setminus\mathcal{B}(s_{\oplus})\to\mathcal{B}(s_{\oplus})\) and then define, for every \(s\in S\setminus\mathcal{B}(s_{\oplus})\) and every \(\varnothing\neq E\subseteq S\), \(f(s,E)=f(g(s),E)\).

(a) (Success) \(f(s,E)\neq\varnothing\) and \(f(s,E)\subseteq E\),

(b) (Weak Centering) if \(s\in E\) then \(s\in f(s,E)\),

(c) (Doxastic Priority 1) if \(\mathcal{B}(s_{\oplus})\cap E\neq\varnothing\) then \(f(s,E)\subseteq\mathcal{B}(s_{\oplus})\cap E\),

(d) (Intersection) \(f(s,E)\cap F\subseteq f(s,E\cap F)\),

(e) (Doxastic Priority 2) Let \(B_{EF}=\{s\in\mathcal{B}(s_{\oplus}):f(s,E)\cap F\neq\varnothing\}\). If \(B_{EF}\neq\varnothing\) then (e.1) if \(s\in B_{EF}\) then \(f(s,E\cap F)\subseteq f(s,E)\cap F\), and (e.2) if \(s\notin B_{EF}\) then \(f(s,E\cap F)\subseteq f(\hat{s},E\cap F)\) for some \(\hat{s}\in B_{EF}\)._

The set \(\mathcal{B}(s)\) is the set of states that the agent considers possible at state \(s\), so that \(\mathcal{B}(s_{\oplus})\) is the set of doxastic possibilities at the actual state \(s_{\oplus}\) and represents the agent's initial beliefs. \(f(s,E)\) is the set of states that the agent considers closest, or most similar, to state \(s\) conditional on event \(E\). (4.a) of Definition 2 requires \(f(s,E)\) to be non-empty and, furthermore, that every state in \(f(s,E)\) be an \(E\)-state. (4.b) postulates that if \(s\) is an \(E\)-state then it belongs to \(f(s,E)\), that is, \(s\) itself is one of the \(E\)-states that are closest to \(s\). By (4.c) if there exists an \(E\)-state among those initially considered possible (\(\mathcal{B}(s_{\oplus})\cap E\neq\varnothing\)), then, for every \(s\in\mathcal{B}(s_{\oplus})\), the closest \(E\)-states to \(s\) must belong to \(\mathcal{B}(s_{\oplus})\cap E\). By (4.d), the closest \(E\)-states to \(s\) that are also \(F\)-states must belong to the set of closest \((E\cap F)\)-states to \(s\).
(4.e) can be viewed as an extension of (4.c): it says that if, among the states initially considered possible, there is at least one state, call it \(s\), that satisfies the property that among its closest \(E\)-states there is at least one that is also an \(F\)-state, then (1) the closest \((E\cap F)\)-states to \(s\) must belong to the intersection \(f(s,E)\cap F\) and (2) for any other state that does not satisfy the property, the closest \((E\cap F)\)-states to it are contained in the set of closest \((E\cap F)\)-states to some state that does satisfy the property. Adding a valuation to a pointed frame yields a model. Thus a _model_ is a tuple \(\langle S,s_{\oplus},\mathcal{B},f,V\rangle\) where \(\langle S,s_{\oplus},\mathcal{B},f\rangle\) is a pointed frame and \(V:\mathtt{At}\to 2^{S}\) is a valuation that assigns to every atomic formula \(p\in\mathtt{At}\) the set of states where \(p\) is true. Given a model \(\langle S,s_{\oplus},\mathcal{B},f,V\rangle\) define truth of a Boolean formula \(\phi\in\Phi_{0}\) at a state \(s\in S\), denoted by \(s\models\phi\), in the usual way: **Definition 3**.: _Truth of a formula at a state is defined as follows:_ 1. _if_ \(p\in\mathtt{At}\) _then_ \(s\models p\) _if and only if_ \(s\in V(p)\)_,_ 2. \(s\models\neg\phi\) _if and only if_ \(s\not\models\phi\)_,_ 3. \(s\models\phi\vee\psi\) _if and only if_ \(s\models\phi\) _or_ \(s\models\psi\) _(or both),_ We denote by \(\|\phi\|\) the truth set of \(\phi\): \(\|\phi\|=\{s\in S:s\models\phi\}\). Fix a model \(M=\langle S,s_{\oplus},\mathcal{B},f,V\rangle\) and let \(K=\{\phi\in\Phi_{0}:\mathcal{B}(s_{\oplus})\subseteq\|\phi\|\}\) (to simplify the notation, we omit the subscript denoting the model and thus write \(K\) rather than \(K_{M}\)); thus a Boolean formula \(\phi\) belongs to \(K\) if and only if at the actual state \(s_{\oplus}\) the agent believes \(\phi\). It is shown in the Appendix (Lemma 1) that the set \(K\subseteq\Phi_{0}\) so defined is deductively closed and consistent. Next, for every \(\phi\in\Phi_{0}\) such that \(\|\neg\phi\|\neq\varnothing\), define \(K\div\phi\subseteq\Phi_{0}\) as follows: \[\begin{array}{ll}\psi\in K\div\phi\text{ if and only if }&(1)\,\mathcal{B}(s_{\oplus})\subseteq\|\psi\|,\text{ and}\\ &(2)\,\forall s\in\mathcal{B}(s_{\oplus}),f(s,\|\neg\phi\|)\subseteq\|\psi\|.\end{array} \tag{1}\] In (2) below we rewrite (1) in an extended language containing a belief operator and a conditional operator, thus making the interpretation more transparent: \(\psi\in K\div\phi\) if and only if, at the actual state \(s_{\oplus}\), the agent believes \(\psi\) initially as well as on the supposition that \(\neg\phi\).2. Footnote 2: We take “believing \(\psi\) on the supposition that \(\neg\phi\)” to mean “believing that if \(\neg\phi\) is (were) the case then \(\psi\) is (would be) the case”. Since, in general, not every \(\phi\in\Phi_{0}\) is such that \(\|\neg\phi\|\neq\varnothing\), this definition gives rise to a _partial_ belief contraction function. The next proposition says that this partial contraction function can be extended to a full-domain AGM contraction function; conversely, given a full-domain AGM contraction function based on a consistent and deductively closed set \(K\), there exists a model \(M=\langle S,s_{\oplus},\mathcal{B},f,V\rangle\) such that \(K=\{\phi\in\Phi_{0}:\mathcal{B}(s_{\oplus})\subseteq\|\phi\|\}\) and, for every \(\phi\in\Phi_{0}\) such that \(\|\neg\phi\|\neq\varnothing\), \(K\div\phi\) satisfies (1). 
Thus the proposed semantics provides an alternative characterization of AGM belief contraction. The proof of the following proposition is given in the Appendix. **Proposition 1**.: 1. _Given a model_ \(\langle S,s_{\oplus},\mathcal{B},f,V\rangle\) _let_ \(K=\{\phi\in\Phi_{0}:\mathcal{B}(s_{\oplus})\subseteq\|\phi\|\}\) _and, for every_ \(\phi\in\Phi_{0}\) _such that_ \(\|\neg\phi\|\neq\varnothing\)_, let_ \(K\div\phi\) _be defined by (_1_). Then_ \(K\) _is consistent and deductively closed and the (partial) belief contraction function so defined can be extended to a full-domain AGM belief contraction function._ 2. _Let_ \(K\subset\Phi_{0}\) _be consistent and deductively closed and let_ \(\div:\Phi_{0}\to 2^{\Phi_{0}}\) _be an AGM belief contraction function. Then there exists a model_ \(\langle S,s_{\oplus},\mathcal{B},f,V\rangle\) _such that_ \(K=\{\phi\in\Phi_{0}:\mathcal{B}(s_{\oplus})\subseteq\|\phi\|\}\) _and, for every_ \(\phi\in\Phi_{0}\) _such that_ \(\|\neg\phi\|\neq\varnothing\)_,_ \(K\div\phi\) _satisfies (_1_)._ The proposed semantics becomes more transparent if we extend the language by introducing two modal operators: a unimodal belief operator \(\mathbb{B}\), corresponding to the belief relation \(\mathcal{B}\), and a bimodal conditional operator \(>\), corresponding to the selection function \(f\). Recall that \(\Phi_{0}\) is the set of Boolean (or factual) formulas. Let \(\Phi_{1}\) be the modal language constructed as follows. * \(\Phi_{0}\subset\Phi_{1}\), * if \(\phi,\psi\in\Phi_{0}\) then \(\phi>\psi\in\Phi_{1}\), * all the Boolean combinations of formulas in \(\Phi_{1}\). Thus, for the purpose of this paper, the conditional \(\phi>\psi\) (interpreted as the indicative or subjunctive conditional "if \(\phi\) is (were) the case then \(\psi\) is (would be) the case") is defined only for Boolean formulas. Finally, let \(\Phi\) be the modal language constructed as follows: * \(\Phi_{1}\subset\Phi\), * if \(\phi\in\Phi_{1}\) then \(\mathbb{B}\phi\in\Phi\), * all the Boolean combinations of formulas in \(\Phi\). Thus formulas in \(\Phi\) are either Boolean or formulas of the form \(\phi>\psi\), with \(\phi\) and \(\psi\) Boolean, or of the form \(\mathbb{B}\phi\) where \(\phi\) is either Boolean or of the form \(\psi>\chi\) with \(\psi\) and \(\chi\) Boolean, or a Boolean combination of such formulas. We can now extend the definition of truth of a formula at a state (Definition 3) to the set \(\Phi\) as follows: **Definition 4**.: _If \(\phi\in\Phi_{0}\) then \(s\models\phi\) according to the rules of Definition 3. Furthermore,_ * \(s\models(\phi>\psi)\) _(with \(\phi,\psi\in\Phi_{0}\)) if and only if either \(\|\phi\|=\varnothing\), or \(\|\phi\|\neq\varnothing\) and \(f(s,\|\phi\|)\subseteq\|\psi\|\),_ * \(s\models\mathbb{B}\phi\) _if and only if_ \(\mathcal{B}(s)\subseteq\|\phi\|\)_._ Then we can re-write the definition of \(K\div\phi\) given in (1) in terms of the modal operators \(\mathbb{B}\) and \(>\) as follows: \[\psi\in K\div\phi\text{ if and only if }\phi,\psi\in\Phi_{0}\text{ and }s_{\oplus}\models\mathbb{B}\psi\wedge\mathbb{B}\left(\neg\phi>\psi \right). \tag{2}\] Thus, in the statement of Proposition 1, \(K=\{\phi\in\Phi_{0}:\mathcal{B}(s_{\oplus})\subseteq\|\phi\|\}\) can be replaced by \(K=\{\phi\in\Phi_{0}:s_{\oplus}\models\mathbb{B}\phi\}\) and reference to (1) can be replaced by reference to (2). Note that only a fragment of the extended language is used in the characterization result of Proposition 1. 
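To make the characterization concrete before turning to related work, here is a small self-contained Python sketch that evaluates membership in \(K\div\phi\) via definition (1) on a hypothetical four-state model; the states, the belief set and the plausibility ranking that generates the selection function are illustrative choices of ours, not constructions from the paper, and propositions are modelled directly as truth sets.

```python
# Toy model for the contraction in (1)/(2): psi is in K÷phi iff the
# agent believes psi and, at every doxastically possible state s, the
# closest (not-phi)-states to s all satisfy psi.

S = frozenset({0, 1, 2, 3})        # states
B = frozenset({0, 1})              # B(s@): initial doxastic possibilities
rank = {0: 0, 1: 0, 2: 1, 3: 2}    # lower rank = more plausible (our choice)

def f(s, E):
    """Closest E-states to s: minimal-rank states of E; keeping s itself
    when s is in E gives weak centering (property 4(b))."""
    assert E, "f is only defined for non-empty events"
    m = min(rank[x] for x in E)
    closest = {x for x in E if rank[x] == m}
    if s in E:
        closest.add(s)
    return closest

def in_contraction(psi, phi):
    """Membership test 'psi in K÷phi' following definition (1)."""
    not_phi = S - phi
    return B <= psi and all(f(s, not_phi) <= psi for s in B)

phi = frozenset({0, 1, 2})          # believed initially, since B <= phi
print(in_contraction(phi, phi))     # False: Success, phi is withdrawn
print(in_contraction(S, phi))       # True: tautologies always survive
print(in_contraction(B | {3}, phi)) # True: believed and also true at 3
```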
In particular, nesting of conditionals and beliefs is disallowed in this fragment. The study of whether the extended language can be used to obtain generalizations of AGM-style belief change that go beyond merely Boolean expressions is a topic left for future research.

## 4 Related literature

There is a vast literature that deals with AGM belief contraction (for a survey see, for example, [6, 7]). Because of space limitations we will only focus on a few issues. The recovery postulate (AGM axiom \((K-5)\)) appears to be a natural way of capturing a "minimal" way of suspending belief in \(\phi\), but has been subject to extensive scrutiny (see [26, 10, 15, 19, 23, 29, 16, 27, 17]). In Makinson's terminology ([26]), contraction operations that do not satisfy the recovery postulate are called _withdrawals_. Alternative types of withdrawal operators have been studied in the literature: contraction without recovery ([5]), semi-contraction ([8]), severe withdrawal ([33]), systematic withdrawal ([27]), mild contraction ([27]). If one interprets belief contraction as a form of _actual belief change_ (in response to some input), then perhaps the recovery postulate is open to scrutiny. However, in the interpretation of belief contraction proposed in this paper, the recovery postulate is entirely natural. Indeed, if \(\psi\) belongs to the contraction of \(K\) by \(\phi\) then \(\psi\) is believed both initially and on the supposition that \(\neg\phi\); if this supposition is removed then one naturally falls back to the initial beliefs \(K\). There have been attempts in the literature to establish a link between notions of AGM belief change and Stalnaker-Lewis conditionals. Within the context of AGM belief revision this was done by [11], who considered the language that we called \(\Phi_{1}\), which includes conditionals of the form \(\phi>\psi\). Gardenfors introduced the following postulate (where \(K*\phi\) denotes the revised belief set in response to information \(\phi\)): \((\phi>\psi)\in K\) if and only if \(\psi\in K*\phi\). This postulate was taken to be an expression of the so-called Ramsey test.3 Gardenfors showed that this postulate can be satisfied only in cases where the revision operation is trivial; in other words, there cannot be interesting revision theories based on conditionals if one requires that the conditionals themselves be incorporated in the initial belief set. Several attempts have been made to circumvent Gardenfors' "triviality result". Different routes have been taken: weakening or re-interpreting the theorem ([23, 25, 24, 31, 32]), generalizing from belief revision functions to belief change systems (consisting of a set of epistemic states, an assignment of a belief set to each epistemic state and a transition function that determines how the epistemic state changes as a result of learning new information: [9]), considering an alternative semantics, namely Moss and Parikh's epistemic logic of subsets ([28]), and augmenting it with conditionals ([13]), and, in the context of iterated belief contraction, defining the notion of "contractional" in the context of belief states ([34]: if \(\Psi\) denotes a belief state and \([\beta|\alpha]\) is interpreted as "belief in \(\beta\) even in the absence of \(\alpha\)", then the contractional is defined as \(\Psi\models[\beta|\alpha]\) if and only if \(\Psi\div\alpha\models\beta\)). None of the approaches described above coincides with the framework considered in this paper.
Footnote 3: The expression “Ramsey Test” refers to the following passage from [30, p. 247]: “If two people are arguing “If \(p\) will \(q\)” and are both in doubt as to \(p\), they are adding \(p\) hypothetically to their stock of knowledge and arguing on that basis about \(q\)”.

## 5 Conclusion

We proposed a semantic characterization of AGM belief contraction in terms of a semantics consisting of a Kripke belief relation \(\mathcal{B}\) (with associated modal operator \(\mathbb{B}\)) and a Stalnaker-Lewis selection function \(f\) (with associated bimodal conditional operator \(>\)). The proposed semantics can also be used to characterize AGM belief revision (see [3]). Indeed all three operations, belief expansion, belief contraction and belief revision, can be captured within this framework. Letting \(s_{\oplus}\) denote the actual state, we have: 1. Expansion: \(\psi\in K+\phi\) if and only if \(s_{\oplus}\models\neg\mathbb{B}\neg\phi\wedge\mathbb{B}(\phi\to\psi)\), 2. Contraction: \(\psi\in K\div\phi\) if and only if \(s_{\oplus}\models\mathbb{B}\psi\wedge\mathbb{B}(\neg\phi>\psi)\), 3. Revision: \(\psi\in K*\phi\) if and only if \(s_{\oplus}\models\mathbb{B}(\phi>\psi)\). There are several issues that can be studied within this framework and are left for future work, for example, whether the extended modal language can provide a way to generalize AGM-style belief change and whether the proposed framework can accommodate iterated belief contraction/revision.

## Appendix A Appendix

In this Appendix we prove Proposition 1. In order to make the proof entirely self-contained we include the proofs of known auxiliary results (e.g. the lemmas).4

**Lemma 1**.: _Fix a model \(M=\langle S,s_{\oplus},\mathcal{B},f,V\rangle\) and let \(K=\{\phi\in\Phi_{0}:\mathcal{B}(s_{\oplus})\subseteq\|\phi\|\}\). Then \(K\) is deductively closed and consistent._

Proof.: First we show that \(K\) is deductively closed, that is, \(K=Cn(K)\). If \(\psi\in K\) then \(\psi\in Cn(K)\), because \(\psi\to\psi\) is a tautology; thus \(K\subseteq Cn(K)\). To show that \(Cn(K)\subseteq K\), let \(\psi\in Cn(K)\), that is, there exist \(\phi_{1},...,\phi_{n}\in K\) (\(n\geq 0\)) such that \((\phi_{1}\wedge...\wedge\phi_{n})\to\psi\) is a tautology. Since \(\|\phi_{1}\wedge...\wedge\phi_{n}\|=\|\phi_{1}\|\cap...\cap\|\phi_{n}\|\) and, for all \(i=1,...,n\), \(\phi_{i}\in K\) (that is, \(\mathcal{B}(s_{\oplus})\subseteq\|\phi_{i}\|\)), it follows that \(\mathcal{B}(s_{\oplus})\subseteq\|\phi_{1}\wedge...\wedge\phi_{n}\|\). Since \((\phi_{1}\wedge...\wedge\phi_{n})\to\psi\) is a tautology, \(\|(\phi_{1}\wedge...\wedge\phi_{n})\to\psi\|=S\), that is, \(\|\phi_{1}\wedge...\wedge\phi_{n}\|\subseteq\|\psi\|\). Thus \(\mathcal{B}(s_{\oplus})\subseteq\|\psi\|\), that is, \(\psi\in K\). Next we show that \(Cn(K)\neq\Phi_{0}\), that is, \(K\) is consistent. Let \(p\in\mathtt{At}\) be an atomic formula. Then \(\|p\wedge\neg p\|=\varnothing\). By seriality of \(\mathcal{B}\), \(\mathcal{B}(s_{\oplus})\neq\varnothing\) so that \(\mathcal{B}(s_{\oplus})\nsubseteq\varnothing=\|p\wedge\neg p\|\); hence \(p\wedge\neg p\notin K=Cn(K)\), that is, \(Cn(K)\neq\Phi_{0}\).

Proof of Part (A) of Proposition 1.: Given a model \(\langle S,s_{\oplus},\mathcal{B},f,V\rangle\), extend the partial contraction function defined by (1) to a full-domain contraction function \(\div^{\prime}\) by setting \(K\div^{\prime}\phi=K\div\phi=K\cap\Psi_{\neg\phi}\) when \(\|\neg\phi\|\neq\varnothing\), where \[\Psi_{\neg\phi}=\{\chi\in\Phi_{0}:f(s,\|\neg\phi\|)\subseteq\|\chi\|,\forall s\in\mathcal{B}(s_{\oplus})\},\] (A3) and \(K\div^{\prime}\phi=K\cap Cn(\neg\phi)\) when \(\|\neg\phi\|=\varnothing\). The verification of \((K-1)\)–\((K-3)\) follows directly from these definitions and property 4(c) of Definition 2; we give the details for \((K-4)\)–\((K-8)\).

\((K-4)\) We need to show that if \(\phi\) is not a tautology then \(\phi\notin K\div^{\prime}\phi\). Suppose that \(\phi\) is not a tautology, so that \(\phi\notin Cn(\neg\phi)\). If \(\|\neg\phi\|=\varnothing\) then \(K\div^{\prime}\phi=K\cap Cn(\neg\phi)\) and thus \(\phi\notin K\div^{\prime}\phi\). Next, suppose that \(\|\neg\phi\|\neq\varnothing\) so that \(K\div^{\prime}\phi=K\div\phi\).
Since \(K\div\phi=K\cap\Psi_{\neg\phi}\) (where \(\Psi_{\neg\phi}\) is given by (A3)) it is sufficient to show that \(\phi\notin\Psi_{\neg\phi}\), that is, \(f(s,\|\neg\phi\|)\not\subseteq\|\phi\|\), for some \(s\in\mathcal{B}(s_{\oplus})\). This follows from the fact that, by 4(a) of Definition 2, for every \(s\in\mathcal{B}(s_{\oplus})\), \(\varnothing\neq f(s,\|\neg\phi\|)\subseteq\|\neg\phi\|\).

\((K-5)\) We need to show that if \(\phi\in K\) then \(K\subseteq(K\div^{\prime}\phi)+\phi=Cn(K\div^{\prime}\phi\cup\{\phi\})\). Assume that \(\phi\in K\) and fix an arbitrary \(\psi\in K\). Then \((\phi\to\psi)\in K\). If \(\|\neg\phi\|=\varnothing\) then \(K\div^{\prime}\phi=K\cap Cn(\neg\phi)\). Since \(\neg\phi\in Cn(\neg\phi)\), \(\phi\to\psi\in Cn(\neg\phi)\) and thus \(\phi\to\psi\in K\div^{\prime}\phi\), from which it follows (since, by \((K-1)\), \(K\div^{\prime}\phi\) is deductively closed) that \(\psi\in Cn(K\div^{\prime}\phi\cup\{\phi\})\). Suppose now that \(\|\neg\phi\|\neq\varnothing\) so that \(K\div^{\prime}\phi=K\div\phi=K\cap\Psi_{\neg\phi}\) (where \(\Psi_{\neg\phi}\) is given by (A3)). By 4(a) of Definition 2, for every \(s\in\mathcal{B}(s_{\oplus})\), \(f(s,\|\neg\phi\|)\subseteq\|\neg\phi\|\) and thus \(f(s,\|\neg\phi\|)\subseteq\|\phi\to\psi\|=\|\neg\phi\|\cup\|\psi\|\). Hence (recall that \((\phi\to\psi)\in K\)) \((\phi\to\psi)\in K\div\phi\) so that \(\psi\in Cn(K\div\phi\cup\{\phi\})\).

\((K-6)\) We need to show that if \(\phi\leftrightarrow\psi\) is a tautology then \(K\div^{\prime}\phi=K\div^{\prime}\psi\). Assume that \(\phi\leftrightarrow\psi\) is a tautology. Then \(Cn(\neg\phi)=Cn(\neg\psi)\) and \(\|\neg\phi\|=\|\neg\psi\|\). Thus \(\|\neg\phi\|=\varnothing\) if and only if \(\|\neg\psi\|=\varnothing\), in which case \(K\div^{\prime}\phi=K\cap Cn(\neg\phi)=K\cap Cn(\neg\psi)=K\div^{\prime}\psi\). Furthermore, \(\|\neg\phi\|\neq\varnothing\) if and only if \(\|\neg\psi\|\neq\varnothing\), in which case \(\{\chi\in\Phi_{0}:f(s,\|\neg\phi\|)\subseteq\|\chi\|,\forall s\in\mathcal{B}(s_{\oplus})\}=\{\chi\in\Phi_{0}:f(s,\|\neg\psi\|)\subseteq\|\chi\|,\forall s\in\mathcal{B}(s_{\oplus})\}\), from which it follows that \(K\div\phi=K\div\psi\).

\((K-7)\) We have to show that \((K\div^{\prime}\phi)\cap(K\div^{\prime}\psi)\subseteq K\div^{\prime}(\phi\wedge\psi)\). We need to consider several cases.

Case 1: \(\|\neg\phi\|=\|\neg\psi\|=\varnothing\) so that \(\|\neg\phi\|\cup\|\neg\psi\|=\|\neg\phi\vee\neg\psi\|=\|\neg(\phi\wedge\psi)\|=\varnothing\). In this case \(K\div^{\prime}\phi=K\cap Cn(\neg\phi)\), \(K\div^{\prime}\psi=K\cap Cn(\neg\psi)\) and \(K\div^{\prime}(\phi\wedge\psi)=K\cap Cn(\neg(\phi\wedge\psi))\). Since \(Cn(\neg\phi)\cap Cn(\neg\psi)\subseteq Cn(\neg\phi\vee\neg\psi)=Cn(\neg(\phi\wedge\psi))\) it follows that \((K\div^{\prime}\phi)\cap(K\div^{\prime}\psi)\subseteq K\div^{\prime}(\phi\wedge\psi)\).

Case 2: \(\|\neg\phi\|=\varnothing\) and \(\|\neg\psi\|\neq\varnothing\), so that \(\|\neg(\phi\wedge\psi)\|=\|\neg\phi\vee\neg\psi\|=\|\neg\phi\|\cup\|\neg\psi\|=\|\neg\psi\|\neq\varnothing\). In this case \(K\div^{\prime}\phi=K\cap Cn(\neg\phi)\), \(K\div^{\prime}\psi=K\div\psi=K\cap\{\chi\in\Phi_{0}:f(s,\|\neg\psi\|)\subseteq\|\chi\|,\forall s\in\mathcal{B}(s_{\oplus})\}\) and \(K\div^{\prime}(\phi\wedge\psi)=K\div(\phi\wedge\psi)=K\cap\{\chi\in\Phi_{0}:f(s,\|\neg(\phi\wedge\psi)\|)\subseteq\|\chi\|,\forall s\in\mathcal{B}(s_{\oplus})\}\).
Since \(\|\neg(\phi\wedge\psi)\|=\|\neg\psi\|\), \(f(s,\|\neg(\phi\wedge\psi)\|)=f(s,\|\neg\psi\|)\) and thus \(K\div(\phi\wedge\psi)=K\div\psi\). Hence the inclusion \((K\div^{\prime}\phi)\cap(K\div\psi)\subseteq K\div(\phi\wedge\psi)\) reduces to \((K\div^{\prime}\phi)\cap(K\div\psi)\subseteq K\div\psi\), which is trivially true.

Case 3: \(\|\neg\phi\|\neq\varnothing\) and \(\|\neg\psi\|=\varnothing\), so that \(\|\neg\phi\vee\neg\psi\|=\|\neg\phi\|\cup\|\neg\psi\|=\|\neg\phi\|\neq\varnothing\). In this case, by an argument similar to the one used in Case 2, \(K\div^{\prime}(\phi\wedge\psi)=K\div(\phi\wedge\psi)=K\div\phi=K\div^{\prime}\phi\), so that the inclusion \((K\div^{\prime}\phi)\cap(K\div^{\prime}\psi)\subseteq K\div^{\prime}(\phi\wedge\psi)\) reduces to \((K\div\phi)\cap(K\div^{\prime}\psi)\subseteq K\div\phi\), which is trivially true.

Case 4: \(\|\neg\phi\|\neq\varnothing\) and \(\|\neg\psi\|\neq\varnothing\), so that \(\|\neg(\phi\wedge\psi)\|=\|\neg\phi\vee\neg\psi\|=\|\neg\phi\|\cup\|\neg\psi\|\neq\varnothing\). In this case \(K\div^{\prime}\phi=K\div\phi=K\cap\{\chi\in\Phi_{0}:f(s,\|\neg\phi\|)\subseteq\|\chi\|,\forall s\in\mathcal{B}(s_{\oplus})\}\), \(K\div^{\prime}\psi=K\div\psi=K\cap\{\chi\in\Phi_{0}:f(s,\|\neg\psi\|)\subseteq\|\chi\|,\forall s\in\mathcal{B}(s_{\oplus})\}\) and \(K\div^{\prime}(\phi\wedge\psi)=K\div(\phi\wedge\psi)=K\cap\{\chi\in\Phi_{0}:f(s,\|\neg(\phi\wedge\psi)\|)\subseteq\|\chi\|,\forall s\in\mathcal{B}(s_{\oplus})\}\). Fix an arbitrary \(\chi\in(K\div\phi)\cap(K\div\psi)\) (thus, in particular, \(\chi\in K\)). We need to show that \(\chi\in K\div(\phi\wedge\psi)\), that is, that, \(\forall s\in\mathcal{B}(s_{\oplus})\), \(f(s,\|\neg(\phi\wedge\psi)\|)\subseteq\|\chi\|\). Since \(\chi\in(K\div\phi)\cap(K\div\psi)\), \[f(s,\|\neg\phi\|)\subseteq\|\chi\|\text{ and }f(s,\|\neg\psi\|)\subseteq\|\chi\|.\] (A4) By Property 4(a) of Definition 2, \(f(s,\|\neg(\phi\wedge\psi)\|)\subseteq\|\neg(\phi\wedge\psi)\|=\|\neg\phi\|\cup\|\neg\psi\|\). It follows from this that \[f(s,\|\neg(\phi\wedge\psi)\|)=(f(s,\|\neg(\phi\wedge\psi)\|)\cap\|\neg\phi\|)\cup\left(f(s,\|\neg(\phi\wedge\psi)\|)\cap\|\neg\psi\|\right).\] (A5) By Property 4(d) of Definition 2 (with \(E=\|\neg\phi\|\cup\|\neg\psi\|=\|\neg(\phi\wedge\psi)\|\) and \(F=\|\neg\phi\|\)) \[f(s,\|\neg(\phi\wedge\psi)\|)\cap\|\neg\phi\|\subseteq f(s,\|\neg\phi\|).\] (A6) A second application of Property 4(d) of Definition 2 (with \(E=\|\neg\phi\|\cup\|\neg\psi\|=\|\neg(\phi\wedge\psi)\|\) and, this time, with \(F=\|\neg\psi\|\)) gives \[f(s,\|\neg(\phi\wedge\psi)\|)\cap\|\neg\psi\|\subseteq f(s,\|\neg\psi\|).\] (A7) It follows from (A5), (A6), (A7) that \(f(s,\|\neg(\phi\wedge\psi)\|)\subseteq f(s,\|\neg\phi\|)\cup f(s,\|\neg\psi\|)\) and thus, by (A4), \(f(s,\|\neg(\phi\wedge\psi)\|)\subseteq\|\chi\|\).

\((K-8)\) We need to show that if \(\phi\notin K\div^{\prime}(\phi\wedge\psi)\) then \(K\div^{\prime}(\phi\wedge\psi)\subseteq K\div^{\prime}\phi\). Assume that \(\phi\notin K\div^{\prime}(\phi\wedge\psi)\). Suppose first that \(\|\neg\phi\|=\varnothing\), that is, \(\|\phi\|=S\). Then \(\mathcal{B}(s_{\oplus})\subseteq\|\phi\|\) and thus \(\phi\in K\).
If \(\|\neg(\phi\wedge\psi)\|=\|\neg\phi\|\cup\|\neg\psi\|\neq\varnothing\) then \(K\div^{\prime}(\phi\wedge\psi)=K\div(\phi\wedge\psi)=K\cap\{\chi\in\Phi_{0}:f(s,\|\neg\phi\|\cup\|\neg\psi\|)\subseteq\|\chi\|,\forall s\in\mathcal{B}(s_{\oplus})\}\) and, since \(\|\phi\|=S\), for all \(s\in\mathcal{B}(s_{\oplus})\) we have that \(f(s,\|\neg\phi\|\cup\|\neg\psi\|)\subseteq\|\phi\|\), implying that \(\phi\in K\div(\phi\wedge\psi)\), contradicting our assumption. Thus the case where \(\|\neg\phi\|=\varnothing\) and \(\|\neg\phi\|\cup\|\neg\psi\|\neq\varnothing\) is ruled out and we are left with only two cases to consider.

Case 1: \(\|\neg\phi\|\cup\|\neg\psi\|=\varnothing\) so that \(\|\neg\phi\|=\varnothing\). In this case \(K\div^{\prime}(\phi\wedge\psi)=K\cap Cn(\neg(\phi\wedge\psi))\) and \(K\div^{\prime}\phi=K\cap Cn(\neg\phi)\). Fix an arbitrary \(\chi\in K\div^{\prime}(\phi\wedge\psi)\). Then \(\chi\in K\) and \(\chi\in Cn(\neg(\phi\wedge\psi))\). We need to show that \(\chi\in K\div^{\prime}\phi\), that is, that \(\chi\in Cn(\neg\phi)\). Since \(\chi\in Cn(\neg(\phi\wedge\psi))\), \(\neg(\phi\wedge\psi)\rightarrow\chi\) is a tautology. Thus, since \(\neg\phi\rightarrow\neg(\phi\wedge\psi)\) is also a tautology, \(\neg\phi\rightarrow\chi\) is a tautology and thus \(\chi\in Cn(\neg\phi)\).

Case 2: \(\|\neg\phi\|\neq\varnothing\) and thus \(\|\neg(\phi\wedge\psi)\|=\|\neg\phi\|\cup\|\neg\psi\|\neq\varnothing\). Then \(K\div^{\prime}(\phi\wedge\psi)=K\div(\phi\wedge\psi)=K\cap\{\chi\in\Phi_{0}:f(s,\|\neg\phi\|\cup\|\neg\psi\|)\subseteq\|\chi\|,\forall s\in\mathcal{B}(s_{\oplus})\}\) and \(K\div^{\prime}\phi=K\div\phi=K\cap\{\chi\in\Phi_{0}:f(s,\|\neg\phi\|)\subseteq\|\chi\|,\forall s\in\mathcal{B}(s_{\oplus})\}\). Recall the assumption that \(\phi\notin K\div(\phi\wedge\psi)\). Then two sub-cases are possible.

Case 2.1: \(\phi\notin K\), that is, \(\mathcal{B}(s_{\oplus})\cap\|\neg\phi\|\neq\varnothing\). Then, by 4(c) of Definition 2, \[\forall s\in\mathcal{B}(s_{\oplus}),f(s,\|\neg\phi\|)\subseteq\mathcal{B}(s_{\oplus})\cap\|\neg\phi\|\subseteq\mathcal{B}(s_{\oplus}).\] (A8) Fix an arbitrary \(\chi\in K\div(\phi\wedge\psi)\). Then \(\chi\in K\), that is, \(\mathcal{B}(s_{\oplus})\subseteq\|\chi\|\) and thus, by (A8), \(\forall s\in\mathcal{B}(s_{\oplus})\), \(f(s,\|\neg\phi\|)\subseteq\|\chi\|\) so that \(\chi\in K\div\phi\).

Case 2.2: \(\phi\in K\) and \(B_{\neg(\phi\wedge\psi),\neg\phi}\neq\varnothing\), where \(B_{\neg(\phi\wedge\psi),\neg\phi}=\{s\in\mathcal{B}(s_{\oplus}):f(s,\|\neg\phi\|\cup\|\neg\psi\|)\cap\|\neg\phi\|\neq\varnothing\}\).5 Then, by 4(e.1) of Definition 2 (with \(E=\|\neg\phi\|\cup\|\neg\psi\|\) and \(F=\|\neg\phi\|\)) Footnote 5: Note that the case where \(\phi\in K\) and \(B_{\neg(\phi\wedge\psi),\neg\phi}=\varnothing\) is ruled out by our initial assumption that \(\phi\notin K\div(\phi\wedge\psi)\). In fact, \(B_{\neg(\phi\wedge\psi),\neg\phi}=\varnothing\) means that, \(\forall s\in\mathcal{B}(s_{\oplus}),f(s,\|\neg\phi\|\cup\|\neg\psi\|)\cap\|\neg\phi\|=\varnothing\), that is, \(f(s,\|\neg\phi\|\cup\|\neg\psi\|)\subseteq\|\phi\|\), which, in conjunction with the hypothesis that \(\phi\in K\), yields \(\phi\in K\div(\phi\wedge\psi)\).
\[\forall s\in B_{\neg(\phi\wedge\psi),\neg\phi},\ f(s,\|\neg\phi\|)\subseteq f(s,\|\neg\phi\|\cup\|\neg\psi\|)\cap\|\neg\phi\|\] (A9) and, by 4(e.2) of Definition 2 (again, with \(E=\|\neg\phi\|\cup\|\neg\psi\|\) and \(F=\|\neg\phi\|\)), \[\forall s\in\mathcal{B}(s_{\oplus})\setminus B_{\neg(\phi\wedge\psi),\neg\phi},\ f(s,\|\neg\phi\|)\subseteq f(s^{\prime},\|\neg\phi\|)\text{ for some }s^{\prime}\in B_{\neg(\phi\wedge\psi),\neg\phi}.\] (A10) Fix an arbitrary \(\chi\in K\div(\phi\wedge\psi)\). Then \(\chi\in K\) and (recall that \(\|\neg(\phi\wedge\psi)\|=\|\neg\phi\|\cup\|\neg\psi\|\)) \(f(s,\|\neg\phi\|\cup\|\neg\psi\|)\subseteq\|\chi\|\), \(\forall s\in\mathcal{B}(s_{\oplus})\); it follows from this, (A9) and (A10) that, \(\forall s\in\mathcal{B}(s_{\oplus})\), \(f(s,\|\neg\phi\|)\subseteq\|\chi\|\). Thus \(\chi\in K\div\phi\).

Before we proceed to the proof of Part (B) of Proposition 1, we establish the following lemma.

**Lemma 2**.: _Let \(A\subseteq\Phi_{0}\) be such that \(A=Cn(A)\). Then, \(\forall\alpha\in\Phi_{0}\), \(\|Cn\left(A\cup\{\alpha\}\right)\|=\|A\|\cap\|\alpha\|\)._

Proof.: Since \(A\) is deductively closed, \(\forall\beta\in\Phi_{0}\), \[\beta\in Cn\left(A\cup\{\alpha\}\right)\text{ if and only if }(\alpha\to\beta)\in A.\] (A12) First we show that \(\|A\|\cap\|\alpha\|\subseteq\|Cn\left(A\cup\{\alpha\}\right)\|\). Fix an arbitrary \(s\in\|A\|\cap\|\alpha\|\); we need to show that \(s\in\|Cn\left(A\cup\{\alpha\}\right)\|\), that is, that \(\forall\beta\in Cn\left(A\cup\{\alpha\}\right)\), \(\beta\in s\). Since \(s\in\|\alpha\|\), \(\alpha\in s\). Fix an arbitrary \(\beta\in Cn\left(A\cup\{\alpha\}\right)\); then, by (A12), \((\alpha\to\beta)\in A\); thus, since \(s\in\|A\|\), \((\alpha\to\beta)\in s\). Hence, since both \(\alpha\) and \(\alpha\to\beta\) belong to \(s\) and \(s\) is deductively closed, \(\beta\in s\). Next we show that \(\|Cn\left(A\cup\{\alpha\}\right)\|\subseteq\|A\|\cap\|\alpha\|\). Let \(s\in\|Cn\left(A\cup\{\alpha\}\right)\|\). Then, since \(\alpha\in Cn\left(A\cup\{\alpha\}\right)\), \(\alpha\in s\), that is, \(s\in\|\alpha\|\). It remains to show that \(s\in\|A\|\), that is, that, for every \(\beta\in A\), \(\beta\in s\). Fix an arbitrary \(\beta\in A\); then, since \(A\) is deductively closed, \((\alpha\to\beta)\in A\). Thus, by (A12), \(\beta\in Cn\left(A\cup\{\alpha\}\right)\) and thus, since \(s\in\|Cn\left(A\cup\{\alpha\}\right)\|\), \(\beta\in s\).

Proof of Part (B) of Proposition 1.: We need to show that if \(K\subset\Phi_{0}\) is consistent and deductively closed and \(\div:\Phi_{0}\to 2^{\Phi_{0}}\) is an AGM belief contraction function based on \(K\), then there exists a model \(\langle S,s_{\oplus},\mathcal{B},f,V\rangle\) such that \(K=\{\phi\in\Phi_{0}:\mathcal{B}(s_{\oplus})\subseteq\|\phi\|\}\) and, for all \(\phi,\psi\in\Phi_{0}\), \(\psi\in K\div\phi\) if and only if (A1) is satisfied. Define the following model \(\langle S,s_{\oplus},\mathcal{B},f,V\rangle\): 1. \(S\) is the set of maximally consistent sets of formulas in \(\Phi_{0}\). 2. The valuation \(V:\mathtt{At}\to 2^{S}\) is defined by \(V(p)=\{s\in S:p\in s\}\), so that, for every \(\phi\in\Phi_{0}\), \(\|\phi\|=\{s\in S:\phi\in s\}\). If \(\Psi\subseteq\Phi_{0}\), define \(\|\Psi\|=\{s\in S:\forall\phi\in\Psi,\phi\in s\}\). 3. Choose an arbitrary \(s_{\oplus}\in S\) and define \(\mathcal{B}(s_{\oplus})=\|K\|\). 4. Let \(\mathcal{E}=\{E\subseteq S:\varnothing\neq E=\|\phi\|\text{ for some }\phi\in\Phi_{0}\}\).
Define \(f:\mathcal{B}(s_{\oplus})\times\mathcal{E}\to 2^{S}\) as follows: \[\forall s\in\mathcal{B}(s_{\oplus}),\;f(s,\|\phi\|)=\|K\div\neg\phi\|\cap\|\phi\|.\] (A11)

**Remark 1**.: _If \(\phi\) is a tautology then \(\neg\phi\) is a contradiction and thus (since, by hypothesis, \(K\) is consistent) \(\neg\phi\notin K\). It follows from \((K-2)\) and \((K-3)\) that \(K\div\neg\phi=K\). Furthermore, since \(\phi\) is a tautology and \(K\) is deductively closed, \(\phi\in K\), that is \(\|K\|\subseteq\|\phi\|\) so that \(\|K\|\cap\|\phi\|=\|K\|\). Hence, by (A11), \(\forall s\in\mathcal{B}(s_{\oplus}),\,f(s,\|\phi\|)=\|K\|\). On the other hand, if \(\neg\phi\) is a tautology then \(\|\phi\|=\varnothing\) and thus \(\|\phi\|\notin\mathcal{E}\), that is, \(\|\phi\|\) is not in the domain of \(f\)._

First we show that the selection function defined in (A11) satisfies Properties 4(a)-4(e) of Definition 2. In view of Remark 1, we can restrict attention to contingent formulas, that is, to formulas \(\phi\) such that neither \(\phi\) nor \(\neg\phi\) is a tautology. Denote by \(\Phi_{cont}\subseteq\Phi_{0}\) the set of contingent formulas. Recall that \(S\) is the set of maximally consistent sets of formulas in \(\Phi_{0}\) and, for \(A\subseteq\Phi_{0}\), \(\|A\|=\{s\in S:\chi\in s,\,\forall\chi\in A\}\).

Property 4(a): We need to show that if \(\phi\in\Phi_{cont}\) then \(\|K\div\neg\phi\|\cap\|\phi\|\subseteq\|\phi\|\), which is obviously true, and \(\|K\div\neg\phi\|\cap\|\phi\|\neq\varnothing\). Since \(\phi\in\Phi_{cont}\), \(\|\phi\|\neq\varnothing\) and, by \((K-4)\), \(\neg\phi\notin K\div\neg\phi\). By \((K-1)\) \(K\div\neg\phi=Cn(K\div\neg\phi)\) and thus \(\neg\phi\notin Cn(K\div\neg\phi)\), that is, \(K\div\neg\phi\) is consistent and hence \(\|K\div\neg\phi\|\neq\varnothing\). Moreover, since \(\neg\phi\notin Cn(K\div\neg\phi)\), the set \(K\div\neg\phi\cup\{\phi\}\) is consistent and is thus contained in some maximally consistent set; hence \(\|K\div\neg\phi\|\cap\|\phi\|\neq\varnothing\).

Property 4(b): Fix an arbitrary \(s\in\mathcal{B}(s_{\oplus})\) and an arbitrary \(\phi\in\Phi_{cont}\). We need to show that if \(s\in\|\phi\|\) then \(s\in f(s,\|\phi\|)=\|K\div\neg\phi\|\cap\|\phi\|\). By construction, \(\mathcal{B}(s_{\oplus})=\|K\|\); thus, \(s\in\|K\|\). By \((K-2)\), \(K\div\neg\phi\subseteq K\) so that \(\|K\|\subseteq\|K\div\neg\phi\|\). Hence \(s\in\|K\div\neg\phi\|\) and thus \(s\in\|K\div\neg\phi\|\cap\|\phi\|\).

Property 4(c): We need to show that if \(\mathcal{B}(s_{\oplus})\cap\|\phi\|\neq\varnothing\) then (since \(\mathcal{B}(s_{\oplus})=\|K\|\) and, \(\forall s\in\mathcal{B}(s_{\oplus})\), \(f(s,\|\phi\|)=\|K\div\neg\phi\|\cap\|\phi\|\)) \(\|K\div\neg\phi\|\cap\|\phi\|\subseteq\|K\|\cap\|\phi\|\). If \(\|K\|\cap\|\phi\|\neq\varnothing\) then \(\neg\phi\notin K\) and thus, by \((K-3)\), \(K\subseteq K\div\neg\phi\), so that \(\|K\div\neg\phi\|\subseteq\|K\|\) and thus \(\|K\div\neg\phi\|\cap\|\phi\|\subseteq\|K\|\cap\|\phi\|\).

Property 4(d): We need to show that if \(\phi\in\Phi_{cont}\) and \(\psi\in\Phi_{0}\), then \(\forall s\in\mathcal{B}(s_{\oplus})\), \(f(s,\|\phi\|)\cap\|\psi\|\subseteq f(s,\|\phi\|\cap\|\psi\|)\), that is, using (A11) and the fact that \(\|\phi\|\cap\|\psi\|=\|\phi\wedge\psi\|\), \[\|K\div\neg\phi\|\cap\|\phi\|\cap\|\psi\|\subseteq\|K\div\neg(\phi\wedge\psi)\|\cap\|\phi\wedge\psi\|\] (A13) By \((K-7)\), \(\forall\alpha,\beta\in\Phi_{0},\,(K\div\alpha)\cap(K\div\beta)\subseteq K\div(\alpha\wedge\beta)\).
Thus applying \((K-7)\) to \(\alpha=\neg(\phi\wedge\psi)\) and \(\beta=\phi\to\psi\) we get \[K\div\neg(\phi\wedge\psi)\cap K\div(\phi\to\psi)\subseteq K\div(\neg(\phi\wedge\psi)\wedge(\phi\to\psi))\] (A14) Since \(\neg(\phi\wedge\psi)\wedge(\phi\to\psi)\) is logically equivalent to \(\neg\phi\), by \((K-6)\) \(K\div(\neg(\phi\wedge\psi)\wedge(\phi\to\psi))=K\div\neg\phi\). Thus, by (A14) \[K\div\neg(\phi\wedge\psi)\cap K\div(\phi\to\psi)\subseteq K\div\neg\phi.\] (A15) Next we show that \[Cn\left(K\div\neg(\phi\wedge\psi)\cup\{\phi\wedge\psi\}\right)\subseteq Cn\left(K\div\neg\phi\cup\{\phi\wedge\psi\}\right).\] (A16) Fix an arbitrary \(\chi\in Cn\left(K\div\neg(\phi\wedge\psi)\cup\{\phi\wedge\psi\}\right)\). Then, since, by \((K-1)\), \(K\div\neg(\phi\wedge\psi)\) is deductively closed, \[((\phi\wedge\psi)\to\chi)\in K\div\neg(\phi\wedge\psi).\] (A17) By \((K-2)\), \(K\div\neg(\phi\wedge\psi)\subseteq K\) and thus, by (A17), \[((\phi\wedge\psi)\to\chi)\in K.\] (A18) Next we show that \[((\phi\wedge\psi)\to\chi)\in K\div(\phi\to\psi).\] (A19) If \((\phi\to\psi)\notin K\) then, by \((K-3)\), \(K\subseteq K\div(\phi\to\psi)\) and thus (A19) follows from (A18). If \((\phi\to\psi)\in K\) then, by \((K-5)\), \(K\subseteq Cn(K\div(\phi\to\psi)\cup\{\phi\to\psi\})\) so that, by (A18), \(((\phi\wedge\psi)\to\chi)\in Cn\left(K\div(\phi\to\psi)\cup\{\phi\to\psi\}\right)\), that is (since, by \((K-1)\), \(K\div(\phi\to\psi)\) is deductively closed) \((\phi\to\psi)\to((\phi\wedge\psi)\to\chi)\in K\div(\phi\to\psi)\). Since \((\phi\to\psi)\to((\phi\wedge\psi)\to\chi)\) is logically equivalent to \(((\phi\to\psi)\wedge(\phi\wedge\psi))\to\chi\), which, in turn, is logically equivalent to \((\phi\wedge\psi)\to\chi\), (A19) is satisfied. It follows from (A17), (A19) and (A15) that \(\big{(}(\phi\wedge\psi)\to\chi\big{)}\in K\div\neg\phi\), that is, that \(\chi\in Cn\big{(}K\div\neg\phi\cup\{\phi\wedge\psi\}\big{)}\), thus establishing (A16). From (A16) we get that \[\|Cn\left(K\div\neg\phi\cup\{\phi\wedge\psi\}\right)\|\subseteq\|Cn\left(K\div\neg(\phi\wedge\psi)\cup\{\phi\wedge\psi\}\right)\|\] (A20) By Lemma 2 (with \(A=K\div\neg\phi\) and \(\alpha=\phi\wedge\psi\)), \(\|Cn\left(K\div\neg\phi\cup\{\phi\wedge\psi\}\right)\|=\|K\div\neg\phi\|\cap\|\phi\wedge\psi\|\), which in turn (since \(\|\phi\wedge\psi\|=\|\phi\|\cap\|\psi\|\)) is equal to \(\|K\div\neg\phi\|\cap\|\phi\|\cap\|\psi\|\). By Lemma 2 again (with \(A=K\div\neg(\phi\wedge\psi)\) and \(\alpha=\phi\wedge\psi\)), \(\|Cn(K\div\neg(\phi\wedge\psi)\cup\{\phi\wedge\psi\})\|=\|K\div\neg(\phi\wedge\psi)\|\cap\|\phi\wedge\psi\|\). Hence (A13) follows from (A20).

Property 4(e): Since, by (A11), \(\forall s,s^{\prime}\in\mathcal{B}(s_{\oplus})\), \(f(s,\|\phi\|)=f(s^{\prime},\|\phi\|)=\|K\div\neg\phi\|\cap\|\phi\|\), it is sufficient to show that if \(\|K\div\neg\phi\|\cap\|\phi\|\cap\|\psi\|\neq\varnothing\) then \(\|K\div\neg(\phi\wedge\psi)\|\cap\|\phi\wedge\psi\|\subseteq\|K\div\neg\phi\|\cap\|\phi\|\cap\|\psi\|\). Assume that \(\|K\div\neg\phi\|\cap\|\phi\|\cap\|\psi\|=\|K\div\neg\phi\|\cap\|\phi\wedge\psi\|\neq\varnothing\).
Then \[\neg(\phi\wedge\psi)\notin K\div\neg\phi.\] (A21) Since \(\neg\phi\) is logically equivalent to \(\neg(\phi\wedge\psi)\wedge\neg\phi\), by \((K-6)\) \[K\div\neg\phi=K\div\left(\neg(\phi\wedge\psi)\wedge\neg\phi\right).\] (A22) Thus, by (A21) and (A22), \[\neg(\phi\wedge\psi)\notin K\div\left(\neg(\phi\wedge\psi)\wedge\neg\phi\right).\] (A23) By \((K-8)\), \(\forall\alpha,\beta\in\Phi_{0}\), if \(\alpha\notin K\div(\alpha\wedge\beta)\) then \(K\div(\alpha\wedge\beta)\subseteq K\div\alpha\). Thus, by (A23) and \((K-8)\) (with \(\alpha=\neg(\phi\wedge\psi)\) and \(\beta=\neg\phi\)), \(K\div(\neg(\phi\wedge\psi)\wedge\neg\phi)\subseteq K\div\neg(\phi\wedge\psi)\). It follows from this and (A22) that \(K\div\neg\phi\subseteq K\div\neg(\phi\wedge\psi)\) and thus \[\|K\div\neg(\phi\wedge\psi)\|\subseteq\|K\div\neg\phi\|.\] (A24) Intersecting both sides of (A24) with \(\|\phi\wedge\psi\|=\|\phi\|\cap\|\psi\|\) we get \(\|K\div\neg(\phi\wedge\psi)\|\cap\|\phi\wedge\psi\|\subseteq\|K\div\neg\phi\|\cap\|\phi\|\cap\|\psi\|\), as desired.

To complete the proof of Part (B) of Proposition 1 we need to show that \[\begin{array}{ll}\psi\in K\div\phi&\mbox{if and only if}&(1)\,\mathcal{B}(s_{\oplus})\subseteq\|\psi\|,\mbox{ and}\\ &(2)\,\forall s\in\mathcal{B}(s_{\oplus}),f(s,\|\neg\phi\|)\subseteq\|\psi\|.\end{array}\] By (A11), \(\forall s\in\mathcal{B}(s_{\oplus})=\|K\|\), \(f(s,\|\neg\phi\|)=\|K\div\neg\neg\phi\|\cap\|\neg\phi\|=\|K\div\phi\|\cap\|\neg\phi\|\) (using \((K-6)\), since \(\neg\neg\phi\) is logically equivalent to \(\phi\)). Thus we have to show that \[\psi\in K\div\phi\mbox{ if and only if }\|K\|\subseteq\|\psi\|\mbox{ and }\|K\div\phi\|\cap\|\neg\phi\|\subseteq\|\psi\|.\] (A25) First we establish a lemma.

**Lemma 3**.: \(\forall\phi\in\Phi_{0}\)_:_ (i) _if_ \(A\subseteq\Phi_{0}\) _is such that_ \(A=Cn(A)\)_, then_ \(A=Cn\left(A\cup\{\phi\}\right)\cap Cn(A\cup\{\neg\phi\})\)_;_ (ii) \(K\div\phi=K\cap Cn(K\div\phi\cup\{\neg\phi\})\)_._

Proof.: (i) Let \(A\subseteq\Phi_{0}\) be such that \(A=Cn(A)\). Since \(A\subseteq Cn\left(A\cup\{\phi\}\right)\) and \(A\subseteq Cn\left(A\cup\{\neg\phi\}\right)\), it follows that \(A\subseteq Cn\left(A\cup\{\phi\}\right)\cap Cn\left(A\cup\{\neg\phi\}\right)\). Conversely, suppose that \(\chi\in Cn\left(A\cup\{\phi\}\right)\cap Cn\left(A\cup\{\neg\phi\}\right)\). Then both \(\phi\to\chi\) and \(\neg\phi\to\chi\) belong to \(A\) and thus so does their conjunction. Since \((\phi\to\chi)\wedge(\neg\phi\to\chi)\) is logically equivalent to \(\chi\), it follows that \(\chi\in A\). (ii) We need to consider two cases. Case 1: \(\phi\in K\). Then, by \((K-5)\), \(K\subseteq Cn\left(K\div\phi\cup\{\phi\}\right)\). By \((K-2)\), \(K\div\phi\subseteq K\), so that \(Cn\left(K\div\phi\cup\{\phi\}\right)\subseteq Cn\left(K\cup\{\phi\}\right)=Cn(K)=K\) (by hypothesis, \(K\) is deductively closed). Thus \[K=Cn\left(K\div\phi\cup\{\phi\}\right)\] (A26) By Part (\(i\)) (with \(A=K\div\phi\), which, by \((K-1)\), is deductively closed), \[K\div\phi=Cn\left(K\div\phi\cup\{\phi\}\right)\cap Cn\left(K\div\phi\cup\{\neg\phi\}\right)\] (A27) Thus, by (A26) and (A27), \(K\div\phi=K\cap Cn(K\div\phi\cup\{\neg\phi\})\). Case 2: \(\phi\notin K\). Then, by \((K-2)\) and \((K-3)\), \[K\div\phi=K\] (A28) By Part \((i)\) (with \(A=K\)) \[K=Cn\left(K\cup\{\phi\}\right)\cap Cn\left(K\cup\{\neg\phi\}\right)\] (A29) From (A29) we get that \(K\cap Cn\left(K\cup\{\neg\phi\}\right)=Cn\left(K\cup\{\phi\}\right)\cap Cn\left(K\cup\{\neg\phi\}\right)=K\).
Thus, by (A28), \(K\div\phi=K\cap Cn\left(K\cup\{\neg\phi\}\right)\), from which, by using (A28) again to replace the second instance of \(K\) with \(K\div\phi\), we get \(K\div\phi=K\cap Cn\left(K\div\phi\cup\{\neg\phi\}\right)\).

Now we are ready to prove (A25), namely that \[\psi\in K\div\phi\text{ if and only if }\|K\|\subseteq\|\psi\|\text{ and }\|Cn\left(K\div\phi\cup\{\neg\phi\}\right)\|\subseteq\|\psi\|\] (by Lemma 2, \(\|Cn\left(K\div\phi\cup\{\neg\phi\}\right)\|=\|K\div\phi\|\cap\|\neg\phi\|\)). Let \(\psi\in K\div\phi\). By \((ii)\) of Lemma 3, \(K\div\phi=K\cap Cn\left(K\div\phi\cup\{\neg\phi\}\right)\); thus \(\psi\in K\), that is, \(\|K\|\subseteq\|\psi\|\), and \(\psi\in Cn\left(K\div\phi\cup\{\neg\phi\}\right)\), that is, \(\|Cn\left(K\div\phi\cup\{\neg\phi\}\right)\|\subseteq\|\psi\|\). Conversely, suppose that \(\|K\|\subseteq\|\psi\|\) and \(\|Cn\left(K\div\phi\cup\{\neg\phi\}\right)\|\subseteq\|\psi\|\), that is, \(\psi\in K\cap Cn\left(K\div\phi\cup\{\neg\phi\}\right)\). Then, by \((ii)\) of Lemma 3, \(\psi\in K\div\phi\). \(\Box\)
2310.06937
Liouville-type results for time-dependent stratified water flows over variable bottom in the $β$-plane approximation
We consider here time-dependent three-dimensional stratified geophysical water flows of finite depth over a variable bottom with a free surface and an interface (separating two layers of constant and different densities). Under the assumption that the vorticity vectors in the two layers are constant, we prove that bounded solutions to the three-dimensional water waves equations in the $\beta$-plane approximation exist if and only if one of the horizontal components of the velocity, as well as its vertical component, are zero; the other horizontal component being constant. Moreover, the interface is flat, the free surface has a traveling character in the horizontal direction of the nonvanishing velocity component, being of general type in the other horizontal direction, and the pressure is hydrostatic in both layers. Unlike previous studies of three-dimensional flows with constant vorticity in each layer, we consider a non-flat bottom boundary and different constant vorticity vectors for the upper and lower layer.
Calin Martin
2023-10-10T18:49:40Z
http://arxiv.org/abs/2310.06937v1
Liouville-type results for time-dependent stratified water flows over variable bottom in the \(\beta\)-plane approximation ###### Abstract We consider here time-dependent three-dimensional stratified geophysical water flows of finite depth over a variable bottom with a free surface and an interface (separating two layers of constant and different densities). Under the assumption that the vorticity vectors in the two layers are constant, we prove that bounded solutions to the three-dimensional water waves equations in the \(\beta\)-plane approximation exist if and only if one of the horizontal components of the velocity, as well as its vertical component, are zero; the other horizontal component being constant. Moreover, the interface is flat, the free surface has a traveling character in the horizontal direction of the nonvanishing velocity component, being of general type in the other horizontal direction, and the pressure is hydrostatic in both layers. Unlike previous studies of three-dimensional flows with constant vorticity in each layer, we consider a non-flat bottom boundary and different constant vorticity vectors for the upper and lower layer. **Keywords**: Time-dependent three-dimensional gravity water flows, stratification, \(\beta\)-plane effects, variable bottom, piecewise constant vorticity. **Mathematics Subject Classification:** 35A01, 35Q35, 35R35, 76B15, 76B70. ## 1 Introduction Geophysical fluid dynamics (GFD) is the study of fluid motion characterized by the incorporation of Coriolis effects in the governing equations. The Coriolis force is a result of the Earth's rotation and plays a substantial role in the resulting dynamics. While the nonlinear GFD equations are able to capture a plethora of oceanic and atmospheric flows [12, 13, 14, 21, 22, 23, 24], their high level of difficulty greatly challenges the available mathematical techniques. A possible course of action is recourse to simpler approximate models that are justified by oceanographical considerations. One of these approximations refers to the linearization of Coriolis forces in the tangent plane approximation, a procedure that, despite the spherical shape of the Earth, is valid due to the moderate spatial scale of the motion: the region occupied by the fluid can be approximated by a tangent plane and the linear term of the Taylor expansion captures the \(\beta\)-plane effect, cf. the discussions in Constantin [9], Cushman-Roisin & Beckers [26], Gill [38], Pedlosky [63], Salmon [66]. The paper [9], by Constantin, presented for the first time three-dimensional explicit and exact solutions (in Lagrangian coordinates) to the equatorial \(\beta\)-plane model. These solutions described equatorially trapped waves propagating eastward in a stratified inviscid flow. The latter \(\beta\)-plane model was modified to incorporate centripetal terms by Constantin & Johnson [13], whereby the authors also established the existence of equatorial purely azimuthal solutions to the GFD equations in spherical, cylindrical and \(\beta\)-plane coordinates. A further significant extension was realized by Henry [40] who presented an exact and explicit solution (to the GFD equations in the \(\beta\)-plane approximation with Coriolis and centripetal forces) representing equatorially trapped waves propagating in the presence of a constant background current.
Yet another improvement in the realm of the GFD equations in the \(\beta\)-plane was obtained by Henry [41] and concerns the addition of a gravitational-correction term in the tangent plane approximation. Historically, this type of approximation was proposed by Rossby et al. [65] as a conceptual model for motion on a sphere. Recent results concerning flows in the \(\beta\)-plane approximation were obtained in [28, 31]. For subsequent solutions to the GFD equations concerning a variety of geophysical scenarios, we refer the reader to [2, 10, 11, 16, 19, 42, 45, 46, 55, 58, 60, 61, 62]. In the quest to derive explicit and exact solutions to the GFD equations in the \(\beta\)-plane approximation with Coriolis and centripetal terms, we start from the assumption that the vorticity vector is constant in each of the two layers of the fluid domain, which is assumed to be stratified: the water flow, bounded below by a (varying) bottom boundary and above by the free surface, is split by an interface into a layer adjacent to the bottom of some constant density (say \(\rho\)) which sits below the layer adjacent to the free surface of constant density \(\tilde{\rho}<\rho\). Stratification is an important aspect in ocean science: stratified layers act as a barrier to the mixing of water, which impacts the exchange of heat, carbon, oxygen and other nutrients, cf. [48]. The discontinuous stratification (of the type we consider here) gives rise to internal waves, an aspect that has attracted much attention lately from the perspective of exact solutions describing large-scale geophysical (and non-geophysical) flows [1, 12, 13, 14, 17, 18, 32, 33, 34, 35, 39, 42, 58] or of qualitative studies of intricate features underlying the dynamics of coupled surface and internal waves, cf. [43]. The structural consequences of constant vorticity in water flows satisfying the three-dimensional equations were discussed in a handful of papers by Constantin & Kartashova [5], Constantin [7], Craig [25], Wahlen [70], Stuhlmeier [67], Martin [53, 54, 56, 57]: the main outcome is that the occurrence of constant vorticity in a flow that satisfies the three-dimensional nonlinear governing equations is possible if and only if the flow is two-dimensional and if the vorticity vector has only one non-vanishing component that points in the horizontal direction orthogonal to the direction of wave propagation. In particular, constant vorticity gives a good description of tidal currents; cf. [64]. These are the most regular and predictable currents, and on areas of the continental shelf and in many coastal inlets they are the most significant currents; cf. [50]. 
Moreover, the interface is flat, the free surface has a traveling character in the horizontal direction of the nonvanishing velocity component, being of general type in the other horizontal direction, and the pressure is hydrostatic in both layers. We would like to underline that allowing for an extra \(x\)-dependence in the bottom function \(b\) leads only to unbounded solutions, cf. Remark 2.3. We would also like to note that, unlike previous Liouville-type results [53, 54, 5, 56, 57, 59, 67, 70, 4], our analysis here does not require the bottom boundary to be flat. It is also worth noting that results similar in outcome, but for the Euler or Navier-Stokes equations without a free boundary, were obtained recently, cf. [36, 37]. The ethos of the previously mentioned studies is that under integrability conditions, or conditions concerning the mean oscillation of the velocity field, the latter vanishes identically or displays less complexity.

## 2 The three-dimensional water wave problem in the \(\beta\)-plane approximation

We choose to work in a rotating frame with the origin at a point on the Earth's surface, which is approximated by a sphere of radius \(R=6378\) km, and denote with \((x,y,z)\) the Cartesian coordinates, where the spatial variable \(x\) refers to longitude, the variable \(y\) to latitude, and the variable \(z\) stands for the local vertical, cf. Fig. 1. The fluid domain, bounded below by a bottom boundary \(z=b(x,y)\) (for some differentiable function \((x,y)\to b(x,y)\)) and above by the free surface \(z=\tilde{\eta}(x,y,t)\), is split by an interface, denoted \(z=\eta(x,y,t)\), into two layers: a layer adjacent to the bottom, written as \[D_{\eta}(t):=\{(x,y,z):b(x,y)\leq z\leq\eta(x,y,t)\},\] which sits below a layer adjacent to the surface, written as \[D_{\eta,\tilde{\eta}}(t):=\{(x,y,z):\eta(x,y,t)\leq z\leq\tilde{\eta}(x,y,t)\}.\] We denote with \(\mathbf{u}(x,y,z,t)=(u(x,y,z,t),v(x,y,z,t),w(x,y,z,t))\) (respectively \(\tilde{\mathbf{u}}(x,y,z,t)=(\tilde{u}(x,y,z,t),\tilde{v}(x,y,z,t),\tilde{w}(x,y,z,t))\)) the velocity field in \(D_{\eta}(t)\) (respectively in \(D_{\eta,\tilde{\eta}}(t)\)), with \(P(x,y,z,t)\) (respectively \(\tilde{P}(x,y,z,t)\)) the pressure in \(D_{\eta}(t)\) (respectively in \(D_{\eta,\tilde{\eta}}\)), and with \(g\) the gravitational acceleration. We denote with \(\rho\) the density in the lower layer \(D_{\eta}\) and with \(\tilde{\rho}\) the density in the upper layer \(D_{\eta,\tilde{\eta}}\). Then the motion of incompressible and inviscid three-dimensional water flows in the \(\beta\)-plane approximation near the Equator (e.g. Constantin [10], Constantin & Johnson [13] and Dellar [29]) is governed in \(D_{\eta}\) by the Euler equations \[\begin{split} u_{t}+uu_{x}+vu_{y}+wu_{z}+2\omega w-\beta yv&=-\frac{P_{x}}{\rho}\\ v_{t}+uv_{x}+vv_{y}+wv_{z}+\beta yu+\omega^{2}y&=-\frac{P_{y}}{\rho}\\ w_{t}+uw_{x}+vw_{y}+ww_{z}-2\omega u-\omega^{2}R&=-\frac{P_{z}}{\rho}-g\end{split} \tag{2.1}\] and by the incompressibility condition \[u_{x}+v_{y}+w_{z}=0. \tag{2.2}\] Above, \(t\) denotes the time variable, \(\omega=7.29\cdot 10^{-5}\) rad \(s^{-1}\) is the rotational speed of the Earth around the polar axis toward the east, and \(\beta:=\frac{2\omega}{R}\). **Remark 2.1**.: _The \(\beta\)-plane effect, captured by the quantity \(\beta y\), appears in the first and second equations in (2.1), and is a result of linearizing the Coriolis force in the tangent plane approximation. 
In spite of the spherical shape of the Earth, the aforementioned linearization procedure is legitimate due to the moderate spatial scale of the motion, cf. the discussions in Cushman-Roisin & Beckers [26] and Constantin [9]. We would like to note that the \(\beta\)-plane equations (2.1) represent a consistent approximation to the governing equations only near the Equator, cf. Dellar [29]._

Figure 1: The rotating frame of reference, with the x axis chosen horizontally due east, the y axis chosen horizontally due north, and the z axis chosen upward: x corresponds to longitude, y to latitude, and z to the local vertical.

Likewise, in \(D_{\eta,\tilde{\eta}}\) the flow motion obeys the equations \[\begin{split}\tilde{u}_{t}+\tilde{u}\tilde{u}_{x}+\tilde{v}\tilde{u}_{y}+\tilde{w}\tilde{u}_{z}+2\omega\tilde{w}-\beta y\tilde{v}&=-\frac{\tilde{P}_{x}}{\tilde{\rho}}\\ \tilde{v}_{t}+\tilde{u}\tilde{v}_{x}+\tilde{v}\tilde{v}_{y}+\tilde{w}\tilde{v}_{z}+\beta y\tilde{u}+\omega^{2}y&=-\frac{\tilde{P}_{y}}{\tilde{\rho}}\\ \tilde{w}_{t}+\tilde{u}\tilde{w}_{x}+\tilde{v}\tilde{w}_{y}+\tilde{w}\tilde{w}_{z}-2\omega\tilde{u}-\omega^{2}R&=-\frac{\tilde{P}_{z}}{\tilde{\rho}}-g\end{split} \tag{2.3}\] and \[\tilde{u}_{x}+\tilde{v}_{y}+\tilde{w}_{z}=0. \tag{2.4}\] We specify now the boundary conditions which complete the formulation of the water wave problem. We start with the kinematic boundary conditions, stating the impermeability of the boundaries: on the free surface \(z=\tilde{\eta}(x,y,t)\) we require \[\tilde{w}=\tilde{\eta}_{t}+\tilde{u}\tilde{\eta}_{x}+\tilde{v}\tilde{\eta}_{y}\quad\text{on}\quad z=\tilde{\eta}(x,y,t). \tag{2.5}\] On the interface \(z=\eta(x,y,t)\) the conditions \[\begin{split}&\tilde{w}=\eta_{t}+\tilde{u}\eta_{x}+\tilde{v}\eta_{y}\quad\text{on}\quad z=\eta(x,y,t)\\ & w=\eta_{t}+u\eta_{x}+v\eta_{y}\quad\text{on}\quad z=\eta(x,y,t)\end{split} \tag{2.6}\] hold, while on the bed it holds that \[w=ub_{x}+vb_{y}\quad\text{on}\quad z=b(x,y). \tag{2.7}\] The balance of forces at the interface is encoded in the continuity of the pressure across \(z=\eta(x,y,t)\), that is \[P(x,y,\eta(x,y,t),t)=\tilde{P}(x,y,\eta(x,y,t),t)\text{ for all }x,y,t. \tag{2.8}\] Lastly, the dynamic boundary condition states the continuity of the pressure across the free surface, that is, we require that \[\tilde{P}=p\left(x+\frac{\omega R}{2}t,y\right)\quad\text{on}\quad z=\tilde{\eta}(x,y,t), \tag{2.9}\] for some given differentiable function \((X,Y)\to p(X,Y)\). The local rotation in the flow is captured by the vorticity vector, defined as the curl of the velocity field, that is, \[\begin{split}\Omega=(w_{y}-v_{z},u_{z}-w_{x},v_{x}-u_{y})=:(\Omega_{1},\Omega_{2},\Omega_{3})\text{ in }D_{\eta},\\ \tilde{\Omega}=(\tilde{w}_{y}-\tilde{v}_{z},\tilde{u}_{z}-\tilde{w}_{x},\tilde{v}_{x}-\tilde{u}_{y})=:(\tilde{\Omega}_{1},\tilde{\Omega}_{2},\tilde{\Omega}_{3})\text{ in }D_{\eta,\tilde{\eta}}.\end{split} \tag{2.10}\] **Remark 2.2**.: _The (piecewise) constant vorticity is instrumental in describing wave-current interactions in sheared flows [49, 50, 64, 69]. For a comprehensive treatment of two-dimensional water flows with discontinuous vorticity (from the point of view of exact solutions describing waves of small and large amplitudes) we refer the reader to the work by Constantin & Strauss [8]. However, the landscape of rotational three-dimensional water flows is much less understood. 
Nevertheless, it is known that in three-dimensional flows the constant vorticity significantly steers the dimensionality of the velocity field and of the pressure [5, 7, 53, 54, 56, 57, 67, 70]. Compared with previous studies on rotational three-dimensional water flows, we move here one step further and assume that the vorticity vectors, \(\Omega\) and \(\tilde{\Omega}\), respectively, are constant. As we shall see, this assumption will have massive consequences on the velocity field._ We are now ready to state the main result, for whose proof we will rely on rather direct partial differential equations methods, unlike more sophisticated Hamiltonian tools (and other structure-preserving methods) used recently in the context of layered domains [3, 15, 71, 72, 73, 18], which are not known to be available in our three-dimensional setting of the \(\beta\)-plane. **Theorem 2.1**.: _If the velocity field satisfies \((u,v,w)\in L^{\infty}(D_{\eta})^{3}\) and \((\tilde{u},\tilde{v},\tilde{w})\in L^{\infty}(D_{\eta,\tilde{\eta}})^{3}\) then the flow has constant vorticity \(\Omega\) in \(D_{\eta}\) and \(\tilde{\Omega}\) in \(D_{\eta,\tilde{\eta}}\) if and only if_ \[u(x,y,z,t)=u(t),\ v(x,y,z,t)=0,\ w(x,y,z,t)=w(t),\ \mathrm{for\ all}\ x,y,z,t,\] \[\tilde{u}(x,y,z,t)=\tilde{u}(t),\ \tilde{v}(x,y,z,t)=0,\ \tilde{w}(x,y,z,t)=\tilde{w}(t),\ \mathrm{for\ all}\ x,y,z,t,\] _that is, \(u,\tilde{u},w,\tilde{w}\) depend only on \(t\), while \(v\) and \(\tilde{v}\) vanish._ _If, in addition, the bottom defining function \(b\) depends only on \(y\), and \(P\) and \(\tilde{P}\) are bounded in \(D_{\eta}\) and \(D_{\eta,\tilde{\eta}}\), respectively, then_ \[u=\tilde{u}=-\frac{\omega R}{2}\ \mathrm{and}\ v=w=\tilde{v}=\tilde{w}=0.\] _If, moreover, \(\rho\neq\tilde{\rho}\) then_ \[\eta(x,y,t)=0\ \mathrm{for\ all}\ x,y,t,\] _and_ \[\tilde{\eta}(x,y,t)=-\frac{p\left(x+\frac{\omega R}{2}t,y\right)}{\tilde{\rho}g},\] _where \(p\) is the given function from (2.9)._ Proof.: Eliminating the pressure between the three equations in (2.1) yields the vorticity equation \[\begin{split}\Omega_{1}u_{x}+(\Omega_{2}+2\omega)u_{y}+(\Omega_{3}+\beta y)u_{z}&=0,\\ \Omega_{1}v_{x}+(\Omega_{2}+2\omega)v_{y}+(\Omega_{3}+\beta y)v_{z}&=0,\\ \Omega_{1}w_{x}+(\Omega_{2}+2\omega)w_{y}+(\Omega_{3}+\beta y)w_{z}&=\beta v,\end{split} \tag{2.11}\] cf. e.g. [51, 52, 6]. Applying the operator \(\Delta\) to the first equation above and using that \(u_{x}\), \(u_{y}\) and \(u_{z}\) are harmonic functions (indeed, since \(\nabla\times\mathbf{u}=\Omega\) is constant and \(\nabla\cdot\mathbf{u}=0\), we have \(0=\nabla\times(\nabla\times\mathbf{u})=\nabla(\nabla\cdot\mathbf{u})-\Delta\mathbf{u}=-\Delta\mathbf{u}\), so \(u,v,w\), and hence all their partial derivatives, are harmonic) we find that \(\Delta(yu_{z})=0\). The latter equation can be expanded as \[y\Delta u_{z}+2u_{yz}=0,\] which leads to \[u_{yz}\equiv 0. \tag{2.12}\] Following the same line of proof we conclude that \[v_{yz}(x,y,z,t)=w_{yz}(x,y,z,t)=0 \tag{2.13}\] at all points \((x,y,z)\) of \(D_{\eta}\). Utilizing the definitions of \(\Omega_{1}\) and \(\Omega_{2}\) and recalling (2.13) we obtain \[w_{yx}=w_{yy}=w_{yz}=0, \tag{2.14}\] which proves that \(w_{y}\) is constant throughout \(D_{\eta}\). That is, there is a differentiable function \(t\to f(t)\) such that \[w_{y}=f(t),\] from which we infer the existence of another differentiable function \(t\to g(t)\) such that \[v_{z}=g(t). \tag{2.15}\] We apply now the operator of differentiation with respect to \(y\) in the vorticity equation (2.11) and obtain, via (2.14), that \[w_{z}=v_{y}\text{ within }D_{\eta}. \tag{2.16}\] We now see from above that the equalities \[w_{zz}=v_{yz}=0\quad\text{and}\quad v_{yy}=w_{yz}=0 \tag{2.17}\] hold at all points of \(D_{\eta}\). 
Differentiating the equation of mass conservation (2.2) with respect to \(z\) we get that \(u_{xz}+v_{yz}+w_{zz}=0\), and so we have via (2.17) that \[u_{xz}\equiv 0, \tag{2.18}\] while from the differentiation of (2.2) with respect to \(y\) we infer that \[u_{xy}\equiv 0. \tag{2.19}\] Differentiating with respect to \(z\) in the first equation of (2.11) and recalling that \(u_{xz}=u_{yz}=0\) we see that \((\Omega_{3}+\beta y)u_{zz}=0\) for all \((x,y,z)\) in \(D_{\eta}\). Hence, \[u_{zz}=w_{xz}=0 \tag{2.20}\] at all points of \(D_{\eta}\). We claim now that \[u_{xx}\equiv 0\quad\mbox{within}\quad D_{\eta}. \tag{2.21}\] To prove (2.21) we distinguish two cases as follows: * \(\Omega_{1}=0\). Then the second equation in (2.11) becomes \[(\Omega_{2}+2\omega)v_{y}+(\Omega_{3}+\beta y)v_{z}=0.\] After a differentiation with respect to \(x\) in the previous equation and accounting also for (2.15), we obtain that \[v_{xy}=u_{yy}=0\quad\mbox{within}\quad D_{\eta},\] (2.22) so that the harmonicity of \(u\) and relation (2.20) provide the claim in (2.21). * \(\Omega_{1}\neq 0\). In this case we differentiate the first equation of (2.11) with respect to \(x\) and avail of \(u_{xy}=u_{xz}=0\) to conclude the correctness of the claim in (2.21). We can infer now from (2.21), (2.20) and the harmonicity of \(u\) that \[u_{yy}=0\quad\mbox{within}\quad D_{\eta}. \tag{2.23}\] An inspection of the previous considerations shows that \(\nabla u\), \(\nabla v\) and \(\nabla w\) are (vectorial) functions that depend only on \(t\). Corroborating the latter with the boundedness of \(u,v,w\) yields that \(u,v,w\) are functions that depend only on \(t\). The latter implies now that the spatial gradients of \(u,v\) and \(w\) vanish identically within \(D_{\eta}\). This last information yields via (2.11) that \[v=0\quad\mbox{throughout}\quad D_{\eta}. \tag{2.24}\] Since \(w\) is constant within \(D_{\eta}\) we conclude via the bottom condition (2.7) that \(w=0\) throughout the lower layer \(D_{\eta}\). Consequently, the Euler equations are now written as \[\begin{split} P_{x}&=-\rho u^{\prime}(t),\\ P_{y}&=-\rho(\beta u(t)+\omega^{2})y,\\ P_{z}&=\rho(2\omega u(t)-g+\omega^{2}R),\end{split} \tag{2.25}\] so that \[P(x,y,z,t)=-\rho\left[u^{\prime}(t)x+(\beta u(t)+\omega^{2})\frac{y^{2}}{2}+(g-2\omega u(t)-\omega^{2}R)z\right]. \tag{2.26}\] The boundedness of the pressure implies now that \[u(t)=-\frac{\omega^{2}}{\beta}=-\frac{\omega R}{2}\quad\mbox{for all}\quad t,\] and therefore for all \(x,y,t\) and all \(b(y)\leq z\leq\eta(x,y,t)\) it holds \[P(x,y,z,t)=-\rho gz. \tag{2.27}\] Utilizing the vorticity equation for the domain \(D_{\eta,\tilde{\eta}}\) and employing an argument analogous to the one that led to the constancy of \(u,v,w\), we conclude that \(\tilde{u}\) and \(\tilde{w}\) depend only on the time \(t\) and that \(\tilde{v}\) vanishes throughout \(D_{\eta,\tilde{\eta}}\). Hence, the Euler equations in the upper layer are written as \[\begin{split}\tilde{P}_{x}&=-\tilde{\rho}\left(2\omega\tilde{w}(t)+\tilde{u}^{\prime}(t)\right),\\ \tilde{P}_{y}&=-\tilde{\rho}(\beta\tilde{u}(t)+\omega^{2})y,\\ \tilde{P}_{z}&=\tilde{\rho}(2\omega\tilde{u}(t)-g+\omega^{2}R),\end{split} \tag{2.28}\] which yields that the pressure in the upper layer \(D_{\eta,\tilde{\eta}}\) is given as \[\tilde{P}(x,y,z,t)=-\tilde{\rho}\left[\left(2\omega\tilde{w}(t)+\tilde{u}^{\prime}(t)\right)x+(\beta\tilde{u}(t)+\omega^{2})\frac{y^{2}}{2}-(2\omega\tilde{u}(t)-g+\omega^{2}R)z\right]. 
\tag{2.29}\] Owing to the boundedness of \(\tilde{P}\) we infer from the previous formula that \[\tilde{u}(t)=-\frac{\omega^{2}}{\beta}=-\frac{\omega R}{2}\quad\mbox{and}\quad\tilde{w}(t)=0\quad\mbox{for all}\quad t, \tag{2.30}\] and for all \(x,y,z,t\) such that \(\eta(x,y,t)\leq z\leq\tilde{\eta}(x,y,t)\) it holds \[\tilde{P}(x,y,z,t)=-\tilde{\rho}gz. \tag{2.31}\] Applying now the balance of forces at the interface we obtain (availing also of \(\rho\neq\tilde{\rho}\)) that \(\eta(x,y,t)=0\) for all \(x,y,t\). With this finding we immediately see that the kinematic conditions on the interface (2.6) are also verified. The kinematic condition on the free surface (2.5) becomes \[\tilde{\eta}_{t}-\frac{\omega R}{2}\tilde{\eta}_{x}=0,\] whose general solution is given as \(\tilde{\eta}(x,y,t)=F\left(x+\frac{\omega R}{2}t,y\right)\) for some function \((X,Y)\mapsto F(X,Y)\). But, from the dynamic boundary condition (2.9), we have that \(F=-\frac{p}{\tilde{\rho}g}\). \(\sqcap\)\(\sqcup\) We conclude by revealing the reason for choosing the bottom defining function \(b\) to depend only on \(y\). **Remark 2.3**.: _Maintaining the hypotheses from Theorem 2.1 and assuming that the bottom defining function presents a most general dependence \((x,y)\to b(x,y)\) with \(b_{x}\neq 0\), we obtain that no solution with bounded pressure in the lower layer \(D_{\eta}\) exists. Indeed, proceeding like in the proof of Theorem 2.1 we notice that the arguments and the conclusions therein hold verbatim also in this scenario (with a more general bottom) until (and including) formula (2.24). Then, utilizing the bottom condition \(w=ub_{x}+vb_{y}\) on \(z=b(x,y)\) we get \(w(t)=u(t)b_{x}(x,y)\) for all \(x,y\) and for all \(t\). Assuming, ad absurdum, that there is \(t_{0}\) such that \(u(t_{0})\neq 0\), we obtain from the previous equality that \(b_{xx}=b_{xy}=0\). Since \(b\) is bounded, it follows that \(b_{x}(x,y)=0\) for all \(x,y\), which is a contradiction with the hypothesis that \(b_{x}\neq 0\). Thus, \(u(t)=0\) for all \(t\). From \(w(t)=u(t)b_{x}\) it follows that also \(w(t)=0\) for all \(t\). The governing equations now yield that \(P_{y}=-\rho\omega^{2}y\), which clearly impedes the boundedness of \(P\)._ ## 3 Conclusion We explored here the impact of (piecewise) constant vorticity on the dimension reduction of the velocity field in water flows satisfying the three-dimensional governing equations. This is part of a broader research agenda [1, 7, 10, 11, 16, 54, 56, 57, 67, 70] that aims to deepen the analytical understanding of the three-dimensional water waves equations with vorticity. In addition to the previous aspects concerning the vorticity, our analysis takes into account the presence of geophysical effects in the form of the (equatorial) \(\beta\)-plane approximation: this procedure consists in linearizing the Coriolis force in the tangent plane approximation, this course of action being justified by the moderate scale of the motion. Unlike the \(f\)-plane approximation, the \(\beta\)-plane approximation [63] displays the essential characteristic that the Coriolis parameter is not constant in space. This is a feature that not only makes the equations of motion more tractable but also allows the study of many phenomena in the atmosphere and ocean; in particular, Rossby waves, the most important type of waves for large-scale atmospheric and oceanic dynamics, depend on the variation of the Coriolis parameter as a restoring force, cf. [44]. 
Under the assumptions (on the vorticity) stated before, we showed that the bounded solutions have zero vertical and one zero horizontal velocity component, while the other horizontal component is constant. We have also proved that the interface is flat, the free surface has a traveling character in the horizontal direction of the nonvanishing velocity component, being of general type in the other horizontal direction, and the pressure is hydrostatic in both layers. Different from other scenarios investigated before [5, 7, 53, 54, 56, 57, 59, 67, 70], we allow a varying bottom boundary. A further improvement, when compared with previous studies dealing with three-dimensional flows with piecewise constant vorticity [59], is that the vorticity vectors corresponding to the upper and lower layer, respectively, need not be parallel. Our conclusion (regarding the dimension reduction of the flow) is reinforced in the study by Xia and Francois [74], which shows that large-scale structures in thick fluid layers can suppress vertical eddies and reinforce the planarity of the flow. In connection with planar flows we would also like to mention the result on the geometric structure of flows by Sun [68]. ## 4 Appendix To justify the consideration of the system (2.1) and for the sake of self-containedness, we recall in this section a derivation of the governing equations for the \(\beta\)-plane, as presented by Constantin & Johnson [13]. We start from the setting of cylindrical coordinates in which the equator is replaced by a line parallel to the \(x\) axis: the equator is "straightened" and the body of the sphere is represented by a circular disc, which is mapped out by the corresponding polar coordinates \((x,\theta,z)\). Here, \(x\) stands for the azimuthal direction and points from west to east, the direction of increasing \(\theta\) is from north to south, and \(z\) represents the local vertical. Then the governing equations for water wave propagation written in cylindrical coordinates \((x,\theta,z)\) in a rotating framework are the momentum conservation equations \[\begin{split} u_{t}+uu_{x}+\frac{vu_{\theta}}{R+z}+wu_{z}+2\omega(w\cos\theta-v\sin\theta)&=-\frac{p_{x}}{\rho},\\ v_{t}+uv_{x}+\frac{vv_{\theta}}{R+z}+wv_{z}+\frac{vw}{R+z}+2\omega u\sin\theta+(R+z)\omega^{2}\sin\theta\cos\theta&=-\frac{p_{\theta}}{\rho(R+z)},\\ w_{t}+uw_{x}+\frac{vw_{\theta}}{R+z}+ww_{z}-\frac{v^{2}}{R+z}-2\omega u\cos\theta-(R+z)\omega^{2}\cos^{2}\theta&=-\frac{p_{z}}{\rho}-g,\end{split} \tag{4.1}\] and the equation of mass conservation \[u_{x}+\frac{v_{\theta}}{R+z}+\frac{\left((R+z)w\right)_{z}}{R+z}=0, \tag{4.2}\] where \(u,v,w\) are the components of the velocity field corresponding to the \(x\), \(\theta\) and \(z\) variables, respectively. Since we are interested in the behavior of the flow close to the equator, we will confine the discussion to small \(\theta\). Our considerations also take into account the fact that the radius of the Earth, with respect to the depth of the oceans, is extremely large. Thus, it is justified to perform the approximations \[\sin\theta\approx\theta,\quad\frac{z}{R}\approx 0. 
\tag{4.3}\] Setting \(y:=R\theta\) and disregarding the terms smaller than \(O(\theta)\) in the expansion of the trigonometric functions, the equations of momentum conservation (4.1) become \[u_{t}+uu_{x}+vu_{y}+wu_{z}+2\omega\left(w-\frac{y}{R}v\right) =-\frac{p_{x}}{\rho},\] \[v_{t}+uv_{x}+vv_{y}+wv_{z}+2\omega\frac{y}{R}u+\omega^{2}y =-\frac{p_{y}}{\rho}, \tag{4.4}\] \[w_{t}+uw_{x}+vw_{y}+ww_{z}-2\omega u-R\omega^{2} =-\frac{p_{z}}{\rho}-g,\] while the equation of mass conservation acquires the form \[u_{x}+v_{y}+w_{z}=0, \tag{4.5}\] which are precisely the equations (2.1) and (2.2). **Acknowledgements**: The support of the Austrian Science Fund (FWF) under research grant P 33107 N is gratefully acknowledged. The author is indebted to the referees whose comments and suggestions have improved the quality of the manuscript. **Data Availability Statement**: No data is associated to this manuscript.
2304.12605
Performance Evaluation of Regression Models in Predicting the Cost of Medical Insurance
The study aimed to evaluate the regression models' performance in predicting the cost of medical insurance. Three (3) Regression Models in Machine Learning, namely Linear Regression, Gradient Boosting, and Support Vector Machine, were used. The performance was evaluated using the metrics RMSE (Root Mean Square), r2 (R Square), and K-Fold Cross-validation. The study also sought to pinpoint the feature that would be most important in predicting the cost of medical insurance. The study is anchored on the knowledge discovery in databases (KDD) process, which refers to the overall process of discovering useful knowledge from data. The performance evaluation results reveal that among the three (3) Regression models, Gradient boosting received the highest r2 (R Square) 0.892 and the lowest RMSE (Root Mean Square) 1336.594. Furthermore, the 10-Fold Cross-validation weighted mean findings are not significantly different from the r2 (R Square) results of the three (3) regression models. In addition, Exploratory Data Analysis (EDA) using a box plot of descriptive statistics observed that in the charges and smoker features the median of one group lies outside of the box of the other group, so there is a difference between the two groups. It concludes that Gradient boosting appears to perform better among the three (3) regression models. K-Fold Cross-Validation concluded that the three (3) regression models are good. Moreover, Exploratory Data Analysis (EDA) using a box plot of descriptive statistics concludes that the highest charges are due to the smoker feature.
Jonelle Angelo S. Cenita, Paul Richie F. Asuncion, Jayson M. Victoriano
2023-04-25T06:33:49Z
http://arxiv.org/abs/2304.12605v1
Short Paper* ###### Abstract _Purpose_ - The study aimed to evaluate the regression models' performance in predicting the cost of medical insurance. Three (3) Regression Models in Machine Learning, namely Linear Regression, Gradient Boosting, and Support Vector Machine, were used. The performance was evaluated using the metrics RMSE (Root Mean Square), r2 (R Square), and K-Fold Cross-validation. The study also sought to pinpoint the feature that would be most important in predicting the cost of medical insurance. Method - The methodology of the study is anchored on the knowledge discovery in databases (KDD) process, which refers to the overall process of discovering useful knowledge from data. Results - The performance evaluation results reveal that among the three (3) Regression models, Gradient boosting received the highest r2 (R Square) 0.892 and the lowest RMSE (Root Mean Square) 1336.594. Furthermore, the 10-Fold Cross-validation weighted mean findings are not significantly different from the r2 (R Square) results of the three (3) regression models. In addition, Exploratory Data Analysis (EDA) using a box plot of descriptive statistics observed that in the charges and smoker features the median of one group lies outside of the box of the other group, so there is a difference between the two groups. Conclusion - In conclusion, Gradient boosting appears to perform better among the three (3) regression models. K-Fold Cross-Validation concluded that the three (3) regression models are good. Moreover, Exploratory Data Analysis (EDA) using a box plot of descriptive statistics concludes that the highest charges are due to the smoker feature. Recommendations - It is recommended that the Gradient boosting model be used in predicting the cost of medical insurance. Research Implications - Utilizing an accurate regression model to predict medical costs can aid medical insurance organizations in prioritizing the allocation of limited care management resources, as it plays a role in the development of insurance policies. Keywords - machine learning, regression models, prediction, gradient boosting regression ## Introduction In the study of Tkachenko et al. (2018), one of the key points of the development of the modern healthcare system is medical insurance. Also, the most crucial issue in this field is the prediction of individual medical insurance costs. A medical insurance business can only be profitable if it can collect more money than it must pay for the beneficiaries' medical care. With that, it is necessary to pinpoint the feature that is most important in predicting the cost of medical insurance, as it establishes the actuarial tables that adjust the price of yearly premiums higher or lower in accordance with the anticipated treatment costs. Accurate cost predictions can aid health insurers in choosing insurance plans with appropriate deductibles and premiums, and can aid medical insurance organizations in prioritizing the allocation of limited care management resources, as this plays a role in the development of insurance policies (Milovic & Milovic, 2012; Kumar et al., 2010). According to Jordan & Mitchell (2015), machine-learning technology is rapidly growing in technical fields because of the explosion in the availability of online data, along with the advancement of computing technology and storage solutions. Panay et al. (2019) stated that researchers and practitioners have utilized a variety of machine-learning algorithms to analyze medical data to calculate medical insurance costs. 
Also, several machine-learning approaches have been applied to medical data analysis in studies. According to Muhammad & Yan (2015), three different types of machine learning can be distinguished: (1) supervised machine learning, which is task-driven and uses labeled data for classification and regression; (2) unsupervised machine learning, which is data-driven and uses unlabeled data for clustering; and (3) reinforcement learning, which uses mistakes for decision making. This study uses supervised machine learning models to demonstrate and compare the accuracy of various regression models in predicting medical insurance costs. In the big data era, the problem is deepened by the need for accurate and quick computations, while the existence of large amounts of data makes it possible to use machine learning algorithms. Furthermore, the application of traditional regression approaches does not provide satisfactory results for the prediction of the medical insurance cost because of the big data problem (Mladenovic et al., 2020; Tkachenko et al., 2018). According to Roopa and Asha (2019), regression models are used to predict and forecast the independent and dependent variables. Moreover, numerous types of regression models can be used, and it is necessary to compare the various regression models to identify the most accurate for predicting the cost of medical insurance. A statistical method known as regression analysis is used to establish the link between a single dependent (criterion) variable and one or more independent (predictor) variables. Following a linear combination of the predictors, the analysis produces a predicted value for the criterion (Palmer & O'Connel, 2009). Research by Marill (2004) claimed that linear regression is one of the common regression models to derive the regression line and is a popular technique because it can demonstrate mathematically and visually the relationship between important variables. The study by Maulud and Abdulazeez (2020) stated that linear regression is a mathematical approach used to perform predictive analysis and that it allows projections of continuous or numerical variables. It is also a mathematical research method that makes it possible to measure the predicted effects and model them against multiple input variables (Lim, 2019). Evaluation of the model is one of the crucial stages in machine learning studies. Comparing the trained model predictions with the actual (observed) data from the test data set is the goal of the evaluation (Botchkarev, 2018). The study aimed to evaluate the regression models' performance in predicting the cost of medical insurance using the Kaggle dataset titled Medical Cost Personal Datasets. Three (3) Regression Models in Machine Learning, namely Linear Regression, Gradient Boosting, and Support Vector Machine, were used. The performance was evaluated using the metrics RMSE (Root Mean Square) and r2 (R Square), which quantify how well the regression model fits a dataset; ten (10)-fold cross-validation was also performed. The study also sought to pinpoint the feature that would be most important in predicting the cost of medical insurance. ## Methodology The methodology of the study is anchored on the knowledge discovery in databases (KDD) process (Figure 1). The KDD process refers to the overall process of discovering useful knowledge from data. 
In addition, the steps in the KDD Process are as follows: Selection, Pre-processing, Transformation, Data mining, and Interpretation/Evaluation (Fayad et al., 1996). ### Selection (Data Source) The dataset originated from kaggle.com, titled Medical Cost Personal Datasets. It has four (4) numerical features (age, BMI, children, charges), with two (2) int and two (2) float datatypes, and three (3) categorical features (smoker, sex, and region) with three (3) objects as a datatype. The dataset has seven (7) features with non-null attributes and has a total of one thousand three hundred eighty-eight (1,388) entries in each column. ### Preprocessing (Data Analysis) Figure 2 shows that there are outliers in the charges feature relative to the region, children, sex, and smoker features. Also, it reveals that the outliers begin at a charges value of seventeen thousand five hundred (17,500). The identified outliers with a charges value greater than seventeen thousand five hundred (17,500) are removed from the dataset.

Figure 1: KDD Process

### Transformation (Data processing) The categorical features (smoker, sex, and region) with objects as datatype are converted into numerical features with int as data type (Figure 3). With that, the dataset has seven (7) features with non-null attributes and has a total of one thousand seventeen (1,017) entries in each column. Creating dependent (y) and independent (x) features: the charges feature is the dependent feature, while the age, sex, bmi, children, smoker, and region features are the independent features. Furthermore, the dataset is split into training (80%) and test (20%) sets; the training dataset undergoes StandardScaler's fit_transform(), while the test dataset undergoes transform() only.

Figure 2: Exploratory Data Analysis (EDA) of Charges to region, children, sex, and smoker

Figure 3: Exploratory Data Analysis (EDA) of Charges to region, children, sex, and smoker

Figure 3 shows that there is no outlier after the data processing. Furthermore, the median of one group lies within the box of the other groups, which indicates that there is no difference among the groups. However, in the smoker feature, the median of one group lies outside of the box of the other group, so there is a difference between the two groups, and it suggests that the highest charges are due to the smoker feature. ### Data Mining #### Regression models According to Wu et al. (2019), regression is a technique associated with two theories. The first is the theory that regression analyses are frequently employed for forecasting and prediction, with significant overlaps between their application and machine learning. The second theory is that regression analysis can occasionally be utilized to determine the relationships between the dependent and independent variables. Regression models are used to predict and forecast the independent and dependent variables (Roopa & Asha, 2019). Also, in the book of Seber and Lee (2012), the goal of regression modeling is to create mathematical representations that characterize or explain potential relationships between variables. #### Linear regression According to Acharya et al. (2019), linear regression is one of the simplest regression models for predicting outcomes (Figure 4). Also, the correlation between the independent and dependent variables is modeled. In a case model with a single independent variable, simple linear regression uses the formula of Linear Regression (Equation 1) to define the dependence of the variable (Maulud & Abdulazeez, 2020). 
\[y=b_{0}+b_{1}X\] _Equation 1. Linear Regression_

Figure 4: Linear Regression Model

#### Gradient Boosting Gradient boosting is one of the techniques that allow for the recursive fitting of a weak learner to the residual, improving model performance with an ever-increasing number of iterations (Figure 5). It can automatically discover complex data structures, including nonlinearity and high-order interactions, even in the context of hundreds, thousands, or tens of thousands of potential predictors. The regression model would try to find a function that can accurately describe the data. However, the Gradient boosting function can only be an approximation of the data distribution, and there must be errors: \[y_{i}=F_{1}\left(X_{i}\right)+error_{i}\] _Equation 2. Gradient Boosting_ where \(X_{i}\) is a vector of predictors and \(y_{i}\) is the result variable. Assume that the relationship between X and y is not fully specified and that the \(F_{1}(X_{i})\) function is a weak learner. The error or residual in this instance has some correlation with y rather than being white noise (Zhang et al., 2019). ### Support Vector Machine In the study of Maulud and Abdulazeez (2020), Support Vector Machines (SVM), known as Support Vector Regression (SVR) when used for regression, support both linear and nonlinear regression. SVR seeks to fit as many instances as possible on the street while reducing margin violations, rather than aiming to fit the largest street between two classes while limiting margin violations; the hyperparameter epsilon controls the street's width (Figure 6). According to Parbat and Chakraborty (2020), the generalized equation for the hyperplane is represented in Equation 3, where w is the weights and b is the intercept at X = 0. The margin of tolerance is represented by epsilon \(\epsilon\): \[y=wX+b\] _Equation 3. Support Vector Machine_

Figure 5: Gradient Boosting Model

### Performance Evaluation The factors that can be used to evaluate the performance of a regression model are defined in the work by Alexander et al. (2015). ### r2 (R Square) Equation 4 has been utilized in several scenarios in the literature in conjunction with training and test data when dealing with both linear and nonlinear regression models. Also, good models give small residuals: \[R^{2}=1-(\text{RSS}/\text{TSS})\] ### RMSE (Root Mean Square) Equation 5 is an estimate of the standard deviation of residuals from the model: \[\text{RMSE}=\sqrt{\textstyle\sum_{i}(P_{i}-O_{i})^{2}/n}\] ### K-fold Cross-Validation Equation 6 is the most often used technique for estimating model predictability. Also, it is likely to give an overly optimistic assessment of the model's predictive power: \[\text{MSE}=(1/n)\textstyle\sum_{i}(y_{i}-f(x_{i}))^{2}\] _Equation 6._ _K-fold Cross-Validation_

Figure 6: Support Vector Machine Model

## Results ### Interpretation / Evaluation Three (3) regression models were used, Linear Regression, Gradient Boosting, and Support Vector Machine, to evaluate the regression models' performance. Also, the regression models undergo K-Fold Cross-Validation to evaluate the model performance and to predict on data that hasn't been seen previously, which estimates the regression model's accuracy. Using the training dataset, the procedure is run ten (10) consecutive times with different splits each time, and the weighted mean of the K-Fold Cross-Validation is then computed. Table 1 shows that the Gradient boosting has the highest r2 value of 89%, indicating that the model can explain more variability of observed data. 
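To make the preceding pipeline and Equations 4-6 concrete, the following is a minimal sketch of how the preprocessing, the three regression models, and the evaluation metrics might be wired together with scikit-learn. It is illustrative only, not the authors' code: the file name `insurance.csv`, the random seed, and the default model hyperparameters are assumptions; only the 17,500 outlier cutoff, the 80/20 split, the scaling scheme, and the 10-fold cross-validation follow the description above.

```python
# Illustrative sketch only -- not the authors' code. Assumes the Kaggle
# "Medical Cost Personal Datasets" file is saved locally as insurance.csv.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("insurance.csv")
df = df[df["charges"] <= 17_500]             # drop the outliers identified in the EDA
for col in ["sex", "smoker", "region"]:      # convert categorical features to int codes
    df[col] = df[col].astype("category").cat.codes

X, y = df.drop(columns="charges"), df["charges"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)      # fit_transform() on the training set
X_test = scaler.transform(X_test)            # transform() only on the test set

models = {
    "Linear regression": LinearRegression(),
    "Gradient boosting": GradientBoostingRegressor(),
    "Support vector machine": SVR(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    r2 = r2_score(y_test, pred)                                   # Equation 4
    rmse = np.sqrt(mean_squared_error(y_test, pred))              # Equation 5
    cv = cross_val_score(model, X_train, y_train, cv=10,          # 10-fold CV (Eq. 6)
                         scoring="r2")
    print(f"{name}: r2={r2:.3f}, RMSE={rmse:.1f}, 10-fold mean={cv.mean():.3f}")
```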
Furthermore, the r2 (R Square), also known as accuracy, of Gradient boosting is 1% higher than that of the linear regression and the support vector machine. In light of this, the Gradient boosting model with 89% accuracy suggests that this model performed better than the rest. Moreover, the Gradient boosting has the lowest RMSE value of 1336, which suggests that it is able to fit the dataset the best. Furthermore, the RMSE (Root Mean Square), also known as the residuals, of Gradient boosting is 66 lower than that of linear regression and 67 lower than that of the support vector machine. Gradient boosting appears to perform better than linear regression and support vector machines, according to the results of the performance evaluation of the regression models. Table 2 shows that the three (3) regression models' 10-fold cross-validation weighted mean results are not significantly different from the r2 (R Square), also known as accuracy, results of the three (3) regression models. This led to the conclusion that the three (3) regression models are good. \begin{table} \begin{tabular}{|c|c|} \hline **Regression Models** & **K-Fold Cross-Validation** \\ \hline Gradient boosting & 0.879 / 88\% \\ \hline Linear regression & 0.860 / 86\% \\ \hline Support vector machine & 0.856 / 86\% \\ \hline \end{tabular} \end{table} Table 2: 10-Fold Cross-validation weighted mean In addition, the Gradient boosting has the highest 10-fold cross-validation weighted mean of 88%, followed by linear regression and support vector machine with 86%. This suggests that among the three (3) models the gradient boosting model performed better. The scatter plot in Figure 7 displays the 10-fold cross-validation results for the three regression models, and it demonstrates that the accuracy is inversely correlated with the residuals or prediction error. Using descriptive statistics, a Box plot is a type of chart often used in exploratory data analysis; a Box plot visually shows the distribution of numerical data and skewness by displaying the data quartiles (or percentiles) and averages. The box plot in Figure 8 shows that there is a difference between the two groups by showing that the boxes, or interquartile ranges, do not overlap. Additionally, each group's middle line, or median, lies completely outside of the other group's box, indicating that there is probably a difference between the two groups. The EDA contends that the smoking feature is the cause of the highest medical insurance charges. ## Discussion Many researchers utilize machine learning algorithms because they offer effective outcomes. The exploratory data analysis using a box plot of descriptive statistics reveals that the smoker factor is the greatest contributor to the increase in charges, because the median of one group lies outside of the box of the other group, which indicates there is a difference between the two groups. Considering this, it can be strongly asserted that the most contributory factor to the cost of medical insurance is smoking. The three (3) Machine Learning regression models, Linear Regression, Gradient Boosting, and Support Vector Machine, were applied in this study. The experiment shows that the Gradient boosting with 89% r2 (R Square) produced the highest accuracy for the prediction of the medical insurance cost and proved that the model can explain more variation in the observed data. Furthermore, it has the lowest RMSE (Root Mean Square) of 1336 residuals, which suggests that it is the best fit for the dataset. 
To validate the results of the performance evaluation of the regression models, they underwent 10-fold cross-validation, and the results are not significantly different from the r2 (R Square) accuracy results of the three (3) regression models.

Figure 8: Exploratory Data Analysis (EDA) of Charges and smoker

## Conclusions and Recommendations In conclusion, Gradient boosting appears to perform better among the Three (3) Regression Models in Machine Learning, namely Linear Regression, Gradient Boosting, and Support Vector Machine. K-Fold Cross-Validation concluded that the three (3) regression models are good. Moreover, Exploratory Data Analysis (EDA) concludes that the highest charges are due to the smoker feature. Therefore, it is favorable and feasible to suggest that the Gradient boosting model, which outperforms the other two regression models in terms of accuracy in predicting the cost of medical insurance, be used. ## Implications Utilizing an accurate regression model to predict medical costs can aid medical insurance organizations in prioritizing the allocation of limited care management resources, as it plays a role in the development of insurance policies. ## Acknowledgement The researchers want to express deep gratitude to the ICpEP and IRCCETE 2023. Sincere appreciation to the conference organizers for organizing this event and for including this paper in the conference. The researchers would like to acknowledge and express gratitude to the organizations with which we are affiliated, including Bulacan State University, Polytecnic College of Botalan, and Richwell Colleges, Inc. The researchers are grateful for your kind encouragement and motivation to carry out this research. Thanks also to Kaggle.com for the dataset. ## Declarations ### Conflict of Interest The authors declare no conflict of interest. ### Informed Consent Not applicable; the datasets came from Kaggle.com, a subsidiary of Google that allows users to find datasets they want to use in building AI models and to publish datasets. ### Ethics Approval Not applicable; the dataset was not collected by the authors, and it is publicly available at Kaggle.com.
2305.05050
ANALOGICAL -- A Novel Benchmark for Long Text Analogy Evaluation in Large Language Models
Over the past decade, analogies, in the form of word-level analogies, have played a significant role as an intrinsic measure of evaluating the quality of word embedding methods such as word2vec. Modern large language models (LLMs), however, are primarily evaluated on extrinsic measures based on benchmarks such as GLUE and SuperGLUE, and there are only a few investigations on whether LLMs can draw analogies between long texts. In this paper, we present ANALOGICAL, a new benchmark to intrinsically evaluate LLMs across a taxonomy of analogies of long text with six levels of complexity -- (i) word, (ii) word vs. sentence, (iii) syntactic, (iv) negation, (v) entailment, and (vi) metaphor. Using thirteen datasets and three different distance measures, we evaluate the abilities of eight LLMs in identifying analogical pairs in the semantic vector space. Our evaluation finds that it is increasingly challenging for LLMs to identify analogies when going up the analogy taxonomy.
Thilini Wijesiriwardene, Ruwan Wickramarachchi, Bimal G. Gajera, Shreeyash Mukul Gowaikar, Chandan Gupta, Aman Chadha, Aishwarya Naresh Reganti, Amit Sheth, Amitava Das
2023-05-08T21:12:20Z
http://arxiv.org/abs/2305.05050v3
# ANALOGICAL - A Novel Benchmark for Long Text Analogy Evaluation in Large Language Models ###### Abstract Over the past decade, analogies, in the form of word-level analogies, have played a significant role as an intrinsic measure of evaluating the quality of word embedding methods such as word2vec. Modern large language models (LLMs), however, are primarily evaluated on extrinsic measures based on benchmarks such as GLUE and SuperGLUE, and there are only a few investigations on whether LLMs can draw analogies between long texts. In this paper, we present ANALOGICAL, a new benchmark to intrinsically evaluate LLMs across a taxonomy of analogies of long text with six levels of complexity - (i) word, (ii) word vs. sentence, (iii) syntactic, (iv) negation, (v) entailment, and (vi) metaphor. Using thirteen datasets and three different distance measures, we evaluate the abilities of eight LLMs in identifying analogical pairs in the semantic vector space. Our evaluation finds that it is increasingly challenging for LLMs to identify analogies when going up the analogy taxonomy. ## 1 Introducing ANALOGICAL - a Benchmark for Analogy The ability of humans to perceive a situation in one context as similar to that in a different context is known as _analogy-making_. It is considered to be a central component of human cognition and learning. Analogy-making has received attention from a broad audience, including cognitive scientists (Gentner and Markman, 1997; Holyoak et al., 2001), linguists (Itkonen, 2005), and educators (Richland and Simms, 2015) during the last several decades. Current neural network-based word embeddings are primarily influenced by the distributional hypothesis _"You shall know a word by the company it keeps"_ (Firth, 1957). During 2013-2017, less complex, word-level analogies played a central role in intrinsically evaluating the quality of word embedding methods, such as word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017). Different types of textual analogies can be identified, such as word analogies (Gladkova et al., 2016), proportional analogies (Mikolov et al., 2013), and long-text analogies (Ichien et al., 2020). The techniques to create word embeddings have progressed from categorical (i.e., one-hot, bag-of-words) to continuous contextualized techniques exemplified by LLMs such as BERT (Devlin et al., 2018) and T5 (Raffel et al., 2022). However, only a few investigations have been done on the capabilities of LLMs to draw analogies between long text (Czinczoll et al., 2022). For example, the embeddings of the sentences _'I can speak two languages.'_ and _'I am bilingual.'_ should be close-by in vector space, and _'I like chocolate.'_ and _'I do not like chocolate.'_ should not be close-by. Performance evaluations of modern LLMs are driven mainly by extrinsic measures based on benchmarks such as GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019). We take this opportunity to introduce a new benchmark to _intrinsically evaluate_ LLMs using analogies consisting of long text (sentences, paragraphs). We hypothesize that an LLM should be able to organize the semantic vector space so that analogical lexical pairs are closer to each other (see Figure 1).

Figure 1: Expected vector space embeddings of three analogical sentence pairs from a hypothetical LLM that captures sentence analogies accurately.
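As a rough, hypothetical illustration of this hypothesis (not part of the original paper), one could embed the example sentences above with an off-the-shelf masked language model and compare them in vector space. The choice of `bert-base-uncased` and mean pooling over the last hidden states are assumptions made only for this sketch.

```python
# Illustrative sketch: analogical sentences should receive nearby embeddings.
# Model choice and mean pooling are assumptions, not the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str) -> torch.Tensor:
    """Mean-pool the last hidden states into a single sentence vector."""
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

a = embed("I can speak two languages.")
b = embed("I am bilingual.")
c = embed("I do not like chocolate.")
print("analogical pair:", torch.cosine_similarity(a, b, dim=0).item())
print("unrelated pair: ", torch.cosine_similarity(a, c, dim=0).item())
```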
In this paper, we introduce ANALOGICAL - a benchmark based on an analogy taxonomy consisting of six levels of analogy complexity - (i) word level, (ii) word vs. sentence level, (iii) syntactic level, (iv) negation level, (v) semantic (entailment) level and (vi) metaphor level. We proxy analogy complexity with the length of the lexical items compared. We derive five and identify eight datasets across the levels of the analogy taxonomy. Euclidean distance and cosine similarity are the de facto standards for capturing analogy in the NLP community. We show that, in contrast, such measures are affected by correlations and indirect dependencies among the vector dimensions. Finally, we argue and empirically report that Mahalanobis distance (Mahalanobis, 1936) better captures the semantic equivalence in high dimensional vector spaces. ## 2 Related Work In this section, we elaborate on previous work on analogy identification, the background of encoder-based language models, and the distance measures used in analogy-based comparisons in NLP. There has been previous work on analogy identification by Turney (2008), applying a singular value decomposition (SVD)-based approach (Golub and Van Loan, 2013), and by Mikolov et al. (2013); Gladkova et al. (2016), using static word embeddings with vector offset approaches. In more contemporary literature, Ushio et al. (2021) evaluates the ability of LMs such as BERT, GPT-2 and RoBERTa to identify word analogies in a zero-shot setting with prompts. In this work, we perform more comprehensive evaluations, including several types of analogies in addition to word analogies. We also evaluate the analogy identification abilities of eight contemporary LLMs. Current neural network-based LMs play a pivotal role in the present-day NLP landscape by performing exceptionally well in numerous NLP tasks such as machine translation (Zhang et al., 2015; Singh et al., 2017), classification (Marwa et al., 2018), and sentiment analysis (Hoang et al., 2019). These LMs are trained on large, heterogeneous text corpora, resulting in pretrained LMs that are then used on downstream tasks via supervised fine-tuning. This work uses the pretrained LMs in a zero-shot setting for embedding creation. Previous research in NLP has used cosine distance/similarity, Euclidean distance and Mahalanobis distance as popular distance measures to quantify the semantic similarity between text (Agarwala et al., 2021; Han et al., 2021; Sunilkumar and Shaji, 2019; Bollegala et al., 2009). Even though Mahalanobis distance has been popularly used to measure the distance between a sample and a distribution, it has been increasingly used to measure the distance between two samples in a dataset (Balasubramanian et al., 2016; Rahman et al., 2018). This work extends these distance measures to measure the analogy between two lexical items.

Figure 2: Analogy taxonomy with six levels. The definitions of the analogies at each level and examples for each analogy type from the datasets are indicated.

## 3 ANALOGICAL - Six Levels of Analogy ANALOGICAL is a comprehensive benchmark focusing on six distinct categories of analogies organized within a taxonomy. These categories are determined based on the level of complexity they pose for current LLMs. 
Even though current language models perform exceptionally well on tasks that involve recognizing patterns in the underlying text distribution and learning to infer correlations, they struggle with complex and intricate tasks such as basic symbol manipulation (Piekos et al., 2021), compositionality (Dankers et al., 2022), and appropriating commonsense knowledge (Zhou et al., 2020). At higher levels of this taxonomy, the LMs are required to identify analogies between longer and more abstract texts and, when doing so, have to face the complexities highlighted above. In the next section, we formally introduce the analogy taxonomy and the datasets representing each level in the taxonomy. Analogies are often expressed as an explicit or implicit relational similarity, involving two main lexical items. In this work, these two lexical items vary from single words to word phrases or sentences. More formally, we denote analogy as \(X::Y\), where \(X\) and \(Y\) are the two lexical items and analogy is a symmetric relation. The taxonomy of analogy is divided into six levels (see Figure 2), where complexity increases from bottom to top. In this section, we identify and introduce different datasets corresponding to each level of complexity in the analogy taxonomy that can be used to evaluate the performances of several SOTA language models. Table 1 summarizes the dataset statistics. ### Level One #### 3.1.1 Word level In this level of analogy, the two analogous lexical items are either single words or word pairs. If all lexical items in a language are in the set \(W\), then the analogy between two single words \(a\in W\) and \(b\in W\) is denoted by \(a::b\). An analogy between two word pairs (also known as proportional analogies), where \(a,b,c,d\in W\), is denoted by \(a:b::c:d\). This indicates that \(a\)_is related to \(b\) as \(c\) is related to \(d\)_. #### 3.1.2 Datasets for Level One This level represents word analogies. We identify four datasets at this level. Two of them, namely the **Bigger Analogy Test Set (BATS)** (Gladkova et al., 2016) and the **MSR Dataset** (Gao et al., 2014), contain analogies between two words. We use the MSR dataset as is and slightly modify the BATS dataset as below for our intended use. The BATS Dataset consists of four main analogy types, namely _Morphology-inflections, Morphology, Semantics-encyclopedia and Semantics-lexicography_. Semantics-lexicography data contain hypernyms, hyponyms and synonyms where one word is identified to be analogous to several other words (e.g. afraid :: terrified/ horrified/ scared/ stiff/ petrified/ fearful/ panicky). In this case, we identify each element on the right as analogous to the element on the left separately (e.g., for the example above, afraid :: terrified, afraid :: horrified, etc.). We identify two other datasets for word pair analogies in level one of the taxonomy. One is referred to as the **Google Dataset** (Mikolov et al., 2013), with syntactic and semantic analogies. The other comprises educational resources such as analogy problems from SAT exams (US college admission tests) and other similar problems targeted at younger students in the US school system. We use these data aggregated by Ushio et al. (2021) and identify it as the **SAT Dataset**. ### Level Two #### 3.2.1 Word vs. Sentence Level This level consists of analogies between a word \(w\) and a sentence \(S\), denoted by \(S::w\). Sentence \(S\) is a sequence of words \(S\) = \([a_{1},\cdots,a_{n}]\) and word \(w\) is \(\{w_{1},\cdots,w_{n}\}\in W\). 
#### 3.2.2 Datasets for Level Two

This level consists of two datasets with single words and their analogous sentences. The first dataset (Pwanson, 2016) is a crossword puzzle dataset where the answers are words and the clues are sentences/phrases (e.g., amen :: famous last words). We identify this dataset as the **Crossword Dataset**. The second dataset is the **WordNet Dataset**. WordNet is a large lexical database of English words grouped into cognitive synonym sets known as synsets (Miller, 1992). The two lexical terms of interest in this dataset are the WordNet words and the different senses of these words explained in a sentence/phrase.

### Level Three

#### 3.3.1 Syntactic Level

These analogies are between single sentences. We propose that a single sentence \(S\) with a word sequence \([w_{1},\cdots,w_{n}]\), \(w_{i}\in W\), is analogous to a syntactically altered version of the same sentence. We generate altered versions of original sentences by random deletion, random reordering, and random masking of the words in the sentence. If an original sentence is denoted by a word sequence \([w_{1},w_{2},w_{3},w_{4},w_{5}]\), an altered version \(S_{RD}\) is created by randomly deleting a consecutive range of tokens, such as \([w_{1},w_{4},w_{5}]\). Another altered version, denoted \(S_{RR}\), is created by random reordering of the original sentence, where the altered sentence would look like \([w_{1},w_{2},w_{4},w_{3},w_{5}]\). The final alteration masks random words (\(S_{RM}\)) in the original sentence, resulting in an altered version \([w_{1},[\text{MASK}],w_{3},[\text{MASK}],w_{5}]\).

#### 3.3.2 Datasets for Level Three

We are looking at analogies between two syntactically equivalent sentences at this level. We introduce three datasets covering the three syntactic equivalence variants: random deletion, random masking, and random reordering. We use the sentences tagged as "neutral" in the SNLI dataset (Bowman et al., 2015) as the basis for creating all three datasets introduced at this level. To create the **Random Deletion Dataset**, we delete 20% of the words in a sentence randomly; to create the **Random Masking Dataset**, we randomly replace 20% of the tokens in a sentence with [MASK]. Finally, to create the **Random Reorder Dataset**, we randomly reorder 20% of the words in a sentence. The original sentence and its altered version are identified as an analogous pair.
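A minimal sketch of these three corruption operations, assuming whitespace tokenization; the function names and the exact parameterization of the 20% rate are ours, not from a released implementation:

```python
import random

def random_deletion(sentence: str, rate: float = 0.2) -> str:
    # Delete a consecutive span covering ~20% of the words (S_RD).
    words = sentence.split()
    span = max(1, round(len(words) * rate))
    start = random.randrange(len(words) - span + 1)
    return " ".join(words[:start] + words[start + span:])

def random_masking(sentence: str, rate: float = 0.2) -> str:
    # Replace ~20% of the tokens with the literal [MASK] token (S_RM).
    words = sentence.split()
    masked = set(random.sample(range(len(words)), round(len(words) * rate)))
    return " ".join("[MASK]" if i in masked else w for i, w in enumerate(words))

def random_reorder(sentence: str, rate: float = 0.2) -> str:
    # Shuffle ~20% of the word positions among themselves (S_RR).
    words = sentence.split()
    k = min(len(words), max(2, round(len(words) * rate)))
    idx = random.sample(range(len(words)), k)
    perm = idx[:]
    random.shuffle(perm)
    out = words[:]
    for i, j in zip(idx, perm):
        out[i] = words[j]
    return " ".join(out)

# Each original sentence and its altered version form one analogous pair.
original = "A man is riding a bicycle down the street"
pair = (original, random_masking(original))
```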
### Level Four

#### 3.4.1 Negation Level

The two lexical items considered in this level are single sentences, one negating the other, denoted by \(S\) and \(S_{NG}\).

#### 3.4.2 Datasets for Level Four

We identify sentences and their negated forms as a pair. Since a sentence and its negation are recognized as opposites of each other, we postulate that this is a non-analogy. We use the Stanford Contradiction Corpora (specifically the negation dataset) (De Marneffe et al., 2008). We extract the sentences with negation markers and create a sentence pair from each extracted sentence by keeping the negation marker and by removing it. We identify this dataset as the **Negation Dataset**.

### Level Five

#### 3.5.1 Entailment Level

This level again contains analogies between sentences. The type of analogies contained in this level is entailing sentences. Textual entailment attempts to infer one sentence from the other. We propose that entailment considers attributional and relational similarities between sentences, making them analogous. More formally, given a sentence \(S\), its entailment sentence \(S_{ET}\), words \(w_{i}\) in the sentence, and words \(w_{i}^{\prime}\) in the entailment sentence, we have \(S=[w_{1}\cdots w_{n}]\), \(S_{ET}=[w_{1}^{\prime}\cdots w_{m}^{\prime}]\), and \(S::S_{ET}\).

#### 3.5.2 Datasets for Level Five

We identify one dataset for this level and refer to it as the **Entailment Dataset**. We extract the sentence pairs tagged with the "entailment" relationship from the SNLI dataset (Bowman et al., 2015) to create the data points.

### Level Six

#### 3.6.1 Metaphor Level

This is the highest and most complex level in the taxonomy with regard to analogy identification, and it has received the least attention from the NLP community. In this level, the two lexical items are a sentence and a paragraph. If a sentence is denoted by \(S=[w_{1}\cdots w_{n}]\), a paragraph \(P=[s_{1}\cdots s_{m}]\) is denoted by several sentences that do not include the original sentence. The analogy is indicated by \(S::P\).

#### 3.6.2 Datasets for Level Six

We have metaphors at the top level of the analogy taxonomy. We identify two datasets at this level. One is "ePiC", a crowdsourced proverb dataset by Ghosh and Srivastava (2022) with narratives explaining each proverb. Since a proverb and its explanation essentially have the same meaning, we assume that a proverb and its corresponding narrative are analogous to each other. We refer to this dataset as the **ePiC Dataset**. Similarly, the second dataset (Rudrapal et al., 2017) includes quotes and the elaborated meaning of each quote. We refer to this dataset as the **Quotes Dataset**.

## 4 Large Language Models to Evaluate ANALOGICAL

Modern LLMs are built upon the transformer architecture (Vaswani et al., 2017). The LLMs we use in this study fall into two classes based on their training objective. **Masked language models (MLMs)** are trained to predict randomly masked tokens (random words replaced by a [MASK] token) based on all the other words present in a sequence in a bidirectional manner. MLMs use the _encoder_ portion of the transformer architecture. **Encoder-decoder language models (EDLMs)** build upon the entire _encoder-decoder_ architecture of transformers and are trained by predicting the original sequence of text given a corrupted version of the text sequence.

In the current empirical study, we examine the performance of eight popular pretrained language models on identifying the analogies introduced in the analogy taxonomy without fine-tuning (zero-shot setting). We choose six MLM-based LLMs, namely (i) BERT (Devlin et al., 2018), (ii) RoBERTa (Liu et al., 2019), (iii) ALBERT (Lan et al., 2019), (iv) LinkBERT (Yasunaga et al., 2022), (v) SpanBERT (Joshi et al., 2020), and (vi) XLNet (Yang et al., 2019); T5 (Raffel et al., 2020), an encoder-decoder-based model; and ELECTRA (Clark et al., 2020), an LLM with two transformers, one as a generator and the other as a discriminator. We include further details of these LLMs in Appendix C.

## 5 Distance Measures and Their Importance

Previous work (Mikolov et al., 2013; Gladkova et al., 2016) used static word embeddings with vector offset approaches (such as _3CosMul_ and _3CosAdd_) to identify word analogies.
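For reference, a minimal sketch of the 3CosAdd offset method; the dictionary `emb`, mapping words to numpy vectors, is a stand-in for a static embedding table:

```python
import numpy as np

def three_cos_add(emb: dict, a: str, b: str, c: str) -> str:
    """Answer a : b :: c : ? by maximizing cos(d, b - a + c) over the vocabulary."""
    target = emb[b] - emb[a] + emb[c]
    target = target / np.linalg.norm(target)
    best_word, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):  # exclude the query words, as is standard
            continue
        sim = float(vec @ target) / np.linalg.norm(vec)
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# e.g., three_cos_add(emb, "man", "king", "woman") should return "queen"
# given well-trained static embeddings.
```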
In this work, we use the distance between the lexical items in a high-dimensional vector space to identify the analogy between two lexical items. We identify three distance measures, namely cosine distance (CD), Euclidean distance (ED), and Mahalanobis distance (MD). Next, we briefly explain MD; CD and ED are explained in the appendix.

### Mahalanobis Distance (MD)

ED does not perform well if the vector dimensions depend on each other. Mahalanobis distance (Mahalanobis, 1936) is a generalized extension of the Euclidean distance that takes into account the correlation between vector dimensions, thereby providing a balanced measure of dissimilarity. In the next section, we show that word vectors' dimensions are highly correlated. Therefore, we use MD in this work to get an accurate distance measure. Given two vectors \(A=[a_{1},\cdots,a_{n}]\) and \(B=[b_{1},\cdots,b_{n}]\), the MD between the two points is given by (where \(C\) is the covariance matrix of the dataset):

\[MD(\overrightarrow{A},\overrightarrow{B})=\sqrt{(\overrightarrow{A}-\overrightarrow{B})^{T}C^{-1}(\overrightarrow{A}-\overrightarrow{B})}\]

### Importance of Mahalanobis Distance as a Distance Measure

Vector representations of lexical items produced by LLMs are opaque due to the low interpretability of individual vector dimensions. Tsvetkov et al. (2015) introduce QVEC, which uses a subspace alignment technique to align linguistic properties with distributional vector dimensions. WordNet divides verbs and nouns into 41 coarse semantic categories known as supersenses. For example, NOUN.QUANTITY and NOUN.SHAPE are supersenses related to nouns, and VERB.POSSESSION and VERB.CREATION are supersenses related to verbs. SemCor is a corpus containing 13,174 noun lemmas and 5,686 verb lemmas from WordNet, and these are annotated with supersenses. Terms from SemCor are converted into linguistic word vectors based on term frequency, resulting in a set of 4,199 linguistic word vectors, each with 41 interpretable dimensions.

\begin{table}
\begin{tabular}{c c c}
\hline \hline
Levels in Analogy Taxonomy & Dataset & \# Datapoints \\
\hline
Level One & MSR & 44584 \\
 & BATS* & 2880 \\
 & Google & 19544 \\
 & SAT & 1106 \\
\hline
Level Two & Crossword & 100000 \\
 & WordNet & 104356 \\
\hline
Level Three & Random Deletion* & 100000 \\
 & Random Masking* & 100000 \\
 & Random Reorder* & 100000 \\
\hline
Level Four & Negation* & 100000 \\
\hline
Level Five & Entailment & 100000 \\
\hline
Level Six & ePiC & 42501 \\
 & Quotes & 998 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Statistics of the datasets used at each level of the analogy taxonomy. Datasets derived by the authors are indicated with *.

QVEC aligns distributional word vector dimensions with the above-described linguistically interpretable word vector dimensions through Pearson's-correlation-based matrix alignment. We use the same method to calculate Pearson's correlation between the 41 vector dimensions to identify the correlations among them. Figure 3 illustrates a subset of 10 vector dimensions and their correlations. We see that the dimension VERB.CONSUMPTION is highly correlated with the dimension NOUN.QUANTITY, and the dimension VERB.WEATHER is highly correlated with VERB.EMOTION.
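Concretely, the MD computation above can be sketched as follows; the `embeddings` matrix is a placeholder for a dataset's vectors (equivalently, `scipy.spatial.distance.mahalanobis(u, v, VI)` could be used):

```python
import numpy as np

def mahalanobis(a: np.ndarray, b: np.ndarray, cov_inv: np.ndarray) -> float:
    # MD(A, B) = sqrt((A - B)^T C^{-1} (A - B)), with C the dataset covariance.
    d = a - b
    return float(np.sqrt(d @ cov_inv @ d))

# Estimate C from all vectors in the dataset (rows = items, columns = dimensions).
embeddings = np.random.randn(1000, 768)   # placeholder for real embedding vectors
cov = np.cov(embeddings, rowvar=False)
cov_inv = np.linalg.pinv(cov)             # pseudo-inverse guards against a singular C
dist = mahalanobis(embeddings[0], embeddings[1], cov_inv)
```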
Due to the correlated nature of vector dimensions, and the ability of MD to take these correlations into account when calculating distances, we identify MD as the best distance measure among CD, ED, and MD.

## 6 Experiment Settings

We have set up comprehensive experiments across eight LLMs, thirteen datasets, and three distance measures, adding up to 312 (\(8\times 13\times 3\)) experiments. We analyze the performance of LLMs across the analogy taxonomy by comparing the normalized distance measures. We present the complete results table for all the experiments in Appendix A. The embedding (representation) of each lexical item in an analogical pair (word embedding, sentence embedding) is extracted from the eight LMs (in this work, we use the simplest representation, which is the [CLS] token representation). The distance measures between these two representations are then calculated using ED, CD, and MD. For each dataset containing analogical pairs, these distance measures are calculated, and the mean over all the data points of a dataset is considered the representative distance for that dataset (these distances are min-max normalized).

Given the analogy taxonomy (Figure 2), except for the negation dataset at level four, all the other datasets are positive analogies, meaning that the two lexical items of a data point are considered analogical to each other. Therefore, the mean distance values of these datasets should indicate such similarity (low cosine, Euclidean, and Mahalanobis distances). For the negation dataset, the two lexical items in a data point should not be analogical to each other. Therefore, the representative distance measures should be large. We discuss the implementation details in Appendix D.
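A minimal sketch of this pipeline for a single dataset, assuming a BERT-style encoder from the `transformers` library; the model name and the one-pair dataset below are placeholders:

```python
import numpy as np
import torch
from scipy.spatial.distance import cosine, euclidean, mahalanobis
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def cls_embedding(text: str) -> np.ndarray:
    # [CLS]-token representation from the last hidden layer.
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt", truncation=True))
    return out.last_hidden_state[0, 0].numpy()

pairs = [("amen", "famous last words")]          # stand-in for a full dataset
embs = [(cls_embedding(x), cls_embedding(y)) for x, y in pairs]

# Covariance is estimated over all embeddings in the dataset.
all_vecs = np.vstack([v for pair in embs for v in pair])
cov_inv = np.linalg.pinv(np.cov(all_vecs, rowvar=False))

mean_cd = np.mean([cosine(x, y) for x, y in embs])
mean_ed = np.mean([euclidean(x, y) for x, y in embs])
mean_md = np.mean([mahalanobis(x, y, cov_inv) for x, y in embs])
# The per-dataset means are then min-max normalized across datasets.
```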
## 7 Benchmark Results

### Performance of LLMs on ANALOGICAL

We illustrate the performance of each LLM on the different datasets at the different levels of the analogy taxonomy, based on the three distance measures, in Figure 4. We further analyze the performance of LLMs based on MD, given the superiority of MD over CD and ED discussed in Section 5.2 (see Table 2). When inspecting the performance of LLMs at the word level, most LLMs perform considerably well on the BATS and MSR datasets, with mean distance values close to zero. When moving to the word pair datasets (Google, SAT), all the LLMs struggle, with mean distance values closer to one. In word pair datasets, it is crucial to understand the implicit relations among the word pairs to model the analogies correctly in the vector space. The suboptimal performance exhibited by LLMs on the aforementioned datasets indicates the necessity of equipping them with the capability to identify implicit relationships. We believe that the integration of external knowledge into LLMs is a potential solution to enhance their performance on word pair analogies.

Figure 3: Pearson correlation between 10 word-vector dimensions. VERB.CONSUMPTION is highly correlated with the dimension NOUN.QUANTITY, and the dimension VERB.WEATHER is highly correlated with VERB.EMOTION.

Analogies at level two (words vs. sentences) also prove challenging for the LLMs to identify. These analogies are abstract, since a single word represents the meaning of a sentence. Abstraction is an area of NLP that is yet to be studied systematically (Lachmy et al., 2022), and there are no widely established benchmarks to evaluate the performance of LLMs on abstraction. We therefore postulate that it is hard for the LLMs to capture abstractions, and they perform poorly at this level.

The Random Reordering dataset is the hardest dataset for the LLMs at level three of the analogy taxonomy, compared to the Random Deletion and Random Masking datasets. The current analogous sentences are created using a simple mechanism of deleting, reordering, or masking words, as opposed to replacing nouns and/or verbs with their analogous counterparts. Therefore, the resulting analogies should be easier for the LLMs to identify, as illustrated.

At the fifth level, pertaining to entailment, the majority of LLMs demonstrate suboptimal performance, with the exception of T5, RoBERTa, and SpanBERT. Textual entailment consists of identifying semantically related sentences, and interpreting semantics is known to be a challenge for LLMs (Mayer, 2020), which explains the mean MD values closer to one.

Six of the eight language models struggle to perform well at the metaphor level. At this level, analogies are drawn between sentences and paragraphs, mainly introducing the issue of compositionality. Compositionality suggests that the meanings of complex expressions are constructed from the meanings of the less complex constituents (Fodor and Lepore, 2002). The inability of transformers to effectively capture the inherent compositionality in language, in the absence of suitable prompting techniques, has been extensively observed (Keysers et al., 2019; Furrer et al., 2020). We posit that this limitation directly contributes to the subpar performance of LLMs at this particular level.

### Performance on the Negation Dataset

Figure 6 illustrates the performance of LLMs on the Negation Dataset. XLNet performs the best, with a mean MD of 0.6. T5 and RoBERTa record the poorest performance by placing the negation pairs very close in the vector space. This behavior is consistent with previous research on negation identification by pretrained language models (Kassner and Schutze, 2020).

### Best-performing LLMs

In Figure 5, we illustrate the best-performing models and their performance at each level of the analogy taxonomy across the three distance measures, ED, CD, and MD. We see that RoBERTa performs the best based on mean CD values close to zero at almost all levels. However, CD considers all vector dimensions of a lexical item to be equally valuable and uncorrelated, which we reveal to be incorrect in Section 5.2. Therefore, we focus on the best-performing LLMs based on their mean MD values. We see that, except for the Random Deletion dataset, the best performance across datasets shows a general upward trend, indicating that it is increasingly hard for LLMs to identify analogous pairs as the complexity of the analogies increases.

## 8 Conclusion & Future Avenues

This work introduces ANALOGICAL, a benchmark for LLMs based on a taxonomy of six levels of analogies. Through comprehensive experiments, we show that LLMs increasingly struggle to identify analogies as the complexity of the analogies increases (going up the analogy taxonomy). The datasets derived for level three are crude at this time. In the future, we will incorporate more challenging and comprehensive datasets at this level. We will also move beyond this empirical study to investigate why some LLMs perform well at specific levels and not others.
Figure 5: Best performing model(s) for each dataset at each level of the analogy taxonomy (performance on the Negation Dataset is shown separately in Figure 6). The range of each normalized distance measure is [0,1], with zero being the best and one being the worst.

Figure 6: Performance of LLMs on the Negation dataset. The range of each normalized distance measure is [0,1], with zero being the **worst** and one being the **best**.

## 9 Limitations

Syntactic analogies at level three consist of simple alterations of sentences based on deleting, reordering, and masking of random words. A more sophisticated method of creating syntactic analogies would be to replace nouns/verbs in sentences with nouns and verbs of similar meaning, which is not explored in this work.

In this study, we utilize the [CLS] token as the representation for lexical items in analogies. While previous research efforts have investigated the optimal representations of lexical items in large language models (LLMs) (Reimers and Gurevych, 2019; Li et al., 2020), we have chosen not to incorporate these findings into our current investigation.

This work uses mean distance measures to capture the LLMs' ability to identify analogies. There could be data points that are more challenging for the LLMs to capture than others, within the same dataset or across datasets at the same level of the analogy taxonomy. Using mean distance values ignores this detail and considers all the data points equal, which is not optimal.

## Acknowledgements

We thank Dr. Krishnaprasad Thirunarayan for his valuable feedback and the anonymous reviewers for their constructive comments. This work was supported in part by the NSF grant #2133842: EAGER: Advancing Neuro-symbolic AI with Deep Knowledge-infused Learning. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding organization.
2307.04537
Q-YOLOP: Quantization-aware You Only Look Once for Panoptic Driving Perception
In this work, we present an efficient and quantization-aware panoptic driving perception model (Q-YOLOP) for object detection, drivable area segmentation, and lane line segmentation, in the context of autonomous driving. Our model employs the Efficient Layer Aggregation Network (ELAN) as its backbone and task-specific heads for each task. We employ a four-stage training process that includes pretraining on the BDD100K dataset, finetuning on both the BDD100K and iVS datasets, and quantization-aware training (QAT) on BDD100K. During the training process, we use powerful data augmentation techniques, such as random perspective and mosaic, and train the model on a combination of the BDD100K and iVS datasets. Both strategies enhance the model's generalization capabilities. The proposed model achieves state-of-the-art performance with an mAP@0.5 of 0.622 for object detection and an mIoU of 0.612 for segmentation, while maintaining low computational and memory requirements.
Chi-Chih Chang, Wei-Cheng Lin, Pei-Shuo Wang, Sheng-Feng Yu, Yu-Chen Lu, Kuan-Cheng Lin, Kai-Chiang Wu
2023-07-10T13:02:46Z
http://arxiv.org/abs/2307.04537v1
# Q-YOLOP: Quantization-aware You Only Look Once for Panoptic Driving Perception

###### Abstract

In this work, we present an efficient and quantization-aware panoptic driving perception model (Q-YOLOP) for object detection, drivable area segmentation, and lane line segmentation, in the context of autonomous driving. Our model employs the Efficient Layer Aggregation Network (ELAN) as its backbone and task-specific heads for each task. We employ a four-stage training process that includes pretraining on the BDD100K dataset, finetuning on both the BDD100K and iVS datasets, and quantization-aware training (QAT) on BDD100K. During the training process, we use powerful data augmentation techniques, such as random perspective and mosaic, and train the model on a combination of the BDD100K and iVS datasets. Both strategies enhance the model's generalization capabilities. The proposed model achieves state-of-the-art performance with an mAP@0.5 of 0.622 for object detection and an mIoU of 0.612 for segmentation, while maintaining low computational and memory requirements.

Object detection, semantic segmentation, quantization-aware training, autonomous driving

## I Introduction

Panoptic perception systems are critical components of autonomous cars, enabling them to perceive and understand their environment comprehensively. These systems solve multiple vision tasks simultaneously, including object detection, lane line segmentation, and drivable area segmentation, and generate a rich understanding of the road scene. In order to solve the multi-task problem of panoptic driving perception, we develop a low-power, multi-task model tailored for traffic scenarios, addressing the challenges of object detection and semantic segmentation. The aim is to create efficient algorithms capable of accurately recognizing objects and segmenting both lane lines and drivable areas while maintaining minimal computational cost, rendering them ideal for deployment in resource-constrained environments such as mobile devices, IoT devices, and embedded systems.

To achieve low power consumption, we adopt a neural network architecture optimized for energy efficiency. The development process involves reducing the size and complexity of the models used for object detection and segmentation, as well as quantizing the model to minimize energy consumption. Our panoptic driving perception system reaches \(93.46\) FPS on an NVIDIA V100 and \(3.68\) FPS on a MediaTek Dimensity 9200 Series platform. Meanwhile, it attains \(0.622\) mAP and \(0.612\) mIoU on the object detection and segmentation tasks of the competition iVS dataset.

## II Method

Our model, derived from YOLOPv2 [1] and YOLOv7 [2], is specifically designed to address both object detection and segmentation tasks. It comprises five main components: the backbone, the neck, the detection head, the drivable area segmentation head, and the lane line segmentation head. The backbone is an Efficient Layer Aggregation Network (ELAN) [3], optimized for rapid and efficient feature extraction. The neck of our model is a Spatial Pyramid Pooling (SPP) network [4], which facilitates the handling of objects with varying scales and sizes by pooling features at multiple resolutions. This enhancement improves the accuracy and robustness of object detection. The detection head is based on RepConv [5], an innovative neural network architecture that merges the efficiency of mobile networks with the accuracy of more complex models.
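The overall layout can be summarized with a structural sketch; this is a sketch only, with placeholder modules, since the actual ELAN, SPP, and RepConv blocks are considerably more involved:

```python
import torch
import torch.nn as nn

class QYOLOP(nn.Module):
    """Structural sketch: one shared backbone/neck feeding three task heads."""
    def __init__(self, backbone: nn.Module, neck: nn.Module,
                 det_head: nn.Module, da_head: nn.Module, ll_head: nn.Module):
        super().__init__()
        self.backbone = backbone  # ELAN feature extractor (placeholder)
        self.neck = neck          # SPP multi-resolution pooling (placeholder)
        self.det_head = det_head  # RepConv-based detection head (placeholder)
        self.da_head = da_head    # drivable area segmentation head (placeholder)
        self.ll_head = ll_head    # lane line segmentation head (placeholder)

    def forward(self, x: torch.Tensor):
        feats = self.neck(self.backbone(x))
        return self.det_head(feats), self.da_head(feats), self.ll_head(feats)
```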
Subsequently, non-maximum suppression is applied to the output of the object detection process to generate the final predictions. Consequently, our model is capable of accurately detecting objects in images while managing computation and memory requirements. Furthermore, in addition to object detection, our neural network also encompasses task-specific heads for drivable area segmentation and lane line segmentation. These dedicated heads possess distinct network structures that are optimized for their respective tasks. As drivable area segmentation and lane line segmentation generate separate predictions, we allow the result of lane line segmentation to overlap with the result of drivable area segmentation.

Fig. 1: Our model is designed to simultaneously process object detection, drivable area segmentation, and lane line segmentation on a single input image. The bounding boxes indicate the location of traffic objects, the green areas represent the main lane of drivable areas, the red areas represent the alternate lane of drivable areas, the light blue areas represent single lines, and the pink-purple areas represent dashed lines.

In summary, our model is engineered to optimize efficiency and accuracy while also addressing the challenges associated with multi-task learning. Its unique combination of components and specialized task heads makes it ideal for real-world applications such as autonomous driving and object recognition in resource-constrained environments. A visual representation of our model architecture is presented in Figure 2.

Fig. 2: The proposed model architecture and post-processing flow. First, a non-maximum suppression (NMS) technique is applied to the output of the object detection head in order to refine the predictions. Moreover, the prediction of lane line segmentation is allowed to overwrite the prediction of drivable area segmentation in regions where both predictions overlap.

### _Loss Function_

As we modify the head of YOLOPv2 [1] to support multi-label prediction, we introduce a loss function derived from HybridNets [6] to enhance the performance of our approach. The loss function for the object detection task consists of three components:

\[L_{det}=\alpha_{1}L_{class}+\alpha_{2}L_{obj}+\alpha_{3}L_{box} \tag{1}\]

The classification loss, \(L_{class}\), penalizes classification errors, while \(L_{obj}\) is used for predicting object confidence; both terms are implemented with focal loss [7]. The term \(L_{box}\) represents the similarity between the predicted results and the ground truth by considering the overlap rate, aspect ratio, and scale. We implement \(L_{box}\) using the smooth L1 loss function. The coefficients \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\) are hyperparameters used to balance the detection losses.

The objective for the lane line segmentation task combines three components:

\[L_{seg\_ll}=\beta_{1}L_{Tversky}+\beta_{2}L_{Focal}+\beta_{3}L_{Jaccard} \tag{2}\]

The first term, the Tversky loss [8] \(L_{Tversky}\), is used to address the issue of data imbalance and achieve a much better trade-off between precision and recall, and the second term, \(L_{Focal}\), aims to minimize the classification error between pixels and focuses on hard labels. The third term, \(L_{Jaccard}\), is utilized to measure the similarity between the prediction and the ground-truth segmentation masks. The coefficients \(\beta_{1}\), \(\beta_{2}\), and \(\beta_{3}\) are hyperparameters used to balance the losses.

On the other hand, the objective for the drivable area segmentation task combines only two components:

\[L_{seg\_da}=\gamma_{1}L_{Tversky}+\gamma_{2}L_{Focal} \tag{3}\]

The coefficients \(\gamma_{1}\) and \(\gamma_{2}\) are hyperparameters used to balance the losses.

The overall objective, \(L_{all}\), for our final model combines the object detection loss \(L_{det}\) and the segmentation losses to learn all tasks at the same time:

\[L_{all}=\delta_{1}L_{det}+\delta_{2}L_{seg\_da}+\delta_{3}L_{seg\_ll} \tag{4}\]

The coefficients \(\delta_{1}\), \(\delta_{2}\), and \(\delta_{3}\) are hyperparameters used to balance the detection loss and the segmentation losses.
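Putting Eqs. (1)-(4) together, a minimal sketch of the combined objective; the component losses are assumed to be computed elsewhere, and the default weights are illustrative (the \(\alpha\) values follow those reported in Section III):

```python
def total_loss(l_class, l_obj, l_box,
               l_tversky_ll, l_focal_ll, l_jaccard_ll,
               l_tversky_da, l_focal_da,
               alphas=(0.5, 1.0, 0.05), betas=(1.0, 1.0, 1.0),
               gammas=(0.2, 0.2), deltas=(1.0, 1.0, 1.0)):
    """Weighted sum of the task losses (Eqs. 1-4); works on torch tensors or floats."""
    l_det = alphas[0] * l_class + alphas[1] * l_obj + alphas[2] * l_box        # Eq. (1)
    l_seg_ll = (betas[0] * l_tversky_ll + betas[1] * l_focal_ll
                + betas[2] * l_jaccard_ll)                                     # Eq. (2)
    l_seg_da = gammas[0] * l_tversky_da + gammas[1] * l_focal_da               # Eq. (3)
    return deltas[0] * l_det + deltas[1] * l_seg_da + deltas[2] * l_seg_ll     # Eq. (4)
```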
### _Quantization_

Quantization-aware training (QAT) is a technique aimed at making neural networks more amenable to quantization. During QAT, we introduce the quantization error during training by sequentially applying quantize and dequantize operations. This enables the network to learn more robust representations that can be efficiently quantized during inference. We employ the straight-through estimator (STE) [9] algorithm for QAT, which offers a simple and efficient approach. With STE, we round the weights and activations to the nearest quantization level during forward propagation, while utilizing the gradients of the unquantized values during backward propagation. In this manner, the network can backpropagate the gradients through the quantization operation, which is not differentiable in its original form. By simulating the quantization error during training, we can ensure that the network learns robust features that are less sensitive to quantization.
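A minimal PyTorch sketch of simulated (fake) quantization with an STE backward pass; this is illustrative, not the exact training code:

```python
import torch

class FakeQuant(torch.autograd.Function):
    """Uniform quantize-dequantize in the forward pass; identity gradient (STE)."""
    @staticmethod
    def forward(ctx, x, scale, zero_point, qmin, qmax):
        q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
        return (q - zero_point) * scale  # dequantize, so downstream ops see floats

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pass the gradient through as if rounding were the identity.
        return grad_output, None, None, None, None

# Usage inside a forward pass, simulating int8 weights:
weight = torch.randn(16, 3, 3, 3, requires_grad=True)
scale = weight.detach().abs().max() / 127
w_q = FakeQuant.apply(weight, scale, 0, -128, 127)
```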
## III Implementation Details

### _Data Preparation_

As the organizers of the contest provided only a portion of the BDD100K [10] dataset, we opted to use the complete BDD100K dataset to augment the training data. In previous works that used the BDD100K dataset for semantic segmentation, the focus was typically on segmenting only the drivable areas and lane lines. There were no attempts to further classify the drivable areas or lane lines into multiple categories. However, our semantic segmentation task involves categorizing images into six classes: background, main lane, alternative lane, single line, double line, and dashed line. This is different from previous works, which only segmented images into two classes: line and lane. Therefore, we re-generate the six classes of segmentation labels for the BDD100K dataset.

For the object detection task, the objective is to detect four types of objects: pedestrian, vehicle, scooter, and bicycle. In the case of scooters and bicycles, both the rider and the respective vehicle are included within the bounding box. However, the BDD100K dataset labels riders, scooters, and bicycles as distinct entities, as depicted in the following figure. To comply with the task requirements, we employ the Hungarian algorithm [11] to pair riders with their corresponding scooters or bicycles and label them within the same bounding box.

### _Training Process_

In our experiments, the training process consists of several stages: 1) initial pretraining on the BDD100K [10] dataset, 2) pretraining on BDD100K with mosaic augmentation [12], 3) finetuning on both the BDD100K and iVS datasets, and 4) quantization-aware training (QAT) on the integrated iVS and BDD100K datasets. Initially, we train our model on the BDD100K dataset without mosaic for 300 epochs, then turn on mosaic augmentation for 150 epochs. Subsequently, we jointly train the model on both the BDD100K and iVS datasets for an additional 150 epochs. Finally, we apply QAT [9] for an extra 20 epochs for quantization.

**Data Augmentation Techniques.** To enhance the model's generalization capabilities, we apply several data augmentation techniques during the training process. These techniques include normalization, random perspective transformation, HSV color space augmentation, horizontal flipping, and mosaic. By simulating variations that may occur in real-world scenarios, these techniques improve the model's ability to adapt to new data. The mosaic technique is enabled in the second and third stages and is turned off for the last 10 epochs of the third stage. In detail, all images are normalized with mean \((0.485,0.456,0.406)\) and std \((0.229,0.224,0.225)\); random perspective transformation uses a scale factor of \(0.25\) and a translation factor of \(0.1\). For HSV color space augmentation, the hue, saturation, and value factors are \(0.015\), \(0.7\), and \(0.4\), respectively.

**Weight Initialization.** The weights of the backbone and detection head of our model are initialized from the YOLOv7 [2] pretrained weights, while all other parameters are randomly initialized.

**Implementation Details.** We resize all images from both the BDD100K [10] and iVS datasets to \(384\times 640\). The Adam optimizer is used for optimization. Different batch sizes are used at different stages: \(32\) during the first and second pretraining stages, \(32\) during finetuning, and \(16\) during quantization-aware training (QAT). The default anchor sizes are set to (12,16), (19,36), (40,28), (36,75), (76,55), (72,146), (142,110), (192,243), and (459,401). The learning rate scheduler is cosine annealing with a warm-up phase, and the initial learning rates are set to 1e-2 during the first pretraining, 5e-3 during the second pretraining, 5e-4 during finetuning, and 5e-5 during QAT. The minimum learning rates are set to 1e-5 during the first pretraining, 5e-6 during the second pretraining, 5e-7 during finetuning, and 5e-8 during QAT. The warm-up phase is set to 5 epochs during pretraining and 0 epochs during finetuning and QAT. The values of the loss coefficients are reported as follows: \(\alpha_{1}\) = 0.5, \(\alpha_{2}\) = 1.0, \(\alpha_{3}\) = 0.05, \(\beta_{1}\) = 1.0, \(\beta_{2}\) = 1.0, \(\beta_{3}\) = 1.0, \(\delta_{1}\) = 1.0, \(\delta_{2}\) = 1.0, \(\gamma_{1}\) = 0.2, \(\gamma_{2}\) = 0.2, and \(\gamma_{3}\) = 0.2. These coefficients are used in the computation of the loss function, which is a crucial component of our proposed method.

### _Inference Process_

The inference process involves pre-processing the input images, which includes resizing from \(1080\times 1920\) to \(384\times 640\). Following this, images are normalized with mean \((0.485,0.456,0.406)\) and standard deviation \((0.229,0.224,0.225)\). Post-processing is then carried out for the detection and segmentation parts. In the detection part, the intersection over union (IoU) threshold of non-maximum suppression (NMS) is set to \(0.25\), and the confidence threshold is set to \(0.05\). In the segmentation part, the results from the two segmentation heads are merged, and the output is upsampled from \(384\times 640\) to \(1080\times 1920\).
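A minimal sketch of this inference pipeline (PyTorch/torchvision); class-wise NMS and the exact merge rule for the two segmentation heads are simplified away:

```python
import torch
import torch.nn.functional as F
from torchvision.ops import nms

MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def preprocess(img: torch.Tensor) -> torch.Tensor:
    # img: float CHW in [0, 1] at 1080x1920 -> normalized 1x3x384x640 batch.
    img = F.interpolate(img.unsqueeze(0), size=(384, 640),
                        mode="bilinear", align_corners=False)
    return (img - MEAN) / STD

def postprocess(boxes, scores, seg_logits):
    keep = scores > 0.05                            # confidence threshold
    boxes, scores = boxes[keep], scores[keep]
    keep = nms(boxes, scores, iou_threshold=0.25)   # NMS with IoU 0.25
    seg = F.interpolate(seg_logits, size=(1080, 1920),
                        mode="bilinear", align_corners=False).argmax(dim=1)
    return boxes[keep], scores[keep], seg
```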
## IV Experimental Results

### _Environment Setup_

We conducted our experiments using 8 NVIDIA V100 GPUs for training. PyTorch 1.10 [13] and TensorFlow 2.8.0 [14] were used to implement our models and training pipeline, while OpenCV 4.6.0 [15] was used for image pre-processing. Our model architecture was based on the publicly available PyTorch implementations of YOLOP [16] and YOLOv7 [2]. To migrate the model from PyTorch to TensorFlow, we first translated the PyTorch model into the ONNX format, and then used the onnx2tflite toolkit to convert the ONNX model into a TensorFlow model (.h5) and a TFLite model (.tflite).

Footnote 1: [https://onnx.ai/](https://onnx.ai/)

Footnote 2: [https://github.com/MPolaris/onnx2tflite](https://github.com/MPolaris/onnx2tflite)

### _Main Results_

We present the performance of our model on the final testing dataset provided by the contest organizer at different training stages. Initially, we trained the model only on the BDD100K [10] dataset. However, due to the variation in data distribution between BDD100K and the target task, the model may not generalize well to the target task. To address this issue, we added the iVS dataset to the training process and performed mixed-data finetuning (i.e., the third stage). This approach enabled the model to adapt itself to better fit the target task, as the iVS dataset provided additional data with a data distribution similar to the target task. By training on this diverse dataset, the model was able to learn more effectively from the data and improve its performance on the target task.

The performance of our proposed model is evaluated through the various training stages. In the pretraining stage without mosaic, as depicted in Table I, the model is trained on the BDD100K dataset, which effectively boosts the performance of all tasks. Following YOLOv4 [12], we integrate mosaic augmentation into our model training. However, in the pretraining stage with mosaic, shown in Table I, we notice a decrease in object detection and lane line segmentation performance. The mosaic technique does not yield improved performance here, which could potentially be attributed to training exclusively on the BDD100K dataset. As a result, the model may be more suited to the BDD100K dataset, leading to a slight decline in performance when applied to the iVS dataset. Nevertheless, further finetuning on the iVS dataset enables the model to achieve enhanced performance. In the third stage, the model is finetuned using a mix of the BDD100K and iVS datasets with mosaic augmentation, resulting in a significant improvement in object detection and lane line segmentation performance. Additionally, in the last 10 epochs, the mosaic augmentation is turned off to allow the model to recover its adaptability to normal images.

### _Testing Results in the Competition_

Table II shows the testing results on the public dataset of the competition provided by the contest organizer. Our approach is effective for both the object detection and segmentation tasks, achieving 0.495 mAP and 0.401 mIoU in the pretraining-with-mosaic stage. Finetuning the model on the mixed dataset improved the performance to 0.540 mAP and 0.615 mIoU, demonstrating the importance of the mixed dataset in overcoming domain shift. Applying QAT to the finetuned model not only maintained the model's performance but also improved the detection task, achieving 0.622 mAP and 0.612 mIoU. The testing results on the private dataset of the competition provided by the contest organizer are shown in Table III.
Our approach achieves state-of-the-art performance in both the object detection and segmentation tasks, with 0.421 mAP and 0.612 mIoU. Moreover, Table IV shows that our quantization strategy effectively reduced the model size by 4 times and improved the inference speed by 3 times. These results demonstrate the effectiveness of our quantization strategy not only in improving model performance but also in reducing computational cost and memory footprint, which is important for the real-world deployment of deep learning models.

### _Quantization Strategy_

The performance of the quantized network under different quantization paradigms is presented in Table V. We first observe that post-training quantization (PTQ) leads to a significant performance drop in the segmentation tasks, with only 0.285 and 0.248 mIoU achieved for drivable area and lane line segmentation, respectively. However, this performance drop can be mitigated by adopting a quantization-aware training (QAT) strategy. Our experimental results demonstrate the effectiveness of QAT in mitigating the performance drop caused by quantization. Specifically, the quantized network achieves 0.569 mAP for object detection, 0.852 mIoU for drivable area segmentation, and 0.402 mIoU for lane line segmentation. These findings demonstrate the effectiveness of the QAT strategy in boosting the performance of the quantized network, as compared to the PTQ strategy.

TABLE II: The test performance on the final public testing dataset provided by the contest organizer.

\begin{table}
\begin{tabular}{c|c c c}
Model & Object Detection (mAP@0.5) & Drivable Area Segmentation (mIoU) & Lane Line Segmentation (mIoU) \\
\hline
Pretraining w/o mosaic & 0.445 & 0.837 & 0.433 \\
Pretraining w/ mosaic & 0.417 & 0.852 & 0.379 \\
Finetuning & 0.531 & 0.841 & 0.435 \\
\end{tabular}
\end{table}
TABLE I: The test performance on the iVS dataset provided by the contest organizer.
TABLE IV: The comparison of 8-bit integer (INT8) weights and 32-bit floating-point (FP32) weights. The model efficiency is measured on a MediaTek Dimensity 9200 Series platform.

\begin{table}
\begin{tabular}{c|c c c}
Model & Object Detection (mAP@0.5) & Drivable Area Segmentation (mIoU) & Lane Line Segmentation (mIoU) \\
\hline
original (fp32) & 0.582 & 0.842 & 0.397 \\
PTQ (int8) & 0.557 & 0.285 & 0.248 \\
QAT (int8) & 0.569 & 0.852 & 0.402 \\
\end{tabular}
\end{table}
TABLE V: The test performance of the model after three-stage training under different quantization paradigms on the iVS dataset provided by the contest organizer.

## V Conclusion

In this work, we have successfully implemented a lightweight object detection and segmentation model. To improve its efficiency, we explored the effectiveness of two techniques: quantization-aware training and mixed-data finetuning (i.e., the third stage). Through extensive experimentation, we have demonstrated the effectiveness of these techniques in improving the accuracy and efficiency of our model. Our final model has achieved competitive results on the target dataset, demonstrating its potential for real-world applications.